Introduction to coding theory (CMU, Spr 2010)

March 26, 2010

Lecture 17 summary

Filed under: Lecture summary — Venkat Guruswami @ 9:58 pm

We proved the distance property of Tanner codes based on (spectral) expanders using a simple application of the expander mixing lemma. We discussed a decoding algorithm for these codes (correcting a number of errors up to about 1/4 of the bound on minimum distance), based on O(\log n) iterations of decoding the local codes. We also saw a distance amplification technique using dispersers, which yields codes of relative distance 1-\epsilon over an alphabet of size \exp(O(1/\epsilon)).

I’d like to make two clarifications about the lecture. The first concerns the calculation in the analysis of the decoding algorithm, where we argued that the set T_1 is a constant factor smaller than S_1. If we are content with ensuring that |T_1| \le \frac{|S_1|}{1+\epsilon}, then it suffices to take the degree d of the expander to be at least 3\lambda/\delta_0 (as I had originally intended; in particular, the degree need not grow as 1/\epsilon).

Indeed, by the expander mixing lemma (bounding its cross term \lambda \sqrt{|S_1| |T_1|} by \lambda \frac{|S_1|+|T_1|}{2} via the AM-GM inequality) and using that |S_1| \le (1-\epsilon) \left( \frac{\delta_0}{2} - \frac{\lambda}{d} \right) n, we have, in the notation from the lecture,

\displaystyle \frac{\delta_0 d}{2} |T_1| \le (1-\epsilon) \left( \frac{\delta_0 d}{2} - \lambda \right) |T_1| + \lambda \frac{|S_1|+|T_1|}{2} \ ,

which upon rearranging yields

\displaystyle |T_1| \le \frac{\lambda}{\epsilon \delta_0 d + (1-2\epsilon) \lambda} |S_1| \le \frac{|S_1|}{1+\epsilon} \ .

(The last step follows if d \ge 3\lambda/\delta_0.)
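To spell out that last step: if d \ge 3\lambda/\delta_0, then \epsilon \delta_0 d \ge 3 \epsilon \lambda, and hence

\displaystyle \epsilon \delta_0 d + (1-2\epsilon) \lambda \ \ge \ 3 \epsilon \lambda + (1-2\epsilon) \lambda \ = \ (1+\epsilon) \lambda \ ,

so that \frac{\lambda}{\epsilon \delta_0 d + (1-2\epsilon) \lambda} \le \frac{1}{1+\epsilon}, as claimed.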

The second clarification concerns the linear time implementation of the decoding algorithm (instead of the obvious O(n \log n) time implementation). The key insight is that in each iteration, the only vertices (on the relevant side for that iteration) that need to be locally decoded are those adjacent to some vertex on the other side that had a neighboring edge flipped during the local decoding of the previous iteration. The latter set shrinks geometrically in size, by an argument as above. Let us be somewhat more specific. After the first (left) iteration, for each v \in L, the local subvector y_{|\Gamma(v)} of the current vector y \in \{0,1\}^{E} belongs to the code C_0. Let T(y) \subset R be the set of right-hand side vertices u for which y_{|\Gamma(u)} does not belong to C_0. Let z \in \{0,1\}^E be the vector after running the right-side decoding on y. Note that for each w \in L that is not a neighbor of any vertex in T(y), its neighborhood is untouched by the decoding. This means that in the next iteration (left-side decoding), none of these vertices w needs to be examined at all.

The algorithmic trick, therefore, is to keep track of the vertices whose local neighborhoods do not belong to C_0 in each round of decoding. In each iteration, we only perform local decoding at a subset of nodes D_i that was computed in the previous iteration. (This subset of left nodes gets initialized after the first two rounds of decoding, as discussed above.) After performing local decoding at the nodes in D_i, we set the stage for the next round by computing D_{i+1} as the set of neighbors of those nodes in D_i whose local neighborhood did not belong to C_0 prior to decoding.
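To make the bookkeeping concrete, here is a minimal sketch in Python of one way to organize this. The identifiers (adj, other_endpoint, local_decode, and so on) are hypothetical illustrations, not names from the lecture, and the local decoder for C_0 is treated as a black box returning the nearest codeword.

def iterated_decode(y, L, R, adj, other_endpoint, local_decode, rounds):
    # y: list of bits indexed by edges; adj[v]: edges incident to vertex v;
    # other_endpoint(e, v): the endpoint of edge e other than v;
    # local_decode(word): nearest C_0 codeword to the local subvector word.

    def decode_at(v):
        # Decode the local subvector at v in place; return True iff the
        # subvector was not already in C_0 (i.e., some bit got flipped).
        word = [y[e] for e in adj[v]]
        corrected = local_decode(word)
        flipped = False
        for e, b in zip(adj[v], corrected):
            if y[e] != b:
                y[e] = b
                flipped = True
        return flipped

    # First two rounds: decode every left vertex, then every right vertex,
    # recording the set (T(y) above) of right vertices whose local subvector
    # was not in C_0.
    for v in L:
        decode_at(v)
    dirty = {u for u in R if decode_at(u)}

    # Later rounds: only neighbors of "dirty" vertices can violate their
    # local constraint, so only those (the sets D_i) are examined.
    for _ in range(rounds):
        D = {other_endpoint(e, v) for v in dirty for e in adj[v]}
        dirty = {w for w in D if decode_at(w)}
        if not dirty:  # all local constraints on this side are satisfied
            break
    return y

Since each round only touches edges incident to the current dirty set, and these sets shrink geometrically, the total work is dominated by a constant times the first round, i.e., linear in n for constant degree d.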
