Introduction to coding theory (CMU, Spr 2010)

March 31, 2010

Lecture 18 summary

Filed under: Lecture summary — Venkat Guruswami @ 3:07 pm

We discussed a method for distance amplification using expander graphs, and how the resulting codes can be decoded via a majority logic algorithm. We discussed how to get near-MDS codes of rate R that enable linear-time correction of a fraction (1-R-\epsilon)/2 of errors.

We motivated list decoding as a primitive to bridge between the Shannon random errors model and the Hamming worst-case error model. We stated and proved the existence of list-decodable codes that achieve a rate vs error-correction radius trade-off similar to the Shannon capacity for the q-ary symmetric channel. We commented on why the proof does not work as such for linear codes.

Notes on lectures 16,17

Filed under: Lecture notes — Venkat Guruswami @ 11:15 am

A draft of the notes on expander codes and their decoding, covering last week’s lectures, is now posted on the course webpage. Also posted are related notes (which include some illustrative figures) from a previous offering of the course.

March 29, 2010

Problem set 3

Filed under: Problem sets — Venkat Guruswami @ 5:19 pm

The 3rd (and final) problem set is now posted on the course webpage. It is due before the beginning of lecture on Friday, April 23.  As always, I encourage you to start thinking about the problems early. You can use the comments section of this post to raise any questions or requests for clarification.

March 26, 2010

Lecture 17 summary

Filed under: Lecture summary — Venkat Guruswami @ 9:58 pm

We proved the distance property of Tanner codes based on (spectral) expanders using a simple application of the expander mixing lemma. We discussed a decoding algorithm for these codes (correcting a number of errors up to about 1/4 of the bound on the minimum distance), based on O(\log n) iterations of decoding the local codes. We saw a distance amplification technique using dispersers that yields codes of relative distance 1-\epsilon over an alphabet of size \exp(O(1/\epsilon)).

I’d like to make two clarifications about the lecture. The first one concerns the calculation in the analysis of the decoding algorithm where we argued that the set T_1 was a constant factor smaller than S_1. If we are content with ensuring that |T_1| \le \frac{|S_1|}{1+\epsilon}, then it suffices to take the degree d of the expander to be at least 3\lambda/\delta_0 (as I had originally intended; in particular, the degree need not grow as 1/\epsilon).

Indeed, by the expander mixing lemma and using that |S_1| \le (1-\epsilon) \left( \frac{\delta_0}{2} - \frac{\lambda}{d} \right) n, we have, in the notation from the lecture,

\displaystyle \frac{\delta_0 d}{2} |T_1| \le (1-\epsilon) \left( \frac{\delta_0 d}{2} - \lambda \right) |T_1| + \lambda \frac{|S_1|+|T_1|}{2} \ ,

which upon collecting the |T_1| terms gives

\displaystyle \left( \frac{\epsilon \delta_0 d}{2} + \Big(\frac{1}{2} - \epsilon\Big)\lambda \right) |T_1| \le \frac{\lambda}{2}\, |S_1| \ ,

and hence

\displaystyle |T_1| \le \frac{\lambda}{\epsilon \delta_0 d + (1-2\epsilon) \lambda} |S_1| \le \frac{|S_1|}{1+\epsilon} \ .

(The last step follows if d \ge 3\lambda/\delta_0.)

The second clarification concerns the linear-time implementation of the decoding algorithm (instead of the obvious O(n \log n) time implementation). The key insight is that in each iteration, the only vertices (on the relevant side for that iteration) that need to be locally decoded are those adjacent to some vertex on the other side that had a neighboring edge flipped during the local decoding of the previous iteration. The latter set shrinks geometrically in size by an argument as above. Let us be somewhat more specific. After the first (left) iteration, for each v \in L, the local subvector y_{|\Gamma(v)} of the current vector y \in \{0,1\}^{E} belongs to the code C_0. Let T(y) \subset R be the set of right-hand side vertices u for which y_{|\Gamma(u)} does not belong to C_0. Let z \in \{0,1\}^E be the vector after running the right-side decoding on y. Note that for each w \in L that is not a neighbor of any vertex in T(y), its neighborhood is untouched by the decoding. This means that in the next iteration (left-side decoding), these vertices w need not be examined at all.

The algorithmic trick, therefore, is to keep track of the vertices whose local neighborhoods do not belong to C_0 in each round of decoding. In each iteration, we only perform local decoding at a subset of nodes D_i that was computed in the previous iteration. (This subset of left nodes is initialized after the first two rounds of decoding as discussed above.) After performing local decoding at the nodes in D_i, we set the stage for the next round by computing D_{i+1} as the set of neighbors of all nodes in D_i whose local neighborhood did not belong to C_0 prior to decoding.
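To make the bookkeeping concrete, here is a minimal Python sketch of the dirty-set maintenance, assuming hypothetical interfaces edges_at (edge indices incident to a vertex), neighbors (adjacent vertices on the other side), in_C0 (membership test for C_0), and decode_C0 (the local decoder) — these names are illustrative stand-ins, not from the lecture.

```python
def iterative_decode(y, left_nodes, edges_at, neighbors, in_C0, decode_C0, rounds):
    """y: list of bits indexed by edges of the bipartite graph.
    edges_at[v]: edge indices incident to vertex v.
    neighbors[v]: vertices on the opposite side adjacent to v."""
    dirty = set(left_nodes)              # first round: examine every left node
    for _ in range(rounds):
        next_dirty = set()
        for v in dirty:
            local = [y[e] for e in edges_at[v]]
            if not in_C0(local):
                # Decoding may flip edges at v, so v's neighbors (which lie
                # on the other side) must be re-examined in the next round.
                for e, bit in zip(edges_at[v], decode_C0(local)):
                    y[e] = bit
                next_dirty.update(neighbors[v])
        if not next_dirty:               # nothing left to fix
            break
        dirty = next_dirty               # sides alternate automatically
    return y
```

Since the dirty sets shrink geometrically, the total work across all rounds is dominated by the first round and is linear in n.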

March 25, 2010

Notes 7: Justesen codes and Reed-Solomon & GMD decoding

Filed under: Lecture notes — Venkat Guruswami @ 10:48 pm

A draft version of the notes for the material we covered on Justesen codes, Reed-Solomon unique decoding, and GMD decoding of concatenated codes is posted on the course webpage. The material from Section 4 (Peterson’s algorithm) onwards has not been proofread and will be edited soon, but I wanted to make this version available now to avoid too much lag.

March 24, 2010

Lecture 16 summary

Filed under: Lecture summary — Venkat Guruswami @ 8:05 pm

We saw a sequential algorithm for expander codes based on successively flipping bits involved in more unsatisfied checks than satisfied checks, and proved that it corrects a constant fraction of errors (related to the size of the sets which expand by a factor of more than 3D/4).  We then considered a generalization called Tanner codes and saw how weaker expansion suffices to argue about its distance. We discussed the expander mixing lemma and its use to lower bound the distance of Tanner codes whose factor graph is the edge-vertex incidence matrix of a spectral expander.
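For concreteness, here is a schematic Python sketch of the flipping rule (not the lecture’s pseudocode); H is a generic 0/1 parity-check matrix standing in for the expander-based one, and success is guaranteed only under the expansion hypothesis from lecture.

```python
def flip_decode(H, y):
    """Sequentially flip any bit in more unsatisfied checks than satisfied."""
    y = list(y)
    m, n = len(H), len(H[0])
    # Each qualifying flip strictly decreases the number of unsatisfied
    # checks, so at most m flips can occur.
    for _ in range(m + 1):
        # Syndrome: which parity checks are currently unsatisfied.
        unsat = [sum(H[i][j] * y[j] for j in range(n)) % 2 for i in range(m)]
        if not any(unsat):
            break                        # y is now a codeword
        for j in range(n):
            checks = [i for i in range(m) if H[i][j]]
            if 2 * sum(unsat[i] for i in checks) > len(checks):
                y[j] ^= 1                # flip, then recompute the syndrome
                break
        else:
            break                        # no bit qualifies; give up
    return y
```

The bound of at most m flips is what drives the running-time analysis; the sketch above recomputes the syndrome from scratch after each flip and so is not itself a linear-time implementation.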

March 20, 2010

Lecture 15 summary

Filed under: Lecture summary — Venkat Guruswami @ 6:04 pm

We finished the analysis of GMD decoding for concatenated codes. We saw how to use concatenation with an outer code that can correct a small fraction of worst-case errors to construct codes that achieve the Shannon capacity of the BEC and BSC, together with polynomial-time encoding/decoding. We began the discussion of low-density parity-check codes, and saw why unbalanced bipartite expanders with expansion more than D/2 give rise to asymptotically good binary linear codes.

March 17, 2010

Lecture 14 summary

Filed under: Lecture summary — Venkat Guruswami @ 4:35 pm

We discussed Peterson’s algorithm for decoding Reed-Solomon codes up to half the distance, and decoding algorithms for concatenated codes: both a naive algorithm and GMD decoding.

The deadline for problem set 2 has been extended to Monday, Mar 22 (a firm deadline).

March 5, 2010

Notes 6: Code constructions: Reed-Solomon, BCH, Reed-Muller, and concatenated

Filed under: Lecture notes — Venkat Guruswami @ 6:54 pm

[Lectures scribed by Eric Blais]

[The notes are also available in pdf format on the course webpage. Unless there is some explicit demand for it, I suspect that these might be the last set of notes for this course that I make available in wordpress html format.]

In this lecture, we begin the algorithmic component of the course by introducing some explicit families of good algebraic codes. We begin by looking at Reed-Solomon codes.

1. Reed-Solomon codes

Reed-Solomon codes are a family of codes defined over large fields as follows.

Definition 1 (Reed-Solomon codes) For integers {1 \le k < n}, a field {\mathbb{F}} of size {|\mathbb{F}| \ge n}, and a set {S = \{\alpha_1,\ldots,\alpha_n\} \subseteq \mathbb{F}}, we define the Reed-Solomon code

\displaystyle \mathrm{RS}_{\mathbb{F}, S}[n,k] = \{(p(\alpha_1), p(\alpha_2), \ldots, p(\alpha_n)) \in \mathbb{F}^n \mid p \in \mathbb{F}[X],\ \mathrm{deg}(p) \le k-1\} \ .

A natural interpretation of the {\mathrm{RS}_{\mathbb{F}, S}[n,k]} code is via its encoding map. To encode a message {m = (m_0, m_1,\ldots, m_{k-1}) \in \mathbb{F}^k}, we interpret the message as the polynomial

\displaystyle  p(X) = m_0 + m_1 X + \cdots + m_{k-1} X^{k-1} \in \mathbb{F}[X].

We then evaluate the polynomial {p} at the points {\alpha_1, \alpha_2, \ldots, \alpha_n} to get the codeword corresponding to {m}.
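Concretely, the encoding map can be implemented by evaluating {p} at each point with Horner’s rule. Here is a minimal Python sketch over a prime field; the field {\mathbb{F}_{13}}, evaluation set, and message below are illustrative choices, not parameters from the notes.

```python
# Reed-Solomon encoding by polynomial evaluation over a prime field F_p.

def rs_encode(message, alphas, p):
    """Evaluate p(X) = m_0 + m_1 X + ... + m_{k-1} X^{k-1} at each alpha_i."""
    def evaluate(coeffs, x):
        acc = 0
        for c in reversed(coeffs):       # Horner's rule, reducing mod p
            acc = (acc * x + c) % p
        return acc
    return [evaluate(message, a) for a in alphas]

p = 13                        # the field F_13
alphas = list(range(p))       # evaluation set S = F_13, so n = 13
message = [5, 1, 7]           # k = 3 message symbols (m_0, m_1, m_2)
print(rs_encode(message, alphas, p))   # the length-13 codeword
```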

To evaluate the polynomial {p} on the points {\alpha_1,\alpha_2,\ldots,\alpha_n}, we multiply the message vector {m} on the left by the {n \times k} Vandermonde matrix

\displaystyle G = \begin{pmatrix} 1 & \alpha_1 & \alpha_1^{\,2} & \cdots & \alpha_1^{\,k-1} \\ 1 & \alpha_2 & \alpha_2^{\,2} & \cdots & \alpha_2^{\,k-1} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & \alpha_n & \alpha_n^{\,2} & \cdots & \alpha_n^{\,k-1} \end{pmatrix}.

The matrix {G} is a generator matrix for {\mathrm{RS}_{\mathbb{F}, S}[n,k]}, so we immediately obtain that Reed-Solomon codes are linear codes over {\mathbb{F}}.
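In code, the same toy encoding can be phrased as a matrix-vector product with {G} (again with illustrative parameters, not from the notes):

```python
# Encoding as a matrix-vector product with the n x k Vandermonde matrix G.

p, k = 13, 3
alphas = list(range(p))            # S = F_13, n = 13
message = [5, 1, 7]                # encodes p(X) = 5 + X + 7 X^2

# Row i of G is (1, alpha_i, alpha_i^2, ..., alpha_i^{k-1}) mod p.
G = [[pow(a, j, p) for j in range(k)] for a in alphas]

codeword = [sum(g * m for g, m in zip(row, message)) % p for row in G]

# Sanity check: entry i equals the direct evaluation p(alpha_i).
assert all(c == (5 + a + 7 * a * a) % p for c, a in zip(codeword, alphas))
```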

1.1. Properties of the code

Let’s now examine the parameters of the above Reed-Solomon code. The block length of the code is clearly {n}. As we will see, the code {\mathrm{RS}_{\mathbb{F}, S}[n, k]} has minimum distance {n-k+1}. This also means that the encoding map is injective and therefore the code has dimension equal to {k}.

The key to establishing the minimum distance of Reed-Solomon codes is the “degree mantra” that we saw in the previous lecture: A non-zero polynomial of degree {d} with coefficients from a field {\mathbb{F}} has at most {d} roots in {\mathbb{F}}.

Theorem 2 The Reed-Solomon code {\mathrm{RS}_{\mathbb{F},S}[n,k]} has distance {n-k+1}.

Proof: Since {\mathrm{RS}_{\mathbb{F},S}[n,k]} is a linear code, to prove the theorem it suffices to show that any non-zero codeword has Hamming weight at least {n-k+1}.

Let {(m_0, m_1,\ldots,m_{k-1}) \neq 0}. The polynomial {p(X) = m_0 + m_1 X + \cdots + m_{k-1}X^{k-1}} is a non-zero polynomial of degree at most {k-1}. So by our degree mantra, {p} has at most {k-1} roots, which implies that {(p(\alpha_1),\ldots, p(\alpha_{n}))} has at most {k-1} zero coordinates, and hence Hamming weight at least {n-k+1}.

By the Singleton bound, the distance cannot exceed {n-k+1}, and therefore must equal {n-k+1}. The upper bound on distance can also be seen by noting that the codeword corresponding to the polynomial {\prod_{i=1}^{k-1} (X-\alpha_i)} has Hamming weight exactly {n-k+1}. \Box

Note that the minimum distance of Reed-Solomon codes meets the Singleton bound. This is quite interesting: Reed-Solomon codes are a simple, natural family of codes based only on univariate polynomials, and yet they achieve the optimal trade-off between rate and distance.
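As a quick sanity check (not part of the notes), one can brute-force the minimum weight of the toy code {\mathrm{RS}_{\mathbb{F}_7, \mathbb{F}_7}[7,3]}, which by Theorem 2 should be {7-3+1 = 5}:

```python
# Brute-force the minimum weight of RS over F_7 with S = F_7 and k = 3.
from itertools import product

p, k = 7, 3
alphas = range(p)

def encode(m):
    # Evaluate m_0 + m_1 X + m_2 X^2 at every point of F_7.
    return [sum(m[j] * pow(a, j, p) for j in range(k)) % p for a in alphas]

min_weight = min(
    sum(1 for c in encode(m) if c != 0)
    for m in product(range(p), repeat=k) if any(m)
)
print(min_weight)   # prints 5 = n - k + 1, meeting the Singleton bound
```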

In our definition above, we have presented Reed-Solomon codes in the most general setting, where {S} can be any arbitrary subset of {\mathbb{F}} of size {n}. This presentation highlights the flexibility of Reed-Solomon codes. In practice, however, there are two common choices of {S} used to instantiate Reed-Solomon codes:

  1. Take {S = \mathbb{F}}, or
  2. Take {S = \mathbb{F}^*} to be the set of non-zero elements in {\mathbb{F}}.

These two choices attain the best possible trade-off between the field size and the block length.

1.2. Alternative characterization

We presented Reed-Solomon codes from an encoding point of view. It is also possible to look at these codes from the “parity-check” point of view. This approach is used in many textbooks, and leads to the following characterization of Reed-Solomon codes.

Theorem 3 (Parity-check characterization) For integers {1 \le k < n}, a field {\mathbb{F}} of size {|\mathbb{F}| = q = n+1}, a primitive element {\alpha \in \mathbb{F}^*}, and the set {S = \{1, \alpha, \alpha^2, \ldots, \alpha^{n-1}\}}, the Reed-Solomon code over {\mathbb{F}} with evaluation set {S} is given by

\displaystyle \mathrm{RS}_{\mathbb{F}, S}[n,k] = \{ (c_0,c_1,\ldots,c_{n-1}) \in \mathbb{F}^n \mid c(X) = c_0 + c_1 X + \cdots + c_{n-1} X^{n-1} ~\text{satisfies}~ c(\alpha) = c(\alpha^2) = \cdots = c(\alpha^{n-k}) = 0 \} \ . \ \ \ \ \ (1)

In other words, Theorem 3 states that the codewords of the Reed-Solomon code with evaluation points {1,\alpha,\ldots,\alpha^{n-1}} correspond to the polynomials of degree at most {n-1} that vanish at the points {\alpha, \alpha^2,\ldots,\alpha^{n-k}}.

The code given by the parity-check characterization in Theorem 3 has the same dimension as the code obtained from our original definition; to complete the proof of Theorem 3, we therefore only need to check that every codeword obtained in Definition 1 satisfies the parity-check condition (1).

Exercise 1 Complete the proof of Theorem 3.

(Hint: The proof uses the fact that for every {x \neq 1} in {\mathbb{F}^*}, {\sum_{\ell=0}^{n-1} x^\ell = \frac{1-x^n}{1-x} = 0}.)
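While this is no substitute for the proof asked for in Exercise 1, the parity-check condition can at least be confirmed exhaustively on a toy example, say {\mathbb{F}_5} with {n = 4}, {k = 2}, and primitive element {\alpha = 2} (illustrative parameters, not from the notes):

```python
# Exhaustively verify the parity-check characterization over F_5.
from itertools import product

q, n, k, alpha = 5, 4, 2, 2                # alpha = 2 is primitive mod 5
S = [pow(alpha, i, q) for i in range(n)]   # evaluation set (1, a, a^2, a^3)

for m in product(range(q), repeat=k):
    # Codeword: evaluations of p(X) = m_0 + m_1 X at the points of S.
    c = [(m[0] + m[1] * a) % q for a in S]
    # c(X) = c_0 + c_1 X + ... + c_{n-1} X^{n-1} must vanish at
    # alpha, alpha^2, ..., alpha^{n-k}.
    for j in range(1, n - k + 1):
        x = pow(alpha, j, q)
        assert sum(ci * pow(x, i, q) for i, ci in enumerate(c)) % q == 0

print("all", q ** k, "codewords satisfy condition (1)")
```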

March 4, 2010

No class tomorrow (March 5)

Filed under: Announcements — Venkat Guruswami @ 12:36 pm

Continuing the recent theme of one lecture per week, I just learned that there are no classes tomorrow for “mid semester break.” So we won’t have a lecture tomorrow. Have a great spring break and I will see you in lecture on Wednesday, March 17. The problem set will still be due on Mar 19th.
