Introduction to coding theory (CMU, Spr 2010)

May 3, 2010

Wrap up

Filed under: Announcements — Venkat Guruswami @ 6:07 pm

Thanks once again to all of you who took the course and hung in there for the semester! I certainly had a great time with the course, and we covered a lot of ground despite the few Friday classes that had to be cancelled.

I have entered the grades on the electronic web form, and you should be able to access them soon.

If you have not done so already, please fill in the course evaluation at my.cmu.edu under Academics. You probably also got an email with instructions on where to fill this out.

There were some requests for notes on the 3-query LDCs we covered in the final lecture. I may not be able to write full-blown notes, but I’ll try to expand the lecture summary for that post with some details of the construction and analysis when I find time.

April 30, 2010

Lecture 26 summary

Filed under: Lecture summary — Venkat Guruswami @ 5:17 pm

We mentioned two topics which were introduced to coding theory by theoretical computer science: local testing and local decoding of codes. These and related topics (such as PCPs and applications of locally decodable codes in complexity and cryptography) have been intensively researched in the last 10-15 years, with several breakthroughs occurring in recent years.

We focused on local (unique) decoding of codes for the lecture. We saw how Hadamard codes can be locally decoded using just two queries. However, their encoding length for a message of length n is 2^n. We then saw the higher degree generalization of Hadamard codes, where the message is interpreted as a degree-D homogeneous multilinear polynomial (i.e., all terms have degree exactly D \ge 2). This gave us codes of encoding length \approx 2^{O(n^{1/D})}, and we discussed a 2D-query local decoding algorithm, based on interpolating the restriction of the multilinear polynomial to a line in a random direction. Thus for any constant q, we got codes that are locally decodable using q queries and have encoding length 2^{n^{O(1/q)}}.
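
To make the two-query procedure concrete, here is a minimal Python sketch (my own illustration, not code from the course; function names are mine). It uses the identity \langle m, x \rangle + \langle m, x \oplus e_i \rangle = m_i: both queries are individually uniform, so each trial errs with probability at most twice the corruption rate, and a majority vote over a few trials amplifies the success probability (this needs the fraction of corruptions to be below 1/4).

    import random

    def dot(m, x):
        # inner product over F_2 of message bits m with the bits of integer x
        return sum(mi * ((x >> i) & 1) for i, mi in enumerate(m)) % 2

    def hadamard_encode(m):
        # one codeword bit per point x of F_2^n, so length 2^n for n message bits
        return [dot(m, x) for x in range(2 ** len(m))]

    def local_decode_bit(w, n, i, trials=25):
        # 2-query local decoder for message bit i: when neither query hits a
        # corruption, w[x] ^ w[x ^ e_i] = <m, x> + <m, x + e_i> = m_i
        votes = sum(w[x] ^ w[x ^ (1 << i)]
                    for x in (random.randrange(2 ** n) for _ in range(trials)))
        return int(2 * votes > trials)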

We then turned to the ingenious 3-query locally decodable code (LDC) construction due to Yekhanin. In keeping with the theme of our initial constructions, we presented a polynomial view of these codes, where the messages are again interpreted as homogeneous multilinear polynomials of a certain degree (say D), but only a carefully chosen subset of all {M \choose D} possible monomials is allowed. (This actually reduces the rate compared to our earlier construction, but the big gain is that one is able to locally decode using only three queries instead of about D queries!) Our description is based on a variant of Yekhanin’s construction that was discovered by Raghavendra and subsequently presented by Gopalan as polynomial-based codes.

For every t such that 2^t-1 is prime (such a prime is called a Mersenne prime), we gave a construction of 3-query LDCs of encoding length \exp(O_t(n^{1/t})). Since very large Mersenne primes are known, we get 3-query LDCs of encoding length less than \exp(O(n^{10^{-7}})). We presented a 3-query algorithm and proved its correctness assuming the stated properties of the “matching sets” U_i,V_i used in the construction, and then explained how to construct families of such subsets of \{1,2,\dots,M\} of size \Omega_t(M^t).

Notes on list decoding folded RS codes

Filed under: Lecture notes — Venkat Guruswami @ 4:43 pm

Notes for the lectures on achieving the optimal trade-off between rate and list decoding radius via folded Reed-Solomon codes are now posted on the course webpage. Notes 7,8 on Reed-Solomon unique decoding, GMD decoding, and expander codes have also been edited.

April 28, 2010

Lecture 25 summary

Filed under: Lecture summary — Venkat Guruswami @ 8:51 pm

We discussed irregular LDPC codes, and characterized their rate and erasure correction capability (via the message passing algorithm discussed in the previous lecture) in terms of the degree distribution of the edges. Specifically, let \lambda_i (resp. \rho_i) be the fraction of edges incident on degree-i variable (resp. check) nodes, and define the generating functions \lambda(z) = \sum_{i=1}^{d_v^{\max}} \lambda_i z^{i-1} and \rho(z) = \sum_{i=1}^{d_c^{\max}} \rho_i z^{i-1}. Then the (design) rate of the LDPC code is given by

\displaystyle 1 -\frac{\int_0^1 \rho(z) \ dz}{\int_0^1 \lambda(z) \ dz} \ .

We also argued that if \alpha \lambda(1-\rho(1-x)) \le x for every x, 0 \le x \le 1, and some constant \alpha > 0, then the message passing algorithm succeeds with high probability on \mathrm{BEC}_{\alpha'} for any constant \alpha' < \alpha.
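
To make the formulas concrete, here is a small numerical sketch (my own illustration, using the (3,6)-regular ensemble as a toy example, where \lambda(z)=z^2 and \rho(z)=z^5). It evaluates the rate formula and binary-searches for the largest \alpha satisfying the success condition:

    import numpy as np

    # Toy example: the (3,6)-regular ensemble; every edge meets a degree-3
    # variable node and a degree-6 check node, so lambda(z)=z^2, rho(z)=z^5.
    def lam(z): return z ** 2
    def rho(z): return z ** 5

    z = np.linspace(0, 1, 100001)
    rate = 1 - rho(z).mean() / lam(z).mean()   # Riemann sums: 1 - (1/6)/(1/3)

    def condition_holds(alpha):
        # success condition: alpha * lambda(1 - rho(1 - x)) <= x on (0, 1]
        x = np.linspace(1e-9, 1, 100001)
        return np.all(alpha * lam(1 - rho(1 - x)) <= x)

    lo, hi = 0.0, 1.0
    for _ in range(40):   # binary search for the threshold erasure probability
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if condition_holds(mid) else (lo, mid)

    print(f"design rate ~ {rate:.3f}, erasure threshold ~ {lo:.4f}")  # ~0.4294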

We then argued how the distributions

\displaystyle \lambda(z) = \frac{1}{H(D-1)} \sum_{i=1}^{D-1} \frac{z^i}{i}

and

\displaystyle \rho(z) = \exp \left( \frac{H(D-1)}{\alpha} (z-1) \right)

(perhaps truncated to a finite series) enable achieving the capacity of \mathrm{BEC}_{\alpha'}: we can achieve a rate of 1-\alpha'-\epsilon with decoding complexity O(n \log (1/\epsilon)) (since the average variable node degree is \approx H(D-1), where H(D-1) = \sum_{i=1}^{D-1} 1/i denotes the harmonic number).
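
As a quick numerical check (my own sketch, with parameter values chosen arbitrarily), the truncated pair already gets close to capacity for moderate D, and the success condition holds essentially with equality, which is exactly why these distributions are capacity-achieving:

    import numpy as np

    def heavy_tail_poisson_pair(D, alpha):
        # truncated pair from the lecture: lambda(z) = (1/H) sum_{i<D} z^i/i,
        # rho(z) = exp((H/alpha)(z-1)), with H = H(D-1) the harmonic number
        H = sum(1.0 / i for i in range(1, D))
        def lam(z): return sum(z ** i / i for i in range(1, D)) / H
        def rho(z): return np.exp((H / alpha) * (z - 1))
        return lam, rho

    D, alpha = 100, 0.4
    lam, rho = heavy_tail_poisson_pair(D, alpha)

    z = np.linspace(0, 1, 20001)
    rate = 1 - rho(z).mean() / lam(z).mean()    # Riemann sums for the integrals
    print(f"rate ~ {rate:.4f} vs capacity {1 - alpha:.4f}")   # ~0.596 vs 0.6

    x = np.linspace(1e-6, 1, 20001)
    ratio = np.max(alpha * lam(1 - rho(1 - x)) / x)
    print(f"max of alpha*lambda(1-rho(1-x))/x ~ {ratio:.6f}")  # <= 1, nearly tight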

This result is from the paper Efficient erasure correcting codes. Further details, including extensions to BSC and AWGN channels, and the martingale argument for the concentration of the performance around that of the average code in the ensemble, can be found in the paper The capacity of low-density parity-check codes under message-passing decoding.

The last quarter of the lecture was devoted to a recap of the main topics covered in the course.

April 24, 2010

Lecture 24 summary

Filed under: Lecture summary — Venkat Guruswami @ 10:08 pm

We discussed the message passing algorithm for decoding LDPC codes based on (d_v,d_c)-regular graphs on the binary erasure channel, and derived an expression for the threshold erasure probability up to which the algorithm guarantees vanishing bit error probability. We then turned to the binary symmetric channel, discussed Gallager’s “Algorithm A”, and derived the recurrence for the decay of the bit error probability. We briefly discussed Gallager’s “Algorithm B” as well, where a variable node flips its value if more than a certain cut-off number (typically a majority, after a few iterations) of its neighboring check nodes suggest that it flip its value. We mentioned the values of the threshold crossover probability for some small values of d_v and d_c.
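
For concreteness, here is a Python sketch of the density evolution recurrence for Algorithm A (my own illustration, not code from the course), together with a binary search for the threshold crossover probability:

    def gallager_a_threshold(dv, dc, iters=2000, tol=1e-9):
        # density evolution for Gallager's Algorithm A on BSC_p: p is the
        # probability a variable-to-check message is wrong, and (1 + s)/2
        # with s = (1-2p)^(dc-1) is the probability that the other dc-1
        # edges into a check carry an even number of errors, i.e. that the
        # check's suggestion is correct
        def converges(p0):
            p = p0
            for _ in range(iters):
                s = (1 - 2 * p) ** (dc - 1)
                p = (p0 * (1 - ((1 + s) / 2) ** (dv - 1))
                     + (1 - p0) * ((1 - s) / 2) ** (dv - 1))
                if p < tol:
                    return True
            return False

        lo, hi = 0.0, 0.5
        for _ in range(50):   # binary search for the largest convergent p0
            mid = (lo + hi) / 2
            lo, hi = (mid, hi) if converges(mid) else (lo, mid)
        return lo

    print(gallager_a_threshold(3, 6))   # comes out near 0.039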

During lecture, the question of the speed of convergence of the bit error probability (BER) to zero was asked. The answer I guessed turns out to be correct: if we run the algorithm for \Omega(\log n) iterations (but fewer than the girth of the graph), then for Algorithm A the BER is at most 1/n^{\beta} for some \beta > 0, and for Algorithm B with d_v > 3 and an optimized cut-off for flipping, the BER is at most 2^{-n^{\gamma}} for some \gamma > 0.

We do not plan to have notes for this segment of the course.  I can, however, point you to an introductory survey I wrote (upon which the lectures are loosely based), or Gallager’s remarkable Ph.D. thesis which can be downloaded here (the decoding algorithms we covered are discussed in Chapter 4). A thorough treatment of the latest developments in the subject of iterative and belief propagation decoding algorithms can be found in Richardson and Urbanke’s comprehensive book Modern Coding Theory.

April 22, 2010

List-decodability of random linear codes

Filed under: Announcements — Venkat Guruswami @ 9:46 am

In our discussion of random coding arguments to show the existence of list-decodable codes, we showed that a random q-ary code of rate 1-h_q(p)-1/L is (p,L)-list decodable w.h.p. For random linear codes over {\mathbb F}_q, the result (or rather the proof) was weaker, and only guaranteed a rate of 1-h_q(p)-1/\log_q(L+1). I had mentioned that this discrepancy in list-size between linear and general codes was recently resolved, showing that a random linear code of rate 1-h_q(p) - O(1/L) is (p,L)-list decodable w.h.p. as well.
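
To see the gap concretely, here is a small computation (my own illustration) of the two rate bounds. Note that getting within \epsilon of the capacity bound 1-h_q(p) requires list size L \approx 1/\epsilon under the first bound, but L \approx q^{1/\epsilon} under the second:

    import math

    def hq(q, p):
        # q-ary entropy function h_q(p)
        return (p * math.log(q - 1, q) - p * math.log(p, q)
                - (1 - p) * math.log(1 - p, q))

    q, p, L = 2, 0.1, 10
    print(f"random code rate   : {1 - hq(q, p) - 1 / L:.3f}")                   # ~0.431
    print(f"random linear rate : {1 - hq(q, p) - 1 / math.log(L + 1, q):.3f}")  # ~0.242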

I’ll speak about this result in the ACO seminar today. The paper can be downloaded here.

April 21, 2010

Lecture 23 summary

Filed under: Lecture summary — Venkat Guruswami @ 3:29 pm

We completed the discussion of the rate vs. list decoding radius trade-off achieved by folded Reed-Solomon codes and multivariate interpolation based decoding, and discussed its complexity and list-size bounds, as well as alphabet size. We highlighted the powerful list recovery property offered by folded RS codes, where having up to \ell possible choices for each codeword position does not affect the ability to correct with agreement R + \epsilon (where R is the rate), and we can “absorb” the effect of \ell into a somewhat larger alphabet size and decoding complexity. This feature is invaluable in using folded RS codes as outer codes in concatenation schemes, as we saw in two results:

  1. Binary codes which are list-decodable up to the Zyablov radius (earlier we saw how to unique-decode up to half the Zyablov radius using GMD decoding)
  2. Construction of codes of rate R over an alphabet of size \exp((1/\epsilon)^{O(1)}) that are list-decodable up to a fraction 1-R-\epsilon of errors. The alphabet size is not far from the optimal bound of \exp(1/\epsilon), and the construction nicely combines ideas from the algebraic coding and expander decoding parts of the course.

We then wrapped up our discussion of list decoding by mentioning some of the big questions that still remain open, especially in constructing binary codes with near-optimal (or even better than currently known) trade-offs.

We discussed the framework of message-passing algorithms for LDPC codes, which will be the subject of the next lecture or two. We will mostly follow the description in this survey, but will not get too deep into the material.

April 14, 2010

Lecture 22 summary

Filed under: Lecture summary — Venkat Guruswami @ 2:59 pm

We discussed how folded Reed-Solomon codes can be used to approach the optimal trade-off between rate and list decoding radius, specifically list decoding in polynomial time from a fraction 1-R-\epsilon of errors with rate R for any desired constant \epsilon > 0.

We presented an algorithm for list decoding folded Reed-Solomon codes (with folding parameter s) when the agreement fraction is more than \frac{1}{s+1} + \frac{s^2 R}{s+1}.  This was based on the extension of the Welch-Berlekamp algorithm to higher order interpolation (in s+1 variables). Unfortunately, this result falls well short of our desired target, and in particular is meaningless for R > 1/s.

We then saw how to run the (s+1)-variate algorithm on a folded RS code with folding parameter m > s, to list decode when the agreement fraction is more than \frac{1}{s+1} + \frac{s}{s+1} \frac{m}{m-s+1} R. Picking s large and m \gg s, say s \approx 1/\epsilon and m \approx 1/\epsilon^2, then enables list decoding from agreement fraction R+\epsilon. We will revisit this final statement briefly at the beginning of the next lecture, and also comment on the complexity of the algorithm, bound on list-size, and alphabet size of the codes.
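
As a sanity check of this parameter choice (my own numerical sketch, with \epsilon = 0.05 and R = 0.5 as arbitrary sample values), the required agreement fraction indeed drops below R + \epsilon:

    def agreement_needed(s, m, R):
        # agreement fraction required by the (s+1)-variate decoder run on an
        # m-folded RS code of rate R (the linear-in-Z_i version from lecture)
        return 1 / (s + 1) + (s / (s + 1)) * (m / (m - s + 1)) * R

    eps, R = 0.05, 0.5
    s, m = round(1 / eps), round(1 / eps ** 2)       # s = 20, m = 400
    print(agreement_needed(s, m, R), "<=", R + eps)  # ~0.5476 <= 0.55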

Notes for this lecture may not be immediately available, but you can refer to the original paper Explicit codes achieving list decoding capacity: Error-correction with optimal redundancy or Chapter 6 of the survey Algorithmic results for list decoding. Both of these are tailored to list decode even from the (in general) smaller agreement fraction \left(\frac{mR}{m-s+1}\right)^{s/(s+1)} and use higher degrees for the Z_i’s in the polynomial Q(X,Z_1,\ldots,Z_s) as well as multiple zeroes at the interpolation points. In the lecture, however, we were content, for the sake of simplicity and because it suffices to approach agreement fraction R + \epsilon, with restricting Q to be linear in the Z_i’s.
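
The two agreement bounds are easy to compare numerically (my own sketch); by the weighted AM-GM inequality the exponentiated bound is never larger than the linear-in-Z_i bound we used:

    def linear_bound(s, m, R):
        return 1 / (s + 1) + (s / (s + 1)) * (m / (m - s + 1)) * R

    def power_bound(s, m, R):
        return (m * R / (m - s + 1)) ** (s / (s + 1))

    # weighted AM-GM gives power_bound <= linear_bound; e.g. with s=3, m=10:
    print(power_bound(3, 10, 0.3), "<=", linear_bound(3, 10, 0.3))  # ~0.479 vs ~0.531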

A reminder that we will have NO lecture this Friday (April 16) due to Spring Carnival.

April 13, 2010

Notes for lectures 18-21

Filed under: Lecture notes — Venkat Guruswami @ 9:51 pm

Drafts of the notes for the lectures up till last Friday are now posted on the course webpage. I plan to proofread and make necessary edits to portions of the notes (for lecture 15 and later) in the next couple of weeks or so. But the current versions should already be useful if you need a refresher on something we covered in lecture, or as reference for working on the problem set.

April 9, 2010

Lecture 21 summary

Filed under: Lecture summary — Venkat Guruswami @ 2:54 pm

Today we completed the description and analysis of the multiplicities-based weighted polynomial reconstruction algorithm, which immediately yielded an algorithm for list decoding Reed-Solomon codes up to the Johnson radius 1-\sqrt{R} for rate R. We discussed the utility of weights in exploiting “soft” information available during decoding (e.g., from decoding inner codes in a concatenation scheme, or from a demodulator which “rounds” analog signals to digital values). We saw simple consequences for list decoding binary concatenated codes, and in particular how to list-decode from a fraction (1/2-\gamma) of errors with rate \Omega(\gamma^6) and list-size O(1/\gamma^3). While the rate is positive for every \gamma > 0, it is far from the optimal \gamma^2 bound. (We will soon see how this can be improved substantially by using codes with more powerful list-decoding properties than Reed-Solomon codes at the outer level.) Finally we defined (a version of) folded Reed-Solomon codes (we will give a list decoding algorithm for these next lecture).
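
For a sense of what the Johnson radius buys, here is a two-line comparison (my own illustration) of the unique decoding radius (1-R)/2 against the list decoding radius 1-\sqrt{R}:

    import math

    # list decoding up to the Johnson radius beats unique decoding at every rate
    for R in (0.1, 0.25, 0.5, 0.75):
        print(f"R = {R:.2f}: unique {(1 - R) / 2:.3f}, list {1 - math.sqrt(R):.3f}")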

We will have notes for this week’s lectures available soon, but the material covered this week has also been written about in several surveys on list decoding (some of which are listed on the course webpage). Here are a couple of pointers, which also discuss the details of list decoding folded RS codes, which we will cover next week (though we will use a somewhat simpler presentation with weaker bounds in our lectures):
