*[Lectures scribed by Eric Blais]*

[The notes are also available in pdf format on the course webpage. Unless there is some explicit demand for it, I suspect that these might be the *last set* of notes for this course that I make available in wordpress html format.]

In this lecture, we begin the algorithmic component of the course by introducing some explicit families of good algebraic codes. We begin by looking at Reed-Solomon codes.

**1. Reed-Solomon codes **

Reed-Solomon codes are a family of codes defined over large fields as follows.

Definition 1 (Reed-Solomon codes) For integers $1 \le k \le n$, a field $\mathbb{F}$ of size $|\mathbb{F}| = q \ge n$, and a set $S = \{\alpha_1, \dots, \alpha_n\} \subseteq \mathbb{F}$, we define the Reed-Solomon code

$\mathrm{RS}_{\mathbb{F},S}[n,k] = \left\{ (p(\alpha_1), p(\alpha_2), \dots, p(\alpha_n)) \in \mathbb{F}^n : p \in \mathbb{F}[X] \text{ is a polynomial of degree} \le k-1 \right\}.$

A natural interpretation of the code is via its encoding map. To encode a message $m = (m_0, m_1, \dots, m_{k-1}) \in \mathbb{F}^k$, we interpret the message as the polynomial

$p_m(X) = m_0 + m_1 X + \cdots + m_{k-1} X^{k-1}.$

We then evaluate the polynomial at the points $\alpha_1, \dots, \alpha_n$ to get the codeword $(p_m(\alpha_1), \dots, p_m(\alpha_n))$ corresponding to $m$.

To evaluate the polynomial on the points $\alpha_1, \dots, \alpha_n$, we multiply the message vector on the left by the Vandermonde matrix

$G = \begin{pmatrix} 1 & \alpha_1 & \alpha_1^2 & \cdots & \alpha_1^{k-1} \\ 1 & \alpha_2 & \alpha_2^2 & \cdots & \alpha_2^{k-1} \\ \vdots & & & & \vdots \\ 1 & \alpha_n & \alpha_n^2 & \cdots & \alpha_n^{k-1} \end{pmatrix}.$

The matrix $G$ is a generator matrix for $\mathrm{RS}_{\mathbb{F},S}[n,k]$, so we immediately obtain that Reed-Solomon codes are linear codes over $\mathbb{F}$.
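To make the encoding map concrete, here is a minimal sketch of Reed-Solomon encoding over the prime field $\mathbb{F}_7$; the parameters $n = 6$, $k = 3$ and the evaluation set are hypothetical choices for illustration, not values used elsewhere in these notes.

```python
# A minimal sketch of Reed-Solomon encoding over GF(7), with hypothetical
# small parameters n = 6, k = 3 and evaluation set S = {1, ..., 6}
# (all the non-zero elements of the field).
q, k = 7, 3
S = [1, 2, 3, 4, 5, 6]

def rs_encode(msg):
    """Evaluate the degree < k polynomial with coefficient vector `msg`
    at every point of S; this is exactly the Vandermonde-matrix product."""
    assert len(msg) == k
    return [sum(m * pow(a, i, q) for i, m in enumerate(msg)) % q for a in S]

codeword = rs_encode([2, 0, 1])   # encodes the polynomial 2 + X^2
```

Any two distinct codewords produced this way differ in at least $n - k + 1 = 4$ positions, matching the distance analysis below.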

** 1.1. Properties of the code **

Let’s now examine the parameters of the above Reed-Solomon code. The block length of the code is clearly $n$. As we will see, the code has minimum distance $n - k + 1$. This also means that the encoding map is injective, and therefore the code has dimension equal to $k$.

The key to establishing the minimum distance of Reed-Solomon codes is the `degree mantra' that we saw in the previous lecture: *A non-zero polynomial of degree $d$ with coefficients from a field $\mathbb{F}$ has at most $d$ roots in $\mathbb{F}$.*

Theorem 2 The Reed-Solomon code $\mathrm{RS}_{\mathbb{F},S}[n,k]$ has distance $n - k + 1$.

*Proof:* Since $\mathrm{RS}_{\mathbb{F},S}[n,k]$ is a linear code, to prove the theorem it suffices to show that any non-zero codeword has Hamming weight at least $n - k + 1$.

Let $(p(\alpha_1), \dots, p(\alpha_n))$ be a non-zero codeword. The polynomial $p$ is a non-zero polynomial of degree at most $k-1$. So by our degree mantra, $p$ has at most $k-1$ roots, which implies that the codeword has at most $k-1$ zero coordinates, and hence Hamming weight at least $n - k + 1$.

By the Singleton bound, the distance cannot exceed $n - k + 1$, and therefore must equal $n - k + 1$. The upper bound on the distance can also be seen by noting that the codeword corresponding to the polynomial $p(X) = (X - \alpha_1)(X - \alpha_2)\cdots(X - \alpha_{k-1})$ has Hamming weight exactly $n - k + 1$.

Note that the minimum distance of Reed-Solomon codes meets the Singleton bound. This is quite interesting: Reed-Solomon codes are a simple, natural family of codes based only on univariate polynomials, and yet their trade-off between rate and distance is optimal.

In our definition above, we have presented Reed-Solomon codes in the most general setting, where $S$ can be an arbitrary subset of $\mathbb{F}$ of size $n$. This presentation highlights the flexibility of Reed-Solomon codes. In practice, however, there are two common choices of $S$ used to instantiate Reed-Solomon codes:

- Take $S = \mathbb{F}$, or
- Take $S = \mathbb{F}^*$, the set of non-zero elements in $\mathbb{F}$.

These two choices attain the best possible trade-off between the field size and the block length.

** 1.2. Alternative characterization **

We presented Reed-Solomon codes from an encoding point of view. It is also possible to look at these codes from the “parity-check” point of view. This approach is used in many textbooks, and leads to the following characterization of Reed-Solomon codes.

Theorem 3 (Parity-check characterization) For integers $1 \le k \le n$, a field $\mathbb{F}$ of size $n+1$, a primitive element $\alpha \in \mathbb{F}^*$, and the set $S = \{1, \alpha, \alpha^2, \dots, \alpha^{n-1}\}$, the Reed-Solomon code over $\mathbb{F}$ with evaluation set $S$ is given by

$\mathrm{RS}_{\mathbb{F},S}[n,k] = \left\{ (c_0, c_1, \dots, c_{n-1}) \in \mathbb{F}^n : c(X) = c_0 + c_1 X + \cdots + c_{n-1}X^{n-1} \text{ satisfies } c(\alpha^j) = 0 \text{ for } 1 \le j \le n-k \right\}. \ \ \ (1)$

In other words, Theorem 3 states that the codewords of the Reed-Solomon code with evaluation points $1, \alpha, \dots, \alpha^{n-1}$ correspond to the polynomials of degree $n-1$ that vanish at the points $\alpha, \alpha^2, \dots, \alpha^{n-k}$.

The code given by the characterization in Theorem 3 has the same dimension as the code obtained with our original definition; to complete the proof of Theorem 3, we only need to check that every codeword obtained in Definition 1 satisfies the parity-check condition (1).

Exercise 1Complete the proof of Theorem 3.

(Hint: The proof uses the fact that for every $\gamma \ne 1$ in $\mathbb{F}^*$, $\sum_{i=0}^{n-1} \gamma^i = 0$.)

** 1.3. Applications **

Reed-Solomon codes were originally introduced by Reed and Solomon in 1960. There have been many other codes introduced since — we will see some of those more recent codes soon — and yet Reed-Solomon codes continue to be used in many applications. Most notably, they are extensively used in storage devices like CDs, DVDs, and hard-drives.

Why are Reed-Solomon codes still so popular? One important reason is because they are optimal codes. But they do have one downside: Reed-Solomon codes require a large alphabet size. In a way, that is unavoidable; as we saw in Notes 4, any code that achieves the Singleton bound must be defined over a large alphabet.

The large alphabet brings to the fore an important issue: if we operate on bits, how do we convert the codewords over the large field into the binary alphabet? There is one obvious method. Say, for example, that we have a code defined over a field of size $2^8 = 256$. Then we can write an element in this field as an 8-bit vector.

More precisely: if we have a message that corresponds to the polynomial $p$, its encoding in the Reed-Solomon code is the set of values $p(\alpha_1), \dots, p(\alpha_n)$. We can simply express these values in a binary alphabet with $\log_2 |\mathbb{F}|$ bits each. So provided that the Reed-Solomon code is defined over a field $\mathbb{F}$ that is an extension field of $\mathbb{F}_2$, this simple transformation yields a code over $\{0,1\}$. In fact, there is a way to represent the field elements as bit vectors so that the resulting code is a binary *linear* code.
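The expansion step can be sketched as follows. This is a toy illustration (our own, not a construction from the notes) that treats the symbols of a field of size $2^m$ simply as integers in $\{0, \dots, 2^m - 1\}$ and expands each into $m$ bits; the field structure does not matter for the expansion step itself.

```python
# Toy sketch of the field-to-bits conversion: treat a symbol of a size-2^m
# field as an integer in {0, ..., 2^m - 1} and expand it into m bits.
def to_bits(symbol, m=8):
    """Little-endian m-bit expansion of a field symbol."""
    return [(symbol >> i) & 1 for i in range(m)]

def expand_codeword(codeword, m=8):
    """Concatenate the bit expansions of all symbols of a codeword."""
    return [bit for sym in codeword for bit in to_bits(sym, m)]
```

A codeword of length $n$ over the size-$2^m$ field becomes a binary word of length $nm$.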

This method is in fact what is done in practice. But then it leads to the natural question: What are the error correction capabilities of the resulting binary code?

Let’s look at an example: say we have a Reed-Solomon code with and . The distance of this code is , so the code can correct errors. The transformation to a binary code yields a binary code where and , since all we have done in the transformation is scale everything. And at worst the distance of the resulting binary code is , so the binary code can also correct at least errors.

Let us now generalize the example. If we have a $[n, k, d = n-k+1]_{2^m}$ Reed-Solomon code, then the transformation described above yields a binary linear $[nm, km, \ge d]_2$ code. Writing $m = \log(n+1)$ and considering the case where $n = 2^m - 1$, we observe that the transformation of a Reed-Solomon code to a binary code results in a $[n \log(n+1), (n-d+1)\log(n+1), \ge d]_2$ code.

The resulting binary code has a decent rate, but it is not optimal: *BCH codes* are even better, as they are $[n, n - \lceil \frac{d-1}{2} \rceil \log(n+1), d]_2$ codes. BCH codes are very interesting in their own right, and we will examine them in the next section. But first we return to the question that we posed at the beginning of this section: why are Reed-Solomon codes still so popular? If BCH codes have the same distance guarantees as Reed-Solomon codes and a better rate, one would expect these codes to have completely replaced Reed-Solomon codes.

The main reason that Reed-Solomon codes are still frequently used is that in many applications — and in particular in storage device applications — errors often occur in bursts. Reed-Solomon codes have the nice property that bursts of consecutive errors affect bits that correspond to a much smaller number of elements in the field over which the Reed-Solomon code is defined. For example, if a binary code constructed from a Reed-Solomon code over a field of size $2^m$ is hit with a burst of $m$ consecutive errors, these errors affect at most two elements in the field, and such a small number of symbol errors is easily corrected.

**2. BCH codes **

BCH codes were discovered independently by Bose and Ray-Chaudhuri and by Hocquenghem in the late 1950s. As we saw in the previous section, BCH codes have better rate than binary codes constructed from Reed-Solomon codes. In fact, as we will see later in the section, the rate of BCH codes is optimal, up to lower order terms.

BCH codes can be defined over any field, but for today’s lecture we will focus on binary BCH codes:

Definition 4 (Binary BCH codes) For a length $n = 2^m - 1$, a distance $d$, and a primitive element $\alpha \in \mathbb{F}_{2^m}^*$, we define the binary BCH code

$\mathrm{BCH}(n, d) = \left\{ (c_0, c_1, \dots, c_{n-1}) \in \{0,1\}^n : c(X) = c_0 + c_1 X + \cdots + c_{n-1}X^{n-1} \text{ satisfies } c(\alpha^j) = 0 \text{ for } 1 \le j \le d-1 \right\}.$

This definition should look familiar: it is almost exactly the same as the alternative characterization of Reed-Solomon codes in Theorem 3. There is one important difference: in Theorem 3, the coefficients could take any value in the extension field, whereas here we restrict the coefficients to take values only from the base field (i.e., the coefficients $c_i$ each take values from $\{0,1\}$ instead of $\mathbb{F}_{2^m}$).

The BCH codes form linear spaces. The definition gives the parity-check view of the linear space, as it defines the constraints over the elements. Each constraint $c(\alpha^j) = 0$ is a constraint over the extension field $\mathbb{F}_{2^m}$, but it can also be viewed as a set of $m$ linear constraints over $\mathbb{F}_2$.

The last statement deserves some justification. That each constraint over $\mathbb{F}_{2^m}$ corresponds to $m$ constraints over $\mathbb{F}_2$ is clear from the vector space view of extension fields. That the resulting constraints are linear is not as obvious, but follows from the argument below.

Consider the (multiplication) transformation $\mathrm{mult}_\beta : \gamma \mapsto \beta\gamma$ defined on $\mathbb{F}_{2^m}$. This map is $\mathbb{F}_2$-linear, since $\beta(\gamma_1 + \gamma_2) = \beta\gamma_1 + \beta\gamma_2$. Using the additive vector space structure of $\mathbb{F}_{2^m}$, we can pick a basis of $\mathbb{F}_{2^m}$ over $\mathbb{F}_2$, and represent each element $\gamma$ as the (column) vector $v_\gamma \in \mathbb{F}_2^m$. The $\mathbb{F}_2$-linear multiplication map then corresponds to a linear transformation of this vector representation, mapping $v_\gamma$ to $M_\beta v_\gamma$ for a matrix $M_\beta \in \mathbb{F}_2^{m \times m}$. And the coefficients $c_i$ are in $\{0,1\}$, so the constraint $c(\alpha^j) = \sum_{i=0}^{n-1} c_i \alpha^{ij} = 0$ is equivalent to the constraint

$\sum_{i=0}^{n-1} c_i\, v_{\alpha^{ij}} = \mathbf{0},$

which yields $m$ linear constraints over $\mathbb{F}_2$, one for each coordinate.
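To see the multiplication map as a matrix in the smallest interesting case, here is a sketch in $\mathbb{F}_4 = \mathbb{F}_2[x]/(x^2 + x + 1)$; this tiny example is our own choice for illustration.

```python
# Sketch: in F_4 = F_2[x]/(x^2 + x + 1), represent a0 + a1*x as the bit
# pair (a0, a1). Using x^2 = x + 1, multiplication by x sends a0 + a1*x
# to a1 + (a0 + a1)*x, which is the F_2-linear map with matrix [[0,1],[1,1]].
def mul_by_x(a):
    a0, a1 = a
    return (a1, (a0 + a1) % 2)
```

Iterating the map walks through the powers of $x$: applying it three times to $x$ returns $x$, reflecting $x^3 = 1$ in $\mathbb{F}_4^*$.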

** 2.1. Parameters of BCH codes **

The block length of the code $\mathrm{BCH}(n,d)$ is $n$, and its distance is at least $d$. The latter statement is seen most easily by noting that the BCH code is a subcode of Reed-Solomon codes (i.e., the codewords of the BCH code form a subset of the codewords of the corresponding Reed-Solomon code), so the distance of the BCH code is bounded below by the distance of the Reed-Solomon code.

The dimension of the code is a bit more interesting. The dimension of the code is at least $n - (d-1)m = n - (d-1)\log(n+1)$, since in our definition we have $d-1$ constraints on the extension field that each generate $m$ constraints in the base field. But this bound on the dimension is not useful: it is (almost) identical to the dimension of Reed-Solomon codes converted to the binary alphabet (for a similar distance and block length), so if this bound were tight we would have no reason for studying BCH codes. This bound, however, can be tightened, as the following more careful analysis shows.

Lemma 5 For a length $n = 2^m - 1$ and a distance $d$, the dimension of the code $\mathrm{BCH}(n,d)$ is at least $n - \lceil \frac{d-1}{2} \rceil \log(n+1)$.

*Proof:* In order to establish the tighter bound on the dimension of BCH codes, we want to show that some of the constraints in Definition 4 are redundant. We do so by showing that for any polynomial $c$ with coefficients in $\{0,1\}$ and any element $\beta \in \mathbb{F}_{2^m}$, if we have $c(\beta) = 0$, then we must also have $c(\beta^2) = 0$. We establish this fact below.

Let $c$ and $\beta$ be such that $c(\beta) = 0$. Then we also have $c(\beta)^2 = 0$, so

$\left( \sum_{i=0}^{n-1} c_i \beta^i \right)^2 = 0.$

For any two elements $\gamma, \delta \in \mathbb{F}_{2^m}$, $(\gamma + \delta)^2 = \gamma^2 + \delta^2$, so $c(\beta)^2 = 0$ also implies

$\sum_{i=0}^{n-1} c_i^2 \beta^{2i} = 0.$

Since the coefficients $c_i$ are in $\{0,1\}$, $c_i^2 = c_i$ for all $i$. Therefore, $c(\beta) = 0$ implies that

$c(\beta^2) = \sum_{i=0}^{n-1} c_i (\beta^2)^i = 0,$

which is what we wanted to show.

To complete the proof, we now observe that the fact we just proved implies that the constraints $c(\alpha^j) = 0$ for even $j$ are all redundant (the constraint $c(\alpha^{2j'}) = 0$ is implied by $c(\alpha^{j'}) = 0$); we can remove these constraints from the definition without changing the set of codewords in a BCH code. Doing this operation leaves $\lceil \frac{d-1}{2} \rceil$ constraints over $\mathbb{F}_{2^m}$, and hence $\lceil \frac{d-1}{2} \rceil \log(n+1)$ linear constraints over $\mathbb{F}_2$.

Remark 1 The bound in Lemma 5 is asymptotically tight; the $\lceil \frac{d-1}{2} \rceil \log(n+1)$ term can not be improved to $(1-\epsilon)\frac{d-1}{2}\log(n+1)$ for any constant $\epsilon > 0$.

The asymptotic tightness of the bound in Lemma 5 follows from the Hamming bound.

** 2.2. Alternative characterization **

Another interpretation of the code is that it is equivalent to taking the definition of Reed-Solomon codes, and modifying it to keep only the polynomials where all the evaluations lie in the base field. In fact, this interpretation leads to the following corollary to Theorem 3. The proof follows immediately from Theorem 3 and Definition 4 of BCH codes.

Corollary 6 BCH codes are subfield subcodes of Reed-Solomon codes. Specifically,

$\mathrm{BCH}(n, d) = \mathrm{RS}_{\mathbb{F}_{2^m},S}[n, n-d+1] \cap \{0,1\}^n,$

where $S = \{1, \alpha, \dots, \alpha^{n-1}\}$ is the evaluation set of Theorem 3.

An important implication of Corollary 6 is that any decoder for Reed-Solomon codes also yields a decoder for BCH codes. Therefore, in later lectures, we will only concentrate on devising efficient algorithms for decoding Reed-Solomon codes; those same algorithms will immediately also give us efficient decoding of BCH codes.

** 2.3. Analysis and applications **

Just like Hamming codes, BCH codes have a very good rate but are only useful when we require a code with small distance (in this case, BCH codes are only useful when $d = O(n / \log n)$). In fact, there is a much closer connection between Hamming codes and BCH codes:

Exercise 2 For $d = 3$, show that the code $\mathrm{BCH}(n, 3)$ is the same (after perhaps some coordinate permutation) as the $[n, n - \log(n+1), 3]$ Hamming code.

As we mentioned above, we are not particularly interested in BCH codes from an algorithmic point of view, since efficient decoding of Reed-Solomon codes also implies efficient decoding of BCH codes. But there are some applications where the improved bound on the dimension of BCH codes is crucial.

In particular, one interesting application of BCH codes is in the generation of $t$-wise independent distributions. A distribution over $n$-bit strings is $t$-wise independent if the strings generated by this distribution look completely random when restricted to any $t$ positions of the strings. The simplest way to generate a $t$-wise independent distribution is to generate strings by the uniform distribution. But this method requires a sample space with $2^n$ points. Using BCH codes, it is possible to generate $t$-wise independent distributions with a sample space of only $O(n^{\lfloor t/2 \rfloor})$ points.

**3. Reed-Muller codes **

The BCH codes we introduced were a generalization of Hamming codes. We now generalize the dual of Hamming codes — Hadamard codes. The result is another old family of algebraic codes called Reed-Muller codes. We saw in Notes 1 that Hadamard codes were related to first-order Reed-Muller codes; we now obtain the full class of Reed-Muller codes by considering polynomials of larger degree.

Reed-Muller codes were first introduced by Muller in 1954. Shortly afterwards, Reed provided the first efficient decoding algorithm for these codes. Originally, only binary Reed-Muller codes were considered, but we will describe the codes in the more general case. The non-binary setting is particularly important: in many applications of codes in computational complexity, Reed-Muller codes over non-binary fields have been used to obtain results that we are still unable to achieve with any other family of codes. We saw one such example, of hardness amplification using Reed-Muller codes, in the Introduction to Computational Complexity class last year.

Definition 7 (Reed-Muller codes) Given a field size $q$, a number $m$ of variables, and a total degree bound $r$, the code $\mathrm{RM}_q(m,r)$ is the linear code over $\mathbb{F}_q$ defined by the encoding map

$p(X_1, \dots, X_m) \mapsto \langle p(\mathbf{x}) \rangle_{\mathbf{x} \in \mathbb{F}_q^m}$

applied to the domain of all polynomials in $\mathbb{F}_q[X_1, \dots, X_m]$ of total degree $\deg(p) \le r$.

Reed-Muller codes form a strict generalization of Reed-Solomon codes: the latter were defined based on univariate polynomials, while we now consider polynomials over many variables.

There is one term in the definition of Reed-Muller codes that we have not yet defined formally: the *total degree* of polynomials. We do so now: the total degree of the monomial $X_1^{d_1} X_2^{d_2} \cdots X_m^{d_m}$ is $d_1 + d_2 + \cdots + d_m$, and the total degree of a polynomial is the maximum total degree over all its monomials that have a non-zero coefficient.

** 3.1. Properties of the code **

The block length of the code $\mathrm{RM}_q(m,r)$ is $n = q^m$, and the dimension of the code is the dimension of the space of polynomials in $\mathbb{F}_q[X_1, \dots, X_m]$ of total degree at most $r$.

When $r < q$, this dimension can be computed explicitly: there are $\binom{m+r}{r}$ monomials of total degree at most $r$, so the dimension is $\binom{m+r}{r}$.

In general, for any $r$ the dimension is the number of monomials on $m$ variables with total degree at most $r$ and individual degrees at most $q-1$, i.e.

$\left| \left\{ (d_1, \dots, d_m) : 0 \le d_i \le q-1, \ d_1 + \cdots + d_m \le r \right\} \right|.$

When $r \ge q$ this count does not have a simple closed form.

As with Reed-Solomon codes, the interesting parameter of Reed-Muller codes is their distance. To compute the distance parameter, we look for the minimum number of zeros of any non-zero polynomial. Since $x^q = x$ for every $x \in \mathbb{F}_q$, when considering $m$-variate polynomials over $\mathbb{F}_q$ which will be evaluated at points in $\mathbb{F}_q^m$, we can restrict the degree in each variable to be at most $q-1$. The distance property of Reed-Solomon codes was a consequence of the following fundamental result: a non-zero univariate polynomial of degree at most $d$ over a field has at most $d$ roots. The Schwartz-Zippel Lemma extends the degree mantra to give a bound on the number of roots of multivariate polynomials.

Theorem 8 (Number of zeroes of multivariate polynomials) Let $p \in \mathbb{F}_q[X_1, \dots, X_m]$ be a non-zero polynomial of total degree $d$, with the maximum individual degree in the $X_i$'s bounded by $\ell$. Then

$\Pr_{\mathbf{x} \in \mathbb{F}_q^m}\left[ p(\mathbf{x}) = 0 \right] \le 1 - \left(1 - \frac{\ell}{q}\right)^{a} \left(1 - \frac{b}{q}\right),$

where $a = \lfloor d/\ell \rfloor$, $b = d - a\ell$.

The proof of the Schwartz-Zippel Lemma follows from two slightly simpler lemmas. The first lemma provides a good bound on the number of roots of a multivariate polynomial when its total degree is smaller than the size of the underlying field.

Lemma 9 (Schwartz 1980) Let $p \in \mathbb{F}_q[X_1, \dots, X_m]$ be a non-zero polynomial of total degree at most $d < q$. Then

$\Pr_{\mathbf{x} \in \mathbb{F}_q^m}\left[ p(\mathbf{x}) = 0 \right] \le \frac{d}{q}.$

*Proof:* The proof of Lemma 9 is by induction on the number of variables in the polynomial. In the base case, when $p$ is a univariate polynomial, the lemma follows directly from the degree mantra.

For the inductive step, consider the decomposition

$p(X_1, \dots, X_m) = \sum_{i=0}^{t} X_m^i \, p_i(X_1, \dots, X_{m-1}),$

where $t$ is the degree of $p$ in $X_m$. Then $p_t$ is a non-zero polynomial of total degree at most $d - t$. By the induction hypothesis,

$\Pr_{x_1, \dots, x_{m-1}}\left[ p_t(x_1, \dots, x_{m-1}) = 0 \right] \le \frac{d-t}{q}.$

Also, when $p_t(x_1, \dots, x_{m-1}) \ne 0$, then $p(x_1, \dots, x_{m-1}, X_m)$ is a non-zero univariate polynomial of degree at most $t$, so we have

$\Pr_{x_m}\left[ p(x_1, \dots, x_m) = 0 \mid p_t(x_1, \dots, x_{m-1}) \ne 0 \right] \le \frac{t}{q}.$

Therefore,

$\Pr_{\mathbf{x}}\left[ p(\mathbf{x}) = 0 \right] \le \frac{d-t}{q} + \frac{t}{q} = \frac{d}{q}.$

Remark 2 A version of Lemma 9 can also be stated for infinite fields (or integral domains). Specifically, the same proof shows that for any field $\mathbb{F}$ and any finite subset $S \subseteq \mathbb{F}$, the probability that a non-zero polynomial of total degree $d$ evaluates to zero is at most $\frac{d}{|S|}$ when the values of the variables are chosen independently and uniformly at random from $S$.
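As a quick sanity check of Lemma 9 (with hypothetical small parameters of our own choosing), we can count the zeros of $p(x, y) = xy$ over $\mathbb{F}_5$ by brute force; the total degree is $d = 2$, so the fraction of zeros should be at most $d/q = 2/5$.

```python
# Empirically check Lemma 9 for p(x, y) = x*y over GF(5): the polynomial
# vanishes iff x = 0 or y = 0, and its total degree is d = 2.
q, d = 5, 2
zeros = sum(1 for x in range(q) for y in range(q) if (x * y) % q == 0)
fraction = zeros / q ** 2          # 9 zeros out of 25 points
assert fraction <= d / q           # 0.36 <= 0.4, as the lemma promises
```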

In many computer science applications, the field size is very large, and the bound of Lemma 9 is sufficient. As a result, that lemma is often presented as the Schwartz-Zippel Lemma. For our analysis of Reed-Muller codes, however, we also need to bound the probability that a multivariate polynomial is zero when the size of the underlying field is small. The following lemma gives us a good bound in this setting.

Lemma 10 (Zippel 1979) Let $p \in \mathbb{F}_q[X_1, \dots, X_m]$ be a non-zero polynomial with maximum degree $d_i$ in the variable $X_i$ for $i = 1, \dots, m$. Then

$\Pr_{\mathbf{x} \in \mathbb{F}_q^m}\left[ p(\mathbf{x}) = 0 \right] \le 1 - \prod_{i=1}^{m} \left(1 - \frac{d_i}{q}\right).$

*Proof:* We again proceed with a proof by induction on the number of variables. When $p$ is univariate, the lemma follows since a degree $d_1$ polynomial has at most $d_1$ zeroes.

For the inductive step, consider again the decomposition

$p(X_1, \dots, X_m) = \sum_{i=0}^{d_m} X_m^i \, p_i(X_1, \dots, X_{m-1}).$

The decomposition says that we can think of the (multivariate) polynomial $p$ as a univariate polynomial in $X_m$. That is, $p$ can be viewed as a polynomial in the variable $X_m$ with coefficients coming from $\mathbb{F}_q(X_1, \dots, X_{m-1})$, the field of rational functions in the variables $X_1, \dots, X_{m-1}$. By the degree mantra for univariate polynomials, we get that there are at most $d_m$ values $x_m$ for which $p(X_1, \dots, X_{m-1}, x_m)$ is the zero polynomial (in the field $\mathbb{F}_q(X_1, \dots, X_{m-1})$). Thus there are certainly at least $q - d_m$ values that can be assigned to $X_m$ such that the resulting $p$ is a non-zero polynomial (on $m-1$ variables). Applying the induction hypothesis to this polynomial completes the proof of the lemma.

To complete the proof of Theorem 8, we can apply Lemma 10 repeatedly to a polynomial, removing one variable at a time, until the total degree $d$ of the polynomial on the remaining variables satisfies $d < q$, and then we can apply Lemma 9. We leave the details of the proof to the reader.

It is reasonable to ask if the bound of Theorem 8 could be improved. In general, it can't. Consider the polynomial

$p(X_1, \dots, X_{a+1}) = \prod_{i=1}^{a} \left(1 - X_i^{q-1}\right) \cdot \prod_{j=1}^{b} \left(X_{a+1} - \beta_j\right),$

where $\beta_1, \dots, \beta_b$ are distinct elements of $\mathbb{F}_q$. The polynomial $p$ has total degree $d = a(q-1) + b$ and maximum individual degree $q-1$. The value of $p(\mathbf{x})$ is non-zero only when $x_1 = \cdots = x_a = 0$ and $x_{a+1} \notin \{\beta_1, \dots, \beta_b\}$. The first condition is satisfied with probability $q^{-a}$ and the second with probability $1 - \frac{b}{q}$. So the bound of Theorem 8 is tight.

**4. Binary Reed-Muller codes **

We can now use the Schwartz-Zippel Lemma to establish the distance parameter of binary Reed-Muller codes.

Recall that the binary Reed-Muller code $\mathrm{RM}(m, r) = \mathrm{RM}_2(m, r)$ is defined by

$\mathrm{RM}(m, r) = \left\{ \langle p(\mathbf{x}) \rangle_{\mathbf{x} \in \{0,1\}^m} : p \in \mathbb{F}_2[X_1, \dots, X_m], \ \deg(p) \le r \right\}.$

The block length of this code is $n = 2^m$, and the dimension of this code is

$k = \sum_{i=0}^{r} \binom{m}{i},$

which can be roughly approximated by $m^r$ when $r$ is constant.

Applying Theorem 8 (or Lemma 10) to this setting, we can conclude that the distance of $\mathrm{RM}(m,r)$ is at least $2^{m-r}$. We will reprove this below with a more specialized argument and also show that the distance is exactly $2^{m-r}$.

** 4.1. Decoding Reed-Muller codes **

The Reed-Muller codes were first introduced by Muller in 1954. Muller showed that the family of codes he introduced had good distance parameters, but he did not study the problem of decoding these codes efficiently.

The naive method of decoding the code is to enumerate all the codewords, compute their distance to the received word, and output the one with the minimum distance. This algorithm runs in time $2^k \cdot \mathrm{poly}(n)$; since the dimension is $k = \Theta(m^r)$ for constant $r$ and the block length is $n = 2^m$, this is $2^{O(\log^r n)}$. The running time of the naïve decoding algorithm is therefore quasi-polynomial (but not polynomial!) in the block length $n$.

Reed introduced the first efficient algorithm for decoding Reed-Muller codes shortly after the codes were introduced by Muller. Reed's algorithm also corrects up to half the minimum distance (i.e., up to $\lfloor \frac{2^{m-r}-1}{2} \rfloor$ errors) and further runs in time polynomial in the block length $n$.

We will not cover Reed’s decoding algorithm for Reed-Muller codes in this class. At a very high level, the idea of the algorithm is to apply a majority logic decoding scheme. The algorithm was covered in previous iterations of this class; interested readers are encouraged to consult those notes for more details on the algorithm.

** 4.2. Distance of Reed-Muller codes **

Let us now give a self-contained argument proving that the distance of the code $\mathrm{RM}(m, r)$ is exactly $2^{m-r}$.

We begin by showing that the distance of binary Reed-Muller codes is at most $2^{m-r}$. Since Reed-Muller codes are linear codes, we can do so by exhibiting a non-zero codeword of $\mathrm{RM}(m,r)$ with weight $2^{m-r}$. Consider the polynomial

$p(X_1, \dots, X_m) = X_1 X_2 \cdots X_r.$

The polynomial $p$ is a non-zero polynomial of degree $r$, and clearly $p(x) = 1$ only when $x_1 = x_2 = \cdots = x_r = 1$. There are $2^{m-r}$ choices of $x \in \{0,1\}^m$ that satisfy this condition, so the corresponding codeword has weight $2^{m-r}$.

Let us now show that the distance of binary Reed-Muller codes is at least $2^{m-r}$ by showing that the weight of any non-zero codeword in $\mathrm{RM}(m,r)$ is at least $2^{m-r}$. Consider any non-zero polynomial $p$ of total degree at most $r$. We can write $p$ as

$p = X_{i_1} X_{i_2} \cdots X_{i_s} + \tilde{p},$

where $X_{i_1} X_{i_2} \cdots X_{i_s}$ is a maximum degree term in $p$ (so $s \le r$) and $\tilde{p}$ contains the remaining terms. Consider any assignment of values to the variables outside $\{X_{i_1}, \dots, X_{i_s}\}$. After this assignment, the resulting polynomial on $X_{i_1}, \dots, X_{i_s}$ is a non-zero polynomial, since the term $X_{i_1} \cdots X_{i_s}$ cannot be cancelled. Therefore, for each of the $2^{m-s}$ possible assignments of values to the variables outside $\{X_{i_1}, \dots, X_{i_s}\}$, the resulting polynomial is a non-zero polynomial.

When you have a non-zero polynomial, there is always at least one assignment of values to its variables such that the polynomial does not evaluate to $0$. Therefore, for each assignment to the variables outside $\{X_{i_1}, \dots, X_{i_s}\}$, there exists at least one assignment of values to $X_{i_1}, \dots, X_{i_s}$ such that $p \ne 0$. This implies that the weight of the codeword is at least $2^{m-s} \ge 2^{m-r}$.
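The distance claim can be verified by brute force for tiny parameters; the choice $m = 3$, $r = 1$ below is a hypothetical example of ours.

```python
from itertools import product

# Brute-force check that the minimum weight of a non-zero codeword of the
# binary Reed-Muller code with m = 3 variables and degree bound r = 1
# equals 2^(m - r) = 4. Degree <= 1 polynomials are c0 + c1*x1 + ... + cm*xm.
m, r = 3, 1
points = list(product([0, 1], repeat=m))
weights = [
    sum((c[0] + sum(c[i + 1] * p[i] for i in range(m))) % 2 for p in points)
    for c in product([0, 1], repeat=m + 1)
    if any(c)  # skip the zero polynomial
]
min_weight = min(weights)   # should equal 2**(m - r) = 4
```

The non-constant affine functions are exactly balanced (weight $4$ out of $8$), while the constant $1$ has weight $8$, so the minimum is $2^{m-r} = 4$ as the argument above predicts.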

In summary, when the degree bound $r$ is constant, binary Reed-Muller codes have good distance, but a poor rate (roughly $\frac{m^r}{2^m}$, which tends to $0$ for large $m$). Increasing the parameter $r$ increases the rate of the code but also decreases the distance of the code at a faster pace. So there is no setting of $r$ that yields a code with constant rate and constant relative distance.

In the following section, we introduce a family of binary codes that can be constructed efficiently and has both a good rate and a good distance simultaneously.

**5. Concatenated codes **

Concatenated codes were introduced by Forney in his doctoral thesis in 1966. In fact, Forney proved many wonderful properties about these codes; in this lecture we only give a brief overview of the definitions and key properties of concatenated codes.

The starting point in our search for binary codes with good rate and good distance is the idea that we have already seen codes with good rate and good distance when we have large alphabets: the distance of Reed-Solomon codes meets the Singleton bound, so they in fact are optimal codes. So let’s start with Reed-Solomon codes and see if we can use them to construct a family of good binary codes.

We already saw in the last lecture a simple transformation for converting Reed-Solomon codes to binary codes. In this transformation, we started with a polynomial of degree at most $k-1$ and evaluated it over the evaluation set to obtain the values $p(\alpha_1), \dots, p(\alpha_n)$. We then encoded each of the values in the binary alphabet with $\log_2 q$ bits.

The binary code obtained with the simple transformation has block length and distance . This distance is not very good, since the lower bound on the relative distance is quite weak. Still, the lower bound follows from a very simple analysis; one may hope that a better bound — ideally of the form — might be obtained with a more sophisticated analysis or by applying some neat trick (like, say, by encoding the bits in some clever basis). Unfortunately, that hope is not realizable: there is a nearly tight upper bound showing that with the simple transformation, the distance of the resulting binary code is at most .

So if we hope to obtain a binary code with good distance from the Reed-Solomon code, we need to introduce a new idea to the transformation. One promising idea is to look closely at the step where we took the values from the field and encoded them with bits in the binary alphabet: instead of using the minimum number of bits to encode those elements in the binary alphabet, we could use more bits — say bits — and use an encoding that adds more distance to the final code. That is indeed the idea used to obtain concatenated codes.

** 5.1. Binary concatenated codes **

The concatenated code is defined by two codes. The *outer* code converts the input message to a codeword over a large alphabet $\Sigma$, and the *inner* code is a much smaller code that converts symbols from $\Sigma$ to codewords over a small alphabet. When the small alphabet is $\{0,1\}$, the code is a binary concatenated code.

A key observation in the definition of concatenated codes is that the inner code is a small code, in the sense that it only needs one codeword for each symbol of the outer alphabet. The size of that alphabet is (typically) much smaller than the total number of codewords of the concatenated code, and this will let us do a brute-force search for good inner codes in our construction of good concatenated codes. But first, let us examine the rate and distance parameters of general concatenated codes.

** 5.2. Facts of concatenated codes **

The rate of the concatenated code $C_{\text{out}} \circ C_{\text{in}}$ is

$R(C_{\text{out}} \circ C_{\text{in}}) = \frac{K \log_2 |\Sigma|}{N \cdot n} = R(C_{\text{out}}) \cdot R(C_{\text{in}}),$

where the outer code is an $(N, K)$ code over the alphabet $\Sigma$, the inner code encodes each symbol of $\Sigma$ with $n$ bits, and the last equality uses the fact that $R(C_{\text{in}}) = \frac{\log_2 |\Sigma|}{n}$.

The simple transformation of Reed-Solomon codes to binary codes used an inner code with rate $1$ (which did not add any redundancy). The rate equation says that we can replace the trivial inner code with any other code and incur only a multiplicative rate cost equal to the rate of the inner code.

Let’s now look at the distance of concatenated code. We do not get an exact formula for the distance of these codes, but a simple argument does give us a lower bound that will be sufficient to construct concatenated codes with good distance:

Proposition 11 The distance of the concatenated code $C_{\text{out}} \circ C_{\text{in}}$ satisfies

$d(C_{\text{out}} \circ C_{\text{in}}) \ge d(C_{\text{out}}) \cdot d(C_{\text{in}}).$

*Proof:* Let $m_1$ and $m_2$ be two distinct messages. The distance property of the outer code guarantees that the encodings $C_{\text{out}}(m_1)$ and $C_{\text{out}}(m_2)$ will differ in at least $d(C_{\text{out}})$ symbols. For each of the symbols where they differ, the inner code will encode the symbols into codewords that differ in at least $d(C_{\text{in}})$ places.

The lower bound of Proposition 11 is not tight, and in general the distance of concatenated codes can be much larger. This may seem counter-intuitive at first: at the outer level, we can certainly have two codewords that differ in only $d(C_{\text{out}})$ positions, and at the inner level we can also have two different symbols whose encodings under the inner code differ in only $d(C_{\text{in}})$ positions. But the two events are not necessarily independent — it could be that when there are two codewords at the outer level that differ at only $d(C_{\text{out}})$ symbols, then they must differ in a pattern that the inner code can take advantage of, so that for those cases the inner code does much better than its worst case.

In fact, a probabilistic argument shows that when the outer code is a Reed-Solomon code and the inner codes are “random projections” obtained by mapping the symbols of to codewords in with independently chosen random bases, then the resulting concatenated code reaches the Gilbert-Varshamov bound with high probability. (And thus has distance much larger than the lower bound suggested by Proposition 11.) This construction is randomized; it is an interesting problem to give a family of *explicit* codes for which the inequality of Proposition 11 is far from tight. (There are some codes called *multilevel concatenated codes* where the Zyablov bound can be improved, but this still falls well short of the GV bound.)

** 5.3. Constructing good concatenated codes **

In this section, we construct a family of binary concatenated codes with good rate and good distance. Fix $0 < R < 1$ to be our target rate. We will build a code with rate $R$ and distance as large as possible.

For our construction, take the outer code to be a Reed-Solomon code of rate $\frac{R}{r}$, where $r$ is a parameter we will fix shortly; the relative distance of this outer code is $1 - \frac{R}{r}$. Take the inner code to be a binary linear code with parameters $[n, rn]$, so that the rate of the inner code is $r$. The rate of the concatenated code is the product of the two rates, so it is $\frac{R}{r} \cdot r = R$.

We now have a partial construction. The outer code is the Reed-Solomon code, which we know is optimal, so we're done with this part of the construction. The inner code, however, is not yet defined: we have only specified that we want it to be a linear code with the prescribed rate. For our concatenated code to have good distance, we want the distance of the inner code to be as large as possible.

The asymptotic Gilbert-Varshamov bound guarantees that there exists a binary linear code with rate $r \ge 1 - h(\delta)$, where $h$ is the binary entropy function and $\delta$ is the relative distance. Rearranging the terms, this means that there is a code with rate $r$ and relative distance $h^{-1}(1-r)$. So if we find an inner code that matches this distance bound, we obtain a concatenated code with relative distance

$\delta \ge \left(1 - \frac{R}{r}\right) \cdot h^{-1}(1 - r),$

where $R$ is the overall rate of the concatenated code.

The question remains: how can we find an inner code with such a minimum distance? Since the inner code is a small code, we can do a brute-force search over linear codes to find one with large distance.

We have to be a little careful in the algorithm that we use to search for the inner code. A naïve searching algorithm simply enumerates all the possible generator matrices and checks the distance of each corresponding code. But for a code of dimension $k$ and block length $n$ there are $2^{kn}$ possible generator matrices, so this search does not run in time polynomial in the block length of the concatenated code.

There is a more efficient algorithm for finding an inner code with minimum distance $d$. The algorithm uses the greedy method to build a parity check matrix $H$ such that every set of $d-1$ columns in $H$ is linearly independent: enumerate all the possible columns; if the current column is not contained in the linear span of any $d-2$ columns already in $H$, add it to $H$.

The greedy algorithm examines $2^{n-k}$ candidate columns, and as long as $\sum_{i=0}^{d-2} \binom{n-1}{i} < 2^{n-k}$, the process is guaranteed to find a parity-check matrix of an $[n, k, d]$ code. So this method can be used to find a linear code that meets the Gilbert-Varshamov bound in time $2^{O(n)}$, which is polynomial in the block length of the concatenated code.
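The greedy procedure can be sketched as follows; encoding the columns of $H$ as integer bitmasks is an implementation choice of ours, with XOR of bitmasks implementing addition of columns over $\mathbb{F}_2$.

```python
from itertools import combinations
from functools import reduce

def greedy_columns(r, d):
    """Greedily collect non-zero columns in GF(2)^r (encoded as integer
    bitmasks) so that every d - 1 chosen columns are linearly independent:
    a candidate is rejected iff it equals the XOR (= GF(2) sum) of some
    non-empty set of at most d - 2 already-chosen columns."""
    cols = []
    for v in range(1, 2 ** r):
        spanned = any(
            reduce(lambda a, b: a ^ b, subset) == v
            for t in range(1, d - 1)
            for subset in combinations(cols, t)
        )
        if not spanned:
            cols.append(v)
    return cols
```

For $r = 3$, $d = 3$ the procedure keeps all $7$ non-zero columns, recovering the parity-check matrix of the $[7, 4, 3]$ Hamming code; for $d = 4$ it keeps the columns $\{1, 2, 4, 7\}$, a parity-check matrix of the $[4, 1, 4]$ repetition code.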

This completes our construction of a binary concatenated code with good rate and good distance. In the next section, we examine the best rate-distance trade-off obtained by optimizing the parameters of the concatenated code. But first, we mention one more useful property of the code we have constructed: it is a linear code.

Exercise 3 Prove that the concatenated code is linear over $\mathbb{F}_2$.

** 5.4. Zyablov radius **

In our construction of good concatenated codes, we are free to set the rate $r$ of the inner code. Optimizing the value of $r$ over all the choices that guarantee an overall rate of $R$ for the concatenated code yields the following result.

Theorem 12 Let $0 < R < 1$. Then it is possible to efficiently construct a code of rate $R$ and relative distance

$\delta \ge \max_{R \le r \le 1} \left(1 - \frac{R}{r}\right) \cdot h^{-1}(1 - r),$

where $h$ is the binary entropy function.

This trade-off between rate and relative distance is called the Zyablov trade-off curve, or sometimes the Zyablov bound, and is named after Zyablov, who first observed it in 1971. For any rate $0 < R < 1$, the resulting relative distance is bounded away from $0$, so we get the following corollary.

Corollary 13Asymptotically good codes of any desired rate can be constructed in polynomial time.

So how good is the resulting bound? Quite a bit weaker than the Gilbert-Varshamov bound, as the figure shows.

Another aspect of our construction of concatenated codes that is somewhat unsatisfactory is that while it is constructed in polynomial time, it involves a brute-force search for a code of logarithmic block length. It would be nice to have an explicit formula or description of what the code looks like. From a complexity viewpoint, we might want a linear code whose generator matrix entries we can compute in polylogarithmic time.

In the next lecture, we will see an asymptotically good code that is constructed explicitly without any brute-force search for smaller codes, and which further achieves the Zyablov trade-off between rate and relative distance for all sufficiently large rates.
