We now turn to *limitations* of codes, in the form of *upper bounds* on the rate of codes as a function of their relative distance. We will typically give concrete bounds on the size of codes, and then infer as corollaries the asymptotic statements for code families relating rate and relative distance. All the bounds apply to general codes and do not take advantage of linearity. However, for the most sophisticated of our bounds, the linear programming bound, which we discuss in the next set of notes, we will present the proof only for linear codes, as it is simpler in this case.

We recall the two bounds we have already seen. The Gilbert-Varshamov bound asserted the existence of (linear) $q$-ary codes of block length $n$, distance at least $d$, and size at least $\frac{q^n}{\sum_{j=0}^{d-1} \binom{n}{j}(q-1)^j}$. Or in asymptotic form, the existence of codes of rate approaching $1 - h_q(\delta)$ and relative distance $\delta$, where $h_q$ denotes the $q$-ary entropy function. The Hamming or sphere-packing bound gave an upper bound on the size (or rate) of codes, which is our focus in these notes. The Hamming bound says that a $q$-ary code of block length $n$ and distance $d$ can have at most $\frac{q^n}{\sum_{j=0}^{\lfloor (d-1)/2 \rfloor} \binom{n}{j}(q-1)^j}$ codewords. Or in asymptotic form, a $q$-ary code of relative distance $\delta$ can have rate at most $1 - h_q(\delta/2) + o(1)$.
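As a quick numeric sanity check (a sketch, not part of the notes themselves; the function name `h_q` and the sample value of $\delta$ are illustrative), one can compute the asymptotic GV rate $1 - h_q(\delta)$ and the asymptotic Hamming upper bound $1 - h_q(\delta/2)$ and confirm that the achievable rate sits below the upper bound:

```python
from math import log

def h_q(x, q=2):
    """q-ary entropy function h_q(x), with h_q(0) = 0."""
    if x == 0:
        return 0.0
    return (x * log(q - 1, q) - x * log(x, q)
            - (1 - x) * log(1 - x, q))

delta = 0.1
gv_rate = 1 - h_q(delta)           # Gilbert-Varshamov: achievable rate
hamming_rate = 1 - h_q(delta / 2)  # Hamming: upper bound on rate

# The GV rate is strictly below the Hamming upper bound for 0 < delta < 1.
print(round(gv_rate, 4), round(hamming_rate, 4))
```

For $q = 2$ and $\delta = 0.1$ this gives a GV rate of about $0.531$ against a Hamming upper bound of about $0.714$, so the two bounds leave a gap that the results in these notes begin to narrow.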

As remarked in our discussion of Shannon’s theorem for the binary symmetric channel, if the Hamming bound could be attained, that would imply that we can correct *all* (as opposed to most) error patterns of weight $pn$ with rate approaching $1 - h(p)$. Recall that there are perfect codes (such as the Hamming codes) that meet the Hamming bound. However, these codes have very small distance ($3$ in the case of Hamming codes). A generalization of Hamming codes called binary BCH codes (the acronym stands for the codes’ independent inventors Hocquenghem (1959) and Bose and Ray-Chaudhuri (1960)) shows that when $d$ is a fixed constant and the block length $n$ is allowed to grow, the Hamming bound is again tight up to lower order terms. However, we will improve upon the Hamming bound and show that its asymptotic form (for any relative distance bounded away from zero) cannot be attained for any fixed alphabet. The proof method has some connections to *list decoding*, which will be an important focus topic later in the course.

**1. Singleton bound**

We begin with the simplest of the bounds:

**Theorem 1** Let $C$ be a code of block length $n$ and minimum distance $d$ over an alphabet of size $q$. Then $|C| \le q^{n-d+1}$.

*Proof:* Suppose not, i.e., $|C| > q^{n-d+1}$. Since there are only $q^{n-d+1}$ possible projections onto the first $n-d+1$ coordinates, by the pigeonhole principle there must be two distinct codewords $c_1, c_2 \in C$ that agree on the first $n-d+1$ locations. But then $\Delta(c_1, c_2) \le n - (n-d+1) = d-1 < d$, contradicting the hypothesis that $C$ has minimum distance $d$. $\Box$

This gives an alphabet-independent asymptotic upper bound on the rate as a function of relative distance.

**Corollary 2** The rate $R$ and relative distance $\delta$ of a code satisfy $R \le 1 - \delta + o(1)$.
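Since the Singleton bound is alphabet-independent while the Hamming bound depends on $q$, it is instructive to see which is the stronger (smaller) upper bound for a given alphabet. The following sketch (function names and the sample values of $\delta$ and $q$ are illustrative) compares the two asymptotic bounds:

```python
from math import log

def h_q(x, q):
    """q-ary entropy function h_q(x), with h_q(0) = 0."""
    if x == 0:
        return 0.0
    return x * log(q - 1, q) - x * log(x, q) - (1 - x) * log(1 - x, q)

def singleton_rate(delta):
    return 1 - delta              # alphabet-independent upper bound

def hamming_rate(delta, q):
    return 1 - h_q(delta / 2, q)  # upper bound depending on q

delta = 0.1
# For small alphabets the Hamming bound is stronger; as q grows,
# h_q(delta/2) shrinks and the Singleton bound eventually takes over.
print(hamming_rate(delta, 2), singleton_rate(delta))   # binary: Hamming stronger
print(hamming_rate(delta, 64), singleton_rate(delta))  # q = 64: Singleton stronger
```

This is consistent with the corollary being most useful over large alphabets, where the entropy term in the Hamming bound becomes negligible.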