
Channel Capacity and Channel Coding


Abstract

Chapter 5 continues the discussion of Shannon's information theory as regards channel capacity and channel coding. Simple channel models are introduced and their capacity is computed. It is shown that channel coding needs redundancy, and the fundamental theorem of channel coding is stated. Its proof relies on Shannon's random coding, the principle of which is stated and illustrated. A geometrical picture of a code as a sparse set of points within the high-dimensional Hamming space which represents sequences is proposed. The practical implementation of channel coding uses error-correcting codes, which are briefly defined and illustrated by describing some code families: recursive convolutional codes, turbocodes and low-density parity-check codes. The last two families can be interpreted as approximately implementing random coding by deterministic means. Contrary to true random coding, their decoding is of moderate complexity, and both achieve performance close to the theoretical limit. How their decoding is implemented is briefly described. The first and most important step of decoding enables regenerating an encoded sequence. Finally, it is stated that the constraints which endow error-correcting codes with resilience to errors can be of any kind (e.g., physical-chemical or linguistic), and not necessarily mathematical as in communication engineering.
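
As a concrete illustration of how the capacity of simple channel models is computed (a sketch of the standard textbook formulas, not of the chapter's own derivations), the following Python snippet evaluates the capacity of the binary symmetric channel (BSC), C = 1 - H2(p), and of the binary erasure channel (BEC), C = 1 - e, in bits per channel use:

    import math

    def binary_entropy(p):
        """Binary entropy function H2(p) in bits, with H2(0) = H2(1) = 0."""
        if p <= 0.0 or p >= 1.0:
            return 0.0
        return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

    def bsc_capacity(p):
        """Capacity of the binary symmetric channel with error probability p:
        C = 1 - H2(p) bits per channel use."""
        return 1.0 - binary_entropy(p)

    def bec_capacity(e):
        """Capacity of the binary erasure channel with erasure probability e:
        C = 1 - e bits per channel use."""
        return 1.0 - e

    for p in (0.0, 0.01, 0.11, 0.5):
        print(f"BSC(p={p}): C = {bsc_capacity(p):.4f} bit/channel use")
    print(f"BEC(e=0.3): C = {bec_capacity(0.3):.4f} bit/channel use")

A code of rate R can be reliable only if R < C; for instance, the BSC with p = 0.11 has C close to 0.5, so at most about one information bit per two channel uses can be conveyed reliably.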


Notes

  1.

    A polynomial of degree μ is said to be primitive if taking the successive powers of one of its roots generates all the \(2^{\mu} - 1\) non-zero elements of the μ-th extension of the binary field.
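
    This definition lends itself to a direct brute-force test. The sketch below (an illustration of the stated definition; the function name is ours, not the chapter's) represents the elements of the field extension as μ-bit integers, repeatedly multiplies by a root α of the polynomial, and declares the polynomial primitive exactly when the powers of α return to 1 only after visiting all \(2^{\mu} - 1\) non-zero elements:

        def is_primitive(poly, mu):
            """Test whether a binary polynomial of degree mu, given as a bit
            mask (e.g. x^3 + x + 1 = 0b1011), is primitive: the powers of a
            root must run through all 2**mu - 1 non-zero field elements."""
            order = (1 << mu) - 1      # size of the multiplicative group
            state = 1                  # alpha**0
            for k in range(1, order + 1):
                state <<= 1            # multiply by the root alpha
                if (state >> mu) & 1:  # degree reached mu: reduce modulo poly
                    state ^= poly
                if state == 1:         # back to 1 after k multiplications
                    return k == order  # primitive iff the cycle is full-length
            return False

        print(is_primitive(0b1011, 3))   # x^3 + x + 1: True (primitive)
        print(is_primitive(0b11111, 4))  # x^4 + x^3 + x^2 + x + 1: False
                                         # (irreducible, but its roots have
                                         # order 5 instead of 15)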

  2.

    Or more, but then specific difficulties arise; two-component codes suffice to obtain results close enough to the theoretical limit for most practical purposes.

  3.

    Failing to do so would increase the magnitude of the computed a posteriori real value without improving its reliability; remember that the magnitude of a log-likelihood ratio is intended to measure the reliability of the corresponding bit.
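
    As a minimal numerical illustration (assuming the usual convention LLR = log P(bit = 0) / P(bit = 1); the function names are ours, not the chapter's), the sign of a log-likelihood ratio carries the binary decision while its magnitude measures reliability, and the subtraction mentioned above keeps only the information a decoding step has actually added:

        import math

        def llr_to_prob0(llr):
            """P(bit = 0) implied by llr = log(P(0) / P(1)); |llr| = 2 gives
            P about 0.88, |llr| = 6 gives P about 0.998: the larger the
            magnitude, the more reliable the bit."""
            return 1.0 / (1.0 + math.exp(-llr))

        def extrinsic_llr(a_posteriori, decoder_input):
            """Remove the part of the a posteriori LLR that was already
            present at the decoder input; only this extrinsic part should be
            passed on, since feeding back the full value would inflate the
            magnitude without making the bit any more reliable."""
            return a_posteriori - decoder_input

        print(llr_to_prob0(2.0))        # ~0.881
        print(extrinsic_llr(3.5, 1.2))  # 2.3: the newly gained information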



Copyright information

© 2014 Springer Science+Business Media Dordrecht


Cite this chapter

Battail, G. (2014). Channel Capacity and Channel Coding. In: Information and Life. Springer, Dordrecht. https://doi.org/10.1007/978-94-007-7040-9_5
