Channel Capacity and Channel Coding

Abstract

Chapter 5 continues the discussion of Shannon’s information theory with regard to channel capacity and channel coding. Simple channel models are introduced and their capacity is computed. It is shown that channel coding requires redundancy, and the fundamental theorem of channel coding is stated. Its proof relies on Shannon’s random coding, the principle of which is stated and illustrated. A geometrical picture is proposed of a code as a sparse set of points within the high-dimensional Hamming space that represents sequences. The practical implementation of channel coding uses error-correcting codes, which are briefly defined and illustrated by describing some code families: recursive convolutional codes, turbo codes, and low-density parity-check codes. The last two families can be interpreted as approximately implementing random coding by deterministic means. Unlike true random coding, their decoding is of moderate complexity, and both achieve performance close to the theoretical limit. How their decoding is implemented is briefly described. The first and most important step of decoding regenerates the encoded sequence. Finally, it is stated that the constraints which endow error-correcting codes with resilience to errors need not be mathematical, as in communication engineering: they can be of any kind, e.g., physical-chemical or linguistic.
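
The capacity computation mentioned above can be made concrete for the simplest such model, the binary symmetric channel. The following minimal Python sketch (an illustration, not code from the chapter) evaluates Shannon's formula C = 1 - H2(p), where H2 is the binary entropy function and p is the channel's crossover (error) probability:

    import math

    def binary_entropy(p: float) -> float:
        """Binary entropy H2(p) in bits, with H2(0) = H2(1) = 0 by convention."""
        if p <= 0.0 or p >= 1.0:
            return 0.0
        return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

    def bsc_capacity(p: float) -> float:
        """Capacity of a binary symmetric channel, C = 1 - H2(p), in bits per channel use."""
        return 1.0 - binary_entropy(p)

    # A noiseless channel carries 1 bit per use; at p = 0.5 nothing gets through,
    # and a crossover probability near 0.11 already halves the capacity.
    print(bsc_capacity(0.0))   # 1.0
    print(bsc_capacity(0.11))  # ~0.5
    print(bsc_capacity(0.5))   # 0.0

The fundamental theorem asserts that any rate below C is achievable with arbitrarily small error probability, which is why redundancy is unavoidable on a noisy channel: the code rate R must satisfy R < C < 1.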

Keywords

LDPC Code; Probability; Binary Symmetric Channel; Source; Capacity
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.

Copyright information

© Springer Science+Business Media Dordrecht 2014

Authors and Affiliations

  1. E.N.S.T., Paris, France (retired); Chabeuil, France
