In Shannon’s classical model of transmitting a message over a noisy channel we have the following situation:

There are two parties, called the sender and the receiver, who can communicate via a channel. In the simplest case the sender puts input letters into the channel and the receiver gets output letters. Usually the channel is noisy, i.e., the channel output is a random variable whose distribution is governed by the input letters. This model can be extended in several ways: channels with passive feedback, for example, give the output letters back to the sender. Multiuser channels such as multiple access channels or broadcast channels (which will not be considered in this paper) have several senders or receivers that want to communicate simultaneously. Common to all these models of transmission is the task that sender and receiver have to perform: both share a common message set M, and the sender is given a message i ∈ M. He has to encode the message (i.e., transform it into a sequence of input letters for the channel) in such a way that the receiver can decode the sequence of output letters and decide, with a small probability of error, what the message i was. The procedures for encoding and decoding are called a code for the channel, and the number of times the channel is used to transmit one message is called the blocklength of the code.
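The transmission task above can be sketched in code. The following is a minimal illustration only, not part of the original model's formalism: it assumes a binary symmetric channel as the noisy channel and a simple repetition code (both illustrative choices), with majority-vote decoding; the blocklength is the number of channel uses per message.

```python
import random

def bsc(bits, p, rng):
    """Binary symmetric channel: flips each input letter with probability p."""
    return [b ^ (rng.random() < p) for b in bits]

def encode(message_bit, n):
    """Repetition code of blocklength n: transmit the message bit n times."""
    return [message_bit] * n

def decode(received):
    """Majority-vote decoding of the received output letters."""
    return 1 if sum(received) > len(received) / 2 else 0

if __name__ == "__main__":
    rng = random.Random(0)
    n, p, trials = 101, 0.2, 1000   # blocklength, crossover prob., experiments
    errors = sum(
        decode(bsc(encode(msg := rng.randrange(2), n), p, rng)) != msg
        for _ in range(trials)
    )
    print("empirical error probability:", errors / trials)
```

Increasing the blocklength n drives the error probability down, at the cost of a lower rate; the trade-off between rate and error probability is exactly what coding theorems for such channels quantify.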



Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • C. Kleinewächter
