
Synergetics, pp. 41–67

Information

How to Be Unbiased
  • Hermann Haken
Part of the Springer Series in Synergetics book series (SSSYN, volume 1)

Abstract

In this chapter we want to show how, by a new interpretation of probability theory, we gain insight into a seemingly quite different discipline, namely information theory. Consider again a sequence of coin tosses with outcomes 0 and 1. Now interpret 0 and 1 as the dash and dot of the Morse alphabet. We all know that by means of the Morse alphabet we can transmit messages, so that we may ascribe a certain meaning to a certain sequence of symbols. In other words, a certain sequence of symbols carries information. In information theory we try to find a measure for the amount of information.
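The measure the chapter builds toward is Shannon's information entropy, H = -Σ pᵢ log₂ pᵢ, the average number of bits per symbol of a sequence. A minimal sketch in Python (the function name and the example sequences are illustrative, not taken from the text):

```python
from collections import Counter
from math import log2

def shannon_entropy(symbols):
    """Average information per symbol, H = -sum_i p_i * log2(p_i), in bits.

    Probabilities p_i are estimated as relative frequencies of the
    symbols actually occurring in the sequence.
    """
    counts = Counter(symbols)
    total = len(symbols)
    return sum(-(n / total) * log2(n / total) for n in counts.values())

# A fair coin (0 and 1 equally likely) carries one bit per toss:
print(shannon_entropy("01100101"))  # → 1.0
# A perfectly predictable sequence carries no information:
print(shannon_entropy("00000000"))  # → 0.0
```

For equally likely dashes and dots the measure is maximal (one bit per symbol); any bias toward one symbol makes the sequence more predictable and lowers H, which is exactly the sense in which the measure quantifies information.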

Keywords

Partition Function, Internal Energy, Information Gain, Information Entropy, Entropy Density


References

Some Basic Ideas

  1. Monographs on this subject are: L. Brillouin: Science and Information Theory (Academic Press, New York-London 1962)
  2. L. Brillouin: Scientific Uncertainty and Information (Academic Press, New York-London 1964)
  3. Information theory was founded by C. E. Shannon: A mathematical theory of communication. Bell System Techn. J. 27, 370–423 (1948)
  4. C. E. Shannon: A mathematical theory of communication. Bell System Techn. J. 27, 623–656 (1948)
  5. C. E. Shannon: Bell System Techn. J. 30, 50 (1951)
  6. C. E. Shannon, W. Weaver: The Mathematical Theory of Communication (Univ. of Illinois Press, Urbana 1949)
  7. Some concepts related to information and information gain (H-theorem!) were introduced by L. Boltzmann: Vorlesungen über Gastheorie, 2 Vols. (Leipzig 1896, 1898)

Information Gain: An Illustrative Derivation

  1. For a detailed treatment and definition see S. Kullback: Ann. Math. Statist. 22, 79 (1951)
  2. S. Kullback: Information Theory and Statistics (Wiley, New York 1951)

Here we follow our lecture notes.

Information Entropy and Constraints

  1. We follow in this chapter essentially E. T. Jaynes: Phys. Rev. 106, 4 (1957)
  2. E. T. Jaynes: Phys. Rev. 106, 620 (1957)
  3. E. T. Jaynes: Phys. Rev. 108, 171 (1957)
  4. E. T. Jaynes: In Delaware Seminar in the Foundations of Physics (Springer, Berlin-Heidelberg-New York 1967)
  5. Early ideas on this subject are presented in W. Elsasser: Phys. Rev. 52, 987 (1937)
  6. W. Elsasser: Z. Phys. 171, 66 (1968)

An Example from Physics: Thermodynamics

  1. The approach of this chapter is conceptually based on Jaynes' papers, l.c., Section 3.3. For textbooks giving other approaches to thermodynamics see L. D. Landau, E. M. Lifshitz: Course of Theoretical Physics, Vol. 5: Statistical Physics (Pergamon Press, London-Paris 1952)
  2. R. Becker: Theory of Heat (Springer, Berlin-Heidelberg-New York 1967)
  3. A. Münster: Statistical Thermodynamics, Vol. 1 (Springer, Berlin-Heidelberg-New York 1969)
  4. H. B. Callen: Thermodynamics (Wiley, New York 1960)
  5. P. T. Landsberg: Thermodynamics (Wiley, New York 1961)
  6. R. Kubo: Thermodynamics (North Holland, Amsterdam 1968)
  7. W. Brenig: Statistische Theorie der Wärme (Springer, Berlin-Heidelberg-New York 1975)
  8. W. Weidlich: Thermodynamik und statistische Mechanik (Akademische Verlagsgesellschaft, Wiesbaden 1976)

An Approach to Irreversible Thermodynamics

  1. An interesting and promising link between irreversible thermodynamics and network theory has been established by A. Katchalsky, P. F. Curran: Nonequilibrium Thermodynamics in Biophysics (Harvard University Press, Cambridge, Mass. 1967)
  2. For a recent representation, including also more current results, see J. Schnakenberg: Thermodynamic Network Analysis of Biological Systems, Universitext (Springer, Berlin-Heidelberg-New York 1977)
  3. For detailed texts on irreversible thermodynamics see I. Prigogine: Introduction to Thermodynamics of Irreversible Processes (Thomas, New York 1955)
  4. I. Prigogine: Non-equilibrium Statistical Mechanics (Interscience, New York 1962)
  5. S. R. De Groot, P. Mazur: Non-equilibrium Thermodynamics (North Holland, Amsterdam 1962)
  6. R. Haase: Thermodynamics of Irreversible Processes (Addison-Wesley, Reading, Mass. 1969)
  7. D. N. Zubarev: Non-equilibrium Statistical Thermodynamics (Consultants Bureau, New York-London 1974)

Here, we present a hitherto unpublished treatment by the present author.

Entropy—Curse of Statistical Mechanics?

  1. For the subjectivistic-objectivistic problem see, for example, E. T. Jaynes: Information Theory. In Statistical Physics, Brandeis Lectures, Vol. 3 (W. A. Benjamin, New York 1962)
  2. Coarse graining is discussed by A. Münster: In Encyclopedia of Physics, ed. by S. Flügge, Vol. III/2: Principles of Thermodynamics and Statistics (Springer, Berlin-Göttingen-Heidelberg 1959). The concept of entropy is discussed in all textbooks on thermodynamics, cf. the references to "An Example from Physics: Thermodynamics" above.

Copyright information

© Springer-Verlag Berlin Heidelberg 1978

Authors and Affiliations

  • Hermann Haken, Institut für Theoretische Physik, Universität Stuttgart, Stuttgart 80, Fed. Rep. of Germany
