International Journal of Theoretical Physics, Volume 21, Issue 12, pp 905–940

The thermodynamics of computation—a review

  • Charles H. Bennett
Physical Models of Computation


Computers may be thought of as engines for transforming free energy into waste heat and mathematical work. Existing electronic computers dissipate energy vastly in excess of the mean thermal energy kT, for purposes such as maintaining volatile storage devices in a bistable condition, synchronizing and standardizing signals, and maximizing switching speed. On the other hand, recent models due to Fredkin and Toffoli show that in principle a computer could compute at finite speed with zero energy dissipation and zero error. In these models, a simple assemblage of simple but idealized mechanical parts (e.g., hard spheres and flat plates) determines a ballistic trajectory isomorphic with the desired computation, a trajectory therefore not foreseen in detail by the builder of the computer. In a classical or semiclassical setting, ballistic models are unrealistic because they require the parts to be assembled with perfect precision and isolated from thermal noise, which would eventually randomize the trajectory and lead to errors. Possibly quantum effects could be exploited to prevent this undesired equipartition of the kinetic energy. Another family of models may be called Brownian computers, because they allow thermal noise to influence the trajectory so strongly that it becomes a random walk through the entire accessible (low-potential-energy) portion of the computer's configuration space. In these computers, a simple assemblage of simple parts determines a low-energy labyrinth isomorphic to the desired computation, through which the system executes its random walk, with a slight drift velocity due to a weak driving force in the direction of forward computation. In return for their greater realism, Brownian models are more dissipative than ballistic ones: the drift velocity is proportional to the driving force, and hence the energy dissipated approaches zero only in the limit of zero speed.
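The dissipation–speed tradeoff described above can be illustrated with a toy simulation (an illustration of the idea, not code or parameters from the paper): a one-dimensional biased random walk standing in for a Brownian computer, where a driving force tilts the energy landscape by eps (in units of kT) per computation step. The drift velocity then goes as tanh(eps/2), i.e. linearly in the force for small eps, while each completed step dissipates eps — so speed and dissipation vanish together.

```python
import math
import random

def brownian_drift(eps_over_kT, n_attempts=200_000, seed=0):
    """Toy Brownian computer as a 1-D biased random walk.

    Each attempted step goes forward with probability
    p = e^x / (e^x + e^-x), where x = eps/(2 kT) reflects a driving
    force that lowers the free energy by eps per forward step.
    Returns the measured drift: net forward steps per attempt.
    """
    rng = random.Random(seed)
    x = eps_over_kT / 2.0
    p_fwd = math.exp(x) / (math.exp(x) + math.exp(-x))
    pos = sum(1 if rng.random() < p_fwd else -1 for _ in range(n_attempts))
    return pos / n_attempts

for eps in (2.0, 0.5, 0.1):
    drift = brownian_drift(eps)
    # Theory: drift = tanh(eps/2) ~ eps/2 for weak driving, while the
    # dissipation per completed step is just eps (in kT) -- so both
    # approach zero together, and reversibility holds only at zero speed.
    print(f"eps = {eps:4.1f} kT  drift ≈ {drift:+.3f}  (theory: {math.tanh(eps / 2):.3f})")
```

The simulation reproduces the linear-response regime the abstract invokes: halving the driving force roughly halves both the drift velocity and the energy dissipated per step.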
In this regard Brownian models resemble the traditional apparatus of thermodynamic thought experiments, where reversibility is also typically only attainable in the limit of zero speed. The enzymatic apparatus of DNA replication, transcription, and translation appears to be nature's closest approach to a Brownian computer, dissipating 20–100 kT per step. Both the ballistic and Brownian computers require a change in programming style: computations must be rendered logically reversible, so that no machine state has more than one logical predecessor. In a ballistic computer, the merging of two trajectories clearly cannot be brought about by purely conservative forces; in a Brownian computer, any extensive merging of computation paths would cause the machine to spend most of its time bogged down in extraneous predecessors of states on the intended path, unless an extra driving force of kT ln 2 were applied (and dissipated) at each merge point. The mathematical means of rendering a computation logically reversible (e.g., creation and annihilation of a history file) will be discussed. The old Maxwell's demon problem is discussed in the light of the relation between logical and thermodynamic reversibility: the essential irreversible step, which prevents the demon from breaking the second law, is not the making of a measurement (which in principle can be done reversibly) but rather the logically irreversible act of erasing the record of one measurement to make room for the next. Converse to the rule that logically irreversible operations on data require an entropy increase elsewhere in the computer is the fact that a tape full of zeros, or one containing some computable pseudorandom sequence such as pi, has fuel value and can be made to do useful thermodynamic work as it randomizes itself. A tape containing an algorithmically random sequence lacks this ability.
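The history-file construction mentioned above can be sketched in a few lines of Python (an illustration in the spirit of the construction, not code from the paper; the `step` function and Collatz example are chosen here for concreteness): an irreversible step function is made logically reversible by logging every intermediate state, copying out the answer, and then uncomputing the log, so that every combined (state, history) configuration has exactly one predecessor.

```python
def run_reversibly(step, x0, n_steps):
    """History-file sketch of logically reversible computation.

    Phase 1: compute forward, appending each state to a history tape,
             so the combined (state, history) map is one-to-one even
             when `step` itself merges computation paths.
    Phase 2: copy the answer (reversible, since the target is blank).
    Phase 3: uncompute -- retrace the history backward, restoring the
             tape to blanks, leaving only the input and the answer.
    """
    history = []
    state = x0
    for _ in range(n_steps):      # phase 1: forward, saving history
        history.append(state)
        state = step(state)
    answer = state                # phase 2: copy the output
    while history:                # phase 3: pop the log, step by step
        state = history.pop()
    assert state == x0 and not history   # tape is blank again
    return answer

# A deliberately irreversible step: the Collatz map, which merges
# trajectories (both 3 and 20 step to 10, so 10 has two predecessors).
collatz = lambda n: n // 2 if n % 2 == 0 else 3 * n + 1
print(run_reversibly(collatz, 7, 16))   # -> 1
```

In a physical reversible machine, phase 3 is the time-reverse of phase 1; the point of the trick is that the history need never be erased irreversibly, avoiding the kT ln 2 cost per merged bit.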


Keywords (machine-generated, not supplied by the authors): drift velocity, thermal noise, Brownian model, history file, bistable condition




References

  1. Benioff, P. (1982). To appear in Journal of Statistical Physics.
  2. Bennett, C. H. (1973). "Logical Reversibility of Computation," IBM Journal of Research and Development, 17, 525–532.
  3. Bennett, C. H. (1975). "Efficient Estimation of Free Energy Differences from Monte Carlo Data," Journal of Computational Physics, 22, 245–268.
  4. Bennett, C. H. (1979). "Dissipation-Error Tradeoff in Proofreading," BioSystems, 11, 85–90.
  5. Chaitin, G. (1975a). "Randomness and Mathematical Proof," Scientific American, 232, No. 5, 46–52.
  6. Chaitin, G. (1975b). "A Theory of Program Size Formally Identical to Information Theory," Journal of the Association for Computing Machinery, 22, 329–340.
  7. Chaitin, G. (1977). "Algorithmic Information Theory," IBM Journal of Research and Development, 21, 350–359, 496.
  8. Brillouin, L. (1956). Science and Information Theory (2nd edition, 1962), pp. 261–264, 194–196. Academic Press, London.
  9. Fredkin, E., and Toffoli, T. (1982). "Conservative Logic," MIT Report MIT/LCS/TM-197; International Journal of Theoretical Physics, 21, 219.
  10. Gacs, P. (1974). "On the Symmetry of Algorithmic Information," Soviet Mathematics Doklady, 15, 1477.
  11. Hopfield, J. J. (1974). Proceedings of the National Academy of Sciences USA, 71, 4135–4139.
  12. Keyes, R. W., and Landauer, R. (1970). IBM Journal of Research and Development, 14, 152.
  13. Landauer, R. (1961). "Irreversibility and Heat Generation in the Computing Process," IBM Journal of Research and Development, 5, 183–191.
  14. Levin, L. A. (1976). "Various Measures of Complexity for Finite Objects (Axiomatic Description)," Soviet Mathematics Doklady, 17, 522–526.
  15. Likharev, K. (1982). "Classical and Quantum Limitations on Energy Consumption in Computation," International Journal of Theoretical Physics, 21, 311.
  16. McCarthy, J. (1956). "The Inversion of Functions Defined by Turing Machines," in Automata Studies, C. E. Shannon and J. McCarthy, eds. Princeton University Press, Princeton, New Jersey.
  17. Ninio, J. (1975). Biochimie, 57, 587–595.
  18. Reif, J. H. (1979). "Complexity of the Mover's Problem and Generalizations," in Proceedings of the 20th IEEE Symposium on Foundations of Computer Science, San Juan, Puerto Rico, pp. 421–427.
  19. Szilard, L. (1929). Zeitschrift für Physik, 53, 840–856.
  20. Toffoli, T. (1980). "Reversible Computing," MIT Report MIT/LCS/TM-151.
  21. Toffoli, T. (1981). "Bicontinuous Extensions of Invertible Combinatorial Functions," Mathematical Systems Theory, 14, 13–23.
  22. von Neumann, J. (1966). Fourth University of Illinois lecture, in Theory of Self-Reproducing Automata, A. W. Burks, ed., p. 66. University of Illinois Press, Urbana.
  23. Watson, J. D. (1970). Molecular Biology of the Gene (2nd edition). W. A. Benjamin, New York.
  24. Zvonkin, A. K., and Levin, L. A. (1970). "The Complexity of Finite Objects and the Development of the Concepts of Information and Randomness by Means of the Theory of Algorithms," Russian Mathematical Surveys, 25, 83–124.

Copyright information

© Plenum Publishing Corporation 1982

Authors and Affiliations

  • Charles H. Bennett
    IBM Watson Research Center, Yorktown Heights
