The thermodynamics of computation—a review
- Charles H. Bennett
Computers may be thought of as engines for transforming free energy into waste heat and mathematical work. Existing electronic computers dissipate energy vastly in excess of the mean thermal energy kT, for purposes such as maintaining volatile storage devices in a bistable condition, synchronizing and standardizing signals, and maximizing switching speed. On the other hand, recent models due to Fredkin and Toffoli show that in principle a computer could compute at finite speed with zero energy dissipation and zero error. In these models, a simple assemblage of simple but idealized mechanical parts (e.g., hard spheres and flat plates) determines a ballistic trajectory isomorphic with the desired computation, a trajectory therefore not foreseen in detail by the builder of the computer. In a classical or semiclassical setting, ballistic models are unrealistic because they require the parts to be assembled with perfect precision and isolated from thermal noise, which would eventually randomize the trajectory and lead to errors. Possibly quantum effects could be exploited to prevent this undesired equipartition of the kinetic energy. Another family of models may be called Brownian computers, because they allow thermal noise to influence the trajectory so strongly that it becomes a random walk through the entire accessible (low-potential-energy) portion of the computer's configuration space. In these computers, a simple assemblage of simple parts determines a low-energy labyrinth isomorphic to the desired computation, through which the system executes its random walk, with a slight drift velocity due to a weak driving force in the direction of forward computation. In return for their greater realism, Brownian models are more dissipative than ballistic ones: the drift velocity is proportional to the driving force, and hence the energy dissipated approaches zero only in the limit of zero speed.
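The Brownian picture above can be illustrated numerically. The following is a minimal sketch, not the paper's formal model: the computation is reduced to a one-dimensional random walk in which each step goes forward with a Boltzmann-weighted probability set by a driving force of ε per step (in units of kT). For small ε the measured drift velocity is linear in the force, which is exactly why dissipation vanishes only in the zero-speed limit. The function name `drift_velocity` and the specific step counts are illustrative choices, not from the source.

```python
import math
import random

def drift_velocity(eps_over_kT, steps=200_000, seed=1):
    """Mean forward progress per step of a toy Brownian computer:
    a 1-D random walk stepping forward with Boltzmann-weighted
    probability p = 1 / (1 + exp(-eps/kT)), else backward."""
    rng = random.Random(seed)
    p_forward = 1.0 / (1.0 + math.exp(-eps_over_kT))
    position = 0
    for _ in range(steps):
        position += 1 if rng.random() < p_forward else -1
    return position / steps

# For a detailed-balance walk the exact drift is tanh(eps/2kT),
# which is approximately eps/2kT when the driving force is weak:
# the speed is proportional to the force, and the energy dissipated
# per net forward step (eps / drift) stays bounded away from zero
# at any finite speed.
for eps in (0.05, 0.1, 0.2):
    v = drift_velocity(eps)
    print(f"eps = {eps:4.2f} kT   drift ≈ {v:.4f}   eps/2 = {eps / 2:.4f}")
```

Doubling the driving force roughly doubles the drift in this regime, so halving the run time costs proportionally more dissipation per step, mirroring the zero-speed reversibility limit described above.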
In this regard Brownian models resemble the traditional apparatus of thermodynamic thought experiments, where reversibility is also typically only attainable in the limit of zero speed. The enzymatic apparatus of DNA replication, transcription, and translation appears to be nature's closest approach to a Brownian computer, dissipating 20–100 kT per step. Both the ballistic and Brownian computers require a change in programming style: computations must be rendered logically reversible, so that no machine state has more than one logical predecessor. In a ballistic computer, the merging of two trajectories clearly cannot be brought about by purely conservative forces; in a Brownian computer, any extensive merging of computation paths would cause the machine to spend most of its time bogged down in extraneous predecessors of states on the intended path, unless an extra driving force of kT ln 2 were applied (and dissipated) at each merge point. The mathematical means of rendering a computation logically reversible (e.g., creation and annihilation of a history file) will be discussed. The old Maxwell's demon problem is discussed in the light of the relation between logical and thermodynamic reversibility: the essential irreversible step, which prevents the demon from breaking the second law, is not the making of a measurement (which in principle can be done reversibly) but rather the logically irreversible act of erasing the record of one measurement to make room for the next. Converse to the rule that logically irreversible operations on data require an entropy increase elsewhere in the computer is the fact that a tape full of zeros, or one containing some computable pseudorandom sequence such as pi, has fuel value and can be made to do useful thermodynamic work as it randomizes itself. A tape containing an algorithmically random sequence lacks this ability.
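The history-file idea mentioned above can be shown in miniature. The sketch below is a hypothetical toy interface, not the paper's formal construction: an otherwise irreversible step (integer halving, which merges the predecessors 2x and 2x+1) is made logically reversible by saving the discarded remainder in a history record, so that every reached state has exactly one predecessor and the whole run can be undone. The names `reversible_run` and `reverse_run` are illustrative.

```python
def reversible_run(program, state):
    """Run a list of (forward, backward) step pairs, saving one history
    record per step so that every step is uniquely invertible."""
    history = []
    for forward, _ in program:
        state, record = forward(state)
        history.append(record)  # without this record, distinct inputs would merge
    return state, history

def reverse_run(program, state, history):
    """Consume the history file in reverse to recover the initial state."""
    for (_, backward), record in zip(reversed(program), reversed(history)):
        state = backward(state, record)
    return state

# Example: x -> x // 2 is logically irreversible (2 and 3 both map to 1)
# unless the discarded remainder is kept as history.
halve = (lambda x: (x // 2, x % 2), lambda x, r: 2 * x + r)
program = [halve] * 4

final, hist = reversible_run(program, 13)
print(final, hist)                         # 0 [1, 0, 1, 1]
print(reverse_run(program, final, hist))   # 13
```

Note that the history file here is exactly the information whose erasure would cost kT ln 2 per bit: annihilating it by uncomputing (running the reverse pass) is thermodynamically free in principle, whereas simply resetting it is not.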
- Benioff, Paul (1982). To appear in Journal of Statistical Physics.
- Bennett, C. H. (1973). “Logical Reversibility of Computation,” IBM Journal of Research and Development, 17, 525–532.
- Bennett, C. H. (1975). “Efficient Estimation of Free Energy Differences from Monte Carlo Data,” Journal of Computational Physics, 22, 245–268.
- Bennett, C. H. (1979). “Dissipation-Error Tradeoff in Proofreading,” BioSystems, 11, 85–90.
- Chaitin, G. (1975a). “Randomness and Mathematical Proof,” Scientific American, 232, No. 5, 46–52.
- Chaitin, G. (1975b). “A Theory of Program Size Formally Identical to Information Theory,” Journal of the Association for Computing Machinery, 22, 329–340.
- Chaitin, G. (1977). “Algorithmic Information Theory,” IBM Journal of Research and Development, 21, 350–359, 496.
- Brillouin, L. (1956). Science and Information Theory (2nd edition, 1962), pp. 261–264, 194–196. Academic Press, London.
- Fredkin, Edward, and Toffoli, Tommaso (1982). “Conservative Logic,” MIT Report MIT/LCS/TM-197; International Journal of Theoretical Physics, 21, 219.
- Gacs, P. (1974). “On the Symmetry of Algorithmic Information,” Soviet Mathematics Doklady, 15, 1477.
- Hopfield, J. J. (1974). Proceedings of the National Academy of Sciences USA, 71, 4135–4139.
- Keyes, R. W., and Landauer, R. (1970). IBM Journal of Research and Development, 14, 152.
- Landauer, R. (1961). “Irreversibility and Heat Generation in the Computing Process,” IBM Journal of Research and Development, 5, 183–191.
- Levin, L. A. (1976). “Various Measures of Complexity for Finite Objects (Axiomatic Description),” Soviet Mathematics Doklady, 17, 522–526.
- Likharev, K. (1982). “Classical and Quantum Limitations on Energy Consumption in Computation,” International Journal of Theoretical Physics, 21, 311.
- McCarthy, John (1956). “The Inversion of Functions Defined by Turing Machines,” in Automata Studies, C. E. Shannon and J. McCarthy, eds. Princeton University Press, New Jersey.
- Ninio, J. (1975). Biochimie, 57, 587–595.
- Reif, John H. (1979). “Complexity of the Mover's Problem and Generalizations,” Proceedings of the 20th IEEE Symposium on Foundations of Computer Science, San Juan, Puerto Rico, pp. 421–427.
- Szilard, L. (1929). Zeitschrift für Physik, 53, 840–856.
- Toffoli, Tommaso (1980). “Reversible Computing,” MIT Report MIT/LCS/TM-151.
- Toffoli, Tommaso (1981). “Bicontinuous Extensions of Invertible Combinatorial Functions,” Mathematical Systems Theory, 14, 13–23.
- von Neumann, J. (1966). Fourth University of Illinois lecture, in Theory of Self-Reproducing Automata, A. W. Burks, ed., p. 66. University of Illinois Press, Urbana.
- Watson, J. D. (1970). Molecular Biology of the Gene (2nd edition). W. A. Benjamin, New York.
- Zvonkin, A. K., and Levin, L. A. (1970). “The Complexity of Finite Objects and the Development of the Concepts of Information and Randomness by Means of the Theory of Algorithms,” Russian Mathematical Surveys, 25, 83–124.
International Journal of Theoretical Physics, Volume 21, Issue 12, pp. 905–940
- Kluwer Academic Publishers-Plenum Publishers
- Author Affiliations
- 1. IBM Watson Research Center, Yorktown Heights, New York 10598