Foundations of Physics, Volume 44, Issue 8, pp 819–828

Quantum Computing’s Classical Problem, Classical Computing’s Quantum Problem



Tasked with the challenge of building better and better computers, quantum computing and classical computing face the same conundrum: the success of classical computing systems. Small quantum computing systems have been demonstrated, and intermediate-scale systems are on the horizon, capable of calculating numeric results or simulating physical systems far beyond what humans can do by hand. However, to be commercially viable, they must surpass what our wildly successful, highly advanced classical computers can already do. At the same time, those classical computers continue to advance, but those advances are now constrained by thermodynamics, and will soon be limited by the discrete nature of atomic matter and ultimately quantum effects. Technological advances benefit both quantum and classical machinery, altering the competitive landscape. Can we build quantum computing systems that out-compute classical systems capable of some \(10^{30}\) logic gates per month? This article will discuss the interplay in these competing and cooperating technological trends.


Keywords: Quantum computing · Moore’s Law

1 Introduction

Imagine, for a moment, that classical computers did not exist, and that quantum computers were being developed in a technological Garden of Eden, innocent of the taint of electronic or mechanical computation. What a fantastic future would await! On the horizon, machines that could solve specialized mathematical problems, search through large spaces of possible problem solutions without iterating over the entire space, and calculate numeric values describing (other) quantum systems [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]. Surely such an enticing set of capabilities would be enough to lure us out of our pre-technological paradise.

Instead, advances toward quantum computing machines are taking place in a world already filled with fantastic classical computing machines of almost unimaginable power. It has become trite to make comparisons of the computational power of a smart phone to that used to guide an Apollo moon mission, but it is no less true for that. The computational power of a modern-day supercomputer is nothing short of astonishing, and sets an extraordinarily high bar against which the utility of prospective commercial quantum computing systems will be measured.

In this article, we first discuss the market challenges faced by attempts to build a quantum computer, by comparing to the technical capabilities of existing and prospective supercomputers. After this somewhat pessimistic note, we turn to the challenges that classical computing faces in its continuing technological evolution. We end by presenting some historical perspective on these two issues, and reason for optimism on both fronts.

2 Quantum Computing’s Classical Problem

In order to be economically attractive, a quantum computer must be able to compute some output that cannot be calculated using classical computers and supercomputers. Classical supercomputers have two main advantages over quantum computers: computational capability measured in logic gates or floating point operations per second (FLOPS), and memory capacity, which together determine the type and scale of problems which can be successfully attacked. In the last quarter of a century, large-scale computations have come to be dominated by parallel and distributed computing systems, on which we will focus here [11, 12, 13, 14, 15].

In accordance with the Gustafson-Barsis Law [16], as the size of a supercomputer grows, the size of problems that it can tackle grows. Japan’s K (Kei) supercomputer is used for climate research, medical research, and computational chemistry. Weather modeling, protein folding, seismic simulations, fluid flow, galactic evolution simulations, and high-energy physics calculations are just a few of the common applications of such large-scale systems. What applications will a quantum computer have that go beyond such capabilities? To answer this question, let us first probe more deeply what such classical computing power means.
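The Gustafson-Barsis law can be made concrete with a few lines of arithmetic. The sketch below evaluates the scaled speedup \(S(N) = N + (1-N)s\), where \(s\) is the fraction of runtime spent in serial code on the parallel machine; the 1% serial fraction used here is purely illustrative, not a measured value for K.

```python
def gustafson_speedup(n_processors, serial_fraction):
    """Scaled speedup S(N) = N + (1 - N) * s  (Gustafson-Barsis law).

    s is the fraction of runtime spent in serial code, measured on the
    parallel machine itself; the parallel portion of the workload is
    assumed to grow with the machine, unlike in Amdahl's law.
    """
    n, s = n_processors, serial_fraction
    return n + (1 - n) * s

# Even with 1% serial work (an illustrative figure), K's 705,024 cores
# would yield a scaled speedup of roughly 698,000: bigger machines can
# profitably tackle proportionally bigger problems.
print(f"{gustafson_speedup(705024, 0.01):,.0f}")
```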

As an example of current capabilities, consider K, which from mid-2011 to mid-2012 was the most powerful system in the world, reaching a speed of over 10 petaFLOPS on the LINPACK linear algebra benchmark using 705,024 processing cores in 88,128 processors.

Each SPARC64 VIIIfx processor, fabricated using Fujitsu’s 45 nm process, consists of 760 million transistors. Thus, the processing elements and on-chip cache memory, interconnects, etc. across the entire system comprise approximately \(6.7\times 10^{13}\) transistors. The main memory, allocating a single transistor per DRAM cell, is some \(1.1\times 10^{16}\) transistors, nearly two hundred times as large.

Considering only the processing elements, the rate of floating point operations alone is some \(10^{16}\) 64-bit operations per second that can be deployed on solving a single classical problem. An alternative way of considering performance is the rate of single-bit classical gate operations. If we assume that an average of \(\sim 10\%\) of the transistors in the processor are switching in any given clock cycle, we have \(6.7\times 10^{12}\) switches per clock cycle. With a clock speed of 2.0 GHz, we have \(1.3\times 10^{22}\) switches, or gates, per second. One large-scale computation may take as much as a month on such a system. A month is approximately 2.5 million seconds, giving us a total computation capacity of approximately \(3\times 10^{28}\) logic gates per month.
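The arithmetic above can be checked directly. This sketch reproduces the figures in the text; the 10% switching activity factor is the assumption stated above, and the month of runtime is rounded to 2.5 million seconds as in the text.

```python
# Back-of-the-envelope capacity of the K computer, using the figures
# quoted in the text.
processors  = 88_128
transistors = processors * 760e6              # ~6.7e13 logic transistors
switching   = 0.10                            # assumed activity factor
clock_hz    = 2.0e9                           # 2.0 GHz clock
gates_per_s = transistors * switching * clock_hz   # ~1.3e22 switches/s
month_s     = 2.5e6                           # ~one month, in seconds
gates_month = gates_per_s * month_s           # ~3e28 gate ops per month
flops_month = 1e16 * month_s                  # ~2.5e22 FLOPs per month

print(f"{transistors:.1e} transistors, {gates_per_s:.1e} gates/s, "
      f"{gates_month:.1e} gates/month")
```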

The quest to build economically attractive quantum computers must focus on workloads consisting of problems that cannot be solved using such enormous, powerful systems. If we target deploying such a quantum computer in the year 2020, we must strive to exceed the classical systems that will be available then. Supercomputing system engineers are promising a system one hundred times as powerful by the year 2020, a capacity of more than \(10^{30}\) logic gates or \(10^{24}\) FLOPs per month.

Shor’s algorithm for factoring large numbers, the most famous quantum algorithm to date, occupies something of a sweet spot in problem difficulty: although execution time for the best-known classical algorithms does not grow truly exponentially with problem size (and is not an NP-complete problem), it does grow superpolynomially. Moreover, factoring is a problem for which approximate solutions simply will not do: a number is never “approximately” an integer factor of another integer. Only exact, and very difficult to calculate, solutions are acceptable.
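To see how superpolynomial growth bites in practice, the heuristic cost formula for the general number field sieve, the best-known classical factoring algorithm, can be evaluated directly. The sketch below drops the \(o(1)\) term and all constant factors, so the absolute values are only indicative; the point is the growth rate.

```python
import math

def gnfs_ops(bits):
    """Heuristic cost of the general number field sieve,
    exp((64/9)^(1/3) * (ln n)^(1/3) * (ln ln n)^(2/3)),
    with the o(1) term dropped: superpolynomial but subexponential."""
    ln_n = bits * math.log(2)
    c = (64 / 9) ** (1 / 3)
    return math.exp(c * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3))

# Doubling the key size from 1024 to 2048 bits multiplies the classical
# work by roughly a billion, whereas Shor's algorithm scales only
# polynomially in the bit length.
print(f"{gnfs_ops(1024):.1e}  {gnfs_ops(2048):.1e}")
```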

In contrast, many other computationally difficult problems, including most of the applications of supercomputers, admit practically useful approximate answers that can be calculated using a variety of heuristic algorithms. In many cases, especially simulations, the problems in question have no exact answer that can be checked (and hence are also not NP-complete problems), but increasing expenditure of computational effort results in a closer match to some objective reality that can be measured. This results in improvements of the accuracy of weather simulations, better airplane designs, and more digits of confirmation of the accuracy of physical theories.

This, then, is the crux of quantum computing’s classical problem: classical systems are already extraordinarily successful at generating results with enormous impact on science and technology, and through them on society as a whole. The search for useful and interesting quantum algorithms then becomes a hunt for problems (a) that do not admit approximate solutions and for which finding exact solutions becomes hard quickly as the problem size grows, or (b) whose required classical resources (logic gates, FLOPs or memory) grow more rapidly as the problem size grows than even supercomputers can handle, but can be solved in a quantum computer that is many orders of magnitude smaller in capacity and likely also far slower at executing logical gates. Shor fits the first category, and simulation of quantum systems the second. A few other algorithms have been developed that likewise appear to fit in this sweet spot, and detailed analysis of their needs has become a community priority.

In general, the ideal problem is one in which a classical solution grows exponentially in the number of operations (computational volume) as the problem size grows, but a quantum solution is polynomial. However, the common language of computer science theorists, the \(O(\cdot )\) notation, can hide large constant factors, and even seemingly innocent polynomials can grow quickly in practical terms.

Motivated in part by results that indicate that systems powerful enough to run Shor’s algorithm will be very large and disappointingly slow [17, 18, 19, 20], the US government agency IARPA is funding a program to examine in detail the possibilities of executing several other quantum algorithms:
  1. Ambainis et al.’s Boolean formula evaluation algorithm [21].

  2. Childs et al.’s binary welded tree algorithm, executed using a quantum random walk, with a tree height of 300 [22].

  3. Hallgren’s solution for the class number problem, for 124 decimal digits [4].

  4. Linear systems of equations, with an array dimension of \(3\times 10^8\), using an adapted form of Harrow et al.’s algorithm for sparse matrices [5].

  5. Magniez et al.’s triangle-finding algorithm [7].

  6. The unique shortest vector (USV) problem, for a problem dimension of 50 [23].

  7. Feynman’s original proposal for quantum computers discussed simulating quantum systems [2]; Whitfield et al. have developed a specific algorithm that can be used to find molecular ground-state energy [10]. This algorithm is proposed to be used for an iron-sulphur molecular complex with 208 basis functions, to find the result to 9 bits of accuracy.

Each of these was given a specific problem instance, intended to represent a post-classical result: the size and complexity were set at a level which is considered to be impractical for classical systems. In some cases, this involved modification of parts of the algorithm to include preparation of difficult-to-create input states. These algorithms are then matched with a physical technology and a quantum error correction mechanism.

Early results from the four teams working on this project suggest that all of these algorithms will be computationally challenging for quantum computers. However, the program (along with the work of other groups) is already paying dividends in the form of new approaches to single-gate decomposition that promise to reduce execution times by orders of magnitude [24, 25, 26]. However, further advances in architecture, compilation, and especially error correction, as well as the underlying physical technologies, are necessary to bring most of these applications within reach [27, 28].

Were it not for a critical technical factor, perhaps the story of quantum computing would end there, with a handful of specialized algorithms and a very high hurdle to clear in order to reach economic viability. However, just as quantum computing has a classical problem, classical computing has a quantum problem.

3 Classical Computing’s Quantum Problem

This problem could be more completely described as classical computing’s quantum and atomic problem, and can be summarized succinctly: semiconductor technology is a victim of its own success [29]. After conquering the basic theoretical and fabrication problems in the period from the late 1940s through the early 1960s, integrated circuits entered a Golden Age of tremendous growth in density, known as Moore’s Law [30]. During this period, economic incentives and technical genius combined to raise the number of transistors that could be fabricated in a given area. Capacities doubled approximately every two years, and operating speeds likewise increased, reaching billion-transistor chips running at gigahertz speeds by the early 2000s.

In the mid-2000s, we entered what I call the Late Moore’s Law Period. The rate of density improvement declined to doubling every 36 months or so, instead of every 24 months. More importantly, clock speeds stalled almost entirely. Typical CPU speeds have remained near 2 GHz since then, although improvements to cache and other peripheral functions have brought modest improvements in the average amount of work completed in each clock cycle. Instead, the focus has shifted to improving functionality on important, computationally-intensive tasks such as graphics. This is achieved by increasing the parallelism in the system at both the micro and macro levels, adding both general-purpose cores and graphics processing unit (GPU) cores.

What is driving this decline, and is it temporary or permanent? The end of Moore’s Law has been predicted repeatedly since its inception [31, 32, 33]. Reasons cited include concerns about the technical difficulty of short-wavelength photolithography, about the economic viability of increasingly expensive fab lines, and the difficulty of making defect-free devices of such densities. Such concerns are predicated on engineering and economic matters, which are of course critical, but are less absolute pronouncements than those based on fundamental physical characteristics. In the latter category, two issues loom large: the atomic (and ultimately quantum) nature of matter, and Landauer’s limit on the thermodynamic cost of irreversible (Boolean) logic [29, 34, 35].

Although I do not subscribe to a fully Kurzweilian world view, it is true that our technology for computing has produced a generally downward trend in memory and logic element size, extending back to the beginning of computation. Abacuses, developed three to four thousand years ago, must consist of elements large enough for human fingers to manipulate while storing a few bits of data. The Inca quipu, dating back some 500 years, stores information in the type and position of knots in a string, necessarily at the human scale. These two innovations are among the first storage technologies for numeric data.

By the mid-seventeenth century, devices with at least some support for mechanical computation emerged: the slide rule, and Blaise Pascal’s Pascaline, a slow and error-prone mechanical adder. In the early nineteenth century, the Jacquard loom used a precursor to punch cards to control the pattern woven into a textile. Charles Babbage used sets of mechanical rotors as data registers in his Difference Engine, which was capable of calculating values for a chosen polynomial [36]. Babbage’s projects were not completed in his lifetime, and although he is now revered, his work languished in some obscurity when modern computing began developing in the early- to mid-twentieth century.

Up to this point, little if any trend in the size of storage or computational elements is discernible, as all technologies were limited by the ability of human hands to create and manipulate them. Punch cards and paper tape shrank the volume of an individual bit, but even replacing relays with vacuum tubes, as happened in the 1940s with the introduction of ENIAC (the first electronic general-purpose digital computer), improved the speed but not so much the size of logic elements. The transistor was invented in 1947. Smaller and cooler than a vacuum tube, it was the basis for a computer constructed in Manchester in 1953. But it was not until the development of the integrated circuit in 1958 that density took a sharp upward turn, as we developed tools capable of making devices for us using non-mechanical means [37]. By 1965, progress was steady enough that Gordon Moore could analyze the economic imperatives behind chip manufacture and formulate his famous law, which ultimately moved from passive analytic tool to business dictum [30].

In 1971, the world’s first microprocessor, the Intel 4004, was introduced, consisting of some 2,300 transistors fabricated in a \(10~\mu m\) process. This represented a thousand-fold increase in chip capacity in thirteen years. It would take eighteen years to reach a million transistors in a logic chip (the Intel i486, in 1989), but less than sixteen more years to reach a billion transistors in a chip, with Intel shipping a 1.7 billion transistor microprocessor in 2005, fabricated in a 90 nm process.

As of late 2012, chips with a minimum feature size of 22 nm are in production.\(^{1}\) Engineering difficulties multiply as we push down toward 10 nm and even below [29, 38, 39], but we are not yet at fundamental limits. The International Technology Roadmap for Semiconductors currently projects that the minimum size of a structure in a chip will reach 5.8 nm in 2026, providing a challenging but in some ways reassuring path over the next 14 years.

With a path that leads forward into the mists of time, in terms of technology generations, why worry? Won’t we continue pressing forward for the indefinite future? After all, we have cleared every hurdle to date.

The problem is that our problems are becoming increasingly fundamental. The distances within a transistor can be measured in atoms; that 22 nm is only about 40 times the size of the silicon crystal lattice cell of 0.54 nm. The 5.8 nm of 2026 is only some eleven times the lattice cell. The exact practical limit has not yet been determined, but it is clear that we don’t know how to build transistors with parts less than one atom thick! Moore’s Law, in its current form governing increased density of transistors in a two-dimensional layout, must end within the next human generation.
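The ratios quoted above follow directly from the silicon lattice constant; this trivial check reproduces them.

```python
# Feature size measured in silicon crystal lattice cells (0.54 nm per
# cell), reproducing the ratios quoted in the text.
lattice_nm = 0.54
for feature_nm in (22.0, 5.8):
    cells = feature_nm / lattice_nm
    print(f"{feature_nm} nm ≈ {cells:.0f} lattice cells")
```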

In addition to the limits to general dimensions, the atomic nature of matter poses other problems. The behavior of semiconductors depends critically on small amounts of dopants added to the base material. Early models of dopant activity could treat the dopants as a uniform change to a region of the material, but we have now reached the point where the effect of individual dopant atoms matters, and their placement is critical but hard to control.

The actual quantum nature of current-carrying electrons has also begun to matter, as they can tunnel not only through barriers (a desired phenomenon but one we need to control) but also into and out of the wires. Resistance is also a problem at this scale.

The second problem, thermodynamics, has already manifested itself in systems, and is the key reason that clock speeds for individual CPUs have stalled at around 2 GHz. Landauer showed that erasing a bit, as is necessary in any logic operation that is not bijective, results in an unavoidable increase in entropy, manifesting itself as waste heat. This is a fundamental fact of the physical implementation of logic, independent of the medium in which it is built. The exact value for our current and near-future devices depends on factors such as the thermal conductivity of bulk silicon [29, 35, 38]. DeBenedictis has estimated that we can ultimately reach \(10^{22}\) FLOPS or \(2\times 10^{26}\) logic gates per second, or \(2.5\times 10^{28}\) FLOPs and \(5\times 10^{32}\) gates per month, before Landauer’s limit stops us [40]. The 2020 system predicted above is within a factor of one hundred of this limit, depending on the cost assigned to a floating point operation. Thus, we can effectively see the end of the evolution of classical, Boolean supercomputers coming.
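Landauer’s bound itself is easy to evaluate: erasing one bit costs at least \(k_B T \ln 2\). The sketch below turns that into a rough power floor for a machine running irreversible gates at DeBenedictis’s estimated rate; the 300 K temperature and the one-bit-erased-per-gate assumption are illustrative simplifications, and real devices dissipate orders of magnitude more than this floor.

```python
import math

# Landauer's bound: erasing one bit dissipates at least k_B * T * ln 2.
k_B = 1.380649e-23              # Boltzmann constant, J/K
T = 300.0                       # room temperature, K (assumed)
e_bit = k_B * T * math.log(2)   # ~2.9e-21 J per erased bit

# Rough power floor at 2e26 irreversible gate ops/s, assuming one bit
# erased per gate: about 0.6 MW at the theoretical minimum, before any
# real-device overhead (K itself drew over 10 MW at 1.3e22 gates/s).
gates_per_s = 2e26
power_floor_w = gates_per_s * e_bit
print(f"{e_bit:.2e} J/bit, floor ≈ {power_floor_w / 1e6:.1f} MW")
```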

4 Discussion

With such dire and seemingly fundamental problems, is there any reason for optimism, for either economically viable quantum computers or continuing advances in classical systems as Moore’s Law peters out? In fact, there is no reason to believe that progress in the overall field of computing systems will stop.

Landauer’s disciple Bennett, joined by Feynman, Toffoli, and Fredkin, devised reversible logic schemes in the 1970s and 80s [41, 42, 43, 44]. Reversible logic results in no erasure of information, and therefore no waste heat. Reversible logic offers one path to continued improvement, and both theorists and experimentalists are working on making it practical [45, 46, 47, 48, 49, 50].
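Reversible logic can be illustrated in miniature with the Toffoli (controlled-controlled-NOT) gate: it is a bijection on 3-bit states, so it erases no information and incurs no Landauer cost, yet it is universal for classical logic, since fixing the target bit to 1 yields a NAND of the two controls.

```python
def toffoli(a, b, c):
    """Controlled-controlled-NOT: flips c iff both a and b are 1."""
    return a, b, c ^ (a & b)

states = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
outputs = [toffoli(*s) for s in states]

# Bijective on the 8 possible states: no information is ever erased.
assert sorted(outputs) == sorted(states)
# Self-inverse: applying the gate twice restores the original state.
assert all(toffoli(*toffoli(*s)) == s for s in states)
# With c = 1, the third output bit is NAND(a, b), so the gate is
# universal for classical Boolean logic.
assert toffoli(1, 1, 1)[2] == 0 and toffoli(0, 1, 1)[2] == 1
print("Toffoli: reversible, self-inverse, and universal via NAND")
```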

(By interesting coincidence, Babbage’s Difference Engine is logically reversible: from any point in the evolution of the state of the machine, it is possible to infer its state at any earlier point in time. It is, however, neither mechanically nor thermodynamically reversible, and surely the thought of reversibility was not on Babbage’s mind when he developed the machine.)

Technologies with ambition to augment or replace standard CMOS circuits abound in the labs; they are too numerous to cover exhaustively [51, 52]. Carbon nanotubes, numerous new types of transistors and ultimately three-dimensional integration all promise to bring us new capabilities within the same physical constraints [53, 54, 55, 56]. One or more of these may succeed, and certainly there is no reason to believe that our ingenuity in building systems out of these constrained technologies has been exhausted.

The prospects for quantum computing systems continue to improve. Our understanding of how to develop algorithms has grown, compilation techniques are improving, and new error correction mechanisms have been developed. The underlying physical technology steadily gets better, in both memory lifetime and gate fidelity, and in technologies such as superconducting systems has now reached the point where moderate-scale demonstrations of quantum error correction can be contemplated. Importantly, work on the architecture of systems is attracting increasing interest from the experimental community [28]. In early 2013, we seem to stand on the cusp of an inflection point in experimental capabilities, and the next few years likely will see a very competitive atmosphere with important milestones achieved.

The ongoing evolution of classical technologies will benefit quantum systems, especially once industrial processes can be applied. Microcavity ring resonators, for example, depend on essentially atomic-level surface smoothness, difficult to achieve in the lab but potentially doable in an industry setting.

Working on quantum systems also tells us how to build better classical systems in the light of quantum effects and thermodynamic limitations [57]. This is true at the physical level as well as at the logical level, where work on reversible circuits such as arithmetic, while often ostensibly focused on quantum computing, applies to reversible classical as well. More spinoff benefits from research on quantum computing systems can be expected.

5 Perspective

We are approaching the 100th anniversary of the coining of the term robot [58], but Asimovian androids do not (generally) roam the streets of Tokyo. The field of robotics as a whole, however, has contributed enormously to society’s well being in ways such as improved manufacturing and automated monitoring systems, and to fields such as exploration of our solar system. So I expect it to be with quantum computing: development will take time, and the end results and largest societal impact very likely will be nothing we can anticipate today.

The invention (discovery?) of Shor’s algorithm coincided with the availability of a slew of experimental technologies on the verge of single-quantum effects, if not actual digital, entangled operations. This resulted in a surge in interest, and in funding. Funding remains primarily the domain of government agencies. The total spent worldwide on quantum computing research since 1995 is probably a couple of billion dollars. The annual R&D budget of Intel alone is some four times that amount. When quantum system demonstrations (including applications) reach a certain level of maturity, industrial levels of investment will likely occur and the rate of progress will accelerate.

A direct, frontal assault on the bastions of classical supercomputing is unlikely to succeed. Instead, as in The Innovator’s Dilemma, quantum computers may have to take over stealthily, by emphasizing their strengths in areas of classical weakness [59]. Quantum computers will not compete to replace classical supercomputers directly, but instead will open new avenues of computational and intellectual query.

Consistent funding and intelligent choice of problems to attack, and a long-term view of the scale of systems that must be built coupled with an impatience to solve problems and deploy systems, will ultimately lead to success in solving both quantum computing’s classical problem and classical computing’s quantum (and atomic) problem.


Footnotes

  1. Some care must be taken in comparing the exact feature sizes, as memory and logic chips are sometimes described using different terminology varying by a factor of two or so, and the actual feature size on chip may differ from the fabrication process, due to factors in the lithography and etching.


References

  1. Bacon, D., van Dam, W.: Commun. ACM 53(2), 84 (2010). doi:10.1145/1646353.1646375
  2. Feynman, R.P.: In: Hey, A.J.G. (ed.) Feynman and Computation. Westview Press, Boulder (2002)
  3. Grover, L.: In: Proc. 28th Annual ACM Symposium on the Theory of Computation, pp. 212–219 (1996)
  4. Hallgren, S.: J. ACM 54(1) (2007)
  5. Harrow, A.W., Hassidim, A., Lloyd, S.: Phys. Rev. Lett. 103(15), 150502 (2009). doi:10.1103/PhysRevLett.103.150502
  6. Jordan, S.P., Lee, K.S.M., Preskill, J.: Science 336, 1130 (2012)
  7. Magniez, F., Santha, M., Szegedy, M.: In: Proceedings of the Sixteenth Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 1109–1117. Society for Industrial and Applied Mathematics (2005)
  8. Mosca, M.: arXiv preprint arXiv:0808.0369 (2008)
  9. Shor, P.W.: In: Proceedings of 35th Symposium on Foundations of Computer Science, pp. 124–134. IEEE Computer Society Press, Los Alamitos (1994)
  10. Whitfield, J., Biamonte, J., Aspuru-Guzik, A.: Mol. Phys. 109(5), 735 (2011)
  11. Anderson, D.: In: 5th IEEE/ACM International Workshop on Grid Computing, pp. 365–372 (2004)
  12. Asanovic, K., Bodik, R., Catanzaro, B., Gebis, J., Husbands, P., Keutzer, K., Patterson, D., Plishker, W., Shalf, J., Williams, S., et al.: The landscape of parallel computing research: a view from Berkeley. Tech. Rep. UCB/EECS-2006-183, EECS Department, University of California, Berkeley (2006)
  13. Coulouris, G., Dollimore, J., Kindberg, T.: Distributed Systems: Concepts and Design, 4th edn. Addison-Wesley, Menlo Park (2005)
  14. Fox, G., Williams, R., Messina, P.: Parallel Computing Works!. Morgan Kaufmann, San Francisco (1994)
  15. Hennessy, J.L., Patterson, D.A.: Computer Architecture: A Quantitative Approach, 4th edn. Morgan Kaufmann, San Francisco (2006)
  16. Gustafson, J.L.: Commun. ACM 31(5), 532 (1988)
  17. Devitt, S.J., Fowler, A.G., Stephens, A.M., Greentree, A.D., Hollenberg, L.C.L., Munro, W.J., Nemoto, K.: New J. Phys. 11, 083032 (2009)
  18. Jones, N.C., Van Meter, R., Fowler, A.G., McMahon, P.L., Kim, J., Ladd, T.D., Yamamoto, Y.: Phys. Rev. X 2, 031007 (2012). doi:10.1103/PhysRevX.2.031007
  19. Van Meter, R., Ladd, T.D., Fowler, A.G., Yamamoto, Y.: Int. J. Quantum Inf. 8, 295 (2010)
  20. Whitney, M.G., Isailovic, N., Patel, Y., Kubiatowicz, J.: In: Proc. 36th Annual International Symposium on Computer Architecture (2009)
  21. Ambainis, A., Childs, A., Reichardt, B.: In: 48th Annual IEEE Symposium on Foundations of Computer Science (FOCS ’07), pp. 363–372. IEEE (2007)
  22. Childs, A., Cleve, R., Deotto, E., Farhi, E., Gutmann, S., Spielman, D.: In: Proceedings of the Thirty-Fifth Annual ACM Symposium on Theory of Computing, pp. 59–68. ACM, New York (2003)
  23. Regev, O.: In: Proceedings of the 43rd Annual IEEE Symposium on Foundations of Computer Science, pp. 520–529. IEEE (2002)
  24. Bocharov, A., Svore, K.: arXiv preprint arXiv:1206.3223 (2012)
  25. Pham, T.T., Van Meter, R., Horsman, C.: arXiv preprint arXiv:1209.4139 (2012)
  26. Selinger, P.: arXiv:1212.6253 [quant-ph] (2012)
  27. Van Meter III, R.D.: Architecture of a quantum multicomputer optimized for Shor’s factoring algorithm. Ph.D. thesis, Keio University (2006). Available as arXiv:quant-ph/0607065
  28. Van Meter, R., Horsman, C.: Commun. ACM (2013). To appear
  29. ESIA, JEITIA, KSIA, TSIA, SIA: International technology roadmap for semiconductors. Tech. rep. (2012)
  30. Moore, G.E.: Electronics 38(8) (1965)
  31. Forbes, N., Foster, M.: Comput. Sci. Eng., pp. 18–19 (2003)
  32. Kish, L.: Phys. Lett. A 305(3–4), 144 (2002)
  33. Tuomi, I.: First Monday 7(11) (2002)
  34. Landauer, R.: IBM J. Res. Develop. 5(3), 183 (1961). Reprinted in IBM J. Res. Develop. 44(1/2), 261–269 (2000)
  35. Meindl, J.D., Chen, Q., Davis, J.A.: Science 293, 2044 (2001)
  36. Swade, D.: The Difference Engine: Charles Babbage and the Quest to Build the First Computer. Penguin, Baltimore (2002)
  37. Riordan, M., Hoddeson, L.: Crystal Fire: The Birth of the Information Age. W.W. Norton, New York (1997)
  38. Ieong, M., Doris, B., Kedzierski, J., Rim, K., Yang, M.: Science 306, 2057 (2004)
  39. Lundstrom, M.: Moore’s Law forever? (2003)
  40. DeBenedictis, E.P.: In: Proc. 2nd Conference on Computing Frontiers, pp. 391–402. ACM (2005)
  41. Bennett, C.H.: IBM J. Res. Develop. 17, 525 (1973)
  42. Bennett, C.H.: IBM J. Res. Develop. 32(1) (1988). Reprinted in IBM J. Res. Develop. 44(1/2), 270–277 (2000)
  43. Feynman, R.P.: Feynman Lectures on Computation. Addison-Wesley, Menlo Park (1996)
  44. Fredkin, E., Toffoli, T.: Int. J. Theor. Phys. 21, 219 (1982)
  45. Athas, W.C., Svensson, L.J.: In: Proc. IEEE 1994 Workshop on Physics and Computing. IEEE (1994)
  46. Burignat, S., Vos, A.D.: Int. J. Electron. Telecommun. 58(3), 205 (2012)
  47. Frank, M.P.: Reversibility for efficient computing. Ph.D. thesis, MIT (1999)
  48. Peres, A.: Phys. Rev. A 32(6), 3266 (1985)
  49. Shende, V.V., Prasad, A.K., Markov, I.L., Hayes, J.P.: IEEE Trans. CAD 22(6), 710 (2003)
  50. Vieri, C., Ammer, M.J., Frank, M., Margolus, N., Knight, T.: A fully reversible asymptotically zero energy microprocessor
  51. Bourianoff, G.: IEEE Computer, pp. 44–53 (2003)
  52. Tseng, G., Ellenbogen, J.: Toward nanocomputers (2001)
  53. Beckman, R., Johnston-Halperin, E., Luo, Y., Green, J.E., Heath, J.R.: Science 310, 465 (2005)
  54. DeHon, A.: IEEE Trans. Nanotechnol. 2(1) (2003)
  55. Aaronson, S.J.: Limits on efficient computation in the physical world. Ph.D. thesis, U.C. Berkeley (2004)
  56. Topol, A., Tulipe, D.L., Shi, L., Frank, D., Bernstein, K., Steen, S., Kumar, A., Singco, G., Young, A., Guarini, K., et al.: IBM J. Res. Develop. 50(4.5), 491 (2006)
  57. Tsang, M., Caves, C.M.: Phys. Rev. X 2, 031016 (2012). doi:10.1103/PhysRevX.2.031016
  58. Čapek, K.: R.U.R.: Rossum’s Universal Robots (1920)
  59. Christensen, C.M.: The Innovator’s Dilemma: When New Technologies Cause Great Firms to Fail. Harvard Business Press, Cambridge (1997)

Copyright information

© Springer Science+Business Media New York 2014

Authors and Affiliations

Keio University, Tokyo, Japan
