1 Introduction

Simulating models of the physical world is instrumental in advancing scientific knowledge and developing technologies. Accordingly, the task has long been at the heart of science. For example, orreries have been used for millennia to simulate models of the motions of celestial objects [1]. More recently, differential analysers, or mechanical integrators, were developed to solve differential equations modelling, e.g., heat flow and transmission lines [2, 3].

Unfortunately, simulation is not always easy. There are numerous important questions to which simulations would provide answers but which remain beyond current technological capabilities. These span a multitude of scientific research areas, from high-energy [4, 5], nuclear, atomic [6] and condensed matter physics [7, 8] to thermal rate constants [9] and molecular energies [10, 11] in chemistry [12, 13].

An exciting possibility is that the first simulation devices capable of answering some of these questions may be quantum, not classical, with this distinction to be clarified below. The types of quantum hardware proposed to perform such simulations are as hugely varying as the problems they aim to solve: trapped ions [14–19], cold atoms in optical lattices [20–22], liquid and solid-state NMR [23–27], photons [28–33], quantum dots [34–36], superconducting circuits [6, 37–42], and NV centres [43, 44]. At the time of writing, astonishing levels of control in proof-of-principle experiments (cf. the above references and citations within) suggest that quantum simulation is transitioning from a theoretical dream into a credible possibility.

Here we complement recent reviews of quantum simulation [45–50] by providing our answers to several fundamental but non-trivial and often contentious questions about quantum simulators, highlighting whenever there is a difference of opinion within the community. In particular, we discuss how quantum simulations are defined, the role they play in science, and the importance that should be given to verifying their accuracy.

2 What are simulators?

Both simulators and computers are physical devices that reveal information about a mathematical function. Whether we call a device a simulator or a computer depends not only on the device, but also on what is supposed about the mathematical function and the intended use of the information obtained.

If the function is interpreted as part of a physical model then we are likely to call the device a simulator. However, this brief definition neglects the typical purpose and context of a simulation (see Figure 1). As will become clear below, a simulation is usually the first step in a two-step process, with the second being the comparison of the physical model with a real physical system (see Section 4 ‘How are simulators used?’). This then makes simulation part of the usual scientific method. This context is why some loosely state that simulation is the use of one physical device to tell us about another real physical system [51]. It also affects the level of trust that can be reasonably demanded of the simulation (see Section 7 ‘When are quantum simulators trustworthy?’).

Figure 1

The role of a quantum simulator. A quantum simulator reveals information about an abstract mathematical function relating to a physical model. However, it is important to consider the typical purpose and context of such a simulation. By comparing its results to a real system of interest, a simulation is used to decide whether or not the model accurately represents that system. If the representation is thought to be accurate, the quantum simulator can then loosely be considered as a simulator for the system of interest. We represent this in the figure by a feedback loop from the quantum device back to the system of interest.

If the accuracy with which a device simulates a model can be arbitrarily controlled and guaranteed, then the device is often elevated to the status of a computer, a name that reflects our trust in it. A consequence of this guaranteed accuracy is that it allows assured interpretation of the results of the operation, the information obtained about a mathematical function, without reference to some real system. Thus, as well as implying accuracy, the term computer is more often used to describe calculations that relate to more abstract mathematical functions, unconnected to a physical system, and that are used outside of the scientific method.

It is interesting to apply our definition of a simulator to well-known situations in which the term is used. The majority of experimental devices advertised as quantum simulators are so-called analogue simulators [45–50]. They are devices whose Hamiltonians can be engineered to approximate those of a subset of models put forward to describe a real system. This closely fits our definition of simulators, as well as their usual purpose and context outlined above. A different type of device is Lloyd's digital quantum simulator [52]. This replicates universal unitary evolution by mapping it, via Trotter decompositions, to a circuit, which can then be made arbitrarily accurate by the use of error correction. Whilst going by the name simulator, it is effectively a universal quantum computer. From our arguments above, we would also describe this as a computer: error correction ensures that the result, applying to the modelled system, can be interpreted without comparison to a real physical system, thus playing the role of a computation. Finally, the company D-Wave has developed a device to find the ground state of the classical Ising model [53]. While this is a device that returns a property of a physical model, it is advertised as a computer. We would agree, since its primary use seems to be in solving optimisation problems embedded in the Ising ground state, rather than comparing this to a real physical system.
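To make the Trotter mapping concrete, the following minimal Python sketch compares first-order Trotterised evolution against exact evolution for a toy two-qubit Hamiltonian of our own choosing (the split $H = A + B$ is purely illustrative, not a construction taken from [52]); the error shrinks as the number of steps grows, which is the handle that error correction then makes rigorous.

```python
import numpy as np
from scipy.linalg import expm

# Toy two-qubit Hamiltonian split into two non-commuting terms, H = A + B.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
A = np.kron(Z, Z)                      # interaction term
B = np.kron(X, I2) + np.kron(I2, X)    # transverse-field term

t = 1.0
exact = expm(-1j * t * (A + B))        # exact evolution operator

for n in (1, 10, 100):
    # One first-order Trotter step of duration t/n, repeated n times.
    step = expm(-1j * (t / n) * A) @ expm(-1j * (t / n) * B)
    trotterised = np.linalg.matrix_power(step, n)
    error = np.linalg.norm(trotterised - exact, 2)
    print(f"n = {n:3d} steps: spectral-norm error = {error:.2e}")
# The error falls off as O(t^2 / n), so the accuracy is arbitrarily controllable.
```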

3 What are quantum simulators?

To complete the definition of a quantum simulator we need to define what is meant by a quantum device. This problem is also faced by quantum biology [54–56] and other quantum technologies. It is complicated by the fact that, at some level, quantum mechanics describes the structure and dynamics of all physical objects. Quantumness may be structural and inert, e.g. merely responsible for the available single-particle modes. Or quantumness may be active, e.g. exploiting entanglement between modes, potentially achieving functionality more efficiently than a classical device (see Section 6 'Why do we need quantum simulators?').

To this end, we must distinguish between devices for which, during the operation of the simulator, the particular degrees of freedom doing the simulating do or do not behave classically. We choose here to define classical as when there is some single-particle basis in which the density operator $\hat{\rho}(t)$ describing the relevant degrees of freedom is, for the purposes of the simulation, diagonal at all times $t$. This is written

$$\hat{\rho}(t) = \sum_{\{N_{s,i}\}} p\big(\{N_{s,i}\}, t\big)\, \big|\{N_{s,i}\}, t\big\rangle \big\langle\{N_{s,i}\}, t\big| .$$

Here $|\{N_{s,i}\}, t\rangle$ is a Fock state in which $N_{s,i}$ particles of species $s$ occupy mode $i$. The mode annihilation operator is $\hat{a}_{s,i}(t) = \int \mathrm{d}\mathbf{r}\, \hat{\Psi}_s(\mathbf{r})\, \chi^{*}_{s,i}(\mathbf{r}, t)$, with $\chi_{s,i}(\mathbf{r}, t)$ the corresponding single-particle modefunction and $\hat{\Psi}_s(\mathbf{r})$ the field operator for species $s$. The diagonal elements $p(\{N_{s,i}\}, t)$ are the probabilities of the different occupations.

This condition ensures there is always a single-particle basis in which dephasing would have no effect. This invariance under dephasing is a common way to define classicality [57]. The condition also disallows entanglement between different single-particle modes, as would be expected for a condition of classicality. It does allow the natural entanglement between identical particles in the same mode due to symmetrisation. Such entanglement can be mapped to entanglement between modes by operations that themselves do not contribute entanglement [58]. However, if such operations are never applied, it is reasonable to consider the device to be classical. In other words, we are less concerned with the potential of entanglement as a resource than how this resource is manifested during the operation of the device.
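As a concrete illustration of the dephasing-invariance condition, here is a minimal NumPy sketch. It is a finite-dimensional toy at a single time $t$, and the helper names (`dephase`, `is_classical_in_basis`) and the candidate basis `U` are our own; it does not capture the full multi-species mode structure above.

```python
import numpy as np

def dephase(rho, U):
    """Remove coherences in the basis given by the columns of U."""
    rho_b = U.conj().T @ rho @ U           # transform to the candidate basis
    rho_b = np.diag(np.diag(rho_b))        # keep only the diagonal (populations)
    return U @ rho_b @ U.conj().T

def is_classical_in_basis(rho, U, tol=1e-12):
    """True if dephasing in this basis leaves the state unchanged."""
    return np.linalg.norm(dephase(rho, U) - rho) < tol

# A mixture of basis (Fock-like) states is invariant under dephasing ...
dim = 4
U = np.eye(dim, dtype=complex)             # computational (occupation) basis
rho_diag = np.diag([0.5, 0.3, 0.2, 0.0]).astype(complex)
print(is_classical_in_basis(rho_diag, U))  # True

# ... while a coherent superposition of two basis states is not.
psi = np.zeros(dim, dtype=complex)
psi[0] = psi[1] = 1 / np.sqrt(2)
rho_pure = np.outer(psi, psi.conj())
print(is_classical_in_basis(rho_pure, U))  # False
```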

Let us build confidence in our definition by using it to classify well-known devices as classical or quantum. Reassuringly, the operation of the room-temperature semiconductor devices used to perform everyday computing is classical according to the definition. The relevant properties of inhomogeneous semiconductors are captured by a model in which the degrees of freedom are valence (quasi-)electrons that incoherently occupy single-particle states $\chi_i(\mathbf{r})$ of the Bloch type [59]. Next, consider two devices for preparing the ground state of a classical Ising model: classical annealing [60] and quantum annealing [61–65]. Classical annealing, by coupling the Ising spins to a cooling environment, is not quantum, since at all times the thermal density matrix of the system is diagonal in the computational basis, a single-particle basis. However, preparing that same state by quantum annealing, adiabatically quenching a transverse field, is expected to be quantum. This is because in the middle of the quench, which forms the main part of the simulation, the Ising spins will usually become entangled. Since these are particles in distinguishable modes, the device cannot behave classically at all times. Finally, consider a Bose-Einstein condensate [66, 67], that is, many bosons in the same single-particle mode $\chi_0(\mathbf{r}, t)$. Alternatively, consider a Poissonian mixture of different occupation numbers or, equivalently, a coherent number superposition of unknown phase, both of which are well approximated by $N_0$ bosons occupying $\chi_0(\mathbf{r}, t)$, for large mean occupation $N_0$. In these cases, the single occupied modefunction evolves according to the Gross-Pitaevskii equation and we would label the system as classical. When classifying the use of condensed Bose gases as simulators of gravitational models [68–72], the classical or quantum assignment depends on whether, for the purposes of the simulation, the system can be described by a single condensate modefunction without fluctuations above the condensate. An example that falls clearly onto the quantum side is provided by a simulator of the Gibbons-Hawking effect [73, 74], which is fundamentally reliant on quantum vacuum fluctuations.
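The quantum annealing example can be made quantitative with a small sketch. Assuming a toy two-spin transverse-field Ising Hamiltonian of our own choosing, the entanglement entropy of one spin in the instantaneous ground state is near zero at large transverse field but appreciable mid-quench, so by our definition the device cannot be behaving classically at all times:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def ground_state(h):
    """Ground state of H = -ZZ - h(XI + IX), a two-spin transverse Ising model."""
    H = -np.kron(Z, Z) - h * (np.kron(X, I2) + np.kron(I2, X))
    vals, vecs = np.linalg.eigh(H)     # eigenvalues in ascending order
    return vecs[:, 0]

def entanglement_entropy(psi):
    """Von Neumann entropy (in bits) of one spin of a two-spin pure state."""
    M = psi.reshape(2, 2)                       # amplitude matrix (spin 1, spin 2)
    lam = np.linalg.eigvalsh(M @ M.conj().T)    # reduced density matrix of spin 1
    lam = lam[lam > 1e-12]
    return float(-np.sum(lam * np.log2(lam)))

for h in (5.0, 1.0):                   # start and middle of the quench
    S = entanglement_entropy(ground_state(h))
    print(f"h = {h}: one-spin entropy S = {S:.3f}")
# S ~ 0.03 at h = 5 (near-product start) but S ~ 0.3 mid-quench at h = 1:
# the spins are entangled during the anneal, so the device is not classical then.
```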

Our chosen boundary between quantum and classical is one of many possibilities. Indeed, defining the quantumness of the simulation entirely in terms of the device is not common. Many others [49, 50] take the quantum in quantum simulator to relate to the model being simulated as well as to the simulating device. In common with definitions of quantum computation, our assignment of the quantum in quantum simulator based only on the device avoids the assumption that only simulating quantum models is hard enough to potentially benefit from a quantum device. This is not so: finding the ground state of even a classical Ising model is NP-hard and thus thought to be inefficient on both classical and quantum devices [75, 76].

4 How are simulators used?

A common perception (that goes right back to the language used at the conception of quantum simulation [51]) is that the purpose of a simulator is purely to reveal information about another real system. We pick an idealised model describing a system of interest, and then simulate that model, taking the output to describe not only the model but the system of interest. As long as the idealised model is a 'good' description of the system of interest, it is inferred that the simulator is a 'good' simulator of the system. While this inference is correct, it misses an important purpose of a simulator.

This other crucial purpose of a simulator is to reveal information about a model and compare this to the behaviour of the real system of interest. This then allows us to infer whether or not the model provides a 'good' description of the system in the first place and whether or not the results bear any relevance to the real world. For example, simulating the Fermi-Hubbard model would be hugely important if it turned out that this model captures the behaviour of some high-$T_c$ superconductors (as suggested by some [77–80]), but it may be that the main conclusion of simulations will be to rule this out (as expected by others [81–83]). Only when we have developed confidence in a model accurately representing a system can we use the simulator of the model to inform us about the system.

5 Why do we need simulators?

Above we have stated that simulators are used to find properties of a model, assess whether the model is relevant to and accurately describes the real system of interest, and, if so, learn about that system. Are there other ways to learn about a system without simulation? Do we need simulators?

There are, of course, many examples of scientists making progress without simulation. Over a century ago, the phenomenon of superconductivity was discovered and later its properties analysed by experimental investigation largely unguided by analytical or numerical simulation [84]. Today, in cases where detailed simulation is not possible, we successfully design drugs largely by trial and error on a mass scale [85].

These two examples, however, also show why simulation is crucial. Computer-aided drug design [86, 87] exploits the simulation of molecular systems to drastically speed up and thus lower the cost of the design process. Similarly, if we wish to manufacture materials with enhanced superconducting properties, e.g. increase the critical temperature $T_c$, then we might benefit from some understanding directing that manufacture, as would be provided by a model and a means of simulating it [88, 89].

Simulation can also be a convenience: in 2014 the USA bobsleigh team won Olympic bronze with a machine designed almost entirely virtually [90]. Simulation was used to optimise the aerodynamic performance without the need for a wind tunnel.

6 Why do we need quantum simulators?

While the idea of simulations is centuries old [1, 2], the suggestion that a quantum device would make for a better mimic of some models than a classical device is commonly attributed to Feynman in 1982 [51]. He noted that calculating properties of an arbitrary quantum model on a classical device is a seemingly very inefficient thing to do (taking a time that scales exponentially with the number of particles in the model being simulated), but a quantum device might be able to do this efficiently (taking a time that scales at most polynomially with particle number [52]).

This does not of course prohibit the simulation of many quantum models from being easy using classical devices and thus not in need of a quantum simulator. The classical numerical tools usually employed include exact calculations, mean-field [91] and dynamical mean-field theory [92–94], tensor network theory [95–102], density functional theory (DFT) [103–107] and quantum Monte Carlo algorithms [108–111], all of which have their limitations. Exact calculations are only possible for small Hilbert spaces. Mean-field-based methods are only applicable when the correlations between the constituent parts of the system being modelled are weak. Tensor network methods are only applicable if there is a network structure to the Hilbert space and often fail in the presence of strong entanglement between contiguous bipartite subspaces [112], a sensitivity that is much greater for two- or higher-dimensional models. For DFT, the functionals describing strong correlations are, in general, not believed to be efficient to find [113]. Quantum Monte Carlo struggles, for example, with Fermionic statistics or frustrated models, due to the sign problem [114, 115].
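The limitation on exact calculations is easy to quantify. A back-of-the-envelope sketch of the memory needed merely to store one state vector of $n$ spin-1/2 particles at double precision shows the exponential wall:

```python
# Memory needed to store a single complex state vector of n spin-1/2 particles,
# illustrating why exact classical calculations are limited to small Hilbert spaces.
for n in (10, 20, 30, 40, 50):
    gib = (2 ** n) * 16 / 2 ** 30      # complex128 = 16 bytes per amplitude
    print(f"n = {n:2d} spins: {gib:.2e} GiB")
# From a trivial ~16 KiB at n = 10 to ~16 PiB at n = 50, before any operation
# on the state has even been performed.
```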

For the above reasons, quantum devices are expected to be crucial for large network (e.g. lattice) models, featuring Fermions or frustration and strong entanglement, or non-network based many-body models featuring states with strong correlations that are difficult to describe with DFT. Strong entanglement can arise, for example, near a phase transition, or after a non-equilibrium evolution [116]. It must be stated, however, that there is no guarantee that a classical device or algorithm will not sometime in the future be devised to efficiently study some subset of the above quantum models.

In addition to the widely-accepted need for quantum devices for the quantum models discussed above, there are calls and proposals for quantum devices to simulate classical models [117, 118], for example molecular dynamics [119] and lattice gas models [120, 121]. This also applies to any simulation that reduces to solving an eigenvalue equation [122] or a set of linear equations [123]. As with quantum models, many of these problems, for example solving a set of linear equations, can be handled without much trouble on a classical device at small to medium sizes. The benefit of a quantum device is that the size of problem that can be tackled in a reasonable time grows significantly more quickly with the size of the simulating device than it does for a classical device; thus it is envisaged that quantum devices will one day be able to solve larger problems than their classical counterparts.

It is clear from this last point that the scaling of classical and quantum simulators must be treated carefully, taking into account the sizes of problems that can be tackled by current or future devices. It is possible that the experimental difficulty of scaling up quantum simulation hardware might cause an overhead such that a quantum device does not surpass the accuracy obtained by a classical algorithm that in principle does not scale as well but runs on ever-improving hardware obeying Moore’s law.

7 When are quantum simulators trustworthy?

So far we have yet to address perhaps the most difficult and important aspect of simulation, upon which its success rests. How can we assess whether the quantum simulator represents the model? How rigorous an assessment is needed?

For this discussion we focus on analogue quantum simulators, because they are the most easily scaled quantum simulators and so are likely to be used in the near future to simulate large systems. They also most closely follow our definition of a simulator, as opposed to a computer (see Section 2 ‘What are simulators?’).

The topic of falsifying bad quantum simulators has received some attention. In certain parameter regimes there may be efficiently calculable exact analytical results or it might be possible to perform a trusted classical simulation, against which the quantum simulator results may be compared [116]. Often there are bounds that some measurable quantities are known to obey, and this too can be tested [47]. Alternatively, it might be possible to check known relationships between two different simulations. For example, in an Ising model, flipping the direction of the magnetic field is equivalent to flipping the sign of the component of the spins along that field, thus giving two simulations whose results are expected to have a clear relationship. A natural extension of this strategy is to compare many quantum simulations realised by different devices, perhaps each with a slightly different source of error, trusting only the aspects of the results shared by all devices [124]. If any of the above tests fail beyond an acceptable accuracy, then we do not trust the simulation results. If a simulator passes all tests, then we may take this as support for the accuracy of that simulator.
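The field-flip test can be stated precisely. For a classical Ising model with energy $E = -J\sum_{\langle ij\rangle} s_i s_j - h \sum_i s_i$, sending $h \to -h$ and $s_i \to -s_i$ leaves $E$ unchanged, so the magnetisation must satisfy $m(-h) = -m(h)$. The sketch below (an exact enumeration of a small Ising ring; the parameter values are arbitrary illustrations) shows the kind of relationship against which a pair of simulations could be cross-checked:

```python
import itertools
import numpy as np

def magnetisation(h, J=1.0, N=8, beta=1.0):
    """Thermal <m> of a classical Ising ring by exact enumeration (small N only)."""
    Zp, mZ = 0.0, 0.0
    for spins in itertools.product((-1, 1), repeat=N):
        E = -J * sum(spins[i] * spins[(i + 1) % N] for i in range(N))
        E -= h * sum(spins)
        w = np.exp(-beta * E)           # Boltzmann weight
        Zp += w
        mZ += w * sum(spins) / N
    return mZ / Zp

h = 0.3
print(magnetisation(h), magnetisation(-h))   # expect m(-h) == -m(h)
```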

It would be incorrect, however, to say that such tests verify the accuracy of a simulator. A simulator could have significant errors yet pass these tests. It might be that the simulator is accurate in the regimes in which we have accurate analytical or classical numerical results, but is more sensitive to errors in regimes that are difficult to treat with other methods, e.g. near phase transitions, perhaps for the same reason. In fact, Hauke et al. gave an example of exactly this phenomenon in the transverse Ising model [47]. The danger with comparing simulations, even realised by different devices [124], is that there may be similar sources of error, or errors in the two simulations may manifest in the results in the same way.

Although this makes simulation difficult to assess, it does not invalidate it; it would be unreasonably harsh to demand verification of all simulators. The reason for this is that, as illustrated in Figure 1, simulators are usually the first step in a two-step process: first a device is devised to simulate a model, and second the model is employed to study a real system (see Section 4 'How are simulators used?'). It might be unreasonable to demand a more rigorous testing of the first part of this process than the second. In the second part, when we devise a model to reproduce the behaviour of a physical system, we only demand that the model be falsifiable [125]. We seek as many fail-able tests as possible of the model, and to the extent that it passes these tests, we retain the model. It is difficult for experiments to verify a particular use of the model; rather, successful experiments merely declare the model 'not yet false'. This is the scientific method. We should not, therefore, demand anything more or less when going in the other direction, devising a physical device to reproduce the behaviour of a model. All we can do is test our simulators as much as possible, and slowly build confidence in accordance with the passing of these tests. If the capability of performing such tests lags behind the development of the simulator, then so naturally must our confidence.

It becomes clear that the purpose of the device is crucial to how it is assessed, explaining our highlighting the purpose of a simulator alongside its definition. If we were using a device to provide information about a model without any additional motivation, as with a computer, then it would be reasonable to search for a means of verification and guarantees of accuracy. Eventually, quantum technologies might develop to a stage where large simulations of this type are feasible, e.g. via Lloyd's digital simulator [52], but this is likely to be in the more distant future. It must be noted, however, that many of the devices we use regularly for computation are unverifiable in the strictest sense. Not every transistor in the classical computers we use (for instance to simulate quantum systems) can be verified to be functioning as desired [126]. We instead develop an understanding of the sources of error, perform some tests to check for obvious errors, and use the devices with caution.

The words ‘trust’ and ‘confidence’ in the preceding paragraphs are chosen deliberately. They indicate that, since for simulation we do not always have verifiability, we are not discussing objective properties of devices, but our understanding of them. This will change in time (see an example of this in Figure 2). Further, confidence depends on the eventual goal of our use of the simulator. Some properties of a system may be too sensitive to Hamiltonian parameters to be realistically captured by a simulator, while other properties may be statistically robust against parameter variations [127]. In this sense trustworthiness is not a clear-cut topic that is established upon the initial development of a simulator. Instead, it is the result of a complex, time-consuming process in the period that follows. It is the responsibility of critics not to be overly harsh and unfairly demanding of new simulators to provide immediate proof of their trustworthiness, but it is also the responsibility of proponents not to declare trustworthiness before their simulator has earned it.

Figure 2

Establishing trust in a simulator. Consider the displacement of a spring due to the pressure of a gas (far left), or the time taken for a dropped ball to fall (middle left). Simple models can be proposed to describe either system: the former might be modelled as an ideal gas trapped in a box by a frictionless piston held in place by a perfect spring; the latter as a frictionless body moving with uniform acceleration. Calculating the quantity of interest within either system, displacement or time, respectively, reduces within the model to calculating a square root. We thus consider four methods of performing this simulation: building an approximation to either model system (analogue simulation), or using an abacus (middle right) or a calculator (far right) (digital simulation).

With today’s knowledge, in the parlance used in this article, we would elevate the status of the latter two simulations to computations, because of the guaranteed accuracy with which each calculation reproduces the model. Meanwhile, the former two simulators are not so easily verified. Importantly, they are falsifiable, e.g. by comparing one to the other. This is similar to the state of analogue quantum simulators currently used to perform large-scale quantum simulations.

However, the confidence in each simulator is a matter of perspective. It is not objective. Many centuries ago, we would only have trusted the abacus to perform such a calculation, since its principles were well understood and square-root algorithms with assured convergence were known even to the Babylonians. Once Galileo began the development of mechanics, we might have considered the method of dropping a ball. Confidence in the simulation could have been established by testing the analogue simulator against the abacus. Nearly two centuries ago, when we first began to understand equilibrium thermodynamics, we might have preferred the gas-piston-spring method. Nowadays, we would all choose the calculator or a solid-state equivalent. This confidence is partly a result of testing the calculator against some known results, but also largely because, after the development of quantum mechanics, we feel we understand the components of solid-state systems to such a high level that we are willing to extrapolate this confidence to unknown territory. In a century, our confidence could well be placed most strongly in another system.
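For reference, the square-root algorithm with assured convergence alluded to above is Heron's (Babylonian) method, the iteration $x \mapsto (x + a/x)/2$, which converges for any positive starting guess. A minimal sketch:

```python
def babylonian_sqrt(a, iterations=6):
    """Heron's method: x -> (x + a/x) / 2, convergent for any a > 0."""
    x = a if a >= 1 else 1.0           # any positive starting guess works
    for _ in range(iterations):
        x = 0.5 * (x + a / x)
    return x

print(babylonian_sqrt(2.0))            # ~1.41421356, the 'digital' route
```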

8 Where next for quantum simulation?

The majority of the current effort on quantum simulation is, firstly, in matching models of interest to a suitable quantum device with which to perform a simulation [49, 50]. Secondly, experimentalists demonstrate a high level of control and flexibility with a simulator, performing some of the simple fail-able tests mentioned above [18, 22, 33]. This is very much along the lines of the five goals set out by Cirac and Zoller in 2012 [46], and great successes have led to claims that we are now able to perform simulations on a quantum device that we cannot perform on a classical device. In the future, the main direction of inquiry will continue to be along these lines.

However, it is the very fact that the simulation capabilities of quantum devices are beginning to surpass those of classical devices that should prompt a more forceful investigation into the best approach to establishing confidence in quantum simulators. Hauke et al. proposed a set of requirements for a quantum simulator, an alternative to Cirac and Zoller’s, that focuses on establishing the reliability and efficiency of a simulator, and the connection between these two properties [47]. As we move to classically unsimulable system sizes and regimes where there is no clear expected behaviour, trustworthiness and falsifiability should no longer be an afterthought. In fact, they should be primary objectives of experimental and theoretical work, since quantum simulators cannot truly be useful until some level of trust is established.

Can we predict in advance where the results of quantum simulators are more sensitive to errors? How does this overlap with the regimes of classical simulability? Are there even some results that will be exponentially sensitive to the Hamiltonian parameters and not expected to ever be simulable in a strict sense? These are difficult but important questions to answer, and the path towards answering them will be exciting and thought provoking.