1 Introduction

Analog Hamiltonian simulation is one of the most promising applications of quantum computing in the NISQ (noisy intermediate-scale quantum) era, because it does not require fully fault-tolerant quantum operations. Its potential applications have led to an interest in constructing a rigorous theoretical framework to describe Hamiltonian simulation.

Recent work has precisely defined what it means for one quantum system to simulate another [1] and demonstrated that—within very demanding definitions of what it means for one system to simulate another—there exist families of Hamiltonians that are universal, in the sense that they can simulate all other quantum Hamiltonians. This work was recently extended, with the first construction of a translationally invariant universal family of Hamiltonians [2].

Previous universality results have relied heavily on using perturbation gadgets, and constructing complicated ‘chains’ of simulations to prove that simple models are indeed universal. In this paper, we present a new simplified method for proving universality. This method makes use of another technique from Hamiltonian complexity theory: history state Hamiltonians [3]. Leveraging the fact that it is possible to encode computation into the ground state of local Hamiltonians, we show that it is possible to prove universality by constructing Hamiltonian models which can compute the energy levels of arbitrary target Hamiltonians.

In order to ensure that the universality constructions preserve the entire physics of the target system (and not just the energy levels), we make use of an idea originally from [4] and used recently in [5,6,7]: ‘idling to enhance coherence.’ Before computing the energy levels of the target system, the computation encoded in the simulator system ‘idles’ in its initial state for time L. By choosing L to be sufficiently large, we can ensure that with high probability there is a fixed set of spins in the simulator system which map directly to the state of the target system.

As well as providing a route to simplifying previous proofs, this ‘history-state simulation method’ also offers more insight into the origins of universality and the relationship between universality and complexity. The classification of two-qubit interactions by their simulation ability in [1], which showed that the universal class was precisely the set of QMA-complete interactions, was suggestive of a connection between simulation and complexity. And a complexity theoretic classification of universal models already exists in the classical case [42]. But until now, it was not clear whether a connection existed for general quantum interactions, or whether it was merely an accident in the two-qubit case. Previous methods for proving universality in the quantum case did not offer a route to classifying universal models, and the more complicated non-commutative structure of quantum Hamiltonians meant that the techniques from the classical proof could not be applied. By demonstrating that it is possible to prove universality by leveraging the ability to encode computation into ground states, we have provided a route to showing that the connection between universality and complexity holds more generally. In a companion paper [41], we make this insight rigorous, by deriving a full complexity theoretic classification of universal quantum Hamiltonians.

We also use the ‘history-state simulation method’ to provide a simple construction of two new universal models. Both are translationally invariant systems in 1D, and we show that one of the constructions is efficient in terms of the number of spins in the universal construction (though not in terms of the simulating system’s norm):

Theorem 1.1

There exists a two-body interaction \(h^{(1)}\) depending on a single parameter \(h^{(1)}=h^{(1)}(\phi )\), and a fixed one-body interaction \(h^{(2)}\) such that the family of translationally invariant Hamiltonians on a chain of length N,

$$\begin{aligned} H_{\mathrm {univ}}(\phi , \Delta ,T) = \Delta \sum _{\langle i,j \rangle } h^{(1)}_{i,j}(\phi ) + T \sum _{i = 0}^N h^{(2)}_{i}, \end{aligned}$$
(1)

is a universal model, where \(\Delta \), T and \(\phi \) are parameters of the Hamiltonian, and the first sum is over adjacent sites along the chain. The universal model is efficient in terms of the number of spins in the simulator system.

By tuning \(\phi \), T and \(\Delta \), this model can replicate (in the precise sense of [1]) all quantum many-body physics.

This is the first translationally invariant universal model which is efficient in terms of system-size overhead. Its existence implies that, for problems which preserve hardness under simulation, complexity theoretic results for general Hamiltonians can also apply to 1D, translationally invariant Hamiltonians (though care must be taken when applying this, as the construction is not efficient in the norm of the simulating system). This is, for instance, the case for a reduction from a PreciseQMA-hard local Hamiltonian (LH) problem, for which the reduction to a translationally invariant version preserves the correct promise-gap scaling. This in turn implies that the local Hamiltonian problem remains PSPACE-hard for a promise gap that closes exponentially quickly, even when enforcing translational invariance of the couplings. This stands in contrast to a promise gap which closes as \(1/{{\,\mathrm{poly}\,}}\) in the system size, in which case the problem is QMA-complete (without translational invariance) or QMAEXP-complete (with translational invariance).

Furthermore, Theorem 1.1 allows us to construct the first toy model of holographic duality between local Hamiltonians from a 2D bulk to a 1D boundary, extending earlier work on toy models of holographic duality in [8] and [9].

We also construct a universal model which is described by just two free parameters, but which is no longer efficient in the system-size overhead:

Theorem 1.2

There exists a fixed two-body interaction \(h^{(3)}\) and a fixed one-body interaction \(h^{(2)}\) such that the family of translationally invariant Hamiltonians on a chain of length N,

$$\begin{aligned} H_{\mathrm {univ}}(\Delta ,T) = \Delta \sum _{\langle i,j \rangle } h^{(3)}_{i,j} + T \sum _{i = 0}^N h^{(2)}_{i}, \end{aligned}$$
(2)

is a universal model, where \(\Delta \) and T are parameters of the Hamiltonian, and the first sum is over adjacent sites along the chain.

By varying the size of the chain N that this Hamiltonian is acting on, and tuning the \(\Delta \) and T parameters in the construction, this Hamiltonian can replicate (again in the precise sense of [1]) all quantum many-body physics. We are able to demonstrate that constructing a universal model with no free parameters is not possible, but the existence of a universal model with just one free parameter is left as an open question.

The remainder of the paper is set out as follows. In Sect. 2, we cover the necessary background regarding the theory of simulation, and encoding computation into ground states of QMA-hard Hamiltonians. In Sect. 3, we give an overview of the new method for proving universality and our two new universal constructions. Reading these sections should be enough to gain an intuitive understanding of our approach and our results. The full proofs of our results are given in Sect. 4—this section may be skipped on an initial reading by readers primarily interested in understanding the general approach or the applications of the results. The complexity theory implications are discussed in Sect. 5, while in Sect. 6 the new toy model of holographic duality is constructed. Avenues for future research are discussed in Sect. 7.

2 Preliminaries

2.1 Universal Hamiltonians

2.1.1 Hamiltonian Encodings

Any simulation of a Hamiltonian H by another Hamiltonian \(H'\) must involve ‘encoding’ H in \(H'\) in some fashion. In [1], it was shown that any encoding map \({\mathcal {E}}(A)\) which satisfies three basic requirements

  1. i)

    \({\mathcal {E}}(A) = {\mathcal {E}}(A)^\dagger \) for all \(A \in \text {Herm}_n\)

  2. ii)

    \(\mathrm {spec}({\mathcal {E}}(A)) = \mathrm {spec}(A)\) for all \(A \in \text {Herm}_n\)

  3. iii)

    \({\mathcal {E}}(pA + (1-p)B) = p {\mathcal {E}}(A) + (1-p){\mathcal {E}}(B)\) for all \(A, B \in \text {Herm}_n\) and all \(p \in [0,1]\)

must be of the form

$$\begin{aligned} {\mathcal {E}}(A) = V \left( A \otimes P + {\overline{A}} \otimes Q \right) V^\dagger , \end{aligned}$$
(3)

where V is an isometry, \({\overline{A}}\) denotes complex conjugation, and P and Q are orthogonal projectors. Moreover, it is shown that, under any encoding of the form given in Eq. (3), \({\mathcal {E}}(H)\) will also preserve the measurement outcomes, time evolution and partition function of H.

A local encoding is an encoding which maps local observables to local observables, defined as follows.

Definition 2.1

(Local subspace encoding (Definition 13 from [1])) Let

$$\begin{aligned} {\mathcal {E}}: {\mathcal {B}}\left( \otimes _{j=1}^n {\mathcal {H}}_j \right) \rightarrow {\mathcal {B}}\left( \otimes _{j=1}^{n} {\mathcal {H}}'_j \right) \end{aligned}$$

be a subspace encoding. We say that the encoding is local if for any operator \(A_j \in {{\,\mathrm{Herm}\,}}({\mathcal {H}}_j)\) there exists \(A'_j \in {{\,\mathrm{Herm}\,}}({\mathcal {H}}'_j)\) such that:

$$\begin{aligned} {\mathcal {E}}(A_j \otimes \mathbb {1}) = (A'_j \otimes \mathbb {1}){\mathcal {E}}(\mathbb {1}). \end{aligned}$$

It is shown in [1] that if an encoding \({\mathcal {E}}(M) = V(M \otimes P + {\overline{M}} \otimes Q){V}^\dagger \) is local, then the isometry V can be decomposed into a tensor product of isometries \(V = \otimes _i V_i\), for isometries \(V_i: {\mathcal {H}}_i \otimes E_i \rightarrow {\mathcal {H}}'_i\), for some ancilla system \(E_i\).

In this paper, all of the encodings we work with are of the simpler form \({\mathcal {E}}(A) = VAV^\dagger \).

2.1.2 Hamiltonian Simulation

Building on encodings, [1] developed a rigorous formalism of Hamiltonian simulation, formalizing the notion of one many-body system reproducing the physics of another, including the case of approximate simulation and simulations within a subspace. We first describe the simpler special case of perfect simulation. If \(H'\) perfectly simulates H, then it exactly reproduces the physics of H below some energy cutoff \(\Delta \), where \(\Delta \) can be chosen arbitrarily large. For brevity, we denote the low-energy subspace of an operator A by \(S_{\le \Delta }(A) {:}{=}{{\,\mathrm{span}\,}}\{ \mathinner {|{\psi }\rangle } : A \mathinner {|{\psi }\rangle } = \lambda \mathinner {|{\psi }\rangle }\wedge \lambda \le \Delta \}\).

Definition 2.2

(Exact simulation, [1, Def. 20]). We say that \(H'\) perfectly simulates H below the cutoff energy \(\Delta \) if there is a local encoding \({\mathcal {E}}\) into the subspace \(S_{{\mathcal {E}}}\) such that

  1. i.

\(S_{{\mathcal {E}}} = S_{\le \Delta }(H')\), and

  2. ii.

    \(H'|_{\le \Delta } = {\mathcal {E}}(H)|_{S_{\mathcal {E}}}\).

We can also consider the case where the simulation is only approximate:

Definition 2.3

(Approximate simulation, [1, Def. 23]). Let \(\Delta ,\eta ,\epsilon >0\). A Hamiltonian \(H'\) is a \((\Delta , \eta , \epsilon )\)-simulation of the Hamiltonian H if there exists a local encoding \({\mathcal {E}}(M)=V(M \otimes P + {\overline{M}} \otimes Q){V}^\dagger \) such that

  1. i.

There exists an encoding \(\tilde{{\mathcal {E}}}(M)={\tilde{V}}(M \otimes P + {\overline{M}} \otimes Q){\tilde{V}}^\dagger \) into the subspace \(S_{\tilde{{\mathcal {E}}}}\) such that \(S_{\tilde{{\mathcal {E}}}} = S_{\le \Delta }(H')\) and \(\Vert {\tilde{V}} - V\Vert \le \eta \); and

  2. ii.

    \(\Vert H'_{\le \Delta } - \tilde{{\mathcal {E}}}(H)\Vert \le \epsilon \).

Note that the role of \(\tilde{{\mathcal {E}}}\) is to provide an exact simulation as per Definition 2.2. However, it might not always be possible to construct this encoding in a local fashion. The local encoding \({\mathcal {E}}\) in turn approximates \(\tilde{{\mathcal {E}}}\), such that the subspaces mapped to by the two encodings deviate by at most \(\eta \), while \(\epsilon \) controls how much the eigenvalues are allowed to differ.

If we are interested in whether an infinite family of Hamiltonians can be simulated by another, the notion of overhead becomes interesting: if the system size grows, how large is the overhead necessary for the simulation, in terms of the number of qudits, operator norm or computational resources? We capture this notion in the following definition.

Definition 2.4

(Simulation, [1, Def. 23]). We say that a family \({\mathcal {F}}'\) of Hamiltonians can simulate a family \({\mathcal {F}}\) of Hamiltonians if, for any \(H \in {\mathcal {F}}\) and any \(\eta , \epsilon > 0\) and \(\Delta \ge \Delta _0\) (for some \(\Delta _0 > 0\)), there exists \(H' \in {\mathcal {F}}'\) such that \(H'\) is a \((\Delta , \eta , \epsilon )\)-simulation of H.

We say that the simulation is efficient if, in addition, for H acting on n qudits and \(H'\) acting on m qudits, \(\Vert H'\Vert = {{\,\mathrm{poly}\,}}(n, 1 / \eta ,1 / \epsilon ,\Delta )\) and \(m = {{\,\mathrm{poly}\,}}(n, 1 / \eta ,1 / \epsilon ,\Delta )\); \(H'\) is efficiently computable given H, \(\Delta \), \(\eta \) and \(\epsilon \); each local isometry \(V_i\) in the decomposition of V is itself a tensor product of isometries which map to \({{\,\mathrm{O}\,}}(1)\) qudits; and there is an efficiently constructible state \(\mathinner {|{\psi }\rangle }\) such that \(P \mathinner {|{\psi }\rangle } = \mathinner {|{\psi }\rangle }\).

As already outlined, in [1] it is shown that approximate Hamiltonian simulation preserves important physical properties. We recollect the most important ones in the following.

Lemma 2.5

( [1, Lem. 27, Prop. 28, Prop. 29]). Let H act on \((\mathbb {C}^d)^{\otimes n}\). Let \(H'\) act on \((\mathbb {C}^{d'})^{\otimes m}\), such that \(H'\) is a \((\Delta , \eta , \epsilon )\)-simulation of H with corresponding local encoding \({\mathcal {E}}(M) = V(M \otimes P + {\overline{M}} \otimes Q)V^\dagger \). Let \(p = {{\,\mathrm{rank}\,}}(P)\) and \(q = {{\,\mathrm{rank}\,}}(Q)\). Then, the following holds true.

  1. i.

    Denoting with \(\lambda _i(H)\) (resp. \(\lambda _i(H')\)) the ith-smallest eigenvalue of H (resp. \(H'\)), then for all \(1 \le i \le d^n\), and all \((i-1)(p+q) \le j \le i (p+q)\), \(|\lambda _i(H) - \lambda _j(H')| \le \epsilon \).

  2. ii.

    The relative error in the partition function evaluated at \(\beta \) satisfies

    $$\begin{aligned} \frac{|{\mathcal {Z}}_{H'}(\beta ) - (p+q){\mathcal {Z}}_H(\beta ) |}{(p+q){\mathcal {Z}}_H(\beta )} \le \frac{(d')^m \mathrm {e}^{-\beta \Delta }}{(p+q)d^n \mathrm {e}^{-\beta \Vert H\Vert }} + (\mathrm {e}^{\epsilon \beta } - 1). \end{aligned}$$
    (4)
  3. iii.

    For any density matrix \(\rho '\) in the encoded subspace for which \({\mathcal {E}}(\mathbb {1})\rho ' = \rho '\), we have

    $$\begin{aligned} \Vert \mathrm {e}^{-\mathrm {i}H't}\rho '\mathrm {e}^{\mathrm {i}H't} - \mathrm {e}^{-\mathrm {i}{\mathcal {E}}(H)t}\rho '\mathrm {e}^{\mathrm {i}{\mathcal {E}}(H)t}\Vert _1 \le 2\epsilon t + 4\eta . \end{aligned}$$
    (5)
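To illustrate point (i) in the simplest possible setting, the following Python sketch (our own toy example, not part of [1]) builds an encoding \({\mathcal {E}}(H) = VHV^\dagger \) with a random isometry V (so \(p+q=1\)), gives everything outside the encoded subspace energy \(\Delta \), and checks that the low-lying spectrum of the simulator matches that of H exactly (i.e., \(\epsilon = 0\)):

```python
import numpy as np

rng = np.random.default_rng(1)
d, D, Delta = 4, 10, 100.0

# Random Hermitian "target" Hamiltonian on a d-dimensional space
H = rng.normal(size=(d, d))
H = (H + H.T) / 2

# Random isometry V: d orthonormal columns in a D-dimensional space
V, _ = np.linalg.qr(rng.normal(size=(D, d)))

# Simulator: acts as E(H) = V H V^dagger on the encoded subspace and
# assigns energy Delta (chosen above ||H||) to the orthogonal complement
H_sim = V @ H @ V.T + Delta * (np.eye(D) - V @ V.T)

# The d lowest eigenvalues of H_sim reproduce spec(H) exactly
low = np.sort(np.linalg.eigvalsh(H_sim))[:d]
assert np.allclose(np.sort(np.linalg.eigvalsh(H)), low)
```

This is the exact-simulation case; the approximate case of Lemma 2.5 allows the encoded block to deviate by \(\epsilon \) and the subspace by \(\eta \).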

Definition 2.4 naturally leads to the question of when a family of Hamiltonians is so versatile that it can simulate any other Hamiltonian; in that case, we call the family universal.

Definition 2.6

(Universal Hamiltonians [1, Def. 26]). We say that a family of Hamiltonians is a universal simulator—or simply is universal—if any (finite-dimensional) Hamiltonian can be simulated by a Hamiltonian from the family. We say that the universal simulator is efficient if the simulation is efficient for all local Hamiltonians.

2.2 Circuit-to-Hamiltonian Mappings

The key idea behind our universal constructions is that it is possible to encode computation into the ground state of local Hamiltonians. This technique was first proposed by Feynman in 1985 and is the foundation for many prominent results in Hamiltonian complexity theory, such as QMA-hardness of the local Hamiltonian problem [3, 10].

For the constructions we develop in this paper, we will make use of the ability to encode an arbitrary quantum computation into the ground state of a local Hamiltonian. These are often called ‘circuit-to-Hamiltonian mappings,’ though the mappings may involve models of quantum computation other than the circuit model. These Hamiltonians are typically constructed in such a way that their ground states are ‘computational history states.’ A very general definition of history states was given in [11]; we will only require the simpler ‘standard’ history states here.

Definition 2.7

(Computational history state) A computational history state \(\mathinner {|{\Phi }\rangle }_{CQ} \in {\mathcal {H}}_C \otimes {\mathcal {H}}_Q\) is a state of the form

$$\begin{aligned} \ \mathinner {|{\Phi }\rangle }_{CQ} = \frac{1}{\sqrt{T}} \sum _{t=1}^{T} \mathinner {|{\psi _t}\rangle }\mathinner {|{t}\rangle }, \end{aligned}$$

where \(\{\mathinner {|{t}\rangle }\}\) is an orthonormal basis for \({\mathcal {H}}_C\) and \(\mathinner {|{\psi _t}\rangle } = \Pi _{i=1}^tU_i\mathinner {|{\psi _0}\rangle }\) for some initial state \(\mathinner {|{\psi _0}\rangle }\in {\mathcal {H}}_Q\) and set of unitaries \(U_i \in {\mathcal {B}}({\mathcal {H}}_Q)\).

\({\mathcal {H}}_C\) is called the clock register and \({\mathcal {H}}_Q\) is called the computational register. If \(U_t\) is the unitary transformation corresponding to the tth step of a quantum computation, then \(\mathinner {|{\psi _t}\rangle }\) is the state of the computation after t steps. We say that the history state \(\mathinner {|{\Phi }\rangle }_{CQ}\) encodes the evolution of the quantum computation.

Note that \(U_t\) need not be a gate in the quantum circuit model. It could also be, e.g., one time-step of a quantum Turing machine, a time-step in some more exotic model of quantum computation [12], or an isometry [13]. In the particular constructions we make use of in this work, \(U_t\) will be a time-step of a quantum Turing machine.
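As a concrete numerical illustration of Definition 2.7 (our own sketch, with hypothetical names), the following Python snippet assembles a history state for a short sequence of unitaries and checks that it is normalized:

```python
import numpy as np

def history_state(psi0, unitaries):
    """Build |Phi> = (1/sqrt(T)) * sum_t |psi_t>|t>, where
    |psi_t> = U_t ... U_1 |psi_0> and {|t>} is the clock basis."""
    T = len(unitaries)
    dim = len(psi0)
    psi = np.asarray(psi0, dtype=complex)
    Phi = np.zeros(dim * T, dtype=complex)
    for t, U in enumerate(unitaries):
        psi = U @ psi                     # state of the computation after this step
        clock = np.zeros(T)
        clock[t] = 1.0                    # clock basis state |t>
        Phi += np.kron(psi, clock)        # |psi_t> tensor |t>
    return Phi / np.sqrt(T)

# Two-step "computation": a Hadamard followed by a bit flip
Hd = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]], dtype=float)
Phi = history_state(np.array([1.0, 0.0]), [Hd, X])
assert abs(np.linalg.norm(Phi) - 1.0) < 1e-12
```

Since the \(\mathinner {|{\psi _t}\rangle }\) are normalized and the clock states orthonormal, the resulting superposition automatically has unit norm.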

3 Overview of Construction

3.1 High-level Outline of the Construction

As mentioned in Sect. 2.2, the key technique we make use of in our universality constructions is the ability to encode computations into the ground states of local Hamiltonians. The model of computation we encode is the quantum Turing machine (QTM) model—standard techniques for encoding QTMs in local Hamiltonians give translationally invariant Hamiltonians [14, 15].

In both the constructions we develop in this work, a description of the Hamiltonian to be simulated (the ‘target’ Hamiltonian, \(H_{\mathrm {target}}\)) is encoded in the binary expansion of some natural number \(x \in \mathbb {N}\). Details of this encoding are given in Sect. 3.2. The natural number x is then itself encoded in some parameter of the universal Hamiltonian (see Sect. 3.3 for two methods of encoding natural numbers in parameters of universal Hamiltonians).

The Hamiltonian we use to construct the universal model has as its ground states computational history states (cf. Definition 2.7) which encode two QTMs (\(M_1\) and \(M_\mathrm {PE}\)) sharing a work tape. The two computations are ‘dovetailed’ together: the computation of \(M_1\) occurs first, and its result is used as input for \(M_\mathrm {PE}\). The first QTM, \(M_1\), extracts the binary expansion of x from the parameter of the Hamiltonian. At the end of \(M_1\)’s computation, the binary expansion of x is written on the work tape which \(M_1\) shares with \(M_\mathrm {PE}\). An outline of the methods we use to extract x and write it on the Turing machine tape is given in Sect. 3.3.

The second QTM, \(M_\mathrm {PE}\), reads in x, which contains a description of \(H_{\mathrm {target}}\), from the work tape it shares with \(M_1\). It also reads in an input state \(\mathinner {|{\psi }\rangle }\), which is unconstrained by the computation (it can be thought of as playing the same role as a witness in a QMA verification circuit). It then carries out phase estimation on \(\mathinner {|{\psi }\rangle }\) with respect to the unitary generated by \(H_{\mathrm {target}}\).

The Hamiltonian which encodes \(M_1\) and \(M_\mathrm {PE}\) has a zero-energy degenerate ground space, spanned by history states with all possible input states \(\mathinner {|{\psi }\rangle }\). In order to recreate the spectrum of \(H_{\mathrm {target}}\), we need to break this degeneracy. We achieve this by adding one-body projectors to the universal Hamiltonian which give the correct energy to the output of \(M_\mathrm {PE}\) to reconstruct the spectrum of \(H_{\mathrm {target}}\).

With this construction, the energy levels of the universal Hamiltonian recreate those of \(H_{\mathrm {target}}\). To ensure that the eigenstates are also correctly simulated, before \(M_1\) carries out its computation it ‘idles’ in its initial state for some time L. By choosing L large enough, we show that this construction can approximately simulate any target Hamiltonian. A more detailed sketch of how we use idling and phase estimation to achieve simulation is given in Sect. 3.4, while rigorous proofs are given in Sect. 4.

3.2 A Digital Representation of a Local Hamiltonian

As discussed in Sect. 3.1, we need to encode a description of the target Hamiltonian \(H_{\mathrm {target}}\) in some parameter of the universal Hamiltonian. In Sect. 3.3, we outline the two methods we use to encode a natural number in the parameter of a Hamiltonian. But how do we represent \(H_{\mathrm {target}}=\sum _{i=1}^m h_i\) in the binary expansion of a natural number \(x\in \mathbb {N}\), irrespective of its origin?

We will assume that \(H_{\mathrm {target}}\) is a k-local Hamiltonian, acting on n spins of local dimension d. We emphasize that k can be taken to be n, i.e., the system size—and therefore we can simulate any Hamiltonian, not just local ones. However, we keep track of the locality parameter k as it is relevant when deriving the overhead of our simulations.

Every value needed to specify the k-local simulated system \(H_{\mathrm {target}}\) will be represented in Elias-\(\gamma '\) coding, a simple self-delimiting binary code which can encode all natural numbers [16, 17]. For the purpose of the encoding, we will label the n spins in the system to be simulated by integers \(i = 1,\ldots ,n\).

The encoding of \(H_{\mathrm {target}}\) begins with three meta-parameters: n (the number of spins), k (the locality) and m (the number of k-local terms). Each of the m k-local terms in \(H_{\mathrm {target}}\) is then specified by giving the labels of the k spins involved in that interaction, followed by a description of each entry of the \(d^k \times d^k\) Hermitian matrix describing that interaction. Each matrix entry is specified by giving two integers a and b, from which the entry can be recovered by calculating \(a \sqrt{2} - b\), accurate up to a small error.

Specifying \(H_{\mathrm {target}}\) to accuracy \(\delta \) requires each such matrix entry to be specified to accuracy \(\delta /(md^{2k})\). Therefore, the length of the description of \(H_{\mathrm {target}}\) is

$$\begin{aligned} md^{2k}\log \left( \Vert H_{\mathrm {target}}\Vert md^{2k}/\delta \right) ={{\,\mathrm{poly}\,}}\left( n,d^k,\log (\Vert H_{\mathrm {target}}\Vert /\delta ) \right) . \end{aligned}$$
(6)

Finally, the remaining digits of x specify \(\Xi \), the bit precision to which the phase estimation algorithm should calculate the energies (i.e., we require QPE to extract \(\Xi \) binary digits), and L, the length of time the system should ‘idle’ in its initial state before beginning its computation.

So, the binary expansion B(x) of x has the following form:

$$\begin{aligned} B(x) {:}{=}\gamma '(n) \cdot \gamma '(k) \cdot \gamma '(m) \cdot \left[ \gamma '(i)^{\cdot k} \cdot \left( \gamma '(a_j) \cdot \gamma '(b_j) \right) ^{\cdot 4^k}\right] ^{\cdot m} \cdot \gamma '(\Xi ) \cdot \gamma '(L). \end{aligned}$$
(7)

Here, \(\gamma '(n)\) denotes n in Elias-\(\gamma '\) coding and \(\cdot \) denotes concatenation of bit strings.
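To make the self-delimiting property concrete, here is a short Python sketch of a gamma-style code (we implement the common ‘zeros prefix, then binary’ form; the precise Elias-\(\gamma '\) variant of [16, 17] may differ in details, and the function names are our own):

```python
def encode(n):
    """Self-delimiting code: N zeros, then the (N+1)-bit binary of n,
    where N = floor(log2 n)."""
    assert n >= 1
    binary = bin(n)[2:]
    return "0" * (len(binary) - 1) + binary

def decode_stream(bits):
    """Decode a concatenation of codewords back into a list of integers."""
    out, i = [], 0
    while i < len(bits):
        zeros = 0
        while bits[i + zeros] == "0":   # count the length prefix
            zeros += 1
        out.append(int(bits[i + zeros : i + 2 * zeros + 1], 2))
        i += 2 * zeros + 1
    return out

# Concatenations remain uniquely decodable -- the property B(x) relies on
stream = encode(5) + encode(1) + encode(13)
assert decode_stream(stream) == [5, 1, 13]
```

Unique decodability of concatenated codewords is exactly what allows the single natural number x to carry the whole sequence of values in Eq. (7).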

With regard to the identification of a natural number n with a pair (a, b) via \(n \approx \sqrt{2} a - b\), we observe that it is straightforward to recover n from a and b by basic arithmetic. The other direction works as follows.

Remark 3.1

Let \(n\in \mathbb {N}\), and let \(\Xi \in \mathbb {N}\) denote a precision parameter. Then, we can find numbers \(a,b\in \mathbb {N}\) such that

$$\begin{aligned} \left| n - \sqrt{2}a + b\right| \le 2^{-\Xi }, \end{aligned}$$

and the algorithm runs in time \({{\,\mathrm{O}\,}}({{\,\mathrm{poly}\,}}(\Xi , \log _2 n))\).

Proof

We solve \(2^\Xi n = \lfloor 2^\Xi \sqrt{2} \rfloor a - 2^\Xi b\) as a linear Diophantine equation in the variables a and b, with largest coefficient \({{\,\mathrm{O}\,}}(2^\Xi n)\). This can be done in polynomial time in the bit precision of the largest coefficient, for instance, by using the extended Euclidean algorithm [18]. \(\square \)
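The proof above can be mirrored in a few lines of Python (a sketch with our own naming; we verify only the exact Diophantine relation \(\lfloor 2^\Xi \sqrt{2}\rfloor a - 2^\Xi b = 2^\Xi n\), from which the stated approximation follows):

```python
from math import isqrt

def ext_gcd(a, b):
    """Extended Euclidean algorithm: returns (g, x, y) with a*x + b*y == g."""
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

def sqrt2_pair(n, xi):
    """Find a, b in N with floor(2**xi * sqrt(2)) * a - 2**xi * b == 2**xi * n."""
    A, B, C = isqrt(2 * 4**xi), 2**xi, 2**xi * n
    g, x, y = ext_gcd(A, B)                # A*x + B*y == g
    assert C % g == 0                      # g is a power of 2 dividing C
    a0, b0 = x * (C // g), -y * (C // g)   # one integer solution of A*a - B*b = C
    # Shift along the solution lattice until both a and b are nonnegative
    sa, sb = B // g, A // g
    t = max(-(a0 // sa), -(b0 // sb), 0) + 1
    return a0 + t * sa, b0 + t * sb

a, b = sqrt2_pair(5, 20)
assert isqrt(2 * 4**20) * a - 2**20 * b == 2**20 * 5 and a >= 0 and b >= 0
```

Shifting by multiples of \((B/g, A/g)\) leaves \(Aa - Bb\) unchanged, so a solution over the naturals always exists.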

In Sect. 4, we describe a construction to \((\Delta ',\eta ,\epsilon ')\)-simulate the Hamiltonian described by x, but note that this will only give a \((\Delta ',\eta ,\epsilon '+\delta )\)-simulation of the actual target Hamiltonian \(H_{\mathrm {target}}\).

3.3 Encoding the Target Hamiltonian in Parameters of the Simulator Hamiltonian

In Sect. 3.2, we describe how we encode the information about the Hamiltonian we want to simulate, \(H_{\mathrm {target}}\), in a natural number x. Now we require a method to encode x in some parameter of the universal Hamiltonian, and a method to write its binary expansion on the Turing machine tape shared by \(M_1\) and \(M_\mathrm {PE}\). We develop two constructions, building on the mappings in [14] and [15]. The first construction is efficient in terms of the number of spins in the simulator system, while the second is not efficient but requires fewer parameters to specify the universal model. In both cases, the computation encoded in the ground state of the Hamiltonian is a QTM computation, and the mapping from a QTM to the Hamiltonian gives a translationally invariant Hamiltonian.

3.3.1 Encoding the Target Hamiltonian in a Phase of the Simulator Hamiltonian

First, we consider the construction building on the work in [14]. Here, we encode the natural number \(x \in \mathbb {N}\) in a phase \(\phi = x/2^{\lceil \log _2 x \rceil }\) of the Hamiltonian.
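As a toy illustration of this encoding (hypothetical helper names; we use the bit length of x, which equals \(\lceil \log _2 x \rceil \) whenever x is not a power of two):

```python
def phase_from_x(x):
    """Encode x in a phase: phi = x / 2**(bit length of x), so phi lies in
    [1/2, 1) and the binary expansion of phi is exactly the bit string of x."""
    return x / 2 ** x.bit_length()

def x_from_phase(phi, num_bits):
    """Recover x by reading num_bits binary digits of phi -- the role played
    by the phase-estimation QTM M_1 in the construction."""
    return int(phi * 2 ** num_bits)

x = 0b110101001
phi = phase_from_x(x)
assert x_from_phase(phi, x.bit_length()) == x
```

In the actual construction, \(M_1\) extracts these binary digits of \(\phi \) by quantum phase estimation rather than by classical arithmetic.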

The Hamiltonian for this construction is given by \(H = \sum _{i=1}^N h^{(i,i+1)}\) where N is the number of spins in the simulator system, and h is a two-body interaction of the form [21, Theorem 32]:

$$\begin{aligned} h = A + (e^{i\pi \phi } B + e^{i \pi 2^{-|\phi |}}C + \mathrm {h.c.}) \end{aligned}$$
(8)

where A is a fixed Hermitian matrix and B, C are fixed non-Hermitian matrices. For a detailed construction of the terms in the Hamiltonian, we refer the interested reader to [21, Section 4].

The circuit-to-Hamiltonian map encodes two Turing machine computations ‘dovetailed’ together, where the two Turing machines share a work tape. The first computation is a phase estimation algorithm: it extracts the phase \(\phi \) from the Hamiltonian and writes its binary expansion onto the work tape. The second computation is outlined in Sect. 3.4.

In order to extract \(a\) digits from a phase \(\phi =0.\phi _1\phi _2\cdots \phi _a\phi _{a+1}\cdots \), we require a runtime of \(2^a\). In our case, we have \(a = |x| = {{\,\mathrm{poly}\,}}\left( n,d^k,\log (\Vert H_{\mathrm {target}}\Vert /\delta ) \right) \), where |x| denotes the number of digits in the binary expansion of x. As our computation is encoded in a computational history state, this in turn means that the spectral gap of the history-state Hamiltonian necessarily closes as \({{\,\mathrm{O}\,}}(2^{-{{\,\mathrm{poly}\,}}\left( n,d^k,\log (\Vert H_{\mathrm {target}}\Vert /\delta ) \right) })\) [11, 19, 20]. This scaling of the spectral gap means that the universal model constructed via this method is not efficient in terms of the norm of the simulator system (see Theorem 4.5 for a full discussion of the scaling).

However, it is important to note that using the construction from [14] it is possible to encode a computation with exponential runtime into a Hamiltonian on polynomially many spins. Details of the construction are given in [21, Section 4.5] (in particular the relevant scaling is discussed on [21, Page 81]). We will not give the details of the construction here, but note that it encodes a Turing machine which runs for \({{\,\mathrm{O}\,}}(N\exp (N))\) time steps in a Hamiltonian acting on N spins [21, Proposition 45]. Therefore, the universal model constructed via this method is efficient in terms of the number of spins in the simulator system.

3.3.2 Encoding the Target Hamiltonian in the Size of the Simulator System

Our second construction builds on the mapping in [15]. Here, we encode the description of \(H_{\mathrm {target}}\) in the binary expansion of N, the number of spins the universal Hamiltonian acts on.

The circuit-to-Hamiltonian map encodes two Turing machine computations ‘dovetailed’ together, where again the two Turing machines share a work tape. The first Turing machine is a binary counter Turing machine. After it has finished running, the binary expansion of N is written on the Turing machine’s work tape. In our construction, the binary expansion of N contains the description of \(H_{\mathrm {target}}\). We will discuss the second computation in Sect. 3.4.

The binary counter QTM takes time N to write out the binary expansion of N on its work tape. Since \(H_{\mathrm {target}}\) is encoded in the binary expansion of N, both this runtime and the size of the simulator system are exponential in the size of the target system. Moreover, since the runtime is exponential in the size of the target system, the spectral gap of the universal Hamiltonian closes exponentially fast. Therefore, the universal model constructed via this method is efficient neither in the number of spins nor in the norm of the simulator system. See Theorem 4.6 for a full discussion of the scaling of this universal model.
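The binary-counter idea can be sketched in a few lines (our own classical toy, not the QTM construction of [15]): each step increments a little-endian tape, and after N steps the tape holds the binary expansion of N.

```python
def increment(tape):
    """One counter step: increment a little-endian list of bits in place."""
    i = 0
    while i < len(tape) and tape[i] == 1:
        tape[i] = 0                     # carry propagates to the next cell
        i += 1
    if i == len(tape):
        tape.append(1)                  # counter grows by one bit
    else:
        tape[i] = 1

N = 37
tape = []
for _ in range(N):
    increment(tape)

# Read most-significant bit first: the tape now spells out N in binary
assert tape[::-1] == [int(c) for c in bin(N)[2:]]
```

This is why the simulator system must be exponentially large: an n-bit description of \(H_{\mathrm {target}}\) requires \(N \approx 2^n\) spins (and counter steps) to appear on the tape.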

In this case, the interactions of the Hamiltonian are entirely fixed: they enforce that the ground state of the Hamiltonian is a history state encoding a QTM computation (for a detailed construction of the terms in the Hamiltonian, we refer readers to [15]). There are two additional global parameters in the Hamiltonian which depend on the accuracy of the simulation; we defer discussion of those parameters to the technical proofs of Lemma 4.3 and Theorem 4.6. All the information about the target Hamiltonian (the Hamiltonian to be simulated) is encoded in the binary expansion of N, the number of spins in the simulator system.

3.4 Dovetailing for Simulation

After the computation carried out by \(M_1\) has finished, the binary expansion of x is written on the work tape shared by \(M_1\) and \(M_\mathrm {PE}\). We construct (using standard techniques from [14, 15]) a Hamiltonian such that the two Turing machines \(M_1\) and \(M_\mathrm {PE}\) share a work tape. At the beginning of its computation, \(M_\mathrm {PE}\) reads in a description of the target Hamiltonian H that we wish to simulate. \(M_\mathrm {PE}\) then carries out phase estimation on some input state \(\mathinner {|{\psi }\rangle }\) (left unconstrained, just like a QMA witness) with respect to the unitary generated by the target Hamiltonian, \(U = \mathrm {e}^{\mathrm {i}H\tau }\) for some \(\tau \) such that \(\Vert H\tau \Vert < 2 \pi \). It then outputs the eigenphase \(\phi \) as a pair of natural numbers (a, b) such that \(\phi =a\sqrt{2}-b\) up to the precision \(2^{-\Xi }\) (which can be done efficiently via Remark 3.1).
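The relationship between U and H that \(M_\mathrm {PE}\) exploits can be checked numerically (a sketch with hypothetical names; we diagonalize directly instead of running a QPE circuit, and pick \(\tau \) conservatively so no eigenphase wraps around):

```python
import numpy as np

rng = np.random.default_rng(0)

# Random two-qubit "target" Hamiltonian
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (A + A.conj().T) / 2

# Choose tau with ||H * tau|| <= pi/2 < 2*pi, so every eigenphase of
# U = exp(i*H*tau) stays strictly inside (-pi, pi) and determines
# the corresponding eigenvalue of H unambiguously
tau = np.pi / (2 * np.linalg.norm(H, 2))
evals, evecs = np.linalg.eigh(H)
U = evecs @ np.diag(np.exp(1j * evals * tau)) @ evecs.conj().T

phases = np.angle(np.linalg.eigvals(U))     # eigenphases in (-pi, pi]
recovered = np.sort(phases / tau)
assert np.allclose(recovered, evals)
```

In the construction itself, these eigenphases are what QPE writes out to \(\Xi \) bits, which the one-body projectors then convert into the corresponding energy penalties.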

The ground space of the Hamiltonian which encodes the computation of \(M_1\) and \(M_\mathrm {PE}\) has zero energy and is spanned by history states in a superposition over all possible initial states \(\mathinner {|{\psi }\rangle }\). In general the Hamiltonian we want to simulate does not have a highly degenerate zero energy ground state, so we need to break this degeneracy and construct the correct spectrum for \(H_{\mathrm {target}}\). In order to break the degeneracy and reconstruct the spectrum of \(H_{\mathrm {target}}\), we add one-body projectors to the universal Hamiltonian, which are tailored such that the QPE output (a, b) identifies the correct energy penalty to inflict.

In order to ensure that the encoding of \(H_{\mathrm {target}}\) in the universal Hamiltonian is local, we make use of an idea originally from [4] and used recently in [5,6,7], which has been called ‘idling to enhance coherence.’ Before carrying out the phase-estimation computation, the system ‘idles’ in its initial state for time L. By choosing L appropriately large, we can ensure that with high probability the input spins (the spins which form the unconstrained input \(\mathinner {|{\psi }\rangle }\) to \(M_\mathrm {PE}\)) are found in their initial states. This means that (with high probability) there is a subset of spins in the simulator system whose state directly maps to the state which is being simulated in the target system. This ensures that the encoding is (approximately) local (see Lemma 4.3 for a detailed analysis of how idling is used to achieve universality).

4 Universality

4.1 Translationally Invariant Universal Models in 1D

In this section, we prove our main result: there exist translationally invariant, nearest neighbor Hamiltonians acting on a chain of qudits, which are universal quantum simulators.

All the ‘circuit-to-Hamiltonian’ mappings we make use of in this work are what are known as ‘standard form Hamiltonians’: a certain class of circuit-to-Hamiltonian constructions defined in [22]. We refer interested readers to [22] for the full definition, and simply note that it encompasses the Turing-machine-based mappings which we make use of in this work [14, 15]. In [22], the following result was shown, which we will make use of in our proofs:

Lemma 4.1

(Standard form ground states; restatement of [22, Lem. 5.8, Lem. 5.10]). Let \(H_\mathrm {SF}\) be a standard form Hamiltonian encoding a computation U, which takes (classical) inputs from a Hilbert space \({\mathcal {S}}\), and which sets an output flag with certainty if it is given an invalid input. For \(\mathinner {|{\psi _\mu }\rangle } \in {\mathcal {S}}\) and \(\prod _{t=1}^T U_t = U\) we define

$$\begin{aligned} \mathinner {|{\Phi (U,\psi _\mu )}\rangle } {:}{=}\frac{1}{\sqrt{T}} \sum _{t=1}^{T} U_t\ldots U_1 \mathinner {|{\psi _\mu }\rangle }\mathinner {|{t}\rangle }. \end{aligned}$$

Then, \({\mathcal {L}} = \text {span}\{ \mathinner {|{\Phi (U,\psi _\mu )}\rangle } \}_{\mu =1}^{d^n}\) defines the kernel of \(H_\mathrm {SF}\), i.e., \(H_\mathrm {SF}|_{{\mathcal {L}}} = 0\). The smallest nonzero eigenvalue of \(H_\mathrm {SF}\) scales as \(1 - \cos (\pi / 2T)\).
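The \(\Theta (1/T^2)\) scaling of this gap can be checked numerically on a toy Feynman-clock hopping Hamiltonian for a trivial computation with free boundary conditions; its gap is \(1-\cos (\pi /T)\) rather than \(1-\cos (\pi /2T)\), but the quadratic scaling is the same. A minimal sketch:

```python
import numpy as np

# Hopping ("Feynman clock") Hamiltonian for a trivial computation:
# H = (1/2) sum_t (|t><t| + |t+1><t+1| - |t><t+1| - |t+1><t|).
T = 50
H = np.zeros((T, T))
for t in range(T - 1):
    H[t, t] += 0.5
    H[t + 1, t + 1] += 0.5
    H[t, t + 1] -= 0.5
    H[t + 1, t] -= 0.5

ev = np.linalg.eigvalsh(H)
gap = ev[1] - ev[0]   # equals 1 - cos(pi/T) = Theta(1/T^2)
```

The uniform superposition over clock states spans the kernel, and the gap comfortably exceeds the \(1/2T^2\) lower bound used later in the proof of Lemma 4.3.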

We also require a digital quantum simulation algorithm, summarized in the following lemma:

Lemma 4.2

(Implementing a Local Hamiltonian Unitary). For a k-local Hamiltonian \(H=\sum _{i=1}^m h_i\) on an n-partite Hilbert space of local dimension d, where \(m={{\,\mathrm{poly}\,}}(n)\), there exists a QTM that implements a unitary \({{\tilde{U}}}\) such that

$$\begin{aligned} {{\tilde{U}}} = \mathrm {e}^{\mathrm {i}H t} + {{\,\mathrm{O}\,}}(\epsilon ), \end{aligned}$$

and which requires time \({{\,\mathrm{poly}\,}}(1/\epsilon , d^k, \Vert H \Vert t, n)\).

Proof

Follows directly from [23, 24]. \(\square \)

The polynomial time bound in Lemma 4.2 suffices for our purposes; a tighter (and more complicated) bound, also for the more general case of sparse Hamiltonians, can be found in [25].
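As a concrete (classical, dense-matrix) illustration of the kind of product-formula simulation underlying Lemma 4.2, the sketch below compares a first-order Trotter product against exact evolution for a small 2-local Hamiltonian; the error shrinks polynomially in the step count r, mirroring the \({{\,\mathrm{poly}\,}}(1/\epsilon )\) cost. This is a numerical sketch, not the QTM of the lemma; all names and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_herm(d):
    a = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (a + a.conj().T) / 2

def u_exp(h, s):
    """Exact e^{i h s} for Hermitian h, via eigendecomposition."""
    w, v = np.linalg.eigh(h)
    return (v * np.exp(1j * w * s)) @ v.conj().T

# A 2-local Hamiltonian H = h01 + h12 on three qubits.
I2 = np.eye(2)
h01 = np.kron(rand_herm(4), I2)
h12 = np.kron(I2, rand_herm(4))
H = h01 + h12

t, r = 1.0, 500                      # evolution time, Trotter steps
U_exact = u_exp(H, t)
step = u_exp(h01, t / r) @ u_exp(h12, t / r)
U_trot = np.linalg.matrix_power(step, r)

# First-order product-formula error, O(t^2 ||[h01, h12]|| / r)
err = np.linalg.norm(U_trot - U_exact, 2)
```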

We can now start our main analysis by proving that ‘dovetailing’ quantum computations—rigorously defined and constructed in [14, Lem. 22]—can be used to construct universal simulators.

Lemma 4.3

(Dovetailing for simulation). Let \(M_1\) be a QTM which writes out the binary expansion of some \(x \in \mathbb {N}\) on its work tape. Assume there exists a standard form Hamiltonian which encodes the Turing machine \(M_1\). Then, there also exists a standard form Hamiltonian \(H_{\mathrm {SF}}(x)\), which encodes the computation \(M_1\) dovetailed with a QTM \(M_\mathrm {PE}\), such that the family of Hamiltonians

$$\begin{aligned} H_{\mathrm {univ}}(x) = \Delta H_{\mathrm {SF}}(x) + T \sum _{i=0}^{N-1} \left( \sqrt{2} \Pi _{\alpha } - \Pi _{\beta } \right) _i \end{aligned}$$
(9)

can simulate any quantum Hamiltonian. Here \(\Delta \) and T are parameters of the model, and \(\Pi _{\alpha }\) and \(\Pi _{\beta }\) are one-body projectors.

Proof of Lemma 4.3

To prove this, we show that \(H_{\mathrm {univ}}(x)\) satisfies the definition of an approximate simulation of an arbitrary ‘target Hamiltonian’ \(H_{\mathrm {target}}\), to any desired accuracy. We break the proof into multiple parts. First, we construct a history state Hamiltonian \(H_{\mathrm {SF}}(x)\) which encodes two Turing machine computations: \(M_1\), which extracts a description of \(H_{\mathrm {target}}\) from a parameter of \(H_{\mathrm {SF}}\), and \(M_\mathrm {PE}\), which carries out phase estimation on the unitary generated by \(H_{\mathrm {target}}\). Then, we define the one-body projectors \(\Pi _\alpha \) and \(\Pi _\beta \), which break the ground space degeneracy of \(H_{\mathrm {SF}}\) and inflict just the right amount of energy penalty to approximately reconstruct the spectrum of \(H_{\mathrm {target}}\) in its entirety.

Construction of \(H_{\mathrm {SF}}\). \(H_{\mathrm {SF}}\) is a standard form history state Hamiltonian with ground space as laid out in Lemma 4.1. The local state space of the spins on which \(H_{\mathrm {SF}}\) acts is divided into multiple ‘tracks.’ There are a constant number of these, hence a constant local Hilbert space dimension; the exact number depends on the standard form construction being used. Each track serves its own purpose, as outlined in Table 1. See [14, 15] for more detail.

Table 1 Local Hilbert space decomposition for \(H_{\mathrm {SF}}\)

The QTM \(M_\mathrm {PE}\) reads in the description of \(H_{\mathrm {target}}\)—provided as the integer \(x\in \mathbb {N}\) output by the Turing machine \(M_1\) whose work tape it shares. \(M_\mathrm {PE}\) further reads in the unconstrained input state \(\mathinner {|{\psi }\rangle }\) (see Table 1 for details of the local Hilbert space decomposition). But instead of proceeding immediately, \(M_\mathrm {PE}\) idles for L time-steps (where L is specified in the input string x, as explained in Sect. 3.2), before proceeding to carry out the quantum phase estimation algorithm.

The quantum phase estimation algorithm is carried out with respect to the unitary \(U = \mathrm {e}^{\mathrm {i}H_{\mathrm {target}}\tau }\) for some \(\tau \) such that \(\Vert H_{\mathrm {target}}\tau \Vert < 2\pi \). It takes as input an eigenvector \(\mathinner {|{u}\rangle }\) of U, and calculates the eigenphase \(\phi _u\). The output of \(M_\mathrm {PE}\) is then the pair of integers \((a_u,b_u)\) (corresponding to the extracted phase \(\phi _u=\sqrt{2} a_u - b_u\) as explained in Remark 3.1), specified in binary on an output track. To calculate \(\lambda _u\)—the eigenvalue of \(H_{\mathrm {target}}\)—to accuracy \(\epsilon \) requires determining \(\phi _u\) to accuracy \({{\,\mathrm{O}\,}}(\epsilon /\Vert H_{\mathrm {target}}\Vert )\), which takes \({{\,\mathrm{O}\,}}(\Vert H_{\mathrm {target}}\Vert /\epsilon )\) uses of \(U=\mathrm {e}^{\mathrm {i}H_{\mathrm {target}}\tau }\). The unitary U must thus be implemented to accuracy \({{\,\mathrm{O}\,}}(\epsilon / \Vert H_{\mathrm {target}}\Vert )\), which is done using Lemma 4.2; the latter introduces an overhead \({{\,\mathrm{poly}\,}}(n,d^k,\Vert H_{\mathrm {target}}\Vert ,\tau ,1/\epsilon )\) in the system size n, local dimension d, locality k, and target accuracy \(\epsilon \). The error overhead of \({{\,\mathrm{poly}\,}}(1/\epsilon )\) due to the digital simulation of the unitary is thus polynomial in the precision, as are the \(\propto 1/\epsilon \) repetitions required by the QPE algorithm. The whole procedure takes time

$$\begin{aligned} T_\mathrm {PE}{:}{=}{{\,\mathrm{poly}\,}}(d^k, \Vert H_{\mathrm {target}}\Vert /\epsilon ,n). \end{aligned}$$
(10)
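The \(1/\epsilon \) cost quoted above can also be seen in the textbook phase-estimation outcome distribution: with t ancilla qubits (grid spacing \(2^{-t}\)), the most likely outcome is the grid point nearest the true phase, with probability at least \(4/\pi ^2\). A sketch (the specific phase and t below are arbitrary illustrative choices):

```python
import numpy as np

def qpe_distribution(phi, t):
    """Outcome distribution of textbook QPE with t ancillas:
    p(k) = |(1/2^t) sum_x exp(2*pi*i*x*(phi - k/2^t))|^2."""
    M = 2**t
    x = np.arange(M)
    return np.array([abs(np.exp(2j * np.pi * x * (phi - k / M)).sum() / M)**2
                     for k in range(M)])

phi, t = 0.3183, 6
p = qpe_distribution(phi, t)
```

Halving the phase error requires one more ancilla and doubles the number of controlled-U applications, which is the \(\propto 1/\epsilon \) repetition cost.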

In our construction, the input to \(M_\mathrm {PE}\) is not restricted to be an eigenvector \(\mathinner {|{u}\rangle }\) of U, but it can always be decomposed as \(\mathinner {|{\psi }\rangle } = \sum _u m_u \mathinner {|{u}\rangle }\). By linearity, the output of \(M_\mathrm {PE}\) will then be a superposition in which the output \((a_u,b_u)\) occurs with amplitude \(m_u\).

After \(M_\mathrm {PE}\) has finished its computation, its head returns to the end of the chain. A dovetailed counter then decrements \(a_u, a_u-1, \ldots , 0\) and \(b_u, b_u-1, \ldots , 0\). For each timestep in the counter \(a_u, a_u-1, \ldots , 0\), the Turing machine head changes one spin to a special flag state \(\mathinner {|{\Omega _a}\rangle }\) which does not appear anywhere else in the computation, while for each timestep in the counter \(b_u, b_u-1, \ldots , 0\) it changes one spin to a different flag state \(\mathinner {|{\Omega _b}\rangle }\). (See, e.g., [26, Lem. 16] for a construction of a Turing machine with these properties.)

By Lemma 4.1, the ground space \({\mathcal {L}}\) of \(H_{\mathrm {SF}}\) is spanned by computational history states as given in Definition 2.7 and is degenerate since any input state \(\mathinner {|{\psi }\rangle }\) yields a valid computation. Therefore,

$$\begin{aligned} \mathrm {ker}(H_{\mathrm {SF}}) = {\mathcal {L}} = \mathrm {span}_{\mathinner {|{\psi }\rangle }}\left( \frac{1}{\sqrt{T}} \sum _{t=1}^{T} \mathinner {|{\psi ^{(t)}}\rangle }\mathinner {|{t}\rangle }\right) \end{aligned}$$
(11)

where \(\mathinner {|{\psi ^{(t)}}\rangle }\) denotes the state of the system at time step t if the input state was \(\mathinner {|{\psi }\rangle }\).

A Local Encoding. In order to prove that \(H_{\mathrm {univ}}(x)\) can simulate all quantum Hamiltonians, we need to demonstrate that there exists a local encoding \({\mathcal {E}}(M)\) such that the conditions of Definition 2.3 are satisfied. To this end, let

$$\begin{aligned} \mathinner {|{\Phi _\mathrm {idling}(\psi )}\rangle } {:}{=}\frac{1}{\sqrt{L'}} \sum _{t=1}^{L'} \mathinner {|{\psi ^{(t)}}\rangle }\mathinner {|{t}\rangle } \end{aligned}$$

where \(L' = T_1 + L\), and where \(T_1\) is the number of time steps in the \(M_1\) computation. This is the history state up until the point at which \(M_\mathrm {PE}\) begins the phase estimation (i.e., the point at which the ‘idling to enhance coherence’ ends). Throughout this part of the computation the spins which encode the information about the input state remain in their initial state, so we can write:

$$\begin{aligned} \mathinner {|{\Phi _\mathrm {idling}(\psi )}\rangle } = \mathinner {|{\psi }\rangle } \otimes \frac{1}{\sqrt{L'}} \sum _{t=1}^{L'} \mathinner {|{t}\rangle } \end{aligned}$$

The remainder of the history state is

$$\begin{aligned} \mathinner {|{\Phi _\mathrm {comp}(\psi )}\rangle } {:}{=}\frac{1}{\sqrt{T - L'}} \sum _{t=L'+1}^{T} \mathinner {|{\psi ^{(t)}}\rangle }\mathinner {|{t}\rangle }, \end{aligned}$$

such that the total history state is

$$\begin{aligned} \mathinner {|{\Phi (\psi )}\rangle } = \sqrt{\frac{L'}{T}} \mathinner {|{\Phi _\mathrm {idling}(\psi )}\rangle } + \sqrt{\frac{T - L'}{T}} \mathinner {|{\Phi _\mathrm {comp}(\psi )}\rangle }. \end{aligned}$$

We now define the encoding \({\mathcal {E}}(M) = V M V^\dagger \) via the isometry

$$\begin{aligned} V = \sum _i \mathinner {|{\Phi _\mathrm {idling}(i)}\rangle }\mathinner {\langle {i}|}. \end{aligned}$$
(12)

where \(\mathinner {|{i}\rangle }\) are the computational basis states (any complete basis will suffice). \({\mathcal {E}}\) is a local encoding, which can be verified by a direct calculation:

$$\begin{aligned} \begin{aligned} {\mathcal {E}}(A_j \otimes \mathbb {1})&= \sum _{ik}\mathinner {|{\Phi _\mathrm {idling}(i)}\rangle } \mathinner {\langle {i}|}(A_j \otimes \mathbb {1})\mathinner {|{k}\rangle }\mathinner {\langle {\Phi _\mathrm {idling}(k)}|} \\&= \sum _{ik} \mathinner {|{i}\rangle }\mathinner {\langle {i}|}(A_j \otimes \mathbb {1}) \mathinner {|{k}\rangle }\mathinner {\langle {k}|} \otimes \frac{1}{L'} \sum _{t,t'=1}^{L'} \mathinner {|{t}\rangle }\mathinner {\langle {t'}|} \\&= (A_j \otimes \mathbb {1}) \sum _{i} \mathinner {|{i}\rangle }\mathinner {\langle {i}|} \otimes \frac{1}{L'} \sum _{t,t'=1}^{L'} \mathinner {|{t}\rangle }\mathinner {\langle {t'}|} \\&=\left( A^\mathrm {phys}_j \otimes \mathbb {1}\right) \sum _i\mathinner {|{\Phi _\mathrm {idling}(i)}\rangle }\mathinner {\langle {\Phi _\mathrm {idling}(i)}|} \\&= \left( A^\mathrm {phys}_j \otimes \mathbb {1}\right) {\mathcal {E}}(\mathbb {1}), \end{aligned} \end{aligned}$$
(13)

where \(A^\mathrm {phys}_j\) is the operator A acting on the Hilbert space corresponding to the \(j^{\hbox {th}}\) qudit.

We now consider the encoding \({\mathcal {E}}'(M) = V'MV^{\prime \dagger }\), defined via

$$\begin{aligned} V' = \sum _i \mathinner {|{\Phi (i)}\rangle }\mathinner {\langle {i}|}. \end{aligned}$$
(14)

We have that

$$\begin{aligned} \begin{aligned} \Vert V' - V\Vert ^2&= \left\| \sum _i \left( \mathinner {|{\Phi (i)}\rangle }\mathinner {\langle {i}|} - \mathinner {|{\Phi _\mathrm {idling}(i)}\rangle }\mathinner {\langle {i}|}\right) \right\| ^2\\&= \left\| \sum _i \left( \sqrt{\frac{T -L'}{T}}\mathinner {|{\Phi _\mathrm {comp}(i)}\rangle }\mathinner {\langle {i}|} + \left( \sqrt{\frac{L'}{T}}-1\right) \mathinner {|{\Phi _\mathrm {idling}(i)}\rangle }\mathinner {\langle {i}|} \right) \right\| ^2 \\&\le 2\left( 1-\sqrt{\frac{L'}{T}}\right) \le 2\frac{T-L'}{T}=2\frac{T_\mathrm {PE}}{T}. \end{aligned} \end{aligned}$$
(15)
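The chain of inequalities in Eq. (15) can be checked on a toy model: a d-dimensional system idles for \(L'\) steps and then evolves under random unitaries for the remaining \(T-L'\) steps. The dimensions and step counts below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
d, T, Lp = 2, 32, 24          # local dimension, total time, L' = T_1 + L

def haar_unitary(n):
    z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    q, r = np.linalg.qr(z)
    return q * (np.diagonal(r) / np.abs(np.diagonal(r)))

steps = [haar_unitary(d) for _ in range(T - Lp)]  # the "computation"

V = np.zeros((d * T, d), dtype=complex)    # columns |Phi_idling(i)>
Vp = np.zeros((d * T, d), dtype=complex)   # columns |Phi(i)>
for i in range(d):
    state = np.eye(d, dtype=complex)[:, i]
    for t in range(T):
        if t >= Lp:
            state = steps[t - Lp] @ state  # computation starts after idling
        Vp[t * d:(t + 1) * d, i] = state / np.sqrt(T)
        if t < Lp:
            V[t * d:(t + 1) * d, i] = np.eye(d)[:, i] / np.sqrt(Lp)

diff_sq = np.linalg.norm(Vp - V, 2) ** 2
```

In this toy model the first inequality is in fact tight: \(\Vert V'-V\Vert ^2 = 2(1-\sqrt{L'/T})\), since the idling and computation parts of the history state have disjoint time support.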

By Lemma 4.1, \(S_{{\mathcal {E}}'}\) is the ground space of \(H_{\mathrm {SF}}\).

Splitting the Ground Space Degeneracy of \(H_{\mathrm {SF}}\). What is left to show is that there exist one-body projectors \(\Pi _{\alpha }\) and \(\Pi _{\beta }\) which add just the right amount of energy to states in the kernel of \(H_{\mathrm {SF}}\) to reproduce the target Hamiltonian’s spectrum. We choose the one-body terms in \(H_{\mathrm {univ}}\) to be projectors onto local subspaces which contain the two flag states output by the \(M_\mathrm {PE}\) computation, \(\mathinner {|{\Omega _a}\rangle }\) and \(\mathinner {|{\Omega _b}\rangle }\).

We have shown that if the input state is \(\mathinner {|{u}\rangle }\), an eigenstate of U with eigenphase \(\phi _u = a_u\sqrt{2} - b_u\), then the history state will contain \(a_u\) terms with one spin in the state \(\mathinner {|{\Omega _a}\rangle }\) and \(b_u\) terms with one spin in the state \(\mathinner {|{\Omega _b}\rangle }\) (each term in the history state has amplitude \(1/\sqrt{T}\)). If the input is a general state \(\mathinner {|{\psi }\rangle }= \sum _u m_u \mathinner {|{u}\rangle }\), then for each u the history state will contain \(a_u\) terms with one spin in the state \(\mathinner {|{\Omega _a}\rangle }\) and \(b_u\) terms with one spin in the state \(\mathinner {|{\Omega _b}\rangle }\), where now each of these terms has amplitude \(m_u / \sqrt{T}\).

Let \(\Pi {:}{=}\sum _i \mathinner {|{\Phi (i)}\rangle }\mathinner {\langle {\Phi (i)}|}\) for some complete basis \(\mathinner {|{i}\rangle }\), and define \(H_1{:}{=}T(\sqrt{2} \Pi _a - \Pi _b)\), where T is the total time of the computation. It then follows that the energy of \(\mathinner {|{\Phi (u)}\rangle }\) with respect to the operator \(\Pi H_1 \Pi \) is given by \(\phi _u + {{\,\mathrm{O}\,}}(\epsilon )\).
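The prefactor T in \(H_1\) is exactly what converts flag counts into the eigenphase: each flagged history-state term has amplitude \(1/\sqrt{T}\), so \(\Pi _a\) has expectation \(a_u/T\) on the history state. A quick numerical sanity check (the flag counts below are arbitrary):

```python
import math

T = 1000          # total number of time steps in the history state
a_u, b_u = 7, 4   # flag counts written out by the decrementing counters

# Pi_a (Pi_b) picks out the a_u (b_u) history-state terms containing the
# flag |Omega_a> (|Omega_b>), each of squared amplitude 1/T.
exp_pi_a = a_u * (1 / math.sqrt(T))**2
exp_pi_b = b_u * (1 / math.sqrt(T))**2
energy = T * (math.sqrt(2) * exp_pi_a - exp_pi_b)
```

The computed energy is \(\sqrt{2}\,a_u - b_u = \phi _u\), independent of T.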

Finally, we need the following technical lemma from [27].

Lemma 4.4

(First-order simulation [27]). Let \(H_0\) and \(H_1\) be Hamiltonians acting on the same space and \(\Pi \) be the projector onto the ground space of \(H_0\). Suppose that \(H_0\) has eigenvalue 0 on \(\Pi \) and the next smallest eigenvalue is at least 1. Let V be an isometry such that \(VV^{\dagger }=\Pi \) and

$$\begin{aligned} \Vert V H_{\mathrm {target}}V^\dag - \Pi H_1 \Pi \Vert \le \epsilon /2. \end{aligned}$$
(16)

Let \(H_{{\text {sim}}} = \Delta H_0 + H_1\). Then there exists an isometry \({\tilde{V}}\) onto the space spanned by the eigenvectors of \(H_{{\text {sim}}}\) with eigenvalue less than \(\Delta /2\) such that

  1. \(\Vert V-{\tilde{V}}\Vert \le {{\,\mathrm{O}\,}}(\Vert H_1\Vert /\Delta )\),
  2. \(\Vert {\tilde{V}}H_{{\text {target}}} {\tilde{V}}^{\dagger } -H_{{\text {sim}},< \Delta /2} \Vert \le \epsilon /2 + {{\,\mathrm{O}\,}}(\Vert H_1\Vert ^2/\Delta )\).

We will apply Lemma 4.4 with \(H_0=2T^2H_{\mathrm {SF}}\) and \(H_1=T(\sqrt{2} \Pi _a - \Pi _b)\). We have \(\lambda _{\min }( H_{\mathrm {SF}}) = 0\), and the next smallest eigenvalue of \(H_{\mathrm {SF}}\) is \(1-\cos (\pi /2T)\ge 1/(2T^2)\) by Lemma 4.1, so \(H_0=2T^2H_{\mathrm {SF}}\) has next smallest eigenvalue at least 1. Moreover, \(\left\| H_1\right\| = \sqrt{2}T\). Note that \(V'\), as defined in Eq. (14), is an isometry which maps onto the ground space of \(H_0\). By construction, the spectrum of \(H_{\mathrm {target}}\) is approximated to within \(\epsilon \) by \(H_1\) restricted to the ground space of \(H_{\mathrm {SF}}\); thus, \(\Vert \Pi H_1 \Pi - \tilde{{\mathcal {E}}}(H_{\mathrm {target}})\Vert \le \epsilon \).

Lemma 4.4 therefore implies that there exists an isometry \({\tilde{V}}\) that maps exactly onto the low energy space of \(H_{\mathrm {univ}}\) such that \(\Vert {\tilde{V}}-V'\Vert \le {{\,\mathrm{O}\,}}(\sqrt{2}T/(\Delta /2T^2))={{\,\mathrm{O}\,}}(T^3/\Delta )\). By the triangle inequality and Eq. (15), we have:

$$\begin{aligned} \Vert V-{\tilde{V}}\Vert \le \Vert V-V'\Vert +\Vert V'-{\tilde{V}}\Vert \le O \left( \frac{T^3}{\Delta } + \frac{T_\mathrm {PE}}{T}\right) . \end{aligned}$$
(17)

The second part of the lemma implies that

$$\begin{aligned} \Vert {\tilde{V}} H_{\mathrm {target}}{\tilde{V}}^{\dagger } -H_{{\text {univ}},<\Delta '/2}\Vert \le \epsilon /2+ {{\,\mathrm{O}\,}}((\sqrt{2}T)^2/(\Delta /2T^2))=\epsilon /2 +{{\,\mathrm{O}\,}}(T^4/\Delta ). \end{aligned}$$
(18)

Therefore, the conditions of Definition 2.3 are satisfied for a \((\Delta ',\eta ,\epsilon ')\)-simulation of \(H_{\mathrm {target}}\), with \(\eta = O \left( T^3/\Delta + T_\mathrm {PE}/T\right) \), \(\epsilon ' = \epsilon +{{\,\mathrm{O}\,}}(T^4 / \Delta )\) and \(\Delta '= \Delta /2T^2\). To achieve the desired accuracy, we must increase L so that \(T \ge {{\,\mathrm{O}\,}}(T_\mathrm {PE}/\eta )={{\,\mathrm{poly}\,}}(n, d^k, \Vert H\Vert ,1/\epsilon ,1/\eta )\) by Eq. (10) (thereby determining x), and increase \(\Delta \) so that

$$\begin{aligned} \Delta \ge \Delta ' T^2 +\frac{T^3}{\eta }+\frac{T^4}{\epsilon } \end{aligned}$$
(19)

to obtain a \((\Delta ', \eta , \epsilon )\)-simulation of the target Hamiltonian. The claim follows.

\(\square \)

We can now prove our main theorem:

Theorem 4.5

There exists a two-body interaction \(h(\phi )\), depending on a single parameter \(\phi \), such that the family of translationally invariant Hamiltonians on a chain of length N,

$$\begin{aligned} H_{\mathrm {univ}}(\phi , \Delta , T) = \Delta \sum _{\langle i,j \rangle } h(\phi )_{i,j} + T \sum _{i=0}^{N-1} \left( \sqrt{2} \Pi _{\alpha } - \Pi _{\beta } \right) _i, \end{aligned}$$
(20)

is a universal model, where \(\Delta \), T and \(\phi \) are parameters of the Hamiltonian, and the first sum is over adjacent sites along the chain. Furthermore, the universal model is efficient in terms of the number of spins in the simulator system.

Proof

The two-body interaction \(h(\phi )\) makes up a standard form Hamiltonian which encodes a QTM, \(M_1\) dovetailed with the phase-estimation computation from Lemma 4.3. The QTM \(M_1\) carries out phase estimation on the parameter \(\phi \) in the Hamiltonian, and writes out the binary expansion of \(\phi \) (which contains a description of the Hamiltonian to be simulated) on its work tape. There is a standard form Hamiltonian in [14] which encodes this QTM, so by Lemma 4.3 we can construct a standard form Hamiltonian which simulates all quantum Hamiltonians by dovetailing \(M_1\) with \(M_\mathrm {PE}\).

The space requirement for the computation is \({{\,\mathrm{O}\,}}(|\phi |)\), where \(|\phi |\) denotes the length of the binary expansion of \(\phi \), and the computation requires time \(T_1 = {{\,\mathrm{O}\,}}(|\phi |2^{|\phi |})\) [21, Theorem 10]. As we commented in Sect. 3.3.1, the standard form clock construction set out in [21, Sect. 4.5] allows for computation time of \( {{\,\mathrm{O}\,}}(|\phi |2^{|\phi |})\) using a Hamiltonian on \(|\phi |\) spins. We therefore find that for a k-local target Hamiltonian \(H_{\mathrm {target}}\) acting on n spins of local dimension d, the number of spins required in the simulator system for a simulation that is \(\epsilon \)-close to \(H_{\mathrm {target}}\) is given by \(N = {{\,\mathrm{O}\,}}(|\phi |) ={{\,\mathrm{poly}\,}}\left( n,d^k,\Vert H\Vert ,1/\eta ,1/\epsilon \right) \).

Therefore, the universal model is efficient in terms of the number of spins in the simulator system as defined in Definition 2.4. \(\square \)

Note that this universal model is not efficient in terms of the norm \(\Vert H_{\mathrm {univ}}\Vert \). This is immediately obvious, since \(\Vert H_{\mathrm {univ}}\Vert = \Omega (\Delta )\), and using the relations between \(\Delta '\), \(\eta \), \(\epsilon \), T, and \(\Delta \) from Lemma 4.3 and Eq. (19),

$$\begin{aligned} T= & {} T_1+L+T_\mathrm {PE}\\= & {} O\left( 2^x+{{\,\mathrm{poly}\,}}\left( n,d^k,\Vert H_{\mathrm {target}}\Vert , \frac{1}{\epsilon }, \frac{1}{\eta }\right) \right) \quad \text { and } \quad \Delta \ge \Delta ' T^2 +\frac{T^3}{\eta }+\frac{T^4}{\epsilon } \end{aligned}$$

by Eq. (10), so \(T,\Delta \) are both \({{\,\mathrm{poly}\,}}\left( 2^x,\Vert H_{\mathrm {target}}\Vert ,\Delta ',1/\epsilon , 1/\eta \right) \). For a k-local Hamiltonian \(H_{\mathrm {target}}\) with description x as presented in Sect. 3.2, \(|x|=\Omega \left( md^{2k} \log (\Vert H_{\mathrm {target}}\Vert md^{2k}/\delta )\right) \).

However, if we only wish to simulate a translationally invariant k-local Hamiltonian \(H_{\mathrm {target}}\), this can be specified to accuracy \(\delta \) with just \(\log (\Vert H_{\mathrm {target}}\Vert m d^{2k} /\delta )\) bits of information. In this case (for \(d,k={{\,\mathrm{O}\,}}(1)\) and taking \(\delta =\epsilon \)), the interaction strengths are then \({{\,\mathrm{poly}\,}}(n,\Vert H_{\mathrm {target}}\Vert ,\Delta ', \frac{1}{\eta },\frac{1}{\epsilon })\), and the whole simulation is efficient.

Lemma 4.3 also allows the construction of a universal quantum simulator with two free parameters.

Theorem 4.6

There exists a fixed two-body interaction h such that the family of translationally invariant Hamiltonians on a chain of length N,

$$\begin{aligned} H_{\mathrm {univ}}(\Delta , T) = \Delta \sum _{\langle i,j \rangle } h_{i,j} + T \sum _{i=0}^{N-1} \left( \sqrt{2} \Pi _{\alpha } - \Pi _{\beta } \right) _i, \end{aligned}$$
(21)

is a universal model, where \(\Delta \) and T are parameters of the Hamiltonian, and the first sum is over adjacent sites along the chain.

Proof

As in Theorem 4.5, the two-body interaction h makes up a standard form Hamiltonian which encodes a QTM \(M_1\) dovetailed with the phase-estimation computation from Lemma 4.3. It is based on the construction from [15].

Take \(M_1\) to be a binary counter Turing machine which writes out N—the length of the qudit chain—on its work tape. We will choose N to contain a description of the Hamiltonian to be simulated, as per Sect. 3.2. There is a standard form Hamiltonian in [15] which encodes this QTM, so by Lemma 4.3 we can construct a standard form Hamiltonian which simulates all quantum Hamiltonians by dovetailing \(M_1\) with \(M_\mathrm {PE}\).

Since B(N), as defined in Eq. (7), contains a description of the Hamiltonian to be simulated, we have that

$$\begin{aligned} N = {{\,\mathrm{poly}\,}}\left( 2^{{{\,\mathrm{poly}\,}}(n,\Vert H_{\mathrm {target}}\Vert ,1/\eta ,1/\epsilon )} \right) . \end{aligned}$$

The standard form clock used in the construction allows for computation time polynomial in the length of the chain, i.e., exponential in the size of the target system. As before, by Eq. (10), we require

$$\begin{aligned} T= & {} T_1+L+T_\mathrm {PE}\\= & {} O\left( N+{{\,\mathrm{poly}\,}}\left( n,d^k,\Vert H_{\mathrm {target}}\Vert , \frac{1}{\epsilon }, \frac{1}{\eta }\right) \right) \quad \text { and } \quad \Delta \ge \Delta ' T^2 +\frac{T^3}{\eta }+\frac{T^4}{\epsilon }. \end{aligned}$$

\(\square \)

According to the requirements of Definition 2.3, the universal simulator of the second theorem is efficient in neither the number of spins nor the norm. However—as was noted in [2]—this is unavoidable if there is no free parameter in the universal Hamiltonian which encodes the description of the target Hamiltonian: a translationally invariant Hamiltonian on N spins can be described using only \({{\,\mathrm{O}\,}}({{\,\mathrm{poly}\,}}\log (N))\) bits of information, whereas a k-local Hamiltonian which breaks translational invariance in general requires \({{\,\mathrm{poly}\,}}(N)\) bits of information. So, by a simple counting argument, it is not possible to encode all the information about a k-local Hamiltonian on n spins in a fixed translationally invariant Hamiltonian acting on \({{\,\mathrm{poly}\,}}(n)\) spins.

We observe that the parameters \(\Delta \) and T are qualitatively different from \(\phi \), in that they do not depend on the Hamiltonian to be simulated, but only on the parameters \((\Delta ',\epsilon ,\eta )\) determining the precision of the simulation.

4.2 No-Go for Parameterless Universality

Is an explicit \(\Delta \)-dependence of a simulator Hamiltonian \(H_{\mathrm {univ}}\) necessary to construct a universal model? Note that an implicit dependence of \(H_{\mathrm {univ}}\) on \(\Delta \) is possible via the chain length \(N=N(\Delta )\) in Theorem 4.5. In the following, we prove that such an implicit dependence is insufficient, by giving a concrete counterexample for which an explicit \(\Delta \)-dependence is necessary.

To this end, we note that it has previously been shown [28] that a degree-reducing Hamiltonian simulation (in a weaker sense of simulation, namely gap-simulation where only the ground state(s) and spectral gap are to be maintained) is only possible if the norm of the local terms is allowed to grow. In order to construct a concrete example in which an explicit \(\Delta \)-dependence is necessary, we first quote Aharonov and Zhou’s result and then translate the terminology to our setting.

Theorem 4.7

(Aharonov and Zhou ( [28, Thm. 1])). For sufficiently small constants \(\epsilon \ge 0\) and \({\tilde{\omega }}\ge 0\), there exists a minimum system size \(N_0\) such that for all \(N\ge N_0\) there exists no constant-local \([r,M,J]=[{{\,\mathrm{O}\,}}(1),M,{{\,\mathrm{O}\,}}(1)]\) gap simulation (where r is the interaction degree, M the number of local terms, and J the local interaction strength of the simulator) of the Hamiltonian

with a localized encoding, \(\epsilon \)-incoherence, and energy spread \({\tilde{\omega }}\), for any number of Hamiltonian terms M.

Corollary 4.8

Consider a universal family of Hamiltonians with local interactions and bounded-degree interaction graph. Hamiltonians in this family must have an explicit dependence on the energy cut-off (\(\Delta \)) below which they are valid simulations of particular target Hamiltonians.

Proof

We first explain the notation used in Theorem 4.7. As mentioned, the notion of gap simulation is weaker than Definition 2.3. Only the (quasi-) ground space \({\mathcal {L}}\) of \(H_A\), rather than the full Hilbert space, needs to be represented \(\epsilon \)-coherently: \(\Vert H_A|_{{\mathcal {L}}} - {{\tilde{H}}}_A|_{{\mathcal {L}}}\Vert < \epsilon \), where \(\cdot |_{{\mathcal {L}}}\) denotes the restriction to \({\mathcal {L}}\). And only the spectral gap above the ground space, rather than the full spectrum, must be maintained: \({\tilde{\gamma }}=\Delta ({{\tilde{H}}}_A) \ge \gamma = \Delta (H_A)\). The rest of the spectrum in the simulation can be arbitrary. Energy spread in this context simply means that the range of eigenvalues within \({\mathcal {L}}\) spreads out at most such that \(|\lambda _0 - {\tilde{\lambda }}_0|\le {{\tilde{\omega }}}\gamma \).

A \([{{\,\mathrm{O}\,}}(1),M,{{\,\mathrm{O}\,}}(1)]\) simulation with the above parameters then simply means an \(\epsilon \)-coherent gap simulation, constant degree and local interaction strength, where M—the number of local terms in the simulator—is left unconstrained, and the eigenvalues vary by at most \({\tilde{\omega }}\gamma \).

It is clear that this notion of simulation falls within our more generic framework of simulation (cf. [28, Sec. 1.1]): a simulation of \(H_A\) also defines a valid gap simulation of \(H_A\). Since by Definition 2.4 this simulation can be made arbitrarily precise, with parameters \(\epsilon ,{{\tilde{\omega }}}\) arbitrarily small, and has constant interaction degree by assumption, this contradicts Theorem 4.7. \(\square \)

5 Applications to Hamiltonian Complexity

As already informally stated, the Local Hamiltonian problem is the question of approximating the ground state energy of a local Hamiltonian to a certain precision. Based on a history state embedding of a QMA verifier circuit and on Feynman’s circuit-to-Hamiltonian construction [10], Kitaev proved in 2002 that Local Hamiltonian with a promise gap that closes inverse-polynomially in the system size is QMA-complete [3].

To be precise, let us start by defining the Local Hamiltonian problem. We note that variants of this definition can be found throughout the literature, which commonly omit one or more of the constraints presented herein, in particular with regard to the bit precision of the input matrices. For definiteness, we explicitly list the matrix entries’ bit precision as an extra parameter \(\Sigma \) in the following definition.

[Definition of the Local Hamiltonian problem, displayed as a boxed figure in the original.]

Kitaev’s QMA-completeness result was shown for a promise gap \(f(N) = {{\,\mathrm{poly}\,}}N\) [3, Th. 14.1]. Following the proof construction therein reveals that this was done for a bit complexity of the matrix entries \(\Sigma (N)={{\,\mathrm{O}\,}}(1)\) (assuming a discrete fixed gateset for the encoded QMA verifier). Since this seminal result, the statement has been extended and generalized to ever-simpler many-body systems [29,30,31]. Some of these results allow a coupling constant to scale with the system size, e.g., as \({{\,\mathrm{poly}\,}}N\)—i.e., the matrix entries now feature a bit precision of \(\Sigma (N)={{\,\mathrm{poly}\,}}\log N\).

We remark that despite the apparent relaxation in the bit precision, these results are not weaker than Kitaev, Shen, and Vyalyi’s. Since the number of local terms \(m={{\,\mathrm{poly}\,}}N\), a polynomial number of local terms of \({{\,\mathrm{O}\,}}(1)\) bit complexity acting on the same sites can already be combined to create k-local interactions with polynomial precision (logarithmic bit-precision, \(\Omega (1/{{\,\mathrm{poly}\,}})\cap {{\,\mathrm{O}\,}}({{\,\mathrm{poly}\,}})\)). (Similar to how the encoding in Section 3.2 and Remark 3.1 works by adding up integers to approximate a number in the interval [0, 1].) We also emphasize that the overall bit complexity of the input is already \({{\,\mathrm{poly}\,}}N\), as there are that many local terms to specify in the first place. Indeed, many times in the literature, the matrix entries of the Local Hamiltonian problem are simply restricted to bit precision \(\Sigma ={{\,\mathrm{poly}\,}}N\) (e.g., [32]).
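The combination trick can be made concrete: stacking k identical unit-strength terms on the same sites realizes an effective coupling of strength k, so after rescaling, any coupling \(c \in [0,1]\) is reproduced to precision \(1/(2m)\) using \(m={{\,\mathrm{poly}\,}}N\) terms of constant bit complexity. A minimal sketch (the helper name is illustrative, not a construction from the cited works):

```python
def realize_coupling(c, m):
    """Approximate a coupling c in [0, 1] by stacking k identical
    unit-strength local terms (each of O(1) bit complexity): the sum
    k * h realizes c up to error |c - k/m| <= 1/(2m) after rescaling
    by m, i.e. inverse-polynomial precision for m = poly(N)."""
    k = round(c * m)
    return k, abs(c - k / m)

k, err = realize_coupling(0.31830988, m=10**4)
```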

However, translationally invariant spin systems are common in condensed matter models of real-world materials, whereas models with precisely tuned interactions that differ from site to site are less realistic. It is known that QMA-hardness of approximating the ground state energy to \(1/{{\,\mathrm{poly}\,}}\) precision in the system size is a property of non-translationally invariant couplings, one that persists even when those couplings are arbitrarily close to identical [33, Cor. 21]. The intuition behind this result is that even small amounts of disorder can radically change the properties of quantum many-body systems compared to strict translational invariance. A variant of Local Hamiltonian for the strictly translationally invariant case can be formulated as follows:

[Figure b: problem definition of TI-Local Hamiltonian \((f(N),\Sigma (N))\)]

Gottesman and Irani proved in 2009 that TI-Local Hamiltonian \(({{\,\mathrm{poly}\,}},1)\) is QMAEXP-complete [15]; this has since been generalized to systems with lower local dimension [12, 34], variants of which again introduce a polynomially scaling local coupling strength. We emphasize that while Gottesman and Irani’s definition restricts the bit precision \(\Sigma \) to be constant, the input to the problem—namely the chain length N—is already of size \(\log N\). A poly-time reduction thus does not change the complexity class, and allowing matrix entries of size \({{\,\mathrm{poly}\,}}\log N\) is arguably natural. As noted in [12, Sec. 3.3], an equivalent definition of TI-Local Hamiltonian can thus be obtained by relaxing the norm of the local terms to \(\Vert h_i \Vert \le {{\,\mathrm{poly}\,}}N\), given the promise gap \(f(N) = \Omega ({{\,\mathrm{poly}\,}}N)\).

Care has to be taken in defining QMAEXP for the right input scaling. For TI-Local Hamiltonian \(({{\,\mathrm{poly}\,}},1)\), the input size is given by the system size only, as all the local terms are specified by a constant number of bits. This means that TI-Local Hamiltonian \(({{\,\mathrm{poly}\,}},1)\) is indeed QMAEXP-hard, but for an input of size \(\lceil \log N\rceil \), where N is the size of the system. As Karp reductions are allowed for QMAEXP, this does not change if we allow the local terms to scale polynomially in the system size; the problem input is still of size at most \({{\,\mathrm{poly}\,}}\log N\), and thus constitutes a well-defined input for QMAEXP with respect to this input size. Informally, QMAEXP (‘\({{\,\mathrm{poly}\,}}\log N\)-sized input’) < QMA (‘\({{\,\mathrm{poly}\,}}N\)-sized input’), as only that scaling allows one to both saturate and maintain the \(1/{{\,\mathrm{poly}\,}}\) promise gap. In short, the problem is easier for translationally invariant systems, as expected. (We refer the reader to the extended discussion in [12, Sec. 3.4].)
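The input-size comparison above can be made explicit with a toy calculation (the function names and the specific bit counts are our own illustration): for Local Hamiltonian all \(m = {{\,\mathrm{poly}\,}}N\) local terms must be written down, whereas for TI-Local Hamiltonian \(({{\,\mathrm{poly}\,}},1)\) the single \({{\,\mathrm{O}\,}}(1)\)-bit local term is fixed and the only scaling input is the chain length N, written in binary.

```python
import math

def input_size_local(N, bits_per_entry=8, locality=2, local_dim=2):
    m = N                                   # O(N) local terms on a chain
    entries = (local_dim ** locality) ** 2  # matrix entries per local term
    return m * entries * bits_per_entry     # poly(N) bits in total

def input_size_ti_local(N):
    return math.ceil(math.log2(N))          # just the chain length, in binary

N = 2 ** 20
assert input_size_ti_local(N) == 20         # poly log N
assert input_size_local(N) == N * 16 * 8    # poly N
```

The exponential separation between these two input sizes is what makes a \(1/{{\,\mathrm{poly}\,}}\)-gap problem QMAEXP-complete in the translationally invariant case but only QMA-complete in the general case.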

How does the situation change if we allow a promise gap that scales differently? In particular, how hard is Local Hamiltonian \((\exp {{\,\mathrm{poly}\,}})\)? In [35] the authors characterize this setup, which they use for a reduction from PreciseQMA. The PreciseQMA verifier has a \(1/\exp {{\,\mathrm{poly}\,}}\) promise gap, instead of QMA’s usual \(1/{{\,\mathrm{poly}\,}}\) promise gap. (Note that it is this very promise gap which naturally maps to the Local Hamiltonian problem’s promise gap on the ground state energy.) They show that Local Hamiltonian \((\exp {{\,\mathrm{poly}\,}})\) is complete for PreciseQMA, which they further show equals PSPACE. We emphasize that the authors did not explicitly restrict the bit precision. Yet a natural restriction in this context is again \(\Sigma (N) = {{\,\mathrm{poly}\,}}N\), as there are \(m={{\,\mathrm{poly}\,}}N\) local terms to specify; a larger bit precision would make the input size too large for containment in PreciseQMA.

A natural question to ask is thus: how hard is TI-Local Hamiltonian \((\exp {{\,\mathrm{poly}\,}},\Sigma (N))\) for either \(\Sigma (N)={{\,\mathrm{poly}\,}}N\) or \({{\,\mathrm{poly}\,}}\log N\)? Furthermore, is it easier because of the translational invariance, as it was for the \({{\,\mathrm{poly}\,}}\)-promise-gap case? We show that this is not the case, and prove the following result.

Theorem 5.1

TI-Local Hamiltonian \((\exp {{\,\mathrm{poly}\,}},{{\,\mathrm{poly}\,}})\) is PSPACE-complete.

Proof

The result follows by Theorem 4.5. Specifying all the local terms in H requires an exponentially long QPE computation to extract \({{\,\mathrm{poly}\,}}(N)\) many bits from a phase. Because a PreciseQMA-complete local Hamiltonian H already has a promise gap closing as \(1/\exp {{\,\mathrm{poly}\,}}(N)\), this does not attenuate the resulting promise gap by more than another exponential factor. Containment in PSPACE follows by [35]. \(\square \)

Theorem 5.1 illustrates a curious mismatch: irrespective of the promise gap scaling or matrix bit precision, TI-Local Hamiltonian features the system size N as part of its input. A \(1/{{\,\mathrm{poly}\,}}N\) promise gap and \({{\,\mathrm{poly}\,}}\log N\) bit precision saturate this input size, and yield a QMAEXP-complete construction, as discussed above. Yet when we need to specify a \(1/\exp N\) promise gap, the gap’s bit precision becomes the dominant part of the input. We might then as well specify the local terms to the same \({{\,\mathrm{poly}\,}}N\) bit precision, which in turn allows the translationally invariant system to simulate a non-translationally invariant one.

6 Applications to Holography

We can use the universal Hamiltonian constructions in this paper to construct a 2D-to-1D holographic quantum error correcting code (HQECC) with a local boundary Hamiltonian. HQECCs are toy models of the AdS/CFT correspondence which capture many of the qualitative features of the duality [8, 36, 37]. Recently, a HQECC was constructed from a 3D bulk to a 2D boundary which mapped local Hamiltonians in the bulk to local Hamiltonians in the boundary [9]. The techniques in [9] require at least a 2D boundary, and it was an open question whether a similar result could be obtained in lower dimensions.

Here, we construct a HQECC from a 2D bulk to a 1D boundary which maps any (quasi-)local Hamiltonian in the bulk to a local Hamiltonian in the boundary. A quasi-k-local Hamiltonian is a generalization of a k-local Hamiltonian where, instead of requiring that each term in the Hamiltonian acts on only k spins, we require that each term in the Hamiltonian has Pauli rank at most k, along with some geometric restrictions on the interaction graph. More precisely:

Definition 6.1

(Quasi-local hyperbolic Hamiltonians). Let \(\mathbb {H}^2\) denote 2-dimensional hyperbolic space, and let \(B_r(x)\subset \mathbb {H}^2\) denote a ball of radius r centered at x. Consider an arrangement of n qudits in \(\mathbb {H}^2\) such that, for some fixed r, at most k qudits and at least one qudit are contained within any \(B_r(x)\). Let Q denote the radius of the minimum-radius ball \(B_Q(0)\) containing all the qudits (which without loss of generality we can take to be centered at the origin). A quasi-k-local Hamiltonian acting on these qudits can be written as:

$$\begin{aligned} H_{\mathrm {bulk}}= \sum _Z h^{(Z)} \end{aligned}$$
(22)

where the sum is over the n qudits and each term can be written as:

$$\begin{aligned} h^{(Z)} = h_{\mathrm {local}}^{(Z)} h_{\mathrm {Wilson}}^{(Z)} \end{aligned}$$
(23)

where:

  • \(h_{\mathrm {local}}^{(Z)}\) is a term acting non-trivially on at most k qudits which are contained within some \(B_r(x)\)

  • \(h_{\mathrm {Wilson}}^{(Z)}\) is a Pauli operator acting non-trivially on at most \({{\,\mathrm{O}\,}}(Q-x)\) qudits which form a line between x and the boundary of \(B_Q(0)\)
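For concreteness, the following sketch (our own, not part of the formal definition) computes the Pauli rank of a qubit operator, reading “Pauli rank at most k” as “at most k nonzero coefficients in the Pauli-basis expansion”—an assumption we state explicitly, since the formal definition is given in the footnote of the original.

```python
import itertools
import numpy as np

# Single-qubit Pauli matrices used as the expansion basis.
PAULIS = {
    'I': np.eye(2),
    'X': np.array([[0, 1], [1, 0]]),
    'Y': np.array([[0, -1j], [1j, 0]]),
    'Z': np.diag([1.0, -1.0]),
}

def pauli_rank(M, n, tol=1e-10):
    """Number of n-qubit Pauli strings with nonzero coefficient in M."""
    rank = 0
    for labels in itertools.product('IXYZ', repeat=n):
        P = np.array([[1.0]])
        for l in labels:
            P = np.kron(P, PAULIS[l])
        coeff = np.trace(P.conj().T @ M) / 2 ** n   # Hilbert-Schmidt coefficient
        if abs(coeff) > tol:
            rank += 1
    return rank

X, Z = PAULIS['X'], PAULIS['Z']
# A single Pauli string has Pauli rank 1 regardless of its weight:
assert pauli_rank(np.kron(X, np.kron(Z, Z)), 3) == 1
assert pauli_rank(np.kron(X, X) + np.kron(Z, Z), 2) == 2
```

This illustrates why Pauli rank is the natural locality measure here: a Wilson-line term can act on many qudits yet still have Pauli rank 1.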

The extension to quasi-local bulk Hamiltonians allows us to consider using the HQECC to construct toy models of AdS/CFT with gravitational Wilson lines in the bulk theory.

With this definition, we obtain the following result.

Theorem 6.2

Consider any arrangement of n qudits in \(\mathbb {H}^2\), such that for some fixed r at most k qudits and at least one qudit are contained within any \(B_r(x)\). Let Q denote the radius of the minimum-radius ball \(B_Q(0)\) containing all the qudits. Let \(H_{\mathrm {bulk}}= \sum _Z h^{(Z)}\) be any (quasi-)k-local Hamiltonian on these qudits.

Then, we can construct a Hamiltonian \(H_{\mathrm {boundary}}\) on a 1D boundary manifold \({\mathcal {M}}\) with the following properties:

  1.

    \({\mathcal {M}}\) surrounds all the qudits and has diameter \({{\,\mathrm{O}\,}}\left( \max \left( 1,\log (k)/r\right) Q + \log \log n\right) \).

  2.

    The Hilbert space of the boundary consists of a chain of qudits of length \({{\,\mathrm{O}\,}}\left( n\log n \right) \).

  3.

    Any local observable/measurement M in the bulk has a set of corresponding observables/measurements \(\{M'\}\) on the boundary with the same outcome. A local bulk operator M can be reconstructed on a boundary region A if M acts within the greedy entanglement wedge of A, denoted \({\mathcal {E}}[A]\).

  4.

    \(H_{\mathrm {boundary}}\) consists of 2-local, nearest-neighbor interactions between the boundary qudits.

  5.

    \(H_{\mathrm {boundary}}\) is a \((\Delta _L,\epsilon ,\eta )\)-simulation of \(H_{\mathrm {bulk}}\) in the sense of Definition 2.3, with \(\epsilon ,\eta = 1/{{\,\mathrm{poly}\,}}(\Delta _L)\), \(\Delta _L = \Omega \left( \Vert H_{\mathrm {bulk}}\Vert \right) \), and where the interaction strengths in \(H_{\mathrm {boundary}}\) scale as \(\max _{ij}|\alpha _{ij}| = {{\,\mathrm{O}\,}}\left( (\Delta _L + 1/\eta + 1/\epsilon ) {{\,\mathrm{poly}\,}}(e^R 2^{e^R})\right) \).

Proof

There are three steps to this simulation. The first two steps follow exactly the same procedure as in [9].

Step 1. Simulate \(H_{\mathrm {bulk}}\) with a Hamiltonian which acts on the bulk indices of a HQECC in \(\mathbb {H}^2\) of radius \(R = {{\,\mathrm{O}\,}}\left( \max \left( 1,\log (k)/r\right) L\right) \).

In order to do this, we embed a tensor network composed of perfect tensors in a tessellation of \(\mathbb {H}^2\) by a Coxeter polygon with associated Coxeter system (W, S) and growth rate \(\tau \). Note that in a tessellation of \(\mathbb {H}^2\) by Coxeter polygons the number of polygonal cells in a ball of radius \(r'\) scales as \({{\,\mathrm{O}\,}}(\tau ^{r'})\), where we measure distances using the word metric, \(d(u,v) = l_S(u^{-1}v)\). (See [9] for a detailed discussion.)

If we want to embed a Hamiltonian \(H_{\mathrm {bulk}}\) in a tessellation, we will need to rescale distances between the qudits in \(H_{\mathrm {bulk}}\) so that there is at most one qudit per polygonal cell of the tessellation. If \(\tau ^{r'} = k\), then

$$\begin{aligned} \frac{r'}{r} = \frac{\log (k)}{\log (\tau ) r} = {{\,\mathrm{O}\,}}\left( \frac{\log (k)}{r}\right) . \end{aligned}$$

If \(\log (k)/r\ge 1\), then the qudits in \(H_{\mathrm {bulk}}\) are more tightly packed than the polygonal cells in the tessellation, and we need to rescale the distances between the qudits by a factor of \({{\,\mathrm{O}\,}}\left( \log (k)/r\right) \). If \(\log (k)/r < 1\), then the qudits in \(H_{\mathrm {bulk}}\) are less tightly packed than the cells of the tessellation, and there is no need for rescaling. The radius R of the tessellation needed to contain all the qudits in \(H_{\mathrm {bulk}}\) is then given by

$$\begin{aligned} R = {\left\{ \begin{array}{ll} {{\,\mathrm{O}\,}}\left( (\log (k)/r)\,L\right) ,&{} \text {if } \log (k)/r \ge 1, \\ {{\,\mathrm{O}\,}}(L), &{} \text {otherwise}. \end{array}\right. } \end{aligned}$$
(24)
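The case distinction in Eq. (24) can be sketched numerically as follows (function name and test values are ours): given the maximum occupancy k within a ball of radius r, the tessellation growth rate \(\tau \), and the bulk radius L, we solve \(\tau ^{r'} = k\) for the cell radius \(r'\) and rescale only when needed.

```python
import math

def tessellation_radius(k, r, tau, L):
    r_prime = math.log(k) / math.log(tau)   # solves tau**r' = k
    rescale = r_prime / r                   # = O(log(k) / r)
    if rescale >= 1:     # qudits packed more tightly than tessellation cells
        return rescale * L                  # R = O((log(k)/r) L)
    return L                                # no rescaling needed: R = O(L)

# k = 64 qudits per unit ball, tau = 2: rescale distances by log2(64)/1 = 6
assert abs(tessellation_radius(k=64, r=1.0, tau=2.0, L=10.0) - 60.0) < 1e-9
# sparsely packed qudits (log2(2)/5 < 1): no rescaling
assert tessellation_radius(k=2, r=5.0, tau=2.0, L=10.0) == 10.0
```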

After rescaling, there is at most one qudit per cell of the tessellation. Some cells of the tessellation will not contain any qudits; we can put ‘dummy’ qudits in those cells which do not participate in any interactions, so their inclusion is equivalent to tensoring the Hamiltonian with an identity operator. We can upper- and lower-bound the number of ‘real’ qudits in the tessellation. If no cells contain dummy qudits, then the number of real qudits in the tessellation is given by \(n_{\max } = N = {{\,\mathrm{O}\,}}(\tau ^R)\), where N is the number of cells in the tessellation. By assumption, there is at least one real qudit in every ball of radius \(r'\). Thus, the minimum number of real qudits in the tessellation scales as \(n_{\min } = \Omega (\tau ^R / \tau ^{r'} ) = \Omega (\tau ^R)\), and so \(n = \Theta (\tau ^R) = \Theta (N)\).

If the tessellation of \(\mathbb {H}^2\) by Coxeter polygons is going to form a HQECC, the Coxeter polygon must have at least 5 faces [9, Theorem 6.1]. From the HQECC constructed in [8], it is clear that this bound is achievable, so we will without loss of generality assume the tessellation we are using is by a Coxeter polygon with 5 faces. The perfect tensor used in the HQECC must therefore have 6 indices.

It is known that there exist perfect tensors with 6 indices for all local dimensions d [39]. We will restrict ourselves to stabilizer perfect tensors of local dimension p for some prime p. These can be constructed for \(p=2\) [8] and \(p \ge 7\) [40]. Qudits of general dimension d can be incorporated by embedding each qudit into a d-dimensional subspace of a p-dimensional qudit, where p is the smallest prime satisfying \(p \ge d\) and either \(p=2\) or \(p \ge 7\). We then add to the embedded bulk Hamiltonian one-body projectors onto the orthogonal complements of these subspaces, multiplied by some \(\Delta _S' \ge \Vert H_{\mathrm {bulk}}\Vert \). The Hamiltonian \(H_{\mathrm {bulk}}'\) on the n p-dimensional qudits is then a perfect simulation of \(H_{\mathrm {bulk}}\).
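The choice of embedding prime can be sketched directly (helper names are ours): find the smallest prime p with \(p \ge d\) such that a stabilizer perfect tensor of local dimension p is available, i.e., \(p = 2\) or \(p \ge 7\).

```python
def is_prime(p):
    return p > 1 and all(p % q for q in range(2, int(p ** 0.5) + 1))

def embedding_prime(d):
    """Smallest prime p with p >= d and (p == 2 or p >= 7), so that a
    d-dimensional qudit embeds into one leg of a stabilizer perfect tensor."""
    p = max(d, 2)
    while not (is_prime(p) and (p == 2 or p >= 7)):
        p += 1
    return p

assert embedding_prime(2) == 2
assert embedding_prime(3) == 7    # 3 and 5 are prime, but excluded by p >= 7
assert embedding_prime(5) == 7
assert embedding_prime(11) == 11
```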

We can therefore simulate any \(H_{\mathrm {bulk}}\) which meets the requirements stated in the theorem with a Hamiltonian which acts on the bulk indices of a HQECC in \(\mathbb {H}^2\).

Step 2. Simulate \(H_{\mathrm {bulk}}\) with a Hamiltonian \(H_B\) on the boundary surface of the HQECC.

We first set \(H_B {:}{=}H' + \Delta _SH_S\), where \(H'\) satisfies \(H'\Pi _{{\mathcal {C}}} = V(H_{\mathrm {bulk}}' \otimes \mathbb {1}_\mathrm {dummy})V^\dagger \). Here, V is the encoding isometry of the HQECC, \(\Pi _{{\mathcal {C}}}\) is the projector onto the code-subspace of the HQECC, \(\mathbb {1}_\mathrm {dummy}\) acts on the dummy qudits and \(H_S\) is given by

$$\begin{aligned} H_S {:}{=}\sum _{w \in W} \left( \mathbb {1}- \Pi _{{\mathcal {C}}^{(w)}}\right) . \end{aligned}$$
(25)

\(\Pi _{{\mathcal {C}}^{(w)}}\) is the projector onto the codespace of the quantum error correcting code defined by viewing the \(w^{th}\) tensor in the HQECC as an isometry from its input indices to its output indices (where input indices are the bulk logical index, plus legs connecting the tensor with those in previous layers of the tessellation).

Provided \(\Delta _S \ge \Vert H_{\mathrm {bulk}}'\Vert \), [9, Lemma 6.9] ensures that \(H_B\) meets the conditions in Definition 2.2 to be a perfect simulation of \(H_{\mathrm {bulk}}'\) below energy \(\Delta _S\), and hence—as simulations compose—a perfect simulation of \(H_{\mathrm {bulk}}\).

Naturally, there is freedom in this definition, as there are many \(H'\) which satisfy the stated condition. We will choose an \(H'\) where every bulk operator has been pushed out to the boundary, so that a 1-local bulk operator at radius x corresponds to a boundary operator of weight \({{\,\mathrm{O}\,}}(\tau ^{R-x})\). We will also require that the Pauli rank of every bulk operator is preserved (see [9, Theorem D.4] for a proof that we can choose \(H'\) satisfying this condition).

Step 3. Simulate \(H_B\) with a local, nearest neighbor Hamiltonian using the technique from Theorem 4.5.

In order to achieve the scaling quoted, we make use of the structure of \(H_B\) due to the HQECC. It can be shown [9] that \(H_B\) will contain \({{\,\mathrm{O}\,}}(\tau ^x)\) Pauli rank-1 operators of weight \(\tau ^{R-x}\) for \(0 \le x \le R\). A Pauli rank-1 operator of weight w can be specified using \({{\,\mathrm{O}\,}}(w)\) bits of information. So, if we encode \(H_B\) in the binary expansion of \(\phi \) as

$$\begin{aligned} B(\phi ) = \gamma '(R) \cdot \prod _{x=0}^{R} \left[ \gamma '(i)^{\cdot \tau ^{R-x}} \cdot \left( \gamma '(a_j) \cdot \gamma '(b_j) \cdot P_1 \cdot \ldots \cdot P_{\tau ^{R-x}} \right) \right] ^{\cdot \tau ^x} \cdot \gamma '(L), \end{aligned}$$

we have \(|\phi | = {{\,\mathrm{O}\,}}(R\tau ^R) = {{\,\mathrm{O}\,}}(n \log n)\). The number of boundary spins in the final Hamiltonian therefore scales as \({{\,\mathrm{O}\,}}(n \log n)\). The final boundary Hamiltonian is a \(\left( \Delta , \epsilon , \eta \right) \)-simulation of \(H_{\mathrm {bulk}}\).
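The \(|\phi | = {{\,\mathrm{O}\,}}(R\tau ^R)\) count can be checked with a short arithmetic sketch (ours): \(H_B\) contains \({{\,\mathrm{O}\,}}(\tau ^x)\) Pauli rank-1 operators of weight \(\tau ^{R-x}\) for each \(0 \le x \le R\), and a weight-w operator takes \({{\,\mathrm{O}\,}}(w)\) bits to specify.

```python
def encoding_length(R, tau, c=1):
    total = 0
    for x in range(R + 1):
        num_ops = tau ** x                  # operators at 'radius' x
        bits_per_op = c * tau ** (R - x)    # O(weight) bits per operator
        total += num_ops * bits_per_op      # each layer contributes c * tau**R
    return total                            # = c * (R + 1) * tau**R

R, tau = 10, 3
assert encoding_length(R, tau) == (R + 1) * tau ** R   # Theta(R * tau**R)
```

Since \(n = \Theta (\tau ^R)\), we have \(R = \Theta (\log n)\), so the total is \(\Theta (n \log n)\), matching the quoted boundary-spin count.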

In order to preserve entanglement wedge reconstruction [8], the location of the spins containing the input state on the Turing machine work tape has to match the location of the original boundary spins. So, instead of the input tape at the beginning of the \(M_\mathrm {PE}\) computation containing the input state, followed by a string of \(\mathinner {|{0}\rangle }\)s, the two are interspersed. Information about which points on the input tape contain the input state can be included in the description of the Hamiltonian to be simulated.

It is immediate from the definition of the greedy entanglement wedge [8, Definition 8] that bulk local operators in \({\mathcal {E}}[A]\) can be reconstructed on A. The boundary observables/measurements \(\{M'\}\) corresponding to bulk observables/measurements \(\{M\}\) have the same outcomes, because by definition simulations preserve the outcome of all measurements. The claim follows. \(\square \)

It should be noted that the boundary model of the resulting HQECC does not have full rotational invariance. In order to use the universal Hamiltonian construction, the spin chain must have a beginning and an end, and the point on the boundary chosen to ‘break’ the chain also breaks the rotational invariance. However, it is possible to construct a HQECC with full rotational symmetry by using a history state Hamiltonian construction with periodic boundary conditions, as in [15, Section 5.8.2].

In [15, Section 5.8.2], a Turing machine is encoded into a local Hamiltonian acting on a spin chain of length N with periodic boundary conditions. The ground space of the resulting Hamiltonian is 2N-fold degenerate. It consists of history states in which any two adjacent sites along the spin chain can act as the boundary spins for the purposes of the Turing machine construction, giving rise to 2N distinct ground states.

We can apply this same idea to construct a rotationally invariant HQECC which maps a (quasi-)local bulk Hamiltonian \(H_{\mathrm {bulk}}\) in \(\mathbb {H}^2\) to a local Hamiltonian \(H_{\mathrm {boundary}}\) acting on a chain of N qudits. The code space of the HQECC is 2N-fold degenerate, and below the energy cut-off \(H_{\mathrm {boundary}}\) has a direct sum structure:

$$\begin{aligned} H_{\mathrm {bulk}}\rightarrow H_{\mathrm {boundary}}|_{\le \frac{\Delta }{2}} = \begin{pmatrix} {\overline{H}}_{\mathrm {bulk}} &{} 0 &{} \ldots &{} 0 \\ 0 &{} {\overline{H}}_{\mathrm {bulk}} &{} \ldots &{} 0 \\ \vdots &{} \vdots &{} \ddots &{} \vdots \\ 0 &{} 0 &{} \ldots &{} {\overline{H}}_{\mathrm {bulk}} \end{pmatrix} \end{aligned}$$
(26)

where each block in the direct sum acts on one of the possible rotations of the boundary Hilbert space.
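A minimal numerical sketch of this block structure (our own construction, with small matrices standing in for \({\overline{H}}_{\mathrm {bulk}}\) and \({\overline{\rho }}_{\mathrm {bulk}}\)): the restricted boundary Hamiltonian is a direct sum of 2N identical copies, and the state map embeds the bulk state into the first block only, which preserves expectation values.

```python
import numpy as np

def boundary_restriction(H_bulk_bar, two_N):
    return np.kron(np.eye(two_N), H_bulk_bar)   # direct sum of 2N copies

def state_map(rho_bulk_bar, two_N):
    E = np.zeros((two_N, two_N))
    E[0, 0] = 1.0                               # pick out the first block
    return np.kron(E, rho_bulk_bar)

H = np.array([[0.0, 1.0], [1.0, 0.0]])          # stand-in bulk Hamiltonian
rho = np.array([[0.5, 0.5], [0.5, 0.5]])        # stand-in bulk state |+><+|
Hb, rb = boundary_restriction(H, 4), state_map(rho, 4)
# expectation values are preserved by the block embedding:
assert np.isclose(np.trace(Hb @ rb), np.trace(H @ rho))
```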

Observables are mapped in the same way as the Hamiltonian. In order to preserve expectation values, we choose the map on states to be of the form:

$$\begin{aligned} \rho _{\mathrm {boundary}} = {\mathcal {E}}_{\mathrm {state}}\left( \rho _{\mathrm {bulk}} \right) = \begin{pmatrix} {\overline{\rho }}_{\mathrm {bulk}} &{} 0 &{} \ldots &{} 0 \\ 0 &{} 0 &{} \ldots &{} 0 \\ \vdots &{} \vdots &{} \ddots &{} \vdots \\ 0 &{} 0 &{} \ldots &{} 0 \end{pmatrix} \end{aligned}$$
(27)

We can choose to map the bulk state into the ‘unrotated’ boundary Hilbert space, so that the geometric relationship between bulk and boundary spins is preserved.

7 Discussion

In this work, we have presented a conceptually simple method for proving universality of spin models. The reliance of this novel method on the ability to encode computation into the low-energy subspace of a Hamiltonian suggests that there is a deep connection between universality and complexity. This insight is made rigorous in [41], where we derive necessary and sufficient conditions for spin systems to be universal simulators (as was done in the classical case [42]).

This new, simpler proof approach is also stronger, allowing us to prove that the simple setting of translationally invariant interactions on a 1D spin chain suffices to give universal quantum models. Furthermore, we have provided the first construction of a translationally invariant universal model which is efficient in the number of qudits in the simulator system.

Translationally invariant interactions are more prevalent in condensed matter models than interactions which require fine tuning of individual interaction strengths. However, a serious impediment to experimentally engineering either of the universal constructions in this paper is the local qudit dimension, which is very large—a problem shared by the earlier 2D translationally invariant construction in [2].

An important open question is whether it is possible to reduce the local state dimension in these translationally invariant constructions, while preserving universality. One possible approach would be to apply the techniques from [12], which were used to reduce the local dimension of qudits used in translationally invariant QMA-complete local Hamiltonian constructions.

It would also be interesting to explore what other symmetries universal models can exhibit. This is of particular interest for constructing HQECCs, where we would like the boundary theory to exhibit (a discrete version of) conformal symmetry.