1 Introduction

Lattice gauge theory (LGT) calculations using quantum computers have already seen substantial progress. This is despite the fact that programmable quantum hardware has only recently become widely available to researchers in physics (see e.g. [1] for an up-to-date high-energy physics perspective on the field). With the basic formulation of quantum simulations of LGT being laid out very early on [2], efficient formulations of Abelian [3] and non-Abelian LGT [4, 5] on universal, gate-based hardware now exist. This includes a complete set of instructions for the efficient and accurate simulation of QCD and QED [6].

On the side of adiabatic quantum computing [7], despite the fact that it has been commercially available for more than a decade in the form of quantum annealers (QA) [8], this approach has only recently been used for LGT calculations. These include the pioneering studies on the annealer for the case of \(\operatorname{SU}(2)\) [9] and \(\operatorname{SU}(3)\) [10]. In these formulations, the number of qubits necessary to digitize the theory under study scales with the size of the Hilbert space of the problem, which grows exponentially with the spatial volume of the system. Thus, this formulation does not show the expected quantum advantage present in universal, gate-based quantum computing. On the other hand, systems in a QA architecture, such as D-Wave’s Advantage_system5.1, already comprise several thousand physical qubits [11]. Even at this stage of hardware development, proof-of-principle quantum computations in LGT [9, 10] or other field theories [12] are feasible. The intrinsic nature of the formulation of problems on the annealer simply requires the mapping of the lattice field theory onto an optimization problem represented by the Hamiltonian

$$\begin{aligned} H(q) = \sum_{i} Q_{ii}q_{i} + \sum_{i< j} Q_{ij}q_{i} q_{j} , \end{aligned}$$
(1)

with a real, upper-triangular matrix Q and binary variables \(q_{i}\in\{ 0,1\}\). This type of problem is known as quadratic unconstrained binary optimization (QUBO). Thus, QA can be seen as an ideal entry point into the field of quantum simulation of LGT. It is in this spirit that we turn to the subject of our study.
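To make the QUBO form concrete, the following minimal Python sketch (our own toy example, not tied to any particular lattice model) evaluates the cost function of Eq. (1) for a small, arbitrary upper-triangular matrix Q and finds the minimizing bit string by brute force; the annealer performs this minimization in hardware.

```python
import itertools
import numpy as np

# toy, arbitrary upper-triangular QUBO matrix
Q = np.array([[-1.0,  2.0,  0.0],
              [ 0.0, -1.0,  2.0],
              [ 0.0,  0.0, -1.0]])

def cost(q):
    # for binary q, q @ Q @ q equals sum_i Q_ii q_i + sum_{i<j} Q_ij q_i q_j
    q = np.asarray(q, dtype=float)
    return float(q @ Q @ q)

best = min(itertools.product((0, 1), repeat=3), key=cost)
print(best, cost(best))   # (1, 0, 1) with cost -2.0 for this toy Q
```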

On the lattice, the gauge-field Hilbert space is infinite-dimensional for compact Lie groups, and a truncation of the full symmetry becomes necessary (see [13] for a basic overview). One such truncation is the approximation of the full symmetry group by a discrete subgroup, a strategy whose varied success and intricacies are well summarized in [14]. Here, we take a practical approach and choose the finite, non-Abelian dihedral group \(D_{n}\) as our gauge group, which can be digitized without truncation as in [15]. By mapping this problem onto an optimization problem amenable to the QA, we extend the previous studies of compact Lie groups [9, 10] to the case of simply reducible, finite groups, for which we provide the adapted framework. This will allow future studies to separately assess the effects of Hilbert-space truncation and to compare classical and quantum simulations. Moreover, our Hamiltonian formulation has connections with gate-based methods, whose resource requirements do not scale with the size of the Hilbert space and which thus show polynomial scaling in the spatial volume, retaining the full quantum advantage.

2 Hamiltonian \(D_{n}\) lattice gauge theory

We begin by introducing the Hamiltonian formulation of \(D_{n}\) lattice gauge theory. Our approach closely follows that of [2], where the Hamiltonian approach was worked out for a general gauge group G. As usual, we work on a cubic lattice of dimension d, with the gauge fields living on the links between the lattice sites. The gauge field Hilbert space, denoted by \(\mathcal {H_{G}}\), is a direct product of the individual link spaces \(\mathcal {H_{G}} = \bigotimes_{\ell}\mathcal{H}_{\ell}\) which, in the group element basis, are defined by \(\mathcal{H}_{\ell}= \mathrm{span}(\{ \vert g\rangle\}_{g\in G})\). We start by defining the link operator Û, acting on \(\mathcal{H}_{\ell}\), as

$$\begin{aligned} \hat{U}^{j}_{mn} = \int\mathrm{d}g\,D^{j}_{mn}(g) \vert g \rangle \langle g \vert , \end{aligned}$$
(2)

where j labels the irreducible representations (irreps) of G and \(D^{j}_{mn}(g)\) are the Wigner representation matrices for \(g\in G\). The indices m, n label the multiplicity of the left and right projection of the link, respectively. This object is of primary importance in the Hamiltonian formulation as it is responsible for the interactions.

The Hamiltonian formulation for \(G=\operatorname{SU}(2)\) lattice gauge theory was first written down in [16]. In later work, it was shown how this Hamiltonian could be obtained from the transfer matrix [17]. For a general gauge group G, the lattice Hamiltonian, commonly referred to as the Kogut-Susskind Hamiltonian, consists of two terms

$$ \hat{H}_{\mathrm{KS}}= \hat{H}_{\mathrm{E}}+ \hat{H}_{\mathrm{B}}, $$
(3)

where the first term is referred to as the electric part and the second term is referred to as the magnetic part. The magnetic part takes the form

$$ \hat{H}_{\mathrm{B}}= -\lambda_{B}\sum _{\vec{x},i< j}\mathrm{Re}\operatorname{Tr}\hat{U}_{i}(\vec{x}) \hat{U}_{j}(\vec{x}+\hat{i}) \hat{U}^{\dagger }_{i}( \vec{x}+\hat{j}) \hat{U}^{\dagger}_{j}(\vec{x}), $$
(4)

where the sum is over the spatially-oriented plaquettes; this term can be seen to be diagonal in the group element basis by virtue of the definition in Eq. (2). In this basis, the electric term, \(\hat{H}_{\mathrm{E}}\), has a form that depends on whether G is taken to be a compact Lie group or a finite group [18, 19]. It is often much more convenient to work in the representation basis, labeled by the states \(\vert jmn\rangle\). Here j labels the irrep and m, n run over the states within the multiplet. In this basis, \(\hat{H}_{\mathrm{E}}\) becomes diagonal

$$ \hat{H}_{\mathrm{E}}= \lambda_{E} \sum _{\vec{x}} \sum^{d}_{i=1} \sum_{jmn} f_{j} \vert jmn \rangle_{\vec{x},i}\langle jmn \vert _{\vec{x},i} . $$
(5)

The group element basis is related to the representation basis through the following relation

$$ \langle g\vert jmn\rangle = \sqrt{\frac{\operatorname{dim}(j)}{ \vert G \vert }} D^{j}_{mn}(g), $$
(6)

where \(\vert G\vert\) is the order of the finite group G. From here on out, we will assume that we are working with finite groups and our formulae will reflect this. Using the transformation in Eq. (6), one can easily transform between the two bases.
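As a concrete illustration of Eq. (6) (our own check, not taken from the reference), the following sketch builds the change-of-basis matrix \(\langle g \vert jmn\rangle\) for \(G = D_{3}\) from its two one-dimensional irreps and the standard two-dimensional irrep, and verifies that it is unitary, i.e. that the representation-basis states are orthonormal in the group-element basis.

```python
import numpy as np

n = 3
# parametrize D_3 elements as g = rho^r sigma^s (three rotations, three reflections)
elements = [(r, s) for s in (0, 1) for r in range(n)]

def D_trivial(g):
    return np.array([[1.0]])

def D_sign(g):                       # +1 on rotations, -1 on reflections
    return np.array([[(-1.0) ** g[1]]])

def D_2(g):                          # standard two-dimensional irrep
    r, s = g
    th = 2.0 * np.pi * r / n
    R = np.array([[np.cos(th), -np.sin(th)],
                  [np.sin(th),  np.cos(th)]])
    return R @ np.diag([1.0, -1.0]) if s else R

cols = []
for D, dim in ((D_trivial, 1), (D_sign, 1), (D_2, 2)):
    for m in range(dim):
        for k in range(dim):
            # column <g|jmk> = sqrt(dim j / |G|) D^j_mk(g)
            cols.append([np.sqrt(dim / len(elements)) * D(g)[m, k] for g in elements])

V = np.array(cols).T                 # 6 x 6 matrix <g|jmn>
print(np.allclose(V.T @ V, np.eye(6)))   # True: the two bases are unitarily related
```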

It is important to discuss the couplings in the Kogut-Susskind Hamiltonian. The relationship between the couplings \(\lambda_{E}\), \(\lambda_{B}\) appearing in front of the electric and magnetic terms, respectively, can be determined in the limit \(a_{0}\to0\) when deriving the Hamiltonian in the transfer matrix formalism. However, this procedure depends on the gauge group under consideration. While for a compact Lie group, \(\lambda_{E} = g_{H}^{2}/2\), \(\lambda_{B} = 1/g_{H}^{2}\), for a finite group it has been determined by previous studies that one should use \(\lambda_{E} = \exp{(-2/g^{2}_{H})}\), \(\lambda_{B} = 1/g_{H}^{2}\), instead [18, 19]. Here we have introduced the Hamiltonian coupling \(g_{H}\), which is the geometric mean of the spatial and temporal couplings, \(g_{s}\) and \(g_{t}\). These are introduced in the transfer matrix formulation when one takes the temporal and spatial lattice spacings to be distinct. The coefficients \(f_{j}\) appearing in Eq. (5) are the eigenvalues of the quadratic Casimir operator for the case of compact Lie groups [2]. For example, when \(G = \operatorname{SU}(2)\), one gets the familiar result \(f_{j} = j(j+1)\). On the other hand, for finite gauge groups, the \(f_{j}\) can be derived systematically from the transfer matrix in the limit of vanishing temporal lattice spacing, \(a_{0}\to 0\) [19]. Other choices for the coefficients for the case of discrete gauge groups also exist in the literature [20].

2.1 Computation of \(H_{ij}\)

With the Kogut-Susskind Hamiltonian given in operator form Eq. (3), there still remains the task of constructing the states of the physical Hilbert space \(\mathcal{H_{P}}\). The label physical here refers to the subspace of the larger Hilbert space \(\mathcal{H_{G}}\) that respects local gauge invariance. Formally, in the group element basis, local gauge invariance can be expressed with the help of left and right multiplication operators given by \(\Theta^{L}_{g}(\vec{x}, i)\) and \(\Theta^{R}_{g}(\vec{x}, i)\) that act as follows

$$ \Theta^{L}_{g} \vert h\rangle =\bigl\vert g^{-1}h \bigr\rangle , \qquad \Theta^{R}_{g} \vert h\rangle = \bigl\vert hg^{-1} \bigr\rangle ,\quad g,h \in G, $$
(7)

with \(\Theta^{R \dagger}_{g} = \Theta^{R}_{g^{-1}}\). As usual, the link transforms in the adjoint representation under a gauge transformation. Thus, a local gauge transformation at the site x⃗ parametrized by g is given by

$$ \tilde{\Theta}_{g}(\vec{x}) \equiv\prod ^{d}_{i=1} \Theta^{L}_{g}( \vec {x},i) \Theta^{R \dagger}_{g}(\vec{x}-\hat{i},i). $$
(8)
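For concreteness, here is a small sketch (ours, again for \(G=D_{3}\) with elements written as \(g=\rho^{r}\sigma^{s}\)) of the multiplication operators in Eq. (7) as permutation matrices on the six-dimensional group-element space, including a check of \(\Theta^{R \dagger}_{g} = \Theta^{R}_{g^{-1}}\).

```python
import numpy as np

n = 3
elements = [(r, s) for s in (0, 1) for r in range(n)]   # g = rho^r sigma^s
index = {g: i for i, g in enumerate(elements)}

def mul(g1, g2):                     # dihedral group law: sigma rho = rho^{-1} sigma
    (r1, s1), (r2, s2) = g1, g2
    return ((r1 + (-1) ** s1 * r2) % n, (s1 + s2) % 2)

def inv(g):
    r, s = g
    return ((-r) % n, 0) if s == 0 else (r, 1)           # reflections are involutions

def theta_L(g):                      # |h> -> |g^{-1} h>
    M = np.zeros((len(elements), len(elements)))
    for h in elements:
        M[index[mul(inv(g), h)], index[h]] = 1.0
    return M

def theta_R(g):                      # |h> -> |h g^{-1}>
    M = np.zeros((len(elements), len(elements)))
    for h in elements:
        M[index[mul(h, inv(g))], index[h]] = 1.0
    return M

g = (1, 1)
print(np.allclose(theta_R(g).T, theta_R(inv(g))))        # Theta^R_g dagger = Theta^R_{g^-1}
```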

For a generic physical state \(\vert\psi\rangle\), gauge invariance demands

$$ \tilde{\Theta}_{g}(\vec{x}) \vert\psi\rangle = \vert\psi\rangle,\quad \forall\vec {x}\in\mathbb{Z}^{d}. $$
(9)

In the representation basis, the statement of Gauss’s law in Eq. (9) is equivalent to \(\vert\psi\rangle\) being written as a direct product of color singlets at each lattice site

$$ \vert\psi\rangle = \bigotimes_{\vec{x}} \vert00 \rangle_{\vec {x}} , $$
(10)

where we refer to Appendix B for the details regarding the explicit construction of \(\vert 00\rangle_{\vec{x}}\) in terms of the \(\vert jmn\rangle\).

As we are ultimately interested in mapping our system onto a quantum annealer, we are restricted to a rather small system size (Fig. 1). For a given lattice geometry, we are then left with the task of determining the matrix elements of the Hamiltonian (3). We start by introducing the trivial vacuum state \(\vert 0\rangle\) with every link in the trivial representation, \(j_{\ell}= 0\), such that \(\hat{H}_{\mathrm{E}}\vert0\rangle = 0\). Physically, this corresponds to the situation where the links contain zero chromoelectric flux. Following the approach of [2], the states of the physical Hilbert space can be systematically generated from this configuration by subsequently acting with gauge-invariant operators. To carry out this procedure, one needs to know how an individual link operator acts on a general state in the representation basis, \(\vert jmn\rangle\). Using the definition of the link operator in (2) and the matrix element in (6), one obtains

$$ \hat{U}^{2}_{m'n'} \vert jmn \rangle = \sqrt{\frac{\operatorname{dim}(j)}{ \vert G \vert }} \sum_{g\in G} D^{2}_{m'n'}(g)\, D^{j}_{mn}(g) \vert g \rangle . $$
(11)

This can be further simplified by using the Clebsch-Gordan (CG) series for the tensor product of two arbitrary representation matrices, which yields the result

$$\begin{aligned} \hat{U}^{2}_{m'n'} \vert j m n \rangle =& \sum_{JMN} \sqrt{\frac{\operatorname{dim}(j)}{\operatorname{dim}(J)}} \\ &{}\times \bigl\langle 2 m'\, j m \big\vert J M \bigr\rangle \bigl\langle J N \big\vert 2 n'\, j n \bigr\rangle \vert J M N \rangle , \end{aligned}$$
(12)

where the sum on J is over the irreps and the sums over M and N are over the states in a given irrep.

Figure 1 Ladder geometry used in this study. The sites of the ladder have been labeled as \((i,j)\), where \(i=0,1,\ldots ,N-1\) and \(j=0,1\). Here N denotes the length of the ladder. The forward link operators are also shown and have been labeled by their direction \(\mu=1,2\) as well as the site from which they emanate. We note that the system is periodic only in the \(\hat {1}\)-direction

By repeatedly applying gauge-invariant operators to the trivial vacuum state with the help of Eq. (12), the enumeration of the configuration space can be performed. In this procedure, Gauss's law is imposed at each step. This is equivalent to the Wigner 3J-symbol being nonzero for the tuple of irreps, \((j_{1},j_{2},j_{3})\), characterizing the three links meeting at a given lattice site. For more details regarding the 3J symbols and Gauss's law we refer the reader to Appendices A.3 and B.
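As a concrete version of this criterion (our own sketch; the paper's condition is phrased via the 3J symbols of Appendix A.3, and a nonzero 3J symbol requires the trivial irrep to occur in the product of the three irreps), the character sum below counts how often the trivial irrep of \(D_{3}\) appears in \(j_{1}\otimes j_{2}\otimes j_{3}\). Since the characters of \(D_{3}\) are real, conjugation of incoming links does not change the count.

```python
import numpy as np

n = 3
elements = [(r, s) for s in (0, 1) for r in range(n)]    # g = rho^r sigma^s

def chi(j, g):
    """Characters of D_3: j = 0 trivial, 1 sign, 2 two-dimensional irrep."""
    r, s = g
    if j == 0:
        return 1.0
    if j == 1:
        return -1.0 if s else 1.0
    return 0.0 if s else 2.0 * np.cos(2.0 * np.pi * r / n)

def singlet_multiplicity(j1, j2, j3):
    # multiplicity of the trivial irrep in j1 x j2 x j3 (real characters)
    return round(sum(chi(j1, g) * chi(j2, g) * chi(j3, g) for g in elements) / len(elements))

print(singlet_multiplicity(2, 2, 2))   # 1: three links in the 2 irrep can satisfy Gauss's law
print(singlet_multiplicity(0, 0, 1))   # 0: not admissible at a three-link site
```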

This task of mapping out the physical Hilbert space can be automated using a Markov-chain-like approach. For this, all one needs are the 3J symbols for the given gauge group G. The total number of states in the full Hilbert space is \(\tilde{N}_{\mathrm{irreps}}^{3N}\), where \(\tilde{N}_{\mathrm{irreps}}\) is the total number of irreps of the gauge group G and N is the length of the ladder. Although enforcing local gauge invariance removes a large number of these states, the physical Hilbert space still grows quite rapidly. In Table 1, we display the size of the physical Hilbert space for \(G = D_{3}, D_{4}\). These counts show that even for modest system sizes and small non-Abelian groups there are constraints on the problems that can be mapped to the quantum annealer.
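The enumeration itself can be sketched as follows for \(G = D_{3}\) on the \(N=2\) ladder of Fig. 1 (our own illustration: the link labeling and the per-site singlet criterion are our assumptions, and the resulting count may differ from Table 1 if additional conditions, e.g. reachability from the trivial vacuum, enter there).

```python
import itertools

N = 2
# D_3 character table over the six group elements (three rotations, three reflections)
chars = {0: [1, 1, 1, 1, 1, 1],
         1: [1, 1, 1, -1, -1, -1],
         2: [2, -1, -1, 0, 0, 0]}

def gauss_ok(j1, j2, j3):
    # is the trivial irrep contained in j1 x j2 x j3 ?
    return round(sum(a * b * c for a, b, c in zip(chars[j1], chars[j2], chars[j3])) / 6) > 0

# link labels: ('h', i, j) = horizontal link emanating from site (i, j), ('v', i) = rung at column i
links = [('h', i, j) for i in range(N) for j in (0, 1)] + [('v', i) for i in range(N)]
sites = [(i, j) for i in range(N) for j in (0, 1)]

def incident(site):
    i, j = site
    return [('h', i, j), ('h', (i - 1) % N, j), ('v', i)]   # right, left, rung

count = 0
for assignment in itertools.product(chars, repeat=len(links)):
    irrep = dict(zip(links, assignment))
    if all(gauss_ok(*(irrep[l] for l in incident(s))) for s in sites):
        count += 1
print(count)   # number of irrep assignments satisfying Gauss's law at every site
```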

Table 1 List of the size of the physical Hilbert space, \(N_{\mathrm{conf}}\), on a ladder of size N for \(D_{3}\) and \(D_{4}\). The configurations are enumerated by a set of integers \(\{j_{i}, i=1,2,\ldots ,3N \}\) characterizing the irrep of each link, whereby Gauss's law is satisfied at each site

Once we have enumerated the states in the physical Hilbert space, we can finally compute the matrix elements of the Hamiltonian in this basis. As \(\hat{H}_{\mathrm{E}}\) is diagonal in the representation basis, the application of Eq. (5) to the generic state in (56) is trivial. The magnetic Hamiltonian, which is responsible for the interactions, has a nontrivial action on a physical state. To illustrate this, we first write an arbitrary plaquette operator on the ladder

$$\begin{aligned} \hat{P}_{\vec{x}} =& \sum_{n_{1},\ldots,n_{4}} \hat{U}_{\vec {x},1;n_{1},n_{2}} \hat{U}_{\vec{x}+\hat{1},2;n_{2},n_{3}} \\ &{}\times \hat{U}_{\vec{x}+\hat{2},1;n_{3},n_{4}}^{\dagger}\hat {U}_{\vec{x},2;n_{4},n_{1}}^{\dagger}, \end{aligned}$$
(13)

where \(\vec{x} = (x,0)\), \(x=0,\ldots,N-1\) denotes the vertex at the bottom left corner of the plaquette and the sum is over the group indices. Here we label each link by its site vector and direction. Now, using the relation Eq. (12), we can determine the result of the plaquette operator acting on a state. The matrix element of Eq. (13) between two arbitrary physical states is given by

$$\begin{aligned} \langle\psi' \vert \hat{P}_{\vec{x}} \vert \psi\rangle =& \sum_{n_{1},\ldots ,n_{4}}\sum _{\{m_{i,1},o_{i,1}\}}\cdots\sum_{\{m_{i,n_{s}}, o_{i,n_{s}}\}} \\ &{}\times \prod_{s} \begin{pmatrix} j_{\ell_{1,s}} & j_{\ell_{2,s}} & j_{\ell_{3,s}} \\ m_{1,s} & m_{2,s} & m_{3,s} \end{pmatrix} \overline{ \begin{pmatrix} l_{\ell_{1,s}} & l_{\ell_{2,s}} & l_{\ell_{3,s}} \\ o_{1,s} & o_{2,s} & o_{3,s} \end{pmatrix} } \\ &{}\times \prod_{i \notin\mathcal{L}_{\vec{x}}} \delta_{l_{i},j_{i}} \delta_{m_{R_{i}},o_{R_{i}}} \delta_{m_{L_{i}},o_{L_{i}}} \\ &{}\times \frac{\sqrt{\mathrm{dim}(j_{\tilde{l}_{1}}) \mathrm {dim}(j_{\tilde{l}_{2}}) \mathrm{dim}(j_{\tilde{l}_{3}}) \mathrm {dim}(j_{\tilde{l}_{4}})}}{\sqrt{\mathrm{dim}(l_{\tilde{l}_{1}}) \mathrm{dim}(l_{\tilde{l}_{2}}) \mathrm{dim}(l_{\tilde{l}_{3}}) \mathrm {dim}(l_{\tilde{l}_{4}})}} \\ &{}\times \bigl\langle 2 n_{1} j_{\tilde{l}_{1}} m_{L_{\tilde{l}_{1}}} \vert l_{\tilde{l}_{1}} o_{L_{\tilde{l}_{1}}}\bigr\rangle \bigl\langle l_{\tilde{l}_{1}} o_{R_{\tilde{l}_{1}}} \vert 2 n_{2} j_{\tilde{l}_{1}} m_{R_{\tilde{l}_{1}}} \bigr\rangle \\ &{}\times \bigl\langle 2 n_{2} j_{\tilde{l}_{2}} m_{L_{\tilde{l}_{2}}} \vert l_{\tilde{l}_{2}} o_{L_{\tilde{l}_{2}}}\bigr\rangle \bigl\langle l_{\tilde{l}_{2}} o_{R_{\tilde{l}_{2}}} \vert 2 n_{3} j_{\tilde{l}_{2}} m_{R_{\tilde{l}_{2}}} \bigr\rangle \\ &{}\times \bigl\langle 2 n_{4} j_{\tilde{l}_{3}} m_{L_{\tilde{l}_{3}}} \vert l_{\tilde{l}_{3}} o_{L_{\tilde{l}_{3}}}\bigr\rangle \bigl\langle l_{\tilde{l}_{3}} o_{R_{\tilde{l}_{3}}} \vert 2 n_{3} j_{\tilde{l}_{3}} m_{R_{\tilde{l}_{3}}} \bigr\rangle \\ &{}\times \bigl\langle 2 n_{1} j_{\tilde{l}_{4}} m_{L_{\tilde{l}_{4}}} \vert l_{\tilde{l}_{4}} o_{L_{\tilde{l}_{4}}}\bigr\rangle \bigl\langle l_{\tilde{l}_{4}} o_{R_{\tilde{l}_{4}}} \vert 2 n_{4} j_{\tilde{l}_{4}} m_{R_{\tilde{l}_{4}}} \bigr\rangle , \end{aligned}$$
(14)

where we refer to \(\vert\psi\rangle\) and \(\vert\psi'\rangle\) as the “in” and “out” states, \(\mathcal{L}_{\vec{x}} = \{ \tilde{l}_{1}, \tilde {l}_{2}, \tilde{l}_{3}, \tilde{l}_{4} \}\) denotes the set of all links involved in (13), and the bar denotes complex conjugation. The sums over \(n_{i}\) run over the states in the 2 representation and the sums over \(m_{i,\mu}\), \(o_{i,\mu}\) run over the states in the corresponding irreps for each link in the “in” and “out” states. In Eq. (14), we have introduced the Wigner 3J symbols, generalized to \(D_{n}\). These, along with other details regarding the group theory of \(D_{n}\), are given in Appendix A, with the full derivation of Eq. (14) in Appendix C. Using this result for the plaquette matrix element, one can construct the magnetic Hamiltonian by recalling that \(\hat{H}_{\mathrm{B}}= -\lambda_{B} \sum_{\vec{x}} \hat{P}_{\vec {x}}\). We note here that this construction is completely general and applies to a ladder of arbitrary length.

For the gauge groups we have examined, it turns out that the Hamiltonian matrix is extremely sparse. This will work to our advantage later on when we map our problem to the quantum annealer. In Fig. 2, we display a visualization of the sparsity of the Hamiltonian for both \(D_{3}\) and \(D_{4}\). As the product of delta functions in Eq. (14) shows, the sparsity of the Hamiltonian ultimately stems from the orthonormality of the link states in the representation basis. Once one has calculated the Hamiltonian matrix, the classical part of the calculation is practically complete. In the following, we discuss how our lattice gauge Hamiltonian is transformed into an optimization problem so that the quantum computation on the annealer can be performed.
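For the exact reference values used later, the Hamiltonian can be assembled as a sparse matrix and diagonalized directly. Below is a minimal sketch (ours) of this assembly; it assumes the electric diagonal (the per-configuration sums of the \(f_{j}\)) and the nonzero plaquette matrix elements of Eq. (14) have been precomputed, uses the finite-group couplings quoted above, and fills in random placeholder data only to make the snippet runnable.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

def build_hamiltonian(electric_diag, plaquette_elements, g2):
    """electric_diag[a]    : sum of f_j over the links of configuration a.
    plaquette_elements     : {(a, b): value} with a <= b, upper triangle of the
                             real, symmetric plaquette sum of Eqs. (13)-(14)."""
    lam_E = np.exp(-2.0 / g2)        # finite-group couplings quoted in the text
    lam_B = 1.0 / g2
    n_conf = len(electric_diag)
    H = sp.dok_matrix((n_conf, n_conf))
    for a, f in enumerate(electric_diag):
        H[a, a] = lam_E * f
    for (a, b), v in plaquette_elements.items():
        H[a, b] -= lam_B * v
        if a != b:
            H[b, a] -= lam_B * v     # keep H symmetric
    return H.tocsr()

# random placeholder data standing in for the precomputed D_3 matrix elements
rng = np.random.default_rng(0)
n_conf = 40
diag = rng.integers(0, 6, size=n_conf).astype(float)
plaq = {tuple(sorted((int(a), int(b)))): float(rng.normal())
        for a, b in zip(rng.integers(0, n_conf, 80), rng.integers(0, n_conf, 80))}

H = build_hamiltonian(diag, plaq, g2=0.75)
print("ground-state energy:", eigsh(H, k=1, which="SA")[0][0])
```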

Figure 2 Visualization of the Hamiltonian matrices (\(g_{H}^{2} = 0.75\)) for gauge groups \(D_{3}\) (left) and \(D_{4}\) (right) on the \(N=2\) ladder. One immediately notices the sparsity of both Hamiltonians, which is due to the structure of the magnetic contribution

3 Implementation and results

3.1 Ground state via variational formulation

Quantum annealing is a method used to solve a very specific type of problem: the calculation of the ground state of a generalized Ising model [11]. Thus, unlike the case of the gate-based approach where one has at one’s disposal a set of universal quantum gates, for quantum annealing one must cast the problem that one would like to solve into the form of an Ising model.

In the context of lattice gauge theory, it has been shown that one can map the Hamiltonian in Eq. (3) onto a model with QUBO form, Eq. (1), in order to compute the low-lying states of the spectrum [9, 10]. To see how this emerges we consider the variational principle from quantum mechanics

$$ E_{0} \leq\frac{\langle\psi \vert \hat{H} \vert \psi\rangle }{\langle\psi\vert\psi \rangle} , $$
(15)

where \(E_{0}\) is the ground-state energy. Here we use a variational ansatz with a trial wave function

$$ \vert\psi\rangle = \sum_{\alpha=1}^{N_{\mathrm{conf}}} a_{\alpha } \vert\phi_{\alpha}\rangle, $$
(16)

with real parameters \(a_{\alpha}\) and basis states \(\vert\phi_{\alpha} \rangle\) corresponding to the configurations of the physical Hilbert space. The expansion parameters are, in general, complex but here can be chosen to be real as the Hamiltonian is a real, symmetric matrix. It is in this way that solving for the ground-state energy of our lattice Hamiltonian can be recast as an optimization problem, as one seeks to minimize \(\langle\psi\vert \hat{H} \vert\psi\rangle\). Our accuracy in determining the eigenstates of the system is limited only by the precision to which we can determine the coefficients \(a_{\alpha}\). To cast our problem in the form of Eq. (1), we do the following. First, the coefficients are given a fixed-point binary representation. Second, the norm of the wave function is discouraged from being zero by adding a penalty term. Incorporating both of these into our variational calculation, the rhs of Eq. (15) can be rewritten as a cost function

$$ F = \langle\psi \vert \hat{H} \vert \psi\rangle - \eta\langle \psi\vert\psi\rangle = \sum_{\alpha,\beta}^{{N_{\mathrm{conf}}}} \sum_{i,j}^{K} Q_{\alpha \beta,ij} q_{\alpha,i}q_{\beta,j}, $$
(17)

where

$$\begin{aligned}& Q_{\alpha\beta,ij} = 2^{i+j-2K-2z}(-1)^{\delta_{iK} + \delta _{jK}} h_{\alpha\beta} + \delta_{\alpha\beta}\delta_{ij}\tilde {Q}_{\alpha,i}, \\& \tilde{Q}_{\alpha,i} = 2^{i-K-z+1}(-1)^{\delta_{iK}}\sum _{\gamma }^{N_{\mathrm{conf}}}a_{\gamma}^{(z)}h_{\gamma\alpha}, \\& h_{\alpha\beta} = H_{\alpha\beta}-\eta\delta_{\alpha\beta }. \end{aligned}$$
(18)

Here, η represents the tunable parameter multiplying the penalty term. In addition to giving the variational parameters a fixed-point representation, in the above definitions we have already introduced the parameter z, which is used in the adaptive variational search method [10]. This procedure iteratively improves the estimates for the \(a^{(z+1)}_{\alpha}\) by distributing the K sampling points around the preceding solution to the QUBO problem, \(a^{(z)}_{\alpha}\),

$$ a_{\alpha}^{(z+1)} = a_{\alpha}^{(z)}- \frac{q_{\alpha,K}}{2^{z}} + \sum_{i=1}^{K-1} \frac{q_{\alpha,i}}{2^{K-i+z}} , $$
(19)

starting at \(a_{\alpha}^{(0)} = 0\). The estimate for the eigenstate is refined at each step, hence the name “zooming” for this procedure. The number of zoom steps plays a significant role in the overall computational cost of our calculation, as each refinement requires calls to the quantum annealer.
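A compact sketch of this adaptive search, written by us directly from Eqs. (17)-(19) with the classical neal sampler standing in for the hardware, is given below. The matrix H is any real, symmetric Hamiltonian in the physical basis (a random stand-in in the demo), and η must lie close to, and above, the ground-state energy; in practice it is tuned iteratively as described in the following.

```python
import numpy as np
import neal

def bit_weights(K, z):
    # weights of the K bits per coefficient at zoom step z; the K-th bit is the sign bit
    return np.array([2.0 ** (i - K - z) * (-1) ** (i == K) for i in range(1, K + 1)])

def qubo_matrix(h, a, K, z):
    """QUBO matrix of Eq. (18) for the shift around the current point a = a^(z)."""
    N, c = len(a), bit_weights(K, z)
    Q = np.zeros((N * K, N * K))
    for al in range(N):
        for be in range(N):
            Q[al * K:(al + 1) * K, be * K:(be + 1) * K] = np.outer(c, c) * h[al, be]
    for al in range(N):
        lin = 2.0 * (a @ h[:, al])                  # linear term, placed on the diagonal
        Q[al * K:(al + 1) * K, al * K:(al + 1) * K] += np.diag(c * lin)
    return Q

def anneal_ground_state(H, eta, K=3, z_max=5, num_reads=1000):
    h = H - eta * np.eye(len(H))
    a = np.zeros(len(H))
    sampler = neal.SimulatedAnnealingSampler()
    for z in range(z_max):
        Q = qubo_matrix(h, a, K, z)
        qubo = {(i, j): Q[i, j] for i in range(len(Q)) for j in range(len(Q)) if Q[i, j]}
        best = sampler.sample_qubo(qubo, num_reads=num_reads).first.sample
        q = np.array([best[i] for i in range(len(Q))]).reshape(len(a), K)
        a = a + q @ bit_weights(K, z)               # zooming update, Eq. (19)
    a /= np.linalg.norm(a)
    return a, float(a @ H @ a)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    M = rng.normal(size=(6, 6))
    H = (M + M.T) / 2                               # random symmetric stand-in Hamiltonian
    E_exact = np.linalg.eigvalsh(H)[0]
    _, E_var = anneal_ground_state(H, eta=E_exact + 0.1)
    print(E_exact, E_var)
```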

The quantum computations are done on the quantum annealing hardware Advantage_system5.1 from D-Wave [11], which is accessible via its API D-Wave Ocean [21]. As our system sizes are still small (cf. Table 1), the results can be compared with the exact solution using the Hamiltonian Eq. (3) as well as with simulated annealing via the Ocean package neal. As an alternative, which we did not employ, D-Wave offers hybrid solvers such as the KerberosSampler [21], which attempt to break down the original QUBO matrix into smaller pieces to be subsequently solved using classical or quantum hardware. This appears to be particularly useful for system sizes that cannot be embedded on currently available annealer architectures.
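For the hardware runs, the same QUBO dictionary is submitted through Ocean. A minimal access sketch (ours; it assumes a configured D-Wave API token and that the named solver is available to the account) looks as follows.

```python
import neal
from dwave.system import DWaveSampler, EmbeddingComposite

def get_sampler(use_hardware=False):
    if use_hardware:
        # minor-embedding onto the hardware graph is handled by the composite;
        # the chain strength is left at its automatic default, as in the text
        return EmbeddingComposite(DWaveSampler(solver="Advantage_system5.1"))
    return neal.SimulatedAnnealingSampler()

# qubo is a {(i, j): bias} dictionary as constructed above
# sampleset = get_sampler(use_hardware=True).sample_qubo(
#     qubo, num_reads=1000, annealing_time=20)     # annealing time in microseconds
# best = sampleset.first.sample
```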

It should be noted that for computations employing both simulated and quantum annealing, the results still carry an η dependence, see Eq. (18). As already noticed in [9, 10], convergence to the true ground state is achieved for η lying in the vicinity of the ground-state energy \(E_{0}\), approached from above. In practice, one can determine a suitable η for a given Q by solving Eq. (18) iteratively in η, terminating the calculation when a certain convergence criterion, such as the relative improvement in the solution, is fulfilled. This strategy works well for local computations with simulated annealing and could in principle also be employed for quantum annealing. Here, runtime on the quantum annealer is the major constraint.

We finally comment on our setup when accessing the annealer via the provided software package. We use the quantum annealer in its forward annealing mode with the default annealing schedule and an annealing time of \(t_{f} = 20~\mu \mathrm{s}\). At least one more parameter needs to be provided by the user during the quantum annealing computations, namely the so-called chain strength. For our calculations, we find automatic chain strength tuning (the default option) to be sufficient. Figure 3 shows results from both simulated and quantum annealing for the ground-state energy \(\langle H\rangle\) (red) as well as the expectation values of the magnetic part \(\langle H_{B}\rangle\) (blue) and kinetic part \(\langle H_{E}\rangle\) (black) for \(G = D_{3}\) (left). When going to \(G=D_{4}\) (right), an increase in computational resources is needed due to the more complicated energy landscape of the larger group, a demand which we could only partially meet due to runtime restrictions.

Figure 3 (Left): Ground-state expectation values for \(D_{3}\) for the full Kogut-Susskind Hamiltonian (red), electric term (black), and magnetic term (blue) as a function of the inverse Hamiltonian coupling squared. The open symbols represent the minimum result from the quantum annealer while the filled symbols were obtained from classical simulated annealing. The colored band, although barely visible, represents the mean and sample standard deviation of the measurements from the quantum annealer. Simulation parameters for the latter were \(K=3\) with \(z_{max} = 5\) zoom steps and \(n_{\mathrm{reads}} = 1000\). (Right): The same quantities for \(G = D_{4}\) where the sample standard deviation from QA is much larger. This is due to the fact that more computing resources are needed to accurately determine the minimum. Simulation parameters for QA were \(K=2\) with \(z_{max} = 7\) zoom steps and \(n_{\mathrm{reads}} = 2000\)

3.2 Time evolution

One of the main motivations for working in the Hamiltonian formulation of lattice gauge theories is the ability to access real-time dynamics. This stands in stark contrast to mainstream lattice calculations, which work in Euclidean space and must perform an analytic continuation of numerical data in order to access real-time physics. In the gate-based approach, the so-called Trotter approximation is applied to the time-evolution operator, \(\hat{U}(T) = \exp\{ -iT \hat{H} \}\), which evolves an initial state by a finite time T [15]. This approximation consists of replacing the full Û with a product of operators which evolve the system over a smaller time interval, δt. Corrections to this approximation typically scale with powers of δt. This approach to the time evolution of quantum states allows for an efficient simulation of the theory on universal quantum computers.
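As a minimal numerical illustration of the Trotter approximation (our own, with random Hermitian matrices standing in for the electric and magnetic parts rather than the actual \(D_{n}\) Hamiltonians), the deviation from the exact evolution operator shrinks as the number of steps grows:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
dim, T = 8, 1.0
A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
B = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
H_E, H_B = (A + A.conj().T) / 2, (B + B.conj().T) / 2    # Hermitian stand-ins

U_exact = expm(-1j * T * (H_E + H_B))
for N_t in (4, 16, 64):
    dt = T / N_t
    U_step = expm(-1j * dt * H_E) @ expm(-1j * dt * H_B)  # one first-order Trotter step
    U_trotter = np.linalg.matrix_power(U_step, N_t)
    print(N_t, np.linalg.norm(U_trotter - U_exact))       # error shrinks roughly like dt
```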

In order to solve this problem on the quantum annealer, however, one must reformulate time evolution as an optimization problem. This can be done by the introduction of Feynman clock states [22], a mechanism first applied to quantum chemistry calculations in order to generate parallel-in-time quantum dynamics [23]. We thus have to introduce an ancillary quantum system with states \(\vert t\rangle\), \(t=1,2,\ldots, N_{t}\) where \(N_{t}\) is the number of time-slices in the time evolution. Tensoring this orthonormal state with our as-yet-unknown state vector \(\vert\psi_{t}\rangle\) at each timeslice, the problem of time evolution is equivalent to the minimization of the following functional

$$\begin{aligned} \mathcal{L} = \sum^{N_{t}}_{t,t'=1} \langle t' \vert \langle\psi _{t'} \vert \hat {\mathcal{C}} \vert \psi_{t}\rangle \vert t \rangle - \eta \Biggl( \sum^{N_{t}}_{t,t'=1} \langle t'\vert \langle\psi_{t'} \vert \psi_{t}\rangle \vert t \rangle - 1 \Biggr), \end{aligned}$$
(20)

where

$$\begin{aligned} \hat{\mathcal{C}} \equiv& \hat{\mathcal{C}}_{0} + \frac{1}{2} \sum^{N_{t}-1}_{t=1} \bigl( \mathbb{I} \otimes \vert t\rangle \langle t \vert + \mathbb{I} \otimes \vert t+1\rangle \langle t+1 \vert \\ &{}- \hat {U}_{\delta t} \otimes \vert t+1\rangle \langle t \vert - \hat {U}_{\delta t}^{\dagger} \otimes \vert t\rangle \langle t+1 \vert \bigr). \end{aligned}$$
(21)

Here \(\delta t \equiv T / N_{t}\) is the step size in time, η is a Lagrange multiplier analogous to our previous penalty term, and \(\hat {\mathcal{C}}_{0}\) selects a predetermined initial state. By construction, \(\hat{\mathcal{C}}\) is hermitian. One can show that the minimum of the functional Eq. (20) corresponds to the exact time-evolved state at each step. Thus, as a result of a single optimization problem one obtains the full time-evolution of a many-body quantum state over a finite time interval. From this functional one can now obtain the QUBO matrix. As previously discussed for the case of finding the ground state of our Hamiltonian, this is what the quantum annealer requires as input. Our discussion closely follows the derivation of [10]. Using a variational state \(\vert\psi_{t}\rangle\vert t\rangle\) at each time step t, the functional in Eq. (20) becomes

$$ \mathcal{L} = \sum_{\alpha\beta} a^{*}_{\alpha} L_{\alpha\beta} a_{\beta}, $$
(22)

where the expansion parameters \(a_{\alpha}\) are complex, the indices in the sum run over all \(N_{t} N_{\mathrm{conf}}\) values, and \(L_{\alpha \beta}\) are the matrix elements of the functional. The terms in the above sum can be written in terms of the real and imaginary parts of both \(L_{\alpha\beta}\) and \(a_{\alpha}\). Using the fixed-point representation for both the real and imaginary parts of \(a_{\alpha}\), one obtains the following QUBO matrix

$$ \begin{aligned} &Q_{\alpha,i;\beta,j} = \textstyle\begin{cases} 2^{i+j-2K-2z} (-1)^{\delta_{iK}+\delta_{jK}} \Re L_{\alpha\beta}+ 2\delta_{\alpha\beta} \delta_{ij} 2^{i-K-z} (-1)^{\delta_{iK}} \\ \quad{} \times\sum_{\gamma} ( \Re a^{(z)}_{\gamma}\Re L_{\gamma\beta} + \Im a^{(z)}_{\gamma}\Im L_{\gamma\beta} ) , \quad 1\leq i,j \leq K , \\ - 2^{i+j'-2K-2z}(-1)^{\delta_{iK} + \delta_{j'K}} \Im L_{\alpha \beta} ,\quad 1\leq i, j' \leq K , \\ 2^{i'+j-2K-2z}(-1)^{\delta_{i'K} + \delta_{jK}} \Im L_{\alpha\beta } ,\quad 1\leq i',j \leq K , \\ 2^{i'+j'-2K-2z}(-1)^{\delta_{i'K} + \delta_{j'K}} \Re L_{\alpha \beta} + 2\delta_{\alpha\beta} \delta_{i'j'} 2^{i'-K-z} (-1)^{\delta_{i'K}} \\ \quad{} \times\sum_{\gamma} (\Im a^{(z)}_{\gamma}\Re L_{\gamma\beta}- \Re a^{(z)}_{\gamma}\Im L_{\gamma\beta} ) , \quad 1\leq i',j' \leq K , \end{cases}\displaystyle \end{aligned} $$
(23)

where the Latin indices now run from 1 to 2K to allow for K bits in representing both the real and imaginary parts of the variational parameters and the primed Latin indices are shifted by K. The dimension of this QUBO matrix is \(2KN_{t} N_{\mathrm{conf}} \times2KN_{t} N_{\mathrm{conf}}\), which is significantly larger than the one used to determine the eigenstates of the Hamiltonian. This problem size is too large to be fully embedded on current quantum annealers, and we revert to local simulated annealing to solve it. The results of our time evolution simulations are displayed in Fig. 4, where we have time evolved the trivial vacuum. Shown are the expectation value of the magnetic part, \(\hat {H}_{B}\), as well as the probability that the trivial vacuum persists, \(\vert \langle0\vert U(t)\vert0\rangle \vert ^{2}\). One can see good agreement with the exact results.
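To illustrate the clock construction itself, the sketch below (ours, with a small random Hermitian stand-in Hamiltonian and exact diagonalization in place of annealing) builds \(\hat{\mathcal{C}}\) as in Eq. (21) and checks that its ground state encodes the exact time-evolved state at every time slice.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
dim, N_t, dt = 4, 6, 0.3
A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
H = (A + A.conj().T) / 2                     # random Hermitian stand-in Hamiltonian
U = expm(-1j * dt * H)

psi0 = np.zeros(dim, complex)
psi0[0] = 1.0                                # predetermined initial state

def proj(t1, t0):                            # |t1><t0| on the clock register
    P = np.zeros((N_t, N_t), complex)
    P[t1, t0] = 1.0
    return P

I_sys = np.eye(dim)
# C0 penalizes any component at t = 0 orthogonal to the chosen initial state
C = np.kron(I_sys - np.outer(psi0, psi0.conj()), proj(0, 0))
for t in range(N_t - 1):
    C += 0.5 * (np.kron(I_sys, proj(t, t)) + np.kron(I_sys, proj(t + 1, t + 1))
                - np.kron(U, proj(t + 1, t)) - np.kron(U.conj().T, proj(t, t + 1)))

vals, vecs = np.linalg.eigh(C)
history = vecs[:, 0].reshape(dim, N_t)       # ground state (eigenvalue ~ 0) = history state
for t in range(N_t):
    psi_t = history[:, t] / np.linalg.norm(history[:, t])
    exact = np.linalg.matrix_power(U, t) @ psi0
    print(t, abs(np.vdot(exact, psi_t)))     # ~ 1 at every time slice, up to a phase
```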

Figure 4 Results for time evolution using simulated annealing at \(g_{H}^{2} = 0.75\) with simulation parameters given in the text. The red and yellow points represent the expectation value of the magnetic Hamiltonian in the time-evolved trivial vacuum state as a function of time for \(D_{3}\) and \(D_{4}\). The blue and black points represent the probability amplitude for the trivial vacuum state to persist as a function of time. The lines represent the exact result for each case

4 Conclusion and outlook

We have constructed the Hamiltonian formulation of non-Abelian lattice gauge theories for discrete gauge groups \(D_{n}\). For the concrete examples \(D_{3}\), \(D_{4}\) we worked out the Kogut-Susskind Hamiltonian as well as the representations of the corresponding Hilbert space basis states in terms of Clebsch-Gordan coefficients. In principle, this construction can be generalized to larger gauge groups and higher dimensionality. By observing Eq. (55), it is clear that a full 2D lattice, which has a coordination number of four, would involve coupling one additional link per site to yield a representation \(j_{I}\) (corresponding to the addition of three angular momenta in \(\operatorname{SU}(2)\)). The natural coefficients arising in this context would then be the Wigner 6J-symbols, also known as the Racah coefficients [24]. For three dimensions, this construction can in principle be repeated. However, in that case it seems advisable to work purely with CG coefficients. As an example, we have shown how to map these simple lattice gauge theories onto a quantum annealer. In doing so, we have been able to compute the spectrum as well as the time evolution for both \(D_{3}\) and \(D_{4}\) on small lattices. These proof-of-principle results are of course affected by finite-size effects, and going to larger system sizes will have a significant impact on observables such as the spectrum of the theory. Such larger systems are, however, accessible using traditional Markov chain Monte Carlo methods (see e.g. [14]).

The main obstacle to simulating both larger groups and larger lattices in our approach is the scaling of the physical Hilbert space with increasing lattice and group size. It may thus be helpful to look in other directions in order to utilize the power of the quantum annealer.

One idea to get around the barrier of rapidly expanding Hilbert spaces for the case of continuous groups is to perform a truncation in the number of allowed irreps as was done in earlier studies of \(\operatorname{SU}(2)\) and \(\operatorname{SU}(3)\) in the Hamiltonian formulation [4, 5, 9]. This could also be done for \(D_{n}\), where \(n>4\). A similar truncation procedure would involve both the electric and magnetic terms in the Hamiltonian, and thus one could investigate the effects on the ground-state energy as well as the dynamics of the system.

One further avenue that could be pursued with the annealer is the estimation of tunneling rates and vacuum decay [12, 25]. These non-perturbative processes are fundamental to the understanding of a wide range of phenomena in both high-energy and condensed matter physics. Another possibility is using quantum annealing in state preparation. This is an important problem faced by gate-based approaches to simulating lattice gauge theories.