Introduction

Deep learning has become crucial in many research fields, with neural networks being the key to achieving impressive results. Well-known examples include deep convolutional neural networks (CNNs) for image recognition1,2 and Deep Transformers for language-related tasks3,4,5. The success of deep networks rests on two ingredients: architectures with a large number of parameters (often in the billions), which allow for great flexibility, and training these networks on large amounts of data. In practice, however, successfully training these large models requires navigating the complicated and highly non-convex landscape associated with this extensive parameter space.

The most widely used methods rely on stochastic gradient descent (SGD), where the gradient of the loss function is estimated from a randomly selected subset of the training data. Over the years, variations of traditional SGD, such as Adam6 or AdamW7, have proven highly effective, leading to more accurate results. In the late 1990s, Amari et al.8,9 suggested using knowledge of the geometric structure of the parameter space to adjust the gradient direction for non-convex landscapes, defining the concept of natural gradients. Around the same time, Sorella10,11 proposed a similar method, now known as stochastic reconfiguration (SR), to enhance the optimization of variational wave functions in quantum many-body systems. Importantly, this latter approach typically outperforms other methods such as SGD or Adam, leading to significantly lower variational energies. The main idea of SR is to exploit information about the curvature of the loss landscape, thus improving the convergence speed in landscapes that are steep in some directions and shallow in others12. For physically inspired wave functions (e.g., Jastrow-Slater13 or Gutzwiller-projected states14) the original SR formulation is highly efficient, since these states have few variational parameters, typically from O(10) to O(10²).

Over the past few years, neural networks have been extensively used as powerful variational Ansätze for studying interacting spin models15, and the number of parameters has increased significantly. Starting with simple restricted Boltzmann machines (RBMs)15,16,17,18,19,20, more complicated architectures such as CNNs21,22,23 and recurrent neural networks24,25,26,27 have been introduced to handle challenging systems. In particular, Deep CNNs28,29,30,31 have proven to be highly accurate for two-dimensional models, outperforming methods such as density-matrix renormalization group (DMRG) approaches32 and Gutzwiller-projected states14. These deep learning models perform best when the number of parameters is large. However, a significant bottleneck arises when employing the original formulation of SR for optimization, as it is based on the inversion of a matrix of size P × P, where P denotes the number of parameters. Consequently, this approach becomes computationally infeasible as the parameter count exceeds O(10⁴), primarily due to the limited memory capacity of current-generation GPUs.

Recently, Chen and Heyl30 made a step forward in the optimization procedure by introducing an alternative method, dubbed MinSR, to train neural-network quantum states. MinSR does not require inverting the original P × P matrix but rather a much smaller M × M one, where M is the number of configurations used to estimate the SR matrix. This is convenient in the deep learning setup, where P ≫ M. Most importantly, this procedure avoids allocating the P × P matrix, reducing the memory cost. However, this formulation is obtained by minimizing the Fubini-Study distance with an ad hoc constraint. In this work, we first use a simple relation from linear algebra to show, in a transparent way, that SR can be rewritten exactly in a form that involves inverting a small M × M matrix (for real-valued wave functions; a 2M × 2M matrix for complex-valued ones) and that only a standard regularization of the SR matrix is required. Then, we exploit our technique to optimize a Deep Vision Transformer (Deep ViT) model, which has demonstrated exceptional accuracy in describing quantum spin systems in one and two spatial dimensions33,34,35,36. Using almost 3 × 10⁵ variational parameters, we are able to achieve the state-of-the-art ground-state energy for the most paradigmatic example of a quantum many-body spin model, the J1J2 Heisenberg model on the square lattice:

$$\hat{H}=J_{1}\sum_{\langle i,j\rangle }\hat{\boldsymbol{S}}_{i}\cdot \hat{\boldsymbol{S}}_{j}+J_{2}\sum_{\langle \langle i,j\rangle \rangle }\hat{\boldsymbol{S}}_{i}\cdot \hat{\boldsymbol{S}}_{j}$$
(1)

where \(\hat{\boldsymbol{S}}_{i}=(S_{i}^{x},S_{i}^{y},S_{i}^{z})\) is the S = 1/2 spin operator at site i, and J1 and J2 are nearest- and next-nearest-neighbour antiferromagnetic couplings, respectively. Its ground-state properties have been the subject of many studies over the years, often with conflicting results14,32,37. In particular, several works focused on the highly frustrated regime, which turns out to be challenging for numerical methods14,16,21,22,23,28,29,30,31,32,37,38,39,40,41,42; for this reason, it is widely recognized as the benchmark model for validating new approaches. Here, we focus on the particularly challenging case with J2/J1 = 0.5 on the 10 × 10 cluster, where the exact solution is not known.

Within variational methods, one of the main difficulties comes from the fact that the sign structure of the ground state is not known for J2/J1 > 0. Indeed, the Marshall sign rule43 gives the correct signs (for every cluster size) only when J2 = 0. However, in order to stabilize the optimization, many previous works imposed the Marshall sign rule as a first approximation of the sign structure (see Marshall prior in Table 1). By contrast, within the present approach, we do not need any prior knowledge of the signs, thus defining a very general and flexible variational Ansatz.

Table 1 Ground-state energy on the 10 × 10 square lattice at J2/J1 = 0.5

In the following, we first show the alternative SR formulation, then discuss the Deep Transformer architecture, recently introduced by some of us33,34 as a variational state, and finally present the results obtained by combining the two techniques on the J1J2 Heisenberg model.

Results

Stochastic reconfiguration

Finding the ground state of a quantum system with the variational principle involves minimizing the variational energy \({E}_{\theta }=\langle {\Psi }_{\theta }| \hat{H}| {\Psi }_{\theta }\rangle /\langle {\Psi }_{\theta }| {\Psi }_{\theta }\rangle\), where \(\left\vert {\Psi }_{\theta }\right\rangle\) is a variational state parametrized through a vector θ of P real parameters; in the case of complex parameters, we can treat their real and imaginary parts separately44. For a system of N spin-1/2 particles, \(\left\vert {\Psi }_{\theta }\right\rangle\) can be expanded in the computational basis \(\left\vert {\Psi }_{\theta }\right\rangle ={\sum }_{\{\sigma \}}{\Psi }_{\theta }(\sigma )\left\vert \sigma \right\rangle\), where Ψθ(σ) = 〈σ∣Ψθ〉 is a map from the spin configurations of the Hilbert space, \(\{\left\vert \sigma \right\rangle =\left\vert {\sigma }_{1}^{z},{\sigma }_{2}^{z},\cdots \,,{\sigma }_{N}^{z}\right\rangle ,\,{\sigma }_{i}^{z}=\pm 1\}\), to complex numbers. In a gradient-based optimization approach, the fundamental ingredient is the evaluation of the gradient of the loss, which in this case is the variational energy, with respect to the parameters θα for α = 1, …, P. This gradient can be expressed as a correlation function44

$${F}_{\alpha }=-\frac{\partial {E}_{\theta }}{\partial {\theta }_{\alpha }}=-2\Re \left[\langle (\hat{H}-\langle \hat{H}\rangle )({\hat{O}}_{\alpha }-\langle {\hat{O}}_{\alpha }\rangle )\rangle \right]\,,$$
(2)

which involves the diagonal operator \({\hat{O}}_{\alpha }\) defined as Oα(σ) = ∂Log[Ψθ(σ)]/∂θα. The expectation values 〈…〉 are computed with respect to the variational state. The SR updates10,20,44 are constructed according to the geometric structure of the landscape:

$$\boldsymbol{\delta\theta}=\tau {\left(S+\lambda {\mathbb{I}}_{P}\right)}^{-1}\boldsymbol{F}\,,$$
(3)

where τ is the learning rate and λ is a regularization parameter to ensure the invertibility of the S matrix. The matrix S has shape P × P and it is defined in terms of the \({\hat{O}}_{\alpha }\) operators44

$${S}_{\alpha ,\beta }=\Re \left[\langle {({\hat{O}}_{\alpha }-\langle {\hat{O}}_{\alpha }\rangle )}^{{{{\dagger}}} }({\hat{O}}_{\beta }-\langle {\hat{O}}_{\beta }\rangle )\rangle \right]\,.$$
(4)

Eq. (3) defines the standard formulation of the SR, which involves the inversion of a P × P matrix; this inversion is the bottleneck of the approach when the number of parameters exceeds O(10⁴). To address this problem, we start by reformulating Eq. (3) in a more convenient way. For a given sample of M spin configurations {σi} (sampled according to ∣Ψθ(σ)∣²), the stochastic estimate of Fα can be obtained as:

$${\bar{F}}_{\alpha }=-\Re \left[\frac{2}{M}{\sum}_{i=1}^{M}{[{E}_{Li}-{\bar{E}}_{L}]}^{* }[{O}_{\alpha i}-{\bar{O}}_{\alpha }]\right]\,.$$
(5)

Here, \({E}_{Li}=\langle {\sigma }_{i}| \hat{H}| {\Psi }_{\theta }\rangle /\langle {\sigma }_{i}| {\Psi }_{\theta }\rangle\) defines the local energy of the configuration \(\left\vert {\sigma }_{i}\right\rangle\) and Oαi = Oα(σi); in addition, \({\bar{E}}_{L}\) and \({\bar{O}}_{\alpha }\) denote sample means. Throughout this work, we adopt the convention of using Latin and Greek indices to run over configurations and parameters, respectively. Equivalently, Eq. (4) can be stochastically estimated as

$${\bar{S}}_{\alpha ,\beta }=\Re \left[\frac{1}{M}{\sum}_{i=1}^{M}{\left[{O}_{\alpha i}-{\bar{O}}_{\alpha }\right]}^{* }\left[{O}_{\beta i}-{\bar{O}}_{\beta }\right]\right]\,.$$
(6)

To simplify further, we introduce \({Y}_{\alpha i}=({O}_{\alpha i}-{\bar{O}}_{\alpha })/\sqrt{M}\) and \({\varepsilon }_{i}=-2{[{E}_{Li}-{\bar{E}}_{L}]}^{* }/\sqrt{M}\), allowing us to express Eq. (5) in matrix notation as \(\bar{\boldsymbol{F}}=\Re [Y\boldsymbol{\varepsilon }]\) and Eq. (6) as \(\bar{S}=\Re [Y{Y}^{\dagger }]\). Writing Y = YR + iYI we obtain:

$$\bar{S}={Y}_{R}{Y}_{R}^{T}+{Y}_{I}{Y}_{I}^{T}=X{X}^{T}\,$$
(7)

where \(X={{\mbox{Concat}}}({Y}_{R},{Y}_{I})\in {{\mathbb{R}}}^{P\times 2M}\), the concatenation being along the last axis. Furthermore, using \(\boldsymbol{\varepsilon}=\boldsymbol{\varepsilon}_{R}+i\boldsymbol{\varepsilon}_{I}\), the gradient of the energy can be recast as

$$\bar{\boldsymbol{F}}={Y}_{R}\boldsymbol{\varepsilon}_{R}-{Y}_{I}\boldsymbol{\varepsilon}_{I}=X\boldsymbol{f}\,,$$
(8)

with \(\boldsymbol{f}={{\mbox{Concat}}}(\boldsymbol{\varepsilon}_{R},-\boldsymbol{\varepsilon}_{I})\in {{\mathbb{R}}}^{2M}\). Then, the update of the parameters in Eq. (3) can be written as

$$\boldsymbol{\delta\theta}=\tau {(X{X}^{T}+\lambda {{\mathbb{I}}}_{P})}^{-1}X\boldsymbol{f}\,.$$
(9)

This reformulation of the SR updates is a crucial step, which allows the use of the well-known push-through identity \({(AB+\lambda {{\mathbb{I}}}_{n})}^{-1}A=A{(BA+\lambda {{\mathbb{I}}}_{m})}^{-1}\)45,46, where A and B are matrices with dimensions n × m and m × n, respectively (see “Methods” for a derivation). As a result, Eq. (9) can be rewritten as

$$\boldsymbol{\delta\theta}=\tau X{({X}^{T}X+\lambda {{\mathbb{I}}}_{2M})}^{-1}\boldsymbol{f}\,.$$
(10)

This derivation is our first result: it shows, in a simple and transparent way, how to exactly perform the SR with the inversion of a 2M × 2M matrix and, therefore, without allocating a P × P matrix. We emphasize that the last formulation is very useful in the typical deep learning setup, where P ≫ M. Employing Eq. (10) instead of Eq. (9) proves to be more efficient in terms of both computational complexity and memory usage. The required operations for this new formulation are O(M²P) + O(M³) instead of O(P³), and the memory usage is only O(MP) instead of O(P²). For deep neural networks with nl layers the memory usage can be further reduced roughly to O(MP/nl) (see ref. 47). We developed a memory-efficient implementation of SR that is optimized for deployment on a multi-node GPU cluster, ensuring scalability and practicality for real-world applications (see “Methods”). Other methods, based on iterative solvers, require O(nMP) operations, where n is the number of steps needed to solve the linear problem in Eq. (3). However, this number increases significantly for ill-conditioned matrices (the matrix S has a number of zero eigenvalues equal to P − M), leading to many non-parallelizable iteration steps and consequently higher computational costs48. Our proof also highlights that the diagonal-shift regularization of the S matrix in parameter space [see Eq. (3)] is equivalent to the same diagonal shift in sample space [see Eq. (10)]. Furthermore, it would be interesting to explore the applicability of regularization schemes with parameter-dependent diagonal shifts in the sample space28,49. In contrast, for the MinSR update30, a pseudo-inverse regularization is applied in order to truncate the effect of vanishing singular values during inversion.
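
As an illustration, a single parameter update according to Eq. (10) can be sketched in a few lines of NumPy; the routine assumes that the log-derivatives Oαi and the local energies ELi have already been computed (e.g., by automatic differentiation and Monte Carlo sampling), and all names are illustrative rather than taken from the actual implementation described in “Methods”.

```python
import numpy as np

def sr_update(O, E_loc, tau=0.03, lam=1e-4):
    """One SR step via Eq. (10).

    O     : complex array (M, P), rows are samples, O[i, alpha] = O_alpha(sigma_i)
    E_loc : complex array (M,), local energies E_{L i}
    """
    M = O.shape[0]
    Yt = (O - O.mean(axis=0)) / np.sqrt(M)                    # Y^T, shape (M, P)
    eps = -2.0 * np.conj(E_loc - E_loc.mean()) / np.sqrt(M)   # epsilon, shape (M,)
    Xt = np.concatenate([Yt.real, Yt.imag], axis=0)           # X^T = Concat(Y_R, Y_I)^T, (2M, P)
    f = np.concatenate([eps.real, -eps.imag])                 # f = Concat(eps_R, -eps_I), (2M,)
    # Eq. (10): only a (2M x 2M) linear system instead of a (P x P) one
    G = Xt @ Xt.T + lam * np.eye(2 * M)
    t = np.linalg.solve(G, f)
    return tau * (Xt.T @ t)                                   # delta_theta; update theta += delta_theta
```

Solving the corresponding P × P system of Eq. (9) would instead require building and storing the full S matrix, which is exactly the step that becomes prohibitive for large P.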

The variational wave function

The architecture of the variational state employed in this work is based on the Deep ViT introduced and described in detail in ref. 34. In the Deep ViT, the input configuration σ of shape L × L is initially split into b × b square patches, which are then linearly projected into a d-dimensional vector space in order to obtain an input sequence of L²/b² vectors, i.e., \(({\boldsymbol{x}}_{1},\ldots ,{\boldsymbol{x}}_{{L}^{2}/{b}^{2}})\) with \({\boldsymbol{x}}_{i}\in {{\mathbb{R}}}^{d}\). This sequence is then processed by an encoder block employing the Multi-Head Factored Attention (MHFA) mechanism33,50,51,52, which produces another sequence of vectors \(({\boldsymbol{A}}_{1},\ldots ,{\boldsymbol{A}}_{{L}^{2}/{b}^{2}})\) with \({\boldsymbol{A}}_{i}\in {{\mathbb{R}}}^{d}\) and can be formally written as:

$${A}_{i,p}={\sum}_{q=1}^{d}{W}_{p,q}{\sum}_{j=1}^{{L}^{2}/{b}^{2}}{\alpha }_{i,j}^{\mu (q)}{\sum}_{r=1}^{d}{V}_{q,r}{x}_{j,r}\,,$$
(11)

where μ(q) = ⌈qh/d⌉ selects the attention weights of the corresponding head, h being the total number of heads. The matrix \(V\in {{\mathbb{R}}}^{d\times d}\) linearly transforms each input vector identically and independently. Instead, the attention matrices \({\alpha }^{\mu }\in {{\mathbb{R}}}^{{L}^{2}/{b}^{2}\times {L}^{2}/{b}^{2}}\) combine the different input vectors, and the linear transformation \(W\in {{\mathbb{R}}}^{d\times d}\) mixes the representations of the different heads.

Due to the global receptive field of the attention mechanism, its computational complexity scales quadratically with the length of the input sequence. Note that in standard dot-product self-attention3 the attention weights αi,j are functions of the inputs, while in the factored case employed here they are input-independent variational parameters. This choice is the only custom modification that we make with respect to the standard Vision Transformer architecture53; it is supported by numerical simulations and analytical arguments suggesting that queries and keys do not improve the performance for these problems52. Finally, the resulting vectors Ai are processed by a two-layer fully connected network (FFN) with hidden size 2d and ReLU activation function. Skip connections and pre-layer normalization are implemented as described in refs. 34,54. A total of nl such blocks are stacked together to obtain a Deep Transformer. This architecture operates on an input sequence, yielding another sequence of vectors \(({\boldsymbol{y}}_{1},\ldots ,{\boldsymbol{y}}_{{L}^{2}/{b}^{2}})\), where each \({\boldsymbol{y}}_{i}\in {{\mathbb{R}}}^{d}\). At the end, a complex-valued fully connected output layer, with \(\log [\cosh (\cdot )]\) as activation function, is applied to z = ∑iyi to produce the single number Log[Ψθ(σ)], encoding both the modulus and the phase of the amplitude of the input configuration (refer to Algorithm 1 in “Methods” for a pseudocode of the neural-network architecture).
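
As an illustration, Eq. (11) can be sketched in a few lines of NumPy (assuming, for simplicity, that d is a multiple of h; all names are illustrative):

```python
import numpy as np

def mhfa(x, V, W, alpha, h):
    """Factored attention of Eq. (11).

    x     : (n, d) input sequence of n = L^2/b^2 patch embeddings
    V, W  : (d, d) linear maps
    alpha : (h, n, n) input-independent attention weights, one matrix per head
    """
    n, d = x.shape
    v = x @ V.T                               # v[j, q] = sum_r V[q, r] x[j, r]
    head = (np.arange(d) * h) // d            # zero-based mu(q), assuming d % h == 0
    u = np.empty_like(v)
    for q in range(d):                        # u[i, q] = sum_j alpha^{mu(q)}[i, j] v[j, q]
        u[:, q] = alpha[head[q]] @ v[:, q]
    return u @ W.T                            # A[i, p] = sum_q W[p, q] u[i, q]
```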

To enforce translational symmetry among patches, we define the attention weights as \({\alpha }_{i,j}^{\mu }={\alpha }_{i-j}^{\mu }\). This choice reduces the computational cost of restoring the full translational symmetry through quantum number projection39,55: only a summation involving b² terms is required. Under this assumption of translationally invariant attention weights, the factored attention mechanism can be implemented as a convolutional layer with d input channels, d output channels and a specific choice of the convolutional kernel \(K\in {{\mathbb{R}}}^{d\times d\times {L}^{2}/{b}^{2}}\), namely \({K}_{p,r,k}={\sum }_{q = 1}^{d}{W}_{p,q}{\alpha }_{k}^{\mu (q)}{V}_{q,r}\). However, it is well-established that weight sharing and low-rank factorizations of learnable tensors within neural networks can lead to significantly different learning dynamics and, consequently, different final solutions56,57,58. The other symmetries of the Hamiltonian in Eq. (1) [rotations, reflections (C4v point group) and spin parity] can also be restored within quantum number projection. As a result, the symmetrized wave function reads:

$${\tilde{\Psi }}_{\theta }(\sigma )={\sum}_{i=0}^{{b}^{2}-1}{\sum}_{j=0}^{7}\left[{\Psi }_{\theta }({T}_{i}{R}_{j}\sigma )+{\Psi }_{\theta }(-{T}_{i}{R}_{j}\sigma )\right]\,.$$
(12)

In the last equation, Ti and Rj are the translation and rotation/reflection operators, respectively. Furthermore, due to the SU(2) spin symmetry of the J1J2 Heisenberg model, the total magnetization is conserved and the Monte Carlo sampling (see “Methods”) can be restricted to the Sz = 0 sector for the ground-state search.
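
A minimal sketch of the quantum number projection in Eq. (12) is shown below; it assumes a function log_psi returning Log[Ψθ(σ)] and exploits the fact that, thanks to the patch-wise translational invariance of the network, only shifts by 0, …, b − 1 sites in each direction are needed (the log-sum-exp accumulation is an implementation detail, not part of Eq. (12)).

```python
import numpy as np

def symmetrized_log_psi(log_psi, sigma, b):
    """Eq. (12): sum of amplitudes over b^2 translations, the 8 C4v operations
    and the spin-parity flip, accumulated in log space for numerical stability."""
    logs = []
    for tx in range(b):
        for ty in range(b):
            shifted = np.roll(sigma, (tx, ty), axis=(0, 1))   # translations T_i
            for k in range(4):                                # C4 rotations
                for reflect in (False, True):                 # + reflections -> C4v
                    s = np.rot90(shifted, k)
                    if reflect:
                        s = s.T
                    logs.append(log_psi(s))                   # Psi_theta(T_i R_j sigma)
                    logs.append(log_psi(-s))                  # spin parity
    logs = np.array(logs)
    shift = logs.real.max()
    return shift + np.log(np.sum(np.exp(logs - shift)))       # Log of the sum of amplitudes
```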

Numerical calculations

Our objective is to approximate the ground state of the J1J2 Heisenberg model at the highly frustrated point J2/J1 = 0.5 on the 10 × 10 square lattice. We use the formulation of the SR in Eq. (10) to optimize a variational wave function parametrized through a Deep ViT, as discussed above. The result in Table 1 is achieved with the symmetrized Deep ViT in Eq. (12) using b = 2, nl = 8 layers, embedding dimension d = 72, and h = 12 heads per layer. This variational state has in total 267,720 real parameters (the complex-valued parameters of the output layer are treated as pairs of independent real-valued parameters). Regarding the optimization protocol, we choose the learning rate τ = 0.03 (with cosine decay annealing) and the number of samples is fixed to M = 6000. We emphasize that using Eq. (9) to optimize this number of parameters would be infeasible on available GPUs: the memory requirement would exceed O(10³) gigabytes, one order of magnitude larger than the highest available memory capacity. Instead, with the formulation of Eq. (10), the memory requirement can be easily handled by available GPUs (see “Methods”). The simulations took 4 days on twenty A100 GPUs. Remarkably, as illustrated in Table 1, we obtain the state-of-the-art ground-state energy using an architecture solely based on neural networks, without using any regularization other than the diagonal shift in Eq. (10), fixed to λ = 10⁻⁴. We stress that the simulation is completely unbiased, without assuming any prior for the sign structure, in contrast to other cases where the Marshall sign rule is used to stabilize the optimization21,29,30,31,37 (see Table 1). Furthermore, we verified with numerical simulations that the final results are not affected by the Marshall prior. This is an important point, since a simple sign prior is not available for the majority of models (e.g., the Heisenberg model on the triangular or Kagome lattices). We also stress that the definition of a suitable architecture is fundamental to take advantage of a large number of parameters. Indeed, while a stable simulation with a simple regularization scheme (defined only by a finite value of λ) is possible with the Deep ViT wave function, other architectures require more sophisticated regularizations. For example, optimizing Deep GCNNs requires adding a temperature-dependent term to the loss function28 and, for Deep CNNs, a process of variance reduction and reweighting30 helps in escaping local minima. We also point out that physically inspired wave functions, such as the Gutzwiller-projected states14, which give a remarkable result with only a few parameters, are not always equally accurate in other cases.

In Fig. 1 we show a typical Deep ViT optimization on the 10 × 10 lattice at J2/J1 = 0.5. First, we optimize the Transformer with translational invariance among patches (blue curve). Then, starting from the previously optimized parameters, we sequentially restore the full translational invariance (green curve), the rotational symmetry (orange curve) and, lastly, the reflection and spin-parity symmetries (red curve). Whenever a new symmetry is restored, the energy reliably decreases55. We stress that our optimization process, which combines the SR formulation of Eq. (10) with a real-valued Deep ViT followed by a complex-valued fully connected output layer34, is highly stable and insensitive to the initial seed, ensuring consistent results across multiple optimization runs.

Fig. 1: Variational energy optimization.

Optimization of the Deep ViT with patch size b = 2, nl = 8 layers, embedding dimension d = 72 and h = 12 heads per layer, on the J1J2 Heisenberg model at J2/J1 = 0.5 on the 10 × 10 square lattice. The first 200 optimization steps are not shown for better readability. Inset: first and last 200 optimization steps when recovering sequentially the full translational (green curve), rotational (orange curve), and reflections and parity (red curve) symmetries. The total number of steps after restoring the symmetries is 5000 for translations, 5000 for rotations, and 4000 for reflections and parity. The mean energy obtained without quantum number projection is also reported for comparison (blue dashed line).

Discussion

We have introduced a formulation of the SR that excels in scenarios where the number of parameters (P) greatly exceeds the number of samples (M) used for stochastic estimations. Exploiting this approach, we attained the state-of-the-art ground-state energy for the J1J2 Heisenberg model at J2/J1 = 0.5 on a 10 × 10 square lattice, optimizing a Deep ViT with P = 267,720 parameters and using only M = 6000 samples. This achievement highlights the remarkable capability of deep learning models to perform exceptionally well even with a sample size that is small compared to the overall parameter count. It also challenges the common belief that a large number of Monte Carlo samples is required to find the solution in the exponentially large Hilbert space and to perform precise SR optimizations31.

Our results have important ramifications for investigating the physical properties of challenging quantum many-body problems, where the use of the SR is crucial to obtain accurate results. The use of large-scale neural-network quantum states opens new opportunities for approximating ground states of quantum spin Hamiltonians where other methods fail. Additionally, minor modifications to the parameter-update equations within the SR framework [see Eq. (10)] enable the description of the unitary time evolution of quantum many-body systems according to the time-dependent variational principle44,59,60. Extending our approach to time-dependent problems stands as a promising avenue for future work. Furthermore, this formulation of the SR can be a key tool for obtaining accurate results in quantum chemistry, especially for systems that suffer from the sign problem61. Typically, in this case, the standard strategy is to take M > 10 × P (ref. 44), which may be unnecessary for deep learning-based approaches.

Methods

Linear algebra identity

The key point of the method is the transformation from Eq. (9) to Eq. (10), which uses the matrix identity

$${(AB+\lambda {{\mathbb{I}}}_{n})}^{-1}A=A{(BA+\lambda {{\mathbb{I}}}_{m})}^{-1}\,,$$
(13)

where A and B are n × m and m × n matrices, respectively. This identity can be proved starting from

$${{\mathbb{I}}}_{m}=(BA+\lambda {{\mathbb{I}}}_{m}){(BA+\lambda {{\mathbb{I}}}_{m})}^{-1}\,,$$
(14)

then, multiplying from the left by A, we get

$$A=A(BA+\lambda {{\mathbb{I}}}_{m}){(BA+\lambda {{\mathbb{I}}}_{m})}^{-1}\,,$$
(15)

and exploiting the fact that \(A{{\mathbb{I}}}_{m}={{\mathbb{I}}}_{n}A\), we obtain

$$A=(AB+\lambda {{\mathbb{I}}}_{n})A{(BA+\lambda {{\mathbb{I}}}_{m})}^{-1}\,.$$
(16)

Finally, multiplying from the left by \({(AB+\lambda {{\mathbb{I}}}_{n})}^{-1}\), we recover Eq. (13).

The identity in Eq. (13) also underlies the kernel trick, which is at the basis of kernel methods with applications in many areas of machine learning62, including many-body quantum systems63.
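
The identity is easy to verify numerically for random rectangular matrices, for example:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, lam = 50, 7, 1e-4
A = rng.normal(size=(n, m))
B = rng.normal(size=(m, n))
lhs = np.linalg.solve(A @ B + lam * np.eye(n), A)     # (AB + lam I_n)^{-1} A
rhs = A @ np.linalg.inv(B @ A + lam * np.eye(m))      # A (BA + lam I_m)^{-1}
assert np.allclose(lhs, rhs)                          # Eq. (13)
```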

Distributed SR computation

The algorithm proposed in Eq. (10) can be efficiently distributed, both in terms of computational operations and memory, across multiple GPUs. To illustrate this, we consider for simplicity the case of a real-valued wave function, where X = YR ≡ Y. The M configurations are distributed across nG GPUs, facilitating the parallel simulation of Markov chains. In this way, the gth GPU holds the elements i ∈ [gM/nG, (g + 1)M/nG) of the vector f, along with the corresponding columns of the matrix X, which we indicate by X[:,g]. To efficiently apply Eq. (10), we employ the message passing interface (MPI) alltoall collective operation to transpose X, yielding the sub-matrix X[g,:] on the gth GPU. This sub-matrix comprises the rows of the original matrix X with indices in [gP/nG, (g + 1)P/nG) (see Fig. 2). Consequently, we can express:

$${X}^{T}X={\sum}_{g=0}^{{n}_{G}-1}{X}_{[g,:]}^{T}{X}_{[g,:]}\,.$$
(17)
Fig. 2: Message passing interface (MPI) alltoall operation.

Graphical representation of MPI alltoall operation to transpose the X matrix distributed across multiple GPUs. For example, GPU:0 initially contains sub-matrices A0, A1, A2, A3, while following the transposition, GPU:0 contains sub-matrices A0, B0, C0, D0.

The inner products can be computed in parallel on each GPU, while the outer sum is performed using the MPI reduce primitive with the sum operation. The master GPU performs the inversion, computes the vector \(\boldsymbol{t}={({X}^{T}X+\lambda {{\mathbb{I}}}_{2M})}^{-1}\boldsymbol{f}\), and then scatters it across the other GPUs. Finally, after transposing the matrix X again with the MPI alltoall operation, the parameter update can be computed as follows:

$$\boldsymbol{\delta\theta}=\tau {\sum}_{g=0}^{{n}_{G}-1}{X}_{[:,g]}\boldsymbol{t}_{g}\,.$$
(18)

This procedure significantly reduces the memory requirement per GPU to O(MP/nG), enabling the optimization of an arbitrary number of parameters using the SR approach. In Fig. 3, we report the memory usage and the computational time per optimization step.
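
For illustration, the communication pattern described above can be sketched with mpi4py in the real-valued case (X = YR), where the linear problem is M × M; sizes, random data, and variable names are placeholders chosen for clarity and do not reproduce the production code (P and M are assumed to be divisible by nG).

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
g, nG = comm.Get_rank(), comm.Get_size()

# Illustrative sizes; the random arrays stand in for the columns of X and the
# entries of f produced by the Markov chain owned by each rank.
P, M, tau, lam = 1024, 512, 0.03, 1e-4
Ploc, Mloc = P // nG, M // nG
rng = np.random.default_rng(g)
X_cols = rng.normal(size=(P, Mloc))       # X[:, g*Mloc:(g+1)*Mloc]
f_loc = rng.normal(size=Mloc)             # f[g*Mloc:(g+1)*Mloc]

# Transpose X across ranks with alltoall: split the local columns into nG
# row-blocks, send block r to rank r, then stitch the received blocks.
send = np.ascontiguousarray(X_cols.reshape(nG, Ploc, Mloc))
recv = np.empty_like(send)
comm.Alltoall(send, recv)
X_rows = recv.transpose(1, 0, 2).reshape(Ploc, M)   # X[g*Ploc:(g+1)*Ploc, :]

# Eq. (17): X^T X as a sum of per-rank contributions, reduced on the master.
G = np.empty((M, M)) if g == 0 else None
comm.Reduce(X_rows.T @ X_rows, G, op=MPI.SUM, root=0)

# The master solves the small linear problem and scatters t back.
f_full = np.empty(M) if g == 0 else None
comm.Gather(f_loc, f_full, root=0)
t = np.linalg.solve(G + lam * np.eye(M), f_full) if g == 0 else None
t_loc = np.empty(Mloc)
comm.Scatter(t, t_loc, root=0)

# Eq. (18): each rank contributes X[:, g] t_g; the sum gives delta_theta.
dtheta = np.empty(P)
comm.Allreduce(tau * (X_cols @ t_loc), dtheta, op=MPI.SUM)
```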

Fig. 3: Memory and time usage.

Memory usage in gigabytes (a) and computational time per optimization step in seconds (b) as a function of the number of GPUs. The reported values are related to a ViT architecture with h = 12, d = 72, nl = 8, fully symmetrized [see Eq. (12)] and optimized with M = 6000 samples.

Systematic energy improvement

Here, we discuss the impact of the number of samples M and of the number of parameters P on the variational energy of the ViT wave function, with results shown in Fig. 4. In the right panel, we show the variational energy as a function of the number of parameters for a fixed number of layers nl = 8, performing the optimizations with M = 6000 samples. The number of parameters is increased by enlarging the width of each layer. In particular, we take the following architectures: (h = 10, d = 40), (h = 10, d = 60), (h = 12, d = 72), and (h = 14, d = 140), with P = 85,400, 187,100, 267,720, and 994,700 parameters, respectively. A similar analysis for a fixed width but varying the number of layers is discussed in ref. 34. In the left panel, instead, we fix the architecture (h = 12, d = 72) with nl = 8 layers and increase the number of samples M up to 10⁴. Both analyses are performed without restoring the symmetries by quantum number projection; for comparison, we report in the left panel the energy obtained after restoring the symmetries [see Eq. (12)]. The latter coincides with the ViT wave function used to obtain the energy reported in Table 1. We point out that the energy curves depicted in Fig. 4 are obtained from unbiased simulations, without the use of the Marshall sign prior; the final value of the energy, as well as the convergence to it, are qualitatively similar regardless of its inclusion.

Fig. 4: Variational energy scalings.

a Energy per site as a function of the number of samples M for a ViT with nl = 8 layers, embedding dimension d = 72 and h = 12 heads per layer. b Energy per site as a function of the number of parameters P, increased by adding heads h and taking larger embedding dimensions d, for a fixed number of layers nl = 8. For both panels, the energy values (blue circles) are obtained without restoring the symmetries; for comparison, we also show the energy corresponding to the fully symmetrized state in Eq. (12) (red star) which is the one reported in Table 1.

Details on the Transformer architecture

In this section, we provide pseudocode (see Algorithm 1) describing the implementation of the Vision Transformer architecture employed in this work and described in the “Results” section. In particular, we emphasize that skip connections and layer normalization are implemented as described in ref. 54.

Algorithm 1

Vision Transformer wave function

1: Input configuration σ ∈ {−1, 1}L×L

2: Patch and Embed: \({{\mathcal{X}}}\leftarrow ({\boldsymbol{x}}_{1},\ldots ,{\boldsymbol{x}}_{{L}^{2}/{b}^{2}})\in {{\mathbb{R}}}^{{L}^{2}/{b}^{2}\times d}\)

3: for i = 1, nl do

4: \({{\mathcal{X}}}\leftarrow {{\mathcal{X}}}+{{\mbox{MHFA}}}({{\mbox{LayerNorm}}}({{\mathcal{X}}}))\)

5: \({{\mathcal{X}}}\leftarrow {{\mathcal{X}}}+{{\mbox{FFN}}}({{\mbox{LayerNorm}}}({{\mathcal{X}}}))\)

6: end for

7: \(({\boldsymbol{y}}_{1},\ldots ,{\boldsymbol{y}}_{{L}^{2}/{b}^{2}})\leftarrow {{\mbox{LayerNorm}}}({{\mathcal{X}}})\)

8: \({\boldsymbol{z}}\leftarrow {\sum }_{i = 1}^{{L}^{2}/{b}^{2}}{\boldsymbol{y}}_{i}\)

9: \({{\mbox{Log}}}[{\Psi }_{\theta }(\sigma )]\leftarrow {\sum }_{\alpha = 1}^{d}g({b}_{\alpha }+{\boldsymbol{w}}_{\alpha }\cdot {\boldsymbol{z}})\), with g(⋅) = Log[cosh(⋅)]
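
For completeness, a self-contained NumPy sketch of the forward pass in Algorithm 1 is reported below; the random parameter initialization and all variable names are purely illustrative (a real implementation would use an automatic-differentiation framework to obtain the log-derivatives needed by the SR).

```python
import numpy as np

rng = np.random.default_rng(0)
L, b, d, h, nl = 10, 2, 72, 12, 8
n = (L * L) // (b * b)                        # number of patches

def layer_norm(x, eps=1e-6):
    mu, var = x.mean(-1, keepdims=True), x.var(-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def mhfa(x, V, W, alpha):                     # factored attention, Eq. (11)
    v = x @ V.T
    head = (np.arange(d) * h) // d            # mu(q), zero-based
    u = np.stack([alpha[head[q]] @ v[:, q] for q in range(d)], axis=1)
    return u @ W.T

def ffn(x, W1, W2):                           # two-layer MLP, hidden size 2d, ReLU
    return np.maximum(x @ W1.T, 0.0) @ W2.T

def log_psi(sigma, p):
    patches = sigma.reshape(L // b, b, L // b, b).transpose(0, 2, 1, 3).reshape(n, b * b)
    x = patches @ p["embed"].T                                   # step 2: patch and embed
    for blk in p["blocks"]:                                      # steps 3-6
        x = x + mhfa(layer_norm(x), blk["V"], blk["W"], blk["alpha"])
        x = x + ffn(layer_norm(x), blk["W1"], blk["W2"])
    z = layer_norm(x).sum(axis=0)                                # steps 7-8
    return np.sum(np.log(np.cosh(p["w_out"] @ z + p["b_out"])))  # step 9, g = Log[cosh]

params = {
    "embed": rng.normal(size=(d, b * b)) / b,
    "blocks": [{k: rng.normal(size=s) / np.sqrt(s[-1]) for k, s in
                [("V", (d, d)), ("W", (d, d)), ("alpha", (h, n, n)),
                 ("W1", (2 * d, d)), ("W2", (d, 2 * d))]} for _ in range(nl)],
    "w_out": (rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))) / np.sqrt(d),
    "b_out": rng.normal(size=d) + 1j * rng.normal(size=d),
}

sigma = rng.choice([-1.0, 1.0], size=(L, L))
print(log_psi(sigma, params))                 # complex Log[Psi_theta(sigma)]
```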

Monte Carlo sampling

The expectation value of an operator \(\hat{A}\) on a given variational state \(\left\vert {\Psi }_{\theta }\right\rangle\) can be computed as

$$\langle \hat{A}\rangle =\frac{\langle {\Psi }_{\theta }| \hat{A}| {\Psi }_{\theta }\rangle }{\langle {\Psi }_{\theta }| {\Psi }_{\theta }\rangle }={\sum}_{\{\sigma \}}{P}_{\theta }(\sigma ){A}_{L}(\sigma )\,,$$
(19)

where Pθ(σ) ∝ ∣Ψθ(σ)∣² and \({A}_{L}(\sigma )=\langle \sigma | \hat{A}| {\Psi }_{\theta }\rangle /\langle \sigma | {\Psi }_{\theta }\rangle\) is the so-called local estimator of \(\hat{A}\)

$${A}_{L}(\sigma )={\sum}_{\{\sigma ^{\prime} \}}\langle \sigma | \hat{A}| \sigma ^{\prime} \rangle \frac{{\Psi }_{\theta }(\sigma ^{\prime} )}{{\Psi }_{\theta }(\sigma )}\,.$$
(20)

For any local operator (e.g., the Hamiltonian), the matrix \(\langle \sigma | \hat{A}| \sigma ^{\prime} \rangle\) is sparse, so the calculation of AL(σ) is at most polynomial in the number of spins. Furthermore, if a sample of configurations {σ1, σ2, …, σM} can be generated efficiently from the distribution Pθ(σ) (e.g., by Markov chain Monte Carlo), then Eq. (19) can be used to obtain a stochastic estimate of the expectation value

$$\bar{A}=\frac{1}{M}{\sum}_{i=1}^{M}{A}_{L}({\sigma }_{i})\,,$$
(21)

where \(\bar{A}\approx \langle \hat{A}\rangle\) and the accuracy of the estimation is controlled by a statistical error which scales as \(O(1/\sqrt{M})\).
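
As an illustration, a minimal Metropolis scheme for sampling Pθ(σ) in the Sz = 0 sector (via spin-exchange proposals), together with the estimator of Eq. (21), could be sketched as follows; log_psi and local_estimator are placeholders for the network and for the local estimator of Eq. (20).

```python
import numpy as np

def metropolis_chain(log_psi, sigma0, n_sweeps, rng):
    """Sample |Psi_theta(sigma)|^2 with spin-exchange updates (fixed total S^z)."""
    sigma, N, samples = sigma0.copy(), sigma0.size, []
    for _ in range(n_sweeps):
        for _ in range(N):                        # one sweep = N proposed exchanges
            i, j = rng.integers(N, size=2)
            if sigma.flat[i] == sigma.flat[j]:
                continue                          # exchange only antiparallel spins
            trial = sigma.copy()
            trial.flat[i], trial.flat[j] = sigma.flat[j], sigma.flat[i]
            # Metropolis acceptance with probability min(1, |Psi(trial)/Psi(sigma)|^2)
            log_ratio = 2.0 * np.real(log_psi(trial) - log_psi(sigma))
            if log_ratio >= 0.0 or rng.random() < np.exp(log_ratio):
                sigma = trial
        samples.append(sigma.copy())
    return samples

def stochastic_expectation(samples, local_estimator):
    """Eq. (21): sample mean of the local estimator A_L(sigma)."""
    return np.mean([local_estimator(s) for s in samples])
```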

Note added in proof. During the revision process, we became aware of an updated version of ref. 30, where a variational energy per site of −0.4976921(4) has been obtained.