Machine learning the deuteron: new architectures and uncertainty quantification

We solve the ground state of the deuteron using a variational neural network ansatz for the wave function in momentum space. This ansatz provides a flexible representation of both the $S$ and the $D$ states, with relative errors in the energy that are within fractions of a percent of a full diagonalisation benchmark. We extend previous work in this area in two directions. First, we study new architectures by adding more layers to the network and by exploring different connections between the states. Second, we provide a better estimate of the numerical uncertainty by taking into account the final oscillations at the end of the minimisation process. Overall, we find that the best-performing architecture is the simple one-layer, state-independent network. Two-layer networks show indications of overfitting, in regions that are not probed by the fixed momentum basis where calculations are performed. In all cases, the error associated with the model oscillations around the real minimum is larger than the stochastic initialisation uncertainties. The conclusions that we draw can be generalised to other quantum mechanics settings.


Introduction
Artificial Neural Networks (ANNs) have become routine computational tools in a wide range of scientific domains [1,2]. Nuclear physics is no exception, and the number of machine learning (ML) tools is increasing steadily [3,4,5,6,7,8,9,10]. An interesting and relatively recent use of ANNs is the solution of quantum mechanical problems, including in the many-body domain [11,12]. These methods use ANNs as an ansatz for the many-body wavefunction, and employ ready-made ML tools to minimise the energy of the system [13,14]. The first attempts in condensed matter [15,16] were quickly followed by advances both within [17] and outside the field [5], including quantum chemistry [18,19]. A good example in this direction is FermiNet [20], which uses an inherently antisymmetric wavefunction as an ansatz in a variational approach to solve the many-electron problem.
The methodology that we use to solve the deuteron here is based on the same variational philosophy and stems from Ref. [21], where we used a one-layer variational ANN ansatz to describe the wavefunction of the deuteron (the only two-body nuclear bound state) in momentum space. This yields excellent results in comparison with benchmark solutions for the energies and the wavefunctions, quantified through fidelity measures. Moreover, we estimated the out-of-sample uncertainty of the model by using several random initializations in the minimisation problem.
The assessment of uncertainties in variational ANNs (vANNs) is particularly relevant to pinpoint the fundamental limitations of this methodology. There are several systematic errors that are important for vANN solutions to the Schrödinger equation. For instance, network architectures play a key role in determining the convergence properties of the network. Similarly, for a fixed architecture, vANN widths and depths change the expressivity of the network, and are thus fundamental ingredients in providing faithful representations of continuous wavefunctions. One should explore these systematic uncertainties extensively before reaching conclusions about any intrinsic shortcomings of the vANN method itself.
The deuteron is an excellent test bed for such studies. It is a relatively simple, one-body-like problem, which already includes the complexity of the strong force. Moreover, dealing with a wavefunction with two different angular momentum states adds an additional level of sophistication. In this manuscript, we extend the elementary approach of Ref. [21] in two different directions. First, we look at somewhat more complex ANN architectures to describe the wavefunction. This yields insight into the importance of the network architecture, and should ultimately provide information on the limitations of the method beyond a specific architecture. Second, we introduce a novel way to compute the out-of-sample error. In particular, we look at oscillations of the energy, which is our cost function, around the real minimum. These oscillations are an additional source of uncertainty, associated with the minimisation process. The analysis of such errors provides new insight into uncertainty quantification for variational ANN solvers.
This paper is structured in the following manner. In Section 2 we briefly go over the methods used to approach the deuteron problem with ANNs. Section 3 is devoted to the analysis of the numerical results, including their uncertainties. Finally, in Section 4, we draw a series of conclusions and comment on ways in which our work could be expanded and improved.

Methodology
The approach that we use to solve the problem is variational. The ANN plays the role of a wavefunction ansatz, which we denote |ψ_ANN^W⟩. Here, W = {W^(1), W^(2), W^(3), b} is the set formed by the network weights, W^(i), and biases, b. In a variational setting, the total energy of the system is the loss function, which reads

E_W = ⟨ψ_ANN^W| H |ψ_ANN^W⟩ / ⟨ψ_ANN^W|ψ_ANN^W⟩ .   (1)

The deuteron is the simplest nuclear two-body problem. We solve its structure in momentum space, where we can directly access nuclear interaction matrix elements. We further separate the center of mass from the relative coordinates [22]. Consequently, the wavefunction of the system depends only on the relative momentum coordinate, q⃗. Working in momentum space allows us to skip numerically costly derivatives of the wavefunctions when computing the kinetic energy [9]. We can further reduce the dimensionality of the deuteron problem via a partial-wave expansion, thus separating the dependence on the absolute value of the momentum, q, from the dependence on angles. For the deuteron, the tensor component of the strong interaction admixes the S (L = 0) and the D (L = 2) components of the ground-state wavefunction. The network will consequently have two different outputs, one for each state. The potential energy term used to compute Eq. (1) mixes these two states in a non-trivial way, so they are not completely independent from each other in the variational setting. We choose the N3LO Entem-Machleidt nucleon-nucleon force [23], which is easily implemented in our code as a momentum-dependent potential.
To implement the problem computationally, we use a one-dimensional grid in q with N_q = 64 points. This grid is used to compute the energy integrals in Eq. (1) via Gaussian quadrature. The loss function is obtained from a global integral, the energy, and is always computed with the aforementioned set of momentum grid points. As we shall see later, this can create some issues in deeper models. To efficiently capture the low-momentum structure and the high-momentum tails of the wavefunction, we use a mesh that densely covers the region of low momenta but is sparse at high momenta. We first distribute N_q Gauss-Legendre points x_i between 0 and 1, and then extend them to the full momentum range with a tangential transform. In this setup, the overlap ⟨ψ_targ^L|ψ_ANN^L⟩, as well as all analogous integrated quantities, is discretised as

⟨ψ_targ^L|ψ_ANN^L⟩ ≈ Σ_i w_i q_i² ψ_targ^L(q_i) ψ_ANN^L(q_i) ,

where w_i are the integration weights associated with the tangentially-transformed Gauss-Legendre mesh. Note that, because the angular dependence has been removed via the partial-wave expansion, all the integrals involve a radial q² factor. Note also that all the wavefunctions are real-valued.
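The mesh construction and the discretised overlap above can be sketched in a few lines of numpy. The exact tangential transform and its scale factor are not given in the text, so the `np.tan` mapping below is an illustrative assumption; only the general structure (Gauss-Legendre nodes on (0, 1), a Jacobian absorbed into the weights, and a w_i q_i² measure) follows the description.

```python
import numpy as np

def tangential_mesh(n_q=64, scale=1.0):
    """Gauss-Legendre nodes on (0, 1), mapped to (0, inf) with a tan transform.

    The transform q = scale * tan(pi x / 2) is an illustrative choice; the text
    only states that the mesh is dense at low q and sparse at high q.
    """
    x, w = np.polynomial.legendre.leggauss(n_q)   # nodes/weights on (-1, 1)
    x = 0.5 * (x + 1.0)                           # shift to (0, 1)
    w = 0.5 * w
    q = scale * np.tan(0.5 * np.pi * x)           # tangential stretch
    jac = scale * 0.5 * np.pi / np.cos(0.5 * np.pi * x) ** 2
    return q, w * jac                             # weights absorb the Jacobian

def overlap(psi_a, psi_b, q, w):
    """Discretised radial overlap: sum_i w_i q_i^2 psi_a(q_i) psi_b(q_i)."""
    return np.sum(w * q**2 * psi_a * psi_b)
```

As a sanity check, a 64-point mesh of this kind reproduces Gaussian radial integrals such as ∫ q² e^(−q²) dq = √π/4 to high accuracy.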
In order to estimate the errors introduced in our quadratures by the discretization of the mesh, we have explored how different values of N_q affect the values computed with this quadrature rule. For instance, computing the global energy, Eq. (1), with N_q = 48, 64 and 80 introduces variations of the order of keV. Throughout the text, however, we use only N_q = 64, and perform all comparisons with this number of points.
In the following, we present results corresponding to different ANN architectures for the two wavefunctions, ψ_ANN^L(q), with L = 0 and 2. Following Ref. [21], we take three identical steps to train all the networks. These three steps are common to most vANN problems in quantum mechanics [16]. First, we initialise the network parameters to uniform random values in the ranges W^(1) ∈ [−1, 0), W^(2) ∈ [0, 1) and b ∈ [−1, 1). We choose a Sigmoid activation function, σ(x), which provided marginally better results than the Softplus function in Ref. [21]. We also explore the use of different activation functions σ(x) in Sec. 3.4.
Second, we train the ANN to mimic a target function that bears a certain degree of physical meaning. We choose the functional form ψ_targ^L(q) = q^L e^(−ξ²q²/2), which has the correct low-momentum asymptotic behavior and a reasonable real-space width, ξ = 1.5 fm. In this pretraining step, we use the overlaps with the target states to compute a total cost function, C, defined in Eq. (4), which is zero if, and only if, both overlaps are equal to unity. We choose the optimiser RMSprop [1], which dynamically adapts a global learning rate locally for each network parameter. Training a medium-sized ANN with N_hid ≈ 20 hidden nodes usually takes about 10^4 iterations. We point out that this pretraining step is not strictly necessary to achieve a correct energy minimisation, but it helps in guaranteeing a successful energy minimisation with fewer epochs.
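The target wavefunction and an overlap-based pretraining cost can be sketched as follows. The quadratic penalty below is an illustrative stand-in for Eq. (4), whose exact functional form is not reproduced in the text; it shares the defining property of vanishing if, and only if, both overlaps equal unity.

```python
import numpy as np

XI = 1.5  # real-space width in fm, as quoted in the text

def psi_target(q, L):
    """Target ansatz psi_targ^L(q) = q^L * exp(-xi^2 q^2 / 2)."""
    return q**L * np.exp(-0.5 * XI**2 * q**2)

def pretrain_cost(overlap_s, overlap_d):
    """Illustrative pretraining cost: zero iff both normalised overlaps
    (S and D channels) are exactly one. The paper's Eq. (4) may differ
    in form but has the same zero."""
    return (overlap_s - 1.0) ** 2 + (overlap_d - 1.0) ** 2
```

Note that ψ_targ^S(0) = 1 while ψ_targ^D(0) = 0, matching the stated low-momentum asymptotics q^L.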
In the third and final step, we define a new cost function: the energy, as given in Eq. (1). We compute it with the momentum quadrature described above. The network is then trained to minimise E_W for 2.5 × 10^5 epochs. The exact ground-state energy can be computed in the very same momentum quadrature via exact diagonalization, and its value is E_GS = −2.2267 MeV. We use this value as a benchmark. We note that the kinetic energy has diagonal contributions in the L = 0 and 2 states, but the potential term mixes both states. We perform all calculations in double precision. We use the PyTorch library [24,25], which is particularly useful due to its automatic differentiation capabilities [26]. A full version of the code is available on GitHub [27].
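A minimal sketch of the diagonalization benchmark: build the Hamiltonian matrix on the quadrature mesh (kinetic term diagonal, potential matrix elements dressed with the w_i q_i² measure) and take the lowest eigenvalue. The separable Gaussian potential, the mesh, and the reduced-mass handling below are toy assumptions, not the N3LO Entem-Machleidt force or the paper's actual single/coupled-channel setup; units are glossed over.

```python
import numpy as np

def hamiltonian_on_mesh(q, w, v_matrix, mu=469.46):
    """Radial Hamiltonian on a quadrature mesh (single channel shown).

    T is diagonal, T_ii = q_i^2 / (2 mu); the potential matrix elements
    V(q_i, q_j) are symmetrised with sqrt(w_i q_i^2) factors so that H
    is a symmetric matrix whose eigenvalues approximate the spectrum.
    """
    t = np.diag(q**2 / (2.0 * mu))
    fac = np.sqrt(w * q**2)                 # quadrature measure, split evenly
    v = fac[:, None] * v_matrix * fac[None, :]
    return t + v

# toy separable attractive potential -- NOT the N3LO nucleon-nucleon force
q, wts = np.polynomial.legendre.leggauss(64)
q = 2.0 * (q + 1.0)                         # crude linear mesh on (0, 4)
wts = 2.0 * wts
g = np.exp(-q**2)                           # Gaussian form factor
v_toy = -0.02 * np.outer(g, g)              # rank-1 attraction
h = hamiltonian_on_mesh(q, wts, v_toy)
e_gs = np.linalg.eigvalsh(h).min()          # diagonalization benchmark
```

In the paper, the analogous construction with the realistic potential and the tangential mesh yields the E_GS = −2.2267 MeV benchmark; the toy attraction above merely produces *a* bound (negative) ground-state energy.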

Architectures
The vANN procedure is relatively general, and can be applied to any feed-forward ANN. In the initial exploration of deuteron wavefunctions of Ref. [21], a minimal approach was chosen deliberately to explore the potential and the limitations of the method in the simplest possible model. We used a single-layer ANN with an increasing number of hidden nodes in order to assess the systematics for a fixed architecture. Here, we take a step further and assess the impact of increasing the depth of the ANN, which leads to different architectures.
To this end, we introduce four different ANN architectures, which we show in the four panels of Fig. 1. All our networks have a single input, q, and two outputs, |Ψ_ANN^S⟩ and |Ψ_ANN^D⟩. We distinguish between different configurations depending on how they connect to the two final states. On the one hand, in fully-connected networks, which we dub "state-connected" (sc) networks, all the parameters contribute to both outputs (see the top panels, Figs. 1a and 1b). On the other hand, "state-disconnected" (sd) networks (bottom panels, Figs. 1c and 1d) provide independent parameters for each state. One may naively expect sd networks to provide more flexibility (and hence better variational properties) for the two wavefunctions.
In addition to the state dependence of the network, we also explore architecture systematics in terms of depth. We implement both sc and sd networks with one (left panels) and two (right panels) layers. In the following, we use the acronyms nsc and nsd (n = 1, 2), with n the number of layers, to refer to the four architectures explored in this work. We stress that Ref. [21] only considered the 1sc architecture. The motivation to explore other architectures and, in particular, further depths is the widely-held belief that deep ANNs outperform their shallow single-layer counterparts [28].
In the initial layer of all four architectures, we introduce both weights, W^(1), and biases, b. In contrast, the second (and/or third) layers only have weights, W^(2) (W^(3)). For clarity and completeness, we now provide the functional form of the four networks. For the one-layer 1sd configuration, the wavefunction ansatz reads

ψ_ANN^L(q) = Σ_{i=1}^{N_hid/2} W^(2)_{i,L} σ( W^(1)_{i,L} q + B_{i,L} ) .
The corresponding 1sc ansatz is obtained by setting N_hid/2 → N_hid and using L-independent W^(1) and B parameters. When two layers are included, in contrast, the sd ansatz is more complex, and becomes a sum of sums of two nested activation functions,

ψ_ANN^L(q) = Σ_j W^(3)_{j,L} σ( Σ_i W^(2)_{j,i,L} σ( W^(1)_{i,L} q + B_{i,L} ) ) .

One can again easily manipulate the previous expression to obtain the 2sc wavefunction.
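The one-layer ansätze can be written as short forward passes. The sketch below follows the formulas above: the 1sc network shares a single hidden layer between the two states, with per-state output weights, while the 1sd network assigns N_hid/2 hidden units to each state. Array shapes are illustrative conventions, not taken from the paper's code.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def psi_1sc(q, w1, b, w2):
    """One-layer state-connected ansatz: one shared hidden layer, two heads.

    q : (n,) momenta; w1, b : (n_hid,); w2 : (n_hid, 2).
    Returns an (n, 2) array [psi_S(q), psi_D(q)].
    """
    hidden = sigmoid(np.outer(q, w1) + b)   # (n, n_hid)
    return hidden @ w2

def psi_1sd(q, w1, b, w2):
    """One-layer state-disconnected ansatz: each state owns n_hid/2 units.

    w1, b, w2 : (n_hid,); the first half feeds the S state, the second the D.
    """
    half = w1.size // 2
    hidden = sigmoid(np.outer(q, w1) + b)   # (n, n_hid)
    psi_s = hidden[:, :half] @ w2[:half]
    psi_d = hidden[:, half:] @ w2[half:]
    return np.stack([psi_s, psi_d], axis=1)
```

With the initialisation ranges of Sec. 2 (W^(1) ∈ [−1, 0), W^(2) ∈ [0, 1), b ∈ [−1, 1)), both functions return an (N_q, 2) array of S- and D-state amplitudes on the momentum mesh.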
The relation between the total number of hidden neurons, N_hid, and the total number of variational parameters, N, is different for each architecture. The relations N(N_hid) are given in the caption of Fig. 1 for each network configuration. One-layer networks have a linear N(N_hid) relation, whereas the relation for two-layer networks is quadratic. In other words, for the same N_hid, two-layer networks will usually involve a much larger number of parameters than one-layer models, and one may expect overfitting to become an issue. Whereas for the 1sc architecture the number of hidden nodes is unrestricted so long as N_hid > 2, the 2sc architecture must have N_hid > 4 with N_hid even. Likewise, the 1sd network must have an even N_hid. In contrast, for the 2sd network, N_hid must be a multiple of 4.

Learning process
We minimise the energy cost function using RMSprop [1] with hyperparameters ε = 10^-8 and a smoothing constant α = 0.9. We set the learning rate to 10^-2 and explore the N_hid dependence using the values N_hid ∈ {20, 30, 40, 60, 80, 100} (for the 2sc architecture we replace N_hid = 30 with N_hid = 32). To explore the full flexibility of the wavefunction ansätze, we optimise the networks by minimising the overlap-based loss function for 2 × 10^3 epochs of pretraining, followed by 2.5 × 10^5 epochs of energy minimisation. Rather than doing this a single time, we use 20 different random initializations and display mean values and (Bessel-corrected) standard deviations obtained with all these runs. With this, we explore the out-of-sample bias of the network and attempt to draw generic conclusions for the network architecture, rather than for a single, specific network model.
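The RMSprop update with the hyperparameters quoted above can be sketched explicitly. This is the standard RMSprop rule (a running average of squared gradients that adapts the effective learning rate per parameter), written out for illustration rather than extracted from the paper's PyTorch code.

```python
import numpy as np

def rmsprop_step(param, grad, state, lr=1e-2, alpha=0.9, eps=1e-8):
    """One RMSprop update with the paper's hyperparameters.

    state holds the running average of squared gradients; each parameter
    is rescaled by the square root of its own average, so the global
    learning rate lr is adapted locally.
    """
    state = alpha * state + (1.0 - alpha) * grad**2
    param = param - lr * grad / (np.sqrt(state) + eps)
    return param, state
```

In the paper this update is applied by `torch.optim.RMSprop` to all network weights and biases at every epoch.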
In fact, we perform 150 minimisations for each architecture (other than for a specific 2sd model, as we discuss in the following). Not all of these runs converge or, if they do, they may not converge to meaningful values, close enough to the minimum. Figure 2 shows the rate of convergence of the different architectures as a function of N_hid. This is obtained as the ratio of the number of converged models, N_con, to the total set of model initializations, N_tot = 150: r = N_con/N_tot. As a selection criterion to compute N_con, we define converged models as those that provide a final energy within the range E ∈ (−2.227, −2.220) MeV. We set N_tot = 150 to guarantee that all configurations have a minimum of N_con = 20 converged models. The only exception is the 2sd architecture with N_hid = 20, which has a very small convergence rate. This architecture requires N_tot = 1100 runs to get N_con = 20 converged models. Whenever more than 20 initializations lead to converged results, we randomly pick 20 states to have representative, but also homogeneous, statistics across the N_hid domain. (Fig. 2 caption: convergence rate of the four network architectures as a function of the number of hidden neurons; the rate is defined as the ratio between the number of converged models and the total number of trained models.)
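The selection criterion above reduces to a one-line filter; the energy window boundaries are the ones quoted in the text.

```python
def convergence_rate(energies, lo=-2.227, hi=-2.220):
    """Fraction of runs whose final energy (MeV) lands in the acceptance
    window, r = N_con / N_tot."""
    n_con = sum(lo < e < hi for e in energies)
    return n_con / len(energies)
```

For example, a batch with final energies [-2.2267, -2.1, -2.223, -2.5] MeV has r = 0.5, since only the first and third runs fall inside the window.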
We stress that these rates are obtained from initializations with a pretraining step. With no pretraining, all models show slower convergence rates. We take this difference as an indication that pretraining is effective in providing a physical representation of the ANN wavefunction. Non-pretrained networks may occasionally lead to accurate results, but are likely to require many more energy minimisation epochs.
Going back to Fig. 2, we find that the one-layer configurations provide better convergence rates than their two-layer counterparts up to N_hid ≈ 60, at which point the 2sc architecture is marginally better than 1sd. The 1sc network (solid line) has a perfect convergence rate. The 1sd architecture (dash-dotted line) provides a convergence rate of r > 50%, relatively independent of the value of N_hid. We attribute these similarities to the fact that both pretraining and training are more easily performed with a single-layer ANN.
In contrast, two-layer networks start with a relatively low convergence rate. For the 2sc architecture we have r ≈ 25% for N_hid = 20, and the rate increases in a relatively steady, linear fashion up to r ≈ 80-90% for N_hid = 100. For the 2sd architecture we find even lower convergence rates, r ≈ 1% for N_hid = 20, with a relatively slow increase in terms of N_hid up to r ≈ 20%. The stark differences in convergence rates between one- and two-layer ANNs may be due to a variety of factors. On the one hand, network parameter initialization may be an issue: initial parameters drawn from different domains than the ones we prescribe at the moment may provide better starting points. On the other hand, we fix the total number of energy minimisation epochs, and the current rate may simply indicate that smaller two-layer networks take, with RMSprop, a larger number of epochs to converge. Finally, two-layer networks may be problematic in terms of the Sigmoid activation function, which is prone to a vanishing-gradient issue [29,30]. This should be accentuated in deeper (as opposed to shallow one-layer) networks. Along these lines, having few neurons increases the probability that all of them become frozen, which may explain why low-N_hid models have a harder time converging. These arguments suggest that activation functions like Softplus or ReLU may work better with two-layer architectures. We have carried out tests with different activation functions that show an improvement in convergence rates for two-layer networks, as shown in Table 3.

Error analysis
In addition to different network architectures, we examine two types of uncertainties. First, as we have just explained, we run 20 minimizations for different N_hid values and network architectures. We take the standard deviation of these 20 results as a measure of the uncertainty related to the stochastic minimization process. This out-of-sample error, which was also explored in Ref. [21], is represented by dark bands in the figures of the following section. It tends to be a relatively small error, of the order of a fraction of a keV in energy.
Second, even after a full minimisation, our results still show some residual oscillations. These oscillations are typically within a few keV of the total energy. Rather than keeping only the lowest values of the energy as our final values, we adopt a more conservative stance and average over the oscillations to obtain a central value. The oscillation amplitude can then be used to quantify an additional source of uncertainty, associated with the minimiser. To determine this error, we take a trained model and let it evolve for 300 additional epochs, a process which we call post-evolution.
The number of post-evolution epochs is small enough so that the mean value does not change during the process, but also large enough to observe periodicity in the oscillations.
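This post-evolution error estimate can be sketched directly: given a trace of a quantity (energy or fidelity) over the extra epochs, the central value is the mean of the trace and the asymmetric errors are the distances from the mean to the extremes of the oscillations.

```python
import numpy as np

def oscillation_errors(trace):
    """Central value and asymmetric oscillation errors from a
    post-evolution trace (e.g. the energy over 300 extra epochs).

    Returns (mean, delta_plus, delta_minus): distances from the mean
    to the maximum and to the minimum of the oscillations.
    """
    trace = np.asarray(trace, dtype=float)
    mean = trace.mean()
    return mean, trace.max() - mean, mean - trace.min()
```

Applied to the energy trace of Fig. 3, this procedure yields the quoted E_W ≈ −2.2258 MeV with δE_W^+ = 0.5 keV and δE_W^- = 0.3 keV.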
We illustrate this behavior in Fig. 3. The top panel shows the evolution of the D-state fidelity F_D over the 300 post-evolution epochs for a 1sc network with N_hid = 100. Here, we define F_L as the overlap between the ANN and benchmark (obtained via exact diagonalization) wavefunctions, in analogy to Eq. (3). The solid blue line, corresponding to the fidelity at every epoch, displays clear oscillations that are asymmetric with respect to a mean value of F_D ≈ 0.999987. To account for the asymmetry of the oscillating behavior, we assign separate upper and lower error estimates. The upper value corresponds to the maximum value across the post-evolution phase, δF_D^+ = 0.000008 (dash-dotted line) in the case shown in the figure. In contrast, the lower bound lies significantly further from the mean for this specific example, leading to δF_D^- = 0.00002 (dotted line). The bottom panel of Fig. 3 shows the energy (solid line) as a function of the post-evolution epoch for the same simulation. The energy oscillates around the mean value (dashed line), and the top and bottom bounds of these oscillations are shown with dash-dotted and dotted lines, respectively. In this particular example, the mean value of the energy is E_W ≈ −2.2258 MeV, with oscillation errors δE_W^+ = 0.5 keV and δE_W^- = 0.3 keV. We note that these values are typical for almost any number of neurons, and are extremely small: less than 0.1% of the total energy value.
In order to take into account the stochastic out-of-sample uncertainty in these oscillations, we repeat the process above several times for each network configuration (N_hid) and keep 20 random samples of the networks. As explained earlier, we extract central values for the different physical quantities by taking the average over these 20 mean values. We quote post-evolution oscillation errors that also include this stochastic element. In other words, both upper and lower bounds are calculated by averaging over 20 individual post-evolution runs. In a cost function with no nearby multiple minima of similar depth, all oscillations should happen around the same minimum, and the mean values of E_W should all be similar. This, in tandem with the fact that we estimate the stochastic δE_W as the standard deviation, leads us to expect small stochastic errors and comparatively larger oscillation errors. We confirm this expectation in the following section.

Energy
The two quantities that we use as indicators of the quality of our final trained models are the energy and the fidelity, represented in the top and bottom panels of Fig. 4, respectively. The four columns correspond to the different network architectures of Fig. 1. The solid dark lines in Fig. 4 indicate the central values. We show two different types of uncertainties for each quantity. First, stochastic uncertainties associated with the 20 different initializations are shown using dark colour bands. Second, post-evolution oscillation uncertainties are displayed with pale colours. A key finding of our work is that post-evolution uncertainties are always larger than stochastic uncertainties.
Before discussing uncertainties, we want to stress the quality of the results. As discussed in the context of Fig. 2, the ANNs used to generate these results have been preselected to lie in the range E ∈ (−2.227, −2.220) MeV. Therefore, all these models already lie within 0.3% of the final energy result. For some architectures, the rate of convergence to this energy window is not 100%. Yet, converged results provide energy values which, according to the stochastic uncertainty, are well within 1 keV (or 0.04%) of each other. Post-evolution uncertainties are much larger than stochastic uncertainties, but are typically smaller than 4 keV (or 0.2%). We take this as an indication that all these ANNs provide faithful, high-quality representations of the deuteron wavefunction. While none of the models shown here is able to reach the real minimum, even when the post-evolution lower bounds are considered, the distance between the lower bound and the real minimum is always less than 2 keV. Our analysis indicates that this limitation is due to the fact that the network cannot always distinguish between the S- and D-state contributions in high-momentum regions, where the two wavefunctions are equally small. Ultimately, this limitation only has minor consequences, of a fraction of a percent, on the total energy. An important takeaway of this analysis is that the simplest architecture, 1sc, provides the most stable and overall best-performing results: see panel (a) of Fig. 4.
An interesting conclusion of Fig. 4 is that state-disconnected architectures (panels (c) and (d)) perform seemingly worse than state-connected networks. One could have expected the opposite, on the basis that disconnected architectures may provide more independent flexibility for the ψ_S and ψ_D wavefunctions. One possible explanation for this behavior is as follows. Within a single minimisation epoch, the optimiser proposes steps based on the backpropagation of gradients. Changes in parameters associated with the S-state are thus immediately propagated to the D state in a fully connected configuration. In contrast, in the disconnected case, these changes do not necessarily affect the D state.
In spite of having fewer parameters than connected architectures, disconnected networks may therefore take longer to train. In a minimisation protocol with a fixed number of epochs like ours, this may lead to worse results.
We now proceed to discuss the N_hid dependence of the results. We find two very different trends depending on the depth of the network. One-layer network results are relatively independent of the network width. In other words, in terms of energy (but also in terms of fidelity), models with N_hid ≈ 30 are as good as models with N_hid = 100, as can be seen in panels (a) and (b) of Fig. 4. In contrast, the two-layer models display a clear improvement in energy with width. This suggests that two-layer networks with small values of N_hid have not managed to reach the full minimum after 2.5 × 10^5 epochs. Single-layer models have notably fewer parameters and can thereby be trained faster than their two-layer counterparts. In fact, snapshots of the minimisation process indicate that two-layer models with few neurons are still learning at the end of the 2.5 × 10^5 epochs. The decrease in energy as N_hid increases extends to the whole range of N_hid in architectures 2sc and 2sd (panels (b) and (d) of Fig. 4). However, the post-evolution uncertainties for these two-layer models are relatively large, and almost compatible with a constant value of the energy.
In addition to central values, the dependence of the uncertainties on N_hid is informative. The stochastic uncertainties are relatively constant across all values, but are marginally larger for two-layer models. Post-evolution errors for one-layer networks are of the order of ≈ 2 keV, with a mild decreasing dependence on network width. In contrast, two-layer network post-evolution uncertainties are as large as 6 (4) keV for low (high) N_hid values. Following the same argument used for the energies of such models, this can be understood by noting that the updates in W from RMSprop affect both ψ_S and ψ_D in the state-connected case (as opposed to the state-disconnected case). One expects this may lead to larger energy variations and, hence, bigger oscillations.

Fidelity
The panels (e)-(h) at the bottom of Fig. 4 also provide insight into the model quality after minimisation. Again, we highlight that, overall, the fidelities are extremely close to benchmark values, within a fraction of a percent in all cases. We find general conclusions in terms of network width that are in line with those we found for the energy. Across all architectures and network depths, the fidelity of the S-state is better than that associated with the D-state. Not only are the central values of F_S closer to one, but post-evolution uncertainties are significantly smaller for this state. This is in contrast to the much smaller stochastic uncertainty, which is of the same size for both states.
When it comes to the N_hid dependence of the results, we find again that one-layer networks are relatively insensitive to the network width. Overall, networks perform better with 1sc than with 2sc architectures, as measured by overlaps that are closer to one, although the two have relatively similar uncertainties. The fidelities of the 1sd architecture are relatively constant as the width increases. The behavior of the 2sd networks shown in panel (h) is more erratic. While the S-state fidelity is relatively close to the benchmark and seemingly improves with N_hid, the D-state fidelity is closest to one for N_hid = 60, and its quality subsequently decreases. We take this as a sign that such architectures with N_hid values larger than 80 may be problematic and, in fact, we show in the following subsection that overfitting is an issue in this region. Finally, we stress again that there is no observable bias-variance trade-off in the fidelities [1].

Wavefunctions
We now turn our attention to analyzing the ANN outputs directly: the wavefunctions. This is an instructive exercise that provides local indicators of model quality, in addition to the information provided by integrated quantities such as the energy or the fidelity. We show 20 different instances of the wavefunctions of the S-state (top panels, (a)-(d)) and the D-state (bottom panels, (e)-(h)) in Fig. 5. To show the "best" possible wavefunctions, we focus on the number of hidden nodes, N_hid, that provides the lowest (central value of the) energies. For all four architectures considered, this corresponds to N_hid = 100. By looking at these instances directly, one gets an idea of the possible spread of variationally evolved models. Overall, we find that all networks provide very similar, high-quality representations of the benchmark wavefunction, which is shown in dashed lines.
In most cases, the largest discrepancies occur towards the origin. As explained in Ref. [21], the presence of a q² factor in the energy integrals allows the networks to push a large amount of variance towards the low-q region. In other words, changing the wavefunction near the origin has no reflection on the global cost function. The large variance can be seen in the insets of all panels. For the S-state, one-layer models have relatively linear behaviors as q → 0, whereas two-layer networks saturate close to the origin. Neither of these behaviors matches the benchmark, shown in dashed lines, in this limit. We note that the 20 realizations provide a relatively similar amount of deviation around a central value. This is in contrast to the D-state wavefunction of the 1sc architecture, panel (e) of Fig. 5, which has a large variation around the origin, in spite of providing the best overall energies.
Unlike in Ref. [21], we display the wavefunctions of Fig. 5 on a linear grid of points that is denser than the one used to compute the energy quadratures. This allows us to investigate whether the network is able to learn efficiently not only the properties at the mesh points or the origin, but also the continuity of the wavefunction across a set of points at finite momentum. Indeed, we observe a step-like behavior in the two-layer D-state wavefunctions (panels (f) and (h) of Fig. 5). These steps occur precisely in the vicinity of each quadrature mesh point, and they are more easily seen at high momenta, where mesh points are relatively spaced apart. Clearly, two-layer networks have enough flexibility to generate horizontal steps around these points. In other words, the networks learn the properties of the wavefunction only locally around the mesh points and interpolate in a stepwise fashion between them. We take this as another, different sign of out-of-sample uncertainty. We stress that the steps do not occur in simpler, less flexible one-layer models. We speculate that the lack of flexibility of such models forces them to behave in a more continuous fashion. Further, these effects are not detected in integrated quantities, since the quadrature mesh-point values are well reproduced by all the networks. In other words, in our set-up, the regions between mesh points do not contribute to the loss functions. Therefore, the network can push the variance to these regions with no observable effect. We also note that all wavefunctions go immediately to zero for values of q larger than the ones shown in Fig. 5.
Figure 6 provides further insight into the structure of the wavefunction variance in our models. The variance here is estimated as the standard deviation associated with the 20 model initializations of our networks. The different line styles in the figure correspond to different values of N_hid. Top (bottom) panels show S (D) state variances. It is quite clear from these figures that the overall variance at finite momentum is relatively independent of the network width. Moreover, most network models have maximum variances towards the origin, which is in line with Fig. 5. One-layer networks have relatively flat declines with momentum, and saturate at values of the order of σ ≈ 10^-3.
In contrast, two-layer networks show pronounced oscillations. The oscillation positions have a one-to-one correspondence with the plateaus in Fig. 5: variance minima occur at the quadrature mesh points, and maxima in between. The oscillation minima (i.e. the smallest uncertainties) have variances similar to the one-layer case. This is a clear indication that the network has learned the wavefunctions of the system locally at the quadrature mesh points, but is overfitting the values in between, which are not penalised by the cost function. We discuss potential mitigation strategies for this problem in the following sections.
Before discussing solutions to this problem, we want to quantify its relevance. To this end, we evaluate the wavefunction models on a new uniform 10^3-point mesh in the interval [0, 5] fm^-1. We use this new linear mesh to compute, in quadrature, the overlaps of the network models with the benchmark wavefunction as well as the total energy. To summarise our findings, we show in Table 1 the results of this exercise for 2 different models. On the one hand, the 1sd model with N_hid = 100 (top row) shows almost no signs of overfitting. In contrast, the middle row shows results for the 2sd architecture with N_hid = 100, which overfits significantly. The bottom row quotes the values obtained with a benchmark wavefunction computed on the very same uniform mesh.
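A minimal sketch of this overlap check, assuming a simple trapezoidal quadrature with the spherical q^2 weight and synthetic stand-ins for the model and benchmark wavefunctions (not the paper's actual models), could read:

```python
import numpy as np

# Uniform 10^3-point mesh on [0, 5] fm^-1, mirroring the check above.
q = np.linspace(0.0, 5.0, 1000)

def quad(g):
    # Trapezoidal quadrature of g(q) with the spherical q**2 weight.
    integrand = g * q**2
    return 0.5 * np.sum((integrand[1:] + integrand[:-1]) * np.diff(q))

# Stand-ins: a smooth "benchmark" and a model with a mild high-frequency
# modulation, mimicking overfitting between the original mesh points.
psi_bench = np.exp(-q**2 / 2)
psi_model = psi_bench * (1.0 + 1e-3 * np.sin(40.0 * q))

def fidelity(a, b):
    """Squared normalised overlap between wavefunctions a and b."""
    return quad(a * b) ** 2 / (quad(a * a) * quad(b * b))

# The modulation is invisible on a coarse mesh but degrades the fidelity
# slightly on the dense uniform mesh.
F = fidelity(psi_model, psi_bench)
```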
First, we discuss the fidelities reported in the first and second columns of Table 1. These are computed on the uniform mesh, as opposed to the results presented in Fig. 4 for the original quadrature. For the 1sd model, the fidelities are practically the same, close to 1 up to the fourth or fifth significant digit. This is because the model barely overfits. The fidelities of the 2sd model are, however, one and two orders of magnitude further away from one than those reported in Fig. 4. We take this as an indication of potential overfitting.
A second, stronger indication comes from the energies in the final column. These values should be compared to the benchmarks in the same uniform mesh, reported in the bottom row. Two conclusions can be drawn here. First, the energy values obtained in this mesh are significantly worse than the exact ones. Whereas for the 1sd model the energy lies within a fraction of a percent of the benchmark, the 2sd model is only accurate to about 10% of the total energy. Second, the stochastic uncertainties are significantly larger than those reported in Fig. 4. For the 1sd network, the values are relatively competitive, of the order of 1 keV. The 2sd model has stochastic uncertainties of 200 keV, several orders of magnitude higher. This substantial increase in stochastic uncertainty, as well as the important change in the energy central values, suggests that overfitting is particularly problematic for two-layer networks. Among other things, this indicates that the results shown in Fig. 4 are not fully representative of the true values of two-layer networks. The general behavior of these quantities as a function of N_hid is, however, expected to remain the same.

Fig. 6 Top panels: standard deviation associated with 20 instances of the S-state wavefunction as a function of momentum, for each of the 4 architectures considered here. Different line styles correspond to different N_hid values. Bottom panels: the same for the D-state wavefunction. Note the difference in momentum scales between the two rows.

Table 1 Fidelity and energies of the 1sd and 2sd architectures with N_hid = 100 hidden neurons in a uniform mesh, where the overfitting is captured. Each value is the mean over the set of 20 initializations, and the errors are the associated standard deviations.
Having quantified the importance of overfitting, we now discuss its causes. The differences between the exact and predicted wavefunctions appear almost exclusively in two-layer models. These models have comparatively more parameters, which are presumably redundant. Two-layer networks can "hide" this redundancy in regions with little or no contribution to the cost function, either towards the origin or in regions in between fixed quadrature mesh points. We stress that having too many parameters does not restrict the ANN outputs, but it may increase the difficulty of the task assigned to the optimiser. The easy way out in this case is to overfit the mesh points.
Moreover, as one increases the number of parameters, the network may find multiple ways of fitting the data set evaluated on the same fixed N_q mesh points. In other words, larger networks can represent the same wavefunctions in many different ways and, as the number of parameters grows, so does the number of ways in which a network can represent the wavefunctions. This is usually identified as a bias-variance trade-off.
We now speculate on how to mitigate the overfitting effects on the wavefunctions. An obvious alternative is to increase the mesh density. One expects that, in regions which are densely covered by the mesh, the networks may not have enough freedom to develop the artificial plateaus observed in Fig. 5. Just as in traditional variational models, improving the quality of the (momentum-space) basis by, say, increasing N_q, may also be beneficial in terms of the optimization. Alternatively, one can envisage a set-up in which meshes change from epoch to epoch. If such changes are entirely stochastic, one could rephrase them as traditional variational quantum Monte Carlo implementations, although these are not usually formulated with non-local interactions like the Entem-Machleidt potential. Alternatively, one could work with deterministic meshes but change the total number of mesh points or the mesh-point positions at each epoch. In the most favourable scenario, one would expect to find a network that does not memorise specific mesh points, but rather learns the wavefunction structure. In the end, any such strategy would try to enforce a network learning process that is independent of the specific quadrature used to compute the global energy cost function.
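The epoch-varying mesh idea can be sketched as follows: jitter the positions of a deterministic linear mesh at every epoch, so the network cannot memorise a fixed set of quadrature points. This is purely illustrative (the paper's actual training uses a fixed Gauss-Legendre-type quadrature), and all names here are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def jittered_mesh(n_q=64, q_max=5.0, rng=rng):
    """Linear mesh on [0, q_max] with random per-epoch displacements."""
    base = np.linspace(0.0, q_max, n_q)
    dq = q_max / (n_q - 1)
    # Displace interior points by at most 40% of the spacing, which
    # keeps the mesh strictly monotone; pin the endpoints.
    jitter = rng.uniform(-0.4 * dq, 0.4 * dq, size=n_q)
    jitter[0] = jitter[-1] = 0.0
    return base + jitter

# A fresh mesh would be drawn at every training epoch.
mesh_a = jittered_mesh()
mesh_b = jittered_mesh()
```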

Hyperparameter exploration
In the analysis performed throughout the previous subsections, we explored the performance of four ANN architectures with different numbers of hidden neurons, N_hid, with the rest of the hyperparameters kept fixed. Now, for the sake of completeness, we present some results for the ground-state energy computed with different hyperparameters. We discuss here the 1sd architecture and provide a qualitative discussion of the other architectures in the Appendix. All the hyperparameter exploration has been performed for networks with N_hid = 60.
In this exploration, we look at the effect of the activation function and the learning rate. We use one of the best-behaved optimizers in the literature (RMSprop) and let some of its parameters vary to explore additional systematics. Specifically, we modify the smoothing constant (α) and the momentum (µ) of RMSprop. To explore the systematic dependence on the hyperparameters, we change one specific hyperparameter at a time, keeping the values of all other hyperparameters fixed to those used in Section 3. Specifically, our baseline uses a Sigmoid activation function, a learning rate of 0.01, α = 0.9 and µ = 0.0.
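To make explicit what the smoothing constant and the momentum control, here is a minimal hand-rolled version of a single RMSprop update, mirroring the roles of PyTorch's alpha and momentum parameters. The numbers and names are purely illustrative:

```python
import numpy as np

def rmsprop_step(theta, grad, state, lr=0.01, alpha=0.9, mu=0.0, eps=1e-8):
    """One RMSprop update on parameters theta given gradient grad."""
    # alpha: exponential moving average of the squared gradients.
    state["sq"] = alpha * state["sq"] + (1.0 - alpha) * grad**2
    # Precondition the gradient by the running RMS.
    update = grad / (np.sqrt(state["sq"]) + eps)
    # mu: heavy-ball momentum applied to the preconditioned gradient.
    state["buf"] = mu * state["buf"] + update
    return theta - lr * state["buf"], state

theta = np.array([1.0, -2.0])
state = {"sq": np.zeros(2), "buf": np.zeros(2)}
grad = np.array([0.5, -0.5])
theta, state = rmsprop_step(theta, grad, state)
```

With mu = 0.0 the buffer reduces to the plain preconditioned gradient, which is the baseline configuration used in the scan above.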
Table 2 shows numerical results for both the energy and the convergence rate for different combinations of the aforementioned hyperparameters, all of them sharing the same network architecture, 1sd. Notice that for each hyperparameter class there is a row which is repeated: this corresponds to the baseline hyperparameter configuration, which is also the one used in the previous sections, displayed here to facilitate the comparison. In a similar spirit to that of Section 2, we use N_tot = 150 trained models to compute all the convergence rates presented in Table 2. Nevertheless, not all hyperparameter configurations achieve N_con = 20 converged models with this N_tot. In Table 2, we include an asterisk (*) next to the convergence rate of the models which need more runs. For all such models, the convergence rate is computed using N_tot = 1100.
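The convergence-rate bookkeeping can be sketched as follows. This is a hedged illustration: the boolean outcomes and the threshold of 150 runs stand in for the paper's actual convergence criterion and pipeline:

```python
def convergence_rate(outcomes):
    """Convergence rate from a list of per-model booleans (converged?).

    Returns the rate in percent and an asterisk flag for configurations
    that needed more than the default 150 runs to collect enough
    converged models, as in Table 2.
    """
    n_tot = len(outcomes)
    n_con = sum(outcomes)
    rate = 100.0 * n_con / n_tot
    flag = "*" if n_tot > 150 else ""
    return rate, flag

# A configuration where 100 of the default 150 runs converge.
rate, flag = convergence_rate([True] * 100 + [False] * 50)
```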
Concerning the changes in the activation function shown at the top of the table, we observe that our choice, the Sigmoid, has the highest convergence rate, r = 66.9%, for this architecture. This is in contrast to r < 15% for both the ReLU and Softplus functions, indicating that models have a harder time converging with these activations. Not only that, but Sigmoids also yield the lowest (and hence best) energy value. In contrast, ReLU and Softplus yield considerably higher energies.
Models with lower learning rates are inherently slower to train. Because we work with a fixed number of iterations, a change in the learning rate can thus have a large effect on the results. To attenuate such artifacts, we train models with a learning rate that is half that of our baseline, lr = 0.005, for 5 × 10^5 epochs instead of 2.5 × 10^5 epochs. We also explore models with a larger learning rate (lr = 0.05) for the same number of epochs as our baseline. We observe that both the energy and the convergence rate improve as the learning rate decreases. The associated energy uncertainties also decrease, and we note that the best results for lr = 0.005 are incompatible with the baseline of lr = 0.01, which requires half as many epochs.
Finally, we explore the hyperparameter space associated with the optimiser in the bottom rows of Table 2.
Our results indicate that variations in the smoothing constant α and the momentum µ do not have a significant impact on the metrics studied here. Perhaps the most robust conclusion is that adding momentum to RMSprop reduces the convergence rate of our simulations. The interested reader can find a similar table for the 1sc, 2sc and 2sd configurations in the Appendix.

Conclusions and future outlook
In this work, we use vANNs to compute the ground-state properties of the deuteron with a realistic Hamiltonian. To this end, we discretise the problem on a fixed mesh in the relative momentum coordinate. We use standard ML tools, based on PyTorch, to pre-train our models and subsequently minimise the energy for a fixed number of epochs.
High-quality solutions for the variational wavefunction were already obtained in Ref. [21] with a similar set-up. We extend that work here in several directions, aimed at identifying fundamental limitations of the vANN approach. First, we look at different ANN architectures, increasing the number of layers and treating the connection to the output states in a connected or disconnected fashion. Second, we identify a new source of uncertainty associated with the oscillations around the final energy minimum. Third, by carefully analysing the wavefunction outputs, we identify conditions in which finite-momentum overfitting arises.
All vANN models provide excellent results for the energies and fidelities when compared to benchmark wavefunctions. By looking at the rate of model convergence for different architectures, we find a first qualitative sign that two-layer networks have a harder time minimising the energy than their one-layer counterparts. In terms of model uncertainties, the post-evolution oscillation errors dominate over the stochastic initialization errors, but they remain relatively small (of the order of 6 to 8 keV) across a wide range of network widths. When it comes to the structure of the network, state-connected networks, with output nodes that are connected to the internal hidden layer, provide marginally better results than state-disconnected architectures.
Overall, we find that two-layer networks provide worse results than one-layer models. This may have been expected, since the input layer is based on a single, positive-valued degree of freedom (the magnitude of the relative momentum, q). Central values of the energy are less attractive, and the stochastic and post-evolution uncertainties are larger. We also identify a dependence on the number of hidden nodes, which is absent in the one-layer case. Representing the wavefunctions of these models at grid points that are not used in the minimisation process, we find anomalous horizontal steps between mesh points for the two-layer models. These are clear signs of network overfitting. It seems that, during the minimisation process, the network is able to push some of the redundant parameter dependence into these regions, which do not contribute to the energy cost function. This is analogous to the observation of a large variance (in terms of wavefunction values) around the origin, where the spherical q^2 factor allows the network to change values arbitrarily around q ≈ 0 without affecting the total energy. Unlike the situation at the origin, however, a change of integration mesh for the energy can easily detect the degradation of the model associated with overfitting at finite momentum. We find that, on the new mesh, the fidelities with respect to benchmark wavefunctions become worse. The energy values on a different mesh are also substantially less attractive. The associated stochastic uncertainty increases, reaching up to 300 keV in some cases.
While unveiling the internal behavior of the ANN is hard, the comparison between different architectures certainly sheds some light on the fundamental limitations of these variational methods. When it comes to representing wavefunctions that depend on a single continuous variable, our results indicate that one-layer networks provide a better starting point than the more complex two-layer approaches. This effect may be due to the fixed (if long) total number of epochs, but other observations indicate that overfitting arises much more easily in deeper networks. We have explored the hyperparameter and activation-function dependence of our results. We find that our baseline model, with Sigmoid functions and moderate learning rates, is almost optimal. The only significant dependence is on the learning rate, which unfortunately requires running simulations for a higher number of epochs.
Similarly, our results suggest that the network can tackle states with different quantum numbers in a fully connected configuration. In training ANNs to represent continuous quantum systems, non-fixed grid methods may provide superior learning capabilities. Overall, this experience provides useful ideas to build more sophisticated nuclear models and to tackle more difficult problems, including those of a many-body nature.
Appendix A: Hyperparameter dependence exploration

In Section 3.4 we briefly commented on the various hyperparameters in our models, and in Table 2 we showed the effects of these variations on the energy and the convergence rate for the 1sd architecture. Here, we provide a similar analysis for the 1sc, 2sc and 2sd architectures.
Table 3 shows numerical results for both the energy and the convergence rate for different values of the hyperparameters. We explore this dependence by changing one parameter at a time and keeping the remaining ones fixed. Here we find the same issue with convergence rates discussed in Section 3.4: for some hyperparameter configurations, using N_tot = 150 is insufficient to guarantee a minimum of N_con = 20 converged models. For such models, we use N_tot = 1100 trained models instead, and we include an asterisk (*) next to their convergence rate in Table 3. We extract two relevant pieces of information from this table. First, we find that 1sc is the most stable architecture: both the convergence rates and the energy values of this network are robust against changes of the hyperparameters. Second, we also find that a high convergence rate does not always entail a low ground-state energy. This can be easily understood by realising that, while the trend in convergence rates appears to be linked almost exclusively to the architecture, the energy values depend instead on a wider range of factors. This is certainly something to bear in mind in future vANN calculations, where ANN architectures may have to be explored in detail.
We also find a less direct but significant dependence on the learning rate. Just as for the 1sd architecture, we find that lowering the learning rate by a factor of 2 (and doubling the number of epochs) seems to lead to better variational minima. These also show slightly smaller uncertainties and improved convergence rates.
For the 1sc and 2sd architectures, the numbers in Table 3 also suggest that adding momentum results in models with lower (and hence better) energies. We also notice that two-layer architectures have much worse convergence rates than their one-layer counterparts for µ = 0.9. This may be understood with arguments similar to those presented in the main text. The 2sc and 2sd networks present overfitting, which suggests that there is an excessive number of hidden neurons. Parameter updates in these models are therefore prone to having a greater impact on the overall wavefunction, which makes training less predictable, and increasing the momentum value is yet another step in that same direction.
Table 3 also confirms some of the results discussed in the main body of this manuscript. We observe that two-layer architectures have lower convergence rates than one-layer networks throughout the entire range of hyperparameter values explored here. The same occurs for disconnected architectures, which have lower rates and somewhat worse energies than their connected counterparts. We take this as an indication that these tendencies are robust and reflect features linked to the network architecture itself, rather than the details of the training process.

Data Availability Statement
The code used to produce all the results in this paper, including the figures appearing in the text, is available at the GitHub repository https://github.com/javier-rozalen/deuteron.

Fig. 1 Neural network architectures used in this work.

Fig. 4 Top panels: energy as a function of the number of hidden nodes. Different panels correspond to the network architectures of Fig. 1. Lines represent central values obtained with 20 stochastic initializations. Dark bands show the associated standard deviations (stochastic uncertainty), whereas light bands show the post-evolution oscillation uncertainty. The dashed red lines indicate the benchmark value. Bottom panels: the same for the fidelity of the ANN wavefunction with respect to the benchmark for the S (purple) and D (green) states.

Fig. 5 Top panels: 20 instances of the S-state wavefunction as a function of momentum, for each of the 4 architectures considered here, for N_hid = 100. The dashed line represents the benchmark wavefunction. The insets focus on the region around the origin, q ≈ 0. Bottom panels: the same for the D-state wavefunction. Note the difference in momentum scales between the two rows.