Abstract
We present a first determination of the nuclear parton distribution functions (nPDF) based on the NNPDF methodology: nNNPDF1.0. This analysis is based on neutral-current deep-inelastic structure function data and is performed up to NNLO in QCD calculations with heavy quark mass effects. For the first time in the NNPDF fits, the \(\chi ^2\) minimization is achieved using stochastic gradient descent with reverse-mode automatic differentiation (backpropagation). We validate the robustness of the fitting methodology through closure tests, assess the perturbative stability of the resulting nPDFs, and compare them with other recent analyses. The nNNPDF1.0 distributions satisfy the boundary condition whereby the NNPDF3.1 proton PDF central values and uncertainties are reproduced at \(A=1\), which introduces important constraints particularly for low-A nuclei. We also investigate the information that would be provided by an Electron-Ion Collider (EIC), finding that EIC measurements would significantly constrain the nPDFs down to \(x\simeq 5\times 10^{-4}\). Our results represent the first-ever nPDF determination obtained using a Monte Carlo methodology consistent with that of state-of-the-art proton PDF fits, and provide the foundation for subsequent global nPDF analyses including also proton-nucleus data.
Introduction
It has been known for more than three decades [1] that the parton distribution functions (PDFs) of nucleons bound within nuclei, more simply referred to as nuclear PDFs (nPDFs) [2, 3], can be modified with respect to their free-nucleon counterparts [4]. Since MeV-scale nuclear binding effects were expected to be negligible compared to the typical momentum transfers (\(Q\gtrsim 1\) GeV) in hard-scattering reactions such as deep-inelastic lepton-nucleus scattering, such a phenomenon came as a surprise to much of the physics community. Despite active experimental and theoretical investigations, the underlying mechanisms that drive in-medium modifications of nucleon substructure are yet to be fully understood. The determination of nPDFs is therefore relevant to improve our fundamental understanding of the strong interactions in the nuclear environment.
In addition to pinning down the dynamics of QCD in heavy nuclei, nPDFs are an essential ingredient for the interpretation of heavy ion collisions at RHIC and the LHC, in particular for the characterization of the Quark-Gluon Plasma (QGP) [5, 6] via hard probes. Moreover, a reliable determination of the nuclear PDFs is required to separate the hot nuclear matter (QGP) effects from the cold nuclear matter effects that will in general already be present in the initial stages of the heavy ion collision.
The importance of nPDF fits is further highlighted by their contribution to the quark flavor separation in global PDF analyses of the proton [7,8,9,10]. Even with constraints from related processes such as gauge boson production at the Tevatron and the LHC, the information provided by neutrino-induced charged-current deep-inelastic scattering on heavy nuclear targets plays a critical role in disentangling the proton’s quark and antiquark distributions. However, given the current precision of proton PDF fits, neglecting the nuclear uncertainties associated with neutrino-nucleus scattering may not be well justified anymore [11], as opposed to the situation some years ago [12].
Lastly, nPDF extractions can sharpen the physics case of future high-energy lepton-nucleus colliders such as the Electron-Ion Collider (EIC) [13] and the Large Hadron electron Collider (LHeC) [14, 15], which will probe nuclear structure deep in the region of small parton momentum fractions, x, and aim to unravel novel QCD dynamics such as non-linear (saturation) effects. The latter will only be possible provided that a faithful estimate of the nPDF uncertainties at small x can be attained, similar to what was required for the recent evidence for BFKL dynamics from the HERA structure function data [16].
Unfortunately, the determination of the nuclear PDFs is hampered by the rather limited experimental dataset available. In fact, until a few years ago, most nPDF analyses [17,18,19,20,21,22] were largely based on fixed-target DIS structure functions in lepton-nucleus scattering (with kinematic coverage restricted to \(x\gtrsim 0.01\)) supplemented by some Drell–Yan cross-sections. A major improvement in this respect has been the recent availability of data on hard-scattering cross-sections from proton-lead collisions at the LHC, with processes ranging from jet [23,24,25,26,27] and electroweak boson production [28,29,30,31,32], to heavy quark production [33,34,35,36,37,38,39,40] among several others. Indeed, measurements of hard probes in p+Pb collisions provide useful information to constrain the nPDFs, as was demonstrated by a few recent studies [41, 42].
On the other hand, a survey of various nPDF determinations reveals limitations that are of methodological origin as well. First of all, current nuclear PDF fits rely on model-dependent assumptions for the parameterization of the non-perturbative x and atomic mass number A dependence, resulting in a theoretical bias whose magnitude is difficult to assess. Moreover, several nPDF sets are extracted in terms of a proton baseline (to which the former must reduce in the \(A\rightarrow 1\) limit) that has been determined by other groups based on fitting methodologies and theoretical settings which might not be fully equivalent, for instance, in the prescriptions used to estimate the PDF uncertainties. Finally, PDF uncertainties are often estimated using the Hessian method, which is restricted to a Gaussian approximation with ad hoc tolerances, introducing a level of arbitrariness in their statistical interpretation.
Motivated by this need for a reliable and consistent determination of nuclear PDFs and their uncertainties, we present in this work a first nPDF analysis based on the NNPDF methodology [43,44,45,46,47,48,49,50,51,52]: nNNPDF1.0. In this initial study, we restrict our analysis to neutral-current nuclear deep-inelastic structure function measurements, and compute the corresponding predictions in QCD up to NNLO in the \(\alpha _s\) expansion. Moreover, heavy quark mass effects are included using the FONLL general-mass variable-flavor-number scheme [53]. Since the nPDFs are determined using the same theoretical and methodological framework as the NNPDF3.1 proton PDFs, we are able to impose the boundary condition in a consistent manner so that the nNNPDF1.0 results reproduce both the NNPDF3.1 central values and uncertainties when evaluated at \(A=1\).
The nNNPDF1.0 sets are constructed following the general fitting methodology outlined in previous NNPDF studies, which utilizes robust Monte Carlo techniques to obtain a faithful estimate of nPDF uncertainties. In addition, in this study we implement for the first time stochastic gradient descent to optimize the model parameters. This is performed using TensorFlow [54], an open source machine learning library in which the gradients of the \(\chi ^2\) function can be computed via automatic differentiation. Together with several other improvements, we present a validation of the nNNPDF1.0 methodology through closure tests.
As a first phenomenological application of the nNNPDF1.0 sets, we quantify the impact of future lepton-nucleus scattering measurements provided by an Electron-Ion Collider. Using pseudodata generated with different electron and nucleus beam energy configurations, we perform fits to quantify the effect on the nNNPDF1.0 uncertainties and discuss the extent to which novel QCD dynamics can be revealed. More specifically, we demonstrate how the EIC would lead to a significant reduction of the nPDF uncertainties at small x, paving the way for a detailed study of nuclear matter in a presently unexplored kinematic region.
The outline of this paper is the following. In Sect. 2 we present the input experimental data used in this analysis, namely ratios of neutral-current deep-inelastic structure functions, followed by a discussion of the corresponding theoretical calculations. The description of the fitting strategy, including the updated minimization procedure and choice of parameterization, is presented in Sect. 3. We discuss the validation of our fitting methodology via closure tests in Sect. 4. The main results of this work, the nNNPDF1.0 nuclear parton distributions, are then presented in Sect. 5. In Sect. 6 we quantify the impact on the nPDFs from future EIC measurements of nuclear structure functions. Lastly, in Sect. 7 we summarize and discuss the outlook for future studies.
Experimental data and theory calculations
In this section we review the formalism that describes deepinelastic scattering (DIS) of charged leptons off of nuclear targets. We then present the data sets that have been used in the present determination of the nuclear PDFs, discussing also the kinematical cuts and the treatment of experimental uncertainties. Lastly, we discuss the theoretical framework for the evaluation of the DIS structure functions, including the quark and antiquark flavor decomposition, the heavy quark mass effects, and the software tools used for the numerical calculations.
Deep-inelastic lepton-nucleus scattering
The description of hard-scattering collisions involving nuclear targets begins with collinear factorization theorems in QCD that are identical to those in free-nucleon scattering.
For instance, in deep-inelastic lepton-nucleus scattering, the leading-power contribution to the cross section can be expressed in terms of a hard partonic cross section that is unchanged with respect to the corresponding lepton-nucleon reaction, and the non-perturbative PDFs of the nucleus. Since these nPDFs are defined by the same leading-twist operators as the free-nucleon PDFs but acting instead on nuclear states, the modifications from internal nuclear effects are naturally contained within the nPDF definition, and the factorization theorems remain valid assuming power-suppressed corrections are negligible in the perturbative regime, \(Q^2 \gtrsim \) 1 GeV\(^2\). We note, however, that this assumption may not hold for some nuclear processes, and therefore must be studied and verified through the analysis of relevant physical observables.
We start now by briefly reviewing the definition of the DIS structure functions and of the associated kinematic variables which are relevant for the description of lepton-nucleus scattering. The double differential cross-section for scattering of a charged lepton off a nucleus with atomic mass number A is given by
where \(Y_\pm = 1 \pm (1-y)^2\) and the usual DIS kinematic variables can be expressed in Lorentz-invariant form as
Here the four-momenta of the target nucleon, the incoming charged lepton, and the exchanged virtual boson (\(\gamma ^*\) or Z) are denoted by P, k, and q, respectively. The variable x is defined here to be the standard Bjorken scaling variable, which at leading order can be interpreted as the fraction of the nucleon’s momentum carried by the struck parton, and y is known as the inelasticity. Lastly, the virtuality of the exchanged boson is \(Q^2\), which represents the hardness of the scattering reaction.
As will be discussed below, the maximum value of the momentum transfer \(Q^2\) in the nNNPDF1.0 input dataset is \(Q^2_\mathrm{max} \simeq 200\) GeV\(^2\) (see Fig. 1). Given that \(Q^2_\mathrm{max}\ll M_Z^2\), the contribution from the parity-violating \(xF_3\) structure functions and the contributions to \(F_2\) and \(F_L\) arising from Z boson exchange can be safely neglected. Therefore, for the kinematic range relevant to the description of available nuclear DIS data, Eq. (2.1) simplifies to
where only the photon-exchange contributions are retained for the \(F_2\) and \(F_L\) structure functions. In Eq. (2.3) we have isolated the dominant \(F_2\) dependence, since the second term is typically rather small. Note that since the center of mass energy of the lepton-nucleon collision \(\sqrt{s}\) is determined by
where hadron and lepton masses have been neglected, measurements with the same values for x and \(Q^2\) but different center of mass energies \(\sqrt{s}\) will lead to a different value of the prefactor in front of the \(F_L/F_2\) ratio in Eq. (2.3), allowing in principle the separation of the two structure functions as in the free proton case.
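The relations above can be made concrete with a short numerical sketch. Assuming massless kinematics, \(Q^2 = x y s\), so fixing \((x, Q^2)\) at two different center of mass energies yields two different inelasticities y, and hence two different values of the \(Y_+\) prefactor multiplying the \(F_L/F_2\) term in Eq. (2.3); the function and beam energies below are illustrative, not taken from the text:

```python
def dis_kinematics(x, q2, sqrt_s):
    """Inelasticity y and the prefactor Y+ = 1 + (1-y)^2 for given
    Bjorken-x, Q^2 (GeV^2), and lepton-nucleon CM energy sqrt(s) (GeV).
    Uses Q^2 = x * y * s, neglecting hadron and lepton masses."""
    y = q2 / (sqrt_s**2 * x)            # inelasticity
    y_plus = 1.0 + (1.0 - y)**2         # DIS cross-section prefactor
    return y, y_plus

# same (x, Q^2) point at two CM energies -> different y, different
# weight on the F_L/F_2 term, allowing a separation of F_2 and F_L
y1, yp1 = dis_kinematics(0.01, 10.0, 100.0)   # y = 0.1
y2, yp2 = dis_kinematics(0.01, 10.0, 300.0)   # y ~ 0.011
```

This is the same lever arm used to disentangle \(F_2\) and \(F_L\) in the free-proton case.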
Experimental data
In this analysis, we include all available inclusive DIS measurements of neutral-current structure functions on nuclear targets. In particular, we use data from the EMC [55,56,57,58], NMC [59,60,61,62], and BCDMS experiments at CERN, E139 measurements from SLAC [63], and E665 data from Fermilab. The measurements of nuclear structure functions are typically presented as ratios of the form
where \(A_1\) and \(A_2\) are the atomic mass numbers of the two different nuclei. Some of the experimental measurements included in this analysis are presented instead as ratios of DIS cross-sections. As discussed earlier, the double-differential DIS cross-sections are related to the \(F_2\) and \(F_L\) structure functions by
Therefore, one should in principle account for the contributions from the longitudinal structure function \(F_L\) to the cross-section ratios measured by experiment. However, it is well known that the ratio \(F_L/F_2\) exhibits a very weak dependence on A [64, 65], and therefore the second term in Eq. (2.6) cancels out to a good approximation when taking ratios between different nuclei. In other words, we can exploit the fact that
so that the ratios of DIS cross-sections for \(Q \ll M_Z\) in the form of Eq. (2.6) are equivalent to ratios of the \(F_2\) structure functions. Lastly, it is important to note that whenever the nuclei involved in the measurements are not isoscalar, the data are corrected to give isoscalar ratios, and an additional source of systematic error is added as a result of this conversion.
Summarized in Table 1 are the different types of nuclei measured by the experiments included in the nNNPDF1.0 analysis. For each dataset, we indicate the nuclei \(A_1\) and \(A_2\) that are used to construct the structure function ratios in Eq. (2.5), quoting explicitly the corresponding atomic mass numbers. We also display the number of data points that survive the baseline kinematical cuts, and give the corresponding publication references.
In Fig. 1 we show the kinematical coverage in the \((x,Q^2)\) plane of the DIS nuclear data included in nNNPDF1.0. To minimize the contamination from lowscale nonperturbative corrections and highertwist effects, and also to remain consistent with the baseline proton PDF analysis (to be discussed in Sect. 3), we impose the same kinematical cuts on \(Q^2\) and the invariant final state mass squared \(W^2=(P+q)^2\) as in the NNPDF3.1 global fit [71], namely
which are represented by the dashed lines in Fig. 1. In Table 2, we compare our kinematical cuts in \(W^2\) and \(Q^2\) to those implemented in the nCTEQ15 and EPPS16 fits. We find that our cuts are very similar to those of the nCTEQ15 analysis [20], and as a result our neutral-current DIS nuclear structure function dataset is similar to that used in their analysis. On the other hand, our choices of both the \(Q^2_\mathrm{min}\) and \(W^2_\mathrm{min}\) cuts are more stringent than those made in the EPPS16 analysis [41], where they set \(Q^2_\mathrm{min} = 1.69\) GeV\(^2\) and do not impose any cut in \(W^2\).
After imposing the kinematical cuts in Eq. (2.8), we end up with \(N_\mathrm{dat}=451\) data points. As indicated in Table 1, around half of these points correspond to ratios of heavy nuclei with respect to deuterium, namely \(R_{F_2}(A_1,A_2=2)\) in the notation of Eq. (2.5). For the rest of the data points, the values of \(A_1\) and \(A_2\) both correspond to heavier nuclei, with \(A_2 \ge 6\). It is worth noting that the measurements from the NMC collaboration contain a significant amount of points for which the carbon structure function is in the denominator, \(R_{F_2}(A_1,A_2=12)\). In particular, we have \(N_\mathrm{dat}=119\) data points for the \(Q^2\) dependence of the tin to carbon ratio, \(R_{F_2}(119,12)\). These measurements provide valuable constraints on the A dependence of the nuclear PDFs, since nuclear effects enter both the numerator and denominator of Eq. (2.5).
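Such cuts are straightforward to implement once \(W^2\) is expressed in terms of the DIS variables: from \(x=Q^2/(2P\cdot q)\) one has \(W^2 = M^2 + Q^2(1-x)/x\). The sketch below applies a cut of this form; the numerical thresholds are assumptions chosen to match NNPDF3.1-like values, not quoted from Eq. (2.8):

```python
def passes_cuts(x, q2, q2_min=3.5, w2_min=12.5, m_n=0.938):
    """Baseline DIS kinematical cuts on Q^2 and the invariant
    final-state mass W^2 = M^2 + Q^2 (1 - x)/x, all in GeV^2.
    Threshold defaults are illustrative NNPDF3.1-like values."""
    w2 = m_n**2 + q2 * (1.0 - x) / x
    return (q2 >= q2_min) and (w2 >= w2_min)

# a perturbative small-x point survives; low-Q^2 or low-W^2
# fixed-target points are removed
kept = passes_cuts(0.1, 10.0)      # True: Q^2 and W^2 both large
cut_q2 = passes_cuts(0.5, 2.0)     # False: fails the Q^2 cut
cut_w2 = passes_cuts(0.8, 20.0)    # False: W^2 ~ 5.9 GeV^2 too low
```

The \(W^2\) cut mainly removes large-x, low-\(Q^2\) points where higher-twist effects are expected to be sizable.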
Concerning the treatment of the experimental uncertainties, we account for all correlations among data points whenever this information is provided by the corresponding experiments. This information is then encoded into the experimental covariance matrix, constructed using the \(t_0\) prescription [48]:
where one treats the \(N_\mathrm{add}\) additive (‘sys,a’) relative experimental systematic errors separately from the \(N_\mathrm{mult}\) multiplicative (‘sys,m’) ones. In the additive case, the central value of the experimental measurement is used for the structure function ratio, \(R_i^{(\mathrm{exp})}\). In the multiplicative case, e.g. for overall normalization uncertainties, a fixed set of theoretical predictions for the ratios, \(\{R_{i}^{(\mathrm th,0)}\}\), is constructed. These predictions are typically obtained from a previous fit, which is then iterated until convergence is reached. The use of the \(t_0\) covariance matrix defined in Eq. (2.9) for the \(\chi ^2\) minimization (to be discussed in Sect. 3) avoids the bias associated with multiplicative uncertainties, which would otherwise lead to a systematic underestimation of the best-fit values compared to their true values [72].
For the case in which correlated systematic uncertainties are not available, we simply add statistical and systematic errors in quadrature and Eq. (2.9) reduces to
where \(N_\mathrm{sys}=N_\mathrm{add}+N_\mathrm{mult}\). It turns out that for all of the measurements listed in Table 1, the detailed breakdown of the experimental systematic errors is not available (in most cases these partially or totally cancel out when taking ratios of observables), and the only systematic error that enters the \(t_0\) covariance matrix Eq. (2.9) is the multiplicative normalization error.
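The construction of such a covariance matrix can be sketched as follows. This is a minimal illustration of the \(t_0\) idea, not the NNPDF implementation: correlated additive systematics (given as fractions) multiply the measured central values, while multiplicative ones multiply the fixed \(t_0\) theory predictions; all array shapes and inputs are illustrative assumptions:

```python
import numpy as np

def t0_covmat(stat, sys_add, sys_mult, data, theory_t0):
    """Sketch of a t0-style covariance matrix for N data points.
      stat:      (N,)        absolute statistical errors
      sys_add:   (N, N_add)  relative additive systematics
      sys_mult:  (N, N_mult) relative multiplicative systematics
                             (e.g. normalizations)
      data, theory_t0: (N,)  central values and fixed t0 predictions
    Each systematic source is fully correlated across data points."""
    data = np.asarray(data, float)
    theory_t0 = np.asarray(theory_t0, float)
    cov = np.diag(np.asarray(stat, float) ** 2)
    add = np.asarray(sys_add, float) * data[:, None]        # abs. additive
    mult = np.asarray(sys_mult, float) * theory_t0[:, None] # abs. mult.
    cov += add @ add.T + mult @ mult.T                      # correlated sums
    return cov
```

Scaling the multiplicative errors by fixed theory predictions rather than by the data is precisely what removes the d'Agostini bias mentioned in the text.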
Numerical implementation
We turn now to discuss the numerical implementation of the calculations of the DIS structure functions and their ratios \(R_{F_2}\) relevant for the nPDF interpretation of the nuclear DIS data. In the framework of collinear QCD factorization, the \(F_2\) structure function can be decomposed in terms of hard-scattering coefficient functions and nuclear PDFs as,
where \(C_i(x,Q^2)\) are the process-dependent coefficient functions which can be computed perturbatively as an expansion in the QCD and QED couplings; \(\Gamma _{ij}(Q^2,Q_0^2)\) is an evolution operator, determined by the solutions of the DGLAP equations, which evolves the nPDFs from the initial parameterization scale \(Q_0^2\) to the hard-scattering scale \(Q^2\); \(f_i(x,Q^2_0,A)\) are the nPDFs at the parameterization scale; and \(\otimes \) denotes the Mellin convolution. The sum over flavors i, j runs over the \(n_f\) active quark and antiquark flavors at a given scale Q, as well as over the gluon.
The direct calculation of Eq. (2.11) during the nPDF fit is not practical since it requires first solving the DGLAP evolution equation for each new boundary condition at \(Q_0\) and then convoluting with the coefficient functions. To evaluate Eq. (2.11) in a more computationally efficient way, it is better to precompute all the perturbative information, i.e. the coefficient functions \(C_i\) and the evolution operators \(\Gamma _{ij}\), with a suitable interpolation basis. Several of these approaches have been made available in the context of PDF fits [73,74,75,76]. Here we use the APFELgrid tool [77] to precompute the perturbative information of the nDIS structure functions provided by the APFEL program [78].
Within this approach, we can factorize the dependence on the nPDFs at the input scale \(Q_0\) from the rest of Eq. (2.11) as follows. First, we introduce an expansion over a set of interpolating functions \(\{ I_{\beta }\}\) spanning both \(Q^2\) and x such that
where the nPDFs are now tabulated in a grid in the \((x,Q^2)\) plane, \(f_{i,\beta \tau }\equiv f_i(x_\beta ,Q^2_{\tau },A)\). We can express this result in terms of the PDFs at the input evolution scale using the (interpolated) DGLAP evolution operators,
so that the nuclear DIS structure function can be evaluated as
This can be rearranged to give
where all of the information about the partonic cross-sections and the DGLAP evolution operators is now encoded into the so-called FK table, \(\mathtt{FK}_{i,\alpha }\). Therefore, with the APFELgrid method we are able to express the series of convolutions in Eq. (2.11) as the matrix multiplication in Eq. (2.15), increasing the numerical calculation speed of the DIS structure functions by up to several orders of magnitude.
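The computational gain is easy to see in code: once the FK table is precomputed, every theory prediction during the fit reduces to a single tensor contraction with the input-scale nPDF values on the interpolation grid. The sketch below uses illustrative array shapes and toy random inputs, not the actual APFELgrid layout:

```python
import numpy as np

def predict(fk, f0):
    """Structure-function predictions as a single contraction in the
    spirit of Eq. (2.15).
      fk: (N_dat, n_fl, n_x) precomputed FK table
      f0: (n_fl, n_x)        nPDFs at Q_0 tabulated on the x-grid
    Returns the (N_dat,) vector of predictions."""
    return np.einsum('iax,ax->i', fk, f0)

rng = np.random.default_rng(1)
fk = rng.normal(size=(5, 3, 10))   # toy table: 5 data points,
f0 = rng.normal(size=(3, 10))      # 3 flavors, 10 x-grid nodes
preds = predict(fk, f0)            # no DGLAP solve, no convolution
```

Because the contraction is linear in `f0`, it also composes naturally with gradient-based minimization of the \(\chi^2\).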
In this work, the FK tables (and thus the nDIS structure functions) are computed up to NNLO in the QCD coupling expansion, with heavy quark effects evaluated by the FONLL general-mass variable-flavor-number scheme [53]. Specifically, we use the FONLL-B scheme for the NLO fits and the FONLL-C scheme for the NNLO fits. The value of the strong coupling constant is set to be \(\alpha _s(m_Z)=0.118\), consistent with the PDG average [79] and with recent high-precision determinations [80,81,82,83] (see [84] for an overview). Our variable flavor number scheme has a maximum of \(n_f=5\) active quarks, where the heavy quark pole masses are taken to be \(m_c=1.51\) GeV and \(m_b=4.92\) GeV following the Higgs Cross-Sections Working Group recommendations [85]. The charm and bottom PDFs are generated dynamically from the gluon and the light quark PDFs starting from the thresholds \(\mu _c=m_c\) and \(\mu _b=m_b\). Finally, since all of these theoretical settings are the same as in the NNPDF3.1 global proton PDF analysis, we choose this set to represent our nPDFs at \(A=1\), which we explain in more detail in Sect. 3.
In Table 3 we show a comparison between the deep-inelastic structure function \(F_2(x,Q^2,A)\) computed with the APFEL program and with the FK interpolation method, Eq. (2.15), using the theoretical settings given above. The predictions have been evaluated using the EPPS16 sets for two different perturbative orders, FONLL-B and FONLL-C, at sample values of x and \(Q^2\) covered by the carbon (\(A=12\)) and lead (\(A=208\)) data. We also indicate the relative difference between the two calculations, \(\Delta _\mathrm{rel}\equiv (\mathtt{APFEL}-\mathtt{FK})/\mathtt{APFEL}\). Here we see that the agreement is excellent, with residual differences much smaller than the typical uncertainties of the experimental data, and thus suitable for our purposes.
Quark flavor decomposition
With the APFELgrid formalism, we can express any DIS structure function in terms of the nPDFs at the initial evolution scale \(Q_0^2\) using Eq. (2.15). In principle, one would need to parameterize 7 independent PDFs: the up, down, and strange quark and antiquark PDFs and the gluon. Another two input PDFs would be required if in addition the charm and anticharm PDFs are also parameterized, as discussed in [86]. However, given that our input dataset in this analysis is restricted to DIS neutral current structure functions, a full quark flavor separation of the fitted nPDFs is not possible. In this section we discuss the specific quark flavor decomposition that is adopted in the nNNPDF1.0 fit.
We start by expressing the neutral-current DIS structure function \(F_2(x,Q^2,A)\) at leading order in terms of the nPDFs. This decomposition is carried out for \(Q^2 < m_c^2\) and therefore the charm PDF is absent. In this case, one finds for the \(F_2\) structure function,
where for consistency the DGLAP evolution has been performed at LO, and the quark and antiquark PDF combinations are given by
In this analysis, we will work in the PDF evolution basis, which is defined as the basis composed by the eigenstates of the DGLAP evolution equations. If we restrict ourselves to the \(Q < m_c\) (\(n_f=3\)) region, the quark combinations are defined in this basis as
It can be shown that the neutral-current DIS structure functions depend only on these three quark combinations: \(\Sigma \), \(T_3\), and \(T_8\). Other quark combinations in the evolution basis, such as the valence distributions \(V=u^- + d^- + s^-\) and \(V_3=u^- - d^-\), appear only at the level of charged-current structure functions, as well as in hadronic observables such as W and Z boson production.
In the evolution basis, the \(F_2\) structure function for a proton and a neutron target at LO in the QCD expansion can be written as
Therefore, since the nuclear effects are encoded in the nPDFs, the structure function for a nucleus with atomic number Z and mass number A will be given by a simple sum of the proton and neutron structure functions,
Inserting the decomposition of Eq. (2.21) into Eq. (2.22), we find
Note that nuclear effects, driven by QCD, are electric-charge blind and therefore depend only on the total number of nucleons A within a given nucleus, in addition to x and \(Q^2\). The explicit dependence on Z in Eq. (2.23) arises from QED effects, since the virtual photon \(\gamma ^*\) in deep-inelastic scattering couples more strongly to up-type quarks (\(e_q=2/3\)) than to down-type quarks (\(e_q=-1/3\)).
From Eq. (2.23) we see that at LO the nuclear \(F_2\) structure function depends on three independent quark combinations: the total quark singlet \(\Sigma \), the quark triplet \(T_3\), and the quark octet \(T_8\). However, the dependence on the non-singlet triplet combination is very weak, since its coefficient is given by
where \(\Delta A \equiv A - 2Z\) quantifies the deviation from nuclear isoscalarity (\(A=2Z\)). This coefficient is quite small for the nuclei for which data are available, and in most cases nuclear structure functions are corrected for non-isoscalarity effects. In this work, we will assume \(\Delta A=0\) such that we have only isoscalar nuclear targets. The dependence on \(T_3\) then drops out, and the nuclear structure function \(F_2\) at LO is given by
where now the only relevant quark combinations are the quark singlet \(\Sigma \) and the quark octet \(T_8\). Therefore, at LO, neutralcurrent structure function measurements on isoscalar targets below the Z pole can only constrain a single quark combination, namely
At NLO and beyond, the dependence on the gluon PDF enters and the structure function Eq. (2.25) becomes
where \(C_{\Sigma }\), \(C_{T_8}\), and \(C_{g}\) are the coefficient functions associated with the singlet, octet, and gluon respectively. In principle one could aim to disentangle \(\Sigma \) from \(T_8\) due to their different \(Q^2\) behavior, but in practice this is not possible given the limited kinematical coverage of the available experimental data. Therefore, only the \(\Sigma + T_8/4\) quark combination is effectively constrained by the experimental data used in this analysis, as indicated by Eq. (2.26).
Putting together all of this information, we will consider the following three independent PDFs at the initial parameterization scale \(Q_0\):

- the total quark singlet \(\Sigma (x,Q^2_0,A) = \sum _{i=1}^{3}f^+_i(x,Q^2_0,A)\),
- the quark octet \(T_{8}(x,Q^2_0,A) = \left( u^+ + d^+ - 2s^+\right) (x,Q^2_0,A)\),
- and the gluon nPDF \(g(x,Q^2_0,A)\).
In Sect. 3 we discuss the parameterization of these three nPDFs using neural networks. In Fig. 2 we show the results for the correlation coefficient between the nPDFs that are parameterized in the nNNPDF1.0 fit (presented in Sect. 5), specifically the NNLO set for copper (\(A=64\)) nuclei. The nPDF correlations are computed at both \(Q=1\) GeV and \(Q=100\) GeV, the former of which contains experimental data in the region \( 0.01 \lesssim x \lesssim 0.4\) (illustrated in Fig. 1). In the data region, there is a strong anticorrelation between \(\Sigma \) and \(T_8\), consistent with Eq. (2.26) which implies that only their weighted sum can be constrained. As a result, we will show in the following sections only results of the combination \(\Sigma +T_8/4\) which can be meaningfully determined given our input experimental data. From Fig. 2, one can also observe the strong correlation between \(\Sigma \) and g for \(x\lesssim 0.01\) and \(Q=100\) GeV, arising from the fact that these two PDFs are coupled via the DGLAP evolution equations as opposed to \(T_8\) and g where the correlation is very weak.
Fitting methodology
In this section we describe the fitting methodology that has been adopted in the nNNPDF1.0 determination of nuclear parton distributions. While much of this methodology follows from previous NNPDF analyses, a number of significant improvements have been implemented in this work. Here we discuss these developments, together with relevant aspects of the NNPDF framework that need to be modified or improved in order to deal with the determination of the nuclear PDFs, such as the parameterization of the A dependence or imposing the \(A=1\) proton boundary condition.
Following the NNPDF methodology, the uncertainties associated with the nPDFs are estimated using the Monte Carlo replica method, where a large number \(N_\mathrm{rep}\) of replicas of the experimental measurements are generated such that they represent a sampling of the probability distribution in the space of data. An independent fit is then performed for each of these replicas, and the resulting ensemble of nPDF samples corresponds to a representation of the probability distribution in the space of nPDFs, from which any statistical estimator such as central values, variances, correlations, and higher moments can be computed [4].
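The replica-generation step can be sketched as follows. This is a simplified Gaussian sampling from the experimental covariance matrix; the actual NNPDF procedure treats normalization uncertainties with more care, and all inputs below are toy values:

```python
import numpy as np

def generate_replicas(data, cov, n_rep, seed=0):
    """Sketch of the Monte Carlo replica method: sample n_rep pseudodata
    replicas from a multivariate Gaussian with mean given by the measured
    central values and spread given by the experimental covariance."""
    rng = np.random.default_rng(seed)
    return rng.multivariate_normal(np.asarray(data, float), cov, size=n_rep)

# toy dataset of two correlated measurements
data = [0.95, 1.02]
cov = [[0.010, 0.002],
       [0.002, 0.020]]
reps = generate_replicas(data, cov, n_rep=5000)
# one independent fit per replica; ensemble statistics of the fitted
# nPDFs then give central values, variances, and correlations
```

The number of replicas controls the statistical accuracy of the ensemble estimators, with a few hundred to a few thousand replicas being typical in NNPDF-style fits.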
In order to illustrate the novel ingredients of the present study as compared to the standard NNPDF framework, we display in Fig. 3 a schematic representation of the toolchain adopted to construct the nNNPDF1.0 sets. The items in blue correspond to components of the fitting methodology inherited from the NNPDF code, those in green represent new code modules developed specifically for this project, and those in yellow indicate external tools. As highlighted in Fig. 3, the main development is the application of TensorFlow [54], an external machine learning library that allows us access to an extensive number of tools for the efficient determination of the bestfit weights and thresholds of the neural network. The ingredients of Fig. 3 will be discussed in more detail in the following and subsequent sections.
The outline of this section is the following. We start first with a discussion of our strategy for the parameterization of the nPDFs in terms of artificial neural networks. Then we present the algorithm used for the minimization of the cost function, defined to be the \(\chi ^2\), which is based on stochastic gradient descent. We also briefly comment on the performance improvement obtained in this work as compared to previous NNPDF fits.
Nuclear PDF parameterization
As mentioned in Sect. 2, the non-perturbative distributions that enter the collinear factorization framework in lepton-nucleus scattering are the PDFs of a nucleon within an isoscalar nucleus with atomic mass number A, \(f_i(x,Q^2,A)\). While the dependence of the nPDFs on the scale \(Q^2\) is determined by the perturbative DGLAP evolution equations, the dependence on both Bjorken-x and the atomic mass number A is non-perturbative and needs to be extracted from experimental data through a global analysis. Taking into account the flavor decomposition presented in Sect. 2.4, we are required to parameterize the x and A dependence of the quark singlet \(\Sigma \), the quark octet \(T_8\), and the gluon g, as indicated by Eq. (2.25) at LO and by Eq. (2.27) at NLO and beyond.
The three distributions listed above are parameterized at the input scale \(Q_0\) by the output of a neural network NN\(_f\) multiplied by an x-dependent polynomial functional form. In previous NNPDF analyses, a different multi-layer feed-forward neural network was used for each of the parameterized PDFs, so that in this case three independent neural networks would be required:
However, in this work we use instead a single artificial neural network consisting of an input layer, one hidden layer, and an output layer. In Fig. 4 we display a schematic representation of the architecture of the feedforward neural network used in the present analysis. The input layer contains three neurons which take as input the values of the momentum fraction x, \(\ln (1/x)\), and atomic mass number A, respectively. The subsequent hidden layer contains 25 neurons, which feed into the final output layer of three neurons, corresponding to the three fitted distributions \(\Sigma , T_8\) and g. A sigmoid activation function is used for the neuron activation in the first two layers, while a linear activation is used for the output layer. This latter choice ensures that the network output will not be bounded and can take any value required to reproduce experimental data. The output from the final layer of neurons is then used to construct the full parameterization:
where \(\xi ^{(3)}_i\) represent the values of the ith neuron’s activation state in the third and final layer of the neural network.
Overall, there are a total of \(N_\mathrm{par}=178\) free parameters (weights and thresholds) in the neural network represented in Fig. 4. These are supplemented by the normalization coefficient \(B_g\) for the gluon nPDF and by the six preprocessing exponents \(\alpha _f\) and \(\beta _f\). The latter are fitted simultaneously with the network parameters, while the former is fixed by the momentum sum rule, described in more detail below. Lastly, the input scale \(Q_0\) is set to 1 GeV to maintain consistency with the settings of the baseline proton fit, chosen to be the NNPDF3.1 set with perturbative charm.
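To make the parameter counting concrete, the following is a minimal numpy sketch of the 3-25-3 feedforward network of Fig. 4 (the actual fit uses TensorFlow; the weight values below are random placeholders, not fitted values):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
# Architecture of Fig. 4: 3 inputs (x, ln(1/x), A), one hidden layer
# of 25 sigmoid neurons, 3 linear outputs (Sigma, T_8, g).
W1, b1 = rng.normal(size=(25, 3)), np.zeros(25)
W2, b2 = rng.normal(size=(3, 25)), np.zeros(3)

def nn_forward(x, A):
    inputs = np.array([x, np.log(1.0 / x), A])
    hidden = sigmoid(W1 @ inputs + b1)   # sigmoid activations
    return W2 @ hidden + b2              # linear, unbounded output

# Weights and thresholds: (3*25 + 25) + (25*3 + 3) = 178 free parameters.
n_par = W1.size + b1.size + W2.size + b2.size
print(n_par, nn_forward(0.01, 12).shape)   # → 178 (3,)
```

The linear output layer leaves the three activations \(\xi ^{(3)}_i\) unbounded, as required by the parameterization.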
Sum rules. Since the nucleon energy must be distributed among its constituents in a way that ensures energy conservation, the PDFs are required to obey the momentum sum rule given by
Note that this expression only needs to be imposed at the input scale \(Q_0\), since the properties of DGLAP evolution guarantee that it will then be satisfied for any \(Q > Q_0\). In this analysis, Eq. (3.3) is applied by setting the overall normalization of the gluon nPDF to
where the denominator of Eq. (3.4) is computed using Eq. (3.2) and setting \(B_g=1\). Since the momentum sum rule requirement must be satisfied for any value of A, the normalization factor for the gluon distribution \(B_g\) needs to be computed separately for each value of A given by the experimental data (see Table 1).
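As an illustration of how Eq. (3.4) fixes \(B_g\), the following numpy sketch normalizes a toy gluon shape against a toy quark singlet; both functional forms are invented for the example and carry no physical meaning:

```python
import numpy as np

# Toy momentum densities at Q0 (purely illustrative, not the fitted nPDFs):
# x*Sigma(x), and x*g(x) evaluated with B_g = 1.
def x_Sigma(x):
    return 0.6 * x**0.5 * (1.0 - x)**3

def x_gluon_unnorm(x):
    return x**(-0.2) * (1.0 - x)**5

def momentum(f, x):
    """Trapezoidal estimate of the momentum integral of f on the grid x."""
    y = f(x)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

x = np.linspace(1e-6, 1.0 - 1e-6, 20001)
mom_q = momentum(x_Sigma, x)
mom_g = momentum(x_gluon_unnorm, x)

# Eq. (3.4): fix the gluon normalization so the momentum sum rule holds;
# in the fit this must be recomputed separately for every nucleus A.
B_g = (1.0 - mom_q) / mom_g
print(round(mom_q + B_g * mom_g, 6))   # → 1.0 by construction
```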
In addition to the momentum sum rule, nPDFs must satisfy other sum rules such as those for the valence distributions,
as well as
given the quark flavor quantum numbers of isoscalar nucleons. These valence sum rules involve quark combinations which are not relevant for the description of neutral-current DIS structure functions, and therefore do not need to be used in the present analysis. However, they will become necessary in future updates of the nNNPDF fits in which, for instance, charged-current DIS measurements are also included.
Preprocessing. The polynomial preprocessing functions \(x^{\alpha _f}(1-x)^{\beta _f}\) in Eq. (3.2) have long been known to approximate well the general asymptotic behavior of the PDFs at small and large x [88]. Therefore, they help to increase the efficiency of parameter optimization since the neural networks have to learn smoother functions. Note that the preprocessing exponents \(\alpha _f\) and \(\beta _f\) are independent of A, implying that the entire A dependence of the input nPDFs will arise from the output of the neural networks.
In previous NNPDF analyses, the preprocessing exponents \(\alpha _f\) and \(\beta _f\) were fixed to a randomly chosen value from a range that was determined iteratively. Here instead we will fit their values for each Monte Carlo replica, so that they are treated simultaneously with the weights and thresholds of the neural network. The main advantage of this approach is that one does not need to iterate the fit to find the optimal range for the exponents, since now their bestfit values are automatically determined for each replica.
Based on basic physical requirements, as well as on empirical studies, we impose some additional constraints on the range of allowed values that the exponents \(\alpha _f\) and \(\beta _f\) can take. More specifically, we restrict the parameter values to
Concerning the large-x exponent \(\beta _f\), the lower limit in Eq. (3.7) guarantees that the nPDFs vanish in the elastic limit \(x\rightarrow 1\); the upper limit follows from the observation that it is unlikely for the nPDFs to be more strongly suppressed at large x [88]. With respect to the small-x exponent \(\alpha _f\), the upper limit follows from the nPDF integrability condition, given that for \(\alpha _f > 1\) the momentum integral Eq. (3.3) becomes divergent.
In addition to the conditions encoded in Eq. (3.7), we also set \(\beta _\Sigma = \beta _{T_8}\), namely we assume that the two quark distributions \(\Sigma \) and \(T_8\) share a similar large-x asymptotic behavior. The reason for this choice is twofold. First, we know that these two distributions are highly (anti)correlated for neutral-current nuclear DIS observables (see Eq. (2.26)). Second, the large-x behavior of these distributions is expected to be approximately the same, given that the strange distribution \(s^+\) is known to be suppressed at large x compared to the corresponding \(u^+\) and \(d^+\) distributions. In any case, it is important to emphasize that the neural network has the ability to compensate for any deviations in the shape of the preprocessing function, and therefore can in principle distinguish any differences between \(\Sigma \) and \(T_8\) in the large-x region.
To illustrate the results of fitting the small- and large-x preprocessing exponents, we display in Fig. 5 the probability distributions of the \(\alpha _f\) and \(\beta _f\) exponents computed using the \(N_\mathrm{rep}=1000\) replicas of the nNNPDF1.0 NLO set, to be discussed in Sect. 5. Here the mean value of each exponent is marked by the solid red line, and the transparent red band indicates the \(1\sigma \) deviation. Note that these exponents are restricted to vary only in the interval given by Eq. (3.7). Interestingly, the resulting distributions turn out to be quite different from exponent to exponent: for instance, \(\beta _{\Sigma }\) is Gaussian-like while \(\alpha _{\Sigma }\) is asymmetric.
The \(A=1\) limit of the nPDFs. An important physical requirement that must be satisfied by the nPDFs is that they should reproduce the x dependence of the PDFs corresponding to isoscalar free nucleons when evaluated at \(A=1\). Therefore, the following boundary condition needs to be satisfied for all values of x and \(Q^2\):
where \(f_p\) and \(f_n\) indicate the parton distributions of the free proton and neutron, respectively, which are related by isospin symmetry (assumed to be exact). As opposed to other approaches adopted in the literature, we do not implement Eq. (3.8) at the level of the nPDF parameterization, but rather impose it as a restriction of the allowed parameter space during the \(\chi ^2\) minimization, as will be discussed below. This strategy has the crucial advantage of ensuring that both the central values and the uncertainties of the free-nucleon PDFs are reproduced by the nNNPDF1.0 nuclear set in the \(A\rightarrow 1\) limit.
Minimization strategy
Having described the strategy for the nPDF parameterization in terms of neural networks, we now turn to discuss how the best-fit values of these parameters, namely the weights and thresholds of the neural network and the preprocessing exponents \(\alpha _f\) and \(\beta _f\), are determined. We also explain how we impose the \(A=1\) boundary condition, Eq. (3.8).
In this analysis, the best-fit parameters are determined from the minimization of a \(\chi ^2\) function defined as
Here, \(R_i^\mathrm{(exp)}\) and \(R_i^\mathrm{(th)}(\{ f_m\})\) stand for the experimental data and the corresponding theoretical predictions for the nuclear ratios, respectively, the latter of which depend on the nPDF fit parameters. The \(t_0\) covariance matrix \(\mathrm{cov_{t_0}}\) has been defined in Eq. (2.9), and \(N_\mathrm{dat}\) stands for the total number of data points included in the fit. Therefore, the first term above is the same as in previous NNPDF fits. Note that the first row in Eq. (3.9) could also be expressed in terms of shifts to the data or theory allowed by the correlated systematic errors [4].
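The first term of Eq. (3.9) is a standard correlated \(\chi ^2\). A minimal numpy sketch, with invented toy numbers for the nuclear ratios, predictions, and covariance (the real fit builds \(\mathrm{cov_{t_0}}\) from the experimental statistical and systematic errors of Eq. (2.9)):

```python
import numpy as np

def chi2_experimental(R_exp, R_th, cov_t0):
    """(R_exp - R_th)^T cov_t0^{-1} (R_exp - R_th), the first term of
    Eq. (3.9); correlated systematics enter through the covariance."""
    d = R_exp - R_th
    return float(d @ np.linalg.solve(cov_t0, d))

# Toy example: 3 data points with 5% uncorrelated errors plus a fully
# correlated 2% normalization uncertainty.
R_exp = np.array([0.95, 0.98, 1.02])
R_th  = np.array([0.96, 0.97, 1.00])
stat  = np.diag((0.05 * R_exp)**2)
norm  = np.outer(0.02 * R_exp, 0.02 * R_exp)   # correlated systematic
print(chi2_experimental(R_exp, R_th, stat + norm))
```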
Reproducing the proton PDF baseline. The second term in Eq. (3.9) is a new feature in nNNPDF1.0. It corresponds to a quadratic penalty that forces the fit to satisfy the boundary condition in Eq. (3.8), namely that the fitted nPDFs for \(A=1\) reproduce the PDFs of an isoscalar free nucleon constructed as the average of the proton and neutron PDFs. In order to impose this constraint in a fully consistent way, it is necessary for the proton PDF baseline to have been determined using theoretical settings and a fitting methodology that best match those of the current nPDF analysis. This requirement is satisfied by the NNPDF3.1 global analysis [71], a stateoftheart determination of the proton PDFs based on a wide range of hardscattering processes together with higherorder QCD calculations. Crucially, NNPDF3.1 shares most of the methodological choices of nNNPDF1.0 such as the use of neural networks for the PDF parameterization and of the Monte Carlo replica method for error propagation and estimation.
As can be seen from Eq. (3.9), this constraint is only imposed at the initial scale \(Q_0\). This is all that is required, since the properties of DGLAP evolution ensure that the distributions at \(Q > Q_0\) automatically satisfy the constraint as well. The \(A=1\) boundary condition is then constructed with a grid of \(N_x=60\) values of x, where 10 points are distributed logarithmically from \(x_\mathrm{min}=10^{-3}\) to \(x_\mathrm{mid}=0.1\) and 50 points are distributed linearly from \(x_\mathrm{mid}=0.1\) to \(x_\mathrm{max}=0.7\).
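The grid just described can be written down directly; whether the point \(x=0.1\), shared by the two segments, is kept twice or merged is an implementation detail not specified in the text:

```python
import numpy as np

# x-grid for the A = 1 boundary condition: 10 points log-spaced in
# [1e-3, 0.1] plus 50 points linearly spaced in [0.1, 0.7].
x_log = np.logspace(-3, -1, 10)
x_lin = np.linspace(0.1, 0.7, 50)
x_grid = np.concatenate([x_log, x_lin])
print(x_grid.size)   # → 60
```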
Note that in the low-x region the coverage of this constraint is wider than that of the available nuclear data (see Fig. 1). Since proton PDF uncertainties at small x are much smaller than in the corresponding nuclear case, as a result of including HERA structure function data, the constraint in Eq. (3.9) introduces highly nontrivial information on the shape of the nPDFs both within and beyond the experimental data region. Moreover, we have verified that the constraint can also be applied down to much smaller values of x, such as \(x_\mathrm{min}=10^{-5}\), by taking as a proton baseline one of the NNPDF3.0 sets which include LHCb charm production data [89,90,91], as will be demonstrated in Sect. 5.3.
It is important to emphasize that the boundary condition, Eq. (3.8), must be satisfied both for the PDF central values and for the corresponding uncertainties. Since proton PDFs are known to much higher precision than nPDFs, imposing this condition introduces a significant amount of new information that is ignored in most other nPDF analyses. In order to ensure that PDF uncertainties are also reproduced in Eq. (3.8), for each nNNPDF1.0 fit we randomly choose a replica from the NNPDF3.1 proton global fit to be used in Eq. (3.9). Since we perform a large number \(N_\mathrm{rep}\) of fits to estimate the uncertainties in nNNPDF1.0, randomly choosing a different proton PDF baseline for each fit guarantees that the necessary information contained in NNPDF3.1 propagates into the nPDFs. Finally, we fix the hyperparameter to \(\lambda = 100\), which, together with the choice of architecture, is found to be the optimal setting to yield \(A=1\) distributions that best describe the central values and uncertainties of NNPDF3.1.
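Schematically, the penalty term amounts to a sum of squared differences between the fitted \(A=1\) distributions and the chosen proton replica on the boundary-condition x-grid, scaled by \(\lambda \). The sketch below uses invented shapes; only the value \(\lambda =100\) and the structure of the term come from the text:

```python
import numpy as np

LAMBDA = 100.0   # hyperparameter value quoted in the text

def boundary_penalty(npdf_A1, proton_replica, lam=LAMBDA):
    """Second term of Eq. (3.9): quadratic penalty tying the A = 1 nPDFs
    to a randomly chosen NNPDF3.1 proton replica, both evaluated on the
    same x-grid. Inputs: arrays of shape (n_flavours, n_x)."""
    return lam * np.sum((npdf_A1 - proton_replica)**2)

# Illustration with made-up shapes (3 flavours, 60 x-points):
x = np.logspace(-3, np.log10(0.7), 60)
proton = np.stack([x**-0.1, 0.5 * x**0.2, x**-0.2])
fitted = proton + 0.01                       # uniform 0.01 offset
print(round(boundary_penalty(fitted, proton), 6))   # → 1.8
```

A large \(\lambda \) makes any deviation from the proton baseline at \(A=1\) prohibitively expensive in the total \(\chi ^2\).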
Optimization procedure. Having defined our \(\chi ^2\) function in Eq. (3.9), we now present our procedure to determine the best-fit values of the parameters associated with each Monte Carlo replica. This procedure begins by sampling the initial values of the fit parameters. The preprocessing exponents \(\alpha _f\) and \(\beta _f\) are sampled from a uniform prior in the ranges \(\alpha _f \in [-1,1]\) and \(\beta _f \in [1,10]\) for all fitted distributions. Note that these initial ranges are contained within the ranges of Eq. (3.7) in which the exponents are allowed to vary. Since the neural network can accommodate changes in the PDF shapes induced by the preprocessing exponents, we find that the choice of the prior range from which \(\alpha _f\) and \(\beta _f\) are initially sampled does not affect the resulting distributions. In the end, the distributions of \(\alpha _f\) and \(\beta _f\) do not exhibit flat behavior, as shown in Fig. 5.
Concerning the initial sampling of the neural network parameters, we use Xavier initialization [92], which samples from a normal distribution with zero mean and a standard deviation that depends on the specific architecture of the network. Furthermore, initial values are dropped and re-sampled if they fall outside two standard deviations. Since a sigmoid activation function is used for the first and second layers, this truncation of the sampling distribution ensures that the neuron inputs lie near the origin, where the derivative is largest, allowing for more efficient network training.
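A numpy sketch of this initialization for the input-to-hidden weights of Fig. 4. The Xavier standard deviation \(\sqrt{2/(\mathrm{fan_{in}}+\mathrm{fan_{out}})}\) is the standard Glorot-normal choice; implementing the two-standard-deviation truncation by rejection sampling is our assumption about the exact mechanism:

```python
import numpy as np

def xavier_truncated(fan_in, fan_out, rng):
    """Xavier (Glorot) initialization: normal with std sqrt(2/(fan_in+fan_out)),
    re-sampling any draw that falls outside two standard deviations."""
    std = np.sqrt(2.0 / (fan_in + fan_out))
    w = rng.normal(0.0, std, size=(fan_out, fan_in))
    bad = np.abs(w) > 2.0 * std
    while bad.any():                      # rejection-sample the outliers
        w[bad] = rng.normal(0.0, std, size=bad.sum())
        bad = np.abs(w) > 2.0 * std
    return w

rng = np.random.default_rng(1)
W1 = xavier_truncated(3, 25, rng)    # input -> hidden layer of Fig. 4
print(np.abs(W1).max() <= 2.0 * np.sqrt(2.0 / 28))   # → True
```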
As highlighted by Fig. 3, the most significant difference between the fitting methodology used in nNNPDF1.0 and that of previous NNPDF studies is the choice of the optimization algorithm for the \(\chi ^2\) minimization. In the most recent unpolarized [71] and polarized [93] proton PDF analyses based on the NNPDF methodology, an in-house Genetic Algorithm (GA) was employed for the \(\chi ^2\) minimization, while for the NNFF fits of hadron fragmentation functions [94] the related Covariance Matrix Adaptation Evolution Strategy (CMA-ES) algorithm was used (see also [95]). In both cases, the optimizers require as input only the local values of the \(\chi ^2\) function at different points in parameter space, and never use the valuable information contained in its gradients.
In the nNNPDF1.0 analysis, we utilize for the first time gradient descent with backpropagation, the most widely used training technique for neural networks (see also [43]). The main requirement for gradient descent is the ability to efficiently compute the gradients of the cost function Eq. (3.9) with respect to the fit parameters. Such gradients can in principle be computed analytically, by exploiting the fact that the relation between the structure functions and the input nPDFs at \(Q_0\) can be compactly expressed as a matrix multiplication within the APFELgrid formalism, as indicated by Eq. (2.15). One drawback of such an approach is that the calculation of the gradients needs to be repeated whenever the structure of the \(\chi ^2\) is modified. For instance, different analytical expressions for the gradients are required if uncertainties are treated as uncorrelated and added in quadrature, as opposed to the case in which systematic correlations are taken into account.
Rather than following this path, in nNNPDF1.0 we have implemented backpropagation neural network training using reverse-mode automatic differentiation in TensorFlow, a highly efficient and accurate method to automatically compute the gradients of any user-defined cost function. As a result, the use of automatic differentiation makes it significantly easier to explore optimal model settings and to extend the analysis to other types of observables in a global fit.
One of the drawbacks of the gradient descent approach, which is partially avoided by GA-type optimizers, is the risk of becoming trapped in local minima. To avoid such situations as much as possible, in nNNPDF1.0 we use the Adaptive Moment Estimation (ADAM) algorithm [96] to perform stochastic gradient descent (SGD). The basic idea is to perform the training on randomly chosen subsets of the input experimental data, which leads to more frequent parameter updates. Moreover, the ADAM algorithm significantly improves on plain SGD by adjusting the learning rate of the parameters using averaged gradient information from previous iterations. As a result, local minima are bypassed more easily during training, which not only increases the likelihood of reaching a global minimum but also significantly reduces the training time.
In this analysis, most of the ADAM hyperparameters are set to the default values of the algorithm, which have been tested on a variety of machine learning problems. These include the initial learning rate of the parameters, \(\eta = 0.001\), the exponential decay rate of the averaged squared gradients from past iterations, \(\beta _2 = 0.999\), and the smoothing parameter \(\epsilon = 10^{-8}\). However, we increase the exponential decay rate of the mean of previous gradients, \(\beta _1 = 0.9 \rightarrow 0.99\), which can be interpreted more simply as the descent momentum. This choice was observed to improve the overall performance of the minimization, exhibiting quicker exits from local minima and an increased rate of descent.
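The ADAM update with these hyperparameter choices can be written out in a few lines of numpy; this is a generic illustration of the algorithm on a toy quadratic cost, not the actual TensorFlow implementation used in the fit:

```python
import numpy as np

# Hyperparameters quoted in the text: eta = 0.001, beta_1 raised from
# its default 0.9 to 0.99, beta_2 = 0.999, epsilon = 1e-8.
ETA, BETA1, BETA2, EPS = 0.001, 0.99, 0.999, 1e-8

def adam_step(theta, grad, m, v, t):
    """One ADAM update: exponentially averaged gradients (momentum) and
    squared gradients adapt the per-parameter step size."""
    m = BETA1 * m + (1 - BETA1) * grad
    v = BETA2 * v + (1 - BETA2) * grad**2
    m_hat = m / (1 - BETA1**t)       # bias correction
    v_hat = v / (1 - BETA2**t)
    theta = theta - ETA * m_hat / (np.sqrt(v_hat) + EPS)
    return theta, m, v

# Minimize a toy quadratic cost chi2(theta) = sum(theta^2), grad = 2*theta:
theta = np.array([1.0, -2.0])
m = v = np.zeros_like(theta)
for t in range(1, 5001):
    theta, m, v = adam_step(theta, 2 * theta, m, v, t)
print(float(np.abs(theta).max()))
```

Because the step is normalized by \(\sqrt{\hat{v}}\), each parameter initially moves by roughly \(\eta \) per iteration regardless of the gradient magnitude.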
Given that our neural-network-based parameterization of the nPDFs, Eq. (3.2), can be shown to be highly redundant for the current input dataset (see also Sect. 5.3), we run the risk of fitting the statistical fluctuations in the data rather than the underlying physical law. To prevent such overfitting, we have implemented the lookback cross-validation stopping criterion presented for the first time in NNPDF fits in Ref. [7]. The basic idea of this algorithm is to separate the input dataset into disjoint training and validation sets (chosen randomly replica by replica), minimize the training \(\chi ^2\) function, \(\chi ^2_\mathrm{tr}\), and stop the training when the validation \(\chi ^2\), \(\chi ^2_\mathrm{val}\), reveals a global minimum. In this analysis, the data is partitioned 50%/50% between the two sets, except for experiments with five points or fewer, which are always included in the training set.
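A sketch of how such a partition might be implemented; the experiment names and point counts below are invented for illustration:

```python
import numpy as np

def split_experiments(experiments, rng):
    """50%/50% training/validation split, drawn anew for each replica;
    experiments with 5 points or fewer go entirely into training."""
    tr_idx, val_idx = [], []
    for name, idx in experiments.items():
        idx = np.asarray(idx)
        if len(idx) <= 5:
            tr_idx.append(idx)            # small datasets: all training
            continue
        perm = rng.permutation(idx)
        half = len(idx) // 2
        tr_idx.append(perm[:half])
        val_idx.append(perm[half:])
    return np.concatenate(tr_idx), np.concatenate(val_idx)

rng = np.random.default_rng(7)
exps = {"EXP_A": np.arange(0, 20), "EXP_B": np.arange(20, 24)}
tr, val = split_experiments(exps, rng)
print(len(tr), len(val))   # → 14 10
```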
The final fits are chosen to satisfy simultaneously,
where \(N_\mathrm{tr}\) and \(N_\mathrm{val}\) are the number of data points in the training and validation sets, respectively, and \(\chi ^2_\mathrm{penalty}\) corresponds to the second term in Eq. (3.9). Once the above conditions are reached during the \(\chi ^2\) minimization, checkpoints are saved every 100 iterations. A fit is then terminated when a smaller value of the validation \(\chi ^2\) is not obtained after \(5\times 10^4\) iterations, or when the fit has run for \(5\times 10^5\) iterations (early stopping). The former is set to allow sufficient time to escape local minima, and the latter is needed because the SGD algorithm can cause the parameters to fluctuate around the minimum indefinitely. In either case the fit is considered successful, and the parameters that minimize \(\chi ^2_\mathrm{val}\) are selected as the best-fit parameters (lookback).
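The lookback logic can be sketched as follows. The `step_fn`/`chi2_val_fn` interface and the toy trajectory are hypothetical; only the checkpointing interval, patience, and iteration cap come from the text (the demo below uses scaled-down values of all three):

```python
import numpy as np

def fit_with_lookback(step_fn, chi2_val_fn, checkpoint_every=100,
                      patience=50_000, max_iters=500_000):
    """Lookback stopping: checkpoint every `checkpoint_every` iterations,
    stop when chi2_val has not improved for `patience` iterations or after
    `max_iters`, and return the parameters at the validation minimum."""
    best_val, best_params, best_iter = np.inf, None, 0
    for it in range(1, max_iters + 1):
        params = step_fn(it)              # one minimization step
        if it % checkpoint_every == 0:
            val = chi2_val_fn(params)
            if val < best_val:
                best_val, best_params, best_iter = val, params.copy(), it
            elif it - best_iter >= patience:
                break                     # early stopping
    return best_params, best_val

# Toy trajectory whose validation chi2 has its minimum at p = 0.5:
step = lambda it: np.array([1e-4 * it])
val  = lambda p: float((p[0] - 0.5)**2 + 1.0)
best_p, best_chi2 = fit_with_lookback(step, val, checkpoint_every=10,
                                      patience=500, max_iters=20000)
print(best_p, best_chi2)   # parameters at the validation minimum
```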
While automatic differentiation with TensorFlow is the baseline approach used to construct the nNNPDF1.0 sets, during this work we have also developed an alternative C++ fitting framework that uses the Ceres Solver optimizer interfaced with analytical calculations of the gradients of the \(\chi ^2\) function. The comparison between the analytical and automatic differentiation strategies used to compute the \(\chi ^2\) gradients and to carry out the minimization of Eq. (3.9) will be presented elsewhere [97], together with a benchmarking of the performance of the two approaches.
Performance benchmarking. While a detailed and systematic comparison between the performance of the TensorFlow-based stochastic gradient descent optimization used in nNNPDF1.0 and that of the GA and CMA-ES minimizers used in previous NNPDF analyses is beyond the scope of this work, here we provide a qualitative estimate of the improvement in performance achieved with the former strategy.
In order to assess the performance of the two strategies, we have conducted two Level 0 closure tests (see Sect. 4 for more details) on the same computing machine. For the first test, a variant of the nNNPDF1.0 fit was run without the \(A=1\) constraint and with the preprocessing exponents fixed to randomly selected values within a specific range. For the second, a variant of the NNPDF3.1 DIS-only fit was run with kinematic cuts adjusted to match the value of \(N_\mathrm{dat}\) used in the first fit, and with the preprocessing exponents fixed in the same manner. To ensure conditions as similar as possible to the nNNPDF1.0-like fit, the fitting basis of NNPDF3.1 was reduced to \(\Sigma , T_8\) and g only, and the architecture of the networks was modified so that the number of fitted free parameters is the same in both cases. While these common settings are suitable for a qualitative comparison between the two optimizers, we emphasize that the results should not be taken too literally, as other aspects of the two fits differ slightly.
In Fig. 6 we show the results of this benchmark comparison between the TensorFlow-based stochastic gradient descent optimization and the Genetic Algorithm (labelled NGA) used in NNPDF3.1. Within a Level 0 closure test, we monitor the average time it takes each optimizer to reach a given \(\chi ^2_\mathrm{target}/N_\mathrm{dat}\), and plot the ratio of the average SGD time to the corresponding NGA time. For a conservative target, \(\chi ^2_\mathrm{target}/N_\mathrm{dat}=0.1\), the NGA appears to perform better than SGD. This is a well-known property of evolutionary algorithms: they rather quickly bring the parameters to the vicinity of a minimum of the cost function.
The real power of SGD becomes apparent once a more stringent \(\chi ^2_\mathrm{target}/N_\mathrm{dat}\) is set. As will be described in more detail in the following section, the figure of merit in Level 0 closure tests can be reduced arbitrarily, asymptotically reaching the \(\chi ^2 \rightarrow 0\) limit. We find that in this regime the average time for SGD can be significantly smaller than the corresponding NGA time. For \(\chi ^2_\mathrm{target}/N_\mathrm{dat}=10^{-3}\), the speed improvement is around an order of magnitude, and the trend indicates that it would continue for even more stringent targets. The benchmark comparison of Fig. 6 highlights how, with the use of SGD, it becomes possible to explore the vicinity of minima more efficiently than with the NGA, bringing a considerable speed improvement that can reach a factor of 10 or more.
Closure tests
Since a significant amount of the fitting methodology used to construct nNNPDF1.0 has been implemented for the first time in this analysis, it is important to test its performance and demonstrate its reliability using closure tests. The general idea of closure tests is to carry out fits based on pseudodata generated with a known underlying theory. In this case, an existing nPDF set is used and the fit results are compared with this “true” result using a variety of statistical estimators. In addition, the exercise is performed within a clean environment which is not affected by other possible effects that often complicate global fits, such as limitations of the theory calculations or the presence of internal or external data inconsistencies.
In this section, we briefly outline the different types of closure tests that we have performed to validate the nNNPDF1.0 fitting methodology. The nomenclature and settings for the different levels of closure tests follow Ref. [7] (see also [98] for a related discussion in the context of SMEFT analyses).
For all of the closure tests presented in this section, the fitting methodology is identical to the one used for the actual nNNPDF1.0 analysis, described in the previous section. The different closure fits then differ only in how the pseudodata is generated. The underlying distributions are chosen to be those of the nCTEQ15 analysis, from which the pseudodata is constructed by evaluating the predictions for the nuclear structure functions using the theoretical formalism of Sect. 2. Furthermore, the deuteron structure functions are constructed from the NNPDF3.1 proton PDF set, so that only the numerator is fitted in the \(F_2^A/F_2^D\) ratios. The nPDFs to be determined by the fit are then parameterized at the input scale \(Q_0 = 1.3\) GeV (rather than 1 GeV) to maintain consistency with the settings of nCTEQ15. Lastly, we do not impose the boundary condition at \(A=1\), since our aim is not to achieve the smallest possible uncertainties but rather to show that we are able to reproduce the input distributions of nCTEQ15.
Level 0
We start by presenting the results of the simplest type of closure test, Level 0 (L0), and then discuss how these are modified in the more sophisticated Level 1 (L1) and Level 2 (L2) fits. At Level 0, the pseudodata is generated from the nCTEQ15 distributions without any additional statistical fluctuations, and the uncertainties are taken to be the same as those of the experimental data. The \(\chi ^2\) is then defined as in the fits to real data, taking into account all sources of experimental uncertainty in the \(t_0\) covariance matrix Eq. (2.9). Moreover, there are no Monte Carlo replicas: the closure test is carried out \(N_\mathrm{rep}\) times for different random values of the initial fit parameters, and the variance of the \(N_\mathrm{rep}\) fits defines the PDF uncertainties at this level. By construction, there is guaranteed to exist at least one solution for the fit parameters that results in \(\chi ^2=0\), namely the one for which the fitted nPDFs coincide with nCTEQ15. A key advantage of this test is therefore its ability to assess the flexibility of the chosen input functional form and to determine whether the shapes of the underlying distributions can be reproduced.
Due to the absence of statistical fluctuations in the pseudodata, overlearning is not possible. Consequently, the fits are performed without cross-validation and early stopping, and the maximum number of iterations for which a given fit can run is a free parameter that can be chosen arbitrarily large. Although the decrease of the total \(\chi ^2\) with the number of iterations may flatten as one approaches the absolute minimum, the \(\chi ^2\) should keep decreasing asymptotically towards zero provided the optimizer is sufficiently efficient.
To demonstrate that these expectations are indeed satisfied in our case, we display in Fig. 7 the value of the total \(\chi ^2\) as a function of the number of iterations in the minimization for a specific choice of the initial fit parameters; the results for other initial conditions are qualitatively similar. In this case, the \(\chi ^2\) decreases monotonically with the number of iterations without ever saturating. Note also how the rate of decrease of the \(\chi ^2\) is high at a low number of iterations, but slows as the number of iterations increases. This behavior is expected, since close to the minimum it becomes harder to find directions in parameter space that further reduce the cost function. The final results are chosen to satisfy \(\chi ^2/N_\mathrm{dat}<0.1\) after a maximum of \(2\times 10^5\) iterations.
In Fig. 8, we show the resulting nPDFs from a Level 0 closure test. Here the \(\Sigma +T_8/4\) quark combination and the gluon are plotted as a function of x at the initial evolution scale \(Q_0=1.3\) GeV for \(A=12\) and \(A=208\). We also display the 1\(\sigma \) uncertainties computed over the \(N_\mathrm{rep}\) replicas, while for the input nCTEQ distributions only the central values are shown. Since the aim of closure tests is not to reproduce the uncertainties of the prior distributions but rather the central values used to generate the pseudodata, the nCTEQ uncertainties are not relevant here and therefore we do not show them in any of the results presented in this section.
As we can see from the comparison in Fig. 8, there is very good agreement between the central values of both the quark combination \(\Sigma +T_8/4\) and the gluon with the nCTEQ15 prior in the data region. This is especially marked for the quark distribution, which is directly constrained by the structure function measurements. Concerning the gluon nPDF, which is constrained only indirectly by the data, the agreement is somewhat better for \(A=12\) than for \(A=208\): the gluon for \(A=208\) is much less constrained by the available experimental data, so even in a perfect L0 closure test, with \(\chi ^2\rightarrow 0\), one can expect small deviations with respect to the underlying physical distributions. Nevertheless, our results agree overall with the nCTEQ15 central values and the L0 closure test is considered successful.
Level 1
We continue now to discuss the results of the Level 1 closure tests. In this case, the pseudodata is generated by adding one layer of statistical fluctuations to the nCTEQ15 predictions. These fluctuations are dictated by the corresponding experimental statistical and systematic uncertainties, and are the same that enter in the \(t_0\) covariance matrix Eq. (2.9). In other words, we take the L0 pseudodata and add random noise by Gaussian smearing each point about the corresponding experimental uncertainties. As in the L0 closure tests, the same pseudodata set is used to perform an \(N_{\mathrm {rep}}\) number of fits, each with a different initialization of the fit parameters, and the resulting variance defines the nPDF uncertainties. Due to the Gaussian smearing, however, overlearning is now possible at Level 1 and therefore crossvalidation with early stopping is required. As a result, the optimal fit parameters are expected to give instead \(\chi ^2_\mathrm{tot}/N_\mathrm{dat}\simeq 1\).
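The Gaussian smearing that turns L0 pseudodata into L1 pseudodata can be sketched as follows; the data values and covariance below are invented for the example, whereas in the fit the noise is distributed according to the experimental covariance of Eq. (2.9):

```python
import numpy as np

def level1_pseudodata(R_th, cov, rng):
    """Level 1 pseudodata: smear the Level 0 predictions R_th with one
    layer of Gaussian noise with covariance `cov` (via Cholesky)."""
    L = np.linalg.cholesky(cov)           # cov = L L^T
    return R_th + L @ rng.normal(size=len(R_th))

rng = np.random.default_rng(3)
R_th = np.array([0.95, 0.97, 1.01])       # toy L0 predictions
cov = np.diag([0.02, 0.03, 0.02])**2 + 1e-4 * np.ones((3, 3))
data_L1 = level1_pseudodata(R_th, cov, rng)
print(data_L1.shape)
```

Because all replicas of an L1 test fit the same smeared dataset, the replica-to-replica spread reflects only the different parameter initializations, not the data noise itself.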
In Fig. 9 we show a similar comparison as that of Fig. 8 but now for the L1 closure tests. While the level of agreement with the nCTEQ15 prior is similar as in the case of L0 fits, the PDF uncertainties have now increased, especially for the gluon nPDF. This increase at L1 reflects the range of possible solutions for the initial nPDFs at \(Q_0\) that lead to a similar value of \(\chi ^2/N_\mathrm{dat}\simeq 1\). Therefore, the L1 test not only allows us to verify that the input distributions are reproduced, but also that the added statistical fluctuations at the level of the generated pseudodata are reflected in the resulting uncertainties.
In Table 4 we list the average values of \(\chi ^2/N_\mathrm{dat}\) in the L1 closure test compared with the corresponding values obtained with the prior theory. As expected, we find the \(\chi ^2\) values at L1 to be close to those of the prior, both for the total dataset and for individual experiments. The agreement is particularly good for datasets with a large number of points, which carry more weight in the fit. In summary, the comparison in Fig. 9 combined with that in Table 4 demonstrates that the closure tests are also successful at L1.
Level 2
In L2 closure tests, the pseudodata generated in the L1 case is now used to produce a large \(N_\mathrm{rep}\) number of Monte Carlo replicas. A nuclear PDF fit is then performed for each replica, and lookback crossvalidation is again activated to prevent overfitting. The procedure at L2 therefore matches the one applied to real data to determine the nNNPDF1.0 set of nPDFs, so that the statistical and systematic uncertainties provided by the experimental measurements are propagated into the resulting nPDF uncertainties. By comparing the PDF uncertainties at L2 with those at L1 and L0, one can disentangle the various contributions to the total nPDF error, as we discuss in more detail below.
Given the extra layer of statistical fluctuations introduced by the Monte Carlo replica generation, at L2 the expected figure of merit for each replica is \(\chi ^2_\mathrm{tr\,(val)}/N_\mathrm{tr\,(val)}\simeq 2\), where \(N_\mathrm{tr\,(val)}\) indicates the number of data points in the training (validation) set. In Fig. 10, we show a plot similar to Fig. 7, but now for a representative replica from the Level 2 closure test.
Here the \(\chi ^2\) values of the training and validation samples are plotted separately, and the vertical dashed line indicates the stopping point, defined as the absolute minimum of \(\chi ^2_\mathrm{val}\), at which the optimal parameters are taken. Since we have \(N_\mathrm{tr} = 239\) and \(N_\mathrm{val}=212\), we find \(\chi ^2_\mathrm{tr\,(val)}/N_\mathrm{tr\,(val)}\simeq 2\) as expected.
Figure 10 clearly illustrates the importance of cross-validation stopping. For a low number of iterations, both \(\chi ^2_\mathrm{tr}\) and \(\chi ^2_\mathrm{val}\) are similar in size and decrease monotonically: this corresponds to the learning phase. Beyond a certain point, however, \(\chi ^2_\mathrm{tr}\) keeps decreasing while \(\chi ^2_\mathrm{val}\) instead begins to increase, indicating that the statistical fluctuations, rather than the underlying distributions, are being fitted. By using cross-validation, we are able to avoid overfitting and ensure that for each replica the minimization is stopped at the optimal number of iterations.
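The look-back stopping criterion amounts to taking the parameters at the absolute minimum of the validation \(\chi^2\); a schematic sketch (with an invented \(\chi^2\) history) is:

```python
def lookback_stop(chi2_val_history):
    """Look-back stopping: train for a fixed budget of iterations while
    monitoring the validation chi2, then take the parameters stored at its
    absolute minimum rather than those of the final iteration."""
    return min(range(len(chi2_val_history)), key=chi2_val_history.__getitem__)

# Typical shape: a learning phase where chi2_val decreases, followed by an
# overfitting phase where it rises again while chi2_tr keeps falling.
history = [5.0, 3.0, 2.1, 1.9, 2.0, 2.3, 2.8]
best_iter = lookback_stop(history)  # -> 3, the absolute minimum of chi2_val
```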
In Fig. 11, a comparison similar to that of Fig. 8 is shown, where now the nPDFs at the initial parameterization scale \(Q_0=1.3\) GeV obtained from the L0, L1, and L2 closure tests are displayed together. Here the nCTEQ15 prior agrees well with the central values of all the closure tests. Moreover, it is important to note that the nPDF uncertainties are smallest at Level 0 and then increase with each subsequent level. The comparison of the results for the different levels of closure tests can be interpreted following the discussion of [7].
First of all, at L0 the PDF uncertainty within the data region should go to zero as the number of iterations is increased, due to the fact that \(\chi ^2 \rightarrow 0\), as illustrated in Fig. 7. While the PDF uncertainties will generally decrease with the number of iterations, this is not necessarily true between data points (interpolation) and outside the data region (extrapolation). The latter residual uncertainty is known as the extrapolation uncertainty, and it is present even for a perfect fit for which the \(\chi ^2\) vanishes. In our fits, the extrapolation region can be assessed from Fig. 1: the regions \(x\lesssim 0.01\) and \(x\gtrsim 0.7\) are not directly constrained by any measurement in nNNPDF1.0.
In an L1 fit, the central values of the data have been fluctuated around the theoretical predictions from nCTEQ15. This means that there can now exist many functional forms for the nPDFs at \(Q_0\) that have equally good \(\chi ^2\) values. The difference between the PDF uncertainties at L0 and L1 is thus known as the functional uncertainty. Finally, at L2 one adds on top of the L1 pseudodata the Monte Carlo replica generation reflecting the statistical and systematic errors provided by the experimental measurements. This implies that the difference between L1 and L2 uncertainties can be genuinely attributed to the experimental data errors, and it is therefore known as the data uncertainty. Comparing the resulting nPDFs for the various levels of closure tests, as in Fig. 11, allows us to discriminate the relative importance of the extrapolation, functional, and data components of the total nNNPDF1.0 uncertainty band.
From the comparison of Fig. 11, we see that the extrapolation uncertainty is very small for the quarks except for \(x\lesssim 0.01\), where indeed the experimental data end. The same applies to the gluon for \(A=12\), while for \(A=208\) the extrapolation (L0) uncertainty becomes large already at \(x\lesssim 0.1\). Interestingly, there is a large increase in uncertainties when going from L0 to L1 for the gluon distribution: this highlights how functional uncertainties represent by far the dominant component of the PDF uncertainty for most of the relevant kinematic range. Lastly, the differences between L1 and L2 are quite small, and therefore the experimental data errors propagated by the MC replicas contribute little to the overall uncertainties. Nevertheless, this is an important component and must be included for a robust estimation of the nPDF uncertainties.
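The qualitative decomposition described above can be made explicit by subtracting the closure-test bands; the quadrature combination and the function below are our own illustrative assumptions for how one might tabulate the three components, not a prescription from the fit itself.

```python
import numpy as np

def decompose_uncertainty(sigma_l0, sigma_l1, sigma_l2):
    """Split the total (L2) closure-test uncertainty into the three
    components discussed in the text, assuming they add in quadrature:
      extrapolation = L0 band,
      functional    = growth of the band from L0 to L1,
      data          = growth of the band from L1 to L2."""
    extrapolation = np.asarray(sigma_l0, dtype=float)
    functional = np.sqrt(np.maximum(np.square(sigma_l1) - np.square(sigma_l0), 0.0))
    data = np.sqrt(np.maximum(np.square(sigma_l2) - np.square(sigma_l1), 0.0))
    return extrapolation, functional, data

# E.g. a gluon-like point where the L0 -> L1 jump dominates:
extrap, func, data = decompose_uncertainty(0.05, 0.20, 0.22)
```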
Results
In this section we present the main results of our analysis, namely the nNNPDF1.0 sets of nuclear parton distributions. We first assess the quality of our fit by comparing the resulting structure function ratios with experimental data. This is followed by a discussion of the main features of the nNNPDF1.0 sets, as well as a comparison with the recent EPPS16 and nCTEQ15 nPDF analyses. We also assess the stability of our results with respect to the perturbative order, using fits based on NLO and NNLO QCD calculations. Finally, we conclude the section with a few methodological validation tests, complementing the closure test studies discussed in Sect. 4. Here we show that our results are stable with respect to variations of the network architecture, and we quantify the impact of the \(A=1\) boundary condition.
Before moving forward, it is useful to illustrate qualitatively the expected outcome of a nuclear PDF analysis. In Fig. 12 we show a schematic representation of the different types of nuclear modifications that are assumed to arise in the nPDFs, \(f^{(N/A)}\), when presented as ratios to their free-nucleon counterparts \(f^{(N)}\),
\[ R_f(x,Q^2,A) = \frac{f^{(N/A)}(x,Q^2,A)}{f^{(N)}(x,Q^2)} \, . \]
The ratio \(R_f\) defined here corresponds to the nPDF equivalent of the structure function ratios, Eq. (2.5), where \(R_f\simeq 1\) signifies the absence of nuclear modifications. Moving from small to large x, a depletion known as shadowing is expected, followed by an enhancement effect known as antishadowing. For the region \(x\simeq 0.4\), there is expected to be a suppression related to the original EMC effect, while at larger x there should be a sharp enhancement as a result of Fermi motion. In presenting the results of the nNNPDF1.0 PDF sets, the discussion will focus primarily on whether the different nuclear effects shown in Fig. 12 are supported by experimental data and how the various effects compare between the quark and gluon distributions.
Comparison with experimental data
To begin, we list in Table 5 the absolute and normalized values of the \(\chi ^2\) for each of the input datasets (see Table 1) and for the total dataset. The values are given for both the NLO and NNLO fits. In total, there are \(N_\mathrm{dat}=451\) data points that survive the kinematic cuts, resulting in the overall value \(\chi ^2/N_\mathrm{dat}=0.68\), which indicates excellent agreement between the experimental data and the theory predictions. Moreover, we find that the fit quality is quite similar between the NLO and NNLO results. The fact that we obtain an overall \(\chi ^2/N_\mathrm{dat}\) below one can be attributed to the missing information on correlations between experimental systematic uncertainties, which leads to an overestimation of the total error.
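As a schematic of the figure of merit being quoted, the \(\chi^2\) per data point for a given covariance matrix can be computed as below; the toy numbers are illustrative, and the real fit uses the \(t_0\) prescription for the covariance matrix.

```python
import numpy as np

def chi2_per_point(data, theory, cov):
    """chi2/N_dat between measurements and theory predictions for a given
    experimental covariance matrix. If unknown systematic correlations are
    simply added in quadrature on the diagonal, the total error tends to be
    overestimated and chi2/N_dat can fall below one."""
    diff = data - theory
    return float(diff @ np.linalg.solve(cov, diff)) / len(data)

# Toy dataset: three points with uncorrelated uncertainties
data = np.array([0.98, 1.01, 0.95])
theory = np.array([1.00, 1.00, 0.96])
cov = np.diag([0.03**2, 0.03**2, 0.02**2])
chi2 = chi2_per_point(data, theory, cov)
```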
At the level of individual datasets, we find in most cases good agreement between the experimental measurements and the corresponding theory calculations, with many \(\chi ^2/N_\mathrm{dat} \lesssim 1\) both at NLO and at NNLO. The agreement is slightly worse for the ratios Ca/D and Pb/D from FNAL E665, as well as the Sn/D ratio from EMC, all of which have \(\chi ^2/N_\mathrm{dat} \ge 1.5\). The apparent disagreement of these datasets can be understood more clearly from the visual comparison between data and theory.
In Fig. 13 we display the structure function ratios \(F_2^A/F_2^{A'}\) measured by the EMC and NMC experiments together with the corresponding theoretical predictions from the nNNPDF1.0 NLO fit. Furthermore, in Figs. 14 and 15 we show the corresponding comparisons for the \(Q^2\)-dependent structure function ratio \(F_2^\mathrm{Sn}/F_2^\mathrm{C}\) provided by the NMC experiment, and for the data provided by the BCDMS, FNAL E665, and SLAC-E139 experiments, respectively.
In the comparisons shown in Figs. 13, 14 and 15, the central values of the experimental data points have been shifted by an amount determined by the multiplicative systematic uncertainties and their nuisance parameters, while uncorrelated uncertainties are added in quadrature to define the total error bar. We also indicate in each panel the value of \(\chi ^2/N_\mathrm{dat}\), which includes the quadratic penalty induced by shifting the data to the values displayed in the figures. The quoted \(\chi ^2\) values therefore coincide with those of Eq. (3.9) without the \(A=1\) penalty term. Lastly, the theory predictions are computed at each x and \(Q^2\) bin given by the data, and their width corresponds to the 1\(\sigma \) deviation of the observable computed with the nNNPDF1.0 NLO set with \(N_\mathrm{rep} = 200\) replicas. Note that in some panels, the theory curves (and the corresponding data points) are shifted by an arbitrary factor to improve visibility.
As expected from the \(\chi ^2\) values listed in Table 5, the experimental measurements agree well with the structure function ratios computed using the nNNPDF1.0 sets, apart from the three observables mentioned previously. For the FNAL data, the disagreement comes from datasets that contain a total of 3 data points with larger uncertainties than the other experimental measurements, and which therefore do not significantly impact the fit results.
A similar argument can be made for the Sn/D ratio from the EMC experiment, which has \(\chi ^2/N_\mathrm{dat} = 2.22\). Here the lack of agreement between theory and data can be traced to the low-x region of the structure function ratio. Such a deviation is also seen in the recent nCTEQ and EPPS analyses, and can be attributed to a possible tension with the \(Q^2\)-dependent ratio Sn/C presented in Fig. 14. While the comparison here is with carbon and not deuterium, the two nuclei are relatively close in mass number and therefore the effects in the ratio are expected to be similar. On the other hand, the data show a roughly 15–20% difference between EMC’s Sn/D and NMC’s Sn/C at \(x\sim 0.03\). Since the NMC data have smaller uncertainties than EMC, their influence on the fit is much stronger, driving the disagreement with the EMC Sn/D data at low x. Overall, the agreement with NMC data is excellent, including the \(Q^2\)-dependent Sn/C data presented in Fig. 14.
From the data versus theory comparisons, the various nuclear effects encoded in the structure function ratios can clearly be observed. At small x the structure functions exhibit shadowing, namely the depletion of \(F_2(x,Q,A)\) compared to its freenucleon counterpart (or compared to lighter nuclei). At larger x the well known EMC effect is visible, resulting in ratios below unity. Between these two regimes, one finds an enhancement of the nuclear structure functions. However, we do not observe the Fermi motion effect, which gives \(R_{F_2} > 1\) for large x and increases rapidly in the \(x\rightarrow 1\) limit. This is due simply to the kinematic \(W^2\) cut illustrated in Fig. 1, which removes much of the largex data. Note that although the three nuclear regimes are visible at the structure function level, such effects may not be reflected at the level of PDF ratios, as we will highlight in the following section.
The nNNPDF1.0 sets of nuclear PDFs
With the agreement between data and theory established, we present now the results for the NLO nPDF sets. Later, we will assess the perturbative stability of the results by comparing to the corresponding NNLO fit. Unless otherwise indicated, the results presented in this section are generated with \(N_\mathrm{rep} = 1000\) Monte Carlo replicas.
To begin, we show in Fig. 16 the nNNPDF1.0 NLO set as a function of x at the input scale \(Q_0=1\) GeV for different values of A. In this figure, the nPDF uncertainty bands are computed as 90% confidence level intervals, with the central value taken as the midpoint of the corresponding range. The confidence levels presented here follow those of previous NNPDF studies [49] and are computed in the following way. For a given x, Q, and A, we have \(N_\mathrm{rep}\) values of a particular nPDF flavor, \(f^{(k)}(x,Q,A)\). The replicas are then ordered such that
\[ f^{(1)} \le f^{(2)} \le \cdots \le f^{(N_\mathrm{rep})} \, . \]
Finally, we remove symmetrically \((100-X)\%\) of the replicas with the highest and lowest values. The resulting interval defines the \(X\%\) confidence level for the nPDF f(x, Q, A) for given values of x, Q, and A. In other words, a 90% CL interval (corresponding to a 2\(\sigma \) interval for a Gaussian distribution) is obtained by keeping the central 90% of replicas, leading to the interval
\[ \left[ f^{(0.05\,N_\mathrm{rep})},\, f^{(0.95\,N_\mathrm{rep})} \right] \, . \]
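The ordering-and-trimming procedure just described can be sketched with a few lines of numpy; percentile interpolation details aside, this is an illustration rather than the exact NNPDF implementation.

```python
import numpy as np

def cl_interval(replica_values, cl=0.90):
    """X% confidence interval from a Monte Carlo replica ensemble: order the
    replica values and symmetrically discard (100-X)% of the highest and
    lowest ones; the central value is the midpoint of the interval."""
    lo, hi = np.percentile(replica_values, [50 * (1 - cl), 50 * (1 + cl)])
    return lo, hi, 0.5 * (lo + hi)

# 101 evenly spaced "replicas" from 0 to 100: the 90% CL interval keeps
# the central 90% of them.
lo, hi, central = cl_interval(np.arange(101.0))
```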
The rationale for estimating the nPDF uncertainties as 90% CL intervals, as opposed to the standard deviation, is that the nNNPDF1.0 probability distribution turns out not to be well described by a Gaussian, in particular when ratios between different nuclei A are taken. Therefore, the variance \(\sigma ^2\) may not be the best estimator of the level of fluctuations in the distribution. While deviations from the Gaussian approximation in the proton case are moderate, there are several reasons why the nPDFs may behave differently. First of all, there is a limited amount of experimental information, especially for the gluon. Secondly, imposing the \(A=1\) boundary condition skews the A dependence of the distribution. Lastly, even if the resulting nPDFs do follow a Gaussian distribution, in general their ratios between different values of A will not. Therefore, in Fig. 16, and in the remaining figures of this analysis, the uncertainties are presented as the 90% CL intervals defined above.
We also show in Fig. 16 the results of the baseline free-nucleon PDF set, NNPDF3.1, compared to the nuclear parton distributions evaluated at \(A=1\). As can be observed, there is an excellent match between both the central values and the PDF uncertainties of nNNPDF1.0 and those of NNPDF3.1 in the region of x where the boundary condition is imposed, \(10^{-3} \le x \le 0.7\). This agreement demonstrates that the quadratic penalty in Eq. (3.9) is sufficient to achieve its intended goals. In Sect. 5.3 we will discuss the importance of implementing such a constraint, particularly for light nuclei.
From Fig. 16, we can also see that the PDF uncertainties increase as we move towards larger values of A, in particular for the gluon nPDF. Recall that the latter is only constrained indirectly from inclusive DIS data via DGLAP evolution effects. On the other hand, the quark combination \(\Sigma + T_8/4\) turns out to be reasonably well constrained for \(x\gtrsim 10^{-2}\), since this is the combination directly related to the nuclear structure function \(F_2(x,Q^2,A)\). For both the gluon and the quark nuclear distributions, the PDF uncertainties diverge in the small-x extrapolation region, the onset of which varies with A. For example, the extrapolation region for the quarks in Sn (\(A=119\)) is found to be \(x\lesssim 5 \times 10^{-3}\), while the gluon PDF uncertainties become very large already for \(x\lesssim 5 \times 10^{-2}\).
Next, we illustrate in Fig. 17 the nNNPDF1.0 PDFs normalized by the \(A=1\) distributions. Here the results for He (\(A=4\)), Cu (\(A=64\)), and Pb (\(A=208\)) nuclei are shown at \(Q^2 = 10\) GeV\(^2\). With this comparison, we can assess whether the different nuclear effects introduced previously are visible at the nPDF level, since Eq. (3.2) is analogous to the structure function ratios displayed in Figs. 13, 14 and 15.
When evaluating ratios of nPDFs between different values of A, it is important to account for the correlations between the numerator and the denominator. These correlations stem from the fact that the nPDFs at two values of A are related by the common underlying parameterization, Eq. (3.2), and are therefore not independent. This can be achieved by computing the ratio \(R_f\) for each of the \(N_\mathrm{rep}\) Monte Carlo replicas of the fit,
\[ R_f^{(k)}(x,Q^2,A) = \frac{f^{(N/A)(k)}(x,Q^2,A)}{f^{(N)(k)}(x,Q^2)} \, , \qquad k=1,\ldots,N_\mathrm{rep} \, , \]
and then evaluating the 90% CL interval following the procedure that leads to Eq. (5.3). Note that a rather different result from that of Eq. (5.4) would be obtained if either the correlations between different A values were ignored (that is, if the PDF uncertainties in the numerator and denominator of Eq. (5.1) were added in quadrature) or if the uncertainties associated with the \(A=1\) denominator were not considered. Also, as discussed above, the 90% CL interval for Eq. (5.4) will in general be quite different from the range defined by the 2\(\sigma \) deviation.
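A replica-by-replica sketch of this correlated ratio (illustrative numpy, with hypothetical replica arrays) makes the point explicit: when numerator and denominator fluctuate coherently, the band on the ratio shrinks accordingly.

```python
import numpy as np

def ratio_band(f_nuc, f_proton, cl=0.90):
    """Nuclear ratio R_f evaluated replica by replica, so that correlations
    between the nucleus-A numerator and the A=1 denominator (which share
    the same underlying parameterization) are retained."""
    ratios = f_nuc / f_proton                      # one ratio per replica
    lo, hi = np.percentile(ratios, [50 * (1 - cl), 50 * (1 + cl)])
    return lo, hi

# Fully correlated toy replicas: the ratio band collapses onto 0.9,
# whereas adding the two uncertainties in quadrature would not.
rng = np.random.default_rng(0)
f_proton = 1.0 + 0.1 * rng.standard_normal(1000)
f_nuc = 0.9 * f_proton                             # coherent fluctuations
lo, hi = ratio_band(f_nuc, f_proton)
```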
From Fig. 17, we can see that for the relevant quark combination \(\Sigma +T_8/4\) in \(A=64\) and \(A=208\) nuclei, it is possible to identify the same three types of nuclear effects that were present at the structure function level. In particular, the antishadowing and EMC effects are most evident, where the deviation from unity lies outside the 90% CL range. Moreover, shadowing behavior appears briefly in the region \(x\simeq 0.01\), particularly for copper nuclei, before the uncertainties grow quickly in the extrapolation region. On the other hand, the nuclear effects appear to be negligible for all x in helium nuclei within the present uncertainties.
The situation is much worse for the nuclear gluons, where the ratio \(R_f=f^{(N/A)}/f^N\) is consistent with one within the uncertainties for all values of x. This indicates that using only neutralcurrent DIS nuclear structure functions, there is limited information that one can extract about the nuclear modifications of the gluon PDF. Here we find no evidence for gluon shadowing, and the ratio \(R_f\) is consistent with one for \(x\lesssim 0.1\). The only glimpse of a nontrivial nuclear modification of the gluon nPDF is found for Cu (\(A=64\)), where between \(x\simeq 0.1\) and \(x\simeq 0.3\) there appears to be an enhancement reminiscent of the antishadowing effect.
The comparisons of Fig. 17 demonstrate that, without additional experimental input, we are rather far from being able to probe in detail the nuclear modifications of the quark and gluon PDFs, particularly for the latter. We will highlight in Sect. 6 how the present situation would be dramatically improved by an Electron-Ion Collider, allowing us to pin down the nuclear PDFs in a wider kinematic range and with much better precision.
The scale dependence of the nuclear modifications. In Fig. 18, we show a comparison similar to that of Fig. 17, but now for the \(Q^2\) dependence of the nuclear modifications in \(^{64}\)Cu. More specifically, we compare the results of nNNPDF1.0, normalized as before to the \(A=1\) distribution, for \(Q^2=2\) GeV\(^2\), 10 GeV\(^2\), and 100 GeV\(^2\). We can observe how the nPDF uncertainties are reduced as the value of \(Q^2\) is increased. This effect is particularly dramatic for the gluon in the small-x region, but is also visible for the quark distributions. This feature is a direct consequence of the structure of DGLAP evolution, where at small x and large \(Q^2\) the results become insensitive to the specific boundary condition at low scales as a result of double asymptotic scaling [99].
It is important to point out that, by the same token, the sensitivity to nuclear modifications is also reduced when going from low to high \(Q^2\) in the small-x region. Indeed, we can see from Fig. 18 that the ratios \(R_f\) move closer to one at small x as Q is increased. However, this is not the case at medium and large x, where DGLAP evolution effects are milder. Nuclear effects in this region are therefore accessible with probes at both low and high momentum transfers. The comparisons in Fig. 18 highlight that the best sensitivity to nuclear modifications in the small-x region arises from low-scale observables, while for medium- and large-x modifications there is also good sensitivity at high scales.
Comparison with EPPS16 and nCTEQ15. We now turn to compare the nNNPDF1.0 nuclear PDFs with other recent analyses. Here we restrict our comparison to the EPPS16 and nCTEQ15 fits, given that they are the only recent nPDF sets available in LHAPDF. In Fig. 19, we display the nNNPDF1.0 NLO distributions together with EPPS16 and nCTEQ15 at \(Q^2 = 10\) GeV\(^2\) for three different nuclei: \(^{12}\)C, \(^{64}\)Cu, and \(^{208}\)Pb. The three nPDF sets have all been normalized to the central value of their respective proton PDF baseline to facilitate the comparison. For the nNNPDF1.0 results, the uncertainties are computed as before but without including the correlations with the \(A=1\) distribution. Lastly, the PDF uncertainties for EPPS16 and nCTEQ15 correspond to the 90% CL ranges computed using the standard Hessian prescription.
From this comparison, there are a number of interesting similarities and differences between the three nPDF fits. First of all, the three nuclear regimes sketched in Fig. 12, namely shadowing, antishadowing, and the EMC effect, are visible in all three sets for the quark combination \(\Sigma +T_8/4\). Interestingly, in the data region the PDF uncertainties for this quark combination are similar between the different analyses. Much larger differences are found in the small-x and large-x extrapolation regions, particularly for nCTEQ15, where the uncertainties are smaller. Note that the different approaches to uncertainty estimation have noticeable physical consequences. For instance, it would appear that there is rather strong evidence for quark shadowing down to \(x\simeq 10^{-4}\) in the nCTEQ15 result, while for nNNPDF1.0 the nuclear modifications are consistent with zero within uncertainties for \(x\lesssim 10^{-2}\).
Concerning the nuclear modifications of the gluon PDF, here we can perceive large differences at the level of PDF errors, with nCTEQ15 exhibiting the smallest uncertainties and nNNPDF1.0 the largest. While nCTEQ15 indicates some evidence of small-x gluon shadowing, this evidence is absent from both nNNPDF1.0 and EPPS16. Moreover, all three sets find some preference for a mild enhancement of the gluon at large x, but the PDF uncertainties prevent making any definite statement. Overall, the various analyses agree well within the large uncertainties for \(x\gtrsim 0.3\).
While it is beyond the scope of this paper to pin down the origin of the differences between the three nPDF analyses, one known reason is the choice of nPDF parameterization together with the method of imposing the \(A\rightarrow 1\) boundary condition. Recall that in nNNPDF1.0 we adopt a model-independent parameterization based on neural networks, Eq. (3.2), with the boundary condition imposed at the optimization level in Eq. (3.9). In the EPPS16 analysis, the bound nucleus PDFs are instead defined relative to a free-nucleon baseline (CT14) as
\[ f_i^{(N/A)}(x,Q_0^2) = R_i^A(x,Q_0^2)\, f_i^{\mathrm{CT14}}(x,Q_0^2) \, , \]
where the nuclear modification factors are parameterized at the input evolution scale \(R_i^A(x,Q_0^2)\) with piecewise polynomials that hardwire some of the theoretical expectations shown in Fig. 12. In this approach, the information contained in PDF uncertainties of the free nucleon baseline is not exploited to constrain the nPDFs.
In the nCTEQ15 analysis, the nuclear PDFs are parameterized by a polynomial functional form given by
\[ x\,f_i^{(N/A)}(x,Q_0) = c_0\, x^{c_1} (1-x)^{c_2}\, e^{c_3 x} \left( 1 + e^{c_4} x \right)^{c_5} \, , \]
where the coefficients \(c_k(A)\) encode all the A dependence. During the fit, these coefficients are constrained in such a way that for \(A=1\) they reproduce the central value of the CTEQ6.1-like fit of Ref. [100]. Note that in the nCTEQ15 fit the baseline proton set does not include the experimental measurements that have become available in the last decade, in particular the information provided by the high-precision LHC data and the HERA combined structure functions. Moreover, as in the case of EPPS16, the information about the PDF uncertainties in the free-nucleon case is not exploited to constrain the nPDF errors.
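As a rough illustration of this style of parameterization (not the actual nCTEQ15 code, and with made-up coefficient values), a CTEQ-like input distribution with coefficients that would carry the A dependence could be written as:

```python
import numpy as np

def xf_cteq_like(x, c):
    """CTEQ-style input parameterization at the scale Q0:
    x*f(x) = c0 * x^c1 * (1-x)^c2 * exp(c3*x) * (1 + exp(c4)*x)^c5.
    In a nuclear fit, each coefficient c_k would itself be a function of A."""
    c0, c1, c2, c3, c4, c5 = c
    return c0 * x**c1 * (1 - x)**c2 * np.exp(c3 * x) * (1 + np.exp(c4) * x)**c5

x = np.linspace(1e-3, 0.9, 5)
toy_coeffs = (1.0, 0.5, 3.0, 0.0, 0.0, 0.0)   # illustrative values only
vals = xf_cteq_like(x, toy_coeffs)
```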
While these methodological choices are likely to explain the bulk of the differences between the three analyses, a more detailed assessment could only be obtained following a careful benchmarking exercise along the lines of those performed for the proton PDFs [101,102,103,104].
Perturbative stability. To conclude the discussion of the main properties of the nNNPDF1.0 fits, in Fig. 20 we compare the NLO and NNLO nuclear ratios \(R_f\) for the same three nuclei as in Fig. 19. The ratios are constructed using the \(A=1\) distributions from the PDF set of the corresponding perturbative order, with \(N_\mathrm{rep}=200\) replicas. In terms of central values, we can see that the NLO and NNLO fit results are consistent within the 90% CL uncertainty band. The regions where the differences between the two perturbative orders are largest turn out to be the small- and large-x extrapolation regions, in particular as A is increased.
Another difference between the NLO and NNLO nNNPDF1.0 fits concerns the size of the PDF uncertainty band. We find that for the gluon nPDF, the NNLO fit leads to a slight decrease in uncertainties, perhaps due to the improved overall fit consistency when higher-order theoretical calculations are used. This effect is more pronounced for the gluon distribution of \(A=64\) and \(A=208\) nuclei, while it is mostly absent for \(A=12\). The apparent reduction of uncertainties, together with marginally better \(\chi ^2\) values (see Table 5), suggests that the NNLO fit is only slightly preferred over the NLO one. That said, the difference is unlikely to have significant phenomenological implications given the current level of uncertainties.
Methodological studies
We conclude the discussion of the nNNPDF1.0 results by presenting some further studies that demonstrate the robustness of our fitting methodology, complementing those based on the closure tests discussed in Sect. 4. In particular, in the following we discuss the stability of our results with respect to variations of the neural network architecture and the role of the \(A=1\) boundary condition in constraining the nPDF uncertainties. For all results presented in this section, we use \(N_\mathrm{rep} = 200\) Monte Carlo replicas.
Stability with respect to the network architecture. As explained in Sect. 3.1, the nNNPDF1.0 fits are based on a single neural network with the 3–25–3 architecture represented in Fig. 4. This architecture is characterized by \(N_\mathrm{par}=178\) free parameters, without counting the preprocessing exponents. We have verified that this choice of network architecture is redundant given our input dataset, namely that the nNNPDF1.0 results are stable if neurons are either added to or removed from the hidden layer of the network. To illustrate this redundancy, here we compare fit results obtained with the standard 3–25–3 architecture to those obtained with twice as many neurons in the hidden layer, 3–50–3. The latter configuration is characterized by \(N_\mathrm{par}=353\) free parameters, roughly a factor of two more than in the baseline fits.
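The parameter counts quoted above follow from the usual weights-plus-biases bookkeeping of a fully connected network, which can be checked in a couple of lines:

```python
def n_params(layer_sizes):
    """Weights + biases of a fully connected feed-forward network:
    each layer with n_out neurons fed by n_in inputs carries
    (n_in + 1) * n_out free parameters."""
    return sum((n_in + 1) * n_out
               for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

assert n_params([3, 25, 3]) == 178   # baseline nNNPDF1.0 architecture
assert n_params([3, 50, 3]) == 353   # doubled hidden layer
```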
In Fig. 21 the nNNPDF1.0 results at the input scale \(Q_0=1\) GeV for \(^{12}\)C and \(^{208}\)Pb nuclei are shown with the two different architectures, 3–25–3 (baseline) and 3–50–3. We find that differences are very small and consistent with statistical fluctuations. Given that now there are twice as many free parameters as in the baseline settings, this stability demonstrates that our results are driven by the input experimental data and not by methodological choices such as the specific network architecture. Furthermore, we have also verified that the outcome of the fits is similarly unchanged if a network architecture with a comparable number of parameters but two hidden layers is used.
The role of the \(A=1\) boundary condition. Imposing the \(A=1\) boundary condition, Eq. (3.8), leads to important constraints on both the central values and the uncertainties of the nNNPDF1.0 fit, particularly for low values of the atomic mass number A. Here we quantify this impact by comparing the baseline nNNPDF1.0 results with those of a corresponding fit where this boundary condition is not imposed. This can be achieved by performing the fits with the hyperparameter \(\lambda =0\) in Eq. (3.9). Note that in this case the behavior of the fitted nPDFs at \(A=1\) is unconstrained, since only experimental data with \(A \ge 2\) are included in the fit.
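Schematically, the role of the hyperparameter \(\lambda\) in Eq. (3.9) can be sketched as below; the function and variable names are illustrative, not the actual nNNPDF optimization code.

```python
import numpy as np

def total_loss(chi2_data, f_at_a1, f_proton, sigma_proton, lam):
    """Figure of merit with the A=1 boundary-condition term: the data chi2
    plus a quadratic penalty that pushes the fitted A=1 distributions onto
    the proton PDF baseline over a grid of x points. Setting lam = 0
    switches the boundary condition off entirely."""
    penalty = np.sum(((f_at_a1 - f_proton) / sigma_proton) ** 2)
    return chi2_data + lam * penalty

# Toy check: with lam = 0 the penalty drops out, however far the A=1
# distribution drifts from the proton baseline.
f_a1 = np.array([1.2, 0.8])
f_p = np.array([1.0, 1.0])
sig = np.array([0.05, 0.05])
```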
In Fig. 22, we compare the nNNPDF1.0 baseline, which imposes NNPDF3.1 as the \(A=1\) boundary condition between \(x=10^{-3}\) and \(x=0.7\), with a variant fit where this boundary condition is not implemented. We display the gluon and the \(\Sigma +T_8/4\) quark combination at \(Q^2=2\) GeV\(^2\) for \(A=4, 12,\) and 64. This comparison demonstrates the significant impact on nNNPDF1.0 of the \(A=1\) constraint, especially for helium and carbon nuclei, where the PDF uncertainties increase dramatically if no boundary condition is used. The impact is more pronounced for the gluon, where even for relatively heavy nuclei such as \(^{64}\)Cu the boundary condition leads to a reduction of the nPDF uncertainties by up to a factor of two. We can thus conclude that consistently imposing the \(A=1\) limit using a state-of-the-art proton fit is an essential ingredient of any nPDF analysis.
While the baseline nNNPDF1.0 fits only constrain the \(A=1\) distribution between \(x=10^{-3}\) and \(x=0.7\), one could in principle extend the coverage of the boundary condition down to smaller values of x, provided a reliable proton PDF baseline is used. Indeed, it is possible to impose the constraint down to much smaller values of x, e.g. \(x=10^{-5}\). For this purpose, we perform a fit using instead as the boundary condition the NNPDF3.0+LHCb NLO sets constructed in Refs. [89, 90]. More specifically, we use the set based on the \(N_5\), \(N_7\), and \(N_{13}\) normalized distributions of D meson production in the forward region at 5, 7, and 13 TeV. The reason is that these sets exhibit reduced quark and gluon PDF uncertainties down to \(x\simeq 10^{-6}\), and are therefore suitable to constrain the small-x nPDF uncertainties.
The comparison between the baseline nNNPDF1.0 fit and its LHCb variant is shown in Fig. 22. We now find a further reduction of the nPDF uncertainties at small x, again more notably for light nuclei. In this case, the reduction of uncertainties is more pronounced for the quarks, which benefit from the very accurate determination of the proton’s quark sea at small x in NNPDF3.0+LHCb. Note that, in turn, the improved nPDF errors at small x might lead to increased sensitivity to effects such as shadowing and to evidence for nonlinear evolution corrections.
Nuclear PDFs at the Electron-Ion Collider
As illustrated by Fig. 1, the kinematic reach in x of the available lepton-nucleus deep-inelastic scattering data is rather limited. As a consequence, nPDF analyses based on these measurements will exhibit large uncertainties for \(x\lesssim 0.01\), as was shown in Sect. 5. However, the coverage at small x and large \(Q^2\) can be improved with measurements in proton-ion scattering, in particular from the p+Pb collisions provided by the LHC. There, small-x gluon shadowing can be studied with D meson production [105] and direct photon production [106, 107] in the forward region. Furthermore, quark-flavor separation at \(Q\simeq M_W\) can be disentangled using the rapidity distributions in W and Z production [31]. On the other hand, access to these extended kinematic regions is desirable also in lepton-nucleus scattering, since leptons represent significantly cleaner probes in scattering processes, as was extensively demonstrated by the HERA collider [108].
Such a machine would be realized by an Electron-Ion Collider (EIC) [109, 110], currently under active discussion in the U.S. The EIC would collide electrons with protons and nuclei using a range of different beam energy configurations and nucleon species, pinning down nuclear modifications of the free-nucleon PDFs down to \(x\simeq 5\times 10^{-4}\). Such a machine would therefore significantly improve our understanding of the strong interaction in the nuclear medium, in a similar way as it would for the spin structure of the nucleons [111,112,113]. Another option under discussion is the Large Hadron electron Collider (LHeC) [114], which would operate concurrently with the High-Luminosity LHC and would further extend the low-x coverage of lepton-nucleus reactions down to \(x\simeq 10^{-6}\). Both options for future high-energy lepton-nucleus colliders have demonstrated their potential to constrain the nuclear PDFs [13, 115, 116].
In this section, we quantify the constraints that future EIC measurements of inclusive nuclear structure functions would impose on the nNNPDF1.0 nuclear PDFs. To achieve this, we generate EIC pseudodata following Ref. [13], where it was subsequently interpreted in the framework of the EPPS16 analysis of nPDFs. The projections used here are constructed instead with the central value of the nNNPDF1.0 NLO set for different scenarios of the lepton and nucleon beam energies, which are then added to the input data of this analysis listed in Table 1.
The simulated EIC pseudodata from Ref. [13] is available for both carbon (\(A=12\) and \(Z=6\)) and gold (\(A=197\) and \(Z=79\)) nuclei. We assume that the latter would be corrected for non-isoscalarity effects, and therefore treat gold as an isoscalar nucleus with \(A=197\) (see also Sect. 2). The nuclear structure functions for carbon and gold nuclei are then normalized by the deuteron structure functions following Eq. (2.5).
The different scenarios for the lepton and nucleon beam energies of the EIC pseudodata considered here are listed in Table 6. As in Ref. [13], we denote as the “low energy” EIC scenario the one consisting of measurements taken with electron beams of energy \(E_e = 5\) GeV, whereas the “high energy” EIC scenario corresponds to measurements taken with \(E_e = 20\) GeV electrons. We also indicate in Table 6 the atomic mass number A, the nucleus energy per nucleon \(E_A/A\), the maximum and minimum values of \(Q^2\) and x of the pseudodata, respectively, and the number of pseudodata points \(N_\mathrm{dat}\). Here we restrict ourselves again to the inclusive structure functions and do not include EIC pseudodata on charm structure functions.
In Fig. 23, we display the kinematic coverage of the EIC pseudodata compared to the existing lepton-nucleus scattering measurements. Here we can see that the EIC would significantly extend the sensitivity to nPDFs both in the small-x and in the large-\(Q^2\) regions. This is particularly marked for the higher energy scenario with \(E_e=20\) GeV and \(E_A/A=100\) GeV, where the kinematic coverage at small x in the perturbative region \(Q \gtrsim 1\) GeV would be extended by a factor of 20.
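The extension of the reach in x can be estimated from the beam energies alone. A minimal sketch, using the standard DIS relation \(x_{\rm min}\simeq Q^2_{\rm min}/(y_{\rm max}\,s)\) with \(s\simeq 4E_eE_N\); the cuts \(Q^2_{\rm min}=1\) GeV\(^2\) and \(y_{\rm max}=0.95\) are illustrative assumptions, not values from Table 6:

```python
# Estimate the smallest accessible x for the two EIC scenarios quoted in the
# text. Q2_min and y_max below are illustrative assumptions of this sketch.

def x_min(E_e, E_N, Q2_min=1.0, y_max=0.95):
    """Smallest accessible x at fixed Q2_min; s ~ 4 E_e E_N in GeV^2."""
    s = 4.0 * E_e * E_N
    return Q2_min / (y_max * s)

low = x_min(E_e=5.0, E_N=100.0)    # "low energy" scenario, E_e = 5 GeV
high = x_min(E_e=20.0, E_N=100.0)  # "high energy" scenario, E_e = 20 GeV

print(f"x_min (low energy):  {low:.1e}")
print(f"x_min (high energy): {high:.1e}")
```

The low-energy scenario gives \(x_{\rm min}\sim 5\times 10^{-4}\), consistent with the reach quoted in the text, and the high-energy scenario improves it by the ratio of electron beam energies.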
In this exercise, we assume that the “true” central value of the EIC pseudodata is the central value of the nNNPDF1.0 NLO fit, which is then fluctuated according to the corresponding experimental uncertainties. Given that we also use NLO QCD theory for the fits including EIC data, by construction the resulting fits are expected to have \(\chi ^2_\mathrm{EIC}/N_\mathrm{dat}\simeq 1\). Concerning the projected experimental uncertainties for the EIC pseudodata, we assume the same total relative uncertainty as in Ref. [115], which is taken to be uncorrelated among different bins. Moreover, each of the scenarios listed in Table 6 is assumed to have a \(\delta _{\mathcal {L}}=1.98\%\) normalization uncertainty, not included in the total uncertainty mentioned above. This normalization error is taken to be fully correlated within each scenario but uncorrelated among the different scenarios. Note that the different nucleus energies in Table 6 are statistically independent, so they can be added simultaneously to the same nPDF fit without any double counting.
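The fluctuation procedure just described can be sketched as follows. The theory values and the point-by-point uncertainty below are placeholders; only the structure (uncorrelated per-point shifts plus one \(\delta_{\mathcal L}=1.98\%\) normalization shift shared by a whole scenario) follows the text:

```python
# Sketch of the pseudodata generation: each point is fluctuated by its total
# relative uncertainty (uncorrelated), then the whole scenario is rescaled by
# a single correlated normalization shift of size delta_L = 1.98%.
import numpy as np

rng = np.random.default_rng(42)

def make_pseudodata(theory, rel_unc, delta_L=0.0198, rng=rng):
    """theory: central predictions; rel_unc: total relative uncertainty per point."""
    r_point = rng.standard_normal(len(theory))  # uncorrelated, one per point
    r_norm = rng.standard_normal()              # one shift per scenario
    return theory * (1.0 + rel_unc * r_point) * (1.0 + delta_L * r_norm)

theory = np.array([0.95, 0.90, 0.88, 0.92])   # placeholder F2(A)/F2(D) values
rel_unc = np.full_like(theory, 0.02)          # placeholder 2% uncertainties
pseudo = make_pseudodata(theory, rel_unc)
print(pseudo)
```

Because the pseudodata fluctuates around predictions obtained with the same NLO theory used in the fit, \(\chi^2_\mathrm{EIC}/N_\mathrm{dat}\simeq 1\) follows by construction.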
The results of the fits are given in Fig. 24, where a comparison between the nNNPDF1.0 NLO fit and the corresponding fit including the different EIC pseudodataset scenarios is shown. We show results at \(Q^2=10\) GeV\(^2\) for \(^{12}\)C and \(^{197}\)Au, the two nuclei for which pseudodata is available. Since by construction the central values of the nNNPDF1.0 and nNNPDF1.0+EIC fits will coincide, the most pertinent information is the relative reduction of the nPDF uncertainties. In all cases, we have verified that \(\chi ^2_\mathrm{EIC}/N_\mathrm{dat} \simeq 1\) is obtained.
From the comparisons in Fig. 24, one finds that the EIC measurements would lead to a significant reduction in the PDF uncertainties both for the gluon and for the \(\Sigma +T_8/4\) quark combination. The effect is especially visible for gold (\(A=197\)), given that the constraint from the proton boundary condition is much weaker there than for lighter nuclei such as carbon (\(A=12\)). Here, the improvement can be up to an order of magnitude at \(x\simeq 10^{-3}\) as compared to the current situation. It is therefore clear that such an improvement would allow the EIC to carefully study important dynamics such as quark and gluon shadowing, in addition to the possible onset of saturation effects, down to \(x\simeq 5\times 10^{-4}\).
From Fig. 24 we can also observe that the “high energy” scenario would constrain the nPDFs down to smaller values of x better than the “low energy” one, again more notably for heavier nuclei such as gold. For instance, the uncertainty on the gluon distribution in the region \(x\simeq 5 \times 10^{-4}\) would be around three times larger in the lower energy case than in the higher energy scenario. Given that saturation effects are expected to scale as \(A^{1/3}\), these results demonstrate that the “high energy” scenario would provide a rather sharper probe of small-x QCD dynamics than the lower energy option.
Summary and outlook
In this work, we have presented a first determination of the parton distribution functions of nucleons bound within nuclei using the NNPDF methodology. Using as experimental input all available measurements of neutral-current deep-inelastic nuclear structure functions, we have determined the nuclear gluon g, the quark singlet \(\Sigma \), and the quark octet \(T_8\) for a range of atomic mass numbers from \(A=2\) to \(A=208\). We find an excellent overall agreement with the fitted experimental data, with stable results with respect to the order of the perturbative QCD calculations. While the quark distributions are reasonably well constrained for \(x\gtrsim 10^{-2}\), the nuclear gluon PDFs are affected by large uncertainties, in particular for heavy nuclei.
From the methodological point of view, the main improvement with respect to previous NNPDF fits has been the implementation of TensorFlow to perform stochastic gradient descent with reverse-mode automatic differentiation. The application of SGD for the \(\chi ^2\) minimization has led to a marked performance improvement as compared to the evolutionary-type algorithms used so far in NNPDF. Two other related developments in this study have been the use of a single neural network to parameterize the nPDFs rather than multiple networks, and the fitting of the preprocessing exponents rather than their determination from an iterative procedure.
As opposed to other nPDF analyses, the nNNPDF1.0 set is determined with the boundary condition imposed at the minimization level so that the baseline proton PDFs (NNPDF3.1) are reproduced both in terms of their central values and, more importantly, their uncertainties. Moreover, we have applied this constraint in a fully consistent way, since the proton PDF baseline has been determined using the same fitting methodology and theoretical settings. We have shown that this \(A=1\) constraint results in a significant reduction of the nPDF uncertainties, especially for low-A nuclei, and therefore represents a vital ingredient of any nPDF analysis.
By using nNNPDF1.0 as a baseline, we have also quantified the impact of future e+A measurements from an Electron-Ion Collider by exploiting the projections generated in Ref. [13]. We have demonstrated that EIC measurements of inclusive nuclear structure functions would constrain the quark and gluon nuclear PDFs down to \(x\simeq 5\times 10^{-4}\), opening a new window to study the nuclear modification of the proton substructure in the small-x region. With future EIC measurements, it will therefore be possible to construct a reliable nPDF set based on collinear factorization that can identify and isolate the onset of novel QCD regimes such as nonlinear evolution effects or small-x resummation.
The main limitations of the present work are the lack of a reliable separation between the quark flavors, which is not possible from neutral-current DIS measurements alone, as well as the large uncertainties that affect the nuclear gluon PDFs. This implies that the possible phenomenological applications of nNNPDF1.0 are restricted to processes that do not require a complete quark flavor separation, such as the analysis of EIC structure functions in Sect. 6, or D meson production in p+Pb collisions [105]. To bypass these limitations, we plan to extend the present nPDF analysis to a global dataset including neutrino-induced charged-current deep-inelastic structure functions, as well as inclusive jet and dijet, photon, electroweak boson, and heavy quark production in proton-ion collisions at RHIC and the LHC.
The results of this work, namely the nNNPDF1.0 NLO and NNLO sets for different values of A, are available via the LHAPDF library [117] and are also linked from the NNPDF website.
These LHAPDF grid files contain \(N_\mathrm{rep}=250\) replicas each, which are fully correlated between different values of A, as discussed in Sect. 5.
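Because the replicas are correlated across A, uncertainties on ratios of nPDFs for two nuclei should be propagated replica by replica, so that the correlated component cancels. A toy illustration with synthetic replicas (the fluctuation sizes are invented; in practice the replica values would be read from the LHAPDF members):

```python
# Replica-by-replica propagation for a ratio of nPDFs of two nuclei.
# The toy replicas share a common (correlated) fluctuation plus small noise,
# mimicking the correlation of the nNNPDF1.0 replicas across A.
import numpy as np

rng = np.random.default_rng(0)
n_rep = 250

common = rng.standard_normal(n_rep)                      # correlated across A
g_A197 = 1.0 + 0.10 * common + 0.02 * rng.standard_normal(n_rep)
g_A12 = 1.0 + 0.10 * common + 0.02 * rng.standard_normal(n_rep)

ratio = g_A197 / g_A12          # build the ratio replica by replica
print(f"ratio = {ratio.mean():.3f} +- {ratio.std():.3f}")

# Naive quadrature of the two standard deviations ignores the correlation
# and badly overestimates the ratio uncertainty.
naive = np.hypot(g_A197.std(), g_A12.std())
print(f"naive uncorrelated error: {naive:.3f}")
```

The replica-level uncertainty of the ratio is driven only by the uncorrelated noise, while the naive quadrature estimate is inflated by the common fluctuation.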
Moreover, due to the lack of a complete quark flavour separation, additional assumptions might be required when the nNNPDF1.0 sets are used, in particular for phenomenological applications in heavy-ion collisions. To comply with the LHAPDF format, we have assumed that \(u=d\) and that \(\bar{u}=\bar{d}=\bar{s}=s\), namely a symmetric quark sea. With this convention, the only meaningfully constrained quark combinations can be reconstructed from the flavour-basis PDFs by means of \(\Sigma =2\,u+4\,\bar{u}\) and \(T_8 = 2(u-\bar{u})\).
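The reconstruction above can be checked explicitly. A minimal sketch with standalone placeholder numbers (in practice the flavour-basis values would come from the LHAPDF grids):

```python
# Verify that, under the symmetric-sea convention u = d and
# ubar = dbar = sbar = s, the fitted combinations reduce to
# Sigma = 2u + 4ubar and T8 = 2(u - ubar).

def flavour_basis(u, ubar):
    """Flavour-basis PDFs implied by the nNNPDF1.0 LHAPDF convention."""
    return {"u": u, "d": u, "ubar": ubar, "dbar": ubar, "s": ubar, "sbar": ubar}

def sigma_t8(pdfs):
    """Reconstruct the quark singlet and octet from the flavour basis."""
    sigma = (pdfs["u"] + pdfs["d"] + pdfs["s"]
             + pdfs["ubar"] + pdfs["dbar"] + pdfs["sbar"])
    t8 = (pdfs["u"] + pdfs["ubar"] + pdfs["d"] + pdfs["dbar"]
          - 2.0 * (pdfs["s"] + pdfs["sbar"]))
    return sigma, t8

pdfs = flavour_basis(u=0.5, ubar=0.1)        # placeholder values
sigma, t8 = sigma_t8(pdfs)
assert abs(sigma - (2 * 0.5 + 4 * 0.1)) < 1e-12   # Sigma = 2u + 4ubar
assert abs(t8 - 2 * (0.5 - 0.1)) < 1e-12          # T8 = 2(u - ubar)
```

Any combination beyond \(\Sigma\) and \(T_8\) built from these grids carries no independent information, since it follows from the convention rather than from the fit.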
Data Availability Statement
This manuscript has associated data in a data repository. [Authors’ comment: All the results produced in this paper are publicly available via the LHAPDF repository.]
Notes
See [87] for an overview of recent efforts in the first-principles calculation of PDFs by means of lattice QCD.
References
European Muon Collaboration, J.J. Aubert et al., The ratio of the nucleon structure functions \(F_2^N\) for iron and deuterium. Phys. Lett. 123B, 275–278 (1983)
P. Zurita, Recent progress in nuclear parton distributions, in 13th Conference on the Intersections of Particle and Nuclear Physics (CIPANP 2018), Palm Springs, California, USA, May 29–June 3, 2018 (2018). arXiv:1810.00099
H. Paukkunen, Nuclear PDFs today, in 9th International Conference on Hard and Electromagnetic Probes of High-Energy Nuclear Collisions: Hard Probes 2018 (HP2018), Aix-Les-Bains, Savoie, France, October 1–5, 2018 (2018). arXiv:1811.01976
J. Gao, L. Harland-Lang, J. Rojo, The structure of the proton in the LHC precision era. Phys. Rep. 742, 1–121 (2018). arXiv:1709.04922
N. Armesto, N. Borghini, S. Jeon, U.A. Wiedemann, S. Abreu, V. Akkelin, J. Alam, J.L. Albacete, A. Andronic, D. Antonov et al., Heavy ion collisions at the LHC—last call for predictions. J. Phys. G35, 054001 (2008). arXiv:0711.0974
STAR Collaboration, J. Adams et al., Experimental and theoretical challenges in the search for the quark gluon plasma: the STAR Collaboration’s critical assessment of the evidence from RHIC collisions. Nucl. Phys. A 757, 102–183 (2005). arXiv:nucl-ex/0501009
NNPDF Collaboration, R.D. Ball et al., Parton distributions for the LHC Run II. JHEP 04, 040 (2015). arXiv:1410.8849
S. Dulat, T.J. Hou, J. Gao, M. Guzzi, J. Huston, P. Nadolsky, J. Pumplin, C. Schmidt, D. Stump, C.P. Yuan, New parton distribution functions from a global analysis of quantum chromodynamics. Phys. Rev. D 93(3), 033006 (2016). arXiv:1506.07443
L.A. Harland-Lang, A.D. Martin, P. Motylinski, R.S. Thorne, Parton distributions in the LHC era: MMHT 2014 PDFs. Eur. Phys. J. C 75, 204 (2015). arXiv:1412.3989
S. Alekhin, J. Blümlein, S. Moch, R. Placakyte, Parton distribution functions, \(\alpha _s\), and heavy-quark masses for LHC run II. Phys. Rev. D 96(1), 014011 (2017). arXiv:1701.05838
NNPDF Collaboration, R.D. Ball, E.R. Nocera, R.L. Pearson, Nuclear uncertainties in the determination of proton PDFs. arXiv:1812.09074
The NNPDF Collaboration, R.D. Ball et al., Precision determination of electroweak parameters and the strange content of the proton from neutrino deepinelastic scattering. Nucl. Phys. B823, 195–233 (2009). arXiv:0906.1958
E.C. Aschenauer, S. Fazio, M.A.C. Lamont, H. Paukkunen, P. Zurita, Nuclear structure functions at a future electronion collider. Phys. Rev. D 96(11), 114005 (2017). arXiv:1708.05654
LHeC Study Group Collaboration, J. Abelleira Fernandez et al., A large hadron electron collider at CERN: report on the physics and design concepts for machine and detector. J. Phys. G39, 075001 (2012). arXiv:1206.2913
I. Helenius, H. Paukkunen, N. Armesto, nPDF constraints from the large hadron electron collider, in Proceedings, 24th International Workshop on Deep-Inelastic Scattering and Related Subjects (DIS 2016): Hamburg, Germany, April 11–25, 2016 (2016). arXiv:1606.09003
R.D. Ball, V. Bertone, M. Bonvini, S. Marzani, J. Rojo, L. Rottoli, Parton distributions with small-x resummation: evidence for BFKL dynamics in HERA data. Eur. Phys. J. C 78(4), 321 (2018). arXiv:1710.05935
K.J. Eskola, H. Paukkunen, C.A. Salgado, EPS09—a new generation of NLO and LO nuclear parton distribution functions. arXiv:0902.4154
D. de Florian, R. Sassot, Nuclear parton distributions at next-to-leading order. Phys. Rev. D 69, 074028 (2004). arXiv:hep-ph/0311227
M. Hirai, S. Kumano, T.H. Nagai, Determination of nuclear parton distribution functions and their uncertainties at next-to-leading order. Phys. Rev. C 76, 065207 (2007). arXiv:0709.3038
K. Kovarik et al., nCTEQ15—global analysis of nuclear parton distributions with uncertainties in the CTEQ framework. Phys. Rev. D 93(8), 085037 (2016). arXiv:1509.00792
D. de Florian, R. Sassot, P. Zurita, M. Stratmann, Global analysis of nuclear parton distributions. Phys. Rev. D 85, 074028 (2012). arXiv:1112.6324
H. Khanpour, S. Atashbar Tehrani, Global analysis of nuclear parton distribution functions and their uncertainties at next-to-next-to-leading order. Phys. Rev. D 93(1), 014026 (2016). arXiv:1601.00939
ALICE Collaboration, J. Adam et al., Measurement of charged jet production cross sections and nuclear modification in pPb collisions at \(\sqrt{s_{NN}} = 5.02\) TeV. Phys. Lett. B 749, 68–81 (2015). arXiv:1503.00681
ALICE Collaboration, J. Adam et al., Multiplicity dependence of charged pion, kaon, and (anti)proton production at large transverse momentum in pPb collisions at \(\sqrt{{\bf s}_{{\bf NN}}} \) = 5.02 TeV. Phys. Lett. B 760, 720–735 (2016). arXiv:1601.03658
ALICE Collaboration, J. Adam et al., Measurement of dijet \(k_T\) in pPb collisions at \(\sqrt{s_{NN}}=5.02\) TeV. Phys. Lett. B 746, 385–395 (2015). arXiv:1503.03050
ATLAS Collaboration, G. Aad et al., Transverse momentum, rapidity, and centrality dependence of inclusive chargedparticle production in \(\sqrt{s_{NN}}=5.02\) TeV p+Pb collisions measured by the ATLAS experiment. arXiv:1605.06436
CMS Collaboration, S. Chatrchyan et al., Studies of dijet transverse momentum balance and pseudorapidity distributions in pPb collisions at \(\sqrt{s_{{\rm NN}}} = 5.02\) TeV. Eur. Phys. J. C 74(7), 2951 (2014). arXiv:1401.4433
ALICE Collaboration, J. Zhu, Measurement of W-boson production in pPb collisions at \(\sqrt{s_{NN}}\) = 5.02 TeV with ALICE at the LHC. J. Phys. Conf. Ser. 612(1), 012009 (2015)
ATLAS Collaboration, G. Aad et al., Measurement of \(W\rightarrow \mu \nu \) production in \(p\)+Pb collisions at \(\sqrt{s_{{NN}}}=5.02\) TeV with the ATLAS detector at the LHC. ATLAS-CONF-2015-056
ATLAS Collaboration, G. Aad et al., \(Z\) boson production in \(p+\)Pb collisions at \(\sqrt{s_{NN}}=5.02\) TeV measured with the ATLAS detector. Phys. Rev. C 92(4), 044915 (2015). arXiv:1507.06232
CMS Collaboration, V. Khachatryan et al., Study of Z boson production in pPb collisions at \(\sqrt{s_{NN}}\) = 5.02 TeV. Phys. Lett. B 759, 36–57 (2016). arXiv:1512.06461
CMS Collaboration, V. Khachatryan et al., Study of W boson production in pPb collisions at \(\sqrt{s_{{\rm NN}}} =\) 5.02 TeV. Phys. Lett. B 750, 565–586 (2015). arXiv:1503.05825
CMS Collaboration, S. Chatrchyan et al., Charm-tagged jet production in pPb collisions at 5.02 TeV and pp collisions at 2.76 TeV. CMS-PAS-HIN-15-012
ALICE Collaboration, J. Adam et al., Measurement of Dmeson production versus multiplicity in pPb collisions at \( \sqrt{{\rm s}_{\rm NN}}=5.02 \) TeV. JHEP 08, 078 (2016). arXiv:1602.07240
ALICE Collaboration, J. Adam et al., Measurement of electrons from heavy-flavour hadron decays in pPb collisions at \(\sqrt{s_{NN}} =\) 5.02 TeV. Phys. Lett. B 754, 81–93 (2016). arXiv:1509.07491
ALICE Collaboration, B.B. Abelev et al., Measurement of prompt \(D\)meson production in \(pPb\) collisions at \(\sqrt{s_{NN}}\) = 5.02 TeV. Phys. Rev. Lett. 113(23), 232301 (2014). arXiv:1405.3452
CMS Collaboration, V. Khachatryan et al., Transverse momentum spectra of inclusive b jets in pPb collisions at \(\sqrt{s_{NN}} = \) 5.02 TeV. Phys. Lett. B 754, 59 (2016). arXiv:1510.03373
CMS Collaboration, V. Khachatryan et al., Study of B meson production in p\(+\)Pb collisions at 5.02 TeV using exclusive hadronic decays. Phys. Rev. Lett. 116(3), 032301 (2016). arXiv:1508.06678
LHCb Collaboration, R. Aaij et al., Study of prompt D\(^{0}\) meson production in \(p\)Pb collisions at \( \sqrt{s_{\rm NN}}=5 \) TeV. JHEP 10, 090 (2017). arXiv:1707.02750
LHCb Collaboration, R. Aaij et al., Measurement of \(B^+\), \(B^0\) and \(\Lambda _b^0\) production in \(p{\rm Pb}\) collisions at \(\sqrt{s_{{\rm NN}}}=8.16\,{\rm TeV}\). Phys. Rev. D 99(5), 052011 (2019). arXiv:1902.05599
K.J. Eskola, P. Paakkinen, H. Paukkunen, C.A. Salgado, EPPS16: nuclear parton distributions with LHC data. Eur. Phys. J. C 77(3), 163 (2017). arXiv:1612.05741
A. Kusina, F. Lyonnet, D.B. Clark, E. Godat, T. Jezo, K. Kovarik, F.I. Olness, I. Schienbein, J.Y. Yu, Vector boson production in pPb and PbPb collisions at the LHC and its impact on nCTEQ15 PDFs. Eur. Phys. J. C 77(7), 488 (2017). arXiv:1610.02925
S. Forte, L. Garrido, J.I. Latorre, A. Piccione, Neural network parametrization of deep-inelastic structure functions. JHEP 05, 062 (2002). arXiv:hep-ph/0204232
The NNPDF Collaboration, L. Del Debbio, S. Forte, J.I. Latorre, A. Piccione, J. Rojo, Unbiased determination of the proton structure function \(F_2^p\) with faithful uncertainty estimation. JHEP 03, 080 (2005). arXiv:hep-ph/0501067
The NNPDF Collaboration, L. Del Debbio, S. Forte, J.I. Latorre, A. Piccione, J. Rojo, Neural network determination of parton distributions: the nonsinglet case. JHEP 03, 039 (2007). arXiv:hep-ph/0701127
The NNPDF Collaboration, R.D. Ball et al., A determination of parton distributions with faithful uncertainty estimation. Nucl. Phys. B 809, 1–63 (2009). arXiv:0808.1231
The NNPDF Collaboration, J. Rojo et al., Update on neural network parton distributions: NNPDF1.1. arXiv:0811.2288
The NNPDF Collaboration, R.D. Ball et al., Fitting parton distribution data with multiplicative normalization uncertainties. JHEP 05, 075 (2010). arXiv:0912.2276
The NNPDF Collaboration, R.D. Ball et al., A first unbiased global NLO determination of parton distributions and their uncertainties. Nucl. Phys. B 838, 136 (2010). arXiv:1002.4407
The NNPDF Collaboration, R.D. Ball et al., Impact of heavy quark masses on parton distributions and LHC phenomenology. Nucl. Phys. B 849, 296 (2011) arXiv:1101.1300
The NNPDF Collaboration, R.D. Ball et al., Unbiased global determination of parton distributions and their uncertainties at NNLO and at LO. Nucl. Phys. B 855, 153 (2012). arXiv:1107.2652
R.D. Ball et al., Parton distributions with LHC data. Nucl. Phys. B 867, 244 (2013). arXiv:1207.1303
S. Forte, E. Laenen, P. Nason, J. Rojo, Heavy quarks in deepinelastic scattering. Nucl. Phys. B 834, 116–162 (2010). arXiv:1001.2312
M. Abadi et al., TensorFlow: largescale machine learning on heterogeneous systems (2015). Software available from tensorflow.org
European Muon Collaboration, J.J. Aubert et al., Measurements of the nucleon structure functions \(F_2^N\) in deep inelastic muon scattering from deuterium and comparison with those from hydrogen and iron. Nucl. Phys. B 293, 740–786 (1987)
European Muon Collaboration, J. Ashman et al., Measurement of the ratios of deep inelastic muonnucleus crosssections on various nuclei compared to deuterium. Phys. Lett. B 202, 603–610 (1988)
European Muon Collaboration, M. Arneodo et al., Measurements of the nucleon structure function in the range \(0.002< x < 0.17\) and \(0.2\,{\rm GeV}^2< Q^2 < 8\,{\rm GeV}^2\) in deuterium, carbon and calcium. Nucl. Phys. B 333, 1–47 (1990)
European Muon Collaboration, J. Ashman et al., A measurement of the ratio of the nucleon structure function in copper and deuterium. Z. Phys. C 57, 211–218 (1993)
New Muon Collaboration, P. Amaudruz et al., A reevaluation of the nuclear structure function ratios for D, He, Li6, C and Ca. Nucl. Phys. B 441, 3–11 (1995). arXiv:hep-ph/9503291
New Muon Collaboration, M. Arneodo et al., The structure function ratios \(F_2^{\rm Li}/F_2^{\rm D}\) and \(F_2^{\rm C}/F_2^{\rm D}\) at small x. Nucl. Phys. B 441, 12–30 (1995). arXiv:hep-ex/9504002
New Muon Collaboration, M. Arneodo et al., The A dependence of the nuclear structure function ratios. Nucl. Phys. B 481, 3–22 (1996)
New Muon Collaboration, M. Arneodo et al., The \(Q^2\) dependence of the structure function ratio \(F_2^{{\rm Sn}}/F_2^{C}\) in deep inelastic muon scattering. Nucl. Phys. B 481, 23–39 (1996)
J. Gomez et al., Measurement of the \(A\) dependence of deep-inelastic electron scattering. Phys. Rev. D 49, 4348 (1994)
New Muon Collaboration, P. Amaudruz et al., Measurements of R(d)  R(p) and R(Ca)  R(C) in deep inelastic muon scattering. Phys. Lett. B 294, 120–126 (1992)
S. Dasu et al., Measurement of the difference in \(R = \sigma_L/\sigma_T\) and \(\sigma^A/\sigma^D\) in deep inelastic \(e\)-D, \(e\)-Fe and \(e\)-Au scattering. Phys. Rev. Lett. 60, 2591 (1988)
J. Gomez et al., Measurement of the Adependence of deep inelastic electron scattering. Phys. Rev. D 49, 4348–4372 (1994)
E665 Collaboration, M.R. Adams et al., Shadowing in inelastic scattering of muons on carbon, calcium and lead at low x(Bj). Z. Phys. C 67, 403–410 (1995). arXiv:hep-ex/9505006
D.M. Alde et al., Nuclear dependence of dimuon production at 800 GeV. FNAL-772 experiment. Phys. Rev. Lett. 64, 2479–2482 (1990)
BCDMS Collaboration, A.C. Benvenuti et al., Nuclear effects in deep inelastic muon scattering on deuterium and iron targets. Phys. Lett. B 189, 483–487 (1987)
Fermilab E665 Collaboration, M.R. Adams et al., Shadowing in the muon xenon inelastic scattering cross-section at 490 GeV. Phys. Lett. B 287, 375–380 (1992)
NNPDF Collaboration, R.D. Ball et al., Parton distributions from highprecision collider data. Eur. Phys. J. C 77(10), 663 (2017). arXiv:1706.00428
G. D’Agostini, Bayesian Reasoning in Data Analysis: A Critical Introduction (World Scientific, Singapore, 2003)
L. Del Debbio, N.P. Hartland, S. Schumann, MCgrid: projecting cross section calculations on grids. Comput. Phys. Commun. 185, 2115–2126 (2014). arXiv:1312.4460
V. Bertone, R. Frederix, S. Frixione, J. Rojo, M. Sutton, aMCfast: automation of fast NLO computations for PDF fits. JHEP 1408, 166 (2014). arXiv:1406.7693
T. Carli et al., A posteriori inclusion of parton density functions in NLO QCD finalstate calculations at hadron colliders: the APPLGRID project. Eur. Phys. J. C 66, 503 (2010). arXiv:0911.2985
fastNLO Collaboration, M. Wobisch, D. Britzger, T. Kluge, K. Rabbertz, F. Stober, Theory-data comparisons for jet measurements in hadron-induced processes. arXiv:1109.1310
V. Bertone, S. Carrazza, N.P. Hartland, APFELgrid: a high performance tool for parton density determinations. Comput. Phys. Commun. 212, 205–209 (2017). arXiv:1605.02070
V. Bertone, S. Carrazza, J. Rojo, APFEL: a PDF evolution library with QED corrections. Comput. Phys. Commun. 185, 1647 (2014). arXiv:1310.1394
Particle Data Group Collaboration, M. Tanabashi et al., Review of particle physics. Phys. Rev. D 98(3), 030001 (2018)
NNPDF Collaboration, R.D. Ball, S. Carrazza, L. Del Debbio, S. Forte, Z. Kassabov, J. Rojo, E. Slade, M. Ubiali, Precision determination of the strong coupling constant within a global PDF analysis. Eur. Phys. J. C 78(5), 408 (2018). arXiv:1802.03398
A. Verbytskyi, A. Banfi, A. Kardos, P.F. Monni, S. Kluth, G. Somogyi, Z. Szőr, Z. Trócsányi, Z. Tulipánt, G. Zanderighi, High precision determination of \(\alpha _s\) from a global fit of jet rates. arXiv:1902.08158
ALPHA Collaboration, M. Bruno, M. Dalla Brida, P. Fritzsch, T. Korzec, A. Ramos, S. Schaefer, H. Simma, S. Sint, R. Sommer, QCD coupling from a nonperturbative determination of the threeflavor \(\Lambda \) parameter. Phys. Rev. Lett. 119(10), 102001 (2017). arXiv:1706.03821
S. Zafeiropoulos, P. Boucaud, F. De Soto, J. Rodríguez-Quintero, J. Segovia, The strong running coupling from the gauge sector of domain wall lattice QCD with physical quark masses. Phys. Rev. Lett. 122(16), 162002 (2019). arXiv:1902.08148
A. Pich, J. Rojo, R. Sommer, A. Vairo, Determining the strong coupling: status and challenges, in 13th Conference on Quark Confinement and the Hadron Spectrum (Confinement XIII) Maynooth, Ireland, July 31–August 6, 2018 (2018). arXiv:1811.11801
LHC Higgs Cross Section Working Group Collaboration, D. de Florian et al., Handbook of LHC Higgs Cross Sections: 4. Deciphering the Nature of the Higgs Sector. arXiv:1610.07922
NNPDF Collaboration, R.D. Ball, V. Bertone, M. Bonvini, S. Carrazza, S. Forte, A. Guffanti, N.P. Hartland, J. Rojo, L. Rottoli, A determination of the charm content of the proton. Eur. Phys. J. C 76(11), 647 (2016).arXiv:1605.06515
H.W. Lin et al., Parton distributions and lattice QCD calculations: a community white paper. Prog. Part. Nucl. Phys. 100, 107–160 (2018). arXiv:1711.07916
R.D. Ball, E.R. Nocera, J. Rojo, The asymptotic behaviour of parton distributions at small and large \(x\). Eur. Phys. J. C 76(7), 383 (2016). arXiv:1604.00024
R. Gauld, J. Rojo, L. Rottoli, J. Talbert, Charm production in the forward region: constraints on the smallx gluon and backgrounds for neutrino astronomy. JHEP 11, 009 (2015). arXiv:1506.08025
R. Gauld, J. Rojo, Precision determination of the small\(x\) gluon from charm production at LHCb. Phys. Rev. Lett. 118(7), 072001 (2017). arXiv:1610.09373
V. Bertone, R. Gauld, J. Rojo, Neutrino telescopes as QCD microscopes. JHEP 01, 217 (2019). arXiv:1808.02034
X. Glorot, Y. Bengio, Understanding the difficulty of training deep feedforward neural networks, in JMLR W&CP: Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics (AISTATS 2010), vol. 9, pp. 249–256 (May, 2010)
NNPDF Collaboration, E.R. Nocera, R.D. Ball, S. Forte, G. Ridolfi, J. Rojo, A first unbiased global determination of polarized PDFs and their uncertainties. Nucl. Phys. B 887, 276 (2014). arXiv:1406.5539
NNPDF Collaboration, V. Bertone, N.P. Hartland, E.R. Nocera, J. Rojo, L. Rottoli, Charged hadron fragmentation functions from collider data. Eur. Phys. J. C 78(8), 651 (2018). arXiv:1807.03310
J. Rojo, Machine Learning tools for global PDF fits, in 13th Conference on Quark Confinement and the Hadron Spectrum (Confinement XIII), Maynooth, Ireland, July 31–August 6, 2018 (2018). arXiv:1809.04392
D.P. Kingma, J. Ba, Adam: a method for stochastic optimization. CoRR (2014). arXiv:1412.6980
R. Abdul Khalek, V. Bertone, PDF analysis with analytical neural network gradient descent (Manuscript in preparation)
N.P. Hartland, F. Maltoni, E.R. Nocera, J. Rojo, E. Slade, E. Vryonidou, C. Zhang, A Monte Carlo global analysis of the standard model effective field theory: the top quark sector. arXiv:1901.05965
R.D. Ball, S. Forte, Double asymptotic scaling at HERA. Phys. Lett. B 335, 77–86 (1994). arXiv:hep-ph/9405320
J.F. Owens et al., The impact of new neutrino DIS and Drell–Yan data on large-x parton distributions. Phys. Rev. D 75, 054030 (2007). arXiv:hep-ph/0702159
J. Butterworth et al., PDF4LHC recommendations for LHC run II. J. Phys. G43, 023001 (2016). arXiv:1510.03865
R.D. Ball, S. Carrazza, L. Del Debbio, S. Forte, J. Gao et al., Parton distribution benchmarking with LHC data. JHEP 1304, 125 (2013). arXiv:1211.5142
M. Botje et al., The PDF4LHC Working Group interim recommendations. arXiv:1101.0538
S. Alekhin et al., The PDF4LHC Working Group interim report. arXiv:1101.0536
A. Kusina, J.P. Lansberg, I. Schienbein, H.S. Shao, Gluon shadowing in heavyflavor production at the LHC. Phys. Rev. Lett. 121(5), 052004 (2018). arXiv:1712.07024
J.M. Campbell, J. Rojo, E. Slade, C. Williams, Direct photon production and PDF fits reloaded. Eur. Phys. J. C 78(6), 470 (2018). arXiv:1802.03021
ALICE Collaboration, T. Peitzmann, Forward photon measurements in ALICE as a probe for low-x gluons, in 9th International Conference on Hard and Electromagnetic Probes of High-Energy Nuclear Collisions: Hard Probes 2018 (HP2018), Aix-Les-Bains, Savoie, France, October 1–5, 2018 (2018). arXiv:1812.07246
M. Klein, R. Yoshida, Collider physics at HERA. arXiv:0805.3334
A. Accardi et al., Electron ion collider: the next QCD frontier. Eur. Phys. J. A 52(9), 268 (2016). arXiv:1212.1701
D. Boer, M. Diehl, R. Milner, R. Venugopalan, W. Vogelsang, et al., Gluons and the quark sea at high energies: distributions, polarization, tomography. arXiv:1108.1713
NNPDF Collaboration, R.D. Ball, S. Forte, A. Guffanti, E.R. Nocera, G. Ridolfi, J. Rojo, Polarized parton distributions at an electron-ion collider. Phys. Lett. B 728, 524–531 (2014). arXiv:1310.0461
R.D. Ball, A. Deshpande, The proton spin, semiinclusive processes, and a future electron ion collider (2018). arXiv:1801.04842
E.C. Aschenauer, S. Fazio, J.H. Lee, H. Mantysaari, B.S. Page, B. Schenke, T. Ullrich, R. Venugopalan, P. Zurita, The electronion collider: assessing the energy dependence of key measurements. Rep. Prog. Phys. 82(2), 024301 (2019). arXiv:1708.01527
J.L. Abelleira Fernandez et al., A large hadron electron collider at CERN, in CERN Council Open Symposium on European Strategy for Particle Physics (ESPP2012) Cracow, Poland, September 10–12, 2012 (2012). arXiv:1211.4831
LHeC Study Group Collaboration, H. Paukkunen, An update on nuclear PDFs at the LHeC. PoS D IS2017, 109 (2018). arXiv:1709.08342
I. Helenius, H. Paukkunen, N. Armesto, Electronion physics with the LHeC. PoS D IS2015, 226 (2015). arXiv:1509.02802
A. Buckley, J. Ferrando, S. Lloyd, K. Nordström, B. Page et al., LHAPDF6: parton density access in the LHC precision era. Eur. Phys. J. C 75, 132 (2015). arXiv:1412.7420
Acknowledgements
We are grateful to our colleagues of the NNPDF collaboration for illuminating discussions. We thank Nathan Hartland and Carlota Andrés Casas for collaboration in the initial stages of this work. We are grateful to Alexander Kusina and Hannu Paukkunen for information about the nCTEQ15 and EPPS16 nuclear sets. We thank Hannu Paukkunen and Pia Zurita for sharing with us their projections for the EIC pseudodata. J.R. is partially supported by the European Research Council Starting Grant “PDF4BSM”. R.A.K., J.E., and J.R. are supported by the Netherlands Organization for Scientific Research (NWO).
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Funded by SCOAP^{3}
About this article
Cite this article
Khalek, R.A., Ethier, J.J. & Rojo, J. Nuclear parton distributions from lepton-nucleus scattering and the impact of an electron-ion collider. Eur. Phys. J. C 79, 471 (2019). https://doi.org/10.1140/epjc/s10052-019-6983-1