Correlation of Theoretical Uncertainties in PDF Fits and Theoretical Uncertainties in Predictions

We show how to account for correlations between theoretical uncertainties incorporated in parton distribution function (PDF) fits, and the theoretical uncertainties in the predictions made using these PDFs. We demonstrate by explicit calculations, both analytical and numerical, that these correlations can lead to corrections to the central values of the predictions, and reductions in both the PDF uncertainties and the theoretical uncertainties in the prediction. We illustrate our results with predictions for top production rapidity distributions and the Higgs total cross-section at the LHC, using the NLO NNPDF3.1 PDF set which incorporates missing higher order uncertainties. We conclude that the inclusion of correlations can increase both the accuracy and precision of predictions involving PDFs, particularly for processes with data already included in the PDF fit.

Parton distribution functions (PDFs) play a crucial role in the prediction of Standard Model observables at the Large Hadron Collider (LHC) and beyond [1,2]. PDF uncertainties are already a limiting factor in these predictions [3]. This means that a precise and accurate knowledge of PDFs is essential for full exploitation of the physics at the LHC. Besides the familiar experimental uncertainties in the data that are input to a global PDF fit, it is becoming increasingly necessary to also consider theoretical uncertainties: missing higher order uncertainties (MHOUs) due to the truncation of perturbative expansions in the hard cross-sections and parton evolution, uncertainties due to the use of nuclear targets, uncertainties due to missing higher twist, uncertainties due to external parameters such as quark masses, showering and hadronization uncertainties in final state Monte Carlos, and so on. In the past, the technique used to estimate the effect of theoretical uncertainties was to compare the PDFs produced with and without various theoretical corrections, rather than to incorporate the uncertainties into the fit itself. However the limitations of such a procedure are clear.
Recently a new approach to estimating the impact of theoretical uncertainties on global PDF fits has been developed, through the construction of a 'theory covariance matrix' [4], analogous to the experimental covariance matrix used in global PDF fits. By adding the theory covariance matrix to the experimental covariance matrix as an additional source of uncertainty, theoretical uncertainties can be incorporated directly in the fit, where they impact not only the overall level of PDF uncertainty, but also the relative weight of different datasets. This novel approach has so far been applied to uncertainties associated with nuclear effects [5,6], and to the estimation of MHOUs by scale variation [7][8][9] (though other methods [10][11][12] of estimating MHOUs are under development). Factorization scale variations estimate the MHOUs in parton evolution (correlated across all observables), and renormalization scale variations estimate the MHOUs in fixed order calculations of the process-dependent hard cross-sections (correlated only across a given class of processes). The first global PDFs including MHOUs estimated in this way were presented in [9].
When making predictions for hadronic observables there are again two sources of MHOU: uncertainties in the PDF evolution, which can also be estimated by factorization scale variation, and uncertainties in the hard cross-section, again estimated by renormalization scale variation. These arise on top of the MHOUs incorporated in the determination of the PDFs themselves, manifested as a part of the PDF uncertainty. So there will be correlations between the MHOUs in the PDFs and the additional MHOUs in the predictions. The PDFs themselves contain a wealth of data from a wide range of processes. When making a prediction for one of these processes (for example in a new kinematic regime), there will inevitably be correlations between the renormalization scale variation in the PDF determination, and that in the prediction. Even if the process is a new one, for example Higgs production, correlations due to factorization scale variation will still be present: all processes dependent on PDFs have a MHOU due to the need to evolve the PDFs to the scale of the process.
The potential importance of these correlations can be exposed by considering a simple situation in which a PDF is determined from data at a given scale on a single observable (such as a nonsinglet structure function), and used to make predictions of another observable at the same scale [13]. In this special case the PDF can be eliminated altogether, and the predicted observable determined directly from the measured observable. Including the MHOU twice (once in the calculation of the measured observable, then again in making the predicted observable), whilst ignoring their possible correlation, would amount to "double counting", and thus an overestimate of the uncertainty. Of course in a global PDF fit, involving many different processes, this particular formulation is no longer applicable. However the fact remains that there will still be some residual correlation between the MHOU in the PDF determination and the MHOU in the prediction, and to neglect it may lead to an overestimation of the MHOU.
In Ref. [9], it was seen that in a realistic NLO global fit, the effect of MHOU on the PDF uncertainty is quite small, the main consequence being a rebalancing of the impact of different datasets depending on their relative MHOU. The MHOU in the PDF is consequently rather smaller than the MHOU in the prediction, and if they are combined in quadrature, the effect of missing correlations will likely also be small. It was further argued that combining the PDF uncertainties and theoretical uncertainties in quadrature is conservative, since it can only lead to an overestimate of uncertainties, and thus better than neglecting MHOUs altogether.
Nevertheless, since the correlation between the uncertainties has not been computed explicitly, there remains the intriguing possibility that in some circumstances including the correlation may yield more precise, and perhaps even more accurate, predictions.
In this paper we show explicitly how to propagate theoretical uncertainties in the determination of global PDFs into the predictions made using these PDFs, taking account of all correlations between theoretical uncertainties. As this is a complicated problem, we proceed step by step. In Sec. 1 we show how a single source of theoretical uncertainty can be reformulated in terms of a nuisance parameter, which holds the key to the propagation of uncertainties. We explore the interplay between the theoretical uncertainties and the experimental uncertainties in the data in two idealised contexts: the first in which there are no fitted parameters, the second in which there are as many parameters as data, so the experimental data can be fitted exactly. This allows us to identify three distinct effects of the correlation of theoretical uncertainties. Then in Sec. 2 we consider a more realistic, but still simplified, situation where we fit the data using a single parameter, still in a theory with a single source of theoretical uncertainty, and find that all three of these correlation effects are still present, and can be computed. In Sec. 3 these results are extended to multiple theoretical uncertainties, in a multiparameter fit, and then less trivially to a PDF fit where the PDFs are continuous functions, with a functional uncertainty. Finally in Sec. 4 we present numerical results, made in the context of the NNPDF3.1 NLO global fit with MHOU presented in Ref. [9], and make predictions with MHOU for repetitions of the experiments included in the fit (so-called 'autopredictions'), and for genuine predictions (top and Higgs production). We are able to compute all correlations explicitly, calculating corrections to central values and changes in PDF and theoretical uncertainties for both autopredictions and genuine predictions, and confirm that the correlated predictions can be both more accurate and more precise than the conservative prescription. A summary is provided in Sec. 5.

Predictions with Correlated Theoretical Uncertainties
We showed in Refs. [4,5] that when we fit N experimental data points D_i to theoretical predictions T_i, i = 1, . . . , N, then the uncertainties in the theoretical predictions can be incorporated into the fit simply by adding a theoretical covariance matrix S_ij to the experimental covariance matrix C_ij. The only assumptions made in deriving this result are that all uncertainties, both experimental and theoretical, can be treated as Gaussian, and that the theoretical uncertainties are independent of the experimental data. We used this result in Refs. [5,6] to include nuclear uncertainties in a PDF fit, and in Refs. [8,9] to incorporate missing higher order uncertainties in global PDF fits.
The result of Sec. 2 of Ref. [5] may be summarised in terms of the Bayesian probability P(T|D) ∝ exp(−½ (T − D)ᵀ(C + S)⁻¹(T − D)), (1.1) where for simplicity we adopt a matrix notation: C and S are real symmetric matrices, with C strictly positive definite (thus invertible), and S positive semi-definite. In practice theoretical uncertainties are highly correlated, so S may (and generally will) have zero eigenvalues. We determine T from D by maximizing P(T|D): this is equivalent to minimizing χ²(T) = (T − D)ᵀ(C + S)⁻¹(T − D) (1.2) with respect to free parameters characterizing the theoretical prediction. In this section we will also assume that there is only a single source of fully correlated theoretical uncertainty, so that the theory covariance matrix can be written as S = ββᵀ for some real nonzero vector β (i.e. S_ij = β_i β_j). Then all eigenvalues of S except one are zero. We will develop a nuisance parameter formalism to propagate this theoretical uncertainty (Sec. 1.1), and then show how to apply it in two extreme cases: firstly to the situation in which the theory contains no free parameters, and thus where there is no fitting (Sec. 1.2), and then to the situation where there are as many free parameters as data points, so that we can achieve a perfect fit (Sec. 1.3).

Nuisance Parameters
It will be useful in what follows to introduce a nuisance parameter λ for the correlated theoretical uncertainty. Following the notation in Ref. [14], we model the theoretical uncertainty as a fully correlated shift in the theoretical prediction, T → T + λβ; the nuisance parameter λ then gives the size of the shift. For a given shift, with Gaussian experimental uncertainties, P(D|Tλ) ∝ exp(−½ (T + λβ − D)ᵀC⁻¹(T + λβ − D)). Now using Bayes' Theorem, to determine P(T|D) we need to first fix the prior distribution of λ. Since this is a theoretical uncertainty, it is reasonable to assume that the prior is independent of the data, thus that P(λ|D) = P(λ). Then, marginalizing Eq. (1.7) over λ, and taking P(λ) as a Gaussian, centred on zero (so the theoretical uncertainty is unbiased), with unit width (fixing the overall normalization of the prior theoretical uncertainty), we choose P(λ) = (2π)^(−1/2) exp(−λ²/2). The integration over the nuisance parameter is now Gaussian, and can be performed in the usual way by completing the square, yielding P(T|D) ∝ exp(−½ (T − D)ᵀ(C + S)⁻¹(T − D)), where we have defined λ̄(T, D) = −Z βᵀC⁻¹(T − D), with Z = (1 + βᵀC⁻¹β)⁻¹. (1.10) The second expression was obtained by noting that we can combine the two Gaussian factors, since the integration over λ yields a factor (2πZ)^(1/2) which is independent of T and D.
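The marginalization above can be cross-checked numerically. The following sketch (with arbitrary illustrative choices of C and β, not taken from any fit) verifies that Z = (1 + βᵀC⁻¹β)⁻¹ lies strictly between 0 and 1, and that replacing C by C + S is equivalent to the rank-one (Sherman-Morrison) form of (C + S)⁻¹ that the Gaussian integral produces:

```python
import numpy as np

# Illustrative inputs: a positive definite C and a rank-one S = beta beta^T.
rng = np.random.default_rng(0)
N = 4
A = rng.normal(size=(N, N))
C = A @ A.T + N * np.eye(N)        # experimental covariance, positive definite
beta = rng.normal(size=N)          # fully correlated theory uncertainty
S = np.outer(beta, beta)           # theory covariance, one nonzero eigenvalue

Cinv = np.linalg.inv(C)
Z = 1.0 / (1.0 + beta @ Cinv @ beta)   # posterior variance of lambda
assert 0.0 < Z < 1.0                    # 0 < Z < 1 always holds

# Marginalizing the unit Gaussian nuisance parameter is equivalent to using
# C + S; check the rank-one identity (C+S)^{-1} = C^{-1} - Z C^{-1}bb^T C^{-1}.
lhs = np.linalg.inv(C + S)
rhs = Cinv - Z * np.outer(Cinv @ beta, Cinv @ beta)
assert np.allclose(lhs, rhs)
```

Since S has a single nonzero eigenvalue, C + S never needs to be inverted by brute force: the rank-one identity gives it directly from C⁻¹.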
The nuisance parameter formalism thus reproduces our original result Eq. (1.1). However it is useful because it also allows us to determine the posterior distribution of the nuisance parameter: inverting Eq. (1.5) gives P(λ|TD) ∝ exp(−(λ − λ̄)²/2Z), so once we use the information on T and D, the prior distribution Eq. (1.7) is modified. In particular the peak of the distribution is shifted away from zero to λ̄, while the variance is now given by Z. It is easy to see from the definition Eq. (1.10) that 0 < Z < 1, (1.18) so the width of the theoretical uncertainty is generally reduced by the addition of new information.

Predictions and Autopredictions without Fits
In order to explore the implications of the formalism described in the previous section we first consider a 'pure' theory T = T₀, one where there are no unknown parameters to be fitted, but which nevertheless has a theoretical uncertainty. Despite the fact that this theory cannot be fitted to the data, so that in general T₀ ≠ D, the data can still inform the theoretical predictions.
Computing expectation values of functions of λ using the probability distribution P(λ|T₀D), we find E[λ] = λ̄(T₀, D) and Var[λ] = Z. Now in the nuisance parameter formalism, the theoretical predictions are T(λ) = T₀ + λβ. If we made these predictions before comparing to any data, we would use the prior distribution Eq. (1.7) for λ. Since this is a unit Gaussian centred on zero, we would then find that E[T(λ)] = T₀, while Cov[T(λ)] = ββᵀ = S, as expected. However if instead we first compare T to the data D, then we can instead compute expectation values using P(λ|TD). We then find (using Eqs. (1.19,1.15)) E[T(λ)] = T₀ + λ̄(T₀, D)β. One can consider this as an 'autoprediction': first the theory is compared to the data, and then using this information one makes new theoretical predictions for precise repetitions of the same experiments. The original theoretical predictions T₀ are then shifted by an amount δT = λ̄(T₀, D)β = −S(C + S)⁻¹(T₀ − D), and their theoretical uncertainties are reduced by a factor of √Z, since the data add new information: the covariance matrix of the autopredictions is Cov[T(λ)] = Zββᵀ = ZS. This is a simple example of Bayesian learning: the theory 'learns' from the data, within the constraints imposed by the prior theoretical uncertainty. It is interesting to compare the experimental χ² of the original predictions, χ²_exp = (T₀ − D)ᵀC⁻¹(T₀ − D), to that obtained with the autopredictions, χ²_auto = (T₀ + δT − D)ᵀC⁻¹(T₀ + δT − D) = (T₀ − D)ᵀ(C + S)⁻¹C(C + S)⁻¹(T₀ − D). It is easy to see that χ²_auto ≤ χ²_exp, since C is positive definite (i.e. C > 0) and S positive semi-definite (i.e. S ≥ 0), so 2S + SC⁻¹S ≥ 0, whence (C + S)C⁻¹(C + S) ≥ C, and (C + S)⁻¹C(C + S)⁻¹ ≤ C⁻¹. So the data induced shifts are always such as to improve the quality of the fit to the data, by exploiting the theoretical uncertainty.
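The inequality χ²_auto ≤ χ²_exp can be illustrated numerically; in this sketch (all inputs arbitrary and illustrative) the shift is taken in the form δT = λ̄β discussed above:

```python
import numpy as np

# Illustrative pure-theory setup: fixed predictions T0, data D, covariances C, S.
rng = np.random.default_rng(1)
N = 5
C = np.diag(rng.uniform(0.5, 2.0, N))    # uncorrelated experimental errors
beta = rng.normal(size=N)
S = np.outer(beta, beta)

T0 = rng.normal(size=N)                  # fixed ('pure') theory predictions
D = rng.normal(size=N)                   # data

Cinv = np.linalg.inv(C)
Z = 1.0 / (1.0 + beta @ Cinv @ beta)
lam_bar = -Z * beta @ Cinv @ (T0 - D)    # posterior mean of the nuisance parameter
dT = lam_bar * beta                      # autoprediction shift, delta T = lambda_bar * beta

chi2_exp = (T0 - D) @ Cinv @ (T0 - D)
chi2_auto = (T0 + dT - D) @ Cinv @ (T0 + dT - D)
assert chi2_auto <= chi2_exp + 1e-12     # the shift always improves the fit
```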
To make this more explicit, consider a simple model in which the experimental uncertainties are uncorrelated, and the same for each data point: then we can write C = σ²𝟙, β = s e, (1.29) with e a unit vector, eᵀe = 1, so that S = ββᵀ = s²eeᵀ. Thus σ is the experimental uncertainty on each data point, and (taking all components of e equal) s/√N is the size of the correlated theoretical uncertainty on each point. Then it is easy to see that (C + S)⁻¹ = σ⁻²(𝟙 − (s²/(σ² + s²))eeᵀ), (1.30) and (using Eq. (1.10)) Z = σ²/(σ² + s²). (1.31) So the reduction in theoretical uncertainties depends on the ratio s²/σ²: when s² ≪ σ² the influence of the data on the theoretical uncertainty is very small, while when s² ≫ σ² the size of the theoretical uncertainty is reduced from s to σ, since in the limit s²/σ² → ∞, √Z s = sσ/√(σ² + s²) → σ. (1.32) Note that when the theoretical uncertainties are comparable to the experimental, s²/N ∼ σ², and Z ∼ 1/(N + 1): if there are a large number of independent data points, the reduction of the theoretical uncertainty can be very substantial.
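The scalar behaviour of Z in this simple model is easy to tabulate; the short sketch below (`Z_simple` is a name introduced here purely for illustration) checks the three limits just quoted:

```python
import math

# Simple model: C = sigma^2 * identity, beta = s * e with e a unit vector,
# so beta^T C^{-1} beta = s^2/sigma^2 and Z = sigma^2/(sigma^2 + s^2).
def Z_simple(sigma, s):
    return sigma**2 / (sigma**2 + s**2)

sigma = 1.0
# s^2 << sigma^2: the data barely constrain the theory uncertainty, Z ~ 1
assert Z_simple(sigma, 0.1) > 0.99
# s^2 >> sigma^2: the theory uncertainty sqrt(Z)*s is reduced from s to sigma
assert math.isclose(math.sqrt(Z_simple(sigma, 100.0)) * 100.0, sigma, rel_tol=1e-3)
# s^2 = N sigma^2 (theory comparable to experiment per point): Z = 1/(N+1)
N = 99
assert math.isclose(Z_simple(sigma, math.sqrt(N) * sigma), 1.0 / (N + 1))
```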
In this model the shift in the theoretical predictions is (using Eq. (1.24)) δT = −(s²/(σ² + s²)) (eᵀ(T₀ − D)) e, (1.33) as expected in the direction e of the theoretical uncertainty. When s²/σ² → ∞, eᵀ(T₀ + δT) → eᵀD, so in this direction the autopredictions coincide precisely with the data. The experimental χ² for the autopredictions is (using Eq. (1.27)) χ²_auto = χ²_exp − (1 − Z²)(eᵀ(T₀ − D))²/σ². So the N − 1 contributions to the χ² orthogonal to e are unchanged, while the contribution along e is reduced by a factor Z². The autopredictions will in general have a χ² of size N − 1, rather than the N of the original predictions, as naively expected since the nuisance parameter is effectively fitted.
Of course we can also consider genuine predictions T̃_I, I = 1, . . . , Ñ, which again have no free parameters, but have a theoretical uncertainty which is correlated with that of the observables T_i, i = 1, . . . , N, for which we have the data D_i. The theoretical predictions including the theoretical uncertainty may be written T̃(λ̃) = T̃₀ + λ̃β̃, where the vector β̃ gives the size and direction of the theoretical uncertainty in T̃. Now if the nuisance parameter λ̃ is independent of the parameter λ, the theoretical uncertainties in T̃ are uncorrelated with those in T, and the theory covariance matrix for the prediction T̃ is simply S̃ = β̃β̃ᵀ. However if they are correlated, λ̃ = λ, and then, taking advantage of the data D for T, we can compute expectation values for the predictions using P(λ|TD). We then find that E[T̃(λ)] = T̃₀ + λ̄(T₀, D)β̃, (1.37) so the predictions are shifted by δT̃ = λ̄(T₀, D)β̃ = −Ŝ(C + S)⁻¹(T₀ − D), where Ŝ = β̃βᵀ is the cross-covariance matrix between the observables T for which we have data D and the predictions T̃, while Cov[T̃(λ)] = Zβ̃β̃ᵀ = ZS̃, so the covariance matrix of the predictions is reduced by the same factor Z as that for the autopredictions. Thus the data D can lead to more precise (and if the theory is correct, also more accurate) predictions for observables that are not yet measured, through the correlation of theoretical uncertainties. The reduction in the size of the covariance matrix is through the same factor Z as for the autopredictions, while the size of the shift is proportional to the cross-covariance between the theoretical uncertainties.
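A small numerical sketch of the correlated prediction, with illustrative inputs, using the shift δT̃ = λ̄β̃ and the reduced covariance ZS̃ described above:

```python
import numpy as np

# Illustrative setup: N fitted observables, Ntilde predictions sharing the
# same nuisance parameter lambda (fully correlated theory uncertainty).
rng = np.random.default_rng(2)
N, Ntilde = 5, 2
C = np.diag(rng.uniform(0.5, 2.0, N))
beta = rng.normal(size=N)          # theory uncertainty of the fitted observables
betatilde = rng.normal(size=Ntilde)  # theory uncertainty of the predictions

T0 = rng.normal(size=N)
D = rng.normal(size=N)
Cinv = np.linalg.inv(C)
Z = 1.0 / (1.0 + beta @ Cinv @ beta)
lam_bar = -Z * beta @ Cinv @ (T0 - D)

dTtilde = lam_bar * betatilde                      # shift of the predictions
Stilde_corr = Z * np.outer(betatilde, betatilde)   # correlated: reduced, Z * Stilde
Stilde_uncorr = np.outer(betatilde, betatilde)     # uncorrelated: full Stilde
# correlation shrinks every diagonal entry by the factor Z < 1
assert np.all(np.diag(Stilde_corr) <= np.diag(Stilde_uncorr))
```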
Note that if, having made the predictions T̃ + δT̃, an experimentalist made measurements to produce independent data D̃, with experimental covariance matrix C̃, and we wanted to combine the datasets {D, D̃} into a single dataset, then the combined (N + Ñ) × (N + Ñ) theoretical covariance matrix for {T, T̃} to be used to compare to the data would have S and S̃ as diagonal blocks, and the cross-covariance β̃βᵀ (and its transpose) as off-diagonal blocks. While we might hope that the shifted predictions T̃ + δT̃ give a better χ² to the new data D̃, this is no longer guaranteed, since the shifts are driven by the old data D, and it is possible that the D̃ are inconsistent with them.

Autopredictions in Perfect Fits
In the previous section we considered the situation of a theory T which was not fitted to the data D. Now consider what is in some sense the opposite situation: a 'perfect' fit, where the theoretical predictions T have sufficient flexibility to fit the data D exactly. For a perfect fit, P(T|D) is always maximized when T = D, and thus (Eq. (1.2)) χ² = 0. We compute expectation values of functions of T using the probability distribution P(T|D) in Eq. (1.1), thus E[T] = D, (1.42) and Cov[T] = C + S. (1.43) These are the expectation value and covariance matrix of the variables T fitted to the data D.
To compute the autopredictions, we need to consider T(λ) = T + λβ (Eq. (1.21)), and this is more subtle since, as we saw in the previous section, expectation values of λ involve T. We thus need to generalise the definition Eq. (1.19) of expectation values of functions of λ using the probability distribution P(λ|TD), to also include the subsequent integration over T: thus we define E[f(λ, T)] = ∫dT P(T|D) ∫dλ P(λ|TD) f(λ, T) for any function f(λ, T) of λ and T. There are several things to note about this procedure: • We always perform the integration over λ, weighted with the probability distribution P(λ|TD), before we perform the integration over T using P(T|D): this is because while P(λ|TD) depends on T, P(T|D) does not depend on λ, because it has been marginalised as in Eq. (1.16).
• The data D are always held fixed throughout: both P (λ|T D) and P (T |D) are conditional on D.
• For functions f(λ, T) which only depend on T, the integration over λ is trivial, and we recover for example the results Eqs. (1.42,1.43).
• For the pure theory discussed in the previous section, the theory T was held fixed, so the T integration was trivial and we recover the results Eqs. (1.19,1.20).
Thus in the case of a perfect fit, the expectation value of the nuisance parameter is E[λ] = E_T[λ̄(T, D)] = −ZβᵀC⁻¹(E[T] − D) = 0, using Eq. (1.15) and then finally Eq. (1.42). The calculation of the variance requires some care, to ensure that the average over λ is separated out from the average over T. This is most easily accomplished by adding and subtracting λ̄(T, D), so that Var[λ] = Z + Z²βᵀC⁻¹Cov[T]C⁻¹β = Z + Z²βᵀC⁻¹(C + S)C⁻¹β = Z + (1 − Z) = 1, where in the second line we note that the cross term vanishes, while in the last line we used Eq. (1.10) for Z, and Eq. (1.43) to simplify the second term. We thus find that in a perfect fit, the probability distribution of the nuisance parameter after fitting the data is the same as the prior distribution Eq. (1.7): we learn nothing from the data about the theoretical uncertainty because all the information in the data is absorbed in the fitted parameters. The calculation of the variance is particularly instructive: the reduction by the factor Z found in the pure theory is here exactly compensated by the variation of λ̄(T, D) over the fitted theory. For the covariance of the autopredictions we thus find Cov[T(λ)] = (C + S) − S − S + S = C. (1.51) Thus the covariance of the autopredictions in a perfect fit is simply the covariance of the data. The way in which this arises is that the theory covariance arising from the fit (in the first term in Eq. (1.51)) and the theory covariance arising in the autoprediction (the last term in Eq. (1.51)) are each cancelled by the cross-covariance between fitting and prediction (the middle terms), just as was argued in Ref. [13]. When we have a perfect fit, there is really no distinction between the autoprediction and the data, and the theory uncertainty thus becomes irrelevant. So in a sense this model is pure phenomenology: the only nontrivial information is in the data. Indeed in this model as it stands there is no possibility of making genuine predictions, since a prediction of the form Eq. (1.35) is useless unless we can determine T̃, presumably as a function of T, and for this we need a genuine theory.
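The cancellation in Eq. (1.51) can be checked deterministically: propagating λ̄(T, D) = −ZβᵀC⁻¹(T − D) linearly through Cov[T] = C + S and adding the residual variance Z of the nuisance parameter reproduces exactly C. The sketch below (arbitrary illustrative C and β) verifies this:

```python
import numpy as np

# Check that in a perfect fit the autoprediction covariance collapses to C:
#   M (C+S) M^T + Z S = C,  with  M = 1 - S (C+S)^{-1},
# where M propagates the correlated part lambda_bar(T) * beta of T + lambda*beta.
rng = np.random.default_rng(3)
N = 4
A = rng.normal(size=(N, N))
C = A @ A.T + N * np.eye(N)     # positive definite experimental covariance
beta = rng.normal(size=N)
S = np.outer(beta, beta)

Z = 1.0 / (1.0 + beta @ np.linalg.solve(C, beta))
M = np.eye(N) - S @ np.linalg.inv(C + S)
cov_auto = M @ (C + S) @ M.T + Z * S
assert np.allclose(cov_auto, C)  # the theory uncertainty cancels entirely
```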

Correlated Theory Uncertainties in One Parameter Fits
In the previous section, we considered two simple but unrealistic models: the first in which the theory T is fixed, with no free parameters to be fitted to the data (pure theory), and the second in which the theory T is so flexible that we could achieve a perfect fit, T = D (pure phenomenology). These exercises were useful, in that they gave us some practice in the use of nuisance parameters to propagate theoretical uncertainties. However we now need to consider the more realistic situation in which the theory has parameters that can be constrained by data, but is still sufficiently restrictive that it can be considered a theory. The fit to the data is then not perfect, but the theory is sufficiently constraining that it can be used to predict new observables, T̃, for which we as yet have no data. We will find that the interesting features of the pure theory and pure phenomenology models (the shifts, the reduction in uncertainties due to Bayesian learning, and the correlations between theory uncertainties in the fitting and theory uncertainties in predictions) are also found in the more realistic theories. In this section we consider a theory with only one fitted parameter: in Sec. 2.1 we explain how the fitting is performed using replicas, in Sec. 2.2 we consider autopredictions in such a theory, and in Sec. 2.3 we consider general predictions. Generalization to many fitting parameters will be considered in the following section.

Fitting a Theory with a Single Parameter
We can model this situation by considering theoretical predictions T(θ) which depend on a single parameter θ, so that χ²(θ) is minimized for some choice of this parameter, θ = θ₀, with some variance Var[θ]. Other observables T̃(θ) are then predicted to be T̃(θ₀), with an associated uncertainty proportional to Var[θ]. We assume as before that uncertainties are Gaussian, which means that we can linearize T(θ) about T(θ₀) ≡ T₀: T(θ) = T₀ + (θ − θ₀)Ṫ₀, (2.1) where Ṫ₀ is the derivative of T(θ) at θ₀. This model has the advantage that while it captures the essence of the fitting problem, it is sufficiently simple that we can solve it exactly. In order to determine the uncertainty on θ, we will need to propagate the experimental uncertainties in the data D and the theoretical uncertainties in the predictions T(θ) into θ. This can be done most easily by generating N_rep pseudodata replicas D^(r) distributed according to a Gaussian distribution centred on the actual data D, with covariance C + S: defining the replica average ⟨F⟩ = (1/N_rep) Σ_r F(D^(r)) (2.2) for any function F of the replicas, the replicas are chosen such that ⟨D^(r)⟩ = D, ⟨(D^(r) − D)(D^(r) − D)ᵀ⟩ = C + S. (2.3) A parameter replica θ^(r) is then fitted to each pseudodata replica D^(r) by maximizing P(T(θ)|D^(r)) as given by Eq. (1.1), and thus by minimizing χ²(θ) = (T(θ) − D^(r))ᵀ(C + S)⁻¹(T(θ) − D^(r)) (2.4) with respect to θ, replica by replica. Using Eq. (2.1), minimization of the quadratic gives θ^(r) = θ₀ + [Ṫ₀ᵀ(C + S)⁻¹(D^(r) − T₀)]/[Ṫ₀ᵀ(C + S)⁻¹Ṫ₀]. (2.5) Using the replica averages Eq. (2.3), and choosing θ₀ = ⟨θ^(r)⟩, we find for consistency Ṫ₀ᵀ(C + S)⁻¹(T₀ − D) = 0, (2.6) and thus we can rewrite Eq. (2.5) as θ^(r) − θ₀ = [Ṫ₀ᵀ(C + S)⁻¹(D^(r) − D)]/[Ṫ₀ᵀ(C + S)⁻¹Ṫ₀]. (2.7) Then since C and S are symmetric matrices, Var[θ] = ⟨(θ^(r) − θ₀)²⟩ = 1/[Ṫ₀ᵀ(C + S)⁻¹Ṫ₀]. (2.8) Note the way the double reciprocation in this expression works: data points with a relatively large dependence on θ (i.e. large Ṫ₀) contribute more than those with small dependence, however directions with large uncertainty (i.e. projections of C + S) contribute less than those with small uncertainty. Now that we understand the uncertainty of the fitted parameter θ, we can use it to predict the uncertainties on T(θ): Cov[T] = Ṫ₀ Var[θ] Ṫ₀ᵀ ≡ X, (2.10) which using Eq. (2.8) may be written X = nnᵀ/[nᵀ(C + S)⁻¹n], (2.12) where n is a unit vector in the direction of Ṫ₀, nᵀn = 1. This shows that X depends only on n, and not on |Ṫ₀|.
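The replica construction can be simulated directly. The following Monte Carlo sketch (all inputs illustrative, with θ₀ set to zero) fits the linearized model to Gaussian pseudodata replicas and checks that the replica variance of θ reproduces Var[θ] = 1/[Ṫ₀ᵀ(C + S)⁻¹Ṫ₀]:

```python
import numpy as np

# Illustrative one-parameter fit via pseudodata replicas D_r ~ N(D, C+S).
rng = np.random.default_rng(7)
N, Nrep = 6, 200_000
C = np.diag(rng.uniform(0.5, 2.0, N))
beta = rng.normal(size=N)
S = np.outer(beta, beta)
CpS = C + S
L = np.linalg.cholesky(CpS)          # to draw correlated Gaussian replicas

Tdot = rng.normal(size=N)            # dT/dtheta at theta_0
D = rng.normal(size=N)               # central data (arbitrary here)

w = np.linalg.solve(CpS, Tdot)
var_theta = 1.0 / (Tdot @ w)         # analytic variance of the fitted parameter

reps = D[:, None] + L @ rng.normal(size=(N, Nrep))   # replicas D^(r)
theta = (w @ (reps - D[:, None])) / (Tdot @ w)       # least-squares fit, theta_0 = 0
assert abs(theta.var() / var_theta - 1.0) < 0.02     # agree to ~2% with 2e5 replicas
```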
The singular matrix X will play an important role in what follows: it is the covariance matrix of T due to the experimental and theoretical uncertainties in the fitting of the parameter θ, the 'fitting uncertainty'. When the fitted parameter minimizes the χ², and is thus given by Eq. (2.7), X satisfies the projective relation X(C + S)⁻¹X = X. (2.13) Using Eq. (2.7) in Eq. (2.1), we see that X(C + S)⁻¹ projects the data replicas onto the theory replicas: T^(r) − T₀ = X(C + S)⁻¹(D^(r) − D). (2.14) Because this relation is projective, some information is lost whenever we perform the fit: Eq. (2.14) cannot be inverted to obtain the data replicas from the theory replicas. This is an inevitable consequence of describing N data points (assuming N > 1) with only a single parameter θ.
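Both the projective relation and the independence of X from the magnitude of Ṫ₀ are easy to verify numerically (illustrative positive definite C + S):

```python
import numpy as np

# Fitting covariance X = Tdot Tdot^T / (Tdot^T (C+S)^{-1} Tdot).
rng = np.random.default_rng(4)
N = 5
A = rng.normal(size=(N, N))
CpS = A @ A.T + N * np.eye(N)        # C + S, positive definite
Tdot = rng.normal(size=N)

def X_of(Tdot, CpS):
    w = np.linalg.solve(CpS, Tdot)
    return np.outer(Tdot, Tdot) / (Tdot @ w)

X = X_of(Tdot, CpS)
# projective relation X (C+S)^{-1} X = X: the fit keeps a single direction
assert np.allclose(X @ np.linalg.solve(CpS, X), X)
# X depends only on the direction n of Tdot, not on its magnitude
assert np.allclose(X, X_of(7.3 * Tdot, CpS))
```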
To make X more explicit, consider the simple model Eq. (1.29) for C and S. Then using Eq. (1.30), and defining cos φ = nᵀe, if we project X onto n (projections orthogonal to n give zero) we find nᵀXn = σ²(σ² + s²)/(σ² + s² sin²φ). (2.16)
The contribution of the theory uncertainty s thus depends on how well aligned e is with the direction n of the parameter dependence: if φ = 0 we have complete alignment, and the variance of T in this direction is σ² + s² as expected, while if φ = π/2 we have orthogonality, and the variance of T is σ²: the theory uncertainty is then irrelevant to the fitting.
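In a two-dimensional sketch of the simple model, assuming the form X = nnᵀ/[nᵀ(C + S)⁻¹n] derived above, the projected variance nᵀXn interpolates between exactly these two limits:

```python
import numpy as np

# Simple model: C = sigma^2 * identity, S = s^2 e e^T, cos(phi) = n^T e.
# Two dimensions suffice to exhibit the alignment dependence.
sigma, s = 1.0, 2.0

def nXn(phi):
    e = np.array([1.0, 0.0])
    n = np.array([np.cos(phi), np.sin(phi)])
    CpS = sigma**2 * np.eye(2) + s**2 * np.outer(e, e)
    return 1.0 / (n @ np.linalg.solve(CpS, n))   # n^T X n

assert np.isclose(nXn(0.0), sigma**2 + s**2)     # aligned: full C + S variance
assert np.isclose(nXn(np.pi / 2), sigma**2)      # orthogonal: theory irrelevant
```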

Autopredictions in Single Parameter Fits
We can now consider the evaluation of the mean and covariance of the 'autopredictions' T(θ, λ) = T(θ) + λβ (2.17) in this one parameter model. Just as in Sec. 1.3, we do this by first computing expectation values over λ, using P(λ|TD), which depend on T, and then evaluating the expectation values over T, according to the probability distribution P(T|D), now performed by averaging over theory replicas T^(r) = T(θ^(r)). It is important to note that both these averages are performed holding the data D fixed, as both probabilities are conditional on the data: the data replicas D^(r) employed in Sec. 2.1 are only a device to generate the theory replicas T^(r), and are not to be averaged over when determining expectation values. Accordingly, Eq. (2.18) now becomes E[f(λ, T)] = ⟨∫dλ P(λ|T^(r)D) f(λ, T^(r))⟩, where the angled brackets denote the replica average Eq. (2.2). Following the same steps as in the perfect fit in Sec. 1.3, but now using the theory replicas T^(r) = T(θ^(r)) determined in the one parameter fit of Sec. 2.1, we find E[λ] = ⟨λ̄(T^(r), D)⟩ = −ZβᵀC⁻¹(T₀ − D). (2.19) Unlike in the perfect fit Eq. (1.45), but just as in the pure theory Eq. (1.19), the nuisance parameter can thus now have a nonzero expectation value, since the one parameter fit no longer fits the data exactly. This in turn gives nontrivial shifts in the theoretical predictions: so again the data give us information, inducing shifts in the autopredictions, δT = E[λ]β = −S(C + S)⁻¹(T₀ − D). Note however that since (from Eq. (2.6)) nᵀ(C + S)⁻¹(T₀ − D) = 0, these shifts will only be nonzero when n and e (the data and the theory) point in different directions: when they are parallel (φ = 0), the theoretical uncertainty is simply absorbed by the fit, just as it was in the perfect fit in Sec. 1.3. We can use the same argument as in Sec. 1.2, Eq. (1.27), to show that the shifts will always improve the fit to the experimental data. For the uncertainties, consider first the variance of λ: following the same argument that led to Eq. (1.47) in Sec. 1.3, we now find Var[λ] = Z + βᵀ(C + S)⁻¹X(C + S)⁻¹β ≡ Z̄, (2.22) where in the second line we turned the expectation value over T into a replica average, and in the last line we used Eq. (1.10) for Z, and Eq. (2.10) for Cov[T]. We thus find that in the more restrictive environment of the one parameter fit, the last two terms no longer cancel: the information in the data can no longer be entirely absorbed in the single fitted parameter, and so it can still inform the nuisance parameter. It is easy to see that Z̄ ≥ Z because (C + S)⁻¹X(C + S)⁻¹ is positive semi-definite, while Z̄ ≤ 1 since X(C + S)⁻¹ is projective, Eq. (2.13), so its eigenvalues are either zero or one. So in place of Eq. (1.18) we now have 0 < Z ≤ Z̄ ≤ 1. (2.23) The information on theoretical uncertainties extracted from the data is thus less in the one parameter fit than it was in the pure theory of Sec. 1.2, due to the extra uncertainty arising in the fit itself, but unlike in the perfect fit of Sec. 1.3, the data will still constrain the theoretical uncertainties provided the parameter and the theoretical uncertainty act in different directions.
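The ordering 0 < Z ≤ Z̄ ≤ 1 can be verified numerically; here Z̄ is taken in the form Z + βᵀ(C + S)⁻¹X(C + S)⁻¹β consistent with the properties quoted above (all inputs illustrative):

```python
import numpy as np

# Check 0 < Z <= Zbar <= 1 for the one-parameter fit; the extra piece in
# Zbar is the variation of lambda_bar over the fitted theory replicas.
rng = np.random.default_rng(5)
N = 5
A = rng.normal(size=(N, N))
C = A @ A.T + N * np.eye(N)
beta = rng.normal(size=N)
S = np.outer(beta, beta)
Tdot = rng.normal(size=N)

CpS_inv = np.linalg.inv(C + S)
X = np.outer(Tdot, Tdot) / (Tdot @ CpS_inv @ Tdot)   # fitting covariance
Z = 1.0 / (1.0 + beta @ np.linalg.solve(C, beta))
Zbar = Z + beta @ CpS_inv @ X @ CpS_inv @ beta
assert 0.0 < Z <= Zbar <= 1.0 + 1e-12
```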
In the model Eq. (1.29) for C and S, and using Eq. (2.16) for X, Eq. (2.22) becomes Z̄ = σ²/(σ² + s² sin²φ). (2.24) Comparing with the corresponding expression for Z in this model, we see that indeed Z̄ = 1 when φ = 0, thus when n = e and the parameter variation and theoretical uncertainty are aligned, while Z̄ = Z only if φ = π/2, so when n and e are orthogonal, the data have the greatest influence on the uncertainty. For the covariance of the autopredictions Eq. (2.17) we also have to take account of the correlation between the fitted theory and the nuisance parameter: proceeding as in Eq. (1.49), we find that the result is simply Cov[T(θ, λ)] = X − X(C + S)⁻¹S − S(C + S)⁻¹X + Z̄S. The meaning of the four terms is easy to understand: the first is the 'fitting uncertainty' (which includes contributions from both experimental and theoretical uncertainties), the last the 'theory uncertainty', reduced through exposure to the data, and the middle two terms are due to the correlations between the two sources of theoretical uncertainty. We can simplify it by using Eq. (2.22) to show that Z̄S = ZS + S(C + S)⁻¹X(C + S)⁻¹S, and then some straightforward algebra to write Cov[T(θ, λ)] = (𝟙 − S(C + S)⁻¹)X(𝟙 − (C + S)⁻¹S) + ZS. (2.30) We thus find finally that the relevant covariance is no longer C + S, but rather involves the smaller matrix X (which is in a sense C + S restricted to the space of variation of the parameter θ, as in Eq. (2.10)). Thus the result is no longer the experimental covariance matrix C, but rather the sum in quadrature of the 'fitting uncertainty', X, and the 'theory uncertainty', S, each reduced to some extent by the correlation of the theoretical uncertainties in fit and prediction.
In the model Eq. (1.29) for C and S, and using Eq. (2.16) for X, we now find the explicit form (2.31): the first term is just X, the off-diagonal term in the middle is the correlation term, and the last is ZS. If φ = 0, so n = e, the three terms combine to give simply σ²nnᵀ: in the direction of the fitted parameter, the uncertainty in the autoprediction is the experimental uncertainty, just as in the perfect fit described in Sec. 1.3. On the other hand, if φ = π/2, so n and e are orthogonal, the correlation term disappears, and the result reduces to X + ZS: we add the uncertainties in quadrature, since they are in orthogonal directions, and the theoretical uncertainty is reduced just as in the pure theory described in Sec. 1.2. The one parameter fit thus interpolates smoothly between these two extremes. Note that we can also write Eq. (2.31) in the form (2.32), in which the last term is just ZS, while the first is X, but with the vector n given an additional component in the direction e due to the theory correlation: besides changing its direction, this reduces the size of the fitting uncertainty by a factor sin²φ + Z² cos²φ. However it is easy to see that the size of the correlated fitting uncertainty is still larger than it would be if the theory uncertainty had not been included in the fit.
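A numerical sketch of the two limits of the autoprediction covariance, assuming it takes the compact form (𝟙 − S(C + S)⁻¹)X(𝟙 − (C + S)⁻¹S) + ZS, which is algebraically equivalent to the four-term expression described above:

```python
import numpy as np

# Simple-model limits of the autoprediction covariance in two dimensions.
sigma, s = 1.0, 1.5

def cov_auto(phi):
    e = np.array([1.0, 0.0])
    n = np.array([np.cos(phi), np.sin(phi)])
    C = sigma**2 * np.eye(2)
    S = s**2 * np.outer(e, e)
    CpS_inv = np.linalg.inv(C + S)
    X = np.outer(n, n) / (n @ CpS_inv @ n)       # fitting covariance
    Z = sigma**2 / (sigma**2 + s**2)
    M = np.eye(2) - S @ CpS_inv                  # correlation of fit and theory
    return M @ X @ M.T + Z * S, X, Z, S, n

cov0, X0, Z, S, n0 = cov_auto(0.0)
assert np.allclose(cov0, sigma**2 * np.outer(n0, n0))   # phi = 0: experimental only

cov90, X90, Z, S, n90 = cov_auto(np.pi / 2)
assert np.allclose(cov90, X90 + Z * S)                  # phi = pi/2: quadrature
```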

Correlated Predictions in One Parameter Fits
We now consider predictions T̃_I(θ), I = 1, . . . , Ñ, which depend on the same parameter θ as the fitted predictions T_i(θ), i = 1, . . . , N. There are two distinct sources of uncertainty in T̃_I(θ): uncertainties in the determination of θ, due to the experimental uncertainties in the data D_i and the theoretical uncertainties in the theory T_i(θ) used in its determination; and theoretical uncertainties in the predictions T̃_I(θ) themselves. The first uncertainty is expressed through Eq. (2.7), which gives the variance Eq. (2.8). In analogy with Eq. (2.1), the linearised dependence of the predictions T̃(θ) may be written down, and the covariance of T̃_I(θ) due to the uncertainty in the parameter θ is then derived just as in Eqs. (2.10-2.12) for the autopredictions. The second uncertainty, the theoretical uncertainty in the predictions T̃_I(θ), may again be either correlated or uncorrelated with the theoretical uncertainty in T_i(θ). Consider first the simpler situation when it is uncorrelated: this might be the case if, for example, the observable T̃(θ) was a different type of observable from the T(θ) used to determine θ. Then introducing a nuisance parameter λ̃, Gaussian distributed about zero with unit variance, and uncorrelated with λ, we can write (as in the pure theory model Eq. (1.35)) T̃(θ, λ̃) = T̃(θ) + λ̃β̃, where the vector β̃_I gives the size of the theoretical uncertainties in T̃_I(θ). Since θ and λ̃ are uncorrelated, we then find that S̃ = β̃β̃ᵀ, the theory covariance matrix for the prediction T̃(θ) (compare Eq. (1.36) in the pure theory), is simply added in quadrature: when the theoretical uncertainty is uncorrelated we add it in quadrature to the uncertainty due to that in the parameter θ derived from the fit. Now consider the more interesting case in which the theoretical uncertainty in T̃_I(θ) is fully correlated with that in the T_i(θ) used in the fit to determine θ: then λ̃ = λ, which has already been determined in the fit to have nonzero expectation value and variance, Eqs. (2.19, 2.22).
Then writing T̃(θ, λ) = T̃(θ) + λβ̃, the cross-covariance S_Ij = β̃_I β_j (compare Eq. (1.39)) is the matrix of cross-correlations between the observables T(θ) used in the fit and the predictions T̃(θ). Likewise, using the same arguments as were used to derive Cov[T(θ, λ)], Eq. (2.27), and defining in analogy to Eq. (2.10) and Eq. (2.34) the cross-covariance X_Ij between T and T̃, we find the covariance of the predictions. Note that the coefficients Z are the same as for the autopredictions, and thus satisfy the bounds Eq. (2.23) (for positive definite C and positive semi-definite S, i.e. C > 0, S ≥ 0). We thus see that including the correlations between the theoretical uncertainties in the fit and the prediction results in three effects: a shift in the central value of the prediction, a reduction in the theoretical uncertainty, and a reduction in the fitting uncertainty due to the correlations. Performing the fit gives us information (from the data) about the theory, which results in more precise, and hopefully more accurate, predictions.

Correlated MHOU in PDF fits
We now repeat the above analysis, but instead of the toy model we consider the more realistic situation in which the theoretical expressions T_i[f] depend on PDFs f, determined in a global fit to N data D_i with experimental covariance matrix C_ij, and then used to make Ñ predictions T̃_I[f]. There are then many sources of theoretical uncertainty in the relation between the theoretical calculations and the PDFs: here we consider the most generic, the missing higher order uncertainty (MHOU), computed using scale variations according to one of the prescriptions set out in Refs. [8,9]. The theory covariance matrices S_ij and S_IJ associated with the MHOU will then have many non-zero eigenvalues, and thus there will be n nuisance parameters λ_α, α = 1, . . . , n, to take into account. There is in principle no limit on n, though in practice n ≪ N. As in the toy model, the fit of the PDFs will determine the mean and covariance of the nuisance parameters, which will then translate into systematic shifts and changes in the uncertainties of the theoretical predictions.

Expectation and Covariance of Multiple Nuisance Parameters
The nuisance parameters λ_α correspond to shifts in the theoretical predictions, T_i → T_i + λ_α β_i,α, where we adopt the summation convention for the index α. The shift vectors β_i,α are not necessarily orthogonal to each other. We again assume Gaussian uncertainties, so that the likelihood takes a shifted form in place of Eq. (1.4), and we assume that each nuisance parameter has a prior which is Gaussian distributed with unit variance, centred on zero, the distributions being independent both of each other and of the data. We now marginalize over the λ_α, as in Eq. (1.6), by first completing the square: the details are messy, but the result is very similar to Eq. (1.16), with the inverse on the right hand side being the matrix inverse with respect to the indices α and β; χ² is once again given by Eq. (1.2), but now with the total covariance C + S in place of Eq. (1.3), as expected. The Gaussian integration in Eq. (3.4) is now trivial, taking us back again to Eq. (1.1) up to a factor (2π)^{n/2} (det Z)^{1/2}, which we can ignore as it does not depend on T or D, while Bayes' Theorem, Eq. (1.5), gives us the posterior distribution of the nuisance parameters. It is easy to see from the definition Eq. (3.5) that if e_α is a unit eigenvector of Z_αβ, with β = e_α β_α, then the corresponding eigenvalue of Z_αβ is z = (1 + βᵀC⁻¹β)⁻¹, so that 0 < z < 1. Thus, in analogy to the bounds Eq. (1.18), Z_αβ is positive definite (and thus invertible), while δ_αβ − Z_αβ is also positive definite (because the eigenvalues z are all less than one). We can summarise this by writing, in place of Eq. (1.18), 0 < Z_αβ < δ_αβ. (3.10) We can also express Z_αβ in terms of the inverse of C + S, Eq. (3.11); combining Eq. (3.11) with Eq. (3.6) then gives the posterior covariance of the nuisance parameters.
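The two forms of Z_αβ, and the bounds Eq. (3.10), are easy to verify numerically. The sketch below uses random toy matrices (the explicit matrix forms are our reconstruction from the eigenvalue formula z = (1 + βᵀC⁻¹β)⁻¹ quoted above): with S = β_α β_αᵀ, the matrix (δ + βᵀC⁻¹β)⁻¹ coincides with δ − βᵀ(C+S)⁻¹β, and its eigenvalues lie strictly between zero and one:

```python
import numpy as np

rng = np.random.default_rng(0)
ndat, npar = 8, 3
A = rng.normal(size=(ndat, ndat))
C = A @ A.T + ndat * np.eye(ndat)    # positive-definite experimental covariance
B = rng.normal(size=(ndat, npar))    # columns are the shift vectors beta_alpha
S = B @ B.T                          # theory covariance S = beta_alpha beta_alpha^T

# Z from the eigenvalue formula generalised to matrix form ...
Z_def = np.linalg.inv(np.eye(npar) + B.T @ np.linalg.inv(C) @ B)
# ... and Z expressed through the inverse of C + S, as in Eq. (3.11)
Z_alt = np.eye(npar) - B.T @ np.linalg.inv(C + S) @ B
```

The agreement of the two expressions is an instance of the Woodbury matrix identity.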

Fitting the PDFs
We now proceed to apply the above results in the context of a PDF fit incorporating MHOU [8,9]. We consider first a fixed parametrization: the PDFs f(θ) then depend on m parameters θ_p, p = 1, . . . , m, with m < N, so the data D are sufficient to determine all the parameters through minimization of the χ², Eq. (1.2). We can then follow the same procedure as in Sec. 2, the only difference being that we now fit the m parameters θ_p rather than just the single parameter θ. Writing the theoretical predictions T[f(θ)] ≡ T(θ), the linearization relation Eq. (2.33) becomes T(θ) ≈ T₀ + (θ_p − θ⁰_p)T_p, where f(θ₀) ≡ f₀ is the PDF that minimizes the χ², T₀ ≡ T(θ₀), T_p ≡ ∂T(θ₀)/∂θ_p, and we use the summation convention for the index p. Using the data replicas Eqs. (2.2, 2.3), and minimizing Eq. (2.4) with respect to θ_p rather than θ, we find in place of Eq. (2.7) that the fluctuations of the PDF replica parameters are given by a linear map of the data replica fluctuations, where the matrix inverse in the first factor on the right hand side is with respect to the p, q indices. It follows that instead of Eq. (2.8) we now have a covariance matrix X, so the projective relation Eq. (2.13) still holds, and X(C + S)⁻¹ projects data replicas onto theory replicas as in Eq. (2.14).
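For a fixed linear parametrization the projective relation can be made concrete. Writing the N×m design matrix T with columns ∂T/∂θ_p and K = (C+S)⁻¹, least-squares minimisation gives X = T(TᵀKT)⁻¹Tᵀ (our reconstruction of the covariance described above; the toy matrices are illustrative), from which X(C+S)⁻¹X = X follows:

```python
import numpy as np

rng = np.random.default_rng(1)
ndat, m = 10, 3
A = rng.normal(size=(ndat, ndat))
C = A @ A.T + ndat * np.eye(ndat)            # experimental covariance
B = rng.normal(size=(ndat, 2))
S = B @ B.T                                  # theory covariance
K = np.linalg.inv(C + S)

T = rng.normal(size=(ndat, m))               # design matrix T_{ip} = dT_i/dtheta_p, m < ndat
# covariance of the fitted theory, from least-squares minimisation of the chi^2
X = T @ np.linalg.inv(T.T @ K @ T) @ T.T

P = X @ K        # the map that projects data replicas onto theory replicas
```

Since P is idempotent, applying the fit twice to the same fluctuations changes nothing, which is exactly the projective property.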
It is now easy to see that the results for the autopredictions in Sec. 2.2 continue to hold, and in particular that the central values of the nuisance parameters λ_α are given by the multi-parameter generalization of the one parameter result. These shifts will improve the χ² to the experimental data, in just the same way as in Sec. 1.2. Likewise for the uncertainties: Eq. (2.22) for the variance of the nuisance parameter becomes an equation for the covariance matrix of the nuisance parameters in the context of the PDF fit, where S = β_α β_αᵀ. Using Eq. (3.20), we can rewrite the last term, and the final result thus again has exactly the same form as that found in Sec. 2.3: once the nuisance parameters are all eliminated, the only changes are in the expressions for the covariances X_ij, X_IJ and X_Ij, generalizing the previous one parameter expressions to many parameters.

Fitting NNPDFs
PDFs in an NNPDF fit are parametrized by a neural network, with a very large number of parameters. The fitting procedure differs from that using a fixed parametrization, since we want to avoid fitting noise. In practice this is achieved using a cross-validation procedure. It follows that when we fit to each data replica D^(r), the neural net parameters, and thus the PDF replicas f^(r), are not precisely determined through exact minimization of the χ², but rather include some random noise, which is responsible for the 'functional uncertainty' inherent in the fit [15]. It is not easy to describe this analytically: all we can say is that while all the general results in Sec. 3.1 remain valid, relations such as that defining X must now be evaluated as averages over the PDF replicas. This matrix gives the PDF uncertainties (and correlations) for the observables T[f], and includes both the experimental uncertainties in the data and the theoretical uncertainties in extracting the PDFs from the data. Note that in an NNPDF fit X no longer satisfies the projective relation Eq. (2.13), and indeed X(C + S)⁻¹ no longer projects data replicas directly onto theory replicas as in Eq. (2.14). One can confirm this for a given set of PDF replicas by computing the cross-covariance matrix Y between theory replicas and data replicas. For a fixed parametrization, we can use Eq. (2.14) and Eq. (2.3) to show that Y = X = Yᵀ. However it is easy to check by explicit computation that in an NNPDF fit Y is generally considerably smaller than X: the fluctuations of the theory replicas are not very well correlated with the fluctuations of the data replicas, due to the functional uncertainty. So although many of the eigenvalues of X(C + S)⁻¹ will still be zero (because m < N), the nonzero eigenvalues will differ from one, and many will be somewhat larger than one due to the functional uncertainty. This means that while Eq.
(3.10) still holds, the upper bound on Z_αβ, Eq. (3.21), does not: the covariance of the nuisance parameters can be larger than the prior when the functional uncertainty is large. Note that the fact that X is not invertible is not in any sense a technical limitation: the mapping of a global dataset into a set of PDFs cannot in principle be invertible (except possibly in certain special cases, such as data from a single process taken at a single scale [13]), since it is impossible to recover the data solely from the PDFs. This is in part because the PDFs are only functions of x, while the data also depend on a scale: when we determine PDFs, all the data are effectively projected onto a common scale. But it is also because PDFs are by definition universal, i.e. process independent, so given a set of PDFs it is impossible in principle to say even which processes were used to determine them.
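The statement that for a fixed parametrization Y = X = Yᵀ can be checked directly: if each theory replica is the projection T^(r) − T₀ = X(C+S)⁻¹(D^(r) − D₀) of its data replica, and the data replicas have covariance C + S, then Y = X(C+S)⁻¹(C+S) = X. A sketch with illustrative toy matrices:

```python
import numpy as np

rng = np.random.default_rng(2)
ndat, m = 10, 3
A = rng.normal(size=(ndat, ndat))
C = A @ A.T + ndat * np.eye(ndat)            # experimental covariance
B = rng.normal(size=(ndat, 2))
S = B @ B.T                                  # theory covariance
K = np.linalg.inv(C + S)

T = rng.normal(size=(ndat, m))
X = T @ np.linalg.inv(T.T @ K @ T) @ T.T     # PDF covariance, fixed parametrization
G = X @ K                                    # data replica -> theory replica map

# cross-covariance of theory and data replicas: Y = G Cov[D] = G (C + S)
Y = G @ (C + S)
```

In an NNPDF fit the map G is replaced by the noisy training procedure, so Y computed from the replica ensemble falls below X, as described in the text.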
We can now derive general results for the expectation and covariance of autopredictions from the three matrices C, S and X, following the procedure set out in Sec. 2.2. The shifts in the autopredictions are given by an expression similar to Eq. (3.19), and will reduce the experimental χ² as explained already in Sec. 1.2. The covariance matrix is still given by Eq. (3.33). If the theory uncertainty S is much smaller than the experimental uncertainty C, P approaches the result P_con = X + S, (3.34) in which the fitting uncertainty and theoretical uncertainty are simply combined in quadrature. So when the experimental uncertainties dominate there is almost complete decorrelation of the theoretical uncertainties, and the 'conservative' prescription recommended in Ref. [9] is a useful approximation.
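The decorrelation limit can be checked numerically. The explicit form P = (1 − SK)X(1 − KS) + S − SKS + SKXKS, with K = (C+S)⁻¹, is our reconstruction of Eq. (3.33), read off from the diagonal breakdowns discussed in Sec. 4; the toy matrices are illustrative. As S is scaled down relative to C, P approaches P_con = X + S:

```python
import numpy as np

def P_correlated(C, S, X):
    """Autoprediction covariance: correlated PDF uncertainty plus correlated
    theory uncertainty (our reconstruction of Eq. (3.33))."""
    K = np.linalg.inv(C + S)
    I = np.eye(len(C))
    pdf_part = (I - S @ K) @ X @ (I - K @ S)
    theory_part = S - S @ K @ S + S @ K @ X @ K @ S
    return pdf_part + theory_part

rng = np.random.default_rng(3)
ndat = 10
A = rng.normal(size=(ndat, ndat))
C = A @ A.T + ndat * np.eye(ndat)    # experimental covariance
B = rng.normal(size=(ndat, 2))       # S = eps * B B^T below
T = rng.normal(size=(ndat, 3))       # design matrix of a fixed parametrization

def departure(eps):
    """Relative departure of P from the conservative result X + S."""
    S = eps * (B @ B.T)
    K = np.linalg.inv(C + S)
    X = T @ np.linalg.inv(T.T @ K @ T) @ T.T
    return np.linalg.norm(P_correlated(C, S, X) - (X + S)) / np.linalg.norm(X + S)
```

Both contributions to P are positive semi-definite by construction, as stated below for the general result.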
Returning to the more general correlated result Eq. (3.33), both contributions to P (which we may call the correlated PDF uncertainty and the correlated theory uncertainty) are positive semi-definite, and combine in quadrature to give the total uncertainty. Moreover the correlated theory uncertainty is bounded above by the corresponding uncorrelated theory uncertainty. It is tempting to think that the correlated PDF uncertainty will likewise be bounded above by the uncorrelated PDF uncertainty X: since C is positive definite and S positive semi-definite, C ≤ C + S, so C(C + S)⁻¹ ≤ 1, and C(C + S)⁻¹X(C + S)⁻¹C ≤ X. This argument is wrong, however, and the correlated PDF uncertainty can sometimes exceed the uncorrelated one: writing out the expansion Eq. (3.36), in some circumstances the sum of the last three terms can be positive. For this reason it seems impossible to prove in general that P ≤ P_con, though in all practical applications we have tested so far this seems to be the case. For genuine predictions, with theory uncertainties correlated to those in the fit, the shifts are given by Eq. (3.37),
where now, besides the matrix X of Eq. (3.30), we must also evaluate the matrices X_IJ and X_Ij, Eqs. (3.39, 3.40). Again the covariance Eq. (3.38) separates into the sum in quadrature of a correlated PDF uncertainty (the first line) and a correlated theory uncertainty (the second line). When the cross-covariance S_Ij is very small, we obtain the conservative result proposed in Ref. [9]. This will typically be the case for predictions of new processes where the dominant MHOU is in the hard cross-section. However for processes already included in the fit the situation is more complex, since the cross-covariance S_Ij may be large even if S_IJ is small. Note that since the inclusion of MHOU in the PDF determination leads to only a small increase in the uncertainties of the PDFs [9], the conservative results Eqs. (3.34, 3.41) give uncertainties very close to the conventional prescription, in which the PDF uncertainty (without MHOU) is combined in quadrature with the MHOU in the prediction. This will be particularly true when the MHOU in the prediction is larger than the PDF uncertainty.

Numerical Results
In Sec. 3.3 we saw that in a realistic global PDF fit we can still use the same analytic expressions, Eqs. (3.32, 3.33, 3.37, 3.38), for the shifts and the reduction in uncertainties induced by the correlations between the theoretical uncertainties in fit and prediction as in a fit with a fixed parametrization, Sec. 3.2. This is despite the fact that PDFs are smooth functions, which cannot be determined uniquely from a finite set of discrete data, but necessarily have an additional 'functional uncertainty', so that the PDF parameters are not fixed uniquely by the fit. All that is necessary is to evaluate the matrices X_ij, X_IJ and X_Ij, Eqs. (3.30, 3.39, 3.40), as ensemble averages over the PDF replicas determined in the fit. In this section we compute these matrices in a realistic global PDF fit with theory uncertainties, and use them to evaluate autopredictions and genuine predictions including the effect of the correlated theoretical uncertainties.
We carry out these studies in the context of the NNPDF3.1 NLO global fit with MHOU presented in Ref. [9]. This in turn employed the same experimental data and theory calculations as NNPDF3.1 [16], with two minor differences: the value of the lower kinematic cut was increased from Q²_min = 2.69 GeV² to 13.96 GeV², and the HERA F₂^b, fixed-target Drell-Yan cross-sections, and some LHC inclusive jet data were removed, for technical reasons. This left a total of N_dat = 2819 data points. The complete list of data included in the fit may be found in Tab. 6.3 of Ref. [9]. These data were divided into five classes, depending on the type of process involved, as summarized in Tab. 4.1. The MHOU covariance matrix S_ij was constructed using renormalization and factorization scale variations by a factor of two either side. The factorization scale variations (estimating the MHOU in the NLO parton evolution) are correlated across all processes, while the renormalization scale variations (estimating the MHOU in the NLO hard cross-sections peculiar to each process) are correlated within data belonging to the same process, but uncorrelated between different processes. These variations were then combined to give S_ij using the 9pt scheme, as explained in Ref. [9]. The matrices C_ij and S_ij computed in Ref. [9] are reproduced in Fig. 4.1 as heat maps.
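The correlation pattern of the scale variations can be sketched schematically (this is a simplified illustration, not the full 9-point prescription of Ref. [9]; the normalisation, the shift sizes, and the set of variations are our own assumptions): factorisation-scale shifts Δ_i = T_i(varied) − T_i are filled for every data point, renormalisation-scale shifts only for the points of the corresponding process, and S_ij is built from the averaged outer products of the shifts:

```python
import numpy as np

def theory_cov(deltas):
    """S = (1/m) sum over the m variations V of Delta(V) Delta(V)^T."""
    d = np.asarray(deltas)
    return d.T @ d / len(d)

# toy setup: 3 points of process A, 2 points of process B
proc = np.array([0, 0, 0, 1, 1])
rng = np.random.default_rng(4)
T = rng.uniform(1.0, 2.0, size=5)            # central predictions

deltas = []
for sign in (+1, -1):
    # factorisation-scale variation: correlated across ALL processes
    deltas.append(sign * 0.05 * T)
    # renormalisation-scale variations: one per process, zero elsewhere
    for p in (0, 1):
        d = np.zeros(5)
        d[proc == p] = sign * 0.10 * T[proc == p]
        deltas.append(d)

S = theory_cov(deltas)
# the cross-process block of S receives only the factorisation-scale piece
```

The block structure then automatically encodes the correlation pattern described above: fully correlated evolution uncertainty, process-by-process hard cross-section uncertainty.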

Covariance of PDF uncertainties X
We begin by computing the covariance matrix X_ij, Eq. (3.30), shown in Fig. 4.2 as a heat map alongside the corresponding correlation matrix. It can be seen that the off-diagonal elements of X_ij are almost as large as the diagonal elements: this is confirmed by examination of the correlation matrix. This is because theoretical predictions are often very strongly correlated, not only for nearby bins within the same experiment, but also for different processes at nearby scales, due primarily to the smoothness of the underlying PDFs, both in x and in Q², but also to the highly correlated theoretical uncertainties included in the fit. We compare the PDF uncertainties to the experimental and theoretical uncertainties by looking at the per-point uncertainty (Fig. 4.3; the conventions for C and S are the same as in Ref. [9], and the datasets are arranged in the order given in Fig. 4.7 below). Recall (Eq. (2.3)) that C + S is the covariance of the data replicas to which the PDF replicas are fitted. It can be seen that at NLO the relative size of the experimental uncertainties C_ii and the theoretical uncertainties S_ii varies considerably between different datasets: for the fixed target DIS data S_ii is generally below C_ii, except at large x, whereas for the HERA NC data S_ii is much less than C_ii at large x, but the other way around at small x, where the theoretical uncertainty dominates. For CHORUS the experimental uncertainty also dominates, while for most DY datasets the theoretical uncertainty dominates. In contrast the PDF uncertainties X_ii are generally less than either C_ii or S_ii, because the combination of data within given datasets and information from other datasets in the fit conspires to reduce the uncertainty. This is especially evident for DIS CC, DY and JETS.
However for some data sets, particularly cross-section ratios with very small theory uncertainty (such as NMC d/p, the asymmetries, and the differential top data), X ii lies above S ii , though still below C ii .

Nuisance Parameters
Having computed X, we next calculate the nuisance parameters λ_α of the theory covariance matrix S_ij for the MHOU in the NNPDF3.1 NLO global fit. The posterior distributions of these nuisance parameters give us information on which directions of MHOU are constrained by the fit. We showed in Ref. [9] that when there are five different processes, there are 28 nonzero eigenvalues. Thus we have 28 nuisance parameters, in one-to-one correspondence with the 28 eigenvectors with nonzero eigenvalues. The eigenvalues are shown in descending order in Fig. 4.4, with the nuisance parameters below them. The expectation values of the nuisance parameters are computed using Eq. (3.18), and their uncertainties using Eq. (3.20). The nuisance parameters are normalized so that their prior is a unit Gaussian centred on zero, as in Eq. (3.2). It can be seen that after fitting, the uncertainty in the nuisance parameters associated with the largest nine or so eigenvalues has been substantially reduced from one, indicating that exposure to the data has reduced the MHOUs. For those corresponding to the smaller eigenvalues there is very little reduction, showing that the data do not much constrain these directions in the space of MHOU. The central values for the three largest eigenvalue nuisance parameters remain close to zero within uncertainties, showing that the prior choices (mainly overall normalizations) were reasonable, while the next three or four show significant deviations from zero: for these the data seem to carry significant information about the MHOU. For the remaining (smaller) eigenvalues the central values of the nuisance parameters are all consistent with zero, and clearly for the very small ones the data have no effect at all, the posterior distributions being the same as the prior.
This shows that only the eigenvectors corresponding to the larger eigenvalues are  actually relevant for the PDF determination: the remainder correspond to such small changes in theoretical uncertainty that the fit ignores them.
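The construction of the nuisance parameters can be sketched as follows (toy matrices; the sign convention for the posterior mean, λ̄_α = −β_αᵀ(C+S)⁻¹(T−D), is chosen so that the induced shift moves the predictions towards the data, which is our reading of Eq. (3.18)): eigendecompose S, keep the nonzero eigenvalues s_α with eigenvectors e_α, set β_α = √s_α e_α, and evaluate the posterior mean and covariance:

```python
import numpy as np

rng = np.random.default_rng(5)
ndat = 12
A = rng.normal(size=(ndat, ndat))
C = A @ A.T + ndat * np.eye(ndat)        # experimental covariance
B0 = rng.normal(size=(ndat, 4))
S = B0 @ B0.T                            # rank-4 theory covariance

# shift vectors from the eigendecomposition of S
s, e = np.linalg.eigh(S)
keep = s > 1e-10 * s.max()
beta = e[:, keep] * np.sqrt(s[keep])     # columns beta_alpha, S = beta beta^T

D = rng.normal(size=ndat)                # toy data
T = rng.normal(size=ndat)                # toy central theory
K = np.linalg.inv(C + S)

lam_bar = -beta.T @ K @ (T - D)          # posterior means (our reading of Eq. (3.18))
Z = np.eye(beta.shape[1]) - beta.T @ K @ beta   # posterior covariance, Eq. (3.11)
sigma_lam = np.sqrt(np.diag(Z))          # reduced from the unit prior
```

In the realistic fit the same construction is applied with the 28 nonzero eigenvectors of the 9-point S_ij.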
We can understand these features better by separating out the two contributions to the total uncertainty in the nuisance parameters: that due purely to the impact of the fit of a given replica on the MHOU (given by Eq. (3.11)), and that due to the additional PDF uncertainty when the fits to all the replicas are averaged over PDF replicas, given by the last term in Eq. (3.20), also shown in Fig. 4.4. We see that when fitting a single replica, the uncertainties in the nuisance parameters corresponding to the larger eigenvalues are indeed very substantially reduced: the MHOU along the eigenvectors corresponding to these larger eigenvalues is learnt in the fit to the data, just as we saw in the simple models in Sec. 1.2. Again, very little information is retained about the smaller eigenvalues. The uncertainty contributed by averaging over the PDF replicas is also small for the largest eigenvalue nuisance parameters, but becomes the dominant contribution after the first three. For the smallest it is very small again: for these the data have no effect.
We can learn a little more about which MHOUs are learnt most by choosing directions for the shift vectors β_α in Eq. (3.1) different from the eigenvectors of S_ij. Specifically, we can choose the β_α to correspond to factorization scale variations (up or down), or renormalization scale variations (up or down, but now separately for each process). The results are shown in Fig. 4.5, where we again show the total uncertainty, scale uncertainty, and PDF uncertainty.
The central values fluctuate about zero, but all remain within the band ±1, showing that the effect of fitting the experimental data on the nuisance parameters is rather mild: this is reassuring, as it confirms the choice of central scales used to make the predictions, and the choice of the range of the scale variations (implicit in the choice of the prior for the nuisance parameters, Eq. (3.2)). We also see that the uncertainties in the nuisance parameters corresponding to factorization scale variations (estimating the MHOUs in parton evolution), are reduced the most when fitting to any given replica, as we would expect since MHOUs in parton evolution are common to all data included in the fit. A little is also learnt about the renormalization scales for DIS. However, the PDF uncertainties partially wash out these effects. This suggests that the significant shifts in the nuisance parameters seen in Fig. 4.4 are due to global tensions between different processes, rather than problems with the choice of scales for particular processes.
Already we see that information from the data in the fit significantly updates the priors for the nuisance parameter distribution. From this it is likely that there will be an effect at the level of autopredictions, which is the subject of the next section.

Autopredictions
We now present results for the 'autopredictions'. As we explained already in Sec. 1.2, these are the theoretical predictions we make for all the datasets included in the PDF fit, including theoretical uncertainties, after the fitting of the PDFs (with these same theoretical uncertainties). They can thus be thought of as 'postdictions', or predictions for the results of experiments run in exactly the same way, with the same equipment, as the original experiments, but taking account of the original global dataset. They thus form an ideal theoretical laboratory for testing the extent of the decorrelation between the theoretical uncertainties in the PDF fit and those in the (auto)predictions.
We begin by computing the shifts δT_i, Eq. (3.32), in the autopredictions due to the correlation in theoretical uncertainty: these are shown in Fig. 4.6, normalized to the original theoretical prediction T_i. It can be seen from the plot that these shifts are generally much smaller than the difference between data and theory, particularly for DIS NC and DY. However for some datasets (in particular CHORUS and inclusive jets) there seems to be an overall shift in central value of the same order as the difference between experiment and theory. It is difficult to draw any further conclusions from these observations, since the shifts are very correlated within datasets.
To see whether the shifts actually improve the autopredictions, in Fig. 4.7 we show the experimental χ² for the original data, computed using the autopredictions for the fit with no theory uncertainties, those obtained when the theory uncertainties are included in the fit, and then those obtained when the autoprediction also includes the shift. Needless to say, all three results are generally very close, and including the theory uncertainties in the fit has mixed results, some predictions getting better at the expense of others getting worse, since the main effect of the theory uncertainties is to rebalance the datasets in the fit [9]. Nevertheless when the correlated shifts are included, the fit to most datasets improves, sometimes quite substantially, just as anticipated in the very simple 'pure theory' model in Sec. 1.2, and confirmed for the more realistic models in Sec. 2.2 and Sec. 3.2. The numbers broken down by process are shown in Tab. 4.2. When including theory uncertainties the total χ² increases just a little, from 1.17 to 1.19, but when the correlated shift is added to the theoretical predictions, we see a significant improvement, to 1.10. This improvement is seen across all the processes.
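That the correlated shift always (weakly) improves the experimental χ² can be seen directly: with δT = S(C+S)⁻¹(D−T) (our reconstruction of Eq. (3.32)) the shifted residual is C(C+S)⁻¹(T−D), and (C+S)⁻¹C(C+S)⁻¹ ≤ C⁻¹. A numerical sketch with toy inputs:

```python
import numpy as np

def chi2(T, D, C):
    """Experimental chi^2 (unnormalised) of theory T against data D."""
    r = T - D
    return r @ np.linalg.solve(C, r)

rng = np.random.default_rng(6)
ndat = 15
A = rng.normal(size=(ndat, ndat))
C = A @ A.T + ndat * np.eye(ndat)    # experimental covariance
B = rng.normal(size=(ndat, 5))
S = B @ B.T                          # theory covariance

D = rng.normal(size=ndat)            # toy data
T = rng.normal(size=ndat)            # toy central theory

# correlated shift of the theory towards the data
dT = S @ np.linalg.solve(C + S, D - T)
```

The improvement holds for any data, theory, and positive (semi-)definite C and S, which is why it appears across all processes in Tab. 4.2.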

[Tab. 4.2: experimental χ² broken down by process: DIS NC, DIS CC, DY, JETS, TOP, Total.]
Although these autopredictions are in some sense artificial (in practice experiments are never repeated using exactly the same equipment and settings), the implications of this exercise for the learning of theoretical uncertainties are nevertheless rather general. This is because in a global fit of the size of that performed here, with 2819 data points from 35 datasets involving five different processes, removing any one of the smaller datasets has very little impact on the PDFs, and removing any dataset has the effect of increasing PDF uncertainties, while theoretical uncertainties for the remaining data remain unchanged. Consequently if we were to perform the PDF fit without a given dataset, and repeat the analysis, so that the autoprediction becomes a genuine prediction (or more properly a 'postdiction'), the result for this genuine prediction would be very close to the autoprediction. So we expect the shifts in central values that we see in the autopredictions also to give improvements in such predictions: the shifts should improve the accuracy of such predictions.
To see whether we can also increase the precision, we consider the uncertainties in the autopredictions. In Fig. 4.8 we show the full covariance matrix P_ij, Eq. (3.33), again normalized to the theoretical predictions, together with the corresponding correlation matrix. The matrix P_ij is the sum of the PDF uncertainty (derived from data uncertainties and theoretical uncertainties combined) and the theoretical uncertainty in the autoprediction, each reduced in size to account for the learning of the theoretical uncertainties and the correlation between the two sources of theoretical uncertainty. As might be expected there are very large correlations in the autopredictions within datasets, particularly for nearby kinematic points, but there are also smaller correlations, and anticorrelations, between datasets. These are due not only to the correlations of experimental uncertainties within datasets, but also to the use of a common set of smooth underlying PDFs, and to the correlations of the theory uncertainties. The correlations within each process are generally larger than those between processes. This suggests that the combined effect of the correlations due to the use of a common factorization scale and of those induced by the smoothness of the PDFs is small compared to the correlations from the renormalization scale.
In Fig. 4.9 we show the percentage uncertainties of the autopredictions, √P_ii, compared to the pure PDF uncertainties √X_ii. The correlated autoprediction uncertainties are generally of similar size to the PDF uncertainties; they are rather larger for some of the DY datasets and JETS, but are actually smaller for most of the DIS NC data (most remarkably for the HERA data at small x), and for some DY data. So the full autopredictions are not only more accurate: they are also more precise. This increase in precision must be taken with a pinch of salt, since it depends to some extent on the assumptions made in modelling the prior MHOU [9], in particular the choice of independent scales, and the scheme through which they are combined into the theory covariance matrix S. In particular the aggressive reduction in the small-x uncertainties for the HERA NC autopredictions seen in Fig. 4.9 may be due to the adoption of the same factorization scale for both singlet and nonsinglet, which overconstrains the singlet evolution at small x [13]. We leave the relaxation of these kinds of assumptions for future work.
As expected, all of the autoprediction uncertainties are smaller than would be obtained by the standard prescription of adding PDF uncertainties and theory uncertainties in quadrature [9], which ignores both learning and correlation. The conservative approach overestimates the correlated uncertainty for almost all datapoints, typically by a factor of two or more, particularly for those for which the theoretical uncertainty is larger than the PDF uncertainty. The only data for which the conservative prescription works well are ratio data (for example the NMC d/p data), for which theoretical uncertainties are very small.
To understand better how these changes in uncertainty arise, we show in Fig. 4.10 the contributions to the diagonal elements of the correlated theory uncertainty (the second term in Eq. (3.33)), normalised to the uncorrelated elements S_ii. The 'learning' of the theoretical uncertainty, given by the contribution −S(C + S)⁻¹S, is very significant, reducing the prior uncertainty S almost to zero for NC DIS and DY (where there is considerable data), and by an order of magnitude for DIS CC, JETS and TOP. Probably more flexibility is required in the modelling of the prior for this uncertainty. However the PDF fluctuations S(C + S)⁻¹X(C + S)⁻¹S (note that the combination S − S(C + S)⁻¹S + S(C + S)⁻¹X(C + S)⁻¹S is the analogue of the term ZS in the one parameter example described in Sec. 2.2) undo much of this learning, though for all data points this effect is insufficient to take the ratio to S above one.
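For a fixed parametrization the fact that the ratio never exceeds one follows because then X ≤ C + S, so that S(C+S)⁻¹X(C+S)⁻¹S ≤ S(C+S)⁻¹S. A numerical sketch of the breakdown, using toy matrices and our reconstruction of the correlated theory term S − S(C+S)⁻¹S + S(C+S)⁻¹X(C+S)⁻¹S:

```python
import numpy as np

rng = np.random.default_rng(7)
ndat = 10
A = rng.normal(size=(ndat, ndat))
C = A @ A.T + ndat * np.eye(ndat)            # experimental covariance
B = rng.normal(size=(ndat, 3))
S = B @ B.T                                  # theory covariance
K = np.linalg.inv(C + S)
T = rng.normal(size=(ndat, 3))
X = T @ np.linalg.inv(T.T @ K @ T) @ T.T     # fixed-parametrization PDF covariance

learning = -np.diag(S @ K @ S)               # reduces the prior theory uncertainty
fluct = np.diag(S @ K @ X @ K @ S)           # PDF fluctuations partially undo it
ratio = (np.diag(S) + learning + fluct) / np.diag(S)
```

In an NNPDF fit X is taken from the replica ensemble instead, and the bound X ≤ C + S is no longer guaranteed, consistent with the caveats of Sec. 3.3.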
A similar breakdown of the contributions to the diagonal elements of the correlated PDF uncertainty (the first term in Eq. (3.33), which can be expanded as in Eq. (3.36)), normalised to the uncorrelated elements X ii , is shown in Fig. 4.11. The correlation terms −S(C + S) −1 X − X(C + S) −1 S are indeed very large, as anticipated in Ref. [13], in particular for data with relatively large theoretical uncertainty (such as HERA NC at small x, or JETS), sufficient there to overwhelm X and give a negative result. However they can also be positive for some data (such as JETS). In any event, the addition of the PDF fluctuation term S(C + S) −1 X(C + S) −1 S (remember the decomposition Eq. (3.36) of the total correlated PDF uncertainty) is always sufficient to restore positivity of the correlated PDF uncertainty, and can in some situations (where S is large, in particular for JETS, but also some DIS CC and DY data) take the total correlated PDF uncertainty above the uncorrelated result X. Thus the correlations, while generally reducing uncertainties, can in some circumstances increase them, in contrast to learning which always reduces them.
For autopredictions we expect high levels of learning and correlation, because we are making predictions for exact repeats of experiments already in the fit. As we noted for the shifts, however, removing a smaller dataset from the fit has little effect on the PDFs, so we might expect similar effects for genuine predictions of processes already included in the fit, particularly if they are in a similar kinematic region.

Predictions for Top
We now consider genuine predictions, for experiments not used in the PDF fit. These are of two kinds: those for datasets obtained through processes already contained in the fit, and those for completely new processes. For the former we consider the tt production rapidity distributions (dilepton and lepton+jets) measured by CMS at 13 TeV [17,18]. We chose these datasets for two reasons: firstly, the MHOU is large compared to the experimental uncertainty; secondly, the fit in Ref. [9] contains the total tt cross-sections at 7, 8 and 13 TeV, and the normalized rapidity distributions at 8 TeV, all from both ATLAS and CMS. Both these factors mean that we expect to see some of the largest possible effects due to correlations between the theoretical uncertainties of the data in the PDFs and the theoretical uncertainties of the 13 TeV rapidity distributions.
To make genuine predictions, including all correlations, we need first to calculate the covariance matrix for the MHOU of the predictions, S_IJ, and its cross-covariance with the MHOU of the theoretical predictions for the data used in the PDF fit, S_Ij (the indices I, J running over the predicted data points, while i, j run over the data included in the PDF fit): these are in fact the same as if we were planning to include data for the new process in a PDF fit, since the complete covariance matrix for the MHOU would then be of the form Eq. (1.41). Similarly we need to compute, using the fitted PDF replicas, the covariance matrix X_IJ of the PDF uncertainty for the new predictions, and the cross-covariance X_Ij with the PDF uncertainty of the observables used in the fit. All of these matrices are required for an exact calculation of the correlated shifts in the theoretical predictions δT_I, Eq. (3.37), and the covariance matrix P_IJ of their combined PDF and correlated theoretical uncertainties, Eq. (3.38).

(Fig. 4.12 caption) The upper two panels show predictions for tt unnormalized rapidity distribution data taken at 13 TeV by CMS: the dilepton rapidity distribution [17] (left) and the lepton+jets distribution [18] (right). The four predictions show: the NLO fit with no MHOUs, PDF error only; the combined PDF and MHOU fit, ignoring correlations (thus P^con_II); the correlated result including the shift, with the uncertainty computed using the simplified result (thus P^sim_II); and the result with the same shift, but with the correlations included exactly (thus P_II). The NNLO result with no MHOU is also shown. In the middle panels the same is shown, but normalized to the uncorrelated result. In the lower panels we show the fractional reduction in the PDF uncertainty and the theory uncertainty due to the inclusion of the correlations.
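The assembly of these pieces can be sketched numerically. The following is only a schematic illustration: the block matrices are random placeholders sliced from single positive-definite matrices (so that S_IJ and S_Ij are mutually consistent, cf. Eq. (1.41)), and the combinations shown simply generalise the diagonal decomposition quoted earlier to the cross-covariance blocks; the exact expressions for δT_I and P_IJ are those of Eqs. (3.37) and (3.38), not this sketch.

```python
import numpy as np

rng = np.random.default_rng(1)
n_fit, n_pred = 8, 3

def rand_spd(n):
    # random symmetric positive-definite placeholder matrix
    A = rng.normal(size=(n, n))
    return A @ A.T + n * np.eye(n)

# one SPD covariance over fit+prediction points, sliced into consistent blocks
S_full = rand_spd(n_fit + n_pred)
X_full = rand_spd(n_fit + n_pred)
S_jj, S_Ij, S_II = S_full[:n_fit, :n_fit], S_full[n_fit:, :n_fit], S_full[n_fit:, n_fit:]
X_jj, X_Ij, X_II = X_full[:n_fit, :n_fit], X_full[n_fit:, :n_fit], X_full[n_fit:, n_fit:]

C = rand_spd(n_fit)              # experimental covariance of the fitted data
D = rng.normal(size=n_fit)       # data (placeholder values)
T = rng.normal(size=n_fit)       # central theory for the fitted data (placeholder)

G = np.linalg.inv(C + S_jj)

# schematic correlated shift of the prediction (cf. Eq. (3.37))
delta_T = S_Ij @ G @ (D - T)

# schematic combined covariance (cf. Eq. (3.38)):
P = (X_II + S_II
     - S_Ij @ G @ X_Ij.T - X_Ij @ G @ S_Ij.T   # correlation terms
     - S_Ij @ G @ S_Ij.T                       # learning of the theory uncertainty
     + 2 * S_Ij @ G @ X_jj @ G @ S_Ij.T)       # PDF fluctuation terms

assert delta_T.shape == (n_pred,)
assert P.shape == (n_pred, n_pred) and np.allclose(P, P.T)
```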
Predictions for the CMS 13 TeV tt production rapidity distributions were computed using the same tool chain as in Ref. [16] for the 8 TeV distributions: NLO theoretical predictions were generated with Sherpa [19], in a format compliant with APPLgrid [20], using the MCgrid code [21] and the Rivet [22] analysis package, with OpenLoops [23] for the NLO matrix elements. Renormalization and factorization scales were chosen as H_T/4, following the recommendation of Ref. [24].
The results of these calculations are shown in Fig. 4.12. The prior theoretical uncertainty in the original prediction is around 10%, considerably greater than the PDF uncertainty, as expected since the hard cross-sections are only computed at NLO. The correlated shift is sizeable, around 5%, and almost fully correlated across all the rapidity distributions. This is because these are unnormalized distributions, and thus have an overall theoretical normalization uncertainty which is strongly correlated to the measurements of the total tt cross-sections at 7, 8 and 13 TeV by ATLAS and CMS included in the PDF fit. This is confirmed by breaking down the contributions to the shift Eq. (3.24) from the various data points included in the fit: the results are shown in Tab. 4.3. Essentially all of the shift comes from the six total cross-section measurements, while the 8 TeV normalized rapidity distributions push it down again by around 25%; the remaining data make almost no contribution. Nevertheless, the shift is still rather less than the theoretical uncertainty in the original prediction, as expected from the shifts in the nuisance parameters for the renormalization scale variation for top processes shown in Fig. 4.5.
We can compare the shift due to correlations with that from going from NLO to NNLO. We thus show in Fig. 4.12 the results of a complete NNLO calculation (without theory uncertainties): the NNLO corrections also increase the predictions, by 5-8%. It is very interesting that the shift, driven by the data for the tt total cross-sections at 7, 8 and 13 TeV, largely accounts for the NNLO correction: the data know that the NLO theoretical predictions are on the low side, and this information is carried over into the prediction for the 13 TeV rapidity distributions. Indeed, comparing the experimental χ2 for these data in the various calculations in Tab. 4.4, we find that while the theory uncertainties increase the χ2 (presumably because the other top data are deweighted in the fit), the shift gives a significant improvement both for dileptons and lepton+jets, comparable to that obtained with the complete NNLO corrections. The shifts thus provide a new method for using experimental data to make improved theoretical predictions, through the learning of theoretical uncertainties. The method should be particularly effective when there is already substantial data on the process to be predicted included in the PDF fit, as is the case here.
The middle panels of Fig. 4.12 show the same points as the top panels, but as a ratio to the conservative result, making the uncertainties more visible. Comparing the uncertainties, the difference between the uncorrelated and correlated uncertainty is striking: the correlated uncertainties are much smaller than the uncorrelated ones. The correlated uncertainties are, however, still larger than the pure PDF uncertainties, though the very large theoretical uncertainty has been substantially reduced. So not only are the correlated predictions more accurate, they are also more precise. Despite this significant shrinking of the uncertainties, the correlated predictions are still compatible with the NNLO result, thanks to the shift in central values. While the conservative prescription is also compatible with the NNLO result, it is immediately clear from the plot that it is inferior to the correlated prediction. The breakdown of the reduction in uncertainties due to the correlations is shown in the lower panels of Fig. 4.12. The correlated theory uncertainty is substantially reduced (uniformly across the rapidities), due to the learning of the normalization from the data already included in the fit, while the correlated PDF uncertainty is reduced rather less: by as much as a factor of two when the differential cross-section is small, but hardly at all when it is large. So here the dominant effect is clearly the learning of the theoretical uncertainty in the overall normalization.
The theoretical uncertainties in the theoretical predictions are strongly correlated amongst themselves, and between the two rapidity distributions: in Fig. 4.13 we show the correlation matrices for the PDF uncertainties, X_IJ, and for the correlated prediction, P_IJ. We see that while the predictions are more than 50% correlated across the range of rapidities by the PDFs alone, when the correlated theoretical uncertainties are also included all points become rather more correlated, to more than 70%. The pattern of correlations reflects the symmetry of the dilepton distribution and the asymmetry of the lepton+jets distribution: the least correlated points are those with the greatest rapidity separation.
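The correlation matrices plotted in Fig. 4.13 are obtained from the covariance matrices X_IJ and P_IJ by the usual normalisation with the diagonal; a minimal sketch (the toy matrix is a placeholder, not the actual X_IJ):

```python
import numpy as np

def correlation_matrix(cov):
    """Normalise a covariance matrix to correlations in [-1, 1]."""
    d = np.sqrt(np.diag(cov))
    return cov / np.outer(d, d)

# toy 3x3 covariance, standing in for X_IJ or P_IJ
cov = np.array([[4.0, 1.2, 0.6],
                [1.2, 1.0, 0.3],
                [0.6, 0.3, 0.25]])
corr = correlation_matrix(cov)

assert np.allclose(np.diag(corr), 1.0)     # unit diagonal by construction
assert np.all(np.abs(corr) <= 1.0 + 1e-12) # valid correlations for a PSD input
```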

Predictions for Higgs
As an example of a process not included in the PDF fit, we consider the prediction for the total cross-section for Higgs production in gluon fusion at 14 TeV. The cross-section was computed using ggHiggs [25,26,27]. Renormalization and factorization scales are set to m_h/2, and the computation is performed using rescaled effective theory.
Our results are shown in Fig. 4.14. The PDFs are still the NLO PDFs with MHOU from Ref. [9], but the Higgs total cross-sections are computed at NLO, NNLO and N3LO. At NLO the MHOU in the Higgs cross-section, estimated by varying the renormalization scale, completely dominates all other uncertainties (the PDF uncertainty, the theoretical uncertainty in the PDFs, and the scale uncertainty in the PDF evolution, estimated by factorization scale variation), so the effect of correlations in the MHOU is completely negligible. However when the cross-section is computed at NNLO, the renormalization scale uncertainty is rather smaller, and at N3LO it becomes comparable to the other sources of uncertainty. The shift due to the correlation between MHOUs is always small compared to the overall uncertainty, and becomes smaller still as the perturbative order of the cross-section is increased, as expected. This is because, unlike in the top predictions, data for this process are not included in the fit, so the renormalization scale uncertainty is completely uncorrelated. It is interesting to note however that the small shift due to the correlation in factorization scale takes the NNLO prediction very close to the result computed with NNLO PDFs (though the coincidence is surely accidental).

(Fig. 4.14 caption) Predictions for the Higgs total cross-section at 14 TeV, made using a variety of approximations. All results use NLO PDFs, while the Higgs total cross-section is computed at NLO (left panel), NNLO (centre panel) and N3LO (right panel). In each panel, from left to right: MHOU included only in the PDF determination in the 9pt scheme; the same but with the factorization scale uncertainty (MHOU in PDF evolution) added in quadrature; the same but instead with the renormalization scale uncertainty (MHOU in the Higgs cross-section); the total PDF uncertainty and 9pt MHOU combined in quadrature, as recommended in Ref. [9]; the total PDF plus 9pt MHOU, now also including the shift and the correlation between theoretical uncertainties. In the centre panel we also show the NNLO prediction with NNLO PDFs (but no theoretical uncertainties), as a dashed line.
Unlike for the top predictions, the effect of the correlation on the size of the overall uncertainty is small. Again, this is because the PDFs contain far less information about Higgs production than they do about top production. We saw in Fig. 4.8 that the information propagated through to the correlated uncertainties came primarily through the renormalization scales, and that the correlation due to the factorization scale was considerably weaker. Higgs production, being a new process, is therefore affected only through the weak factorization scale correlations, and so the reduction in uncertainties is small. Overall, the uncorrelated conservative prescription [9] is comparable in this case to the fully correlated one. Note that if we were to perform these studies with NNLO (or indeed N3LO) PDFs including MHOU, the MHOU in the PDF determination would presumably be rather smaller than at NLO, and thus the effect of correlations in the MHOU (in particular the shift, but also the effect on the size of the uncertainty) would be even smaller than the small corrections we see here.
From these examples of autopredictions, and genuine predictions for top and Higgs, we have seen that the extent of the shift and correlation can vary quite significantly, depending on the type of prediction being made and what information is already contained in the PDFs. The conservative prescription recommended in Ref. [9] is certainly not appropriate in general, as the full inclusion of correlations can quite substantially reduce uncertainties, as we saw both for the autopredictions and the top predictions. However, when predicting a new process for which the PDFs contain little information about correlated theoretical uncertainties, the impact of correlations is unsurprisingly small, and the conservative prescription is quite sufficient.

Summary
In this paper we studied in detail the correlation between theoretical uncertainties in the calculations used in the determination of PDFs in a global fit, as formulated in Ref. [9], and the theoretical uncertainties in the predictions made using these PDFs. We began by recasting the theoretical uncertainties using nuisance parameters, determined replica by replica, which carry all the information about the effect of the experimental data on the theoretical uncertainties. Using increasingly realistic models of the fitting procedure, we produced analytical formulae for computing fully correlated predictions. In the process we identified three distinct but related effects, each of which has a significant impact on the final theoretical predictions:
• Shifts in central values. These are an effect of Bayesian learning: just as we can use experimental data to determine PDFs, so we can also use it to identify theoretical corrections that improve the agreement between data and theory, while remaining within theoretical uncertainties. The correlations between theoretical uncertainties in the fit and those in predictions then lead to more accurate predictions. This effect was first identified in Sec. 1.2.
• Learning of theoretical uncertainties. A second consequence of Bayesian learning is a reduction in theoretical uncertainty, due to the information provided by the data in the PDF fit, which through correlation of theoretical uncertainties can lead to a corresponding reduction in the theoretical uncertainties in predictions. This effect was also identified in Sec. 1.2, and is complementary to the shift in central values.
• Correlation in theoretical uncertainties. The third effect is that the correlation between the theoretical uncertainties in the fit and the theoretical uncertainties in the predictions leads to a change in the PDF uncertainties in the prediction, even in situations where there is no shift, thus avoiding any 'double counting' of the theoretical uncertainty. The existence of this effect was first noted in Ref. [13], and identified as an effect distinct from Bayesian learning in Sec. 1.3.
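The first two effects can be made concrete in the one-parameter setting of Sec. 2.2. In the sketch below C, S and the residual r = D − T are placeholder numbers; Z = C/(C+S) follows from the relation S − S(C+S)^{-1}S = ZS quoted above, and the shift is taken, schematically, as S/(C+S) times the residual.

```python
# One-parameter illustration of Bayesian learning (placeholder values):
#   C = experimental variance, S = prior theory (MHOU) variance,
#   r = data-minus-theory residual for the fitted point.
C, S, r = 1.0, 4.0, 2.0

Z = C / (C + S)             # since S - S*(C+S)**-1*S = Z*S in this model

shift = S / (C + S) * r     # schematic shift in the central value
learned_S = Z * S           # theory variance after learning from the data

assert abs(Z - 0.2) < 1e-12
assert abs(shift - 1.6) < 1e-12
assert learned_S < S        # learning always reduces the theory uncertainty
```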
While these three effects were first identified in the simple models of Sec. 1, we showed that they are all present in the one parameter fits of Sec. 2 and the more realistic fits with multiple parameters in Sec. 3. Using the NNPDF3.1 NLO global fits with MHOU [9], we demonstrated in Sec. 4 that the shifts can give sensible estimates of NNLO corrections, and thereby reduce the χ 2 to the experimental data. We also showed that while the uncertainty in NLO predictions is still a sum in quadrature of the theoretical uncertainty and the PDF uncertainty (which also includes a theory uncertainty), this sum can be significantly reduced, depending on the relative size of the theoretical and experimental uncertainties. Consequently the 'conservative' prescription of Ref. [9], where the theory uncertainty in the prediction is combined in quadrature with the PDF uncertainty, is indeed conservative. We expect these conclusions to also hold in global PDF fits with fixed parametrization and tolerance [28,29], if these were to include MHOU in the PDF fit.
The degree of correlation is highly dependent on the type of prediction being made. For the autopredictions (predictions for new measurements of the same data points as those included in the fit), Sec. 4.3, where there is maximal correspondence between the data in the fit and the predictions being made, the correlation is very high, leading to shifts that improve the quality of the fit to the data, together with a significant reduction in uncertainties, in some cases down to a small fraction of the uncorrelated values. For genuine predictions for new measurements of processes already included in the PDF fit, such as the new measurements of differential top production discussed in Sec. 4.4, we observe that the shift takes the correlated NLO predictions very close to the NNLO prediction, with a significant reduction in uncertainties: the prediction is both more accurate and more precise. For Higgs production, discussed in Sec. 4.5, a process not included in the PDF fit, the level of correlation is much smaller, since the dominant uncertainty (the MHOU in the hard cross-section) is uncorrelated with the MHOU of the fitted processes. In this case the shift is well within uncertainties, and the reduction in uncertainty very modest, so here the use of the conservative prescription Ref. [9] is entirely appropriate. We expect this to be true of predictions for any new process.
Thus our main conclusion is that when using PDFs which include MHOUs, taking account of the correlations between the MHOU included in the determination of the PDFs and the MHOU in the prediction can result in a significant improvement in both accuracy and precision. This is especially true in the case where the predicted process is among those included in the fit. However the correlated predictions must be treated with care, since their reliability relies to some extent on the generality of the prior estimation of the MHOU: if unjustified assumptions are made in the choice of prior, the uncertainty estimates in the correlated predictions may be too aggressive. For these reasons the conservative prescription, as an upper bound on the overall uncertainty, may sometimes be preferable, especially for predictions of new processes.
In order to calculate fully correlated predictions and uncertainties, one requires besides the PDF replicas some additional information: the cross-correlations between the theoretical uncertainties in the prediction and those in the theoretical calculations used to determine the PDFs, S Ij ; and the cross-correlations between the PDF uncertainties in the prediction and all the calculations included in the fit, X Ij . In the future, it may be possible to present this information in separate NNPDF deliverables to facilitate the calculation of the correlation effects.
Although we presented our numerical study of correlations in the context of MHOUs, we would expect similar results for other kinds of theoretical uncertainty, such as nuclear uncertainties, higher twist uncertainties, or indeed parametric uncertainties: once the theory covariance matrix has been computed, the linear algebra has no concern for the type of theoretical uncertainty it contains. This suggests a new technique for determining external parameters in PDF fits, such as quark masses or electroweak parameters, taking full account of all correlations with the PDFs and theoretical uncertainties. We hope to explore this possibility in the near future.