Information and treatment of unknown correlations in the combination of measurements using the BLUE method

We discuss the effect of large positive correlations in the combinations of several measurements of a single physical quantity using the Best Linear Unbiased Estimate (BLUE) method. We suggest a new approach for comparing the relative weights of the different measurements in their contributions to the combined knowledge about the unknown parameter, using the well-established concept of Fisher information. We argue, in particular, that one contribution to information comes from the collective interplay of the measurements through their correlations and that this contribution cannot be attributed to any of the individual measurements alone. We show that negative coefficients in the BLUE weighted average invariably indicate the presence of a regime of high correlations, where the effect of further increasing some of these correlations is that of reducing the error on the combined estimate. In these regimes, we stress that assuming fully correlated systematic uncertainties is not a truly conservative choice, and that the correlations provided as input to BLUE combinations need to be assessed with extreme care instead. In situations where the precise evaluation of these correlations is impractical, or even impossible, we provide tools to help experimental physicists perform more conservative combinations.


Introduction
Our knowledge about some of the most fundamental parameters of physics is derived from a vast number of measurements produced by different experiments using several complementary techniques. Many statistical methods are routinely used [1] to combine the available data and extract the most appropriate estimates of the values and uncertainties for these parameters, properly taking into account all correlations between the measurements. One of the most popular methods for performing these combinations is the Best Linear Unbiased Estimate (BLUE) technique, an approach first introduced in the 1930s [2] and whose reformulation in the context of high-energy physics [3,4] has been routinely used for the combination of the precision measurements performed by experiments at the LEP [5], Tevatron [6] and LHC [7] colliders, as well as in other domains.
To quantify the "relative importance" of each measurement in its contribution to the combined knowledge about the measured physical quantity, its coefficient in the BLUE weighted average is traditionally used. In many examples in the literature where the BLUE technique has been used, the combinations are dominated by systematic uncertainties, often assumed to be fully correlated among different measurements. This often leads to situations where one or more measurements contribute with a negative BLUE coefficient, pushing experimentalists to redefine the "relative importance" of a measurement as the absolute value of its BLUE coefficient, normalised to the sum of the absolute values of all coefficients [6,7]. In our opinion, this approach is incorrect.
In this paper, we propose a different approach for comparing the relative contributions of the measurements to the combined knowledge about the unknown parameter, using the well-established concept of Fisher information [8]. We also show that negative coefficients in the BLUE weighted average invariably indicate the presence of very high correlations, whose marginal effect is that of reducing the error on the combined estimate, rather than increasing it. In these regimes, we stress that taking systematic uncertainties to be fully (i.e. 100%) correlated is not a conservative assumption, and we therefore argue that the correlations provided as inputs to BLUE combinations need to be assessed with extreme care. In those situations where their precise evaluation is impossible, we offer a few guidelines and tools for critically re-evaluating these correlations, in order to help experimental physicists perform more "conservative" combinations. In our discussion, we will generally limit ourselves to BLUE combinations of a single measured parameter and where the correlations used as inputs to the combination are positive. Many of the concepts and tools we present could be applied also to the more general cases of BLUE combinations of several measured parameters, and/or involving also negative correlations between measurements, but this discussion is beyond the scope of this paper.
The outline of this article is the following. In Sec. 2 we review the definition of "relative importance" of a measurement in a BLUE combination as presented by some papers in the literature and we present our objections to it by using a simple numerical example. We then present our alternative definitions of information weights in Sec. 3, after briefly recalling the definition of Fisher information and its relevant features. By studying marginal information and information derivatives, in Sec. 4 we show that negative BLUE coefficients in the combination of several measurements of one parameter are always a sign of a "high-correlation" regime, thus generalising the results presented for two measurements by the authors of Ref. [3]. In Sec. 5 we go on to discuss practical guidelines and tools, illustrated by numerical examples, to identify correlations that may have been overestimated and to review them in a more "conservative" way. In Sec. 6 we summarize our discussion and present some concluding remarks.

"Relative importance" and negative BLUE coefficients
In the BLUE technique, the best linear unbiased estimate of each unknown parameter is built as a weighted average of all available measurements. The coefficients multiplying the measurements in each linear combination are determined as those that minimize its variance, under the constraint of a normalisation condition which ensures that this represents an unbiased estimate of the corresponding parameter. As discussed extensively in Refs. [3,4,9], this technique is equivalent to minimizing the weighted sum of squared distances of the measurements from the combined estimates, using as weighting matrix the input covariance matrix of the measurements, which is assumed to be known a priori.
In the case of n measurements $y_i$ of a single parameter whose true value is Y, in particular, the best linear unbiased estimate $\hat{Y}$ can be determined as follows. First, the BLUE should be a linear combination $\hat{Y} = \sum_{i=1}^{n} \lambda_i y_i$ of the available measurements. Second, the BLUE should be an unbiased estimator, i.e. its expectation value $E[\hat{Y}]$ should be equal to the true value Y of the unknown parameter. Assuming that each measurement is also an unbiased estimator, i.e. that its outcomes are distributed as random variables with expectation values $E[y_i] = Y$, this is equivalent to requiring a normalisation condition $\sum_{i=1}^{n} \lambda_i = 1$ for the coefficients $\lambda_i$ in the linear combination. Third, the BLUE should be the best of such unbiased linear combinations, i.e. that for which the combined variance $\sigma_Y^2 = \sum_{i=1}^{n} \sum_{j=1}^{n} \lambda_i M_{ij} \lambda_j$, where M is the covariance matrix of the measurements, is minimized. It is then easy to show [3] that $\hat{Y}$ is the best linear unbiased estimate if the coefficients $\lambda_i$ are equal to

\[ \lambda_i = \frac{(M^{-1} U)_i}{U^T M^{-1} U}, \qquad (1) \]

where U is a vector whose elements are all equal to 1. While the normalisation condition ensures that the coefficients $\lambda_i$ sum up to 1, one peculiar and somewhat counterintuitive feature of this method is that some of these individual coefficients may be negative. Negative coefficients in the BLUE weighted averages apparently still pose a problem of interpretation sometimes, especially if these coefficients are used to compare the contributions of the different measurements to the combined knowledge about the measured observable.
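As a minimal numerical sketch (ours, using numpy; the function names are our own), Eq. 1 and the corresponding combined estimate and variance can be computed as:

```python
import numpy as np

def blue_coefficients(M):
    """BLUE coefficients lambda_i = (M^-1 U)_i / (U^T M^-1 U)
    for n measurements of one parameter with covariance matrix M."""
    Minv = np.linalg.inv(np.asarray(M, dtype=float))
    U = np.ones(Minv.shape[0])
    return Minv @ U / (U @ Minv @ U)

def blue_combine(y, M):
    """Combined estimate sum_i lambda_i y_i and its variance
    sigma_Y^2 = 1 / (U^T M^-1 U)."""
    M = np.asarray(M, dtype=float)
    lam = blue_coefficients(M)
    U = np.ones(len(y))
    var = 1.0 / (U @ np.linalg.inv(M) @ U)
    return float(lam @ np.asarray(y, dtype=float)), float(var)
```

For uncorrelated measurements (diagonal M) the coefficients reduce to the familiar inverse-variance weights.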
For instance, the "relative importance" of each measurement in the combination of ATLAS and CMS results on the top quark mass [7] was defined as the absolute value of its coefficient in the BLUE weighted average, divided by the sum of the absolute values of the coefficients for all input measurements,

\[ \mathrm{RI}_i = \frac{|\lambda_i|}{\sum_{j=1}^{n} |\lambda_j|}. \qquad (2) \]

The same procedure had already been used to visualize the "weight that each measurement carries in the combination" of CDF and D0 results on the top quark mass [6]. In both cases, the relative importances of the n measurements sum up to 1 by definition, $\sum_{i=1}^{n} \mathrm{RI}_i = 1$. In our opinion, this procedure is conceptually wrong and suffers from two important limitations: first, it is not internally self-consistent and may easily lead to numerical conclusions which go against common sense; second, it does not help to understand in which way the results with negative coefficients contribute to reducing the uncertainties on the combined estimates. We will use a simple example to illustrate the first objection. Consider the combination of two uncorrelated measurements $y_A = 103.00 \pm 3.87$ and $y_B = 98.00 \pm 3.16$ of an observable Y in the appropriate units. The covariance matrix is then

\[ M = \begin{pmatrix} 15.00 & 0 \\ 0 & 10.00 \end{pmatrix} \]

and the BLUE for their combination is $\hat{Y} = \lambda_A y_A + \lambda_B y_B = 100.00 \pm 2.45$, where the coefficients of these two uncorrelated measurements in the BLUE weighted average, $\lambda_A = 0.4$ and $\lambda_B = 0.6$, are proportional to the inverses of the variances $\sigma_A^2$ and $\sigma_B^2$, as expected from simple error propagation. It is rather intuitive in this case to claim that the relative contributions to the knowledge about Y contributed by the two independent measurements A and B can be quantified by their BLUE coefficients, 40% for A and 60% for B. As $\lambda_A$ and $\lambda_B$ are both positive, these are also the "relative importances" of A and B according to Eq. 2.
Imagine now that $y_B$ is not the result of a direct measurement, but is itself the result of the combination of two measurements $y_{B1} = 99.00 \pm 4.00$ and $y_{B2} = 101.00 \pm 8.00$, where a high positive correlation $\rho = 0.875$ between them leads to negative BLUE coefficients in their weighted average $y_B = 1.5 \times y_{B1} - 0.5 \times y_{B2} = 98.00 \pm 3.16$. Instead of combining first $y_{B1}$ and $y_{B2}$ and then adding $y_A$, one could also combine $y_A$, $y_{B1}$ and $y_{B2}$ directly using the full covariance matrix

\[ M = \begin{pmatrix} 15.00 & 0 & 0 \\ 0 & 16.00 & 28.00 \\ 0 & 28.00 & 64.00 \end{pmatrix}, \]

where the BLUE coefficients in this overall weighted average are given by $\lambda_A = 0.4$, $\lambda_{B1} = 0.9$ and $\lambda_{B2} = -0.3$. As expected, the final numerical result for $\hat{Y}$ is of course the same whether it is obtained from the combination of $y_A$ and $y_B$ or from the combination of $y_A$, $y_{B1}$ and $y_{B2}$. It is also not surprising that the BLUE coefficient $\lambda_A = 0.4$ for $y_A$ is the same in both combinations, as this is an independent measurement that is not correlated to either $y_{B1}$ or $y_{B2}$ (the sum of whose BLUE coefficients, $\lambda_{B1} + \lambda_{B2} = 0.9 - 0.3 = 0.6$, of course, equals the BLUE coefficient $\lambda_B$ of $y_B$). What is rather surprising, however, is that the "relative importance" of $y_A$ computed using normalised absolute values of the BLUE coefficients is very different in the two cases:

\[ \mathrm{RI}_A = \frac{0.4}{0.4 + 0.6} = 40\% \quad \text{in the first case,} \qquad \mathrm{RI}_A = \frac{0.4}{0.4 + 0.9 + 0.3} = 25\% \quad \text{in the second.} \]

In our opinion, this is an internal inconsistency of Eq. 2, as common sense suggests that the relative contribution of $y_A$ to the knowledge about Y is the same in both combinations.
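These numbers are easy to reproduce; the sketch below (ours, using numpy) checks that the three-measurement combination gives $\lambda_A = 0.4$, $\lambda_{B1} = 0.9$, $\lambda_{B2} = -0.3$, and that both routes yield the same combined result:

```python
import numpy as np

def blue(y, M):
    """BLUE of one parameter: returns (coefficients, estimate, variance)."""
    M = np.asarray(M, dtype=float)
    U = np.ones(len(y))
    Minv = np.linalg.inv(M)
    info = U @ Minv @ U
    lam = Minv @ U / info
    return lam, float(lam @ y), float(1.0 / info)

# Direct combination of A, B1, B2 with the full 3x3 covariance matrix
y3 = np.array([103.0, 99.0, 101.0])
M3 = [[15.0, 0.0, 0.0], [0.0, 16.0, 28.0], [0.0, 28.0, 64.0]]
lam3, yhat3, var3 = blue(y3, M3)

# Equivalent two-step route: first combine B1 and B2, then combine A
# with the resulting pre-combined measurement B
lamB, yB, varB = blue(np.array([99.0, 101.0]), [[16.0, 28.0], [28.0, 64.0]])
lam2, yhat2, var2 = blue(np.array([103.0, yB]), [[15.0, 0.0], [0.0, varB]])
```

Both routes give the same estimate and variance, while the normalised absolute coefficients of Eq. 2 differ (0.4/1.0 versus 0.4/1.6 for A).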
In particular, we consider that the contribution of y A is indeed 40%, and that this is underestimated as 25% in the second combination because the relative contributions of y B1 and y B2 in the presence of negative BLUE coefficients are not being properly assessed and are overall overestimated.
More generally, the problem with defining the "relative importances" of measurements according to Eq. 2 is that the coefficient with which a measurement enters the linear combination of all measurements in the BLUE, i.e. its "weight" in the BLUE weighted average, is being confused with the impact or "weight" of its relative contribution to the knowledge about the measured observable. In the following we will therefore clearly distinguish between these two categories of "weights": we will sometimes refer to the BLUE coefficient λ i of a measurement as its "central value weight" (CVW), while we will use the term "information weight" (IW) to refer to, using the same words as in Refs. [6,7], its "relative importance" or the "weight it carries in the combination". We will propose and discuss our definitions of intrinsic and marginal information weights in the next section, using the well-established concept of Fisher information.

Fisher information and "information weights"
In this section, we present our definitions of intrinsic and marginal information weights, after briefly recalling the definition of Fisher information and summarizing its main relevant features. A more general discussion of Fisher information and its role in parameter estimation in experimental science is well beyond the scope of this paper and can be found in many textbooks on statistics such as the two excellent reviews in Refs. [9,10], which will largely be the basis of the overview presented in this section.

Definition of Fisher information
Consider n experimental measurements $y_i = \{y_1, \ldots, y_n\}$ that we want to use to infer the true values $X_\alpha = \{X_1, \ldots, X_N\}$ of N unknown parameters, with n > N (though each of the $y_i$ need not necessarily be a direct measurement of one of the parameters $X_\alpha$). We will use the symbols y and X to indicate the vectors of all $y_i$ and of all $X_\alpha$, respectively. The measurements y are random variables distributed according to a probability density function p(y; X) that is defined under the normalisation condition

\[ \int p(y; X)\, dy_1 \ldots dy_n = 1. \]

The sensitivity of the measurements to the unknown parameters can be represented by the Fisher "score vector"

\[ s^{(X)}_\alpha = \frac{\partial \log p(y; X)}{\partial X_\alpha}, \]

which is itself a random variable, defined in the n-dimensional space of the measurements and whose value in general also depends on the parameters X. Under certain regularity conditions (in summary, the ranges of values of y must be independent of X, and p(y; X) must be regular enough to allow the derivatives $\partial^2/\partial X_\alpha \partial X_\beta$ and the integral over dy to commute), it can be shown [9,10] that the expectation value of the Fisher score is the null vector,

\[ \int s^{(X)}_\alpha\, p(y; X)\, dy_1 \ldots dy_n = 0. \]

The Fisher information matrix, which in the following we will generally refer to simply as "information", is defined as the covariance of the score vector: as the expectation value of the score is null, this can simply be written as

\[ I^{(X)}_{\alpha\beta} = E\left[ s^{(X)}_\alpha\, s^{(X)}_\beta \right]. \qquad (6) \]

Information is thus defined as the result of an integral over $dy_1 \ldots dy_n$ and does not depend on the specific numerical outcomes of the measurements $y_i$, although in general it is a function of the parameters $X_\alpha$. In other words, information is a property of the measurement process, and more particularly of the errors on the measurements and of the correlations between them, rather than of the specific measured central values $y_i$. As pointed out in Ref. 
[10], Fisher information is a valuable tool for assessing quantitatively the contribution of an individual measurement to our knowledge about an unknown parameter inferred from it, because it possesses three remarkable properties.
First, information increases with the number of observations y i and in particular it is additive, i.e. the total information yielded by two independent experiments is the sum of the information from each experiment taken separately.
Second, the definition of the "information obtained from a set of measurements" depends on which parameters we want to infer from them. This is clear from Eq. 6, which defines Fisher information I (X) about X in terms of a set of derivatives with respect to the parameters X.
Finally, information is related to precision: the greater the information available from a set of measurements about some unknown parameters, the lower the uncertainty that can be achieved from the measurements on the estimation of these parameters. More formally, if $\hat{x}$ is any unbiased estimator of the parameter vector X derived from the measurements y, then under the same regularity conditions previously assumed it can be shown that $\mathrm{cov}(\hat{x}, \hat{x}) \succeq (I^{(X)})^{-1}$, where the symbol $\succeq$ indicates that the difference between the matrices on the left and right hand sides is positive semidefinite. In particular, for the diagonal elements of these matrices,

\[ \mathrm{var}(\hat{x}_\alpha) \ge \left( (I^{(X)})^{-1} \right)_{\alpha\alpha}. \qquad (7) \]

In other words, the quantity $((I^{(X)})^{-1})_{\alpha\alpha}$ represents a lower bound (called the Cramér-Rao lower bound) on the variance of any unbiased estimator of each parameter $X_\alpha$.

BLUE estimators and Fisher information
An unbiased estimator whose variance is equal to its Cramér-Rao lower bound, i.e. one for which the equality in Eq. 7 holds, is called an efficient unbiased estimator. While in the general case it is not always possible to build one, an efficient unbiased estimator does exist under the assumption that the n measurements y are multivariate Gaussian distributed with a positive definite covariance matrix that is known a priori and does not depend on the unknown parameters X. This is the same assumption that had been used for the description of the BLUE method in Ref. [4] and we will take it as valid throughout the rest of this paper.
As discussed at length in Refs. [9,10], such distributions possess in fact a number of special properties that significantly simplify all statistical calculations involving them. In particular, it is easy to show, in the general case with several unknown parameters, that the best linear unbiased estimator is under these assumptions an unbiased efficient estimator, i.e. that its covariance matrix is equal to the inverse of the Fisher information matrix. Moreover, the Fisher information matrix and the combined covariance do not depend on the unknown parameters X under these assumptions, while this is not true in the general case. For Gaussian distributions, the best linear unbiased estimator also coincides with the maximum likelihood estimator [9], while this is not true in most other cases, including the case of Poisson distributions.
In the case of one unknown parameter, in particular, i.e. when the parameter vector X reduces to a scalar Y, the probability density function is simply

\[ p(y; Y) = \frac{1}{(2\pi)^{n/2} |M|^{1/2}} \exp\left( -\frac{1}{2} (y - YU)^T M^{-1} (y - YU) \right), \]

where $M_{ij}$ is the covariance of the unbiased measurements $y_i$ and $y_j$. The Fisher information for Y, which also reduces to a scalar $I^{(Y)}$, can simply be written as

\[ I^{(Y)} = U^T M^{-1} U. \]

This is clearly the inverse of the variance of the BLUE for Y corresponding to the central value weights $\lambda_i$ given in Eq. 1,

\[ \sigma_Y^2 = \lambda^T M \lambda = \frac{1}{U^T M^{-1} U} = \frac{1}{I^{(Y)}}. \]

To further simplify the notation, in the following by I we will always indicate the information $I^{(Y)}$ relative to Y, dropping the superscript Y.
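Under these assumptions, the identity $I = U^T M^{-1} U = 1/\sigma_Y^2$ is straightforward to check numerically; the following sketch (ours) uses the 3x3 covariance matrix of the example in Sec. 2:

```python
import numpy as np

M = np.array([[15.0, 0.0, 0.0], [0.0, 16.0, 28.0], [0.0, 28.0, 64.0]])
U = np.ones(3)
Minv = np.linalg.inv(M)

info = U @ Minv @ U    # Fisher information I = U^T M^-1 U
lam = Minv @ U / info  # BLUE coefficients of Eq. 1
var = lam @ M @ lam    # combined variance sigma_Y^2 = lambda^T M lambda
```

The product `info * var` is 1 up to rounding: the BLUE saturates the Cramér-Rao bound for this Gaussian case.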

Intrinsic and marginal information weights
Having recalled the relevance of the Fisher information concept to quantitatively assess the contribution of a set of measurements to the knowledge about an unknown parameter, we may now introduce our proposal about how to best represent the "weight that a measurement carries in the combination" or its "relative importance". We define this in terms of intrinsic and marginal information weights. Our approach is radically different from that of Refs. [6,7], because we do not attempt to make sure that the n weights for the different measurements sum up to 1. Formally, we define the "intrinsic" information weight for each individual measurement simply as the ratio of the information it carries when taken alone (the inverse of its variance) to the total information for the set of measurements in the combination (the inverse of the BLUE variance),

\[ \mathrm{IIW}_i = \frac{1/\sigma_i^2}{I}. \]

We complement this definition by introducing a weight carried by the ensemble of all correlations between the measurements,

\[ \mathrm{IIW}_{\mathrm{corr}} = \frac{I - \sum_{i=1}^{n} 1/\sigma_i^2}{I}, \]

so that the sum of the n+1 terms adds up to 1,

\[ \sum_{i=1}^{n} \mathrm{IIW}_i + \mathrm{IIW}_{\mathrm{corr}} = 1. \]

In our opinion, the information contribution represented by $\mathrm{IIW}_{\mathrm{corr}}$ cannot be attributed to any of the individual measurements alone, because it is the result of their collective interplay through the ensemble of their correlations. Note that we did not split this weight into sub-contributions from one or more specific correlations because, while this is unambiguous in some specific cases, in general it is a complex task which implies a certain arbitrariness. Another useful way to quantify the information that an individual measurement $y_i$ brings to a combination is to look at its "marginal" information $\Delta I_i$, i.e. the additional information available when $y_i$ is added to a combination that already includes the other n−1 measurements.
We define the marginal information weight of $y_i$ as the ratio of its marginal information to the total information in the combination of all n measurements,

\[ \mathrm{MIW}_i = \frac{\Delta I_i}{I}, \]

where $\Delta I_i = I - I_{\setminus i}$ and $I_{\setminus i}$ denotes the information available from the combination of the other n−1 measurements alone. The sum of the weights for all measurements does not add up to 1, but we do not find it appropriate to introduce an extra weight to re-establish a normalisation condition. The interest of a marginal information weight $\mathrm{MIW}_i$, in fact, is that it already accounts both for the information $1/\sigma_i^2$ "intrinsically" contributed by measurement $y_i$, and for that contributed by its correlations to all other measurements in the presence of their correlations to one another. The sum of the marginal information weights for all measurements involves a complex double-counting of these effects and we do not find it to be a useful quantity for easily understanding the effect of correlations.
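These definitions translate directly into code; this sketch (ours, with our own helper names) computes IIW, IIW_corr and MIW for the A, B1, B2 example of Sec. 2:

```python
import numpy as np

def information(M):
    """Fisher information I = U^T M^-1 U for one parameter."""
    M = np.asarray(M, dtype=float)
    U = np.ones(M.shape[0])
    return float(U @ np.linalg.inv(M) @ U)

def info_weights(M):
    """Intrinsic information weights, the correlation weight IIW_corr,
    and marginal information weights for all measurements."""
    M = np.asarray(M, dtype=float)
    n = M.shape[0]
    I = information(M)
    iiw = np.array([1.0 / M[i, i] / I for i in range(n)])
    iiw_corr = 1.0 - iiw.sum()
    # MIW_i: information lost when measurement i is dropped, relative to I
    miw = np.array([(I - information(M[np.ix_(np.delete(np.arange(n), i),
                                              np.delete(np.arange(n), i))])) / I
                    for i in range(n)])
    return iiw, iiw_corr, miw

M3 = [[15.0, 0.0, 0.0], [0.0, 16.0, 28.0], [0.0, 28.0, 64.0]]
iiw, iiw_corr, miw = info_weights(M3)
```

For this example one finds IIW = (40%, 37.5%, 9.4%), IIW_corr about 13.1%, and MIW = (40%, 50.6%, 22.5%), so MIW − IIW = 13.1% for both B1 and B2, as quoted in the text.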
The intrinsic information weights for the different measurements are, by construction, always positive. The weight for the correlations can, instead, be negative, null or positive. In other words, according to our definition, while every measurement always adds intrinsic information to a combination, the net effect of correlations may be to increase the combined error, to keep the combined error unchanged or, less frequently, to decrease it.
Marginal information weights are guaranteed to be nonnegative (as discussed in more detail in Sec. 4.2), but they are generally different from the corresponding intrinsic information weights if the measurement is correlated to any of the others. In particular, MIW i < IIW i represents the common situation where one part of the intrinsic information contributed by one measurement is reduced by correlations, while MIW i > IIW i represents the cases where its correlations amplify its net contribution to information. We will discuss these issues in more detail in Sec. 4.1, in the specific case of two measurements of one observable.
In the simple example we used in Sec. 2, the intrinsic and marginal information weights in the combination of the two measurements A and B and in the combination of the three measurements A, B1 and B2 are summarised in the top and central sections of Table 1, where they are compared to their BLUE coefficients (or central value weights) and their "relative importances" according to Eq. 2. Note that all of these quantities (IIW i , MIW i , CVW i , RI i ) coincide for both measurements in the combination of A and B, where there are no correlations, but they differ significantly in the combination of A, B1 and B2, in the presence of large positive correlations. In particular, the intrinsic and marginal information weights for A are always equal to 40% whether A is combined to B alone or to B1 and B2 together; conversely, the marginal information weights of B1 and B2 are significantly larger than their intrinsic information weights, precisely because together, thanks to their correlation, they achieve more than they could achieve individually. Note also that the "relative importance" of B1 is larger than both its intrinsic and marginal information weights, which in our opinion shows that it is clearly overestimated.
We should stress at this point that information weights also have their own limitations and should be used with care. In particular, the main interest of information weights should not be that of ranking measurements, but rather that of providing a quantitative tool for a better understanding of how the different measurements, individually and together, contribute to our combined knowledge about the parameters that we want to infer. We believe that attempting to determine which individual experiment provides the "best" or "most important" contribution to a combination is a goal of relatively limited scientific use and, more importantly, is a question that involves some degree of arbitrariness. As we mentioned above, when combining n correlated measurements, it is very difficult to unambiguously split IIW corr into sub-contributions from the several correlations that simultaneously exist between those measurements. In particular, it would be quite complex to disentangle the two competing effects that each correlation may have on the information contributed by any given measurement: that of amplifying this contribution through the interplay with other measurements, and that of reducing this contribution by making the measurements partially redundant with each other. As a consequence, "ranking" individual measurements by their intrinsic or marginal information weights is a practice that we do not advocate or recommend.
To better illustrate what we mean, in the bottom section of Table 1 we have added a slightly different example, where it is now assumed that y B1 is itself the result of the combination of two very similar measurements y B11 = 99.01 ± 4.00 and y B12 = 98.99 ± 4.00 that are 99.999% correlated to each other (and are each individually 87.5% correlated to y B2 ). It is not surprising in this case that B11 and B12 have a central value weight equal to half that of B1, an intrinsic information weight that is the same as that of B1, but a marginal information weight that is essentially zero (because including B12 is largely redundant if the almost identical measurement B11 has already been included, and vice versa). While in the combination of A, B1 and B2 the net effect of correlations was to amplify the information contribution of both B1 and B2 by MIW B1 −IIW B1 = MIW B2 −IIW B2 = 13.1%, in this third example the information contributions of B11 and B12 are also affected by the competing effect of their mutual correlation, which brings their MIW down essentially to zero. This example is also interesting because it clearly shows that very different "rankings" may be obtained for the individual measurements if they are ordered by decreasing values of IIW i , MIW i , CVW i or RI i : for instance, measurement B11 has the highest CVW and RI, the second highest IIW, but the lowest MIW. Excluding RI, which we already argued to be an ill-defined quantity, we see in this case that CVW, IIW and MIW all have their limitations if they are used for "ranking". Indeed, CVW can be negative, which may give the false impression that a measurement makes a combination worse instead of improving it; IIW completely ignores the effect of correlations; MIW only describes the marginal contribution of a single measurement and of its correlations.
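This near-redundancy is easy to verify numerically; the sketch below (ours) builds the 4x4 covariance matrix implied by the quoted errors and correlations and checks the central value weights and the marginal weight of B11:

```python
import numpy as np

def information(M):
    """Fisher information I = U^T M^-1 U for one parameter."""
    U = np.ones(M.shape[0])
    return float(U @ np.linalg.inv(M) @ U)

c = 0.99999 * 4.0 * 4.0  # covariance of the two near-identical B11, B12
# Measurement order: A, B11, B12, B2
M4 = np.array([[15.0,  0.0,  0.0,  0.0],
               [ 0.0, 16.0,    c, 28.0],
               [ 0.0,    c, 16.0, 28.0],
               [ 0.0, 28.0, 28.0, 64.0]])
U = np.ones(4)
Minv = np.linalg.inv(M4)
lam = Minv @ U / (U @ Minv @ U)  # central value weights (BLUE coefficients)

I_all = information(M4)
idx = [0, 2, 3]                  # drop B11, keep A, B12, B2
miw_B11 = (I_all - information(M4[np.ix_(idx, idx)])) / I_all
```

The coefficients come out as roughly (0.4, 0.45, 0.45, -0.3), i.e. B11 and B12 each carry half the central value weight of B1, while the marginal information weight of B11 is essentially zero.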
For these reasons, we propose to quote all of CVW, IIW and MIW whenever a combination of several measurements is presented, while explicitly refraining from using any of them for ranking individual measurements.

Negative BLUE coefficients and "high-correlation regimes"
In this section, we use the concept of Fisher information to explore the relation between negative BLUE coefficients and the size of correlations between measurements. We start by revisiting the discussion of these issues presented in Ref. [3] for two measurements of one parameter, whose conclusion was that negative weights appear when the positive correlation between the two measurements exceeds a well-defined threshold. We then generalize this conclusion to n measurements of one parameter, first by computing marginal information and then by analysing the derivatives of Fisher information with respect to the correlations between measurements: we show, in particular, that negative central value weights in BLUE combinations are always a sign of a "high-correlation" regime, where the marginal effect of further increasing one or more of these correlations is that of reducing the errors on the combined estimates rather than increasing them. In Sec. 5 we will discuss important practical consequences of what is presented in this section.

The simple case of two measurements of one parameter
In the simple case of two measurements A and B of a single physical quantity Y, the coefficients in the BLUE weighted average $\hat{Y} = \lambda_A y_A + \lambda_B y_B$ are simply given by

\[ \lambda_A = \frac{\sigma_B^2 - \rho\sigma_A\sigma_B}{\sigma_A^2 + \sigma_B^2 - 2\rho\sigma_A\sigma_B}, \qquad \lambda_B = \frac{\sigma_A^2 - \rho\sigma_A\sigma_B}{\sigma_A^2 + \sigma_B^2 - 2\rho\sigma_A\sigma_B}, \qquad (16) \]

and the combined variance $\sigma_Y^2$ (i.e. the inverse of the Fisher information) by

\[ \sigma_Y^2 = \frac{\sigma_A^2 \sigma_B^2 (1 - \rho^2)}{\sigma_A^2 + \sigma_B^2 - 2\rho\sigma_A\sigma_B}, \qquad (17) \]

where $\sigma_A$ and $\sigma_B$ are the errors on the two measurements and $\rho$ is their correlation. Assuming that the two errors are fixed, with $\sigma_B > \sigma_A$, the functional dependencies on the correlation $\rho$ of the BLUE coefficient $\lambda_B$ for measurement B and of the combined variance $\sigma_Y^2$, given by Eq. 16 and Eq. 17 respectively, are shown in Fig. 1 (left and right, respectively) for various values of the ratio $\sigma_B/\sigma_A$. For positive correlations, as discussed in Ref. [3], the combined BLUE variance increases from $\sigma_Y^2 = 1/(1/\sigma_A^2 + 1/\sigma_B^2)$ at $\rho = 0$ to a maximum value of $\sigma_Y^2 = \sigma_A^2$ at $\rho = \sigma_A/\sigma_B$ and it then decreases to $\sigma_Y^2 = 0$ in the limit $\rho = 1$. Therefore, for combinations with large correlations among measurements, the combined uncertainty strongly depends on $\rho$, and is expected to vanish at $\rho = 1$; this implies that, close to these regions, determining the correlation with high accuracy is mandatory so as not to bias the combination. The BLUE coefficient for B steadily decreases as $\rho$ grows: it is positive for $\rho < \sigma_A/\sigma_B$, vanishes at $\rho = \sigma_A/\sigma_B$ and becomes negative for $\rho > \sigma_A/\sigma_B$. In other words, the threshold value $\rho = \sigma_A/\sigma_B$ effectively represents a boundary between two regimes, a "low-correlation regime", where $\lambda_B$ is positive and $\sigma_Y^2$ increases as $\rho$ grows, and a "high-correlation regime", where $\lambda_B$ is negative and $\sigma_Y^2$ decreases as $\rho$ grows. Note that the BLUE variance from the combination of A and B at the boundary between the two regimes, $\rho = \sigma_A/\sigma_B$, is equal to that from A alone ($\sigma_Y^2 = \sigma_A^2$), while it is lower on either side of the boundary.
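A short sketch (ours) makes the two regimes of Eqs. 16-17 explicit:

```python
def two_measurement_blue(sa, sb, rho):
    """BLUE coefficient lambda_B and combined variance for two
    measurements with errors sa < sb and correlation rho (Eqs. 16-17)."""
    d = sa**2 + sb**2 - 2.0 * rho * sa * sb
    lam_b = (sa**2 - rho * sa * sb) / d
    var = sa**2 * sb**2 * (1.0 - rho**2) / d
    return lam_b, var

sa, sb = 1.0, 2.0
rho_star = sa / sb  # boundary between the two regimes

low = two_measurement_blue(sa, sb, 0.2)        # low-correlation regime
edge = two_measurement_blue(sa, sb, rho_star)  # boundary: lam_b = 0, var = sa^2
high = two_measurement_blue(sa, sb, 0.8)       # high-correlation regime
```

At the boundary $\lambda_B$ vanishes and the combined variance equals $\sigma_A^2$; above it $\lambda_B$ turns negative and the variance decreases again as $\rho$ grows towards 1.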
In the same way, the Fisher information from the combination at the boundary between the two regimes is equal to that from A alone, while it is higher on either side of the boundary: in other words, the marginal contribution to information from the addition of B in the combination is zero at the boundary, but it is positive on either side of it. Note in passing that a vanishing BLUE coefficient for B does not mean, however, that the measurement is simply not used in the combination, because the central value measured by B does in any case contribute to the calculation of the overall χ2 for the combination: this statement remains valid for the combination of n measurements, although we will not repeat it in the following.
A possible interpretation of the transition at ρ = σ_A/σ_B, which will become useful later on and is complementary to that given in Ref. [3] (as well as to that given in Ref. [11] using the Cholesky decomposition formalism), is the following. In the low-correlation regime ρ ≤ σ_A/σ_B, the full covariance matrix can be written as the sum of two positive-definite components: one that is common to A and B, i.e. 100% correlated and with the same size in both, and one that is uncorrelated:

M = ρσ_Aσ_B ( 1, 1 ; 1, 1 ) + ( σ_A² − ρσ_Aσ_B, 0 ; 0, σ_B² − ρσ_Aσ_B ).   (18)

Indeed, only when the off-diagonal covariance ρσ_Aσ_B is smaller than both variances σ_A² and σ_B² can the "uncorrelated" error component be positive definite. The possibility to split the covariance matrix in this way can be interpreted by saying that, in the low-correlation regime, the marginal information added to the combination by the less precise measurement B comes from its contribution of independent (uncorrelated) knowledge about the unknown parameter. To combine A and B in this case, in fact, one may simply think of temporarily ignoring the irreducible common component of the error, combining the two measurements based only on the uncorrelated error components, and finally adding back the common error component: it is easy to see that this leads to a total combined variance which adds up to the same value given in Eq. 17. With respect to the measurement of A taken alone, adding B helps in this case by reducing the uncorrelated error component (the second term on the right-hand side).
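The decomposition of Eq. 18, and the "remove the common component, combine, add it back" argument, can be verified with a short numerical sketch (illustrative numbers, with ρ < σ_A/σ_B so that we are in the low-correlation regime):

```python
import numpy as np

# Hedged sketch of the low-correlation split of Eq. 18.
sigA, sigB, rho = 1.0, 2.0, 0.3        # illustrative: rho < sigA/sigB = 0.5
c = rho * sigA * sigB                  # common, 100%-correlated variance
M = np.array([[sigA**2, c], [c, sigB**2]])
common = np.full((2, 2), c)
uncorr = np.diag([sigA**2 - c, sigB**2 - c])   # positive definite only if c < sigA^2
assert np.allclose(M, common + uncorr)

# Combine using only the uncorrelated parts, then add the common part back...
var_combined = c + 1.0 / (1.0 / uncorr[0, 0] + 1.0 / uncorr[1, 1])
# ...and recover exactly the BLUE variance of Eq. 17.
S = sigA**2 + sigB**2 - 2.0 * c
assert np.isclose(var_combined, sigA**2 * sigB**2 * (1.0 - rho**2) / S)
```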
In the high-correlation regime, conversely, the covariance matrix cannot be seen as the sum of a common error and an uncorrelated error as in Eq. 18. Instead, the full covariance matrix can be written as the sum of a component common to A and B, i.e. 100% correlated and with the same size in both, and of another systematic effect that is also 100% correlated, but has different sizes in A and B:

M = σ_Ŷ² ( 1, 1 ; 1, 1 ) + ( σ_A² − σ_Ŷ², √((σ_A² − σ_Ŷ²)(σ_B² − σ_Ŷ²)) ; √((σ_A² − σ_Ŷ²)(σ_B² − σ_Ŷ²)), σ_B² − σ_Ŷ² ),   (19)

where σ_Ŷ² is the combined variance of Eq. 17. In the combined result, the total error σ_Ŷ comes exclusively from the common systematic uncertainty in the first component, while the contribution from the correlated systematic uncertainty in the second component is 0: this can again be seen by removing the common component, combining (the BLUE weights can be chosen so that the two fully correlated errors of different sizes cancel exactly) and then adding it back at the end. For all practical purposes, one can thus say that, in the high-correlation regime, the marginal information added by the less precise measurement B does not come from independent knowledge it contributes about the unknown parameter, but from its ability to constrain and remove a systematic uncertainty that also affects A, but to which B has a larger sensitivity. With respect to the measurement of A taken alone, in fact, adding B helps by completely getting rid of the correlated error component (the second term on the right-hand side). Note also, as discussed in Refs. [3,11], that the two individual measurements A and B are on the same side of the combined estimate in the high-correlation regime (unless they coincide), because λ_B < 0 implies that Ŷ < y_A < y_B or y_B < y_A < Ŷ. To further illustrate this difference between the low and high correlation regimes, it is interesting to study the functional dependency on the correlation ρ of the intrinsic and marginal information weights IIW and MIW that we introduced in Sec. 3.

In Fig. 2 we compare the functional dependencies on ρ of the BLUE coefficients λ, intrinsic information weights IIW, marginal information weights MIW and relative importances RI for one specific example where σ_B/σ_A = 2. As expected, both the intrinsic and marginal information weights of the two individual measurements are non-negative, and they actually coincide with each other and with the BLUE coefficients when ρ = 0, while they have two maxima and two minima, respectively, at the boundary between the low-correlation and high-correlation regimes at ρ = σ_A/σ_B, where the total information from A and B is minimized. In the limit of extremely high correlation ρ = 1, where the combined variance tends to 0 as the information contributed by the correlation between the two measurements tends to infinity, the intrinsic information weights tend to 0 while the marginal information weights tend to 1, because the intrinsic information contributed by each experiment individually is negligible with respect to the large contribution they achieve together through their correlation. For comparison, note instead that the "relative importances" of A and B at ρ = 1 are both positive and sum up to 1 while being different from each other, which in our opinion is another indication that this concept fails to acknowledge the relevance here of the information contribution from correlations.
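The high-correlation decomposition of Eq. 19 can also be verified numerically; the sketch below uses illustrative numbers with ρ > σ_A/σ_B (high-correlation regime):

```python
import numpy as np

# Hedged sketch of the high-correlation split of Eq. 19.
sigA, sigB, rho = 1.0, 2.0, 0.8        # illustrative: rho > sigA/sigB = 0.5
S = sigA**2 + sigB**2 - 2.0 * rho * sigA * sigB
varY = sigA**2 * sigB**2 * (1.0 - rho**2) / S    # combined BLUE variance (Eq. 17)
a = np.sqrt(sigA**2 - varY)   # size of the second 100%-correlated component in A...
b = np.sqrt(sigB**2 - varY)   # ...and its (different) size in B
M = np.array([[sigA**2, rho * sigA * sigB], [rho * sigA * sigB, sigB**2]])
# Common part of size varY plus a fully correlated part with different sizes:
assert np.allclose(M, varY * np.ones((2, 2)) + np.array([[a * a, a * b], [a * b, b * b]]))
```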

Marginal information from the i-th measurement of one parameter
To generalize the concepts of low and high correlation regimes, and show their relation to negative BLUE coefficients, in the more general case of n measurements of one parameter, we now derive formulas to calculate the marginal information ∆I_i of the i-th measurement in an n-measurement combination, using the "information inflow" formalism of Ref. [9]. Without loss of generality, imagine that the n measurements in the combination are reordered so that the i-th measurement we are interested in becomes the last, i.e. the n-th, measurement. The full covariance matrix for all n measurements can then be written as

M = ( D, c ; c̃, d ),   (22)

where the variance of the i-th measurement is given by σ_i² = d, its covariances with all other measurements are the (n−1) elements of the vector c, and D is the (n−1)×(n−1) covariance matrix of these n−1 other measurements. Using Frobenius' formula [9], the inverse of this matrix can be written as

M⁻¹ = ( D⁻¹ + (D⁻¹c)(c̃D⁻¹)/(d − c̃D⁻¹c), −(D⁻¹c)/(d − c̃D⁻¹c) ; −(c̃D⁻¹)/(d − c̃D⁻¹c), 1/(d − c̃D⁻¹c) ),   (23)

where 1/(d − c̃D⁻¹c) > 0 because it is a diagonal element of the inverse of the symmetric positive definite matrix M. Keeping in mind that the information I from the combination of n measurements according to Eq. 9 is simply ŨM⁻¹U, it can easily be shown [9] that the marginal information ∆I_i (or information inflow) from the i-th measurement is given by

∆I_i = (1 − ũD⁻¹c)² / (d − c̃D⁻¹c),   (24)

where u is a vector whose (n−1) elements are all equal to 1 (i.e. the equivalent of U, but with one less measurement).
Let us now analyze this formula, assuming that the covariance matrix D of the other (n−1) measurements and the variance d of the i-th measurement are both fixed, while the (n−1) correlations in c can vary. We then observe that the condition ∆I_i = 0, where information has a minimum, corresponds to a hyperplane in the (n−1)-dimensional space of these correlations, defined by

ũD⁻¹c = 1.   (25)

This hyperplane divides the (n−1)-dimensional space of correlations into two half-spaces: a half-space containing the origin c = 0 (i.e. the point where there are no correlations), which we will therefore call the "low-correlation" regime; and a half-space that does not contain the origin, which we will call the "high-correlation" regime, defined by

ũD⁻¹c > 1.   (26)

Keeping in mind that the BLUE coefficients are given by Eq. 1, i.e. by

λ = M⁻¹U / (ŨM⁻¹U),   (27)

and substituting M⁻¹ by the expression in Eq. 23, we observe that the BLUE coefficient for the i-th measurement is

λ_i = (1 − ũD⁻¹c) / [(d − c̃D⁻¹c) I].   (28)

Having already observed that (d − c̃D⁻¹c) > 0, this implies that

λ_i ≤ 0  if and only if  ũD⁻¹c ≥ 1.   (29)

Comparing this to Eq. 26 shows that the condition λ_i ≤ 0 coincides with that defining the half-space corresponding to the "high-correlation" regime. In other words, the BLUE coefficient for the i-th measurement is negative if and only if its correlations to the other n−1 measurements are higher than the thresholds for crossing over into the high-correlation regime; it is instead zero on the boundary between the two regimes, i.e. if and only if these correlations are such that the measurement contributes no additional information to the combination.
The results presented above are interesting not only because they point out the relation between negative BLUE coefficients and high-correlation regimes, but also because they provide a formula for computing marginal information weights. For the i-th measurement, this is simply equal to

MIW_i = ∆I_i / I = (1 − ũD⁻¹c)² / [(d − c̃D⁻¹c) I].   (30)

Keeping in mind that the intrinsic information weight for the same measurement is IIW_i = (1/σ_i²)/I = 1/(dI), this implies that

λ_i² ≥ IIW_i × MIW_i,   (31)

which provides an interesting relationship between intrinsic information weights, marginal information weights and central value weights. Note, in particular, that the equality sign holds if the i-th measurement is not correlated to any other measurement (i.e. if c is the null vector), in which case all three weights coincide, as seen in Table 1.
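The marginal-information formula of Eq. 24 and the inequality of Eq. 31 can be checked on a toy example; the 3×3 covariance matrix below is an illustrative assumption, not taken from the paper's examples:

```python
import numpy as np

# Hedged sketch of Eq. 24 and of the inequality lambda_i^2 >= IIW_i * MIW_i.
M = np.array([[1.0, 0.3, 0.2],
              [0.3, 4.0, 0.5],
              [0.2, 0.5, 9.0]])
W = np.linalg.inv(M)
info = W.sum()                   # I = U~ M^-1 U
lam = W.sum(axis=1) / info       # BLUE coefficients (Eq. 27)

# Marginal information of the last measurement via the Frobenius formula:
D, c, d = M[:-1, :-1], M[:-1, -1], M[-1, -1]
u = np.ones(len(M) - 1)
Dinv = np.linalg.inv(D)
dI = (1.0 - u @ Dinv @ c) ** 2 / (d - c @ Dinv @ c)
# Cross-check: it equals the information gained by adding the last measurement.
assert np.isclose(dI, info - Dinv.sum())

iiw = (1.0 / np.diag(M)) / info  # intrinsic information weights
miw_last = dI / info             # marginal information weight of the last one
assert lam[-1] ** 2 >= iiw[-1] * miw_last - 1e-12
```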

Information derivatives
In the first part of this section, we have described the boundary between low-correlation and high-correlation regimes in the simplest case of the combination of two measurements, as well as in the more complex but still specific case of the combination of n measurements, where only the n−1 correlations of the i-th measurement to all of the others are allowed to vary. We now analyze the most general case of the combination of n measurements of one parameter, as a function of the n(n−1)/2 correlations of all the measurements to one another. We do this by studying the first derivatives of information with respect to these correlations ρ_ij. Let us consider the linear dependency of the covariance matrix M on the n(n−1)/2 correlations ρ_ij between any two distinct measurements y_i and y_j, assuming instead that the variances M_ii are fixed. Applying the generic formula [9] for the first derivatives of the inverse of a non-singular square matrix with respect to the elements of a vector it depends on,

∂M⁻¹/∂ρ_ij = −M⁻¹ (∂M/∂ρ_ij) M⁻¹,   (32)

and keeping in mind that I = ŨM⁻¹U and that ŨM⁻¹ = Iλ̃ according to Eqs. 9 and 27, respectively, the derivatives of information with respect to the correlations ρ_ij can be written as

∂I/∂ρ_ij = −ŨM⁻¹ (∂M/∂ρ_ij) M⁻¹U = −I² λ̃ (∂M/∂ρ_ij) λ.   (33)

Under our assumption that only the off-diagonal covariances M_ij may vary while the variances M_ii are fixed, the derivatives of the covariance matrix M with respect to the correlations ρ_ij are

(∂M/∂ρ_ij)_kl = σ_i σ_j (δ_ik δ_jl + δ_il δ_jk).   (34)

The derivatives of information with respect to the correlations ρ_ij are then simply given by

∂I/∂ρ_ij = −2 I² λ_i λ_j σ_i σ_j,   (35)

where the factor 2 comes from the fact that the covariance matrix is symmetric and has twice as many off-diagonal elements as there are independent correlations. Equation 35 clearly shows that, if all BLUE coefficients are positive, the first derivatives of information with respect to the correlations between any two measurements are always negative, i.e.
information can only decrease if correlations are further increased: this is the equivalent, in the n(n−1)/2-dimensional space of correlations, of what we have previously called a "low-correlation" regime, as this sub-space is guaranteed to contain the point where all correlations are zero. Conversely, if at least one BLUE coefficient is negative (and keeping in mind that they cannot all be negative), then at least one information derivative must be positive, i.e. there is at least one correlation which leads to higher information if it is increased: this is the equivalent of what we have previously called a "high-correlation" regime. The boundary between the two regimes is a hypersurface in n(n−1)/2-dimensional space, defined by the condition that at least one BLUE coefficient is zero while all the others are non-negative: when this condition is satisfied, the information derivatives with respect to one or more correlations are also zero, meaning that information has reached a minimum in its partial functional dependency on those correlations. This completes the generalization to several measurements of one observable of the discussion presented in Ref. [3] for only two measurements. Note finally that in that case, i.e. for n = 2, all these considerations are trivially illustrated by Figures 1 and 2, showing that the boundary between the low and high correlation regimes in the 1-dimensional space of the correlation ρ is a 0-dimensional hypersurface (a point) at the value ρ = σ_A/σ_B.
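Equation 35 can be verified against a finite-difference derivative; the following sketch uses an illustrative 3×3 covariance with a single non-zero correlation:

```python
import numpy as np

# Hedged finite-difference check of Eq. 35:
#   dI/drho_ij = -2 I^2 lam_i lam_j sig_i sig_j.
def info_of(M):
    return np.linalg.inv(M).sum()

sig = np.array([1.0, 2.0, 3.0])        # illustrative errors
rho01 = 0.4                            # illustrative correlation between y_0 and y_1
M = np.diag(sig**2)
M[0, 1] = M[1, 0] = rho01 * sig[0] * sig[1]

W = np.linalg.inv(M)
I = W.sum()
lam = W.sum(axis=1) / I
analytic = -2.0 * I**2 * lam[0] * lam[1] * sig[0] * sig[1]

eps = 1e-6                             # central difference in rho_01
Mp, Mm = M.copy(), M.copy()
Mp[0, 1] = Mp[1, 0] = (rho01 + eps) * sig[0] * sig[1]
Mm[0, 1] = Mm[1, 0] = (rho01 - eps) * sig[0] * sig[1]
numeric = (info_of(Mp) - info_of(Mm)) / (2.0 * eps)
assert np.isclose(analytic, numeric, rtol=1e-6)
```

Here both BLUE coefficients involved are positive, so the derivative comes out negative, as expected in a low-correlation regime.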

"Conservative" estimates of correlations in BLUE combinations
A precise assessment of the correlations that need to be used as input to BLUE combinations is often very hard. Ideally, one should aim to measure these correlations in the data or by using Monte Carlo methods. This, however, often turns out to be impractical, if not impossible: for instance, when combining results produced by different experiments that use different conventions for assessing the systematic errors on their measurements, or when trying to combine results from recent experiments with older results for which not enough details were published and the expertise and the infrastructure to analyse the data are no longer available. In these situations, it may be unavoidable to combine results using input covariance matrices where the correlations between the different measurements have only been approximately estimated, rather than accurately measured. In the following, we will refer to these estimates as the "nominal" correlations (and we will extensively study the effect on BLUE combination results of reducing correlations below these initial "nominal" values).
In particular, it is not uncommon to read in the literature that correlations have been "conservatively" assumed to be 100%. In this section, we question the validity of this kind of statement. A "conservative" estimate of a measurement error should mean that, in the absence of more precise assessments, an overestimate of the true error (at the price of losing some of the available information from a measurement) is more acceptable than taking the risk of claiming that a measurement is more accurate than it really is. Likewise, by a "conservative" estimate of a correlation, one should mean an estimate which is more likely to result in an overall larger combined error than in a wrong claim of smaller combined errors.
When BLUE coefficients are all positive, i.e. in a low-correlation regime, information derivatives are negative and the net effect of increasing any correlation can only be that of reducing information and increasing the combined error: in this case, choosing the largest possible positive correlations (100%) is clearly the most conservative choice. Our discussion in the previous section, however, shows that negative BLUE coefficients are a sign of a high-correlation regime, where the net effect of increasing some of these correlations is that of increasing information and reducing the combined error: in other words, if correlations are estimated as 100% and negative BLUE coefficients are observed, it is wrong to claim that correlations have been estimated "conservatively".
In this section, we will first analyse under which conditions it is indeed conservative to assume that correlations are 100%, using a simple two-measurement combination as an example. For those situations where a precise evaluation of correlations is impossible, and where setting them to their "nominal" estimates would result in negative BLUE coefficients, we will then offer a few guidelines and tools to help physicists make more conservative estimates of correlations.

Conservative estimates of correlations in a two-measurement combination
Let us consider the combination of two measurements A and B, whose errors are well known, but where the correlation ρ between them could not be precisely determined. We want to determine in this case which would be the most "conservative" estimate for ρ. We do this by studying the functional dependency of the total combined error on ρ (which throughout this Sec. 5.1 is taken as a variable parameter, rather than as any "nominal" estimate of the correlation). We observed in Sec. 4.1 that this combination remains in a low-correlation regime as long as the off-diagonal covariance ρσ_Aσ_B is smaller than both variances σ_A² and σ_B²,

ρσ_Aσ_B ≤ min(σ_A², σ_B²).   (36)

Let us now assume that there are only two sources of uncertainty, an uncorrelated error σ(unc), e.g. of statistical origin, and a single systematic effect σ(cor) whose correlation between the two measurements is ρ_cor. The most conservative estimate for ρ_cor according to Eq. 36 above is thus the largest value of ρ_cor such that

ρ_cor σ_A(cor) σ_B(cor) ≤ σ_A(unc)² + σ_A(cor)²   and   ρ_cor σ_A(cor) σ_B(cor) ≤ σ_B(unc)² + σ_B(cor)².   (37)

This expression is very interesting because it is automatically satisfied by any value of ρ_cor for measurements that are statistically dominated, i.e. where each of σ_A(unc) and σ_B(unc) is much larger than both σ_A(cor) and σ_B(cor): in other words, for statistically dominated measurements, it is indeed correct to state that "correlations are conservatively assumed to be 100%".
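The resulting cap on ρ_cor can be sketched in a few lines of Python; all numbers below are illustrative assumptions, and A is taken to be the measurement with the smaller total error:

```python
# Hedged sketch of the bound implied by Eqs. 36-37: the most conservative
# correlation of a shared systematic is min(1, sigA^2 / (sA_cor * sB_cor)).
def most_conservative_rho(sA_unc, sA_cor, sB_cor):
    sigA2 = sA_unc**2 + sA_cor**2   # total variance of the more precise measurement
    return min(1.0, sigA2 / (sA_cor * sB_cor))

# Statistically dominated case: 100% correlation is indeed conservative.
assert most_conservative_rho(10.0, 1.0, 2.0) == 1.0
# Very different systematic sensitivities: the conservative cap drops below 100%.
assert abs(most_conservative_rho(1.0, 1.0, 5.0) - 0.4) < 1e-12
```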
If the measurements are not statistically dominated, however, the situation is different. Taking σ_A(cor) to be the smaller of the two correlated errors, i.e. σ_A(cor) ≤ σ_B(cor), the second condition is automatically true, while the first condition is satisfied if and only if

ρ_cor ≤ (σ_A(unc)² + σ_A(cor)²) / (σ_A(cor) σ_B(cor)) = σ_A² / (σ_A(cor) σ_B(cor)).   (38)

This shows that, when systematic errors cannot be ignored and the sensitivities of A and B to the correlated systematic effect are so different that (σ_B(cor)/σ_A(cor)) ≥ (σ_A/σ_A(cor))², then it is no longer correct to take 100% as the most conservative value of ρ_cor, and one must choose a correlation that is smaller than 100%. This is shown in Fig. 3, where the most conservative value of ρ_cor is plotted as a function of σ_A(cor)/σ_A, for several values of σ_B(cor)/σ_A(cor). This figure shows that there are two different regimes. When (σ_B(cor)/σ_A(cor)) < (σ_A/σ_A(cor))², the most conservative value of ρ_cor is 1: in this case, both measurements A and B contribute to the combination with positive BLUE coefficients because, no matter how large ρ_cor is, the combination always remains in a low-correlation regime. When (σ_B(cor)/σ_A(cor)) ≥ (σ_A/σ_A(cor))², instead, the most conservative value of ρ_cor is smaller than 1: in this case, the combination is at the boundary of the low and high correlation regimes, where the combined error is maximised and equal to σ_Ŷ = σ_A while λ_A = 1 and λ_B = 0, which is more or less equivalent (modulo the effect on the χ² previously discussed) to excluding the less precise measurement B from the combination.

Identifying the least conservative correlations between n measurements
While it is relatively straightforward to determine the "most conservative" estimate of correlations with only two measurements, things get more complicated when the combination includes n measurements, as the number of inter-measurement correlations increases to n(n−1)/2 and the "most conservative" value for any of them would depend on the values of the others. Instead of treating all correlations as completely free parameters as we did in Sec. 5.1, throughout this Sec. 5.2 we will suggest ways to analyse how "conservative" an existing "nominal" estimate of correlations is. In particular, we will consider the most general case where the full "nominal" covariance matrix is built as the sum of S sources of uncertainty, each with a different set of "nominal" correlations between the measurements,

M = Σ_{s=1..S} M^[s],   with M^[s]_ij = ρ^[s]_ij σ^[s]_i σ^[s]_j.   (39)

If the "nominal" values of the correlations ρ^[s]_ij are not assessed rigorously and it is still conceivable to modify them to make the combination more "conservative", it is certainly useful to know which of these correlations correspond to a high correlation regime and which of them contribute more significantly to a change in the combined error. To answer the first question, one should look at the correlations of those measurements whose BLUE coefficients are negative in the "nominal" combination. In particular, one should concentrate on the correlations ρ_ij between two measurements y_i and y_j such that the derivatives ∂I/∂ρ_ij in Eq. 35 are positive. To answer the second question, we propose to use the information derivatives we derived in Sec. 4.3 and look at which correlation would yield the largest relative decrease of information δI/I for the same relative downward rescaling δρ^[s]_ij/ρ^[s]_ij of that correlation. In other words, we suggest ranking correlations by the normalised information derivative

(ρ^[s]_ij / I) (∂I/∂ρ^[s]_ij) = −2 I λ_i λ_j ρ^[s]_ij σ^[s]_i σ^[s]_j,   (40)

where all quantities in the formula (which is easily derived by extending Eq. 33) are computed at the "nominal" values of the correlations.
The correlations between measurements i and j and for error source s with the highest positive values of the normalised derivative in Eq. 40 are those that should be most urgently reassessed. The quantity in Eq. 40 is a dimensionless number: in the simple example presented in Sec. 2, for instance, the value of this normalised derivative for the correlation between B1 and B2 (for the single source of uncertainty considered in that example) is as high as +2.52, indicating that the combined variance would increase by 2.52% for a relative reduction of the correlation by 1% of its "nominal" value (from 0.87500 to 0.86625). We will illustrate this in more detail at the end of this section using a numerical example.
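The ranking of Eq. 40 can be sketched as follows; the per-source covariance matrices used below are illustrative toy inputs, not the Sec. 2 numbers:

```python
import numpy as np

# Hedged sketch ranking correlations by the normalised information derivative
# of Eq. 40, (rho/I)(dI/drho) = -2 I lam_i lam_j M[s]_ij.
def normalised_derivatives(partials):
    """partials: list of per-source covariance matrices M[s]; their sum is M."""
    M = sum(partials)
    W = np.linalg.inv(M)
    I = W.sum()
    lam = W.sum(axis=1) / I
    out = []
    for s, Ms in enumerate(partials):
        for i in range(len(Ms)):
            for j in range(i + 1, len(Ms)):
                out.append(((s, i, j), -2.0 * I * lam[i] * lam[j] * Ms[i, j]))
    return sorted(out, key=lambda t: -t[1])    # most "urgent" first

# High-correlation toy example: the coefficient of the less precise
# measurement is negative, so the normalised derivative comes out positive.
ranked = normalised_derivatives([np.array([[1.0, 1.8], [1.8, 4.0]])])
assert ranked[0][1] > 0.0
```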
Note that the sums of all (ρ [s] i j /I)(∂ I/∂ ρ [s] i j ) over all error sources s effectively represent the effect on information of rescaling the correlations between measurements i and j by the same factor for all error sources, while their sums over measurements i and j represent the effect of rescaling the correlations between all measurements by the same factor in a given error source s. Likewise, their global sum over measurements i and j and error sources s represents the effect on information of rescaling all correlations by a global factor. While they lack the granularity to give more useful insight about which correlations are most relevant when trying to make the combination more conservative, these sums also represent interesting quantities to analyse in some situations. In particular, we will point out in Sec. 5.3.1 that each of these different sums of derivatives becomes zero in one of the information minimization procedures that will be described in that section.

Reducing correlations to make them "more conservative"
Having proposed a way to identify which "nominal" correlations have not been estimated "conservatively" and may need to be reassessed, we now propose some practical procedures to reduce their values and try to make the combination more conservative, when a full and precise reevaluation of these correlations is impossible. What follows must be understood as simple guidelines to drive the work of experimental physicists when combining measurements: we propose different methods, but the applicability of one rather than the other, which also implies some level of arbitrariness, would have to be judged on a case-by-case basis.
We propose three main solutions to the problem of reducing the (large and positive) "nominal" values of correlations to make the combination more conservative: the first is a numerical minimization of information with respect to these correlations, the second consists in ignoring some of the input measurements, and the third one is a prescription that we indicate with the name of "onionization" and that consists in decreasing the off-diagonal elements in the covariance matrices so that they are below a specific threshold. At the end of the section we will present a practical example that illustrates the different features of these methods.

Minimizing information by numerical methods
This approach is based on a multi-dimensional minimization of information as a function of rescaling factors applied to the "nominal" values of correlations. In the most general case, one would independently rescale by a factor f^[s]_ij between 0 and 1 each correlation ρ^[s]_ij between the errors on the measurements y_i and y_j due to the s-th source of uncertainty. This corresponds to studying the effect on information of replacing the "nominal" covariance matrix M^[s] for each error source by a modified covariance matrix (M′)^[s] given by

(M′)^[s]_ij = f^[s]_ij ρ^[s]_ij σ^[s]_i σ^[s]_j (for i ≠ j),   (M′)^[s]_ii = M^[s]_ii,   (41)

where

0 ≤ f^[s]_ij ≤ 1.   (42)

Minimizing information by varying all of these S×n(n−1)/2 scale factors f^[s]_ij, however, is not an option, because information (i.e. the inverse of the total combined variance) ultimately depends only on the off-diagonal elements of the full covariance matrix and the minimization would be under-constrained. We therefore considered three more specific minimization scenarios. The first scenario, which we indicate as "ByErrSrc", consists in rescaling all correlations within each error source by the same factor f^[s], thus involving S independent scale factors (f^[s]_ij = f^[s] for every i and j). The second, which we indicate as "ByOffDiagElem", consists in rescaling in all error sources the correlation between y_i and y_j by the same factor f_ij, thus involving n(n−1)/2 independent scale factors (f^[s]_ij = f_ij for every s). Finally, the simplest case, which we indicate as "ByGlobFac", consists in rescaling all correlations by the same global factor f (i.e. f^[s]_ij = f for every i, j and s). A software package, called BLUEFIN, was specifically prepared to study all these issues. Within this package, numerical minimizations are performed using the MINUIT [12] libraries through their integration in ROOT [13], imposing the constraint that scale factors remain between 0 and 1.
All scale factors are varied in the minimization, except those which are known to have no effect on information because the information derivatives with respect to them (which are essentially those presented in Sec. 4.3) are zero both at "nominal" and at zero correlations (i.e. when all scale factors are 1 and 0, respectively).
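The simplest "ByGlobFac" scenario can be illustrated with a basic grid scan instead of MINUIT (a hedged sketch, not the BLUEFIN implementation; the input covariance below is an illustrative assumption):

```python
import numpy as np

# Hedged sketch of the "ByGlobFac" scenario as a grid scan: rescale all
# off-diagonal covariances by one factor f in [0, 1] and keep the f that
# minimises the information (i.e. maximises the combined variance).
def by_glob_fac(M, steps=1000):
    D = np.diag(np.diag(M))
    off = M - D
    best_f, best_info = 1.0, np.linalg.inv(M).sum()
    for f in np.linspace(0.0, 1.0, steps + 1):
        info = np.linalg.inv(D + f * off).sum()
        if info < best_info:
            best_f, best_info = f, info
    return best_f, 1.0 / best_info   # scale factor and maximised combined variance

# Two measurements with sigA=1, sigB=2 and nominal rho=0.9: information is
# minimised where the rescaled correlation reaches sigA/sigB = 0.5.
f, var = by_glob_fac(np.array([[1.0, 1.8], [1.8, 4.0]]))
assert abs(f - 0.5 / 0.9) < 2e-3 and abs(var - 1.0) < 1e-5
```

For a pair already in the low-correlation regime, the scan stops at f = 1, i.e. the "nominal" correlations are left untouched.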
The "ByOffDiagElem" minimization is the trickiest, as it may trespass into regions where the total covariance matrix is not positive definite, sometimes in an unrecoverable way, in which case we declare the minimization to have failed. Even when this minimization does converge to a minimum, one should keep in mind that at this point the partial covariance matrices for the different error sources may not be positive definite, i.e. they may have negative eigenvalues: this is clearly a non-physical situation, which should be used for illustrative purposes only and is not suitable for a physics publication.
Not surprisingly, in the very simple example presented in Sec. 2, where only one off-diagonal correlation is non-zero and errors are assumed to come from a single source of uncertainty, these three minimizations all converge to the same result, where the off-diagonal covariance is reduced to ρσ_B1σ_B2 = σ_B1² = 16.00, which leads to a combination where λ_B2 = 0 and again the less precise measurement B2 is essentially excluded. In a more general case with several non-zero correlations and many different sources of uncertainty, the three minimizations may instead converge to rather different outcomes. The BLUEFIN software will also be used for the numerical examples shown at the end of the section.

Iteratively removing measurements with negative BLUE coefficients
Having observed many times that choosing "more conservative" correlations may ultimately lead to combinations where the BLUE coefficients of one or more measurements are increased from a negative value to zero, it is perfectly legitimate to think of excluding these measurements from the combination from the very beginning. If one chooses to adopt this approach, we suggest doing so iteratively: first remove the measurement with the most negative BLUE coefficient, then perform a new combination, and iterate until only positive BLUE coefficients remain. This procedure is guaranteed to converge, as the combination of a single measurement has a single BLUE coefficient equal to 1. We will present an example later on.
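The iterative procedure can be sketched as follows (a hedged illustration, with toy covariance matrices as assumptions):

```python
import numpy as np

# Hedged sketch of the iterative-removal guideline: repeatedly drop the
# measurement with the most negative BLUE coefficient and recombine.
def iterative_removal(M):
    keep = list(range(len(M)))
    while True:
        W = np.linalg.inv(M[np.ix_(keep, keep)])
        lam = W.sum(axis=1) / W.sum()
        if lam.min() >= 0.0:
            return keep, lam              # indices kept and their BLUE coefficients
        keep.pop(int(np.argmin(lam)))     # drop the most negative coefficient

# sigA=1, sigB=2, rho=0.9: lam_B < 0, so B is removed and A alone remains.
keep, lam = iterative_removal(np.array([[1.0, 1.8], [1.8, 4.0]]))
assert keep == [0] and np.isclose(lam[0], 1.0)
```

Convergence is guaranteed because a single remaining measurement always has a BLUE coefficient of 1.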
Excluding measurements from a combination may be a very controversial decision to take. At the same time, if there are negative BLUE coefficients and it is impossible to determine precisely the correlations, this may be the truly conservative and soundest scientific choice, to avoid the risk of claiming combined results more accurate than they really are. Note that excluding a measurement differs from including it with a rescaled correlation which gives it a zero BLUE coefficient, as in the latter case the measurement does contribute to the χ 2 for the fit while in the former case it does not. If correlations for that measurement cannot be precisely assessed, in any case, even the accuracy of its contribution to the χ 2 with an ad-hoc rescaled correlation is somewhat questionable and it may be better to simply exclude the measurement from the combination altogether.
Note finally that high correlations between different measurements in a combination are not only caused by correlated systematic uncertainties in the analyses of independent data samples, but are also expected for statistical uncertainties when performing different analyses of the same data samples. In these cases, where negative BLUE coefficients would be likely if two such measurements were combined, it is already common practice to only publish the more precise analysis and simply use the less precise one as a cross-check.

The "onionization" prescription
Guided by the remarks we have made so far in this paper, but without any formal demonstration, we finally propose a simple rule of thumb for trying to modify "nominal" correlations to make them more "conservative". In the following, we will again indicate by M the "nominal" and by M′ the modified covariance matrices (which differ only in their off-diagonal terms, as we only modify correlations while keeping variances unchanged). Our proposal essentially consists in generalising Eq. 36 to n measurements, by defining correlations so that the total modified covariance (M′)_ij between any two measurements remains smaller than both individual total variances,

(M′)_ij ≤ min(M_ii, M_jj).   (43)

Recalling the interpretation of high-correlation regimes we gave in Sec. 4.1, the idea is to prevent situations where one part of the systematic error is 100% correlated between different measurements which have different sensitivities to it, as their joint action would effectively constrain that effect and lead to a reduction of the combined error. It should be noted that this procedure is similar to the "minimum overlap" assumption that was used to estimate the correlation between systematic errors for different energies or experiments in several QCD studies at LEP, including, but not limited to, the analysis presented in Ref. [14].
In more detail, if the full covariance matrix is built as the sum of S error sources as in Eq. 39, then the S×n(n−1)/2 correlations (M′)^[s]_ij / √((M′)^[s]_ii (M′)^[s]_jj) need to be separately estimated in the S partial covariances (M′)^[s]. We considered two possible rules of thumb to provide conservative estimates of the partial covariances satisfying Eq. 43.
The first one consists in requiring that

(M′)^[s]_ij ≤ min(M_ii, M_jj),   (44)

i.e. in keeping the (M′)^[s]_ij unchanged and equal to their "nominal" values M^[s]_ij if these satisfy Eq. 44, or in reducing them to the upper bounds above otherwise. This limits the off-diagonal covariance for each error source to the maximum allowed for the sum of all such contributions, but by doing so it does not strictly ensure that their sum does not exceed this limit; hence the resulting correlations may still be overestimated with respect to their "most conservative" values.
The second rule of thumb consists in requiring that

(M′)^[s]_ij ≤ min(M^[s]_ii, M^[s]_jj),   (45)

i.e. again in keeping the (M′)^[s]_ij unchanged and equal to their "nominal" values M^[s]_ij if these satisfy Eq. 45, or in reducing them to the upper bounds above otherwise. This limits the off-diagonal covariance for each error source to the maximum allowed when only that error source is considered, as if there were no others or they were all negligible with respect to it. While this second rule of thumb may result in a full covariance matrix where the off-diagonal covariances are even lower than the "most conservative" values in Eq. 43, we believe that this is a more solid procedure, because it is applied to each error source independently. In particular, we think that this may guarantee more "conservative" values of the combined BLUE errors in the different error sources, and not only for their total.
In the following, we will refer to this rule of thumb as the "onionization" prescription. In fact, if we consider a set of measurements {A, B, C, D, ...}, ordered so that σ^[s]_A ≤ σ^[s]_B ≤ σ^[s]_C ≤ σ^[s]_D for the s-th source of uncertainty, this prescription ensures that the corresponding partial covariance matrix (M′)^[s] has no unreasonably large off-diagonal element, but is instead bounded, element by element, by a matrix with a regular layered pattern similar to that of an onion:

  (M′)^[s] ≤ [ (σ^[s]_A)²  (σ^[s]_A)²  (σ^[s]_A)²  (σ^[s]_A)²
               (σ^[s]_A)²  (σ^[s]_B)²  (σ^[s]_B)²  (σ^[s]_B)²
               (σ^[s]_A)²  (σ^[s]_B)²  (σ^[s]_C)²  (σ^[s]_C)²
               (σ^[s]_A)²  (σ^[s]_B)²  (σ^[s]_C)²  (σ^[s]_D)² ]

Not surprisingly, in the simple example presented in Sec. 2, the onionization prescription gives the same result as the three minimization procedures described above (i.e. ρ σ_B1 σ_B2 = σ²_B1 = 16.00), leading to a combination where λ_B2 = 0 and again the less precise measurement B2 is essentially excluded. A more complex example is presented below.
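A minimal sketch of the onionization of a single source's partial covariance (hypothetical helper name and numbers; each off-diagonal element is clipped to the smaller of the two corresponding variances, as in Eq. 45):

```python
import numpy as np

def onionize(Ms):
    """Onionization of one source's partial covariance: clip each
    off-diagonal element to the smaller of the two corresponding
    variances; diagonal elements are left unchanged."""
    Mc = Ms.astype(float).copy()
    n = Mc.shape[0]
    for i in range(n):
        for j in range(n):
            if i != j:
                Mc[i, j] = min(Mc[i, j], Mc[i, i], Mc[j, j])
    return Mc

# Hypothetical source, 100% correlated, with sigma = (1, 2, 3, 4):
s = np.array([1.0, 2.0, 3.0, 4.0])
M_onion = onionize(np.outer(s, s))
# Each off-diagonal (i, j) becomes the smaller of the two variances,
# producing the layered 'onion' pattern with shells 1, 4, 9 around
# the diagonal (1, 4, 9, 16).
```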
Note that, in the procedure described above, as well as in its implementation in the BLUEFIN software that we used to produce the results presented in the next section, we systematically apply the "onionization" of the partial covariance matrix for each source of uncertainty. In a real combination, it may be more appropriate to apply this procedure only to the partial covariance matrices of those sources of uncertainty for which at least some of the information derivatives in Eq. 40 are positive. More generally, we stress again that we only propose this prescription as a rule of thumb: no automatic procedure can replace an estimate of correlations based on a detailed understanding of the physics processes responsible for each source of systematic uncertainty.

Table 2 Results of the combination of y_A, y_B, y_C and y_D (χ²/ndof = 4.23/3). The central value, total error and individual error components for each input measurement i are listed, followed by the central value weight CVW_i (or λ_i), the intrinsic information weight IIW_i (also shown for the correlations), the marginal information weight MIW_i, and the relative importance RI_i. In the last row, the BLUE central value and errors and the sum of all weights in each column are displayed.

A more complex example
As an illustration of the tools we presented in this paper, we finally present a slightly more complex example representing the fictitious combination of four different cross-section measurements A, B, C and D. For consistency with the notation used so far, and to avoid any confusion with the use of the symbol σ to indicate variances, we refer to the cross-section observable as y, to its four measurements as y_A, y_B, y_C, y_D, and to its BLUE estimate as Ŷ. Let us assume in the following that the central values and errors on the four measurements are those listed in Table 2, where UNC indicates the uncorrelated errors of statistical or systematic origin, while BKGD and LUMI are systematic errors assumed to be 100% correlated between the first three experiments, due for instance to a common background and a common luminosity measurement.
The assumption that the background is fully correlated between all experiments may be the result of a detailed analysis, or a supposedly "conservative" assumption in the absence of more precise correlation estimates. It is rather unlikely that a more detailed analysis would not be performed in a case like this one: in particular, in this type of situation, with such a large difference in the sizes of the fully correlated BKGD errors in the different measurements, we would recommend trying to split the BKGD systematics into its sub-components in the combination. This is, however, clearly an example for illustrative purposes only.
Under the given assumptions, the results of the BLUE combination are those shown in Table 2, where information weights and relative importance are also listed. Several comments can be made about these numbers. First, if the correlation estimates are actually correct, then the negative BLUE coefficient for y_B indicates that we are effectively using this measurement to constrain the background: note, in particular, the very small final uncertainty on the background after the BLUE combination. Second, y_D is, for all practical purposes, a measurement independent of y_A, y_B and y_C: this is reflected in the fact that its intrinsic and marginal information weights are both equal to its central value weight. Third, the relative importance of y_D is clearly underestimated, while that of y_B is clearly overestimated, as it is larger than both its intrinsic and marginal information weights.
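The quantities discussed here can be reproduced generically. Below is a sketch with our own illustrative numbers (not those of Table 2), computing the BLUE coefficients and the intrinsic information weights for a two-measurement case in a high-correlation regime; we take the intrinsic weight of measurement i as its standalone information divided by the combined information, and the marginal weight as the fraction of information lost when i is dropped:

```python
import numpy as np

def blue(M):
    """BLUE coefficients, combined variance and Fisher information for
    measurements of a single quantity with covariance matrix M."""
    u = np.ones(M.shape[0])
    Minv_u = np.linalg.solve(M, u)
    info = u @ Minv_u              # Fisher information I = u^T M^-1 u
    return Minv_u / info, 1.0 / info, info

# Hypothetical inputs: sigma = (1, 2), rho = 0.9 (so rho > sigma1/sigma2).
M = np.array([[1.0, 1.8],
              [1.8, 4.0]])
lam, var, info = blue(M)
# lam ~ (1.57, -0.57): the less precise measurement gets a negative weight,
# and the combined variance (~0.54) is smaller than the best input variance.
iiw = (1.0 / np.diag(M)) / info    # intrinsic information weights
iiw_corr = 1.0 - iiw.sum()         # collective contribution of correlations
miw = np.array([(info - 1.0 / M[1, 1]) / info,   # marginal weights: info
                (info - 1.0 / M[0, 0]) / info])  # lost if i is dropped
```

Note that iiw_corr is strictly positive here: part of the combined information cannot be attributed to either measurement alone.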
It is also interesting to look, in this example, at the normalised information derivatives described in Sec. 4.3. These are shown in Table 3. The table tells us that the negative BLUE coefficient for y_B is primarily due to its correlations with y_A, mainly that between the BKGD errors, but to a lesser extent also that between the LUMI errors: these two information derivatives, in fact, are both positive and very large. The correlations between y_B and y_C also go in the direction of increasing information, while those between y_A and y_C are in the opposite regime and decrease information.
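Such normalised information derivatives can be estimated numerically by finite differences; the following sketch (hypothetical two-measurement covariance and helper names) shows the sign flip between the low- and high-correlation regimes:

```python
import numpy as np

def information(M):
    """Fisher information I = u^T M^-1 u for a single unknown parameter."""
    u = np.ones(M.shape[0])
    return u @ np.linalg.solve(M, u)

def norm_info_derivative(M, i, j, eps=1e-7):
    """Normalised derivative (rho/I) dI/drho for the (i, j) correlation,
    estimated by rescaling M[i, j] by (1 + eps)."""
    Mp = M.copy()
    Mp[i, j] *= 1.0 + eps
    Mp[j, i] = Mp[i, j]
    I0 = information(M)
    return (information(Mp) - I0) / (eps * I0)

def M2(rho, s1=1.0, s2=2.0):
    """Covariance of two measurements with errors s1, s2 and correlation rho."""
    return np.array([[s1 * s1, rho * s1 * s2],
                     [rho * s1 * s2, s2 * s2]])

d_high = norm_info_derivative(M2(0.9), 0, 1)  # rho > s1/s2: positive
d_low = norm_info_derivative(M2(0.2), 0, 1)   # rho < s1/s2: negative
```

A positive derivative flags a correlation for which increasing the assumed value would (perhaps counter-intuitively) increase the information, i.e. shrink the combined error.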
We now apply to this example the minimization, negative BLUE coefficient removal and onionization prescriptions described in this section. The results of the combinations performed after modifying correlations according to these prescriptions are listed in Table 4.

Table 3 Normalised information derivatives (ρ/I) dI/dρ for the combination of y_A, y_B, y_C and y_D in the cross-section example, computed at "nominal" correlation values. The last column and last row list the information derivatives obtained when the same rescaling factor is used for a given off-diagonal element or error source; these are equal to the sums of the individual derivatives in each row and column, respectively.
The most striking effect, perhaps, is the fact that all modifications of the "nominal" correlations to make them "more conservative" lead to significant central value shifts (i.e. possibly to biased combined estimates) and to much larger combined BKGD systematics, in spite of relatively small increases in the total combined errors. In particular, it is somewhat counter-intuitive that the combined uncorrelated error decreases when reducing correlations, while the combined systematic errors increase: this is likely to be another feature of the high-correlation regime characterizing the "nominal" correlations of this example. We stress that, in real situations, it is important to analyse these effects, and not only the effect on the total combined error, when testing different estimates of correlations. This is especially important if one keeps in mind that the results of BLUE combinations are generally meant to be further combined with other results (e.g. the combined top masses from the LHC and from the Tevatron will eventually be combined).
It is not too surprising, conversely, that the effects on combined BKGD systematics are much larger than those on the combined LUMI systematics. This could be guessed by remembering that normalised information derivatives are much larger for the former than for the latter.
It is also not surprising that the "ByOffDiagElem" minimization gives essentially the same results (except for the χ² value) as those found when excluding the measurements with negative BLUE coefficients. By construction, in fact, this is the only one of the three minimizations which almost always guarantees that BLUE coefficients which were initially negative end up equal to zero after the minimization: if the minimum is a local minimum, some of the derivatives in Eq. 35, which are directly proportional to the BLUE coefficients, must eventually be zero.
Note also that the onionization prescription leads to the only combination where the BLUE coefficient for measurement y_B becomes strictly positive. As mentioned earlier, this may be a consequence of the fact that this prescription may reduce correlations even below their "most conservative" values, moving well into the low-correlation regime. In this respect, it is interesting to look at the effect of onionization on the partial covariance matrices, and more generally at the effect of all the procedures presented in this section on the total covariance matrices: these are shown in Tables 5 and 6, respectively.
In particular, note in Table 5 that the onionization procedure (and the same is true for the minimizations) affects the correlations for the BKGD and LUMI error sources in exactly the same way, without distinction. If this were a real combination, instead, one would most likely keep the LUMI correlation unchanged (because a common luminosity measurement would indeed result in a 100% correlation between y_A, y_B and y_C, and these three measurements together could even help to constrain the error on it), concentrating instead on the re-assessment of the BKGD correlation alone (because the initial "nominal" estimate of 100% correlation is neither conservative nor realistic in the presence of different sensitivities to differential distributions).
It should finally be added that the total covariance matrix derived from the onionization prescription is used as the starting point of the "ByOffDiagElem" minimization in the BLUEFIN software, as we have found this to improve the efficiency of the minimization procedure.

Table 6 Modified input covariances for the four measurements in the cross-section example, when reducing correlations according to the procedures described in this paper. The values are given in pb².

As an additional
cross-check of the onionization prescription, we also tested a fourth type of minimization, where information is independently minimized for each source of uncertainty as if it were the only one, varying each time only the correlations within that error source (after removing the measurements not affected by it and slightly reducing the allowed correlation ranges to keep the partial covariance positive definite). The preliminary results of this test (not included in Table 4) indicate that these minimizations do not significantly move the partial covariances or the final result away from those obtained through the onionization prescription, which are used as a starting point also in this case. We conclude this section by recalling that the prescriptions presented here are only empirical recipes that assume no prior knowledge of the physics involved and, for this reason, can never be valid substitutes for a careful quantitative analysis of correlations using real or simulated data. A precise estimate of correlations is important in general, but absolutely necessary in high-correlation regimes, where it may be as important as a precise assessment of the measurement errors themselves.

Conclusions
Combining many correlated measurements is a fundamental and unavoidable step in the scientific process of improving our knowledge about a physical quantity. In this paper, we recalled the relevance of the concept of Fisher information to quantify and better understand this knowledge. We stressed that it is extremely important to understand how the information available from several measurements is effectively used in their combination, not only because this allows a fairer recognition of the relative merit of each measurement's contribution to the knowledge about the unknown parameter, but especially because this makes it possible to produce a more robust scientific result by critically reassessing the assumptions made in the combination.
In this context, we described how the correlations between the different measurements play a critical role in their combination. We demonstrated, in particular, that the presence of negative coefficients in the BLUE weighted average of any number of measurements is a sign of a "high-correlation regime", where the effect of increasing correlations is that of reducing the error on the combined result. We showed that, in this regime, a large contribution to the combined knowledge about the parameter comes from the joint impact of several measurements through their correlations and we argued, as a consequence, that the merit for this particular contribution to information cannot be claimed by any single measurement individually. In particular, we presented our objections to the standard practice of presenting the "relative importances" of different measurements based on the absolute values of their BLUE coefficients, and we proposed the use of ("intrinsic" and "marginal") "information weights" instead.
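For two measurements the high-correlation regime can be made fully explicit with the standard closed-form BLUE expressions; the following is our own illustrative check (hypothetical errors σ1 = 1, σ2 = 2), not taken from the paper's examples:

```python
def blue_weight2(s1, s2, rho):
    """BLUE coefficient of the less precise of two measurements (s1 <= s2);
    it turns negative as soon as rho exceeds s1/s2."""
    return (s1 * s1 - rho * s1 * s2) / (s1 * s1 + s2 * s2 - 2 * rho * s1 * s2)

def blue_variance(s1, s2, rho):
    """Combined BLUE variance for two correlated measurements."""
    return (s1 * s1 * s2 * s2 * (1 - rho * rho)
            / (s1 * s1 + s2 * s2 - 2 * rho * s1 * s2))

# In the high-correlation regime (rho > s1/s2 = 0.5), increasing rho
# DECREASES the combined error, so assuming full correlation is not a
# conservative choice there:
w_09 = blue_weight2(1.0, 2.0, 0.9)     # negative weight
v_09 = blue_variance(1.0, 2.0, 0.9)
v_095 = blue_variance(1.0, 2.0, 0.95)  # smaller than v_09
```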
In the second part of the paper, we questioned under which circumstances assuming systematic errors as fully correlated can be considered a "conservative" procedure. We proposed the use of information derivatives with respect to inter-measurement correlations as a tool to identify those "nominal" correlations for which this assumption is wrong and a more careful evaluation is necessary. We also suggested a few procedures for trying to make a combination more "conservative" when a precise estimate of correlations is simply impossible.
We should finally note that BLUE combinations are not the only way to combine different measurements, but they are the simplest to understand when combinations are performed under the most favorable assumptions, namely that the measurements are multivariate Gaussian distributed with covariances known a priori, as in this case all relevant quantities become easily calculable by matrix algebra. We therefore stress that, while the results in this paper were obtained under these assumptions and using the BLUE technique, large positive correlations are guaranteed to have a big impact, and should be watched out for, also in combinations performed with other methods or under other assumptions.

This work has been inspired by many discussions, during private and public meetings, on the need for critically reviewing the assumptions about correlations and the meaning of "weights" when combining several measurements in the presence of high correlations between them. It would be difficult to mention and acknowledge all those colleagues who have pointed us in the right direction and with whom we have had very fruitful discussions. We are particularly grateful to the members of the TopLHCWG and to the ATLAS and CMS members who have helped in the reviews of the recent top mass combinations at the LHC.
We would also like to thank our colleagues who have sent us comments about the first two public versions of this paper. In particular, it is a pleasure to thank Louis Lyons for his extensive feedback and his very useful suggestions. We are also grateful to the EPJC referees for their detailed and insightful comments, as well as for making us aware of the research presented in Ref. [11].
Finally, A.V. would like to thank the management of the CERN IT-ES and IT-SDC groups for allowing him the flexibility to work on this research alongside his other commitments in computing support for the LHC experiments.