Higher-order asymptotic corrections and their application to the Gamma Variance Model

We present improved methods for calculating confidence intervals and p values in situations where standard asymptotic approaches fail due to small sample sizes. We apply these techniques to a specific class of statistical model that can incorporate uncertainties in parameters that themselves represent uncertainties (informally, "errors on errors") called the Gamma Variance Model. This model contains fixed parameters, generically denoted by ε, that represent the relative uncertainties in estimates of standard deviations of Gaussian distributed measurements. If the ε parameters are small, one can construct confidence intervals and p values using standard asymptotic methods. This is formally similar to the familiar situation of a large data sample, in which estimators for all adjustable parameters have Gaussian distributions. Here we address the important case where the ε parameters are not small and as a consequence the first-order asymptotic distributions do not represent a good approximation. We investigate improved test statistics based on the technology of higher-order asymptotics (modified likelihood root and Bartlett correction).
The effective application of higher-order corrections removes an important computational barrier to the use of the Gamma Variance Model.


Introduction
In experimental sciences such as Particle Physics one collects data, here denoted by y, and seeks to make inferences about a hypothesis H that defines the probability distribution for the data, P(y|H). Often P(y|H) is indexed by a set of parameters of interest µ and by a set of nuisance parameters θ, thus P(y|H) = P(y|µ, θ). The parameters of interest are the main objective of the analysis, whereas nuisance parameters are often introduced to account for systematic uncertainties in the model.
We focus here on frequentist tests of the hypothesized parameters that use a test statistic derived from the likelihood function L(µ, θ) = P(y|µ, θ). These tests lead to confidence intervals or regions for the parameters of interest as well as p-values that quantify goodness of fit. To find these results, one requires the sampling distribution of test statistics that are obtained from the likelihood function and are described in greater detail below. For appropriately defined test statistics, the corresponding distributions can often be found using asymptotic results based on theorems due to Wilks [1] and Wald [2] (see, e.g., [3,4]). The asymptotic distributions are valid in specific limits, which usually correspond to having a large data sample, whose size we will denote generically by n.
In this paper we are interested specifically in the case where n is not sufficiently large for the asymptotic distributions of the relevant test statistics to represent a good approximation. In such problems one could use Monte Carlo methods to obtain the distributions, but this involves additional time-consuming computation. Instead, one can modify the test statistic using higher-order asymptotic methods, specifically the r* statistic of Barndorff-Nielsen [5] and the Bartlett correction [6], as described, e.g., in [7,8]. With these methods, the distribution of the modified statistic becomes closer to the asymptotic form, allowing one to find confidence intervals and p-values without use of Monte Carlo.
In this paper we consider applications of higher-order asymptotic methods to the Gamma Variance Model (GVM), which was proposed in Ref. [9]. In the GVM, measured values are modeled as following Gaussian distributions with a mean that depends on the parameters of the problem, and with variances σ² whose values are themselves not certain. The variances as well are thus taken as adjustable parameters, and the values one would assign to them are treated as measurements that follow a gamma distribution with parameters α and β (see Sec. 4 below). These parameters are assigned so that the gamma distribution's relative width reflects the desired uncertainty on σ². This is quantified using the quantity ε = 1/(2√α), which to first approximation is the relative uncertainty on the estimate of the standard deviation σ, informally referred to as the "error on the error". In the Gamma Variance Model there is a correspondence between the error-on-error parameters ε and an effective sample size n, given by n = 1 + 1/(2ε²). That is, the large-sample limit corresponds to the case where ε → 0 and thus the values of σ are accurately estimated. For many analyses, however, the assigned values of standard deviations for individual measurements may easily be uncertain at the level of several tens of percent or more. In this case the effective sample size is low and thus the asymptotic distributions of likelihood-based test statistics are not necessarily valid. The goal of this paper is to apply higher-order asymptotics to this model and thus achieve more accurate confidence levels and p-values.
In Sec. 2 we briefly review the basic techniques for finding confidence intervals and p-values in a general likelihood-based analysis, and Sec. 3 describes how these techniques can be improved using higher-order asymptotics. In Sec. 4 we recall the important properties of the Gamma Variance Model, and then we explore use of higher-order asymptotic corrections in three specific realizations of the model: in Sec. 5 we apply corrections to a simple example of the GVM based on a single measurement, in Sec. 6 to an average of measurements, and in Sec. 7 to an average of measured values that includes control measurements to constrain nuisance parameters. A summary and conclusions are given in Sec. 8.

Parameter inference using the profile likelihood ratio
In this section we review the basic technology used to find confidence intervals and p-values from test statistics derived from the likelihood ratio and the likelihood root by using the first-order asymptotic distributions based on Wilks' theorem. Further details on these methods as applied in Particle Physics analyses can be found, e.g., in Ref. [3].
In statistical data analysis, the central object needed to carry out inference related to the parameters of interest µ using measured data y is the likelihood function L(µ, θ) = P(y|µ, θ). Suppose there are M parameters of interest µ = (µ_1, ..., µ_M) and N nuisance parameters θ = (θ_1, ..., θ_N), which are introduced to account for systematic uncertainties. In frequentist statistics, a test of hypothesized parameter values can be carried out by defining a test statistic based on the (profile) likelihood ratio

$$ \lambda(\mu) = \frac{L(\mu, \hat{\hat{\theta}})}{L(\hat{\mu}, \hat{\theta})}, \qquad (1) $$

from which one defines

$$ w_\mu = -2 \ln \lambda(\mu) = 2\left[\ell(\hat{\mu}, \hat{\theta}) - \ell(\mu, \hat{\hat{\theta}})\right]. \qquad (2) $$

Here µ̂ and θ̂ are the Maximum Likelihood Estimators (MLEs) for the parameters of interest and the nuisance parameters, respectively, and θ̂̂ are the profiled (or constrained) estimators of the nuisance parameters, given by the values of θ that maximize the likelihood for a fixed value of µ (here and below we use ℓ = ln L to denote the log-likelihood). The likelihood ratio is used to test the compatibility of a value of µ with the experimental data, with greater w_µ corresponding to increasing incompatibility.
The likelihood ratio can be used to derive a confidence region for the parameters of interest µ (or a confidence interval if there is just one parameter of interest) by computing the p-value for a hypothesized value of µ,

$$ p_\mu = \int_{w_{\mu,\mathrm{obs}}}^{\infty} f(w_\mu \mid \mu, \theta)\, dw_\mu = 1 - F(w_{\mu,\mathrm{obs}} \mid \mu, \theta). \qquad (3) $$

Here f(w_µ|µ, θ) is the probability density function of w_µ under the hypothesis that µ and θ are the true parameters, w_µ,obs is the observed value of the likelihood ratio, and F is the cumulative distribution of w_µ. The boundary of the confidence region for µ, with confidence level 1 − α, is found from the p-value of Eq. (3) by solving p_µ = α. This gives a region in parameter space that satisfies

$$ w_\mu \le F^{-1}(1 - \alpha \mid \mu, \theta). $$

When the model has a single parameter of interest µ, it is possible to define another test statistic, called the likelihood root, as the square root of the likelihood ratio multiplied by the sign of µ̂ − µ:

$$ r_\mu = \mathrm{sign}(\hat{\mu} - \mu) \sqrt{w_\mu}. \qquad (4) $$

In contrast to the likelihood ratio, the likelihood root can be defined only when there is a single parameter of interest. However, while the likelihood ratio gives a two-sided test, the likelihood root allows for one-sided tests as well.
The statistic r_µ can be used to compute p-values in the same way as is done for the likelihood ratio, using its density function f(r_µ|µ, θ). In many realistic applications, finding the probability density functions f(w_µ|µ, θ) or f(r_µ|µ, θ) is a major challenge since they are usually not known in closed form. Monte Carlo simulations are often used to compute them, but this can be very time-consuming for complex models with many measurements.
It is possible, however, to avoid the numerical computation of f(w_µ) and f(r_µ) in the asymptotic limit, in which all the MLEs of the model are Gaussian distributed. This limit is typically reached when the experimental sample size n approaches infinity, i.e., in the so-called large-sample limit, where the MLEs have a Gaussian distribution with an error term of order O(n^{-1/2}). If this condition holds, w_µ follows a chi-square distribution with M degrees of freedom (χ²_M), where M is the number of parameters of interest. In turn, in the asymptotic limit r_µ follows a normal distribution with mean 0 and standard deviation 1:

$$ P(r_\mu \le r) = \Phi(r) + O(n^{-1/2}). \qquad (5) $$

As already noted, n generally represents the sample size of the experiment, but it can also be another parameter of the likelihood that controls the convergence of the likelihood to the asymptotic limit. It is important to note that the asymptotic distribution of r_µ exhibits a larger error term compared to that of w_µ. This behavior reflects a trade-off for the flexibility of performing one-sided tests.
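The asymptotic p-values that follow from these distributions require no model-specific code. A minimal sketch in Python (function names are ours, not from any library), using the closed form of the χ²₁ survival function:

```python
import math

def p_value_w(w_obs, M=1):
    """Asymptotic p-value for the likelihood ratio, w ~ chi^2_M.  For M = 1
    the survival function has the closed form erfc(sqrt(w/2)); general M
    would need a regularized incomplete gamma function."""
    if M != 1:
        raise NotImplementedError("closed form shown only for M = 1")
    return math.erfc(math.sqrt(w_obs / 2.0))

def p_value_r(r_obs):
    """One-sided asymptotic p-value for the likelihood root, r ~ N(0, 1)."""
    return 0.5 * math.erfc(r_obs / math.sqrt(2.0))

# The two-sided chi^2_1 p-value at w = r^2 equals twice the one-sided
# normal p-value at |r|:
r = 1.5
assert abs(p_value_w(r * r) - 2.0 * p_value_r(r)) < 1e-12
```

The identity checked in the last line reflects the fact that a two-sided test based on w_µ combines the two one-sided tests available through r_µ.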
In the asymptotic limit, one can show that the profile log-likelihood is approximated by the quadratic form

$$ \ell(\mu, \hat{\hat{\theta}}(\mu)) \approx \ell(\hat{\mu}, \hat{\theta}) - \frac{(\mu - \hat{\mu})^2}{2\sigma_{\hat{\mu}}^2}, $$

where σ²_µ̂ = V[µ̂]. This can be found using the observed information matrix j_ij(ψ̂) = −∂²ℓ/∂ψ_i∂ψ_j, where ψ = (µ, θ) represents all of the parameters. The inverse of the matrix j gives the covariance of all the estimators, from which V[µ̂] can be extracted. Under these approximations, the likelihood ratio is given by

$$ w_\mu \approx \frac{(\mu - \hat{\mu})^2}{\sigma_{\hat{\mu}}^2}. $$

Deviations from the quadratic approximations of the likelihood root and the profile likelihood are expected when the conditions of the asymptotic limit are not satisfied.

Higher-order asymptotic corrections
When the MLEs of the model parameters are not Gaussian distributed, which usually happens when the experimental sample size is small, the distributions of the statistics w_µ and r_µ deviate from their asymptotic forms. In such instances, there are two potential strategies to derive test statistics with known distributions: one approach is to refine the approximation of the test statistics' distributions, and the other is to modify the test statistics themselves such that their distributions are more accurately approximated by the asymptotic formulae, even for small sample sizes. An example of the former is given in Ref. [10]; in this paper, we focus on the latter approach. Specifically, this section explores two potential solutions, the r* approximation [5,11-13] and the Bartlett correction [6,16]. The aim is to derive corrections for w_µ and r_µ such that the distributions of the refined statistics, denoted by w*_µ and r*_µ, can be more precisely approximated by the asymptotic distributions outlined earlier, with error terms of order O(n^{-3/2}) or smaller. Moreover, these enhanced statistics are constructed such that P(S* > S*_obs) = P(S > S_obs) + O(n^{-1/2}), where S represents one of the original statistics and S* corresponds to its improved, higher-order counterpart. This condition ensures that p-values computed using the improved statistics are equivalent to those computed with the original ones, up to error terms of order n^{-1/2} or smaller.

The r * approximation
The asymptotic distributions of the likelihood root and the likelihood ratio are derived from the assumption that the MLEs are Gaussian in the asymptotic limit. But this assumption is only valid up to error terms of order O(n^{-1/2}). A major development in likelihood-based inference has been to improve the approximation of the distributions of the MLEs. For models with a single parameter µ, the Barndorff-Nielsen p* approximation [5,11-13] is the basic higher-order approximation to the distribution of µ̂:

$$ p^*(\hat{\mu} \mid \mu) = c\, |j(\hat{\mu})|^{1/2}\, e^{-w_\mu/2}. \qquad (10) $$

Here w_µ is the likelihood ratio as defined in Eq. (2), c is a normalization constant equal to (1/√(2π))(1 + O(n^{-1})), and j = −∂²ℓ/∂µ² represents the observed information. Notably, the error term of the p* approximation is of order n^{-3/2}. This represents a significant improvement when compared to the O(n^{-1/2}) error term associated with the first-order Gaussian approximation.
When the p* approximation is expanded to O(n^{-1/2}) the Gaussian density of µ̂ is recovered. At this order, the likelihood ratio w_µ can be approximated using Eq. (10) as

$$ w_\mu = (\hat{\mu} - \mu)^2\, j(\hat{\mu}) + O_p(n^{-1/2}), \qquad (11) $$

where O_p denotes convergence in probability. In this way the p* approximation reduces to a Gaussian with mean µ and standard deviation j(µ̂)^{-1/2}, where j is constant in µ at order n^{-1/2}. The p* approximation provides a way to modify the statistic r_µ so as to reduce the error on its asymptotic distribution. In particular, through an integration of Eq. (11) (see [12]), it can be proved that the modified statistic

$$ r^*_\mu = r_\mu + \frac{1}{r_\mu} \ln\!\left(\frac{q_\mu}{r_\mu}\right) \qquad (14) $$

follows a standard normal distribution with an error term of order n^{-3/2}. Equation (14) for r*_µ involves a correction term q_µ, whose exact definition is given later in this section. As the model approaches the asymptotic limit, this correction term converges towards r_µ, thereby leading r*_µ to approach r_µ. Conversely, when the model diverges from the asymptotic limit, this correction term modifies r_µ so that the asymptotic distribution of r*_µ has a reduced error term:

$$ P(r^*_\mu \le r) = \Phi(r) + O(n^{-3/2}). \qquad (15) $$

That is, the error on the asymptotic distribution of r*_µ falls off two powers of n^{-1/2} faster than that of the likelihood root (see Eq. (5)). In addition, by squaring r*_µ one obtains the statistic (r*_µ)², which is asymptotically distributed as chi-squared with one degree of freedom. This can be interpreted as a higher-order correction to the likelihood ratio statistic w_µ.
An intuitive interpretation of the r*_µ statistic can be obtained, despite its non-trivial derivation (see, e.g., [7]). One can show that the new r*_µ statistic is related to r_µ by

$$ r^*_\mu = \frac{r_\mu - E[r_\mu]}{\sqrt{V[r_\mu]}} + O(n^{-3/2}), \qquad (16) $$

where E[r_µ] and V[r_µ] are the expectation value and variance of r_µ. This equation says that, up to errors of order n^{-3/2}, r*_µ represents the standardized version of r_µ. Furthermore, it provides a method to compute r*_µ approximately, using MC to estimate E[r_µ] and V[r_µ], as an alternative to the analytical computation of r*_µ. The analytical computation of r*_µ requires the correction term q_µ, which can be found in different ways depending on the characteristics of the statistical model. Details can be found in [7]; here we summarize the main results. For models with one parameter of interest and no nuisance parameters, q_µ can be found as

$$ q_\mu = \frac{\ell_{;\hat{\mu}}(\hat{\mu}) - \ell_{;\hat{\mu}}(\mu)}{\sqrt{j(\hat{\mu})}}, \qquad (17) $$

where ℓ_{;µ̂} = ∂ℓ/∂µ̂ denotes the sample-space derivative of the log-likelihood with respect to the MLE; in the first term it is computed for µ = µ̂ and in the second for the value of µ being tested. For statistical models that include nuisance parameters, if we assume that the full parameter space can be written as ψ = (µ, θ), where µ is the parameter of interest and θ is a vector of N nuisance parameters, the correction term q_µ can be found from

$$ q_\mu = \frac{\left|\, \ell_{;\hat{\psi}}(\hat{\mu}, \hat{\theta}) - \ell_{;\hat{\psi}}(\mu, \hat{\hat{\theta}}) \;\;\; \ell_{\theta;\hat{\psi}}(\mu, \hat{\hat{\theta}}) \,\right|}{\left|\, \ell_{\psi;\hat{\psi}}(\hat{\mu}, \hat{\theta}) \,\right|} \left[ \frac{|j_{\psi\psi}(\hat{\mu}, \hat{\theta})|}{|j_{\theta\theta}(\mu, \hat{\hat{\theta}})|} \right]^{1/2}, \qquad (18) $$

where j is the information matrix

$$ j_{\psi\psi} = -\,\ell_{\psi\psi} = -\,\frac{\partial^2 \ell}{\partial \psi\, \partial \psi^T}. \qquad (19) $$

The subscripts on ℓ indicate derivatives; for example, ℓ_ψ is the gradient of ℓ with respect to the parameters ψ. The numerator and denominator of Eq. (18) contain determinants of (N+1) × (N+1) matrices, where N+1 is the dimension of the full parameter space. The matrix in the numerator of the first factor has the (N+1)-dimensional vector ℓ_{;ψ̂}(µ̂, θ̂) − ℓ_{;ψ̂}(µ, θ̂̂) as its first column, while the remaining (N+1) × N block is given by ℓ_{θ;ψ̂}(µ, θ̂̂), i.e. the matrix of mixed derivatives of ℓ with respect to the nuisance parameters θ as the first index and with respect to the full MLE vector ψ̂ as the second index.
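Once r_µ and q_µ are available, evaluating Eq. (14) is a one-line operation. A minimal sketch (the tolerance guard near r_µ = 0, where the formula is numerically delicate, is our own choice):

```python
import math

def r_star(r, q, tol=1e-6):
    """Modified likelihood root, r* = r + (1/r) ln(q/r).  Here r is the
    likelihood root and q the correction term; they have the same sign.
    Near r = 0 the expression is numerically delicate, so we return r,
    which is the limiting value as mu -> muhat."""
    if abs(r) < tol:
        return r
    return r + math.log(q / r) / r

# When q -> r (the asymptotic limit) the correction vanishes:
assert abs(r_star(1.3, 1.3) - 1.3) < 1e-12
```

When the model departs from the asymptotic limit, q_µ differs from r_µ and the logarithmic term shifts r_µ toward a better-calibrated standard normal variable.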
Equations (17) and (18) contain derivatives of the likelihood with respect to the MLEs; therefore they require the observed data y, or equivalently the likelihood, to be expressed as an explicit function of them, which is not always possible. In such cases, one can replace p* by an alternative quantity p_TEM, where TEM stands for Tangent Exponential Model (see [14]). This expression for the density function of µ̂ exploits a local approximation of the likelihood with an exponential family model. The canonical parameters ϕ of the approximating model are defined by

$$ \phi^T(\psi) = \left. \frac{\partial \ell}{\partial y} \right|_{y_{\rm obs}} V, \qquad (20) $$

and they can be used to derive an alternative expression for the correction term q_µ. Here y_obs is the n-dimensional vector of the observed data and V is an n × (N+1) matrix defined as

$$ V = -\left( \frac{\partial z}{\partial y} \right)^{-1} \left. \frac{\partial z}{\partial \psi}\, \right|_{y_{\rm obs},\, \hat{\psi}}. \qquad (21) $$

In the last expression, z = (z_1(y_1), ..., z_n(y_n)) is a vector of pivotal quantities. Pivotal quantities are transformations of the data that have a fixed distribution under the model, i.e., their distribution does not depend on the parameters of the model. Such a vector always exists in the form of the cumulative distributions F(y_i), which are uniformly distributed in [0, 1] for continuous data. But alternative choices are often available; e.g., for a Gaussian distributed random variable y with mean µ and standard deviation σ one can define the pivotal quantity z = (y − µ)/σ, whose distribution is a standard normal for any chosen µ and σ.
Using the canonical parameters ϕ, alternative equations for the correction term q_µ can be derived. For models with one parameter of interest and no nuisance parameters the formula for q_µ becomes

$$ q_\mu = \frac{\phi(\hat{\mu}) - \phi(\mu)}{|\phi_\mu(\hat{\mu})|}\, \sqrt{j(\hat{\mu})}, $$

where ϕ_µ = ∂ϕ/∂µ, whereas for models that include nuisance parameters, q_µ is given by

$$ q_\mu = \frac{\left|\, \phi(\hat{\mu}, \hat{\theta}) - \phi(\mu, \hat{\hat{\theta}}) \;\;\; \phi_\theta(\mu, \hat{\hat{\theta}}) \,\right|}{\left|\, \phi_\psi(\hat{\mu}, \hat{\theta}) \,\right|} \left[ \frac{|j_{\psi\psi}(\hat{\mu}, \hat{\theta})|}{|j_{\theta\theta}(\mu, \hat{\hat{\theta}})|} \right]^{1/2}. $$

The definition of V given by Eq. (21) applies to the case where the y_i are continuous variables. If instead they are discrete, a modified construction of V is required (see [15]). In addition, in the definition of the canonical parameters ϕ given by Eq. (20) the derivatives ∂ℓ/∂y_i are replaced by ∂ log(f_i)/∂y_i, where f_i(y_i) is the probability distribution of y_i. More details on how to compute r*_µ for applications involving discrete data can be found in [15].

The Bartlett correction
A different approach to higher-order asymptotics, due to Bartlett [6], involves a scaling of the likelihood ratio statistic rather than a correction to the distributions of the MLEs. Bartlett's argument is as follows. For a model with M parameters of interest µ, the likelihood ratio asymptotically follows a chi-square distribution with M degrees of freedom (χ²_M), and its expectation value is

$$ E[w_\mu] = M + b, $$

where b is the correction to the asymptotic expectation value. The modified statistic

$$ w^*_\mu = \frac{M\, w_\mu}{E[w_\mu]} = \frac{w_\mu}{1 + b/M} \qquad (26) $$

follows a distribution closer to the asymptotic χ²_M. The quantity b = E[w_µ] − M characterizes the size of the Bartlett correction.
In many realistic applications, b cannot be computed exactly. In such scenarios, two possible approaches exist. The first is to estimate E[w_µ] using MC methods, while the second is to approximate it perturbatively using a result provided by Lawley.
Lawley [16] developed a general method to compute the expectation value up to O(n^{-2}), proving that all the cumulants of w*_µ match those of a χ²_M distribution up to this order. Specifically, Lawley's formula is based on a quartic expansion of both the likelihood ratio and the score equation, ∂ℓ/∂µ(µ̂) = 0, in powers of µ̂_i − µ_i (see, e.g., [8]), where i is an index running over the parameter space of the model. The two expansions can be combined to obtain an approximation of the expectation value,

$$ E[w_\mu] = M + \epsilon_M + O(n^{-2}), $$

where ϵ_M here represents the Bartlett correction factor b computed to order n^{-1} using the Lawley method. The correction term ϵ_M has a complicated structure involving derivatives of the likelihood up to fourth order and their expectation values. Nevertheless, for many applications it is possible to compute it analytically. Specifically, for a model with M parameters of interest and without any nuisance parameters (the case with nuisance parameters will be considered later in this section), ϵ_M can be written as

$$ \epsilon_M = \sum \left( \lambda_{rstu} - \lambda_{rstuvw} \right), \qquad (28) $$

where the indices r, s, t, u, v, and w label all the M parameters of the model (see, e.g., Ref. [8]). The two terms inside the sum, λ_rstu and λ_rstuvw, defined in Eq. (29), are built from expectation values of products of log-likelihood derivatives up to fourth order, contracted with elements of inverse information matrices; the matrices with upper indices are the inverses of the corresponding matrices with lower indices. The general expression for the Bartlett correction is quite involved, but its computation is not conceptually complicated, as it only involves computing derivatives of the log-likelihood and expectation values of them.
Often the parameters can be split into two subsets, parameters of interest µ = (µ_1, ..., µ_M) and nuisance parameters θ = (θ_1, ..., θ_N), and one is typically interested in testing specific parameter values in µ space. In such scenarios, the Lawley formula for the expectation of w_µ is given by

$$ E[w_\mu] = M + \epsilon_{N+M} - \epsilon_N + O(n^{-2}). \qquad (32) $$

The notation ϵ_{N+M} indicates that the summation in Eq. (28) is performed over all the indices labeling the full parameter space, whereas the notation ϵ_N indicates that the summation is performed only over indices of the nuisance parameters. However, a more efficient way to compute Eq. (32) is to directly calculate the difference ϵ_{N+M} − ϵ_N by summing the terms in Eq. (29) over all permutations of the indices that contain at least one parameter of interest.
It is worth noting that, for composite hypotheses, the expectation value of the likelihood ratio depends on the nuisance parameters, as the distribution of w_µ still has a dependence on them. Therefore, to evaluate the expectation value, and thus Eq. (32), one should use θ̂̂(µ), the constrained MLEs of the nuisance parameters. The Lawley formula (32) is a valuable tool in situations where it is not feasible to compute the exact expectation value of w_µ analytically, a common scenario in realistic applications. An alternative approach is to estimate the expectation value of w_µ numerically using MC methods. Specifically, E[w_µ] can be estimated by generating data with the parameters of interest µ set to the value in the parameter space being tested and the nuisance parameters θ set to their profiled values θ̂̂(µ) (also known as the parametric bootstrap estimate).
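The parametric bootstrap estimate of E[w_µ] can be sketched in a few lines. The exponential-mean model used below is a toy stand-in of our own choosing (not the GVM), picked only because its likelihood ratio has a simple closed form and its small-sample E[w_µ] visibly exceeds M = 1:

```python
import math
import random

def w_exponential(mu, data):
    """Likelihood ratio w for the mean mu of exponential data (one parameter
    of interest, no nuisance parameters): the MLE is muhat = ybar, giving
    w = 2 n [ln(mu/ybar) + ybar/mu - 1]."""
    n = len(data)
    ybar = sum(data) / n
    return 2.0 * n * (math.log(mu / ybar) + ybar / mu - 1.0)

def expected_w(mu, n, n_toys=20000, seed=1):
    """Parametric-bootstrap estimate of E[w_mu]: generate toy data sets with
    the tested value of mu and average the resulting w values."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_toys):
        toy = [rng.expovariate(1.0 / mu) for _ in range(n)]
        total += w_exponential(mu, toy)
    return total / n_toys

# Bartlett-corrected statistic: scale w so its mean matches M = 1.
E_w = expected_w(mu=1.0, n=5)
w_obs = w_exponential(1.0, [0.4, 1.1, 2.0, 0.7, 1.6])
w_star = w_obs * 1.0 / E_w
```

In a model with nuisance parameters, the toy data would instead be generated at (µ, θ̂̂(µ)), as described above.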

Overview of the Gamma Variance Model
Having outlined in the preceding section the general formalism for higher-order asymptotic corrections, we now demonstrate their use with the Gamma Variance Model (GVM) introduced in Ref. [9]. The GVM extends the general likelihood often used in particle physics analyses, where P(y|µ, θ) denotes the probability density function of the data y, which depends on M parameters of interest µ = (µ_1, ..., µ_M) and N nuisance parameters θ = (θ_1, ..., θ_N). To provide information on the nuisance parameters one includes N control measurements u = (u_1, ..., u_N), here assumed to be direct estimates of the nuisance parameters θ that are independent and Gaussian distributed. We suppose these are unbiased (i.e., E[u] = θ), and their standard deviations σ_u = (σ_u1, ..., σ_uN) are often referred to as systematic errors. The inclusion of nuisance parameters enlarges the model's parameter space, thereby enabling a better approximation of the truth, albeit at the cost of reduced sensitivity to the parameters of interest. Very often, the values of the standard deviations σ_u are assigned by the experimenter and treated as fixed.
The GVM extends this model to address the important situation where the systematic errors are themselves uncertain, by regarding the σ²_ui as adjustable rather than known parameters. The values that one would have assigned to them before are now treated as independent gamma-distributed estimates v_i, i.e.,

$$ v_i \sim \mathrm{Gamma}(\alpha_i, \beta_i) = \frac{\beta_i^{\alpha_i}}{\Gamma(\alpha_i)}\, v_i^{\alpha_i - 1} e^{-\beta_i v_i}. $$

Here the parameters of the gamma distribution α_i and β_i are defined such that the expected value is E[v_i] = α_i/β_i and the variance is σ²_vi = α_i/β²_i. These are chosen such that v_i is an unbiased estimator for σ²_ui (i.e., E[v_i] = σ²_ui), and the width of the gamma distribution is adjusted to reflect the appropriate level of uncertainty by defining

$$ \varepsilon_i \equiv \frac{1}{2} \frac{\sigma_{v_i}}{E[v_i]} = \frac{1}{2\sqrt{\alpha_i}}, \qquad (35) $$

which is a fixed parameter of the model. Using error propagation, Eq. (35) becomes

$$ \varepsilon_i \approx \frac{\sigma_{s_i}}{s_i}, $$

where s_i = √v_i is the corresponding estimate of the standard deviation σ_ui. The quantity ε_i is thus the relative uncertainty on the assigned systematic error (also called a coefficient of variation), which we refer to informally as the relative error-on-error parameter; equivalently, α_i = 1/(4ε²_i) and β_i = 1/(4ε²_i σ²_ui). Including the v_i as measurements in the likelihood gives

$$ L(\mu, \theta, \sigma_u^2) = P(y \mid \mu, \theta) \prod_{i=1}^{N} \frac{1}{\sqrt{2\pi \sigma_{u_i}^2}}\, e^{-(u_i - \theta_i)^2 / 2\sigma_{u_i}^2}\; \frac{\beta_i^{\alpha_i}}{\Gamma(\alpha_i)}\, v_i^{\alpha_i - 1} e^{-\beta_i v_i}. $$

Although treating the σ²_ui as adjustable in the GVM in effect doubles the number of nuisance parameters in comparison to the model where the σ²_ui are known, one can profile over them in closed form. After some manipulation (see Ref.
[9]), the profile log-likelihood is found to be

$$ \ell'(\mu, \theta) = \ln P(y \mid \mu, \theta) - \frac{1}{2} \sum_{i=1}^{N} \left( 1 + \frac{1}{2\varepsilon_i^2} \right) \ln\!\left[ 1 + 2\varepsilon_i^2 \frac{(u_i - \theta_i)^2}{v_i} \right]. \qquad (38) $$

As discussed in Ref. [9], the Gamma Variance Model leads to interesting and useful consequences for inference about the parameters of interest µ. In particular, the size of the confidence region for µ becomes coupled to the goodness of fit, with increasing incompatibility of the input data leading to larger regions. Furthermore, the point estimate for µ shows a decreased sensitivity to outliers in the data. It is therefore of particular interest to apply the GVM in cases where the input values are in tension either among themselves or with the predictions of a hypothesis of interest. For example, the tension between measured and predicted values of the anomalous muon magnetic moment was explored in Ref. [17]. The GVM represents a purely frequentist approach to this type of problem; Bayesian methods have been found to yield qualitatively similar results, e.g., in Refs. [18-21].
A practical difficulty with the Gamma Variance Model arises in connection with the use of asymptotic formulae to obtain p-values and confidence regions when the ε parameters exceed a value of around 0.2. As discussed in Ref. [9], there is a correspondence between the parameters ε_i and an effective sample size n_i, which can be found by considering a sample of n_i independent observations of u_i and using their sample variance as an estimate of σ²_ui. This estimator is found to be gamma distributed with an error-on-error parameter ε_i related to the sample size by

$$ \varepsilon_i = \frac{1}{\sqrt{2(n_i - 1)}}, \qquad (39) $$

or equivalently n_i = 1 + 1/(2ε²_i). Thus when ε_i becomes too large, n_i drops to order unity and the large-sample criterion required for use of asymptotic distributions no longer holds. Values of ε are expected to be roughly 0.2 to 0.5 or even larger in many applications, which can make it far more difficult to compute p-values and confidence regions.
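The relations between ε, the gamma parameters, and the effective sample size can be verified numerically. A minimal sketch (all function and variable names are ours):

```python
import math
import random

def gamma_params(eps, sigma_u):
    """GVM parametrization: alpha = 1/(4 eps^2), beta = 1/(4 eps^2 sigma_u^2),
    so that E[v] = alpha/beta = sigma_u^2 (unbiased estimate of the variance)
    and the relative standard deviation of v is 1/sqrt(alpha) = 2 eps."""
    alpha = 1.0 / (4.0 * eps**2)
    beta = 1.0 / (4.0 * eps**2 * sigma_u**2)
    return alpha, beta

eps, sigma_u = 0.2, 1.5
alpha, beta = gamma_params(eps, sigma_u)
n_eff = 1.0 + 1.0 / (2.0 * eps**2)       # effective sample size, 13.5 here
assert abs(alpha / beta - sigma_u**2) < 1e-9
assert abs(1.0 / math.sqrt(alpha) - 2.0 * eps) < 1e-12

# MC check that eps approximates the relative error on the estimated
# standard deviation sqrt(v) itself:
rng = random.Random(7)
s = [math.sqrt(rng.gammavariate(alpha, 1.0 / beta)) for _ in range(50000)]
mean_s = sum(s) / len(s)
sd_s = math.sqrt(sum((x - mean_s)**2 for x in s) / (len(s) - 1))
rel_err = sd_s / mean_s                  # close to eps = 0.2
```

For ε = 0.5 the same calculation gives n_eff = 3, illustrating how quickly the effective sample size collapses as the error on the error grows.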
The breakdown of the asymptotic formulae for large ε_i can be understood intuitively by expanding the logarithmic term of Eq. (38) in powers of ε_i:

$$ \frac{1}{2} \left( 1 + \frac{1}{2\varepsilon_i^2} \right) \ln\!\left[ 1 + 2\varepsilon_i^2 \frac{(u_i - \theta_i)^2}{v_i} \right] = \frac{(u_i - \theta_i)^2}{2 v_i} + O(\varepsilon_i^2). \qquad (40) $$

Thus as ε_i approaches zero, the logarithmic constraint reduces to a quadratic one, associated with a Gaussian constraint for the nuisance parameter, leading to the asymptotic distributions for the statistics w_µ and r_µ discussed above. However, for large ε_i, the Gamma Variance Model deviates from the quadratic approximation by an error term that begins at O_p(ε²_i), as can be seen in Eq. (40). Consequently, when ε_i is not equal to zero, the asymptotic formulae used to obtain p-values and confidence regions are not guaranteed to represent valid approximations. Furthermore, the series expansion of the logarithm in Eq. (38) converges only for

$$ 2\varepsilon_i^2 \frac{(u_i - \theta_i)^2}{v_i} \le 1, $$

and thus the asymptotic formulae are not expected to give accurate approximations if this condition is not satisfied.
In principle, this difficulty can be overcome by using Monte Carlo calculations, but this can entail substantial additional work and computing time.It is therefore valuable to have a method of finding p-values and confidence regions without MC, and thus the primary goal of this paper is to investigate the use of higher-order asymptotics with the GVM to obtain results that remain accurate even for large ε i .

Single-measurement model
In order to investigate the asymptotic properties of a statistical model with uncertain error parameters, it is convenient to use the simple model introduced in Ref. [9]. Here for completeness we reproduce several results shown in that paper using the Bartlett correction and extend them in Sec. 5.1 using the r* approximation.
The single-measurement model describes a single Gaussian distributed measurement y with mean µ and standard deviation σ. We take µ to be the parameter of interest and σ² to be a nuisance parameter constrained by an independent gamma-distributed estimate v. Therefore, the likelihood is

$$ L(\mu, \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-(y-\mu)^2/2\sigma^2}\; \frac{\beta^{\alpha}}{\Gamma(\alpha)}\, v^{\alpha-1} e^{-\beta v}, \qquad (42) $$

where α = 1/(4ε²) and β = 1/(4ε²σ²), and ε is the relative error on the standard deviation σ. The log-likelihood of the model is given, up to an additive constant, by

$$ \ell(\mu, \sigma^2) = -\frac{(y-\mu)^2}{2\sigma^2} - \left( \frac{1}{2} + \frac{1}{4\varepsilon^2} \right) \ln \sigma^2 - \frac{v}{4\varepsilon^2 \sigma^2}. \qquad (43) $$

The goal is to compute the likelihood ratio w_µ (see Eq. (2)) to study its asymptotic properties and to apply to it the higher-order corrections defined in Sec. 3. This requires the estimators

$$ \hat{\mu} = y, \qquad \hat{\sigma}^2 = \frac{v}{1 + 2\varepsilon^2}, \qquad \hat{\hat{\sigma}}^2(\mu) = \frac{2\varepsilon^2 (y-\mu)^2 + v}{1 + 2\varepsilon^2}. \qquad (44) $$

With the help of the above expressions it is easy to derive the likelihood ratio

$$ w_\mu = \left( 1 + \frac{1}{2\varepsilon^2} \right) \ln\!\left[ 1 + 2\varepsilon^2 \frac{(y-\mu)^2}{v} \right], \qquad (45) $$

which, in the limit ε → 0, becomes

$$ w_\mu \to \frac{(y-\mu)^2}{v}. \qquad (46) $$

In this limit, the likelihood ratio can be approximated by a quadratic expression, as expected in the asymptotic limit. As seen previously, the parameter ε is related to an effective sample size, as it measures the extent to which the model deviates from the asymptotic limit. In particular, it is expected that the distribution of w_µ should deviate from its asymptotic χ²₁ distribution by an error term of order O(ε²).
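The closed-form statistic makes numerical experiments with this model trivial. A minimal sketch (the closed form follows from profiling over σ², see Ref. [9]), with a check of the ε → 0 limit:

```python
import math

def w_single(mu, y, v, eps):
    """Profile likelihood ratio of the single-measurement GVM,
    w_mu = (1 + 1/(2 eps^2)) ln(1 + 2 eps^2 (y - mu)^2 / v)."""
    return (1.0 + 1.0 / (2.0 * eps**2)) * math.log1p(2.0 * eps**2 * (y - mu)**2 / v)

# For eps -> 0 the Gaussian quadratic form (y - mu)^2 / v is recovered:
assert abs(w_single(1.0, 0.0, 1.0, 1e-4) - 1.0) < 1e-6

# For sizeable eps a large residual is penalized only logarithmically,
# the origin of the model's reduced sensitivity to outliers:
assert w_single(3.0, 0.0, 1.0, 0.5) < 9.0
```

Using `math.log1p` keeps the evaluation accurate for very small ε, where the argument of the logarithm is close to one.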
Figure 1 shows the distributions of the likelihood ratio statistic with data generated according to Eq. (42), setting µ = 0, σ = 1 and ε = 0.01, 0.2, 0.4, 0.6. As found in Ref. [9], the distribution deviates from the asymptotic χ²₁ form as the ε parameter increases. The simple dependence of the single-measurement model on the parameter ε makes it an ideal candidate for studying the effectiveness of higher-order asymptotic methods in improving asymptotic formulae.

Higher-order asymptotics for the single-measurement model
As one can see in Fig. 1, the likelihood ratio exhibits noticeable deviations from its asymptotic χ²₁ distribution even for moderate values of ε. It is therefore important to investigate whether the higher-order statistics, namely r*_µ and w*_µ as defined in Eqs. (14) and (26), can be better approximated by their asymptotic distributions, particularly for larger values of ε.
The asymptotic distribution of r*_µ is a standard normal with an associated error term of O(n^{-3/2}). For the single-measurement model, ε determines the effective sample size through n = 1 + 1/(2ε²), and thus the error term is expected to be of order O(ε³) or smaller. In order to compute r*_µ one needs q_µ as defined in Eq. (18). The dependence of the likelihood of the single-measurement model on the data can be explicitly re-expressed in terms of the MLEs defined in Eq. (44):

$$ y = \hat{\mu}, \qquad v = (1 + 2\varepsilon^2)\, \hat{\sigma}^2. $$

Therefore, it is possible to use Eq. (18) to compute q_µ in closed form. Since the asymptotic distribution of r*_µ is a standard normal, the asymptotic distribution of (r*_µ)² is a χ²₁ distribution, and it can therefore be seen as a higher-order correction to the likelihood ratio.
The second higher-order statistic we want to study is the Bartlett-corrected likelihood ratio, for which one must find E[w_µ] = M + b to obtain the Bartlett correction. The Bartlett-corrected likelihood ratio w*_µ is expected to be χ²₁ distributed in the asymptotic limit, with an error term of O(n^{-2}), or equivalently O(ε⁴) for the single-measurement model. The expectation value E[w_µ] can be estimated using the Lawley formula (32), which yields the result given in Eq. (50). All of the higher-order statistics described above, namely (r*_µ)² and w*_µ, follow a χ²₁ distribution in the asymptotic limit. The expectation value in Eq. (50) matches, at O(ε²), the result found in [9] using a Taylor expansion of the integral for its computation.
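As an alternative to the Lawley formula, E[w_µ] for this model can be estimated by parametric bootstrap, as described in Sec. 3. A minimal sketch (the MC settings are our own illustrative choices):

```python
import math
import random

def w_single(mu, y, v, eps):
    """Profile likelihood ratio of the single-measurement GVM,
    w_mu = (1 + 1/(2 eps^2)) ln(1 + 2 eps^2 (y - mu)^2 / v)."""
    return (1.0 + 1.0 / (2.0 * eps**2)) * math.log1p(2.0 * eps**2 * (y - mu)**2 / v)

def expected_w(mu=0.0, sigma=1.0, eps=0.4, n_toys=20000, seed=3):
    """MC (parametric bootstrap) estimate of E[w_mu]: draw y ~ N(mu, sigma)
    and v ~ Gamma(alpha, scale 1/beta) at the tested point and average w."""
    alpha = 1.0 / (4.0 * eps**2)
    beta = 1.0 / (4.0 * eps**2 * sigma**2)
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_toys):
        y = rng.gauss(mu, sigma)
        v = rng.gammavariate(alpha, 1.0 / beta)
        total += w_single(mu, y, v, eps)
    return total / n_toys

E_w = expected_w()         # MC estimate of E[w_mu] at eps = 0.4
def w_star(w):             # Bartlett-corrected statistic, closer to chi^2_1
    return w / E_w
```

The same toy generation also gives the Monte Carlo estimates of E[r_µ] and V[r_µ] needed for the standardized form of r*_µ in Eq. (16).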
In Fig. 2, we show the distributions of these two statistics for data generated according to Eq. (42) with µ = 0, σ = 1, and ε values of 0.01, 0.2, 0.4, and 0.6. The distributions of the two statistics are much better approximated by χ²₁ than the original likelihood ratio w_µ, indicating that higher-order statistics provide significant improvements in this example.

Confidence intervals for the single-measurement model
The likelihood ratio is a commonly used tool for deriving confidence regions, typically obtained by finding the p-value of µ and then solving the equation p_µ = α, where 1 − α is the desired confidence level. For the single-measurement model, which involves only the single parameter of interest µ, our goal is to construct a confidence interval for it as described in Sec. 2. To obtain the p-value, the distribution of w_µ must be determined. As seen in Fig. 1, the distribution of w_µ departs from its asymptotic form for large values of ε, and p-values derived from a χ²₁ distribution are thus not accurate. To address this, we can instead use the higher-order statistics r*_µ and w*_µ to compute the p-values. To illustrate this, we find the confidence interval for µ as a function of the parameter ε under the assumption that the observed values of y and v are 0 and 1, respectively. Figure 3 compares the confidence intervals obtained using the likelihood ratio w_µ and the higher-order statistics r*_µ and w*_µ; additionally, the confidence interval is computed by calculating the p-value exactly, as described in Ref. [9]. The plot in Fig. 3 shows that the use of higher-order statistics significantly improves the accuracy of the confidence interval. Figure 3 confirms the findings of Ref. [9], but also shows that r* provides estimates of the confidence interval as accurate as those computed with w*, proving it a valid alternative to the Bartlett correction.
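Solving p_µ = α numerically amounts to finding where the chosen statistic crosses the corresponding χ²₁ quantile. A sketch for the uncorrected w_µ, again assuming the closed-form profiled likelihood ratio of Ref. [9] (the same root-finding applies to r*²_µ or w*_µ with their own thresholds):

```python
import numpy as np
from scipy import stats, optimize

def w_mu(mu, y=0.0, v=1.0, eps=0.2):
    # profiled likelihood ratio, closed form assumed from Ref. [9]
    return (1 + 1/(2*eps**2)) * np.log(1 + 2*eps**2*(y - mu)**2/v)

def interval_from_w(eps, cl=0.683, y=0.0, v=1.0):
    """Endpoints where p_mu = 1 - cl under the asymptotic chi2(1)
    assumption, i.e. where w_mu crosses the chi2 quantile."""
    crit = stats.chi2.ppf(cl, df=1)
    f = lambda mu: w_mu(mu, y, v, eps) - crit
    lo = optimize.brentq(f, y - 50, y)   # the MLE sits at mu = y
    hi = optimize.brentq(f, y, y + 50)
    return lo, hi

lo, hi = interval_from_w(0.2)
print(f"68.3% interval: [{lo:.3f}, {hi:.3f}]")
```

With y = 0 and v = 1 the interval is symmetric about zero, and its half-length can be compared directly with the curves of Fig. 3.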

Simple-average model
The single-measurement model can be extended to an average of N measurements y = (y_1, ..., y_N), assumed to follow Gaussian distributions with a common mean µ and variances σ² = (σ²_1, ..., σ²_N). Here the variances are taken to be uncertain, with independent estimates v = (v_1, ..., v_N) that are gamma distributed with error-on-error parameters (cf. Eq. (35)) ε = (ε_1, ..., ε_N). The likelihood of the model is written in terms of α_i = 1/(4ε_i²) and β_i = 1/(4ε_i² σ_i²); the equivalent log-likelihood is given in Eq. (54). In contrast to the full Gamma Variance Model described in Sec. 4, this model does not include nuisance parameters θ_i or their estimates, but rather treats the variances σ_i² of the primary measurements y_i as uncertain. It is easily generalized to a curve-fitting problem in which the expectation value of each measurement y_i is a function of the parameters of interest µ and a control measurement x_i, i.e., E[y_i] = f(x_i; µ).
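Since the σ_i² enter the likelihood independently, the profiled log-likelihood is (up to µ-independent constants) a sum of single-measurement terms. The sketch below assumes this form, with the MLE of µ found numerically as noted in the text; all names are illustrative.

```python
import numpy as np
from scipy import optimize

def neg2_logl_profiled(mu, y, v, eps):
    # -2 ln L profiled over the sigma_i^2, up to mu-independent constants;
    # each term has the single-measurement form of Ref. [9] (assumed here)
    y, v, eps = map(np.asarray, (y, v, eps))
    return np.sum((1 + 1/(2*eps**2)) * np.log(1 + 2*eps**2*(y - mu)**2/v))

def w_mu(mu, y, v, eps):
    # likelihood ratio statistic; the MLE of mu is found numerically
    y = np.asarray(y)
    fit = optimize.minimize_scalar(neg2_logl_profiled, args=(y, v, eps),
                                   bounds=(y.min() - 10, y.max() + 10),
                                   method="bounded")
    return neg2_logl_profiled(mu, y, v, eps) - fit.fun

# Two measurements +-0.5 with v_i = 1 and eps_i = 0.3: by symmetry mu_hat = 0
print(w_mu(0.0, [0.5, -0.5], [1.0, 1.0], [0.3, 0.3]))  # ~0
print(w_mu(1.0, [0.5, -0.5], [1.0, 1.0], [0.3, 0.3]))  # > 0
```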
The log-likelihood of Eq. (54) profiled over the σ_i² is computed using the profiled values of the σ_i². As in the single-measurement example of Sec. 5, we compute the likelihood ratio w_µ and the statistics r*_µ and w*_µ. These require the MLE µ̂, which in general must be found numerically. As discussed in Sec. 4, the distribution of the likelihood ratio w_µ is expected to deviate from its asymptotic χ²₁ form, and we investigate whether the higher-order statistics r*_µ and w*_µ can improve the precision of the inference on µ.
Because the log-likelihood of Eq. (54) cannot be written explicitly as a function of the MLEs, but is known in terms of the data values y_i, the correction term q_µ needed to compute r*_µ must be found using Eq. (23). As discussed in Sec. 3.1, this requires a vector of pivotal quantities z = (z_{y_1}, ..., z_{y_N}, z_{v_1}, ..., z_{v_N}), which is used in Eqs. (20) and (21). The Bartlett-corrected likelihood ratio w*_µ can be calculated using the Lawley formula (32). The result, expanded to order ε_i², gives 1 in the limit ε_i → 0, as expected. Using this result one thus finds the corrected statistic w*_µ = w_µ/E[w_µ].

Confidence intervals for the parameter of interest
In this section, we compute the 68.3% confidence interval for the parameter µ of Eq. (54) to benchmark the higher-order statistics derived in the previous section. Specifically, we consider the simple case of averaging two measurements, y_1 and y_2, with observed values +δ and −δ, respectively. The estimates of the variances, v_1 and v_2, are set to 1. We consider δ = 0.5, representing the case where y_1 and y_2 are reasonably consistent, and δ = 1.5, corresponding to substantial tension between the two measurements. Both measurements are assigned equal error-on-error parameters, ε_1 = ε_2 = ε, and we present results as a function of ε. The intervals are found using the likelihood ratio w_µ and the higher-order statistics r*_µ and w*_µ. In addition, the confidence interval is found by estimating the p-value of the likelihood ratio using MC. This is done by generating the exact distribution of the data for a fixed value of µ while setting the nuisance parameters σ_i² to their profiled values. In particle physics this technique is commonly known as the profile construction [22] or hybrid resampling [23,24] method (in statistics it is often referred to as a parametric bootstrap). It is expected to provide the most accurate determination of the intervals that is computationally feasible.
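The profile-construction recipe can be sketched as follows. The closed-form expression used for the profiled σ_i² is an assumption, extrapolated from the single-measurement result of Ref. [9]; the function names are illustrative.

```python
import numpy as np
from scipy import optimize

rng = np.random.default_rng(3)

def neg2_logl_profiled(mu, y, v, eps):
    # profiled -2 ln L, single-measurement form of Ref. [9] (assumed)
    y, v, eps = map(np.asarray, (y, v, eps))
    return np.sum((1 + 1/(2*eps**2)) * np.log(1 + 2*eps**2*(y - mu)**2/v))

def w_mu(mu, y, v, eps):
    y = np.asarray(y)
    fit = optimize.minimize_scalar(neg2_logl_profiled, args=(y, v, eps),
                                   bounds=(y.min() - 10, y.max() + 10),
                                   method="bounded")
    return neg2_logl_profiled(mu, y, v, eps) - fit.fun

def p_profile_construction(mu, y, v, eps, n_toys=500):
    """p-value of w_mu from toys generated at the tested mu, with the
    sigma_i^2 fixed to their profiled values (parametric bootstrap)."""
    y, v, eps = map(np.asarray, (y, v, eps))
    s2 = (v + 2*eps**2*(y - mu)**2) / (1 + 2*eps**2)   # profiled sigma_i^2
    alpha, beta = 1/(4*eps**2), 1/(4*eps**2*s2)
    w_obs = w_mu(mu, y, v, eps)
    w_toys = np.array([w_mu(mu, rng.normal(mu, np.sqrt(s2)),
                            rng.gamma(alpha, 1/beta), eps)
                       for _ in range(n_toys)])
    return float(np.mean(w_toys >= w_obs))

# mu = 0 is the MLE of the data (+0.5, -0.5), so the p-value should be large:
print(p_profile_construction(0.0, [0.5, -0.5], [1.0, 1.0], [0.3, 0.3]))
```

Scanning the tested µ and solving p_µ = α then yields the profile-construction interval used as the benchmark in Fig. 4.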
The uncorrected interval based on the likelihood ratio w_µ is found to undershoot the result from the profile construction in both cases, with the discrepancy increasing with ε. Conversely, the intervals obtained using the r*_µ and w*_µ statistics agree well with the profile construction when the averaged data are mutually compatible (top panel of Fig. 4). However, for larger values of ε, r*_µ breaks down as the tension in the observed data grows (see the lower panel of Fig. 4), resulting in numerical instability. This occurs because the correction term q_µ in the definition of r*_µ (see Eq. (14)) no longer effectively corrects the likelihood root, as the likelihood becomes strongly non-Gaussian. Consequently, deviations from the asymptotic limit can no longer be adequately addressed by a correction term.

Fig. 3. Half-length of 1-σ confidence intervals for parameter µ (Eq. (42)) as a function of ε, computed using w_µ (blue), r*_µ (green) and w*_µ calculated using the Lawley formula (orange). The black curve represents the exact half-length of the confidence interval.
A conservative approach to determine the applicability of r*_µ in improving the likelihood-ratio predictions is to verify that the arguments of the logarithmic terms of Eq. (55) lie within the radius of convergence of their series expansion. Specifically, for the endpoints of the confidence interval, one should check that the corresponding inequality is satisfied for every measurement y_i. In the example above, this condition implies that one should not trust the accuracy of the result from r*_µ if ε ≥ 0.5 for δ = 0.5, or ε ≥ 0.3 for δ = 1.5.
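The inequality itself is not reproduced above; given the ln(1 + x) structure of the profiled log-likelihood, its natural reconstruction is 2ε_i²(y_i − µ)²/v_i < 1 for every i, which the sketch below implements. This reconstruction is an assumption and should be checked against the original Eq. (55).

```python
import numpy as np

def r_star_trustworthy(mu, y, v, eps):
    """Check that every ln(1 + x_i) term stays inside the convergence
    radius of its Taylor series, x_i = 2 eps_i^2 (y_i - mu)^2 / v_i < 1.
    (Reconstructed form of the undisplayed inequality.)"""
    y, v, eps = map(np.asarray, (y, v, eps))
    return bool(np.all(2*eps**2*(y - mu)**2/v < 1))

# Interval endpoint mu = 1.0 with delta = 0.5, eps = 0.3: satisfied
print(r_star_trustworthy(1.0, [0.5, -0.5], [1.0, 1.0], [0.3, 0.3]))  # True
# Strong tension (delta = 1.5) with eps = 0.7: violated
print(r_star_trustworthy(1.5, [1.5, -1.5], [1.0, 1.0], [0.7, 0.7]))  # False
```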

Goodness-of-fit
A confidence region for the parameters of interest does not, on its own, provide a measure of how well the selected model describes the observed data. This can be quantified using a goodness-of-fit statistic, for which we take a likelihood ratio with respect to the saturated model, with likelihood L_s. The saturated model is obtained by replacing the expectation values E[y_i] = µ with a set of independent parameters φ = (φ_1, ..., φ_N), such that E[y_i] = φ_i. Since there is an adjustable φ_i for each measurement y_i, one finds φ̂_i = y_i and σ̂²_i = v_i, and the goodness-of-fit statistic simplifies accordingly. If the resulting expression is expanded in powers of ε_i², then in the limit ε_i² → 0, q reduces to a sum of squares of Gaussian-distributed quantities, and its distribution follows a chi-square distribution with N − 1 degrees of freedom. For large values of the ε_i parameters, however, deviations from the χ²_{N−1} asymptotic distribution are expected. To correct the goodness-of-fit statistic using higher-order asymptotics, q needs to be defined as a likelihood ratio. This can be done by defining the saturated model such that the simple-average model is nested within it: a possible choice is to introduce additional parameters α_i, fixing α_N = −Σ_{i=1}^{N−1} α_i so that Σ_{i=1}^{N} α_i = 0.
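Under the same assumed profiled form used in the sketches above, the goodness-of-fit statistic reduces to the minimized sum of log terms, which in the ε_i → 0 limit reproduces Σ_i (y_i − µ̂)²/v_i; this is a hedged illustration, not the paper's exact Eq. (60).

```python
import numpy as np
from scipy import stats, optimize

def q_stat(y, v, eps):
    """Goodness-of-fit statistic q = -2 ln(L(mu_hat)/L_s) for the
    simple-average model, using the profiled log-likelihood form
    assumed from Ref. [9]; tends to sum((y_i - mu_hat)^2 / v_i)
    as eps_i -> 0."""
    y, v, eps = map(np.asarray, (y, v, eps))
    f = lambda mu: np.sum((1 + 1/(2*eps**2)) * np.log(1 + 2*eps**2*(y - mu)**2/v))
    return optimize.minimize_scalar(f, bounds=(y.min() - 10, y.max() + 10),
                                    method="bounded").fun

def p_value(q, n):
    # asymptotic p-value; accurate only for small eps_i
    return stats.chi2.sf(q, df=n - 1)

# Two discrepant measurements -3 and 3 with v_i = 1 and tiny eps:
q = q_stat([-3.0, 3.0], [1.0, 1.0], [0.01, 0.01])
print(q, p_value(q, 2))  # q close to (y1 - y2)^2 / 2 = 18
```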
Given this definition, the simple-average model is recovered by fixing all the α_i to zero, hence ℓ = ℓ_s(α = 0, µ, σ²). The goodness-of-fit statistic can therefore be written as a likelihood ratio of the saturated model, and its Bartlett correction can be computed using the Lawley formula. This is done by treating the α_i as parameters of interest and µ and the σ_i² as nuisance parameters.
The terms k_{α_i α_j} and k_{µ α_i} in the result are components of the inverse of the expectation value of the Hessian matrix of the likelihood (see Eq. (30)), computed for σ²_{y_i} = v_i. The matrix C is an N × (N − 1) matrix with only a small number of non-zero entries. The r* statistic cannot be applied to the goodness-of-fit test unless N = 2, as it can only be computed for models with a single parameter of interest.
To measure how well the model describes the observed data, one can compute the p-value of the goodness-of-fit statistic. In general, small p-values indicate poor agreement between the model and the data. In particle physics, p-values are typically converted to a related quantity Z, called the significance, defined (for a two-sided test) by Z = Φ⁻¹(1 − p/2), where Φ⁻¹ is the inverse cumulative distribution of a standard normal. The p-value is thus equated to the probability for a Gaussian variable to fluctuate Z or more standard deviations away from the mean in either direction (i.e., a probability of p/2 in each direction).
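The p-value/significance conversion is a two-line utility:

```python
from scipy import stats

def significance(p):
    # two-sided conversion: Z = Phi^{-1}(1 - p/2)
    return stats.norm.ppf(1 - p/2)

def p_of_significance(z):
    # inverse relation: p = 2 * (1 - Phi(z))
    return 2*stats.norm.sf(z)

print(round(significance(0.05), 2))   # -> 1.96
print(p_of_significance(5.0))         # the familiar "5 sigma" p-value, ~5.7e-7
```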
To illustrate this method, we find the p-value and corresponding significance Z for an average of two incompatible measurements, namely y_1 = −3 and y_2 = 3, with estimated variances v_1 = v_2 = 1. We set ε_1 = ε_2 = ε and show the results as a function of ε. The significance is estimated using both the goodness-of-fit statistic q and its Bartlett-corrected version q*. Figure 5 shows the significance as a function of the parameter ε. The use of the Bartlett correction results in a substantial improvement in the estimated significance, which almost perfectly overlaps with the MC estimates. This is a crucial result, as estimating goodness-of-fit p-values for incompatible data can require a large number of pseudo-experiments: roughly 10⁵ simulated experiments are necessary to accurately capture a 4σ effect, for example.
The Bartlett correction can also be found using MC to estimate E[w_µ], which takes far less time than computing the statistic's full distribution. The result is very close to that found from the Lawley formula, and the latter has the advantage of avoiding MC entirely.

Averages using the full Gamma Variance Model
The simple-average model of the previous section assumes that the individual measurements are unbiased estimators of the parameter of interest, i.e., E[y_i] = µ, but that the standard deviations of the y_i are uncertain. In practice, the σ_{y_i} are often well estimated because they correspond to the statistical uncertainty in y_i and are thus directly related to a sample size or a number of counts. It often happens, however, that y_i may have a potential bias which must be constrained with a control measurement, as described in the full Gamma Variance Model of Sec. 4. In this section we apply higher-order asymptotic corrections to this case.
More precisely, the N measurements are assumed to be independent and Gaussian distributed with means E[y_i] = µ + θ_i and known variances (the "statistical errors") V[y_i] = σ²_{y_i}. Here, the nuisance parameters θ_i represent potential biases in the means of the y_i. As described in Sec. 4, their values are estimated with independent Gaussian distributed control measurements u_i, whose variances σ²_{u_i} (the "systematic errors") are treated as adjustable parameters. The σ²_{u_i} are estimated by measurements v_i, whose gamma distributions are characterized by the error-on-error parameters ε_i. From the log-likelihood of the model, the profiled log-likelihood ℓ_p can be computed. The MLEs µ̂ and θ̂_i can be found numerically or by solving a system of cubic equations (see Ref. [9]).
As before, we compute the likelihood ratio w_µ and the higher-order statistics r*_µ and w*_µ. To compute r*_µ, one needs q_µ as defined in Eq. (23). This requires a vector of pivotal quantities z = (z_{y_1}, ..., z_{y_N}, z_{u_1}, ..., z_{u_N}, z_{v_1}, ..., z_{v_N}). The Bartlett-corrected likelihood ratio w*_µ = w_µ/E[w_µ] can be estimated using the Lawley formula of Eq. (32), which predicts the expectation value of w_µ.

Fig. 5. Significance of discrepancy for δ = 3 as defined by Eq. (68), plotted as a function of ε. Higher significance values indicate greater tension in the data. The significance was computed using the goodness-of-fit statistic (blue) defined by Eq. (60) and its Bartlett-corrected counterpart calculated using the Lawley formula (orange). The black dots represent the significance computed by generating the distribution of q with MC. The plot illustrates that the Bartlett correction enables an accurate approximation of the numerical predictions.
The Bartlett correction factor is therefore equal to unity up to terms of order Σ_i O(ε_i⁴), indicating that any deviations of the likelihood ratio's density from its asymptotic distribution are also expected to be of this order. To further improve on the accuracy of the Lawley formula, one can compute the Bartlett correction numerically. Specifically, one can approximate the expectation value of w_µ by generating data with all the model parameters set to their maximum-likelihood estimates, and treat the result as a constant independent of the model parameters. This approximation has been found to yield highly accurate results, as shown below, and moreover significantly speeds up the computation of confidence intervals: rather than generating a new set of data to estimate the Bartlett correction for every tested value of µ, data need only be generated once.
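The one-shot estimate of E[w_µ] can be organized as a small helper that takes the toy generator and the statistic as callables; both names are illustrative, and the Gaussian sanity check below stands in for the full GVM toy generation.

```python
import numpy as np

def bartlett_factor(generate_toy, w_of_toy, n_toys=1000, seed=0):
    """One-shot MC estimate of E[w]: toys are generated once, with all
    model parameters fixed to their MLEs, and the resulting mean of w
    is reused for every tested mu (the approximation described above).
    `generate_toy(rng)` and `w_of_toy(data)` are user-supplied."""
    rng = np.random.default_rng(seed)
    return float(np.mean([w_of_toy(generate_toy(rng)) for _ in range(n_toys)]))

# Sanity check with a Gaussian toy where w = z^2 is exactly chi2(1),
# so the estimated factor should come out close to E[chi2(1)] = 1:
f = bartlett_factor(lambda rng: rng.normal(), lambda z: z**2, n_toys=4000)
print(round(f, 2))
```

The corrected statistic for any tested µ is then simply w*_µ = w_µ / f.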
In certain scenarios an analyst may wish to conduct inference on one or more of the nuisance parameters θ_i, for example to produce a ranking plot of the systematics or to obtain the correlation matrix of their estimators. In such cases the relevant nuisance parameters must be treated as parameters of interest. According to the Lawley formula, the expected value of the likelihood ratio then follows with M the number of nuisance parameters that have been promoted to parameters of interest. The term k_{θ_i θ_i} refers to the θ_i component of the inverse of the expectation value of the Hessian matrix of the likelihood, as defined by the first term of Eq. (30), evaluated at σ²_{u_i} = v_i.

Confidence regions
As in the previous examples, we compute confidence intervals for the parameter of interest µ using the likelihood ratio and the higher-order statistics, assuming their density functions are given by the asymptotic distributions. Specifically, we consider an example similar to that of Sec. 6, namely the mean of two measurements y_1 = −δ and y_2 = +δ, here with associated statistical errors σ_1 = σ_2 = 1/√2 and δ = 0.5 or 1.5. Additionally, we take the control measurements u_1 and u_2 to have observed values of 0, and the estimates of the systematic errors to be 1/√2 (equivalently, the variance estimates v_1 and v_2 to be 1/2). Both measurements are assumed to have equal error-on-error parameters, ε_1 = ε_2 = ε, and we present the results for a range of ε.
Figure 6 shows the confidence interval for the parameter µ found using the likelihood ratio w_µ as well as the higher-order statistics w*_µ and r*_µ. The resulting confidence intervals are compared with those found using the profile construction method, taken as the best available estimate. Among the three methods, w*_µ is the most accurate, almost perfectly overlapping with the numerical predictions of the profile construction; moreover, it is significantly faster to compute, as it only requires data generation for µ = µ̂. In contrast, the profile construction method entails generating a new set of data for every tested value of µ.

The r*_µ statistic, on the other hand, provides accurate predictions for internally consistent data (see the top plot of Fig. 6). However, for larger discrepancies between the measurements (bottom plot of Fig. 6), it is reliable only for small values of ε: in this example it starts to deviate from the numerical prediction for ε above 0.3 and breaks down for ε exceeding 0.6, resulting in numerical instabilities. To assess the usability of r*_µ, one can check whether the logarithmic terms in the log-likelihood associated with the control measurements u fulfill the perturbative condition given by Eq. (41) at the endpoints of the confidence interval. For δ = 1.5 this condition limits the applicability of r*_µ to ε ≃ 0.3, whereas for δ = 0.5 the threshold is higher, above 0.6.

Figure 7 illustrates the 2D confidence regions in the (µ, θ_1) plane. These regions were computed by fixing θ_2 to its profiled value while treating θ_1 as a parameter of interest; a similar exercise could be conducted for µ and θ_2, or for θ_1 and θ_2. The results shown in Fig. 7 were derived using the same measured data as in the previous example, with the error-on-error parameters ε_1 and ε_2 fixed at 0.5. The confidence regions were computed using the likelihood ratio w_{µ,θ} and the Bartlett-corrected likelihood ratio w*_{µ,θ} computed via Eq. (75), and were then compared with the regions estimated using the profile construction technique. The results indicate that the Bartlett correction significantly improves the accuracy of the predictions compared to the uncorrected likelihood ratio.

Goodness of fit
The goodness-of-fit statistic for the Gamma Variance Model can be defined using the same approach as for the simple-average model of Sec. 6.2. In this case, however, constructing a saturated model is not useful, because the Lawley formula gives b = 0 at order ε_i². Nonetheless, the Bartlett correction can still be computed using MC, which allows for a significant computational improvement over generating the full distribution of q with pseudo-experiments to estimate its p-value. The latter approach would require roughly 10⁵ simulated experiments to accurately capture a 4σ effect, whereas the expectation value of q can be estimated to a precision of several percent using only O(10³) pseudo-experiments.
To illustrate these techniques, we compute the significance of the p-value for an average of two incompatible measurements using Eq. (69). The observed values of y_1 and y_2 are taken to be −3 and 3, and the control measurements u_1 and u_2 are set to 0. The statistical uncertainties σ_1 and σ_2 are set to 1/√2, as are the estimates of the systematic errors (equivalent to setting v_1 = v_2 = 1/2). Figure 8 compares the significance computed using the goodness-of-fit statistic q and the Bartlett-corrected q* with the significance computed by generating the distribution of q numerically, all as a function of ε. Consistent with our earlier findings, the Bartlett correction is found to yield highly accurate predictions.

Conclusions
We have demonstrated the efficacy of higher-order asymptotics for the computation of confidence intervals and p-values within the framework of the Gamma Variance Model (GVM), a statistical model designed to address uncertainties in parameters that themselves represent uncertainties. The methods studied in this paper are particularly relevant when the GVM's fixed parameters ε, which quantify the relative uncertainties in estimates of standard deviations of Gaussian-distributed measurements, are not negligible. In such scenarios, standard asymptotic methods are unable to provide accurate confidence intervals or p-values.
Our investigation focused on the r* statistic and the Bartlett correction, both higher-order asymptotic techniques that adjust the first-order (profile) likelihood ratio and likelihood root test statistics. These adjustments enable the test statistics to be more accurately approximated by their asymptotic distributions, even when the ε parameters are large.
Both the Barndorff-Nielsen r* statistic and the Bartlett-corrected likelihood ratio proved to be valuable tools for enhancing the accuracy and reliability of confidence-interval and p-value calculations in the Gamma Variance Model. However, the r* statistic exhibits instabilities in the presence of internally incompatible data for large values of ε. Additionally, while r* can be computed analytically for all the examples examined in this paper, the expressions become complicated for models associated with realistic applications, such as the simple-average and full GVM models.
Conversely, the Bartlett correction, calculated using the Lawley formula (32), offers a compact expression for the expectation value of the likelihood ratio, from which the correction factor is obtained; see Eq. (58) for the simple-average model (Sec. 6) and Eqs. (73) and (75) for combinations using the full GVM (Sec. 7.1). In addition, when necessary, the Bartlett correction can be estimated numerically in a straightforward way, as shown in Sec. 7.
The Bartlett correction also proved to be an effective technique for improving the goodness-of-fit statistic. This was true both for cases where it could be computed analytically using the Lawley formula, such as the simple-average model (see Eq. (65) in Sec. 6.2), and for cases where it was estimated using Monte Carlo methods, as for the full Gamma Variance Model (see Sec. 7.2). In the latter scenario, the Bartlett correction significantly reduces the number of pseudo-experiments required to accurately estimate the significance of rare effects.
Overall, the Bartlett correction has proven to be the more reliable and versatile tool for improving both the applicability and the precision of the GVM.
Furthermore, these findings highlight the potential of higher-order asymptotics to refine inference on parameters of interest in contexts beyond the GVM. Higher-order asymptotics are valuable whenever the MLEs of a statistical model do not follow Gaussian distributions or, equivalently, when the log-likelihood is not well approximated by a quadratic expression. For Gamma Variance Models this occurs when ε is large; more generally, such deviations are typically associated with small experimental sample sizes. In particle physics it is not uncommon to search for new signal processes by counting collision events with very specific characteristics, such that the expected number of background events may be of order unity. With sample sizes of this order, asymptotic distributions are not expected to be accurate, and higher-order asymptotic formulae should prove valuable (see, e.g., Refs. [7,25]).
The introduction of higher-order asymptotic corrections removes a potential stumbling block for use of the Gamma Variance Model. As many estimates of systematic uncertainties may themselves be uncertain at the level of 20% to 50% or more, one would not expect first-order asymptotic confidence intervals or p-values to be accurate. By using higher-order corrections, accurate results can be achieved with minimal or no Monte Carlo simulation, greatly simplifying use of the model.

Fig. 4. Half-length of 1-σ confidence intervals for parameter µ (Eq. (54)) as a function of ε for δ = 0.5 (top) and δ = 1.5 (bottom), computed using w_µ (blue), r*_µ (green) and w*_µ calculated using the Lawley formula (orange). The black dots represent our most precise estimate of the interval, computed using the profile construction. The confidence interval obtained with r*_µ becomes numerically unstable for large values of ε when there is significant tension in the dataset (bottom).

Fig. 6. Half-length of 1-σ confidence intervals for parameter µ (Eq. (69)) as a function of ε for δ = 0.5 (top) and δ = 1.5 (bottom), computed using w_µ (blue), r*_µ (green) and w*_µ calculated with MC (orange). The black dots represent our most precise estimate of the interval, computed using the profile construction. The confidence interval obtained with r*_µ breaks down for large values of ε when there is significant tension in the dataset (bottom).

Fig. 7. 68.3% confidence regions in the (µ, θ_1) plane for δ = 0.5 (top) and δ = 1.5 (bottom), computed using the likelihood ratio (blue) and the Bartlett correction calculated with the Lawley formula (orange). The black dots represent the most precise estimate of the region, computed using the profile construction. The error-on-error parameters ε_1 and ε_2 are fixed to 0.5.

Fig. 8. Significance of discrepancy as defined by Eq. (68), plotted as a function of ε. Higher significance values indicate greater tension in the data. The significance was computed using the goodness-of-fit statistic (blue) defined by Eq. (60) and its Bartlett-corrected counterpart calculated numerically (orange). The black dots represent the significance computed by generating the distribution of q with MC. The plot illustrates that the Bartlett correction enables an accurate approximation of the numerical predictions.