Constraining power of open likelihoods, made prior-independent

One of the most criticized features of Bayesian statistics is the fact that credible intervals, especially when open likelihoods are involved, may strongly depend on the shape and range of the prior. Many analyses involving open likelihoods face the dilemma of choosing between a linear and a logarithmic prior, and in the latter case the situation is worsened by the dependence on the prior range under consideration. In this letter, we revive a simple method to obtain constraints that depend neither on the prior shape nor on the prior range and, using the tools of Bayesian model comparison, extend it to overcome the possible dependence of the bounds on the choice of free parameters in the numerical analysis. An application to cosmological bounds on the sum of the neutrino masses is discussed as an example.


Introduction
In several cases, physics experiments try to measure unknown quantities: the mass of some particle, a new coupling constant, the scale of new physics. Most of the time, the absolute scale of such new quantities is completely unknown, and the analyses of experimental data require scanning a very wide range of values for the parameter under consideration, finally ending up with a lower or upper bound when the data are compatible with the null hypothesis.
In the context of Bayesian statistics, performing this kind of analysis implies a profound discussion on the choice of priors, which may be logarithmic when many orders of magnitude are involved. A robust analysis usually shows what happens when more than one type of prior is considered, but the calculation of credible intervals always requires a precise definition of the prior range as well. Especially in the case of logarithmic priors, a choice of the range can be difficult even when physical boundaries exist (e.g. a mass or a coupling must be positive), with the consequence that the selected allowed range for the parameter can influence the available prior volume and, as a consequence, the bound itself.
e-mail: gariazzo@ific.uv.es (corresponding author)
Let us consider for example the case of neutrino masses and their cosmological constraints. Current data are basically sensitive only to the sum of the neutrino masses and not to the individual mass eigenstates (see e.g. [1,2]). There are therefore good reasons to describe the physics by means of Σm_ν and to consider a linear prior on it, as the parameter range is limited from below by oscillation experiments [3][4][5] and from above by KATRIN [6]. Even given these considerations, however, one can decide to perform the analysis with a lower limit Σm_ν > 0 [7], instead of enforcing the oscillation-driven one, Σm_ν ≳ 60 (100) meV (for normal and inverted ordering of the neutrino masses, respectively; see e.g. [8,9]): the obtained upper bounds will differ in the various cases.
In order to overcome these problems, in this letter we revisit a simple way [10][11][12] to use Bayesian model comparison techniques to obtain prior-independent constraints, which can be useful for an easier comparison of the constraining power of various experimental results, not only in the context of cosmology, but in all Bayesian analyses in general. Furthermore, we extend the already known method to address the problems related to the possible existence of degeneracies with multiple free parameters and the choice of the considered parameterizations when performing the numerical analyses.

Prior-free Bayesian constraints
The foundation of Bayesian statistics is the Bayes theorem:

p(θ|d, M_i) = π(θ|M_i) L_i(θ|d) / Z_i ,   (1)

where π(θ|M_i) and p(θ|d, M_i) are the prior and posterior probabilities for the parameters θ given a model M_i, L_i(θ|d) is the likelihood function, depending on the parameters θ, given the data d and the model M_i, and

Z_i = ∫_{Ω_θ} π(θ|M_i) L_i(θ|d) dθ   (2)

is the Bayesian evidence of M_i [13], the integral of prior times likelihood over the entire parameter space Ω_θ. While the Bayes theorem gives the posterior probability as a function of all the model parameters θ, when presenting results we are typically interested in the marginalized posterior probability as a function of one parameter (or two), which we can generally indicate with x. The marginalization is performed over the remaining parameters, which we can indicate with ψ:

p(x|d, M_i) = ∫_{Ω_ψ} p(x, ψ|d, M_i) dψ .   (3)

Let us now assume that the prior is separable, so that we can write π(θ|M_i) = π(x|M_i) · π(ψ|M_i). Under such a hypothesis, Eq. (3) can be written as:

p(x|d, M_i) = [π(x|M_i) / Z_i] ∫_{Ω_ψ} π(ψ|M_i) L_i(x, ψ|d) dψ .   (4)

Let us consider the marginalized posterior as written in Eq. (4). The prior dependence appears explicitly only outside the integral, and therefore we can obtain a prior-independent quantity¹ simply by dividing the posterior by the prior. The right-hand side of Eq. (4), however, still depends on the value of x through the likelihood that appears in the integral. We can note that this integral resembles the definition of the Bayesian evidence in Eq. (2), no longer for model M_i, but for a sub-case of M_i which contains x as a fixed parameter. Let us label this model M_i^x and define its Bayesian evidence:

Z_i^x ≡ ∫_{Ω_ψ} π(ψ|M_i) L_i(x, ψ|d) dψ ,   (5)

which is independent of the prior π(x), but still depends on the (now fixed) parameter value x. Note that Eq. (4) can then be rewritten in the following form:

p(x|d, M_i) = π(x|M_i) Z_i^x / Z_i .   (6)

¹ This is not exactly true, in the sense that the prior also enters the calculation of the Bayesian evidence, see Eq. (2). The shape of the posterior, in any case, is not affected by such a contribution, which only enters as a normalization constant.

Now, let us consider two models M_i^{x_1} and M_i^{x_2}. Since Z_i is independent of x, we can use Eq. (6) to obtain

Z_i^{x_1} / Z_i^{x_2} = [Z_i p(x_1|d, M_i) / π(x_1|M_i)] / [Z_i p(x_2|d, M_i) / π(x_2|M_i)] ,   (7)

which can be rewritten as

Z_i^{x_1} / Z_i^{x_2} = [p(x_1|d, M_i) / π(x_1|M_i)] / [p(x_2|d, M_i) / π(x_2|M_i)] .   (8)

The left-hand side of this equation is a ratio of the Bayesian evidences of the models M_i^{x_1} and M_i^{x_2}, and is therefore a Bayes factor. For reasons that will become clear later, let us rename x_1 → x and x_2 → x_0 and define this ratio as R(x, x_0|d), which was named "relative belief updating ratio" or "shape distortion function" in the past [10][11][12]:

R(x, x_0|d) ≡ Z_i^x / Z_i^{x_0} = [p(x|d, M_i) / π(x|M_i)] / [p(x_0|d, M_i) / π(x_0|M_i)] .   (9)

Although this function has already been employed in the past, see e.g. [14][15][16][17], its use has somewhat faded into obscurity. Here, we revise its properties and discuss them in detail.
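The prior independence of Eq. (9) can be checked numerically. The following sketch (all names, the toy open likelihood and the grid are ours, for illustration only) computes R from the normalized posterior divided by the prior, for two very different prior shapes, and finds the same function:

```python
import numpy as np

# Toy check that R(x, x0|d) = [p(x|d)/pi(x)] / [p(x0|d)/pi(x0)]
# does not depend on the prior: the prior cancels in the ratio.
x = np.linspace(0.0, 5.0, 2001)

def likelihood(x):
    # an "open" likelihood: flat below a threshold, decaying above it
    return np.exp(-0.5 * np.clip(x - 1.0, 0.0, None) ** 2)

def R_of(prior):
    post = prior * likelihood(x)
    post = post / post.sum()      # discrete normalization of p(x|d)
    ratio = post / prior          # proportional to Z_i^x
    return ratio / ratio[0]       # reference value x0 = x[0] = 0

R_flat = R_of(np.ones_like(x))    # linear (flat) prior
R_tilt = R_of(np.exp(-x))         # a very different prior shape
```

The two arrays coincide, while the posteriors themselves (and any credible interval derived from them) would differ between the two priors.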
This quantity therefore represents a prior-independent way to compare the results for two values of some parameter x. At the practical level, R is particularly useful when dealing with open likelihoods, i.e. when data only constrain the value of some parameter from above or from below. In such cases, the likelihood becomes insensitive to parameter variations below (or above) a certain threshold. Let us consider for example the absolute scale of neutrino masses, on which data (either cosmological or from laboratory experiments) only put an upper limit: the data are insensitive to the value of x when x goes towards 0, so we can consider x_0 = 0 as a reference value. Regardless of the prior, when x is sufficiently close to x_0 the likelihoods in x and x_0 are essentially the same in all the points of the parameter space, so that R(x, x_0|d) → 1. In the same way, when x is sufficiently far from x_0, the data penalize its value and R(x, x_0|d) → 0. In between, the function R indicates how much x is favored or disfavored with respect to x_0 at each point, in the same way a Bayes factor indicates how much a model is preferred with respect to another one.
While R can describe the general behavior of the posterior as a function of x, any probabilistic limit one can compute will always depend on the prior shape and range, which are unavoidable ingredients of Bayesian statistics. The description of the results through the R function, however, allows one to use the data to define a region above which the parameter values are disfavored, regardless of the prior assumptions, and also guarantees an easier comparison of two experimental results. A good standard could be to provide a (non-probabilistic) "sensitivity bound", defined as the value of x at which R drops below some level, for example |ln R| = 1, 3 or 5 in accordance with the Jeffreys scale (see e.g. [2,13]). Let us consider x_0 = 0 as above: we could say, for example, that we consider as "moderately (strongly) disfavored" the region x > x_s for which ln R < s, with s = −3 (or −5), and then use the different values x_s to compare the strengths of different data combinations d in constraining the parameter x. This does not represent an upper bound at some given confidence level, since it is not a probabilistic bound, but rather a hedge "which separates the region in which we are, and where we see nothing, from the region we cannot see" [11].
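Given ln R on a grid, extracting the sensitivity bound x_s is a one-liner. A minimal sketch (the function name, the grid and the toy ln R shape are ours):

```python
import numpy as np

# "Sensitivity bound": smallest grid value x_s above which ln R < s,
# for a threshold s on the Jeffreys scale (e.g. s = -3 or -5).
def sensitivity_bound(x, lnR, s=-3.0):
    below = lnR < s
    if not below.any():
        return None               # data never disfavor x at this level
    return x[np.argmax(below)]    # first grid point where ln R < s

x = np.linspace(0.0, 2.0, 2001)
lnR = -0.5 * (np.clip(x - 0.2, 0.0, None) / 0.1) ** 2  # toy open-likelihood shape
x_s = sensitivity_bound(x, lnR, s=-3.0)
```

For a monotonically decreasing ln R this picks the unique crossing; for non-monotonic shapes it returns the most conservative (smallest) crossing point.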
From the computational point of view, it is not necessary to perform the integrals in the definition of Z_i^x in order to compute R. One can directly use the right-hand side of Eq. (9), i.e. numerically compute p(x|d, M_i) with a specific prior assumption, then divide by π(x|M_i) and normalize appropriately. Notice also that, once R is known, anyone can obtain credible intervals with any prior of choice: the posterior p(x|d, M_i) can easily be computed using Eq. (9) and normalizing to a total probability of 1 within the prior range.
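In practice this means R can be estimated directly from an MCMC chain. The sketch below (our own toy setup: stand-in "chain" samples drawn from a half-normal, flat prior) histograms the samples, divides by the prior density and normalizes at the reference value x_0 = 0:

```python
import numpy as np

# Estimate R(x, 0) from posterior samples: histogram / prior, normalized
# so that R -> 1 at the reference value (first bin here).
rng = np.random.default_rng(42)
samples = np.abs(rng.normal(0.0, 1.0, 200_000))  # stand-in chain, flat prior on x >= 0

edges = np.linspace(0.0, 4.0, 41)
centers = 0.5 * (edges[:-1] + edges[1:])
post, _ = np.histogram(samples, bins=edges, density=True)  # p(x|d) estimate

prior = np.ones_like(centers)    # the flat prior used to produce the "chain"
ratio = post / prior             # p(x|d) / pi(x), up to normalization
R = ratio / ratio[0]             # R(x, 0)
```

Conversely, given R on this grid, a posterior for any other prior π'(x) follows as π'(centers) * R, renormalized to unit total probability.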
A few final comments: in most cases, obtaining limits with the R function is nearly equivalent to using a likelihood-ratio test. The difference is that, while the likelihood-ratio test only takes into account the likelihood values at the best fit for fixed x_0 and x, the R method weighs the information of the entire posterior distribution, since it considers the likelihood averaged over the prior on ψ. This means that in cases with multiple posterior peaks or complex posterior distributions, the limits obtained using the R function can be more conservative than those obtained with the likelihood-ratio test. As an example, we provide in the lower panel of Fig. 1 a comparison of the likelihood ratio and of the −2 ln R functions for the likelihood of Eq. (10), whose dependence on x and θ is shown in the upper panel of Fig. 1. In such a case, the R function takes into account the existence of a second peak in the posterior. The choice of the function and of the coefficients in Eq. (10) is appropriate to show that, while cutting at 1 (corresponding to the 1σ limit, in a frequentist sense, for the likelihood-ratio test) the likelihood-ratio and R methods give the same results, the cut at 4 (corresponding to a 2σ significance for the likelihood-ratio test) leads to different results, because the likelihood ratio takes into account only the likelihood values at the best fit, while the R method is also affected by the second peak.
Another advantage is computational. In cosmological analyses, it is typically difficult to study the maximum of the likelihood, because of the number of dimensions, the numerical noise and the computational cost of the likelihood. An example of the technical difficulties in this kind of analysis can be found in [18], and similar difficulties can emerge in different analyses. Even if the best-fit point is not known with sufficient precision, however, the R function allows one to obtain a prior-independent bound with a Markov chain Monte Carlo or similar method.
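The qualitative difference between the two methods can be reproduced with a simple two-dimensional toy likelihood (our own coefficients, not those of Eq. (10)): a main peak at the origin plus a weaker secondary peak that is narrow in x but broad in the nuisance direction θ, so that it carries little profile height but a sizable posterior volume:

```python
import numpy as np

# Toy comparison: profile-likelihood ratio vs -2 ln R(x, 0) for a
# likelihood with a secondary peak (illustrative shapes and weights).
x = np.linspace(0.0, 4.0, 401)
theta = np.linspace(-10.0, 10.0, 2001)
X, T = np.meshgrid(x, theta, indexing="ij")

L = (np.exp(-0.5 * (X**2 + T**2))                                  # main peak at (0, 0)
     + 0.3 * np.exp(-0.5 * ((X - 2.0)**2 / 0.25 + (T - 3.0)**2 / 4.0)))  # broad-in-theta peak

profile = -2 * np.log(L.max(axis=1) / L.max())      # likelihood-ratio test statistic
dtheta = theta[1] - theta[0]
Zx = L.sum(axis=1) * dtheta                         # Z_i^x for a flat prior on theta
minus2lnR = -2 * np.log(Zx / Zx[0])                 # -2 ln R(x, 0)

i = np.argmin(np.abs(x - 2.0))                      # location of the secondary peak
```

At the secondary peak, minus2lnR[i] is well below profile[i]: marginalizing over θ gives the broad secondary peak more weight than the profile does, so the R bound is more conservative there.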
The calculation of R is easy in this case. The Planck collaboration considered a flat prior on Σm_ν,³ so we simply have to obtain the posterior p(Σm_ν|d, ΛCDM+Σm_ν) by standard means and use it to compute R according to Eq. (9).⁴ Since the lower limit adopted by Planck is Σm_ν = 0, we can compute R for any positive value of Σm_ν, as long as we do not exceed the upper bound of the prior. To better show the results, we consider a logarithmic scale and an appropriate parameter range for the plot in Fig. 2.
From the figure, we can notice that the data are completely insensitive to the value of Σm_ν when it falls below 0.01 eV: in this region, there is no change between prior and posterior distributions, and R → 1 as expected. On the other hand, Σm_ν ≳ 0.4 eV is disfavored by the data, for all the data combinations shown here, as R → 0. As is also expected, the exact shape of R between 0.01 and 0.4 eV depends on the inclusion of the BAO constraints and only slightly on the lensing dataset. Regardless of whether one cuts at R = e^{−3} or R = e^{−5}, indeed, the value of the sensitivity bound only depends on the inclusion of the BAO data. A comparison of the CMB dataset without (P18) or with (P18+BAO) the BAO constraints can therefore be summarized by two numbers, considering for example s = −5:

³ Note that, although the considered prior is linear, the calculations through the CAMB code enforce a non-trivial distortion of the prior, which comes from the numerical requirements of the code: some combinations of parameter values may create numerical instabilities, divergences or simply unphysical values for some cosmological quantities. These problematic points are therefore excluded by the cosmological calculation "a priori", in the sense that even if they are formally included in the prior, their likelihood cannot be computed at the practical level. In the region below 1 eV, however, the prior on Σm_ν is substantially unaffected by this fact.

⁴ Note that this is practically what is already shown by most authors in cosmology, since the results for 1-dimensional marginalized posteriors are often presented in plots of p_π(x|d)/p_π^max, where π(x) is a linear prior on the quantity x, which therefore does not affect the conversion between posterior and R according to Eq. (9). Apart from the normalization constant, p_π(x|d)/p_π^max may be intended either as an unnormalized posterior probability, which can be employed for bound calculations as if a linear prior on x were considered, or as a shape distortion function, in which case it is not suitable to compute limits unless some prior is assumed first.

The case of multiple models
In the previous sections we discussed the case of a single model, which was already known in the literature. The situation is slightly different when more models are considered, for example if one wants to study and take into account several extensions of the same minimal scenario, as in Ref. [27]. It is not difficult to rewrite the definition of R to deal with multiple models, if we assume that the prior for the parameter x under consideration is the same in all of them, i.e. that π(x) ≡ π(x|M_i) does not depend on M_i. Let us now recall the method proposed in [27]. The model-marginalized posterior distribution of the parameter x is obtained as

p(x|d) = Σ_i p(x|d, M_i) p(M_i|d) ,   (13)

where p(M_i|d) is the posterior probability of the model M_i, which can be computed using [28]

p(M_i|d) = π(M_i) Z_i / Σ_j π(M_j) Z_j ,   (14)

with π(M_i) the prior probability of model M_i. In both cases the sum runs over all the studied models. Coming back to Eq. (13) and using Eqs. (4) and (14), we obtain the fully (prior- and model-) marginalized posterior probability of x:

p(x|d) = π(x) Σ_i π(M_i) Z_i^x / Σ_j π(M_j) Z_j .   (15)

Remembering that we assumed π(x) ≡ π(x|M_i) to be independent of M_i, the ratio between the marginalized posterior and prior probabilities for the parameter x is:

p(x|d) / π(x) = Σ_i π(M_i) Z_i^x / Σ_j π(M_j) Z_j .   (16)

If we use this result to define R again as in Eq. (9), we have:

R(x, x_0|d) = Σ_i π(M_i) Z_i^x / Σ_i π(M_i) Z_i^{x_0} ,   (17)

which has exactly the same meaning as before, apart from the fact that in this case R has been marginalized over several models. From the computational point of view, in the model-marginalized case obtaining R is as simple as when only one model is considered. One just has to select a prior π(x) and a sufficiently broad range, obtain the marginalized posterior probability as in [27], then divide it by the considered prior and normalize appropriately.

Fig. 3: The R(Σm_ν, 0) function in Eq. (9) obtained considering different models (dashed) together with the model-marginalized one from Eq. (17) (solid), using the full dataset adopted in Ref. [27] (see text for details). The horizontal lines show the levels ln R = 0, −1, −3, −5, respectively.
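The combination in Eq. (17) is a weighted average of the per-model functions. A minimal sketch with two toy models (the per-model Z_i^x shapes and the equal model priors are our assumptions, for illustration):

```python
import numpy as np

# Model-marginalized R of Eq. (17):
# R(x, x0|d) = sum_i pi(M_i) Z_i^x / sum_i pi(M_i) Z_i^{x0}
x = np.linspace(0.0, 1.0, 101)

Zx = np.vstack([np.exp(-x / 0.2),      # toy Z_i^x: strongly constraining model
                np.exp(-x / 0.4)])     # toy Z_i^x: weaker one
prior_M = np.array([0.5, 0.5])         # equal model priors

R_each = Zx / Zx[:, :1]                # per-model R(x, 0), the dashed curves
num = prior_M @ Zx                     # sum_i pi(M_i) Z_i^x
R_marg = num / (prior_M @ Zx[:, 0])    # model-marginalized R(x, 0), the solid curve
```

By construction R_marg lies between the per-model curves, reproducing the behavior described for Fig. 3: models that fit the data better (larger evidences) pull the marginalized curve towards their own R.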
As an example, we provide in Fig. 3 the R function obtained from the very same posteriors studied in Ref. [27]. Such cases are computed considering the full Planck 2015 (P15) CMB data [29,30], together with the lensing likelihood [31] and the BAO observations from the SDSS BOSS DR11 [32], the 6DF [25] and the SDSS DR7 MGS [26] surveys. The considered models are the same extensions of the ΛCDM+Σm_ν case adopted by the Planck collaboration for the 2015 public release, but with a prior Σm_ν > 60 meV. Also in this case we can see that the R function is very close to one below 0.1 eV and always goes to zero above ∼0.7 eV. In between, the various models (dashed lines) have different constraining powers, whose weighted average is represented by the solid line. The model-marginalized, prior-independent result corresponds to Σm_ν,−5 = 0.6 eV (P15+lensing+BAO).

Discussion and conclusions
In this letter we discussed a possible way to present prior-independent results in the context of Bayesian analyses, generalising a previously known method [10][11][12] to deal with multiple models and extending the work presented in Ref. [27]. The method uses Bayesian model comparison techniques to compare the constraining power of the data at different values of the considered parameter, and is particularly useful when open likelihoods are involved in the analysis. While the method is similar to a likelihood-ratio test, it does not only take into account the information contained in the best-fit point, i.e. the maximum of the likelihood, but also the information of the full posterior, so that in the case of multimodal posterior distributions, more conservative limits are obtained. Furthermore, the discussed method can be much less expensive than the likelihood-ratio test from the computational point of view. We applied these simple formulas to the case of neutrino mass constraints from cosmology, discussing both the case of several datasets analyzed within a single cosmological model and the case of a single dataset with multiple models. In the latter case, Bayesian model comparison also allows one to take into account the constraints from the different models to obtain a prior-independent and model-marginalized bound. An extended application of this method is left for a separate work.