1 Introduction

A vast experimental program has been mounted in the last decade to search for a new elementary particle known as the sterile neutrino [1]. The sterile neutrino is a particular type of neutrino that does not interact through the weak force. Since Pontecorvo postulated its existence in 1967 [2], the sterile neutrino has attracted growing interest and is nowadays often invoked to explain the mysterious origin of neutrino masses and dark matter [3]. The discovery of sterile neutrinos would hence be a milestone towards the development of new theories beyond the Standard Model, with deep repercussions in particle physics and cosmology.

The phenomenology of sterile neutrinos depends on their hypothetical mass value. The main target of the ongoing experimental efforts is sterile neutrinos with a mass of the order of 1 eV, whose existence has been hinted at by various experiments [4,5,6] and is still under debate [1, 7]. The statistical data treatment for this kind of search presents various challenges and has not been standardized yet. Currently, different statistical methods are used in the field, each addressing a different statistical question and thus providing different results. This situation prevents a direct comparison of the performance of the experiments and of their outcomes. In addition, approximations often adopted in the statistical analysis can lead to significantly inaccurate results.

In this article we review the statistical methods used in the search for sterile neutrinos at the eV mass scale, expanding the discussion of Refs. [8,9,10] and performing a comprehensive comparison of the analysis approaches in scenarios with and without a signal. Section 2 describes the phenomenology of eV-mass sterile neutrinos, the signature sought after by the experiments, and the features of two toy experiments that are used in this article to compare the analysis techniques. Section 3 reviews the statistical methods and concepts used in the field. The performance of the different methods is discussed in Sects. 4 and 5. Finally, in Sect. 6, the methods are compared and a standardized analysis is proposed.

2 Phenomenology and experiments

Neutrinos of three different flavors have been observed: the electron (\(\nu _e\)), the muon (\(\nu _{\mu }\)) and the tau neutrino (\(\nu _{\tau }\)) [11]. These standard neutrinos can be detected by experiments because they interact through the weak force. Neutrinos can change flavor as they move through space. This phenomenon, called neutrino flavor oscillation, is possible because neutrinos of different flavors do not have a fixed mass but are rather a quantum-mechanical superposition of different mass eigenstates (i.e. \(\nu _1\), \(\nu _2\), and \(\nu _3\)), each associated with a distinct mass eigenvalue (\(m_1\), \(m_2\) and \(m_3\)).

A sterile neutrino (\(\nu _s\)) would not interact through the weak force and could not be directly detected. However, its existence would affect the standard neutrino oscillations in two ways. Firstly, a standard neutrino could oscillate into an undetectable sterile neutrino, leading to a reduction of the observed event rate within the detector. Secondly, the mass eigenstate (\(\nu _4\) with mass \(m_4\)) primarily associated with the sterile neutrino would enhance the transformation probability between standard neutrinos, leading to the detection of a neutrino flavor that is not emitted by the source. The experiments looking for a reduction of the interaction rate are called “disappearance” experiments, while the ones seeking an enhanced neutrino conversion are called “appearance” experiments. In principle, more than one sterile neutrino with mass at the eV scale could exist. In this work we will focus on the scenario in which there is only one eV-mass sterile neutrino.

The current-generation sterile-neutrino experiments are designed to search for oscillations among standard neutrinos at a short distance from the neutrino source, where the effect of neutrino oscillations is expected to be negligible unless eV-mass sterile neutrinos exist. The oscillation probability expected by these so-called short-baseline experiments can be approximated by:

$$\begin{aligned}&P (\nu _{\alpha } \rightarrow \nu _{\alpha }) = 1 - \sin ^2\left( 2\theta _{\alpha \alpha }\right) \sin ^2 \left( k \cdot \varDelta m^2\cdot \frac{L}{E}\right) \end{aligned}$$
(1)
$$\begin{aligned}&P (\nu _{\alpha } \rightarrow \nu _{\beta }) = \sin ^2 \left( 2\theta _{\alpha \beta }\right) \sin ^2\left( k \cdot \varDelta m^2\cdot \frac{L}{E}\right) \end{aligned}$$
(2)

where \(P(\nu _{\alpha } \rightarrow \nu _{\alpha })\) is the survival probability for a specific neutrino of flavor \(\alpha \) and \(P (\nu _{\alpha } \rightarrow \nu _{\beta })\) is the probability for a neutrino of flavor \(\alpha \) to transform into the flavor \(\beta \) (\(\nu _{\alpha }\) and \(\nu _{\beta }\) indicate any of the standard neutrino flavors, i.e. \(\nu _e\), \(\nu _{\mu }\) and \(\nu _{\tau }\)). The mixing angles (i.e. \(\theta _{\alpha \alpha }\) and \(\theta _{\alpha \beta }\)) and the difference between the squared mass eigenvalues (i.e. \(\varDelta m^2\)) are physical constants. The experiments aim at extracting these constants from the measurement of the oscillation probability as a function of the distance travelled by the neutrino before its interaction (L) and its initial energy (E). The maximum value of the oscillation probability is proportional to \(\sin ^2(2\theta )\), which acts as a scaling factor, while the modulation of the probability is determined by \(\varDelta m^2\). The constant \(k=1.27\) MeV/(eV\(^2\) m) applies when \(\varDelta m^2\) is expressed in eV\(^2\), L in meters and E in MeV. The modulation of the oscillation as a function of \(L/E\) is shown in Fig. 1a for a selection of \(\varDelta m^2\) values.
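For illustration, Eqs. (1) and (2) can be evaluated numerically as in the following minimal sketch; the function names and the example inputs are our own illustrative choices, following the units quoted above.

```python
# A minimal numerical sketch of Eqs. (1) and (2); names are illustrative.
import numpy as np

K = 1.27  # MeV / (eV^2 m), as quoted in the text

def survival_prob(L, E, sin2_2theta, dm2):
    """P(nu_alpha -> nu_alpha) of Eq. (1): L in m, E in MeV, dm2 in eV^2."""
    return 1.0 - sin2_2theta * np.sin(K * dm2 * L / E) ** 2

def appearance_prob(L, E, sin2_2theta, dm2):
    """P(nu_alpha -> nu_beta) of Eq. (2)."""
    return sin2_2theta * np.sin(K * dm2 * L / E) ** 2

# Example: survival probability at L/E = 1 m/MeV for dm2 = 2 eV^2
print(survival_prob(L=8.0, E=8.0, sin2_2theta=0.1, dm2=2.0))
```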

Fig. 1

a Normalized probability of a neutrino flavor oscillation as a function of \(L/E\) for different \(\varDelta m^2\) values. The absolute probability is given by the product of the plotted normalized probability and \(\sin ^2(2\theta )\). b, c Probability of neutrino oscillations as a function of \(L_{\text {rec}}/E_{\text {rec}}\) for two toy experiments searching for a disappearance (b) and an appearance (c) signal. The probabilities are shown assuming the existence of sterile neutrinos at various possible \(\sin ^2(2\theta )\) and \(\varDelta m^2\) values. The probability reconstructed from pseudo-data generated with Monte Carlo simulations under the hypothesis that there are no sterile neutrinos is also shown. The experimental parameters of the two toy experiments are summarized in Table 1. The binning reflects the typical experimental resolutions on \(L_{\text {rec}}\) and \(E_{\text {rec}}\). The error bars account for the statistical uncertainties before background subtraction

Table 1 Features and parameters of a selection of short-baseline experiments grouped according to the source of neutrinos. The kind of neutrinos emitted by the source and those detected by the experiment are shown in the third column. For each experiment the accessible ranges of \(L_{\text {rec}}\), \(E_{\text {rec}}\), and \(L_{\text {rec}}/E_{\text {rec}}\) are given along with the binning used to analyze the data and the expected number of neutrino and background events. The number of neutrino events is given assuming an oscillation probability of one: \(P (\nu _{\alpha } \rightarrow \nu _{\alpha }) =1\) in the case of disappearance experiments and \(P (\nu _{\alpha } \rightarrow \nu _{\beta }) =1\) in the case of appearance experiments. Absolute resolutions on the reconstructed baseline and energy (\(\sigma _L\) and \(\sigma _E\)) are quoted for the mean values of \(L_{\text {rec}}\) and \(E_{\text {rec}}\). The parameters quoted in this table are sometimes effective or approximate quantities and are intended to give an idea of the signal expected in each experiment. The last two rows show the parameters of the two toy experiments used in this work to compare statistical methods. The toy experiments have parameters typical of disappearance searches based on nuclear reactors and of appearance searches based on accelerators

The features of various short-baseline experiments are summarized in Table 1. Different kinds of neutrino sources and detection technologies are used. The most common neutrino sources are nuclear reactors (producing electron anti-neutrinos up to 10 MeV), radioactive sources (electron neutrinos and anti-neutrinos up to a few MeV), and particle accelerators (muon neutrinos and anti-neutrinos up to several GeV). The detector designs are very different, but they mostly rely on scintillating materials and light detectors, or on liquid-argon time-projection chambers (LAr TPC) [1].

In order to extract the sterile neutrino parameters, both L and E must be reconstructed for each detected neutrino. The oscillation baseline L is well defined either because it is much larger than the dimensions of the source and the detector – as in accelerator-based experiments – or because the source is relatively compact and the detector is capable of reconstructing the position of an event – as in experiments with radioactive isotopes or reactors. The reconstruction of the event position is achieved with the physical segmentation of the detector and/or with advanced analysis techniques based on the properties of the scintillation or ionization signal.

The strategy used to reconstruct E varies according to the primary channel through which neutrinos interact in the detector. Experiments using low-energy anti-neutrinos can measure E through a calorimetric approach, thanks to the fact that the anti-neutrinos interact via inverse beta decay and their energy is entirely absorbed within the detector. In experiments with high-energy neutrinos interacting through charged-current quasi-elastic reactions, E is estimated from the kinematics of the particles produced in the interaction. Some experiments measure neutrinos that interact through electron scattering and release only a random fraction of their energy inside the detector. In these cases the energy cannot be accurately reconstructed and monoenergetic neutrino sources are typically used. In the following we will use \(L_{\text {rec}}\) and \(E_{\text {rec}}\) to refer to the reconstructed values of baseline and energy.

Table 1 shows for each experiment the range and resolution of \(L_{\text {rec}}\) and \(E_{\text {rec}}\). To maximize the sensitivity to sterile neutrino masses at the eV scale, the experiments are designed to be sensitive to \(L_{\text {rec}}/E_{\text {rec}}\) values of the order of 1 m/MeV. The experiments can thus observe multiple oscillations within the detector for \(\varDelta m^2\) values at the eV scale. As the sought-after signal is similar among the experiments, the issues and challenges related to the statistical data treatment are the same.

The analysis of an experiment can exploit two complementary pieces of information. When the neutrino energy spectrum, flux and cross section are accurately known, the integral number of neutrino interactions expected within the detector can be computed for a given oscillation hypothesis and compared with the observed one. This approach is often called “rate” analysis. Alternatively, the relative change of the rate as a function of the interaction position and neutrino energy can be compared with the expectations under different oscillation hypotheses, leaving the integral number of events unconstrained. This second approach is known as “shape” analysis. Rate and shape analyses are used simultaneously to maximize the experimental sensitivity; however, they are affected by different systematic uncertainties, and for a specific experiment only one of the two might be relevant. In the following we will discuss these two analyses separately. Results for a specific experiment can be estimated by interpolating between these two extreme cases. Experiments based on nuclear reactors sometimes use the so-called “ratio” method [17], in which the energy spectrum measured in a given part of the detector is normalized against what is observed in a reference section. The ratio method has features similar to the shape analysis and is not explicitly considered in the following.

Two toy experiments are used in this work to compare different analysis techniques. The first is an example of a disappearance experiment, representative of the projects using nuclear reactors or radioactive isotopes as anti-neutrino source. In these experiments, the electron anti-neutrinos partially convert into sterile neutrinos with different probabilities as a function of \(L_{\text {rec}}\) and \(E_{\text {rec}}\). The anti-neutrino energy spectrum is considered between 2 and 7 MeV and the range of oscillation baselines accessible by the experiment is from 7 to 10 m (\(L_{\text {rec}}/E_{\text {rec}}=1\)–5 m/MeV with a resolution varying between 5 and 10%). The second toy experiment is an example of an appearance experiment in which muon neutrinos transform into electron neutrinos with a probability enhanced by the existence of sterile neutrinos. In this case, typical for experiments based on particle accelerators, the neutrino energy varies between 200 and 1200 MeV and the oscillation baseline between 500 and 550 m (\(L_{\text {rec}}/E_{\text {rec}}=0.4\)–2.4 m/MeV with a resolution varying between 10 and 25%).

The toy disappearance experiment can observe an oscillatory pattern in the event rate as a function of both energy and baseline, whereas the appearance experiment can observe it only in energy, as the baseline can be regarded as fixed. The energy distribution of both the neutrinos emitted by the source and the background events is assumed to be flat. This is not the case in real experiments, where the initial neutrino energy spectrum is peaked at some value and the background spectrum depends on the background sources. However, this approximation does not affect the study reported in this work. The experimental parameters of our toy experiments, including the number of signal and background events, are summarized in the last two rows of Table 1. The uncertainties on the signal and background rates are assumed to have the realistic values of 10% and 5%, respectively, for the appearance experiment, and of 2% for both rates in the disappearance experiment.
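As an illustration of how binned pseudo-data for the toy disappearance experiment could be generated, consider the following sketch. The bin edges, the event totals and the evaluation of the oscillation probability at the bin centres are simplifying assumptions broadly consistent with the text, not the actual analysis code.

```python
# A sketch of pseudo-data generation for the toy disappearance experiment.
import numpy as np

rng = np.random.default_rng(seed=1)
K = 1.27  # MeV / (eV^2 m)

# Flat spectra: L in 7-10 m, E in 2-7 MeV (binning chosen for illustration)
L_edges = np.linspace(7.0, 10.0, 11)
E_edges = np.linspace(2.0, 7.0, 21)
N_signal, N_background = 1.0e5, 1.0e4  # assumed event totals

def expected_counts(sin2_2theta, dm2):
    """Expected counts per (L, E) bin, evaluated at the bin centres."""
    Lc = 0.5 * (L_edges[:-1] + L_edges[1:])
    Ec = 0.5 * (E_edges[:-1] + E_edges[1:])
    L, E = np.meshgrid(Lc, Ec, indexing="ij")
    p_surv = 1.0 - sin2_2theta * np.sin(K * dm2 * L / E) ** 2
    return (N_signal * p_surv + N_background) / p_surv.size

# Independent Poisson draws, here under the no-signal hypothesis
pseudo_data = rng.poisson(expected_counts(sin2_2theta=0.0, dm2=0.0))
```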

An example of the oscillation probability reconstructed from our toy experiments is shown in Fig. 1b, c as a function of \(L_{\text {rec}}/E_{\text {rec}}\). The probability is reconstructed from a set of pseudo-data generated with Monte Carlo simulations from a model with no sterile neutrinos. The oscillation probability expected assuming the existence of sterile neutrinos is shown for four mass values (\(\varDelta m^2=0.1\), 0.5, 2 and 10 eV\(^2\)). Given the \(L_{\text {rec}}/E_{\text {rec}}\) resolution of our toy experiments, an oscillatory pattern can be observed only for \(\varDelta m^2\) of 0.5 and 2 eV\(^2\). For higher \(\varDelta m^2\) values, the frequency of the oscillation becomes too high and only an integrated change of the rate is visible. For smaller \(\varDelta m^2\) values, the oscillation length approaches the full \(L_{\text {rec}}/E_{\text {rec}}\) range to which the experiment is sensitive, resulting in a loss of sensitivity. In the appearance experiments the discrimination power among different oscillatory patterns relies only on \(E_{\text {rec}}\) since \(L_{\text {rec}}\) is fixed.

In this work we focus on short-baseline experiments. We do not consider other oscillation experiments (e.g. Daya Bay [24], Double Chooz [25], RENO [26], MINOS [27], NOvA [28] and IceCube [29]) for which the oscillation probability cannot be approximated by Eqs. (1) and (2), as it is complicated either by the overlap between oscillations driven by multiple mass eigenstates or by matter effects [7]. We also do not consider approaches that are not based on oscillations, such as the study of cosmological structures [30], the high-precision spectroscopy of beta decays (e.g. KATRIN [31]), or of electron captures (e.g. ECHO [32]). The statistical issues of these searches are different from those of the short-baseline experiments and would require a specific discussion.

3 Statistical methods

The goal of short-baseline experiments is to search for a signal due to a sterile neutrino with mass at the eV scale by measuring the oscillation probability at different L and E values. The parameters of interest associated with the sterile neutrino are the mixing angle and its mass eigenvalue. However, because of the functional form of Eqs. (1) and (2), the observables of the experiments depend on the angle and mass only through \(\sin ^2(2\theta )\) and \(\varDelta m^2\). In the following we will refer to \(\sin ^2(2\theta )\) and \(\varDelta m^2\) as the parameters of interest of the analysis.

The role of statistical inference applied to the data from sterile neutrino searches can be divided into four tasks:

  1. point estimation: the computation of the most plausible values of \(\sin ^2(2\theta )\) and \(\varDelta m^2\);

  2. hypothesis testing: given a hypothesis on the value of \(\sin ^2(2\theta )\) and \(\varDelta m^2\), decide whether to accept or reject it in favor of an alternative hypothesis. Among the different tests that can be performed, testing the hypothesis that there is no sterile neutrino signal (i.e. \(\sin ^2(2\theta )=0\) or \(\varDelta m^2=0\)) is of primary interest for an experiment aiming at a discovery;

  3. interval estimation: the construction of a set of \(\sin ^2(2\theta )\) and \(\varDelta m^2\) values that includes the true parameter values at some predefined confidence level;

  4. goodness of fit: the assessment of whether the data can be described by the model.

The statistical methods used by sterile neutrino experiments are based on the likelihood function. The point estimation is carried out using maximum likelihood estimators, i.e. by finding the values of \(\sin ^2(2\theta )\) and \(\varDelta m^2\) that correspond to the maximum of the likelihood function. The hypothesis testing is based on the ratio of likelihoods. The interval estimation is carried out by inverting a set of likelihood-ratio based hypothesis tests, and grouping the hypotheses that are accepted. The goodness-of-fit test can be carried out assuming the most plausible value of the parameters of the model (i.e. the maximum likelihood estimator for \(\sin ^2(2\theta )\) and \(\varDelta m^2\)) and using for instance a Pearson \(\chi ^2\) or a “likelihood ratio” test [33, 34].

While the procedures for point estimation and goodness of fit are not controversial, the hypothesis testing differs significantly among the experiments since multiple definitions of the hypotheses are possible. Changing the hypothesis definition affects not only the outcome of the hypothesis test but also that of the interval estimation, which is performed by running a set of hypothesis tests. The comparison of tests based on different hypothesis definitions is the subject of Sects. 4, 5 and 6.

In this section we review the ingredients needed to build the tests and the statistical concepts that will be used in the following. Firstly, we consider the likelihood function and derive a general form that can be applied to all experiments (Sect. 3.1). Then we discuss the possible hypothesis definitions and the resulting test statistics (Sect. 3.2). The properties of the test statistic probability distributions are described in Sect. 3.3. Finally, in Sect. 3.4 we examine the construction of confidence regions and in Sect. 3.5 the concept of power of a test and sensitivity.

Bayesian methods have not been applied in the search for sterile neutrinos so far. Although their usage could be advantageous, we will not consider them in the following and will keep the focus on the methods that are currently in use.

3.1 The likelihood function

Short-baseline experiments measure the oscillation baseline and the energy of neutrinos, i.e. a pair of \(\{L_{\text {rec}},E_{\text {rec}}\}\) values for each event. \(L_{\text {rec}}\) and \(E_{\text {rec}}\) are random variables whose probability distributions depend on the true values of L and E. Monte Carlo simulations are used to construct the probability distributions of \(L_{\text {rec}}\) and \(E_{\text {rec}}\) for a neutrino event given a \(\sin ^2(2\theta )\) and \(\varDelta m^2\) value, \(p_{e}(L,E|\sin ^2(2\theta ),\varDelta m^2)\), and for a background event, \(p_{b}(L,E)\). Additional quantities are sometimes measured; however, they are ultimately used to constrain the background or the systematic uncertainties and can be neglected in this work.

To our knowledge, all the experiments organize the data in histograms and base their analysis on a binned likelihood function. The use of histograms is motivated by the fact that the number of neutrino events is large, between \(10^4\) and \(10^6\) as shown in Table 1. Binning the data leads to a new set of random variables that are the numbers of observed events in each bin: \(N^{obs} = \{N^{obs}_{11}, N^{obs}_{12}, \ldots , N^{obs}_{ij},\ldots \}\), where i runs over the \(L_{\text {rec}}\) bins and j over the \(E_{\text {rec}}\) bins. Consistently, we indicate with \(h^e_{ij}\) and \(h^b_{ij}\) the integral of the probability distribution function for neutrino and background events over each bin:

$$\begin{aligned} h^{e}_{ij}= & {} \int _{L,E\in bin_{ij}} p_{e}(L,E|\sin ^2(2\theta ),\varDelta m^2)\,dL\,dE \end{aligned}$$
(3)
$$\begin{aligned} h^{b}_{ij}= & {} \int _{L,E\in bin_{ij}} p_{b}(L,E)\,dL\,dE. \end{aligned}$$
(4)

The generic likelihood function can hence be written as:

$$\begin{aligned}&{\mathcal {L}}(\sin ^2(2\theta ), \varDelta m^2, N^e, N^b|N^{obs}) = \prod _{ij} \mathcal {P}(N^{obs}_{ij} | \nonumber \\&N^e \cdot h^e_{ij}(\sin ^2(2\theta ), \varDelta m^2) + N^b \cdot h^b_{ij}) \end{aligned}$$
(5)

where i and j run over \(L_{\text {rec}}\) and \(E_{\text {rec}}\) bins, \(\mathcal {P}(N|\lambda )\) indicates the Poisson probability of measuring N events given an expectation \(\lambda \), and \(N^e\) and \(N^b\) are scaling factors representing the total number of standard neutrino and background events.

External constraints on the numbers of neutrino and background events, derived from auxiliary data, are included in this work as additional multiplicative Gaussian terms:

$$\begin{aligned} \mathcal {{L}} \rightarrow \mathcal {{L}} \cdot \,\mathcal {G}(\bar{N}^e(\sin ^2(2\theta ),\varDelta m^2)|N^e ,\sigma ^e) \cdot \,\mathcal {G}(\bar{N}^b |N^b ,\sigma ^b) \nonumber \\ \end{aligned}$$
(6)

where \(\mathcal {G} (\bar{N}|N,\sigma )\) indicates the probability of measuring \(\bar{N}\) given a normally distributed variable with mean N and standard deviation \(\sigma \). The pull terms can be based on other probability distributions (e.g. log-normal or truncated normal distributions); however, their specific functional form is not relevant for our study. It should be noted that the expected number of neutrino counts \(\bar{N}^e\) depends on the particular oscillation hypothesis tested. Examples of the likelihood can be found in Appendix A.
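The likelihood of Eqs. (5) and (6) can be sketched, for instance, as the following negative log-likelihood; the function and variable names are illustrative and do not correspond to any experiment's code.

```python
# A minimal sketch of the binned likelihood of Eqs. (5) and (6).
import numpy as np
from scipy import stats

def neg_log_likelihood(sin2_2theta, dm2, N_e, N_b, N_obs,
                       h_e, h_b, Nbar_e, sigma_e, Nbar_b, sigma_b):
    """-ln L of Eq. (6); h_e(sin2_2theta, dm2) and h_b are the binned pdfs."""
    # Poisson product over the (L_rec, E_rec) bins, Eq. (5)
    lam = N_e * h_e(sin2_2theta, dm2) + N_b * h_b
    nll = -np.sum(stats.poisson.logpmf(N_obs, lam))
    # Gaussian pull terms on the normalisations, Eq. (6); note that Nbar_e
    # must be evaluated for the tested oscillation hypothesis
    nll -= stats.norm.logpdf(Nbar_e, loc=N_e, scale=sigma_e)
    nll -= stats.norm.logpdf(Nbar_b, loc=N_b, scale=sigma_b)
    return nll
```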

While \(\sin ^2(2\theta )\) and \(\varDelta m^2\) are the parameters of interest of the analysis, \(N^e\) and \(N^b\) are nuisance parameters. The constraints on these parameters could follow different probability distributions, and additional nuisance parameters could also be needed to account for systematic uncertainties in the detector response, neutrino source, and event reconstruction efficiency. The actual number of nuisance parameters and the particular form of their constraints in the likelihood do not affect the results of our work. Systematic uncertainties typically cannot mimic the expected oscillatory signal, even though they can change the integral rate. Thus, in a pure rate analysis, a precise understanding of the systematic uncertainties, including those related to the background modeling, is mandatory.

To keep the discussion general, in the following we will indicate with \(\varvec{\eta } = \{N^e, N^b,\ldots \}\) a generic vector of nuisance parameters. Each parameter has an allowed parameter space; for instance, the numbers of neutrino and background events are bounded to non-negative values. The nuisance parameters are assumed to be constrained to their allowed parameter space even if not explicitly stated.

The general form of the likelihood given in Eq. (6) accounts for a simultaneous rate and shape analysis. A pure shape analysis will be emulated by removing the pull term on the number of neutrino events. Conversely, a pure rate analysis will be emulated by enlarging the size of the bins in \(h^e\) and \(h^b\) up to the point at which there is a single bin and any information on the number of events as a function of \(L_{\text {rec}}\) or \(E_{\text {rec}}\) is lost.
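A minimal sketch of these two emulations, acting on the binned inputs (here taken as arrays) of the previous likelihood sketch, could read as follows; the names are again illustrative.

```python
# A sketch of the pure-rate and pure-shape emulations described above.
import numpy as np

def rate_only(N_obs, h_e_bins, h_b_bins):
    """Pure rate analysis: merge all (L_rec, E_rec) bins into a single one,
    discarding any shape information."""
    return (np.array([N_obs.sum()]),
            np.array([h_e_bins.sum()]),   # = 1 if the pdf is normalized
            np.array([h_b_bins.sum()]))

# A pure shape analysis is instead emulated by evaluating the likelihood of
# the previous sketch without the Gaussian pull term on N^e.
```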

3.2 Hypothesis testing and test statistics

The hypothesis testing used nowadays in particle physics is based on the approach proposed by Neyman and Pearson, in which the reference hypothesis \(\text {H}_{\text {0}}\) (i.e. the null hypothesis) is compared against an alternative hypothesis \(\text {H}_{\text {1}}\) [35]. The test is a procedure that specifies for which data sets the decision is made to accept \(\text {H}_{\text {0}}\) or, alternatively, to reject \(\text {H}_{\text {0}}\) and accept \(\text {H}_{\text {1}}\). Usually a hypothesis test is specified in terms of a test statistic \(\text {T}\) and a critical region for it. The test statistic is a function of the data that returns a real number. The critical region is the range of test statistic values for which the null hypothesis is rejected.

The critical region is chosen prior to the analysis such that, when \(\text {H}_{\text {0}}\) is actually true, the test rejects it with a desired probability. This probability is denoted with \(\alpha \) and called the “size” of the test. In the physics community, it is more common to quote \(1-\alpha \) and refer to it as the “confidence level” of the test. For instance, if \(\text {H}_{\text {0}}\) is rejected with \(\alpha =5\%\) probability when it is true, the test is said to have 95% confidence level (CL). In order to compute the critical thresholds, the probability distribution of the test statistic must be known. In our work, the distributions are constructed from large ensembles of pseudo-data sets generated via Monte Carlo techniques.
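As a toy numerical illustration of the relation between test size and confidence level, assume for a moment that the distribution of \(\text {T}\) under \(\text {H}_{\text {0}}\) were a chi-square with one degree of freedom; this shape is assumed here purely for illustration, since in this work the distributions are built with Monte Carlo techniques.

```python
# Toy illustration: critical value of a 95%-CL test for an assumed
# chi-square(1) distribution of T under H0.
from scipy.stats import chi2

alpha = 0.05                          # size of the test (95% CL)
t_crit = chi2.ppf(1.0 - alpha, df=1)  # start of the critical region
print(t_crit)                         # ~3.84: reject H0 whenever T > t_crit
```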

In sterile neutrino searches a hypothesis is defined by a set of allowed values for \(\sin ^2(2\theta )\) and \(\varDelta m^2\). The null hypothesis is defined as:

$$\begin{aligned} \text {H}_{\text {0}}:\{\sin ^2(2\theta ), \varDelta m^2:\sin ^2(2\theta )=X, \varDelta m^2=Y\} \end{aligned}$$
(7)

where X and Y are two particular values. Since the mixing angle and the mass eigenvalue are defined as non-negative numbers by the theory and \(m_4\ge m_1\), the most general version of the alternative hypothesis is

$$\begin{aligned} \text {H}_{\text {1}}:\{\sin ^2(2\theta ), \varDelta m^2: 0\le \sin ^2(2\theta )\le 1, \varDelta m^2\ge 0\}. \end{aligned}$$
(8)

A test based on these two hypotheses leads to a generalized likelihood-ratio test statistic of the form [35]:

$$\begin{aligned} \text {T} = -2\ln \dfrac{\sup \limits _{ \varvec{\eta }} \mathcal {L}\left( \sin ^2(2\theta )=X,\varDelta m^2=Y,\varvec{\eta } | N^{obs}\right) }{\sup \limits _{\sin ^2(2\theta ),\varDelta m^2,\varvec{\eta }} \mathcal {L}\left( \sin ^2(2\theta ),\varDelta m^2,\varvec{\eta } | N^{obs}\right) } \end{aligned}$$
(9)

where the denominator is the maximum of the likelihood for the observed data set over the parameter space allowed for the parameters of interest (\(\{\sin ^2(2\theta ), \varDelta m^2\}\in \text {H}_{\text {1}}\)) and the nuisance parameters. The numerator is instead the maximum of the likelihood in the restricted space in which \(\sin ^2(2\theta )\) and \(\varDelta m^2\) are equal to the value specified by \(\text {H}_{\text {0}}\).

If the value of \(\varDelta m^2\) or \(\sin ^2(2\theta )\) is considered to be known because of theoretical predictions or of a measurement, then the parameter space of the alternative hypothesis can be restricted. Restricting the parameter space is conceptually equivalent to folding new assumptions into the analysis and changes the question addressed by the hypothesis test. The smaller the parameter space allowed by the alternative hypothesis, the greater the power of the test.

Three tests have been used in the context of sterile neutrino searches and are summarized in Table 2. The most general test is the one that we just described and that leads to the test statistic given in Eq. (9). We will indicate this test statistic with \(\text {T}_{\text {2}}\). This test is agnostic regarding the value of \(\sin ^2(2\theta )\) or \(\varDelta m^2\) and can be applied to search for a sterile neutrino with unknown parameters.

Table 2 Definition of the test statistics used for sterile-neutrino searches in the presence of nuisance parameters (\(\varvec{\eta }\)). The null hypothesis is \(\text {H}_{\text {0}}:\{\sin ^2(2\theta ), \varDelta m^2: \sin ^2(2\theta )=X, \varDelta m^2=Y\}\) for all tests while the alternative hypothesis \(\text {H}_{\text {1}}\) changes. The free parameters of interest in \(\text {H}_{\text {1}}\) are shown in the second column. The names of the techniques based on each test statistic and a selection of experiments using them are listed in the last columns

The second test statistic used in the field can be traced back to the situation in which the mass squared difference is considered to be perfectly known and is equal to the value of the null hypothesis (\(\varDelta m^2=Y\)). In this case the alternative hypothesis and its related test statistic are:

$$\begin{aligned} \text {H}_{\text {1}}:\{\sin ^2(2\theta ), \varDelta m^2: 0\le \sin ^2(2\theta )\le 1, \varDelta m^2=Y\} \end{aligned}$$
(10)
$$\begin{aligned} \text {T}_{\text {1}} = -2\ln \dfrac{\sup \limits _{\varvec{\eta }} \mathcal {L}\left( \sin ^2(2\theta )=X,\varDelta m^2=Y,\varvec{\eta } | N^{obs}\right) }{\sup \limits _{\sin ^2(2\theta ),\varvec{\eta }} \mathcal {L}\left( \sin ^2(2\theta ),\varDelta m^2=Y ,\varvec{\eta } | N^{obs}\right) }. \nonumber \\ \end{aligned}$$
(11)

While the numerator of \(\text {T}_{\text {1}}\) is the same as that of \(\text {T}_{\text {2}}\), the maximum of the likelihood at the denominator is now computed over a narrower parameter space, restricted by the condition \(\varDelta m^2=Y\).

The third test corresponds to the simplest kind of hypothesis test that can be performed. Both the null and the alternative hypotheses have the parameters of interest fully defined. The alternative hypothesis is now the no-signal hypothesis, and this leads to a test of the form:

$$\begin{aligned}&\text {H}_{\text {1}}:\{\sin ^2(2\theta ), \varDelta m^2: \sin ^2(2\theta )=0, \varDelta m^2=0\} \end{aligned}$$
(12)
$$\begin{aligned}&\text {T}_{\text {0}} = -2\ln \dfrac{\sup \limits _{\varvec{\eta }} \mathcal {L}\left( \sin ^2(2\theta )=X,\varDelta m^2=Y,\varvec{\eta } | N^{obs}\right) }{\sup \limits _{\varvec{\eta }} \mathcal {L}\left( \sin ^2(2\theta )=0 ,\varDelta m^2=0 ,\varvec{\eta } | N^{obs}\right) }. \nonumber \\ \end{aligned}$$
(13)

The numerator and denominator are the maximum likelihoods for fixed values of the parameters of interest, where the maximum is computed over the parameter space allowed for the nuisance parameters. By construction, the no-signal hypothesis is always accepted when it is used as \(\text {H}_{\text {0}}\), since the test statistic becomes identically equal to zero.
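A compact sketch of the three test statistics of Table 2, assuming a negative log-likelihood nll(params) with params = (\(\sin ^2(2\theta )\), \(\varDelta m^2\), \(\varvec{\eta }\)), could read as follows; the choice of minimizer, the starting points and the parameter packing are illustrative assumptions, not the experiments' actual implementations.

```python
# A sketch of T2, T1 and T0 (Eqs. (9), (11) and (13)) via profiling.
from scipy.optimize import minimize

def min_nll(nll, x0, bounds):
    """Minimum of -ln L; a parameter is fixed by giving it equal bounds."""
    return minimize(nll, x0=x0, bounds=bounds, method="L-BFGS-B").fun

def test_statistics(nll, X, Y, eta0, eta_bounds):
    """T2, T1, T0 for H0: sin^2(2theta) = X, dm^2 = Y."""
    x0 = [X, Y, *eta0]
    # Common numerator: both parameters of interest fixed to H0
    num = min_nll(nll, x0, [(X, X), (Y, Y), *eta_bounds])
    # T2: H1 leaves both parameters of interest free
    den2 = min_nll(nll, x0, [(0.0, 1.0), (0.0, None), *eta_bounds])
    # T1: H1 leaves only sin^2(2theta) free, dm^2 fixed at Y
    den1 = min_nll(nll, x0, [(0.0, 1.0), (Y, Y), *eta_bounds])
    # T0: H1 is the fully specified no-signal hypothesis
    den0 = min_nll(nll, [0.0, 0.0, *eta0],
                   [(0.0, 0.0), (0.0, 0.0), *eta_bounds])
    # T = -2 ln(L_H0 / L_H1) = 2 * (min nll under H0 - min nll under H1)
    return 2 * (num - den2), 2 * (num - den1), 2 * (num - den0)
```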

Nowadays, the values of \(\sin ^2(2\theta )\) and \(\varDelta m^2\) are still unknown, and the whole parameter space accessible by the experiment is probed in search of a signal. This situation should naturally lead to the usage of \(\text {T}_{\text {2}}\). However, the maximization of the likelihood required by \(\text {T}_{\text {2}}\) is challenging from the computational point of view. Reducing the dimensionality of the parameter space over which the likelihood is maximized can enormously simplify the analysis and, for such a practical reason, \(\text {T}_{\text {1}}\) and \(\text {T}_{\text {0}}\) are used even if the restriction of the parameter space is not intended.

In the neutrino community, the analysis based on \(\text {T}_{\text {2}}\) has been called “2D scan” or “global scan”, while the analysis based on \(\text {T}_{\text {1}}\) is known as “raster scan” [8, 9]. In the absence of nuisance parameters, the definitions of these test statistics reduce to those discussed in Ref. [8]. \(\text {T}_{\text {0}}\) has been used in the framework of a method called “Gaussian CLs” [10].

The search for new particles at accelerators presents many similarities with the search for sterile neutrinos. For instance, in the search for the Higgs boson, the sought-after signal is a peak over some background. The two parameters of interest are the mass of the Higgs boson, which defines the position of the peak, and its coupling with other particles, which defines the amplitude. Similarly, in the search for sterile neutrinos \(\varDelta m^2\) defines the shape of the signal and \(\sin ^2(2\theta )\) its strength. When the Higgs boson is searched for without assumptions on its mass and coupling, a test similar to \(\text {T}_{\text {2}}\) is performed (i.e. a “global p-value” analysis). When the mass is assumed to be known, a test similar to \(\text {T}_{\text {1}}\) is used (i.e. a “local p-value” analysis) [36]. Procedures for converting a local into a global p-value have been developed in recent years [37, 38] and are nowadays used to avoid the direct usage of \(\text {T}_{\text {2}}\), which is computationally demanding. This procedure is known as a correction for the “look-elsewhere effect”. We have studied the correction described in Ref. [37] and found that it does not provide accurate results for sterile neutrino experiments because of the oscillatory nature of the sought-after signature. Our studies are discussed in Appendix E.

3.3 Test statistic probability distributions

The test statistics \(\text {T}_{\text {2}}\) and \(\text {T}_{\text {1}}\) can assume any non-negative value. If the absolute maximum of the likelihood corresponds to \(\text {H}_{\text {0}}\), these test statistics are identically zero. The farther the absolute maximum is from the parameter space of the null hypothesis, the larger the test statistic value becomes. If the null hypothesis is true, the probability distribution of this kind of test statistic is expected to converge to a chi-square distribution in the large sample limit, but only if the regularity conditions required by Wilks’ theorem are met [39]. In particular, given the difference between the dimensionality of the parameter spaces of the alternative and null hypotheses (i.e. the number of free parameters of interest in the likelihood maximization), \(\text {T}_{\text {2}}\) would converge to a chi-square with two degrees of freedom and \(\text {T}_{\text {1}}\) to a chi-square with one degree of freedom. As discussed in Sect. 4.3, the conditions required by Wilks’ theorem are not always valid in sterile neutrino experiments and the assumption that the test statistic follows a chi-square distribution can lead to significantly inaccurate results.

The probability distributions of \(\text {T}_{\text {0}}\) are qualitatively different from those of \(\text {T}_{\text {2}}\) and \(\text {T}_{\text {1}}\). \(\text {T}_{\text {0}}\) is negative when the tested signal hypothesis is more likely than the no-signal hypothesis, and positive in the opposite case. The larger the test statistic value, the more the tested hypothesis is disfavoured. Under mild conditions, the probability distribution of \(\text {T}_{\text {0}}\) converges to a Gaussian function [10].

Our results are based on test statistic probability distributions constructed from ensembles of pseudo-data. Firstly, a grid in the \(\sin ^2(2\theta )\) vs. \(\varDelta m^2\) space is defined. Secondly, for each point on the grid, an ensemble of pseudo-data is generated. The probability distributions are then constructed by computing the test statistic for the pseudo-data in each ensemble. The pseudo-data are generated for a fixed value of the nuisance parameters. More details on our procedure are described in Appendix B.
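The procedure at a single grid point can be sketched as follows, where generate_pseudo and compute_T stand for a pseudo-data generator and one of the test statistics of the earlier sketches, and the ensemble size is an illustrative choice.

```python
# A sketch of the Monte Carlo construction of the test statistic
# distribution at one grid point and of the resulting critical value.
import numpy as np
from scipy.stats import chi2

def critical_value(generate_pseudo, compute_T, X, Y, n_toys=10_000, cl=0.95):
    """(1 - alpha) quantile of T under H0: sin^2(2theta) = X, dm^2 = Y."""
    ts = np.array([compute_T(generate_pseudo(X, Y), X, Y)
                   for _ in range(n_toys)])
    return np.quantile(ts, cl)

# For comparison, the asymptotic thresholds the Monte Carlo quantiles would
# approach if the conditions of Wilks' theorem held (see Sect. 3.3):
print(chi2.ppf(0.95, df=2))  # T2 -> chi-square, 2 d.o.f.: ~5.99
print(chi2.ppf(0.95, df=1))  # T1 -> chi-square, 1 d.o.f.: ~3.84
```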

3.4 Interval estimation and confidence regions

The results of a neutrino oscillation search are generally summarized by a two-dimensional confidence region in the \(\sin ^2(2\theta )\) vs. \(\varDelta m^2\) space. The confidence region defines the set of parameter values that are compatible with the data at a certain confidence level. The construction of a confidence region is formally referred to as interval estimation.

One of the most popular statistical techniques to construct a confidence region is through the inversion of a set of hypothesis tests [35, 40]. This is also the technique used by experiments searching for sterile neutrinos. The construction starts from the selection of a specific test and its resulting test statistic. The parameter space considered for \(\text {H}_{\text {1}}\) is naturally the space in which the region will be defined. Usually a grid is fixed over this space and a test is run for each point. The tests in this set have the same \(\text {H}_{\text {1}}\) but \(\text {H}_{\text {0}}\) is changed to the value of the parameters at each point. This standard construction guarantees that the properties of the test statistic carry over to the confidence region and the confidence level of the region is equal to that of the test [35].

Since the confidence region is constructed in the parameter space considered by the alternative hypothesis, tests based on \(\text {T}_{\text {2}}\) naturally lead to two-dimensional confidence regions in the \(\sin ^2(2\theta )\) vs. \(\varDelta m^2\) space, tests based on \(\text {T}_{\text {1}}\) to one-dimensional regions in the \(\sin ^2(2\theta )\) space, and tests based on \(\text {T}_{\text {0}}\) to point-like regions. As already mentioned, \(\text {T}_{\text {0}}\) and \(\text {T}_{\text {1}}\) are used even when the restriction of the parameter space is not intended. To build two-dimensional regions for \(\text {T}_{\text {0}}\) and \(\text {T}_{\text {1}}\), a non-standard procedure is used, i.e. the final region is created as the union of one-dimensional or point-like regions. For instance, while \(\text {T}_{\text {1}}\) would require the value of \(\varDelta m^2\) to be known, one can technically construct a one-dimensional \(\sin ^2(2\theta )\) region for a scan of “known” \(\varDelta m^2\) values and then take the union of these regions. Assuming that the true \(\varDelta m^2\) is among the scanned values, the confidence level of the union will be the same as that of the test.
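A sketch of this test-inversion construction could read as follows; the helper names follow the previous sketches and the grid is an illustrative assumption.

```python
# A sketch of confidence-region construction by inverting hypothesis tests:
# each grid point is used in turn as H0 and kept if its test accepts it.
def confidence_region(data, compute_T, critical_values, theta_grid, dm2_grid):
    """Set of (sin^2(2theta), dm^2) points whose H0 is not rejected."""
    accepted = []
    for X in theta_grid:
        for Y in dm2_grid:
            # critical_values[(X, Y)]: Monte Carlo threshold at this point
            if compute_T(data, X, Y) <= critical_values[(X, Y)]:
                accepted.append((X, Y))
    return accepted

# With T2 this loop directly yields a two-dimensional region; with T1 it
# reproduces the raster scan, i.e. the union over the scanned dm^2 values
# of one-dimensional sin^2(2theta) intervals.
```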

The standard procedure for constructing confidence regions ensures that inverting uniformly most powerful tests provides uniformly most accurate confidence regions, i.e. regions with minimal probability of false coverage [35]. This is not true for the non-standard procedure described above, which indeed produces regions with peculiar features and pathologies as discussed in Sects. 4 and 5.

3.5 Power of the test and sensitivity

The performance of the different kinds of hypothesis tests can be studied by comparing their expected outcome under the assumption that a hypothesis is true. The idea of expected outcome is captured by the statistical concept of “power” of a test. The power is defined as the probability that the test rejects the null hypothesis when the alternative hypothesis is true [35].

In high-energy physics the concept of power is replaced by the idea of sensitivity of an experiment. The sensitivity is defined as the set of hypotheses for which the test would have a 50% power. The focus is thus shifted from the expected outcome of a test to a set of hypotheses. Two kinds of sensitivities are commonly used. The exclusion sensitivity is the set of hypotheses that have a 50% chance of being excluded assuming there is no signal (\(\text {H}_{\text {0}}\) is the oscillation signal for which the test has a 50% power). The discovery sensitivity is the set of hypotheses that, if true, would allow the no-signal hypothesis to be rejected in 50% of the experiments (\(\text {H}_{\text {0}}\) is now the no-signal hypothesis). More details on how the sensitivities are defined and computed can be found in Appendix C.
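For instance, the exclusion sensitivity could be estimated as in the following sketch, where the smallest rejected grid point is taken as the upper limit, a simplification valid when the rejected set is a single upper interval; the helper names follow the previous sketches.

```python
# A sketch of the exclusion sensitivity as a 50%-power contour: for each
# dm^2, the median over no-signal pseudo-experiments of the smallest
# sin^2(2theta) rejected at the chosen CL.
import numpy as np

def exclusion_sensitivity(generate_pseudo, compute_T, critical_values,
                          theta_grid, dm2_grid, n_toys=1000):
    limits = {Y: [] for Y in dm2_grid}
    for _ in range(n_toys):
        data = generate_pseudo(0.0, 0.0)       # no-signal hypothesis
        for Y in dm2_grid:
            rejected = [X for X in theta_grid
                        if compute_T(data, X, Y) > critical_values[(X, Y)]]
            limits[Y].append(min(rejected) if rejected else np.inf)
    # the hypothesis that is excluded in 50% of the pseudo-experiments
    return {Y: np.median(limits[Y]) for Y in dm2_grid}
```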

Both sensitivities will be displayed as contours in the \(\sin ^2(2\theta )\) vs. \(\varDelta m^2\) parameter space for a 95% CL test. A larger confidence level is typically required for a discovery; however, we prefer to use the same value for the exclusion and discovery sensitivity to ease their comparison.

4 2D and raster scan

In this section we compare the confidence regions built using \(\text {T}_{\text {2}}\) and \(\text {T}_{\text {1}}\). The comparison is done using the toy experiments introduced in Sect. 2: a disappearance experiment representative of searches based on reactor neutrinos and an appearance experiment representative of the accelerator-based experiments. First we focus on the sensitivity of the toy experiments (Sect. 4.1) and then consider the results extracted for specific sets of data (Sect. 4.2). Finally, in Sect. 4.3, we study the impact of approximating the test statistic distributions with chi-square functions. Our results and conclusions fully agree with previous works [8, 9].

Fig. 2

Exclusion and discovery sensitivity at 95% CL for a toy disappearance and appearance experiment. The sensitivities are shown for the test statistics \(\text {T}_{\text {2}}\) and \(\text {T}_{\text {1}}\). Whenever possible, the contributions of the shape and rate analyses are displayed separately. The discovery sensitivity is typically associated with a larger confidence level compared to the exclusion sensitivity; however, here we use the same value to highlight their differences

4.1 Sensitivity

The exclusion and discovery sensitivities of our toy disappearance experiment based on the statistic \(\text {T}_{\text {2}}\) are shown in Fig. 2a. The exclusion sensitivity (black lines) delimits the parameter space that has a 50% chance to be rejected by a 95%-CL test under the assumption that sterile neutrinos do not exist. The discovery sensitivity (red lines) delimits the set of hypotheses which, assuming them to be true, have a 50% chance that the no-signal hypothesis is rejected by a 95%-CL test. The figure also shows separately the sensitivities of the rate and shape analyses (dotted lines), which illustrate which kind of information contributes most to the overall sensitivity as a function of \(\varDelta m^2\). Three \(\varDelta m^2\) regions can be identified in Fig. 2a:

  • \(\varDelta m^2 \gtrsim 10\) eV\(^2\): the oscillation length is smaller than the detector resolution on \(L_{\text {rec}}\) and/or \(E_{\text {rec}}\), making the experiment sensitive only to an overall reduction of the integral rate (sensitivity dominated by the rate analysis);

  • 0.1 eV\(^2< \varDelta m^2<10\) eV\(^2\): the oscillation length is larger than the experimental resolution and smaller than the range of \(L_{\text {rec}}\) and/or \(E_{\text {rec}}\) values accessible by the detector, making the experimental sensitivity dominated by the shape analysis;

  • \(\varDelta m^2<0.1\) eV\(^2\): the oscillation length becomes larger than the detector dimensions. The experimental sensitivity decreases with the \(\varDelta m^2\) value (i.e. with increasing oscillation length) and larger \(\sin ^2(2\theta )\) values are needed in order to observe a signal. The sensitivity is approximately proportional to the product \(\sin ^2(2\theta )\times \varDelta m^2\).

Examples of the expected oscillations in these three regions are shown in Fig. 1b, c.

The total sensitivity is given by a non-trivial combination of the sensitivities of the rate and shape analyses. The rate and shape analyses are emulated by considering only parts of the likelihood function (see Sect. 3.1) that would otherwise have common parameters. The sensitivity of the rate analysis is higher than the total one for high \(\varDelta m^2\) values. This feature is related to the fact that \(\sin ^2(2\theta )\) and \(\varDelta m^2\) are fully degenerate parameters in a rate analysis based on \(\text {T}_{\text {2}}\). A rate analysis uses a single piece of information and cannot fix two correlated parameters. Given an observed number of events, the global maximum of the likelihood function can be obtained for an infinite number of \(\sin ^2(2\theta )\) and \(\varDelta m^2\) combinations. The degeneracy is however broken when the rate information is combined with the shape one. The number of effective degrees of freedom of the problem changes, and this results in a reduction of sensitivity.

The sensitivities of our toy appearance experiment computed for \(\text {T}_{\text {2}}\) are shown in Fig. 2b. The same \(\varDelta m^2\) regions discussed for the disappearance experiment can be identified, even if the relative weight of the shape and rate information is different. In particular, in appearance searches a shape analysis can provide information on \(\varDelta m^2\) but not on \(\sin ^2(2\theta )\). The number of expected \(\nu _e\) events is indeed proportional to the product of the oscillation amplitude \(\sin ^2(2\theta )\) and the flux of \(\nu _\mu \) neutrinos. If the flux is left unconstrained in the fit, no statement can be made about the oscillation amplitude. This is the reason why the shape analysis contribution is not displayed. The rate analysis accounts for the bulk of the sensitivity, and adding the shape information does not result in a net improvement, in the sense that the slight gain is compensated by the reduction of sensitivity due to the increased number of effective degrees of freedom discussed above. Having a sensitivity dominated by the rate analysis is typical for experiments using accelerators as the neutrino source.

The exclusion and discovery sensitivities are similar to each other for both the disappearance and appearance experiments. Some differences are however present. When computing the discovery sensitivity, the hypothesis tested is the no-signal hypothesis. Since \(\sin ^2(2\theta )=0\) or \(\varDelta m^2=0\) are points at the edge of the allowed parameter space, the number of degrees of freedom of the problem decreases when testing them and the power of the test increases. The exclusion sensitivity is instead computed for values of the parameters far from the edges. This is the reason why the discovery sensitivity is in general expected to be stronger than the exclusion one. However, the situation is reversed in the shape analysis because of a peculiar feature of the sterile neutrino signature. Statistical fluctuations between bins mimic an oscillation signal and the maximum of the likelihood is always found far from the no-signal hypothesis. This significantly decreases the power of the test for a discovery, while it does not affect much the exclusion case. If the shape analysis dominates the overall sensitivity, as in our disappearance experiment, its features propagate also to the combined sensitivities.

Figure 2c, d show the sensitivities for our toy disappearance and appearance experiments computed for \(\text {T}_{\text {1}}\). The overall features are similar to those of \(\text {T}_{\text {2}}\), and the weights of the rate and shape information in the three \(\varDelta m^2\) regions are also consistent. However, since the parameter space of the alternative hypothesis is now restricted, \(\text {T}_{\text {1}}\) has greater power than \(\text {T}_{\text {2}}\) for a given \(\varDelta m^2\) value. This leads to sensitivities that are stronger by up to a factor 2 in terms of \(\sin ^2(2\theta )\). This is particularly evident for high \(\varDelta m^2\) values where, differently from \(\text {T}_{\text {2}}\), the number of effective degrees of freedom in the alternative hypothesis is always one (only \(\sin ^2(2\theta )\) is free) and the total sensitivity is equal to that of the rate analysis. The restriction of the parameter space is also the reason why the maximum of the likelihood can now correspond to the no-signal hypothesis and the discovery sensitivity is stronger than the exclusion sensitivity even in a shape analysis.

It should be emphasized that the mixing angle in the plot for the disappearance experiment is different from that of the appearance experiment. The minimal value of \(\sin ^2(2\theta )\) accessible by an experiment cannot be used as a figure of merit to compare disappearance and appearance experiments. A comparison can however be done assuming a specific theoretical model that connects the value of \(\sin ^2(2\theta _{ee})\) and \(\sin ^2(2\theta _{\mu e})\) [7].

4.2 Results from observed data sets

The confidence region derived from an observed data set can significantly differ from the expectations because of statistical fluctuations in the number of signal and background events. This issue is particularly relevant when no signal is observed and an upper limit on a parameter is reported. Frequentist upper limits can indeed become extremely strong in the case of background fluctuations.

In sterile neutrino searches, when no signal is observed, the confidence region extends down to \(\sin ^2(2\theta )=0\) for most of the \(\varDelta m^2\) values and is bounded by an upper limit on \(\sin ^2(2\theta )\) that plays the role of the maximum signal strength. It is hence informative to report the observed upper limit along with its expected value and variance under the no-signal hypothesis. This was first proposed in Ref. [8] and is nowadays common practice.

Figure 3a shows the confidence region derived with \(\text {T}_{\text {2}}\) from a pseudo-data set generated for the toy disappearance experiment under the no-signal hypothesis. In addition to the confidence region, the expected distribution of the upper limit is displayed in terms of its median value and 68%/95% central intervals. The median is exactly the exclusion sensitivity plotted in Fig. 2a. The observed upper limit fluctuates around the median expectation. This is true for all possible realizations of the data, as the likelihood is maximized for a specific phase of the oscillatory pattern that matches the statistical fluctuations between the bins of the data set. This phase is reproduced at regularly spaced values of \(\varDelta m^2\) over the full parameter space. The limit gets weaker when the phase helps to describe the data, and stronger when it does not. The overall shift of the observed limit with respect to the median value is instead due to the fact that the random number of events injected in this particular data set is slightly above its median expectation. The width of the green and yellow bands gives an idea of the magnitude of the fluctuations at a given \(\varDelta m^2\), as they contain the upper limit on \(\sin ^2(2\theta )\) with a probability of 68% and 95%, respectively.

Fig. 3

Confidence regions at 95% CL for a pseudo-data set generated by the toy disappearance experiment under the no-signal hypothesis. The top plot is obtained using \(\text {T}_{\text {2}}\), while the bottom plot is obtained using \(\text {T}_{\text {1}}\). The probability distribution of the upper bound of the confidence region expected under the no-signal hypothesis is displayed through its median value (i.e. the exclusion sensitivity) and the 68% and 95% central intervals

The results and expectations based on \(\text {T}_{\text {1}}\) are shown in Fig. 3b. For a given \(\varDelta m^2\) value, \(\text {T}_{\text {1}}\) has greater power than \(\text {T}_{\text {2}}\) as the parameter space allowed under the alternative hypothesis is smaller. This leads to stronger limits in terms of \(\sin ^2(2\theta )\). On the other hand, the non-standard construction of the confidence region can lead to accepting the no-signal hypothesis (i.e. \(\sin ^2(2\theta )=0\)) at some \(\varDelta m^2\) values and rejecting it at others (see for instance \(\varDelta m^2\sim 1\) eV\(^2\)). This can happen because the tests performed by \(\text {T}_{\text {1}}\) at different \(\varDelta m^2\) values are independent of each other.

Fig. 4

Confidence regions at 68% and 95% CL for pseudo-data generated by the toy disappearance experiment assuming the existence of a sterile neutrino with \(\sin ^2(2\theta )=0.04\) and \(\varDelta m^2=1\) eV\(^2\). The 95% CL discovery sensitivity is shown for both the statistic \(\text {T}_{\text {2}}\) (red line) and \(\text {T}_{\text {1}}\) (grey line). The right panel shows the minimal size of a test based on \(\text {T}_{\text {1}}\) that is required to reject the no-signal hypothesis as a function of the \(\varDelta m^2\) value (grey line). The minimal test size for \(\text {T}_{\text {2}}\) (red line) is independent of \(\varDelta m^2\) as all values are tested simultaneously. In contrast to \(\text {T}_{\text {2}}\), \(\text {T}_{\text {1}}\) would reject the no-signal hypothesis even for a test size of \(10^{-5}\) (corresponding to a 4\(\sigma \) two-sided Gaussian deviation). This discrepancy is due to the look-elsewhere effect

The difference between \(\text {T}_{\text {2}}\) and \(\text {T}_{\text {1}}\) is more evident when a signal is present in the data. Figure 4 shows the reconstructed confidence regions for a pseudo-data set generated assuming a sterile neutrino with \(\sin ^2(2\theta )=0.04\) and \(\varDelta m^2=1\) eV\(^2\). The confidence regions are shown at 68% and 95% CL along with the discovery sensitivity. The analysis based on \(\text {T}_{\text {2}}\) is able to properly pin down the signal and returns a two-dimensional confidence region surrounding the true parameter values. \(\text {T}_{\text {1}}\) returns a \(\sin ^2(2\theta )\) region that is similar to that of \(\text {T}_{\text {2}}\) for \(\varDelta m^2\) values close to the true one. However, it returns an allowed region for every \(\varDelta m^2\) value. This is again due to the non-standard construction of the confidence region used in combination with \(\text {T}_{\text {1}}\). As the tests performed at different \(\varDelta m^2\) values are independent of each other, an allowed \(\sin ^2(2\theta )\) interval is always found.

In summary, the greater power of \(\text {T}_{\text {1}}\) in terms of \(\sin ^2(2\theta )\) comes at the cost of losing any capability of constraining \(\varDelta m^2\). This is consistent with the fact that this test statistic originates from a hypothesis test in which \(\varDelta m^2\) is considered to be a known and fixed parameter.

4.3 Validity of Wilks’ theorem

Constructing the probability distributions of the test statistic through Monte Carlo methods can be computationally challenging, and it is often avoided by invoking the asymptotic properties of the generalized likelihood-ratio test. If the regularity conditions required by Wilks’ theorem are met [39, 41], \(\text {T}_{\text {2}}\) and \(\text {T}_{\text {1}}\) are indeed expected to follow a chi-square distribution with a number of degrees of freedom equal to the effective number of free parameters of interest [42]. Sterile neutrino experiments do not typically fulfill these regularity conditions. The parameter \(\sin ^2(2\theta )\) is often reconstructed next to the border of its allowed range (only positive values are physically allowed) or far from its true value when the statistical fluctuations mimic a signal. This induces a bias in its maximum likelihood estimator. In addition, for \(\text {T}_{\text {2}}\), the alternative hypothesis becomes independent of \(\varDelta m^2\) when \(\sin ^2(2\theta )\) is equal to zero.

The impact of assuming Wilks’ asymptotic formulas has been evaluated by studying the coverage probability, i.e. the probability that the confidence region covers the true value of the parameters of interest [35]. If the asymptotic formulas are a good approximation of the actual test statistic distribution, the coverage should be equal to the confidence level of the test used to create the confidence region. A direct comparison of the test statistic distributions is discussed in Appendix D.
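A coverage check of the chi-square approximation can be sketched as follows: pseudo-data are generated at a true parameter point, that same point is tested with the asymptotic threshold, and the acceptance fraction is compared with the nominal confidence level. The helper names follow the earlier sketches.

```python
# A sketch of a coverage check for the Wilks (chi-square) approximation.
from scipy.stats import chi2

def coverage(generate_pseudo, compute_T2, X_true, Y_true,
             n_toys=1000, cl=0.95, dof=2):
    threshold = chi2.ppf(cl, df=dof)   # asymptotic critical value
    n_covered = sum(
        compute_T2(generate_pseudo(X_true, Y_true), X_true, Y_true) <= threshold
        for _ in range(n_toys))
    return n_covered / n_toys          # ~cl if the approximation is accurate
```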

The coverage probability computed for \(\text {T}_{\text {2}}\) assuming the validity of Wilks’ theorem is shown in Fig. 5 for both our toy disappearance and appearance experiments, considering separately a rate and shape analysis. The test statistic distributions have been approximated by a chi-square with one or two degrees of freedom, according to the number of non-degenerate parameters of interest in the alternative hypothesis (see insets in the figure). The coverage is generally correct in the parameter space where the experiment is sensitive to a signal. The rate analysis shows just a slight overcoverage where the experiment is not sensitive. This is expected as \(\sin ^2(2\theta )\) is bounded to positive values, causing an effective reduction of the degrees of freedom of the test when the signal is reconstructed close to the edge of the allowed parameter space [43].

Fig. 5 Coverage of the 95%-CL confidence region based on the test performed with \(\text {T}_{\text {2}}\), assuming that this test statistic follows chi-square distributions. The number of degrees of freedom of the chi-square distribution is two (i.e. the number of free parameters in the likelihood) unless these parameters are degenerate, such as \(\sin ^2(2\theta )\) and \(\varDelta m^2\) in the rate analysis or \(\sin ^2(2\theta )\) and \(N^{e}\) in the shape analysis of the appearance experiment. The top panels show the coverage probability for the toy disappearance experiment, the bottom panels for the toy appearance experiment. For each kind of experiment the coverage for a rate analysis (left), a shape analysis (middle) and the full analysis (right panel) is shown separately. The color palette used to show the coverage probability is the same in all plots. The 95% CL exclusion sensitivity is also displayed to delimit the parameter space in which the experiment is sensitive

The shape analysis instead shows severe undercoverage for \(\sin ^2(2\theta )\) values below the sensitivity of the experiment: the coverage can be as low as 60%, while its nominal value should be 95%. The undercoverage is connected to the fact that, in a binned analysis, it is always possible to find a sterile neutrino hypothesis whose oscillatory pattern helps reproduce the statistical fluctuations between bins. As a result, even if no signal is present in the data, the maximum of the likelihood always corresponds to some oscillation hypothesis. This is conceptually equivalent to overfitting, and it artificially increases the effective degrees of freedom of the test and hence the test statistic values (see discussion in Appendix D). A region of overcoverage is also present within the sensitivity of the experiment at low \(\varDelta m^2\) values, where the oscillation length becomes comparable to the dimension of the detector, creating a degeneracy between the parameters of interest; together with the restriction of the parameter space (\(\sin ^2(2\theta )\le 1\)), this changes the effective number of degrees of freedom of the test.
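The overfitting mechanism can be verified numerically with the hypothetical toy setup of the previous sketches (reusing mu(), MU, s2t_grid and rng): even for pseudo-data generated without a signal, the best fit practically always corresponds to a non-zero oscillation amplitude.

```python
# Fraction of no-signal pseudo-data sets whose best fit has sin^2(2theta) > 0;
# with a fine enough grid this fraction is close to one, as some oscillatory
# pattern always accommodates part of the bin-to-bin fluctuations.
best_s2t = []
for _ in range(200):
    ll = poisson.logpmf(rng.poisson(mu(0.0, 1.0)), MU).sum(axis=-1)
    best_s2t.append(s2t_grid[np.unravel_index(ll.argmax(), ll.shape)[1]])
print(np.mean(np.array(best_s2t) > 0.0))
```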

When the analysis includes both the rate and shape information, the coverage shows a combination of the features discussed above. In particular, in the parameter space beyond the sensitivity of the experiment, the overcoverage of the rate analysis partially compensates for the undercoverage of the shape analysis. Severe undercoverage regions are however still present, consistent with the results obtained in Ref. [8].

The difference between the outcome of a test based on probability distributions constructed with Monte Carlo techniques and one based on their chi-square approximation is shown in Fig. 6. Both the sensitivities and the confidence regions reconstructed from pseudo-data differ significantly, by up to 70% in terms of \(\sin ^2(2\theta )\). For experiments with a sensitivity dominated by the shape analysis, the confidence region can even switch from an upper limit to an island, leading to an unjustified discovery claim. The probability for this to occur can be significant, up to 40% in our toy experiment for the considered hypothesis. More details on the probability distributions of \(\text {T}_{\text {2}}\) and the option to compute exclusion sensitivities based on the Asimov data set are discussed in Appendix D.

Fig. 6 Comparison of the sensitivity and confidence regions at 95% CL for the toy disappearance experiment obtained with \(\text {T}_{\text {2}}\), \(\text {T}_{\text {1}}\) and \(\text {T}_{\text {0}}\). The exclusion and discovery sensitivities (first and third row, respectively) have been computed using 10,000 pseudo-data sets. The confidence regions have instead been calculated for a pseudo-data set generated under the no-signal hypothesis (second row) and for one generated assuming a sterile neutrino signal with \(\sin ^2(2\theta )=0.04\) and \(\varDelta m^2=1\) eV\(^2\) (fourth row). The rate (left) and shape analyses (middle) are shown independently and combined together (right column). The probability distributions of the test statistics are computed using Monte Carlo techniques. Results obtained when approximating the distributions with chi-square functions are shown for \(\text {T}_{\text {2}}\)

While the asymptotic approximation is not satisfactory for tests based on \(\text {T}_{\text {2}}\), it is very good for tests based on \(\text {T}_{\text {1}}\). The coverage of \(\text {T}_{\text {1}}\) has exactly the same features as Fig. 5a, d and is therefore not shown here. The coverage is correct in the region in which the experiment is sensitive and is slightly higher (97.5%) in the parameter space beyond the experimental sensitivity. The possibility of avoiding a Monte Carlo construction of the probability distributions of \(\text {T}_{\text {1}}\) is a significant advantage and has contributed to making \(\text {T}_{\text {1}}\) popular in the sterile neutrino community.

5 Testing of simple hypotheses

The exclusion sensitivity based on \(\text {T}_{\text {0}}\) for our toy disappearance experiment is shown in the first row of Fig. 6, separately for the rate, shape and combined analysis. This test provides a sensitivity significantly stronger than that obtained with \(\text {T}_{\text {2}}\) and \(\text {T}_{\text {1}}\). This is expected, as the test involves “simple” hypotheses in which the parameters of interest are fixed. The parameter space of the alternative hypothesis is now even more restricted than for \(\text {T}_{\text {1}}\) and the test has maximum power. The discovery sensitivity cannot be calculated, as the no-signal hypothesis is always accepted when used as \(\text {H}_{\text {0}}\).
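For concreteness, \(\text {T}_{\text {0}}\) can be sketched for the same hypothetical toy setup used in the previous snippets (reusing mu(), MU and rng); note that no fit is involved, since both hypotheses are fully specified.

```python
def T0(data, mu_test):
    """Simple-vs-simple statistic: -2 ln [ L(tested point) / L(no signal) ].
    Large values disfavour the tested signal point."""
    return 2.0 * (poisson.logpmf(data, mu(0.0, 1.0)).sum()
                  - poisson.logpmf(data, mu_test).sum())

# distribution of T0 under the no-signal hypothesis for one tested grid point
t0 = [T0(rng.poisson(mu(0.0, 1.0)), MU[20, 2]) for _ in range(1000)]
```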

The confidence regions extracted for a specific pseudo-data set not containing a signal are shown in the second row of Fig. 6. \(\text {T}_{\text {0}}\) can provide extremely stringent constraints on \(\sin ^2(2\theta )\) that are orders of magnitude beyond the sensitivity. To mitigate this behaviour, this test statistic is used in combination with the CL\(_\text {s}\) method [44], which penalizes constraints stronger than the sensitivity by introducing an overcoverage in the test. The combination of \(\text {T}_{\text {0}}\) and the CL\(_\text {s}\) method is known as “Gaussian CL\(_\text {s}\)” [10].
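The prescription can be sketched as follows, under the asymptotic assumption that \(\text {T}_{\text {0}}\) is Gaussian with mean \(\mp \bar{T}\) and standard deviation \(2\sqrt{\bar{T}}\) under the signal and no-signal hypotheses respectively, where \(\bar{T}\) is the value of \(\text {T}_{\text {0}}\) on the no-signal Asimov data set; the numbers in the example are hypothetical.

```python
# Sketch of the "Gaussian CLs" prescription: a tested signal point is
# excluded at 95% CL when CLs <= 0.05.
import numpy as np
from scipy.stats import norm

def gaussian_cls(t0_obs, t0_asimov):
    """CLs = P(T0 >= obs | signal) / P(T0 >= obs | no signal)."""
    sigma = 2.0 * np.sqrt(t0_asimov)
    p_sig = norm.sf(t0_obs, loc=-t0_asimov, scale=sigma)  # signal hypothesis
    p_bkg = norm.sf(t0_obs, loc=+t0_asimov, scale=sigma)  # no-signal hypothesis
    return p_sig / p_bkg

print(gaussian_cls(t0_obs=5.0, t0_asimov=4.0))   # ~0.03: excluded at 95% CL
```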

The plots in the fourth row of Fig. 6 show the confidence regions extracted for a pseudo-data set with an injected signal. These regions have two peculiarities. Similarly to \(\text {T}_{\text {1}}\), the non-standard construction of the confidence region produces an allowed \(\sin ^2(2\theta )\) interval for each \(\varDelta m^2\) value. However, unlike for \(\text {T}_{\text {1}}\), the \(\sin ^2(2\theta )\) intervals are now always connected to the no-signal hypothesis \(\sin ^2(2\theta )=0\), even for the true \(\varDelta m^2\) value, as the alternative hypothesis in the test is now fixed to the no-signal hypothesis.

In conclusion, while \(\text {T}_{\text {0}}\) has a greater power and can produce the strongest limits in terms of \(\sin ^2(2\theta )\), it produces confidence regions that cannot constrain either of the parameters of interest. We confirm that the probability distribution of \(\text {T}_{\text {0}}\) converges to a normal distribution for our toy appearance and disappearance experiments, as reported in Ref. [10].

6 Comparison and discussion

The main difference among the statistical methods applied to the search for sterile neutrinos has been traced back to the definition of the alternative hypothesis in the hypothesis testing procedure. The considered definitions lead to three different test statistics that are used to construct confidence regions in the \(\sin ^2(2\theta )\) vs. \(\varDelta m^2\) parameter space. The sensitivities and confidence regions constructed for each test are compared in Fig. 6.

In \(\text {T}_{\text {2}}\), the parameter space of the alternative hypothesis covers all possible values of \(\sin ^2(2\theta )\) and \(\varDelta m^2\). This test is the natural choice when the values of the parameters of interest are unknown and a generic search over the full parameter space is intended. Using this test for an interval estimation procedure naturally provides two-dimensional confidence regions in the \(\sin ^2(2\theta )\) vs. \(\varDelta m^2\) space. The probability distributions of this test statistic are not well approximated by chi-square functions in the analysis of sterile neutrino experiments, and such an approximation can lead to very inaccurate confidence regions and even to an erroneous rejection of the no-signal hypothesis.

In \(\text {T}_{\text {1}}\), the value of \(\varDelta m^2\) is assumed to be known prior to the experiment and the parameter space of the alternative hypothesis is restricted to a unique \(\varDelta m^2\) value. \(\text {T}_{\text {1}}\) naturally generates one-dimensional confidence regions in the \(\sin ^2(2\theta )\) space. Two-dimensional confidence regions can technically be created as a union of \(\sin ^2(2\theta )\) intervals, each computed for a different fixed value of \(\varDelta m^2\). Such confidence regions have proper coverage but also some pathologies. In particular, while the constraints on \(\sin ^2(2\theta )\) are more stringent than for \(\text {T}_{\text {2}}\), the test has no capability to constrain \(\varDelta m^2\) and the confidence region extends over any \(\varDelta m^2\) value. The conditions of Wilks’ theorem are almost fulfilled, and the probability distribution of \(\text {T}_{\text {1}}\) accurately follows a chi-square function, except close to the physical border, where it becomes an equal mixture of a chi-square function and a Dirac delta function at zero [43].
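The boundary case can be made explicit: under the mixture distribution of Ref. [43], the p-value of an observed value of \(\text {T}_{\text {1}}\) is half the chi-square tail probability, which lowers the 95% CL critical value from 3.84 to about 2.71. A minimal sketch:

```python
from scipy.stats import chi2

def p_value_T1_at_border(t_obs):
    """p-value for T1 when the true sin^2(2theta) lies on the physical border:
    half a chi-square (1 dof) tail plus half a delta function at zero."""
    return 0.5 * chi2.sf(t_obs, df=1) if t_obs > 0 else 1.0

print(p_value_T1_at_border(2.71))   # ~0.05, instead of chi2.sf(2.71, 1) ~ 0.10
```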

The test statistic \(\text {T}_{\text {0}}\) compares two simple hypotheses with fixed values of the parameters of interest. The alternative hypothesis is defined as the no-signal hypothesis. Thus, the no-signal hypothesis is accepted by construction when used as \(\text {H}_{\text {0}}\). The natural confidence regions constructed using this test are point-like. Two-dimensional regions in the \(\sin ^2(2\theta )\) vs \(\varDelta m^2\) space can be obtained as a union of point-like confidence regions, but this non-standard construction produces regions that do not constrain the parameters of interest and only set upper limits on \(\sin ^2(2\theta )\). These limits are consistently stronger than for \(\text {T}_{\text {1}}\) and \(\text {T}_{\text {2}}\), as this test has maximum power. The asymptotic formulas, namely Gaussian distributions, appear to describe the probability distribution of this test statistic well in a large set of conditions.

In summary, all the test statistics are conceptually correct and have a natural scope of application. The more information is available, the more the parameter space of the alternative hypothesis can be restricted and the greater the power of the test becomes. However, some issues arise when these tests are used – regardless of their natural scope – to build two-dimensional confidence regions in the \(\varDelta m^2\) vs. \(\sin ^2(2\theta )\) space. The regions produced by \(\text {T}_{\text {1}}\) and \(\text {T}_{\text {0}}\) do not constrain \(\varDelta m^2\), as they have an allowed \(\sin ^2(2\theta )\) interval for any \(\varDelta m^2\) value. In addition, the regions produced by \(\text {T}_{\text {0}}\) do not even constrain \(\sin ^2(2\theta )\), as the \(\sin ^2(2\theta )\) interval is always connected to \(\sin ^2(2\theta )=0\), even in the presence of a strong signal. Since the primary goal of the current experiments is to find a signal at unknown \(\sin ^2(2\theta )\) and \(\varDelta m^2\) values, we find it natural to adopt an analysis that is able to pin down both oscillation parameters simultaneously and recommend the usage of \(\text {T}_{\text {2}}\).

To ease the comparison of the performance and results from different experiments, it would be convenient for the field to adopt a standardized analysis procedure. Based on the results presented in this article and expanding the proposal of Ref. [8], such a standard analysis could follow these steps:

  1. identify the most likely values of \(\sin ^2(2\theta )\) and \(\varDelta m^2\), defined as those corresponding to the maximum of the likelihood function over the space \(\{\sin ^2(2\theta ), \varDelta m^2: 0\le \sin ^2(2\theta )\le 1, \varDelta m^2\ge 0\}\) (i.e. the maximum likelihood estimators);

  2. check that the data are compatible with the model corresponding to the most likely values of the parameters of interest by using a “likelihood ratio” goodness-of-fit test [34] whose probability distribution is verified or constructed with Monte Carlo techniques;

  3. construct the two-dimensional confidence region based on \(\text {T}_{\text {2}}\) (a minimal code sketch of these steps follows this list). If the no-signal hypothesis is accepted, the confidence region will extend down to vanishing \(\sin ^2(2\theta )\) values and its upper limit can be plotted along with its median value (i.e. the exclusion sensitivity) and 68/95% central intervals expected under the no-signal hypothesis (as in Fig. 3). If the no-signal hypothesis is instead rejected, the confidence region can be plotted for different confidence levels along with the discovery sensitivity, as in Fig. 4.
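The three steps can be condensed into the following sketch, which reuses mu(), MU, dm2_grid, s2t_grid and rng from the earlier hypothetical toy; the per-point Monte Carlo critical values of \(\text {T}_{\text {2}}\) (the array crit, one value per grid point from the Neyman construction) are assumed to be pre-computed.

```python
def standardized_analysis(data, crit, n_mc=500):
    ll = poisson.logpmf(data, MU).sum(axis=-1)       # log L on the (dm2, s2t) grid
    # step 1: maximum likelihood estimators over the physical parameter space
    i_best = np.unravel_index(ll.argmax(), ll.shape)
    best = (dm2_grid[i_best[0]], s2t_grid[i_best[1]])
    # step 2: likelihood-ratio goodness of fit at the best fit, calibrated
    # with pseudo-data generated from the best-fit model (no asymptotic formula)
    gof_obs = 2.0 * (poisson.logpmf(data, data).sum() - ll.max())
    gof_mc = [2.0 * (poisson.logpmf(d, d).sum()
                     - poisson.logpmf(d, MU).sum(axis=-1).max())
              for d in rng.poisson(MU[i_best], size=(n_mc, data.size))]
    p_gof = np.mean(np.array(gof_mc) >= gof_obs)
    # step 3: two-dimensional confidence region from the T2 map and the
    # pre-computed Monte Carlo critical values
    region = 2.0 * (ll.max() - ll) <= crit           # boolean mask on the grid
    return best, p_gof, region
```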

When the number of events observed by an experiment is large and each bin of the data set contains tens of counts or more, the Poisson probability in the likelihood function can be approximated by a Gaussian probability. In this case, the likelihood function can be converted into a chi-square function. This treatment can be regarded as a sub-case of what is discussed in the previous sections. However, independently of the number of events, Wilks’ theorem is not valid for \(\text {T}_{\text {2}}\) because of the presence of physical borders and of the statistical fluctuations in the data sample that mimic a sterile neutrino signature. In addition, the alternative hypothesis becomes independent of \(\varDelta m^2\) when \(\sin ^2(2\theta )\) tends to zero. The construction of the test statistic probability distribution through Monte Carlo techniques is hence mandatory in order to ensure accurate results. The Monte Carlo construction is computationally demanding, but it is feasible, as proven by the experiments that are already performing it. Indeed, the proposed analysis based on \(\text {T}_{\text {2}}\) is similar to the one used by e.g. MiniBooNE and PROSPECT.
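The quality of the large-sample approximation can be checked directly; the snippet below compares the exact Poisson likelihood-ratio statistic with a Gaussian (Neyman) chi-square on hypothetical well-populated bins.

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(2)
expected = np.full(20, 50.0)            # hypothetical expectations, ~50 counts/bin
data = rng.poisson(expected)

# exact Poisson statistic: -2 ln [ L(expected) / L(saturated model) ]
t_poisson = 2.0 * (poisson.logpmf(data, data)
                   - poisson.logpmf(data, expected)).sum()
# Gaussian approximation: Neyman chi-square with sigma_i^2 = n_i
t_gauss = ((data - expected) ** 2 / data).sum()
print(t_poisson, t_gauss)               # close for well-populated bins
```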

The inapplicability of Wilks’ theorem and the non-trivial interplay between the rate and shape analyses also have repercussions on global fits, for which the likelihoods of the individual experiments must be combined and the probability distribution of \(\text {T}_{\text {2}}\) must be computed by generating pseudo-data simultaneously for all the experiments considered. For this reason, it would be useful if the experiments released, in addition to their likelihood function and their data, the probability distributions of the individual signal and background components used in the fit and for the pseudo-data generation.
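A combination along these lines can be sketched as follows, assuming every experiment provides its expectations on a common \((\varDelta m^2, \sin ^2(2\theta ))\) grid (arrays shaped like MU in the earlier snippets, with the first column corresponding to \(\sin ^2(2\theta )=0\)); poisson and rng are reused from above.

```python
def joint_T2(datasets, grids):
    """T2 of the combined likelihood: per-experiment log-likelihoods are
    summed before maximizing over the common oscillation-parameter grid."""
    ll = sum(poisson.logpmf(d, g).sum(axis=-1) for d, g in zip(datasets, grids))
    ll0 = sum(poisson.logpmf(d, g[0, 0]).sum() for d, g in zip(datasets, grids))
    return 2.0 * (ll.max() - ll0)

def joint_pseudo_data(grids):
    """One coherent fluctuation of all experiments under the no-signal model."""
    return [rng.poisson(g[0, 0]) for g in grids]
```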

7 Conclusions

The statistical methods used to search for short-baseline neutrino oscillations induced by a hypothetical sterile neutrino with mass at the eV scale have been reviewed and compared. Three hypothesis testing procedures are used in the field to create confidence intervals, each based on a specific test statistic. We showed that two out of the three tests make implicit assumptions on the values of the parameters of interest for sterile neutrinos, i.e. \(\varDelta m^2\) and \(\sin ^2(2\theta )\). Making different assumptions changes the question addressed by the test and, consequently, the result of the analysis.

For the first time, the performance of the three tests has been compared in a coherent way over a comprehensive set of limit-setting and signal-discovery scenarios. In particular, we considered both disappearance and appearance experiments as well as rate- and shape-based analyses. For each scenario we constructed the probability distributions of the test statistic using Monte Carlo techniques and found that they can differ significantly from the usual asymptotic approximations. The confidence regions reconstructed by the three tests can be significantly different, making it hard to compare the results from experiments that adopt different analyses. Our results are consistent with those obtained in Refs. [8,9,10] for specific scenarios.

The current generation of sterile-neutrino searches aims at finding a signal anywhere in the parameter space accessible to the experiment. This naturally leads to an analysis based on a test without any assumption on the values of the oscillation parameters. In this way, the analysis is able to constrain both \(\varDelta m^2\) and \(\sin ^2(2\theta )\) when a signal is observed. Thus, we recommend the use of \(\text {T}_{\text {2}}\) and the construction of its probability distributions through Monte Carlo techniques.