Abstract
We report a study of the CUORE sensitivity to neutrinoless double beta (\(0\nu \beta \beta \)) decay. We used a Bayesian analysis based on a toy Monte Carlo (MC) approach to extract the exclusion sensitivity to the \(0\nu \beta \beta \) decay half-life (\(T_{1/2}^{\,0\nu }\)) at \(90\%\) credibility interval (CI) – i.e. the interval containing the true value of \(T_{1/2}^{\,0\nu }\) with \(90\%\) probability – and the \(3~\sigma \) discovery sensitivity. We consider various background levels and energy resolutions, and describe the influence of dividing the data into subsets with different background levels. If the background level and the energy resolution meet the expectations, CUORE will reach a \(90\%\) CI exclusion sensitivity of \(2\cdot 10^{25}\) year with 3 months, and \(9\cdot 10^{25}\) year with 5 years of live time. Under the same conditions, the discovery sensitivity after 3 months and 5 years will be \(7\cdot 10^{24}\) year and \(4\cdot 10^{25}\) year, respectively.
Introduction
Neutrinoless double beta decay is a non-Standard-Model process that violates total lepton number conservation and implies a Majorana neutrino mass component [1, 2]. This decay is currently being investigated with a variety of double beta decaying isotopes. A recent review can be found in Ref. [3]. The Cryogenic Underground Observatory for Rare Events (CUORE) [4,5,6] is an experiment searching for \(0\nu \beta \beta \) decay in \(^{130}\)Te. It is located at the Laboratori Nazionali del Gran Sasso of INFN, Italy. In CUORE, 988 TeO\(_2\) crystals with natural \(^{130}\)Te isotopic abundance and a 750 g average mass are operated simultaneously as source and bolometric detector for the decay. In this way, the \(0\nu \beta \beta \) decay signature is a peak at the Q-value of the reaction (\(Q_{\beta \beta }\), 2527.518 keV for \(^{130}\)Te [7,8,9]). Bolometric crystals are characterized by an excellent energy resolution (\({\sim }0.2\%\) full width at half maximum, FWHM) and a very low background at \(Q_{\beta \beta }\), which is expected to be at the \(10^{-2}\) cts\(/(\)keV\(\cdot \)kg\(\cdot \)yr\()\) level in CUORE [10].
The current best limit on \(0\nu \beta \beta \) decay in \(^{130}\)Te comes from a combined analysis of the CUORE-0 [11, 12] and Cuoricino data [13, 14]. With a total exposure of 29.6 kg\(\cdot \)year, a limit of \(T_{1/2}^{0\nu }>4.0\cdot 10^{24}\) year (\(90\%\) CI) is obtained [15] for the \(0\nu \beta \beta \) decay half-life, \(T_{1/2}^{\,0\nu }\).
After the installation of the detector, successfully completed in summer 2016, CUORE started the commissioning phase at the beginning of 2017. The knowledge of the discovery and exclusion sensitivity to \(0\nu \beta \beta \) decay as a function of the measurement live time can be exploited to set the criteria for the unblinding of the data and the release of the \(0\nu \beta \beta \) decay analysis results.
In this work, we dedicate our attention to those factors which could strongly affect the sensitivity, such as the background index (\(BI\)) and the energy resolution at \(Q_{\beta \beta }\). In CUORE, the crystals in the outer part of the array are expected to show a higher \(BI\) than those in the middle [10]. Considering this and following the strategy already implemented by the GERDA Collaboration [16, 17], we show how the division of the data into subsets with different \(BI\) could improve the sensitivity.
The reported results are obtained by means of a Bayesian analysis performed with the Bayesian Analysis Toolkit (BAT) [18]. The analysis is based on a toy-MC approach. At the cost of a much longer computation time with respect to the use of the median-sensitivity formula [19], this provides the full sensitivity probability distribution and not only its median value.
In Sect. 2, we review the statistical methods for the parameter estimation, as well as for the extraction of the exclusion and discovery sensitivity. Section 3 describes the experimental parameters used for the analysis while its technical implementation is summarized in Sect. 4. Finally, we present the results in Sect. 5.
Statistical method
The computation of exclusion and discovery sensitivities presented here follows a Bayesian approach: we exploit the Bayes theorem both for parameter estimation and model comparison. In this work, we use the following notation:

H indicates both a hypothesis and the corresponding model;

\(H_0\) is the background-only hypothesis, according to which the known physics processes are enough to explain the experimental data. In the present case, we expect the CUORE background to be flat in a 100 keV region around \(Q_{\beta \beta }\), except for the presence of a \(^{60}\)Co summation peak at 2505.7 keV. Therefore, \(H_0\) is implemented as a flat background distribution plus a Gaussian describing the \(^{60}\)Co peak. In CUORE-0, this peak was found to be centered at an energy \(1.9\pm 0.7\) keV higher than that tabulated in the literature [15]. This effect, present also in Cuoricino [14], is a feature of all gamma summation peaks. Hence, we will consider the \(^{60}\)Co peak to be at 2507.6 keV.

\(H_1\) is the background-plus-signal hypothesis, for which some new physics is required to explain the data. In our case, the physics involved in \(H_1\) contains the background processes as well as \(0\nu \beta \beta \) decay. The latter is modeled as a Gaussian peak at \(Q_{\beta \beta }\).

\(\mathbf {E}\) represents the data. It is a list of N energy bins centered at the energy \(E_i\) and containing \(n_i\) event counts. The energy range is [2470; 2570] keV. This is the same range used for the CUORE-0 \(0\nu \beta \beta \) decay analysis [15], and is bounded by the possible presence of peaks from \(^{214}\)Bi at 2447.7 keV and the \(^{208}\)Tl X-ray escape at \({\sim }2585\) keV [15]. While an unbinned fit fully exploits the information contained in the data, it can result in a long computation time for large data samples. Given an energy resolution of \({\sim }5\) keV FWHM and using a 1 keV bin width, the \(\pm 3\,\sigma \) range of a Gaussian peak spans 12.7 bins. With the 1 keV binning choice, the loss of information with respect to the unbinned fit is negligible.

\(\Gamma ^{0\nu }\) is the parameter describing the \(0\nu \beta \beta \) decay rate for \(H_1\):
$$\begin{aligned} \Gamma ^{0\nu } = \frac{\ln {2}}{T_{1/2}^{0\nu }}. \end{aligned}$$(1) 
\(\mathbf {\theta }\) is the list of nuisance parameters describing the background processes in both \(H_0\) and \(H_1\);

\(\Omega \) is the parameter space for the parameters \(\mathbf {\theta }\).
Parameter estimation
We perform the parameter estimation for a model H through the Bayes theorem, which yields the probability distribution for the parameters based on the measured data, under the assumption that the model H is correct. In the \(0\nu \beta \beta \) decay analysis, we are interested in the measurement of \(\Gamma ^{0\nu }\) for the hypothesis \(H_1\). The probability distribution for the parameter set \((\Gamma ^{0\nu },\mathbf {\theta })\) is:
$$\begin{aligned} P\left( \Gamma ^{0\nu },\mathbf {\theta } \,\big |\, \mathbf {E},H_1\right) = \frac{P\left( \mathbf {E} \,\big |\, \Gamma ^{0\nu },\mathbf {\theta },H_1\right) \,\pi \left( \Gamma ^{0\nu }\right) \pi \left( \mathbf {\theta }\right) }{\int _{\Omega } P\left( \mathbf {E} \,\big |\, \Gamma ^{0\nu },\mathbf {\theta },H_1\right) \,\pi \left( \Gamma ^{0\nu }\right) \pi \left( \mathbf {\theta }\right) \,\mathrm {d}\Gamma ^{0\nu }\,\mathrm {d}\mathbf {\theta }}. \end{aligned}$$(2)
The numerator contains the conditional probability \(P\left( \mathbf {E} \,\big |\, \Gamma ^{0\nu }, \mathbf {\theta }, H_1\right) \) of finding the measured data \(\mathbf {E}\) given the model \(H_1\) for a set of parameters \((\Gamma ^{0\nu },\mathbf {\theta })\), times the prior probability \(\pi \) of each of the considered parameters. The prior probability has to be chosen according to the knowledge available before the analysis of the current data. For instance, the prior for the signal rate \(\Gamma ^{0\nu }\) might be based on the half-life limits reported by previous experiments, while the prior for the background level in the region of interest (ROI) could be set based on the extrapolation of the background measured outside the ROI. The denominator represents the overall probability to obtain the data \(\mathbf {E}\) given the hypothesis \(H_1\) and all possible parameter combinations, \(P\left( \mathbf {E} \,\big |\, H_1\right) \).
The posterior probability distribution for \(\Gamma ^{0\nu }\) is obtained via marginalization, i.e. integrating \(P\left( \Gamma ^{0\nu },\mathbf {\theta } \,\big |\, \mathbf {E},H_1\right) \) over all nuisance parameters \(\mathbf {\theta }\):
$$\begin{aligned} P\left( \Gamma ^{0\nu } \,\big |\, \mathbf {E},H_1\right) = \int _{\Omega } P\left( \Gamma ^{0\nu },\mathbf {\theta } \,\big |\, \mathbf {E},H_1\right) \mathrm {d}\mathbf {\theta }. \end{aligned}$$(3)
For each model H, the probability of the data given the model and the parameters has to be defined. For a fixed set of experimental data, this corresponds to the likelihood function [20]. Dividing the data into \(N_d\) subsets with index d characterized by different background levels, and considering a binned energy spectrum with N bins and a number \(n_{di}\) of events in the bin i of the d subset spectrum, the likelihood function is expressed by the product of a Poisson term for each bin di:
$$\begin{aligned} P\left( \mathbf {E} \,\big |\, \Gamma ^{0\nu },\mathbf {\theta },H\right) = \prod _{d=1}^{N_d} \prod _{i=1}^{N} \frac{e^{-\lambda _{di}}\, \lambda _{di}^{\,n_{di}}}{n_{di}!}, \end{aligned}$$(4)
where \(\lambda _{di}\) is the expectation value for the bin di. The best fit is defined as the set of parameter values \((\Gamma ^{0\nu },\mathbf {\theta })\) for which the likelihood is at its global maximum. In practice, we perform the maximization on the log-likelihood
$$\begin{aligned} \ln P\left( \mathbf {E} \,\big |\, \Gamma ^{0\nu },\mathbf {\theta },H\right) = \sum _{d=1}^{N_d} \sum _{i=1}^{N} \left[ n_{di} \ln {\lambda _{di}} - \lambda _{di} - \ln {(n_{di}!)} \right] , \end{aligned}$$(5)
where the additive terms \(\ln {(n_{di}!)}\) are dropped from the calculation.
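As a concrete illustration, the binned log-likelihood with the \(\ln {(n_{di}!)}\) terms dropped can be evaluated in a few lines. This is a minimal sketch; the function name and arguments are ours and not part of the CUORE analysis code:

```python
import math
import numpy as np

def binned_poisson_loglik(n, lam):
    """Binned Poisson log-likelihood with the constant ln(n_i!) terms dropped.

    n   -- observed counts per bin
    lam -- expected counts per bin (same length)
    """
    n = np.asarray(n, dtype=float)
    lam = np.asarray(lam, dtype=float)
    # ln L = sum_i [ n_i * ln(lam_i) - lam_i ]   (+ constant, omitted)
    return float(np.sum(n * np.log(lam) - lam))
```

Dropping the factorial terms shifts the log-likelihood by a constant that does not depend on the parameters, so the location of its maximum is unchanged.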
The difference between \(H_0\) and \(H_1\) is manifested in the formulation of \(\lambda _{di}\). As mentioned above, we parametrize \(H_0\) with a flat distribution over the considered energy range, i.e. [2470; 2570] keV:
$$\begin{aligned} f_{bkg}(E) = \frac{1}{E_{\mathrm{max}} - E_{\mathrm{min}}}, \end{aligned}$$(6)
plus a Gaussian distribution for the \(^{60}\)Co peak:
$$\begin{aligned} f_{Co}(E) = \frac{1}{\sqrt{2\pi }\,\sigma } \exp \left( -\frac{\left( E-\mu _{Co}\right) ^2}{2\sigma ^2}\right) , \quad \mu _{Co} = 2507.6~\mathrm {keV}. \end{aligned}$$(7)
The expected background counts in the bin di correspond to the integral of \(f_{bkg}(E)\) in the bin di times the total number of background counts \(M^{bkg}_d\) for the subset d:
$$\begin{aligned} \lambda ^{bkg}_{di} = M^{bkg}_{d} \int _{E^{\mathrm{min}}_{di}}^{E^{\mathrm{max}}_{di}} f_{bkg}(E) \, \mathrm {d}E, \end{aligned}$$(8)
where \(E^{\mathrm{min}}_{di}\) and \(E^{\mathrm{max}}_{di}\) are the left and right margins of the energy bin di, respectively. Considering bins of size \(\delta E_{di}\) and expressing \(M^{bkg}_{d}\) as a function of the background index \(BI_d\), of the total mass \(m_d\) and of the measurement live time \(t_d\), we obtain:
$$\begin{aligned} \lambda ^{bkg}_{di} = BI_d \cdot m_d \cdot t_d \cdot \delta E_{di}. \end{aligned}$$(9)
Similarly, the expectation value for the \(^{60}\)Co distribution in the bin di is:
$$\begin{aligned} \lambda ^{Co}_{di} = M^{Co}_{d} \int _{E^{\mathrm{min}}_{di}}^{E^{\mathrm{max}}_{di}} f_{Co}(E) \, \mathrm {d}E, \end{aligned}$$(10)
where \(M^{Co}_d\) is the total number of \(^{60}\)Co events for the subset d and can be redefined as a function of the \(^{60}\)Co event rate, \(R^{Co}_d\):
$$\begin{aligned} M^{Co}_{d} = R^{Co}_{d} \cdot m_d \cdot t_d. \end{aligned}$$(11)
The total expectation value \(\lambda _{di}\) for \(H_0\) is then:
$$\begin{aligned} \lambda _{di} = \lambda ^{bkg}_{di} + \lambda ^{Co}_{di}. \end{aligned}$$(12)
In the case of \(H_1\) an additional expectation value for \(0\nu \beta \beta \) decay is required:
$$\begin{aligned} \lambda ^{0\nu }_{di} = M^{0\nu }_{d} \int _{E^{\mathrm{min}}_{di}}^{E^{\mathrm{max}}_{di}} \frac{1}{\sqrt{2\pi }\,\sigma } \exp \left( -\frac{\left( E-Q_{\beta \beta }\right) ^2}{2\sigma ^2}\right) \mathrm {d}E. \end{aligned}$$(13)
The number of \(0\nu \beta \beta \) decay events in the subset d is:
$$\begin{aligned} M^{0\nu }_{d} = \Gamma ^{0\nu } \cdot \frac{N_A}{m_a} \cdot f_{130} \cdot \varepsilon _{\mathrm{tot}} \cdot m_d \cdot t_d, \end{aligned}$$(14)
where \(N_A\) is the Avogadro number, \(m_a\) and \(f_{130}\) are the molar mass and the isotopic abundance of \(^{130}\)Te and \(\varepsilon _{\mathrm{tot}}\) is the total efficiency, i.e. the product of the containment efficiency \(\varepsilon _{MC}\) (obtained with MC simulations) and the instrumental efficiency \(\varepsilon _{\mathrm{instr}}\).
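For orientation, the expected number of signal events can be evaluated numerically. The sketch below assumes illustrative values for the TeO\(_2\) molar mass (\({\sim }159.6\) g/mol), the natural \(^{130}\)Te isotopic abundance (\({\sim }34.2\%\)) and the total efficiency (\(81.3\%\), Sect. 3); none of these numbers should be taken as official analysis inputs:

```python
import math

N_A = 6.02214076e23  # Avogadro number, 1/mol

def expected_signal_counts(T_half_yr, exposure_kg_yr,
                           molar_mass_g=159.6,  # TeO2 molar mass (assumed)
                           f_130=0.342,         # 130Te abundance (assumed)
                           eff_tot=0.813):      # total efficiency (Sect. 3)
    """lambda_0nu = ln2 / T_half * (N_A / m_a) * f_130 * eff_tot * (m * t)."""
    decays_per_mol_yr = math.log(2) / T_half_yr * N_A
    mol_yr = exposure_kg_yr * 1000.0 / molar_mass_g  # kg*yr -> g*yr -> mol*yr
    return decays_per_mol_yr * mol_yr * f_130 * eff_tot
```

With \(T_{1/2}^{\,0\nu } = 9\cdot 10^{25}\) year and an exposure of \({\sim }3700\) kg\(\cdot \)yr (the full crystal mass for 5 years), this yields roughly 30 expected signal events.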
Exclusion sensitivity
We compute the exclusion sensitivity by means of the \(90\%\) CI limit. This is defined as the value of \(T_{1/2}^{\,0\nu }\) corresponding to the \(90\%\) quantile of the posterior \(\Gamma ^{0\nu }\) distribution:
$$\begin{aligned} T_{1/2}^{\,0\nu }\left( 90\%\ CI \right) = \frac{\ln {2}}{\Gamma ^{0\nu }_{0.9}}, \quad \mathrm {with} \quad \int _0^{\Gamma ^{0\nu }_{0.9}} P\left( \Gamma ^{0\nu } \,\big |\, \mathbf {E},H_1\right) \mathrm {d}\Gamma ^{0\nu } = 0.9. \end{aligned}$$(15)
An example of the posterior probability for \(\Gamma ^{0\nu }\) and the corresponding \(90\%\) CI limit is shown in Fig. 1, top. Flat prior distributions are used for all parameters, as described in Sect. 3.
In the Bayesian approach, the limit is a statement regarding the true value of the considered physical quantity. In our case, a \(90\%\) CI limit on \(T_{1/2}^{\,0\nu }\) is to be interpreted as the value above which, given the current knowledge, the true value of \(T_{1/2}^{\,0\nu }\) lies with \(90\%\) probability. This differs from a frequentist \(90\%\) C.L. limit, which is a statement regarding the possible outcomes of repeated identical measurements, and should be interpreted as the value above which the best-fit value of \(T_{1/2}^{\,0\nu }\) would lie in \(90\%\) of such hypothetical identical experiments.
In order to extract the exclusion sensitivity, we generate a set of N toy-MC spectra according to the background-only model, \(H_0\). We then fit the spectra with the background-plus-signal model, \(H_1\), and obtain the \(T_{1/2}^{\,0\nu }\left( 90\%\ CI \right) \) distribution (Fig. 1, bottom). Its median \(\hat{T}_{1/2}^{\,0\nu }\left( 90\%\ CI \right) \) is referred to as the median sensitivity. For a real experiment, the experimental \(T_{1/2}^{\,0\nu }\) limit is expected to lie above/below \(\hat{T}_{1/2}^{\,0\nu }\left( 90\%\ CI \right) \) with 50% probability. Alternatively, one can consider the mode of the distribution, which corresponds to the most probable \(T_{1/2}^{\,0\nu }\) limit.
The exact procedure for the computation of the exclusion sensitivity is the following:

for each subset, we generate a random number of background events \(N_d^{bkg}\) according to a Poisson distribution with mean \(\lambda ^{bkg}_d\);

for each subset, we generate \(N_d^{bkg}\) events with an energy randomly distributed according to \(f_{bkg}(E)\);

we repeat the procedure for the \(^{60}\)Co contribution;

we fit the toy-MC spectrum with the \(H_1\) model (Eq. 2), and marginalize the likelihood with respect to the parameters \(BI_d\) and \(R_d^{Co}\) (Eq. 3);

we extract the \(90\%\) CI limit on \(T_{1/2}^{\,0\nu }\);

we repeat the algorithm for N toy-MC experiments, and build the distribution of \(T_{1/2}^{\,0\nu }\left( 90\%\ CI \right) \).
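The procedure above can be caricatured with a one-bin counting experiment: a known flat background b, a flat prior on the signal S, and an analytic posterior in place of the full BAT fit. All names and numbers below are ours, chosen only to illustrate the logic of generating background-only toys and taking the median of the 90% upper limits:

```python
import numpy as np

rng = np.random.default_rng(0)

def limit_90(n_obs, b, s_grid):
    """90% CI upper limit on the signal S for a single-bin counting
    experiment with known background b and flat prior on S >= 0:
    posterior(S) proportional to (b + S)^n * exp(-(b + S))."""
    logp = n_obs * np.log(b + s_grid) - (b + s_grid)
    p = np.exp(logp - logp.max())
    cdf = np.cumsum(p)
    cdf /= cdf[-1]
    return s_grid[np.searchsorted(cdf, 0.90)]

def median_exclusion_sensitivity(b, n_toys=1000):
    """Median 90% upper limit over background-only toy experiments."""
    s_grid = np.linspace(0.0, b + 10.0 * np.sqrt(b) + 20.0, 4001)
    limits = [limit_90(rng.poisson(b), b, s_grid) for _ in range(n_toys)]
    return float(np.median(limits))
```

In the real analysis the limit on S would be converted to a limit on \(T_{1/2}^{\,0\nu }\) through exposure and efficiency, and the posterior comes from the full binned fit with nuisance parameters.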
Discovery sensitivity
The discovery sensitivity provides information on the required strength of the signal amplitude for claiming that the known processes alone are not sufficient to properly describe the experimental data. It is computed on the basis of the comparison between the background-only and the background-plus-signal models. A method for the calculation of the Bayesian discovery sensitivity was introduced in Ref. [21]. We report it here for completeness.
In our case, we assume that \(H_0\) and \(H_1\) are a complete set of models, for which:
$$\begin{aligned} P\left( H_0 \,\big |\, \mathbf {E}\right) + P\left( H_1 \,\big |\, \mathbf {E}\right) = 1. \end{aligned}$$(16)
The application of the Bayes theorem to the models \(H_0\) and \(H_1\) yields:
$$\begin{aligned} P\left( H \,\big |\, \mathbf {E}\right) = \frac{P\left( \mathbf {E} \,\big |\, H\right) \,\pi \left( H\right) }{P\left( \mathbf {E} \,\big |\, H_0\right) \pi \left( H_0\right) + P\left( \mathbf {E} \,\big |\, H_1\right) \pi \left( H_1\right) }. \end{aligned}$$(17)
In this case, the numerator contains the probability of measuring the data \(\mathbf {E}\) given the model H:
$$\begin{aligned} P\left( \mathbf {E} \,\big |\, H\right) = \int _{\Omega } P\left( \mathbf {E} \,\big |\, \mathbf {\theta },H\right) \,\pi \left( \mathbf {\theta }\right) \,\mathrm {d}\mathbf {\theta }, \end{aligned}$$(18)
while the prior probabilities for the models \(H_0\) and \(H_1\) are both chosen to be 0.5, so that neither model is favored.
The denominator of Eq. 17 is the sum probability of obtaining the data \(\mathbf {E}\) given either the model \(H_0\) or \(H_1\):
$$\begin{aligned} P\left( \mathbf {E}\right) = P\left( \mathbf {E} \,\big |\, H_0\right) \pi \left( H_0\right) + P\left( \mathbf {E} \,\big |\, H_1\right) \pi \left( H_1\right) . \end{aligned}$$(19)
At this point we need to define a criterion for claiming the discovery of new physics. Our choice is to quote the \(3~\sigma \) (median) discovery sensitivity, i.e. the value of \(T_{1/2}^{\,0\nu }\) for which the posterior probability of the background-only model \(H_0\) given the data is smaller than 0.0027 in 50% of the possible experiments. In other words:
$$\begin{aligned} \mathrm {median}\left[ P\left( H_0 \,\big |\, \mathbf {E}\right) \right] = 0.0027. \end{aligned}$$(20)
The detailed procedure for the determination of the discovery sensitivity is:

we produce a toy-MC spectrum according to the \(H_1\) model with an arbitrary value of \(T_{1/2}^{\,0\nu }\);

we fit the spectrum with both \(H_0\) and \(H_1\);

we compute \(P\left( H_0 \,\big |\, \mathbf {E}\right) \);

we repeat the procedure for N toy-MC spectra using the same \(T_{1/2}^{\,0\nu }\);

we repeat the routine with different values of \(T_{1/2}^{\,0\nu }\) until the condition of Eq. 20 is satisfied. The iteration is implemented using the bisection method until a \(5\cdot 10^{-5}\) precision is obtained on the median \(P\left( H_0 \,\big |\, \mathbf {E}\right) \).
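The same one-bin caricature can illustrate the discovery procedure, with equal model priors and a flat signal prior under \(H_1\). All function names, grids and tolerances below are ours; the real analysis integrates the full multidimensional likelihoods with CUBA:

```python
import math
import numpy as np

rng = np.random.default_rng(1)

def pois_pmf(n, mu):
    """Poisson pmf for scalar integer n and scalar or array mean mu."""
    mu = np.asarray(mu, dtype=float)
    return np.exp(n * np.log(mu) - mu - math.lgamma(n + 1))

def posterior_H0(n, b, s_max=50.0, n_grid=2001):
    """P(H0|n) with pi(H0) = pi(H1) = 0.5; under H1 the signal S has a flat
    prior on [0, s_max], so P(n|H1) = (1/s_max) * integral of pmf(n; b+S) dS."""
    p_H0 = float(pois_pmf(n, b))
    s = np.linspace(0.0, s_max, n_grid)
    p_H1 = float(np.sum(pois_pmf(n, b + s)) * (s[1] - s[0])) / s_max
    return p_H0 / (p_H0 + p_H1)

def median_pH0(s_true, b, n_toys=300):
    """Median P(H0|E) over toys generated with true signal s_true."""
    ns = rng.poisson(b + s_true, size=n_toys)
    return float(np.median([posterior_H0(int(n), b) for n in ns]))

def discovery_signal(b, target=0.0027, lo=0.0, hi=50.0, tol=0.1):
    """Bisect on the true signal strength until median P(H0|E) reaches target."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if median_pH0(mid, b) > target:
            lo = mid  # signal too weak: H0 still credible
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The bisection terminates on the signal-strength interval here for simplicity; the analysis described above instead iterates until the median \(P(H_0\,|\,\mathbf {E})\) itself is determined to \(5\cdot 10^{-5}\).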
Experimental parameters
The fit parameters of the \(H_1\) model are \(BI\), \(R^{Co}\) and \(\Gamma ^{0\nu }\), while only the first two are present for \(H_0\). If the data are divided into subsets, different \(BI\) and \(R^{Co}\) fit parameters are considered for each subset. On the contrary, the inverse \(0\nu \beta \beta \) half-life is common to all subsets.
Prior to the assembly of the CUORE crystal towers, we performed a screening survey of the employed materials [22,23,24,25,26,27,28,29]. From these measurements, either a non-zero activity was obtained, or a \(90\%\) confidence level (C.L.) upper limit was set. Additionally, the radioactive contamination of the crystals and holders was obtained from the CUORE-0 background model [30]. We developed a full MC simulation of CUORE [10], and used the results of the screening measurements and of the CUORE-0 background model for the normalization of the simulated spectra. We then computed the \(BI\) at \(Q_{\beta \beta }\) using the output of the simulations. In the present study, we consider only those background contributions for which a non-zero activity is obtained from the available measurements. The largest background consists of \(\alpha \) particles emitted by U and Th surface contaminations of the copper structure holding the crystals. Additionally, we consider a \(^{60}\)Co contribution normalized to the \(90\%\) C.L. limit from the screening measurement. In this sense, the effect of the \(^{60}\)Co background on the CUORE sensitivity is to be regarded as an upper limit. Given the importance of \(^{60}\)Co, especially in the case of suboptimal energy resolution, we preferred to maintain a conservative approach in this regard. In the generation of the toy-MC spectra, we take into account the \(^{60}\)Co half-life (5.27 year), and set the start of data taking to January 2017.
The parameter values used for the production of the toy-MC spectra are reported in Table 1. The quoted uncertainty on the \(BI\) comes from the CUORE MC simulations [10]. We produce the toy-MC spectra using the best-fit value of the \(BI\). We then repeat the analysis after increasing and decreasing the \(BI\) by an amount equivalent to its statistical and systematic uncertainties combined in quadrature.
After running the fit on the entire crystal array as if it were a unique detector, we considered the possibility of dividing the data by grouping the crystals with a similar \(BI\). Namely, since the background at \(Q_{\beta \beta }\) is dominated by surface \(\alpha \) contamination of the copper structure, the crystals facing a larger copper surface are expected to have a larger \(BI\). This effect was already observed in CUORE-0, where the crystals in the uppermost and lowermost levels, which had 3 sides facing the copper shield, were characterized by a larger background than those in all other levels, which were exposed to copper on only 2 sides. Considering the CUORE geometry, the crystals can be divided into 4 subsets with different numbers of exposed faces. Correspondingly, they are characterized by different \(BI\), as reported in Table 2.
A major ingredient of a Bayesian analysis is the choice of the priors. In the present case, we use a flat prior for all parameters. In particular, the prior distribution for \(\Gamma ^{0\nu }\) is flat between zero and a value large enough to contain \({>}99.999\%\) of its posterior distribution. This corresponds to the most conservative choice: any other reasonable prior, e.g. a scale-invariant prior on \(\Gamma ^{0\nu }\), would yield a stronger limit. A different prior choice based on the real characteristics of the experimental spectra might be more appropriate for \(BI\) and \(R^{Co}\) in the analysis of the CUORE data; for the time being, the lack of data prevents the use of informative priors. As a cross-check, we performed the analysis using the \(BI\) and \(^{60}\)Co rate uncertainties obtained from the background budget as the \(\sigma \) of a Gaussian prior. No significant difference was found in the sensitivity band, because the Poisson fluctuations of the generated number of background and \(^{60}\)Co events are dominant in the extraction of the \(\Gamma ^{0\nu }\) posterior probability distribution.
Table 3 lists the constant quantities present in the formulation of \(H_0\) and \(H_1\). All of them are fixed, with the exception of the live time t and the FWHM of the \(0\nu \beta \beta \) decay and \(^{60}\)Co Gaussian peaks. We perform the analysis with a FWHM of 5 and 10 keV, corresponding to a \(\sigma \) of 2.12 and 4.25 keV, respectively. Regarding the efficiency, while in the toy-MC production the \(BI\) and \(R^{Co}\) are multiplied by the instrumental efficiency,^{Footnote 1} in the fit the total efficiency is used. This is the product of the containment and instrumental efficiencies. Also in this case, we use the same value as for CUORE-0, i.e. \(81.3\%\) [15]. We evaluate the exclusion and discovery sensitivities for different live times, with t ranging from 0.1 to 5 year and using logarithmically increasing values: \(t_{i} = 1.05\cdot t_{i-1}\).
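The logarithmically increasing live-time grid can be generated as follows (a trivial sketch; the function name is ours):

```python
def livetime_grid(t_min=0.1, t_max=5.0, ratio=1.05):
    """Geometric grid of live times: t_i = ratio * t_{i-1}, from t_min up to t_max."""
    ts = [t_min]
    while ts[-1] * ratio <= t_max:
        ts.append(ts[-1] * ratio)
    return ts
```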
Fit procedure
We perform the analysis with the software BAT v1.1.0-DEV [21], which internally uses CUBA [31] v4.2 for the integration of multidimensional probabilities and the Metropolis-Hastings algorithm [32] for the fit. The computation time depends on the number of samples drawn from the considered probability distribution.
For the exclusion sensitivity, we draw \(10^5\) likelihood samples for every toy-MC experiment, while, due to the higher computational cost, we use only \(10^3\) for the discovery sensitivity.
For every combination of live time, \(BI\) and energy resolution, we run \(10^5\) (\(10^3\)) toy-MC experiments for the exclusion (discovery) sensitivity study. In the case of the discovery sensitivity, we chose the number of toy-MC experiments as the minimum for which a \(2\%\) relative precision on the median sensitivity was achievable. For the exclusion sensitivity, it was possible to increase both the number of toy-MC experiments and iterations, with a systematic uncertainty on the median sensitivity at the per-mille level.
Results and discussion
Exclusion sensitivity
The distributions of the \(90\%\) CI limit as a function of live time with no data subdivision are shown in Fig. 2. For all \(BI\) values and all live times, the FWHM of 5 keV yields a \({\sim }45\%\) higher sensitivity with respect to a 10 keV resolution. The median sensitivities after 3 months and 5 years of data collection in the two considered cases are reported in Table 4. The dependence of the median sensitivity on live time is typical of a background-dominated experiment: namely, CUORE expects about one event every four days in a \(\pm 3\sigma \) region around \(Q_{\beta \beta }\). The results in Table 4 also show the importance of the energy resolution and suggest putting a strong effort into its optimization. As a cross-check, we compare the sensitivity just obtained with that provided by the analytical method presented in [19] and shown in dark green in Fig. 2. The analytical method yields a slightly higher sensitivity for short live times, while the two techniques agree for larger data samples. We attribute this to the fact that the uncertainty on the number of background counts obtained with the Bayesian fit is slightly larger than the corresponding Poisson uncertainty assumed in the analytical approach [33], hence the limit on \(T_{1/2}^{\,0\nu }\) is systematically weaker.^{Footnote 2} The effect becomes weaker with increasing data samples, i.e. with growing live time. With a resolution of 5 keV, the difference goes from \(8\%\) after 3 months to \({<}0.1\%\) after 5 years, while for a 10 keV FWHM the difference is \({\sim }6\%\) after 3 months and \(4\%\) after 5 years. One remark must be made concerning the values reported in [19]: there we quoted a \(90\%\) CI exclusion sensitivity of \(9.3\cdot 10^{25}\) year with 5 years of live time. This is \({\sim }5\%\) higher than the result presented here and is explained by the use of a different total efficiency: \(87.4\%\) in [19] and \(81.3\%\) in this work.
We then extract the exclusion sensitivity after dividing the crystals into 4 subsets, as described in Sect. 3. The median exclusion sensitivity values after 3 months and 5 years of data collection with one and 4 subsets are reported in Table 4. The division in subsets yields only a small improvement (at the percent level) in median sensitivity. Based on these results alone, one would conclude that dividing the data into subsets with different \(BI\) is not worth the effort. This conclusion is not always true, and strongly depends on the exposure and \(BI\) of the considered subsets. As an example, we repeated a toy analysis assuming a \(BI\) of \(10^{-2}\) cts\(/(\)keV\(\cdot \)kg\(\cdot \)yr\()\), and with two subsets of equal exposure and \(BI\) of \(0.5\cdot 10^{-2}\) cts\(/(\)keV\(\cdot \)kg\(\cdot \)yr\()\) and \(1.5\cdot 10^{-2}\) cts\(/(\)keV\(\cdot \)kg\(\cdot \)yr\()\), respectively. In this case, the division of the data into two subsets yields a \({\sim }10\%\) improvement after 5 years of data taking. Hence, the data subdivision is a viable option for the final analysis, whose gain strongly depends on the experimental \(BI\) of each channel. Similarly, we expect the CUORE bolometers to have different energy resolutions; in CUORE-0, these ranged from \({\sim }3\) keV to \({\sim }20\) keV FWHM [34]. In the real CUORE analysis a further splitting of the data can be done by grouping the channels with similar FWHM, or by keeping every channel separate. At the present stage it is not possible to make reliable predictions for the FWHM distribution among the crystals, so we assumed an average value (of 5 or 10 keV) throughout this work.
Ideally, the final CUORE \(0\nu \beta \beta \) decay analysis should be performed keeping the spectra collected by each crystal separate, in addition to the usual division of the data into data sets bounded by two calibration runs [15]. Assuming an average frequency of one calibration per month, the total number of energy spectra would be \({\sim }6\cdot 10^4\). Assuming a different but stationary \(BI\) for each crystal, and using the same \(^{60}\)Co rate for all crystals, the fit model would have \({\sim }10^3\) parameters. This represents a major obstacle for any existing implementation of the Metropolis-Hastings or Gibbs sampling algorithm. A possible way to address the problem might be the use of different algorithms, e.g. nested sampling [35, 36], or a partial analytical solution of the likelihood maximization.
We perform two further cross-checks in order to investigate the relative importance of the flat background and the \(^{60}\)Co peak. In the first scenario we set the \(BI\) to zero, and in the second we do the same for the \(^{60}\)Co rate. In both cases, the data are not divided into subsets, and resolutions of 5 and 10 keV are considered. With no flat background and a 5 keV resolution, no \(^{60}\)Co event leaks into the \(\pm 3\sigma \) region around \(Q_{\beta \beta }\) even after 5 years of measurement. As a consequence, the \(90\%\) CI limits are distributed in a very narrow band, and the median sensitivity reaches \(1.2\cdot 10^{27}\) year after 5 years of data collection. On the contrary, if we assume a 10 keV FWHM, some \(^{60}\)Co events fall in the \(0\nu \beta \beta \) decay ROI from the very beginning of the data taking. This results in a strong asymmetry of the sensitivity band. In the second cross-check, we keep the \(BI\) at \(1.02\cdot 10^{-2}\) cts\(/(\)keV\(\cdot \)kg\(\cdot \)yr\()\), but set the \(^{60}\)Co rate to zero. In both cases, the difference with respect to the standard scenario is below \(1\%\). We can conclude that the \(^{60}\)Co peak with an initial rate of 0.428 cts/(kg\(\cdot \)yr) is not worrisome for resolutions up to 10 keV, and that the lower sensitivity obtained with a 10 keV FWHM with respect to the 5 keV case is ascribable only to the relative amplitude of \(\lambda ^{bkg}_{di}\) and \(\lambda ^{0\nu }_{di}\) (Eqs. 9 and 13). This is also confirmed by the computation of the sensitivity for the optimistic scenario without the 1.9 keV shift of the \(^{60}\)Co peak used in the standard case.
We test the fit correctness and bias by computing the pulls, i.e. the normalized residuals, of the number of counts assigned to each of the fit components. Denoting with \(N^{bkg}\) and \(N^{Co}\) the number of generated background and \(^{60}\)Co events, respectively, and with \(M^{bkg}\) and \(M^{Co}\) the corresponding number of reconstructed events, the pulls are defined as:
$$\begin{aligned} r_{bkg(Co)} = \frac{M^{bkg(Co)} - N^{bkg(Co)}}{\sigma _{M^{bkg(Co)}}}, \end{aligned}$$(21)
where \(\sigma _{M^{bkg(Co)}}\) is the statistical uncertainty on \(M^{bkg(Co)}\) given by the fit.
For an unbiased fit, the distribution of the pulls is expected to be Gaussian with zero mean and unit root mean square (RMS). In the case of the exclusion sensitivity, we obtain \(r_{bkg}=0.2\pm 0.4\) and \(r_{Co}=0.1\pm 0.5\) for all live times. The fact that the pull distributions are slightly shifted indicates the presence of a bias. Its origin lies in the Bayesian nature of the fit: all fit contributions are constrained to be greater than zero. We perform a cross-check by extending the range of all parameters (\(BI\), \(R^{Co}\) and \(\Gamma ^{0\nu }\)) to negative values. Under this condition, the bias disappears. In addition, an explanation is needed for the small RMS of the pull distributions. This is mainly due to two effects: first, the toy-MC spectra are generated using \(H_0\), while the fit is performed using \(H_1\); second, the statistical uncertainties on all parameters are larger than the Poisson uncertainty on the number of generated events. To confirm the first statement, we repeat the fit using \(H_0\) instead of \(H_1\) and obtain pulls with zero mean and an RMS of \({\sim }0.8\), which is closer to the expected value. Finally, we compare the parameter uncertainty obtained from the fit with the Poisson uncertainty for the equivalent number of counts, and find that the difference is of \(O(20\%)\).
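The expected behavior of unbiased pulls can be checked with a simple toy: generate \(N \sim \mathrm {Pois}(\lambda )\), take the reconstructed value equal to the generated one with a \(\sqrt{N}\) uncertainty, and compare to the true rate. This is our own illustration of the expected mean \({\sim }0\) and RMS \({\sim }1\), not a reproduction of the CUORE fit, which compares reconstructed to generated counts:

```python
import numpy as np

rng = np.random.default_rng(2)

def pull_distribution(lam_true=1000.0, n_toys=5000):
    """Toy pull check: N ~ Pois(lam_true), 'reconstructed' M = N with
    sigma_M = sqrt(M); the pulls (M - lam_true) / sigma_M should have
    mean ~0 and RMS ~1 for an unbiased, well-calibrated estimate."""
    n = rng.poisson(lam_true, size=n_toys).astype(float)
    r = (n - lam_true) / np.sqrt(n)
    return float(r.mean()), float(r.std())
```

A shifted mean would signal a bias, and an RMS away from 1 would signal over- or under-estimated uncertainties, which is the diagnostic logic used above.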
Discovery sensitivity
The extraction of the discovery sensitivity involves fits with both the background-only and the background-plus-signal models. Moreover, two multidimensional integrations have to be performed for each toy-MC spectrum, and a loop over the \(0\nu \beta \beta \) decay half-life has to be done until the condition of Eq. 20 is met. Due to the high computational cost, we compute the \(3~\sigma \) discovery sensitivity for a FWHM of 5 and 10 keV with no crystal subdivision. As shown in Fig. 3, with a 5 keV energy resolution CUORE has a \(3~\sigma \) discovery sensitivity superior to the limit obtained from the combined analysis of CUORE-0 and Cuoricino data [15] after less than one month of operation, and reaches \(3.7\cdot 10^{25}\) year with 5 years of live time.
Also in this case, the pulls are characterized by an RMS smaller than expected, but no bias is present, because \(H_1\) is used for both the generation and the fit of the toy-MC spectra.
Conclusion and outlook
We implemented a toy-MC method for the computation of the exclusion and discovery sensitivity of CUORE using a Bayesian analysis. We have highlighted the influence of the \(BI\) and energy resolution on the exclusion sensitivity, showing how the achievement of the expected 5 keV FWHM is desirable. Additionally, we have shown how the division of the data into subsets with different \(BI\) could yield an improvement in exclusion sensitivity.
Once the CUORE data collection starts and the experimental parameters are available, the sensitivity study can be repeated in a more detailed way. As an example, nonGaussian spectral shapes for the \(0\nu \beta \beta \) decay and \(^{60}\)Co peaks can be used, and the systematics of the energy reconstruction can be included.
Notes
 1.
The containment efficiency is already encompassed in BI and \(R^{Co}\) [10].
 2.
See the discussion of the pulls for a more detailed explanation.
References
 1.
J. Schechter, J.W.F. Valle, Phys. Rev. D 25, 2951 (1982)
 2.
M. Duerr, M. Lindner, A. Merle, JHEP 06, 091 (2011)
 3.
S. Dell’Oro et al., Adv. High Energy Phys. 2016, 2162659 (2016)
 4.
D.R. Artusa et al. [CUORE Collaboration], Adv. High Energy Phys. 2015, 879871 (2015)
 5.
C. Arnaboldi et al. [CUORE Collaboration], Nucl. Instrum. Meth. A 518, 775 (2004)
 6.
C. Arnaboldi et al. [CUORE Collaboration], Astropart. Phys. 20, 91 (2003)
 7.
M. Redshaw et al., Phys. Rev. Lett. 102, 212502 (2009)
 8.
N.D. Scielzo et al., Phys. Rev. C 80, 025501 (2009)
 9.
S. Rahaman et al., Phys. Lett. B 703, 412 (2011)
 10.
C. Alduino et al. [CUORE Collaboration], Eur. Phys. J. C (2017). doi:10.1140/epjc/s10052-017-5080-6
 11.
C. Alduino et al. [CUORE Collaboration], JINST 11, P07009 (2016)
 12.
D.R. Artusa et al. [CUORE Collaboration], Eur. Phys. J. C 74, 2956 (2014)
 13.
C. Arnaboldi et al. [Cuoricino Collaboration], Phys. Rev. C 78, 035502 (2008)
 14.
E. Andreotti et al. [Cuoricino Collaboration], Astropart. Phys. 34, 822 (2011)
 15.
K. Alfonso et al. [CUORE Collaboration], Phys. Rev. Lett. 115, 102502 (2015)
 16.
M. Agostini et al. [GERDA Collaboration], Phys. Rev. Lett. 111, 122503 (2013)
 17.
M. Agostini et al., Nature 544, 47 (2017)
 18.
A. Caldwell et al., Comput. Phys. Commun. 180, 2197 (2009)
 19.
F. Alessandria et al. [CUORE Collaboration] (2011). arXiv:1109.0494v3
 20.
F. James, Statistical Methods in Experimental Physics, 2nd edn. (World Scientific, Singapore, 2006)
 21.
A. Caldwell, K. Kroninger, Phys. Rev. D 74, 092003 (2006)
 22.
F. Alessandria et al. [CUORE Collaboration], Astropart. Phys. 35, 839 (2012)
 23.
A.F. Barghouty et al., Nucl. Instrum. Meth. B 295, 16 (2013)
 24.
B.S. Wang et al., Phys. Rev. C 92, 024620 (2015)
 25.
F. Alessandria et al. [CUORE Collaboration], Astropart. Phys. 45, 13 (2013)
 26.
E. Andreotti et al. [Cuoricino Collaboration], Astropart. Phys. 34, 18 (2010)
 27.
F. Bellini et al., Astropart. Phys. 33, 169 (2010)
 28.
E. Andreotti et al., JINST 4, P09003 (2009)
 29.
A. Giachero, Characterization of cryogenic bolometers and data acquisition system for the CUORE experiment, PhD thesis, Università degli Studi di Genova, 2008
 30.
C. Alduino et al. [CUORE Collaboration], Eur. Phys. J. C 77, 13 (2017)
 31.
T. Hahn, Comput. Phys. Commun. 168, 78 (2005)
 32.
D.D.L. Minh, D.L.P. Minh, Commun. Stat. Simul. Comput. 44, 332 (2015)
 33.
G. Cowan, K. Cranmer, E. Gross, O. Vitells (2011). arXiv:1105.3166
 34.
C. Alduino et al. [CUORE Collaboration], Phys. Rev. C 93, 045503 (2016)
 35.
F. Feroz et al., Mon. Not. R. Astron. Soc. 398, 1601 (2009)
 36.
W.J. Handley et al., Mon. Not. R. Astron. Soc. 450, L61 (2015)
Acknowledgements
The CUORE Collaboration thanks the directors and staff of the Laboratori Nazionali del Gran Sasso and the technical staff of our laboratories. CUORE is supported by The Istituto Nazionale di Fisica Nucleare (INFN); The National Science Foundation under Grant Nos. NSFPHY0605119, NSFPHY0500337, NSF PHY0855314, NSFPHY0902171, NSFPHY0969852, NSFPHY1307204, NSFPHY1314881, NSFPHY 1401832, and NSFPHY1404205; The Alfred P. Sloan Foundation; The University of Wisconsin Foundation; Yale University; The US Department of Energy (DOE) Office of Science under Contract Nos. DEAC0205CH11231, DEAC5207NA27344, and DESC0012654; The DOE Office of Science, Office of Nuclear Physics under Contract Nos. DEFG0208ER41551 and DEFG0300ER41138; The National Energy Research Scientific Computing Center (NERSC).
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Funded by SCOAP^{3}
Alduino, C., Alfonso, K., Artusa, D.R. et al. CUORE sensitivity to \(0\nu \beta \beta \) decay. Eur. Phys. J. C 77, 532 (2017). https://doi.org/10.1140/epjc/s10052-017-5098-9