Introduction

Noble gas detection systems process air to extract and quantify the radioactive isotopes of xenon [1,2,3,4]. These systems measure the decay of the radioxenon isotopes most likely to be present after an underground nuclear explosion (131mXe, 133Xe, 133mXe, and 135Xe). The systems were developed for use under the Comprehensive Nuclear-Test-Ban Treaty (CTBT) [5] and may be installed at 40 of the 80 CTBT-specified radionuclide stations within the International Monitoring System (IMS), a global network of sensors to detect nuclear explosions.

Radioxenon is measured in units of concentration, typically in milliBecquerel (mBq) per standard cubic meter (SCM) of dry air. The systems contain radiation detectors that quantify the radioactivity of xenon extracted from processed air. In this monitoring context, processed air is air from which the non-xenon components have been removed and in which the xenon gas has been concentrated. The radioactivity is the number of decays measured in the detector for the particular isotope, corrected for the detector efficiency and nuclear decay branching ratio (i.e., the probability that a decay results in a particular type of radiation emission) [6]. Additionally, the radioactivity is corrected to account for the time the system spends in collection, processing, and activity measurement. The activity concentration is the activity divided by the volume of collected and processed air, which is determined by dividing the volume of collected xenon by the nominal concentration of xenon in air (87 parts per billion [7]).

Beta–gamma coincidence detection is used in most current and next-generation radioxenon systems. The radioxenon isotopes are measured through their decay products, which include beta particles (β), conversion electrons (CE), gamma-rays (γ), and x-rays. Auger electrons, which can also be emitted during decay, are currently neglected for routine concentration analysis [8]. By measuring the β particles or CEs that are coincident with a γ-ray or x-ray, the interfering backgrounds can be significantly reduced. Historically, methods using γ spectroscopy without coincidence rely on γ emissions that have low-intensity branching ratios (1.95 ± 0.06% for 131mXe and 10.12 ± 0.15% for 133mXe) [6], resulting in higher detection limits for these metastable isotopes. In contrast, the background reduction from the coincidence measurement improves the detection limits by allowing the metastable xenon isotopes to be detected through a high-emission-rate decay process (branching ratios greater than 50%).

The development of the specific equations to calculate activity concentrations from β–γ coincidence spectra has taken many years. The equations have undergone several updates and modifications over the years as insights have been gained. The resulting method, which is called the net count calculation (NCC), subtracts several background terms from the sample gross count spectra to arrive at the net counts. The NCC is the name given to a set of equations [one for each xenon isotope and spectral region-of-interest (ROI)] using net counts to determine the concentration by the following general Eq. (1):

$$Conc_{i} = \left( {\frac{{Counts_{{Net_{j} }} }}{{\varepsilon_{{\gamma_{i} }} \cdot \varepsilon_{{\beta_{i} }} \cdot {\text{BR}}_{{\gamma_{i} }} \cdot {\text{BR}}_{{\beta_{i} }} }}} \right)\left( {\left( {\frac{{\lambda_{i} \cdot T_{\text{C}} }}{{1 - e^{{ - \lambda_{i} \cdot T_{\text{C}} }} }}} \right)\left( {\frac{1}{{e^{{ - \lambda_{i} \cdot T_{\text{P}} }} }}} \right)\left( {\frac{{\lambda_{i} }}{{1 - e^{{ - \lambda_{i} \cdot T_{\text{A}} }} }}} \right)} \right)\left( {\frac{1}{{V_{\text{Air}} }}} \right)$$
(1)

where

  • \(Conc_{i}\) = activity concentration in units of \(\frac{\text{Bq}}{{\text{m}}^{3}}\) for isotope i,

  • \(Counts_{{Net_{j}}}\) = net counts after background, interference, and memory effect subtractions for each region of interest j,

  • \(\varepsilon_{{\gamma_{i}}}\) = γ efficiency for xenon isotope i within the specific ROI (~ 50–60%),

  • \(\varepsilon_{{\beta_{i}}}\) = β efficiency for xenon isotope i within the specific ROI (~ 85–99%),

  • \({\text{BR}}_{{\gamma_{i}}}\) = γ branching ratio for each xenon isotope i, specific to the decay radiation expected in the energy range defined by the ROI,

  • \({\text{BR}}_{{\beta_{i}}}\) = β branching ratio for each xenon isotope i,

  • \(\lambda_{i} = \frac{\ln(2)}{t_{1/2}}\) for each xenon isotope i, where \(t_{1/2}\) is the isotope half-life,

  • \(T_{\text{C}}\) = collection time (the time to collect air onto a collection trap, typically ~ 12 h for a SAUNA II system),

  • \(T_{\text{P}}\) = processing time (the time to elute the sample from the collection trap and transfer the sample to the nuclear detector, ~ 6 h),

  • \(T_{\text{A}}\) = acquisition time (typically ~ 12 h for a SAUNA II system),

  • \(V_{\text{Air}}\) = volume of air collected (~ 12 m3).
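As an illustration of how Eq. (1) is evaluated for a single ROI, the following Python sketch applies the efficiency, branching ratio, decay correction, and air volume factors; the net counts, efficiencies, and branching ratios are illustrative placeholder values, not calibration constants from any fielded system.

```python
import math

def activity_concentration(net_counts, eff_gamma, eff_beta, br_gamma, br_beta,
                           half_life_s, t_collect_s, t_process_s, t_acquire_s, v_air_m3):
    """Activity concentration (Bq/m^3) per Eq. (1); all times in seconds."""
    lam = math.log(2) / half_life_s                       # decay constant, 1/s
    detect = eff_gamma * eff_beta * br_gamma * br_beta    # detection probability per decay
    f_collect = (lam * t_collect_s) / (1 - math.exp(-lam * t_collect_s))  # constant-collection correction
    f_process = 1.0 / math.exp(-lam * t_process_s)        # decay during processing
    f_acquire = lam / (1 - math.exp(-lam * t_acquire_s))  # decay during acquisition
    return (net_counts / detect) * f_collect * f_process * f_acquire / v_air_m3

# Illustrative 133Xe example with nominal SAUNA II-like parameters (assumed values)
conc = activity_concentration(net_counts=250, eff_gamma=0.55, eff_beta=0.95,
                              br_gamma=0.37, br_beta=0.99,
                              half_life_s=5.243 * 86400,  # 133Xe half-life ~5.243 d
                              t_collect_s=12 * 3600, t_process_s=6 * 3600,
                              t_acquire_s=12 * 3600, v_air_m3=12.0)
print(f"{conc * 1000:.2f} mBq/m^3")
```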

The term \(Counts_{{Net_{j} }}\) in Eq. (1) is calculated for a specific ROI, typically for one isotope; when multiple activity concentrations are calculated for a given isotope (i.e., using multiple ROIs), the results may be combined through an uncertainty-weighted average. The branching ratios for a given isotope correspond to the particular γ- or x-ray decay energy observed in the isotope ROI. This general equation is described in more detail in a number of publications [9,10,11,12]. This paper focuses on systems that use β–γ coincidence to calculate radioxenon concentrations, in particular the Swedish Automatic Unit for Noble gas Acquisition (SAUNA II) [13], developed by the Swedish Defense Research Agency (FOI) [14], the Xenon International [4, 15], developed by the US Department of Energy’s Pacific Northwest National Laboratory [16], the new generation SPALAX system (Si + HPGe) [17, 18], and the ARIX-4 system [19], and on analysis approaches that discriminate between radioxenon isotopes via β–γ spectral analysis.

The SAUNA II, Xenon International, and ARIX-4 systems employ plastic scintillators for β detection combined with NaI detectors for γ detection. The detectors typically have a relative γ resolution of 10–15% full-width-at-half-maximum (FWHM) at 80 keV and a relative β resolution of 20–30% FWHM for a CE at 129 keV (see Fig. 1); the resolution defines the region-of-interest limits. Better detector energy resolution improves system sensitivity by increasing the signal-to-background ratio through more tightly defined regions of interest. The new generation SPALAX system employs silicon beta cells to achieve improved energy resolution.

Fig. 1 Example of two-dimensional (2D) β–γ plots of the background, radon daughters (214Pb and 214Bi), and four radioxenon isotopes

Over the last 20 years of noble gas system development, significant improvements have increased sensitivity, sample throughput, and reliability. In addition to these improvements, it is prudent to examine the equations and analysis approaches used, to identify areas that can provide increased precision and accuracy of the radioxenon measurements. What follows is a description of current analysis approaches, recommendations on areas of improvement to the equations, and approaches with potential to further improve the analysis of the collected data.

Net count calculation: region-of-interest analysis

Determination of the net counts (\(\varvec{Counts}_{{\varvec{Net}}}\)) of Eq. (1) is where significant improvements to sensitivity can be realized, because the net counts depend on discriminating between events attributed to the isotope of interest and counts attributed to other isotopes or background. The net counts for the isotopes are determined from a two-dimensional (2D) spectrum, which may be any combination of the background (measured over 3 days), radon, and four radioxenon isotope spectra (shown in Fig. 1), as well as the memory effect and any other isotopes that may be present (i.e., non-CTBT-relevant isotopes). The NCC method [11] uses simple 2D regions-of-interest (ROIs), which span a range in β and γ energies, to determine the total counts. For each ROI, the net counts are calculated by subtracting the backgrounds, interferences from other isotopes, and any remaining radioactivity from previous samples in the β cell (memory effect). It is critical to account for the background rate and any additional counts not associated with the specific isotope analyzed to accurately determine the xenon activity. A general equation to determine the number of counts attributed to a particular isotope in each ROI is:

$$\varvec{Counts}_{{\varvec{Net}}} = \varvec{Counts}_{{\varvec{Gross}}} - \varvec{Counts}_{{\varvec{Background}}} - \varvec{Counts}_{{\varvec{Interference}}} - \varvec{Counts}_{{\varvec{Memory}}}$$
(2)

where the net counts are determined from the total counts in the ROI from the sample (\(\varvec{Counts}_{{\varvec{Gross}}}\)) minus the contributions from the background (\(\varvec{Counts}_{{\varvec{Background}}}\)), interferences from other isotopes present in the sample (\(\varvec{Counts}_{{\varvec{Interference}}}\)), and the memory effect (\(\varvec{Counts}_{{\varvec{Memory}}}\)). Explicit equations for all ROIs can be found in [12] for one approach (7-ROI) and in [1, 11, 20] for another (10-ROI). The differences between these two approaches will be described, including the impact of the ROI on the gross counts term. The background, interference, and memory effect terms are discussed in later sections of this paper. Substituting Eq. (2) into Eq. (1) and expanding it to include the interference and memory terms for each of the isotopes yields equations with a large number of terms. The implementation of the different terms is somewhat dependent on the system and approach used by the developer, including different nomenclature for the various terms.

Activity, the number of decays per unit time, is determined through the integration of coincident counts (\(\varvec{Counts}_{{\varvec{Gross}}}\)) in an ROI. The sampling process delivers a mixture of carrier gas (He or N2) and xenon, with only trace amounts of 222Rn, to the detector cell. Since the number of isotopes, and hence the β and γ energies, is well defined and limited by preprocessing of the gas, the interference between isotopes is low enough to allow ROI-based counting.

The 2D β–γ histogram is broken into a number of ROIs that correspond to the coincidence signature for each of the four radioxenon isotopes and one region for the 222Rn daughter product 214Pb, as shown in the individual plots of Fig. 1. These ROIs are well defined by the nuclear decay physics of the isotopes. The most prominent decay features of the four CTBT-relevant xenon isotopes, which determine the ROI boundaries, are portrayed individually in Fig. 1 and combined in a schematic in Fig. 2. The ROIs located closest to the x-axis in Fig. 2 represent the x-ray and CE physics for each isotope. Each of the radioxenon isotopes has x-rays in coincidence with a CE; additionally, for 133Xe and 135Xe, a γ-ray is in coincidence with a β particle.

Fig. 2 Expected radiations with γ-ray branching ratios of at least 5% in coincidence with β or CE emissions for the four radioxenon isotopes and radon daughters [6]

The ROI approach is affected by the detector energy calibration. The detectors generate an electrical pulse with an amplitude proportional to the energy deposited in the detector material. The data acquisition electronics sort the pulses by amplitude using a multi-channel analyzer, which provides a pulse height distribution, or spectrum, as shown in Fig. 1. The pulse height spectrum is converted into an energy spectrum through linear or quadratic equations whose coefficients are determined during detector calibration and verified using a quality control (QC) source prior to each sample measurement. A long-lived QC source, such as 137Cs, is routinely used to perform a relative energy calibration. Since the position of the β–γ peaks relative to the ROIs is determined during the energy calibration, any uncorrected shift in the energy-to-channel relationship displaces the peaks from the defined ROIs. There has been significant refinement in the use of calibration sources that is not discussed here, except to note that precise energy calibration of the detector is needed for accurate ROI analysis. Automated energy calibration corrections, based on comparing QC-source spectra to a set of references, can provide energy calibration stability; these methods are being field-tested and are very promising [21]. If automated gain stabilization proves reliable, it will keep the nuclear detector gains constant, and the concentration results will be more accurate with no need for gain adjustments during spectral analysis. This method would also make it possible to sum consecutive spectra without additional gain matching.
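A minimal sketch of the channel-to-energy conversion and a QC-based relative gain check is shown below; the quadratic coefficients, QC peak channels, and the 137Cs reference are assumed, illustrative values rather than parameters of any deployed system.

```python
import numpy as np

def channel_to_energy(channels, coeffs):
    """Convert MCA channel numbers to energy (keV) with a quadratic calibration:
    E = a0 + a1*ch + a2*ch^2 (a2 = 0 reduces to the linear case)."""
    a0, a1, a2 = coeffs
    ch = np.asarray(channels, dtype=float)
    return a0 + a1 * ch + a2 * ch**2

def gain_correction(qc_peak_channel, reference_channel):
    """Relative gain factor from a QC-source peak (e.g., 137Cs at 662 keV in the gamma
    channel) compared to the channel recorded at calibration time."""
    return reference_channel / qc_peak_channel

# Example: nominal linear calibration plus a ~1.5% gain drift detected with the QC source
coeffs = (0.0, 0.33, 0.0)                  # keV per channel, assumed linear response
gain = gain_correction(qc_peak_channel=2036.0, reference_channel=2006.0)
energies = channel_to_energy(np.arange(0, 4096) * gain, coeffs)
print(f"gain factor = {gain:.4f}, channel 242 -> {channel_to_energy(242 * gain, coeffs):.1f} keV")
```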

There are many ways to calibrate a nuclear detector; however, all methods attempt to replicate the nuclear signature that the detector is intended to measure. That is, the detector energy, resolution, and efficiency are best determined using the radiation type that will be measured during normal operation, in the same gas composition and geometrical conditions as real samples; therefore the use of the four radioxenon isotopes, when logistically possible, is optimal. Routine use of the four radioxenon isotopes for calibration is difficult in the field due to their short half-lives, but could be done during initial setup using a QC source as verification. Publications by Idaho National Laboratory (INL) [22] and Pacific Northwest National Laboratory (PNNL) [23] describe methods that may make in-field calibration possible, if the station can be reached by courier in a sufficiently short time. Alternatively, the manufacturer's calibration could be verified in the field using other isotopes such as 214Pb or 127Xe. Until long-term studies on detector efficiencies are performed, routine calibrations or verifications, e.g., by radioxenon spikes and sample re-measurements in a laboratory, should be performed periodically through a program of quality assurance and QC to maintain system detection accuracy. The calibration frequency needs to be refined experimentally by long-term testing to determine the appropriate time interval for stable nuclear measurement capability (e.g., within two standard deviations).

7-ROI NCC approach

In the 7-ROI approach [12], used by Xenon International, the number of regions has been minimized to provide a simple yet robust analysis framework. The ROIs for 135Xe and 133Xe (80 keV) in this approach are generally distinct and straightforward to analyze. As can be seen from Fig. 2, the most complex area is the 30 keV γ energy region, which contains multiple isotope signatures. The 7-ROI approach uses four ROIs (regions 4 to 7 of Table 1) to separate the contributions from 133Xe (7R-4 and 7R-7), 131mXe (7R-5), and 133mXe (7R-6). The energy bands of the ROIs are listed in Table 1, with a graphical representation in Fig. 3.

Table 1 A list of the ROIs used in a 7-region radioxenon analysis [12, 24]
Fig. 3 Regions-of-interest for 7-ROI analysis. As noted in the legend contained in the upper third of the figure, 7R-4 is represented by the green shaded rectangle, while the rectangle with red hatching represents 7R-7 and spans 7R-5 and 7R-6. See Table 1 for the actual energy range that each ROI spans. (Color figure online)

10-ROI NCC approach

The 10-ROI approach [20], used by SAUNA II, has ROIs similar to the 7-ROI method for 214Pb (radon daughter), 135Xe, and 133Xe (80 keV). However, the 30 keV region has seven regions of interest (10R-4 through 10R-10) to allow fine tuning during the analysis. If no metastable isotopes are observed, the 10R-4 ROI is used to increase the precision of the 133Xe measurement. If one or both of the xenon metastable isotopes are present, different ROI combinations are used to minimize the interference from the metastables while providing the optimal precision for 133Xe and the observed metastable isotope. The 10-ROI approach potentially provides higher accuracy for most samples, but also introduces more complex analysis and potential biasing. The ROIs for the 10-ROI approach are listed in Table 2, with a graphical representation in Fig. 4.

Table 2 A list of the ROIs used in a 10-region radioxenon analysis [11, 20]
Fig. 4 Regions-of-interest for 10-ROI analysis. As noted in the legend contained in the upper third of the figure, 10R-4 is represented by the green shaded rectangle, and spans from the lower β bound of 10R-7 up to the upper β bound of 10R-8. See Table 2 for the actual energy range that each ROI spans. (Color figure online)

Method standardization

Having system-specific methods for the ROI analysis limits the comparison of data analysis and results across different systems. Different analysis programs for each system increase software development and maintenance costs. It would be beneficial to have a consistent ROI approach for all detector systems. However, differences in system gas collection methods (e.g., radon gas removal) and nuclear measurement (e.g., efficiency and energy resolution of each system) drive the concentration calculations to be system-specific to obtain the best precision and accuracy. Additionally, each analysis may weigh the uncertainties associated with a given method differently, which directly affects the calculated detection limits, LC and LD, as shown in Eq. (3) below [25], and ultimately the number of false positive/negative events reported:

$$\begin{aligned} L_{\text{C}} & = 2.33\sqrt {\mu_{\text{B}} } \\ L_{\text{D}} & = 2.71 + 4.65\sqrt {\mu_{\text{B}} } \\ \end{aligned}$$
(3)

where \(\mu_{\text{B}}\) is the limiting mean of the blank sample (B). In the NCC method the normalized net counts are represented by Eq. (2). The variance used for the NCC method is represented by Eq. (4) [9].

$$\sigma_{\text{s}}^{2} = \mu_{\text{I}} +\mu_{\text{M}} +\mu_{\text{B}} + \sigma_{\text{I}}^{2} + \sigma_{\text{M}}^{2} + \sigma_{\text{B}}^{2}$$
(4)

The NCC method as applied to the sample spectra from IMS beta–gamma coincidence systems has been observed to produce higher-than-expected detection rates. This high false positive rate is inconsistent with the expectations of the current statistical model. The NCC method should use Poisson distributions to represent the uncertainty for low-counting-statistics measurements. Using an appropriate statistical model will become more important as systems become more sensitive. Developers need to provide operators with their specific analysis implementation (e.g., as has been provided for some systems [1, 12]), including estimated statistical and systematic uncertainties.
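For reference, the following sketch evaluates the Currie limits of Eq. (3) and the NCC variance of Eq. (4) for an assumed ROI background mean; it illustrates the arithmetic only and does not reproduce any operational implementation.

```python
import math

def currie_limits(mu_b):
    """Currie decision limit L_C and detection limit L_D (counts) for a
    well-known background with limiting mean mu_b, per Eq. (3)."""
    l_c = 2.33 * math.sqrt(mu_b)
    l_d = 2.71 + 4.65 * math.sqrt(mu_b)
    return l_c, l_d

def ncc_variance(mu_i, mu_m, mu_b, var_i, var_m, var_b):
    """Variance of the NCC net counts per Eq. (4): means of the subtracted
    interference, memory, and background terms plus their estimation variances."""
    return mu_i + mu_m + mu_b + var_i + var_m + var_b

# Assumed background mean of 40 counts in the ROI and illustrative subtraction terms
l_c, l_d = currie_limits(mu_b=40.0)
sigma_s = math.sqrt(ncc_variance(mu_i=10.0, mu_m=2.0, mu_b=40.0,
                                 var_i=1.0, var_m=0.5, var_b=4.0))
print(f"L_C = {l_c:.1f} counts, L_D = {l_d:.1f} counts, sigma_s = {sigma_s:.1f} counts")
```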

While standardization is desirable, it should not prohibit advancements. The ROI approach performs well, but as detector resolutions improve, for example with the use of silicon-based β detectors, incorporating more sophisticated approaches, such as peak fitting, may be advisable. Instead of using traditional rectangular regions, two-dimensional Gaussians can be used to establish elliptical regions which would minimize background and interference contributions.
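As a sketch of what an elliptical region could look like, the following Python fragment selects coincidence events within an n-sigma Mahalanobis contour of a 2D Gaussian; the 131mXe centroid and widths used here are assumed, illustrative numbers, and in practice the Gaussian parameters would come from a fit to calibration spectra.

```python
import numpy as np

def in_elliptical_roi(beta_e, gamma_e, mean, cov, n_sigma=2.0):
    """Return a boolean mask selecting coincidence events whose Mahalanobis distance
    from a fitted 2D Gaussian centroid (beta, gamma energy) is within n_sigma."""
    pts = np.column_stack([beta_e, gamma_e]) - mean
    inv_cov = np.linalg.inv(cov)
    d2 = np.einsum('ij,jk,ik->i', pts, inv_cov, pts)   # squared Mahalanobis distance
    return d2 <= n_sigma**2

# Assumed 131mXe signature: 129 keV CE vs ~30 keV x-ray, with illustrative widths
mean = np.array([129.0, 30.0])
cov = np.array([[15.0**2, 0.0], [0.0, 4.0**2]])        # keV^2, no correlation assumed
beta = np.random.normal(129.0, 15.0, 1000)
gamma = np.random.normal(30.0, 4.0, 1000)
counts = np.count_nonzero(in_elliptical_roi(beta, gamma, mean, cov))
print(f"{counts} of 1000 simulated events fall inside the 2-sigma ellipse")
```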

There has been promising research in this elliptical-ROI approach [26]. Similarly, the β endpoint energies could be determined more precisely to further reduce background contributions. One method uses the Fermi–Kurie plot to calculate the endpoint energy and limit the higher-energy portion of the ROI. A simplified Fermi–Kurie function is given in Eq. (5):

$$F\left( E \right) = \sqrt {\frac{n\left( E \right)}{{\sqrt {E^{2} + 2mc^{2} E} \left( {E + mc^{2} } \right)}}}$$
(5)

where E is the energy detected by the β cell, n(E) is the number of counts at that energy, and mc2 is the electron rest energy, 511 keV [27,28,29]. The endpoint energy is then found by linearly extrapolating the function to zero.
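A minimal sketch of a Kurie-plot endpoint estimate, using Eq. (5) and a straight-line extrapolation to zero, is given below; the synthetic allowed-shape spectrum and the fit window are assumptions for illustration only.

```python
import numpy as np

MC2 = 511.0  # electron rest mass energy, keV

def kurie_transform(energy_kev, counts):
    """Simplified Fermi-Kurie transform of a beta spectrum per Eq. (5)."""
    e = np.asarray(energy_kev, dtype=float)
    n = np.asarray(counts, dtype=float)
    p_term = np.sqrt(e**2 + 2.0 * MC2 * e) * (e + MC2)
    return np.sqrt(n / p_term)

def endpoint_energy(energy_kev, counts, fit_lo, fit_hi):
    """Estimate the beta endpoint by a straight-line fit to F(E) over [fit_lo, fit_hi]
    and extrapolation to F(E) = 0 (i.e., E0 = -intercept/slope)."""
    e = np.asarray(energy_kev, dtype=float)
    f = kurie_transform(e, counts)
    sel = (e >= fit_lo) & (e <= fit_hi) & np.isfinite(f) & (f > 0)
    slope, intercept = np.polyfit(e[sel], f[sel], 1)
    return -intercept / slope

# Illustrative use on a synthetic allowed-shape beta spectrum with an assumed 346 keV endpoint
e_grid = np.arange(20.0, 400.0, 5.0)
true_e0 = 346.0
n_e = np.where(e_grid < true_e0,
               np.sqrt(e_grid**2 + 2 * MC2 * e_grid) * (e_grid + MC2) * (true_e0 - e_grid)**2,
               0.0)
print(f"estimated endpoint: {endpoint_energy(e_grid, n_e, 100.0, 320.0):.1f} keV")
```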

Interference terms

As discussed and shown in Eq. (2), to properly calculate the net counts of a specific isotope, counts from other xenon isotopes and 214Pb (interference effects) must be accounted for and subtracted. Unaccounted-for interference effects can bias activity concentration results [30]. To accurately determine the sample activity, interference terms must be determined during detector calibration and implemented in the analysis equations. Both the 7-ROI and 10-ROI methods include interference ratios for 214Pb, 135Xe, and 133Xe (see Table 3 for the isotopes and corresponding photon energies for which interference ratios are determined).

Table 3 A list of the 214Pb, 135Xe, and 133Xe photon energies that interfere with beta–gamma radioxenon signatures. Interference ratios are determined for both 7- and 10-ROI methods for the isotopes listed below

The ROI around the radon contributions (7R-1 and 10R-1) is used to determine the amount of radon in the sample and subtract its contribution from other ROIs. Radon removal is important since, over time, the automated systems tend to observe 222Rn making it through the collection/processing units and into the nuclear detector. 222Rn decays by emitting an alpha particle; however, its daughters 214Pb and 214Bi emit β particles and γ-rays as well as CEs and x-rays. These daughter products of 222Rn have strong interferences across the β–γ spectrum (see Fig. 1). The most effective method for mitigating their effect is to remove the radon gas during the collection and purification step. Systems can reduce the radon contamination considerably (e.g., gas absorption and elution provide a reduction greater than \(10^{5}\)), but even greater reductions are required before the radon interference terms can be ignored.

Quantifying and reducing interferences of isotopes present in the sample is a recognized necessity. The most promising avenue is to increase the detector resolution to reduce the interferences and background during the measurement. Solutions exploiting increased energy resolution are being pursued through different scintillation materials, configurations, and solid state detectors [31].

Recently, two silicon-based radioxenon detectors have been developed as alternatives to plastic β cells. The first is the commercially available Canberra PIPSBox, which was developed by the CEA. This detector has two circular silicon detectors, roughly 70 mm in diameter and 500 μm thick, placed opposite one another at the bases of a cylindrical housing. The PIPSBox shows an energy resolution of 10% (13 keV) FWHM for the 131mXe 129-keV CE [32]. The PIPSBox is designed to be placed on the face of an HPGe detector in the new generation SPALAX (Système de Prélèvement Automatique en Ligne avec l'Analyse du Xénon) [17], and is not a modular replacement for the plastic scintillator β cells used in the SAUNA II or Xenon International systems.

The second detector was created by Lares Ltd. It is a cubic detector that houses six 2.25 cm square silicon detectors. Like the PIPSBox, this detector offers improved energy resolution: 5.4% (7.0 keV) FWHM for the 131mXe 129-keV CE [33]. In contrast, current IMS systems use plastic β cells with a typical energy resolution of 30% (38.7 keV) FWHM for the 131mXe 129-keV CE. For silicon β detectors, the memory effect was shown to be minimal (~ 0.4%) or below the ability to measure. In 2016, the detector was successfully tested inside a SAUNA II γ detector [34].

Silicon detectors have better resolution than plastic scintillators, thus allowing narrower ROIs for the metastable radioxenon isotopes. With narrower ROIs, the number of counts from the background and isotopic interferences will be reduced for 131mXe and 133mXe, resulting in improved detection sensitivities. Silicon-based β detectors have been shown to significantly increase the discrimination power of radioxenon detectors for the metastables by reducing interference contributions [33,34,35]. Although silicon has demonstrated improved resolution, it has a higher probability of electron backscatter, which, in some cases, increases interference between ROIs. Although silicon detectors are commonly used in laboratory environments, additional feasibility studies for field-deployed silicon detectors are warranted. One concern is electronic noise and how to mitigate environmental influences that may increase it, such as microphonics or dark current. Often the noise can be significantly reduced by actively cooling the detector. There are also questions about detector fragility, which significantly impacts operation in field environments. Silicon detectors will need new algorithms to fully exploit the benefit of the increased resolution. The new algorithms might use methods similar to those used for high-purity germanium, such as Gaussian fitting, to estimate the number of decays.
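As one example of such an algorithm, the sketch below fits a Gaussian peak plus a linear background to a conversion-electron line in a high-resolution β spectrum; the peak parameters and synthetic data are assumptions, and the fit model is a generic HPGe-style approach rather than a method prescribed for any particular system.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss_plus_linear(e, area, centroid, sigma, b0, b1):
    """Gaussian peak on a linear background; `area` is the integrated peak counts."""
    return (area / (sigma * np.sqrt(2 * np.pi))) * np.exp(-0.5 * ((e - centroid) / sigma)**2) \
           + b0 + b1 * e

def fit_ce_peak(energy, counts, guess):
    """Fit a conversion-electron peak (e.g., the 129 keV CE of 131mXe in a silicon
    beta cell) and return the fitted peak area and its 1-sigma uncertainty."""
    popt, pcov = curve_fit(gauss_plus_linear, energy, counts, p0=guess,
                           sigma=np.sqrt(np.maximum(counts, 1.0)))
    return popt[0], np.sqrt(pcov[0, 0])

# Synthetic 129 keV peak with ~7 keV FWHM (sigma ~3 keV) on a flat background
e = np.arange(100.0, 160.0, 0.5)
true = gauss_plus_linear(e, area=2000.0, centroid=129.0, sigma=3.0, b0=5.0, b1=0.0)
data = np.random.poisson(true).astype(float)
area, err = fit_ce_peak(e, data, guess=(1500.0, 128.0, 3.5, 4.0, 0.0))
print(f"peak area = {area:.0f} +/- {err:.0f} counts")
```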

The use of silicon offers several advantages due to the higher energy resolution and reduced intrinsic memory effect compared to plastic cells, but at the cost of decreased efficiency. Silicon-based β cells utilize planar silicon detectors and therefore are limited by geometry. Additionally, a mechanical structure is required to house the silicon detectors within the gas cell (required for vacuum and structural integrity of the silicon). This outer gas cell causes a reduction of detector efficiency to ~ 50% for the PIPSBox [36] and 67% for the Lares-detector [37]. And finally, the thickness of the silicon attenuates the photons, and further reduces the coincident beta–gamma detection efficiency [37]. The reduction in efficiency is offset by the improvement in energy resolution, which reduces the background rate and interference between isotopes. The performance improvement due to a change to silicon detectors is difficult to determine since there are many factors that influence the actual system sensitivity and performance.

Including interference terms in the equations is important, but determining the precise value of an interference term in a particular ROI for a specific isotope is not easy. The 7-ROI method currently accounts for interferences from 214Pb, 135Xe, and 133Xe, while the 10-ROI method, implemented on the IMS SAUNA II systems, accounts for 214Pb and 133Xe. The 7-ROI method incorporates interferences as a simple ratio of counts measured during system calibration. For example, the interference ratios for 135Xe are based upon its primary ROI, 7R-2, and are measured from an isotopically pure 135Xe sample. When an isotopically pure 135Xe source is present in the detector, the interference caused by 135Xe in each of the other ROIs is determined as the ratio of the net counts in that ROI to the net counts in 7R-2. The interference ratio in each ROI due to a specific interfering isotope is based on the net counts from Eq. (2). Since the sample contains only 135Xe, the interference and memory effect terms can be removed from Eq. (2), simplifying it to:

$$\varvec{Counts}_{{\varvec{Net}}} = \varvec{Counts}_{{\varvec{Gross}}} - \varvec{Counts}_{{\varvec{Background}}}$$
(6)

The interference ratios are the ratio of the net counts in each ROI to the net counts in the primary ROI for that isotope. For the case of 135Xe the primary region is 7R-2, so the interference terms for the 7-ROI approach are as follows:

$$\begin{array}{*{20}l} {{\text{Interference}}_{{{\mathbf{Pb}}214_{{1:\varvec{x}}} }} = \frac{{\varvec{Counts}_{{\varvec{Net}_{{7\varvec{R} - \varvec{x}}} }} }}{{\varvec{Counts}_{{\varvec{Net}_{{7\varvec{R} - 1}} }} }};\quad {\text{for}}\quad x = 2,3, \ldots ,N} \hfill \\ {{\text{Interference}}_{{{\mathbf{Xe}}135_{{2:\varvec{y}}} }} = \frac{{\varvec{Counts}_{{\varvec{Net}_{{7\varvec{R} - \varvec{y}}} }} }}{{\varvec{Counts}_{{\varvec{Net}_{{7\varvec{R} - 2}} }} }};\quad {\text{for}}\quad y = 3,4,5,6, \ldots ,N} \hfill \\ {{\text{Interference}}_{{{\mathbf{Xe}}133_{{3:\varvec{z}}} }} = \frac{{\varvec{Counts}_{{\varvec{Net}_{{7\varvec{R} - \varvec{z}}} }} }}{{\varvec{Counts}_{{\varvec{Net}_{{7\varvec{R} - 3}} }} }};\quad {\text{for}}\quad z = 4,5,6, \ldots ,N} \hfill \\ \end{array}$$
(7)

where x, y, and z refer to the specific regions of interest for the corresponding isotope. The other isotopes and regions follow a similar pattern, resulting in a rather large and complex set of terms. For the 7-ROI method, these ratios include all events that occur in each ROI from the isotopically pure xenon isotope, including events that arise from decay into the ROI and from Compton scatter events measured within each ROI. The 7-ROI method does not use the interference ratios ROI-3 to ROI-4 and ROI-3 to ROI-7 in the calculation of 133Xe, but does use ROI-3 to ROI-5 and ROI-3 to ROI-6 for 131mXe and 133mXe, respectively. For the 10-ROI method, the interference ratios vary depending on whether the secondary region contains the primary ROI isotope. In the case of 133Xe, this means \(\varvec{Interference}_{{{\mathbf{Xe133}}_{{{\mathbf{3:4}}}} }}\) contains only the Compton scatter events as a correction, while \(\varvec{Interference}_{{{\mathbf{Xe133}}_{{{\mathbf{3:5}}}} }}\) contains the Compton scatter and 133Xe decay events.
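A simplified sketch of how interference ratios of the form of Eq. (7) might be built from a pure-isotope calibration and then applied to a sample ROI is given below; all count values are illustrative assumptions, and the fielded implementations [11, 12] contain additional terms.

```python
def interference_ratios(net_counts_by_roi, primary_roi):
    """Interference ratios per Eq. (7): net counts in each secondary ROI divided by
    the net counts in the isotope's primary ROI, measured with an isotopically pure
    calibration sample (Eq. (6) net counts: gross minus detector background)."""
    primary = net_counts_by_roi[primary_roi]
    return {roi: counts / primary for roi, counts in net_counts_by_roi.items()
            if roi != primary_roi}

def subtract_interference(gross, background, ratio, primary_net):
    """Counts in a secondary ROI corrected for one interfering isotope, whose
    contribution is fixed by its net counts in its own primary ROI."""
    return gross - background - ratio * primary_net

# Assumed 135Xe calibration measurement: net counts per ROI from a pure sample
cal = {'7R-2': 12000.0, '7R-3': 480.0, '7R-4': 95.0, '7R-5': 60.0, '7R-6': 40.0, '7R-7': 30.0}
r135 = interference_ratios(cal, primary_roi='7R-2')

# Sample analysis: remove the 135Xe contribution from the 133Xe 80 keV ROI (7R-3)
net_7r3 = subtract_interference(gross=850.0, background=120.0,
                                ratio=r135['7R-3'], primary_net=1500.0)
print(f"7R-3 net counts after 135Xe interference subtraction: {net_7r3:.1f}")
```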

The addition of interference terms from 131mXe and 133mXe is important to accurately estimate the sample activity under all activity scenarios, but these terms are currently not accounted for in the 7- and 10-ROI methods. Specifically, missing interference terms between ROI-5 and ROI-6, ROI-5 and ROI-3, and ROI-6 and ROI-3 have been shown in testing of IMS noble gas radionuclide laboratories to result in reporting elevated isotope activities. The interference terms will need to be either measured with isotopically pure xenon calibration gas (preferred) or estimated through modeling and simulations. However, not all of the needed isotopically pure xenon gases are available (e.g., 133mXe samples always have 133Xe present in them), in which case an estimate can be made using modeling. Unfortunately, it is difficult to account for detector-to-detector differences in models, and the simulations will introduce additional uncertainties that negatively impact the overall accuracy of the measurement.

Long-term field tests will be needed to verify the viability of routine field calibration and simulations to:

  • Quantify how much the interference terms are changed when switching to a high resolution detector (e.g., silicon) and determine the overall impact on detection sensitivity and occurrence of false positives.

  • Determine the magnitude of the interference terms for varying metastable activities in current and next generation systems and include the uncertainty and biases associated with them.

  • Incorporate the interference terms as seen in Eq. (7), and simultaneously solve for the terms across the five isotopes.

  • Further investigate the minor interferences from radon or xenon isotopes that either Compton down-scatter into ROIs or have small branching ratios.

Memory effect and background

Proper accounting of the radioactivity from a previous measurement (the memory effect in Eq. 2) is an area where improvement is readily achievable. The current β cells made from plastic scintillator material adsorb xenon, which is not removed completely by sample purging. If a sample is radioactive, some adsorbed radioxenon contributes to the counts of the subsequent sample. The amount depends on the length of time the xenon is in the cell, but can be as large as 10% [38]. Considerable progress has been made to reduce the memory effect in the plastic scintillator material, for example by applying coatings to the plastic or using different detection materials (e.g., silicon) for the β cells. Early coating attempts used metallic coatings sputtered onto the inner surface of the plastic β cells [38], but the temperature needed to sputter the metal onto the plastic damaged the scintillator. Separately, FOI and X-Ray Instrumentation Associates, LLC (XIA) studied the suitability of coatings such as Al2O3 to reduce the memory effect [39,40,41,42]. FOI worked with a commercial entity, Nanexa, to develop a coating method using Al2O3 that appears to bond well without negative impact to the detection efficiency and with small impact on resolution. Cells with Al2O3 coatings have undergone successful long-term testing, including stress testing, and have been used in IMS systems. The coating reduces the memory effect from approximately 3% to less than 0.1%. Alternatively, changing the detector material is also a promising area. Currently, the most promising new material for β cells is silicon, as silicon detectors not only improve the energy resolution, which reduces the interference issues, but also alleviate the memory effect. Reducing the memory effect of the β cells reduces the amount of radioactivity from previous samples that must be accounted for in the net count calculation.
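For illustration, the sketch below estimates a memory-effect count contribution from residual activity seen in a preceding gas-background acquisition, decay-corrected to the sample acquisition; this is a simplified, assumption-laden version of the memory terms actually implemented in [11, 12], with illustrative count values.

```python
import math

def memory_counts(gasbk_net_counts, t_gasbk_live_s, t_sample_live_s,
                  delta_t_s, half_life_s):
    """Estimate the memory-effect counts expected in a sample ROI from residual
    radioxenon observed in the preceding gas-background measurement, decay-corrected
    over the gap delta_t_s between the two acquisitions."""
    lam = math.log(2) / half_life_s
    # Detected activity at the start of the gas background, from its integrated counts
    rate_at_gasbk = gasbk_net_counts * lam / (1.0 - math.exp(-lam * t_gasbk_live_s))
    # Decay to the start of the sample acquisition
    rate_at_sample = rate_at_gasbk * math.exp(-lam * delta_t_s)
    # Counts expected during the sample acquisition
    return rate_at_sample * (1.0 - math.exp(-lam * t_sample_live_s)) / lam

# Illustrative 133Xe residual: 30 net counts seen in a 6 h gas background, 1 h gap,
# followed by a 12 h sample acquisition
m = memory_counts(30.0, 6 * 3600, 12 * 3600, 3600, 5.243 * 86400)
print(f"expected memory-effect counts in the sample ROI: {m:.1f}")
```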

Independent of the memory effect, cross contamination between samples due to residues of the previous sample in the sampling system may occur. This effect, which is less than 1% in current systems, is not accounted for through the gas background measurements and limits the correction of interference between samples. However, an effect this small has negligible impact to the system detection limits as defined in [25].

Reduction of the memory effect will likely change how measurements between samples are used, or whether they are even needed. Typically, a background measurement is performed to estimate the memory effect prior to the sample being measured. If the background is stable over long periods of time, the background measurement prior to the sample measurement may be unnecessary, providing increased sample measurement time and thus increased precision. If the background does change over time, it may be possible to monitor the change in background activity using different regions of the spectral space during a sample measurement and account for it within the sample analysis. Introducing additional ROIs to monitor the background in both the 7- and 10-ROI approaches provides reference points to estimate the background count rate and account for it within the sample file. Determining the feasibility of using background ROIs for analysis of data from silicon detectors, and quantifying their impact on measurement accuracy, will require long-term field testing to gather sufficient data.

Decay correction terms

One assumption built into the analysis equations originally formulated in 2000 [10, 20] that should now be reconsidered is the radioxenon collection decay term, which assumes constant collection. This is an adequate assumption if there is a steady-state plume with constant activity flowing past the system. However, the assumption is generally not realistic, since the true shape of the plume is unknown and is generally not constant, due to local wind conditions and the finite duration of the plume over the station.

One solution is to calculate additional activities with the assumption of constant plume activity removed. This can be accomplished by removing both the collection and processing decay terms and referencing the activity to the acquisition start time. The general Eq. (1) then simplifies to Eq. (8):

$${\text{Activity}} = \left( {\frac{{Counts_{Net} }}{{\varepsilon_{\gamma } \cdot \varepsilon_{\beta } \cdot {\text{BR}}_{\gamma } \cdot {\text{BR}}_{\beta } }}} \right)\left( {\frac{\lambda }{{1 - e^{{ - \lambda \cdot T_{\text{A}} }} }}} \right)$$
(8)

where the first two decay correction terms have been removed. This removes the assumption that the sampled air has constant radioactivity during the collection time. The activity is then provided at (decay-corrected to) the start time of the radioactivity measurement, and assumptions about the collected air can be made by analysts incorporating atmospheric transport modeling. This is a relatively small change to the analysis software of operating systems that would simplify evaluation of concentration results.
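A short sketch of Eq. (8), reusing the illustrative 133Xe numbers from the Eq. (1) example above, shows how the activity referenced to the acquisition start time would be computed; the inputs remain placeholder values.

```python
import math

def activity_at_acquisition_start(net_counts, eff_gamma, eff_beta, br_gamma, br_beta,
                                  half_life_s, t_acquire_s):
    """Activity (Bq) referenced to the acquisition start time per Eq. (8); only the
    decay-during-acquisition correction is applied."""
    lam = math.log(2) / half_life_s
    return (net_counts / (eff_gamma * eff_beta * br_gamma * br_beta)) \
           * lam / (1.0 - math.exp(-lam * t_acquire_s))

# Same illustrative 133Xe numbers as before; the constant-plume collection and
# processing corrections of Eq. (1) are left to the analyst and transport modeling
act = activity_at_acquisition_start(250, 0.55, 0.95, 0.37, 0.99,
                                    5.243 * 86400, 12 * 3600)
print(f"activity at acquisition start: {act * 1000:.2f} mBq")
```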

Measurement statistics and quality assurance

The equations given in Eqs. (1) and (2) are generalized and do not rigorously show all correction factors. The calculation of the net counts is described meticulously in [11, 12]. First, one must correctly handle the dependent variables, such as counts from the detector background. In addition, ROIs 7R-4 to 7R-7 and 10R-4 to 10R-10 overlap, and therefore the terms contain partially dependent variables. Second, in the 10-ROI approach, when calculation steps are performed based on the presence or absence of isotopes, the results may become biased. The 10-ROI approach uses internal “hypothesis testing”, which is problematic since the absence of a detection of an isotope is not evidence for the absence of the isotope. Hypothesis testing may increase the precision of the result; however, a result obtained after a series of “decisions” is likely to be statistically biased and its variance may be underestimated. Third, the structure of Eqs. (1) and (2) is such that statistical errors are negatively correlated. To facilitate a more concise representation of the data analysis, the following reformulation of the equations is presented.

Seen from a more mathematical point of view, the calculation of the activities from a number of ROIs is a typical linear over-determined problem [43].

$$c_{i} = M_{ij} a_{j}$$
(9)

Here \(c_{i}\) denotes the counts in the different ROIs in all three spectra, i.e., sample, gas background, and detector background, and \(a_{j}\) denotes the activities for all three spectra.

All factors involved in Eqs. (1) and (2) are represented in a matrix \(M_{ij}\). The importance of this representation is the availability of numerical tools to calculate activities and uncertainties in a concise way. The problem can be solved, for instance, by singular value decomposition (SVD), a standard tool in numerical mathematics [44]. The outcome of the SVD is a unique solution vector \(a_{j}\) as well as a covariance matrix \({\text{cov}}(a_{j}, a_{k})\). The diagonal elements \({\text{cov}}(a_{j}, a_{j})\) correspond to the variances, \(\sigma^{2}\). Off-diagonal elements can be used for advanced uncertainty analysis, particularly if nuclide ratios are concerned [45].
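A minimal numerical sketch of this SVD-based solution is given below; the design matrix, ROI counts, and weights are toy values chosen for illustration, not an actual system response matrix.

```python
import numpy as np

def solve_activities(M, c, sigma_c):
    """Solve the over-determined linear system c = M a of Eq. (9) by weighted
    least squares via SVD, returning the activity vector and its covariance."""
    w = 1.0 / np.asarray(sigma_c, dtype=float)           # weight each ROI by 1/sigma
    Mw = M * w[:, None]
    cw = c * w
    U, s, Vt = np.linalg.svd(Mw, full_matrices=False)
    s_inv = np.where(s > 1e-12 * s.max(), 1.0 / s, 0.0)  # guard against tiny singular values
    a = Vt.T @ (s_inv * (U.T @ cw))                      # pseudo-inverse solution
    cov = Vt.T @ np.diag(s_inv**2) @ Vt                  # covariance of the estimates
    return a, cov

# Toy example (assumed numbers): two activities observed through three ROIs
M = np.array([[1.00, 0.05],
              [0.10, 1.00],
              [0.02, 0.30]])
c = np.array([105.0, 230.0, 70.0])
a, cov = solve_activities(M, c, sigma_c=np.sqrt(c))
print("activities:", a, " 1-sigma:", np.sqrt(np.diag(cov)))
```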

The SVD algorithm does not preclude negative outcomes. Indeed, negative activities are possible and need to be interpreted correctly using Bayesian statistics according to [46].

A second advantage of this approach is the use of the data for QC. Long-term operational systems, under stable conditions, provide data sets that are a good approximation of the assumed probability distribution. For locations with no radioxenon, the underlying activity concentration distribution is Gaussian, and the observed frequency distribution of the data should be consistent with the calculated probability distributions. Skewness and other parameters of the empirical distributions give further hints of possible problems and yet-undiscovered effects that need additional investigation.

Nuclear decay data

Some of the physics constants used in Eq. (1), in particular certain branching ratios and decay constants (λ) or half-lives of the radioxenon isotopes, have considerable uncertainties, which affect the accuracy of the results and can lead to biases. A standardized set of nuclear decay constants (see Table 4) [47] for the γ- and x-rays has been suggested and, when validated, should be used by all system developers. The analysis for the x-rays was performed with simulation data and needs to be updated with experimental data. A similar analysis should be performed for the beta particles and conversion electrons. Improved accuracy of the nuclear decay data has a direct positive impact on radioxenon system accuracy.

Table 4 New evaluated xenon decay data with uncertainties in parentheses [47]

Additional isotopes

The accuracy of analysis approaches can be improved by identifying and accounting for unexpected or unknown contaminants in a sample. Traditionally, the four isotopes 131mXe, 133Xe, 133mXe, and 135Xe are considered for verification of the CTBT; they are therefore referred to as CTBT-relevant radioxenon isotopes in this paper. The four traditional radioxenon isotopes originate from nuclear explosions but can also be produced in commercial and research reactors as well as medical isotope production facilities. Besides generation by fission, radioxenon can also be produced via neutron activation of natural stable xenon (e.g., in air). Air activation produces the four traditional isotopes, but additionally the following non-CTBT-relevant isotopes: 125Xe, 127Xe, 129mXe, 135mXe, and 137Xe. While the non-CTBT-relevant isotopes have some potential future uses in the IMS, a greater concern is how to identify them when present in a sample and how to correct for their interference with the traditional radioxenon isotope signatures.

Of the non-CTBT-relevant isotopes, 127Xe has been the most heavily studied. With a half-life of roughly one month, it has been proposed for use as a calibration source in the IMS and laboratories [48, 49] and has been used in subsurface gas transport experiments. A careful study comparing experimental and simulated data is needed to better understand the possible implications of sample contamination by non-CTBT-relevant radioxenon isotopes.

Analysis approaches should be examined and possibly extended to accommodate other isotopes (contaminants) or unexpected occurrences (e.g., high-activity samples with large detector dead times) that might be observed. Although these are expected to be rare occurrences, there is a possibility that other isotopes, contamination, or even a very high activity sample will cause inaccurate activities to be reported for the four xenon isotopes. These cases should be considered and approaches developed to account for them through a subtraction term (\(\varvec{Counts}_{{\varvec{Contaminant}}}\)) that could be included in Eq. (2):

$$\varvec{Counts}_{{\varvec{Net}}} = \varvec{Counts}_{{\varvec{Gross}}} - \varvec{Counts}_{{\varvec{Background}}} - \varvec{Counts}_{{\varvec{Interference}}} - \varvec{Counts}_{{\varvec{Memory}}} - \varvec{Counts}_{{\varvec{Contaminant}}} .$$
(10)

The presence of contaminants can be determined through half-life analysis of the sample; however, it is also important to determine the energy and type of radiation to identify specific isotopes. Once specific contaminants are identified, they can be accounted for and accurate estimates of the xenon concentrations can be calculated.

Conclusion

Radioxenon detection and analysis systems have matured from laboratory benchtop systems into operational field systems for continuous monitoring of atmospheric radioxenon concentrations. This was spurred by the CTBTO Preparatory Commission installing and certifying radioxenon systems in the IMS, which is now nearing completion of the initial 40 stations that may have noble gas systems. In addition, the CTBTO Preparatory Commission is actively engaged in considering ways to improve performance. Significant work has been done on developing and upgrading the hardware of current systems into next-generation systems emphasizing increased sensitivity, time resolution, energy resolution, and reliability. As the systems become more sensitive, the analysis approaches and equations also need to be examined and updated. This paper provides a description of the analysis approaches and equations used on the principal systems, lessons learned during field operation, and specific approaches to improve system sensitivity.

One lesson learned is the importance of reporting the activity concentration referenced to the data acquisition start time, in addition to the collection start time. This removes assumptions about air sampling in the decay correction term of Eq. (1). Other recommended changes are summarized in Table 5. The first two columns of the table provide the equation number and the equation term impacted. The third column identifies the area for improvement. The recommendations are provided in column four, with some being near- and some long-term, while the fifth column describes further work, including experiments useful for supporting the recommendations. The first two recommendations involve the main equation (Eq. 1), with the remaining recommendations addressing improvements to the net count calculations (Eqs. 2, 3, and 10). The first two recommendations are to improve the decay and efficiency constants in the equation and to apply the constants consistently throughout calibration and analysis; if not addressed, these could introduce biases in the overall concentration values. Measurement uncertainty must be addressed throughout the equations, and field testing plays a large part in quantifying these uncertainties over the long term. Improvements to the interference and memory terms of the net count calculation and accounting for additional isotopes in the sample have associated near- and long-term recommendations to support the development and verification of the improvements.

Table 5 Summary of recommendations

The recommended changes to the equations are intended to improve radioxenon analysis and thereby provide more accurate and precise measurement values from the currently operating and emerging generation of radioxenon systems that use β–γ coincidence.