Detection channels
Liquid argon has a particular sensitivity to the \(\nu _e\) component of a supernova neutrino burst, via the dominant interaction, CC absorption of \(\nu _e\) on \(^{40}\)Ar,
$$\begin{aligned} \nu _e +{}^{40}\mathrm{Ar} \rightarrow e^- +{}^{40}\mathrm{K^*}, \end{aligned}$$
(4)
for which the observable is the \(e^-\) plus deexcitation products from the excited \(^{40}\)K\(^*\) final state. Additional channels include a \(\bar{\nu }_e\) CC interaction and ES on electrons. Cross sections for the most relevant interactions are shown in Fig. 3. It is worth noting that none of the neutrino-\(^{40}\)Ar cross sections in this energy range have been experimentally measured, although several theoretical calculations exist [5, 6, 82]. The uncertainties on the theoretical calculations are not generally quantified, and they may be large.
Another process of interest for supernova detection in liquid argon detectors, not yet fully studied, is NC scattering on Ar nuclei by any type of neutrino: \(\nu _X +\mathrm{Ar} \rightarrow \nu _X +\mathrm{Ar}^*\), for which the observable is the cascade of deexcitation gammas from the final-state Ar nucleus. A dominant 9.8-MeV Ar\(^*\) decay line has recently been identified as a spin-flip M1 transition [83]. At this energy the probability of \(e^+e^-\) pair production is relatively high, offering a potentially interesting NC tag. Other transitions are under investigation. NC interactions are not included in the studies presented here, although they represent a topic for future investigation.
The predicted event rate from a supernova burst may be calculated by folding expected neutrino flux differential energy spectra with cross sections for the relevant channels, and with detector response; this is done using SNOwGLoBES [6] (see Sect. 5.3.1.)
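As a schematic illustration of this folding, the sketch below multiplies a tabulated fluence by a tabulated cross section and the number of argon targets. The file names, two-column file format, and energy grid are assumptions for illustration only and do not reflect the SNOwGLoBES input format, which also applies detector smearing (Sect. 5.3.1).

```python
import numpy as np

# Hypothetical two-column inputs (energy [MeV], value), tabulated on a common
# energy grid: a time-integrated nu_e fluence [nu / (cm^2 MeV)] and the
# nu_e-40Ar CC cross section [cm^2].
E_nu, fluence = np.loadtxt("fluence_nue.dat", unpack=True)
_, xsec = np.loadtxt("xsec_nue_ar40cc.dat", unpack=True)

# Number of 40Ar nuclei in 40 kton of liquid argon: mass / molar mass * N_A
n_targets = 40e9 / 39.948 * 6.022e23

# Fold fluence, cross section, and target count to get mean interactions per
# energy bin; detector response is applied separately via smearing matrices.
dE = np.gradient(E_nu)
events_per_bin = fluence * xsec * n_targets * dE
print("Mean nu_e CC interactions:", events_per_bin.sum())
```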
Event simulation and reconstruction
Supernova neutrino events, due to their low energies, will manifest themselves primarily as spatially small events, perhaps up to a few tens of cm scale, with stub-like tracks from electrons (or positrons from the rarer \(\bar{\nu }_e\) interactions). Events from \(\nu _e\)CC, \(\nu _e+{}^{40}\mathrm{Ar}\rightarrow e^{-}+{}^{40}\mathrm{K}^{*}\), are likely to be accompanied by de-excitation products – gamma rays and/or ejected nucleons. Gamma rays are in principle observable via energy deposition from Compton scattering, which will show up as small charge blips in the time projection chamber. Gamma rays can also be produced by bremsstrahlung energy loss of electrons or positrons. The critical energy for bremsstrahlung energy loss for electrons in argon is about 45 MeV. Ejected nucleons may result in loss of observed energy for the event, although some may interact to produce observable deexcitations via inelastic scatters on argon. Such MeV-scale activity associated with neutrino interactions has been observed in the ArgoNeuT LArTPC [85]. ES on electrons will result in single scattered electron tracks, and single or cascades of gamma rays may result from NC excitations of the argon nucleus. Each interaction category has, in principle, a distinctive signature. Figure 4 shows examples of simulated \(\nu _e\)CC and neutrino-electron ES interactions in DUNE.
The canonical event reconstruction task is to identify the interaction channel, the neutrino flavor for CC events, and to determine the four-momentum of the incoming neutrino; this overall task is the same for low-energy events as for high-energy ones. The challenge is to reconstruct the properties of the lepton (if present) and, to the extent possible, to tag the interaction channel by the pattern of final-state particles. The LArSoft [86] open-source event simulation and reconstruction software tools for low-energy events are employed; a full description of the algorithms is beyond the scope of this work. Performance is described in Sect. 5.2.2. Enhanced tools are under development, for example for interaction channel tagging; however, standard tools already provide reasonable capability for energy reconstruction and tracking of low-energy events. Event reconstruction in this energy range has been demonstrated by MicroBooNE for Michel electrons [87].
Event generation
MARLEY (Model of Argon Reaction Low Energy Yields) [5, 82] simulates tens-of-MeV neutrino-nucleus interactions in liquid argon. For the studies here, MARLEY was only used to simulate CC \(\nu _e\) scattering on \(^{40}\)Ar, but other reaction channels will be added in the future.
MARLEY weights the incident neutrino spectrum according to the assumed interaction cross section, selects an initial excited state of the residual \(^{40}\)K\(^*\) nucleus, and samples an outgoing electron direction using the allowed approximation for the \(\nu _e\)CC differential cross section, i.e., the zero momentum transfer and zero nucleon velocity limit of the tree-level \(\nu _e\)CC differential cross section, which may be written as
$$\begin{aligned} \frac{d\sigma }{d\cos \theta } = \frac{G_F^2 |V_{ud}|^2}{2\pi }\, |\mathbf {p}_e|\, E_e \,F(Z_f, \beta _e) \left[ (1+\beta _e \cos \theta )\, B(F) + \left( \frac{3 - \beta _e \cos \theta }{3}\right) B(GT)\right] . \end{aligned}$$
In this expression, \(\theta \) is the angle between the incident neutrino and the outgoing electron, \(G_F\) is the Fermi constant, \(V_{ud}\) is the quark mixing matrix element, \(F(Z_f, \beta _e)\) is the Fermi function, and \(|\mathbf {p}_e|\), \(E_e\), and \(\beta _e\) are the outgoing electron’s three-momentum, total energy, and velocity, respectively. B(F) and B(GT) are the Fermi and Gamow-Teller matrix elements. MARLEY computes this cross section using a table of Fermi and Gamow-Teller nuclear matrix elements. Their values are taken from experimental measurements at low excitation energies and a quasiparticle random phase approximation (QRPA) calculation at high excitation energies.
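The following sketch is a numerical transcription of this expression for a single transition, under stated assumptions: natural units with no conversion to cm\(^2\), a simplified non-relativistic Fermi function, and user-supplied B(F) and B(GT) values. MARLEY itself uses tabulated matrix elements and sums over the accessible \(^{40}\)K\(^*\) levels, so this is illustrative only.

```python
import numpy as np

# Constants in natural units (hbar = c = 1, energies in MeV); no cm^2 conversion.
G_F = 1.1663787e-11   # Fermi constant [MeV^-2]
V_UD = 0.97370        # CKM matrix element
ALPHA = 1.0 / 137.036 # fine-structure constant
M_E = 0.510999        # electron mass [MeV]

def fermi_function(Z_f, beta_e):
    """Simple non-relativistic Coulomb correction for the outgoing electron."""
    eta = ALPHA * Z_f / beta_e
    return 2.0 * np.pi * eta / (1.0 - np.exp(-2.0 * np.pi * eta))

def dsigma_dcos(E_e, cos_theta, B_F, B_GT, Z_f=19):
    """d(sigma)/d(cos theta) for one transition with strengths B(F) and B(GT)."""
    p_e = np.sqrt(E_e**2 - M_E**2)   # outgoing electron momentum
    beta_e = p_e / E_e               # outgoing electron velocity
    prefactor = G_F**2 * V_UD**2 / (2.0 * np.pi) * p_e * E_e
    angular = ((1.0 + beta_e * cos_theta) * B_F
               + (3.0 - beta_e * cos_theta) / 3.0 * B_GT)
    return prefactor * fermi_function(Z_f, beta_e) * angular
```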
After simulating the initial two-body \(^{40}\)Ar(\(\nu _e\), \(e^{-}\))\(^{40}\)K\(^*\) reaction for an event, MARLEY also handles the subsequent nuclear de-excitation. For bound nuclear states, the de-excitation gamma rays are sampled using tables of experimental branching ratios [88,89,90]. These tables are supplemented with theoretical estimates when experimental data are unavailable. For particle-unbound nuclear states, MARLEY simulates the competition between gamma-ray and nuclear fragment emission using the Hauser-Feshbach statistical model. Figure 5 shows an example visualization of a simulated MARLEY event. Figure 6 shows the mean fraction of energy apportioned to the different possible interaction products by MARLEY as a function of neutrino energy.
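A toy version of the bound-state cascade sampling is sketched below; the three-level scheme and branching ratios are invented for illustration, whereas MARLEY draws from the evaluated \(^{40}\)K level data [88,89,90] and falls back to the Hauser-Feshbach model for unbound states.

```python
import random

# Invented level scheme (energies in MeV): level -> [(branching ratio, daughter level)]
levels = {
    2.290: [(1.0, 0.800)],
    0.800: [(0.7, 0.030), (0.3, 0.000)],
    0.030: [(1.0, 0.000)],
    0.000: [],                       # ground state: no further decay
}

def deexcite(E_level):
    """Follow the cascade from E_level to the ground state; return gamma energies."""
    gammas = []
    while levels[E_level]:
        r, acc = random.random(), 0.0
        for branching_ratio, daughter in levels[E_level]:
            acc += branching_ratio
            if r < acc:
                gammas.append(E_level - daughter)
                E_level = daughter
                break
    return gammas

print(deexcite(2.290))   # e.g. [1.49, 0.77, 0.03]
```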
Low-energy event reconstruction performance
The LArSoft [86] Geant4-based software package is used to simulate the final-state products from MARLEY in the DUNE LArTPC. Both TPC ionization-based signals and scintillation photon signals are simulated.
For the studies described here, the DUNE LArSoft \(1\times 2\times 6\) m far detector geometry was used [3], along with standard DUNE reconstruction tools included in the LArSoft package. To determine event-by-event reconstruction information, 2D hits are formed using the HitFinder algorithm. HitFinder scans through wires and defines hits in regions between two signal minima where the maximum signal is above threshold. The algorithm then performs n Gaussian fits for n consecutive regions. The hit center is defined as the fitted Gaussian center, while the beginning and end are defined using the fitted Gaussian width. We used the TrajCluster algorithm to form reconstructed clusters. The TrajCluster algorithm creates clusters using local information from 2D trajectories, taking advantage of minimal ionization energy loss compared to the kinetic energy of the particle. A 2D trajectory is formed from trajectory points defined by the cryostat, plane, and TPC in which the trajectory resides. The trajectory points are made up of charge-weighted positions of all hits used to form the point. The algorithm steps through the 2D space of hits sorted by wire ID number, region of interest in time, and then by “multiplet” (i.e., a collection of hits found using a multi-Gaussian fit). Clusters are formed in the algorithm by stitching together nearby 2D hits. 3D track information is produced using the Projection Matching Algorithm (PMA). PMA takes in 2D clusters formed through TrajCluster, and the algorithm matches clusters in the three 2D projection wire planes to build the tracks. PMA measures the distance between projections, and tracks are formed based on stitching together nearby projections.
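The hit-finding step can be illustrated with the toy single-wire sketch below, which approximates the "regions between two signal minima" by contiguous above-threshold regions and fits each with a single Gaussian. The real HitFinder also performs multi-Gaussian fits for overlapping hits and handles region-of-interest bookkeeping; this is a simplified stand-in, not the LArSoft implementation.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(t, amp, mean, sigma):
    return amp * np.exp(-0.5 * ((t - mean) / sigma) ** 2)

def find_hits(waveform, threshold):
    """Toy single-Gaussian hit finder for one wire waveform."""
    hits = []
    # Pad so rising/falling edges pair up even at the waveform boundaries.
    above = np.concatenate(([0], (waveform > threshold).astype(int), [0]))
    edges = np.flatnonzero(np.diff(above))
    for start, stop in zip(edges[::2], edges[1::2]):
        if stop - start < 4:          # too few samples for a 3-parameter fit
            continue
        t = np.arange(start, stop, dtype=float)
        p0 = [waveform[start:stop].max(), t.mean(), (stop - start) / 4.0]
        try:
            (amp, mean, sigma), _ = curve_fit(gaussian, t, waveform[start:stop], p0=p0)
        except RuntimeError:          # fit failed to converge
            continue
        # Hit center from the fitted mean; extent from the fitted width.
        hits.append({"peak_time": mean, "width": abs(sigma), "amplitude": amp})
    return hits
```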
The photon (scintillation) simulation implemented ARAPUCA light collection devices with realistic light yields that differ between particle types. Reconstructed photon flashes are used to correct ionization charge loss during drift, which provides substantial improvement to energy reconstruction. Even in the absence of efficient TPC-flash matching, resolution smearing due to drift losses may end up being a small effect, particularly given the high electron lifetimes recently achieved in the DUNE prototype detector [91]. Photons may also be used for calorimetry, although that method has not been implemented for these studies.
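The drift-attenuation correction enabled by flash matching amounts to scaling the collected charge by the exponential loss over the measured drift time, as in the sketch below; the electron lifetime and the example numbers are illustrative, not DUNE specifications.

```python
import math

def corrected_charge(q_collected, t_hit_ms, t_flash_ms, lifetime_ms=10.0):
    """Compensate for ionization electrons captured by impurities during drift.
    The matched photon flash gives t0, so the drift time is known event by event."""
    t_drift_ms = t_hit_ms - t_flash_ms
    return q_collected * math.exp(t_drift_ms / lifetime_ms)

# A hit read out 1.2 ms after the matched flash, with 100 (arbitrary units) of
# collected charge, is corrected up to ~113 for an assumed 10 ms lifetime.
print(corrected_charge(100.0, t_hit_ms=1.2, t_flash_ms=0.0))
```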
Figure 7 shows summarized fractional energy resolution and efficiency performance for MARLEY events. Angular resolution performance will be addressed in a separate publication.
Backgrounds
Understanding of cosmogenic [92] and radiological backgrounds is also necessary for determination of low-energy event reconstruction quality and for setting detector requirements. The dominant radiological background is expected to be \(^{39}\)Ar, which \(\beta \) decays at a rate of \(\sim \)1 Bq/liter, with an endpoint of <1 MeV. Small single-hit blips from these decays or other radiological backgrounds may fake de-excitation gammas. However, preliminary studies show that these background blips will have a very minor effect on reconstruction of triggered supernova burst events. The effects of backgrounds on a data acquisition (DAQ) and triggering system that satisfies supernova burst triggering requirements need separate consideration; these will be the topics of future study. For the studies presented here, the impact of backgrounds on event reconstruction is ignored.
Expected supernova burst signal
SNOwGLoBES
Many supernova neutrino studies done for DUNE so far have employed SNOwGLoBES [6], a fast event-rate computation tool. This uses GLoBES front-end software [93] to convolve fluxes with cross sections and detector parameters. The output is in the form of both mean interaction rates for each channel as a function of neutrino energy and mean “smeared” rates as a function of detected energy for each channel (i.e., the spectrum that actually would be observed in a detector). The smearing (transfer) matrices incorporate both interaction product spectra for a given neutrino energy and detector response. Figure 8 shows examples of such transfer matrices created using MARLEY and LArSoft. They were made by determining the distribution of reconstructed charge using a full simulation of the detector response (including the generation, transport, and detection of ionization signals and the electronics, followed by high-level reconstruction algorithms) as a function of neutrino energy in 0.5-MeV steps. Each column of a transfer matrix for a given interaction channel represents the detector response to interactions of monoenergetic neutrinos in the detector. An electron drift attenuation correction, which can be computed using the reconstructed photon signal (which determines the time of the interaction and hence the drift distance), improves resolution significantly; see Fig. 9.
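Applying a transfer matrix in this scheme is a matrix-vector product, as in the sketch below; the file names and array shapes are placeholders rather than the SNOwGLoBES file format.

```python
import numpy as np

# T[i, j]: probability that a neutrino in true-energy bin j is reconstructed in
# detected-energy bin i. Each column is the response to monoenergetic neutrinos.
T = np.loadtxt("smear_nue_ar40_marley.dat")     # hypothetical (n_detected x n_true)
true_rates = np.loadtxt("rates_nue_ar40.dat")   # interactions per true-energy bin

# The observed ("smeared") spectrum is the matrix-vector product.
detected_spectrum = T @ true_rates
print("Total smeared events:", detected_spectrum.sum())
```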
Time dependence of a supernova flux in SNOwGLoBES can be straightforwardly handled by providing multiple fluxes divided into different time bins (see Fig. 11), although studies here assume a time-integrated flux.
While SNOwGLoBES is, and will continue to be, a fast, useful tool, it has limitations with respect to a full simulation. One loses correlated event-by-event angular and energy information, for example; studies of directionality require such complete event-by-event information [94]. Nevertheless, transfer matrices generated with full simulations can be used for fast computation of observed event rates and energy distributions from which useful conclusions can be drawn.
Expected event rates
Table 1 shows rates calculated for the dominant interactions in argon for the “Livermore” model [95] (included for comparison with literature), the “GKVM” model [96], and the “Garching” electron-capture supernova model [8]. For the first and last, no flavor transitions are assumed in the supernova or Earth; the GKVM model assumes collective effects in the supernova. In general, there is a rather wide variation – up to an order of magnitude – in event rate for different models due to different numerical treatment (e.g., neutrino transport, dimensionality), physics input (nuclear equation of state, nuclear correlations and their impact on neutrino opacities, neutrino-nucleus interactions) and flavor transition effects. In addition, there is intrinsic variation in the nature of the progenitor and collapse mechanism. Neutrino emission from the supernova may furthermore have an emitted lepton-flavor asymmetry [97], so that observed rates may be dependent on the supernova direction.
Table 1 Event counts computed with SNOwGLoBES for different supernova models in 40 kton of liquid argon for a core collapse at 10 kpc, for \(\nu _e\)CC and \(\bar{\nu }_e\)CC channels and ES (X represents all flavors) on electrons. Event rates will simply scale by active detector mass and inverse square of supernova distance. No flavor transitions are assumed for the “Livermore” and “Garching” models; the “GKVM” model includes collective effects. Note that flavor transitions (both standard and collective) will potentially have a large, model-dependent effect, as discussed in Sect. 2.4.1.

Figure 10 shows the expected event spectrum and the interaction channel breakdown for the “Garching” model before and after detector response smearing with SNOwGLoBES. Clearly, the \(\nu _e\) flavor dominates. Although water and scintillator detectors will record \(\nu _e\) events [98, 99], the \(\nu _e\) flavor may not be cleanly separable in these detectors. Liquid argon is the only future prospect for a large, cleanly tagged supernova \(\nu _e\) sample [50].
Figure 11 shows computed event rates showing the effect of different mass orderings, using the assumptions in Sect. 2.4.1. MSW-dominated transitions affect the subsequent rise of the signal over a fraction of a second; the time profile will depend on the turn-on of the non-\(\nu _e\) flavors. For this model at 10 kpc there are statistically-significant differences in the time profile of the signal for the different orderings.
For a given supernova, the number of signal events scales with detector mass and inverse square of distance as shown in Fig. 12. The standard supernova distance is 10 kpc, which is just beyond the center of the Milky Way. At this distance, DUNE will observe from several hundred to several thousand events. For a collapse in the Andromeda galaxy, 780 kpc away, a 40-kton detector would observe a few events at most.
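The scaling can be written as a one-line function; the reference count used in the example is an illustrative round number, not a specific Table 1 entry.

```python
def scaled_events(n_ref, mass_kton, distance_kpc,
                  ref_mass_kton=40.0, ref_distance_kpc=10.0):
    """Scale a reference event count (given at 10 kpc in 40 kton) by active
    detector mass and by the inverse square of the supernova distance."""
    return n_ref * (mass_kton / ref_mass_kton) * (ref_distance_kpc / distance_kpc) ** 2

# Illustrative round number: a model giving 3000 events at 10 kpc in 40 kton
# yields only about half an event for a burst in Andromeda (780 kpc).
print(scaled_events(3000, 40.0, 780.0))   # ~0.5
```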
Burst triggering
Given the rarity of a supernova neutrino burst in our galactic neighbourhood and the importance of its detection, it is essential to develop a redundant and highly efficient triggering scheme in DUNE. The trigger on a supernova neutrino burst can be done using either TPC or photon detection system information. In both cases, the trigger scheme exploits the time coincidence of multiple signals over a timescale matching the supernova luminosity evolution. Development of such a data acquisition and triggering scheme is a major activity within DUNE and will be the topic of future dedicated publications. Both TPC and PD information can be used for triggering, for both SP and DP modules. Two concrete examples of preliminary trigger design studies are described here. Note that the general strategy will be to record data from all channels over a 30–100 second period around every trigger [3], so that the individual event reconstruction efficiency as described in Sect. 5.2.2 will apply for physics performance.
The first example is a trigger based on the photon detection system of the DP module. A real-time algorithm should provide trigger primitives by searching for photomultiplier hits and optical clusters, where the latter combines several hits together based on their time/spatial information. According to simulations, the optimal cluster reconstruction parameters yield a 0.05 Hz radiological background cluster rate for a supernova \(\nu _{\text {e}}\)CC signal cluster efficiency of 11.8%. Once the optimal cluster parameters are found, the computation of the supernova neutrino burst trigger efficiency is performed using the minimum cluster multiplicity. This value, set by the radiological background cluster rate and the maximum fake trigger rate (one per month), is \(\ge \)3 in a 2-second window (time in which about half of the events are expected). Approximately 3/0.118\(\simeq \)25 interactions must occur in the active volume to obtain approximately 45% trigger efficiency while maintaining a fake trigger rate of one per month.
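A toy Poisson estimate of this multiplicity requirement is sketched below; it takes the per-interaction cluster efficiency and the fraction of interactions falling in the 2-second window as inputs. The quoted efficiencies come from full simulations of the burst time profile, so the numbers from this sketch will differ somewhat.

```python
import math

def burst_trigger_efficiency(n_interactions, cluster_eff=0.118,
                             window_fraction=0.5, min_multiplicity=3):
    """Probability of seeing at least `min_multiplicity` optical clusters in the
    trigger window, assuming Poisson statistics. `cluster_eff` is the
    per-interaction cluster efficiency and `window_fraction` the fraction of
    burst interactions falling inside the 2 s window."""
    mu = n_interactions * window_fraction * cluster_eff
    p_below = sum(math.exp(-mu) * mu**k / math.factorial(k)
                  for k in range(min_multiplicity))
    return 1.0 - p_below

print(burst_trigger_efficiency(25))   # toy estimate for ~25 interactions
```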
The triggering efficiency as a function of the number of supernova neutrino interactions is shown in Fig. 13. At 20 kpc, the edge of the Galaxy, about 80 supernova neutrino interactions in the 12.1-kton active mass (assumed supernova-burst-sensitive mass for a single DP module) are expected (see Fig. 12). Therefore, the DP photon detection system should yield a highly efficient trigger for a supernova neutrino burst occurring anywhere in the Milky Way.
The second example considered is a TPC-based supernova neutrino burst trigger in a SP module (SP photon-based triggering will be considered in a future study). Such a trigger, based on the time coincidence of multiple neutrino interactions over a period of up to 10 s, yields roughly comparable efficiencies. Figure 14 shows efficiencies obtained in this way for a DUNE SP module and for supernova bursts with an energy and time evolution as shown in Fig. 1. Triggering using TPC information is facilitated by a multi-level data selection chain whereby ionization charge deposits are first selected on a per-wire basis, using a threshold-based hit-finding scheme. This results in low-level trigger primitives (hit summaries) which can be correlated in time and channel space to construct higher-level trigger candidate objects. Low-energy trigger candidates, each consistent with the ionization deposition due to a single supernova neutrino interaction, subsequently serve as input to the supernova burst trigger. Simulations demonstrate that the trigger candidate efficiency for any individual supernova burst neutrino interaction is on the order of 20–30%; see Fig. 14. However, a multiplicity-based supernova burst trigger that integrates low-energy trigger candidates over a \(\sim \)10 s window yields high trigger efficiency out to the galactic edge while keeping fake supernova burst trigger rates due to noise and radiological backgrounds to the required level of one per month or less.
An energy-weighted multiplicity count scheme further increases efficiency and minimizes fake triggers due to noise and/or radiological backgrounds. This effect is illustrated in Fig. 14, where nearly 100% efficiency is possible out to the edge of the galaxy, and 70% efficiency is possible for a burst at the Large Magellanic Cloud (or for any supernova burst creating \(\sim \)10 events). This performance is obtained by considering the summed-waveform digitized-charge distribution of trigger candidates over 10 s and comparing the background-only and background-plus-burst hypotheses, as sketched below. The efficiency gain compared to a simpler, counting-based trigger candidate approach is significant; using only counting information, the efficiency for a supernova burst at the Large Magellanic Cloud is only 6.5%. These algorithms are being refined to further improve supernova burst trigger efficiency for more distant supernova bursts. Alternative data selection and triggering schemes are also being investigated, involving, e.g., deep neural networks implemented for real-time or online data processing in the DAQ [100].
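A minimal sketch of such a hypothesis comparison is a binned Poisson log-likelihood ratio over the summed-charge distribution; the templates and the threshold tuning are placeholders for the simulated background and signal spectra used in the actual trigger study.

```python
import numpy as np

def log_likelihood_ratio(observed, bkg_expected, sig_expected):
    """Binned Poisson log-likelihood ratio of the background-plus-burst vs.
    background-only hypotheses for the summed trigger-candidate charge
    distribution accumulated over the 10 s window. Constant factorial terms
    cancel in the difference; bins with zero expected background would need
    regularization in practice."""
    s_plus_b = bkg_expected + sig_expected
    return np.sum(observed * (np.log(s_plus_b) - np.log(bkg_expected))
                  - (s_plus_b - bkg_expected))

# A burst is flagged when the ratio exceeds a threshold tuned so that
# background-only fluctuations exceed it less than once per month.
```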
Event timing in DUNE
Timing for supernova neutrino events is provided by both the TPC and the photon detector system. Basic timing requirements are set by event vertexing and fiducialization needs. Here we note a few supernova-specific design considerations. During the first 50 ms of a 10-kpc-distant supernova, the mean interval between successive neutrino interactions is \(0.5{-}1.7~\mathrm {ms}\), depending on the model. The TPC alone provides a time resolution of 0.6 ms (corresponding to the drift time at 500 V/cm), commensurate with the fundamental statistical limitations at this distance. However, nearly half of galactic supernova candidates lie closer to Earth than this, so the rate can be tens or (less likely) hundreds of times higher. A resolution of \(\mathord {<}1~\upmu \mathrm {s}\), as already provided by the photon detector system, ensures that DUNE’s measurement of the neutrino burst time profile is always limited by rate and not by detector resolution. The hypothesized oscillations of the neutrino flux due to standing accretion shock instabilities would lead to features with a characteristic time of \(\sim \)10 ms, comfortably greater than the time resolution. The possible neutrino “trapping notch” (a dip in luminosity due to trapping of neutrinos in the stellar core) right before the start of the neutronization burst has a width of \(1{-}2~\mathrm {ms}\). Identifying the trapping notch could be possible for the closest supernovae (a few kpc).
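The quoted interval scale follows directly from the expected counts in the first 50 ms; the counts in the snippet below are back-solved from the quoted 0.5 and 1.7 ms figures for illustration, rather than taken from a specific model.

```python
# Back-of-the-envelope check: N interactions in the first 50 ms of a 10 kpc
# burst give a mean interval of 50 ms / N.
for n_in_first_50ms in (30, 100):   # illustrative counts spanning the models
    print(f"{50.0 / n_in_first_50ms:.1f} ms mean interval "
          f"for {n_in_first_50ms} interactions")
```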