Non-Standard Interactions in propagation at the Deep Underground Neutrino Experiment

We study the sensitivity of current and future long-baseline neutrino oscillation experiments to the effects of dimension-six operators affecting neutrino propagation through the Earth, commonly referred to as Non-Standard Interactions (NSI). All relevant parameters entering the oscillation probabilities (standard and non-standard) are considered at once, in order to take into account possible cancellations and degeneracies between them. We find that the Deep Underground Neutrino Experiment will significantly improve over current constraints for most NSI parameters. Most notably, it will be able to rule out the so-called LMA-dark solution, still compatible with current oscillation data, and will be sensitive to off-diagonal NSI parameters at the level of $\varepsilon \sim \mathcal{O}(0.05 - 0.5)$. We also identify two degeneracies among standard and non-standard parameters, which could be partially resolved by combining T2HK and DUNE data.


Introduction
The discovery of neutrino oscillations (and, with them, neutrino masses) stands today as one of the clearest evidences of physics beyond the Standard Model (SM). If the SM is regarded as a low-energy effective theory, neutrino masses can be generated by a single non-renormalizable d = 5 operator, also known as the Weinberg operator [1]:

$$ \delta\mathcal{L}_{d=5} = \frac{c_{d=5}}{\Lambda}\left(\overline{L^c}\,\tilde\varphi^*\right)\left(\tilde\varphi^\dagger L\right) + \mathrm{h.c.}, \qquad (1.1) $$

where L stands for the lepton doublet, $\tilde\varphi = i\sigma_2 \varphi^*$, $\varphi$ being the SM Higgs doublet, and Λ is the scale of New Physics (NP) up to which the effective theory is valid. In Eq. 1.1, $c_{d=5}$ is a coefficient which depends on the high-energy theory responsible for the effective operator at low energies. Interestingly enough, the Weinberg operator is the only gauge-invariant d = 5 operator which can be constructed with the SM particle content. Furthermore, it beautifully explains the smallness of neutrino masses with respect to the rest of the SM fermions through the suppression by a high NP scale. When working in an effective theory approach, however, an infinite tower of operators is in principle expected. The effective Lagrangian at low energies reads:

$$ \mathcal{L}_{\rm eff} = \mathcal{L}_{\rm SM} + \frac{c_{d=5}}{\Lambda}\,\mathcal{O}_{d=5} + \sum_i \frac{c_i^{d=6}}{\Lambda^2}\,\mathcal{O}_i^{d=6} + \dots \qquad (1.2) $$

Thus, the effects coming from higher-dimensional operators could also potentially give observable signals at low energies (as in the case of neutrino masses), in the form of Non-Standard Interactions (NSI) between SM particles. In the case of neutrinos, these could take place via d = 6 four-fermion effective operators¹, in a similar fashion to Fermi's theory of weak interactions. Four-fermion operators involving neutrino fields can be divided into two main categories:

1. Operators affecting charged-current neutrino interactions. These include, for instance, operators of the form $(\bar\ell_\alpha \gamma^\mu P_L \nu_\beta)(\bar q \gamma_\mu P q')$, where $\ell$ stands for a charged lepton, P stands for one of the chirality projectors $P_{R,L} \equiv \frac{1}{2}(1 \pm \gamma_5)$, α and β are lepton flavor indices, and q and q' represent up- and down-type quarks.
2. Operators affecting neutral-current neutrino interactions. These are operators of the form $(\bar\nu_\alpha \gamma^\mu P_L \nu_\beta)(\bar f \gamma_\mu P f)$. In this case, f stands for any SM fermion.
Operators of the first type affect neutrino production and detection processes. For this type of NSI, a detector exposed to a very intense neutrino beam, in combination with a near detector, would be desired in order to collect a large enough event sample [4]. Systematic uncertainties would play an important role in this case, since for neutrino beams produced from pion decay the flux cannot be computed precisely.² For recent studies of the potential of neutrino oscillation experiments to probe NSI affecting neutrino production and detection, see, e.g., Refs. [7][8][9][10][11][12].
For operators affecting neutral-current neutrino interactions the situation is very different, since the interactions with matter can take place coherently, leading to an enhanced effect. Therefore, long-baseline neutrino oscillation experiments, with L ∼ O(500 − 1000) km, could potentially place very strong constraints on NSI affecting neutrino propagation. Moreover, unlike at atmospheric neutrino oscillation experiments [13][14][15][16], at long-baseline beam experiments the beam is well measured at a near detector, keeping systematic uncertainties under control. Future long-baseline facilities, combined with a dedicated short-baseline program [17][18][19] to determine neutrino cross sections precisely, expect to be able to bring systematic uncertainties down to the percent level. Therefore, they offer an ideal benchmark to constrain NSI in propagation. This will be the focus of the present work.
As a benchmark setup, we consider the proposed Deep Underground Neutrino Experiment (DUNE) [20] and determine the bounds that it will be able to place on NSI affecting neutrino propagation through matter. For comparison, we will also show the sensitivity reach of the current generation of long-baseline neutrino oscillation experiments, i.e., T2K [21] and NOvA [22]. Finally, we will also compare its reach to that of a proposed future neutrino oscillation experiment with much larger statistics but a much shorter baseline, to illustrate the importance of the baseline length over the size of the collected event sample.

¹ In principle, the largest effects from NSI are expected to come from d = 6 operators, since they appear at low order in the expansion. However, this might not always be the case [2]. The situation may be different if, for instance, some operators in the expansion are forbidden by a symmetry. In a similar fashion, effects coming from d = 6 operators might be less suppressed than those coming from d = 5 operators, e.g., if the scales of NP associated with the breaking of lepton number and of lepton flavor symmetries are very different [3].

² A different situation would take place at beams produced from muon decay, such as Neutrino Factories or the more recently proposed nuSTORM facility. In this case, the flux uncertainties are expected to remain at (or below) 1% [5,6].
As an example, we will consider the reach of the T2HK experiment [23]. The impact of NSI in propagation at long-baseline experiments has been studied extensively in the literature; see Refs. [24][25][26][27][28][29][30][31][32] for an incomplete list, or Refs. [33,34] for recent reviews on the topic. In particular, the reach of the LBNE experiment (very similar to the DUNE setup considered in this work) was studied in Ref. [35]. However, that study was performed under the assumption of a vanishing θ_13, and only one non-standard parameter was switched on at a time. In the current work, we follow the same approach as in Ref. [32]: all NSI parameters are included at once in the simulations, in order to explore possible correlations and degeneracies among them. As we will see, this reveals two important degeneracies, potentially harmful for standard oscillation analyses.
The recent determination of θ_13 also has important consequences for the sensitivity to NSI parameters. First, the large value of θ_13 makes it possible for the interference terms between the standard and non-standard contributions to the oscillation amplitudes to become relevant (see, e.g., Ref. [36] for a recent discussion). In addition, the value of θ_13 has now been determined to extremely good accuracy by reactor experiments [37][38][39], while the current generation of long-baseline facilities expects to significantly improve the precision on the atmospheric parameters in the upcoming years [40]. On the verge of the precision era in neutrino experiments, it thus seems appropriate to reevaluate the sensitivity of current and future long-baseline experiments, and in particular of the DUNE proposal, to NSI parameters.
The paper is structured as follows. In Sec. 2 we introduce the NSI formalism; Sec. 3 describes the simulation procedure and the more technical details of the experimental setups under study; Sec. 4 summarizes our results, and we present our conclusions in Sec. 5. Finally, App. A contains some more technical details regarding the implementation of previous constraints on the oscillation parameters in our simulations.
2 The formalism of NSI in propagation

NSI affecting neutrino propagation (from here on, we will refer to them simply as NSI) take place through the following four-fermion effective operators:

$$ \mathcal{L}_{\rm NSI} = -2\sqrt{2}\, G_F\, \varepsilon^{fP}_{\alpha\beta}\, \left(\bar\nu_\alpha \gamma^\mu P_L \nu_\beta\right)\left(\bar f \gamma_\mu P f\right), \qquad (2.1) $$

where $G_F$ is the Fermi constant, f = u, d, e runs over the fermions present in Earth matter, P stands for one of the projection operators $P_L \equiv \frac{1}{2}(1-\gamma_5)$ or $P_R \equiv \frac{1}{2}(1+\gamma_5)$, and α, β = e, µ, τ. From neutrino oscillations we have no information on the separate contribution of a given operator with coefficient $\varepsilon^{fP}_{\alpha\beta}$, but only on their sum over flavors and chiralities. The effects of these operators appear in the neutrino evolution equation, in the flavor basis, as:

$$ i\frac{d}{dx}\begin{pmatrix}\nu_e\\ \nu_\mu\\ \nu_\tau\end{pmatrix} = \left[\, U \begin{pmatrix} 0 & & \\ & \Delta_{21} & \\ & & \Delta_{31} \end{pmatrix} U^\dagger + \frac{A}{2}\begin{pmatrix} 1+\tilde\varepsilon_{ee} & \varepsilon_{e\mu} & \varepsilon_{e\tau} \\ \varepsilon_{e\mu}^* & \tilde\varepsilon_{\mu\mu} & \varepsilon_{\mu\tau} \\ \varepsilon_{e\tau}^* & \varepsilon_{\mu\tau}^* & 0 \end{pmatrix} \right] \begin{pmatrix}\nu_e\\ \nu_\mu\\ \nu_\tau\end{pmatrix}, \qquad (2.2) $$

where $\Delta_{ij} = \Delta m^2_{ij}/2E$, U is the lepton flavor mixing matrix, $A \equiv 2\sqrt{2}\, G_F n_e$ and $\varepsilon_{\alpha\beta} \equiv (1/n_e)\sum_{f,P} n_f\, \varepsilon^{fP}_{\alpha\beta}$, with $n_f$ the f-type fermion number density. Since oscillations are only sensitive to differences between the diagonal entries of the Hamiltonian, the diagonal parameters are defined as $\tilde\varepsilon_{ee} \equiv \varepsilon_{ee} - \varepsilon_{\tau\tau}$ and $\tilde\varepsilon_{\mu\mu} \equiv \varepsilon_{\mu\mu} - \varepsilon_{\tau\tau}$. The diagonal entries of the modified matter potential in Eq. 2.2 are real parameters, while the off-diagonal parameters are in general complex.
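For concreteness, the evolution dictated by Eq. 2.2 can be solved numerically in constant matter by exponentiating the Hamiltonian. The sketch below is illustrative only (the mixing angles, splittings, and matter potential are round numbers chosen by us, not the simulation inputs used later in the paper):

```python
import numpy as np
from scipy.linalg import expm

def pmns(th12, th13, th23, delta):
    """Standard (PDG) parametrization of the lepton mixing matrix U."""
    s12, c12 = np.sin(th12), np.cos(th12)
    s13, c13 = np.sin(th13), np.cos(th13)
    s23, c23 = np.sin(th23), np.cos(th23)
    e = np.exp(1j * delta)
    return np.array([
        [c12 * c13,                        s12 * c13,                        s13 / e],
        [-s12 * c23 - c12 * s23 * s13 * e, c12 * c23 - s12 * s23 * s13 * e,  s23 * c13],
        [s12 * s23 - c12 * c23 * s13 * e, -c12 * s23 - s12 * c23 * s13 * e,  c23 * c13],
    ])

def probability(alpha, beta, E, L_km, eps, antineutrino=False):
    """P(nu_alpha -> nu_beta) in constant matter with the NSI matrix `eps`.

    E in eV; the baseline is converted to natural units (1 km ~ 5.068e9 eV^-1).
    Flavor indices: 0 = e, 1 = mu, 2 = tau. All inputs are illustrative.
    """
    dm21, dm31 = 7.5e-5, 2.5e-3                 # mass splittings [eV^2]
    U = pmns(0.585, 0.148, 0.785, -np.pi / 2)   # angles in radians (illustrative)
    V = 1.0e-13                                  # sqrt(2) G_F n_e ~ 1e-13 eV for rho ~ 2.8 g/cm^3
    h_vac = U @ np.diag([0.0, dm21 / (2 * E), dm31 / (2 * E)]) @ U.conj().T
    mat = np.diag([1.0, 0.0, 0.0]) + eps         # standard CC term plus NSI, as in Eq. 2.2
    if antineutrino:
        h = h_vac.conj() - V * mat.conj()        # U -> U*, V -> -V, eps -> eps*
    else:
        h = h_vac + V * mat
    amp = expm(-1j * h * (L_km * 5.068e9))       # evolution operator over the baseline
    return abs(amp[beta, alpha]) ** 2

# example: nu_mu -> nu_e at a DUNE-like baseline with a nonzero eps_tau-e
eps = np.zeros((3, 3), dtype=complex)
eps[2, 0] = 0.2 * np.exp(1j * 0.5)               # eps_tau-e = |eps| e^{i phi} (illustrative)
eps[0, 2] = np.conj(eps[2, 0])
p_app = probability(1, 0, 2.5e9, 1300.0, eps)
```

Since the Hamiltonian is Hermitian, the evolution operator is unitary and the probabilities over final flavors sum to one, which provides a quick sanity check of any such implementation.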
Due to the requirement of SM gauge invariance, in principle any operator responsible for neutrino NSI would be generated together with analogous operators involving charged leptons [2,[42][43][44]. Thus, the tight experimental constraints on charged-lepton flavor-violating processes would automatically apply to the operators generating NSI, rendering their effects unobservable at neutrino experiments. However, there are ways in which the charged-lepton constraints can be avoided, e.g., if the NSI are generated through operators involving the Higgs, or from interactions with a new light gauge boson; see, e.g., Refs. [2,42,43,45]. At this point, however, model dependence comes into play. In the present work, we explore how much the current bounds can be improved by a direct measurement at neutrino oscillation experiments, without necessarily assuming the viability of a particular model which can lead to large observable effects.
Direct constraints on NSI can be derived either from scattering processes [43,[48][49][50] or from neutrino oscillation data [51][52][53][54]. Currently, the strongest bounds on NSI in propagation come from the global fit to neutrino oscillation data in Ref. [54]. At the 90% CL, most constraints on the effective ε parameters are around ∼ O(0.05 − 0.1). An exception is ε̃_ee, for which only an O(1) bound can be derived from current data.
An important conclusion derived from the global fits performed in Refs. [51][52][53][54] is the presence of strong degeneracies in the data. In the presence of NSI in propagation, global analyses of neutrino oscillation data are fully compatible with two solutions:

• The LMA solution: the standard Large Mixing Angle (LMA) solution, with mixing angles fully compatible with those obtained from a global fit to neutrino oscillation data in the absence of NSI. There is a slight preference for a non-zero value of ε̃_ee in the fit, which arises from the non-observation of the up-turn in the solar neutrino transition probability.
• The LMA-dark solution: this solution is obtained for ε̃_ee ∼ −3. In this case, all the oscillation parameters remain essentially unchanged, except for θ_12, which now lies in the higher octant [51]. It should be stressed that this solution is fully compatible with all current oscillation data, and there is no significant tension in the fit.
In this work, both solutions are taken to be equally viable, and both are considered when adding prior constraints on the NSI parameters to our simulations. As we will show later on, DUNE will be able to probe the LMA-dark solution at high confidence level. The impact of NSI on the oscillation probabilities has been studied extensively in the literature; perturbative expansions of the oscillation probabilities relevant to this work can be found, for instance, in Refs. [25,32,55]. The main impact of NSI on the probabilities can be summarized as follows:

• The major impact on the $\nu_\mu \to \nu_e$ and $\bar\nu_\mu \to \bar\nu_e$ oscillation probabilities is expected to come from the ε_µe and ε_τe parameters, as well as from ε̃_ee. The dependence on ε_µe and ε_τe appears at the same order in the perturbative expansion, and therefore non-trivial correlations are expected between them. The dependence on the CP-violating phases (δ, φ_µe and φ_τe) is also expected to be non-trivial.
• On the other hand, the disappearance channels $\nu_\mu \to \nu_\mu$ and $\bar\nu_\mu \to \bar\nu_\mu$ are mainly affected by the presence of ε̃_µµ and ε_τµ. The dependence of the oscillation probability on these parameters will be briefly discussed in Sec. 4.2.
Before closing this section, it should be mentioned that, in the event of sizable NSI effects in propagation, the currently measured values of the oscillation parameters may be affected. In our simulations, we leave the atmospheric parameters free within their current experimental priors, and all parameters (standard and non-standard) are fitted simultaneously. However, some comments are in order. Firstly, the value of θ_13 measured at the Daya Bay experiment is not expected to be significantly affected, due to the short baseline and low neutrino energies involved; it can thus be considered a precise input for the long-baseline analyses. A different situation may take place for the atmospheric mixing angle θ_23, though, since its determination comes mainly from atmospheric and long-baseline experiments, where NSI could be sizable. Nevertheless, in Refs. [53,54] it was found that the determination of the atmospheric parameters is not significantly affected by the addition of a generalized matter potential. Finally, long-baseline experiments are not very sensitive to the solar parameters, and in this case they have to rely on previous measurements. We consider the input values and 1σ priors from Ref. [54], where the allowed confidence regions were obtained under the assumption of a generalized matter potential with NSI effects.

Sampling of the parameter space
In our simulations, all relevant standard and non-standard parameters are marginalized over. This amounts to a total of fourteen parameters: six standard oscillation parameters (the three mixing angles, the CP-violating phase and the two mass splittings), five moduli of the non-standard parameters (ε̃_ee, ε̃_µµ, |ε_µe|, |ε_τe| and |ε_τµ|) and three non-standard CP-violating phases (φ_µe, φ_τe and φ_τµ). In order to sample all parameters efficiently, a Markov Chain Monte Carlo (MCMC) algorithm is used. The Monte Carlo Utility Based Experiment Simulator (MonteCUBES) C library [58] has been used to incorporate MCMC sampling into the General Long Baseline Experiment Simulator (GLoBES) [59,60]. For the implementation of the NSI probabilities in matter, we use the non-Standard Interaction Event Generator Engine (nSIEGE), distributed along with the MonteCUBES package.
Parameter estimation through MCMC methods is based on Bayesian inference. The aim is to determine the probability distribution of the model parameters Θ given some data set d, i.e., the posterior probability P(Θ | d):

$$ P(\Theta \,|\, d) = \frac{\mathcal{L}(d \,|\, \Theta)\, P(\Theta)}{P(d)}, $$

where $\mathcal{L}(d \,|\, \Theta)$ is the likelihood, i.e., the probability of observing the data set d given a certain set of values of the parameters Θ, and P(d) is the total probability of measuring the data set d, which can be regarded as a normalization constant. The prior P(Θ) is the probability that the parameters assume the values Θ regardless of the data d, that is, our prior knowledge of the parameters. For the standard parameters, the assumed priors are taken to be Gaussian and in agreement with the current experimental uncertainties (see Tab. 3 in App. A for details). For the NSI parameters, on the other hand, we have used the profiles shown for NSI with up quarks in Fig. 6 of Ref. [54], rescaled as ε_αβ ∼ 3 ε^u_αβ; see Ref. [54] for details. At least 50 MCMC chains have been used in all our simulations, and the number of distinct samples after combination always exceeds 10^6. The convergence of the whole sample improves as R → 1, with R being the ratio between the variance of the complete sample and the variance within each chain. We have checked that for most of the parameters the convergence is much better than R − 1 = 5 × 10^-3, and in all cases it is better than R − 1 = 10^-2. More technical details related to the sampling of the parameter space can be found in App. A.
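As an illustration of the sampling strategy described above, the following toy sketch runs several Metropolis chains on a one-parameter problem and computes the R ratio used as a convergence diagnostic. The "likelihood" and prior widths below are stand-ins invented for illustration; the real analysis uses the full GLoBES chi-square over fourteen parameters:

```python
import numpy as np

rng = np.random.default_rng(42)

def log_post(theta):
    # toy log-likelihood (data prefer theta = 1, width 0.3)
    # plus a Gaussian prior centered at 0 with unit width
    return -0.5 * (theta - 1.0) ** 2 / 0.3 ** 2 - 0.5 * theta ** 2

def run_chain(theta0, n_steps=20000, step=0.3):
    """Random-walk Metropolis chain started at theta0."""
    chain = np.empty(n_steps)
    theta, lp = theta0, log_post(theta0)
    for i in range(n_steps):
        prop = theta + step * rng.normal()
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:   # Metropolis acceptance rule
            theta, lp = prop, lp_prop
        chain[i] = theta
    return chain

# several chains from dispersed starting points, as in the text
chains = [run_chain(t0) for t0 in (-2.0, 0.0, 2.0, 4.0)]
burned = [c[5000:] for c in chains]               # discard burn-in

# R: variance of the pooled sample over the mean within-chain variance
pooled = np.concatenate(burned)
R = pooled.var() / np.mean([c.var() for c in burned])
```

When all chains have converged to the same posterior, the pooled variance matches the within-chain variance and R approaches 1, which is the criterion quoted in the text.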

Experimental setups
In this work we have considered several facilities among the current and future generations of neutrino oscillation experiments:

DUNE: We consider a 40 kton fiducial liquid argon detector placed at 1300 km from the source, on-axis with respect to the beam direction. The neutrino beam configuration considered in this work corresponds to the 80 GeV configuration from Ref. [62], with a beam power of 1.08 MW. The detector performance has been simulated following Ref. [62], with migration matrices for neutral-current backgrounds from Ref. [63]. Three years of running time are assumed in each of the neutrino and antineutrino modes. Systematic uncertainties of 2% and 5% are assumed for the signal and background rates, respectively.
NOvA: The NOvA experiment has a baseline of 810 km, and the detector is exposed to an off-axis (0.8°) neutrino beam produced from 120 GeV protons at Fermilab. The implementation of the NOvA experiment follows Refs. [22,64]. The fiducial mass of the detector is 14 kton, and 6.0 × 10^20 protons on target (PoT) per year are assumed. Again, a running time of 3 years in each of the neutrino and antineutrino modes is considered.
T2K+NOvA: In this case, the expected results for the T2K experiment after 30 × 10^20 PoT in neutrino mode are added to the NOvA results. The Super-Kamiokande detector is placed off-axis (2.5°) with respect to the beam direction at L = 295 km, and has a fiducial mass of 22.5 kton. The neutrino fluxes have been taken from Ref. [65]. The signal and background rejection efficiencies have been set to match the event rates and sensitivities from Ref. [21] for the same exposure, and rescaled to the larger statistics considered here. Given the much larger uncertainties in antineutrino mode, only neutrino data are considered for T2K.
T2HK: The T2HK experiment is a proposed upgrade of the T2K experiment, with a much larger detector (560 kton fiducial mass) located at the same off-axis angle and at the same distance as the T2K detector [23]. In this case, the signal and background rejection efficiencies have been taken as in Ref. [66]. The number of events as well as the physics performance is consistent with the values reported in Tables VIII and IX of Ref. [67]. These correspond to 3 (7) years of data taking in (anti)neutrino mode with a beam power of 750 kW. Systematic uncertainties of 5% and 10% are assumed for the signal and background rates, respectively.
For all the setups simulated in this work, systematic uncertainties are taken to be correlated among all contributions to the signal and background event rates, but uncorrelated between different oscillation channels. In principle, a more detailed systematics implementation could be performed, taking into account the possible impact of a near detector, correlations between systematics affecting different channels, etc. However, a careful implementation of systematic errors would add a large number of nuisance parameters to the problem, which would have to be marginalized over during the simulations. This would considerably complicate the problem, and is beyond the scope of the present work. For reference, the total expected event rates for the four experiments considered in this work are summarized in Tab. 1. The true values assumed for the oscillation parameters are in good agreement with the best-fit values from Ref. [56] (e.g., θ_12 = 33.5°, sin² 2θ_13 = 0.085).
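Correlated normalization systematics of the kind described above are typically implemented through nuisance parameters with pull terms in the chi-square. The sketch below is our own minimal illustration (the bin contents and error sizes are invented), not the actual GLoBES implementation:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def chi2_with_pull(n_obs, n_pred, sigma_norm):
    """Chi-square with one fully correlated normalization pull.

    The nuisance parameter `a` rescales the whole predicted spectrum and
    pays a penalty (a / sigma_norm)^2, mimicking a signal-normalization
    systematic correlated among all bins of one oscillation channel.
    Returns (minimized chi2, best-fit pull).
    """
    def chi2(a):
        scaled = (1.0 + a) * n_pred
        stat = np.sum((n_obs - scaled) ** 2 / np.maximum(scaled, 1e-9))
        return stat + (a / sigma_norm) ** 2
    res = minimize_scalar(chi2, bounds=(-0.5, 0.5), method="bounded")
    return res.fun, res.x

# toy spectrum: observed rates sit 2% above the prediction
pred = np.array([120.0, 340.0, 280.0, 150.0, 60.0])
obs = 1.02 * pred
chi2_min, a_best = chi2_with_pull(obs, pred, sigma_norm=0.02)
```

The minimization pulls `a` part of the way toward the 2% offset, balancing the statistical term against the penalty; with a per-channel pull like this, a normalization shift in one channel cannot absorb a shape distortion in another, which is the sense in which the channels are uncorrelated.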

Results
This section summarizes the results obtained for the expected sensitivities to NSI in propagation for the setups considered in this work. We first present the expected results for the DUNE experiment in more detail in Sec. 4.1; the degeneracies found among standard and non-standard parameters are discussed in Sec. 4.2; finally, a comparison with the expected results from T2K, NOvA and T2HK is performed in Sec. 4.3. Our results are presented in terms of credible intervals, or credible regions, which are obtained as follows. The total sample of points collected during the MCMC is projected onto a particular plane in parameter space. After projection, the regions containing a given percentage (68%, 90% and 95%, in this work) of the distinct samples are identified.
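The construction just described can be sketched for a one-dimensional projection: accumulate the highest-density histogram bins until the requested fraction of samples is enclosed. The Gaussian toy sample below stands in for the real MCMC output, and the bin count is an arbitrary choice:

```python
import numpy as np

rng = np.random.default_rng(1)
samples = rng.normal(loc=0.0, scale=1.0, size=200_000)  # stand-in for MCMC samples

def credible_interval(x, level=0.68, bins=400):
    """Highest-density credible interval from a 1-D sample projection."""
    counts, edges = np.histogram(x, bins=bins)
    order = np.argsort(counts)[::-1]          # highest-density bins first
    keep, total = [], 0
    for idx in order:
        keep.append(idx)
        total += counts[idx]
        if total >= level * len(x):           # stop once the level is enclosed
            break
    return edges[min(keep)], edges[max(keep) + 1]

lo68, hi68 = credible_interval(samples, 0.68)
```

For a unit Gaussian the 68% interval recovered this way is close to (−1, 1), which is a convenient cross-check of the machinery; two-dimensional credible regions are obtained the same way on a 2-D histogram.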

Expected sensitivities for the DUNE experiment
The DUNE sensitivities to the NSI parameters are summarized in Fig. 1. The figure shows one- and two-dimensional projections of the MCMC results onto several planes. The parameters used in the projections are indicated along the left and bottom edges of the collection of panels. In the one-dimensional distributions, the vertical band indicates the 68% credible interval, while the dashed line shows the value which maximizes the posterior probability. In the two-dimensional projections, the red, green and blue lines show the 68%, 90% and 95% credible regions. In our simulations, all standard and non-standard parameters are left free and marginalized over. Similar projections for the standard oscillation parameters can be found in App. A, see Fig. 7.

Several features can be observed in Fig. 1. Most notably, two important degeneracies appear in the sensitivities: the first affects the determination of ε̃_µµ, while the second is observed in the ε̃_ee − ε_τe plane. We will discuss these degeneracies in more detail in Sec. 4.2. A second important conclusion that can be drawn from Fig. 1 is that DUNE will already be able to explore the LMA-dark solution at more than 90% CL. This can be observed in the leftmost column of Fig. 1, where the range of values of ε̃_ee compatible with the LMA-dark solution is disfavored at more than 90%. We will return to this point in Sec. 4.2.

When considering operators which are not diagonal in flavor space, it is important to bear in mind that they may be accompanied by new sources of CP violation. The presence of such new phases may considerably affect the sensitivity to the moduli of the NSI parameters, due to destructive and constructive interference effects. For this reason, in Fig. 2 we show the two-dimensional projections of the expected credible regions, this time onto the |ε_αβ| − φ_αβ planes.
As can be seen from the figure, the effect is rather large for the three operators considered, and the bounds are modified by a factor of two to three in all cases. The dependence on the CP phases also differs depending on the parameter under study.
The dependence of the sensitivity on the CP phase is most notable for ε_τµ. In this case, the sensitivity for values of φ_τµ close to ±π/2 can be up to a factor of three worse than the sensitivity around CP-conserving values. While in the former case DUNE would not be able to improve over current constraints, in the latter case it would improve over them by a factor of two. The dependence on φ_τµ can be well understood from the leading-order expansion of the ν_µ disappearance probability [25,32,55]:

$$ P_{\mu\mu} \simeq P^{\rm std}_{\mu\mu} - |\varepsilon_{\tau\mu}| \cos\phi_{\tau\mu}\, \sin^3 2\theta_{23}\, \frac{A L}{2}\, \sin(\Delta_{31} L) + \mathcal{O}(\varepsilon^2), \qquad (4.1) $$

where $A \equiv 2\sqrt{2}\, G_F n_e$ stands for the standard matter potential, $\Delta_{ij} = \Delta m^2_{ij}/2E$, and $P^{\rm std}_{\mu\mu}$ is the oscillation probability in the absence of NSI. Additional terms, which depend on both the real and imaginary parts of ε_τµ, enter the probability at second order in the perturbative expansion, and provide some sensitivity in the regions with φ_τµ ∼ ±π/2. At second order, the probability P_µµ also depends on ε̃_µµ, as will be further discussed in Sec. 4.2.
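The strong dependence of the ν_µ disappearance probability on cos φ_τµ discussed above can be checked numerically in a simplified two-flavor (µ, τ) picture in constant matter. This is an illustrative sketch, not the paper's simulation: the energy, baseline, splitting and matter potential below are round numbers chosen by us:

```python
import numpy as np
from scipy.linalg import expm

E = 3.0e9                 # neutrino energy: 3 GeV, in eV (off the oscillation peak)
L = 1300.0 * 5.068e9      # DUNE-like baseline in natural units (1 km ~ 5.068e9 / eV)
dm31 = 2.5e-3             # atmospheric splitting [eV^2]
th23 = np.pi / 4          # maximal mixing
V = 1.0e-13               # ~ sqrt(2) G_F n_e in eV for rho ~ 2.8 g/cm^3

def p_mumu(eps_mod, phi):
    """nu_mu survival probability with a complex eps_tau-mu, two-flavor picture."""
    k = dm31 / (4 * E)
    h_vac = k * np.array([[-np.cos(2 * th23), np.sin(2 * th23)],
                          [ np.sin(2 * th23), np.cos(2 * th23)]])
    eps = eps_mod * np.exp(1j * phi)
    h_nsi = V * np.array([[0.0, np.conj(eps)], [eps, 0.0]])   # (mu, tau) off-diagonal
    amp = expm(-1j * (h_vac + h_nsi) * L)
    return abs(amp[0, 0]) ** 2

# shift of the survival probability induced by |eps_tau-mu| = 0.1
shift_cp_cons = abs(p_mumu(0.1, 0.0) - p_mumu(0.0, 0.0))        # phi = 0
shift_cp_viol = abs(p_mumu(0.1, np.pi / 2) - p_mumu(0.0, 0.0))  # phi = pi/2
```

For a CP-conserving phase the shift enters at first order in |ε_τµ|, while for φ_τµ = ±π/2 only the much smaller second-order terms survive, in line with the loss of sensitivity discussed in the text.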
The situation is somewhat more involved for ε_τe and ε_µe, due to their combined effect on the appearance oscillation probabilities; see, for instance, Ref. [55]. In the case of ε_µe, we find that DUNE will improve over current constraints regardless of the value of its associated CP phase: the sensitivity changes by a factor of two depending on the value of φ_µe, fluctuating between 0.05 and 0.1. The results for ε_τe also show a sizable dependence on the value of φ_τe. However, in this case the prior constraints play a very relevant role, as can be seen from the comparison between the dashed green and solid blue lines in the panel for ε_τe in Fig. 2. Whereas negative values of φ_τe are perfectly allowed in the fit before imposing prior constraints on the NSI parameters, this is no longer the case once the priors are imposed. This has important consequences for the analysis, and implies that DUNE will be sensitive to values of ε_τe down to 0.05 for φ_τe ∼ −π/2. The reason is the following. As shown in Fig. 1, DUNE is insensitive to large values of ε̃_ee and |ε_τe| as long as they lie along the two lines identified in the projected allowed regions in the ε̃_ee − ε_τe plane. For negative values of ε̃_ee, the degeneracy condition can only be satisfied for φ_τe ∼ −π/2, as we will discuss in more detail in Sec. 4.2. However, prior constraints on NSI rule out a large fraction of the parameter space for ε̃_ee ∈ (−2, 0). Therefore, once these are included in the fit, the degeneracy condition can no longer be satisfied, which translates into an increased sensitivity to ε_τe at DUNE, at the level of 0.05 for φ_τe ∼ −π/2.
Finally, it is important to keep in mind that the new CP-violating phases could have an impact on standard CP-violation searches; see, for instance, Ref. [32] for a study in the context of Neutrino Factories, or Ref. [69] for a pseudo-analytical study at DUNE. This will be further discussed in Sec. 4.2.

Degeneracies
When studying the sensitivity of DUNE to NSI, we have identified two important degeneracies involving both standard and non-standard parameters. The first one has been previously reported in the literature (see, e.g., Refs. [32,55,70,71]), and takes place between the parameters ε̃_µµ and δθ_23 ≡ θ_23 − π/4. This degeneracy can be understood analytically at the level of the oscillation probabilities. As already mentioned in Sec. 2, the sensitivity to ε̃_µµ comes from the ν_µ and antineutrino disappearance channels. A perturbative expansion of the $\nu_\mu \to \nu_\mu$ oscillation probability in δθ_23, ε_τµ and ε̃_µµ gives [25,32,55]:

$$ P_{\mu\mu} \simeq P^{\rm std}_{\mu\mu} - |\varepsilon_{\tau\mu}| \cos\phi_{\tau\mu}\, \frac{A L}{2}\, \sin(\Delta_{31} L) + \left( 2\,\delta\theta_{23} + \frac{A\, \tilde\varepsilon_{\mu\mu}}{2\,\Delta_{31}} \right)^{2} \sin^{2}\!\left(\frac{\Delta_{31} L}{2}\right), \qquad (4.2) $$

where A stands for the standard matter potential, $\Delta_{ij} = \Delta m^2_{ij}/2E$, and $P^{\rm std}_{\mu\mu}$ is the oscillation probability in the absence of NSI. Note the different combination of oscillatory phases in the two terms of Eq. 4.2. The second term should in principle be subleading with respect to the first, since it depends quadratically on a combination of δθ_23 (∼ 0.05, in our case) and ε, as opposed to the first term, which is linear. However, for energies matching the oscillation peak, the first term is strongly suppressed by the oscillatory phase.

Figure 3: Left: results from a fit in the θ_23 − δ plane to simulated DUNE data alone, and in combination with T2HK data. Three cases are shown for DUNE: the standard case where no NSI are allowed in the fit, a case where marginalization is performed over NSI parameters within previous constraints, and a case where no previous constraints are assumed on the NSI during the fit. The combination with T2HK data is only shown for the case where prior NSI constraints are imposed in the fit. Right: same results, projected onto the θ_23 − ε̃_µµ plane. The dot indicates the true input values considered.
Due to the simultaneous dependence of P_µµ on δθ_23 and ε̃_µµ, a degeneracy appears in this plane. In fact, while in the standard scenario the DUNE experiment is able to successfully resolve the octant of θ_23 (see Fig. 8 in App. A), when NSI are marginalized over in the fit this is no longer the case, and the fake solution in the higher octant reappears. This is explicitly shown in Fig. 3. The left panel shows the results projected onto the θ_23 − δ plane for three different scenarios: when no NSI are considered in the analysis (solid yellow), when NSI are marginalized over within current priors (dashed green), and when NSI are marginalized over with no priors on the NSI parameters (dotted blue). As can be seen from the figure, the higher-octant solution is not allowed by the data when NSI are not included in the fit, but reappears if they are marginalized over (see also Figs. 7 and 8 in App. A). The reason is a strong degeneracy between ε̃_µµ and θ_23, explicitly shown in the right panel. In the case where no prior uncertainties are assumed for the NSI parameters (dotted blue line), two additional solutions appear around θ_23 = 45°. However, these take place for values of ε̃_µµ in tension with current constraints, and are therefore partially removed when the prior on ε̃_µµ is imposed (dashed green lines). Finally, we find that when T2HK is added to the DUNE data the degeneracy is almost completely resolved, as shown by the dot-dashed gray contours.
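The interplay between θ_23 and the diagonal NSI parameter can be demonstrated numerically in a simplified two-flavor (µ, τ) picture: at a fixed energy, moving θ_23 away from its true value can be fully compensated by a suitable diagonal NSI term. The constants below are round numbers chosen by us, not the paper's simulation inputs; over a full spectrum the cancellation is only approximate, which is why the degeneracy can eventually be broken:

```python
import numpy as np
from scipy.linalg import expm
from scipy.optimize import brentq

E = 3.0e9                 # 3 GeV, in eV (illustrative)
L = 1300.0 * 5.068e9      # DUNE-like baseline in natural units
dm31 = 2.5e-3             # eV^2
V = 1.0e-13               # ~ sqrt(2) G_F n_e in eV

def p_mumu(th23, eps_mumu):
    """Two-flavor nu_mu survival with a diagonal (mu,mu) NSI entry."""
    k = dm31 / (4 * E)
    h = k * np.array([[-np.cos(2 * th23), np.sin(2 * th23)],
                      [ np.sin(2 * th23), np.cos(2 * th23)]])
    h = h + V * np.array([[eps_mumu, 0.0], [0.0, 0.0]])
    amp = expm(-1j * h * L)
    return abs(amp[0, 0]) ** 2

# "data": non-maximal truth with no NSI
p_true = p_mumu(np.pi / 4 + 0.05, 0.0)

# fit with maximal mixing: find the NSI value that reproduces the same P_mumu
eps_comp = brentq(lambda e: p_mumu(np.pi / 4, e) - p_true, 0.0, 2.0)
```

The root finder returns a sizable compensating NSI value, i.e., a wrong θ_23 hypothesis paired with a nonzero diagonal NSI term reproduces the disappearance probability exactly at this energy; only the energy dependence of the two terms (cf. the different oscillatory phases discussed above) distinguishes the two hypotheses.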
The second degeneracy found in this study takes place between the CP-violating phase δ and the NSI parameters ε̃_ee and ε_τe (including its CP phase). In this case, due to the large values of ε̃_ee involved, perturbation theory cannot be used to understand the interplay between the parameters. The degeneracy is explicitly shown in Fig. 4, for DUNE and for DUNE+T2HK, in the planes ε̃_ee − |ε_τe| (left panel) and ε̃_ee − φ_τe (right panel). As can be seen from this figure, there is a non-trivial dependence on the CP-violating phase φ_τe, which is responsible for this degeneracy: while for small values of ε̃_ee all values of φ_τe are equally probable, as the value of ε̃_ee increases only certain values of φ_τe remain possible (namely, negative phases for ε̃_ee < 0, and only positive phases for ε̃_ee > 0). This also illustrates why in Fig. 2 the sensitivity to ε_τe improves so dramatically in the region where φ_τe < 0. When T2HK is added to the DUNE data, the degeneracy is partially resolved, although not completely, as can be seen from the solid contours in Fig. 4.

The fact that this degeneracy depends on the value of φ_τe suggests that it might have a relevant impact on CP-violation searches. This is shown explicitly in Fig. 5, where the $\nu_\mu \to \nu_e$ and $\nu_\mu \to \nu_\mu$ oscillation probabilities at L = 1300 km are shown as a function of the neutrino energy, for three different cases. The solid blue lines show the probabilities in the standard case, with true values of the oscillation parameters matching the best-fit values from Ref. [56] and δ = −90°. The dashed red line, on the other hand, shows the probabilities for ε̃_ee = −2, ε_τe = 0.45, φ_τe = −130° and δ = −150°, where the rest of the NSI parameters are set to zero and the standard parameters are unchanged with respect to the standard scenario. Finally, the dotted green line shows the probabilities for ε̃_ee = 1, ε_τe = 0.25, φ_τe = 100° and δ = −90°.
The three probabilities are identical, as can be seen from the figure, which could eventually lead to a misinterpretation of the data and a wrong determination of the value of δ. To the best of our knowledge, this degeneracy has not been studied previously in the literature⁶. A detailed study would be needed to address its impact on CP-violation searches at DUNE; this remains beyond the scope of this work and is left for future studies.
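Degenerate parameter sets of this kind can be scanned for with a short numerical sketch. The three-flavor evolution below uses our own round-number inputs (mixing angles, splittings, and a constant matter potential), so the resulting values should only be taken as an illustration of how such comparisons are evaluated, not as a reproduction of the exact curves of Fig. 5:

```python
import numpy as np
from scipy.linalg import expm

def pmns(th12, th13, th23, delta):
    """Standard (PDG) parametrization of the lepton mixing matrix."""
    s12, c12 = np.sin(th12), np.cos(th12)
    s13, c13 = np.sin(th13), np.cos(th13)
    s23, c23 = np.sin(th23), np.cos(th23)
    e = np.exp(1j * delta)
    return np.array([
        [c12 * c13,                        s12 * c13,                        s13 / e],
        [-s12 * c23 - c12 * s23 * s13 * e, c12 * c23 - s12 * s23 * s13 * e,  s23 * c13],
        [s12 * s23 - c12 * c23 * s13 * e, -c12 * s23 - s12 * c23 * s13 * e,  c23 * c13],
    ])

def p_mue(E, delta, eps_ee, eps_taue_mod, phi_taue):
    """P(nu_mu -> nu_e) at a 1300 km baseline with diagonal and tau-e NSI."""
    dm21, dm31 = 7.5e-5, 2.5e-3               # eV^2 (illustrative)
    U = pmns(0.585, 0.148, 0.785, delta)
    V = 1.0e-13                                # ~ sqrt(2) G_F n_e in eV
    eps = np.zeros((3, 3), dtype=complex)
    eps[0, 0] = eps_ee
    eps[2, 0] = eps_taue_mod * np.exp(1j * phi_taue)
    eps[0, 2] = np.conj(eps[2, 0])
    h = U @ np.diag([0.0, dm21 / (2 * E), dm31 / (2 * E)]) @ U.conj().T
    h = h + V * (np.diag([1.0, 0.0, 0.0]) + eps)
    amp = expm(-1j * h * (1300.0 * 5.068e9))
    return abs(amp[0, 1]) ** 2

# the three cases quoted in the text, evaluated at a single energy
E = 2.5e9
p_standard = p_mue(E, np.deg2rad(-90.0), 0.0, 0.0, 0.0)
p_degen_1 = p_mue(E, np.deg2rad(-150.0), -2.0, 0.45, np.deg2rad(-130.0))
p_degen_2 = p_mue(E, np.deg2rad(-90.0), 1.0, 0.25, np.deg2rad(100.0))
```

Scanning such probabilities over the full energy range, and for both neutrinos and antineutrinos, is how one would map out the degenerate directions quantitatively; with the paper's full inputs the three curves of Fig. 5 coincide.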

Comparison to other facilities and to prior experimental constraints
It is interesting to compare the DUNE sensitivities to current constraints, as well as to other oscillation experiments currently in operation (such as T2K and NOvA) or planned for the future (such as T2HK). The results of this comparison are presented in Fig. 6, where the colored bands indicate the 90% credible intervals found for each of the NSI parameters, either for the experiments alone or in combination with one another. Results are presented for the moduli of the different NSI parameters, after marginalization over the remaining oscillation parameters and the CP phases. The results are compared to the constraints from previous experiments (see Tab. 1 or Fig. 6 in Ref. [54]), indicated by the dashed vertical lines. We have found that the combination of T2K and NOvA is not sensitive to NSI below the current constraints derived in Ref. [54], due to the presence of strong degeneracies among different oscillation parameters; therefore, their results are not shown in this figure.

6 The degeneracy in the ε ee − ε τ e plane shows similar features to the degeneracy studied in Refs. [70-72]. Both degeneracies might be related, but there are important differences. While the degeneracy studied in Refs. [70-72] appeared in the disappearance probabilities, ours takes place in the appearance channels instead and involves the new CP phases. Furthermore, the relation between ε τ e and ε ee is also different: in our case the degeneracy imposes a linear relation between the two parameters, whereas in Refs. [70-72] it took place along a parabola. This indicates that a possible way to break this degeneracy could be through combination with atmospheric neutrino data.
The most important feature in Fig. 6 can be seen in the uppermost panel, where the sensitivity to ε ee is shown and compared to the regions currently allowed by global fits to neutrino oscillation data. As can be seen from this panel, under the assumption of no relevant NSI effects in the oscillation probability, both DUNE and T2HK will be able to probe the LMA-dark solution. The possibility of ruling out the LMA-dark solution with long-baseline experiments was already pointed out in the literature. For instance, in Ref. [45] it was found that NOvA could rule out this solution at approximately 85% CL. We find, however, that the NOvA experiment on its own (or in combination with T2K) will not be able to rule out the LMA-dark solution. Due to the strong degeneracy between ε ee and ε τ e (see Sec. 4.2), it is always possible to reconcile the fit and the simulated data by simultaneously assuming large values for ε ee and ε τ e . This degeneracy is partially resolved when prior constraints are imposed on ε τ e ; however, we find that a small region of the parameter space around ε ee ∼ −3 and |ε τ e | ∼ 0.45 still provides a good fit to the data. Conversely, DUNE and/or T2HK will be sensitive enough to the presence of NSI to rule out the LMA-dark solution on their own. The rejection power is further increased if prior constraints on NSI parameters are included, as expected (dark bands in Fig. 6).
According to our results, the DUNE experiment will also be able to improve current constraints on ε τ e and ε µe by a factor between 2 and 5, and by at least a factor of two with respect to the results expected at T2HK alone, as can be seen from the comparison of the light colored bands. In the case of ε τ µ , the sensitivity when no prior is imposed lies above the current experimental constraint, indicating that the sensitivity to this parameter is somewhat limited. However, as shown in Fig. 2, the sensitivity to this parameter depends strongly on the value of its CP-violating phase, and DUNE is expected to improve over the current limit as long as φ τ µ ≠ ±π/2.
Finally, it is worth pointing out that, on its own, DUNE will not be able to improve over current constraints for ε µµ , for the set of true oscillation parameters assumed in this work. In this case, the combination with T2HK would be essential. As can be seen from the second panel in Fig. 6, before combination neither of the two experiments is able to improve over current experimental constraints, although they favor different regions of the parameter space. Thus, after combination, the sensitivity to ε µµ is notably improved, yielding slightly better results than current limits.

Figure 6. Comparison of the expected sensitivities to NSI parameters at DUNE and T2HK, before and after combining their respective data sets. Darker (lighter) bands show the results when prior constraints on NSI parameters are (are not) included in the fit. The vertical gray areas bounded by the dashed lines indicate the allowed regions at 90% CL (taken from the SNO-DATA lines for f = u in Ref. [54]).

Conclusions
Neutrino physics is entering the precision era. After the discovery of the third mixing angle in the leptonic mixing matrix, and in view of the precision measurements performed by reactor experiments (most notably, Daya Bay) and long-baseline experiments (MINOS, T2K and, in the near future, NOνA), it appears timely to reevaluate the sensitivity of current and future oscillation experiments to possible Non-Standard neutrino Interactions (NSI). We have focused on the impact of NSI on neutrino propagation through matter, a probe for which the planned Deep Underground Neutrino Experiment (DUNE) is very well suited, due to its relatively high energies and very long baseline. Given the current experimental and theoretical effort to keep systematic uncertainties below the 2%-5% level, DUNE offers a very well-suited environment to conduct New Physics searches.
In this work, a Markov Chain Monte Carlo (MCMC) has been used to explore the multi-dimensional parameter space surrounding the global minimum of the χ². The total number of parameters allowed to vary in the fit is fourteen: six standard oscillation parameters, five moduli of the non-standard parameters, and three new CP-violating phases. Prior experimental constraints, completely model-independent, have been implemented in our simulations; see Sec. 3.1 and App. A for details. By including all (standard and non-standard) parameters at once in the simulation, we derive conservative and completely model-independent limits on each of the coefficients accompanying the new operators entering the effective operator expansion. At the same time, we fully take into account possible degeneracies among the different parameters entering the oscillation probabilities.
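Schematically, the fit setup described above can be sketched as follows (an illustrative sketch, not the authors' code; the parameter names and the `chi2_experiment` stand-in are assumptions for illustration):

```python
# Illustrative sketch of the fit described in the text: six standard
# oscillation parameters, five NSI moduli and three NSI CP phases are
# varied at once, and Gaussian prior penalties are added to the
# experimental chi^2.

STANDARD = ["theta12", "theta13", "theta23", "delta", "dm21", "dm31"]
NSI_MODULI = ["eps_ee", "eps_mumu", "eps_mue", "eps_taue", "eps_taumu"]
NSI_PHASES = ["phi_mue", "phi_taue", "phi_taumu"]

def chi2_total(params, chi2_experiment, priors):
    """Total chi^2: experimental part plus Gaussian prior penalties.

    `priors` maps a parameter name to (central, sigma); parameters
    without a prior (e.g. delta) are simply left out of the map.
    """
    chi2 = chi2_experiment(params)
    for name, (central, sigma) in priors.items():
        chi2 += ((params[name] - central) / sigma) ** 2
    return chi2
```

Marginalizing over all fourteen directions at once, rather than fixing some of them, is what makes the resulting limits conservative.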
We have identified two potentially important degeneracies among standard and non-standard parameters. The first one takes place in the disappearance channels, between θ 23 and ε µµ , and could be potentially harmful for the octant sensitivity of the DUNE experiment. While in the standard case we find that the DUNE experiment is able to reject the higher octant solution, this is no longer the case if the ε µµ parameter is marginalized over during the fit. The second degeneracy takes place between ε ee , ε τ e , φ τ e and δ in the appearance channels. The interplay between the different parameters in this case is non-trivial and involves one of the non-standard CP-violating phases, φ τ e . This degeneracy could potentially pose a challenge for standard CP-violation searches; a more careful study is left for future work.
One of the most relevant results of the present study is that the DUNE experiment will be able to probe the so-called LMA-dark solution. The LMA-dark solution, which is fully compatible with current oscillation data [54], favors a large non-standard matter potential driven by ε ee ∼ −3 and a solar mixing angle in the second octant, θ 12 > π/4. We find that, for the true oscillation parameters assumed in this work, the 90% credible regions do not include the LMA-dark region, see Figs. 4 and 6.
We find that DUNE will be able to improve over current constraints on ε µe by at least a factor of five, and on ε τ e by at least 20%. The sensitivity to ε τ e shows a significant (and non-trivial) dependence on the value of its associated CP phase and, in particular, is significantly affected by the current prior on ε ee (see Figs. 2 and 4). Regarding ε τ µ , DUNE will be able to improve over current constraints as long as φ τ µ ≠ ±π/2, see Fig. 2. Finally, we find that DUNE will not be able to improve over current constraints on ε µµ , for the set of true oscillation parameters assumed in this work. For convenience, the expected sensitivity of DUNE to NSI parameters is summarized in Tab. 2.

Finally, we have also compared the expected reach of the DUNE experiment to that of the current generation of long-baseline experiments and to the future T2HK proposal. We found that the combination of T2K and NOvA will not be sensitive enough to the presence of NSI to improve over current constraints from oscillation data. The T2HK experiment on its own will not be able to improve over current constraints either for most parameters, with the exception of ε µe . Interestingly enough, we find that the combination of T2HK and DUNE is able to partially resolve the degeneracies discussed in Sec. 4.2. In particular, the combination of DUNE and T2HK would yield a strong improvement in the determination of ε µµ and would almost completely resolve the degeneracy between ε µµ and θ 23 , see Fig. 3.
Note added: The preprint version of Ref. [73] was made available online two days before the present manuscript. In Ref. [73], a very similar analysis was performed for non-standard interactions in propagation at DUNE. The main differences between the two analyses concern the prior constraints on NSI coming from current oscillation data.

Prior (at 68%): 0.02, 0.005, 0.013, Free, 3%, 3%

Table 3. Gaussian priors implemented for the standard oscillation parameters. All priors are in agreement with the current uncertainties from Ref. [56], except for sin²2θ 23, for which the prior has been relaxed by a factor of two.

A Implementation of prior constraints
In order to restrict the region sampled by the MCMC to the physical region of interest, priors have been implemented for all parameters (standard and non-standard) in our simulations, with the only exception of the standard CP-violating phase δ, since current hints only have a limited statistical significance, at the 1−2σ CL (see, however, Refs. [56,74] for recent discussions on this topic). Since the constraints on θ 13 and θ 23 do not come from a direct measurement of the angles themselves, these priors have been implemented on the quantities that are directly measured at oscillation experiments. For θ 13 this amounts to imposing a Gaussian prior on sin²2θ 13. In the case of θ 23, however, the situation is a bit more complicated. The most precise determination of θ 23 comes from the observation of ν µ disappearance at long-baseline experiments, which measure an "effective" mixing angle sin²2θ µµ, see e.g. Refs. [75,76]. Given the large value of θ 13, the correspondence θ µµ ↔ θ 23 no longer holds. Instead, the following relation applies:

sin θ µµ = sin θ 23 cos θ 13 . (A.1)

Therefore, a Gaussian prior affecting θ 23 has been implemented through this effective angle instead, since this is the quantity which is actually constrained by long-baseline experiments.
The DUNE experiment itself will provide the most precise determination of this parameter, though. Therefore, in this case only a mild prior has been imposed, relaxing the current constraints by a factor of two, in order to ease convergence of the simulations. Finally, for the solar mixing angle we have implemented a Gaussian prior on sin²2θ 12 since, in practice, this is the only quantity that can be determined from current oscillation data. Table 3 summarizes the priors implemented for the standard oscillation parameters, all of which are assumed to be Gaussian. For the NSI parameters, we have implemented non-Gaussian priors, extracted from the results for the SNO-DATA lines from Fig. 6 in Ref. [54], for f = u. These have been rescaled according to the relation ε αβ = 3.051 ε u αβ. We have considered that both the LMA and LMA-dark solutions are equally allowed by the data.
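The ingredients above can be sketched as follows (a minimal illustration, not the authors' code; the function names are assumptions, and only the rescaling factor and Eq. (A.1) are taken from the text):

```python
import math

# Minimal sketch of the prior implementation described in App. A.
# Priors are imposed on the directly measured quantities: sin^2(2*theta13),
# sin^2(2*theta12) and the effective angle of Eq. (A.1).

def sin_theta_mumu(theta23, theta13):
    # Effective nu_mu-disappearance angle, Eq. (A.1):
    # sin(theta_mumu) = sin(theta23) * cos(theta13)
    return math.sin(theta23) * math.cos(theta13)

def gaussian_prior(value, central, sigma):
    # chi^2 penalty from a Gaussian prior with width sigma (at 68% CL)
    return ((value - central) / sigma) ** 2

def rescale_nsi(eps_u):
    # Rescaling of the per-up-quark NSI bounds of Ref. [54] (f = u)
    # to the effective matter NSI: eps_ab = 3.051 * eps^u_ab
    return 3.051 * eps_u
```

For θ 13 = 0 the effective angle reduces to θ 23 itself, recovering the usual correspondence θ µµ ↔ θ 23; the large measured value of θ 13 is what makes the distinction relevant.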
Finally, a typical problem usually encountered when a multi-dimensional parameter space is explored with an MCMC has to do with the existence of multiple minima. If the χ² difference between different minima is large enough, the MCMC will generally tend to sample only one of them, leaving the rest unexplored. This is especially relevant in neutrino oscillations, where degeneracies are expected to arise between different parameters even in the absence of NSI [78-81]. This problem is dealt with in our simulations by the use of "degeneracy steps", chosen specifically to make sure that all possible degeneracies are explored by the MCMC. For example, since a non-maximal value of θ 23 has been considered in our simulations, an obvious choice is to add a larger step in the θ 23 direction, so as to guarantee that the octant degeneracy is appropriately sampled. Additional steps in the ε directions have also been set up in order to guarantee that all possible degenerate solutions are found in the simulations (for instance, in order to guarantee that the LMA-dark solution is appropriately sampled, we have added a step in the ε ee direction with ∆ε ee = 4). Figure 7 shows explicitly that the octant degeneracies are well sampled in our simulations. This figure shows the same type of one- and two-dimensional projections of the MCMC results as in Fig. 1, for the standard oscillation parameters 7 , assuming no priors on the NSI parameters. As can be clearly seen from this figure, the octant degeneracy in the θ 23 axis has been properly sampled by our MCMC, and three well-separated regions are obtained. For comparison, Fig. 8 shows the same projections when no NSI are allowed in the fit (i.e., only the standard parameters are varied). In this case, the octant degeneracies disappear, in agreement with the results in previous literature (see, e.g., Refs. [35,82]).
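A minimal sketch of such a proposal scheme (an illustration under assumed names and step sizes, not the authors' implementation): with some small probability, the chain proposes a large jump along a known degeneracy direction instead of the usual local step.

```python
import math
import random

# Sketch of the "degeneracy steps" idea: occasionally propose a large jump
# along a degeneracy direction (e.g. Delta eps_ee = 4, to reach the
# LMA-dark region), so that well-separated minima are all sampled.

def metropolis_step(chi2, state, small_steps, degeneracy_jumps, p_jump=0.1):
    proposal = dict(state)
    if random.random() < p_jump:
        # Large jump along one randomly chosen degeneracy direction
        param, delta = random.choice(degeneracy_jumps)
        proposal[param] += random.choice([-1.0, 1.0]) * delta
    else:
        # Ordinary local Gaussian proposal
        for param, step in small_steps.items():
            proposal[param] += random.gauss(0.0, step)
    # Metropolis acceptance for a likelihood proportional to exp(-chi2/2)
    d_chi2 = chi2(proposal) - chi2(state)
    if d_chi2 <= 0 or random.random() < math.exp(-d_chi2 / 2.0):
        return proposal
    return state
```

A chain built from such steps can hop between, e.g., minima near ε ee ≈ 0 and ε ee ≈ −4, which a purely local proposal would almost never connect.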
Finally, it should be mentioned that the T2HK experiment [23] is not sensitive to the neutrino mass ordering at high confidence level for all possible values of the CP-violating phase δ and all values of the atmospheric mixing angle. Therefore, degeneracies in the ∆m²31 direction are expected to take place, and should be explored as well. Nevertheless, the determination of the mass ordering might come instead from a combination of different facilities [83-89], from atmospheric data at HK [23], or from the combination of T2K+NOνA at some level, if the current hint for δ ∼ −π/2 persists in the future. Therefore, we adopt an optimistic approach in this paper and assume that the neutrino mass ordering is determined by the time these experiments finish taking data. Normal ordering has been assumed in all our simulations.

Figure 8. Same as Fig. 7 but under the assumption that there are no NSI effects on the oscillation probabilities.