Heavy neutrinos in displaced vertex searches at the LHC and HL-LHC

We study the sensitivity of displaced vertex searches for heavy neutrinos produced in W boson decays in the LHC detectors ATLAS, CMS and LHCb. We also propose a new search that uses the muon chambers to detect muons from heavy neutrino decays outside the tracker. The sensitivity estimates are based on benchmark models in which the heavy neutrinos mix exclusively with one of the three Standard Model generations. In the most sensitive mass regime the displaced vertex searches can improve existing constraints on the mixing with the first two SM generations by more than four orders of magnitude, and on the mixing with the third generation by three orders of magnitude.


Introduction
Heavy right handed neutrinos ν R appear in many extensions of the Standard Model (SM) of particle physics and provide an elegant explanation for the masses of the SM neutrinos via the type-I seesaw mechanism [1][2][3][4][5][6]. Depending on their masses, they can potentially also explain several other open puzzles in cosmology and particle physics, cf. e.g. [7] for an overview. For instance, they may explain the matter-antimatter asymmetry in the early universe that is believed to be the origin of baryonic matter in the present day universe via leptogenesis [9] (cf. e.g. [10] for a recent review), compose the Dark Matter (DM) [11] (cf. e.g. [12,13] for recent reviews) or explain anomalies in some neutrino oscillation experiments (cf. e.g. [14]).
In order to explain the light neutrino masses, the right handed neutrino flavours ν Ri must necessarily mix with the left handed SM neutrinos ν La , with a = e, µ, τ . Through this mixing they can interact with the weak gauge bosons. More precisely, the mass eigenstates N i after electroweak symmetry breaking couple to the weak interaction with amplitudes that are suppressed by small mixing angles θ ai in the Lagrangian (2.1). This interaction is unavoidable in the framework of the seesaw mechanism. In general the heavy neutrinos can, in addition to this, have new gauge interactions that may lead to an interesting phenomenology [15,16]. In the present work we take the conservative approach and assume that the N i can exclusively be produced and decay via their θ-suppressed weak interactions. Then the phenomenology of heavy neutrinos N i in collider experiments can be entirely characterised by their mass M i and the mixing angle θ ai that suppresses their weak interaction with SM generation a: if kinematically allowed, the N i appear in any process that involves ordinary neutrinos of flavour a, but with a coupling constant that is suppressed by θ ai and a phase space that is affected by their mass M i [17,18]. In particular, in the relativistic regime, their production cross section is simply given by U 2 ai σ νa , where σ νa is the ordinary neutrino production cross section per generation and we have defined U 2 ai = |θ ai | 2 . Their decay width parametrically scales as Γ N ∝ G F 2 U 2 M 5 [19][20][21][22], where the precise prefactor depends on the mass range; for the masses under consideration here it is approximately given in relation (2.3). For sufficiently small mixing angles, the heavy neutrinos are long lived particles that can travel macroscopic distances before they decay into SM particles, cf. figure 1, giving rise to displaced vertex signatures.
In the present work we explore the sensitivity of the LHC main detectors to heavy neutrinos with masses between the D meson and W boson masses in the type-I seesaw model. From a theoretical viewpoint this mass range is interesting because the heavy neutrinos could simultaneously explain the light neutrino masses via the seesaw mechanism and the matter-antimatter asymmetry of the universe via low scale leptogenesis [25,26], while avoiding the weak hierarchy problem [27] due to the smallness of their interactions. From an experimental viewpoint this mass range is particularly interesting because the heavy neutrinos can be produced in comparably large numbers at the Large Hadron Collider (LHC) in the decays of real gauge bosons, cf. figure 2. For larger masses the production cross section is much smaller because the exchanged W boson is virtual, and the heavy neutrinos always decay promptly, cf. e.g. reference [31] for a recent study. For smaller masses the heavy neutrinos tend to be too long lived to decay inside the main LHC detectors in sizeable numbers, though some improvement can be achieved by using parked data or data from heavy ion runs [32,33]. A better sensitivity can be reached with the recently approved FASER experiment [34] and other proposed dedicated detectors [35][36][37][38][39][40], cf. [36,38,41]. Fixed target experiments like NA62 [42,43], T2K [44], DUNE [45] and, in the future, SHiP [46][47][48] are generally more sensitive than the LHC in this low mass regime [49]. Compared to previous studies, the present work extends the state of the art in the following ways.
1. We compare the sensitivity of ATLAS, CMS and LHCb in a single study.
2. We estimate the sensitivity that can be achieved with the HL-LHC in all three experiments. Apart from the increased integrated luminosity of 3000 fb −1 for ATLAS and CMS and 380 fb −1 for LHCb, and the increased collision energy of 14 TeV, CMS will be able to detect particles with larger pseudorapidity |η| < 4 [65].
3. We study the sensitivity of each experiment for heavy neutrinos that mix with any of the SM generations, using benchmark scenarios in which the mixing is exclusively with one generation at a time. We consider both charged and neutral current interactions in the heavy neutrino decay. Sensitivity estimates for heavy neutrinos that mix with ν Lτ were previously made in [62,64], but were restricted to decays via neutral current interactions.
4. We propose a new search based on the idea of using the muon chambers for long lived particle searches [66,67], which was shown to be feasible in [68]. We should add that a similar proposal was made independently in [64], which appeared while we were in the final stage of our analysis.

Signatures
We study the LHC sensitivity to heavy neutrinos from displaced vertex signatures in a simple model with effectively only one heavy neutrino N with mass M and mixing angles θ a to the SM flavours, described by the Lagrangian
$$\mathcal{L} \supset -\frac{g}{\sqrt{2}}\,\theta_a^*\,\overline{N}\gamma^\mu P_L \ell_a W_\mu^+ \;-\; \frac{g}{2\cos\theta_W}\,\theta_a^*\,\overline{N}\gamma^\mu P_L \nu_{La} Z_\mu \;-\; \frac{M}{v}\,\theta_a\, h\, \overline{\nu_{La}} P_R N \;+\; \mathrm{h.c.} \quad (2.1)$$
Here h is the physical Higgs field and v = 174 GeV is its vacuum expectation value. We restrict ourselves to three benchmark scenarios, in each of which the heavy neutrinos exclusively mix with one SM generation. 5 For fixed U 2 we find that the sensitivity is best for U 2 = U 2 µ and worst for U 2 = U 2 τ . For realistic scenarios in which the heavy neutrinos mix with several SM generations one can expect that the number of events lies somewhere between these extremes. The dependence of the total number of events that can realistically be observed on the flavour mixing pattern is too complicated to obtain reliable estimates by a simple interpolation. A similar analysis in reference [43] suggests that the sensitivity is roughly given by the values that we find for the scenarios with U 2 = U 2 µ or U 2 = U 2 e unless the total mixing U 2 is strongly dominated by U 2 τ , i.e., as soon as U 2 µ /U 2 or U 2 e /U 2 considerably exceed ∼ 10 %.
We search for displaced vertex signatures from the N decay inside one of the LHC main detectors. We consider only the dominant production through the decay of real W bosons, in which the N are produced along with a neutrino or charged lepton ℓ a , cf. figure 3. We do not take the production via Z decays into account because this channel does not produce a prompt charged lepton from the interaction point that one can trigger on. The authors of reference [63] have included such processes, but we estimated that the much stronger cuts that are necessary to suppress backgrounds in this case make the gain in sensitivity marginal. For the same reason we neglect the production via Higgs decays. At the lower end of the mass spectrum under consideration here the N can also be produced in B hadron decays. We have estimated the reach of searches for displaced vertices from N that are produced in B decays for CMS in [32,33]; here we do not include this channel because it would require a refinement in the computation of the production cross section from B decays that goes beyond the scope of this work. Moreover, for ATLAS and CMS the vast majority of events would fail to pass the cut on the transversal momentum p T of the prompt lepton.

5 The simple model (2.1) with a single heavy neutrino is not realistic because the seesaw mechanism requires at least one flavour of right handed neutrinos to explain each non-zero light neutrino mass mi, i.e., the number n of their flavours νRi should be at least two (n ≥ 2) if the lightest SM neutrino is massless and at least three (n ≥ 3) if it is massive. Moreover, the heavy neutrinos must mix with all SM generations to explain the observed light neutrino mixing angles. Recent discussions of the constraints on the heavy neutrino flavour mixing pattern from light neutrino oscillation data can e.g. be found in [43,69,70]. A justification for our simple phenomenological model (2.1) is given in appendix A.
For the N decay we include processes mediated by both virtual W * and Z * bosons. For decays mediated by the charged current, a charged lepton of flavour a is necessarily produced along with the W * boson. The W * can then decay into leptons and neutrinos or into quarks that hadronise, so that the final states of the N decay can be leptonic or semileptonic. N decays mediated by the neutral current can also have purely hadronic detector-visible final states.
For a = e, µ the first lepton produced in W * mediated decays is detector stable and can be used to reconstruct the displaced vertex. For a = τ the τ -lepton itself decays within the detector, mostly into pions, leptons and neutrinos. It has been pointed out in reference [62] that the finite lifetime of the τ -lepton implies that the published ATLAS efficiencies for displaced vertices [71] cannot be applied, because they assume that all decay products appear promptly at the displaced vertex. To avoid this problem the authors only included N decays mediated by the neutral current. The same strategy was also adopted in reference [64]. This drastically reduces the sensitivity in the scenario with a = τ when a cut on the displaced vertex invariant mass is applied, because the unobservable ν τ that unavoidably appears in decays mediated by the Z * carries away part of the energy and momentum. In the present work we include N decays mediated by both neutral and charged currents for all three scenarios a = e, µ, τ . For a = τ this can be justified with two arguments. First, for τ -leptons with energies ∼ 10 GeV in the laboratory frame the opening angle between the decay products is so small that it is hardly noticeable that they do not promptly originate from the displaced vertex. Second, the displaced vertex reconstruction algorithms can be improved in the future and therefore do not pose a fundamental restriction. Such studies are e.g. already under way in the CMS collaboration.
It is instructive to illustrate the dependence of the expected number of events on the model parameters within a simplified spherical detector of radius l 1 . Under the assumptions summarised in section 2, the cross section of events in a scenario where the right handed neutrinos mix with lepton flavour a can be estimated as
$$\sigma(pp \to \ell_a N \to \ell_a + \text{displaced vertex}) \simeq U_a^2\, \sigma_{\nu_a} \left( e^{-l_0 \Gamma_N/\gamma_N} - e^{-l_1 \Gamma_N/\gamma_N} \right), \quad (2.2)$$
where γ N is the Lorentz factor and l 0 is the minimal displacement that is required by the trigger; for an illustrative sensitivity estimate based on this relation, see also figure 1. We recall that we only study N that are produced in W boson decays, i.e., σ νa is the production cross section of neutrinos ν La in W boson decays. We do not actually use the cross section (2.2) to obtain our results, but this estimate is nevertheless helpful to qualitatively understand their dependence on the model parameters.
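The simplified spherical-detector estimate can be sketched numerically. In the following minimal sketch all input numbers (cross section, luminosity, boost, proper decay length, detector radii) are illustrative placeholders, not values from our simulation; the function names are ours and hypothetical.

```python
import math

def p_decay_in_shell(ctau_m, gamma, l0=0.005, l1=1.1):
    """Probability that the N decays at a radial displacement between l0 and l1
    (in metres), for proper decay length c*tau and Lorentz factor gamma.
    This is the bracket in the simplified estimate (2.2), with beta ~ 1."""
    lam = gamma * ctau_m  # boosted decay length in metres
    return math.exp(-l0 / lam) - math.exp(-l1 / lam)

def n_events(U2, sigma_nu_fb, lumi_fb_inv, ctau_m, gamma, l0=0.005, l1=1.1):
    """Expected number of displaced decays: production suppressed by U^2,
    times the probability that the decay happens inside the fiducial shell."""
    return U2 * sigma_nu_fb * lumi_fb_inv * p_decay_in_shell(ctau_m, gamma, l0, l1)

# In the long-lived limit the decay probability reduces to ~ (l1 - l0) / (gamma * ctau).
p = p_decay_in_shell(ctau_m=100.0, gamma=10.0)
approx = (1.1 - 0.005) / (10.0 * 100.0)
```

This makes the parametric behaviour explicit: for very long lifetimes the rate is linear in Γ N, while for short lifetimes the l 0 cut exponentially suppresses the signal.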

Analysis
We calculate the Feynman rules of heavy neutrinos coupled to the SM with FeynRules 2.3 [73] using the implementation [74] based on the calculations in references [20,75]. Subsequently, we generate events with MadGraph5 aMC@NLO 2.6.4 [76] (cf. figure 3). We calculate the total decay width of the heavy neutrinos with MadWidth [77] and simulate their decays with MadSpin [78,79] (cf. figure 1). Finally, we hadronise and shower coloured particles with Pythia 8.2 [80]. We calculate the efficiencies of the three LHC main detectors with our own code, based on public information about the detector geometries.
• The ATLAS detector covers a pseudo-rapidity of |η| < 2.5. The tracking system extends to 1.1 and 3.4 m in the transversal and longitudinal direction, respectively, while the muon chamber covers 5-10 m and 7-21 m in the transversal and longitudinal direction, respectively.
• Up to Run 3 the CMS detector covers a pseudo-rapidity of |η| < 2.5; after the upgrade for the HL-LHC the pseudo-rapidity coverage will be extended to |η| < 4. The tracker extends to 1.1 and 2.8 m in the transversal and longitudinal direction, respectively. The muon chamber extends over 4-7 m and 7-11 m in the transversal and longitudinal direction, respectively.

JHEP02(2020)070
• The LHCb detector is optimized for measurements along the beam axis and has a pseudorapidity coverage of 2 < η < 5. It has three tracking systems, the vertex locator (VELO) spanning 50 and 40 cm in the transversal and longitudinal direction, and two Ring Imaging Cherenkov (RICH) detectors. The first RICH is located at a distance of 1 m from the interaction point and has a size of 60 and 100 cm in the transversal and longitudinal direction, respectively. The second RICH is located at 9 m and has a size of 4 and 3 m in the transversal and longitudinal direction. The muon chamber is located at 15 m and has a size of 5 m in both directions. This layered design allows the reconstruction of secondary vertices with very large displacement, as the inner tracker can provide a veto against track candidates that first appear in the downstream systems.
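The geometric acceptance test implied by the list above can be sketched as follows. The ATLAS-like numbers (|η| < 2.5, tracker radius 1.1 m, half-length 3.4 m) are taken from the list; the code itself is an illustrative sketch, not our analysis code.

```python
import math

def eta_from_theta(theta):
    """Pseudorapidity as a function of the polar angle theta."""
    return -math.log(math.tan(theta / 2.0))

def decays_in_tracker(flight_m, theta, r_max=1.1, z_max=3.4, eta_max=2.5):
    """True if a decay after path length flight_m (metres) at polar angle theta
    lies inside the cylindrical tracker volume and within the eta coverage.
    Default cylinder dimensions are the ATLAS-like values quoted above."""
    if abs(eta_from_theta(theta)) > eta_max:
        return False
    r = flight_m * math.sin(theta)   # transverse displacement
    z = flight_m * math.cos(theta)   # longitudinal displacement
    return r < r_max and abs(z) < z_max
```

The same cylinder test with different radii and η ranges reproduces the CMS case; the forward, layered LHCb geometry instead requires per-subdetector volumes.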
We propose to search in event samples which have been triggered by a single lepton or a pair of leptons [81,82]. The minimal lepton p T used for the pair triggers can be considerably softer than for the single lepton triggers, most notably when a τ -lepton is involved. For the single lepton trigger at CMS and LHCb we use p T (e, µ, τ ) = 30, 25, 140 GeV and p T (e, µ, τ ) = 10, 15, 50 GeV, respectively. As an example, we show the complete trigger values used for the ATLAS detector in table 1. Due to the lack of public lepton pair trigger thresholds for the CMS experiment we assume that the values published for the ATLAS experiment are a good estimate.

For the tracking and tagging efficiencies we use the values found in the DELPHES 3.4.1 [83] detector cards. We require all particles to have |p| > 5 GeV in order to allow them to escape the magnetic field, but do not apply further cuts on the particles and missing transverse energy. The reconstruction of displaced τ -leptons is not very well studied; therefore we use a simplified τ -tagger based on the decay products and apply the prompt τ reconstruction efficiencies. We assume that it is possible to search for secondary vertices stemming from decays of heavy neutrinos as long as two charged tracks with ∆R > 0.1 are detectable. As the secondary vertex reconstruction can be performed independently of the jet reconstruction, we refrain from performing jet reconstruction and do not reduce the number of tracks through vetos. In order to reduce the background stemming from long lived SM hadrons we require the secondary vertices to have a minimal displacement l 0 of 5 mm. During the reconstruction of the displaced vertices we require at least two tracks with ∆R > 0.1 and an invariant mass of at least 5 GeV in order to suppress further backgrounds, originating in particular from "nuclear interactions" with the detector material, which are hard to simulate [84,85].
In the case of purely muonic displaced vertices we do not apply this invariant mass cut because the backgrounds are much lower than in the case with hadrons in the final state. In this case it is favourable to carefully simulate the backgrounds instead of losing signal events due to a hard cut. Such a detailed simulation goes beyond the scope of this work, and our strategy is to provide an estimate of what sensitivity could in principle be achieved in this way. The displaced vertex reconstruction is well established if the produced particles traverse almost the entire tracker. If a particle traverses only a part of the tracking system the efficiency must drop off. In order to be able to compare different experiments we require that the remaining tracks traverse at least half of the tracker, and assume that the reconstruction efficiency drops off linearly below that. This functional form is consistent with the efficiency dependence published by ATLAS [71], and we assume that the behaviour is similar for CMS and LHCb. In order to calculate the remaining path of the particle we use ray tracing [86,87].
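The linear efficiency model described above can be written down explicitly. The functional form is our assumption (consistent with [71]), not a published parametrisation:

```python
def dv_reco_efficiency(remaining_path, tracker_length):
    """Relative displaced-vertex reconstruction efficiency as a function of the
    track path remaining inside the tracker after the vertex: full efficiency
    if at least half the tracker is traversed, dropping linearly to zero."""
    frac = remaining_path / tracker_length
    if frac >= 0.5:
        return 1.0
    return max(0.0, frac / 0.5)  # linear drop below half the tracker length
```

This factor multiplies the per-track efficiencies taken from the DELPHES cards; the remaining path itself is obtained by ray tracing through the detector geometry.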
These assumptions are slightly more optimistic than what is done in present searches. We e.g. use fewer tracks to reconstruct the displaced vertex than reference [71] (cf. also [61]), and we assume that the largest observable displacement can be improved by a factor of two compared to current ATLAS estimates [71]. Our assumption that the displaced vertex invariant mass cut can be removed in the purely muonic search is consistent with reference [88], where, however, the pairing of displaced vertices could be used to reduce the background. It can further be justified by arguing that interactions with the detector material are unlikely to create a purely muonic final state. The main problem is that the simulation of some backgrounds, including the "nuclear interactions" with the detector material, is very difficult and requires detailed knowledge of the detector, which goes beyond the scope of this work. Our current goal is to estimate what can be achieved if one is only limited by the properties of the detectors, e.g. by using improved algorithms in the future, under reasonable assumptions.
The muon chamber systems can record muons which are an order of magnitude further away from the primary interaction vertex than the inner tracking system. Therefore, it is compelling to search for long lived particles using also the muon chambers to identify displaced vertices, as we have proposed before in the context of displaced signatures in supersymmetric models [66,67]. In the meantime it has been demonstrated that such a search is feasible [68]. In order to be conservative we require the reconstructed particles to traverse the whole muon chamber.
For n observed events the significance is given by [89]
$$S(n|h) = \sqrt{-2 \ln\!\left[\frac{P(n|h)}{P(n|n)}\right]},$$
where P (n|h) is the Poisson probability to observe n events under the hypothesis h. To project to future searches we estimate the observed number of events n with the prediction for the alternative hypothesis. Hence, we use S(b|b+s) ≥ 2 and S(b+s|b) ≥ 5 for exclusion and discovery, respectively. In doing so we neglect the systematic uncertainties of the signal and the background estimation. The SM background can be efficiently suppressed with the cuts on the invariant mass and the displacement, cf. figure 5. It is not easy to quantify the remaining backgrounds without an extremely realistic simulation of the whole detector. In this analysis we ignore backgrounds originating from cosmic rays and beam-halo muons, based on the low rate in the LHC experimental caverns and the good capability of the experiments to recognize them as such [90]. Furthermore, we do not include pile up in our analysis. We also ignore the scattering of ordinary neutrinos that come from the collision point, based on the low cross section of charged-current interactions in the detector material. We assume that the number of background events is smaller than one and calculate the efficiencies based on one background event. In this case the non-observation of four expected events suffices for exclusion, and the observation of nine events for discovery.
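The Poisson likelihood-ratio significance can be sketched as follows; the exact convention of [89] may differ in detail, so this is an illustrative implementation under the assumption S(n|h) = sqrt(-2 ln[P(n|h)/P(n|n)]).

```python
import math

def significance(n, mu):
    """Poisson likelihood-ratio significance of observing n events when the
    hypothesis predicts mu. The n! terms cancel in the ratio, leaving
    ln[P(n|mu)/P(n|n)] = n*ln(mu/n) - mu + n (with the n=0 limit -mu)."""
    if n == 0:
        log_ratio = -mu
    else:
        log_ratio = n * math.log(mu / n) - mu + n
    return math.sqrt(-2.0 * log_ratio)

# Exclusion example with b = 1: observe only the background (n = 1) while the
# signal hypothesis predicts b + s = 5 events.
s_excl = significance(1, 5)  # ~2.19, above the 2-sigma exclusion threshold
```

With this convention, a background expectation of one event and four expected signal events yields an exclusion significance slightly above 2, in line with the event counts quoted above.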

Results
We present our results for each experiment (ATLAS, CMS and LHCb) and each benchmark scenario (exclusive mixing with e, µ or τ ) in figure 6. The three rows of the figure correspond to the three extreme cases of pure electron, muon and tau coupling, respectively. The blue (red) curves show the discovery (exclusion) potential, both of which are shown for searches using only the tracker and using the tracker together with the muon chamber. For the simulation of LHCb we require decays to happen before or within the first RICH (cf. also figure 8). The gray bands represent the exclusion bounds from DELPHI [52] and CMS [50].

Due to their similar geometry ATLAS and CMS have comparable sensitivities. The different η coverage in the HL-LHC runs does not have a strong impact on the sensitivity curves because of the strong dependence of the number of events on U 2 . The geometry of the muon chambers differs between the two detectors, but the effect of this on our results is small because we only require the heavy neutrino to decay before the muon chamber, so that the difference is practically captured by the η coverage. For pure electron or muon mixing they can exclude heavy neutrinos with couplings as small as ∼ 5 × 10 −8 and masses up to ∼ 20 GeV with an integrated luminosity of 300 fb −1 . With 3000 fb −1 , mixing angles of ∼ 5 × 10 −10 and masses up to ∼ 40 GeV become accessible. LHCb can exclude mixing angles down to 5 × 10 −7 and 5 × 10 −8 and masses up to 12 and 20 GeV for the LHC and HL-LHC, respectively. ATLAS and CMS can exclude heavy neutrinos with pure τ -couplings down to ∼ 10 −6 and ∼ 10 −8 with masses up to 15 and 25 GeV for integrated luminosities of 300 fb −1 and 3000 fb −1 , respectively. The lower sensitivity of LHCb is a result of both the lower integrated luminosity and the different η coverage, with the dominant effect coming from the luminosity. For the mixing with τ this is partly compensated by the lower p T thresholds in the triggers. For masses below the mass of optimal sensitivity the reach is limited by the fact that the N are too long lived and do not decay inside the detector, cf. figure 4. For the mixing with electrons the displaced vertex invariant mass cut is clearly visible in figure 6; it explains the drop in sensitivity around 5 GeV. The sensitivity for M < 5 GeV comes entirely from muons in the final state, for which we have removed the displaced vertex invariant mass cut. 9 In the scenarios with pure muon or tau mixing the sensitivity that can be achieved from muonic final states is good enough that the effect of the cut is barely visible in the plots.
For long lifetimes the reach can be extended by using the muon chamber. This approach is especially fruitful for masses as small as a few GeV because we assume that for purely muonic vertices no invariant mass cut is necessary in order to remove the hadronic background. For M < 5 GeV the production of heavy neutrinos from meson decays dominates over the gauge boson decays considered here; hence we can expect that our proposal to use the muon chambers yields a bigger improvement in searches for those. In the present analysis the improvement for ATLAS and CMS amounts only to a factor of order one in the scenario with pure muon mixing, and less than that in the other scenarios. This can be understood as a result of the scaling $\sigma(W \to \ell_a N \to \ell_a \ell_b f \bar f) \propto U_a^4 M^5$ in this regime, which can be obtained by expanding the exponential functions in relation (2.2) to linear order in Γ N l 0 and Γ N l 1 and using the estimate (2.3).
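The scaling argument above can be checked numerically: expanding the exponentials in (2.2) to linear order gives a rate proportional to Γ N (l 1 − l 0 ), and with Γ N ∼ U 2 M 5 the event rate scales as U 4 M 5 . In this sketch the width prefactor is set to 1 for illustration; all numbers are placeholders.

```python
import math

def rate(U2, M, l0=0.005, l1=1.1, gamma=10.0):
    """Parametric event rate from the spherical-detector estimate: production
    suppressed by U^2 times the probability to decay between l0 and l1,
    with the width taken as Gamma_N ~ U^2 * M^5 (prefactor 1, arbitrary units)."""
    gamma_N = U2 * M**5            # parametric decay width
    lam = gamma / gamma_N          # boosted decay length
    p = math.exp(-l0 / lam) - math.exp(-l1 / lam)
    return U2 * p

# In the long-lifetime regime, doubling U^2 multiplies the rate by ~2^2 = 4.
r1, r2 = rate(1e-9, 10.0), rate(2e-9, 10.0)
ratio = r2 / r1
```

The quartic dependence on the mixing is the reason why the gain from a larger decay volume is only a factor of order one once the mixing angle is scanned on a logarithmic axis.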
Additionally, we show the sensitivity reach of a purely leptonic search in the case of pure muon mixing in figure 7. The reach in U 2 µ is one order of magnitude weaker than the inclusive reach shown in figure 6. Finally, in figure 8 we show the reach of the LHCb detector under the assumption that the secondary vertex either lies within the VELO or is allowed to lie before or within one of the two RICHs.
In figure 9 we summarise our most optimistic estimates for the reach of each experiment in each of the benchmark scenarios. For this plot we have made the same assumptions as for the most optimistic scenario presented in figure 6, and additionally relaxed the lower bound on the displaced vertex invariant mass to 2 GeV. For LHCb we have in addition assumed that the RICH 2 can be used. This relaxation may be justified by a better understanding of the backgrounds.

9 The main reason why a displaced vertex invariant mass cut has been imposed in previous analyses is that there are backgrounds in this regime that were not fully quantified, including SM backgrounds from long lived resonances like J/ψ and interactions with the detector material. However, in the currently ongoing CMS search for heavy neutrinos, no displaced vertex invariant mass cut is imposed because the SM backgrounds can be measured directly from data down to very low dilepton masses [91].

Discussion and conclusions
We have compared the sensitivity of the LHC detectors ATLAS, CMS and LHCb to heavy neutrinos from W boson decays in displaced vertex searches in three benchmark models, in each of which the heavy neutrinos mix exclusively with one SM generation. Moreover, we propose a new search strategy that includes the muon chambers to detect tracks from the displaced vertex. We summarise our main results in figure 9. In figure 6 we present more detailed estimates for each of the scenarios we investigated when both leptonic and hadronic final states are included, while figure 7 shows the sensitivity for purely leptonic final states. We find that the sensitivity that can be reached at ATLAS or CMS with 300 fb −1 exceeds existing bounds by three orders of magnitude for mixing with the first and second generation and by one order of magnitude for mixing with the third generation. The reach of LHCb shown in figure 6 is more than an order of magnitude worse than that of ATLAS and CMS. The main reason for this is the lower integrated luminosity. However, figure 9 suggests that the LHCb sensitivity in the scenario with exclusive coupling to the third generation can be competitive with ATLAS and CMS if the RICH 2 is used. Our results in this regard are not conclusive, considering that we have made simplifying assumptions about the efficiencies in the RICH 2. Additionally, there are statistical uncertainties in this particular channel due to the comparably small number of total events. We propose to address this issue with a dedicated study.
It is noteworthy that the sensitivity in terms of the squared mixing angle U 2 a scales with the square root of the integrated luminosity for small values of U 2 a , which can be illustrated by expanding the simple estimate (2.2) in Γ N l 1 . The reason is that the LHC is practically used as an intensity frontier machine. This has the important implication that the HL-LHC can further improve the sensitivity considerably, as shown in figure 6. Our results qualitatively agree with other recent studies, but are more general in the sense that we compare, for the first time, all three detectors in all three scenarios for both the LHC and the HL-LHC.
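The square-root scaling can be made explicit in a few lines: for small U 2 the event count goes as N ∼ c U 4 L (production ∝ U 2 , decay probability ∝ U 2 ), so the smallest probeable U 2 for a fixed required count scales as 1/√L. The constant c below is an arbitrary placeholder.

```python
import math

def u2_reach(lumi_fb_inv, n_req=4.0, c=1e22):
    """Smallest U^2 giving n_req expected events at integrated luminosity
    lumi_fb_inv, solving n_req = c * (U^2)^2 * lumi for U^2.
    c is an illustrative placeholder lumping cross section and efficiencies."""
    return math.sqrt(n_req / (c * lumi_fb_inv))

# Increasing the luminosity tenfold improves the U^2 reach by sqrt(10) ~ 3.2.
improvement = u2_reach(300.0) / u2_reach(3000.0)
```

This is why the step from 300 fb −1 to 3000 fb −1 gains roughly half an order of magnitude in U 2 wherever the reach is statistics limited.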
The considerable improvement in sensitivity compared to the existing bounds is particularly interesting from a cosmological viewpoint because it implies that the LHC can probe a considerable part of the parameter region where low scale leptogenesis can generate the observed matter-antimatter asymmetry of the universe. In the minimal model with two heavy neutrinos (n = 2) and in the νMSM this parameter region is inaccessible to conventional LHC searches, cf. [64,[92][93][94] for updated parameter scans. In this model, independent measurements of all U 2 ai and the heavy neutrino mass spectrum at colliders would, together with a measurement of leptonic CP violation in neutrino oscillation experiments, in principle allow one to constrain all parameters in the Lagrangian (A.1), making this a fully testable model of neutrino masses and baryogenesis [69]. In practice it is unlikely that data from the LHC will be accurate enough to pin down all parameters, but measurements of the flavour mixing pattern at the LHC would nevertheless provide an important test of the hypothesis that heavy neutrinos are responsible for the light neutrino masses and baryogenesis and motivate further studies at future colliders [93]. If three heavy neutrinos contribute to leptogenesis and the seesaw mechanism (n = 3), then the viable parameter space for leptogenesis is much larger [95,96]; in the mass range considered here it entirely covers the experimentally allowed region in the mass-mixing plane (white area in our plots) [97]. In our simulation we find that the LHC could possibly observe tens of thousands of events for the largest experimentally allowed mixing angles.
Our results provide a strong motivation to improve the displaced vertex reconstruction efficiencies in LHC experiments. Moreover, a better understanding of the backgrounds would also be highly desirable. If one could, for instance, lower the cut on the displaced vertex mass, this would make it possible to search for heavy neutrinos with smaller masses. Those could be produced in considerable numbers in the decay of B hadrons, which are much more numerous in LHC collisions than the W bosons considered here. In this regime the sensitivity can further be improved by using the muon chambers to detect tracks from displaced vertices that lie outside the main tracker.

A Type I seesaw model

The type-I seesaw model is given by the most general renormalisable extension of the SM Lagrangian L SM that only contains n sterile neutrinos ν Ri as new fields,
$$\mathcal{L} = \mathcal{L}_{SM} + i\, \overline{\nu_{Ri}} \slashed{\partial}\, \nu_{Ri} - \left( F_{ai}\, \overline{\ell_a} \tilde{\phi}\, \nu_{Ri} + \frac{1}{2} \overline{\nu_{Ri}^c}\, (M_M)_{ij}\, \nu_{Rj} + \mathrm{c.c.} \right). \quad (A.1)$$
Here ℓ a are the SM lepton doublets, φ is the SM Higgs doublet with φ̃ = εφ * , F is the matrix of Yukawa couplings and M M is the Majorana mass matrix of the sterile neutrinos. Here c.c. refers to the charge conjugation, which acts as $\nu_R^c = C \overline{\nu_R}^T$ with C = iγ 2 γ 0 .
U ν is the standard light neutrino mixing matrix and U N its equivalent amongst the heavy neutrinos. We use the tree level seesaw relation and expand to leading order in the mixing between left and right handed neutrinos, which is quantified by the matrix θ = vF M M −1 . Naively one would expect the magnitude of the mixing angles θ ai to be comparable to the ratio between light and heavy neutrino masses, θ 2 ∼ m i /M i . However, if the M i and F ai approximately respect a generalised B − L symmetry [98], then the mixings U 2 ai = |θ ai | 2 can be large enough to be within reach of the LHC [99]. While the symmetry naturally explains the smallness of the light neutrino masses for heavy neutrino masses below the electroweak scale and Yukawa couplings that are larger than that of the electron, it also parametrically suppresses the rate of lepton number violating processes [99]. This includes the decay of W bosons into same sign dileptons, which is considered to be a "golden channel" for heavy neutrino searches, cf. e.g. [101,102] for recent updates. A precise quantification of this suppression is difficult because it depends on the splitting between the heavy neutrino masses. If all mass splittings are bigger than the experimental resolution, then the cross section (2.2) can simply be generalised to the case with several heavy neutrino flavours; the cross section of events with lepton flavour a from the first vertex (N production) and b from the second vertex (N decay) that can be seen in a detector can be estimated as
$$\sigma_{ab} \simeq U_{ai}^2\, \frac{U_{bi}^2}{U_i^2}\, \sigma_{\nu_a} \left( e^{-l_0 \Gamma_{N_i}/\gamma_N} - e^{-l_1 \Gamma_{N_i}/\gamma_N} \right). \quad (A.3)$$
Parametrically this estimate applies to both decays that violate the SM lepton number L and those that conserve it. If the mass splittings are too small to be experimentally resolved, but still much larger than the Γ N i , then one simply has to add the event rates from the decays of the mass-degenerate states for both L conserving and L violating decays.
In the context of realistic seesaw models, we can identify U 2 a = |θ a | 2 in the estimate (2.2) based on the phenomenological model (2.1) with U 2 a2 or U 2 a3 in (A.3). If the mass splitting is smaller than Γ N i , then the destructive interference between contributions from different N i to the diagram in figure 3 leads to a suppression of processes that violate L [99]. In the intermediate regime where |M i − M j | ∼ Γ N i , Γ N j there can be non-trivial effects due to coherent oscillations among the heavy neutrinos [58,60,93,[103][104][105][106]. Hence, a precise prediction of the number of events depends on parameters that may not be directly observable if the mass splitting is smaller than the kinematic resolution of the detector. However, our present analysis does not require lepton number violation because we use the displacement of the vertex in which the N i decays to suppress the SM backgrounds. Whether or not lepton number violating processes contribute therefore at most amounts to a factor of two in the heavy neutrino decay rate Γ N i . This rate in principle enters the cross section (A.3) exponentially. However, in most of the testable region in the mass-mixing plane the exponentials can be approximated linearly. In the double-logarithmic sensitivity plots in the mass-mixing plane the main difference is a shift of the sensitivity region in the direction of comparably large masses and large couplings, where it is cut off due to the fact that the N i decay before they travel a distance l 0 . Moreover, in the range of masses and couplings considered here, the suppression of lepton number violating processes only occurs for fine-tuned parameter choices [107]. Therefore we can perform the analysis in the effective model (2.1).

Open Access. This article is distributed under the terms of the Creative Commons Attribution License (CC-BY 4.0), which permits any use, distribution and reproduction in any medium, provided the original author(s) and source are credited.