Shining light on the scotogenic model: interplay of colliders and cosmology

In the framework of the scotogenic model, which features radiative generation of neutrino masses, we explore a light dark matter scenario. Throughout the paper we chiefly focus on keV-scale dark matter, which can be produced either via freeze-in through the decays of the new scalars, or from the decays of the next-to-lightest fermionic particle in the spectrum, which is itself produced through freeze-out. The latter mechanism must be suppressed, as it typically produces a hot dark matter component. Constraints from BBN are also considered and, in combination with the former production mechanism, they require the dark matter to be light. For this scenario we consider signatures at the High Luminosity LHC and at proposed future hadron and lepton colliders, namely FCC-hh and CLIC, focusing on searches with two leptons and missing energy in the final state. While a potential discovery at the High Luminosity LHC is in tension with limits from cosmology, the situation greatly improves for future colliders.


Introduction
While more than two decades have passed since the groundbreaking discovery of neutrino oscillations, which unambiguously established that the most elusive Standard Model (SM) particles are massive, the origin of neutrino mass still remains unknown. Although there exists a viable scenario in which, by supplementing the SM left-handed neutrino fields with right-handed components, neutrino masses are generated in the same way as for all the other fermions, the smallness of the Yukawa couplings required for generating eV-scale masses has led to a much greater interest in Majorana mass models. The most famous realization of the latter possibility is the type-I seesaw model [1][2][3][4], in which neutrino masses are generated at tree level in the presence of at least two generations of heavy neutral leptons. For "natural" O(1) values of the Yukawa couplings, this model suggests that the mass scale of the heavy leptons is around 10^13 GeV, clearly unreachable at any terrestrial experiment. In contrast, radiative neutrino mass models can lower the scale of new physics by several orders of magnitude.
Among radiative neutrino mass models, one of the simplest realizations is the so-called "scotogenic" model [5]. Since a Z_2 symmetry needs to be imposed in order to forbid tree-level neutrino mass generation, the lightest among the newly introduced particles can be a viable dark matter (DM) candidate [6][7][8][9][10][11]. The success of thermal leptogenesis in this model has also been demonstrated [12][13][14][15][16][17][18] and, in addition, the authors of this work have recently shown that light DM and ARS leptogenesis [19] can be embedded simultaneously in the framework of the ννMSM model [20]. There it was found that one can have the spectrum of new particles below the TeV scale, while bounds from cosmology require the mass of the light DM to be below O(10 keV). This raises the question whether one can test this model setup at present and future colliders, where the accessible energy range exceeds the masses of all newly introduced particles. Moreover, the light DM candidate can lead to interesting consequences for the early Universe, and in this paper we put special emphasis on the synergy between collider searches and cosmological probes for testing the scotogenic model.
The paper is organized as follows. In section 2 we introduce the particle content of the model and discuss the mechanism for the generation of neutrino masses. In section 3 we discuss the production of light fermionic DM and derive the respective limits from cosmology. In section 4, for such a scenario with light DM and a hierarchical spectrum in the Z_2-odd sector, we present the calculated projections for a High Luminosity LHC (HL-LHC) search with two tau leptons or electrons/muons and missing transverse energy in the final state. Cosmological bounds put the parameter space testable at HL-LHC in tension, which motivates us to go beyond and consider future colliders. In section 5 we therefore present the discovery potential at FCC-hh [21] and CLIC [22] by considering the di-lepton + E_T channel. We conclude in section 6.

The model and neutrino masses
We consider the scotogenic model which, in addition to the SM field content, contains one scalar doublet Σ = (σ^+, σ^0)^T as well as three generations of heavy neutral leptons (HNLs) N_i (i = 1, 2, 3). In addition to these novel fields, the model requires a discrete Z_2 symmetry under which the new degrees of freedom are odd. The part of the Lagrangian containing the newly introduced fields reads

    L ⊃ (D_μ Σ)† (D^μ Σ) + (i/2) N̄_i γ^μ ∂_μ N_i − [ y_iα N̄_i Σ† L_α + (m_N_i/2) N̄ᶜ_i N_i + h.c. ] − V(Φ, Σ) ,    (2.1)

where y_iα is the Yukawa coupling between the i-th HNL, Σ and the SM lepton doublet L_α = (ν_α, α^−)^T (α = e, μ, τ), m_N_i is the mass of the i-th HNL, D_μ is the covariant derivative, Φ = (φ^+, φ^0)^T is the SM Higgs doublet, and V(Φ, Σ) represents the scalar potential

    V(Φ, Σ) = m_Φ² Φ†Φ + m_Σ² Σ†Σ + (λ_1/2)(Φ†Φ)² + (λ_2/2)(Σ†Σ)² + λ_3 (Φ†Φ)(Σ†Σ) + λ_4 (Φ†Σ)(Σ†Φ) + (λ_5/2)[(Φ†Σ)² + h.c.] .    (2.2)

The couplings in the scalar sector are constrained by the vacuum stability requirement [7,23]

    λ_3 > −√(λ_1 λ_2) ,  λ_3 + λ_4 − |λ_5| > −√(λ_1 λ_2) ,  λ_{1,2} > 0 .    (2.3)

From eq. (2.2) we can directly infer the masses of the novel scalar degrees of freedom after electroweak symmetry breaking (EWSB),

    m_±² = m_Σ² + λ_3 v² ,
    m_S² = m_Σ² + (λ_3 + λ_4 + λ_5) v² ,
    m_A² = m_Σ² + (λ_3 + λ_4 − λ_5) v² ,    (2.4)

where v = 246/√2 GeV is the vacuum expectation value of the Higgs field. In the first line of eq. (2.4) the mass of the charged scalars is given, whereas the latter two masses correspond to the CP-even (S) and CP-odd (A) neutral scalars, defined via σ^0 = (S + iA)/√2. Since the exact Z_2 symmetry forbids tree-level neutrino masses, they are generated radiatively; calculating the self-energy corrections to the neutrino propagator from the exchange of the neutral spin-zero S and A fields yields [5,24,25]

    (M_ν)_αβ = Σ_i (y_iα y_iβ m_N_i)/(32π²) [ m_S²/(m_S² − m_N_i²) ln(m_S²/m_N_i²) − m_A²/(m_A² − m_N_i²) ln(m_A²/m_N_i²) ] ≡ Σ_i y_iα y_iβ Λ_i .    (2.5)

Here, the summation index runs over the HNL generations and in the last equality we abbreviated this formula, modulo Yukawa couplings, with Λ_i. In the present work, the mass of the lightest HNL is O(keV) with y_1α ≲ O(10^−8), and this state effectively does not participate in the neutrino mass generation (2.5). This renders the lightest active neutrino effectively massless, which is a viable scenario, consistent with the data from neutrino oscillation experiments that probe only mass squared differences.
In this case, N_1 is decoupled from the mass generation and, hence, only the elements of the 2 × 3 submatrix of y, denoted y_23, enter eq. (2.5).
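As a rough numerical cross-check of the loop function in eq. (2.5), the factor Λ_i can be evaluated for illustrative parameter values (not the paper's benchmark); the small λ_5-induced splitting between m_S and m_A is what suppresses the neutrino mass scale. A minimal Python sketch, assuming the reconstructed form above:

```python
import math

def loop_factor(mN, mS, mA):
    """Lambda_i of eq. (2.5): one-loop function from S/A exchange, in GeV."""
    def term(m):
        return m**2 / (m**2 - mN**2) * math.log(m**2 / mN**2)
    return mN / (32 * math.pi**2) * (term(mS) - term(mA))

# Illustrative inputs: common scalar mass ~600 GeV (lambda_3 + lambda_4
# absorbed into it), HNL mass 300 GeV, lambda_5 near its quoted lower bound.
v2 = 246.0**2 / 2.0                  # v^2 with v = 246/sqrt(2) GeV
mN, mSigma2, lam5 = 300.0, 600.0**2, 1e-7
mS = math.sqrt(mSigma2 + lam5 * v2)  # so that m_S^2 - m_A^2 = 2 lambda_5 v^2
mA = math.sqrt(mSigma2 - lam5 * v2)

Lam = loop_factor(mN, mS, mA)        # GeV
y = 0.1                              # illustrative Yukawa entry
m_nu_eV = y**2 * Lam * 1e9           # convert GeV -> eV
print(f"Lambda = {Lam:.3e} GeV, m_nu ~ {m_nu_eV:.2e} eV")
```

For these inputs the result lands at the ~0.1 eV scale, illustrating how λ_5 and the y_23 entries jointly set the neutrino mass scale.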
In order to properly account for low-energy neutrino data in the analysis, we employ the Casas-Ibarra parametrization [26], which fixes the Yukawa submatrix to

    y_23 = Λ_diag^(−1/2) R √(m_ν^diag) U_PMNS† ,    (2.6)

where Λ_diag = diag(Λ_2, Λ_3), m_ν^diag = diag(0, √(Δm²_sol), √(Δm²_atm)) for normal mass ordering, and R is an orthogonal matrix parametrized by a complex angle ϑ = ω − iη,

    R = ( 0  cos ϑ  sin ϑ ; 0  −sin ϑ  cos ϑ ) ,    (2.7)

JHEP09(2020)136
where Δm²_sol and Δm²_atm are the solar and atmospheric mass squared differences, and U_PMNS is the leptonic mixing matrix, parameterized as in ref. [27]. The relevant parameters for us are one Dirac CP phase (δ) and two Majorana CP phases (α_1, α_2). While the mixing angles are determined relatively precisely (see [28] for the recent NuFIT results that we employ in this work), the value of the Dirac CP phase is practically unconstrained, and the Majorana phases are not testable at neutrino oscillation facilities.
The elements of the Yukawa submatrix y_23 are constrained from above by the non-observation of LFV processes such as ℓ_α → ℓ_α' γ and ℓ_α → 3ℓ_α', where ℓ_α and ℓ_α' denote different species of charged leptons. The upper bounds on the branching ratios (BR) for these types of decays are given in [27] and are also compiled in table 1 of [20]. While we have implemented all available constraints from LFV decays, it is worth pointing out that the dominant effect arises from the non-observation of the µ → eγ process. The upper bound on the BR for this process is 4.2 × 10^−13, which converts into an upper bound on the combination Σ_{i=2,3} y*_iµ y_ie [24], valid for m_N_{2,3} ≳ 0.1 TeV (eq. (2.9)). Finally, the neutrino mass matrix in eq. (2.5) depends on λ_5, which enters the expressions for m_S and m_A (see again eq. (2.4)). For small values of λ_5, the neutrino masses in fact depend linearly on λ_5 [5]. The parameter λ_5 and the entries of y_23 depend on each other and jointly set the scale of the neutrino masses, ∼ 0.1 eV. This implies a lower bound λ_5 ≳ O(10^−7), calculated for m_± = O(1) TeV. Constraints arising from electroweak precision data [29] are not competitive with the ones discussed above.

Dark matter and cosmology
Since the lightest of the newly introduced particles is stable, it is natural to ask whether it can account for the DM in the Universe. In the scotogenic model there are neutral particles in both the fermionic and the scalar sector, making them potential candidates. Motivated by our previous work [20], we stick to fermionic DM and keep the scalars heavier than the HNLs. It was shown in refs. [6,20] that in the scotogenic setup with fermionic O(100) GeV DM, the relic abundance from freeze-out generally strongly overshoots the measured value. There are, however, options to remedy this problem: (i) If additional processes, namely coannihilations of DM with the new scalars, are involved, DM stays longer in thermal equilibrium and freezes out with a much smaller abundance (see for instance figure 2 in [20]). The coannihilations are only effective if the splitting between the DM and scalar masses is tiny. Such a scenario would yield soft final-state leptons which are hard to reconstruct in di-lepton and di-tau searches (see sections 4 and 5), and hence freeze-out of O(100) GeV DM is not compatible with the signatures at hadron colliders studied in this paper. This conclusion changes for the future lepton collider CLIC, for which we will show in section 5 that such a regime can be probed as well.¹

¹ Let us note that the realization with a very compressed spectrum of Z_2-odd particles [30][31][32][33] is also testable using astrophysical observations [34][35][36][37][38][39]. This would in particular require a small splitting in the Z_2-odd fermion sector, a scenario which will not be explored further in this work.

(ii) The overproduction problem can be solved by considering light, non-thermally produced DM. As we have shown in [20], such DM can be produced either via freeze-in [40,41] from the decays of the neutral and charged scalars in the Σ doublet, or from the decays of the frozen-out next-to-lightest HNL, i.e. N_2.² The latter mechanism is, however, constrained by requiring N_2 to decay before the time of big bang nucleosynthesis (BBN): if N_2 is too long-lived, the abundances of light nuclei are altered. This production mechanism moreover leads to momentum distributions that are too hot, and therefore needs to be subdominant with respect to the scalar decay contribution. We elaborate on both points in the present section.
Generally, in the freeze-in scenario there is some freedom in the choice of the DM mass. In this model, however, there is an upper bound from demanding that DM production has ceased before BBN takes place, as discussed in the last part of this section. We would like to point out that one can live with DM masses up to a few MeV, in which case the limits from structure formation, discussed in the second part of this section, are weakened. In the following we will nevertheless consider the DM mass to be O(10 keV). This particular choice is motivated by our findings in [20], where we identified an interesting open window in parameter space to successfully incorporate leptogenesis.

Production mechanisms
The processes through which the keV-scale N_1 is frozen in are

    σ^± → ℓ_α^± N_1 ,  S → ν_α N_1 ,  A → ν_α N_1 .    (3.1)

The corresponding Boltzmann equation for the DM yield, Y_FI, defined as the ratio of the DM number density and the entropy density, reads [20,40]

    dY_FI/dx = (135 M_Pl |y_1|²)/(1.66 · 64 π⁵ g_*^(3/2) m_±) x³ [ K_1(x) + r_S² K_1(r_S x) + r_A² K_1(r_A x) ] .    (3.2)

Here, M_Pl = 1.22 × 10^19 GeV is the Planck mass, and the number of relativistic degrees of freedom across the relevant temperatures is fixed to g_* = 114.25, taking the new particles into account. In eq. (3.2) it is assumed for simplicity that the Yukawa couplings of N_1 are flavor universal, i.e. y_1α ≡ y_1. Furthermore, the abbreviations r_A = m_A/m_± and r_S = m_S/m_± are introduced in order to account for all three production channels. K_1 is the modified Bessel function of the second kind, and the rescaled temperature x = m_±/T is employed. In the computation we adopted Maxwell-Boltzmann distributions for the phase space densities of all thermalized species involving Σ. Finally, we assumed the initial DM number density to vanish, which simplifies the computation of the DM abundance: instead of solving a differential equation, a straightforward integration suffices.
We can obtain the expression for Y_FI by simply integrating eq. (3.2) between the temperature at the end of inflation and the present one, where the former is associated with the reheating temperature, assumed to be larger than all particle masses in the model. In practice, this allows us to use x = 0 and x = ∞ as the respective integration boundaries. We obtain

    Y_FI = (405 M_Pl |y_1|²)/(1.66 · 128 π⁴ g_*^(3/2) m_±) (1 + r_S^−2 + r_A^−2) .    (3.3)

By taking λ_4 = λ_5 = 0, which renders r_S = r_A = 1, and by using the relation between the yield and the relic abundance, Ωh² = 2.742 · 10² (m_N1/keV) Y_FI, we arrive at the analytical estimate for the DM relic abundance,

    Ωh² ≈ 0.16 (|y_1|/10^−8)² (m_N1/10 keV) (1 TeV/m_±) ,    (3.4)

from which one infers that, for scalar decays to be the dominant DM production mechanism, the required DM Yukawa couplings are O(10^−8) for O(TeV) masses of the new scalars and a keV-scale N_1. In addition to the described freeze-in mechanism, DM in this model can be produced from the decays of the next-to-lightest Z_2-odd particle, N_2. The Yukawa couplings y_2α, which are required for the successful generation of neutrino masses via the mechanism described in section 2, are sufficiently strong to keep this particle in thermal equilibrium with the SM bath. Hence, N_2 will freeze out at m_N2/T ≈ 15. Note that the Z_2-odd scalars are also in thermal equilibrium due to their gauge interactions. Still, all such particles eventually decay to N_2, and hence one effectively needs to solve a single Boltzmann equation for N_2,

    dY_N2/dy = − (s ⟨σ_eff v⟩)/(H y) (Y_N2² − Y_N2,eq²) ,  y = m_N2/T ,    (3.5)

where ⟨σ_eff v⟩ accounts for the annihilations and coannihilations in the Z_2-odd sector. The multitude of relevant processes enforces the evaluation of eq. (3.5) with numerical tools, and to this end we employed micrOMEGAs 5.1 [42]. For a detailed description of our implementation, as well as the procedure used to derive eq. (3.5), we refer the reader to our previous publication [20], where we also demonstrated the strong effect of coannihilations on the freeze-out abundance of N_2.

² Such a scenario was already considered in [6].
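The integration described above can be sketched numerically; the collision term used below is our reading of eq. (3.2) (flavor-universal y_1, all three channels), so the exact prefactor should be treated as illustrative rather than authoritative:

```python
import math
from scipy.integrate import quad
from scipy.special import kn

M_PL = 1.22e19     # Planck mass [GeV]
G_STAR = 114.25    # relativistic dof including the new states

def omega_h2_freeze_in(y1, m_charged, m_dm_kev, rS=1.0, rA=1.0):
    """Freeze-in abundance from Sigma decays, sketching eqs. (3.2)-(3.4).

    The collision term x^3 [K1(x) + rS^2 K1(rS x) + rA^2 K1(rA x)] is an
    assumed reconstruction of the source equation."""
    pref = 135 * M_PL * y1**2 / (1.66 * 64 * math.pi**5 * G_STAR**1.5 * m_charged)
    integral, _ = quad(
        lambda x: x**3 * (kn(1, x) + rS**2 * kn(1, rS * x) + rA**2 * kn(1, rA * x)),
        0, math.inf)
    return 2.742e2 * m_dm_kev * pref * integral

# O(1e-8) Yukawas, TeV scalars and a 10 keV N1 land near the observed 0.12
print(omega_h2_freeze_in(y1=1e-8, m_charged=1000.0, m_dm_kev=10.0))
```

For r_S = r_A = 1 each Bessel integral evaluates to 3π/2, reproducing the closed form of eq. (3.3).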
After freeze-out, N_2 decays into N_1 and a pair of charged or neutral leptons, with the rate [6]

    Γ_{N_2 → N_1 ℓ_α ℓ_β} = (|y_1α|² |y_2β|² m_N2⁵)/(6144 π³ M⁴) ,    (3.6)

where M stands for the mass of the scalar particle exchanged in the process, and α and β denote the flavors of the final-state leptons. The decay of N_2 contributes to the total DM abundance an amount

    Ωh²_{N_2→N_1} = (m_N1/m_N2) Ωh²_N2 ,    (3.7)

where Ωh²_N2 is the freeze-out abundance of N_2, related to the corresponding yield, Y, obtained by solving eq. (3.5). Even though N_2 decays provide an extra source of DM, this production mechanism actually brings two "problems":

• N_2 decays occur after freeze-out, at temperatures much lower than m_N2, and this leads to the production of DM particles with a hot momentum distribution [43]. This is not a generic property of the model, but it does occur for the parameter choices under consideration. Such a component can drastically suppress structure at small scales and may not be compatible with observations.
• The decays of N_2 should be fast enough in order not to spoil the BBN predictions. For N_2 decays to τ leptons, which dominantly decay hadronically, the decay time needs to satisfy τ_{N_2→N_1} ≲ 1 s, whereas decays into leptons of the first and second generation do not lead to such stringent limits, requiring only τ_{N_2→N_1} ≲ 100 s [44].
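To get a feeling for the BBN condition, the N_2 lifetime can be estimated with a muon-decay-like scaling for the three-body width; the 6144π³ prefactor is an assumption standing in for eq. (3.6), and the coupling values are illustrative:

```python
import math

HBAR_GEV_S = 6.582e-25   # hbar in GeV * s

def tau_n2_seconds(y1, y2, m_n2_gev, m_scalar_gev):
    """Lifetime of N2 -> N1 + l l via heavy-scalar exchange.

    Muon-decay-like scaling Gamma ~ y1^2 y2^2 m^5 / (6144 pi^3 M^4);
    the numerical prefactor is an assumed stand-in for eq. (3.6)."""
    gamma = y1**2 * y2**2 * m_n2_gev**5 / (6144 * math.pi**3 * m_scalar_gev**4)
    return HBAR_GEV_S / gamma

# freeze-in-sized y1 ~ 1e-8 and neutrino-mass-sized y2 ~ 1e-2 (illustrative)
tau = tau_n2_seconds(1e-8, 1e-2, 400.0, 600.0)
print(f"tau(N2) ~ {tau:.2e} s")   # compare against the ~1 s BBN limit
```

The strong sensitivity to y_1 (two powers in the rate) is what ties the BBN limit to the freeze-in production discussed above.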

Constraints from structure formation
In this section we consider the compatibility of the two previously described DM production mechanisms with structure formation. Structure formation limits on keV-scale sterile neutrino DM are commonly derived for non-resonant production, dubbed the Dodelson-Widrow mechanism [45], for which a non-zero mixing between the active and sterile states is required. Currently, the most stringent structure formation limit on the mass of non-resonantly produced particles (m_NRP) arises from Lyman-α forest data [46] and reads m_NRP ≳ 28.8 keV. This limit should be taken with a grain of salt, because the Lyman-α forest absorption spectra can be dominated by effects stemming from the gas dynamics in the intergalactic medium [47]. More robust constraints arise from Milky Way satellite counts [48] and give m_NRP ≳ 10 keV. In order to derive constraints from these observations for our model, we evaluate the DM momentum distribution function f_N1(z, r), calculated as a function of the dimensionless variables z ≡ p/T and r ≡ m_P/T. Here, m_P stands for the mass of the parent particle, which is either a heavy scalar in the case of freeze-in or N_2 for late-time next-to-lightest particle decays. For scalar decays we follow the discussion in ref. [43], whereas for N_2 decays we employ the procedure outlined in ref. [49]. The total distribution function is then given as the sum of the two contributions,

    f_N1(z, r) = f^Σ_N1(z, r) + f^N2_N1(z, r) ,    (3.8)

where the superscripts indicate the production mechanism of N_1. In what follows we discuss the calculation of both components.
The distribution f^Σ_N1(z, r), assuming Maxwell-Boltzmann statistics for the parent particles, is obtained by integrating the production collision term [43,49]; at late times it approaches the characteristic freeze-in shape

    f^Σ_N1(z, r → ∞) ∝ C^Σ_Γ z^(−1/2) e^(−z) ,    (3.9)

where C^Σ_Γ = M_0 Γ_Σ/m_±² is the effective decay width, with M_0 = (45 M_Pl²/(4π³ g_*))^(1/2), originating from the rescaled 2-body decays of the Σ particles with the width

    Γ_Σ = Σ_α |y_1α|² m_±/(16π) = 3 |y_1|² m_±/(16π) .    (3.10)

From f^Σ_N1(z, r) we find the average DM momentum ⟨z⟩_FI ≈ 2.5, in agreement with eq. (19) in [43]. This result, together with the information that the production dominantly occurs at temperatures T ∼ m_Σ/3 (see for instance figure 1 in [20]), allows us to estimate the limit on m_N1 by using [43]

    m_N1 ≳ m_NRP (⟨z⟩_FI/3.15) (10.75/g_*(T_prod))^(1/3) ,    (3.11)

assuming that the freeze-in DM production dominates. Here, the entropy dilution factor (10.75/g_*(T_prod))^(1/3) takes into account that the DM production happens at early times. Taking the aforementioned limit m_NRP ≳ 10 keV, we obtain m_N1 > 3.7 keV. The combination of this limit and eq. (3.4) sets an upper bound on the values of y_1α. If the decays of the Z_2-odd scalars were the only source of DM production, our structure formation analysis would end here. However, the decays of N_2 significantly complicate the picture. To calculate the DM distribution function for the production via N_2 decays, we apply the master equation of [49] (eq. (3.12)) and evaluate it numerically. In eq. (3.12), r_FO ∈ (8, 16) is evaluated at the freeze-out temperature and C^N2_Γ is the effective decay width

    C^N2_Γ = M_0 Γ / m_N2² ,    (3.13)

where Γ is the decay width of N_2 into N_1 and a pair of leptons, given in eq. (3.6). For the distribution function of N_2 after freeze-out we assume a Maxwell-Boltzmann form, normalized to the frozen-out yield of N_2 [49].
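The quoted numbers can be reproduced in a few lines: the freeze-in spectrum f ∝ z^(−1/2) e^(−z) gives ⟨z⟩ = Γ(7/2)/Γ(5/2) = 2.5, and rescaling the satellite-count bound yields a limit close to the quoted 3.7 keV (the residual difference depends on the precise g_*(T_prod), assumed here to be 114.25):

```python
import math
from scipy.integrate import quad

# Freeze-in spectrum from scalar decays: f(z) ~ z^(-1/2) e^(-z) for r -> inf
num, _ = quad(lambda z: z**3 * z**-0.5 * math.exp(-z), 0, math.inf)
den, _ = quad(lambda z: z**2 * z**-0.5 * math.exp(-z), 0, math.inf)
z_avg = num / den                   # Gamma(7/2)/Gamma(5/2) = 2.5
print(z_avg)

# Rescale the m_NRP >~ 10 keV satellite-count bound via eq. (3.11);
# <z> = 3.15 for non-resonant production, g_* = 114.25 assumed at T_prod.
m_bound = 10.0 * (z_avg / 3.15) * (10.75 / 114.25)**(1 / 3)
print(f"m_N1 > {m_bound:.1f} keV")
```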
In figure 1 we show f_N1(z, r) z² as a function of z for two masses of N_2, namely m_N2 = 100 GeV and 400 GeV, with the scalar mass set to m_± = 600 GeV. The red curves represent f^N2_N1(z, r) z², obtained by solving eq. (3.12) and fixing r to sufficiently large values in order to capture the effect of the decaying N_2. For comparison, we also show the distribution function corresponding to production via freeze-in (blue), taking r → ∞. Clearly, the peak of f^N2_N1(z, r) z² is shifted to very large values of z, indicating that N_2 decays yield a hot DM component. However, we also see from the figure that its amplitude is greatly suppressed with respect to f^Σ_N1(z, r) z², implying that this component is subdominant for the selected benchmark point. Quantitatively, the distribution shown in the left panel yields Ωh²_{N_2→N_1} = 0.03 × 0.12, i.e. 3% of the observed DM abundance, whereas for m_N2 = 400 GeV less than one per mille of the observed DM abundance is produced in N_2 decays.
Figure 1. In the left (right) panel we show the momentum distribution function f_N1(z, r) z² for both DM production mechanisms, taking m_± = 600 GeV and m_N2 = 100 GeV (m_N2 = 400 GeV). Blue and red curves correspond to freeze-in and N_2 late-time decays, respectively.
The simplest method for inferring the structure formation limit is to calculate the free-streaming length by employing ⟨z⟩. However, this method is not applicable to our scenario, since the spectrum consists of two components with two distinct peaks. Therefore we apply a more robust analysis and evaluate the transfer function T(k), defined via the power spectrum ratio

    T²(k) = P(k) / P(k)_ΛCDM ,    (3.15)

where P(k) is the power spectrum calculated from f_N1(z, r) using CLASS [50,51] and P(k)_ΛCDM is the spectrum for cold DM only. The transfer function indicates at which scales non-cold DM leads to deviations from cosmological observations. The temperature of the DM species, T_N1, relative to the photon temperature, T_γ, is relevant for the analysis and an input for CLASS. Since we have two different mechanisms for DM production in the model, we are left with two independent dark sector temperatures: (i) The first temperature is set by the time when N_1 is produced via the freeze-in mechanism from the decays of the heavy scalars. These processes occur while the heavy scalars are still in thermal equilibrium, implying that the DM particles are produced with a temperature identical to that of the SM sector. After production, N_1 is decoupled and is not reheated when SM degrees of freedom drop out of equilibrium. The temperature ratio, governed by the entropy dilution factor, therefore reads

    T_N1/T_γ = (4/11)^(1/3) (10.75/g_*(T_prod))^(1/3) ,    (3.16)

where T_prod roughly corresponds to the Z_2-odd scalar masses.
(ii) To evaluate the temperature of DM produced from out-of-equilibrium N_2 decays, we estimate the temperature at which these decays take place. We assume an

instantaneous decay at τ = 1/Γ and make use of the time-temperature relation for a radiation-dominated Universe,

    t = M_0/(2T²) ,    (3.17)

which allows us to obtain the temperature at which the N_2 particles decay, T_Γ = (M_0 Γ/2)^(1/2). For the benchmark point already employed for presenting the momentum distributions in the right panel of figure 1, we obtain

    T_Γ = O(10) MeV .    (3.18)

At T ∼ T_Γ the HNLs are at rest, and each decay product has an energy E ≈ m_N2/3 ≈ O(100) GeV. Note that dividing this energy by T_Γ from eq. (3.18) gives z ≈ 10⁴, which explains the position of the N_2 decay peak in the momentum distribution (see figure 1). In principle, the DM temperature is incorporated in the momentum distribution. However, we still have to take into account that, unlike DM, the SM bath is reheated when electron-positron annihilation occurs. In summary, the temperature of this DM contribution is given by

    T_{N_2→N_1}/T_γ = (4/11)^(1/3) (10.75/g_*(T_Γ))^(1/3) ≈ (4/11)^(1/3) ,    (3.19)

where the last step uses g_*(T_Γ) ≈ 10.75 for the aforementioned benchmark point.
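The order-of-magnitude chain above can be checked numerically; the form of M_0 and the instantaneous-decay approximation are the working assumptions here, and the lifetime value is illustrative rather than the paper's exact benchmark:

```python
import math

M_PL = 1.22e19            # Planck mass [GeV]
HBAR_GEV_S = 6.582e-25    # hbar in GeV * s

def temperature_at_decay(gamma_gev, g_star=10.75):
    """T at t = 1/Gamma, using t = M0/(2 T^2) with
    M0 = sqrt(45 MPl^2 / (4 pi^3 g_*)) (assumed form of eq. (3.17))."""
    m0 = math.sqrt(45 * M_PL**2 / (4 * math.pi**3 * g_star))
    return math.sqrt(m0 * gamma_gev / 2)

# Illustrative lifetime tau(N2) ~ 0.01 s, within the BBN-allowed window
gamma = HBAR_GEV_S / 0.01
T = temperature_at_decay(gamma)          # GeV
z_peak = (400.0 / 3) / T                 # E ~ m_N2/3 for m_N2 = 400 GeV
print(f"T_Gamma ~ {T * 1e3:.1f} MeV, z ~ {z_peak:.1e}")
```

This reproduces both T_Γ = O(10) MeV and the z ≈ 10⁴ position of the hot peak in figure 1.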
In order to assess the cosmological viability of particular benchmark points, we compare the calculated T²(k) against the function corresponding to the constraint stemming from Lyman-α forests. For the latter, we adopt an analytical fit for the transfer function [52], taking m_NRP ≳ 10 keV. In figure 2 we show in red (green) the calculated transfer function for f_N1(z, r) with m_± = 600 GeV and m_N2 = 100 GeV (m_N2 = 400 GeV); these are the same benchmark points as in figure 1. If a given curve lies below the Lyman-α limit (blue curve), the corresponding parameter point is disfavored. We observe that the scenario with the lighter N_2 is excluded, since the abundance of hot DM is too large in this case and hence larger cosmological scales than observed are affected. On the other hand, the green curve is in agreement with the observational data. One should note that the curves drop to zero at roughly the same point; this scale is set by the temperature of the dominant, frozen-in, DM component.
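For orientation, a commonly used analytical fit of the Viel et al. type can stand in for the Lyman-α transfer function; identifying its coefficients with those of the fit adopted in ref. [52] is an assumption:

```python
import numpy as np

def transfer_fit(k, m_nrp_kev, omega_dm=0.26, h=0.67):
    """Thermal-relic-style fit T(k) = [1 + (a k)^(2 nu)]^(-5/nu), nu = 1.12.

    Viel et al.-type parametrization; whether it matches the fit of
    ref. [52] exactly is an assumption."""
    nu = 1.12
    a = 0.049 * m_nrp_kev**-1.11 * (omega_dm / 0.25)**0.11 * (h / 0.7)**1.22
    return (1 + (a * k)**(2 * nu))**(-5 / nu)

k = np.logspace(-1, 3, 400)               # wavenumber in h/Mpc
t2_limit = transfer_fit(k, 10.0)**2       # m_NRP = 10 keV reference curve
# model points whose T^2(k) dips below t2_limit at some k are disfavored
print(t2_limit[0], t2_limit[-1])
```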

Constraints from N eff
When discussing possible implications for the formation of structures in the early Universe, one should also take into account that keV-scale DM could change the number of relativistic species, N_eff. The effective number of relativistic non-photonic species after electron-positron annihilation enters the expression for the radiation density,

    ρ_rad = ρ_γ [1 + (7/8)(4/11)^(4/3) N_eff] ,    (3.20)

where ρ_γ represents the energy density of photons. In the SM, N_eff = 3.046, and we thus denote contributions from additional relativistic species as ΔN_eff = N_eff − 3.046. The contribution to ΔN_eff from N_1 can be estimated by comparing its energy density against that of a fully relativistic neutrino species with temperature T_ν [49],

    ΔN_eff = ρ_N1/ρ_ν ,  with ρ_ν = (7π²/120) T_ν⁴ .    (3.21)

Using as an example the benchmark points employed in figure 1, we can derive the corresponding ΔN_eff values (eq. (3.22)). Clearly, large mass gaps between N_2 and σ± are disfavored; the reason is that such cases lead to larger abundances of N_2, and therefore the hot DM component becomes more prominent in the spectrum.

Figure 3. Constraints from structure formation (red curves) confronted with N_eff limits (blue curves) derived using eq. (3.21). The solid curves correspond to the conservative and the dashed ones to the aggressive choice of ΔN_eff. Note that structure formation limits also indirectly depend on ΔN_eff, as it is an input parameter for CLASS. Clearly, N_eff yields much stronger limits than those arising from structure formation. Shown in black is the curve for Ωh².

Current measurements by the Planck collaboration allow for an upper limit of ΔN_eff ≈ 0.28 at 95% CL (TT, TE, EE+lowE+lensing+BAO). Including the present tension in the Hubble constant measurement, this value increases to ΔN_eff ≈ 0.52 at 95% CL (TT, TE, EE+lowE+lensing+BAO+R18) [53]. In the following, we dub the first value aggressive and the second one conservative.
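Eq. (3.20) is straightforward to evaluate; the SM value reproduces the familiar ratio ρ_rad/ρ_γ ≈ 1.69:

```python
def radiation_density_ratio(n_eff):
    """rho_rad / rho_gamma = 1 + (7/8) (4/11)^(4/3) N_eff, cf. eq. (3.20)."""
    return 1 + 7 / 8 * (4 / 11)**(4 / 3) * n_eff

print(radiation_density_ratio(3.046))          # SM value, ~1.692
print(radiation_density_ratio(3.046 + 0.28))   # with the 'aggressive' Delta N_eff
```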
By using eq. (3.21), we can estimate which parameter choices lead to ΔN_eff values that exceed the conservative/aggressive limit. We have performed a scan and deduced the conditions necessary for consistency with cosmology when using the conservative value (see also the black curve in figure 3). For instance, taking m_± = 1 TeV, the lower bound on the heavy lepton mass is m_N2 ≳ 300 GeV.⁵ In figure 2 and eq. (3.22) we demonstrated that one of the chosen benchmark points is excluded by both structure formation and N_eff considerations. Comparing both probes in figure 3, we however conclude that N_eff generally leads to stronger exclusion limits. Hence, in sections 4 and 5 we will compare the regions of parameter space accessible at colliders with the N_eff limits.

BBN constraints
Primordial abundances of light nuclei may be affected by processes involving the new particles of the model. As we have seen in the previous section, N_2 decays produce SM leptons with large momenta, which can inject a lot of energy into the plasma. Specifically, we need to ensure that N_2 decays are fast enough for these highly energetic particles to thermalize with the plasma, leaving the BBN measurements unaffected. N_2 decays to N_1 and a pair of leptons, and the rate of this process, given in eq. (3.6), is proportional to two powers of the small Yukawa coupling y_1α.

⁵ There is a caveat, as one has the freedom to choose the couplings between N_{2,3} and the charged leptons. Throughout this section we assume the couplings to τ leptons to be subdominant.
In order to obtain the BBN limits in the considered scenario, we adopt the results of the recent analysis [44], in which the authors studied the impact of decaying hidden sector particles on the primordial abundances of light nuclei. The channels of interest to us are those containing charged leptons. Decays of N_2 into electrons and muons can take as long as O(100 s), since they mainly induce electromagnetic showers, which affect BBN only at later times. In contrast, τ leptons decay mostly hadronically, which can significantly alter the observed neutron-to-proton ratio; unless the abundance of N_2 is strongly suppressed, the N_2 decay time has to be ≲ 1 s.
In figure 4, black lines indicate the BBN exclusions for a representative value of m_± = 600 GeV. The left panel corresponds to dominant decays into e±/µ±, and the right panel shows the case where N_2 decays predominantly into τ± pairs. These channels are motivated by the di-lepton and di-tau searches at colliders presented in the following sections. To be conservative, we impose the decay time for the respective channel to be shorter than 1 second.

Note that by increasing the scalar masses, larger y 1α are required for DM production through Σ decays and hence the BBN bounds get weaker. LFV bounds (blue regions) are then also relaxed, see eq. (2.9).
The thick green solid line indicates the parameter space in which DM is produced only through N_2 decays; such a scenario is clearly excluded, as this curve lies below the red shaded region indicating the N_eff limits. We found numerically that, in terms of the DM density, the N_eff exclusion line is roughly set by the requirement Ωh²_{N_2→N_1}/Ωh²_DM ≈ 0.1%. The blue vertical dashed line delimits the region where the energies of the final-state leptons arising from σ± → N_2 ℓ_α are below 100 GeV, making them harder to resolve at colliders.
Here we have shown that there are regions which are excluded neither by LFV nor by N_eff and BBN arguments. In the next two sections we will demonstrate that some of these portions of the parameter space overlap with the regions accessible at colliders.

HL-LHC projections
In this section we explore the di-lepton and di-tau signatures with missing transverse energy,

    p p → σ± σ∓ → ℓ_α± ℓ_β∓ N_i N_j  (ℓ = e, µ, τ) .    (4.1)

The former were for instance already scrutinized for the scotogenic model in [54], where the authors used the data from LHC Run-1 to set limits. Here we

• calculate the projected sensitivities for HL-LHC using the same analysis techniques as presented in recent ATLAS publications (ref. [55] and ref. [56] for the di-lepton and di-tau channel, respectively). In particular, we use the results from Run-2 with an integrated luminosity of 36.1 fb⁻¹ at √s = 13 TeV.
• present both optimal and realistic projections; the former are defined by setting the branching ratio for the charged scalar decay into an HNL and the charged lepton considered in the search to 1. Such a case is, however, not feasible in our model, and hence we also define realistic projections by maximizing the respective Yukawa couplings. This leads to branching ratios smaller than 1.
• are in a position to confront the calculated projections with the bounds from BBN and structure formation (see section 3). Let us point out that this is very rarely done in papers focused on collider signatures; in particular, it has not been done for the scotogenic model.
The model files were created with FeynRules [57]. The signal processes, as shown in figure 5, were simulated at leading order (LO) with MadGraph5_aMC@NLO 2.6.3.2 [58], interfaced with Pythia8 [59] for showering the events, and Delphes 3.4.1 [60] was used for a fast detector simulation. By using these tools, we were able to reproduce the results given in [55,56].

Figure 5. Production channels for the ℓ_i ℓ_j + E_T process: pair-produced charged scalars decaying into N_{2,3} and e, µ or τ leptons.

In figure 6 we show the expected cross section for p p → σ± σ∓ pair production for different energies and masses. One can already conclude that large luminosities are needed to see a significant number of events inside the detector. For definiteness we have fixed the portal couplings. The most relevant parameter in the scalar sector is the physical mass of the charged scalar, m_±, and it is this quantity that will appear in all our sensitivity projections. We assume that m_N2 = m_N3, but in general one can take a hierarchical spectrum as well, i.e. m_N2 < m_N3. A hierarchical spectrum would, on the one hand, weaken the search strategy, because in this case the decays σ± → ℓ± N_3 are more likely to yield soft leptons due to the smaller mass gap between σ± and N_3. On the other hand, this could give

Di-tau + E_T^miss
Following the procedure outlined in [56], the following cuts were applied after event reconstruction: events must contain no b-jet but at least two tau leptons with opposite charges. The invariant mass of every tau pair has to be larger than 12 GeV and must be at least 10 GeV away from the mean visible Z boson mass, set to 79 GeV. In [56], signal regions were defined to present constraints on the visible non-SM cross section; they are chosen such that different mass gaps between σ± and N_k can be covered, and the corresponding upper limits on the cross sections are summarized there.

The final cross section in our model is determined by the product of the pair production cross section and the branching ratios of σ± into tau leptons, so it can be increased by maximizing the respective branching ratios. The latter can be achieved by making use of unconstrained parameters. In particular, we took the complex angle θ = ω − iη (see eq. (2.7)), the Majorana phase α_2 (the other Majorana phase does not affect the optimization) and the CP phase δ (allowed to float in the range (135°, 366°), which corresponds to the 3σ range from recent fits) as free parameters and maximized the following expression:

\frac{|y_{2\tau}(\omega,\eta,\alpha_2,\delta)|^2 + |y_{3\tau}(\omega,\eta,\alpha_2,\delta)|^2}{\sum_{k=2,3}\sum_i |y_{ki}(\omega,\eta,\alpha_2,\delta)|^2} \Rightarrow \text{maximal}, \quad \text{for } \omega^0, \eta^0, \alpha_2^0, \delta^0. (4.4)

For simplicity we consider the heavy RHNs N_{2,3} to have equal masses; their respective branching ratios,

BR(σ± → τ± N_k) = \frac{|y_{k\tau}(\omega^0,\eta^0,\alpha_2^0,\delta^0)|^2}{\sum_{k=2,3}\sum_i |y_{ki}(\omega^0,\eta^0,\alpha_2^0,\delta^0)|^2},

will then have nearly the same value.
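For illustration, this maximization can be carried out with a simple numerical scan. In the sketch below the matrix M is a toy stand-in for the full Casas-Ibarra factor (its numerical entries are purely illustrative, not taken from the paper), while R(z) with z = ω − iη plays the role of the complex rotation; the figure of merit is the ratio in eq. (4.4). The full δ range is scanned here for simplicity, whereas the paper restricts δ to its 3σ interval.

```python
import cmath

def yukawa(omega, eta, alpha2, delta):
    """Toy stand-in for the Casas-Ibarra-parametrized Yukawa matrix y_{k,alpha},
    k = 2, 3 and alpha = e, mu, tau. The fixed matrix M is purely illustrative."""
    z = complex(omega, -eta)
    c, s = cmath.cos(z), cmath.sin(z)
    R = [[c, s], [-s, c]]
    M = [[0.3, 0.5 * cmath.exp(1j * alpha2 / 2), 0.2 * cmath.exp(-1j * delta)],
         [0.1, 0.4, 0.6 * cmath.exp(1j * (alpha2 / 2 - delta))]]
    return [[sum(R[k][m] * M[m][a] for m in range(2)) for a in range(3)]
            for k in range(2)]

def br_tau(y):
    """Ratio of eq. (4.4): (|y_2tau|^2 + |y_3tau|^2) / sum_{k,i} |y_ki|^2."""
    num = abs(y[0][2])**2 + abs(y[1][2])**2
    den = sum(abs(y[k][a])**2 for k in range(2) for a in range(3))
    return num / den

def maximize(n=12):
    """Coarse grid search over the free parameters (omega, eta, alpha2, delta)."""
    grid = [2 * cmath.pi * i / n for i in range(n)]
    etas = [0.0, 0.5, 1.0, 2.0]   # the ratio saturates for large eta
    best = max(((o, e, a, d) for o in grid for e in etas
                for a in grid for d in grid),
               key=lambda p: br_tau(yukawa(*p)))
    return best, br_tau(yukawa(*best))
```

Since both numerator and denominator grow like e^{2η} for large η, the ratio becomes η-independent above some η^0, mirroring the behavior noted in table 3.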
Table 3. Largest possible branching ratios for the decay of σ± into τ and N_{2,3}. Above the given value of η^0, the branching ratios are to a good approximation independent of this parameter. As can be seen, IO gives rise to smaller branching ratios than NO.
The parameter values that correspond to the extremum are in what follows denoted as ω^0, η^0, α_2^0, δ^0. We have performed this procedure for both normal and inverted neutrino mass ordering. The results are given in table 3.
For calculating the sensitivity curves we fixed the portal couplings λ i such that σ ± is the lightest Z 2 -odd scalar, suppressing the additional decays into the other scalars. We scanned over the charged scalar mass as well as the heavy lepton masses m N 2,3 ; our grid spans m ± ∈ (150 GeV, 600 GeV) and m N 2,3 ∈ (10 GeV, m ± ) and we simulated 10 4 events for each point.
We compared the simulation with the recent ATLAS results and found that current sensitivities are not strong enough to place limits. The reason is twofold: first, the pair production cross section in the model is significantly smaller than the one in the simplified model used in the ATLAS analysis. Second, the analysis uses specific cuts on kinematic variables which suppress the SM background but unfortunately also cut away a significant portion of signal events. For instance, the "best case" benchmark point (where we set the branching ratio into tau leptons to 1) features a rather small surviving cross section:

Benchmark: m_N2,3 = 10 GeV, m± = 200 GeV → SR-highMass: σ_vis = (0.15 ± 0.06) fb. (4.5)

After taking the Casas-Ibarra parametrization into account and inserting realistic branching ratios, the situation gets even worse, as the cross section is further reduced due to the non-maximal branching ratio into tau leptons. This finding motivated us to go beyond the current experimental results and consider a similar search at the foreseen HL-LHC program [62] at CERN, which will deliver a final integrated luminosity of up to 4000 fb−1 at √s = 14 TeV. This would lead to a huge increase in potential signal events.
To estimate the potential of HL-LHC to test the scotogenic model, we conduct a similar analysis as in [56] but use the projected sensitivity S/√B, where S and B represent signal and background events, respectively. This formula is derived in the limit S/B ≪ 1 from the general expression for the case of exclusion limits, obtained using the procedure described in [63]. By taking the identical signal regions as in the previous analysis and assuming a similar scaling of signal and background for the increased center of mass energy and luminosity, we can redo the cut-and-count analysis for increased event rates. As can be seen in figure 7 (solid lines), this significantly enhances the testable parameter space. For such high luminosities, scalar masses of up to 420 GeV and respective HNL masses of 170 GeV can be tested, and there is even a potential discovery region for scalar masses between 200 and 300 GeV. The sharp drop for large masses is due to the decrease of the pair production cross section. The cuts on the kinematic variables, such as the transverse momentum p_T and the stransverse mass m_T2, need a sufficiently large mass gap between σ± and m_N2,3, which bounds the accessible parameter space from above, and also from the left, because the charged scalar should not be too light, as in this region the leptons are too soft. We also show, as dashed lines, the corresponding sensitivity curves for the optimal case. As expected, the sensitivities improve in this case; however, we have not found such a scenario in our numerical procedure (see table 3): the branching ratio for the decay into a pair of tau leptons can be at most around 40%. Taking the BBN and N_eff limits into account, it turns out that the discovery region (S ≥ 5) is in tension with these cosmological probes; only the parameter space around m_N2,3 ≈ 100 GeV is not excluded for the optimal case, while the sensitivity region for the realistic case is strongly ruled out.
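The small-signal relation quoted above can be checked explicitly. The sketch below uses the standard Poisson exclusion significance √(2[S − B ln(1 + S/B)]) as a stand-in for the general expression of [63] (the exact form used there may differ in detail); for S/B ≪ 1 it reduces to S/√B.

```python
import math

def z_exclusion(s, b):
    """Median exclusion significance for signal s on background b, obtained
    from the Poisson log-likelihood ratio with observed counts equal to b."""
    return math.sqrt(2.0 * (s - b * math.log1p(s / b)))

def z_approx(s, b):
    """Leading term of z_exclusion in the limit s/b << 1."""
    return s / math.sqrt(b)
```

For example, at s = 1 and b = 10^4 the two expressions agree to better than a permille.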
However, a certain portion of parameter space with S ≥ 2, around m_N2,3 ≈ 100−150 GeV and m± ≈ 200−400 GeV (realistic case), is not constrained by cosmological data.
Table 5. Largest possible branching ratios for the decay of σ± into e±, µ±. Below the given value of η^0, the branching ratios are to a good approximation independent of this parameter. Interestingly, the IO regime can feature a situation with very small branching ratios into taus, implying approximate zeros in the third column of the Yukawa matrix.

Di-lepton + E_T^miss
Now we turn to the di-lepton + E_T^miss channel. Following the procedure outlined in ref. [55], the following cuts were applied after event reconstruction and preselection: the invariant di-lepton mass should be larger than 40 GeV, and events should contain neither a b-jet with p_T > 20 GeV nor a jet with p_T > 60 GeV. In the 2ℓ + 0 jets channel, six different signal regions were defined: four aiming at different-flavor (DF) leptons in the final state and two at leptons with the same flavor (SF). All regions are inclusively defined and mainly separated by increasing cuts on the invariant mass of the lepton pair and on m_T2, ranging from m_ℓℓ > 110 GeV to m_ℓℓ > 300 GeV and from m_T2 > 100 GeV to m_T2 > 300 GeV. The 95% CL upper limits on the cross sections are summarized in table 4.
In contrast to section 4.1, we now want to minimize the expression given in eq. (4.4), in order to have as large as possible branching ratios into light leptons. Under the same assumptions as before, and taking both the NO and IO regimes into account, we obtain the results given in table 5. From the respective branching ratios one can calculate the suppression factor B of the pair production cross section according to eq. (4.8). The first sum corresponds to the SF production channel, whereas the second term corresponds to DF final states. We introduced the shorthand notation BR(N_k ℓ_i) ≡ BR(σ → ℓ_i N_k). Inserting, for instance, the branching ratios for NO (…)

Figure 8. Projected sensitivities for HL-LHC using L = 4000 fb−1 and the same analysis techniques as [55]. Given in blue are exclusion limits (S = 2) and shown in red are discovery limits (S = 5). The dashed lines correspond to the optimal branching ratio into light leptons, whereas to obtain the solid lines we used the NO values given in table 5. HNL masses in the black shaded region are in conflict with the BBN constraints discussed in section 3.4, and the green lines arise from N_eff limits: the dashed green line indicates the conservative and the solid one the aggressive limit. The latter are stronger for di-lepton searches, in contrast with the di-tau case.
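Since eq. (4.8) is not reproduced above, the following sketch encodes our reading of it: with b_ℓ = Σ_k BR(N_k ℓ), the SF part is b_e² + b_µ², the DF part is 2 b_e b_µ, and the suppression factor is their sum, B = (b_e + b_µ)². The branching ratios below are hypothetical placeholders, not the values of table 5.

```python
def suppression_factor(br):
    """br[(k, flav)] = BR(sigma -> l_flav N_k); returns (SF, DF, B) under the
    assumed combinatorics for pair production with both scalars decaying
    into e or mu."""
    b_e = sum(v for (k, f), v in br.items() if f == "e")
    b_mu = sum(v for (k, f), v in br.items() if f == "mu")
    sf = b_e**2 + b_mu**2          # same-flavor final states (ee, mumu)
    df = 2.0 * b_e * b_mu          # different-flavor final states (e mu)
    return sf, df, sf + df

# hypothetical branching ratios, for illustration only
br = {(2, "e"): 0.20, (3, "e"): 0.20, (2, "mu"): 0.15, (3, "mu"): 0.15,
      (2, "tau"): 0.15, (3, "tau"): 0.15}
sf, df, B = suppression_factor(br)
```

With these placeholder numbers, b_e = 0.4 and b_µ = 0.3, so B = 0.49, i.e. roughly half of the pair production cross section ends up in the light di-lepton channels.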
We focus on the NO case in what follows, since NO is favored by roughly 3σ in global fit analyses of neutrino oscillation data. Still, we would like to point out that if IO is realized in Nature, not only would the sensitivity of this channel improve, but one would also expect a discovery at neutrinoless double beta decay experiments, which will soon start probing the IO band [64].
The sensitivities are shown in figure 8, and one can readily infer that, as for the di-tau case shown in figure 7, cosmology disfavors a significant part of the available parameter space. Unfortunately, bounds from N_eff nearly cover the whole S = 2 region, but it is clear that the sensitivity is generally better in comparison with the di-tau channel. It can be seen by comparing figures 7 and 8 that the N_eff limits are stronger in the di-lepton case. This is due to the fact that we can have larger couplings in the di-tau case, because LFV processes yield less stringent bounds on the Yukawa couplings for the third lepton generation. Hence, N_eff bounds are weakened in that case, since larger interaction rates give rise to a smaller freeze-out abundance of N_2, suppressing the hot DM component.
These cosmological bounds start to flatten for large scalar masses (m± > 500 GeV), and it is therefore tempting to go beyond the energy range of HL-LHC and consider proposed future hadron and lepton colliders. In the following, we will stick to the di-lepton search only, as the results of this section clearly indicate a stronger sensitivity in comparison to the di-tau analysis.

Table 6. Cuts made for distinguishing signal and background at FCC-hh with a luminosity of 3 ab−1. We show the numbers of signal and background events for m_N2,3 = 500 GeV and m± = 1 TeV; we employed λ_3 = −0.27 in the analysis. No systematic errors on the background were assumed for this analysis.

Future colliders
Even though the projections for HL-LHC presented in section 4 do not generally surpass the cosmological bounds, we will demonstrate in this section that future hadron and lepton colliders lead to a different, more promising conclusion.

FCC-hh
We start by discussing the scotogenic model in the context of a future circular hadron collider, dubbed FCC-hh [65]. For this purpose, we follow ref. [66], where an opposite-sign di-lepton (OSDL) final state with missing energy is discussed, with the goal of finding TeV-scale winos and light binos.
The following variables were used in the analysis to define cuts: (i) M_eff, the scalar sum of the p_T of the leptons and jets and the missing energy (MET); (ii) p_T(ℓ_1), the larger p_T value of the two final state leptons; (iii) the invariant mass of the same-flavor opposite-sign lepton pair, mSFOS; and, finally, (iv) the transverse mass M_T. We simulated signal events for different mass parameters using the same pipeline as in section 4. To ensure that our simulations are comparable to those in [66], the most dominant backgrounds (WW and WZ) were also simulated and compared to the cut flow given in [66]. Our results are presented in table 6.
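The variables (i), (ii) and (iv) are straightforward to evaluate from reconstructed objects; a minimal sketch follows (the function names and the massless-lepton form of M_T are our conventions, not those of [66]):

```python
import math

def m_eff(lepton_pts, jet_pts, met):
    """M_eff: scalar sum of lepton and jet p_T plus the missing transverse energy."""
    return sum(lepton_pts) + sum(jet_pts) + met

def pt_l1(lepton_pts):
    """Larger p_T of the two final state leptons."""
    return max(lepton_pts)

def m_t(lep_pt, lep_phi, met, met_phi):
    """Transverse mass of a (massless) lepton and the MET vector."""
    return math.sqrt(2.0 * lep_pt * met * (1.0 - math.cos(lep_phi - met_phi)))
```

For a lepton and the MET vector back to back in azimuth, m_t reduces to 2√(p_T E_T^miss), the familiar Jacobian-peak configuration.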
Since we generally find that the numbers of signal and background events are of the same order, the significance given in eq. (4.6) is not a good approximation; hence, for exclusions we use eq. (4.7), while for discoveries we employ [63]

S_0 = \sqrt{2\left[(S+B)\log\left(1+\frac{S}{B}\right) - S\right]}. (5.2)

Figure 9. The analysis is based on a proposed search for supersymmetry presented in ref. [66]. The red and blue curves correspond to the "best case" scenario with maximized couplings to e, µ. While the S_0 = 5 region for L = 3 ab−1 is in tension with bounds from N_eff, the S_1 = 2 region extends to much higher HNL masses, indicating that a significant portion of the parameter space can be probed at FCC-hh. The situation further improves for larger luminosity. The gray curve corresponds to the "worst case", in which the couplings to the tau lepton are maximized; this case does not feature a particularly strong sensitivity. The thin black line corresponds to m_N2,3 = m±.
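Eq. (5.2) is easy to evaluate directly; like the exclusion formula, it reduces to S/√B in the small-signal limit.

```python
import math

def z_discovery(s, b):
    """Median discovery significance, eq. (5.2):
    sqrt(2 * ((s + b) * ln(1 + s/b) - s))."""
    return math.sqrt(2.0 * ((s + b) * math.log1p(s / b) - s))
```

For instance, s = 25 on b = 100 already corresponds to a significance above 2, while for s ≪ b the value coincides with s/√b.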
The sensitivity curves for S_1 = 2 and S_0 = 5 are derived for the maximal branching ratios into electrons and muons (see table 5) for normal neutrino mass ordering. This scenario is dubbed "best case" in figure 9, and it may be inferred that for such couplings FCC-hh will provide the possibility of scanning a large portion of parameter space that is not restricted by the BBN and N_eff constraints. The sensitivity for the "worst case" scenario, in which the couplings to the τ lepton are maximized (table 3), is shown in gray in figure 9; as expected, it is much weaker than for the case with dominant couplings to light leptons.

CLIC
The Compact Linear Collider (CLIC) [22] is a proposed e+e− collider that would operate in three stages with √s = 380 GeV, 1.5 TeV and 3 TeV, respectively [22,67]. In the following, we restrict ourselves to the last stage, as it offers the possibility to test a larger parameter space of Z_2-odd particle masses in comparison to the two other stages. As for FCC-hh, we consider the di-lepton signal. At e+e− colliders there are two complementary processes to produce σ± in our model: one is through the exchange of a Z boson or a photon in the s-channel; another possibility is via HNLs in the t-channel. In the latter case, elements of the Yukawa matrix enter the production cross section. However, we have checked by performing analytic calculations of the cross section that such production is subdominant for the Yukawa couplings employed in this work.
The most outstanding advantage that lepton colliders offer with respect to hadron colliders is the clean signal; without parton distribution functions to be considered, missing energy and momentum can be precisely reconstructed and the distributions in the kinematic variables are not heavily smeared out due to the vastly different energies that interacting partons can have during the collisions. Furthermore, QCD backgrounds are reduced to a degree that allows a more detailed analysis of pure electroweak processes and this is particularly relevant for the model under our consideration.
For general background rejection, we make the following preselection cuts: to remove QCD processes, we require the final state to contain no jets.9 Furthermore, we require exactly two leptons of opposite charge and that E_T^miss exceeds 100 GeV. The latter cut suppresses most of the e+e− → l+l− background.
The dominant background after the preselection cuts is e + e − → l + l − νν (l stands for any generation of charged leptons) and in the following this is the only background process under our consideration. In the case of τ leptons in the final state, only those events in which τ decays leptonically are considered; in this case there are hence four final state neutrinos. We have checked that any other SM background gives a negligible contribution after the above preselection cuts. A very promising result for the exclusion limits was found by choosing cuts as given in table 7.
Regarding cuts, the variables mSFOS and M eff , already employed for our FCC-hh analysis, were reused for CLIC because of their great potential for discriminating the scotogenic model, with its comparatively large masses of Z 2 -odd particles, against the SM. Furthermore, the pseudorapidity of the first lepton, η(l 1 ), turns out to be a very useful variable; it peaks at large values for the SM background while most of the signal events are more central in this variable.
Given the large number of events still present after the final cuts, systematic uncertainties have to be considered, as they affect the significance more strongly in this case. For that reason, the significances in eq. (5.2) for discovery and in eq. (4.7) for exclusion have to be adapted to include these systematic uncertainties. Assuming the systematic uncertainties to be described by Gaussian noise with standard deviation σ_B = εB, one finds the modified expressions given in eq. (5.3). We followed [22] and assumed a background uncertainty of 0.3% in the following analysis.

Table 7. Cuts made for distinguishing signal and background at CLIC with a center of mass energy of 3 TeV and a luminosity of 5 ab−1. We show the numbers of signal and background events together with the corresponding sensitivities for a benchmark point m_N2,3 = 500 GeV and m± = 1 TeV. The results are shown both for the "best" and "worst" case scenarios, which correspond to maximized couplings to e, µ and τ leptons, respectively. Details about the systematic errors are given in the text.
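The text above does not spell out eq. (5.3); a commonly used choice consistent with the stated assumption of Gaussian background noise is the Asimov discovery significance with background uncertainty, sketched below (this particular form may differ in detail from the one used in the paper).

```python
import math

def z_discovery_sys(s, b, eps):
    """Median discovery significance with a Gaussian background uncertainty
    sigma_b = eps * b (Asimov approximation); eps = 0 recovers eq. (5.2)."""
    if eps == 0.0:
        return math.sqrt(2.0 * ((s + b) * math.log1p(s / b) - s))
    sb2 = (eps * b) ** 2
    t1 = (s + b) * math.log((s + b) * (b + sb2) / (b * b + (s + b) * sb2))
    t2 = (b * b / sb2) * math.log1p(sb2 * s / (b * (b + sb2)))
    return math.sqrt(2.0 * (t1 - t2))
```

As ε → 0 the expression smoothly approaches the statistics-only formula, while for sizable ε the significance is degraded, which is why a 0.3% background uncertainty matters at the large CLIC event counts.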
A further suppression of the background can be achieved by utilizing the kinematics of the process. In the di-lepton search at CLIC, one can reconstruct the 4-momenta of the two HNLs (denoted by p^µ_{3,4} in what follows) by using only the initial state energy √s and the known 4-momenta of the two charged leptons (p^µ_{1,2}). Four out of these eight unknowns are fixed by 4-momentum conservation, namely Σ_{i=1}^{4} p^µ_i = (√s, 0, 0, 0). Furthermore, imposing that all intermediate and final state particles are on-shell yields (up to combinatorics) the four relations

p_3^2 = p_4^2 = m_{N_2,3}^2 , (p_1 + p_3)^2 = (p_2 + p_4)^2 = m_±^2 , (5.5)

so that in total we end up with a solvable system of equations. To separate signal and background events we use the fact that the latter typically have different kinematic properties, since the mass of the intermediate particle is, for instance, set by the W boson mass. After requiring that (i) there is a physical solution to the aforementioned equations, i.e. we demand the momenta of the invisible particles to be real valued, and (ii) there is no real valued solution if m_±^2 and m_{N_2,3}^2 in eq. (5.5) are replaced with the W boson and active neutrino masses, respectively, we reach the following effect: the number of signal events typically decreases only by 70%, whereas the background is strongly reduced; at most a few percent of such events survive this cut. In reality, the 4-momenta in eq. (5.5) are only known to a finite precision. It is therefore crucial to include the finite detector resolution by using the respective Delphes card for the CLIC detector, which is based on [68]. As such, the simulated p_T values experience a smearing according to the detector resolution.
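The reconstruction just described can be sketched end to end. The code below builds a single toy signal event in the e+e− center-of-mass frame (benchmark numbers of our own choosing, not the paper's simulation), reduces the constraints of eq. (5.5) to three linear conditions plus one remaining quadratic one, and solves for the HNL momentum; a negative discriminant would signal that no physical solution exists for the tested mass hypothesis.

```python
import math

def mdot(a, b):
    """Minkowski product with metric (+, -, -, -)."""
    return a[0] * b[0] - a[1] * b[1] - a[2] * b[2] - a[3] * b[3]

def solve_linear(rows, rhs):
    """Gauss-Jordan solve of the rank-3 system rows . x = rhs (3 eqs, 4 unknowns).
    Returns a particular solution and a null-space vector."""
    M = [r[:] + [v] for r, v in zip(rows, rhs)]
    piv_cols, r = [], 0
    for c in range(4):
        if r == 3:
            break
        p = max(range(r, 3), key=lambda i: abs(M[i][c]))
        if abs(M[p][c]) < 1e-9:
            continue
        M[r], M[p] = M[p], M[r]
        for i in range(3):
            if i != r:
                f = M[i][c] / M[r][c]
                M[i] = [x - f * y for x, y in zip(M[i], M[r])]
        piv_cols.append(c)
        r += 1
    free = [c for c in range(4) if c not in piv_cols][0]
    part, null = [0.0] * 4, [0.0] * 4
    null[free] = 1.0
    for i, c in enumerate(piv_cols):
        part[c] = M[i][4] / M[i][c]
        null[c] = -M[i][free] / M[i][c]
    return part, null

def reconstruct(p1, p2, sqrt_s, m_sc, m_n):
    """Solve eq. (5.5) for the HNL momentum p3; returns the real solutions."""
    q = [sqrt_s - p1[0] - p2[0], -p1[1] - p2[1], -p1[2] - p2[2], -p1[3] - p2[3]]
    rows = [[v[0], -v[1], -v[2], -v[3]] for v in (q, p1, p2)]
    rhs = [mdot(q, q) / 2.0,                              # from p3^2 = p4^2
           (m_sc**2 - m_n**2) / 2.0,                      # from (p1 + p3)^2 = m_sc^2
           (m_n**2 + 2.0 * mdot(p2, q) - m_sc**2) / 2.0]  # from (p2 + p4)^2 = m_sc^2
    x0, d = solve_linear(rows, rhs)
    a, b = mdot(d, d), 2.0 * mdot(x0, d)
    c = mdot(x0, x0) - m_n**2                             # remaining on-shell condition
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return []                                         # no physical solution
    return [[x0[i] + t * d[i] for i in range(4)]
            for t in ((-b + math.sqrt(disc)) / (2.0 * a),
                      (-b - math.sqrt(disc)) / (2.0 * a))]

# --- one toy event: sqrt(s) = 3 TeV, m_sigma = 1 TeV, m_N = 500 GeV ---
sqrt_s, m_sc, m_n = 3000.0, 1000.0, 500.0
E = sqrt_s / 2.0
pz = math.sqrt(E * E - m_sc * m_sc)            # |p| of each charged scalar
gam, bg = E / m_sc, pz / m_sc                  # boost factors gamma, beta*gamma
e_lep = (m_sc**2 - m_n**2) / (2.0 * m_sc)      # lepton energy in the scalar rest frame
# sigma+ flies along +z and emits its lepton along +x (rest frame);
# sigma- flies along -z and emits its lepton along +y
p1 = (gam * e_lep, e_lep, 0.0, bg * e_lep)
p2 = (gam * e_lep, 0.0, e_lep, -bg * e_lep)
p3_true = (E - p1[0], -p1[1], -p1[2], pz - p1[3])   # N from the sigma+ decay
sols = reconstruct(p1, p2, sqrt_s, m_sc, m_n)
```

For this toy event the true HNL momentum is recovered as one of the two combinatoric solutions; re-running `reconstruct` with the W-boson/neutrino mass hypothesis is then the background-rejection test described in the text.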
We wish to stress that the cuts on mSFOS and M_eff are not optimal for each parameter point, since we only used a very simple function of the model parameters. Still, the resulting exclusion limits, shown in figure 10, already indicate the great potential for testing this model setup at CLIC; the sensitivity curves for both the "best case" and the "worst case" scenario exceed the N_eff limits. Interestingly, the S_1 = 2 and S_0 = 5 curves reach similar limits in these two cases. This is a consequence of the aforementioned use of kinematics: due to the suppression of background events, we find large sensitivity values across the parameter space. Although the "worst case" features smaller event rates due to smaller branching ratios, there is still a sizable sensitivity available.
In comparison with the FCC-hh sensitivities at 3 ab−1, CLIC can reach higher HNL masses while being less sensitive to σ± masses larger than 1400 GeV; this is to be expected with a total center of mass energy of 3000 GeV. Overall, CLIC offers a testable parameter space which is comparable to the FCC-hh result. While the reach at CLIC is not as large as that of FCC-hh with 30 ab−1, we note that, unlike FCC-hh, CLIC can nearly close the available kinematic window, having sensitivity in the vicinity of the m_N2,3 = m± line. Let us note that this is the kinematic window in which coannihilations are very effective, which can strongly suppress the hot DM component and relax the N_eff limits, while the BBN limits still play a role. We wish to stress that such a small splitting between Z_2-odd fermions and scalars, with m_N2,3 ≈ 1 TeV, is exactly the setup in which we have previously shown [20] that the observed amounts of DM and the baryon asymmetry of the Universe can be simultaneously explained within the scotogenic model. It is very intriguing that future lepton colliders will have sensitivity to such a scenario.
In conclusion, we have shown in this section that both lepton and hadron colliders offer promising and complementary ways to look for the considered scotogenic scenario.

Summary of collider searches
Having explored the capability of HL-LHC, as well as future hadron (FCC-hh) and lepton (CLIC) colliders, to test the scotogenic model, we summarize the situation in figure 11. The figure contains the sensitivity curves as well as the BBN and N_eff limits already presented in figures 7 to 10. While a discovery at HL-LHC is less likely due to the tension with cosmological limits, the future colliders offer a more promising situation in which large portions of the parameter space can be tested.
While we have shown that the future collider prospects are bright, it is fair to note that cosmological and other terrestrial searches will also provide stronger limits, particularly for heavier HNLs. In what follows we list the most notable among these projects:

• New searches for LFV processes can lower the bounds on the Yukawa couplings y_2α and y_3α; for instance, MEG II [69] features a projected sensitivity of BR(µ → eγ) < 6 · 10−14, an improvement of about an order of magnitude over the previous bound.

Figure 10. CLIC sensitivity for the di-lepton search. Using maximized couplings to e, µ we obtained the red solid contour, which corresponds to a 5σ discovery, and the blue one, which represents the 2σ exclusion. The corresponding dashed contours are for the case where the τ couplings are maximized. The thin black line indicates m_N2,3 = m±. We assumed a background uncertainty of 0.3%.
• New observations of small scale structure in combination with detailed simulations of warm DM will push m NRP to larger values and this in turn requires smaller y 1α to produce the observed DM abundance via freeze-in, which will also impact BBN limits.
• Upcoming CMB experiments will measure N eff to a precision of ∆N eff = 0.06 [70] which leaves less room for a hot DM subcomponent.
To summarize, these rather complementary searches would probe the parameter space at even smaller mass ratios between σ± and the HNLs. They directly or indirectly set a stricter upper limit on the abundance of N_2, which is crucial for cosmology. Hence, in the near future these experiments will offer novel connections between collider searches and cosmological observations.

Summary and conclusions
The scotogenic model is a very popular extension of the Standard Model which can explain both neutrino masses and the origin of DM. In this paper we focused on the case of keV-scale fermionic DM, with the masses of the remaining Z_2-odd fermion and scalar degrees of freedom at O(100) GeV. In this setup there are two distinct DM production mechanisms: freeze-in through the decays of the heavy scalars, and production from the decays of the next-to-lightest Z_2-odd particle, N_2, produced via freeze-out. The large mass gap between DM and N_2 can generally allow for a sufficient suppression of the abundance arising from the latter mechanism, which is required as the corresponding DM momentum distribution is hot and could hence lead to a washout of structure at small scales. We have shown that even stronger constraints arise from the contribution of such hot DM to N_eff. We also derived BBN bounds from the requirement that N_2 particles decay within ∼1 second.

Figure 11. Summarized sensitivity curves as discussed in sections 4.2, 5.1 and 5.2. The light shaded regions correspond to S = 2 and the darker regions represent S = 5. Again, the thin black line indicates m_N2,3 = m±, whereas the thick black line shows the BBN constraints; green solid (dashed) curves represent conservative (aggressive) N_eff constraints. We assumed maximal couplings to e, µ.
Armed with the limits from cosmology, we focused on collider phenomenology; in particular, we studied the p p → σ± σ∓ → ℓ± ℓ∓ + E_T^miss channels. We have demonstrated that testing this model at colliders strongly relies on forthcoming stages of the LHC as well as on future colliders, since we were not able to extract robust bounds using the 36.1 fb−1 data. Thus, we further considered HL-LHC, FCC-hh and CLIC. For the first, we found that the testable region is in tension with the N_eff and BBN limits, whereas both CLIC and FCC-hh can probe significant regions of parameter space free from cosmological bounds. The sensitivity reach of CLIC exceeds 1 TeV for both the HNL and charged scalar masses. FCC-hh with 3 ab−1 would reach similar scalar and even larger HNL masses, and this further improves for 30 ab−1, where HNL masses up to 2 TeV would be probed.
In summary, we have shown that if Nature has chosen the scotogenic scenario, there are a number of complementary tests, ranging from terrestrial experiments to cosmological surveys, indicating that the model has a rich phenomenology and a very promising discovery potential.