Expanding the parameter space of natural supersymmetry

SUSY/SUGRA models with naturalness defined via small $\mu$ are constrained by the measured relic density, by experimental limits on the WIMP-proton cross-section, and by limits on the WIMP annihilation cross-section from indirect detection experiments. Specifically, models with small $\mu$ where the neutralino is higgsino-like lead to a dark matter relic density below the observed value. In several works this problem is overcome by assuming that dark matter is constituted of more than one component, with the neutralino relic density deficit made up by contributions from the other components. In this work we propose that dark matter consists of just one component, the lightest neutralino, and that the relic density of the higgsino-like neutralino receives contributions from the usual freeze-out mechanism along with contributions arising from the decay of hidden sector neutralinos. The model we propose is an extended MSSM model where the hidden sector is constituted of a $U(1)_X$ gauge sector along with matter charged under $U(1)_X$, which produce two neutralinos in the hidden sector. The $U(1)_X$ and the hypercharge $U(1)_Y$ of the MSSM have kinetic and Stueckelberg mass mixings, which are ultraweak. In this case the hidden sector neutralinos have ultraweak interactions with the visible sector. Because of their ultraweak interactions the hidden sector neutralinos are not thermally produced, and we assume their initial relic density to be negligible. However, they can be produced via interactions of MSSM particles in the early universe, and once produced they decay to the lightest neutralino. For a range of mixings the decays occur before big bang nucleosynthesis (BBN), producing additional relic density for the neutralino. Models of this type are testable in dark matter direct and indirect detection experiments and at the high luminosity and high energy LHC.


Introduction
The observation of the Higgs boson with mass ∼ 125 GeV [1,2] implies that the scale of weak scale supersymmetry is large, lying in the TeV region. This relatively large scale implies that the observation of supersymmetry would be more difficult than previously thought. However, the sparticle spectrum is governed by more than one mass parameter. Specifically, if the universal scalar mass lies in the several TeV region, the sfermion masses are expected to be large, while the electroweakinos could be much lighter than the sfermions. This is so because the electroweakino masses in the universal supergravity (SUGRA) model are determined by the universal gaugino mass at the grand unification scale and by the Higgs mixing parameter $\mu$. When $\mu$ is relatively small the composition of the electroweakinos is strongly influenced by its size: the smaller $\mu$ is, the larger the higgsino content of the electroweakinos. Of specific interest is the composition of the lightest neutralino, where a small $\mu$ leads to a higgsino-like neutralino. However, a higgsino-like neutralino annihilates copiously in the early universe, which causes its relic density to fall below the experimental value of $\Omega h^2 \sim 0.12$. Now small $\mu$ models arise quite naturally on the hyperbolic branch of radiative breaking of the electroweak symmetry [3][4][5] (for related works see [6][7][8][9]) and lead to an electroweakino mass spectrum which in part lies far below the sfermion masses and would be accessible at future colliders. A relatively small $\mu$ is also associated with naturalness. What is natural is to a degree subjective, and there are various technical definitions quantifying naturalness. Typically most naturalness models feature a relatively small $\mu$, and it is the relative smallness of $\mu$ (compared to, for example, the squark masses) that we will use as the criterion of naturalness in this work.
However, the content of the analysis given in this work stands on its merits irrespective of the nomenclature one assigns to the models considered. As indicated above, a higgsino-like neutralino typically leads to a relic density that falls below the experimental value. One way to overcome this problem is to assume that dark matter is made of more than one component [10][11][12][13], with each component contributing only a fraction of the total relic density. In this case the relic density contribution of the higgsino-like neutralino does not pose a problem, as the deficit can be attributed to other component(s) of dark matter. For instance, the other component could be an axion [12,14] or a Dirac fermion [10,11,13].
In this work we take a different approach. We assume that there is just one component of dark matter, the lightest neutralino, which is the lightest supersymmetric particle (LSP). We propose that its relic density arises from two sources: first, the conventional freeze-out relic density of the neutralino; second, an additional contribution arising from the decay of hidden sector neutralinos into the LSP. The hidden sector neutralinos are assumed to have negligible initial abundance and are produced via interactions of the minimal supersymmetric standard model (MSSM) particles in the early universe. We also assume their masses are larger than that of the lightest neutralino. The interactions of the hidden sector neutralinos are ultraweak, so they are long-lived, but for a range of ultraweak couplings they decay to the LSP before Big Bang Nucleosynthesis (BBN) sets in. In the specific model we propose, the hidden sector is constituted of $U(1)_X$ gauge fields and matter fields charged under $U(1)_X$, while the hidden sector is not charged under the Standard Model gauge group. We assume that the interactions between the hidden sector and the visible sector arise due to kinetic mixing [15,16] and Stueckelberg mass mixing [17][18][19][20] between the gauge field of the $U(1)_Y$ hypercharge and the gauge field of $U(1)_X$.
The outline of the rest of the paper is as follows: In Section 2 we discuss details of the extended MSSM/SUGRA model [21,22]. The analysis of the relic density of dark matter in the extended model is given in Section 3. There it is shown that the ultraweakly interacting particles produced in the early universe, i.e., in the post-inflationary period, decay into the LSP of the MSSM, and for a range of the parameter space they decay before the BBN time, producing the relic density observed today. We consider three classes of processes for the production of the ultraweakly interacting hidden sector particles, which we label ξ [23][24][25][26]. These are: $A + B \to \xi$, $A \to B + \xi$, and $A + B \to C + \xi$, where the particles A, B, C are MSSM particles in the thermal bath, while ξ is a particle not in the thermal bath in the early universe, is assumed to have negligible initial abundance, and has a mass larger than that of the lightest MSSM neutralino. In Section 4 we present the results of a scan of the model's parameter space and give a set of benchmarks which satisfy the Higgs boson mass constraint and the relic density constraint, and are chosen so that the sparticle spectrum satisfies the current experimental lower limits from the LHC. A discussion of the electroweakino spectrum along with a full collider analysis of the benchmarks is carried out in Section 5. A part of the parameter space discussed here can be probed at HL-LHC and HE-LHC [27][28][29][30] (for related works on HL-LHC and HE-LHC, see Refs. [13,[31][32][33][34]). Conclusions are given in Section 6. In the Appendix we list the $A + B \to C + \xi$ type processes that contribute to the dark matter relic density.

The model
The model we discuss contains the visible sector, a hidden sector and the interactions of the visible sector with the hidden sector, so that the total Lagrangian of the extended system has the form $\mathcal{L} = \mathcal{L}_{\rm v} + \mathcal{L}_{\rm h} + \mathcal{L}_{\rm vh}$ [35]. In our analysis we will assume that the visible sector is constituted of the MSSM Lagrangian. There are many options for the hidden sector, but to be concrete we will assume that it consists of a $U(1)_X$ gauge field and matter charged under $U(1)_X$, with the hidden sector not charged under the Standard Model gauge group. We assume that $\mathcal{L}_{\rm vh}$ arises from two sources: first, a gauge kinetic mixing between $U(1)_X$ of the hidden sector and the hypercharge $U(1)_Y$ of the Standard Model gauge group; second, a Stueckelberg mass mixing between the $U(1)_Y$ gauge field $B_\mu$ and the $U(1)_X$ gauge field $C_\mu$. Here δ is the kinetic mixing parameter and λ is the gaugino component of the vector superfield. The axion field a arises from the two additional chiral superfields $S$ and $\bar S$ that enter the model [17]. In the unitary gauge the axion a is absorbed to generate mass for the $U(1)_X$ gauge boson. A more detailed discussion of the Stueckelberg extended model can be found in [17][18][19][20]. In addition to the above we add soft terms to the Lagrangian, where $m_X$ is the mass of the $U(1)_X$ gaugino and $M_{XY}$ is the $U(1)_X$-$U(1)_Y$ gaugino mixing mass. We note that the mixing parameters $M_{XY}$ and $M_2$, even when set to zero at the grand unification scale, will assume non-vanishing values due to renormalization group evolution. Thus $M_{XY}$ has a beta-function evolution involving the $U(1)_Y$ gauge coupling $g_Y$ and $s_\delta = \delta/(1-\delta^2)^{1/2}$; similarly, the mixing parameter $M_2$ has its own beta-function evolution. In the MSSM sector we take the soft terms to consist of $m_0$, $A_0$, $m_1$, $m_2$, $m_3$, $\tan\beta$, $\mathrm{sgn}(\mu)$.
Here $m_0$ is the universal scalar mass, $A_0$ is the universal trilinear coupling, $m_1$, $m_2$, $m_3$ are the masses of the $U(1)_Y$, $SU(2)_L$ and $SU(3)_C$ gauginos, $\tan\beta = v_u/v_d$ is the ratio of the Higgs vacuum expectation values, and $\mathrm{sgn}(\mu)$ is the sign of the Higgs mixing parameter, which is chosen to be positive. We have allowed non-universalities in the gaugino mass sector, which will be useful in the analysis of Section 4 (for some relevant works on non-universalities in the gaugino masses see Ref. [36]).
The neutralino sector of the extended SUGRA model contains six neutralinos. We label the mass eigenstates as $\tilde\xi_1^0, \tilde\xi_2^0; \tilde\chi_1^0, \tilde\chi_2^0, \tilde\chi_3^0, \tilde\chi_4^0$. Since the mixing parameter δ is assumed to be very small ($\delta \lesssim 10^{-10}$), the first two neutralinos $\tilde\xi_1^0$ and $\tilde\xi_2^0$ reside mostly in the hidden sector while the remaining four $\tilde\chi_i^0$ ($i = 1, \dots, 4$) reside mostly in the MSSM sector. The details of the mixing are given in [26,34]. For the classes of models we are interested in, $\tilde\chi_1^0$ is the LSP of the entire supersymmetric sector and thus the dark matter candidate. Although the coupling between the MSSM sector and the hidden sector is ultraweak, MSSM particles can produce a significant amount of $\tilde\xi_1^0, \tilde\xi_2^0$, and once produced $\tilde\xi_1^0, \tilde\xi_2^0$ subsequently decay to the LSP $\tilde\chi_1^0$. These decays happen after the lightest neutralino $\tilde\chi_1^0$ goes through freeze-out and thus make an additional contribution to the $\tilde\chi_1^0$ relic density. The lifetime of the hidden sector neutralinos $\tilde\xi_1^0, \tilde\xi_2^0$ is less than 1-10 seconds, so their late decay is still consistent with BBN. Thus, the relic density of $\tilde\chi_1^0$ dark matter consists of two parts: (1) the normal $\tilde\chi_1^0$ freeze-out contribution; (2) a contribution arising from the decays of the hidden sector neutralinos $\tilde\xi_1^0, \tilde\xi_2^0$ to $\tilde\chi_1^0$. As will be seen in the analysis below, contribution (2) is very significant in achieving the desired relic density for dark matter.
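The lifetime window quoted above can be checked numerically by converting a decay width to a lifetime via $\tau = \hbar/\Gamma$ and comparing with the BBN bound. A minimal sketch (the example width and the 10 s cutoff are illustrative inputs, not values from the paper's benchmarks):

```python
# Hedged sketch: convert an ultraweak decay width (GeV) to a lifetime (s)
# and check it against the ~1-10 s pre-BBN window discussed in the text.
HBAR_GEV_S = 6.582e-25  # hbar in GeV*s

def lifetime_s(width_gev):
    """Lifetime in seconds for a total decay width in GeV."""
    return HBAR_GEV_S / width_gev

def decays_before_bbn(width_gev, t_bbn=10.0):
    """True if the particle decays before the assumed BBN time t_bbn (s)."""
    return lifetime_s(width_gev) < t_bbn

# Illustrative: Gamma ~ 1e-25 GeV gives a lifetime of a few seconds
tau_example = lifetime_s(1e-25)
```

This makes concrete why only a range of ultraweak couplings is viable: too small a width pushes the lifetime past the BBN window.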
We turn now to the charge-neutral gauge vector boson sector. Here the 2 × 2 mass-squared matrix of the Standard Model is enlarged to a 3 × 3 mass-squared matrix in the $U(1)_X$-extended SUGRA model. Thus after spontaneous electroweak symmetry breaking and the Stueckelberg mass growth, the mass-squared matrix of the neutral vector bosons is a 3 × 3 matrix in the basis $(C_\mu, B_\mu, A^3_\mu)$, where $A^3_\mu$ is the neutral component of the $SU(2)_L$ gauge field $A^a_\mu$, $a = 1, 2, 3$. This 3 × 3 matrix has three eigenstates: the photon, the Z boson and the Z′ boson. Assuming that the hidden sector neutralinos have masses greater than that of $\tilde\chi_1^0$, they will decay to $\tilde\chi_1^0$ via interactions involving the Z and Z′ and also via Higgs interactions. Computations of these interactions are straightforward extensions of the MSSM interactions, and details of how this can be carried out are given in [17][18][19][20].

Dark matter relic density
As noted in the introduction, models with small µ in which the lightest neutralino has a significant higgsino content do not produce enough neutralino relic density, since the neutralinos annihilate copiously in the early universe. One way to overcome this problem is multi-component dark matter, where the deficit is made up by other dark matter candidates. As also mentioned in the introduction, in this work we propose another possibility, where ultraweakly interacting particles in the hidden sector decay into the neutralino to make up the deficit. Thus the relic density consists of two parts, $\Omega h^2 = (\Omega h^2)_1 + (\Omega h^2)_2$, where $(\Omega h^2)_1$ is the relic density arising from the usual freeze-out mechanism while $(\Omega h^2)_2$ is the relic density arising from the decay of the hidden sector neutralinos into the MSSM neutralino. In terms of the comoving number density Y, the relic density of a dark matter particle of mass m is given by $\Omega h^2 = (m s_0 Y/\rho_c)\, h^2$, where $s_0$ is today's entropy density, $\rho_c$ is the critical density and h = 0.678. Below we discuss the main contributions to $(\Omega h^2)_2$. They arise mostly from $A \to B + \xi$ type processes, where the particles A and B are MSSM particles and ξ is a hidden sector particle which is assumed heavier than the LSP and decays into it before the BBN time. In a similar fashion we also have $A + B \to \xi$ and $A + B \to C + \xi$ types of processes, where C is also an MSSM particle in the thermal bath. We assume that all the processes above occur when the bath particles A, B, C are in thermal equilibrium in the early universe. We discuss these processes in further detail below.
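The relic density formula above is easy to evaluate numerically. A minimal sketch, assuming the standard cosmological values $s_0 \approx 2891\ \mathrm{cm}^{-3}$ and $\rho_c/h^2 \approx 1.054 \times 10^{-5}\ \mathrm{GeV\,cm}^{-3}$ (the paper's exact numerical inputs may differ slightly):

```python
# Hedged numerical sketch of Omega h^2 = m * s0 * Y / rho_c * h^2 using
# standard cosmological constants (assumed here, not quoted from the paper).
S0 = 2891.2               # today's entropy density in cm^-3
RHOC_OVER_H2 = 1.05371e-5  # critical density / h^2 in GeV cm^-3

def omega_h2(m_gev, Y):
    """Relic density Omega h^2 from the comoving number density Y = n/s."""
    return m_gev * Y * S0 / RHOC_OVER_H2

def yield_for_planck(m_gev, omega_h2_target=0.12):
    """Comoving yield Y needed to saturate a target Omega h^2."""
    return omega_h2_target * RHOC_OVER_H2 / (m_gev * S0)
```

For example, a 100 GeV particle saturates $\Omega h^2 = 0.12$ for a yield $Y$ of a few times $10^{-12}$.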
$A \to B + \xi$ process: Here a bath particle A decays to another bath particle B plus the hidden sector particle ξ. In the model discussed here ξ can be the hidden sector neutralino $\tilde\xi_1^0$ or $\tilde\xi_2^0$, where A and B are in thermal equilibrium while ξ is not, and ξ is assumed to have a negligible initial abundance. In this case the number density of ξ obeys the Boltzmann equation of Eq. (8), written in terms of the phase space densities of the particles, where the plus sign applies to bosons and the minus sign to fermions. In Eq. (8), the matrix element squared $|M|^2$ is summed over initial and final spin and color states. Since the initial ξ abundance is zero, $f_\xi = 0$, and thus the term corresponding to $B + \xi \to A$ in Eq. (8) vanishes. Setting $(1 \pm f_B) \approx 1$, Eq. (8) reduces to Eq. (9). One can further reduce Eq. (9) to a form involving $\Gamma_A$, the $A \to B + \xi$ partial decay width, using $f_A \approx e^{-E_A/T}$. Changing the integration variable to energy, the result can be written in terms of $K_1$, the Bessel function of the second kind and degree one, which is given by the integral representation $K_1(x) = x\int_1^\infty dt\, e^{-xt}(t^2-1)^{1/2}$. Using the conservation of entropy per comoving volume ($sR^3$ = const), one obtains an evolution equation for $Y_\xi \equiv n_\xi/s$, the number density of ξ per comoving volume. The entropy density and the Hubble parameter are given in Eqs. (15) and (16), where $g_{*S}$ and $g_*$ are the effective numbers of degrees of freedom at temperature T for the entropy and the energy density, respectively, and $M_{\rm pl}$ is the Planck mass. We introduce the fugacity z of the system as $z = z_f e^{\mu_c/T}$, with $\mu_c$ the chemical potential and $z_f = +1$ for a boson, −1 for a fermion and zero for a dark matter particle.
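Neglecting the chemical potentials and the quantum-statistics factors, the steps above lead to the standard freeze-in yield from decays, $Y_\xi \propto \Gamma_A M_{\rm pl}/m_A^2$, with the temperature integral reducing to $\int_0^\infty x^3 K_1(x)\,dx = 3\pi/2$. A hedged numerical sketch of this simplified limit (fixed $g_* = g_{*S} = 106.75$ and illustrative inputs, not the paper's benchmarks, and without the generalized $\bar K_1$ of the full treatment):

```python
# Hedged sketch of the A -> B + xi freeze-in yield from decays in the
# standard simplified limit (no chemical potentials, constant g_*).
import numpy as np
from scipy.integrate import quad
from scipy.special import kn

M_PL = 1.22e19  # Planck mass in GeV

def yield_from_decay(gamma_gev, m_a, g_a=2, gstar=106.75):
    """Freeze-in yield Y_xi from decays A -> B + xi of width gamma_gev."""
    # integral of x^3 K_1(x) over (0, inf); equals 3*pi/2 analytically
    integral, _ = quad(lambda x: x**3 * kn(1, x), 0, 50)
    pref = 45 * g_a * gamma_gev * M_PL / (
        4 * np.pi**4 * 1.66 * gstar**1.5 * m_a**2)
    return pref * integral
```

Multiplying the result by $m_{\rm DM} s_0/\rho_c$ as in the relic density formula then gives the decay contribution to $(\Omega h^2)_2$ in this simplified limit.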
In the numerical analysis we use the more exact form of $Y_\xi$ given by Eq. (17), where $T_0$ is the current temperature, $T_R$ is the reheating temperature, and $\bar K_1$ is a generalized Bessel function of the second kind of degree one, defined in Eq. (18) in terms of the function S given in [24]. We note that neglecting the effect of the chemical potentials, i.e. setting $z_B$ and $z_\xi$ to zero, gives $S \to 1$ and Eq. (18) reduces to Eq. (12). The function $\bar K_1$, which takes six arguments, namely the values $x_{A,\xi,B}$ where x = m/T and the fugacity parameters $z_{A,\xi,B}$, is evaluated using micrOMEGAs-5.0 routines. The integral of Eq. (17) is then computed to determine the relic density of ξ using Eq. (7). The hidden sector particle eventually decays to the dark matter particle of mass $m_{\rm DM}$, for which the resulting relic density follows. $A + B \to \xi$ process: Here two bath particles in thermal equilibrium combine into the hidden sector particle ξ, which can be $\tilde\xi_1^0$ or $\tilde\xi_2^0$. For example, a Higgs or Z boson can combine with a light neutralino, $\tilde\chi_1^0 + h/Z \to \tilde\xi_1^0$, or a $W^\pm$ boson can combine with a chargino, $\tilde\chi_1^\pm + W^\mp \to \tilde\xi_1^0$, etc. The Boltzmann equation for the number density of ξ in this case is Eq. (21), where the second term in the parenthesis can be dropped since the initial abundance of ξ is negligible. Using the principle of detailed balance, one can rewrite Eq. (21) as Eq. (22), where $f^{\rm EQ}_\xi \approx e^{-E_\xi/T}$. One can then see that Eq. (22) has a form similar to Eq. (9), and the computation of the relic density in this case is similar to the previous case. Thus we write the comoving number density for the $A + B \to \xi$ process as Eq. (23). Using Eqs. (15) and (16), setting $x_\xi = m_\xi/T$ and carrying out the integration in Eq. (23) gives the yield in closed form. Once ξ is produced via the $A + B \to \xi$ process, it subsequently decays to the LSP. If A is the LSP dark matter particle, as in the process $\tilde\chi_1^0 + h/Z \to \tilde\xi_1^0$, then using Eq.
(7) the contribution to the dark matter relic density from the $A + B \to \xi$ process is obtained, where in our case $g_\xi = 2$ for the hidden sector neutralinos. If A is some other bath particle heavier than the dark matter particle of mass $m_{\rm DM}$, as in the process $\tilde\chi_1^\pm + W^\mp \to \tilde\xi_1^0$, the contribution to the dark matter relic density from the $A + B \to \xi$ process is modified accordingly. $A + B \to C + \xi$ process: Here two bath particles A and B in thermal equilibrium scatter into C + ξ, where C is another bath particle in thermal equilibrium and ξ, as above, is the hidden sector particle which has ultraweak interactions with the MSSM sector and negligible initial abundance. Possible processes of this type are summarized in the Appendix. For this process the Boltzmann equation for the number density of ξ is given by Eq. (27), where again $|M|^2_{AB\to C\xi}$ is summed over initial and final spins. Recall that the differential cross-section is defined with $|\bar M|^2$ averaged over initial spins and summed over final spins. Using this relation, Eq. (27) reduces to the collision-integral form given in [37].

Model implementation and parameter scan
The model described in Sections 2 and 3 is implemented with high scale boundary conditions using the Mathematica package SARAH-4.14 [38,39], which generates files for the spectrum generator SPheno-4.0.4 [40,41]. SPheno runs the two-loop renormalization group equations (RGEs) starting from the high scale input, taking into account threshold effects, to produce the loop-corrected sparticle masses and their decay widths. SARAH also generates CalcHEP/CompHEP [42,43] files used by micrOMEGAs-5.0.9 [44] to determine the dark matter relic density via the freeze-out and freeze-in routines, as well as UFO files [45] which are input to MadGraph5 [46]. Our analysis is based on the supergravity grand unified model [21] (for a review see [22]). The tadpole equations are solved in terms of $m^2_{H_u}$ and $m^2_{H_d}$, the Higgs soft supersymmetry breaking parameters, and $v_\rho$, the VEV developed by the real scalar component of the additional chiral superfield S. This method allows us to use µ, the Higgs mixing parameter, as a high scale input of the model. Hence, the input parameters of the $U(1)_X$-extended MSSM/SUGRA [21,22] are those of the usual non-universal SUGRA model, i.e., $m_0$, $A_0$, $m_1$, $m_2$, $m_3$, $\tan\beta$ and $\mathrm{sgn}(\mu)$ as defined earlier, together with additional hidden sector parameters, all given at the GUT scale. The parameters $M_2$ and $M_{XY}$ are set to zero at the GUT scale; they acquire tiny values at the electroweak scale due to RGE running. In scanning the parameter space of the model we accept points satisfying the Higgs boson mass and dark matter relic density constraints. Taking theoretical uncertainties into consideration, the Higgs mass constraint is taken to be 125 ± 2 GeV, while the relic density is taken in the range 0.110-0.128. In generating acceptable parameter points, constraints on the sparticle spectrum implied by the LHC data are also taken into account.
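The acceptance criteria of the scan can be summarized in a small helper. A sketch assuming illustrative field names for a parameter point (the cut values are those quoted above; LHC spectrum cuts are omitted here):

```python
# Hedged sketch of the scan acceptance cuts: Higgs mass within 125 +/- 2 GeV
# and relic density in [0.110, 0.128]. Field names "mh" and "omega_h2" are
# illustrative assumptions, not the actual SLHA block names.
def accept(point):
    """Return True if a scanned point passes the Higgs mass and relic cuts."""
    return (123.0 <= point["mh"] <= 127.0
            and 0.110 <= point["omega_h2"] <= 0.128)
```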
The scan is carried out with xBIT [47], which uses PyTorch for artificial neural networks (ANN), and xSLHA [48] for reading SLHA files. In the scan, we employ an ANN with three hidden layers and 25 neurons per layer. The result of the scan is shown in Fig. 1.
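For illustration, a surrogate network of the quoted shape (three hidden layers of 25 neurons) can be sketched with a NumPy forward pass; the input dimension (nine parameters) and output dimension (two observables) are assumptions of this sketch, and xBIT itself builds and trains the network in PyTorch:

```python
# Hedged NumPy sketch of the ANN shape described in the text: three hidden
# layers, 25 neurons each. Dimensions 9 -> 2 are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(n_in=9, n_hidden=25, n_out=2, n_layers=3):
    """Random weights/biases for an MLP with n_layers hidden layers."""
    sizes = [n_in] + [n_hidden] * n_layers + [n_out]
    return [(rng.normal(0.0, 0.1, (a, b)), np.zeros(b))
            for a, b in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    """Forward pass; tanh activations on the hidden layers, linear output."""
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.tanh(x)
    return x
```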
The color coding exhibits the dark matter relic density from freeze-out processes (including coannihilation). Many points are already above the current limits from LUX [50], PandaX [51] and XENON1T [52], while others (mostly wino-like neutralinos) are out of reach as they lie below the coherent neutrino scattering floor. Here we do not yet consider contributions to the relic density from the decays of the hidden sector neutralinos. The nomenclature 'bino' ($\tilde B$), 'wino' ($\tilde W$) and 'higgsino' ($\tilde H_u$ and $\tilde H_d$) refers to the content of the neutralino LSP, which can be written as $\tilde\chi_1^0 = \alpha \tilde B + \beta \tilde W + \gamma \tilde H_u + \delta \tilde H_d$. We consider the LSP to be mainly bino-, wino- or higgsino-like according to whether $\max(\alpha, \beta, \sqrt{\gamma^2 + \delta^2})$ is $\alpha$, $\beta$ or $\sqrt{\gamma^2 + \delta^2}$, respectively. Next we switch on the hidden sector contributions to the relic density via the freeze-in mechanism. Heavy sparticles decay to the hidden sector neutralinos $\tilde\xi_1^0$ and $\tilde\xi_2^0$, which in turn decay to the visible sector LSP through the ultraweak couplings. Where it exists, the deficit in the neutralino number density (mainly for the wino-like points) is made up by the decays of $\tilde\xi_1^0$ and $\tilde\xi_2^0$, raising the expected signal above the neutrino floor. This is exhibited in panel (ii) of Fig. 1, where we see that the model points which were undetectable in the absence of the hidden sector contribution are now lifted above the neutrino floor and should be within reach of future direct detection experiments. Now the total relic density is no longer due only to freeze-out but is given by Eq. (6), which includes the hidden sector contribution, so that $R = \Omega h^2/(\Omega h^2)_{\rm PLANCK} \sim 1$. Model points lying above the XENON1T limit have been eliminated. A further elimination of model points is applied when the $\tilde\xi_1^0$ and $\tilde\xi_2^0$ lifetimes exceed 10 seconds, which results in panels (iii) and (iv) of Fig. 1. In panel (iii) we exhibit the freeze-out as well as the freeze-in contributions from $A \to B + \xi$ and $A + B \to \xi$ processes, while the total relic density is consistent with Eq. (33).
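The LSP labelling rule above can be expressed compactly. A sketch with illustrative eigenvector components (the α, β, γ, δ values below are made up for the example):

```python
# Hedged sketch of the LSP-composition labelling described in the text:
# chi_1^0 = alpha*B + beta*W + gamma*Hu + delta*Hd, labelled by the largest
# of alpha, beta, sqrt(gamma^2 + delta^2).
import math

def lsp_type(alpha, beta, gamma, delta):
    """Return 'bino', 'wino' or 'higgsino' for the dominant component."""
    higgsino = math.sqrt(gamma**2 + delta**2)
    candidates = [(abs(alpha), "bino"), (abs(beta), "wino"),
                  (higgsino, "higgsino")]
    return max(candidates)[1]
```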
In panel (iv) we show the neutralino thermally averaged annihilation cross-section versus the neutralino mass in the dominant $W^+W^-$ channel. The combined experimental limit from the indirect detection experiments of the Fermi-LAT and MAGIC Collaborations [53] is shown with the 68% confidence interval. Model points with a freeze-out relic density less than 0.006 lie above the experimental limit (upper branch), while model points with a larger freeze-out relic density and mass greater than ∼ 230 GeV lie below (or within) the current bound (lower branch). The points on the upper branch have a mostly wino-like LSP and a very compressed spectrum, with a chargino-LSP mass difference less than 0.3 GeV, while the points on the lower branch have a mostly higgsino-like LSP with a less compressed spectrum (mass gap greater than 1 GeV). The proximity of the chargino mass to the LSP mass gives a larger t-channel chargino-exchange contribution to $\tilde\chi_1^0 \tilde\chi_1^0 \to W^+W^-$ for the upper branch, which also explains its smaller relic density. We note that all the benchmarks in Table 1 lie on the lower branch and are consistent with the experimental limits from the Fermi-LAT and MAGIC Collaborations [53]. A further discussion of the compressed spectrum is given in Section 5.
The presence of the ultraweakly coupled hidden sector has expanded the allowed MSSM parameter space with parameter points which could be detected by future experiments such as XENONnT and LUX-ZEPLIN [54]. In the analysis here we have not taken into account the effect of phases, to which the neutralino-proton cross-sections are sensitive [55]; however, their inclusion would not significantly affect the conclusions of our analysis. For a thorough collider study of possible detection of electroweakinos, we select ten benchmarks from panel (iv) after removing the model points lying above the indirect detection limit (the upper branch). In Table 1 we exhibit these points, which satisfy all the constraints discussed above. The µ parameter ranges from ∼ 200 GeV to ∼ 800 GeV, which is much less than $m_1$ and $m_2$. As a result, the LSPs of all ten benchmarks are mostly higgsino-like and relatively light.
The sparticle spectrum is displayed in Table 2. The large $m_0$ and $m_3$ values render the stop and gluino masses heavy while satisfying the Higgs boson mass constraint. The electroweakinos $\tilde\chi_1^0$, $\tilde\chi_2^0$ and $\tilde\chi_1^\pm$ are relatively light (less than one TeV) with small mass splittings (less than 6 GeV). The reason for these small mass splittings and the LHC constraints on the gaugino masses will be discussed in Section 5. We also show in Table 2 the contributions to the relic density from freeze-out and freeze-in, where we see that the dominant contribution to $\Omega h^2$ comes from freeze-in. The masses of the hidden sector neutralinos and the lifetime of $\tilde\xi_1^0$ are also shown.

Figure 1: Panel (i) shows all points satisfying the Higgs boson mass constraint and $(\Omega h^2)_1 < 0.12$, while panel (ii) includes the additional constraint from the XENON1T direct detection limit and the contribution to R from the hidden sector ($R = \Omega h^2/(\Omega h^2)_{\rm PLANCK} \sim 1$). Panel (iii) shows the different relic density contributions, namely the freeze-out contribution (y-axis), all $A \to B + \xi$ freeze-in processes (x-axis) and the smaller freeze-in contribution from $A + B \to \xi$ processes, with the relative contributions indicated by color coding. Panel (iv) shows the neutralino annihilation cross-section versus the neutralino mass in the $W^+W^-$ channel with the combined limit from the Fermi-LAT and MAGIC experiments, where the 68% confidence interval is shown. The color bar to the right of panels (i), (ii) and (iv) gives the freeze-out relic density contribution $(\Omega h^2)_1$. Points corresponding to hidden sector neutralinos with lifetimes longer than 10 seconds are removed from panels (iii) and (iv).

Table 2: The Higgs boson ($h^0$) mass, the stop ($\tilde t$) mass, the gluino ($\tilde g$) mass, the relevant electroweakino ($\tilde\chi_1^0$, $\tilde\chi_2^0$, $\tilde\chi_1^\pm$) masses, the hidden sector neutralino ($\tilde\xi_1^0$, $\tilde\xi_2^0$) masses, and the relic density for the benchmarks of Table 1, computed at the electroweak scale. $\tau_0$ (in s) is the lifetime of the hidden sector neutralino $\tilde\xi_1^0$, whose decay is consistent with the BBN constraint. All masses are in GeV.
For the benchmarks of Table 1, we display in Table 3 the spin-independent proton-neutralino scattering cross-section and the thermally averaged neutralino annihilation cross-section, which satisfy the bounds from XENON1T for direct detection and from Fermi-LAT and MAGIC for indirect detection in the $W^+W^-$ channel.
Table 3: The spin-independent proton-neutralino scattering cross-section $R \times \sigma^{\rm SI}$ (second column) and the thermally averaged neutralino annihilation cross-section $R^2 \langle\sigma v\rangle_{W^+W^-}$ in the $W^+W^-$ channel (third column) for the ten benchmarks of Table 1. Here R ∼ 1 due to the hidden sector contribution.

Collider study of a compressed electroweakino spectrum
Models of natural supersymmetry requiring small µ are highly constrained by the LEP and LHC data. However, regions of parameter space consistent with the current experimental limits exist where models with relatively small µ lead to electroweakino masses accessible for discovery at HL-LHC and HE-LHC. We discuss here a class of models with these characteristics, given in Table 1 and Table 2. One characteristic of these models is that the electroweakino mass spectrum is compressed, with the chargino-lightest neutralino mass gap ranging from ∼ 1 to ∼ 4 GeV. The hierarchy between $m_1$, $m_2$ and µ determines how compressed the spectrum is. We distinguish between two cases: $m_1 \gg m_2 > \mu$ and $m_2 \gg m_1 > \mu$, where we have taken the sign of µ to be positive.
Case 1: $m_1 \gg m_2 > \mu$. Here we consider the 4 × 4 MSSM neutralino mass matrix with small µ for the case $m_Z^2 \ll |m_{1,2} \pm \mu|^2$, where $m_Z$ is the Z boson mass. In this limit the lightest neutralinos are higgsino-like, with masses given in [56], where $\theta_W$ is the weak mixing angle and µ, β, $m_1$ and $m_2$ assume their values at the electroweak scale. The lightest chargino, which is mostly higgsino, has a mass $m_{\tilde\chi_1^\pm} = \mu - K_2(\mu + m_2 \sin 2\beta)$.
In this case, the chargino-LSP and second neutralino-LSP mass differences are given by Eq. (37), where $m_W$ is the W boson mass. We note that the chargino and neutralino masses become more degenerate the larger $m_2$ is. In particular this is true for benchmarks (c) and (e) of Table 1, where the mass gap is ∼ 1.6 GeV (see the spectrum in Table 2).
Case 2: $m_2 \gg m_1 > \mu$. Here the heavy wino component can be integrated out, and the mass differences between the light chargino and the lightest neutralino and between the second neutralino and the lightest one are given by Eq. (38). This case applies in particular to benchmarks (b), (d), (i) and (j). Note that even if $m_1 = m_2$ at the GUT scale (as given in Table 1), RGE running of the gaugino mass parameters produces very different values of $m_1$ and $m_2$ at the low scale. We have seen that by requiring a small µ and satisfying the LHC constraints on the electroweakino masses, we are led to a compressed spectrum due to the large $m_1$ and $m_2$, as is evident from Eqs. (37) and (38).
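The low-scale splitting of GUT-degenerate gaugino masses follows from the one-loop RG invariance of $m_i/g_i^2$, so $m_i(M_Z) \approx m_i(M_{\rm GUT})\, g_i^2(M_Z)/g_{\rm GUT}^2$. A hedged sketch using rough weak-scale coupling values ($\alpha_{\rm GUT} \approx 1/24$ and the GUT-normalized $\alpha_1$ are assumptions of this illustration, not the paper's two-loop SPheno running):

```python
# Hedged one-loop sketch of why equal GUT-scale gaugino masses split at the
# weak scale: m_i / g_i^2 is one-loop RG invariant.
def low_scale_gaugino(m_gut, alpha_i, alpha_gut=1 / 24.0):
    """Approximate weak-scale gaugino mass from its GUT-scale value."""
    return m_gut * alpha_i / alpha_gut

ALPHA1_MZ = 0.0169  # GUT-normalized U(1)_Y coupling at M_Z (rough value)
ALPHA2_MZ = 0.0338  # SU(2)_L coupling at M_Z (rough value)

m1_low = low_scale_gaugino(1000.0, ALPHA1_MZ)  # bino mass parameter
m2_low = low_scale_gaugino(1000.0, ALPHA2_MZ)  # wino mass parameter
```

With these rough inputs, $m_2 \approx 2\, m_1$ at the weak scale even for a common GUT-scale value, consistent with the hierarchy invoked in the two cases above.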
Experiments at ATLAS and CMS have set stringent limits on chargino and neutralino masses for large mass splittings: chargino masses up to 1.1 TeV with a neutralino of mass ∼ 600 GeV have been ruled out [57]. For tiny mass splittings, charginos and neutralinos with masses less than 200 GeV have been excluded [58,59]. Searches targeting mass splittings near the electroweak scale [60][61][62], where the W and Z bosons are on their mass shell, have led ATLAS to exclude neutralinos and charginos up to 345 GeV, while CMS pushed the limit to 475 GeV. Most recently, using 139 fb$^{-1}$ of data, ATLAS performed a search for electroweakino pair production with mass splittings of 1.5 GeV to 2.4 GeV [63], setting limits on the chargino mass from 92 GeV to ∼ 190 GeV, and at 240 GeV for $\Delta m_1$ = 7 GeV. In this section, we perform a collider analysis of our benchmarks at HL-LHC and HE-LHC, which are characterized by very small mass splittings, ranging from 1.2 GeV to 3.7 GeV for $\Delta m_1$ and up to ∼ 6 GeV for $\Delta m_2$. This is a very challenging search due to the softness of the final states, but it is expected that the HL-LHC will have better electron and muon track reconstruction efficiency, even for large pseudorapidity ranges and small lepton transverse momenta (down to 2 or 3 GeV). The replacement of the inner detector in both ATLAS and CMS will extend the coverage to |η| < 4.0, and for the range 2.5 < |η| < 4.0 the electron efficiency can be ∼ 15% for the tight identification requirement and up to ∼ 40% for the loose identification requirement at $p_T$ = 5 GeV [64].

Signal and background simulation and LHC production of electroweakino pairs
The signal under study consists of second neutralino production in association with a chargino ($\tilde\chi_2^0\tilde\chi_1^\pm$) and chargino pair production ($\tilde\chi_1^+\tilde\chi_1^-$) with dileptonic final states, as shown in Fig. 2. The leptons may come from the decay of the second neutralino via $Z^*$ (left Feynman diagram) or from two charginos via $W^*$ decays (right Feynman diagram). Thus the final states we look for in this study are at least two soft leptons, jets and large missing transverse energy due to the neutralinos (and neutrinos). Because of the soft final states, an initial state radiation (ISR)-assisted topology is employed, which can boost the sparticle system, giving the final states additional transverse momentum essential for their detection. We present in Table 4 the decay branching ratios (BR) of the second neutralino and the chargino into the final states considered in Fig. 2. The leptonic channel BR of the second neutralino is ∼ 10% and that of the chargino is close to 40%. For larger mass gaps this BR decreases due to the opening of the tau decay channel [for example for benchmark (j)]. The hadronic decay channel of the chargino is dominant across all benchmarks. The production cross-sections of $\tilde\chi_2^0\tilde\chi_1^\pm$ and $\tilde\chi_1^+\tilde\chi_1^-$ at next-to-leading order (NLO) in QCD with next-to-next-to-leading logarithm (NNLL) resummation at 14 TeV and 27 TeV are calculated with Resummino-2.0.1 [65,66] using the five-flavour NNPDF23NLO PDF set. The NLO+NNLL cross-sections for the ten benchmarks of Table 1 are shown in Table 5. For the final states, the dominant SM backgrounds are W/Z/γ* + jets, diboson production, $t\bar t$, t + W/Z and dilepton production from off-shell vector bosons ($V^* \to \ell\ell$). The signal and background events are simulated at LO with up to two partons at generator level using MadGraph5_aMC@NLO-2.6.3 interfaced to LHAPDF [67] using the NNPDF30LO PDF set.
The signal and background cross-sections are then scaled to their NLO+NNLL and NLO values, respectively, at 14 TeV and at 27 TeV. The showering and hadronization of the parton level events is done with PYTHIA8 [68] using five-flavour MLM matching [69] in order to avoid double counting of jets. For the signal samples, the matching/merging scale is set at one-fourth of the chargino mass. Jets are clustered with FastJet [70] using the anti-k_t algorithm [71] with jet radius R = 0.4. Detector simulation and event reconstruction are handled by DELPHES-3.4.2 [72] using the beta card for HL-LHC and HE-LHC studies, which incorporates the improvements in lepton reconstruction efficiencies. We do not modify these settings, which seem reasonably close to what the experimental collaborations suggest. Accordingly, the electron reconstruction efficiency for p_T > 4 GeV ranges from ∼35% to ∼65% depending on the η region, whereas the muon reconstruction efficiency can have values starting at 16% for p_T > 2 GeV in the range 1.0 < |η| < 1.5. The analysis of the resulting event files and the implementation of cuts are carried out with ROOT 6 [73].
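The rescaling step amounts to multiplying the LO sample normalization by a K-factor. A minimal sketch (the cross-section numbers are hypothetical placeholders, not values from the paper):

```python
# Sketch (not the authors' code): rescaling a LO event sample to a
# higher-order cross-section, as done here for the signal (NLO+NNLL
# from Resummino) and the backgrounds (NLO). Numbers are illustrative.

def k_factor(sigma_higher_order_fb, sigma_lo_fb):
    """K-factor by which the LO sample normalization is rescaled."""
    return sigma_higher_order_fb / sigma_lo_fb

# Hypothetical example: LO cross-section from MadGraph5_aMC@NLO and the
# corresponding NLO+NNLL value from Resummino for one benchmark.
sigma_lo = 12.0        # fb, illustrative
sigma_nlo_nnll = 15.6  # fb, illustrative

k = k_factor(sigma_nlo_nnll, sigma_lo)
# Each event weight (or the overall sample normalization) is multiplied
# by k so that the sample integrates to the NLO+NNLL prediction.
```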

Analysis technique and event preselection
For such a highly compressed spectrum and very soft final states, a traditional cut-and-count analysis is inefficient and may lead to a maximal loss of the signal relative to the overwhelming SM background. In order to efficiently exploit the ISR-boosted signal topology we employ the recursive jigsaw reconstruction (RJR) technique [74,75]. The idea is to build a decay tree which describes the signal topology of interest. Each element of this tree acts as a reference frame of its own, to which reconstructed objects are assigned. We show in Fig. 3 the generic decay tree used for compressed spectra. CM stands for the center-of-mass frame from which an ISR jet and a sparticle (S) system arise. The (S) system, recoiling against the ISR, then decays to two categories of states: visible and invisible (I). The latter corresponds to massive LSPs and/or neutrinos. Here we distinguish between two visible states: (J), which contains all jets that are not identified as ISR, and (L), which contains the reconstructed leptons (electrons and muons). In the case where very little transverse momentum is imparted to the LSP due to the very compressed phase space, the missing transverse energy of the system is entirely due to the recoil against the ISR and is given by
$$E_T^{\rm miss}/|\vec p_T^{\,\rm ISR}| \simeq m_{\tilde\chi_1^0}/M,$$
where $M$ is the mass of the parent supersymmetric particle. This relation is only an approximation and assumes that the ISR system is entirely due to a single jet, so it is important to be able to properly calculate $E_T^{\rm miss}/|\vec p_T^{\,\rm ISR}|$ (denoted $R_{\rm ISR}$) in more complicated situations. The RJR technique does exactly that by applying a set of "jigsaw rules" which result in a number of observables evaluated in specific reference frames. The rules followed for the reconstruction of events are:
1. The longitudinal components of all momenta are set to zero and only the transverse ones are considered.
2. The mass of the invisible system is set to zero.
3. All of the missing transverse energy is assigned to the (I) system and the reconstructed leptons' four-momenta are assigned to the (L) system.
4. The partitioning of objects between the ISR and (J) systems aims at distinguishing ISR jets from jets resulting from the decay of the (S) system.
The partitioning between the ISR and (J) systems is done by minimizing the reconstructed masses of the (S) system, $m_S$, and of the ISR system, $m_{\rm ISR}$. Boosting to the transverse CM frame, we can write the CM mass as
$$m_{\rm CM} = \sqrt{m_{\rm ISR}^2 + |\vec p_T^{\,\rm ISR}|^2} + \sqrt{m_S^2 + |\vec p_T^{\,S}|^2},$$
where $\vec p_T^{\,\rm ISR}$ and $\vec p_T^{\,S}$ are the transverse momenta of the ISR and (S) systems, respectively, evaluated in the CM frame. With $m_{\rm CM}$ fixed, jets are assigned in such a way as to maximize $|\vec p_T^{\,\rm ISR}|$, which is equivalent to minimizing the reconstructed masses.
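The jet-partitioning jigsaw can be sketched with a brute-force scan over jet assignments. This is an illustrative implementation under the transverse approximation of rules 1 and 2 above, with hypothetical kinematics, not the authors' code; maximizing the back-to-back momentum p* of the two hemispheres at fixed total mass is the same as minimizing their masses:

```python
import itertools
import math

def tvec(px, py, m=0.0):
    """Transverse four-vector (E, px, py) with E = sqrt(m^2 + pT^2)."""
    return (math.sqrt(m * m + px * px + py * py), px, py)

def vsum(vectors):
    """Component-wise sum of four-vectors."""
    return tuple(sum(c) for c in zip(*vectors))

def mass(v):
    e, px, py = v
    return math.sqrt(max(e * e - px * px - py * py, 0.0))

def pstar(group_a, group_b):
    """Momentum of each hemisphere in their combined rest frame."""
    ma, mb = mass(vsum(group_a)), mass(vsum(group_b))
    mab = mass(vsum(group_a + group_b))
    num = (mab**2 - (ma + mb)**2) * (mab**2 - (ma - mb)**2)
    return math.sqrt(max(num, 0.0)) / (2.0 * mab)

def partition_isr(jets, s_side):
    """Assign each jet to the ISR or (J) side so as to maximize p*.
    s_side holds the objects fixed on the sparticle side (leptons + I)."""
    best = None
    for n in range(1, len(jets)):
        for isr in itertools.combinations(jets, n):
            rest = [j for j in jets if j not in isr]
            p = pstar(list(isr), rest + s_side)
            if best is None or p > best[0]:
                best = (p, isr)
    return best

# Hypothetical event: one hard ISR-like jet recoiling against a soft jet,
# soft leptons and the invisible system.
jets = [tvec(200.0, 10.0), tvec(-30.0, 5.0)]
s_side = [tvec(-80.0, -5.0), tvec(-60.0, -8.0)]  # leptons + invisible
p_best, isr_jets = partition_isr(jets, s_side)
# The hard jet is expected to land on the ISR side.
```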

Selection criteria and results
In addition to the RJR observables derived in the previous section we will use a few more variables which help discriminate the signal from the SM background:
1. The ratio $E_T^{\rm miss}/H_T^{\rm lep}$, where $H_T^{\rm lep}$ is the sum of the transverse momenta of the two leading leptons. This is a very effective variable since the signal is characterized by large missing transverse energy and soft leptons.
2. The di-tau invariant mass, $m_{\tau\tau}$, is very effective in rejecting the $Z/\gamma^* \to \tau\tau$ + jets background [76][77][78]. In order to calculate this variable, we consider the leptonic decays of the taus and assume that the taus are highly relativistic. This means that their leptonic decay products and the neutrinos are almost collinear with each other and move in the same direction as the parent tau, i.e., $\vec p^{\,\tau_i} \simeq (1+\kappa_i)\,\vec p^{\,\ell_i}$. With this in mind, the total missing transverse momentum due to the neutrinos can be written as
$$\vec p_T^{\,\rm miss} = \kappa_1\,\vec p_T^{\,\ell_1} + \kappa_2\,\vec p_T^{\,\ell_2}.$$
This is a set of two independent equations which can be solved for $\kappa_1$ and $\kappa_2$, leading to an estimate of $m_{\tau\tau}^2$ determined by
$$m_{\tau\tau}^2 = (1+\kappa_1)(1+\kappa_2)\, m_{\ell\ell}^2, \qquad m_{\tau\tau} \equiv {\rm sign}(m_{\tau\tau}^2)\sqrt{|m_{\tau\tau}^2|}.$$
The quantities $\kappa_1$ and $\kappa_2$ can assume negative values if the lepton transverse momenta are small compared to $E_T^{\rm miss}$. In the distribution of $E_T^{\rm miss}/H_T^{\rm lep}$, the SM background shows a large increase unlike the signal, which makes it an effective variable in eliminating a large part of the background. Even though an excess of signal events is clear for $E_T^{\rm miss}/H_T^{\rm lep} > 15$, this is not enough to extract the signal. The variable $R_{\rm ISR}$ peaks at one for the signal with good enough resolution to reject the background for $R_{\rm ISR} < 0.6$ while retaining most of the signal events. Compared to the background, the di-tau invariant mass, $m_{\tau\tau}$, of the signal has a larger slope on either side of the peak and can reject the $Z/\gamma^*$ + jets background, especially for negative values of this variable. The opening angle between the invisible system and the ISR is also effective, as the SM background distribution is almost featureless whereas the signal peaks at values greater than 3 rad. To design effective cuts we look at the two-dimensional distributions in the two observables $R_{\rm ISR}$ and $m_{\tau\tau}$, shown in Fig. 6. It is clear that for the signal (left panel) most events are clustered near $R_{\rm ISR} = 1$ and are almost symmetric in $m_{\tau\tau}$, whereas for the background (right panel) events cluster at smaller $R_{\rm ISR}$ and more negative $m_{\tau\tau}$. This feature can be used to reject the SM backgrounds and enhance the signal-to-background ratio.
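The collinear construction above can be sketched as follows; the lepton momenta and missing transverse momentum are hypothetical, and the solver simply inverts the two linear equations for κ_1 and κ_2 (a minimal sketch, not the analysis code):

```python
import math

# Sketch of the collinear di-tau mass used to reject
# Z/gamma* -> tau tau + jets. Each tau's momentum is taken to be
# (1 + kappa_i) times the visible lepton's momentum, and the neutrino
# momenta are absorbed into kappa_1 and kappa_2 via the MET constraint.

def solve_kappas(l1, l2, met):
    """Solve MET = k1 * pT(l1) + k2 * pT(l2) for (k1, k2) via Cramer's rule.
    l1, l2: (px, py, pz) of the leptons; met: (MET_x, MET_y)."""
    det = l1[0] * l2[1] - l1[1] * l2[0]
    k1 = (met[0] * l2[1] - met[1] * l2[0]) / det
    k2 = (l1[0] * met[1] - l1[1] * met[0]) / det
    return k1, k2

def m_tautau(l1, l2, met):
    """Signed collinear mass: sign(m^2) * sqrt(|m^2|), so that badly
    constrained events can populate negative values of the variable."""
    k1, k2 = solve_kappas(l1, l2, met)
    e1 = math.sqrt(sum(c * c for c in l1))  # massless-lepton energies
    e2 = math.sqrt(sum(c * c for c in l2))
    mll2 = 2.0 * (e1 * e2 - sum(a * b for a, b in zip(l1, l2)))
    m2 = (1.0 + k1) * (1.0 + k2) * mll2
    return math.copysign(math.sqrt(abs(m2)), m2)

# Hypothetical leptons and MET (GeV):
l1, l2 = (30.0, 0.0, 0.0), (-20.0, 10.0, 0.0)
met = (10.0, 25.0)
m = m_tautau(l1, l2, met)  # positive here since both kappas are positive
```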
We consider two signal regions, SR-2 Nj-Low and SR-2 Nj-High, targeting low and high mass ranges, respectively. We show the preselection criteria and the analysis cuts used in Table 6. The preselection criteria and analysis cuts have to be optimized for the 27 TeV case, as harder cuts are naturally required to maximize the signal-to-background ratio.
Table 6: Preselection criteria and analysis cuts (at 14 TeV and 27 TeV) applied to the signal and SM backgrounds for the two signal regions, SR-2 Nj-Low and SR-2 Nj-High, targeting low and high electroweakino mass ranges.
The selection criteria listed in Table 6 are applied to the signal and SM backgrounds simulated at 14 TeV and 27 TeV. The samples are normalized to their respective cross-sections in fb. After the cuts, the surviving signal (S) and background (B) cross-sections are used to determine the integrated luminosity necessary for an excess at the $5\sigma$ level, with the significance defined as $S/\sqrt{S+B}$, which constitutes a discovery. To illustrate the effectiveness of the cuts, we plot the distributions in $R_{\rm ISR}$ for select benchmarks after applying all the cuts of Table 6 except the ones on $R_{\rm ISR}$ itself. The distributions are shown in Fig. 7. Panels (i) and (ii) of Fig. 7 show the $R_{\rm ISR}$ distribution for point (a) at 14 TeV (left) and 27 TeV (right). The excess of signal events signifies that point (a) is discoverable at 14 TeV with 500 fb^-1 of integrated luminosity while only 150 fb^-1 is required for discovery at 27 TeV. However, for point (d), shown in panels (iii) and (iv), 500 fb^-1 is not enough for discovery at 14 TeV while this amount is sufficient to claim a discovery at 27 TeV.

Figure 7: Distributions in $R_{\rm ISR}$ for benchmarks (a) and (d) after all cuts of Table 6 except the ones on $R_{\rm ISR}$. Point (a) can be discovered at both HL-LHC and HE-LHC while point (d) is only visible at HE-LHC.
We calculate the integrated luminosity required for discovery of the ten benchmarks at HL-LHC and HE-LHC. The results are displayed in Fig. 8. One finds that points (a), (b), (c) and (d) are discoverable at HL-LHC, requiring an integrated luminosity of ∼260 fb^-1 for point (a) and ∼2060 fb^-1 for point (d). On the other hand, all benchmarks are discoverable at HE-LHC, with point (a) requiring as little as ∼70 fb^-1. The integrated luminosities required for this electroweakino mass spectrum range from ∼70 fb^-1 up to ∼1955 fb^-1 for point (i). Despite having a smaller cross-section, point (j) requires slightly less integrated luminosity for discovery than point (i). The reason is that the gauginos of this point have a larger mass gap compared to the other points, which allows us to retain more signal events and thus requires less integrated luminosity for discovery.

Figure 8: Estimated integrated luminosity for discovery of the benchmarks of Table 1 at 14 TeV and 27 TeV. Only four benchmarks are discoverable at HL-LHC while all ten points are visible at HE-LHC. In the figure, SR-2 Nj-Low/High are defined as in Table 6.
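The luminosity estimates follow directly from the $S/\sqrt{S+B}$ criterion. A minimal sketch (the post-cut cross-sections below are illustrative, not the paper's values): with $S = \sigma_S L$ and $B = \sigma_B L$, the significance grows as $\sqrt{L}$, so the $5\sigma$ luminosity is $L = 25(\sigma_S+\sigma_B)/\sigma_S^2$.

```python
import math

def lumi_for_discovery(sigma_s_fb, sigma_b_fb, z=5.0):
    """Integrated luminosity (fb^-1) for a z-sigma S/sqrt(S+B) excess,
    given the post-cut signal and background cross-sections in fb."""
    return z * z * (sigma_s_fb + sigma_b_fb) / sigma_s_fb**2

def significance(sigma_s_fb, sigma_b_fb, lumi_fb):
    """S/sqrt(S+B) at a given integrated luminosity."""
    s = sigma_s_fb * lumi_fb
    b = sigma_b_fb * lumi_fb
    return s / math.sqrt(s + b)

# Hypothetical post-cut cross-sections (fb) for one benchmark:
sigma_s, sigma_b = 0.05, 0.6
lumi = lumi_for_discovery(sigma_s, sigma_b)
# Sanity check: at this luminosity the significance is exactly 5 sigma.
```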
An integrated luminosity of 260 fb^-1 should be attainable in Run 3 of the LHC; hence point (a) should be discoverable within ∼5 months of resuming operation, while points (b)−(d) need ∼1 to ∼6 years. As for the HE-LHC, data is expected to be collected at a rate of ∼820 fb^-1/yr, which implies that point (a) would be discovered within one month of running, while points (b) to (e) require ∼1.5−9 months and the rest of the benchmarks ∼1.3−2.5 yr. Thus the HE-LHC would be an efficient machine for probing the electroweakino mass range under study.
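These running-time estimates are simple unit conversions between required luminosity and collection rate; for instance (the HE-LHC rate is quoted in the text, while the HL-LHC rate of order 600 fb^-1/yr is an inference from the quoted ∼5 months for 260 fb^-1):

```python
# Sketch: converting a required integrated luminosity into running time
# at a given data-collection rate. Rates are as discussed in the text.

def months_to_collect(lumi_fb, rate_fb_per_year):
    """Months of running needed to accumulate lumi_fb at the given rate."""
    return 12.0 * lumi_fb / rate_fb_per_year

# Point (a) at HE-LHC: ~70 fb^-1 at ~820 fb^-1/yr -> about one month.
months_a = months_to_collect(70.0, 820.0)
```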

Conclusion
The measurement of the Higgs boson mass at ∼125 GeV implies a large size for weak scale supersymmetry, lying in the several-TeV region, which makes the observation of supersymmetry at colliders more difficult. Specifically, a large value of the universal scalar mass in SUGRA models would typically lead to large sfermion masses. However, not all supersymmetric particles need be heavy. For models where µ is relatively small, lying in the few-hundred GeV region, some of the electroweakinos would be light and accessible at the LHC. Models with small µ can arise naturally on the hyperbolic branch of radiative breaking of the electroweak symmetry, and thus this branch provides a region of the parameter space accessible at colliders. However, models with small µ typically imply a significant higgsino content for the LSP neutralino, which leads to copious annihilation of neutralinos in the early universe and consequently a neutralino relic density significantly below the experimental value. One approach taken in previous works to correct this problem is to assume that dark matter is multi-component and to use dark matter candidates other than the neutralino to make up the deficit.
In this work we propose a solution where the neutralino is the only component of dark matter but its relic density arises from more than one source. One source is the conventional freeze-out mechanism which, however, produces only a fraction of the desired relic density.
To make up the deficit we assume that the visible sector couples to a hidden sector which possesses a U(1)_X gauge invariance; after kinetic mixing and Stueckelberg mass mixing, the neutralinos in the hidden sector mix with the neutralinos in the visible sector via ultraweak interactions. The hidden sector neutralinos are not thermally produced in the early universe, and we assume that their initial relic density is negligible; however, they can be produced via interactions of MSSM particles in the early universe. For a range of the mixing parameters the hidden sector neutralinos decay into the LSP before BBN and provide the remaining component of the relic density. With the proposed mechanism, models which would otherwise not be viable, as they do not provide the desired amount of dark matter, become viable.
In this work we have provided a set of benchmarks which satisfy the Higgs boson mass constraint and the relic density constraint, as well as constraints from the current limits on dark matter direct and indirect detection. The sparticle spectrum predicted in these models is consistent with the current experimental lower bounds. The proposed mechanism enlarges the parameter space of natural supersymmetric models defined by small µ. Some of the enlarged parameter space of the proposed models may be probed by direct detection experiments while other parts may be testable at HL-LHC and HE-LHC. The models considered have a very compressed electroweakino spectrum consisting of charginos and neutralinos which lie in the range 250 GeV to ∼870 GeV. However, we show that with appropriate procedures to suppress the background, some of the parameter points are discoverable at the HL-LHC with as little as 260 fb^-1 of integrated luminosity. The discoverable mass range is pushed further, to reach ∼870 GeV, at the HE-LHC with a required integrated luminosity ranging from as little as 70 fb^-1 up to ∼2000 fb^-1.

Acknowledgments: This research was supported in part by Grant PHY-1913328.
Appendix: Summary of A + B → C + ξ processes

In this appendix we summarize the A + B → C + ξ type processes relevant for the production of the ultraweakly interacting particle ξ. As noted already, in the model we discuss in this work ξ can be either of the two hidden sector neutralinos $\tilde\xi_1^0$, $\tilde\xi_2^0$. We note in passing that processes whose final state contains two hidden sector neutralinos are doubly suppressed and thus are not considered. Processes of the type A + B → C + ξ can be divided into two categories, such that the initial particles are either R-parity even or R-parity odd, as exhibited in Table 7. Here f stands for any of the three generations of quarks and leptons and f̃ for any of the three generations of squarks and sleptons; H denotes any one of the states h, H, A; V denotes a neutral gauge boson, which can be γ, Z or Z′. Our rough count gives O(10^4) processes of type A + B → C + ξ using the initial and final states listed in Table 7. For a typical A + B → C + ξ process involving one hidden sector neutralino we estimate the relic density contribution to be ∼10^-7 for the values of δ we use. Thus the total contribution of the A + B → C + ξ processes listed in Table 7 to the dark matter relic density is of size ∼10^-3, which is significantly smaller than the contributions from the A → B + ξ and A + B → ξ types of processes discussed in Section 3 for the range of parameters we consider. This indicates that A + B → C + ξ type processes do not play a significant role in our analysis.