Higgs boson to di-tau channel in Chargino-Neutralino searches at the LHC

We consider chargino-neutralino production, $\tilde{\chi}_2^0 \tilde{\chi}_1^\pm \to (h \tilde{\chi}_1^0)(W^\pm \tilde{\chi}_1^0)$, which results in Higgs boson final states that subsequently decay (inclusively) to leptons (either $h\rightarrow \tau^+\tau^-$ or $h\rightarrow W^+W^- \to (e^+e^-, \mu^+\mu^-, \tau^+\tau^-)+E_T^\mathrm{miss}$). Such channels are dominant in large regions of the allowed supersymmetric parameter space for many concrete supersymmetric models. The existence of leptons allows for good control over the backgrounds, rendering this channel competitive to the conventional $h\rightarrow b\bar{b}$ channel that has been previously used to impose constraints. We include hadronic decays of the $\tau$ leptons in our analysis through a $\tau$-identification algorithm. We consider integrated luminosities of 100 fb$^{-1}$, 300 fb$^{-1}$ and 3000 fb$^{-1}$, for an LHC running at $pp$ centre-of-mass energy of 14 TeV and provide the expected constraints on the $M_2$-$M_1$ plane.


Introduction
One of the primary goals of the CERN Large Hadron Collider (LHC) is to discover or rule out weak-scale supersymmetry (SUSY). So far the ATLAS and CMS collaborations have conducted a number of direct SUSY searches in many different channels. The absence of excesses over the Standard Model (SM) background in those channels has in turn placed impressive constraints on the SUSY parameter space. The limits are particularly stringent for coloured SUSY particles because of their large production cross sections. For instance, the gluino and light-flavour squarks are excluded up to masses of about 1-1.5 TeV [1][2][3][4][5][6][7], although the precise mass bounds depend on the details of the decay chains and the mass spectrum. The recent observation of a SM-like Higgs boson [12,13] also provides interesting implications and opportunities for the exploration of SUSY phenomenology. First of all, the observed mass of $\sim 125$ GeV and the measured properties of the SM-like Higgs boson are consistent with the lightest CP-even Higgs ($h$) of the minimal SUSY extension of the SM (MSSM), especially when the masses of the scalar superparticles are larger than several TeV [14][15][16][17]. Such scenarios are also consistent with the null results of direct SUSY searches and with the precise measurements of flavour-changing neutral currents (FCNC) and CP-violating observables.
Even though the scalars are anticipated to be heavy, it is possible to have relatively light gauginos in the SUSY spectrum. In particular, the electroweak (EW) gauginos can still be very light, since their production cross sections are much smaller than those of coloured SUSY particles of the same mass, and the corresponding limits are correspondingly weaker. Indeed, in concrete models, the EW gauginos tend to be much lighter than the coloured SUSY particles. This is due to the fact that the renormalisation group evolution (RGE) increases coloured SUSY particle masses at low energies, owing to their strong QCD interaction, whilst the effect is much smaller for EW gauginos. It is known [18] that if the gaugino GUT relation ($M_3 : M_2 : M_1 \sim 7 : 2 : 1$) holds, the production of EW gauginos can dominate over gluino pair production at the 14 TeV LHC due to the mass hierarchy. Moreover, many SUSY breaking scenarios predict a large mass splitting between gauginos and scalars [19][20][21][22][23][24]. Unlike the scalar masses, gaugino mass terms are prohibited by R-symmetry, and their mass generation mechanism may be very different. In scenarios where the R-symmetry is only weakly broken, the gauginos tend to be much lighter than the scalars. In such scenarios, gauginos are the only SUSY particles accessible at the LHC [25][26][27][28].
The EW gauginos, namely charginos and neutralinos, have already been searched for intensively at the LHC. ATLAS and CMS interpreted their results in the context of simplified models, where several assumptions were made. For instance, the lightest neutralino ($\tilde{\chi}_1^0$) was assumed to be bino-like, and the second-lightest neutralino ($\tilde{\chi}_2^0$) and the lighter chargino ($\tilde{\chi}_1^\pm$) wino-like and degenerate in mass, $m_{\tilde{\chi}_2^0} = m_{\tilde{\chi}_1^\pm}$. In these simplified models, particular decays of the chargino and the second-lightest neutralino with 100% branching ratios were considered. The most stringent constraints were found for the models where the $\tilde{\chi}_2^0$ and $\tilde{\chi}_1^\pm$ decay exclusively into on-shell sleptons ($\tilde{\ell}$ and $\tilde{\nu}$). In this case, the common mass $m_{\tilde{\chi}_2^0} = m_{\tilde{\chi}_1^\pm}$ is excluded up to about 700 GeV for $m_{\tilde{\chi}_1^0} \lesssim 300$ GeV [29,30]. Simplified models with chargino and neutralino decays leading to di-$\tau$ final states via on-shell $\tilde{\tau}$ and $\tilde{\nu}_\tau$ have also been searched for, and the limit was found to be $m_{\tilde{\chi}_2^0} = m_{\tilde{\chi}_1^\pm} \gtrsim 300\,(350)$ GeV for $m_{\tilde{\chi}_1^0} \lesssim 100\,(50)$ GeV [29,30]. If the sleptons and staus are heavier than the EW gauginos, the $\tilde{\chi}_1^\pm$ predominantly decays to $W^\pm$ and $\tilde{\chi}_1^0$. The $\tilde{\chi}_2^0$, on the other hand, has two possible decay modes: $\tilde{\chi}_2^0 \to Z \tilde{\chi}_1^0$ and $\tilde{\chi}_2^0 \to h \tilde{\chi}_1^0$. The former has been searched for, and the resulting limit was $m_{\tilde{\chi}_2^0} = m_{\tilde{\chi}_1^\pm} \gtrsim 350$ GeV for $m_{\tilde{\chi}_1^0} \lesssim 100$ GeV [29]. The latter process, $\tilde{\chi}_2^0 \to h \tilde{\chi}_1^0$, has also been looked for recently by employing the $h \to b\bar{b}$ channel.
This channel suffers from an overwhelmingly large $t\bar{t}$ background and only weak constraints have been obtained: the bound is $m_{\tilde{\chi}_2^0} = m_{\tilde{\chi}_1^\pm} \gtrsim 200$ GeV [31] and 300 GeV [32], and only for a very light $\tilde{\chi}_1^0$. The fact that current searches provide weak constraints is not the only reason the $\tilde{\chi}_2^0 \tilde{\chi}_1^\pm \to (h \tilde{\chi}_1^0)(W^\pm \tilde{\chi}_1^0)$ process is especially interesting for further study. Firstly, in this process one can take advantage of the discovery of the SM-like Higgs boson, making use of its properties as measured in the present dataset [33]. Identifying the observed boson as the lightest CP-even Higgs in the MSSM allows us to make a precise prediction of the $\tilde{\chi}_2^0 \tilde{\chi}_1^\pm \to (h \tilde{\chi}_1^0)(W^\pm \tilde{\chi}_1^0)$ signature, which is necessary for the limit calculation and also useful in designing optimal search strategies for this mode. Secondly, as we will see in Section 2, scenarios with heavy scalar SUSY particles may imply that the $\tilde{\chi}_2^0$ predominantly decays into $h$ and $\tilde{\chi}_1^0$. In this paper, we study the exclusion and discovery reach of the $\tilde{\chi}_2^0 \tilde{\chi}_1^\pm \to (h \tilde{\chi}_1^0)(W^\pm \tilde{\chi}_1^0)$ process, using the $W$ decays to electrons, muons or taus and the $h \to \tau\tau$ and $h \to WW \to (\tau/\ell\,\nu)(\tau/\ell\,\nu)$ modes. Our study differs from earlier studies of this channel [33,34], which have focused on the decays of the $W$ to electrons or muons and the $h \to b\bar{b}$ mode. The obvious advantage of the channel with $h \to b\bar{b}$ is its relatively large branching ratio $\mathrm{BR}(h \to b\bar{b})$. However, this channel suffers from an overwhelming $t\bar{t}$ background. Employing $h \to \tau\tau$ and $h \to WW \to (\tau/\ell\,\nu)(\tau/\ell\,\nu)$ introduces a reduction of the branching ratio, by a factor of $[\mathrm{BR}(h \to \tau\tau) + \mathrm{BR}(h \to WW \to (\tau/\ell\,\nu)(\tau/\ell\,\nu))]/\mathrm{BR}(h \to b\bar{b}) \sim 0.15$, but the $t\bar{t}$ background can be reduced significantly by vetoing $b$-jets and requiring two $\tau$s in the final state, as we will see in Section 3. We will demonstrate that the channel with $h \to \tau\tau$ and $h \to WW \to (\tau/\ell\,\nu)(\tau/\ell\,\nu)$ can provide discovery and exclusion prospects competitive with those obtained in the channel with the $h \to b\bar{b}$ mode.
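As a rough cross-check of the quoted factor of $\sim 0.15$, the ratio can be reproduced from approximate SM branching ratios for a 125 GeV Higgs boson. The sketch below uses standard SM figures (approximate PDG/LHC Higgs cross-section working group values), not numbers taken from this analysis:

```python
# Approximate SM branching ratios for m_h ~ 125 GeV (standard values,
# used here only to cross-check the ~0.15 penalty quoted in the text).
BR_H_TAUTAU = 0.063       # h -> tau tau
BR_H_WW     = 0.215       # h -> W W*
BR_H_BB     = 0.58        # h -> b bbar
BR_W_LEP    = 3 * 0.108   # W -> (e, mu or tau) + nu, summed over flavours

def leptonic_h_fraction():
    """[BR(h->tautau) + BR(h->WW->(l/tau nu)(l/tau nu))] / BR(h->bb)."""
    leptonic = BR_H_TAUTAU + BR_H_WW * BR_W_LEP**2
    return leptonic / BR_H_BB

print(round(leptonic_h_fraction(), 2))  # ~0.15
```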
Revealing the details of the EW gaugino sector is especially important: this sector is commonly believed to contain the particle that can serve as a dark matter candidate. Moreover, establishing the mass scale of the EW gauginos accessible at the LHC is important [35] for the planning of future collider programmes.
The article is organised as follows: in the next section, we provide the details of the setup we use for the $\tilde{\chi}_2^0 \tilde{\chi}_1^\pm \to (h \tilde{\chi}_1^0)(W^\pm \tilde{\chi}_1^0)$ mode and discuss the cross sections and branching ratios of the EW gauginos, with particular attention to heavy scalar scenarios with a large $\mu$-term. In Section 3 we provide details of the Monte Carlo simulation performed to generate the samples used in the analysis and describe the algorithm employed for the identification of jets originating from hadronic decays of $\tau$ leptons. We then proceed to outline the details of our discrimination analysis, which forms the basis for defining the signal regions for the SUSY parameter-space scan. The results of the parameter-space scan are presented and discussed in Section 4. We conclude in Section 5. Supplementary appendices describe the definition of a kinematic variable used in our analysis, the calculation of cross sections for signal and background, and statistical methods with low event numbers. The last appendix in particular describes a systematic way to recast our results onto other scenarios, including higgsino NLSP scenarios with a bino LSP and higgsino/wino NLSP scenarios with a gravitino LSP, as discussed, for example, in [36][37][38][39].

Figure 1. The tree-level diagrams for the relevant $\tilde{\chi}_2^0$ and $\tilde{\chi}_1^\pm$ production.
In this section we describe the setup of our analysis and clarify the assumptions we made in the chargino and neutralino sectors. Moreover, we discuss the cross sections and branching ratios of the production and decay modes relevant to our analysis.

The setup
Throughout this paper we consider a CP-conserving EW gaugino sector and assume $m_{\tilde{\chi}_2^0} \simeq m_{\tilde{\chi}_1^\pm} > m_{\tilde{\chi}_1^0}$ for simplicity. This relation is realised in many SUSY breaking scenarios, particularly in the cases where $|\mu| \gg M_2 > M_1$ and $M_2 \gg |\mu| > M_1$. The former case is motivated by the heavy scalar scenario. In the MSSM, the soft scalar masses for $H_u$ and $H_d$ and the $\mu$-parameter are related by the EW symmetry breaking condition [40]
$$\frac{m_Z^2}{2} = \frac{m_{H_d}^2 - m_{H_u}^2 \tan^2\beta}{\tan^2\beta - 1} - |\mu|^2. \qquad (2.1)$$
This condition implies that the $\mu$-parameter is expected to be of the same scale as the scalar masses, unless $m_{H_u}$ and $m_{H_d}$ are carefully tuned at the EW scale in such a way that the first term on the right-hand side of Eq. (2.1) becomes unnaturally small. In this section we assume that the scale of $\mu$ is equal to the scalar masses, with $|\mu| \gg M_2 > M_1 > 0$. However, the collider analysis described in Section 3 is applicable to other scenarios as far as the $\tilde{N} \tilde{C}^\pm \to (h\chi)(W^\pm \chi)$ topology is concerned, where $\tilde{N}$ and $\tilde{C}^\pm$ are massive BSM particles with the same mass and $\chi$ is an invisible particle with an arbitrary mass. One such scenario involves a bino LSP with a higgsino NLSP, $M_2 \gg |\mu| > M_1$. The application also includes gravitino LSP scenarios with a wino or higgsino NLSP, as discussed for example in [36][37][38][39], where the same topology is realised by $\tilde{\chi}_1^0 \tilde{\chi}_1^\pm \to (h\tilde{G})(W^\pm \tilde{G})$ with $\tilde{G}$ being the gravitino. We will return to this point at the end of this section. Fig. 1 shows the tree-level diagrams for the relevant modes of $\tilde{\chi}_2^0$ and $\tilde{\chi}_1^\pm$ production. There are two types of diagrams, which may interfere: s-channel diagrams with gauge boson exchange and t-channel diagrams with squark exchange. The t-channel diagrams are suppressed by the squark mass, and their contribution is expected to decrease as the squark mass increases.

Figure 2. The EW gaugino production cross sections as a function of the squark mass [41,42], with all the charges summed. In the plot and throughout the paper, we take $|\mu| = m_{\tilde{q}}$ for simplicity. For this specific plot, we take $M_2 = 350$ GeV and $M_1 = 100$ GeV.
The solid and dashed curves correspond to $\tan\beta = 2$ and 50, respectively. As a result of destructive interference between the s-channel gauge boson exchange and the t-channel squark exchange diagrams, the $\tilde{\chi}_2^0 \tilde{\chi}_1^\pm$ and $\tilde{\chi}_1^+ \tilde{\chi}_1^-$ production cross sections increase as the squark mass increases. For a squark mass larger than $\sim 4$ TeV, the contribution of the squark exchange diagram decouples and the cross sections become insensitive to the squark mass. It is interesting to note that the $\tilde{\chi}_2^0 \tilde{\chi}_1^\pm$ and $\tilde{\chi}_1^+ \tilde{\chi}_1^-$ cross sections are maximised in the limit of large squark mass. This gives additional motivation to perform EW gaugino searches in the context of heavy scalar scenarios. The EW gaugino production modes other than $\tilde{\chi}_2^0 \tilde{\chi}_1^\pm$ and $\tilde{\chi}_1^+ \tilde{\chi}_1^-$ have cross sections that are a few orders of magnitude smaller. This is due to the fact that these production modes contain at least one bino state or two $\tilde{W}^0$ states in the large-$\mu$ limit, and there exist no gaugino-gaugino-gauge boson couplings for those states.

The cross sections
As can be seen from Figs. 2 and 3, the $\tilde{\chi}_2^0 \tilde{\chi}_1^\pm$ cross section is more than two times larger than the $\tilde{\chi}_1^+ \tilde{\chi}_1^-$ cross section. This is mainly because $\tilde{\chi}_2^0 \tilde{\chi}_1^\pm$ contains two distinct charge modes, $\tilde{\chi}_2^0 \tilde{\chi}_1^+$ and $\tilde{\chi}_2^0 \tilde{\chi}_1^-$. It is therefore more beneficial to target the $\tilde{\chi}_2^0 \tilde{\chi}_1^\pm$ production mode in EW gaugino searches.

The branching ratios
If the scalar fermions and the MSSM Higgs bosons (other than the SM-like one) are heavier than the $\tilde{\chi}_1^\pm$ and $\tilde{\chi}_2^0$, these gaugino states decay predominantly into $\tilde{\chi}_1^0$ and SM bosons, $W^\pm$, $Z$ and $h$, if the decays are kinematically allowed. In this case, the $\tilde{\chi}_1^\pm$ decays exclusively into $W^\pm$ and $\tilde{\chi}_1^0$ with $\mathrm{BR} \sim 100\%$. The $\tilde{\chi}_2^0$, on the other hand, has two possible decay modes, $\tilde{\chi}_2^0 \to h \tilde{\chi}_1^0$ and $\tilde{\chi}_2^0 \to Z \tilde{\chi}_1^0$, whose partial widths are given in [43,44]. Fig. 4 shows the branching ratios of the $\tilde{\chi}_2^0 \to h \tilde{\chi}_1^0$ (left) and $\tilde{\chi}_2^0 \to Z \tilde{\chi}_1^0$ (right) modes as functions of $|\mu|$ in the $\mu > 0$ case. $M_2$ has been fixed to 350 GeV, while $\tan\beta$ and $M_1$ are varied: $\tan\beta = 2$ (red), 10 (blue), 50 (green) and $M_1 = 100$ GeV (solid), 1 GeV (dashed). Here and throughout the paper, we have explicitly set $m_h = 125.5$ GeV. This condition can always be realised by tuning the stop mass, which has no effect on the EW gaugino sector, and hence on our phenomenological analysis. The branching ratios were calculated using SUSY-HIT [45].
To summarise, we have demonstrated that in scenarios with large $m_{\tilde{q}}$ and $|\mu|$, the $\tilde{\chi}_2^0$ and $\tilde{\chi}_1^\pm$ become wino-like gauginos with $m_{\tilde{\chi}_2^0} \simeq m_{\tilde{\chi}_1^\pm} \simeq M_2$, and $\tilde{\chi}_2^0 \tilde{\chi}_1^\pm$ has the largest cross section among the EW gaugino production modes. We also argued that in such scenarios the $\tilde{\chi}_1^\pm$ predominantly decays into $W^\pm$ and $\tilde{\chi}_1^0$, and the $\tilde{\chi}_2^0 \to h \tilde{\chi}_1^0$ mode typically dominates the $\tilde{\chi}_2^0$ decay. These arguments provide a strong motivation to study the $pp \to \tilde{\chi}_2^0 \tilde{\chi}_1^\pm \to (h \tilde{\chi}_1^0)(W^\pm \tilde{\chi}_1^0)$ mode in EW gaugino searches in scenarios with large $m_{\tilde{q}}$ and $|\mu|$. In the following sections, we study this channel using the $W \to \ell\nu/\tau\nu$ decays together with the $h \to \tau\tau$ and $h \to WW \to (\tau/\ell\,\nu)(\tau/\ell\,\nu)$ modes. We set $m_{\tilde{q}} = \mu = m_A = 3$ TeV and $\tan\beta = 10$ throughout. This leads to $\mathrm{BR}(\tilde{\chi}_1^\pm \to W^\pm \tilde{\chi}_1^0) \simeq \mathrm{BR}(\tilde{\chi}_2^0 \to h \tilde{\chi}_1^0) \simeq 100\%$. With this parameter choice, the lightest CP-even Higgs becomes SM-like and we use the same branching ratios as those for the SM Higgs boson. Although the above parameter set is motivated by the heavy scalar scenario, our analysis can easily be recast onto other SUSY scenarios. Changing the above parameters may modify the $pp \to \tilde{\chi}_2^0 \tilde{\chi}_1^\pm$ cross section and the $\tilde{\chi}_2^0 \to h \tilde{\chi}_1^0$ branching ratio significantly, but does not alter the signal efficiencies for the signal regions defined in the next section. The discovery reach and exclusion limit for a different set of parameters can therefore be obtained by rescaling the cross section and branching ratio accordingly. Moreover, the calculated signal efficiencies can also be used for a larger class of models as far as the $\tilde{N} \tilde{C}^\pm \to (h\chi)(W^\pm \chi)$ topology is concerned, as mentioned in subsection 2.1. Neglecting finite-width effects and spin correlations, the signal efficiencies will be very similar between the $\tilde{\chi}_2^0 \tilde{\chi}_1^\pm$ channel and such topologies. We explain this point in more detail in Appendix E and provide the necessary information to perform such a re-analysis.

Monte Carlo simulation
The SUSY $pp \to \tilde{\chi}_2^0 \tilde{\chi}_1^\pm \to (h \tilde{\chi}_1^0)(W^\pm \tilde{\chi}_1^0)$ signals were generated using the HERWIG++ general-purpose event generator [46][47][48], via SUSY Les Houches Accord files used as input for the parameter points, according to the assumptions outlined in the previous section. The signal cross sections were scaled to next-to-leading order using results obtained from Prospino 2.1. The $hV$, $t\bar{t}$, $t\bar{t}h$ and $WZ$ backgrounds were also generated internally in HERWIG++ at leading order. The $Z$+jets and $W$+jets backgrounds were generated using the parton-level matrix element generator AlpGen and merged with the HERWIG++ parton shower using the MLM method [49][50][51]. The generator-level cuts on the $V$+jets backgrounds were taken to be $p_{T,j,\mathrm{min}} = 15$ GeV, $\eta_{j,\mathrm{max}} = 3.0$, $\Delta R_{j,\mathrm{min}} = 0.2$, with $m_{\ell\ell} \in (15, 160)$ GeV (or $m_{\tau\tau}$) for $V = Z$. For the $Z$+jets case we considered matrix elements with one extra parton merged to the shower, whereas for the $W$+jets case we considered matrix elements with two partons merged to the shower.
For the signal we allowed the W to decay to all lepton flavours, including taus. Likewise, for the backgrounds we consider all of the leptonic decays of the W and Z, to muons, electrons or taus. We consider the Monte Carlo samples of the Z and W backgrounds going to electrons or muons separately from those going to taus, as they would have different amounts of missing energy, leptons and jets. The Higgs boson was allowed to decay to τ + τ − or W + W − with subsequent decay of the W bosons to eν e , µν µ and τ ν τ .
In all cases of signal and background, the full parton shower, hadronization and underlying event [52] were included. All samples were generated using the MSTW2008nlo 68% PDF set. We note that we do not consider pure QCD-initiated backgrounds, since these are expected to be negligible in the high missing-transverse-momentum regime, particularly in conjunction with the requirement of isolated leptons or $\tau$s.
We define a SUSY benchmark point, C350-100, with parameters $M_2 = 350$ GeV and $M_1 = 100$ GeV (the remaining parameters are fixed as described in Section 2). This point will be used as an example to demonstrate the effect of the cuts and to aid the development of the strategy for discriminating the signal against the various backgrounds.

Tau lepton decay modes
The study of final states containing hadronically decaying $\tau$ leptons is an important and growing part of the LHC physics programme. The $\tau$ lepton has a multitude of decay modes, which we split into two categories: 'leptonic', if the visible decay products contain a single lepton, and 'hadronic', if there are one or three charged hadrons present. We label the corresponding modes $\tau_\ell$ and $\tau_h$ respectively, where $\ell$ here and elsewhere denotes an electron or a muon. The hadronic modes are further categorised as '1-prong' and '3-prong', according to the number of charged particles involved in the decay. The approximate branching ratios for these modes are:
• leptonic: $\mathrm{BR}(\tau \to \ell \nu_\ell \nu_\tau) \sim 0.35$;
• hadronic, 1-prong: $\sim 0.50$;
• hadronic, 3-prong: $\sim 0.15$.

Hadronic tau identification algorithm
Both ATLAS [54] and CMS [55] employ reconstruction and identification algorithms to identify hadronically decaying $\tau$ leptons and reject various backgrounds. Here, we do not attempt to reproduce either the ATLAS or the CMS algorithm exactly, but instead use elements from both, resulting in an algorithm that we expect to perform in an equivalent way.
We also borrow elements from [56], which examines di-$\tau$ tagging in the boosted regime. The resulting algorithm is expected to provide conservative hadronic $\tau$-tagging results, and could be improved substantially via the use of boosted decision trees (BDTs) or other advanced multivariate methods. Since we do not employ a simulation of detector effects in the present analysis, we focus on a simple cut-based algorithm for simplicity. The first part of the basic algorithm for hadronic $\tau$ identification proceeds as follows:
• Reconstruct jets with $R = 0.5$ using the Cambridge/Aachen jet algorithm as implemented in FastJet [58]. Each individual jet is then investigated for constituent hadronic tracks.
• Consider a track to be a 'seed' if it is the hardest track in the jet, has $p_T > 5$ GeV and is within $\Delta R = 0.1$ of the jet axis.
• If such a track is found, one defines inner and outer cones around it; we use $R_\mathrm{in} = 0.2$ and $R_\mathrm{out} = 0.4$.
• Require that no photons with $p_T > 2$ GeV and no charged tracks with $p_T > 1$ GeV lie within the annulus defined between $R_\mathrm{in}$ and $R_\mathrm{out}$.
(The word 'track' here and elsewhere in this article means 'charged particle'. For di-$\tau$ tagging in Higgs searches, see also Ref. [57]. If one of the taus undergoes a 3-prong decay, one may improve the analysis significantly using the information of the secondary vertex of the 3-prong tau decay [53]; this requires a dedicated study and we do not use secondary vertex information in this paper. Note also that the branching ratios quoted above do not add up to 100%, since we only consider the dominant 1-prong and 3-prong decay modes.)
The basic part of the algorithm by itself does not provide satisfactory rejection against the QCD jet background to hadronically decaying $\tau$ leptons. If a jet satisfies all the above criteria, the following variables are constructed:
• $\Delta R_\mathrm{max}$: the distance to the track furthest away from the jet axis.
• f core : the fraction of the total jet energy contained in the centre-most cone defined by ∆R < 0.1.
These variables provide strong discriminating power against QCD jets [54,59]. To reject QCD jets, we apply the following cuts:
• $\Delta R_\mathrm{max} < 0.05$.
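The two discriminating variables can be computed directly from the jet constituents. The following is a minimal sketch under our own simplifying assumptions: each constituent is a $(p_T, \eta, \phi)$ tuple, and the scalar-$p_T$ fraction is used as a stand-in for the energy fraction $f_\mathrm{core}$:

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Angular distance, with the phi difference wrapped into (-pi, pi]."""
    dphi = (phi1 - phi2 + math.pi) % (2 * math.pi) - math.pi
    return math.hypot(eta1 - eta2, dphi)

def tau_jet_variables(jet_axis, constituents):
    """Return (dR_max, f_core) for a jet.

    jet_axis:     (eta, phi) of the jet axis
    constituents: list of (pt, eta, phi) tuples; for dR_max these should be
                  the charged tracks (here all constituents are used)
    """
    dists = [delta_r(eta, phi, jet_axis[0], jet_axis[1])
             for _, eta, phi in constituents]
    dr_max = max(dists)                # furthest constituent from the jet axis
    total_pt = sum(pt for pt, _, _ in constituents)
    core_pt = sum(pt for (pt, _, _), d in zip(constituents, dists) if d < 0.1)
    return dr_max, core_pt / total_pt  # f_core: fraction in the central cone

# A narrow, tau-like jet: everything sits close to the axis.
narrow = [(50.0, 0.00, 0.00), (10.0, 0.03, 0.01)]
print(tau_jet_variables((0.0, 0.0), narrow))
```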
In Fig. 6 we show the variables $\Delta R_\mathrm{max}$ and $f_\mathrm{core}$, constructed for hadronic jets for the signal benchmark point C350-100 and the $W$+jets background. Only jets with $p_T > 20$ GeV and $|\eta| < 2.5$ were considered. In Fig. 7 we show the efficiency of $\tau$ identification versus the transverse momentum of the jet in question, $p_{T,\mathrm{jet}}$, obtained by the procedure outlined in this section. For the signal, the efficiency was defined for the identification of 'true' $\tau$ jets, defined to be those closest to the visible $\tau$ decay products taken from the Monte Carlo truth. For the $W$+jets background, the efficiency was defined with respect to any jet. The efficiency for the SUSY benchmark point C350-100 varies from around 50% in the $p_{T,\mathrm{jet}}$ region of 20-300 GeV and then drops to $\sim 20\%$ at around $p_{T,\mathrm{jet}} \sim 400$ GeV. For the $W$+jets background the efficiency starts off at $\sim 1\%$ at $p_{T,\mathrm{jet}} \sim 20$ GeV and then rises to 2-3%, roughly constant up to $p_{T,\mathrm{jet}} \sim 500$ GeV.

Figure 7. The efficiency of tagging a jet as a $\tau$-jet for the SUSY benchmark point C350-100 and the $W$+jets background (with $W \to e\nu_e/\mu\nu_\mu$). For C350-100, the efficiency was defined for the identification of 'true' $\tau$ jets, defined to be those closest to the visible $\tau$ decay products taken from the Monte Carlo truth. For $W$+jets, the efficiency was defined with respect to any jet. Jets of $p_T > 20$ GeV and $|\eta| < 2.5$ are considered in both cases.

Analysis
Since the signal events contain hard jets or isolated hard leptons, they are expected to pass the experimental triggers with high efficiency, and hence we do not consider the effect of triggering here. We define the first level of the analysis for discriminating the signal against the various backgrounds as follows:
1. Particles with $p_T > 0.5$ GeV and $|\eta| < 5.0$ are considered.
2. If isolated leptons with $p_T > 20$ GeV are found, they are placed in a separate list and removed from the list of particles. An isolated lepton is defined as either: a lepton for which the scalar sum $\sum_i p_{T,i}$ over particles within a cone of $\Delta R = 0.4$ around it is less than 20% of its transverse momentum; or a lepton with no photons of $p_T > 2$ GeV and no tracks of $p_T > 1$ GeV in the annulus $\Delta R \in (0.2, 0.4)$ around it.
3. Jet finding is performed on the list of remaining particles, using FastJet and the Cambridge/Aachen jet algorithm with parameter $R = 0.5$. Jets with $p_T > 20$ GeV are accepted.
4. Tagging of $\tau$-jets is performed as described in Section 3.2.
5. Only events with a total number of isolated leptons, $n_{\ell,\mathrm{iso}}$, and $\tau$-tagged jets, $n_{\tau,\mathrm{tag}}$, equal to 3 are accepted, i.e. we require $n_{\tau,\mathrm{tag}} + n_{\ell,\mathrm{iso}} = 3$. A hypothesis is then formed to match the topology of the SUSY events. The hypotheses vary according to the number of isolated leptons and $\tau$-tagged jets and are listed in detail in Table 1.
6. Several variables are calculated and are passed through to the second level of the analysis.
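The lepton-isolation requirement of step 2 can be sketched as follows. This is our own simplified illustration (the function names and the dict-based particle format are ours, and only the first, cone-sum criterion is implemented):

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Angular distance, with the phi difference wrapped into (-pi, pi]."""
    dphi = (phi1 - phi2 + math.pi) % (2 * math.pi) - math.pi
    return math.hypot(eta1 - eta2, dphi)

def is_isolated(lepton, others, cone=0.4, max_frac=0.2):
    """First isolation criterion of step 2: the scalar pT sum of all other
    particles within dR < cone of the lepton must stay below
    max_frac * pT(lepton). `others` must not contain the lepton itself."""
    sum_pt = sum(p["pt"] for p in others
                 if delta_r(p["eta"], p["phi"],
                            lepton["eta"], lepton["phi"]) < cone)
    return sum_pt < max_frac * lepton["pt"]

lep = {"pt": 30.0, "eta": 0.0, "phi": 0.0}
quiet = [{"pt": 2.0, "eta": 0.1, "phi": 0.0}]   # 2 GeV nearby: isolated
busy  = [{"pt": 10.0, "eta": 0.1, "phi": 0.0}]  # 10 GeV nearby: not isolated
print(is_isolated(lep, quiet), is_isolated(lep, busy))  # prints: True False
```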
$n_{\tau,\mathrm{tag}} = 3$, $n_{\ell,\mathrm{iso}} = 0$: assign the hardest two $\tau$-tagged jets to the $h$.
$n_{\tau,\mathrm{tag}} = 2$, $n_{\ell,\mathrm{iso}} = 1$: assign the hardest two $\tau$-tagged jets to the $h$.
$n_{\tau,\mathrm{tag}} = 1$, $n_{\ell,\mathrm{iso}} = 2$: if the leptons are same-sign, assign the highest-$p_T$ lepton to the $h$ along with the $\tau$-tagged jet; otherwise, assign the two highest-$p_T$ objects to the $h$.
$n_{\tau,\mathrm{tag}} = 0$, $n_{\ell,\mathrm{iso}} = 3$: if all leptons are the same sign, reject the event; otherwise, pair the two highest-$p_T$ leptons of opposite sign as the $h$.
Table 1. The hypotheses applied for the reconstruction of the supersymmetric topology, as described in the main text, according to the number of $\tau$-tagged jets, $n_{\tau,\mathrm{tag}}$, and the number of isolated leptons, $n_{\ell,\mathrm{iso}}$. In the final stage of the analysis, the $n_{\tau,\mathrm{tag}} = 3$ channel was found to reduce the significance and was not considered.
Steps 1-5 constitute what we define as the 'basic' analysis. The variables calculated in step 6 and used for further discrimination in the second-level analysis are: the transverse momentum of the di-$\tau$-tagged system, $p_{T,\tau\tau}$; the distance between the $\tau$-tagged jets, $\Delta R_{\tau,\tau}$; the distance between the di-$\tau$-tagged system and the lepton, $\Delta R_{\tau\tau,\ell}$; the missing transverse momentum, $p_T^\mathrm{miss}$; and the variable $M_\mathrm{min}$, defined in Appendix A, which is sharply peaked at low values for the $WZ$ background and falls off broadly for the signal. The variables are outlined in Table 2, where we also provide an example set of cuts, applied to the SUSY benchmark point C350-100, found to give a significance of $\sim 2.5\sigma$ for an integrated luminosity of 100 fb$^{-1}$ at 14 TeV. For completeness, we show in Table 3 the resulting cross sections after applying the analysis to the SUSY benchmark point and the different backgrounds for this example. In the final stage of the analysis the $n_{\tau,\mathrm{tag}} = 3$ channel was excluded, since it was found to reduce the significance by allowing more background. Note that this set of cuts constitutes 'signal region 1' of our full analysis.
Details of how the initial cross sections for the signal and background are calculated are given in Appendix B. We note that in the case of the $Z$+jets and $W$+jets samples, we obtained $N_\mathrm{cuts} = 0$ events after all cuts. To provide an estimate of the cross section, we use $\lambda = 3$ as an upper bound on the mean number of events: the probability of observing zero events from a Poisson distribution with mean $\lambda = 3$ is $e^{-3} \approx 0.05$, so $\lambda > 3$ is excluded at the 95% confidence level. It is useful to mention at this point that we do not apply a K-factor to the $Z$+jets or $W$+jets cross sections. The uncertainty induced by this omission can be absorbed into the systematic uncertainty due to the low (or zero) number of events in the final Monte Carlo samples. Nevertheless, since conservative estimates for these backgrounds have been assumed, K-factors of $\sim 2$ would not have a significant impact on the main conclusions of our analysis.
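The $\lambda = 3$ bound is simply the zero-observation Poisson upper limit at 95% CL, which can be verified directly (a minimal sketch; the function name is ours):

```python
import math

def poisson_zero_event_upper_limit(cl=0.95):
    """Upper limit on the Poisson mean when zero events are observed.

    With n_obs = 0, P(0 | lam) = exp(-lam), so the smallest excluded
    mean at confidence level `cl` solves exp(-lam) = 1 - cl."""
    return -math.log(1.0 - cl)

print(round(poisson_zero_event_upper_limit(), 2))  # close to 3
```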
Table 2. The variables used for further discrimination after the basic part of the analysis is applied to the signal and backgrounds, with the cuts on the benchmark point defining signal region 1. For example, $\Delta R_{\tau\tau,\ell}$, the distance between the di-$\tau$-tagged jet system and the lepton, is required to lie in $(0.1, 2.6)$.

Signal regions
To perform a scan of the supersymmetric parameter space, we define signal regions with cuts that aim to bring out the different qualities of the variables defined above. These signal regions are shown in Table 4, for the variables defined in Table 2. All signal regions exclude the $n_{\tau,\mathrm{tag}} = 3$ channel, since it was found to reduce the significance.

Results
We performed the analysis on the $M_2$-$M_1$ plane, according to the cuts defined by the signal regions of Table 4, at integrated luminosities of 100 fb$^{-1}$, 300 fb$^{-1}$ and 3000 fb$^{-1}$. We show the resulting envelope of significances in Fig. 8, where the solid curves show the 3$\sigma$ evidence region and the dashed curves show the 5$\sigma$ discovery region. We also show, in Fig. 9, the expected exclusion region at 2$\sigma$ (solid) and 3$\sigma$ (dashed). For completeness, we show the corresponding overlapping signal regions in Appendix E. There, we also provide the total cross sections for the backgrounds after the cuts of the different signal regions. These can be used to infer constraints on explicit SUSY models that contain the specific decay chain we are considering. The analysis can yield a low number of events for both signal and background, of $\mathcal{O}(10)$, so for the calculation of the significance we used the Poisson distribution to calculate the p-values. These were subsequently converted to the corresponding Gaussian standard deviations. Details of the procedure are provided in Appendix C, with supplementary material in Appendix D.
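The conversion from Poisson p-values to Gaussian standard deviations can be sketched as follows. This is our own minimal stdlib implementation, ignoring systematic uncertainties; the full treatment is the subject of Appendix C:

```python
import math
from statistics import NormalDist

def poisson_pvalue(n_obs, b):
    """One-sided p-value: probability of observing >= n_obs events
    when the background-only expectation is b."""
    # P(N >= n_obs) = 1 - sum_{k=0}^{n_obs-1} e^{-b} b^k / k!
    cdf = sum(math.exp(-b) * b**k / math.factorial(k) for k in range(n_obs))
    return 1.0 - cdf

def significance(n_obs, b):
    """Convert the p-value to the equivalent number of Gaussian sigmas."""
    return NormalDist().inv_cdf(1.0 - poisson_pvalue(n_obs, b))

# e.g. 10 events observed on an expected background of 4
print(round(significance(10, 4.0), 2))  # about 2.4 sigma
```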
Although the authors of Ref. [34] have not performed an equivalent parameter-space scan over $M_1$-$M_2$, and the details of the chosen parameters differ from the ones presented in this article, it is still interesting to compare with the potential of the final state in which the channel $\tilde{\chi}_2^0 \tilde{\chi}_1^\pm \to (h \tilde{\chi}_1^0)(W^\pm \tilde{\chi}_1^0)$ involves leptonic $W$ decays and Higgs boson decays to $b\bar{b}$. There, the authors found that it is possible to discover a signal of the process at the $\sim 5\sigma$ level with $\sim 100$ fb$^{-1}$ of luminosity, for points with $M_2 \sim 265-390$ GeV and $M_1 \sim 133-198$ GeV. Indeed, our analysis is competitive with this result, with such points falling somewhere between the 3$\sigma$ and 5$\sigma$ discovery regions at 100 fb$^{-1}$ and 300 fb$^{-1}$, as demonstrated by the black and red curves, respectively, in Fig. 8. This indicates that this channel is as important as the final state with $h \to b\bar{b}$, or at least complementary to it.

Figure 8. The envelope of significances over the signal regions of Table 4 at integrated luminosities of 100 fb$^{-1}$ (black) or 300 fb$^{-1}$ (red). The solid curves show the 3$\sigma$ evidence region, whereas the dashed curves show the 5$\sigma$ discovery region.

Conclusions
We have presented a phenomenological analysis of the channel $\tilde{\chi}_2^0 \tilde{\chi}_1^\pm \to (h \tilde{\chi}_1^0)(W^\pm \tilde{\chi}_1^0)$ using the $W \to \ell\nu/\tau\nu_\tau$ and Higgs boson channels ($h \to \tau^+\tau^-$ and $h \to W^+W^- \to$ leptons) at the LHC. Such channels are common in many concrete SUSY models, where the predictions include $\tilde{\chi}_2^0$ and $\tilde{\chi}_1^\pm$ that predominantly decay into $h$ and $W$, respectively. Our analysis has included detailed hadron-level simulation of the relevant dominant backgrounds, including the effects of the underlying event. Hadronic $\tau$ identification was modelled at hadron level with a custom-made algorithm based on the ones employed by the ATLAS and CMS experiments. We have employed a cut-based analysis on several variables that bring out the properties of the signal against those of the backgrounds. Specifically, we have constructed a mass variable, $M_\mathrm{min}$, which is sharply peaked at low values for the $WZ$ background and falls off broadly for the signal.

Figure 9. The expected exclusion regions over the signal regions of Table 4 at integrated luminosities of 100 fb$^{-1}$ (black) or 300 fb$^{-1}$ (red). The solid curves show the 2$\sigma$ exclusion boundary, whereas the dashed curves show the 3$\sigma$ boundary.
Consequently, we have demonstrated the potential for discovering or constraining the SUSY parameter space in the $M_2$-$M_1$ plane at integrated luminosities of 100 fb$^{-1}$, 300 fb$^{-1}$ and 3000 fb$^{-1}$, collected at a 14 TeV proton-proton centre-of-mass energy. The 5$\sigma$ discovery potential of our analysis reaches up to $M_2 \simeq 350$ GeV with $M_1 \lesssim 100$ GeV at the 14 TeV LHC with 300 fb$^{-1}$. This implies that a future $e^+e^-$ collider with $\sqrt{s} = 1$ TeV can play an indispensable role in covering the $M_2 < 500$ GeV region. A large part of this region can also be covered by the 14 TeV High-Luminosity LHC with 3000 fb$^{-1}$, which has a discovery potential in the $M_2 \lesssim 550$ GeV, $M_1 \lesssim 200$ GeV region.
This work serves as a first study of the use of the $h \to \tau\tau$ mode in chargino-neutralino searches. We therefore recommend further examination of this channel by the experimental collaborations, including the effects of full detector simulation, $\tau$-jet tagging and multivariate analyses.

Figure 10. The $WZ$ background topology considered in constructing the $M_\mathrm{min}$ variable.

A Definition of the M min variable
We define the $M_\mathrm{min}$ variable that we use as a handle for rejecting non-SUSY backgrounds. Although the variable is designed to reject the $WZ$ background, it can also potentially perform well against other backgrounds. There are three neutrinos in the final state: one coming from the $W$ decay, the other two from the $\tau$ lepton decays. The direction of each $\tau$-neutrino is approximately collinear with the original $\tau$ lepton direction because of the mass hierarchy $m_Z \gg m_\tau$. With this approximation, the momenta of the $\tau$ leptons can be parametrised as
$$p_{\tau_1} = \frac{1}{a}\, p_{\rho_1}, \qquad p_{\tau_2} = \frac{1}{b}\, p_{\rho_2},$$
where $p_{\rho_{1,2}}$ are the momenta of the visible decay products and $0 < a, b < 1$ are the visible momentum fractions. Events in the phenomenological analysis of this article that do not satisfy this condition on $a$ and $b$ are deemed 'unphysical' and rejected. Assuming the event topology in Fig. 10, the unknown neutrino momenta can be constrained by the mass-shell conditions of the $W$ and $Z$ bosons and the missing-momentum conditions:
$$a,\ b,\ \mathbf{p}_\nu:\ \text{5 unknowns}; \qquad m_Z,\ m_W,\ p_x^\mathrm{miss},\ p_y^\mathrm{miss}:\ \text{4 constraints}.$$
Since (number of unknowns) $-$ (number of constraints) $= 1$, all the neutrino momenta can be parametrised by a single parameter, $\theta$.
The mass-shell constraint for the $Z$ boson gives
$$ m_Z^2 = (p_{\tau_1} + p_{\tau_2})^2 = \frac{m_{\rho\rho}^2}{ab} \quad \Longrightarrow \quad ab = \frac{m_{\rho\rho}^2}{m_Z^2}, \qquad {\rm (A.2)} $$
where $m_{\rho\rho}$ is the invariant mass of the visible $\tau$ decay products.

$^{11}$ Vectors in bold typeset represent 3-vectors.
By introducing $\theta \equiv \arctan\frac{a}{b}$, $a$ and $b$ can be written as
$$ a = \frac{m_{\rho\rho}}{m_Z}\sqrt{\tan\theta}, \qquad b = \frac{m_{\rho\rho}}{m_Z}\frac{1}{\sqrt{\tan\theta}}. \qquad {\rm (A.3)} $$
The transverse components of the $W$-neutrino momentum are determined by
$$ \mathbf{t}_\nu \equiv \mathbf{p}_T^{\nu} = \mathbf{p}_T^{\rm miss} - \frac{1-a}{a}\,\mathbf{p}_T^{\rho_1} - \frac{1-b}{b}\,\mathbf{p}_T^{\rho_2}. \qquad {\rm (A.4)} $$
The mass-shell condition of the $W$ constrains the last unknown parameter, $p_z^\nu$, as
$$ p_z^\nu = \frac{c\, p_z^\ell \pm E_\ell \sqrt{c^2 - |\mathbf{t}_\ell|^2 |\mathbf{t}_\nu|^2}}{|\mathbf{t}_\ell|^2}, \qquad {\rm (A.5)} $$
where $\mathbf{t}_{\ell/\nu} = \mathbf{p}_T^{\ell/\nu}$ and $c = \mathbf{t}_\ell \cdot \mathbf{t}_\nu + m_W^2/2$. If Eq. (A.5) yields a complex solution, we simply take the real part [60, 61].
All the neutrino momenta are now parametrised by $\theta$. We define the invariant mass of the full system,
$$ M_\pm(\theta) = m\!\left(p_{\rho_1} + p_{\rho_2} + p_{\nu_{\tau_1}} + p_{\nu_{\tau_2}} + p_\ell + p_\nu^\pm\right), \qquad M_{\min} \equiv \min_{\theta,\,\pm} M_\pm(\theta), \qquad {\rm (A.6)} $$
where $\pm$ corresponds to the discrete ambiguity in Eq. (A.5).
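The scan over $\theta$ described above can be sketched numerically as follows. This is a minimal illustration (Python/NumPy), not the analysis code of this article: the grid size, the treatment of the lepton as massless and the boson mass values are our own choices.

```python
import numpy as np

M_Z, M_W = 91.19, 80.38  # illustrative boson masses in GeV

def inv_mass(p):
    """Invariant mass of a 4-vector (E, px, py, pz)."""
    E, px, py, pz = p
    return float(np.sqrt(max(E**2 - px**2 - py**2 - pz**2, 0.0)))

def m_min(p_rho1, p_rho2, p_lep, met_x, met_y, n_theta=200):
    """Minimise the (tau tau)(l nu) system mass over theta, cf. Eqs. (A.1)-(A.6)."""
    m_rr = inv_mass(p_rho1 + p_rho2)            # visible di-tau mass
    best = np.inf
    for theta in np.linspace(1e-3, np.pi / 2 - 1e-3, n_theta):
        t = np.tan(theta)
        a = (m_rr / M_Z) * np.sqrt(t)           # Eq. (A.3): a/b = tan(theta),
        b = (m_rr / M_Z) / np.sqrt(t)           # a*b = (m_rr / m_Z)^2
        if not (0.0 < a < 1.0 and 0.0 < b < 1.0):
            continue                            # 'unphysical' point, rejected
        nu1 = (1.0 - a) / a * p_rho1            # collinear tau neutrinos, Eq. (A.1)
        nu2 = (1.0 - b) / b * p_rho2
        tnx = met_x - nu1[1] - nu2[1]           # Eq. (A.4): W-neutrino pT
        tny = met_y - nu1[2] - nu2[2]
        El, plx, ply, plz = p_lep
        tl2 = plx**2 + ply**2
        c = plx * tnx + ply * tny + M_W**2 / 2.0
        disc = c**2 - tl2 * (tnx**2 + tny**2)   # discriminant of Eq. (A.5)
        root = np.sqrt(disc) if disc > 0.0 else 0.0  # complex -> take real part
        for sign in (1.0, -1.0):                # discrete +- ambiguity
            pz = (c * plz + sign * El * root) / tl2
            Enu = np.sqrt(tnx**2 + tny**2 + pz**2)
            nuW = np.array([Enu, tnx, tny, pz])
            best = min(best, inv_mass(p_rho1 + p_rho2 + nu1 + nu2 + p_lep + nuW))
    return best
```

The minimisation over the discrete $\pm$ ambiguity and over the $\theta$ grid is done jointly; a finer grid trades speed for accuracy of the minimum.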

B Calculation of the initial cross sections
For completeness we provide the branching ratios used to reproduce the initial cross sections that appear in Table 3.
• W+jets: The AlpGen tree-level cross section merged with the HERWIG++ parton shower is $\sigma(W + \text{jets}) \simeq 300$ pb per lepton flavour (electrons, muons or taus). This was calculated for two partons associated with the $W$ boson.
• Z+jets: The AlpGen tree-level cross section merged with the HERWIG++ parton shower is $\sigma(Z + \text{jets}) \simeq 300$ pb per lepton flavour (electrons, muons or taus). This sample was produced with one parton associated with the $Z$ boson.

Figure 12. The shaded region in the above probability distribution shows the probability of obtaining $N > n_{\rm obs}$ events.

C Discovery with low statistics
Discovery occurs when the probability of obtaining a given experimental result, which contains some signal, is small when compared to the expected background hypothesis. How small this probability should be is somewhat a matter of preference and convention. Nowadays, in high energy physics, these probabilities are taken to correspond to 3 standard deviations away from the assumed central value of a Gaussian for the case of 'evidence' of a signal, and 5 standard deviations for the case of 'discovery' of a signal. On the other hand, exclusion is based on the probability of having fewer events than the background alone would give, given the signal plus background hypothesis.
To be concrete, let us assume that we are performing counting experiments of events, obtaining as a result $N_i$ counts in each experiment $i$. Let us assume that in one specific experiment we obtained a measurement $n_{\rm obs}$. By some theoretical prediction, for example obtained using a Monte Carlo event generator or otherwise, the expected background number of events in this experiment is given to be $b$. We can assume that the counts $N_i$ are random variables, distributed according to some distribution $P(N_i, b)$, where $b$ is the mean of the distribution. In this case, the probability of obtaining $n_{\rm obs}$ or more events, when the mean is equal to the expected background $b$, is given by:
$$ P(N \geq n_{\rm obs},\, b) = \sum_{N = n_{\rm obs}}^{\infty} P(N, b), \qquad {\rm (C.1)} $$
where the sum can also be turned into an integral in the continuous-variable case. In simple words, according to the 'background-only' distribution, obtaining a measurement of $n_{\rm obs}$ or more events amounts to the probability of the shaded area in Fig. 12, and this probability tells you how likely $b$ is as an assumption for the mean of the distribution.
In the specific case of the Poisson distribution:
$$ P(N \geq n_{\rm obs},\, b) = \sum_{N = n_{\rm obs}}^{\infty} \frac{b^N e^{-b}}{N!}. \qquad {\rm (C.2)} $$
This sum can be shown (see Appendix D) to be equivalent to the so-called 'regularised incomplete gamma function', and $\Gamma(n_{\rm obs})$ is defined in Eq. (D.2) in the following section, for $n = n_{\rm obs}$. We can then calculate the probability for discovery. This is given by $P(N \geq n_{\rm obs},\, b)$ for $n_{\rm obs} = s + b$, where $s$ is the expected signal contribution to the event counts. This probability will differ from the one obtained using the large-sample (i.e. Gaussian) approximation, in which the significance is given approximately by $\sigma \sim s/\sqrt{b}$. For exclusion, we need to calculate the probability of having fewer than $b$ events, under the assumption that the expected number of events is $s + b$, i.e. $P(N < b,\, s + b)$.
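These Poisson tail probabilities are straightforward to evaluate numerically. A short sketch (Python with SciPy; `scipy.special.gammainc` is the regularised lower incomplete gamma function, which equals the tail sum in Eq. (C.2)):

```python
import math
from scipy.special import gammainc

def p_discovery(n_obs, b):
    """P(N >= n_obs, b): Poisson tail probability via the
    regularised lower incomplete gamma function."""
    return gammainc(n_obs, b)

def p_discovery_sum(n_obs, b, kmax=150):
    """The same tail probability as an explicit (truncated) sum."""
    term = math.exp(-b)            # k = 0 Poisson weight
    total = 0.0
    for k in range(kmax):
        if k >= n_obs:
            total += term
        term *= b / (k + 1)        # recurrence avoids large factorials
    return total

def p_exclusion(b, s):
    """P(N < b, s + b): Poisson cumulative sum up to ceil(b) - 1,
    with mean s + b."""
    mean = s + b
    term = math.exp(-mean)
    total = 0.0
    for k in range(math.ceil(b)):
        total += term
        term *= mean / (k + 1)
    return total
```

The cross-check `p_discovery_sum` makes the gamma-function identity explicit; in practice the closed form is both faster and numerically stable for large counts.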
Explicitly,
$$ P(N < b,\, s + b) = \sum_{N = 0}^{\lceil b \rceil - 1} \frac{(s+b)^N e^{-(s+b)}}{N!}, \qquad {\rm (C.3)} $$
which is nothing but the cumulative sum for the Poisson distribution.

Figure 13. Individual signal regions defined in Table 4 at integrated luminosities of 100 fb$^{-1}$ (upper left), 300 fb$^{-1}$ (upper right) and 3000 fb$^{-1}$ (bottom). The solid curves show the $3\sigma$ evidence region, whereas the dashed curves show the $5\sigma$ discovery region.

E Individual signal regions
In Figs. 13 and 14 we demonstrate the individual signal regions contributing to the envelopes shown in Figs. 8 and 9. The analyses at each luminosity are identical, and the more 'irregular' form at lower luminosities is related to the Poisson statistics that govern the smaller number of events in those cases.
In Table 5 we show the resulting cross sections after applying each of the signal regions defined in Table 4. These can be used in conjunction with the efficiency data files for the signal on the $M_2$-$M_1$ plane attached to this article$^{12}$ to construct the signal cross section for each signal region for explicit BSM scenarios with the $\tilde{N}\tilde{C}^\pm \to (h\chi)(W^\pm\chi)$ topology, where $\tilde{N}$ and $\tilde{C}^\pm$ are massive BSM particles with the same mass, $M_2$, and $\chi$ is an invisible particle with mass $M_1$. One can calculate the signal cross section for the process in question according to the given model:
$$ \sigma_{\rm sig}^{X} = \sigma(pp \to \tilde{N}\tilde{C}^\pm) \times {\rm BR}(\tilde{N} \to h\chi) \times {\rm BR}(\tilde{C}^\pm \to W^\pm\chi) \times \epsilon^{X}(M_2, M_1), $$
and use this in conjunction with the background cross section for region $X$ as given in the table to obtain the $p$-value over the parameter space. Our efficiency data considers only the process with $W \to \ell\nu$ ($\ell = e, \mu, \tau$) and the Higgs boson decaying inclusively to leptons (either $h \to \tau^+\tau^-$ or $h \to W^+W^- \to (e^+e^-, \mu^+\mu^-, \tau^+\tau^-) + E_T^{\rm miss}$).

Figure 14. Individual signal regions defined in Table 4 at integrated luminosities of 100 fb$^{-1}$ (upper left), 300 fb$^{-1}$ (upper right) and 3000 fb$^{-1}$ (bottom). The solid curves show the $2\sigma$ exclusion boundary, whereas the dashed curves show the $3\sigma$ boundary.

Table 5. The resulting sum of cross sections for the backgrounds for the different signal regions (SR) used in the analysis.
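As an illustration of this recipe, a minimal sketch follows. The production cross section, branching ratios and efficiency used below are placeholder numbers for illustration only, not values taken from this article or its data files; the Gaussian $s/\sqrt{b}$ significance is the large-sample approximation quoted in Appendix C.

```python
import math

def signal_xsec_fb(sigma_prod_fb, br_n_hchi, br_c_wchi, eff_x):
    """sigma_sig^X = sigma(pp -> N C) * BR(N -> h chi) * BR(C -> W chi) * eps^X."""
    return sigma_prod_fb * br_n_hchi * br_c_wchi * eff_x

def gaussian_significance(sigma_sig_fb, sigma_bkg_fb, lumi_fb):
    """Large-sample approximation s / sqrt(b) for a given luminosity."""
    s = sigma_sig_fb * lumi_fb   # expected signal counts
    b = sigma_bkg_fb * lumi_fb   # expected background counts
    return s / math.sqrt(b)

# Hypothetical model point: 10 fb production, BR(N -> h chi) = 0.9,
# BR(C -> W chi) = 1.0, region-X efficiency 5%, background 1.2 fb.
sigma_x = signal_xsec_fb(10.0, 0.90, 1.0, 0.05)
z = gaussian_significance(sigma_x, 1.2, 300.0)   # at 300 fb^-1
```

For small expected counts, the Poisson $p$-value of Appendix C should be used instead of the Gaussian approximation.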