Search for electroweak production of charginos and neutralinos in proton-proton collisions at √s = 13 TeV

A direct search for electroweak production of charginos and neutralinos is presented. Events with three or four leptons, including up to two hadronically decaying τ leptons, or with two same-sign light leptons, are analyzed. The data sample consists of 137 fb−1 of proton-proton collisions at a center-of-mass energy of 13 TeV, recorded with the CMS detector at the LHC. The results are interpreted in terms of several simplified models that represent a broad range of production and decay scenarios for charginos and neutralinos. A parametric neural network is used to target several of the models with large backgrounds. In addition, results using orthogonal search regions are provided for all the models, simplifying alternative theoretical interpretations of the results. Depending on the model hypotheses, charginos and neutralinos with masses up to values between 300 and 1450 GeV are excluded at 95% confidence level.


Introduction
JHEP04(2022)147

Supersymmetry (SUSY) is a promising extension of the standard model (SM) with the potential to solve several of the outstanding problems in particle physics by introducing a new symmetry between bosons and fermions [1][2][3][4][5]. This symmetry leads to the prediction of many new particles, called superpartners of the SM particles [6]. The addition of superpartners can mend the hierarchy problem by introducing cancellations between the large loop corrections to the mass of the Higgs boson (H). Additionally, SUSY models in which R-parity [3] is conserved, implying pair production of superpartners, provide a suitable dark matter candidate in the form of the lightest SUSY particle (LSP).
Searches for the production of SUSY particles have already been carried out in a multitude of final states by the ATLAS and CMS Collaborations at the CERN LHC; however, none has resulted in evidence of the existence of new particles. Particularly stringent exclusion limits have been placed on the production of strongly interacting superpartners (squarks and gluinos) because of the relatively large production cross section of such processes [7][8][9][10][11][12][13][14][15][16]. The absence of any evidence for the production of such particles could mean that colored superpartners are too heavy to be produced at the LHC. The lower cross sections associated with electroweak production directly lead to the currently weaker exclusion limits on the corresponding superpartner masses. This makes searches for electroweak SUSY production especially interesting: such superpartners might still be observed even if their strongly interacting counterparts are out of reach at the LHC.
In this paper, we present a search for the direct production of charginos (χ±1) and neutralinos (χ02), mixed states of the SUSY partners of the electroweak gauge and Higgs bosons, in final states with multiple leptons (ℓ). Events with three or four leptons, with up to two hadronically decaying τ leptons (τh), as well as events with two light leptons (electrons or muons) of the same sign, are analyzed. The multitude of final states in this analysis mirrors the complexity of chargino and neutralino decay modes. A data set of proton-proton (pp) collision events collected with the CMS detector from 2016 to 2018 is used, corresponding to an integrated luminosity of 137 fb−1. Previous searches in these final states were performed on data samples of approximately 36 fb−1 by ATLAS [17][18][19] and CMS [20,21], resulting in exclusion limits on chargino masses up to 1150 GeV for particular model assumptions. The use of parametric neural networks [22], which is the main novelty of this paper, together with the re-optimization of the search strategy and the increased data volume, significantly extends the reach of this search compared to previous results.

This paper is structured as follows. Section 2 contains a brief description of the CMS detector. Descriptions and diagrams of all targeted models can be found in section 3. Section 4 outlines the baseline requirements imposed to select events corresponding to final states of interest in the search. Details on the simulation of the different background and signal processes that populate such selections are included in section 5. Section 6 describes the search strategies developed to isolate the different signals from the background processes. The different techniques used for the estimation of the contributions of the SM backgrounds are detailed in section 7. A summary of all sources of uncertainty affecting the interpretation of the results is included in section 8.
A comparison between the observed data and the expectations for the different signal extraction strategies is presented in section 9. Section 10 interprets this information in terms of several SUSY models. Finally, section 11 contains a brief summary of the obtained results.

The CMS detector

The central feature of the CMS detector is a superconducting solenoid of 6 m internal diameter, providing a magnetic field of 3.8 T. Silicon pixel and strip trackers, a lead tungstate crystal electromagnetic calorimeter, and a brass and scintillator hadron calorimeter, each composed of a barrel and two endcap sections, reside within the solenoid. Forward calorimeters extend the pseudorapidity (η) coverage provided by the barrel and endcap detectors. Muons are detected in gas-ionization detectors embedded in the steel flux-return yoke outside the solenoid. A more detailed description of the CMS detector, together with a definition of the coordinate system used and the relevant variables, can be found in ref. [23].
A two-tiered trigger system [24] is used to reduce the rate of recorded events and select those of interest. The first level, composed of custom hardware processors, uses information from the calorimeters and muon detectors to select events at a rate of 100 kHz within a time latency of less than 4 µs. The second level consists of a processor farm which runs a version of the full event reconstruction, optimized for fast processing, and decreases the event rate to around 1 kHz before data storage.

Signal models
This search is aimed at the production of charginos and neutralinos, specifically in decay modes that lead to final states with three or more leptons. The results will be interpreted in the context of several simplified models in which the only free parameters are the superpartner masses [25,26]. Interpretations are performed for both χ±1χ02 production and effective χ01χ01 production in gauge-mediated models with mass-degenerate χ01, χ02, and χ±1. In the former models, χ±1 and χ02 are assumed to be wino-like, i.e. mass-degenerate mixtures of superpartners of the SU(2)L gauge field, while χ01 is the LSP and bino-like, i.e. the superpartner of the U(1)Y field. The latter models consider Higgsino-like χ±1, χ02, and χ01 that are nearly mass-degenerate, with χ01 being the next-to-LSP (NLSP) and a gravitino being the LSP. In all models, the other superpartners are assumed to be heavy and decoupled. The lightest of the CP-even bosons in the Higgs sector of the minimal supersymmetric SM is assumed to have SM-like properties, including the mass and branching fractions [27], and is referred to as the Higgs boson. The other bosons in the Higgs sector are assumed to be heavy and decoupled.

An overview of all specific models used for the interpretation of the search is given below. Scenarios in which the mass splitting between any of the superpartners in the decay chain is small are referred to as "compressed" in this paper, and generally have one or more decay products with low p T. Cases where the mass splittings between all superpartners are relatively large, resulting in high-p T decay products, are called "uncompressed".

After reconstruction, PF candidates are clustered into jets using the anti-k T algorithm [35], with a distance parameter of 0.4, as implemented in the FastJet package [36,37]. Several selection criteria are applied, designed to remove jets that are likely to originate from extraneous energy deposits in the calorimeters [38]. The missing transverse momentum vector p miss T is defined as the negative vector sum of the transverse momenta (p T) of all PF candidates in the event, taking into account jet energy corrections [39,40]. Its magnitude is referred to as p miss T. The primary pp interaction vertex is taken to be the candidate vertex with the largest squared-p T sum of its associated physics objects, where the physics objects are the jets returned by the jet finding algorithm applied to the tracks associated with that candidate vertex, plus the p miss T computed from the vector sum of the p T of those jets.
Electrons are reconstructed from a combination of the tracker and the electromagnetic calorimeter measurements. They are required to satisfy |η| < 2.5, ensuring they are within the volume of the tracker, and p T > 10 GeV. Additionally, requirements are placed on the shower shape, and on a multivariate discriminant based on the shower shape and track quality of the electrons [41]. Electrons that are matched to a secondary vertex consistent with a photon conversion or have a missing hit in the tracker are vetoed.
Muon reconstruction uses a global fit combining information from the tracker, muon spectrometers, and calorimeters. Muons must be within the acceptance of the muon spectrometers, |η| < 2.4, and have p T > 10 GeV. Selected muons further pass criteria on the geometrical matching between the track in the inner tracker and the muon spectrometers, and on the quality of the global fit [42].
Both electron and muon candidates must be consistent with originating from the primary pp interaction vertex. This is ensured by requiring the transverse impact parameter (d 0 ) to be smaller than 0.5 mm, and the longitudinal one (d z ) not to exceed 1.0 mm. The significance of the impact parameter must satisfy |d 3D |/σ(d 3D ) < 8, where d 3D and σ(d 3D ) are, respectively, the three-dimensional impact parameter and its uncertainty.
In order to select leptons resulting from superpartner production, it is important to identify "prompt" leptons that originate from the decay of electroweak bosons or superpartners. Prompt leptons have to be separated from other genuine leptons produced in hadron decays, as well as from particles in jets that are incorrectly reconstructed as leptons. Such lepton candidates are collectively called "nonprompt". As a first step in rejecting nonprompt leptons, electrons and muons must fulfill several prerequisites on their relative mini-isolation (I mini rel), defined as the scalar p T sum of all other PF candidates in a cone of p T -dependent radius around the lepton's direction, divided by the lepton p T. The radius of this cone in (η, φ) space, where φ is the azimuthal angle in radians, is given by ∆R(p T (ℓ)) = √[(∆η)² + (∆φ)²] = 10 GeV / min[max(p T (ℓ), 50 GeV), 200 GeV], taking into account the increased collimation of particles around the lepton at high lepton p T values [43]. All electrons and muons must satisfy I mini rel < 0.4. The lepton selection discussed up to here is referred to as the baseline selection.

A gradient boosted decision tree (BDT) trained to distinguish prompt from nonprompt light leptons is used [44,45]. This BDT uses the properties of the jet, as returned by the jet clustering algorithm, that contains the lepton: its DeepFlavor [46] b tagging score, the ratio of the lepton p T to that of the jet, and the component of the jet momentum that is transverse to the lepton's direction. Other input variables are the p T, η, I mini rel, d 0, d z, and impact parameter significance of the lepton. The BDT additionally has access to the muon segment compatibility for muons and to the earlier mentioned multivariate discriminant for electrons. Two selection criteria on the BDT output are used in the analysis, one for events with three or more leptons, and a tighter one for events with two leptons of the same sign.
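The p T-dependent mini-isolation cone defined above can be sketched as follows (a minimal illustration; the function names are ours, not part of the CMS software):

```python
def mini_iso_cone_radius(lepton_pt):
    # Delta R = 10 GeV / min(max(pT, 50 GeV), 200 GeV):
    # the cone shrinks from 0.2 at low pT down to 0.05 above 200 GeV,
    # reflecting the increased collimation of boosted decay products
    return 10.0 / min(max(lepton_pt, 50.0), 200.0)

def passes_mini_isolation(lepton_pt, sum_pt_in_cone, threshold=0.4):
    # relative mini-isolation: scalar pT sum of the other PF candidates
    # inside the cone, divided by the lepton pT
    return sum_pt_in_cone / lepton_pt < threshold
```

For example, a 20 GeV lepton is assigned a cone of radius 0.2, while a 100 GeV lepton gets a cone of radius 0.1.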
The tighter criterion results in a smaller nonprompt background at the cost of slightly lower selection efficiencies for superpartner production. For prompt muons, the BDT-based selection results in typical efficiencies ranging from 90 to 99%, while misidentification rates for nonprompt muons passing the baseline selection range from 5 to 10%. Prompt electrons are identified with an efficiency of around 75% in events with three or more leptons, with a corresponding misidentification rate of about 5% for nonprompt electrons passing the baseline selection. The electron efficiency is typically in the range 50-60% for the tighter same-sign dilepton selection, with a misidentification rate of around 2%.

Reconstruction of τ h candidates is performed using the "hadron-plus-strips" algorithm [47]. The τ h candidates are required to be consistent with one- or three-pronged hadronic τ lepton decays, and must have |η| < 2.3 and p T > 20 GeV. In order to reject a large background from hadrons misidentified as τ leptons, the τ h candidates must pass a stringent selection on a BDT discriminant aimed at identifying prompt τ h candidates [47]. This selection has a typical efficiency of around 50% for prompt τ h candidates in the analysis, with a misidentification rate of 0.2% for quantum chromodynamics (QCD) jets. Additional selection criteria based on the consistency between the measurements from the tracker, calorimeters, and muon detectors are applied to reduce the proportion of electrons and muons misidentified as τ h candidates.
Leptons passing the BDT-based selection criteria mentioned above are labeled "tight" leptons. Electrons or muons are "loose" if they either pass the same BDT discriminant or pass additional requirements on the properties of the jet containing the lepton in case they fail the BDT selection. Similarly, loose τ h candidates are those passing a looser requirement on the BDT discriminant. Tight leptons always satisfy the conditions of the loose selection, but not the other way around. The final analysis selection consists of tight leptons, while loose leptons are used to categorize events based on their lepton content and to predict the background from nonprompt leptons. The loose definition of electrons and muons is tuned to facilitate this background prediction, as explained in section 7.
In events with two same-sign light leptons, with or without an additional τ h candidate, further requirements are placed on tight leptons to ensure that their sign is well measured. For electrons, the sign determined from a linear extrapolation of the deposits in the pixel detector to the inner calorimeter surface, relative to the position of the calorimeter deposit, is compared to the sign determined from the full fit used for electron reconstruction; electrons for which the two sign measurements are inconsistent are not considered tight. Tight muons are required to have σ(p T)/p T < 0.2, where p T and σ(p T) are, respectively, the p T as measured from a tracker-only fit and the associated uncertainty. These requirements are found to reduce the sign mismeasurement probability to under 0.0001 (0.3)% for muons (electrons), with efficiencies for prompt, well-measured leptons greater than 99.9 (99)%.
Jets retained for analysis must satisfy p T > 25 GeV, |η| < 2.4, and have a separation of ∆R > 0.4 from any loose lepton. Jets originating from the hadronization of b quarks are identified with the DeepCSV algorithm [48]. Jets satisfying the tight working point of this algorithm are referred to as b-tagged jets. The chosen working point corresponds to a typical efficiency of 50% for correctly identifying b quark jets, with a misidentification probability of 2.4 (0.1)% for c quark (light-flavor) jets.
Events that have at least three loose leptons, or two loose light leptons of the same sign, are selected for further analysis. To enter the nominal analysis selection, either all loose leptons, or at least four leptons, must be tight. Events in which one or more of the loose leptons fail the tight selection are used to predict the background from nonprompt leptons, following the procedure explained in section 7. Events with one or more b-tagged jets are vetoed to reduce the backgrounds from processes involving top quarks. To match the analysis selection to the online selection, events must satisfy the requirements of trigger algorithms selecting one, two, or three electrons or muons. The lepton p T thresholds mentioned in section 6 are designed to ensure that selected events efficiently pass the trigger selection. Events with any opposite-sign, same-flavor (OSSF) pair of light leptons passing the baseline selection with a dilepton invariant mass below 12 GeV are vetoed to reduce the background from photon conversions and low-mass resonances.
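The low-mass OSSF veto can be sketched as follows, using a massless-lepton approximation for the pair mass (the lepton record format and function names are illustrative assumptions, not the analysis code):

```python
import math
from itertools import combinations

def pair_mass(l1, l2):
    # invariant mass of two (approximately) massless leptons:
    # m^2 = 2 pT1 pT2 (cosh(d_eta) - cos(d_phi))
    m2 = 2.0 * l1["pt"] * l2["pt"] * (
        math.cosh(l1["eta"] - l2["eta"]) - math.cos(l1["phi"] - l2["phi"]))
    return math.sqrt(max(m2, 0.0))

def has_low_mass_ossf_pair(leptons, max_mass=12.0):
    # veto the event if any opposite-sign, same-flavor pair
    # has an invariant mass below 12 GeV
    for a, b in combinations(leptons, 2):
        if a["flavor"] == b["flavor"] and a["charge"] == -b["charge"]:
            if pair_mass(a, b) < max_mass:
                return True
    return False
```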

Simulation
Monte Carlo (MC) simulated event samples are used for the estimation of most of the backgrounds, the determination of signal efficiencies, and the training of the parametric neural networks used in the analysis. Separate samples, simulating the data-taking conditions in 2016, 2017, and 2018, are used for each process. For each process cited below, the generator program and parameters were the most advanced available at the time the samples were produced. Since the required computational resources are large, the samples produced for a given period of data taking (2016, 2017, or 2018) are retained for that data set, while newer generator configurations, along with updates to the detector model and running conditions, are used for subsequent data sets. None of the differences in the simulation configuration between the data-taking periods is found to have a significant impact on the analysis results.
Each event is overlaid with additional inelastic pp collisions generated in pythia to mimic the presence of additional collisions in the same or adjacent bunch crossings (pileup). The simulated number of interactions per bunch crossing is reweighted to match the one observed in data. Simulated background events include a full Geant4-based [65] detector simulation, while signal events use the CMS fast simulation package [66] to simulate the detector response. All simulated events are subsequently reconstructed using the same software employed for collision data.

Search strategy
As explained in section 3, the search targets several models for the production of charginos and neutralinos in final states with multiple leptons. In each model we work under the assumption of R-parity conservation, meaning that the LSP is stable and gives rise to significant p miss T in most cases. Several final states, including events with two leptons of the same sign, three leptons, and four or more leptons, are selected to target the possible SUSY signals that might be present in the collision data. In the case of same-sign dilepton events, only electrons or muons are considered, whereas up to two τ h candidates are selected in the other final states. These choices are dictated by the varied background levels in different kinematic regions, the lepton multiplicity, and the quality of lepton identification. For example, because of the lower purity of τ lepton reconstruction, the selection criteria differentiate between τ h candidates and the light leptons, electrons and muons. Events are further categorized according to the lepton flavors and signs to focus on various signal hypotheses. A summary of this categorization is presented in table 1. In each of these categories, a set of search regions is defined based on the kinematics of the events to further separate potential signal events in data from the SM backgrounds. Because of the large background in events with three light leptons including an OSSF pair, and the difficulty of optimizing kinematic bins for sensitivity to a host of models, parametric neural networks are trained to separate signal from background in this region.

Same-sign dilepton events
The signal models described in section 3 yield final states with three or more leptons. In models where the mass difference between the NLSP and LSP is small, or the slepton mass is close to either the NLSP or LSP mass, one or more of the leptons in the final state can have a high probability to fail the lepton selection. The sensitivity of the analysis to such models is increased by retaining events with two leptons. Dilepton events with an opposite-sign lepton pair suffer from a very large SM background, but events with same-sign lepton pairs are relatively rare in the SM. For this reason, we select only events in which both leptons have the same sign (2ℓ SS).
To ensure efficient triggering on these events, the leading lepton is required to have p T > 20 GeV in µ±µ± events, and p T > 25 GeV in µ±e± and e±e± events. The subleading lepton must satisfy p T > 15 (10) GeV in case it is an electron (muon). The slightly higher p T thresholds for electrons are mandated by the more stringent corresponding trigger requirements; in addition, the turn-on curve of the electron triggers is softer than that of the muon triggers. Events in which a third loose light lepton or tight τ h candidate is present are vetoed to ensure orthogonality with the other event categories. As this category mainly targets signal events with a lepton that fails the selection or is not reconstructed, we do not veto events with a third lepton passing the baseline selection, as long as it fails the loose selection. When such a third baseline lepton is present, it is not allowed to form an invariant mass within a 15 GeV window around the Z boson mass (m Z) with another lepton in the event. This requirement is found to reduce the SM WZ background while keeping a signal efficiency over 99% for the most compressed signal scenarios, with mass splittings between the NLSP and the LSP under 50 GeV. Events with more than one jet with p T > 40 GeV are rejected to reduce the tt background, while still allowing for some hadronic recoil in signal events. Finally, p miss T > 60 GeV is required.

Events are then binned according to their kinematic properties to maximally separate the SUSY signal from the background. The stransverse mass M T2, defined to have an endpoint at the parent particle's mass for events with two semi-invisible decays [67], is used because its tail tends to be populated by signal events with high p miss T. Additional discriminating variables are the p T of the dilepton system (p T (ℓℓ)), which tends to be high in uncompressed models, and p miss T.
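M T2 minimizes, over all ways of splitting the missing transverse momentum between the two invisible particles, the larger of the two lepton-plus-invisible transverse masses. The following is a brute-force numerical sketch in the massless approximation; the grid scan and function names are ours for illustration, not the dedicated minimization code used in the analysis:

```python
import math
import numpy as np

def transverse_mass(pt_a, phi_a, pt_b, phi_b):
    # transverse mass of two (approximately) massless particles
    return math.sqrt(max(0.0, 2.0 * pt_a * pt_b * (1.0 - math.cos(phi_a - phi_b))))

def mt2(lep1, lep2, met, lim=200.0, n=81):
    """Grid-scan approximation of the stransverse mass M_T2.

    lep1, lep2, met are (pT, phi) pairs; the scan splits the missing pT
    between the two hypothesized invisible particles and minimizes the
    larger of the two transverse masses.
    """
    mex = met[0] * math.cos(met[1])
    mey = met[0] * math.sin(met[1])
    best = float("inf")
    for qx in np.linspace(-lim, lim, n):
        for qy in np.linspace(-lim, lim, n):
            q1_pt, q1_phi = math.hypot(qx, qy), math.atan2(qy, qx)
            q2x, q2y = mex - qx, mey - qy
            q2_pt, q2_phi = math.hypot(q2x, q2y), math.atan2(q2y, q2x)
            m = max(transverse_mass(lep1[0], lep1[1], q1_pt, q1_phi),
                    transverse_mass(lep2[0], lep2[1], q2_pt, q2_phi))
            best = min(best, m)
    return best
```

With zero missing momentum the optimal splitting assigns zero to both invisible particles, so M T2 vanishes, consistent with its endpoint interpretation.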
Bins with expected yields sufficient for an accurate background description are further split according to the sign of the leptons. This is motivated by the pp nature of LHC collisions, which makes same-sign lepton pairs of positive sign more common than those of negative sign. The magnitude of this sign asymmetry depends on the initial state of the process and is generally different between signal and background. The full set of search regions is shown in table 2.

Three-lepton events
All signal models considered in this analysis yield at least three leptons in the final state, so the analysis retains all events with three or more leptons, with up to two τ h candidates. This section describes the search strategy for events with exactly three leptons, while events with four or more leptons are discussed in section 6.3.
In addition to the selection requirements specified in section 4, we impose p T thresholds on the leptons. In a similar fashion to those discussed in section 6.1 for same-sign dilepton events, these additional requirements ensure efficient triggering by at least one of the leptonic triggers used in the analysis. The leading light lepton is required to satisfy p T > 25 (20) GeV if it is an electron (muon). If two or more light leptons are present, the subleading light lepton must have p T > 15 (10) GeV. In events with just a single muon, where this is also the leading light lepton, the muon must satisfy p T > 25 GeV. As we target signals with escaping particles, we require p miss T > 50 GeV, significantly reducing the background from processes without particles evading detection.

Three light leptons with an OSSF pair
If no τ leptons are present in the decay chain, the signal models in section 3 mainly give final states with an OSSF pair of light leptons. As such, these events dominate the sensitivity to χ±1χ02 production with flavor-democratic decays through sleptons, or decays via the emission of a W and a Z boson. At the same time, this event category suffers from the largest background among all analysis categories, dominated by SM WZ production. Because of the category's importance and the relatively large background, several parametric neural networks are trained to distinguish the signal models from the background in this region. Additionally, a set of search regions is also defined; these are less sensitive than the neural networks, but facilitate alternative interpretations of the results. This event category is referred to as 3ℓA, as shown in table 1.
Our signal model has several varying parameters, namely the masses of the NLSP and the LSP. One could search for such a model by training a single machine learning discriminant based on reconstructed quantities, or by training one such discriminant for each value of the signal parameters. If the event kinematics depend on the signal parameters, the former approach will be suboptimal for most or all signal points, while the latter introduces a great deal of complexity. Additionally, training separate discriminants for each signal point does not allow the results to be interpolated to signal parameters not seen during training. A solution to these problems is the training of a "parametric" machine learning discriminant [22]. On top of a set of reconstructed quantities, such a discriminant uses one or more signal parameters as additional input features. In the training, each background event is given a parameter value randomly drawn from the parameter distribution in the signal simulation. This results in a discriminant that learns to optimally distinguish each signal hypothesis from the background, and that can be evaluated at signal parameters not seen during training.
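The construction of one training epoch for such a parametric discriminant might look as follows (a sketch assuming a single parameter δm; the array and function names are illustrative):

```python
import numpy as np

def build_epoch(sig_features, sig_dm, bkg_features, rng):
    """Attach the signal parameter as an extra input feature.

    Signal events carry their true delta-m value; background events are
    assigned a delta-m drawn at random from the signal delta-m
    distribution. Calling this once per epoch resamples the background
    delta-m values, as done for the networks described below.
    """
    bkg_dm = rng.choice(sig_dm, size=len(bkg_features))
    x = np.vstack([
        np.column_stack([sig_features, sig_dm]),
        np.column_stack([bkg_features, bkg_dm]),
    ])
    y = np.concatenate([np.ones(len(sig_features)), np.zeros(len(bkg_features))])
    return x, y
```

Because the background receives the same δm spectrum as the signal, the discriminant cannot use δm alone to separate the classes; it must learn the δm-dependent event kinematics instead.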
The kinematics of the signal events are largely determined by the mass splitting δm = m(χ02) − m(χ01), with relatively small kinematic differences between signal points having equal δm but differing masses. This is exploited by training a neural network parametric in δm for each of the four different signal models, such as χ±1χ02 production with decays through W and Z bosons. Among the kinematic input variables of the networks are the scalar sum of the lepton p T and the p miss T (L T + p miss T), and the scalar sum of the p T of all jets in the event (H T). For each input variable, the distribution is compared between the fast simulation of the CMS detector used in the signal simulation and the nominal Geant4-based simulation. This comparison is made for several representative signal points: compressed, uncompressed, and at δm close to m Z. No significant differences are observed between the distributions.
The neural networks are fully connected feed-forward networks with a single output node representing the probability that an event is signal. They are trained in TensorFlow [69,70] using the Keras [71] interface. To reinforce the learning of the parametrization, the signal parameter is fed as an additional input to each hidden layer of the network, and the δm values assigned to background events are resampled from the signal distribution for each training epoch. The gradient descent of the network weights is performed with a variant of the Adam [72] algorithm using Nesterov momentum [73]. Batch normalization [74] is applied between all of the hidden layers to reduce the internal covariate shift of the network, speeding up training and increasing the final performance. To regularize the network, dropout [75] is added to each hidden layer. At each node a parametric rectified linear unit activation function is used, except for the output node, which uses a sigmoid activation. The number of nodes in each layer, the number of hidden layers, the learning rate, the learning rate decay, the dropout rate, and the activation function are all varied in grid scans, training the neural network each time with a different configuration. The performance of each configuration is then evaluated on a validation set in terms of the area under the receiver operating characteristic curve (AUC), and the optimal values of these hyperparameters are chosen for the final training of each network. The results of the grid scan optimization are cross-checked with a custom-made evolutionary algorithm designed to optimize the neural network hyperparameters. The evolutionary algorithm results in an equivalent final neural network performance, though with significantly fewer training iterations than needed in the grid scan.
It is explicitly verified that the trained parametric networks perform optimally at each δm point and are able to interpolate to unseen points. The ability of the network to interpolate to a particular point is checked by training a parametric neural network that excludes all events at that δm value, as well as a nonparametric network trained only on events at this δm value. If the new parametric model and the nominal one perform equally well, the latter performs equally well on seen and unseen parameter points. The comparison between the nominal model and the nonparametric network shows whether the network's parametrization performs optimally. This check is repeated for each δm point present in each of the signal models, training 10 neural networks of each type at each point to estimate the variations due to random weight initializations. It is found that the parametric network achieves optimal performance at each δm point present in the signal simulation even without explicitly seeing it during training, as shown in figure 4.
Figure 4. Neural network models shown in blue are trained using all available δm points; those in red are trained with all available points except the point for which the performance is shown. The models in green are not parametric and are trained only to find a signal at the point where the performance is indicated. Each neural network is retrained ten times, and the mean performances are shown, with error bars indicating the standard deviation computed from the ten performance values. Each red and green point therefore corresponds to ten neural network trainings, as does the entire blue curve in each figure.

The signal and background predictions, as well as the yield in data, are then evaluated for each of the four neural networks at every δm value present in the signal simulation. For the interpretation of the results in a particular signal model at a given δm, only the corresponding neural network output is used. At each δm value, the neural network output is binned in terms of the expected background yields. The last bin is defined to have a single expected background event in the 2016 data set, corresponding to 35.9 fb−1, and each preceding bin has twice the expected yield of the following bin. The shape of the outputs of the neural networks varies substantially with the δm parameter, and this method allows for a robust binning definition across all values of δm.
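This yield-driven binning can be sketched as follows, assuming the background is available as per-event neural network scores and weights (the function and argument names are illustrative):

```python
import numpy as np

def nn_output_bins(scores, weights, last_bin_yield=1.0):
    """Bin edges on the neural network output in [0, 1].

    Scanning from the most signal-like end of the output, the last bin is
    chosen to hold `last_bin_yield` expected background events, and each
    preceding bin twice the expected yield of the following one.
    """
    order = np.argsort(scores)[::-1]          # most signal-like first
    s = np.asarray(scores)[order]
    cum = np.cumsum(np.asarray(weights)[order])
    edges = [1.0]
    target, total = last_bin_yield, last_bin_yield
    while total < cum[-1]:
        i = np.searchsorted(cum, total)       # first event reaching the yield
        edges.append(s[i])
        target *= 2.0
        total += target
    edges.append(0.0)
    return edges[::-1]                        # ascending bin edges
```

Because the edges are placed at quantiles of the background distribution rather than at fixed output values, the same procedure yields well-populated bins regardless of how the output shape changes with δm.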
Aside from the neural network, a set of search regions is also defined to extract the signals from the background in a cut-based manner. The search regions targeting χ ± 1 χ 0 2 production, in which most of the SM WZ background is rejected, are defined in tables 3 and 4, using some, but not all, of the neural network input variables to define the bins. The WZ background falls off quickly when M T exceeds the W boson mass (m W ), making M T a powerful variable to reduce this background. For signal events with slepton-mediated decays, M and M T provide sensitivity to δm, and are used to separate signal and background events. The search regions targeting WZ-mediated superpartner decays are further binned in p miss T and H T . Because of the escaping LSPs in signal events, their p miss T spectrum tends to be harder than that of SM events, an effect that is further enhanced at large H T .
The neural network analysis and the search regions use the same data and thus cannot be analyzed simultaneously. The results of both approaches are interpreted separately and shown in section 10. The neural network approach has higher sensitivity, while the search region results are easier to reinterpret.

Three light leptons without an OSSF pair
Events with three light leptons that do not contain an OSSF pair (3 B) occur rarely in the SM, because most SM events with multiple leptons involve a Z boson decay. This category of events is particularly sensitive to signal models with nonresonant lepton production from the decay of an H. Since SM production of an H with an additional lepton is exceedingly rare, the search regions are designed to target possible H → WW decays in signal events. The events are binned in the minimum ∆R between any two leptons in the event (min(∆R(ℓ, ℓ))), exploiting the increased collimation of leptons in H → WW events when compared to events with nonprompt leptons or nonresonant WW production. The search region definitions are given in table 5.
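The min(∆R(ℓ, ℓ)) variable used to bin these events can be computed as sketched below, with ∆R = sqrt(∆η² + ∆φ²) and the azimuthal difference wrapped into [−π, π]; the lepton kinematics in the example are invented for illustration.

```python
# Minimal sketch of the min(DeltaR(l, l)) binning variable.
import math
from itertools import combinations

def delta_r(eta1, phi1, eta2, phi2):
    # Wrap the phi difference into [-pi, pi] before combining with delta-eta.
    dphi = (phi1 - phi2 + math.pi) % (2.0 * math.pi) - math.pi
    return math.hypot(eta1 - eta2, dphi)

def min_delta_r(leptons):
    """leptons: list of (eta, phi) tuples; smallest pairwise DeltaR."""
    return min(delta_r(e1, p1, e2, p2)
               for (e1, p1), (e2, p2) in combinations(leptons, 2))
```

The wrapping step matters: two leptons at φ = 3.0 and φ = −3.0 are separated by only 2π − 6 ≈ 0.28 in azimuth, not 6.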

Three leptons with one or more τ h candidates
If chargino or neutralino decays are mediated by right-handed sleptons, or the first- and second-generation sleptons are heavy and decoupled, signal events will favor final states with one or more τ leptons. To retain sensitivity to such models, events with τ h candidates are selected and split into further categories.

Table 4. Definition of the search regions used for events with three light leptons, at least two of which form an OSSF pair, and which satisfy 75 < M < 105 GeV (AXX).

Table 5. Definition of the search regions for events in category B: three light leptons without an OSSF pair.

Table 6. Definition of the search regions for events with an OSSF pair of light leptons and a τ h candidate (CXX).
The first category consists of events with an OSSF pair of light leptons and a τ h candidate (3 C). These events are mainly sensitive to τ-enriched χ ± 1 χ 0 2 decays.

Events with two τ h candidates provide additional sensitivity to models with τ-dominated slepton decays. Events in this category are binned in the invariant mass of the leading τ h and light lepton (M(ℓ, τ h )), which tends to be high for uncompressed signal events. The same lepton pair and the p miss T enter the computation of the stransverse mass (M T2 (ℓ, τ h )), which is used to further suppress the SM background. The complete set of bins is shown in table 9.
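The stransverse mass M T2 is defined as the minimum, over all splittings of the missing transverse momentum into two hypothetical invisible momenta, of the larger of the two transverse masses. A brute-force illustration for massless visible and invisible particles is sketched below; a coarse grid scan is used for clarity, whereas dedicated bisection algorithms are used in practice, and all kinematic values are invented.

```python
# Illustrative brute-force M_T2 for massless visible/invisible particles.
import numpy as np

def mt_sq(vis, inv):
    """Squared transverse mass of a massless visible/invisible pair."""
    pt_v = np.hypot(vis[0], vis[1])
    pt_i = np.hypot(inv[0], inv[1])
    return 2.0 * (pt_v * pt_i - vis[0] * inv[0] - vis[1] * inv[1])

def mt2(vis_a, vis_b, met, scan=201, window=300.0):
    """Minimize max(mT(a, q1), mT(b, q2)) over q1 + q2 = met (grid scan)."""
    best = np.inf
    for qx in np.linspace(-window, window, scan):
        for qy in np.linspace(-window, window, scan):
            q1 = (qx, qy)
            q2 = (met[0] - qx, met[1] - qy)
            best = min(best, max(mt_sq(vis_a, q1), mt_sq(vis_b, q2)))
    return np.sqrt(max(best, 0.0))
```

For signal events with two escaping LSPs, the correct splitting is among those scanned, so M T2 develops a tail above the SM endpoint, which is what makes it useful for background suppression.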
Table 7. Definition of the search regions for events with a e ± µ ∓ pair and a τ h candidate (DXX).

Table 8. Definition of the search regions for events with a pair of light leptons of the same sign and a τ h candidate (EXX).

Four or more lepton events
Events with four leptons provide sensitivity to effective χ 0 1 χ 0 1 production with subsequent decays via H or Z bosons. Further categorization of the events is done depending on the number of OSSF pairs and light leptons. If more than four loose leptons are present in an event, the event categorization and computation of analysis variables use the four highest p T leptons.
Decays of χ 0 1 χ 0 1 via two Z bosons tend to give two OSSF pairs. For this reason, the first category consists of events with four light leptons forming two separate OSSF pairs (4 G). The OSSF dilepton pair with the closest invariant mass to m Z forms the first Z boson candidate (Z 1 ), while the remaining OSSF pair is taken to be the second Z boson candidate (Z 2 ). The M T2 computed with both Z boson candidates (M T2 (ZZ)) is expected to have a sharply falling distribution beyond m NLSP , providing a handle to separate different signal points and to discriminate signal from the background. Events are further binned in the mass of the Z 2 candidate (M Z2 ) to enhance the sensitivity to signal models without two Z bosons in the χ 0 1 χ 0 1 decay. The search region definitions are listed in table 10.
The remaining events are further split as follows: four light leptons forming one or no OSSF pairs (4 H), one τ h candidate and three light leptons (4 I), two τ h candidates and two light leptons forming two OSSF pairs (4 J), and two τ h candidates and two light leptons forming one or no OSSF pairs (4 K). The same binning is used in each of these categories, as they are sensitive to the same signal models, and is shown in table 11. If at least one OSSF pair is present, the OSSF pair with mass closest to m Z is taken to reconstruct a Z boson candidate (Z 1 ). If no OSSF pair is present, other opposite-sign lepton combinations are considered when finding the Z boson candidate. The Z boson candidate mass M Z1 is used to discriminate between processes with and without a true on-shell Z boson. The remaining two leptons in the event are assigned to the decay of an H candidate. Events are further subdivided according to the ∆R between these two remaining leptons (∆R H ), which are expected to be collimated if they are genuine H decay products.
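The Z-candidate assignment described above, selecting the OSSF pair with invariant mass closest to m Z, can be sketched as follows. Leptons are approximated as massless, and the lepton kinematics, flavor labels, and charges in the example are illustrative assumptions.

```python
# Sketch of OSSF pairing: pick the pair with mass closest to m_Z.
import math
from itertools import combinations

M_Z = 91.19  # GeV

def inv_mass(l1, l2):
    """Invariant mass of two massless leptons given (pt, eta, phi, ...)."""
    pt1, eta1, phi1 = l1[:3]
    pt2, eta2, phi2 = l2[:3]
    return math.sqrt(max(0.0, 2.0 * pt1 * pt2 *
                         (math.cosh(eta1 - eta2) - math.cos(phi1 - phi2))))

def best_ossf_pair(leptons):
    """leptons: list of (pt, eta, phi, flavor, charge) tuples.
    Returns indices of the OSSF pair with mass closest to m_Z, or None."""
    pairs = [(i, j) for i, j in combinations(range(len(leptons)), 2)
             if leptons[i][3] == leptons[j][3]          # same flavor
             and leptons[i][4] == -leptons[j][4]]       # opposite sign
    if not pairs:
        return None
    return min(pairs,
               key=lambda p: abs(inv_mass(leptons[p[0]], leptons[p[1]]) - M_Z))
```

In the two-pair (4 G) category the same logic fixes Z 1, with the remaining OSSF pair forming Z 2.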
Table 11. Definition of the search regions for events with four leptons with one or more τ h candidates, or without two light-lepton OSSF pairs (XYY).

Background estimation
The background contributions in each of the search categories can be subdivided into four distinct classes. First, there are SM events with three or more prompt leptons, or two prompt leptons of the same sign. Second, external and internal conversions of photons also result in events entering the search regions. Backgrounds from both conversions and prompt leptons are estimated using simulated samples. Third, backgrounds involving one or more nonprompt leptons are predicted directly from data. Last, events that enter a particular event category because of the mismeasurement of a lepton sign are estimated from data in the 2 SS and 3 E categories; this contribution is negligible in the other event categories and is there estimated from simulation.

The dominant background contribution in the 3 A category comes from WZ production. With leptonic decays of the W and Z bosons, WZ production results in events with three prompt leptons and a neutrino giving sizable p miss T , thus mimicking the topology of χ ± 1 χ 0 2 production. This background is estimated from simulation and is validated in a control region that is contained within the search regions but nearly depleted of signal events. The control region has the same selection criteria as 3 A events, with the following additional requirements: |M − m Z | < 15 GeV, 50 < p miss T < 100 GeV, 50 < M T < 100 GeV, and |M 3 − m Z | > 15 GeV. A fit is performed to the data in the WZ control region, in which the WZ normalization is free to float. The fit takes into account all relevant analysis uncertainties and assumes that no signal is present. The result is a normalization factor of r WZ = 1.17 ± 0.05 with respect to the powheg prediction, which is applied in the analysis. The 3 A events are interpreted twice, once using the neural network and once using the search regions.
The WZ control region is included in the fit for the signal region interpretation and, to avoid double counting the data, the partially overlapping search regions A23, A36, and A49 are removed from it. When the neural network results are used for the interpretation, the WZ control region is excluded from the fits used for the interpretation of the results in terms of superpartner production.
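A control-region normalization of this kind can be sketched, in its simplest form, as the background-subtracted data yield divided by the simulated WZ yield; the actual analysis uses a full likelihood fit with all uncertainties, and the yields below are invented for illustration (chosen only so the example lands near the quoted r WZ ≈ 1.17).

```python
# Schematic one-parameter normalization measurement in a control region.
# All yields are illustrative, not taken from the analysis.
import math

def norm_factor(n_data, n_wz_mc, n_other_mc):
    """Scale factor for the WZ-like process, other backgrounds held fixed."""
    r = (n_data - n_other_mc) / n_wz_mc
    # Statistical uncertainty from the data count only (Poisson).
    err = math.sqrt(n_data) / n_wz_mc
    return r, err

r_wz, dr = norm_factor(n_data=5200, n_wz_mc=4200, n_other_mc=300)
```

The likelihood-fit version additionally profiles the systematic uncertainties, which is why the quoted uncertainty (±0.05) is larger than a purely statistical one.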
One of the most important discriminating variables used in both the parametric neural networks and the 3 A search regions is M T , the transverse mass of the lepton not forming the Z boson candidate. Simulation studies indicate that the tails of the M T distribution mainly originate from the mispairing of leptons when forming the Z boson candidate, leading to M T being computed with one of the leptons from the Z boson decay. The prediction of such mispaired events is validated by selecting eeµ and eµµ events in the aforementioned WZ control region. In these events, where there is no ambiguity in assigning leptons to the W and Z boson decays, the leptons are intentionally mispaired, and the simulated predictions are validated by comparison to data. A second, though smaller, source of events in the M T tails comes from p miss T mismeasurement. This effect is studied in µγ events enriched in the Wγ process, where no lepton ambiguity can arise. The muon is required to have p T > 25 (28) GeV.

Processes involving one or more top quarks and electroweak bosons can produce many prompt leptons and contribute to all of the analysis final states. The main contributions come from tttt, ttH, ttW, and ttZ production, collectively labeled ttX. Smaller contributions originate from processes with a single top quark or with top quarks and multiple electroweak bosons, labeled tX. These are minor backgrounds because of their small cross sections, and they are further reduced by the b veto applied in the event selection. Even smaller contributions come from processes in which two top quarks and two massive bosons are produced, labeled ttXX, whose cross sections are so small that only about one event in total is expected at the analysis level. The predictions for the ttZ background are verified in a ttZ-enriched control region with the selection of the 3 A category, but requiring at least one b jet and |M − m Z | < 15 GeV. All other contributions are estimated from simulation. For plotting purposes, the ttX and tX contributions are grouped together.
Rare processes involving the production of three or more electroweak bosons (WWW, WWZ, WZZ, ZZZ) can also lead to events with enough prompt leptons to enter the search regions. These processes have extremely small cross sections, and thus constitute only a tiny fraction of the background. Their contributions are estimated from simulation. For plotting purposes these contributions are labeled as multiboson.

Internal and external photon conversions can lead to additional leptons in an event. Such events typically enter the search regions when the conversion is asymmetric and one of the leptons coming from the conversion has a very low p T and fails to be reconstructed. This background is dominated by Zγ for events with three or more leptons, while Wγ provides the dominant source for 2 SS events. The conversion background is estimated from simulation, which is validated and normalized in a Zγ-enriched control region in data. This region is obtained by applying the 3 A selection with the inverted requirements p miss T < 50 GeV and M < 75 GeV. The latter requirement is applied because asymmetric conversions from Zγ tend to have M 3 rather than M values close to the Z boson mass. Processes involving a final-state photon (Zγ, Wγ, ttγ) that undergoes an asymmetric conversion are included in the Xγ background group for plotting and fitting purposes. A fit is performed in the control region, in which the photon conversion process normalization is free to float, all analysis uncertainties are included, and the signal presence is suppressed. This leads to a normalization factor of r Xγ = 1.12 ± 0.10 with respect to the MadGraph5_amc@nlo simulation prediction.
Events with nonprompt leptons entering the search regions come mostly from tt and Drell-Yan production with an additional nonprompt lepton. This is a dominant background source in the 3 B and 2 SS categories, and in all of the categories involving one or more τ h candidates. The background contribution is estimated from data using the "tight-to-loose" ratio method, as described in ref. [43]. The probability for a loose nonprompt lepton to also pass the tight lepton selection, the "nonprompt rate", is measured as a function of p T and |η|. For light leptons, this is done in a QCD-enriched sample of single-lepton events. The nonprompt rate of τ h candidates is measured in both Drell-Yan- and tt-enriched events; these nonprompt rates differ because of the flavor content of the jets in the event. In the 3 D and 3 E categories the background from nonprompt τ leptons is expected to be dominated by tt, so the tt-based nonprompt rate measurement is used. For 3 C and 3 F events, Drell-Yan is the dominant background source, and the nonprompt rates measured in Drell-Yan-enriched data are used. The measured nonprompt rates are applied to events passing the search region selection but with one or more leptons failing the tight selection while still passing the loose selection. Both simulated events and a data sample enriched in nonprompt leptons are used to validate the method.
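A common form of the tight-to-loose weighting, which the method of ref. [43] follows in outline (details may differ), assigns each event with k loose-but-not-tight leptons a weight built from factors f/(1−f), with an alternating sign for k ≥ 2 to avoid double counting. The nonprompt rates in the example are illustrative.

```python
# Sketch of the "tight-to-loose" event weight for the nonprompt prediction.
# fail_rates holds the nonprompt rate f for each loose-not-tight lepton.
def nonprompt_weight(fail_rates):
    w = 1.0
    for f in fail_rates:
        w *= f / (1.0 - f)
    # Events with an even number of failing leptons enter with negative sign,
    # subtracting the double-counted multi-nonprompt contributions.
    sign = -1.0 if len(fail_rates) % 2 == 0 else 1.0
    return sign * w
```

Summing these weights over the loose-not-tight sideband events yields the nonprompt-lepton prediction in each search region bin.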
Electron sign mismeasurements are an important source of background in 2 SS and 3 E events. This background is reduced by applying additional requirements on the leptons designed to ensure a well-determined sign, as discussed in section 4. The remaining background from electron sign mismeasurement is predicted from data in 2 SS and 3 E events. The probability of an electron sign mismeasurement is computed as a function of p T and |η| in a large sample of simulated events from Drell-Yan, tt, and diboson production. The resulting background contribution in the search regions is then determined by applying this probability to data events with two light leptons of opposite sign. A sample of same-sign dilepton events dominated by Drell-Yan production is selected by requiring |M − m Z | < 10 GeV; the predictions are validated in this sample, and an integral normalization factor by which the predictions are scaled is measured for each data-taking year. Studies in simulated events indicate that the probability of sign misidentification for muons is negligible, and the minuscule resulting background contribution in the search regions is estimated using simulation.
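The per-event weight applied to opposite-sign data events in such charge-flip estimates is, in its simplest form, the probability that exactly one of the two electrons has its sign mismeasured; the probabilities p1 and p2 below are illustrative, whereas in the analysis they are measured as functions of p T and |η|.

```python
# Sketch of the charge-misidentification event weight applied to
# opposite-sign dilepton data. p1, p2 are illustrative flip probabilities.
def charge_flip_weight(p1, p2):
    """Probability that exactly one of two OS electrons flips sign,
    turning the event into a same-sign candidate."""
    return p1 * (1.0 - p2) + p2 * (1.0 - p1)
```

The double-flip term p1·p2 would return the event to opposite sign and is negligible for per-mille flip rates, so it is omitted here.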

Systematic uncertainties
Several sources of systematic uncertainties affect both the background and signal predictions, changing both the total yields and the contribution of each process to the different analysis bins. The experimental sources of uncertainty that affect the simulated samples are pileup modeling, jet energy scale and resolution, b tagging, lepton identification and trigger efficiencies, p miss T resolution, and the measurement of the integrated luminosity. Additional sources of systematic uncertainty come from the uncertainties in the theoretical calculations used to generate samples of simulated events. The effects of each of these uncertainties, aside from those associated with the integrated luminosity and the trigger efficiency, vary across the analysis bins.
Light lepton identification efficiencies are measured in a Z boson enriched data sample using the "tag-and-probe" technique [41,42]. The corresponding corrections are applied to simulated events. Uncertainties on these measured corrections, as well as on the validity of their extrapolation to the search regions, are applied to simulated events. Signal events are expected to contain relatively high p T leptons because of the potentially large superpartner masses. For this reason the lepton efficiencies are measured separately for events with a reconstructed Z boson p T above and below 80 GeV. The difference between the corresponding corrections at high and low p T of the Z boson is taken as the uncertainty in the application of these corrections, and is around 0.5% for most of the leptons, but ranges up to 3% for very high and low p T leptons.
Similarly, identification efficiencies for τ h candidates are measured in µτ h events enriched in Z bosons for p T values up to approximately 60 GeV. For τ h candidates with intermediate p T values, up to 100 GeV, tt enriched µτ h events are used. At even higher p T values the efficiencies are measured using single τ h events enriched in highly virtual W bosons. The associated uncertainties in the measured efficiencies applied in the analysis range from 1 to 3%.
The uncertainty in the correction of the number of events per bunch crossing applied to simulated events is estimated by varying the total pp inelastic cross section up and down by 4.6% [76]. The uncertainty in the measurement of the integrated luminosity, used to normalize all simulated yields, is 2.3 (2.5)% for the data set collected in 2017 [77] (2016 [78] and 2018 [79]). The integrated luminosity of the total data set has an uncertainty of 1.8%, where the improvement comes from the independence of some components of the uncertainty between data-taking years. The correlated (uncorrelated) components amount to 1.2, 1.1, and 2.0% (2.2, 2.0, and 1.5%) for the 2016, 2017, and 2018 data-taking years, respectively. The trigger efficiency is measured in an unbiased sample of events triggered on the p miss T and the total hadronic momentum in the event. The uncertainties in the trigger efficiency range from 1.4% for events with three or more light leptons to 3% for events in the 2 SS category and 5% for category 3 F events, which have less redundancy to pass the leptonic trigger requirements.
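The quoted 1.8% combined luminosity uncertainty can be checked by adding the correlated components linearly across years and the uncorrelated ones in quadrature. The per-year luminosities used below (35.9, 41.5, and 59.7 fb−1) are assumed standard CMS values; only the 35.9 fb−1 figure appears in the text above.

```python
# Consistency sketch of the combined luminosity uncertainty.
# Per-year luminosities (fb^-1) are assumed, not stated in the text.
import math

lumi = {"2016": 35.9, "2017": 41.5, "2018": 59.7}
corr_pct = {"2016": 1.2, "2017": 1.1, "2018": 2.0}    # correlated components
uncorr_pct = {"2016": 2.2, "2017": 2.0, "2018": 1.5}  # uncorrelated components

corr_abs = sum(lumi[y] * corr_pct[y] / 100.0 for y in lumi)      # linear sum
uncorr_abs = math.sqrt(sum((lumi[y] * uncorr_pct[y] / 100.0) ** 2
                           for y in lumi))                       # quadrature
total_pct = 100.0 * math.hypot(corr_abs, uncorr_abs) / sum(lumi.values())
```

The combination lands near 1.85%, consistent with the rounded 1.8% quoted for the full data set.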

The trigger uncertainties are split into a component correlated across years, accounting for possible biases in the method used to measure the trigger efficiencies, and a component uncorrelated between years, reflecting the limited data available in the sideband used for these measurements. The latter amounts to roughly 1% for each category and year, with the former accounting for the remainder of the uncertainty.
The uncertainties due to the jet energy scale are computed by varying the scale for all jets up and down within its uncertainty. Similarly, the uncertainties from the jet energy resolution are estimated by smearing the jets according to the resolution uncertainty [39,40]. Both effects are subsequently propagated through all steps of the analysis, affecting p miss T , the b veto, and all analysis variables calculated using jets or p miss T [40]. The p miss T is affected by additional resolution uncertainties due to objects not clustered into jets, which are also propagated to all affected analysis variables. Corrections are applied to account for differences between data and simulation in the b tagging efficiency and misidentification rate. Uncertainties in these corrections affect the b veto used in the analysis, and the effects are propagated to all analysis bins. These effects are partially correlated across data-taking years, accounting for the possible year dependence of each source of uncertainty included in these variations. The overall effects of the correlated and uncorrelated components are approximately equal for all data-taking years.
Uncertainties stemming from the limited knowledge of the proton PDFs are estimated using a set of NNPDF3.0 (NNPDF3.1) replicas in simulations corresponding to 2016 (2017 and 2018) data-taking conditions. Uncertainties stemming from missing higher-order corrections are estimated by varying the renormalization and factorization scales up and down simultaneously by a factor of two and evaluating the effect on simulated events. Both of these theoretical uncertainty sources lead to changes in the predicted cross sections of the simulated processes, as well as to additional kinematic variations across analysis bins. The shape variations are taken into account for all simulated events, whereas for several processes the cross section uncertainties are replaced by a prior uncertainty that is constrained by a fit to data. This is the case for the WZ, ZZ, and Zγ processes.
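The scale-variation procedure can be sketched as follows: per-event weights for the simultaneous up and down variations by a factor of two are applied to the nominal weights, and the envelope of the varied yields gives the fractional uncertainty in each bin. The weights below are illustrative, and a single bin is used for simplicity.

```python
# Sketch of a renormalization/factorization scale envelope in one bin.
# Event weights are illustrative.
import numpy as np

def scale_envelope(nominal_w, w_up, w_down):
    """Return (nominal yield, down fractional shift, up fractional shift)
    from per-event nominal weights and up/down variation weight ratios."""
    n = nominal_w.sum()
    varied = np.array([(nominal_w * w_up).sum(), (nominal_w * w_down).sum()])
    return n, varied.min() / n - 1.0, varied.max() / n - 1.0

n, down, up = scale_envelope(np.ones(4), np.full(4, 1.1), np.full(4, 0.92))
```

Evaluating the same envelope bin by bin separates the shape component, kept for all processes, from the overall normalization component, which for WZ, ZZ, and Zγ is replaced by the fitted prior mentioned above.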
The experimental and theoretical uncertainties listed earlier in this section affect all simulated processes, both signal and background, and the effects are considered correlated across processes. A number of additional process-specific systematic uncertainty sources are taken into account.
The modeling of QCD ISR in signal events is done by MadGraph5_amc@nlo and affects the total ISR transverse momentum of the events (p ISR T ). The p ISR T distribution in 2016 signal events is reweighted based on the Z boson p T spectrum observed in data [80]. Differences between corrected and uncorrected signal events are taken into account as systematic uncertainties. For 2017 and 2018 data, the p ISR T distribution was found to be well modeled, but a small correction based on the distribution of the number of reconstructed jets in a Z boson enriched data sample is applied. The sizes of these corrections are also treated as uncertainties in this case.
As discussed in section 7, Wγ data are used to validate the modeling of events with mismeasured p miss T . A prior normalization uncertainty of 10% is assigned to WZ events, which is constrained implicitly by the fit to the data in 3 A events. Similarly, prior normalization uncertainties of 10% are assigned to both the ZZ and Zγ processes, which are further constrained by including their respective control regions in the analysis fit. A normalization uncertainty of 30% is assigned to the nonprompt-lepton background prediction, covering any biases found in simulated studies of the method. Three separate uncorrelated nuisance parameters with priors of the same size are used for nonprompt light leptons, nonprompt τ h candidates from tt, and nonprompt τ h candidates from Drell-Yan production. A 20% uncertainty is assigned to the normalization of the sign-misidentification background, covering the deviations observed in the Z boson enriched control region mentioned in section 7.
A summary of the systematic uncertainties applied in the analysis, and their effects on the predicted event yields across analysis bins is shown in table 12.

Results
The observed and expected SM yields in each of the search regions introduced in section 6 are shown in this section. The expected yields are obtained using the background estimation procedures elucidated in section 7, with systematic uncertainties as explained in section 8.
The yields as a function of the parametric neural network output in 3 A events are shown in figures 5 and 6 for the different χ ± 1 χ 0 2 production models considered. For each model the neural network discriminant is shown as evaluated at three distinct δm hypotheses, representing low, intermediate, and high δm values. To obtain the final results, the neural network is evaluated at many more δm values, separated by 50 GeV for models with slepton-mediated decays and by 25 GeV in the case of WZ-mediated decays with δm in excess of 100 GeV. When δm is below 90 GeV in the former models, the neural network is evaluated in δm steps of 10 GeV, and in steps of 1 GeV between 90 and 100 GeV. The expected and observed yields as a function of the search regions in each event category are shown in figures 7-14.
In all categories, and in both evaluations of 3 A events, based on the neural networks and on the search regions, the data are found to be consistent with the expectation from the SM backgrounds. The agreement in the search regions is summarized in figure 15 (upper plot), where the expected test statistic [81] distribution for a background-only fit to data is compared to the observed test statistic value. One expects the observed test statistic to lie in a likely region of the expected test statistic distribution in the absence of any unknown physics. A similar plot is shown in figure 15 (lower plot) for the neural network targeting WZ-mediated decays of the chargino-neutralino pair. This is the neural network for which the data are evaluated at the most δm values, and the agreement is shown for each one of them.

Figure caption. The top panels show only the total uncertainty in the background prediction, while the lower panels show the total and statistical uncertainties separately. The following abbreviations are used in the legends of this and the following figures: "bkg." stands for background, "unc." for uncertainty, and "obs." for observed.
Table 13. Summary of the event categories used for the interpretation of the results in terms of different models, and references to the associated figure summarizing the expected and observed 95% CL upper limits.

Interpretation
No significant excess of events over the SM-only hypothesis is observed, as shown in section 9. The expected signal and background yields and the observed data are then used to determine 95% confidence level (CL) upper limits on χ ± 1 χ 0 2 and effective χ 0 1 χ 0 1 production cross sections for the different decay models introduced in section 3, using the CL s criterion [82,83]. The asymptotic approximation of the distribution of the profile likelihood test statistic [84,85] is used when computing these limits. The systematic uncertainties introduced in section 8 are included as nuisance parameters with additional constraint terms in the likelihood function. Systematic uncertainties that affect overall process normalizations but not distributions are included through log-normal probability density functions, while those that affect both the normalization and the shape of a process are included via the template morphing technique [86] and represented with Gaussian probability density functions. Uncertainties related to the limited size of the MC samples are introduced into the likelihood following the Barlow-Beeston approach [87].
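How a log-normal normalization nuisance enters such a likelihood can be illustrated for a single bin: a nuisance parameter θ scales the predicted background yield by κ^θ and is constrained by a unit Gaussian. The yields and the κ value below are illustrative, not taken from the analysis fit.

```python
# Minimal single-bin negative log-likelihood with a log-normal nuisance.
# All numbers are illustrative.
import math

def nll(n_obs, s, b, theta, kappa=1.10):
    """-log L for one bin: Poisson(n | s + kappa**theta * b) times a
    standard-normal constraint on the nuisance parameter theta."""
    lam = s + b * kappa ** theta
    poisson = lam - n_obs * math.log(lam) + math.lgamma(n_obs + 1)
    constraint = 0.5 * theta ** 2
    return poisson + constraint
```

In the full fit one such term appears per bin and per nuisance parameter, and the limits are derived from the profile likelihood built on this structure.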
For each model interpretation, a global fit of the analysis bins is performed, using the events from the categories corresponding to the final state of the particular model. The event categories used to interpret each model are listed in table 13. In the interpretations that include the neural network approach, the distribution used for the 3 A category at each signal point is the neural network discriminant evaluated at the δm value of that point.
The sensitivity to χ ± 1 χ 0 2 production models with slepton-mediated decays is mainly driven by 3 A events in the case of flavor-democratic decays. In compressed models, in particular for x = 0.05 and 0.95, 2 SS events increase the sensitivity significantly, being the leading source of exclusion at small mass splittings. This is equivalent to a relative improvement of about 50% in the excluded cross section. At very low δm values, the relative improvement varies more and can be greater than 100% for x = 0.5, while typically being smaller than 50% for the other x values. Chargino masses up to 1450 GeV and LSP masses up to 1000 GeV can be excluded by the neural networks, depending on the model parameters. The reach of the previous iteration of the analysis has been improved by up to 400 GeV in the chargino mass (a factor of 9 in terms of production cross section) and by up to 500 GeV in the LSP mass.

Event categories other than 3 A provide minimal sensitivity to these models and are thus excluded from the interpretation. The interpretation is done separately, using the parametric neural network and the search region bins. The resulting exclusion limit curves are shown in figure 19. The neural network provides maximal sensitivity to the models being probed, resulting in exclusion limits that are more stringent by about 130 GeV in the excluded masses.

The interpretation of χ 0 1 pair production models, with subsequent decays via H or Z bosons, uses all event categories. In the case of decays via two Z bosons, 4 G events are the most important contributors to the final exclusion limits. In decays via an H and a Z boson, four-lepton events provide the most sensitivity for low χ 0 1 mass hypotheses, while trilepton events become more important at higher χ 0 1 masses. When the χ 0 1 pair decays via two H bosons, trilepton events drive the results.

Summary
A search for new physics in events with two leptons of the same sign, or with three or more leptons with up to two hadronically decaying τ leptons, is presented. A data set of proton-proton collisions with √ s = 13 TeV collected with the CMS detector at the LHC, corresponding to an integrated luminosity of 137 fb −1 , is analyzed. Events are categorized according to the number of leptons, their signs, and flavors. Events in each category are further binned using a plethora of kinematic quantities to maximize the sensitivity of the search to an extensive set of hypotheses of supersymmetric particle production via the electroweak interaction. In events with three light leptons, of which two have opposite sign and same flavor, parametric neural networks are used to significantly enhance the sensitivity of the search to several signal hypotheses.
No significant deviation from the standard model expectation is observed in any of the event categories. The results are interpreted in terms of a number of simplified models of superpartner production. Models of chargino-neutralino pair production with the neutralino forming the lightest supersymmetric particle (LSP), as well as models of effective neutralino pair production with a nearly massless gravitino as the LSP are considered. The signal topologies depend on the masses of the leptonic superpartners and the mixing of the gauge eigenstates.
If left-handed sleptons lighter than the chargino existed, the chargino-neutralino pair might undergo slepton-mediated decays resulting in final states with three leptons. Searches in events with three light leptons including an opposite-sign, same-flavor pair provide sensitivity to these models, and the results lead to lower limits on the chargino mass of up to 1450 GeV when using a parametric neural network. Events with two same-sign leptons further enhance the sensitivity in experimentally challenging scenarios with small mass differences between the chargino and the LSP.
If sleptons were right-handed, the chargino, or both the chargino and the neutralino, might decay almost exclusively to τ leptons. In the former scenario, a chargino mass up to 1150 GeV is excluded, while a mass up to 970 GeV is excluded in the latter.
If sleptons were sufficiently heavy, charginos and neutralinos would undergo direct decay to the LSP via the emission of W, Z, or Higgs bosons. For decays of the chargino-neutralino pair via a W and a Z boson, values of the chargino mass up to 650 GeV are excluded through the use of a parametric neural network. In the case of a neutralino decay via the emission of a Higgs boson, charginos with a mass below 300 GeV are excluded for nearly massless LSPs.
In models of effective neutralino production we assume the neutralinos decay to almost massless gravitino LSPs via Z and Higgs bosons. This leads to excluded values of the neutralino mass up to 600 GeV.
The obtained results currently provide the most stringent limits for chargino-neutralino production with mass splittings close to the Z boson mass, nearly closing the gap found in this region of the exclusion plane. The exclusions obtained for the slepton-mediated decays are likewise the most stringent results to date for all the considered branching fraction hypotheses. In the case of the flavor-democratic decay scenario, the obtained exclusion limits of up to 1450 GeV are the overall highest exclusion values obtained for the production of electroweak superpartners.
The analysis techniques have been considerably refined compared to the earlier version of this search that used 35.9 fb −1 of proton-proton collision data at √ s = 13 TeV [20]. The integrated luminosity has increased by just short of a factor of four, which in the absence of novel analysis techniques would improve the excluded cross sections by a factor of roughly two. In most search categories the improvements in analysis techniques contribute significantly more to the gains than the increased data volume: the limits on the excluded superpartner production cross sections improve by up to a factor of ten, and by more than a factor of five for most model hypotheses.