Combined searches for the production of supersymmetric top quark partners in proton–proton collisions at √s = 13 TeV

A combination of searches for top squark pair production using proton–proton collision data at a center-of-mass energy of 13 TeV at the CERN LHC, corresponding to an integrated luminosity of 137 fb⁻¹ collected by the CMS experiment, is presented. Signatures with at least two jets and large missing transverse momentum are categorized into events with zero, one, or two leptons. New results are presented for regions of parameter space where the kinematical properties of top squark pair production and top quark pair production are very similar.
Depending on the model, the combined result excludes a top squark mass up to 1325 GeV for a massless neutralino, and a neutralino mass up to 700 GeV for a top squark mass of 1150 GeV. Top squarks with masses from 145 to 295 GeV, for neutralino masses from 0 to 100 GeV, with a mass difference between the top squark and the neutralino in a window of 30 GeV around the mass of the top quark, are excluded for the first time with CMS data. The results of these searches are also interpreted in an alternative signal model of dark matter production via a spin-0 mediator in association with a top quark pair. Upper limits are set on the cross section for mediator particle masses of up to 420 GeV.


Introduction
The standard model (SM) of particle physics describes subatomic phenomena with outstanding precision. However, the SM cannot address several open questions, such as the hierarchy problem [1,2] and the absence of a suitable particle candidate for dark matter (DM) [3,4]. Supersymmetry (SUSY) [5-12] is a well-known extension of the SM that can resolve both of these problems by introducing a relation between bosons and fermions. For each known particle, it assigns a new SUSY partner that differs by a half unit of spin. SUSY provides a natural solution to the gauge hierarchy problem provided that the SUSY partners of the top quark, gluon, and Higgs boson are not too massive. While difficult to quantify precisely, values of the top squark mass up to the 1 TeV range are favored [1,13-15]. The lightest SUSY particle (LSP), which is potentially massive, may be a viable DM candidate if it is stable and electrically neutral.
This paper presents the combination of previously published searches [16-18] for the pair production of SUSY top quark partners in final states with zero, one, or two charged leptons, in events from proton-proton (pp) collisions at a center-of-mass energy (√s) of 13 TeV at the CERN LHC, corresponding to an integrated luminosity of 137 fb⁻¹, referred to here as the inclusive analyses. It also includes a new analysis targeting a region of parameter space where the mass difference between the top squark and the neutralino is close to the top quark mass; its results are combined with those of the previously published studies. All analyses are performed with the data set collected in 2016-2018 (Run 2) by CMS. The inclusive searches are interpreted in terms of top squark pair production with two different subsequent decays, as described in the simplified model context [19-21]. Two decay chains are considered, both of which lead to a signature with a neutralino (χ⁰₁), which is the LSP, a W boson, and a bottom quark: the direct decay of the top squark to a top quark and a neutralino, and the decay of the top squark to a chargino (χ±₁) and a bottom quark, where the chargino decays to a W boson and a neutralino. Three simplified models are used for interpretation. In the first model, both top squarks decay according to the first decay chain; in the second model, both decay according to the second decay chain; in the third model, the two decays occur with equal probability. The mass of the chargino in the second model is chosen to be the arithmetic average of the top squark mass (m(t̃₁)) and the LSP mass (m(χ⁰₁)), while in the third model the mass splitting between the neutralino and the chargino is assumed to be 5 GeV. Typical diagrams are shown in Fig. 1. In previous analyses by the CMS Collaboration, top squark masses up to 1310 GeV have been excluded [16-18,22-29].
Limits on the production of top squark pairs with masses up to 1260 GeV have been set by the ATLAS Collaboration [30][31][32][33][34][35].
If the mass difference between the top squark and the lightest neutralino in the t̃₁ → t χ⁰₁ model is close to the mass of the top quark, the kinematical properties of the signal closely resemble those of SM tt production; this region of parameter space is referred to as the top quark corridor. In the top quark corridor region, the signal could be observed as an excess over the tt background prediction [36], but the sensitivity of the inclusive analyses to the top squark mass in this region is limited. This paper presents a new dedicated search in events with an opposite-charge lepton pair that is sensitive to the top quark corridor region. The sensitivity in the top quark corridor is extended by using a larger data set and a more sophisticated strategy, using a deep neural network (DNN) [39] to exploit the differences between the signal and SM tt production, which is the main background.
In order to reduce the background from tt events, the missing transverse momentum (p_T^miss) and the m_T2(ℓℓ) variable are used. The latter is defined as

$m_{T2}(\ell\ell) = \min_{\vec{p}_{T1}^{\,miss} + \vec{p}_{T2}^{\,miss} = \vec{p}_T^{\,miss}} \left[ \max\left( m_T(\vec{p}_T^{\,\ell_1}, \vec{p}_{T1}^{\,miss}),\ m_T(\vec{p}_T^{\,\ell_2}, \vec{p}_{T2}^{\,miss}) \right) \right]$,  (1)

where ℓ refers to an electron or a muon, m_T is the transverse mass, and p_T1^miss and p_T2^miss correspond to the estimated transverse momenta of the two invisible particles (neutrinos in the case of tt events) that are presumed to determine the total p_T^miss of an SM event. The transverse mass is calculated for each lepton-neutrino pair, for different assumptions of the neutrino transverse momentum (p_Ti^miss). The computation of m_T2(ℓℓ) is performed using the algorithm discussed in Ref. [41]. A signal region is defined by applying requirements on m_T2(ℓℓ) and on p_T^miss, the magnitude of the missing transverse momentum vector. A DNN is used to optimize the sensitivity for signal at each mass point.

Fig. 2 Feynman diagram of direct DM production through a scalar (φ) or pseudoscalar (a) mediator particle, in association with a top quark pair
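As an illustration of the m_T2(ℓℓ) construction, the sketch below minimizes, over splittings of the missing transverse momentum into two hypothetical invisible particles, the larger of the two lepton-"neutrino" transverse masses. This is a coarse grid scan for illustration only; the published analyses use the dedicated algorithm of Ref. [41], and all function names here are our own.

```python
import math

def transverse_mass(pt_a, phi_a, pt_b, phi_b):
    # m_T^2 = 2 pT_a pT_b (1 - cos dphi) for two massless objects
    return math.sqrt(max(0.0, 2.0 * pt_a * pt_b * (1.0 - math.cos(phi_a - phi_b))))

def mt2_ll(lep1, lep2, met_x, met_y, n=120):
    # lep1, lep2: (pt, phi) of the two leptons.
    # Scan splittings pmiss = p1 + p2 on a grid and keep the smallest
    # max(mT(l1, p1), mT(l2, p2)); this approximates the minimum in Eq. (1).
    scale = max(math.hypot(met_x, met_y), 1.0)
    best = float("inf")
    for i in range(n + 1):
        for j in range(n + 1):
            p1x = -2.0 * scale + 4.0 * scale * i / n
            p1y = -2.0 * scale + 4.0 * scale * j / n
            p2x, p2y = met_x - p1x, met_y - p1y
            mt_1 = transverse_mass(lep1[0], lep1[1],
                                   math.hypot(p1x, p1y), math.atan2(p1y, p1x))
            mt_2 = transverse_mass(lep2[0], lep2[1],
                                   math.hypot(p2x, p2y), math.atan2(p2y, p2x))
            best = min(best, max(mt_1, mt_2))
    return best
```

For events in which the only invisible particles are the two neutrinos from leptonic W decays, the minimum stays near or below m_W (up to the scan resolution), while signal events with additional invisible particles can populate larger values.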
We also consider the alternative tt+DM model shown in Fig. 2, in which a DM particle is produced in association with a pair of top quarks. In this simplified model, a scalar (φ) or pseudoscalar (a) particle mediates the interaction between SM quarks and a new Dirac fermion (χ), which is the DM candidate particle [42-46]. Under the assumption of minimal flavor violation [47,48], the spin-0 mediators couple to quarks of mass m_q with SM-like Yukawa couplings proportional to g_q m_q, where the coupling strength g_q is taken to be 1. The coupling strength g_DM of the mediator to the DM particles is also set to 1. In the case of a scalar mediator, mixing with the SM Higgs boson is neglected. Prior searches by the ATLAS and CMS Collaborations excluded scalar and pseudoscalar mediator particles with masses of up to 290 and 300 GeV, respectively [30,49-52].

The CMS detector
The central feature of the CMS apparatus is a superconducting solenoid of 6 m internal diameter, providing a magnetic field of 3.8 T. Within the solenoid volume are a silicon pixel and strip tracker, a lead tungstate crystal electromagnetic calorimeter (ECAL), and a brass and scintillator hadron calorimeter (HCAL), each composed of a barrel and two endcap sections. Forward calorimeters extend the pseudorapidity coverage provided by the barrel and endcap detectors. Muons are detected in gas-ionization chambers embedded in the steel flux-return yoke outside the solenoid.
Events of interest are selected using a two-tiered trigger system. The first level, composed of custom hardware processors, uses information from the calorimeters and muon detectors to select events at a rate of around 100 kHz within a fixed latency of about 4 µs [53]. The second level, known as the high-level trigger, consists of a farm of processors running a version of the full event reconstruction software optimized for fast processing, and reduces the event rate to around 1 kHz before data storage [54].
A more detailed description of the CMS detector, together with a definition of the coordinate system used and the relevant kinematic variables, can be found in Ref. [55].

Monte Carlo simulation
Monte Carlo (MC) simulation is used to design the searches, to predict or aid the prediction of the backgrounds from SM processes, and to provide estimates of the expected SUSY and tt+DM signal event yields.
Several models from the simplified model spectra [19,21] are used to simulate the SUSY signals. The helicity states of the produced top quarks are not considered in these models, and in the simulation the top quarks are treated as unpolarized. Samples of tt+DM signal events with mediator masses of 100, 200, 300, and 500 GeV have been generated for both the scalar and pseudoscalar models. The mass of the DM particle is set to 1 GeV, while a value of 1 is chosen for the couplings.
The SM tt process is simulated using the powheg (v2) [59-61] generator at next-to-leading order (NLO) for the dilepton analyses, or the MadGraph generator at LO for the analyses of zero- or one-lepton events. In the top quark corridor analysis the powheg generator is used, as this analysis relies on a precise estimate of the tt background and its associated modeling uncertainties, which are better described in CMS by the powheg generator [36,62]. This sample is also used to calculate the dependence of the tt acceptance on m_t and on the factorization and renormalization scales (μ_F and μ_R, respectively). A parameter denoted h_damp is used to regulate the matching between the matrix element calculation and the parton shower [63,64]. The central value and uncertainties in h_damp are discussed in Sect. 6.4.2.
The powheg v1 [65] generator is used for single top quark and antiquark production in association with a W boson (tW) at NLO. The MadGraph v2.2.2 [56] generator is used at NLO for modeling the Drell-Yan (DY) process, the production of W or Z bosons in association with tt events (ttW, ttZ), and the WW, WZ, and ZZ production processes. The DY process is simulated with up to two additional partons [66], and the FxFx scheme is used for the matching of jets from the matrix element calculations and from parton showers. Samples of W+jets, Z+jets (with Z → νν), γ+jets, and QCD multijet events are simulated with up to four extra partons in the matrix element calculations using the MadGraph generator. The full simulation of the CMS detector response is used for the signal samples in which the mass difference between the top squark and the neutralino is close to the top quark mass. The CMS fast simulation program [73,74] is used to simulate the detector response for the remaining signal samples. The effect of additional interactions in the same event (referred to as pileup) is accounted for by simulating additional minimum bias interactions for each hard scattering event. The observed distribution of the number of pileup interactions, which has an average of 23 collisions per bunch crossing for the 2016 period and 32 for the 2017 and 2018 periods, is reproduced by the simulation.
Simulated background events are normalized according to the integrated luminosity and the theoretical cross section of each process. The latter are computed at next-to-next-to-leading order (NNLO) in QCD for DY [75], approximately NNLO for tW [76], and NLO for WW, WZ, ZZ [77], and ttW and ttZ [78]. For the normalization of the simulated tt sample, the full NNLO plus next-to-next-to-leading logarithmic (NNLL) calculation [79], performed with the Top++ 2.0 program [80], is used. The PDF uncertainties are added in quadrature to the uncertainty associated with the strong coupling constant (α_S) to obtain a tt production cross section of 832 +20/−29 (scale) ± 35 (PDF+α_S) pb, assuming m_t = 172.5 GeV.
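The normalization step reduces to multiplying the cross section by the integrated luminosity, with a unit conversion. A minimal sketch (the function name is ours):

```python
def expected_events(xsec_pb, lumi_fb_inv):
    # N = sigma * L; convert pb to fb (1 pb = 1000 fb) to match the fb^-1 units.
    return xsec_pb * 1000.0 * lumi_fb_inv

# tt at sqrt(s) = 13 TeV: 832 pb with the full Run 2 luminosity of 137 fb^-1
n_tt = expected_events(832.0, 137.0)  # about 1.14e8 tt events before any selection
```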

Event reconstruction
In this section, the event reconstruction common to all the analyses presented in this paper is described.
An event may contain multiple primary vertices, corresponding to multiple pp collisions occurring in the same bunch crossing. The candidate vertex with the largest value of summed physics-object p_T² is taken to be the primary pp interaction vertex. The physics objects used for this determination are the jets, clustered with the jet finding algorithm [91,92] using tracks assigned to candidate vertices as inputs, and the associated missing transverse momentum, taken as the negative vector sum of the transverse momenta of those jets.
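The vertex choice described above can be sketched as follows, assuming each candidate vertex is summarized by the transverse momenta of its associated physics objects (a simplified illustration; names are ours):

```python
def choose_primary_vertex(vertex_object_pts):
    # vertex_object_pts: one list of physics-object pT values (GeV) per candidate vertex.
    # The primary vertex is the candidate with the largest summed pT^2.
    sums = [sum(pt * pt for pt in pts) for pts in vertex_object_pts]
    return sums.index(max(sums))
```

For example, `choose_primary_vertex([[10, 20], [50], [5, 5, 5]])` selects the second vertex, whose summed p_T² (2500 GeV²) exceeds that of the others.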
The particle-flow algorithm [93] aims to reconstruct and identify each individual particle in an event, with an optimized combination of information from the various elements of the CMS detector. The energy of photons is obtained from the ECAL measurement. The energy of electrons is determined from a combination of the electron momentum at the primary interaction vertex as determined by the tracker, the energy of the corresponding ECAL cluster, and the energy sum of all bremsstrahlung photons spatially compatible with originating from the electron track. The energy of muons is obtained from the curvature of the corresponding track. The energy of charged hadrons is determined from a combination of their momentum measured in the tracker and the matching ECAL and HCAL energy deposits, corrected for the response function of the calorimeters to hadronic showers. Finally, the energy of neutral hadrons is obtained from the corresponding corrected ECAL and HCAL energies.
For each event, hadronic jets are clustered from these reconstructed particles using the infrared and collinear safe anti-k T algorithm [91,92] with a distance parameter of 0.4. The jet momentum is determined as the vectorial sum of all particle momenta in the jet, and is found from simulation to be, on average, within 5-10% of the generated momentum over the whole p T spectrum and detector acceptance.
Additional pp interactions within the same or nearby bunch crossings can contribute additional tracks and calorimetric energy depositions to the jet momentum. To mitigate this effect, charged particles identified as originating from pileup vertices are discarded, and an offset correction is applied for the contribution from neutral particles [94]. Jet energy corrections are derived from simulation to bring the measured energy of a jet, on average, to that of a particle-level jet. In situ measurements of the momentum balance in dijet, photon+jets, Z+jets, and multijet events are used to account for any residual differences in the jet energy scale between data and simulation [95]. The jet energy resolution amounts typically to 15% at 10 GeV, 8% at 100 GeV, and 4% at 1 TeV [95]. Additional selection criteria are applied to each jet to remove jets potentially dominated by anomalous contributions from various subdetector components or reconstruction failures [96]. Jets produced by the hadronization of b quarks are identified using multivariate b tagging algorithms: DeepCSV [97] for the inclusive searches and DeepJet [98,99] for the corridor search. The more recently developed DeepJet algorithm has slightly better performance in some parts of the phase space than the DeepCSV algorithm. All analyses use a medium working point of the tagger, corresponding to a misidentification probability of 1% for jets originating from gluons or up, down, and strange quarks, and a b tagging efficiency of about 70%. A tight working point, corresponding to a misidentification rate of 0.1%, is also used in the analysis of Sect. 5.2.
The missing transverse momentum vector is defined as the negative vector p T sum of all particle-flow candidates reconstructed in an event with jet energy corrections applied. Events with serious p miss T reconstruction failures are rejected using dedicated filters [100].
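The definition above amounts to the negative two-vector sum over all particle-flow candidates; a minimal sketch (names are ours):

```python
import math

def missing_transverse_momentum(candidates):
    # candidates: (pt, phi) of every reconstructed particle-flow candidate.
    # Returns (magnitude, azimuth) of the negative vector pT sum.
    px = -sum(pt * math.cos(phi) for pt, phi in candidates)
    py = -sum(pt * math.sin(phi) for pt, phi in candidates)
    return math.hypot(px, py), math.atan2(py, px)
```

Two back-to-back candidates yield zero p_T^miss; removing one of them produces p_T^miss equal to the removed p_T, pointing opposite to the remaining candidate.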
The requirements imposed to select reconstructed particle objects specific to the separate search strategies incorporated into the present combination are given in the following sections. Brief summaries of the previously published searches are given in Sect. 5, and the new top quark corridor search is presented in detail in Sect. 6.

Fully hadronic analysis

The search in the fully hadronic final state [16] selects events with no identified leptons. Backgrounds with p_T^miss generated through the leptonic decay of a W boson, where the neutrino escapes detection, are significantly suppressed by rejecting events containing isolated electrons or muons. The contribution from events in which a W boson decays to a τ lepton is suppressed by rejecting events containing isolated hadronically decaying τ candidates [101,102]. A central feature of this analysis is the deployment of advanced jet tagging algorithms to identify hadronically decaying top quarks and W bosons, with different algorithms covering both the highly Lorentz-boosted regime and the resolved regime. For the highly Lorentz-boosted regime, where the decay products of the particle in question are expected to merge into a single large-R jet with a distance parameter of R = 0.8, the DeepAK8 algorithm [103] is used to identify large-R jets originating from top quarks or W bosons. In the resolved regime, where the decay products of the top quark are separately reconstructed using jets with R = 0.4, the DeepResolved algorithm [17] is used to tag top quarks with intermediate p_T, ranging from 150 to 450 GeV.
To enhance sensitivity to signal models with a compressed mass spectrum, where the mass of the top squark is close to the sum of the masses of the LSP and the W boson, a dedicated "soft b tag" algorithm developed to identify very low-p_T b hadrons is also used for the event categorization [104]. The analysis includes a total of 183 nonoverlapping signal regions, defined in Ref. [16] and optimized for different SUSY models and ranges of mass splittings between SUSY particles. A large p_T^miss, due to the presence of a pair of neutralinos in the signal model, is required.
The dominant sources of SM background with intrinsic p_T^miss are the inclusive production of top quark pairs, W and Z bosons, single top quarks, and the ttZ process. The contribution from tt, W+jets, ttW, and single top quark processes arises from events in which a W boson decays leptonically to produce p_T^miss associated with an energetic neutrino, but the charged lepton either falls outside of the kinematic acceptance or fails the lepton identification criteria. This background is collectively referred to as the "lost-lepton" background. The contributions from Z+jets and ttZ events arise when the Z boson decays to neutrinos, resulting in large genuine p_T^miss. Contributions from the QCD multijet process enter the search sample when severe mismeasurements of jet momenta (e.g., jets passing through poorly instrumented regions of the detector) produce significant artificial p_T^miss, or when neutrinos arise from leptonic decays of heavy-flavor hadrons produced during jet fragmentation. The contribution of each SM background process to the search sample is estimated through measurements of event rates in dedicated background control samples, which are translated to predicted event counts in the corresponding search sample with the aid of simulation. The data are found to be in good agreement with the predicted backgrounds.

Single-lepton analysis
The search for top squark pair production in the single-lepton final state [17] focuses on final states with exactly one lepton coming from the decay of a W boson in the signal decay chain. Since the presence of a pair of neutralinos in the final state of the signal gives rise to substantial p_T^miss compared with SM processes, p_T^miss > 250 GeV is required. The transverse mass computed from the lepton p_T and p_T^miss is required to be larger than 150 GeV to reduce the lepton+jets background from tt and W+jets processes, for which m_T has a natural cutoff at the W boson mass (m_W).
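The m_T cutoff at m_W exploited here follows directly from the transverse mass definition. A small sketch (names are ours) shows the endpoint for a W boson at rest decaying to a lepton and a neutrino that are back-to-back in the transverse plane:

```python
import math

def mt(pt_lep, phi_lep, ptmiss, phi_miss):
    # m_T^2 = 2 pT(l) pTmiss (1 - cos dphi); bounded by m_W for on-shell W -> l nu.
    return math.sqrt(max(0.0, 2.0 * pt_lep * ptmiss * (1.0 - math.cos(phi_lep - phi_miss))))

m_w = 80.4  # GeV, approximate W boson mass
# W at rest: lepton and neutrino each carry pT = m_W / 2, back-to-back in azimuth
endpoint = mt(m_w / 2.0, 0.0, m_w / 2.0, math.pi)  # equals m_W
```

Events with additional sources of p_T^miss, such as a second neutrino or a pair of neutralinos, are not subject to this bound, which is what the m_T > 150 GeV requirement exploits.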
The dileptonic tt process, where one of the leptons is lost, is the largest remaining SM background. In these lost-lepton events m_T is not bounded by m_W because of the additional p_T^miss arising from the presence of a second neutrino. The modified topness (t_mod) variable, introduced in Ref. [17], is a measure of how likely a single-lepton event is to originate from dileptonic tt production, and is used to provide better discrimination against this background.
The dileptonic tt background is estimated through a set of dedicated control regions that require two isolated leptons. The momentum of the second lepton is added to p_T^miss in the calculation of variables that depend on p_T^miss, e.g., m_T and t_mod, to mimic the lost-lepton scenario.
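The lost-lepton emulation amounts to adding the second lepton's transverse momentum vector to p_T^miss before recomputing m_T and t_mod. A sketch of the vector arithmetic (names are ours):

```python
import math

def emulate_lost_lepton(ptmiss, phi_miss, pt_lep2, phi_lep2):
    # Add the second lepton's pT vector to the missing transverse momentum,
    # mimicking an event in which that lepton was not reconstructed.
    px = ptmiss * math.cos(phi_miss) + pt_lep2 * math.cos(phi_lep2)
    py = ptmiss * math.sin(phi_miss) + pt_lep2 * math.sin(phi_lep2)
    return math.hypot(px, py), math.atan2(py, px)
```

The modified p_T^miss is then used in place of the original one when evaluating m_T and t_mod in the control regions.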
The subleading SM background comes from the process of W+jets production, where the W boson decays leptonically. While the single-lepton events from the W boson are largely suppressed by the m T requirement, events where the W boson is produced off-shell can still enter the signal regions. The requirement of at least one b-tagged jet significantly reduces this type of background. Events are further categorized in terms of the invariant mass of the lepton and the b-tagged jet, which helps to further reduce the W+jets background. The W+jets background is estimated using control regions with an inverted b-tagged jet requirement which yields a high-purity sample of W+jets events.
Irreducible SM backgrounds arise from the ttZ and WZ processes when the Z boson decays into a pair of neutrinos.
These rare backgrounds and the remaining events from the single lepton tt process are sub-dominant contributions in most search regions and are estimated using simulated samples.
This analysis also makes use of the same jet tagging algorithms, described above in the fully hadronic channel, to identify hadronic top quark decays in the final state. This is motivated by the fact that none of the leading SM backgrounds, except ttZ, has a hadronically decaying top quark in the final state, while in some signal scenarios one hadronically decaying top quark is expected. Events in the lower p miss T search regions are categorized into different regions according to the presence of at least one merged or resolved top quark candidate.
Finally, a dedicated search strategy is used for signal scenarios with a small mass splitting between the top squark and the LSP to optimize sensitivity. In these compressed scenarios, with Δm(t̃₁, χ⁰₁) close to m_W or m_t, p_T^miss can be small when the neutralinos are back-to-back, and therefore t_mod and the merged and resolved top quark tags are not used. Instead, one non-b-tagged jet, which could arise from initial-state radiation (ISR) in a signal event, is required, and a requirement on the proximity of the lepton to the p_T^miss direction is introduced. In the case of Δm(t̃₁, χ⁰₁) ∼ m_W, at least one "soft b tag", such as a secondary vertex, is required instead of the standard b-tagged jets, to improve the acceptance for b quarks that do not carry sufficient momentum to be reconstructed as a jet. In order to enhance the sensitivity to different signal scenarios, events are categorized into 39 nonoverlapping signal regions based on the values of p_T^miss and several of the variables introduced above.

Dilepton analysis
The search in the dilepton final state [18] is carried out using events containing a pair of leptons (electrons or muons) with opposite charges. The invariant mass of the lepton pair (m_ℓℓ) is required to be greater than 20 GeV to suppress backgrounds with misidentified or nonprompt leptons from the hadronization of heavy-flavor jets in multijet events. Events with additional leptons, including candidates with looser requirements on transverse momentum and isolation, are rejected. Events with a same-flavor lepton pair that is consistent with SM DY production are removed by requiring |m_Z − m_ℓℓ| > 15 GeV, where m_Z is the mass of the Z boson. To further suppress DY and other vector boson backgrounds, the number of jets is required to be at least two and, among them, the number of b-tagged jets at least one.
The p_T^miss significance, denoted as S, is used to suppress events where detector effects and the misreconstruction of particles from pileup interactions are the main source of reconstructed p_T^miss. The algorithm used to obtain S is described in Ref. [100]. A requirement of S > 12 is used in order to suppress the otherwise overwhelming DY background in the same-flavor channel. This requirement exploits the stability of S with respect to the pileup rate for events with no genuine p_T^miss. The DY background is further reduced through a requirement on the azimuthal angular separation between p_T^miss and the momentum of the leading (subleading) jet, cos Δφ(p_T^miss, j) < 0.80 (0.96). These criteria reject a small background of DY events with significantly mismeasured jets.
The main variable in this analysis is m_T2(ℓℓ), which is defined in equation (1) and extensively discussed in Ref. [23]. The key feature of the m_T2(ℓℓ) observable is that it retains a kinematic endpoint at the W boson mass for background events from the leptonic decays of two W bosons, produced directly or through a top quark decay. Similarly, the m_T2(bℓbℓ) observable, defined as in equation (1) but using the vector sum of the momenta of the leptons and the b-tagged jets instead of the leptons alone [18], is bounded by the top quark mass if the leptons, neutrinos, and b-tagged jets originate from the decay of top quarks. By contrast, signal events do not have the respective endpoints and are expected to populate the tails of these distributions.
Signal regions based on m_T2(ℓℓ), m_T2(bℓbℓ), and S are defined to enhance the sensitivity to different signal scenarios. The regions are further divided into categories, separately for events with a same-flavor and a different-flavor lepton pair, to account for the different SM background compositions. The signal regions are defined such that there is no overlap between them, nor with the background-enriched control regions.

Events with an opposite-charge lepton pair are abundantly produced by the DY and tt processes. The event selection rejects the vast majority of DY events. Therefore, the major backgrounds from SM processes in the search regions are top quark pair and single top quark events that pass the m_T2(ℓℓ) threshold because of severely mismeasured p_T^miss or a misidentified lepton. In signal regions with high m_T2(ℓℓ) and S, ttZ events with Z → νν are the main SM background. Remaining DY events with large p_T^miss from mismeasurement, multiboson production, and other tt or single top quark processes in association with a W, Z, or Higgs boson (ttW, tqZ, or ttH) are sources of smaller contributions. A detailed description of the background estimation method is given in Ref. [18].

Top quark corridor analysis
The top quark corridor analysis is discussed in this section in more detail, as it is presented for the first time in this paper. In this search, events containing a dilepton pair with opposite charge and p miss T are selected, and a DNN algorithm is used to increase the sensitivity to the signal. The full DNN score distribution for events in the signal region is used to test the presence of the signal.

Object and event selection
The object selection and baseline requirements of the event selection are the same as those for the dilepton analysis summarized in the first paragraph of Sect. 5.3, and are detailed in this section. Electron and muon candidates are required to have p_T > 20 GeV and |η| < 2.4. In addition, the p_T of the leading lepton must be at least 25 GeV. The leptons are required to be isolated; their relative isolation is measured as the scalar p_T sum, normalized to the lepton p_T, of the photons and of the neutral and charged hadrons within a cone around the candidate electron (muon). In order to reduce the dependence on the number of pileup interactions, charged hadron candidates are included in the sum only if they are consistent with originating from the selected primary vertex in the event.
The expected contribution from neutral hadrons due to pileup is estimated following the method described in Ref. [105]. For an electron candidate the relative isolation requirement depends on η (with values close to 0.04), and for a muon it is required to be smaller than 0.15. Selected jets are required to have p_T > 30 GeV and |η| < 2.4. Additionally, jets found within a cone of ΔR = 0.4 around the selected leptons are rejected. Jets originating from the hadronization of bottom quarks are identified as b-tagged jets using the medium working point of the DeepJet algorithm [98,99].
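The relative-isolation computation can be sketched as below. The pileup subtraction term is a simplified stand-in for the method of Ref. [105]; the names and the form of the correction are ours.

```python
def relative_isolation(pt_lep, photon_pts, neutral_pts, charged_pts, pileup_neutral=0.0):
    # Scalar pT sum of photons, neutral hadrons, and charged hadrons in the
    # isolation cone, normalized to the lepton pT. Charged hadrons are assumed
    # to be pre-filtered to those from the primary vertex; the neutral-hadron
    # sum is corrected by a pileup estimate, floored at zero.
    iso = (sum(photon_pts)
           + max(0.0, sum(neutral_pts) - pileup_neutral)
           + sum(charged_pts))
    return iso / pt_lep

# A muon with pT = 50 GeV and 6 GeV of surrounding activity: 6 / 50 = 0.12 < 0.15
rel_iso = relative_isolation(50.0, [2.0], [3.0], [1.0])
```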
Simulated events are corrected to account for differences with respect to data in the lepton reconstruction, identification, and isolation efficiencies, as well as in the b tagging efficiency. The data-to-simulation correction factors are parameterized as functions of the p_T and η of the object, and deviate from unity by less than 1% for leptons and less than 10% for b-tagged jets.
Selected events are classified into categories according to the flavor of the two leading leptons (ee, eμ, μμ) and the data-taking period (2016, 2017, 2018). Moreover, events are required to contain at least two jets, one of which must be b tagged. This set of requirements is referred to as the baseline selection.
After the baseline selection, most of the background events (about 98%) are expected to come from tt, tW, and DY processes. To suppress these backgrounds, the signal region is defined with the requirements p_T^miss > 50 GeV and m_T2(ℓℓ) > 80 GeV. As described in Sect. 5.3, m_T2(ℓℓ) serves to account for the multiple sources of p_T^miss in the signal process and to exploit the differences with respect to the background processes. For tt, tW, or W+jets events, the distribution of this variable has a kinematic endpoint at the W boson mass, because the transverse mass of each lepton-neutrino pair corresponds to the transverse mass of a W boson, whereas signal events have neutralinos contributing to the total p_T^miss, so they populate larger values of m_T2(ℓℓ).

Background estimation
Although most of the tt events are rejected by requiring m_T2(ℓℓ) > 80 GeV, tt remains the dominant background contribution in the signal region, where most of its events reach large m_T2(ℓℓ) values because of resolution effects in the computation of p_T^miss. In this region, some of the tt events contain jets with mismeasured energy and, in a smaller proportion, there are events where one of the leptons is missed and a lepton not originating from a W boson decay (a nonprompt lepton) is taken as the second lepton in the event. The effect of the jet mismeasurements is checked in simulation and an uncertainty is assigned. Events containing nonprompt leptons are considered in a separate background category.
The second-largest contribution is the tW background, approximately 4% of the total, which is also estimated from MC simulation. DY events give the third-largest background contribution in the same-flavor channels, while their contribution is negligible in the eμ channel.
The background with nonprompt leptons is estimated from MC simulation and validated in a control region with the same selection as the signal region, but requiring two same-sign leptons. These events include contributions from jets misidentified as leptons and from leptons coming from the decay of a bottom quark that are mistakenly identified as coming from the hard process. In the same-charge region, most of the events come from tt with nonprompt leptons, with smaller contributions from events with prompt leptons from ttW and ttZ production, and from dileptonic tt with prompt leptons and a mismeasured electron charge. Agreement with same-charge data within 10-15% is observed in this validation region. Minor background contributions, also estimated from MC simulation, come from diboson (WW, WZ, and ZZ), ttW, and ttZ events, with a total contribution of about 1%.
The distributions in data of the main observables in the signal region, the leading lepton p_T, m_T2(ℓℓ), the scalar sum of the p_T of all selected jets (H_T), and p_T^miss, are shown in Fig. 3. Simulation and data generally agree within the uncertainties, which are described in Sect. 6.4.

Search strategy
In order to maximize the sensitivity and to exploit all the differences between the signal and the tt background, a multivariate approach based on a deep neural network (DNN) is used, with an output that allows one to interpret the output numbers as probabilities. The chosen optimizer is Adam [108] with a learning rate of 0.0001. Out of the 40% of events used for the DNN implementation, 60% are used for training, 15% for validation, and the rest to check that the DNN works properly and does not overfit. Figure 5 shows the DNN output for two different mass parameters in the signal region for signal and tt background. Since both masses are introduced in the training, the DNN score shape differs between the two mass points for both signal and background. The figure shows that the DNN score is a good discriminator between signal and background, especially at high values of the distribution.
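The paper's DNN architecture is not reproduced here; the sketch below only illustrates the quoted ingredients, a 60%/15%/25% train/validation/check split, the Adam optimizer with learning rate 0.0001, and a bounded output readable as a probability, on a toy logistic model in plain NumPy (all data and dimensions are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "signal" vs "background" events with two discriminating features
n = 3000
x = np.vstack([rng.normal(+1.0, 1.0, (n, 2)),    # signal-like
               rng.normal(-1.0, 1.0, (n, 2))])   # background-like
y = np.concatenate([np.ones(n), np.zeros(n)])

# 60% training / 15% validation / 25% overfitting check, as in the text
idx = rng.permutation(len(y))
n_tr, n_va = int(0.60 * len(y)), int(0.15 * len(y))
tr, va, te = idx[:n_tr], idx[n_tr:n_tr + n_va], idx[n_tr + n_va:]
x_tr, y_tr = x[tr], y[tr]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Adam update rule with the quoted learning rate of 0.0001
theta = np.zeros(3)                 # two weights and a bias
m = np.zeros(3); v = np.zeros(3)
lr, b1, b2, eps = 1e-4, 0.9, 0.999, 1e-8
for t in range(1, 20001):
    p = sigmoid(x_tr @ theta[:2] + theta[2])
    grad = np.append(x_tr.T @ (p - y_tr), np.sum(p - y_tr)) / len(y_tr)
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad**2
    theta -= lr * (m / (1 - b1**t)) / (np.sqrt(v / (1 - b2**t)) + eps)

# The bounded output can be read as a probability-like score
score = sigmoid(x[te] @ theta[:2] + theta[2])
acc = np.mean((score > 0.5) == y[te])
```

The held-out `te` sample plays the role of the overfitting check: if its score distribution differed markedly from the training one, the training would be revisited.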
The gain in sensitivity from using the DNN score, rather than p_T^miss alone, is observed already for Δm_cor = 0, where the DNN starts to improve the sensitivity (as shown in Fig. 5). The score shape separation between signal and background also starts to increase for relatively low m_t̃1 and m_χ̃1^0 if Δm_cor > 0.
The modeling of the DNN is checked in a validation region in which the signal region selection requirements are applied, except that p_T^miss < 50 GeV and m_T2(ℓℓ) < 80 GeV are required, and only the eμ channel is used. This region is orthogonal to the signal region, and the signal contamination is expected to be small for the signal masses for which the sensitivity relies on the DNN discriminant. The region is highly dominated by tt and tW events, and good agreement with data is observed. Furthermore, the DY modeling of the DNN output distribution is checked in a validation region where the invariant mass of the same-flavor lepton pair is close to the mass of the Z boson. The DY background is observed to be well modeled and populates preferentially the low DNN score bins.

Systematic uncertainties
Several sources of systematic uncertainty affect the background prediction and signal yields. Uncertainties in the trigger modeling, the lepton reconstruction, identification, and isolation efficiencies, the jet energy scale and resolution, the b tagging efficiency and mistag rate, and the pileup modeling are considered in the estimates of both background and signal yields. All these sources are described in Sect. 6.4.1.
As the tt background plays an essential role and needs to be accurately estimated, various modeling uncertainties are taken into account. These uncertainties consider variations of the main theoretical parameters used in the simulation and have been studied previously by the CMS Collaboration [62, 63]. These uncertainties are explained in detail in Sect. 6.4.2.
Uncertainties in signal modeling are described in Sect. 6.4.3. Section 6.4.4 covers other sources of uncertainty, such as the background and signal normalization uncertainties.

Experimental uncertainties
The following experimental uncertainties are calculated for every background and signal estimate and are propagated to the final DNN output distribution in the signal region.
The uncertainties in the trigger, lepton identification, and isolation efficiencies used in simulation are estimated by varying the data-to-simulation scale factors by their uncertainties, which are about 1.5% for electron identification and isolation efficiencies, 1% for muon identification and isolation efficiencies, and about 1.5% for the trigger efficiency. The uncertainties in the muon momentum scale are taken into account by varying the momenta of the muons by their uncertainties. For the b tagging efficiency and mistag rate, the uncertainties are determined by varying the scale factors for b-tagged jets and for mistagged light-flavor quark and gluon jets according to their uncertainties, as measured in QCD multijet events [97-99]. The uncertainties related to the jet energy scale and resolution are calculated by varying these quantities in bins of p_T and η, according to the uncertainties in the jet energy corrections, which amount to a few percent [95]. The uncertainty in the effect of the jet mismeasurements, described in Sect. 6.2, is added to the jet energy resolution uncertainties. This uncertainty is taken as partially correlated between years.
The uncertainty in p_T^miss from the contribution of unclustered energy is evaluated based on the momentum resolution of the different particle-flow candidates, according to their classification. Details on the procedure can be found in Refs. [93,110,111]. The uncertainty in the modeling of the contributions from pileup collisions is evaluated by varying the inelastic pp cross section in the simulation by ±4.6% [112]. These uncertainties are treated as correlated between data periods.
A summary of the experimental uncertainties in the tt background and signal is shown in Table 1. These uncertainties are also applied to the predictions of the other, minor backgrounds and have an effect on both the shape and the normalization.

Modeling uncertainties in the tt background
Modeling uncertainties for the tt background are calculated by varying different theoretical parameters in the MC generator within their uncertainties and propagating their effect to the final distributions. The uncertainty in the modeling of the hard-interaction process is assessed in the powheg sample by varying μ_F and μ_R up and down by factors of 2 and 1/2 relative to their common nominal value of μ = √(m_t² + p_T,t²), where p_T,t denotes the p_T of the top quark in the tt rest frame. The effect of this variation is propagated to the acceptance and efficiency, estimated using the tt powheg sample. This uncertainty is correlated among the data-taking periods.
The uncertainty in the choice of the PDFs and in the value of α_S is determined by reweighting the sample of simulated tt events according to the envelope of a set of 100 NNPDF3.0 replicas [67] for 2016 and of 32 PDF4LHC replicas [113] for 2017 and 2018. The uncertainty in α_S is propagated to the acceptance by reweighting the simulated sample with two sets of weights corresponding to variations within the uncertainties of α_S. Only the uncertainties for the 2017 and 2018 periods are taken to be correlated, while the 2016 period is kept uncorrelated, because a different PDF set is used.
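Replica-based PDF uncertainties are typically propagated by recomputing each binned yield once per replica weight set and taking the spread. The sketch below shows only that bookkeeping; the random "replica ratios" are invented (real replica weights are correlated across events), and the standard deviation across replicas is the usual NNPDF convention, which may differ in detail from the envelope prescription used in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for per-event PDF replica weights: nominal weight times a
# replica-dependent ratio (random here; real replica weights are
# correlated across events, these only illustrate the mechanics)
n_events, n_rep = 5000, 100
w_nom = np.ones(n_events)
rep_ratio = 1.0 + rng.normal(0.0, 0.02, (n_events, n_rep))

# Recompute the yield in one analysis bin for every replica
yield_nom = w_nom.sum()
yield_rep = (w_nom[:, None] * rep_ratio).sum(axis=0)   # one yield per replica

# Spread of the replica yields as the PDF uncertainty in this bin
pdf_unc = yield_rep.std(ddof=1)
rel_unc = pdf_unc / yield_nom
```

The same per-bin recomputation applies to the α_S weight variations, with the two variation yields replacing the replica ensemble.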
The effect of the modeling uncertainties in the initial- and final-state radiation is evaluated by varying the parton shower scales (running α_S) by factors of 2 and 1/2 [59]. In addition, the impact of the matrix element and parton shower matching, parameterized in the powheg generator by h_damp = 1.58^{+0.66}_{-0.59} m_t [63,64], is calculated by varying this parameter within its uncertainties. This uncertainty is calculated using dedicated tt samples and is taken as correlated between the years.
To model the measured underlying event, the parameters of pythia are tuned [64,70]. An uncertainty is assigned by varying these parameters within their uncertainties using dedicated tt samples. The uncertainty for the 2016 period, which applies to the CUETP8M2T4 tune, is kept uncorrelated with the uncertainty in the CP5 tune used for 2017 and 2018, which is fully correlated between those two periods.
An uncertainty in the p_T of the top quark is also considered, to account for the known mismodeling in the powheg MC sample [63]. A reweighting procedure exists to correct the mismodeling but, to avoid biasing the search, unweighted distributions are used and the full difference with respect to the reweighted distributions is assigned as an uncertainty.
For the top quark mass, 1 GeV is conservatively taken as the uncertainty, corresponding to twice the uncertainty of the CMS measurement [114], and is also propagated to the acceptance. The differences in the final yields in each bin of the DNN score distribution between the tt background predictions with m_t = 172.5 GeV and with m_t varied by ±1.0 GeV are taken as an uncertainty, accounting for a possible bias introduced by the choice of m_t = 172.5 GeV in the MC simulation. The uncertainty is assessed using dedicated tt samples produced with different values of m_t.
The modeling uncertainties in the signal region yields for the tt background are summarized in Table 2; they affect both the shape and the normalization.

Signal modeling
The effect on the signal model of the ISR reweighting, described in Sect. 3, is considered. Half of the deviation from unity is taken as the systematic uncertainty in these reweighting factors. This uncertainty is propagated to the final distribution and taken as a shape uncertainty.
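Taking half of the deviation from unity as the uncertainty amounts to scaling the weight's offset from one by 1 ± 0.5. A minimal sketch of this prescription (the function name is ours):

```python
import numpy as np

def isr_weight_variations(w):
    """Up/down variations of an ISR reweighting factor w, taking half of
    its deviation from unity as the systematic uncertainty."""
    w = np.asarray(w, dtype=float)
    half_dev = 0.5 * (w - 1.0)
    return w + half_dev, w - half_dev
```

The varied weights are then propagated through the DNN score distribution to obtain the shape uncertainty.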
The uncertainty in the modeling of the hard interaction in the simulated signal sample is calculated by varying μ_F and μ_R up and down by factors of 2 and 1/2 relative to their nominal value. In addition, a 6.5% uncertainty in the signal normalization is assigned, according to the uncertainties in the predicted cross section of the signal models in the top squark mass range of the analysis [87].

Other uncertainties
The uncertainty in the overall integrated luminosity for the combined sample, which affects the signal and background normalization, amounts to 1.6% [115-118]. The total uncertainty is split into different sources, partially correlated across the years.
A normalization uncertainty is applied to each background and signal estimate separately. The uncertainty in the tt normalization is taken from the uncertainty in the NNLO+NNLL cross section, as quoted in Sect. 3; additionally, the top quark mass is varied by ±1 GeV, leading to a 6% variation of the cross section.
For the DY, diboson, ttW, and ttZ processes a 30% normalization uncertainty is assigned, covering the uncertainty in the theoretical cross sections and in the measurements. For the tW process an uncertainty of 12% is assigned. For the nonprompt lepton background, a normalization uncertainty of 30% is also applied, covering the largest deviation observed in the validation region. Statistical uncertainties arise from the limited size of the MC samples. They are considered for each signal and background process, in each bin of the distributions, and are introduced through the Barlow-Beeston approach [119].
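In the commonly used "lite" variant of the Barlow-Beeston treatment, each bin of a template built from weighted MC events receives one nuisance parameter constrained by the effective number of entries, N_eff = (Σw)²/Σw². A sketch of that per-bin bookkeeping (our helper, with toy weights):

```python
import numpy as np

def bin_mc_stat(weights):
    """Yield, effective entry count, and relative MC-statistical
    uncertainty for one template bin filled with weighted MC events."""
    w = np.asarray(weights, dtype=float)
    total = w.sum()
    n_eff = total**2 / np.sum(w**2)      # (sum w)^2 / sum w^2
    return total, n_eff, 1.0 / np.sqrt(n_eff)
```

For unit weights N_eff equals the raw event count; any spread in the weights reduces N_eff and therefore inflates the per-bin uncertainty.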
All the systematic uncertainties described in Sects. 6.4.1 and 6.4.2 are assigned to each DNN distribution bin individually, and treated as correlated among all the bins and all processes. The statistical uncertainties are treated as uncorrelated nuisance parameters in each bin of the DNN score distribution. All of the systematic uncertainties are treated as fully correlated among the different final states.

Corridor results
The statistical interpretation is performed by testing the SM hypothesis against the SUSY hypothesis. The data and predicted distributions of the DNN response in the signal region are combined in the nine channels (3 data-taking periods × 3 lepton flavor combinations of the two leading leptons) in order to maximize the sensitivity to the signal. Each of the distributions is computed for different values of the mass parameters and compared to the prediction for the signal model with the corresponding masses. In Fig. 6 the DNN score distributions for data are compared with those from the fit, which includes the background and the signal prediction scaled by the post-fit signal strength, for different mass parameters. A binned profile likelihood fit of the DNN output distribution is performed, where the nuisance parameters are modeled using Gaussian distributions. The correlation scheme for the different data periods is described in Sect. 6.4. No significant excess over the background prediction is observed in any of the distributions.
Upper limits on the production cross section of top squark pairs are calculated at 95% confidence level (CL) using a modified frequentist approach and the CLs criterion, implemented through an asymptotic approximation [120-123].
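The analysis uses the asymptotic approximation of the profile likelihood ratio test statistic; purely to illustrate the CLs criterion itself, the toy-based one-bin counting sketch below (our simplification, not the machinery used in the paper) computes CLs = CL_{s+b}/CL_b, with a signal point excluded at 95% CL when CLs < 0.05:

```python
import numpy as np

def cls_counting(n_obs, b, s, n_toys=200_000, seed=7):
    """CLs for a one-bin counting experiment, using the observed count as
    the test statistic (counts at most n_obs disfavor the signal).
    CL_{s+b} and CL_b are the probabilities, under the signal+background
    and background-only hypotheses, of a count no larger than n_obs."""
    rng = np.random.default_rng(seed)
    cl_sb = np.mean(rng.poisson(b + s, n_toys) <= n_obs)
    cl_b = np.mean(rng.poisson(b, n_toys) <= n_obs)
    return cl_sb / cl_b
```

Dividing by CL_b protects against excluding signals to which the experiment has no real sensitivity, the motivation for the CLs criterion over a plain CL_{s+b} test.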
Results are interpreted for signals characterized by 145 < m_t̃1 < 295 GeV and Δm_cor ≤ 30 GeV. The observed upper limit on the signal cross section is shown in Fig. 7. The expected and observed upper limits are also shown for three slices corresponding to Δm(t̃_1, χ̃_1^0) = 165, 175, and 185 GeV in Fig. 8. Both the observed and expected cross section limits exclude the model over the full region of the search.

Combined results
A statistical combination of the results of the three searches described in detail in Sect. 5 is performed outside the corridor area, in order to provide interpretations in the context of the signal scenarios described in Sect. 1. The signal regions of the analyses targeting different final states are designed to be mutually exclusive. Additionally, there is no significant overlap of any of the control regions with the signal regions of a different analysis. The overlap between control regions of the single-lepton and dilepton analyses is found to be less than 1% and is therefore considered negligible. Correlations of systematic uncertainties in the expected signal and background yields are studied and taken into account. Uncertainties in the jet energy scale and p_T^miss resolution, b tagging efficiency scale factors, heavy resonance taggers, integrated luminosity, and background normalizations are treated as fully correlated. Because of differences in the lepton identification methods and working points, as well as in the triggers used to select events, the corresponding uncertainties are considered uncorrelated. Theory uncertainties in the choice of the PDFs, in μ_R and μ_F, and in the ISR modeling of the signal prediction, as well as of the SM backgrounds estimated using simulation, are taken to be fully correlated.

Figure 9 (upper left) shows the combination of the results of the three searches for direct top squark pair production in the model with t̃_1 → t χ̃_1^0 decays. The analysis described in Sect. 6 is exclusively used for extracting limits in the top quark corridor region; no result of the other analyses is used in this particular region of parameter space. The combined result excludes a top squark mass up to 1325 GeV for a massless LSP, and an LSP mass up to 700 GeV for a top squark mass of 1150 GeV. The expected limit of the combination is dominated by the fully hadronic search for signals with large mass splitting.
In regions with smaller mass splitting between the top squark and the LSP, the searches in the zero- and single-lepton channels have similar sensitivity. Figure 9 (upper right) shows the equivalent limits for direct top squark pair production in the model with t̃_1 → b χ̃_1^± decays. Figure 9 (lower) shows the limits for the model with a 50% branching fraction for each of the top squark decay modes discussed previously. In this model, the mass splitting between the neutralino and the chargino is assumed to be 5 GeV. Because of the low acceptance for low-momentum leptons, the dilepton result is not interpreted in terms of this model. Top squark masses up to 1175 GeV are excluded in this model when the LSP is massless, and up to 1000 GeV for LSP masses up to 570 GeV.
As shown in Fig. 9 (upper left), the region of the parameter space of the simplified SUSY models considered for interpretation in this analysis, which is favored by the naturalness paradigm, is now further constrained by the exclusion limits. The results of the inclusive top squark searches are also interpreted in simplified models of associated production of DM particles with a top quark pair, shown in Fig. 2, in which the interaction of the DM particles and the top quark is mediated by a scalar or pseudoscalar mediator particle. Assuming a dark matter particle mass of 1 GeV, scalar and pseudoscalar mediators with masses up to 400 and 420 GeV, respectively, are excluded at 95% CL, as shown in Fig. 10. The obtained upper limits on σ(pp → ttχχ)/σ_theory are independent of the mass of the DM fermion (m_χ), as long as the mediator is produced on-shell [46]. Previous results are improved by more than 100 GeV [50,51], and the sensitivity extends beyond m_φ/a > 2m_t for the first time. The competing decay channel of the mediator into a top quark pair, φ/a → tt, is taken into account in the signal simulation and cross section calculation.

Summary
Four searches for top squark pair production and their statistical combination are presented. The searches use a data set of proton-proton collisions at a center-of-mass energy of 13 TeV collected by the CMS detector, corresponding to an integrated luminosity of 137 fb⁻¹. A dedicated analysis is presented that is sensitive to signal models in which the mass splitting between the top squark and the lightest supersymmetric particle (LSP) is close to the top quark mass. A deep neural network algorithm is used to separate the signal from the top quark background using events containing an opposite-charge dilepton pair, at least two jets, at least one b-tagged jet, p_T^miss > 50 GeV, and stransverse mass greater than 80 GeV. No excess of data over the standard model prediction is observed, and upper limits are set at 95% confidence level on the top squark production cross section. Top squarks with masses from 145 to 295 GeV, for LSP masses from 0 to 100 GeV, with a mass difference between the top squark and the LSP within a window of 30 GeV around the top quark mass, are excluded for the first time in CMS.

[Fig. 9 caption: limits for the t̃_1 → t χ̃_1^0 and t̃_1 → b χ̃_1^± models and for a model with a 50% branching fraction for each of these top squark decay modes (lower), assuming a neutralino-chargino mass difference of 5 GeV. The color indicates the 95% CL upper limit on the cross section at each point in the plane; the area below the thick black curve is the observed exclusion region at 95% CL; the dashed red lines indicate the expected limits and the region containing 68% of the distribution of limits expected under the background-only hypothesis; the thin black lines show the effect of the theoretical uncertainties in the signal cross section.]
Previously published searches in final states with 0, 1, or 2 leptons are combined to extend the exclusion limits for top squarks with masses up to 1325 GeV for a massless LSP, and for an LSP mass up to 700 GeV for a top squark mass of 1150 GeV, for certain models of top squark production. In an alternative signal model of dark matter production via a spin-0 mediator in association with a top quark pair, mediator masses up to 400 and 420 GeV are excluded for scalar and pseudoscalar mediators, respectively, assuming a dark matter particle mass of 1 GeV.

Acknowledgements

We congratulate our colleagues in the CERN accelerator departments for the excellent performance of the LHC and thank the technical and administrative staffs at CERN and at other CMS institutes for their contributions to the success of the CMS effort. In addition, we gratefully acknowledge the computing centers and personnel of the Worldwide LHC Computing Grid and other centers for delivering so effectively the computing infrastructure essential to our analyses. Finally, we acknowledge the enduring support for the construction and operation of the LHC, the CMS detector, and the supporting computing infrastructure.

Data Availability Statement
This manuscript has no associated data or the data will not be deposited. [Authors' comment: Release and preservation of data used by the CMS Collaboration as the basis for publications is guided by the CMS policy as written in its document "CMS data preservation, re-use and open access policy" (https://cms-docdb.cern.ch/cgi-bin/PublicDocDB/RetrieveFile?docid=6032&filename=CMSDataPolicyV1.2.pdf&version=2).]

Declarations
Conflict of interest The authors declare that they have no conflict of interest.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. Funded by SCOAP³.