Search for new phenomena in a lepton plus high jet multiplicity final state with the ATLAS experiment using $\sqrt{s}$ = 13 TeV proton-proton collision data

A search for new phenomena in final states characterized by high jet multiplicity, an isolated lepton (electron or muon) and either zero or at least three $b$-tagged jets is presented. The search uses 36.1 fb$^{-1}$ of $\sqrt{s}$ = 13 TeV proton-proton collision data collected by the ATLAS experiment at the Large Hadron Collider in 2015 and 2016. The dominant sources of background are estimated using parameterized extrapolations, based on observables at medium jet multiplicity, to predict the $b$-tagged jet multiplicity distribution at the higher jet multiplicities used in the search. No significant excess over the Standard Model expectation is observed and 95% confidence-level limits are extracted constraining four simplified models of $R$-parity-violating supersymmetry that feature either gluino or top-squark pair production. The exclusion limits reach as high as 2.1 TeV in gluino mass and 1.2 TeV in top-squark mass in the models considered. In addition, an upper limit is set on the cross-section for Standard Model $t\bar{t}t\bar{t}$ production of 60 fb (6.5 $\times$ the Standard Model prediction) at 95% confidence level. Finally, model-independent limits are set on the contribution from new phenomena to the signal-region yields.


Introduction
The ATLAS trigger system [17] consists of two levels; the first level is a hardware-based system, while the second is a software-based system called the High-Level Trigger.
Data and simulated event samples

Data sample
After applying beam, detector and data-quality criteria, the data sample analysed comprises 36.1 fb⁻¹ of √s = 13 TeV proton-proton (pp) collision data (3.2 fb⁻¹ collected in 2015 and 32.9 fb⁻¹ collected in 2016) with a minimum pp bunch spacing of 25 ns. In this data set, the mean number of pp interactions per proton-bunch crossing (pile-up) is µ = 23.7. The luminosity and its uncertainty of 3.2% are derived following a methodology similar to that detailed in Ref. [18], from a preliminary calibration of the luminosity scale using a pair of x-y beam-separation scans performed in August 2015 and June 2016.
Events are recorded online using a single-electron or single-muon trigger with thresholds that give a constant efficiency as a function of lepton p_T of ≈90% (≈80%) for electrons (muons) for the event selection used. For the determination of the multi-jet background, alternative lepton triggers, using less stringent lepton isolation requirements with respect to the nominal ones, are considered, as discussed in Section 6. Single-photon and multi-jet triggers are also employed to select data samples used in the validation of the background estimation technique.

Simulated signal events
Simulated signal events from four SUSY benchmark models are used to guide the analysis selections and to estimate the expected signal yields for different signal-mass hypotheses used to interpret the analysis results. In all models, the RPV couplings and the SUSY particle masses are chosen to ensure prompt decays of the SUSY particles. Diagrams of the first three benchmark simplified models, which involve gluino pair production, are shown in Figures 1(a), 1(b) and 1(c). In the first model, each gluino decays via a virtual top squark to two top quarks and the lightest neutralino (χ̃⁰₁), which is the lightest supersymmetric particle (LSP). The χ̃⁰₁ decays to three light quarks (χ̃⁰₁ → uds) via the RPV coupling λ″₁₁₂. For this model, χ̃⁰₁ masses below 10 GeV are not considered, in order to avoid the effect of the limited phase space in the χ̃⁰₁ decay. In the second model, each gluino decays to a top quark and a top-squark LSP, with the top squark decaying to an s-quark and a b-quark via a non-zero λ″₃₂₃ RPV coupling.² The third model involves the gluino decaying to two first- or second-generation quarks (q ≡ u, d, s, c) and the χ̃⁰₁ LSP, which then decays to two additional first- or second-generation quarks and a charged lepton or a neutrino (χ̃⁰₁ → qqℓ or χ̃⁰₁ → qqν, labelled as χ̃⁰₁ → qqℓ/ν). The decay proceeds via a λ′ RPV coupling, where each RPV decay can produce any of the four first- and second-generation leptons (e±, µ±, νe, νµ) with equal probability. For this model, χ̃⁰₁ masses below 50 GeV are not considered.
The fourth scenario considered involves right-handed top-squark pair production, with the top squark decaying to a bino or higgsino LSP and a top or bottom quark. The LSP decays through the non-zero RPV coupling λ″₃₂₃ ≈ O(10⁻²–10⁻¹), with the value chosen to ensure prompt decays for the particle masses considered and to avoid more complex patterns of RPV decays that are not considered here. Figure 1(d) shows the production and possible decays considered. The different decay modes depend on the nature of the LSP and have a small dependence on the top-squark mass, with the top squark decaying as t̃ → tχ̃⁰₁ for a bino-like LSP, and as t̃ → tχ̃⁰₂ (≈25%), t̃ → tχ̃⁰₁ (≈25%) or t̃ → bχ̃⁺₁ (≈50%) for higgsino-like LSPs. With the chosen model parameters, the electroweakinos decay as χ̃⁰₁/₂ → tbs or χ̃±₁ → bbs. The search results are interpreted in this model under the assumption of either a pure higgsino (H̃) or pure bino (B̃) LSP. In the case of a wino LSP, the search has no sensitivity, as the top squark decays directly as t̃ → bs with no leptons produced in the final state.

Event samples for the first signal model (g̃ → ttχ̃⁰₁ → tt + uds) are produced using the Herwig++ 2.7.1 [29] event generator with the cteq6l1 [30] PDF set and the UEEE5 tune [31]. For the other three models, the MG5_aMC@NLO v2.3.3 [32] event generator interfaced to Pythia 8.210 is used. For these cases, signal events are produced with either one (g̃ → tt̃ → t + bs model) or two (g̃ → qqχ̃⁰₁ → qq + qqℓ/ν and t̃ → t + H̃/B̃ models) additional partons in the matrix element and using the A14 [33] tune. The parton luminosities are provided by the NNPDF23LO [34] PDF set.
Signal cross-sections are calculated to next-to-leading order in the strong coupling constant, adding the resummation of soft-gluon emission at next-to-leading-logarithmic accuracy (NLO+NLL) [35][36][37][38][39]. The nominal cross-section and its uncertainty are taken from an envelope of cross-section predictions using different PDF sets as well as different factorization and renormalization scales, as described in Ref. [40].
The analysis is also used to search for SM four-top-quark production. In this case, the tttt sample is generated with the MG5_aMC@NLO 2.2.2 event generator interfaced to Pythia 8.186 using the NNPDF23LO PDF set and the A14 tune.

Simulated background events
The dominant backgrounds from top-quark pair production and W/Z+jets production are estimated from the data as described in Section 6, whereas the expected yields for minor backgrounds are taken from MC simulation. In addition, the background estimation procedure is validated with simulated events, and some of the systematic uncertainties are estimated using simulated event samples. The samples used are shown in Table 1.

Figure 1: Diagrams of the four simplified signal benchmark models considered. The first three models involve pair production of gluinos, with each gluino decaying as (a) g̃ → ttχ̃⁰₁ → tt + uds, (b) g̃ → tt̃ → t + bs, (c) g̃ → qqχ̃⁰₁ → qq + qqℓ/ν. The fourth model (d) involves pair production of top squarks with the decay t̃ → tχ̃⁰₁/₂ or t̃ → bχ̃⁺₁, and with the LSP decays χ̃⁰₁/₂ → tbs or χ̃⁺₁ → bbs; the specific decay depends on the nature of the LSP. In all signal scenarios, anti-squarks decay into the charge-conjugate final states of those indicated for the corresponding squarks, and each gluino decays with equal probabilities into the given final state or its charge conjugate.

² The same final state can be produced by requiring a non-zero λ″₃₁₃ coupling.

Event reconstruction
For a given event, primary vertex candidates are required to be consistent with the luminous region and to have at least two associated tracks with p_T > 400 MeV. The vertex with the largest Σp_T² of the associated tracks is chosen as the primary vertex of the event.
Jet candidates are reconstructed using the anti-k_t jet clustering algorithm [62, 63] with a radius parameter of 0.4, starting from energy deposits in clusters of calorimeter cells [64]. The jets are corrected for energy deposits from pile-up collisions using the method suggested in Ref. [65] and calibrated with ATLAS data in Ref. [66]: a contribution equal to the product of the jet area and the median energy density of the event is subtracted from the jet energy. Further corrections derived from MC simulation and data are used to calibrate on average the energies of jets to the scale of their constituent particles [67]. In the search, three jet p_T thresholds of 40 GeV, 60 GeV and 80 GeV are used, with all jets required to have |η| < 2.4. To minimize the contribution from jets arising from pile-up interactions, the selected jets must satisfy a loose jet vertex tagger (JVT) requirement [68], where JVT is an algorithm that uses tracking and primary-vertex information to determine whether a given jet originates from the primary vertex. The chosen working point has an efficiency of 94% at a jet p_T of 40 GeV and is nearly fully efficient above 60 GeV for jets originating from the hard parton-parton scatter. This selection reduces the number of jets originating from, or heavily contaminated by, pile-up interactions to a negligible level. Events with jet candidates originating from detector noise or non-collision background are rejected if any of the jet candidates satisfy the 'LooseBad' quality criteria described in Ref. [69]. The coverage of the calorimeter and the jet reconstruction techniques allow high-jet-multiplicity final states to be reconstructed efficiently: for example, 12 jets take up only about one fifth of the available solid angle.
Jets containing a b-hadron (b-jets) are identified by a multivariate algorithm using information about the impact parameters of ID tracks matched to the jet, the presence of displaced secondary vertices, and the reconstructed flight paths of b- and c-hadrons inside the jet [70]. The operating point used corresponds to an efficiency of 78% in simulated tt events, along with a rejection factor of approximately 110 for jets induced by gluons or light quarks and of 8 for charm jets [71], and is configured to give a constant b-tagging efficiency as a function of jet p_T.
Since there is no requirement on E_T^miss or any E_T^miss-derived quantity, the search is particularly sensitive to fake or non-prompt leptons in multi-jet events. To suppress this background to an acceptable level, stringent lepton identification and isolation requirements are used.
Muon candidates are formed by combining information from the muon spectrometer and the ID, and must satisfy the 'Medium' quality criteria described in Ref. [72]. They are required to have p_T > 30 GeV and |η| < 2.4. Furthermore, they must satisfy requirements on the significance of the transverse impact parameter with respect to the primary vertex, |d₀^PV|/σ(d₀^PV) < 3, the longitudinal impact parameter with respect to the primary vertex, |z₀^PV sin θ| < 0.5 mm, and the 'Gradient' isolation requirements described in Ref. [72], relying on a set of η- and p_T-dependent criteria based on tracking- and calorimeter-related variables.
Electron candidates are reconstructed from isolated energy deposits in the electromagnetic calorimeter matched to ID tracks and are required to have p_T > 30 GeV, |η| < 2.47, and to satisfy the 'Tight' likelihood-based identification criteria described in Ref. [73]. Electron candidates that fall in the transition region between the barrel and endcap calorimeters (1.37 < |η| < 1.52) are rejected. They are also required to have |d₀^PV|/σ(d₀^PV) < 5, |z₀^PV sin θ| < 0.5 mm, and to satisfy the isolation requirements described in Ref. [73].
An overlap-removal procedure is carried out to resolve ambiguities between candidate jets (with p_T > 20 GeV) and baseline leptons⁵ as follows: first, any non-b-tagged jet candidate⁶ lying within an angular distance ∆R ≡ √((∆y)² + (∆φ)²) = 0.2 of a baseline electron is discarded. Furthermore, non-b-tagged jets within ∆R = 0.4 of baseline muons are removed if the number of tracks associated with the jet is less than three or if the ratio of muon p_T to jet p_T is greater than 0.5. Finally, any baseline lepton candidate remaining within a distance ∆R = 0.4 of any surviving jet candidate is discarded.
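As an illustration, the three-step overlap-removal logic can be sketched in Python. This is a simplified sketch on toy dictionaries, not ATLAS software; the field names (`y`, `phi`, `pt`, `ntrk`, `btag`) are assumptions made for the example.

```python
import math

def delta_r(a, b):
    """Angular distance using rapidity y and azimuthal angle phi."""
    dy = a["y"] - b["y"]
    dphi = abs(a["phi"] - b["phi"])
    if dphi > math.pi:
        dphi = 2.0 * math.pi - dphi
    return math.hypot(dy, dphi)

def overlap_removal(jets, electrons, muons):
    """Sketch of the three-step procedure described in the text."""
    # Step 1: drop non-b-tagged jets within dR < 0.2 of a baseline electron.
    jets = [j for j in jets
            if j["btag"] or all(delta_r(j, e) >= 0.2 for e in electrons)]
    # Step 2: drop non-b-tagged jets within dR < 0.4 of a baseline muon
    # if the jet has fewer than three tracks or pT(mu)/pT(jet) > 0.5.
    jets = [j for j in jets
            if j["btag"] or not any(
                delta_r(j, m) < 0.4 and (j["ntrk"] < 3 or m["pt"] / j["pt"] > 0.5)
                for m in muons)]
    # Step 3: drop any remaining lepton within dR < 0.4 of a surviving jet.
    electrons = [e for e in electrons
                 if all(delta_r(e, j) >= 0.4 for j in jets)]
    muons = [m for m in muons
             if all(delta_r(m, j) >= 0.4 for j in jets)]
    return jets, electrons, muons
```

Note the ordering matters: leptons are only removed against jets that survive the first two steps.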
Corrections derived from data control samples are applied to account for differences between data and simulation for the lepton trigger, reconstruction, identification and isolation efficiencies, the lepton momentum/energy scale and resolution [72,73], and for the efficiency and mis-tag rate of the b-tagging algorithm [70].

Event selection and analysis strategy
Events are selected online using a single-electron or single-muon trigger. For the analysis selection, at least one electron or muon, matched to the trigger lepton, is required in the event. The analysis is carried out with three sets of jet p_T thresholds to provide sensitivity to a broad range of possible signals. These thresholds are applied to all jets in the event and are p_T = 40 GeV, 60 GeV and 80 GeV. The jet multiplicity is binned from a minimum of five jets to a maximum number that depends on the p_T threshold. The last bin is inclusive, so that it also includes all events with more jets than the bin number. This bin corresponds to 12 or more jets for the 40 GeV requirement, and 10 or more jets for the 60 GeV and 80 GeV thresholds. There are five bins in the b-tagged-jet multiplicity (exclusive bins from zero to three, with an additional inclusive four-or-more bin). In this article, the notation N^{process}_{j,b} is used to denote the number of events predicted by the background fit model with j jets and b b-tagged jets for a given process, e.g. N^{tt+jets}_{j,b} for tt+jets events. The number of events summed over all b-tag multiplicity bins for a given number of jets is denoted by N^{process}_j, and is also referred to as a jet slice.
For probing a specific BSM model, all of these bins in data are simultaneously fit to constrain the model, in what is labelled a model-dependent fit. In the search for a hypothetical BSM signal, dedicated signal regions (SRs) are defined which could be populated by a signal, and where the SM contribution is expected to be small. The background in these SRs is estimated from a fit in which some of the bins can be excluded to limit the effect of signal contamination biasing the background estimate; this set-up is labelled a model-independent fit. More details of the SR definitions are given in Section 7.

⁵ Baseline leptons are reconstructed as described above, but with a looser p_T requirement (p_T > 10 GeV), no isolation or impact-parameter requirements, and, in the case of electrons, the 'Loose' lepton identification criteria [74].
⁶ In this case, a b-tagging working point corresponding to an efficiency of identifying b-jets in a simulated tt sample of 85% is used.
An example of the expected background contributions from MC simulation for the different b-tag bins, with a selection of at least ten jets, can be seen in Figure 2. This figure shows that the background in the zero-b-tag bin is dominated by W/Z+jets and tt+jets, whereas in the other b-tag bins it is dominated by tt+jets. The contribution from other processes is very small in all bins.

The estimation of the dominant background processes of tt+jets and W/Z+jets production is carried out using a combined fit to the jet and b-tagged-jet multiplicity bins described above. For these backgrounds, the normalization per jet slice is derived using parameterized extrapolations from lower jet multiplicities. The b-tag multiplicity shape per jet slice is taken from simulation for the W/Z+jets background, whereas for the tt+jets background it is predicted from the data using a parameterized extrapolation based on observables at medium jet multiplicities. A separate likelihood fit is carried out for each jet p_T threshold, with the fit parameters of the background model determined separately in each fit. The assumptions used in the parameterization are validated using data and MC simulation. For the model-independent results, possible signal leakage into the control regions could bias the background estimate; these limits are therefore obtained under the assumption of negligible signal contributions to events with five, six or seven jets. Signal processes with the final states targeted by this search generally have negligible leakage into these jet slices, as is the case for the benchmark models considered.

W/Z+jets
A partially data-driven approach is used to estimate the W/Z+jets background. Since the selected W/Z+jets background events usually have no b-jets, the shapes of the b-tag multiplicity distributions are taken from simulated events, whereas the normalization in each jet slice is derived from the data. The estimate of the normalization relies on assuming a functional form to describe the evolution of the number of W/Z+jets events as a function of the jet multiplicity,

r(j) ≡ N^{W/Z+jets}_{j+1} / N^{W/Z+jets}_j.

Above a certain number of jets, r(j) can be assumed to be constant, implying a fixed probability of additional jet radiation, referred to as "staircase scaling" [75][76][77][78]. This behaviour has been observed by the ATLAS [79, 80] and CMS [81] collaborations. For lower jet multiplicities, a different scaling is expected, with r(j) = k/(j + 1) where k is a constant, referred to as "Poisson scaling" [78]. For the kinematic phase space relevant for this search, a combination of the two scalings is found to describe the data in dedicated validation regions (described later in this section), as well as in simulated W/Z+jets event samples with an integrated luminosity much larger than that of the data. This combined scaling is parameterized as

r(j) = c₀ + c₁/(j + 1),   (1)

where c₀ and c₁ are constants that are extracted from the data. Studies using simulated event samples, both at generator level and after event reconstruction, demonstrate that the flexibility of this parameterization is also able to absorb reconstruction effects related to the decrease in event reconstruction efficiency with increasing jet multiplicity, which are mainly due to the lepton-jet overlap and lepton-isolation requirements.
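To illustrate how such a scaling is used, the following sketch fits the combined parameterization r(j) = c₀ + c₁/(j + 1) to toy event counts and extrapolates to higher jet multiplicities. The numbers are invented for illustration and the fit is an unweighted least squares, with none of the statistical treatment of the real analysis.

```python
# Toy event counts per jet multiplicity (assumed inputs for this sketch).
n_events = {5: 10000.0, 6: 4200.0, 7: 1850.0, 8: 840.0, 9: 390.0}

# Observed step-down ratios r(j) = N_{j+1} / N_j for j = 5..8.
points = [(j, n_events[j + 1] / n_events[j]) for j in range(5, 9)]

# r(j) = c0 + c1/(j+1) is linear in (c0, c1), so an ordinary
# least-squares line fit of r against u = 1/(j+1) is sufficient here.
us = [1.0 / (j + 1.0) for j, _ in points]
rs = [r for _, r in points]
u_mean = sum(us) / len(us)
r_mean = sum(rs) / len(rs)
c1 = sum((u - u_mean) * (r - r_mean) for u, r in zip(us, rs)) / \
     sum((u - u_mean) ** 2 for u in us)
c0 = r_mean - c1 * u_mean

def r_fit(j):
    """Fitted combined scaling, Eq. (1)-style."""
    return c0 + c1 / (j + 1.0)

def predict(j):
    """Extrapolate N_j = N_5 * prod_{i=5}^{j-1} r(i)."""
    n_pred = n_events[5]
    for i in range(5, j):
        n_pred *= r_fit(i)
    return n_pred
```

With these toy counts the fitted c₁ is negative and c₀ positive, i.e. r(j) rises toward the asymptotic staircase value c₀ as j grows.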
The number of W+jets or Z+jets events with different jet and b-jet multiplicities, N^{W/Z+jets}_{j,b}, is then parameterized as

N^{W/Z+jets}_{j,b} = N^{W/Z+jets}_5 · [ ∏_{i=5}^{j-1} r(i) ] · f^{MC}_{j,b},

where f^{MC}_{j,b} is the b-tag multiplicity shape taken from simulation. Owing to the different b-tagged-jet multiplicity distributions in W+jets and Z+jets events, the b-tag distribution is modelled separately for the two processes. The normalization and scaling parameters (N^{W+jets}_5, N^{Z+jets}_5, c₀ and c₁) are extracted from the data. The Z+jets normalization is constrained using two-lepton control regions, while the determination of the W+jets background relies on control regions containing the remaining events with exactly five, six or seven jets and zero b-tags, which, for each jet multiplicity, are split according to the electric charge of the highest-p_T lepton. The expected charge asymmetry in W+jets events is taken from MC simulation separately for five-jet, six-jet and seven-jet events and is used to constrain the W+jets normalization from the data using these control regions. Although all parameters are determined in a global likelihood fit, the most powerful constraint on the absolute normalization comes from the five-jet control regions, and the dominant constraints on the c₀ and c₁ parameters originate from the combination of the five-jet, six-jet and seven-jet control regions. The contamination by tt events in the Z+jets two-lepton control regions is negligible, whereas in the control regions used to estimate the W+jets normalization it is significant, and is discussed in Section 6.2. Once the W+jets and Z+jets backgrounds are normalized, they are extrapolated to higher jet multiplicities using the same common scaling function r(j). While independent scalings could be used, tests in data show no significant difference and therefore a common function is used.
The jet-scaling assumption is validated in data using γ+jets and multi-jet events, and simulated W+jets and Z+jets samples are also found to be consistent with this assumption. The γ+jets events are selected using a photon trigger, with an isolated photon [82] with p_T > 145 GeV required in the event selection, whereas the multi-jet events are selected using prescaled and unprescaled multi-jet triggers. In both cases, selections are applied to ensure these control regions probe a kinematic phase-space region similar to the one relevant for the analysis. Figure 3 shows the r(j) ratio for various processes used to validate the jet-scaling parameterization. Each panel shows the ratio for data or MC simulation with the fitted parameterization overlaid as a line. In the case of pure "staircase scaling", the shown ratio would be a constant.
Since the last jet-multiplicity bin used in the analysis is inclusive in the number of jets, the W/Z+jets background model predicts its yield by iterating to higher jet multiplicities and summing the contributions from each jet multiplicity above the maximum used in the analysis, thereby giving the correct inclusive yield in this bin.
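The iterate-and-sum treatment of the inclusive last bin can be sketched as follows. This is a minimal sketch: since r(j) tends to a constant below one at large j, the sum converges like a geometric series, and the iteration can simply be stopped once further terms are negligible.

```python
def inclusive_yield(n_start, j_start, r, max_iter=200):
    """Sum N_j for j >= j_start by iterating the scaling r(j).

    Converges as long as r(j) settles below one at large j
    ("staircase scaling"); the loop stops when the next term is
    negligible relative to the accumulated total.
    """
    total = 0.0
    n, j = n_start, j_start
    for _ in range(max_iter):
        total += n
        n *= r(j)
        j += 1
        if n < 1e-9 * total:
            break
    return total
```

For a constant ratio r the result reduces to the geometric sum n_start / (1 - r), which provides a quick cross-check.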

tt+jets
A data-driven model is used to estimate the number of events from tt+jets production in a given jet and b-tag multiplicity bin. The basic concept of this model is the extraction of an initial template of the b-tag multiplicity distribution in events with five jets and the parameterization of the evolution of this template to higher jet multiplicities. The absolute normalization for each jet slice is constrained in the fit as discussed later in this section. Figure 4 shows the b-tag multiplicity distributions in tt+jets MC simulation for five-, eight- and ten-jet events, demonstrating how the distributions evolve as the number of jets increases. The background estimation parameterizes this effect and extracts the parameters describing the evolution from a fit to the data.
The extrapolation of the b-tag multiplicity distribution to higher jet multiplicities starts from the assumption that the difference between the b-tag multiplicity distributions in events with j and j + 1 jets arises mainly from the production of additional jets, and can be described by a fixed probability that the additional jet is b-tagged. Given the small mis-tag rate, this probability is dominated by the probability that the additional jet is a heavy-flavour jet which is b-tagged. In order to account for acceptance effects due to the different kinematics in events with high jet multiplicity, the probability of further b-tagged jets entering the acceptance is also taken into account. The extrapolation to one additional jet can be parameterized as

N^{tt+jets}_{j+1,b} = N^{tt+jets}_{j+1} · ( x₀ · f_{j,b} + x₁ · f_{j,b-1} + x₂ · f_{j,b-2} ),   (2)

where N^{tt+jets}_j is the number of tt+jets events with j jets and f_{j,b} is the fraction of tt events with j jets of which b are b-tagged. The parameters x_i describe the probability of one additional jet to be either not b-tagged (x₀), b-tagged (x₁), or b-tagged and causing a second b-tagged jet to move into the fiducial acceptance (x₂). The latter is dominated by cases where the extra jet is a b-jet, influencing the event kinematics such that an additional b-jet, below the jet p_T threshold, enters the acceptance. Given that the x_i parameters describe probabilities, the sum ∑_i x_i is normalized to unity.

Figure 3: The ratio of the number of events with (j + 1) jets to the number with j jets for various processes used to validate the jet-scaling parameterization. Each panel shows the ratio for data or MC simulation with the fitted parameterization overlaid as a line. In the case of pure "staircase scaling", the shown ratio would be a constant. For the multi-jet data points, the 40 GeV jet p_T selection uses a prescaled trigger corresponding to an integrated luminosity of 358 nb⁻¹; all other selections use unprescaled triggers corresponding to the full data set. The uncertainties shown are statistical.
Subsequent application of this parameterization produces a b-tag template for arbitrarily high jet multiplicities.
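The repeated application of this evolution step can be sketched as below. This is a simplified sketch: the last bin is the inclusive ≥4-b-tag bin (so migrations past it are folded back in), and the correlated-production correction introduced next in the text is omitted.

```python
def evolve_btag_template(f_j, x0, x1, x2):
    """One step of f_{j+1,b} = x0*f_{j,b} + x1*f_{j,b-1} + x2*f_{j,b-2}.

    Bins are b = 0,1,2,3 exclusive plus an inclusive >=4 bin, so any
    migration beyond the last bin is folded into it.  The correlated
    two-b-tag term of the full model is not included in this sketch.
    """
    assert abs(x0 + x1 + x2 - 1.0) < 1e-9  # x_i are probabilities
    n_bins = len(f_j)
    f_next = [0.0] * n_bins
    for b, frac in enumerate(f_j):
        for shift, prob in enumerate((x0, x1, x2)):
            f_next[min(b + shift, n_bins - 1)] += prob * frac
    return f_next
```

Because each step redistributes probability without creating or destroying it, the template stays normalized to unity at every jet multiplicity, matching the constraint ∑_i x_i = 1.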
Studies based on MC simulated events with sample sizes corresponding to very large equivalent luminosities, as well as studies using fully efficient generator-level b-tagging, indicate the necessity to add a fit parameter that allows for correlated production of two b-tagged jets, as may be expected for b-jet production from gluon splitting. This is implemented by changing the evolution described in Eq. (2) such that any term with x₁ · x₁ is replaced by x₁ · x₁ · ρ₁₁, where ρ₁₁ describes the correlated production of two b-tagged jets.
The initial b-tag multiplicity template is extracted from data events with five jets after subtracting all non-tt background processes. It is denoted by f_{5,b} and scaled by the absolute normalization N^{tt+jets}_5 in order to obtain the model in the five-jet bin:

N^{tt+jets}_{5,b} = N^{tt+jets}_5 · f_{5,b},

where the sum of f_{5,b} over the five b-tag bins is normalized to unity.
The model described above is based on the assumption that any change of the b-tag multiplicity distribution is due to additional jet radiation with a certain probability to lead to b-tagged jets. There is, however, also a small increase in the acceptance for b-jets produced in the decay of the tt system when increasing the jet multiplicity, due to the higher jet momentum on average. The effect amounts to up to 5% in the one- and two-b-tag bins for high jet multiplicities, and is taken into account using a correction to the initial template extracted from simulated tt events.
As is the case for the W/Z+jets background, the normalization of the tt background in each jet slice is constrained using a scaling behaviour similar to that in Eq. (1). The parameterization is slightly modified to

r(j) = c₀^{tt+jets} + c₁^{tt+jets}/(j + c₂^{tt+jets}),

where the three parameters c₀^{tt+jets}, c₁^{tt+jets} and c₂^{tt+jets} are extracted from a fit to the data. In this case, since j is the total number of jets in the event, and not the number of jets produced in addition to the tt system, the denominator (j + 1) in Eq. (1) is replaced by (j + c₂^{tt+jets}) to take into account the ambiguity in the counting of additional jets due to acceptance effects for the tt decay products.
The scaling behaviour is tested in tt+jets MC simulation (both with the nominal sample and the alternative sample described in Table 1), and also in data with a dileptonic tt+jets control sample. This sample is selected by requiring an electron candidate and a muon candidate in the event, with at least three jets of which at least one is b-tagged, and the small background predicted by MC simulation is subtracted. In this control region, the scaling behaviour can be tested for up to eight jets, which corresponds to ten jets for a semileptonic tt+jets sample (the dominant component of the tt+jets background). Figure 5 compares the scaling behaviour in data and MC simulation with a fit of the parameterization used, and shows that the assumed function describes both well for the jet-multiplicity range relevant to this search.
As for the W/Z+jets background estimate, the tt+jets background model is used to predict the yield in the highest jet-multiplicity bin by iterating to higher jet multiplicities and summing these contributions to give the inclusive yield.
The zero-b-tag component of the initial tt template, which is extracted from events with five jets, exhibits an anti-correlation with the absolute W+jets normalization, which is extracted in the same bin. The control regions separated in leading-lepton charge, detailed in Section 6.1, provide a handle to extract the absolute W+jets normalization. The remaining anti-correlation does not affect the total background estimate. For these control regions, the tt+jets process is assumed to be charge symmetric and the model is simply split into two halves for these bins.

Multi-jet events
The contribution from multi-jet production with a fake or non-prompt (FNP) lepton (such as hadrons misidentified as leptons, leptons originating from the decay of heavy-flavour hadrons, and electrons from photon conversions) constitutes a minor but non-negligible background, especially in the lower jet slices. It is estimated from the data with a matrix method similar to that described in Ref. [83]. In this method, two types of lepton identification criteria are defined: "tight", corresponding to the default lepton criteria described in Section 4, and "loose", corresponding to baseline leptons after overlap removal. The matrix method relates the number of events containing prompt or FNP leptons to the number of observed events with tight or loose-not-tight leptons, using the probabilities for loose prompt or loose FNP leptons to satisfy the tight criteria. The probability for loose prompt leptons to satisfy the tight selection criteria is obtained using a Z → ℓℓ data sample and is modelled as a function of the lepton p_T. The probability for loose FNP leptons to satisfy the tight selection criteria is determined from a data control region enriched in non-prompt leptons, requiring a loose lepton, multiple jets, low E_T^miss [84, 85] and low transverse mass. The efficiencies are measured as a function of lepton-candidate p_T after subtracting the contribution from prompt-lepton processes, and are assumed to be independent of the jet multiplicity.

Figure 5: The ratio of the number of events with (j + 1) jets to the number with j jets in dileptonic and semileptonic tt+jets events, used to validate the jet-scaling parameterization. Each panel shows the ratio for data or MC simulation with the fitted parameterization overlaid as a line. In the case of pure "staircase scaling", the shown ratio would be a constant. The uncertainties shown are statistical.
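The core of the two-criteria matrix method is a 2×2 linear system, which the following sketch solves for the FNP contribution to the tight sample. The efficiencies and yields are invented for illustration; the real analysis applies the method per lepton and bins the efficiencies in p_T.

```python
def fnp_in_tight(n_tight, n_loose_not_tight, eff_real, eff_fake):
    """Solve the matrix-method system

        n_tight           = eff_real * N_real + eff_fake * N_fake
        n_loose_not_tight = (1 - eff_real) * N_real + (1 - eff_fake) * N_fake

    for N_fake, and return the FNP contribution to the tight sample,
    eff_fake * N_fake.  eff_real (eff_fake) is the probability for a
    loose prompt (FNP) lepton to also satisfy the tight criteria.
    """
    n_loose = n_tight + n_loose_not_tight          # total loose sample
    det = eff_real - eff_fake                      # must be non-zero
    n_fake = (eff_real * n_loose - n_tight) / det  # loose FNP yield
    return eff_fake * n_fake
```

For example, with eff_real = 0.9 and eff_fake = 0.2, a sample built from 1000 prompt and 100 FNP loose leptons gives n_tight = 920 and n_loose_not_tight = 180, and the method recovers the 20 FNP events in the tight sample.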

Small backgrounds
The small background contributions from diboson production, single-top production, tt production in association with a vector or Higgs boson (labelled ttV/H) and SM four-top-quark production are estimated using MC simulation. In all but the highest jet slices considered, the sum of these backgrounds contributes no more than 10% of the SM expectation in any of the b-tag bins; for the highest jet slices this can rise to 35%.

Fit configuration and validation
For each jet p_T threshold, the search results are determined from a simultaneous likelihood fit. The likelihood is built as the product of Poisson probability terms describing the observed numbers of events in the different bins and Gaussian distributions constraining the nuisance parameters associated with the systematic uncertainties. The widths of the Gaussian distributions correspond to the sizes of these uncertainties. Poisson distributions are used to constrain the nuisance parameters for MC-simulation and data control-region statistical uncertainties. Correlations of a given nuisance parameter between the different background sources and the signal are taken into account when relevant. The systematic uncertainties are not constrained by the data in the fit procedure.
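The structure of such a likelihood can be sketched as a negative log-likelihood with Poisson terms per bin and a Gaussian-constrained nuisance parameter. This is a simplified sketch, not the analysis fit model: a single multiplicative nuisance parameter θ with relative size σ is assumed here, whereas the real fit carries the full data-driven background parameterization.

```python
import math

def poisson_nll(n_obs, lam):
    """Negative log of the Poisson probability to observe n_obs with mean lam."""
    return lam - n_obs * math.log(lam) + math.lgamma(n_obs + 1)

def nll(observed, bkg, sig, mu, theta, sigma_theta):
    """Sketch of the fit likelihood: Poisson terms per bin times a
    unit-Gaussian constraint on one illustrative nuisance parameter
    theta, which scales the background yields by (1 + sigma_theta*theta).
    mu is the common signal-strength parameter."""
    total = 0.5 * theta ** 2  # Gaussian constraint term, unit width
    scale = 1.0 + sigma_theta * theta
    for n, b, s in zip(observed, bkg, sig):
        total += poisson_nll(n, scale * b + mu * s)
    return total
```

Minimizing this function over (mu, theta) mimics the profiling of nuisance parameters; with the data equal to the background expectation, the minimum sits at theta = 0, since pulling theta away from zero worsens both the Poisson terms and the constraint.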
The likelihood is configured differently for the model-dependent and model-independent hypothesis tests. The former is used to derive exclusion limits for a specific BSM model, and the full set of bins (for example, jet-multiplicity bins from 5 to ≥12 and b-jet bins from 0 to ≥4 for the 40 GeV jet p_T threshold) is employed in the likelihood. The signal contribution, as predicted by the given BSM model, is considered in all bins and is scaled by one common signal-strength parameter. The number of freely floating parameters in the background model is 15. There are four parameters in the W/Z+jets model: the two jet-scaling parameters (c₀, c₁) and the normalizations of the W+jets and Z+jets events in the five-jet region (N^{W+jets}_5, N^{Z+jets}_5). There are eleven parameters in the tt+jets model: four for the jet scaling and normalization (c₀^{tt+jets}, c₁^{tt+jets}, c₂^{tt+jets}, N^{tt+jets}_5), four for the initial b-tag multiplicity template (f_{5,b}, b = 1-4), and three for the evolution parameters (x₁, x₂ and ρ₁₁), taking into account the constraints

∑_b f_{5,b} = 1 and x₀ + x₁ + x₂ = 1.

The number of fitted bins varies between 36 and 46, depending on the highest jet-multiplicity bin used, leading to an over-constrained system in all cases.
The model-independent test is used to search for, and to set generic exclusion limits on, the potential contribution from a hypothetical BSM signal in the phase-space region probed by this analysis. For this purpose, dedicated signal regions are defined which could be populated by such a signal, and where the SM contribution is expected to be small. The SR selections require exactly zero or at least three b-tags (labelled 0b or 3b, respectively) for a given minimum number of jets J and a jet pT threshold X, with each SR labelled as X-0b-J or X-3b-J. For each jet pT threshold, six SRs are defined as follows:
• For the 40 GeV jet pT threshold: 40-0b-10, 40-3b-10, 40-0b-11, 40-3b-11, 40-0b-12, 40-3b-12.
• For the 60 GeV jet pT threshold: 60-0b-8, 60-3b-8, 60-0b-9, 60-3b-9, 60-0b-10, 60-3b-10.
• For the 80 GeV jet pT threshold: 80-0b-8, 80-3b-8, 80-0b-9, 80-3b-9, 80-0b-10, 80-3b-10.
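The SR naming convention above (jet pT threshold, b-tag requirement, minimum jet count) can be illustrated with a small helper; `signal_regions` is a hypothetical function for illustration, not part of the analysis software:

```python
def signal_regions(pt_threshold_gev, min_jet_counts):
    """Build the SR labels X-0b-J / X-3b-J for a jet pT threshold X
    and a list of minimum jet multiplicities J."""
    return [f"{pt_threshold_gev}-{btag}-{jets}"
            for jets in min_jet_counts
            for btag in ("0b", "3b")]

# The six SRs for the 40 GeV jet pT threshold:
print(signal_regions(40, [10, 11, 12]))
```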
The SRs therefore overlap and an event can enter more than one SR. Due to the efficiency of the b-tagging algorithm used, signal models with large b-tag multiplicities can have significant contamination in the two-b-tag bins, which can bias the tt+jets background estimate and reduce the sensitivity of the search. To reduce this effect, for the SRs with ≥ 3 b-tags, the two-b-tag bin is not included in the fit for the highest jet slice in each SR. For the model-independent hypothesis tests, a separate likelihood fit is performed for each SR. A potential signal contribution is considered in the given SR bin only. The number of freely floating parameters in the background model is 15, whereas the number of observables varies between 23 (for SRs 60-3b-8 and 80-3b-8) and 45 (for SR 40-0b-12), so the system is also always over-constrained.
The fit set-up was extensively tested using MC simulated events, and was demonstrated to give a negligible bias in the fitted yields, both when only the background distributions are fitted and when a signal is injected into the fitted data. These tests were carried out with the nominal MC samples as well as the alternative samples described in Table 1. In addition, when fitting the data, the fitted parameter values and their inter-correlations were studied in detail and found to be in agreement with the expectation based on MC simulation. The jet-reconstruction stability at high multiplicities was validated by comparing jets with track-jets that are clustered from ID tracks with a radius parameter of 0.2. The ratio of the multiplicities of track-jets and jets, which is sensitive to jet-merging effects, was found to be stable up to the highest jet multiplicities studied. The estimate of the multi-jet background was validated in data regions enriched in FNP leptons, and was found to describe the data within the quoted uncertainties.

Systematic uncertainties
The dominant backgrounds are estimated from the data without the use of MC simulation, and therefore the main systematic uncertainties related to the estimation of these backgrounds arise from the assumptions made in the W/Z+jets, tt+jets and multi-jet background estimates. Uncertainties related to the theoretical modelling of the specific processes and due to the modelling of the detector response in simulated events are only relevant for the minor backgrounds, which are taken from MC simulation, and for the estimates of the signal yields after selections.
For the W/Z+jets background estimation, the uncertainty related to the assumed scaling behaviour is taken from studies of this behaviour in W+jets and Z+jets MC simulation, as well as in γ+jets and multi-jet data control regions chosen to be kinematically similar to the search selection (see Figure 3). No evidence is seen for a deviation from the assumed scaling behaviour, and the statistical precision of these methods is used as an uncertainty (up to 18% for the highest jet-multiplicity bins). The expected uncertainty of the charge asymmetry for W+jets production is 3-5% from PDF variations [86], but in the seven-jet region the uncertainty is dominated by the limited number of MC events (up to 10% for the 80 GeV jet pT threshold). The uncertainty in the shape of the b-tag multiplicity distribution in W+jets and Z+jets events is derived by comparing different MC generator set-ups (e.g. varying the renormalization and factorization scales and the parton-shower model parameters). It grows as a function of jet multiplicity and is about 50% for events with five jets, after which the MC statistical uncertainty becomes very large. A conservative uncertainty of 100% is therefore assigned to the fractional contribution from W+b and W+c events for all jet slices considered; this has a very small impact on the final result, as the background from W-boson production with additional heavy-flavour jets is small compared to that from top-quark pair production. In addition, the uncertainties related to the b-tagging efficiency and mis-tag rate are taken into account in the uncertainty in the W/Z+jets b-tag template.
The uncertainties related to the tt+jets background estimation primarily relate to the number of events in the data regions used for the fit. As mentioned in Section 6.2, the method shows good closure using simulated events, so no systematic uncertainty related to these studies is assigned. There is a small uncertainty related to the acceptance correction for the initial b-tag multiplicity template, which is derived by varying the MC generator set-up for the tt sample used to estimate the correction. This leads to a 3% uncertainty in the correction and has no significant effect on the final uncertainty. The uncertainty related to the parameterization of the scaling of the tt+jets background with jet multiplicity is determined with MC simulation closure tests. The validation of the method presented in Figure 5 shows that the parameterization describes the data and MC simulation well. The uncertainties assigned vary from 3% (at 8 jets) to 33% (at 12 jets) for the 40 GeV jet pT threshold, and from 10% (at 8 jets) to 60% (at 10 jets) for the 80 GeV jet pT threshold. These are estimated by studying the closure of the method in different MC samples (including using alternative MC generators, and varying the event selection) and are of similar size to the statistical uncertainty from the data validation.
The dominant uncertainties in the multi-jet background estimate arise from the number of data events in the control regions, from uncertainties related to the subtraction of electroweak backgrounds from these control regions (a 20% uncertainty is applied to the expected yield of the backgrounds in the control regions), and from uncertainties covering the possible dependencies of the real- and fake-lepton efficiencies [83] on variables other than the lepton pT (for example, the number of jets in the event). The total uncertainty in the multi-jet background yields is about 50%.
The uncertainty in the expected yields of the minor backgrounds includes theoretical uncertainties in the cross-sections and in the modelling of the kinematics by the MC generator, as well as experimental uncertainties related to the modelling of the detector response in the simulation. The uncertainties assigned to cover the theoretical estimate of these backgrounds in the relevant regions are 50%, 100% and 30% for diboson, single top-quark, and ttV/H production, respectively.
The final uncertainty in the background estimate in the SRs is dominated by the statistical uncertainty related to the number of data events in the different bins, and other systematic uncertainties do not contribute significantly.
The uncertainties assigned to the expected signal yield for the SUSY benchmark processes considered include the experimental uncertainties related to the detector modelling, which are dominated by the modelling of the jet energy scale and the b-tagging efficiencies and mis-tagging rates. For example, for a signal model with four b-quarks the b-tagging uncertainties are ≈10%, and the jet-related uncertainties are typically ≈5%. The uncertainty in the signal cross-sections used is discussed in Section 3.2.1. The uncertainty in the signal yields related to the modelling of additional jet radiation is studied by varying the factorization, renormalization and jet-matching scales as well as the parton-shower tune in the simulation. The corresponding uncertainty is small for most of the signal parameter space, but increases to up to 25% for very light or very heavy LSPs, where the contribution from additional jet radiation is relevant.

Results
Results are provided both as model-independent limits on the contribution from BSM physics to the dedicated signal regions and in the context of the four SUSY benchmark models discussed in Section 3.2.1. As previously mentioned, different fit set-ups are used for these two sets of results. In all cases, the profile-likelihood-ratio test [87] is used to establish 95% confidence intervals using the CLs prescription [88]. Figures 6, 7 and 8 show the observed numbers of data events compared to the fitted background model for the three jet pT thresholds, respectively. The likelihood fit is configured using the model-dependent set-up, where all bins are input to the fit and the signal-strength parameter is fixed to zero. An example signal model is also shown to illustrate the separation achieved between the signal and the background, as well as the level of signal-event leakage into lower b-tag and jet-multiplicity bins. The bottom panel of each figure shows the background prediction using MC simulation. For high b-tag multiplicities (≥ 3), the MC simulation strongly underestimates the background contributions compared to the data-driven background estimate. This effect has been observed before [89, 90] and shows that the MC simulations are not able to correctly describe final states with high b-jet multiplicity. In addition, the MC simulation predicts too many events at low b-jet multiplicity, which is likely due to mismodelling of W+jets production at high jet multiplicity. Since the background prediction from MC simulation does not reflect the expected background contribution, in all cases the expected limit is computed using the background prediction from a fit to all bins in the data with no signal component included in the fit model.
[Caption for Figures 6-8:] An example signal model is overlaid (although its contribution is very small in most of the jet-multiplicity slices shown). The bottom panels show the ratio of the observed data to the expected background, as well as the ratio of the prediction from MC simulation to the expected background. All uncertainties, which can be correlated across the bins, are included in the error bands (shaded regions).

Model-independent results
The model-independent results are calculated from the observed number of events and the expected background in the SRs. Tables 2, 3 and 4 show the expected background in the SRs from these fits, together with the observed numbers of events, for the sets of SRs with the 40 GeV, 60 GeV and 80 GeV jet pT thresholds. In addition, the p0 values are shown, which quantify the probability that a background-only experiment results in a fluctuation giving an event yield equal to or larger than the one observed in the data. The background estimate describes the observed data in the SRs well, with the largest excesses over the background estimate corresponding to 0.8 standard deviations in SRs 40-3b-11 and 40-3b-12.
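For intuition, the p0 value of a simple counting experiment can be sketched as a Poisson tail probability. This is an illustration only, since the analysis uses the full profile likelihood; the cap at 0.5 matches the convention quoted in the tables:

```python
from math import exp

def poisson_p0(n_obs, b):
    """Probability of observing >= n_obs events given an expected
    background b, capped at 0.5 as in the tables."""
    term = exp(-b)          # P(N = 0)
    cdf = 0.0
    for k in range(n_obs):  # accumulate P(N < n_obs)
        cdf += term
        term *= b / (k + 1)
    return min(1.0 - cdf, 0.5)
```

For example, observing 10 events on an expected background of 5 gives p0 ≈ 0.032, while any observation at or below the expectation is reported as 0.5.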
[Contents of Table 2 not recoverable from the extracted text.]
Table 2: Fitted background yields in the different b-tag multiplicity bins (≥ 10, ≥ 11 and ≥ 12 jets) for jet pT > 40 GeV in the different signal regions. The parameters of the model are determined in a fit to a reduced set of bins, corresponding to the model-independent fit discussed in the text. The individual background uncertainties can be larger than the total uncertainty due to correlations between parameters. The p0 value quantifies the probability that a background-only experiment results in a fluctuation giving an event yield equal to or larger than the one observed in the data, and is capped at 0.5.
[Contents of Table 3 not recoverable from the extracted text.]
Table 3: Fitted background yields in the different b-tag multiplicity bins (≥ 8, ≥ 9 and ≥ 10 jets) for jet pT > 60 GeV in the different signal regions. The parameters of the model are determined in a fit to a reduced set of bins, corresponding to the model-independent fit discussed in the text. The individual background uncertainties can be larger than the total uncertainty due to correlations between parameters. The p0 value quantifies the probability that a background-only experiment results in a fluctuation giving an event yield equal to or larger than the one observed in the data, and is capped at 0.5.
[Contents of Table 4 not recoverable from the extracted text.]
Table 4: Fitted background yields in the different b-tag multiplicity bins (≥ 8, ≥ 9 and ≥ 10 jets) for jet pT > 80 GeV in the different signal regions. The parameters of the model are determined in a fit to a reduced set of bins, corresponding to the model-independent fit discussed in the text. The individual background uncertainties can be larger than the total uncertainty due to correlations between parameters. The p0 value quantifies the probability that a background-only experiment results in a fluctuation giving an event yield equal to or larger than the one observed in the data, and is capped at 0.5.
Model-independent upper limits at 95% confidence level (CL) on the number of BSM events, N_BSM, that may contribute to the signal regions are computed from the observed number of events and the fitted background. Normalizing these results by the integrated luminosity L of the data sample, they can be interpreted as upper limits on the visible BSM cross-section σ_vis, defined as the product σ_prod × A × ε = N_BSM / L of production cross-section (σ_prod), acceptance (A) and reconstruction efficiency (ε). These limits are presented in Table 5.
Table 5: Observed and expected 95% CL model-independent upper limits on the product of cross-section, acceptance and efficiency (in fb) for each signal region. The limits are determined by fitting the background model in a reduced set of bins as described in the text.
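The conversion from an upper limit on N_BSM to a visible cross-section is a simple normalization by the integrated luminosity; a minimal sketch with illustrative numbers (not values from the tables):

```python
LUMI_FB = 36.1  # integrated luminosity of the data sample, in fb^-1

def visible_xsec_fb(n_bsm_limit, lumi_fb=LUMI_FB):
    """sigma_vis = sigma_prod * A * eps = N_BSM / L, in fb."""
    return n_bsm_limit / lumi_fb

# A hypothetical upper limit of 18 BSM events corresponds to
# sigma_vis = 18 / 36.1, i.e. about 0.5 fb.
```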
For a hypothetical signal with three or four b-jets, the analysis sensitivity is reduced because of the leakage of signal events into lower b-tag multiplicity bins due to the b-tagging efficiency of about 78%, which would bias the normalization of the tt+jets background. This is partially mitigated by excluding the two-b-tag bin in the background determination for the highest jet slice probed, and by the constraint on the scaling of the tt+jets background as a function of jet multiplicity.
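The size of this leakage follows from simple binomial counting: with a per-jet tagging efficiency of about 78%, and neglecting mistagged light jets, a sizeable fraction of events with four true b-jets receives only two tags. A sketch:

```python
from math import comb

def n_tag_prob(n_bjets, n_tags, eff=0.78):
    """Binomial probability of tagging exactly n_tags out of
    n_bjets true b-jets (mistagged light jets neglected)."""
    return comb(n_bjets, n_tags) * eff**n_tags * (1 - eff)**(n_bjets - n_tags)

# For an event with 4 true b-jets:
p_two = n_tag_prob(4, 2)                     # ~18% fall into the 2-tag bin
p_ge3 = n_tag_prob(4, 3) + n_tag_prob(4, 4)  # ~79% reach the >=3-tag SRs
```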

Model-dependent results
For each signal model probed, the fit is configured using the model-dependent set-up, as detailed in Section 7. All bins are included in the fit and the expected signal contribution in each bin is taken into account. Figure 9 shows the observed and expected exclusion limits for the three benchmark signal models featuring gluino pair production, as a function of the gluino mass and the neutralino or top-squark mass. Figure 10 shows exclusion limits in the top-squark production model, where the limits for pure bino and higgsino LSPs are shown separately, taking into account the processes discussed in Section 3.2.1. For the gluino production models, all the probed model points have the best expected sensitivity when using the 80 GeV jet pT threshold, whereas for the top-squark production model the 60 GeV jet pT threshold gives the best expected sensitivity; these thresholds are used to set the exclusion limits.
In the model with an RPV decay of the $\tilde{\chi}^0_1$ to three light-quark jets, gluino masses up to 2.10 TeV are excluded, with weaker limits for light and heavy $\tilde{\chi}^0_1$. For the benchmark model with $\tilde{g} \to \tilde{t}t$ and $\tilde{t} \to \bar{b}\bar{s}$, gluino masses up to 1.65 TeV are excluded. In this case, the observed limit is about two standard deviations stronger than the expected limit. This is due to a difference between the observed data and the expected background in the three- and four-b-tag bins in the eight-, nine- and ten-jet slices (see Figure 8), which are the most sensitive bins for this model. An exclusion limit is also derived for the same model but with a virtual top squark (with mass set to 2 TeV), where gluinos with masses up to 1.62 TeV are excluded (with an expected exclusion of up to 1.50 TeV). The analysis excludes gluinos with masses up to 1.80 TeV in the $\tilde{g} \to q\bar{q}\tilde{\chi}^0_1 \to q\bar{q}qq\ell/\nu$ model.
For the top-squark pair production model, top-squark masses up to 1.10 TeV and 1.25 TeV are excluded for higgsino and bino LSPs, respectively. There is greater sensitivity in the case of the bino LSP because the lepton and jet multiplicities are higher than in the higgsino LSP scenario. Typical acceptance times efficiency (A × ε) values for the relevant SR for each of the benchmark signal models are:
• 8% for the $\tilde{g} \to t\bar{t}\tilde{\chi}^0_1 \to t\bar{t}uds$ model for the 80-3b-10 SR,
• 3% for the $\tilde{g} \to \tilde{t}t \to \bar{b}\bar{s}t$ model for the 80-3b-8 SR,
• 13% for the $\tilde{g} \to q\bar{q}\tilde{\chi}^0_1 \to q\bar{q}qq\ell/\nu$ model for the 80-0b-8 SR,
• 2% (6%) for the top-squark production model with a higgsino (bino) LSP for the 60-3b-10 SR.
These values correspond to the case where the produced SUSY particle is close to the exclusion limit, and for intermediate LSP masses. In general, the acceptance falls for light or heavy LSPs as some of the produced jets or leptons become softer.

Limits on four-top-quark production
The analysis is also used to search for SM four-top-quark production. In this case, the small contribution to the background from four-top-quark production is removed, and a model-dependent fit is carried out with the four-top-quark simulated sample used as the signal. The best expected sensitivity is achieved with the 60 GeV jet pT threshold, which leads to a 95% CL upper limit on the four-top-quark cross-section of 60 fb (with 84 fb expected), which is 6.5 times the SM cross-section for this process. No uncertainty in the theoretical modelling of the four-top-quark process is included when setting the cross-section limit, although uncertainties related to the b-tagging, jet and lepton reconstruction are taken into account.
Figure 9: Observed and expected exclusion contours on the $\tilde{g}$ and $\tilde{\chi}^0_1$ or $\tilde{t}$ masses in the context of the RPV SUSY scenarios probed, with simplified mass spectra featuring $\tilde{g}\tilde{g}$ pair production with exclusive decay modes. The contours of the band around the expected limit are the ±1σ variations, including all uncertainties except theoretical uncertainties in the signal cross-section. The dotted lines around the observed limit illustrate the change in the observed limit as the nominal signal cross-section is scaled up and down by the theoretical uncertainty. All limits are computed at 95% CL. The diagonal line indicates the kinematic limit for the decays in each specified scenario. For the $\tilde{g} \to \tilde{t}t \to \bar{b}\bar{s}t$ model, the limit on the top-squark mass from Ref. [


Conclusion
A search for beyond the Standard Model physics in events with an isolated lepton (electron or muon), high jet multiplicity and no, or many, b-tagged jets is presented. Unlike many previous searches in similar final states, no requirement on the missing transverse momentum in the event is applied. A novel data-driven technique is used to estimate the dominant backgrounds from tt+jets and W/Z+jets production. The analysis is performed with proton-proton collision data at √s = 13 TeV collected in 2015 and 2016 with the ATLAS detector at the Large Hadron Collider, corresponding to an integrated luminosity of 36.1 fb−1. With no significant excess over the Standard Model expectation observed, the results are interpreted in the framework of simplified models featuring gluino or top-squark pair production in R-parity-violating supersymmetry scenarios. In a benchmark model with $\tilde{g} \to t\bar{t}\tilde{\chi}^0_1 \to t\bar{t}uds$, gluino masses up to 2.10 TeV are excluded at 95% confidence level. In a model with $\tilde{g} \to \tilde{t}t$ and $\tilde{t} \to \bar{b}\bar{s}$, gluino masses up to 1.65 TeV are excluded, whereas in a model with $\tilde{g} \to q\bar{q}\tilde{\chi}^0_1 \to q\bar{q}qq\ell/\nu$, gluino masses up to 1.80 TeV are excluded. A model with direct top-squark production and R-parity-violating decays of higgsino or bino LSPs excludes top squarks with masses up to 1.10 TeV and 1.25 TeV, respectively. These results improve on the previously existing limits for the gluino production models considered, and represent the first limits for the top-squark production model. In addition, an upper limit of 60 fb is set on the cross-section of Standard Model four-top-quark production, improving on the previous strongest limit of 69 fb [14]. Finally, model-independent limits are set on the contribution of new phenomena to the signal-region yields.
[9] ATLAS Collaboration, Search for massive supersymmetric particles decaying to many jets using the ATLAS detector in pp collisions at √ s = 8