Search for supersymmetry in events with four or more charged leptons in $139\,\mbox{fb\(^{-1}\)}$ of $\sqrt{s}=13$ TeV $pp$ collisions with the ATLAS detector

A search for supersymmetry in events with four or more charged leptons (electrons, muons and $\tau$-leptons) is presented. The analysis uses a data sample corresponding to $139\,\mbox{fb\(^{-1}\)}$ of proton-proton collisions delivered by the Large Hadron Collider at $\sqrt{s}=13$ TeV and recorded by the ATLAS detector. Four-lepton signal regions with up to two hadronically decaying $\tau$-leptons are designed to target several supersymmetric models, while a general five-lepton signal region targets any new physics phenomena leading to a final state with five charged leptons. Data yields are consistent with Standard Model expectations and results are used to set upper limits on contributions from processes beyond the Standard Model. Exclusion limits are set at the 95% confidence level in simplified models of general gauge-mediated supersymmetry, excluding higgsino masses up to $540$ GeV. In $R$-parity-violating simplified models with decays of the lightest supersymmetric particle to charged leptons, lower limits of $1.6$ TeV, $1.2$ TeV, and $2.5$ TeV are placed on wino, slepton and gluino masses, respectively.


Introduction
Standard Model (SM) processes rarely produce events with four or more charged leptons, while many new theories, such as supersymmetry (SUSY) [1][2][3][4][5][6], predict new particles that would regularly decay to these multilepton final states. This paper presents a search for new phenomena in final states with at least four isolated, charged leptons (electrons, muons or $\tau$-leptons), where up to two hadronically decaying $\tau$-leptons are considered. Here, electrons and muons are referred to as 'light leptons' and include those from leptonic $\tau$ decays. The full proton-proton dataset delivered by the LHC and collected by the ATLAS experiment during the 2015-2018 data-taking period is used in the analysis, corresponding to an integrated luminosity of 139 fb$^{-1}$ [7] at a centre-of-mass energy of 13 TeV. Several SUSY signal models are used to optimise the search, but the search itself is generally model-agnostic, using selections on either the presence or absence of $Z$ bosons in the event, and loose requirements on either the effective mass or the missing transverse momentum. Results are presented in terms of limits on SUSY models.
Previous searches for SUSY particles using signatures with three or more leptons were carried out at the Tevatron collider [8][9][10][11][12][13], and at the LHC by the ATLAS experiment [14][15][16][17][18][19] and the CMS experiment [20][21][22][23][24][25]. Searching for new physics using a final state with four or more leptons may offer more sensitivity to some beyond-the-SM scenarios than using lower lepton multiplicities, as the very low SM background allows a looser selection and a more inclusive approach to be adopted. This analysis closely follows the ATLAS analyses of the datasets at 7 TeV [14] and 8 TeV [17], and of the partial dataset at 13 TeV [18]. Previous results are extended here by analysing the full ATLAS 13 TeV dataset, expanding the search with an additional channel selecting at least five leptons, and using data to constrain major sources of SM background.

Targeted models
SUSY is a space-time symmetry that postulates the existence of a new superpartner for every SM particle, with spin differing by one half-unit from its SM partner: each SM fermion (boson) is associated with a SUSY boson (fermion). The new SUSY particles (sparticles) would have the same quantum numbers as their SM counterparts except for spin, and provide a potential solution to the hierarchy problem [26][27][28][29]. The scalar superpartners of the SM fermions are the charged sleptons, $\tilde{\ell}$, the sneutrinos, $\tilde{\nu}$, and the squarks, $\tilde{q}$, while the gluons have fermionic superpartners called gluinos ($\tilde{g}$). The bino, wino and higgsino fields are fermionic superpartners of the SU(2) $\times$ U(1) gauge fields of the SM, and of the two complex scalar doublets of a minimally extended Higgs sector, respectively. They mix to give mass eigenstates that are referred to as charginos $\tilde{\chi}^{\pm}_{i}$ ($i = 1, 2$) and neutralinos $\tilde{\chi}^{0}_{j}$ ($j = 1, 2, 3, 4$), numbered in order of increasing mass.
SUSY processes can result in proton decay at a rate that is in conflict with the stringent experimental constraints on the proton lifetime if they do not conserve both lepton number ($L$) and baryon number ($B$) [30]. This conflict can be avoided by imposing the conservation of $R$-parity [31], defined as $(-1)^{3(B-L)+2S}$, where $S$ is spin, or by explicitly conserving either $B$ or $L$ in $R$-parity-violating (RPV) scenarios [32,33]. In $R$-parity-conserving (RPC) models, the lightest SUSY particle (LSP) is stable and a viable dark-matter candidate [34,35], and leptons can originate from unstable weakly interacting sparticles decaying to the LSP. In RPV models, the LSP is unstable and decays into SM particles, including charged leptons and neutrinos when violating $L$ but not $B$. Both the RPC and RPV SUSY scenarios can therefore result in signatures with high lepton multiplicities and substantial missing transverse momentum, selections on which can be used to suppress SM background processes effectively.
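The $R$-parity assignment above can be checked numerically. The following minimal sketch (illustrative only, not part of the analysis) evaluates $(-1)^{3(B-L)+2S}$ for a few SM particles and their superpartners, using exact fractions to avoid floating-point issues:

```python
from fractions import Fraction

def r_parity(B, L, S):
    """R-parity (-1)^(3(B-L)+2S) for baryon number B, lepton number L and spin S."""
    exponent = 3 * (B - L) + 2 * S
    assert exponent == int(exponent), "exponent must be an integer"
    return -1 if int(exponent) % 2 else +1

half = Fraction(1, 2)
third = Fraction(1, 3)

# SM particles carry R-parity +1 ...
assert r_parity(B=third, L=0, S=half) == +1   # quark
assert r_parity(B=0, L=1, S=half) == +1       # electron
assert r_parity(B=0, L=0, S=1) == +1          # gluon
# ... while their superpartners carry -1
assert r_parity(B=third, L=0, S=0) == -1      # squark
assert r_parity(B=0, L=1, S=0) == -1          # selectron
assert r_parity(B=0, L=0, S=half) == -1       # gluino / neutralino
print("R-parity assignments verified")
```

This makes concrete why, in RPC models, sparticles are produced in pairs and the LSP is stable: every interaction vertex must conserve the multiplicative $\pm 1$ quantum number.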
RPC and RPV SUSY models are used for signal region optimisation and to interpret the results of this analysis; each requires a different approach for signal selection, as discussed in Section 5. In all SUSY scenarios considered here, the light CP-even Higgs boson, $h$, of the minimal supersymmetric extension of the SM [36,37] Higgs sector is assumed to be practically identical to the SM Higgs boson [38], with mass and couplings compatible with the LHC measurements [39][40][41][42]. In addition, the decoupling limit is used, which is defined by $m_{A} \gg m_{Z}$, while the CP-odd ($A$), the heavy neutral CP-even ($H$), and the two charged ($H^{\pm}$) Higgs bosons are considered to be very heavy and thus considerably beyond the kinematic reach of the LHC. Naturalness [43,44] motivates light higgsino states ($\tilde{\chi}^{0}_{1}$, $\tilde{\chi}^{0}_{2}$ and $\tilde{\chi}^{\pm}_{1}$); however, searching for higgsinos can be experimentally challenging. The sparticles in the higgsino system are close in mass, thus decays of the $\tilde{\chi}^{0}_{2}/\tilde{\chi}^{\pm}_{1}$ to a $\tilde{\chi}^{0}_{1}$ LSP result in low-momentum decay products that are difficult to reconstruct efficiently. The LEP experiments searched for higgsino $\tilde{\chi}^{\pm}_{1}$ in approximately mass-degenerate scenarios and excluded chargino masses below 103.5 GeV (reduced to 92 GeV for small chargino-LSP mass differences between 0.1 GeV and 3 GeV) [45]. More recently, the ATLAS and CMS experiments have searched for higgsino production [46,47], excluding higgsino $\tilde{\chi}^{0}_{2}$ up to masses of $\sim 240$ GeV and down to $\tilde{\chi}^{0}_{2}$-LSP mass differences of 1.5 GeV.

RPC SUSY scenarios
General gauge-mediated (GGM) SUSY models [48] offer an opportunity to study light higgsinos without relying on the reconstruction of experimentally challenging, low-momentum final states. In the Planck-scale-mediated SUSY breaking scenario, the gravitino $\tilde{G}$ is the fermionic superpartner of the graviton and its mass is comparable to the masses of the other SUSY particles, $\sim 100$ GeV [49,50]. In contrast, GGM models predict that the $\tilde{G}$ is nearly massless and can be produced at the LHC via the decays of the higgsinos, e.g. $\tilde{\chi}^{0}_{1} \to Z/h + \tilde{G}$. The leptonic decays of the $Z/h$ from the two $\tilde{\chi}^{0}_{1}$ decays can be reconstructed and are targeted in this analysis, giving an opportunity to study four-lepton signatures with one or more $Z$ boson candidates.
Simplified RPC SUSY models [51][52][53] inspired by GGM are considered here, where an almost mass-degenerate higgsino system $\tilde{\chi}^{\pm}_{1}$, $\tilde{\chi}^{0}_{1}$, $\tilde{\chi}^{0}_{2}$ and a $\tilde{G}$ LSP with a mass of 1 MeV are the only SUSY particles within the reach of the LHC. The $\tilde{\chi}^{\pm}_{1}$ and $\tilde{\chi}^{0}_{2}$ masses are set to 1 GeV above the $\tilde{\chi}^{0}_{1}$ mass to ensure that they decay promptly, and, because they have only a weak coupling to the $\tilde{G}$, the $\tilde{\chi}^{\pm}_{1}$ and $\tilde{\chi}^{0}_{2}$ always decay to the $\tilde{\chi}^{0}_{1}$ via virtual $Z/W$ bosons. The virtual $Z/W$ in turn decay to very soft final states that are not reconstructed, while the $\tilde{\chi}^{0}_{1}$ decays promptly to a gravitino plus a $Z$ or $h$ boson, $\tilde{\chi}^{0}_{1} \to Z/h + \tilde{G}$. A higgsino system offers four production processes at the LHC: $\tilde{\chi}^{+}_{1}\tilde{\chi}^{-}_{1}$, $\tilde{\chi}^{\pm}_{1}\tilde{\chi}^{0}_{1}$, $\tilde{\chi}^{\pm}_{1}\tilde{\chi}^{0}_{2}$ and $\tilde{\chi}^{0}_{1}\tilde{\chi}^{0}_{2}$, all of which are considered in these GGM models, as shown in Figure 1. The $\tilde{\chi}^{0}_{1}$ mass and the $\tilde{\chi}^{0}_{1} \to Z\tilde{G}$ branching ratio are the two free parameters of the simplified GGM higgsino scenarios.

RPV SUSY scenarios
In generic SUSY models with minimal particle content, the superpotential includes terms that violate conservation of $L$ and $B$:
$$W_{\mathrm{RPV}} = \frac{1}{2}\lambda_{ijk} L_{i} L_{j} \bar{E}_{k} + \lambda'_{ijk} L_{i} Q_{j} \bar{D}_{k} + \frac{1}{2}\lambda''_{ijk} \bar{U}_{i} \bar{D}_{j} \bar{D}_{k} + \kappa_{i} L_{i} H_{2}, \qquad (1)$$
where $L_{i}$ and $Q_{i}$ indicate the lepton and quark SU(2)-doublet superfields, respectively, and $\bar{E}_{i}$, $\bar{U}_{i}$ and $\bar{D}_{i}$ are the corresponding singlet superfields. Quark and lepton generations are referred to by the indices $i$, $j$ and $k$, while the Higgs field that couples to up-type quarks is represented by the Higgs SU(2)-doublet superfield $H_{2}$. The $\lambda$, $\lambda'$ and $\lambda''$ parameters are three sets of new Yukawa couplings, while the $\kappa_{i}$ parameters have dimensions of mass.
Simplified models of RPV SUSY scenarios are considered here, with a bino neutralino ($\tilde{\chi}^{0}_{1}$) LSP which decays via an RPV interaction. The lepton-number-violating superpotential term $\frac{1}{2}\lambda_{ijk} L_{i} L_{j} \bar{E}_{k}$ mediates the LSP decay into two charged leptons and a neutrino, through a virtual slepton or sneutrino, with the allowed lepton flavours depending on the indices of the associated $\lambda_{ijk}$ couplings [54]. The complex conjugate of the decay in Eq. (1) is also allowed. Thus, when two $\tilde{\chi}^{0}_{1}$ are present in a signal process, every signal event contains a minimum of four charged leptons and two neutrinos, giving an opportunity to study four-lepton SUSY signatures.
In principle, the nine $\lambda_{ijk}$ RPV couplings allow the $\tilde{\chi}^{0}_{1}$ to decay to every possible combination of charged-lepton pairs, where the branching ratio for each combination differs for each $\lambda_{ijk}$. For example, for $\lambda_{121} \neq 0$ the branching ratios for $\tilde{\chi}^{0}_{1} \to e\mu\nu$, $\tilde{\chi}^{0}_{1} \to ee\nu$ and $\tilde{\chi}^{0}_{1} \to \mu\mu\nu$ are 50%, 50% and 0% respectively, whereas for $\lambda_{122} \neq 0$ the corresponding branching ratios are 50%, 0% and 50%. It was shown in Ref. [17] that the four-charged-lepton search sensitivity is comparable in the cases of $\lambda_{121} \neq 0$ or $\lambda_{122} \neq 0$, and in the cases of $\lambda_{133} \neq 0$ or $\lambda_{233} \neq 0$. Since the analysis reported here uses similar techniques for these cases, the number of $L$-violating RPV scenarios studied is reduced by making no distinction between the electron and muon decay modes of the $\tilde{\chi}^{0}_{1}$. (The 27 $\lambda_{ijk}$ RPV couplings are reduced to nine by the antisymmetry requirement $\lambda_{ijk} = -\lambda_{jik}$ and the $i \neq j$ requirement for the generation of the $L_{i} L_{j} \bar{E}_{k}$ terms in the superpotential.) Two extremes of the $\lambda_{ijk}$ RPV couplings are considered:
• $LL\bar{E}12k$ ($k \in \{1, 2\}$) scenarios, where $\lambda_{12k} \neq 0$ and only decays to electrons and muons are included,
• $LL\bar{E}i33$ ($i \in \{1, 2\}$) scenarios, where $\lambda_{i33} \neq 0$ and only decays to $\tau$-leptons and either electrons or muons are included,
with all other RPV couplings assumed to be zero. The branching ratios for the $\tilde{\chi}^{0}_{1}$ decay in the $LL\bar{E}12k$ and $LL\bar{E}i33$ scenarios are shown in Table 1. The sensitivity to $\lambda_{ijk} \neq 0$ couplings not considered here (e.g. $\lambda_{123} \neq 0$) is expected to be between that achieved in the $LL\bar{E}12k$ and $LL\bar{E}i33$ scenarios.
Pure-bino $\tilde{\chi}^{0}_{1}\tilde{\chi}^{0}_{1}$ production has a vanishingly small cross-section at the LHC, thus models that include one or more next-to-lightest SUSY particles (NLSPs) are considered in order to obtain a reasonably large cross-section. The choice of NLSP in the RPV SUSY scenarios determines the production cross-section, and can also affect the signal acceptance to a lesser extent, as intermediate decay products may also decay to leptons. In all cases considered here, the NLSP is pair-produced in an RPC interaction and decays to the $\tilde{\chi}^{0}_{1}$ LSP (which itself undergoes an RPV decay). Three different possibilities are considered for the NLSP in the $LL\bar{E}12k$ and $LL\bar{E}i33$ scenarios:
• wino NLSP: mass-degenerate wino charginos and neutralinos are produced in association ($\tilde{\chi}^{+}_{1}\tilde{\chi}^{-}_{1}$ or $\tilde{\chi}^{\pm}_{1}\tilde{\chi}^{0}_{2}$). The charginos decay via $\tilde{\chi}^{\pm}_{1} \to W^{(*)}\tilde{\chi}^{0}_{1}$ with 100% branching fraction, while the neutralinos decay via $\tilde{\chi}^{0}_{2} \to Z^{(*)}\tilde{\chi}^{0}_{1}$ or $h\tilde{\chi}^{0}_{1}$ with 50% branching fraction each, as shown in Figure 2(a).
• $\tilde{\ell}_{L}/\tilde{\nu}$ NLSP: mass-degenerate sleptons and sneutrinos of all three generations are produced in association ($\tilde{\ell}_{L}\tilde{\ell}_{L}$, $\tilde{\nu}\tilde{\nu}$, $\tilde{\ell}_{L}\tilde{\nu}$, where the subscript $L$ refers to the chirality of the partner lepton). The sleptons decay via $\tilde{\ell}_{L} \to \ell\tilde{\chi}^{0}_{1}$ and the sneutrinos decay via $\tilde{\nu} \to \nu\tilde{\chi}^{0}_{1}$, both with 100% branching fraction, as seen in Figure 2(b).
• $\tilde{g}$ NLSP: gluino pair-production, where the gluino decays with 100% branching fraction via $\tilde{g} \to q\bar{q}\tilde{\chi}^{0}_{1}$ ($q = u, d, s, c, b$ only, with equal branching fractions), as seen in Figure 2(c). Decays to top quarks are not considered here, as they would introduce a significant change in signature between scenarios with mass difference $m(\tilde{g}) - m(\tilde{\chi}^{0}_{1})$ above and below $\sim 350$ GeV.
For the RPV models, the LSP mass is restricted to the range 10 GeV $\leq m(\mathrm{LSP}) \leq m(\mathrm{NLSP}) - 10$ GeV to ensure that both the RPC cascade decay and the RPV LSP decay are prompt. Non-prompt decays of the $\tilde{\chi}^{0}_{1}$ in similar models were previously studied in Refs. [55,56].

ATLAS detector
The ATLAS experiment [57][58][59] at the LHC is a multipurpose particle detector with a forward-backward symmetric cylindrical geometry and a nearly $4\pi$ coverage in solid angle. It consists of an inner tracking detector (ID) surrounded by a thin superconducting solenoid providing a 2 T axial magnetic field, electromagnetic and hadron calorimeters, and a muon spectrometer (MS). The ID covers the pseudorapidity range $|\eta| < 2.5$. It consists of silicon pixel, silicon microstrip, and transition radiation tracking detectors. Lead/liquid-argon (LAr) sampling calorimeters provide electromagnetic (EM) energy measurements with high granularity. Hadronic calorimetry is provided by the steel/scintillator-tile calorimeter, which covers the central pseudorapidity range ($|\eta| < 1.7$). The endcap and forward regions are instrumented with LAr calorimeters for EM and hadronic energy measurements up to $|\eta| = 4.9$. The MS surrounds the calorimeters and is based on three large air-core toroidal superconducting magnets with eight coils each. The field integral of the toroids ranges between 2.0 and 6.0 T m across most of the detector. The MS includes a system of precision tracking chambers covering the region $|\eta| < 2.7$ and fast detectors for triggering. A two-level trigger system is used to select events. The first-level trigger is implemented in hardware and uses a subset of the detector information to accept events at a rate below 100 kHz. This is followed by a software-based trigger that reduces the accepted event rate to 1 kHz on average, depending on the data-taking conditions.

Figure 2: Diagrams of the benchmark SUSY models of RPC NLSP pair production of a (a) wino, (b) slepton/sneutrino and (c) gluino, followed by the RPV decay of the $\tilde{\chi}^{0}_{1}$ LSP. The LSP is assumed to decay as $\tilde{\chi}^{0}_{1} \to \ell\ell\nu$ with 100% branching ratio.

Data and simulated event samples
This analysis uses the full $\sqrt{s} = 13$ TeV dataset collected by the ATLAS experiment during the 2015-2018 data-taking period. The average number of additional collisions in the same or nearby bunch crossings (pile-up) increased from 14 in 2015 to $\sim 38$ in 2018. After the application of beam, detector and data-quality requirements [60], the total integrated luminosity considered in this analysis corresponds to $139.0 \pm 2.4$ fb$^{-1}$ [7]. Events recorded during stable data-taking conditions are used in the analysis if the reconstructed primary vertex has at least two tracks with transverse momentum $p_{\mathrm{T}} > 500$ MeV associated with it. The primary vertex of an event is identified as the vertex with the highest $\Sigma p_{\mathrm{T}}^{2}$ of associated tracks.
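The primary-vertex choice described above can be illustrated with a short sketch (schematic only, not ATLAS code), where each vertex is reduced to the list of $p_{\mathrm{T}}$ values (in GeV) of its associated tracks:

```python
def choose_primary_vertex(vertices, min_track_pt=0.5, min_tracks=2):
    """Pick the vertex with the highest sum of squared track pT,
    among vertices with at least two tracks above 500 MeV."""
    candidates = [v for v in vertices
                  if sum(1 for pt in v if pt > min_track_pt) >= min_tracks]
    return max(candidates, key=lambda v: sum(pt * pt for pt in v), default=None)

# hypothetical event: two soft pile-up vertices and one hard-scatter vertex
vertices = [[0.6, 0.7, 0.9], [25.0, 13.0, 4.0, 0.8], [0.4]]
print(choose_primary_vertex(vertices))  # [25.0, 13.0, 4.0, 0.8]
```

The $\Sigma p_{\mathrm{T}}^{2}$ weighting strongly favours the hard-scatter vertex, since pile-up vertices consist mostly of soft tracks.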
Events are selected using the single-lepton, dilepton, or trilepton triggers [61,62] listed in Table 2, where the trigger efficiencies are in the plateau region above the offline $p_{\mathrm{T}}$ thresholds. Dilepton (trilepton) triggers are used only when the leptons in the event fail the $p_{\mathrm{T}}$-threshold requirements for the single-lepton (single-lepton and dilepton) triggers. The trigger efficiency for events with four (three) electrons/muons in signal SUSY scenarios is typically $>99\%$ ($>96\%$). For signal SUSY events with only two light leptons, the trigger efficiency is typically $>95\%$ for events with at least one electron, and decreases to $\sim 90\%$ for events with only two muons.
The SUSY signal processes were generated from leading-order (LO) matrix elements with up to two extra partons. Jet-parton matching followed the CKKW-L prescription [63], with a matching scale set to one quarter of the mass of the pair-produced SUSY particles. Signal cross-sections were calculated to next-to-leading order in the strong coupling constant, adding the resummation of soft gluon emission at next-to-leading-logarithm accuracy (NLO+NLL) [64][65][66][67][68][69][70][71]. The nominal signal cross-section and its uncertainty were taken from an envelope of cross-section predictions using different parton distribution function (PDF) sets and factorisation and renormalisation scales, as described in Ref. [72].
The dominant irreducible background processes that can produce four prompt and isolated charged leptons are $ZZ$, $t\bar{t}Z$, $VVZ$ and Higgs production (where $V = W, Z$, and $Z$ includes off-shell $Z/\gamma^{*}$ contributions). For simulated $ZZ$ production, the matrix elements contain all diagrams with four electroweak vertices, and they were calculated for up to one extra parton at NLO, and up to three extra partons at LO. The production of top-quark pairs with an additional $Z$ boson was simulated with matrix elements calculated at NLO precision. Simulated triboson ($VVZ$) production includes the processes $WWZ$, $WZZ$ and $ZZZ$ with four to six charged leptons, and was generated at NLO with additional LO matrix elements for up to two extra partons. The simulation of Higgs processes includes Higgs production via gluon-gluon fusion (ggF) and vector-boson fusion (VBF), and associated production with a boson ($WH$, $ZH$) or a top-antitop pair ($t\bar{t}H$). Other irreducible background processes with small cross-sections are grouped into a category labelled 'Other', which contains the $tWZ$, $t\bar{t}WW$, $t\bar{t}ZZ$, $t\bar{t}WZ$, $t\bar{t}HH$, $t\bar{t}WH$, $t\bar{t}t$ and $t\bar{t}t\bar{t}$ processes.
Top-quark pair production and $Z$+jets are the dominant SM processes that may produce one or more non-prompt or misidentified leptons among the four charged leptons. Processes such as $W$+jets, $WZ$, $WW$ and $t\bar{t}W$ also contribute to a four-or-more-charged-lepton signature, but at very small rates, either because of a small production cross-section or because they require a higher multiplicity of non-prompt or misidentified leptons. MC simulation of these processes is used as part of the estimation of the reducible background, as described in Section 7.2. Further information about the MC simulations of the reducible backgrounds can be found in Refs. [73,74].
For all MC simulation samples, the propagation of particles through the ATLAS detector was modelled with Geant4 [75] using the full ATLAS detector simulation [76], except for the SUSY signal samples, which use a fast simulation based on a parameterisation of the response of the electromagnetic and hadronic calorimeters [77] and the full simulation elsewhere. The effect of pile-up is incorporated into the simulation by overlaying additional inelastic events onto the hard-scatter events. These were generated with Pythia 8 [78] with a set of tuned parameters called the A3 tune [79] and the MSTW2008LO PDF set [80]. Simulated events are reconstructed in the same manner as data, and are weighted to match the distribution of the expected mean number of interactions per bunch crossing in data. The simulated MC samples are corrected to account for differences from the data in the trigger efficiencies, lepton reconstruction efficiencies, $b$-quark jet identification efficiencies, and the energy and momentum measurements of leptons and jets.

Event reconstruction
This analysis uses reconstructed electrons, muons, -leptons, and jets, which are classified as 'preselected' or 'signal' using various kinematic and quality criteria. Preselected objects must satisfy a loose set of criteria and pass the overlap removal procedure, which resolves ambiguities among reconstructed objects.
Signal leptons are those preselected leptons that satisfy a more stringent set of criteria; those failing the signal-lepton requirements are used as part of the background estimation in Section 7.2. The $p_{\mathrm{T}}$ thresholds for leptons are nominally low; however, they are higher for the one, two, or three leptons responsible for triggering the event via the single-lepton, dilepton, or trilepton triggers listed in Table 2.
The missing transverse momentum, $E_{\mathrm{T}}^{\mathrm{miss}}$, is the magnitude of the negative vector sum of the transverse momenta of all preselected objects (electrons, photons, muons, and jets, including all jets with $|\eta| < 4.5$) and an additional soft term [108]. Hadronically decaying $\tau$-leptons are included in the $E_{\mathrm{T}}^{\mathrm{miss}}$ as jets. The soft term is constructed from the tracks matched to the primary vertex, but not associated with identified physics objects. By using tracks, it cannot account for the neutral component of calorimeter energy deposits; however, this allows the soft term to be nearly independent of pile-up [109].
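The definition above reduces to a simple vector sum in the transverse plane. A minimal sketch (illustrative only; real $E_{\mathrm{T}}^{\mathrm{miss}}$ reconstruction involves calibrated objects and overlap-resolved inputs) with each object given as a $(p_{\mathrm{T}}, \phi)$ pair:

```python
import math

def missing_et(hard_objects, soft_tracks):
    """Magnitude of the negative vector pT sum of preselected objects plus a
    track-based soft term. Each input is a list of (pt, phi) pairs (pt in GeV)."""
    everything = hard_objects + soft_tracks
    px = -sum(pt * math.cos(phi) for pt, phi in everything)
    py = -sum(pt * math.sin(phi) for pt, phi in everything)
    return math.hypot(px, py)

# a single unbalanced 50 GeV jet and no soft tracks: 50 GeV of E_T^miss
print(round(missing_et([(50.0, 0.0)], []), 1))  # 50.0
```

A perfectly balanced event (e.g. two back-to-back objects of equal $p_{\mathrm{T}}$) gives $E_{\mathrm{T}}^{\mathrm{miss}} \approx 0$, which is why the variable is sensitive to invisible particles such as neutrinos or an LSP.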
Preselected electrons are reconstructed using calibrated clusters of energy deposits in the electromagnetic calorimeter that are matched to a track in the ID, and must have $p_{\mathrm{T}} > 4.5$ GeV and $|\eta| < 2.47$. They must also satisfy the tracking- and calorimeter-based 'loose and B-layer' criteria of the likelihood-based identification algorithm [110]. Preselected muons are reconstructed by combining tracks in the ID with tracks in the MS, and must have $p_{\mathrm{T}} > 3$ GeV and $|\eta| < 2.7$. They must also satisfy 'medium' identification requirements [111], which are based on the number of hits in the different ID and MS subsystems, and on the significance of the charge-to-momentum ratio. The cosmic-ray muon background is suppressed by rejecting events containing one or more muons that have a transverse impact parameter $|d_{0}| > 0.2$ mm or a longitudinal impact parameter $|z_{0}| > 1$ mm, both relative to the primary vertex. Preselected electrons and muons must point back to the primary vertex, with $|z_{0} \sin\theta|$ required to be less than 0.5 mm.
Jets are reconstructed from three-dimensional calorimeter energy clusters using the anti-$k_{t}$ algorithm [112] with a radius parameter of $R = 0.4$. The jets are calibrated following Ref. [113] and must have $p_{\mathrm{T}} > 20$ GeV and $|\eta| < 2.8$. Events with large calorimeter noise or non-collision backgrounds are suppressed by rejecting events with jets that fail to satisfy the quality criteria described in Ref. [114]. A multivariate technique based on quantities related to reconstructed secondary vertices is used to identify jets with $|\eta| < 2.5$ that originate from $b$-quarks (referred to as '$b$-tagging'). The $b$-tagging algorithm [115] used here correctly identifies $b$-quark jets in simulated $t\bar{t}$ samples with an efficiency of 85%, with a rejection factor of 25 for light-flavour jets.
Leptonically decaying $\tau$-leptons are reconstructed as electrons and muons as described above. Hadronically decaying $\tau$-leptons are denoted by $\tau_{\mathrm{had}}$, and their visible decay products are reconstructed as jets, as described above, with $p_{\mathrm{T}} > 10$ GeV and $|\eta| < 2.47$. In this analysis, kinematic variables built with $\tau_{\mathrm{had}}$ candidates use only their visible decay products. The $\tau_{\mathrm{had}}$ reconstruction algorithm [116] uses the electromagnetic and hadronic shower shapes in the calorimeters, as well as information about the tracks within $\Delta R = 0.2$ of the jet direction. Since $\tau$-leptons mostly decay into either one or three charged hadrons together with a neutrino (and often additional neutral hadrons), $\tau_{\mathrm{had}}$ candidates are required to have one or three associated tracks, referred to as 'prongs'. The preselected $\tau_{\mathrm{had}}$ candidates must have $p_{\mathrm{T}} > 20$ GeV, $|\eta| < 1.37$ or $1.52 < |\eta| < 2.47$, and a total charge of their constituent tracks equal to $\pm 1$; the $\tau_{\mathrm{had}}$ energy scale is corrected using an $\eta$- and $p_{\mathrm{T}}$-dependent calibration. A recurrent neural network (RNN) uses discriminating track and cluster variables to optimise $\tau_{\mathrm{had}}$ identification, with 'loose', 'medium' and 'tight' working points defined [117]. The RNN-based identification is used to define signal $\tau_{\mathrm{had}}$ candidates, but not preselected $\tau_{\mathrm{had}}$ candidates. Transition radiation tracker and calorimeter information is used to suppress electrons misidentified as preselected $\tau_{\mathrm{had}}$ candidates.
To avoid double counting of identified physics objects, preselected charged leptons and jets must survive an overlap removal procedure, applied in the following order:
1. Any $\tau_{\mathrm{had}}$ within $\Delta R = 0.2$ of an electron or muon is removed.
2. Any electron sharing an ID track with a muon is removed.
3. Any jet within $\Delta R = 0.2$ of an electron is removed.
4. Any electron within $\Delta R = 0.4$ of a jet is removed (to suppress electrons from semileptonic decays of $b$- and $c$-hadrons).
5. Any jet with fewer than three associated tracks is removed either if a muon is within $\Delta R = 0.2$ or if the muon can be matched to a track associated with the jet.
6. Any muon within $\Delta R = 0.4$ of a jet is removed (to suppress muons from semileptonic decays of $b$- and $c$-hadrons).
7. Any jet within $\Delta R = 0.4$ of a preselected $\tau_{\mathrm{had}}$ passing the 'medium' RNN-based identification requirements is removed.
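The ordered procedure above can be sketched in code (an illustrative implementation, not the ATLAS software; objects are reduced to the minimal attributes the steps need, and track sharing is modelled with simple track-label sets):

```python
import math
from dataclasses import dataclass

@dataclass
class Obj:
    eta: float
    phi: float
    ntracks: int = 0                    # jets only
    track_ids: frozenset = frozenset()  # ID-track labels (for steps 2 and 5)
    medium_rnn: bool = False            # taus only

def delta_r(a, b):
    dphi = math.remainder(a.phi - b.phi, 2 * math.pi)  # wrap into [-pi, pi]
    return math.hypot(a.eta - b.eta, dphi)

def overlap_removal(els, mus, taus, jets):
    """Apply the seven overlap-removal steps in order; return survivors."""
    taus = [t for t in taus if all(delta_r(t, l) >= 0.2 for l in els + mus)]    # 1
    els  = [e for e in els if not any(e.track_ids & m.track_ids for m in mus)]  # 2
    jets = [j for j in jets if all(delta_r(j, e) >= 0.2 for e in els)]          # 3
    els  = [e for e in els if all(delta_r(e, j) >= 0.4 for j in jets)]          # 4
    jets = [j for j in jets
            if not (j.ntracks < 3 and any(delta_r(j, m) < 0.2
                                          or (m.track_ids & j.track_ids)
                                          for m in mus))]                       # 5
    mus  = [m for m in mus if all(delta_r(m, j) >= 0.4 for j in jets)]          # 6
    jets = [j for j in jets
            if all(delta_r(j, t) >= 0.4 for t in taus if t.medium_rnn)]         # 7
    return els, mus, taus, jets
```

The ordering matters: for example, a jet removed in step 3 because it overlaps an electron can no longer cause that electron to be discarded in step 4.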
To suppress low-mass particle decays, if surviving electrons and muons form an opposite-sign (OS) pair with $m_{\mathrm{OS}} < 4$ GeV, or form a same-flavour, opposite-sign (SFOS) pair in the $\Upsilon(1S)$ to $\Upsilon(3S)$ mass range $8.4 < m_{\mathrm{SFOS}} < 10.4$ GeV, both leptons are discarded. Finally, to suppress leptons from a decay chain in which multiple heavy-flavour quarks decay leptonically (e.g. a cascade of $b$- and $c$-hadron semileptonic decays), if two leptons are found within $\Delta R = 0.6$ of one another and one of them has $p_{\mathrm{T}} < 30$ GeV, both leptons are discarded.
Reconstructed charged leptons may be 'real', defined to be prompt and genuinely isolated leptons (including those from leptonic $\tau$ decays), or 'fake/non-prompt', defined to be non-prompt or non-isolated leptons that could originate from semileptonic decays of $b$- and $c$-hadrons, from in-flight decays of light mesons, from misidentification of particles within light-flavour or gluon-initiated jets, or from photon conversions. To suppress fake/non-prompt leptons, preselected objects surviving overlap removal are required to satisfy additional identification criteria and are referred to as signal leptons/jets. Signal electrons must have $p_{\mathrm{T}} > 7$ GeV and signal muons must have $p_{\mathrm{T}} > 5$ GeV. Signal electrons must also satisfy 'medium' likelihood-based identification criteria [110], while signal $\tau_{\mathrm{had}}$ must satisfy the 'medium' RNN-based identification criteria [117]. Signal electrons and muons must pass $p_{\mathrm{T}}$-dependent isolation requirements imposed to reduce the contributions from semileptonic decays of hadrons and from jets misidentified as prompt leptons. The 'Loose' isolation working point is used for electrons and muons, as described in Refs. [110] and [111], including updates to improve the performance under the higher pile-up conditions encountered during 2017 and 2018 data-taking. To improve the identification of closely spaced charged leptons (e.g. from boosted decays), contributions to the isolation energy and $p_{\mathrm{T}}$ sums from nearby electrons and muons passing all other signal-lepton requirements are removed. To further suppress electrons and muons originating from secondary vertices, the transverse impact parameter normalised to its uncertainty must be small, $|d_{0}|/\sigma_{d_{0}} < 5\,(3)$ for electrons (muons). To reduce pile-up effects, signal jets with $p_{\mathrm{T}} < 120$ GeV and $|\eta| < 2.5$ must satisfy additional criteria using the 'medium' working point of the jet-vertex-tagging algorithm described in Ref. [118].

Signal regions
The search strategy for the SUSY scenarios considered here selects events with at least four signal leptons ($e$, $\mu$, $\tau_{\mathrm{had}}$), and the events are classified according to the number of light signal leptons ($\ell = e, \mu$) and signal $\tau_{\mathrm{had}}$ required, as follows: $4\ell0\tau$, with at least four light leptons and no $\tau_{\mathrm{had}}$ multiplicity requirement; $3\ell1\tau$, with exactly three light leptons and at least one $\tau_{\mathrm{had}}$; or $2\ell2\tau$, with exactly two light leptons and at least two $\tau_{\mathrm{had}}$. A general region, $5\ell0\tau$, with at least five light leptons and no $\tau_{\mathrm{had}}$ multiplicity requirement is also considered. The signal region (SR) definitions are summarised in Table 4.
To target the RPC GGM scenarios, events with $4\ell0\tau$ are selected, and these must have two pairs of SFOS leptons that are both consistent with a leptonic $Z$ boson decay. The SFOS pair with mass closer to the $Z$ boson mass is labelled as the first $Z$ candidate, while the other SFOS pair is labelled as the second $Z$ candidate. The first (second) $Z$ candidate must have an invariant mass in the range 81.2-101.2 GeV (61.2-101.2 GeV). The mass peak of the first $Z$ candidate is narrower due to the ordering of the candidates, so widening the low-mass side of the mass window used for the selection of the second $Z$ candidate increases the GGM signal acceptance. GGM scenarios with branching ratio $\mathcal{B}(\tilde{\chi}^{0}_{1} \to \tilde{G}h) > 0$ will have a significant $h \to b\bar{b}$ component, but the four-lepton analysis is not sensitive to these decays, so $b$-tagged jets are vetoed to suppress the $t\bar{t}Z$ and $t\bar{t}$ SM backgrounds. Two SRs are defined with $4\ell0\tau$, no $b$-tagged jets, a first- and second-$Z$-candidate requirement, and different selections on $E_{\mathrm{T}}^{\mathrm{miss}}$: a loose signal region (SR0-ZZ$^{\mathrm{loose}}_{\mathrm{bveto}}$) with $E_{\mathrm{T}}^{\mathrm{miss}} > 100$ GeV, and a tighter signal region (SR0-ZZ$^{\mathrm{tight}}_{\mathrm{bveto}}$) with $E_{\mathrm{T}}^{\mathrm{miss}} > 200$ GeV, optimised for the low-mass and high-mass higgsino GGM scenarios, respectively. Two further SRs that showed an excess in the 13 TeV partial-dataset analysis of Ref. [18] are also examined here, and are defined with $4\ell0\tau$, no requirement on $b$-tagged jets, a first- and second-$Z$-candidate requirement, and different selections on $E_{\mathrm{T}}^{\mathrm{miss}}$: a loose signal region (SR0-ZZ$^{\mathrm{loose}}$, labelled SR0C in Ref. [18]) with $E_{\mathrm{T}}^{\mathrm{miss}} > 50$ GeV, and a tighter signal region (SR0-ZZ$^{\mathrm{tight}}$, labelled SR0D in Ref. [18]) with $E_{\mathrm{T}}^{\mathrm{miss}} > 100$ GeV. The two newly defined regions, SR0-ZZ$^{\mathrm{loose}}_{\mathrm{bveto}}$ and SR0-ZZ$^{\mathrm{tight}}_{\mathrm{bveto}}$, are subsets of SR0-ZZ$^{\mathrm{loose}}$ and SR0-ZZ$^{\mathrm{tight}}$, respectively. For the RPV scenarios, events with $4\ell0\tau$ are used to target the $LL\bar{E}12k$ models, and events with $4\ell0\tau$, $3\ell1\tau$, and $2\ell2\tau$ are used to target the $LL\bar{E}i33$ models.
To suppress SM backgrounds containing a $Z$ boson, a $Z$ veto is applied, which rejects events where any SFOS lepton pair has an invariant mass close to the $Z$ boson mass, in the range 81.2-101.2 GeV. The veto is extended to three- and four-lepton invariant-mass combinations to suppress events where a photon radiated from a $Z \to \ell\ell$ decay converts to a second SFOS lepton pair: any event with an $\ell^{+}\ell^{-}\ell'^{\pm}$ or $\ell^{+}\ell^{-}\ell'^{+}\ell'^{-}$ system with invariant mass in the range 81.2-101.2 GeV is rejected (the flavours of $\ell$ and $\ell'$ may differ). A small number of four-lepton events will satisfy neither the $Z$-candidate requirement described above for the GGM scenarios nor the $Z$ veto; however, these are assumed to come from $Z \to \ell^{+}\ell^{-}$ and $Z \to \ell^{+}\ell^{-}\ell^{+}\ell^{-}$ decays, which are not considered to be signal-like.
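The extended $Z$ veto can be sketched as follows (an illustrative sketch, not the analysis code), representing each lepton by a $(p_{\mathrm{T}}, \eta, \phi, \text{flavour}, \text{charge})$ tuple in the massless approximation:

```python
import itertools, math

MZ_LOW, MZ_HIGH = 81.2, 101.2  # GeV

def inv_mass(leps):
    """Invariant mass of massless leptons given as (pt, eta, phi, flavour, charge)."""
    px = sum(pt * math.cos(phi) for pt, eta, phi, _, _ in leps)
    py = sum(pt * math.sin(phi) for pt, eta, phi, _, _ in leps)
    pz = sum(pt * math.sinh(eta) for pt, eta, phi, _, _ in leps)
    e  = sum(pt * math.cosh(eta) for pt, eta, phi, _, _ in leps)
    return math.sqrt(max(e * e - px * px - py * py - pz * pz, 0.0))

def is_sfos(a, b):
    return a[3] == b[3] and a[4] == -b[4]

def has_sfos_pair(combo):
    return any(is_sfos(x, y) for x, y in itertools.combinations(combo, 2))

def two_sfos_pairs(combo):
    a, b, c, d = combo
    return ((is_sfos(a, b) and is_sfos(c, d)) or
            (is_sfos(a, c) and is_sfos(b, d)) or
            (is_sfos(a, d) and is_sfos(b, c)))

def passes_z_veto(leps):
    """Reject events with any SFOS, SFOS+lepton, or double-SFOS mass in the Z window."""
    for combo in itertools.combinations(leps, 2):
        if is_sfos(*combo) and MZ_LOW < inv_mass(combo) < MZ_HIGH:
            return False
    for combo in itertools.combinations(leps, 3):
        if has_sfos_pair(combo) and MZ_LOW < inv_mass(combo) < MZ_HIGH:
            return False
    for combo in itertools.combinations(leps, 4):
        if two_sfos_pairs(combo) and MZ_LOW < inv_mass(combo) < MZ_HIGH:
            return False
    return True

# a back-to-back e+e- pair at m ~ 91.2 GeV fails the veto
ee = [(45.6, 0.0, 0.0, 'e', +1), (45.6, 0.0, math.pi, 'e', -1)]
print(passes_z_veto(ee))  # False
```

The three- and four-lepton combinations catch radiative $Z \to \ell\ell\gamma$ events in which the photon converts, where no single SFOS pair alone reconstructs the full $Z$ mass.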
The gluino and wino RPV models can produce $b$-quarks ($\tilde{g} \to b\bar{b}\tilde{\chi}^{0}_{1}$, or $\tilde{\chi}^{0}_{2} \to \tilde{\chi}^{0}_{1}h$ with $h \to b\bar{b}$), and these decay chains are an important component of the signal for high $\Delta m(\mathrm{NLSP}, \tilde{\chi}^{0}_{1}) = m(\mathrm{NLSP}) - m(\tilde{\chi}^{0}_{1})$. A veto on the presence of $b$-tagged jets is applied in some signal regions to minimise heavy-flavour SM backgrounds, while at least one $b$-tagged jet is required in other signal regions to improve sensitivity to high-$\Delta m(\mathrm{NLSP}, \tilde{\chi}^{0}_{1})$ gluino and wino RPV scenarios.
In order to separate the RPV SUSY signal from the SM background, the effective mass of the event, $m_{\mathrm{eff}}$, is used, defined as the scalar sum of the $E_{\mathrm{T}}^{\mathrm{miss}}$, the $p_{\mathrm{T}}$ of the signal leptons and the $p_{\mathrm{T}}$ of all jets with $p_{\mathrm{T}} > 40$ GeV. The $p_{\mathrm{T}} > 40$ GeV requirement for jets aims to suppress contributions from pile-up and the underlying event. A selection using $m_{\mathrm{eff}}$ rather than $E_{\mathrm{T}}^{\mathrm{miss}}$ is particularly effective for the RPV SUSY scenarios, which produce multiple high-energy leptons (and in some cases jets), but only low to moderate $E_{\mathrm{T}}^{\mathrm{miss}}$ from neutrinos in the final state. The chosen $m_{\mathrm{eff}}$ thresholds are found to be broadly optimal for the wide range of RPV scenarios with different NLSPs considered in this paper.
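The $m_{\mathrm{eff}}$ definition is a plain scalar sum, as the following sketch shows (illustrative only; all values in GeV):

```python
def effective_mass(met, lepton_pts, jet_pts, jet_pt_min=40.0):
    """m_eff: scalar sum of E_T^miss, signal-lepton pT, and jet pT above 40 GeV."""
    return met + sum(lepton_pts) + sum(pt for pt in jet_pts if pt > jet_pt_min)

# hypothetical four-lepton event: the soft 35 GeV jet is excluded from the sum
print(effective_mass(100.0, [50.0, 40.0, 30.0, 20.0], [60.0, 35.0]))  # 300.0
```

Because every term enters as a scalar, $m_{\mathrm{eff}}$ grows with the overall energy scale of the event even when the vector sum (and hence $E_{\mathrm{T}}^{\mathrm{miss}}$) is small, which is the situation typical of RPV decays.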
Three general signal regions are defined with a $Z$ veto, no $b$-tagged jets, and $m_{\mathrm{eff}} > 600$ GeV: SR0$^{\mathrm{loose}}_{\mathrm{bveto}}$ with $4\ell0\tau$, SR1$^{\mathrm{loose}}_{\mathrm{bveto}}$ with $3\ell1\tau$, and SR2$^{\mathrm{loose}}_{\mathrm{bveto}}$ with $2\ell2\tau$. These signal regions are not optimised for the SUSY scenarios considered here; instead they select regions with low levels of SM background to target new phenomena decaying to four-lepton final states. Two further signal regions are defined with $4\ell0\tau$ and a $Z$ veto: a high-$m_{\mathrm{eff}}$ signal region (SR0$^{\mathrm{tight}}_{\mathrm{bveto}}$) with no $b$-tagged jets and $m_{\mathrm{eff}} > 1250$ GeV, and a signal region (SR0$_{\mathrm{breq}}$) with one or more $b$-tagged jets and $m_{\mathrm{eff}} > 1300$ GeV, both optimised for RPV $LL\bar{E}12k$ scenarios. Similarly, two further signal regions are defined with $3\ell1\tau$ and a $Z$ veto: a high-$m_{\mathrm{eff}}$ signal region (SR1$^{\mathrm{tight}}_{\mathrm{bveto}}$) with no $b$-tagged jets and $m_{\mathrm{eff}} > 1000$ GeV, and a signal region (SR1$_{\mathrm{breq}}$) with one or more $b$-tagged jets and $m_{\mathrm{eff}} > 1300$ GeV, both optimised for RPV $LL\bar{E}i33$ scenarios. Finally, two signal regions are defined with $2\ell2\tau$ and a $Z$ veto: a high-$m_{\mathrm{eff}}$ signal region (SR2$^{\mathrm{tight}}_{\mathrm{bveto}}$) with no $b$-tagged jets and $m_{\mathrm{eff}} > 1000$ GeV, and a signal region (SR2$_{\mathrm{breq}}$) with one or more $b$-tagged jets and $m_{\mathrm{eff}} > 1100$ GeV, both optimised for RPV $LL\bar{E}i33$ scenarios.
A general signal region, SR5L, with at least five light leptons is also defined, with no further selection applied.

Table 4: Signal region definitions. The $Z$ boson column refers to the veto or selection of a first and second $Z$ candidate, as described in the text.


Background determination
The SM background is composed of processes that can give rise to four real or fake/non-prompt leptons, and these are classified into two categories:

Irreducible background: hard-scattering processes giving rise to events with four or more real leptons, $ZZ$, $t\bar{t}Z$, $t\bar{t}WW$, $t\bar{t}WZ$, $t\bar{t}ZZ$, $t\bar{t}WH$, $t\bar{t}HH$, $VVZ$ ($ZZZ$, $WZZ$, $WWZ$), Higgs ($H$ via ggF, $WH$, $ZH$, $H$ via VBF, $t\bar{t}H$), $t\bar{t}t\bar{t}$, $t\bar{t}tW$.
Reducible background: processes leading to events with at least one fake/non-prompt lepton, $t\bar{t}$, $Z$+jets, $WZ$, $WW$, $WWW$, $t\bar{t}W$, $t\bar{t}t$. Processes listed under irreducible that do not undergo a decay to four real leptons (e.g. $ZZ \to q\bar{q}\ell\ell$) are also included in the reducible background.
Backgrounds with three or more fake/non-prompt leptons (e.g. $W$+jets) are found to be $<1\%$ of the total SM background in the four-lepton regions using the method outlined in Section 7.2, and are neglected. The systematic uncertainty of the reducible background is increased to cover any effect from neglected backgrounds (discussed in Section 8).
In the four-lepton signal regions, the main irreducible backgrounds are $ZZ$ and $t\bar{t}Z$, while the reducible background is dominated by the two-fake/non-prompt-lepton backgrounds $t\bar{t}$ and $Z$+jets. The $ZZ$ and $t\bar{t}Z$ backgrounds are estimated using MC simulation normalised to data in control regions (CRs), while the other irreducible backgrounds are estimated from MC simulation. The reducible backgrounds are derived from data using a fake-factor method. Signal regions with $4\ell 0\tau$ are dominated by irreducible background processes, whereas the reducible background processes dominate the $3\ell 1\tau$ and $2\ell 2\tau$ regions. The predictions for irreducible and reducible backgrounds are tested in validation regions (Section 9).
For SR5L, the main irreducible background processes are $ZZ$ and Higgs, followed by small contributions from $ZZ \to 6\ell$ and $t\bar{t}Z \to 5\ell$, where virtual photons convert into lepton pairs (internal conversions). However, reducible background processes are the leading source of events in the five-light-lepton signal region, and are dominated by $ZZ \to 4\ell$ and $t\bar{t}Z \to 4\ell$.
The HistFitter [119] software framework is used when constraining the $ZZ$ and $t\bar{t}Z$ background normalisations, and a 'background-only fit' of the observations in the CRs is used to estimate the expected background in the SRs, without considering any CR signal contamination. A likelihood function is built as a product of Poisson probability functions, describing the observed and expected numbers of events in the CRs and SRs. The observed numbers of events in the various CRs and SRs are used in a combined profile likelihood fit to determine the expected SM background yields in each of the SRs. The systematic uncertainties in the expected SM background yields described in Section 8 are included as nuisance parameters, constrained to be Gaussian with a width determined by the size of the uncertainty. Common nuisance parameters take into account the correlations between CRs and SRs, and between background processes. The fit parameters are determined by maximising the product of the Poisson probability functions and the Gaussian constraints on the nuisance parameters.
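As a rough illustration of the fit structure described above (a toy sketch, not the HistFitter implementation; the region contents and systematic names are invented), the negative log-likelihood is a sum of Poisson terms over regions plus unit-Gaussian penalties for the nuisance parameters:

```python
import math

def poisson_logpmf(n, lam):
    # log of the Poisson probability to observe n events given expectation lam
    return n * math.log(lam) - lam - math.lgamma(n + 1)

def background_only_nll(norms, thetas, regions, constraints):
    """Toy negative log-likelihood.
    norms: normalisation factors per process (e.g. {'ZZ': 1.15});
    thetas: nuisance parameters in units of sigma (0 = nominal);
    regions: list of (n_observed, {process: nominal yield},
                      {process: {systematic: relative impact}});
    constraints: names of nuisance parameters with unit-Gaussian constraints."""
    nll = 0.0
    for n_obs, nominal, impacts in regions:
        lam = 0.0
        for proc, y in nominal.items():
            y_scaled = y * norms.get(proc, 1.0)
            for syst, rel in impacts.get(proc, {}).items():
                y_scaled *= 1.0 + rel * thetas[syst]  # linear response model
            lam += y_scaled
        nll -= poisson_logpmf(n_obs, lam)
    for syst in constraints:
        nll += 0.5 * thetas[syst] ** 2  # unit-Gaussian constraint term
    return nll

# Invented example: one CR with 10 observed events and a nominal ZZ yield of 10.
nll_matched = background_only_nll({'ZZ': 1.0}, {'jes': 0.0},
                                  [(10, {'ZZ': 10.0}, {})], ['jes'])
```

Minimising such a function over the normalisation factors and nuisance parameters mirrors the background-only fit logic: CR yields pull the normalisation factors, while the Gaussian terms constrain each nuisance parameter near its nominal value.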

Irreducible background determination
The irreducible background processes $ZZ$ and $t\bar{t}Z$ are estimated using MC simulation normalised to data yields in CRs which are orthogonal to the SRs and minimise potential signal contamination. By normalising the MC simulation to data, the estimation of $ZZ$ and $t\bar{t}Z$ in the SRs is improved. A simultaneous fit to the CRs and SRs (see Section 10) provides the final estimate of the yields and their uncertainties.

The $ZZ$ and $t\bar{t}Z$ control region definitions are shown in Table 5. The $ZZ$ CR, CRZZ, is defined with at least four light leptons, no $b$-tagged jets, a first- and second-$Z$ requirement, and $E_{\mathrm{T}}^{\mathrm{miss}} < 50$ GeV, while the $t\bar{t}Z$ CR, CRttZ, is defined with $4\ell 0\tau$, at least one $b$-tagged jet, only one $Z$ boson candidate, and $E_{\mathrm{T}}^{\mathrm{miss}} > 100$ GeV. The background-only fit is used to obtain normalisation factors for the $ZZ$ and $t\bar{t}Z$ MC simulation in their respective CRs of $1.15 \pm 0.09$ and $1.06 \pm 0.24$. The uncertainties quoted for the normalisation factors include the statistical uncertainty of the data and MC simulation in the CR, as well as the experimental and theory uncertainties from the subtraction of contaminating SM processes (see Section 8). The $m_{\mathrm{eff}}$ distributions for CRZZ and CRttZ after the simultaneous fit is performed are shown in Figure 3.
Since the regions CRZZ and CRttZ include five-light-lepton events, they are both restricted to exactly four light leptons when estimating the backgrounds for SR5L. In these restricted CRs, normalisation factors of $1.14 \pm 0.09$ and $1.07 \pm 0.25$ are obtained for the $ZZ$ and $t\bar{t}Z$ backgrounds, respectively.

Reducible background determination
The number of reducible background events in a given region is estimated from data with a hybrid fake-factor method that uses a combination of data and MC simulation. Preselected leptons surviving overlap removal are classified as 'signal' or 'loose' depending on whether they pass or fail the signal lepton requirements, respectively. A very loose requirement on the identification RNN score of $> 0.05$ is also applied to the preselected $\tau_{\mathrm{had}}$, as those with very low RNN scores are typically gluon-induced jets or jets arising from pile-up, which is not the case for the signal $\tau_{\mathrm{had}}$ candidates. Probabilities for a fake/non-prompt lepton to be identified as a signal or loose lepton are calculated from simulation and corrected to data where possible. The ratio $F = f/\bar{f}$ for fake/non-prompt leptons is then defined as the 'fake factor', where $f$ ($\bar{f}$) is the probability that a fake/non-prompt lepton is identified as a signal (loose) lepton.
The reducible background prediction is extracted by applying fake factors to control regions in data. The CR definition only differs from that of the associated SR in the quality of the required leptons; here exactly one (CR1) or two (CR2) of the four leptons must be identified as loose leptons, as shown in Table 6. In $3\ell 1\tau$ and five-lepton events, the contribution from events with two fake/non-prompt light leptons is negligible, as is the contribution from one and two fake/non-prompt light leptons in $2\ell 2\tau$ events.

Table 6: Reducible background control region definitions, where "L" and "T" denote signal light leptons and $\tau_{\mathrm{had}}$, while "l" and "t" denote loose light leptons and $\tau_{\mathrm{had}}$. Loose leptons are preselected leptons surviving overlap removal that do not pass the signal lepton requirements. Additional selections for $b$-tagged jets, $Z$ veto/requirement, $E_{\mathrm{T}}^{\mathrm{miss}}$, and $m_{\mathrm{eff}}$ are applied to match a given signal or validation region.
The fake factors depend on the lepton flavour, the source of the fake/non-prompt lepton, and the production process. Fake factors are calculated separately for each fake/non-prompt-lepton flavour ($e$, $\mu$, $\tau_{\mathrm{had}}$) and source (light-flavour jets, heavy-flavour jets, gluon-initiated jets for $\tau_{\mathrm{had}}$ only, and photon conversions for electrons and $\tau_{\mathrm{had}}$ only), where these categories are referred to as fake/non-prompt-lepton 'types'. The fake factor per fake/non-prompt-lepton type for each production process ($t\bar{t}$ and $Z$+jets, or $ZZ$ for the five-lepton regions) is binned in lepton $p_{\mathrm{T}}$, $\eta$, proximity to other leptons ($\Delta R$) for electrons and muons, and number of prongs for $\tau_{\mathrm{had}}$. The statistical uncertainties on the fake factors for the dominant types of fake/non-prompt lepton are very small. To correctly account for the relative abundances of fake/non-prompt-lepton types and production processes, a weighted average of fake factors is computed in each CR as

$$F_w = \sum_{i,j} s_i \times F_{ij} \times R_{ij} \,, \qquad (1)$$

where $F_{ij}$ is the fake factor for fake/non-prompt leptons of type $i$ from process $j$, calculated using MC simulation. The fake factors are weighted by the 'process fractions', $R_{ij}$, that describe the fraction of fake/non-prompt leptons of type $i$ from process $j$ in that region. The process fractions are determined from MC simulation in the corresponding CR2, and are similar to the process fractions obtained in the signal regions from MC simulation, which suffer from low event counts. For the five-lepton regions, no corresponding CR2 is used for the reducible estimation and the process fractions are determined from MC simulation in the corresponding CR1. To account for possible differences between data and MC simulation, the fake factors obtained from simulation are corrected to data using 'scale factors', $s_i$. The scale factors are assumed to be independent of the physical process (e.g. $t\bar{t}$, $Z$+jets) and depend on the fake/non-prompt-lepton type only.
They are determined from data in regions enriched in objects of a given fake/non-prompt-lepton type, where MC simulation is used to remove any small contamination from leptons not from the fake/non-prompt-lepton type under study.
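The weighted-average construction described above can be sketched as follows. This is a toy example: the scale factors, fake factors and process fractions below are invented values, not measured ATLAS inputs:

```python
# Toy sketch of the weighted fake factor: F_w = sum over (type i, process j)
# of scale_factor[i] * fake_factor[i, j] * process_fraction[i, j],
# where the process fractions sum to one in the control region.

def weighted_fake_factor(scale_factors, fake_factors, process_fractions):
    total = 0.0
    for (ftype, proc), fraction in process_fractions.items():
        total += scale_factors[ftype] * fake_factors[(ftype, proc)] * fraction
    return total

# Invented inputs: heavy-flavour (HF) fakes from ttbar, light-flavour (LF)
# fakes from Z+jets, with illustrative data/MC scale factors.
F_w = weighted_fake_factor(
    scale_factors={"HF": 1.08, "LF": 1.38},
    fake_factors={("HF", "ttbar"): 0.20, ("LF", "Zjets"): 0.15},
    process_fractions={("HF", "ttbar"): 0.6, ("LF", "Zjets"): 0.4},
)
```

The same structure generalises to any number of types and processes; only the relative abundances ($R_{ij}$) and the per-type corrections ($s_i$) change between regions.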
For fake/non-prompt leptons from heavy-flavour jets, the scale factor is measured in a $t\bar{t}$-dominated control sample. The heavy-flavour scale factors show a modest $p_{\mathrm{T}}$ dependence, decreasing for electrons from $1.18 \pm 0.10$ to $1.08 \pm 0.08$ as the electron $p_{\mathrm{T}}$ increases from 7 GeV to 20 GeV. For muons, the heavy-flavour scale factor is less dependent on $p_{\mathrm{T}}$, changing from $1.00 \pm 0.04$ to $0.94 \pm 0.10$ as the muon $p_{\mathrm{T}}$ increases from 5 GeV to 20 GeV. For 1-prong (3-prong) $\tau_{\mathrm{had}}$, the heavy-flavour scale factor decreases from $1.26 \pm 0.07$ to $0.93 \pm 0.11$ ($1.15 \pm 0.06$ to $0.97 \pm 0.12$) as the $\tau_{\mathrm{had}}$ $p_{\mathrm{T}}$ increases from 20 GeV to 50 GeV. The uncertainties quoted for the scale factors include the statistical uncertainties of the data and MC simulation.
The scale factor for fake/non-prompt $\tau_{\mathrm{had}}$ originating from light-flavour jets is measured separately for one- and three-prong $\tau_{\mathrm{had}}$ in a control sample dominated by $Z$+jets events. The scale factors are $p_{\mathrm{T}}$-dependent, decreasing from $1.115 \pm 0.009$ to $0.919 \pm 0.017$ ($1.340 \pm 0.023$ to $1.04 \pm 0.05$) as the 1-prong (3-prong) $\tau_{\mathrm{had}}$ $p_{\mathrm{T}}$ increases from 20 GeV to 50 GeV. The scale factor for $\tau_{\mathrm{had}}$ originating from gluon-initiated jets is assumed to be the same as that obtained for light-flavour jets. The scale factor for fake/non-prompt electrons originating from light-flavour jets is measured in a $Z$+jets-dominated control sample, where the light-flavour scale factor increases from $1.05 \pm 0.29$ to $1.38 \pm 0.09$ as the electron $p_{\mathrm{T}}$ increases from 7 GeV to 20 GeV. The contribution to the signal regions from fake/non-prompt muons originating from light-flavour jets, or from leptons from photon conversions, is very small, and the scale factor cannot be measured reliably using data. Therefore, values of $1.00 \pm 0.10$ are used instead, motivated by the similar size of the uncertainties in the other scale-factor measurements.
The final estimate of the number $N^{\mathrm{red}}_{\mathrm{SR}}$ of background events with one or two fake/non-prompt leptons in each SR is determined from the numbers of events in data in the corresponding CRs, $N^{\mathrm{data}}_{\mathrm{CR1}}$ and $N^{\mathrm{data}}_{\mathrm{CR2}}$, according to

$$N^{\mathrm{red}}_{\mathrm{SR}} = \left(N^{\mathrm{data}}_{\mathrm{CR1}} - N^{\mathrm{irr}}_{\mathrm{CR1}}\right) F_{w,1} - \left(N^{\mathrm{data}}_{\mathrm{CR2}} - N^{\mathrm{irr}}_{\mathrm{CR2}}\right) F_{w,1}\, F_{w,2} \,, \qquad (2)$$

where $F_{w,1}$ and $F_{w,2}$ are the two weighted fake factors constructed using the highest- and second-highest-$p_{\mathrm{T}}$ loose leptons in the CRs, respectively. Only the first term is used to estimate the reducible background for five-lepton events. The small contributions from irreducible background processes in the CRs, $N^{\mathrm{irr}}_{\mathrm{CR1,CR2}}$, are evaluated using MC simulation and subtracted from the corresponding numbers of events seen in data. The second term removes the double-counting of events with two fake/non-prompt leptons in the first term. Processes with one fake/non-prompt lepton are included in the reducible background estimation, but have very low contributions to CR1 and CR2 because of their small cross-sections, e.g. $WZ$ and $t\bar{t}W$. Both CR1 and CR2 are dominated by the two-fake/non-prompt-lepton processes $t\bar{t}$ and $Z$+jets, so the first term is roughly twice the second term. Higher-order terms in $F_w$ describing three- and four-fake/non-prompt-lepton backgrounds are neglected, as are some terms with a very small contribution; e.g. in $3\ell 1\tau$ events, the contribution from events with two fake/non-prompt light leptons is negligible. A systematic uncertainty is applied to account for these neglected terms, as described in the following section.
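The two-term estimate described above (Eq. (2)) amounts to straightforward arithmetic once the weighted fake factors and the irreducible-subtracted CR yields are in hand. A sketch with invented yields (not analysis values):

```python
# Toy sketch of the two-term reducible-background estimate:
# N_SR^red = (N_CR1^data - N_CR1^irr) * Fw1 - (N_CR2^data - N_CR2^irr) * Fw1 * Fw2
# The second term removes double counting of two-fake events in the first.

def reducible_yield(n_cr1_data, n_cr1_irr, n_cr2_data, n_cr2_irr, fw1, fw2):
    term1 = (n_cr1_data - n_cr1_irr) * fw1          # one-loose-lepton CR
    term2 = (n_cr2_data - n_cr2_irr) * fw1 * fw2    # two-loose-lepton CR
    return term1 - term2

# Invented example: 100 (40) data events in CR1 (CR2), small irreducible
# contamination, and weighted fake factors of 0.2 for both loose leptons.
n_red = reducible_yield(n_cr1_data=100, n_cr1_irr=10,
                        n_cr2_data=40, n_cr2_irr=4,
                        fw1=0.2, fw2=0.2)
```

For five-lepton regions only the first term would be kept, matching the text above.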

Systematic uncertainties
The uncertainties affecting the SM and signal simulation-based estimates can be divided into three components: the statistical uncertainty of the MC simulation, experimental uncertainty in event reconstruction ($e$, $\mu$, $\tau_{\mathrm{had}}$ and jets, $E_{\mathrm{T}}^{\mathrm{miss}}$), and theoretical uncertainty. The reducible background estimation is affected by different sources of uncertainty, associated with the data counts in the control regions and with the weighted fake factors. The leading SM background in the signal regions targeting the higgsino GGM models (SR0-ZZ) is generally $ZZ$ production, and the jet experimental and theoretical uncertainties are seen to dominate the total uncertainty. The exception is SR0-ZZ$^{\mathrm{tight}}_{\mathrm{bveto}}$, where the reducible background and its associated uncertainties are the leading contribution to the total SM background and its uncertainty. The reducible background is also the leading component of the SM background in most of the high-$m_{\mathrm{eff}}$ signal regions targeting the RPV models, as well as in the five-lepton signal region, and the uncertainty in the reducible background dominates the total uncertainty. The exceptions to this are SR0$^{\mathrm{loose}}_{\mathrm{bveto}}$ and SR0$_{\mathrm{breq}}$, where the irreducible processes and theory uncertainties also contribute significantly to the total SM background and uncertainty. The primary sources of systematic uncertainty, described below, are summarised in Figure 4.

Figure 4: Breakdown of the dominant systematic uncertainties in the background estimates for the signal regions. Theoretical uncertainties in the simulation-based estimates, including those in the acceptance for events in the $ZZ$ and $t\bar{t}Z$ simulations, are grouped under the "Theoretical" category, while the "Normalisation" category includes the statistical uncertainties of the data counts in CRZZ and CRttZ and the uncertainty from the fitted normalisation factors. For simplicity, the individual uncertainties are added in quadrature for each category, without accounting for correlations.
The total background uncertainty is taken from the background-only fit where correlations of individual uncertainties are accounted for.
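As noted in the Figure 4 caption, each category's band is formed by adding its individual uncertainties in quadrature, ignoring correlations. A minimal sketch of that combination:

```python
import math

# Quadrature sum of independent uncertainty components, as used for the
# per-category bands in Figure 4 (correlations deliberately ignored there).
def quadrature_sum(uncertainties):
    return math.sqrt(sum(u * u for u in uncertainties))
```

For example, 3% and 4% components combine to 5% in quadrature, smaller than the 7% a linear sum would give; the full fit instead propagates correlations through the nuisance parameters.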

The statistical uncertainty of the MC simulation-based background estimate is small, typically less than 5% of the total background estimate in the signal regions. However, this rises to 10-15% in SR0$_{\mathrm{breq}}$, SR2$^{\mathrm{tight}}_{\mathrm{bveto}}$, and SR2$_{\mathrm{breq}}$, where tight selections on $m_{\mathrm{eff}}$ are made. The experimental uncertainties include those associated with electrons, muons, $\tau_{\mathrm{had}}$, jets, and $E_{\mathrm{T}}^{\mathrm{miss}}$, as well as the uncertainty associated with the simulation of pile-up and the uncertainty in the luminosity. The uncertainty in the combined 2015-2018 integrated luminosity is 1.7% [7], obtained using the LUCID-2 detector [120] for the primary luminosity measurements. The uncertainties associated with pile-up and luminosity are included in the total uncertainty in Figure 4. The experimental uncertainties pertaining to electrons, muons and $\tau_{\mathrm{had}}$ include the uncertainties due to the lepton identification efficiencies, lepton energy scale and energy resolution, and isolation and trigger efficiencies. Systematic uncertainties from electron, muon, and $\tau_{\mathrm{had}}$ sources are generally small in all signal regions, at a few percent relative to the total expected background. The uncertainties associated with jets are due to the jet energy scale, jet energy resolution, jet vertex tagging, and $b$-tagging. The jet energy resolution is the dominant source of uncertainty for jets in the signal regions. Uncertainties in the object momenta are propagated to the $E_{\mathrm{T}}^{\mathrm{miss}}$ measurement, and additional uncertainties in $E_{\mathrm{T}}^{\mathrm{miss}}$ arising from energy deposits not associated with any reconstructed objects are also considered. The jet and $E_{\mathrm{T}}^{\mathrm{miss}}$ uncertainties are generally of the order of a few percent in the signal regions, but approach 10-20% in SR0-ZZ$^{\mathrm{loose}}$, SR0-ZZ$^{\mathrm{tight}}$ and SR0-ZZ$^{\mathrm{loose}}_{\mathrm{bveto}}$, where selections on $E_{\mathrm{T}}^{\mathrm{miss}}$ are made.
Theoretical uncertainties in the simulation-based estimates include the theoretical cross-section uncertainties due to the choice of renormalisation and factorisation scales and PDFs, the acceptance uncertainty due to PDF and scale variations, and the choice of MC generator. The theoretical cross-section uncertainties for the irreducible backgrounds used in this analysis are 20% for $VVZ$ [82], 10% for $t\bar{t}Z$ [121], 5% for $H$ via ggF [121], and 20% for the $t\bar{t}WW$/$t\bar{t}WZ$/$t\bar{t}ZZ$/$t\bar{t}WH$/$t\bar{t}HH$, $t\bar{t}tW$/$t\bar{t}t\bar{t}$, and Higgs via VBF/$WH$/$ZH$ processes, where the order of each cross-section calculation is shown in Table 3.
Uncertainties in the acceptance are evaluated for the irreducible processes $ZZ$, $t\bar{t}Z$, $VVZ$, $t\bar{t}WW$ and Higgs, where for each background the uncertainties are correlated across all regions. The uncertainty in the acceptance from varying the strong coupling constant $\alpha_{\mathrm{S}}$ within its uncertainty is included for the $ZZ$, $VVZ$, Higgs, and $t\bar{t}Z$ backgrounds. Uncertainties arising from varying the renormalisation and factorisation scales ($\mu_{\mathrm{r}}$ and $\mu_{\mathrm{f}}$) by factors of 2 and 1/2 are considered; the $\mu_{\mathrm{r}}$ and $\mu_{\mathrm{f}}$ variations are taken as uncorrelated, and the additional coherent up/down variation of $\mu_{\mathrm{r}}$ and $\mu_{\mathrm{f}}$ is also considered. Resummation- and merging-scale variations are also considered for the $ZZ$ sample. The uncertainty related to the choice of CKKW merging scale (20 GeV) was evaluated by considering alternative values of 15 GeV and 30 GeV. The uncertainty related to the resummation scale (the upper cut-off in perturbative calculations of parton-shower evolution) was evaluated by varying the scale by factors of 1/4 and 4. Finally, for Sherpa samples the impact of using an alternative recoil scheme for single-particle emission in the Sherpa parton shower is studied by comparing the nominal samples with alternative ones in which the CSSKIN scheme is used [122]. The theoretical uncertainties in the acceptance dominate the total uncertainties in signal regions where the irreducible backgrounds are a large fraction of the total SM background: SR0-ZZ$^{\mathrm{tight}}$, SR0-ZZ$^{\mathrm{loose}}_{\mathrm{bveto}}$, SR0$^{\mathrm{loose}}_{\mathrm{bveto}}$, and SR0$_{\mathrm{breq}}$. The uncertainty in the reducible background is dominated by the statistical uncertainty of the data events in the corresponding CR1 and CR2, which is the leading source of uncertainty in many of the signal regions where the reducible background is a large fraction of the total SM background, as seen in Figure 4.
The uncertainty in the weighted fake factors includes the statistical uncertainty of the MC simulation in the process fractions, the uncertainty in the fake/non-prompt-lepton scale factors, and the statistical uncertainty of the fake factors determined from simulation. The uncertainties in the fake factors from each fake/non-prompt-lepton type are treated as correlated across processes. Thus, since both CR1 and CR2 are dominated by two-fake/non-prompt-lepton processes with the same source of fake/non-prompt lepton, correlations in the fake factors applied to CR1 and CR2 cause the uncertainties from the weighted fake factors to largely cancel out between the first and second terms in Eq. (2). Finally, a conservative uncertainty is applied to account for the neglected terms in Eq. (2). For example, for $4\ell 0\tau$ events the three- and four-fake/non-prompt-lepton terms are neglected. Weighted fake factors are applied to data events with one signal light lepton and three loose light leptons to estimate an upper limit on this neglected contribution for each $4\ell 0\tau$ region. The calculated upper limit plus its $1\sigma$ statistical uncertainty is then added to the reducible background uncertainty, adding an absolute uncertainty of 0.17 events in SR0$^{\mathrm{loose}}_{\mathrm{bveto}}$. This is repeated for the $3\ell 1\tau$ and $2\ell 2\tau$ regions, accounting for the neglected terms with one or two fake/non-prompt light leptons as necessary, adding an absolute uncertainty of 0.59 events in SR1$^{\mathrm{loose}}_{\mathrm{bveto}}$ and 2.5 events in SR2$^{\mathrm{loose}}_{\mathrm{bveto}}$. Finally, for SR5L, the neglected terms with two or three fake/non-prompt light leptons are evaluated in the same way, adding an absolute uncertainty of 0.5 events.
Systematic uncertainties in the SUSY signal yields from experimental sources are typically of the order of 10%, while typical uncertainties in the signal acceptance and distribution shapes due to scale and parton-shower variations range from $\sim 5\%$ in $4\ell 0\tau$ regions to $\sim 20\%$ in $2\ell 2\tau$ regions. The systematic uncertainty in the signal cross-section is described in Section 4.

Background modelling validation
To validate the modelling of the SM backgrounds, the yields and shapes of key kinematic distributions are compared with data in validation regions (VRs). The VRs are defined to be close to, yet disjoint from, the SRs and CRs, and to be dominated by the background process under consideration, as shown in Table 7. The $ZZ^{(*)}$ modelling is validated in a region (VRZZ) defined with $4\ell 0\tau$, no $b$-tagged jets, and only one $Z$ boson candidate, while the $t\bar{t}Z$ modelling is validated in a region (VRttZ) with $4\ell 0\tau$, a $Z$ veto, at least one $b$-tagged jet, and high $m_{\mathrm{eff}}$. For signal regions that veto $Z$ boson candidates, three general VRs are defined by reversing the $m_{\mathrm{eff}}$ requirement: VR0-noZ, VR1-noZ, and VR2-noZ. Two further VRs with $3\ell 1\tau$ and $2\ell 2\tau$ are defined with one $Z$ boson requirement to check the reducible background modelling across the full range of $m_{\mathrm{eff}}$: VR1-Z and VR2-Z. The background model adopted in the VRs is the same as in the SRs, with the $ZZ$ and $t\bar{t}Z$ backgrounds obtained from MC simulation normalised to data, the other irreducible processes taken from MC simulation, and the reducible background estimated from data using the fake-factor method, with process fractions and loose-lepton control regions corresponding to the VRs. The systematic uncertainties of the SM backgrounds in the VRs are evaluated as in Section 8.
Observed and expected event yields in the VRs are shown in Table 8, where good agreement is seen in general within statistical and systematic uncertainties. No significant deviations from SM expectations are observed in any VR.
The $E_{\mathrm{T}}^{\mathrm{miss}}$, $m_{\mathrm{eff}}$, and lepton $p_{\mathrm{T}}$ distributions in the VRs are shown in Figures 5 and 6. The $m_{\mathrm{eff}}$ distributions in VR0-noZ, VR1-noZ, VR2-noZ, and VRttZ can be seen in the lower $m_{\mathrm{eff}}$ bins of Figures 9(a), 9(b), 9(c), and 10(a), respectively.

Results
The observed number of events in each signal region is reported in Table 9 and Figure 7, along with the background expectations and uncertainties. The $E_{\mathrm{T}}^{\mathrm{miss}}$ and $m_{\mathrm{eff}}$ distributions for all events passing the signal region requirements, except the $E_{\mathrm{T}}^{\mathrm{miss}}$ or $m_{\mathrm{eff}}$ requirement itself, are shown in Figures 8-10.
The HistFitter software framework is used for the statistical interpretation of the results. In order to quantify the probability for the background-only hypothesis to fluctuate to the observed number of events or higher, a one-sided $p_0$-value is calculated using pseudo-experiments, with the profile likelihood ratio used as a test statistic [124]. Signal contamination in the CRs is assumed to be zero in this model-independent fit. The observations are consistent with the SM expectations, with SR5L giving rise to the highest local significance, of $1.9\sigma$.
A signal model can be excluded at 95% confidence level (CL) if the CL$_s$ [125] of the signal-plus-background hypothesis is below 0.05. For each signal region, the expected and observed upper limits at 95% CL on the number of beyond-the-SM events ($S^{95}_{\mathrm{exp}}$ and $S^{95}_{\mathrm{obs}}$) are calculated using the model-independent signal fit. The 95% CL upper limits on the signal cross-section times efficiency ($\langle\epsilon\sigma\rangle^{95}_{\mathrm{obs}}$) are also calculated for each signal region and shown in Table 10.
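For intuition only, the CL$_s$ criterion quoted above can be illustrated with a single-bin counting experiment using exact Poisson tail sums. This is a toy, not the pseudo-experiment-based profile-likelihood machinery used in the analysis, and the yields below are invented:

```python
import math

def poisson_cdf(n, lam):
    # P(N <= n) for a Poisson with mean lam (exact sum, fine for small n)
    return sum(math.exp(-lam) * lam**k / math.factorial(k) for k in range(n + 1))

def cls(n_obs, b, s):
    """Toy CLs for a counting experiment:
    CLs = CL_{s+b} / CL_b, with CL_{s+b} = P(n <= n_obs | s+b)
    and CL_b = P(n <= n_obs | b)."""
    return poisson_cdf(n_obs, s + b) / poisson_cdf(n_obs, b)

# Invented example: 3 observed events on an expected background of 3.
# A large signal (s = 20) is excluded; a tiny signal (s = 0.1) is not.
strong_signal = cls(3, 3.0, 20.0)   # well below 0.05 -> excluded at 95% CL
tiny_signal = cls(3, 3.0, 0.1)      # close to 1 -> not excluded
```

Dividing by CL$_b$ is what protects against excluding signals to which the experiment has little sensitivity when the data fluctuate below the background expectation.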
Exclusion limits at 95% CL are calculated for the SUSY models and shown in Figure 11. A statistical combination is performed using the results from a number of disjoint signal regions targeting the SUSY model, and signal contamination in the CRs used to constrain the $ZZ$ and $t\bar{t}Z$ backgrounds is accounted for. Where two or more signal regions overlap, the signal region with the best expected exclusion is used in the combination: specifically one of SR0-ZZ$^{\mathrm{loose}}$, SR0-ZZ$^{\mathrm{tight}}$, SR0-ZZ$^{\mathrm{loose}}_{\mathrm{bveto}}$ and SR0-ZZ$^{\mathrm{tight}}_{\mathrm{bveto}}$; either SR0$^{\mathrm{loose}}_{\mathrm{bveto}}$ or SR0$^{\mathrm{tight}}_{\mathrm{bveto}}$; either SR1$^{\mathrm{loose}}_{\mathrm{bveto}}$ or SR1$^{\mathrm{tight}}_{\mathrm{bveto}}$; and either SR2$^{\mathrm{loose}}_{\mathrm{bveto}}$ or SR2$^{\mathrm{tight}}_{\mathrm{bveto}}$. The four-lepton signal regions are more sensitive than SR5L in all signal models considered, so SR5L is not used to calculate model-dependent exclusion limits. For the exclusion limits, the observed and expected 95% CL limits take into account the theoretical and experimental uncertainties in the SM background and the experimental uncertainties in the signal. For all expected and observed exclusion limit contours, the $\pm 1\sigma_{\mathrm{exp}}$ uncertainty band shows the impact on the expected limit of the systematic and statistical uncertainties included in the fit. The $\pm 1\sigma^{\mathrm{SUSY}}_{\mathrm{theory}}$ lines around the observed limit illustrate the change in the observed limit as the nominal signal cross-section is scaled up and down by the theoretical cross-section uncertainty. Experimental uncertainties affecting the irreducible backgrounds and signal are treated as correlated between regions and processes, while uncertainties associated with the estimate of the reducible background are correlated between regions only.
Theoretical uncertainties in the irreducible background and signal are treated as correlated between regions, while statistical uncertainties from MC simulation and data in the CRs are treated as uncorrelated between regions.

Table 9: Expected and observed yields for 139 fb$^{-1}$ in the signal regions after the background-only fit. "Other" is the sum of the $t\bar{t}WW$, $t\bar{t}WZ$, $t\bar{t}ZZ$, $t\bar{t}WH$, $t\bar{t}HH$, $t\bar{t}tW$, and $t\bar{t}t\bar{t}$ backgrounds. Both the statistical and systematic uncertainties in the SM background are included in the uncertainties shown.

The exclusion contours for the higgsino GGM models are shown in Figure 11(a), where SR0-ZZ$^{\mathrm{loose}}_{\mathrm{bveto}}$ and SR0-ZZ$^{\mathrm{tight}}_{\mathrm{bveto}}$ dominate the exclusion for low and high higgsino masses, respectively. The analysis is more sensitive to scenarios with higher $B(\tilde{\chi}^0_1 \to Z + \tilde{G})$ as they produce more four-lepton events. For scenarios with $B(\tilde{\chi}^0_1 \to h + \tilde{G}) = 100\%$, final states with lower lepton multiplicity are more sensitive [126]. Higgsino masses up to 540 GeV are excluded for $B(\tilde{\chi}^0_1 \to Z + \tilde{G}) = 100\%$, while the exclusion is weaker for scenarios with $B(\tilde{\chi}^0_1 \to Z + \tilde{G}) < 100\%$. These results significantly improve upon the limits in Ref. [18] by around 200-260 GeV and, in particular, set stronger limits for scenarios with $B(\tilde{\chi}^0_1 \to Z + \tilde{G}) < 100\%$. Figure 11 also shows the exclusion contours for the RPV models considered here, where the limits extend to high NLSP masses due to the high lepton multiplicity in these scenarios ($\tilde{\chi}^0_1 \to \ell\ell\nu$ with 100% branching ratio) and the high efficiency of the $m_{\mathrm{eff}}$ selections. The exclusion in the $\lambda_{12k}$ models is dominated by the combination of SR0$^{\mathrm{tight}}_{\mathrm{bveto}}$ and SR0$_{\mathrm{breq}}$, while the combination of SR1$^{\mathrm{tight}}_{\mathrm{bveto}}$ and SR2$^{\mathrm{tight}}_{\mathrm{bveto}}$ is important for the $\lambda_{i33}$ models. The sensitivity is reduced for large mass splittings between the NLSP and the $\tilde{\chi}^0_1$, where the decay products are strongly boosted. These results improve on the limits set in similar models in Ref. [18] by around 100-350 GeV.
The exclusion contours for the RPV wino NLSP $\lambda_{12k}$ models are shown in Figure 11(b), where $\tilde{\chi}^\pm_1/\tilde{\chi}^0_2$ masses up to $\sim 1.6$ TeV are excluded. Figure 11(b) also shows exclusion contours for the RPV wino NLSP $\lambda_{i33}$ models, where $\tilde{\chi}^\pm_1/\tilde{\chi}^0_2$ masses up to $\sim 1.13$ TeV are excluded for 500 GeV $< m(\tilde{\chi}^0_1) <$ 700 GeV. Figure 11(c) shows exclusion contours for the RPV $\tilde{\ell}_{\mathrm{L}}/\tilde{\nu}$ NLSP model, where left-handed slepton/sneutrino masses are excluded up to $\sim 1.2$ TeV for $m(\tilde{\chi}^0_1) = 800$ GeV for $\lambda_{12k}$ models, and up to 0.87 TeV for $m(\tilde{\chi}^0_1) = 500$ GeV for $\lambda_{i33}$ models. Finally, the exclusion contours for the RPV $\tilde{g}$ NLSP model are shown in Figure 11(d), where gluino masses are excluded up to $\sim 2.45$ TeV for $m(\tilde{\chi}^0_1) > 1$ TeV for $\lambda_{12k}$ models, increasing to $\sim 2.55$ TeV for $m(\tilde{\chi}^0_1) > 2.4$ TeV. For $\lambda_{i33}$ models, gluino masses are excluded up to $\sim 1.78$ TeV for $m(\tilde{\chi}^0_1) > 500$ GeV, increasing to $\sim 1.97$ TeV for $m(\tilde{\chi}^0_1) > 1.8$ TeV.

Figure 11: Exclusion contours for the SUSY models considered, with RPV couplings $\lambda_{12k}$ and $\lambda_{i33}$, where $i, k \in \{1, 2\}$. The limits are set using a statistical combination of disjoint signal regions. Where two (or more) signal regions overlap, the signal region contributing its observed CL$_s$ value to the combination is the one with the better (best) expected CL$_s$ value.

Conclusion
A search for new physics in final states with four or more leptons (electrons, muons or hadronically decaying $\tau$-leptons) is presented, using 139 fb$^{-1}$ of $\sqrt{s} = 13$ TeV $pp$ collision data collected by the ATLAS detector at the LHC from 2015 to 2018. No significant excess over the expected background is observed. Observed 95% CL limits on the visible cross-section are placed in each signal region, and the null result is interpreted in simplified higgsino GGM models, where higgsino $\tilde{\chi}^\pm_1/\tilde{\chi}^0_2/\tilde{\chi}^0_1$ masses up to 540 GeV are excluded in scenarios with a 100% branching ratio for the $\tilde{\chi}^0_1$ decay into a $Z$ boson and a gravitino. This significantly improves upon previous GGM higgsino mass limits, by around 200-260 GeV. The results are also interpreted in simplified models of NLSP pair production with RPV $\tilde{\chi}^0_1$ decays into charged leptons, where lower limits of 1.6 TeV, 1.2 TeV, and 2.5 TeV are placed on the wino, slepton and gluino masses, respectively.