Search for doubly charged Higgs boson production in multi-lepton final states using 139 fb⁻¹ of proton–proton collisions at √s = 13 TeV with the ATLAS detector

A search for pair production of doubly charged Higgs bosons (H±±), each decaying into a pair of prompt, isolated, and highly energetic leptons with the same electric charge, is presented. The search uses a proton–proton collision data sample at a centre-of-mass energy of 13 TeV corresponding to an integrated luminosity of 139 fb⁻¹ recorded by the ATLAS detector during Run 2 of the Large Hadron Collider (LHC). This analysis focuses on same-charge leptonic decays, H±± → ℓ±ℓ′± where ℓ, ℓ′ = e, μ, τ, in two-, three-, and four-lepton channels, but only considers final states which include electrons or muons. No evidence of a signal is observed.
Corresponding upper limits on the production cross-section of a doubly charged Higgs boson are derived, as a function of its mass m(H±±), at 95% confidence level. Assuming that the branching ratios to each of the possible leptonic final states are equal, B(H±± → e±e±) = B(H±± → e±μ±) = B(H±± → μ±μ±) = B(H±± → e±τ±) = B(H±± → μ±τ±) = B(H±± → τ±τ±) = 1/6, the observed (expected) lower limit on the mass of a doubly charged Higgs boson is 1080 GeV (1065 GeV) within the left-right symmetric type-II seesaw model, which is the strongest limit to date produced by the ATLAS Collaboration. Additionally, this paper provides the first direct test of the Zee–Babu neutrino mass model at the LHC, yielding an observed (expected) lower limit of m(H±±) = 900 GeV (880 GeV).


Introduction
In the Standard Model (SM), no doubly charged bosons are present. However, various theories beyond the Standard Model (BSM theories), namely type-II seesaw models [1–6], left-right symmetric models (LRSMs) [7–12], the Zee–Babu neutrino mass model [13–15], 3-3-1 models [16], and the Georgi–Machacek model [17], predict the existence of doubly charged bosons decaying into same-charge lepton pairs. In left-right symmetric models based on the SU(2)_L × SU(2)_R × U(1)_{B−L} symmetry, Higgs multiplets Δ_L and Δ_R, transforming, respectively, as triplets under the SU(2)_L and SU(2)_R gauge symmetries, contain doubly charged Higgs bosons, termed H_L±± and H_R±±. Doubly charged Higgs particles can therefore couple to either left-handed or right-handed leptons. The cross-section for the Drell-Yan production of H_L++ H_L−− is a factor of about two larger than for H_R++ H_R−− production, due to the different couplings to the Z boson [18]. Since the H_L±± particle is common to LRSMs and the canonical type-II seesaw model, the results can be directly interpreted in both. In the Zee–Babu case, two complex scalar SU(2)_L singlets are proposed within the SM gauge group, where one of them is doubly charged and is usually denoted by k±±. As it has the same quantum numbers as the H_R±± from LRSMs, their electroweak production is the same.¹ It was recently shown that for the Drell-Yan production mechanism studied in this analysis, cross-sections and differential scalar distributions in the Zee–Babu and type-II seesaw models differ at most by a normalisation factor when all theoretical inputs are the same [19]. If not stated explicitly, H±± represents any of the H_L±±, H_R±±, or k±± particles throughout the paper.
The Feynman diagram of the Drell-Yan pair production mechanism considered in this analysis is shown in Figure 1. At the LHC, other mechanisms such as vector-boson fusion, gluon-gluon fusion, and photon-initiated [20] processes are less important than Drell-Yan production [18] in the mass range of interest for this paper, which is 400 GeV to 1300 GeV.
The ATLAS Collaboration previously analysed data corresponding to an integrated luminosity of 36.1 fb⁻¹ from √s = 13 TeV collisions at the LHC during the 2015 and 2016 data-taking periods. Masses of doubly charged Higgs bosons were excluded up to 870 GeV for H_L±± and up to 760 GeV for H_R±± at 95% confidence level (CL) [21]. The CMS Collaboration performed a similar search using √s = 7 TeV collisions collected during Run 1 [22], establishing lower bounds on the H±± mass between 204 GeV and 459 GeV in the 100% branching fraction scenarios, depending on the flavour content of the two-lepton final states. A complementary analysis from the ATLAS Collaboration searched for doubly charged Higgs bosons in the H±± → W±W± decay channel using the whole 139 fb⁻¹ dataset from Run 2 of the LHC and excluded H±± masses up to 350 GeV [23].

¹ In principle, the H_R±± also couples to a Z_R boson, which makes their production cross-sections distinguishable. However, the Z_R boson is assumed to have a mass that is well beyond the energy reach of the LHC, thus making its effects negligible.
A measurement of same-charge lepton pairs is sensitive to the same-charge partial decay width of the H±± boson, given by:

Γ(H±± → ℓ±ℓ′±) = k h_ℓℓ′² m(H±±) / (16π),  (1)

with k = 2 if the two leptons have the same flavour and k = 1 if their flavours are different. The coupling h_ℓℓ′ has an upper bound that depends on the flavour combination [24,25] but it is set to an allowed value h_ℓℓ′ = 0.02 for all flavour combinations in this analysis. This choice corresponds to prompt decays of the H±± bosons and, according to Eq. (1), results in a mean proper decay length cτ ∼ 10 fm across the whole H±± mass range considered in this search. In general, there is no preference for decays into heavier leptons, as the coupling is not proportional to the lepton mass as it is for the SM Higgs boson. Therefore, the couplings of H±± to all three SM charged leptons (e, μ, τ) were assumed to be the same.
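To make the scale of Eq. (1) concrete, the width and the corresponding proper decay length can be evaluated numerically. A minimal sketch, assuming six leptonic channels with equal coupling h = 0.02 (helper names are illustrative, not from the paper):

```python
import math

HBARC_GEV_FM = 0.19733  # hbar * c in GeV * fm

def partial_width(h, mass_gev, same_flavour):
    """Gamma(H±± -> l± l'±) = k * h^2 * m(H±±) / (16 pi), as in Eq. (1),
    with k = 2 for same-flavour pairs and k = 1 otherwise."""
    k = 2.0 if same_flavour else 1.0
    return k * h * h * mass_gev / (16.0 * math.pi)

def total_leptonic_width(h, mass_gev):
    # Three same-flavour channels (ee, mumu, tautau) and three
    # mixed-flavour channels (emu, etau, mutau), all with equal coupling.
    return 3.0 * partial_width(h, mass_gev, True) + 3.0 * partial_width(h, mass_gev, False)

def proper_decay_length_fm(width_gev):
    """Mean proper decay length c*tau = hbar*c / Gamma."""
    return HBARC_GEV_FM / width_gev

for m in (400.0, 1300.0):
    print(m, proper_decay_length_fm(total_leptonic_width(0.02, m)))
```

For h = 0.02 this yields decay lengths of femtometre order over the 400-1300 GeV mass range, consistent with the prompt-decay statement above.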
The analysis presented in this paper searches for leptonic decays H±± → ℓ±ℓ± of doubly charged Higgs boson pairs, resulting in same-charge lepton pairs in final states with two, three or four leptons. Since these events are produced very rarely in proton-proton (pp) collisions by SM processes, same-charge lepton pairs represent a striking signature for BSM physics. The analysis only considers light leptons (e and μ) in the two-, three- and four-lepton final states, thereby including leptonic τ decays, using the whole 139 fb⁻¹ dataset from Run 2 of the LHC. The branching ratios to each of the possible leptonic final states are assumed to be equal, B(H±± → e±e±) = B(H±± → e±μ±) = B(H±± → μ±μ±) = B(H±± → e±τ±) = B(H±± → μ±τ±) = B(H±± → τ±τ±) = 1/6. To estimate the compatibility of the data with the SM expectations, as well as to extract lower limits on the H±± mass at 95% CL, a binned maximum-likelihood fit to data is performed in all control and signal regions, described in Section 4. A simultaneous fit to the distribution of the invariant mass of the two same-charge leptons with the highest p_T in the event is implemented in the two- and three-lepton regions, while a single-bin event yield is used in the four-lepton regions, merging the lepton-flavour channels to obtain the best sensitivity.
This paper is organised as follows. The ATLAS detector is described in Section 2, the data and simulated events used in the analysis are outlined in Section 3, and the event reconstruction procedure and the analysis strategy are detailed in Section 4. The background estimation is presented in Section 5, and the systematic uncertainties are described in Section 6. Finally, results and their statistical interpretation are presented in Section 7, followed by the conclusions in Section 8.

ATLAS detector
The ATLAS detector [27] is a multipurpose particle detector covering nearly 4π in solid angle at one of the interaction regions of the LHC. It consists of several subdetectors assembled with a cylindrical symmetry coaxial with the beam axis.² The inner detector (ID) surrounding the beam pipe is immersed in a 2 T solenoidal magnetic field and provides charged-particle tracking in the range |η| < 2.5. Starting from the beam pipe and going outwards, the ID is composed of a high-granularity silicon pixel detector that typically provides four measurements per track, the first hit normally being in the insertable B-layer installed before Run 2 [28,29], a silicon microstrip tracker, and a transition radiation tracker (TRT) that covers the region up to |η| = 2.0. The TRT also provides electron identification information based on the fraction of hits (typically 30 in total) above a higher energy-deposit threshold corresponding to transition radiation.
Outside the thin superconducting solenoid, the calorimeter system covers the pseudorapidity range |η| < 4.9. Within the region |η| < 3.2, electromagnetic calorimetry is provided by barrel and endcap high-granularity lead/liquid-argon (LAr) calorimeters, with an additional thin LAr presampler covering |η| < 1.8 to correct for energy loss in material upstream of the calorimeters. Hadron calorimetry is provided by the steel/scintillator-tile calorimeter, segmented into three barrel structures within |η| < 1.7, and two copper/LAr hadron endcap calorimeters. The solid angle coverage is completed with forward copper/LAr and tungsten/LAr calorimeter modules optimised for electromagnetic and hadronic energy measurements, respectively.
In the outer part of the ATLAS detector, the muon spectrometer (MS) covers the region |η| < 2.7 with three layers of precision tracking chambers. These consist of monitored drift tubes, except in the innermost layer at 2.0 < |η| < 2.7, where cathode-strip chambers are used to cope with the much higher rate of background particles. The muon trigger system covers the range |η| < 2.4 with resistive-plate chambers in the barrel and thin-gap chambers in the endcap regions. The MS is immersed in a magnetic field produced by three large superconducting air-core toroidal magnets with eight coils each. The field integral of the toroids ranges between 2.0 and 6.0 T·m across most of the detector.
A two-level trigger system selects events that are of interest for the ATLAS physics programme [30]. The first-level trigger is implemented in hardware and reduces the event rate to below 100 kHz. A software-based trigger further reduces this to a recorded event rate of approximately 1 kHz.
An extensive software suite [31] is used in data simulation, in the reconstruction and analysis of real and simulated data, in detector operations, and in the trigger and data acquisition systems of the experiment.

² ATLAS uses a right-handed coordinate system with its origin at the nominal interaction point (IP) in the centre of the detector and the z-axis along the beam pipe. The x-axis points from the IP to the centre of the LHC ring, and the y-axis points upwards. Cylindrical coordinates (r, φ) are used in the transverse plane, φ being the azimuthal angle around the z-axis. The pseudorapidity is defined in terms of the polar angle θ as η = −ln tan(θ/2). Angular distance is measured in units of ΔR ≡ √((Δη)² + (Δφ)²).
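The coordinate definitions in the footnote translate directly into code. A small sketch (function names are mine, not from any ATLAS software):

```python
import math

def pseudorapidity(theta):
    """eta = -ln tan(theta / 2), with theta the polar angle in radians."""
    return -math.log(math.tan(theta / 2.0))

def delta_r(eta1, phi1, eta2, phi2):
    """Angular distance dR = sqrt((d_eta)^2 + (d_phi)^2); the azimuthal
    difference is wrapped into (-pi, pi] before being used."""
    d_eta = eta1 - eta2
    d_phi = (phi1 - phi2 + math.pi) % (2.0 * math.pi) - math.pi
    return math.hypot(d_eta, d_phi)

# A track perpendicular to the beam axis has eta numerically equal to zero.
print(pseudorapidity(math.pi / 2.0))
```

Wrapping the azimuthal difference matters: two objects at φ = 0 and φ = 2π are at the same azimuth and must give ΔR = 0.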

Dataset and simulated event samples
This analysis uses data from pp collisions occurring every 25 ns at √s = 13 TeV, collected during Run 2 of the LHC from 2015 to 2018. Events selected for analysis are required to pass standard data-quality requirements [32] and amount to a total integrated luminosity of 139 fb⁻¹. The uncertainty in the combined 2015-2018 integrated luminosity is 1.7% [33], obtained using the LUCID-2 detector [34] for the primary luminosity measurements. Events were collected using two-lepton triggers that select pairs of electrons [35] or muons [36], or electron-muon combinations [35]. The transverse momentum (p_T) thresholds of the unprescaled two-lepton triggers were raised during Run 2 because of the increasing luminosity of the colliding beams but were never higher than 24 GeV (24 GeV) for the (sub-)leading electron or 22 GeV (14 GeV) for the (sub-)leading muon.
Monte Carlo (MC) methods were used to model signal and background events. The Geant4 toolkit [37] was used to simulate the response of the ATLAS detector [38]. Simulated events were reconstructed with the same algorithms as those applied to data [31] and were corrected with calibration factors to better match the performance measured in data. The various tools and set-ups used to simulate the different processes are summarised in Table 1. To minimise the impact of the theoretical uncertainties on the final result, the normalisation of the MC samples that model the dominant backgrounds is treated as a free parameter in the final likelihood fit, described in Section 7.
Signal events were generated at leading order (LO) with the LRSM package in Pythia 8.212 [39], which implements the H±± scenario described in Ref. [40]. All samples contain a mixture of H_L±± and H_R±± particles. A set of tuned parameters called the A14 tune [41] and the NNPDF2.3 parton distribution function (PDF) set [42] were used. The value of h_ℓℓ′ in Eq. (1) was set to 0.02 for each leptonic decay mode to obtain a H±± decay width that is negligible compared to the detector resolution. Consequently, the branching ratios are assumed to be equal for all possible leptonic final states. Only Drell-Yan production of the H±± particles was simulated. The H±± pair production cross-sections at √s = 13 TeV were calculated to leading-order and next-to-leading-order (NLO) accuracies [18]. The resulting NLO-to-LO K-factor was then applied to the H±± LO production cross-section, σ_LO. The cross-sections for the Zee–Babu model were calculated using the same simulation set-up at NLO accuracy [19] and agree well with the K × σ_LO value, as expected. Eleven samples were simulated with different masses of the H±± particle decaying into light leptons, and an additional 11 samples with at least one τ-lepton in the final state were produced. Signal mass points considered in the analysis range from 400 GeV to 1300 GeV in steps of 100 GeV.

Diboson and Drell-Yan background events were simulated with the Sherpa generator [43–53]. The parton shower used the set of tuned parameters developed by the Sherpa authors, and the samples were normalised to a next-to-next-to-leading-order (NNLO) prediction [54]. Similar methods were used to simulate the W + jets processes that contribute to the fake-lepton background, which is estimated in Section 5.

The NLO PowhegBox v2 [55–58] generator was used with the NNPDF3.0 [44] PDF set to model the production of tt̄ and single-top-quark Wt-channel events; the top-quark mass was set to 172.5 GeV and the h_damp parameter³ was set to 1.5 m_top [60]. For the Wt-channel single-top events, the diagram removal scheme [61] was used to remove interference and overlap with tt̄ production.
The events were interfaced to Pythia 8.230 [39] to model the parton shower, hadronisation, and underlying event, with parameters set according to the A14 tune [41] and using the NNPDF2.3 set of PDFs [42]. The decays of bottom and charm hadrons were performed by EvtGen 1.6.0 [62]. Additionally, top-quark spin correlations are preserved through the use of MadSpin [63]. The predicted tt̄ production cross-section, calculated with Top++ 2.0 [64] to NNLO in perturbative QCD and including soft-gluon resummation to next-to-next-to-leading-log order, is 830 +20/−29 (scale) ± 35 (PDF + α_s) pb. Top-quark processes (tt̄, single-top-quark, three-top, four-top and tt̄ + W/Z/H) and multiboson processes are grouped together and form the 'Other' background category. Pile-up events from additional interactions in the same or neighbouring bunch crossings were simulated with Pythia 8.186 [65] using the NNPDF2.3 PDFs and the A3 tune [66] and overlaid on the simulated hard-scatter events, which were then reweighted to match the pile-up distribution observed in data.
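The K-factor rescaling of the signal cross-sections described above is a simple multiplicative correction, and the resulting expected yield scales linearly with luminosity and selection efficiency. A hedged sketch; all numerical values below are placeholders, not the cross-sections of Ref. [18]:

```python
def k_factor(sigma_nlo_fb, sigma_lo_fb):
    """NLO-to-LO K-factor from a pair of reference calculations."""
    return sigma_nlo_fb / sigma_lo_fb

def expected_events(sigma_lo_fb, k, lumi_fb, efficiency):
    """Expected signal yield: N = K * sigma_LO * integrated luminosity * efficiency."""
    return k * sigma_lo_fb * lumi_fb * efficiency

# Placeholder inputs: a 0.10 fb LO cross-section, a 25% NLO enhancement,
# the 139 fb^-1 Run 2 luminosity and a 30% selection efficiency.
print(expected_events(0.10, 1.25, 139.0, 0.30))
```

This illustrates why the mass reach is luminosity-limited: at high m(H±±) the cross-section falls steeply, so only a handful of signal events would pass the selection.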

Event reconstruction and selection
To reduce non-collision backgrounds originating from beam-halo events and cosmic rays, events are required to have at least one reconstructed primary vertex with at least two associated tracks, each having p_T greater than 500 MeV. Among all the vertices in the event, the one with the highest sum of squared p_T of the associated tracks is identified as the primary vertex. Events that contain jets must also satisfy the quality criteria described in Ref. [67].
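The vertex choice described above can be sketched with toy data structures (a vertex is represented here simply as a list of associated-track p_T values in GeV; this is not the ATLAS event model):

```python
def select_primary_vertex(vertices):
    """Pick the vertex with the highest sum of squared track pT among
    vertices with at least two associated tracks of pT > 0.5 GeV."""
    candidates = [v for v in vertices if sum(1 for pt in v if pt > 0.5) >= 2]
    if not candidates:
        return None  # the event fails the primary-vertex requirement
    return max(candidates, key=lambda v: sum(pt * pt for pt in v))

# A soft pile-up-like vertex, a hard-scatter-like vertex, and a one-track vertex.
event = [[0.6, 0.7], [30.0, 25.0, 0.9], [40.0]]
print(select_primary_vertex(event))  # prints [30.0, 25.0, 0.9]
```

The Σp_T² criterion strongly favours the hard-scatter vertex, since a single high-p_T track outweighs many soft pile-up tracks.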

Event reconstruction
This analysis classifies leptons into two exclusive categories called tight and loose, defined specifically for each lepton flavour as described in Section 5. Leptons selected in the tight category are mostly prompt leptons, estimated using Monte Carlo samples, that originate from decays of W, Z, or H bosons, or from prompt leptonic τ decays, while loose leptons are mostly non-prompt and misidentified leptons used for the estimation of the reducible background. The lepton definitions described in the following three paragraphs correspond to the tight category, and the corresponding selection is referred to as the baseline event selection.
Electron candidates are reconstructed by using electromagnetic calorimeter and ID information to match an isolated calorimeter energy deposit to a charged-particle track. They are required to stay within the fiducial volume of the inner detector, |η| < 2.47, and to have transverse momentum p_T > 40 GeV. Moreover, electrons must pass at least the LHTight identification level, based on a multivariate likelihood discriminant [68,69]. The discriminant relies on track and calorimeter energy cluster information. Electron candidates within the transition region between the barrel and endcap electromagnetic calorimeters (1.37 < |η| < 1.52) are vetoed because of limitations in their reconstruction quality. The track associated with the electron candidate must have a transverse impact parameter d0 that, evaluated at the point of closest approach of the track to the beam axis in the transverse plane, satisfies |d0|/σ(d0) < 5, where σ(d0) is the uncertainty in d0. In addition, the longitudinal distance z0 from the primary vertex to the point where d0 is measured must satisfy |z0 sin(θ)| < 0.5 mm, where θ is the polar angle of the track. The combined identification and reconstruction efficiency for LHTight electrons varies from 58% to 88% over the transverse energy range from 4.5 GeV to 100 GeV. Electron candidates must also satisfy the FCLoose isolation requirement, which uses calorimeter-based and track-based isolation criteria and reaches an efficiency of about 99%. Electron candidates are discarded if their angular distance from a jet satisfies 0.2 < ΔR < 0.4. If ΔR(e, jet) < 0.2, the jet is discarded instead. Both the reconstruction and isolation performance are evaluated in Z → ee decay measurements described in Ref. [68].
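The fiducial and track-to-vertex requirements above reduce to a few numeric cuts. A minimal sketch with thresholds copied from the text (helper names are mine):

```python
import math

def in_electron_fiducial(eta):
    """|eta| < 2.47, excluding the barrel-endcap calorimeter
    transition region 1.37 < |eta| < 1.52."""
    a = abs(eta)
    return a < 2.47 and not (1.37 < a < 1.52)

def passes_track_to_vertex(d0_mm, sigma_d0_mm, z0_mm, theta):
    """Electron track-to-vertex association:
    |d0|/sigma(d0) < 5 and |z0 * sin(theta)| < 0.5 mm."""
    return abs(d0_mm) / sigma_d0_mm < 5.0 and abs(z0_mm * math.sin(theta)) < 0.5

print(in_electron_fiducial(1.40), passes_track_to_vertex(0.02, 0.01, 0.3, math.pi / 2.0))
```

An electron at |η| = 1.40 falls in the crack region and is vetoed even though it lies within |η| < 2.47.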
A dedicated machine-learning algorithm (BDT) is applied to reject electrons with incorrectly identified charge [68]. A selection requirement on the BDT output is chosen to achieve a rejection factor that is between 7 and 10 for electrons having a wrong charge assignment while selecting properly measured electrons with an efficiency of 97%.
Muon candidates are reconstructed by matching a track from the muon spectrometer with an inner-detector track. The candidates must have p_T > 40 GeV and |η| < 2.5, and satisfy the impact parameter requirements |d0|/σ(d0) < 3 and |z0 sin(θ)| < 0.5 mm. If a muon candidate has a p_T higher than 300 GeV, it must satisfy a special HighPt muon quality requirement; if its p_T is less than 300 GeV, the Medium quality requirement is used. The criteria associated with these selections are described in Ref. [70]. The muon candidates must also meet the track-based FixedCutTightTrackOnly isolation requirement. The full set of selections results in a reconstruction and identification efficiency of over 95% in the entire phase space, as measured in Z → μμ events [71]. If a muon and a jet featuring more than three tracks lie ΔR < 0.4 apart and the muon's transverse momentum is less than half of the jet's p_T, the muon is discarded. Furthermore, a muon candidate that deposits a sufficiently large fraction of its energy in the calorimeter is rejected if it also shares a reconstructed ID track with an electron.

Jets are reconstructed using particle-flow objects that combine tracking and calorimetric information [72]. The anti-kt algorithm [73], implemented in FastJet [74], is applied with a radius parameter R = 0.4. The measurements of jet p_T are calibrated to the particle level [75]. To help reject additional jets produced by pile-up, a 'jet vertex tagger' (JVT) algorithm [76] is used for jets with p_T < 60 GeV and |η| < 2.4. It employs a multivariate technique that relies on jet energy and tracking variables to determine the likelihood that the jet originates from the primary vertex. The medium JVT working point is used, which has an average efficiency of 92%.
Jets considered in this analysis are required to have p_T > 20 GeV and |η| < 2.5. Jets within ΔR = 0.2 of an electron are discarded. Jets within ΔR = 0.2 of a muon and featuring fewer than three tracks, or having p_T(μ)/p_T(jet) > 0.5, are also discarded. Events containing jets identified as originating from b-quarks are vetoed. Such jets are identified with a multivariate algorithm [75] which uses information about the impact parameters of inner-detector tracks matched to the jet, the presence of displaced secondary vertices, and the reconstructed flight paths of b- and c-hadrons inside the jet [77–79]. The algorithm is used at a working point that provides a b-tagging efficiency of 77%, as determined in a sample of simulated tt̄ events. This working point provides rejection factors of approximately 134, 6, and 22 for light-quark and gluon jets, c-jets, and hadronically decaying τ-leptons, respectively.
The missing transverse momentum is defined as the negative vector sum of the transverse momenta of fully reconstructed particles and its magnitude is denoted by E_T^miss. If an event has tracks that are not assigned to any of the physics objects but are matched to the primary vertex, an additional 'soft term' is added. In a high pile-up environment, this term improves the E_T^miss reconstruction. Correction factors are applied to the selected electrons, muons, and jets. For electrons and muons, these factors account for identification and selection efficiency differences between data and simulation [68,70], while for jets, they correct the jet energy scale and jet energy resolution [75].
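The E_T^miss definition, the magnitude of the negative vector sum of transverse momenta, can be written compactly. A sketch over (p_T, φ) pairs, omitting the soft term and all calibrations:

```python
import math

def missing_et(objects):
    """E_T^miss magnitude from a list of (pt, phi) tuples: the modulus
    of the negative vector sum of the objects' transverse momenta."""
    px = -sum(pt * math.cos(phi) for pt, phi in objects)
    py = -sum(pt * math.sin(phi) for pt, phi in objects)
    return math.hypot(px, py)

# Two back-to-back leptons balance each other, so the event has
# (numerically) no missing transverse momentum.
print(missing_et([(50.0, 0.0), (50.0, math.pi)]))
```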

Event selection
The baseline event selection requires at least two light leptons to be identified as tight. For this search, three distinct types of regions are defined: control regions (CR), validation regions (VR), and signal regions (SR). The normalisation factors of the dominant backgrounds are treated as free parameters in the likelihood fit and are constrained by the control regions. Validation regions are used to cross-check the fit results but are not included in the fit itself. Both the control regions and validation regions are intended to be close to the kinematic region of the expected signal but must be constructed in a way that allows only negligible signal contamination. The CRs and VRs are defined using selections that are orthogonal to the SRs. The analysis is sensitive to final states with three different lepton multiplicities. The regions are optimised to search for one same-charge lepton pair in the case of regions requiring two or three leptons, while four-lepton events are required to feature two same-charge pairs with zero total charge. The main variable used to distinguish between control, validation and signal regions is the invariant mass of the two same-charge leptons with the highest p_T in the event, m(ℓ±, ℓ′±)_lead, where ℓ, ℓ′ = e, μ. This variable is also used in the final fit to data in the two- and three-lepton regions, while a single-bin event yield is used in the four-lepton regions. A selection on the average invariant mass of the two same-charge lepton pairs, m̄ ≡ (m_ℓ+ℓ+ + m_ℓ−ℓ−)/2, is used to increase the signal significance in the four-lepton signal region. The selection criteria for each region are summarised in Table 2.

Table 2: Summary of regions defined in the analysis. The table is split into four horizontal blocks: the upper block indicates the final states for each region, the second block lists the lepton multiplicity of the region, the third block indicates the mass range of the corresponding final state, and the lower block indicates the event selection criteria for the region. The application of a selection requirement is indicated by a check-mark, or labelled as inverted when it is inverted. The three- and four-lepton regions include all light-lepton flavour combinations. No selection is applied when a dash is present in the corresponding cell. (Table body not reproduced here.)
Events with at least one b-tagged jet are vetoed in all regions to suppress background events arising from top-quark decays. In regions with more than two leptons, the so-called Z-veto condition is applied, i.e. events containing same-flavour lepton pairs with an invariant mass within 20 GeV of the Z-boson mass (71.2 GeV < m(ℓ, ℓ) < 111.2 GeV) are rejected in order to suppress events featuring a Z boson in the final state. The Z-veto is not applied in the four-lepton control and validation regions; this increases the available number of simulated diboson events.
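The Z-veto is a simple mass-window cut. A sketch in which a lepton pair is represented by its invariant mass in GeV and a same-flavour flag (the representation is mine, chosen for illustration):

```python
def fails_z_veto(pairs, window=(71.2, 111.2)):
    """Return True if any same-flavour lepton pair has an invariant mass
    inside the +-20 GeV window around the Z-boson mass, in which case
    the event is rejected."""
    lo, hi = window
    return any(same_flavour and lo < mass < hi for mass, same_flavour in pairs)

print(fails_z_veto([(90.0, True), (150.0, True)]))   # rejected: on-Z same-flavour pair
print(fails_z_veto([(90.0, False), (150.0, True)]))  # kept: no same-flavour on-Z pair
```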

Signal regions
The signal regions, independently of the lepton multiplicity and flavour combination, require the invariant mass of the same-charge leading lepton pair to be above 300 GeV. To maximise the sensitivity, additional requirements are imposed on same-charge lepton pairs, regardless of the flavour. These exploit both the boosted decay topology of the H±± resonance and the high energy of the decay products. The same-charge lepton angular separation in the two-lepton signal region (SR2L) is required to be ΔR(ℓ±, ℓ±) < 3.5. Furthermore, the vector sum of the two leading leptons' transverse momenta, p_T(ℓ±, ℓ±)_lead, must exceed 300 GeV in both the two-lepton and three-lepton (SR3L) signal regions. In the four-lepton signal region (SR4L), the signal significance is increased by requiring the average invariant mass of the two same-charge lepton pairs to satisfy m̄ = (m_ℓ+ℓ+ + m_ℓ−ℓ−)/2 > 300 GeV.
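The discriminating variables above (pair invariant mass, angular separation, and the vectorial p_T sum) can all be computed from lepton four-momenta. A self-contained sketch for massless leptons parameterised by (p_T, η, φ); m̄ is then the mean of pair_mass over the positive and negative same-charge pairs:

```python
import math

def four_momentum(pt, eta, phi):
    """Massless four-vector (E, px, py, pz) from transverse momentum,
    pseudorapidity and azimuth."""
    px, py = pt * math.cos(phi), pt * math.sin(phi)
    pz = pt * math.sinh(eta)
    return (pt * math.cosh(eta), px, py, pz)

def pair_mass(a, b):
    """Invariant mass of a two-lepton system."""
    e, px, py, pz = (a[i] + b[i] for i in range(4))
    return math.sqrt(max(e * e - px * px - py * py - pz * pz, 0.0))

def pair_pt(a, b):
    """Magnitude of the vectorial sum of the two transverse momenta."""
    return math.hypot(a[1] + b[1], a[2] + b[2])

# Two 500 GeV leptons, back to back in phi: large pair mass, vanishing pair pT.
l1 = four_momentum(500.0, 0.0, 0.0)
l2 = four_momentum(500.0, 0.0, math.pi)
print(pair_mass(l1, l2), pair_pt(l1, l2))
```

Note how the SR2L cuts work together: a heavy resonance gives a large pair mass, while its boost keeps the decay leptons within ΔR < 3.5 and gives the pair a large vectorial p_T.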
The efficiency for signal events to pass the signal region selections depends on the signal mass and on the flavour combination of the leptons in the final states. For the benchmark H±± masses, the efficiencies range between 10% and 20% in the two-lepton signal regions, while the typical efficiencies in the three- and four-lepton signal regions vary from 30% to 40%. Generally, the efficiencies increase with the H±± mass. In the two-lepton regions, the efficiencies additionally differ among the ee, eμ and μμ channels, an effect of the two-lepton triggers used in the search.
The following control and validation regions either span a lower m(ℓ±, ℓ±)_lead interval or require other orthogonal selections if the same m(ℓ±, ℓ±)_lead window as in the signal regions is used.

Control regions
Firstly, the two-lepton diboson control region (DBCR2L) is used to constrain the dominant diboson contribution in the two-lepton final states and spans the range m(ℓ±, ℓ±)_lead ∈ [200, 300) GeV. A significant Drell-Yan event yield was observed in the ee channel, so additional selections on E_T^miss and on the pseudorapidity of the same-charge lepton pair, |η(ℓ±, ℓ±)|, are applied to specifically suppress the Drell-Yan background. The Drell-Yan background is further constrained by the opposite-charge Drell-Yan control region (DYCR), where exactly two electrons with opposite charges are required. Since it is designed to target only opposite-charge electron pairs, the invariant mass of the opposite-charge electron pair is required to be m(e±, e∓)_lead > 300 GeV and is also used as the fit variable. Then, the three-lepton diboson control region (DBCR3L) is used to constrain the diboson background yield, independently of the flavour combination. Since the m(ℓ±, ℓ±)_lead requirement is the same as in the corresponding three-lepton signal region, at least one Z-boson candidate is required in order to achieve orthogonality. Finally, the four-lepton control region (CR4L) is used to constrain the yield of the dominant diboson background in the four-lepton regions, where m(ℓ±, ℓ±)_lead is restricted to be between 100 GeV and 200 GeV. The signal contamination in all control regions is negligible.

Validation regions
The two-lepton validation region (VR2L) is used to validate the data-driven fake-lepton background estimation and to assess the diboson modelling in the two-lepton channel. The m(ℓ±, ℓ±)_lead selection is the same as in the SR2L, so an inverted p_T(ℓ±, ℓ±)_lead cut with respect to the corresponding signal region is required. Similarly to the DBCR2L, additional requirements on E_T^miss and |η(ℓ±, ℓ±)| are imposed only in the ee channel. The three-lepton validation region (VR3L) is used to validate the diboson background and the fake-lepton background with three reconstructed leptons in the final state. The m(ℓ±, ℓ±)_lead value is required to be within the interval [100, 300) GeV. Additionally, a Z-veto condition is applied. The four-lepton validation region (VR4L) is used to validate the diboson modelling in the four-lepton region and is defined by the requirement 200 GeV < m(ℓ±, ℓ±)_lead < 300 GeV. The signal yield does not exceed a few per cent of the background yield in any validation region.

Background composition and estimation
In this section, the background estimation techniques are described. The different lepton multiplicity and flavour channels in the analysis have different background compositions and thus require different treatments. Backgrounds are categorised into irreducible and reducible types, identified by their source: the former are derived from MC simulation and the latter with data-driven methods.
Irreducible background sources are SM processes producing the same prompt final-state lepton pairs as the signal, with the dominant contributions in this analysis coming from diboson production. Irreducible prompt SM backgrounds in all regions are estimated using the simulated samples listed in Section 3.
Reducible background sources are processes where reconstructed leptons originate from misreconstructed objects such as jets, from light- or heavy-quark decays or, in the electron case, from photon conversions. These types of events thus contain at least one non-prompt lepton. Events containing leptons whose charges were incorrectly assigned also enter the reducible background category. An example of this type of background is the Drell-Yan background, whose contribution is normalised in the DYCR, then reweighted by the charge misidentification probability and finally used to predict yields in the same-charge selections.
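As an illustration of the charge-flip reweighting described above, the following sketch weights each opposite-charge electron pair by the probability that exactly one of the two charges was misreconstructed. The misidentification probabilities and event kinematics are purely hypothetical; this is not ATLAS code.

```python
# Each opposite-charge e+e- event is weighted by the probability that exactly
# one of the two electron charges was flipped:
#   w = eps1 * (1 - eps2) + (1 - eps1) * eps2,
# where eps_i is the charge-misidentification probability of electron i,
# here parameterised only by |eta| for simplicity (toy numbers).

def flip_prob(abs_eta):
    """Toy charge-misidentification probability, rising with |eta|."""
    return 1e-4 if abs_eta < 1.37 else 1e-2

def same_charge_weight(eta1, eta2):
    e1, e2 = flip_prob(abs(eta1)), flip_prob(abs(eta2))
    return e1 * (1 - e2) + (1 - e1) * e2

# Predicted same-charge yield from an opposite-charge selection
# (list of hypothetical (eta1, eta2) pairs):
opposite_charge_events = [(0.5, 2.0), (1.1, 0.3), (2.2, 2.3)]
predicted_ss = sum(same_charge_weight(a, b) for a, b in opposite_charge_events)
```

In practice the measured probabilities are applied as per-event weights to the simulated opposite-charge Drell-Yan sample, with the corresponding scale factors described in the next section.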
To avoid overlap between the irreducible backgrounds estimated using MC simulation and the data-driven, reducible backgrounds, events from the MC background samples are considered only if their reconstructed leptons can be matched to prompt generator-level counterparts. Electrons are the only significant source of charge misidentification.

Electron charge misidentification
Electron charge misidentification is caused mainly by bremsstrahlung emission from the electrons as they propagate through the detector material. The emitted photon can either convert to an electron-positron pair or traverse the inner detector without creating any track. In the first case, the calorimeter energy cluster corresponding to the initial electron can be matched to the wrong-charge track. In the case of photon emission without subsequent pair production, the electron track usually has only very few hits in the silicon pixel layers, while other hits are from unknown sources. This can lead to the wrong charge assignment from the track curvature, but a correct estimate of the electron energy, since its estimation relies mostly on the calorimeter.
The reducible background due to charge misidentification is estimated in the same-charge analysis channels that contain electrons. The modelling of charge misidentification in the Geant4 simulations deviates from data because of the complex processes involved, which depend sensitively on the arrangement of the material in the detector. Consequently, charge reconstruction correction factors are derived by comparing the charge misidentification probability measured in Z → ee data with the one in simulation. The charge misidentification probability is extracted by performing a likelihood fit to a dedicated Z → ee data sample, as described in Ref. [68]. These scale factors are then applied to the simulated background events to compensate for the differences.

Fake/non-prompt lepton background
The non-prompt lepton background refers to a reducible background that originates from secondary decays of light- or heavy-flavour mesons into light leptons, which are usually surrounded by jets. Another source of reducible background (fakes) arises from other physics objects misidentified as electrons or muons. Collectively, such events are referred to as the fake/non-prompt (FNP) background. The b-jet veto significantly reduces the number of FNP leptons from heavy-flavour decays. Most of the FNP leptons still passing the analysis selection originate from in-flight decays of mesons inside jets, jets misreconstructed as electrons, and conversions of initial- and final-state radiation photons. MC samples are not used to estimate these background sources because the simulation of jet production and hadronisation has large intrinsic uncertainties. Instead, the FNP background is estimated with a data-driven approach, the so-called fake-factor method described in Ref. [81]. This method relies on determining the probability, or fake factor (F_ℓ), for an FNP lepton to be identified as a prompt lepton satisfying the default lepton selection. Electrons that pass the LHLoose identification requirements [68,69] but fail either the LHTight identification or the isolation requirements are referred to as loose. Muons, on the other hand, must pass the Medium identification requirements [70] but fail isolation to enter the loose category. Electron and muon fake factors, F_ℓ, are then defined as the ratio of the number of tight leptons to the number of loose leptons and are parameterised as functions of p_T, E_T^miss, and η. The FNP lepton background (containing at least one fake lepton) is estimated in the SR by applying F_ℓ as a normalisation correction to the relevant distributions in a region which has the same selection criteria as the SR, except that at least one of the two leptons must pass the loose selection but fail the tight one.
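The bookkeeping of the fake-factor method can be sketched as follows, with entirely illustrative yields (the real measurement is binned in the lepton kinematics, as described above):

```python
# Minimal sketch of the fake-factor method (toy numbers, not ATLAS values).
# F_l is measured in a fake-enriched region as N_tight / N_loose, where
# "loose" leptons pass relaxed identification but fail the tight or
# isolation criteria.

def fake_factor(n_tight, n_loose):
    return n_tight / n_loose

# Hypothetical counts in a fake-enriched measurement region:
F = fake_factor(n_tight=120.0, n_loose=600.0)

# The FNP background in the signal region is the yield of events with at
# least one loose-not-tight lepton, weighted by F:
n_loose_sr = 35.0
fnp_estimate = F * n_loose_sr
```

In the actual analysis F_ℓ is applied bin-by-bin in p_T, E_T^miss, and η rather than as a single global factor.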
The fake factor is measured in dedicated fake-enriched regions, reported in Table 3, where events must pass prescaled supporting single-lepton triggers without isolation criteria, contain no b-jets, and have exactly one reconstructed lepton that satisfies either the tight selection criteria or the relaxed loose selection. No additional requirements are used in the electron channel. In the muon fake-enriched region, exactly one jet with p_T > 35 GeV and one reconstructed muon are required to be back-to-back, which is achieved by requiring Δφ(μ, jet) > 2.7. In addition, W → μν events are rejected by requiring E_T^miss < 40 GeV. Most of the selected events are dijet events, while the majority of prompt leptons originate from the W + jets process.

The fake-factor method relies on the assumption that no prompt leptons appear in the fake-enriched samples, which is not fully correct given the imposed selections. Therefore, the number of residual prompt leptons in the fake-enriched regions is estimated using the Monte Carlo samples described in Section 3 and subtracted from the numbers of tight and loose leptons used to measure the fake factors.
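The prompt-lepton subtraction described above amounts to correcting both counts before forming the ratio; a minimal sketch with hypothetical numbers:

```python
# Residual prompt leptons in the fake-enriched region, estimated from MC,
# are subtracted from both the tight and loose counts before forming the
# fake factor (all yields here are illustrative).

def corrected_fake_factor(n_tight_data, n_loose_data,
                          n_tight_prompt_mc, n_loose_prompt_mc):
    return ((n_tight_data - n_tight_prompt_mc)
            / (n_loose_data - n_loose_prompt_mc))

F = corrected_fake_factor(n_tight_data=130.0, n_loose_data=620.0,
                          n_tight_prompt_mc=10.0, n_loose_prompt_mc=20.0)
```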
Dedicated two-, three-, and four-lepton validation regions, defined in Table 2, are used to validate both the data-driven FNP lepton estimation and the modelling of the dominant diboson background in regions as similar to the signal regions as possible. Figure 2 and Figure 3 present the m(ℓ±, ℓ±)_lead distributions (ℓ, ℓ′ = e, μ) in the validation regions, which are sensitive to different background sources. A simultaneous background-only fit to data in all CRs was performed with the diboson and Drell-Yan background normalisations treated as free parameters and with nuisance parameters corresponding to the systematic uncertainties listed in Section 6 included. Good background modelling is observed in all these regions.

Systematic uncertainties
Several sources of systematic uncertainty are accounted for in the analysis. These correspond to experimental and theoretical sources affecting both the background and signal predictions. All considered sources of systematic uncertainty affect the total event yield, and all, except the luminosity and cross-section uncertainties, also affect the distributions of the variables used in the fit (Section 7).
The cross-sections used to normalise the simulated samples are varied to account for the scale and PDF uncertainties in the cross-section calculation. The variation is 6% for diboson production [82], 13% for tt̄W production, 12% for tt̄Z production, and 8% for tt̄H production [83]. Since the yield of the diboson background is derived from the likelihood fit to the data, these systematic variations contribute by changing the shapes and the relative normalisation of the background predictions used in the likelihood fit in the CRs and SRs. The theoretical uncertainty in the Drell-Yan background is estimated from PDF eigenvector variations of the nominal PDF set, together with variations of the PDF scale, α_s, electroweak corrections, and photon-induced corrections. For Sherpa-simulated processes (the Drell-Yan, diboson and multiboson processes listed in Table 1), uncertainties from missing higher orders were evaluated [84] using seven variations of the QCD factorisation and renormalisation scales in the matrix elements by factors of 0.5 and 2, avoiding variations in opposite directions. The effect of the PDF choice is considered using the PDF4LHC prescription [85]. Uncertainties in the nominal PDF set were evaluated using 100 replica variations. Additionally, the results were cross-checked using the central values of the CT14 [86] and MMHT2014 [87] PDF sets. The effect of the uncertainty in the strong coupling constant, α_s, was assessed by variations of ±0.001. For the tt̄ and single-top-quark samples, the uncertainties in the cross-section due to the PDF and α_s were calculated using the PDF4LHC15 prescription [85] with the MSTW2008 [88,89], CT10 [90,91] and NNPDF2.3 [42] PDF sets in the five-flavour scheme, and were added in quadrature to the effect of the factorisation scale uncertainty.
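The seven-point scale variation described above can be sketched as the envelope of the varied predictions; the yields below are toy numbers, and the seven (μ_R, μ_F) pairs are the standard combinations excluding the two opposite-direction ones:

```python
# Seven-point QCD scale variation: (mu_R, mu_F) varied by factors of 0.5
# and 2, excluding the opposite-direction pairs (0.5, 2) and (2, 0.5).
# The uncertainty is taken as the envelope of deviations from nominal.

SEVEN_POINT = [(1.0, 1.0), (2.0, 1.0), (0.5, 1.0), (1.0, 2.0),
               (1.0, 0.5), (2.0, 2.0), (0.5, 0.5)]

def envelope(yields):
    """yields: mapping (mu_R, mu_F) -> predicted yield."""
    nominal = yields[(1.0, 1.0)]
    deviations = [abs(yields[v] - nominal) for v in SEVEN_POINT]
    return nominal, max(deviations)

# Toy predicted yields for each scale choice:
toy_yields = {(1.0, 1.0): 100.0, (2.0, 1.0): 96.0, (0.5, 1.0): 105.0,
              (1.0, 2.0): 98.0, (1.0, 0.5): 103.0, (2.0, 2.0): 94.0,
              (0.5, 0.5): 107.0}
nominal, unc = envelope(toy_yields)
```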
For tt̄ production and processes producing three or more top quarks, the uncertainty due to initial-state radiation (ISR) was estimated by comparing the nominal event sample with two samples in which the up/down variations of the A14 tune were employed. The theoretical uncertainty of the NLO cross-section for pp → H++H−− is reported to range from a few per cent at low H±± masses to approximately 25% [18,19] for the highest signal mass points studied in the analysis. It is not included in the fit as a nuisance parameter but is drawn as an uncertainty band around the theoretical curves in the exclusion limit plots in Figures 8 and 9. The uncertainty in the signal cross-section includes the renormalisation and factorisation scale dependence and the uncertainty in the parton densities. The theoretical uncertainty in the pp → H++H−− simulation is assessed by varying the A14 parameter set in Pythia 8.186 and choosing CTEQ6L1 and CT09MC1 [92] as alternative PDFs. The impact on the signal acceptance is found to be negligible. These theoretical uncertainties are considered fully correlated across the various analysis regions.
A significant contribution to the total uncertainty arises from the statistical uncertainty of the MC samples in all analysis regions. Analysis regions have very restrictive selections, and only a small fraction of the initially generated MC events remains after applying all requirements. The statistical uncertainty varies from 5% to 40% depending on the signal region.
Experimental systematic uncertainties due to differences between data and MC simulation in the lepton reconstruction, identification, isolation, and trigger efficiencies are estimated by varying the corresponding scale factors. The impact of the scale-factor systematic variations on the background yield estimation is at most 3% and is less significant than the other systematic uncertainties and the MC statistical uncertainties. The same is true for the lepton energy or momentum calibration. Experimental systematic uncertainties due to different reconstruction and b-tagging efficiencies for reconstructed jets in data and simulation are taken into account. Uncertainties in the absolute jet energy scale and resolution, measured in multi-jet, Z + jets, and γ + jet events, are considered and are estimated to be less than 2% across the p_T(jet) range of interest [75,80]. The b-tagging efficiencies are measured in two-lepton tt̄ events, with uncertainties estimated to range from 8% at low momentum to 1% at high momentum [77]. Furthermore, the E_T^miss measurement uncertainties are estimated by comparing data with simulation in Z → μμ events without jets, as described in Ref. [80]. The uncertainty in the pile-up simulation, derived from a comparison of data with simulation, is also taken into account [76].
The experimental uncertainty related to electron charge misidentification arises from the statistical uncertainty of both the data and the simulated sample of Z/γ* → ee events used to measure this probability. The charge misidentification probability increases from ∼10^−4 to ∼0.1 with increasing electron p_T and |η|. The systematic effects obtained by altering the selection requirements imposed on the invariant mass used to select Z/γ* → ee events are found to be negligible compared to the statistical uncertainty.
The experimental systematic uncertainty in the data-driven estimate of the FNP lepton background is evaluated by varying the nominal fake factor to account for different effects. The E_T^miss requirement is altered to account for variations in the W + jets composition. The flavour composition of the fakes is investigated by requiring an additional recoiling jet in the electron channel and by changing the definition of the recoiling jet in the muon channel. Furthermore, the transverse impact parameter criterion for tight muons (defined in Section 4.1) is varied by one standard deviation. Finally, in the fake-enriched regions, the normalisation of the subtracted prompt-lepton contribution is altered within its uncertainties. This variation accounts for uncertainties related to the luminosity, the cross-sections, and the corrections applied to the simulation-based predictions. The FNP systematic uncertainties are correlated across all the different regions and bins, while the statistical uncertainties in each bin are uncorrelated. The statistical uncertainty of the fake factors is added in quadrature to the total systematic uncertainty. The total uncertainty of the FNP background yield varies between 10% and 20%.
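A minimal sketch of the quadrature combination of the fake-factor uncertainty components described above; all relative uncertainties here are hypothetical placeholders, not the analysis' measured values:

```python
import math

def total_relative_uncertainty(components):
    """Combine independent relative uncertainties in quadrature."""
    return math.sqrt(sum(c * c for c in components))

# Hypothetical components: ETmiss variation, flavour-composition variation,
# prompt-subtraction variation, and the fake-factor statistical uncertainty.
total = total_relative_uncertainty([0.08, 0.06, 0.05, 0.09])
```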
The total relative background systematic uncertainty after the background-only fit (Section 7), and its breakdown into components, is presented in Figure 4.

Statistical analysis and results
The HistFitter [93] statistical analysis package is used to implement the maximum-likelihood fit. The fit considers the invariant mass distribution of the leading lepton pair, m(ℓ±, ℓ±)_lead, where ℓ, ℓ′ = e, μ, in all two- and three-lepton control and signal regions. In the four-lepton CRs and SRs, the single-bin event yield is used to obtain the numbers of signal and background events. The binning is chosen to optimise the expected sensitivity to the signal model while keeping the statistical uncertainty in each bin low. The likelihood is the product of a Poisson probability density function describing the observed number of events and Gaussian distributions constraining the nuisance parameters associated with the systematic uncertainties. The widths of the Gaussian distributions correspond to the magnitudes of these uncertainties, whereas Poisson distributions are used for the MC simulation statistical uncertainties. Furthermore, additional free parameters are introduced for the Drell-Yan and diboson background contributions to fit their yields in the corresponding control regions. These free parameters are then used to normalise the relevant backgrounds in the signal regions. Fitting the yields of the largest backgrounds reduces the systematic uncertainty in the predicted yield from SM sources. The fitted normalisations are compatible with their SM predictions within the uncertainties. The diboson yield is described by three free parameters, one each for the two-, three- and four-lepton multiplicity channels. For the scenario in which the H±± particle decays into the different possible lepton-flavour combinations with equal probability, a 95% CL upper limit was set on the pp → H++H−− cross-section using the CLs method [94].
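Schematically, and with generic symbols that are not necessarily the paper's own notation, the likelihood described above takes the form:

```latex
\mathcal{L}(\mu, \boldsymbol{\lambda}, \boldsymbol{\theta}) =
\prod_{i \in \mathrm{bins}}
  \mathrm{Pois}\!\left( n_i \;\middle|\; \mu\, s_i(\boldsymbol{\theta})
    + \sum_{b} \lambda_b\, B_{b,i}(\boldsymbol{\theta}) \right)
\times \prod_{j} G\!\left(\theta_j\right)
```

where μ is the signal-strength parameter, λ_b are the free normalisation parameters of the Drell-Yan and diboson backgrounds, and G(θ_j) are the Gaussian constraint terms for the nuisance parameters θ_j; the MC statistical uncertainties enter through analogous Poisson constraint terms.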

Fit results
The observed and expected yields in all control, validation, and signal regions used in the analysis are presented in Figure 5 and summarised in Tables 4–6. Here 'pre-fit' denotes the nominal simulated MC yields and 'post-fit' denotes the simulated yields scaled with the normalisation parameters obtained from the likelihood fit to the two-, three-, and four-lepton control and signal regions. In general, good agreement between data and the SM predictions, within statistical and systematic uncertainties, is observed in the various regions, demonstrating the validity of the background estimation procedure. No significant excess is observed in any of the signal regions. Correlations between the various sources of uncertainty are evaluated and used to estimate the total uncertainty in the SM background prediction. Some of the uncertainties, particularly those connected with the normalisation of the background contributions and the FNP background, are anti-correlated. The m(ℓ±, ℓ±)_lead distributions (ℓ, ℓ′ = e, μ) of the two-lepton signal regions are presented in Figure 6, and those of the three- and four-lepton signal regions are presented in Figure 7. In the four-lepton signal region, no data events are observed, consistent with the expected yield.
The signal regions were designed to fully exploit the pair production of the H±± boson and its boosted topology by applying selections that target same-charge high-p_T leptons. After a background-only likelihood fit, the Drell-Yan normalisation factor is found to be 1.13 ± 0.04, and the two-, three- and four-lepton channel diboson normalisation factors are 1.10 ± 0.06, 0.92 ± 0.05, and 1.08 ± 0.11, respectively. Figure 8 shows the upper limit on the cross-section as a function of the H±± boson mass, assuming that decays into each leptonic final state are equally probable. Since the yields in some regions are very small, the asymptotic approximation [95] cannot be used reliably, so 10^5 pseudo-experiments were generated instead to obtain the final limits.
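The toy-based limit-setting logic can be illustrated for a single-bin counting experiment; this is a deliberately simplified stand-in for the full profile-likelihood fit, with the observed event count as the test statistic and purely illustrative numbers:

```python
import math
import random

random.seed(1)

def poisson(mean):
    """Simple Poisson generator (Knuth's algorithm), adequate for small means."""
    limit, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

def cls(n_obs, b, s, n_toys=20000):
    """CLs for a single-bin counting experiment."""
    # p-values: fraction of pseudo-experiments at least as signal-depleted as
    # the data (for an upper limit, small counts form the relevant tail).
    p_sb = sum(poisson(b + s) <= n_obs for _ in range(n_toys)) / n_toys
    p_b = sum(poisson(b) <= n_obs for _ in range(n_toys)) / n_toys
    return p_sb / p_b  # signal hypothesis excluded at 95% CL if CLs < 0.05
```

For example, with zero observed events and 0.5 expected background events, a signal expectation of 3.5 events is excluded at 95% CL, while one of 0.5 events is not.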
The observed lower limits on the H±± mass within LRSMs (the Zee-Babu model) vary from 520 GeV to 1050 GeV (from 410 GeV to 880 GeV), depending on the lepton-multiplicity channel, assuming Σ_ℓℓ′ B(H±± → ℓ±ℓ′±) = 100%. The observed lower limit on the mass reaches 1080 GeV and 900 GeV when combining all three channels, for LRSMs and the Zee-Babu model respectively. The expected exclusion limit is 1065 +30/−50 GeV for LRSMs and 880 +30/−40 GeV for the Zee-Babu model, where the uncertainties of the limit are extracted from the ±1σ band. The limit obtained from the four-lepton final state is the strongest and drives the combined result. A comparison between the various limits obtained from this measurement is presented in Figure 9.

Table 4: The number of predicted background events in control regions after the background-only fit, compared with the number of events observed in data. Uncertainties correspond to the uncertainties in the predicted event yields, and their total is smaller than the sum of the components in quadrature due to correlations between these components. Due to rounding, the totals can differ from the sums of the components. FNP refers to the fake/non-prompt lepton background. Backgrounds from top-quark and multiboson processes are merged, forming the 'Other' category. Background processes with a negligible yield are marked with a dash (-).

Table 5: The number of predicted background events in validation regions after the background-only fit, compared with the number of events observed in data. Uncertainties correspond to the uncertainties in the predicted event yields, and their total is smaller than the sum of the components in quadrature due to correlations between these components. Due to rounding, the totals can differ from the sums of the components. FNP refers to the fake/non-prompt lepton background. Backgrounds from top-quark and multiboson processes are merged, forming the 'Other' category. Background processes with a negligible yield are marked with a dash (-).

Table 6: The number of predicted background events in signal regions after the background-only fit, compared with the number of events observed in data. Uncertainties correspond to the uncertainties in the predicted event yields, and their total is smaller than the sum of the components in quadrature due to correlations between these components. Due to rounding, the totals can differ from the sums of the components. FNP refers to the fake/non-prompt lepton background. Backgrounds from top-quark and multiboson processes are merged, forming the 'Other' category. Background processes with a negligible yield are marked with a dash (-).

Figure 6: Distributions of m(ℓ±, ℓ±)_lead in the signal regions, namely (a) the electron-electron two-lepton signal region (SR2L), (b) the electron-muon two-lepton signal region (SR2L) and (c) the muon-muon two-lepton signal region (SR2L). The background expectation is the result of the background-only fit described in the text. The hatched bands include all post-fit systematic uncertainties, with the correlations between the various sources taken into account. The solid coloured lines correspond to signal samples, normalised using the theory cross-section, with the H±± mass indicated in the legend. The ×50 in the legend indicates the scaling of the signal yield to make it clearly visible in the plots. The error bars show statistical uncertainties. Backgrounds from top-quark and multiboson processes are merged, forming the 'Other' category. The last bin also includes any overflow entries. The lower panels show the ratio of the observed data to the estimated SM background. The binning presented in the figures is used in the fit.

Figure 8: Observed and expected 95% CL upper limits on the H±± pair production cross-section as a function of m(H±±), resulting from the combination of all analysis channels, assuming Σ_ℓℓ′ B(H±± → ℓ±ℓ′±) = 100%, where ℓ, ℓ′ = e, μ, τ. The surrounding green and yellow bands correspond to the ±1 and ±2 standard deviation (±1σ and ±2σ) uncertainties around the combined expected limit, respectively, as estimated using the frequentist approach, in which toy experiments based on both the background-only and signal+background hypotheses are generated. The theoretical signal cross-section predictions, given by the NLO calculation [18,19], are shown as blue, orange and red lines for the left-handed H±±, the right-handed H±± (which is the same as the Zee-Babu H±±), and the sum of the two LRSM chiralities, respectively, with the corresponding uncertainty bands.

Figure 9: Observed 95% CL upper limits on the H±± pair production cross-section as a function of m(H±±), assuming Σ_ℓℓ′ B(H±± → ℓ±ℓ′±) = 100%, where ℓ, ℓ′ = e, μ, τ. The dashed blue, green, and purple lines indicate the observed limits using the two-, three-, and four-lepton exclusive final states, respectively. The limit obtained from the four-lepton final state is the strongest and drives the combined result. The black lines show the combined observed limit obtained using the frequentist approach for a fit with only statistical uncertainties (dotted) and for a fit with statistical and systematic uncertainties (solid). The grey line shows the limit obtained using the asymptotic approximation [95], and the cyan dashed line shows the combined observed limit obtained from the analysis of the first 36.1 fb−1 of Run 2 data [21]. The theoretical signal cross-section predictions, given by the NLO calculation [18,19], are shown as blue, orange and red lines for the left-handed H±±, the right-handed H±± (which is the same as the Zee-Babu H±±), and the sum of the two LRSM chiralities, respectively, with the corresponding uncertainty bands.

Conclusion
The ATLAS detector at the Large Hadron Collider was used to search for doubly charged Higgs bosons in the same-charge two-lepton invariant mass spectrum, using e±e±, e±μ± and μ±μ± final states as well as final states with three or four leptons (electrons and/or muons). The search was performed with 139 fb−1 of data from proton-proton collisions at √s = 13 TeV, recorded during the Run 2 data-taking period from 2015 to 2018. No significant excess above the Standard Model prediction was found. Lower limits are set on the mass of doubly charged Higgs bosons in the context of the left-right symmetric type-II seesaw and Zee-Babu models. These vary between 520 GeV and 1050 GeV for LRSMs and between 410 GeV and 880 GeV for the Zee-Babu model, depending on the lepton-multiplicity channel, assuming that Σ_ℓℓ′ B(H±± → ℓ±ℓ′±) = 100% and that decays to each of the ee, eμ, eτ, μμ, μτ and ττ final states are equally probable. The observed combined lower limit on the H±± mass is 1080 GeV within LRSMs and 900 GeV within the Zee-Babu model.