Measurements of $W^+W^-+\ge 1~$jet production cross-sections in $pp$ collisions at $\sqrt{s}=13~$TeV with the ATLAS detector

Fiducial and differential measurements of $W^+W^-$ production in events with at least one hadronic jet are presented. These cross-section measurements are sensitive to the properties of electroweak-boson self-interactions and provide a test of perturbative quantum chromodynamics and the electroweak theory. The analysis is performed using proton$-$proton collision data collected at $\sqrt{s}=13~$TeV with the ATLAS experiment, corresponding to an integrated luminosity of 139$~$fb$^{-1}$. Events are selected with exactly one oppositely charged electron$-$muon pair and at least one hadronic jet with a transverse momentum of $p_{\mathrm{T}}>30~$GeV and a pseudorapidity of $|\eta|<4.5$. After subtracting the background contributions and correcting for detector effects, the jet-inclusive $W^+W^-+\ge 1~$jet fiducial cross-section and $W^+W^-+$ jets differential cross-sections with respect to several kinematic variables are measured, thus probing a previously unexplored event topology at the LHC. These measurements include leptonic quantities, such as the lepton transverse momenta and the transverse mass of the $W^+W^-$ system, as well as jet-related observables such as the leading jet transverse momentum and the jet multiplicity. Limits on anomalous triple-gauge-boson couplings are obtained in a phase space where interference between the Standard Model amplitude and the anomalous amplitude is enhanced.


Introduction
The measurement of $W$-boson pair ($W^+W^-$, denoted $WW$) production cross-sections is an important test of the Standard Model (SM). $WW$ production at hadron colliders is sensitive to the properties of electroweak-boson self-interactions and provides a test of perturbative quantum chromodynamics (QCD) and the electroweak (EW) theory. It also constitutes a large background in the measurement of Higgs boson production as well as in searches for physics beyond the SM. Inclusive and fiducial $WW$ production cross-sections have been measured in proton-proton ($pp$) collisions at $\sqrt{s} = 7$ TeV [1, 2], 8 TeV [3-5] and 13 TeV [6-8], as well as in $e^+e^-$ collisions at LEP [9] and in $p\bar{p}$ collisions at the Tevatron [10-12]. However, to reduce backgrounds, the measurements of inclusive cross-sections typically require that the $WW$ pair is produced without additional jet activity, or at most with one additional jet. The production of $WW$+jets has therefore not been studied in detail.

Figure 1: Feynman diagrams for the production of a $W^+W^-$ boson pair in association with a jet.
This article presents measurements of fiducial and differential cross-sections for a $W^+W^-$ pair produced in association with one or more jets. For the first time at the LHC, differential measurements are performed in a jet-inclusive phase space. This measurement complements previous results, as the combination of measurements with and without jets improves the precision of the inclusive cross-section due to an anti-correlation of important systematic uncertainties, for example the jet energy scale uncertainty, as demonstrated in previous measurements from ATLAS [5] and CMS [8].
The analysis of one-jet topologies can also improve searches for anomalous triple-gauge-boson couplings (aTGCs), owing to the increased interference between the SM amplitude and the anomalous amplitude [13]. The impact of the aTGC operator, as defined in Ref. [14], increases rapidly with energy, making a measurement at the energies probed by the LHC important. However, at high centre-of-mass energy, the SM amplitude and the anomalous amplitude are dominated by different helicity configurations, so their interference is suppressed, which reduces the impact of the operator. The reduced sensitivity to the interference also poses a problem for the validity of the effective field theory (EFT) interpretation, as contributions that are quadratic in the dimension-six amplitude, which are expected to be subdominant in the EFT expansion, become large. Requiring hard jets in addition to the diboson pair allows different helicity configurations and thus reduces the interference suppression [13].
In $pp$ collisions, two leading processes contribute to $WW$ production: $q\bar{q} \to WW$ in the $s$- and $t$-channels, and the loop-induced gluon-gluon fusion process $gg \to WW$. Beyond leading order in perturbation theory, and in particular for $WW$+jets production, additional partonic initial states can contribute to both processes. Representative diagrams for $WW$+jet production are shown in Figure 1. In this analysis, the resonant $gg \to H \to WW$ production is included in the signal definition and simulation, although this process is strongly suppressed by the kinematic selection requirements.
The measurement of $WW \to e^\pm\nu\mu^\mp\nu$ production cross-sections at $\sqrt{s} = 13$ TeV is performed using $pp$ collision data recorded by the ATLAS experiment in 2015-2018, corresponding to an integrated luminosity of 139 fb$^{-1}$. The number of events due to top-quark pair production ($t\bar{t}$), the largest background for this measurement, is reduced by rejecting events containing jets from $b$-hadron decays ($b$-jets). However, the $t\bar{t}$ background is still sizeable due to the requirement that events contain at least one jet, and a data-driven method is required to reduce its contribution to the systematic uncertainties of the measurement. This is achieved by simultaneously measuring the number of $t\bar{t}$ events and the efficiency of identifying $b$-jets in these events. The procedure reduces the impact of systematic uncertainties associated with the modelling of $t\bar{t}$ events and the $b$-tagging efficiency calibration, and provides a precise and accurate estimate of the background up to partonic centre-of-mass energies of the order of 1 TeV and for up to five jets.

Data and Monte Carlo samples
The analysis uses data collected in proton-proton collisions at a centre-of-mass energy of 13 TeV from 2015 to 2018. After applying data quality criteria [20], the dataset corresponds to 139 fb$^{-1}$, with an uncertainty of 1.7% [21], obtained using the LUCID-2 detector [22] for the primary luminosity measurements.
Monte Carlo (MC) simulated event samples are used to correct the signal yield for detector effects and to estimate background contributions. All samples were passed through a full simulation of the ATLAS detector [23], based on Geant4 [24]. Table 1 lists the configurations of the nominal MC simulations used in the analysis.
Signal events were modelled using the Sherpa 2.2.2 [25] generator at next-to-leading-order (NLO) accuracy in QCD for up to one additional parton, and leading-order (LO) accuracy for two to three additional parton emissions, for $q\bar{q}$ initial states. The matrix element calculation of $gg \to WW$ production, which includes off-shell effects and Higgs boson contributions, incorporates up to one additional parton emission at LO. It was matched and merged with the Sherpa parton shower based on Catani-Seymour dipole factorisation [26,27] using the MEPS@NLO prescription [28-31]. The virtual QCD corrections were provided by the OpenLoops library [32,33]. The NNPDF3.0NNLO set of parton distribution functions (PDFs) was used [34], along with the dedicated set of tuned parton-shower parameters developed by the Sherpa authors.
To assess the uncertainty in the matrix element calculation and the parton-shower modelling, alternative events for $q\bar{q} \to WW$ production were generated using the Powheg-Box v2 [35-38] generator at NLO accuracy in QCD. Events were interfaced to Pythia 8.186 [39] for the modelling of the parton shower, hadronization, and underlying event, with parameter values set according to the AZNLO set of tuned parameters [40]. The CT10nlo PDF set [41] was used for the hard-scattering processes, whereas the CTEQ6L1 PDF set [42] was used for the parton shower. The events were normalized to the next-to-next-to-leading-order (NNLO) cross-section [43]. For the $gg$ initial state, which makes up only 5% of the signal, no alternative simulation is used.

The production of $t\bar{t}$ and single-top events was modelled using the Powheg-Box v2 [35-37,44] generator at NLO with the NNPDF3.0NLO [34] PDF set. The events were interfaced to Pythia 8.230 [45] to model the parton shower, hadronization, and underlying event, with the A14 set of tuned parameters [46] and the NNPDF2.3LO set of PDFs [47]. For $t\bar{t}$ event generation, the $h_{\mathrm{damp}}$ parameter was set to 1.5 $m_{\mathrm{top}}$ [48]. The diagram-removal scheme [49] was employed to handle the interference between the $tW$ and $t\bar{t}$ production processes [48]. Alternative samples were generated to assess the uncertainties in the top-background modelling. The uncertainty due to initial-state radiation and higher-order QCD effects was estimated by simultaneous variations of the $h_{\mathrm{damp}}$ parameter and the renormalization and factorization scales, and by choosing the Var3c up/down variants of the A14 set of tuned parameters as described in Ref. [50]. The impact of final-state radiation was evaluated with weights that account for the effect of varying the renormalization scale for final-state parton-shower emissions up or down by a factor of two. To assess the dependence on the $tW$-$t\bar{t}$ overlap-removal scheme, the diagram-subtraction scheme [49] was employed as an alternative to the diagram-removal scheme.
The uncertainty due to the parton-shower and hadronization model was evaluated by comparing the nominal sample of $t\bar{t}$ events with an event sample generated by Powheg-Box v2 and interfaced to Herwig 7.04 [51,52], using the H7UE set of tuned parameters [52] and the MMHT2014LO PDF set [53]. To assess the uncertainty in the matching of NLO matrix elements to the parton shower, the nominal sample was compared with a sample generated by MadGraph5_aMC@NLO 2.6.2 [54] at NLO in QCD using the five-flavour scheme and the NNPDF2.3NLO PDF set. The events were interfaced with Pythia 8, as for the nominal sample. The $t\bar{t}$ sample was normalized to the cross-section prediction at NNLO in QCD, including the resummation of next-to-next-to-leading-logarithm (NNLL) soft-gluon terms, calculated using Top++ 2.0 [55-61]. The inclusive cross-section for single-top production was corrected to the theory prediction calculated at NLO in QCD with NNLL soft-gluon corrections [62,63].
The background due to $Z/\gamma^*$+jets production was simulated with the Sherpa 2.2.1 generator using NLO-accurate matrix elements for up to two jets, and LO-accurate matrix elements for three and four jets, calculated with the Comix [26] and OpenLoops libraries. They were matched with the Sherpa parton shower [27] using the MEPS@NLO prescription [28-31] and the set of tuned parameters developed by the Sherpa authors. The NNPDF3.0NNLO set of PDFs was used, and the samples were normalised to an NNLO prediction [64]. To assess the uncertainties in modelling the $Z$+jets process, an alternative sample was simulated using LO-accurate matrix elements with up to four final-state partons with MadGraph5_aMC@NLO 2.2.2, with the NNPDF2.3LO set of PDFs. Events were interfaced to Pythia 8.186 using the A14 set of tuned parameters. The overlap between matrix-element and parton-shower emissions was removed using the CKKW-L merging procedure [65,66]. The inclusive cross-sections of both the nominal and alternative simulations were corrected to the theory prediction calculated at NNLO in QCD.
The production of $WZ$, $ZZ$, $V\gamma$ (with $V = W, Z$) and triboson ($VVV$, on-shell) final states was simulated with the Sherpa 2.2.2 and Sherpa 2.2.8 generators using OpenLoops at NLO QCD accuracy for up to one additional parton and LO accuracy for two to three additional parton emissions, matched and merged with the Sherpa parton shower. The simulation includes $\gamma^*$ contributions for $m(\ell\ell) > 4$ GeV. Samples were generated using the NNPDF3.0NNLO PDF set and normalized to the cross-section calculated by the event generator. Alternative samples for diboson backgrounds with $WZ$ or $ZZ$ production were generated in the same way as the nominal signal sample: the default Sherpa simulation was exchanged for Powheg + Pythia 8, using NLO-accurate matrix elements. The Powheg diboson cross-section was scaled to NNLO [67-70], while the cross-section calculated by Sherpa was found to be in good agreement with the NNLO prediction. Samples generated with Powheg-Box or MadGraph5_aMC@NLO used the EvtGen 1.2.0 or 1.6.0 program [71] to model the decay of bottom and charm hadrons. The effect of multiple interactions in the same and neighbouring bunch crossings (pile-up) was modelled by overlaying the hard-scattering event with simulated inelastic events generated with Pythia 8.186 using the NNPDF2.3LO set of PDFs and the A3 set of tuned parameters [72].

Event reconstruction and selection
Candidate events are selected by requiring exactly one isolated electron and one isolated muon with opposite charges. Events with two isolated leptons of the same flavour are not considered in the analysis due to the higher background from Drell-Yan events.
Events were recorded by either single-electron or single-muon triggers [74,75]. The minimum $p_{\mathrm{T}}$ threshold varied during data-taking between 24 GeV and 26 GeV for electrons, and between 20 GeV and 26 GeV for muons, both requiring 'loose' to 'medium' isolation criteria. Triggers with higher $p_{\mathrm{T}}$ thresholds and looser isolation requirements are also used to increase the efficiency. The trigger selection efficiency is more than 99% for signal events fulfilling all other selection requirements, which are detailed below. Leptons are required to be compatible with the primary vertex by imposing requirements on the impact parameters of associated tracks. The transverse impact parameter significance is required to satisfy $|d_0/\sigma_{d_0}| < 5\,(3)$ for electrons (muons). The longitudinal impact parameter $z_0$ must satisfy $|z_0 \cdot \sin\theta| < 0.5$ mm, where $\theta$ is the polar angle of the track. Additionally, leptons are required to be isolated using information from the ID tracks and energy clusters in the calorimeters in a cone around the lepton. The Gradient working point is used for electrons [76], while for muons the Tight_FixedRad working point is used, which is similar to the Tight selection defined in Ref. [78] but with altered criteria at muon $p_{\mathrm{T}} > 50$ GeV in order to increase the background rejection. The electron or muon trigger object is required to match the respective reconstructed lepton.
Jets are reconstructed using the anti-$k_t$ algorithm [79] with a radius parameter of $R = 0.4$ from particle-flow objects [80]. They are required to have $p_{\mathrm{T}} > 30$ GeV and $|\eta| < 4.5$. To suppress jets that originate from pile-up, a jet-vertex tagger [81] is applied to jets with $p_{\mathrm{T}} < 60$ GeV and $|\eta| < 2.4$. The jet energy scale and resolution are corrected with $\eta$- and $p_{\mathrm{T}}$-dependent scale factors [82]. Jets with $p_{\mathrm{T}} > 20$ GeV and $|\eta| < 2.5$ containing the decay products of a $b$-hadron are identified using the DL1r $b$-tagging algorithm [83,84] at the 85% efficiency working point.
The missing transverse momentum, with magnitude $E_{\mathrm{T}}^{\mathrm{miss}}$, is computed as the negative of the vectorial sum of the transverse momenta of tracks associated with jets and muons, as well as tracks in the ID that are not associated with any other component. The $p_{\mathrm{T}}$ of the electron track is replaced by the calibrated transverse momentum of the reconstructed electron [85].
In order to resolve the overlap between particles reconstructed as multiple physics objects in the detector, non-$b$-tagged jets are removed if they overlap, within $\Delta R < 0.2$, with an electron, or with a muon if the jet has fewer than three associated tracks with $p_{\mathrm{T}} > 500$ MeV, satisfies $p_{\mathrm{T}}^{\mu}/p_{\mathrm{T}}^{\mathrm{jet}} > 0.5$, and the ratio of the muon $p_{\mathrm{T}}$ to the sum of the track $p_{\mathrm{T}}$ associated with the jet is greater than 0.7. Electrons or muons overlapping within $\Delta R < 0.4$ with any jet, including $b$-tagged jets, that survives the former selection are removed.
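The two-step overlap-removal logic above can be sketched as follows. This is a minimal illustration, not the ATLAS implementation: the object model (dicts with keys such as 'eta', 'phi', 'pt', 'ntracks', 'sum_track_pt', 'btag') is hypothetical, and the 500 MeV track threshold is assumed to be applied when filling 'ntracks' and 'sum_track_pt'.

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Angular distance Delta R = sqrt(d_eta^2 + d_phi^2), phi wrapped to [-pi, pi]."""
    dphi = (phi1 - phi2 + math.pi) % (2.0 * math.pi) - math.pi
    return math.hypot(eta1 - eta2, dphi)

def remove_overlaps(electrons, muons, jets):
    """Sketch of the lepton-jet overlap removal described in the text."""
    # Step 1: drop non-b-tagged jets within Delta R < 0.2 of an electron, or of
    # a muon when the jet looks muon-dominated.
    kept_jets = []
    for j in jets:
        near_e = any(delta_r(j['eta'], j['phi'], e['eta'], e['phi']) < 0.2
                     for e in electrons)
        near_mu = any(
            delta_r(j['eta'], j['phi'], m['eta'], m['phi']) < 0.2
            and j['ntracks'] < 3                      # tracks with pT > 500 MeV
            and m['pt'] / j['pt'] > 0.5               # muon carries most of jet pT
            and m['pt'] / j['sum_track_pt'] > 0.7     # muon dominates jet tracks
            for m in muons)
        if j['btag'] or not (near_e or near_mu):
            kept_jets.append(j)
    # Step 2: drop leptons within Delta R < 0.4 of any surviving jet.
    keep_lep = lambda l: all(
        delta_r(l['eta'], l['phi'], j['eta'], j['phi']) >= 0.4
        for j in kept_jets)
    return ([e for e in electrons if keep_lep(e)],
            [m for m in muons if keep_lep(m)],
            kept_jets)
```

For instance, a non-$b$-tagged jet within $\Delta R = 0.1$ of an electron is removed in step 1, after which the electron survives step 2; if the same jet were $b$-tagged, it would be kept and the electron removed instead.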
Events having at least one jet, but no $b$-tagged jets, are selected for the analysis. To reduce the Drell-Yan backgrounds, dominated by $Z$+jets events with $Z \to \tau^+\tau^-$ decays, the invariant mass of the electron-muon pair is required to be $m_{e\mu} > 85$ GeV. This requirement also reduces the contribution of resonant $gg \to H \to WW$ production. Events with additional leptons with $p_{\mathrm{T}} > 10$ GeV satisfying Loose isolation and LooseLH (Loose) identification requirements for electrons (muons) are vetoed to reduce backgrounds due to $WZ$ and $ZZ$ production. Additionally, the subset of events with high leading-jet transverse momentum, $p_{\mathrm{T}}^{\mathrm{lead.\,jet}} > 200$ GeV, is analysed in detail, to investigate the reduced interference suppression in the aTGC interpretation. Table 2 gives a summary of the lepton, jet and event selection requirements used to define the signal region.
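As a concrete illustration, the signal-region requirements described above can be expressed as a simple event filter. The event representation and field names below are hypothetical placeholders, not the ATLAS event data model.

```python
def passes_signal_region(event):
    """Illustrative signal-region filter for the selection described above.
    'event' is a hypothetical dict; momenta are in GeV."""
    # At least one jet with pT > 30 GeV and |eta| < 4.5.
    jets = [j for j in event['jets'] if j['pt'] > 30.0 and abs(j['eta']) < 4.5]
    if not jets:
        return False
    # Veto events containing any b-tagged jet (b-tagging uses a lower
    # pT threshold of 20 GeV, so all jets are checked).
    if any(j.get('btag', False) for j in event['jets']):
        return False
    # Suppress Drell-Yan and resonant H -> WW contributions.
    if event['m_emu'] <= 85.0:
        return False
    # Veto additional loose leptons with pT > 10 GeV (WZ/ZZ rejection).
    if event['n_extra_leptons'] > 0:
        return False
    return True
```

A passing event therefore has at least one central-or-forward jet above threshold, no $b$-tags, $m_{e\mu} > 85$ GeV, and no third lepton.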

Background estimate
The top-quark background, from either $t\bar{t}$ or single-top production, comprises about 60% of the events passing the event selection and about 90% of the total background. Additional backgrounds considered are $Z$+jets production, events with non-prompt or misidentified leptons, diboson production ($WZ$, $ZZ$, $W\gamma$, and $Z\gamma$), and triboson production.

Top-quark background
An estimate of the $t\bar{t}$ background is obtained using a data-driven technique, while the single-top background is estimated using simulation and is found to contribute about 16% of the top-quark background. Following the procedure used in a measurement of the $t\bar{t}$ cross-section [86], two control regions requiring exactly one and exactly two $b$-tagged jets are defined. All other selection criteria are the same as in the signal region. These regions are dominated by $t\bar{t}$ events and can be used to infer the number of $t\bar{t}$ events in the signal region with minimal dependence on the selection and $b$-tagging efficiencies. The contribution of non-$t\bar{t}$ events in the 1-$b$-jet and 2-$b$-jet control regions is 13% and 4% of the expected events, respectively, of which 90% can be attributed to single-top production.
The numbers of $t\bar{t}$ events in the two control regions, as well as in the signal region, are given by
$$N_1 = L\,\sigma_{t\bar{t}}\,\varepsilon_{t\bar{t}} \cdot 2\varepsilon_b\,(1 - C_b\,\varepsilon_b) + N_1^{\mathrm{others}}, \quad (1)$$
$$N_2 = L\,\sigma_{t\bar{t}}\,\varepsilon_{t\bar{t}} \cdot C_b\,\varepsilon_b^2 + N_2^{\mathrm{others}}, \quad (2)$$
$$N_0^{t\bar{t}} = L\,\sigma_{t\bar{t}}\,\varepsilon_{t\bar{t}} \cdot \left(1 - 2\varepsilon_b + C_b\,\varepsilon_b^2\right), \quad (3)$$
where $N_i$ and $N_i^{\mathrm{others}}$ are, respectively, the number of selected events in data and the number of non-$t\bar{t}$ events, estimated using simulation, with exactly $i$ $b$-tagged jets. The term $L\,\sigma_{t\bar{t}}\,\varepsilon_{t\bar{t}}$ is the product of the integrated luminosity, the $t\bar{t}$ cross-section, and the general selection efficiency, and $\varepsilon_b$ is the efficiency of selecting a $b$-jet in a $t\bar{t}$ event. The correction factor $C_b = \varepsilon_{bb}/\varepsilon_b^2$ accounts for correlation effects between selecting one and two $b$-jets. It is determined from $t\bar{t}$ simulation as $\varepsilon_{bb}^{\mathrm{MC}}/(\varepsilon_b^{\mathrm{MC}})^2$, and typically has values close to unity. The $b$-jet selection efficiency, $\varepsilon_b$, accounts for the efficiency of the $b$-tagging algorithm and also for the acceptance of $b$-jets. Using Eqs. (1)-(3), the number of $t\bar{t}$ events in the signal region can be expressed as
$$N_0^{t\bar{t}} = \frac{C_b\,(N_1' + 2N_2')^2}{4\,N_2'} - N_1' - N_2', \qquad N_i' = N_i - N_i^{\mathrm{others}},$$
which depends only on $N_i$ and $N_i^{\mathrm{others}}$ ($i = 1, 2$), as well as $C_b$. The $t\bar{t}$ background estimate is performed in each analysis bin, i.e. for the fiducial selection as well as in each individual bin of the differential measurements. Because $b$-tagged jets are selected with a lower $p_{\mathrm{T}}$ threshold than regular jets, this method also works for events with exactly one regular jet.
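A minimal numeric sketch of this estimate, under the control-region relations described above; the function and variable names are illustrative, and the inputs in the example are invented for demonstration.

```python
def ttbar_in_signal_region(n1, n1_others, n2, n2_others, c_b):
    """Data-driven ttbar estimate for the zero-b-tag signal region.

    n1, n2:                 observed yields in the 1- and 2-b-tag control regions
    n1_others, n2_others:   simulated non-ttbar yields in those regions
    c_b:                    b-tagging correlation factor from simulation (~1)
    Returns (N0_ttbar, eps_b).
    """
    a = n1 - n1_others                        # ttbar yield with exactly 1 b-tag
    b = n2 - n2_others                        # ttbar yield with exactly 2 b-tags
    # Solving the two control-region equations:
    #   a = S * 2*eps_b*(1 - c_b*eps_b),  b = S * c_b*eps_b^2,
    # gives S = L*sigma*eps = c_b*(a + 2b)^2 / (4b).
    norm = c_b * (a + 2.0 * b) ** 2 / (4.0 * b)
    eps_b = 2.0 * b / (c_b * (a + 2.0 * b))   # b-jet selection efficiency
    n0 = norm - a - b                         # zero-b-tag ttbar yield
    return n0, eps_b
```

As a cross-check with invented numbers: $C_b = 1$, $N_1' = 420$ and $N_2' = 490$ correspond to $L\sigma\varepsilon = 1000$ and $\varepsilon_b = 0.7$, giving $N_0^{t\bar{t}} = 1000\,(1 - 1.4 + 0.49) = 90$.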
As the $t\bar{t}$ background estimate is largely based on observed yields in data control regions and the only input from $t\bar{t}$ simulation is the correlation factor $C_b$, this method strongly reduces experimental and theoretical uncertainties in the $t\bar{t}$ background and thus lowers the total uncertainty in the background by a factor of approximately five. In regions of phase space where, for a large fraction of events, one or both $b$-jets are outside the detector acceptance, the reliance on $t\bar{t}$ simulation for the extrapolation into the signal region increases. In such cases, the $t\bar{t}$ estimate remains valid because modelling uncertainties cover rate and shape differences between data and simulation for $b$-jet kinematic distributions in the control region. Uncertainties in the single-top production rate that are independent of the $b$-jet multiplicity, such as the cross-section uncertainty, partially cancel out because single-top production is the dominant background to $t\bar{t}$ in the $t\bar{t}$ control regions. A variation leading to a larger single-top prediction in the control regions reduces the $t\bar{t}$ estimate, so if the same variation also leads to a larger prediction in the signal region, the overall effect on the combined top background is reduced. The total uncertainty in the top background in the signal region is 2.8%.
The top background estimate is validated in a top-enriched subset of the signal region which requires $m_{j\ell} < 140$ GeV and $\Delta\phi(e,\mu) < \pi/2$ in addition to the normal event selection. Here $m_{j\ell}$ is the invariant mass of the leading jet and the closest lepton. This region is approximately 70% pure in top events and shows good agreement between the data and the combined signal and background prediction, which uses the data-driven top background estimate. The level of agreement of the prediction with the observed events in the control regions and the top-enriched selection is summarized in Table 3. Figure 2 shows the distributions of the leading-lepton $p_{\mathrm{T}}$ and the jet multiplicity, confirming the accurate modelling of lepton and jet-related properties in events without $b$-jets.

Drell-Yan background
The Drell-Yan $Z$+jets background is estimated using MC simulation. The $m_{e\mu} > 85$ GeV requirement strongly suppresses this background, by a factor of about nine. The contribution of this background to the selected events in the signal region is about 3%, almost entirely due to $Z/\gamma^* \to \tau^+\tau^-$+jets events.
The $Z$+jets estimate is checked in a validation region requiring a dilepton invariant mass between 45 GeV and 80 GeV and either $p_{\mathrm{T},e\mu} < 30$ GeV or $E_{\mathrm{T}}^{\mathrm{miss}} < 20$ GeV, in addition to the $b$-jet veto and the requirement of at least one jet with $p_{\mathrm{T}} > 30$ GeV and $|\eta| < 4.5$. The $Z$+jets purity of this region is 75% and good modelling of the data is observed, as shown in Table 3. Figure 2 shows the distribution of the dilepton invariant mass in the validation region, which features the resonant $Z \to \tau\tau$ distribution over a rising background of top events.
In addition to the theoretical uncertainty in the $Z$+jets cross-section of 5% [87], uncertainties are estimated by comparing the nominal MC simulation with events simulated by MadGraph5_aMC@NLO. This uncertainty estimate was found to bracket the effect of scale uncertainties. In the signal region, the total uncertainty in the $Z$+jets background is about 30%.

Backgrounds with non-prompt or misidentified leptons
Reducible backgrounds from events with non-prompt or misidentified leptons are referred to as fake-lepton backgrounds or 'fakes'. Fake leptons correspond to leptons from heavy-flavour hadron decays and to jets misidentified as electrons. Fake-lepton events stem mainly from $W$+jets production and contribute about 3% of the selected events. Top backgrounds with one prompt lepton contribute about 10% of the fake-lepton backgrounds.
Fake-lepton backgrounds are estimated using a data-driven technique. A control region is defined where one of the two lepton candidates fails the nominal selection with respect to the impact-parameter and isolation criteria, but instead fulfils a looser set of requirements designed to increase the contribution of fake leptons. The fake-lepton background in the signal region is then obtained by scaling the number of data events in this control region by an extrapolation factor, after subtracting processes with two prompt leptons using simulation. The extrapolation factor is determined in a data sample that is dominated by fake leptons, and it depends on the $p_{\mathrm{T}}$, $|\eta|$, and flavour of the lepton. The data sample is selected by requiring events with a dijet-like topology with one lepton candidate recoiling against a jet, with $|\Delta\phi(\ell, \mathrm{jet})| > 2.8$. To suppress contamination from $W$+jets events in this sample, the sum of $E_{\mathrm{T}}^{\mathrm{miss}}$ and the transverse mass of the lepton and $E_{\mathrm{T}}^{\mathrm{miss}}$ system is required to be smaller than 50 GeV. The approach closely follows the one applied in Ref. [88].
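The extrapolation step itself is a per-bin rescaling, which can be sketched as follows; the yields and factors in the example are invented, and in the real analysis the factor is binned in lepton $p_{\mathrm{T}}$, $|\eta|$ and flavour.

```python
def fake_lepton_background(data_cr, prompt_mc_cr, fake_factors):
    """Data-driven fake estimate: in each bin, subtract the simulated
    prompt-lepton contribution from the data yield in the loose-not-tight
    control region, then scale by the extrapolation (fake) factor.
    Negative subtracted yields are clipped to zero."""
    return [max(0.0, d - p) * f
            for d, p, f in zip(data_cr, prompt_mc_cr, fake_factors)]
```

For example, two bins with 100 and 50 control-region data events, prompt subtractions of 20 and 10, and factors 0.1 and 0.2 yield fake estimates of 8.0 events in each bin.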
Systematic uncertainties in the composition of the different sources of fake leptons are estimated by varying the selection of the data sample in which the extrapolation factors are determined. The variations include selecting events with a $b$-jet recoiling against the lepton candidate, as well as changing the $E_{\mathrm{T}}^{\mathrm{miss}}$ requirements to increase the fake-lepton contributions. The normalization of the prompt-lepton background in the control region used for the extrapolation-factor determination is varied by 10%, which covers the largest discrepancies between simulation and data observed in a dedicated validation region. An additional 25% uncertainty in the fake-lepton background normalization covers a potential mismodelling of the identification efficiency of prompt leptons that fail the tight, but fulfil the looser, lepton identification requirements. The uncertainty in the signal contamination in the control region, which is subtracted using simulation, is determined from the typical size of the largest deviations between the measured and predicted differential cross-sections, which is 20%. The total relative uncertainty in the fake-lepton background is about 40%.
In order to validate the estimate of the fake-lepton backgrounds, the opposite-charge requirement of the signal-region selection is inverted, and events with an electron-muon pair of the same charge are selected. As many processes leading to fake leptons are charge symmetric, while most Standard Model processes are not, this selection increases the contribution of $W$+jets events to about 25%. The modelling of the fake-lepton backgrounds can be validated despite the relatively low purity, since the dominant diboson background in this region is known with a precision of about 10%. Reasonable agreement of the prediction with the data is observed, as shown in Figure 2 in the subleading-lepton $p_{\mathrm{T}}$ distribution, and in Table 3, which compares the numbers of observed and predicted events.

Table 3: Summary of the observed and predicted events in the background control regions (CR) and validation regions (VR), and in the top-background-enriched selection. The uncertainty in the prediction includes statistical and systematic effects, excluding theory uncertainties on the signal. The purity column gives the purity of the target process relative to the total prediction. The $t\bar{t}$ prediction in the two $t\bar{t}$ control regions is from simulation, while in the top-enriched region the data-driven estimate is used.


Other backgrounds
Backgrounds from $WZ$, $ZZ$, $W\gamma$, and $Z\gamma$ production are estimated from simulation and are found to contribute about 3% of the total selected events, dominated by $WZ$ events, which are observed to be well described by the nominal Sherpa simulation in Ref. [89]. Uncertainties are derived by comparing the nominal simulation with events simulated by Powheg + Pythia 8. The difference in generator predictions was found to be larger than the impact of scale uncertainties in the Sherpa simulation, and the assigned uncertainty is thus the conservative option. Additionally, the uncertainty in the diboson cross-section of 10% [90, 91] is included.

The $WZ$ and $ZZ$ prediction is validated in events containing a third lepton having $p_{\mathrm{T}} \ge 10$ GeV that must satisfy loosened identification criteria. The invariant mass of the resulting same-flavour opposite-charge pair of leptons is required to be between 80 GeV and 100 GeV, close to the $Z$ boson mass. These selections give a very pure sample of diboson events, and the prediction is in good agreement with the data, as seen in Figure 2 and Table 3. In Figure 2 the $E_{\mathrm{T}}^{\mathrm{miss}}$ distribution in the validation region shows the separation between $WZ$ and $ZZ$ events.
$W\gamma$ and $Z\gamma$ events enter the signal region as backgrounds when a photon is reconstructed and selected as an electron candidate. To validate the estimates of these backgrounds, the electron identification requirements are changed such that contributions from photon conversions increase. As the electron candidates reconstructed from photon conversions are charge symmetric, both opposite-charge and same-charge candidates are selected with respect to the selected muon. For this validation region the $p_{\mathrm{T}}$ distribution of the electron candidates is shown in Figure 2. It is dominated by electrons from photon conversions. Good agreement with the observed data in the validation regions is found.
Based on MC simulations, it is estimated that the triboson background contributes less than 0.1% of the inclusive selected events and at most 0.5% of the selected events in a single bin, and it is thus neglected in the analysis. Table 4 lists the number of selected candidate events, as well as the breakdown of the background predictions. Details of the systematic uncertainties are given in Section 7. Figure 3 shows selected distributions at detector level in the final analysis binning and compares the observed data with the signal prediction and the background estimate. Reasonable agreement between data and expectations is observed for both the event yields and the shapes of the distributions. For the nominal signal model, small excesses are seen in the predictions at low leading-lepton $p_{\mathrm{T}}$, as well as at low $m_{e\mu}$ in the high-$p_{\mathrm{T}}^{\mathrm{lead.\,jet}}$ selection (both in Figure 3). These are, however, covered by the theory uncertainties of the signal, which are not included in the error bands in this figure.

Fiducial and differential cross-section determination
The $WW$+jets cross-section is evaluated in the fiducial phase space of the $WW \to e^\pm\nu\mu^\mp\nu$ decay channel as defined in Table 5. In simulated events, electrons and muons are required to originate directly from the hard interaction and not from $\tau$-lepton or hadron decays. The momenta of photons emitted in a cone of size $\Delta R = 0.1$ around the lepton direction that do not originate from hadron decays are added to the lepton momentum to form 'dressed' leptons. Stable final-state particles (particles are considered stable if their decay length is greater than 1 cm), excluding prompt leptons and the associated photons, are clustered into particle-level jets using the anti-$k_t$ algorithm with radius parameter $R = 0.4$. The missing transverse momentum is defined at particle level as the transverse component of the vectorial sum of the neutrino momenta. The nominal definition of the particle-level fiducial phase space does not include a veto on $b$-jets. Alternative results that include a veto on particle-level $b$-jets with $p_{\mathrm{T}} > 20$ GeV are provided in HEPData. At particle level, $b$-jets are defined by ghost-association [92], wherein $b$-hadrons are included in the jet clustering as infinitely soft particles (ghosts); jets with $b$-hadron ghosts among their constituents are $b$-jets.

The fiducial cross-section is obtained as
$$\sigma_{\mathrm{fid}} = \frac{N_{\mathrm{obs}} - N_{\mathrm{bkg}}}{C_{WW} \cdot L},$$
where $L$ is the integrated luminosity, $N_{\mathrm{obs}}$ is the observed number of events, $N_{\mathrm{bkg}}$ is the estimated number of background events, and $C_{WW}$ accounts for detector inefficiencies, resolution effects, and contributions from $\tau$-lepton decays. $C_{WW}$ is calculated as the number of simulated signal events passing the reconstruction-level event selection divided by the number of events in the fiducial phase space. Its numerical value is $C_{WW} = 0.747 \pm 0.061$, and its uncertainty is dominated by uncertainties in the jet energy scale, the jet energy resolution, and pile-up modelling. The fraction of events passing the event selection but containing at least one lepton from $\tau$-lepton decays is 9%.
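The extraction itself is a one-line computation; in the sketch below only $C_{WW} = 0.747$ and $L = 139$ fb$^{-1}$ are taken from the text, while the yields in the usage example are placeholders, not the measured values.

```python
def fiducial_cross_section(n_obs, n_bkg, c_ww, lumi):
    """sigma_fid = (N_obs - N_bkg) / (C_WW * L).
    With lumi in fb^-1, the result is in fb."""
    return (n_obs - n_bkg) / (c_ww * lumi)

# Placeholder yields, for illustration only:
sigma_fid = fiducial_cross_section(n_obs=10000.0, n_bkg=6000.0,
                                   c_ww=0.747, lumi=139.0)
```

In practice the uncertainty on $\sigma_{\mathrm{fid}}$ would be propagated from the uncertainties on $N_{\mathrm{bkg}}$, $C_{WW}$ and $L$, which this sketch omits.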
The differential cross-sections are determined using an iterative Bayesian unfolding method [93,94]. The unfolding procedure corrects for migrations between bins in the distributions during the reconstruction of the events, and applies fiducial as well as reconstruction-efficiency corrections. The fiducial corrections take into account events that are reconstructed in the signal region but originate from outside the fiducial region; the reconstruction-efficiency correction accounts for events inside the fiducial region that are not reconstructed in the signal region due to detector inefficiencies. Tests with MC simulation demonstrate that the method successfully retrieves the true distribution in the fiducial region from the reconstructed distribution in the signal region. To reduce the bias due to the assumed true distribution, the method can be applied iteratively, at the cost of an increased statistical uncertainty. Two iterations are used to unfold the $H_{\mathrm{T}}$, $S_{\mathrm{T}}$, and leading-jet $p_{\mathrm{T}}$ distributions and the exclusive jet multiplicity, which are subject to large modelling uncertainties. For the remaining distributions, either the result is independent of the number of iterations, or the modelling uncertainty is not reduced while the statistical uncertainties increase. For these cases, only one unfolding iteration is performed.
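The core of the iterative Bayesian (D'Agostini-style) procedure can be sketched as follows; this is a generic textbook version under an assumed response-matrix convention, not the ATLAS implementation, and it omits the fiducial correction for out-of-fiducial events entering the signal region.

```python
import numpy as np

def bayesian_unfold(data, response, prior, efficiency, iterations=2):
    """Iterative Bayesian unfolding (D'Agostini-style sketch).

    data[r]        -- background-subtracted reco-level yields
    response[r, t] -- P(reco bin r | true bin t), for reconstructed events
    efficiency[t]  -- probability that a true-bin-t event is reconstructed
    prior[t]       -- starting guess for the true distribution
    """
    data = np.asarray(data, dtype=float)
    prior = np.asarray(prior, dtype=float)
    efficiency = np.asarray(efficiency, dtype=float)
    for _ in range(iterations):
        folded = response @ prior                   # expected reco spectrum
        posterior = (response * prior).T / folded   # Bayes: P(true t | reco r)
        unfolded = (posterior @ data) / efficiency  # redistribute and correct
        prior = unfolded / unfolded.sum()           # updated prior for next pass
    return unfolded
```

As a sanity check, with a diagonal response matrix and unit efficiency the unfolded result reproduces the measured spectrum regardless of the prior; each additional iteration reduces the dependence on the prior at the cost of amplifying statistical fluctuations.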

Uncertainties
Systematic uncertainties in the $W^+W^-$+jets cross-section measurements arise from experimental sources, the background determination, the procedures used to correct for detector effects, and theoretical uncertainties in the signal modelling.
The dominant experimental systematic uncertainties arise in the calibration of the jet energy scale and resolution and the calibration of the $b$-tagging efficiency and mis-tag rates. Experimental uncertainties also encompass uncertainties in the calibration of lepton trigger, reconstruction, identification and isolation efficiencies, the calibration of the lepton momentum or energy scale and resolution, and the modelling of pile-up. All experimental uncertainties are evaluated by varying the respective calibrations and propagating their effects through the analysis, affecting both the background estimates and the unfolding of detector effects.
Systematic uncertainties in the estimate of fake leptons are derived by changing the selection used to estimate the weights, in order to change the composition of the sources of fake leptons. Additionally, the subtraction of the prompt-lepton sources in the control region is varied, and the statistical uncertainties of the weights are propagated. More details on the uncertainties affecting the fake-lepton estimate can be found in Section 5.
The estimate of the top background is affected by the statistical uncertainty of the number of events in the control region, and by uncertainties in the modelling of $t\bar{t}$ and single-top events, such as the uncertainty in the matrix-element calculation, the parton-shower modelling, the QCD scale choices, the initial- and final-state radiation, and the interference between $t\bar{t}$ and single-top events. These are evaluated by using the alternative simulations described in Section 3 and propagating the results through the top background estimate. The effect of the PDF uncertainty on the top background was evaluated, but found to be negligible.
The uncertainty in minor backgrounds is estimated by varying their total cross-sections within their uncertainties and by using alternative simulations, as described in Section 5. The difference between nominal and alternative simulations covers PDF uncertainties, missing higher-order QCD corrections, and the parton-shower model.

The bias introduced by using distributions generated by the nominal signal simulation as a prior in the unfolding is estimated by reweighting these distributions at generator level with a smooth function such that, after including simulated detector effects, they closely resemble the background-subtracted data. This reweighted detector-level prediction is unfolded using the nominal unfolding set-up. The unfolding procedure is able to recover the generator-level distribution very accurately, so this uncertainty source is negligible. Uncertainties in the unfolding procedure due to the theoretical modelling of the signal are evaluated by repeating the unfolding procedure with alternative signal simulations. The uncertainty due to missing higher-order QCD corrections is evaluated by varying the renormalization and factorization scales. The uncertainty due to the choice of generator for the hard interaction, the parton-shower model and the underlying-event modelling is estimated using the alternative simulation of $q\bar{q} \to WW$ production from Powheg-Box v2, interfaced to Pythia 8.186. For the uncertainty estimation, the alternative model is first reweighted to the nominal model, so that uncertainties due to disagreement in the predicted shape of distributions can be ignored, and only the differences in the predicted migration matrix and in the fiducial and efficiency corrections are taken into account.

Statistical uncertainties are evaluated by creating pseudo-data samples that are obtained by varying the data within their Poisson uncertainties in each bin and then propagating these varied samples through the unfolding.
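The pseudo-data technique for the statistical uncertainties can be illustrated schematically. All counts and the per-bin correction factor below are toy values, and the simple background subtraction stands in for the full unfolding chain of the real analysis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy detector-level counts per bin (hypothetical numbers).
data = np.array([500.0, 300.0, 120.0, 40.0])
bkg = np.array([150.0, 90.0, 30.0, 10.0])
c = 0.75  # stand-in per-bin correction factor (plays the role of the unfolding)

def measure(counts):
    """Stand-in for the full analysis chain: subtract background, correct."""
    return (counts - bkg) / c

# Vary the data within their Poisson uncertainties in each bin, many times,
# and propagate each pseudo-experiment through the 'analysis'.
toys = rng.poisson(data, size=(10_000, data.size))
results = np.array([measure(t) for t in toys])

# The spread of the toy results gives the statistical uncertainty per bin.
stat_unc = results.std(axis=0)
print(stat_unc)
```

For this linear toy chain the result simply reproduces $\sqrt{N}/c$ per bin; the value of the method lies in propagating fluctuations through non-linear steps such as the iterative unfolding, where no closed-form error propagation exists.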
The statistical uncertainties of the background estimates, which include statistical uncertainties in the MC predictions and in the control regions used to estimate the top and fake-lepton backgrounds, are evaluated using the same method. If not stated otherwise, 'statistical uncertainties' refers to the combined statistical uncertainties from the signal and control regions. Table 6 gives a breakdown of the uncertainties in the fiducial cross-section measurement.

Table 6: Breakdown of the uncertainties in the measured fiducial cross-section. "Jet calibration" uncertainties encompass jet energy scale and resolution uncertainties; "Top modelling" and "Signal modelling" are uncertainties in the theoretical modelling of the respective processes; "Fake-lepton background" is the uncertainty in the fake-lepton estimate, while "Other background" is the uncertainty due to minor prompt-lepton backgrounds; "Flavour tagging" covers all uncertainties in the flavour-tagging efficiency and mis-tag rate; and "Luminosity" is the uncertainty in the measurement of the integrated luminosity. All systematic uncertainties belonging to none of the above categories are included in "Other systematic uncertainties". Statistical uncertainties arise in the signal region, in the control regions used for the data-driven top and fake-lepton estimates, and in the backgrounds that are estimated using MC simulations.

[Table 6 columns: "Uncertainty source" | "Relative effect"]

Results
The measured fiducial cross-section for $W^+W^-$+jets production, with $WW \to e^\pm\mu^\mp$, at $\sqrt{s} = 13$ TeV, in the phase space defined in Table 5, is $\sigma_{\mathrm{fid}} = 258 \pm 4\,(\mathrm{stat.}) \pm 25\,(\mathrm{syst.})$ fb, with a total uncertainty of 10%. In Figure 5, the measured result is compared with various predictions for $W^+W^-$+jets production, and good agreement is found. Differential fiducial cross-sections are presented in Figures 6 to 8. Figure 9 displays distributions in a phase space that additionally requires a jet with a transverse momentum of at least 200 GeV.

Comparison with theoretical predictions
The measurement is compared with the theory predictions listed in Table 7.

[Caption of the uncertainty-breakdown figures:] "Jet Calibration" uncertainties encompass jet energy scale and resolution uncertainties, while "Top Modelling" encompasses all $t\bar{t}$ and single-top modelling uncertainties. "Fake Lepton Backgr." is the uncertainty in the non-prompt-lepton estimate from the fake-factor method. "Other Systematics" includes modelling and total cross-section uncertainties in the remaining backgrounds, lepton-related uncertainties as well as uncertainties due to pile-up reweighting and the signal modelling in the unfolding, while "Statistical Uncertainty" is the combined statistical uncertainty in the signal region, from control regions, and from MC simulations.

Figure 5: Comparison of the measured fiducial $W^+W^-$+jets cross-section with various theoretical predictions. Theoretical predictions are indicated as points with inner (outer) error bars denoting PDF (PDF+scale) uncertainties. The central value of the measured cross-section is indicated by a vertical line, with the narrow band showing the statistical uncertainty and the wider band the total uncertainty including statistical and systematic uncertainties.

The result is compared with a fixed-order parton-level prediction from MATRIX 2.0 that is accurate to NNLO (NLO) for $pp \to WW$ ($pp \to WW+$jet) production, and with a prediction that additionally accounts for EW corrections to $WW+$jet production, calculated with Sherpa 2.2.2 + OpenLoops. For the MATRIX prediction, the NNPDF3.1 NNLO parton distribution function is used, with a common central choice for the renormalization and factorization scales. In Figure 5, the measured integrated fiducial cross-section is also compared with a prediction that combines the QCD corrections from MATRIX with NLO EW corrections to $WW$+jets production that were generated with Sherpa 2.2.2 + OpenLoops [25, 100-102]. Photon-induced contributions are included as an additive correction, while the EW correction to $q\bar{q}$-initiated production is taken into account multiplicatively.
The latter correction decreases the cross-section by 4%, while the former leads to an increase of 4%. The importance of both corrections increases with energy. The difference between an additive and a multiplicative combination scheme for the QCD and EW corrections is typically of the order of 1%, but can be as large as 10% in the highest $m_{\mathrm{T}}$ and $H_{\mathrm{T}}$ bins.

Effective field theory interpretation
Many new-physics models that introduce new states at a high energy scale ($\Lambda$) can be described, at lower energy scales, by operators with mass dimension larger than four in an effective field theory (EFT) framework. The lowest-dimensional operators that can generate anomalous triple-gauge-boson couplings (aTGC) are of dimension six. The dimension-six operator considered here, as defined in Ref. [14], is of particular interest for an analysis of diboson production because it can only be measured in processes affected by modifications of the gauge-boson self-couplings. Its effect increases rapidly with the centre-of-mass energy, making a measurement at the energies probed by the LHC important. However, the interference of the SM and anomalous amplitudes, and thus the observable consequences of the operator, decreases with increasing energy due to the different helicities of the dominant contributions to the two amplitudes [105, 106]. As a consequence, the square of the anomalous dimension-six amplitude, which is quadratic in the ratio of the Wilson coefficient, $c$, to $\Lambda^2$, dominates, even though, in general, the interference of dimension-six operators with the SM is expected to be larger, as it is linear in $c/\Lambda^2$ and thus less suppressed by $\Lambda$. The interference suppression weakens the limits on $c$ that can be achieved by a measurement of diboson production, and also poses a problem for the validity of an interpretation in a dimension-six model, since other terms of order $\Lambda^{-4}$, for example those due to dimension-eight operators, are neglected. Requiring a hard jet in addition to the diboson pair alters the relative contributions of different helicity configurations and reduces the suppression of the interference between the SM and anomalous amplitudes [13].
Constraints on the Wilson coefficient, $c$, are determined using the unfolded differential cross-section that is most sensitive to the interference of the operator with the SM. The fit is performed both for $p_{\mathrm{T}}^{\mathrm{lead.\,jet}} > 30$ GeV and for $p_{\mathrm{T}}^{\mathrm{lead.\,jet}} > 200$ GeV. The latter selection is used to enhance the effect of the interference term, as discussed above.
Templates of the distributions representing the pure SM contribution, the new-physics contribution, and the interference between the SM and new-physics contributions at LO are prepared using MadGraph5_aMC@NLO [107], interfaced to Pythia 8.244 [45] with the A14 tune [46] for parton showering and hadronization. Events with zero or one jet are simulated in MadGraph5_aMC@NLO, and the overlap between matrix-element and parton-shower emissions is removed using the CKKW-L merging procedure [65, 66]. Agreement of the MadGraph5_aMC@NLO prediction with the baseline Sherpa 2.2.2 generator is ensured by applying a bin-wise correction, determined as the ratio of the SM predictions from Sherpa and MadGraph5_aMC@NLO. It is assumed that the relative scale-induced uncertainties of the Sherpa prediction are also applicable, differentially in the fitted observable, to the prediction that includes the effect of dimension-six operators. The prediction and the measured cross-section are then used to construct a likelihood function. Measurement uncertainties are modelled using a multivariate Gaussian distribution, while QCD scale and PDF uncertainties affecting the theory prediction are treated as nuisance parameters, constrained with a Gaussian distribution. Two nuisance parameters are introduced to model the scale uncertainty affecting the predicted distribution, so that its effect is not fully correlated between bins. The first (second) parameter models the full effect of the scale uncertainty in the first (last) bin of the distribution, and its effect decreases linearly with the logarithm of the observable such that it vanishes in the last (first) bin. The decorrelation of scale-uncertainty effects increases the width of the confidence intervals by up to 40% relative to a model in which the scale-uncertainty effects are assumed to be fully correlated between bins. Confidence intervals for $c$ are derived using Wilks' theorem [108], assuming that the profile-likelihood test statistic is $\chi^2$-distributed [109].
Observed and expected 95% confidence intervals for the EFT coefficients are summarized in Table 8. They are presented both for a fit that takes into account only the linear terms in the cross-section parameterization and for a fit that also takes into account the quadratic terms due to the square of the dimension-six amplitude. For $p_{\mathrm{T}}^{\mathrm{lead.\,jet}} > 200$ GeV, the limits in the linearized EFT expansion are improved relative to the $p_{\mathrm{T}}^{\mathrm{lead.\,jet}} > 30$ GeV requirement, and the impact of the quadratic term is reduced. As expected, the analysis of the phase space characterized by a high-$p_{\mathrm{T}}$ jet increases the experimental sensitivity to effects proportional to $c/\Lambda^2$ due to the reduced suppression of the interference between the SM amplitude and the dimension-six amplitude. However, pure dimension-six contributions, which are $\mathcal{O}(\Lambda^{-4})$, are still dominant in this phase space, and the EFT expansion in $\Lambda^{-1}$ does not converge quickly. The limits are thus not valid in a general SM EFT scenario that includes additional $\Lambda^{-4}$ contributions due to dimension-eight operators.
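The distinction between the linear-only and linear-plus-quadratic fits can be made concrete with a toy $\chi^2$ scan. All template numbers below are hypothetical, and the simple diagonal $\chi^2$ stands in for the full profile likelihood with nuisance parameters; only the $\Delta\chi^2 < 3.84$ threshold for a one-parameter 95% interval (Wilks' theorem) is common to both:

```python
import numpy as np

# Toy per-bin templates (hypothetical, in fb): the SM prediction, the
# SM-EFT interference (linear in c/Lambda^2) and the squared
# dimension-six amplitude (quadratic in c/Lambda^2).
sm = np.array([100.0, 60.0, 25.0, 8.0])
lin = np.array([2.0, 3.0, 2.0, 1.0])
quad = np.array([1.0, 2.0, 3.0, 4.0])
data = sm.copy()            # pretend the data match the SM exactly
unc = np.sqrt(sm)           # toy per-bin uncertainties

def chi2(c, quadratic=True):
    """Chi-square of the data against the EFT-modified prediction."""
    pred = sm + c * lin + (c * c * quad if quadratic else 0.0)
    return float(np.sum(((data - pred) / unc) ** 2))

def interval_95(quadratic=True):
    """Scan c and keep the points with delta-chi2 below 3.84 (95% CL)."""
    grid = np.linspace(-20.0, 20.0, 4001)
    vals = np.array([chi2(c, quadratic) for c in grid])
    accepted = grid[vals - vals.min() < 3.84]
    return accepted[0], accepted[-1]

lo_lin, hi_lin = interval_95(quadratic=False)
lo_quad, hi_quad = interval_95(quadratic=True)
print(f"linear only:      [{lo_lin:+.2f}, {hi_lin:+.2f}]")
print(f"linear+quadratic: [{lo_quad:+.2f}, {hi_quad:+.2f}]")
```

In this toy the linear-only interval is symmetric around zero, while the quadratic term tightens and distorts it; comparing the two intervals, as done in Table 8, indicates how strongly the constraint relies on the $\mathcal{O}(\Lambda^{-4})$ contribution.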
The presented constraints on $c$, obtained accounting for the quadratic terms, are weaker than those obtained by the ATLAS measurement of $WW$ events with no associated jets [7]. There, a dataset corresponding to only 36 fb$^{-1}$ was analysed, and the results constrain $c/\Lambda^2$ to a 95% confidence interval with a width of 0.5 TeV$^{-2}$. Limits obtained from this measurement when including only the linear terms are improved relative to the previous measurement, for which the corresponding confidence interval has a width of 11 TeV$^{-2}$. The limits from such a linear fit are, however, an order of magnitude weaker than those obtained by the ATLAS analysis of electroweak production of dijets in association with a $Z$ boson [110].

Conclusion
The cross-section for the production of $W$-boson pairs decaying into $e^\pm\mu^\mp$ final states in $pp$ collisions at $\sqrt{s} = 13$ TeV is measured in a fiducial phase space that requires the presence of at least one hadronic jet with a transverse momentum of at least 30 GeV, providing jet-inclusive measurements in $W^+W^-$ events. The measurement is performed with data recorded by the ATLAS experiment at the LHC between 2015 and 2018, corresponding to an integrated luminosity of 139 fb$^{-1}$. The measured fiducial cross-section, $\sigma_{\mathrm{fid}} = 258 \pm 4\,(\mathrm{stat}) \pm 25\,(\mathrm{syst})$ fb, is found to be consistent with theoretical predictions. With a total uncertainty of 10%, this result represents a precise measurement of $W^+W^-$ production in association with jets at the LHC that probes a previously unexplored event topology. Differential cross-sections for $W^+W^-$+jets production are measured as a function of the kinematics of the final-state charged leptons, jets, and missing transverse momentum, and are compared with predictions from perturbative QCD calculations. The data agree well with the predictions in all differential distributions, up to the highest measured transverse momenta and for up to five jets. Dimension-six operators that produce anomalous triple-gauge-boson interactions are studied in a phase space that benefits from enhanced interference between the Standard Model amplitude and the anomalous amplitude.

Appendix A: Measurement at high $p_{\mathrm{T}}^{\mathrm{lead.\,lep.}}$
At high vector-boson $p_{\mathrm{T}}$, predictions for inclusive $WW$ events suffer from so-called 'giant $K$-factors', which correspond to large higher-order QCD and electroweak corrections [111]. These arise in part from event topologies similar to those in $W$+jets production, with the additional emission of a real $W$ boson from a hard jet.
In order to study kinematic configurations that are expected to be strongly affected by higher-order EW and QCD corrections, a sample of events is selected with an additional requirement on the leading-lepton $p_{\mathrm{T}}$. Table 9 lists the selected candidate events in this region, as well as the breakdown of the background estimates. Figure 10 shows the measured distributions at detector level in the final analysis binning, comparing the observed data with the signal prediction and the background estimate.
The unfolded distributions are shown in Figure 11. In general, the predictions are in good agreement with the measurement. Figure 12 shows the leading-lepton $p_{\mathrm{T}}$ and jet multiplicity distributions in the two $t\bar{t}$ control regions, which require exactly one and exactly two $b$-jets, respectively. The $b$-jet correlation factor for the two distributions is shown in Figure 13. Figure 14 shows the distribution for $p_{\mathrm{T}}^{\mathrm{lead.\,jet}} > 200$ GeV in the two control regions and for the top-enriched selection, together with the $b$-jet correlation factor. The excess of events predicted at high leading-lepton $p_{\mathrm{T}}$, in comparison with data, is corrected for by the data-driven estimate, and no discrepancy is seen in the top-enriched selection, as shown in Figure 2 in the main body. The jet multiplicity is well modelled up to five selected jets.

[10] CDF Collaboration, Observation of $W^+W^-$ production in $p\bar{p}$ collisions at