Measurements of $Z\gamma+$jets differential cross sections in $pp$ collisions at $\sqrt{s}=13$ TeV with the ATLAS detector

Differential cross-section measurements of $Z\gamma$ production in association with hadronic jets are presented, using the full 139 fb$^{-1}$ dataset of $\sqrt{s}=13$ TeV proton-proton collisions collected by the ATLAS detector during Run 2 of the LHC. Distributions are measured using events in which the $Z$ boson decays leptonically and the photon is usually radiated from an initial-state quark. Measurements are made in both one and two observables, including those sensitive to the hard scattering in the event and others which probe additional soft and collinear radiation. Different Standard Model predictions, from both parton-shower Monte Carlo simulation and fixed-order QCD calculations, are compared with the measurements. In general, good agreement is observed between data and predictions from MATRIX and MiNNLO$_\text{PS}$, as well as next-to-leading-order predictions from MadGraph5_aMC@NLO and Sherpa.


Introduction
Precision measurements of cross sections for the production of a $Z$ boson and a photon ($Z\gamma$) at the Large Hadron Collider (LHC) [1] play a crucial role in the study of the Standard Model (SM) and are sensitive to physics beyond the SM. Differential cross sections for $Z\gamma$ production in association with jet activity ($Z\gamma$+jets) can be used to test fixed-order perturbative QCD (pQCD) calculations and predictions with resummation of Sudakov logarithms [2]. This process is also sensitive to the parton distribution functions (PDFs) and can validate those PDFs extracted in global analyses [3]. In addition, the $Z\gamma$+jets differential cross sections can be used to constrain the Monte Carlo (MC) models, especially the parton-shower (PS) approximation [4].
In phase-space regions where the transverse momentum ($p_\text{T}^{Z\gamma}$) of the $Z\gamma$ system is much smaller than the mass ($m_Z$) of the $Z$ boson or $m_{Z\gamma}$, fixed-order QCD calculations are dominated by Sudakov-logarithm terms, due to soft and collinear emission, of the order of $\alpha_\text{s}^{n}\ln^{2n+1}(p_\text{T}^{Z\gamma}/m_{Z\gamma})$, where $n$ is the fixed order considered. These terms are usually treated by resummation [5,6], which can give very precise predictions at next-to-leading-logarithm (NLL) and up to next-to-next-to-next-to-leading-logarithm (N3LL) accuracy [2]. These resummation models can be tested in phase-space regions where the logarithmic terms dominate, i.e. in regions where the hard scale of the process is much larger than the value of the observable considered.
Such a phase-space region can be probed by simultaneously measuring two independent observables, providing a more complete description of the pattern of QCD emission [7]. This is done with two-dimensional (2D) distributions, measuring an observable sensitive to the hard scale, called the hard variable, as a function of another observable, called the resolution variable, which probes the additional soft radiation. The hard variable is an observable that is directly sensitive to the hard scale of the process and takes a non-zero value already at leading order (LO), e.g. $p_\text{T}^{\gamma}$, $p_\text{T}^{\ell\ell}$, $m_{\ell\ell\gamma}$, or any linear combination of these variables. A resolution variable, on the other hand, is an observable sensitive to the additional soft or collinear QCD radiation; the values of such observables, e.g. $p_\text{T}^{\ell\ell\gamma}$ or the number of jets ($N_\text{jets}$), are zero at LO and take non-zero values only beyond LO. An example of a 2D measurement is the differential cross section as a function of $p_\text{T}^{\ell\ell} - p_\text{T}^{\gamma}$ in different regions of $p_\text{T}^{\ell\ell} + p_\text{T}^{\gamma}$. In these measurements, $p_\text{T}^{\ell\ell} - p_\text{T}^{\gamma}$ is the resolution variable that allows effects near the Jacobian peak to be studied, whereas $p_\text{T}^{\ell\ell} + p_\text{T}^{\gamma}$ is the hard variable that tests the different scales [6].
Measurements of $Z\gamma$ production have been performed by experiments at LEP [8][9][10], the Tevatron [11,12], and the LHC [13][14][15]. No new physics or deviations from the predictions of the SM have been observed so far. An example of physics beyond the SM is given by a model that includes axion-like particles (ALPs) [16], which is particularly relevant for $Z\gamma$ production; these particles were introduced to solve the strong CP problem and are also considered as a dark-matter candidate [17]. Measurements of $Z\gamma$ production can help to constrain the ALP's couplings to the $Z$ boson and the photon [18], which define the most general CP-conserving Lagrangian describing the ALP's bosonic interactions [19]. Another case where $Z\gamma$ production can help is in the use of effective field theory [20]. Such models describe different theories beyond the SM that introduce new-physics states at a mass scale $\Lambda$ that is large in comparison with the electroweak scale, using gauge-invariant combinations of SM fields. Previous measurements have not found any evidence of new physics in the $Z\gamma$ final state. However, none of these measurements included any dedicated study of jet activity. Requiring the presence of jets in addition to the $Z\gamma$ pair leads to final-state configurations that enhance a region of phase space different from that studied in inclusive $Z\gamma$ production. Therefore, measurements of differential cross sections for $Z\gamma$+jets production are expected to provide additional sensitivity to constrain ALPs and other models for physics beyond the SM.
This paper presents measurements of differential cross sections as functions of QCD-related observables associated with the $Z\gamma$+jets process. The measurements are performed differentially in either one or two observables. The analysis uses the full dataset of proton-proton ($pp$) collisions at a centre-of-mass energy of $\sqrt{s} = 13$ TeV recorded by the ATLAS detector during Run 2 (2015-2018) of the LHC. The results presented here build upon a previous analysis performed by ATLAS [14], which focused on more inclusive observables. The measurements extend the published results by including the hadronic activity associated with the $Z\gamma$ system and by measuring double-differential cross sections.
As in the previous analysis, only $Z$ bosons decaying into pairs of charged leptons ($\ell^+\ell^-$, with $\ell = e, \mu$) are considered. This restriction makes it easier to fully reconstruct the final state with high resolution, and also provides a relatively large cross section with little background. Events are selected by requiring the invariant mass of the two leptons ($m_{\ell\ell}$) to be greater than 40 GeV, and the sum of the mass of the dilepton system and the mass of the $\ell\ell\gamma$ system ($m_{\ell\ell} + m_{\ell\ell\gamma}$) to be greater than 182 GeV. These selections define a phase space that is enriched in photons from initial-state radiation (ISR), as shown in Figure 1(a). In addition, these requirements reduce the contribution from final-state radiation (FSR), where the photons are radiated from the leptons, as shown in Figure 1(b). In the $m_{\ell\ell\gamma}$ vs $m_{\ell\ell}$ plane (see Figure 2 in Ref. [14]), the second requirement forms a diagonal straight line that separates FSR events from ISR events; this is because FSR events are expected to lie in the region with $m_{\ell\ell\gamma}$ around the nominal $Z$ boson mass, with $m_{\ell\ell}$ at lower values.
Figure 1: Diagrams for (a) $Z\gamma$ production via the ISR process and (b) $\ell\ell\gamma$ production via the FSR process.
The predictions of several MC models for $Z\gamma$ production, which include multileg matrix elements interfaced with parton-shower and hadronisation approximations, are compared with the measurements. Several models, which have different levels of precision, are considered: Sherpa 2.2.4 [21] at LO and Sherpa 2.2.11 [21] at next-to-leading order (NLO), MadGraph5_aMC@NLO at NLO [22], and MiNNLO$_\text{PS}$ at NNLO [23,24]. The predictions of the fixed-order QCD calculations by MATRIX [25,26] at NNLO are also compared with the data.

The ATLAS detector
The ATLAS experiment [27] at the LHC is a multipurpose particle detector with a forward-backward symmetric cylindrical geometry and a near $4\pi$ coverage in solid angle. 1 It consists of an inner tracking detector (ID) surrounded by a thin superconducting solenoid providing a 2 T axial magnetic field, electromagnetic and hadron calorimeters, and a muon spectrometer (MS). The inner tracking detector covers the pseudorapidity range $|\eta| < 2.5$. It consists of silicon pixel, silicon microstrip, and transition radiation tracking detectors. Lead/liquid-argon (LAr) sampling calorimeters provide electromagnetic (EM) energy measurements with high granularity. A steel/scintillator-tile hadron calorimeter covers the central pseudorapidity range ($|\eta| < 1.7$). The endcap and forward regions are instrumented with LAr calorimeters for both the EM and hadronic energy measurements up to $|\eta| = 4.9$. The muon spectrometer surrounds the calorimeters and is based on three large superconducting air-core toroidal magnets with eight coils each. The field integral of the toroids ranges between 2.0 and 6.0 T m across most of the detector. The muon spectrometer includes a system of precision tracking chambers and fast detectors for triggering. A two-level trigger system is used to select events. The first-level trigger is implemented in hardware and uses a subset of the detector information to accept events at a rate below 100 kHz. This is followed by a software-based trigger that reduces the accepted event rate to 1 kHz on average, depending on the data-taking conditions. An extensive software suite [28] is used in data simulation, in the reconstruction and analysis of real and simulated data, in detector operations, and in the trigger and data acquisition systems of the experiment.

Data and simulated samples
The data used in this analysis were obtained from $pp$ collisions produced by the LHC in Run 2, and after applying the data quality criteria [29], the total integrated luminosity recorded by the ATLAS detector is 139 fb$^{-1}$. The uncertainty in the luminosity is 1.7% [30], obtained from measurements with the LUCID-2 detector [31].
Three different MC samples are used to simulate the $Z\gamma$+jets process. The nominal sample was generated using the program Sherpa 2.2.11 to calculate matrix elements with up to one additional parton at NLO and up to three additional partons at LO. The matrix element calculation includes all diagrams at order $\alpha^3_{\text{EW}}$, where $\alpha_{\text{EW}}$ is the electroweak coupling constant. The merging of the matrix element and parton shower (PS) was performed with MEPS@NLO [32-35]. The NNPDF3.0nnlo [36] PDF set was used, with an additional set of tuned PS parameters developed by the Sherpa authors [21]. Frixione isolation [37] was applied to the photon, with the parameter choices $\delta_0 = 0.1$, $\epsilon = 0.1$ and $n = 2$. This sample requires the transverse momentum of the photon ($p_\text{T}^{\gamma}$) to be greater than 7 GeV. Throughout the paper the signal estimate refers to this sample, unless otherwise specified.
A second sample was produced using the program Sherpa 2.2.4, with matrix elements at LO accuracy in QCD for up to three additional parton emissions, matched and merged with the Sherpa parton shower based on Catani-Seymour dipole factorisation [38,39] using the MEPS@LO prescription [32][33][34][35]. The matrix element calculation includes all diagrams at order $\alpha^3_{\text{EW}}$. Samples were generated using the NNPDF3.0nnlo PDF set.

1 ATLAS uses a right-handed coordinate system with its origin at the nominal interaction point (IP) in the centre of the detector and the $z$-axis along the beam pipe. The $x$-axis points from the IP to the centre of the LHC ring, and the $y$-axis points upwards. Cylindrical coordinates ($r$, $\phi$) are used in the transverse plane, $\phi$ being the azimuthal angle around the $z$-axis. The pseudorapidity is defined in terms of the polar angle $\theta$ as $\eta = -\ln\tan(\theta/2)$. Angular distance is measured in units of $\Delta R \equiv \sqrt{(\Delta\eta)^2 + (\Delta\phi)^2}$.

For all these MC samples, pile-up from additional $pp$ collisions in the same and neighbouring bunch crossings was simulated by overlaying each MC event with a variable number of simulated inelastic $pp$ collisions generated using Pythia 8.186 with the ATLAS set of tuned parameters for minimum-bias events (the A3 tune) [56]. The MC events are weighted ('pile-up reweighting') to reproduce the distribution of the average number of interactions per bunch crossing observed in the data.

Events were selected using single-lepton triggers. For data recorded in 2015, the transverse-momentum threshold was 24 GeV for the electron trigger, and 20 GeV for the muon trigger. For data recorded during the period 2016-2018, these thresholds were both raised to 26 GeV and tighter isolation criteria were applied, to compensate for the increase in instantaneous luminosity. Triggers with a higher $p_\text{T}$ threshold, but looser isolation, are also used because they increase the total trigger efficiency. The trigger efficiency for events satisfying all the selection criteria is about 99%, as estimated using simulated signal samples.

Lepton, photon, and jet selections
Photons and electrons are reconstructed from energy clusters in the electromagnetic calorimeter (ECAL). Electron candidates are required to have a matching track in the ID. Photon candidates must have $|\eta| < 2.37$ and $E_\text{T} > 30$ GeV, while electron candidates must have $|\eta| < 2.47$ and $p_\text{T} > 25$ GeV. Both electron and photon candidates are rejected if they lie in the transition region between the barrel and endcaps of the ECAL ($1.37 < |\eta| < 1.52$). Electrons are identified using a likelihood function based on shower-shape variables in the ECAL, track variables, and the quality of the track-cluster matching. Electrons are required to satisfy the Medium criteria, as described in Ref. [66]. Photons are identified using shower-shape variables in the ECAL and are required to satisfy the Tight criteria [66]. Photons are classified as converted to electron-positron pairs if the ECAL cluster is matched to a conversion vertex formed by the tracks of oppositely charged particles, or by a single track consistent with having originated from a photon conversion. Photon candidates are classified as unconverted if it is not possible to match clusters to tracks. Both types of photons are used in this analysis, and the distinction between converted and unconverted photons has no impact on the result. The photon and electron energy scale is calibrated using $Z \rightarrow ee$ events, as described in Ref. [66].
Muons are reconstructed by matching the tracks in the MS with tracks in the ID. The momentum is obtained by combining the MS measurement, corrected for the energy deposited in the calorimeter, and the measurement in the ID. Muon candidates are also required to satisfy the Medium identification criterion, as described in Ref. [67]. This criterion is based on the number of hits matched to the muon's tracks reconstructed in the ID and the MS, and on the compatibility of the ID and MS measurements of the muon's transverse momentum. Muon candidates are required to have $|\eta| < 2.5$ and $p_\text{T} > 25$ GeV.
Electrons and muons must be compatible with originating from the primary vertex. This requirement is fulfilled by requiring that the transverse impact parameter ($d_0$) relative to the beam-spot, divided by its uncertainty ($\sigma(d_0)$), i.e. the significance, satisfy $|d_0/\sigma(d_0)| < 5$ for electrons and $|d_0/\sigma(d_0)| < 3$ for muons. Additionally, for both electrons and muons, the longitudinal impact parameter ($z_0$), i.e. the $z$-distance from the primary vertex to the point where $d_0$ is measured, must satisfy $|z_0 \sin\theta| < 0.5$ mm.
Leptons and photons are required to be isolated, i.e. without additional activity in their proximity. Isolation requirements are based on tracking information and calorimeter energy clusters. The track-based isolation variable $p_\text{T}^\text{iso}$ is computed as the scalar sum of the $p_\text{T}$ of nearby tracks with $p_\text{T} > 1$ GeV, excluding tracks associated with the lepton or photon candidate. The calorimeter-based variable $E_\text{T}^\text{iso}$ is obtained as the scalar sum of the transverse energies of nearby topological clusters [68], corrected for the energy deposited by the photon or lepton candidate itself and the contribution from the underlying event and pile-up [69,70].
Photons must satisfy an isolation criterion, as described in Ref. [66], with $p_\text{T}^\text{iso}/E_\text{T}^{\gamma} < 0.05$ and $E_\text{T}^\text{iso}/E_\text{T}^{\gamma} < 0.065$ in a cone of size $\Delta R = 0.2$ around the photon candidate, where $E_\text{T}^{\gamma}$ is the transverse energy of the photon. Electrons must satisfy $p_\text{T}^\text{iso}/p_\text{T} < 0.15$ in a cone of $p_\text{T}$-dependent size up to $\Delta R = 0.2$ around the electron candidate, and $E_\text{T}^\text{iso}/p_\text{T} < 0.2$ in a cone of size $\Delta R = 0.2$. Muon isolation [67] requires $p_\text{T}^\text{iso}/p_\text{T} < 0.15$ in a cone of $p_\text{T}$-dependent size up to $\Delta R = 0.3$ ($\Delta R = 0.2$) for muons with $p_\text{T}$ less (greater) than 50 GeV, and $E_\text{T}^\text{iso}/p_\text{T} < 0.3$ in a cone of fixed size $\Delta R = 0.2$.

Jets are reconstructed with the anti-$k_t$ algorithm [71,72] with a radius parameter of $R = 0.4$, using a particle-flow [73] procedure, with clusters of energy deposited in the calorimeter as inputs. Jets are calibrated and their energy is corrected to account for detector effects, using methods based on MC simulation and in-situ techniques [74]. Jets with $p_\text{T} < 60$ GeV and $|\eta| < 2.4$ are removed if they are identified as pile-up jets by the jet vertex tagger (JVT) [75]. Jets are required to have $p_\text{T} > 30$ GeV if $|\eta| < 2.5$, or $p_\text{T} > 50$ GeV if $|\eta| > 2.5$, to further suppress pile-up. Distributions involving jets require at least one jet, unless otherwise explicitly stated.

Signal region and control region definitions
The signal region (SR) is defined by events with at least two opposite-sign (OS) same-flavour (SF) leptons and a photon. The leading lepton (the one with the highest transverse momentum) is required to have $p_\text{T} > 30$ GeV. Events must also have at least one photon with $E_\text{T} > 30$ GeV. Events are further selected by requiring $m_{\ell\ell} > 40$ GeV, to avoid low-mass resonances. As mentioned in Section 1, FSR events are suppressed by requiring that the sum of the invariant mass of the leptons and the invariant mass of the leptons and the photon be greater than twice the mass of the $Z$ boson, i.e. $m_{\ell\ell} + m_{\ell\ell\gamma} > 182$ GeV. In the $m_{\ell\ell\gamma}$ vs $m_{\ell\ell}$ plane, this requirement is a diagonal straight line that separates FSR and ISR events, since FSR events are expected to lie in the region with $m_{\ell\ell\gamma} \sim 90$ GeV and $m_{\ell\ell} < 90$ GeV (see Figure 2 in Ref. [14]).
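As an illustration, the selection above can be written as a simple event-level predicate. This is a minimal sketch rather than the analysis code; the function name and arguments are hypothetical, and all momenta and masses are in GeV.

```python
def passes_signal_region(lead_lep_pt, photon_et, m_ll, m_llg,
                         same_flavour, opposite_sign):
    """Sketch of the SR selection: an OS SF lepton pair, leading-lepton
    pT > 30 GeV, photon ET > 30 GeV, m_ll > 40 GeV, and the
    FSR-suppression cut m_ll + m_llg > 182 GeV (about twice the Z mass)."""
    return (opposite_sign and same_flavour
            and lead_lep_pt > 30.0
            and photon_et > 30.0
            and m_ll > 40.0
            and (m_ll + m_llg) > 182.0)
```

An FSR-like configuration with $m_{\ell\ell\gamma} \sim 90$ GeV and $m_{\ell\ell}$ below the $Z$ mass fails the last requirement, which is the intended behaviour of the diagonal cut.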
The $t\bar{t}\gamma$ background modelling is checked in a dedicated control region ($t\bar{t}\gamma$-CR) obtained by applying all of the SR requirements except the SF-lepton requirement, which is replaced by a requirement of different-flavour (DF) leptons. The signature in the $t\bar{t}\gamma$-CR is then $e\mu\gamma$. Table 2 shows a summary of these selection criteria.

Measured observables
Differential cross sections are measured for the following one-dimensional observables:
• $N_\text{jets}$, the number of jets
• $p_\text{T}^{\ell\ell}$, the transverse momentum of the two-lepton system
• $p_\text{T}^{\ell\ell} - p_\text{T}^{\gamma}$, the difference between the transverse momenta of the $\ell\ell$ system ($p_\text{T}^{\ell\ell}$) and the photon ($p_\text{T}^{\gamma}$)
• $p_\text{T}^{\ell\ell} + p_\text{T}^{\gamma}$, the scalar sum of the transverse momenta of the $\ell\ell$ system and the photon
• $p_\text{T}^{\ell\ell\gamma j}$, the transverse momentum of the $\ell\ell\gamma j$ system.
The QCD-sensitive 2D observables measured in this paper include:
• The resolution variable $p_\text{T}^{\ell\ell\gamma}/m_{\ell\ell\gamma}$, the ratio of the transverse momentum of the $\ell\ell\gamma$ system to its mass, measured in bins of the hard variable $m_{\ell\ell\gamma}$.
For the 2D distributions, computational complications in the unfolding of a 2D distribution are avoided by unfolding the resolution observable in wide bins of the hard observable. These wide bins are chosen such that the migration effects in the hard observable are negligible.

Background estimation
The main background to the $Z\gamma$+jets signal arises from $Z$+jets events, in which one of the jets is misidentified as a photon. This background is estimated using a data-driven method. Pile-up events, in which the leptons and the photon originate from two different interactions during the same bunch crossing, also constitute a background and are estimated using a data-driven method. Another large background, especially at high jet multiplicity, is $t\bar{t}\gamma$ production, where the top-quark decays can also produce same-flavour leptons. The $t\bar{t}\gamma$ background is estimated using MC samples normalised to data in the dedicated $t\bar{t}\gamma$-CR defined in Section 6.3. Other small backgrounds that can also produce the same signature as the signal, such as triboson production, are estimated using MC samples. The background from diboson events, such as $WZ(\rightarrow \ell\nu\ell\ell)$ and $ZZ(\rightarrow \ell\ell\ell\ell)$, where one electron is misidentified as a photon, is also taken into account using MC samples.

$Z$+jets background
A two-dimensional sideband method [69], similar to the one in Ref. [14], is used to estimate the $Z$+jets background in each bin of each distribution. In addition to the SR and the $t\bar{t}\gamma$-CR, three $Z$+jets-CRs are created to estimate this background by inverting the isolation and/or identification criteria for the photon. Photons that fail to satisfy the Tight identification criteria must still satisfy a loose identification criterion, in which the requirements on four of the EM calorimeter shower-shape variables are removed, as described in Ref. [80]. The photon isolation is modified such that only the calorimeter-based component is considered, while the track-based isolation is applied in all regions. Photon candidates fail to satisfy the isolation criterion when $E_\text{T}^\text{iso}$ exceeds the nominal requirement by more than an energy gap $E_\text{T}^\text{gap} = 2$ GeV, which helps to reduce the number of $Z\gamma$+jets signal events leaking into the $Z$+jets-CRs (signal leakage).
The $Z$+jets-CRs described above are dominated by $Z$+jets events. The leakage of signal events into the $Z$+jets-CRs is corrected for using signal leakage fractions estimated with the MC simulation. These fractions are inclusively about 6% (1.4%) for the control region with modified identification (isolation) criteria, and less than 0.2% for the control region where both the identification and isolation criteria are modified. Backgrounds from other processes are subtracted using the MC simulation of each process. The purity in the CR is 0.90 ± 0.02, with values varying from 0.86 to 0.92, depending on the exact bin. The yields of $Z$+jets events in the SR can then be derived from the numbers of events in the SR and in the three $Z$+jets-CRs, using the formulas described in Ref. [69].
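In outline, the two-dimensional sideband method combines the yields in the SR (region A) with those in the three CRs (B: inverted isolation; C: inverted identification; D: both inverted). The sketch below shows the core relation only, with hypothetical names, and ignores the signal-leakage and other-background terms that the full formulas of Ref. [69] treat simultaneously.

```python
def zjets_in_sr(n_b, n_c, n_d, r=1.30):
    """Two-dimensional sideband (ABCD) estimate of the Z+jets yield in
    region A (the SR): N_A(bkg) = R * N_B * N_C / N_D, where R is the
    isolation-identification correlation factor (R = 1 for uncorrelated
    variables). Inputs are assumed to be leakage- and background-corrected
    CR yields; the default R = 1.30 is the inclusive value quoted in the text."""
    return r * n_b * n_c / n_d
```

With uncorrelated variables ($R = 1$), the estimate reduces to the familiar $N_B N_C / N_D$ sideband extrapolation.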
Possible correlations between the isolation and identification variables are estimated with $Z$+jets MC samples through a correlation factor $R$, defined as the ratio of the fraction of $Z$+jets events satisfying the photon isolation requirement $E_\text{T}^\text{iso} < 0.065 \times E_\text{T}^{\gamma}$ in events satisfying the identification criteria, to the corresponding fraction in events not satisfying the identification criteria. In the absence of correlation, this ratio is equal to unity. To preserve the correlation and reduce the statistical uncertainties, $R$ is computed in larger bins than those used in the sideband method, or integrated, depending on the observable. Results of the $Z$+jets estimation with larger intervals for the correlation computation are compatible within uncertainties with the results obtained with finer binning, but have reduced systematic and statistical uncertainties.
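The correlation factor follows directly from its definition as a double ratio; a minimal sketch, with illustrative argument names:

```python
def correlation_factor(n_id_iso, n_id, n_noid_iso, n_noid):
    """R = [fraction of events passing isolation among those passing
    identification] / [the same fraction among those failing identification].
    R = 1 in the absence of correlation between the two variables."""
    return (n_id_iso / n_id) / (n_noid_iso / n_noid)
```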
The uncertainty in the correlation factor is obtained by varying the definition of the $Z$+jets-CRs in both data and MC simulation. The $E_\text{T}^\text{gap}$ requirement is varied by ±1 GeV, and different loose identification criteria are used, for which three or five of the EM calorimeter shower-shape variables are removed from the Tight criteria instead of four. In the inclusive phase space, the correlation factor is $R = 1.30 \pm 0.04$ (stat.) $\pm 0.23$ (syst.), estimated using the MC samples, as mentioned above. A cross-check of this estimate is performed by computing $R$ in a $Z$+jets-CR where photons also fail the track isolation; in this CR the estimate is $R = 1.29 \pm 0.02$ (stat.), in agreement with the nominal estimate. Another source of uncertainty arises from the estimation of the $Z\gamma$+jets signal leakage into the $Z$+jets-CRs. The signal leakage fractions are computed using Sherpa 2.2.11 and are found to be small; the uncertainty is estimated by using MadGraph5_aMC@NLO instead of Sherpa. Additional uncertainties in $R$ arise from the subtraction of other backgrounds (such as diboson or $t\bar{t}\gamma$ events). For these, the uncertainties in the cross sections are propagated to the final $Z$+jets estimate. The total uncertainty (including statistical uncertainties) in the integrated $Z$+jets estimate is 22% and is dominated by the uncertainty of the data-driven method.

Pile-up background
The selected photons may originate from different collisions in the same bunch crossing, because no requirement is placed on the longitudinal position ($z_\gamma$) of the photon's origin with respect to the primary vertex, since it is not a well-measured quantity. The $z_\gamma$ position of the reconstructed photon is determined using a weighted mean of the intersections with the beam line of the directions obtained from the electromagnetic clusters, taking into account the longitudinal segmentation of the calorimeter, with a constraint from the beam-spot position; it has a typical resolution of 15 mm.
This background is estimated using a method similar to the one described in Ref. [14], by evaluating the fraction of pile-up events in data ($f_\text{PU}$) after the $\ell\ell\gamma$ selection; the method is briefly described here. To select photons with a better position resolution $\sigma(z_\gamma)$, only photons that converted to electron-positron pairs with two tracks in the pixel detector are considered. Additionally, the radial conversion position of the photons must be between 5 mm from the beam-spot (outside the beam pipe) and 125 mm (before the end of the pixel detector). The $f_\text{PU}$ of converted photons is assumed to be the same as for unconverted ones. This assumption is checked using a sample of signal MC events: in this sample, the fraction of events with 'MC truth'-matched photons is the same for events with and without photon conversion. This is expected, since the two effects (i.e. pile-up fraction and conversion fraction) should not be correlated.
The primary vertex position $z_\text{vtx}$ has a Gaussian distribution with a measured width of $\sigma(z_\text{vtx}) \sim 35$ mm [14], corresponding to the width of the luminous region. The fraction $f_\text{PU}$ can then be written as
$$f_\text{PU} = \frac{N^\text{PU}_\text{data} - N^\text{PU}_\text{MC}}{P_\text{PU} \cdot N_\text{data}},$$
where $N^\text{PU}_\text{data(MC)}$ is the number of data (MC) events in a region dominated by pile-up, defined as the region with $|\Delta z| = |z_\text{vtx} - z_\gamma| > 50$ mm. Since the pile-up events are Gaussian-distributed with a width $\sigma(z_\text{vtx} - z_\gamma) = \sqrt{2}\,\sigma(z_\text{vtx}) \sim 50$ mm, the probability of observing events with $|\Delta z| > 50$ mm is estimated to be $P_\text{PU} = 0.32$. The term $N^\text{PU}_\text{MC}$ describes the MC events where the $Z$ boson and the photon come from the same collision, and is taken from signal MC simulation. The MC sample is normalised to the data with $|\Delta z| < 5$ mm. The $|z_\text{vtx} - z_\gamma|$ distribution is shown in Figure 3 of Ref. [14]. To obtain a better description of the pile-up events in the differential observables, $f_\text{PU}$ is computed as a function of $N_\text{jets}$ and $E_\text{T}^{\gamma}$. The estimated $f_\text{PU}$ varies from 0.02 to 0.08.
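The quoted tail probability can be reproduced with a short calculation: for a zero-mean Gaussian $\Delta z$ of width 50 mm, $P(|\Delta z| > 50\ \text{mm}) = 1 - \mathrm{erf}(1/\sqrt{2}) \approx 0.32$. The sketch below assumes the pile-up fraction takes the form $f_\text{PU} = (N^\text{PU}_\text{data} - N^\text{PU}_\text{MC})/(P_\text{PU}\,N_\text{data})$; the names are illustrative.

```python
from math import erf, sqrt

def tail_probability(cut_mm=50.0, sigma_mm=50.0):
    """P(|dz| > cut) for a zero-mean Gaussian of width sigma."""
    return 1.0 - erf(cut_mm / (sigma_mm * sqrt(2.0)))

def pileup_fraction(n_data_tail, n_mc_tail, n_data_total, p_pu):
    """Assumed form: f_PU = (N_data^PU - N_MC^PU) / (P_PU * N_data),
    where the 'tail' counts are events with |dz| > 50 mm."""
    return (n_data_tail - n_mc_tail) / (p_pu * n_data_total)
```

With the Gaussian widths quoted in the text, `tail_probability()` evaluates to about 0.317, consistent with the rounded value $P_\text{PU} = 0.32$.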
The procedure described above gives the total fraction of pile-up events in bins of $E_\text{T}^{\gamma}$ and $N_\text{jets}$, while the shape for the other distributions is taken from the MC samples at particle level, as described in the following. A sample is built by adding together a generated single-photon sample and a generated $Z$+jets sample. Only jets from the $Z$+jets sample are considered, since in data and MC events the jets are required to be matched to the vertex with the highest $\sum p_\text{T}^2$ of associated tracks through the JVT requirement, which is likely to reject the jets produced in association with the photon.
The difference between the nominal particle-level sample and a pile-up-enriched sample is assigned as an uncertainty. This additional pile-up-enriched sample is selected from the data, by keeping only events where $z_\gamma$ is closer to the vertex with the second-highest $\sum p_\text{T}^2$ of associated tracks than to the primary vertex. By definition, these are pile-up-like events. Additionally, only for observables that depend on jets, the difference between this particle-level distribution and the one obtained by considering all the jets is added as a further uncertainty.

Other backgrounds
Background contributions from $t\bar{t}\gamma$, triboson, and diboson events are estimated with simulated samples. Since the $t\bar{t}\gamma$ process is about four times larger than all other backgrounds, its modelling is checked using the dedicated $t\bar{t}\gamma$-CR defined in Section 4.2. The $t\bar{t}\gamma$ MC sample is scaled by a normalisation factor of 1.44, and a relative uncertainty of 15% is assigned to this normalisation [81]. Figure 2 shows a comparison between data and MC events in the $t\bar{t}\gamma$-CR as functions of $N_\text{jets}$ and $p_\text{T}^{\ell\ell\gamma}/\sqrt{H_\text{T}}$. The $Z$+jets estimate in this region is obtained using the same method as described in Section 6.1, but using $e\mu$ events instead of $ee$/$\mu\mu$ events. The correlation factor in the $Z$+jets background estimation is fixed to $R = 1.30 \pm 0.04$ (stat.), as explained in Section 6.1. Another background also present in this region comes from diboson events where one lepton is misidentified as a photon ($WZ \rightarrow \ell\nu\ell\ell$). A 30% uncertainty is assigned to this background, accounting for uncertainties in the inclusive cross-sections due to possible higher-order contributions. Good agreement is seen between data and MC events in the $t\bar{t}\gamma$-CR. The largest discrepancy is seen in the 0-jet bin. Such mismodelling has a negligible impact on the analysis, since the contribution of $t\bar{t}\gamma$ and diboson processes for events with no jets in the SR is more than one order of magnitude smaller than the signal.
The other backgrounds (triboson and diboson production) contribute around 1% of the total expected yield in the SR. For this reason, they are estimated directly from MC simulation. Other, even smaller, backgrounds are neglected, since they contribute less than 0.03% of the events in total. Table 3 shows the data event yield and the signal and background estimates in the SR. The Sherpa 2.2.11 MC sample is used for the $Z\gamma$+jets process, together with the purely electroweak production of $Z\gamma jj$. Table 3 includes the statistical uncertainties, experimental uncertainties (see Section 8), and background systematic uncertainties (as described in this section). The $ee\gamma$ and $\mu\mu\gamma$ event yields are compatible with each other, once differences in efficiency are accounted for. Figures 3 and 4 show a comparison between the data and the expected SM events in a subset of distributions. The Sherpa 2.2.11 signal sample is scaled by a normalisation factor of 1.08 to match the rate in the data. The normalisation factor is obtained from the ratio of the measured yields to the predicted yields from Sherpa 2.2.11, as shown in Table 3. The hatched band in the figures shows the impact of the systematic uncertainties, as also shown in Table 3. After the normalisation of the backgrounds, good agreement is observed between the data and the SM estimates. Observables inclusive in the number of jets are well modelled; in some bins of some observables, small differences are observed which, when comparing the measured differential cross sections with the predictions, are covered by the theoretical uncertainties (see Section 9).

Cross-section determination

Fiducial region at particle level
The measurements are unfolded to a fiducial phase space defined by particle-level quantities. The fiducial phase space in this analysis is built to be as close as possible to the detector-level selection discussed in Section 4, with selection criteria that minimise the extrapolation and allow comparisons with theoretical predictions. The phase space is defined for $Z\gamma \rightarrow \ell^+\ell^-\gamma$ events, with $\ell$ being either an electron or a muon. Only stable particles (with a mean lifetime corresponding to $c\tau > 10$ mm) are used in the definition of the fiducial region. Additionally, only 'prompt' dressed leptons and prompt photons (only those that do not originate from hadron decays) are considered.
Leptons are required to pass the same $p_\text{T}$ requirements as in the SR: $p_\text{T}(\ell_1) > 30$ GeV and $p_\text{T}(\ell_2) > 25$ GeV. However, the $\eta$ requirements are different: for both electrons and muons, $|\eta(\ell)| < 2.47$ is required, since at particle level the discontinuities in the detector are not present. A particle-level isolation requirement is applied to photons: the scalar sum of the $p_\text{T}$ of all particles, except muons and neutrinos, within a cone of size $\Delta R = 0.2$ around the photon must be less than 7% of the transverse energy of the photon, $E_\text{T}^{\gamma}$. This selection is the same as in Ref. [14] and is optimised to achieve the same level of acceptance in both the detector-level and particle-level selections. Photons are rejected if they are within $\Delta R = 0.4$ of any lepton. Jets are obtained by clustering stable particles, excluding prompt leptons, using the anti-$k_t$ algorithm with a radius parameter of $R = 0.4$. Photons within a cone of size $\Delta R = 0.1$ around prompt leptons are also excluded from the clustering. Jets are defined in the same way as for the SR, by requiring $p_\text{T} > 30$ GeV for $|\eta| < 2.5$ and $p_\text{T} > 50$ GeV for $2.5 < |\eta| < 4.5$. Jets are rejected if they are within $\Delta R = 0.4$ of any photon. As in the SR, the $m_{\ell\ell} > 40$ GeV and $m_{\ell\ell} + m_{\ell\ell\gamma} > 182$ GeV requirements are applied. A pair of opposite-sign, same-flavour leptons is selected, and no additional veto on the number of leptons is applied. Table 4 summarises the fiducial selection used in the analysis.

Fiducial and differential cross sections
The fiducial cross section is evaluated in the fiducial region described in the previous subsection. It is obtained with the following formula:
$$\sigma_\text{fid} = \frac{N_\text{obs} - N_\text{bkg}}{C \cdot L},$$
where $N_\text{obs}$ and $N_\text{bkg}$ are the observed number of events and the expected number of background events, respectively, $L$ is the integrated luminosity, and $C$ is the correction factor which accounts for detector inefficiency and resolution effects. Events from inside (outside) the fiducial region at particle level that migrate outside (inside) the SR are also accounted for by this correction factor. The $C$ factor is calculated as the number of simulated $Z\gamma$+jets events entering the detector-level SR divided by the number of simulated $Z\gamma$+jets events entering the fiducial volume. The inclusive $C$ factor is obtained with Sherpa 2.2.11 MC samples, combining the electron and muon channels; its value is found to be $C = 0.543 \pm 0.001\,\text{(stat)} \pm 0.020\,\text{(syst)}$, with the systematic uncertainties estimated as described in Section 8. Differential cross sections are evaluated in the fiducial signal region for several observables. The event yields in the $e^+e^-\gamma$ and $\mu^+\mu^-\gamma$ decay channels are added together and unfolded in a single step. The distributions are unfolded using an iterative Bayesian method [82], with two iterations as the nominal number. The $Z\gamma$+jets events simulated with Sherpa 2.2.11 are used to produce the response matrices needed to correct for the migration between bins in the detector- and particle-level distributions. These migrations are mainly due to the jet reconstruction. Additionally, the unfolding corrects for the fiducial and reconstruction efficiencies: respectively, the probability that particle-level events satisfy the detector-level SR criteria, and the probability that detector-level events originate from outside the fiducial region.
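The fiducial cross-section formula and the iterative Bayesian (D'Agostini) unfolding can be sketched as below. This is a simplified illustration under stated assumptions (a toy response matrix, no background subtraction or efficiency systematics), not the analysis implementation:

```python
import numpy as np

def fiducial_xsec(n_obs, n_bkg, c_factor, lumi):
    """sigma_fid = (N_obs - N_bkg) / (C * L)."""
    return (n_obs - n_bkg) / (c_factor * lumi)

def bayes_unfold(data, response, n_iter=2):
    """Iterative Bayesian (D'Agostini) unfolding.
    response[i, j] = P(reco bin i | truth bin j); each column sums to
    the reconstruction efficiency of that truth bin."""
    eff = response.sum(axis=0)                  # per-truth-bin efficiency
    prior = np.full(response.shape[1], 1.0 / response.shape[1])
    for _ in range(n_iter):
        folded = response @ prior               # expected reco-level shape
        # Bayes' theorem with the current prior: P(truth j | reco i)
        posterior = response * prior / folded[:, None]
        unfolded = (posterior.T @ data) / eff
        prior = unfolded / unfolded.sum()       # updated prior
    return unfolded
```

With a fully efficient response, the total number of events is conserved by construction, and repeated iterations pull the result from the flat prior towards the true spectrum.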

Systematic uncertainties
Systematic uncertainties from several different sources affect this measurement: experimental uncertainties due to detector reconstruction, uncertainties in the background estimate (from simulated samples as well as data-driven methods, as described in Section 6), systematic uncertainties in the unfolding, and theoretical uncertainties in the signal prediction. The individual sources of uncertainty are varied by $\pm 1\sigma$ in the MC simulations and propagated through the analysis separately. The uncertainties are propagated to the cross sections by modifying the migration matrix and computing the resulting deviation from the nominal cross section; this deviation is taken as the systematic uncertainty.
Experimental uncertainties account for the finite resolution of the objects reconstructed by the ATLAS detector, their calibration, and the modelling of the reconstruction in the simulation. Uncertainties affecting the electrons and photons include the uncertainties in the energy scale and resolution [66], while for muons, uncertainties in the momentum resolution are considered [67]. Both leptons and photons have uncertainties in the identification and isolation efficiencies [66,67]. Uncertainties in the lepton trigger efficiencies are also considered [64,65]. Jet uncertainties account for both the energy scale (JES) and the resolution (JER) [83]. The JES uncertainties take into account detector modelling, statistical effects, flavour composition, and the description of pile-up jets. The JVT efficiency uncertainties are also considered [75]. Additional uncertainties are added to take into account the modelling of the number of pile-up collisions. An uncertainty of 1.7% in the total integrated luminosity is considered in this analysis.
The statistical uncertainty in the measured cross sections is evaluated using 'toy experiments' (a bootstrap technique [84]). Statistically independent replicas of the data distributions are generated and each one is unfolded; the root mean square (RMS) of the unfolded replica distribution in each bin is used as the uncertainty. For the MC samples, the limited number of simulated events mainly affects the estimation of the migration matrices. This statistical uncertainty is also calculated using toy experiments and found to be small.
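The bootstrap procedure can be sketched as follows; this is a schematic illustration (Poisson toys around the observed counts, an arbitrary `unfold` callable), not the analysis code:

```python
import numpy as np

def bootstrap_uncertainty(counts, unfold, n_toys=1000, seed=1):
    """Poisson-fluctuate each bin to build statistically independent
    replicas of the measured distribution, unfold each replica, and
    take the per-bin RMS of the unfolded replicas as the statistical
    uncertainty."""
    rng = np.random.default_rng(seed)
    toys = rng.poisson(lam=counts, size=(n_toys, len(counts)))
    unfolded = np.array([unfold(toy) for toy in toys])
    return unfolded.std(axis=0)
```

For an identity unfolding the per-bin RMS approaches the Poisson expectation $\sqrt{N}$ as the number of toys grows.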
The unfolding procedure relies on the choice of a simulated signal sample, and this choice can bias the results; a systematic uncertainty accounting for this effect is obtained through a data-driven closure test. The simulated signal distributions are reweighted with a smooth function obtained by requiring that the detector-level distribution match the data (after background subtraction). The reweighted detector-level distribution is then unfolded, treating this sample as pseudo-data and using the migration matrix from the reweighted distributions. The uncertainty is obtained by comparing this result with the nominal unfolded result.
Systematic uncertainties in the cross sections due to the theoretical modelling are obtained by unfolding the data with a migration matrix calculated using alternative signal simulations. Uncertainties in the signal predictions are due to missing higher-order contributions in the cross-section calculation, the uncertainties from the PDF choice, and the uncertainties in $\alpha_\mathrm{s}$. The effect of QCD scale uncertainties is estimated by halving and doubling the renormalisation and factorisation scales in the signal simulation relative to their nominal values. Uncertainties are obtained by taking an envelope: in each bin the largest resulting change is used as the uncertainty. Additional uncertainties are added to account for the choice of a specific PDF in the cross-section calculation. Following the PDF4LHC recommendation [85], the NNPDF3.0nnlo_as_0118 PDF set is used as the nominal set, and is compared with results obtained with the PDF variation weights stored in the Sherpa samples; an envelope is then taken of all the variations. A similar approach is used for the $\alpha_\mathrm{s}$ variations, where the NNPDF3.0nnlo_as_0117 and NNPDF3.0nnlo_as_0119 PDF sets are used. For the theory predictions, uncertainties are obtained by taking an envelope of the differences between the nominal unfolded results and their variations.
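The envelope construction used for the scale, PDF and $\alpha_\mathrm{s}$ variations can be sketched as a one-liner; the bin contents below are hypothetical:

```python
import numpy as np

def envelope(nominal, variations):
    """Per-bin envelope: the largest absolute deviation of any
    variation from the nominal result is taken as the uncertainty."""
    deltas = np.abs(np.asarray(variations) - np.asarray(nominal))
    return deltas.max(axis=0)

# e.g. nominal bin contents and two scale variations (hypothetical):
#   envelope([10.0, 20.0], [[9.0, 22.0], [12.0, 19.0]]) -> [2.0, 2.0]
```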
Uncertainties in the background estimates are taken to be 30% for the diboson cross section and 15% for the $t\bar{t}\gamma$ cross section (corresponding to the uncertainty in the normalisation factor from the LO cross section to the NLO cross section [51]). The 30% uncertainty in the diboson cross section covers both the nominal uncertainty [86] and the typical size of the mismodelling of non-prompt objects. The systematic uncertainties for the $Z$ + jets background are estimated as described in Section 6.1 and propagated through the unfolding framework. Table 5 shows the breakdown of the systematic uncertainties in the cross section as a function of $N_\text{jets}$. The last row in the table is the total relative uncertainty, obtained as the sum in quadrature of each systematic uncertainty and the statistical uncertainty. The increase in jet systematic uncertainties with the number of jets is due to the modelling of pile-up jets, forward-jet modelling, and statistical fluctuations. The combination of the pile-up-jet tagger and the jet definition described in Section 4 reduces the uncertainty in the bins with at least one jet.
The measured differential cross sections as functions of the different observables are shown in Figures 5 to 12. To obtain these results, the unfolding uses as signal the $Z\gamma$+jets MC samples added together with the MC sample for purely electroweak production of $Z\gamma$. The theoretical predictions from Sherpa 2.2.4 and Sherpa 2.2.11 (both using MEPS@LO merging and the NNPDF3.0nnlo PDF set) and MadGraph5_aMC@NLO (interfaced with Pythia 8.212 and using the NNPDF3.0nlo PDF set), and the NNLO predictions from MiNNLO$_\text{PS}$ (using the NNPDF3.0nnlo PDF set) and MATRIX (using the CT14nnlo PDF set), are compared with the measurements in these figures. The purely electroweak production of $Z\gamma$ has not been added to the theoretical predictions shown in these figures; this contribution is estimated to be around 1%. In general, the Sherpa samples underestimate the total cross section, with Sherpa 2.2.11 (NLO) being higher than Sherpa 2.2.4, while MadGraph5_aMC@NLO, MiNNLO$_\text{PS}$ and MATRIX show better agreement for the total cross section. Between the two Sherpa samples, Sherpa 2.2.11 also shows generally better agreement in the distribution shapes, especially for the number of jets.
The boson transverse momentum $p_\mathrm{T}^{\ell\ell}$ is a fundamental observable, correlated with the jet activity; its difference from $E_\mathrm{T}^\gamma$ probes pQCD over a wide range of scales, while $p_\mathrm{T}^{\ell\ell} + E_\mathrm{T}^\gamma$ describes the hard scale of the process. Figure 5 shows the differential cross sections as functions of the observables $p_\mathrm{T}^{\ell\ell}$, $p_\mathrm{T}^{\ell\ell} - E_\mathrm{T}^\gamma$, $p_\mathrm{T}^{\ell\ell} + E_\mathrm{T}^\gamma$, and $\Delta\phi(\ell,\ell)$. All the predictions show good agreement with the measurements, although Sherpa generally underestimates the data. The MATRIX calculations are also in good agreement with the measurements.
Jet multiplicity is a fundamental observable used to probe QCD and additional soft radiation [24]. The ratio $p_\mathrm{T}^{\text{jet2}}/p_\mathrm{T}^{\text{jet1}}$ in particular tests the limits of PS effects and the resummation of Sudakov logarithms. Differential cross sections for jet observables are shown in Figure 6. The differential cross section is dominated by events with zero jets and falls off rapidly with increasing QCD emission. The leading and subleading jets are mostly produced with similar $p_\mathrm{T}$; however, the cross section is not zero at $p_\mathrm{T}^{\text{jet2}}/p_\mathrm{T}^{\text{jet1}} = 0.1$, where the subleading jet carries only 10% of the leading-jet $p_\mathrm{T}$. In general, the MC samples are in good agreement with the data; however, at high jet multiplicity and high jet $p_\mathrm{T}$, Sherpa 2.2.4 predicts higher yields than Sherpa 2.2.11, while MadGraph5_aMC@NLO has lower yields in $N_\text{jets}$ but is comparable to the Sherpa samples in the leading ($p_\mathrm{T}^{\text{jet1}}$) and subleading ($p_\mathrm{T}^{\text{jet2}}$) jet momenta. The MiNNLO$_\text{PS}$ calculation instead predicts softer jets and lower jet multiplicity, while the MATRIX prediction models the jet $p_\mathrm{T}$ spectrum better but predicts higher jet multiplicity. It is worth noting that MATRIX produces no more than two jets, so the last bin is empty for this calculation. The ratio $p_\mathrm{T}^{\text{jet2}}/p_\mathrm{T}^{\text{jet1}}$ is equally well described by both Sherpa models and MadGraph5_aMC@NLO. The MATRIX prediction is also in good agreement, while MiNNLO$_\text{PS}$ underestimates the data.
The invariant mass of the two leading jets, $m_{jj}$, is an important observable that describes the hard scale of the process, and its precise modelling is fundamental for estimating the QCD background in measurements of purely electroweak production or in searches for new physics [87]. The $m_{jj}$ distribution in Figure 7 is generally well modelled, except by MiNNLO$_\text{PS}$, which underestimates the highest bins. The MATRIX prediction shows good agreement, except for some overestimation in a few bins between 60 and 100 GeV. The invariant mass $m_{\ell\ell\gamma}$ is sensitive to the hard scale of the process and is also well modelled by the predictions, except for the last bins in the case of MiNNLO$_\text{PS}$, where an underestimation is observed. The MATRIX prediction also shows good agreement, with a small overestimation in the last bins, although within systematic uncertainties. In summary, the Sherpa and MadGraph5_aMC@NLO predictions describe the data well, especially for observables involving jets, although Sherpa underestimates the measured total cross section. The MiNNLO$_\text{PS}$ and MATRIX predictions give an adequate description of the measurements, but some deviations from the data are observed at high jet multiplicity.

Conclusion
Measurements of several differential cross sections for $Z\gamma$ production in association with jets are presented, in the final state where the $Z$ boson decays into two opposite-sign, same-flavour leptons ($e^+e^-$ or $\mu^+\mu^-$). The measurements are performed in a fiducial phase space enriched in ISR photons, where the sum of the invariant mass of the leptons and the invariant mass of the leptons and the photon is required to be greater than twice the mass of the $Z$ boson. The measurements use data collected by the ATLAS detector from LHC collisions at $\sqrt{s} = 13$ TeV, corresponding to a total integrated luminosity of 139 fb$^{-1}$.
Differential cross sections are measured as functions of the kinematics of jets, leptons, and photons. Both one-dimensional and two-dimensional distributions are chosen to enhance the separation of hard-scatter effects from soft and collinear radiation. A precise measurement of $Z\gamma$ production in association with jets is obtained, with a total uncertainty between 4% and 10% depending on the number of jets. The results are compared with QCD predictions from MC generators with different precision of multi-leg merging at LO and NLO, as well as recent predictions at NNLO, such as MiNNLO$_\text{PS}$, and fixed-order calculations such as MATRIX. The predictions are in general in good agreement with the measurements within the experimental uncertainties. Jet activity is generally well described, but some trends are observed among the different predictions. Observables sensitive to polarisation effects of the $Z$ boson are well modelled by all predictions. The measurements of $Z\gamma$ production in association with jets have the potential to constrain QCD predictions and improve resummation calculations in regions where Sudakov-logarithm terms dominate.