Measurement of isolated-photon plus two-jet production in $pp$ collisions at $\sqrt s=13$ TeV with the ATLAS detector

The dynamics of isolated-photon plus two-jet production in $pp$ collisions at a centre-of-mass energy of 13 TeV are studied with the ATLAS detector at the LHC using a dataset corresponding to an integrated luminosity of 36.1 fb$^{-1}$. Cross sections are measured as functions of a variety of observables, including angular correlations and invariant masses of the objects in the final state, $\gamma+jet+jet$. Measurements are also performed in phase-space regions enriched in each of the two underlying physical mechanisms, namely direct and fragmentation processes. The measurements cover the range of photon (jet) transverse momenta from 150 GeV (100 GeV) to 2 TeV. The tree-level plus parton-shower predictions from SHERPA and PYTHIA as well as the next-to-leading-order QCD predictions from SHERPA are compared with the measurements. The next-to-leading-order QCD predictions describe the data adequately in shape and normalisation except for regions of phase space such as those with high values of the invariant mass or rapidity separation of the two jets, where the predictions overestimate the data.


Introduction
The production of prompt photons 1 in association with two jets in proton-proton collisions, pp → γ + jet + jet + X, provides a means of testing perturbative QCD (pQCD) predictions. Measurements of the angular correlations between the photon and each of the jets as well as between the two jets can be used to probe the dynamics of the hard-scattering process. In addition, measurements of the invariant-mass distributions of the dijet system and the photon plus dijet system are sensitive to the dynamics of the hard interaction. A comprehensive study of the observables describing this final state is of relevance for the development of pQCD calculations as well as for the tuning of Monte Carlo (MC) models.
Prompt-photon plus two-jet production proceeds via two mechanisms (see Figure 1): direct processes, in which the photon originates from the hard interaction, and fragmentation processes, in which the photon arises from the fragmentation of a high transverse momentum 2 (p T ) parton [1,2]. The direct and fragmentation contributions are well defined only at leading order (LO) in QCD; at higher orders this distinction is no longer possible. Furthermore, pure electroweak processes or electroweak virtual corrections are expected to play an important role for photon transverse energies at the TeV scale [3-6] and for dijet configurations with large invariant mass and large separation in rapidity. Measurements of prompt-photon production in a final state with accompanying hadrons necessitate an isolation requirement on the photon to suppress the large multi-jet background in which neutral hadrons in jets decay into photons. The production of isolated photons in association with jets in pp collisions at √ s = 7, 8 and 13 TeV was studied by ATLAS [7-10] and CMS [11-15]. Previous measurements of the cross sections for the production of a photon accompanied by two jets, as functions of angular correlations and of the transverse momenta of the photon and the jets, were performed at √ s = 8 TeV [9]. With the increase in centre-of-mass energy to 13 TeV, new observables have been introduced to explore the dynamics where electroweak contributions could be relevant, in particular the invariant mass of the two jets and the invariant mass of the photon-jet-jet system.
Measurements of photon plus two-jet production provide a deeper understanding of the dynamics of the underlying processes and complement the recent measurements of photon plus one-jet production at 13 TeV [10]. Measurements of the final state γ + jet + jet have the advantage that the second jet is explicitly identified and the interplay between the two underlying production mechanisms can be tested more directly than in the γ + jet final state. The analysis presented here includes the study of the kinematics of the photon plus two-jet system via the measurement of the cross section as a function of the leading-photon transverse energy (E γ T ), and of the transverse momentum (p jet T ) and rapidity (y jet ) of each of the two jets. The dynamics of the photon plus two-jet system is studied by measuring the azimuthal angular separation between the photon and each of the jets (∆φ γ−jet ), the difference in rapidity between the photon and each of the jets (∆y γ−jet ), the invariant mass of the dijet system (m jet−jet ), the azimuthal angular separation between the two jets (∆φ jet−jet ), the difference in rapidity between the two jets (∆y jet−jet ) and the invariant mass of the photon-jet-jet system (m γ−jet−jet ). The two underlying production mechanisms exhibit distinct features in kinematic variables as well as in angular correlations or invariant masses of the objects in the final state. For example, the spectrum in E γ T (p jet T , m jet−jet ) is expected to be harder (softer) for direct processes than for fragmentation processes; as another example, angular correlations such as ∆φ jet−jet are expected to be different for the two processes, with the two jets being closer to each other in direct processes than in fragmentation processes. These distinct features arise from the matrix elements of the two processes and the fraction of the centre-of-mass energy of the colliding partons carried away by the final-state photon [16]. 
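The observables listed above are all derived from the photon and jet four-momenta. As an illustration only (plain Python with hypothetical helper names, not part of the ATLAS software), they can be computed from (p T, rapidity, φ, mass) inputs as follows:

```python
import math

def four_vector(pt, rapidity, phi, mass=0.0):
    """Return (E, px, py, pz) for a given transverse momentum,
    rapidity, azimuthal angle and mass (all momenta in GeV)."""
    mt = math.sqrt(mass * mass + pt * pt)  # transverse mass
    return (mt * math.cosh(rapidity),
            pt * math.cos(phi),
            pt * math.sin(phi),
            mt * math.sinh(rapidity))

def delta_phi(phi1, phi2):
    """Azimuthal separation, wrapped into [0, pi]."""
    dphi = abs(phi1 - phi2) % (2.0 * math.pi)
    return min(dphi, 2.0 * math.pi - dphi)

def delta_y(y1, y2):
    """Rapidity separation."""
    return abs(y1 - y2)

def invariant_mass(*vectors):
    """Invariant mass of the system formed by summing the four-vectors."""
    E, px, py, pz = (sum(c) for c in zip(*vectors))
    return math.sqrt(max(E * E - px * px - py * py - pz * pz, 0.0))
```

A quick sanity check: two massless back-to-back jets of equal p T at the same rapidity give a dijet mass of twice the jet p T.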
In order to enhance the sensitivity to the two underlying mechanisms, measurements are performed in two complementary regions of phase space, in addition to the inclusive phase space, by selecting those events in which E γ T is larger (smaller) than the leading (sub-leading) jet p T . 3 This separation allows a more in-depth study of the direct and fragmentation contributions since it is performed in terms of kinematic variables and, therefore, does not rely on distinguishing between direct and fragmentation processes using a MC generator. Furthermore, the photon is required to be isolated and the distance in the η-φ plane between the photon and each of the jets is required to be larger than 0.8; the region of phase space in which a jet is close to the photon is removed to avoid a bias in the photon isolation energy as well as to suppress the dependence on the modelling of the fragmentation component in that region. The analysis is performed using 36.1 fb −1 of ATLAS data at √ s = 13 TeV taken during 2015 and 2016. The predictions of the tree-level plus parton-shower models of Pythia [17] and Sherpa [18], as well as the next-to-leading-order (NLO) QCD predictions from Sherpa [19-23], are compared with the measurements.

ATLAS detector
The ATLAS experiment uses a multipurpose particle detector [24] with a forward-backward symmetric cylindrical geometry. It consists of an inner tracking detector surrounded by a thin superconducting solenoid, electromagnetic and hadronic calorimeters, and a muon spectrometer incorporating three large superconducting toroidal magnets. The inner-detector system is immersed in a 2 T axial magnetic field and provides charged-particle tracking in the range |η| < 2.5. The high-granularity silicon pixel detector is closest to the interaction region and provides four measurements per track; the innermost layer, known as the insertable B-layer [25,26], provides high-resolution hits at small radius to improve the tracking performance. The pixel detector is followed by a silicon microstrip tracker, which typically provides four three-dimensional space-point measurements per track. These silicon detectors are complemented by a transition radiation tracker, which enables radially extended track reconstruction up to |η| = 2.0. The calorimeter system covers the range |η| < 4.9. Within the region |η| < 3.2, electromagnetic (EM) calorimetry is provided by barrel and endcap high-granularity lead/liquid-argon (LAr) EM calorimeters, with an additional thin LAr presampler covering |η| < 1.8 to correct for energy loss in material upstream of the calorimeters; for |η| < 2.5, the EM calorimeter is divided into three layers in depth. Hadronic calorimetry is provided by a steel/scintillator-tile calorimeter, segmented into three barrel structures within |η| < 1.7, and two copper/LAr hadronic endcap calorimeters, which cover the region 1.5 < |η| < 3.2. The solid-angle coverage is completed out to |η| = 4.9 with forward copper/LAr and tungsten/LAr calorimeter modules, which are optimised for EM and hadronic measurements, respectively. 
Events are initially selected using a first-level trigger implemented in custom electronics, which reduces the event-acceptance rate from the maximum bunch crossing rate of 40 MHz to a design value of 100 kHz using a subset of detector information. Software algorithms with access to the full detector information are then used in the high-level trigger to yield a recorded event rate of about 1 kHz [27].

Data selection
The data used in this analysis were collected with the ATLAS detector during the pp collision running periods of 2015 and 2016, when the LHC operated at a centre-of-mass energy of √ s = 13 TeV. Only events taken during stable beam conditions and satisfying detector and data-quality requirements [28], which include the calorimeters and inner tracking detectors being in nominal operation, are considered. The total integrated luminosity of the collected sample amounts to 36.1 ± 0.8 fb −1 ; the uncertainty in the combined 2015-2016 integrated luminosity is 2.1% [29], obtained using the LUCID-2 detector [30] for the primary luminosity measurements.

Event selection
Events were recorded using a single-photon trigger, with a transverse energy threshold of 140 GeV; 'loose' identification criteria for the photon, which are based on the shower shapes in the second layer of the electromagnetic calorimeter and on the fraction of energy deposited in the hadronic calorimeter, were also applied [31]. Events are required to have a reconstructed primary vertex [32]. If multiple primary vertices are reconstructed, the one with the highest sum of the p 2 T of the associated tracks is selected as the primary vertex.

Photon selection
The photon-candidate reconstruction, selection and calibration closely follow those reported in Ref. [33] and are summarised in the following. Photon candidates are reconstructed from clusters of energy deposited in the electromagnetic calorimeter and classified [34] as unconverted photons (those candidates without a matching track or reconstructed conversion vertex in the inner detector) or converted photons (those candidates with a matching reconstructed conversion vertex or a matching track consistent with originating from a photon conversion).
The photon identification is primarily based on shower shapes in the calorimeter [34]. The information from the hadronic calorimeter and the lateral shower shape in the second layer of the EM calorimeter are used in an initial selection. The final 'tight' selection comprises more stringent criteria for these variables as well as requirements on the shower shapes in the first layer of the EM calorimeter. Small differences are observed in the average values of the shower-shape variables between data and simulation and are corrected for in simulated events before applying the photon identification criteria. The photon identification efficiencies are measured to be above 90% (95%) for unconverted (converted) photon candidates with E γ T > 150 GeV. The measurement of the photon energy is based on the energy collected in calorimeter cells in an area of size ∆η × ∆φ = 0.075 × 0.175 in the barrel and 0.125 × 0.125 in the endcaps. The photon energy calibration procedure is described in Ref. [35]. The uncertainty in the photon energy scale at E γ T = 150 GeV is in the range 0.4%-3.1% (0.4%-2.8%) for unconverted (converted) candidates depending on |η γ |.
The selection of isolated photons is based on the amount of transverse energy inside a cone of size ∆R = 0.4 in the η-φ plane around the photon candidate, excluding an area of size ∆η × ∆φ = 0.125 × 0.175 centred on the photon. The isolation transverse energy (E iso T ), which is computed from topological clusters of calorimeter cells [36], is corrected for the leakage of the photon energy into the isolation cone and the estimated contributions from the underlying event (UE) and additional inelastic pp interactions (pile-up) [37]. The combined correction to E iso T for the last two effects is computed on an event-by-event basis using the jet-area method [38] and typically amounts to 3.5 GeV (1.3 GeV) for |η γ | < 1.37 (1.56 < |η γ | < 2.37). In situ corrections to E iso T are applied in simulated events such that the peak position in the E iso T distribution coincides with that in the data. After corrections, E iso T is required to be less than E iso T,cut ≡ 0.0042 · E γ T + 4.8 GeV. Events with at least one isolated photon candidate with E γ T > 150 GeV and |η γ | < 2.37 are selected. Candidates in the region 1.37 < |η γ | < 1.56, which includes the transition region between the barrel and endcap calorimeters, are not considered. In events with more than one photon candidate satisfying the selection criteria, only the highest-E γ T (leading) photon is considered for further study.
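The isolation and acceptance requirements quoted above reduce to simple numerical cuts. A minimal sketch (function names are illustrative only):

```python
def passes_isolation(et_iso, et_gamma):
    """Isolation requirement from the text:
    E_T^iso < 0.0042 * E_T^gamma + 4.8 GeV (energies in GeV)."""
    return et_iso < 0.0042 * et_gamma + 4.8

def photon_in_acceptance(et_gamma, abs_eta):
    """Kinematic acceptance: E_T^gamma > 150 GeV and |eta| < 2.37,
    excluding the barrel/endcap transition region 1.37 < |eta| < 1.56."""
    return (et_gamma > 150.0
            and abs_eta < 2.37
            and not (1.37 < abs_eta < 1.56))
```

For a photon with E γ T = 150 GeV the isolation cut sits at 5.43 GeV, growing slowly with E γ T.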

Jet selection
The anti-k t algorithm [39,40] with radius parameter R = 0.4 is used to reconstruct jets. Topological clusters of calorimeter cells are used as input (jet constituents). The calorimeter cell energies are measured at the EM scale, which corresponds to the energy deposited by electromagnetically interacting particles. The jet four-momenta are computed from the sum of the jet-constituent four-momenta, treating each as a four-vector with zero mass. At this stage, the jet four-momentum values refer to the EM energy scale.
The calibration of the jets is performed following the methods described in Ref. [41] and is discussed below. The four-momentum of jets is recalculated to point to the selected primary vertex of the event rather than the centre of the detector. The jet-area method [42] is then used on a jet-by-jet basis to subtract the contributions from the UE and pile-up. Subsequently, a calibration of the jet energy and direction is applied that corrects the reconstructed jet four-momentum to the particle level. For this purpose, simulated events are employed and the same jet algorithm used on data is applied to the generated stable particles. Stable particles are defined as those with a decay length cτ > 10 mm. Muons, neutrinos and particles from pile-up interactions are excluded. The resulting jets are referred to as particle-level jets. Following the previous calibration, residual dependencies of the reconstructed jet energy on the longitudinal and transverse characteristics of the jet are observed. Corrections to reduce these dependencies are derived from simulated events, using global properties of the jet obtained from tracking information, calorimeter energy deposits and muon spectrometer information. These corrections are applied sequentially to the jet four-momentum while conserving the average jet response in the sample of dijet simulated events used to derive the corrections. These corrections improve the jet energy resolution and account for differences in energy response between quark- and gluon-initiated jets. Since the jet flavour composition of the final state γ + jet + jet is, in principle, different from that used for the calibration (dijet events), the corrections help to reduce the dependence on the jet flavour composition assumed. In addition, systematic uncertainties due to the jet flavour response and composition are considered. A final correction is applied to account for differences in the jet response between data and simulations.
The final correction is derived in situ from a combination of dijet, γ + jet, Z + jet and multi-jet p T -balance methods. Dijet events are used to correct the average response for forward jets to that for well-measured central jets. The other three in situ calibrations correct for differences between the average responses for central jets and well-measured reference objects. The uncertainty in the jet energy scale at p jet T = 100 GeV varies in the range 1.5%-2% depending on the jet pseudorapidity.
Quality criteria are applied to reject events with jets reconstructed from calorimeter signals not originating from a pp collision [43]. These criteria suppress events with jets from beam-induced background due to proton losses upstream of the interaction point, cosmic-ray air showers overlapping with collision events and calorimeter noise from large-scale coherent noise or isolated pathological cells. Jets are required to have calibrated transverse momenta greater than 100 GeV and rapidity |y jet | < 2.5. Jets overlapping with the photon candidate are not considered if the jet axis lies within a cone of size ∆R = 0.8 around the photon candidate; this requirement prevents any overlap between the photon isolation cone (∆R = 0.4) and the jet cone (∆R = 0.4). Finally, the event is selected if there are at least two jets (leading and sub-leading jets) satisfying the requirements above. The threshold p jet T = 100 GeV is chosen to avoid the increasingly large uncertainties in the jet energy scale for lower values of p jet T .
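The jet requirements above amount to a p T and rapidity cut plus a photon-overlap veto in the η-φ plane. A schematic implementation (jets and the photon as simple dictionaries; names are hypothetical, not ATLAS software):

```python
import math

def delta_r(a1, p1, a2, p2):
    """Distance in the (rapidity-or-eta, phi) plane."""
    dphi = abs(p1 - p2) % (2.0 * math.pi)
    dphi = min(dphi, 2.0 * math.pi - dphi)
    return math.hypot(a1 - a2, dphi)

def select_jets(jets, photon, pt_min=100.0, y_max=2.5, dr_min=0.8):
    """Keep jets with calibrated pT > 100 GeV, |y| < 2.5 and
    Delta R(jet, photon) > 0.8, as required in the text."""
    return [j for j in jets
            if j["pt"] > pt_min
            and abs(j["y"]) < y_max
            and delta_r(j["y"], j["phi"], photon["eta"], photon["phi"]) > dr_min]
```

An event enters the sample only if at least two jets survive these cuts, i.e. `len(select_jets(jets, photon)) >= 2`.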

Data samples
The number of data events selected by the requirements listed in the previous subsections is 755 270. Three samples of events are selected according to the following criteria: a 'total' sample in which no further requirement is applied; a 'fragmentation-enriched' sample in which the photon is required to have E γ T < p jet2 T , where p jet2 T is the p T of the sub-leading jet; and a 'direct-enriched' sample in which the photon is required to have E γ T > p jet1 T , where p jet1 T is the p T of the leading jet. A summary of the requirements on the photon and the jets is given in Table 1, together with the number of selected events in each data sample.
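The three samples are defined by purely kinematic comparisons; a minimal sketch of the classification logic described above:

```python
def classify_event(et_gamma, pt_jet1, pt_jet2):
    """Return the samples an event enters: every event is in 'total';
    et_gamma > pT of the leading jet tags it direct-enriched, while
    et_gamma < pT of the sub-leading jet tags it fragmentation-enriched.
    The two enriched regions are mutually exclusive, since pt_jet1 >= pt_jet2."""
    samples = ["total"]
    if et_gamma > pt_jet1:
        samples.append("direct-enriched")
    elif et_gamma < pt_jet2:
        samples.append("fragmentation-enriched")
    return samples
```

Events with p jet2 T ≤ E γ T ≤ p jet1 T belong only to the total sample.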
The Pythia simulation of the signal includes LO photon-plus-jet events from both the direct processes (the subprocesses qg → qγ and qq → gγ, called the 'hard' component) and the photon bremsstrahlung in LO QCD dijet events (called the 'bremsstrahlung' component). The bremsstrahlung component was modelled by final-state QED radiation arising from calculations of all 2 → 2 QCD processes. The Sherpa samples were generated with LO matrix elements for photon-plus-jet final states with up to three additional partons (2 → n processes with n from 2 to 5); the matrix elements were merged with the Sherpa parton shower [49] using the ME+PS@LO prescription. The bremsstrahlung component is accounted for in Sherpa through the matrix elements of 2 → n processes with n ≥ 3. In the generation of the Sherpa samples, a requirement on the photon isolation at the matrix-element level was imposed using the criterion defined in Ref. [50]. This criterion, commonly called Frixione's criterion, requires the total transverse energy inside a cone of size V around the generated final-state photon, excluding the photon itself, to be below a threshold E max (V ) = ε E γ T [(1 − cos V )/(1 − cos R)] n for all V < R, and is applied to avoid singularities in the matrix-element calculation. The parameter values for the threshold were chosen to be R = 0.3, n = 2 and ε = 0.025. These values guarantee a selection loose enough to minimise possible biases from the application of the photon isolation requirement at reconstruction and particle levels (see Table 1). 4 A possible bias is accounted for by a systematic uncertainty (see Section 7.1.5) estimated from a comparison with Pythia samples, for which no photon isolation requirement is applied during the event generation. The Sherpa predictions from this computation are referred to as LO Sherpa.
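Frixione's smooth-cone criterion requires that, for every sub-cone of size δ < R around the photon, the accompanying transverse energy stays below ε E γ T ((1 − cos δ)/(1 − cos R))^n. A schematic implementation (illustrative only; the actual check is performed inside the Sherpa matrix-element generation):

```python
import math

def frixione_isolated(photon, partons, R=0.3, n=2, eps=0.025):
    """Smooth-cone photon isolation. photon = (et, y, phi);
    partons = list of (et, y, phi). Returns True if, for every
    sub-cone delta < R, the summed parton E_T stays below
    eps * et_photon * ((1 - cos delta) / (1 - cos R))**n."""
    et_gamma, y_g, phi_g = photon

    def dist(y, phi):
        dphi = abs(phi - phi_g) % (2.0 * math.pi)
        dphi = min(dphi, 2.0 * math.pi - dphi)
        return math.hypot(y - y_g, dphi)

    et_sum = 0.0
    # Check the threshold at each parton distance, from innermost outwards;
    # these are the only points where the accumulated E_T can newly exceed it.
    for delta, et in sorted((dist(y, phi), et) for et, y, phi in partons):
        if delta >= R:
            break
        et_sum += et
        threshold = eps * et_gamma * ((1.0 - math.cos(delta))
                                      / (1.0 - math.cos(R))) ** n
        if et_sum > threshold:
            return False
    return True
```

Because the threshold vanishes smoothly as δ → 0, no hard parton can sit arbitrarily close to the photon, which is what removes the collinear singularity.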
Pile-up from additional pp collisions in the same and neighbouring bunch crossings was simulated by overlaying each MC event with a variable number of simulated inelastic pp collisions generated using Pythia 8.186 with the ATLAS set of tuned parameters for minimum-bias events (A2 tune) [51] and the MSTW2008LO PDF set [52]. The MC events are weighted to reproduce the distribution of the average number of interactions per bunch crossing observed in the data, referred to as 'pile-up reweighting'.
All the samples of generated events were passed through the Geant4-based [53] ATLAS detector- and trigger-simulation programs [54]. They are reconstructed and analysed with the same program chain as the data.

Background Monte Carlo simulations
The main background to isolated-photon events arises from jets misidentified as photons. This background, which is typically below 5%, is subtracted using an in situ technique, as described in Section 5. The background from electrons or positrons misidentified as photons is estimated using MC samples generated with the program Sherpa 2.2.1 [19-21]. The Z ( * ) /γ * → e + e − and W ( * ) → eν processes were generated with matrix elements calculated for up to two additional partons at NLO and up to four partons at LO.

Next-to-leading-order pQCD predictions
The NLO pQCD predictions presented in this paper are computed using the program Sherpa 2. A requirement on the photon isolation at the matrix-element level is imposed using Frixione's criterion with R = 0.1, n = 2 and ε = 0.1; for the Sherpa NLO calculations the parameters are fixed to looser values than for the LO calculations to further minimise any possible bias from the application of the photon isolation requirement at particle level. The NNLO NNPDF3.0 PDFs [55] are used in conjunction with the corresponding Sherpa default tuning. Dynamic factorisation and renormalisation scales are adopted and set equal to E γ T , and a dynamical merging scale with Q cut = 20 GeV is used [56]. The strong coupling constant is set to α s (m Z ) = 0.118. Fragmentation into hadrons and simulation of the UE are performed using the same models as for the LO Sherpa samples of simulated events (see Section 4.1). All the Sherpa NLO predictions are based on the particle-level observables from this computation after applying the requirements listed in Table 1. In particular, the photon isolation requirement is applied at particle level using the procedure described in Section 6.

Multi-jet background subtraction
The main background to prompt-photon production consists of multi-jet events in which a jet typically contains a π 0 or η meson that carries most of the jet energy and is misidentified as a photon because it decays into an almost collinear photon pair. The multi-jet background estimation relies on an in situ method based on signal-suppressed control regions. The method uses the same two-dimensional sideband technique as in previous analyses [7, 8, 37, 57-59] and is described briefly in the following. The background subtraction procedure is performed bin by bin for each distribution. Two variables, the photon identification quality and E iso T , are used to define the control regions for the background estimation. A photon candidate is classified as 'isolated' if it satisfies the requirement E iso T < E iso T,cut and as 'non-isolated' if E iso T > E iso T,cut + 2 GeV; the 2 GeV gap between the two regions reduces the contamination from signal events. An upper bound on E iso T is applied to exclude highly non-isolated photons; in this way, the determination of the signal yield does not depend on the description by the MC generators of the distribution of E iso T for prompt photons with high values of E iso T . A photon candidate is classified as 'tight' if the tight identification criteria are satisfied. It is classified as 'non-tight' if it fails at least one of four tight requirements on the shower-shape variables computed from the energy deposits in the first layer of the electromagnetic calorimeter, but satisfies all the remaining tight requirements: the tight requirement on the total lateral shower width in the first layer, all the tight identification criteria in the other layers and the tight requirement on the leakage into the hadronic calorimeter [34].
The two-dimensional plane defined by E iso T and the tight/non-tight photon-identification variable is divided into four non-overlapping regions: the 'signal' region of tight isolated candidates (region A) and three background control regions of tight non-isolated (region B), isolated non-tight (region C) and non-isolated non-tight (region D) photon candidates. The signal yield N sig A in region A is determined by using the relation

N sig A = N A − R bg (N B − c B N sig A )(N C − c C N sig A ) / (N D − c D N sig A ),   (1)

where N K , with K = A, B, C, D, is the number of events in region K; R bg = (N bg A N bg D )/(N bg B N bg C ) is the so-called background correlation, taken as R bg = 1 for the nominal results; and the c K = N sig K /N sig A are the signal leakage fractions, extracted from the MC simulations of the signal. The signal purity, defined as N sig A /N A , is typically above 95%. The use of Pythia or LO Sherpa samples to extract the signal leakage fractions leads to similar signal purities; the difference in the signal purity is taken as a systematic uncertainty (see Section 7.1.5). The photon isolation and identification variables are assumed to be uncorrelated (R bg = 1) for background events. The dependence of the signal yield on this assumption is investigated in validation regions, which are defined within the background-enriched regions B and D. A study of R bg in the validation regions, accounting for signal leakage using either the Pythia or LO Sherpa simulations, shows deviations from unity, ranging from 10% to 50%, which are then propagated through Eq. (1) and taken as systematic uncertainties (see Section 7.1.7).
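Because the signal leakage fractions make the sideband relation implicit in N sig A, it is convenient to solve it by fixed-point iteration. A minimal numerical sketch (variable names are illustrative; the leakage fractions c_K default to zero, i.e. negligible leakage):

```python
def sideband_signal_yield(n_a, n_b, n_c, n_d,
                          c_b=0.0, c_c=0.0, c_d=0.0,
                          r_bg=1.0, iterations=50):
    """Solve the two-dimensional sideband relation
    N_A^sig = N_A - R_bg * (N_B - c_B N_A^sig)(N_C - c_C N_A^sig)
                           / (N_D - c_D N_A^sig)
    for the signal yield in region A by fixed-point iteration."""
    n_sig = float(n_a)  # starting guess: all of region A is signal
    for _ in range(iterations):
        n_sig = n_a - r_bg * ((n_b - c_b * n_sig) * (n_c - c_c * n_sig)
                              / (n_d - c_d * n_sig))
    return n_sig
```

With negligible leakage the familiar closed form N sig A = N A − N B N C /N D is recovered, and the purity is N sig A /N A.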

Other backgrounds
The background from electrons or positrons misidentified as photons, mainly produced in Drell-Yan Z ( * ) /γ * → e + e − and W ( * ) → eν processes, is also studied. Such events are largely suppressed by the photon selection. The remaining background contribution is estimated using MC samples of fully simulated events and found to be sub-percent for most of the phase space considered. No subtraction is performed and a systematic uncertainty equal to the size of the estimated background is assigned (see Section 7.1.8).
Unfolding kinematic distributions to particle level

Cross sections for isolated-photon production in association with at least two jets are measured in the fiducial phase-space regions outlined in Table 1. The particle-level isolation transverse energy of the photon is calculated by summing the transverse energy of all stable particles, except for muons and neutrinos, in a cone of size ∆R = 0.4 around the photon direction after the contribution from the underlying event is subtracted. The same subtraction procedure used on data is applied at the particle level; the particles associated with the overlaid pp collisions are not considered. The particle-level requirement on E iso T is determined using the Pythia and LO Sherpa MC samples by comparing the calorimeter isolation transverse energy with the particle-level isolation on an event-by-event basis. The effect of the experimental isolation requirement used in the data is found to be close to a particle-level requirement of E iso T (particle) < 0.0042 · E γ T + 10 GeV; the same requirement is obtained whether Pythia or LO Sherpa is used. Particle-level jets are reconstructed with the anti-k t algorithm with R = 0.4 using all stable final-state particles as input; muons, neutrinos and particles from pile-up activity are excluded.
The efficiency of the selection, defined as the fraction of the events generated in the fiducial region that are also reconstructed in that region, is evaluated using MC simulations. The selection efficiency using LO Sherpa (Pythia) samples is 76% (76%), 74% (77%) and 71% (70%) for the total, fragmentation- and direct-enriched phase-space regions, respectively.
The data distributions, after background subtraction, are unfolded to the particle level using an unfolding method [60] based on the iterative application of Bayes' theorem, as implemented in RooUnfold [61]. In this method, an unfolding matrix is constructed using the samples of simulated events. The unfolding matrix encapsulates the migrations across bin boundaries of the observable when switching between particle and reconstruction levels. The binning of the distributions is chosen according to the resolution in those variables and in some cases, such as at high E γ T or p jet T , larger bin sizes are used to reduce the statistical uncertainties in the data. There are observables that are filled once per event, such as E γ T , m jet−jet , ∆φ jet−jet , |∆y jet−jet | and m γ−jet−jet , and observables which are filled twice per event, such as p jet T , |y jet |, ∆φ γ−jet and |∆y γ−jet |. The unfolding matrix is filled with those events that pass the selection requirements at both the reconstruction and particle levels. In addition, the reconstructed jet (photon) is required to be matched to a particle-level jet (photon) within ∆R = 0.4 (0.2). Corrections are also applied to take into account the fractions of events for which there is no matching either at particle or reconstruction level; the fraction of unmatched particle-level (reconstruction-level) events is typically in the range 20%-40% (5%-25%). In the evaluation of the systematic uncertainties (see Section 7.1), modifications of the unfolding matrix and of the corrections are applied simultaneously in a consistent way. Four iterations of the unfolding procedure are performed. The nominal cross sections are measured using the signal leakage fractions and unfolding matrices from LO Sherpa since these simulations describe the shape of the distributions of the signal yield better than those of Pythia. 
The results obtained by unfolding the data with Pythia are used to estimate the model dependence and their deviations from the nominal results are taken as systematic uncertainties, as explained in Section 7.1.5. The results of the Bayesian unfolding procedure are checked with a bin-by-bin method based on LO Sherpa simulations, giving consistent results; no systematic uncertainty needs to be assigned to the unfolding procedure.
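The iterative Bayesian procedure can be sketched in a few lines. The toy version below makes simplifying assumptions (unit reconstruction efficiency, no unmatched-event corrections) and is not the RooUnfold implementation used in the analysis:

```python
import numpy as np

def bayes_unfold(measured, response, prior=None, iterations=4):
    """Iterative Bayesian (D'Agostini-style) unfolding.
    response[r, t] is the probability for truth bin t to be reconstructed
    in bin r; columns are assumed to sum to one (unit efficiency)."""
    n_truth = response.shape[1]
    truth = (np.full(n_truth, measured.sum() / n_truth)
             if prior is None else prior.astype(float))
    for _ in range(iterations):
        folded = response @ truth                        # expected reco spectrum
        posterior = response * truth / folded[:, None]   # P(truth t | reco r)
        truth = posterior.T @ measured                   # re-estimated truth
    return truth
```

A small, fixed number of iterations (four in this analysis) limits the amplification of statistical fluctuations while reducing the dependence on the starting prior.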

Experimental and theoretical uncertainties

Uncertainties in the cross-section measurements
The sources of systematic uncertainty that affect the measurements are the photon energy scale and resolution, the photon-identification efficiency, the photon-isolation modelling, the jet energy scale and resolution, the parton-shower and hadronisation model dependence, the definition of the background control regions, the identification and isolation correlation in the background, the background from electrons or positrons faking photons, the modelling of pile-up, the trigger efficiency and the luminosity measurement. These sources of uncertainty and their effects are described below.

Photon energy scale and resolution
The uncertainties in the photon energy scale and resolution are described in Ref. [35]. The sources of uncertainty include: the uncertainty in the overall energy scale adjustment using Z → e + e − decays; the uncertainty in the non-linearity of the energy measurement at the calorimeter cell level; the uncertainty in the relative calibration of the different calorimeter layers; the uncertainty in the amount of material in front of the calorimeter; the uncertainty in the modelling of the reconstruction of photon conversions; the uncertainty in the modelling of the lateral shower shape; the uncertainty in the modelling of the sampling term; 5 and the uncertainty in the measurement of the constant term in Z-boson decays. The uncertainties are split into independent components to account for correlations and their η dependence. The individual sources of uncertainty are varied by ±1σ in the MC simulations and propagated through the analysis separately, to maintain the full information about the correlations. The uncertainties are propagated to the cross sections by modifying the unfolding matrix, the unmatched fractions and the signal yields. The impact of the photon energy resolution uncertainty is much smaller than that of the photon energy scale uncertainty except for high E γ T (E γ T > 500 GeV), low m jet−jet (m jet−jet < 400 GeV) and low ∆φ jet−jet (∆φ jet−jet < 0.6 rad) in the fragmentation-enriched region. The resulting uncertainty in the measured cross sections is obtained by adding in quadrature all the individual components. For dσ/dE γ T this uncertainty increases from 0.5% at E γ T = 150 GeV to 6% at E γ T = 2 TeV.
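Combining the ±1σ shifts of independent components in quadrature, keeping upward and downward variations separate, can be sketched as follows (an illustrative helper, not the analysis code):

```python
import math

def quadrature_sum(shifts):
    """Combine signed relative shifts from independent sources: positive
    shifts contribute to the upward uncertainty, negative ones to the
    downward uncertainty, each added in quadrature."""
    up = math.sqrt(sum(s * s for s in shifts if s > 0.0))
    down = math.sqrt(sum(s * s for s in shifts if s < 0.0))
    return up, down
```

Keeping the two directions separate preserves asymmetric uncertainties such as those from the non-symmetrised isolation-modelling variation.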

Photon-identification efficiency
Scale factors are applied to simulated events to match the efficiency for tight photon identification measured in data [34]. The uncertainties in the scale factors are propagated to the measured cross sections by modifying the unfolding matrix, the unmatched fractions and the signal yields. The resulting uncertainty in the measured cross sections is in the range 1.5%-2%.

Photon-isolation modelling
In situ corrections to E_T^iso are applied in simulated events for the nominal results. The differences between the nominal results and those obtained without applying the aforementioned corrections are taken as systematic uncertainties due to the modelling of E_T^iso in the MC simulation. The uncertainties are propagated to the cross sections by modifying the unfolding matrix, the unmatched fractions and the signal yields; the resulting uncertainties are not symmetrised. The resulting uncertainty in the measured cross sections is typically smaller than 1%.

Jet energy scale and resolution
The uncertainties in the jet energy scale and resolution are considered using the method reported in Ref. [41]. The sources of uncertainty include: the uncertainty in the overall energy-scale adjustment using Z + jet, γ + jet and multi-jet p_T-balance methods; the uncertainty due to modelling, statistics and calibration non-closure in the η intercalibration; the uncertainty from single-particle response studies of high-p_T jets; the uncertainty due to the modelling of the pile-up contribution; the uncertainty in the jet response and simulated jet composition of light-quark-, b-quark- and gluon-initiated jets; and the uncertainty due to punch-through jets. The individual sources of uncertainty are varied by ±1σ in the MC simulations and propagated through the analysis separately, to maintain the full information about the correlations. The uncertainties are propagated to the cross sections by modifying the unfolding matrix and the unmatched fractions. The resulting uncertainty in the measured cross sections is obtained by adding in quadrature all the individual components. For dσ/dp_T^jet this uncertainty is in the range 3%-8%.

Parton-shower and hadronisation model dependence
The dependences of the signal purity and of the unfolding corrections on the parton-shower and hadronisation models used in the simulations are studied separately. The differences observed in the signal purity between the nominal results and those obtained using the Pythia MC samples for the determination of the signal leakage fractions are taken as systematic uncertainties. The uncertainties are propagated to the cross sections by modifying the signal yields. The resulting uncertainties in the measured cross sections are typically smaller than 1%. The differences between the nominal results and those obtained using the Pythia MC samples for the unfolding are taken as systematic uncertainties and are typically smaller than 2%. In both cases, for the signal leakage fractions and for the unfolding, the uncertainties also account for a possible bias due to the application of the Frixione criterion at particle level in the LO Sherpa MC samples.

Definition of the background control regions
The signal yield depends on the definition of the background control regions. The dependence is investigated by (i) varying the lower limit on E_T^iso of regions B and D by ±1 GeV, (ii) removing the upper limit on E_T^iso of regions B and D, and (iii) changing the choice of inverted photon-identification variables used in the selection of non-tight photons. The signal yields obtained after each variation are compared with the nominal results and the differences are taken as systematic uncertainties. The uncertainties are propagated to the cross sections by modifying the signal yields. The uncertainty in the measured cross sections is typically below 0.2%, 0.2% and 1% for the (i), (ii) and (iii) variations, respectively.

Identification and isolation correlation in the background
In the in situ background subtraction (Eq. (1)), R_bg is assumed to be unity. A range in R_bg is set to cover the deviations from unity observed in the validation regions after subtracting the signal leakage with either Pythia or LO Sherpa MC samples. The maximum deviations of R_bg from unity in the validation regions vary in the range 10%-50% depending on the observable. The uncertainties are propagated to the cross sections by modifying the signal yields. The resulting uncertainty in the measured cross sections is typically smaller than 1%.
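Equation (1) is not reproduced in this excerpt; the sketch below shows a generic two-dimensional sideband ("ABCD") subtraction of the form it describes, with R_bg multiplying the background estimate and signal leakage (taken from MC) subtracted from the control regions. The function name and the non-iterative leakage treatment are simplifying assumptions.

```python
def abcd_signal_yield(n_a, n_b, n_c, n_d,
                      r_bg=1.0, leak_b=0.0, leak_c=0.0, leak_d=0.0):
    """Signal yield in region A from a two-dimensional sideband method.

    A is the signal region (tight identification, isolated); B, C and D
    are the background control regions. r_bg parameterises a residual
    identification-isolation correlation of the background (assumed to
    be unity in the nominal subtraction).
    """
    # Subtract the expected signal leakage into the control regions.
    b = n_b - leak_b
    c = n_c - leak_c
    d = n_d - leak_d
    # Background in the signal region estimated from the sidebands.
    background_a = r_bg * b * c / d
    return n_a - background_a
```

Varying r_bg over the range allowed by the validation regions then propagates the correlation uncertainty directly to the signal yield.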

Background from electrons or positrons faking photons
A systematic uncertainty is assigned by taking the full size of the estimated contribution; the W + jets and Z + jets contributions are added linearly. The uncertainties are propagated to the cross sections by modifying the signal yields. The resulting uncertainty in the measured cross sections is typically smaller than 1%.

Pile-up, trigger efficiency and integrated luminosity
The pile-up reweighting of simulated events is varied to cover the uncertainty in the ratio of the predicted and measured inelastic cross sections [62]. The uncertainties are propagated to the cross sections by modifying the unfolding matrix, the unmatched fractions and the signal yields. The resulting uncertainty in the measured cross sections is typically smaller than 1%. There might be double counting between this source of uncertainty and that in the modelling of the photon isolation. However, given the smallness of the uncertainties, the impact is limited. The uncertainty in the trigger efficiency is estimated using the same methodology as in Ref.

Total systematic uncertainty
The total systematic uncertainty is computed by adding in quadrature the uncertainties from the sources listed above and the statistical uncertainty from the MC samples. The latter is computed using the bootstrap resampling technique [63]. There are large correlations in the systematic uncertainties across bins of one observable, particularly in the uncertainties due to the photon and jet energy scales, which are dominant. The total systematic uncertainty in the measured cross sections, excluding the uncertainty of the luminosity determination, for the total phase space varies in the range shown in Table 2 depending on the variable.

Table 2: The total systematic uncertainty in the measured cross sections, excluding the uncertainty of the luminosity determination, for the total phase space: the range for each variable is shown in percentage.

Range of the relative uncertainty (in %) for each variable: 3.5-6.5, 4-15, 4-6, 4-9, 3.5-6, 4-8, 3-7, 4-9 and 4-11.

Figures 2 and 3 show the total systematic uncertainty, excluding the uncertainty of the luminosity determination, for each measured cross section for the total phase space. The dominant components, namely the jet energy scale, the photon energy scale and the photon identification, are shown separately in these figures. There are variables, such as Δφ_jet−jet, for which the systematic uncertainties due to the jet and photon energy scales depend on the variable itself. These dependences arise from the event topology: at low Δφ_jet−jet the two jets are close together and recoil against the photon, whereas at high Δφ_jet−jet the two jets recoil against each other and the photon has lower p_T. Thus, the populations in jet and photon p_T vary with Δφ_jet−jet. In the total phase space, the systematic uncertainty dominates the total experimental uncertainty for E_T^γ.
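The bootstrap resampling of the MC samples mentioned above can be illustrated with a Poisson bootstrap, in which each event enters each replica with an independent Poisson(1) weight; this is a generic sketch, not the analysis code, and the function names are assumptions.

```python
import math
import random
import statistics

def poisson1(rng):
    """Draw from a Poisson distribution with mean 1 (Knuth's method)."""
    limit, k, prod = math.exp(-1.0), 0, rng.random()
    while prod > limit:
        k += 1
        prod *= rng.random()
    return k

def bootstrap_std(event_weights, n_replicas=1000, seed=42):
    """Statistical uncertainty of a weighted event yield via the bootstrap."""
    rng = random.Random(seed)
    totals = [
        sum(poisson1(rng) * w for w in event_weights)  # one resampled yield
        for _ in range(n_replicas)
    ]
    return statistics.pstdev(totals)
```

For N unit-weight events the spread of the replicas approaches √N, reproducing the Poisson expectation for an unweighted yield.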

Theoretical uncertainties in the predictions
The following sources of uncertainty in the theoretical predictions are considered. The uncertainty in the NLO pQCD predictions from Sherpa due to terms beyond NLO is estimated by repeating the calculations using values of the renormalisation (μ_R) and factorisation (μ_F) scales multiplied by the factors 0.5 or 2. The two scales are varied either simultaneously or individually. In all cases, the condition 0.5 ≤ μ_R/μ_F ≤ 2 is imposed. The largest deviation from each nominal prediction among the six possible variations is taken as the uncertainty. The uncertainty in the predictions due to the proton PDFs is estimated by using 100 replicas from the NNPDF3.0 analysis [55]. The uncertainty in the predictions due to the uncertainty in α_s is estimated by repeating the calculations using two additional sets of proton PDFs from the NNPDF3.0 analysis, for which different values of α_s(m_Z) were assumed in the fits, namely 0.117 and 0.119; in this way, the correlation between α_s and the PDFs is preserved.
The total theoretical uncertainty is obtained by adding in quadrature the individual uncertainties mentioned above. The dominant theoretical uncertainty is that arising from terms beyond NLO and typically varies between about 20% and about 40%, depending on the observable and region of phase space. The uncertainty in the predictions arising from those in the PDFs is below 6%, and that arising from the value of α_s(m_Z) is also below 6%, for all observables and phase-space regions.
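The scale-variation envelope described above can be sketched as follows; the factor pairs and the ratio constraint follow the text, while the numerical predictions would come from repeating the NLO calculation at each point.

```python
FACTORS = (0.5, 1.0, 2.0)

def scale_variation_points():
    """(mu_R, mu_F) factor pairs: each scale multiplied by 0.5 or 2,
    simultaneously or individually, keeping 0.5 <= mu_R/mu_F <= 2."""
    points = []
    for f_r in FACTORS:
        for f_f in FACTORS:
            if (f_r, f_f) == (1.0, 1.0):
                continue  # the nominal choice, not a variation
            if not 0.5 <= f_r / f_f <= 2.0:
                continue  # drops the anti-correlated (0.5, 2) and (2, 0.5)
            points.append((f_r, f_f))
    return points

def envelope(nominal, varied_predictions):
    """Largest absolute deviation from the nominal prediction."""
    return max(abs(v - nominal) for v in varied_predictions)
```

The ratio constraint leaves exactly the six variations mentioned in the text.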

Results
The measured isolated-photon plus two-jet cross sections in the three phase-space regions are shown in Figures 4 to 9. The measured dσ/dp_T^jet for the total phase space, shown in Figure 4(b), decreases by approximately five orders of magnitude in the measured range. Values of p_T^jet up to 2 TeV are measured. The measured dσ/dp_T^jet for the fragmentation-enriched phase space (see Figure 6(b)) has a harder spectrum than that for the direct-enriched one (see Figure 8(b)).
Figure 4(c) shows the measured dσ/d|y^jet| for the total phase space. The measured cross section decreases as |y^jet| increases. The measured dσ/d|y^jet| for the fragmentation- and direct-enriched phase-space regions are shown in Figures 6(c) and 8(c), respectively, and are similar for |y^jet| < 1, but for 1 < |y^jet| < 2.5 the spectrum for the fragmentation-enriched phase space decreases faster as |y^jet| increases.
The measured dσ/d|Δy_γ−jet| for the total, fragmentation-enriched and direct-enriched phase-space regions are shown in Figures 4(d), 6(d) and 8(d), respectively. The measured cross sections decrease as |Δy_γ−jet| increases in all three phase-space regions.
The measured dσ/dΔφ_γ−jet for the total phase space is shown in Figure 4(e). The measured cross sections for the total and direct-enriched (see Figure 8(e)) phase-space regions increase as Δφ_γ−jet increases and display a maximum at Δφ_γ−jet ∼ 2.8 rad. In contrast, dσ/dΔφ_γ−jet for the fragmentation-enriched phase space (see Figure 6(e)) exhibits a peak at lower values, Δφ_γ−jet ∼ 2.2 rad. The difference between the Δφ_γ−jet distributions for the direct- and fragmentation-enriched phase-space regions is partly due to the kinematical constraints that define those regions, namely E_T^γ > p_T^jet1 and E_T^γ < p_T^jet2, respectively. Figures 5(a), 7(a) and 9(a) show the measured dσ/d|Δy_jet−jet| for the total, fragmentation- and direct-enriched phase-space regions, respectively. The measured cross sections have similar shapes in the three phase-space regions. However, the measured dσ/dΔφ_jet−jet (see Figures 5(b), 7(b) and 9(b)) have very different shapes in the three phase-space regions. For the fragmentation-enriched phase space, the two jets primarily populate the range Δφ_jet−jet > 2.2 rad, whereas for the direct-enriched one they mostly populate the range 0 < Δφ_jet−jet < 2.2 rad. The jet radius parameter of R = 0.4 has an effect for Δφ_jet−jet < 0.4, as can be seen in Figures 5(b) and 9(b). The kinematical constraints that define the direct- and fragmentation-enriched phase-space regions also have an effect on the Δφ_jet−jet distributions; for instance, dσ/dΔφ_jet−jet for the fragmentation-enriched phase-space region is suppressed at low Δφ_jet−jet.
The dependence of the cross sections on m_jet−jet and m_γ−jet−jet is measured up to values of 4 TeV and 5.5 TeV, respectively. For dσ/dm_jet−jet (see Figures 5(c), 7(c) and 9(c)), the measured spectra have different shapes in the three phase-space regions. The cross section for the fragmentation-enriched phase space is suppressed at low m_jet−jet values, but it is harder than that for the direct-enriched one at high m_jet−jet values. The kinematical constraints have some effect on the differences between the m_jet−jet distributions for the direct- and fragmentation-enriched phase-space regions. For dσ/dm_γ−jet−jet (see Figures 5(d), 7(d) and 9(d)), the shapes of the spectra are more similar, but the distribution for the fragmentation-enriched phase space is harder than that for the direct-enriched one.
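The invariant-mass and angular observables above follow from the photon and jet four-momenta; a minimal sketch, assuming massless objects specified by (p_T, rapidity y, azimuth φ):

```python
import math

def four_momentum(pt, y, phi):
    """(E, px, py, pz) of a massless object from pT, rapidity and azimuth."""
    return (pt * math.cosh(y), pt * math.cos(phi),
            pt * math.sin(phi), pt * math.sinh(y))

def invariant_mass(*vectors):
    """Invariant mass of the system formed by the given four-momenta."""
    e, px, py, pz = (sum(v[i] for v in vectors) for i in range(4))
    return math.sqrt(max(e * e - px * px - py * py - pz * pz, 0.0))

def delta_phi(phi1, phi2):
    """Azimuthal separation folded into [0, pi]."""
    d = abs(phi1 - phi2) % (2.0 * math.pi)
    return d if d <= math.pi else 2.0 * math.pi - d
```

Here m_jet−jet is invariant_mass(jet1, jet2), m_γ−jet−jet additionally includes the photon four-momentum, and |Δy| is simply the absolute rapidity difference.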
The characteristics observed in the measured cross sections in the fragmentation- and direct-enriched phase-space regions are as expected from the two underlying production mechanisms, each of which dominates in one of the two regions. For some observables, the kinematical constraints that define the direct- and fragmentation-enriched phase-space regions have an effect and are partly responsible for the differences in the distributions between the two phase-space regions.

Comparison with tree-level plus parton-shower Monte Carlo models
The predictions of the tree-level plus parton-shower MC models of Pythia and Sherpa are compared with the data in Figures 4 to 9. These predictions are normalised to the measured integrated cross section in each phase space to compare the shapes of the predictions with the data. The normalisation factors relative to the cross-section predictions of the generators are indicated in Figures 4 to 9. Since these are tree-level predictions, their theoretical uncertainties can be as large as 50% and are not included in the figures.
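Normalising a prediction to the measured integrated cross section for a shape-only comparison amounts to a single rescaling factor; a generic sketch with illustrative bin contents:

```python
def normalise_to_data(pred, data, bin_widths):
    """Scale a predicted differential cross section so that its integral
    matches the measured integrated cross section (shape comparison).

    Returns the normalisation factor and the rescaled prediction.
    """
    def integral(bins):
        # Integrated cross section: sum of (bin content x bin width).
        return sum(b * w for b, w in zip(bins, bin_widths))

    factor = integral(data) / integral(pred)
    return factor, [b * factor for b in pred]
```

A factor above (below) unity then indicates that the generator underestimates (overestimates) the integrated cross section in that phase space.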
The predictions of LO Sherpa provide a good description of the shape of the data, except at high E_T^γ, |Δy_jet−jet| and m_γ−jet−jet. The predictions of Pythia, in general, fail to describe the shape of the data. There are two reasons for this disagreement. First, since Pythia uses 2 → 2 matrix elements, the second jet necessarily originates from the parton shower; Sherpa incorporates matrix elements for the processes 2 → n with n = 2 to 5 and thus the second jet is better simulated. Second, the fragmentation component in Pythia is overestimated for the parameter settings used here. This is particularly visible in the cross section for the total phase space as a function of p_T^jet in the region p_T^jet > 300 GeV, where the prediction overestimates the data by 50% or more. It is also visible in the cross section for the total phase space as a function of m_jet−jet and m_γ−jet−jet, where the predictions of Pythia overestimate the data by up to an order of magnitude. The overestimation of the fragmentation component is also supported by the fact that the normalisation factor for the predictions from Pythia is 1.2 (0.7) for the total (fragmentation-enriched) phase space. The inclusion of higher-order tree-level matrix elements for the processes 2 → n with n = 3 to 5 in Sherpa significantly improves the description of the data for all observables and phase-space regions.

Comparison with next-to-leading-order plus parton-shower QCD predictions
The predictions of the NLO calculations of Sherpa are compared with the data in Figures 4 to 9. The uncertainties in the predictions include those due to missing higher-order terms in the perturbative expansion, those due to the uncertainties in the PDFs and those due to the uncertainty in α_s(m_Z), as explained in Section 7.2. The predictions are shown with the predicted theory normalisation, so the comparison between data and theory is done in terms of both normalisation and shape. These predictions describe the data adequately in shape and normalisation within the experimental and theoretical uncertainties, except for ranges at high values of |Δy_γ−jet|, |Δy_jet−jet|, m_jet−jet and m_γ−jet−jet, where the predictions overestimate the data in the total and fragmentation-enriched phase-space regions. In those ranges, the overestimation of the predictions relative to the data is larger in the fragmentation-enriched phase space than in the total phase space. These discrepancies might be attributable to unaccounted-for higher-order corrections, given that the variation of the renormalisation and factorisation scales leads to relatively large uncertainties in the predictions. In particular, the use of E_T^γ as the scale might not be optimal in the ranges where p_T^jet is much larger than E_T^γ. A mismodelling of the high-m_jet−jet tail is also observed in measurements of the final state V + jet + jet with V = W± or Z boson [64-66]. In addition, decorrelation effects from multiple parton emission might be present at large values of |Δy_γ−jet| and |Δy_jet−jet|. The comparison of predictions based on different parameterisations of the proton PDFs shows that the description of the data does not depend significantly on the specific PDF used.
The theoretical uncertainties are much larger than the experimental ones and, therefore, more precise calculations are needed, either in the form of resummed calculations or through the inclusion of higher-order terms. Once the size of the uncertainties of the predictions is reduced significantly, a meaningful comparison with predictions including electroweak effects will be possible.

Summary
Measurements of the cross sections for the production of an isolated photon in association with two jets in pp collisions at √s = 13 TeV, pp → γ + jet + jet + X, are presented. These measurements are based on an integrated luminosity of 36.1 fb−1 of ATLAS data recorded at the LHC. The photon is required to have E_T^γ > 150 GeV and |η^γ| < 2.37, excluding the region 1.37 < |η^γ| < 1.56. The jets are reconstructed using the anti-k_t algorithm with radius parameter R = 0.4. The cross sections are measured as functions of E_T^γ, p_T^jet, |y^jet|, Δφ_γ−jet, |Δy_γ−jet|, m_jet−jet, Δφ_jet−jet, |Δy_jet−jet| and m_γ−jet−jet in three different regions of phase space, namely the total phase space, the fragmentation-photon enriched phase space, where E_T^γ < p_T^jet2, and the direct-photon enriched phase space, where E_T^γ > p_T^jet1. The dependence of the cross sections on m_jet−jet and m_γ−jet−jet is measured up to values of 4 TeV and 5.5 TeV, respectively. The measured cross sections in the fragmentation-photon and direct-photon enriched phase-space regions exhibit the features expected from the two underlying production mechanisms, each of which dominates in one of the two regions, and, for some observables, from the effects of the kinematical constraints that define the two phase-space regions.
The predictions of the tree-level plus parton-shower MC model in LO Sherpa give a good description of the shape of the data distributions, except at high E_T^γ, |Δy_jet−jet| and m_γ−jet−jet. In contrast, the predictions from Pythia, for which the sub-leading jet must necessarily originate from the parton shower, generally fail to describe the shape of the distributions in the data. The improved description of the data by the predictions from LO Sherpa is attributed to the inclusion of tree-level higher-order matrix elements. The precision of the measurements is significantly better than the differences between the predictions. The NLO predictions from Sherpa describe the data adequately in shape and normalisation within the experimental and theoretical uncertainties, except for the regions at high values of |Δy_γ−jet|, |Δy_jet−jet|, m_jet−jet and m_γ−jet−jet, where the predictions overestimate the data for the total and fragmentation-photon enriched phase-space regions. There are regions in which the NLO predictions exhibit trends that differ from those in the data, such as at high E_T^γ and p_T^jet, and at low Δφ_γ−jet. The theoretical uncertainties are much larger than those of experimental nature, preventing a more precise test of the theory. The measurements provide tests of pQCD at energy scales as high as 2 TeV for the photon and jet transverse momenta and scrutinise the NLO QCD description of the dynamics of isolated-photon plus two-jet production in pp collisions.