Measurement of the inclusive isolated prompt photon cross section in pp collisions at √s = 8 TeV with the ATLAS detector

A measurement of the cross section for the inclusive production of isolated prompt photons in proton–proton collisions at a centre-of-mass energy of √s = 8 TeV is presented. The measurement covers the pseudorapidity ranges |η^γ| < 1.37 and 1.56 ≤ |η^γ| < 2.37 in the transverse energy range 25 < E_T^γ < 1500 GeV. The results are based on an integrated luminosity of 20.2 fb⁻¹, recorded by the ATLAS detector at the LHC. Photon candidates are identified by combining information from the calorimeters and the inner tracker. The background is subtracted using a data-driven technique, based on the observed calorimeter shower-shape variables and the deposition of hadronic energy in a narrow cone around the photon candidate. The measured cross sections are compared with leading-order and next-to-leading-order perturbative QCD calculations and are found to be in good agreement over ten orders of magnitude.


Introduction
Prompt photons, defined to exclude those originating from hadron decays, are produced at the LHC in the hard process pp → γ + X. The measurement of this inclusive production provides a probe of perturbative Quantum Chromodynamics (pQCD) and, specifically through the dominant leading-order (LO) process qg → qγ, can be used to study the gluon parton distribution function (PDF) [1–6] of the proton. In addition, an improved understanding of prompt photon production is potentially important in aiding analyses of processes for which prompt photons are an important background (for instance, measurements of the Higgs boson in the diphoton decay channel).
Inclusive prompt photon production comprises two contributions: direct and fragmentation photons. Direct photons are those associated with the hard sub-process, whereas fragmentation photons are produced from the fragmentation of a coloured parton. An isolation requirement is used to reduce both the poorly understood non-perturbative fragmentation contribution and the contamination from the dominant background of photons originating from hadron decays, mainly light neutral mesons (i.e. π⁰, η).
The fiducial region of the measurement is defined in terms of the photon kinematic quantities:¹ transverse energy E_T^γ, pseudorapidity η^γ and transverse isolation energy E_T^iso. The differential cross section is measured as a function of E_T^γ, for the highest-energy photon in the event, and spans the range 25 < E_T^γ < 1500 GeV. The η^γ range is split into four intervals for the cross-section measurement: |η^γ| < 0.6, 0.6 ≤ |η^γ| < 1.37, 1.56 ≤ |η^γ| < 1.81 and 1.81 ≤ |η^γ| < 2.37. The final constraint is the photon isolation, where E_T^iso is calculated within a cone of size ∆R = 0.4 centred on the photon and is required to satisfy E_T^iso < 4.8 GeV + 4.2 × 10⁻³ × E_T^γ. This fiducial region is identical in the theoretical calculations and the experimental measurement; however, there are differences in the calculation of E_T^iso:
• At detector level it is the sum of energy deposits in the calorimeter, corrected for the deposits related to the photon candidate itself.
• At particle level it is the sum of energy from all particles, except for muons, neutrinos and the photon itself.
• At parton level it is the sum of energy from all coloured partons.
An additional correction to remove energy from the underlying event (UE) or additional proton-proton interactions is applied at detector and particle level, as detailed in Section 4.2.
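The E_T^γ-dependent isolation requirement above is a simple linear function; as an illustration only (the function names are our own, not part of the analysis code), it can be sketched as:

```python
def isolation_threshold(et_gamma_gev):
    """Maximum allowed E_T^iso (GeV) for a photon of transverse energy
    et_gamma_gev (GeV): 4.8 GeV + 4.2e-3 * E_T^gamma."""
    return 4.8 + 4.2e-3 * et_gamma_gev

def passes_isolation(et_iso_gev, et_gamma_gev):
    """True if the candidate satisfies the fiducial isolation requirement."""
    return et_iso_gev < isolation_threshold(et_gamma_gev)
```

At the lower edge of the measurement (E_T^γ = 25 GeV) the threshold is about 4.9 GeV, while at 1500 GeV it has relaxed to 11.1 GeV.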
There are several differences between the measurement presented here and the previous ATLAS inclusive photon measurements [7–9]. In addition to the change in centre-of-mass energy and E_T^γ reach, it also probes for the first time the region 25 < E_T^γ < 45 GeV for 1.81 ≤ |η^γ| < 2.37. The measurement is also compared to different theoretical predictions than those used previously, as detailed in Section 3. An E_T^γ-dependent isolation requirement is introduced for the first time, effectively relaxing the maximum E_T^iso at high E_T^γ, as outlined in Section 4, along with a discussion of changing the upper edge of the excluded η^γ region from 1.52 to 1.56. Other differences in the background estimation, unfolding and uncertainty calculations are highlighted in Sections 5, 6 and 7 respectively, and the results are shown in Section 8.

ATLAS detector and data
The ATLAS experiment [21] at the LHC is a multi-purpose particle detector with a forward–backward symmetric cylindrical geometry and a near 4π coverage in solid angle. It consists of an inner tracking detector surrounded by a thin superconducting solenoid providing a 2 T axial magnetic field, electromagnetic and hadronic calorimeters, and a muon spectrometer. The inner tracking detector covers the pseudorapidity range |η| < 2.5. It consists of silicon pixel, silicon microstrip, and transition radiation tracking detectors. Within the region |η| < 3.2, EM calorimetry is provided by high-granularity lead/liquid-argon (LAr) sampling calorimeters, with an additional thin LAr presampler covering |η| < 1.8 to correct for energy loss in material upstream of the calorimeters. A hadronic (steel/scintillator-tile) calorimeter covers the central pseudorapidity range (|η| < 1.7). The end-cap and forward regions are instrumented with LAr calorimeters for both the EM and hadronic energy measurements up to |η| = 4.9. The muon spectrometer surrounds the calorimeters and is based on three large air-core toroid superconducting magnets with eight coils each. It includes a system of precision tracking chambers and fast detectors for triggering. A three-level trigger system is used to select events. The first-level trigger is implemented in hardware and uses a subset of the detector information to reduce the accepted rate to at most 75 kHz. This is followed by two software-based high-level triggers that together reduce the accepted event rate to 400 Hz on average, depending on the data-taking conditions during 2012.

¹ ATLAS uses a right-handed coordinate system with its origin at the nominal interaction point (IP) in the centre of the detector and the z-axis along the beam pipe. The x-axis points from the IP to the centre of the LHC ring, and the y-axis points upwards. Cylindrical coordinates (r, φ) are used in the transverse plane, φ being the azimuthal angle around the z-axis.
The dataset used in this analysis was obtained from proton–proton collisions recorded in 2012 by the ATLAS detector, when the LHC operated at a centre-of-mass energy of √s = 8 TeV. The integrated luminosity of the dataset used in this measurement is 20.2 fb⁻¹ with an uncertainty of 1.9% [22]. The events used in the analysis were recorded by the trigger system using single-photon triggers [23], which use identification criteria looser than the selection described in Section 4.1. For the high-level triggers, E_T^γ thresholds are defined in 20 GeV steps from 20 GeV to 120 GeV. Multiple trigger thresholds are required because the triggers are prescaled to reduce their rate, except for the unprescaled 120 GeV threshold. Each threshold is used in the analysis within an exclusive E_T^γ range, determined as the region where the trigger efficiency exceeds 99.5% with respect to the full selection detailed in Section 4. Only events taken during periods of good data quality, where the calorimeters and inner tracking detectors were in nominal operation, are retained in the dataset. To remove any non-collision background, each event is required to have a reconstructed vertex consistent with the average beam-spot position, where the vertex is required to have at least two associated tracks. This condition is close to 100% efficient for retaining events with photons within the detector acceptance.

Theoretical predictions
The theoretical calculations used in the analysis consist of LO Monte Carlo (MC) event generators and calculations at next-to-leading order (NLO) or higher. Two event generators are used at LO: Pythia 8.165 [24] and Sherpa 1.4.0 [25]. These event generators are interfaced with a detailed detector simulation [26] (based on GEANT4 [27]), the output of which is reconstructed in the same way as the data. The LO predictions are used to study many aspects of the analysis and are also compared to the final cross section. The final cross sections are also compared to three calculations: JetPhox [28], PeTeR [29,30] and MCFM [31].
Event generation with Pythia includes: the description of the PDFs using CTEQ6L1 [32], the simulation of initial- and final-state radiation, the simulation of the UE using the ATLAS AU2 set of tuned parameters (tune) [33] based on the multiple parton interaction model [34], and the modelling of the hadronisation based on the Lund string model [35]. The LO direct contribution to prompt photon production is fully included in the main matrix-element calculation. In contrast, the fragmentation contribution is modelled by final-state QED radiation arising from calculations of all 2 → 2 QCD processes.
Pythia is used to extract the central values of the measurement, while Sherpa is used as a second LO generator, as it showed excellent agreement with the results of the ATLAS photon-plus-jet measurement [36]. The Sherpa predictions are used to cross-check the results and to determine uncertainties arising from the use of MC simulations in parts of the analysis. The Sherpa calculations are performed with up to four parton emissions, and the radiation of gluons and photons is treated coherently. This means that the fragmentation contribution is produced differently from that in Pythia and is also indistinguishable from the direct contribution, unlike in Pythia, where the contributions can be separated. The Sherpa events are produced with: the CT10 [37] PDF, the UE model based on the recommended tune provided by the Sherpa authors, and hadronisation modelled using a modified version of the cluster model [38].
The LO simulated events used in the analysis are reweighted in order to match as closely as possible the experimental conditions of the dataset. One of these corrections reproduces the pile-up (additional proton–proton interactions in the same bunch crossing) conditions, where the weights are derived from the distribution of the average number of interactions per bunch crossing (µ) in data and MC simulations, with an additional constant to improve the agreement of the number of primary vertices. A second weight is used to ensure an accurate η^γ measurement by reproducing in the MC simulations the z-vertex position of the hard interaction measured in data.
The final cross sections are compared to these LO generators and also to parton-level calculations. The kinematic selection used in all of the predictions matches the fiducial region defined in Section 1. For the higher-order predictions the nominal renormalisation (µ_R), factorisation (µ_F) and fragmentation (µ_f) scales were set to the photon transverse energy. JetPhox, a well-established NLO parton-level generator for the prediction of processes with photons in the final state, is used as the baseline for comparison with the results. JetPhox is capable of calculating the double-differential inclusive prompt photon cross section d²σ/(dE_T^γ dη^γ) at parton level to NLO accuracy for both the direct and fragmentation photon processes. The calculation can be configured to use an E_T^γ-dependent isolation requirement² and uses the NLO photon fragmentation function of BFG set II [39,40]. To check the effect of the PDF choice on the predictions, they are generated with different PDF sets (CT10, MSTW2008NLO [41], NNPDF2.3 [42] and HERAPDF1.5 [43]), provided by the LHAPDF package [44]. The strong coupling constant (α_S) is also obtained for each PDF using LHAPDF, and the fine-structure constant (α_EM) is set to the JetPhox default of 1/137.
The following systematic uncertainties (combined in quadrature) are assigned to the JetPhox calculations and are estimated by means of the procedures [45] used in the previous measurements:
• The uncertainty in the scale choice is evaluated from the envelope of varying the three scales by a factor of two around the nominal value, both simultaneously and independently (keeping two fixed at the nominal value). The impact on the predicted cross section varies between 12% and 20%.
• The PDF uncertainty is obtained by repeating the JetPhox calculation for the 52 eigenvector sets of the CT10 PDF and applying a scaling factor in order to produce the uncertainty for the 68% confidence-level (CL) interval. The corresponding uncertainty in the cross section increases with E_T^γ and varies between 5% at 100 GeV and 15% at 900 GeV.
• The uncertainty due to α_S is evaluated, following the recommendation of Ref. [37], by repeating the calculation with α_S varied by ±0.002 around the central value of 0.118 and scaling in order to obtain the uncertainty for the 68% CL interval. The uncertainty due to α_S is smaller than that from the scale or PDF uncertainties over the whole phase space; it slowly decreases from 9% with increasing E_T^γ, except above 900 GeV, where it increases to 15%.
• To correct from parton level to particle level, additional hadronisation-plus-UE correction factors were evaluated using the two alternative hadronisation and UE models in Pythia and Sherpa.
The study was performed by repeating the calculation with and without the hadronisation and UE contributions and resulted in a correction close to unity for both MC models, with a small deviation of at most 2% at low E_T^γ. Therefore, as in the previous analyses, no correction factor is applied to the central value; however, in this measurement an E_T^γ-dependent uncertainty is assigned to the theory, based on the largest deviation from unity between the two models.
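The scale-envelope prescription described in the first bullet above can be sketched as follows (a minimal illustration, not the actual JetPhox steering; the variation list and function names are our own):

```python
def scale_variation_factors():
    """The eight variation points around the nominal scales: each of
    mu_R, mu_F, mu_f halved/doubled with the other two kept at nominal,
    plus all three varied together."""
    points = []
    for i in range(3):                      # one scale varied at a time
        for f in (0.5, 2.0):
            p = [1.0, 1.0, 1.0]
            p[i] = f
            points.append(tuple(p))
    for f in (0.5, 2.0):                    # simultaneous variation
        points.append((f, f, f))
    return points

def scale_envelope(nominal_xs, varied_xs):
    """Asymmetric scale uncertainty from the envelope of the varied
    cross sections, relative to the nominal value."""
    down = (nominal_xs - min(varied_xs)) / nominal_xs
    up = (max(varied_xs) - nominal_xs) / nominal_xs
    return down, up
```

The envelope is simply the extreme spread of the varied predictions, quoted relative to the central value.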
PeTeR is used as a second parton-level generator to predict the differential isolated prompt photon cross section at NLO, including the resummation of threshold logarithms at the next-to-next-to-next-to-leading-logarithmic (NNNLL) level. PeTeR is roughly equivalent to a fixed-order calculation at next-to-next-to-leading order (NNLO); there is currently no exact calculation available for inclusive photons at this order. To account for the isolation criteria applied in the measurement, the PeTeR result at NLO is normalised to that from JetPhox. The PeTeR predictions are supplemented with the resummation of large electroweak Sudakov logarithms according to Refs. [46,47]. These electroweak corrections, not included in the predictions from JetPhox, provide estimates of electroweak uncertainties that are important at high E_T^γ and also mean that, unlike JetPhox, PeTeR uses a running α_EM. The scale uncertainty is calculated similarly to JetPhox, by varying the scales around the central value, but in PeTeR there are four scales [48]: hard matching, jet, soft and factorisation. Finally, the PDF uncertainty is taken directly from JetPhox.
An additional study was made using MCFM, following on from the studies in Ref. [49], with parameters (CT10 PDF, photon isolation, scale choice and α_EM) matching those in JetPhox. MCFM calculates the fragmentation process only to LO, and therefore deviations from the JetPhox predictions were expected below approximately 200 GeV. Surprisingly, however, even at higher E_T^γ the predictions from MCFM were found to be consistently below the predictions from JetPhox, although within the theoretical uncertainties. This trend is under investigation by the calculation's authors and the predictions are not presented here.

Photon selection
The photon selection, in both data and MC simulation, is based on the reconstruction [50] of an EM cluster in the calorimeter as a photon candidate. The absence of an associated track in the inner detector classifies the photon candidate as an unconverted photon, whereas it is classified as a converted photon if the cluster is matched to two tracks coming from a conversion vertex or to one track with no hits in the innermost layer of the inner tracking detector. Both the converted and unconverted candidates are kept in the analysis. A further track-based classification [51] is used to minimise the number of electrons reconstructed as photons, although this introduces a slight decrease in the efficiency for reconstructing converted photons. The conversion classification is used both to determine the size of the photon cluster in the barrel calorimeter and as an input to the dedicated energy calibration [52], which is applied to account for energy loss before the EM calorimeter. This calibration starts by correcting the response from each of the layers in the EM calorimeter and then applies a response calibration from MC simulations to the cluster energies. After accounting for detector response variations not included in the simulation, such as high-voltage inhomogeneities in some sectors, energy scale factors are applied based on a comparison of the detector response to Z boson decays to electron–positron pairs in data and MC simulations.
Following this calibration, only photon candidates with E_T^γ > 25 GeV and a cluster barycentre (in the second layer of the EM calorimeter) lying within |η^γ| < 1.37 or 1.56 ≤ |η^γ| < 2.37 are retained for the analysis. The transition region between the barrel and end-cap calorimeters (1.37 ≤ |η^γ| < 1.56) is excluded due to the degraded performance induced by the increased amount of inactive material in front of the calorimeter. This region is expanded in the measurement presented here to 1.56, compared to the value of 1.52 used previously, to improve the accuracy of the photon energy measurement, as it avoids using clusters calibrated by scintillators that are part of the hadronic calorimeter. Finally, photons reconstructed near regions of the calorimeter affected by read-out or high-voltage failures are not included in the analysis. The remaining photon candidates are then used in this analysis if they satisfy further selection and quality criteria based on their calorimeter shower shapes and isolation energy.

Photon identification
In order to reduce the largest background mentioned previously, namely non-prompt photons originating mainly from decays of energetic π⁰ and η mesons, nine shower-shape variables [50] are exploited, similarly to the previous ATLAS inclusive photon measurements. These shower-shape variables are formed from the relative and absolute energy deposition within the calorimeter cells, using the full granularity of the different layers of the calorimeter system. The particular selection criteria for each of the nine variables are tuned for converted and unconverted photons separately, as well as being adjusted depending on η^γ (in intervals matching the four η^γ regions of this measurement). In the MC simulations the same criteria are applied as in data, but with two corrections. Firstly, the shower-shape variables are shifted [50] to match the measured distributions in data. Secondly, additional correction factors (at most a few percent from unity), calculated in each E_T^γ and η^γ interval, are applied to match the identification efficiency in the MC simulations to that in data. To quantify the effect of the identification criteria, the identification efficiency for prompt photons is defined in MC simulations as

ε_id^MC = N_id / N_reco,

where N_reco is the number of reconstructed photons geometrically matched, with ∆R < 0.2, to isolated photons generated at particle level, and N_id is the subset of these that also satisfy the identification criteria. This ε_id^MC is shown in Figure 1, along with the efficiencies for converted and unconverted photons.
The unconverted photon efficiency is high and approximately constant for more energetic photons, as expected, since they leave a more pronounced shower in the detector. However, a drop in efficiency is observed when combining with converted photons. The efficiency to reconstruct conversions decreases at high E_T^γ (> 150 GeV), where it becomes more difficult to separate the two tracks from the conversions. These very close-by tracks are more likely to fail the tighter selections, including a transition radiation requirement, applied to single-track conversion candidates.

Photon isolation
The photon candidates are required to be isolated to distinguish prompt photons from the hadronic background. As stated in Section 1, E_T^iso is calculated from topological clusters of calorimeter cells in a cone of size ∆R = 0.4 around the photon and corrected for the deposits related to the photon candidate itself. As this quantity is susceptible to contributions from the UE and pile-up, a correction based on the jet-area method [53] is applied. This estimates, on an event-by-event basis, the ambient energy density, which is then subtracted from E_T^iso before applying the isolation requirement. These corrections are typically between 1.5 and 2 GeV. In order for the detector-level E_T^iso distribution to reproduce the distribution from data, it is corrected in each E_T^γ and η^γ interval by the difference between the mean value of E_T^iso in data and that in the MC simulations. The isolation requirement applied is E_T^iso < 4.8 GeV + 4.2 × 10⁻³ × E_T^γ, as defined in Section 1. In contrast to the fixed value (3 or 7 GeV) used in the previous analyses, this requirement has been optimised to retain more of the photons satisfying the identification criteria of Section 4.1, whilst also obtaining the best signal-to-background ratio throughout the large E_T^γ range of the measurement. In addition, the fraction of photon candidates that satisfy the identification criteria and subsequently also satisfy the isolation requirement stays high and constant. This is due to the isolation requirement being relaxed at higher E_T^γ, compared to using a fixed cut.

Background subtraction
The number of events with a photon candidate (N_γ,data) satisfying the kinematic, identification and isolation selection criteria, as detailed in Section 4, has contributions from hadronic background and from electrons. These contributions are removed statistically using the techniques detailed below.
The hadronic background (from meson decays and jets) is removed by a data-driven technique, as done in the previous ATLAS analyses. This technique uses a two-dimensional sideband method based on the isolation and identification criteria. For the identification, photons either satisfy the full criteria of all the shower-shape variables outlined in Section 4.1 or an orthogonal selection which aims to maximise the hadronic background. This orthogonal selection is achieved by inverting four variables related to the first layer of the EM calorimeter, which has cells with a very small width in η. For isolation, photons are either isolated, as defined in Section 4.2, or non-isolated, with E_T^iso > 7.8 GeV + 4.2 × 10⁻³ × E_T^γ. The four regions are then defined in data to be:
• N_A,data: photon candidates satisfying both the isolation and identification criteria, i.e. N_γ,data.
• N_B,data: photon candidates that are non-isolated but satisfy the identification criteria.
• N_C,data: photon candidates that satisfy only the orthogonal identification criteria but are isolated.
• N_D,data: photon candidates that satisfy only the orthogonal identification criteria and are non-isolated.
As defined above, there is a 3 GeV separation between the non-isolated region and the isolated region. This separation is used to limit the number of particle-level signal photons that fall into the background regions. To quantify this effect, signal leakage fractions are calculated in MC simulations as

c_K = N_K,MC^signal / N_A,MC^signal, with K = B, C, D.

These leakage fractions are found to be small and are calculated in Pythia for the central value, with Sherpa used as a cross-check.
The two-dimensional sideband method assumes that the two chosen variables are independent for the background. The isolation and identification criteria are chosen to minimise any such dependence, but any deviation from this assumption can be accounted for by using MC simulations to calculate the ratio

R_bkg = (N_A,MC^bkg × N_D,MC^bkg) / (N_B,MC^bkg × N_C,MC^bkg),

where N_K,MC^bkg is the number of background events in each of the regions K = A, B, C, D. For the central value the assumption, confirmed in a control region, that the variables are independent (R_bkg = 1) is used; however, R_bkg is varied in Section 7 to obtain the systematic uncertainty from any potential dependence.
The four sideband regions, signal leakage fractions and R_bkg are then used to solve for the signal yield N_A^signal via

N_A^signal = N_A,data − R_bkg (N_B,data − c_B N_A^signal)(N_C,data − c_C N_A^signal) / (N_D,data − c_D N_A^signal).

This solution is used in the cross-section measurement via the signal purity, which is defined as

P_signal = N_A^signal / N_A,data.

In all four η^γ regions, P_signal is found to rise with E_T^γ from 60% at 25 GeV to 100% at around 300 GeV. In the highest E_T^γ interval the method is inaccurate due to a lack of events in the background regions, so there the central value of P_signal from the previous E_T^γ interval is used in the cross-section calculation. Finally, after the above subtraction, a remaining background of fake photons from electrons is accounted for. As in previous measurements, this is estimated using MC simulations of Z and W boson decays to electrons, scaled to the measured integrated luminosity in data. Reconstructed photons from these simulations passing the selection of Section 4 are counted if they are geometrically matched to a particle-level electron. The number of fake photons removed (N_e→γ) is less than 0.2% of the remaining signal photons (N_γ,data P_signal) in all four η^γ regions and for most of the E_T^γ range, only reaching a maximum of 0.7% in some low-E_T^γ intervals. As this is such a small effect, no systematic uncertainty is assigned to this subtraction.
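The sideband extraction can be illustrated with a short numerical sketch (a simplified fixed-point solution of an equation of the form quoted above; the event counts and leakage fractions are invented for illustration and the function names are our own):

```python
def sideband_signal(n_a, n_b, n_c, n_d, c_b=0.0, c_c=0.0, c_d=0.0,
                    r_bkg=1.0, n_iter=50):
    """Solve N_sig = N_A - R_bkg * (N_B - c_B*N_sig) * (N_C - c_C*N_sig)
    / (N_D - c_D*N_sig) by fixed-point iteration, starting from N_A."""
    n_sig = n_a
    for _ in range(n_iter):
        n_sig = n_a - r_bkg * (n_b - c_b * n_sig) * (n_c - c_c * n_sig) \
                / (n_d - c_d * n_sig)
    return n_sig

def signal_purity(n_sig, n_a):
    """P_signal = N_A^signal / N_A,data."""
    return n_sig / n_a
```

With negligible leakage (c_K = 0) this reduces to N_A − R_bkg N_B N_C / N_D; for example, counts of (1000, 200, 300, 600) in regions A to D give a signal yield of 900 and a purity of 90%.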

Cross section
The differential isolated prompt photon cross section as a function of E_T^γ (calculated in four |η^γ| regions) includes elements described in the previous sections and takes the form

dσ/dE_T^γ = (N_γ,data P_signal − N_e→γ) / (ε_trig · C_corr · ∫L dt · ∆E_T^γ),

where E_T^γ is that of the highest-transverse-energy photon satisfying the kinematic, identification and isolation criteria (Section 4). The trigger efficiency (ε_trig) corrects N_γ,data for any events that would satisfy the selection criteria but were not recorded in the dataset (Section 2). The number of events (N_γ,data) with a photon satisfying the selection criteria is corrected for background using the previously introduced subtraction factors P_signal and N_e→γ (Section 5). Further, the overall size of the studied dataset is accounted for by dividing by the total integrated luminosity (∫L dt), and the cross section is normalised to inverse GeV by dividing each measured E_T^γ interval by its size (∆E_T^γ). The remaining factor, C_corr, is the unfolding correction factor used to correct the measurement to particle level, allowing direct comparisons to theoretical predictions. The unfolding factors are derived using Pythia, with Sherpa used as a cross-check. The unfolding correction factors are extracted using a bin-by-bin unfolding procedure and are defined as

C_corr = N_MC^signal / N_MC^particle,

where N_MC^signal and N_MC^particle refer to the numbers of events with an isolated photon at detector level and at particle level, respectively.
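Under our reading of the arrangement of factors above, the per-interval arithmetic is a single division (an illustrative sketch, not the analysis code):

```python
def dsigma_det(n_gamma_data, p_signal, n_e_to_gamma,
               eff_trig, c_corr, int_lumi, delta_et):
    """Differential cross section for one E_T^gamma interval:
    background-subtracted yield divided by the trigger efficiency,
    unfolding factor, integrated luminosity and bin width."""
    signal_yield = n_gamma_data * p_signal - n_e_to_gamma
    return signal_yield / (eff_trig * c_corr * int_lumi * delta_et)
```

The units follow the inputs, e.g. pb/GeV if the integrated luminosity is given in pb⁻¹ and the bin width in GeV.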
The main contribution to C_corr is the identification efficiency (Section 4.1), resulting in a very similar shape, including the slight decrease at high E_T^γ. However, C_corr differs in that it also contains the effects of photon migrations between different E_T^γ intervals and the isolation efficiency (Section 4.2). The overall correction lies between 0.8 and 0.9 and therefore indicates that detector effects are rather small. The results of the bin-by-bin unfolding procedure are cross-checked using an iterative unfolding method, which reduces the reliance on the shape of the MC simulation distributions of E_T^γ at particle or detector level. The method is based on Bayes' theorem [54] and iteratively unfolds the spectrum by changing the prior of the particle-level distribution to the previously unfolded spectrum for the next iteration. The results show that the two unfolding procedures are in very good agreement, considering statistical uncertainties only.
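The iterative method can be sketched in a few lines of NumPy (a textbook D'Agostini-style iteration under simplifying assumptions, not the analysis implementation):

```python
import numpy as np

def bayes_unfold(data, response, prior, n_iter=4):
    """Iterative Bayesian unfolding.
    data:     observed detector-level spectrum, shape (J,)
    response: response[j, i] = P(reco bin j | truth bin i), shape (J, I)
    prior:    starting guess for the truth spectrum, shape (I,)
    Each iteration replaces the prior with the previous unfolded result."""
    truth = np.asarray(prior, dtype=float).copy()
    eff = response.sum(axis=0)                  # reconstruction efficiency
    for _ in range(n_iter):
        folded = response @ truth               # expected reco spectrum
        m = response * truth / folded[:, None]  # Bayes: P(truth i | reco j)
        truth = (m.T @ data) / eff              # updated truth estimate
    return truth
```

With a diagonal response the result equals the efficiency-corrected data after the first iteration; with migrations, repeated iterations converge towards the truth spectrum that folds to the data.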

Uncertainties
To estimate the systematic uncertainties, the cross-section calculation was repeated, varying the selection procedure, background subtraction techniques or the unfolding correction factor. One difference compared to the previous analyses is that this measurement makes use of the bootstrap technique [55] to evaluate the statistical influence on the systematic uncertainties, achieved by producing a large number of weighted (based on a Poisson distribution) replicas of each event. The result is then used to reduce statistical fluctuations by applying a two-step smoothing technique: first, E_T^γ intervals are combined until the propagated uncertainty has a sufficiently large statistical significance; then a Gaussian kernel smoothing is performed on the original E_T^γ intervals.
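The replica idea can be sketched as follows (a generic illustration of Poisson bootstrap weights, not the ATLAS implementation; function names are our own):

```python
import numpy as np

def bootstrap_replicas(per_event_values, n_replicas=1000, seed=0):
    """Build n_replicas weighted copies of a per-event quantity by drawing
    an independent Poisson(1) weight for every event in every replica.
    Returns the array of replica sums; their spread estimates the
    statistical uncertainty of the nominal sum."""
    rng = np.random.default_rng(seed)
    weights = rng.poisson(1.0, size=(n_replicas, len(per_event_values)))
    return weights @ np.asarray(per_event_values, dtype=float)
```

For N unit-weight events the standard deviation of the replica sums is close to √N, as expected for a counting measurement; applying the same weights throughout a systematic variation isolates its statistical component.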
The following text describes the included uncertainty sources (quantifying those that are smaller):
• The photon energy scale is altered by varying systematic sources up and down, with the resulting shifts summed in quadrature to provide the total uncertainty. The sources are split to account for correlations and relate to: detector material and read-out; simulation of the detector; extrapolations from data-driven measurements; and details related to the differences between unconverted and converted photon showers in the calorimeter. The uncertainty in the photon energy scale is around 1%, except for the region 1.56 ≤ |η^γ| < 1.81, but the resulting uncertainty in the measurement is larger due to the steeply falling cross section.
• The admixture of direct and fragmentation photons in a given E_T^γ interval affects the calculation of both P_signal and C_corr. Instead of using the default MC simulation fraction, a fit of the E_T^γ distribution is performed in Pythia to find the optimal admixture (as done in the recent photon-plus-jet paper [36]). The uncertainty is derived by comparing the results from this optimal admixture with those from the default Pythia simulation. This replaces the systematic uncertainty obtained previously from an arbitrary removal or doubling of the fragmentation component.
• R_bkg is set to unity when P_signal is calculated. As described in Section 5, this follows the assumption that there are no correlations between the isolation and identification criteria for the background. A test of this assumption is performed by subdividing the background-dominated region with an additional non-isolation criterion and then repeating the two-dimensional sideband method in background-only regions. A 10% difference from unity is found in this test, which is then applied to R_bkg to calculate the uncertainty.
• As described in Section 4.1, the photon identification efficiency in the MC simulations uses correction factors, and the associated uncertainty in these alters the cross section by 0.5% for most of the E_T^γ range. In the lowest E_T^γ intervals it reaches 2%, and above 550 GeV it ranges from 1% to 4% (increasing with η^γ).
• For the above photon identification correction factors an extra uncertainty, obtained from MC simulations, is required to account for a small difference between the photon isolation requirement applied in this analysis and that used for the measurement of the photon identification efficiency. This impacts the cross section by 0.5%, rising to 1% for the highest E_T^γ intervals.
• The orthogonal identification selection in Section 5 relies on inverting the selection criteria of four of the shower-shape variables. The uncertainty in this procedure is estimated by inverting either only two of these variables or an extra variable. A data-driven technique is used to disentangle this uncertainty from that already included in the R_bkg uncertainty above. The resulting uncertainty is 2% for E_T^γ < 100 GeV but quickly falls to zero at higher E_T^γ.
• The isolation requirement used to define the background region in the P_signal calculation was altered so that the constant part of the requirement (7.8 GeV) was varied by ±1 GeV (chosen because it is larger than any difference in the MC simulations between particle-level and detector-level isolation). The resulting uncertainty is less than 0.5%.
• The photon energy resolution is calculated from several independent sources in a similar manner to the energy scale, but the resolution is found to be much less important than the scale: it produces an uncertainty of only 0.5%, rising to 1% for the highest E_T^γ intervals.
• The effect of unfolding is investigated by using a smooth function to reweight the MC simulations to match the E_T^γ distribution in data. Unfolding the data using this reweighted MC prediction gives a difference of less than 0.5% compared to the nominal value.
• The uncertainty in the correction factors from the choice of QCD-cascade and hadronisation model is derived by comparing Sherpa with Pythia. To avoid double counting the effects of the fragmentation contribution, the Pythia simulation with the optimal admixture of direct and fragmentation photons is used again. The resulting uncertainty is 2% at low E_T^γ but quickly falls to zero as E_T^γ increases.
• The uncertainty in the integrated luminosity is ±1.9%. It is derived, following the same methodology as that detailed in Ref. [22], from a calibration of the luminosity scale using beam-separation scans performed in November 2012.
• Other uncertainties were studied but are not included in the systematic uncertainty, as they were found to be negligible. Examples of these studies include: the trigger efficiency (statistical uncertainties are < 0.1%), pile-up (splitting the dataset by the number of interactions per bunch crossing) and the MC-simulation isolation shift (correcting the MC simulation by twice the fit accuracy).
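The two-dimensional-sideband subtraction in which R_bkg enters can be written compactly. The following is a minimal Python sketch; the region labels (A: isolated and identified signal region; B, C, D: the sideband regions) and all counts are illustrative assumptions, not values from the analysis:

```python
def signal_yield(n_a, n_b, n_c, n_d, r_bkg=1.0):
    """Background-subtracted yield in the signal region A.

    A: isolated + identified (signal region), B: non-isolated + identified,
    C: isolated + non-identified, D: non-isolated + non-identified.
    r_bkg = 1 encodes the assumption that isolation and identification
    are uncorrelated for the background.
    """
    n_bkg_a = r_bkg * n_b * n_c / n_d  # background extrapolated into A
    return n_a - n_bkg_a


def purity(n_a, n_b, n_c, n_d, r_bkg=1.0):
    """Signal purity P_signal in region A."""
    return signal_yield(n_a, n_b, n_c, n_d, r_bkg) / n_a


# Hypothetical counts; varying r_bkg by 10% mirrors the background-only
# test described in the text.
n_a, n_b, n_c, n_d = 10000, 1500, 2000, 1200
p_nominal = purity(n_a, n_b, n_c, n_d, r_bkg=1.0)
p_shifted = purity(n_a, n_b, n_c, n_d, r_bkg=1.1)
```

The difference between the nominal and shifted purities is what propagates into the systematic uncertainty on the cross section.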
The systematic uncertainties, except for the luminosity uncertainty, are combined by treating each of the sources as uncorrelated within each E_T^γ interval but as correlated across different E_T^γ intervals. The statistical uncertainty is mainly from the data, but also has a component due to the MC simulation, arising from the reliance on MC simulations in the calculation of P_signal and corr. The resulting total statistical uncertainty is 1–2% for most of the measured E_T^γ range, rising steeply in the highest E_T^γ intervals.
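Treating the sources as uncorrelated within an interval amounts to adding their relative uncertainties in quadrature. A minimal sketch, using illustrative (not measured) values:

```python
import math

def combine_quadrature(rel_uncertainties):
    """Quadrature sum of independent relative uncertainties in one bin."""
    return math.sqrt(sum(u * u for u in rel_uncertainties))

# Hypothetical relative systematic uncertainties in a single E_T^gamma interval.
sources = [0.02, 0.005, 0.01, 0.02]
syst = combine_quadrature(sources)

# The statistical component (here assumed 1.5%) is added the same way.
total = combine_quadrature([syst, 0.015])
```

Correlations across E_T^γ intervals matter only when the intervals are combined, e.g. in a PDF fit; within a single interval the quadrature sum above suffices.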

Results and discussion
The final cross sections are measured following Eqn. 7 in the fiducial region given in Section 1. The systematic uncertainties, as described in Section 7, are combined with the statistical uncertainty, but do not include the luminosity uncertainty. The measured cross sections are compared to the theoretical predictions detailed in Section 3, with theoretical uncertainties from the combination of the scale, PDF, α_S and hadronisation-plus-UE uncertainties. Figure 3 shows a summary of the results (the measured cross sections are also tabulated in Appendix A): the measurement is well described overall by JetPhox over ten orders of magnitude in cross section. The total cross sections shown in Table 1 are integrated over the entire E_T^γ range for each η^γ region. As in the previous measurement [9], the total cross sections in data are 20% higher than those predicted by JetPhox, but the results are consistent within the uncertainties. The measurement uncertainty, dominated by the systematic uncertainty, is smaller than the theoretical uncertainty. The difference between data and JetPhox is explored further in Figure 4, where the cross-section ratios are shown in each of the four η^γ regions as a function of E_T^γ. Each η^γ region shows a similar trend at low E_T^γ: the JetPhox NLO predictions are up to 20% lower than the measured values. This difference remains constant, especially in the central η^γ region, for E_T^γ < 500 GeV, where the fragmentation contribution decreases with E_T^γ from being a large contribution to the cross section, showing that JetPhox models this contribution well apart from the normalisation. The normalisation difference decreases above this E_T^γ, and in the range 1100 ≤ E_T^γ < 1500 GeV the prediction overestimates the measurement, although this is where the experimental and PDF uncertainties are largest. The results are shown using the CT10 PDF, but there is very little difference when comparing the central value to those from MSTW2008 and NNPDF2.3. The overall trend in the differences between data and theory is similar to that seen in the measurement using 2011 data. However, a significant increase in the experimental precision of this measurement compared to the previous ATLAS measurements reveals new qualitative features in the comparison to JetPhox. While the theoretical uncertainties have not changed, the measurement uncertainties are halved over most of the phase space. This makes them considerably smaller than the theoretical uncertainties, except in the statistically limited highest E_T^γ intervals, and leads to disagreement in some E_T^γ intervals between the measurement and the JetPhox prediction. This improvement in accuracy can help to reduce PDF uncertainties once the measurement is included in a global fit.

In order for the data to provide a tighter constraint on proton PDF uncertainties, it would be preferable both to have better general agreement between data and the predictions and to reduce the dominant theoretical scale uncertainties. This can be achieved by using calculations beyond NLO, as done here with the predictions from PeTeR. This comparison is shown in Figure 5, where it can be seen that PeTeR removes the normalisation difference seen between data and JetPhox, especially in the region |η^γ| < 1.37. The uncertainties shown, from combining the scale, PDF and electroweak uncertainties, are about 20% lower than those from JetPhox. The PeTeR predictions match the data well, within the combined measured and theoretical uncertainties, in all of the measured phase space. The improved normalisation and smaller uncertainties are also seen in the total cross sections, as shown in Table 2.
Finally, the measured cross sections are also compared to the LO parton-shower MC calculations in Figure 6. Here it can be seen that generally Sherpa, without any normalisation scaling, matches the data in the range 100 ≤ E_T^γ […] than both the measurement and the other predictions, tending to overestimate the measured cross section, which suggests that the fragmentation contribution is not well modelled by the parton shower.

Conclusion
In conclusion, a measurement of the inclusive isolated photon cross section has been presented, using 20.2 fb⁻¹ of √s = 8 TeV proton–proton collision data recorded by the ATLAS detector at the LHC. The cross section is measured for the highest-energy photon in the event, spanning 25 < E_T^γ < 1500 GeV, in one of four η^γ regions (|η^γ| < 0.6, 0.6 ≤ |η^γ| < 1.37, 1.56 ≤ |η^γ| < 1.81 and 1.81 ≤ |η^γ| < 2.37), with the isolation requirement E_T^iso < 4.8 GeV + 4.2 × 10⁻³ × E_T^γ calculated within a cone of size ΔR = 0.4. The results cover ten orders of magnitude in cross section, extending the measurement above 1 TeV while also revisiting lower-E_T^γ data points, and show a significant improvement in experimental uncertainties over the previous measurements. The results are compared to JetPhox predictions, which, for most of the E_T^γ range, have a similar shape but lie below the data. The predictions from PeTeR agree much better in normalisation and, unlike JetPhox, are within the uncertainties of the measured cross section over the entire measured phase space, showing the need for higher-order calculations to better understand this process theoretically. Comparing the results to LO parton-shower MC calculations shows different trends, with the largest differences at low E_T^γ in the region dominated by the fragmentation contribution. Finally, the halving of the measured uncertainties compared to previous measurements will make this a useful constraint on proton PDF uncertainties once the result is included in a global fit.
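The E_T^γ-dependent isolation requirement quoted above can be expressed directly. The following is a minimal sketch in which the cone sum E_T^iso is taken as a given input (its detector-level computation within ΔR = 0.4 is outside the scope of this sketch):

```python
def passes_isolation(et_iso_gev, et_gamma_gev):
    """Isolation requirement E_T^iso < 4.8 GeV + 4.2e-3 * E_T^gamma,
    with E_T^iso the transverse energy in a cone of Delta R = 0.4
    around the photon candidate (both arguments in GeV)."""
    return et_iso_gev < 4.8 + 4.2e-3 * et_gamma_gev

# The threshold loosens with photon E_T: 5.22 GeV at E_T^gamma = 100 GeV,
# 11.1 GeV at E_T^gamma = 1500 GeV.
assert passes_isolation(5.0, 100.0)
assert not passes_isolation(6.0, 100.0)
```

The E_T^γ-dependent term keeps the requirement efficient at high transverse energies, where the photon's own activity and pile-up contributions in the cone grow.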

Appendix A. Tables of measured cross sections
The measured E_T^γ-differential cross sections are listed in Tables 3, 4, 5 and 6.

Figure 1: The photon identification efficiency (with statistical uncertainty) as a function of E_T^γ, determined in Pythia MC simulations, along with the separate efficiencies for unconverted and converted photons. The efficiency is shown for the region |η^γ| < 0.6; similar results are found in the other |η^γ| regions.

This combination is shown in Figure 2 along with several of the main systematic uncertainties detailed above. The energy-scale uncertainty dominates the high-E_T^γ region, especially in the region 1.56 ≤ |η^γ| < 1.81. At low E_T^γ the uncertainties from the R_bkg variation and from the admixture of direct and fragmentation photons are of similar magnitude and dominate the uncertainty. In the E_T^γ range 80–200 GeV the main systematic uncertainties are of similar size and, in all but the region 1.56 ≤ |η^γ| < 1.81, the luminosity uncertainty is larger than the combination of the other systematic uncertainties.

Figure 2: Summary of the relative size of the combined systematic uncertainty (which excludes the luminosity) and its four main contributions, shown as a function of E_T^γ.

Figure 3: Differential cross sections from data and JetPhox (using the CT10 PDF), shown as a function of E_T^γ for the four |η^γ| regions. The distributions are scaled by the specified factors to separate them visually.

Figure 4: Ratio of theory (JetPhox using the CT10 PDF) to data for the differential cross sections as a function of E_T^γ for the four |η^γ| regions. The statistical component of the uncertainty in the data is indicated by the horizontal tick marks, whereas the whole error bar corresponds to the combined statistical and systematic uncertainty (the additional systematic uncertainty arising from the uncertainty in the integrated luminosity is displayed separately as a dotted line). The NLO total uncertainty from JetPhox is displayed as a band, corresponding to the combination of the scale, α_S, PDF and hadronisation-plus-UE uncertainties. In the highest E_T^γ interval of the |η^γ| < 0.6 region the theoretical prediction and uncertainty are not shown as they are above the range of the figure.

Figure 5: Ratio of theory (PeTeR and JetPhox, both using the CT10 PDF) to data for the differential cross sections as a function of E_T^γ for the four |η^γ| regions. The statistical component of the uncertainty in the data is indicated by the horizontal tick marks, whereas the whole error bar corresponds to the combined statistical and systematic uncertainty (the additional systematic uncertainty arising from the uncertainty in the integrated luminosity is displayed separately as a dotted line). The NLO total uncertainty from PeTeR is displayed as a band, corresponding to the combination of the scale, PDF and electroweak uncertainties. In the highest E_T^γ interval of the |η^γ| < 0.6 region the theoretical predictions and uncertainty are not shown as they are above the range of the figure.

Figure 6: Ratio of theory (Pythia, Sherpa and JetPhox) to data for the differential cross sections as a function of E_T^γ for the four |η^γ| regions. The statistical component of the uncertainty in the data is indicated by the horizontal tick marks, whereas the whole error bar corresponds to the combined statistical and systematic uncertainty (the additional systematic uncertainty arising from the uncertainty in the integrated luminosity is displayed separately as a dotted line). The NLO total uncertainty from JetPhox is displayed as a band, corresponding to the combination of the scale, α_S, PDF and hadronisation-plus-UE uncertainties. In the highest E_T^γ interval of the |η^γ| < 0.6 region the theoretical predictions and uncertainty are not shown as they are above the range of the figure.

Table 1: Measured and predicted total cross sections for each of the four |η^γ| ranges. The JetPhox predictions use the CT10 PDF.

Table 2: Predicted total cross sections from PeTeR for each of the four |η^γ| ranges, using the CT10 PDF.

Table 3: The inclusive prompt photon cross section with systematic and statistical uncertainties for the region |η^γ| < 0.6.

Table 4: The inclusive prompt photon cross section with systematic and statistical uncertainties for the region 0.6 ≤ |η^γ| < 1.37.

Table 5: The inclusive prompt photon cross section with systematic and statistical uncertainties for the region 1.56 ≤ |η^γ| < 1.81.