Measurement of the production cross section of pairs of isolated photons in pp collisions at 13 TeV with the ATLAS detector

Abstract
 A measurement of prompt photon-pair production in proton-proton collisions at √s = 13 TeV is presented. The data were recorded by the ATLAS detector at the LHC with an integrated luminosity of 139 fb−1. Events with two photons in the well-instrumented region of the detector are selected. The photons are required to be isolated and have a transverse momentum of p T,γ1(2) > 40 (30) GeV for the leading (sub-leading) photon. The differential cross sections as functions of several observables for the diphoton system are measured and compared with theoretical predictions from state-of-the-art Monte Carlo and fixed-order calculations. The QCD predictions from next-to-next-to-leading-order calculations and multi-leg merged calculations are able to describe the measured integrated and differential cross sections within uncertainties, whereas lower-order calculations show significant deviations, demonstrating that higher-order perturbative QCD corrections are crucial for this process. The resummed predictions with parton showers additionally provide an excellent description of the low transverse-momentum regime of the diphoton system.


Introduction
The production of a prompt photon pair in proton-proton collisions is one of the cornerstone processes of the LHC physics programme. Most prominently, photon pairs are of vital importance in searching for or studying the properties of particles that decay into photon pairs, such as the Higgs boson or other neutral particles. The main background to resonant diphoton production originates from the continuum production of such pairs, which is the topic of this publication. Despite the electromagnetic nature of this process, diphoton production in hadron colliders involves intricate strong-interaction dynamics. Theoretical predictions are therefore highly non-trivial and measurements are necessary to scrutinise and validate such predictions.
Non-resonant photons can be produced by various mechanisms in pp collisions at the LHC. In theoretical calculations, prompt photons, i.e. those not produced in hadron decays, are typically subdivided into two production mechanisms: the direct production mode, represented by a qq̄ → γγ t-channel diagram at leading order; and the single- and double-fragmentation components, with a hard γ+jet or dijet configuration and sub-leading photon emissions in a resummed approach. These photon production mechanisms are illustrated schematically in figures 1(a) and 1(b), respectively. Hadron decays such as π⁰ → γγ, cf. figure 1(c), are the most abundant source of photons in pp collisions, but are considered as background in this publication. To suppress this non-prompt contribution, photons are required to be isolated from hadronic activity in their vicinity.
In this publication, prompt diphoton production is measured in √s = 13 TeV pp collisions at the LHC using the full Run-2 dataset. Previous diphoton cross-section measurements were performed in pp collisions at lower centre-of-mass energies. The main challenge and source of uncertainty on the experimental side is the estimation of the background from non-prompt photons in jet events; in this analysis, a data-driven technique is used to estimate this background. The background-subtracted yield is then corrected for detector effects using an iterative unfolding procedure. Differential cross-section measurements are presented in a fiducial region as functions of several kinematic observables of the photon pair. State-of-the-art theoretical predictions from fixed-order calculations and from Monte Carlo (MC) event generators are compared with the data.

ATLAS detector
The ATLAS experiment [8] at the LHC is a multipurpose particle detector with a forward-backward symmetric cylindrical geometry and a near-4π coverage in solid angle. 1 It consists of an inner tracking detector surrounded by a thin superconducting solenoid providing a 2 T axial magnetic field, electromagnetic and hadronic calorimeters, and a muon spectrometer.
The inner tracking detector covers the pseudorapidity range |η| < 2.5. It consists of silicon pixel, silicon microstrip, and transition radiation tracking detectors.
The calorimeter system covers the pseudorapidity range |η| < 4.9. Within the region |η| < 3.2, electromagnetic (EM) calorimetry is provided by a lead/liquid-argon (LAr) sampling calorimeter with accordion geometry. It is divided into a barrel section covering |η| < 1.475 and two endcap sections covering 1.375 < |η| < 3.2. For |η| < 2.5, it is divided into three layers in depth, which are finely segmented in η and φ. An additional thin LAr presampler layer covering |η| < 1.8 is used to correct for energy loss in material upstream of the calorimeters. Hadronic calorimetry is provided by a steel/scintillator-tile calorimeter, segmented into three barrel structures within |η| < 1.7, and two copper/LAr hadronic endcap calorimeters. The solid angle coverage is completed with forward copper/LAr and tungsten/LAr calorimeter modules optimised for electromagnetic and hadronic measurements, respectively.
The muon spectrometer surrounds the calorimeters and is based on three large air-core toroidal superconducting magnets with eight coils each. The field integral of the toroids ranges between 2.0 and 6.0 T m across most of the detector. The muon spectrometer includes a system of precision tracking chambers and fast detectors for triggering.
A two-level trigger system is used to select events. The first-level trigger is implemented in hardware and uses a subset of the detector information to keep the rate of accepted events below 100 kHz. This is followed by a software-based trigger that reduces the accepted event rate to 1 kHz on average depending on the data-taking conditions [9].
An extensive software suite [10] is used for the reconstruction and analysis of real and simulated data, for detector operations, and in the trigger and data acquisition systems of the experiment.

Data samples
The data used in this measurement were recorded by the ATLAS detector in pp collisions at √s = 13 TeV during the LHC Run-2 data-taking period. Events were selected using a diphoton trigger requiring the presence of at least two clusters of energy depositions with |η| < 2.47 in the EM calorimeter, with transverse momentum (p T) above 35 (25) GeV for the leading (sub-leading) cluster.

Event selection and definition of observables
The photon and event selections are explained in section 4.1, the particle-level selection to which the data are unfolded is described in section 4.2, and the definition of the observables is included in section 4.3.

Photon reconstruction and event selection
Photon candidates are reconstructed from clusters of energy depositions in the EM calorimeter [34]; the clusters are formed using a dynamical topological cell-clustering algorithm [35]. Clusters matched to a track consistent with originating from an electron produced in the beam interaction region are considered electron candidates; otherwise they are considered photon candidates. Photons which convert to electron-positron pairs in the inner detector are distinguished from unconverted photons by matching reconstructed conversion vertices or tracks consistent with originating from a photon conversion to the clusters. The photon energy is calibrated in several steps as described in refs. [34,36,37]. The energy resolution is optimised using a multivariate regression algorithm based on properties of the shower development in the EM calorimeter and trained on simulated events. The absolute energy scale is adjusted using Z → ee events, and validated using radiative Z boson decays. The energy-scale uncertainty depends on the p T and η of the photon, and varies between 0.25% and 1% for photons with transverse momentum in the range 30-100 GeV, which dominate this analysis.
For this study, at least two photon candidates are required per event, with transverse momentum p T,γ above 40 GeV and 30 GeV for the p T,γ -ordered leading and sub-leading photons, respectively. These p T,γ thresholds are each 5 GeV above the corresponding trigger thresholds, at the plateau of the trigger efficiency versus p T,γ [11]. The photon pseudorapidity is required to be |η γ | < 2.37, excluding the transition region between the barrel and endcaps (1.37 < |η γ | < 1.52). In this acceptance region, the high granularity of the EM calorimeter allows efficient identification of photons.
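As an illustration, the kinematic photon selection above can be sketched in a few lines of Python. The function names and the (pt, |eta|) tuple format are ours, not ATLAS software; only the numerical thresholds come from the text.

```python
def in_acceptance(abs_eta: float) -> bool:
    """|eta| < 2.37, excluding the barrel-endcap transition 1.37 < |eta| < 1.52."""
    return abs_eta < 2.37 and not (1.37 < abs_eta < 1.52)

def select_diphoton(photons):
    """Return the leading (pt [GeV], |eta|) pair passing the 40 (30) GeV
    leading (sub-leading) thresholds, or None; pT ordering is done here."""
    cands = sorted((p for p in photons if in_acceptance(p[1])),
                   key=lambda p: p[0], reverse=True)
    if len(cands) >= 2 and cands[0][0] > 40.0 and cands[1][0] > 30.0:
        return cands[0], cands[1]
    return None
```

Note that the ordering in p T,γ is applied after the acceptance cut, so a high-p T photon in the transition region does not displace a selectable pair.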
The momenta of the photon candidates are corrected for the position of their associated reconstructed primary interaction vertex. The associated vertex is chosen among all reconstructed vertices, which are each required to have at least two inner detector tracks with p T > 0.5 GeV. The measured trajectories of the two photons, along with the reconstructed vertex information in the event, are used as inputs to a neural-network algorithm trained on simulated events to select the most probable primary vertex [38] for the photon pair. The photon trajectories are reconstructed using the shower-depth (longitudinal) segmentation of the EM calorimeter; for converted photons, the position of the conversion vertex is also used if the conversion tracks have hits in the silicon detectors.
The identification of photons is based on multiple variables that quantify the longitudinal and lateral shape of the EM shower produced by the photon. Requirements on these shower variables are used to identify photons. A detailed description of all variables and selections for the tight photon identification used in this analysis is given in ref. [34].
The identification variables f side (energy fraction outside of core cells) and w s3 (lateral shower width) are used to define control regions for the background estimation presented in section 5.1. These variables use information from the finely segmented first layer of the electromagnetic calorimeter, and are sensitive to differences between candidates from prompt photons and background candidates, e.g. from the two collimated photons of a neutral hadron decay.
To further reject background candidates from hadron decays, which are in general surrounded by hadronic activity, the photons are required to be isolated. The isolation measurement is based on calorimeter information [34]. The initial isolation variable is defined as the scalar sum of the transverse momenta computed from topological clusters of calorimeter cells [35] in a cone of ∆R = 0.2 around the photon candidate direction. The contribution of the photon itself to the transverse isolation energy is subtracted, based on the energy depositions in a rectangular window of ∆η × ∆φ = 0.125 × 0.175 around the photon candidate and the energy expected to leak out of this window. Effects of pile-up and of the underlying event (UE) are corrected for in calculating the isolation variable E iso,0.2 T,γ1(2) used for the analysis. The pile-up and UE correction is estimated on an event-by-event basis, using the ambient transverse momentum density [39]. This density is determined as the median transverse momentum density of jets reconstructed with the k t algorithm [40] with a radius parameter R = 0.5; it is computed separately in two pseudorapidity regions, |η| < 1.5 and 1.5 < |η| < 3, and used according to the photon pseudorapidity. The photon candidates are required to have E iso,0.2 T,γ1(2) < 0.05 · p T,γ1(2). This isolation requirement is optimised to minimise the overall uncertainties, achieving high signal purity while retaining good signal efficiency; more stringent requirements and a larger isolation cone would increase the purity at the cost of larger uncertainties from pile-up modelling in the simulation.

The angular separation of the photons is required to be ∆R γγ > 0.4, thus avoiding overlap between their isolation cones and minimising the correlation between the transverse isolation energies of the two photons. The isolation criterion is also used for the background estimation (cf. section 5), and such a correlation between the isolation criteria of the two photons would have an impact on the background estimation results.
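The cone sum and ambient-density correction described above can be sketched as follows. This is a minimal illustration, not the ATLAS implementation: the dictionary format and helper names are ours, and the subtraction of the photon's own core energy is assumed to have been applied to the cluster list already.

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Angular distance, with the azimuthal difference wrapped into [-pi, pi]."""
    dphi = math.remainder(phi1 - phi2, 2.0 * math.pi)
    return math.hypot(eta1 - eta2, dphi)

def calo_isolation(photon, clusters, rho_central, rho_forward, cone=0.2):
    """Cone sum of cluster pT around the photon, minus the ambient-density
    (pile-up + UE) correction rho * cone area.  The photon's own core energy
    is assumed to be already removed from `clusters`; photons and clusters
    are dicts with 'eta', 'phi', 'pt'."""
    rho = rho_central if abs(photon["eta"]) < 1.5 else rho_forward
    cone_sum = sum(c["pt"] for c in clusters
                   if delta_r(photon["eta"], photon["phi"],
                              c["eta"], c["phi"]) < cone)
    return cone_sum - rho * math.pi * cone ** 2

def is_isolated(photon, clusters, rho_central, rho_forward):
    """Signal-region requirement from the text: E_T^iso,0.2 < 0.05 * pT,gamma."""
    return calo_isolation(photon, clusters, rho_central, rho_forward) < 0.05 * photon["pt"]
```

The two ρ values stand in for the median jet-density estimates in the two pseudorapidity regions quoted in the text.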
The event selection described above is summarised in table 1, where the particle-level event selection is also presented, which is described in more detail in the next section. With these requirements 4 708 978 events are selected in the data.

Particle-level event selection
The particle-level selection, to which the data are unfolded (cf. section 6), closely follows the detector-level event requirements, and is also shown in table 1. Some aspects are clarified in detail in the remainder of this section.
The particle-level event selection in MC samples takes the leading two prompt photons into account, and vetoes the event if one of them fails the fiducial cuts.
The transverse isolation energy at particle level is determined as the scalar sum of the transverse momenta of all stable particles, 3 excluding particles from pile-up, muons, neutrinos and the photon itself. Furthermore, to ensure that the side effects of the detector-level UE subtraction are matched at the particle level, 4 a final-state-based subtraction using the same method as described in section 4.1 is applied. The value of the isolation requirement at the particle level (0.09) is chosen to be different from the one at the detector level (0.05). This difference accounts for the non-compensating nature of the ATLAS calorimeter. The particle-level value is optimised to minimise the number of events where photons pass a detector-level isolation requirement but fail the particle-level one, or vice versa.

Observables
Cross sections are measured differentially as functions of several observables defined for the diphoton system:
• Transverse momentum p T,γ1 (p T,γ2) of the leading (sub-leading) photon.
3 Stable particles are defined as those with a decay length cτ > 10 mm. 4 While pile-up particles are excluded at particle level, there are still other effects, e.g. multiple parton interactions, that can cause such a subtraction procedure to have an influence.
• The absolute value of the cosine of the scattering angle with respect to the z-axis in the Collins-Soper frame [41]:

|cos θ*| = |(E γ1 + p z,γ1)(E γ2 − p z,γ2) − (E γ1 − p z,γ1)(E γ2 + p z,γ2)| / [m γγ √(m²γγ + p²T,γγ)],

where E γ is the energy of the photon and p z,γ is the z-component of the momentum of the photon. The definition in the Collins-Soper frame simplifies an interpretation of the scattering angle in the presence of initial-state radiation.
• An angular variable sensitive to p T,γγ, defined as:

φ*η = tan(φ acop /2) · sin θ*η,   with φ acop = π − ∆φ γγ and cos θ*η = tanh(∆η γγ /2).

Angular variables are typically measured with better resolution than the photon energy. Therefore, a particular reference frame that allows φ*η to be expressed in terms of angular variables only, denoted by the subscript η and described in ref. [42], is used in order to optimise the resolution. In particular, φ*η is an ideal probe of QCD effects because it is sensitive to similar dynamics to p T,γγ, but with a significantly better resolution in the low-p T,γγ region. A similar argument holds for a T,γγ below.
• Transverse component of p T,γγ with respect to the thrust axis: 5

a T,γγ = 2 |p x,γ1 p y,γ2 − p y,γ1 p x,γ2| / |p⃗ T,γ1 − p⃗ T,γ2|,

where p x(y),γ1(2) are the x(y)-components of the leading (sub-leading) photon momenta, and p⃗ T,γ1 − p⃗ T,γ2 is the vectorial difference of the transverse momenta of the two photons.
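The three derived observables above can be computed directly from the photon four-momenta, as in the following sketch. The conventions follow the standard Collins-Soper and φ*η definitions cited in the text (refs. [41, 42]); the function and dictionary key names are illustrative.

```python
import math

def diphoton_observables(g1, g2):
    """Compute |cos theta*| (Collins-Soper), phi*_eta and a_T,gg for two
    photons given as (E, px, py, pz) four-vectors, leading photon first."""
    E1, px1, py1, pz1 = g1
    E2, px2, py2, pz2 = g2
    E, px, py, pz = E1 + E2, px1 + px2, py1 + py2, pz1 + pz2
    m2 = E ** 2 - px ** 2 - py ** 2 - pz ** 2      # diphoton mass squared
    pt2 = px ** 2 + py ** 2                        # pT,gg squared
    m = math.sqrt(m2)

    # |cos theta*| in the Collins-Soper frame
    cos_cs = abs((E1 + pz1) * (E2 - pz2) - (E1 - pz1) * (E2 + pz2)) \
        / (m * math.sqrt(m2 + pt2))

    # phi*_eta = tan(phi_acop / 2) * sin(theta*_eta),
    # with phi_acop = pi - dphi and cos(theta*_eta) = tanh(deta / 2)
    def eta(px_, py_, pz_):
        p = math.sqrt(px_ ** 2 + py_ ** 2 + pz_ ** 2)
        return 0.5 * math.log((p + pz_) / (p - pz_))
    dphi = abs(math.remainder(math.atan2(py1, px1) - math.atan2(py2, px2),
                              2.0 * math.pi))
    deta = eta(px1, py1, pz1) - eta(px2, py2, pz2)
    sin_theta_eta = math.sqrt(1.0 - math.tanh(deta / 2.0) ** 2)
    phi_star = math.tan((math.pi - dphi) / 2.0) * sin_theta_eta

    # a_T,gg: component of pT,gg transverse to the thrust axis
    # t = (pT1 - pT2) / |pT1 - pT2|
    a_t = 2.0 * abs(px1 * py2 - py1 * px2) / math.hypot(px1 - px2, py1 - py2)

    return {"cos_theta_star_cs": cos_cs, "phi_star_eta": phi_star, "a_t": a_t}
```

For a back-to-back, zero-rapidity photon pair all three observables vanish, which provides a quick sanity check of the conventions.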
A fine observable binning is used, taking advantage of the detector resolution and the large sample size. In general, the bin size is set to be roughly four times the detector resolution; this requirement ensures large purities in the response matrices used for unfolding the measurement to particle level. To avoid very large statistical uncertainties in regions with low event counts, the minimum bin size is also limited based on the expected number of signal events, keeping the statistical uncertainty below 10%. The two previous requirements would still allow an extremely fine binning in some regions for |cos θ*| (CS), φ*η and π − ∆φ γγ; in those regions the bin size is enlarged to keep the plots easily readable.

5 The thrust axis is defined as t̂ = (p⃗ T,γ1 − p⃗ T,γ2)/|p⃗ T,γ1 − p⃗ T,γ2|.
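The two binning rules can be combined into a simple greedy procedure, sketched below under the assumption of a fine trial binning with per-bin expected signal counts. The function and thresholds are illustrative, not the analysis code.

```python
def build_binning(fine_edges, fine_counts, resolution, max_rel_stat=0.10):
    """Greedy sketch of the binning rules described above: grow each bin,
    left to right, until its width is at least four times the detector
    resolution and the expected signal count keeps the relative statistical
    uncertainty (1/sqrt(N)) below `max_rel_stat`."""
    min_count = 1.0 / max_rel_stat ** 2       # 10% -> at least 100 events
    edges = [fine_edges[0]]
    count = 0.0
    for hi, n in zip(fine_edges[1:], fine_counts):
        count += n
        if hi - edges[-1] >= 4.0 * resolution and count >= min_count:
            edges.append(hi)
            count = 0.0
    if count > 0 and len(edges) > 1:          # fold leftover into the last bin
        edges[-1] = fine_edges[-1]
    elif len(edges) == 1:                     # degenerate case: a single bin
        edges.append(fine_edges[-1])
    return edges
```

The additional readability-driven enlargement mentioned in the text would be applied on top of this, by hand.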

Background estimation
Reconstructed photon pairs can appear in the detector from a variety of sources beyond prompt production processes. These are summarised in this section along with details about how they are estimated and subtracted from the selected events to give the actual γγ signal yield.
Most of the background arises from jets misidentified as photons; these are mainly jets containing a neutral hadron which carries most of the jet energy and decays into a collimated pair of photons with overlapping showers in the calorimeter. This background consists of γj, jγ or jj events, with one or two jets misidentified as photons; the notation jγ (γj) here and in the following refers to the component where the leading (sub-leading) photon candidate stems from a misidentified jet, while in jj events both photon candidates do. The data-driven determination is described in detail in section 5.1.
A few percent of the sample corresponds to the electron background (ee), with one or two electrons 6 being misidentified as photons. The electromagnetic showers created by electrons in the calorimeter are very similar to the ones from photons, and therefore electrons can be misidentified as photons. The main process contributing to this background is Drell-Yan production. It is estimated using the MC simulation described in section 3.2. It makes its largest contribution to the background in the analysis region at m γγ ∼ m Z .
Finally, less than a percent of the sample is associated with pile-up background (PU), i.e. pairs of γj events from different pp collisions in the same bunch crossing. It is determined with a data-driven approach discussed in section 5.2.
The selected diphoton data sample also contains signal events in which the two photon candidates come from the decay of a Higgs boson. Their contribution to the signal region is predicted to be 0.2% of the integrated signal and increases to a few percent in the relevant m γγ bin. This contribution is not subtracted from the data as background, since it is a small genuine part of prompt-photon pair production.

Jet background
The background from jets misidentified as prompt photons is estimated using a data-driven method, extrapolating the amount from multiple control regions enriched in background to the signal region, i.e. the region defined by the nominal event selection. This extrapolation is done by performing a Poisson likelihood fit. The method is an extension of the 'two-dimensional sideband' technique used in the single-photon analyses [43-52] to the two-photon case. It is similar to the method used in previous diphoton analyses [5, 6]. The fit is performed separately in each bin of each observable.
To define the background control regions, two almost independent jet-photon discriminant criteria are used: the lateral shape of the shower in the first layer of the electromagnetic calorimeter and the photon isolation. Specifically, if a photon candidate fails the tight identification selection, by failing the requirements on either f side, w s3, or both shower-shape variables (defined in section 4.1), but satisfies the rest of the identification criteria, the event enters one of the control regions. If a photon fails the nominal isolation requirement, but falls into 0.05 · p T,γ1(2) < E iso,0.2 T,γ1(2) < 0.15 · p T,γ1(2), the event enters a control region. The isolation signal and control regions are indicated in figure 2, which shows the E iso,0.2 T,γ2 /p T,γ2 relative isolation distribution for prompt photons and for misidentified jets.

6 An 'electron' here and in the following always refers to an electron or a positron.

Figure 2. Relative photon isolation distribution for the sub-leading photon candidate as extracted from the Sherpa diphoton sample (black histogram), and from misidentified jets, estimated from data events with sub-leading photon candidates failing the tight identification requirement (blue histogram). A small prompt-photon contribution to the non-tight control region is subtracted from the blue histogram, according to the Sherpa sample predictions.
In addition to the signal region, where both photons satisfy both the isolation and identification criteria, 15 background control regions are defined representing all possible combinations where at least one photon fails to meet at least one of the criteria, as illustrated in figure 3. These regions are labelled by the index i = 1 . . . 16, with i = 1 being the signal region.
JHEP11(2021)169

The fit model is built from probability functions f p,i, corresponding to the probability for an event from process p ∈ {γγ, γj, jγ, jj, ee, PU} to fall in the diphoton region i. They are constructed as a product of four probabilities: the probabilities for each of the two photon candidates to fulfil both the isolation and the identification requirement of the given region. Schematically,

f p,i = ε̃ iso p,1 · ε̃ id p,1 · ε̃ iso p,2 · ε̃ id p,2 · R p (i),   (5.1)

where ε̃ stands for the efficiency ε if the corresponding requirement is fulfilled in region i and for (1 − ε) otherwise, and R p (i) collects the correlation correction factors relevant for that region. Here, ε iso p,n (ε id p,n) is the efficiency of the leading (n = 1) or sub-leading (n = 2) photon candidate to fulfil the signal-region isolation (identification) requirement, relative to the looser requirements that include the control regions. The correlation correction factors R account either for correlations between the isolation and identification of one photon candidate, R iso−id p,n, or for correlations between the isolation (identification) of both candidates, R iso p (R id p). A value R = 1 corresponds to no correlation between the relevant variables, while values below or above unity indicate a certain degree of positive or negative correlation, respectively. The R values are typically close to one, mostly slightly below it, with the lowest values reaching 0.9; more details are given in section 5.1.1.
The expected number of events n exp i in each region i can then be written as

n exp i = N γγ f γγ,i + N γj f γj,i + N jγ f jγ,i + N jj f jj,i + N ee f ee,i + N PU f PU,i .

Here, N γγ, N γj, N jγ, N jj, N ee and N PU are the total numbers of events integrated over all 16 regions, attributed to the signal or background processes, respectively. The parameter of interest, i.e. the number of γγ events in the signal region, is given by n γγ = N γγ f γγ,1. Most of the parameters associated with misidentified jets are estimated from the data, by allowing them to float in the fit, because their MC predictions are not accurate, which motivates this sophisticated approach. The correlation correction factors R are fixed in the fit after being estimated either from MC simulation or from data in validation regions, except for R id jj and R iso jj, which are allowed to float in the fit. In total, a set of 13 nuisance parameters is allowed to float in the fit in addition to n γγ: the normalisations N γj, N jγ and N jj, the isolation and identification efficiencies of the misidentified-jet candidates (ε iso γj,2, ε id γj,2, ε iso jγ,1, ε id jγ,1, ε iso jj,1, ε id jj,1, ε iso jj,2 and ε id jj,2), and the correlation factors R id jj and R iso jj. The Poisson likelihood function is given by

L = ∏ i=1..16 (n exp i)^(n obs i) e^(−n exp i) / n obs i! ,

where n obs i is the number of diphoton candidates observed in each region i. The efficiency parameters ε corresponding to prompt photons are determined from MC simulation and fixed in the fit. Details about the predetermined input parameters are given in section 5.1.1. The parameters of the electron and pile-up background probability functions, f ee,i and f PU,i, as well as N ee and N PU, are also predetermined and fixed in the fit; the normalisation of the pile-up background N PU is estimated in a data-driven way, as described in section 5.2, while the values of the other parameters are extracted from MC simulation.

Table 2. Estimated composition of the signal-region diphoton candidate samples, resulting from the inclusive sample fit. The uncertainties are fully dominated by the systematic uncertainties, which are discussed in section 7.
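The structure of the fit model can be illustrated with a simplified sketch in which all correlation factors R are set to one and only the four jet-related processes enter; this is not the full 16-region model with its 13 nuisance parameters, merely its skeleton, and the process labels and dictionary layout are ours.

```python
import math
from itertools import product

PROCS = ("gg", "gj", "jg", "jj")

def region_prob(eff_iso, eff_id, passes):
    """Probability for one photon candidate to land in the given iso/id
    category of a region (True = passes the requirement)."""
    pass_iso, pass_id = passes
    prob = eff_iso if pass_iso else (1.0 - eff_iso)
    prob *= eff_id if pass_id else (1.0 - eff_id)
    return prob

def expected_counts(norms, eff):
    """Expected counts per region, n_exp_i = sum_p N_p * f_p,i, with f_p,i a
    product of the two candidates' iso/id probabilities.  All correlation
    factors R are set to one here (no correlation); `eff` maps
    (process, candidate index) -> (eps_iso, eps_id)."""
    n_exp = {}
    for region in product((True, False), repeat=4):    # 16 regions
        cand1, cand2 = region[:2], region[2:]
        n_exp[region] = sum(
            norms[p]
            * region_prob(*eff[(p, 1)], cand1)
            * region_prob(*eff[(p, 2)], cand2)
            for p in PROCS)
    return n_exp

def poisson_nll(n_obs, n_exp):
    """Negative log of the Poisson likelihood, dropping the constant
    log(n_obs!) terms: sum_i [ n_exp_i - n_obs_i * ln(n_exp_i) ]."""
    return sum(n_exp[r] - n_obs[r] * math.log(n_exp[r]) for r in n_obs)
```

Minimising `poisson_nll` over the floating normalisations and efficiencies would mimic the extraction of n γγ; by construction, the negative log-likelihood is smallest when the model matches the observed counts bin by bin.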
As an example, the distribution of the inclusive data sample in the 16 regions is shown in figure 4, together with the result of the fit to this inclusive sample; the fractional composition of the data in the signal region resulting from this fit is given in table 2. For the nominal measurement, as mentioned above, the fit is performed separately in each bin of each observable, and the equivalent sample decomposition in the signal region is displayed differentially in figure 5. The purity of the γγ signal process ranges from 35% at low invariant mass to 80% at high p T,γ .
This background and signal estimation method is tested using MC simulation. A pseudo-dataset is prepared including γγ signal and jet background events, with proportions similar to the ones observed in the data. The results obtained with the fit to the pseudo-dataset are compatible with the fractional composition of the MC events included in the sample.

Fit model input parameters
This section describes the parameters of the fit model that are predetermined and fixed in the fit, and how their values and systematic uncertainties are determined. The propagation of the systematic uncertainties to the estimated n γγ is done by rerunning the fit using systematic variations of the input parameters, keeping the uncertainty from each source fully correlated between the bins of the observables.
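The rerun-and-compare propagation just described can be sketched as follows. The `fit` callable and the source names are placeholders; combining the shifts in quadrature assumes independent sources.

```python
def propagate_systematics(fit, nominal_inputs, variations):
    """Sketch of the propagation described above: rerun the fit once per
    systematically varied set of input parameters (fully correlated across
    observable bins) and record the shift of the fitted yield relative to
    the nominal result.  `fit` maps an input dict to the fitted n_gg."""
    n_nominal = fit(nominal_inputs)
    shifts = {}
    for source, varied in variations.items():
        inputs = dict(nominal_inputs)   # copy, then overlay the variation
        inputs.update(varied)
        shifts[source] = fit(inputs) - n_nominal
    # combine independent sources in quadrature
    total = sum(s ** 2 for s in shifts.values()) ** 0.5
    return n_nominal, shifts, total
```

In the analysis the "fit" is the full 16-region likelihood fit of section 5.1, run once per systematic variation of the predetermined input parameters.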
Signal parameters: ε iso γγ,1, ε iso γγ,2, ε id γγ,1, ε id γγ,2, R iso γγ, R id γγ, R iso−id γγ,1 and R iso−id γγ,2. The values of the eight γγ signal parameters are extracted from the Sherpa MC signal sample, corrected to match the isolation and identification variable distributions in data [34]. These parameters are extracted as a function of each observable for which the differential cross section is measured. Uncertainties in these parameter values due to the detector simulation and theory modelling, in particular for the isolation distribution, are taken into account as described in section 7.2.

Prompt-photon isolation and identification efficiencies in γj and jγ events: ε iso γj,1, ε iso jγ,2, ε id γj,1, ε id jγ,2. For the γj and jγ background components, isolation and identification efficiencies for the prompt photons are derived from the γγ signal MC sample. The underlying assumption, namely that the prompt-photon efficiencies in γj and jγ events are the same as in γγ events, is cross-checked by comparing these efficiencies with the ones derived from the γj Sherpa MC sample. The efficiencies are compatible within statistical uncertainties. In addition, a change of 2% in those efficiencies is tested, which is roughly the size of the statistical uncertainty of the comparison ratio, and the impact on the estimated signal yield is found to be negligible.

Isolation-identification correlation for misidentified jets: R iso−id γj,2, R iso−id jγ,1, R iso−id jj,1 and R iso−id jj,2. The correlation between isolation and identification for jets misidentified as photons is studied with γj events as a function of p T,γ2. A twofold approach is used to estimate it. First, a γj MC simulation sample is used, yielding R iso−id,MC γj,2 = 0.990 ± 0.025 (stat), without a significant trend as a function of p T,γ2 given the limited MC sample size. In addition to the MC study, a γj-rich validation region in data is constructed by requiring the sub-leading photon to fail a track-isolation requirement. The track-isolation variable p iso,0.2 T,γ2 is computed as the sum of the transverse momenta of charged-particle tracks coming from the primary vertex, in a cone of ∆R = 0.2 around the photon candidate direction. In the validation region, the sub-leading photon candidates are required to have p iso,0.2 T,γ2 > 0.02 · p T,γ2 + 1.5 GeV. The contamination from γγ events in that validation region is subtracted using the signal MC simulation. With the higher statistical power of the data, an enhancement of the positive correlation as a function of p T,γ2 is observed, i.e. values of R iso−id γj,2 significantly below one. This dependency is extrapolated to the signal region using the MC simulation.
An average of the MC and data results is used for the central values of this correction factor versus p T,γ 2 ; the values vary in a range of 0.93 < R iso−id < 0.99 as shown in figure 6. The MC simulation is susceptible to mismodelling of the correlation, while the data estimation relies on the MC-based subtraction of γγ events in the validation region, and on the low correlation between the calorimetric and track isolation. Half of the difference between the MC and data results is assigned as a systematic uncertainty, and combined in quadrature with the MC statistical uncertainty. For each bin of the observables and each of the four R iso−id parameters for misidentified jets, an average is calculated taking into account the corresponding photon candidate p T,γ distribution, estimated from data events in the control regions dominated by each background process.
In addition to p T,γ 2 , the dependency of R iso−id γj,2 on the photon candidate pseudorapidity is also studied; however, its impact is found to be negligible.
Correlation between isolation in γj and jγ events: R iso γj and R iso jγ. The correction factor R iso p parameterises the correlation between the isolation variables of the two photon candidates. For the signal probability function, this parameter is extracted from MC simulation as R iso γγ. A similar extraction for misidentified jets from γj MC samples alone is not possible because of the limited statistical precision available. A slight correlation is expected, as found for the signal process, where 0.95 < R iso γγ < 1. It is also extracted from jj background events, as R iso jj is left to float in the fit and yields values in the range 0.9 < R iso jj < 1. For the γj and jγ backgrounds, R iso γj = R iso jγ = 0.95 ± 0.05 is chosen in all regions of phase space, such that the uncertainties cover the extreme cases. This uncertainty is treated as fully correlated between all the bins of the observables, in the same way as all the uncertainties discussed in this section.

Figure 6. Correction factor for the isolation-identification correlation, for jets misidentified as photons. The nominal value is indicated in black, and is the average of the values obtained from a data validation region and a γj MC sample. The blue band corresponds to the simulation statistical uncertainty, and the grey band shows the total uncertainty, taking into account half of the difference between the MC and data results.
In order to have the common correction factor R iso γj = R iso jγ consistently affecting the isolation efficiency of the photon candidate associated with the misidentified jet in both processes, γj and jγ, the probability function for the jγ background f jγ,i is slightly modified relative to the eq. (5.1) function. In eq. (5.1), the R iso p parameter corrects the isolation efficiency of the sub-leading photon candidate ε iso p,2 in the regions where the leading candidate fails the nominal isolation requirement, i.e. in regions 3, 4, 7, 8, 11, 12, 15 and 16. In the jγ case, the function is modified such that R iso jγ corrects the leading candidate efficiency ε iso jγ,1 in the regions where the sub-leading candidate fails the nominal isolation criterion, i.e. in regions 2, 4, 6, 8, 10, 12, 14 and 16; the functions for regions 3 and 4, for example, are modified accordingly.

Identification correlation in γj and jγ events: R id γj and R id jγ. The correlation between the identification of the two objects is in general very low. This is cross-checked with MC simulation for the γγ signal process. For the jj process, the correlation is extracted from the data, by allowing R id jj to float in the fit. In both cases, R id is close to one within 2%. Performing the fit with the R id γj and R id jγ values changed by 2% produces negligible variations in the final result. The assumption R id γj = R id jγ = 1 is thus kept without an additional uncertainty.

Pile-up of multiple single-photon events
Pairs of γj events from different pp collisions in the same bunch crossing can form a diphoton final state in the detector. The normalisation of this background is estimated using a fit to the data. The shape of the background in the diphoton observables is based on MC simulations. Details of the procedure are given below.
In diphoton events, in general, there is not enough information to efficiently identify and reject the pile-up background. However, around 0.5% of the events have sufficiently precise information for both photons to determine whether or not they are produced at the same primary interaction vertex. This subsample of events is used to estimate the pile-up background normalisation. It consists of events with both photons converted to electron-positron pairs within the detector material, with both conversions fully reconstructed, i.e. each including the reconstruction of the two charged-particle tracks, and with both photons in the central pseudorapidity region, |η| < 1.37. The fraction of converted photons is fairly constant as a function of photon E T and η and is well modelled in MC simulation; the conversion events can thus be used directly for an extrapolation to the full dataset. In addition, the conversion tracks are required to have hits in the silicon detectors of the inner tracking system. For each photon, the trajectory is reconstructed using the shower-depth segmentation of the calorimeter and the position of the conversion vertex. The intersection of the extrapolated trajectory with the beam axis provides a precise estimate of the longitudinal position z γ of the vertex at which the photon is produced. This information is used in a two-component fit to ∆z = |z γ1 − z γ2|, as shown in figure 7.

The first component of the model, representing the distribution of pile-up events in ∆z, is taken to be the convolution of two identical Gaussian z γ distributions with a width of σ z = 34.15 ± 0.16 mm, resulting in a Gaussian with a width of σ ∆z = √2 × σ z. The σ z value is estimated by fitting a Gaussian function to the z γ distribution. Since σ z changes between data-taking periods, depending on the beam conditions, only data collected in 2018 are used for this pile-up background study; during this period σ z was relatively stable, with a root-mean-square of the σ z variations of 2.8 mm. The integrated luminosity of the 2018 dataset is 58.5 fb −1. Afterwards, the estimated amount of pile-up background is extrapolated to the full Run-2 dataset, taking into account the integrated luminosity and the average number of interactions per bunch crossing µ of the two datasets, µ 2018 = 36.1 and µ Run 2 = 33.7. The amount of pile-up background is proportional to µ.
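The extrapolation from the 2018 subsample to the full Run-2 dataset follows directly from the quoted numbers; a minimal sketch of the scaling (illustrative only, not the analysis code):

```python
# Scaling of the pile-up background yield from the 2018 subsample to the
# full Run-2 dataset, using the numbers quoted above (illustrative only).
lumi_2018 = 58.5    # fb^-1
lumi_run2 = 139.0   # fb^-1
mu_2018 = 36.1      # average interactions per bunch crossing, 2018
mu_run2 = 33.7      # average interactions per bunch crossing, full Run 2

# The pile-up rate per event is proportional to mu, and the total yield
# is additionally proportional to the integrated luminosity.
scale = (lumi_run2 / lumi_2018) * (mu_run2 / mu_2018)
```

The estimated 2018 pile-up yield multiplied by this factor gives the full Run-2 estimate.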
The second component of the fit model, representing the non-pile-up contribution, is taken as a power-law function. For most of the photon candidates, the z γ resolution is around 0.2 mm; however, some candidates have significantly larger z γ resolution, which induces a long tail in the ∆z distribution. This tail is well modelled by the power-law function.
To extract the fraction of events due to pile-up, the power-law parameters and the normalisation of each component are fitted simultaneously to the measured ∆z distribution. The fit is performed in two regions simultaneously, ∆φ γγ < π/2 and ∆φ γγ > π/2. For non-pile-up diphoton candidates, there are 10 times more events with ∆φ γγ > π/2 than with ∆φ γγ < π/2; this proportion is allowed to float in the fit. For the pile-up background, on the other hand, the events are almost equally distributed between the two regions; only the ∆R γγ > 0.4 selection requirement unbalances the ∆φ γγ distribution, slightly reducing the fraction of pile-up events with ∆φ γγ < π/2. In this case, this fraction is fixed in the fit, according to the MC simulation prediction. The difference between the proportions of pile-up and non-pile-up events in the two ∆φ γγ regions helps to constrain the fit parameters. The fit yields an uncertainty of δ PU fit = ±15% for the pile-up background fraction. The event yield resulting from this fit still contains contributions from misidentified jets, which are subtracted in the main background fit also for pile-up events. To avoid double subtraction, the nominal background fit is performed for the events with ∆z > 48 mm to determine the fraction of prompt γγ events within the pile-up contribution. The large uncertainty of δ PU γγ = ±50% in this fraction is statistical in nature, due to the low number of input events, and is added in quadrature with δ PU fit to form the total uncertainty in the pile-up background propagated to the final results, as described in section 7.
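The quadrature combination of the two quoted components can be written out explicitly; the variable names are illustrative:

```python
import math

# Combination in quadrature of the two pile-up uncertainty components
# quoted above (relative uncertainties).
delta_fit = 0.15   # +-15% from the two-region Delta-z fit
delta_gg = 0.50    # +-50% statistical uncertainty of the prompt
                   # gamma-gamma fraction within the pile-up contribution

total = math.sqrt(delta_fit**2 + delta_gg**2)  # ~52% total relative uncertainty
```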
With the normalisation of the pile-up background fixed, the shape of this background in the fiducial phase space and as a function of each observable is derived using a sample of pseudo-events. Events in this sample are constructed by overlaying two separate reconstructed events from a γj MC simulation. This shape is combined with the normalisation determined as above and fed into the background estimation fit as a predetermined component. Given the large relative uncertainty of the normalisation and the small contribution from this background, the uncertainties of the MC sample are negligible.
In the inclusive sample, the pile-up background is found to be (0.6 ± 0.4)%. Differentially, the largest contribution from this background is around 6% at | cos θ * | (CS) close to one, cf. figure 5.

Correction to particle level
Effects such as the finite resolution of the detector and the event selection efficiency have to be corrected for in order to compare results from different experiments and to allow comparisons with theory predictions that do not include a detector simulation. The particle-level selection to which the data are unfolded is presented in table 1. The unfolding procedure is applied to the background-subtracted distribution at the detector level in three steps.
First, the measured distribution is corrected for the fraction of MC events reconstructed and selected at detector level that do not pass the particle-level selection. The contribution of these non-fiducial reconstructed events is estimated using the signal MC samples. The main source of non-fiducial photons is failure to satisfy the transverse momentum and isolation requirements. The particle-level selection is chosen to be close to the detector-level selection to reduce the influence of this correction.
The second step is to correct for detector resolution effects which cause migrations from one particle-level bin to multiple detector-level bins. For this step, an iterative unfolding method [53,54] based on Bayes' theorem is used. Response matrices parameterise these migrations and are constructed from simulated events passing both the detector-level and particle-level selections. The first iteration of the migration correction uses the particle-level prediction from Sherpa as the prior input. The prior is then updated with the obtained result, and the procedure is repeated; three iterations are performed in this analysis. The chosen number of iterations is high enough to minimise the dependence on the prior, but low enough not to be sensitive to statistical fluctuations.
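The iterative update can be sketched as follows; `bayesian_unfold` is a hypothetical minimal implementation of D'Agostini-style iterative Bayesian unfolding, not the ATLAS software, and `response[j, i]` is assumed to hold the probability for a particle-level event in true bin i to be reconstructed in detector-level bin j:

```python
import numpy as np

def bayesian_unfold(measured, response, prior, n_iter=3):
    """Iterative Bayesian unfolding (illustrative sketch).

    response[j, i] = P(reconstructed in bin j | true bin i); the column
    sums give the per-bin selection efficiency, which may be below one.
    """
    prior = np.asarray(prior, dtype=float)
    prior = prior / prior.sum()
    for _ in range(n_iter):
        # Bayes' theorem: P(true i | reco j) from the response and the prior
        joint = response * prior[np.newaxis, :]
        denom = joint.sum(axis=1, keepdims=True)
        posterior = joint / np.where(denom > 0, denom, 1.0)
        eff = response.sum(axis=0)                  # efficiency per true bin
        unfolded = (posterior * measured[:, np.newaxis]).sum(axis=0)
        unfolded = unfolded / np.where(eff > 0, eff, 1.0)
        prior = unfolded / unfolded.sum()           # updated prior for the next pass
    return unfolded
```

With a diagonal-dominated response and a flat prior, a few iterations are enough to converge towards the true spectrum, illustrating why three iterations suffice in practice.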
In the last step, the unfolded results are corrected for detector inefficiencies. This accounts for the fraction of particle-level selected events in the signal region which do not pass the signal selections at detector level. Their contribution is also estimated using the signal MC samples. The largest corrections stem from the photon identification and isolation selection, as shown in figure 8(a) as a function of p T,γ 2 . Figure 8(b) illustrates individual steps of the unfolding correction, as well as the total correction, for the p T,γ 2 observable. The efficiency and fiducial corrections partially cancel each other out due to the symmetric shape of the detector response, e.g. for the photon transverse momentum criteria.

Uncertainties
Several uncertainties affect the measurement. They are evaluated independently and then added in quadrature as uncorrelated components into a total uncertainty, with its breakdown shown in table 3 for the integrated fiducial cross section. Figure 9 shows the breakdown of the systematic uncertainties in the differential cross section.
The individual sources of uncertainties and the methods to propagate them to the final result are described in more detail in the following sections.

Figure 8. (a) Efficiency of consecutive sets of detector-level selections for diphoton events passing the particle-level selection as a function of the particle-level p T,γ2 . (b) Impact of the unfolding on the detector-level yields after background subtraction as a function of p T,γ2 . The correction for the fraction of non-fiducial events not passing the particle-level cuts is depicted in green. The correction for the detector efficiency is shown in red. The black line is the ratio of the detector-level yields to the result after the three corrections (i.e. for non-fiducial events, the iterative migration unfolding and the correction for detector inefficiencies) are applied. The impact of the iterative unfolding can be seen by comparison with the grey dotted line, which is the product of the efficiency and fiducial correction.
from a data-driven approach, as described in section 5.1.1. While the average of the two methods is used for the central value, half of the difference is assigned as an uncertainty to cover both approaches and is combined in quadrature with the statistical uncertainty of the MC result. This is by far the dominant contribution to the total uncertainty for the background, which amounts to 4.2% in general and rises to 17% in the low-m γγ region.
The correlation between the photon and jet isolation in γj and jγ events is varied around its central value by ±5%, as determined from cross-checks in the γγ MC sample and in the control regions dominated by jj events in data. This uncertainty is at the sub-percent level almost everywhere, except at low m γγ , where it rises to 6%.
Uncertainties in the identification and isolation efficiencies of the prompt photons also affect the background estimation. They are discussed in section 7.2.
Electrons misidentified as photons. Three components are taken into account as uncertainties for the electron-related background sources. The physics modelling of the Z → ee MC sample is varied by a 5% normalisation uncertainty, and a p Z T -dependent shape uncertainty covers the difference between MC simulation and data [33]. The uncertainty related to the modelling of e → γ fake rates is estimated from a dedicated measurement using Z → ee events. All of these uncertainties have a completely negligible contribution almost everywhere, rising to 2% only in the m γγ ∼ m Z bin.

Table 3. Breakdown of the relative uncertainties in the integrated fiducial cross-section measurement. The uncertainties are grouped into different categories and the first row of each category shows the sum in quadrature of its components.
impact of this background contribution is located at | cos θ * | (CS) → 1 (see figure 5) and its uncertainty rises to 6% in this region, whereas it is small in all other regions of phase space.
The uncertainties described above are propagated to the final cross-section results by using different background-subtracted signal yields as input to the unfolding procedure, with all bins varied in a correlated way, and response matrices unaffected.

Photon selection
The detector simulation and reconstruction of photons within the MC samples play significant roles in the background estimation and unfolding. This results in uncertainties from various sources.
Photon isolation. The modelling of the photon isolation variable in simulated events is corrected using a data-driven technique when deriving the nominal results [34]. An uncertainty due to a potential isolation mismodelling is estimated by disabling these data-driven corrections. The corrections modify the position of the peak of the isolation variable.

JHEP11(2021)169
Additionally, the width of the isolation transverse energy distribution is affected significantly by the modelling of pile-up in the simulation. In the MC samples, the noise induced by the pile-up in the calorimeters is found to be overestimated, increasing the width of the distribution of the used calorimetric isolation. This simulation feature is observed in the electron isolation distribution in Z → ee events. Therefore, to extract the efficiency of the photon isolation requirement, the simulation pile-up profile is reweighted, scaling down the number of pile-up interactions by 17%, based on the observations from the Z → ee studies. Systematic uncertainties are estimated by further reducing and increasing the amount of pile-up similarly by 17%. The uncertainty in the modelling of pile-up dominates the isolation uncertainty and constitutes one of the leading sources of uncertainty in the analysis, with a typical size of 4%.
The isolation distribution is also affected by a theory modelling uncertainty; the estimation of this uncertainty is discussed in section 7.3.

Photon trigger, reconstruction and identification efficiency.
The uncertainties in the modelling of the detector efficiency of the photon trigger, reconstruction and identification are addressed using scale factors that depend on the transverse momentum p T,γ and pseudorapidity η of the photon. The trigger and identification scale factors and their uncertainties are derived to match the detector efficiency in simulation to data as described in refs. [11,34]. The trigger efficiency uncertainty is around 0.7%, while the uncertainty in the measured diphoton cross section related to the photon identification efficiency ranges between 2% and 4%. The photon reconstruction efficiency is studied with Z → µµγ events by looking for events in which the reconstructed cluster is misclassified as an electron. The fraction of photons misreconstructed as electrons is around 4%. To match the data, the simulated reconstruction efficiency of the diphoton systems is corrected on average by −1.8%, with an uncertainty of 0.4%. The reconstruction efficiency corrections are derived and applied independently for data taken in the years 2015+2016 and 2017+2018.
Photon energy. Uncertainties in the photon energy scale and resolution are taken into account by modifying the MC simulation used to determine response matrices, efficiencies and non-fiducial-photon rates [37]. This is done separately for independent sources of the uncertainty, e.g. from the overall energy scale determined in Z → ee decays, the nonlinearity of the energy measurement at the calorimeter cell level, the relative calibration of the different calorimeter layers, the amount of material in front of the calorimeter, and the simulation modelling of the photon conversion reconstruction, lateral shower shape, and energy resolution. They are propagated separately to account for correlations and their η dependence, and are combined in quadrature afterwards. This uncertainty is at the sub-percent level in most regions of the analysis, rising to 3-4% only at high p T,γ , m γγ , p T,γγ and a T,γγ .
For the photon isolation and identification variations described above, modified MC simulations are used coherently in the background fit and unfolding, to propagate the uncertainty for each source to the final result and maintain the correlations. The photon energy and photon trigger efficiency uncertainty affect only the unfolding.

Other uncertainties
Signal theory modelling. The theory modelling in the signal MC samples can affect the correction to the particle level as well as the signal model parameters in the background fit. The resulting uncertainty is estimated by reweighting the nominal Sherpa sample in two different ways and repeating the whole analysis chain. The first reweighting is based on the photon isolation distribution taken from a Pythia 8 sample at particle level. The uncertainty in the final results is relatively constant at the 1% level but rises to 4% in the low-m γγ region. Additionally, a reweighting of the p T,γ 1 distribution is performed such that the detector-level distribution matches the background-subtracted result. The effect of this variation is below 1%. The differences between the nominal and each of the two reweighted results are treated as independent uncertainties and applied coherently in the background fit and the unfolding.
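The reweighting technique used for these variations can be sketched generically: each event receives a weight equal to the ratio of the target and nominal distributions in the bin containing its observable value. The function name and binned inputs below are illustrative, not the analysis code:

```python
import numpy as np

def reweight(values, nominal_hist, target_hist, bin_edges):
    """Per-event weight = target/nominal bin content for the bin
    containing each observable value (illustrative sketch)."""
    idx = np.clip(np.digitize(values, bin_edges) - 1, 0, len(nominal_hist) - 1)
    ratio = np.where(nominal_hist > 0,
                     target_hist / np.maximum(nominal_hist, 1e-12), 1.0)
    return ratio[idx]
```

Applying such weights to the nominal Sherpa sample and repeating the analysis chain yields the alternative result from which the systematic difference is taken.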
Unfolding method uncertainties. The results of the unfolding procedure can depend on the choice of prior and the number of iterations. To estimate an uncertainty, the MC simulation is reweighted such that it matches the data at the detector level, and this emulated data is then unfolded using the nominal sample. The non-closure with respect to the reweighted particle-level simulation then yields an additional uncertainty orthogonal to the one from signal theory modelling and is negligible everywhere.

Data period stability.
A test is performed comparing the results obtained by splitting the data into 2015+2016, 2017 and 2018 periods, and the MC sample in their corresponding representative subsamples. The whole analysis is redone for these three individual datasets. An additional uncertainty is assigned as a function of the different observables to cover the non-closure among the different years. It is typically around 3%.

MC statistics.
The influence of the limited size of the MC samples on the final result is evaluated using pseudo-experiments. Each experiment is generated by assigning a random weight according to a Poisson distribution with an expectation value of one to every MC event. The MC statistical uncertainty is estimated from the standard deviation of the results of the pseudo-experiments and is negligible everywhere.
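The Poisson-bootstrap procedure can be sketched as follows; `mc_stat_uncertainty` is an illustrative helper, not ATLAS software:

```python
import numpy as np

rng = np.random.default_rng(42)  # fixed seed for reproducibility

def mc_stat_uncertainty(weights, estimator, n_pseudo=200):
    """Estimate the MC statistical uncertainty of estimator(weights) by
    Poisson bootstrap: each pseudo-experiment multiplies every event
    weight by an independent Poisson(1) random number, and the spread of
    the estimator over pseudo-experiments gives the uncertainty."""
    results = [estimator(weights * rng.poisson(1.0, size=len(weights)))
               for _ in range(n_pseudo)]
    return float(np.std(results))
```

For a sum of N unit weights this reproduces the expected √N statistical fluctuation.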
Data statistics. The statistical uncertainty of the data sample is propagated from the signal extraction to the particle level using pseudo-experiments. It is the dominant uncertainty at high p T and high m γγ , but sub-leading in general.
Luminosity. The uncertainty in the combined 2015-2018 integrated luminosity is 1.7% [55], obtained using the LUCID-2 detector [56] for the primary luminosity measurements. This uncertainty is sub-leading in comparison with the other uncertainties.

Theory predictions
The unfolded data are compared with several state-of-the-art theory predictions, which are summarised in table 4 and detailed in this section.
Fixed-order NNLO with NNLOJET. The Nnlojet framework [57] is used to obtain fixed-order predictions for diphoton production up to NNLO QCD accuracy. It is based on the antenna subtraction method [58,59]. The photon isolation is applied using a hybrid scheme that combines
• a very loose smooth-cone isolation [25] with n = 2, δ 0 = 0.1, ε = 0.1, and
• the experimental cone isolation with the fiducial photon isolation given in table 1, but without the UE subtraction.
The smooth-cone isolation requirement is necessary to remove collinear photon-quark configurations from the calculation, which would be singular and would have to be factorised into a fragmentation function for a full calculation. The hybrid scheme helps to minimise the bias from not taking such configurations into account.
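The smooth-cone (Frixione) criterion requires the hadronic transverse energy inside every sub-cone of radius r ≤ δ 0 to stay below ε p T,γ ((1 − cos r)/(1 − cos δ 0 ))^n, which vanishes as r → 0 and thus removes collinear photon-parton configurations. A minimal sketch of this check (hypothetical helper, not the Nnlojet implementation):

```python
import math

def passes_smooth_cone(photon_pt, partons, n=2.0, delta0=0.1, eps=0.1):
    """Frixione smooth-cone isolation (illustrative sketch).

    partons: list of (delta_r, e_t) pairs relative to the photon. For every
    r <= delta0 the summed parton E_T inside a cone of radius r must stay
    below eps * photon_pt * ((1 - cos r) / (1 - cos delta0))**n.
    """
    cumulative = 0.0
    for dr, et in sorted(p for p in partons if p[0] < delta0):
        cumulative += et
        # The condition is tightest just at the radius of each parton,
        # since E_T(r) is a step function while the threshold grows with r.
        chi = eps * photon_pt * ((1.0 - math.cos(dr)) / (1.0 - math.cos(delta0))) ** n
        if cumulative > chi:
            return False
    return True
```

Any parton exactly collinear with the photon (dr → 0) fails the check regardless of its energy, which is precisely what makes the calculation finite without fragmentation functions.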
Fixed-order NLO with DIPHOX. Fixed-order NLO predictions including the direct and single- and double-fragmentation contributions are made with Diphox [62] using CT10 NLO PDFs [63]. The nominal factorisation, renormalisation, and fragmentation scales are chosen as µ r = µ f = µ frag = m γγ . The loop-induced gg → γγ process is included in the direct component at leading order. Since the fragmentation contributions are included, the Diphox predictions are valid for a cone-based photon isolation consistent with the one used in the measurement, E iso,0.2 T,γ 1(2) < 0.09 · p T,γ 1(2) . To estimate the perturbative uncertainties of the Diphox predictions, three sets of input parameters are varied. Factorisation and renormalisation scales undergo 7-point variations as defined above. This is the dominant uncertainty for this prediction. The fragmentation scale is raised and lowered by a factor of 2, yielding an uncertainty which is always significantly smaller than that from the factorisation and renormalisation scale variations. Finally, variations between predictions based on the nominal PDF set and those based on MSTW2008 [64] and NNPDF2.3 [26] are also significantly smaller than those from the scale variation in all phase-space regions.

The fixed-order predictions do not include non-perturbative effects. The impact of the non-perturbative effects is found to be negligible, except in soft or collinear regions of the phase space, where they grow to O(10%); in such regions, fixed-order predictions are not valid without resummation. Thus, no correction is applied to the fixed-order predictions.
Sherpa. In addition to these fixed-order calculations, the unfolded data are also compared with fully hadronised particle-level predictions from the Sherpa signal calculations described in section 3.2. This sample is used extensively throughout the analysis chain, e.g. in the background fit and the correction for detector effects.
Systematic theory uncertainties for the following comparisons are obtained from three sets of variations performed using event reweighting [65]. Factorisation and renormalisation scales are varied independently in a 7-point variation as defined above to estimate the perturbative uncertainties. A second set of uncertainties is obtained by coherently varying the strong coupling used in the simulation and in the PDF fit, α s (m Z ) in the range 0.117-0.119. A third uncertainty set is constructed from all replicas within the NNPDF3.0 NNLO set. These uncertainties are completely dominated by the scale uncertainties and are summed in quadrature to obtain a final uncertainty envelope for the Sherpa predictions.
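The 7-point scale-variation envelope referred to here and above can be sketched as follows; the helper is illustrative (the actual variations are performed by event reweighting inside Sherpa):

```python
def seven_point_envelope(predictions):
    """predictions maps a (muR, muF) scale-factor pair to a cross-section
    value; the 7-point convention keeps factor-of-two variations of the
    renormalisation and factorisation scales but drops the anti-correlated
    pairs (0.5, 2) and (2, 0.5)."""
    seven = [(1, 1), (2, 1), (0.5, 1), (1, 2), (1, 0.5), (2, 2), (0.5, 0.5)]
    values = [predictions[k] for k in seven]
    nominal = predictions[(1, 1)]
    # Envelope: most negative and most positive shift relative to nominal
    return min(values) - nominal, max(values) - nominal
```

The returned pair gives the downward and upward components of the asymmetric scale-uncertainty band.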

Integrated cross section in the fiducial phase space
The measurement of the differential cross section as a function of p T,γ 2 is used to compute the integrated fiducial cross section, σ γγ = 31.4 ± 0.1 (stat.) ± 2.4 (syst.) pb.
The integration is also performed over the other differential cross-section measurements yielding compatible results. The fiducial phase space is defined in table 1.
A comparison of the measured cross section with the theoretical predictions (see figure 10 and table 5) shows the importance of higher-order QCD contributions even for such an inclusive diphoton measurement. The predictions from Sherpa MEPS@NLO, which has NLO multi-leg matrix elements supplemented by a parton shower, and the fixed-order NNLO prediction as implemented in Nnlojet are the only predictions compatible with the data within the uncertainties. The predictions limited to LO or NLO QCD, as implemented in Nnlojet (N)LO and Diphox NLO, fail to agree with the data and also do not provide a reliable estimate of the perturbative uncertainties, as will be discussed further below.

Differential cross sections

The fixed-order predictions fail in regions where soft or collinear QCD emissions are relevant, while Sherpa provides remarkably good agreement with the data in these regions also. The systematic uncertainties of the fixed-order calculations estimated from scale variations fail to provide a reliable estimate of the true uncertainties, as indicated by the failure of the uncertainty bands to overlap with each other. This is a typical feature of the diphoton process, where significant contributions and their uncertainties appear only at higher orders, namely the single- and double-fragmentation contributions shown in figure 1(b). For example, the largest part of the fixed-order uncertainty bands stems from renormalisation-scale variations in the real-emission topologies, which are by definition spuriously absent at LO. The multi-leg merged prediction from Sherpa includes real-emission topologies for higher multiplicities than the other predictions and also exhibits correspondingly large uncertainties.

For the transverse momentum spectrum of the leading photon, p T,γ 1 , the Nnlojet and Sherpa predictions are compatible with the data over the full measured range. Diphox adequately describes the shape of the measured differential cross section; however, as already seen for the integrated cross section, the normalisation of this prediction is much lower than that of the data. A similar picture holds for the transverse momentum of the sub-leading photon, p T,γ 2 , with the only notable difference being the shape of the Diphox prediction, which adequately describes the shape of the measured cross section for p T,γ 2 ≥ 40 GeV, but considerably underestimates the rate of photons at lower p T,γ 2 values.
Stark features of the theory predictions can be seen in the m γγ spectrum. The shape of this distribution is governed by the transverse-momentum requirements placed on the individual photons, with the region m γγ < p T,γ 1 + p T,γ 2 being suppressed and only populated through γγ+multi-jet configurations. Such configurations are not modelled well at NLO accuracy in Diphox, but benefit significantly from the higher-order contributions included in the Nnlojet and Sherpa predictions. The predictions from Nnlojet and Sherpa agree with the data within uncertainties, while the uncertainties in the Diphox prediction are severely underestimated, judging from the higher-order predictions. At high m γγ , the Sherpa predictions are compatible with the data, while Nnlojet underestimates the rate slightly.

Figure 12. Differential cross sections measured as functions of φ * η , π − ∆φ γγ , p T,γγ , and a T,γγ (black error bars) compared with the predictions from Diphox NLO, Nnlojet NNLO and Sherpa MEPS@NLO (lines with coloured bands). At the bottom of each plot, the ratio of the prediction to the data is shown. Uncertainty bars on the data represent the total uncertainty, while uncertainty bands (bars) on the predictions represent perturbative scale (statistical) uncertainties. Inset plots show the core of each distribution on a linear scale. For φ * η , p T,γγ and a T,γγ the last bin is only visible in the ratio inset and includes the overflow events.
For high p T,γγ , Sherpa and Nnlojet agree well with the data, while for intermediate values the Nnlojet prediction is slightly too low to cover the data within uncertainties, and Sherpa is slightly high but still compatible. As expected, the fixed-order predictions fail at low p T,γγ < 10 GeV, where soft QCD emissions are relevant, while Sherpa is in good agreement with the data also in this phase-space region. It is worth noting that the Nnlojet prediction for p T,γγ > 0 is only NLO accurate, and an even higher-order calculation would be needed to describe these phase-space regions at NNLO accuracy, e.g. γγ+jet at NNLO [66].
The three observables π − ∆φ γγ , a T,γγ , and φ * η are closely related and reveal very similar features in the comparison between the data and the theory predictions. The almost back-to-back configuration of the two photons (π − ∆φ γγ < 0.1, a T,γγ < 10 GeV, or φ * η < 0.1) is difficult to model correctly with fixed-order predictions, because even almost collinear or low-energy parton emissions can decorrelate the two photons. Only the parton-shower-based Sherpa prediction includes a resummation of these effects and is able to model the back-to-back region reliably, agreeing well with the data over the full measured range in these three observables. In the region of intermediate azimuthal separation of the photons (π − ∆φ γγ ≈ 1, a T,γγ ≈ 40 GeV, or φ * η ≈ 0.5), Nnlojet underestimates the data, while for a large decorrelation they agree well, in contrast to the NLO predictions from Diphox, which fail to accurately model the shape of these distributions over the full measured range.
For scattering angles close to the limit | cos θ * | (CS) → 1, the fixed-order predictions underestimate the measured rate. This region is sensitive to uncorrelated photons, which can occur as a consequence of the associated production of multiple jets. The NNLO prediction is affected by larger perturbative uncertainties in this region; this is an indication of the (N)LO-accurate associated production of two jets (one jet) becoming relevant. The multi-leg merged prediction from Sherpa provides a better description of the measured cross section in this region.

Conclusion
A measurement of prompt photon pair production in proton-proton collisions at √ s = 13 TeV is presented. The data were recorded by the ATLAS detector at the LHC with an integrated luminosity of 139 fb −1 .
Events with at least two photons in the well-instrumented region of the detector are selected, and the photons are required to be isolated and have p T,γ 1(2) > 40 (30) GeV for the leading (sub-leading) photon, ordered in transverse momentum. The dominant background stems from non-prompt photons from hadron decays, which is very challenging to estimate using simulation. A sophisticated data-driven method is used to estimate this background using background-rich control regions based on inverted photon identification and isolation requirements. The small background from two photons produced in separate proton-proton collisions in the same bunch crossing is estimated from data; it will become even more relevant for future LHC runs.
The background-subtracted signal yields are corrected for detector efficiencies and migrations, and converted into fiducial cross sections. The measured integrated fiducial cross section is 31.4 ± 2.4 pb. Differential cross sections as functions of several observables of the diphoton system are presented, including the transverse momenta of the leading (sub-leading) photon up to 550 (330) GeV, the invariant mass of the diphoton system from 15 GeV to 1 TeV, and the transverse momentum of the diphoton system up to 500 GeV. Specifically, diphoton variables sensitive to different aspects of higher-order corrections in perturbative QCD are studied and compared with theory predictions from state-of-the-art MC and fixed-order calculations.
Good agreement is generally found with the predictions at the highest theoretical precision. An impressive improvement is observed when taking into account higher-order terms in perturbative QCD. Only the merged approach with multi-leg matrix elements at NLO, as implemented by Sherpa, and a fixed-order NNLO calculation, as implemented by Nnlojet, give a satisfactory description of the data. The fixed-order NNLO calculation has a higher formal precision than Sherpa and thus lower theoretical uncertainties in the regions governed by perturbative QCD. The multi-leg merged prediction yields larger uncertainties, but is in better agreement with the data in all regions. Sherpa includes QCD resummation through the parton shower, and thus provides a more reliable prediction in regions sensitive to collinear or low-energy parton emissions. The shape of the measured cross section as a function of m γγ is best described by the Sherpa predictions. The region of low m γγ is sensitive to multi-jet configurations, which are only modelled with low precision in all predictions.

Open Access. This article is distributed under the terms of the Creative Commons Attribution License (CC-BY 4.0), which permits any use, distribution and reproduction in any medium, provided the original author(s) and source are credited.