Search for new phenomena in final states with photons, jets and missing transverse momentum in pp collisions at √s = 13 TeV with the ATLAS detector

Abstract: A search for new phenomena has been performed in final states with at least one isolated high-momentum photon, jets and missing transverse momentum in proton–proton collisions at a centre-of-mass energy of √s = 13 TeV. The data, collected by the ATLAS experiment at the CERN LHC, correspond to an integrated luminosity of 139 fb⁻¹. The experimental results are interpreted in a supersymmetric model in which pair-produced gluinos decay into neutralinos, which in turn decay into a gravitino, at least one photon, and jets. No significant deviations from the predictions of the Standard Model are observed. Upper limits are set on the visible cross section due to physics beyond the Standard Model, and lower limits are set on the masses of the gluinos and neutralinos, all at 95% confidence level. Visible cross sections greater than 0.022 fb are excluded and pair-produced gluinos with masses up to 2200 GeV are excluded for most of the NLSP masses investigated.


Introduction
In this paper, proton–proton collisions at the LHC are used to search for new phenomena in experimental signatures with photons, jets and a large amount of missing transverse momentum in the final state. These signatures are motivated by a gauge-mediated supersymmetry breaking (GMSB) model [1][2][3], and its more generalized form, general gauge mediation (GGM) [4,5], where supersymmetry (SUSY) breaking takes place in a hidden sector and communicates with the visible sector through Standard Model (SM) gauge boson interactions. The results are interpreted in the context of a set of simplified GGM models that include the production of gluinos (g̃), the supersymmetric partners of strongly coupled SM particles, and various assumptions for the couplings of the new particles to the SM bosons.
In this scenario, the lightest supersymmetric particle (LSP) is the ultralight gravitino (G̃), which passes through the detector undetected and induces non-zero missing transverse momentum (E_T^miss) in the events in which it is produced. In certain circumstances it is a viable dark-matter candidate [6][7][8]. The models considered in this analysis conserve R-parity [9], so supersymmetric particles are produced in pairs and each decay of a supersymmetric particle must contain an odd number of supersymmetric decay products. Each decay therefore results in a decay chain leading to the LSP, usually proceeding via the emission of jets through the next-to-lightest supersymmetric particle (NLSP), which is often the lightest neutralino (χ̃⁰₁).
In the simplified GGM models considered, the decoupled mass scales for supersymmetric partners of the SM particles allow the NLSP neutralino to have large higgsino or bino components. This kind of NLSP decays into a gravitino and a photon, or into a gravitino and either a Z boson or a Higgs boson (h, assumed to have a mass of 125 GeV with SM-like couplings). Thus, GGM models with a neutralino NLSP predict final states with two of these bosons (photon, Z, or h) and two LSPs, and hence large E_T^miss. The final state targeted by this search corresponds to a signature including many jets from the decay chain and E_T^miss from the undetected particles, in combination with a high transverse momentum (p_T) photon appearing in the decay because of the bino component of the NLSP allowed by GGM.
Examples of production modes in proton–proton collisions for the topologies (from now on called γ/Z and γ/h) targeted in this paper are shown in Figure 1, where all decay modes for the Z and h bosons are considered.
An event selection strategy is designed to maximize the sensitivity for final states with photons, jets and missing transverse momentum.

ATLAS detector
The ATLAS detector [15] at the LHC covers nearly the entire solid angle around the collision point.¹ It consists of an inner tracking detector surrounded by a thin superconducting solenoid, electromagnetic and hadron calorimeters, and a muon spectrometer incorporating three large superconducting air-core toroidal magnets.
The inner-detector system (ID) is immersed in a 2 T axial magnetic field and provides charged-particle tracking in the range |η| < 2.5. The high-granularity silicon pixel detector covers the vertex region and typically provides four measurements per track, the first hit normally being in the insertable B-layer [16,17] installed before Run 2. It is followed by the silicon microstrip tracker, which usually provides eight measurements per track. These silicon detectors are complemented by the transition radiation tracker (TRT), which enables radially extended track reconstruction up to |η| = 2.0. The TRT also provides electron identification information based on the fraction of hits (typically 30 in total) above a higher energy-deposit threshold corresponding to transition radiation.
The calorimeter system covers the pseudorapidity range |η| < 4.9. Within the region |η| < 3.2, electromagnetic calorimetry is provided by barrel and endcap high-granularity lead/liquid-argon (LAr) calorimeters, with an additional thin LAr presampler covering |η| < 1.8 to correct for energy loss in material upstream of the calorimeters. Hadron calorimetry is provided by the steel/scintillator-tile calorimeter, segmented into three barrel structures within |η| < 1.7, and two copper/LAr hadron endcap calorimeters. The solid angle coverage is completed with forward copper/LAr and tungsten/LAr calorimeter modules optimized for electromagnetic and hadronic energy measurements, respectively.
The muon spectrometer (MS) comprises separate trigger and high-precision tracking chambers measuring the deflection of muons in a magnetic field generated by the superconducting air-core toroidal magnets. The field integral of the toroids ranges between 2.0 and 6.0 T m across most of the detector. A set of precision chambers covers the region |η| < 2.7 with three layers of monitored drift tubes, complemented by cathode-strip chambers in the forward region, where the background is highest. The muon trigger system covers the range |η| < 2.4 with resistive-plate chambers in the barrel, and thin-gap chambers in the endcap regions.

¹ ATLAS uses a right-handed coordinate system with its origin at the nominal interaction point (IP) in the centre of the detector and the z-axis along the beam pipe. The x-axis points from the IP to the centre of the LHC ring, and the y-axis points upwards. Cylindrical coordinates (r, φ) are used in the transverse plane, φ being the azimuthal angle around the z-axis. The pseudorapidity is defined in terms of the polar angle θ as η = −ln tan(θ/2). Angular distance is measured in units of ΔR ≡ √((Δη)² + (Δφ)²).
Interesting events are selected by the first-level trigger system implemented in custom hardware, followed by selections made by algorithms implemented in software in the high-level trigger [18]. The first-level trigger accepts events from the 40 MHz bunch crossings at a rate below 100 kHz, which the high-level trigger further reduces in order to record events to disk at about 1 kHz.
An extensive software suite [19] is used in the reconstruction and analysis of real and simulated data, in detector operations, and in the trigger and data acquisition systems of the experiment.

Samples of simulated processes
Samples of the targeted SUSY signals and SM backgrounds were simulated at √s = 13 TeV using dedicated Monte Carlo (MC) generators. For the interpretation of the results, a grid of signal samples was simulated with a set of benchmark parameter values covering the region in which the signal can be observed. In these particular regions of the GGM model space, the lightest neutralino is a mixture of bino and higgsino fields; the neutral wino field has a much larger mass than the bino/higgsino, so the corresponding wino content of the lightest neutralino is negligible. The gluino is the only relevant coloured particle, since all squark (supersymmetric partners of the SM quarks) soft masses are decoupled at a value of 5 TeV. The full model parameters include the U(1), SU(2) and SU(3) gauge partner mass parameters (M₁, M₂ and M₃, respectively), the higgsino mass parameter μ, the gravitino mass m_G̃, and the ratio tan β of the two SUSY Higgs-doublet vacuum expectation values. Due to the Weinberg mixing angle in the SM, the bino component of the lightest neutralino couples to both the photon and the Z boson in the case of positive values of μ, and to the photon and the Higgs boson in the case of negative values [20,21]. For all GGM models considered, the phenomenology relevant to this search is only weakly dependent on the value of tan β, chosen to be 1.5. The mass parameter M₂ is decoupled at a value of 3 TeV and M₃ matches the gluino mass since no radiative corrections are taken into account. The lifetime of the NLSP is set so that cτ is less than 0.1 mm, ensuring that the neutralino decays promptly; for most of the mass space this is achieved with a minimal gravitino mass of m_G̃ = 10⁻⁹ GeV. All trilinear coupling terms are set to zero and the slepton masses are set to 5 TeV. The Higgs sector is in the decoupling regime at 2 TeV (except for the lightest neutral Higgs boson).
Setting M₁ ∼ |μ| ∼ m(χ̃⁰₁), the branching ratios of the lightest neutralino to γ + G̃ and to Z/h + G̃ are approximately constant at 50%, maximizing the production of the final states of interest for this search.
The generated signal samples cover the g̃–χ̃⁰₁ mass plane in the range 1400–2600 GeV for the g̃ and 150–2600 GeV for the χ̃⁰₁, for each of the targeted models.² The full mass spectrum, the gluino and neutralino branching ratios, and their decay widths were calculated using SUSPECT v2.43 [22], SDECAY v1.5 [23] and HDECAY v3.4 [24], run as part of the SUSYHIT package v1.5a [25]. All signal samples were then generated at leading order (LO) with up to two additional partons with MG5_aMC@NLO interfaced to Pythia 8. Signal cross sections are calculated at next-to-leading order (NLO) in the strong coupling constant, adding the resummation of soft gluon emission at next-to-leading-logarithm accuracy [26][27][28][29][30]. The nominal cross section and its uncertainty are taken from an envelope of cross-section predictions using different parton distribution function (PDF) sets and factorization and renormalization scales, as described in Ref. [31].

² The g̃ mass range for the γ/h model starts at 1200 GeV to match the mass range used in the previous publication [10].
Most of the backgrounds affecting this search were estimated using control samples selected from data, defined such that one of the background processes becomes dominant, but otherwise kinematically similar to the signal region (SR). The extrapolations from these control regions (CRs) to the SRs are based on samples of simulated events. Samples of tt̄γ events were generated with MG5_aMC@NLO [32] at NLO (with m_top = 172.5 GeV), interfaced to the Pythia 8 parton shower model [33]. The NNPDF3.0 [34] set of PDFs was used, with parameter values set to the A14 tune [35]. The simulations are further corrected with efficiency scale factors and a smearing of the energy scale of photons, leptons and jets, to better describe the data. Table 1 presents a summary of the signal and background samples used in the analysis.

Reconstruction of candidates and observables
This analysis is performed using the full Run-2 dataset of LHC collisions at √s = 13 TeV collected by the ATLAS detector between 2015 and 2018, corresponding to an integrated luminosity of 139 fb⁻¹ after the application of beam, detector and data quality requirements [53].
The data sample selected by a single-photon trigger with a transverse momentum (p_T) threshold of 140 GeV consists of events with at least one photon satisfying the 'loose' identification criteria [54]. This trigger is the lowest-threshold unprescaled trigger (considering the complete data-taking period) and is fully efficient for photons with p_T > 145 GeV accepted by the signal selection requirements described in Ref. [55].
The vertex with the highest sum of the squared transverse momenta of its associated tracks that is reconstructed from at least two good-quality tracks with T > 0.5 GeV is defined as the primary vertex [56]. After the trigger selection, events are removed from the data sample if they contain jets likely to be produced by beam-induced backgrounds, cosmic rays, or detector noise. Photon, lepton, and jet candidates are selected with baseline requirements as described below. Those used to define the different control, validation and signal regions are required to fulfil extra requirements, and are called 'signal-region candidates' in the following.
In the offline selection, photon candidates are required to satisfy the 'tight' identification criteria for the lateral and longitudinal shower shape [55], have p_T > 25 GeV and |η| < 2.37, and are removed if they are within the electromagnetic calorimeter (ECAL) barrel-endcap transition region defined by 1.37 < |η| < 1.52. An extra requirement of p_T > 50 GeV is imposed on signal-region photons.³ To reduce the background from jets that can be misidentified as photons, both track and calorimetric isolation requirements are applied to signal-region photon candidates. The calorimetric isolation energy, E_T^iso, is computed as the sum of the topological cluster transverse energies [57] calibrated at the electromagnetic (EM) scale within a cone of size ΔR = 0.4 around the cluster barycentre. This E_T^iso is required to be less than 2.45 GeV + 0.022 × p_T, where p_T is that of the photon. The track isolation variable, p_T^iso, is obtained as the scalar sum of the transverse momenta of good-quality tracks inside a cone of size ΔR = 0.2 around the candidate, and is required to be less than 0.05 × p_T.
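The two isolation requirements above can be collected into a minimal sketch. The thresholds (2.45 GeV + 0.022 × p_T for calorimeter isolation, 0.05 × p_T for track isolation) are taken from the text; the function and variable names are illustrative and not part of the ATLAS software.

```python
def passes_photon_isolation(pt_gamma, etcone40, ptcone20):
    """All arguments in GeV.

    etcone40: calorimetric isolation energy in a cone of dR = 0.4.
    ptcone20: scalar sum of track pT in a cone of dR = 0.2.
    """
    calo_ok = etcone40 < 2.45 + 0.022 * pt_gamma   # E_T^iso requirement
    track_ok = ptcone20 < 0.05 * pt_gamma          # p_T^iso requirement
    return calo_ok and track_ok

# A 200 GeV photon with 5 GeV of calorimeter activity and 3 GeV of
# nearby track pT passes; raising the calorimeter activity to 10 GeV
# exceeds the 2.45 + 0.022*200 = 6.85 GeV threshold.
print(passes_photon_isolation(200.0, 5.0, 3.0))   # True
print(passes_photon_isolation(200.0, 10.0, 3.0))  # False
```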
Electron candidates are required to have p_T > 10 GeV and |η| < 2.47, and to originate from the primary vertex in both the transverse and longitudinal planes. A 'loose' set of identification criteria is imposed [55], based on the characteristics of the EM shower development, the quality of the associated reconstructed track, and the angular proximity of the track to the calorimeter energy deposition. Signal-region electrons must satisfy 'loose' isolation criteria, have p_T > 25 GeV and not be within the ECAL barrel-endcap transition region.
Muons are reconstructed by combining compatible track information from the MS and the ID. Muon candidates are required to have p_T > 10 GeV and |η| < 2.7, to satisfy the 'medium' quality criteria [58] and to originate from the primary vertex in both the transverse and longitudinal planes. Signal-region muons must have p_T > 25 GeV and satisfy a 'loose' isolation requirement.
Jets are reconstructed using the anti-k_t algorithm [59,60] with a radius parameter R = 0.4 and are seeded by the energy in topological clusters of calorimeter cells [57]. The expected average energy contribution from pile-up interactions is subtracted using a factor that depends on the jet area. Track-based selection requirements are applied to reject jets with p_T < 120 GeV and |η| < 2.4 that originate from pile-up interactions [61]. Except for the E_T^miss computation (defined in the following), where a requirement of |η| < 4.5 is applied, jets are kept only if they are in the central region of the detector and have p_T > 20 GeV. Signal-region jets must have p_T > 30 GeV and |η| < 2.5.
Although jets containing b-hadrons, called b-jets from now on, are not explicitly used in the analysis selection, they are used in the definition of the control regions from which the Wγ and tt̄γ MC normalization is extracted, as described in Section 5. These b-jets are selected with the same p_T requirement as jets within the ID acceptance (|η| < 2.5) and identified by the MV2 algorithm, which uses the long lifetime, high decay multiplicity, hard fragmentation and large mass of b-hadrons to distinguish them from light-flavour jets (jets originating from light quarks and gluons) [62]. The b-tagging algorithm has a nominal efficiency of 77% for b-jets in simulated tt̄ events and the corresponding probability of misidentifying light-flavour jets is below 1%.
Due to possible final-state object misidentification, a single object can be reconstructed as more than one object and thus effectively counted multiple times. A procedure to remove these overlaps is applied to preselected objects before the corresponding isolation requirements are imposed. The basic strategy and the order of removal is described in Refs. [63,64].
The missing transverse momentum is computed with an object-based algorithm considering objects passing baseline requirements. Calorimeter energy deposits are matched to high-p_T objects in the following order: electrons, photons, jets and muons. Primary-vertex tracks not associated with any such objects are included in the so-called soft term [65] contribution to E_T^miss. The E_T^miss is computed from the negative vector sum of the transverse momenta of calibrated reconstructed physical objects and the soft term.
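The negative vector sum described above can be sketched as follows; this is an illustrative simplification (a full object-based algorithm also handles overlap and calibration), and all names are hypothetical.

```python
import math

def met(objects, soft_term=(0.0, 0.0)):
    """objects: list of (pt, phi) for calibrated electrons, photons,
    jets and muons. soft_term: (px, py) of unassociated primary-vertex
    tracks. Returns (magnitude, phi) of E_T^miss."""
    px = -soft_term[0]
    py = -soft_term[1]
    for pt, phi in objects:
        px -= pt * math.cos(phi)   # negative vector sum in x
        py -= pt * math.sin(phi)   # negative vector sum in y
    return math.hypot(px, py), math.atan2(py, px)

# A single 300 GeV photon with nothing recoiling against it yields
# 300 GeV of E_T^miss pointing in the opposite direction.
magnitude, direction = met([(300.0, 0.0)])
print(round(magnitude, 1))  # 300.0
```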

Event selection
The analysis is designed to compare the event yields observed in three signal regions for strong production (named SRL, SRM and SRH) with the predicted rates of SM processes. The SRL region targets the phase space with large mass differences between the gluino and the neutralino, resulting in events characterized by high jet multiplicity and hadronic activity but moderate missing transverse momentum. The SRH region is optimized for the compressed scenarios, near the diagonal in the gluino–neutralino mass plane, giving events with high E_T^miss, higher-p_T photons, and lower jet multiplicity and hadronic activity than SRL. Finally, the SRM region is defined for the intermediate phase space between SRL and SRH.
Given the high mass of the gluinos produced in the GGM model space explored, the total visible transverse momentum is expected to be large. This results in a large value for the variable H_T, defined as the scalar sum of the transverse momenta of all individual signal jets and the leading photon in the final state. The selection of signal events includes a requirement on H_T as well as on E_T^miss. In SRL, SRM and SRH, events must contain at least one isolated photon with p_T above 145 GeV, 300 GeV or 400 GeV, respectively, and zero leptons in order to remove SM events where the vector boson decays leptonically. In addition, more than four jets are required in SRL and SRM, while more than two jets are required in SRH.
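The H_T observable defined above reduces to a one-line sum; the sketch below is illustrative, with hypothetical function and argument names.

```python
def h_t(jet_pts, leading_photon_pt):
    """Scalar sum of the pT of all signal jets plus the leading-photon pT,
    all in GeV."""
    return sum(jet_pts) + leading_photon_pt

# A gluino-pair-like event with five hard jets and a 300 GeV photon.
print(h_t([400.0, 250.0, 100.0, 60.0, 40.0], 300.0))  # 1150.0
```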
In events with large reconstructed E_T^miss that arises not from non-interacting particles but from instrumental sources or poorly reconstructed physics objects, the E_T^miss vector tends to be aligned with either the photon or one of the two leading jets. A selection based on the angular separation between these objects and the E_T^miss vector, i.e. Δφ(jet, E_T^miss) and Δφ(γ, E_T^miss), removes most events from these background processes.
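The angular-separation cleaning above can be sketched as follows. The minimum-separation thresholds used here (0.4) are placeholders, not the values chosen in the analysis, and the names are illustrative.

```python
import math

def delta_phi(a, b):
    """Azimuthal separation wrapped into [0, pi]."""
    d = abs(a - b) % (2 * math.pi)
    return min(d, 2 * math.pi - d)

def passes_dphi_cleaning(met_phi, photon_phi, jet_phis,
                         min_dphi_jet=0.4, min_dphi_photon=0.4):
    """Reject events where E_T^miss aligns with the photon or with one
    of the two leading jets (placeholder thresholds)."""
    if delta_phi(met_phi, photon_phi) < min_dphi_photon:
        return False
    for jphi in jet_phis[:2]:   # only the two leading jets are checked
        if delta_phi(met_phi, jphi) < min_dphi_jet:
            return False
    return True

# E_T^miss well separated from photon and leading jets: kept.
print(passes_dphi_cleaning(0.0, math.pi, [2.0, -2.0]))  # True
# E_T^miss aligned with the photon: rejected.
print(passes_dphi_cleaning(0.0, 0.1, [2.0, -2.0]))      # False
```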
SUSY signals considered in this analysis are characterized by high-p_T multijet events in a wide region of the parameter space. The sub-leading jets have comparatively larger p_T than those in SM background events. Consequently, for signal processes with high-p_T jets, R_T^4 (defined as the ratio of the scalar sum of the p_T of the four highest-p_T signal-region jets to the scalar sum of the p_T of all signal-region jets in the event) takes values less than one, while for SM backgrounds with fewer and softer jets, R_T^4 is typically closer to unity [11,66]. No R_T^4 selection is applied in SRH because fewer jets are required in this region. The event selection for all the signal regions is summarized in Table 2.
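The ratio defined above can be written directly from its definition; the function name is illustrative.

```python
def r4_t(jet_pts):
    """Ratio of the summed pT of the four highest-pT signal-region jets
    to the summed pT of all signal-region jets (pT values in GeV)."""
    pts = sorted(jet_pts, reverse=True)
    return sum(pts[:4]) / sum(pts)

# Signal-like event: hard sub-leading jets pull R_T^4 well below one.
print(round(r4_t([300, 250, 200, 150, 120, 100]), 3))  # 0.804
# Background-like event: only soft extra jets, so R_T^4 is near unity.
print(round(r4_t([300, 250, 200, 150, 20, 10]), 3))    # 0.968
```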

Background estimation
Several SM processes can give final states with real photons and E_T^miss from the presence of neutrinos. Other SM processes can emulate the targeted topologies if a jet or an electron is misidentified as a photon. The estimation of these backgrounds in the different SRs is therefore essential.
The dominant SM background contributions to the SRs are expected to be from Wγ and tt̄γ production, followed by prompt-photon (γ + jets) production with instrumental (fake) E_T^miss. These three contributions are determined using MC simulations constrained by the number of data events observed in dedicated control regions through the estimation of normalization factors. The remaining, smaller backgrounds are estimated directly from MC simulation. The backgrounds with misidentified jets or electrons are determined with data-driven techniques as described in the following.
Control regions labelled CRW, CRT and CRQ are used to obtain the MC normalization for the Wγ, tt̄γ and QCD γ + jets events, respectively. The selection criteria for the CRs associated with the SRs are presented in Table 3. The CRs were designed to be orthogonal to, but still kinematically similar to, the SRs, enhanced in the background process of interest and with negligible signal contamination. Looser E_T^miss requirements are applied in the CRs to increase the yields. No R_T^4 requirement is applied, for the same reason. An upper bound is placed on E_T^miss in CRW and CRT to reduce the signal contamination.
The selection for CRQ is based on SRL but applies a lower E_T^miss requirement (> 100 GeV), a similar H_T selection and an inverted Δφ(jet, E_T^miss) selection to increase the fraction of γ + jets events in the control sample.
The CRW sample is defined by requiring a photon, a lepton and 100 GeV < E_T^miss < 200 GeV. A b-jet veto is applied to reduce the contamination from tt̄γ.
The CRT sample is defined by requiring a photon, a lepton, jets and 50 GeV < E_T^miss < 200 GeV. At least two b-tagged jets are required in order to increase the sample's purity in tt̄γ events.
A further set of event selections defines validation regions (VRs) used to check the results of the background estimation procedure. They were designed to lie kinematically between the signal regions and the control regions, but with one or more criteria inverted or modified to reduce possible signal contamination. The VRL regions were designed to be enriched in Wγ and tt̄γ backgrounds. No b-jet requirement is applied, so contributions from both backgrounds are expected. The four regions (VRL1-4) cover different parts of the parameter space between the control regions and signal regions by varying the E_T^miss and H_T requirements. A signal-region-like VRQ is designed to be orthogonal to the SR only through a reduced requirement on E_T^miss. The VRM regions are built to validate the extrapolation of the γ + jets background from the CR to the SR. Two sets of VRMs were specifically designed to select either events with low jet multiplicity and a high-p_T leading photon (VRM1H and VRM2H) or events with high jet multiplicity and a less energetic photon (VRM1L and VRM2L), in order to validate the background estimation in regions closer to SRH or SRL, respectively. Because they select different ranges in E_T^miss, VRM2L is included in VRM1L, and VRM2H is included in VRM1H. A summary of the different selection criteria is shown in Table 3.
Jets can be misidentified as photons (called 'fake photons') if they contain mostly π⁰ mesons (or other neutral hadrons) carrying most of the jet energy and decaying into a pair of collimated photons, resulting in an electromagnetic object resembling a single, highly energetic photon. This background arises primarily from QCD multijet, W/Z + jets and semileptonic tt̄ events. The 'tight' identification criteria applied to photon candidates reduce this background. After applying this selection, the data sample is expected to contain real photons with moderate jet contamination. As this misidentification rate is not expected to be modelled accurately in MC simulation, a data-driven sideband counting method [64] is used. The so-called ABCD method makes use of the different isolation profiles expected for real photons and misidentified jets [67]. Two variables are considered simultaneously in order to include both the tracking and calorimetric isolation of the photon candidate, as defined in Section 4. The 'tight' offline identification is by design tighter than the photon trigger used to collect the data, so it is expected that some photon candidates from misidentified jets will fail the 'tight' selection but satisfy an intermediate selection. These photon-like jets, hereinafter called non-tight photons, are defined as those passing the 'loose' identification and satisfying all the 'tight' selection requirements except at least one of four criteria associated with energy deposits in the EM calorimeter [64], chosen to be largely uncorrelated with the isolation variables. In this manner, the use of non-tight photons enhances the 'jets faking photons' contribution, as needed by the ABCD method.
In the identification-isolation plane, the method defines a signal region A consisting of isolated photon candidates that satisfy the 'tight' identification, and three control regions, namely B, C and D, with photon candidates being non-isolated and 'tight', isolated and non-tight, and non-isolated and non-tight, respectively.
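With identification and isolation approximately uncorrelated for fakes, the sideband counting reduces to the standard ABCD extrapolation N_A ≈ N_B × N_C / N_D. The sketch below also subtracts real-photon leakage into the sidebands (which the analysis estimates from simulation); all yields and leakage numbers are illustrative.

```python
def abcd_fake_estimate(n_b, n_c, n_d, leak_b=0.0, leak_c=0.0, leak_d=0.0):
    """ABCD estimate of the jet->photon fake yield in region A.

    n_b: non-isolated, 'tight' candidates
    n_c: isolated, non-tight candidates
    n_d: non-isolated, non-tight candidates
    leak_*: real-photon contamination subtracted from each sideband
    """
    b = n_b - leak_b
    c = n_c - leak_c
    d = n_d - leak_d
    return b * c / d

# Illustrative sideband yields with some real-photon leakage in B and C:
print(abcd_fake_estimate(80.0, 120.0, 300.0, leak_b=20.0, leak_c=30.0))
# (80-20) * (120-30) / 300 = 18.0
```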
A possible residual correlation between the photon identification and the isolation is estimated using MC simulations, and so is the contamination of the background regions by real photons. These effects are included as part of the computation of the contribution of misidentified jets in all the regions used in the analysis. The systematic uncertainties of the method are evaluated by varying the definition of the non-tight objects, and considering the differences introduced by the residual correlation between the regions.
Significant contamination in the signal regions from SM processes such as W/Z + jets and tt̄ events is expected in cases where one high-p_T electron is misidentified as a photon. This background is estimated by weighting the number of electron events observed in an electron control sample by the electron-to-photon fake rate. These electron control samples come from the same control, validation and signal regions as in the analysis, but the photon kinematic selections are applied to electrons: a high-p_T isolated electron is required and signal photons are vetoed. To estimate the electron-to-photon fake rate, a method based on a sample of Z(→ ee) data events is used [63,64]. Since the Z boson cannot decay directly into an electron and a photon, the electron-photon events appearing under the Z peak most likely correspond to misidentified electrons. However, the same applies to other particles decaying into pairs of electrons. Therefore, a background subtraction technique is applied, which also takes into account the contamination from random combinatorial background. The electron-to-photon fake factor is then estimated as the ratio of the number of electron-photon pairs to the number of electron-electron pairs found under the Z peak when fitting the invariant mass distribution. This fit uses a double-sided Crystal Ball (DSCB) function (a Gaussian core with asymmetric non-Gaussian power-law tails) to model the peak, and a Gaussian distribution to model the small non-resonant backgrounds to Z(→ ee) production. Only the pairs within a defined invariant mass window are selected to compute the electron-to-photon fake factor. This window is defined as ±3σ around the centre of the peak in the DSCB function, where σ is the width of the peak. Only events with E_T^miss < 40 GeV are selected, to avoid electrons from W-boson decays.
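The arithmetic of the fake-factor method above is simple: the factor is the ratio of e-γ to e-e pairs under the Z peak, and the e→γ background in any region is the electron-selected yield scaled by that factor. The sketch below uses illustrative numbers (the real measurement comes from the background-subtracted fit described in the text).

```python
def fake_factor(n_egamma_peak, n_ee_peak):
    """Ratio of e-gamma to e-e pairs in the Z-peak mass window,
    after background subtraction (counts are illustrative)."""
    return n_egamma_peak / n_ee_peak

def egamma_background(n_electron_region, ff):
    """e->gamma background: electron-selected yield times fake factor."""
    return n_electron_region * ff

ff = fake_factor(250.0, 10000.0)     # 0.025, i.e. a 2.5% fake rate
print(egamma_background(400.0, ff))  # 10.0 expected fake-photon events
```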
A dedicated validation region (VRE) is designed with the event selection described in Table 3, to validate the accuracy of the corresponding electron-to-photon background predictions based on the calculated fake factors. The set of requirements mostly selects W(→ eν) + jets events, where a boosted W boson (including those coming from top quarks) decays into a neutrino (giving high E_T^miss) and an almost collinear high-p_T electron (misidentified as a photon).
Likelihood fits [68] are performed assuming i) a background-only hypothesis to estimate the total background in the SRs and VRs; ii) a model-dependent signal-plus-background hypothesis, where the fit is performed in the CRs and SRs simultaneously; and iii) a model-independent signal-plus-background hypothesis, where both the CRs and SRs are used in the same manner as for the model-dependent signal fit, with the number of signal events in the SRs added as a parameter of the fit. This approach constrains the expected background to the yields observed in the data in the CRs and reduces the systematic uncertainties. Figure 2 shows the contributions of the backgrounds in all the different control and validation regions. They are obtained with a background-only maximum-likelihood fit, constraining the normalizations of the dominant backgrounds and including those estimated using data-driven techniques. The lower panel shows the differences, in standard deviations, between the observed and expected yields [69]. Good agreement is found between data and SM background predictions in all the validation regions.

Systematic uncertainties and yields
All background processes estimated either by making use of MC simulations or by data-driven methods, as well as MC signal predictions, are affected by systematic uncertainties which mainly originate from two kinds of sources: experimental and theoretical ones. These systematic uncertainties can impact the expected event yields in both the control and signal regions.
The uncertainty in the combined 2015-2018 integrated luminosity is 1.7% [70], obtained using the LUCID-2 detector [71] for the primary luminosity measurements. The uncertainty on the pile-up reweighting is also considered.
The systematic uncertainties due to the photon identification and isolation efficiencies are estimated following the prescriptions in Ref. [55]. They are evaluated by varying the correction factors for the photon selection efficiencies in MC simulation by the corresponding uncertainties. The photon energy scale is determined using samples of Z → ee events, varying the scale corrections and resolutions upwards and downwards by one standard deviation.
For electrons [55] and muons [72], similarly to photons, the uncertainties from the identification efficiency, energy scale and resolution were determined from Z → ℓ⁺ℓ⁻ and W± → ℓ±ν control samples.
For jets, the energy scale and resolution uncertainties are derived following the procedure described in Ref. [73], where a simplified scheme with 38 parameters is used. A set of b-tagging uncertainties is also considered, taking an envelope around the nominal jet b-tagging weight for the selection of jets of different flavours [62].
For E_T^miss, the uncertainties associated with all underlying objects from which it is constructed are propagated through the calculation, and additional uncertainties accounting for the scale and resolution of the soft term [65] are considered. For the fake-photon backgrounds (jet → γ fakes and e → γ fakes), two different kinds of uncertainties affect their estimation: the systematic uncertainty from the method used to estimate the fake factors and the statistical uncertainty of the control sample.
For each of the main simulated background samples, a theoretical uncertainty is assessed by considering different sources of systematic uncertainty. Each sample contains several internal weights representing the effect of varying different parameters of the theory. The systematic variations considered for each sample are variations of the renormalization and factorization scales μ_r and μ_f at generator level, and variations of the PDFs [74] and the strong coupling constant (α_s). For μ_r and μ_f, three independent nuisance parameters are used: two constructed to keep one of the scales constant while varying the other, and one a coherent variation of both scales. The PDF uncertainty is taken from an envelope of the nominal PDF (NNPDF3.0) and its variations. Finally, uncertainties associated with the α_s determination and its truncation are considered. The PDF and α_s-related uncertainties are added in quadrature. The total theoretical systematic uncertainty in the signal regions is between 15% and 30%, depending on the MC sample.
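The quadrature combination of the PDF envelope and the α_s variation mentioned above amounts to a root-sum-of-squares; the sketch below uses illustrative relative uncertainties.

```python
import math

def pdf_alphas_uncertainty(pdf_envelope, alphas_var):
    """Combine the PDF-envelope and alpha_s relative uncertainties in
    quadrature, as described in the text (inputs are fractional)."""
    return math.hypot(pdf_envelope, alphas_var)

# A 3% PDF envelope combined with a 4% alpha_s variation gives 5%:
print(pdf_alphas_uncertainty(0.03, 0.04))  # 0.05
```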
The relative impact of each systematic uncertainty on the SM background expectation after the background-only fit applied to the CRs is presented in Table 4. One of the largest experimental systematic uncertainties is related to the jet energy scale and resolution (except in SRH, where it is smaller because of the lower jet multiplicity and hadronic activity). Theoretical systematic uncertainties are close to 3% for SRL and SRM, and are largest for SRH, reaching the 10% level.

Results
The background-only fit is based on the SRs and CRs listed in Tables 2 and 3 and takes into account all the systematic uncertainties discussed in Section 7, treated as Gaussian-distributed nuisance parameters. When fitting the CRs and SRs simultaneously, common normalization factors for each of the Wγ, tt̄γ and QCD γ + jets backgrounds are implemented in order to correctly take into account the other background contributions. Each experimental uncertainty is treated as fully correlated across the CRs, the corresponding SR and the physics processes considered. The theoretical systematic uncertainties are treated as correlated across the different regions but uncorrelated across the background samples. The observed E_T^miss distributions in the SRs are shown in Figure 3, compared to the background predictions. The predicted distributions for selected signals with gluino and neutralino masses near the expected sensitivity of the analysis are also shown for comparison. In each plot, all the SR selection requirements are applied except for the one on E_T^miss. The number of data events in each SR and the expected contributions from the different SM backgrounds after the background-only fit applied to the CRs are shown in Table 5. Since no significant excess above the SM background is observed in the SRs, these are used to set limits on the number of new-physics events (model-independent limits) and on the GGM signal model parameters described in Section 1.
The background-only fit applied to the CRs in previous sections to estimate the background can be extended to include the SRs and perform hypothesis tests, using a profile log-likelihood ratio (LLR) approach [75], to assess the compatibility of the observed number of events with the SM, to set limits on the visible cross sections, and to set exclusion limits on specific SUSY models.
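The profile log-likelihood ratio approach can be illustrated for the simplest case, a single-bin counting experiment with a fixed background expectation and no nuisance parameters. This is a deliberately stripped-down sketch of the test statistic, not the full profiled fit used in the analysis.

```python
import math

def poisson_loglik(n, mu):
    """log L(n | mu) for a Poisson count, dropping the constant n! term."""
    return n * math.log(mu) - mu

def q_mu(n_obs, s, b):
    """Log-likelihood-ratio test statistic for a single-bin counting
    experiment with fixed background b and no nuisance parameters:
    q = -2 ln [ L(n | s + b) / L(n | mu_hat) ],
    where mu_hat = n_obs is the unconditional best-fit expected yield."""
    mu_hat = max(n_obs, 1e-9)  # guard against log(0) when n_obs = 0
    return -2.0 * (poisson_loglik(n_obs, s + b) - poisson_loglik(n_obs, mu_hat))
```

By construction q_mu vanishes when the tested yield equals the observation and grows as the hypothesis moves away from it; in the real analysis the numerator and denominator are additionally maximized over the Gaussian-constrained nuisance parameters.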
The model-independent limits on the number of events from non-SM processes in each SR are listed in Table 6, together with the discovery p-value (p0), defined as the probability of observing at least the observed event yield when assuming that no signal is present, and the corresponding Gaussian significance Z. Also shown is the 95% confidence level (CL) upper limit on the visible cross section σ × A × ε, obtained by normalizing the upper limit on the number of signal events to the integrated luminosity, where σ is the production cross section for a beyond-the-SM (BSM) signal, A is the acceptance (fraction of events with objects passing all the kinematic selections at particle level) and ε is the efficiency (fraction of those events that would be observed after the detector reconstruction). For SRL and SRM, p0 is capped at 0.5 because the predictions exceed the data. For SRH, the discovery p-value is 0.09, so these observations are compatible with the SM-only hypothesis. The number of observed events and the background expectation in each SR are used to set a 95% CL upper limit on the number of events from any BSM physics scenario [76]. The most stringent observed limit is from SRM, where visible cross sections greater than 0.022 fb are excluded.
Table 6: Summary of the model-independent limit results, with the 95% CL upper limits on the visible cross section (σ × A × ε).
The exclusion limit for a specific SUSY signal model is based on the profile LLR test statistic, and it is obtained from a simultaneous fit to the contributions from SM processes and the targeted model in a given signal region and its associated background control regions, which are all by design statistically independent. These one-sided limits are set at the 95% CL using the CLs prescription [76]. The observed exclusion limit is calculated with signal yields corresponding to the nominal cross section and its ±1σ variations from the SUSY theoretical uncertainty.
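The two conversions used for the model-independent results, turning a one-sided p-value into a Gaussian significance Z and normalizing the signal-event upper limit to the luminosity, can be sketched as below. The numerical inputs in the usage note are illustrative, not the values from Table 6.

```python
from statistics import NormalDist

def gaussian_significance(p0):
    """One-sided discovery p-value converted to a Gaussian significance:
    Z = Phi^{-1}(1 - p0), with Phi the standard normal CDF."""
    return NormalDist().inv_cdf(1.0 - p0)

def visible_xsec_limit(s95, lumi_fb):
    """95% CL upper limit on sigma x A x epsilon (in fb): the upper limit
    on the number of signal events divided by the integrated luminosity."""
    return s95 / lumi_fb
```

For instance, the SRH p-value of 0.09 corresponds to a significance of about 1.3σ, and a hypothetical upper limit of 3.06 signal events with 139 fb−1 would translate into a visible cross section of about 0.022 fb.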
The combined exclusion limits are shown in Figure 4 for each of the two signal models considered. These are obtained with pseudo-data experiments, using the signal region with the best expected sensitivity at each point. The black dashed line corresponds to the expected limits at 95% CL, with the light (yellow) bands indicating the ±1σ excursions due to experimental and background-theory uncertainties. The observed limits are indicated by the medium (red) curves: the solid contour represents the nominal limit, and the dotted lines are obtained by varying the signal cross section by the theoretical scale and PDF uncertainties. The discontinuity in the transition from high to moderate NLSP masses is due to the number of events observed in SRH (SRM) being higher (lower) than the expected value.
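The CLs prescription used for these exclusion limits can be summarized in a few lines: the confidence level of the signal-plus-background hypothesis is normalized by that of the background-only hypothesis, which protects against excluding models to which the analysis has no sensitivity. This is a schematic of the decision rule only; the tail probabilities themselves come from the profile-LLR distributions.

```python
def cls(cl_sb, cl_b):
    """CLs = CL_{s+b} / CL_b, the signal-plus-background confidence level
    normalized to the background-only one (both tail probabilities of the
    same test statistic)."""
    return cl_sb / cl_b

def excluded_95(cl_sb, cl_b):
    """A signal point is excluded at 95% CL when CLs < 0.05."""
    return cls(cl_sb, cl_b) < 0.05
```

A point with CL_{s+b} = 0.02 but CL_b = 0.5 gives CLs = 0.04 and is excluded, whereas a naive CL_{s+b} < 0.05 criterion would also exclude points where a downward background fluctuation, rather than sensitivity to the signal, drives the small p-value.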
For the γ/Z signal model, the lower limits on the gluino mass in this paper are between 200 and 400 GeV higher than those obtained in the previous search [11]. For the γ/h signal model, the previous search [10] was performed with Run-1 data, with a slightly different mass-plane coverage, and set a lower limit of around 1.2 TeV on the gluino mass. In the present study, the lower limits on the gluino mass in this model are almost 1 TeV higher. For both models, the most stringent lower limit on the gluino mass is 2.4 TeV, for a neutralino mass of 1.3-1.4 TeV. Furthermore, an overall lower limit on the gluino mass of 2.2 TeV is obtained for all neutralino masses except those below 150 GeV, a region where the analysis is expected to have low signal acceptance due to trigger constraints and the large E_T^miss requirements in the SRs.

Conclusions
Based on proton-proton collision data at √s = 13 TeV corresponding to an integrated luminosity of 139 fb−1, recorded by the ATLAS detector at the LHC in Run 2, a search has been performed for the experimental signature of at least one isolated photon with high transverse momentum, jets and a large amount of missing transverse momentum. Three signal regions are defined: one with a prediction of 2.67 ± 0.75 background events and 2 events observed, another with 2.55 ± 0.64 background events predicted and no events observed, and a third with 2.55 ± 0.44 background events predicted and 5 events observed. These results are compatible with no significant excess of events over the SM background expectation. Model-dependent 95% CL upper limits are set on possible contributions from new physics in a GGM scenario with an NLSP neutralino that is a mixture of higgsino and bino. Pair-produced gluinos with masses up to 2200 GeV are excluded for most of the NLSP masses investigated, giving the most stringent limits obtained by ATLAS. Model-independent 95% CL upper limits are set on the visible cross section for contributions from new physics in each of the defined signal regions. The most stringent observed limit is from SRM, where visible cross sections greater than 0.022 fb are excluded.