Search for natural supersymmetry in events with top quark pairs and photons in pp collisions at $\sqrt{s} =$ 8 TeV

Results are presented from a search for natural gauge-mediated supersymmetry (SUSY) in a scenario in which the top squark is the lightest squark, the next-to-lightest SUSY particle is a bino-like neutralino, and the lightest SUSY particle is the gravitino. The strong production of top squark pairs can produce events with pairs of top quarks and neutralinos, with each bino-like neutralino decaying to a photon and a gravitino. The search is performed using a sample of pp collision data accumulated by the CMS experiment at $\sqrt{s} = 8$ TeV, corresponding to an integrated luminosity of 19.7 fb$^{-1}$. The final state consists of a lepton (electron or muon), jets, and one or two photons. The imbalance in transverse momentum in the events is compared with the expected spectrum from standard model processes. No excess event yield is observed beyond the expected background, and the result is interpreted in the context of a general model of gauge-mediated SUSY breaking, leading to the exclusion of top squark masses below 650 to 730 GeV, depending on the neutralino mass.

In this paper, we describe a search for light top squarks ($\tilde{\mathrm{t}}$) in a data sample corresponding to an integrated luminosity of 19.7 fb$^{-1}$ of pp collisions at $\sqrt{s} = 8$ TeV. This search is motivated by models of gauge-mediated SUSY breaking (GMSB) [27][28][29] in which the neutralino ($\tilde{\chi}_1^0$) is the next-to-lightest sparticle (NLSP) and the gravitino ($\tilde{\mathrm{G}}$) is the lightest sparticle (LSP). The gravitino escapes undetected and contributes to the missing transverse momentum $\vec{p}_\mathrm{T}^\mathrm{miss}$ in the detector, whose magnitude is referred to as $p_\mathrm{T}^\mathrm{miss}$. This search considers a bino-like neutralino that decays to a photon and a gravitino. Assuming that R-parity [30,31] is conserved, pair production of sparticles would be the dominant production mechanism for SUSY particles in pp collisions at the LHC. Because top squarks are expected to be relatively light in natural SUSY scenarios, we search for top squark pair production, a strong-interaction process. Assuming a bino-like neutralino NLSP, each top squark would decay to a top quark and a neutralino, with the neutralino decaying to a photon and a gravitino, leading to a $\mathrm{t\bar{t}}+\gamma\gamma+p_\mathrm{T}^\mathrm{miss}$ topology. This event topology is shown in Fig. 1. The analysis concentrates on the semileptonic decay of the $\mathrm{t\bar{t}}$ pair, thereby requiring the presence of exactly one isolated electron or muon, which minimizes contributions from multijet and $\gamma$+jets backgrounds. At least one jet in each event is required to be tagged as originating from a b quark to reduce non-$\mathrm{t\bar{t}}$ backgrounds. No explicit $\mathrm{t\bar{t}}+\gamma\gamma$ sample is used in the background estimates because of the exceedingly small cross section for such events in the SM. Two signal regions are defined for both the electron and muon channels, depending on the presence of one or two selected photons in the event. Control regions are similarly defined, using photons that fail either the nominal isolation or shower-energy distribution requirements.
The results of the analysis are evaluated by comparing the shapes of the $p_\mathrm{T}^\mathrm{miss}$ distributions between the observed data and the expected SM backgrounds in each signal region.

Object reconstruction
All physics objects in the event (muons, electrons, photons, jets, and $p_\mathrm{T}^\mathrm{miss}$) are reconstructed using the particle-flow (PF) algorithm [40,41]. Jets are formed by clustering PF candidates using the anti-$k_\mathrm{T}$ algorithm [42], as implemented in the FASTJET toolkit [43], with a distance parameter of 0.5, and their momenta are corrected for the effects of multiple interactions in the same or neighboring bunch crossings (pileup). The $p_\mathrm{T}^\mathrm{miss}$ of an event is defined as the magnitude of the projection, onto the plane perpendicular to the proton beams, of the negative of the vector sum of the momenta of all reconstructed objects in the event. All PF candidates are used in the calculation of $p_\mathrm{T}^\mathrm{miss}$. Photons are reconstructed from energy clusters in the ECAL barrel ($|\eta| < 1.44$), are required to be highly isolated from other objects, and must have transverse momentum $p_\mathrm{T} > 20$ GeV. The ratio of the energy deposited in the HCAL tower closest to the seed of the ECAL photon cluster to the energy in the photon cluster must be less than 5%. The photon shower is required to have a photon-like spatial distribution of its energy [38]. The isolation variable, defined as the scalar sum of the $p_\mathrm{T}$ of all PF candidates within a cone of $\Delta R = \sqrt{(\Delta\phi)^2 + (\Delta\eta)^2} = 0.3$ in the $\eta$-$\phi$ plane centered on the photon axis, is calculated without including the $p_\mathrm{T}$ of the candidate photon. The isolation energy for charged hadrons is required to be $<$15 GeV, the neutral-hadron energy $<$3.5 GeV + 4% of the photon candidate $p_\mathrm{T}$, and the isolation energy from any other photons in the cone must be $<$13 GeV + 0.5% of the candidate photon $p_\mathrm{T}$. Pileup corrections depending on $\eta$ are applied to all calculated isolation variables.
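As an illustration of the quantities defined above, the following is a minimal stand-alone sketch of the $\Delta R$ separation and the scalar-$p_\mathrm{T}$ isolation sum; the dictionary-based candidate records are hypothetical, not the CMS data format:

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Angular separation Delta R = sqrt((dphi)^2 + (deta)^2) in the eta-phi plane."""
    dphi = abs(phi1 - phi2)
    if dphi > math.pi:                 # wrap the azimuthal difference into [0, pi]
        dphi = 2.0 * math.pi - dphi
    deta = eta1 - eta2
    return math.sqrt(dphi * dphi + deta * deta)

def isolation_sum(photon, candidates, cone=0.3):
    """Scalar pT sum of all candidates inside the cone, excluding the photon itself."""
    return sum(c["pt"] for c in candidates
               if c is not photon
               and delta_r(photon["eta"], photon["phi"], c["eta"], c["phi"]) < cone)
```

In practice the summed $p_\mathrm{T}$ would be computed separately per candidate type (charged hadrons, neutral hadrons, photons), since the thresholds above differ for each.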
Electrons are reconstructed from clusters of deposited energy in the ECAL that are matched to a track in the silicon tracker [44]. Candidate electrons are required to have $p_\mathrm{T} > 30$ GeV and to be within $|\eta| < 2.5$, excluding the small transition region ($1.44 < |\eta| < 1.52$) between the ECAL barrel and the endcaps. Electrons are required to be isolated, with the sum of the energy depositions within a cone of radius $\Delta R = 0.3$, excluding the electron, amounting to less than 10% of the momentum of the candidate electron.
Muons are reconstructed from measurements in the muon system and compatible track segments in the silicon tracker [45]. Candidate muons are required to have $p_\mathrm{T} > 30$ GeV, to be within $|\eta| < 2.1$, and to have an isolation energy sum in a cone of radius $\Delta R = 0.4$, excluding the muon, of less than 12% of their $p_\mathrm{T}$. Looser lepton requirements are applied to identify additional leptons that are used to veto dilepton $\mathrm{t\bar{t}}$ final states, as described in Section 4.
The combined secondary vertex (CSV) algorithm [46,47] is used to identify jets originating from b quarks. The CSV algorithm uses secondary vertices and track impact parameters to provide a discriminant that separates b quark jets from charm, light-quark, and gluon jets. The selection efficiency is about 70% for b quark jets and 20% for c quark jets. The misidentification probability for light-quark or gluon jets at this working point is about 2%.

Event selection and analysis strategy
Events are required to pass either a single-electron or single-muon trigger, requiring one isolated electron or muon with a minimum $p_\mathrm{T}$ of 27 or 24 GeV, respectively. In addition, the single-muon trigger requires the muon candidate to be within $|\eta| < 2.1$. The trigger efficiency is approximately 100% for the offline requirement of $p_\mathrm{T} > 30$ GeV.
Exactly one lepton and at least three jets with $p_\mathrm{T} > 30$ GeV and $|\eta| < 2.4$ are required, with at least one of the jets tagged as originating from a b quark. All objects are required to be separated from each other by at least $\Delta R = 0.5$. Events containing additional leptons satisfying the less restrictive criteria of $p_\mathrm{T} > 10$ GeV, $|\eta| < 2.5$, and isolation-energy sums of less than 20% of their $p_\mathrm{T}$, are rejected.
After this preselection, events are separated into independent samples based on the number of candidate photons. Candidate photons are required to be separated from all jets by $\Delta R > 0.7$. Two signal regions are defined: SR1, containing exactly one photon candidate, and SR2, containing at least two photon candidates.
Photons that fail either the shower-energy distribution or the charged-hadron isolation criteria are referred to as fake photons. These objects are predominantly jets with large electromagnetic fluctuations in their hadronization, and are used to define two control regions: CR1, containing one fake and no properly reconstructed photons, and CR2, containing two or more fake and no properly reconstructed photons. The control regions are defined not to overlap with the signal regions, to have very small acceptance for signal, and to greatly enhance the population of photon-like jets that contribute most of the estimated background in the signal regions. The control regions also provide events that can be used to study the performance of the $p_\mathrm{T}^\mathrm{miss}$ simulation for poorly reconstructed photon-like objects in the signal region. The effect of these poorly reconstructed photon-like objects on the $p_\mathrm{T}^\mathrm{miss}$ resolution is found to be negligible compared to the effect of the $p_\mathrm{T}$ resolutions of the jets from the $\mathrm{t\bar{t}}$ decays.
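The assignment of events to the signal and control regions described above is a mutually exclusive classification, which can be sketched as follows; the per-event counts of good and fake photons are assumed inputs, and this is not CMS software:

```python
def classify_event(n_good_photons, n_fake_photons):
    """Assign a preselected event to a signal or control region.

    SR1: exactly one good photon; SR2: two or more good photons.
    CR1/CR2: one / two-or-more fake photons and no good photons,
    so the control regions cannot overlap the signal regions.
    """
    if n_good_photons >= 2:
        return "SR2"
    if n_good_photons == 1:
        return "SR1"
    if n_fake_photons >= 2:
        return "CR2"
    if n_fake_photons == 1:
        return "CR1"
    return None  # event enters neither a signal nor a control region
```

Checking the good-photon count first guarantees that an event with both good and fake photons lands in a signal region, never in a control region.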
The background expected in the signal regions is largely dominated by $\mathrm{t\bar{t}}$+jets and $\mathrm{t\bar{t}}+\gamma$ events, in which selected photons may originate from misreconstructed jets. These two processes are simulated in Monte Carlo (MC) using the leading-order (LO) MADGRAPH 5.1.3 [48] matrix element generator matched to PYTHIA 6.426 [49] for parton showering and fragmentation. Simulated $\mathrm{t\bar{t}}+\gamma$ events are generated in a $2 \to 7$ configuration (pp $\to \mathrm{b\bar{b}jj}\ell\nu\gamma$). Approximately 0.6% of the simulated $\mathrm{t\bar{t}}$+jets events contain a generator-level photon that falls into the phase space of the $\mathrm{t\bar{t}}+\gamma$ sample, and these are removed to avoid double counting. Most other backgrounds are simulated with MADGRAPH matched to PYTHIA, including the W+jets, Z+jets, $\mathrm{t\bar{t}}$+W, $\mathrm{t\bar{t}}$+Z, W+$\gamma$, Z+$\gamma$, and diboson (ZZ, WZ, and WW) processes. Single top quark events are generated with the next-to-leading-order (NLO) generator POWHEG 1.0 [50], with the decay of $\tau$ leptons modeled by TAUOLA [51]. The Z2* tune [52,53] is used for the underlying event. All simulated backgrounds are processed using the full simulation of the response of the CMS detector based on the GEANT4 package [54], and reconstructed under the same conditions as the data. These backgrounds are then normalized to the integrated luminosity of the data using their respective cross sections, calculated at least at NLO. The CTEQ6M parton distribution functions (PDF) are used in the signal and background simulations [55]. A summary of the software used in the MC simulations of the backgrounds is given in Table 1. In the muon+jets channel, the background from Z+jets and Z+$\gamma$ events is very small because of the low probability for a muon to be misidentified as a photon. In the electron+jets channel, however, these processes contribute more to the background, especially at low $p_\mathrm{T}^\mathrm{miss}$, because the probability for an electron to be misidentified as a photon is much greater.
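The removal of overlapping events between the two samples can be sketched schematically; the event records, phase-space thresholds, and helper names below are illustrative placeholders, not the generator-level selection actually used:

```python
import math

def in_ttgamma_phase_space(photon, gen_particles, pt_min=20.0, dr_min=0.5):
    """True if a generator-level photon falls in the phase space of the dedicated
    tt+gamma sample. The pt_min and dr_min thresholds are placeholders only."""
    def delta_r(a, b):
        dphi = abs(a["phi"] - b["phi"])
        if dphi > math.pi:
            dphi = 2.0 * math.pi - dphi
        return math.hypot(a["eta"] - b["eta"], dphi)
    return photon["pt"] > pt_min and all(
        delta_r(photon, p) > dr_min for p in gen_particles)

def remove_overlap(ttjets_events):
    """Drop tt+jets events containing a generator-level photon in the tt+gamma
    phase space, so the two samples can be summed without double counting."""
    return [ev for ev in ttjets_events
            if not any(in_ttgamma_phase_space(g, ev["gen_others"])
                       for g in ev["gen_photons"])]
```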
This electron misidentification rate can be determined from the size of the peak at the Z boson mass in the invariant mass distribution of electron-photon pairs in the electron+jets channel of SR1. This rate depends on an estimate of the number of selected Z bosons in the electron+jets channel, the accuracy of which is improved by applying a scale factor (SF) extracted to normalize the Z+jets and Z+$\gamma$ MC events in both the electron and muon channels. The SF is measured by imposing a dilepton selection similar to the SR1 selection, but altered to require two same-flavor leptons rather than a single lepton. Events with additional leptons are vetoed, and no photons are required. A fit to the invariant mass of the dilepton system in data, using the Z+jets and Z+$\gamma$ MC events as the signal template and all other MC events as background templates, provides a normalization scale factor for the Z+jets and Z+$\gamma$ MC events, labeled SF$_{Z(\gamma)}$, in the electron and muon channels.
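The structure of such a one-parameter normalization fit can be illustrated in closed form. This is a simplified least-squares stand-in, not the actual binned likelihood fit used in the analysis, with the bin contents given as plain Python lists:

```python
def fit_scale_factor(data, signal, background):
    """Closed-form least-squares scale factor SF minimizing
    sum_i (data_i - SF * signal_i - background_i)^2 over the mass bins."""
    numerator = sum(s * (d - b) for s, d, b in zip(signal, data, background))
    denominator = sum(s * s for s in signal)
    return numerator / denominator
```

With a signal template concentrated at the Z peak and a fixed background template, the returned SF plays the role of SF$_{Z(\gamma)}$ (or, with an e$\gamma$ mass template, SF$_{e\to\gamma}$) in this toy setting.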
Once this first SF is applied to correct the MC estimate of the number of Z bosons, the Z resonance in the SR1 electron+jets channel is used to obtain a second scale factor, SF$_{e\to\gamma}$, which corrects for the misidentification of electrons as photons. A fit to the invariant mass of the electron-photon system in SR1 data, with $p_\mathrm{T}^\mathrm{miss} < 50$ GeV to limit the presence of signal, is performed using the Z+jets and Z+$\gamma$ MC events to determine their contributions. Generator-level matching of reconstructed photons to generated electrons is applied to increase the purity of the misidentified e$\gamma$ mass template. To increase the number of events available for each template, the b tagging requirement is removed from the MC events and from the data sample, as the misidentification does not depend on the presence of a b jet. From the result of this fit, a normalization SF$_{e\to\gamma}$ is measured and applied to both the Z+jets and Z+$\gamma$ MC events in the electron signal regions. A corresponding SF$_{\mu\to\gamma}$ scale factor is not applied in the muon signal regions, as the misidentification of muons as photons is negligible. The results of the fits for each of these scale factors are listed in Table 2. Comparisons of the data and MC distributions after applying the scale factors of Table 2 are shown in Fig. 2.

Table 2: Measured values of the scale factors SF$_{Z(\gamma)}$ and SF$_{e\to\gamma}$, used to correct the MC predictions for the Z+jets and Z+$\gamma$ backgrounds and for electron-to-photon misidentification. For the electron+jets channel, the product of the two is applied to the Z+jets and Z+$\gamma$ backgrounds; in the muon+jets channel, only the SF$_{Z(\gamma)}$ scale factor is relevant. The first uncertainties are statistical, obtained from the uncertainties in the fits. The second uncertainties correspond to the differences in the resulting scale factors, added in quadrature, obtained by varying each systematic uncertainty up and down by one standard deviation and refitting.

Channel   SF$_{Z(\gamma)}$        SF$_{e\to\gamma}$
e         1.38 ± 0.02 ± 0.15      1.58 ± 0.03 ± 0.04
µ         1.60 ± 0.02 ± 0.17      —

The final ingredient needed to estimate the background is the relative composition of photons and photon-like jets in the dominant $\mathrm{t\bar{t}}$+jets and $\mathrm{t\bar{t}}+\gamma$ backgrounds. As stated in the introduction, no explicit $\mathrm{t\bar{t}}+\gamma\gamma$ sample is used in the background estimate because of the exceedingly small cross section for such events. The sources of two-photon events in SR2 are largely jets or electrons misidentified as photons, as described above, or initial- or final-state radiation as predicted by PYTHIA. While the precise photon purity in each signal region is important for absolute measurements, no difference in the overall shape of the simulated $p_\mathrm{T}^\mathrm{miss}$ is found when altering the purity of selected photons. The maximum bin-by-bin difference between the simulated $p_\mathrm{T}^\mathrm{miss}$ of $\mathrm{t\bar{t}}$+jets and $\mathrm{t\bar{t}}+\gamma$ events is found to be 5%. When their relative normalizations are adjusted to the observed photon purity in data through a fit to the photon isolation variable, the result is well contained within the statistical uncertainties in the $p_\mathrm{T}^\mathrm{miss}$ distribution. The $p_\mathrm{T}^\mathrm{miss}$ distribution in both signal regions is thus found to be insensitive to the source of the selected photons in the $\mathrm{t\bar{t}}$+jets and $\mathrm{t\bar{t}}+\gamma$ backgrounds, and, as such, no dedicated $\mathrm{t\bar{t}}+\gamma\gamma$ sample is required. To eliminate any dependence on the overall production rate of $\mathrm{t\bar{t}}+\gamma$ events, the normalizations of the $\mathrm{t\bar{t}}$+jets and $\mathrm{t\bar{t}}+\gamma$ backgrounds are allowed to float freely in the calculations of the upper limits, so that the interpretation of the results is based entirely on the observed shapes of the distributions.
The control regions allow us to validate the prediction of the $p_\mathrm{T}^\mathrm{miss}$ background, as they contain less than 1% contamination from signal. By inverting the requirements on the photon shower selection or on the charged-hadron isolation, the CR1 and CR2 regions contain the same $\mathrm{t\bar{t}}$ systems as the signal regions, but with greatly enhanced contributions from misidentified jets compared to the photon content in each sample. The observed data and predicted background $p_\mathrm{T}^\mathrm{miss}$ distributions are shown in Fig. 3 for each control region.
Figure 3: Observed data and predicted background $p_\mathrm{T}^\mathrm{miss}$ distributions for the combined e and µ control regions: (upper panel) CR1 with one fake photon, and (lower panel) CR2 with two fake photons. The content of each bin is normalized to its bin width. The ratios of data to background are shown below the two panels. The overall uncertainties are obtained from the sum in quadrature of the statistical and systematic components. The diboson background includes WW, WZ, ZZ, W+$\gamma$, and Z+$\gamma$.

The bin-by-bin fractional disagreement ($1 - \mathrm{Data}/\mathrm{Background}$), ranging between 10 and 20% between data and background in CR1, is taken as the signal-region systematic uncertainty in the modeling of $p_\mathrm{T}^\mathrm{miss}$, and is applied bin-by-bin in the signal regions. The Kolmogorov-Smirnov test [56] result of 0.66 between data and simulation for CR2 is attributable to the very small number of events in data; therefore, CR2 is not used to determine an uncertainty for the signal region SR2, and the CR1 results are used for both SR1 and SR2. An additional systematic uncertainty in SR1 is obtained using the bin-by-bin fractional differences ($1 - \mathrm{CR1}/\mathrm{SR1}$) of the CR1 and SR1 $p_\mathrm{T}^\mathrm{miss}$ shapes. A final systematic uncertainty is obtained from a similar bin-by-bin difference ($1 - \mathrm{SR1}/\mathrm{SR2}$) for SR2. Overall, this accounts for a 10-20% systematic uncertainty from differences between the data and the CR1 MC $p_\mathrm{T}^\mathrm{miss}$ shapes, a 1-8% systematic uncertainty in SR1 due to the difference between the CR1 and SR1 $p_\mathrm{T}^\mathrm{miss}$ shapes, and a 10-50% systematic uncertainty (the 50% value applies only in the highest bin of $p_\mathrm{T}^\mathrm{miss}$) in SR2, based on the difference between the SR1 and SR2 $p_\mathrm{T}^\mathrm{miss}$ shapes.
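The bin-by-bin fractional disagreement used as a shape systematic is simple to state in code; a minimal sketch, with hypothetical bin contents:

```python
def fractional_shape_uncertainty(data, background):
    """Bin-by-bin |1 - data/background|, the quantity used to assign a shape
    systematic from residual control-region disagreement."""
    return [abs(1.0 - d / b) for d, b in zip(data, background)]
```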

Results and interpretation
For any given background or signal process, contributions from systematic uncertainties affecting $p_\mathrm{T}^\mathrm{miss}$ are treated simultaneously and are assumed to be completely uncorrelated. All backgrounds are simulated using MC generated events and assigned systematic uncertainties based on the integrated luminosity uncertainty, PDF and scale uncertainties, corrections for the number of pileup events, and the jet energy scale and resolution (JES and JER). Estimated uncertainties in the trigger efficiency and object selections are derived from the systematic uncertainties in the MC scale factors; these include trigger efficiencies, b tagging [46,47], as well as electron [44], muon [45], and photon identification [38]. The systematic uncertainties are summarized in Table 3.
The observed data are compared to the SM background estimates as a function of $p_\mathrm{T}^\mathrm{miss}$ in each signal region, as shown in Fig. 4. No significant deviation is observed between data and the background prediction. The final results are summarized in Table 4. To demonstrate what a GGM signal would look like compared to the data, an example of a GGM spectrum is generated with FASTSIM [57] using PYTHIA 6 and SUSPECT 2.41 [58], using the decay tables from SDECAY 1.2 [59] and NLO cross sections calculated with PROSPINO 2.1 [60]. We scan over the parameters $M_1$, the U(1)$_Y$ gaugino (bino) mass, and $M_{\tilde{\mathrm{t}}_R}$ in the SLHA files [61]. The other input parameters of GGM, such as $M_2$, the SU(2)$_L$ gaugino (wino) mass, and $M_{\tilde{\mathrm{d}}_R}$, are decoupled. As a result, SDECAY + SUSPECT produce neutralino and top squark masses that are similar to the settings of $M_1$ and $M_{\tilde{\mathrm{t}}_R}$, and the masses of the remaining particles are in the TeV range. The GGM signal is shown superimposed on the data and background MC in Fig. 4. The mass of the top squark ($m_{\tilde{\mathrm{t}}}$) is chosen to range from 360 to 910 GeV. The neutralino is assumed for simplicity to be 100% bino-like, decaying 100% of the time to a photon plus a gravitino. The neutralino mass ($m_{\tilde{\chi}_1^0}$) is chosen to range from 150 to 725 GeV, and the gravitino mass is 1 GeV. Signal points are evaluated in 25 GeV steps in both $m_{\tilde{\chi}_1^0}$ and $m_{\tilde{\mathrm{t}}}$ up to 300 GeV, and in 50 GeV steps for higher masses. All other SUSY particles (squarks, gluinos, and gauginos) are decoupled by setting their masses to very large values, so that the only relevant process is the production of top squark pairs that decay to bino-like NLSPs. The mass region where $m_{\tilde{\mathrm{t}}} - m_{\tilde{\chi}_1^0} < m_\mathrm{t}$, with $m_\mathrm{t}$ the top quark mass, is not considered, as the requirement for high-$p_\mathrm{T}$ leptons and b jets limits the sensitivity in this mass range.
No significant excess of events is observed beyond the SM expectation, and 95% confidence level (CL) upper limits are placed on the cross sections by combining the results of all four search regions (electron SR1, muon SR1, electron SR2, and muon SR2) using the CL$_\mathrm{s}$ criterion [62][63][64]. The test statistic is constructed as the product of likelihood ratios in bins of $p_\mathrm{T}^\mathrm{miss}$. Systematic uncertainties are included as nuisance parameters in the signal and background $p_\mathrm{T}^\mathrm{miss}$ shapes. Systematic uncertainties affecting only the normalization of signal or background processes are modeled through log-normal distributions, taken as the probability density functions of their associated nuisance parameters. Fluctuations in the shape of the $p_\mathrm{T}^\mathrm{miss}$ distributions determine both upward and downward systematic uncertainties.
A single ±100% nuisance parameter with a log-uniform probability density function for its normalization is introduced to allow the $\mathrm{t\bar{t}}$+jets and $\mathrm{t\bar{t}}+\gamma$ normalizations to float freely in the upper-limit calculation. Statistical uncertainties resulting from the limited number of MC events are also included as nuisance parameters, as prescribed in Ref. [65].
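For intuition, the CL$_\mathrm{s}$ criterion can be sketched for a single counting bin with no nuisance parameters; this toy uses the observed event count as the test statistic, whereas the analysis uses binned likelihood ratios in $p_\mathrm{T}^\mathrm{miss}$ with nuisance parameters, but the exclusion logic is the same:

```python
import math

def poisson_cdf(n, mu):
    """P(N <= n) for a Poisson distribution with mean mu."""
    return math.exp(-mu) * sum(mu**k / math.factorial(k) for k in range(n + 1))

def cls(n_obs, s, b):
    """One-bin CLs = CL_{s+b} / CL_b, the ratio of the probabilities of observing
    at most n_obs events under the signal-plus-background and background-only
    hypotheses. A signal point is excluded at 95% CL when CLs < 0.05."""
    return poisson_cdf(n_obs, s + b) / poisson_cdf(n_obs, b)
```

Dividing by CL$_b$ protects against excluding signals to which the experiment has no sensitivity, which is the motivation for the CL$_\mathrm{s}$ criterion over a plain CL$_{s+b}$ test.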
The expected and observed upper limits are shown in Fig. 5. The observed upper limits are slightly less stringent than the expected limits. Observed and expected exclusion contours are also determined and shown in Fig. 6, with the exclusion of top squark masses below 650 to 730 GeV, corresponding to neutralino masses of 500 and 150 GeV, respectively. These exclusions are obtained using the $-1\sigma$ theoretical excursion from the observed exclusion mean.

Figure 6: Observed and expected mean exclusions at the 95% CL in the top squark and bino mass plane, and their ranges of uncertainty given by the contours at the 68% CL. The region to the left of the contour for which $m_{\tilde{\mathrm{t}}} - m_{\tilde{\chi}_1^0} > m_\mathrm{t}$ is excluded by this analysis.

Summary
We have presented a search for natural gauge-mediated supersymmetry breaking in events with a top quark pair and one or two photons. No significant deviation that would indicate the presence of new physics is found between the missing transverse momentum distributions of the data and the expected SM backgrounds. Upper limits on signal cross sections are calculated for a range of top squark and bino masses. Top squark masses below 650 to 730 GeV are excluded at the 95% CL, corresponding to neutralino masses of 500 and 150 GeV, respectively. These top squark mass points are obtained using the $-1\sigma$ theoretical excursion from the observed exclusion mean. These results set the most stringent exclusions on top squark masses in the gauge-mediated supersymmetry model considered here.