Performance of algorithms that reconstruct missing transverse momentum in √s = 8 TeV proton–proton collisions in the ATLAS detector

The reconstruction and calibration algorithms used to calculate missing transverse momentum (E miss T) with the ATLAS detector exploit energy deposits in the calorimeter and tracks reconstructed in the inner detector as well as the muon spectrometer. Various strategies are used to suppress effects arising from additional proton–proton interactions, called pileup, concurrent with the hard-scatter processes. Tracking information is used to distinguish contributions from the pileup interactions using their vertex separation along the beam axis. The performance of the E miss T reconstruction algorithms, especially with respect to the amount of pileup, is evaluated using data collected in proton–proton collisions at a centre-of-mass energy of 8 TeV during 2012, and results are shown for a data sample corresponding to an integrated luminosity of 20.3 fb −1.
The simulation and modelling of E miss T in events containing a Z boson decaying to two charged leptons (electrons or muons) or a W boson decaying to a charged lepton and a neutrino are compared to data. The acceptance for different event topologies, with and without high transverse momentum neutrinos, is shown for a range of threshold criteria for E miss T, and estimates of the systematic uncertainties in the E miss T measurements are presented.

The Large Hadron Collider (LHC) provided proton–proton (pp) collisions at a centre-of-mass energy of 8 TeV during 2012. Momentum conservation transverse to the beam axis 1 implies that the transverse momenta of all particles in the final state should sum to zero. Any imbalance may indicate the presence of undetectable particles such as neutrinos or new, stable particles escaping detection.
The missing transverse momentum (E miss T) is reconstructed as the negative vector sum of the transverse momenta (p T) of all detected particles, and its magnitude is represented by the symbol E miss T. The measurement of E miss T strongly depends on the energy scale and resolution of the reconstructed "physics objects". The physics objects considered in the E miss T calculation are electrons, photons, muons, τ-leptons, and jets. Momentum contributions not attributed to any of the physics objects mentioned above are reconstructed as the E miss T "soft term". Several algorithms for reconstructing the E miss T soft term, utilizing a combination of calorimeter signals and tracks in the inner detector, are considered.
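As a minimal illustration of this definition (a hypothetical sketch, not the ATLAS reconstruction, which works with calibrated physics objects and a soft term), the negative vector sum can be written as:

```python
import math

def missing_et(objects):
    """Missing transverse momentum from a list of detected objects.

    Each object is a (pt, phi) pair in GeV and radians. E_T^miss is the
    magnitude of the negative vector sum of the transverse momenta.
    """
    mex = -sum(pt * math.cos(phi) for pt, phi in objects)
    mey = -sum(pt * math.sin(phi) for pt, phi in objects)
    return math.hypot(mex, mey)

# Two back-to-back objects balance, so E_T^miss is (numerically) zero;
# a single unbalanced object gives E_T^miss equal to its pT.
print(missing_et([(50.0, 0.0), (50.0, math.pi)]))
print(missing_et([(40.0, 0.0)]))
```

Any undetected particle, such as a neutrino, shows up in this sum as a nonzero imbalance opposite to its direction.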
The E miss T reconstruction algorithms and calibrations developed by ATLAS for 7 TeV data from 2010 are summarized in Ref. [1]. The 2011 and 2012 datasets are more affected by contributions from additional pp collisions, referred to as "pileup", concurrent with the hard-scatter process. Various techniques have been developed to suppress such contributions. This paper describes the pileup dependence, calibration, and resolution of the E miss T reconstructed with different algorithms and pileup-mitigation techniques.
The performance of E miss T reconstruction algorithms, or "E miss T performance", refers to the use of derived quantities like the mean, width, or tail of the E miss T distribution to study pileup dependence and calibration. The E miss T reconstructed with different algorithms is studied in both data and Monte Carlo (MC) simulation, and the level of agreement between the two is compared using datasets in which events with a leptonically decaying W or Z boson dominate. The W boson sample provides events with intrinsic E miss T from non-interacting particles (e.g. neutrinos). Contributions to the E miss T due to mismeasurement are referred to as fake E miss T.

1 ATLAS uses a right-handed coordinate system with its origin at the nominal interaction point (IP) in the centre of the detector and the z-axis along the beam pipe. The x-axis points from the IP to the centre of the LHC ring, and the y-axis points upward. Cylindrical coordinates (r, φ) are used in the transverse plane, φ being the azimuthal angle around the beam pipe. The pseudorapidity is defined in terms of the polar angle θ as η = −ln tan(θ/2).
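The pseudorapidity defined in the footnote can be computed directly from the polar angle; a purely illustrative sketch:

```python
import math

def pseudorapidity(theta):
    """Pseudorapidity eta = -ln tan(theta/2), with theta the polar angle
    measured from the beam (z) axis."""
    return -math.log(math.tan(theta / 2.0))

# A particle perpendicular to the beam (theta = 90 degrees) has eta = 0;
# more forward directions (smaller theta) give larger |eta|.
print(pseudorapidity(math.pi / 2))
print(pseudorapidity(0.1))
```

Detector acceptances quoted in this paper (e.g. |η| < 2.5 for the inner detector) are intervals in this variable.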
Sources of fake E miss T may include p T mismeasurement, miscalibration, and particles passing through uninstrumented regions of the detector. In MC simulations, the E miss T from each algorithm is compared to the true E miss T (E miss,True T), which is defined as the magnitude of the vector sum of the p T of stable weakly interacting particles from the hard-scatter collision. The selection efficiency after an E miss T threshold requirement is then studied in simulated events with high-p T neutrinos (such as top-quark pair production and vector-boson fusion H → ττ) or possible new weakly interacting particles that escape detection (such as the lightest supersymmetric particles).
This paper is organized as follows. Section 2 gives a brief introduction to the ATLAS detector. Section 3 describes the data and MC simulation used as well as the event selections applied. Section 4 outlines how the E miss T is reconstructed and calibrated while Sect. 5 presents the level of agreement between data and MC simulation in W and Z boson production events. Performance studies of the E miss T algorithms on data and MC simulation are shown for samples with different event topologies in Sect. 6. The choice of jet selection criteria used in the E miss T reconstruction is discussed in Sect. 7. Finally, the systematic uncertainty in the absolute scale and resolution of the E miss T is discussed in Sect. 8. To provide a reference, Table 1 summarizes the different E miss T terms discussed in this paper.

ATLAS detector
The ATLAS detector [2] is a multi-purpose particle physics apparatus with a forward–backward symmetric cylindrical geometry and nearly 4π coverage in solid angle. For tracking, the inner detector (ID) covers the pseudorapidity range of |η| < 2.5, and consists of a silicon-based pixel detector, a semiconductor tracker (SCT) based on microstrip technology, and, for |η| < 2.0, a transition radiation tracker (TRT). The ID is surrounded by a thin superconducting solenoid providing a 2 T magnetic field, which allows the measurement of the momenta of charged particles. A high-granularity electromagnetic sampling calorimeter based on lead and liquid argon (LAr) technology covers the region of |η| < 3.2. A hadronic calorimeter based on steel absorbers and plastic-scintillator tiles provides coverage for hadrons, jets, and τ-leptons in the range of |η| < 1.7. LAr technology using a copper absorber is also used for the hadronic calorimeters in the end-cap region of 1.5 < |η| < 3.2 and for electromagnetic and hadronic measurements with copper and tungsten absorbing materials in the forward region of 3.1 < |η| < 4.9. The muon spectrometer (MS) surrounds the calorimeters. It consists of three air-core superconducting toroid magnet systems, precision tracking chambers providing accurate muon tracking out to |η| = 2.7, and additional detectors for triggering in the region of |η| < 2.4. A precision measurement of the track coordinates is provided by layers of drift tubes at three radial positions within |η| < 2.0. For 2.0 < |η| < 2.7, cathode-strip chambers with high granularity are instead used in the innermost plane. The muon trigger system consists of resistive-plate chambers in the barrel (|η| < 1.05) and thin-gap chambers in the end-cap regions (1.05 < |η| < 2.4).

Data samples and event selection
ATLAS recorded pp collisions at a centre-of-mass energy of 8 TeV with a bunch crossing interval (bunch spacing) of 50 ns in 2012. The resulting integrated luminosity is 20.3 fb −1 [3]. Multiple inelastic pp interactions occurred in each bunch crossing, and the mean number of inelastic collisions per bunch crossing ⟨μ⟩ over the full dataset is 21 [4], exceptionally reaching as high as about 70.
Data are analysed only if they satisfy the standard ATLAS data-quality assessment criteria [5]. Jet-cleaning cuts [5] are applied to minimize the impact of instrumental noise and out-of-time energy deposits in the calorimeter from cosmic rays or beam-induced backgrounds. This ensures that the residual sources of E miss T mismeasurement due to those instrumental effects are suppressed.

Track and vertex selection
The ATLAS detector measures the momenta of charged particles using the ID [6]. Hits from charged particles are recorded and are used to reconstruct tracks; these are used to reconstruct vertices [7,8].
Each vertex must have at least two tracks with p T > 0.4 GeV; for the primary hard-scatter vertex (PV), the requirement on the number of tracks is raised to three. The PV in each event is selected as the vertex with the largest value of Σ(p T)², where the scalar sum is taken over all the tracks matched to the vertex. The following track selection criteria [7] are used throughout this paper, including in the vertex reconstruction:
• p T > 0.5 GeV (0.4 GeV for vertex reconstruction and the calorimeter soft term),
• |η| < 2.5,
• number of hits in the pixel detector ≥ 1,
• number of hits in the SCT ≥ 6.
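The PV choice described above, the vertex with the largest Σ(p T)² over its associated tracks, can be sketched as follows; representing each vertex as a plain list of track p T values is a hypothetical simplification:

```python
def select_primary_vertex(vertices):
    """Pick the hard-scatter vertex: the candidate with the largest
    sum of track pT^2 (GeV^2).

    `vertices` is a list of vertices, each given as a list of track pT
    values in GeV. Vertices with fewer than three tracks are not PV
    candidates, per the selection described in the text.
    """
    candidates = [v for v in vertices if len(v) >= 3]
    return max(candidates, key=lambda tracks: sum(pt ** 2 for pt in tracks))

# A vertex with a few hard tracks beats one with many soft pileup tracks:
# sum(pT^2) = 1669 GeV^2 versus 7.2 GeV^2 here.
print(select_primary_vertex([[0.6] * 20, [30.0, 25.0, 12.0]]))
```

The quadratic weighting is what makes the selection robust against pileup vertices, which typically have many low-p T tracks.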
The transverse (longitudinal) impact parameter d 0 (z 0) is the transverse (longitudinal) distance of the track from the PV and is computed at the point of closest approach to the PV in the plane transverse to the beam axis. The requirements on the number of hits ensure that the track has an accurate p T measurement. The |η| requirement keeps only the tracks within the ID acceptance, and the requirement of p T > 0.4 GeV ensures that the track reaches the outer layers of the ID. Tracks with low p T have large curvature and are more susceptible to multiple scattering.
The average spread along the beamline direction for pp collisions in ATLAS during 2012 data taking is around 50 mm, and the typical track z 0 resolution for those with |η| < 0.2 and 0.5 < p T < 0.6 GeV is 0.34 mm. The typical track d 0 resolution is around 0.19 mm for the same η and p T ranges, and both the z 0 and d 0 resolutions improve with higher track p T .
Pileup effects come from two sources: in-time and out-of-time. In-time pileup is the result of multiple pp interactions in the same LHC bunch crossing. It is possible to distinguish the in-time pileup interactions by using their vertex positions, which are spread along the beam axis. At ⟨μ⟩ = 21, the efficiency to reconstruct and select the correct vertex for Z → μμ simulated events is around 93.5% and rises to more than 98% when requiring two generated muons with p T > 10 GeV inside the ID acceptance [10]. When vertices are separated along the beam axis by a distance smaller than the position resolution, they can be reconstructed as a single vertex. Each track in the reconstructed vertex is assigned a weight based upon its compatibility with the fitted vertex, which depends on the χ2 of the fit. The fraction of Z → μμ reconstructed vertices with more than 50% of the sum of track weights coming from pileup interactions is around 3% at ⟨μ⟩ = 21 [7,10]. Out-of-time pileup comes from pp collisions in earlier and later bunch crossings, which leave signals in the calorimeters that can take up to 450 ns to be collected. This is longer than the 50 ns between subsequent collisions, and occurs because the integration time of the calorimeters is significantly larger than the time between bunch crossings. By contrast, the charge collection time of the silicon tracker is less than 25 ns.

Event selection for Z → ℓℓ
The "standard candle" for evaluation of the E miss T performance is Z → ℓℓ events (ℓ = e or μ). They are produced without neutrinos, apart from a very small number originating from heavy-flavour decays in jets produced in association with the Z boson. The intrinsic E miss T is therefore expected to be close to zero, and the E miss T distributions are used to evaluate the modelling of the effects that give rise to fake E miss T. Candidate Z → ℓℓ events are required to pass an electron or muon trigger [11,12]. The lowest p T threshold for the unprescaled single-electron (single-muon) trigger is p T > 25 (24) GeV, and both triggers apply a track-based isolation as well as quality selection criteria for the particle identification. Triggers with higher p T thresholds, without the isolation requirements, are used to improve acceptance at high p T. These triggers require p T > 60 (36) GeV for electrons (muons). Events are accepted if they pass any of the above trigger criteria. Each event must contain at least one primary vertex with a z displacement from the nominal pp interaction point of less than 200 mm and with at least three associated tracks.
The offline selection of Z → μμ events requires the presence of exactly two identified muons [13]. An identified muon is reconstructed in the MS and is matched to a track in the ID. The combined ID+MS track must have p T > 25 GeV and |η| < 2.5. The z displacement of the muon track from the primary vertex is required to be less than 10 mm. An isolation criterion is applied to the muon track, where the scalar sum of the p T of additional tracks within a cone of size ΔR = √((Δη)² + (Δφ)²) = 0.2 around the muon is required to be less than 10% of the muon p T. In addition, the two leptons are required to have opposite charge, and the reconstructed dilepton invariant mass, m ℓℓ, is required to be consistent with the Z boson mass: 66 < m ℓℓ < 116 GeV.
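The track-isolation criterion just described can be sketched as follows; the (p T, η, φ) tuple representation and the function names are hypothetical, and the Δφ wrapping to [−π, π] is the usual convention:

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Angular distance dR = sqrt(deta^2 + dphi^2), with dphi wrapped
    into [-pi, pi]."""
    dphi = (phi1 - phi2 + math.pi) % (2 * math.pi) - math.pi
    return math.hypot(eta1 - eta2, dphi)

def is_isolated(muon, tracks, cone=0.2, max_frac=0.10):
    """Track isolation as described in the text: the scalar pT sum of
    other tracks within dR < 0.2 of the muon must stay below 10% of the
    muon pT. `muon` and each track are (pt, eta, phi) tuples."""
    pt_mu, eta_mu, phi_mu = muon
    cone_sum = sum(pt for pt, eta, phi in tracks
                   if 0 < delta_r(eta, phi, eta_mu, phi_mu) < cone)
    return cone_sum < max_frac * pt_mu

muon = (40.0, 0.5, 1.0)
print(is_isolated(muon, [(1.5, 0.55, 1.05), (20.0, 2.0, -2.0)]))
```

The strict inequality `0 < dR` simply skips the muon's own track if it appears in the list.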
The E miss T modelling and performance results obtained in Z → μμ and Z → ee events are very similar. For the sake of brevity, only the Z → μμ distributions are shown in all sections except for Sect. 6.6. The offline selection of Z → ee events requires exactly two identified electrons, which must satisfy the electron identification criteria of Ref. [14] and have p T > 25 GeV. Electron candidates in the region 1.37 < |η| < 1.52 suffer from degraded momentum resolution and particle identification due to the transition from the barrel to the end-cap detector and are therefore discarded in these studies. The electrons are required to be isolated, such that the sum of the energy in the calorimeter within a cone of size ΔR = 0.3 around the electron is less than 14% of the electron p T. The summed p T of other tracks within the same cone is required to be less than 7% of the electron p T. The calorimeter isolation variable [14] is corrected by subtracting estimated contributions from the electron itself, the underlying event [15], and pileup.

Event selection for W → ℓν
The W boson selection is based on the single-lepton triggers and the same lepton selection criteria as those used in the Z → ℓℓ selection. Events are rejected if they contain more than one reconstructed lepton. Selections on the E miss T and transverse mass (m T) are applied to reduce the multi-jet background with one jet misidentified as an isolated lepton. The transverse mass is calculated from the lepton and the E miss T as m T = √(2 p ℓ T E miss T (1 − cos Δφ)), where p ℓ T is the transverse momentum of the lepton and Δφ is the azimuthal angle between the lepton and E miss T directions. Both the m T and E miss T are required to be greater than 50 GeV. These selections can bias the event topology and its phase space, so they are only used when comparing simulation to data in Sect. 5.2, as they substantially improve the purity of W bosons in data events.
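The transverse-mass formula is simple to evaluate; a short sketch (illustrative only):

```python
import math

def transverse_mass(pt_lep, met, dphi):
    """m_T = sqrt(2 * pT(lepton) * ETmiss * (1 - cos(dphi))), the standard
    transverse-mass definition used in W -> l nu selections."""
    return math.sqrt(2.0 * pt_lep * met * (1.0 - math.cos(dphi)))

# Lepton and ETmiss back to back (dphi = pi): m_T = 2 * sqrt(pt * met).
print(transverse_mass(40.0, 40.0, math.pi))  # -> 80.0
```

Note that m_T peaks just below the W boson mass for true W → ℓν decays, which is why the m T > 50 GeV cut suppresses multi-jet background efficiently.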
The E miss T modelling and performance results obtained in W → eν and W → μν events are very similar. For the sake of brevity, only one of the two is considered in the following two sections: E miss T distributions in W → eν events are presented in Sect. 5.2, and the performance studies show W → μν events in Sect. 6. When studying the E miss T tails, both final states are considered in Sect. 6.6, because the η-coverage and reconstruction performance differ between muons and electrons.

Monte Carlo simulation samples
Table 2 summarizes the MC simulation samples used in this paper. The Z → ℓℓ and W → ℓν samples are generated with Alpgen [16] interfaced with Pythia [17] (denoted by Alpgen+Pythia) to model the parton shower, hadronization, and underlying event using the PERUGIA2011C set [18] of tunable parameters. One exception is the Z → ττ sample with leptonically decaying τ-leptons, which is generated with Alpgen interfaced with Herwig [19] with the underlying event modelled using Jimmy [20] and the AUET2 tunes [21]. Alpgen is a multi-leg generator that provides tree-level calculations for diagrams with up to five additional partons. The matrix-element MC calculations are matched to a model of the parton shower, underlying event, and hadronization. The main processes that are backgrounds to Z → ℓℓ and W → ℓν are events with one or more top quarks (tt and single-top-quark processes) and diboson production (WW, WZ, ZZ). The tt and tW processes are generated with Powheg [22] interfaced with Pythia [17] for hadronization and parton showering, with PERUGIA2011C for the underlying-event modelling. All the diboson processes are generated with Sherpa [23]. Powheg is a leading-order generator with corrections at next-to-leading order in α S, whereas Sherpa is a multi-leg generator at tree level.

To study event topologies with high jet multiplicities and to investigate the tails of the E miss T distributions, tt events with at least one leptonically decaying W boson are considered in Sect. 6.6. Single-top-quark (tW) production is considered with at least one leptonically decaying W boson. Both the tt and tW processes contribute to the W and Z boson distributions shown in Sect. 5 as well as the Z boson distributions in Sects. 4, 6, and 8 that compare data and simulation. A supersymmetric (SUSY) model comprising pair-produced 500 GeV gluinos, each decaying to a tt pair and a neutralino, is simulated with Herwig++ [24]. Finally, to study events with forward jets, the vector-boson fusion (VBF) production of H → ττ, generated with Powheg+Pythia8 [25], is considered. Both τ-leptons are forced to decay leptonically in this sample.
To estimate the systematic uncertainties in the data/MC ratio arising from the modelling of the soft hadronic recoil, E miss T distributions simulated with different MC generators, parton shower and underlying event models are compared. The estimation of systematic uncertainties is performed using a comparison of data and MC simulation, as shown in Sect. 8.2. The following combinations of generators and parton shower models are considered: Sherpa, Alpgen+Herwig, Alpgen+Pythia, and Powheg+Pythia8. The corresponding underlying event tunes are mentioned in Table 2. Parton distribution functions are taken from CT10 [30] for Powheg and Sherpa samples and CTEQ6L1 [38] for Alpgen samples.
Generated events are propagated through a Geant4 simulation [39,40] of the ATLAS detector. Pileup collisions are generated with Pythia8 for all samples, and are overlaid on top of simulated hard-scatter events before event reconstruction. Each simulation sample is weighted by its corresponding cross-section and normalized to the integrated luminosity of the data.

Reconstruction and calibration of the E miss T
Several algorithms have been developed to reconstruct the E miss T in ATLAS. They differ in the information used to reconstruct the p T of the particles, using either energy deposits in the calorimeters, tracks reconstructed in the ID, or both. This section describes these various reconstruction algorithms, and the remaining sections discuss the agreement between data and MC simulation as well as performance studies.
The E miss T is calculated as:

E miss x(y) = E miss,e x(y) + E miss,γ x(y) + E miss,τ x(y) + E miss,jets x(y) + E miss,μ x(y) + E miss,soft x(y) , (2)

where each term is calculated as the negative vectorial sum of transverse momenta of energy deposits and/or tracks. To avoid double counting, energy deposits in the calorimeters and tracks are matched to reconstructed physics objects in the following order: electrons (e), photons (γ), the visible parts of hadronically decaying τ-leptons (τ had-vis; labelled as τ), jets, and muons (μ). Each type of physics object is represented by a separate term in Eq. (2). The signals not associated with physics objects form the "soft term", whereas those associated with the physics objects are collectively referred to as the "hard term". The magnitude and azimuthal angle (φ miss) of E miss T are calculated as:

E miss T = √( (E miss x)² + (E miss y)² ) ,
φ miss = arctan(E miss y / E miss x) .

The total transverse energy in the detector, labelled as Σ E T, quantifies the total event activity and is an important observable for understanding the resolution of the E miss T, especially with increasing pileup contributions. It is defined as:

Σ E T = Σ p e T + Σ p γ T + Σ p τ T + Σ p jets T + Σ p μ T + Σ p soft T ,

which is the scalar sum of the transverse momenta of reconstructed physics objects and soft-term signals that contribute to the E miss T reconstruction. The physics objects included in Σ p soft T depend on the E miss T definition, so both calorimeter objects and track-based objects may be included in the sum, despite differences in p T resolution.
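The magnitude and azimuthal angle of the E miss T follow directly from the x and y component sums; a hedged sketch (hypothetical function name, not ATLAS software):

```python
import math

def met_from_components(mex, mey):
    """Magnitude and azimuthal angle of the missing transverse momentum,
    given the summed x and y components (GeV)."""
    met = math.hypot(mex, mey)        # E_T^miss = sqrt(mex^2 + mey^2)
    phi_miss = math.atan2(mey, mex)   # quadrant-correct azimuthal angle
    return met, phi_miss

met, phi = met_from_components(30.0, 40.0)
print(met)  # -> 50.0
print(phi)
```

Using `atan2` rather than a plain arctangent keeps the angle in the correct quadrant for negative component sums.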

Reconstruction and calibration of the E miss T hard terms
The hard term of the E miss T , which is computed from the reconstructed electrons, photons, muons, τ -leptons, and jets, is described in more detail in this section.
Electrons are reconstructed from clusters in the electromagnetic (EM) calorimeter which are associated with an ID track [14]. Electron identification is restricted to the range of |η| < 2.47, excluding the transition region between the barrel and end-cap EM calorimeters, 1.37 < |η| < 1.52. They are calibrated at the EM scale with the default electron calibration, and those satisfying the "medium" selection criteria [14] with p T > 10 GeV are included in the E miss T reconstruction. The photon reconstruction is also seeded from clusters of energy deposited in the EM calorimeter and is designed to separate electrons from photons. Photons are calibrated at the EM scale and are required to satisfy the "tight" photon selection criteria with p T > 10 GeV [14].
Muon candidates are identified by matching an ID track with an MS track or segment [13]. MS tracks are used for 2.5 < |η| < 2.7 to extend the η coverage. Muons are required to satisfy p T > 5 GeV to be included in the E miss T reconstruction. The contribution of muon energy deposited in the calorimeter is taken into account using either parameterized estimates or direct measurements, to avoid double counting a small fraction of their momenta.
Jets are reconstructed from three-dimensional topological clusters (topoclusters) [41] of energy deposits in the calorimeter using the anti-k t algorithm [42] with a distance parameter R = 0.4. The topological clustering algorithm suppresses noise by forming contiguous clusters of calorimeter cells with significant energy deposits. The local cluster weighting (LCW) [43,44] calibration is used to account for different calorimeter responses to electrons, photons and hadrons. Each cluster is classified as coming from an EM or hadronic shower, using information from its shape and energy density, and calibrated accordingly. The jets are reconstructed from calibrated topoclusters and then corrected for in-time and out-of-time pileup as well as the position of the PV [4]. Finally, the jet energy scale (JES) corrects for jet-level effects by restoring, on average, the energy of reconstructed jets to that of the MC generator-level jets. The complete procedure is referred to as the LCW+JES scheme [43,44]. Without changing the average calibration, additional corrections are made based upon the internal properties of the jet (global sequential calibration) to reduce the flavour dependence and energy leakage effects [44]. Only jets with calibrated p T greater than 20 GeV are used to calculate the jet term E miss,jets x(y) in Eq. (2), and the optimization of the 20 GeV threshold is discussed in Sect. 7.
To suppress contributions from jets originating from pileup interactions, a requirement on the jet vertex-fraction (JVF) [4] may be applied to selected jet candidates. Tracks matched to jets are extrapolated back to the beamline to ascertain whether they originate from the hard scatter or from a pileup collision. The JVF is then computed as:

JVF = Σ p track,PV T / Σ p track T ,

the ratio of the scalar sum of the transverse momenta of the tracks matched to both the jet and the primary vertex to the p T sum of all tracks matched to the jet, where the sums run over all tracks with p T > 0.5 GeV and |η| < 2.5 and the matching is performed using the "ghost-association" procedure [45,46]. The JVF distribution is peaked toward 1 for hard-scatter jets and toward 0 for pileup jets. No JVF selection requirement is applied to jets that have no associated tracks. Requirements on the JVF are made in the STVF, EJAF, and TST E miss T algorithms as described in Table 3 and Sect. 4.1.3.

Hadronically decaying τ-leptons are seeded by calorimeter jets with |η| < 2.5 and p T > 10 GeV. As described for jets, the LCW calibration is applied, corrections are made to subtract the energy due to pileup interactions, and the energy of the hadronically decaying τ candidates is calibrated at the τ-lepton energy scale (TES) [47]. The TES is independent of the JES and is determined using an MC-based procedure. Hadronically decaying τ-leptons passing the "medium" requirements [47] and having p T > 20 GeV after TES corrections are considered for the E miss T reconstruction.
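The JVF ratio can be sketched as follows; the flag-per-track representation is a hypothetical simplification of the ghost-association matching:

```python
def jvf(track_pts, from_pv):
    """Jet vertex fraction: pT sum of the jet's tracks from the primary
    vertex divided by the pT sum of all tracks matched to the jet.

    `track_pts` lists track pT values (GeV); `from_pv` is a parallel list
    of booleans flagging hard-scatter tracks. Returns None for jets with
    no associated tracks, to which no JVF requirement is applied.
    """
    total = sum(track_pts)
    if total == 0:
        return None
    pv_sum = sum(pt for pt, is_pv in zip(track_pts, from_pv) if is_pv)
    return pv_sum / total

# Three of four tracks from the hard-scatter vertex: JVF = 18/20 = 0.9
print(jvf([10.0, 5.0, 3.0, 2.0], [True, True, True, False]))
```

A hard-scatter jet thus yields a JVF near 1, while a pileup jet, whose tracks point back to other vertices, yields a value near 0.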

Reconstruction and calibration of the E miss T soft term
The soft term is a necessary but challenging ingredient of the E miss T reconstruction. It comprises all the detector signals not matched to the physics objects defined above and can contain contributions from the hard scatter as well as the underlying event and pileup interactions. Several algorithms designed to reconstruct and calibrate the soft term have been developed, as well as methods to suppress the pileup contributions. A summary of the E miss T and soft-term reconstruction algorithms is given in Table 3.
Four soft-term reconstruction algorithms are considered in this paper. Below the first two are defined, and then some motivation is given for the remaining two prior to their definition.

• Calorimeter Soft Term (CST)
This reconstruction algorithm [1] uses information mainly from the calorimeter and is widely used by ATLAS. The algorithm also includes corrections based on tracks but does not attempt to resolve the various pp interactions based on the track z 0 measurement. The soft term is referred to as the CST, whereas the entire E miss T is written as CST E miss T. Corresponding naming schemes are used for the other reconstruction algorithms. The CST is reconstructed using energy deposits in the calorimeter which are not matched to the high-p T physics objects used in the E miss T. To avoid fake signals in the calorimeter, noise suppression is important. This is achieved by calculating the soft term using only cells belonging to topoclusters, which are calibrated at the LCW scale [43,44]. The tracker and calorimeter provide redundant p T measurements for charged particles, so an energy-flow algorithm is used to determine which measurement to use. Tracks with p T > 0.4 GeV that are not matched to a high-p T physics object are used instead of the calorimeter p T measurement if their p T resolution is better than the expected calorimeter p T resolution. The calorimeter resolution is estimated as 0.4 · √p T GeV, in which p T is the transverse momentum of the reconstructed track. Geometrical matching between tracks and topoclusters (or high-p T physics objects) is performed using the ΔR significance, defined as ΔR/σ ΔR, where σ ΔR is the ΔR resolution, parameterized as a function of the track p T. A track is considered to be associated to a topocluster in the soft term when its minimum ΔR/σ ΔR is less than 4. To veto tracks matched to high-p T physics objects, tracks are required to have ΔR/σ ΔR > 8. The E miss T calculated using the CST algorithm is documented in previous publications such as Ref. [1] and is the standard algorithm in most ATLAS 8 TeV analyses.
• Track Soft Term (TST)
The TST is reconstructed purely from tracks that pass the selections outlined in Sect. 3.1 and are not associated with the high-p T physics objects defined in Sect. 4.1.1. The detector coverage of the TST is the ID tracking volume (|η| < 2.5), and no calorimeter topoclusters inside or beyond this region are included. This algorithm allows excellent vertex matching for the soft term, which almost completely removes the in-time pileup dependence, but misses contributions from soft neutral particles. The track-based reconstruction also entirely removes the out-of-time pileup contributions that affect the CST.
To avoid double counting the p T of particles, the tracks matched to the high-p T physics objects need to be removed from the soft term. All of the following classes of tracks are excluded from the soft term:
– tracks within a cone of size ΔR = 0.05 around electrons and photons,
– tracks within a cone of size ΔR = 0.2 around τ had-vis,
– ID tracks associated with identified muons,
– tracks matched to jets using the ghost-association technique described in Sect. 4.1.1,
– isolated tracks with p T ≥ 120 GeV (≥ 200 GeV for |η| < 1.5) having transverse momentum uncertainties larger than 40% or having no associated calorimeter energy deposit with p T larger than 65% of the track p T.
The p T thresholds are chosen to ensure that muons not in the coverage of the MS are still included in the soft term. This last requirement is a cleaning cut to remove mismeasured tracks.
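Once the matched tracks are removed, the TST itself is just a negative vector p T sum over the remaining tracks; a sketch with a hypothetical (p T, φ) track format and a per-track matched flag standing in for the criteria listed above:

```python
import math

def track_soft_term(tracks, matched):
    """Track soft term: negative vector pT sum of selected ID tracks not
    matched to any high-pT physics object.

    `tracks` is a list of (pt, phi) pairs; `matched` is a parallel list of
    booleans marking tracks removed by the exclusion criteria.
    Returns the (x, y) components of the soft term in GeV.
    """
    sx = -sum(pt * math.cos(phi)
              for (pt, phi), m in zip(tracks, matched) if not m)
    sy = -sum(pt * math.sin(phi)
              for (pt, phi), m in zip(tracks, matched) if not m)
    return sx, sy

# The 30 GeV track is matched to a muon and excluded; only the soft
# 2 GeV track contributes to the soft term.
print(track_soft_term([(30.0, 0.0), (2.0, 1.5)], [True, False]))
```

Because every contributing track can be required to come from the hard-scatter vertex, this sum is largely insensitive to in-time pileup, at the cost of missing neutral particles.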
A deterioration of the CST E miss T resolution is observed as the average number of pileup interactions increases [1]. All E miss T terms in Eq. (2) are affected by pileup, but the terms which are most affected are the jet term and CST, because their constituents are spread over larger regions in the calorimeters than those of the E miss T hard terms. Methods to suppress pileup are therefore needed, which can restore the E miss T resolution to values similar to those observed in the absence of pileup.
The TST algorithm is very stable with respect to pileup but does not include neutral particles. Two other pileup-suppressing algorithms were developed, which consider contributions from neutral particles. One uses an η-dependent event-by-event estimator for the transverse momentum density from pileup, using calorimeter information, while the other applies an event-by-event global correction based on the amount of charged-particle p T from the hard-scatter vertex, relative to all other pp collisions. The definitions of these two soft-term algorithms are described in the following: • Extrapolated Jet Area with Filter (EJAF) The jet-area method for the pileup subtraction uses a soft term based on the idea of jet-area corrections [45]. This technique uses direct event-by-event measurements of the energy flow throughout the entire ATLAS detector to estimate the p T density of pileup energy deposits and was developed from the strategy applied to jets as described in Ref. [4]. The topoclusters belonging to the soft term are used for jet finding with the k t algorithm [48,49] with distance parameter R = 0.6 and jet p T > 0. The catchment areas [45,46] for these reconstructed jets are labelled A jet ; this provides a measure of the jet's susceptibility to contamination from pileup. Jets with p T < 20 GeV are referred to as soft-term jets, and the p T -density of each soft-term jet i is then measured by computing ρ i = p T,i / A jet,i . In a given event, the median p T -density ρ med evt for all soft-term k t jets in the event (N jets ) found within a given range −η max < η jet < η max can be calculated as ρ med evt = median{ρ i }. This median p T -density ρ med evt gives a good estimate of the in-time pileup activity in each detector region. If determined with η max = 2, it is found to also be an appropriate indicator of out-of-time pileup contributions [45].
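The median p T -density estimate can be sketched in a few lines; the input format is hypothetical (in practice the jet areas come from the k t clustering itself):

```python
import statistics

def rho_median(soft_term_jets, eta_max=2.0):
    """Event-by-event median pT density of soft-term kt jets.

    soft_term_jets: list of (pt [GeV], area, eta) tuples.
    For each soft-term jet with |eta| < eta_max, rho_i = pT_i / A_i;
    the event estimate is the median of the rho_i.
    """
    densities = [pt / area for pt, area, eta in soft_term_jets
                 if abs(eta) < eta_max]
    return statistics.median(densities) if densities else 0.0
```

The median (rather than the mean) makes the estimate robust against the few hard jets that survive in the soft-term collection.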
A lower value of ρ med evt is obtained when jets with |η jet | larger than 2 are used, which is mostly due to the particular geometry of the ATLAS calorimeters and their cluster reconstruction algorithms. 6 In order to extrapolate ρ med evt into the forward regions of the detector, the average topocluster p T in slices of η, N PV , and μ is converted to an average p T density ρ(η, N PV , μ) for the soft term. As for ρ med evt , ρ(η, N PV , μ) is found to be uniform in the central region of the detector, |η| < η plateau = 1.8. The transverse momentum density profile P ρ (η, N PV , μ) is then computed by normalizing ρ(η, N PV , μ) to its value in the central region; it is therefore 1, by definition, for |η| < η plateau and decreases for larger |η|.
A functional form of P ρ (η, N PV , μ) is used to parameterize its dependence on η, N PV , and μ: the central region |η| < η plateau = 1.8 is plateaued at 1, and a pair of Gaussian functions G core (|η| − η plateau ) and G base (η) is added for the fit in the forward regions of the calorimeter. The value of G core (0) = 1, so that Eq. (9) is continuous at |η| = η plateau . Two example fits are shown in Fig. 1 for N PV = 3 and 8 with μ = 7.5–9.5 interactions per bunch crossing. For both distributions the value is defined to be unity in the central region (|η| < η plateau ), and the sum of two Gaussian functions provides a good description of the change in the amount of in-time pileup beyond η plateau . The baseline Gaussian function G base (η) has a larger width and is used to describe the larger amount of in-time pileup in the forward region, as seen in Fig. 1. Fitting with Eq. (9) provides a parameterized function for in-time and out-of-time pileup which is valid for the whole 2012 dataset. The soft term for the EJAF E miss T algorithm is calculated by summing the transverse momenta, labelled p jet,corr x(y),i , of the corrected soft-term jets matched to the primary vertex. The number of these filtered jets, which are selected after the pileup correction based on their JVF and p T , is labelled N filter-jet . More details of the jet selection and the application of the pileup correction to the jets are given in Appendix A.
Fig. 1 The average transverse momentum density shape P ρ (η, N PV , μ) for jets in data is compared to the model in Eq. (9) with μ = 7.5–9.5 and with a three reconstructed vertices and b eight reconstructed vertices. The increase of jet activity in the forward regions coming from more in-time pileup with N PV = 8 in b can be seen by the flatter shape of the Gaussian fit of the forward activity
• Soft-Term Vertex-Fraction (STVF) This algorithm utilizes an event-level parameter computed from the ID track information, which can be reliably matched to the hard-scatter collision, to suppress pileup effects in the CST. This correction is applied as a multiplicative factor (α STVF ) to the CST, event by event, and the resulting STVF-corrected CST is simply referred to as STVF. The α STVF is calculated as the scalar sum of the p T of tracks matched to the PV divided by the total scalar sum of track p T in the event, including pileup. The sums are taken over the tracks that do not match high-p T physics objects belonging to the hard term. The mean α STVF value is shown versus the number of reconstructed vertices (N PV ) in Fig. 2. Data and simulation (including Z , diboson, tt, and tW samples) are shown with only statistical uncertainties and agree within 4–7% across the full range of N PV in the 8 TeV dataset. The differences mostly arise from the modelling of the amount of underlying-event activity and of p Z T . The 0-jet and inclusive samples have similar values of α STVF , with that for the inclusive sample being around 2% larger.
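The α STVF factor is a simple ratio of scalar p T sums; a minimal sketch (function name hypothetical, tracks matched to high-p T objects assumed already removed from both inputs):

```python
def alpha_stvf(track_pts_pv, track_pts_all):
    """STVF scale factor: scalar-sum pT of soft tracks matched to the
    hard-scatter vertex (PV) divided by the scalar-sum pT of all soft
    tracks in the event, including pileup. Inputs are lists of track pT
    values in GeV; tracks matched to high-pT objects are excluded upstream.
    """
    total = sum(track_pts_all)
    return sum(track_pts_pv) / total if total > 0 else 1.0
```

In a busy event the denominator grows with pileup while the numerator does not, so α STVF falls with N PV, which is the behaviour shown in Fig. 2.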

Jet p T threshold and JVF selection
The TST, STVF, and EJAF E miss T algorithms complement the pileup reduction in the soft term with additional requirements on the jets entering the E miss T hard term, which are also aimed at reducing pileup dependence. These E miss T reconstruction algorithms apply a requirement of JVF > 0.25 to jets with p T < 50 GeV and |η| < 2.4 in order to suppress those originating from pileup interactions. The maximum |η| value is lowered to 2.4 to ensure that the core of each jet is within the tracking volume (|η| < 2.5) [4]. Charged particles from jets below the p T threshold are considered in the soft terms for the STVF, TST, and EJAF (see Sect. 4.1.2 for details).
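The jet requirement above amounts to a simple predicate; a sketch with a hypothetical function name:

```python
def keep_jet_for_met(jet_pt, jet_eta, jvf):
    """Jet selection for the TST/STVF/EJAF hard term: jets with
    pT < 50 GeV inside the tracking acceptance (|eta| < 2.4) must have
    JVF > 0.25 to suppress pileup jets; all other jets are kept.
    jet_pt in GeV."""
    if jet_pt < 50.0 and abs(jet_eta) < 2.4:
        return jvf > 0.25
    return True
```

Jets failing the predicate are not simply discarded: their charged particles can still enter the soft term, as described in the text.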
The same JVF requirements are not applied to the CST E miss T because its soft term includes the soft recoil from all interactions, so removing jets not associated with the hard-scatter interaction could create an imbalance. The procedure for choosing the jet p T and JVF criteria is summarized in Sect. 7.
Throughout most of this paper the number of jets is computed without a JVF requirement so that the E miss T algorithms are compared on the same subset of events. However, the JVF > 0.25 requirement is applied in jet counting when 1-jet and ≥ 2-jet samples are studied using the TST E miss T reconstruction, as in Figs. 8 and 22. The JVF requirement removes pileup jets that obscure trends in samples with different jet multiplicities.

Track E miss T
Extending the philosophy of the TST definition to the full event, the E miss T is reconstructed from tracks alone, reducing the pileup contamination that afflicts the other object-based algorithms. While a purely track-based E miss T , designated Track E miss T , has almost no pileup dependence, it is insensitive to neutral particles, which do not form tracks in the ID. This can degrade the E miss T calibration, especially in event topologies with numerous or highly energetic jets. The η coverage of the Track E miss T is also limited to the ID acceptance of |η| < 2.5, which is substantially smaller than the calorimeter coverage, which extends to |η| = 4.9.
Track E miss T is calculated by taking the negative vectorial sum of p T of tracks satisfying the same quality criteria as the TST tracks. Similar to the TST, tracks with poor momentum resolution or without corresponding calorimeter deposits are removed. Because of Bremsstrahlung within the ID, the electron p T is determined more precisely by the calorimeter than by the ID. Therefore, the Track E miss T algorithm uses the electron p T measurement in the calorimeter and removes tracks overlapping its shower. Calorimeter deposits from photons are not added because they cannot be reliably associated to particular pp interactions. For muons, the ID track p T is used and not the fits combining the ID and MS p T . For events without any reconstructed jets, the Track and TST E miss T would have similar values, but differences could still originate from muon track measurements as well as reconstructed photons or calorimeter deposits from τ had-vis , which are only included in the TST.
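The negative vectorial sum at the heart of the Track E miss T can be sketched directly (a simplified stand-alone example; real tracks carry more information than a (p T, φ) pair):

```python
import math

def track_met(tracks):
    """Track E_T^miss sketch: negative vectorial sum of track pT.

    tracks: list of (pt [GeV], phi) pairs for tracks passing the
    quality and cleaning selections. Returns (Ex, Ey, MET) in GeV.
    """
    ex = -sum(pt * math.cos(phi) for pt, phi in tracks)
    ey = -sum(pt * math.sin(phi) for pt, phi in tracks)
    return ex, ey, math.hypot(ex, ey)
```

Two back-to-back tracks of equal p T balance exactly, giving a vanishing Track E miss T, while an unbalanced event yields a missing-momentum vector opposite to the net track momentum.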
The soft term for the Track E miss T is defined to be identical to the TST by excluding tracks associated with the high-p T physics objects used in Eq. (2).

Comparison of E miss T distributions in data and MC simulation
In this section, basic E miss T distributions before and after pileup suppression in Z → ℓℓ and W → ℓν data events are compared to the distributions from the MC signal plus relevant background samples. All distributions in this section include the dominant systematic uncertainties on the high-p T objects, the E miss,soft T (described in Sect. 8), and the pileup modelling [7]. These are the largest systematic uncertainties in the E miss T for Z and W samples.

Modelling of Z → ℓℓ events
The CST, EJAF, TST, STVF, and Track E miss T distributions for Z → μμ data and simulation are shown in Fig. 3. The distributions for Z → μμ data are observed to be compatible with the sum of expected signal and background contributions, namely tt and the summed diboson (V V ) processes including W W , W Z , and Z Z , which all have high-p T neutrinos in their final states. Instrumental effects can show up in the tails of the E miss T , but such effects are small.
The E miss T φ distribution is not shown in this paper but is very uniform, with a difference of less than 4 parts in a thousand between positive and negative φ. Thus the φ-asymmetry is greatly reduced from that observed in Ref. [1].
The increase in systematic uncertainties in the range 50–120 GeV in Fig. 3 comes from the tail of the E miss T distribution for the simulated Z → μμ events. The increased width in the uncertainty band is asymmetric because many systematic uncertainties increase the E miss T tail in Z → μμ events by creating an imbalance in the transverse momentum. The largest of these systematic uncertainties are those associated with the jet energy resolution, the jet energy scale, and pileup. The pileup systematic uncertainties mostly affect the CST and EJAF E miss T , while the jet energy scale uncertainty mostly affects the algorithms that use reconstructed jets; the Track E miss T does not have the same increase in systematic uncertainties because it does not make use of reconstructed jets. Above 120 GeV, most events have a large intrinsic E miss T , and the systematic uncertainties on the E miss T , especially the soft term, are smaller. Figure 4 shows the soft-term distributions. The pileup-suppressed E miss T algorithms generally have a smaller mean soft term as well as a sharper peak near zero compared to the CST. Among the E miss T algorithms, the soft term from the EJAF algorithm shows the smallest change relative to the CST. The TST has a sharp peak near zero similar to the STVF but with a longer tail, which mostly comes from individual tracks. These tracks are possibly mismeasured and further studies are planned. The simulation under-predicts the TST relative to the observed data between 60 and 85 GeV, and the differences exceed the assigned systematic uncertainties. This region corresponds to the transition from the narrow core to the tail coming from high-p T tracks. The differences between data and simulation could be due to mismodelling of the rate of mismeasured tracks, for which no systematic uncertainty is applied. The mismeasured-track cleaning, as discussed in Sect. 4.1.2, reduces the TST tail starting at 120 GeV, and this region is modelled within the assigned uncertainties.
The mismeasured-track cleaning for tracks below 120 GeV entering the TST is not optimal, and future studies aim to improve it. The E miss T resolution is expected to be proportional to √ E T when both quantities are measured with the calorimeter alone [1]. While this proportionality does not hold for tracks, it is nevertheless interesting to understand the modelling of E T and the dependence of the E miss T resolution on it. Figure 5 shows the E T distribution for Z → μμ data and MC simulation for both the TST and the CST algorithms. The E T is typically larger for the CST algorithm than for the TST because the former includes energy deposits from pileup as well as neutral particles and forward contributions beyond the ID volume. The reduction of pileup contributions in the soft and jet terms leads to the E T (TST) having a sharper peak at around 100 GeV followed by a long tail, due to high-p T muons and high-p T jets. The data and simulation agree within the uncertainties for the E T (CST) and E T (TST) distributions.

Modelling of W → ℓν events
In this section, the selection requirements for the m T and E miss T distributions are defined using the same E miss T algorithm as the one shown in each distribution.
The previous ATLAS E miss T performance paper [1] studied the resolution defined by the width of Gaussian fits in a narrow range of ±2 RMS around the mean and used a separate study to investigate the tails. Therefore, the results of this paper are not directly comparable to those of the previous study. The resolutions presented in this paper are expected to be larger than the width of a Gaussian fitted in this manner because the RMS takes the tails into account.
In this section, the resolution for the E miss T is presented for Z → μμ events using both data and MC simulation. Unless it is a simulation-only figure (labelled with "Simulation" under the ATLAS label), the MC distribution includes the signal sample (e.g. Z → μμ) as well as diboson, tt, and t W samples. For the 0-jet sample in Fig. 7a, the STVF, TST, and Track E miss T resolutions all have a small slope with respect to N PV , which implies stability of the resolution against pileup. In addition, their resolutions agree within 1 GeV throughout the N PV range. In the 0-jet sample, the TST and Track E miss T are both primarily reconstructed from tracks; however, small differences arise mostly from accounting for photons in the TST E miss T reconstruction algorithm. The CST E miss T is directly affected by the pileup as its reconstruction does not apply any pileup suppression techniques. Therefore, the CST E miss T has the largest dependence on N PV , with a resolution ranging from 7 GeV at N PV = 2 to around 23 GeV at N PV = 25. The E miss T resolution of the EJAF distribution, while better than that of the CST E miss T , is not as good as that of the other pileup-suppressing algorithms.
For the inclusive sample in Fig. 7b, the resolution increases with each additional jet by an amount much larger than any dependence on N PV . The inclusive distribution also has a larger slope with respect to N PV than the individual jet categories, which indicates that the behaviour seen in the inclusive sample is driven by an increased number of pileup jets included in the E miss T calculation at larger N PV .

Resolution of the E miss T as a function of E T
The resolutions of E miss T , resulting from the different reconstruction algorithms, are compared as a function of the scalar sum of transverse momentum in the event, as calculated using Eq. (4). The CST E miss T resolution is observed to depend linearly on the square root of the E T computed with the CST E miss T components in Ref. [1]. However, the E T used in this subsection is calculated with the TST E miss T algorithm. This allows studies of the resolution as a function of the momenta of particles from the selected PV without including the amount of pileup activity in the event. Figure 9 shows the resolution as a function of E T (TST) for Z → μμ data and MC simulation in the 0-jet and inclusive samples.
In the 0-jet sample shown in Fig. 9a, the use of tracking information in the soft term, especially for the STVF, TST, and Track E miss T , greatly improves the resolution relative to the CST E miss T . The EJAF E miss T has a better resolution than that of the CST E miss T but does not perform as well as the other reconstruction algorithms. All of the resolution curves increase approximately linearly with E T (TST); however, the Track E miss T resolution increases sharply starting at E T (TST) = 200 GeV due to missed neutral contributions such as photons. The resolution predicted by the simulation is about 5% larger than in data for all E miss T algorithms at E T (TST) = 50 GeV, but the agreement improves as E T (TST) increases, until around E T (TST) = 200 GeV. Events with jets can end up in the 0-jet event selection, for example, if a jet is misidentified as a hadronically decaying τ-lepton. The p τ T increases with E T (TST), and the rate of jets misreconstructed as hadronically decaying τ-leptons is not well modelled by the simulation, which leads to a larger E miss T resolution at high E T (TST) in simulation than that observed in the data. The Track E miss T can be more strongly affected by misidentified jets because neutral particles from the high-p T jets are not included.
For the inclusive sample in Fig. 9b, the pileup-suppressed E miss T distributions have better resolution than the CST E miss T for E T (TST) < 200 GeV, but these events are mostly those with no associated jets.
The balance of E miss T against the vector boson p T in W/Z+jets events is used to evaluate the E miss T response. A lack of balance is a global indicator of biases in E miss T reconstruction and implies a systematic misestimation of at least one of the E miss T terms, possibly coming from an imperfect selection or calibration of the reconstructed physics objects. The procedure to evaluate the response differs between Z+jets events (Sect. 6.2.1) and W+jets events (Sect. 6.2.2) because of the high-p T neutrino in the leptonic decay of the W boson.

Measuring E miss T recoil versus p Z T
In events with Z → μμ decays, the p T of the Z boson defines an axis in the transverse plane of the ATLAS detector, and for events with 0 jets, the E miss T should balance the p T of the Z boson ( p Z T ) along this axis. 7 Comparing the response in events with and without jets allows distinction between the jet and soft-term responses. The component of the E miss T along the p Z T axis is sensitive to biases in detector responses [50]. The unit vector of p Z T is labelled Â Z and is defined as Â Z = ( p T + + p T − )/| p T + + p T − |, where p T + and p T − are the transverse momentum vectors of the leptons from the Z boson decay.
7 As defined in Sect. 4.1.3, the CST E miss T does not apply a JVF requirement on the jets like the TST, EJAF, and STVF E miss T . However, large E jets T tends to come from hard-scatter jets and not from pileup.
The recoil of the Z boson is measured by removing the Z boson decay products from the E miss T and is computed as R = E miss T + p Z T . Since the E miss T includes a negative vector sum over the lepton momenta, the addition of p Z T removes its contribution. With an ideal detector and E miss T reconstruction algorithm, Z → ℓℓ events have no E miss T , and R balances with p Z T exactly. For the real detector and E miss T reconstruction algorithm, the degree of balance is measured by projecting the recoil onto Â Z , and the relative recoil is defined as the projection R · Â Z divided by p Z T , which gives a dimensionless estimate that is unity if the E miss T is ideally reconstructed and calibrated. Figure 10 shows the mean relative recoil versus p Z T for Z → μμ events, where the average value is indicated by angle brackets. The data and MC simulation agree within around 10% for all E miss T algorithms for all p Z T ; however, the agreement is a few percent worse for p Z T > 50 GeV in the 0-jet sample.
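The recoil construction and projection above reduce to a few lines of vector algebra; a minimal sketch with hypothetical inputs (all in GeV):

```python
import math

def relative_recoil(met_x, met_y, pZ_x, pZ_y):
    """Relative recoil in Z -> ll events.

    R = E_T^miss + p_T^Z removes the lepton contribution from E_T^miss;
    the relative recoil is the projection (R . A_Z) / |p_T^Z|, where A_Z
    is the unit vector along p_T^Z. Unity corresponds to an ideally
    reconstructed and calibrated E_T^miss.
    """
    pZ = math.hypot(pZ_x, pZ_y)
    rx, ry = met_x + pZ_x, met_y + pZ_y      # recoil R
    return (rx * pZ_x + ry * pZ_y) / pZ**2   # (R . A_Z) / pZ
```

With a perfectly measured event (E miss T = 0) the function returns exactly 1; an underestimated soft term pulls the value below 1, as seen for the data in Fig. 10.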
The Z → μμ events in the 0-jet sample in Fig. 10a have a relative recoil significantly lower than unity ( ⟨R · Â Z⟩/ p Z T < 1) throughout the p Z T range. In the 0-jet sample, the STVF E miss T shows the largest bias for p Z T < 60 GeV. The STVF algorithm scales the recoil down globally by the factor α STVF as defined in Eq. (11), and this correction decreases the already underestimated soft term. The α STVF does increase with p Z T , going from 0.06 at p Z T = 0 GeV to around 0.15 at p Z T = 50 GeV, and this results in a rise in the recoil, which approaches that of the TST E miss T near p Z T ∼ 70 GeV. In Fig. 10b, the inclusive Z → μμ events have a significantly underestimated relative recoil for p Z T < 40 GeV. The balance between R and p Z T improves with p Z T because of an increase in events having high-p T calibrated jets recoiling against the Z boson. The presence of jets included in the hard term also reduces the sensitivity to the soft term, which is difficult to measure accurately. The difficulty in isolating soft-term effects from those of high-p T physics objects is one reason why the soft term is not corrected. As with the 0-jet sample, the CST E miss T has a significantly under-calibrated relative recoil in the low-p Z T region, and all of the other E miss T algorithms have a lower relative recoil than the CST E miss T . Of the pileup-suppressing E miss T algorithms, the TST E miss T is closest to the relative recoil of the CST E miss T . The relative recoil of the Track E miss T is significantly lower than unity because the neutral particles recoiling from the Z boson are not included in its reconstruction. Finally, the STVF E miss T shows the lowest relative recoil among the object-based E miss T algorithms, as discussed above for Fig. 10a, even lower than the Track E miss T for p Z T < 16 GeV.

Measuring E miss T response in simulated W → ℓν events
For simulated events with intrinsic E miss T , the response is studied by looking at the relative mismeasurement of the reconstructed E miss T . This is referred to here as the "linearity", and is a measure of how consistent the reconstructed E miss T is with the E miss,True T . For 40 < E miss,True T < 60 GeV, the on-shell W boson must have nonzero p T , which typically comes from its recoil against jets. However, no reconstructed or generator-level jets are found in this 0-jet sample. Therefore, most of the events with 40 < E miss,True T < 60 GeV have jets below the 20 GeV threshold contributing to the soft term, and the soft term is not calibrated. The E miss T direction is studied using the difference in azimuthal angle between the reconstructed and true E miss T , Δφ = φ(E miss T ) − φ(E miss,True T ), which has a mean value of zero. The RMS of the distribution is taken as the resolution, which is labelled RMS(Δφ).
No selection on the E miss T or m T is applied in order to avoid biases. The RMS(Δφ) is shown as a function of E miss,True T in Fig. 12a for the 0-jet sample in W → μν simulation; the angular resolution generally improves as the E miss,True T increases.
The E miss T significance is a metric defined to quantify how likely it is that a given event contains intrinsic E miss T and is computed by dividing the measured E miss T by an estimate of its uncertainty. Using 7 TeV data, it was shown that the CST E miss T resolution follows an approximately stochastic behaviour as a function of E T , computed with the CST components, and is described by σ(E miss T ) = a √ E T , where σ(E miss T ) is the CST E miss T resolution [1]. The typical value of a in the 8 TeV dataset is around 0.97 GeV 1/2 for the CST E miss T . The E miss T significance distributions are shown in Fig. 14 in Z → μμ data and MC simulation. The data and MC simulation agree within the assigned uncertainties for both algorithms. The CST E miss T significance distribution in Fig. 14a has a very narrow core for the Z → μμ process, with 97% of data events below 1. The significance can be used to distinguish events with intrinsic E miss T (e.g. tt and dibosons) from those with fake E miss T (e.g. poorly measured Z → μμ events with a large number of jets).
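The significance construction amounts to dividing the measured E miss T by its expected stochastic resolution; a minimal sketch (values in GeV, with the coefficient a quoted in the text for the CST):

```python
import math

def met_significance(met, sum_et, a=0.97):
    """E_T^miss significance sketch: measured E_T^miss divided by the
    expected resolution a * sqrt(Sum E_T), with a ~ 0.97 GeV^(1/2) for
    the CST in the 8 TeV dataset. Inputs in GeV."""
    return met / (a * math.sqrt(sum_et))
```

An event whose E miss T roughly equals the expected resolution yields a significance near 1, while genuine intrinsic E miss T (e.g. from a neutrino) pushes the value well above 1.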
The TST E miss T is shown as an example of a pileup-suppressing algorithm. The E T is not always an accurate reflection of the resolution when there are significant contributions from the tracking resolution, as discussed in Sect. 5.1. In particular, the performance of the TST reconstruction algorithm is determined by the tracking resolution, which is generally more precise than the calorimeter energy measurements because of the reduced pileup dependence, especially for charged particles with lower p T . Neutral particles are not included in the E T for the Track E miss T and TST algorithms, but they do affect the resolution. In addition, a very small number of tracks have strongly over-estimated momentum measurements due to multiple scattering or other effects in the detector, and the momentum uncertainties of these tracks are not appropriately accounted for in the E T methodology.
In Fig. 16, selection efficiencies are shown as a function of the E miss T threshold requirement for various simulated physics processes defined in Sect. 3.4 with no lepton, jet, or m T threshold requirements. The physics object and event selection criteria are not applied in order to show the selection efficiency resulting from the E miss T threshold requirement alone, without biases in the event topology from the ATLAS detector acceptance for leptons or jets. Only the efficiencies for the CST and TST E miss T distributions are compared, for brevity.
In Fig. 16a, the efficiencies with the TST E miss T selection are shown. Comparing the physics processes while imposing a moderate E miss T threshold requirement of ∼100 GeV results in a selection efficiency of 60% for an ATLAS search for gluino-pair production [52], which is labelled as "SUSY". The VBF H → τ τ and tt events are also selected with high efficiencies of 14 and 20%, respectively. With the 100 GeV E miss T threshold the selection efficiencies for these processes are more than an order of magnitude higher than those for leptonically decaying W bosons and more than two orders of magnitude higher than for Z boson events.
The Z → ee events have a lower selection efficiency (around 20 times lower at E miss T = 100 GeV) than the Z → μμ events. This is due to the muon spectrometer coverage, which is limited to |η| < 2.7, whereas the calorimeter covers |η| < 4.9. Muons behave as minimum-ionizing particles in the ATLAS calorimeters, so they are not included in the E miss T outside the muon spectrometer acceptance. The electrons, on the other hand, are measured by the forward calorimeters. The electron and muon decay modes of the W boson have almost identical selection efficiencies at E miss T = 100 GeV because there is E miss,True T from the neutrino. However, the selection efficiency is around a factor of four higher for W → μν than for W → eν at E miss T = 350 GeV. Over the entire E miss T spectrum, the differences between the electron and muon final states for W bosons are smaller than those for Z bosons because there is a neutrino in W → ℓν events as opposed to none in the Z → ℓℓ final state.
In Fig. 16b, the selection efficiencies for CST E miss T threshold requirements are divided by those obtained using the TST E miss T . The selection efficiencies resulting from CST E miss T thresholds for SUSY, tt, and VBF H → τ τ are within 10% of the efficiencies obtained using the TST E miss T . For E miss T thresholds from 40 to 120 GeV, the selection efficiencies for W and Z boson events are higher by 60–160% for the CST E miss T than for the TST E miss T , which comes from pileup contributions broadening the CST E miss T distribution. The Z → μμ and Z → ee events, which have no E miss,True T , show an even larger increase, with 2.6 times as many Z → ee events passing an E miss T threshold of 50 GeV. The increase is not as large for Z → μμ as for Z → ee events because neither E miss T algorithm accounts for forward muons (|η| > 2.7), as discussed above. Moving to a higher E miss T threshold, mismeasured tracks in the TST algorithm cause it to select more Z → ee events with 120 < E miss T < 230 GeV. In addition, the CST E miss T also includes electron energy contributions ( p T < 20 GeV) in the forward calorimeters (|η| > 3.1) that the TST does not.
The CST and TST E miss T distributions agree within 10% in selection efficiency for E miss T > 250 GeV for all physics processes shown. This demonstrates a strong correlation between the two algorithms for events with large E miss T .
The tracking and the calorimeters provide almost completely independent estimates of the E miss T . These two measurements complement each other, and the E miss T algorithms discussed in this paper combine that information in different ways. The distribution of the TST E miss T versus the CST E miss T is shown for the simulated 0-jet Z → μμ sample in Fig. 17. This figure shows the correlation of fake E miss T between the two algorithms, which originates from many sources including incorrect vertex association and miscalibration of high-p T physics objects.
Vector correlation coefficients [53], shown in Table 5, are used to estimate the correlation between the E miss T distributions resulting from different reconstruction algorithms. The value of a vector correlation coefficient ranges from 0 to 2, with 0 being the least correlated and 2 being the most correlated. The coefficients shown are obtained using the simulated 0-jet and inclusive Z → μμ MC samples. The least-correlated E miss T distributions are the CST and Track E miss T , which use mostly independent momentum measurements in their reconstructions. The correlations of the other E miss T distributions to the CST E miss T decrease as more tracking information is used to suppress the pileup dependence of the soft term, with the TST E miss T distribution having the second-smallest vector correlation coefficient with respect to the CST E miss T distribution. Placing requirements on a combination of E miss T distributions, or requiring the difference in azimuthal direction between two E miss T vectors to be small, can therefore be used to suppress events with fake E miss T .
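The coefficient of Ref. [53] can be sketched as follows, assuming the canonical-correlation form tr(Σ11⁻¹ Σ12 Σ22⁻¹ Σ21) for pairs of 2D vectors, which has the 0-to-2 range quoted in the text (this is an assumption about the estimator, not a reproduction of the ATLAS code):

```python
import numpy as np

def vector_correlation(vx1, vy1, vx2, vy2):
    """Vector correlation coefficient between two samples of 2D vectors
    (e.g. the (Ex, Ey) components of two E_T^miss algorithms).

    Builds the 4x4 covariance of (x1, y1, x2, y2) and returns
    tr(S11^-1 S12 S22^-1 S21), which is 0 for uncorrelated vectors
    and 2 for fully correlated ones.
    """
    cov = np.cov(np.vstack([vx1, vy1, vx2, vy2]))
    s11, s12 = cov[:2, :2], cov[:2, 2:]
    s21, s22 = cov[2:, :2], cov[2:, 2:]
    return float(np.trace(np.linalg.inv(s11) @ s12 @ np.linalg.inv(s22) @ s21))
```

Feeding the same vector sample in as both arguments returns the maximal value of 2, matching the "most correlated" end of the scale.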

Jet p T threshold and vertex association selection
Jets can originate from pileup interactions, so tracks matched to the jets are extrapolated back to the beamline to ascertain whether they are consistent with originating from the hard scatter or a pileup collision. The JVF defined in Sect. 4.1.1 is used to separate pileup jets and jets from the hard scatter. The STVF, EJAF, and TST E miss T algorithms improve their jet identification by removing jets associated with pileup vertices or jets that have a large degradation in momentum resolution due to pileup activity. Energy contributions from jets not associated with the hard-scatter vertex are included in the soft term. For the TST, this means that charged particles from jets not associated with the hard-scatter vertex may then enter the soft term if their position along the beamline is consistent with the z-position of the hard-scatter vertex.
Applying a JVF cut is a trade-off between removing jets from pileup interactions and losing jets from the hard scatter.
Therefore, several values of the JVF selection criterion are considered in Z → ℓℓ events with jets having p T > 20 GeV; their impact on the E miss T resolution and scale is investigated in Fig. 18. Larger JVF thresholds on jets reduce the pileup dependence of the E miss T resolution, but they simultaneously worsen the E miss T scale, so the best compromise for the value of the JVF threshold is chosen. Requiring JVF > 0.25 greatly improves the stability of the E miss T resolution with respect to pileup by reducing the dependence of the E miss T resolution on the number of reconstructed vertices, as shown in Fig. 18a. The E miss T in Z → ℓℓ events ideally has a magnitude of zero, apart from relatively infrequent neutrino contributions in jets, so its magnitude should be consistently zero along any direction. The p Z T remains unchanged for different JVF requirements, which makes its direction a useful reference to check the calibration of the E miss T . The deviation from zero of the average value of the reconstructed E miss T along p Z T increases as tighter JVF selections are applied, as shown in Fig. 18b. Requiring a JVF threshold higher than 0.25 only slightly improves the stability of the resolution with respect to pileup, whereas it visibly degrades the E miss T response by removing too many hard-scatter jets. Lastly, pileup jets with p T > 50 GeV are very rare [4], so applying the JVF requirement above this p T threshold is not useful. Therefore, requiring JVF to be larger than 0.25 for jets with p T < 50 GeV within the tracking volume (|η| < 2.4) is the preferred selection for the E miss T reconstruction. In addition, the p T threshold, which defines the boundary between the jet and soft terms, is optimized. For these studies, jets with p T > 20 GeV and |η| < 2.4 are required to have JVF > 0.25. A procedure similar to that used for the JVF optimization is applied to the jet p T threshold, using the same two metrics, as shown in Fig. 19.
While applying a higher $p_{\text{T}}$ threshold improves the $E_{\text{T}}^{\text{miss}}$ resolution as a function of the number of pileup vertices by decreasing the slope, the $E_{\text{T}}^{\text{miss}}$ becomes strongly biased in the direction opposite to $p_{\text{T}}^{Z}$. Therefore, the $p_{\text{T}}$ threshold of 20 GeV is preferred.
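The jet selection described above can be sketched as follows; this is a minimal illustration, and the dict-based jet representation is an assumption, not the ATLAS event data model:

```python
def select_jets_for_met(jets):
    """Select jets entering the E_T^miss jet term.

    Each jet is a dict with 'pt' (GeV), 'eta' and 'jvf'.
    Jets below the 20 GeV threshold fall into the soft term;
    jets with pt < 50 GeV inside the tracking volume (|eta| < 2.4)
    must additionally pass JVF > 0.25 to suppress pileup jets.
    """
    selected = []
    for jet in jets:
        if jet["pt"] <= 20.0:
            continue  # belongs to the soft term, not the jet term
        if jet["pt"] < 50.0 and abs(jet["eta"]) < 2.4 and jet["jvf"] <= 0.25:
            continue  # likely a pileup jet: reject
        selected.append(jet)
    return selected
```

Note that jets outside the tracking volume or above 50 GeV are kept regardless of JVF, since no vertex association is possible (or needed) there.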

Systematic uncertainties of the soft term
The $E_{\text{T}}^{\text{miss}}$ is reconstructed from the vector sum of several terms corresponding to different types of contributions from reconstructed physics objects, as defined in Eq. (2). The estimated uncertainties in the energy scale and momentum resolution for the electrons [14], muons [13], jets [44], $\tau_{\text{had-vis}}$ [47], and photons [14] are propagated into the $E_{\text{T}}^{\text{miss}}$. This section describes the estimation of the systematic uncertainties for the $E_{\text{T}}^{\text{miss}}$ soft term. These uncertainties take into account the impact of the generator and underlying-event modelling used by the ATLAS Collaboration, as well as effects from pileup.
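As a minimal numerical sketch of this vector sum (the grouping of objects into terms and the $(p_x, p_y)$ representation are illustrative, not the ATLAS software interface):

```python
import math

def met_from_terms(object_groups):
    """Compute E_T^miss as the negative vector sum of all contributing
    terms (electrons, photons, muons, taus, jets and the soft term).

    object_groups: iterable of lists of (px, py) pairs in GeV.
    Returns (MET_x, MET_y, MET magnitude).
    """
    mex = mey = 0.0
    for group in object_groups:
        for px, py in group:
            mex -= px
            mey -= py
    return mex, mey, math.hypot(mex, mey)
```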
The balance of the soft term with the calibrated physics objects is used to estimate the soft-term systematic uncertainties in $Z \rightarrow \mu\mu$ events, which have very little $E_{\text{T}}^{\text{miss,True}}$. The transverse momentum of the calibrated physics objects, $p_{\text{T}}^{\text{hard}}$, is defined as the vector sum of the transverse momenta of the high-$p_{\text{T}}$ physics objects,

$$\vec{p}_{\text{T}}^{\text{ hard}} = \sum_{\text{hard objects}} \vec{p}_{\text{T}}.$$

It defines an axis (with unit vector $\hat{p}_{\text{T}}^{\text{hard}}$) in the transverse plane of the ATLAS detector along which the $E_{\text{T}}^{\text{miss}}$ soft term is expected to balance $p_{\text{T}}^{\text{hard}}$ in $Z \rightarrow \mu\mu$ events. This balance is sensitive to differences in the calibration and reconstruction of the $E_{\text{T}}^{\text{miss,soft}}$ between data and MC simulation, and thus to the uncertainty in the soft term. This discussion is similar to the one in Sect. 6.2; however, here the soft term is compared to the hard term rather than comparing the $E_{\text{T}}^{\text{miss}}$ to the recoil of the $Z$.
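A sketch of the $p_{\text{T}}^{\text{hard}}$ construction and its unit vector; the representation of objects as $(p_x, p_y)$ pairs is illustrative:

```python
import math

def hard_term(objects):
    """Vector sum of the transverse momenta of the high-pT objects,
    p_T^hard, plus the unit vector defining the balance axis.

    objects: list of (px, py) pairs in GeV.
    Returns ((px, py), (ux, uy)) with (ux, uy) the unit vector.
    """
    px = sum(o[0] for o in objects)
    py = sum(o[1] for o in objects)
    mag = math.hypot(px, py)
    if mag == 0.0:
        raise ValueError("p_T^hard vanishes; balance axis undefined")
    return (px, py), (px / mag, py / mag)
```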

Methodology for CST
Two sets of systematic uncertainties are considered for the CST; the same approach is used for the STVF and EJAF algorithms to evaluate their soft-term systematic uncertainties. The first approach decomposes the systematic uncertainties into the longitudinal and transverse components along the direction of $p_{\text{T}}^{\text{hard}}$, whereas the second approach estimates the global scale and resolution uncertainties. While both methods were recommended for analyses of the 8 TeV dataset, the first method, described in Sect. 8.1.1, gives smaller uncertainties. Therefore, the second method, discussed in Sect. 8.1.2, is now treated as a cross-check.
Both methods consider a subset of $Z \rightarrow \mu\mu$ events that do not have any jets with $p_{\text{T}} > 20$ GeV and $|\eta| < 4.5$. Such an event topology is optimal for estimating the soft-term systematic uncertainties because only the muons and the soft term contribute to the $E_{\text{T}}^{\text{miss}}$. In principle the methods are valid in event topologies with any jet multiplicity, but the $Z \rightarrow \mu\mu$ + ≥1-jet events are more susceptible to jet-related systematic uncertainties.

Evaluation of balance between the soft term and the hard term
The primary or "balance" method exploits the momentum balance in the transverse plane between the soft and hard terms in $Z \rightarrow \mu\mu$ events, and the level of disagreement between data and simulation is assigned as a systematic uncertainty.
The $E_{\text{T}}^{\text{miss,soft}}$ is decomposed along the $\hat{p}_{\text{T}}^{\text{hard}}$ direction into a longitudinal component, $E_{\parallel}^{\text{miss,soft}}$; the component along the direction orthogonal to $\hat{p}_{\text{T}}^{\text{hard}}$ is referred to as the perpendicular component, $E_{\perp}^{\text{miss,soft}}$. The longitudinal component is sensitive to scale and resolution differences between the data and simulation, because the soft term should balance $p_{\text{T}}^{\text{hard}}$, as shown in Fig. 20. The mean increases linearly with $p_{\text{T}}^{\text{hard}}$, because the soft term is not calibrated to the correct energy scale. On the other hand, the width is relatively independent of $p_{\text{T}}^{\text{hard}}$, because the width mostly comes from pileup contributions.
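The decomposition of the soft term along and perpendicular to the $\hat{p}_{\text{T}}^{\text{hard}}$ axis can be sketched as (function and inputs are illustrative):

```python
def decompose_soft_term(soft, hard_unit):
    """Project the soft term onto the p_T^hard axis.

    soft      : (px, py) of the soft term in GeV
    hard_unit : unit vector along p_T^hard
    Returns (longitudinal, perpendicular) components: the longitudinal
    part is sensitive to the soft-term energy scale, the perpendicular
    part mainly to its resolution.
    """
    ux, uy = hard_unit
    longitudinal = soft[0] * ux + soft[1] * uy
    perpendicular = -soft[0] * uy + soft[1] * ux
    return longitudinal, perpendicular
```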
The small discrepancies in mean and width between data and simulation are taken as the systematic uncertainties in the scale and resolution, respectively. A small dependence of the scale and resolution uncertainties on the average number of collisions per bunch crossing is observed at high $p_{\text{T}}^{\text{hard}}$, so the uncertainties are computed in three ranges of pileup and three ranges of $p_{\text{T}}^{\text{hard}}$. The scale uncertainty varies from −0.4 to 0.3 GeV depending on the bin, which reduces the uncertainties from the 5% shown in Fig. 20 for $p_{\text{T}}^{\text{hard}} > 10$ GeV. A small difference between the resolution uncertainties along the longitudinal and perpendicular directions is observed, so they are considered separately. The average uncertainty is about 2.1% (1.8%) for the longitudinal (perpendicular) direction.

Cross-check method for the CST systematic uncertainties
As a cross-check of the method used to estimate the CST uncertainties, the sample of $Z \rightarrow \mu\mu$ + 0-jet events is also used to evaluate the level of agreement between data and simulation. The projection of the $E_{\text{T}}^{\text{miss}}$ onto $\hat{p}_{\text{T}}^{\text{hard}}$ provides a test for potential biases in the $E_{\text{T}}^{\text{miss}}$ scale. The systematic uncertainty in the soft-term scale is estimated by comparing the ratio of data to MC simulation for $\vec{E}_{\text{T}}^{\text{miss}} \cdot \hat{p}_{\text{T}}^{\text{hard}}$ versus $\sum E_{\text{T}}$(CST), as shown in Fig. 21a. The average deviation from unity in the ratio of data to MC simulation is about 8%, which is taken as a flat uncertainty in the absolute scale. The systematic uncertainty in the soft-term resolution is estimated by evaluating the level of agreement between data and MC simulation in the $E_{x}^{\text{miss}}$ and $E_{y}^{\text{miss}}$ resolution as a function of $\sum E_{\text{T}}$(CST) (Fig. 21b). The uncertainty in the soft-term resolution is about 2.5% and is shown as the band in the data/MC ratio.
Even though the distributions appear similar, the results in this section are derived by projecting the full $E_{\text{T}}^{\text{miss}}$ onto $\hat{p}_{\text{T}}^{\text{hard}}$ in the 0-jet events, and are not directly comparable to the ones in Sect. 8.1.1.

Methodology for TST
The $E_{\text{T}}^{\text{miss,soft}}$ in data is fit with the MC simulation convolved with a Gaussian function, and the fitted Gaussian mean and width are used to extract the differences between simulation and data. The largest fit values of the Gaussian width and offset define the systematic uncertainties. For the perpendicular component, the simulation is only smeared by a Gaussian function of width $\sigma_{\perp}$ to match the data; the mean, which is set to zero in the fit, is very small in data and MC simulation because the hadronic recoil only affects $E_{\parallel}^{\text{miss,soft}}$. The fitting is done in 5 or 10 GeV bins of $p_{\text{T}}^{\text{hard}}$ from 0 to 50 GeV, and in a single bin for $p_{\text{T}}^{\text{hard}} > 50$ GeV. An example fit is shown in Fig. 22 for illustration. The 1-jet selection with the JVF requirement is used to show that the differences between data and simulation arising from the jet-related systematic uncertainties are small relative to the differences in the soft-term modelling. The impact of the jet-related systematic uncertainties is less than 0.1% in the Gaussian smearing ($\sigma$ = 1.61 GeV), indicating that the jet-related systematic uncertainties do not affect the extraction of the TST systematic uncertainties.
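The extraction of the smearing parameters from such a convolution fit can be sketched as follows. Since convolving with a Gaussian adds widths in quadrature, the required smearing width follows from the fitted data and simulation widths; the function name and inputs are illustrative:

```python
import math

def smearing_parameters(mean_data, sigma_data, mean_mc, sigma_mc):
    """Offset and Gaussian smearing width needed so that the MC,
    convolved with Gaus(offset, sigma), reproduces the data.

    Convolution adds widths in quadrature, so the required smearing
    width is sqrt(sigma_data^2 - sigma_mc^2); it is only defined when
    the data distribution is broader than the MC one.
    """
    offset = mean_data - mean_mc
    if sigma_data <= sigma_mc:
        return offset, 0.0  # MC already as broad as (or broader than) data
    return offset, math.sqrt(sigma_data**2 - sigma_mc**2)
```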
The Gaussian width squared of the $E_{\parallel}^{\text{miss,soft}}$ and $E_{\perp}^{\text{miss,soft}}$ components and the fitted mean of $E_{\parallel}^{\text{miss,soft}}$ for data and MC simulation are shown versus $p_{\text{T}}^{\text{hard}}$ in Fig. 23. The systematic uncertainty squared of the convolved Gaussian width and the systematic uncertainty of the offset for the longitudinal component are shown in the bands. While the systematic uncertainties are applied to the MC simulation, the band is shown centred around the data to demonstrate that all MC generators plus parton-shower models agree with the data within the assigned uncertainties. Similarly for $E_{\perp}^{\text{miss,soft}}$, the width of the convolved Gaussian function for the perpendicular component is shown in the band.

The Alpgen+Herwig simulation has the largest disagreement with data, so the Gaussian smearing parameters and offsets applied to this simulation are used as the systematic uncertainties in the soft term. The $p_{\text{T}}^{\text{hard}} > 50$ GeV bin has the smallest number of data entries and therefore the largest uncertainties in the fitted mean and width. In this bin of the distribution shown in Fig. 23a, the statistical uncertainty from the Alpgen+Herwig simulation, which is not the most discrepant from data in this bin, is added to the uncertainty band; this results in a systematic uncertainty band that spans the differences between MC generators for $\sigma^{2}(E_{\parallel}^{\text{miss,soft}})$ for events with $p_{\text{T}}^{\text{hard}} > 50$ GeV.

The impact of uncertainties coming from the parton-shower model, the number of jets, the pileup ($\mu$) dependence, JER/JES uncertainties, and forward versus central jet differences was evaluated. Among these, the differences between the generator and parton-shower models have the dominant effect. The total TST systematic uncertainty is summarized in Table 6.

Propagation of systematic uncertainties
The CST systematic uncertainties from the balance method defined in Sect. 8.1.1 are propagated to the $E_{\text{T}}^{\text{miss}}$ as

$$E_{x(y),\text{reso}}^{\text{miss,soft}} = E_{x(y)}^{\text{miss,soft}} + \text{Gaus}(0, \sigma_{\text{CST}}), \qquad E_{x(y),\text{scale}\pm}^{\text{miss,soft}} = E_{x(y)}^{\text{miss,soft}}\,(1 \pm \delta),$$

where $E_{x(y),\text{reso}}^{\text{miss,soft}}$ and $E_{x(y),\text{scale}\pm}^{\text{miss,soft}}$ are the values after propagating the resolution and scale uncertainties, respectively, in the $x$ ($y$) directions. Here, $\delta$ is the fractional scale uncertainty, and $\sigma_{\text{CST}}$ corrects for the differences in resolution between the data and simulation.
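A minimal sketch of such a propagation, assuming a multiplicative scale variation and additive Gaussian smearing; the function name and inputs are illustrative, not the ATLAS implementation:

```python
import random

def vary_cst_soft_term(ex, ey, delta, sigma_cst, rng=None):
    """Propagate CST soft-term scale and resolution uncertainties.

    ex, ey    : nominal soft-term components (GeV)
    delta     : fractional scale uncertainty
    sigma_cst : Gaussian smearing width (GeV) covering the
                data/simulation resolution difference
    Returns the scale-up, scale-down and resolution variations.
    """
    rng = rng or random.Random(12345)  # fixed seed for reproducibility
    scale_up = (ex * (1.0 + delta), ey * (1.0 + delta))
    scale_down = (ex * (1.0 - delta), ey * (1.0 - delta))
    reso = (ex + rng.gauss(0.0, sigma_cst),
            ey + rng.gauss(0.0, sigma_cst))
    return scale_up, scale_down, reso
```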
The systematic uncertainties in the resolution and scale of the TST $E_{\text{T}}^{\text{miss,soft}}$ are propagated to the nominal $E_{\text{T}}^{\text{miss,soft}}$ by shifting and smearing its longitudinal and perpendicular components. The symbol Gaus($\Delta_{\text{TST}}$, $\sigma_{\parallel(\perp)}$) represents a random number sampled from a Gaussian distribution with mean $\Delta_{\text{TST}}$ and width $\sigma_{\parallel(\perp)}$; the shift $\Delta_{\text{TST}}$ is zero for the perpendicular component. All of the TST systematic-uncertainty variations have a wider distribution than the nominal MC simulation when the Gaussian smearing is applied. To cover cases in which the data have a smaller resolution (narrower distribution) than the MC simulation, a downward variation is computed using Eq. (20). The yield of predicted events in this variation, $Y_{\text{down}}(X)$, for a given value $X$ of the $E_{\text{T}}^{\text{miss}}$, is defined as

$$Y_{\text{down}}(X) = \frac{Y(X)^{2}}{Y_{\text{smeared}}(X)}, \qquad (20)$$

where the square of the yield of the nominal distribution, $Y(X)$, is divided by the yield of events after applying the variation with Gaussian smearing to the kinematic variable, $Y_{\text{smeared}}(X)$. In practice, the yields are typically the contents of histogram bins before ($Y(X)$) and after ($Y_{\text{smeared}}(X)$) the systematic-uncertainty variations. This procedure can be applied to any kinematic observable by propagating only the smeared soft-term variation to the calculation of the kinematic observable $X$ and then computing the yield $Y_{\text{down}}(X)$ as defined in Eq. (20). In total, six systematic uncertainties are associated with the TST.
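The downward variation of Eq. (20) can be written bin by bin; the list-based histogram representation is an illustrative assumption:

```python
def downward_variation(nominal, smeared):
    """Bin-by-bin downward variation Y_down(X) = Y(X)^2 / Y_smeared(X).

    nominal : histogram bin contents before the smearing variation
    smeared : bin contents after applying the Gaussian-smeared soft term
    Bins with zero smeared yield are set to zero to avoid division
    by zero.
    """
    return [y * y / ys if ys > 0.0 else 0.0
            for y, ys in zip(nominal, smeared)]
```

If smearing migrates events out of a bin (smeared yield below nominal), the downward variation over-populates that bin by the same relative amount, mirroring the upward variation about the nominal.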

Closure of systematic uncertainties
The systematic uncertainties derived in this section for the CST and TST $E_{\text{T}}^{\text{miss}}$ are validated by applying them to the $Z \rightarrow \mu\mu$ sample to confirm that the differences between data and MC simulation are covered. The effects of these systematic-uncertainty variations on the CST $E_{\text{T}}^{\text{miss}}$ are shown for $Z \rightarrow \mu\mu$ events in Figs. 24 and 25 for the primary (Sect. 8.1.1) and cross-check (Sect. 8.1.2) methods, respectively. The uncertainties are larger for the cross-check method, reaching around 50% for $E_{\text{T}}^{\text{miss,soft}} > 60$ GeV in Fig. 25a. The corresponding plots for the TST $E_{\text{T}}^{\text{miss}}$ are shown in Fig. 26 using the $Z \rightarrow \mu\mu$ + 0-jet control sample, where the uncertainty band is the quadratic sum of the variations with the MC statistical uncertainty. The systematic uncertainty band for the TST in Fig. 26a is larger than the one for the primary CST algorithm. In all the distributions, the systematic uncertainties in the soft term alone cover the disagreement between data and MC simulation.

Systematic uncertainties from tracks inside jets
A separate systematic uncertainty is applied to the scalar-summed $p_{\text{T}}$ of tracks associated with high-$p_{\text{T}}$ jets in the Track $E_{\text{T}}^{\text{miss}}$, because these tracks are not included in the TST. The fraction of the momentum carried by charged particles within jets was studied in ATLAS [57], and its uncertainty varies from 3 to 5% depending on the jet $\eta$ and $p_{\text{T}}$. These uncertainties affect the azimuthal angle between the Track $E_{\text{T}}^{\text{miss}}$ and the TST $E_{\text{T}}^{\text{miss}}$, so the modelling is checked with $Z \rightarrow \mu\mu$ events produced with one jet. The azimuthal angle between the Track $E_{\text{T}}^{\text{miss}}$ and the TST $E_{\text{T}}^{\text{miss}}$ directions is well modelled, and the differences between data and MC simulation are within the systematic uncertainties.

Conclusions
Weakly interacting particles, which leave the ATLAS detector undetected, give rise to a momentum imbalance in the plane transverse to the beamline. An accurate measurement of the missing transverse momentum ($E_{\text{T}}^{\text{miss}}$) is thus important in many physics analyses to infer the momentum of these particles. However, additional interactions occurring in a given bunch crossing, as well as residual signatures from nearby bunch crossings, make it difficult to reconstruct the $E_{\text{T}}^{\text{miss}}$ from the hard-scattering process alone.

The $E_{\text{T}}^{\text{miss}}$ is computed as the negative vector sum of the reconstructed physics objects, including electrons, photons, muons, $\tau$-leptons, and jets. The remaining energy deposits not associated with those high-$p_{\text{T}}$ physics objects are also considered in the $E_{\text{T}}^{\text{miss}}$; they collectively form the so-called soft term, which is the $E_{\text{T}}^{\text{miss}}$ component most affected by pileup. The calorimeter and the tracker in the ATLAS detector provide complementary information for the reconstruction of the high-$p_{\text{T}}$ physics objects as well as the $E_{\text{T}}^{\text{miss}}$ soft term. Charged particles are matched to a particular collision point or vertex, and this information is used to determine which charged particles originated from the hard-scatter collision. Tracking information can therefore be used to greatly reduce the pileup dependence of the $E_{\text{T}}^{\text{miss}}$ reconstruction, which has motivated the development of $E_{\text{T}}^{\text{miss}}$ reconstruction algorithms that combine information from the tracker and the calorimeter. The performance of these reconstruction algorithms is evaluated using data from 8 TeV proton–proton collisions collected with the ATLAS detector at the LHC, corresponding to an integrated luminosity of 20.3 fb$^{-1}$.
The Calorimeter Soft Term (CST) is computed from the sum of calorimeter topological clusters not associated with any hard object. No distinction can be made between energy contributions from pileup and from hard-scatter interactions, which makes the resolution of the $E_{\text{T}}^{\text{miss}}$ magnitude and direction strongly dependent on the number of pileup interactions. The pileup-suppressed $E_{\text{T}}^{\text{miss}}$ definitions clearly reduce the dependence on the number of pileup interactions but also introduce a larger underestimation of the soft term than the CST.
The Track Soft Term (TST) algorithm does not use calorimeter energy deposits in the soft term and relies only on inner detector (ID) tracks. Its $E_{\text{T}}^{\text{miss}}$ resolution is stable with respect to the amount of pileup; however, its response is not as good as that of the CST $E_{\text{T}}^{\text{miss}}$, mainly because neutral particles are missing from the soft term. Nevertheless, its response is better than that of the other reconstruction algorithms that aim to combine the tracking and calorimeter information. For large values of $E_{\text{T}}^{\text{miss,True}}$, the CST and TST $E_{\text{T}}^{\text{miss}}$ algorithms all perform similarly, because contributions from jets dominate the $E_{\text{T}}^{\text{miss}}$ performance, making the differences in soft-term reconstruction less important.
The Extrapolated Jet Area with Filter (EJAF) and Soft-Term Vertex-Fraction (STVF) $E_{\text{T}}^{\text{miss}}$ reconstruction algorithms correct for pileup effects in the CST $E_{\text{T}}^{\text{miss}}$ by utilizing a combination of the ATLAS tracker and calorimeter measurements. Both apply a vertex association to the jets used in the $E_{\text{T}}^{\text{miss}}$ calculation. The EJAF soft-term reconstruction subtracts the pileup contributions to the soft term using a procedure similar to jet-area-based pileup corrections, and the EJAF $E_{\text{T}}^{\text{miss}}$ resolution has a reduced dependence on the amount of pileup relative to the CST algorithm. The STVF reconstruction algorithm applies an event-level correction to the CST, given by the scalar sum of the $p_{\text{T}}$ of charged particles from the hard-scatter vertex divided by the scalar sum of the $p_{\text{T}}$ of all charged particles. The STVF correction to the soft term greatly decreases the dependence of the $E_{\text{T}}^{\text{miss}}$ resolution on the amount of pileup but causes the largest underestimation of the soft term among all the soft-term algorithms.
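The event-level STVF correction amounts to a simple ratio of scalar sums; a sketch, with an illustrative track representation:

```python
def stvf_weight(track_pts, from_hard_scatter):
    """Event-level STVF correction factor: scalar-summed pT of
    charged particles matched to the hard-scatter vertex divided by
    the scalar-summed pT of all charged particles in the event.

    track_pts         : list of track pT values (GeV)
    from_hard_scatter : parallel list of booleans (hard-scatter match)
    """
    total = sum(track_pts)
    if total == 0.0:
        return 0.0  # no tracks: no correction can be derived
    hard = sum(pt for pt, hs in zip(track_pts, from_hard_scatter) if hs)
    return hard / total
```

The CST is then scaled by this weight, so events dominated by pileup tracks receive a strong suppression of the soft term.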
Finally, the Track $E_{\text{T}}^{\text{miss}}$ reconstruction uses only the inner detector tracks, with the exception of reconstructed electron objects, for which the calorimeter $E_{\text{T}}$ measurement is used. The resolutions of the Track $E_{\text{T}}^{\text{miss}}$ magnitude and direction are very stable against pileup, but the limited $|\eta|$ coverage of the tracker and the omission of high-$p_{\text{T}}$ neutral particles degrade the $E_{\text{T}}^{\text{miss}}$ response, especially in events with many jets.
The different $E_{\text{T}}^{\text{miss}}$ algorithms have their own advantages and disadvantages, which need to be considered in the context of each analysis. For example, removing large backgrounds with low $E_{\text{T}}^{\text{miss}}$, such as Drell–Yan events, may require the use of more than one $E_{\text{T}}^{\text{miss}}$ definition: the tails of the track-based and calorimeter-based $E_{\text{T}}^{\text{miss}}$ distributions remain uncorrelated, and exploiting both definitions in parallel allows one to suppress such backgrounds even under increasing pileup conditions. The systematic uncertainties in the $E_{\text{T}}^{\text{miss}}$ are estimated with $Z \rightarrow \mu\mu$ events for each reconstruction algorithm and are found to be small.

A. Calculation of EJAF
A jet-level $\eta$-dependent pileup correction of the form

$$\rho_{\eta}^{\text{med}}(\eta) = \rho_{\text{evt}}^{\text{med}} \cdot P_{\text{fct}}^{\rho}(\eta, N_{\text{PV}}, \langle\mu\rangle) \qquad (21)$$

is used, where $N_{\text{PV}}$ and $\langle\mu\rangle$ are determined from the event properties. This multiplies the median soft-term jet $p_{\text{T}}$-density, $\rho_{\text{evt}}^{\text{med}}$, from Eq. (7) by the functional form $P_{\text{fct}}^{\rho}(\eta, N_{\text{PV}}, \langle\mu\rangle)$ defined in Eq. (9), which was fit to the average transverse momentum density. The median transverse momentum density $\rho_{\text{evt}}^{\text{med}}$ is determined from soft-term jets with $|\eta| < 2$ and then extrapolated to higher $|\eta|$, as discussed in Sect. 4.1.2, using the fitted $P_{\text{fct}}^{\rho}(\eta, N_{\text{PV}}, \langle\mu\rangle)$. The pileup correction $\rho_{\eta}^{\text{med}}(\eta)$ from Eq. (21) is applied to the transverse momenta of the soft-term jets passing a JVF selection. The pileup-corrected jet $p_{\text{T}}$ is labelled $p_{\text{T},i}^{\text{filter-jet,corr}}$, and it is computed as

$$p_{\text{T},i}^{\text{filter-jet,corr}} = \max\!\left(0,\; p_{\text{T},i}^{\text{filter-jet}} - \rho_{\eta}^{\text{med}}(\eta_{i}) \cdot A_{i}\right), \qquad (22)$$

where $A_{i}$ is the catchment area of the jet.

While all other jets used in this paper use an $R = 0.4$ reconstruction, the larger value of $R = 0.6$ is used to reduce the number of $k_{t}$ soft-term jets with $p_{\text{T}} = 0$ (see Eq. (22)) in the central detector region. While negative energy deposits are possible in the ATLAS calorimeters, their contributions cannot be matched to the soft-term jets by ghost-association. Studies that modify the cluster-to-jet matching to include negative-$p_{\text{T}}$ clusters indicate no change in the $E_{\text{T}}^{\text{miss}}$ performance, so negative-$p_{\text{T}}$ clusters are excluded from the soft-term jets. Finally, only filter-jets with p
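Under the assumption that the correction of Eq. (22) is a jet-area subtraction clipped at zero (consistent with soft-term jets ending up with $p_{\text{T}} = 0$ after correction), it can be sketched as:

```python
def correct_filter_jet_pt(pt, area, rho_eta):
    """Jet-area pileup subtraction for a soft-term filter-jet.

    pt      : uncorrected filter-jet pT (GeV)
    area    : jet catchment area from ghost-association
    rho_eta : eta-dependent median pT density rho_med(eta) (GeV per
              unit area)
    Subtracts the expected pileup contribution and clips negative
    results to zero.
    """
    return max(0.0, pt - rho_eta * area)
```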