Differential top-antitop cross-section measurements as a function of observables constructed from final-state particles using pp collisions at $\sqrt{s}=7$ TeV in the ATLAS detector

Various differential cross-sections are measured in top-quark pair ($t\bar{t}$) events produced in proton-proton collisions at a centre-of-mass energy of $\sqrt{s} = 7$ TeV at the LHC with the ATLAS detector. The measurements are performed in a data set corresponding to an integrated luminosity of $4.6$ fb$^{-1}$. The differential cross-sections are presented in terms of kinematic variables, such as momentum, rapidity and invariant mass, of a top-quark proxy, referred to as the pseudo-top-quark, as well as of the pseudo-top-quark pair system. The dependence of the measurement on theoretical models is minimal. The measurements are performed on $t\bar{t}$ events in the lepton+jets channel, requiring exactly one charged lepton and at least four jets, with at least two of them tagged as originating from a $b$-quark. The hadronic and leptonic pseudo-top-quarks are defined via the hadronic or leptonic decay mode of the $W$ boson produced by the top-quark decay in events with a single charged lepton. Differential cross-section measurements of the pseudo-top-quark variables are compared with several Monte Carlo models that implement next-to-leading-order or leading-order multi-leg matrix-element calculations.


Introduction
The large number of top-quark pair (tt) events produced at the Large Hadron Collider (LHC) has allowed the ATLAS [1] and CMS [2] experiments to perform precise inclusive and differential top-quark related cross-section measurements. Both experiments have recently published measurements of the inclusive tt production cross-section in proton-proton (pp) collisions at centre-of-mass energies, $\sqrt{s}$, of 7 and 8 TeV [3-14], as well as differential cross-section measurements as functions of the top-quark transverse momentum ($p_T$) and rapidity ($y$), and of the mass ($m_{t\bar{t}}$) and $y$ of the tt system [15-17]. These cross-section measurements triggered recent work on Quantum Chromodynamics (QCD) calculations of heavy-quark production [18-23]. Precision measurements of tt production provide the opportunity to conduct tests of predictions based on perturbative QCD (pQCD) and to gain direct information on the gluon parton distribution function (PDF) at large momentum fractions ($x_{\rm Bj}$) of about 0.1-0.5 [24]. Differential tt measurements are particularly sensitive to the tt production mechanism in QCD in a region of parton momentum fractions at large momentum transfers. Such studies can lead to improvements in background predictions for Higgs measurements and searches for physics beyond the Standard Model (SM).
In the SM, a top-quark decays to a W boson and a b-quark with a branching fraction close to unity. Hence there are three tt signatures that correspond to different decay modes of the W bosons. The signal for this study is in the single-lepton channel. It corresponds to the case where one W boson decays directly, or via an intermediate τ decay, into an electron or muon and at least one neutrino, and the other into a pair of quarks. The neutrino(s) escape the detector unseen, leading to missing transverse momentum whose magnitude is denoted by $E_T^{\rm miss}$. This paper presents differential tt cross-section measurements using a definition where the variables are constructed from an object that is directly related to detector-level observables. This top-quark proxy object is referred to as the pseudo-top-quark ($\hat{t}$). The goal of presenting measurements using a definition where the variables are constructed from reconstructed charged-lepton, jet and missing transverse momentum objects is to allow precision tests of pQCD in final states with top-quarks, using reconstructed objects that avoid large model-dependent extrapolation corrections to the parton-level top-quark but remain strongly correlated with corresponding objects reconstructed from the partons.
In this approach, corrections applied to data depend less on Monte Carlo (MC) models that describe the hard scattering process modelled by matrix elements calculated to a given order, the emission of additional partons from the hard scattering process, parton shower effects, hadronisation and multiple-parton interactions. In particular, at low top-quark $p_T$, the modelling of soft parton emissions can significantly modify the kinematics of the top-quark pair. Observables based on stable particles can be unambiguously compared to MC generator predictions and therefore provide useful benchmarks to test and further develop MC models and to adjust free model parameters.
The $\hat{t}$ object can be evaluated for hadronic or leptonic decays of the top-quark from the detector information or analogously from the stable final-state particles generated by MC simulations. The differential cross-sections are measured as functions of the transverse momentum $p_T(\hat{t})$ and rapidity $y(\hat{t})$ of the leptonic ($\hat{t}_l$) and the hadronic ($\hat{t}_h$) pseudo-top-quark, as well as the transverse momentum $p_T(\hat{t}_l\hat{t}_h)$, rapidity $y(\hat{t}_l\hat{t}_h)$ and invariant mass $m(\hat{t}_l\hat{t}_h)$ of the reconstructed tt system ($\hat{t}_l\hat{t}_h$). This paper is structured as follows. The definition of $\hat{t}$, together with the detector-level or particle-level objects used to construct $\hat{t}$, is presented in section 2. The correlation in MC simulation between reconstructed $\hat{t}$ observables and corresponding top-quark observables at the parton level is also discussed in this section. Section 3 provides a short overview of the ATLAS detector. A description of the different MC samples in the study is found in section 4. The data and MC event selection is described in section 5 together with the reconstruction of final-state objects. Section 6 covers the treatment and evaluation of systematic uncertainties. Comparisons between data and MC simulation, for the yields and pseudo-top-quark distributions before unfolding, are presented in section 7. A description of the unfolding, the results obtained for data and various MC models, and concluding remarks are found in sections 8, 9 and 10, respectively.

Measurement definition
The model dependence of parton-level tt cross-section measurements has been an ongoing concern. The use of particle-based definitions in cross-section measurements is a standard methodology in high-energy physics [25, 26] to reduce the model dependence. Standardised tools exist to compare theory predictions for such measurements [27-29]. In tt events, such measurements were published for the inclusive cross-section [3, 4] and for differential cross-section measurements as a function of the transverse momentum and the rapidity of the final-state leptons and jets [17, 30-32]. In this paper the concept of the particle-based cross-section definition is extended to the kinematic properties of the top-quark decay products, which, when correctly combined, are closely related to the kinematic properties of the top-quark. An operational definition, $\hat{t}$, defined from measured observables as described below, has been introduced to reduce the model dependence of the measurements.¹ The differential cross-section measurements in this paper are presented in terms of the kinematics of the $\hat{t}$ object, introduced in section 1. The identification of reconstructed charged-lepton, jet and $E_T^{\rm miss}$ objects from actual or simulated detector signals is discussed in section 5.1. The identification of particle-level objects for MC events is discussed in section 2.1. The kinematic fiducial region for both the reconstructed and particle-level objects is defined in section 2.2. The algorithm used to construct leptonic or hadronic $\hat{t}$ objects from either reconstructed detector-level objects or from particle-level objects is described in section 2.3.
The measurements presented in this paper can be directly compared to MC simulations using matrix-element calculations for the hard scattering, interfaced with parton shower and hadronisation models. These are referred to as particle-level predictions. For comparisons to fixed-order pQCD calculations, corrections for the transition from partons to hadrons need to be applied.
¹ Discussions related to an operational definition have occurred between experimentalists and theorists in the Top Physics LHC working group.

Particle objects
In the case of MC simulation, objects can be identified at the particle level. Leptons and jets are defined using particles with a mean lifetime $\tau > 3 \times 10^{-11}$ s that are directly produced in pp interactions or from subsequent decays of particles with a shorter lifetime. The lepton definition only includes prompt electrons, muons and neutrinos not originating from hadron decays, as well as electrons, muons and neutrinos from tau decays. The electron and muon four-momenta are calculated after the addition of any photon four-momenta, not originating from hadron decay, within $\Delta R = \sqrt{(\Delta\phi)^2 + (\Delta\eta)^2} = 0.1$ between the photon and lepton directions.² The missing transverse momentum vector and its associated azimuthal angle are evaluated from the sum of the neutrino four-momenta, where all neutrinos from W boson and τ decays are included. Jets are defined by the anti-$k_t$ algorithm [33] with a radius parameter of 0.4. The jets include all stable particles except for the selected electrons, muons and neutrinos, and the photons associated with these electrons or muons. The presence of one or more b-hadrons with $p_T > 5$ GeV associated to a jet defines it as a b-jet. To perform the matching between b-hadrons and jets, the b-hadron energy is scaled to a negligible value and included in the jet clustering (ghost-matching) [34].
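As a concrete illustration of the dressing procedure, the sketch below adds photon four-momenta to a lepton within a $\Delta R < 0.1$ cone. This is a hypothetical minimal implementation, not the ATLAS software: particles are plain dictionaries of momentum components, and all function names are illustrative.

```python
import math

def phi(p):
    return math.atan2(p["py"], p["px"])

def eta(p):
    # pseudorapidity from the momentum components (assumes non-zero pT)
    return math.asinh(p["pz"] / math.hypot(p["px"], p["py"]))

def delta_r(a, b):
    dphi = (phi(a) - phi(b) + math.pi) % (2 * math.pi) - math.pi
    return math.hypot(eta(a) - eta(b), dphi)

def dress_lepton(lepton, prompt_photons, cone=0.1):
    """Add the four-momenta of prompt photons within the cone to the lepton."""
    dressed = dict(lepton)
    for ph in prompt_photons:
        if delta_r(lepton, ph) < cone:
            for k in ("px", "py", "pz", "E"):
                dressed[k] += ph[k]
    return dressed
```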

Kinematic range of objects
The cross-section measurement is defined in a kinematic region where the reconstructed physics objects have a high reconstruction efficiency (fiducial region). The kinematic region is chosen such that the kinematic selections applied to the physics objects reconstructed in the detector and to the particle-level objects are as close as possible. The fiducial region is defined in the same way for reconstructed physics objects and particle-level objects; however, at detector level some additional selections can be applied.
Electrons, muons and jets are required to satisfy $p_T > 25$ GeV and $|\eta| < 2.5$. The fiducial volume is defined by requiring exactly one muon or electron, four or more jets of which at least two are b-jets, $E_T^{\rm miss} > 30$ GeV and a W boson transverse mass $m_T(W) > 35$ GeV.³ Events are discarded if the electron or muon is within $\Delta R = 0.4$ of a jet, or if two jets are within $\Delta R = 0.5$ of each other.
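The fiducial requirements can be summarised as a single predicate. Below is a hedged sketch under the same simplified object model as the previous example (dictionaries with pt in GeV, eta, phi and, for jets, an is_b flag from ghost-matched b-hadrons); the $m_T(W)$ definition is the one given in footnote 3.

```python
import math
from itertools import combinations

def delta_r(a, b):
    dphi = (a["phi"] - b["phi"] + math.pi) % (2 * math.pi) - math.pi
    return math.hypot(a["eta"] - b["eta"], dphi)

def m_t_w(lep, met, met_phi):
    # mT(W) = sqrt(2 pT(l) pT(nu) (1 - cos(phi_l - phi_nu))), see footnote 3
    return math.sqrt(2 * lep["pt"] * met * (1 - math.cos(lep["phi"] - met_phi)))

def passes_fiducial(leptons, jets, met, met_phi):
    leps = [l for l in leptons if l["pt"] > 25 and abs(l["eta"]) < 2.5]
    sel = [j for j in jets if j["pt"] > 25 and abs(j["eta"]) < 2.5]
    if len(leps) != 1 or len(sel) < 4 or sum(j["is_b"] for j in sel) < 2:
        return False
    if met <= 30 or m_t_w(leps[0], met, met_phi) <= 35:
        return False
    # overlap vetoes: lepton near a jet, or two jets close to each other
    if any(delta_r(leps[0], j) < 0.4 for j in sel):
        return False
    return all(delta_r(j1, j2) >= 0.5 for j1, j2 in combinations(sel, 2))
```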

Hadronic and leptonic pseudo-top-quark definition
The definition of $\hat{t}$ as a hadronic or leptonic object is determined by the decay of the W boson. In the cross-section definition used in this paper, the two highest-$p_T$ b-jets are assumed to be the b-jets from the top-quark decay.
² ATLAS uses a right-handed coordinate system with its origin at the nominal interaction point (IP) in the centre of the detector and the z-axis along the beam pipe. The x-axis points from the IP to the centre of the LHC ring, and the y-axis points upward. Cylindrical coordinates (r, φ) are used in the transverse plane, φ being the azimuthal angle around the beam pipe. The pseudorapidity is defined in terms of the polar angle θ as η = − ln tan(θ/2).
³ The W boson transverse mass is defined as $m_T(W) = \sqrt{2 p_T^{\ell} p_T^{\nu} (1 - \cos(\phi^{\ell} - \phi^{\nu}))}$, where ℓ and ν refer to the charged lepton (e or µ) and the missing transverse momentum vector, respectively. The symbol φ denotes the azimuthal angle of the lepton or missing transverse momentum vector.
• In the case of $\hat{t}_l$, the leptonically decaying W boson is constructed from the electron or muon and the $E_T^{\rm miss}$. The b-jet with the smallest angular separation ($\Delta R$) from the electron or muon is then assigned as a decay product of $\hat{t}_l$. Using the measured W boson mass, $m_W = 80.399$ GeV [35], and the components of the missing transverse momentum vector (denoted as $p_{x,\nu}$, $p_{y,\nu}$) associated with the W boson decay neutrino, the $p_{z,\nu}$ of the neutrino can be constrained by $m_W^2 = (E_\ell + E_\nu)^2 - (\vec{p}_\ell + \vec{p}_\nu)^2$, where the subscript $\ell$ refers to the electron or muon. Neglecting the neutrino mass, this yields a quadratic equation $a\,p_{z,\nu}^2 + b\,p_{z,\nu} + c = 0$, and $p_{z,\nu}$ is taken from its solution $p_{z,\nu} = (-b \pm \sqrt{b^2 - 4ac})/(2a)$. If both solutions are real, the solution with the smaller $|p_{z,\nu}|$ is chosen. In cases where $(b^2 - 4ac)$ is less than zero, $p_{z,\nu}$ is taken as $-b/(2a)$. Given the value of $p_{z,\nu}$, $\hat{t}_l$ is formed from the combination of the charged lepton, neutrino and assigned b-jet (a code sketch of this solution is given after this list).
• In the case of $\hat{t}_h$, the hadronically decaying W boson is constructed from the remaining two highest-$p_T$ jets. The $\hat{t}_h$ is then defined from the hadronically decaying W boson candidate and the remaining b-jet.
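A minimal sketch of the $p_{z,\nu}$ solution described in the first bullet, assuming a massless lepton represented as a dictionary of px, py, pz, E (illustrative code, not the ATLAS implementation; the coefficients follow from squaring the W-mass constraint):

```python
import math

def neutrino_pz(lep, met_x, met_y, m_w=80.399):
    """Solve a*pz^2 + b*pz + c = 0 from the W-mass constraint."""
    mu = 0.5 * m_w**2 + lep["px"] * met_x + lep["py"] * met_y
    a = lep["E"]**2 - lep["pz"]**2
    b = -2.0 * mu * lep["pz"]
    c = lep["E"]**2 * (met_x**2 + met_y**2) - mu**2
    disc = b * b - 4.0 * a * c
    if disc < 0:
        return -b / (2.0 * a)               # complex solutions: take the real part
    root = math.sqrt(disc)
    s1 = (-b + root) / (2.0 * a)
    s2 = (-b - root) / (2.0 * a)
    return s1 if abs(s1) < abs(s2) else s2  # smaller |pz| solution
```

The pseudo-top-quark four-momenta are then simple four-vector sums of the assigned decay products.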
Once the leptonic and hadronic $\hat{t}$ are defined, their four-momenta can be evaluated and used in the measurement to define, for example, the $p_T$, the rapidity and the mass of the $\hat{t}$. In summary, three different definitions are used in the following sections:
• A parton-level top-quark is the MC generator-level top-quark selected before it decays but after any radiative emissions;⁴
• A particle-level pseudo-top-quark (hadronic and leptonic) is defined by stable generator-level particles within the described acceptance;
• A detector-level pseudo-top-quark (hadronic and leptonic) is evaluated with the use of physics objects reconstructed from detector measurements as discussed in section 5.1.
Figure 1. Simulated event distribution of the parton-level top-quark $p_T$ for all events (green), and the particle-level pseudo-top-quark $p_T(\hat{t}_h)$ for events within the fiducial region (blue). In both cases the top-quark that decays hadronically is chosen. The distributions are evaluated for the same event sample based on powheg+pythia at $\sqrt{s} = 7$ TeV. The upper figure is made for an arbitrary integrated luminosity. The lower figure shows the ratio of the particle-level $\hat{t}_h$ over the parton-level top-quark normalised distributions to emphasise the difference in shape between the two.
Detector-level distributions of $\hat{t}$ variables are corrected for detector efficiency and resolution effects to allow comparison with distributions of equivalent particle-level $\hat{t}$ variables (see section 8). Therefore any MC model that simulates the final-state particles from pp collisions can be compared to these data. This makes existing or future MC model comparisons possible, independent of the presence of top-quark partons in the MC event record. For comparisons of data to simulation it is not necessary to use the tt kinematic reconstruction method with the best performance, but it is important that the method is well defined and that the same definition is applied to data and simulation.

Kinematic comparison of the parton-level top-quark with the pseudo-top-quark
The pseudo-top-quark definition is chosen such that it is closely related to the top-quark parton provided by pQCD calculations. Figure 1 compares the parton-level $p_T$ distribution of the top-quark with the particle-level $p_T$ distribution of the pseudo-top-quark. The two distributions are shown for the same integrated luminosity, using the powheg+pythia Monte Carlo generator (see section 4). The number of events with a reconstructed $\hat{t}$ is much smaller than the total number of generated tt events. This is primarily due to the requirement of a single lepton and four jets, and to the pseudo-top-quark construction efficiency.
The ratio of the normalised distributions illustrates the shape difference. The larger phase space of the parton-level distribution results in a softer $p_T$ distribution when compared with the pseudo-top-quark distribution, where fiducial cuts are applied. In addition, with increasing $p_T$ the decay products tend to be more collimated due to their large boost, resulting in a reduced pseudo-top-quark reconstruction efficiency. Using the same Monte Carlo sample, figure 2(a) shows the correlation between the generated parton-level $p_T$ distribution and the particle-level hadronic pseudo-top-quark $p_T$. Despite the strong correlation, there remains a significant bin migration in the region populated by the majority of top-quarks. These migrations can be affected by a change in the modelling of tt production. This makes any measurement extrapolated to the parton level model-dependent. However, there is a much stronger correlation between the hadronic particle-level and detector-level pseudo-top-quark $p_T$ distributions, as shown in figure 2(b). Hadronic pseudo-top-quark measurements that are presented at the particle level are therefore less affected by model-dependent corrections.
Figure 2. (a) Monte Carlo study using the nominal powheg+pythia MC sample showing the correlation between the parton-level top-quark $p_T$ and the particle-level hadronic pseudo-top-quark $p_T$. (b) Monte Carlo study showing the correlation between the particle-level hadronic pseudo-top-quark $p_T$ and the hadronic pseudo-top-quark $p_T$ evaluated from reconstructed objects. In each case the correlation is normalised to all the events within a bin on the horizontal axis.
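The normalisation convention in figure 2, where each horizontal-axis bin sums to unity, corresponds to row-normalising a two-dimensional migration histogram. A toy numpy illustration with synthetic counts (not the analysis histograms):

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Toy counts[i, j]: events in horizontal-axis bin i (e.g. parton-level pT)
# that are reconstructed in bin j (e.g. pseudo-top-quark pT).
counts = rng.poisson(30, size=(8, 8)) + np.diag(rng.poisson(300, size=8))

# Normalise each horizontal-axis bin to all events in that bin (row sums = 1).
migration = counts / counts.sum(axis=1, keepdims=True)
assert np.allclose(migration.sum(axis=1), 1.0)
```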

ATLAS detector
The ATLAS detector [1] is a general-purpose detector that covers nearly the entire solid angle around one of the pp interaction points of the LHC [36]. It is composed of an inner tracking detector (ID), covering a range of |η| < 2.5, surrounded by a superconducting solenoid that provides a 2 T magnetic field, high-granularity electromagnetic (EM) and hadronic sampling calorimeters, and a muon spectrometer (MS) that incorporates a system of three air-core superconducting toroid magnets, each with eight coils. The ID comprises a silicon pixel detector, a silicon microstrip detector (SCT), and a transition radiation tracker (TRT). The EM barrel calorimeter is composed of a liquid-argon (LAr) active medium and lead absorbers. The hadronic calorimeter is constructed from steel absorber and scintillator tiles in the central pseudorapidity range of |η| < 1.7, whereas the end-cap and forward regions are instrumented with LAr calorimeters for both the electromagnetic and hadronic energy measurements up to |η| = 4.9. The MS toroid magnets are arranged with an eight-fold azimuthal coil symmetry around the calorimeters. Three layers of muon spectrometer chambers surround the toroids. High-precision drift tubes and, at small radius in the end-cap region, cathode strip chambers provide an independent momentum measurement. Resistive plate chambers in the central region and fast thin gap chambers in the end-cap region provide a muon trigger.
Data are selected from inclusive pp interactions using a three-level trigger system. A hardware-based first-level trigger is used to initially reduce the trigger rate to approximately 75 kHz. The detector readout is then available for two stages of software-based (higher-level) triggers. In the second level, partial object reconstruction is carried out to improve the selection, and at the last level, the event filter, a full online event reconstruction is made to finalise the event selection. During the 2011 run period, the selected event rate for all triggers following the event filter was approximately 300 Hz.

Monte Carlo simulation
Monte Carlo simulation samples were generated to correct the measurements for detector effects. The production of tt events is modelled using the powheg [37-40], mc@nlo [41] and alpgen [42] generators. powheg and mc@nlo use NLO matrix-element calculations interfaced with parton showers, and alpgen implements a leading-order (LO) multi-leg calculation for up to five additional partons, which are subsequently matched to parton showers [43]. A common feature of the samples, unless stated otherwise, is the generation of the underlying event and parton shower, which is performed either by pythia (v6.425) [44] or by herwig (v6.520) [45] together with jimmy (v4.31) [46]. The program tauola [47] is used for the decays of τ-leptons and photos [48] for photon radiation.
A powheg sample was generated using the version powheg-hvq p4 [39], with the ct10nlo PDF set [49] and the default factorisation and renormalisation scales set to $Q^2 = m_t^2 + p_T^2$, where $m_t$ is the top-quark mass and $p_T$ the top-quark transverse momentum as evaluated for the underlying Born configuration (i.e. before radiation). The powheg matrix-element calculation was interfaced with pythia, using the "C" variant of the Perugia 2011 tunes [50] and the corresponding cteq6l1 PDF set [51]. This sample is referred to as "powheg+pythia" and is the benchmark signal sample in this study.
An additional powheg sample was generated in order to compare the parton shower and fragmentation models. It was interfaced with herwig together with jimmy for the underlying event model using the AUET2 tune [52] ("powheg+herwig"). To evaluate the importance of the gluon PDF on the final corrected distributions, an alternative powheg sample was produced with the herapdf15nlo PDF set [53] ("powheg(herapdf)+pythia"). This PDF set is based on HERA I data together with the inclusion of the precise high-$Q^2$ preliminary HERA II data. Simulations using this PDF set are in good agreement with early W and Z boson production measurements at the LHC [54].
To assess the modelling of LO matrix-element calculations for additional partons, the alpgen generator [42] (v2.13) was used together with the cteq6l1 PDF set and the associated strong coupling constant, $\alpha_S(m_Z) = 0.129$. The produced processes correspond to the LO matrix elements for tt with up to five inclusive associated partons and to tt + bb and tt + cc states, at the default renormalisation scale and with the factorisation scale set to $Q^2 = \sum (m^2 + p_T^2)$, where the sum runs over the heavy quarks and light jets⁵ with mass $m$ and transverse momentum $p_T$. The alpgen samples were interfaced with herwig and jimmy, using the MLM parton-jet matching scheme [42] with a matching scale of 20 GeV. The exclusive heavy-flavour samples were combined with the inclusive samples after the removal of overlapping events. This sample is referred to as "alpgen+herwig".
Both the NLO matrix-element-based MC models and the LO multi-leg MC models have higher-order correction uncertainties that can be estimated in terms of initial-state radiation (ISR) and final-state radiation (FSR) variations. alpgen (v2.14) is used to generate tt samples with the cteq5l PDF set [55], the pythia parton shower and the Perugia 2011 tune. Nominal and shifted ISR/FSR samples were produced with an $\alpha_S$ corresponding to $\Lambda_{\rm QCD} = 0.26$ GeV, as used in the Perugia tune, and by modifying the renormalisation scale at each local vertex in the matrix element by a factor of 2.0 (0.5) relative to the original scale to obtain more (less) radiation. The renormalisation scale is varied by the same factor in the matrix-element calculation and pythia, where the radHi and radLo Perugia 2011 pythia tunes are used [50]. These samples are referred to as alpgen+pythia($\alpha_S$ Up) and alpgen+pythia($\alpha_S$ Down), respectively. The selected $\alpha_S$ values are found to produce variations that are similar to the uncertainty band of cross-section measurements for tt events with additional jet activity, as described in ref. [30]. The effect of colour reconnection is estimated by generating a powheg+pythia sample in which no colour reconnection is allowed within pythia, using the noCR Perugia 2011 tune [50].
⁵ Defined to be a jet comprising gluons and light quarks and subsequently used in the MLM matching scheme.
For the simulation of background processes, samples of W and Z bosons produced in association with jets were generated using alpgen (v2.13) with LO matrix elements for up to five inclusive associated partons. The cteq6l1 PDF set and the herwig parton shower were used. In addition to the inclusive jet-flavour processes, separate samples of Wbb+jets, Wcc+jets, Wc+jets and Zbb+jets matrix-element processes with up to three additional partons were generated and the overlap between them removed. The normalisation of the W+jets samples was determined from data as described in section 5.3. The Z+jets samples are normalised to the cross-section obtained from an NLO QCD calculation with mcfm [63] using the mstw2008nlo PDF set [64].
A sample of t-channel single top-quark events was generated using the acermc generator [65] (v3.8), while mc@nlo was used to generate the W t-channel and s-channel processes. Each of these samples is normalised according to an NLO+NNLL calculation for the corresponding t-channel [66], s-channel [67] and W t-channel [68] process. Diboson events (W W , W Z, ZZ) were produced using herwig and normalised to the cross-section obtained from an NLO QCD calculation with mcfm using the mstw2008nlo PDF set.
To properly simulate the LHC environment, additional inelastic pp interactions were generated with pythia using the AMBT1 tune [69] and then overlaid on the hard process. The MC events are weighted such that the distribution of the generated mean number of pp collisions per bunch crossing ($\langle\mu\rangle$) matches that of the data-taking period. The particles from additional interactions are added before the signal digitisation and reconstruction steps of the detector simulation, but are not used within the particle-level measurement defined in section 2.
The response of the detector to the generated events is determined by a full geant4 [70] simulation of the ATLAS detector [71]. This is performed for all samples except for the ISR/FSR variations, colour reconnection and powheg+herwig MC samples. For those samples a faster simulation which parameterises the ATLAS calorimeter response is used instead [71].

Data sample and event selection
The data are selected from the full 2011 data-taking period. Events are required to meet baseline data quality criteria during stable LHC running periods. These criteria reject data with significant detector noise or read-out problems and depend on the trigger conditions and the reconstruction of physics objects. The resulting data set corresponds to an integrated luminosity of 4.59 ± 0.08 fb⁻¹ [72]. During this period, the LHC delivered instantaneous luminosities that were sufficient to produce several pp interactions within the same bunch crossing (in-time pile-up). Interactions in adjacent bunch crossings also influenced the detector and readout signals in the selected bunch crossing. The mean number of in-time pile-up interactions, $\langle\mu\rangle$, was measured by averaging over all pp bunch crossings in a given luminosity block. The average value of $\langle\mu\rangle$ was approximately 5 at the beginning of the data-taking period and as high as 18 by the end of the 2011 run.

Object reconstruction
Primary vertices are formed from tracks reconstructed in the ID. The selected primary vertex is required to include at least four reconstructed tracks satisfying $p_T > 0.4$ GeV and to be consistent with the pp beam collision region in the transverse plane. In the cases where more than one primary vertex with at least four tracks is reconstructed, the vertex with the highest $\sum p_T^2$ of the associated tracks is chosen and assumed to be associated with the hard process.
Electron candidates are identified as electromagnetic energy deposits (clusters) matched to a reconstructed track in the ID [73]. Selected electrons are required to satisfy stringent identification criteria. The reconstructed tracks are required to have a minimum number of pixel and SCT hits among those expected along the electron trajectory, and at least a minimum number of high-threshold TRT hits. The longitudinal and lateral shower profiles in the calorimeter are required to match those expected for an electron, with a satisfactory match between the cluster energy and the reconstructed track momentum.
To reduce the rate of events with non-prompt and fake lepton signatures from multi-jet background processes, electrons are required to be isolated within both the calorimeter and the ID. The calorimeter isolation is defined using a cone of size $\Delta R = 0.2$ around the electron direction. The transverse energy sum of the clusters found in the cone is calculated and required to be less than 10% of the electron transverse energy, after excluding the calorimeter cells associated with the electron cluster and correcting for leakage from the electron cluster. The ID-based isolation is calculated using the summed track $p_T$ within a $\Delta R = 0.3$ cone around the electron direction and is required to be less than 10% of the electron track $p_T$. Electrons are selected by requiring $p_T > 25$ GeV in the range $|\eta| < 2.47$, excluding the barrel/end-cap transition region of $1.37 < |\eta| < 1.52$. Electrons with $p_T > 15$ GeV are used for the object overlap removal discussed later in this section and to remove events with two or more leptons as discussed in section 5.2.
Muon candidates are required to be composed of a reconstructed track in the MS combined with a track in the ID [74]. Track quality criteria are used to reduce the non-prompt and fake muon background from multi-jet processes and to select a sample with improved $p_T$ resolution. Reconstructed tracks are required to have a hit in the innermost pixel layer if expected from the track trajectory, and at least a minimum number of pixel and SCT hits, set below the number of expected hits on a muon trajectory. Muons crossing the TRT are required to have a hit pattern consistent with a well-reconstructed track.
To further reduce the fake muon background, muons are required to be isolated within the calorimeter and ID. The calorimeter isolation is determined using calorimeter energy deposits in a $\Delta R = 0.2$ cone around the direction of the muon and is required to be less than 4 GeV. The ID isolation is determined from the summed $p_T$ of tracks in a $\Delta R = 0.3$ cone around the direction of the muon and is required to be less than 2.5 GeV, excluding the $p_T$ of the muon. The muon channel event selection requires the reconstruction of one muon with $p_T > 25$ GeV associated with the selected primary vertex. Muons with $p_T > 15$ GeV are used to define an additional lepton veto discussed in section 5.2. Both types of muons are selected within $|\eta| < 2.5$.
Topological clusters [75] are formed from calorimeter energy deposits. Jets are reconstructed from these clusters with the anti-k t algorithm [33] with a radius parameter of 0.4. The jets are calibrated using the EM+JES scheme, where the jet energy scale (JES) is derived as a correction of the initial calorimeter calibration set for electromagnetic showers [76]. The jet energy is corrected for the effect of additional pp collisions in data and MC events.
To correct for energy losses due to non-compensation in the calorimeter, uninstrumented material and detector subsystems in front of the calorimeter, jet energy correction factors are applied that depend on the jet energy and the jet η, to achieve a calibration that matches the energy of stable particle jets in simulated events (excluding neutrinos and muons).
Differences between data and Monte Carlo simulation are evaluated using in situ techniques and are corrected in an additional step [77]. The in situ calibration exploits the $p_T$ balance in events with a Z boson (Z+jet) or a photon (γ+jet) and a recoiling jet, and in dijet events. Z+jet and γ+jet data are used to set the JES in the central detector region, while $p_T$ balancing in dijet events is used to achieve an η intercalibration of jets in the forward region with respect to central jets. The calibrated jets are required to satisfy $p_T > 25$ GeV and be within the range $|\eta| < 2.5$. Jets associated with large energy deposits from additional pp interactions are removed by requiring that the $p_T$ sum of the reconstructed tracks matched with both the jet and the selected primary vertex is at least 75% of the total $p_T$ sum of all tracks associated with the jet. This quantity is referred to as the jet vertex fraction (JVF). Jets satisfying $p_T > 50$ GeV are always accepted, and jets having no associated tracks are also accepted.
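The JVF requirement can be expressed compactly. A hedged sketch assuming per-track $p_T$ values and flags marking tracks from the selected primary vertex (illustrative, not the ATLAS implementation):

```python
def jet_vertex_fraction(track_pts, from_pv):
    """Summed pT of jet tracks from the selected primary vertex divided by
    the summed pT of all tracks matched to the jet; None if trackless."""
    total = sum(track_pts)
    if total == 0:
        return None
    return sum(pt for pt, ok in zip(track_pts, from_pv) if ok) / total

def passes_jvf(jet_pt, jvf):
    # jets with pT > 50 GeV or with no associated tracks are always accepted
    return jet_pt > 50 or jvf is None or jvf >= 0.75
```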
The MV1 algorithm [78] is used to select jets associated with b-hadron decays. The algorithm combines several tagging algorithms into a single neural-network-based discriminant. Jets are identified as b-jets by using an MV1 discriminant tuned to achieve a 70% tagging efficiency for jets with p T > 20 GeV in simulated tt events. The corresponding rejection factor for jets originating from gluons or light quarks is found to be approximately 130.
The $E_T^{\rm miss}$ and its associated azimuthal angle are reconstructed from the vector sum of the transverse momenta of the reconstructed objects (electrons, muons, jets), as well as the transverse energy deposited in calorimeter cells not associated with these objects, within the range $|\eta| < 4.9$. The object classification scheme for the electrons, muons and jets used to calculate $E_T^{\rm miss}$ is chosen to be the same as the object definitions used in this analysis. Calorimeter cells not associated with an object are calibrated at the electromagnetic scale before being included in the $E_T^{\rm miss}$ calculation. This calibration scheme is similar to the one described in ref. [79].
Electron and jet objects are reconstructed using separate algorithms that are run independently. Jets are reconstructed from topological clusters as described above, with no distinction made between identified electron and jet energy deposits within the electromagnetic and hadronic calorimeters. Jets associated with an electron energy deposit are discarded using angular matching. For each electron, the jet with an axis closest to the electron direction, within $\Delta R < 0.2$, is discarded. To remove leptons from heavy-flavour decays, the lepton is discarded if it is found to be within $\Delta R < 0.4$ of a selected jet axis.

Event selection
Data were collected by requiring either a high-$p_T$ electron trigger, based on calorimeter energy deposits, shower shape and track quality constraints, or a high-$p_T$ muon trigger that included a reconstructed track in the MS matched with a track in the ID. The electron trigger $p_T$ threshold was either 20 GeV or 22 GeV, depending on the data-taking period, whereas the muon trigger $p_T$ threshold remained at 18 GeV for the entire 2011 data-taking period.
The selected events are required to contain at least one reconstructed primary vertex. A small number of events are rejected because they include one or more jets with $p_T > 20$ GeV whose energy is identified as arising from noise in the calorimeter electronics, from non-pp collision background sources or from cosmic-ray showers. Events where an identified electron and muon share the same reconstructed track in the ID are also removed.
Events are classified in the electron (muon) channel by the presence of one electron (muon) with $p_T > 25$ GeV, no additional electron or muon with $p_T > 15$ GeV, and at least four reconstructed jets with $p_T > 25$ GeV and $|\eta| < 2.5$, of which at least two are identified as b-jets. To reduce the background contribution from non-prompt or fake leptons, $E_T^{\rm miss} > 30$ GeV and $m_T(W) > 35$ GeV are also required. To reduce the effects of jet merging and migrations within the $p_T$ ordering of the jets described in section 2, events with a pair of reconstructed jets separated by $\Delta R < 0.5$ are vetoed.

Estimation of backgrounds
The dilepton tt final states constitute the most important background in this analysis, followed by single top-quark production, W boson production in association with jets, including jets initiated by charm- and bottom-quarks (W+jets), and multi-jet production. In comparison, Z boson production in association with jets (Z+jets) and diboson production are smaller background components.
The fraction of dilepton events that remain after applying the full event selection is evaluated with a MC simulation and removed from the data. A bin-by-bin correction factor derived from the baseline powheg+pythia tt Monte Carlo simulation is used (see section 8). Contributions from single top-quark, Z + jets and diboson (W W , W Z, ZZ) production are evaluated using corresponding MC samples and theoretical cross-sections for these processes, as discussed in section 4.
The overall normalisation of the W+jets MC sample was determined in the data via a lepton charge asymmetry measurement described in ref. [15]. This method exploits the fact that the production of W bosons at the LHC is charge-asymmetric and that the ratio of the number of W⁻ to W⁺ bosons is more precisely known than the total number of W bosons [80]. Most of the other background processes result in lepton charge distributions that are symmetric. The numbers of events with positively and negatively charged leptons are measured in the data and are referred to as $N_\ell^+$ and $N_\ell^-$, respectively. A MC simulation was used to estimate the charge-asymmetric background from single top-quarks and to subtract it from the values of $N_\ell^+$ and $N_\ell^-$. The number of W+jets events was then extracted from
$$N_{W^+} + N_{W^-} = \left(\frac{r_{\rm MC} + 1}{r_{\rm MC} - 1}\right)\left(N_\ell^+ - N_\ell^-\right),$$
where $r_{\rm MC}$ is the ratio of the W⁺+jets and W⁻+jets production cross-sections determined using the W+jets MC simulation for the signal-region kinematic cuts, and $N_{W^+}$ ($N_{W^-}$) is the number of W⁺ (W⁻) events. The W+jets normalisation was determined using the event selection of this analysis, but without the b-tagging requirement. The values of $(N_{W^+} + N_{W^-})$ are independently determined for W+4-jet and W+≥5-jet events. The normalisation was separately obtained for each of the MC systematic uncertainty evaluations listed in section 6.
The normalisation of the heavy-flavour fractions within the W+jets sample was determined by measuring the number of W+2-jet events, without a b-tagging requirement and with the requirement of at least one b-tag. To make this measurement, the charge asymmetry technique described above was applied to both sets of selected events. The number of events with one or more b-tags is related to the number of events before the b-tagging requirement, the b-tagging probability and the flavour fractions in the W+jets sample. The fractions for the heavy-flavour processes (Wbb+jets, Wcc+jets and Wc+jets) are determined with the ratio Wcc/Wbb taken from simulation. The measurement used the number of events after requiring at least one b-tag, where the overall normalisation of W+jets events was fixed using the values previously determined by the charge asymmetry method. The heavy-flavour fractions are then extrapolated from the W+2-jet selection to W+≥4-jet events using the heavy-flavour fractions from the MC simulation. As a result, the Wbb+jets process is scaled up by about 30% with respect to the Monte Carlo prediction with a relative systematic uncertainty of 20%, and the Wc+jets process is scaled down by about 15% with a relative systematic uncertainty of 40%. The scale factors are determined separately for the electron and muon channels.
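The extraction formula above maps directly onto code. A one-function sketch with illustrative names; the inputs are the single-top-subtracted charged-lepton yields and the cross-section ratio taken from simulation:

```python
def w_jets_yield(n_plus, n_minus, r_mc):
    """N(W+) + N(W-) = ((r_MC + 1) / (r_MC - 1)) * (N+ - N-)."""
    return (r_mc + 1.0) / (r_mc - 1.0) * (n_plus - n_minus)
```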
Multi-jet production processes have a large cross-section and can provide a non-prompt or fake lepton signature due to interactions with detector material, electromagnetic shower fluctuations and heavy-flavour decays. In the electron channel, jets and electrons from photon conversions or heavy-flavour decays can mimic isolated electrons from W bosons. In the muon channel, the background is dominated by the decay of heavy-flavour hadrons to muons.
A matrix method [81] is used to estimate the number of background events using a second event sample for which the lepton identification criteria are relaxed and the isolation requirements removed (loose selection). The number of background events that pass the standard tight lepton selection described in section 5.2 ($N^{\rm tight}_{\rm fake}$) is then given by
$$N^{\rm tight}_{\rm fake} = \frac{\epsilon_{\rm fake}}{\epsilon_{\rm real} - \epsilon_{\rm fake}}\left(\epsilon_{\rm real}\, N^{\rm loose} - N^{\rm tight}\right),$$
where $\epsilon_{\rm real}$ and $\epsilon_{\rm fake}$ are the fractions of real and fake loose leptons that pass the tight selection, and $N^{\rm loose}$ ($N^{\rm tight}$) is the number of events with a lepton passing the loose (tight) selection. The efficiency for a real lepton to pass the tight selection ($\epsilon_{\rm real}$) is measured using a tag-and-probe technique with leptons from Z boson decays. The efficiency for a loose background event to pass the tight selection ($\epsilon_{\rm fake}$) is estimated in control regions dominated by background. Contributions from W+jets and Z+jets production are subtracted in the control regions using simulation.
In the electron channel the control region used to determine $\epsilon_{\rm fake}$ is defined by $E_T^{\rm miss} < 20$ GeV. In the muon channel two control regions with similar numbers of events are used. One control region contains events with low $m_T(W)$, while the other contains events where the selected muons have a large impact parameter. The efficiency $\epsilon_{\rm fake}$ is extracted separately from the two background-enriched samples and the average is used.
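The matrix-method estimate follows from solving the two linear equations that relate the loose and tight yields to their real and fake components. A sketch with illustrative names, not the ATLAS code:

```python
def n_fake_tight(n_loose, n_tight, eff_real, eff_fake):
    """Solve  N_loose = N_real + N_fake,
              N_tight = eff_real * N_real + eff_fake * N_fake
    for the fake contribution to the tight sample."""
    return eff_fake / (eff_real - eff_fake) * (eff_real * n_loose - n_tight)
```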

Systematic uncertainties
The systematic uncertainties due to detector effects and the modelling of signal and background are determined for each bin of the measured observables. Each systematic uncertainty is evaluated by varying the relevant source by one standard deviation about its nominal value. This effect is propagated through the event selection, unfolding and correction procedure. Deviations from the nominal case are evaluated separately for the upward and downward variations for each bin, observable and channel. The total systematic uncertainty for each bin is calculated by adding the individual systematic contributions for that bin in quadrature.
The uncertainties on the tt signal and background modelling are discussed in sections 6.1 and 6.2. The detector-related uncertainties are presented in section 6.3.

Signal modelling
The uncertainty due to the choice of MC generator to model tt events is estimated by comparing mc@nlo+herwig and alpgen+herwig, while the colour reconnection modelling uncertainty is estimated by comparing the nominal powheg+pythia sample to the associated MC tune where the colour reconnection is disabled in pythia.
The uncertainty in modelling the parton shower and hadronisation is evaluated from the relative difference between the alpgen+pythia tt MC sample and the alternative alpgen+herwig sample.
The evaluation of the uncertainty due to the choice of PDF set is obtained using the nnpdf 2.0 [82], mstw2008nlo and cteq66 [83] PDF sets. An envelope of uncertainty bands is determined using the PDF4LHC recommendations [84].
The uncertainty associated with the modelling of additional QCD radiation accompanying the tt system is calculated by comparing the alpgen+pythia sample to the ones with varied radiation settings presented in section 4. The variation is achieved by changing the renormalisation scale associated with $\alpha_S$ consistently in the hard-scattering matrix element as well as in the parton shower. The level of radiation through parton showering [85] is adjusted to encompass the ATLAS measurement of additional jet activity in tt events [30]. The uncertainty is estimated as the maximum difference between the specialised samples and the nominal sample, with the uncertainty being symmetrised.

Background modelling
The individual experimental and theoretical uncertainties are used to calculate the uncertainty on the size of background contributions determined by MC simulation. This results in an uncertainty on the background subtraction for all backgrounds except for those from W + jets processes and from multi-jet processes resulting in a non-prompt or fake lepton signature.
In the muon channel the normalisation and shape uncertainties are determined from the difference between the two multi-jet estimates described in section 5.3. The normalisation uncertainty is determined to be 20%. The shape uncertainty is evaluated using two different linear combinations of the two multi-jet estimates. In the electron channel the normalisation uncertainty is determined from the variation of the background estimate when the efficiencies for real and fake leptons are varied within their uncertainties. The uncertainty on the efficiency for real leptons is estimated by varying the fit parameters in the tag-and-probe method using Z boson events. The uncertainty on the efficiency for fake leptons is estimated by varying the $E_T^{\rm miss}$ cut between 15 GeV and 25 GeV and by relaxing the b-tag requirement to at least one b-tag. From these variations a normalisation uncertainty of 50% is assigned.
For the background contribution due to W+jets, the overall uncertainty from the charge asymmetry normalisation method (see section 5.3) and the uncertainty on the flavour fractions are separately determined. A shape uncertainty is estimated by varying model parameters in the W+jets alpgen simulation. The total background uncertainty is evaluated by adding in quadrature each of the different background uncertainty contributions.

Experimental uncertainties
The experimental uncertainties refer to the quality of the detector simulation to describe the detector response in data for each of the reconstructed objects. These uncertainties affect the MC signal and background predictions, changing the numbers of events accepted.
The jet energy scale (JES) systematic uncertainty [77] is a major contributor to the overall systematic uncertainty in all distributions, affecting both the signal efficiency and the bin migration. In the central region of the detector ($|\eta| < 1.7$) it varies from 2.5% to 8% as a function of jet $p_T$ and $\eta$, as estimated from in situ measurements of the detector response [77]. It incorporates uncertainties from the jet energy calibration, the calorimeter response to jets, the detector simulation, and the modelling of the fragmentation and underlying event, as well as other choices in the MC generation. Additional sources of JES uncertainty are also estimated. The main contributions are the intercalibration of the detector response in the forward region with respect to the central region, effects from the correction for additional pp interactions, the jet flavour composition, the b-jet JES calibration and the presence of close-by jets. Uncertainties due to the different detector-simulation configurations used in the analysis and in the calibration are added as one additional uncertainty parameter ("relative non-closure"). The JES uncertainty is evaluated using a total of 21 individual components to model the uncertainty correlations as a function of the jet transverse momentum and rapidity.
The jet energy resolution (JER) has been found to be well modelled in simulation. In situ methods are used to measure the resolution, which MC simulation describes within 10% for jets in the p T range 30-500 GeV [86]. The jet reconstruction efficiency is also well modelled by the simulation and the uncertainty is evaluated by randomly removing simulated jets within the 1σ uncertainty of jet reconstruction efficiency measured in data [77].
The uncertainties introduced by the JVF requirement used to suppress pile-up jets are estimated using events with a leptonic Z boson decay and an associated high-p T jet. The efficiency to select jets from the hard-scatter and the contamination by jets produced by pile-up interactions are measured in appropriate control regions and the agreement between data and MC simulation is evaluated.
The uncertainties related to the MC modelling of the lepton trigger, reconstruction and identification efficiency are evaluated by comparing high-purity events featuring leptons in data and simulation. These include Z → ee, Z → µµ and W → eν events in data and simulation, while tt events are also included in the simulation studies [73]. Similar studies are also performed for the lepton energy and momentum scales and resolutions. Since the two channels require different lepton flavours, the electron uncertainty affects only the electron channel, and similarly for the muon channel. The electron uncertainty is approximately double the muon uncertainty. In both cases the uncertainty is small, with little variation between bins. The uncertainty on $E_T^{\rm miss}$ is determined by propagating all the uncertainties associated with the energy scales and resolutions for leptons and jets to the calculation of $E_T^{\rm miss}$. Two additional uncertainties that are included originate from the calorimeter cells not associated with any physics objects and from the pile-up modelling.
The systematic uncertainties associated with tagging jets originating from b-quarks are separated into three categories: the efficiency of the tagging algorithm (b-quark tagging efficiency), the efficiency with which jets originating from c-quarks pass the b-tag requirement (c-quark tagging efficiency) and the rate at which light-flavour jets are tagged (misidentified tagging efficiency). The efficiencies are estimated from data and parameterised as a function of $p_T$ and η [78, 87]. The systematic uncertainties arise from factors used to correct the differences between simulation and data in each of the categories. The uncertainties in the simulation modelling of the b-tagging performance are assessed by studying b-jets in dileptonic tt events [88]. The b-tagging efficiency is another major contributor to the overall systematic uncertainty and tends to increase slightly with $p_T$. The c-quark efficiency uncertainty is approximately constant at ≈ 2%, while the misidentification uncertainty contributes at the percent level for all distributions.
The last experimental uncertainty evaluated is due to the measurement of the integrated luminosity. This is dominated by the accuracy of the beam separation scans and has an associated uncertainty of 1.8% [72] that is assigned to each bin of the distributions and the MC background predictions. With the exception of the MC and data statistics and the background modelling, all uncertainty components are correlated across the bins and for all observables.

Reconstructed yields and distributions
A summary of the number of selected data events, background contributions and total predictions is given in table 1. Dilepton tt events constitute the largest background followed by single top-quark production. The W +jets and non-prompt or fake lepton backgrounds are smaller in comparison.
The reconstructed distributions of the observables of the $\hat{t}_l$, $\hat{t}_h$ and $\hat{t}_l\hat{t}_h$ system are shown in figures 3 to 7. These include the $p_T$ and rapidity of the individual pseudo-top-quarks and the $p_T$, rapidity and mass of the $\hat{t}_l\hat{t}_h$ system for the muon and electron channels.
In each region, the expected number of events agrees with the number observed in the data. The data are shown using bin sizes that correspond to one standard deviation resolution in the hadronic variables, except in the tails of the distributions where the bin width is increased to reduce the statistical fluctuations.
Figures 3-7 caption (fragment): "Other" includes small backgrounds from diboson and Z+jets production, as well as non-prompt and fake leptons from multi-jet processes. The powheg+pythia MC generator with the Perugia 2011C tune is used for the tt signal estimate. The shaded band shows the total systematic and statistical uncertainties on the signal plus background expectation.
Unfolding
Each of the reconstructed pseudo-top-quark observable distributions is corrected for the effects of detector efficiencies and resolution. All distributions are presented within the kinematic range defined in section 2, which is close to the acceptance of the reconstructed object and event selections, such that model dependencies from regions of phase space outside of the acceptance are minimised. Section 8.1 describes the correction procedure, section 8.2 describes the propagation of the statistical and systematic uncertainties to the final distributions, and section 8.3 describes the combination of the results obtained in the electron and muon channels.

Correction procedure
The reconstructed pseudo-top-quark observable distributions are corrected as follows:
$$N^{i}_{\rm part} = f^{i}_{\rm part\to reco} \sum_{j} \left(M^{-1}_{\rm reco\to part}\right)_{ij}\, f^{j}_{\rm misassign}\, f^{j}_{\rm reco\to part} \left(N^{j}_{\rm reco} - N^{j}_{\rm bgnd}\right), \qquad (8.1)$$
where $N^{j}_{\rm reco}$ ($N^{i}_{\rm part}$) is the number of reconstructed (fully corrected) events in a given reconstructed observable bin $j$ (particle-level observable bin $i$), and $N^{j}_{\rm bgnd}$ is the number of background events estimated as explained in section 5.3. The correction factors $f^{i}_{\rm part\to reco}$, $f^{j}_{\rm misassign}$ and $f^{j}_{\rm reco\to part}$ are discussed below. Detector resolution effects on the reconstructed pseudo-top-quark observables are corrected with an iterative Bayesian unfolding procedure [89] using a response matrix $M_{\rm reco\to part}$ that describes the migration between the detector-level observable bin $j$ and the particle-level observable bin $i$. To ensure a one-to-one relationship between the particle-level and the detector-level observables, the response matrix is constructed from events where each detector-level pseudo-top-quark object can be matched to a particle-level pseudo-top-quark object. The matching is based on the angular differences between the components of the pseudo-top-quarks. Detector-level and particle-level jets (jet/jet′) and leptons (ℓ/ℓ′) are considered matched if they satisfy ∆R(jet, jet′) < 0.35 and ∆R(ℓ, ℓ′) < 0.02. This matching definition is found to be fully efficient for leptons and close to 100% efficient for jets.
The response matrix is derived from MC simulations and has diagonal elements that are larger than 0.6 for all observables. The elements in the matrix are normalised to the total number of reconstructed events determined from the sum of all the bins associated with the x-axis. The response matrix is applied using two iterations, which is found to provide convergence and avoid higher statistical uncertainties in the tails of the corrected distributions.
Events that have no matched detector-level and particle-level pair are taken into account by three factors that correct the detector-level observable distribution to the particle-level observable distribution:
• The correction for events that pass the detector-level event selection but fail the particle-level event selection ($f_{\rm reco\to part}$);
• The correction for events with a reconstructed pseudo-top-quark that has no counterpart at the particle level ($f_{\rm misassign}$);
• The correction for events that fulfil the particle-level event selection requirements but fail the reconstruction-level event selection ($f_{\rm part\to reco}$).
These correction factors are also derived from MC simulation. The correction factors $f_{\rm reco\to part}$ and $f_{\rm misassign}$ are in the range 0.65-0.70 for all observables. The correction factor $f_{\rm part\to reco}$ is dominated primarily by the detector efficiency, in particular by the b-tagging efficiency, and is in the range 5.5-8.0. The factors are found to be similar for the tt generators discussed in section 4, such that no large MC modelling dependencies are observed between the different MC samples.
The same unfolding procedure is applied to each of the distributions $p_T(\hat{t})$, $y(\hat{t})$, $y(\hat{t}_l\hat{t}_h)$, $p_T(\hat{t}_l\hat{t}_h)$ and $m(\hat{t}_l\hat{t}_h)$.
The number of background events $N_{\rm bgnd}$ and the correction factors $f_{\rm reco\to part}$ and $f_{\rm misassign}$ are functions of the detector-level pseudo-top-quark observables. The correction factor $f_{\rm part\to reco}$ is a function of the particle-level pseudo-top-quark observable $x^{i}_{\rm part}$. To evaluate the cross-section in bin $i$, it is necessary to also take account of the luminosity and bin width.
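Schematically, eq. (8.1) combined with the luminosity and bin-width normalisation could be implemented as below. This is a sketch under the conventions reconstructed above, not the analysis code: the two-iteration Bayesian unfolding is replaced by a plain matrix inversion to keep the example short, and all names are illustrative.

```python
import numpy as np

def corrected_spectrum(n_reco, n_bgnd, m_resp, f_reco_to_part,
                       f_misassign, f_part_to_reco, lumi, bin_widths):
    """Background subtraction, matching corrections, unfolding and efficiency
    correction, then normalisation to a differential cross-section."""
    matched = (np.asarray(n_reco, float) - n_bgnd) * f_reco_to_part * f_misassign
    # m_resp maps particle-level to detector-level bins, so solving the
    # linear system plays the role of the iterative Bayesian unfolding.
    unfolded = np.linalg.solve(m_resp, matched)
    n_part = unfolded * f_part_to_reco
    return n_part / (lumi * np.asarray(bin_widths))
```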

Propagation of uncertainties
With the exception of the non-tt backgrounds, each of the correction factors in eq. (8.1) is calculated using powheg+pythia events that are passed through the detector simulation. The effect of the statistical uncertainty on $M_{\rm reco\to part}$ is estimated by smearing the number of events in each element of the matrix using a Poisson probability density function. The statistical uncertainty on the correction factors ($f_{\rm reco\to part}$, $f_{\rm misassign}$ and $f_{\rm part\to reco}$) is evaluated by smearing the value in each bin using a Gaussian distribution with a width given by the statistical uncertainty in the bin. The correction factors and the response matrix are smeared simultaneously by performing 1000 pseudo-experiments and repeating the unfolding procedure for each pseudo-experiment. The statistical uncertainty for each measurement point is taken from the root-mean-square (1σ) of the spread of the unfolded distributions over the pseudo-experiments.
The statistical uncertainty on the reconstructed distributions in data is propagated to the final distributions by performing 1000 pseudo-experiments, following a Poisson distribution defined by the number of events in each bin $j$. As for the MC statistical uncertainty, each bin of $x^{j}_{\rm reco}$ is independently fluctuated. The experimental systematic uncertainty on the reconstructed distributions is evaluated by changing the values of the physics objects by their associated uncertainties. The total uncertainty on the number of reconstructed background events ($N_{\rm bgnd}$) is evaluated by summing in quadrature each of the background uncertainties discussed in section 6.
The systematic uncertainty on the unfolded spectra due to the background is evaluated by performing 1000 pseudo-experiments, following a normal distribution with a width matching the total uncertainty band. The root-mean-square of the distribution of unfolded spectra over the pseudo-experiments is taken as the uncertainty due to the background. The systematic uncertainties on $M_{\rm reco\to part}$, $f_{\rm reco\to part}$, $f_{\rm misassign}$ and $f_{\rm part\to reco}$ arising from the choice of the powheg+pythia tt MC model are each evaluated as a relative bias. For a given MC simulation set, the bias is defined as the fully corrected unfolded yield for a given bin minus the true particle-level yield for that bin. The relative bias is defined as the difference in bias between the nominal MC sample and the varied MC sample, both unfolded using the nominal MC sample. For the powheg+pythia tt MC sample, the bias is found to be consistent with zero within the statistical uncertainties of each measurement point. For each tt modelling systematic uncertainty, a pair of particle-level and detector-level spectra is generated. One thousand pseudo-experiments are used to fluctuate the reconstructed input spectrum within its statistical uncertainty. The pseudo-experiments are used to evaluate the statistical significance of the systematic variation in the output distribution. The relative bias is calculated for each pseudo-experiment.
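The pseudo-experiment propagation described above can be sketched in a few lines (illustrative; unfold stands for any callable implementing the correction of section 8.1):

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def statistical_uncertainty(n_reco, unfold, n_pseudo=1000):
    """Fluctuate each reconstructed bin with a Poisson distribution, unfold
    each pseudo-experiment and take the per-bin RMS as the uncertainty."""
    spectra = np.array([unfold(rng.poisson(n_reco)) for _ in range(n_pseudo)])
    return spectra.std(axis=0)
```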
The ISR/FSR systematic uncertainty is evaluated from the relative bias between the alpgen+pythia central and shifted ISR/FSR samples. The uncertainty on the matrix-element calculation and matching scheme (the generator uncertainty) is estimated from the relative bias of mc@nlo+herwig with respect to the alpgen+herwig tt sample.
Each of the tt modelling uncertainties is propagated individually and then symmetrised before being combined, taking the larger of the upward and downward variations.
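Written out, the symmetrisation amounts to taking, for each source and bin i (with up/down shifts defined relative to the nominal result):

\[ \delta_i^{\mathrm{sym}} = \max\left( \left| \delta_i^{\uparrow} \right|, \left| \delta_i^{\downarrow} \right| \right), \]

applied as a symmetric ±δ before the individual sources are combined.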

Combination of lepton channels
The electron and muon channel measurements of each pseudo-top-quark distribution are combined using the Best Linear Unbiased Estimate (BLUE) method [90,91]. The BLUE method determines the coefficients (weights) to be used in a linear combination of the input measurements by minimising the total uncertainty of the combined result. All uncertainties are assumed to be distributed according to a Gaussian probability density function. The algorithm takes both the statistical and systematic uncertainties and their correlations into account. The BLUE combination was cross-checked against an average performed using the algorithm discussed in ref. [92]. The results of the two methods are found to be consistent. The MC statistical uncertainties on the correction factors for the two samples are assumed to be uncorrelated. The uncertainties related to the electron and muon efficiencies are also treated as uncorrelated. All other systematic uncertainties are treated as fully correlated. In particular, the total background systematic uncertainty is assumed to be completely correlated between the electron and muon channels. The uncertainties of the combined measurement depend on the observables but tend to closely follow the muon channel uncertainties.
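As an illustration of the combination, the following sketch performs a single-bin, two-channel BLUE average with hypothetical electron and muon values; the statistical uncertainties are taken as uncorrelated and the systematic uncertainty as fully correlated, mirroring the correlation assumptions above.

import numpy as np

def blue(x, cov):
    """BLUE: weights minimising the variance of a linear, unbiased combination."""
    cinv = np.linalg.inv(cov)
    ones = np.ones_like(x)
    norm = ones @ cinv @ ones
    w = cinv @ ones / norm          # combination weights (sum to unity)
    return w @ x, np.sqrt(1.0 / norm), w

# hypothetical electron/muon cross-sections for one bin (pb)
x = np.array([5.20, 5.45])
stat = np.array([0.30, 0.22])       # uncorrelated between channels
syst = np.array([0.40, 0.35])       # fully correlated between channels
cov = np.diag(stat**2) + np.outer(syst, syst)  # correlation rho = 1 for syst
value, unc, w = blue(x, cov)
print(f"combined = {value:.2f} +- {unc:.2f}, weights = {w}")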

Results
The measurements of the differential tt cross-section corrected for detector effects are presented as functions of p T (t) and y(t) for the hadronic and leptonic reconstructions of the pseudo-top-quark, as well as of the variables y(t lth ), p T (t lth ) and m(t lth ) of the pseudo-top-quark-pair system; t refers to both the hadronic and leptonic pseudo-top-quarks. The fiducial cross-section measurements are presented within the kinematic range defined in section 2 and are evaluated as described in section 8.
The tt MC generators are of two types (for a detailed discussion see section 4): NLO matrix elements are used to describe the hard scattering pp → tt, and LO multi-leg MC generators model tt processes with up to five additional quark or gluon emissions. All tt MC samples are normalised to the NNLO+NNLL QCD inclusive cross-section. Figures 8, 9 and 10 show the corrected data for the differential variables noted above. Superimposed on the data are the expectations of the LO multi-leg MC generator alpgen (see section 4):
• alpgen interfaced with pythia (alpgen+pythia);
• alpgen interfaced with herwig (alpgen+herwig);
• alpgen interfaced with pythia but with the renormalisation scale varied by a factor of 2 (α S Up) and a factor of 0.5 (α S Down).
Both the alpgen+herwig and alpgen+pythia models indicate yields within the acceptance that are higher than the observed data. This is evident in figures 9 and 10(b), where the models predict more events over the complete rapidity range. The models also indicate a p T (t) spectrum that is harder than the one observed in data, a possible consequence of using the cteq6l1 PDF set. For the alpgen+pythia sample, the effect of increased or decreased radiation is illustrated with the renormalisation scale changed by a factor of 2.0 (α S Up) and by a factor of 0.5 (α S Down), applied consistently to both alpgen and pythia as noted in section 4. The increased radiation gives fewer events at low p T (t) and more at high p T (t), at the level of 5-10%. For the leptonic p T (t) this effect is slightly larger, as shown in figure 8(b). The p T (t lth ) distribution, shown in figure 10(a), is very sensitive to additional radiation. When the renormalisation scale factor changes from 0.5 to 2.0, 15-20% more events are observed at low p T (t lth ) and 20-30% fewer at high p T (t lth ). Increased radiation also leads to 5% fewer events at low m(t lth ) and 10% more events at high m(t lth ), as shown in figure 10(c). Nevertheless, the α S Down variation is not sufficient to restore agreement between the data and the MC simulation. The alpgen+herwig sample follows the alpgen+pythia sample with the α S Down variation.
Figures 11, 12 and 13 compare the data with expectations of the NLO MC generators powheg and mc@nlo. In particular, the following NLO variants are shown:
• powheg-hvq v.4 generator with the ct10nlo PDF set interfaced with pythia using the "C" variant of the Perugia 2011 tunes and the cteq6l1 PDF set (powheg+pythia);
• powheg-hvq v.4 generator where the ct10nlo PDF set is replaced with the herapdf15nlo PDF set, to assess the sensitivity of the distributions to changes in the gluon PDF (powheg(herapdf)+pythia);
• mc@nlo generator with the ct10nlo PDF set interfaced with herwig and jimmy with the AUET2 tune (mc@nlo+herwig).
The individual hadronic and leptonic y(t) distributions in figure 12 are well described by each of the NLO MC models. The powheg(herapdf)+pythia sample gives the prediction closest to the data, whereas the other NLO models predict a more forward distribution. The same holds for the hadronic and leptonic p T (t) distributions (see figure 11), for which the powheg(herapdf)+pythia sample lowers the cross-section at high p T (t) with respect to the nominal sample, resulting in a better description of the data. For these variables mc@nlo also gives a good description. The p T (t lth ) distribution, shown in figure 13(a), highlights the different hard-gluon emission models. The mc@nlo prediction is lower than the data at high p T (t lth ), as expected from the softer fifth-jet p T in mc@nlo compared to the other generators [31].
The pseudo-top-quark-pair variables are in general less well predicted than the individual pseudo-top-quark kinematic variables, both for LO and NLO models. All the models predict an excess in the lower m(t lth ) region, as shown in figure 13(c), implying that the description of the threshold region is inadequate. The powheg(herapdf)+pythia sample agrees well with the high-mass m(t lth ) tail, while the other samples overestimate it. This is consistent with the softer gluon component in the herapdf15nlo PDF set compared to the one in the ct10nlo PDF set. The y(t lth ) distribution is reasonably predicted by all models in the low y(t lth ) region, but only the powheg+pythia model with the herapdf15nlo PDF set provides a good overall description. The p T (t lth ) spectrum is sensitive to the extra radiation produced in the parton collision process. All models agree with the data within the systematic uncertainties. If these uncertainties can be reduced, the p T (t lth ) distribution could be used to constrain phenomenological radiation parameters in future MC tunes.
Figure 11. Differential tt cross-section after channel combination as a function of (a) the hadronic pseudo-top-quark p T (t h ) and (b) the leptonic pseudo-top-quark p T (t l ). The data points are shown with a blue band which represents the total uncertainty (statistical and systematic). The model predictions from several NLO MC generators described in the text are superimposed: powheg(ct10)+pythia, powheg(herapdf)+pythia, powheg+herwig and mc@nlo+herwig.
Figure 12. Differential tt cross-section after channel combination as a function of (a) the hadronic pseudo-top-quark rapidity y(t h ) and (b) the leptonic pseudo-top-quark rapidity y(t l ). The data points are shown with a blue band which represents the total uncertainty (statistical and systematic). The model predictions from several NLO MC generators described in the text are superimposed: powheg(ct10)+pythia, powheg(herapdf)+pythia, powheg+herwig and mc@nlo+herwig.
Figure 13. Differential tt cross-section after channel combination as a function of (a) the total leptonic and hadronic tt pseudo-top-quark variables p T (t lth ), (b) the rapidity y(t lth ) and (c) the mass m(t lth ). The data points are shown with a blue band which represents the total uncertainty (statistical and systematic). The model predictions from several NLO MC generators described in the text are superimposed: powheg(ct10)+pythia, powheg(herapdf)+pythia, powheg+herwig and mc@nlo+herwig.
Conclusions
Differential fiducial tt cross-section measurements are presented for kinematic variables of the pseudo-top-quark (t), defined at the particle level by the decay products of the W boson and the b-quark from the top-quark decay. The pseudo-top-quark approach is a new tool to probe pQCD in the top-quark sector. It is an experimental observable that is strongly correlated with the top-quark parton and is used to define differential tt cross-sections with reduced model dependence. It can also be used to assess how well MC simulations describe the tt production mechanism in proton-proton collisions and to compare various MC models.
The present measurements were performed within a kinematic range that closely matches the acceptance of the reconstructed objects and each reconstructed kinematic variable distribution is corrected (unfolded) for the effects of detector efficiency and resolution.
The differential fiducial cross-sections are measured with the ATLAS detector at the LHC for proton-proton collisions at √ s = 7 TeV and an integrated luminosity of 4.6 fb −1 , as functions of p T (t h ), y(t h ), p T (t l ), y(t l ), y(t lth ), p T (t lth ) and m(t lth ), where t l and t h refer to the leptonic and hadronic pseudo-top-quarks, respectively. The distributions provide complementary information and show some sensitivity to the selected PDF set, the parton-shower procedure and, to a lesser extent, the matrix-element and parton-shower matching scheme. The larger acceptance of the alpgen MC models results in an excess of events reconstructed within the fiducial region. The higher α S variation is disfavoured at high p T (t lth ) and m(t lth ) values. Among the several tt MC models used, the powheg(herapdf)+pythia sample provides the best representation of all the distributions, except at low m(t lth ), where many of the MC models predict a higher cross-section than the data. The mc@nlo prediction produces a recoil distribution that is too soft with respect to the data. These measurements are currently limited by the systematic uncertainty, the main components being the b-tagging uncertainty, the jet energy measurement uncertainty and the modelling uncertainty of the initial- and final-state parton showers.