Search for the Standard Model Higgs boson produced in association with top quarks and decaying into $b\bar{b}$ in pp collisions at $\sqrt{s}$ = 8 TeV with the ATLAS detector

A search for the Standard Model Higgs boson produced in association with a pair of top quarks, $t\bar{t}H$, is presented. The analysis uses 20.3 fb$^{-1}$ of pp collision data at $\sqrt{s}$ = 8 TeV, collected with the ATLAS detector at the Large Hadron Collider during 2012. The search is designed for the $H \to b\bar{b}$ decay mode and uses events containing one or two electrons or muons. In order to improve the sensitivity of the search, events are categorised according to their jet and b-tagged-jet multiplicities. A neural network is used to discriminate between signal and background events, the latter being dominated by $t\bar{t}$+jets production. In the single-lepton channel, variables calculated using a matrix element method are included as inputs to the neural network to improve the discrimination against the irreducible $t\bar{t}+b\bar{b}$ background. No significant excess of events above the background expectation is found, and an observed (expected) limit of 3.4 (2.2) times the Standard Model cross section is obtained at 95% confidence level. The ratio of the measured $t\bar{t}H$ signal cross section to the Standard Model expectation is found to be $\mu$ = 1.5 $\pm$ 1.1, assuming a Higgs boson mass of 125 GeV.


Introduction
The discovery of a new particle in the search for the Standard Model (SM) [1][2][3] Higgs boson [4][5][6][7] at the LHC was reported by the ATLAS [8] and CMS [9] collaborations in July 2012. There is by now clear evidence of this particle in the H → γγ, H → ZZ(*) → 4ℓ, H → WW(*) → ℓνℓν and H → ττ decay channels at a mass of around 125 GeV, which has strengthened the SM Higgs boson hypothesis [10][11][12][13][14][15]. To determine all properties of the new boson experimentally, it is important to study it in as many production and decay modes as possible. In particular, its coupling to heavy quarks is a strong focus of current experimental searches. SM Higgs boson production in association with a top-quark pair (ttH) [16][17][18][19], with subsequent Higgs boson decay into bottom quarks (H → bb), addresses heavy-quark couplings in both production and decay. Due to the large measured mass of the top quark, the Yukawa coupling of the top quark (y_t) is much stronger than that of the other quarks. Observation of the ttH production mode would allow a direct measurement of this coupling, to which other Higgs production modes are sensitive only through loop effects. Since y_t is expected to be close to unity, it has also been argued to be a quantity that might give insight into the scale of new physics [20].
The H → bb final state is the dominant decay mode in the SM for a Higgs boson with a mass of 125 GeV, but it has not yet been observed. While a search for this decay via the gluon-fusion process is precluded by the overwhelming multijet background, Higgs boson production in association with a vector boson (VH) [21][22][23] or a top-quark pair (ttH) significantly improves the signal-to-background ratio for this decay. This paper describes a search for the SM Higgs boson in the ttH production mode, designed to be primarily sensitive to the H → bb decay, although other Higgs boson decay modes are also treated as signal. Figure 1a, b show two examples of tree-level diagrams for ttH production with a subsequent H → bb decay. A search for the associated production of the Higgs boson with a top-quark pair using several Higgs decay modes (including H → bb) has recently been published by the CMS Collaboration [24], quoting a ratio of the measured ttH signal cross section to the SM expectation of μ = 2.8 ± 1.0 for a Higgs boson mass of 125.6 GeV.
The main source of background to this search comes from top-quark pairs produced in association with additional jets. The dominant source is tt+bb production, resulting in the same final-state signature as the signal. An example is shown in Fig. 1c. A second contribution arises from tt production in association with light-quark (u, d, s) or gluon jets, referred to as tt+light background, and from tt production in association with c-quarks, referred to as tt+cc. The size of the second contribution depends on the misidentification rate of the algorithm used to identify b-quark jets.
The search presented in this paper uses 20.3 fb⁻¹ of data collected with the ATLAS detector in pp collisions at √s = 8 TeV during 2012. The analysis focuses on final states containing one or two electrons or muons from the decay of the tt system, referred to as the single-lepton and dilepton channels, respectively. Selected events are classified into exclusive categories, referred to as "regions", according to the number of reconstructed jets and of jets identified as b-quark jets by the b-tagging algorithm (b-tagged jets, or b-jets for short). Neural networks (NN) are employed in the regions with a significant expected contribution from the ttH signal to separate it from the background. Simpler kinematic variables are used in regions that are depleted of the ttH signal, which primarily serve to constrain uncertainties on the background prediction. A combined fit to signal-rich and signal-depleted regions is performed to search for the signal while simultaneously obtaining a background prediction.

ATLAS detector
The ATLAS detector [25] consists of four main subsystems: an inner tracking system, electromagnetic and hadronic calorimeters, and a muon spectrometer. The inner detector provides tracking information from pixel and silicon microstrip detectors in the pseudorapidity¹ range |η| < 2.5 and from a straw-tube transition radiation tracker covering |η| < 2.0, all immersed in a 2 T magnetic field provided by a superconducting solenoid. The electromagnetic sampling calorimeter uses lead and liquid argon (LAr) and is divided into barrel (|η| < 1.475) and end-cap (1.375 < |η| < 3.2) regions. Hadron calorimetry employs the sampling technique, with either scintillator tiles or liquid argon as active medium and with steel, copper, or tungsten as absorber material. The calorimeters cover |η| < 4.9. The muon spectrometer measures muon tracks within |η| < 2.7 using multiple layers of high-precision tracking chambers located in a toroidal field of approximately 0.5 T and 1 T in the central and end-cap regions of ATLAS, respectively. The muon spectrometer is also instrumented with separate trigger chambers covering |η| < 2.4.

¹ ATLAS uses a right-handed coordinate system with its origin at the nominal interaction point (IP) in the centre of the detector and the z-axis coinciding with the axis of the beam pipe. The x-axis points from the IP to the centre of the LHC ring, and the y-axis points upward. Cylindrical coordinates (r, φ) are used in the transverse plane, φ being the azimuthal angle around the beam pipe. The pseudorapidity is defined in terms of the polar angle θ as η = −ln tan(θ/2). Transverse momentum and energy are defined as p_T = p sin θ and E_T = E sin θ, respectively.
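The kinematic definitions above (pseudorapidity and the transverse projections) can be illustrated with a short sketch; the helper names are hypothetical and this is plain Python, not ATLAS software:

```python
import math

def pseudorapidity(theta):
    """eta = -ln tan(theta/2), with theta the polar angle in radians."""
    return -math.log(math.tan(theta / 2.0))

def transverse(p, theta):
    """pT = p sin(theta); the same projection gives ET = E sin(theta)."""
    return p * math.sin(theta)
```

A particle emitted at 90° to the beam has η = 0 and p_T = p, while the inner-detector edge |η| = 2.5 corresponds to θ ≈ 9.4°.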

Object reconstruction
The main physics objects considered in this search are electrons, muons, jets and b-jets. Whenever possible, the same object reconstruction is used in both the single-lepton and dilepton channels, though some small differences exist and are noted below.
Electron candidates [26] are reconstructed from energy deposits (clusters) in the electromagnetic calorimeter that are matched to a reconstructed track in the inner detector. To reduce the background from non-prompt electrons, i.e. from decays of hadrons (in particular heavy flavour) produced in jets, electron candidates are required to be isolated. In the single-lepton channel, where such background is significant, an η-dependent isolation cut is made, based on the sum of transverse energies of cells around the direction of each candidate in a cone of size ΔR = √((Δφ)² + (Δη)²) = 0.2. This energy sum excludes cells associated with the electron and is corrected for leakage from the electron cluster itself. A further isolation cut is made on the scalar sum of the track p_T around the electron in a cone of size ΔR = 0.3 (referred to as p_T^cone30). The longitudinal impact parameter of the electron track with respect to the selected event primary vertex defined in Sect. 4, z_0, is required to be less than 2 mm. To increase efficiency in the dilepton channel, the electron selection is optimised using an improved electron identification method based on a likelihood variable [27] and on the electron isolation: the ratio of p_T^cone30 to the electron p_T is required to be less than 0.12, i.e. p_T^cone30 / p_T^e < 0.12. The optimised selection improves the efficiency by roughly 7% per electron.
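The isolation cones above rely on the angular distance ΔR; a minimal illustrative sketch (hypothetical helper names), including the azimuthal wrap-around that the ΔR definition requires:

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    """dR = sqrt(dphi^2 + deta^2), with dphi wrapped into (-pi, pi]."""
    dphi = (phi1 - phi2 + math.pi) % (2 * math.pi) - math.pi
    deta = eta1 - eta2
    return math.hypot(dphi, deta)

def is_isolated(track_pts_in_cone, electron_pt, max_ratio=0.12):
    """Track isolation: scalar track-pT sum in the cone below 12% of the electron pT."""
    return sum(track_pts_in_cone) < max_ratio * electron_pt
```

Without the wrap-around, two objects at φ = +3.1 and φ = −3.1 would appear far apart even though they are nearly collinear in the transverse plane.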
Muon candidates are reconstructed from track segments in the muon spectrometer, matched with tracks found in the inner detector [28]. The final muon candidates are refitted using the complete track information from both detector systems and are required to satisfy |η| < 2.5. Additionally, muons are required to be separated by ΔR > 0.4 from any selected jet (see below for details on jet reconstruction and selection). Furthermore, muons must satisfy a p_T-dependent track-based isolation requirement that performs well under conditions with a high number of jets from other pp interactions within the same bunch crossing, known as "pileup", or in boosted configurations where the muon is close to a jet: the scalar sum of track p_T in a cone of variable size ΔR < 10 GeV / p_T^μ around the muon must be less than 5% of the muon p_T. The longitudinal impact parameter of the muon track with respect to the primary vertex, z_0, is required to be less than 2 mm.

Jets are reconstructed from calibrated clusters [25,29] built from energy deposits in the calorimeters, using the anti-k_t algorithm [30–32] with a radius parameter R = 0.4. Prior to jet finding, a local cluster calibration scheme [33,34] is applied to correct the cluster energies for the effects of dead material, non-compensation and out-of-cluster leakage. The jets are calibrated to the mean energy of the stable particles inside them, using energy- and η-dependent calibration factors derived from simulation. Additional corrections accounting for differences between simulation and data are applied [35]. After energy calibration, jets are required to have p_T > 25 GeV and |η| < 2.5. To reduce the contamination from low-p_T jets due to pileup, the scalar sum of the p_T of tracks matched to the jet and originating from the primary vertex must be at least 50% of the scalar sum of the p_T of all tracks matched to the jet. This is referred to as the jet vertex fraction.
This criterion is only applied to jets with p T < 50 GeV and |η| < 2.4.
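A sketch of the jet vertex fraction and its restricted application (illustrative only; names and data layout are assumptions, not the ATLAS implementation):

```python
def jet_vertex_fraction(track_pts_from_pv, track_pts_other):
    """JVF: fraction of the jet's matched track pT that comes from the primary vertex.
    Returns -1 for a jet with no matched tracks, a common sentinel choice."""
    total = sum(track_pts_from_pv) + sum(track_pts_other)
    return sum(track_pts_from_pv) / total if total > 0 else -1.0

def passes_pileup_cut(jet_pt, jet_eta, jvf, threshold=0.5):
    """Apply the 50% JVF requirement only to jets with pT < 50 GeV and |eta| < 2.4."""
    if jet_pt >= 50.0 or abs(jet_eta) >= 2.4:
        return True
    return jvf >= threshold
```

The restricted phase space reflects the fact that high-p_T jets are rarely from pileup and that track matching is only available within the tracker acceptance.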
During jet reconstruction, no distinction is made between identified electrons and jet candidates. Therefore, if any jet lies within ΔR < 0.2 of a selected electron, the single closest jet is discarded in order to avoid double-counting of electrons as jets. After this, electrons within ΔR < 0.4 of a jet are removed to further suppress background from non-isolated electrons.
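The two-step overlap-removal logic can be sketched as follows (an illustrative implementation with hypothetical dict-based objects, not ATLAS code):

```python
import math

def delta_r(a, b):
    """dR between two objects given as {'eta': ..., 'phi': ...} dicts."""
    dphi = (a["phi"] - b["phi"] + math.pi) % (2 * math.pi) - math.pi
    return math.hypot(a["eta"] - b["eta"], dphi)

def overlap_removal(electrons, jets):
    """Two-step procedure from the text: for each electron, drop the single
    closest jet within dR < 0.2; then drop electrons within dR < 0.4 of any
    surviving jet."""
    kept_jets = list(jets)
    for el in electrons:
        close = [j for j in kept_jets if delta_r(el, j) < 0.2]
        if close:
            kept_jets.remove(min(close, key=lambda j: delta_r(el, j)))
    kept_electrons = [el for el in electrons
                      if all(delta_r(el, j) >= 0.4 for j in kept_jets)]
    return kept_electrons, kept_jets
```

Note the ordering matters: the ΔR < 0.4 electron veto is evaluated against the jets that survive the first step, so an electron is not vetoed by the very jet it was reconstructed as.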
Jets are identified as originating from the hadronisation of a b-quark via an algorithm [36] that uses multivariate techniques to combine information from the impact parameters of displaced tracks with topological properties of secondary and tertiary decay vertices reconstructed within the jet. The working point used for this search corresponds to a 70 % efficiency to tag a b-quark jet, with a light-jet mistag rate of 1 %, and a charm-jet mistag rate of 20 %, as determined for b-tagged jets with p T > 20 GeV and |η| < 2.5 in simulated tt events. Tagging efficiencies in simulation are corrected to match the results of the calibrations performed in data [37]. Studies in simulation show that these efficiencies do not depend on the number of jets.

Event selection and classification
For this search, only events collected using a single-electron or single-muon trigger under stable beam conditions and for which all detector subsystems were operational are considered. The corresponding integrated luminosity is 20.3 fb −1 . Triggers with different p T thresholds are combined in a logical OR in order to maximise the overall efficiency. The p T thresholds are 24 or 60 GeV for electrons and 24 or 36 GeV for muons. The triggers with the lower p T threshold include isolation requirements on the lepton candidate, resulting in inefficiency at high p T that is recovered by the triggers with higher p T threshold. The triggers use selection criteria looser than the final reconstruction requirements.
Events accepted by the trigger are required to have at least one reconstructed vertex with at least five associated tracks, consistent with the beam collision region in the x-y plane. If more than one such vertex is found, the vertex candidate with the largest sum of squared transverse momenta of its associated tracks is taken as the hard-scatter primary vertex.
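The hard-scatter vertex choice can be sketched as below (simplified and hypothetical; a real implementation operates on reconstructed vertex objects with beam-spot compatibility checks):

```python
def select_primary_vertex(vertices):
    """Among vertices with at least five associated tracks, take the one with
    the largest sum of squared track pT, as described in the text."""
    candidates = [v for v in vertices if len(v["track_pts"]) >= 5]
    if not candidates:
        return None
    return max(candidates, key=lambda v: sum(pt ** 2 for pt in v["track_pts"]))
```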
In the single-lepton channel, events are required to have exactly one identified electron or muon with p_T > 25 GeV and at least four jets, at least two of which are b-tagged. The selected lepton is required to match, within ΔR < 0.15, the lepton reconstructed by the trigger.
In the dilepton channel, events are required to have exactly two leptons of opposite charge and at least two b-jets. The leading and subleading leptons must have p_T > 25 GeV and p_T > 15 GeV, respectively. Events in the single-lepton sample with additional leptons passing this selection are removed from that sample to avoid statistical overlap between the channels. In the dilepton channel, events are categorised into ee, μμ and eμ samples. In the eμ category, the scalar sum of the transverse energies of the leptons and jets, H_T, is required to be above 130 GeV. In the ee and μμ categories, the invariant mass of the two leptons, m_ℓℓ, is required to be larger than 15 GeV in events with more than two b-jets, to suppress contributions from the decay of hadronic resonances such as the J/ψ and ϒ into a same-flavour lepton pair. In events with exactly two b-jets, m_ℓℓ is required to be larger than 60 GeV due to poor agreement between data and prediction at lower m_ℓℓ. A further cut is applied in the ee and μμ categories to reject events close to the Z boson mass: |m_ℓℓ − m_Z| > 8 GeV.
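The ee/μμ dilepton-mass requirements can be summarised in a small sketch (illustrative only; the Z mass value used here is approximate):

```python
M_Z = 91.2  # GeV, approximate Z boson mass

def passes_mll_cuts(channel, m_ll, n_bjets):
    """Sketch of the dilepton invariant-mass requirements described in the text."""
    if channel == "emu":
        return True                    # only the HT > 130 GeV cut applies there
    if abs(m_ll - M_Z) <= 8.0:         # Z-window veto in ee and mumu
        return False
    if n_bjets == 2:
        return m_ll > 60.0             # poor data/prediction agreement at low m_ll
    return m_ll > 15.0                 # suppress J/psi and Upsilon resonances
```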
Fig. 2 a Single-lepton channel: S/√B ratio for each of the regions, assuming SM cross sections and branching fractions and m_H = 125 GeV. Each row shows the plots for a specific jet multiplicity (4, 5, ≥6), and the columns show the b-jet multiplicity (2, 3, ≥4). Signal-rich regions are shaded in dark red, while the rest are shown in light blue. The S/B ratio for each region is also noted. b The fractional contributions of the various backgrounds to the total background prediction in each considered region. The ordering of the rows and columns is the same as in a.

After all selection requirements, the samples are dominated by tt+jets background. In both channels, selected events are categorised into different regions. In the following, a given region with m jets, n of which are b-tagged, is referred to as "(mj, nb)". The regions with a signal-to-background ratio S/B > 1% and S/√B > 0.3, where S and B denote the expected signal for a SM Higgs boson with m_H = 125 GeV and the expected background, respectively, are referred to as "signal-rich regions", as they provide most of the sensitivity to the signal. The remaining regions are referred to as "signal-depleted regions". They are almost pure background regions and are used to constrain systematic uncertainties, thus improving the background prediction in the signal-rich regions. The regions are analysed separately and combined statistically to maximise the overall sensitivity. In the most sensitive regions, (≥6j, ≥4b) in the single-lepton channel and (≥4j, ≥4b) in the dilepton channel, H → bb decays are expected to constitute about 90% of the signal contribution, as shown in Fig. 20 of Appendix A.
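The region labelling and the signal-rich criterion can be expressed compactly (an illustrative sketch with ASCII region labels):

```python
import math

def region_label(n_jets, n_bjets, max_jets=6, max_b=4):
    """Label an event as '(mj, nb)', with inclusive top bins, e.g. '(>=6j, >=4b)'."""
    j = f">={max_jets}j" if n_jets >= max_jets else f"{n_jets}j"
    b = f">={max_b}b" if n_bjets >= max_b else f"{n_bjets}b"
    return f"({j}, {b})"

def is_signal_rich(s, b):
    """Signal-rich definition from the text: S/B > 1% and S/sqrt(B) > 0.3."""
    return s / b > 0.01 and s / math.sqrt(b) > 0.3
```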
In the dilepton channel, a total of six independent regions are considered. The signal-rich regions are (≥ 4j, 3b) and (≥ 4j, ≥ 4b), while the signal-depleted regions are (2j, 2b), (3j, 2b), (3j, 3b) and (≥ 4j, 2b). Figure 2a shows the S/ √ B and S/B ratios for the different regions under consideration in the single-lepton channel based on the simulations described in Sect. 5. The expected proportions of different backgrounds in each region are shown in Fig. 2b. The same is shown in the dilepton channel in Fig. 3a, b.

Background and signal modelling
After the event selection described above, the main background in both the single-lepton and dilepton channels is tt+jets production. In the single-lepton channel, additional background contributions come from single top quark production, followed by the production of a W or Z boson in association with jets (W/Z+jets), diboson (WW, WZ, ZZ) production, and the associated production of a vector boson and a tt pair, tt+V (V = W, Z). Multijet events also contribute to the selected sample via the misidentification of a jet or a photon as an electron, or through the presence of a non-prompt electron or muon, and are referred to as the "lepton misID" background. The corresponding yield is estimated with a data-driven method known as the "matrix method" [38]. In the dilepton channel, backgrounds containing at least two prompt leptons other than tt+jets production arise from Z+jets, diboson and Wt-channel single top quark production, as well as from the tt+V processes. There are also several processes that may contain either non-prompt leptons passing the lepton isolation requirements or jets misidentified as leptons. These include W+jets, tt production with a single prompt lepton in the final state, and single top quark production in the t- and s-channels. Their yield is estimated using simulation and cross-checked with a data-driven technique based on the selection of a same-sign lepton pair. In both channels, the contribution of the misidentified-lepton background is negligible after requiring two b-tagged jets.

In the following, the simulation of each background and of the signal is described in detail. For all MC samples, the top quark mass is taken to be m t = 172.5 GeV and the Higgs boson mass is taken to be m H = 125 GeV.
The tt+jets sample is generated inclusively, but events are categorised according to the flavour of the partons matched to particle jets that do not originate from the decay of the tt system. The matching uses a ΔR < 0.4 requirement. Particle jets are reconstructed by clustering stable particles, excluding muons and neutrinos, using the anti-k_t algorithm with a radius parameter R = 0.4, and are required to have p_T > 15 GeV and |η| < 2.5.
Events where at least one such particle jet is matched to a bottom-flavoured hadron are labelled as tt+bb events. Similarly, events not already categorised as tt+bb in which at least one particle jet is matched to a charm-flavoured hadron are labelled as tt+cc events. Only hadrons not associated with b- and c-quarks from top quark and W boson decays are considered. Events labelled as either tt+bb or tt+cc are generically referred to as tt+HF events (HF for "heavy flavour"). The remaining events are labelled as tt+light-jet events, including those with no additional jets.
Since Powheg+Pythia only models tt+bb via the parton shower, an alternative tt+jets sample is generated with the Madgraph5 1.5.11 LO generator [52] using the CT10 PDF set and interfaced to Pythia 6.425 for showering and hadronisation. It includes tree-level diagrams with up to three extra partons (including b-and c-quarks) and uses settings similar to those in Ref. [24]. To avoid double-counting of partonic configurations generated by both the matrix element calculation and the parton-shower evolution, a parton-jet matching scheme ("MLM matching") [53] is employed.
Fully matched NLO predictions with massive b-quarks have recently become available [54] within the Sherpa with OpenLoops framework [55,56], referred to in the following as SherpaOL. The SherpaOL NLO sample is generated in the four-flavour scheme using the Sherpa 2.0 pre-release and the CT10 PDF set. The renormalisation scale is set to $\mu_R = \left(\prod_{i=t,\bar{t},b,\bar{b}} E_{T,i}\right)^{1/4}$, where $E_{T,i}$ is the transverse energy of parton $i$, and the factorisation and resummation scales are both set to $(E_{T,t} + E_{T,\bar{t}})/2$.
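As a numerical illustration of these scale choices (the parton transverse energies below are hypothetical, chosen only to show the arithmetic):

```python
# Hypothetical parton transverse energies in GeV.
et = {"t": 180.0, "tbar": 170.0, "b": 60.0, "bbar": 50.0}

# Renormalisation scale: fourth root of the product of the four ET values
# (the geometric mean of the parton transverse energies).
mu_r = (et["t"] * et["tbar"] * et["b"] * et["bbar"]) ** 0.25

# Factorisation and resummation scales: half the summed top-quark ET.
mu_f = (et["t"] + et["tbar"]) / 2.0
```

The geometric mean pulls μ_R down towards the softer b-quark scales, whereas μ_F tracks only the hard top-quark system.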

Fig. 4 Relative contributions of different categories of tt+bb events in the Powheg+Pythia, Madgraph+Pythia and SherpaOL samples. The labels "tt+MPI" and "tt+FSR" refer to events where heavy flavour is produced via multiparton interactions (MPI) or final-state radiation (FSR), respectively. These contributions are not included in the SherpaOL calculation. An arrow indicates that a point is off-scale. Uncertainties are due to the limited MC sample sizes.

For the purpose of comparisons between tt+jets event generators and the propagation of systematic uncertainties related to the modelling of tt+HF, as described in Sect. 8.3.1, a finer categorisation of tt+HF topologies is made. In particular, the following categories are considered: if two particle jets are each matched to an extra b-quark (c-quark), the event is referred to as tt+bb (tt+cc); if a single particle jet is matched to a single b-quark (c-quark), the event is referred to as tt+b (tt+c); and if a single particle jet is matched to a bb (cc) pair, the event is referred to as tt+B (tt+C). Figure 4 shows the relative contributions of the different tt+bb event categories to the total tt+bb cross section at generator level for the Powheg+Pythia, Madgraph+Pythia and SherpaOL samples. It demonstrates that Powheg+Pythia reproduces reasonably well the tt+HF content of both the Madgraph tt+jets sample, which includes a LO tt+bb matrix element calculation, and the NLO SherpaOL prediction.
The relative distribution across categories is such that SherpaOL predicts a higher contribution from the tt+B category, as well as from every category where the production of a second bb pair is required. The modelling of the relevant kinematic variables in each category is in reasonable agreement between Powheg+Pythia and SherpaOL. Some differences are observed at very low values of the mass and p_T of the bb pair, and in the p_T of the top quark and of the tt system.
The prediction from SherpaOL is expected to model the tt+bb contribution more accurately than either Powheg+Pythia or Madgraph+Pythia. Thus, in the analysis, tt+bb events from Powheg+Pythia are reweighted to reproduce the NLO tt+bb prediction from SherpaOL, both for the relative contributions of the different categories and for their kinematics. The reweighting is done at generator level using several kinematic variables, such as the top quark p_T, the tt system p_T, and the ΔR and p_T of the dijet system not coming from the top quark decay. In the absence of an NLO calculation of tt+cc production, the Madgraph+Pythia sample is used to evaluate systematic uncertainties on the tt+cc background.
Since achieving the best possible modelling of the tt+jets background is a key aspect of this analysis, a separate reweighting is applied to tt+light and tt+cc events in Powheg+Pythia, based on the ratio of the differential cross sections measured at √s = 7 TeV in data to those in simulation, as a function of the top quark p_T and the tt system p_T [57]. It was verified using simulation that the ratio derived at √s = 7 TeV is applicable to the √s = 8 TeV simulation. The reweighting is not applied to the tt+bb component, since that component is corrected to match the best available theory calculation; moreover, the measured differential cross section is not sensitive to this component. The reweighting significantly improves the agreement between simulation and data in the total number of jets (primarily due to the tt system p_T reweighting) and in the jet p_T (primarily due to the top quark p_T reweighting). This can be seen in Fig. 5, where the number of jets and the scalar sum of jet p_T (H_T^had) distributions in the exclusive 2-b-tag region are shown for the single-lepton channel before and after the reweighting is applied.
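Such a differential reweighting amounts to looking up a binned data/MC ratio per event. A generic one-dimensional sketch (the bin edges and ratio values below are invented for illustration and are not the measured values of Ref. [57]):

```python
import bisect

def make_reweighter(bin_edges, data_over_mc):
    """Build a per-event weight function from a binned data/MC ratio histogram.
    Values outside the edges are clamped to the first or last bin."""
    def weight(x):
        i = bisect.bisect_right(bin_edges, x) - 1
        i = min(max(i, 0), len(data_over_mc) - 1)
        return data_over_mc[i]
    return weight

# Hypothetical ratio histogram in top-quark pT: a softer measured spectrum
# than the simulation at high pT.
w = make_reweighter([0, 50, 100, 200, 350], [1.02, 1.00, 0.97, 0.92, 0.85])
```

In the analysis, a two-dimensional version (top quark p_T and tt system p_T) would be used, but the lookup logic is the same per axis.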

Other backgrounds
The W/Z+jets background is estimated from simulation, reweighted to account for the difference in the W/Z p_T spectrum between data and simulation [58]. The heavy-flavour fraction of these simulated backgrounds, i.e. the sum of the W/Z+bb and W/Z+cc processes, is adjusted to reproduce the relative rates of Z events with no b-tags and with one b-tag observed in data. Samples of W/Z+jets events, and of diboson production in association with jets, are generated using the Alpgen 2.14 [59] leading-order (LO) generator and the CTEQ6L1 PDF set. Parton showers and fragmentation are modelled with Pythia 6.425 for W/Z+jets production and with Herwig 6.520 [60] for diboson production. The W+jets samples are generated with up to five additional partons, separately for W+light-jets, Wbb+jets, Wcc+jets and Wc+jets. Similarly, the Z+jets background is generated with up to five additional partons, separated into different parton-flavour samples, and normalised to cross-section predictions obtained using the MSTW2008 NNLO PDF set [67,68].
Samples of tt+V are generated with Madgraph 5 and the CTEQ6L1 PDF set; Pythia 6.425 with the AUET2B tune [69] is used for showering. The tt+V samples are normalised to the NLO cross-section predictions [70,71].

Signal model
The ttH signal process is modelled using NLO matrix elements obtained from the HELAC-Oneloop package [72]. Powheg-Box serves as an interface to shower Monte Carlo programs. The samples created using this approach are referred to as PowHel samples [73]. They are inclusive in Higgs boson decays and are produced using the CT10nlo PDF set, with the factorisation (μ_F) and renormalisation (μ_R) scales set to μ_F = μ_R = m_t + m_H/2. The PowHel ttH sample is showered with Pythia 8.1 [74] using the CTEQ6L1 PDF set and the AU2 underlying-event tune [75]. The ttH cross section and the Higgs boson decay branching fractions are taken from (N)NLO theoretical calculations [19,76–82], collected in Ref. [83]. In Appendix A, the relative contributions of the Higgs boson decay modes are shown for all regions considered in the analysis.

Common treatment of MC samples
All samples using Herwig are also interfaced to Jimmy 4.31 [84] to simulate the underlying event. All simulated samples utilise Photos 2.15 [85] to simulate photon radiation and Tauola 1.20 [86] to simulate τ decays. Events from minimum-bias interactions are simulated with the Pythia 8.1 generator with the MSTW2008 LO PDF set and the AUET2 [87] tune. They are superimposed on the simulated MC events, matching the luminosity profile of the recorded data. The contributions from these pileup interactions are simulated both within the same bunch crossing as the hard-scattering process and in neighbouring bunch crossings.
Finally, all simulated MC samples are processed through a simulation [88] of the detector geometry and response either using Geant4 [89], or through a fast simulation of the calorimeter response [90]. All simulated MC samples are processed through the same reconstruction software as the data. Simulated MC events are corrected so that the object identification efficiencies, energy scales and energy resolutions match those determined from data control samples. Figure 6a, b show a comparison of predicted yields to data prior to the fit described in Sect. 9 in all analysis regions in the single-lepton and dilepton channel, respectively. The data agree with the SM expectation within the uncertainties of 10-30 %. Detailed tables of the event yields prior to the fit and the corresponding S/B and S/ √ B ratios for the single-lepton and dilepton channels can be found in Appendix B.
Fig. 6 Comparison of prediction to data in all analysis regions before the fit to data in a the single-lepton channel and b the dilepton channel. The signal, normalised to the SM prediction, is shown both as a filled red area stacked on the backgrounds and separately as a dashed red line. The hashed area corresponds to the total uncertainty on the yields.

When requiring high jet and b-tag multiplicities in the analysis, the number of available MC events is significantly reduced, leading to large fluctuations in the resulting distributions for certain samples. This can degrade the sensitivity of the analysis through large statistical uncertainties on the templates and unreliable systematic uncertainties due to shape fluctuations. To mitigate this problem, instead of tagging the jets by applying the b-tagging algorithm, their probabilities to be b-tagged are parameterised as functions of jet flavour, p_T and η. This allows all events in the sample before b-tagging to be used in predicting the normalisation and shape after b-tagging [91]. The tagging probabilities are derived using an inclusive tt+jets simulated sample. Since the b-tagging probability for a b-jet coming from top quark decay is slightly higher than that of a b-jet with the same p_T and η but arising from other sources, the probabilities for the two cases are derived separately. The predictions agree well with the normalisation and shape obtained by applying the b-tagging algorithm directly. The method is applied to all signal and background samples.
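The idea of replacing a hard b-tag decision by per-jet tagging probabilities can be illustrated with a simplified event-weight calculation (schematic only; the actual method of Ref. [91] is more elaborate, but the combinatorics are the same):

```python
from itertools import combinations

def prob_at_least_n_tags(tag_probs, n):
    """Probability that at least n of the jets are b-tagged, given per-jet
    tagging probabilities eps(flavour, pT, eta) looked up beforehand.
    Sums, over each tag multiplicity k >= n, the probability of every
    subset of exactly k tagged jets."""
    total = 0.0
    jets = range(len(tag_probs))
    for k in range(n, len(tag_probs) + 1):
        for tagged in combinations(jets, k):
            p = 1.0
            for j in jets:
                p *= tag_probs[j] if j in tagged else (1.0 - tag_probs[j])
            total += p
    return total
```

Weighting every pre-tag event by this probability uses the full MC sample in each b-tag region instead of only the small tagged subset, which is what suppresses the template fluctuations.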

Analysis method
In both the single-lepton and dilepton channels, the overall S/√B is very small and the uncertainty on the background is larger than the expected signal; the analysis therefore uses a neural network (NN) to discriminate signal from background in each of the regions with a significant expected ttH signal contribution. These are (5j, ≥4b), (≥6j, 3b) and (≥6j, ≥4b) in the single-lepton channel, and (≥4j, 3b) and (≥4j, ≥4b) in the dilepton channel. In the dilepton channel, an additional NN is used to separate signal from background in the (3j, 3b) region; despite a small expected S/√B, it adds sensitivity due to a relatively high expected S/B. In the single-lepton channel, a dedicated NN is used in the (5j, 3b) region to separate the tt+light background from tt+HF. The other regions considered in the analysis have lower sensitivity, and use H_T^had in the single-lepton channel and the scalar sum of the jet and lepton p_T (H_T) in the dilepton channel as the discriminant.
The NNs used in the analysis are built using the NeuroBayes [92] package. The variables entering the NN discriminant are chosen through the ranking procedure implemented in this package, based on the statistical separation power and on the correlations between variables. Several classes of variables were considered: object kinematics, global event variables, event-shape variables and object-pair properties. In the regions with ≥6 (≥4) jets, a maximum of seven (five) jets are considered when constructing the kinematic variables in the single-lepton (dilepton) channel, first using all the b-jets and then incorporating the untagged jets with the highest p_T. All variables used for the NN training, and their pairwise correlations, are required to be well described by the simulation in multiple control regions.
In the (5j, 3b) region of the single-lepton channel, the separation between tt+light and tt+HF events exploits the different origin of the third b-jet in the two cases. In both cases, two of the b-jets originate from the tt decay. In tt+HF events, however, the third b-jet is likely to originate from one of the additional heavy-flavour quarks, whereas in tt+light events the third b-jet is often matched to a c-quark from the hadronically decaying W boson. Thus, kinematic variables such as the invariant mass of the two untagged jets with minimum ΔR discriminate between tt+light and tt+HF events, since the latter exhibits a distinct peak at the W boson mass that is absent in the former. This and other kinematic variables are used in the dedicated NN employed in this region.
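The min-ΔR untagged dijet mass can be computed directly from (p_T, η, φ) jet kinematics; an illustrative sketch assuming massless jets (not analysis code):

```python
import math
from itertools import combinations

def delta_r(j1, j2):
    dphi = (j1["phi"] - j2["phi"] + math.pi) % (2 * math.pi) - math.pi
    return math.hypot(j1["eta"] - j2["eta"], dphi)

def mass(j1, j2):
    """Invariant mass of two massless jets from (pT, eta, phi)."""
    e = j1["pt"] * math.cosh(j1["eta"]) + j2["pt"] * math.cosh(j2["eta"])
    px = j1["pt"] * math.cos(j1["phi"]) + j2["pt"] * math.cos(j2["phi"])
    py = j1["pt"] * math.sin(j1["phi"]) + j2["pt"] * math.sin(j2["phi"])
    pz = j1["pt"] * math.sinh(j1["eta"]) + j2["pt"] * math.sinh(j2["eta"])
    return math.sqrt(max(e * e - px * px - py * py - pz * pz, 0.0))

def min_dr_untagged_mass(untagged_jets):
    """Mass of the untagged-jet pair with minimum dR; this peaks near the W
    mass when both untagged jets come from the hadronic W decay, as in the
    tt+HF events described in the text."""
    pair = min(combinations(untagged_jets, 2), key=lambda p: delta_r(*p))
    return mass(*pair)
```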
In addition to the kinematic variables, two variables calculated using the matrix element method (MEM), detailed in Sect. 7, are included in the NN training in (≥ 6j, 3b) and (≥ 6j, ≥ 4b) regions of the single-lepton channel. These two variables are the Neyman-Pearson likelihood ratio (D1) (Eq. (4)) and the logarithm of the summed signal likelihoods (SSLL) (Eq. (2)). The D1 variable provides the best separation between tt H signal and the dominant tt+bb background in the (≥ 6j, ≥ 4b) region. The SSLL variable further improves the NN performance.
The variables used in the single-lepton and dilepton channels, as well as their ranking in each analysis region, are listed in Tables 1 and 2, respectively. For the construction of variables in the (≥ 4j, ≥ 4b) region of the dilepton channel, the two b-jets that are closest in ΔR to the leptons are considered to originate from the top quarks, and the other two b-jets are assigned to the Higgs boson candidate. Figures 7 and 8 show the distributions of the NN discriminant for the tt H signal and the background in the signal-rich regions of the single-lepton and dilepton channels, respectively. In particular, Fig. 7a shows the separation between tt+HF and tt+light-jet production achieved by the dedicated NN in the (5j, 3b) region of the single-lepton channel. The distributions of the highest-ranked input variables from each of the NN regions are shown in Appendix C.
For all analysis regions considered in the fit, the tt H signal includes all Higgs boson decay modes, which are also included in the NN training.
The analysis regions have different contributions from various systematic uncertainties, allowing the combined fit to constrain them. The highly populated (4j, 2b) and (2j, 2b) regions in the single-lepton and dilepton channels, respectively, provide a powerful constraint on the overall normalisation of the tt background. The (4j, 2b), (5j, 2b) and (≥ 6j, 2b) regions in the single-lepton channel and the (2j, 2b), (3j, 2b) and (≥ 4j, 2b) regions in the dilepton channel are almost pure in tt+light-jets background and provide an important constraint on tt modelling uncertainties both in terms of normalisation and shape. Uncertainties on c-tagging are reduced by exploiting the large contribution of W → cs decays in the tt+light-jets background populating the (4j, 3b) region in the single-lepton channel. Finally, the consideration of regions with exactly 3 and ≥ 4 b-jets in both channels, having different fractions of tt+bb and tt+cc backgrounds, provides the ability to constrain uncertainties on the tt+bb and tt+cc normalisations.

The matrix element method
The matrix element method [94] has been used by the D0 and CDF collaborations for precision measurements of the top quark mass [95,96] and for the observations of single top quark production [97,98]. Recently this technique has been used for the tt H search by the CMS experiment [99]. By directly linking theoretical calculations and observed quantities, it makes the most complete use of the kinematic information of a given event.
The method calculates the probability density for an observed event to be consistent with a physics process i described by a set of parameters α. This probability density function P_i(x|α) is defined as

$$P_i(x|\alpha) = \frac{1}{\sigma_i} \int \frac{f(q_1)\,f(q_2)}{F}\, |M_i(y;\alpha)|^2\, W(x|y)\, \mathrm{d}\Phi(y) \qquad (1)$$

and is obtained by numerical integration over the entire phase space of the initial- and final-state particles. In this equation, x and y represent the four-momentum vectors of all final-state particles at reconstruction level and parton level, respectively. The squared matrix element |M_i|² of the process, the parton distribution functions f(q_1) and f(q_2) of the incoming partons, the flux factor F and the Lorentz-invariant phase-space element dΦ describe the parton-level kinematics; the transfer functions W(x|y) relate them to the reconstruction-level quantities; and the constant σ_i normalises P_i to unity taking acceptance and efficiency into account.
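The structure of Eq. (1) can be illustrated with a one-dimensional toy: take |M|² as a Breit-Wigner propagator peak and W(x|y) as a Gaussian resolution, and integrate numerically over the parton-level variable. All numbers here (resolution, integration range) are illustrative assumptions, not the analysis's actual transfer functions:

```python
import numpy as np

M_W, GAMMA_W = 80.4, 2.1   # resonance mass and width in GeV (toy values)
SIGMA_E = 8.0              # assumed Gaussian resolution in GeV

def matrix_element_sq(y):
    """Toy |M|^2: Breit-Wigner propagator peak at the W mass."""
    return 1.0 / ((y ** 2 - M_W ** 2) ** 2 + (M_W * GAMMA_W) ** 2)

def transfer(x, y):
    """Toy transfer function W(x|y): Gaussian smearing of the
    parton-level mass y into the reconstructed mass x."""
    return np.exp(-0.5 * ((x - y) / SIGMA_E) ** 2) / (SIGMA_E * np.sqrt(2 * np.pi))

def process_density(x, lo=40.0, hi=120.0, n=2001):
    """P(x) ~ ∫ dy |M(y)|^2 W(x|y): the 1-D skeleton of Eq. (1),
    without PDFs, flux factor or phase-space Jacobian."""
    y = np.linspace(lo, hi, n)
    dy = y[1] - y[0]
    return float(np.sum(matrix_element_sq(y) * transfer(x, y)) * dy)

# the density is largest for reconstructed masses near the propagator peak
print(process_density(80.4) > process_density(60.0))  # True
```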
The assignment of reconstructed objects to the final-state partons of the hard process contains multiple ambiguities. The process probability density is calculated for each allowed assignment permutation of the jets to the final-state quarks of the hard process. A process likelihood function is then built by summing the process probability densities over the N_p allowed assignment permutations,

$$L_i(x|\alpha) = \sum_{p=1}^{N_p} P_{i,p}(x|\alpha). \qquad (2)$$

The process likelihoods are used to distinguish signal from background events by calculating the likelihood ratio of the signal process to the sum of the signal and the background processes contributing with fractions f_bkg,

$$D = \frac{L_{\mathrm{sig}}}{L_{\mathrm{sig}} + \sum_{\mathrm{bkg}} f_{\mathrm{bkg}}\, L_{\mathrm{bkg}}}. \qquad (3)$$

According to the Neyman-Pearson lemma [100], this ratio is the most powerful discriminant between the signal and background processes. In the analysis, this variable is used as an input to the NN along with other kinematic variables.
The matrix elements are generated with Madgraph 5 at LO. The transfer functions are obtained from simulation following a procedure similar to that described in Ref. [101]. For the modelling of the parton distribution functions, the CTEQ6L1 set from the LHAPDF package [102] is used.
The integration is performed using VEGAS [103]. Due to the complexity and high dimensionality of the integration, adaptive MC techniques [104], simplifications and approximations are needed to obtain results within a reasonable computing time. In particular, only the numerically most significant helicity states of a process hypothesis for a given event, identified at the start of each integration, are evaluated. This does not perceptibly decrease the separation power but reduces the calculation time by more than an order of magnitude. Furthermore, several approximations are made to improve the VEGAS convergence rate. Firstly, the dimensionality of the integration is reduced by assuming that the final-state object directions in η and φ, as well as the charged-lepton momenta, are well measured, so that the corresponding transfer functions are represented by δ functions. Total momentum conservation and the negligible transverse momentum of the initial-state partons allow for a further reduction. Secondly, kinematic transformations are used to optimise the integration over the remaining phase space by aligning the peaks of the integrand with the integration dimensions. The narrow-width approximation is applied to the leptonically decaying W boson. This leaves three b-quark energies, one light-quark energy, the hadronically decaying W boson mass and the invariant mass of the two b-quarks originating from either the Higgs boson (for the signal) or a gluon (for the background) as the remaining parameters defining the integration phase space. The total integration volume is restricted based upon the observed values and the widths of the transfer functions and of the propagator peaks in the matrix elements. Finally, the likelihood contributions of all allowed assignment permutations are coarsely integrated, and only for the leading twelve assignment permutations is the full integration performed, with a required precision decreasing according to their relative contributions.
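The benefit of adapting the sampling to the integrand's peaks, which VEGAS does automatically by refining its grid, can be shown with a simple importance-sampling toy (all numbers illustrative): sampling from a distribution aligned with a propagator-like peak gives a far smaller variance than uniform sampling over a wide range.

```python
import numpy as np

rng = np.random.default_rng(42)

def integrand(y, mu=80.4, sigma=2.0):
    """Sharply peaked integrand, mimicking a propagator peak in |M|^2."""
    return np.exp(-0.5 * ((y - mu) / sigma) ** 2)

N = 20_000
EXACT = 2.0 * np.sqrt(2.0 * np.pi)   # ∫ integrand dy = sigma * sqrt(2π)

# plain uniform sampling over a wide range: most points miss the peak
u = rng.uniform(0.0, 200.0, N)
est_uniform = 200.0 * integrand(u).mean()

# importance sampling from a distribution aligned with the peak; VEGAS
# achieves the same effect adaptively without knowing the peak position
mu_q, sig_q = 80.4, 4.0
g = rng.normal(mu_q, sig_q, N)
q = np.exp(-0.5 * ((g - mu_q) / sig_q) ** 2) / (sig_q * np.sqrt(2.0 * np.pi))
est_importance = (integrand(g) / q).mean()

print(est_uniform, est_importance, EXACT)
```

The kinematic transformations described in the text serve the same purpose: they align the integrand's peaks with the integration axes so that the adaptive grid can resolve them efficiently.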
The signal hypothesis is defined as a SM Higgs boson produced in association with a top-quark pair, as shown in Fig. 1a, b. Hence no coupling of the Higgs boson to the W boson is accounted for in |M_i|², to allow for a consistent treatment when performing the kinematic transformation. The Higgs boson is required to decay into a pair of b-quarks, while the top-quark pair decays into the single-lepton channel. For the background hypothesis, only the diagrams of the irreducible tt+bb background are considered. Since this background dominates the most signal-rich analysis regions, the inclusion of other processes does not improve the separation between signal and background. No gluon radiation from the final-state quarks is allowed, since such configurations are kinematically suppressed and difficult to treat in any kinematic transformation aiming for phase-space alignment during the integration. In the definition of the signal and background hypotheses, the LO diagrams are required to have a top-quark pair as an intermediate state, resulting in exactly four b-quarks, two light quarks, one charged lepton (electron or muon) and one neutrino in the final state. Assuming lepton universality and invariance under charge conjugation, diagrams of only one lepton flavour and of only negative charge (electron) are considered. The probability density calculation for the signal and background is only performed in the (≥ 6j, 3b) and (≥ 6j, ≥ 4b) regions of the single-lepton channel. Only six reconstructed jets are considered in the calculation: the four jets with the highest value of the probability to be a b-jet returned by the b-tagging algorithm (i.e. the highest b-tagging weight), and two of the remaining jets with an invariant mass closest to the W boson mass of 80.4 GeV. If a jet is b-tagged it cannot be assigned to a light quark in the matrix element description. In the case of more than four b-tagged jets, only the four with the highest b-tagging weight are treated as b-tagged.
Assignment permutations between the two light quarks of the hadronically decaying W boson, and between the two b-quarks originating from the Higgs boson or gluon, result in the same likelihood value and are thus not considered. As a result there are in total 12 and 36 assignment permutations in the (≥ 6j, ≥ 4b) and (≥ 6j, 3b) regions, respectively, which need to be evaluated in the coarse integration phase.
Using the tt H process as the signal hypothesis and the tt+bb process as the background hypothesis, a slightly modified version of Eq. (3) is used to define the likelihood ratio D1:

$$D1 = \frac{L_{t\bar{t}H}}{L_{t\bar{t}H} + \alpha\, L_{t\bar{t}+b\bar{b}}}, \qquad (4)$$

where α = 0.23 is a relative normalisation factor chosen to optimise the performance of the discriminant given the finite bin sizes of the D1 distribution. With this definition, signal-like and background-like events have D1 values close to one and zero, respectively. The logarithm of the summed signal likelihoods defined by Eq. (2) and the ratio D1 are included in the NN training in both the (≥ 6j, 3b) and (≥ 6j, ≥ 4b) regions.
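The D1 ratio with α = 0.23 quoted in the text is a one-line function; the limiting cases make its behaviour transparent:

```python
def d1_discriminant(l_sig, l_bkg, alpha=0.23):
    """D1 = L_sig / (L_sig + alpha * L_bkg): signal-like events give
    values near one, background-like events values near zero."""
    return l_sig / (l_sig + alpha * l_bkg)

print(d1_discriminant(1.0, 0.0))        # 1.0 (pure signal)
print(d1_discriminant(0.0, 1.0))        # 0.0 (pure background)
print(d1_discriminant(1.0, 1.0) > 0.5)  # True: alpha < 1 shifts the mid-point
```

Choosing α < 1 moves the decision boundary, spreading otherwise sharply peaked values more evenly across the finite histogram bins used downstream.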

Systematic uncertainties
Several sources of systematic uncertainty are considered that can affect the normalisation of signal and background and/or the shape of their final discriminant distributions. Individual sources of systematic uncertainty are considered uncorrelated. Correlations of a given systematic effect are maintained across processes and channels. Table 3 presents a summary of the sources of systematic uncertainty considered in the analysis, indicating whether they are taken to be normalisation-only, shape-only, or to affect both shape and normalisation. In Appendix D, the normalisation impact of the systematic uncertainties is shown on the tt background as well as on the tt H signal.
In order to reduce the degradation of the search sensitivity due to systematic uncertainties, they are fitted to data in the statistical analysis, exploiting the constraining power of the background-dominated regions described in Sect. 4. Each systematic uncertainty is represented by an independent parameter, referred to as a "nuisance parameter", and is fitted with a Gaussian prior for shape differences and a log-normal prior for normalisations. The priors are centred around zero with a width that corresponds to the given uncertainty.
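The role of such priors can be sketched with a toy single-bin likelihood: the nuisance parameter modulates the background normalisation through a log-normal response, and a Gaussian penalty term keeps it near zero unless the data pull it away. All yields and the 10% prior width are illustrative assumptions:

```python
import math

def constrained_nll(n_obs, mu, theta, s=10.0, b=100.0, kappa=1.10):
    """Toy single-bin negative log-likelihood with one log-normal
    normalisation nuisance parameter theta (kappa = 1 + 10% prior
    uncertainty; all values chosen for illustration): a Poisson term
    plus a unit-Gaussian penalty centred at zero."""
    n_exp = mu * s + b * kappa ** theta          # log-normal response
    poisson = n_exp - n_obs * math.log(n_exp)    # -log Poisson, up to a constant
    penalty = 0.5 * theta ** 2                   # Gaussian prior on theta
    return poisson + penalty

# pulling the nuisance parameter away from zero costs the penalty term,
# so the fit moves it only if the data demand it
print(constrained_nll(110, 1.0, 0.0) < constrained_nll(110, 1.0, 3.0))  # True
```

In the actual fit many such parameters act simultaneously, and the background-rich regions determine how far each can be pulled and how strongly it is constrained.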

Luminosity
The uncertainty on the integrated luminosity for the data set used in this analysis is 2.8 %. It is derived following the same methodology as that detailed in Ref. [105]. This systematic uncertainty is applied to all contributions determined from the MC simulation.

Leptons
Uncertainties associated with the lepton selection arise from the trigger, reconstruction, identification, isolation, and lepton momentum scale and resolution. In total, the uncertainties associated with electrons (muons) comprise five (six) components.

Table 3 List of systematic uncertainties considered. An "N" means that the uncertainty is taken as normalisation-only for all processes and channels affected, whereas an "S" denotes systematic uncertainties that are considered shape-only in all processes and channels. An "SN" means that the uncertainty affects both shape and normalisation. Some of the systematic uncertainties are split into several components for a more accurate treatment; this number is indicated in the column labelled "Comp."

Jets
Uncertainties associated with the jet selection arise from the jet energy scale (JES), the jet vertex fraction requirement, the jet energy resolution and the jet reconstruction efficiency. Among these, the JES uncertainty has the largest impact on the analysis. The JES and its uncertainty are derived by combining information from test-beam data, LHC collision data and simulation [35]. The jet energy scale uncertainty is split into 22 uncorrelated sources, which can have different jet p T and η dependencies. In this analysis, the largest jet energy scale uncertainty arises from the η dependence of the JES calibration in the end-cap regions of the calorimeter; it is the second leading uncertainty.

Heavy-and light-flavour tagging
A total of six (four) independent sources of uncertainty affecting the b(c)-tagging efficiency are considered [37]. Each of these uncertainties corresponds to an eigenvector resulting from diagonalising the matrix containing the information about the total uncertainty per jet p T bin and the bin-to-bin correlations. An additional uncertainty is assigned due to the extrapolation of the b-tagging efficiency measurement to the high-p T region. Twelve uncertainties are considered for the light-jet tagging, and they depend on jet p T and η. These systematic uncertainties are taken as uncorrelated between b-jets, c-jets and light-flavour jets.
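The eigenvector decomposition mentioned above can be sketched with a hypothetical covariance matrix across three jet-p T bins: diagonalising it yields independent variations that, treated as uncorrelated nuisance parameters, reproduce the full correlated uncertainty.

```python
import numpy as np

# hypothetical covariance of a tagging scale-factor uncertainty across
# three jet-pT bins (values purely illustrative)
cov = np.array([[0.0004, 0.0002, 0.0001],
                [0.0002, 0.0009, 0.0003],
                [0.0001, 0.0003, 0.0016]])

# diagonalise: each eigenvector scaled by the square root of its
# eigenvalue becomes one independent nuisance parameter's +/-1 sigma
# variation across the pT bins
vals, vecs = np.linalg.eigh(cov)
variations = vecs * np.sqrt(vals)   # column k: variation of source k

# the uncorrelated eigen-variations reproduce the original covariance
print(np.allclose(variations @ variations.T, cov))  # True
```

This is why a handful of "eigen-sources" per flavour suffices in the fit: they encode exactly the measured bin-to-bin correlations.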
No additional systematic uncertainty is assigned due to the use of parameterisations of the b-tagging probabilities instead of applying the b-tagging algorithm directly since the difference between these two approaches is negligible compared to the other sources.

tt+ jets modelling
An uncertainty of +6.5 %/-6 % is assumed for the inclusive tt production cross section. It includes uncertainties from the top quark mass and choices of the PDF and α S . The PDF and α S uncertainties are calculated using the PDF4LHC prescription [106] with the MSTW2008 68 % CL NNLO, CT10 NNLO [107] and NNPDF2.3 5f FFN [108] PDF sets, and are added in quadrature to the scale uncertainty. Other systematic uncertainties affecting the modelling of tt+jets include uncertainties due to the choice of parton shower and hadronisation model, as well as several uncertainties related to the reweighting procedure applied to improve the tt MC model. Additional uncertainties are assigned to account for limited knowledge of tt+HF jets production. They are described later in this section.
As discussed in Sect. 5, to improve the agreement between data and the tt simulation, a reweighting procedure is applied to tt MC events based on the differences in the top quark p T and tt system p T distributions between data and simulation at √s = 7 TeV [57]. The nine largest uncertainties associated with the experimental measurement of the top quark and tt system p T , representing approximately 95 % of the total experimental uncertainty on the measurement, are considered as separate uncertainty sources in the reweighting applied to the MC prediction. The largest uncertainties on the measurement of the differential distributions include the radiation modelling in tt events, the choice of generator used to simulate tt production, components of the jet energy scale and resolution uncertainties, and flavour tagging. Because the measurement is performed for the inclusive tt sample and the size of the uncertainties applicable to the tt+cc component is not known, two additional uncorrelated uncertainties are assigned to tt+cc events, consisting of the full difference between applying and not applying the reweightings of the tt system p T and the top quark p T , respectively.
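Such a reweighting amounts to a per-event weight looked up from a measured data/MC ratio; the binning and ratio values below are purely illustrative placeholders, not the measured 7 TeV ratios:

```python
import numpy as np

# hypothetical data/MC ratio of the top-quark pT spectrum (illustrative
# numbers only; the real ratio comes from the 7 TeV measurement [57])
PT_EDGES     = np.array([0.0, 50.0, 100.0, 200.0, 400.0, np.inf])
DATA_OVER_MC = np.array([1.05, 1.02, 0.98, 0.90, 0.80])

def top_pt_weight(pt):
    """Per-event weight: the measured data/MC ratio in the pT bin
    containing this top quark."""
    idx = int(np.searchsorted(PT_EDGES, pt, side="right")) - 1
    return float(DATA_OVER_MC[min(idx, len(DATA_OVER_MC) - 1)])

# the tt+cc variation described above: the full difference between
# applying the weight and not applying it (weight = 1)
def top_pt_weight_up(pt):
    return 1.0

print(top_pt_weight(75.0))   # 1.02
print(top_pt_weight(500.0))  # 0.8
```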
An uncertainty due to the choice of parton shower and hadronisation model is derived by comparing events produced by Powheg interfaced with Pythia or Herwig. Effects on the shapes are compared, symmetrised and applied to the shapes predicted by the default model. Given that the change of the parton shower model leads to two separate effects -a change in the number of jets and a change of the heavy-flavour content -the parton shower uncertainty is represented by three parameters, one acting on the tt+light contribution and two others on the tt+cc and tt+bb contributions. These three parameters are treated as uncorrelated in the fit.
Detailed comparisons of tt+bb production between Powheg+Pythia and an NLO prediction of tt+bb production based on SherpaOL have shown that the cross sections agree within 50 % of each other. Therefore, a systematic uncertainty of 50 % is applied to the tt+bb component of the tt+jets background obtained from the Powheg+Pythia MC simulation. In the absence of an NLO prediction for the tt+cc background, the same 50 % systematic uncertainty is applied to the tt+cc component, and the uncertainties on tt+bb and tt+cc are treated as uncorrelated. The large available data sample allows the determination of the tt+bb and tt+cc normalisations with much better precision, approximately 15 and 30 %, respectively (see Appendix D). Thus, the final result does not significantly depend on the exact value of the assumed prior uncertainty, as long as it is larger than the precision with which the data can constrain it. However, even after the reduction, the uncertainties on the tt+bb and the tt+cc background normalisation are still the leading and the third leading uncertainty in the analysis, respectively.
Four additional systematic uncertainties in the tt+cc background estimate are considered: three are derived from the simultaneous variation of the factorisation and renormalisation scales, from the matching-threshold variation and from the c-quark mass variation in the Madgraph+Pythia tt simulation; the fourth is the difference between the tt+cc predictions of Madgraph+Pythia and Powheg+Pythia, since Madgraph+Pythia includes the tt+cc process in the matrix-element calculation while Powheg+Pythia does not.
For the tt+bb background, three scale uncertainties are evaluated: changing the functional form of the renormalisation scale to μ_R = (m_t m_bb)^1/2, changing the functional form of the factorisation (μ_F) and resummation (μ_Q) scales to one based on the transverse energies E_T,i of the final-state particles, and varying the renormalisation scale μ_R by a factor of two up and down. Additionally, the shower recoil model uncertainty and two uncertainties due to the PDF choice in the SherpaOL NLO calculation are quoted. The effect of these variations on the contributions of the different tt+bb event categories is shown in Fig. 9. The renormalisation scale choice and the shower recoil scheme have a large effect on the modelling of tt+bb: they produce large shape variations of the NN discriminants, resulting in the fourth and sixth leading uncertainties in this analysis.
Finally, two uncertainties due to tt+bb production via multiparton interaction and final-state radiation which are not present in the SherpaOL NLO calculation are applied. Overall, the uncertainties on tt+bb normalisation and modelling result in about a 55 % total uncertainty on the tt+bb background contribution in the most sensitive (≥ 6j, ≥ 4b) and (≥ 4j, ≥ 4b) regions.

The W/Z +jets modelling
As discussed in Sect. 5, the W/Z +jets contributions are obtained from the simulation and normalised to the inclusive theoretical cross sections, and a reweighting is applied to improve the modelling of the W/Z boson p T spectrum. The full difference between applying and not applying the W/Z boson p T reweighting is taken as a systematic uncertainty, which is then assumed to be symmetric with respect to the central value. Additional uncertainties are assigned due to the extrapolation of the W/Z +jets estimate to high jet multiplicity.

Misidentified lepton background modelling
Systematic uncertainties on the misidentified-lepton background, estimated via the matrix method [38] in the single-lepton channel, receive contributions from the limited number of data events, particularly at high jet and b-tag multiplicities, from the subtraction of the prompt-lepton contribution, and from the uncertainty on the lepton misidentification rates, estimated in different control regions. The statistical uncertainty is uncorrelated among the different jet and b-tag multiplicity bins. An uncertainty of 50 % associated with the lepton misidentification rate measurements is assumed; it is taken as correlated across jet and b-tag multiplicity bins, but uncorrelated between the electron and muon channels. The uncertainty on the shape of the misidentified-lepton background arises from the prompt-lepton background subtraction and from the misidentification rate measurement.
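The matrix method referenced here estimates the fake-lepton yield by inverting the relation between loose and tight selections; the efficiencies and yields below are hypothetical numbers for illustration:

```python
def matrix_method_fakes(n_loose, n_tight, eff_real, eff_fake):
    """Standard matrix method: solve the 2x2 system
        n_loose = n_real + n_fake
        n_tight = eff_real * n_real + eff_fake * n_fake
    for n_fake, and return the fake-lepton yield passing the tight
    selection (eff_fake * n_fake)."""
    n_fake = (eff_real * n_loose - n_tight) / (eff_real - eff_fake)
    return eff_fake * n_fake

# consistency check with hypothetical rates (90% real-lepton efficiency,
# 20% misidentification rate): 900 real + 100 fake loose leptons give
# n_tight = 0.9*900 + 0.2*100 = 830, and 0.2*100 = 20 fakes survive
print(matrix_method_fakes(1000, 830, eff_real=0.9, eff_fake=0.2))
```

The quoted 50 % uncertainty on the misidentification rate propagates directly through eff_fake in this inversion, which is why it dominates the normalisation uncertainty of the estimate.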
In the dilepton channel, since the misidentified lepton background is estimated using both the simulation and samesign dilepton events in data, a 50 % normalisation uncertainty is assigned to cover the maximum difference between the two methods. It is taken as correlated among the different jet and b-tag multiplicity bins. An additional uncertainty is applied to cover the difference in shape between the predictions derived from the simulation and from same-sign dilepton events in data.

Electroweak background modelling
Uncertainties of +5 %/-4 % and ±6.8 % are used for the theoretical cross sections of single top production in the single-lepton and dilepton channels [64,65], respectively. The former corresponds to the weighted average of the theoretical uncertainties on s-, t- and W t-channel production, while the latter corresponds to the theoretical uncertainty on W t-channel production, the only single top process contributing to the dilepton final state.
The uncertainty on the diboson background rates includes an uncertainty on the inclusive diboson NLO cross section of ±5 % [62] and uncertainties to account for the extrapolation to high jet multiplicity.
Finally, an uncertainty of ±30 % is assumed for the theoretical cross sections of the tt+V [70,71] background. An additional uncertainty on tt+V modelling arises from variations in the amount of initial-state radiation. The tt + Z background with Z boson decaying into a bb pair is an irreducible background to the tt H, H → bb signal, and as such, has kinematics and an NN discriminant shape similar to those of the signal. The uncertainty on the tt+V background normalisation is the fifth leading uncertainty in the analysis.

Uncertainties on signal modelling
Dedicated NLO PowHel samples are used to evaluate the impact of the choice of factorisation and renormalisation scales on the tt H signal kinematics. In these samples the default scale is varied by a factor of two up and down. The effect of these variations on the tt H distributions was studied at particle level, and the nominal PowHel tt H sample was reweighted to reproduce them. In a similar way, the nominal sample is reweighted to reproduce the effect of changing the functional form of the scale. Additional uncertainties on the tt H signal arise from the choice of parton shower and hadronisation model and from the PDF choice.

Statistical methods
The distributions of the discriminants from each of the channels and regions considered are combined to test for the presence of a signal, assuming a Higgs boson mass of m H = 125 GeV. The statistical analysis is based on a binned likelihood function L(μ, θ) constructed as a product of Poisson probability terms over all bins considered in the analysis. The likelihood function depends on the signal-strength parameter μ, defined as the ratio of the measured cross section to the SM cross section, and on θ, the set of nuisance parameters encoding the effects of systematic uncertainties on the signal and background expectations. These are implemented in the likelihood function as Gaussian or log-normal priors. The total number of expected events in a given bin therefore depends on μ and θ. The nuisance parameters θ adjust the expectations for signal and background according to the corresponding systematic uncertainties, and their fitted values correspond to the amounts that best fit the data. This procedure reduces the impact of systematic uncertainties on the search sensitivity by taking advantage of the highly populated, background-dominated control regions included in the likelihood fit; it requires a good understanding of the systematic effects affecting the shapes of the discriminant distributions. The test statistic q_μ is defined as the profile likelihood ratio $q_\mu = -2\ln\big(L(\mu,\hat{\hat{\theta}}_\mu)/L(\hat{\mu},\hat{\theta})\big)$, where $\hat{\mu}$ and $\hat{\theta}$ are the values of the parameters that maximise the likelihood function (with the constraint $0 \leq \hat{\mu} \leq \mu$), and $\hat{\hat{\theta}}_\mu$ are the values of the nuisance parameters that maximise the likelihood function for a given value of μ. This test statistic is used to measure the compatibility of the observed data with the background-only hypothesis (i.e. with μ = 0), and to make statistical inferences about μ, such as upper limits using the CL s method [112][113][114] as implemented in the RooFit package [115,116].
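For a toy single counting bin with no nuisance parameters, the profile likelihood ratio has a closed form and the constraint 0 ≤ μ̂ ≤ μ can be applied by clipping; the yields below are illustrative assumptions:

```python
import math

def q_mu(mu, n_obs, s=10.0, b=100.0):
    """Profile-likelihood ratio test statistic for a toy single counting
    bin with no nuisance parameters (expected signal s and background b
    assumed known), with the constraint 0 <= mu_hat <= mu."""
    def nll(m):                      # -ln Poisson, up to a constant
        lam = m * s + b
        return lam - n_obs * math.log(lam)
    mu_hat = min(max((n_obs - b) / s, 0.0), mu)   # clipped ML estimate
    return 2.0 * (nll(mu) - nll(mu_hat))

print(q_mu(1.0, n_obs=110))        # 0.0: the data exactly match mu = 1
print(q_mu(1.0, n_obs=100) > 0.0)  # True: background-like data disfavour mu = 1
```

In the real analysis the maximisations run over hundreds of bins and all nuisance parameters simultaneously, but the logic of the test statistic is the same.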
To obtain the final result, a simultaneous fit to the data is performed on the distributions of the discriminants in 15 regions: nine analysis regions in the single-lepton channel and six regions in the dilepton channel. Fits are performed under the signal-plus-background hypothesis, where the signal-strength parameter μ is the parameter of interest and is allowed to float freely, but is required to be the same in all 15 fit regions. The normalisation of each background is determined from the fit simultaneously with μ. Contributions from the tt, W/Z +jets, single top, diboson and tt V backgrounds are constrained by the uncertainties on the respective theoretical calculations and by the data. Statistical uncertainties in each bin of the discriminant distributions are taken into account by dedicated parameters in the fit. The performance of the fit is tested using simulated events by injecting a tt H signal with a variable signal strength and comparing it to the fitted value. Good agreement between the injected and measured signal strengths is observed.

Results
The results of the binned likelihood fit to data described in Sect. 9 are presented in this section. Figure 10 shows the yields after the fit in all analysis regions in the single-lepton and dilepton channels. The post-fit event yields and the corresponding S/B and S/√B ratios are summarised in Appendix E. Figures 11, 12, 13, 14 and 15 show comparisons of data and prediction for the discriminating variables (H T had, H T or the NN discriminants) in each of the regions considered in the single-lepton and dilepton channels, both pre- and post-fit. The uncertainties decrease significantly in all regions due to the constraints provided by the data and the correlations between different sources of uncertainty introduced by the fit. In Appendix F, the most highly discriminating NN input variables are compared with data post-fit. Table 4 shows the observed μ values obtained from the individual fits in the single-lepton and dilepton channels, and from their combination. The signal strength from the combined fit for m H = 125 GeV is μ = 1.5 ± 1.1. The expected uncertainty on the signal strength (for μ = 1) is ±1.1. The observed (expected) significance of the signal is 1.4 (1.1) standard deviations, corresponding to an observed (expected) p-value of 8 % (15 %). The probability p to obtain a result at least as signal-like as observed if no signal is present is calculated using $q_0 = -2\ln\big(L(0,\hat{\hat{\theta}}_0)/L(\hat{\mu},\hat{\theta})\big)$ as the test statistic. The fitted values of the signal strength and their uncertainties for the individual channels and their combination are shown in Fig. 16.
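The correspondence between the quoted p-values and significances follows from the one-sided Gaussian convention p = 1 − Φ(Z); the observed 8 % maps to about 1.4σ, and the expected 15 % to about 1.0σ (consistent with the quoted 1.1σ, given that the p-value itself is rounded in the text):

```python
from statistics import NormalDist

def significance(p):
    """One-sided Gaussian significance Z for a p-value p: p = 1 - Phi(Z)."""
    return NormalDist().inv_cdf(1.0 - p)

def p_value(z):
    return 1.0 - NormalDist().cdf(z)

print(round(significance(0.08), 2))  # 1.41
print(round(significance(0.15), 2))  # 1.04
```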

The observed limits, and those expected with and without assuming a SM Higgs boson with m H = 125 GeV, are shown for each channel and for their combination in Fig. 17. A signal 3.4 times larger than predicted by the SM is excluded at 95 % CL using the CL s method. In the absence of a SM Higgs boson, a signal 2.2 times larger than the SM prediction is expected to be excluded; in the presence of a SM Higgs boson, the expected exclusion is 3.1 times the SM prediction. This is also summarised in Table 5. In particular, the last bin of Fig. 18 includes the two last bins of the NN distribution from the most signal-rich region (≥ 6j, ≥ 4b) and the two last bins of the NN distribution in (≥ 4j, ≥ 4b) from the fit. The signal is normalised to the fitted value of the signal strength (μ = 1.5) and the background is obtained from the global fit. A signal strength 3.4 times larger than predicted by the SM, which is excluded at 95 % CL by this analysis, is also shown. Figure 19 demonstrates the effect of various systematic uncertainties on the fitted value of μ and the constraints provided by the data. The post-fit effect on μ is calculated by fixing the corresponding nuisance parameter at $\hat{\theta} \pm \sigma_\theta$, where $\hat{\theta}$ is the fitted value of the nuisance parameter and $\sigma_\theta$ is its post-fit uncertainty, and performing the fit again. The difference between the default and the modified value of μ, Δμ, represents the effect of this particular systematic uncertainty on μ. The largest effect arises from the uncertainty on the normalisation of the irreducible tt+bb background. This uncertainty is reduced by more than half from its initial value of 50 %. The tt+bb background normalisation is pulled up by about 40 % in the fit, resulting in an increase of the observed tt+bb yield with respect to the Powheg+Pythia prediction. Most of the reduction of the uncertainty on the tt+bb normalisation results from the significant number of data events in the signal-rich regions dominated by the tt+bb background.
With no Gaussian prior considered on the tt+bb normalisation, as described in Sect. 8, the fit still prefers an increase in the amount of tt+bb background by about 40 %.
The tt+bb modelling uncertainties affecting the shape of this background also have a significant effect on μ. These systematic uncertainties affect only the tt+bb modelling and are not correlated with the other tt+jets backgrounds. The largest of these uncertainties is due to the renormalisation scale choice, which drastically changes the shape of the NN discriminant for the tt+bb background, making it appear more signal-like.
The tt+cc normalisation uncertainty is ranked third (Fig. 19) and its pull is slightly negative, while the post-fit yields for tt+cc increase significantly in the four- and five-jet regions of the single-lepton channel and in the two- and three-jet regions of the dilepton channel (see Tables 10 and 11 of Appendix E). It was verified that this effect is caused by the interplay between the tt+cc normalisation uncertainty and several other systematic uncertainties affecting the tt+cc background yield.

Fig. 19 The fitted values of the nuisance parameters with the largest impact on the measured signal strength. The points, drawn according to the scale of the bottom axis, show the deviation of each fitted nuisance parameter, $\hat{\theta}$, from its nominal value θ_0, in units of the pre-fit standard deviation Δθ. The error bars show the post-fit uncertainties, σ_θ, which are close to 1 if the data do not provide any further constraint on that uncertainty. Conversely, a value of σ_θ much smaller than 1 indicates a significant reduction with respect to the original uncertainty. The nuisance parameters are sorted according to the post-fit effect of each on μ (hashed blue area), following the scale of the top axis, with those with the largest impact at the top.

The noticeable effect of the light-jet tagging (mistag) systematic uncertainty is explained by the relatively large fraction of the tt+light background in the signal region with four b-jets in the single-lepton channel. The tt+light events enter the 4-b-tag region through a mistag, as opposed to the 3-b-tag region, where tagging a c-jet from a W boson decay is more likely. Since the amount of data in the 4-b-tag regions is not large, this uncertainty cannot be constrained significantly.
The tt + Z background with Z → bb is an irreducible background to the tt H signal as it has the same number of b-jets in the final state and similar event kinematics. Its normalisation has a notable effect on μ (dμ/dσ (tt V ) = 0.3) and the uncertainty arising from the tt+V normalisation cannot be significantly constrained by the fit. Other leading uncertainties include b-tagging and some components of the JES uncertainty.
Uncertainties arising from the jet energy resolution, jet vertex fraction, jet reconstruction and those JES components that primarily affect low-pT jets, as well as the tt+light-jet background modelling uncertainties, are constrained mainly in the signal-depleted regions. These uncertainties do not have a significant effect on the fitted value of μ.

Summary
A search has been performed for the Standard Model Higgs boson produced in association with a top-quark pair (tt H) using 20.3 fb−1 of pp collision data at √s = 8 TeV collected with the ATLAS detector during the first run of the Large Hadron Collider. The search focuses on H → bb decays, and is performed in events with either one or two charged leptons.
To improve sensitivity, the search employs a likelihood fit to data in several jet and b-tagged jet multiplicity regions. Systematic uncertainties included in the fit are significantly constrained by the data. Discrimination between signal and background is obtained in both final states by employing neural networks in the signal-rich regions. In the single-lepton channel, discriminating variables are calculated using the matrix element technique; they are used in addition to kinematic variables as input to the neural network. No significant excess of events above the background expectation is found for a Standard Model Higgs boson with a mass of 125 GeV. An observed (expected) 95% confidence-level upper limit of 3.4 (2.2) times the Standard Model cross section is obtained. By performing a fit under the signal-plus-background hypothesis, the ratio of the measured signal strength to the Standard Model expectation is found to be μ = 1.5 ± 1.1.

Figure 20 shows the contributions of different Higgs boson decay modes in each of the analysis regions in the single-lepton and dilepton channels. The H → bb decay is the dominant contribution in the signal-rich regions.
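As a purely illustrative sketch of the signal-strength definition quoted above, consider a single Poisson counting bin with expected SM signal yield s and background yield b; the numbers below are invented for illustration and are not the analysis inputs, which come from a profile-likelihood fit over many regions with constrained nuisance parameters:

```python
import math

def mu_hat(n_obs, s_sm, b):
    """Maximum-likelihood signal strength for one Poisson bin:
    n_obs ~ Poisson(mu * s_sm + b)  =>  mu_hat = (n_obs - b) / s_sm."""
    return (n_obs - b) / s_sm

def mu_stat_uncertainty(n_obs, s_sm):
    """Approximate statistical uncertainty on mu from the Poisson
    variance of the observed count (no systematic uncertainties)."""
    return math.sqrt(n_obs) / s_sm

# Invented yields: 115 observed events, 10 expected signal, 100 background.
print(mu_hat(115, 10.0, 100.0))                    # -> 1.5
print(round(mu_stat_uncertainty(115, 10.0), 2))    # -> 1.07
```

In the real analysis, μ and its total uncertainty are instead obtained from the combined likelihood fit across all regions, with systematic uncertainties entering as the constrained nuisance parameters discussed above.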

Appendix B: Event yields prior to the fit
The event yields prior to the fit for the combined e+jets and μ+jets samples for the different regions considered in the analysis are summarised in Table 6.
The event yields prior to the fit for the combined ee+jets, μμ+jets and eμ+jets samples for the different regions considered in the dilepton channel are summarised in Table 7.
Appendix C: Discrimination power of input variables
Figures 21, 22, 23, 24, 25, 26 and 27 show the discrimination between signal and background for the top four input variables in each region where an NN is used in the single-lepton and dilepton channels. In Fig. 21, the NN is designed to separate tt+HF from tt+light. Tables 8 and 9 show the pre-fit and post-fit contributions of the different categories of uncertainties (expressed in %) for the tt H signal and main background processes in the (≥ 6j, ≥ 4b) region of the single-lepton channel and the (≥ 4j, ≥ 4b) region of the dilepton channel, respectively.

Appendix D: Tables of systematic uncertainties in the signal region
The "Lepton efficiency" category includes systematic uncertainties on electrons and muons listed in Table 3. The "Jet efficiency" category includes uncertainties on the jet vertex fraction and jet reconstruction. The "tt heavy-flavour modelling" category includes uncertainties on the tt+bb NLO shape and on the tt+cl p T reweighting and generator. The "Theoretical cross sections" category includes uncertainties on the single top, diboson, V +jets and tt+V theoretical cross sections. The "tt H modelling" category includes contributions from tt H scale, generator, hadronisation model and PDF choice. The details of the evaluation of the uncertainties can be found in Sect. 8.

Appendix E: Post-fit event yields
The post-fit event yields for the combined single-lepton channel for the different regions considered in the analysis are summarised in Table 10. Similarly, the post-fit event yields for the combined dilepton channels for the different regions are summarised in Table 11.

Table 8 Single-lepton channel: normalisation uncertainties (expressed in %) on signal and main background processes for the systematic uncertainties considered, before and after the fit to data in the (≥ 6j, ≥ 4b) region of the single-lepton channel. The total uncertainty can differ from the sum in quadrature of the individual sources due to anti-correlations between them.