NLO QCD corrections to top anti-top bottom anti-bottom production at the LHC: 2. Full hadronic results

We present predictions for top anti-top bottom anti-bottom production at the LHC in next-to-leading-order QCD. The precise description of this background process is a prerequisite for observing associated top anti-top Higgs production in the H → bb decay channel and for a direct measurement of the top-quark Yukawa coupling at the LHC. The leading-order cross section is extremely sensitive to scale variations. We observe that the traditional scale choice adopted in ATLAS simulations underestimates the top anti-top bottom anti-bottom background by a factor of two, and we introduce a new dynamical scale that stabilizes the perturbative predictions. We study various kinematic distributions and observe that the corrections have little impact on their shapes if standard cuts are applied. In the regime of highly boosted Higgs bosons, which offers better prospects for observing the top anti-top Higgs signal, we find significant distortions of the kinematic distributions. The one-loop amplitudes are computed using process-independent algebraic manipulations of Feynman diagrams and numerical tensor reduction. We find that this approach provides very high numerical stability and CPU efficiency.


Introduction
The discovery of the Higgs boson and the measurement of its interactions with massive quarks and vector bosons represent a central goal of the ATLAS [1,2] and CMS [3] experiments at the Large Hadron Collider (LHC). The present limits from direct searches and electroweak precision measurements favour a Higgs-mass range below the W-decay threshold. In this light-Higgs scenario, the Higgs predominantly decays into bottom quarks, and its observation is very challenging at the LHC. In the dominant Higgs-production channel, i.e. in gluon-gluon fusion, the H → bb signal is completely obscured by a huge QCD background.
Associated production mechanisms, where the Higgs boson is accompanied by a massive gauge boson or a top-quark pair, feature more distinctive signatures that can be exploited to reduce the background to the H → bb final state. These associated Higgs-production channels provide direct access to the interactions of the Higgs boson with gauge bosons and heavy quarks, and their observation would permit a test of the electroweak symmetry-breaking mechanism. But the QCD background to associated WH, ZH, and ttH production followed by H → bb decay remains a critical issue, which requires significant progress in two directions: the low signal-to-background ratios must be increased by means of improved selection strategies, and more precise descriptions of the background are needed in order to reduce systematic uncertainties. In this paper we address the latter issue by providing next-to-leading-order (NLO) QCD predictions for the irreducible QCD background to ttH(H → bb) production.
The strategies elaborated by ATLAS and CMS to identify the ttH(H → bb) signal [2][3][4][5][6][7] are based on the full reconstruction of the ttbb signature, starting from a final state with four b quarks and additional light jets. After imposing four b-taggings, a reconstruction of the top quarks is performed. This permits the identification of two b quarks as top-decay products. The remaining two b quarks constitute a Higgs candidate, and their invariant-mass distribution is the relevant observable in the search for the Higgs signal. However, the presence of multiple b quarks and light jets in the final state represents a serious obstacle to the correct identification of the bb Higgs candidates. Realistic simulations indicate that only about 1/3 of the selected b-quark pairs have the correct combinatorics, while the other Higgs candidates contain b jets from top decays or mis-tagged light jets. This so-called combinatorial background significantly dilutes the Higgs signal and increases its background contamination. The QCD processes pp → ttbb and ttjj are the main background components. The latest ATLAS and CMS simulations [2,3], for 30 fb⁻¹ and 60 fb⁻¹, respectively, anticipate a statistical significance around 2σ (ignoring systematic uncertainties) and a fairly low signal-to-background ratio of order 1/10. This calls for better than 10% precision in the background description, a very demanding requirement from both the experimental and theoretical points of view.
More recently, alternative search strategies based on the selection of highly boosted Higgs bosons, which decay into "fat jets" containing two b quarks, have opened new and very promising perspectives, both for V H(H → bb) [8] and ttH(H → bb) [9]. In the case of ttH, this novel approach might enable a better background suppression and increase the signal-to-background ratio beyond 1/3. Moreover, three b-taggings would be sufficient to strongly suppress the ttjj contamination. In this case the background would be completely dominated by ttbb production.
The calculation of the NLO QCD corrections to the process pp → ttbb, first presented in Refs. [10,11] and subsequently confirmed in Ref. [12], constitutes another important step towards the observability of the ttH(H → bb) signal at the LHC. The ATLAS simulations of the ttbb background [2,4,5] are based on leading-order (LO) matrix elements and the scale choice µ_R = µ_F = E_thr/2, where E_thr = 2m_t + m_bb is the partonic threshold energy. These predictions suffer from huge scale uncertainties: the LO ttbb cross section can vary by up to a factor of four if the renormalization and factorization scales are identified with different kinematic parameters [4]. However, in the case of the ttH signal [13,14] it was found that setting the QCD scale equal to half the threshold energy leads to fairly moderate NLO corrections (K ≃ 1.2). The same behaviour was subsequently observed in two other processes that feature a final state similar to ttbb: at the scale µ_R,F = E_thr/2, the NLO QCD corrections to pp → ttj [15] and ttZ [16] at the LHC amount to K ≃ 1.0−1.15 (for p_T,jet ≳ 20−50 GeV) and K ≃ 1.35, respectively. On the basis of these observations one might have expected that LO ttbb predictions obtained with the same scale choice would have a decent level of precision, say at the 20−30% level. However, it turned out that the NLO corrections to ttbb production, for µ_R,F = E_thr/2, are much larger and correspond to a K factor of about 1.8 [11,12]. Apart from the sizable impact on the ttH analysis, this big K factor suggests the presence of large logarithms that tend to spoil the convergence of the perturbative expansion. As we argue, this is mainly due to the fact that the scale µ_R,F = E_thr/2 does not provide an adequate description of the QCD dynamics of ttbb production. To cure this problem we introduce a new and more natural scale choice. This leads to a much smaller K factor and also reduces the residual scale dependence at NLO.
Besides the issue of scale dependences, in this paper we also investigate NLO effects on various differential distributions and selection cuts that are relevant for the ttH analysis, both within the traditional ATLAS/CMS approach and in the boosted-Higgs framework.
In addition to its phenomenological relevance, the calculation of the NLO corrections to pp → ttbb also constitutes an important technical benchmark. The description of many-particle processes at NLO plays an important role for the LHC physics programme [17,18]. Numerous Higgs and new-physics signals, as well as their backgrounds, are characterized by multi-particle final states. These processes often involve high powers of α_s that give rise to very large scale uncertainties if NLO effects are not taken into account. The most prominent multi-particle reactions that require NLO predictions are summarised in the so-called Les Houches priority list [17,18]. In recent years, the technical challenges raised by these calculations have triggered an impressive amount of conceptual and technical developments. In particular, in order to treat one-loop amplitudes with particle multiplicities higher than five, various methods based on tensor integrals [19][20][21][22][23][24][25][26][27][28] or on-shell reductions [29][30][31][32][33][34][35][36][37][38][39][40][41][42][43][44] have been developed. Very recently, these different techniques have led to the first NLO results for six-particle processes at the LHC, namely for pp → ttbb [11,12], the leading-colour [45] and the full-colour contributions [46] to pp → Wjjj, and for the qq contribution to pp → bbbb [47].

To compute the virtual corrections to ttbb production we employ explicit diagrammatic representations of the one-loop amplitudes. A key feature of our approach is the factorization of colour structures at the level of individual diagrams. This reduces the CPU cost of colour sums essentially to zero. Helicity-dependent structures are algebraically reduced to a common set of so-called standard matrix elements. In this way the sums over physical helicities are strongly accelerated.
Tensor loop integrals are related to scalar integrals by means of numerical algorithms that systematically avoid numerical instabilities from inverse Gram determinants and other spurious singularities [20,21]. The efficiency of the calculation is strongly increased by recycling a multitude of common subexpressions, which occur both inside individual diagrams and in tensor integrals of different diagrams that share common sub-topologies. As demonstrated by the remarkably high CPU speed of the numerical code, these procedures strongly mitigate the factorial complexity that is inherent in Feynman diagrams. The real corrections are handled with the dipole subtraction method [50][51][52][53], and the phase-space integration is performed with adaptive multi-channel methods [54][55][56]. Our results have been confirmed with the OPP method [41][42][43][44] and HELAC-1LOOP [48,49] within the statistical Monte Carlo error of 0.2% [12].
The paper is organised as follows. Section 2 is devoted to technical aspects of the calculation of the virtual and real corrections. In Section 3 we present predictions for the LHC. In particular, we discuss the scale dependence and investigate NLO effects on the shape of several distributions. Our results are summarised in Section 4. In App. A we outline the algebraic reduction of helicity structures, and in App. B we provide benchmark results for the matrix element squared in lowest order and including virtual corrections for one phase-space point.

Details of the calculation
In LO, the hadronic production of ttbb proceeds via the partonic processes qq → ttbb and gg → ttbb, which are described by 7 and 36 tree diagrams, respectively (see Figure 1). The virtual NLO QCD corrections to these channels involve 188 and 1003 one-loop diagrams, respectively. A few examples of pentagon and hexagon graphs are illustrated in Figure 2. The real emission contributions are induced by the partonic processes qq → ttbbg, gg → ttbbg, qg → ttbbq, and gq → ttbbq. The gg channel involves 341 tree diagrams. The qq, qg and gq channels, which are related by crossing transformations, are described by 64 tree diagrams each (see Figure 3).

Figure 3: Sample real-emission diagrams contributing to the channels qq → ttbbg, qg → ttbbq, and gg → ttbbg.
In the following we describe the calculation of the virtual and real NLO corrections. Each of these contributions has been worked out twice and independently, resulting in two completely independent computer codes. Top quarks are treated fully inclusively, i.e. we do not include top decays. Moreover we handle bottom quarks in the massless approximation, corresponding to the five-flavour scheme. However, we do not take into account the suppressed contribution from initial-state bottom quarks.

Virtual corrections
The virtual corrections are generated with two in-house Mathematica programs that reduce Feynman diagrams and generate Fortran77 code in a fully automated way. One of the two programs relies on FormCalc 5.2 [57] for preliminary algebraic manipulations. Here we outline the underlying structure of the calculation, with emphasis on colour/helicity structures and tensor integrals. In this respect, both programs are organised in a fairly similar way. Since the treatment of the qq channel is already documented in Ref. [10], we focus on the gg channel.

Diagram-by-diagram approach
The virtual corrections are obtained from the interference of the one-loop and LO matrix elements, summed over external-state colours and helicities. This quantity is computed on a diagram-by-diagram basis: the contributions of individual loop diagrams (Γ) are evaluated by separate numerical routines and summed explicitly. The Feynman diagrams are generated with two independent tools, FeynArts 1.0 [58] and FeynArts 3.2 [59].

Colour factorization
One of the key features of the diagram-by-diagram approach is that the cost related to the large number of diagrams is compensated by the possibility of performing colour sums very efficiently. This is a consequence of colour factorization: individual (sub)diagrams consist of a single colour-stripped amplitude A^(Γ) multiplied by a trivial colour factor C^(Γ). More precisely, each diagram gives rise to 3^{n_4} colour-factorized contributions of type (2.2), where n_4 is the number of quartic gluon vertices in the diagram. These terms are handled as separate subdiagrams. However, most diagrams do not involve quartic couplings, and their colour structures factorize completely. For instance, the last diagram in Figure 2 involves a single colour structure, where a_1, a_2 and i_3, i_4, i_5, i_6 are the colour indices of the gg and ttbb external states, numbered in this order. All colour structures can be easily reduced to Kronecker symbols and Gell-Mann matrices T^a = λ^a/2 by using well-known SU(N_c) relations. In the gg channel, this reduction leads to a colour basis of 14 elements (2.5). For N_c = 3, only 13 of these colour operators are independent, owing to a linear relation among them. The summation over external colours is performed once and for all at the level of the colour basis and the LO matrix element. To this end, we compute the colour-interference matrix and reduce the tree matrix element in colour space. Then, upon reduction of the factorized colour structure of the loop diagrams, we obtain the colour-summed loop-tree interference, where the coefficients c^(Γ)_k are simple numbers. The colour-summed result is given by a combination of previously computed colour-Born interference terms (2.9). This requires a single evaluation of the non-trivial colour-stripped amplitude A^(Γ) of each (sub)diagram.
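The bookkeeping described above can be illustrated with a small numerical toy model. The basis size and "colour structures" below are hypothetical placeholders (not the actual 14-element ttbb basis); the point is only that the colour-interference matrix is computed once, after which the colour sum for every loop diagram reduces to a short matrix contraction:

```python
import numpy as np

# Toy model of the diagram-by-diagram colour treatment (illustrative only).
rng = np.random.default_rng(0)
n_basis, n_col = 3, 81  # hypothetical basis size / external colour assignments

# C[k, i]: value of colour-basis element k for external-colour assignment i
C = rng.standard_normal((n_basis, n_col)) + 1j * rng.standard_normal((n_basis, n_col))

# Colour-interference matrix, computed once and for all:
# M[k, l] = sum_i C*_{k,i} C_{l,i}
M = np.conj(C) @ C.T

# Born amplitude decomposed in the colour basis: B_i = sum_l b_l C_{l,i}
b = rng.standard_normal(n_basis) + 1j * rng.standard_normal(n_basis)

def colour_summed_interference(a_gamma, c_gamma):
    """Colour-summed 2 Re[(loop diagram)* x Born] for one diagram whose
    colour structure factorizes as A^(Gamma) * sum_k c_k C_k
    (real coefficients c_k, as in the text)."""
    return 2.0 * np.real(np.conj(a_gamma) * (c_gamma @ M @ b))
```

The expensive object M is diagram independent, so each diagram contributes only its scalar amplitude `a_gamma` and its few colour coefficients `c_gamma`, which is why the CPU cost of colour sums is essentially zero.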
Helicity structures are handled in a very similar way. The helicity-dependent parts of all diagrams are reduced to a common basis of so-called Standard Matrix Elements (SMEs), and helicity sums are performed once and for all at the level of the SMEs-Born interference (see below). The diagram-independent treatment of the helicity-dependent parts of loop graphs is made possible by the covariant decomposition of tensor integrals, i.e. by replacing loop momenta (in the numerator) with external momenta and metric tensors.

Covariant decomposition and numerical reduction of tensor integrals
Tensor one-loop integrals with N propagators and P Lorentz indices are expressed in terms of totally symmetric covariant structures {g···g p···p}^{µ_1...µ_P}_{j_1...j_P} involving g^{µν} and the external momenta p_1, ..., p_{N−1} [see (2.12)], with D denoting the number of space-time dimensions. For details of the notation we refer to Ref. [21]. To describe N-point integrals with N ≥ 5, tensor structures with only four external momenta would be sufficient. However, in order to avoid potential instabilities due to inverse Gram determinants we use a redundant set of structures, including the metric tensor and N − 1 momenta.
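For orientation, the lowest-rank cases of this standard covariant decomposition read (a schematic sketch following the conventions of Ref. [21]; normalization details omitted):

```latex
T_N^{\mu} \;=\; \sum_{j_1=1}^{N-1} p_{j_1}^{\mu}\, T^N_{j_1},
\qquad
T_N^{\mu\nu} \;=\; g^{\mu\nu}\, T^N_{00}
  \;+\; \sum_{j_1,j_2=1}^{N-1} p_{j_1}^{\mu}\, p_{j_2}^{\nu}\, T^N_{j_1 j_2},
```

with higher ranks built from totally symmetric products of metric tensors and external momenta, as indicated by the structures {g···g p···p} above.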
The virtual corrections to qq → ttbb and gg → ttbb involve tensor integrals up to rank P = 3 and P = 4, respectively. The one-loop amplitudes are expressed as linear combinations of tensor-integral coefficients T^N_{j_1...j_P}. The latter are evaluated by numerical libraries that recursively reduce them to master integrals using the methods of Refs. [20,21]. Avoiding an explicit reduction of analytic expressions to master integrals, this numerical approach prevents prohibitively large expressions and permits adapting the reduction strategy to the specific numerical problems that appear in different phase-space regions.
Tensor N-point integrals with N ≥ 5 are expressed in terms of lower-rank and lower-point integrals exploiting the four-dimensionality of space-time [20,21]. The tensor rank and the number of propagators are simultaneously reduced without introducing inverse Gram determinants. Consequently, the maximal power of inverse Gram determinants resulting from the entire reduction is given by the maximal rank of four-point integrals, which never exceeds four in renormalizable gauges. Scalar hexagons and pentagons are reduced to boxes using Melrose's method [60]. Tensor 4-point and 3-point integrals are reduced to scalar integrals with the Passarino-Veltman algorithm [61] as long as no small Gram determinant appears in the reduction. If small Gram determinants occur, alternative schemes are applied [21]. More precisely, we make use of expansions of the tensor coefficients about the limit of vanishing Gram determinants and possibly other kinematical determinants. One- and two-point tensor integrals are obtained with numerically stable analytic expressions.
Ultraviolet (UV) divergences are regularized dimensionally throughout, but infrared (IR) divergences are treated in different variants, which comprise pure dimensional regularization with strictly massless light quarks and a hybrid scheme with small quark masses. The corresponding scalar integrals are evaluated using the methods and results of Refs. [63,64], and different regularization schemes are translated into each other as described in Ref. [65].
The calculation of tensor integrals is implemented in two independent Fortran libraries. This permits detailed cross checks, which confirm the excellent numerical stability of the reduction procedure. An automatic cache system is implemented that strongly boosts the reduction by recycling a multitude of tensor integrals among Feynman diagrams with common sub-topologies. The virtual corrections to gg → ttbb comprise about 350 different scalar integrals, which require roughly 10 ms of CPU time per phase-space point on a 3 GHz Intel Xeon processor. The calculation of the complete set of scalar and tensor integrals with and without the cache system takes approximately 40 ms and 200 ms, respectively.

Rational parts
In D = 4 − 2ǫ dimensions, UV-singular tensor integrals give rise to 1/ǫ_UV poles. Consequently, their D-dimensional coefficients need to be expanded in D − 4, resulting in so-called rational terms that are proportional to the pole residues R^N_{j_1...j_P}. Rational contributions originate from D-dependent terms in tensor-reduction identities and in the loop-momentum-independent part of the diagram numerators. The relevant expansions are automatically performed by means of a catalogue of residues of UV poles.
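The mechanism can be sketched as follows (an illustrative reconstruction, not the paper's eq. (2.14); ǫ is defined via D = 4 − 2ǫ): a UV-divergent integral multiplied by a D-dependent coefficient leaves behind a finite rational remainder,

```latex
T \;=\; \frac{R}{\epsilon_{\mathrm{UV}}} \,+\, T^{\mathrm{fin}} \,+\, \mathcal{O}(\epsilon),
\qquad
c(D) \;=\; c(4) + (D-4)\, c' ,
```
```latex
c(D)\,T \;=\; c(4)\left(\frac{R}{\epsilon_{\mathrm{UV}}} + T^{\mathrm{fin}}\right)
  \;-\; 2\, c' R \;+\; \mathcal{O}(\epsilon),
```

since (D − 4)/ǫ_UV = −2. The term −2 c′ R is the rational contribution proportional to the pole residue, which is exactly what the catalogue of UV-pole residues supplies.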
Note that in (2.14) we have implicitly assumed that rational terms resulting from 1/ǫ and 1/ǫ² poles of IR kind vanish. This is a non-trivial and general property of one-loop QCD amplitudes. More precisely, while rational terms of IR origin can be present in the wave-function renormalization factors, they cancel in truncated one-loop amplitudes. This holds within the 't Hooft-Feynman gauge and similar gauge fixings, as was explicitly proven in Appendix A of Ref. [10].

Algebraic reduction of helicity structures and helicity sums
The helicity structures encountered in the explicit evaluation of all Feynman diagrams are algebraically reduced to a common basis of SMEs as described below. The general form of the SMEs for the gg → ttbb channel is given in (2.15); the Lorentz structure contracted with the spinor chains consists of combinations of metric tensors and external momenta. These compact spinor chains decouple the helicity information from the remnant parts of the diagrams, so that helicity sums can be performed in a diagram-independent and efficient way. In practice, the colour-stripped part of each loop diagram [see (2.10)] is expressed as a linear combination of SMEs and tensor integrals, with coefficients K^(Γ)_{m;i_1...i_P} that are rational functions of the kinematic invariants. These functions involve only denominators from intermediate-particle propagators and are free of spurious poles that might generate numerical instabilities.
Helicity sums are performed at the level of the interference of the diagram-independent SMEs with the colour-projected Born amplitude (2.9). The resulting interference matrix M_km is computed only once per phase-space point, employing the Weyl-van der Waerden spinor formalism of Ref. [68]. Using M_km one can directly obtain the colour- and helicity-summed contribution of each loop diagram in terms of its colour- and helicity-independent form factors F^(Γ)_m and the coefficients c^(Γ)_k of its factorized colour structure (2.10). Owing to the high number of SMEs M̂_m and the complexity of the corresponding form factors F^(Γ)_m, the representation (2.18) yields fairly large expressions. For instance, the size of the numerical routines describing individual hexagon diagrams in the gg channel is of the order of 0.5−1 MB. The reduction of helicity structures to SMEs is one of the key aspects that determine the size and the speed of the code. In order to avoid possible numerical cancellations, this procedure is entirely based on algebraic identities that are free of denominators. The reduction algorithm consists of two main steps. The first step is based on process-independent identities in D dimensions. To reduce helicity structures of type (2.15) we employ: momentum conservation; Dirac algebra; ordering of Dirac matrices inside Dirac chains; the Dirac equation; and transversality and gauge-fixing conditions for the gluon-polarization vectors, p_i^µ ε_µ(p_j) = 0 for i, j = 1, 2. The basis of SMEs obtained with these identities contains more than a thousand elements for the gg channel.
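Schematically, the two ingredients just described combine as follows (our illustrative sketch; index placement and conjugation conventions may differ from the paper's (2.17)-(2.18)):

```latex
M_{km} \;=\; \sum_{\mathrm{hel}}
  \big[\hat{\mathcal{M}}_m\big]^{*}\, \mathcal{M}^{\mathrm{Born}}_{k},
\qquad
\sum_{\mathrm{col,\,hel}} 2\,\mathrm{Re}\!\left[
  \big(\mathcal{M}^{(\Gamma)}\big)^{*}\, \mathcal{M}^{\mathrm{Born}}\right]
\;=\; 2\,\mathrm{Re}\sum_{k,m} c^{(\Gamma)}_{k}\,
  \big(F^{(\Gamma)}_{m}\big)^{*}\, M_{km} .
```

The matrix M_km carries all colour and helicity sums and is diagram independent, so each loop diagram only contributes its form factors F^(Γ)_m and colour coefficients c^(Γ)_k.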
After these manipulations in D dimensions we extract all rational terms by performing the relevant expansions in D − 4. We then proceed with a second reduction step, based on four-dimensional relations. Specifically, we apply two alternative reduction algorithms that are based on relations derived from Chisholm's identity.
The first algorithm is constructed along the lines of the reduction described in Ref. [10] for the qq channel. Each fermion chain is split into two contributions via insertion of chiral projectors. This permits the use of various relations of the type (2.20) [10,66], where the tensor product connects Dirac matrices that belong to different fermion chains. By means of such identities one can exchange Dirac matrices between chains that are connected by γ_µ ⊗ γ^µ contractions. As described in Ref. [10], using identities of type (2.20) in combination with the above-mentioned D-dimensional relations (Dirac equation, etc.) one can obtain a rich variety of non-trivial reduction identities. In this way we have constructed a fairly sophisticated reduction algorithm (see App. A) that relates all helicity structures present in the gg channel to 502 SMEs. In spite of its efficiency, this reduction procedure has the disadvantage of depending on process-specific aspects, such as the number of massive and massless fermion chains and the number of external momenta. It is thus important to investigate the trade-off between the obtained efficiency and the time-consuming task of designing the reduction on a process-by-process basis. To this end, we have implemented an alternative and much simpler reduction method. This procedure is entirely process-independent. It does not make use of chiral projectors and consists of a single four-dimensional identity, which can be derived from Chisholm's identity and permits the elimination of any spinor chain involving more than three Dirac matrices without introducing γ_5 and ǫ-tensors. In this case we could reduce all gg-channel helicity structures to 970 SMEs. Comparing the number of SMEs obtained with the process-dependent and process-independent algorithms, we observe that the former is superior by roughly a factor of two.
Thus, if we naively assume that the CPU efficiency scales with the number of SMEs, we would expect a factor-two difference in the speed of the numerical code. In contrast, we find that the CPU efficiency obtained with the two reductions is almost identical. This suggests that the reduction of the number of SMEs is compensated by an increase in the size of the form factors. This unexpected result means that the obtained CPU performance, at least for this process, does not depend on sophisticated, process-dependent optimisations.

Real corrections
The calculation of the qq channel has been described in Ref. [11]. The evaluation of the qg and gq channels is done in the same way. In the following we sketch the calculation for the gg channel. We have again performed two independent calculations of all building blocks.
In both evaluations of the real corrections the amplitudes are calculated as helicity matrix elements generated with Madgraph 4.1.33 [67]. While the amplitudes for qq → ttbbg and qg → ttbbq have been checked with the Weyl-van der Waerden spinor formalism of Ref. [68], those for gg → ttbbg have been verified with an implementation of off-shell recursion relations [69][70][71]. The singularities for soft and collinear gluon emission are isolated via dipole subtraction [50][51][52][53] for NLO QCD calculations, using the formulation of Ref. [53] for massive quarks. Soft and collinear singularities in the "endpoint part" of the subtraction function (the I operator of Refs. [50,53]), i.e. the part of the subtraction terms that has to be combined with the virtual corrections, are regularized using the same regularization prescription (dimensional or with small quark masses) as the corresponding virtual corrections. No regularization is needed in the subtraction terms for the real corrections. For both the qq and gg channels 30 different dipole subtraction terms need to be included, while each qg/gq channel requires only 10, since we demand b quarks with finite transverse momentum in the final state. After combining virtual and real corrections, singularities connected to collinear configurations in the final state cancel for "collinear-safe" observables after applying a jet algorithm. Singularities connected to collinear initial-state splittings are removed via MS-bar QCD factorization by PDF redefinitions. In both evaluations the phase-space integration is performed with multi-channel Monte Carlo generators [54] and adaptive weight optimisation similar to the one implemented in Lusifer [56].
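The subtraction idea can be demonstrated with a one-dimensional toy model (the kernels below are illustrative, not the actual Catani-Seymour dipoles): a "real-emission" integrand diverges at x → 0, a counterterm with the same singular limit is subtracted point by point so the Monte Carlo integrand stays bounded, and the counterterm's analytic integral, the analogue of the I-operator endpoint part, is accounted for separately, where it would cancel the poles of the virtual corrections:

```python
import numpy as np

def real_integrand(x):
    # singular as x -> 0, like the soft limit of |M_real|^2
    return np.exp(-x) / x

def counterterm(x):
    # same x -> 0 behaviour, with a known analytic integral
    return 1.0 / x

def subtracted_integral(n=400_000, seed=1):
    """Monte Carlo estimate of integral_0^1 [real - counterterm] dx.
    The pointwise difference is finite, so plain sampling converges;
    the exact value is integral_0^1 (e^{-x} - 1)/x dx = -0.79660..."""
    x = np.random.default_rng(seed).random(n)
    return np.mean(real_integrand(x) - counterterm(x))
```

In the actual calculation the counterterms are the dipole kernels, the pointwise difference is integrated over the real-emission phase space, and the analytically integrated dipoles (I, P, K terms) are combined with the virtual corrections and the PDF redefinitions.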
Version 1 of the real corrections employs the MadDipole implementation of dipole subtraction [72]. The phase-space integration, implemented in C++, is based on RacoonWW, but the phase-space mappings are built up in a more generic way, very similar to the approach of Lusifer [56]. For each of the 341 bremsstrahlung Feynman diagrams a corresponding channel is taken into account in the Monte Carlo integration.

Table 1: Statistics and speed of various parts of the calculation, based on 2 × 10^7 events (before applying cuts) generated on a 3 GHz Intel Xeon processor using the pgf77 compiler with standard options.
In version 2 all dipole subtraction terms have been implemented directly into the Monte Carlo generator. The Monte Carlo generator is a further development of the one used in COFFERγγ [73] and for the calculation of the NLO corrections to pp → Hjj + X [74]. In addition to the Monte Carlo channels for the bremsstrahlung diagrams, 30 × 36 = 1080 channels are used to map the dipole subtraction terms in the gg channel. These additional channels lead to some improvement in the convergence of the Monte Carlo integration.
All real bremsstrahlung amplitudes of Madgraph have been checked against independent calculations for several phase-space points. The cancellation between real matrix elements and dipole subtraction terms has been verified numerically in all soft and collinear regions. The individual dipole subtraction terms, the subtracted real matrix elements, and the integrated subtraction terms (P and K terms of Refs. [50,53]) have been compared point-wise between the two independent calculations. The agreement was generally at the level of 13 digits. The integrated LO cross section has been verified with SHERPA [75] at the level of the integration errors of 0.2%. From the point of view of Monte Carlo integration the most complicated and time-consuming part is the integration of the real corrections in the gg channel. For the complete NLO cross section we found agreement between the two versions of our code within 1-3σ, where 1σ corresponds to 0.1-0.2%. The results for the distributions coincide within 1-3σ for each bin.
In order to give an impression of the statistics and the required CPU time, we show in Table 1 some numbers for 2 × 10^7 generated phase-space points before cuts. This yields an accuracy of about 0.5% for the NLO cross section. The contributions of the qg/gq channels were calculated for every 4th event, and those of the virtual corrections in the gg channel and the contributions of the qq channel for every 20th event. The bulk of the runtime is taken by the gg channel: the CPU time of the virtual corrections is dominated by this channel and amounts to 180 ms per event, while the virtual corrections in the qq channel are faster by a factor of 20. In order to produce the plots for the scale variations we generated 2 × 10^7 phase-space points for the NLO predictions and 2 × 10^8 for the LO cross section. To generate the distributions we used about 20 times more events for the NLO results.

Predictions for the LHC
The thorough description of a complex signature like ttbb involves numerous possible observables. Here we investigate distributions and cuts that are relevant for the search for ttH(H → bb) at the LHC [5,7], where ttbb contributes to the irreducible background. In our previous work [11] we found a K factor of about 1.8 for the integrated ttbb cross section at the LHC. This unexpectedly large NLO effect raises two important issues that we address in the present analysis. Firstly, we discuss the relation between the large K factor and the scale choice. This leads us to a new and more appropriate scale choice, which improves the convergence of the perturbative expansion. Secondly, we consider possible strategies to reduce the ttbb cross section in order to facilitate the extraction of the ttH signal. In particular, we study the influence of a jet veto on the NLO cross section and its perturbative stability. We also explore the kinematic region of highly boosted bb pairs, which helps to separate the Higgs signal from its QCD background, as first suggested for associated WH and ZH production [8] and, very recently, also for ttH production [9].
We recall that top-quark decays are not included in our calculation. In practice we treat top quarks in a completely inclusive way, and we restrict our analysis to the kinematic properties of the two b quarks that do not result from top decays. From the experimental viewpoint, the presented distributions correspond to the unrealistic situation of perfect top-quark reconstruction. The detailed description of top-decay products and the related issue of b-quark combinatorics are left for future studies.

Input parameters, jet definition, and cuts
We study the process pp → ttbb + X at √s = 14 TeV. For the top-quark mass, renormalized in the on-shell scheme, we take the numerical value m_t = 172.6 GeV [76]. All other QCD partons, including b quarks, are treated as massless particles. Collinear final-state configurations, which give rise to singularities, are recombined into IR-safe jets using a kT-algorithm [77]. Specifically, we adopt the kT-algorithm of Ref. [78] and recombine all final-state b quarks and gluons with pseudorapidity |η| < 5 into jets with separation √(∆φ² + ∆y²) > D = 0.4 in the rapidity-azimuthal-angle plane. Requiring two b jets, this also avoids collinear singularities resulting from massless g → bb splittings. After recombination, we impose cuts on the transverse momenta and rapidities of the b jets: p_T,b > 20 GeV and |y_b| < 2.5. This choice is dictated by the detector geometry and the search for a ttH(H → bb) signal at the LHC [5,7]. The outgoing (anti)top quarks are affected neither by the jet algorithm nor by the phase-space cuts. As far as the b quarks are concerned, the jet algorithm and the requirement of having two b jets with p_T,b > 20 GeV set an effective lower limit on the bb invariant mass of roughly 10 GeV. But the m_bb range that is relevant for the Higgs-boson search is actually much higher. In this kinematic region, with p_T,b ≫ m_b and m_bb ≫ m_b, we expect that the m_b = 0 approximation works fairly well. To assess its precision we compared the LO cross sections for m_b = 0 and m_b = 4.2 GeV using SHERPA [75]. For the integrated cross section, which is dominated by m_bb values well below 100 GeV, we found that the finite-m_b effect is smaller than 3%. We consistently use the CTEQ6 [79] set of PDFs, i.e. we take CTEQ6L1 PDFs with a one-loop running α_s in LO and CTEQ6M PDFs with a two-loop running α_s in NLO, but neglect the suppressed contributions from b quarks in the initial state.
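The clustering step can be sketched with a minimal inclusive kT implementation (a didactic sketch with four-momentum recombination, not the code of Refs. [77,78]):

```python
import math

def rapidity(p):
    # p = (px, py, pz, E)
    return 0.5 * math.log((p[3] + p[2]) / (p[3] - p[2]))

def phi(p):
    return math.atan2(p[1], p[0])

def pt2(p):
    return p[0] ** 2 + p[1] ** 2

def delta_r2(pa, pb):
    dy = rapidity(pa) - rapidity(pb)
    dphi = abs(phi(pa) - phi(pb))
    if dphi > math.pi:
        dphi = 2.0 * math.pi - dphi
    return dy * dy + dphi * dphi

def kt_cluster(particles, D=0.4):
    """Inclusive kT clustering of (px, py, pz, E) tuples with parameter D:
    d_iB = kT_i^2, d_ij = min(kT_i^2, kT_j^2) * DeltaR_ij^2 / D^2;
    the smallest distance either merges a pair or promotes a jet."""
    objs = [list(p) for p in particles]
    jets = []
    while objs:
        imin, jmin, dmin = 0, None, pt2(objs[0])
        for i in range(len(objs)):
            if pt2(objs[i]) < dmin:
                imin, jmin, dmin = i, None, pt2(objs[i])
            for j in range(i + 1, len(objs)):
                dij = min(pt2(objs[i]), pt2(objs[j])) * delta_r2(objs[i], objs[j]) / D**2
                if dij < dmin:
                    imin, jmin, dmin = i, j, dij
        if jmin is None:
            jets.append(objs.pop(imin))            # beam distance wins: a jet
        else:
            merged = [a + b for a, b in zip(objs[imin], objs[jmin])]
            objs.pop(jmin); objs.pop(imin)         # jmin > imin by construction
            objs.append(merged)
    return jets
```

With D = 0.4, two partons closer than ∆R = 0.4 in the rapidity-azimuthal-angle plane are recombined into a single jet before the p_T and rapidity cuts are applied.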
The number of active flavours is N_F = 5, and the respective QCD parameters are Λ_5^LO = 165 MeV and Λ_5^MS = 226 MeV. In the renormalization of the strong coupling constant the top-quark loop in the gluon self-energy is subtracted at zero momentum. In this scheme, the running of α_s is generated solely by the contributions of the light-quark and gluon loops.
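For orientation, the one-loop running coupling implied by Λ_5^LO can be evaluated directly (a minimal sketch using the standard one-loop formula; the function name and the value M_Z = 91.19 GeV are our own choices):

```python
import math

def alpha_s_lo(mu, lambda5=0.165, nf=5):
    """One-loop running strong coupling with nf active flavours:
    alpha_s(mu) = 12*pi / ((33 - 2*nf) * ln(mu^2 / Lambda^2)),
    with mu and Lambda in GeV."""
    beta0_coeff = 33 - 2 * nf
    return 12.0 * math.pi / (beta0_coeff * math.log(mu**2 / lambda5**2))

# Lambda5_LO = 165 MeV reproduces the CTEQ6L1 value alpha_s(MZ) ~ 0.130:
print(round(alpha_s_lo(91.19), 3))  # -> 0.13
```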

Renormalization and factorization scales
The perturbative expansion of the pp → ttbb cross section starts at the fourth power of α_s. Consequently, the LO predictions, and also the K factor, are extremely sensitive to variations of the renormalization scale. In Ref. [4] it was pointed out that the LO ttbb cross section can vary by up to a factor four if the QCD scale is identified with different kinematic parameters. In all recent ATLAS studies of ttH(H → bb) [2,4,5] the signal and its ttbb background were simulated by setting the renormalization and factorization scales equal to half the threshold energy, E_thr = 2m_t + m_bb. For pp → ttH + X this choice was well supported by the existing NLO analyses [13,14]. But, in the absence of NLO predictions for ttbb, the choice of the same scale for signal and background was motivated solely by the assumption that the two processes have similar kinematics. However, in Ref. [11] we found that, if both processes are evaluated at µ_R = µ_F = E_thr/2, pp → ttbb receives much larger NLO corrections (K ≃ 1.8) than pp → ttH (K ≃ 1.2). This is mainly due to the fact that the scale E_thr/2 does not provide an adequate description of the QCD dynamics that governs ttbb production.
The main difference between pp → ttH(H → bb) and its irreducible QCD background is that the former process involves only two powers of α_s at LO. Moreover, the part of the signal process that is mediated by strong interactions does not involve any scale significantly smaller than E_thr/2. In contrast, pp → ttbb is entirely driven by QCD and is proportional to α_s^4 at LO. In the m_bb → 0 limit, the dominant ttbb production mechanism is pp → ttg(g → bb), where a gluon with small virtuality plays a role analogous to the intermediate Higgs boson in the signal process. In this regime, the factorization of ttg production and g → bb splitting provides two well-defined and widely separated scales: E_thr/2 and m_bb, respectively. This simple picture is, however, not applicable to the kinematic region of interest, where m_bb ≳ 100 GeV. Here, pp → ttbb involves various other mechanisms. For instance, the radiation of one or both b quarks off initial-state gluons can play an important role due to collinear enhancements. In order to find an optimal QCD scale, we have tried to identify a dominant production mechanism. To this end, we have inspected the relative weights of the channels corresponding to various Feynman-diagram topologies in our adaptive Monte Carlo generator. However, we found that none of these channels is strongly enhanced with respect to the others. Similarly, we were not able to reproduce the bulk of the cross section in terms of effective approximations based on collinear g → bb splittings of the incoming gluons. This suggests that pp → ttbb is a genuinely multi-scale and multi-channel reaction. We thus decided to adopt a pragmatic scale choice, based on the kinematic properties of the ttbb final state. While m_t sets a clear scale for the couplings to the top quarks, the inspection of differential distributions reveals that the cross section is saturated by b quarks with p_T,b ≪ m_t (see Figs.
8 and 9). In order to account for these different scales we adopt a dynamical QCD scale corresponding to their geometric average,

µ_0^2 = m_t √(p_T,b p_T,b̄). (3.2)

Our LO and NLO predictions are obtained by varying the renormalization (µ_R) and factorization (µ_F) scales around this central value,

µ_R = ξ_R µ_0, µ_F = ξ_F µ_0. (3.3)

In the following sections we investigate the dependence of the LO and NLO integrated cross sections on uniform (ξ_F = ξ_R) and antipodal (ξ_F = ξ_R^(-1)) scale variations in the range 1/8 ≤ ξ_F, ξ_R ≤ 8. We find that uniform variations have a larger impact on the cross section than antipodal ones. For all distributions we provide LO and NLO predictions with uncertainty bands corresponding to factor-two uniform scale variations; more precisely, all observables are evaluated at the three scales ξ_F = ξ_R = 0.5, 1, 2. As we will see from the reduction of the K factor and of the scale uncertainties, the scale choice (3.2) clearly improves the convergence of the perturbative expansion as compared to Ref. [11].
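In code, the central scale and its variation amount to the following (a sketch under our reading of the geometric-average prescription (3.2); the function names are ours):

```python
import math

def mu0(mt, pt_b, pt_bbar):
    """Central dynamical scale, mu0^2 = m_t * sqrt(pT_b * pT_bbar),
    i.e. the geometric average of m_t with the b-jet transverse momenta."""
    return math.sqrt(mt * math.sqrt(pt_b * pt_bbar))

def scales(mt, pt_b, pt_bbar, xi_r=1.0, xi_f=1.0):
    """Renormalization and factorization scales, (mu_R, mu_F) = (xi_R, xi_F) * mu0."""
    m0 = mu0(mt, pt_b, pt_bbar)
    return xi_r * m0, xi_f * m0

# For equal b-jet transverse momenta the scale reduces to sqrt(m_t * pT):
mu_r, mu_f = scales(172.6, 40.0, 40.0)
```

Uniform variations correspond to xi_r == xi_f, antipodal ones to xi_f == 1/xi_r.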

Additional cuts
Besides the standard cuts (3.1), we impose the following kinematic restrictions on the bb system and on the extra jet that is radiated at NLO:

m_bb > m_bb,cut, p_T,bb > p_T,bb,cut, p_T,jet < p_jet,veto. (3.4)

In order to investigate the individual effect of these extra cuts and their correlations with other observables, we have generated differential distributions in the four different setups described in Table 2. Setups I-III implement the standard cuts (3.1) and explore the individual impact of the extra cuts (3.4). Setup IV is a variant of setup I, where the cut (3.1) on the b-jet p_T is increased from 20 GeV to 50 GeV.

Setup I
In this setup we impose the cut m bb > 100 GeV. This removes a large part of the cross section and selects the kinematic region of interest for the ttH(H → bb) signal.

Scale dependence
The LO and NLO integrated cross sections and their dependence on uniform (left plot) and antipodal (right plot) scale variations are displayed in Figure 4. At the central scale we obtain σ_LO = 786.3(2) fb and σ_NLO = 978(3) fb, where the numbers in parentheses are the errors of the Monte Carlo integration for 2 × 10^8 LO events and 2 × 10^7 NLO events before applying cuts. These predictions are not directly comparable to those of Ref. [11], where we did not apply any cut on m_bb. Still we can compare the K factors, which are rather insensitive to m_bb. We observe that the new scale choice (3.2) reduces the NLO corrections from K ≃ 1.77 [11] to K ≃ 1.24. We note that, in spite of the smaller K factor, the new scale choice yields larger LO and NLO cross sections as compared to the scale E_thr/2 used in Ref. [11]. This can be easily seen from Figure 4, where the new and the previous central scales correspond to µ_R/µ_0 = 1 and µ_R/µ_0 = E_thr/(2µ_0) ≫ 1, respectively. 8 In addition to the K factor, the new scale choice also reduces the NLO scale uncertainty: varying the scales up or down by a factor two shifts the cross section by 78% in LO and by 21% in NLO. The improvement with respect to Ref. [11], where we had a 33% NLO uncertainty, is evident also from the shape of the NLO curves in Figure 4. Now we observe a stable point in the vicinity of the central scale.
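The quoted K factor follows directly from the two central cross sections (a trivial check, with the numbers in fb as given above):

```python
def k_factor(sigma_nlo, sigma_lo):
    """NLO correction factor K = sigma_NLO / sigma_LO."""
    return sigma_nlo / sigma_lo

# Setup I central values: sigma_LO = 786.3 fb, sigma_NLO = 978 fb
k_setup1 = k_factor(978.0, 786.3)
print(round(k_setup1, 2))  # -> 1.24
```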

Jet veto
As anticipated in Ref. [11], a jet veto can significantly reduce the large cross section of the ttbb background, which could facilitate the extraction of the ttH signal at the LHC. In Figure 5, the integrated ttbb cross section is plotted versus the upper bound, p_jet,veto, imposed on the jet transverse momentum. Here, as well as in the following figures, the left plot shows the absolute LO (blue) and NLO (red) predictions. The curves and their uncertainty bands represent factor-two (uniform) scale variations around the central value (3.2). The right plot displays the LO and NLO bands normalized to the central value of the LO prediction: the blue band illustrates the relative uncertainty of the LO cross section, i.e. σ_LO(ξ)/σ_LO(ξ = 1); the red curve corresponds to the K factor, K = σ_NLO(ξ = 1)/σ_LO(ξ = 1); and the red band shows the variation of the K factor when the scales are varied in the NLO cross section but kept fixed in the LO cross section, K_ξ = σ_NLO(ξ)/σ_LO(ξ = 1). In Figure 5 the red (NLO) curve tends to saturate the upper bound of its uncertainty band, a feature that can be observed in various other distributions. This is consistent with the shape of the NLO curve in Figure 4, which develops a maximum in the vicinity of the central scale.
The NLO curve in Figure 5 shows that a sizable reduction of the cross section requires a jet veto well below 200 GeV. For p_jet,veto = 150, 100, and 50 GeV, the K factor is reduced to 1.03, 0.89, and 0.54, respectively. However, there is a trade-off between suppressing the NLO cross section and increasing its perturbative uncertainty. The jet veto tends to destroy the cancellation between IR logarithms of virtual and real origin, and its effect grows as −α_s^5 ln^2(E_thr/p_jet,veto) when p_jet,veto becomes small. Since they are accompanied by an α_s^5 coefficient, these logarithms can give rise to huge scale uncertainties already for moderate values of p_jet,veto. This is reflected in the dramatic amplification of the NLO uncertainty band in Figure 5, whose lower bound enters the pathological regime of negative cross sections around p_jet,veto = 50 GeV. Here, besides the NLO cross section itself, also its uncertainty estimate becomes completely unreliable. The region of small p_jet,veto would require a resummation, which would stabilize the perturbative calculation and compensate for its divergent behaviour; as a result, the unphysical suppression of the NLO cross section for p_jet,veto ≪ 100 GeV would be washed out. If we restrict ourselves to the fixed-order NLO result, the plot tells us that jet-veto values around 100 GeV provide a good compromise: the reduction of the K factor is already significant (K ≃ 0.89), and the NLO scale uncertainty (19%) is at the same level as for the total cross section (21%).
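The growth of the uncancelled veto logarithm can be made explicit numerically (an illustrative estimate only; we take E_thr = 2m_t + m_bb with the representative value m_bb = 100 GeV as an assumption):

```python
import math

def veto_log_squared(p_veto, e_thr=2 * 172.6 + 100.0):
    """Squared jet-veto logarithm ln^2(E_thr / p_veto) that controls
    the growth of the uncancelled alpha_s^5 terms at small p_jet,veto."""
    return math.log(e_thr / p_veto) ** 2

# Tightening the veto from 150 to 50 GeV roughly quadruples the logarithm:
for p_veto in (150.0, 100.0, 50.0):
    print(p_veto, round(veto_log_squared(p_veto), 2))
```

This monotonic growth is what drives the blow-up of the NLO uncertainty band at small veto values.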

Invariant-mass distributions
The invariant-mass distribution of the bb pair, shown in Figure 6, constitutes a key observable for the search of the ttH signal. Because of limited resolution and b-quark combinatorial problems, the Higgs boson would appear as a relatively broad and small peak on top of this distribution. The subtraction of the ttbb background requires an accurate determination of its normalization, possibly by direct measurement in a signal-free region, and a precise theoretical description of its shape.
In Figure 6 we observe that the NLO predictions fit perfectly within the LO band and significantly reduce the QCD uncertainty over the entire invariant-mass range. The numerical impact of the corrections is moderate and almost constant (1.21 < K < 1.27). This favourable behaviour is ensured by the dynamical scale choice (3.2). At high invariant masses the upper bound of the NLO uncertainty band slightly decreases and approaches the central NLO prediction; the same trend appears in the high-energy tails of other distributions. Figure 7 displays the dependence of the cross section on a cut on the total invariant mass of the ttbb system (m_ttbb > m_cut,ttbb). Since it corresponds to the invariant mass of the W+W−bbbb final state, this quantity is independent of the b-jet combinatorics. It can thus be measured with better resolution than observables that involve only a particular subset of the four b jets. This property might be exploited in order to improve the signal-to-background ratio. Apart from a slight increase in the high invariant-mass tail, the NLO corrections behave similarly as for the m_bb distribution.

Transverse-momentum distributions
The transverse-momentum distributions of the individual b jets, ordered according to their p_T, are presented in Figure 8 (harder b jet) and Figure 9 (softer b jet). While the softer b jet tends to saturate the lower bound of 20 GeV (3.1), the harder one behaves rather differently: its distribution has a maximum around 100 GeV and a tail that extends up to fairly high transverse momenta. These shapes suggest that one of the two quarks is often emitted from initial-state gluons, while the other one participates in the hard scattering. In contrast, none of the b quarks resulting from ttH originates from initial-state radiation. This feature, which renders the cross section quite sensitive to p_T,b, might be exploited to improve the separation of the ttH signal.
The dynamical scale introduced in (3.2) accounts for the different kinematics of the two b jets and for the extension of their transverse momenta over a wide p_T range. The quality of this choice is confirmed by the stability of the K factor over the entire p_T spectrum. The transverse-momentum distribution of the bb pair is shown in Figure 10. Its shape resembles fairly closely that of the harder b-jet distribution; also the K factor and the scale uncertainties behave similarly.

Rapidity and azimuthal distributions
The rapidities of the individual b jets, ordered in p T , are shown in Figure 11 (harder b jet) and Figure 12 (softer b jet). Both b jets tend to populate the central region. But this feature is much more pronounced for the harder b jet, while the softer one has a significant probability to be emitted also in the forward and backward directions. The rapidity distribution of the bb system (not plotted) resembles that of the harder b jet, and the rapidity-separation distribution ( Figure 13) does not suggest any strong correlation between the two b jets. All rapidity distributions receive moderate and almost constant NLO corrections.
The rapidity-azimuthal-angle separation of the b jets,

∆R_bb = √((y_b − y_b̄)^2 + (φ_b − φ_b̄)^2),

is displayed in Figure 14. The shape of this distribution is determined by three kinematic constraints: the rapidity cut (3.1) is responsible for the suppression at high ∆R_bb; the sharp lower bound at ∆R_bb = 0.4 results from the jet algorithm; and the invariant-mass cut m_bb > 100 GeV keeps the two b jets at intermediate ∆R separations. Finally, in Figure 15, we plot the angular variable φ_bb, which describes the azimuthal orientation of the b jets. This observable is defined (3.5) as the opening angle between two planes: the "production" plane, spanned by the beam axis and the total momentum of the bb system, and the "decay" plane, which contains the momenta of the individual b jets. Equivalently, φ_bb represents the azimuthal orientation of the b jets with respect to the beam direction in the plane perpendicular to the bb momentum. In the case where the bb pair results from the decay of an intermediate particle, like the Higgs boson in ttH production, the spin of the latter can be determined from the φ_bb distribution. Since the Higgs boson has spin 0, the φ_bb distribution is expected to be isotropic, i.e. φ_bb-independent. As we see from Figure 15, the NLO corrections have a negligible influence on the shape of this observable for the ttbb background.
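Both angular observables can be computed from the b-jet momenta as follows (a sketch based on the verbal definitions above; since the explicit formula (3.5) is not reproduced here, our plane-normal construction of φ_bb is an assumption):

```python
import math

def delta_r_bb(y_b, phi_b, y_bbar, phi_bbar):
    """Rapidity-azimuthal-angle separation of the two b jets."""
    dphi = abs(phi_b - phi_bbar)
    if dphi > math.pi:
        dphi = 2.0 * math.pi - dphi
    return math.hypot(y_b - y_bbar, dphi)

def cross(u, v):
    """Cross product of two 3-vectors given as tuples."""
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

def phi_bb(p_b, p_bbar):
    """Opening angle between the 'production' plane (beam axis and total
    bb three-momentum) and the 'decay' plane (the two b-jet three-momenta)."""
    beam = (0.0, 0.0, 1.0)
    p_bb = tuple(a + b for a, b in zip(p_b, p_bbar))
    n_prod = cross(beam, p_bb)   # normal to the production plane
    n_dec = cross(p_b, p_bbar)   # normal to the decay plane
    cosangle = sum(a * b for a, b in zip(n_prod, n_dec)) / (
        math.hypot(*n_prod) * math.hypot(*n_dec))
    return math.acos(max(-1.0, min(1.0, cosangle)))
```

When both b jets lie in a plane containing the beam axis, the two planes coincide and φ_bb = 0.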

Setup II
As discussed in the introduction, the selection of ttbb signatures with highly boosted b-quark pairs may help to separate the ttH(H → bb) signal from its backgrounds. This motivates us to study the irreducible ttbb background in this particular phase-space region. Specifically, in setup II, we select highly boosted bb pairs with p_T,bb > 200 GeV, as proposed in Ref. [9]. In contrast to setup I, here we do not impose any cut on the bb invariant mass. Nevertheless the cuts on p_T,bb, p_T,b, and p_T,b̄, together with the bound ∆R_bb > 0.4 resulting from the jet algorithm, impose an effective lower bound

m_bb ≃ ∆R_bb p_T,bb √(z(1 − z)) > ∆R_bb p_T,b,cut √(p_T,bb,cut/p_T,b,cut − 1) = 24 GeV, (3.6)

where z and 1 − z are the (transverse) momentum fractions of the two b jets, and the first relation holds for small ∆R_bb.
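The numerical value in eq. (3.6) can be checked in one line (evaluating the bound as written above; the function name is ours):

```python
import math

def mbb_lower_bound(dr_min=0.4, pt_b_cut=20.0, pt_bb_cut=200.0):
    """Effective lower bound on m_bb from eq. (3.6):
    dR_min * pT_b_cut * sqrt(pT_bb_cut / pT_b_cut - 1), in GeV."""
    return dr_min * pt_b_cut * math.sqrt(pt_bb_cut / pt_b_cut - 1.0)

print(mbb_lower_bound())  # -> 24.0
```

The bound saturates when the softer b jet sits exactly at the 20 GeV cut and the pair sits at the minimal separation ∆R_bb = 0.4.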

Scale dependence
The scale dependence of the LO and NLO integrated cross sections is shown in Figure 16. At the central scale we obtain σ_LO = 451.8(2) fb and σ_NLO = 592(4) fb, corresponding to an NLO correction factor K = 1.31. The absolute NLO cross section is reduced by about 40% as compared to setup I. The shape of the scale-dependence curves is quite similar to that in Figure 4 and indicates good convergence and stability of the perturbative expansion. The shifts induced by factor-two variations of the QCD scales amount to 79% in LO and 22% in NLO.
Investigating the sensitivity of the NLO cross section to a jet veto we found similar results as in setup I. For a jet veto of 100 GeV the K factor and the NLO uncertainty amount to 0.84 and 33%, respectively.

Invariant-mass and transverse-momentum distributions
The bb invariant-mass distribution is displayed in Figure 17. Its behaviour in the region m_bb ≲ 50 GeV reflects the effective lower bound (3.6). We find that the NLO corrections induce an appreciable shape distortion of about 20%, in particular near the physically interesting region of m_bb ∼ 100 GeV. Such an effect tends to mimic a Higgs signal and should be carefully taken into account in the ttH(H → bb) analysis.
The transverse-momentum distributions of the harder and softer b jets are presented in Figure 18 and Figure 19, respectively. As a consequence of the cut imposed on the transverse momentum of the bb pair, the harder b jet is pushed to much higher p_T values as compared to setup I; the maximum of its distribution is located around 200 GeV. In contrast, the softer b jet is much less sensitive to the p_T,bb cut and is predominantly produced in the region 20 GeV < p_T < 100 GeV. This different kinematic behaviour of the two b jets might be exploited to separate the ttbb background from the ttH signal, where both b jets are produced by the Higgs boson and should thus have more similar p_T values. The NLO corrections to both p_T,b distributions feature a slight transverse-momentum dependence, with 10% variations of the K factor within the plotted range.

Rapidity and azimuthal distributions
The rapidities of the harder and softer b jets are shown in Figures 20 and 21, respectively. Due to the high p_T of the bb system, both b jets tend to be more central as compared to setup I. While the K factor is almost insensitive to the rapidity of the softer b jet, the NLO corrections have a non-negligible influence on the shape of the harder-b-jet distribution for |y_b| > 1. The rapidity distribution of the bb system (not plotted) behaves similarly to that of the harder b jet. The rapidity-separation distribution (Figure 22) is strongly peaked at small ∆y_bb, and the NLO corrections have an appreciable influence on its shape: in the region ∆y_bb < 2 the K factor varies between 1.17 and 1.38.
Finally, in Figure 23 we show the rapidity-azimuthal-angle separation of the b jets, which is strongly peaked at small ∆R bb . Also in this distribution we observe a significant shape distortion. In the region ∆R bb < 1, which corresponds to invariant masses of the order of 100 GeV, the K factor increases by about 20%. This NLO effect might have an important impact on the measurement of ttH in the highly-boosted Higgs regime.

Setup III
In order to explore the effect of a jet veto and its possible correlation with other observables, we have generated events with m bb > 100 GeV and p T,jet < 100 GeV. The LO and NLO cross sections and their scale dependence are shown in Figure 24. At the central scale we obtain σ LO = 786.1(6) fb and σ NLO = 700(3) fb, corresponding to a correction factor K = 0.89. It is evident from Figure 24 that the central scale is very close to a stable point. This demonstrates that a jet-veto value of 100 GeV is sufficiently large to avoid perturbative instabilities. Varying the QCD scales up and down by a factor two shifts the NLO cross section by only 0.4% and −19%, respectively.
Inspecting various kinematic distributions we find that the NLO corrections have a much bigger impact on shapes as compared to setup I. In particular, we observe that the suppression effect resulting from the jet veto is rather sensitive to the transverse momentum of the b jets. For instance, the NLO correction to the p_T distribution of the harder b jet varies by about 15% in the range 20 GeV < p_T,b1 < 200 GeV.

Setup IV
As observed in Figure 9, the ttbb cross section is dominated by relatively soft b jets, which tend to saturate the p_T,b > 20 GeV cut. It is thus interesting to investigate the influence of this cut on the ttbb background and, in particular, on the NLO corrections. To this end, we have studied a variant of setup I where the p_T,b cut is increased from 20 to 50 GeV. The LO and NLO cross sections and their scale dependence are shown in Figure 25. At the central scale we obtain σ_LO = 419.4(1) fb and σ_NLO = 526(2) fb, corresponding to a correction factor K = 1.25. The higher p_T,b cut reduces the NLO cross section by 46% as compared to setup I. The LO and NLO scale dependences remain very similar to those of setup I (cf. Figure 4): the uncertainty corresponding to factor-two scale variations amounts to 77% in LO and 21% in NLO.
Inspecting various kinematic distributions we find, as in setup I, that the NLO corrections have little impact on the shapes. In general the kinematic dependence of the NLO correction factor is below 10%. The largest shape distortion is observed in the ∆R_bb distribution: similarly as in setup I (cf. Figure 14) we find an increase of the K factor by about 30% in the region 0.4 < ∆R_bb ≲ 1.

Conclusions
The direct production of ttbb final states represents the major background to the production of Higgs bosons in association with top-antitop-quark pairs at the LHC, pp → ttH, where the Higgs boson decays into bb pairs. This process can lead to a direct measurement of the top-quark Yukawa coupling. Apart from improvements in the experimental analysis, a successful exploitation of this demanding channel requires predictions for ttbb production at next-to-leading order in QCD, and possibly beyond.
Extending our first results on the total cross section for pp → ttbb published earlier, we have presented a detailed study of integrated and differential cross sections at the LHC, discussing in particular a dynamical scale choice, the influence of various cuts on the outgoing b quarks, and the impact of a veto on an additional hard jet. We observe that the traditional choice of a constant scale determined by the energy threshold for the process underestimates the ttbb background by a factor of two, while an appropriate dynamical scale, which is tied to the transverse momenta of the b quarks, stabilizes the perturbative predictions much better; moreover, the K factor is reduced from 1.8 to 1.2. Using the new scale choice, the corrections have little impact on the shapes of distributions if standard cuts are applied. Strengthening the cut on the transverse momentum of the bottom quarks has little influence on the K factor or on the effect of the NLO corrections on the shapes of the distributions. On the other hand, imposing a jet veto of 100 GeV reduces the K factor to 0.9 and enhances the impact of the NLO corrections on the shapes of distributions. In the regime of highly boosted Higgs bosons, which offers better perspectives to observe the ttH signal, we find significant distortions of the kinematic distributions, while the K factor is 1.3.
Our calculation builds on the Feynman-diagrammatic approach, i.e. on an algorithmic reduction of each Feynman diagram to a canonical standard form, which is automatically processed to Fortran output, and on a numerical reduction of tensor loop integrals to an appropriate set of scalar master integrals. A key feature of the diagram-by-diagram approach is that colour sums can be performed very efficiently. Helicity summation is simplified by introducing a basis of O(1000) basic structures; the reduction to these structures can be performed in a process-independent way. The numerical tensor-integral reduction employs dedicated methods that have been developed to treat the numerically delicate phase-space regions where small Gram determinants appear in denominators during the traditional tensor reduction. Real corrections are integrated using well-known dipole subtraction methods. We find that our Feynman-diagrammatic approach provides very high numerical stability and CPU efficiency, a result that is very encouraging in view of future challenging next-to-leading-order calculations for important multiparticle processes.

4. In a further step it is possible to eliminate all products of three Dirac matrices in the bottom chains, where we have used that the bottom mass is set to zero, i.e. m_5 = m_6 = 0. Note that we did not use p_1^2 = 0 in the first relation, in order to indicate that the whole reduction procedure described here does not only apply to gluons but also to the case where gluons are replaced by massive vector bosons.

5. After having reduced all bottom chains to contain only one Dirac matrix, we apply (2.21) to the top chain recursively as far as possible. In this way the top chains contain only up to four Dirac matrices.
6. Next we reduce products of the form [/a /b /c /d]_σ34 [/e]_τ56, which do not involve a contraction. After eliminating p_2 via momentum conservation, the product /a /b /c /d contains one of the factors /p_1 /p_5, /p_1 /p_6, or /p_5 /p_6, because at most two of the slashed vectors can be polarization vectors. The majority of such cases can be reduced with relations of the type Γ /a /p_5 ...

In the numerical benchmark the strong coupling constant g_s is divided out of the amplitudes. We express NLO contributions in the 2 → 4 phase space as Laurent series in ε = (4 − D)/2, factoring out the LO term and a normalization factor, and separating the contributions of renormalized loop diagrams (loops) from those of the I operator of the dipole subtraction formalism as defined in Ref. [53]. The numbers in Table 3 have been obtained in the 't Hooft-Feynman gauge using the 't Hooft-Veltman variant of dimensional regularization (four-dimensional external partons). We set the scale of dimensional regularization and the renormalization scale to µ = µ_R = m_t; the corresponding values of the strong coupling constant follow from the renormalization scheme described in Section 3. The agreement between our two independent versions of the virtual corrections is typically about 10 digits at regular phase-space points.
Another benchmark result for the virtual corrections to gg → ttbb can be found in Ref. [44]. The quantity presented there and denoted as 'HELAC-1L' corresponds to the unrenormalized virtual one-loop correction plus the mass-renormalization contribution (without wave-function and coupling constant renormalization). We find agreement at the 10-digit level.
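The bookkeeping of the Laurent series in ε and of the pole cancellation between the renormalized loop contribution and the I operator can be sketched schematically (the numerical coefficients below are hypothetical placeholders, not values from Table 3):

```python
from dataclasses import dataclass

@dataclass
class Laurent:
    """Laurent series c2/eps^2 + c1/eps + c0 in eps = (4 - D)/2."""
    c2: float
    c1: float
    c0: float

    def __add__(self, other):
        return Laurent(self.c2 + other.c2, self.c1 + other.c1, self.c0 + other.c0)

# Hypothetical coefficients: the 1/eps^2 and 1/eps poles of the renormalized
# loop contribution must cancel against those of the I operator, leaving a
# finite remainder.
loops = Laurent(c2=-3.2, c1=-7.5, c0=1.9)
i_operator = Laurent(c2=3.2, c1=7.5, c0=0.4)
total = loops + i_operator
```

In a benchmark of this kind, checking that total.c2 and total.c1 vanish is a basic consistency test of the IR structure.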