Multi-leg One-loop Massive Amplitudes from Integrand Reduction via Laurent Expansion

We present the application of a novel reduction technique for one-loop scattering amplitudes, based on the combination of integrand reduction and Laurent expansion. We describe the general features of its implementation in the computer code Ninja, and of its interface to GoSam. We apply the new reduction to a series of selected processes involving massive particles, from six to eight legs.


Introduction
Scattering amplitudes in quantum field theories are analytic functions of the kinematic variables of the interacting particles; hence they can be determined by studying the structure of their singularities. The multi-particle factorization properties of the amplitudes become transparent when internal particles go on their mass shell [1,2]. These configurations correspond to poles of the integrand; the coefficients entering each subtracted term can be determined from the corresponding residues, and in particular boxes and pentagons decouple from the computation of lower-point coefficients.
If either the analytic expression of the integrand or the tensor structure of the numerator is known, this procedure can be implemented in a semi-numerical algorithm. Indeed, the coefficients of the Laurent expansion of a rational function can be computed, either analytically or numerically, by performing a polynomial division between the numerator and the (uncut) denominators.
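The polynomial division mentioned above can be sketched in a few lines. The following is a minimal illustration, not part of Ninja: `polydiv` is a hypothetical helper working on coefficient lists, and exact `Fraction` arithmetic stands in for the floating-point evaluation used in practice. The quotient of the division collects the coefficients of the Laurent expansion of the rational function at large loop-momentum parameter.

```python
from fractions import Fraction

def polydiv(num, den):
    """Divide two polynomials given as coefficient lists (highest degree
    first); return (quotient, remainder).  The quotient collects the
    coefficients of the Laurent expansion of num/den at large t."""
    num = [Fraction(c) for c in num]
    den = [Fraction(c) for c in den]
    quot = []
    while len(num) >= len(den):
        # leading coefficient of the current quotient term
        c = num[0] / den[0]
        quot.append(c)
        # subtract c * den aligned at the leading term, then drop it
        padded = den + [Fraction(0)] * (len(num) - len(den))
        num = [a - c * b for a, b in zip(num, padded)][1:]
    return quot, num

# N(t) = 2t^3 + 3t^2 + 5t + 7 divided by D(t) = t + 1:
# N/D = 2t^2 + t + 4 + 3/(t + 1), so the polynomial part is [2, 1, 4].
q, r = polydiv([2, 3, 5, 7], [1, 1])
```

The terms of the quotient are exactly the non-negative powers of the expansion at t → ∞, while the remainder generates only terms that vanish in that limit.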
The purpose of the present paper is to review the main features of the novel reduction algorithm and to demonstrate its performance on a selection of challenging calculations of scattering amplitudes with massive bosons and quarks, involving six, seven, and eight particles. The integrand reduction via Laurent expansion has been implemented in the C++ library Ninja [82,83], and interfaced to the GoSam framework [12] for the evaluation of virtual one-loop scattering amplitudes. The cleaner and lighter system-solving strategy, which deals with a diagonal system instead of a triangular one, and which replaces the polynomial subtractions with coefficient-level corrections, translates into net gains in both numerical accuracy and computing speed. We recall that the new library has recently been used in the evaluation of NLO QCD corrections to pp → ttHj [45].
The paper is organized as follows. In Section 2, we discuss the theoretical foundations of the integrand decomposition via Laurent expansion and its implementation in an algorithm for the reduction of one-loop amplitudes. The interface between Ninja and GoSam for automated one-loop calculations is described in Section 3. Section 4 is devoted to a detailed study of the precision and of the computational performance of the novel framework, which shows a significant improvement with respect to the standard algorithms. Applications of the GoSam+Ninja framework to the evaluation of NLO QCD virtual corrections to several multi-leg massive processes are presented in Section 5.

Reduction Algorithm: Integrand Reduction via Laurent Expansion
In this section we describe the Laurent-expansion method for the integrand reduction of one-loop amplitudes as implemented in the C++ library Ninja.

Integrand and Integral decomposition
An n-point one-loop amplitude can be written as a linear combination of contributions M_{1···n} of the form

    M_{1···n} = ∫ d^d q̄  N(q̄) / (D_1 D_2 ··· D_n),                                  (2.1)

where N(q̄) is a process-dependent polynomial numerator in the components of the d = (4 − 2ε)-dimensional loop momentum q̄. The latter can be decomposed in terms of its 4-dimensional component q and of µ², which encodes its (−2ε)-dimensional components,

    q̄^ν = q^ν + λ^ν,     q̄² = q² − µ²,     µ² ≡ −λ².                                 (2.2)

The denominators D_i are quadratic polynomials in q̄ and correspond to Feynman loop propagators,

    D_i = (q̄ + p_i)² − m_i² = (q + p_i)² − µ² − m_i².                                (2.3)

Every one-loop integrand in d dimensions can be decomposed as [6,73]

    N(q̄) / (D_1 ··· D_n) = Σ_{k=1}^{min(n,5)} Σ_{{i_1,···,i_k}} Δ_{i_1···i_k}(q, µ²) / (D_{i_1} ··· D_{i_k}),   (2.4)

where the Δ_{i_1···i_k} are irreducible polynomial residues, i.e. polynomials which do not contain any term proportional to the corresponding loop denominators D_{i_1}, ..., D_{i_k}. The second sum in Eq. (2.4) runs over all unordered selections without repetition of the k indices {i_1, ···, i_k}.
For any set of denominators D_{i_1}, ..., D_{i_k} with k ≤ 5, one can choose a 4-dimensional basis of momenta E = {e_1, e_2, e_3, e_4} which satisfies the normalization conditions

    e_1 · e_2 = 1,     e_3² = e_4² = δ_{k4},     e_3 · e_4 = −(1 − δ_{k4}),          (2.5)

while all the other scalar products vanish. In addition, for k = 4 we choose the basis such that e_4 is orthogonal to the external legs of the sub-diagram identified by the set of denominators under consideration. Similarly, for k = 2, 3 we choose both e_3 and e_4 to be orthogonal to the external legs of the corresponding sub-diagram. With this choice of momentum basis, the numerator and the denominators can be written as polynomials in the coordinates z ≡ (z_1, z_2, z_3, z_4, z_5) = (x_1, x_2, x_3, x_4, µ²), where the variables x_i are the components of q with respect to the basis E; more explicitly, the numerator is a polynomial in the components of q and in µ², and the coordinates x_i can also be written as scalar products. With these definitions, one can show [6,51,73] that the most general parametric form of a residue in a renormalizable theory is the one given in Eq. (2.9). This parametric form can also be extended to non-renormalizable and effective theories, where the rank of the numerator can be larger than the number of loop propagators [78].
Most of the terms appearing in Eq. (2.9) vanish after integration, i.e. they are spurious. The non-spurious coefficients, instead, appear in the final result, which expresses the amplitude M_{1···n} as a linear combination of known Master Integrals, Eq. (2.10). The computation of any one-loop amplitude is therefore reduced to the determination of the coefficients of the Master Integrals appearing in Eq. (2.10), which in turn can be identified with a subset of the coefficients of the parametric residues in Eq. (2.9).

Scattering amplitudes via Laurent expansion
In Ref. [78], elaborating on the techniques proposed in [80,81], an improved version of the integrand-reduction method for one-loop amplitudes was presented. Whenever the analytic dependence of the integrand on the loop momentum is known, this method allows one to extract the unknown coefficients of the residues Δ_{i_1···i_k} by performing a Laurent expansion of the integrand with respect to one of the free loop components which are not constrained by the corresponding on-shell conditions D_{i_1} = ··· = D_{i_k} = 0.
Within the original integrand reduction algorithm [61,62,64], the determination of these unknown coefficients requires one: i) to sample the numerator on a finite subset of the on-shell solutions; ii) to subtract from the integrand the non-vanishing contributions coming from higher-point residues; and iii) to solve the resulting linear system of equations.
With the Laurent-expansion approach, since in the asymptotic limit both the integrand and the higher-point subtractions exhibit the same polynomial behavior as the residue, one can instead identify the unknown coefficients with those of the expansion of the integrand, corrected by the contributions coming from higher-point residues. In other words, the system of equations for the coefficients becomes diagonal, and the subtractions of higher-point contributions can be implemented as corrections at the coefficient level which replace the subtractions at the integrand level of the original algorithm. Because of the universal structure of the residues, the parametric form of these corrections can be computed once and for all, in terms of a subset of the higher-point coefficients. This also allows one to significantly reduce the number of coefficients entering each subtraction. For instance, boxes and pentagons do not affect at all the computation of lower-point residues. In the following, we describe in more detail how to determine the needed coefficients of each residue.
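The coefficient-level subtraction can be illustrated with a toy model. In the sketch below (purely illustrative, not Ninja's API) a "bubble-like" residue and a "triangle-like" residue share one uncut denominator; the asymptotic expansion of the combined rational function yields the bubble coefficients directly, shifted only by the triangle's leading coefficient, which is subtracted as a single coefficient-level correction.

```python
from fractions import Fraction as F

def polydiv(num, den):
    # polynomial long division on coefficient lists (highest degree first);
    # the quotient is the polynomial part of num/den at large t
    num, quot = [F(c) for c in num], []
    while len(num) >= len(den):
        c = num[0] / F(den[0])
        quot.append(c)
        num = [a - c * F(b) for a, b in
               zip(num, list(den) + [0] * (len(num) - len(den)))][1:]
    return quot

# Toy "residues": bubble coefficients are the unknowns, triangle
# coefficients are assumed already determined in a previous step.
c2, c1, c0 = F(3), F(-2), F(5)    # bubble-like residue: c2*t^2 + c1*t + c0
a1, a0 = F(4), F(1)               # triangle-like residue: a1*t + a0
d = F(2)                          # position of the uncut "denominator"

# Integrand = bubble(t) + triangle(t)/(t - d), written over the common
# denominator (t - d); its numerator coefficients are:
numer = [c2, c1 - c2 * d, c0 - c1 * d + a1, a0 - c0 * d]
expansion = polydiv(numer, [1, -d])   # asymptotic polynomial part

# The t^0 coefficient of the expansion is c0 shifted by the triangle's
# leading coefficient a1: the subtraction is done at coefficient level,
# with no integrand-level subtraction.
bubble = [expansion[0], expansion[1], expansion[2] - a1]
```

The diagonal structure is visible here: each bubble coefficient is read off one term of the expansion, and only a known parametric function of the higher-point coefficients (here just a1) enters the correction.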
Pentagons. Pentagon contributions are spurious, i.e. they do not appear in the integrated result. In the original integrand reduction algorithm their computation is nevertheless needed, in order to implement the corresponding subtractions at the integrand level. Within the Laurent-expansion approach, since the subtraction terms of five-point residues always vanish in the asymptotic limit, their computation is not needed and can be skipped.
Boxes. The coefficient c_0 of the box contributions can be determined via 4-dimensional quadruple cuts [4]. In four dimensions (i.e. q̄ = q, µ² = 0) a quadruple cut D_{i_1} = ··· = D_{i_4} = 0 has two solutions, q⁺ and q⁻, and the coefficient c_0 can be expressed as the average of the integrand evaluated on these two solutions. The coefficient c_4 can be found by evaluating the integrand on d-dimensional quadruple cuts in the asymptotic limit µ² → ∞ [81]. A d-dimensional quadruple cut has an infinite number of solutions, which can be parametrized by the extra-dimensional variable µ². These solutions become particularly simple in the limit of large µ², where the vector a^ν and the constant β are fixed by the cut conditions. The coefficient c_4, when non-vanishing, can be found in this limit as the leading term of the expansion of the integrand N(q, µ²). The other coefficients of the boxes are spurious and their computation can be avoided.
Triangles. The residues of the triangle contributions can be determined by evaluating the integrand on the solutions of d-dimensional triple cuts [80], which can be parametrized by the extra-dimensional variable µ² and another parameter t, where the vector a^ν and the constant β are fixed by the cut conditions D_{i_1} = D_{i_2} = D_{i_3} = 0. On these solutions, the integrand generally receives contributions from the residue of the triple cut Δ_{i_1 i_2 i_3}, as well as from the boxes and pentagons which share the three cut denominators. However, after taking the asymptotic expansion at large t and dropping the terms which vanish in this limit, the pentagon contributions vanish, while the box contributions are constant in t but vanish when taking the average of the parametrizations q⁺ and q⁻ of Eq. (2.14). Moreover, even though the integrand is a rational function, in this asymptotic limit it exhibits the same polynomial behavior as the expansion of the residue Δ_{i_1 i_2 i_3}. It is worth observing that, within the Laurent-expansion approach, the determination of the 3-point residues does not require any subtraction of higher-point terms.
Bubbles. The on-shell solutions of a d-dimensional double cut D_{i_1} = D_{i_2} = 0 can be parametrized, as in Eq. (2.19), in terms of the three free parameters x, t and µ², while the vectors a_i^ν and the constants β_i are fixed by the on-shell conditions. After evaluating the integrand on these solutions and taking the asymptotic limit t → ∞, the only non-vanishing subtraction terms come from the triangle contributions, Eq. (2.20). The integrand and the subtraction terms are rational functions, but in the asymptotic limit they both have the same polynomial behavior as the residue, so that the bubble coefficients can be read off the expansion after the coefficient-level triangle corrections.

Tadpoles. Once the coefficients of the triangles and the bubbles are known, one can determine the non-spurious coefficient c_0 of a tadpole residue Δ_{i_1} by evaluating the integrand on the single cut D_{i_1} = 0. One can choose 4-dimensional solutions parametrized by the free variable t, while the constant β is fixed by the cut conditions. In the asymptotic limit t → ∞, only bubble and triangle coefficients are non-vanishing. Similarly to the case of the bubbles, in this limit the integrand and the subtraction terms exhibit the same polynomial behavior as the residue.
Putting everything together, we can write the coefficient of the tadpole integral as the corresponding one in the expansion of the integrand, corrected by coefficient-level subtractions. Once again, we observe that the subtraction terms, coming from bubble and triangle contributions respectively, are known parametric functions of the coefficients of the corresponding higher-point residues.

The C++ library Ninja
The C++ library Ninja [82,83] provides a semi-numerical implementation of the Laurent expansion method described above. Since the integrand is a rational function of the loop variables, its Laurent expansion is performed via a simplified polynomial division algorithm between the expansion of the numerator N and the uncut denominators.
The inputs of the reduction algorithm implemented in Ninja are the external momenta p i and the masses m i of the loop denominators defined in Eq. (2.3), and the numerator N (q, µ 2 ) of the integrand cast in four different forms.
• The first form provides a simple evaluation of the numerator as a function of q and µ², which is used to compute the coefficient c_0 of the boxes. It can also be used to compute the spurious coefficients of the pentagons via penta-cuts, and all the box coefficients when the expansion in µ² is not provided.
The other three forms of the numerator yield instead the leading terms of a parametric expansion of the integrand.
• The first expansion is the one used to obtain the coefficient c_4 of the boxes. When the rank r is equal to the number n of external legs of the diagram, this method returns the coefficient of t^n obtained by substituting into the numerator N(q, µ²) the asymptotic quadruple-cut solution, expressed as a function of a generic vector v which is computed by Ninja and determined by the quadruple-cut constraints.
• The second expansion is used in order to compute triangles and tadpole coefficients.
In this case it returns coefficients of the terms t j µ 2k for j = r, r−1, . . . , n−3, obtained from N (q, µ 2 ) with the substitutions as a function of the generic momenta v ν i and the constant β 0 . Since in a renormalizable theory r ≤ n, and by definition of rank we have j + 2k ≤ r, at most 6 terms can be non-vanishing in the specified range of j. For effective theories with r ≤ n + 1, one can have instead up to 9 non-vanishing polynomial terms. In each call of the numerator, Ninja specifies the lowest power of t which is needed in the evaluation, avoiding thus the computation of unnecessary coefficients of the expansion.
• The third and last expansion is needed for the computation of the 2-point residues and returns the coefficients of the terms t^j µ^{2k} x^l for j = r, r−1, ..., n−2, obtained from N(q, µ²) with the substitutions of Eq. (2.34), as a function of the cut-dependent momenta v_i^ν and constants β_i. In a renormalizable theory one can have at most 7 non-vanishing terms in this range of j, while for r ≤ n + 1 one can have 13 non-vanishing terms. As in the previous case, in each call of the numerator, Ninja specifies the lowest power of t which is needed. It is worth noticing that the expansion in Eq. (2.34) can be obtained from the previous one in Eq. (2.33) with simple substitutions.

All these expansions can be easily and quickly obtained from the knowledge of the analytic dependence of the integrand on q and µ². For every phase-space point, Ninja computes the parametric solutions of all the multiple cuts, performs the Laurent expansion of the integrand via a simplified polynomial division between the expansion of the numerator and the set of the uncut denominators, and implements the coefficient-level subtractions appearing in Eqs. (2.24) and (2.31). Finally, the obtained non-spurious coefficients are multiplied by the corresponding Master Integrals in order to get the integrated result of Eq. (2.10). The routines which compute the Master Integrals are called by Ninja via a generic interface, which allows one to use any integral library implementing it, with the possibility of switching between different libraries at run time. By default, a C++ wrapper of the OneLoop integral library [84,85] is used. This wrapper caches every computed integral, allowing constant-time lookups of their values from their arguments. An interface with the LoopTools library [86,87] is available as well.
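The caching strategy just described can be mimicked with a small memoization wrapper. The sketch below is illustrative only: `toy_bubble_integral` is a stand-in placeholder, not the actual interface of OneLoop or of Ninja's wrapper.

```python
import math

class CachingIntegralLibrary:
    """Memoize master-integral calls by their arguments, mimicking the
    constant-time lookups of the cached wrapper described in the text
    (sketch only, not Ninja's actual C++ interface)."""

    def __init__(self, backend):
        self.backend = backend   # the underlying integral routine
        self.cache = {}          # maps argument tuples to results
        self.misses = 0          # counts actual calls to the backend

    def __call__(self, *args):
        if args not in self.cache:
            self.misses += 1
            self.cache[args] = self.backend(*args)
        return self.cache[args]

def toy_bubble_integral(s, m2):
    # placeholder for a real scalar two-point function
    return math.log(abs(s - m2) + 1.0)

lib = CachingIntegralLibrary(toy_bubble_integral)
first = lib(100.0, 4.0)
second = lib(100.0, 4.0)   # same arguments: served from the cache
```

Since the same master integrals typically recur across many cuts of the same amplitude, a dictionary keyed by the integral's arguments avoids recomputing them within a phase-space point.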
The Ninja library can also be used to compute integrals in effective and non-renormalizable theories, where the rank r of the numerator can exceed the number of legs by one unit. An example of this application, given in Section 5, is Higgs boson production in gluon fusion in association with three jets, in the effective theory defined by the infinite top-mass limit.

Interfacing Ninja with GoSam
The library Ninja has been interfaced with GoSam, an automated generator of one-loop amplitudes. The latter provides Ninja with analytic expressions for the integrands of one-loop Feynman diagrams for generic processes, both within the Standard Model and for theories Beyond the Standard Model.
After the generation of all contributing diagrams, the virtual corrections are evaluated using the d-dimensional integrand-level reduction method, as implemented in the library Samurai [62], which allows for the combined determination of cut-constructible and rational terms at once. As an alternative, the tensorial decomposition provided by Golem95C [92-94] is also available. This reduction, which is numerically stable but more time consuming, is employed as a rescue system. After the reduction, all relevant master integrals can be computed by means of Golem95C [94], QCDLoop [95,96], or OneLoop [84].
The possibility to deal with higher-rank one-loop integrals, where the powers of the loop momentum in the numerator exceed the number of denominators, is implemented in all three reduction programs, Samurai [78,79], Ninja, and Golem95C [97]. Higher-rank integrals can appear when computing one-loop integrals in effective field theories, e.g. for calculations involving the effective gluon-gluon-Higgs vertex [42,44], or when dealing with spin-2 particles [41].
In order to embed Ninja into the GoSam framework, the algebraic manipulation of the integrands was adapted to generate the expansions needed by Ninja and described in Section 2.3. The numerator, in all its forms, is then optimized for fast numerical evaluation by exploiting the new features of Form 4 [98,99], and written to a Fortran90 source file which is compiled. At run time, these expressions are used as input for Ninja.
The Fortran90 module of the interface between Ninja and GoSam defines subroutines which allow one to control some of the settings of Ninja directly from the code that generated the virtual part of the amplitudes. Upon importing the module, the user can change the integral library used by Ninja, choosing between OneLoop [84] and LoopTools [86,87].
For debugging purposes, one can also ask Ninja to perform internal tests or to print information about the ongoing computation. This option allows one to choose among different internal tests, namely the global N = N test, the local N = N tests on the different cuts, or a combination of both. These tests have been described in [62]. The verbosity of the debug output can be adjusted to control the amount of detail printed in the output file, for example the final results for the finite part and the poles of the diagram, the values of the coefficients computed in the reduction, the values of the corresponding Master Integrals, and the results of the various internal tests.

Precision tests
Within the context of numerical and semi-numerical techniques, the problem of estimating correctly the precision of the results is of primary importance. In particular, when performing the phase space integration of the virtual contribution, it is important to assess in real time, for each phase space point, the level of precision of the corresponding one-loop matrix element.
Whenever a phase space point is found in which the quality of the result falls below a certain threshold, either the point is discarded or the evaluation of the amplitude is repeated by means of a safer, albeit less efficient, procedure. Examples of such procedures include the use of higher-precision routines or, in the case of GoSam, the use of the traditional tensorial reconstruction of the amplitude provided by Golem95C.
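The fallback logic can be sketched as simple control flow; the evaluators, the error estimator, and the threshold below are illustrative placeholders, not GoSam's actual routines.

```python
import math

def evaluate_with_rescue(point, fast_eval, safe_eval, estimate_error,
                         threshold_digits=-3.0):
    """Evaluate an amplitude with the fast reduction; if the estimated
    precision P0 = log10(delta) is worse than the threshold, redo the
    point with the slower but more stable method (sketch only)."""
    value = fast_eval(point)
    delta = max(estimate_error(point, value), 1e-300)  # guard log10(0)
    p0 = math.log10(delta)
    if p0 > threshold_digits:          # too few correct digits
        return safe_eval(point), "rescued"
    return value, "fast"

# Toy setup: the fast evaluator is noisy on "bad" points only.
fast = lambda p: 1.0 + (0.01 if p == "bad" else 1e-9)
safe = lambda p: 1.0
err = lambda p, v: abs(v - 1.0)

good = evaluate_with_rescue("good", fast, safe, err)
bad = evaluate_with_rescue("bad", fast, safe, err)
```

In a real workflow the error estimator would be one of the precision tests discussed in this section, and the safe evaluator the tensorial-reconstruction rescue system.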
Various techniques for detecting points with low precision have been implemented within the different automated tools for the evaluation of one-loop virtual corrections.
A widely employed standard method is based on the comparison of the numerical values of the poles with their known analytic results, dictated by the universal behavior of the infrared singularities. While this method is quite reliable, not all integrals which appear in the reconstruction of the amplitude contribute to the double and single poles. This often results in an overestimate of the precision, which might lead to keeping phase space points whose finite part is less precise than what is predicted by the poles.
A different technique, which we refer to as the scaling test [9], is based on the scaling properties of scattering amplitudes when all physical scales (momenta, renormalization scale, masses) are rescaled by a common multiplicative factor x. As shown in [9], this method provides a very good correlation between the estimated precision and the actual precision of the finite parts.
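The idea of the scaling test can be demonstrated on a toy homogeneous function of the kinematic scales; the function, its degree, and the rescaling factor below are illustrative choices, not taken from Ref. [9].

```python
import math

def scaling_test(amp, scales, weight, x=17.0):
    """Rescale every dimensionful argument by x; for an 'amplitude'
    which is homogeneous of degree `weight` in its scales, the rescaled
    result must equal x**weight times the original, and the relative
    deviation estimates the numerical error (sketch of the idea)."""
    a = amp(*scales)
    a_scaled = amp(*(x * s for s in scales))
    return abs(a_scaled / x**weight - a) / abs(a)

# Toy "amplitude": homogeneous of degree 1 in the scales (s, t, m).
toy_amp = lambda s, t, m: (s * s + t * m) / (s + 2.0 * t)

delta = scaling_test(toy_amp, (90.0, 35.0, 4.7), weight=1)
```

For an exact evaluation delta would vanish identically; in floating-point arithmetic its size reflects the accumulated rounding error, which is what the test exploits.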
Additional methods have been proposed, within the context of integrand-reduction approaches, which target the relations between the coefficients before integration, namely the reconstructed algebraic expressions for the numerator function. This method, labeled the N = N test [61,62], can be applied to the full amplitude (global N = N test) or to the residue of each individual cut (local N = N test). The drawback of this technique comes from the fact that the test is applied at the level of individual diagrams, rather than on the final result summed over all diagrams, making the construction of a rescue system quite cumbersome.
For the precision analysis contained in this paper, we present a new simple and efficient method for the estimation of the number of digits of precision in the results, which we call rotation test. This new method exploits the invariance of the scattering amplitudes under an azimuthal rotation about the beam axis, namely the direction of the initial colliding particles.
Such a rotation, which does not affect the initial states, changes the momenta of all final particles without changing their relative positions, thus producing a theoretically identical process. However, the change in the values of all final-state external momenta leads to different bases for the parametrization of the residues within the integrand reconstruction, to different coefficients in front of the master integrals, and to different numerical values when the master integrals are computed. We verified that the choice of the angle of rotation does not affect the estimate, as long as this angle is not too small.
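A minimal numeric illustration of the rotation test follows: final-state momenta are rotated azimuthally about the beam (z) axis and a Lorentz-invariant toy "amplitude" is re-evaluated. The momenta and the toy function are arbitrary choices for the sketch; any function of invariants works the same way.

```python
import math

def rotate_z(p, phi):
    """Rotate a four-momentum p = (E, px, py, pz) by phi about the beam axis."""
    E, px, py, pz = p
    c, s = math.cos(phi), math.sin(phi)
    return (E, c * px - s * py, s * px + c * py, pz)

def dot(p, q):
    # Minkowski scalar product with signature (+, -, -, -)
    return p[0] * q[0] - p[1] * q[1] - p[2] * q[2] - p[3] * q[3]

def toy_amp(momenta):
    # any function of the invariants is unchanged by an azimuthal rotation
    prods = [dot(p, q) for p in momenta for q in momenta]
    return sum(x * x for x in prods) / (1.0 + abs(prods[1]))

momenta = [(45.0, 12.0, -3.0, 41.0), (55.0, -12.0, 3.0, -41.0),
           (30.0, 8.0, 15.0, 20.0)]
a = toy_amp(momenta)
a_rot = toy_amp([rotate_z(p, 0.7) for p in momenta])

# estimated relative error and digits of precision, as in the rotation test
delta_rot = abs(a - a_rot) / abs(a)
p0 = math.log10(delta_rot) if delta_rot > 0 else -16.0
```

In exact arithmetic a and a_rot would coincide; their floating-point difference, propagated through a genuinely different sequence of operations, serves as the error estimate P_0.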
In order to study the correlation between the error estimated by the rotation test and the exact error, we follow the strategy of Ref. [9]. In particular, we generated 10^4 points for the process ud → W bbg with massive bottom quarks, both in quadruple and in standard double precision, which we label A_quad and A respectively, as well as the same points in double precision after performing a rotation, labeled A_rot.
We define the exact error δ_ex as the relative difference between A and A_quad, Eq. (4.1), and the estimated error δ_rot as the relative difference between A and A_rot, Eq. (4.2). In Fig. 1, we plot the distribution of the quantity C = log10(δ_rot/δ_ex), evaluated for each phase space point. In the ideal case of a perfect correlation between the exact error, as estimated by the quadruple-precision result, and the error estimated by the less time-consuming rotation test, the value of C would be close to zero, while the spread of the distribution provides a picture of the degree of correlation. Moreover, we observe a similar behavior for the rotation and the scaling tests.

In the following, we will employ the rotation test as the standard method to estimate the precision of the finite part of each renormalized virtual matrix element. If we call δ_0 the error at any given phase space point, calculated according to Eq. (4.2), we can define the precision of the finite part as P_0 = log10(δ_0). Concerning the precision of the double and single poles, P_−2 = log10(δ_−2) and P_−1 = log10(δ_−1), we exploit the fact that the values of the pole coefficients, after renormalization, are solely due to infrared (IR) divergences, whose expressions are well known [100]. δ_−2 and δ_−1 are defined using the formula in Eq. (4.1), in which the exact values are provided by the reconstructed IR poles, automatically evaluated by GoSam.
In order to assess the level of precision of the results obtained with Ninja within GoSam, in Figs. 2 and 3 we plot the distributions of P_−2 (precision of the double pole), P_−1 (single pole) and P_0 (finite part) for two challenging virtual amplitudes with massive internal and external particles, namely gg → ttHg (ttHj) and uū → Huūgg (Hjjjj in VBF). By selecting an upper bound on the value of P_0, we can set a rejection criterion for phase space points in which the quality of the calculated scattering amplitude falls below the requested precision. This also allows one to estimate the percentage of points which would be discarded (or redirected to the rescue system). This value, as expected from the shapes of the various distributions, is strongly process dependent and should be selected according to the particular phenomenological analysis at hand. As a benchmark value, in Ref. [9] the threshold for rejection was set to P_0 = −3. In a similar fashion, in Table 1 we provide the percentages of bad points, i.e. points whose precision falls below the threshold, for increasing values of the rejection threshold.
The two plots are built using sets of 5 · 10^4 and 1 · 10^5 phase space points, respectively for gg → ttHg and uū → Huūgg (VBF). No cuts have been applied in the selection of the points, which are randomly distributed over the whole available phase space for the outgoing particles and are generated using Rambo.
The use of the novel algorithm implemented in Ninja yields significant improvements both in the accuracy of the results and in the reduction of the computational time, thanks to a more efficient reduction and to less frequent calls to the rescue system.
These features make GoSam+Ninja an extremely competitive framework for massive multi-particle calculations at next-to-leading order.

Applications to Massive Amplitudes
In order to demonstrate the performance of the new reduction algorithm, we apply GoSam+Ninja to a collection of processes involving six, seven and eight external particles. We choose processes where massive particles appear among the reaction products or run in the loop. We list them in Table 2 and give the details of their calculation in the following subsections: for each process we provide results for a phase space point and a detailed list of the input parameters employed. While some of the considered processes have already been studied in the literature, the virtual NLO QCD contributions to pp → W bb + n jets (n = 1, 2), pp → Zbbj, pp → Zttj, pp → V V V j (with V = W, Z), pp → ZZZZ, and pp → H + n jets (n = 4, 5) in VBF are presented in this paper for the first time. When occurring in the final state, the bottom quark is treated as massive. For calculations which were already performed with previous versions of the GoSam framework, we observe that the new reduction technique yields a significant net gain both in computing time and in accuracy. A paradigmatic example is gg → Hggg, whose evaluation per phase space point required approximately 20 seconds, as reported in [44], while now it can be computed in less than 10 seconds.
In the following, for each of the considered scattering amplitudes, we provide a benchmark phase space point for the most involved subprocesses and, when possible, a comparison with results available in the literature. The coefficients a_i which appear in the various tables are defined by the Laurent expansion of the renormalized amplitude in the regulator ε, where the finite part a_0 is computed in the dimensional reduction scheme if not stated otherwise. The reconstruction of the renormalized pole can be checked against the values of a_−1 and a_−2 obtained from the universal singular behavior of dimensionally regularized one-loop amplitudes [100], while the precision of the finite parts is estimated by re-evaluating the amplitudes for a set of momenta rotated by an arbitrary angle about the axis of collision, as described in Section 4. The accuracy of the results obtained with GoSam+Ninja is indicated by the underlined digits.

Table 2: A summary of results obtained with GoSam+Ninja. Timings refer to full color- and helicity-summed amplitudes, using an Intel Core i7 CPU @ 3.40GHz, compiled with ifort. The timings indicated with an (*) are obtained with an Intel Xeon CPU E5-2650 @ 2.00GHz, compiled with gfortran.

p p → W + 3 jets
Partonic process: dū → ν̄_e e⁻ ggg. The finite part for this process is given in the conventional dimensional regularization (CDR) scheme and was compared to the new version of NJet [16]. We also find agreement for the phase space point given by the BlackHat Collaboration [22].

Table 3: Benchmark point for the subprocess d(p_1) ū(p_2) → ν̄_e(p_3) e⁻(p_4) g(p_5) g(p_6) g(p_7).

p p → Z + 3 jets
Partonic process: dd̄ → e⁺e⁻ggg. The finite part for this process is given in CDR and was compared to the new version of NJet [16]. We also find agreement for the phase space point given by the BlackHat Collaboration [25].

Table 9: Benchmark point for the subprocess u(p_1) ū(p_2) → Z(p_3) Z(p_4) Z(p_5) Z(p_6).

Table 10: Benchmark point for the subprocess u(p_1) ū(p_2) → W⁺(p_3) W⁻(p_4) W⁺(p_5) W⁻(p_6).

p p → tt + 2 jets
Partonic process: gg → ttgg. The finite part for this process is given in CDR and was compared with Ref. [29].

Table 13: Benchmark point for the subprocess g(p_1) g(p_2) → t(p_3) t̄(p_4) g(p_5) g(p_6).

Conclusions
Integrand reduction techniques have changed the way the decomposition of scattering amplitudes in terms of independent integrals is performed. In these approaches, the coefficients which multiply each integral can be determined completely algebraically, relying on the knowledge of the universal structure of the residues of the amplitudes at each multiple cut. The residues are irreducible polynomials in the components of the loop momentum which are not constrained by the on-shell conditions defining the cuts. The coefficients of the master integrals are a subset of the coefficients of the residues. The generalized unitarity strategy implemented within the integrand decomposition requires solving a triangular system, where the coefficients of the residues, hence of the master integrals, can be projected out of the cuts only after removing the contributions of higher-point residues. By adding one more ingredient to this strategy, namely the Laurent series expansion of the integrand with respect to the unconstrained components of the loop momentum, we improved the system-solving strategy, which becomes diagonal.
We demonstrated that this novel reduction algorithm, implemented in the computer code Ninja and interfaced to the GoSam framework, yields a very efficient and accurate evaluation of multi-particle one-loop scattering amplitudes, regardless of whether massive particles circulate in the loop or participate in the scattering as external legs. We used GoSam+Ninja to compute NLO corrections to a set of non-trivial processes involving up to eight particles.
The level of automation reached in less than a decade by the evaluation of scattering amplitudes at next-to-leading order has been strongly stimulated by the awareness that the methods for computing the virtual contributions were simply not sufficient, while the techniques for controlling the infrared divergences and for performing the phase-space integration were already available. Nowadays the scenario has changed, and one-loop contributions to many multi-particle scattering reactions are waiting to be integrated. We hope that these advancements can stimulate the development of novel methods for computing cross sections and distributions at next-to-leading-order accuracy for high-multiplicity final states.