1 Introduction

While the LHC is still running, preparation is starting for the High-Luminosity LHC. As part of this preparation, the CPU efficiency of our Monte-Carlo generators is crucial due to the sheer number of events that need to be generated. Given the current constraints on the LHC computing budget, this will not be possible without significant software improvements [1, 2]. While the full CPU budget of the LHC experiments is not dominated by event generation, event generation is still estimated to represent between ten and twenty percent of it. Consequently, we have received a specific request to speed up that step by at least 20% and ideally by a factor of two [1].

In addition to the High-Luminosity LHC, work is also starting for future high-energy accelerators [3]. Going to the high-energy regime will amplify multi-scale issues which in turn can reduce the efficiency of event generation. This issue is particularly striking within MadGraph5_aMC@NLO [4, 5] (MG5aMC) for VBF-like processes where the current phase-space integration algorithm either fails to generate the requested number of events or takes an unreasonable time to do so.

The different avenues for speeding up Monte-Carlo integration are a well-covered topic. Such efforts can be classified into four different categories. First, one can optimise the evaluation of the function being integrated, which in our case is the matrix element [6,7,8,9]. Second, one can optimise the integration method to minimise the number of times such functions need to be evaluated [10,11,12,13,14]. Third, one can try to use the various types of hardware (e.g. GPU, MPI, vectorisation) more efficiently [15,16,17,18]. Finally, one can play with the weights of the sample to optimise/re-use information [19,20,21,22,23,24].

In the context of this work, we will focus on optimising MG5aMC, one of the main Monte-Carlo generators [25,26,27], and we will combine two different methods to achieve our goal: one optimising the time to evaluate the matrix element and one optimising the phase-space integrator. The strategy we employed was to keep the main design choices in place (in particular the helicity amplitude method [28] and the single diagram enhancement method [29]) and to study them in depth to see how they could be further improved.

To reduce the time needed to evaluate a given squared matrix element, we use a quite standard memory/CPU trade-off guided by physics considerations. We have identified parts of the computation that can be stored in memory in order to avoid their re-computation later. We dubbed this method helicity recycling since the additional terms stored in memory correspond to identical sub-expressions shared between different helicity configurations. This improvement will be presented in Sect. 2. We start in Sect. 2.1 by presenting the helicity formalism used in MG5aMC, then we continue in Sect. 2.2 by explaining the main idea behind helicity recycling. The details of the associated speed-up will then be presented in Sect. 2.3.

The second improvement targets the phase-space integrator. Unlike the first method, it is more challenging here to introduce a change that acts positively on all processes. On the other hand, the expected (and obtained) gain can be much more impressive with this method. In this paper we will mainly focus on the speed-up of VBF-like processes, since they are the ones for which MG5aMC has some specific issues.

This will be covered in Sect. 3, where we start by reviewing the current multi-channel strategy (the single diagram enhancement method) in Sect. 3.1. We then explain in Sect. 3.2 the methods used to get a better handle on the integration of t-channel propagators. We then turn in Sect. 3.3 to the deeper changes we made to the multi-channel strategy, as well as their impact on some physical observables – which are beyond LO accurate. Our speed-ups are then compared to the older version of the code in Sect. 3.4.

Our conclusions will be presented in Sect. 4. We also provide two appendices. In the first, we give a brief manual on how to tune the optimisation parameters. In the second, we describe the modifications (and the associated conventions) made to the Aloha package [30] in relation to the helicity recycling algorithm.

2 Helicity recycling within the helicity amplitude method

2.1 Helicity Amplitudes and previous optimisations in MG5aMC

When evaluating a matrix element one can identify two key structures: the Lorentz structure of the matrix element and its colour structure. Within MG5aMC, the evaluation of these two structures factorises at the amplitude level [9]. Hence it is possible to discuss one without the other. The helicity-recycling optimisation only impacts the evaluation of the Lorentz structure and in this section we will explain how this evaluation is performed.

2.1.1 Helicity amplitude formalism

Matrix elements typically contain factors of spinors (from the external fermions) and Lorentz vectors (from internal propagators). When analytically evaluating the square of this matrix element it is common to remove the dependence on spinors via their on-shell condition:

$$\begin{aligned} \sum _{\text {helicity}} u(p)\,{\bar{u}}(p) = \slashed{p} + m, \end{aligned}$$
(1)

where the sum is over helicity. The squared matrix element can then be reduced to a series of scalar products between Lorentz vectors. However, such a formalism is typically not used by programs that perform numerical evaluations of matrix elements. This is because such a formalism will cause the computational complexity to grow quadratically with the number of Feynman diagrams (due to the presence of interference terms).Footnote 1

One solution is to use the helicity amplitude formalism [32,33,34] where the summation over helicity is postponed. Under this formalism the matrix element is reduced to a series of spinor products, rather than Lorentz products.Footnote 2 These spinors will depend on the specific helicities of the external particles. The advantage of this formalism is that its complexity grows linearly with the number of diagrams (since interference terms can be handled by a simple sum over amplitudes).Footnote 3

Fig. 1 S-channel \(e^-e^+\rightarrow \mu ^- \mu ^+\) diagram with photon mediator. We use \(p_1\), \(p_2\), \(p_3\) and \(p_4\) to label the external momenta

An example will help illustrate the key principles here. Following Ref. [37], let us work in the massless limit and consider the unpolarised \(e^-e^+\rightarrow \mu ^- \mu ^+\) s-channel matrix element with a photon mediator. Let us use \(p_1,p_2\) to label the momenta of the electrons and \(p_3,p_4\) for the muons. This process is pictured in Fig. 1.Footnote 4

In what follows it will also be helpful to use the standard Mandelstam variables, defined as:

$$\begin{aligned} s&= (p_1+p_2)^2 = (p_3+p_4)^2 \end{aligned}$$
(2)
$$\begin{aligned} t&= (p_1-p_3)^2 = (p_4-p_2)^2 \end{aligned}$$
(3)
$$\begin{aligned} u&= (p_1-p_4)^2 = (p_3-p_2)^2 \end{aligned}$$
(4)

Accordingly, this matrix element is written as:

$$\begin{aligned} i{\mathcal {M}} = (-ie)^2{\bar{v}}(p_2)\gamma _\mu u(p_1) \frac{-ig^{\mu \nu }}{s} {\bar{u}}(p_3) \gamma _\nu v(p_4). \end{aligned}$$
(5)

Remember that u(p) and v(p) are Dirac spinors and can either be left- or right-handed. In the massless limit this corresponds to them having \(+1\) or \(-1\) helicity eigenvalues respectively. In what follows we will also use \(\psi \) to denote a general Dirac spinor. The subscript L and R will be used to denote left- and right-handedness respectively. It is common to use the following notation when expressing helicity amplitudes, for a spinor of momentum \(p_n\):

$$\begin{aligned} \psi _L = n\rangle&, \quad \psi _R = n], \end{aligned}$$
(6)
$$\begin{aligned} {\bar{\psi }}_L = \langle n&, \quad {\bar{\psi }}_R = [ n. \end{aligned}$$
(7)

One can show that \([n \gamma ^\mu m] = \langle n \gamma ^\mu m\rangle = 0\). Hence, for the matrix element to be non-zero, the incoming fermions must have opposite helicity and similarly for the outgoing. Hence, for all the possible helicity combinations we could construct, the only ones that give a non-zero matrix element are:

$$\begin{aligned}&1^-2^+3^-4^+, \end{aligned}$$
(8)
$$\begin{aligned}&1^+2^-3^+4^-, \end{aligned}$$
(9)
$$\begin{aligned}&1^+2^-3^-4^+, \end{aligned}$$
(10)
$$\begin{aligned}&1^-2^+3^+4^-. \end{aligned}$$
(11)

Here, the notation \(n^m\) means the fermion with momentum \(p_n\) has helicity m. The first helicity combination will give:

$$\begin{aligned} i{\mathcal {M}}(1^-2^+3^-4^+)&=(-ie)^2\langle 2 \gamma _\mu 1] \frac{-ig^{\mu \nu }}{s}\langle 3 \gamma _\nu 4], \end{aligned}$$
(12)
$$\begin{aligned}&=2\frac{ie^2}{s}[41]\langle 23 \rangle . \end{aligned}$$
(13)

Using the relation \([nm]\langle mn \rangle = \langle m n \rangle [nm] = 2\, p_m \cdot p_n \), one finds the square to be:

$$\begin{aligned} \left| {\mathcal {M}}(1^-2^+3^-4^+)\right| ^2=4e^4\frac{u^2}{s^2}. \end{aligned}$$
(14)

By parity, the helicity combination \(1^+2^-3^+4^-\) will give the same result. The only remaining contributing helicity combinations are the same two but with \(1 \leftrightarrow 2 \). By symmetry arguments, one can see they will give the same result as in Eq. (14) but with \(u \rightarrow t\). Hence the final result for this matrix element becomes:

$$\begin{aligned} \frac{1}{4}\sum _{\text {helicity}}\left| {\mathcal {M}}\right| ^2=2e^4\frac{t^2+u^2}{s^2}. \end{aligned}$$
(15)
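As a quick numerical cross-check of Eqs. (2)–(4) and (15), the short Python snippet below builds massless back-to-back momenta (the beam energy, scattering angle and coupling value are arbitrary illustrative choices) and evaluates the spin-averaged squared matrix element:

    import math

    def dot(p, q):                     # Minkowski product with (+,-,-,-) metric
        return p[0]*q[0] - p[1]*q[1] - p[2]*q[2] - p[3]*q[3]

    def mandelstam(p1, p2, p3, p4):
        s = dot([a + b for a, b in zip(p1, p2)], [a + b for a, b in zip(p1, p2)])
        t = dot([a - b for a, b in zip(p1, p3)], [a - b for a, b in zip(p1, p3)])
        u = dot([a - b for a, b in zip(p1, p4)], [a - b for a, b in zip(p1, p4)])
        return s, t, u

    # back-to-back massless beams of energy E, muons emitted at angle theta
    E, theta, e = 100.0, 0.3, math.sqrt(4 * math.pi / 137.0)
    p1 = [E, 0.0, 0.0,  E]
    p2 = [E, 0.0, 0.0, -E]
    p3 = [E,  E * math.sin(theta), 0.0,  E * math.cos(theta)]
    p4 = [E, -E * math.sin(theta), 0.0, -E * math.cos(theta)]

    s, t, u = mandelstam(p1, p2, p3, p4)
    assert abs(s + t + u) < 1e-6 * s            # massless limit: s + t + u = 0
    m2_avg = 2 * e**4 * (t**2 + u**2) / s**2    # Eq. (15), spin-averaged
    print(s, t, u, m2_avg)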

2.1.2 MadGraph implementation

The main points from the simple example looked at in Sect. 2.1.1 are:

  1. Diagrams are evaluated at the amplitude level (before squaring the matrix element).

  2. To get the final result these diagrams must be summed over helicity combinations.

  3. Only some of these helicity combinations will actually contribute.

Fig. 2 Iterative steps used to evaluate a given Feynman diagram in the helicity amplitude formalism. In (a) the spinors of the external particles are evaluated. In (b) and (c) the algorithm works through each vertex (circled), evaluating the wave function of the associated propagator (in red). Finally, in (d) the last vertex is reached, at which point the algorithm uses what it has calculated so far to evaluate the amplitude

MG5aMC follows the HELAS strategy [28] to compute amplitudes. It starts by computing the spinorial representation of all external particles. These spinors are then iteratively combined using some model specific functions (generated thanks to the Aloha [30] package). In this way the full diagram can be built up. This process is represented in Fig. 2. In the helicity amplitude formalism one distinguishes three types of functions:

  • external wave function: function calculating the spinorial representation (e.g. u, \({{\bar{v}}}\), \(\epsilon ^\mu \)) of the external particles evaluated for a given helicity.

  • internal wave function: function calculating the spinorial representation (see Eq. (16)) of an internal particle (i.e. of a propagator) evaluated for a given helicity combination.

  • amplitude: function fully contracting all spinorial representations and therefore returning the value for a given helicity combination of the amplitude of the associated Feynman diagram.

As depicted in Fig. 2, the evaluation of the matrix element starts by evaluating all the external wave functions, before combining them to get the internal wave functions associated to the propagators. Finally it reaches the last vertex and at that point returns an amplitude.

For example, consider again the \(e^-e^+\rightarrow \mu ^- \mu ^+\) process. After the computation of the spinors associated to the electrons and muons (which depends only on their momenta and helicity), the algorithm will call the routine for calculating the internal photon wave function. The analytic expression of this photon wave function \(\phi _\gamma \) will be:

$$\begin{aligned} \phi _\gamma ^{\mu h_3h_4} = -ie\frac{-ig^{\mu \nu }}{s}{\bar{\psi }}^{h_3}_{\mu ^+} \gamma _\nu \psi _{\mu ^-}^{h_4}, \end{aligned}$$
(16)

where \(\psi _\mu \) has been used to represent the muon spinors. As already mentioned, these will be dependent on the helicity of the muons, labelled \(h_3\) and \(h_4\). Note that the wave function associated to this propagator is not unique since it depends on which vertex is used to compute it.

In this example, we are already at the last vertex and all that remains is to contract the various wave functions, taking into account the Lorentz/spinor structure of the associated vertex. Analytically, this is written as:

$$\begin{aligned} {\mathcal {M}}^{h_1h_2h_3h_4}&= \left( -ie{{\bar{\psi }}}_{e^-}^{h_2} \gamma _\mu \psi _{e^+}^{h_1} \right) \phi _\gamma ^{\mu h_3h_4}, \end{aligned}$$
(17)
$$\begin{aligned}&= (-ie)^2{\bar{\psi }}_{e^-}^{h_2} \gamma _\mu \psi _{e^+}^{h_1} \left( \frac{-ig^{\mu \nu }}{s}{\bar{\psi }}_{\mu ^+}^{h_3} \gamma _\nu \psi _{\mu ^-}^{h_4}\right) , \end{aligned}$$
(18)

where, just as with the muons, \(\psi _e\) has been used to represent the spinors of the electrons. This is the same as the expression in (12) but without the specific choice of helicity combination.

MG5aMC generates a FORTRAN subroutine that carries out the above process for a given matrix element and helicity combination and then returns the squared amplitude.Footnote 5 MG5aMC then performs a loop over all possible helicity combinations, at each point calling this matrix-element subroutine. The results from this loop are then summed to produce the final matrix element. This can be represented, for our \(e^-e^+\rightarrow \mu ^- \mu ^+\) example, by the following pseudo-code:

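A minimal Python sketch of this structure is given below (the code generated by MG5aMC is FORTRAN; the function names, momenta and returned values here are invented placeholders standing in for the HELAS/ALOHA routines, so only the control flow is meaningful):

    def external_wf(p, hel):            # external spinor/polarisation (stub)
        return complex(hel, p[0])

    def internal_wf(w_a, w_b):          # internal photon wave function, cf. Eq. (16) (stub)
        return w_a * w_b

    def amplitude(w_a, w_b, w_int):     # contraction at the last vertex, cf. Eq. (17) (stub)
        return w_a * w_b * w_int

    def matrix(momenta, hels):
        # one call = one helicity combination; returns the squared amplitude
        w1 = external_wf(momenta[0], hels[0])   # e-
        w2 = external_wf(momenta[1], hels[1])   # e+
        w3 = external_wf(momenta[2], hels[2])   # mu-
        w4 = external_wf(momenta[3], hels[3])   # mu+
        photon = internal_wf(w3, w4)
        return abs(amplitude(w1, w2, photon)) ** 2

    momenta = [(100., 0., 0., 100.), (100., 0., 0., -100.),
               (100., 100., 0., 0.), (100., -100., 0., 0.)]
    total = 0.0
    for h1 in (-1, 1):                  # loop over all helicity combinations
        for h2 in (-1, 1):
            for h3 in (-1, 1):
                for h4 in (-1, 1):
                    total += matrix(momenta, (h1, h2, h3, h4))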

The structure generalises to any process: one loops over every helicity combination and, within that loop, over every Feynman diagram, recomputing all external wave functions, internal wave functions and amplitudes each time, before summing the amplitudes over diagrams and squaring.Footnote 6

Obviously, MG5aMC has already implemented a couple of optimisations. First, none of the external wave functions depend on the Feynman diagram under consideration, so their computation can be moved outside of the loop over diagrams. Furthermore, the same propagator can appear in multiple Feynman diagrams, in which case it is also highly beneficial to move its wave function out of the loop over diagrams. Therefore a more accurate representation of MG5aMC, prior to this paper, can be written as:

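Again as an illustrative Python sketch with invented names and toy bookkeeping (not the generated FORTRAN): the external wave functions are computed once per helicity combination outside the diagram loop, a propagator shared by several diagrams is computed only once, and only the contributing helicity combinations are looped over:

    import itertools

    def external_wf(p, hel):            # stub external wave function
        return complex(hel, sum(p))

    def internal_wf(a, b):              # stub propagator wave function
        return a * b

    def amp(wfs):                       # stub: stands in for the full contraction
        out = 1.0 + 0.0j
        for w in wfs:
            out *= w
        return out

    momenta = [(1., 0., 0., 1.), (1., 0., 0., -1.),
               (1., 1., 0., 0.), (1., -1., 0., 0.)]
    # toy bookkeeping: each "diagram" lists its propagators, a propagator being
    # labelled by the external legs that feed it
    diagrams = [[(0, 1)], [(2, 3)]]
    # helicity filtering: keep only combinations found to contribute (toy criterion)
    contributing = [h for h in itertools.product((-1, 1), repeat=4) if h[0] != h[1]]

    total = 0.0
    for hel in contributing:
        ext = [external_wf(p, h) for p, h in zip(momenta, hel)]   # outside the diagram loop
        prop_cache = {}
        amp_sum = 0.0 + 0.0j
        for diag in diagrams:
            props = []
            for i, j in diag:
                if (i, j) not in prop_cache:                      # a propagator shared by several
                    prop_cache[(i, j)] = internal_wf(ext[i], ext[j])   # diagrams is computed once
                props.append(prop_cache[(i, j)])
            amp_sum += amp(ext + props)
        total += abs(amp_sum) ** 2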

Secondly, recall the point made at the start of this section that only a subset of helicity combinations contributes. Hence, another optimisation already included in the above meta-code exploits this fact and makes sure the loop over helicity combinations only includes “contributing” ones. We shall refer to this optimisation as ‘helicity filtering’.Footnote 7

The matrix routine discussed in this section is called by the madevent binary. This binary is also responsible for generating phase-space points, evaluating the PDFs and writing the events (amongst other things). We can evaluate the computational cost of the matrix routine by comparing the number of instructions it executes to the total number executed by madevent. This is presented in Table 1 for top-quark pair production plus gluons. The diagrams for \(t{\bar{t}}\) are simpler and fewer in number than those of \(t{\bar{t}}gg\), and even more so than those of \(t{\bar{t}}ggg\). Hence the total number of instructions executed by madevent increases across these processes. Furthermore, this means the matrix routine is also responsible for an increasing percentage of the instructions: \(23\% \rightarrow 96\% \rightarrow {\sim }100\%\).

We also see that for \(t{\bar{t}}gg\) and \(t{\bar{t}}ggg\) (the more complex processes) the amplitude routines account for 44% and 51% of the computation, making them the dominant contribution to the matrix-element calculation. This is again due to the higher number of diagrams, and to the fact that the number of unique propagators does not scale as fast as the diagram multiplicity. Hence it is important that any future optimisation targets not just the wave-function routines but also the amplitude routines.

Finally, one can also see from the table that the wave-function and amplitude routines do not add up to the total number of matrix-routine instructions. This is because the routine has other things to evaluate, most notably the colour factors for the relevant amplitudes. This computation is even the second hot-spot for the three-gluon case and will therefore limit the potential impact of our optimisation.

Table 1 Here we present the number of instructions evaluated by the matrix routine (matrix1) and the total number of instructions evaluated by madevent. In brackets we also present these numbers as a percentage of the madevent total. These results are presented for three processes: \(gg\rightarrow t\bar{t}\), \(gg\rightarrow t\bar{t}gg\) and \(gg\rightarrow t\bar{t}ggg\). We have also broken down the matrix routine into the number of instructions (again alongside their percentage of the total) evaluated by calls to functions that evaluate external spinors (ext), internal wavefunctions (int) and amplitudes (amp). The data in this table was obtained using a combination of Valgrind [39] and KCachegrind [40]

2.2 Helicity recycling

In general, when summing over helicity combinations, the same spinor with the same helicity can appear multiple times. For example, in the combinations (8)–(11) each helicity spinor (such as \(1^+\)) appears twice. Hence, when MG5aMC loops over the combinations it will calculate the same wave function for these spinors multiple times (see the above meta-code). This is a waste of computation time (even if, in this simple case, the external wave functions are cheap to evaluate). It would be more efficient to calculate each wave function only once, save its value and reuse it when appropriate.

The same principle also applies to the wave functions of internal particles. Such routines take other wave functions as input; therefore, for a subset of helicity combinations, the same input will be given and the code will waste time re-computing the same quantity. For example, looking at the internal (photon) wave function given in Eq. (16), the helicity combinations (8) and (10) will give exactly the same result. Hence, the computational efficiency can again be improved by calculating such internal wave functions only once and then re-using the value when necessary.

This technique of calling the wave-function routines only once is what we refer to as “helicity recycling” and can be written in terms of pseudo-code as:

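The sketch below uses the same toy Python bookkeeping as above (invented names, dummy numerics): every external and internal wave function is now computed exactly once and stored together with the helicity (combination) it was built from, and the helicity loop only combines the stored values. In the real generated code this loop is unrolled and entries feeding only vanishing amplitudes are additionally dropped (see below).

    import itertools

    def external_wf(p, hel):            # stub external wave function
        return complex(hel, sum(p))

    def internal_wf(a, b):              # stub propagator wave function
        return a * b

    def amp(wfs):                       # stub: stands in for the full contraction
        out = 1.0 + 0.0j
        for w in wfs:
            out *= w
        return out

    momenta = [(1., 0., 0., 1.), (1., 0., 0., -1.),
               (1., 1., 0., 0.), (1., -1., 0., 0.)]
    diagrams = [[(0, 1)], [(2, 3)]]
    contributing = [h for h in itertools.product((-1, 1), repeat=4) if h[0] != h[1]]

    # every stored external wave function carries the helicity it was computed for
    ext = {(leg, h): external_wf(p, h)
           for leg, p in enumerate(momenta) for h in (-1, 1)}
    # every stored propagator carries the helicity combination of the legs feeding it
    prop = {(i, j, hi, hj): internal_wf(ext[i, hi], ext[j, hj])
            for diag in diagrams for (i, j) in diag
            for hi in (-1, 1) for hj in (-1, 1)}

    total = 0.0
    for hel in contributing:            # only combines the values stored above
        wfs = [ext[leg, hel[leg]] for leg in range(len(momenta))]
        amp_sum = 0.0 + 0.0j
        for diag in diagrams:
            amp_sum += amp(wfs + [prop[i, j, hel[i], hel[j]] for (i, j) in diag])
        total += abs(amp_sum) ** 2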

Note that all stored variables now carry an additional index indicating which helicity combination was used to compute them.

While such an optimisation sounds natural, it has two potential issues. First, the amount of RAM needed for the computation will increase. However, the memory footprint is currently dominated by meta-data related to the phase-space integrator rather than by the matrix-element computation itself. The increase in memory needed by helicity recycling is dwarfed by such meta-data; hence we did not observe a sizeable increase and certainly did not face any issue with memory allocation, even for the most complex processes.

Second, while the previous strategy allowed helicity filtering at run time, this method requires us to know which helicity combinations do not contribute when creating the code. In order to determine the null helicities numerically, we have designed the code's work-flow in the following way:

  1. We first allow MG5aMC to create the matrix-element subroutine as it normally would.

  2. We then sample a couple of events in order to determine which helicity combinations and amplitudes do not contribute.

  3. Next, the matrix-element subroutine is rewritten in the new paradigm.

The conversion of the code is done by using a directed acyclic graph to represent how the various wave-function and amplitude routines depend on one another. Having established this graph, the program is able to easily evaluate which wave-function routines are needed to calculate a given diagram. This allows us to avoid duplicate calls to these routines when unrolling the helicity loop. It also allows us to efficiently prune any wave-function (and amplitude) call that is only associated with vanishing amplitudes.Footnote 8 Compared to the helicity filtering discussed in Sect. 2.1.2, this new algorithm is more efficient since it can identify any non-contributing component of the computation, such as an amplitude that vanishes only for one particular helicity combination.
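As a toy illustration of this dependency bookkeeping (the call names and the graph below are invented, not the actual MG5aMC data structures), one can walk the graph backwards from the contributing amplitudes and keep only the reachable calls:

    # node -> the calls it depends on (a directed acyclic graph)
    deps = {
        "AMP1": ["W5", "W1", "W2"],
        "AMP2": ["W6", "W1", "W2"],
        "W5":   ["W3", "W4"],                       # internal wave functions
        "W6":   ["W3", "W4"],
        "W1": [], "W2": [], "W3": [], "W4": [],     # external wave functions
    }
    contributing = ["AMP1"]     # AMP2 found numerically to vanish for this helicity combination

    needed, stack = set(), list(contributing)
    while stack:
        node = stack.pop()
        if node not in needed:
            needed.add(node)
            stack.extend(deps[node])

    pruned = sorted(set(deps) - needed)             # calls that can be dropped, here ['AMP2', 'W6']
    print(sorted(needed), pruned)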

So far we have discussed how helicity recycling is used to minimise calls to external and internal wave functions. However the impact of such an optimisation is at best quite limited since the computation is actually dominated by the amplitude routines. Thankfully, this implementation also allows us to optimise these routines by a significant factor.

For the sake of example (and without any loss of generality) let us assume that the final vertex of a given Feynman diagram is a Standard Model fermion–fermion–vector vertex. This corresponds to (see Eq. (17)):

$$\begin{aligned} {\mathcal {M}}_{h_1h_2h_\phi } = {\bar{\psi }}^{h_1}_1\, \gamma _\mu \, \psi ^{h_2}_2 \phi ^\mu _{h_\phi }. \end{aligned}$$
(19)

where we have explicitly added indices representing the associated helicity (or helicity combination, for internal wave functions) of each component. Now, when computing \({\mathcal {M}}_{h_1h_2h_\phi }\) and \({\mathcal {M}}_{h_1h_2{\tilde{h}}_\phi }\), the factor \({\bar{\psi }}^{h_1}_1\, \gamma _\mu \, \psi ^{h_2}_2\) will be identical and can therefore be re-used. We have thus optimised the code further by recycling this factor. However, a similar factor can also be recycled between \({\mathcal {M}}_{h_1h_2h_\phi }\) and \({\mathcal {M}}_{{\tilde{h}}_1h_2h_\phi }\). It is therefore advantageous to compare, for each Feynman diagram, which of the following expressions:

$$\begin{aligned}&{\bar{\psi }}^{h_1}_1\, \gamma _\mu \, \psi ^{h_2}_2, \end{aligned}$$
(20)
$$\begin{aligned}&\gamma _\mu \, \psi ^{h_2}_2 \phi ^\mu _{h_\phi } , \end{aligned}$$
(21)
$$\begin{aligned}&{\bar{\psi }}^{h_1}_1\, \gamma _\mu \, \phi ^\mu _{h_\phi }, \end{aligned}$$
(22)

can be re-used at the highest frequency, and to adopt the corresponding recycling strategy. This optimisation requires us to define a new type of helicity routine; the details of its implementation in Aloha are presented in the second appendix.
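The following toy Python sketch (with an invented list of contributing helicity assignments, not the Aloha implementation) illustrates one way such a comparison can be made: for each partial contraction of Eqs. (20)–(22) one counts how many distinct values it takes across the contributing helicity assignments of the vertex, and the one with the fewest distinct values is the most profitable to recycle:

    from itertools import product

    # contributing (h1, h2, h_phi) assignments at the last vertex -- invented example
    combos = [c for c in product((-1, 1), (-1, 1), (-1, 0, 1)) if c[0] != c[1]]

    # the three partial contractions of Eqs. (20)-(22), labelled by the pair of
    # helicity indices they depend on
    pairings = {"psi1.gamma.psi2": (0, 1),
                "gamma.psi2.phi":  (1, 2),
                "psi1.gamma.phi":  (0, 2)}

    savings = {}
    for name, (a, b) in pairings.items():
        distinct = len({(c[a], c[b]) for c in combos})   # factors that must be computed
        savings[name] = len(combos) - distinct           # evaluations avoided by recycling
    best = max(savings, key=savings.get)
    print(savings, best)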

2.3 Result

In this section we will quantify the speed-up resulting from using the improvements detailed in Sect. 2.2.

2.3.1 Matrix routine breakdown

Table 2 The same as Table 1 but this time with helicity recycling enabled. We also present the percentage reduction between the two tables

First we reproduce Table 1 with helicity recycling switched on. This is shown in Table 2. One can see that for all processes the total number of evaluated instructions has been reduced:

  • from 13G to 11G for \(t{\bar{t}}\) (15% reduction)

  • from 470G to 180G for \(t{\bar{t}}gg\) (62% reduction)

  • from 11T to 5T for \(t{\bar{t}}ggg\) (55% reduction)

The latter reductions are much larger because evaluating the matrix element represents a larger percentage of the overall computation for those processes. This is because the diagrams are more complex and numerous for \(t{\bar{t}}gg\) and \(t{\bar{t}}ggg\).

Looking at Table 2, we observe that both the external and internal wave-function routines represent, after helicity recycling, a relatively insignificant computational cost. Firstly, they were not that significant before the optimisation and, secondly, they have been reduced by at least a factor of 10 by helicity recycling. The final speed-up therefore depends mostly on the reduction in calls to the amplitude routines. Focusing on the \(t{\bar{t}}gg\) process, one can see that the amplitude routines have seen a large reduction in the number of associated instructions (by a factor of two) but still represent 42% of the overall computation. Although not shown in this table, roughly half of this computation (19% of the total) is spent evaluating simple scalar products (the contraction of Eqs. (20)–(22) with the remaining wave function), which strongly limits the hope of further optimisation.

For the three-gluon final state the situation is similar, even if the reduction in amplitude-routine instructions is closer to a factor of 4. However, for this process the limiting factor is now the computation of the colour factors (taking around 60% of the computation). We have also investigated how that step could be optimised and introduced two simple improvements to the code. First, we use a common sub-expression reduction algorithm [41] on that segment of the code. Second, we merge the numerator and denominator of the colour matrix into a single matrix, reducing the number of operations and allowing for a better memory-access pattern. Combined, those modifications lead to a speed-up of around 20%.Footnote 9
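As a toy illustration of the second change (invented numbers and a simplified colour sum, not the generated FORTRAN): the integer numerators and denominators are divided once, and a single real matrix is then used in the hot loop:

    # toy colour matrix stored as integer numerators and denominators (invented values)
    num = [[16, -2], [-2, 16]]
    den = [[3, 3], [3, 3]]
    colour = [[n / d for n, d in zip(rn, rd)] for rn, rd in zip(num, den)]   # merged once

    def colour_sum(jamp, colour):
        # simplified colour sum: sum_{i,j} conj(jamp_i) * colour_ij * jamp_j
        return sum((jamp[i].conjugate() * colour[i][j] * jamp[j]).real
                   for i in range(len(jamp)) for j in range(len(jamp)))

    print(colour_sum([1.0 + 0.5j, 0.2 - 0.1j], colour))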

2.3.2 Overall speed-up

Having looked at how the computational cost of madevent breaks down into different functions, we now present the overall speed-up. This is shown for a range of processes in Table 3. If \(t_{\text {with}}\) and \(t_{\text {without}}\) are the times it takes to run madevent with and without helicity recycling respectively, then the speed-up is defined as

$$\begin{aligned} \text {speed-up} = \frac{t_{\text {without}}}{t_{\text {with}}}. \end{aligned}$$

As has already been alluded to in Tables 1 and 2, the speed-up we gain from helicity recycling is highly process dependent. Helicity recycling reduces the number of times we must calculate wave functions and amplitudes and so processes with more complicated diagrams and with a higher number of total diagrams see the biggest boost. For example, consider the \(gg\rightarrow t {\bar{t}}\) results shown in Table 3a. As more gluons are added to the final state the gain increases, with the \(t{\bar{t}}gg\) final state seeing a dramatic \(2.27\times \) speed increase.

Table 3 The factor speed-up of madevent as a result of using helicity recycling for a selection of SM processes. Here \(q \in \{u,d,c,s\}\). One can see two general principles at play. Firstly, the more diagrams a process has and the more complicated those diagrams are, the bigger the speed increase. For example, see the effect of adding more gluons in (a). Secondly, the more helicity combinations we filter away, the lower the speed increase. This is why processes with more fermions see a lower speed-up
Table 4 Integration timing (time to solution) on a MacBook Pro (quad-core, 2016) to generate 10k events, comparing the impact with and without helicity recycling

In contrast to this large increase, the similar process \(qq\rightarrow t {\bar{t}} q q\) (where \(q \in \{u,d,c,s\}\)) sees a noticeably lower speed increase of \(1.27\times \). This is because processes involving fermions have a higher number of non-contributing helicity combinations and so helicity filtering will have a bigger effect. Hence, there will be fewer wave functions/amplitudes to evaluate and so helicity recycling will have a smaller impact.

One can see that the other processes presented in Table 3 also follow these general principles regarding diagram complexity and fermion multiplicity. W bosons allow for minimal helicity filtering and so Table 3b displays a large speed increase, whereas electrons – being fermions – suppress the speed-up in Table 3c.

In Table 4 we present these results for a wider range of processes and with a more detailed breakdown. In the ‘hel’ column we present how many helicity combinations are evaluated per phase-space point. In this column we use a ‘\(\rightarrow \)’ to indicate that the maximum value across the various matrix elements is being shown. The columns “survey” and “refine” present the timing of the two main phases of the phase-space integration/event generation for a single run. The “survey” is designed to get a first (imprecise) estimate of the cross section and to determine the relative importance of each contribution. The amount of time spent in the “survey” is therefore independent of the number of requested events. The “refine” stage aims to generate the number of requested events and therefore scales linearly with that number.Footnote 10 In both cases, the timing is the time to solution observed on an i7-7700HQ CPU (2016 MacBook Pro laptop) using up to 8 threads in the standard multi-core mode of the code when requesting ten thousand events.Footnote 11 The last column presents the global speed-up (computed by comparing the sum of the timings of the survey and of the refine) between the version of the code including all the optimisations introduced in this paper (2.9.0) and the same code without helicity recycling (2.9.0 nohel). The optimisations related to the colour computation are present in both columns. A detailed description of the cuts, scale choices, etc. is given as supplementary material, where one can also find all the material required to reproduce this (and subsequent) tables.

Again one can see the same principles at play: the biggest speed increase is seen for complex processes dominated by QCD and vector-boson interactions, as they have the most non-vanishing helicity combinations and diagrams. The disparity of the gain between “survey” and “refine” can be explained by the different relative importance of the various matrix elements in each of the two steps.

Notice that the tendency of helicity recycling to provide the largest speed-ups for the most computationally intensive processes is very convenient. For example, in a \(pp\rightarrow 4j\) simulation, the processes with \(j=g\) will heavily dominate the calculation and so speeding those up is the most effective way to speed up the overall simulation. This is why the simulation sees an impressive \(2.1\times \) speed-up in Table 4.

3 Phase-space integrator

3.1 Monte-Carlo integration and single diagram enhancement

In addition to the speed of the matrix-element evaluation, the efficiency of the phase-space integrator is another crucial factor determining the speed of the computation, since it controls how many times the matrix element needs to be evaluated. Due to both the number of dimensions of integration and the requirement to generate uncorrelated events, the only suitable method is Monte-Carlo integration. However, the convergence of the method is quite slow (\(1/\sqrt{N}\), where N is the number of times the matrix element is evaluated).

In Monte-Carlo methods (see [42] for a nice review), one evaluates the integrand at random values of the variables of integration. The estimator of the integral (\(I_N\)) is simply given by the average of the function evaluated at these random points \(x_i\):

$$\begin{aligned} \int f(x)\, dx \approx I_N = \frac{1}{N} \sum _{i=1}^N f(x_i). \end{aligned}$$
(23)

The statistical error (\(\Delta I_N\)) of this estimator \(I_N\) is controlled by the variance of the integrand and can be estimated as

$$\begin{aligned} \Delta I_N = \frac{\text {Var}_f}{\sqrt{N}}\approx \frac{1}{\sqrt{N}} \sqrt{\frac{1}{N}\sum _{i=1}^N f^2(x_i) -I_N^2}. \end{aligned}$$
(24)
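A minimal, runnable illustration of Eqs. (23) and (24), assuming a unit integration volume over \([0,1]\) (function and sample size chosen arbitrarily):

    import math
    import random

    def mc_integrate(f, n, rng=random.Random(1)):
        """Monte-Carlo estimate of the integral of f over [0,1] and its statistical error."""
        values = [f(rng.random()) for _ in range(n)]
        i_n = sum(values) / n                               # Eq. (23)
        var = sum(v * v for v in values) / n - i_n ** 2
        return i_n, math.sqrt(var / n)                      # Eq. (24)

    estimate, error = mc_integrate(lambda x: 3.0 * x ** 2, 10000)   # exact value: 1
    print(estimate, "+/-", error)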

Since the convergence rate is fixed to \(\frac{1}{\sqrt{N}}\), the various avenues of optimisation consist in modifying the function being integrated to effectively reduce its variance. In MG5aMC, we use the single diagram enhancement method [29], which combines various importance sampling methods – both analytical [42] and numerical [43, 44] – on top of a multi-channel strategy. The integral is decomposed as a sum of various contributions:

$$\begin{aligned} \int |M|^2&= \sum _i \int |M_i|^2 \frac{|M|^2}{\sum _j |M_j|^2}, \end{aligned}$$
(25)
$$\begin{aligned}&\equiv \sum _i \int \alpha _i |M|^2, \end{aligned}$$
(26)

where the indices i and j range over individual (or potentially a subset of) Feynman diagrams. The values \(\alpha _i\equiv \frac{|M_i|^2}{\sum _j |M_j|^2}\) are called the channel weights; they do not modify the value of the integral (as long as \(\sum _i \alpha _i =1\) for every phase-space point) but do impact the variance of the integrand and therefore the speed of the computation.
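A minimal numerical sketch of Eqs. (25) and (26) at a single phase-space point (the squared-diagram values below are invented):

    # |M_i|^2 for each channel and the full |M|^2 (including interference) at one
    # phase-space point -- invented numbers for illustration
    m_i_sq = [0.80, 0.15, 0.05]
    m_sq = 0.90

    alphas = [x / sum(m_i_sq) for x in m_i_sq]       # channel weights, summing to one
    integrands = [a * m_sq for a in alphas]          # channel i integrates alpha_i * |M|^2
    assert abs(sum(alphas) - 1.0) < 1e-12
    assert abs(sum(integrands) - m_sq) < 1e-12       # the decomposition is exact
    print(alphas)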

While in general the value \(\alpha _i\) could be any arbitrary function, the single diagram enhancement method makes a specific choice a priori. This choice is particularly motivated by the classical limit where interference terms between the Feynman diagrams are small. In that case

$$\begin{aligned} \frac{|M|^2}{\sum _j |M_j|^2} \approx 1, \end{aligned}$$
(27)

and therefore we have

$$\begin{aligned} \int |M|^2 = \sum _i \int |M_i|^2 \frac{|M|^2}{\sum _j |M_j|^2} \approx \sum _i \int |M_i|^2. \end{aligned}$$
(28)

In other words, with this choice the assumption is that each of the terms of the sum – called channels of integration – mainly behaves as a single diagram squared, from which the poles are easily identifiable and importance sampling is relatively easy to implement (see Sect. 3.2). The role of the machine-learning algorithm (MG5aMC uses a modified VEGAS algorithm [43]) is then mainly to catch the impact of the \(\frac{|M|^2}{\sum _j |M_j|^2}\) term as well as other sources of deviation from the ideal case (such as the impact of generation cuts or of the running of the strong coupling).

While this method works extremely well in general, it is not the best-suited approach for VBF-like processes, especially at high energy where large interferences occur due to gauge symmetry. As a matter of fact, MG5aMC is much slower than many other programs [45] (in particular VBFNLO [46]) for such types of generation.

3.2 t-channel strategy

When running MG5aMC, one can easily identify that the slowest channels of integration are the ones with multiple t-channel propagators. In MG5aMC, we handle t-channel propagators with the following change of variable/phase-space measure [47, 48]:

$$\begin{aligned} d\Phi _2&= \frac{d^3k_1}{(2\pi )^32E_1}\frac{d^3k_2}{(2\pi )^32E_2} (2\pi )^4\delta ^{(4)}(p+q-k_1-k_2), \end{aligned}$$
(29)
$$\begin{aligned}&=\frac{1}{16\pi ^2 \sqrt{\lambda (t_1,q^2,p_1^2)}} dt_1 d\phi , \end{aligned}$$
(30)
Fig. 3 Feynman diagram used to fix the convention for the variables

where, as in Fig. 3, p and q are the initial-state momenta and \(k_1\) and \(k_2\) the final-state ones, \(E_1\) (respectively \(E_2\)) is the energy of \(k_1\) (respectively \(k_2\)), \(S=(p+q)^2\) is the square of the centre-of-mass energy of the collision, \(t_1\) is the virtuality of the t-channel propagator, given by \(t_1=(p-k_1)^2\), and \(\lambda (a,b,c)=a^2+b^2+c^2-2ab-2ac-2bc\).

The integration over \(t_1\) is bounded by the following values:

$$\begin{aligned} t^\pm _1&= m_p^2+m_1^2 - \frac{1}{2S}\Big ((S+m_p^2-m_q^2)(S+m_1^2-m_2^2)\nonumber \\&\quad \mp \lambda ^\frac{1}{2}(S, m_p^2,m_q^2)\lambda ^\frac{1}{2}(S,m_1^2,m_2^2) \Big ). \end{aligned}$$
(31)
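For illustration, a small Python helper (hypothetical function names, example masses) evaluating the Källén function and the bounds of Eq. (31):

    import math

    def kallen(a, b, c):
        return a * a + b * b + c * c - 2 * a * b - 2 * a * c - 2 * b * c

    def t_limits(S, mp2, mq2, m12, m22):
        """t1^+ and t1^- of Eq. (31) for a 2 -> 2 splitting of total invariant mass S."""
        lam_in = math.sqrt(kallen(S, mp2, mq2))
        lam_out = math.sqrt(kallen(S, m12, m22))
        common = (S + mp2 - mq2) * (S + m12 - m22)
        t_plus = mp2 + m12 - (common - lam_in * lam_out) / (2.0 * S)
        t_minus = mp2 + m12 - (common + lam_in * lam_out) / (2.0 * S)
        return t_plus, t_minus

    # e.g. massless incoming partons producing a W boson (80.4 GeV) plus a massless parton
    print(t_limits(S=1000.0 ** 2, mp2=0.0, mq2=0.0, m12=80.4 ** 2, m22=0.0))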

In the presence of multiple t-channel propagators, MG5aMC writes them as

$$\begin{aligned} t_1&=(p-k_1)^2 \nonumber \\ t_2&=(p-k_1-k_2)^2 \nonumber \\&\ldots \nonumber \\ t_{n-1}&=(p-k_1-k_2-\ldots -k_{n-1})^2 \end{aligned}$$
(32)

and performs the integration in the following (ordered) way

$$\begin{aligned} \int dt_{n-1}\int dt_{n-2} \dots \int dt_{1}, \end{aligned}$$
(33)

meaning that we first integrate over \(t_1\), then \(t_2\), and continue up to \(t_{n-1}\).

Fig. 4 Comparison of various orderings of the three variables of integration corresponding to the invariants of time-like particles for the channel associated with the Feynman diagram represented on the left. We present the relative error, the number of events generated at a given iteration and the associated un-weighting efficiency

The combination of such an ordering with the boundary condition of Eq. (31) creates correlations between the variables of integration. Such correlations are problematic for any type of VEGAS-like algorithm [42, 43]. The importance of the ordering can be seen in Fig. 4, where we compare various orderings for a given channel of integration (the associated diagram is displayed in the figure). We present both the un-weighting efficiency and the estimated statistical uncertainty after three and five iterations, which gives a hint about the convergence of the integration grid.

In this example, it is clear that the original ordering strategy picked by MG5aMC is sub-optimal (between 3 and 6 times slower than the optimal strategy, depending on the number of iterations). Not only is the best strategy more efficient at generating events, but the associated grid also seems to need fewer iterations to converge.

While we hand-picked an example of a channel of integration where the ordering was sub-optimal, it would not have been difficult to also present cases where it was optimal (e.g. just flip the initial state in the example). Indeed, the optimal ordering depends deeply on the channel of integration under consideration.

On top of the old ordering, we have added support for three additional orderings (see details in the first appendix). Each channel of integration is then associated with one of those orderings such that the virtuality of the most singular t-channel propagator is integrated first. In principle one could use the values of \(t_i^+\) (Eq. 31) to choose such an ordering, but this is technically not possible and we used a simple heuristic approach for this determination.

Table 5 Integration timing (time to solution) on a MacBook Pro (quad-core, 2016) to generate 10k events, depending on the t-channel ordering strategy. If the generation fails to generate enough events the timing is re-scaled accordingly; in such a case the timing is set in bold and the number of events actually generated is indicated

Our resulting timings are presented in Table 5, which contains the same type of information as Table 4. The comparison is between the version of the code containing all the optimisations of this paper (2.9.0) and the same version of the code where the ordering strategy was forced to the old one (Eq. 33). While in general the “survey” presents only a small speed-up,Footnote 12 a more sizeable gain is achieved during the “refine”. The actual speed-up will therefore slightly increase when requesting more events. Additionally, one can note that the biggest gain is achieved for the slowest processes.

One process (VBF with a transversely polarised W boson) shows a significant slow-down (1.6 times slower). Our investigation shows that the ordering picked in that case was correct, but the convergence of the grid was slower than with the previous strategy, leading to the slow-down. Such an issue can only happen for relatively easy channels of integration, since more complex processes would need more iterations and the effect would then disappear (as observed for the VBF process at 100 TeV).

Table 6 Integration timing (time to solution) on a MacBook Pro (quad-core, 2016) to generate 10k events, depending on the multi-channel strategy. If the generation fails to generate enough events the timing is re-scaled accordingly; in such a case the timing is set in bold and the number of events actually generated is indicated
Table 7 Integration timing (time to solution) on a MacBook Pro (quad-core, 2016) to generate 10k events. If in 2.8.1 it was not possible to generate ten thousand events, the time is re-scaled accordingly (and the timing is set in bold). The second number presented in that case is the actual number of events that was generated

3.3 New Diagram enhancement strategy

A recent paper [49] pointed out the importance of the gauge choice when generating events within MG5aMC. Even if the amplitude is fully gauge invariant, the definition of the channel weights (\(\alpha _i\)) is not. The presence of large gauge cancellations is related to large interference terms, and therefore the assumption used by the phase-space integrator (Eq. (27)) no longer holds. Consequently, the \(\alpha _i\) associated with Feynman diagrams containing t-channel propagators will be artificially large at high energy. This increases the variance of the function and reduces the efficiency of the method.

We propose here a second strategy for the single diagram enhancement method. Instead of using Eq. (25), we replace \(|M_i|^2\) by the product of the inverse squared propagator denominators (both s- and t-channel) associated with the single diagram under consideration, suitably normalised:

$$\begin{aligned} {{\bar{\alpha }}}_i&\equiv \prod _{\begin{array}{c} k \in \\ \text {propagators} \end{array}} \frac{1}{|p_k^2-M_k^2 -iM_k\Gamma _k|^2}, \end{aligned}$$
(34)
$$\begin{aligned} \alpha _i&= \frac{{{\bar{\alpha }}}_i}{\sum _j {{\bar{\alpha }}}_j}. \end{aligned}$$
(35)
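A small numerical sketch of Eqs. (34) and (35); the propagator virtualities, masses and widths below are invented for illustration:

    def abar(propagators):
        """Unnormalised channel weight of Eq. (34): product over the propagator denominators."""
        weight = 1.0
        for p2, mass, width in propagators:       # (virtuality, mass, width) of each propagator
            weight /= abs(p2 - mass ** 2 - 1j * mass * width) ** 2
        return weight

    # one entry per channel/diagram: its list of propagators (invented kinematics)
    channels = [
        [(-350.0, 0.0, 0.0), (-120.0, 0.0, 0.0)],   # e.g. two t-channel massless exchanges
        [(8500.0, 91.19, 2.44)],                    # e.g. one s-channel Z boson
    ]
    abars = [abar(c) for c in channels]
    alphas = [a / sum(abars) for a in abars]        # Eq. (35)
    print(alphas)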

Such a change in the definition of the multi-channel weights does not in principle impact the cross section (see Eq. 26). However, in practice the choice of the dynamical scale is based on a CKKW-inspired clustering [50], which depends on the Feynman diagram selected by the single diagram enhancement method. This modification of the channel weights will therefore impact the scale associated with each event and hence the cross section and shapes, both within scale uncertainties. In our tests, the impact of this effect was of the order of one percent, so well below the scale uncertainty. In general this can be avoided – if needed – by changing the scale computation to \(H_T/2\) or some other purely kinematical scale choice [51]. However, one should note that it is not possible to avoid such effects when running matched/merged generation within MLM or shower-kT MLM [52, 53], since the CKKW clustering is mandatory in those types of generation.

A more subtle effect of the modification of the channel weights is related to the parton shower. When writing an event to the output file, MG5aMC provides (in addition to the kinematic variables) the colour-dipole representation in the leading-colour approximation. The determination of the eligible dipoles depends on the Feynman diagram (e.g. in the presence of a mixed expansion) and therefore the modification of the multi-channel strategy can impact this selection. One can therefore expect some change, within theoretical uncertainties, after parton shower for some QCD-related observables.

In Table 6, we compare the time needed to generate ten thousand events with the previous strategy and with the one introduced in this paper (all other optimisations of this paper are included in both cases). As one can expect for such a deep modification of the phase-space integration strategy, the observed spectrum of speed-ups (and slow-downs) is extremely broad, ranging from a three-orders-of-magnitude speed-up to a factor-of-five slow-down. It is clear that such an optimisation is a must-have in the context of VBF processes but must be avoided for most QCD multi-jet processes. While the user can easily switch from one strategy to the other (see the first appendix), we have configured the code such that the default value is process dependent. All processes with only one colour flow will use the new method, while other processes will use the old method. We made an exception for pure multi-jet processes, which now use the new method as well. The default for each process is indicated in the last column of Table 6. Since for most QCD processes we keep the previous integration strategy, the caveats on the change in the scale/leading-colour choice mentioned above are naturally mitigated.

3.4 Comparison with older version of MG5aMC

In Table 7, we compare the speed of the code between two versions of MG5aMC (2.8.1 and 2.9.0); 2.9.0 is the first version of the code containing the modifications described in this paper. Let us stress that every optimisation flag is set to its default value and therefore this is the speed-up that a user will observe without playing with any options.

The combined impact of all our optimisations is striking for VBF-like processes, with a speed-up of more than a factor of 30,000 for one process. While this process is very specific and probably not the most important one for many users, all the VBF processes show massive speed-ups, passing from hour-long runs to a couple of minutes. Actually, in many cases, the previous version of the code had a lot of trouble generating the requested number of events and sometimes even converging to the correct cross section. All those problems are now solved with 2.9.0: the cross section converges quickly and events are generated very efficiently.

The gain for the other processes is more modest, firstly because the phase-space integration was much better handled to start with, and secondly because those processes are less sensitive to the t-channel diagrams on which we have focused. Nevertheless, in combination with helicity recycling, the code is around three times faster for processes heavily used at the LHC, a very valuable gain.

4 Conclusion

In order to evaluate a matrix element, MG5aMC must sum the amplitude over all contributing helicity combinations. Before the work of this paper, MG5aMC would calculate every wave function and amplitude separately for each helicity combination. We have successfully restructured the code so that it now calculates a given wave function only once and then reuses the output for all the different helicity combinations. We have also been able to split up the amplitude calculation such that part of it can be recycled across different helicity combinations. This restructuring of the code has also allowed us to avoid calculating parts of the helicity computation that contribute only to vanishing amplitudes. All these optimisations mean that for complex processes with few fermions we can see a speed-up of the code of around \(2\times \). At the other end of the scale, simple processes dominated by fermions may see a speed-up of only a few percent.

Additionally, we have studied the efficiency issues of MG5aMC for VBF-like processes at high energy. We have found that modifying the order of integration of the virtualities of t-channel particles and changing the multi-channel weights is highly valuable, providing game-changing speed-ups for such computations. This has fixed many of the issues faced in the past for such processes.

Combining all those optimisations allows us to exceed the target speed-up requested by the HSF community [1, 2], since we provide code that is at least three times faster for CPU-intensive processes and far faster still for VBF processes.

Table 8 Short description of the various hidden parameters that can be specified within the run_card (configuration file) to tweak the behaviour of the new methods introduced in this paper

Optimisation is important for any program heavily used by the LHC experiments, and MG5aMC represents a non-negligible amount of grid/local-cluster usage. We believe that this paper is a significant milestone for MG5aMC, providing significant speed improvements both in the time to evaluate a phase-space point and in the phase-space integrator. However, this is certainly not the end of the road and this effort has to (and will) continue. First, the techniques developed in this paper need to be applied to the NLO processes within MG5aMC. We do not expect any big difficulty in such a port and expect similar gains in speed. Second, there is still room to optimise the evaluation of the matrix element, even at leading order: work is in progress to exploit the hardware more tightly, with investigations of a GPU port but also of the use of SIMD operations on CPUs [54].