Abstract
In this paper we describe two new optimisations implemented in MadGraph5_aMC@NLO, both designed to speed up the computation of leading-order processes (for any model). First, we implement a new method to evaluate the squared matrix element, dubbed helicity recycling, which results in a factor-of-two speed-up. Second, we have modified the multi-channel handling of the phase-space integrator, providing a tremendous speed-up for VBF-like processes (up to thousands of times faster).
1 Introduction
While the LHC is still running, preparation is starting for the High-Luminosity LHC. As part of this preparation, the CPU efficiency of our Monte-Carlo generators is crucial due to the sheer number of events that need to be generated. Given the current constraints on the LHC IT budget, this will not be possible without significant software improvement [1, 2]. While the full CPU time of the LHC experiments is not dominated by event generation, event generation is still estimated to represent between ten and twenty percent of it. Consequently, we have received a specific request to speed up that step by at least 20% and ideally by a factor of two [1].
In addition to the High-Luminosity LHC, work is also starting for future high-energy accelerators [3]. Going to the high-energy regime will amplify multi-scale issues which in turn can reduce the efficiency of event generation. This issue is particularly striking within MadGraph5_aMC@NLO [4, 5] (MG5aMC) for VBF-like processes where the current phase-space integration algorithm either fails to generate the requested number of events or takes an unreasonable time to do so.
The different avenues for speeding up Monte-Carlo integration are a well-covered topic. Such efforts can be classified into four categories. First, one can optimise the evaluation of the function being integrated, which in our case is the matrix element [6,7,8,9]. Second, one can optimise the integration method to minimise the number of times such functions need to be evaluated [10,11,12,13,14]. Third, one can try to use the various types of hardware more efficiently (e.g. GPU, MPI, vectorisation) [15,16,17,18]. Finally, one can play with the weights of the sample to optimise/re-use information [19,20,21,22,23,24].
In the context of this work, we focus on optimising MG5aMC, one of the main Monte-Carlo generators [25,26,27], and we combine two different methods to achieve our goal: one optimising the time to evaluate the matrix element and one optimising the phase-space integrator. The strategy we employed was to keep the main design choices in place (in particular the helicity amplitude method [28] and the single diagram enhancement method [29]) and to study them in depth to see how they could be further improved.
To reduce the time needed to evaluate a given squared matrix element, we use a fairly standard memory/CPU trade-off guided by physics considerations. We have identified parts of the computation that can be stored in memory in order to avoid their re-computation later. We dubbed this method helicity recycling since the additional terms stored in memory correspond to identical sub-expressions shared between different helicity configurations. This improvement is presented in Sect. 2. We start in Sect. 2.1 by presenting the helicity formalism used in MG5aMC, then we continue in Sect. 2.2 by explaining the main idea behind helicity recycling. The details of the associated speed-up are then presented in Sect. 2.3.
The second improvement targets the phase-space integrator. Contrary to the first method, it is more challenging here to introduce a change that acts positively on all processes. On the other hand, the expected (and obtained) gain can be much more impressive. For this paper we mainly focus on the speed-up of VBF-like processes, since they are the ones where MG5aMC has specific issues.
This is covered in Sect. 3, where we start by reviewing the current multi-channel strategy (the single diagram enhancement method) in Sect. 3.1. We then explain in Sect. 3.2 the methods used to better handle the integration of t-channel propagators. We then turn in Sect. 3.3 to the deeper changes we made to the multi-channel strategy, as well as their impact on some physical observables – which are beyond LO accuracy. Our speed-ups are then compared to the older version of the code in Sect. 3.4.
Our conclusions are presented in Sect. 4. We also provide two appendices. In the first, we give a brief manual on how to tune the optimisation parameters. In the second, we describe the modifications (and the associated conventions) to the Aloha package [30] related to the helicity recycling algorithm.
2 Helicity recycling within the helicity amplitude method
2.1 Helicity Amplitudes and previous optimisations in MG5aMC
When evaluating a matrix element one can identify two key structures: the Lorentz structure of the matrix element and its colour structure. Within MG5aMC, the evaluation of these two structures factorises at the amplitude level [9]. Hence it is possible to discuss one without the other. The helicity-recycling optimisation only impacts the evaluation of the Lorentz structure and in this section we will explain how this evaluation is performed.
2.1.1 Helicity amplitude formalism
Matrix elements typically contain factors of spinors (from the external fermions) and Lorentz vectors (from internal propagators). When analytically evaluating the square of such a matrix element it is common to remove the dependence on spinors via their on-shell conditions, \(\sum _{h} u_{h}(p)\,{\bar{u}}_{h}(p) = \gamma ^\mu p_\mu + m\) and \(\sum _{h} v_{h}(p)\,{\bar{v}}_{h}(p) = \gamma ^\mu p_\mu - m\),
where the sum is over helicity. The squared matrix element can then be reduced to a series of scalar products between Lorentz vectors. However, such a formalism is typically not used by programs that perform numerical evaluations of matrix elements, because it causes the computational complexity to grow quadratically with the number of Feynman diagrams (due to the presence of interference terms).
One solution is to use the helicity amplitude formalism [32,33,34] where the summation over helicity is postponed. Under this formalism the matrix element is reduced to a series of spinor products, rather than Lorentz products. These spinors will depend on the specific helicities of the external particles. The advantage of this formalism is that its complexity grows linearly with the number of diagrams (since interference terms can be handled by a simple sum over amplitudes).
An example will help illustrate the key principles here. Following Ref. [37], let us work in the massless limit and consider the unpolarised \(e^-e^+\rightarrow \mu ^- \mu ^+\) s-channel matrix element with a photon mediator. Let us use \(p_1,p_2\) to label the momenta of the electrons and \(p_3,p_4\) for the muons. This process is pictured in Fig. 1.
In what follows it will also be helpful to use the standard Mandelstam variables, defined as \(s = (p_1+p_2)^2\), \(t = (p_1-p_3)^2\) and \(u = (p_1-p_4)^2\).
Accordingly, this matrix element is written (up to an overall phase) as \({\mathcal {M}} = \frac{e^2}{s}\left[ {\bar{v}}(p_2)\, \gamma ^\mu \, u(p_1)\right] \left[ {\bar{u}}(p_3)\, \gamma _\mu \, v(p_4)\right] \).
Remember that u(p) and v(p) are Dirac spinors and can either be left- or right-handed. In the massless limit this corresponds to them having \(+1\) or \(-1\) helicity eigenvalues respectively. In what follows we will also use \(\psi \) to denote a general Dirac spinor. The subscript L and R will be used to denote left- and right-handedness respectively. It is common to use the following notation when expressing helicity amplitudes, for a spinor of momentum \(p_n\):
One can show that \([n \gamma ^\mu m] = \langle n \gamma ^\mu m\rangle = 0\). Hence, for the matrix element to be non-zero, the incoming fermions must have opposite helicity and similarly for the outgoing. Hence, for all the possible helicity combinations we could construct, the only ones that give a non-zero matrix element are:
Here, the notation \(n^m\) means the fermion with momentum \(p_n\) has helicity m. The first helicity combination will give:
Using the relation \([nm]\langle mn \rangle = \langle m n \rangle [nm] = 2\, p_m \cdot p_n \), one finds the square to be:
By parity, the helicity combination \(1^+2^-3^+4^-\) will give the same result. The only remaining contributing helicity combinations are the same two but with \(1 \leftrightarrow 2 \). By symmetry arguments, one can see they will give the same result as in Eq. (14) but with \(u \rightarrow t\). Hence the final result for this matrix element, averaged over the initial-state helicities, becomes \(2e^4\,(t^2+u^2)/s^2\).
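The helicity bookkeeping above can be cross-checked numerically. The sketch below (plain NumPy, using an assumed chiral-basis massless-spinor convention, not MG5aMC's HELAS routines) builds all sixteen helicity amplitudes explicitly and verifies that only four are non-vanishing and that their squares sum to \(8e^4(t^2+u^2)/s^2\), i.e. \(2e^4(t^2+u^2)/s^2\) once averaged over the four initial-state helicity configurations:

```python
import numpy as np
from itertools import product

# Chiral-basis Dirac matrices (an assumed convention; the helicity sum
# itself is convention independent).
s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)
gamma = [np.block([[Z2, s0], [s0, Z2]])] + \
        [np.block([[Z2, s], [-s, Z2]]) for s in (sx, sy, sz)]
metric = np.array([1, -1, -1, -1])

def u_spinor(p, h):
    """Massless helicity-h spinor (valid when E + pz > 0)."""
    E, px, py, pz = p
    n = np.sqrt(2 * E * (E + pz))
    chi = {+1: np.array([E + pz, px + 1j * py]) / n,
           -1: np.array([-(px - 1j * py), E + pz]) / n}[h]
    w = np.zeros(4, dtype=complex)
    if h == +1:
        w[2:] = np.sqrt(2 * E) * chi
    else:
        w[:2] = np.sqrt(2 * E) * chi
    return w

def v_spinor(p, h):
    return u_spinor(p, -h)  # massless case, up to an irrelevant phase

def bar(w):
    return w.conj() @ gamma[0]

def amp(p1, p2, p3, p4, h1, h2, h3, h4, s):
    """e- e+ -> mu- mu+ s-channel photon amplitude, with e = 1."""
    j_in = np.array([bar(v_spinor(p2, h2)) @ g @ u_spinor(p1, h1) for g in gamma])
    j_out = np.array([bar(u_spinor(p3, h3)) @ g @ v_spinor(p4, h4) for g in gamma])
    return (metric * j_in) @ j_out / s

# Back-to-back kinematics chosen so that E + pz > 0 for every leg.
E, th = 1.0, 0.7
p1 = np.array([E, E, 0.0, 0.0])
p2 = np.array([E, -E, 0.0, 0.0])
p3 = np.array([E, E * np.sin(th), 0.0, E * np.cos(th)])
p4 = np.array([E, -E * np.sin(th), 0.0, -E * np.cos(th)])
s, t, u = 4 * E**2, -2 * E**2 * (1 - np.sin(th)), -2 * E**2 * (1 + np.sin(th))

amps = {h: amp(p1, p2, p3, p4, *h, s) for h in product((+1, -1), repeat=4)}
total = sum(abs(a) ** 2 for a in amps.values())
n_contributing = sum(abs(a) > 1e-12 for a in amps.values())
```

Only four of the sixteen combinations are non-vanishing, matching the combinations listed in Eqs. (8)–(11), and the helicity sum reproduces the trace-technique result.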
2.1.2 MadGraph implementation
The main points from the simple example looked at in Sect. 2.1.1 are:

1. Diagrams are evaluated at the amplitude level (before squaring the matrix element).
2. To get the final result these diagrams must be summed over helicity combinations.
3. Only some of these helicity combinations will actually contribute.
MG5aMC follows the HELAS strategy [28] to compute amplitudes. It starts by computing the spinorial representation of all external particles. These spinors are then iteratively combined using some model specific functions (generated thanks to the Aloha [30] package). In this way the full diagram can be built up. This process is represented in Fig. 2. In the helicity amplitude formalism one distinguishes three types of functions:
- external wave function: function calculating the spinorial representation (e.g. u, \({{\bar{v}}}\), \(\epsilon ^\mu \)) of the external particles evaluated for a given helicity.
- internal wave function: function calculating the spinorial representation (see Eq. (16)) of an internal particle (i.e. of a propagator) evaluated for a given helicity combination.
- amplitude: function fully contracting all spinorial representations, therefore returning the value of the amplitude of the associated Feynman diagram for a given helicity combination.
As depicted in Fig. 2, the evaluation of the matrix element starts by evaluating all the external wave functions, before combining them to get the internal wave functions associated to the propagators. Finally it reaches the last vertex and at that point returns an amplitude.
For example, consider again the \(e^-e^+\rightarrow \mu ^- \mu ^+\) process. After the computation of the spinors associated to the electrons and muons (which depend only on their momenta and helicities), the algorithm will call the routine for calculating the internal photon wave function. The analytic expression of this photon wave function \(\phi _\gamma \) will be:
where \(\psi _\mu \) has been used to represent the muon spinors. As already mentioned, these will be dependent on the helicity of the muons, labelled \(h_3\) and \(h_4\). Note that the wave function associated to this propagator is not unique since it depends on which vertex is used to compute it.
In this example, we are already at the last vertex and all that remains is to contract the various wave functions, taking into account the Lorentz/spinor structure of the associated vertex. Analytically, this is written as:
where, just as with the muons, \(\psi _e\) has been used to represent the spinors of the electrons. This is the same as the expression in Eq. (12) but without the specific choice of helicity combination.
MG5aMC generates a FORTRAN subroutine that carries out the above process for a given matrix element and helicity combination and then returns the squared amplitude. MG5aMC then performs a loop over all possible helicity combinations, at each point calling this matrix-element subroutine. The results from this loop are then summed to produce the final matrix element. This can be represented, for our \(e^-e^+\rightarrow \mu ^- \mu ^+\) example, by the following pseudo-code:
which can be generalised for any process as:
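Schematically, the generalised loop has the following structure (Python-flavoured pseudo-code with illustrative names, not the generated FORTRAN):

```
total = 0
for hel in all_helicity_combinations:
    amps = []
    for diag in feynman_diagrams:
        # external wave functions, recomputed for every diagram and helicity
        wfs = [external_wavefunction(momenta[i], hel[i]) for i in externals]
        # combined iteratively into the wave functions of the propagators
        for prop in diag.propagators:
            wfs.append(internal_wavefunction(prop, wfs))
        # last vertex: full contraction returning the diagram's amplitude
        amps.append(amplitude(diag.last_vertex, wfs))
    total += square_with_colour_factors(amps)
```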
MG5aMC already implements a couple of optimisations. First, none of the external wave functions depend on the Feynman diagram under consideration; they can therefore be moved outside of the loop over diagrams. Furthermore, the same propagator can appear in multiple Feynman diagrams, in which case it is also highly beneficial to move it out of the loop over diagrams. Therefore a more accurate representation of MG5aMC, prior to this paper, can be written as:
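In Python-flavoured pseudo-code (illustrative names), this optimised structure reads:

```
total = 0
for hel in contributing_helicity_combinations:   # helicity filtering
    # external wave functions: hoisted out of the loop over diagrams
    wfs = {i: external_wavefunction(momenta[i], hel[i]) for i in externals}
    # each unique propagator computed once, shared between diagrams
    for prop in unique_propagators:
        wfs[prop] = internal_wavefunction(prop, wfs)
    amps = [amplitude(diag.last_vertex, wfs) for diag in feynman_diagrams]
    total += square_with_colour_factors(amps)
```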
Secondly, recall the point made at the start of this section that only a subset of helicity combinations contributes. Another optimisation already included in the above meta-code exploits this fact and makes sure the loop over helicity combinations only includes “contributing” ones. We shall refer to this optimisation as ‘helicity filtering’.
The matrix routine discussed in this section is called by the madevent binary. This binary is also responsible for generating phase-space points, evaluating the PDFs and writing the events (amongst other things). We can evaluate the computational cost of the matrix routine by comparing the number of instructions it executes to the total number executed by madevent. This is presented in Table 1 for top-quark pair production plus gluons. The diagrams for \(t{\bar{t}}\) are simpler and fewer in number than those of \(t{\bar{t}}gg\) and even more so than those of \(t{\bar{t}}ggg\). Hence the total number of instructions executed by madevent increases across these processes. Furthermore, this means the matrix routine is also responsible for an increasing percentage of the instructions: \(23\% \rightarrow 96\% \rightarrow {\sim }100\%\).
We also see that for \(t{\bar{t}}gg\) and \(t{\bar{t}}ggg\) (the more complex processes) the amplitude routines account for 44% and 51% of the computation, making them the dominant contribution to the matrix-element calculation. This is again due to the higher number of diagrams, and to the fact that the number of unique propagators does not scale as fast as the diagram multiplicity. Hence it is important that any future optimisation targets not just the wave-function routines but also the amplitude routines.
Finally, one can also see from the table that the wave-function and amplitude routines do not add up to the number of matrix-routine instructions. This is because the routine has other things to evaluate, most notably the colour factors for the relevant amplitudes. This computation is even the second hot-spot for the three-gluon multiplicity and will therefore limit the potential impact of our optimisation.
2.2 Helicity recycling
In general, when summing over helicity combinations, the same spinor with the same helicity can appear multiple times. For example, in the combinations (8)–(11) each helicity spinor (such as \(1^+\)) appears twice. Hence when MG5aMC loops over the combinations it will calculate the same wave function for these spinors multiple times (see the above meta-code). This is a waste of computation time (even if in this simple case the external wave functions are cheap to evaluate). It would be more efficient to calculate each wave function only once, save its value and reuse it when appropriate.
The same principle also applies to the wave functions of internal particles. Such routines take other wave functions as input; therefore, for a subset of helicity combinations the same input will be given and the code will waste time re-computing the same quantity. For example, looking at the internal (photon) wave function given in Eq. (16), the helicity combinations (8) and (10) will give exactly the same result. Hence, the computational efficiency can again be improved by calculating such internal wave functions only once and then re-using the value when necessary.
This technique of calling the wave-function routines only once is what we refer to as “helicity recycling” and can be written in terms of pseudo-code as:
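In Python-flavoured pseudo-code (illustrative names), the recycled evaluation reads:

```
# every external wave function computed exactly once per helicity value
for i in externals:
    for h in helicities_of(i):
        wf[i, h] = external_wavefunction(momenta[i], h)
# every unique propagator computed once per contributing sub-combination
for prop in unique_propagators:
    for hsub in contributing_subcombinations(prop):
        wf[prop, hsub] = internal_wavefunction(prop, wf, hsub)
total = 0
for hel in contributing_helicity_combinations:
    amps = [amplitude(diag.last_vertex, wf, hel) for diag in feynman_diagrams]
    total += square_with_colour_factors(amps)
```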
Note that all stored variables have an additional index indicating which helicity combination was used to compute them.
While such an optimisation sounds natural, it has two potential issues. First, the amount of RAM needed for the computation increases. However, memory use is currently dominated by meta-data related to the phase-space integrator, not by the matrix element itself. The additional memory needed by helicity recycling is dwarfed by such meta-data; hence we did not observe a sizeable increase, and we certainly did not face any RAM issues, even for the most complex processes.
Second, while the previous strategy allowed helicity filtering at run time, this method requires us to know, when creating the code, which helicity combinations do not contribute. In order to determine the null helicities numerically, we have designed the code's work-flow in the following way:
1. We first allow MG5aMC to create the matrix-element subroutine as it normally would.
2. We then sample a couple of events in order to determine which helicity combinations and amplitudes do not contribute.
3. Next, the matrix-element subroutine is rewritten in the new paradigm.
The conversion of the code is done using a directed acyclic graph representing how the various wave-function and amplitude routines depend on one another. Having established this graph, the program is able to easily evaluate which wave-function routines are needed to calculate a given diagram. This allows us to avoid duplicate calls to these routines when unrolling the helicity loop. It also allows us to efficiently prune any wave-function (and amplitude) call that is only associated with vanishing amplitudes. Compared to the helicity filtering discussed in Sect. 2.1.2, this new algorithm is more efficient since it can identify any non-contributing component of the computation, such as an amplitude that vanishes only for one particular helicity combination.
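The pruning can be sketched as follows (a minimal Python illustration with hypothetical call names; the real code operates on the generated FORTRAN routines): walk the dependency graph backwards from the contributing amplitudes and keep only the calls reached.

```python
def prune_calls(dependencies, contributing_amplitudes):
    """dependencies maps each call to the list of calls it consumes."""
    needed, stack = set(), list(contributing_amplitudes)
    while stack:
        call = stack.pop()
        if call not in needed:
            needed.add(call)
            stack.extend(dependencies.get(call, []))
    return needed

# Illustrative DAG: AMP2 vanishes for this helicity combination, and WF3
# only feeds AMP2, so both are pruned together.
deps = {"AMP1": ["WF1", "WF2"], "AMP2": ["WF2", "WF3"],
        "WF2": ["WF1"]}
kept = prune_calls(deps, ["AMP1"])
```

Any wave-function call that only feeds vanishing amplitudes never enters the `needed` set and is simply not emitted in the rewritten subroutine.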
So far we have discussed how helicity recycling is used to minimise calls to external and internal wave functions. However the impact of such an optimisation is at best quite limited since the computation is actually dominated by the amplitude routines. Thankfully, this implementation also allows us to optimise these routines by a significant factor.
For the sake of example (and without any loss of generality) let us assume that the final vertex of a given Feynman diagram is a Standard Model fermion–fermion–vector vertex. This corresponds to (see Eq. (17)):
where we have explicitly added indices representing the associated helicity (or helicity combination for internal wave functions) of each component. Now, when computing \({\mathcal {M}}_{h_1h_2h_\phi }\) and \({\mathcal {M}}_{h_1h_2{\tilde{h}}_\phi }\), the factor \({\bar{\psi }}^{h_1}_1\, \gamma _\mu \, \psi ^{h_2}_2\) will be identical and can therefore be re-used. We have therefore optimised the code further by implementing a recycling of this factor. However, a similar factor can also be recycled between \({\mathcal {M}}_{h_1h_2h_\phi }\) and \({\mathcal {M}}_{{\tilde{h}}_1h_2h_\phi }\). Therefore it is advantageous to compare, for each Feynman diagram, which one of the following expressions:
can be re-used most frequently, and to pick the optimal recycling strategy accordingly. This optimisation requires us to define a new type of helicity routine; the details of its implementation into Aloha are presented in the second appendix.
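The gain from recycling such a factor can be illustrated with a simple counting argument (a Python sketch with illustrative helicity ranges: two helicities per massless fermion, three for a massive vector):

```python
from itertools import product

FERMION_HELICITIES = (-1, +1)
VECTOR_HELICITIES = (-1, 0, +1)   # illustrative: a massive vector boson

# Without recycling: the factor psibar_1 gamma_mu psi_2 is recomputed for
# every (h1, h2, h_phi) combination, although it only depends on (h1, h2).
naive_calls = 0
for h1, h2, h_phi in product(FERMION_HELICITIES, FERMION_HELICITIES,
                             VECTOR_HELICITIES):
    naive_calls += 1

# With recycling: compute it once per (h1, h2) and reuse it for every h_phi.
cache, recycled_calls = {}, 0
for h1, h2, h_phi in product(FERMION_HELICITIES, FERMION_HELICITIES,
                             VECTOR_HELICITIES):
    if (h1, h2) not in cache:
        cache[(h1, h2)] = ("current", h1, h2)   # placeholder spinorial object
        recycled_calls += 1
```

Caching over \((h_1,h_2)\) saves a factor of three in this toy count, while caching over \((h_1,h_\phi )\) or \((h_2,h_\phi )\) would only save a factor of two, which is why the code compares the candidate factors diagram by diagram.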
2.3 Result
In this section we will quantify the speed-up resulting from using the improvements detailed in Sect. 2.2.
2.3.1 Matrix routine breakdown
First we reproduce Table 1 with helicity recycling switched on. This is shown in Table 2. One can see that for all processes the total number of executed instructions has been reduced:

- from 13G to 11G for \(t{\bar{t}}\) (15% reduction),
- from 470G to 180G for \(t{\bar{t}}gg\) (62% reduction),
- from 11T to 5T for \(t{\bar{t}}ggg\) (55% reduction).
The latter reductions are much larger because evaluating the matrix element represents a larger percentage of the overall computation for those processes. This is because the diagrams are more complex and numerous for \(t{\bar{t}}gg\) and \(t{\bar{t}}ggg\).
Looking at Table 2, we observe that both external and internal wave-function routines represent, after helicity recycling, a relatively insignificant computational cost. Firstly, they were not that significant before the optimisation, and secondly they have been strongly reduced by helicity recycling (by at least a factor of 10). The final speed-up actually depends more on the reduction in calls to the amplitude routines. Focusing on the \(t{\bar{t}}gg\) process, one can see that the amplitude routines have seen a large reduction in the number of associated instructions (by a factor of two) but still represent 42% of the overall computation. Although not shown in this table, roughly half of this computation (19% of the total) is spent simply evaluating scalar products (the contraction of Eqs. (20)–(22) with the remaining wave function), which strongly limits the hope of further optimisation.
For the three-gluon final state the situation is similar, even if the reduction in amplitude-routine instructions is closer to a factor of 4. However, for this process the limiting factor is now the computation of the colour factor (taking around 60% of the computation). We have also investigated how that step could be optimised and introduced two simple improvements to the code. First, we use a common sub-expression elimination algorithm [41] on that segment of the code. Second, we merge the numerator and denominator of the colour matrix into a single matrix, reducing the number of operations and allowing for a better memory-access pattern. Combined, those modifications lead to a speed-up of around 20%.
2.3.2 Overall speed-up
Having looked at how the computational cost of madevent breaks down into different functions, we now present its overall speed-up. This is shown for a range of processes in Table 3. If \(t_{\text {with}}\) and \(t_{\text {without}}\) are the times it takes to run madevent with and without helicity recycling respectively, then the speed-up is defined as \(t_{\text {without}}/t_{\text {with}}\).
As has already been alluded to in Tables 1 and 2, the speed-up we gain from helicity recycling is highly process dependent. Helicity recycling reduces the number of times we must calculate wave functions and amplitudes and so processes with more complicated diagrams and with a higher number of total diagrams see the biggest boost. For example, consider the \(gg\rightarrow t {\bar{t}}\) results shown in Table 3a. As more gluons are added to the final state the gain increases, with the \(t{\bar{t}}gg\) final state seeing a dramatic \(2.27\times \) speed increase.
In contrast to this large increase the similar process \(qq\rightarrow t {\bar{t}} q q\) (where \(q \in \{u,d,c,s\}\)) sees a noticeably lower speed increase of \(1.27\times \). This is because processes involving fermions have a higher number of non-contributing helicity combinations and so helicity filtering will have a bigger effect. Hence, there will be fewer wave functions/amplitudes to evaluate and so helicity recycling will have a smaller impact.
One can see that the other processes presented in Table 3 also follow these general principles regarding diagram complexity and fermion multiplicity. W bosons allow for minimal helicity filtering and so Table 3b displays a large speed increase, whereas electrons – being fermions – suppress the speed-up in Table 3c.
In Table 4 we present these results for a wider range of processes and with a more detailed breakdown. In the ‘hel’ column we present how many helicity combinations are evaluated per phase-space point. In this column we use a ‘\(\rightarrow \)’ to indicate that the maximum value across the various matrix elements is being shown. The columns “survey” and “refine” present the timing of the two main phases of the phase-space integration/event generation for a single run. The “survey” is designed to get a first (imprecise) estimate of the cross section and to determine the relative importance of each contribution. The amount of time spent in the “survey” is therefore independent of the number of requested events. The “refine” stage aims to generate the number of requested events and therefore scales linearly with the number of events. In both cases, the timing is the time to solution observed on an i7-7700HQ CPU (2016 MacBook Pro laptop) using up to 8 threads in the standard multi-core mode of the code when requesting ten thousand events. The last column presents the global speed-up (computed by comparing the sum of the timings of the survey and of the refine) between the version of the code including all the optimisations introduced in this paper (2.9.0) and the same code without helicity recycling (2.9.0 nohel). The optimisations related to the colour computation are present in both columns. A detailed description of the cuts, scale choices and so on is given as supplementary material. There one can also find all the material required to reproduce this (and subsequent) tables.
Again one can see the same principles at play: the biggest speed increase is seen for complex processes dominated by QCD and vector-boson interactions, as they have the most non-vanishing helicity combinations and diagrams. The disparity of the gain between “survey” and “refine” can be explained by the different relative importance of the various matrix elements in each of the two steps.
Notice that helicity recycling's tendency to speed up the most intensive processes the most is very convenient. For example, in a \(pp\rightarrow 4j\) simulation, the processes with \(j=g\) heavily dominate the calculation, and so speeding those up is the most effective way to speed up the overall simulation. This is why the simulation sees an impressive \(2.1\times \) speed-up in Table 4.
3 Phase-space integrator
3.1 Monte-Carlo integration and single diagram enhancement
In addition to the speed of the matrix-element evaluation, the efficiency of the phase-space integrator is another crucial factor determining the speed of the computation since it controls how many times we need to evaluate the matrix element. Due to both the number of dimensions of integration and also the requirement to generate uncorrelated events, the only suitable method is Monte-Carlo integration. However, the convergence of the method is quite slow (\(1/\sqrt{N}\), where N is the number of times the matrix element is evaluated).
In Monte-Carlo methods (see [42] for a nice review), one evaluates the integrand at random values of the variables of integration. The estimator of the integral (\(I_N\)) is simply given by the average of the function evaluated at these random points \(x_i\): for an integration region of unit volume, \(I_N = \frac{1}{N}\sum _{i=1}^{N} f(x_i)\).
The statistical error (\(\Delta I_N\)) of this estimator \(I_N\) is controlled by the variance of the integrand and can be estimated as \(\Delta I_N = \sqrt{({\overline{f^2}} - {\overline{f}}^{\,2})/N}\).
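A minimal sketch of this estimator and its statistical error (plain Python in one dimension; not MG5aMC's actual integrator):

```python
import math
import random

def mc_integrate(f, n, rng):
    """Plain Monte-Carlo estimate of the integral of f over [0, 1]."""
    values = [f(rng.random()) for _ in range(n)]
    mean = sum(values) / n
    variance = sum((v - mean) ** 2 for v in values) / (n - 1)
    return mean, math.sqrt(variance / n)   # error falls like 1/sqrt(N)

# Example: the integral of 3x^2 over [0, 1] is exactly 1.
estimate, error = mc_integrate(lambda x: 3 * x * x, 10_000, random.Random(7))
```

Quadrupling the number of points only halves the error, which is why reducing the variance of the integrand is the main lever available.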
Since the convergence rate is fixed to \(\frac{1}{\sqrt{N}}\), the various avenues of optimisation consist in modifying the function being integrated so as to reduce its variance. In MG5aMC, we use the single diagram enhancement method [29], which combines various importance-sampling methods – both analytical [42] and numerical [43, 44] – on top of a multi-channel strategy. The integral is decomposed as a sum of various contributions, \(\int |M|^2 \,\mathrm {d}\Phi = \sum _i \int \alpha _i \,|M|^2 \,\mathrm {d}\Phi ,\)
where the indices i and j range over individual (or potentially subsets of) Feynman diagrams. The values \(\alpha _i\equiv \frac{|M_i|^2}{\sum _j |M_j|^2}\) are called the channel weights; they do not modify the value of the integral (as long as \(\sum _i \alpha _i =1\) for every phase-space point) but they do impact the variance of the integrand and therefore the speed of the computation.
While in general the value \(\alpha _i\) could be any arbitrary function, the single diagram enhancement method makes a specific choice a priori. This choice is particularly motivated by the classical limit, where interference terms between the Feynman diagrams are small. In that case \(|M|^2 \approx \sum _j |M_j|^2\),
and therefore we have \(\alpha _i \,|M|^2 \approx |M_i|^2 .\)
In other words, with this choice the assumption is that each of the terms of the sum – called channels of integration – mainly behaves as a single diagram squared, from which the poles are easily identifiable and importance sampling is relatively easy to implement (see Sect. 3.2). The role of the machine-learning algorithm (MG5aMC uses a modified VEGAS algorithm [43]) is then mainly to catch the impact of the \(\frac{|M|^2}{\sum _j |M_j|^2}\) term as well as other sources of deviation from the ideal case (for example the impact of generation cuts or of the running of the strong coupling).
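The mechanics can be illustrated in a one-dimensional toy model (illustrative numbers and names, not the MG5aMC implementation): the "integrand" is an incoherent sum of two sharp peaks, each channel uses \(\alpha _i = f_i/\sum _j f_j\), and samples from a density matched to its own peak. In this interference-free limit the channel weight cancels the pole exactly and each channel has vanishing variance, which is precisely the ideal case assumed by the method:

```python
import math
import random

W = 1e-3                 # common peak width
CENTRES = (0.2, 0.8)     # pole positions of the two "diagrams"

def f_single(x, c):      # one "single diagram squared"
    return 1.0 / ((x - c) ** 2 + W ** 2)

def f_total(x):          # the full integrand (no interference term)
    return sum(f_single(x, c) for c in CENTRES)

def sample_channel(c, rng):
    """Draw x in [0, 1] from the normalised density ~ f_single(., c)."""
    lo, hi = math.atan(-c / W), math.atan((1.0 - c) / W)
    x = c + W * math.tan(lo + rng.random() * (hi - lo))
    density = f_single(x, c) * W / (hi - lo)
    return x, density

def integrate(n_per_channel, rng):
    total = 0.0
    for c in CENTRES:                            # one channel per diagram
        for _ in range(n_per_channel):
            x, density = sample_channel(c, rng)
            alpha = f_single(x, c) / f_total(x)  # channel weight
            total += alpha * f_total(x) / density / n_per_channel
    return total

estimate = integrate(1000, random.Random(5))
exact = sum((math.atan((1.0 - c) / W) + math.atan(c / W)) / W for c in CENTRES)
```

Once interference terms (or cuts, or running couplings) distort the integrand, the cancellation is no longer exact, and this residual variance is what the VEGAS-like grid has to absorb.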
While this method works extremely well in general, it is not the most suited approach for VBF-like processes, especially at high energy where large interference occurs due to gauge symmetry. As a matter of fact, MG5aMC is much slower than many other programs [45] (in particular VBFNLO [46]) for such types of generation.
3.2 t-channel strategy
When running MG5aMC, one can easily identify that the slowest channels of integration are the ones with multiple t-channel propagators. In MG5aMC, we handle t-channel propagators with the following change of variable/phase-space measure [47, 48]:
where, as in Fig. 3, p and q are the initial-state momenta and \(k_1\) and \(k_2\) the final-state ones, \(E_1\) (respectively \(E_2\)) is the energy of \(k_1\) (respectively \(k_2\)), \(S=(p+q)^2\) is the square of the centre-of-mass energy of the collision, \(t_1\) is the virtuality of the t-channel propagator given by \(t_1=(p-k_1)^2\), and \(\lambda (a,b,c)=a^2+b^2+c^2-2ab-2ac-2bc\).
The integration over \(t_1\) is bounded by the following values:
In the presence of multiple t-channel propagators, MG5aMC writes them as
and performs the integration in the following (ordered) way
meaning that we first integrate over \(t_{1}\), then \(t_{2}\), and continue up to \(t_{n-1}\).
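The reason a single \(t\) integral is easy to perform in isolation can be sketched with the standard inverse-CDF mapping for a \(1/t^2\) pole (a generic textbook trick, not MG5aMC's exact change of variables):

```python
import random

def sample_t(t_minus, t_plus, rng):
    """Sample t in [t_minus, t_plus] (both negative) with density ~ 1/t^2."""
    a, b = -1.0 / t_minus, -1.0 / t_plus   # r = -1/t is uniformly distributed
    r = a + rng.random() * (b - a)
    t = -1.0 / r
    weight = (b - a) / r ** 2              # inverse of the sampling density
    return t, weight

def propagator(t):                         # the 1/t^2 pole of |M_i|^2
    return 1.0 / t ** 2

# MC estimate of the t integral: each draw contributes a constant, i.e.
# the pole has been mapped away entirely.
rng = random.Random(11)
t_minus, t_plus = -100.0, -0.01
draws = [propagator(t) * w for t, w in
         (sample_t(t_minus, t_plus, rng) for _ in range(1000))]
estimate = sum(draws) / len(draws)
```

In a chain of several t-channel propagators, however, the bounds of each subsequent \(t_i\) integral depend on the momenta already generated, which is why the choice of ordering matters.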
The combination of such an ordering with the boundary condition of Eq. (31) creates correlations between the variables of integration. Such correlations are problematic for any VEGAS-like algorithm [42, 43]. The importance of the ordering can be seen in Fig. 4, where we compare various orderings for a given channel of integration (the associated diagram is displayed in the figure). We present both the un-weighting efficiency and the estimated statistical uncertainty after three and five iterations, which gives a hint of the convergence of the integration grid.
In this example, it is clear that the original ordering strategy picked by MG5aMC is sub-optimal (between 3 and 6 times slower than the optimal strategy, depending on the number of iterations). Not only is the best strategy more efficient at generating events, but the associated grid also seems to need fewer iterations to converge.
While we hand-picked an example of a channel of integration where the ordering was sub-optimal, it would not have been difficult to also present cases where it was optimal (e.g. just flip the initial states in the example). Indeed, the optimal ordering depends strongly on the channel of integration under consideration.
On top of the old ordering, we have added support for three additional orderings (see details in the appendix). Each channel of integration is then associated to one of these orderings such that the virtuality of the most singular t-channel propagator is integrated first. In principle one could use the values of \(t_i^+\) (Eq. 31) to choose the ordering, but this is technically not possible and we used a simple heuristic for this determination.
Our resultant timings are presented in Table 5, which contains the same type of information as Table 4. The comparison is between the version of the code containing all the optimisations of this paper (2.9.0) and the same version of the code with the ordering strategy forced to the old one (Eq. 33). While in general the “survey” presents only a small speed-up, a more sizeable gain is achieved during the “refine”. The actual speed-up will therefore slightly increase when requesting more events. Additionally, one can note that the biggest gain is achieved for the slowest processes.
One process (VBF with transversely polarised W bosons) shows a significant slow-down (1.6 times slower). Our investigation shows that the ordering picked in that case was correct, but the grid converged more slowly than with the previous strategy, leading to the slow-down. Such an issue can only happen for relatively easy channels of integration, since more complex processes need more iterations, at which point this effect disappears (as observed for the VBF process at 100 TeV).
3.3 New diagram enhancement strategy
A recent paper [49] pointed out the importance of the gauge choice when generating events within MG5aMC. Even though the amplitude is fully gauge invariant, the definition of the channel weights (\(\alpha _i\)) is not. The presence of large gauge cancellations is related to large interference terms, and therefore the assumption used by the phase-space integrator (Eq. (27)) no longer holds. Consequently, the \(\alpha _i\) associated with Feynman diagrams containing t-channel propagators will be artificially large at high energy. This increases the variance of the function and reduces the efficiency of the method.
We propose here a second strategy for the single-diagram enhancement method. Instead of using Eq. (25), we replace \(|M_i|^2\) by the product of the propagator denominators (both s- and t-channel) associated with the single diagram under consideration (normalised as needed).
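A minimal sketch of this replacement (a hypothetical helper, not the actual MG5aMC implementation): each channel weight is built from the product of Breit–Wigner-like denominators of the propagators appearing in that diagram, then normalised:

```python
def channel_weights(props_per_channel):
    """props_per_channel[i]: list of (p2, mass, width) tuples, one per
    propagator of diagram i; returns the normalised alpha_i."""
    raw = []
    for props in props_per_channel:
        w = 1.0
        for p2, m, g in props:
            # Breit-Wigner-like denominator of one propagator
            w *= 1.0 / ((p2 - m * m) ** 2 + (m * g) ** 2)
        raw.append(w)
    tot = sum(raw)
    return [w / tot for w in raw]

# two single-propagator channels: the first one is exactly on-shell
alphas = channel_weights([[(1.0, 1.0, 0.1)], [(5.0, 1.0, 0.1)]])
```

As expected, the on-shell channel receives by far the larger weight, while no matrix-element evaluation is needed to define the \(\alpha_i\).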
Such a change in the definition of the multi-channel weights does not, in principle, impact the cross section (see Eq. 26). However, in practice the choice of the dynamical scale is based on a CKKW-inspired clustering [50], which depends on the Feynman diagram selected by the single-diagram enhancement method. This modification of the channel weights therefore impacts the scale associated with each event, and hence the cross section and shapes, both within scale uncertainties. In our tests, the impact of this effect was at the percent level, well below the scale uncertainty. If needed, it can generally be avoided by switching the scale computation to \(H_T/2\) or some other purely kinematical scale choice [51]. However, one should note that such effects cannot be avoided when running matched/merged generation within MLM or shower-kT MLM [52, 53], since the CKKW clustering is mandatory in those types of generation.
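For reference, a purely kinematical scale such as \(H_T/2\) depends only on the final-state momenta. A sketch (not MG5aMC's internal code), using the common convention of summing transverse masses:

```python
import math

def ht_over_2(final_state):
    """final_state: list of (px, py, mass) for the final-state particles;
    returns H_T/2 with H_T = sum of transverse masses sqrt(m^2 + pT^2)."""
    return 0.5 * sum(math.sqrt(m * m + px * px + py * py)
                     for (px, py, m) in final_state)

# a massless jet with pT = 50 GeV and a Z-like particle at rest in pT
scale = ht_over_2([(30.0, 40.0, 0.0), (0.0, 0.0, 91.2)])
```

Because no Feynman-diagram information enters this definition, the scale is unaffected by the choice of multi-channel strategy.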
A more subtle effect of the modification of the channel weights is related to the parton shower. When writing an event to the output file, MG5aMC provides (in addition to the kinematic variables) the colour-dipole representation in the leading-colour approximation. The determination of the eligible dipoles depends on the Feynman diagram (e.g. in the presence of a mixed expansion), and therefore the modification of the multi-channel strategy can impact this selection. One can therefore expect some change, within theoretical uncertainties, after parton shower for some QCD-related observables.
In Table 6, we compare the time needed to generate ten thousand events with the previous strategy and with the one introduced in this paper (all other optimisations of this paper are included in both cases). As one might expect for such a deep modification of the phase-space integration strategy, the observed spectrum of speed-ups and slow-downs is extremely broad, ranging from a three-orders-of-magnitude speed-up to a factor of five slow-down. It is clear that this optimisation is a must-have for VBF processes but must be avoided for most QCD multi-jet processes. While the user can easily switch from one strategy to the other (see Appendix A), we have configured the code such that the default value is process dependent. All processes with only one colour flow use the new method, while other processes use the old one. We made an exception for pure multi-jet processes, which now use the new method as well. The default for each process is indicated in the last column of Table 6. Since for most QCD processes we keep the previous integration strategy, the caveats on the change in the scale/leading-colour choice mentioned above are naturally mitigated.
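The default selection described above can be summarised as follows (a sketch; the function and argument names are ours, not MG5aMC's):

```python
def default_multichannel_strategy(n_colour_flows, is_pure_multijet):
    """Return which channel-weight definition is used by default:
    'new' = product of propagator denominators, 'old' = |M_i|^2."""
    if n_colour_flows == 1 or is_pure_multijet:
        return "new"
    return "old"
```

Single-colour-flow processes (typical of VBF) and pure multi-jet processes default to the new denominator-based weights; everything else keeps the previous behaviour.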
3.4 Comparison with older version of MG5aMC
In Table 7, we compare the speed of the code between two versions of MG5aMC (2.8.1 and 2.9.0); 2.9.0 is the first version of the code containing the modifications described in this paper. Let us stress that every optimisation flag is set to its default value, so this is the speed-up that a user will observe without touching any options.
The combined impact of all our optimisations is striking for VBF-like processes, with a speed-up of more than 30,000 for one process. While this process is very specific and probably not the most important one for many users, all the VBF processes show massive speed-ups, going from hour-long runs to a couple of minutes. In fact, in many cases the previous version of the code had considerable trouble generating the requested number of events, and sometimes even converging to the correct cross section. All those problems are solved in 2.9.0: the cross section converges quickly and events are generated very efficiently.
The gain for the other processes is more modest: firstly because the phase-space integration was much better handled to start with, and secondly because those processes are less sensitive to the t-channel diagrams on which we have focused. Nevertheless, in combination with helicity recycling, the code is around three times faster for processes heavily used at the LHC, a very valuable gain.
4 Conclusion
In order to evaluate an amplitude, MG5aMC must sum it over all contributing helicity combinations. Before the work of this paper, MG5aMC would calculate every wave function and amplitude separately for each helicity combination. We have restructured the code so that it now calculates a given wave function only once and then reuses the output for all the different helicity combinations. We have also been able to split up the amplitude calculation such that part of it can be recycled across different helicity combinations. This restructuring of the code has also allowed us to avoid calculating parts of the helicity computation that contribute only to vanishing diagrams. All these optimisations mean that for complex processes with few fermions we see a speed-up of around \(2\times \). At the other end of the scale, simple processes dominated by fermions may see a speed-up of only a few percent.
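The core idea of helicity recycling can be illustrated with a toy amplitude over three external legs (placeholder numerics, not actual HELAS calls): caching each (leg, helicity) wave function reduces 8 × 3 = 24 evaluations to 6:

```python
import itertools

cache = {}
n_evals = 0

def wavefunction(leg, hel):
    """Evaluate the external wave function for (leg, hel) at most once."""
    global n_evals
    if (leg, hel) not in cache:
        n_evals += 1
        cache[(leg, hel)] = complex(leg + 1, hel)   # placeholder value
    return cache[(leg, hel)]

total = 0.0
for hels in itertools.product((-1, +1), repeat=3):  # all 8 helicity combinations
    amp = 1 + 0j
    for leg, hel in enumerate(hels):
        amp *= wavefunction(leg, hel)
    total += abs(amp) ** 2    # |M|^2 summed over helicities
```

A naive implementation would evaluate a wave function for every leg in every combination; with the cache, only the 3 legs × 2 helicities = 6 distinct wave functions are ever computed.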
Additionally, we have studied the efficiency issues of MG5aMC for VBF-like processes at high energy. We have identified that modifying the order of integration of the virtualities of t-channel particles, and changing the multi-channel weights, is highly valuable, providing game-changing speed-ups for such computations. This has fixed many of the issues faced in the past for such processes.
Combining all those optimisations allows us to exceed the target speed-up requested by the HSF community [1, 2], since we provide a code at least three times faster for CPU-intensive processes, and far faster still for VBF processes.
Optimisation is important for any program heavily used by the LHC experiments, and MG5aMC represents a non-negligible amount of grid/local-cluster usage. We believe that this paper is a significant milestone for MG5aMC, providing significant speed improvements both in the time to evaluate a phase-space point and in the phase-space integrator. However, this is certainly not the end of the road, and this effort has to (and will) continue. First, the techniques developed in this paper need to be ported to the NLO processes within MG5aMC. We do not expect any big difficulties in such a port, and expect similar gains in speed. Second, there is still room to optimise the evaluation of the matrix element, even at leading order: work is in progress on a tighter link to the hardware, with investigations of a GPU port and of the use of SIMD operations on CPUs [54].
Data Availability Statement
This manuscript has associated data in a data repository. [Authors’ comment: The scripts needed to reproduce Tables 4–7 are included in this published article and its supplementary information files; they can also be downloaded via https://doi.org/10.14428/DVN/B5NADE.]
Notes
This strategy is actually a winning one at low-multiplicity and is used for example in CalcHEP [31].
This is in general possible because \(\text {SU(2)}\times \text {SU(2)}\) is the double cover of SO(1,3).
The Feynman diagrams in this paper were generated using the TikZ-Feynman package [38].
Evaluation of the matrix element for a single phase-space point is also possible in C++ and CUDA, and a Python wrapper is also available.
This representation/meta-code is technically accurate only for colourless processes but the general idea is still valid for QCD processes since MG5aMC is using the colour-flow formalism [9].
This filtering is done numerically, after evaluating a couple of phase-space points.
These vanishing terms are determined during the initial sampling of the matrix element.
One should note that the remaining contribution of the colour matrix is auto-vectorisable, and it is therefore advantageous to compile with hardware-specific flags (giving an additional speed-up of around 10%; see Appendix A for how to do this).
The scaling is not perfectly linear due to various thresholds in the method of integration. For this table (and similar ones) we always request the code to generate ten thousand un-weighted events.
To first approximation, the precision on the cross section is directly related to the number of events generated, which means all integrals are estimated at the percent level.
This is due to the requirement of the code to perform at least three iterations.
References
T. Aarrestad, et al. HL-LHC Computing Review: Common Tools and Community Software, in 2021 Snowmass Summer Study ed. by P. Canal, et al. (2020). https://doi.org/10.5281/zenodo.4009114
J. Albrecht et al., A roadmap for HEP software and computing R & D for the 2020s. Comput. Softw. Big Sci. 3(1), 7 (2019). https://doi.org/10.1007/s41781-018-0018-8
2020 Update of the European Strategy for Particle Physics. CERN Council, Geneva (2020). https://doi.org/10.17181/ESU2020
J. Alwall, R. Frederix, S. Frixione, V. Hirschi, F. Maltoni, O. Mattelaer, H.S. Shao, T. Stelzer, P. Torrielli, M. Zaro, The automated computation of tree-level and next-to-leading order differential cross sections, and their matching to parton shower simulations. JHEP 07, 079 (2014). https://doi.org/10.1007/JHEP07(2014)079
J. Alwall, M. Herquet, F. Maltoni, O. Mattelaer, T. Stelzer, MadGraph 5: going beyond. JHEP 06, 128 (2011). https://doi.org/10.1007/JHEP06(2011)128
J. Alnefjord, A. Lifson, C. Reuschle, M. Sjodahl, The chirality-flow formalism for the standard model (2020)
F.A. Berends, W. Giele, Recursive calculations for processes with n gluons. Nucl. Phys. B 306, 759–808 (1988). https://doi.org/10.1016/0550-3213(88)90442-7
F.A. Berends, W. Giele, H. Kuijf, Exact expressions for processes involving a vector boson and up to five partons. Nucl. Phys. B 321, 39–82 (1989). https://doi.org/10.1016/0550-3213(89)90242-3
F. Maltoni, K. Paul, T. Stelzer, S. Willenbrock, Color flow decomposition of QCD amplitudes. Phys. Rev. D 67, 014026 (2003). https://doi.org/10.1103/PhysRevD.67.014026
M. Backes, A. Butter, T. Plehn, R. Winterhalder, How to GAN Event Unweighting (2020)
J. Bendavid, Efficient Monte Carlo integration using boosted decision trees and generative deep neural networks (2017)
E. Bothmann, T. Janßen, M. Knobbe, T. Schmale, S. Schumann, Exploring phase space with neural importance sampling. Sci. Post Phys. 8(4), 069 (2020). https://doi.org/10.21468/SciPostPhys.8.4.069
C. Gao, S. Höche, J. Isaacson, C. Krause, H. Schulz, Event generation with normalizing flows. Phys. Rev. D 101(7), 076002 (2020). https://doi.org/10.1103/PhysRevD.101.076002
M.D. Klimek, M. Perelstein, Neural network-based approach to phase space integration. Sci. Post Phys. 9, 053 (2020). https://doi.org/10.21468/SciPostPhys.9.4.053
D. Benjamin, J. Childers, S. Hoeche, T. LeCompte, T. Uram, Challenges in scaling NLO generators to leadership computers. J. Phys. Conf. Ser. 898(7), 072044 (2017). https://doi.org/10.1088/1742-6596/898/7/072044
K. Hagiwara, J. Kanzaki, Q. Li, N. Okamura, T. Stelzer, Fast computation of MadGraph amplitudes on graphics processing unit (GPU). Eur. Phys. J. C 73, 2608 (2013). https://doi.org/10.1140/epjc/s10052-013-2608-2
K. Hagiwara, J. Kanzaki, N. Okamura, D. Rainwater, T. Stelzer, Calculation of HELAS amplitudes for QCD processes using graphics processing unit (GPU). Eur. Phys. J. C 7, 513–524 (2010). https://doi.org/10.1140/epjc/s10052-010-1465-5
K. Hagiwara, J. Kanzaki, N. Okamura, D. Rainwater, T. Stelzer, Fast calculation of HELAS amplitudes using graphics processing unit (GPU). Eur. Phys. J. C 6(6), 477–492 (2010). https://doi.org/10.1140/epjc/s10052-010-1276-8
J.R. Andersen, C. Gütschow, A. Maier, S. Prestel, A positive resampler for Monte Carlo events with negative weights. Eur. Phys. J. C 80(11), 1007 (2020). https://doi.org/10.1140/epjc/s10052-020-08548-w
H. Brooks, C.T. Preuss, Efficient multi-jet merging at high multiplicities (2020)
R. Frederix, S. Frixione, S. Prestel, P. Torrielli, On the reduction of negative weights in MC@NLO-type matching procedures. JHEP 07, 238 (2020). https://doi.org/10.1007/JHEP07(2020)238
K.T. Matchev, P. Shyamsundar, OASIS: optimal analysis-specific importance sampling for event generation (2020)
O. Mattelaer, On the maximal use of Monte Carlo samples: re-weighting events at NLO accuracy. Eur. Phys. J. C 76(12), 674 (2016). https://doi.org/10.1140/epjc/s10052-016-4533-7
B. Nachman, J. Thaler, Neural resampler for Monte Carlo reweighting with preserved uncertainties. Phys. Rev. D 102(7), 076004 (2020). https://doi.org/10.1103/PhysRevD.102.076004
E. Bothmann et al., Event Generation with Sherpa 2.2. Sci. Post Phys. 7(3), 034 (2019). https://doi.org/10.21468/SciPostPhys.7.3.034
S. Frixione, P. Nason, C. Oleari, Matching NLO QCD computations with parton shower simulations: the POWHEG method. JHEP 11, 070 (2007). https://doi.org/10.1088/1126-6708/2007/11/070
W. Kilian, T. Ohl, J. Reuter, WHIZARD: simulating multi-particle processes at LHC and ILC. Eur. Phys. J. C 71, 1742 (2011). https://doi.org/10.1140/epjc/s10052-011-1742-y
H. Murayama, I. Watanabe, K. Hagiwara, HELAS: HELicity amplitude subroutines for Feynman diagram evaluations (1992)
F. Maltoni, T. Stelzer, MadEvent: automatic event generation with MadGraph. JHEP 02, 027 (2003). https://doi.org/10.1088/1126-6708/2003/02/027
P. de Aquino, W. Link, F. Maltoni, O. Mattelaer, T. Stelzer, ALOHA: automatic libraries of helicity amplitudes for Feynman diagram computations. Comput. Phys. Commun. 183, 2254–2263 (2012). https://doi.org/10.1016/j.cpc.2012.05.004
A. Belyaev, N.D. Christensen, A. Pukhov, CalcHEP 3.4 for collider physics within and beyond the Standard Model. Comput. Phys. Commun. 184, 1729–1769 (2013). https://doi.org/10.1016/j.cpc.2013.01.014
P. De Causmaecker, R. Gastmans, W. Troost, T.T. Wu, Helicity amplitudes for massless QED. Phys. Lett. B 105, 215 (1981). https://doi.org/10.1016/0370-2693(81)91025-X
P. De Causmaecker, R. Gastmans, W. Troost, T.T. Wu, Multiple Bremsstrahlung in gauge theories at high-energies. 1. General formalism for quantum electrodynamics. Nucl. Phys. B 206, 53–60 (1982). https://doi.org/10.1016/0550-3213(82)90488-6
R. Gastmans, The Helicity Method: a review. AIP Conf. Proc. 201, 58–72 (1990). https://doi.org/10.1063/1.39098
R. Britto, F. Cachazo, B. Feng, New recursion relations for tree amplitudes of gluons. Nucl. Phys. B 715, 499–522 (2005). https://doi.org/10.1016/j.nuclphysb.2005.02.030
R. Britto, F. Cachazo, B. Feng, E. Witten, Direct proof of tree-level recursion relation in Yang–Mills theory. Phys. Rev. Lett. 94, 181602 (2005). https://doi.org/10.1103/PhysRevLett.94.181602
M.D. Schwartz, Quantum Field Theory and the Standard Model (Cambridge University Press, Cambridge, 2014)
J.P. Ellis, TikZ-Feynman: Feynman diagrams with TikZ. Comput. Phys. Commun. 210, 103–123 (2017). https://doi.org/10.1016/j.cpc.2016.08.019
N. Nethercote, J. Seward, Valgrind: a framework for heavyweight dynamic binary instrumentation. In: Proceedings of the 28th ACM SIGPLAN Conference on Programming Language Design and Implementation, PLDI ’07, p. 89–100. Association for Computing Machinery, New York, NY, USA (2007). https://doi.org/10.1145/1250734.1250746
J. Weidendorfer, Sequential performance analysis with callgrind and kcachegrind, in Tools for High Performance Computing, ed. by M. Resch, R. Keller, V. Himmler, B. Krammer, A. Schulz (Springer, Berlin, 2008), pp. 93–113
D.E. Knuth, The State of the Art of Computer Programming (1976)
S. Weinzierl, Introduction to Monte Carlo methods (2000)
G.P. Lepage, Adaptive Multidimensional Integration: VEGAS Enhanced (2020)
W.H. Press, G.R. Farrar, Recursive stratified sampling for multidimensional Monte Carlo integration (1989)
M. Rauch, VBFNLO. Slides at the AQGC meeting (Dresden, 2013)
J. Baglio et al., Release Note – VBFNLO 2.7.0 (2014)
E. Byckling, K. Kajantie, Particle Kinematics: (Chapters I-VI, X) (University of Jyvaskyla, Jyvaskyla, 1971)
F. Maltoni, G. Ridolfi, M. Ubiali, b-initiated processes at the LHC: a reappraisal. JHEP 07, 022 (2012). https://doi.org/10.1007/JHEP04(2013)095. (Erratum: JHEP 04, 095 (2013))
K. Hagiwara, J. Kanzaki, K. Mawatari, QED and QCD helicity amplitudes in parton-shower gauge. Eur. Phys. J. C 80(6), 584 (2020). https://doi.org/10.1140/epjc/s10052-020-8154-9
S. Catani, F. Krauss, R. Kuhn, B. Webber, QCD matrix elements + parton showers. JHEP 11, 063 (2001). https://doi.org/10.1088/1126-6708/2001/11/063
V. Hirschi, O. Mattelaer, Automated event generation for loop-induced processes. JHEP 10, 146 (2015). https://doi.org/10.1007/JHEP10(2015)146
J. Alwall, S. de Visscher, F. Maltoni, QCD radiation in the production of heavy colored particles at the LHC. JHEP 02, 017 (2009). https://doi.org/10.1088/1126-6708/2009/02/017
F. Caravaglios, M.L. Mangano, M. Moretti, R. Pittau, A New approach to multijet calculations in hadron collisions. Nucl. Phys. B 539, 215–232 (1999). https://doi.org/10.1016/S0550-3213(98)00739-1
J. Reinders, B. Ashbaugh, J. Brodman, M. Kinsner, J. Pennycook, X. Tian, Vectors, in Data Parallel C++, pp. 259–276. Apress (2020). https://doi.org/10.1007/978-1-4842-5574-2_11
D.B. Franzosi, O. Mattelaer, R. Ruiz, S. Shil, Automated predictions from polarized matrix elements. JHEP 04, 082 (2020). https://doi.org/10.1007/JHEP04(2020)082
N.D. Christensen, P. de Aquino, N. Deutschmann, C. Duhr, B. Fuks, C. Garcia-Cely, O. Mattelaer, K. Mawatari, B. Oexl, Y. Takaesu, Simulating spin-\( \frac{3}{2}\) particles at colliders. Eur. Phys. J. C 73(10), 2580 (2013). https://doi.org/10.1140/epjc/s10052-013-2580-x
C. Degrande, C. Duhr, B. Fuks, D. Grellscheid, O. Mattelaer, T. Reiter, UFO: the universal FeynRules output. Comput. Phys. Commun. 183, 1201–1214 (2012). https://doi.org/10.1016/j.cpc.2012.01.022
Acknowledgements
The authors would like to thank Fabio Maltoni, Mike Seymour, Richard Ruiz, Luca Mantani, Andrew Lifson, Andrea Valassi, Stefan Roiser and all MG5aMC authors (past and present) for useful discussions. We would also like to thank the Université catholique de Louvain staff for working around the limitations imposed by the Covid-19 pandemic. This work has received funding from the European Union’s Horizon 2020 research and innovation programme as part of the Marie Skłodowska-Curie Innovative Training Network MCnetITN3 (grant agreement no. 722104). This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement no. 824093. Computational resources have been provided by the Consortium des Équipements de Calcul Intensif (CÉCI), funded by the Fonds de la Recherche Scientifique de Belgique (F.R.S.-FNRS) under Grant No. 2.5020.11 and by the Walloon Region.
Appendices
Appendix A: Manual
When generating an LO process within MG5aMC, the optimisations described in this paper are activated by default (since version 2.9.0). If for some reason one wants to de-activate some optimisation or change some internal parameter, this can be done via a number of parameters that can be included in the run_card.dat, the main configuration file. Most of these parameters are not present by default in that card, since we consider them advanced parameters; it is enough to add them to the file when needed. The only parameter present by default is the one allowing for the choice of multi-channel strategy.
In Table 8, we briefly present the various parameters that can be modified and their effect. One should note that the parameter “hel_recycling” is a switch that forbids the use of helicity recycling; when it is set to False, the other parameters related to helicity recycling (prefixed with “hel_”) have no effect.
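As an illustration, helicity recycling could be disabled at run time by adding a line of the following form to the run_card.dat (the comment text is ours; only the parameter name “hel_recycling” is taken from the text above):

```text
 False = hel_recycling  ! switch off helicity recycling
```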
It is also possible to ask MG5aMC directly to generate code without any helicity recycling. This allows for the generation of code closer to the previous version and avoids spending time generating the new Aloha routines. To do this, one needs to modify the “output” command and add the flag “--hel_recycling=False”.
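A minimal illustration of such a session (the process and the output directory “MYPROC” are placeholders; only the flag itself is taken from the text above):

```text
generate p p > e+ e-
output MYPROC --hel_recycling=False
```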
Another new optional flag for the “output” command allows for control of the ordering of the variables of integration corresponding to the invariant masses of the t-channel propagators.
The possible values and their associated meanings are described below. A concrete example for the ordering of the Feynman diagram represented in Fig. 4 is also given in each case to ease comparison.
- 0 [default] Automatically decide based on the diagram.
- 1 Always use the one-sided ordering, integrating the t-channel invariant masses from the bottom of the Feynman diagram to the top (the initial-state particle with positive \(p_z\) is conventionally displayed at the top of the diagram). For the example, this corresponds to \(\int dt_1 \int dt_2 \int dt_3\).
- 2 Always use the one-sided ordering, integrating the t-channel invariant masses from the top of the Feynman diagram to the bottom. This was the only option in older versions of the code. For the example, this corresponds to \(\int dt_3 \int dt_2 \int dt_1\).
- −1 Go from external to internal t-channel variables, starting with the invariant mass at the bottom of the Feynman diagram, then the one at the top, then the second one from the bottom, followed by the second one from the top, and so on until depletion. For the example, this corresponds to \(\int dt_2 \int dt_1 \int dt_3\). For an example with four t-channel propagators, this corresponds to \(\int dt_2 \int dt_3 \int dt_1 \int dt_4\).
- −2 Same as the previous ordering but starting from the top-most invariant mass. For the example, this corresponds to \(\int dt_2 \int dt_3 \int dt_1\) and, for the four t-channel propagator case, to \(\int dt_3 \int dt_2 \int dt_4 \int dt_1\).
A final flag for the “output” command allows for the deactivation of the common sub-expression reduction algorithm for the colour-factor part of the code (which is activated by default).
In some cases this flag can be relevant, since the upfront cost of such a reduction can be large (here five minutes for the example) while the run-time gain (around 8%) might not always be enough to justify it. In general, both the upfront cost and the gain are barely noticeable. Note that this flag can also be used for the Fortran standalone output.
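The principle of the reduction can be sketched with a minimal common-sub-expression pass over tuple-encoded expressions (a toy, far simpler than the algorithm applied to the colour factors):

```python
def cse(exprs):
    """Minimal common-sub-expression reduction over tuple-encoded
    expressions (op, left, right) with string leaves.  Returns
    (assignments, rewritten) where repeated sub-trees are hoisted into
    fresh names x0, x1, ...  (toy sketch, not MG5aMC's implementation)."""
    counts = {}

    def count(e):
        if isinstance(e, tuple):
            counts[e] = counts.get(e, 0) + 1
            count(e[1]); count(e[2])

    for e in exprs:
        count(e)

    names, assignments = {}, []

    def rewrite(e):
        if not isinstance(e, tuple):
            return e
        body = (e[0], rewrite(e[1]), rewrite(e[2]))
        if counts[e] > 1:              # shared sub-tree: compute it once
            if e not in names:
                names[e] = f"x{len(names)}"
                assignments.append((names[e], body))
            return names[e]
        return body

    return assignments, [rewrite(e) for e in exprs]

# (a + b) appears in both expressions and is hoisted into x0
exprs = [('*', ('+', 'a', 'b'), 'c'), ('*', ('+', 'a', 'b'), 'd')]
assigns, reduced = cse(exprs)
```

Applied to the colour-matrix code, the same idea trades a one-time analysis cost at output time for fewer arithmetic operations at run time.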
Appendix B: Extension of Aloha
Aloha [30] is a program shipped with MG5aMC which automatically computes the set of helicity amplitudes needed for a given matrix-element computation (like Eq. 16). After the generation of the Feynman diagrams, MG5aMC requests Aloha to generate the HELAS [28] functions needed for the associated computation.
In the context of helicity recycling, a new type of helicity routine has been introduced (Eqs. (20)–(22)). Contrary to the other types of routines, MG5aMC does not know at generation time which of those functions will actually be used. Consequently, MG5aMC requests Aloha to generate all possible choices (in general three routines) such that any strategy can be picked when relevant.
The implementation strategy is to ask Aloha to generate a standard internal wave-function routine but with a custom propagator. This was possible thanks to a previous extension of Aloha adding support for custom propagators [55, 56]. The definitions for such propagators are
The reason for the presence of a metric term in the vector propagator is that it allows us to avoid including the metric in the final scalar product, yielding code that is easier for the compiler to optimise (opening the possibility of using SIMD instructions). This can be critical, since a large part of the computation is spent evaluating such simple scalar products (\(\approx \) 20% for \(g g \rightarrow t{\bar{t}} g g\)).
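The point can be made concrete with a four-vector contraction (illustrative numbers): folding the metric signs into the propagator output once turns \(a^\mu \eta_{\mu\nu} b^\nu\) into a plain dot product, which vectorises trivially:

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])   # Minkowski metric (+,-,-,-)
a = np.array([1.0, 2.0, 3.0, 4.0])
b = np.array([5.0, 6.0, 7.0, 8.0])

with_metric = a @ eta @ b                # explicit metric in the contraction
a_folded = a * np.diag(eta)              # signs absorbed into "a" once
no_metric = a_folded @ b                 # plain dot product, SIMD-friendly

assert np.isclose(with_metric, no_metric)
```

In the generated Fortran this sign-folding happens inside the propagator routine, so every subsequent scalar product is a straight sum of products.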
Concerning the Aloha naming-scheme convention, such a routine will have the suffix “P1N”. So for the following Lorentz structure (which corresponds to a \(\gamma _\mu \) Lorentz structure [57]):
the three new expressions (Eqs. (20)–(22)) will have the following names/definitions:
Mattelaer, O., Ostrolenk, K. Speeding up MadGraph5_aMC@NLO. Eur. Phys. J. C 81, 435 (2021). https://doi.org/10.1140/epjc/s10052-021-09204-7