1 Introduction

Parton showers form an integral part of the event generators that are commonly used to compare data from collider experiments with theory [1–3]. The Sudakov veto algorithm is used in the procedure of generating the subsequent emissions that make up the shower. It facilitates the resummation of logarithmic contributions to all orders in the coupling constant in a Monte Carlo framework, thereby producing realistic final states. A positive ordering variable (scale) t is typically evolved down from an initial scale u, generating ordered branchings of partons. The scale of the next branching is selected according to a probability distribution of the form

$$\begin{aligned} E(t;u)=p(t)\Delta (t,u), \end{aligned}$$
(1)

where

$$\begin{aligned} \Delta (t,u) \equiv \exp \left( -\int _{t}^{u}p\left( \tau \right) d\tau \right) . \end{aligned}$$
(2)

The function p(t) is the branching kernel. The function \(\Delta (t,u)\) is known as the Sudakov form factor. It represents the probability of no emission occurring between two scales. In a Monte Carlo setting, scales must be sampled from Eq. (1). To do that, the inverse of the Sudakov form factor must be computed. Unfortunately, p(t) is typically not simple enough for this inverse to be analytically calculable. Therefore, the Sudakov veto algorithm is used. In this paper, we will present a thorough analysis of this algorithm. In a practical setting, Eq. (1) has to be extended in several ways, one of which is the competition between branching channels. We will analyze the veto algorithm for these extensions, and we will in particular provide multiple algorithms to handle competition. Among these algorithms are those used currently by event generators, and some alternatives which, although seemingly different, will be shown to be equivalent. By implementing them in an antenna parton shower much like [4–6], we test their performance and show that the alternative algorithms are much faster.

This paper is organized as follows. In Sect. 2, we will first set up a formalism to analyze Monte Carlo algorithms in general. This formalism is then used to show the validity of the Sudakov veto algorithm in Sect. 3. Next, in Sect. 4, the algorithm is extended to include a cutoff scale, a second variable and competition between branching channels. We will then prove the equivalence of several different algorithms for competition. In Sect. 5, the performance of these algorithms is tested by implementing them in an actual parton shower.

2 The unitary algorithm formalism

A useful approach to the analysis of algorithms can be formulated in terms of integration results. We refer to this as the formalism of unitary algorithms. The idea is that these integration results can be translated, on the one hand, into probabilistic statements and, on the other hand, into readily implementable pseudocode. Let g(x) be a probability density. Then, the formula

$$\begin{aligned} 1 = \int \;g(x)\;dx \end{aligned}$$
(3)

on the one hand reads ‘we have an algorithm to generate random numbers according to the distribution \(g(x)\)’, and, on the other hand, corresponds to the pseudocode statement

$$\begin{aligned} x \leftarrow g \end{aligned}$$
(4)

which says that the number x is to be obtained from the algorithm delivering the distribution g. As a simple example, the statement

$$\begin{aligned} 1 = \int _0^1\;dx \end{aligned}$$
(5)

implies that we have available an algorithm that delivers random numbers x, uniformly distributed with the density \(\theta (0<x<1)\), where we have defined the logical step function

$$\begin{aligned} \theta (S) = \left\{ \begin{array}{ll}1 &{} \text { if the statement }S\text { is true},\\ 0 &{} \text { if the statement }S\text { is false}.\end{array}\right. \end{aligned}$$
(6)

And indeed, this just says ‘we generate a random number uniformly distributed between 0 and 1’, using the pseudorandom number generator of choice. In fact, to shorten notation later on, we will denote any random number generated according to Eq. (5) by \(\rho \). A second ingredient of the formalism is the assignment operation

$$\begin{aligned} 1 = \int dy\;\delta (y - h(x)), \end{aligned}$$
(7)

which is equivalent to the pseudocode statement

$$\begin{aligned} y \leftarrow h(x). \end{aligned}$$
(8)

We shall of course use the standard result

$$\begin{aligned} \delta (y - h(x)) = \sum _j\;\frac{1}{|h'(x_j)|}\,\delta (x-x_j), \end{aligned}$$
(9)

where the sum runs over the roots \(x_j\) of \(h(x)=y\) (all assumed to be single). It is to be noted here that the integral over y runs over all real values, but if the range of h is restricted to \(h_0 \le h(x) \le h_1\), then we automatically have the corresponding bounds on y.

As a simple example, let us imagine that the inverse of P(t), the primitive function of p(t) from Eq. (1), is available. The pseudocode to generate values of t according to Eq. (1) is:

$$\begin{aligned} t \leftarrow P^{-1}(\log (\rho )+P\left( u\right) ), \end{aligned}$$
(10)

where \(\rho \) here and in the following comes from an (idealized) source of iid random numbers uniform in (0, 1]. We analyze Eq. (10):

$$\begin{aligned} 1&= \int _{0}^{1}d\rho \int dt\,\delta (t-P^{-1}(\log (\rho )+P(u))) \nonumber \\&= \int _{0}^{1}d\rho \int dt\,\delta (P(t)-\log (\rho )-P(u))\,p(t) \nonumber \\&= \int _{0}^{1}d\rho \int dt\,\delta (\rho -e^{P(t)-P(u)})\,e^{P(t)-P(u)}\,p(t) \nonumber \\&= \int _{0}^{u}dt\,p(t)\,e^{-\int _{t}^{u}p(\tau )\,d\tau }. \end{aligned}$$
(11)

So that ‘we have an algorithm to generate t according to Eq. (1)’, where the algorithm is of course given by Eq. (10).
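As a concrete illustration of Eq. (10), consider the kernel \(p(t)=c/t\) (our choice for illustration, not taken from the paper), for which \(P(t)=c\ln t\) is trivially invertible. A minimal Python sketch then reads:

```python
import random

def sample_scale(u, c):
    """Sample t from E(t;u) = p(t) exp(-int_t^u p(tau) dtau) for p(t) = c/t,
    via Eq. (10): t = P^{-1}(log(rho) + P(u)) with P(t) = c*log(t)."""
    rho = random.random()                 # uniform in [0, 1)
    # P^{-1}(y) = exp(y/c), so t = exp((log(rho) + c*log(u))/c) = u * rho**(1/c)
    return u * rho ** (1.0 / c)

# quick check: for u = 1, c = 2 the density is 2t on (0, 1), with mean 2/3
samples = [sample_scale(u=1.0, c=2.0) for _ in range(100_000)]
print(sum(samples) / len(samples))
```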

A variant of the formalism is encountered in the rejection algorithm, which is already very close to the Sudakov veto algorithm. Let g(x) be a probability density that we can generate, f(x) a non-negative function, and c a number such that \(c\,g(x)\ge f(x)\) over the support of f(x). The rejection algorithm then reads.

Algorithm 1: The rejection algorithm (pseudocode figure).
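The pseudocode figure is not reproduced here; the following Python sketch implements the same loop under simple illustrative assumptions (g is taken to be the uniform density on [0, 1), while f and c, with \(c\,g(x)\ge f(x)\), are supplied by the caller):

```python
import random

def rejection_sample(f, c):
    """Rejection algorithm: draw y from g (here uniform on [0, 1)),
    accept with probability f(y) / (c * g(y)); otherwise start over."""
    while True:
        y = random.random()      # y <- g, with g(y) = 1 on [0, 1)
        rho = random.random()    # rho <- uniform
        if rho <= f(y) / c:      # f(y) / (c * g(y)) with g(y) = 1
            return y

# example: sample from K(x) proportional to f(x) = x**2 on [0, 1); mean is 3/4
xs = [rejection_sample(lambda x: x * x, c=1.0) for _ in range(100_000)]
print(sum(xs) / len(xs))
```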

Let K(x) be the resulting density. We can then write

$$\begin{aligned} K(x)&= \int dy\;g(y)\;\int _0^1 d\rho \;\bigg [ \theta \left( \rho \le \frac{f(y)}{c\,g(y)}\right) \,\delta (x-y) \nonumber \\&\quad + \theta \left( \rho > \frac{f(y)}{c\,g(y)}\right) \,K(x)\bigg ] \nonumber \\&= \int dy\;g(y)\;\left[ \frac{f(y)}{c\,g(y)}\,\delta (x-y) + \left( 1- \frac{f(y)}{c\,g(y)}\right) \,K(x)\right] \nonumber \\&= \int dy\;\left[ \frac{f(y)}{c}\, \delta (x-y) + g(y)\,K(x) - \frac{f(y)}{c}\,K(x)\right] \nonumber \\&= \frac{1}{c}f(x) + K(x) - \frac{1}{c}\int dy\;f(y)\,K(x), \end{aligned}$$
(12)

from which we see that K(x) is the normalized probability density proportional to f(x):

$$\begin{aligned} K(x) = \frac{f(x)}{\int dy f(y)}. \end{aligned}$$
(13)

Note how the loop is embodied by the reappearance of K(x) on the right-hand side in the first line of Eq. (12). With these few basic ingredients the result of any algorithm (provided it terminates with unit probability) can be reduced to the elimination of Dirac delta functions, and we shall employ these ideas in what follows.

3 Analyzing the Sudakov veto algorithm

We now present the Sudakov veto algorithm and analyze it using the techniques of the previous section. We first establish that Eq. (1) is normalized if P(t), the primitive function of p(t), goes to \(-\infty \) as \(t\rightarrow 0\):

$$\begin{aligned} \int _{0}^{u}E\left( t;u\right) dt=1-\exp \left( P\left( 0\right) -P\left( u\right) \right) . \end{aligned}$$
(14)

The Sudakov veto algorithm relies on the existence of an overestimate function \(q(t) \ge p(t)\) which does have an invertible Sudakov factor. The algorithm is given below in pseudocode.

Algorithm 2: The Sudakov veto algorithm (pseudocode figure).
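As an illustration (our choice of overestimate, not the paper's), take \(q(t)=c/t\) with \(Q(t)=c\ln t\), so that trial scales can be generated as in Eq. (10). A Python sketch of the loop is:

```python
import random

def sudakov_veto(p, u, c):
    """Sudakov veto algorithm (cf. Eq. (15)): evolve t downward using the
    illustrative overestimate q(t) = c/t, and accept a trial scale with
    probability p(t)/q(t)."""
    t = u
    while True:
        # trial scale from q: t <- Q^{-1}(log(rho) + Q(t)), i.e. t <- t * rho**(1/c)
        t *= random.random() ** (1.0 / c)
        if random.random() < p(t) / (c / t):   # veto step: accept with p(t)/q(t)
            return t

# example: p(t) = 1/t with overestimate q(t) = 2/t, starting from u = 1;
# the target density E(t;1) = 1 on (0, 1), so the sample mean should approach 1/2
ts = [sudakov_veto(lambda t: 1.0 / t, u=1.0, c=2.0) for _ in range(100_000)]
print(sum(ts) / len(ts))
```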

It was shown in the previous section that the first step in the loop generates values of t distributed according to Eq. (1) where the kernel is q(t) instead of p(t), and the scale u is set to the previous value of t. Thus, the value of t is evolved downward at every step of the loop, which is the crucial difference with the rejection algorithm of Eq. (12). There, subsequent values for t would be generated in the same way every time. The if-statement represents the veto step. A scale is accepted with probability p(t)/q(t), at which point the algorithm terminates. We now convert the algorithm to unitary language as we did before in Eq. (12) for the rejection algorithm.

$$\begin{aligned} E(t;u)&= \int _{0}^{u}d\tau \,q\left( \tau \right) e^{Q\left( \tau \right) -Q\left( u\right) } \nonumber \\&\quad \times \int _{0}^{1}d\rho \bigg [ \theta \left( \rho <\frac{p(\tau )}{q(\tau )}\right) \delta \left( \tau -t\right) \nonumber \\&\quad + \theta \left( \rho >\frac{p(\tau )}{q(\tau )}\right) E(t;\tau ) \bigg ]. \end{aligned}$$
(15)

After generating a trial scale \(\tau \), the random number \(\rho \) and the step functions guide the algorithm to either accept the generated scale, or to start over using \(\tau \) as the new starting point. Next, the integral over \(\rho \) is worked out.

$$\begin{aligned} e^{Q(u)}E(t;u)&=\int _{0}^{u}d\tau \, e^{Q(\tau )}[p(\tau )\delta (t-\tau ) \nonumber \\&\quad +(q(\tau )-p(\tau ))E(t;\tau )]. \end{aligned}$$
(16)

Taking the derivative with respect to u, we find the following differential equation:

$$\begin{aligned} \frac{\partial }{\partial u}E(t;u)=p(u)\delta (t-u)-p(u)E(t;u). \end{aligned}$$
(17)

It is solved by

$$\begin{aligned} E(t;u)=p(t)\exp \left( -\int _{t}^{u}dx\,p(x)\right) \theta \left( 0<t<u\right) , \end{aligned}$$
(18)

which is Eq. (1). It is, however, not the most general solution to Eq. (17). We will consider this issue more carefully in the next section.

4 Extending the algorithm

Next, we consider the Sudakov veto algorithm in a more practical setting. The algorithm needs to be extended in several ways to be applicable in a real parton shower. They are:

  • An infrared cutoff \(\mu \) has to be introduced. This cutoff is required in QCD to avoid the nonperturbative regime. In event generators, the parton shower is evolved to this cutoff scale, after which the results are fed to a hadronization model. The consequence is that the Sudakov factor will not equal zero at the lower boundary of the scale integral. Therefore Eq. (1) is no longer normalized to one and is thus not a probability distribution.

  • The scale variable t is not enough to parameterize the entire branching phase space. An additional variable z has to be introduced. In traditional parton showers, this parameter is the energy fraction carried by a newly created parton. However, in the more modern dipole or antenna showers, it is just a variable that parameterizes the factorized phase space. The boundaries of the branching phase space translate to scale-dependent boundaries on z.

  • The algorithm has to account for emissions from multiple channels. These channels can originate from either the presence of multiple partons or dipoles, or from multiple branching modes.

We now address these issues separately before incorporating them into a single algorithm.

4.1 Introducing a cutoff

In a realistic parton shower, the values of the scale t are not allowed to reach zero. In the case of QCD, a cutoff \(\mu \) is set at a value of about 1 GeV, below which a perturbative approach is no longer valid. Equation (1) now no longer represents a probability distribution. The same problem would occur if the primitive of the branching kernel, P(t), did not diverge for vanishing t, as is for instance the case for kernels of massive particles. The following algorithm, due to [7], allows for the introduction of a cutoff and deals with non-diverging P(t) simultaneously. The algorithm below first shows how to generate trial values for t.

Algorithm 3: Generation of a trial scale in the presence of a cutoff (pseudocode figure).
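A sketch of this trial-scale generator, again for the illustrative overestimate \(q(t)=c/t\) with \(Q(t)=c\ln t\) (our choice), could look as follows; here \(\rho _{c}=\exp (Q(\mu )-Q(t_0))\) is the probability that the trial evolution reaches the cutoff:

```python
import random

def trial_scale(t0, mu, c):
    """Generate a trial scale below t0 for q(t) = c/t (cf. Eq. (19)).
    Returns mu itself when the trial evolution reaches the cutoff."""
    rho = random.random()
    rho_c = (mu / t0) ** c            # exp(Q(mu) - Q(t0)) for Q(t) = c*log(t)
    if rho <= rho_c:
        return mu                     # no trial branching above the cutoff
    return t0 * rho ** (1.0 / c)      # Q^{-1}(log(rho) + Q(t0)), always above mu
```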

We analyze this algorithm to find what probability distribution it represents.

$$\begin{aligned} \bar{E}(t;\mu ,u)&=\int _{0}^{1}d\rho \bigg [\theta \left( \rho \le \rho _{c}\right) \delta \left( t-\mu \right) \nonumber \\&\quad +\theta \left( \rho >\rho _{c}\right) \delta \left( t-Q^{-1}\left( \log (\rho )+Q(t_0)\right) \right) \bigg ] \nonumber \\&=e^{Q(\mu )-Q(t_0)}\delta \left( t-\mu \right) \nonumber \\&\quad +q(t)e^{Q(t)-Q(t_0)}\theta (\mu<t<t_0). \end{aligned}$$
(19)

In the last step we used the fact that q(t) is a positive function, and therefore Q(t) is monotonically increasing. Compared with Eq. (11), Eq. (19) has an additional term that compensates for the contribution removed by the lower bound on the original probability distribution. The veto algorithm should reproduce this distribution for the branching kernel p(t).

Algorithm 4: The veto algorithm with a cutoff (pseudocode figure).
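A self-contained Python sketch of the full loop (still for the illustrative \(q(t)=c/t\); the kernel p with \(p(t)\le q(t)\) is supplied by the caller) might read:

```python
import random

def veto_with_cutoff(p, u, mu, c):
    """Veto algorithm with an infrared cutoff (cf. Eq. (20)), using the
    illustrative overestimate q(t) = c/t. Returns an accepted scale in
    (mu, u), or mu itself to signal that no branching occurred."""
    t = u
    while True:
        rho = random.random()
        if rho <= (mu / t) ** c:              # exp(Q(mu) - Q(t)): evolution ends
            return mu
        t *= rho ** (1.0 / c)                 # trial scale Q^{-1}(log(rho) + Q(t))
        if random.random() < p(t) / (c / t):  # veto step with probability p(t)/q(t)
            return t

# example: p(t) = 1/t overestimated by q(t) = 2/t, cutoff mu = 0.1
ts = [veto_with_cutoff(lambda t: 1.0 / t, u=1.0, mu=0.1, c=2.0) for _ in range(100_000)]
print(sum(1 for t in ts if t == 0.1) / len(ts))   # should approach exp(P(mu)-P(u)) = 0.1
```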

Writing it down in unitary language:

$$\begin{aligned} E(t;u)&=\int d\tau (e^{Q(\mu )-Q(u)}\delta (\tau -\mu ) \nonumber \\&\quad +q(\tau )e^{Q(\tau )-Q(u)}\theta (\mu<\tau<u)) \nonumber \\&\quad \times \Biggl \{ \theta \left( \tau =\mu \right) \delta \left( t-\mu \right) + \theta \left( \tau \ne \mu \right) \Biggr . \nonumber \\&\quad \times \Biggr .\int _{0}^{1}d\rho \bigg [\theta \left( \rho <\frac{p(\tau )}{q(\tau )}\right) \delta \left( t-\tau \right) \nonumber \\&\quad +\theta \left( \rho >\frac{p(\tau )}{q(\tau )}\right) E(t;\tau )\bigg ] \Biggr \}. \end{aligned}$$
(20)

Going through the same steps as before, we find

$$\begin{aligned} e^{Q(u)}E(t;u)&=e^{Q(\mu )}\delta \left( t-\mu \right) \nonumber \\&\quad +\int _{\mu }^{u}d\tau \, e^{Q(\tau )}[p(\tau )\delta (t-\tau ) \nonumber \\&\quad +(q(\tau )-p(\tau ))E(t;\tau )]. \end{aligned}$$
(21)

After taking the derivative with respect to u, the first term drops out and the \(\mu \)-dependence disappears from the second. Therefore, Eq. (17) is recovered. However, Eq. (18) is not the only solution to this differential equation. A more general solution is:

$$\begin{aligned} E(t;u) = e^{P(\sigma )-P(u)}\delta \left( t-\sigma \right) +p(t)e^{P(t)-P(u)}\theta (\sigma {<}t<u) \end{aligned}$$
(22)

for some scale \(\sigma < u\). To fix \(\sigma \), we require that E(t; u) reduces to a delta function distribution when \(u \rightarrow \mu \), which leads to \(\sigma = \mu \).

4.2 Introducing a second variable

The targeted distribution is now:

$$\begin{aligned} E(t,z;u)=p(t,z)\Delta (u,t) \end{aligned}$$
(23)

where

$$\begin{aligned} \Delta (u,t)=\exp \left( -\int _{t}^{u}d\tau \int _{z_{-}(\tau )}^{z_{+}(\tau )}d\zeta \,p(\tau ,\zeta )\right) \end{aligned}$$
(24)

which is normalized as

$$\begin{aligned} \int _{0}^{u}dt\int _{z_{-}(t)}^{z_{+}(t)}dz\,E(t,z;u)=1. \end{aligned}$$
(25)

We now need to produce pairs \((t, z)\) distributed according to \(E(t,z;u)\). A difficulty lies in the dependence of the range of z on the scale. In order to generate a value for t, the \(\zeta \) integral in the Sudakov factor is required, which itself depends on the scale. On the other hand, z cannot be generated first, since its boundaries depend on t.

To deal with this problem, an additional veto condition is introduced. We introduce a constant overestimate of the z-range as \(z_{-} \le z_{-}(t)\) and \(z_{+} \ge z_{+}(t)\). Additionally we require the overestimate function to be factorized as \(q(t,z)=r(t)s(z)\) where still \(q(t,z) \ge p(t,z)\). Then, we define

$$\begin{aligned} q(t) \equiv r(t)\int _{z_{-}}^{z_{+}}dz\,s(z) = r(t)\left( S(z_{+})-S(z_{-})\right) . \end{aligned}$$
(26)

The algorithm is given below.

Algorithm 5: The veto algorithm with a second variable (pseudocode figure).
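As before, a Python sketch may help to fix ideas. We choose (for illustration only) \(r(t)=c/t\), \(s(z)=1\) and constant overestimated boundaries \((z_{-},z_{+})=(0,1)\), so that \(q(t)=c/t\), \(S(z)=z\), and trial values of z are simply uniform:

```python
import random

def veto_two_variable(p, z_minus, z_plus, u, c):
    """Veto algorithm with a second variable (cf. Eq. (29)), for the
    illustrative factorized overestimate q(t, z) = (c/t) * 1 on the constant
    z-range (0, 1). z_minus(t) and z_plus(t) are the true phase-space boundaries."""
    t = u
    while True:
        t *= random.random() ** (1.0 / c)         # trial scale from q(t) = c/t
        z = random.random()                       # trial z from s(z) = 1 on (0, 1)
        if not (z_minus(t) < z < z_plus(t)):      # z outside the physical range: veto
            continue
        if random.random() < p(t, z) / (c / t):   # veto step with p(t,z)/q(t,z)
            return t, z

# example: p(t, z) = 1/t on the range z in (t/2, 1 - t/2), starting from u = 1
t, z = veto_two_variable(lambda t, z: 1.0 / t,
                         lambda t: t / 2, lambda t: 1 - t / 2, u=1.0, c=2.0)
```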

We first analyze the step of this algorithm that generates z.

$$\begin{aligned} 1&= \int _{0}^{1}d\rho _{2}\int dz\,\delta \left( z-S^{-1}\left[ \rho _{2}\left( S(z_{+})-S(z_{-})\right) +S(z_{-})\right] \right) \nonumber \\&= \int _{0}^{1}d\rho _{2}\int dz\,\delta \left( S(z)-\rho _{2}\left( S(z_{+})-S(z_{-})\right) -S(z_{-})\right) s(z) \nonumber \\&= \int _{z_{-}}^{z_{+}}dz\,\frac{s(z)}{S(z_{+})-S(z_{-})}. \end{aligned}$$
(27)

Thus, z is distributed according to s(z), normalized on the interval \((z_{-},z_{+})\). Introducing the notation

$$\begin{aligned} \theta ^{\tau }(\zeta ) \equiv \theta (z_{-}(\tau )< \zeta < z_{+}(\tau )), \end{aligned}$$
(28)

we now analyze Algorithm 5.

$$\begin{aligned} E\left( t,z;u\right)&= \int _{0}^{u}d\tau \,q(\tau )e^{Q(\tau )-Q(u)}\int _{z_{-}}^{z_{+}}d\zeta \frac{s(\zeta )}{S(z_{+})-S(z_{-})} \nonumber \\&\quad \times \int _{0}^{1}d\rho \biggl \{ \biggr . \left( 1-\theta ^{\tau }(\zeta )\right) E\left( t,z;\tau \right) \nonumber \\&\quad +\theta ^{\tau }(\zeta )\theta \left( \rho >\frac{p(\tau ,\zeta )}{q(\tau ,\zeta )}\right) E\left( t,z;\tau \right) \nonumber \\&\quad +\biggl .\theta ^{\tau }(\zeta )\theta \left( \rho <\frac{p(\tau ,\zeta )}{q(\tau ,\zeta )}\right) \delta \left( \tau -t\right) \delta \left( \zeta -z\right) \biggr \}. \end{aligned}$$
(29)

Evaluating the integrals and taking the derivative with respect to u leads to:

$$\begin{aligned} \frac{\partial }{\partial u}E\left( t,z;u\right)= & {} p(u,z)\delta (u-t)\theta ^{u}(z)\nonumber \\&-\int _{z_{-}(u)}^{z_{+}(u)}d\zeta \,p(u,\zeta )E\left( t,z;u\right) , \end{aligned}$$
(30)

which is solved by Eq. (23).

4.3 Competing channels

Let us assume there are n branching channels, each characterized by a branching kernel \(p_{i}(t)\). The density E(t; u) now contains a Sudakov factor representing the no-branching probability for all channels, which is just the product of the individual Sudakov factors. The probability of branching at some scale is the sum of the kernels. Introducing the notation

$$\begin{aligned} \widetilde{f}(t) \equiv \sum _{i=1}^{n}f_{i}(t) \end{aligned}$$
(31)

for any set of n functions, this leads to the probability distribution

$$\begin{aligned} E(t;u)=\widetilde{p}(t)\Delta (t,u) \end{aligned}$$
(32)

where

$$\begin{aligned} \Delta (t,u)=\exp \left( -\int _{t}^{u}\widetilde{p}\left( \tau \right) d\tau \right) . \end{aligned}$$
(33)

This distribution can be produced by generating multiple scales and selecting the highest. This can be shown using the following result:

$$\begin{aligned} 1&=\int _{0}^{u}dt\left[ \prod _{i=1}^{n}\int _{0}^{u}d\tau _{i}f_{i}(\tau _{i})\exp (F_{i}(\tau _{i})-F_{i}(u))\right] \nonumber \\&\quad \times \sum _{j=1}^{n}\theta (\max (\tau _{j}))\, \delta \left( t-\tau _{j}\right) \nonumber \\&=\int _{0}^{u}dt\sum _{i=1}^{n}\left[ \prod _{j\ne i}\int _{0}^{\tau _{i}}d\tau _{j}f_{j}(\tau _{j})\exp (F_{j}(\tau _{j})-F_{j}(u)) \right] \nonumber \\&\quad \times \int _{0}^{u}d\tau _{i}f_{i}(\tau _{i})\exp (F_{i}(\tau _{i})-F_{i}(u))\,\delta \left( t-\tau _{i}\right) \nonumber \\&=\int _{0}^{u}dt\sum _{i=1}^{n}f_{i}(t)\exp (F_{i}(t)-F_{i}(u)) \nonumber \\&\quad \times \left[ \prod _{j\ne i}\exp (F_{j}(t)-F_{j}(u))\right] \nonumber \\&=\int _{0}^{u}dt\,\widetilde{f}(t)\exp (\widetilde{F}(t)-\widetilde{F}(u)), \end{aligned}$$
(34)

where we used the notation

$$\begin{aligned} \theta (\max (\tau _{j})) \equiv \prod _{k\ne j} \theta \left( \tau _{j}>\tau _{k}\right) , \end{aligned}$$
(35)

which is a step function selecting the highest of all \(\tau \). The functions \(f_i\) can be either \(p_i\) or \(q_i\). In the first case, the veto algorithm for a single channel can be used to produce the densities that appear in the first line of Eq. (34). In the second case, the highest of the trial scales is selected and subsequently the veto step is applied using the kernel of the selected channel. Both procedures result in Eq. (32).
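As a check of Eq. (34) with \(f_i=p_i\), the following Python sketch generates one scale per channel, for the analytically invertible toy kernels \(p_i(t)=c_i/t\) (our choice), and keeps the highest; the winner is then distributed according to Eq. (32) with \(\widetilde{p}(t)=(\sum _i c_i)/t\):

```python
import random

def highest_of_all_channels(cs, u):
    """Competition via Eq. (34) with f_i = p_i: sample a scale for every channel
    (p_i(t) = c_i/t, sampled by inverse transform as t_i = u * rho**(1/c_i))
    and return the highest scale together with the winning channel index."""
    scales = [u * random.random() ** (1.0 / c) for c in cs]
    winner = max(range(len(cs)), key=lambda i: scales[i])
    return scales[winner], winner

# the returned scale follows p~(t) exp(-int_t^u p~), with p~(t) = (sum of c_i)/t
t, i = highest_of_all_channels([1.0, 2.0, 0.5], u=1.0)
```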

Next, we present a very different algorithm that also produces this density.

Algorithm 6: The veto algorithm with competing channels (pseudocode figure).
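A Python sketch of this algorithm, for illustrative overestimates \(q_i(t)=c_i/t\) so that \(\widetilde{q}(t)=(\sum _i c_i)/t\) and the channel-selection probabilities \(q_i(\tau )/\widetilde{q}(\tau )=c_i/\sum _j c_j\) are scale independent, is:

```python
import random

def generate_select(ps, cs, u):
    """Sketch of Algorithm 6 (cf. Eq. (36)): generate a trial scale from the
    summed overestimate q~(t) = (sum of c_i)/t, select a channel i with
    probability q_i/q~, then apply the veto step for that channel only."""
    c_tot = sum(cs)
    t = u
    while True:
        t *= random.random() ** (1.0 / c_tot)               # trial scale from q~(t)
        i = random.choices(range(len(cs)), weights=cs)[0]   # channel selection
        if random.random() < ps[i](t) / (cs[i] / t):        # veto with p_i(t)/q_i(t)
            return t, i

# example: two channels p_1(t) = 0.5/t, p_2(t) = 1/t, overestimated by 1/t and 2/t
t, i = generate_select([lambda t: 0.5 / t, lambda t: 1.0 / t], [1.0, 2.0], u=1.0)
```

Because the \(q_i\) in this sketch share the same t-dependence, the selection weights are constants; this is the situation described below for Algorithm 6.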

We analyze this algorithm to show that it also produces Eq. (32):

$$\begin{aligned} E(t;u)&=\int _{0}^{u}d\tau \,\widetilde{q}(\tau )e^{\widetilde{Q}(\tau )-\widetilde{Q}(u)} \nonumber \\&\quad \times \int _{0}^{1}d\rho _{1}\sum _{i=1}^{n}\theta \left( \frac{\sum _{j=0}^{i-1}q_{j}(\tau )}{\widetilde{q}(\tau )}\right. \nonumber \\&\quad \left.<\rho _{1}<\frac{\sum _{j=0}^{i}q_{j}(\tau )}{\widetilde{q}(\tau )}\right) \nonumber \\&\quad \times \int _{0}^{1}d\rho _{2}\bigg [\theta \left( \rho _{2}<\frac{p_{i}(\tau )}{q_{i}(\tau )}\right) \delta \left( t-\tau \right) \nonumber \\&\quad +\theta \left( \rho _{2}>\frac{p_{i}(\tau )}{q_{i}(\tau )}\right) E(t;\tau )\bigg ], \end{aligned}$$
(36)

where \(q_{0}(t) \equiv 0\). We go through the usual steps, noting that after doing the \(\rho _{1}\) integral, the new sum over step functions yields terms \(q_{i}(\tau )/\widetilde{q}(\tau )\) representing the probabilities to select the corresponding channels. The differential equation becomes:

$$\begin{aligned} \frac{\partial }{\partial u}E(t;u)=\widetilde{p}(u)\delta (t-u)-\widetilde{p}(u)E(t;u), \end{aligned}$$
(37)

which is solved by Eq. (32).

Algorithm 6 requires the generation of trial scales using \(\widetilde{q}(t)\) as the overestimated branching kernel. In practice, this is often not much harder than generating trial scales for individual channels, since the kernels \(q_{i}(t)\) can usually be chosen to have the same t-dependence. In such a case, the channel selection step in Algorithm 6 does not even require the evaluation of the kernels at the trial scale anymore. We note that Algorithm 6 can still be used in more complicated situations by using the procedure outlined in Eq. (34) to split \(\widetilde{q}(t)\) up into groups of similar channels. In the next section, we incorporate the extensions discussed here into a full, practical veto algorithm. Since there are multiple ways to handle competition, we test these algorithms for their computing times.

5 Testing the algorithms

We now combine all the pieces discussed in the previous section into a single algorithm. Here, we give a description of the full algorithms, which all handle competition differently. A concrete statement of the algorithms can be found in the appendix. Additionally, the expression of every algorithm in unitary language is included. These equations can all be shown to be satisfied by:

$$\begin{aligned} E(t,z;u)&= \delta (t-\mu )\delta (z-z_0) \nonumber \\&\quad \times \exp \left( -\sum _{i=1}^{n}\int _{\mu }^{u} d\tau \int _{z_{i-}(\tau )}^{z_{i+}(\tau )}d\zeta \,p_i(\tau ,\zeta )\right) \nonumber \\&\quad +\sum _{i=1}^{n}p_i(t,z)\theta _i^t(z)\theta (\mu<t<u) \nonumber \\&\quad \times \exp \left( -\sum _{j=1}^{n}\int _{t}^{u} d\tau \int _{z_{j-}(\tau )}^{z_{j+}(\tau )}d\zeta \,p_j(\tau ,\zeta )\right) . \end{aligned}$$
(38)
  • Veto-Max: This algorithm handles competition using Eq. (34), where \(f_i(t,z) = p_i(t,z)\). That is, the veto algorithm is applied to every channel individually, then the highest of the generated scales is selected. This is the most common way of handling competition. It is usually cited in the literature as the competition algorithm [7, 8], and is used in most parton showers.

  • Max-Veto: This algorithm also uses Eq. (34), but with \(f_i(t,z) = q_i(t,z)\). That is, trial pairs \((t, z)\) are generated for every channel using the overestimate kernels. The highest of these scales is selected, to which the veto step is applied using the branching kernel of the selected channel. This algorithm is used in the Vincia parton shower [4, 5].

  • Generate-Select: This is the new algorithm described in Sect. 4.3. It generates trial scales \(\tau \) using the sum of the overestimate functions \(\widetilde{q}(t,z)\). The overestimate functions are required to have the same z-dependence. That way, a corresponding \(\zeta \) can be generated using boundaries that are overestimates for all channels. Next, a channel i is selected with probability \(q_i(\tau )/\widetilde{q}(\tau )\). Then, the veto step is applied to this channel.

  • Select-Generate: Under certain circumstances, a slight variation of the Generate-Select algorithm is possible. If we require all overestimate functions \(q_i(t,z)\) to have the same scale dependence, this dependence drops out of the selection probabilities. In that case, a channel can be selected before a scale is generated. As a consequence, the overestimate functions can have different dependence on z, and universal overestimates are no longer required.

We test these algorithms by implementing them in a relatively simple antenna shower very close to what is described in [4, 5]. This shower handles QCD radiation using an antenna scheme to include collinear and soft enhancements. It features exact \(2 \rightarrow 3\) kinematics for massive particles, but does not include any matching scheme and concerns only final state radiation. It is very basic compared with the parton showers of [1–3] or recent versions of the Vincia shower [6], including only the absolute necessities for a functional parton shower.

The running coupling is taken into account by an overestimate

$$\begin{aligned} \hat{\alpha }_s(t) = a\ln ^{-1}(bt) \end{aligned}$$
(39)

where a and b are chosen such that, at the starting scale and the cutoff scale, \(\hat{\alpha }_s(t)\) matches the real one-loop running \(\alpha _s(t)\), which includes the proper flavor thresholds. This overestimate is corrected by using \(\hat{\alpha }_s(t)\) for the overestimate kernels and \(\alpha _s(t)\) for the branching kernels.

The possible branchings for a QCD shower can be divided into two categories: emissions, where a quark or gluon sends out a new gluon, and splittings, where a gluon splits into a quark–antiquark pair. We use \(p_\perp \)-ordering for both, which allows for easy application of the Generate-Select and Select-Generate algorithms. The following overestimate kernels were used:

$$\begin{aligned}&q_{\text{ emit }}(t,z) = \frac{2a\,C_{A}}{4\pi \sqrt{\lambda (1,\frac{m_1^2}{s_{12}},\frac{m_2^2}{s_{12}})}} \frac{1}{z(1-z)} \frac{1}{t \ln (bt)},\end{aligned}$$
(40)
$$\begin{aligned}&q_{\text{ split }}(t,z) = \frac{2a\,n_{F}T_{R}}{4\pi \sqrt{\lambda (1,\frac{m_1^2}{s_{12}},\frac{m_2^2}{s_{12}})}} \frac{1}{z(1-z)} \frac{1}{t \ln (bt)}, \end{aligned}$$
(41)
where \(\lambda \) is the Källén function, \(m_1\) and \(m_2\) are the masses of the particles in the antenna and \(s_{12}\) is its invariant mass. Note that a factor \(n_F\) is included in the overestimate of the splitting kernel. It is there because Vincia uses a mix of the Max-Veto and the Generate-Select algorithms. If a gluon splitting is selected through the Max-Veto algorithm, a quark flavor is chosen at random, as is done by the Generate-Select algorithm. We use the antenna functions given in [5] for the splitting kernels. The code can be found in [9].

Table 1: The average multiplicities produced by the shower starting at \((7\,\text{ TeV })^2\) for all veto algorithms.

Fig. 1: The average CPU times required by the shower to produce events as a function of the number of available branching channels at termination.

We compare the performance of the algorithms described above on this shower. In the Veto-Max algorithm we have implemented the following shortcut. While running the single-channel veto algorithm on all available channels, the algorithm keeps track of the highest scale generated thus far. Then, if a scale lower than this highest scale is ever reached, the veto algorithm on the current channel can immediately be aborted. This trick is not available for the Max-Veto algorithm, because it performs the veto step after selecting the highest trial scale among all channels.

For the Select-Generate algorithm, the bottleneck is the channel selection step. It is complicated by the fact that the Källén function and the z integral in the overestimates are different for every antenna. We use stochastic roulette-wheel selection [10] for the selection step, which achieves \(\mathcal {O}(1)\) complexity. The Generate-Select algorithm assigns the same boundaries for the z integral for all channels, but retains differences in the Källén function. We move this difference to the veto step by using the lowest Källén function of all antennae for all channels, increasing the overestimation of the branching kernels. Then, for \(n_F=6\) and the standard values \(C_A = 3\) and \(T_R = 1/2\), all overestimate functions are the same, and the channel selection step is trivial. In this sense, the difference between the Generate-Select and the Select-Generate algorithms is a trade-off between easier selection of a channel and lower veto rates.
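For reference, a minimal sketch of selection by stochastic acceptance (the idea behind the roulette-wheel selection of [10], as we understand it) is given below; the expected number of trials is \(n\,w_{\max }/\sum _i w_i\), which is \(\mathcal {O}(1)\) when no single weight dominates:

```python
import random

def roulette_select(weights, w_max):
    """Select index i with probability weights[i] / sum(weights) by stochastic
    acceptance: draw a candidate uniformly, accept it with weights[i] / w_max,
    where w_max is an upper bound on all weights."""
    n = len(weights)
    while True:
        i = random.randrange(n)
        if random.random() < weights[i] / w_max:
            return i
```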

A remark is in order here. In the splitting \(g \rightarrow q\, \bar{q}\) the original colour structure is separated into two pieces which can be evolved independently. Since our interest here is in the speed of the various algorithms rather than the development of a fully realistic parton shower, we have not implemented this effect.

We produce 8 million events per algorithm. The initial scale is \((7\,\text{ TeV })^2\) and the cutoff scale is \((1\,\text{ GeV })^2\). These settings produce events with parton multiplicities of \(\mathcal {O}(100)\), which are typical at the LHC. To check the equivalence of the veto algorithms, we compute the average numbers of quarks and gluons generated per event. These numbers are very sensitive to small differences in distribution. Table 1 shows these averages for every algorithm.

Figure 1 shows the average amount of CPU time the shower requires to produce events, plotted as a function of the number of available branching channels as the shower terminates. This measure gives us a good idea of the performance of the algorithms in a practical context. The shape of the curves of the Veto-Max and the Max-Veto algorithms should not be heavily influenced by the specifics of the shower, since factors like branching kernel evaluation times and veto probabilities should be similar for different implementations. However, the relative performance of the Generate-Select and the Select-Generate algorithms does depend on the specific implementation. In this case, the algorithms perform similarly, but this may not be the case for other branching kernels. Either way, the Generate-Select and the Select-Generate algorithms perform much better than the Veto-Max and the Max-Veto algorithms.

6 Conclusion

The Sudakov veto algorithm forms an integral part of all modern parton shower programs. We describe a formalism that can be used to analyze the distributions that are produced by different versions of this algorithm. Using this method, we discuss various ways of handling competition. While seemingly different, our formal analysis shows that they produce the same distributions. The algorithms were tested using a simple antenna shower, which showed that the new algorithms are faster than the traditional algorithms currently used in most parton shower programs. This may be of considerable importance for higher-energy events or for the inclusion of more types of radiation.