# Fooling around with the Sudakov veto algorithm

## Abstract

The Sudakov veto algorithm for generating emission and no-emission probabilities in parton showers is revisited and some oversampling and reweighting techniques are suggested, mainly to improve statistics. Specifically we consider the generation of Sudakov form factors in algorithms of CKKW type for matrix element/parton shower merging, both at tree- and loop-level, and the generation of highly suppressed splittings inside standard parton showers.

### Keywords

Transverse Momentum · Parton Shower · Splitting Function · Sudakov Form Factor · Boost Factor

## 1 Introduction

When generating emissions in a parton shower (PS), the basic object is the probability for the splitting of a parton *i* into partons *j* and *k*, given by
\[
\varGamma_{i,jk}(t)=\int dz\,P_{i,jk}(z,t),
\]
where *t* is the shower evolution scale, *z* represents the energy sharing between *j* and *k*, and where we have integrated over azimuth angle in the Altarelli–Parisi splitting function *P*_{ i,jk }(*z*). Starting from some maximum scale *t*_{0} we then want to find the exclusive probability of the first emission, which we get from the inclusive splitting probability by multiplying with the probability that there is no emission before the first emission,
\[
\frac{d{\mathcal{P}}^{\mathrm{excl}}}{dt}=\varGamma(t)\,\varDelta(t_{0},t),
\]
where *Δ*(*t*_{0},*t*) is this no-emission probability, or the Sudakov form factor [1], given by
\[
\varDelta(t_{0},t)=\exp\biggl(-\int_{t}^{t_{0}}dt'\,\varGamma(t')\biggr).
\]

In principle the Sudakov form factor can be calculated analytically. However, often the integration region in the *z*-integral can be non-trivial, and most PS programs today prefer to calculate it numerically using the so-called Sudakov veto algorithm [2]. The trick here is to find a simple function which is everywhere larger than *P* _{ i,jk } and which is easy to integrate, and by systematically vetoing emissions generated according to this overestimated function, a correct no-emission probability is obtained.

The Sudakov veto algorithm (SVA) is normally used for purely probabilistic processes, but recently it has been generalized to also be used in cases where the function being exponentiated is not positive definite [3, 4], and also other improvements have been introduced to allow for more flexible generation [5, 6].

In this article we shall investigate other modifications of the Sudakov veto algorithm, where we try to increase the statistical precision in some special cases by oversampling techniques, but we will also briefly discuss the issue of negative contributions to splitting functions.

In Sect. 3 we will investigate the usage of the SVA in CKKW-like [7, 8] algorithms, where parton showers are matched with fixed-order inclusive matrix elements (MEs). Here, the SVA is used to make the inclusive MEs exclusive by multiplying them with no-emission probabilities taken from a parton shower. One problem with this procedure is that every event produced with the matrix element generator is given either the weight zero or one, which becomes very inefficient if the cutoff used in the ME generation is small. We will find that by introducing oversampling, a weight can be calculated which is never zero, but nevertheless will give the correct no-emission probability. In Sect. 3 we will also discuss an extension of the CKKW algorithm to include next-to-leading order (NLO) MEs [9], where the SVA is used to extract fixed orders of *α* _{s} from the parton shower to avoid double counting of corresponding terms in the NLO calculation.

Then, in Sect. 4 we will consider cases where a parton shower includes different competing processes, where some of them are very unlikely. This is the case in e.g. the Pythia parton shower, where photon emissions off quarks are included together with standard QCD splittings. Since *α* _{EM} is much smaller than *α* _{s} it is very time consuming to produce enough events containing hard photons to get reasonable statistics. We shall see that a naive oversampling of the photon emissions has unwanted effects on the total no-emission probability, and that a slightly more involved procedure is needed. The method presented is different from the one introduced by Höche et al. in [5], but is equally valid. It turns out that both these methods can be used to include negative terms in the splitting functions.

But first we shall revisit the derivation of the SVA, as we will use many of the steps from there when we investigate the different oversampling techniques.

## 2 The Sudakov veto algorithm

For the moment we ignore the dependence on *z*, and assume that the emissions are distributed in *t* according to the *z*-integrated density *Γ*(*t*). If *Γ*(*t*) can be integrated analytically, and if the primitive function, \(\check{{\varGamma}}\), has a simple inverse, it is easy to see that we can generate the *t*-value of the first emission by simply generating a random number, *R*, between zero and unity and obtain
\[
t=\check{\varGamma}^{-1}\bigl(\check{\varGamma}(t_{0})+\log R\bigr).
\]
Here we have assumed that *Γ*(*t*) is divergent for small *t*, such that *Δ*(*t*,0)=0, an assumption we will come back to below.

In most cases, however, the integration of *Γ* is not possible to do analytically, and even if it is, the inverse function may be non-trivial. This is the case which is solved by the SVA. All we need to do is to find a nicer function, \(\hat{{\varGamma}}\), with an analytic primitive function, which in turn has a simple inverse, such that it everywhere overestimates *Γ*, \(\hat{\varGamma}(t)\ge\varGamma(t)\). The corresponding \(\hat{\varDelta}(t_{0},t_{c})\) is then an *under*estimate of *Δ*(*t*_{0},*t*_{ c }), and we can generate the first *t* according to it. As in the standard *accept–reject* method, we now accept the generated value with a probability \({\varGamma}(t)/\hat{{\varGamma}}(t)<1\). However, contrary to the standard method, if we reject the emission, we replace *t*_{0} with the rejected *t*-value before we generate a new *t*. Loosely speaking, we have underestimated the probability that the emission was not made above *t*, so we need not consider that region again. We now continue generating downwards in *t* until we either accept a *t*-value, or until the generated *t* drops below *t*_{ c }, at which point we give up and conclude that there was no emission above *t*_{ c }.
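The procedure above can be sketched in a few lines of Python. The density \(\varGamma(t)=a(1-t)/t\) on \(t\in(0,1]\) and the overestimate \(\hat{\varGamma}(t)=a/t\) are toy choices made purely for this example (not taken from any particular shower); the primitive of the overestimate, \(a\ln t\), is trivially inverted, giving the trial scale \(t'=t\,R^{1/a}\).

```python
import math, random

def sudakov_veto(t0, tc, a, rng):
    """Return the scale of the first emission, or None if none above tc.

    Toy density Gamma(t) = a*(1-t)/t; overestimate Gammahat(t) = a/t,
    whose primitive a*ln(t) inverts to the trial scale t' = t * R**(1/a).
    """
    t = t0
    while True:
        t = t * rng.random() ** (1.0 / a)   # trial scale from the overestimate
        if t < tc:
            return None                     # no emission above the cutoff
        if rng.random() < (1.0 - t):        # accept with Gamma/Gammahat = 1 - t
            return t

def exact_no_emission(t0, tc, a):
    """Analytic Sudakov factor exp(-int_tc^t0 a*(1-t)/t dt) for the toy density."""
    return math.exp(-a * (math.log(t0 / tc) - (t0 - tc)))
```

For, e.g., *a*=0.5, *t*_{0}=1 and *t*_{ c }=0.1, the fraction of calls returning `None` converges to the analytic Sudakov factor, here about 0.496.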

To see that this gives the correct no-emission probability below *t*_{ c }, we calculate the probability \({\mathcal{P}}_{n}\) of ending up below *t*_{ c } after having generated and rejected *n* intermediate *t*-values. To start with, we have
\[
{\mathcal{P}}_{0}=\hat{\varDelta}(t_{0},t_{c}),
\]
where for \({\mathcal{P}}_{1}\) we first have the probability that we generate a value *t* and then throw it away with probability \(1-{\varGamma}(t)/\hat{{\varGamma}}(t)\), and then the probability that we do not generate anything below *t*,
\[
{\mathcal{P}}_{1}=\int_{t_{c}}^{t_{0}}dt\,\hat{\varDelta}(t_{0},t)\,\hat{\varGamma}(t)\biggl(1-\frac{\varGamma(t)}{\hat{\varGamma}(t)}\biggr)\hat{\varDelta}(t,t_{c})
=\hat{\varDelta}(t_{0},t_{c})\int_{t_{c}}^{t_{0}}dt\,\bigl(\hat{\varGamma}(t)-\varGamma(t)\bigr).
\]
Similarly we get
\[
{\mathcal{P}}_{n}=\hat{\varDelta}(t_{0},t_{c})\,\frac{1}{n!}\biggl[\int_{t_{c}}^{t_{0}}dt\,\bigl(\hat{\varGamma}(t)-\varGamma(t)\bigr)\biggr]^{n}
\]
from noting that \(\hat{\varDelta}(t_{0},t_{1})\hat{\varDelta}(t_{1},t_{2})\hat{\varDelta}(t_{2},t_{c})=\hat{\varDelta}(t_{0},t_{c})\) (here for *n*=2) and that the nested integral can be easily factorized. Summing over *n* we then obtain
\[
\sum_{n=0}^{\infty}{\mathcal{P}}_{n}
=\hat{\varDelta}(t_{0},t_{c})\exp\biggl(\int_{t_{c}}^{t_{0}}dt\,\bigl(\hat{\varGamma}(t)-\varGamma(t)\bigr)\biggr)
={\varDelta}(t_{0},t_{c}),
\]
which is exactly the no-emission probability we wanted.

So far we have ignored the dependence on *z*. To include it, we need an overestimate of the splitting function, \(\hat{P}(z,t)\ge P(z,t)\), which is integrable in *z* with simple limits. For each *t* we generate, we then also generate a *z* in the interval \([\hat{z}_{\min}(t),\hat{z}_{\max}(t)]\) according to the probability distribution given by \(\hat{P}\), and accept the pair with probability \(P(z,t)/\hat{P}(z,t)\). The actual integration limits in *z* may depend on *t* in a complicated way, but this is easily handled by setting *P* to zero outside the integration region, while keeping \(\hat{P}\) finite. Although the formulae become more cluttered, it is straightforward to show, by going through the steps above, that this will give the correct distributions of emissions.

If we now go back to Eq. (6), we there assumed that *Γ* diverges at zero such that *Δ*(*t*,0)=0. This is, of course, not necessarily the case, as pointed out in [3]. However, all PS models have some kind of non-perturbative cutoff, below which all emissions are assumed to be unresolved w.r.t. the typical scales in the subsequent hadronization model, and we therefore only need concern ourselves with emissions above some cutoff *t* _{ c }>0. This means that we can always add to our overestimate a term which is zero above *t* _{ c } but which diverges at *t*=0. The fact that nowhere in the veto algorithm does anything depend on the form of \(\hat{{\varGamma}}\) below *t* _{ c } means that we do not even need to specify how it diverges; it is enough to assume that it does.^{1}

Finally, a parton shower normally involves several competing types of splittings, each with its own *z*-integrated density \({\varGamma}_{a}(t)\), and we want the first splitting to be of type *a* with a probability proportional to its splitting function. This is achieved by generating a *t* for each possible splitting according to its own overestimate, and then picking the splitting, *a*, with the highest accepted scale, *t*_{ a }. Since the probability that any other splitting, *b*, was not above *t*_{ a } is precisely *Δ*_{ b }(*t*_{0},*t*_{ a }), we therefore again get the probability that the first splitting was of type *a* at *t*=*t*_{ a },
\[
\frac{d{\mathcal{P}}_{a}}{dt_{a}}={\varGamma}_{a}(t_{a})\prod_{b}{\varDelta}_{b}(t_{0},t_{a}),
\]
where the product runs over all splitting types.
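Continuing the toy example above (same illustrative densities, a sketch rather than any real shower implementation), the competition can be coded by letting each channel run its own veto algorithm and keeping the highest accepted scale; trials falling below the current winner can be abandoned early, since later trials only decrease.

```python
import math, random

def first_splitting(t0, tc, channels, rng):
    """Pick the first splitting among competing channels.

    `channels` maps a name to (a, accept), where the channel's trial scales
    come from the toy overestimate a/t and accept(t) = Gamma(t)/Gammahat(t).
    The channel with the highest accepted scale wins, which automatically
    supplies the no-emission factors Delta_b(t0, t_a) of all competitors.
    Returns (name, t), or (None, tc) if nothing was accepted above tc.
    """
    best = (None, tc)
    for name, (a, accept) in channels.items():
        t = t0
        while True:
            t = t * rng.random() ** (1.0 / a)
            if t <= best[1]:
                break                       # cannot beat the current winner
            if rng.random() < accept(t):
                best = (name, t)            # new winner for this event
                break
    return best
```

With two channels sharing the shape \((1-t)/t\) but different prefactors *g* and *q*, the winner is the photon-like channel a fraction \(q/(g+q)\) of the time an emission occurs, and no channel wins with probability \(\exp(-(g+q)\int(1-t)/t\,dt)\), which can be used to validate the sketch numerically.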

## 3 Reweighting in CKKW-like procedures

In a CKKW-like procedure, a matrix element generator is first used to produce the hardest partons above some merging scale. Each *n*-parton state is then reconstructed as a series of parton shower emissions with emission scales {*t*_{1},…,*t*_{ n }} and the corresponding intermediate states {*S*_{0},…,*S*_{ n }}, where *S*_{ n } is the one generated by the ME. We then want to multiply the event by the no-emission factors
\[
\prod_{i}{\varDelta}_{i}(t_{i},t_{i+1}),\qquad
{\varDelta}_{i}(t_{i},t_{i+1})=\exp\biggl(-\int_{t_{i+1}}^{t_{i}}dt\,{\varGamma}_{i}(t)\biggr),
\]
where \({\varGamma}_{i}\) is the sum of the *z*-integrated splitting functions from the partons in state *S*_{ i }.

What we can do is to simply put the state *S* _{ i } into the parton shower program and ask it to generate one emission starting from the scale *t* _{ i }. If the generated emission has a scale *t*>*t* _{ i+1} we throw the whole partonic event away and ask the ME generator to produce a new state. The probability for this not to happen is exactly *Δ* _{ i }(*t* _{ i },*t* _{ i+1}) and the procedure corresponds to reweighting the ME state with the desired no-emission probability.

The problem is that if the ME state corresponds to very low scales, we will throw away very many events, which is very inefficient and may result in poor statistics.

The solution is to introduce a boost factor, *C*>1, and multiply the overestimate \(\hat{\varGamma}_{i}\) with the same factor. As before this just gives a simple overestimate of the splitting function, which we know how to handle from Sect. 2. But rather than throwing an emission away with an extra probability 1/*C* (and not vetoing the event), we can always reject the emission, while at the same time multiplying the whole event with a weight 1−1/*C*. The total weight of the event is then the sum over all possible ways we can veto a generated emission (we here assume that the normal rejection procedure has already been applied), and its average is exactly the no-emission probability \({\varDelta}_{i}(t_{i},t_{i+1})\). In this way we get the right weight, but we never throw away an event.
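A minimal sketch of this soft-veto reweighting, again with the toy densities \(\varGamma_{i}(t)=a(1-t)/t\) and \(\hat{\varGamma}_{i}(t)=a/t\) (illustrative assumptions, not the paper's shower): trial scales come from the boosted overestimate, and every emission that survives the ordinary rejection multiplies the event weight by 1−1/*C* instead of killing the event.

```python
import math, random

def no_emission_weight(t_hi, t_lo, a, C, rng):
    """Weight whose average equals Delta(t_hi, t_lo), without discarding events.

    Trials come from the boosted overestimate C*a/t.  A trial that survives
    the ordinary Gamma/Gammahat rejection would normally veto the event;
    here it instead multiplies the weight by (1 - 1/C).
    Toy densities: Gamma(t) = a*(1-t)/t, Gammahat(t) = a/t.
    """
    w, t = 1.0, t_hi
    while True:
        t = t * rng.random() ** (1.0 / (C * a))  # trial from C*Gammahat
        if t < t_lo:
            return w
        if rng.random() < (1.0 - t):             # survives Gamma/Gammahat rejection
            w *= 1.0 - 1.0 / C                   # soft veto instead of discarding
```

Averaging this weight over many events reproduces \(\exp(-a(\ln(t_{\mathrm{hi}}/t_{\mathrm{lo}})-(t_{\mathrm{hi}}-t_{\mathrm{lo}})))\), the analytic no-emission probability of the toy density, while every event keeps a non-zero weight.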

In the NLO version of CKKW-L [9], one in addition needs the first-order term in the expansion of the no-emission probability, i.e. the integral of the *z*-integrated splitting function, \(\int_{t_{i+1}}^{t_{i}}dt\,{\varGamma}_{i}(t)\), which is used as a way of subtracting the fixed first-order result from the exponentiation, so that it can be replaced with the correct NLO result. The way this was done in [9] was similar to the procedure above. The shower is started and each emission is vetoed, but the number of emissions above *t*_{ i+1} is counted, noting that the average number of vetoed emissions is given by
\[
\langle n\rangle=\int_{t_{i+1}}^{t_{i}}dt\,{\varGamma}_{i}(t),
\]
with the running *α*_{s} used in the shower.

Again, the statistics can become a bit poor if most events yield the weight zero (which is the case for large merging scales, when the no-emission probability is close to unity), and only a few have non-zero values. We can instead again introduce the boost factor, *C*, and rather than simply counting the number of emissions we take the weight *n*/*C*.
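The boosted counting can be sketched the same way as before (toy density \(\varGamma_{i}(t)=a(1-t)/t\), an illustrative choice): the average of *n*/*C* over many events estimates \(\int dt\,\varGamma_{i}(t)\), with the fluctuations spread over many small contributions rather than a few integer counts.

```python
import math, random

def count_vetoed(t_hi, t_lo, a, C, rng):
    """Count generated-and-vetoed emissions; <n/C> estimates int Gamma dt.

    Trials come from the boosted overestimate C*a/t; every trial surviving
    the Gamma/Gammahat rejection is vetoed but counted.
    Toy density Gamma(t) = a*(1-t)/t, as in the earlier sketches.
    """
    n, t = 0, t_hi
    while True:
        t = t * rng.random() ** (1.0 / (C * a))
        if t < t_lo:
            return n
        if rng.random() < (1.0 - t):
            n += 1          # vetoed emission; keep evolving downwards
```

For the toy density the target integral is \(a(\ln(t_{\mathrm{hi}}/t_{\mathrm{lo}})-(t_{\mathrm{hi}}-t_{\mathrm{lo}}))\), which the mean of *n*/*C* reproduces for any *C*.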

Note that *C* need not be a simple constant; it can be a function of the scale (or any other variable in the splitting). This is used in the NLO version of CKKW-L, where the leading-order *α*_{s} term in the expansion of the no-emission probability is needed at a fixed renormalization scale, *μ*_{ R }, while in the shower we have a coupling running with the transverse momentum. Therefore, rather than counting the number of emissions, we sum up ratios of fixed and running *α*_{s} for the emissions which are generated and discarded. Introducing the ratio \(w_{j}=\alpha_{\mathrm{s}}(\mu_{R})/\alpha_{\mathrm{s}}(p_{\perp j})\) for each discarded emission *j*, the average of \(\sum_{j}w_{j}\) gives the first-order integral evaluated with fixed *α*_{s},^{2} and we get what we desired. To obtain higher powers of the integral with fixed *α*_{s}, it is easy to show that e.g. the average sum of triplets, \(\sum_{j<k<l}w_{j}w_{k}w_{l}\), gives the third power of the same integral divided by 3!.

The same trick also works for initial-state splittings, where the splitting functions involve ratios of parton densities, with *a* the incoming parton before, and *b* the parton after the splitting. What is needed in the NLO version of CKKW-L in this case is the integral for a given factorization scale, which is obtained by simply changing the *α*_{s}-weight in Eq. (28) to the corresponding ratio of parton densities, where *z* is the energy fraction of the vetoed generated splitting. The derivation in Eq. (28) becomes a bit more cumbersome, but is straightforward.

## 4 Reweighting competing processes

Often we have many different competing splitting processes. The example we shall use here is the process of a quark radiating a gluon (*Γ* _{ g }) competing with the process of the same quark radiating a photon (*Γ* _{ γ }). Since generating the latter is much less likely because of the smallness of *α* _{EM} as compared to *α* _{s}, the generation may become very inefficient if we are interested in observables related to an emitted photon.

In principle we could again consider introducing a boost factor *C*>1 and replace *Γ* _{ γ }(*t*) with \(\tilde{\varGamma}_{\gamma}(t) = C{\varGamma}_{\gamma}(t)\) and do the same with the overestimate \(\hat{\varGamma}_{\gamma}\). As long as \(\tilde{\varGamma}_{\gamma}(t) \ll {\varGamma}_{g}(t)\), we can reweight each event containing *n* photons with a factor 1/*C* ^{ n } and get approximately the correct results for the observables. However, this only gives the right emission probability, not the correct no-emission probability.

Instead we adopt a different procedure. Every time we generate a photon emission (accepted with probability \({\varGamma}_{\gamma}/\hat{\varGamma}_{\gamma}\)), we veto it anyway with a probability 0.5. If we veto it, we also reweight the whole event with a factor 2−2/*C*, while if we keep it, we reweight the whole event with a factor 2/*C*. Clearly the emissions will still be correctly weighted, 0.5×2/*C*, but now we also get the correct no-emission probabilities.^{3} Loosely speaking we are half the time reweighting the event to compensate for the boosting of the emission, and half the time compensating for the corresponding underestimate of the no-emission probability.
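A sketch of the procedure for a single boosted photon channel, with a toy density \(\varGamma_{\gamma}(t)=q(1-t)/t\) and overestimate \(q/t\) (hypothetical parameters; in the toy the event state is not updated after an emission, so the photons form an independent process):

```python
import math, random

def photon_event(t_hi, t_lo, q, C, rng):
    """One toy event with boosted photon emission and the extra 0.5-veto.

    Trials come from the boosted overestimate C*q/t.  A trial surviving the
    ordinary Gamma/Gammahat rejection is then kept with probability 0.5
    (event weight *= 2/C) or vetoed anyway (event weight *= 2 - 2/C).
    Returns (weight, number of kept photons).
    """
    w, n, t = 1.0, 0, t_hi
    while True:
        t = t * rng.random() ** (1.0 / (C * q))
        if t < t_lo:
            return w, n
        if rng.random() < (1.0 - t):          # survives Gamma/Gammahat rejection
            if rng.random() < 0.5:
                w *= 2.0 - 2.0 / C            # veto anyway, compensate the weight
            else:
                w *= 2.0 / C                  # keep the photon
                n += 1
```

In this toy the number of photons is Poisson-distributed with mean \(\lambda=\int dt\,\varGamma_{\gamma}(t)\), so both the weighted mean photon multiplicity (which should equal \(\lambda\)) and the weighted fraction of photon-free events (which should equal \(e^{-\lambda}\)) can be checked against the unboosted expectation.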

Note also that there is nothing special about the value 0.5 of the extra veto probability. We could equally well use any probability, *b*, vetoing the emissions with probability *b* and reweighting with (1−1/*C*)/*b*, while reweighting with 1/((1−*b*)*C*) if not vetoed, and still obtain the correct result.

In the method of [5], the number of trial emissions is instead increased by multiplying the overestimate with a factor *C*, while the acceptance of a generated emission at scale *t* is still done with probability \({\varGamma}_{\gamma}(t)/\hat{{\varGamma}}_{\gamma}(t)\) (weighting the event with 1/*C*), and a rejected emission instead reweights the event by a factor
\[
\frac{C\hat{\varGamma}_{\gamma}(t)-\varGamma_{\gamma}(t)}{C\bigl(\hat{\varGamma}_{\gamma}(t)-\varGamma_{\gamma}(t)\bigr)}. \tag{36}
\]
In fact, if we choose^{4} a \(\hat{\varGamma}_{\gamma}=2{\varGamma}_{\gamma}\) and let *C*→*C*/2, the reweighting of the events would be exactly the same as in the procedure above.

While we gain in efficiency for the emissions, we will also lose in precision for the no-emission probability due to fluctuating weights. It is easy to calculate the variance in the weights, but it is maybe more instructive to look at a real example.

As an illustration we let Pythia8 [12] generate standard LEP events, with photon emission included in the shower, and we compare the default generation with runs where the photon emission cross section is boosted by a factor *C*. We show for different *C* the effect of using the full reweighting procedure (*proper* weighting), but also show for comparison the case of just using event weights with a factor \(1/C^{n_{\gamma}}\) (*naive* weighting).

In Fig. 1a we show the resulting distributions for *C*=1 (i.e. the default), *C*=2 for the naive, and *C*=4 for the proper weighting.^{5} The error bands indicate the statistical error using 10^{8} events, and the results are shown as a ratio to the result from a high-statistics run (3×10^{9} events) with Pythia8. We see that the statistical error is somewhat reduced in the reweighted samples, but we also see what seems to be a systematic shift in the naive reweighting, due to the mistreatment of the no-emission probability. This shift becomes very pronounced if we increase *C*, as seen in Fig. 1b, where we use *C*=32 for the naive and *C*=64 for the proper reweighting. Here we see that the statistical errors are very much reduced for both reweightings, but the naive procedure is basically useless due to the large systematic shift.

So far we have implicitly assumed that *C*>1, since we motivated the whole procedure by the desire to increase the number of rare splittings. Note, however, that the proof of the procedure does not at all depend on the size of *C*. In fact it can even in some cases be taken negative.

Consider a case where there are negative contributions to the total splitting probability. One of the simplest cases is the emission of a second gluon in a final-state dipole shower in e^{+}e^{−}-annihilation into jets. Once a gluon has been radiated from the original \(\mathrm{q}\bar{\mathrm{q}}\) pair, it can be shown that the distribution of a second gluon is well described by independent emissions from two dipoles, one between the quark and the first gluon and one between the gluon and the anti-quark. However, examining the \(\mathrm{e}^{+} \mathrm{e}^{-}\to \mathrm{q}\mathrm{g}\mathrm{g}\bar{\mathrm{q}}\) matrix element one finds that there is a colour-suppressed negative contribution corresponding to emissions from the dipole between the q and \(\bar{\mathrm{q}}\). This contribution is normally ignored completely in parton showers, mainly because it is difficult to handle in a probabilistic way in the SVA. It may even result in a no-emission probability above unity.

In the reweighting scheme introduced here we can easily include the negative contribution to the splitting functions, and apply a boost factor of *C*=−1 for the \(\mathrm{q}\bar{\mathrm{q}}\)-dipole. If a gluon emission is generated from such a dipole, it is then either accepted and the event is given a negative weight, or it is rejected, in which case the event weight is multiplied by a factor four. We note that in this way it is in principle conceivable to implement a parton shower which includes all possible interference effects. We will, of course, have even larger issues with statistics, compared to the photon emission case above, as we now have potentially large weights that must cancel each other, but this procedure could still be an interesting alternative to the ones presented in [3] and [4] (an extension of [5] to negative weights).
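A toy sketch of this negative-weight handling (my illustrative density \(\varGamma(t)=-a(1-t)/t\), not the actual \(\mathrm{q}\bar{\mathrm{q}}\)-dipole kernel): trials are generated from a positive overestimate of \(|\varGamma|\), and a generated emission is either kept with weight factor 2/*C*=−2 or vetoed with weight factor 2−2/*C*=4. The weighted fraction of events without a kept emission then tends to \(\exp(-\int dt\,\varGamma(t))\), which exceeds unity for a negative kernel.

```python
import math, random

def negative_dipole_event(t_hi, t_lo, a, rng):
    """One toy event with a negative splitting function and boost C = -1.

    Gamma(t) = -a*(1-t)/t is negative; trials come from the positive
    overestimate a/t, with generation probability |Gamma|/Gammahat = 1 - t.
    A generated emission is kept with probability 0.5 (weight *= 2/C = -2)
    or vetoed (weight *= 2 - 2/C = 4).  Returns (weight, kept emissions).
    """
    w, n, t = 1.0, 0, t_hi
    while True:
        t = t * rng.random() ** (1.0 / a)
        if t < t_lo:
            return w, n
        if rng.random() < (1.0 - t):
            if rng.random() < 0.5:
                w *= 4.0        # vetoed: factor 2 - 2/C with C = -1
            else:
                w *= -2.0       # kept: negative-weight emission
                n += 1
```

The large factors of four make the weight distribution broad, illustrating the statistics penalty discussed above: the weighted no-emission estimate is correct on average but fluctuates far more than an unweighted one.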

## 5 Conclusions

This article does not claim to present innovative new physics results. Rather it presents a number of methods collected by the author during a couple of decades of working with parton showers in general, and with the Sudakov veto algorithm in particular. They are presented here in the hope that they may come in handy for the community, now that more and more effort is put into the merging and matching of parton showers with matrix elements. Especially in the case of matching with next-to-leading order matrix elements (and beyond), a thorough understanding of how parton showers work and knowledge of how to manipulate them is necessary, and these kinds of methods may become increasingly important.

## Footnotes

- 1. Note that we must also require \(\hat{{\varGamma}}>0\) everywhere to be able to reach below *t*_{ c }.
- 2. See Appendix.
- 3. Note that the whole procedure can in principle be implemented in Pythia8 in a non-intrusive way, by artificially increasing *α*_{EM} and implementing the reweighting and extra rejection in a UserHooks class.
- 4. Note that one needs to choose a \(\hat{\varGamma}\) which is everywhere some factor higher than *Γ*_{ γ }, since otherwise the denominator in Eq. (36) could tend to zero, giving wildly fluctuating weights.
- 5. The proper reweighting has a twice as high boost factor, to get the same weighting of the emissions.

## Notes

### Open Access

This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.

### References

- 1. V. Sudakov, Sov. Phys. JETP **3**, 65–71 (1956)
- 2. T. Sjöstrand, S. Mrenna, P.Z. Skands, J. High Energy Phys. **05**, 026 (2006). arXiv:hep-ph/0603175
- 3. S. Plätzer, M. Sjödahl, Eur. Phys. J. Plus **127**, 26 (2012). arXiv:1108.6180 [hep-ph]
- 4. S. Höche, F. Krauss, M. Schönherr, F. Siegert, J. High Energy Phys. **1209**, 049 (2012). arXiv:1111.1220 [hep-ph]
- 5. S. Höche, S. Schumann, F. Siegert, Phys. Rev. D **81**, 034026 (2010). arXiv:0912.3501 [hep-ph]
- 6.
- 7. S. Catani, F. Krauss, R. Kuhn, B.R. Webber, J. High Energy Phys. **11**, 063 (2001). arXiv:hep-ph/0109231
- 8.
- 9. N. Lavesson, L. Lönnblad, J. High Energy Phys. **12**, 070 (2008). arXiv:0811.2912 [hep-ph]
- 10. A. Buckley et al., Phys. Rep. **504**, 145–233 (2011). arXiv:1101.2599 [hep-ph]
- 11. L. Lönnblad, S. Prestel, arXiv:1211.7278 [hep-ph]
- 12. T. Sjöstrand, S. Mrenna, P. Skands, Comput. Phys. Commun. **178**, 852–867 (2008). arXiv:0710.3820 [hep-ph]