Weak Gauge Boson Radiation in Parton Showers

The emission of W and Z gauge bosons is included in a traditional QCD + QED shower. The unitarity of the shower algorithm links the real radiation of the weak gauge bosons to the negative weak virtual corrections. The shower evolution process leads to a competition between QCD, QED and weak radiation, and allows for W and Z boson production inside jets. Various effects on LHC physics are studied, both at low and high transverse momenta, and effects at higher-energy hadron colliders are outlined.


The weak shower
In this section we describe how the production of W/Z + n jets is handled. To be more precise, the bulk of the study will be concerned with n ≥ 2, i.e. the region where production of a W/Z inside a jet becomes possible. The n = 0, 1 processes do not have a direct overlap with QCD jets, and an existing shower formalism is appropriate to handle them, as will be described further below.
In principle, the introduction of W/Z emission in showers would only involve the introduction of two new splitting kernels. In practice, the large W/Z masses lead to large corrections, both in the kinematics handling and in the splitting behaviour. In order to provide a reasonably accurate description, within the limits of the shower approach, several matrix elements are used as templates to provide a correct dependence on the W/Z masses.
Other problems also appear that are new relative to the existing QCD/QED formalism, notably that the weak force has spin-dependent couplings and that the emission of a W boson changes the flavour of the radiating quark. Further, a complete description would need to include full γ*/Z⁰ interference, but in the following these interference terms will be neglected. That is, for low virtualities a pure γ* is assumed, and for higher virtualities a pure Z⁰. This should be a good first approximation, since the bulk of the shower activity should be in the two peak regions.

The basic shower formalism
The starting point for shower evolution is the DGLAP evolution equation, which can be written as
$$\mathrm{d}P_{a\to bc} = \frac{\alpha}{2\pi}\,\frac{\mathrm{d}Q^2}{Q^2}\,P_{a\to bc}(z)\,\mathrm{d}z , \qquad (2.1)$$
with α = α_s or α_em, Q² some evolution variable expressing the hardness of the a → bc branching and z the energy-momentum sharing between the daughters b and c. Some further azimuthal φ dependence may or may not be included. The branchings can be of the character q → qg, g → gg, g → qq, f → fγ (f = q or ℓ), and γ → ff. To this list we now want to add q → q′W and q → qZ⁰, including subsequent decays W → ff′ and Z → ff. The W/Z production mechanism is directly comparable with that of g/γ, whereas the decays happen with unit probability and therefore are slightly separated in character from the corresponding g → qq and γ → ff ones. The difference obviously is related to the W/Z being massive and the g/γ massless. To set the coupling notation, consider the case that the W/Z masses are set to zero. Then the evolution equations for a quark can be written in a common form
$$\mathrm{d}P_{q\to qX} = \frac{\alpha_{\mathrm{eff}}}{2\pi}\,\frac{\mathrm{d}Q^2}{Q^2}\,\frac{1+z^2}{1-z}\,\mathrm{d}z , \qquad (2.2)$$
with
$$\begin{aligned}
\alpha_{\mathrm{eff}} &= \alpha_s\, C_F &&\text{for } q \to q g , &(2.3)\\
&= \alpha_{\mathrm{em}}\, e_q^2 &&\text{for } q \to q\gamma , &(2.4)\\
&= \frac{\alpha_{\mathrm{em}}}{\sin^2\theta_W \cos^2\theta_W}\,(T^3_q - e_q \sin^2\theta_W)^2 &&\text{for } q_L \to q_L Z^0 , &(2.5)\\
&= \frac{\alpha_{\mathrm{em}}}{\sin^2\theta_W \cos^2\theta_W}\,(e_q \sin^2\theta_W)^2 &&\text{for } q_R \to q_R Z^0 , &(2.6)\\
&= \frac{\alpha_{\mathrm{em}}}{2 \sin^2\theta_W}\,|V^{\mathrm{CKM}}_{q q'}|^2 &&\text{for } q_L \to q'_L W^\pm , &(2.7)\\
&= 0 &&\text{for } q_R \to q'_R W^\pm . &(2.8)
\end{aligned}$$
Here L/R denotes left-/right-handed quarks, T³_q = ±1/2 for up/down-type quarks, and V_CKM is the CKM quark mixing matrix.
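As an aside, the pattern of effective couplings in eqs. (2.3)-(2.8) can be collected in a small helper. This is only an illustrative sketch: the numerical values for α_s, α_em and sin²θ_W are placeholder constants, and the function names are invented here, not taken from any actual shower implementation.

```python
ALPHA_S, ALPHA_EM, SIN2W = 0.118, 1.0 / 128.0, 0.2312   # placeholder values
COS2W = 1.0 - SIN2W

def alpha_eff(branch, e_q=2.0/3.0, t3_q=0.5, v_ckm=1.0):
    """Effective coupling for q -> q X in the massless-boson limit,
    following the structure of eqs. (2.3)-(2.8)."""
    if branch == "g":                       # q -> q g, eq. (2.3)
        return ALPHA_S * 4.0 / 3.0          # C_F = 4/3
    if branch == "gamma":                   # q -> q gamma, eq. (2.4)
        return ALPHA_EM * e_q ** 2
    if branch == "Z_L":                     # q_L -> q_L Z, eq. (2.5)
        return ALPHA_EM / (SIN2W * COS2W) * (t3_q - e_q * SIN2W) ** 2
    if branch == "Z_R":                     # q_R -> q_R Z, eq. (2.6)
        return ALPHA_EM / (SIN2W * COS2W) * (e_q * SIN2W) ** 2
    if branch == "W_L":                     # q_L -> q'_L W, eq. (2.7)
        return ALPHA_EM / (2.0 * SIN2W) * abs(v_ckm) ** 2
    return 0.0                              # q_R does not emit W, eq. (2.8)
```

Note how the left-handed Z coupling dominates over the right-handed one for up-type quarks, and how W emission off right-handed quarks vanishes identically.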
It will be assumed that the incoming beams are unpolarized, i.e. that incoming fermions equally often are left- as right-handed. Since QCD interactions are spin-independent, a left- or right-handed helicity is picked at random for each separate fermion line at the hard interaction. (Usually this association is unique, but in cases like uu → uu a choice is made that favours a small scattering angle, using 1/t̂² and 1/û² as relative weights.) Since the gauge-boson emissions preserve helicity (for massless fermions), the choice at the hard process is propagated through the full shower evolution. The emission rate for a single W/Z boson is not affected by this helicity conservation, relative to what spin-averaged splitting kernels would have given, but the rate of several W/Z bosons is increased. This is most easily realized for the W case, where a first emission fixes the fermion line to be left-handed, and a second W therefore can be emitted twice as often as with a spin-averaged branching kernel.
The formalism for FSR and ISR, for the case of massless gauge bosons, is outlined in [22]. A brief summary is as follows, for the case that on-shell masses can be neglected.
For FSR the evolution variable for branchings a → bc is p²⊥evol = z(1 − z)Q², where Q² is the off-shell (timelike) virtuality of parton a. The evolution equation becomes
$$\mathrm{d}P_a = \frac{\alpha(p^2_{\perp\mathrm{evol}})}{2\pi}\,\frac{\mathrm{d}p^2_{\perp\mathrm{evol}}}{p^2_{\perp\mathrm{evol}}}\,P_{a\to bc}(z)\,\mathrm{d}z\;\Delta_a(p^2_{\perp\mathrm{max}}, p^2_{\perp\mathrm{evol}}) , \qquad (2.9)$$
where ∆_a is the Sudakov form factor, i.e. the no-emission probability from the initial maximal scale p²⊥max down to the current p²⊥evol one [19]. It is obtained from an exponentiation of the real-emission probability in such a way that unitarity is restored: the total probability for parton a to branch, or to reach a lower cutoff scale p²⊥min without branching, adds to unity. A dipole shower [23] approach is used to set the kinematics of a branching. That is, for a QCD shower, colour is traced in the N_C → ∞ limit, and thus the radiating parton a can be associated with a "recoiler" r that carries the opposite colour. A gluon is split into two possible contributions by its colour and anticolour, both as a radiator and as a recoiler. The a + r system preserves its four-momentum in a branching and, if viewed in its rest frame, the a and r three-momenta are scaled down without a change in direction to give a the mass Q. In this frame z (1 − z) is the fraction of the modified a energy that b (c) takes.
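The downward evolution with a Sudakov form factor, eq. (2.9), is in practice generated with the veto algorithm. A minimal sketch is given below; the constant overestimate C_OVER and the toy "true" weight are invented placeholders, not values from any real shower.

```python
import math, random

C_OVER = 0.5          # assumed constant bound on (alpha/2pi) * z-integrated kernel

def true_weight(pt2):
    """Toy stand-in for (alpha(pt2)/2pi) times the z-integrated kernel."""
    return C_OVER / (1.0 + math.log(1.0 + pt2))

def next_emission(pt2_max, pt2_min, rng=random.random):
    """Return the accepted branching scale, or None if the evolution
    reaches the cutoff pt2_min without branching (the Sudakov 'no-emission'
    outcome that restores unitarity)."""
    pt2 = pt2_max
    while pt2 > pt2_min:
        # Trial scale from the overestimated Sudakov: solve
        # exp(-C_OVER * ln(pt2/pt2_new)) = R for pt2_new.
        pt2 *= rng() ** (1.0 / C_OVER)
        if pt2 < pt2_min:
            return None
        if rng() < true_weight(pt2) / C_OVER:   # veto step corrects to eq. (2.9)
            return pt2
    return None
```

Rejected trials simply continue the evolution downwards from the rejected scale, which is what exponentiates the real-emission probability.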
For ISR it is most convenient to use backwards evolution [24], i.e. to start at the hard interaction and then proceed towards earlier branchings at lower evolution scales. That is, the a → bc branching process is now interpreted as parton b becoming "unbranched" into a. Parton b has a spacelike virtuality with absolute value Q², and the evolution variable is p²⊥evol = (1 − z)Q². The evolution equation now contains a ratio of PDFs,
$$\mathrm{d}P_b = \frac{\alpha(p^2_{\perp\mathrm{evol}})}{2\pi}\,\frac{\mathrm{d}p^2_{\perp\mathrm{evol}}}{p^2_{\perp\mathrm{evol}}}\,\frac{x_a f_a(x_a, p^2_{\perp\mathrm{evol}})}{x_b f_b(x_b, p^2_{\perp\mathrm{evol}})}\,P_{a\to bc}(z)\,\mathrm{d}z\;\Delta_b(p^2_{\perp\mathrm{max}}, p^2_{\perp\mathrm{evol}}) , \qquad (2.10)$$
where again the Sudakov form factor is obtained by an exponentiation of the real-emission expression, to preserve unitarity. The parton coming in from the other side of the event defines a recoiler r, such that z = x_b/x_a = (p_b + p_r)²/(p_a + p_r)². With b originally moving parallel with the incoming beam particle with a fraction x_b of the beam momentum, the branching requires a redefinition of kinematics such that afterwards parton a is parallel with the beam and carries a fraction x_a. Accordingly, all the outgoing partons produced by the b + r collision are boosted and rotated to a new frame. Both ISR and FSR are evolved downwards in p²⊥evol, starting from a p²⊥max scale typically set by the hard interaction at the core of the event. A branching at a given scale sets the maximum for the continued evolution. At each step all the partons that potentially could branch must be included in the sum of possibilities. There are always two incoming partons that can radiate, while the number of outgoing ones increases as the evolution proceeds.
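The PDF ratio in eq. (2.10) enters the acceptance weight of a trial backwards-evolution step. The sketch below illustrates this; `toy_pdf` is a made-up stand-in for a real PDF set (returning x·f(x)), and the function names are hypothetical.

```python
def toy_pdf(flav, x, pt2):
    """Illustrative x*f(x): valence-like for quarks, soft-enhanced for gluons.
    A real implementation would query an actual PDF set at scale pt2."""
    if flav == "g":
        return (1.0 - x) ** 5 / x
    return x ** 0.5 * (1.0 - x) ** 3

def isr_accept_weight(flav_a, flav_b, x_b, z, pt2, kernel):
    """Part of the acceptance weight for 'unbranching' b at x_b into a at
    x_a = x_b / z: splitting kernel times the PDF ratio of eq. (2.10)."""
    x_a = x_b / z
    if x_a >= 1.0:
        return 0.0          # kinematically forbidden: mother would need x >= 1
    return kernel(z) * toy_pdf(flav_a, x_a, pt2) / toy_pdf(flav_b, x_b, pt2)
```

A usage example would pass `kernel=lambda z: (1 + z*z)/(1 - z)` for the q → qX kernel of eq. (2.2).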
A third component of perturbative parton production is multiparton interactions (MPI). These can also conveniently be arranged in a falling p⊥ sequence, and by unitarity acquire a "Sudakov" factor in close analogy with that in showers [25]. Therefore ISR, FSR and MPI can be combined in one common sequence of falling p⊥ scales [26]:
$$\frac{\mathrm{d}P}{\mathrm{d}p_\perp} = \left(\frac{\mathrm{d}P_{\mathrm{MPI}}}{\mathrm{d}p_\perp} + \sum \frac{\mathrm{d}P_{\mathrm{ISR}}}{\mathrm{d}p_\perp} + \sum \frac{\mathrm{d}P_{\mathrm{FSR}}}{\mathrm{d}p_\perp}\right) \exp\left(-\int_{p_\perp}^{p_{\perp\mathrm{max}}} \left(\frac{\mathrm{d}P_{\mathrm{MPI}}}{\mathrm{d}p'_\perp} + \sum \frac{\mathrm{d}P_{\mathrm{ISR}}}{\mathrm{d}p'_\perp} + \sum \frac{\mathrm{d}P_{\mathrm{FSR}}}{\mathrm{d}p'_\perp}\right) \mathrm{d}p'_\perp\right) , \qquad (2.11)$$
with a combined Sudakov factor. Each MPI gives further incoming and outgoing partons that can radiate, so the ISR and FSR sums now both run over an increasing number of potentially radiating partons. The decreasing p⊥ scale can be viewed as an evolution towards increasing resolving power; given that the event has a particular structure when activity above some p⊥ scale is resolved, how might that picture change when the resolution cutoff is reduced by some infinitesimal dp⊥? That is, let the "harder" features of the event set the pattern to which "softer" features have to adapt. Specifically, energy-momentum conservation effects can be handled in a reasonably consistent manner, where the hardest steps almost follow the standard rules, whereas soft activity is reduced by the competition for energy, mainly between ISR and MPI. For massless particles only kinematics variables such as p⊥ can set the scale. For weak showers the W/Z mass introduces an alternative scale, and this opens up ambiguities. Consider if a combination such as p²⊥evol + k m²_{W/Z}, with k as a free parameter, is used as ordering variable for W/Z emission (but otherwise not affecting kinematics). Then an increased k will shift W/Z emissions to occur earlier in the combined evolution, which gives them a competitive advantage relative to QCD/QED emissions. We will later study the impact of such possible choices.
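The interleaving of eq. (2.11) can be sketched as a competition in which ISR, FSR, MPI and weak emission each propose a trial scale and the highest one wins at every step. The trial generators below are toy placeholders, and the k·m²_Z shift for the weak trial only illustrates the ordering-variable ambiguity discussed above; none of the rates are realistic.

```python
import random

M2_Z = 91.19 ** 2
K_WEAK = 1.0   # free parameter k in the p_T^2 + k*m^2 ordering variable

def propose(pt2, strength, rng):
    """Toy trial scale below pt2, falling like a Sudakov with rate 'strength'."""
    return pt2 * rng.random() ** (1.0 / strength)

def interleave(pt2_max, pt2_min, seed=0):
    """Generate one common sequence of falling scales, as in eq. (2.11)."""
    rng = random.Random(seed)
    history, pt2 = [], pt2_max
    while True:
        trials = {
            "ISR": propose(pt2, 0.7, rng),
            "FSR": propose(pt2, 0.7, rng),
            "MPI": propose(pt2, 0.4, rng),
            # the weak trial evolves in the shifted variable pt2 + k*m_Z^2
            "weak": propose(pt2 + K_WEAK * M2_Z, 0.1, rng) - K_WEAK * M2_Z,
        }
        kind, best = max(trials.items(), key=lambda kv: kv[1])
        if best < pt2_min:
            return history
        pt2 = best
        history.append((kind, pt2))
```

Increasing K_WEAK shifts weak trials to win earlier in the sequence, mirroring the competitive advantage mentioned in the text.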
A key feature for the efficient generator implementation is that the real and virtual corrections exactly balance, i.e. that eq. (2.11) contains exactly the same dP expressions in the prefactor and in the Sudakov factor. This holds for QCD and QED emissions to leading-log accuracy, and also for Z⁰ ones, but not for W± emissions, owing to the above-mentioned Bloch-Nordsieck violations. These come about by a combination of two facts. Firstly, a real emission of a W± in the initial state changes the flavour of the hard process, while a W± loop does not. Secondly, the incoming state is not isospin invariant: the proton is not symmetric between u and d quarks, nor between other isospin doublets. Together this leads to a mismatch between real and virtual Sudakov logarithms, which is not reproduced in our implementation. In that sense our results on the reduced rate of events without a W emission are not trustworthy. But only the qq → qq processes with both quarks left-handed are affected, not ones where either quark is right-handed, nor qg → qg processes [6]. Also, real and virtual corrections cancel for final-state emissions, so only initial-state ones contribute. The total error we make on this count therefore is small, in particular compared with true NLO corrections beyond our accuracy.

Merging generics
One of the key techniques that will be used in the following is matrix-element merging [27][28][29]. It can be viewed as a precursor to PowHeg [30,31].
In a nutshell the philosophy is the following. Assume a Born cross section σ_B, usually differential in a number of kinematical variables that we do not enumerate here. The real NLO correction to this is dσ_R, differential in three further kinematical variables, with ratio dK_ME = dσ_R/σ_B. The parton-shower approximation also starts from σ_B and multiplies this with dK_PS, which represents the shower branching kernel, cf. eq. (2.1), summed over all possible shower branchings from the Born state, differential in Q², z and φ. At this stage the Sudakov form factor has not yet been introduced. Now ensure that dK_PS ≥ dK_ME over all of phase space, which may be automatic or require some adjustment, e.g. by a multiplicative factor. Then begin the evolution of the shower from a starting scale Q²_max downwards, with a first (= "hardest") branching at a Q² distributed according to
$$\mathrm{d}P(Q^2) = \mathrm{d}K_{\mathrm{PS}}\,\exp\left(-\int_{Q^2}^{Q^2_{\mathrm{max}}} \mathrm{d}K_{\mathrm{PS}}\right) . \qquad (2.12)$$
Since dK_PS is an overestimate, accept a branching with a probability dK_ME/dK_PS. This replaces the dK_PS prefactor in eq. (2.12) by dK_ME, but leaves the Sudakov unchanged. Now use the veto algorithm trick: when a Q² scale is not accepted, set Q²_max = Q² and continue the evolution down from this new maximal scale. This gives a distribution
$$\mathrm{d}P(Q^2) = \mathrm{d}K_{\mathrm{ME}}\,\exp\left(-\int_{Q^2}^{Q^2_{\mathrm{max}}} \mathrm{d}K_{\mathrm{ME}}\right) \qquad (2.13)$$
(for proof see e.g. [21]). Here the dependence on the original dK_PS is gone, except that the shower Q² definition is used to set the order in which the phase space is sampled. The soft and collinear divergences lead to eqs. (2.12) and (2.13) being normalized exactly to unity; a first emission is always found. In practice a lower cutoff Q²_min is always introduced; if the evolution falls below this scale then an event of the Born character is assumed. This preserves unitarity.
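The veto-algorithm trick above can be checked numerically. In the sketch below, trials follow an overestimating kernel dK_PS = C_PS·dQ²/Q², each accepted with a constant ratio standing in for dK_ME/dK_PS; both constants are purely illustrative. The accepted scales then follow eq. (2.13), which can be verified through the no-emission fraction.

```python
import random

C_PS, RATIO = 0.8, 0.6   # toy kernel strength and constant ME/PS ratio

def hardest_q2(q2_max, q2_min, rng):
    q2 = q2_max
    while True:
        q2 *= rng.random() ** (1.0 / C_PS)   # trial from the dK_PS Sudakov
        if q2 < q2_min:
            return None                      # Born-like event below the cutoff
        if rng.random() < RATIO:             # accept with dK_ME / dK_PS
            return q2

# Cross-check: the no-emission fraction should approach the ME Sudakov
# (q2_min/q2_max)^(C_PS*RATIO), i.e. the dK_PS dependence drops out.
rng = random.Random(42)
n = 100_000
n_none = sum(hardest_q2(100.0, 1.0, rng) is None for _ in range(n))
expected = (1.0 / 100.0) ** (C_PS * RATIO)
```

The rejected scales lower Q²_max and the evolution continues, exactly as described in the text.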
This completes the description of ME merging. In PowHeg the hardness scale is not based on any specific shower, but fills a similar function. More importantly, to achieve full NLO accuracy, PowHeg normalizes the differential cross section in eq. (2.13) to σ B + σ V + dσ R , where σ V are the virtual corrections, including PDF counterterms. We will not here aim for a corresponding NLO accuracy, but keep open the possibility to multiply by an overall "K factor", which catches the bulk of the NLO effects.
A simple application of ME merging is W/Z + 1 jet, starting from the Born W/Z production process [29]. The qq → Zg (or qq → Wg) final state can be reached by two shower emission histories, which match the t- and u-channel Feynman graphs of the matrix elements. It is found that 1/2 < dK_ME/dK_PS ≤ 1, so that Monte Carlo rejection is straightforward. (The original result was found for an evolution in virtuality rather than in p²⊥, but both give the same result since dQ²/Q² = dp²⊥/p²⊥ and z is the same.) The qg → Zq (or qg → Wq′) process has one shower history, with a g → qq branching, that corresponds to the u-channel Feynman diagram, while the s-channel quark-exchange diagram has no shower correspondence. In this case 1 ≤ dK_ME/dK_PS ≤ (√5 − 1)/(2(√5 − 2)) < 3, which requires the shower emission rate to be artificially enhanced for Monte Carlo rejection to work. For both processes agreement is found in the p⊥ → 0 limit, as it should be, with increasing discrepancies at larger p⊥, but still only by a modest factor right up to the kinematical limit.
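As a small worked check of the qg → Zq bound quoted above: multiplying numerator and denominator of (√5 − 1)/(2(√5 − 2)) by (√5 + 2) gives (3 + √5)/2, the square of the golden ratio, numerically about 2.62 and thus safely below 3.

```python
import math

# Numerical check of the qg -> Zq merging bound quoted in the text.
bound = (math.sqrt(5.0) - 1.0) / (2.0 * (math.sqrt(5.0) - 2.0))
golden_sq = ((1.0 + math.sqrt(5.0)) / 2.0) ** 2   # phi^2 = (3 + sqrt 5)/2
```

An enhancement factor of 3 on the trial emission rate is therefore always sufficient for the hit-and-miss step.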
It is plausible that the (uncorrected) PS underestimate of the qg → Zq emission rate at least partly is related to it missing one Feynman graph. If so, the shower description of W/Z + ≥ 2 partons can be expected to do even worse, since it misses out on further diagrams. This is the behaviour observed in data [32][33][34]. By also starting up from QCD 2 → 2 processes, while avoiding doublecounting, the hope is to raise this rate.
Given that the ME merging approach has been used to select the hardest emission, a normal shower can be attached for subsequent emissions from this scale downwards. Normally these emissions would be based on the shower algorithm pure and simple. In some cases it may be convenient to use the merging approach also for subsequent emissions, notably for massive particles in the final state, where the suppression of collinear radiation may not be fully described by the shower [28,35]. Although the ME is not the correct one for consecutive emissions, it still encodes the suppression from mass terms to a reasonable approximation, at least as well as a shower could. The one modification is to apply it to changed kinematical conditions, e.g. to a gradually decreasing dipole mass for FSR. We will come back to this point.

Pure final-state emissions
As a starting point for FSR we consider the simplest possible case, when a Z or W is radiated in the final state of an s-channel process such as q₀q₀ → g*(0) → qq → q(1) q(2) Z⁰(3) (or q₀q₀ → g*(0) → q(1) q(2) W(3)). Using CalcHEP [36] for these and subsequent ME calculations, the matrix element can be written in terms of the variables x_i = 2p₀·p_i/p₀² = 2E_i/E_cm, with the latter expression valid in the rest frame of the process, and r_i = m²_i/E²_cm, here with the quarks assumed massless, eq. (2.14). In order to arrive at this result, the ME was integrated over three angular variables. Setting r₃ = 0, the kinematics dependence reverts to the familiar one for three-jet events in e⁺e⁻ annihilation, as it should. The α_eff values are provided in eqs. (2.5)-(2.8).
Owing to the W/Z mass, the phase space for a weak emission is considerably different from that of a QCD one. Notably the soft and collinear divergences lie outside the physical region. Within the accessible region we would like to use the matrix-element merging approach to achieve as accurate a description as possible. As a first step, an overestimate of the ME is introduced, eq. (2.15), with N = 8, which translates into an overestimated emission probability dP^over_{q→qZ} that later is to be corrected. The emission of heavy bosons in final-state radiation has already been considered in the context of massive Hidden-Valley photons [35], and therefore only a short review is provided here. Consider the process p₀ → p₁₃ + p₂ → p₁ + p₂ + p₃, where all particles are allowed to be massive. While the matrix elements are described by x₁ and x₂ (after a suitable integration over angles), the parton shower is described in terms of p²⊥evol and z, which in the massless limit equals x₁/(x₁ + x₃). For the massive case it is convenient to start out from two massless four-vectors and then create the massive ones as linear combinations of them, eq. (2.20). This new energy sharing corresponds to a rescaled z. The p²⊥evol and z expressions can be combined to give the Jacobian of eq. (2.23). Note that the shower expressions so far only referred to emissions from the q(1), whereas the matrix elements also include emissions from the q(2) and interferences. For a ME/PS comparison it is therefore necessary either to sum the two PS possibilities or to split the ME expression. We choose the latter, with a split in proportion to the propagators, which gives a probability for the q(1) emission. Thus we arrive at the ME/PS correction factor of eq. (2.25). All the explicit dependence on m₃ is located in k₁ in the last factor, but implicitly the whole kinematics setup is affected by the value of m₃.
The emission of W bosons introduces flavour changes in the shower, and thus also the need for implementing the full CKM matrix in the emissions, eq. (2.7). The change of flavour to top is excluded owing to the high mass of the top quark, which significantly reduces W emission off b quarks. All quarks are considered massless in the ME weights, but proper masses are included in the kinematics calculations, as demonstrated above.
The ME merging technique, viewed as a correction to the LO expression, is properly valid only for the first branching. The arguments for including a sensible behaviour in the soft and collinear regions remain, however. Therefore eq. (2.25) can be applied at all steps of the shower evolution. That is, starting from an original qq dipole, the downwards evolution in p 2 ⊥evol gradually reduces the dipole mass by the g/γ/W/Z emissions. When a W/Z is emitted, the ME correction is based on the current dipole mass and the emission kinematics. This is particularly relevant since it may be quite common with one or a few QCD emissions before a W/Z is emitted.
In non-Abelian theories the radiated gauge bosons themselves carry charge and can radiate further. For QCD emissions this is well approximated by the dipole picture, where each emission of a further gluon leads to the creation of a new dipole. Similarly the emission of a W/Z leads to more weak charges, with the possibility of non-Abelian branchings W± → W±Z⁰ and Z⁰ → W⁺W⁻. So far we have not included these further branchings, and therefore preserve the original qq weak dipole when a W/Z is emitted. This will imply some underestimation of the multiple-W/Z production rate.
New qq pairs can be created within the shower evolution, e.g. by gluon branchings g → qq. These are considered as new weak dipoles, and can thus enhance the rate of W/Z emissions.

Pure initial-state emissions
As a starting point for ISR we here instead consider a process such as q(1) q(2) → Z(3) g*(4) (or q(1) q(2) → W(3) g*(4)), where the subsequent g* → q₀q₀ (or g* → gg) decay has been integrated out, with the matrix element given in eq. (2.26). The ISR kinematics is already set up to handle the emission of a massive particle, e.g. in b → gb, with a b quark in the final state. The ME correction machinery [29] has only been set up for the emission of a massless particle, however, so some slight changes are necessary. For the case that the W/Z is emitted by the incoming parton 1, the Mandelstam variables ŝ, t̂ and û are modified accordingly, eq. (2.27). It turns out that the massless DGLAP kernel of eq. (2.2) is not an overestimate of the ME of eq. (2.26). Instead a slightly modified splitting kernel is used, eq. (2.28), which recovers the standard DGLAP kernel in the massless limit. Using the Jacobian dt̂/t̂ = dp²⊥evol/p²⊥evol, the shower emission rate translates into the form of eq. (2.29).
Adding the emission from parton 2, easily obtained by t̂ ↔ û, gives the full PS rate, eq. (2.30). In this case it is convenient to use W = W_ME/W_PS as ME correction factor. That is, the full ME is compared with the sum of the two PS possibilities, unlike the FSR case, where the ME is more easily split into two parts, each compared with a single shower history. That the modified DGLAP kernel is an upper estimate is most easily seen by taking the ratio of the PS weight to the ME one, eq. (2.31). A new upper estimate for the range of allowed z values is needed, since the standard one enters unphysical regions of the modified DGLAP kernel, turning the PS weight negative. This is not a surprise, since the standard upper estimate does not include massive emissions. The upper estimate chosen, eq. (2.32), should ensure that the emitted particle always has enough energy to acquire its mass and the chosen p²⊥evol. It is not formally proven to be an upper limit, but works for all studied cases.
The handling of CKM weights for W emission becomes slightly more complicated in ISR than in FSR, owing to the presence of PDFs in the evolution. The PDF ratio in eq. (2.10) is generalized to a CKM-weighted sum over possible incoming flavours, from which an upper estimate is used in the downwards evolution with the veto algorithm. For a trial emission the relevant part of the acceptance weight is then given by eq. (2.37). Once a branching has been accepted, the new mother flavour a is selected in proportion to the terms in the numerator sum. As for final-state radiation, the ME merging weight will be used not only for a W/Z emission in direct association with the hard process, but for all branchings in the backwards evolution. All final-state particles are then lumped into one single effective particle, like the g* above.

Mixed initial-final-state emissions
In addition to the pure-final or pure-initial topologies, the two other relevant possibilities are with one or two quark lines flowing through the hard 2 → 2 topologies, i.e. qg → qg and qq → qq .
It would have been tempting to use the ME correction factors as above for FSR and ISR. Unfortunately this does not give a particularly good agreement with the qg → qgZ⁰ matrix element. Specifically, whereas s-channel processes tend to populate the available phase space with only a dp²⊥/p²⊥ fall-off, the coherence between ISR and FSR in t-channel processes leads to a destructive interference that reduces emissions at large angles [37]. Thus emission rates depend on the t̂ of the core 2 → 2 process, not only on its ŝ. Therefore we have chosen to base the ME corrections on the full 2 → 3 kinematics.
The general strategy will be to use that the three-body phase space can be split into two two-body ones, with an intermediate state i, e.g.
$$\mathrm{d}\Phi_3 = \mathrm{d}\Phi_2(a + b \to i + 2)\;\frac{\mathrm{d}m^2_i}{2\pi}\;\mathrm{d}\Phi_2(i \to 1 + 3) .$$
One of the dΦ 2 factors will be associated with the QCD hard 2 → 2 process, whereas the rest comes from the shower branching. This way it is possible to compare the 2 → 3 ME with the 2 → 2 ME + shower in the same phase space point, with proper Jacobians.
To begin with, consider the simpler first process, qg → qg, with an additional Z 0 emission, labeled as q(a) g(b) → q(1) g(2) Z 0 (3). We will first outline the procedures for FSR and ISR separately, and then explain how to combine the two, and how to modify for W ± emission.
For FSR the intermediate state is the virtual quark that emits the Z⁰, q(a) g(b) → q*(i) g(2) → q(1) g(2) Z⁰(3), which gives the phase-space separation with i = q*. Rewriting the second dΦ₂ in terms of angles in the i rest frame, the 2 → 3 ME can be expressed in these variables, while the 2 → 2 ME combined with the shower instead gives an answer in terms of the shower variables p²⊥evol and z. Here dΦ₂(i + 2) represents the outgoing i before it acquires a mass by the q* → qZ⁰ branching, as assumed for the initial 2 → 2 QCD process. The correct phase space, used in the ME expression, is scaled down by a factor β_i2 = 1 − m²_i/ŝ. To compare the two rates, it is necessary to convert between the standard two-body phase-space variables and the shower ones: inserting k₁ = m²₃/m²_i and k₃ = 0 into eq. (2.22), the derivative d(cos θ*)/dz can be read off. This gives a ME correction weight W_FSR to the shower.
For ISR the intermediate state is instead the spacelike quark q* after having emitted the Z⁰, which gives the analogous phase-space separation. The first dΦ₂ is here rewritten in terms of angles in the a + b rest frame and compared with the corresponding shower expression. The relation m²_i = zŝ gives dm²_i/dz = ŝ. To relate cos θ and p²⊥evol it is convenient to go via the spacelike virtuality Q² of the q* propagator. With p_{a,b} = (√ŝ/2)(1; 0, 0, ±1) in this frame, p₃ can be parametrized by its energy and polar angle, and one finds ŝ β₃ᵢ d(cos θ)/dp²⊥evol = 2/(1 − z). Put together, this gives the ISR correction weight W_ISR of eq. (2.52).
Two further aspects need to be considered. Firstly, the 2 → 3 ME expression should be compared with the sum of the FSR and ISR contributions. This could become tedious, so here a simpler route is to split the ME into two parts, one that is used for the FSR reweighting and another for the ISR one. The relative fractions are chosen by the respective propagator, which gives an additional factor, eq. (2.53). Secondly, there are some differences for W emission. As in the s-channel case, the ISR has to include CKM-weighted PDFs and choices of incoming flavour.
The flavours in the hard process are also different for ISR and FSR: a process like ug → dgW + has a QCD subprocess dg → dg for ISR and ug → ug for FSR. Since QCD is flavour-blind, and the MEs are for massless quarks, this is only a matter of bookkeeping.
The matrix elements for processes like qq′ → qq′ Z⁰ (and the corresponding quark-antiquark ones), with q′ ≠ q, are pure t-channel. They therefore have a somewhat different structure from the qg → qgZ⁰ ones. The general pattern with four radiating partons can be quite complex, so for the purpose of a correction to the parton shower we have chosen to neglect the cross-terms between emissions from the q and q′ flavour lines. That is, the 2 → 3 ME used for correcting emissions off the q flavour line is obtained by letting the couplings to the q′ line vanish. As we will show later on, this is a reasonable approximation. From there on, the procedure is as for qg → qgZ⁰: the remaining ME is split into one part associated with FSR and another with ISR, and for each of them a correction from PS to ME is done using either W_FSR or W_ISR.
For qq → qqZ 0 it is not possible to separate by couplings. Instead the fermion lines are picked probabilistically with equal probability for each combination. Thereafter each line is considered as in the qq → qq Z 0 case.
Finally, a qq → qq process is handled as pure s-channel, just like a qq → q q one.
The description so far has been formulated in terms of corrections to a W/Z emission as the first branching attached to a 2 → 2 QCD process, i.e. what the matrix elements have been calculated for. But for it to be useful, the corrections must be applicable for emissions at any stage of the shower, i.e. following a number of earlier QCD, QED and weak emissions. To do that, the whole system is converted to a pseudo 2 → 2 process, for which the ME correction procedure can be applied as above. In particular, this should guarantee a proper account of W/Z mass effects.
For FSR, a recoiler is always chosen in the final state. For a process like qq → qq the initial q flavour is considered as recoiler to q, however many branchings have occurred. For qg → qg, in a consecutive branching g → g₁g₂ the new recoiler is chosen to be the one of g₁ and g₂ that forms the largest invariant mass together with q. The kinematics of the branching process is first boosted longitudinally to the rest frame of the two incoming partons of the original 2 → 2 process, and thereafter boosted to the rest frame of the radiator + recoiler. The momenta of the two incoming partons, still along the beam axis, are rescaled (down) to the same invariant mass as the outgoing state. Thus a consistent 2 → 2 → 3 kinematics is arrived at, and ME corrections can be applied to this sequence as before.
For ISR there is always a unique recoiler, given by the opposite incoming parton. In this case a core 2 → 2 process is constructed in its rest frame, with incoming partons that need not agree with the original ones, while the original outgoing partons are scaled (up) to the same invariant mass. Thus the scattering angle is preserved, in some sense. The relevant Z emission is then added on to this kinematics, and the ME correction weight can be found.

Doublecounting with weak Born processes
Throughout the description, doublecounting issues have appeared. The 2 → 3 ME has been split into two parts, one used for the FSR ME corrections, and the other for the corresponding ISR ones. Within either of ISR or FSR, the possibility of radiation from two incoming or two outgoing partons is also taken into account. There remains one significant source of doublecounting, however, namely the production of a W/Z as a Born process, followed by further QCD emissions. That is, starting from qq → Z⁰, first-order topologies qq → gZ⁰ and qg → qZ⁰ will be generated, and from those qq → ggZ⁰, qq → q′q′Z⁰, qg → qgZ⁰ and gg → qqZ⁰. It is therefore possible to arrive at the same set of 2 → 3 processes either from a weak or a QCD base process, which opens up another type of doublecounting.
The two production paths, here denoted "weak" or "QCD" by the base process, are expected preferentially to populate different phase space regions. To begin with, consider only ISR emission, and recall that branchings are ordered in p⊥evol, which approximately translates into ordering in ordinary p⊥. In the weak path, the Z⁰ and its recoiling parton therefore are produced at a larger p⊥ scale than the further parton produced by the next PS branching. By contrast, in the QCD path the Z⁰ will be added at the lower p⊥. Similarly, FSR in the weak path allows one parton to split into two preferentially nearby partons, which thereby both tend to be opposite to the Z⁰, while FSR in the QCD path would preferentially place the Z⁰ close to either outgoing parton.
What complicates the picture above is the use of ME corrections for the QCD path, which are there to include W/Z mass effects and ISR/FSR interference, but as a consequence also weights up the singular regions associated with the weak path. This makes the doublecounting issue more severe than if either path only had non-singular tails stretching into the singular region of the other path. As a technical side effect, the Monte Carlo efficiency of the QCD path elsewhere can become low, since the upper limit for the ME/PS ratio, needed to set the trial sampling rate, becomes larger the closer to the "unexpected" singularities the QCD path is allowed to come. By contrast, the Pythia description of W/Z production only performs ME corrections for the first emission, as already discussed, so the weak path is not corrected by any 2 → 3 MEs.
The solution we have adopted to this issue is to separate the full 2 → 3 phase space into two non-overlapping regions, in the spirit of the k⊥ clustering algorithm [38,39]. That is, for a 2 → 3 phase-space point, define distances that represent the relative closeness to the ISR and FSR singularities, respectively, with R providing the relative normalization of the two. Then find the smallest of these distances, disregarding d_ij combinations that are not associated with ME singularities, such as Z⁰g or qq. Associate the phase-space point with the weak path if a parton is closest to the beam or if two partons are closest to each other, and with the QCD path if the Z⁰ is closest to the beam or to a quark. Starting from weak production, this means that a check is made after the shower has emitted two partons, and if the phase-space point lies in the QCD-path region the event is rejected. Events with at most one branching thus are not affected, and emissions subsequent to the first two are not checked any further. Starting from a QCD event, the emission of a Z⁰ is vetoed if it falls in the weak-path region. Not much should be made of the asymmetric treatment, in one case the veto of the whole event and in the other only of the branching: were it not for the ME correction on the QCD path then neither path would populate the "wrong" region to any appreciable extent. The weak-path choice is motivated by starting out from a qq → Z⁰ cross section that is inclusive, so that the addition of the QCD path should be viewed as swapping in a better description of a region that already is covered. A corresponding argument for the QCD-path evolution is less obvious, and it is simpler to operate as if Z⁰ emissions into the wrong region do not form a part of the shower evolution.
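The classification above can be sketched as follows. The concrete distance measures (p²⊥ to the beam, min(p²⊥)·ΔR²/R² between objects) are assumptions patterned on standard k⊥ definitions, both partons are treated generically, and azimuthal wrap-around is ignored; the actual measures used may differ in detail.

```python
R2 = 1.0   # R^2, relative normalization of pairwise vs beam distances

def d_beam(obj):
    pt2, _, _ = obj
    return pt2

def d_pair(o1, o2):
    (pt2a, ya, fa), (pt2b, yb, fb) = o1, o2
    return min(pt2a, pt2b) * ((ya - yb) ** 2 + (fa - fb) ** 2) / R2

def weak_path(partons, z0):
    """partons: two (pt2, y, phi) tuples; z0: (pt2, y, phi) of the Z0.
    True  -> weak path (a parton is closest to the beam or to the other parton)
    False -> QCD path  (the Z0 is closest to the beam or to a parton)."""
    dists = [(d_beam(partons[0]), True), (d_beam(partons[1]), True),
             (d_pair(partons[0], partons[1]), True),
             (d_beam(z0), False),
             (d_pair(z0, partons[0]), False), (d_pair(z0, partons[1]), False)]
    return min(dists)[1]
```

For example, a soft Z⁰ near the beam is assigned to the QCD path, while a hard wide-angle Z⁰ recoiling against two nearby partons is assigned to the weak path.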

Other shower aspects
In the description so far the choice of W/Z mass has not been mentioned. The practical implementation is such that a new W/Z mass is chosen each time a trial W/Z emission is to be defined, according to a relativistic Breit-Wigner with running width. This allows the W/Z mass distribution to be reweighted by the mass dependence of matrix elements and of phase space.
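A minimal sketch of how such a trial mass could be sampled, here with a flat-proposal hit-and-miss against a scanned envelope (the Pythia internals use a more efficient scheme; the 10 GeV lower cutoff anticipates the discussion below, and the constants are indicative values only):

```python
import math
import random

M_Z, GAMMA_Z = 91.188, 2.495   # indicative Z mass and width in GeV
M_MIN = 10.0                   # lower cutoff on the trial boson mass

def bw_density(m):
    # relativistic Breit-Wigner with running width, as an
    # unnormalized density in m:
    # 2m * m^2 / ((m^2 - M^2)^2 + (m^2 Gamma/M)^2)
    s = m * m
    return 2.0 * m * s / ((s - M_Z ** 2) ** 2 + (s * GAMMA_Z / M_Z) ** 2)

def sample_trial_mass(m_high=500.0, rng=random.Random(1)):
    # crude envelope: scan for the maximum of the density, then
    # standard hit-and-miss against a flat proposal
    grid_max = max(bw_density(M_MIN + i * (m_high - M_MIN) / 2000)
                   for i in range(2001))
    env = 1.1 * grid_max
    while True:
        m = M_MIN + (m_high - M_MIN) * rng.random()
        if env * rng.random() < bw_density(m):
            return m
```

Accepted masses can then be reweighted further by the mass dependence of the matrix elements and of phase space, as described in the text.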
In addition, by default there is a lower cutoff at 10 GeV for the W/Z mass. This is intended to avoid doublecounting between the PS description of γ * production below this scale and the ME description of γ * /Z 0 production above it. For the purposes of this study the contribution below 10 GeV is negligible. More relevant is the absence of the γ * contribution above 10 GeV, and the γ * /Z 0 interference contribution everywhere. This could become a further extension some day, but would involve relatively minor corrections to the overall picture, which is dominated by the W/Z peak regions.
The emitted weak bosons are decayed after the evolution of the full parton shower, into the possible decay channels according to their partial widths. In order to achieve a better description of the decay angles, a ME correction is applied. For FSR this is corrected to the ME of a four-body final state, e.g. g * → uu → uuZ → uue + e − . The ME is based on the helicity previously chosen for the radiating fermion line. Since the weak boson is already produced, all overall factors that do not depend on the decay angles are irrelevant, including the W/Z propagators. An upper estimate of the ME expression is obtained by taking four times the maximum obtained for six different decay directions (±x, ±ŷ, ±ẑ in the W/Z rest frame); empirically this always works. Then standard hit-and-miss Monte Carlo is used to pick the decay direction. For ISR the same method is applied, the only difference being that the ME is changed to that of uu → g * Z → g * e + e − . In the case of the mixed initial-final state, the same two MEs are applied and the choice between them is made depending on where in the shower algorithm the emission is produced.
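The hit-and-miss selection of the decay direction can be sketched as follows, with a toy 1 + cos²θ weight standing in for the actual four-body ME (which depends on the radiator helicity); the names and the weight function are illustrative only.

```python
import math
import random

AXES = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def decay_weight(n):
    # stand-in angular weight for the decay ME in the boson rest
    # frame; a toy 1 + cos^2(theta) shape replaces the real four-body
    # matrix element for illustration
    return 1.0 + n[2] * n[2]

def pick_decay_direction(rng=random.Random(2)):
    # upper estimate: four times the maximum weight over the six axis
    # directions, then standard hit-and-miss on isotropic trials
    w_max = 4.0 * max(decay_weight(a) for a in AXES)
    while True:
        cos_t = 2.0 * rng.random() - 1.0              # isotropic trial
        phi = 2.0 * math.pi * rng.random()
        sin_t = math.sqrt(1.0 - cos_t * cos_t)
        n = (sin_t * math.cos(phi), sin_t * math.sin(phi), cos_t)
        if w_max * rng.random() < decay_weight(n):
            return n
```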
After the decay of the weak boson, a new parton shower is started, with the two decay products defining the first dipole.
The implementation of the weak shower only works for 2 → 2 or 2 → 1 hard processes. The reason is that the mixed initial-final-state ME correction relies on a 2 → 2 hard process, and if the starting point were a 2 → 3 process it would not always be possible to identify a unique 2 → 2 process.

Validation
In this section we collect some checks on the functioning of the weak-shower implementation. This provides insight into the approximations made and their limitations. Needless to say, many further checks have been made.

Control that parton showers offer overestimates
As the implementation relies heavily on correcting the shower behaviour by a ME/PS ratio, it is relevant to study the correction procedures. Specifically, the uncorrected PS should everywhere have a higher differential rate than the corresponding ME-corrected one has. Results for the s-channel process, as a function of the evolution variable, can be seen in Fig. 1. The FSR results are obtained with N = 8 in eq. (2.16), and so the rather crude overestimate of the ME expression is not unexpected. The ISR uses an overestimate specifically designed for the weak shower eq. (2.30), which does a better job at imitating the behaviour of the ME. The difference between the two curves is largest for small p ⊥evol , whereas for larger momenta the agreement improves. This is expected since the mass of the weak bosons is more important in the low-p ⊥evol region. The reason that the PS without any correction does not diverge for p ⊥evol → 0 is the purely kinematic restriction from the emission of a heavy boson. Around the PS peak the ratio between the uncorrected and the corrected number of events goes above 100. The generation of weak emissions therefore is rather inefficient, leaving room for improvements. But it should be remembered that the QCD shower part produces more trial emissions than the weak shower one does, and that therefore the overall simulation time should not be significantly affected by the weak shower. Similar results but as a function of the energy-sharing variable, z, can also be seen in Fig. 1. The FSR overestimate has the same structure as the ME and only an overall factor differs between the two. The ISR overestimate gets slightly worse for high and low values of z.
For the t-channel processes the PS is not guaranteed to be an overestimate of the ME. Indeed, for all the processes there are emissions with weights above unity, and significantly so. This indicates a divergence in the ME that is not covered by the PS. It turns out that the problematic type of events contain a low-p ⊥ parton in the final state, quark or gluon, or two almost collinear partons. This can be seen in Fig. 2 for the ug → ugZ process, where the weight becomes high as the quark becomes collinear with the gluon. These types of events were discussed in the doublecounting section: in a PS approach they should be produced by a Drell-Yan weak boson production followed by QCD emissions, and not by emission of weak bosons within the PS. Once the doublecounting cuts are introduced, Fig. 2, the weights are much better behaved than before. The hard cutoff at ∆R = 2π/3 is due to momentum conservation in the three-particle final state. A few phase-space points remain with weights above unity. Events in such points are always accepted, but only with the standard unit weight, so the PS produces too few events in these regions. Given the tiny fraction involved, this effect should be small in most phenomenological studies. Similar to what was seen for the s-channel overestimate, the ISR overestimates behave nicely. Almost all the ISR weights stay within a band between 0.01 and 1. In addition to the aforementioned problem with weights above unity, the FSR has a large bulk of trial emissions with very low weights, making the generation inefficient.

Transverse momentum distribution of the weak boson
It is possible to validate the implementation of the PS by comparing it to ME calculations for 2 → 3 processes, and to this end we have used CalcHEP to generate events according  to various MEs. Strictly speaking this only ensures that the first emission is correct and does not reveal anything about how the PS describes later emissions.
In order for the comparison to be meaningful, the Sudakov factors have to be removed from the PS. This can be achieved by a veto on each emission after statistics on it has been collected. The evolution thus continues downwards from the vetoed scale as if there had been no branching, which cancels the Sudakov factor. To match the choices made for the ME calculations, the factorization and renormalization scales are held fixed at the Z 0 mass throughout the shower evolution. In addition, the starting scale for the emissions was set to √ s in order to fill up the full phase space. For comparisons in pure s-channel processes, an ISR and an FSR part can be read off from the full ME, using the coupling structure. It is therefore possible to compare these parts individually with their respective PS type. Since the PS is already corrected to these MEs, a perfect match is expected and also observed, see Fig. 3. For the combined case, this is no longer true, since the PS does not include the interference effects present in the full ME answer. This difference amounts to the order of 10% in the chosen phase-space region, with similar numbers in several other phase-space regions that have been tested. The PS is only expected to be an approximation to the full MEs, and therefore the discrepancy just shows with which accuracy the PS works.
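The veto trick that cancels the Sudakov factor can be illustrated with a toy one-dimensional shower with a constant emission density; everything here (the constant rate, the evolution range) is illustrative, not the Pythia internals.

```python
import math
import random

RATE = 2.0  # toy constant emission density in the evolution variable t

def shower(t_start=1.0, t_cut=0.0, veto=True, rng=random.Random(3)):
    # toy 1D shower: the next trial scale solves the Sudakov relation
    # exp(-RATE * (t - t_next)) = r.  With veto=True every emission is
    # recorded and then vetoed, so evolution continues downward from
    # the vetoed scale and the Sudakov factor cancels in the recorded
    # rate; with veto=False the evolution stops at the first emission.
    recorded, t = [], t_start
    while True:
        t = t + math.log(1.0 - rng.random()) / RATE
        if t <= t_cut:
            return recorded
        recorded.append(t)
        if not veto:
            return recorded
```

With the veto, the recorded emissions follow the bare density (flat in t here, with on average RATE emissions per unit of evolution); without it, the first-emission distribution carries the Sudakov suppression towards low scales.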
Since the t-channel processes do not admit a natural split between ISR and FSR, only the combined results are relevant. The ug → ugZ, ud → udZ and uu → uuZ comparisons are shown in Fig. 3. For the quark-gluon hard process a perfect match is expected and observed, since the full ME correction is used. It also shows that, at least in this part of  phase space, the small problem with weights above unity is negligible (not a surprise given the cuts chosen). The ud → udZ and uu → uuZ cases both have a discrepancy between the MEs and the PS. In both cases this comes from the PS ignoring interference between emissions of weak bosons from different fermion lines. For the latter case there is a further problem, namely that the applied ME correction is that of ud → udZ and not uu → uuZ.

Angular distributions of the weak boson decay products
The angles between the decay products of the weak boson and the other partons in a 2 → 4 process (e.g. qq → qqZ 0 → qqe − e + ) are by construction matched to the angles calculated from the correct MEs for s-channel processes, separately for ISR and FSR. The same s-channel-derived corrections are used also for t-channel processes. To check the validity of this approach, the angular distributions between the partons and the leptons have been studied for the ug → uge + e − case, see Fig. 4. The angle is defined in the rest frame of the decaying weak boson. Since only the shape is relevant, all the curves are normalized to unit area. The angle between the quark and the electron is well described by this ME correction, whereas the same cannot be said for the angle between the electron and the gluon: the PS prediction is almost flat, while the ME has peaks around the collinear and anti-collinear regions. Note the suppressed zero on the y axis of the figure, however, such that both distributions stay within ±20% of being flat. The discrepancy could influence relevant observables, one obvious candidate being the isolation of leptons: if the gluon and the electron are less likely to be collinear, there is a higher probability for the electron to be isolated. It should be noted that these calculations are in the rest frame of the decaying weak boson, and that the decay products of a boosted Z 0 will tend to be collimated with each other and with the emitting parton, away from the g direction.

Figure 5. Comparison between the 2 → 3 ME calculation and the prediction from the PS for the uū → udW − process, including only electroweak diagrams. Similar cuts as in Fig. 3 were applied.

W emission in QED hard processes
So far all the validation tests shown have been for the emission of Z 0 bosons. For QCD hard processes and the emission of a single weak boson these results translate directly to the W ± cases. This does not apply for a hard QED process, e.g. qq → γ * → e + e − .
Here the emission of a Z 0 can be split into ISR and FSR parts, as before, whereas a W ± additionally can couple to the γ * . Then an attempted split into ISR and FSR becomes gauge dependent, and can individually become negative for certain regions of phase space.
To study this phenomenon, consider the simplest possible s-channel FSR case, γ * → udW − , and compare it with the three related γ * → uuZ 0 , g * → udW − and g * → uuZ 0 ones. In each case two possible Feynman diagrams for the emission of a W off a quark are included and, in Feynman gauge, the squared MEs take the common form with the same definitions as before for x 1 , x 2 and r. The c, α 1 , α 2 and β are coefficients that depend on the specific process. For the three reference processes the coefficients are α 1 = α 2 = β = 1, suitably normalized, cf. eq. (2.14). For the γ * → udW − process, however, the coefficients become α 1 = e 2 u = 4/9, α 2 = e 2 d = 1/9 and β = e u e d = −2/9. This gives a cross section that is negative over a large fraction of the phase space. The reason obviously is that we have neglected a third diagram specific to this case, involving the triple-gauge-boson vertex γ * → W + W − , which restores positivity.
The introduction of a complete electroweak gauge boson shower is beyond the scope of this study. For now we therefore handle cases like this using the same shower couplings and ME corrections as for the cases with a gluon propagator. To estimate the effect of this approximation, the 2 → 3 ME for uu → udW − with all electroweak diagrams included, but not QCD ones, was compared to the prediction from the PS, see Fig. 5. This includes s- and t-channel exchange of W ± , Z and γ, and is dominated by the t-channel contributions for the studied regions of phase space. The comparison looks reasonable for large p ⊥ values, but for small values the ME rate is about twice as large as the PS one. Such a qualitative agreement should be good enough, given the dominance of QCD processes at the LHC.

Studies of jet phenomena at LHC energies
We now turn to studies of how the introduction of a weak shower changes different observables at the LHC. Three representative examples have been chosen. Firstly, weak corrections to the exclusive di-jet production, and some other generic rate measures. Secondly, how likely it is to find a W/Z decaying hadronically inside a high-p ⊥ QCD jet. Thirdly, whether it is possible to describe the inclusive W/Z + jets cross sections that the ordinary PS fails to describe. Pythia version 8.181 was used for all the phenomenological studies. The choice of PDF was CTEQ6L [40], with a NLO running α s .


Exclusive di-jet studies
The calculation of Moretti, Nolten and Ross (MNR) [9,10] showed large negative O(α_s^2 α_w) corrections to jet production at hadron colliders, in the range 10-30% for jets with p ⊥ > 1 TeV. To put these numbers in perspective, we simulate 2 → 2 QCD hard processes with only the weak shower turned on. The weak correction is then defined by the rate at which at least one W/Z boson is produced, Fig. 6. That rate increases for larger p ⊥ of the hard process, partly by the PDF change from gluon-dominated to quark-dominated hard processes, partly by the intrinsic logarithmic increase of the emission rate. In our calculations the corrections are only in the range 4-14%, i.e. less than half of the corresponding MNR numbers. The comparison is not fair, however, since we only study O(α_w) corrections to O(α_s^2) hard processes, whereas MNR additionally includes O(α_s) corrections to O(α_s α_w) hard processes.
There is another difference between MNR and our PS, in that MNR includes Bloch-Nordsieck violation effects, whereas the PS approach is based on exact balance between real and virtual corrections to the di-jet production. But, as mentioned earlier, the BN violations are expected to be small at the LHC, and therefore the comparison should still be sensible.

Resummation and competition between the QCD shower and the weak shower
With the availability of a weak shower it is possible to study the effect of multiple weak boson emissions. This probability is largest for high-p ⊥ jets, but even for 2 → 2 QCD processes with p ⊥ > 1 TeV the probability for more than a single weak boson emission is found to be about one per mille at LHC energies, Fig. 7. For most (if not all) analyses the experimental uncertainty will be significantly above this probability, and it is therefore a good approximation to neglect the effects coming from other than the first W/Z boson. As already mentioned, the possibility of pure electroweak shower evolution, like γ * /Z 0 → W + W − , is not included here.

In a PS, unlike in a ME calculation, it is possible to tell when the weak emission occurs in the ordered shower evolution. It is therefore of interest to study the competition between QCD and weak emission. The non-negligible W/Z mass could make for ambiguities, which we explore by comparing the normal p ⊥evol ordering with an alternative sqrt(p ⊥evol^2 + k m_W/Z^2) one for weak branchings, with k = 1 taken as the extreme alternative. In Fig. 7 we show how many QCD branchings occur before the weak one, where a long tail is significantly reduced by the larger weak scale choice. The probability of having at least one preceding QCD emission is of order 40% in both extremes, however, underlining that large jet masses are common in the QCD evolution. Furthermore, the total number of weak bosons only varies by about 2% between the two choices of evolution scale. The chances of constraining k from data thus are limited. One possibility is to study the energy distributions inside a jet, but the problem here is the low experimental rate (to be discussed later). Another possibility is weak boson production in association with jets, where especially the events with a high jet multiplicity could be influenced by the choice of k, a topic that will be studied later in this section.
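The effect of k on the competition can be illustrated with a toy scale comparison; the function names are hypothetical, and in the real shower the trial scales are generated dynamically rather than given by hand.

```python
import math

def weak_competition_scale(pt_evol, m_boson, k):
    # effective ordering scale for a weak trial emission:
    # sqrt(pT_evol^2 + k * m^2); k = 0 is the standard p_T ordering,
    # k = 1 the extreme alternative discussed in the text
    return math.sqrt(pt_evol ** 2 + k * m_boson ** 2)

def first_branching(qcd_scale, weak_pt, m_boson, k):
    # whichever trial emission carries the larger ordering scale wins
    # the competition and occurs first in the ordered evolution
    weak_scale = weak_competition_scale(weak_pt, m_boson, k)
    return "weak" if weak_scale > qcd_scale else "QCD"
```

With k = 1 the boson mass is added to the weak scale, so a weak trial at modest p ⊥evol can still beat a harder QCD trial and thus occurs earlier in the evolution.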
In fixed-order perturbation theory there is no concept of an ordered evolution, and thus no notion of how many QCD emissions precede the weak one. The shower results here could be an indication of how high orders need to be included, in matching and merging schemes, so as to describe multijet production in association with a W/Z. Assuming a ME-to-PS matching at the W/Z emission scale, say, Fig. 7 suggests that at most 1% of the W/Z have more than five QCD emissions at a higher scale than the W/Z itself for a p ⊥ > 1 TeV jet. Thus, taking into account the two original jets, it would be necessary to include up to W/Z + 7 jets so as not to miss relevant multijet topologies down to the 1% level. This is entirely feasible with current technology. Thus the W/Z emission in showers does not address otherwise unapproachable topologies, but it is likely to address some issues considerably faster. And jet numbers will rise dramatically for matching scales below the W/Z one, making showers unavoidable in that region.

Substructure of a jet with weak bosons inside the jet
If a W/Z is radiated from a high-p ⊥ jet, it is probable that the W/Z will be boosted along the jet direction, and that the W/Z decay products will fall within a reasonably-sized jet cone. For leptonic decays it should still be possible to separate the decay products from the other components of the jet, but for hadronic decays the distinction will be more difficult. One possibility is to study the jet substructure and jet-jet invariant masses for signs of bumps at the W and Z masses.
One key challenge is the low W/Z emission rate, around 4% for p ⊥ > 1 TeV events in total, and not all of that inside jets. Another is that the QCD evolution itself produces a rich substructure, with a high rate in the W/Z mass region. Thus the signal of W/Z production inside jets would be small bumps sitting on top of a large but smooth QCD background. Before searching for W/Zs inside jets, let us take a step back and consider some of the more basic properties of W/Zs produced inside jets, Fig. 8. It is expected that the p ⊥ distribution for the weak bosons will be significantly harder than for those produced in Drell-Yan. The Drell-Yan production peaks at a few GeV, whereas the emissions peak at the mass of the weak bosons, mainly by simple phase-space effects in the PS.
The location of the weak boson within the jet is different from that of the normal QCD emission. This can be most easily realized by considering the ∆R and energy-sharing distributions, Fig. 9. The energy-sharing distribution for QCD has a peak around zero due to the soft divergence. Weak emissions do not have soft divergences, but instead have a hard cut-off due to the mass. The peaks observed at respectively ∆R = 0 and z = 1 are due to isolated weak bosons produced from either ISR or large-angle FSR emissions. This is in agreement with the ∆R = 0 peak disappearing when the weak boson is excluded in the jet clustering.
The above distributions are shown for two different competition schemes, the standard p ⊥evol -ordered one and the new sqrt(p ⊥evol^2 + M_W/Z^2)-ordered one. The two schemes give essentially the same results for all the distributions and thus none of the observables provide any separation power between the schemes.
Returning to the hadronically decaying W/Z inside jets, a simple phenomenological study has been carried out to determine whether it is possible to locate these. First all jets with p ⊥ > 1 TeV are found according to the anti-k ⊥ algorithm with R = 1, using FastJet [41]. Afterwards these jets are split into subjets by the trimming algorithm, with parameters Z = 0.05 and r tr = 0.2. The invariant mass of all subjet pairs is shown in Fig. 10. As a check, if only those events are singled out that contain a W/Z the signal stands out, and even more so for those jets that contain a W/Z. As a further step, mass spectra are compared with or without weak radiation in the shower, Fig. 10, together with the ratio. It is possible to see a difference between the two runs, but note that only statistical uncertainties have been included (1 000 000 events, corresponding to about 77 fb −1 ), and not any detector smearing effects.
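For orientation, the subjet pairing and the trimming criterion can be sketched as follows, with toy (pT, y, φ) tuples for massless subjets; the actual study of course uses full FastJet four-vectors, and the reclustering step that produces the subjets is assumed to have been done already.

```python
import math

def subjet_pair_mass(j1, j2):
    # invariant mass of two massless subjets given as (pT, y, phi):
    # m^2 = 2 pT1 pT2 (cosh(dy) - cos(dphi))
    pt1, y1, phi1 = j1
    pt2, y2, phi2 = j2
    return math.sqrt(2.0 * pt1 * pt2 *
                     (math.cosh(y1 - y2) - math.cos(phi1 - phi2)))

def trim(subjets, z_cut=0.05):
    # trimming criterion: keep subjets carrying at least a fraction
    # z_cut of the summed subjet pT (corresponding to Z = 0.05 above)
    total = sum(j[0] for j in subjets)
    return [j for j in subjets if j[0] >= z_cut * total]
```

Two back-to-back 45.6 GeV subjets at the same rapidity combine to a 91.2 GeV pair mass, i.e. a Z-like bump, while soft contamination below the z_cut threshold is trimmed away before the pairing.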

Weak boson production in association with jets
Inclusive weak boson production in association with jets has for a long time been known to be poorly described by the Pythia shower approach alone. The PS can describe the emission of the first jet to a decent level of agreement with data, but for additional jets the shower predictions fall below the observed data, increasingly so for more jets. The use of ME corrections for the first emission is not the reason the first jet works out: as we have already discussed, the uncorrected PS overestimates the qq → Zg process and underestimates the qg → Zq one, but by small amounts over most of phase space, and in such a way that the sum of the two comes out approximately right. Rather we would like to attribute the problem to the absence of weak emissions in showers, and we are now in a position to check this hypothesis.
The new shower framework has been compared with the W + jets and Z + jets data from the ATLAS experiment [34,42] using the Rivet framework [43]. The Pythia results are obtained as the sum of two components, one "weak (production) path" where the starting topology for shower evolution is a W/Z, and the other a "QCD (production) path" where the starting topology is a 2 → 2 QCD process. The former, being leading order, is well known to miss out on an overall K factor, which we address by normalizing this component to the inclusive W rate. For the Z production this rate is not quoted, so we instead normalize to Z + 1 jet. Empirically there does not seem to be a corresponding need for a large K factor in QCD 2 → 2, in the context of tunes where α s is among the free parameters, and so none has been used here. Doublecounting between the two paths is avoided as already discussed.
The inclusive W/Z cross sections as a function of the number of associated jets are shown in Fig. 11. The weak production starts to fall off at higher jet multiplicities, as foretold, where the QCD path becomes the dominant production channel. It is clear that the addition of the QCD path, absent in previous comparisons between data and Pythia, plays a key role in achieving a much improved agreement with data. The agreement for the first four jets is very good both for W and Z, the only slight problem being the W + 1 jet bin. For higher jet multiplicities the PS starts to overestimate the production. The discrepancy might very well be within tuneable parameters; for instance a small change in α s will have a large influence at high jet multiplicities. And given all the approximations made, the overall agreement might be better than one had initially expected.
Next we turn to more exclusive quantities, beginning with the jet p ⊥ spectra. These are known to fall off too rapidly for the weak path alone, Fig. 12. But here again the QCD path provides a slower drop that nicely takes over with increasing p ⊥ , giving a good overall agreement. This is not really surprising, given that the likelihood of emitting a W/Z increases with increasing p ⊥ of the QCD process. A further check is provided by the ϕ angle between the two leading jets. The QCD path starts out from two back-to-back jets, and part of that behaviour could be expected to survive the emission of a weak boson. For the weak path the jets come from ISR, and are therefore not expected to be particularly anticorrelated in ϕ. This is also what is observed in Fig. 13: the weak path is almost flat in ∆ϕ, whereas the QCD path gives a clear peak around ∆ϕ = π. Combining the two production channels does not give overwhelming agreement between data and the event generator, however, with data having a stronger peak structure.
As mentioned previously, the implementation introduces a new parameter, k, that changes the competition between the weak shower and the QCD shower. To test whether any of the weak boson plus jets observables are sensitive to the choice of k, two different simulations have been carried out. The first simulation uses the standard p ⊥evol competition with k = 0 and the second uses k = 1, so that the weak bosons are produced earlier in the shower. Fig. 14 shows the inclusive jet multiplicities for the two different competitions. It may be counterintuitive that the new competition produces a lower number of weak bosons. The explanation is that QCD emissions can open new paths that allow weak emissions. Consider for instance a gg → gg process at a hard scale of 75 GeV. This process cannot radiate any weak bosons with the new competition, since it requires the weak emission to happen prior to any QCD emission (75 GeV being below the W mass). In the standard competition it is possible to have a QCD emission prior to the weak emission, thus enabling for instance a gluon splitting into two quarks followed by the emission of a weak boson. This was also verified by considering only those events that had a hard scale significantly above the Z mass; for these, the two curves were equal within statistical uncertainties. The difference between the two competitions is not very large, however, and given the experimental uncertainty these observables do not provide any significant discrimination power. More differential distributions were also tested, but none allowed a better distinction between the competitions. Thus so far it has not been possible to find observables that can actually tell the two competitions apart.

Prospects for future colliders
The emission rate of weak bosons is expected to scale as α_w ln^2(ŝ/M_W/Z^2), and thus the effect of a weak PS is larger at colliders with a higher center-of-mass energy. One of the suggestions for a possible next step beyond the LHC is a new 100 TeV pp collider. In this section we redo some of the phenomenological studies presented in the previous section, but now with the center-of-mass energy cranked up accordingly.
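As a rough orientation on this scaling, one can compare the leading-log factor at two hard scales. The numerical value of α_w and the omitted process-dependent coefficient are indicative only, and the actual shower rise (14% to 30% below) is milder than the bare double logarithm because of PDFs and phase-space effects.

```python
import math

ALPHA_W = 0.034   # alpha_em / sin^2(theta_W), an indicative value
M_W = 80.4        # GeV

def ll_emission_scale(rts_hard):
    # leading-log scaling alpha_w * ln^2(s_hat / M_W^2) of the weak
    # emission probability; overall process-dependent factor omitted
    return ALPHA_W * math.log((rts_hard / M_W) ** 2) ** 2

# double-log growth between a 1 TeV and a 30 TeV hard scale
growth = ll_emission_scale(30000.0) / ll_emission_scale(1000.0)
```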
The weak virtual corrections to the di-jet exclusive cross section at 100 TeV are shown in Fig. 15. As before this equals the probability to emit at least one weak boson, up to a minus sign. For di-jets with a p ⊥ around 30 TeV the corrections reach about 30%, a significant increase compared to the maximal 14% at LHC energies. Clearly one will need to consider weak corrections for all processes that can have jets at large p ⊥ . Since the emission rate for a single weak boson is enhanced significantly, also the rate of multiple weak emissions goes up, Fig. 16. This is of special interest since currently the matching and merging schemes only describe a single emission of a weak boson. The probability for radiating at least 2 weak bosons stays within a few percent for inclusive di-jet production. The effects may be larger for more exclusive observables. For instance, in the production of a weak boson in association with jets, an additional weak boson would appear in ∼10% of the events (under the conditions of Fig. 16). It is also interesting to note that the larger available phase space means that more QCD emissions can precede that of a weak boson, Fig. 16. To again obtain a one percent accuracy the simulations now need to include up to 11 QCD emissions before the weak one, which is beyond current ME capability. A matching to a shower that can cover at least the softer W/Z emissions, relative to the large scales of the hard process, therefore offers obvious advantages.

Summary and outlook
In this article we have described an implementation of weak gauge boson emission as an integrated part of a standard parton-shower machinery, outlined its consequences and compared it with some relevant data. This is a first, to the best of our knowledge.
The challenges of obtaining a realistic description have been larger than might have been foreseen. For instance, the matching to first-order matrix elements for W/Z emission is a natural way to obtain a realistic description of corrections induced by the gauge boson masses, an issue not encountered for QCD and QED showers. A first step thus is to consider W/Z emission off s-channel 2 → 2 QCD processes, where initial- and final-state radiation can be cleanly separated (by dropping interference effects), and use these matrix elements to correct the shower behaviour also for other processes. But such a factorized description then performs rather poorly for t-channel-dominated QCD processes, necessitating a more complex matrix-element-correction machinery. The main drawback is that, in a shower language, there now arises doublecounting issues between what should be classified as QCD emission off a weak process and weak emission off a QCD process, and this has to be resolved. At the end of the day the weak-emission machinery therefore becomes more cumbersome than intended.
Possibly the most satisfying outcome of this study is the so much improved description of W/Z + n jet data. A widespread misconception is that showers are bound to underestimate emission rates, in spite of several studies to the contrary [27][28][29]. The poor performance of Pythia for W/Z + n jets, with a clear trend to the worse for increasing n, has fed this myth. Now we see that the discrepancies essentially disappear once the possibility of weak showers is introduced, at least within experimental errors and reasonable model variations, e.g. of α s . This is not to say that everything is perfect; as always the shower does better for azimuthally-averaged quantities than in more differential distributions.
Apart from this insight, has the outcome been worth the effort? Not surprisingly we have shown that, even at the highest LHC scales, W/Z emissions usually occur early in the shower evolution, such that the dominant W/Z + n jet topologies can be generated perfectly well by standard matrix elements technology. So from that point of view the answer would be no.
However, in step with the computational advances has come the realization that "raw" order-by-order matrix elements are not enough. Essentially all matching/merging techniques for combining the different fixed-n-jet results adopt a parton-shower perspective to overcome doublecounting issues. Notably a fictitious shower history is used to define the Sudakov form factors that are needed to turn inclusive matrix elements into exclusive ones [44]. In the CKKW-L approach [45] these Sudakovs are derived from a shower algorithm, meaning that the overall reliability of the matching/merging procedure is dependent on the quality of this algorithm. Here the lack of W/Z emission as a possibility can force the adoption of less natural shower histories [46]. The new machinery thus opens the road to a better combined description, even in cases when no real W/Z emissions are taken from the shower itself. Further, the shower histories are used to reweight the fixed α s couplings of the ME calculations to ones running as a function of the relevant branching scales, so also here improvements are possible.
So what lies in the future? Firstly, we hope that the extended shower will prove useful in its own right. In particular it offers a convenient tool for studying how the structure of jets is affected by real and virtual weak-emission corrections, interleaved into the standard QCD (+ QED) framework. Thus it is easy to get a first understanding of where effects could be significant, and the general order of such effects, both for jets and for the event as a whole. The low computer-time cost means that weak showers could be included routinely in event generation of any process, as a reminder/warning of complications that may occur, whether from readily visible lepton pairs, from missing neutrino momenta or from the stealth mode of hadronically decaying weak bosons.
Input from higher-order matrix elements will still be needed for precision studies. But, secondly, we have already stressed the improvements of matching/merging strategies that are made possible by including W/Z emission as part of the shower evolution, so an obvious step is to actually upgrade the existing matching/merging strategies available with Pythia [46][47][48].
Thirdly, there are some issues that have not been addressed. One is that we have not included the full γ * /Z 0 interference structure; currently the QED machinery includes pure γ * effects up to some mass scale, while the pure Z 0 kicks in above this scale. Furthermore not all electroweak branchings are included as part of the shower, such as W ± → W ± γ, Z 0 → W + W − or W ± → W ± Z 0 . One could even imagine to include the Higgs in the game. However, this is on a slope of rapidly falling shower-language relevance, so it is not clear whether the investment in time would be worth it.