On the W&Y interpretation of high-energy Drell-Yan measurements

High-energy neutral and charged Drell-Yan differential cross-section measurements are powerful probes of quark-lepton contact interactions that produce effects growing with energy. This paper provides theoretical predictions of the new physics effects at next-to-leading order (NLO) in QCD, including one-loop EW corrections at single-logarithm accuracy. The predictions are obtained from SM Monte Carlo simulations through analytic reweighting. This eliminates the need to scan the new physics parameter space, enabling a global exploration of all the relevant interactions. Furthermore, our strategy produces consistently showered events that can be employed for a direct comparison of the new physics predictions with the data, or to validate the unfolding procedure that underlies the cross-section measurements. Two particularly relevant interactions, associated with the W and Y parameters of EW precision tests, are selected for illustration. Projections are presented for the sensitivity of the LHC and HL-LHC measurements, and the impact on the sensitivity of several sources of uncertainty is quantified.


Introduction
Accurate measurements of high-energy observables are powerful probes of new physics, and arguably one of the most promising avenues for the continuation of the LHC experimental program. The study of neutral (l + l − ) and charged (lν) Drell-Yan differential cross-section measurements offers a clear illustration of this potential [1], which has also been demonstrated for several other processes, including diboson and boson-plus-Higgs [2][3][4][5][6][7][8][9][10][11][12][13][14][15][16][17] and di-quark [18,19] production, and at future colliders (see ref. [20] for a summary). The main goal of the present paper is to provide the theoretical tools needed to exploit the potential of Drell-Yan (DY) measurements. While our results and methodologies are specific to the DY process, the challenges we face are of a general nature. Some of the elements presented here could thus also be of help for the study of high-energy measurements in more complex final states.
The competitive advantage of high-energy measurements stems from the fact that the effects of heavy new physics, at a scale Λ, increase with the energy E of the process as a (positive) power of E/Λ. Given finite measurement accuracy, new physics could thus be visible only at high enough E. The growing-with-energy behavior is easily understood by dimensional analysis in the Effective Field Theory (EFT) description of heavy new physics. Restricting as customary to dimension-6 operators, with Wilson coefficients G ∝ 1/Λ² of dimension −2, we immediately identify possible contributions to the scattering amplitudes that grow with energy as G E² ∼ (E/Λ)² relative to the SM term.

JHEP02(2021)144
Second, measuring (or bounding) the W and Y parameters in DY is sufficient for the global analysis of certain classes of EFTs, namely those of the "universal" [25] and (since the top quark plays no role here) "top-philic" [26,27] type, which include motivated new physics scenarios such as Higgs (and Top) Compositeness [28]. Of course, many more operators than O 2W and O 2B are present in these EFTs. However, O 2W/2B are the only relevant ones in the DY process and thus the only ones to be included in this channel in view of a global fit. The "W&Y interpretation" of DY measurements is thus an ideal (simple and informative) benchmark target for experimental analyses in this channel. Therefore we focus on these two operators, keeping in mind that a larger set of current-current operators will have to be included at a second stage of the DY measurements interpretation. The reweighting strategy we develop in this paper will play an even more vital role in this extended analysis than it does in the W&Y case. We will return to this point later.
The search for EFT effects in DY data will most likely be based on unfolded differential cross-section measurements, similar to those in refs. [29,30] and [31] for 8 TeV and early run-1 data, to be compared with the corresponding EFT predictions. 3 We should then provide such predictions as accurately as possible and, equally importantly, provide reliable estimates of the associated parametric and theoretical uncertainties. The target accuracy is dictated by the experimental error on the corresponding measurement, which is going to be vastly different in different energy regions. At very high energy the error will unavoidably become large because of the limited statistics. This reduces the need for theoretical accuracy, potentially allowing us to cope with the limited knowledge of Parton Distribution Functions (PDFs), with the lack of Electroweak (EW) log resummation, and with other effects that enhance the uncertainties at very high energy. Verifying to what extent this is indeed the case is one of the goals of the present paper. A lot of data are instead available at lower energy, where the measurement error will be dominated by systematic uncertainties. While we are unable to quantify them precisely, based on refs. [29,30] we expect experimental systematics of order a few percent in the energy range from 300 GeV to 2 TeV. We will see whether this accuracy goal can be met given state-of-the-art calculations and PDF uncertainties.
It should be noted that the growing-with-energy nature of our signal fits well the hierarchical structure of the experimental and theoretical errors described above. At very high energy, where the error is larger, the signal is also larger, hence potentially visible. At lower energies the signal gets smaller, but it remains potentially visible because the error shrinks. We thus end up in a situation where the sensitivity to the signal comes from a wide range of energies, rather than from a single energy bin. This is ideal from two viewpoints. First, because we have the opportunity to improve the sensitivity by enlarging this range, either on the high-energy side by collecting more luminosity, or on the low-energy side by reducing the experimental systematic and theoretical uncertainties. Second, because the observation of a tension with the SM in multiple bins would be very convincing evidence of new physics.

Current state-of-the-art EFT predictions for DY, at Next-to-Leading Order (NLO) in QCD and consistently interfaced with the parton shower, are implemented in POWHEG [32]. The practical applicability of this tool is however limited by the fact that NLO simulations are computationally demanding, and they must be run several times in order to extract, for each bin, the dependence of the cross-section on the EFT parameters. Of course the task is simplified by the fact that the cross-section is a quadratic polynomial in the Wilson coefficients. However, extracting the polynomial coefficients (in particular, the linear ones) requires very accurate simulations in order to be sensitive to the small correction due to the EFT on top of the SM. Moreover, it requires a careful choice of the simulation parameters, which should be such that neither the SM nor the quadratic terms dominate by too many orders of magnitude. Since this parameter choice depends strongly on the bin, a large number of accurate simulations is required. While this approach might perhaps still work in the two-parameter W&Y case, it would definitely be infeasible in the large parameter space of generic current-current operators.
To solve the problem, in this paper we adopt a different methodology, based on event reweighting. Namely, we notice that the Born, the virtual, and the real helicity amplitudes are all affected by current-current operators through a common multiplicative factor. This factor is a linear polynomial in the Wilson coefficients (which enters squared in the cross-section), with constant term equal to one, corresponding to the SM contribution, and coefficients that depend on the dilepton center-of-mass energy. The coefficients of the polynomial are readily computed for each combination of helicities and of quark and lepton flavors, and they allow us to model the entire EFT parameter space, at exact NLO accuracy, by reweighting the events of a single Monte Carlo simulation. Namely, for each simulated event we compute the coefficients and store them in the event file. Once a cross-section binning is defined, the events are binned accordingly and the stored information is used to compute the coefficients of the quadratic polynomial that describes the dependence of the cross-section in each bin on the EFT parameters. Reweighted Monte Carlo events can also be used for the direct comparison of the EFT with the data, or to check the possible impact of the EFT effects on the unfolding procedure by which the cross-sections are measured. Notice that the obvious virtues of the reweighting methodology (whenever applicable) are well recognized in the literature, to the point that reweighting has been automated in MadWeight [33]. However, we cannot use MadWeight for our analysis because quark-lepton four-fermion operators are not yet included in the EFT MadGraph [34] model at NLO [35]. We thus produced our own implementation based on the SM POWHEG DY generator [36].
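To make the bookkeeping concrete, the following is a minimal numerical sketch of this workflow for a single bin and a single Wilson coefficient K. The arrays and toy numbers are illustrative stand-ins for the stored per-event information, not the actual POWHEG event content:

```python
import numpy as np

# Toy event records: SM weight w_i and stored linear coefficient c_i,
# such that the EFT weight factor is rho_i(K) = (1 + c_i * K)^2.
# (Names and the single-coefficient restriction are illustrative choices.)
rng = np.random.default_rng(0)
w = rng.exponential(1.0, size=1000)   # SM Monte Carlo event weights
c = rng.normal(0.0, 0.1, size=1000)   # per-event reweighting coefficients

def bin_polynomial(w, c):
    """Return (p0, p1, p2) with sigma_bin(K) = p0 + p1*K + p2*K^2."""
    p0 = w.sum()              # SM cross-section in the bin
    p1 = (2.0 * w * c).sum()  # linear (interference) term
    p2 = (w * c**2).sum()     # quadratic term
    return p0, p1, p2

p0, p1, p2 = bin_polynomial(w, c)

# Consistency check: direct reweighting at K = 0.3 matches the polynomial.
K = 0.3
sigma_direct = (w * (1.0 + c * K) ** 2).sum()
sigma_poly = p0 + p1 * K + p2 * K**2
assert abs(sigma_direct - sigma_poly) < 1e-9 * abs(sigma_poly)
```

The point of the stored-coefficient format is visible here: the three per-bin sums are computed once, after which the cross-section at any point of the parameter space is a trivial polynomial evaluation.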
Analytic reweighting is not only more efficient in providing state-of-the-art (NLO QCD) EFT predictions, but it also allows one to improve the accuracy of the predictions by including new effects. We consider in particular the EW single and double logs, which are enhanced at high mass and constitute the dominant NLO EW effects, and we include them in the EFT prediction. We also use reweighting to estimate the effect (on the SM prediction, most importantly) of Sudakov logs of higher orders in the loop expansion.
The paper is structured as follows. In section 2 we introduce our reweighting strategy and its implementation, and we show how to obtain EFT cross-section predictions and estimate the corresponding uncertainties. As mentioned, our EFT predictions are accurate at NLO in QCD and include NLO EW logs. Corrections due to QCD at NNLO and the complete NLO EW corrections can be straightforwardly added to the SM term using FEWZ [37] and POWHEG [38,39]. In sections 3 and 4 we present LHC sensitivity projections, focusing on integrated luminosities of 100 fb −1 , 300 fb −1 , and 3 ab −1 . These are obtained from a likelihood fit that includes all the relevant sources of theoretical uncertainty, allowing us to quantify their impact. Experimental systematic uncertainties are assumed to be at the few-percent level. We report our conclusions in section 5.

Reweighting strategy
We start, in section 2.1, by discussing how fixed-order QCD NLO predictions, in the presence of quark-lepton current-current new physics interactions, can be obtained by analytic reweighting. Next, in section 2.2, we illustrate our POWHEG implementation and we show that reweighting is fully compatible with the POWHEG master formula, ensuring that showering effects are consistently included in our reweighted Monte Carlo events. We address in section 2.3 the slightly more technical problem of including EW logarithms of IR and UV (RG-running) nature.

Fixed-order QCD corrections
We first consider neutral DY, i.e. the process with l = e, µ or (possibly) τ. We are interested in the high-energy regime of the process, with a lower threshold on the dilepton center-of-mass energy that we set for definiteness at √s > 300 GeV. In all the amplitudes that contribute to dilepton production, at the leading order in the EW and new physics couplings but at all orders in QCD, it is possible to isolate a common subdiagram, displayed in figure 1 with its corresponding Feynman rule. In the figure, χ q = L, R and χ l = L, R denote the chirality of the quark and lepton legs, and P χ q,l the corresponding chirality projectors acting on the quark and lepton spinor indices, respectively. Notice that only same-chirality q/q̄ and l + /l − pairs can interact in the SM (Higgs interactions are of course totally negligible), and the same is true for the current-current effective vertices. Also, the flavor (q = u, d, s, c, b) of the quark must be the same as that of the anti-quark, since we are excluding FCNC new physics interactions as explained in footnote 2.
The effective coupling C 0 depends on the quark and lepton chiralities and flavors, and it reads
$$C_0(q_{\chi_q}, l_{\chi_l}) = C_0^{\rm SM}(s;\, q_{\chi_q}, l_{\chi_l}) + K_0(q_{\chi_q}, l_{\chi_l})\,,$$
where the K 0 are constants that denote the coefficients of the effective neutral-current interactions. The SM contribution depends on the dilepton invariant mass and can be concisely written as
$$C_0^{\rm SM}(s;\, q_{\chi_q}, l_{\chi_l}) = \frac{e^2\, Q_q Q_l}{s} + \frac{g_{Z,q}\, g_{Z,l}}{s - m_Z^2}\,, \qquad (2.4)$$
where g and g' denote the SU(2) L and U(1) Y couplings, e is the electric charge, g_{Z,f} = g (T³_f − s_w² Q_f)/c_w is the Z-boson coupling, T 3 is the third SU(2) L generator, and Y and Q are the hypercharge and the fractional electric charge. 4 Based on the above discussion, it is obvious that at tree level the dependence on the new physics parameters K 0 can be obtained by reweighting the SM (K 0 = 0) predictions. The dilepton production cross-section (fully differential in the dilepton 4-momenta) is the sum of the polarized q q̄ → l + l − partonic cross-sections convoluted with the corresponding PDFs. The quarks and the leptons being effectively massless, each term in the sum depends on new physics through the square of the corresponding C 0 (q χq , l χ l ) coefficient. The differential cross-section is thus the sum of the SM cross-sections in each helicity and quark flavor channel, each weighted by the factor
$$\rho_n(s, K_0;\, q_{\chi_q}, l_{\chi_l}) = \left[\frac{C_0(q_{\chi_q}, l_{\chi_l})}{C_0^{\rm SM}(s;\, q_{\chi_q}, l_{\chi_l})}\right]^2. \qquad (2.5)$$
Starting from a SM Monte Carlo simulation where quark and lepton flavors and helicities are stored in the event file (or, equivalently, from simulations of the individual channels), Monte Carlo events implementing the differential cross-section calculation are readily obtained by assigning to each event its reweighting factor ρ as defined above. Of course we do not need to commit ourselves to a specific value of the K 0 parameters and reweight the SM sample as the first step. Since ρ is merely a linear polynomial (squared) in K 0 , with unit constant term, we just need to compute and store its coefficient 1/C 0 SM (plus the helicity and flavor channel of the event) in the event file.
The actual reweighting can be performed at a later stage, or one can use eq. (2.5) to compute the dependence on K 0 of the cross-sections in the analysis bins.
The reweighting formula in eq. (2.5) also holds at NLO in QCD, because the gluon-quark coupling preserves the quark flavor and chirality. Therefore the one-loop q χq q̄ χq → l + χ l l − χ l amplitude is proportional to the same C 0 (q χq , l χ l ) factor as the tree-level one, and the same is true for q χq q̄ χq -initiated real emission amplitudes with one final gluon and for g q χq - and g q̄ χq -initiated real emissions. The dilepton differential cross-section is thus the linear combination, with ρ reweighting coefficients as in eq. (2.5), of Born plus virtual plus real contributions in each individual channel labeled by the flavor and chirality of the initial quark or anti-quark and by those of the leptons. Notice that the IR divergences consistently cancel in each channel. The UV divergences also cancel, with no renormalization needed for the new physics coupling because the new interaction involves a QCD-neutral vector current. Monte Carlo event reweighting can thus be carried out at NLO in the exact same way described above for the tree-level case. It should be possible in principle to extend the reweighting approach to NNLO accuracy. The main difference is that at NNLO new channels appear (for instance the u d → u d l + l − real correction) whose amplitude is not proportional to one specific C 0 (q χq , l χ l ) effective coupling, but to a linear combination of them. New reweighting factors would thus have to be computed and used to deal with these channels. We do not explore this possibility because NLO accuracy will turn out to be more than sufficient for our purposes. NNLO corrections to the SM contribution are instead important, but those are easily added on top of our NLO new physics predictions. Analogous considerations hold for the charged DY process, with the reweighting factor
$$\rho_c(s, K_\pm) = \left[\frac{C_\pm}{C_\pm^{\rm SM}(s)}\right]^2 \qquad (2.7)$$
for both the charge-plus and charge-minus DY processes. Explicitly, the SM effective coupling reads
$$C_\pm^{\rm SM}(s) = \frac{g^2\, V_{qq'}}{2\,(s - m_W^2)}\,,$$

Table 1. Left: generic current-current operators in the notation of refs. [21,22]. Right: the two operators associated with the W and Y parameters. Operator couplings are denoted with "G".
where V is the CKM quark mixing matrix. New physics is encapsulated in the couplings K ± of the effective charged-current interactions. Charged DY NLO event reweighting can be performed, using eq. (2.7), with the exact same logic we described in the neutral case. Notice that we can safely apply the charged reweighting factor to all the events in the simulation, in spite of the fact that it was derived for the Left-Left (LL) chirality subprocesses, because all the SM events are indeed of the LL type. The only (very minor) subtlety with charged DY reweighting is associated with real NLO corrections producing a top quark in the final state, through, for instance, the SM b g → t l − ν̄ l subprocess. Given that the top is massive, and since we are excluding effective interactions involving the top quark, we cannot deal with this process with our strategy. However, its contribution is totally negligible in the SM, and we do not expect new physics effects in the top sector large enough to make it detectable. If needed, the final states with an extra top quark could be isolated experimentally and studied separately.
We now specialize the general reweighting formulas to the subset of operators selected for the W&Y interpretation. The W and Y parameters are defined in this paper as the coefficients of the four-fermion operators O 2W and O 2B reported in table 1; for the precise relation between W, Y and the coefficients of O 2W/2B , see ref. [1]. By employing eq. (2.10), table 1, and the almost direct correspondence between the K 0,± couplings and the Warsaw-basis operator coefficients, we immediately derive the neutral and charged DY reweighting factors for the W&Y interpretation
$$\rho_n(s, W, Y;\, q_{\chi_q}, l_{\chi_l}) = \left[1 + a^n_W(s;\, q_{\chi_q}, l_{\chi_l})\, W + a^n_Y(s;\, q_{\chi_q}, l_{\chi_l})\, Y\right]^2, \qquad \rho_c(s, W) = \left[1 + a^c_W(s)\, W\right]^2, \qquad (2.11)$$
where the neutral and charged a^{n,c} coefficients are
$$a^n_W(s;\, q_{\chi_q}, l_{\chi_l}) = -\,\frac{g^2\, T^3_q T^3_l}{m_W^2\; C_0^{\rm SM}(s;\, q_{\chi_q}, l_{\chi_l})}\,, \qquad a^n_Y(s;\, q_{\chi_q}, l_{\chi_l}) = -\,\frac{g'^2\, Y_q Y_l}{m_W^2\; C_0^{\rm SM}(s;\, q_{\chi_q}, l_{\chi_l})}\,, \qquad a^c_W(s) = -\,\frac{s - m_W^2}{m_W^2}\,, \qquad (2.12)$$
with C 0 SM as in eq. (2.4). The neutral DY reweighting coefficients are independent of the lepton flavor and of the quark family, because O 2W/2B are quark- and lepton-family independent. The a n W and a n Y coefficients can thus be computed for each SM Monte Carlo event based on the quark type (u or d) and on the quark/lepton chirality combination (LL, LR, RL or RR), for a total of 8 options. Actually, a n W is non-vanishing only for LL-chirality events, which are thus the only ones carrying the dependence on W. This is because O 2W can be viewed as a modification of the W 3 /W 3 component of the neutral vector boson propagators, which couples only to left-handed fermions. By similar considerations it is easy to understand why the charged DY reweighting does not depend on Y (O 2B does not affect the charged W-boson propagator) and why a c W is flavor-independent (i.e., the CKM factor drops out). Notice that these features make reweighting for charged DY trivial, in the sense that all events are scaled by the same factor, which depends only on the dilepton mass. Namely, the charged dilepton differential cross-section is equal to the SM one, summed over all channels, times the overall factor ρ c (s, W ) that carries the entire dependence on new physics.
This is of course not the case for neutral DY, where different flavor and helicity channels are weighted by different W&Y-dependent factors.
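Since each reweighting factor is the square of a linear polynomial in the Wilson coefficients, the cross-section in any bin is automatically a quadratic polynomial in W and Y. For bookkeeping purposes it may help to spell this out for the neutral case:

```latex
\rho_n = \left(1 + a^n_W\, W + a^n_Y\, Y\right)^2
       = 1 + 2\,a^n_W\, W + 2\,a^n_Y\, Y
         + (a^n_W)^2\, W^2 + 2\,a^n_W a^n_Y\, W Y + (a^n_Y)^2\, Y^2\,,
```

so that six weighted sums over the events in a bin (one per monomial, with the constant term giving the SM prediction) fully encode the dependence of that bin on the two parameters.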

Reweighting POWHEG
We applied our reweighting strategy to the POWHEG SM DY generator [36]. For the charged process, our procedure simply consists of computing and storing the reweighting coefficient a c W (s) in eq. (2.12) for each SM Monte Carlo event. The invariant mass s = (p l + p ν ) 2 is obtained from the lepton and neutrino momenta before the showering Monte Carlo (Pythia 8 [40], in our case) is applied to the event. The augmented SM Monte Carlo sample can be used to produce histograms, with the cross-section (or, more generally, the total weight) of each bin obtained as the sum, over the events that fall in the bin, of the (positive or negative) SM weight of the event times the reweighting factor (2.11). Notice

that the cross-section in the bin can be evaluated as a function of W. Namely, one can expand ρ c in W, evaluate the coefficients of the linear and of the quadratic term, and sum them up (with the appropriate SM weights) separately over the events in the bin. We thus obtain the linear and quadratic coefficients of the polynomial that describes the cross-section in the bin as a function of W. The constant term of the polynomial is of course the SM prediction for the cross-section.
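The per-bin polynomial extraction for the charged process can be sketched as follows. The event arrays, binning, and the functional form chosen for a_c(s) are toy inputs for illustration, not real POWHEG output:

```python
import numpy as np

# Each event carries its SM weight w, its dilepton invariant mass m_ll, and a
# stored coefficient a_c such that rho_c(W) = (1 + a_c * W)^2.
rng = np.random.default_rng(1)
n = 5000
mll = rng.uniform(300.0, 2000.0, size=n)  # dilepton mass in GeV (toy)
w = rng.exponential(1.0, size=n)          # SM event weights (toy)
a_c = -1e-7 * mll**2                      # illustrative a_c(s), growing with s

edges = np.array([300.0, 600.0, 1000.0, 2000.0])
ibin = np.digitize(mll, edges) - 1        # bin index 0, 1, 2

# sigma_bin(W) = p0 + p1*W + p2*W^2, from rho_c = 1 + 2 a_c W + a_c^2 W^2
p0 = np.array([w[ibin == k].sum() for k in range(3)])
p1 = np.array([(2.0 * w * a_c)[ibin == k].sum() for k in range(3)])
p2 = np.array([(w * a_c**2)[ibin == k].sum() for k in range(3)])

def sigma(W):
    """Cross-section in each bin as a polynomial in W."""
    return p0 + p1 * W + p2 * W**2

# The polynomial reproduces direct event-by-event reweighting at any W.
W = 2e-4
direct = np.array([(w * (1 + a_c * W) ** 2)[ibin == k].sum() for k in range(3)])
assert np.allclose(direct, sigma(W))
```

The constant term `p0` is the SM prediction in each bin, as stated in the text.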
The procedure is only slightly more complicated in the neutral DY case, because subprocesses with different quark and lepton helicities must be reweighted with different factors, while the SM Monte Carlo collects them in a single unpolarized channel (one for each quark flavor). This is not a problem for charged DY because, as previously explained, the amplitudes are non-vanishing only for the LL polarization subprocess. Therefore, even if the Monte Carlo evaluates unpolarized cross-sections, the result is effectively the polarized one. Fortunately, in the code implementing the SM neutral DY calculation of ref. [36] it is easy to access and modify the Z and photon chiral couplings to quarks and leptons. We can thus produce four SM generators, labeled LL, LR, RL and RR, in which only the corresponding quark/lepton chiral couplings are present (and set to the SM value) while the others are set to zero. POWHEG evaluates the unpolarized cross-sections in each of the four cases, but the results are effectively polarized, as discussed above for the charged process. The four Monte Carlo samples obtained from the four generators represent the contributions of the four helicity subprocesses, to be reweighted with the corresponding factors. For each event in each sample we compute a n W (s; q χq , l χ l ) and a n Y (s; q χq , l χ l ) as in eq. (2.12), using the information on the quark flavor in the event. Finally, we combine the four samples in the calculation of the cross-section as a function of W and Y, similarly to what was previously explained for the charged case.
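The combination of the four polarized samples can be sketched as follows. Event contents are toy stand-ins: each sample carries SM weights and coefficients a_W, a_Y, with a_W nonzero only in the LL sample, as discussed in the text:

```python
import numpy as np

rng = np.random.default_rng(2)

def toy_sample(has_aW):
    """Toy stand-in for one polarized POWHEG sample (weights + coefficients)."""
    n = 1000
    w = rng.exponential(1.0, size=n)
    aW = rng.normal(-2.0, 0.1, size=n) if has_aW else np.zeros(n)
    aY = rng.normal(-1.0, 0.1, size=n)
    return w, aW, aY

# Only the LL sample carries a dependence on W.
samples = {"LL": toy_sample(True), "LR": toy_sample(False),
           "RL": toy_sample(False), "RR": toy_sample(False)}

def sigma(W, Y):
    """Cross-section in one bin: sum over helicity samples of w * rho_n."""
    total = 0.0
    for w, aW, aY in samples.values():
        total += (w * (1.0 + aW * W + aY * Y) ** 2).sum()
    return total

# Switching on W at Y = 0 changes only the LL contribution.
wLL, aWLL, _ = samples["LL"]
dLL = (wLL * (1 + aWLL * 1e-3) ** 2).sum() - wLL.sum()
assert np.isclose(sigma(1e-3, 0.0) - sigma(0.0, 0.0), dLL)
```

The final check mirrors the statement in the text that only LL-chirality events carry the dependence on W.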
The procedure outlined above is exact (in the limit of massless leptons and quarks) from the viewpoint of a fixed-order NLO QCD calculation. However POWHEG [41] also describes the hardest parton showering emission, producing events that can be further showered without introducing double-counting. It is thus legitimate to ask if and how our procedure interferes with the POWHEG approach, possibly invalidating its consistency.
In order to answer, we sketch below the implementation of the POWHEG method, in the presence of new physics, on each individual helicity subprocess. This is a trivial extension of refs. [36,41], from which we borrow all notations.
The Born (B), the real (R qq,g , R gq,q and R qg,q , summed/averaged over the gluon helicity) and the "bare" virtual (V b ) contributions, for given helicities, are equal to the appropriate ρ factor times the corresponding SM expressions. The same applies to the bare factorization counterterms G ⊕,b and G ⊖,b , since they emerge from the bare parton distribution functions in the tree-level term and are therefore proportional to B. The Catani-Seymour counterterms are also equal to ρ times their SM expressions, again because they are proportional to the Born term. One might want to cross-check the latter statement, because the Catani-Seymour formalism was developed [42] to deal with unpolarized processes, while here we are considering a polarized one. The statement can be readily verified by direct calculation, or by noticing that the Catani-Seymour formulas hold for a completely generic unpolarized process with arbitrary Born term, and that our polarized cross-sections are effectively the unpolarized cross-sections as computed in a theory where the Z and the photon couple only to specific quark and lepton chiralities. The Catani-Seymour formulas must thus apply. A final potential subtlety is associated with the fact that the ρ reweighting factor depends on the dilepton invariant mass √s, and that the Catani-Seymour counterterms are evaluated on an "underlying-Born" 2 → 2 kinematics that is obtained from the true 2 → 3 kinematics by a prescription that is, to some extent, arbitrary. Fortunately, with the choice of ref. [36] the underlying-Born dilepton invariant mass is identical to the true one, therefore the exact same ρ(s) factor appears in the Born, virtual and real contributions and in all counterterms. The dilepton invariant mass is of course also consistently preserved in the reconstruction of the 3-body kinematics out of the underlying-Born 2-body variables.
We conclude that the elements that appear in the POWHEG master formula (see eq. (4.17) of ref. [41]), including the subtracted virtual contribution and the decomposition of the real contribution in the two singular regions, all depend on new physics through the same ρ(s) multiplicative factor. The same thus holds for the $\bar{B}$ term, which is a linear combination of those terms. The rescaling instead cancels in the Sudakov exponent, which contains the ratio R/B of real over Born, and, for the same reason, in the real radiation term of the formula. Consequently, the POWHEG master formula for the cross-section is also equal to ρ(s) times the corresponding SM object. Each of the LL, LR, RL and RR generators described above implements the POWHEG formula for the corresponding helicity subprocess with ρ = 1 (i.e., in the SM), and in each of them the contributions from different quark flavors are treated separately. By reweighting based on the quark flavor of each event and joining the four helicity samples we thus obtain events that rigorously implement the POWHEG calculation of the Drell-Yan process in the presence of new physics. After passing them through a showering Monte Carlo program, these events consistently include showering effects at NLO in QCD.
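Schematically, the cancellation pattern can be summarized as follows: every ingredient entering $\bar{B}$ rescales by the same $\rho(s)$, while the ratio appearing in the Sudakov form factor does not,

```latex
\bar{B} = \rho(s)\,\bar{B}_{\rm SM}\,, \qquad
\frac{R}{B} = \frac{\rho(s)\,R_{\rm SM}}{\rho(s)\,B_{\rm SM}}
            = \frac{R_{\rm SM}}{B_{\rm SM}}
\;\;\Longrightarrow\;\;
\Delta = \Delta_{\rm SM}\,,
```

so each reweighted event inherits the SM radiation pattern, with only its overall weight rescaled by $\rho(s)$.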
Our reweighting strategy is one consistent implementation of the POWHEG method for the Drell-Yan calculation, but it is slightly different from that of ref. [32] (where SM EFT effects are included) and from the SM calculation [36]. This is because in these implementations different helicity subprocesses are grouped into unpolarized channels, as previously mentioned. Of course we eventually sum the four helicity contributions, but this is not sufficient to make our implementation identical to the other ones, because of the R/B ratio that appears in the Sudakov and in the real radiation term of the master formula. Our R/B is the ratio of real and Born terms where the external quarks and leptons have fixed helicity, while those of refs. [32,36] are summed over the helicities. On inclusive observables the two implementations (after we sum over helicities, of course) give the same prediction at NLO, owing to the NLO accuracy of the POWHEG formula. The predictions are also identical at the leading-log order, where the real term in the Sudakov exponent and the real radiation term (for low-k T emissions) factorize as the product of the Born term, which drops out in the R/B ratio, times the appropriate splitting functions. Since the splitting functions are the same (notice that the gluon helicity sum is performed also in our case), the same expressions are found for R/B in the two implementations. The latter property clearly follows from the fact that the first POWHEG shower emission is consistent at the leading-log level. The residual difference between the two implementations is thus beyond NLO and leading-log accuracy, and too small to be appreciable in practice, as the results below demonstrate.
A validation of our reweighting is readily obtained by comparing with ref. [32], as in figure 3 and in table 2. The left panel of the figure shows the neutral dilepton invariant mass distribution at four selected points in the W and Y parameter space as computed with our strategy, compared with the one obtained with the code of ref. [32], represented as points. One minus the ratio between our prediction and that of ref. [32], denoted as δ, is displayed below the plot with the corresponding Monte Carlo error. A similar comparison is shown in table 2 for 9 bins of the double-differential invariant mass and cos θ * (with θ * the dilepton center-of-mass angle) distribution. The relative discrepancy δ is in all cases compatible with zero within the error. Notice that the error on δ is tiny in the invariant mass distribution plot because the cross-sections result from dedicated simulations in each bin. The error is larger in the doubly-differential distribution comparison because the cross-sections are obtained in this case by cutting the dedicated simulation events (of 10 5 events each) into the 3 cos θ * bins. In the right panel of the figure we consider instead the transverse momentum of the dilepton pair, integrated over the dilepton mass above 300 GeV and over the angles. Although measuring this distribution is not relevant to probe W and Y, the comparison is interesting because of the slightly different implementation of the POWHEG radiation emission in the two approaches. Also in this distribution, no difference is found within the Monte Carlo error. Notice that the P T,ll distribution includes showering with Pythia 8 [40], while the other results described above are obtained with pure POWHEG events before showering. Other comparison plots were made, also for charged DY production, and no significant difference was found.

Table 2. Comparison with ref. [32] in 9 bins of the doubly-differential m ll and c * = cos θ * distribution. The relative discrepancy δ is reported for the four values of the W&Y parameters employed in figure 3.

Electroweak logarithms
One-loop EW corrections are enhanced at high energy by logarithms of both IR and UV (RG-running) origin. One-loop EW NLO corrections, including in particular the corresponding EW logs, are present in the neutral DY SM predictions of FEWZ [37] (together with QCD at NNLO), and in POWHEG for both charged and neutral DY [38,39]. New physics must also be modeled correctly if we want to discover it by exploiting correlated deviations from the SM of the measurements in multiple bins. Clearly the new physics term is itself a correction to the SM, therefore it need not be predicted as accurately as the SM one. However, one should still carefully monitor the impact of higher-order corrections on the new physics contribution and include them if possible, as we did above for the NLO QCD corrections. We now show how to add, again through reweighting, EW logs at the one-loop order in the EW coupling expansion. The relevant IR logs have been computed in ref. [43] (see also refs. [44][45][46][47]) up to two loops, and they have recently been implemented in Sherpa [48] (at one loop). Restricting to one loop, the Feynman amplitudes for the fully exclusive 2 → 2 Drell-Yan processes at Next-to-Leading Logarithm (NLL) accuracy read as in eq. (2.14). 5

5 The equations that follow assume that the charge-minus amplitude is the conjugate of the charge-plus amplitude, as is the case in the SM and for generic current-current new physics operators. The sum over the u and d flavor indices is understood. Log-enhanced terms with imaginary coefficients are not reported because they do not interfere with the Born amplitudes.

In the equation, M B denotes the Born (tree-level) amplitudes, including their dependence on new physics encapsulated in the C 0 and C ± = (C ∓ ) * effective couplings defined in section 2.1. The charged process amplitudes are of course non-vanishing only for the LL chirality process. Neutral amplitudes for u ū → ν l ν̄ l are equal to those for d d̄ → l − l + , and similarly for down-initiated neutrino production. We denote as q 1 , q 2 , l 1 and l 2 the four particles involved in the scattering, with the corresponding chiralities, such that the generic Drell-Yan partonic subprocess is q 1 q̄ 2 → l 1 l̄ 2 . With this notation, the Mandelstam variables are defined as usual: s = (p q 1 + p q 2 ) 2 , t = (p q 1 − p l 1 ) 2 and u = (p q 1 − p l 2 ) 2 . The "diagonal" F D factors in eq. (2.14) depend on the fermion species and chiralities. They contain angular-independent (a.i.) and angular-dependent (a.d.) contributions. The latter emerge, together with the other angular-dependent terms in eq. (2.14) (those proportional to L t and L u ), from the double logarithms of t/m 2 W and of u/m 2 W rewritten in terms of L = log(s/m 2 W ). The a.d. contributions which are not proportional to L (e.g., the last term in eq. (2.13)) are normally not retained at NLL accuracy. We do include them because they are enhanced in the forward and backward regions. We have verified that they considerably improve the quality of the NLL approximation, not only in the angular distribution but also in the dilepton invariant mass distribution.
We write F_D as a sum in which the (IR-divergent) angular-independent and angular-dependent contributions from soft and collinear photon loops have been isolated in the corresponding f_qed terms, to be discussed later. The other contributions can be written concisely in terms of the T^3 eigenvalue t^3_f, the Casimir C_f = 0, 3/4, and the charge and hypercharge q_f and y_f of each of the four fermions f = q_1, q_2, l_1, l_2. The coupling of the Z boson, g_{Z,f} = g(t^3_f − s_w^2 q_f)/c_w, is used in place of t^3_f for more compact expressions.

The results above are in D = 4 − 2ε dimensions and the UV singularities are subtracted in the MS-bar renormalization scheme. The photon and the fermions are exactly massless, therefore soft and collinear divergences appear in the f_qed terms. Notice that the f_qed terms diverge, but they do not depend on L = log(s/m_W^2). This is because they are defined as the contribution to the loop integrals from the region where the virtual photon is soft and/or collinear to an external leg, and these regions are insensitive to m_{W/Z} up to m_{W/Z}^2/s power corrections. The poles are canceled by real corrections and by PDF renormalization in the calculation of the dilepton differential cross-section, provided extra emissions are allowed and the charged lepton momenta are defined by recombining collinear photons. If the energy (or p_T) threshold for extra photons, the transverse momentum threshold k_T^rec for recombination, and the factorization scale are a considerable fraction of √s, no large finite contributions emerge from the cancellation and the f_qed terms can simply be dropped in the cross-section calculation. We construct our reweighted Monte Carlo samples targeting the "fully-inclusive" differential cross-section defined in this way. More exclusive results, incorporating in particular the effect of a lower (or absent) k_T^rec threshold (or of a small ΔR_rec recombination cone), are easily obtained by passing the events through a QED showering Monte Carlo code.
Reweighted Monte Carlo samples implementing the calculation described above are easily obtained from a LO generator that employs the SM Born matrix element M_{B,SM}. Provided the fermion chirality channels are treated separately, or the LL, LR, RL and RR polarized generators constructed as in section 2.2 are used, one can compute the reweighting factor ρ^{q_1 q_2 → l_1 l_2}_{NLL}(s, t, u) of eq. (2.19) for each event, as a function of the new physics couplings, and use it much as described in the previous sections for the NLO QCD reweighting. Up to running effects, to be discussed below, the first term on the first line of the equation coincides with ρ_{n,c} in eqs. (2.5) and (2.7) for the neutral and charged processes, respectively. Namely, it can be expressed in terms of the C_0 and C_± neutral and charged amplitude coefficients and the corresponding SM expressions. It is a perfect square and, restricting to the W&Y case for concreteness, its dependence on new physics can be parametrized by the reweighting coefficients a^n_{W/Y} and a^c_W as in eq. (2.11). The second term in eq. (2.19) contains the NLL effects. It can also be expressed in terms of the C's using eq. (2.14), noticing that the spinor current matrix elements drop out in the amplitude ratio. The Δρ_NLL term is still a quadratic polynomial in the new physics parameters, but it is not a perfect square and its constant term does not vanish. This is because NLL corrections are introduced also on the SM term, so the reweighting is non-trivial even in the absence of new physics. The 6 coefficients of the Δρ_NLL polynomial have to be computed and stored in the event files, together with a^n_{W/Y} and a^c_W, in order to obtain the analytical W&Y predictions at NLL EW accuracy.
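As a concrete illustration (not the paper's actual code), the analytic reweighting amounts to evaluating, per event, a quadratic polynomial in the new physics parameters whose six coefficients are stored in the event file. A minimal Python sketch, with an illustrative coefficient ordering:

```python
def reweight_event(coeffs, W, Y):
    """Per-event reweighting factor as a quadratic polynomial in the
    new physics parameters (W, Y). `coeffs` are the six stored
    polynomial coefficients (ordering here is illustrative):
        rho = c0 + c1*W + c2*Y + c3*W**2 + c4*W*Y + c5*Y**2
    At pure LO the polynomial is a perfect square with c0 = 1 (no
    reweighting in the SM limit); the NLL piece adds a non-trivial
    constant term, so c0 differs from 1 once EW logs are included."""
    c0, c1, c2, c3, c4, c5 = coeffs
    return c0 + c1 * W + c2 * Y + c3 * W**2 + c4 * W * Y + c5 * Y**2

# Toy usage: a perfect-square, LO-like polynomial (1 + a*W)^2 with a = 2e3
a = 2.0e3
coeffs_lo = (1.0, 2 * a, 0.0, a**2, 0.0, 0.0)
w = reweight_event(coeffs_lo, W=5e-4, Y=0.0)  # (1 + 1)^2 = 4.0
```

Summing such weights bin by bin directly yields the bin-level polynomial coefficients used in the likelihood construction of section 3.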
Notice that the NLL corrections are often negative, so that ρ can become negative in certain regions of phase space. If this resulted in a negative cross-section after the weights are summed up in some bin, it would mean that the EW IR corrections are too large to be treated perturbatively in that bin. Fortunately this does not occur in the energy range of interest for the LHC measurements. Finally, we remark that the NLL corrections in the LL chirality channel make neutral contact interaction operators contribute to the charged DY process, because of the amplitude mixing in eq. (2.14). Therefore, at least in principle, charged DY measurements are also sensitive to the Y parameter and not only to W.
So far we have only discussed EW logs of IR origin. UV logs do not appear explicitly in eq. (2.14) because we apply the results of ref. [43] with the MS-bar renormalization scale set to the center-of-mass energy √s. At one-loop order this is irrelevant for the one-loop corrections in eq. (2.14), which can still be computed at a fixed scale. However the tree-level amplitudes need to be evaluated with running couplings, RG-evolved at the scale √s. When expanding at one loop, this produces single logarithms of s. The running of the SM couplings g and g′ starts at the Z-boson mass m_Z ≈ m_W, where these parameters are defined. Therefore the SM couplings renormalization produces "IR-type" logarithms of s/m_W^2, which are readily computed by replacing the couplings with their running values. The RG running of the new physics couplings is the last source of enhanced logarithms. However these are not logarithms of s/m_W^2, but rather of Λ^2/s, where Λ is the scale at which the EFT operators are renormalized. The explicit form of these terms depends on the definition of the renormalized W and Y parameters. These are given by eq. (2.10), in terms of the G_{2W/2B} four-fermion operator coefficients renormalized at Λ. The SM parameters (g, g′, and m_W) that appear in the equation are evaluated at m_Z, therefore they do not contribute to the running. Insertions of O_{2W/2B} in EW loops generate RG-running logarithmic contributions to a number of dimension-six operators. However, only the current-current quark-lepton non-FCNC operators listed in table 1 produce quadratically energy-growing effects in the DY cross-section and need to be retained; the effect of the others is power-suppressed relative to the leading energy-growing terms. It should be noted that O_{2W/2B} generate generic quark-lepton operators, namely the universality relations on the right panel of table 1 are violated by RG-running. In order to include running effects we thus need to go back to eqs. (2.5) and (2.7), evaluated with the K's obtained by solving the evolution equations at leading log. The final expression for the reweighting factors takes the form of eq. (2.21). Explicit results, obtained with the DsixTools [49] calculation of the relevant β-functions for the new physics couplings, and including the running of the SM couplings, are presented in appendix A. The RG EFT logs are found to have a marginal impact on the phenomenological analysis of the DY data, but they introduce conceptually novel aspects that are worth clarifying. First, they introduce a dependence on the EFT operators' renormalization scale Λ. Technically, Λ is arbitrary, and we conventionally set it to Λ = 10 TeV in our projections for the W and Y parameters sensitivity. On the other hand, for the interpretation of the results in the microscopic UV theory the EFT operators emerge from, setting Λ to the cutoff scale of the EFT would have been preferable. The EFT cutoff intrinsically depends on the UV theory. The choice Λ = 10 TeV corresponds to the estimated cutoff scale in the Composite Higgs UV scenario with moderate g_*, for values of W and Y close to the LHC reach [1]. Naively, one could consider employing the results with Λ = 10 TeV also for EFTs with a much higher cutoff, by running the operator coefficients down to 10 TeV. However this would not be correct in general, because running produces many operators at 10 TeV, while our calculation assumes that O_{2W/2B} are the only non-vanishing current-current operators at Λ = 10 TeV. Therefore our results are, strictly speaking, inapplicable even to theories where O_{2W/2B} are the only operators that emerge at the cutoff scale, if the cutoff scale is much higher than 10 TeV.
Furthermore, even in theories with a 10 TeV cutoff, the presence of other operators, even if not of the current-current type, does influence the running of the current-current operators below Λ, and our calculation does not apply. While of limited practical relevance (since the RG logs are very small and the cutoff is unlikely to be much higher than 10 TeV), this issue could readily be addressed by including in the reweighting all the current-current quark-lepton non-FCNC operators, with coefficients RG-evolved starting from the most general d = 6 operator content at the scale Λ. Since current-current operators are the only relevant ones in DY up to power-suppressed effects, this would produce complete NLL predictions in the general EFT parameter space.
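Schematically, the leading-log evolution of the Wilson coefficients from Λ down to √s can be sketched as below. The anomalous-dimension matrix `gamma` here is a placeholder: in any real application its entries must come from the actual β-functions (e.g. from DsixTools, as in appendix A), not from this sketch.

```python
import numpy as np

def run_coeffs(C_at_Lambda, gamma, sqrt_s, Lam):
    """Leading-log RG evolution of a vector of Wilson coefficients
    from the scale Lambda down to sqrt(s):
        C(sqrt_s) = [1 + gamma * log(sqrt_s/Lambda) / (16 pi^2)] C(Lambda).
    `gamma` is a one-loop anomalous-dimension matrix (placeholder
    values only; this is a sketch of the structure, not the paper's
    implementation)."""
    t = np.log(sqrt_s / Lam)
    return C_at_Lambda + (gamma @ C_at_Lambda) * t / (16.0 * np.pi**2)

# Usage: with a vanishing anomalous dimension the coefficients do not run.
C = np.array([1.0, 0.0])
C_run = run_coeffs(C, np.zeros((2, 2)), sqrt_s=1.0e3, Lam=1.0e4)
```

Note that with √s < Λ the logarithm is negative, so the sign of the shift is set by the sign of the anomalous-dimension entries.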
We cannot fully validate our implementation of the EW logarithms, because EW radiative corrections in the presence of the EFT operators have not been computed. However we can validate the SM EW logarithms against the POWHEG calculation of neutral DY including the complete one-loop EW corrections [39]. We employ the "weak-only" POWHEG routine, which implements only the virtual corrections involving massive vector bosons, obtaining excellent agreement as figure 4 shows. The logarithms reproduce the exact one-loop result up to O(1%) accuracy, relative to the tree level, in the entire mass spectrum. In particular they reproduce very accurately the O(20%) enhancement of the corrections in the very high mass tail. Somewhat larger discrepancies are found, as expected, in the forward and backward regions of the angular distribution. As previously mentioned, the agreement would significantly deteriorate had we not fully retained the angular-dependent logs. Given the expected statistics and experimental errors, 1% accuracy in the predictions would probably be sufficient in the analysis, therefore one might even consider using the reweighted Monte Carlo in place of POWHEG for the Standard Model prediction. The same level of accuracy is expected in the prediction of the new physics EFT effects, relative to the exact one-loop calculation. Since new physics is itself a small correction to the SM (never more than 10% in the relevant configurations), the reweighted prediction of the new physics term is fully equivalent to the exact one-loop result for all practical purposes. A technical aspect worth mentioning is that, since the "weak-only" POWHEG routine does not implement the box diagrams involving the exchange of a photon and of a Z boson, the corresponding EW logs due to the soft/collinear Z-boson region need to be consistently removed from our reweighting formulas for the comparison.
A successful comparison also relies on a judicious choice of the SM input parameters. The most accurate predictions are obtained using tree-level input parameters in the G_µ scheme [50,51].
Up to now we have discussed pure EW corrections, obtained by reweighting tree-level Monte Carlo events. We can straightforwardly combine EW corrections with NLO QCD effects by reweighting the POWHEG DY generator [36]. The reweighting strategy is similar to the one described in section 2.2, with the reweighting factors given by eq. (2.21). The only difference is that the reweighting now depends also on the t and u Mandelstam variables, and not only on s. These are computed on the POWHEG events, before showering, with the following prescription. If a gluon is present in the final state, we assume that it is emitted from the initial parton moving along the positive z-axis if it moves in the right hemisphere (in the center-of-mass frame), and the converse for left-hemisphere gluons. Concretely, we compute t and u using the four-momentum of the incoming quark or anti-quark that travels in the z direction opposite to the final-state gluon. For gluon-initiated processes, the momentum of the initial quark or anti-quark is employed. There is of course no ambiguity in events without emissions. At leading log in QCD, where the emissions are collinear or soft and factorize, this prescription is exact. It is not exact at NLO, for hard emissions. However the t- and u-dependent terms in the reweighting are EW corrections, therefore we do not need to model them precisely at NLO in QCD, since mixed (two-loop order) QCD and EW corrections are not included in our calculation.
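The hemisphere prescription above can be sketched as follows (a minimal illustration with hypothetical function names, not the paper's code); momenta are (E, px, py, pz) arrays in the partonic center-of-mass frame:

```python
import numpy as np

def mandelstam_t(p_in, p_lep):
    """t = (p_in - p_lep)^2 with metric (+,-,-,-)."""
    d = p_in - p_lep
    return d[0]**2 - d[1]**2 - d[2]**2 - d[3]**2

def incoming_for_reweighting(p_pos, p_neg, gluon_pz=None):
    """Select the incoming parton used to compute t (and u) on a
    pre-shower NLO event. With no emission there is no ambiguity
    (here we conventionally return the +z parton); with a final-state
    gluon, use the incoming parton traveling opposite to the gluon
    along z, as described in the text."""
    if gluon_pz is None or gluon_pz < 0.0:
        return p_pos  # no gluon, or left-hemisphere gluon -> +z parton
    return p_neg      # right-hemisphere gluon -> -z parton

# Usage on a toy Born-level event with a 90-degree lepton:
p_pos = np.array([50.0, 0.0, 0.0, 50.0])
p_neg = np.array([50.0, 0.0, 0.0, -50.0])
p_lep = np.array([50.0, 50.0, 0.0, 0.0])
t = mandelstam_t(incoming_for_reweighting(p_pos, p_neg), p_lep)  # -s/2 = -5000
```

With massless kinematics u then follows from the other lepton (or from s + t + u = 0 at Born level).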
Summarizing, our reweighting produces NLO QCD events, consistently matched with QCD parton showering, and including NLL EW corrections on both the SM and the new physics contributions. NLL QED accuracy for partially exclusive quantities, like lepton momenta defined with a narrow recombination cone, or "bare" muon momenta, is obtained with the Pythia 8 [40] QED shower. We validated the QED showering effects by comparing with the literature [37,39,51,52]. In particular we reproduced table 1 of ref. [51], with a ΔR_rec = 0.1 recombination cone and p^γ_{T,min} = 10 GeV, |η^γ|_max = 3 thresholds for photons. The thresholds on the recombined leptons are p^l_{T,min} = 25 GeV and |η^l|_max = 2.5. We also recombine to the nearest lepton the lepton pairs produced by photon splitting. The same recombination strategy is adopted for the predictions reported in the following sections.
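The photon recombination step can be sketched as follows (an illustrative implementation with a simplified data layout; only the energy is accumulated, whereas a full implementation would sum four-momenta):

```python
import numpy as np

def delta_r(eta1, phi1, eta2, phi2):
    """Angular distance in the (eta, phi) plane, with phi wrapped to (-pi, pi]."""
    dphi = (phi1 - phi2 + np.pi) % (2.0 * np.pi) - np.pi
    return np.hypot(eta1 - eta2, dphi)

def dress_leptons(leptons, photons, dr_rec=0.1):
    """Recombine each photon with the nearest lepton if it falls
    inside the Delta-R recombination cone (sketch of the procedure
    described in the text). Particles are dicts with 'E', 'eta', 'phi'."""
    for ph in photons:
        dists = [delta_r(l['eta'], l['phi'], ph['eta'], ph['phi'])
                 for l in leptons]
        i = int(np.argmin(dists))
        if dists[i] < dr_rec:
            leptons[i]['E'] += ph['E']  # full code would add the 4-momentum
    return leptons
```

The thresholds quoted in the text (p^l_{T,min}, |η^l|_max) would then be applied to the dressed leptons.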
Before concluding this section, it is worth emphasizing that our result does not include real emissions of massive vector bosons. Namely, we target a final state without W or Z bosons. While theoretically well-defined, this final state is not experimentally accessible, because the vector bosons might not be detectable if they are soft, collinear to the beam, or decaying to neutrinos. We could straightforwardly account for real emissions, including new physics by reweighting, because at the NLL order the real emissions factorize: they can be generated through splitting, starting from Monte Carlo events without emissions, duly weighted to include new physics. We did not implement this strategy because it is much simpler to use MadGraph: for a tree-level process such as massive vector boson emission, reweighting is automated and can be used to include the EFT effects. The effect of real corrections depends strongly on the exact definition of the cross-section that is measured, which in turn is also dictated by experimental considerations. Therefore we ignore real corrections in the analysis of the following section, having in mind a hypothetical measurement of the exclusive cross-section as defined above. It should be emphasized, however, that these effects must be properly taken into account in the experimental analysis, because they are as relevant as the virtual EW logs [53], as expected. Two more processes are not included in our results. One is photon-quark initiated dilepton production, which does depend on new physics but is extremely small [50,51]. The other is photon-photon initiated production, which is not sensitive to new physics and can thus easily be added on top with a tree-level SM simulation.

The Drell-Yan likelihood
We now turn to phenomenological applications. In this section we discuss the parametrization of the predicted cross-section as a function of W and Y, with the associated uncertainties, and use it to build the binned Likelihood function needed for the W&Y interpretation of the DY measurements. This will be the starting point for the LHC sensitivity projections reported in section 4. The central values of the coefficients c_{k,I} that parametrize the cross-section in each bin I, eq. (3.1), are readily computed with our reweighted samples, starting from central-value SM Monte Carlo data. As explained in the previous section, the reweighted events contain the coefficients of the weights as a polynomial in W and Y. These are summed up in each bin, producing the polynomial coefficients in the bin, out of which the Cholesky decomposition coefficients can be computed, provided the cross-section is a positive polynomial, as it must be by consistency. This is always the case in the kinematical regimes accessible at the LHC, because the negative EW logs are still sufficiently small. The only subtlety is associated with the dependence on Y of the charged DY cross-section. Since the latter emerges only through the EW logs, which we expanded at fixed order in our reweighting formulas, no Y^2 term is present and the cross-section polynomial becomes negative at W = 0 for very large Y. While such large values of Y are phenomenologically irrelevant, we solved the problem by adding the Y^2 term to the charged DY reweighting, for a fully consistent combined expansion in the new physics and in the EW loop parameters.
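The positivity requirement and the Cholesky construction can be illustrated concretely: a quadratic polynomial in (W, Y) with a constant term is the quadratic form x^T M x with x = (1, W, Y), and a Cholesky factor of M exists exactly when M is positive definite (which guarantees a positive polynomial). A sketch, with an illustrative coefficient ordering:

```python
import numpy as np

def cholesky_factor(c):
    """Positivity check for a per-bin cross-section polynomial
        sigma(W, Y) / sigma_ref = c0 + c1*W + c2*Y + c3*W^2 + c4*W*Y + c5*Y^2,
    written as x^T M x with x = (1, W, Y). np.linalg.cholesky raises
    LinAlgError when M is not positive definite, flagging a polynomial
    that can turn negative (illustrative ordering, not the paper's)."""
    c0, c1, c2, c3, c4, c5 = c
    M = np.array([[c0,      c1 / 2., c2 / 2.],
                  [c1 / 2., c3,      c4 / 2.],
                  [c2 / 2., c4 / 2., c5     ]])
    return np.linalg.cholesky(M)
```

For instance, c = (1, 0, 0, 1, 0, 1) (the polynomial 1 + W^2 + Y^2) yields the identity as Cholesky factor, while flipping the sign of the W^2 term makes the decomposition fail.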

Parametric and theoretical uncertainties
We now discuss the estimate of the uncertainties on the theoretical predictions for the c_{k,I} coefficients. These are described statistically, and included in the Likelihood, in terms of nuisance parameters, with an approach that fits both a frequentist and a Bayesian inference framework. From the frequentist point of view, the nuisances are related to parameters the c_{k,I} predictions depend on, such as for instance the value of α_s or the PDFs. The results of auxiliary measurements (e.g., α_s or PDF measurements) are incorporated in the Likelihood as multiplicative terms that depend on the nuisance parameters but not on the parameters of interest (i.e., W and Y). From the Bayesian perspective, the nuisances are random variables (and so, in turn, are the c_{k,I}'s), and the likelihood of the auxiliary measurements can be interpreted as their statistical distribution. In what follows we adopt the Bayesian language to describe the auxiliary likelihood associated with the nuisance parameters, but we eventually employ it for a frequentist inference on the W and Y parameters.
Notice that the discussion above applies only to systematic uncertainties with an underlying statistical origin. The uncertainties from scale variation, and more generally all the uncertainties associated with missing higher-order corrections in the predictions, do not possess a robust statistical interpretation. As is customary, we will nevertheless include them as nuisance parameters; fortunately, we will see that they do not play a dominant role in our sensitivity projections.
We now examine the different sources of uncertainties individually, discuss their parametrization in terms of nuisance parameters, and start quantifying their impact.
Uncertainty from Monte Carlo statistics. No nuisance parameters need to be included for Monte Carlo statistical uncertainties, which are completely negligible. More precisely, the uncertainties on the new physics terms are negligible provided the Monte Carlo statistics is sufficient to provide accurate enough (well below 1%) predictions of the SM terms. This is because new physics is included by reweighting, hence the relative accuracy on the new physics c_{k,I} parameters is the same as that on the SM terms. Since new physics is itself a correction to the SM in the kinematical regime of interest and for the relevant values of the W and Y parameters, the resulting cross-section uncertainty is completely negligible. Accurate SM predictions for unfolded differential cross-section measurements are easy to obtain. If instead the analysis had to be performed on the observed distributions, producing large enough detector simulations might be problematic. However, once this is achieved, new physics effects could be included by reweighting with negligible Monte Carlo error. As discussed in the Introduction, it would have been harder to bring the uncertainties on the new physics prediction to a negligible level with Monte Carlo predictions not obtained by reweighting.
Uncertainty from α_s. The uncertainty coming from the value of the QCD coupling α_s is, by construction, determined by a single parameter. It is thus included through a single nuisance parameter θ_{α_s} affecting all bins in a correlated way. The nuisance is distributed as a standard normal, i.e. θ_{α_s} ∼ N(0, 1).
We can regard θ αs as a variable related to the physical α s (which is Gaussian-distributed by assumption) by a suitable linear transformation that brings its distribution to the standard normal.
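The linear transformation is simply a shift and rescaling by the central value and the one-sigma width (numbers quoted below in the text: interval [0.1165, 0.1195] around 0.1180):

```python
def theta_alpha_s(alpha_s, central=0.1180, sigma=0.0015):
    """Linear map sending the (Gaussian) physical alpha_s to the
    standard-normal nuisance theta_alphas: a shift by the central
    value and a rescaling by the 1-sigma width. Defaults reproduce
    the interval used in the text."""
    return (alpha_s - central) / sigma
```

By construction, the upper and lower boundaries of the confidence interval map to θ = ±1 and the central value to θ = 0.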
The POWHEG SM DY [36] Monte Carlo samples include the weights of each event when α_s is set to the lower and upper boundaries of the 1σ confidence interval (α_s^l = 0.1165 and α_s^u = 0.1195), plus of course the weight for α_s equal to its central value α_s = 0.1180. From the latter, we obtain the central-value coefficients c_{k,I} (with c_{0,I} = 1 as previously discussed). From the former, we obtain the values of c_{k,I} for α_s = α_s^l and for α_s = α_s^u. The resulting relative variations are shown in the left panel of figure 5 for the neutral DY invariant mass distribution, with the binning employed for the LHC projections in section 4. [Figure 5, right panel: relative impact of the c_{k,I} coefficients on the total σ_I^th for W = 5·10^-4, Y = −5·10^-4, and all nuisance parameters θ_i = 0, computed as |Δ_k σ_I|/σ_I^th, with Δ_k σ_I the difference between the central-value cross-section σ_I^th and the value of σ_I^th obtained by setting c_k to zero in eq. (3.1).]

We see that the α_s uncertainties are rather small compared with the expected experimental (statistical and systematic) uncertainties of the cross-section measurements (see figure 7). Also notice that the α_s uncertainties are much smaller for the new physics c_{k,I}'s (for k = 1, …, 5) than for the overall multiplicative c_{0,I} coefficient, which encapsulates in particular the uncertainty on the SM term of the prediction. Moreover, the new physics contribution to the cross-section is small, suggesting that all the α_s uncertainties apart from those on c_{0,I} can safely be ignored in the analysis. This is confirmed by the right panel of figure 5, which quantifies the relative impact of the new physics terms on the total expected cross-sections σ_I^th. Large values of the W and Y parameters are chosen in the figure, well above the projected LHC sensitivity with only 100 fb^-1. Even for these values, new physics is a small correction to the SM up to energies of around 2 TeV. At such high energies, α_s uncertainties are anyhow irrelevant because of the large statistical uncertainties (see again figure 7). Similar conclusions are reached by studying the charged DY transverse mass distribution we consider in section 4 for the LHC projections.
In light of the above discussion, we include the dependence on θ_{α_s} only in c_{0,I}, with the linear parametrization c_{0,I}(θ_{α_s}) = c_{0,I} + κ_I^{α_s} θ_{α_s} = 1 + κ_I^{α_s} θ_{α_s}. Strictly speaking, this linear form is not positive-definite and results in a double coverage of the space of the predictions in terms of θ_{α_s}. However, the problem is irrelevant in practice, because the uncertainties are so small that c_{0,I} will never change sign in the Likelihood marginalization (or profiling) process.
Uncertainty from the parton distribution functions. The PDF uncertainties on the c's are computed using POWHEG, with the same strategy outlined above for the α_s uncertainties. We employed the 30 PDFs in the set PDF4LHC15_nlo_30_pdfas (code 90400 in the LHAPDF database [54]) [55][56][57][58], which correspond to the Hessian reduction of the PDF uncertainties to 30 nuisance parameters θ_i^pdf, with i = 1, …, 30. The nuisances are uncorrelated and normally distributed, θ_i^pdf ∼ N(0, 1). The use of a Hessian set is legitimate because we look for small deviations from the SM, rather than for on-shell new physics. In this context, the Hessian parametrization allows for a simpler treatment of the PDF uncertainties, including correlations between different bins and different processes such as neutral and charged DY. Our choice of the set with 30 replicas, instead of the one with 100 replicas, is motivated by a study we performed for neutral DY using the PDF4LHC15_nlo_mc_pdfas Monte Carlo ensemble set, where we identified fewer than 20 eigenvectors of the c's covariance matrix with non-negligible uncertainties.
The following dependence of the c_{k,I} coefficients on the PDF nuisance parameters is assumed. The coefficients c_{0,I}, c_{2,I} and c_{5,I}, which need to be positive for the uniqueness of the Cholesky decomposition, are parametrized with an exponential, eq. (3.6); the others are parametrized linearly, eq. (3.7). In the equations, we indicate with a bar the central-value predictions, while the superscript (i) denotes the value of the parameter obtained with each of the 30 PDF replicas in the set. The parametrization is such that X equals (approximately, in the case of eq. (3.6)) X^(i) when θ_i^pdf is at its one-sigma value and all the other θ_j^pdf's vanish, compatibly with the definition of the Hessian set.
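A sketch of the two parametrizations (the exact functional form of the exponential in eq. (3.6) is not reproduced in the text; the one below is a guess consistent with the stated properties, namely positivity and approximate reproduction of X^(i) at the one-sigma point):

```python
import numpy as np

def pdf_exp(Xbar, X_repl, theta):
    """Positivity-preserving exponential parametrization, a guessed
    form of eq. (3.6): X = Xbar * exp(sum_i r_i theta_i) with
    r_i = (X_i - Xbar)/Xbar, so X ~= X_i at theta_i = 1 (only
    approximately, as noted in the text)."""
    r = (np.asarray(X_repl) - Xbar) / Xbar
    return Xbar * np.exp(np.dot(r, theta))

def pdf_lin(Xbar, X_repl, theta):
    """Linear parametrization of eq. (3.7):
    X = Xbar + sum_i (X_i - Xbar) theta_i, exactly reproducing X_i
    at theta_i = 1 with the other nuisances at zero."""
    return Xbar + np.dot(np.asarray(X_repl) - Xbar, theta)
```

The linear form is exact at the one-sigma points, while the exponential one deviates at second order in the relative shift, matching the "approximately" caveat above.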
The PDF uncertainties are larger than those on α_s, and eventually turn out to be the dominant component of the total theoretical uncertainties shown in figure 7. Furthermore, these uncertainties grow with the energy, like the new physics effects. Therefore in our analysis we account for them fully, both in the SM and in the new physics contributions to the cross-section.

Uncertainty from missing higher orders. The uncertainties due to the truncation of the perturbative series in the cross-section prediction are harder to quantify, and impossible to incorporate rigorously in any statistical framework. Nevertheless, we can estimate their impact as follows. Missing higher orders in the QCD loop expansion are estimated by varying the factorization (µ_F) and QCD-coupling renormalization (µ_R) scales independently around the central values µ_F = µ_R = √s. The scales are varied by multiplicative factors equal to 2^±1, 2^±1/2 and 1, in a grid with a total of 24 entries plus the central-value configuration. The maximal and the minimal values of the c_{k,I} coefficients in this grid, denoted as c_{k,I}^max and c_{k,I}^min below, are used for the uncertainty estimate. Missing higher orders in the EW loop expansion are instead estimated by adding the leading IR logarithmic terms at two loops to our reweighting formulas. All IR logs have been computed in ref. [43] at two loops, but only the leading (i.e., L^4 angular-independent) terms are retained in the estimate of the uncertainties. Compatibly with Sudakov resummation formulas, these are straightforwardly included by replacing f_a.i. → f_a.i. + f_a.i.^2/2 in eq. (2.17). The predictions for the c_{k,I} coefficients that include this contribution are denoted as c_{k,I}^{2-Sudakov}. The uncertainties from missing higher orders in QCD (left panel) and in EW (right panel) are displayed in figure 6. We discuss them in turn. NLO QCD scale variation effects are known (see, e.g., ref. [59]) to be sizable in the SM.
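The scale-variation grid and the envelope extraction can be sketched as follows (illustrative helper names):

```python
import itertools

def scale_variation_grid():
    """The (mu_F, mu_R) grid described in the text: each scale is
    varied by multiplicative factors 2^{+-1}, 2^{+-1/2} and 1
    independently, giving 5 x 5 = 25 points, i.e. 24 variations
    plus the central configuration (1, 1)."""
    factors = [2.0 ** k for k in (-1.0, -0.5, 0.0, 0.5, 1.0)]
    return list(itertools.product(factors, factors))

def envelope(c_on_grid):
    """(c_min, c_max) of a coefficient over the grid, used for the
    missing-higher-order uncertainty estimate."""
    return min(c_on_grid), max(c_on_grid)
```

Running the coefficient extraction at each grid point and taking the envelope yields the c_{k,I}^min and c_{k,I}^max entering the nuisance parametrization.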
Correspondingly, we see in the figure that the uncertainties on the c_{0,I}'s are relatively large. On the other hand, the scale uncertainties on the new physics c_{k,I}'s (with k ≠ 0) are extremely small and completely negligible. Namely, we find that the NLO QCD scale variations mostly affect σ_I^th in eq. (3.1) as an overall new physics-independent multiplicative factor. The uncertainties due to missing higher orders in the EW loop expansion are smaller than the QCD scale variation, and they become sizable only at high energy, where the statistical error gets large. They are definitely irrelevant for the new physics term, but they could play a role for the SM contribution, in particular for the charged DY process, where they are slightly larger than what is shown in the figure for the neutral case.

The previous results show that our predictions for the new physics contribution to the cross-section are sufficiently accurate, and the associated theoretical uncertainties can be neglected. On the SM term, instead, NLO QCD scale variations and missing higher orders in the EW expansion are potentially relevant. However, SM predictions are available at NNLO [37], and two-loop enhanced EW logarithms can easily be included by analytic reweighting. By replacing the SM term of our prediction with the NNLO SM one, and including two-loop logs, we could thus lower the NLO scale variations to the NNLO level and make higher-order EW corrections completely negligible. In what follows we will thus ignore EW effects and include NNLO-sized QCD scale variations, which we estimate, following ref. [37], to be one tenth of the NLO ones. These uncertainties are modeled by introducing one nuisance parameter θ_I^tu for each bin, following a standard normal distribution. Linear dependence on θ_I^tu is assumed for c_{0,I}.

Statistical inference
In the following section we will present sensitivity estimates for the W&Y parameters at the LHC with the standard [60] frequentist approach based on the profile Likelihood ratio, employing asymptotic formulas and the "Asimov dataset". Namely, we define the t_µ test statistic (with µ = (W, Y) the parameters of interest), with the Likelihood in the numerator maximized over the nuisance parameters for fixed W and Y, and the one in the denominator maximized also over the parameters of interest. The asymptotic (χ²_2) distribution is assumed for t_µ in the EFT hypothesis in order to set the 95% (or 68%) CL boundaries, while the median t_µ in the SM hypothesis is obtained by setting the observed data to the central-value SM prediction.
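The test statistic and the asymptotic threshold can be sketched as follows; for two degrees of freedom the χ² CDF has the closed form 1 − exp(−t/2), so no special-function library is needed:

```python
import math

def t_mu(nll_profiled, nll_global):
    """Profile-likelihood-ratio test statistic:
    t_mu = 2 [ min_theta(-log L(W, Y, theta)) - min_{W,Y,theta}(-log L) ],
    given the two minimized negative log-likelihoods."""
    return 2.0 * (nll_profiled - nll_global)

def chi2_2dof_threshold(cl=0.95):
    """CL threshold under the asymptotic chi^2 distribution with
    2 degrees of freedom: CDF(t) = 1 - exp(-t/2), hence
    t_CL = -2 log(1 - CL); about 5.99 at 95% CL."""
    return -2.0 * math.log(1.0 - cl)
```

A point (W, Y) is excluded at 95% CL when its t_µ exceeds `chi2_2dof_threshold(0.95)`.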
The treatment of experimental (statistical and systematic) uncertainties would be completely straightforward if the experimental result were presented as a measurement of the unfolded cross-section in the bins. Namely, the complete Likelihood would merely be obtained by plugging σ_I^th into the experimental Likelihood, expressed as a function of the "truth-level" cross-sections σ_I, including the dependence on the parameters of interest and on the nuisances, and multiplying by the nuisance parameter constraint terms. The simplest way to mimic the complete Likelihood would be to employ a Gaussian guess for the experimental Likelihood, which should include an estimate of the uncertainties on the measurement emerging from the combination of statistical and systematic errors. Since it is unclear how the statistical and systematic errors should be combined, a slightly more sophisticated approach is considered in what follows. However, it should be emphasized that this adds nothing to the accuracy of our modeling of the experimental errors, given the lack of basic information on the systematic uncertainties expected in the measurement and on the (potentially very important) correlations between the errors in different bins and in neutral and charged DY. One advantage of the strategy we follow is that it could be adapted to the direct comparison of the W and Y predictions with the observed-level distributions, without unfolding. We assume that the expected countings in the bins coincide with the truth-level predictions µ_I^th = L · σ_I^th (with L the integrated luminosity) up to experimental uncertainties, which we encapsulate in normally-distributed nuisance parameters θ_I^exp, and to the luminosity uncertainty. Namely, the µ_I are defined as in eq. (3.9), where Σ_exp is the covariance matrix associated with the systematic experimental uncertainties in the relation between the truth-level expected countings µ_I^th and the observed-level expectations µ_I.
Notice that the expression above does not take into account event migrations between truth-level and observed-level bins, which should be encapsulated in a response matrix multiplying the µ_I^th term. However, it can realistically model the effect of uncertainties on the determination of the response matrix, provided a reasonable guess is made for the covariance matrix Σ_exp. The simple choice we consider in the next section is based on current experimental results. The error on the luminosity measurement, at the 2% level, is described by the normally distributed nuisance parameter θ_L.
The complete Likelihood we will employ for the statistical inference finally reads

$\mathcal{L} = \prod_I {\rm Pois}(n_I\,|\,\mu_I) \times e^{-\theta_L^2/2} \prod_I e^{-(\theta^{\rm exp}_I)^2/2}\,,$ (3.10)

with Pois the Poisson distribution and n I the observed countings. The dependence of µ I (through σ th I , as in eq. (3.9)) on the parametric and theoretical uncertainties is introduced by combining additively the correction terms in eqs.
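A Poisson counting likelihood with Gaussian-constrained nuisances, of the kind described above, might be coded as in the following sketch. The function name and the illustrative countings are our assumptions; we return −2 log L, which is the quantity usually minimized in practice.

```python
import numpy as np
from scipy.stats import poisson, norm

def neg_log_likelihood(n_obs, mu, thetas):
    """-2 log L: Poisson counting terms times Gaussian nuisance constraints."""
    nll = -2.0 * np.sum(poisson.logpmf(n_obs, mu))  # counting terms per bin
    nll += -2.0 * np.sum(norm.logpdf(thetas))       # nuisance constraint terms
    return nll

# Illustrative observed countings; at theta = 0 the constraints are at their maximum
n_obs = np.array([10050, 1020, 95])
nll_sm = neg_log_likelihood(n_obs, n_obs.astype(float), np.zeros(4))
```

Pulling any nuisance parameter away from zero (with the expectations held fixed) can only increase −2 log L through the constraint terms.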

LHC projections
We base our projections on hypothetical measurements of the neutral DY invariant mass (m ll ) and of the charged DY transverse mass (m T ) distributions in logarithmically-spaced bins. The LHC collider energy is set to 13 TeV, and integrated luminosities of 100 fb −1 , 300 fb −1 , and 3 ab −1 are considered. The first luminosity is roughly the one that has been collected as of today. The two latter ones are those that will be available at the end of the LHC and of the HL-LHC programs, respectively. We incorporate in the projections a 65% identification efficiency for electrons and an 80% efficiency for muons, which effectively reduces the luminosity by a factor of 2 in neutral DY and by around 40% for the charged process.
The projections are obtained with the Likelihood described in the previous section, where all the relevant sources of parametric and theoretical uncertainties in the cross-section predictions are taken into account. However, they are not fully realistic because the experimental systematic uncertainties in the cross-section measurements (and the correlation of these uncertainties across different bins) can only be estimated by the experimental collaborations. Based on run-1 results, in our "baseline" scenario we set the experimental relative uncertainties in the measurement of the neutral and of the charged cross-sections to 2% and 5%, respectively. No correlation is assumed across different bins, i.e. Σ exp ∝ 1 in eq. (3.9), aiming at a conservative result.
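The baseline covariance could be assembled as in the following sketch; the bin counts are illustrative placeholders, not the paper's actual binning.

```python
import numpy as np

# Baseline experimental systematics: 2% (neutral DY) and 5% (charged DY)
# relative uncertainty per bin, with no bin-to-bin correlation, i.e.
# Sigma_exp proportional to the identity.
n_bins_neutral, n_bins_charged = 12, 12   # illustrative bin counts
rel_unc = np.concatenate([np.full(n_bins_neutral, 0.02),
                          np.full(n_bins_charged, 0.05)])
sigma_exp = np.diag(rel_unc ** 2)         # diagonal covariance matrix
```

Introducing correlations across bins would amount to populating the off-diagonal entries, which is precisely the information that only the experimental collaborations can provide.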
The results are illustrated in the rest of this section, starting from those in the baseline configuration for the uncertainties. We next consider departures from the baseline setup and discuss the impact of the various sources of uncertainties separately.
A first qualitative assessment of the sensitivity can be obtained by looking at figure 7. The figure shows the corrections to the cross-section, relative to the SM, at four points in the W&Y parameter space, overlaid with the total uncertainties in the theoretical predictions, represented as a gray shaded region. As discussed in the previous section, these uncertainties are dominated by the PDF contribution. The black bars correspond to the statistical uncertainties in each bin at the HL-LHC. The 1% uncertainty level is marked with horizontal dotted lines because it provides a reasonable absolute lower bound on the systematic component of the experimental error on the cross-section measurements, on top of the statistical one. Based on the figure, we expect values of W&Y of the order of 1 · 10 −4 or less to be within the reach of the HL-LHC. This is confirmed by the contours in the W&Y plane, at 68% and 95% CL, displayed in figure 8 for the three integrated luminosities we considered. The neutral and charged DY sensitivities are shown both separately and combined.
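The qualitative comparison between new physics corrections and uncertainties can be turned into contours with a simple Gaussian chi-square, as in the sketch below. All numbers (SM countings, interference slopes, error model) are placeholders of ours, chosen only to mimic the growing-with-energy behavior; they are not the paper's predictions.

```python
import numpy as np

# Illustrative 95% CL test in the (W, Y) plane from a Gaussian chi-square.
# Linearized per-bin dependence: sigma_I = sigma_SM_I * (1 + a_I*W + b_I*Y).
sigma_sm = np.array([1e5, 1e4, 1e3, 1e2])   # SM events per bin (placeholder)
a = np.array([5.0, 20.0, 80.0, 300.0])      # growing-with-energy slopes for W
b = np.array([3.0, 12.0, 50.0, 200.0])      # growing-with-energy slopes for Y
err = np.sqrt(sigma_sm) + 0.02 * sigma_sm   # crude stat + 2% syst error model

def chi2(W, Y):
    """Chi-square of the new-physics shift against the per-bin errors."""
    delta = sigma_sm * (a * W + b * Y)
    return np.sum((delta / err) ** 2)

# A point lies inside the 95% CL region if Delta-chi2 < 5.99 (2 d.o.f.)
inside_95 = chi2(1e-4, 1e-4) < 5.99
```

Scanning W and Y over a grid and thresholding the chi-square at 2.30 and 5.99 yields the 68% and 95% CL contours, respectively.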
We further inspect our results, following ref. [1], from the viewpoint of the validity of the EFT modeling of new physics. The first three plots in figure 9 show single-parameter 95% CL sensitivities as a function of the maximal energy (invariant or transverse mass) of the data employed in the analysis. These are obtained considering only one (W or Y) parameter of interest, with the other set to zero. The first two panels refer to neutral and charged DY, respectively, and the third one to the combination of the two channels. Consistently with ref. [1], we see that the reach sits comfortably below the "Derivative Expansion Breakdown" region, showing that the usage of the EFT is justified and the resulting limits are valid. More quantitatively, we see that the energy region which is relevant for the limit does not exceed 2 or 3 TeV. The values of the W&Y parameters we are sensitive to can easily be due to new physics particles which are much heavier than that. For instance, in Composite Higgs theories the new physics scale could easily be at 10 TeV or more, justifying the usage of the EFT at few-TeV energies. A simpler example is that of a Z′, such as the "Universal Z′ model" employed in ref. [20] for future collider performance assessments. The sensitivity projection on the Y parameter (which is the only one generated by this model), once translated into the mass-coupling plane of the Z′ model as in figure 9, reveals that the HL-LHC could be sensitive to values of the Y parameter induced by a Z′ as heavy as 30 TeV. The projected direct reach on the Z′ particle at the HL-LHC, from ref. [20], is overlaid on the figure in order to show that the sensitivity to the model is dominated by the neutral DY measurement of Y in a wide region of the parameter space.
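The translation of a Y-parameter limit into a Z′ mass reach can be sketched as follows. We ASSUME the tree-level scaling Y ~ (g_Z′/g′)² m_W²/M², appropriate for a heavy vector coupled to the hypercharge current; the exact normalization depends on the model definition of ref. [20], so the numbers below are purely illustrative.

```python
import math

# Illustrative translation of a Y-parameter limit into a Z' mass reach,
# ASSUMING the tree-level scaling Y ~ (g_zp / g_prime)**2 * m_W**2 / M**2.
# The exact normalization depends on the model definition of ref. [20].
m_w = 80.4e-3       # W boson mass in TeV
g_prime = 0.36      # hypercharge gauge coupling, rough EW-scale value

def mass_reach(y_limit, g_zp):
    """Largest Z' mass probed at a given coupling, under the assumed scaling."""
    return (g_zp / g_prime) * m_w / math.sqrt(y_limit)

# For a strongly coupled Z' and Y sensitivity of order 1e-4, the indirect
# reach extends to masses of tens of TeV, far beyond the direct-search reach.
reach = mass_reach(1e-4, 3.0)
```

The key point, visible in the scaling, is that the mass reach grows linearly with the coupling, while direct searches lose sensitivity for heavy, broad resonances.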
It is interesting to investigate the impact of each source of uncertainty on the sensitivity. We report in figure 10 the projected single-operator limits obtained with different assumptions on the errors, compared with the baseline configuration. Eliminating the uncertainties from α s and from missing higher orders in the perturbative expansion is found not to improve the sensitivity appreciably, and for this reason the corresponding reach is not reported in the figure. On the other hand, we have verified that the reach would significantly deteriorate with respect to the baseline, especially at the HL-LHC, if the theory uncertainties on the SM prediction were increased to the level estimated by the NLO scale variation in figure 6. Incorporating uncertainties from missing two-loop EW Sudakov effects instead degrades the reach by at most 10% at the HL-LHC. The baseline reach projections thus rely on the availability of NNLO predictions, while it is less relevant to include the enhanced EW logarithms at the two-loop order.
As expected, PDF are the most relevant source of uncertainties in the theoretical predictions. However, we see in figure 10 that halving or eliminating these uncertainties does not improve the sensitivity radically. We now turn to the uncertainties of experimental origin, i.e. the luminosity and the Σ exp uncertainty of eq. (3.9). Removing the latter (as in the "No Syst" bars) has a moderate impact on the reach, while the former is completely irrelevant. Indeed, by also removing the luminosity uncertainty ("No Exp" bar), the reach does not improve further.

Figure 9. Projected bounds as a function of a cutoff on the mass variable. Bottom right: projected exclusions on a simple Z′ model (defined as in ref. [20]) from the measurement of the Y parameter. The exclusion reach from direct Z′ searches, at the HL-LHC, is also shown.

The picture emerging from the previous discussion is that the experimental accuracy assumed in the baseline configuration is sufficient, given the state-of-the-art accuracy of the theoretical predictions, to exploit at best the LHC and HL-LHC potential to probe the W&Y parameters, and vice versa. A more accurate determination of the PDF could improve the sensitivity, but only slightly. On the other hand, it should be emphasized that our estimate of the experimental uncertainties is a mere guess, which in particular does not take into account correlations between the experimental errors in the different bins and in the different processes, which might reduce the impact of these uncertainties on the reach. If this were the case, the adequacy of the theoretical predictions should be reconsidered, and an improvement of the PDF determination could entail much more significant progress in the sensitivity. The absolute lower bound on the reach is provided by the "Only Stat" bars in figure 10, where all sources of theoretical and experimental systematic error are eliminated.
Before concluding, we compare our results with the findings of ref. [1]. Our projected limits are weaker by around 30%, due to a different estimate of the PDF uncertainties. In the present paper, we used LHAPDF, which combines several PDF sets, while the estimate in ref. [1] was based on ref. [61], where only one set (NNPDF) was considered. The PDF uncertainties of ref. [61] are a factor of around 2 smaller than ours in the relevant kinematical range, making the uncertainties employed in ref. [1] effectively correspond to our "Half PDF" configuration. With this configuration we could indeed accurately reproduce the results of ref. [1].

Conclusions
We have shown that the effects of the most relevant dimension-6 operators (i.e., those that grow quadratically with the energy at the interference level) can be incorporated in the high-energy Drell-Yan predictions by analytic reweighting, up to NLO accuracy in QCD and including double- and single-log-enhanced EW corrections at one loop. Our method allows us to compute the dependence of the cross-section on the new physics parameters in any phase-space bin without performing a scan over the parameter space. It can also generate events that consistently include QCD and QED showering effects, based on the POWHEG method.
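The core of the analytic-reweighting idea can be sketched in a few lines: since the squared amplitude is a polynomial in the new physics couplings, each event only needs to carry the per-event coefficients of that polynomial to be reweighted to any point of the parameter space without regenerating the sample. The names and data layout below are ours, not POWHEG's.

```python
# Minimal sketch of analytic reweighting for a single coupling g: each
# event stores the coefficients w0 (SM), w1 (interference) and w2
# (quadratic) of its weight polynomial w(g) = w0 + g*w1 + g**2*w2.
# Structure and names are illustrative, not the actual implementation.

def reweight(events, g):
    """Return the event weights at coupling value g."""
    return [e["w0"] + g * e["w1"] + g ** 2 * e["w2"] for e in events]

events = [{"w0": 1.0, "w1": 0.3, "w2": 0.05},
          {"w0": 1.0, "w1": -0.1, "w2": 0.02}]
sm_weights = reweight(events, 0.0)   # g = 0 reproduces the SM sample
```

Because the weights are exact analytic functions of the couplings, the same showered event sample serves every point of the parameter space, which is what removes the need for a scan.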
Two operators in this set, associated with the W and Y parameters, are particularly interesting because they are generated in universal new physics scenarios, including Composite Higgs. We thus focused on these operators for an illustration of the methodology, and performed LHC (and HL-LHC) sensitivity projections. Our results confirm and strengthen the findings of ref. [1], where less accurate predictions and systematic uncertainty estimates were employed. The accuracy of our predictions for the new physics contribution to the cross-sections is found to be totally adequate, and the associated uncertainties are negligible. The relevant uncertainties are those on the SM term, and PDF are the dominant source. Theoretical uncertainties are under control provided NNLO QCD predictions are employed for the SM term. One-loop EW radiative corrections should also be included, possibly exactly rather than at the single-log order using our strategy. The impact of two-loop EW logarithms on the reach has been found to be marginal, even at the HL-LHC. Nevertheless, these terms could be included straightforwardly by analytic reweighting.
Our work could be extended in two directions. First, by including all the relevant operators in view of a global EFT fit. Second, by assessing the impact on the sensitivity

JHEP02(2021)144
The second term in eq. (2.19) is given by

Open Access. This article is distributed under the terms of the Creative Commons Attribution License (CC-BY 4.0), which permits any use, distribution and reproduction in any medium, provided the original author(s) and source are credited.