Higgs CAT

Higgs Computed Axial Tomography, an excerpt. The Higgs boson lineshape (… and the devil hath power to assume a pleasing shape, Hamlet, Act II, scene 2) is analyzed for the gg → ZZ process, with special emphasis on the off-shell tail which shows up at large values of the Higgs virtuality. The effect of including background and interference is also discussed. The main focus of this work is on residual theoretical uncertainties, discussing how an improved approach to the analysis can sharpen the constraint on the Higgs intrinsic width.


Introduction
Here I present a few personal recollections and observations on what is necessary in order to obtain the most accurate theoretical predictions outside the Higgs-like resonance region, given the present level of theoretical knowledge.
Somebody had an idea, somebody else gave it wings, a third group did the cut-and-count, and a fourth did a shape-based analysis (inspired by a friend). Ideas are like rabbits. You get a couple, learn how to handle them, and pretty soon you have a dozen.
In Ref. [1] the off-shell production cross section has been shown to be sizeable at high ZZ-invariant mass in the gluon fusion production mode, with a ratio relative to the on-peak cross section of the order of 8% at a center-of-mass energy of 8 TeV. This ratio can be enhanced up to about 20% when a kinematical selection used to extract the signal in the resonant region is taken into account [2]. This arises from the vicinity of the on-shell Z-pair production threshold, and is further enhanced at the on-shell top-pair production threshold. In Ref. [3] the authors demonstrated that, with fewer assumptions and using events with pairs of Z particles, the high invariant mass tail can be used to constrain the Higgs width.
This note provides a more detailed description of the theoretical uncertainty associated with the camel-shaped and square-root-shaped tails of a light Higgs boson.
The outline of the paper is as follows: old and new ideas on measuring the Higgs boson intrinsic width are presented in Sect. 2; off-shell effects are discussed in Sect. 3; the inclusion of the interference is analyzed in Sect. 3.4, with the introduction of different options for the corresponding theoretical uncertainty. Historical remarks are given in Sect. 4; in Sect. 4.2 improvements are introduced and critically analyzed.

An old idea
The problem of determining resonance parameters in e+e− annihilation, including initial-state radiative corrections and resolution corrections, is an old one; see Ref. [4]. For the interested reader we recommend the original Refs. [4,5] or the summary in Chap. 2 of Ref. [6].

Higgs intrinsic width
Is there anything we can say about what the intrinsic width of the light resonance is like? Ideas pass through three periods:
• It can't be done.
• It probably can be done, but it's not worth doing.
• I knew it was a good idea all along!
From the depths of my memory . . .
Remark ✗ It can't be done: at the LHC we reconstruct the invariant mass of the Higgs decay products, "easy" in the case of γγ or 4-charged-lepton final states. The mass resolution has a Gaussian core but non-Gaussian tails (e.g. due to calorimeter segmentation, but also pile-up effects, etc.). The accuracy in the mean of the mass peak can then approach per-mille precision. Thus it could perhaps compare with the W-mass extraction at LEP, based on a measured invariant mass distribution. Experimentalists would let the detector event simulation program do the folding of the theoretical invariant mass distribution, hoping that the MC catches most of the Gaussian and non-Gaussian resolution effects, with the remainder being put into the systematic uncertainty. However, this would affect the width much more than the mass (the mean of the distribution).
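As a numerical illustration of the folding described above, one can convolve a toy lineshape with a Gaussian resolution model. All numbers below (4 MeV intrinsic width, 1 GeV resolution) are assumptions chosen for the sketch, not detector specifications; the point is only that smearing leaves the mean almost untouched while completely dominating the observed width.

```python
import numpy as np

# Toy Breit-Wigner lineshape (mu = 125 GeV, intrinsic width 4 MeV)
# convolved with an assumed 1 GeV Gaussian detector resolution.
mu_h, gamma_h, sigma_det = 125.0, 0.004, 1.0

m = np.linspace(120.0, 130.0, 20001)          # 0.5 MeV grid
dm = m[1] - m[0]
true = 1.0 / ((m**2 - mu_h**2) ** 2 + (mu_h * gamma_h) ** 2)
true /= np.trapz(true, m)

# Gaussian smearing kernel, truncated at +-3 sigma and renormalized
x = np.arange(-6000, 6001) * dm
kernel = np.exp(-0.5 * (x / sigma_det) ** 2)
kernel /= kernel.sum()
smeared = np.convolve(true, kernel, mode="same")
smeared /= np.trapz(smeared, m)

def fwhm(w):
    above = m[w >= 0.5 * w.max()]
    return above[-1] - above[0]

mean_smeared = np.trapz(m * smeared, m)
fwhm_true, fwhm_smeared = fwhm(true), fwhm(smeared)
print(mean_smeared)   # still ~125 GeV: the mean survives the smearing
print(fwhm_true)      # ~4 MeV: the intrinsic width
print(fwhm_smeared)   # ~2.4 GeV: entirely detector resolution
```

The smeared width is set by 2.355 σ of the resolution, three orders of magnitude above the intrinsic width, which is why the remark above concludes that the width is affected much more than the mean.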
Remark ✗ It's not worth doing. For the width of the Higgs, things are much more difficult: for M H < 180 GeV the detector resolution dominates, so experimentally it will be very tough.
Let's review what we have learned in the meantime, highlighting new steps for Higgs precision physics:
• complete off-shell treatment of the Higgs signal
• signal-background interference
• residual theoretical uncertainty


The wrath of the "heavy" Higgs

You didn't want me to be real, I will contaminate your data, come and see if ye can swerve me.
Let's see how this develops.

Higgs boson production and decay: the analytic structure
Remark ✓ I knew it was a good idea all along! Before giving an unbiased description of production and decay of a Higgs boson, we underline the general structure of any process containing a Higgs-boson intermediate state.
The corresponding amplitude is schematically given by

A(i j → F) = S(s)/(s − s_H) + N(s),

where the first term is the Higgs-resonant part and N(s) denotes the part of the amplitude which is non-Higgs-resonant. Strictly speaking, signal (S) and background (B) should be defined accordingly.

Definition The Higgs complex pole (describing an unstable particle) is conventionally parametrized as

s_H = μ_H^2 − i μ_H γ_H. (3)

As a first step we will show how to write the result in a way such that pseudo-observables make their appearance [7,8]. Consider the process i j → H → F, where i, j ∈ partons and F is a generic final state; the complete cross-section is obtained by summing over spins and colors (averaging over the initial state). Here δ_QCD gives the QCD corrections to gg → H up to next-to-next-to-leading order (NNLO) + next-to-leading-logarithm (NLL) resummation [9]. Furthermore, we define Γ_{H→F}(s), the partial decay width of a Higgs boson of virtuality s into a final state F,
and σ_{i j→H}(s), which gives the production cross-section of a Higgs boson of virtuality s. We can write the final result in terms of pseudo-observables. It is also convenient to rewrite the result by introducing a sum over all final states,

Γ_H^tot(s) = Σ_{f∈F} Γ_{H→f}(s). (11)

Note that we have written the phase-space integral for i(p_1) + j(p_2) → F assuming that all initial and final states (e.g. γγ, 4f, etc.) are massless. Why do we need pseudo-observables? Ideally, experimenters (should) extract so-called realistic observables from raw data, e.g. σ(pp → γγ + X), and (should) present results in a form that is useful for comparison with theoretical predictions, i.e. the results should be transformed into pseudo-observables; during the deconvolution procedure one should also account for the signal-background interference. Theorists (should) compute pseudo-observables using the best available technology, satisfying a list of demands following from the self-consistency of the underlying theory.
Definition We define an off-shell production cross-section (for all channels) as follows: when the cross-section i j → H refers to an off-shell Higgs boson, the choice of the QCD scales should be made according to the virtuality and not at a fixed value. Therefore, for the PDFs and for σ_{i j→H+X} one should select μ_F^2 = μ_R^2 = z s/4 (z s being the invariant mass of the detectable final state). Indeed, beyond lowest order (LO) one must not choose the invariant mass of the incoming partons for the renormalization and factorization scales, but an infrared-safe quantity fixed by the detectable final state, see Ref. [10]; the factor 1/2 is motivated by an improved convergence of the fixed-order expansion. The argument is based on the minimization of the universal (DGLAP) logarithms and not the process-dependent ones.
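A minimal sketch of this dynamic-scale prescription, assuming the μ^2 = z s/4 choice quoted above (the function name is ours, for illustration only):

```python
# Dynamic QCD scales tied to the Higgs virtuality: mu^2 = z s / 4,
# i.e. mu = m(final state)/2, rather than a value frozen at, say, M_H/2.

def qcd_scales(m4l):
    """Return (mu_R, mu_F) for an event whose detectable final state
    has invariant mass m4l [GeV]; mu^2 = z s / 4 with z s = m4l^2."""
    mu = 0.5 * m4l
    return mu, mu

# On peak the prescription reduces to the usual static choice M_H/2,
# but deep in the off-shell tail the scales follow the virtuality:
print(qcd_scales(125.0))   # (62.5, 62.5)
print(qcd_scales(800.0))   # (400.0, 400.0)
```

The point of the prescription is that each bin in the final-state invariant mass gets its own scales, so the far off-shell tail is not evaluated with scales appropriate to the peak.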

More on production cross-section
We give the complete definition of the production cross-section; let us define ζ = z s and κ = v s.

Definition σ_prod is defined by an integral over the parton luminosity, where z_0 is a lower bound on the invariant mass of the H decay products and the luminosity is built from the parton distribution functions f_i. Therefore, σ_{i j→H+X}(ζ, κ, μ_R) is the cross-section for two partons of invariant mass κ (z ≤ v ≤ 1) to produce a final state containing an H of virtuality ζ = z s plus jets (X); it is made of several terms (see Ref. [9] for a definition of σ).

Remark As a technical remark, the complete phase-space integral for the process p̂_i + p̂_j → p_k + {f} (p̂_i = x_i p_i, etc.) factorizes into the production phase space and dΦ_dec, the phase space for the decay Q → {f}. Equations (14) and (16) follow after folding with PDFs of arguments x_i and x_j, using x_i = x, x_j = v/x, and integrating over t̂. At NNLO there is an additional parton in the final state and five invariants are needed to describe the partonic process, plus the H virtuality. However, one should remember that at NNLO use is made of the effective-theory approximation, where the Higgs-gluon interaction is described by a local operator.

An idea that is not dangerous is unworthy of being called an idea at all
Let us consider the case of a light Higgs boson; here the common belief was that the product of the on-shell production cross-section (say, in gluon-gluon fusion) and branching ratios reproduces the correct result to great accuracy. The expectation is based on the well-known result of Ref. [11]. A more familiar representation of the propagator can be written as follows:

Definition with the parametrization of Eq. (3) we perform the well-known transformation showing that the Bar-scheme is equivalent to introducing a running width in the propagator, with parameters that are not the on-shell ones. Special attention goes to the numerator in Eq. (22), which is essential in providing the right asymptotic behavior when s → ∞, as needed for cancellations with contact terms in VV scattering.
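The transformation can be made explicit. With the parametrization of Eq. (3) the following exact algebraic identity holds (a sketch in the notation above; barred quantities denote the running-width parameters):

```latex
\frac{1}{s - s_{\mathrm{H}}}
  = \left(1 + i\,\frac{\gamma_{\mathrm{H}}}{\mu_{\mathrm{H}}}\right)
    \frac{1}{\,s - \bar{\mu}_{\mathrm{H}}^{2}
              + i\, s\, \bar{\gamma}_{\mathrm{H}}/\bar{\mu}_{\mathrm{H}}\,}\,,
\qquad
\bar{\mu}_{\mathrm{H}}^{2} = \mu_{\mathrm{H}}^{2} + \gamma_{\mathrm{H}}^{2},
\qquad
\bar{\gamma}_{\mathrm{H}}
  = \gamma_{\mathrm{H}}\,\frac{\bar{\mu}_{\mathrm{H}}}{\mu_{\mathrm{H}}}\,.
```

The overall factor (1 + i γ_H/μ_H) is the numerator whose role at large s is emphasized above, and the shifted parameters μ̄_H, γ̄_H show that the running-width propagator reproduces the complex pole only after its parameters are moved away from the on-shell ones.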
The natural question is: to which level of accuracy does the ZWA [the delta-term only in Eq. (20)] approximate the full off-shell result, given that at μ_H = 125 GeV the on-shell width is only 4.03 MeV? For definiteness we will consider i j → H → ZZ → 4l. When searching for the Higgs boson around 125 GeV one should not care about the region M ZZ > 2 M Z; but, due to limited statistics, theory predictions for the normalization in q̄q, gg → ZZ are used over the entire spectrum of the ZZ invariant mass.
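The delta-term referred to above is the zero-width limit of the squared propagator (the standard narrow-width result, in the notation of Eq. (3)):

```latex
\frac{1}{\left|s - s_{\mathrm{H}}\right|^{2}}
  = \frac{1}{\left(s - \mu_{\mathrm{H}}^{2}\right)^{2}
             + \mu_{\mathrm{H}}^{2}\,\gamma_{\mathrm{H}}^{2}}
  \;\xrightarrow[\;\gamma_{\mathrm{H}}\to 0\;]{}\;
  \frac{\pi}{\mu_{\mathrm{H}}\,\gamma_{\mathrm{H}}}\,
  \delta\!\left(s - \mu_{\mathrm{H}}^{2}\right).
```

Naively, corrections to the ZWA would be of order γ_H/μ_H ≈ 3 × 10^−5 for γ_H = 4.03 MeV; the point of this section is that the actual off-shell effect is at the percent level, i.e. not controlled by that ratio.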
Therefore, the question is not to dispute that off-shell effects are depressed by a factor γ H /μ H but to move away from the peak and look at the behavior of the invariant mass distribution, no matter how small it is compared to the peak; is it really decreasing with M ZZ ? Is there a plateau? For how long? How does that affect the total cross-section if no cut is made?
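Before turning to the full lineshape, a small numerical sketch (values taken from the text: μ_H = 125 GeV, γ_H = 4.03 MeV) shows what the propagator alone does off peak:

```python
# Evaluate |s - s_H|^(-2) with the complex pole s_H = mu_H^2 - i mu_H gamma_H
# at the peak and off peak, to see how fast the propagator actually falls.

mu_h, gamma_h = 125.0, 4.03e-3
s_h = complex(mu_h**2, -mu_h * gamma_h)

def prop2(m):
    """|1/(s - s_H)|^2 evaluated at real invariant mass m [GeV]."""
    return 1.0 / abs(m * m - s_h) ** 2

on_peak = prop2(mu_h)
at_2mz = prop2(182.4)      # around the ZZ threshold
at_400 = prop2(400.0)      # deep in the tail

print(at_2mz / on_peak)    # enormously suppressed w.r.t. the peak ...
print(at_400 / at_2mz)     # ... but afterwards it only falls power-like
```

The suppression with respect to the peak is of order (μ_H γ_H)^2/(s − μ_H^2)^2 ~ 10^−9, yet beyond the peak the propagator decreases only like 1/s^2; the sustained tail discussed below comes from the numerator, i.e. the s-dependence of the off-shell width, which grows above the VV and t̄t thresholds.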
Let us consider the signal in the complex-pole scheme, with s_H the Higgs complex pole given in Eq. (3). Away (but not too far away) from the narrow peak, the propagator and the off-shell H width behave smoothly above threshold, with a sharp increase just below it (the width goes from 1.62 × 10^−2 GeV at 175 GeV to 1.25 × 10^−1 GeV at 185 GeV). Our result for the VV (V = W/Z) invariant mass distribution is shown in Fig. 3: after the peak the distribution falls until the effects of the VV-thresholds become effective, with a visible increase followed by a plateau and by another jump at the t̄t-threshold. Finally the signal distribution starts to decrease again, almost linearly.
What is the net effect on the total cross-section? We show it in Table 1 where the contribution above the ZZ -threshold amounts to 7.6%. The presence of the effect does not depend on the propagator function used (Breit-Wigner or complexpole propagator). The size of the effect is related to the distribution function. In Table 2 we present the invariant mass distribution integrated bin-by-bin.
If we take the ZWA value for the production cross-section at 8 TeV and for μ_H = 125 GeV (19.146 pb) and use the branching ratio into ZZ of 2.67 × 10^−2, we obtain a ZWA result of 0.5203 pb, with a 5% difference w.r.t. the off-shell result, fully compatible with the 7.6% effect coming from the high-energy side of the resonance. Again from Table 1, we see that the effect is much less evident if we sum over all final states, with a net effect of only 0.8% (the decay is b̄b-dominated).
Of course, the signal per se is not a physical observable and one should always include background and interference. In Fig. 4 we show the complete LO result. Numbers are shown with a cut of 0.25 M ZZ on p Z T . The large destructive effects of the interference wash out the peculiar structure of the signal distribution. If one includes the region M ZZ > 2 M Z in the analysis then the conclusion is: interference effects are relevant also for the low-mass region.
It is worth noting again that the whole effect on the signal has nothing to do with γ_H/μ_H effects; above the ZZ-threshold the distribution is higher than expected (although tiny w.r.t. the narrow peak) and stays approximately constant till the t̄t-threshold, after which we observe an almost linear decrease. This is why the total cross-section is affected (in a VV final state) at the 5% level. The higher-order corrections in gluon-gluon fusion show a huge K-factor.

The zero-knowledge scenario
A potential worry is: should we simply use the full LO calculation, or should we try to effectively include the large (factor-two) K-factor to obtain effective NNLO observables? There are different opinions, since interference effects may be as large as or larger than NNLO corrections to the signal. Therefore, it is important to quantify both effects. We examine first the scenario where zero knowledge is assumed on the K-factor for the background. So far, two options have been introduced to account for the increase in the signal. Let us consider any distribution D (for definiteness, in M ZZ, the invariant mass of the ZZ-pair, or in p_T^Z, the transverse momentum). Two possible options are:

Definition The additive option rescales only the signal by its K-factor, keeping the interference at LO.

Definition The multiplicative [12] (M), or completely multiplicative, option rescales signal and interference alike by K_D, the differential K-factor for the distribution. The M option is only relevant for background subtraction and is closer to the central value described in Sect. 4.2.1.
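The two options can be made concrete in a toy sketch; the functional forms K·S + I and K·(S + I) are our reading of the A- and M-options (the displayed equations are not reproduced in this excerpt), and all numbers are illustrative:

```python
# S and I are the LO signal and interference in some bin, K the signal K-factor.

def additive(s_lo, i_lo, k):
    """A-option: rescale the signal only, keep the interference at LO."""
    return k * s_lo + i_lo

def multiplicative(s_lo, i_lo, k):
    """M-option: rescale signal and interference with the same K_D."""
    return k * (s_lo + i_lo)

s_lo, i_lo, k = 1.0, -0.6, 2.0          # toy bin with destructive interference
a = additive(s_lo, i_lo, k)             # 1.4
m = multiplicative(s_lo, i_lo, k)       # 0.8
print(a, m, a - m)                      # spread (K-1)*|I| measures the THU
```

With destructive interference the additive option keeps more of the excess; the spread between the two, (K − 1)|I|, is one handle on the theoretical uncertainty of the zero-knowledge scenario.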
In both cases the NNLO corrections include the NLO electroweak (EW) part, for production [13] and decay. The EW NLO corrections for H → WW/ZZ → 4f can reach 15% in the high part of the tail. It is worth noting that the differential K-factor for the ZZ-invariant mass distribution is a slowly increasing function of M ZZ above M ZZ = 2 M t, going (e.g. for μ H = 125.6 GeV) from 1.98 at M ZZ = 2 M t to 2.11 at M ZZ = 1 TeV.
The two options, as well as intermediate ones, suffer from an obvious problem: they spoil the unitarity cancellation between signal and background for M ZZ → ∞. Therefore, our partial conclusion is that any option showing an early onset of unitarity violation should not be used for too-high values of the ZZ-invariant mass.
Therefore, our first prescription in proposing an effective higher-order interference will be to limit the risk of overestimation of the signal by applying the recipe only in some restricted interval of the ZZ -invariant mass. This is especially true for high values of μ H where the off-shell effect is large. Explicit calculations show that the multiplicative option is better suited for regions with destructive interference while the additive option can be used in regions where the effect of the interference is positive, i.e. we still miss higher orders from the background amplitude but do not spoil cancellations between signal and background.
Actually, there is an intermediate option, based on the following observation: higher-order corrections to the signal are made of several terms defining the partonic cross-section, see Eq. (14). From this point of view it seems more convenient to define the rescaling at the level of these individual terms.
The so-called area method [4] is not so useless, even for a light Higgs boson: one can use a measurement of the off-shell region to constrain the couplings of the Higgs boson, using a simple cut-and-count method and one scaling parameter [see Eq. (32)] [23]. A MEM-based analysis was the first to describe a method for suppressing the q̄q background; the importance of that work cannot be overestimated. Complementary results from H → WW in the high transverse-mass region are shown in Ref. [24]. The MEM-based analysis for the separation of the gg → ZZ and q̄q → ZZ processes, including signal-background interference within the gg → ZZ process, has been suggested and implemented within the MELA framework at CMS [25] (http://www.pha.jhu.edu/spin/).
• This note provides a more detailed description of the theoretical uncertainty associated with the camel-shaped and square-root-shaped tails of a light Higgs boson.
• A similar analysis, performed for the exclusion of a heavy SM Higgs boson, can be found in Refs. [15] and [12], with improvements suggested in Ref. [26].

How to use an LO MC?
The MCs used in the analysis are based on LO calculations; some of them include K-factors for the production, but all of them have decay and interference implemented at LO. The adopted solution is external "re-weighting" (i.e. re-weighting with results from some analytical code), although rescaling exclusive distributions (e.g. in the final-state leptons) with inclusive K-factors is something that should not be done: it requires (at least) a one-to-one correspondence between the two lowest orders. An example of K-factors that can be used to include interference in the zero-knowledge scenario is given in Fig. 5. For a more general discussion on re-weighting see Ref. [27].
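A minimal sketch of bin-wise re-weighting in the ZZ invariant mass (the binning and K values below are placeholders, not those of Fig. 5):

```python
import bisect

# LO MC events are re-weighted bin-by-bin in M_ZZ with a differential
# K-factor taken from an external (analytical) calculation.

edges = [180.0, 250.0, 350.0, 500.0, 1000.0]   # M_ZZ bin edges [GeV] (assumed)
k_diff = [1.9, 1.95, 2.0, 2.05]                 # differential K per bin (assumed)

def reweight(event_mzz, event_weight):
    """Multiply an LO event weight by the K-factor of its M_ZZ bin."""
    i = bisect.bisect_right(edges, event_mzz) - 1
    if i < 0 or i >= len(k_diff):
        return event_weight                     # outside the table: leave at LO
    return event_weight * k_diff[i]

events = [(200.0, 1.0), (400.0, 1.0), (800.0, 0.5)]
print([reweight(m, w) for m, w in events])      # [1.9, 2.0, 1.025]
```

Because the K-factor is applied differentially in the same variable in which it was computed, this avoids the inclusive-on-exclusive mismatch warned against above.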
Most of the studies performed so far are for the exclusion of a heavy SM Higgs boson. The QCD scales should be chosen according to the virtuality and not at a fixed value; indeed, one must choose an infrared-safe quantity fixed by the detectable final state, see Ref. [10]. Using the Higgs virtuality for the QCD scales has been advocated in Ref. [14]: the numerical impact is relevant, especially for high values of the invariant mass, the ratio of static to dynamic scales being 1.05. The authors of Ref. [21] seem to agree with our choice [14].
• References [3,21] consider the following scenario (on-shell degeneracy): allow for a scaling of the Higgs couplings and of the total Higgs width, defined by Eq. (32). Looking for ξ-dependent effects in the highly off-shell region is an approach that raises sharp questions on the nature of the underlying extension of the SM; furthermore, it does not take into account variations in the SM background. The signal strength in 4l, relative to the expectation for the SM Higgs boson, is measured by CMS to be 0.91 +0.30/−0.24 [28] and by ATLAS to be 1.43 +0.40/−0.35 [29]. We adopt the approach of Ref. [30] [in particular Eqs. (1)-(18)], which is based on the κ-language, allowing for a consistent "Higgs Effective Field Theory" (HEFT) interpretation, see Ref. [31]. Loop-induced vertices are neglected, so that κ_g is a function of κ_t and κ_b.
Remark The measurement of off-shell effects can be interpreted as a constraint on γ_H only when we scale couplings and total width according to Eq. (32), so as to keep σ_peak untouched, although its value is known with 15-20% accuracy.

Proposition 4.1 The generalization of Eq. (32) gives, on the whole, a constraint in the multidimensional κ-space, since κ_g^2 = κ_g^2(κ_t, κ_b) and κ_H^2 = κ_H^2(κ_j, ∀j). Only under the assumption of degeneracy can we prove that off-shell effects "measure" κ_H; a combination of on-shell effects [measuring κ_i κ_f / κ_H] and off-shell effects [measuring κ_i κ_f, see Eq. (9)] gives information on κ_H without prejudices. Denoting by S the signal and by I the interference, and assuming that I_peak is negligible, we obtain the normalized S + I off-shell cross-section. The background, e.g. gg → 4l, is also changed by the inclusion of d = 6 operators, and one cannot claim that New Physics modifies only the signal.
• The total systematic error is dominated by theoretical uncertainties; therefore one should never accept theoretical predictions that cannot provide their uncertainty in a systematic way (i.e. via an algorithm).
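The on-shell/off-shell complementarity stated above can be sketched numerically. The inputs are toy signal strengths, and the scaling relations μ_on ∝ κ_i^2 κ_f^2/κ_H^2, μ_off ∝ κ_i^2 κ_f^2 are those of the degenerate-scaling scenario:

```python
import math

# On-shell strength measures (kappa_i kappa_f / kappa_H)^2, the off-shell
# one (kappa_i kappa_f)^2, so the ratio isolates kappa_H^2 = gamma_H/gamma_H^SM.

def kappa_h(mu_on_shell, mu_off_shell):
    return math.sqrt(mu_off_shell / mu_on_shell)

print(kappa_h(1.0, 1.0))   # SM-like peak and tail: kappa_H = 1
print(kappa_h(1.0, 4.0))   # off-shell excess with SM-like peak:
                           # kappa_H = 2, i.e. gamma_H = 4 gamma_H^SM
```

This is the sense in which the combination of the two regions gives information on κ_H "without prejudices": neither measurement alone fixes it.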
In Fig. 6 we consider the estimated theoretical uncertainty (THU) on the signal lineshape for a mass of 125.6 GeV. Note that PDF + α_s and QCD-scale uncertainties are not included there. As expected for a light Higgs boson, the EW THU is sizable only for large values of the off-shell tail, reaching ±4.7% at 1 TeV (the algorithm is explained in Ref. [14]). To summarize the various sources of parametric (PU) and theoretical (THU) uncertainties, we have the following THU summary:
➀ PDF + α_s; these have a Gaussian distribution;
➁ ✓ μ_R, μ_F (renormalization and factorization QCD scale) variations; they are the standard substitute for the missing-higher-order uncertainty (MHOU) [32]; MHOU is better treated in a Bayesian context with a flat prior (similarly for the background);
where ✓ means discussed in this note. When ➁ is included one should remember the N³LO effect in gluon-gluon fusion (estimated at +17% in Ref. [33]) and an additional +7% for an all-order estimate, see Ref. [32]. These numbers refer to the fully inclusive K-factors. The effect of varying the QCD scales on K and K_gg is shown in Fig. 7. Once again, it should be stressed that QCD scale variation is only a conventional simulation of the effect of missing higher orders. Taking Fig. 7 at face value, we register a substantial reduction in the uncertainty when K-factors are included. For instance, we find [−12.1%, +11.0%] for the NNLO prediction around the peak, [−10.9%, +9.9%] around 2 M Z and [−9.7%, +6.6%] at 1 TeV. The corresponding LO prediction is [−27.3%, +12.9%] around the peak, [−29.5%, +32.1%] around 2 M Z and [−38%, +42%] at 1 TeV. Note that μ_R enters also in the values of α_s.
Admittedly, showing the effect of QCD scale variations on K-factors is somewhat misleading, but we have adopted this choice in view of the fact that, operatively speaking, the experimental analysis will generate bins in M 4l. When reading Fig. 7 one should remember that the scale variation that increases (decreases) the distributions is the one decreasing (increasing) the K-factor. The NNLO and LO (camel-shaped) lineshapes, with QCD scale variations, are given in Fig. 8. The THU induced by QCD scale variation can be reduced by considering the (peak-)normalized lineshape, as shown in Fig. 9. In other words, the constraint on the Higgs intrinsic width should be derived by looking at the ratio of Eq. (35) as a function of γ_H/γ_H^SM, where N 4l is the number of 4-lepton events. Since the K-factor has a relatively small range of variation with the virtuality, the ratio in Eq. (35) is much less sensitive to higher-order terms.
An additional comment refers to Eqs. (42)-(43) of Ref. [21], where γ_H = γ_H^SM produces a negative number of events, a typical phenomenon occurring with large and destructive interference effects when only signal + interference is considered. Unless the notion of negative events is introduced (a background-subtracted number of events), the SM case cannot be included, as also shown in their Fig. 9, where only the portion γ_H > 4.58 (2.08) γ_H^SM should be considered for M 4l > 130 (300) GeV, roughly a factor of 10 smaller than the estimated bounds. This clearly demonstrates the importance of a consistent treatment of the background.

Improving THU for interference?
One could argue that zero knowledge on the background K-factor is too conservative an approach, but it should be kept in mind that it's better to be with no one than to be with the wrong one. Let us consider in detail the process i j → F; the amplitude can be written as the sum of a resonant (R) and a non-resonant (NR) part. We denote by LO the lowest order in perturbation theory at which a process starts contributing, and introduce K-factors that include higher orders.
Furthermore, introducing these K-factors, the interference is rescaled accordingly.

The soft-knowledge scenario
Neglecting PDF + α s uncertainties and those coming from missing higher orders, the major source of THU is due to the missing NLO interference. In Ref. [26] the effect of QCD corrections to the signal-background interference at the LHC has been studied for a heavy Higgs boson. A soft-collinear approximation to the NLO and NNLO corrections is constructed for the background process, which is exactly known only at LO. Its accuracy is estimated by constructing and comparing the same approximation to the exact result for the signal process, which is known up to NNLO, and the conclusion is that one can describe the signal-background interference to better than ten percent accuracy for large values of the Higgs virtuality. It is also shown that, in practice, a fairly good approximation to higher-order QCD corrections to the interference may be obtained by rescaling the known LO result by a K -factor computed using the signal process.
The approximation, when applied to the signal, remains fairly good down to 180 GeV and deteriorates rapidly only below the 2 M Z-threshold; note that both M 4l > 130 GeV and M 4l > 300 GeV have been considered in the study of Ref. [21]. The exact result for the background is missing, but the eikonal nature of the approximation should make it equally good for signal and for background (S. Forte, private communication). This line of thought looks very promising, with a reduction of the corresponding THU w.r.t. the zero-knowledge scenario, although its extension from the heavy-Higgs scenario to the light-Higgs off-shell scenario has not been completely worked out. In a nutshell, one can write the correction as a sum of a universal term, a process-dependent term, and a regular part,
where "universal" (the + distribution) gives the bulk of the result, "process dependent" (the δ-function) is known up to two loops for the signal but not for the background, and "reg" is the regular part. A possible strategy would be to use for the background the same "process dependent" coefficients and allow for their variation within some ad hoc factor. In this scenario the subtraction of the background cannot be performed at LO. It is worth noting that the simultaneous inclusion of higher-order corrections for Higgs production (NNLO) and Higgs decay (NLO) is a three-loop effect that is not balanced even by the introduction of the eikonal QCD K-factor for the background; three-loop mixed EW-QCD corrections are still missing, even at some approximate level. Note that the decay K-factor for 4l can be obtained by running Prophecy4f [34] in LO/NLO modes.

Background-subtracted lineshape
In Fig. 10 we present our results for σ S+I for the ZZ → 4e final state. The pseudo-observable σ S+I, which includes only signal and interference (and is not constrained to be positive), is now standard in the experimental analyses.
The blue curve in Fig. 10 gives the intermediate option for including the interference, and the cyan band the associated THU between the additive and multiplicative options; the multiplicative option is the green curve, and the red curves give the THU due to QCD scale variation for the intermediate option. A cut p_T^Z > 0.25 M 4e has been applied.

Remark Of course, one could adopt the soft-knowledge recipe, in which case the result is given by the green curve in Fig. 10; provisionally, one could assume a ±10% uncertainty, extrapolating the estimate made for the high-mass study in Ref. [26]. Background subtraction should be performed accordingly [K^b of Eq. (37)].

It is worth introducing a few auxiliary quantities [15]: the minimum M_1 and the half-minima M^±_1/2 of σ S+I. As observed in Ref. [15], the THU is tiny on M_1 and moderately larger for M^±_1/2.
Remark Alternatively, and taking into account the indication of Ref. [26], we could proceed as follows (I gratefully acknowledge the suggestion by S. Bolognesi): we can try to turn our three measures of the lineshape into a continuous estimate in each bin. There is a technique, called "vertical morphing" [35], that introduces a "morphing" parameter f which is nominally zero and has some uncertainty. If we define D_+ = max(D_A, D_M) and D_− = min(D_A, D_M), with D_I the intermediate option, the simplest "vertical morphing" replaces the discrete set of options with a continuous family. Of course, the whole idea depends on the choice of the distribution for f, usually Gaussian, which is not necessarily our case; instead, one would prefer to maintain, as much as possible, the indication from the soft-knowledge scenario (in a Bayesian sense). Therefore, we define two curves interpolating between the options and assume that the parameter λ, with 0 ≤ λ ≤ 1, has a flat distribution. We will have D_− < D_I < D_+, and a value of λ close to one (e.g. 0.9) gives less weight to the additive option, highly disfavored by the eikonal approximation. The corresponding THU band will be labelled VM(λ). Consider D_1 of Eq. (44): we have M_1 = 233.9 GeV, and the THU band corresponding to the full variation between the A-option and the M-option is 0.00171 fb, equivalent to ±39.9%. If we select λ = 0.9 in Eq. (47), the difference D_− − D_+ reduces the uncertainty to 0.00098 fb, equivalent to ±22.8%. The destructive effect of the interference shows how challenging it will be to put more stringent bounds on γ_H when γ_H → γ_H^SM. The off-shell effects are an ideal place where to look for "large" deviations from the SM (from γ_H^SM), where, however, large scalings of the Higgs couplings raise severe questions on the structure of the underlying BSM theory.
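The vertical-morphing technique [35] referred to above can be sketched in its standard piecewise-linear form; the λ-weighted variant in the text is a refinement of this, and the construction and numbers below are purely illustrative:

```python
# f is the morphing parameter, nominally 0; f = +1 (-1) reproduces the
# "up" ("down") template exactly.

def vertical_morph(d_nom, d_up, d_down, f):
    """Piecewise-linear vertical morphing between three templates."""
    if f >= 0:
        return d_nom + f * (d_up - d_nom)
    return d_nom + f * (d_nom - d_down)

# Nominal = intermediate option, up/down = envelope of the A/M options
# (toy values, chosen for illustration):
d_i, d_a, d_m = 1.0, 1.4, 0.8
d_up, d_down = max(d_a, d_m), min(d_a, d_m)
print(vertical_morph(d_i, d_up, d_down, 0.0))    # the nominal curve
print(vertical_morph(d_i, d_up, d_down, 1.0))    # the additive-side edge
print(vertical_morph(d_i, d_up, d_down, -0.5))   # halfway to the M-side
```

A prior on f concentrated toward the multiplicative side then plays the role of the λ-weighting above: it narrows the band by de-weighting the disfavored additive option.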
Definition There is an additional variable that we should consider: for instance, integrate dσ_S+I/dM^2_4l over bins of 2.25 GeV for M 4l > 212 GeV and obtain σ_S+I(i). Next, consider the ratio R_S+I(i) = σ_S+I(i)/σ_S+I(1), which is shown in Fig. 11, where the THU band is given by VM(0.9). To give an example, the THU corresponding to the bin at 300 GeV is 14.9%. The THU associated with QCD scale variations is given by the two dashed lines.
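The normalized binned variable just defined can be sketched as follows; the toy "distribution" is a placeholder, not the actual dσ_S+I/dM^2, and serves only to show that the first bin normalizes the others:

```python
# Integrate a density over fixed-size bins above a threshold and divide
# every bin by the first, so overall normalization (and much of the
# K-factor / scale dependence) cancels in the ratio.

def binned_ratio(dist, m_min, bin_width, n_bins):
    sigmas = []
    for i in range(n_bins):
        lo = m_min + i * bin_width
        # crude midpoint integration of the toy density over the bin
        sigmas.append(dist(lo + 0.5 * bin_width) * bin_width)
    return [s / sigmas[0] for s in sigmas]

toy = lambda m: 1.0 / m**2           # placeholder falling spectrum
r = binned_ratio(toy, 212.0, 2.25, 4)
print(r)                             # r[0] == 1 by construction
```

Because each bin is divided by the first, a slowly varying K-factor largely drops out of R_S+I, which is precisely why the ratio is the preferred observable above.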

Conclusions
The successful search for the on-shell Higgs-like boson has put little emphasis on the potential of the off-shell events; the attitude was "the issue of the Higgs off-shellness is very interesting but it is not relevant for low Higgs masses" and "for SM Higgs below 200 GeV, the natural width (mostly for MSSM as well) is much below the experimental resolution. We have therefore never cared about it for light Higgs. Just produce on-shell Higgs and let them decay in MC"; luckily the panorama is changing.
In 2012, it was demonstrated that, with a few assumptions and using events with pairs of Z particles, the high invariant mass tail can be used to constrain the Higgs width. One can also extract an upper limit on the Higgs width at the price of assuming that its couplings to the known particles are given by the Standard Model, yet allowing new particles to affect the width: the LHC is becoming a precision instrument even in the Higgs sector.
It is clear that one can't do much without a MC; therefore the analysis should be based on some LO MC or another. However, more inclusive NLO (or even NNLO) calculations show that the LO predictions can be far off, which means that re-weighting can be a better approximation, as long as it is accompanied by an algorithmic formulation of the associated theoretical uncertainty. The latter (almost) dominates the total systematic error, and precision Higgs physics requires control of both systematics, not only the experimental one. Very often THU is nothing more than educated guesswork, but a workable falsehood is more useful than a complex incomprehensible truth. In other words, closeness to the whole truth is in part a matter of the degree of informativeness of a proposition.
From Ref. [36] we derive that the exact behavior of F_off is controlled by the amplitude and by its first two derivatives. The form factors F_l admit a formal expansion in α_s, in which we have considered QCD corrections but not the EW ones.