Challenges in Semileptonic B Decays

Two of the elements of the Cabibbo-Kobayashi-Maskawa quark mixing matrix, $|V_{ub}|$ and $|V_{cb}|$, are extracted from semileptonic B decays. The results of the B factories, analysed in the light of the most recent theoretical calculations, remain puzzling, because for both $|V_{ub}|$ and $|V_{cb}|$ the exclusive and inclusive determinations are in clear tension. Further, measurements in the $\tau$ channels at Belle, Babar, and LHCb show discrepancies with the Standard Model predictions, pointing to a possible violation of lepton flavor universality. LHCb and Belle II have the potential to resolve these issues in the next few years. This article summarizes the discussions and results obtained at the MITP workshop held on April 9--13, 2018, in Mainz, Germany, with the goal of developing a medium-term strategy of analyses and calculations aimed at solving the puzzles. Lattice and continuum theorists working together with experimentalists have discussed how to reshape the semileptonic analyses in view of the much higher luminosity expected at Belle II, searching for ways to systematically validate the theoretical predictions in both exclusive and inclusive B decays, and to exploit the rich possibilities at LHCb.


Executive Summary
The magnitudes of two of the elements of the Cabibbo-Kobayashi-Maskawa (CKM) quark mixing matrix [1,2], $|V_{ub}|$ and $|V_{cb}|$, are extracted from semileptonic $B$-meson decays. The results of the B factories, analysed in the light of the most recent theoretical calculations, remain puzzling, because, for both $|V_{ub}|$ and $|V_{cb}|$, the determinations from exclusive and inclusive decays are in tension by about $3\sigma$. Recent experimental and theoretical results reduce the tension, but the situation remains unclear. Meanwhile, measurements in the semitauonic channels at Belle, Babar, and LHCb show discrepancies with the Standard Model (SM) predictions, pointing to a possible violation of lepton-flavor universality. LHCb and the upcoming Belle II experiment have the potential to resolve these issues in the next few years.
Thirty-five participants met at the Mainz Institute for Theoretical Physics to develop a medium-term strategy of analyses and calculations aimed at the resolution of these issues. Lattice and continuum theorists discussed with experimentalists how to reshape the semileptonic analyses in view of the much larger luminosity expected at Belle II and how to best exploit the new possibilities at LHCb, searching for ways to systematically validate the theoretical predictions, to confirm new physics indications in semitauonic decays, and to identify the kind of new physics responsible for the deviations.

Format of the workshop
The program took place during a period of five days, allowing for ample discussion time among the participants. Each of the five workshop days was devoted to specific topics: the inclusive and exclusive determinations of |V cb | and |V ub |, semitauonic B decays and how they can be affected by new physics, as well as related subjects such as purely leptonic B decays and heavy quark masses. In the mornings, we had overview talks from the experimental and theoretical sides, reviewing the main aspects and summarizing the state of the art. In the late afternoon, we organized discussion sessions led by experts of the various topics, addressing questions that have been brought up before or during the morning talks.

Exclusive heavy-to-heavy decays
The $B \to D^{(*)} \ell\nu$ decays have received significant attention in the last few years. New Belle results for the $q^2$ and angular distributions have allowed studies of the role played by the parametrization of the form factors in the extraction of $|V_{cb}|$. It turns out that the extrapolation to zero recoil is very sensitive to the parametrization employed, a problem that can be solved only by precise calculations of the form factors at non-zero recoil. Until these are completed, the situation remains unclear, with repercussions on the calculation of $R(D^*)$ as well, and with diverging views on the theoretical uncertainty of present estimates based on Heavy Quark Effective Theory (HQET) expressions.
Besides a critical reexamination of these recent developments, we discussed several incremental and qualitative improvements in lattice QCD, including in baryonic decays. Though unlikely to carry much weight in determining $|V_{cb}|$, the latter offer great opportunities to test lepton-flavor-universality violation (LFUV) and lattice QCD. The discussions also addressed the fact that QCD errors are now almost as small as QED effects. Further theoretical improvement therefore requires a proper study of QED radiation, in particular the treatment of soft photons and of photons that are neither soft nor hard, and their sensitivity to the meson wave functions.
Concerning studies of LFUV, we discussed the role played by higher excited charmed states in establishing new physics and the challenges that the present $R(D^{(*)})$ measurements pose for model building.

Exclusive heavy-to-light decays
The exclusive determination of $|V_{ub}|$ relies on nonperturbative calculations of the form factors of $B \to \pi \ell\nu$, which is the most precise channel. We discussed the status of the light-cone sum rule (LCSR) calculations and several recent improvements in lattice QCD, in particular the most recent results from the Fermilab Lattice & MILC Collaborations and from the RBC & UKQCD Collaborations, as well as future prospects. The Fermilab/MILC calculation alone leads to a remarkably small total error on $|V_{ub}|$, about 4%. While at present the most precise extraction of $|V_{ub}|$ comes from $B \to \pi \ell\nu$, it is worth considering the channel $B_s \to K \ell\nu$ as well, because here the lattice-QCD calculations are affected by somewhat smaller uncertainties. $B_s \to K \ell\nu$ would be accessible at Belle II in a run at the $\Upsilon(5S)$, where a precision of about 5-10% could be achieved with 1 fb$^{-1}$. In addition, LHCb has an ongoing analysis of the ratio $\mathcal{B}(B_s \to K \ell\nu)/\mathcal{B}(B_s \to D_s \ell\nu)$, which will provide a new determination of $|V_{ub}/V_{cb}|$. This approach follows the success that LHCb demonstrated for semileptonic baryon decays with the precise measurement of the ratio $\mathcal{B}(\Lambda_b \to p\mu\nu)/\mathcal{B}(\Lambda_b \to \Lambda_c \mu\nu)$ in the high-$q^2$ region. This measurement, combined with precise lattice-QCD calculations of the form factors, allowed the extraction of the ratio $|V_{ub}/V_{cb}|$ with an uncertainty of 7%. We also discussed other channels, in particular how to study $B \to \pi\pi \ell\nu$ including the resonant structures. Careful studies of other heavy-to-light channels will also be crucial to improve the signal model for the inclusive $|V_{ub}|$ measurements.

Inclusive heavy-to-heavy decays
The theoretical predictions in this case are based on an operator product expansion. Theoretical uncertainties already dominate current determinations, and better control of all higher-order corrections is needed to reduce them. In this respect, it would be important to have the perturbative-QCD corrections to the complete coefficient of the Darwin operator and to check the treatment of QED radiation in the experimental analyses. A full $O(\alpha_s^3)$ calculation of the total width may be within reach with recently developed techniques. From the experimental point of view, new and more accurate measurements will be most welcome, in particular to better understand the correlations between different moments and between moments with different cuts. A better determination of the higher hadronic-mass moments and a first measurement of the forward-backward asymmetry would benefit the global fit, as would a better understanding of higher power corrections. The importance of having global fits to the moments in different schemes and by different groups was also stressed; this calls for an update of the 1S-scheme fit and could lead to a cross-check of the present theoretical uncertainties. Lattice QCD already provides inputs to the fit through the calculation of the heavy-quark masses, which were reviewed. New developments discussed at the workshop may soon provide additional information that can be fed into the fits, such as constraints on the heavy-quark quantities $\mu_\pi^2$ and $\mu_G^2$. The two main approaches are (i) computing inclusive rates directly in lattice QCD and (ii) using the heavy-quark expansion for meson masses, computed precisely at different quark-mass values. The state of theoretical calculations for inclusive semitauonic decays was also discussed, as they represent an important cross-check of the LFUV signals.

Inclusive heavy-to-light decays
This determination is based on various well-founded theoretical methods, most of which agree well. The 2017 endpoint analysis by BaBar seems to challenge this consolidated picture, suggesting discrepancies between some of the methods and a lower value of $|V_{ub}|$. For the future, the complete NNLO corrections should be implemented in the full phase space, and the various methods should be upgraded in order to make the best use of the much higher-statistics Belle II differential data. These data will make it possible to test and calibrate the various methods, as they will contain information on the shape functions. The SIMBA and NNVub methods seem to have the potential to fully exploit the $B \to X_u \ell\nu$ (and possibly radiative) measurements through combined fits to the shape function(s) and $|V_{ub}|$. The separation of $B^\pm$ and $B^0$ in the experimental analyses will certainly help to constrain weak annihilation, but the real added value of Belle II could be precise measurements of kinematic distributions in $M_X$, $q^2$, $E_\ell$, etc. A detailed measurement of the high-$q^2$ tail might be very useful, also in view of attempts to check quark-hadron duality. Experimentally, better hybrid (inclusive+exclusive) Monte Carlos are badly needed; $s\bar{s}$ popping should be investigated to develop a better understanding of kaon vetoes. The $b \to c$ background will also be measured better, to the benefit of these analyses.

Leptonic decays
The measurement of $B \to \tau\nu$ is not yet competitive with semileptonic decays for measuring $|V_{ub}|$, because of a 20% error on the rate; Belle II will improve on this. The corresponding lattice-QCD calculation is, however, very precise, with an error below 1%, according to the 2019 FLAG report [3], based mainly on a Fermilab/MILC result presented at the workshop. That said, the mode is useful today to model builders trying to understand new-physics explanations of the tension between the inclusive and exclusive determinations of $|V_{ub}|$. Belle II will also access $B \to \mu\nu(\gamma)$, with the possibility of reaching an uncertainty on the branching fraction of about 5% with 50 ab$^{-1}$, allowing for a new determination of $|V_{ub}|$ in the long term. We also discussed the LHCb contribution to leptonic decays through the process $B \to \mu\mu\mu\nu_\mu$, where two of the muons come from virtual-$\gamma$ or light-vector-meson decays. A study of this channel has been published in [4]; the very stringent upper limit obtained there is inconsistent with existing branching-fraction predictions and calls for new, reliable theoretical calculations.
Heavy-to-heavy exclusive decays
The aim of this section is to present an overview of $b \to c$ exclusive decays. After an introduction to the parametrization of the relevant form factors between hadronic states, we describe the status of current lattice-QCD calculations, with particular focus on $B \to D^*$ and $\Lambda_b \to \Lambda_c$. Next, we discuss experimental measurements of $B \to D^{(*)}$ semileptonic decays, with special focus on the ratios $R(D^{(*)})$, and several phenomenological aspects of these decays: the extraction of $|V_{cb}|$, theoretical predictions for $R(D^{(*)})$, the role of $B \to D^{**}$ transitions, and constraints on new physics. We also briefly discuss the information that is required to reproduce results presented in experimental analyses and to incorporate older measurements into approaches based on modern form-factor parametrizations. We conclude with a description of HAMMER, a tool designed to ease the calculation of changes in signal acceptances, efficiencies, and signal yields in the presence of new physics.

Parametrization of the form factors
In this section, we introduce the form factors for the hadronic matrix elements that arise in semileptonic decays. Several different notations appear in the literature, often using different conventions depending on whether the final-state meson is heavy (e.g., D) or light (e.g., π). A general decomposition relies, however, only on Lorentz covariance and other symmetry properties of the matrix elements. As discussed below, it is advantageous to choose the Lorentz structure so that the form factors have definite parity and spin.
In this spirit, let us consider the matrix elements for a meson decay $\bar{B}_{(l)} \to X^{(*)} \ell\nu$, where the quark content of the $\bar{B}$ is $b\bar{l}$ with $l$ a light quark ($u$, $d$, or $s$), and the quark content of the $X$ is $q\bar{l}$, where $q$ can be either a light quark or the $c$ quark. The desired decomposition, Eqs. (2.1)-(2.6), is written in terms of the scalar current $S = \bar{b}q$, the pseudoscalar current $P = \bar{b}\gamma_5 q$, the vector current $V^\mu = \bar{b}\gamma^\mu q$, the axial current $A^\mu = \bar{b}\gamma^\mu\gamma_5 q$, and the tensor current $T^{\mu\nu} = \bar{b}\sigma^{\mu\nu} q$; here $q^\mu = (p - p')^\mu$ is the momentum transfer, $m_q$ is the mass of the quark $q$, $M$ is the mass of the parent meson ($\bar{B}$ in this case), $m$ (without subscript) is the mass of the daughter meson, and $r = m/M$. Contracting Eqs. (2.2) and (2.6) with $q_\mu$ and using the appropriate Ward identities shows that the scalar form factor, $f_0$, and the pseudoscalar form factor, $A_0$, appear in the vector and axial-vector transitions. The $J^P$ quantum numbers of the form factors are given in Table 2. One can impose bounds on the shape of these form factors by using QCD dispersion relations for a generic decay $H_b \to H_q \ell\nu$. Since the amplitude for production of $H_b H_q$ from a virtual $W$ boson is determined by the analytic continuation of the form factors from the semileptonic region of momentum transfer $m_\ell^2 \le q^2 \le (M-m)^2$ to the pair-production region $q^2 \geq (M+m)^2$, one can derive constraints in the pair-production region, which is amenable to perturbative-QCD calculations, and then propagate them to the semileptonic region by analyticity. The result of this procedure is the model-independent Boyd-Grinstein-Lebed (BGL) parametrization [5,6], which expands a form factor $F(z)$ in the dimensionless variable $z$ as
$$ F(z) = \frac{1}{B_F(z)\,\phi_F(z)} \sum_{n=0}^{\infty} a_n^F\, z^n, \qquad (2.8) $$
where $t_\pm = (M \pm m)^2$, the $B_F(z)$ are known as the Blaschke factors, which incorporate the below- or near-threshold [7] poles in the $s$-channel process $\ell\nu \to \bar{B}X$, and $\phi_F(z)$ is called the outer function.
The poles, and hence the Blaschke factor, depend on the spin and the parity of the intermediate state, which is why it is useful to work with fixed $J^P$ for the form factors. See Sec. 3.5.5 for more details. Of course, in practical applications the series (2.8) is truncated at some power $z^{n_F}$. By taking certain linear combinations of form factors with the same spin and parity one obtains the BGL notation for the helicity amplitudes, Eqs. (2.9)-(2.14), leaving aside the (BSM) tensor form factors. Here the velocity transfer
$$ w = v_M \cdot v_m = \frac{M^2 + m^2 - q^2}{2Mm}, \qquad (2.16) $$
with $v_M = p/M$ and $v_m = p'/m$, is often used in heavy-to-heavy decays. For heavy-to-light decays it can be helpful to work with the energy of the daughter meson in the rest frame of the parent, i.e.,
$$ E = \frac{M^2 + m^2 - q^2}{2M}. \qquad (2.17) $$
These form factors are subject to three kinematic constraints, namely
$$ (M^2 - m^2)\, f_+^{\rm BGL}(q^2 = 0) = f_0^{\rm BGL}(q^2 = 0), \qquad (2.18) $$
$$ (M - m)\, f(q^2 = q^2_{\rm max}) = F_1(q^2 = q^2_{\rm max}), \qquad (2.19) $$
$$ \frac{2}{M^2 - m^2}\, F_1(q^2 = 0) = F_2(q^2 = 0), \qquad (2.20) $$
where $q^2_{\rm max} = (M - m)^2$, corresponding to $w = 1$ and $E = m$. The variable $z$ can also be expressed via $w$,
$$ z = \frac{\sqrt{w+1} - \sqrt{2N}}{\sqrt{w+1} + \sqrt{2N}}, \qquad (2.21) $$
where $N = (t_+ - t_0)/(t_+ - t_-)$; $z$ is real for $q^2 \leq (M + m)^2$ and becomes a pure phase beyond that limit. The constant $t_0$ defines the point at which $z = 0$. Often one takes $t_0 = t_-$, one end of the kinematic range, so that $z$ ranges from 0 at $q^2 = q^2_{\rm max}$ to its maximum at $q^2 = 0$; other choices of $t_0$ instead set $z = 0$ in the middle of the kinematic range. Even for $B \to \pi \ell\nu$, $z$ is always a small quantity, which ensures fast convergence of the power series defined in (2.8).
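The kinematic map above is easy to check numerically. The short sketch below (illustrative only; the rounded $B$ and $D$ meson masses are assumed inputs, not taken from the text) implements $z(q^2, t_0)$ and $w(q^2)$ and shows that $|z|$ indeed stays small over the whole semileptonic range:

```python
import math

# Assumed illustrative masses (GeV): parent B and daughter D.
M, m = 5.2796, 1.8696
t_plus, t_minus = (M + m) ** 2, (M - m) ** 2  # pair-production and zero-recoil thresholds
q2_max = t_minus                               # q^2 at zero recoil (w = 1)

def z(q2, t0=t_minus):
    """Conformal variable z(q^2, t0) used in the BGL expansion."""
    a = math.sqrt(t_plus - q2)
    b = math.sqrt(t_plus - t0)
    return (a - b) / (a + b)

def w(q2):
    """Velocity transfer w = (M^2 + m^2 - q^2) / (2 M m)."""
    return (M * M + m * m - q2) / (2 * M * m)

print(z(q2_max))  # 0 at zero recoil for the choice t0 = t_minus
print(z(0.0))     # maximum of z over the semileptonic region, about 0.064
print(w(q2_max))  # 1 at zero recoil
```

With $t_0 = t_-$, the maximum of $z$ (reached at $q^2 = 0$) is only about 0.06 for $B \to D$, which is why a low-order truncation of the series suffices in practice.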
Unitarity constraints from the QCD dispersion relations are translated into constraints on the coefficients of the BGL expansion. In general, one has $\sum_n |a_n^F|^2 \leq 1$ for each form factor $F$, but in the particular case of $\bar{B} \to D^* \ell\nu$ the bound combines the coefficients of the $f$ and $F_1$ form factors, $\sum_n \left[ |a_n^f|^2 + |a_n^{F_1}|^2 \right] \leq 1$, because they have the same quantum numbers. These bounds are known as the weak unitarity constraints. A modification of the BGL parametrization by Bourrely, Caprini, and Lellouch (BCL) [8] is often chosen in analyses of heavy-to-light decays. The BCL parametrization improves on BGL by fixing two artifacts of the truncated series: it removes an unphysical singularity at the pair-production threshold and corrects the large-$q^2$ behavior (see [9,10]) of the functional form. These two modifications improve the convergence of the expansion. However, the kinematic range is much more constrained in the heavy-to-heavy case, and lies farther from both the pair-production threshold and the large-$q^2$ region. Therefore, far singularities or an incorrect asymptotic behavior are not expected to spoil the $z$-expansion in that case.
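A truncated expansion with a weak-unitarity check can be sketched in a few lines. This is a minimal illustration only: the Blaschke and outer factors are omitted, and the coefficient values are made up, not taken from any fit.

```python
# Sketch of a truncated BGL-type series sum_n a_n z^n with the weak unitarity
# bound sum_n |a_n|^2 <= 1 imposed on the coefficients.
# (Blaschke factor B_F(z) and outer function phi_F(z) omitted for simplicity.)

def bgl_series(zv, coeffs):
    """Truncated power series sum_n a_n z^n."""
    return sum(a * zv ** n for n, a in enumerate(coeffs))

def satisfies_weak_unitarity(coeffs):
    """Weak unitarity: the squared coefficients must sum to at most 1."""
    return sum(a * a for a in coeffs) <= 1.0

a = [0.012, -0.10, 0.3]              # hypothetical coefficients
print(satisfies_weak_unitarity(a))   # these satisfy the bound
print(bgl_series(0.03, a))           # form-factor series evaluated at z = 0.03
```

Because $|z|$ is small, coefficients allowed by the bound ($|a_n| \lesssim 1$) contribute less and less at each order, which is the practical content of the unitarity constraints.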
In the heavy-to-heavy case, one can sharpen the weak unitarity constraints on the BGL coefficients using heavy quark symmetry (HQS), which relates the different $B^{(*)} \to D^{(*)} \ell\nu$ channels and their form factors: each form factor is either proportional to the Isgur-Wise function $\xi(w)$ or zero. Using heavy quark effective theory (HQET), one can improve the precision by introducing radiative and power corrections (i.e., corrections in inverse powers of the heavy masses). Any form factor can then be defined in such a way that it admits a combined expansion in $\alpha_s$ and in the inverse heavy-quark masses. These expansions can be used to link the $z$-expansion coefficients of different form factors, leading to the so-called strong unitarity constraints [11,12]. The power corrections depend on subleading Isgur-Wise functions that have been estimated with QCD sum rules [13,14,15].
Previous analyses of $\bar{B} \to D^* \ell\nu$ have used the Caprini-Lellouch-Neubert (CLN) parametrization [11]. CLN employ a notation for the form factors that satisfies (2.24), where the letter naming the form factor ($S$, $P$, $V$, and $A$) encodes its quantum numbers (scalar, pseudoscalar, vector, and axial vector), and $R^{\rm CLN}_{1,2}$ are two convenient ratios of form factors. In the CLN parametrization the strong unitarity constraints obtained with HQET at NLO are used to remove some of the coefficients of the $z$ expansion. Further, specific numerical coefficients are introduced in a polynomial in $w$ for $R^{\rm CLN}_{1,2}$. The numerical values were determined using information available in 1997, which has since been partly superseded but not updated. The published values also omit error estimates (which were discussed in the original CLN paper [11], although in an optimistic manner), because at the time the experimental statistical errors dominated, which is no longer the case. The consensus of the workshop is that CLN should no longer be used, certainly not unless the numerical coefficients are updated and the ensuing theoretical uncertainties are accounted for. It is better to use a general form of the $z$ expansion.
HQET naturally provides another basis for the form factors of the $\bar{B} \to D^{(*)} \ell\nu$ processes. Using velocities instead of momenta, and otherwise mimicking the Lorentz structure of Eqs. (2.2), (2.5), and (2.6), the notation is $h_+$ and $h_-$ for $\bar{B} \to D \ell\nu$, and $h_V$ and $h_{A_1}$, $h_{A_2}$, $h_{A_3}$ for $\bar{B} \to D^* \ell\nu$. In the heavy-quark limit, each of these form factors tends to either the Isgur-Wise function or zero. An analogous decomposition is used for the baryonic $\Lambda_b \to \Lambda_c$ transition, where $M$ is the mass of the $\Lambda_b$, $m$ is the mass of the daughter baryon, and $s_\pm = (M \pm m)^2 - q^2$. The $z$ expansions for the baryonic form factors employed in Ref. [16] use trivial outer functions and do not impose unitarity bounds on the coefficients of the expansion. As a result, the coefficients are unconstrained and reach values as high as $\sim 10$. See also Sec. 3.5.5.

Heavy-to-heavy form factors from lattice QCD
The lattice-QCD calculation of the form factors for the semileptonic decay of a hadron uses two- and three-point correlation functions, which are constructed from valence-quark propagators obtained by solving the Dirac equation on a set of gluon-field configurations. Averaging the correlation functions over the gluon-field configurations then yields the appropriate Feynman path integral. The two-point correlation functions give the amplitude for a hadron to be created at the time origin and then destroyed at a time $T$. The three-point correlation functions include the insertion of a current $J$ at time $t$ on the active quark line, changing the active quark from one flavor to another. Usually calculations are performed with the initial hadron at rest. Momentum is inserted at the current, so that a range of momentum transfer, $q$, from the initial to the final hadron can be mapped out. The three-point correlation functions (for multiple $q$ values) and the two-point correlation functions (with multiple momenta in the case of the final-state hadron) are fit as functions of $t$ and $T$ to determine the matrix elements of the currents between initial and final hadrons, which yield the required form factors. An important point here is that the initial and final hadrons of interest are the ground-state particles in their respective channels. Terms corresponding to excited states must therefore be included in the fits, to make sure that systematic effects from excited-state contamination are taken into account in the fit parameters that yield the ground-state-to-ground-state matrix element of $J$ and hence the form factors.
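The excited-state contamination described above can be seen in a toy model of a two-point correlator. The sketch below is purely illustrative: the energies and overlap amplitudes are invented numbers, not from any lattice ensemble, but the qualitative behavior (an "effective mass" that plateaus at the ground-state energy once the excited state has decayed away) is the same as in real analyses.

```python
import math

# Toy two-point correlator C(T) = A0 e^{-E0 T} + A1 e^{-E1 T}. At large T the
# excited state dies away, and the effective mass ln[C(T)/C(T+1)] plateaus at E0.
E0, E1 = 0.75, 1.40   # ground- and excited-state energies in lattice units (illustrative)
A0, A1 = 1.0, 0.6     # overlap amplitudes (illustrative)

def corr(T):
    return A0 * math.exp(-E0 * T) + A1 * math.exp(-E1 * T)

def m_eff(T):
    """Effective mass from the ratio of neighboring time slices."""
    return math.log(corr(T) / corr(T + 1))

print(m_eff(1))    # still contaminated by the excited state: noticeably above E0
print(m_eff(15))   # essentially at the plateau value E0
```

In practice one fits $C(T)$ to a multi-exponential form rather than relying on the plateau, but the toy shows why the excited-state terms must be part of the fit model.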
Statistical uncertainties in the form factors obviously depend on the number of samples of gluon-field configurations on which the correlation functions are calculated. To improve the statistical accuracy further, calculations usually include multiple positions of the time origin for the correlation functions on each configuration. The numerical cost of calculating quark propagators falls as the quark mass increases, so heavy ($b$ and $c$) quark propagators are typically numerically inexpensive. The accompanying light-quark propagators for heavy-light hadrons are much more expensive, especially if $u/d$ quarks with physically light masses are required. It is this cost that limits the attainable statistical accuracy, especially since the statistical uncertainty of a heavy-light-hadron correlation function (on a given number of gluon-field configurations) also grows as the separation in mass between the heavy and light quarks increases.
A key issue for heavy-to-heavy (b to c) form factor calculations is how to handle heavy quarks on the lattice. Discretization of the Dirac equation on a space-time lattice gives systematic discretization effects that depend on powers of the quark mass in lattice units. The size of these effects depends on the value of the lattice spacing and the power with which the effects appear (i.e., the level of improvement used in the lattice Lagrangian).
Since the $b$ quark is so heavy, its mass in lattice units will be larger than 1 on all but the finest lattices ($a < 0.05$ fm) currently in use. Highly improved discretizations of the Dirac equation are needed to control the discretization effects. A good example of such a lattice quark formalism is the highly improved staggered quark (HISQ) action developed by HPQCD [17] for both light and heavy quarks, with discretization errors appearing at $O(\alpha_s (am)^2)$ and $O((am)^4)$. An alternative approach is to make use of the fact that $b$ quarks are nonrelativistic inside their bound states. This means that a discretization of a nonrelativistic action (NRQCD) can be used, expanding the action to some specified order in the $b$-quark velocity. Discretization effects then depend on the scales associated with the internal dynamics, which are all much smaller than the $b$-quark mass. Relativistic effects can be included and discretization effects corrected at the cost of complicating the action with additional operators. A third possibility is to start from the Wilson quark action, or improved versions of it, and to tune the parameters (such as the quark mass) using a nonrelativistic dispersion relation for the meson; this is known as the Fermilab method [18]. It removes the leading source of mass-dependent discretization effects, whilst retaining a discretization that connects smoothly to the continuum limit. Again, improved versions of this approach (such as the Oktay-Kronfeld action [19]) include additional operators.
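The statement that $am_b > 1$ on all but the finest lattices is a one-line unit conversion, $am_q = a[\mathrm{fm}]\, m_q[\mathrm{GeV}] / (\hbar c)$, with $\hbar c \simeq 0.1973$ GeV fm. The quark-mass values below are rough illustrative choices, not quoted from the text:

```python
# Quark mass in lattice units: a*m_q = a[fm] * m_q[GeV] / (hbar*c).
HBARC = 0.19733  # GeV fm

def am(a_fm, m_gev):
    return a_fm * m_gev / HBARC

m_b, m_c = 4.2, 1.27  # rough b and c quark masses in GeV (illustrative)
print(am(0.09, m_b))   # b quark on a typical a = 0.09 fm lattice: ~1.9, i.e. > 1
print(am(0.045, m_b))  # drops below 1 only on very fine lattices
print(am(0.12, m_c))   # c quark: comfortably below 1 on lattices in current use
```

This is why the $c$ quark can be treated with a light-quark action while the $b$ quark requires either very fine lattices or an effective-action approach.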
The $c$ quark has a mass larger than $\Lambda_{\rm QCD}$, but within lattice QCD it can be treated successfully as a light quark, because its mass in lattice units is less than 1 on lattices in current use (with $a < 0.15$ fm). This means that, although discretization effects are visible in lattice-QCD calculations with $c$ quarks, they are not large and can easily be extrapolated away accurately for a continuum result. For example, discretization effects are less than 10% at $a = 0.15$ fm in calculations of the decay constant of the $D_s$ using the HISQ action [20]. Purely nonrelativistic approaches to the $c$ quark are therefore not useful on the lattice. There can nevertheless be some advantage for $b \to c$ form-factor calculations in using the same action for $b$ and $c$, as we discuss below.
Because lattice and continuum QCD regularize the theory in different ways, the lattice current $J$ needs a finite renormalization factor to match its continuum counterpart, so that matrix elements of $J$, and the form factors derived from them, can be used in continuum phenomenology. For NRQCD and Wilson/Fermilab quarks the current $J$ must be normalized using lattice-QCD perturbation theory. Since this is technically rather challenging, it has only been done through $O(\alpha_s)$, which leaves a sizeable (possibly several percent) systematic error from missing higher-order terms in the perturbation theory. If Wilson/Fermilab quarks are used for both $b$ and $c$ quarks, then arguments can be made about the approach to the heavy-quark limit that reduce, but do not eliminate, this uncertainty [21].
Relativistic treatments of the $b$ and $c$ quarks have a big advantage here, because $J$ can generally be normalized in a fully nonperturbative way within the lattice-QCD calculation, without additional systematic errors. The advantages of this approach were first demonstrated by the HPQCD collaboration, which used the HISQ action to determine the decay constant of the $B_s$ [22]; the HISQ PCAC relation normalizes the axial-vector current in this case. Calculations for multiple quark masses on lattices with multiple values of the lattice spacing allow both the physical dependence of the decay constant on the quark mass and the dependence of the discretization effects to be mapped out, so that the physical result at the $b$-quark mass can be determined. This calculation has now been updated and extended to the $B$ meson by the Fermilab Lattice and MILC collaborations [23], achieving better than 1% uncertainty. HPQCD is now applying a similar approach to $b \to c$ form-factor calculations [24], and the JLQCD collaboration is also working in that direction [25] with Möbius domain-wall quarks.
An equivalent approach, using ratios of hadronic quantities at different quark masses in which the normalization factors cancel, has been developed by the European Twisted Mass collaboration using the twisted-mass formulation [26,27] of Wilson fermions.

$B \to D^{(*)}$ form factors from lattice QCD
Early lattice-QCD calculations of $B \to D$ form factors were limited to the determination of $\mathcal{G}^{B\to D}(w) = \sqrt{4r}\, f_+(q^2)/(1+r)$ (with notation defined near (2.16)) at the zero-recoil point $w = 1$. Results include the $N_f = 2+1$ calculation of Fermilab/MILC [28,29] and the $N_f = 2$ calculation of Atoui et al. [30]. More recently, Fermilab/MILC [31] and HPQCD [32,33] have presented $N_f = 2+1$ calculations of the $B \to D$ form factor at non-zero recoil, based on partially overlapping subsets of the same MILC asqtad ($a^2$ tadpole-improved) ensembles.
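Converting between the two conventions is a one-line exercise. The sketch below assumes the commonly used relation $\mathcal{G}(w) = \sqrt{4r}\, f_+(q^2)/(1+r)$ with $r = m/M$; the meson masses are rounded illustrative values and the $f_+$ input is hypothetical:

```python
import math

# Conversion between f_+(q^2) and the zero-recoil combination
# G(w) = sqrt(4 r) * f_+(q^2) / (1 + r), with r = m_D / m_B.
M, m = 5.2796, 1.8696   # illustrative B and D masses in GeV
r = m / M

def g_of_w(f_plus):
    return math.sqrt(4.0 * r) * f_plus / (1.0 + r)

f_plus_max = 1.2  # hypothetical value of f_+ near q^2_max, for illustration only
print(g_of_w(f_plus_max))
```

Since $\sqrt{4r}/(1+r) \le 1$, $\mathcal{G}(w)$ is always slightly smaller than $f_+(q^2)$ at the same kinematic point.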
The Fermilab/MILC calculation [31] uses configurations with four different lattice spacings and pion masses in the range 260-670 MeV. The bottom and charm quarks are implemented in the Fermilab approach. The form factors $f^{B\to D}_{+,0}(w)$ are extracted from double ratios of three-point functions, up to a matching factor calculated at one loop in lattice perturbation theory. The results are presented in terms of three synthetic data points, which can subsequently be fitted using any form-factor parametrization. The systematic uncertainty due to the joint continuum-chiral extrapolation is about 1.2% and dominates the error budget.
The HPQCD calculations [32,33] rely on ensembles with two different lattice spacings and two or three light-quark mass values, respectively. The treatment of the heavy quarks differs from that of the Fermilab/MILC papers: the bottom quark is described in NRQCD and the charm quark with HISQ. The form factors are extracted from appropriate three-point functions, and the results are presented in terms of the parameters of a modified BCL $z$ expansion that incorporates the dependence on the lattice spacing and light-quark masses into the expansion coefficients.
In order to combine the Fermilab/MILC and HPQCD results [3], it is necessary to generate a set of synthetic data that is (almost exactly) equivalent to the HPQCD calculation. The two sets of synthetic data can then be combined, taking into account the correlation due to the fact that Fermilab/MILC and HPQCD share MILC asqtad configurations. As mentioned above, the dominant uncertainties are systematic, so this correlation (whose estimate is rather uncertain) is a subdominant effect. A simultaneous fit of the Fermilab/MILC and HPQCD synthetic data together with the available Belle and Babar data yields a determination of $|V_{cb}|$ with an overall 2.5% uncertainty, dominated by the experimental error, which contributes about 2% to the total.
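Taken at face value, the quoted numbers imply a theory (lattice) component of about 1.5%: if the 2.5% total and the 2% experimental part add in quadrature, the remainder follows from one line of arithmetic (a rough decomposition, assuming uncorrelated components):

```python
import math

# Quadrature split of the quoted |Vcb| error budget: 2.5% total, ~2% experimental.
total, experimental = 2.5, 2.0                      # percent, as quoted in the text
theory = math.sqrt(total ** 2 - experimental ** 2)  # remaining component in quadrature
print(theory)  # 1.5 (percent)
```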
Finally, both collaborations present values for both the $f_+$ and $f_0$ form factors, which allow for a lattice-only calculation of the SM prediction for $R(D)$. The uncertainty on the combined Fermilab/MILC and HPQCD determination of $R(D)$, without experimental input, is about 2.5%, negligible compared to current experimental errors.
The advantage of an approach in which the currents can be nonperturbatively normalised has been demonstrated by HPQCD for the $B_s \to D_s$ form factors in [34]. They use the HISQ action for all quarks, extending the method developed for decay constants. The range of heavy-quark masses can be increased on successively finer lattices (keeping the value in lattice units below 1) until the full range from $c$ to $b$ is reached. The full $q^2$ range of the decay can also be covered by this method, since the spatial momentum of the final-state meson (which should also be less than 1 in lattice units) grows in step with the heavy meson/quark mass. The results of [34] improve on the uncertainties obtained in [33] with NRQCD $b$ quarks, and this promising all-HISQ approach is now being extended to other processes.
Calculations of $B \to D^*$ form factors at non-zero recoil are considerably more involved, owing to the difficulty of describing the resonant $D^* \to D\pi$ decay. Up to now, lattice-QCD simulations have focused on the single $B \to D^*$ form factor that contributes to the rate at zero recoil, $A_1(q^2_{\rm max})$, for which the quantity generally quoted is $h_{A_1}(1)$. The combination of the lattice-QCD result and the experimental rate, extrapolated to zero recoil, yields a value for $|V_{cb}|$. The Fermilab Lattice/MILC Collaborations have achieved the highest precision for this result so far [35]. They use improved Wilson quarks within the Fermilab approach for both $b$ and $c$ quarks and work on gluon-field configurations that include $u/d$ (with equal mass) and $s$ quarks in the sea ($n_f = 2+1$) using the asqtad action. By taking a ratio of three-point correlation functions they are simultaneously able to improve their statistical accuracy and to reduce part of the systematic uncertainty from the normalization of their current operator. Their result is $h_{A_1}(1) = 0.906(4)(12)$, where the uncertainties are statistical and systematic, respectively. The systematic error is dominated by discretization effects; the uncertainty from missing higher-order terms in the perturbative current matching [36] is taken to be $0.1\alpha_s^2$.
[Figure 2.1, taken from Ref. [24]: comparison of lattice-QCD results for $h_{A_1}(1)$ (left side) and $h^s_{A_1}(1)$ (right side). Raw results for $h_{A_1}(1)$ from [37] and [35] are plotted as a function of the valence (= sea) light-quark mass, given by the square of $M_\pi$. On the right are points for $h^s_{A_1}(1)$ from [37], plotted at the appropriate valence mass for the $s$ quark but obtained at physical sea light-quark masses. The final result for $h_{A_1}(1)$ from [35], with its full error bar, is given by the inverted blue triangle. The inverted red triangles give the final results for $h_{A_1}(1)$ and $h^s_{A_1}(1)$ from [37]. The HPQCD results of [24] are given by the black stars.]
The HPQCD collaboration has calculated $h_{A_1}(1)$ on gluon field configurations that include $n_f = 2+1+1$ HISQ sea quarks, using NRQCD $b$ quarks and HISQ $c$ quarks [37]. Their result, $h_{A_1}(1) = 0.895(10)(24)$, has a larger uncertainty, dominated by the systematic uncertainty of $0.5\,\alpha_s^2$ allowed for in the current matching. They were also able to calculate the equivalent result for $B_s \to D^*_s$, obtaining $h^s_{A_1}(1) = 0.879(12)(26)$ and demonstrating that the dependence on the light-quark mass is small. The $B_s \to D^*_s$ decay provides a better lattice QCD comparison point than $B \to D^*$ because it is less sensitive to light-quark masses (in particular to the $D^*D\pi$ 'cusp') and to the volume. More recently the HPQCD collaboration has used the HISQ action for all quarks, with a fully nonperturbative current normalization, to determine $h^s_{A_1}(1)$ [24]. Their result, $h^s_{A_1}(1) = 0.9020(96)(90)$, agrees well with the earlier results and has smaller systematic uncertainties. Figure 2.1 compares the three results.
The importance of being able to compare lattice QCD and experiment away from the zero-recoil point is now clear, and several lattice QCD calculations are underway, aiming to cover the full $q^2$ range of the decay and all four form factors. This includes calculations for $B \to D^*$ from JLQCD [25] with Möbius domain-wall quarks, from Fermilab/MILC [38] (see also the talk at Lattice 2019) with improved Wilson/Fermilab quarks, and from LANL/SWME with an improved version of this formalism known as the Oktay-Kronfeld action [39]. Calculations for other $b \to c$ pseudoscalar-to-vector form factors, $B_s \to D^*_s$ [40] and $B_c \to (J/\psi, \eta_c)$, are also underway from HPQCD [41,42] using the all-HISQ approach. At the same time further $B \to D$ and $B_s \to D_s$ form factor calculations are in progress, including some that use a variant of the Fermilab approach known as Relativistic Heavy Quarks on RBC/UKQCD configurations [43]. In the future we should be able to compare results from multiple actions with experiment, for improved accuracy in determining $|V_{cb}|$.
The $\Lambda_b \to \Lambda_c$ form factors have been calculated with 2+1 dynamical quark flavors; the vector and axial-vector form factors can be found in Ref. [16], while the tensor form factors (which contribute to the decay rates in many new-physics scenarios) were added in Ref. [44]. This calculation used two lattice spacings of approximately 0.11 fm and 0.08 fm, sea-quark masses corresponding to pion masses in the range from 360 down to 300 MeV, and valence-quark masses corresponding to pion masses in the range from 360 down to 230 MeV. The lattice data for the form factors, which cover the kinematic range from near $q^2_{\rm max} \approx 11\,{\rm GeV}^2$ down to $q^2 \approx 7\,{\rm GeV}^2$, were fitted with a "modified" version of the BCL $z$ expansion [8] discussed in Sec. 2.1, in which an expansion in powers of the lattice spacing and quark masses is performed simultaneously with the expansion in $z$. No dispersive bounds were used in the $z$ expansion here (this is something that can perhaps be improved in the future; see also Sec. 3.5.5). The form factors extrapolated to the continuum limit and physical pion mass yield Standard Model predictions for the fully integrated decay rate with a total uncertainty of 6.3% (corresponding to a 3.2% theory uncertainty in a possible $|V_{cb}|$ determination from this decay rate), for the partially integrated decay rate with a total uncertainty of 4.5% (corresponding to 2.3% for $|V_{cb}|$), and for the lepton-flavor-universality ratio with a total uncertainty of 3.1%. The systematic uncertainties of the vector and axial-vector form factors are dominated by finite-volume effects and the chiral extrapolation.
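Schematically (the notation here is illustrative; the precise ansatz is defined in Ref. [16]), the "modified" expansion dresses each $z$ coefficient with lattice-spacing and quark-mass correction terms that vanish in the physical limit:

```latex
f(q^{2}) \;=\; \frac{1}{1-q^{2}/m_{\rm pole}^{2}}
  \sum_{n=0}^{N} a_{n}\,
  \Bigl[\,1 \;+\; b_{n}\,(a\Lambda)^{2}
        \;+\; c_{n}\,\frac{m_{\pi}^{2}-m_{\pi,\rm phys}^{2}}{\Lambda_{\chi}^{2}}\Bigr]\,
  z^{n}(q^{2})\,,
```

with $a$ the lattice spacing and $a_n$ the physical coefficients kept after the chiral/continuum extrapolation; $b_n$ and $c_n$ are illustrative names for the correction coefficients.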
Both of these can be reduced substantially in the future by adding a new lattice gauge-field ensemble with physical light-quark masses and a large volume, and by dropping the "partially quenched" data sets, in which the valence-quark masses are lighter than the sea-quark masses. Adding another ensemble at a third, finer lattice spacing will also help to better control the continuum extrapolation.
At this workshop there was some discussion about the validity of the modified $z$ expansion; it has been argued that it would be safer to first perform the chiral/continuum extrapolation and then perform a secondary $z$-expansion fit. This is expected to make a difference mainly if nonanalytic quark-mass dependence from chiral perturbation theory is included. However, the fits used in Ref. [16] for the $\Lambda_b$ form factors were analytic in the lattice spacing and light-quark mass. Note that the shape of the $\Lambda_b \to \Lambda_c \mu^-\bar\nu_\mu$ differential decay rate was later measured by LHCb and found to be in good agreement with the lattice QCD prediction all the way down to $q^2 = 0$ [45].
Motivated by the prospect of an LHCb measurement of $R(\Lambda_c^*)$, work is now also underway to compute the $\Lambda_b \to \Lambda_c^*$ form factors in lattice QCD, for the $\Lambda_c^*(2595)$ and $\Lambda_c^*(2625)$, which have $J^P = \frac{1}{2}^-$ and $J^P = \frac{3}{2}^-$, respectively. Preliminary results were shown at the workshop. For these form factors the challenge is that, to project the $\Lambda_c^*$ interpolating field exactly onto negative parity and avoid contamination from the lower-mass positive-parity states, one needs to perform the lattice calculation in the $\Lambda_c^*$ rest frame. With the $b$-quark action currently in use, discretization errors growing with the $\Lambda_b$ momentum then limit the accessible kinematic range to a small region near $q^2_{\rm max}$. To predict $R(\Lambda_c^*)$, it will be necessary to combine the lattice QCD results for the form factors in the high-$q^2$ region with heavy-quark effective theory and with LHCb data for the shapes of the $\Lambda_b \to \Lambda_c^* \mu^-\bar\nu_\mu$ differential decay rates [46].

In the case of $B \to D\ell\nu$ one can also use the existing lattice calculations at non-zero recoil [31,32] to guide the extrapolation to zero recoil, together with the $w$ spectrum measured by Belle [48]. In the BGL parametrization this leads to a higher value, $|V_{cb}| = 40.83(1.13) \times 10^{-3}$, a more reliable determination than (2.41). In the following we take a closer look at the most recent measurements by the various experiments.
Belle has recently updated the untagged measurement of the $B^0 \to D^{*-}\ell^+\nu_\ell$ mode [49]. While the new analysis is based on the same 711 fb$^{-1}$ Belle data set, it takes advantage of a major improvement of the track-reconstruction software, implemented in 2011, which leads to a substantially higher slow-pion tracking efficiency and hence to much larger signal yields than in the previous publication [50]. Again, $D^{*+}$ mesons are reconstructed in the cleanest mode, $D^{*+} \to D^0\pi^+$ followed by $D^0 \to K^-\pi^+$, combined with a charged light lepton (electron or muon), and yields are extracted in 10 bins for each of the 4 kinematic variables describing the $B^0 \to D^{*-}\ell^+\nu_\ell$ decay. These yields are published along with their full error matrix. The updated publication also contains an analysis of these yields using both the CLN and the BGL form factors (where the BGL fit has only 5 free parameters). The CLN analysis results in $\eta_{\rm EW}\mathcal{F}(1)|V_{cb}| = (35.06 \pm 0.15({\rm stat}) \pm 0.56({\rm syst})) \times 10^{-3}$, while the BGL fit gives $\eta_{\rm EW}\mathcal{F}(1)|V_{cb}| = (34.93 \pm 0.23({\rm stat}) \pm 0.59({\rm syst})) \times 10^{-3}$. The two results are thus fully consistent. This contrasts with a tagged measurement of $B^0 \to D^{*-}\ell^+\nu_\ell$ first shown by Belle in November 2016 [51]: analyzing the raw data of that measurement in terms of the CLN and BGL form factors gives a difference of almost two standard deviations in $|V_{cb}|$ [52,53]. However, that result has remained preliminary and will not be published. A new tagged analysis, using an improved version of the hadronic tag, is now underway and should clarify the experimental situation.
Babar has presented a full four-dimensional angular analysis of $B^0 \to D^{*-}\ell^+\nu_\ell$ decays, using both the CLN and BGL parametrizations [54]. This analysis is based on the full data set of 450 fb$^{-1}$ and exploits the hadronic B-tagging approach. The full decay chain $e^+e^- \to \Upsilon(4S) \to B_{\rm tag}B_{\rm sig}(\to D^*\ell\nu_\ell)$ is considered in a kinematic fit that includes constraints on the beam properties, the secondary vertices, the masses of $B_{\rm tag}$, $B_{\rm sig}$, and $D^*$, and the missing neutrino. After requirements on the $\chi^2$ probability of this constrained fit, which is the main discriminating variable, the remaining background is only about 2% of the sample. The resolution on the kinematic variables is about a factor of five better than what is possible in untagged measurements. The shape of the form factors is extracted using an unbinned maximum-likelihood fit in which the signal events are described by the four-dimensional differential decay rate. The extraction of $|V_{cb}|$ is performed indirectly, by adding to the likelihood the constraint that the integrated rate equal $\Gamma = \mathcal{B}/\tau_B$, where $\mathcal{B}$ is the $B \to D^*\ell\nu_\ell$ branching fraction and $\tau_B$ is the B-meson lifetime; the values of these external inputs are taken from HFLAV [47]. The final result, using $h_{A_1}(1)$ from [35], is $|V_{cb}| = (38.36 \pm 0.90) \times 10^{-3}$ with a five-parameter BGL version and $|V_{cb}| = (38.40 \pm 0.84) \times 10^{-3}$ in the CLN case, both compatible with the above HFLAV average. Nevertheless, the individual form factors show significant deviations from the world-average CLN determination by HFLAV.
LHCb has extracted $|V_{cb}|$ from semileptonic $B^0_s$ decays for the first time [55]. The measurement uses both $B^0_s \to D_s^-\mu^+\nu_\mu$ and $B^0_s \to D_s^{*-}\mu^+\nu_\mu$ decays in 3 fb$^{-1}$ collected in 2011 and 2012. The value of $|V_{cb}|$ is determined from the observed yields of $B^0_s$ decays normalized to those of $B^0$ decays, after correcting for the relative reconstruction and selection efficiencies. The normalization channels are $B^0 \to D^-\mu^+\nu_\mu$ and $B^0 \to D^{*-}\mu^+\nu_\mu$, with the $D^-$ reconstructed in the same decay mode as the $D_s^-$.

Table 2.2: Summary of $R_D$ and $R_{D^*}$ measurements and theoretical predictions. The number of observed signal and normalization events is also reported. The normalization channel is $B \to D^{(*)}\ell\nu$ for all measurements except the LHCb one with three-prong $\tau$ decays, where the normalization channel is $B \to D^*\pi\pi\pi$. The latter LHCb measurement has been updated using the latest HFLAV average for $\mathcal{B}(B \to D^*\ell\nu)$. The quoted theory predictions are arithmetic averages of the values reported in Table 2.3 below; they are given for illustration only and do not imply consent from the authors of the calculations.
where the first uncertainty is statistical, the second systematic, and the third due to the limited knowledge of external inputs, in particular the $B^0_s$-to-$B^0$ production ratio $f_s/f_d$, which is known with an uncertainty of about 5%. The results are compatible with both the inclusive and the exclusive determinations. Although not competitive with the results obtained at the B factories, this novel approach can be extended to semileptonic $B^0$ decays.

Past measurements of $R(D)$ and $R(D^*)$
$R_D$ and $R_{D^*}$ are defined as the ratios of the semileptonic decay widths of $B_d$ and $B_u$ mesons to a $\tau$ lepton and its associated neutrino $\nu_\tau$ over the decay widths to a light lepton. A summary of the currently available measurements of $R_D$ and $R_{D^*}$ is presented in Table 2.2, showing the yields of signal and normalization B decays and the stated uncertainties. The data were collected by the BaBar and Belle experiments at $e^+e^-$ colliders operating at the $\Upsilon(4S)$ resonance, which decays exclusively to pairs of $B^+B^-$ or $B^0\bar B^0$ mesons. The LHCb experiment operates at the high-energy $pp$ collider at CERN at total energies of 7 and 8 TeV, where pairs of b-hadrons (mesons or baryons) are produced along with a large number of other charged and neutral particles. While the maximum production rate of $\Upsilon(4S) \to B\bar B$ events has been 20 Hz, the corresponding rates observed at LHCb exceed 100 kHz.
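Explicitly, the ratios discussed here are

```latex
R(D^{(*)}) \;\equiv\;
  \frac{\mathcal{B}(B \to D^{(*)}\,\tau\,\bar\nu_{\tau})}
       {\mathcal{B}(B \to D^{(*)}\,\ell\,\bar\nu_{\ell})}\,,
\qquad \ell = e,\ \mu\,,
```

in which many experimental and theoretical uncertainties cancel between numerator and denominator.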
Currently we have only two measurements [56,57,58] of the ratios $R_D$ and $R_{D^*}$ based on two distinct samples of hadronically tagged $B\bar B$ events with signal $B \to D\tau\nu_\tau$ and $B \to D^*\tau\nu_\tau$ decays and purely leptonic $\tau$ decays. In addition, there is a measurement from Belle [60,61] of $R_{D^*}$ with hadronic tags and a hadronic one-prong $\tau$ decay ($\tau^- \to \pi^-\nu_\tau$ or $\tau^- \to \rho^-\nu_\tau$). A Belle measurement [59] of $R_D$ and $R_{D^*}$ with semileptonic tags and purely leptonic $\tau$ decays appeared recently, superseding a previous measurement [65] of $R_{D^*}$ obtained with the same technique.
The BaBar and Belle analyses rely on the large detector acceptance to detect and reconstruct all final-state particles from the decays of the two B mesons, except for the neutrinos. They exploit the kinematics of the two-body $\Upsilon(4S)$ decay and the known quantum numbers to suppress non-$B\bar B$ and combinatorial backgrounds, and they differentiate signal decays involving two or three missing neutrinos from decays involving a low-mass charged lepton, an electron or muon, plus a single associated neutrino.
LHCb isolates the signal decays from very large backgrounds by exploiting the relatively long B decay lengths, which allow the charged particles from the B and charm decay vertices to be separated from the many others originating from the $pp$ collision point. There are insufficient kinematic constraints, and therefore the total B-meson momentum is estimated from its transverse momentum, degrading the resolution of kinematic quantities like the missing mass and the momentum transfer squared $q^2$. Also, the production of $D^{*+}D_s^-$ pairs with the decay $D_s^- \to \tau^-\bar\nu_\tau$ leads to sizable background in the signal sample. The summary in Table 2.2 indicates that the results are not inconsistent. For BaBar and Belle the systematic uncertainties are comparable for $R_{D^*}$, while the Belle systematic uncertainties are smaller for $R_D$. However, differences in the signal yield and the background suppression lead to smaller statistical errors for BaBar. The Belle measurements based on semileptonic tagged samples result in a 50% smaller signal yield than the hadronic-tag samples. For the two LHCb measurements, the event yields exceed the BaBar yields by close to a factor of 20, but the relative statistical errors on $R_{D^*}$ are comparable to BaBar's, and the systematic uncertainties are larger by a factor of 2.
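To make the kinematic quantities concrete, here is a minimal sketch (illustrative four-vector arithmetic only, not any experiment's actual reconstruction code) of the missing mass squared and the momentum transfer squared:

```python
def minkowski_sq(p):
    """Invariant square of a four-vector (E, px, py, pz), metric (+,-,-,-)."""
    E, px, py, pz = p
    return E**2 - px**2 - py**2 - pz**2

def four_diff(p, q):
    """Component-wise difference of two four-vectors."""
    return tuple(a - b for a, b in zip(p, q))

def q2(p_B, p_Dstar):
    """Momentum transfer squared to the lepton pair: q^2 = (p_B - p_D*)^2."""
    return minkowski_sq(four_diff(p_B, p_Dstar))

def m_miss_sq(p_B, p_visible):
    """Missing mass squared: m_miss^2 = (p_B - p_vis)^2.

    If the estimate of p_B is imperfect (as at LHCb, where the B momentum
    must be inferred), the resolution on both quantities degrades.
    """
    return minkowski_sq(four_diff(p_B, p_visible))

# Toy check at zero recoil: B0 and D*- both at rest in the B frame,
# so q^2 reaches its maximum value (m_B - m_D*)^2.
M_B, M_DSTAR = 5.27966, 2.01026  # GeV (PDG values, rounded)
p_B = (M_B, 0.0, 0.0, 0.0)
p_Dstar = (M_DSTAR, 0.0, 0.0, 0.0)
print(q2(p_B, p_Dstar))  # (m_B - m_D*)^2, about 10.69 GeV^2
```

This also makes explicit why a degraded estimate of `p_B` smears both $m_{\rm miss}^2$ and $q^2$ simultaneously: both are built from the same four-vector difference.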

Lessons learned
All currently available measurements are limited by the difficulty of separating the signal from large backgrounds from many sources, leading to sizable statistical and systematic uncertainties. Measuring ratios of two B decay rates with very similar, if not identical, final-state particles significantly reduces the systematic uncertainties due to detector effects and tagging efficiencies, and also those from uncertainties in the kinematics due to form factors and branching fractions. For all three experiments the largest systematic uncertainties are attributed to the limited size of the MC samples; the fractions and shapes of various backgrounds, especially from decays involving higher-mass charm states; uncertainties in the relative efficiency of signal and normalization, as well as in the efficiency for other backgrounds; and lepton misidentification. Though the total number of $B\bar B$ events in the full Belle data set exceeds that of BaBar by 65%, the BaBar signal yield for $B \to D^{(*)}\tau\nu_\tau$ exceeds Belle's by 67%, due to differences in event selection and fit procedures.
While Belle's use of semileptonic B decays as tags for $B\bar B$ events benefits from the smaller number of decay modes with higher branching fractions, the presence of a neutrino in the tag decay results in the loss of stringent kinematic constraints. The resulting signal yields are lower by 50% compared to hadronic tags, and the backgrounds are much larger. The use of $E_{\rm ECL}$, the sum of the energies of the excess photons in a tagged event, in the fit to extract the signal yield is somewhat problematic, since it includes not only photons left over from incorrectly reconstructed $B\bar B$ events but also photons emitted from the high-intensity beams. As a result, the signal contributions are difficult to separate from the very sizable backgrounds.

Outlook for $R(D)$ and $R(D^*)$
Belle II and the upgraded LHCb are expected to collect large data samples with considerably improved detector performance. This should lead to much-reduced detector-related uncertainties, higher signal fractions, and opportunities to measure many related processes. The goal is to push the sensitivity of measurements of critical variables and distributions beyond the theory uncertainties and thereby increase the sensitivity to non-Standard-Model processes.
Currently there are only two measurements of the ratio $R_D$, one each by BaBar and Belle, based on two distinct samples of hadronically tagged $B\bar B$ events for the signal $B \to D\tau\nu_\tau$ and $B \to D^*\tau\nu_\tau$ decays. The decay $B \to D\tau\nu_\tau$ is dominated by a P-wave, whereas in $B \to D^*\tau\nu_\tau$ S, P, and D waves contribute and the impact of contributions from new-physics processes is expected to be smaller. A contribution from a hypothetical charged Higgs would result in an S-wave for $B \to D\tau\nu_\tau$ and a P-wave for $B \to D^*\tau\nu_\tau$; thus measurements of the angular distributions and of the polarization of the $\tau$ lepton or the $D$ and $D^*$ mesons will be important. Such measurements would of course also serve as tests of other hypotheses, for instance contributions from leptoquarks. The studies of many decay modes and of the detailed kinematics of the signal events, i.e. the four-momentum transfer $q^2$, the lepton momentum, the angles and momenta of the $D$ and $D^*$, and the $\tau$ spin, should be extended to perform tests for potential new-physics contributions.
Belle II will benefit from major upgrades to all detector components except the barrel sections of the calorimeter and the muon detector. In addition, new data-acquisition and analysis software is being developed to handle the very high data rates and the improved detector performance. Upgrades to the precision tracking and lepton identification, especially at lower momenta, are expected to significantly improve the mass resolution and purity of the signal samples. This should also improve the detector modeling of the efficiencies for signal and backgrounds and of the fake rates that are major contributors to the current systematic uncertainties. The much larger data rates should allow the choice of cleaner and more efficient $B\bar B$ tagging algorithms.
Major improvements to the MC simulation of signal and backgrounds will be needed. They require a much better understanding of all semileptonic B decays contributing to signal and backgrounds, i.e., updated measurements of branching fractions and form factors and updated theoretical predictions, especially for backgrounds involving higher-mass charm mesons, either resonances or states resulting from charm-quark fragmentation. The fit to extract the signal yields could be improved by reducing the backgrounds, by making use of fully 2D or 3D distributions of kinematic variables, and by avoiding simplistic parametrizations. The suppression of fake photons and $\pi^0$s needs to be scrutinized to avoid unnecessary signal loss and very large backgrounds for $D^{*0}$ decays. The shapes of distributions entering multivariate methods used to reduce the backgrounds should be checked against data or MC control samples, and any significant differences should be addressed. The use of $E_{\rm ECL}$, the sum of the energies of all unassigned photons in an event, may be questionable, given the expected high rate of beam-generated background.
The first study by Belle of the $\tau$ spin in $B \to D^*\tau\nu_\tau$ decays with $\tau^- \to \rho^-\nu_\tau$ or $\tau^- \to \pi^-\nu_\tau$ is very promising, but it indicates that much larger and cleaner data samples will be needed. The systematic uncertainty on the $R_{D^*}$ measurement of 11% is dominated by the hadronic B decay composition (7%) and the size of the MC sample [61]. The measured $\tau$ polarization of $P_\tau = -0.38 \pm 0.51\,^{+0.21}_{-0.16}$ is completely statistics dominated and implies $P_\tau < 0.5$ at 90% C.L.
Among the many other measurements Belle II is planning, the ratios $R$ for both exclusive and inclusive semileptonic B decays are of interest: in addition to $R_D$, $R_{D^*}$, and $R_{D^{**}}$, also $R_{X_c}$, as well as $R_\pi$ and $R_{X_u}$, which rely on unique capabilities of Belle II.
The LHCb detector is currently undergoing a major upgrade, with the goal of switching to an all-software trigger able to select and record data at rates of up to 100 kHz. Replacements of all tracking devices are planned, ranging from a radiation-hard pixel detector near the interaction region to scintillating fibers downstream. Improvements in electron and muon detection and a reduction in pion misidentification will be critical for the suppression of backgrounds, and should also allow rate comparisons for decays involving electrons or muons. LHCb relies on large data samples rather than MC simulation to assess signal efficiencies and, most importantly, the many sources of backgrounds and their suppression.
Several analyses based on the Run 1 and Run 2 data samples are underway, benefiting from improved trigger capabilities. The first analysis based on three-prong $\tau$ decays showed a clear separation of the $\tau$ decay vertex from both the $D$ vertex and the $pp$ interaction point, improving the signal purity to about 11%, compared to 4.4% for the purely leptonic one-prong $\tau$ decay. This may therefore be the favored $\tau$ decay mode, and should also be tried for $B^+ \to D^0\tau^+\nu_\tau$. Improved measurements of the branching fractions for the normalization modes and the $\tau$ decays will be essential.
As a follow-up to the first LHCb measurement of $R_{D^*}$, a simultaneous fit to two disjoint $D^0\mu^-$ and $D^{*+}\mu^-$ samples is in preparation, taking into account the large feed-down from $D^*$ decays present in the $D^0\mu^-$ sample. As pointed out above, the decay $B^+ \to D^0\tau^+\nu_\tau$ is more sensitive to new-physics processes than $B^0 \to D^{*-}\tau^+\nu_\tau$, and thus this analysis is expected to be very important for establishing the excess in these decay modes and its interpretation. The analysis will benefit from the addition of dedicated triggers sensitive to the $D^0\mu^-$, $D^{*+}\mu^-$, $\Lambda_c^+\mu$, and $D_s^+\mu$ final states.
LHCb is considering a series of other ratio measurements, most of which will be challenging to observe and not trivial to normalize. The decay $\Lambda_b \to \Lambda_c^+\tau^-\bar\nu_\tau$ probes a different spin structure, and a precise measurement of $R_{\Lambda_c}$ would be of great interest for the interpretation of the excess of events in $R_D$. The observation of the decay $B_c^+ \to J/\psi\,\tau^+\nu_\tau$ has been reported; it is a very rare process that is observable only at LHCb. The final state with three muons is a unique signature, though impacted by sizable backgrounds from hadron misidentification. The measured ratio $R_{J/\psi} = 0.71 \pm 0.17 \pm 0.18$ has large uncertainties, dominated by the systematic uncertainty from the signal simulation, since the form factors are unknown.

Extraction of $|V_{cb}|$ and predictions for $R(D^{(*)})$
The values of $|V_{cb}|$ extracted from inclusive and exclusive decays have been in tension for a long time [66]. In order to extract $|V_{cb}|$ from $B \to D^{(*)}\ell\nu$ data we need information on the form factors, which is mostly provided by lattice QCD. For the $B \to D$ form factors $f_{+,0}$ there are lattice results at $w \geq 1$ [31,32,3]. A fit to all the available experimental and lattice data on $B \to D\ell\nu$ leads to [67] $|V_{cb}| \cdot 10^3 = 40.49(97)$, with $\chi^2/{\rm dof} = 19.0/22$; similar results have been obtained in [3]. For $B \to D^*$ there is at the moment only information on one of the four form factors at zero recoil, $A_1(w=1)$ [35,37], though further developments look promising [68,69,70]. At the other end of the $w$ or $q^2$ spectrum, results are available from light-cone sum rules (LCSR) [71,72]. In view of the advanced experimental precision, a key question for the precise extraction of $|V_{cb}|$ and a robust prediction of $R(D^*)$ is how accurate the QCDSR results used at NLO are. A guideline for an answer can be provided by studying the size of NLO corrections in the HQET expansion and by comparison with the corresponding available lattice results [12]. A definite answer, especially for the pseudoscalar form factor $P_1$, which is needed for the prediction of $R(D^*)$, will be given only by future lattice results [68,69,70].
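For reference, the BGL construction referred to throughout expands each form factor $f$ in the conformal variable $z$, with unitarity bounding the coefficients ($P_f$ is a Blaschke factor removing sub-threshold poles and $\phi_f$ an outer function):

```latex
z(w) = \frac{\sqrt{w+1}-\sqrt{2}}{\sqrt{w+1}+\sqrt{2}}\,,
\qquad
f(z) = \frac{1}{P_{f}(z)\,\phi_{f}(z)}\sum_{n=0}^{N} a_{n}^{f}\,z^{n}\,,
\qquad
\sum_{n=0}^{N}\bigl|a_{n}^{f}\bigr|^{2} \le 1\,.
```

Because $|z| \lesssim 0.06$ over the physical $B \to D^{(*)}$ range, a few coefficients suffice, and the truncation error can be bounded using the unitarity constraint.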
In all experimental analyses prior to 2017, HQET relations were employed in the form of the CLN parametrization [11], in which theoretical uncertainties noted in Ref. [11] were set to zero by fixing coefficients to definite numbers. Moreover, the slope and curvature of $R_{1,2}(w)$ depend on the same underlying theoretical quantities as $R_{1,2}(1)$, which makes varying the latter while fixing the former inconsistent. Future experimental analyses have to take this into account.
Recent preliminary Belle data [51] allowed a reappraisal of fits to $B \to D^*\ell\nu$ by several groups [52,12,53,73,74,75,37]. For the first time, Ref. [51] reported deconvoluted $w$ and angular distributions that are independent of the parametrization. This made it possible to test the influence of different parametrizations on the extracted value of $|V_{cb}|$. Indeed, based on that data set the central values of $|V_{cb}|$ varied by up to 6% between CLN and BGL fits [52,53,75]. By floating some additional parameters of the less flexible CLN parametrization, the agreement between BGL and CLN could be restored [52,74]. Furthermore, in the literature one could observe a correlation of smaller central values of $|V_{cb}|$ with stronger HQET+QCDSR input [51,52,12,53,73,74,75,37].
Recent SM predictions for $R(D^*)$ and their deviation from the experimental average:

  Ref. [73]: $R(D^*) = 0.257(3)$    (2.7σ)
  Ref. [77]: $R(D^*) = 0.254(7)(6)$ (2.7σ)
  Ref. [79]: $R(D^*) = 0.251(4)(5)$ (3.1σ)
  Ref. [78]: $R(D^*) = 0.250(3)$    (3.2σ)

Recently, on top of the tagged analysis of Ref. [51], a new untagged Belle analysis of $B \to D^*\ell\nu$ appeared [76]. The new, more precise data brought the $|V_{cb}|$ central values of the CLN and BGL fits closer together. However, in order to obtain a reliable error it is necessary to employ the BGL parametrization with a sufficient number of coefficients, rather than the CLN parametrization. Including the new data, Ref. [77] obtains the value quoted above. Several analyses [73,75,84,78,85] find that varying the coefficients of the HQE consistently allows for a simultaneous description of the available experimental and lattice data in $B \to D$, while the parametrization dependence in the extraction of $|V_{cb}|$ from Ref. [51] remains [73]. Additionally including contributions at $O(1/m_c^2)$ and higher orders in the $z$ expansion, the values of $|V_{cb}|$ extracted using the BGL parametrization and the HQE become compatible [78].
For the above reasons, older HFLAV averages, which are based on the CLN parametrization, should not be employed in future analyses, with the exception of the total branching ratios, whose parametrization dependence is expected to be negligible. The two most recent experimental analyses of $\bar B_{(s)} \to D^{*}_{(s)}\ell^-\bar\nu_\ell$ [54,55] present results obtained in both CLN and a simplified version of the BGL parametrization. They did not observe sizeable parametrization dependence, but found very different values of $|V_{cb}|$. However, they did not provide their data in a format that allows independent reanalyses.
For the lepton-flavor-nonuniversality observables $R(D^{(*)})$ we list a few recent theoretical predictions in Table 2.3. Predictions for further lepton-flavor-nonuniversality observables of the underlying $b \to c\ell\nu$ transitions can be found in Refs. [86,87]. Compared to predictions from before 2016, the predictions in Table 2.3 make use of new lattice results and new experimental data. The results are based on different methodologies and different treatments of the uncertainties of HQET + QCDSR. There is very good consensus on the $R(D)$ predictions, because in this case the predictions are dominated by the recent comprehensive lattice results of Refs. [31,32,88]. QED corrections to $R(D)$ remain a topic that deserves further study [89,90]. In the case of $R(D^*)$, as we do not yet have lattice information on the form factor $P_1$, one can use the exact endpoint relation $P_1(w_{\rm max}) = A_5(w_{\rm max})$ together with results from HQET and QCDSR. Depending on the estimate of the corresponding theory uncertainty, one obtains different theoretical errors on the prediction of $R(D^*)$. As soon as lattice results for $P_1$ are available [68], the different fits will stabilize and we expect a consensus similar to that for $R(D)$. Although the most recent experimental results are closer to the SM predictions, the $R(D^{(*)})$ anomaly persists and remains a tough challenge for model builders.
2.5 Semileptonic $B \to D^{**}\ell\nu$ decays

Semileptonic B decays to the four lightest excited charm mesons, $D^{**} = \{D_0^*, D_1^*, D_1, D_2^*\}$, are important both because they are complementary signals of possible new-physics contributions to $b \to c\tau\nu$, and because they are substantial backgrounds to the $R(D^{(*)})$ measurements (as well as to some $|V_{cb}|$ and $|V_{ub}|$ measurements). Thus, the correct interpretation of future $B \to D^{(*)}\ell\nu$ measurements requires a consistent treatment of the $D^{**}$ modes.
The spectroscopy of the $D^{**}$ states is important because, in addition to its impact on the kinematics, it also affects the expansion of the form factors [91,92] in HQET [93,94]. The isospin-averaged masses and widths of the six lightest charm mesons are shown in Table 2.4. In the heavy-quark-symmetry (HQS) [95,96] limit, the spin-parity of the light degrees of freedom, $s_l^{\pi_l}$, is a conserved quantum number, yielding doublets of heavy-quark symmetry as the spin $s_l$ is combined with the heavy-quark spin [97]. The mass splittings within these doublets were originally measured to be much smaller than $m_{D^*} - m_D$. This is not supported by the more recent data (see Table 2.4), so Ref. [100] extended the predictions of Refs. [91,92] accordingly, including deriving the HQET expansions of those form factors which do not contribute in the $m_\ell = 0$ limit. The impact of arbitrary new-physics operators was analyzed in Ref. [99], including the $O(\Lambda_{\rm QCD}/m_{c,b})$ and $O(\alpha_s)$ corrections in HQET; the corresponding results in the heavy-quark limit were obtained in Ref. [101]. The large impact of the $O(\Lambda_{\rm QCD}/m_{c,b})$ contributions to the form factors can be understood qualitatively by considering how heavy-quark symmetry constrains the structure of the expansions near zero recoil. It is useful to think of a simultaneous expansion in powers of $(w-1)$ and $(\Lambda_{\rm QCD}/m_{c,b})$. (The kinematic ranges are $0 < w-1 \lesssim 0.2$ for $\tau$ final states, and $0 < w-1 \lesssim 0.3$ for $e$ and $\mu$.) The decay rates to the spin-1 $D^{**}$ states, which are not helicity suppressed near $w = 1$, are of the form shown in Eq. (2.44). Here $\varepsilon$ is a power-counting parameter of order $\Lambda_{\rm QCD}/m_{c,b}$, and the zeros are consequences of heavy-quark symmetry. The $\varepsilon^2$ term in the first parenthesis is fully determined by the leading-order Isgur-Wise function and hadron mass splittings [91,92,100,99]. The same also holds for those new-physics contributions to $B \to D_0^*\ell\nu$ which are not helicity suppressed.
This explains why the $O(\Lambda_{\rm QCD}/m_{c,b})$ corrections to the form factors are very important and can make $O(1)$ differences in physical predictions, without signaling a breakdown of the heavy-quark expansion. The sensitivity of the $D^{**}$ modes to new physics is complementary to, and sometimes greater than, that of the $D$ and $D^*$ modes [101,99]. Thus, using HQET, the predictions for $B \to D^{**}\tau\nu$ are systematically improvable with better data on the $e$ and $\mu$ modes, just as they are for $B \to D^{(*)}\tau\nu$ [73], and they are being implemented in HAMMER [102,103,104].

New physics in $b \to c\tau\nu$ transitions
Independently of the recent discussion of form-factor parametrizations and their influence on the extraction of $|V_{cb}|$ (covered in Sec. 2.1), it is clear from Table 2.3 that the SM cannot accommodate the present experimental data on $R(D^{(*)})$. Even after the inclusion of the most recent Belle measurement [105], the significance of the anomaly remains $3.1\sigma$. This leaves, apart from an underestimation of systematic uncertainties on the experimental side, NP as an exciting potential explanation. The required size of such a contribution comes as a surprise, however: defining $\hat R(X) \equiv R(X)/R(X)_{\rm SM}$, the new average corresponds to $\hat R(D) = 1.14 \pm 0.10$ and $\hat R(D^*) = 1.14 \pm 0.06$; for NP to accommodate these data, a contribution of 5-10% relative to the SM tree-level amplitude is required for NP interfering with the SM, and one of $O(40\%)$ for NP without interference. An effect of this size can be clearly identified with upcoming measurements by LHCb and Belle II [106,107]. It would also immediately imply large effects in other observables.
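The quoted sizes can be checked with one line of arithmetic: writing a real NP amplitude $g$ relative to the SM one, $\hat R \approx |1+g|^2$ with interference and $\hat R \approx 1+|g|^2$ without (a rough sketch that ignores phases and operator structure):

```python
import math

rhat = 1.14  # central value of R(D)/R(D)_SM and R(D*)/R(D*)_SM

# NP amplitude interfering with the SM: (1 + g)^2 = rhat
g_interfering = math.sqrt(rhat) - 1.0
print(f"interfering amplitude: {g_interfering:.3f}")          # ~0.07, i.e. in the 5-10% range

# NP amplitude not interfering with the SM: 1 + g^2 = rhat
g_non_interfering = math.sqrt(rhat - 1.0)
print(f"non-interfering amplitude: {g_non_interfering:.3f}")  # ~0.37, i.e. O(40%)
```

This reproduces the 5-10% (interfering) and $O(40\%)$ (non-interfering) amplitudes quoted in the text.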
The potential of R(D ( * ) ) as discovery modes does not diminish the importance of additional measurements with b-hadrons. Specifically, even with a potential discovery, model discrimination will require measurements beyond these ratios. These additional measurements fall in four categories: s )) are important crosschecks to establish R(D ( * ) ) as NP with indepen- Fig. 2.2 State-of-the-art fit results in single-mediator models for selected pairs of observables in B → D ( * ) τ ν decays (following Ref. [113] for form factor and input treatment). All outer ellipses correspond to 95% confidence level, inner (where present) to 68%. We show the SM prediction in grey, the experimental measurement/average in yellow (where applicable) and scenarios I, II, III IV and V in dark green, green, dark blue, dark red and red, respectively, see text. Contours outside the experimental ellispse imply that the measured central values cannot be accomodated within that scenario. The limit BR(Bc → τ ν) ≤ 30% has been applied throughout, but affects only the fits with scalar coefficients. Dark green contours are missing in the two graphs on the right, because the predictions of scenario I are identical to the SM ones. dent systematics and provide independent NP sensitivity (especially R(Λ c ) and R(X c )), as discussed in subsections 2.3 and 2.5. -Integrated angular and polarization asymmetries and polarization fractions are excellent model discriminators. In many models they are completely determined once the measurements of R(D ( * ) ) are taken into account. For instance, the recent measurement of the longitudinal polarization fraction of the D * in B → D * τ ν, F L (D * ), was able to rule out solutions that remained compatible with the whole set of the remaining b → cτ ν data [108,109,110,111,112,113]. The model-discriminating potential of both R(D ( * ) ) and selected angular quantities is visualized in Fig. 
2.2, where fit results for pairs of B → D ( * ) τ ν observables within all phenomenologically viable single-mediator scenarios with left-handed neutrinos to the state-of-the-art data are shown. -Differential distributions in q 2 and the different angles are extremely powerful in distinguishing between NP models, as can be seen for instance from a recent analysis of data with light leptons in the final state [84]. They require, however, large amounts of data and the insufficient information on the decay kinematics can pose difficulties for the interpretation of the data, as discussed in subsection 2.7. However, already the rather rough available information on the differential rates dΓ/dq 2 (B → D ( * ) τ ν) [58,56] is excluding relevant parts of the parameter space [114,115,116,117,113]. -An analysis of the flavor structure of the observed effect, e.g. in b → c(e, µ)ν, b → uτ ν and t → bτ ν transitions.
In addition to the above observables, the leptonic decay B c → τ ν plays a special role. Although it is not expected to be measured in the foreseeable future, it provides nevertheless a strong constraint on NP, since the relative influence of scalar NP is enhanced in this mode. A limit can then be obtained even from the total width of the B c meson [118]. Theoretical estimates for the partial width assumed unaffected by NP can be used to strengthen these bounds [119,120,116], and also data from LEP [121]. Both approaches rely on additional assumptions, however, see Refs. [110,111] for recent extensive discussions. The constraints discussed so far are relevant in any scenario trying to address the existing anomalies. An interesting subclass of such models is that where the existence of a single mediator coupling to only the known SM degrees of freedom is assumed, classified in [115], creating only a subset of the possible operators at the b scale. Among those, only five scenarios remain that can reasonably well accomodate the data described above, see also Refs. [82,110,115,117,122,123,124,125,126] for comparisons (additional constraints in specific scenarios are commented on below): Scenario I yields only a left-handed vector operator, created by either a heavy color-less vector particle [127,128,129,130] (phenomenologically highly disfavoured) or a leptoquark, see Refs. [118,131,132,133,134,135,136,137,138,139,140,141,142,143,144,145,146,147,148,149,150] for this and other leptoquark variants. Scenario II includes Scenario I, but yields also a right-handed scalar operator, realized for example by a vector leptoquark. Scenario III involves both left-and right-handed scalar operators, generated for instance by a charged Higgs [151,81,152,116,153,154,155,156] (with a limited capability to accomodate R(D * ) due to the B c constraint discussed above). 
Scenarios IV and V involve the left-handed scalar and tensor operator which are generated proportionally to each other (C S L = ±4C T at the NP scale Λ), in the latter case with the addition of the left-handed vector operator, again realized in leptoquark models. It is also possible to analyze the available data in more general contexts. For example, within SMEFT the right-handed vector current is expected to be universal [157,158,159], see [113] for a global analysis in this framework, while this does not hold when the electroweak symmetry breaking is realized non-linearly [159]. Allowing for additional light degrees of freedom beyond the SM opens the possibility of contributions with right-handed neutrinos, see Refs. [160,144,161,162,163,164,165,166].
Once specific models are considered, typically additional constraints apply. Important ones include high-p T searches, looking for collider signatures of the mediators related to the anomaly [167,168,169,170], RGE-induced flavor-non-universal effects in τ decays [171], lepton-flavor violating decays [171], precision universality tests in quarkonia decays [172], charged-lepton magnetic moments [168] and electric dipole moments in models with non-vanishing imaginary parts [173].

Interpretation of experimental results
The reconstructed kinematic distributions used in measurements are sensitive to both the modeling of required non-perturbative inputs (e.g., form factors, lightcone meson wave functions), and to assumptions about the underlying fundamental theory (e.g. possible presence of operators with chiral structures different from those found in the SM). Current measurements assume the SM operator structure, and include the non-perturbative uncertainties as they are known at the time of publication. While this is a valid strategy for testing the SM, if in future the presence of a non-SM contribution with a different chiral structure is established then past measurements will require reinterpretation.
In order to present experimental results in such a way to allow a-posteriori analyses to have maximum flexibility in the description of non-perturbative inputs and BSM content, the following strategies might be considered. The techniques to allow for reinterpretation of results overlap with those used to make differential measurements designed to be sensitive to the chiral structure and non-perturbative quantities.
A first possibility, is the publication of unfolded distributions (see, for instance, the B → D * ν spectrum presented in Ref. [51]). This method offers the possibility to fit with ease the experimental results to arbitrary parametrizations of the form factors [52,53,73]; its downside is that it requires relatively high statistics and that the unfolded distributions do not contain the whole experimental information.
A second option, which has been employed in the untagged Belle analysis of ref. [49], is to provide folded distributions in which detector effects are not removed and no extrapolation is performed, together with experimental efficiencies and the detector response matrix (which reproduces detector effects to a given accuracy). This allows the use of any parametrization of SM and BSM effects in comparing with the experimental result. This approach, while requiring slightly more involved a posteriori fitting strategies, avoids the statistical problems associated with unfolding and can be extended more easily to higher dimensions.
Finally, the most complete information is contained in the Likelihood function which depends on a set of SM parameters (e.g., for B → π ν they could be the coefficients of the z-expansion of the form factors and V ub ) and on the Wilson coefficients of BSM operators). This method has not been currently pursued in any B decay measurement, in part because of difficulties related to the extremely large amount of information that would need to be presented. Two differing approaches are to publish the full experimental Likelihood in the full parameter space of BSM Wilson coefficients and SM non-perturbative coefficients, or to publish the tools for external readers to be able to repeat the full experimental fit with the signal model varied. For representing the experimental Likelihood in a high-dimensional space, possible approaches include the use of Markov chain sampling, or MVA surface modelling. These are the only strategies which would allow the entirety of the experimental information to be available in a posteriori theoretical investigations. It is essential to this approach for the experimental measurement to cover the full parameter space in a sufficiently general way, including alternative Likelihoods with different parametrizations for nonperturbative effects.

HAMMER
Future new physics searches in b → c τ ν τ decays are a challenging endeavour: most experimental results make use of kinematic properties of the process to discriminate between the signal of interests and backgrounds. For instance recent measurements from the B-factories BaBar and Belle used the lepton momentum spectrum and measurements of LHCb use fits to the four-momentum transfer q 2 . In new physics scenarios, these distributions change and alter the analysis acceptance, efficiencies, and extracted signal yields. In addition, large samples of simulated decay processes play an integral part in those measurements. In most, one of the leading systematic uncertainties is due to the limited availability of such samples. Thus producing large enough simulation samples for a wide range of new physics points, needed to take into account the aforementioned changes in acceptance, etc. is not a viable path. This is where the HAMMER tool [104,103] can help: it implements an event-level reweighing, assigning a weight based on the ratio of new physics to simulated matrix element, which allows one to re-use the already generated events. In addition, it is capable of providing histograms for arbitrary new-physics parameter values (including also form factor variations), which can be used in e.g. template fits to kinematic observables. These event weights can completely account for acceptance changes and will enable Belle II and LHCb to directly extract limits on the Wilson coefficients present in b → c τ ν transitions.

Heavy-to-light exclusive
In this section we present an overview of b → u exclusive decays. We start with a discussion of the lattice calculations of the b hadron decay form factors to a light pseudoscalar, vector meson or baryon. We then review the light-cone sum rule calculation of the same form factors and the current experimental situation, as well as the prospects at Belle II and LHCb. Finally, we briefly discuss a few related subjects, such as the semitauonic decays, b → γ ν , the non-resonant B → ππ ν decays, and some subtlety of the z-expansion. , where X * now denotes a ρ, K * , or φ meson. As discussed in Sec. 2.1, modern theoretical calculations of the form factors employ z-parametrizations to describe their shapes, which can be implemented in a model-independent way, being based on analyticity and unitarity constraints. For the case at hand, an often used choice for the zparameter defined in Eq. (2.9) is t 0 = (M + m)/( √ M − √ m) 2 , which results in a range |z| < 0.3, centered around z = 0. In general, the small range of z coupled with unitarity constraints on the coefficients ensure that the polynomial expansions converge quickly. As discussed already in Sec. 2.1, for B-meson decays to light hadrons with their larger q 2 range, the BCL parametrization [8] is the standard choice, as the resulting forms satisfy the expected asymptotic q 2 and near threshold scaling behaviors [9,10]: (3.2)

Lattice QCD results for B-meson decay form factors to light pseudoscalars
Lattice-QCD calculations of the form factors for semileptonic B (s) -meson decays to light hadrons proceed along the same lines as discussed in Sec. 2.2. In particular, there are a number of different, well-developed strategies for dealing with the heavy b-quark in lattice QCD, see Ref. [3] for a review. The same two-and threepoint functions as for the heavy-to-heavy case are needed here, albeit with the appropriate valence quark propagators, to describe the heavy-to-light decay process. While this affects the statistical errors in the next step, the fits to the spectral representations of the correlation functions to obtain the desired matrix elements on each gauge ensemble and each recoil momentum, the procedure is essentially the same. The resulting "lattice data" are then used in combined chiral-continuum fits coupled with a systematic errors analysis to obtain the form factors in the continuum over the range of recoil energies that are included in the simulations. Here, a well known challenge is that the recoil energies that are accessible in lattice-QCD calculations cover only a fraction of the entire kinematic region. A related challenge is that the validity of Chiral Perturbation Theory (used to extrapolate or interpolate to the physical pion mass) is limited to pion energies of ≈ 1 GeV. The final step is the z-expansion fit, from which the form factors are obtained over the entire kinematic range, albeit with larger errors in the region not directly covered in the lattice calculation.
Lattice-QCD calculations of the B → π vector current form factors f + and f 0 can be used to determine |V ub | from experimental measurements of the B → π ν decay rate. There are currently two independent, published lattice-QCD computations that employ the modern methods outlined above, including the modelindependent z-expansion [174,175]. The RBC/UKQCD collaboration [174] uses ensembles with N f = 2 + 1 flavors of Domain Wall fermions at two lattice spacings with sea-pion masses in the range [300,400] MeV. The Fermilab/MILC collaboration [175] uses ensembles with N f = 2 + 1 flavors of asqtad (improved staggered) fermion at four lattice spacings covering the range a ≈ 0.045 − 0.12 fm and a range of sea-pion masses down to 177 MeV. Earlier work [176] used a subset of these ensembles. The treatment of the b-quark is similar in the two works; Ref. [174] uses a variant of the Fermilab approach, called the relativistic heavy quark (RHQ) action, while Ref. [175] employs the original Fermilab formalism. Both groups also use the mostly nonperturbative renormalization method to compute the renormalization factors. The form factors obtained by the two lattice groups are in good agreement with each other, and can be combined in joint fits together with experimental data for an improved |V ub | determination [3].
Ongoing work by RBC/UKQCD is extending the calculation to include more ensembles [43]. Ongoing work by the Fermilab/MILC collaboration employs the HISQ N f = 2 + 1 + 1 ensembles with sea-pion masses at (or near) the physical point, and the Fermilab formalism for the b-quark [177]. The HPQCD collaboration has published a calculation of the scalar form factor for the B → π transition at zero recoil f 0 (q 2 max ) on a subset of the N f = 2 + 1 + 1 HISQ ensembles and treating the b -quark in NRQCD [178], which provides a nice test of the softpion theorem, but cannot be used in |V ub | determinations. Ongoing work includes a calculation of the B → π form factors over a range of q 2 on a subset of the asqtad ensembles using NRQCD b-quarks and HISQ light-valence quarks [179]. The JLQCD collaboration has an ongoing project to calculate the B → π form factors on N f = 2 + 1 Domain Wall ensembles using also Domain Wall fermions for the heavy and light valence quarks [180]. They focus their calculation on small lattice spacings (a ≈ 0.044 − 0.080 fm) and include a series of heavy-quark masses to extrapolate to the physical b-quark mass.
The vector current form factors f + and f 0 needed for rare B → π decay are the same as for B → π ν decay (up to small isospin corrections), but the tensor form factor f T is also needed to describe the rare process in the SM, while it can contribute to B → π ν decay only in BSM theories. So far, f T has been calculated only by the Fermilab/MILC collaboration [181] using the same ensembles and methods as for the vector current form factors. However, most (if not all) of the ongoing projects described above, now include the complete set of form factors in their analyses, and new results for this form factor will therefore also be forthcoming.
The B s → K ν process can be used for an alternate determination of |V ub |, and there currently are three independent, published lattice-QCD computations of the vector-current form factors [182,174,183]. In Ref. [182] the HPQCD collaboration used NRQCD b-quarks and HISQ light-valence quarks to calculate the form factors on a subset of asqtad ensembles. The RBC/UKQCD [174] work is already described above, since they calculated the B s → K and B → π transition form factors together. The Fermilab/MILC collaboration [183] used the same methods and setup as for their B → π project [175] but on a subset of asqtad ensembles. Both Fermilab/MILC [183] and, in a follow-up paper, HPQCD [184] also computed ratios of B s → K and B s → D s observables, which can be used in combination with LHCb measurements to determine |V ub /V cb |.

Challenges of vector mesons
Lattice calculations of B (s) decay form factors with vector mesons (ρ, K * , φ) in the final state are substantially more challenging, as these vector mesons are unstable resonances for sufficiently light quark masses. The asymptotic final state in the continuum then contains (at least) two hadrons, and the relation with the finite-volume matrix elements computed on the lattice becomes nontrivial. The formalism that allows a mapping of finite-volume to infinite-volume 1 → 2 hadron matrix elements has been developed [185,186,187,188,189,190,191] and will be discussed in more detail below. First numerical applications to a form factor with nonzero momentum transfer have been published for the electromagnetic process πγ * → ππ, where the ππ final state in a P wave couples to the ρ resonance [192,193,194].
The lattice QCD calculations of B (s) → V form factors published to date did not implement this 1 → 2 formalism. For the B → ρ form factors, there is only an early study by the UKQCD collaboration [195], performed in the quenched approximation and with heavy up and down quark masses for which the ρ is stable. For the B → K * , B s → K * , B s → φ form factors, an unquenched lattice QCD calculation is available [196]. This work used three different ensembles of lattice gauge field configurations with pion masses of approximately 310, 340, and 520 MeV. For the lower two pion masses, the K * is expected to be unstable, but the analysis was performed as if the K * were stable. This entails using only a quark-antiquark interpolating field for the K * , and assuming that the information extracted from exponential fits to the two-point and three-point correlation functions corresponds 32 P. Gambino 1 et al.
to the "K * " contribution. The systematic errors introduced by this treatment are difficult to quantify. For unstable K * , none of the actual discrete finite-volume energy levels directly corresponds to the resonance, and the actual ground state may be far from the resonance location (for typical lattice volumes, this problem is more severe at nonzero momentum). However, a quark-antiquark interpolating field couples more strongly to energy levels in the vicinity of the resonance, and ground-state saturation is typically not seen in the correlation functions before the statistical noise becomes overwhelming. In these cases, exponential fits are still dominated by one or multiple energy levels in the vicinity of the resonance.
In the following, we will denote the vector meson resonance as V , and the two pseudoscalar mesons whose scattering shows the resonance as P 1 and P 2 . The finite-volume energy levels for a given total momentum and irreducible representation of the appropriate symmetry group are determined by the Lüscher quantization condition [197] and its generalizations, as reviewed in Ref. [198]. In the absence of interactions, they would consist of P 1 P 2 scattering states with energies equal to the sums of the P 1 and P 2 energies, where the P 1 and P 2 momenta take on the discrete values allowed by the periodic boundary conditions. Through the P 1 P 2 interactions, these energy levels are shifted away from their noninteracting values in a volume-dependent way. In the simplest case (considering only elastic scattering and neglecting the partial-wave mixing induced by the finite volume), each interacting finite-volume energy level can be mapped to a corresponding value of the infinite-volume P 1 P 2 scattering phase shift, or, equivalently, scattering amplitude; more complicated cases with coupled channels and partial-wave mixing can also be treated. The dependence of the scattering amplitude on the P 1 P 2 invariant-mass-squared, s, can be described by a Breit-Wigner-type function. By analytically continuing the scattering amplitude to complex s, one finds poles on the second Riemann sheet at s = (m V ± iΓ V /2) 2 , where Γ V is the width of the resonance. This procedure has been applied successfully to the ρ, K * , and other resonances (see Ref. [198] for a review).
The B (s) → V form factors correspond to the residues at the pole at s = (m V − iΓ V /2) 2 in the B (s) → P 1 P 2 form factors, where the P 1 P 2 final state is projected to the = 1 partial wave. These B (s) → P 1 P 2 form factors are functions of q 2 and s. In the single-channel case, the lattice computation involves the following steps: (i) Determine the P 1 P 2 finite-volume energy spectrum, and the B (s) → P 1 P 2 finitevolume matrix elements both for the ground states and multiple excited states. (ii) Obtain the infinite-volume P 1 P 2 scattering amplitude from the finite-volume energy spectrum using the Lüscher method, and fit a suitable parametrisation of the s-dependence to the data. (iii) Map the finite-volume B (s) → P 1 P 2 matrix elements to infinite-volume B (s) → P 1 P 2 matrix elements using the Lellouch-Lüscher factor, which depends on the energy-derivative of the scattering phase shift and a known finite-volume function.
The finite-volume formalism requires the center-of-mass energy √ s to be small enough so that no more than two particles can be produced by the scattering through the strong interaction (however, the total momentum of the P 1 P 2 system can in principle be arbitrarily large). For example, in the case of the B → ππ form factors, the formalism requires √ s 4 m π , which becomes more restrictive when performing the calculation at lighter quark masses. However, it is likely that the coupling to four pions has negligible effects even at somewhat higher values of √ s, as needed to map out the ρ resonance region when using physical quark masses.

Λ b → p and Λ b → Λ ( * ) form factors from lattice QCD
The Λ b → p form factors relevant for the decay Λ b → pµ −ν have been computed in lattice QCD together with the Λ b → Λ c form factors [16]; some aspects of this work were already discussed in Sec. 2.2.2. The lattice data for Λ b → p cover the kinematic range from q 2 ≈ 15 GeV 2 to near q 2 max ≈ 22 GeV 2 , and consequently the predicted Λ b → pµ −ν µ differential decay rate is most precise in this range. The integrated decay rates in the Standard Model were found to be The latter has a total uncertainty of 8.8% (corresponding to a 4.4% theory uncertainty in a |V ub | determination from this rate), and the ratio to the partially integrated Λ b → Λ c µ −ν decay rate (2.38) has a total uncertainty of 9.8%, corresponding to a 4.9% theory uncertainty in the determination of |V ub /V cb | performed by LHCb [199], commensurate with the experimental uncertainty. The Λ b → p form factors from Ref. [16] can also be used to predict the Standard-Model value of the baryonic b → u ν lepton-flavor-universality ratio, By increasing statistics, removing the partially quenched data sets (cf. Sec. 2.2.2), adding one ensemble with physical light-quark masses, and another ensemble with a third, finer lattice spacing, it will likely be possible to reduce the uncertainties in both the Λ b → p and Λ b → Λ c form factors by a factor of 2 in the near future. The same methods have also been used to compute the Λ b → Λ [200], Λ c → p [201], and Λ c → Λ [202] form factors with lattice QCD. The latter calculation already includes an ensemble with the physical pion mass, and gave results for the Λ c → Λe + ν e and Λ c → Λµ + ν µ branching fractions consistent with, and two times more precise than, the measurements performed recently by the BESIII Collaboration [203,204]. This is a valuable test of the lattice methods used to determine the heavy-baryon decay form factors.
A lattice-QCD calculation is also in progress for the Λ b → Λ * (1520) form factors (in the narrow-width approximation) [205], which are relevant for the rare decay Λ b → Λ * (→ p K)µ + µ − . As with Λ b → Λ * c , discussed in Sec. 2.2.2, this initial calculation only reaches q 2 in the vicinity of q 2 max .

Light-cone sum rules calculations of heavy-to-light form factors
QCD sum rules on the light cone (LCSR) is a non-perturbative method for calculating hadronic quantities [206,207,208]. It has been applied to obtain the form factors for B decays (see the definitions in Section 2.1). The first LCSR calculations relevant for V ub were performed in 1997 when the next-to-leading order (NLO) twist-2 corrections to f + (q 2 ) were calculated [209,210]. The leading order (LO) corrections up to twist-4 were calculated in Ref. [211]. Since the LO twist-3 contribution was found to be large, further improvements were made by calculating the smaller NLO corrections [212]. A more recent update where the MS mass is used in place of the pole mass for m b can be found in Ref. [213,214] for the B → π case and in Ref. [215] for the B s → K case. Here we will discuss a selection of the more recent LCSR calculations.
For B → π, a NNLO (O(α 2 s β 0 )) calculation of f + (0) was performed, with the result f + (0) = (0.262 +0.020 −0.023 ) with uncertainties 9% [216]. This calculation tested the argument that radiative corrections to f + f B and f B should cancel when both calculated in sum rules (the 2-loop contribution to f B in QCDSR is sizeable). It was found that despite ∼ 9% O(α 2 s β 0 ) change to f B , the effect on f + (0) was only ∼ 2%.
More recently unitarity bounds and extrapolation were used to perform a Bayesian analysis of the form factor f + (q 2 ) for B → π [217]. Prior distributions were taken for inputs, a likelihood function was constructed based on fulfilling the sum rule for m B to 1%, and posterior distributions were obtained using Bayes' theorem. The posterior distributions of the inputs differed only for s 0 , which was pushed to higher values s 0 = 41 ± 4 GeV (mainly due to the choice of m b ). Finally the results were fit to the BCL parametrisation, finding a central value of f + (0) = 0.31 ± 0.02. Obtaining f + (q 2 ) and the first two derivatives at 0 and 10 GeV 2 has allowed the extrapolation to higher q 2 using improved unitarity bounds.
V ub can also be obtained from the channels B → ρ/ω, and updated LCSR results were made available in 2015 [218]. The improvements in these results include: the computation of full twist-4 (+partial twist-5) 2-particle DA contribution to FFs, plus the determination of certain so-far unknown twist-5 DAs in the asymptotic limit; a discussion of the non-resonant background for vector meson final states; the determination and usage of updated hadronic matrix elements, specifically the decay constants; fits with full error correlation matrix for the z expansion coefficients, as well as an interpolation to the most recent lattice computation. The result for |V ub | from B → ρ ν has comparable errors to the B → π determination. In general the B → V results agree with previous exclusive determinations and global fits within errors.
Future prospects for exclusive V ub from LCSR include extending the subset of NNLO corrections calculated both in q 2 and to include all NNLO twist 2 and 3 contributions. It would also be beneficial to perform a Bayesian uncertainty analysis of all B → P ,D → P LCSRs (along the lines of the aforementioned analysis for B → π [217]). Finally the measurement of B s → K ν at LHCb/Belle II will allow an important complementary determination of V ub using results from Ref. [219].

Measuring |V ub | exclusively and the prospects for Belle II
The most precise exclusive determinations of |V ub | will ultimately come from the most theoretically clean b → ul −ν l modes:B 0 → π + l −ν l ,B 0 s → K + l −ν l and Λ 0 b → pl −ν l , which involve ground state hadrons in the final state. The main challenge facing measurements of |V ub | from these modes is the large background from b → cl −ν l decays, which is O(|V cb | 2 /|V ub | 2 ) ≈ 100 more likely to occur. This background is difficult to separate from signal given the need to partially reconstruct the missing signal neutrino.
Several measurements of exclusiveB 0 → π + l −ν l decays were made at the B factories CLEO, BaBar and Belle. These measurements fall in to two categories of tagged and untagged measurements, which exploit the unique e − e + → Υ (4S) → BB topology and fully hermetic detector design of the B factories. In tagged measurements [220] the non-Signal B meson in the event is first reconstructed in a number of hadronic modes before selecting the signal pion and lepton. Exploiting the known energies and momenta of the interacting e + e − beams allows for neutrino 4-momentum, p ν to be reconstructed and the signal to be extracted using the missing mass squared of the neutrino, M 2 = p 2 ν . In untagged measurements [221,222] the signal pion and lepton are first selected with a tight selection to reduce background from b → cl −ν l decays. The neutrino is then reconstructed by inclusively reconstructing the other B in the event as a sum of remaining tracks and photons. The beam constrained mass, M bc , and beam energy difference 3 are used as fit variables to simultaneously extract the signal. While tagged measurements give a high purity and better q 2 resolution they suffer from a much lower efficiency resulting from the branching fractions and reconstruction efficiencies for tagged modes.
In both tagged and un-tagged measurements the exclusiveB 0 → π + l −ν l signal is fitted in bins of q 2 to determine the partial branching fraction in each bin. These measurements together with LQCD and LCSR predictions can be used as constraints to simultaneously fit the form factors of decays and determine the parameter |V ub |. HFLAV performed a fit for |V ub | theB 0 → π + l −ν l form factor, f + (q 2 ), under a BCL parametrisation utilising BaBar and Belle tagged and untaggged datasets and state of the art theory predictions [66]. This resulted in the most precise determination of |V ub | to date, |V ub | = 3.67 ± 0.09(exp) ± 0.12(theo), which has a total uncertainty of 4%.
Untagged and tagged measurements of B̄0 → π+ℓ−ν̄ℓ decays at Belle II will significantly improve the precision on |Vub|. To project the reduction in uncertainty, both tagged and untagged analyses were performed on simulated Belle II Monte Carlo. The expected uncertainty on |Vub| at a given luminosity was determined by extracting the partial branching fractions from pseudodatasets generated from Monte Carlo expectations and fitting these together with LQCD predictions. With 50 ab⁻¹ and the expected future improvements in LQCD predictions, the projected uncertainties on |Vub| from B̄0 → π+ℓ−ν̄ℓ decays are 1.7% (tagged) and 1.3% (untagged). The dominant systematic for the tagged analysis is the calibration of the tagging efficiency, which is assumed irreducible at 1% on |Vub|. For the untagged analysis the dominant systematic uncertainty results from the uncertainty on the number of BB̄ pairs, which is assumed irreducible at 0.5%. Several systematics relating to the branching fractions and form factors of b → cℓ−ν̄ℓ and b → uℓ−ν̄ℓ decays are also considered irreducible in the untagged analysis, given its lower purity compared to the tagged analysis.
³ Here M_bc = √(E*²_beam − p*²_B) and ΔE = E*_beam − E*_B, where E*_beam and E*_B are the beam and B-meson energies in the centre-of-mass frame.
3.4 Measuring |Vub|/|Vcb| at LHCb
All b-hadron species are accessible at hadron colliders, opening up to LHCb a wide range of possible |Vub| measurements from exclusive b → u transitions, while inclusive |Vub| measurements do not seem feasible at the moment. In high-energy proton-proton collisions bb̄ quark pairs are produced mainly from gluon splitting and hadronize independently; as a consequence, b-hadrons have a wide continuum momentum spectrum, and the reconstruction of semileptonic decays cannot profit from the beam-energy constraints used at the B factories. However, thanks to the large boost acquired by the b-hadrons, the direction of the momentum can be well determined from the vector connecting the primary vertex of the proton-proton interaction and the b-hadron decay vertex. Imposing the b-hadron mass constraint, the missing neutrino momentum can then be calculated up to a two-fold ambiguity. A small fraction of unphysical solutions arises from the imperfect reconstruction of the vertex positions. The best way to choose between the two solutions depends on the specific decay mode under study; the choice can be optimized by considering additional variables related to the decay kinematics using linear regression algorithms [223].
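The mass-constrained neutrino reconstruction amounts to solving a quadratic equation for the neutrino momentum along the flight direction, with the neutrino transverse momentum balancing that of the visible system. A minimal sketch in our own notation (not LHCb code):

```python
import numpy as np

def neutrino_solutions(p_vis, E_vis, flight_dir, m_b_hadron):
    """Longitudinal neutrino momentum along the b-hadron flight direction
    from the b-hadron mass constraint.  Returns the two solutions of the
    quadratic; a negative discriminant signals an unphysical solution
    from imperfect vertex reconstruction and is clipped to zero here."""
    u = np.asarray(flight_dir, dtype=float)
    u = u / np.linalg.norm(u)
    p_vis = np.asarray(p_vis, dtype=float)
    p_par = float(np.dot(p_vis, u))                    # visible momentum along u
    p_perp2 = float(np.dot(p_vis, p_vis)) - p_par**2   # transverse part, squared
    m_vis2 = E_vis**2 - float(np.dot(p_vis, p_vis))
    A = 0.5 * (m_b_hadron**2 - m_vis2) - p_perp2
    a = E_vis**2 - p_par**2
    disc = max(A**2 - p_perp2 * a, 0.0)
    root = E_vis * np.sqrt(disc)
    return (A * p_par - root) / a, (A * p_par + root) / a
```

One of the two returned values reproduces the true longitudinal neutrino momentum when the event is perfectly reconstructed; the choice between them is what the regression algorithms of Ref. [223] optimize.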
The precise determination of an absolute branching fraction requires precise knowledge of the total b-hadron production rate and of the experimental detection efficiency, which includes reconstruction, trigger and final-state selection. To minimize the experimental uncertainty it is preferable to determine ratios of branching fractions, normalizing the b-hadron decay mode under study to a well-known b-hadron decay mode with a decay topology as similar as possible. Choosing a decay of the same b-hadron removes the dependence on the production fraction of that specific b-hadron.
The first determination of |Vub| at LHCb was done with baryons, measuring the branching fractions of Λ0b → pµ−ν̄ and Λ0b → Λ+c µ−ν̄ decays [199]. What is directly determined is the ratio of the CKM matrix elements,
|Vub|²/|Vcb|² = [B(Λ0b → pµ−ν̄) / B(Λ0b → Λ+c µ−ν̄)] × R_FF ,
where R_FF is the ratio of the relevant form factors, calculated using LQCD. The ratio represents a band in the |Vub| versus |Vcb| plane and can be converted into a measurement of |Vub| using existing measurements of |Vcb|. Approximately 10% of the b-hadrons produced at the LHC are Λb baryons, and a clean signal identification is possible by imposing stringent proton-identification requirements. The large background from b-hadron decays with additional charged tracks in the decay products is strongly reduced by employing isolation criteria based on multivariate machine-learning algorithms. The signal yields are determined from a χ² fit to the corrected mass, defined as m_corr = √(m²_hµ + p²_⊥) + p_⊥, where p_⊥ is the momentum of the hadron-µ pair transverse to the Λ0b flight direction.
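The corrected-mass variable is straightforward to compute; an illustrative sketch (our own function names, not LHCb code):

```python
import numpy as np

def corrected_mass(p_vis, m_vis, flight_dir):
    """m_corr = sqrt(m_vis^2 + p_perp^2) + p_perp, where p_perp is the
    visible momentum transverse to the b-hadron flight direction."""
    u = np.asarray(flight_dir, dtype=float)
    u = u / np.linalg.norm(u)
    p_vis = np.asarray(p_vis, dtype=float)
    p_par = float(np.dot(p_vis, u))
    p_perp = np.sqrt(max(float(np.dot(p_vis, p_vis)) - p_par**2, 0.0))
    return np.sqrt(m_vis**2 + p_perp**2) + p_perp
```

When only a neutrino is missing, m_corr has a kinematic endpoint at the b-hadron mass, which is what makes it a good fit variable; if nothing is missing transverse to the flight direction, m_corr reduces to the visible mass.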
The LQCD form factors used in the calculation of |Vub| [16] are most precise in the kinematic region where q², the invariant mass squared of the leptonic system, is high. When the branching fractions of the b → u (b → c) decays are integrated in the region q² > 15 (7) GeV², the theory uncertainty on |Vub|/|Vcb| is 4.9%. This measurement, performed with Run 1 data, gives |Vub|/|Vcb| = 0.083 ± 0.004 (stat) ± 0.004 (syst), consistent with previous exclusive measurements of the two CKM matrix elements.
A new measurement of this type is currently under study at LHCb. It uses B0s → K+µ−ν̄ decays, whose branching fraction is predicted to be of the same order of magnitude as that of Λ0b → pµ−ν̄. The signal selection is challenging due to the large background from partially reconstructed decays of all b-hadron species, but it can exploit the good efficiency and purity of the kaon and muon identification provided by the LHCb detector, the separation of the Kµ vertex from the primary vertex, and the isolation tools mentioned above. The chosen normalization mode, B0s → D+s µ−ν̄ with D+s → K−K+π+, benefits from the small uncertainty on the D+s branching fraction. The clean identification of this decay mode, despite the large feed-down from B0s decays to excited Ds mesons with unreconstructed neutral particles, has been proven possible at LHCb with the measurement of the B0s lifetime [224]. Form factors for the B0s decays to K and Ds have been calculated in LQCD by several groups [182,174]. The calculations are performed in the high-q² region and extrapolated to the full range with BGL or BCL z-expansions. Different calculations agree at high q², but there is currently a disagreement in the value extrapolated to q² = 0. For B0s → K+µ−ν̄ in the low-q² region (up to 12 GeV²) form factors calculated with LCSR are also available [219]. The uncertainties on the experimental measurement of the B0s → K+µ−ν̄ yield increase at high q² (low kaon momentum) due to the reduced efficiency and the larger background contamination. It is foreseen to perform the measurement in a few q² bins, so that different form-factor calculations can be used. The larger data samples accumulated during the LHCb Upgrade period will allow a differential measurement in finer q² bins.
Purely leptonic B− → µ−ν̄ decays are not accessible at LHCb. An alternative approach has been tested, searching for the decay B− → µ−ν̄µ+µ−, in which a hard photon radiated from the initial state materializes into two muons. This decay has the experimental advantages of additional particles in the final state and of a larger branching fraction, due to the removal of the helicity suppression. An upper limit on the branching fraction of 1.6 × 10⁻⁸ has been determined with 4.7 fb⁻¹ of integrated luminosity [4], making it a possible candidate for a |Vub| measurement in the LHCb Upgrade period [225].

Rπ
The experimental signature of B → πτντ is challenging: low in rate due to CKM suppression, this final state can only be isolated from backgrounds using multivariate analysis techniques. Due to the pseudoscalar nature of the pion in the final state, an increased sensitivity to certain new-physics models involving scalar exchange particles is expected, and measurements of this branching fraction offer an orthogonal path to probe the anomalies observed in R(D) and R(D*). The first limit on the branching fraction, using leptonic and one-prong τ decay modes, was reported in Ref. [226] using a frequentist method. This result can be converted into a value of Rπ, which can in turn be compared to the SM prediction of Refs. [175,227]. Although the current precision is very limited, this result can already exclude part of the parameter space of new-physics models, e.g. charged Higgs bosons, cf.
Ref. [227]. Albeit a challenging signature, the final state with a charged pion has excellent prospects to be discovered in the large future Belle II data set. A naive extrapolation of Eq. 3.7 assuming SM couplings results in evidence with 4 ab⁻¹ and discovery with 11 ab⁻¹ of integrated luminosity. The theoretical precision on Rπ will further increase with progress in lattice QCD and with combined fits to light-lepton data and lattice results (the measured spectra can constrain the low-q² region, which the lattice has difficulty predicting reliably).

Experimental status and prospects of B → ℓν̄γ
The experimental study of B → ℓν̄γ with ℓ = e, µ is challenging and requires the clean environment of an e+e− machine: in such a setting the known initial state and the full reconstruction of the second B meson produced in the collision provide the constraints necessary to identify this signature. In addition, to avoid being overwhelmed by background, only photons at high energies (≈ 1 GeV or larger) can be studied this way. The difficulties lie in the low efficiency of reconstructing the second B meson, which has to be done in hadronic modes with low branching fractions, and in the still sizeable cross-feed from B → π0ℓν̄ and B → ηℓν̄ decays. These two semileptonic processes produce very similar final states, namely B → ℓν̄γγ, but they can be reduced by looking for an unassigned second high-energy photon in the collision event under study. To separate B → ℓν̄γ from such decays, a fit to m²_ν = (p_Bsig − p_ℓ − p_γ)² can be carried out. Here p_ℓ and p_γ denote the reconstructed four-vectors of the visible final states of B → ℓν̄γ. The four-vector of the decaying signal B meson, p_Bsig, can be reconstructed using the information from the reconstructed tag-side B meson. Correctly reconstructed signal decays peak at m²_ν ≈ 0 GeV², whereas the dominant semileptonic decays are shifted to higher values due to the absence of the additional photon in the four-vector sum. The sensitivity can be further increased by explicitly reconstructing the semileptonic backgrounds and combining the results into the ratio R. In addition, it might then be possible to establish a new channel to measure |Vub|. Figure 8.10 shows the projection for the measurement of λB and |Vub| with respect to the measured central values. The ellipses correspond to the expected statistical and systematic uncertainties; for the projection, symmetric Gaussian uncertainties are assumed. With an increased data set from Belle II the uncertainties on the measured parameters can be drastically reduced, by about 90%.
At Belle II the partial branching fraction of B+ → ℓ+νℓγ should ideally be measured for several cuts on the signal-side photon energy above 1 GeV. This would reduce the theoretical uncertainties originating from the B+ → ℓ+νℓγ form factors, allow for a more precise measurement of λB, and feed this information into a global analysis. This was the strategy pursued by Ref. [228], which constrained the π0 semileptonic background this way. The current experimental limit, obtained with a lower photon-energy cut of 1 GeV and a flat Bayesian prior, is given in Eq. (3.10). The discovery prospects for this decay at Belle II are excellent: the improved tracking capabilities, better calorimeter electronics, and the continuous development of modern tagging algorithms such as Ref. [229] will help improve the sensitivity. Extrapolating from the central value and uncertainty of the currently most precise limit of Eq. (3.10), ΔB(B → ℓν̄γ) = (1.4 ± 1.1) × 10⁻⁶, evidence should be possible with 5 ab⁻¹ and a discovery with 50 ab⁻¹ [230]. In principle, after discovery the value of |Vub| could be extracted from this decay as well, along with the first inverse moment of the light-cone distribution amplitude, λB. An extrapolation from the current sensitivity is shown in Figure 3.1, based on the numbers from Ref. [230]. The sensitivity to |Vub| will not be competitive with other methods (leptonic and semileptonic), but the achievable precision on λB will help measurements and interpretations which rely on our understanding of the light-cone distribution amplitude.

Theoretical progress for B → γℓν̄
The photoleptonic decay B → γℓν̄, determined by two independent form factors, is the simplest probe of the B-meson light-cone distribution amplitudes (LCDAs), which represent one of the most important inputs in the theory of semileptonic and nonleptonic B decays based on QCD factorization and LCSRs. The calculation of the form factors in HQET at large photon recoil is well developed at leading power and can be found in Ref. [231]. The 1/mb and 1/Eγ power-suppressed effects, expressed in the form of the soft-overlap part of the form factors, were quantified using a technique [232] based on dispersion relations and quark-hadron duality (see also Ref. [233]). The most advanced calculation of the B → γℓν̄ form factors, including power-suppressed terms, was performed recently [234], resulting in a prediction of the decay branching fraction at Eγ > 1.0 GeV as a function of the key unknown theoretical quantity: the inverse moment λB of the B-meson LCDA. An alternative approach [235] calculates the power-suppressed corrections due to photon emission at long distances in terms of the photon LCDAs in the LCSR framework. A proof of concept for a lattice-QCD calculation of radiative leptonic decays was given recently in Ref. [236], see also Ref. [237].

B → ππℓν̄ decays beyond the ρ
Calculations of B → ρ form factors, both in lattice QCD and from LCSRs, usually adopt a narrow-ρ approximation and by default ignore the influence of nonresonant effects (and of radially excited ρ's) in the mass interval around the ρ. The role of these effects has to be assessed at a quantitative level. In Refs. [238,239] a first attempt was undertaken to calculate more general B → ππ form factors from LCSRs, using two-pion LCDAs at low dipion mass and at large recoil. The currently limited knowledge of these nonperturbative inputs calls for their further development and also for alternative methods. In Ref. [240] a different version of LCSRs, with B-meson LCDAs, was obtained, which predicts the convolutions of the B̄0 → π+π0 form factors in P wave with the timelike pion form factor. In the narrow-ρ limit these sum rules reproduce analytically the known LCSRs for the B → ρ form factors. Using data on the pion vector form factor from τ decays, the finite-width effects and the contribution of excited ρ resonances to the B → ππ form factors were found to amount to up to ∼ 20% in the small dipion-mass region, where they can be interpreted as a nonresonant (P-wave) background to the B → ρ transition.

Remarks on the z expansion
The use of the so-called z expansion for form factors has become a standard practice for semileptonic decays, see Refs. [241,242] for a pedagogical discussion. In the workshop several issues concerning it were discussed, in particular its application to baryon form factors.
Form factors which parametrize matrix elements of the form ⟨L|J|H⟩ have known analytic structure. In particular, they are analytic in the complex t = q² plane outside a cut on the real axis. The cut starts at some positive t_cut equal to the invariant mass squared of the lightest state the current J can produce. The domain of analyticity can be mapped onto the unit circle via
z(t; t0) = (√(t_cut − t) − √(t_cut − t0)) / (√(t_cut − t) + √(t_cut − t0)) ,
where t0 is a free parameter denoting the point that is mapped to z = 0. The form factor can then be expanded as a Taylor series in z, which is a model-independent parametrization. For heavy-to-light form factors the maximum value of z is related to the distance between (m_H − m_L)² and t_cut. As a result, increasing t_cut decreases the maximum value of z, leading to a faster convergence of the series.
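The conformal map and the effect of t_cut on the size of |z| can be illustrated numerically (an illustrative sketch with approximate PDG masses; t0 is chosen arbitrarily):

```python
import numpy as np

def z_map(t, t_cut, t0):
    """Conformal map of the cut t = q^2 plane onto the unit disk:
    z = (sqrt(t_cut - t) - sqrt(t_cut - t0)) / (sqrt(t_cut - t) + sqrt(t_cut - t0))."""
    a = np.sqrt(t_cut - t)
    b = np.sqrt(t_cut - t0)
    return (a - b) / (a + b)
```

Evaluating at the top of the B → π semileptonic region, t = (m_B − m_π)², one finds |z| well below unity, and raising t_cut from the meson threshold (m_B + m_π)² to the baryon threshold (m_Λb + m_p)² shrinks |z| further, which is why the choice of t_cut matters for convergence.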
Naively one might assume that the lightest state is the two-particle state H̄L. This would imply that t_cut = (m_H + m_L)², but this is not the case in general. For example, for the proton electric and magnetic form factors (H = L = p) the cut starts at the two-pion threshold and not at the pp̄ threshold. As another example, for one of the B → π form factors (f+) the cut starts at m²_B*. Since this is a simple pole, it can be easily "removed" by considering (t − m²_B*) f+ as a Taylor series in z. For (t − m²_B*) f+ the cut starts at (m_B + m_π)². If one uses a higher value of t_cut than the physical one, one faces the danger of trying to expand the form factor in a region where it is not analytic. One of the immediate results of the workshop was the identification of such a problem in the literature. For baryon form factors, e.g. Λb → p, analyses have used the wrong value t_cut = (m_Λb + m_p)², see Ref. [243] and arXiv.org version 2 of Ref. [16]. In fact, t_cut for the baryon form factors is the same as for the meson form factors of the analogous decays.
Another issue discussed in the workshop is the use (or lack of use) of bounds on the coefficients of the z expansion. Although the form factor is expressed as an infinite series, in practice the series is truncated after a few terms. One would like to ensure that the value of a physical parameter such as |Vub| is independent of the number of parameters used, by bounding the coefficients. For example, one can use a unitarity bound [244] or a bound from the heavy-quark expansion [245]. It seems that currently there is no consistent use of bounds in extractions of |Vub|. As the analysis of Ref. [246] shows, this can be a problem as the data improve and the number of necessary parameters increases. It can be especially problematic if one needs to use the z expansion for extrapolation. The community needs to be aware of this issue and at least test that results do not change if bounds are applied to the coefficients.
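The stability test advocated here can be illustrated on synthetic data: fit a toy form factor with increasing truncation orders and check that the z⁰ coefficient (the analogue of the normalization carrying |Vub|) stays put while the coefficients respect a crude size constraint (toy numbers, not a real unitarity bound):

```python
import numpy as np

rng = np.random.default_rng(1)
z = np.linspace(-0.2, 0.25, 40)
# Toy "data": a quadratic form factor plus small Gaussian noise.
f = 0.9 - 1.8 * z + 0.6 * z**2 + rng.normal(0.0, 1e-4, z.size)

norms = []
for order in (2, 3, 4):
    coeffs = np.polyfit(z, f, order)   # highest power first
    norms.append(coeffs[-1])           # constant term, f(z = 0)
    # Crude stand-in for a coefficient bound: reject runaway coefficients.
    assert float(np.sum(coeffs**2)) < 10.0
```

With well-behaved coefficients the extracted normalization barely moves as the truncation order grows; in a real analysis a drift with order would signal that a bound (unitarity or HQE) is needed.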
The unitarity bounds for meson decays such as B → π rely on the fact that for (t − m²_B*) f+ the cut starts at (m_B + m_π)². For baryon decays such as Λb → p, unitarity can only constrain the region above (m_Λb + m_p)². The region between (m_B + m_π)² and (m_Λb + m_p)² is left unconstrained. Following the analysis of Ref. [246] one might worry that the contribution of the latter region is the dominant one. While considering meson and baryon contributions to the dispersive bounds together might overcome the problem [87], further study is warranted.

Quark masses
In the Standard Model (and many extensions), the quark masses and the CKM matrix all stem from Higgs-Yukawa couplings between the quark fields and the Higgs doublet. It is therefore natural to consider the bottom-quark mass, m_b, in this report. As discussed in Sec. 5, m_b can be extracted from the inclusive semileptonic B-decay distributions, along with |Vcb|. In the theory of inclusive decays, the charm-quark mass, m_c, is also needed to control an infrared sensitivity; see Sec. 5. Figure 4.1 collects recent determinations of m_b and m_c, based on Refs. [248,249,27,250,251] and Refs. [247,248,251,252,253], respectively. Note that the most precise results [247,248,251] all use the very-high-statistics MILC HISQ ensembles with staggered fermions for the sea quarks [272,23]. In the future, other groups [273,274,275,276] will have to collect similar statistics to enable a complete cross check.
Three distinct methods are used in the results shown in Fig. 4.1: 1) converting the bare lattice mass to the MS scheme, 2) fitting to a formula for the heavy-light hadron mass in the heavy-quark expansion [277,278], and 3) computing moments of quarkonium correlation functions [279,280].⁵ The first two require an intermediate renormalization scheme that can be defined for any ultraviolet regulator: quark masses defined this way can be computed with lattice gauge theory or in dimensional regularization. For example, HPQCD 13 (Υ decays) [257] uses two-loop lattice perturbation theory to convert the bare NRQCD mass to the pole mass [281,282], and dimensional regularization to convert the pole mass into the MS mass.
Instead of the pole mass, one can use a regularization-independent momentum-subtracted mass [283]. Like the MS scheme, these RI-MOM schemes are mass-independent renormalization schemes, but they depend on the gauge. In lattice gauge theory, Landau gauge is easily obtained on each gauge-field configuration via a minimization procedure [284]. The mass renormalization factor, Z_m, can be computed from the three-point function of the scalar or pseudoscalar density, because Z_m⁻¹ = Z_S = Z_P (up to technical details for Wilson fermions). For example, the matrix element ⟨p′|P|p⟩ between gauge-fixed quark states can be used to define Z_P using the same formulas in lattice gauge theory as in continuum gauge theory (with dimensional regularization) [283]. The schemes labeled RI-MOM and RI′-MOM use p′ = p and slightly different definitions of the quark-field normalization Z_2; for a review see Ref. [285]. The momentum transfer q ≡ p′ − p = 0 here, namely it is "exceptional" in the sense of Weinberg's theorem [286]. On the other hand, the RI-sMOM scheme [287] chooses p and p′ such that p′² = q² = p² ≡ µ². Without the exceptional momentum, the extraction of Z_P is more robust. It would be interesting to see whether RI-sMOM on the ETM 2+1+1 ensembles yields an m̄_b favoring the RI′-MOM results or the RI-sMOM results on MILC's ensembles.
⁵ Lattice methods with no results in Fig. 4.1 are not discussed here.
[Fig. 4.1 caption: lattice-QCD calculations with 2+1+1 flavors of sea quark [247,248,249,27,250,251,252,253]; triangles denote lattice-QCD calculations with 2+1 flavors of sea quark [254,255,256,257,258]; circles denote results extracted from e+e− collisions near QQ̄ threshold [259,260,261,262,263,264,265,266,267,268,269,270,271]. The vertical band shows the FLAG 2019 average for 2+1+1 sea flavors. Note that the 2+1-flavor calculations are in rough (good) agreement for bottom (charm).]
The HQE method starts with the HQE formula for a heavy-light hadron mass [288,289],
M = m + Λ̄ + (µ²π − d_J µ²G)/(2m) + O(1/m²) ,   (4.3)
where M is the hadron mass, which is computed in lattice QCD as a function of the quark mass m, and d_J depends on the spin of the hadron. The quantities can be identified with the energy of gluons and light quarks, Λ̄, the Fermi motion of the heavy quark, µ²π, and the hyperfine splitting, µ²G. (µ²G depends logarithmically on m.) Although this idea is not new [277,278], to be precise one has to confront the definition of m. The pole mass is natural in the context of the HQE, but it is not suitable in practice because of its infrared sensitivity. The MS mass, on the other hand, breaks the power counting: m_pole − m_MS ∝ αs m_pole. Instead, one chooses mass definitions that, in some sense, lie in between these two choices. Gambino et al. [249] choose the kinetic mass [290], while Fermilab/MILC/TUMQCD [248] choose the minimal-renormalon-subtracted (MRS) mass [291]. After extracting m_kin or m_MRS from fitting Eq. (4.3), the result can be converted to the MS scheme with three- and four-loop perturbation theory, respectively. In addition to the different matching, the error bar from Fermilab/MILC/TUMQCD is so small because it is based on the largest data set of all calculations in Fig. 4.1. See Sec. 5.3 for further discussion and results for Λ̄, µ²π, µ²G, and higher-dimension corrections to the HQE.
One can avoid an intermediate scheme altogether by computing a short-distance quantity in lattice QCD, taking the continuum limit, and analyzing the result with MS perturbation theory. For example, one can compute moments G(n)_Γ of quarkonium correlation functions [279,280], for some Dirac matrix Γ. In lattice gauge theory, the pseudoscalar density needs no renormalization if Γ = γ5 and the correlator is multiplied by the square of the bare quark mass, the combination m_Q q̄γ5q being renormalization-group invariant. The moments G(n)_Γ are physical observables with a good continuum limit, proportional to m_Q to the appropriate power multiplied by a dimensionless function of αs(m_Q).
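Since the HQE mass formula M ≈ m + Λ̄ + (µ²π − d_J µ²G)/(2m) is linear in Λ̄ and in the combination µ²π − d_J µ²G, the fit reduces to linear least squares. A toy illustration with synthetic inputs (the numerical values are invented, not lattice data):

```python
import numpy as np

# Generate pseudoscalar (d_J = 3) heavy-light masses from the HQE formula
# with invented parameter values (GeV units throughout).
LAM, MU_PI2, MU_G2, DJ = 0.55, 0.45, 0.35, 3.0
m_q = np.array([1.6, 2.4, 3.2, 4.2, 5.0])            # heavy-quark masses
M_H = m_q + LAM + (MU_PI2 - DJ * MU_G2) / (2 * m_q)  # hadron masses

# Linear least squares for M - m in the basis {1, 1/(2m)} recovers
# Lambda_bar and the combination mu_pi^2 - d_J mu_G^2.
A = np.column_stack([np.ones_like(m_q), 1.0 / (2 * m_q)])
lam_fit, mu_comb = np.linalg.lstsq(A, M_H - m_q, rcond=None)[0]
```

In a real analysis one fits both spin states to disentangle µ²π from µ²G, and the choice of mass scheme for m (kinetic, MRS) enters before this step.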
Thus, these moments yield determinations of the strong coupling as well as of the quark masses. In Fig. 4.1, results obtained in this way are labeled "moments".
The same moments G(n)_Γ can be obtained from the cross section for e+e− annihilation into QQ̄ hadrons via a suitably subtracted dispersion relation. In this case Γ = γµ for the electromagnetic current, and the normalization factor c_γµ is fixed because the electromagnetic current is conserved. Thus, the same perturbative calculations (only changing Γ) can be used to extract the bottom- and charm-quark masses and αs from experimental measurements. The dispersion relation, related sum rules, and the perturbative series for the moments are the basis of the results labeled e+e− → bb̄ and e+e− → cc̄ in Fig. 4.1. The orders α^p_s, p = 1, 2, 3, became available in 1993 [292], 1997 [293], and 2006 [294,270], respectively.

Leptonic decays
Instead of semileptonic decays, CKM matrix elements can also be determined from purely leptonic decays. For example, a goal of Belle II is to improve the determination of |Vub| from B+ → τ+ν, as well as |Vcd| from D+ → ℓ+ν and |Vcs| from D+s → ℓ+ν, and a goal of LHCb is to observe Bc → τν. The rates for leptonic decays suffer a helicity suppression, making tauonic and muonic decays preferred experimentally. Leptonic decays are mediated by the axial-vector part of the electroweak current, as well as by possible pseudoscalar currents, and so complement semileptonic decays.
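The helicity suppression can be made concrete with the tree-level SM rate, Γ(B → ℓν) ∝ f²_B |Vub|² m_B m²_ℓ (1 − m²_ℓ/m²_B)²: the m²_ℓ factor strongly favors the τ mode. A back-of-the-envelope sketch with PDG masses:

```python
# Lepton-mass dependence of the tree-level leptonic rate,
# Gamma proportional to m_l^2 * (1 - m_l^2/m_B^2)^2.
M_B, M_TAU, M_MU = 5.27934, 1.77686, 0.10566  # GeV (approximate PDG values)

def helicity_factor(m_l, m_b=M_B):
    """Helicity-suppression factor in Gamma(B -> l nu)."""
    return m_l**2 * (1.0 - m_l**2 / m_b**2) ** 2

# The tau mode is enhanced by roughly two orders of magnitude over the muon mode.
ratio = helicity_factor(M_TAU) / helicity_factor(M_MU)
```

The enhancement of order 10² for τ over µ is why B+ → τ+ν was the first leptonic B decay observed, despite the harder τ reconstruction.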
The hadronic quantity describing the decay is known as the decay constant, defined by
⟨0| ū γµ γ5 b |B+(p)⟩ = i f_B+ pµ ,   (4.6)
where pµ is the four-momentum of the B meson and f_B+ is the decay constant. For other mesons, the axial currents and notation change in obvious ways. From the partial conservation of the flavor-nonsinglet axial current, the pseudoscalar density can also be used to compute the decay constant:
f_B+ m²_B+ = (m_b + m_u) ⟨0| ū iγ5 b |B+⟩ ,   (4.7)
where m_b and m_u are bare quark masses. Equations (4.6) and (4.7) are the basis of lattice-QCD calculations. In general, the axial current used is not a Noether current, so it is not absolutely normalized. Fermion formulations with good chiral symmetry (staggered, overlap, domain wall) provide an absolutely normalized pseudoscalar density. Until recently, however, lattice spacings have not been small enough to use these approaches for the b quark. Methods developed especially for heavy quarks have therefore been used, and they do not provide any absolutely normalized b̄Γu bilinears.
Figure 4.2 compares results from lattice QCD with realistic sea content of nf = 2+1+1 or 2+1 sea quarks with the FLAG 2019 [3] average for the 2+1+1 sea. Because the Fermilab/MILC results dominate the FLAG average, we simply quote them in Eqs. (4.8)-(4.13) [23]. The quoted systematic uncertainties stem from different choices in choosing fit ranges for the correlation functions and from checking the continuum extrapolation by adding a coarser lattice; a third, "fπ,PDG", error comes from converting from lattice units to MeV with the pion decay constant of the PDG [98]; the last uncertainty stems from ambiguities in estimating electromagnetic effects in the context of a QCD calculation that omits QED. The results are arguably precise enough for the foreseeable future.
The results in Eqs. (4.8)-(4.13) again use the very-high-statistics MILC HISQ ensembles with staggered fermions for the sea quarks. Here the lattice spacing is, for some ensembles, small enough to reach the b quark, so the calculation uses the HISQ action for the b and light quarks alike. Thus an absolutely normalized pseudoscalar density is available, and the uncertainty is essentially statistical, as propagated through a fit to the continuum limit at physical quark mass. Again, other groups will have to collect similar statistics in the future to enable a complete cross check. To go beyond the precision quoted here, analyses of leptonic decays will have to include QED radiative corrections to the measured rates. The issues, and an elegant solution for light mesons (pion and kaon), can be found in Refs. [303,304,305]. Radiative corrections for heavy-light mesons will be more difficult to incorporate, because of the hierarchy of soft scales ΛQCD, Λ²QCD/mQ, Λ³QCD/m²Q, etc.
5 Heavy-to-heavy inclusive
5.1 Heavy Quark Expansion for b → c

Review of the Current Status
The heavy quark expansion (HQE) for inclusive semileptonic b → c transitions starts from a correlation function of the b → c currents. The time-ordered product can be expanded in an operator product expansion, which for large m_b and m_c yields an expansion in terms of local hadronic matrix elements that parametrize the hadronic input. Within this approach, the differential rate can be expressed, schematically, as a series in 1/m_b,
dΓ = Σ_i dΓ_i / m^i_b ,
where the coefficients dΓ_i are given by
dΓ_i = Σ_k C(k)_i ⟨B| O(k)_i |B⟩ ;
here the O(k)_i are operators of mass-dimension i + 3, the sum over k runs over all elements of the operator basis, and the C(k)_i are coefficients that can be calculated in QCD perturbation theory as a series in αs(m_b). Note that starting at order 1/m³_b the b → c HQE exhibits an infrared sensitivity to the charm-quark mass; for the total rate, Γ3 contains a log(m²_c), while Γ5 contains inverse powers of m²_c, which are explicitly shown in Eq. (5.2).
The leading term dΓ0 is the partonic result, which turns out to be independent of any unknown hadronic matrix elements. This term is fully known (as a triple-differential rate) at tree level, at order αs [306,307], and at order α²s [307,308,309,310,311].
Due to heavy-quark symmetry there is no term dΓ1, and the leading power corrections appear at order 1/m²_b. These are given in terms of two non-perturbative matrix elements, the kinetic and chromomagnetic parameters,
µ²π = (1/2M_B) ⟨B| b̄_v (iD⊥)² b_v |B⟩ ,   µ²G = (1/2M_B) ⟨B| b̄_v (i/2) σµν Gµν b_v |B⟩ .
The coefficients of these two matrix elements are known to order αs [312,313,314,315,316]. At order 1/m³_b there are again only two matrix elements, the Darwin term ρ³D and the spin-orbit term ρ³LS, for which only the tree-level coefficients are known. Furthermore, if the matrix elements are defined as above, the coefficient of ρ³LS vanishes for the total rate, which is related to the reparametrization invariance of the HQE [317].
The HQE predictions for the inclusive semileptonic rate depend on m_b and m_c, and the size of the perturbative QCD corrections depends on the choice of quark-mass scheme. The quark masses are discussed in detail in the section on quark masses, to which we refer the reader.

Higher power corrections
At order 1/m⁴_b and higher the number of independent nonperturbative parameters starts to proliferate. In addition, due to the dependence on powers of 1/m_c, the power counting needs to be redefined: since parametrically m²_c ∼ ΛQCD m_b, one has to count the 1/m²_c part of dΓ5 as part of dΓ4, see Eq. (5.2). Thus the full complexity of the dimension-8 operators already enters an analysis of the 1/m⁴_b contributions.
We shall not list the independent matrix elements appearing at orders 1/m⁴_b and 1/m⁵_b; rather, we refer the reader to the list given in Refs. [318,319]. However, the proper counting of the number of independent operators has been settled only recently [320], using the method of Hilbert series. It turns out that at tree level there are 9 dimension-7 operators [318], while QCD corrections increase this number to 11 [320].
The reason is very simple. At order 1/m⁴_b we have operators with four covariant derivatives, which can be written in terms of E² (chromoelectric field squared) and B² (chromomagnetic field squared), where E and B are both color octets. The combination appearing at tree level is E^a E^b T^a T^b (and likewise for B). However, the symmetric product of T^a and T^b contains a singlet and an octet component,
{T^a, T^b} = (1/N_c) δ^{ab} 1 + d^{abc} T^c .
The two terms on the right-hand side acquire different coefficients once QCD corrections are taken into account, and thus become independent operators. Although this observation [320] is correct, it has no impact unless QCD corrections are considered at order 1/m⁴_b. The same argument explains the different counting at order 1/m⁵_b, where there are 18 parameters at tree level [318], while the general case involves 25 matrix elements [320].
Clearly the number of independent parameters appearing at orders 1/m⁴,⁵_b is too large to extract them all from experiment, even if the data become very precise in the future. One therefore has to rely on additional theoretical input, which will in general introduce some model dependence. A systematic approach has been proposed in Ref. [318] and refined in Ref. [319]: it is based on the "lowest-lying state saturation Ansatz" (LLSA) and corresponds to a naive factorization of the matrix elements. The LLSA allows one to write all matrix elements appearing at 1/m⁴_b and 1/m⁵_b in terms of four parameters: µ²π and µ²G (see Eqs. (5.4) and (5.5)) and ε1/2 and ε3/2, where εj are the excitation energies of the lowest orbitally excited spin-symmetry doublets, with j the spin of the light degrees of freedom. Note that in this setup ρD and ρLS can also be computed, which may serve as a check, since these parameters can also be extracted from experiment.
The LLSA has been used to study the impact of the 1/m⁴,⁵_b terms on the extraction of |Vcb| in Ref. [321]. It turns out that, even allowing a generous margin for the uncertainties, the shift in the extracted |Vcb| remains well below 1%; with the default choices of Ref. [321] a shift of −0.25% is found.
Recently, the impact of reparametrization invariance on the HQE has been re-investigated. In Refs. [317,322] it has been shown that reparametrization invariance reduces the number of independent parameters at higher orders, for the total rate and the $q^2$ moments. While the number of HQE parameters up to order $1/m_b^2$ is still two, there is only one parameter at $1/m_b^3$, since the spin-orbit term can be absorbed into $\mu_G^2$. At order $1/m_b^4$ there are only four parameters, which opens up the possibility of constraining the higher-dimensional matrix elements directly with experimental data, at least if Belle II is able to measure several moments of the $q^2$ distribution.

Heavy Quark Expansion for B → X c τν
The recent data on the exclusive decays $B \to D^{(*)}\tau\nu$ indicate that the branching ratios of these channels lie above the SM predictions. This issue is discussed in detail in Sec. 2.4, but we may also consider the inclusive decay $B \to X_c\tau\nu$, for which the HQE provides a precise prediction.
While a new measurement of $B \to X_c\tau\nu$ has to wait until Belle II has collected a sufficient data sample, we may compare with a measurement performed at LEP, which resulted in [98]
$$\mathrm{Br}(b\text{-admix} \to X\tau\nu) = (2.41 \pm 0.23)\%\,,$$
where $b$-admix refers to the $b$-hadron admixture produced in $Z$ decays. Since to leading order the inclusive semitauonic branching fractions of all $b$ hadrons are the same, we may take this as an estimate of $B \to X_c\tau\nu$. More recently, sizable effects of order $1/m_b^3$ have been found [324]; using the kinetic scheme, but without $O(\alpha_s^2)$ contributions, Ref. [324] obtained $\mathrm{Br}(B^- \to X_c\tau\nu) = (2.26 \pm 0.05)\%$.
The additional inclusion of $O(\alpha_s^2)$ effects in the kinetic scheme appears to lead to a very similar value [325]. These HQE calculations are compatible with the LEP measurement.
However, the LEP measurement is not very precise and thus leaves room for new-physics contributions. In the context of $R(D^{(*)})$ many new-physics scenarios have been discussed, and we will not repeat any of this here. Instead, we use a very simple Ansatz to explore the effect of new physics qualitatively: we add an additional interaction parametrized by two couplings $\alpha$ and $\beta$. Fitting these two parameters to the data on $B \to D^{(*)}\tau\nu$ yields $\alpha = -0.15 \pm 0.04$ and $\beta = 0.35 \pm 0.08$ [324]. Inserting this back into the calculation of the total rate for $B \to X_c\tau\nu$ indicates a significant shift of the inclusive rate. This result is presented graphically in Fig. 5.1 and indicates that, generically, the exclusive and inclusive data are in tension, unless the new physics is such that it almost cancels in the inclusive rate.
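The logic of such a two-coupling fit can be sketched numerically. Everything below (SM values, sensitivity coefficients, pseudo-measurements) is a placeholder illustration of a weighted least-squares determination of two effective couplings, not the actual inputs or coefficients of Ref. [324].

```python
import numpy as np

# Hypothetical linearized dependence of R(D) and R(D*) on two effective
# couplings (alpha, beta).  All numbers are illustrative placeholders.
R_SM = np.array([0.299, 0.258])   # assumed SM predictions
K = np.array([[1.5, 1.0],         # placeholder sensitivities dR_i/dalpha, dR_i/dbeta
              [0.4, 1.2]])
R_exp = np.array([0.340, 0.295])  # pseudo-measurements
sigma = np.array([0.030, 0.015])  # pseudo-uncertainties

# Weighted least squares for delta R_i = K_ij (alpha, beta)_j
A = K / sigma[:, None]
b = (R_exp - R_SM) / sigma
couplings, *_ = np.linalg.lstsq(A, b, rcond=None)
alpha, beta = couplings
print(f"alpha = {alpha:.3f}, beta = {beta:.3f}")
```

With two observables and two couplings the system is exactly determined, so the weights only matter once more measurements (or a third coupling) are added.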

Inclusive processes in lattice QCD
Until recently, the application of lattice QCD has been limited to the calculation of form factors of exclusive processes such as $B \to D^{(*)}\ell\nu$ or $B \to \pi\ell\nu$, for which the initial and final states contain a single hadron. A first proposal to evaluate the structure functions relevant to the inclusive decays $B \to X_{u,c}\ell\nu$ in lattice QCD was put forward in [326]. As mentioned above, the differential rate for the inclusive decay $B(p_B) \to X_c(p_X)\,\ell(p_\ell)\,\bar\nu(p_\nu)$ may be written in terms of the structure functions of the hadronic tensor $W^{\mu\nu}(p_B, q)$, which contains the sum over all possible final states; here $J^\mu$ stands for the $b \to c$ weak current and $q^\mu = (p_\ell + p_\nu)^\mu$ is the momentum transfer. The optical theorem relates $W^{\mu\nu}$ to the forward scattering matrix element $T^{\mu\nu}(p_B, q)$ as $-(1/\pi)\,\mathrm{Im}\,T^{\mu\nu} = W^{\mu\nu}$; see for instance [327,328]. One can calculate these forward matrix elements on the lattice as long as the momenta $p_B$ and $q$ are in a region where no singularity develops. This means that the lattice calculation is possible in an unphysical kinematical region where no real decay is allowed, corresponding to the situation where the energy $p_X^0$ given to the final charm system is too small to create real states such as the $D$ and $D^*$ mesons or the $D\pi$ continuum states. The connection to the physical region can be established by using Cauchy's integral in the complex plane of $p_X^0$. An alternative method is to reconstruct the spectral density (of the states $X$ appearing in the sum) directly from the lattice correlation function [329].
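For definiteness, the forward matrix element referred to above can be written in the standard form (normalization conventions vary between references):

```latex
T^{\mu\nu}(p_B,q) \;=\; i \int d^4x \, e^{-iq\cdot x}\,
\frac{\langle \bar{B}(p_B) | \, \mathrm{T}\{ J^{\mu\dagger}(x)\, J^{\nu}(0) \} \, | \bar{B}(p_B) \rangle}{2 m_B}\,,
\qquad
W^{\mu\nu} \;=\; -\frac{1}{\pi}\, \mathrm{Im}\, T^{\mu\nu}\,.
```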
An exploratory lattice calculation has been performed at relatively light $b$-quark masses [326]. The numerical results suggest that, in the zero-recoil limit, the matrix element is nearly saturated by the ground-state $D^{(*)}$ contribution.
Since the non-perturbative lattice results are obtained at kinematical points away from the resonance region, they may also be used to validate the heavy quark expansion (HQE). So far, the HQE calculation is available in the unphysical region only at tree level, $O(\alpha_s^0)$. The one-loop and two-loop corrections have been calculated for the differential decay rate; they still have to be transformed to the unphysical kinematical point by applying the Cauchy integral. Such work is in progress.
As already mentioned, the lattice calculation can be performed only in the unphysical kinematical region, so its comparison with the experimentally observed $B$-decay distribution is not straightforward. One should first integrate the experimental data with an appropriate weight to reproduce Cauchy's integral in the complex plane of $p_X^0$, which requires the experimental data as a function of the two kinematical variables $q^2$ and $p_B \cdot q$. Even then, the whole complex plane is not covered, and one needs to supplement the data with a perturbative QCD calculation for the region $p_X^0 > p_B^0$. The perturbative expansion in this unphysical region should be well behaved, but the details require further investigation.
More recently, a different approach that in principle allows one to calculate the total decay rate has been proposed [330]. In this method, the integral corresponding to the phase space of $B \to X_c\ell\nu$ is performed directly rather than Cauchy's integral, so that information about the unphysical kinematical region is no longer necessary. A first comparison of the HQE with the lattice calculation at a small $m_b \sim 2.7$ GeV shows good agreement, despite large uncertainties.
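The core idea of such approaches can be illustrated with a toy model: if a smooth kernel $K(E)$ (e.g. a smeared phase-space weight) is approximated by a polynomial in $e^{-E}$, then the spectral sum $\sum_n A_n K(E_n)$ becomes a linear combination of Euclidean correlator values, which are directly accessible on the lattice. The kernel, spectrum, and fitting basis below are illustrative stand-ins, not the actual construction of Ref. [330].

```python
import numpy as np

# Toy spectral data: a few "states" with energies E_n and amplitudes A_n.
E = np.array([0.5, 0.9, 1.4, 2.0])
A = np.array([1.0, 0.6, 0.3, 0.1])

# Euclidean correlator C(t) = sum_n A_n exp(-E_n t), t = 1..T
T = 16
t = np.arange(1, T + 1)
C = (A[None, :] * np.exp(-np.outer(t, E))).sum(axis=1)

# Target: sum_n A_n K(E_n) for a smooth kernel K (here a smeared theta
# function; the physical kernel is more involved).
K = lambda energy: 1.0 / (1.0 + np.exp((energy - 1.2) / 0.2))

# Approximate K(E) ~ sum_k c_k exp(-k E), so that
# sum_n A_n K(E_n) ~ sum_k c_k C(k).
grid = np.linspace(0.2, 2.5, 200)
basis = np.exp(-np.outer(grid, t))      # columns: exp(-k E) on the grid
c, *_ = np.linalg.lstsq(basis, K(grid), rcond=None)

approx = c @ C                          # built from correlator data only
exact = (A * K(E)).sum()                # direct spectral sum (unknown on the lattice)
print(approx, exact)
```

In a real calculation the polynomial approximation (e.g. via Chebyshev polynomials) and its systematic error are controlled explicitly; the toy above only demonstrates why correlator data suffice.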
This method may open an opportunity to compute the inclusive decay rate fully non-perturbatively using lattice QCD, and it can also be applied to various moments of $B \to X_c\ell\nu$ decays, as well as to the more challenging $B \to X_u\ell\nu$ decays.

HQE matrix elements from lattice QCD
The same hadronic parameters that appear in the OPE analysis of inclusive semileptonic B-meson decays also enter the HQE of the pseudoscalar (PS) and vector (V) heavy-light meson masses. Therefore, one can try to determine them from a lattice calculation of the latter at different values of the heavy-quark mass. After the pioneering work of Ref. [278], new unquenched results have been presented recently [248,249]; these papers are mentioned in Sec. 4.1 for their results on quark masses.
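At leading order in $1/m_Q$ the relevant mass relations read (in a common convention; scheme subtleties arise at higher orders):

```latex
M_P = m_Q + \bar{\Lambda} + \frac{\mu_\pi^2 - \mu_G^2}{2 m_Q} + \dots, \qquad
M_V = m_Q + \bar{\Lambda} + \frac{\mu_\pi^2 + \tfrac{1}{3}\mu_G^2}{2 m_Q} + \dots
```

The spin-averaged combination $(M_P + 3M_V)/4$ is thus free of $\mu_G^2$, while the hyperfine splitting isolates it, $\mu_G^2 \simeq \tfrac{3}{4}\,(M_V^2 - M_P^2)$.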
In Ref. [249] a precise lattice computation of PS and V heavy-light meson masses has been performed for heavy-quark masses ranging from the physical charm mass up to four times the physical $b$-quark mass, adopting the gauge configurations generated by the European Twisted Mass Collaboration (ETMC) with $N_f = 2+1+1$ dynamical quarks at three values of the lattice spacing ($a \simeq 0.062, 0.082, 0.089$ fm) and pion masses in the range $M_\pi \simeq 210$-$450$ MeV. The heavy-quark mass is simulated directly on the lattice up to three times the physical charm mass. The interpolation to the physical $b$-quark mass is obtained with the ETMC ratio method [26,27], based on ratios of spin-averaged meson masses computed at nearby heavy-quark masses, and the kinetic scheme is adopted. The extrapolation to the physical pion mass and to the continuum limit determines the leading HQE parameters; the size of two combinations of the matrix elements of dimension-6 operators is also determined, with the full covariance matrix provided in Ref. [249]. Although all the above results refer to the asymptotic limit, namely to infinitely heavy quarks, and differ from the matrix elements extracted in the inclusive fits described above by higher power corrections, they are found to be mutually consistent. In the future, lattice results could be used as additional constraints in the semileptonic fits. Another interesting future application concerns the heavy-quark sum rules for the form factor entering the semileptonic decay $B \to D^*\ell\nu$ at zero recoil; here the nonlocal correlators $\rho_{A,S,\pi\pi,\pi G}$ play an important role; see Ref. [331]. The analysis by the Fermilab, MILC and TUMQCD Collaborations [248], based on [291], employs only PS mesons and the minimal renormalon-subtracted (MRS) heavy-quark mass.
The results are obtained using MILC ensembles with five values of the lattice spacing, ranging from approximately 0.12 fm to 0.03 fm, enabling good control over the continuum extrapolation, and with both physical and unphysical values of the two light and the strange sea-quark masses. This leads to
$$\bar\Lambda_{\mathrm{MRS}} = 0.555(31)\ \mathrm{GeV}\,, \qquad (5.19)$$
while power corrections are controlled by the difference $\mu_\pi^2 - \mu_G^2(m_H)$. Assuming $\mu_G^2(m_b) = 0.35(7)\ \mathrm{GeV}^2$ as a prior, the authors find $\mu_\pi^2 = 0.05(21)\ \mathrm{GeV}^2$. Notice that the definition of $\mu_\pi^2$ used here still has a renormalon ambiguity of order $\Lambda_{\mathrm{QCD}}^2$.

Measurements of inclusive observables in B → X c ν
Several experiments have measured the partial branching fraction of the inclusive decay $B \to X_c\ell\nu$ ($\ell = e, \mu$) as a function of the lower threshold on the lepton momentum ($E_{\mathrm{cut}}$), as well as other inclusive observables in this decay, such as the moments of the lepton energy and of the $X_c$ mass distribution. Available measurements are listed in Table 5.1; note that the most recent experimental result dates from 2010. The Belle collaboration has measured spectra of the lepton energy $E_\ell$ and the hadronic mass $M(X_c)$ in $B \to X_c\ell\nu$ using 152 million $\Upsilon(4S) \to B\bar B$ events [334,335].

Table 5.1: List of available measurements of inclusive moments in $B \to X_c\ell\nu$. We also specify the types of lepton-energy $E_\ell$ and hadronic-mass $M(X_c)$ spectrum moments determined in the respective publications. The zeroth-order moment of the lepton-energy spectrum ($n = 0$) refers to a measurement of the partial branching fraction.

These analyses proceed as follows: first, the decay of one B meson in the event is fully reconstructed in a hadronic mode ($B_{\mathrm{tag}}$). Next, the semileptonic decay of the second B meson in the event ($B_{\mathrm{sig}}$) is identified by searching for a charged lepton amongst the remaining particles. In Ref. [334], the electron momentum spectrum in the B-meson rest frame is measured down to 0.4 GeV. In Ref. [335], all remaining particles in the event, excluding the charged lepton (electron or muon), are combined to reconstruct the hadronic X system, and the $M(X_c)$ spectrum is measured for different lepton-energy thresholds in the B-meson rest frame. The observed spectra are distorted by resolution and acceptance effects and cannot be used directly to obtain the moments. In the Belle analyses, acceptance and finite-resolution effects are corrected by unfolding the observed spectra with the Singular Value Decomposition (SVD) algorithm [339]. Belle measures the energy moments $\langle E_\ell^k \rangle$ for $k = 0, 1, 2, 3, 4$ and minimum lepton energies ranging from 0.4 to 2.0 GeV. Moments of the hadronic mass $\langle M_X^k \rangle$ are measured for $k = 2, 4$ and minimum lepton energies from 0.7 to 1.9 GeV.
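The essence of SVD-based unfolding is to invert the detector response while damping the statistically unstable small-singular-value modes. The toy below uses plain truncated SVD on a Gaussian-smearing response; the actual algorithm of Ref. [339] adds a smoothness regulator, and the response matrix here is a placeholder, not the Belle detector response.

```python
import numpy as np

# Toy "true" spectrum and a Gaussian-smearing response matrix (placeholders).
true = np.array([0., 5., 20., 35., 25., 10., 3.])
n = len(true)
idx = np.arange(n)
R = np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / 0.8) ** 2)
R /= R.sum(axis=0)                     # column-normalized migration matrix

U, s, Vt = np.linalg.svd(R)

def unfold(measured, k):
    """Truncated-SVD unfolding: keep the k largest singular values to
    damp statistically unstable modes."""
    d = U.T @ measured
    return Vt[:k].T @ (d[:k] / s[:k])

# Noiseless closure test: full-rank unfolding recovers the true spectrum.
smeared = R @ true
closure = unfold(smeared, n)

# With Poisson noise, truncation (k < n) stabilizes the result.
rng = np.random.default_rng(1)
noisy = rng.poisson(smeared).astype(float)
estimate = unfold(noisy, 5)
print(closure, estimate)
```

The choice of the truncation (or regularization strength) trades bias against variance, which is why unfolded spectra come with bin-to-bin correlations that must be propagated to the moments.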
BaBar has measured the lepton-energy and hadronic-mass moments in $B \to X_c\ell\nu$ [333,332]. Furthermore, first measurements of combined hadronic mass-and-energy moments of the form $\langle n_X^k \rangle$ with $k = 2, 4, 6$ are presented. They are defined in terms of $n_X^2 = M_X^2 - 2\tilde\Lambda E_X + \tilde\Lambda^2$, where $M_X$ and $E_X$ are the mass and the energy of the X system and the constant $\tilde\Lambda$ is taken to be 0.65 GeV. The most recent analysis is that of the hadronic-mass $M(X_c)$ moments, which are determined using a data sample of 232 million $\Upsilon(4S) \to B\bar B$ events [332]. The experimental method is similar to the Belle analysis discussed above, i.e., one B meson is fully reconstructed in a hadronic mode and a charged lepton with momentum above 0.8 GeV in the B-meson frame identifies the semileptonic decay of the second B. The remaining particles in the event are combined to reconstruct the hadronic system X. The resolution in $M(X_c)$ is improved by a kinematic fit to the whole event, taking into account four-momentum conservation and constraining the missing mass to zero. To derive the true moments from the reconstructed ones, BaBar applies a set of linear corrections, which depend on the charged-particle multiplicity of the X system, the normalized missing mass, $E_{\mathrm{miss}} - p_{\mathrm{miss}}$, and the lepton momentum. In this way, BaBar measures the moments of the hadronic-mass spectrum up to $\langle M_X^6 \rangle$ for minimum lepton energies ranging from 0.8 to 1.9 GeV.
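Given event-level quantities, computing such moments above a lepton-energy threshold is straightforward. The sketch below assumes the combined-moment definition $n_X^2 = M_X^2 - 2\tilde\Lambda E_X + \tilde\Lambda^2$ quoted above; the event samples are random toys, not B-factory data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy events: hadronic mass, hadronic energy, lepton energy (GeV).
N = 10_000
MX = rng.normal(2.1, 0.4, N).clip(min=1.9)
EX = rng.normal(2.4, 0.3, N).clip(min=MX)   # enforce E_X >= M_X
El = rng.uniform(0.0, 2.3, N)

Lam = 0.65   # GeV, the fixed constant in the definition of n_X^2

def mixed_moments(Ecut):
    """<n_X^k> for k = 2, 4, 6 above a lepton-energy threshold."""
    sel = El > Ecut
    n2 = MX[sel] ** 2 - 2 * Lam * EX[sel] + Lam ** 2
    return {k: np.mean(n2 ** (k // 2)) for k in (2, 4, 6)}

print(mixed_moments(0.8))
```

In the real analyses the same machinery is applied after unfolding/linear corrections, and the threshold dependence of the moments is what feeds the HQE fits.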

Determination of |V cb | from inclusive decays
The Heavy Flavour Averaging Group (HFLAV) has used the measurements discussed in the previous section to determine $|V_{cb}|$ from a fit of the HQE predictions to the inclusive observables [66]. Using expressions in the so-called kinetic scheme [340,341,311,314,342] and a precise determination of the c-quark mass, $m_c^{\overline{\mathrm{MS}}}(3\ \mathrm{GeV}) = 0.986 \pm 0.013$ GeV [269], as external input, HFLAV obtains its reference value of $|V_{cb}|$ with a fit $\chi^2$ of 23.0 for 59 degrees of freedom. This analysis uses measurements of the photon-energy moments in $B \to X_s\gamma$ [345,346,347,348] to constrain the b-quark mass, and does not include higher-order corrections of $O(\alpha_s^2)$ and $O(\alpha_s/m_b^2)$. As mentioned above, the semileptonic moments have also been analysed including higher-order power corrections estimated using the LLSA [321]. In this case a kinetic-scheme fit to the experimental data that additionally includes a constraint $m_b^{\mathrm{kin}} = 4.550(42)$ GeV from the PDG (after scheme conversion) leads to a slightly more precise value,
$$|V_{cb}| = (42.00 \pm 0.64) \times 10^{-3}\,. \qquad (5.26)$$

Heavy-to-light inclusive decays

Introduction and theoretical background
Inclusive semileptonic heavy-to-light decays can in principle be analyzed similarly to $B \to X_c\ell\nu$ by using a local OPE. In practice, due to the large charm background, experimental cuts are generally imposed, which reduce the "inclusivity" of the theoretical prediction. In particular, the local OPE does not converge well when the invariant mass of the hadronic system is $M_X \sim M_D$. In this case the decay spectra are described using a "non-local" OPE [349,350,351], in which perturbative coefficients are convoluted with non-perturbative "shape functions" (SFs), the B-meson analogues of parton distribution functions. In this SF region, the perturbative coefficients themselves can be factorized into "hard" and "jet" pieces, where the former has a typical scale of $m_b$ and the latter a typical scale of $\sqrt{m_b \Lambda_{\mathrm{QCD}}}$. In the infinite-mass limit $m_b \to \infty$ there is a single non-perturbative SF. Power corrections start at $1/m_b$ and involve multiple "subleading" SFs [352,353,354,355,356].
One can classify the terms based on their suppression in $1/m_b$ and $\alpha_s$. The perturbative components of the leading-power term are known at $O(\alpha_s^2)$ [357,358,359,360,361,362]. The $1/m_b$ power corrections include terms convoluted with the leading-power SF, whose perturbative parts are known at $O(\alpha_s)$ [363], and terms convoluted with subleading SFs, whose perturbative parts are known at $O(\alpha_s^0)$ [352,353,354]. At this order one can still use subleading functions of one light-cone variable. The inclusion of $O(\alpha_s)$ contributions of subleading SFs requires functions of multiple light-cone momenta, in analogy to higher-twist effects in deep inelastic scattering [364]. Schematically, in the SF region we have the factorization formula
$$d\Gamma \sim H \cdot J \otimes S \;+\; \frac{1}{m_b}\Big[\,\sum_k H \cdot j_k \otimes S \;+\; \sum_i h \cdot J_0 \otimes s_i\Big] + \ldots\,, \qquad (6.1)$$
where H is the leading-power hard function and J the leading-power jet function, both known at $O(\alpha_s^2)$; $J_0$ is the $O(\alpha_s^0)$ part of J; $h = 1 + O(\alpha_s)$; the $s_i$ are given in Refs. [352,353,354] and the $j_k$ in Ref. [363]. The symbol $\otimes$ denotes an integral over the light-cone momentum.
The moments of the leading and subleading SFs are related to the HQE parameters measured in the inclusive semileptonic decays to charm. The relations are known for the leading SF up to at least the fifth moment [365], although the current large uncertainties of the higher HQE parameters [318,321] might limit the use of the higher-moment relations. The formalism of Ref. [365] allows one to construct such relations for the subleading SFs too, but at present only the first three moments are known [355,366]. A detailed knowledge of the SFs is necessary only in the portion of phase space where $P^+ = E_X - |\vec p_X| \sim \Lambda_{\mathrm{QCD}}$; elsewhere only the first few moments of the SFs are relevant and one recovers the local OPE description.
The present |V ub | determination by HFLAV [66] is based on various approaches which are all rooted in (6.1) and differ in the inclusion and treatment of perturbative and nonperturbative contributions, see Ref. [367] for a detailed discussion.
The approach known as BLNP (Bosch-Lange-Neubert-Paz) [368] aimed at a precision extraction of $|V_{ub}|$ from $B \to X_u\ell\nu$ and $B \to X_s\gamma$, based on the state of the art in 2005. It used the first two terms in (6.1), in particular the $O(\alpha_s)$ expression for $H \cdot J \otimes S$ and the $O(\alpha_s^0)$ expression for the $h \cdot J_0 \otimes s_i$ terms. Kinematical corrections that scale as $\alpha_s/m_b$ and $\alpha_s/m_b^2$ [369], as well as $1/m_b^2$ corrections [327,328], for which factorization formulas were not known, were also included by convolution with the leading-power shape function. Using renormalisation-group methods, H is evolved from the "hard" to the "jet" scale to resum Sudakov double logarithms. As for the non-perturbative inputs, the leading-order SF was to be taken from $B \to X_s\gamma$ and the subleading SFs $s_i$ to be modeled using ~700 models. In practice, the current treatment of S by experiments is to use an exponential or Gaussian model constrained by the first two moments of S, obtained from the global fit of HQE parameters in the kinetic scheme [66].
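The moment-constrained exponential model mentioned above can be sketched as follows. The Gamma-type functional form and the target moments are illustrative assumptions (placeholders standing in for the HQE-fit constraints), not the experiments' exact parametrization.

```python
import numpy as np
from math import gamma

# Exponential-type two-parameter Ansatz for the leading shape function,
#   S(w) = b**b / (Gamma(b) * L**b) * w**(b-1) * exp(-b*w/L),
# which has first moment L and variance L**2 / b.
m1, var = 0.55, 0.10          # target moments in GeV, GeV^2 (placeholders)
L, b = m1, m1**2 / var        # fix the two parameters from the two moments

# Numerical cross-check of normalization and first moment.
w = np.linspace(1e-6, 6.0, 200_001)
dw = w[1] - w[0]
S = b**b / (gamma(b) * L**b) * w**(b - 1) * np.exp(-b * w / L)

norm = (S * dw).sum()
mean = (w * S * dw).sum()
print(norm, mean)
```

With only two moments fixed, very different shapes (e.g. Gaussian-type models) remain allowed, which is precisely the functional-form uncertainty discussed later for SIMBA and NNVub.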
Since Ref. [368] appeared, there have been many theoretical advances. Two-loop calculations of H [358,359,360,361] and J [370], as well as the one-loop calculation of the $j_k$ [363], became available. The free-quark differential decay rate was calculated at $O(\alpha_s^2\beta_0)$ [371,372,373,357] and at complete $O(\alpha_s^2)$ [362]. Running effects from the "hard" to the "jet" scale at $O(\alpha_s^2)$ were studied [374]; it was found there that the factorization of the perturbative coefficient into jet and hard functions is not strictly necessary. More recently, three-loop calculations of J [375] and of the partonic S [376] were performed. Implementing these within the BLNP framework would probably also require the calculation of H at three loops, which is not yet available. There were also theoretical advances in the description of non-perturbative effects in $B \to X_s\gamma$ [377,378,379]. In particular, new subleading shape functions unique to $B \to X_s\gamma$ were identified [378], making it more difficult to use data from radiative B decays as input for the extraction of $|V_{ub}|$. These new features are not yet implemented in the BLNP approach. An alternative implementation of the same conceptual framework has been presented in Ref. [380], together with a systematic procedure to account for the uncertainties in the modelling of the leading SF, to be discussed below.
The GGOU (Gambino-Giordano-Ossola-Uraltsev) approach [381] avoids the expansion in $1/m_b$ and the introduction of subleading SFs. The perturbative coefficients are computed at fixed order up to $O(\alpha_s^2\beta_0)$ in the kinetic scheme. The effect of RGE evolution in the SF region and all subleading SFs are absorbed into three $q^2$-dependent SFs $F_i(k_+, q^2)$, whose first moments are fixed by the present semileptonic fits. The uncertainty due to the functional form is estimated by comparing ~100 models.
The emergence of the SF can also be seen in perturbation theory: soft-gluon resummation, together with an infrared prescription, gives rise to a b-quark SF. In the DGE (Dressed Gluon Exponentiation) approach [382,383] this is achieved by an internal resummation of running-coupling corrections in the Sudakov exponent, thus providing a perturbative model for the leading SF. A somewhat similar line is followed in Ref. [384], where the infrared prescription is provided by the so-called analytic QCD coupling.
The so-called weak annihilation (WA) contributions are a source of theoretical uncertainty common to all approaches. In the local OPE they emerge at $O(1/m_b^3)$ but are enhanced by a large Wilson coefficient [385], and they may give rise to a difference between $B^+$ and $B^0$ decays. As they are expected to be much more important in charm decays, the latter constrain them most effectively at present. In particular, the $D^0$, $D^+$ and $D_s$ total semileptonic rates and the electron spectra measured by the CLEO Collaboration [386] have been employed [387,388,389]. From the absence of clear indications of WA effects in semileptonic charm decays, one can conclude that the WA correction to the total rate of $B \to X_u\ell\nu$ must be smaller than about 2% [389]. However, WA is localized in the high-$q^2$ region, so the related uncertainty on $|V_{ub}|$ depends on the kinematical cuts; this is taken into account in the current HFLAV averages. Because the high-$q^2$ tail is particularly sensitive to higher power corrections (and not to the SFs), see for instance Refs. [390,372,381], one might eventually expect the cleanest determinations of $|V_{ub}|$ to come from the low-$q^2$ region only. An upper cut on $q^2$ might therefore be beneficial [368,381].
A few recent experimental analyses [391,392] have relaxed the kinematic cuts, making use of experimental information to subtract the background. As a result, most of the $B \to X_u\ell\nu$ phase space is taken into account and the sensitivity to the SFs is substantially reduced, while a description based on the local OPE sets in. In these cases the quoted theoretical uncertainties are smaller, but one should keep in mind that these analyses still depend on the SF treatment and modelling for the determination of the reconstruction efficiencies, whose uncertainty contributes to the final experimental systematic error. As will be discussed later on, a realistic signal simulation requires the implementation of so-called hybrid models, which map the inclusive predictions of the approaches mentioned above onto individual final hadronic states. The uncertainties related to such hybrid models remain a major issue for the inclusive determination of $|V_{ub}|$.

Status of the experimental results
The most difficult task of the inclusive measurements is the discrimination between the $B \to X_u\ell\nu$ signal and the much more abundant Cabibbo-favoured $B \to X_c\ell\nu$ decays. The signal events are studied in restricted regions of phase space to improve the signal-to-background ratio. Compared to $B \to X_c\ell\nu$ events, the signal tends to have higher lepton momenta $p_\ell$, lower invariant mass $M_X$ of the $X_u$ state, higher $q^2$, and smaller values of the light-cone momentum $P^+ = E_X - |\vec p_X|$, where $E_X$ and $\vec p_X$ are the energy and momentum of the hadronic system $X_u$ in the B-meson rest frame. As explained above, these restrictions complicate the calculation of the expected partial branching fraction, enhancing perturbative and non-perturbative QCD corrections and leading to large theoretical uncertainties in the determination of $|V_{ub}|$.
The measurement of the partial branching fraction ∆B can be obtained with tagged or untagged analyses.

Tagged Analyses
In tagged analyses, the $\Upsilon(4S) \to B\bar B$ events are identified by reconstructing one of the B mesons, $B_{\mathrm{reco}}$, in fully hadronic decay modes. The signal decay of the second B meson ($B_{\mathrm{signal}}$) is identified by the presence of an electron or a muon. The tracks and neutral objects not associated with the $B_{\mathrm{reco}}$ can be uniquely assigned to the signal side, so that the inclusive $X_u$ state can be cleanly reconstructed. The neutrino four-momentum $p_\nu$ can be estimated from the missing momentum $p_{\mathrm{miss}} = p_{e^+e^-} - p_{B_{\mathrm{reco}}} - p_{X_u} - p_\ell$, where $p_{e^+e^-}$ is the initial-state four-momentum. From this, all the kinematic variables of the signal decay can be computed.
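The missing-momentum formula above is plain four-vector arithmetic; a minimal sketch, with all four-vectors as illustrative placeholder values in the $e^+e^-$ center-of-mass frame:

```python
import numpy as np

# Four-vectors as (E, px, py, pz), in GeV; all values are illustrative.
p_ee    = np.array([10.58, 0.00, 0.00, 0.00])   # e+e- initial state (CM frame)
p_Breco = np.array([5.29, 0.10, -0.05, 0.30])   # fully reconstructed tag B
p_Xu    = np.array([2.90, -0.40, 0.55, -0.80])  # hadronic X_u system
p_lep   = np.array([1.60, 0.35, -0.60, 0.45])   # signal lepton

# Missing momentum, identified with the neutrino:
p_miss = p_ee - p_Breco - p_Xu - p_lep

def minkowski_sq(p):
    """Minkowski square E^2 - |p|^2 of a four-vector."""
    return p[0] ** 2 - np.dot(p[1:], p[1:])

q = p_lep + p_miss          # momentum transfer to the lepton pair
print(p_miss, minkowski_sq(q))
```

In a real analysis one would additionally require $p_{\mathrm{miss}}^2 \approx 0$ (consistency with a single massless neutrino) before computing $q^2$, $M_X$ and $P^+$.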
Because the momentum of the signal B meson is determined from that of the $B_{\mathrm{reco}}$, the signal decay products can be computed directly in the B-meson rest frame, resulting in an improved resolution on the accessible observables. Moreover, the constrained kinematics allow for a better separation of signal from background.
The downside of the tagged approach is the low signal efficiency (about 0.3-0.5%), which implies that for kinematic variables like the lepton momentum $p_\ell$ the untagged analyses at the B factories can give competitive or better results. Undetected and poorly reconstructed tracks or photons lead to irreducible background from the dominant $B \to X_c\ell\nu$ decays even in regions of phase space potentially free of such background, and this can degrade the final resolution on the signal kinematics.
Belle published a measurement [391] of the $B \to X_u\ell\nu$ partial branching fraction requiring only $p_\ell > 1$ GeV, which covers about 90% of the signal phase space; the analysis was performed as a fit in $M_X$ and $q^2$. BaBar determined the partial branching fraction in the same $p_\ell > 1$ GeV region, but also in several other restricted regions of phase space [392].

Untagged Analyses
The untagged measurements allow one to collect large samples but are affected by considerable backgrounds. They have access to only a few kinematic variables, namely the lepton momentum $p_\ell$ and $q^2$.

Lepton spectrum: this can be studied inclusively, without requirements on the rest of the event. In this case the momentum spectrum can only be given in the $\Upsilon(4S)$ rest frame.

$q^2$ distribution: this requires the reconstruction of the neutrino four-momentum, which exploits the high hermeticity of the B-factory detectors. The neutrino four-momentum is given by the event missing four-momentum, $p_{\mathrm{miss}} = p_{e^+e^-} - p_{\mathrm{vis}}$, where $p_{e^+e^-}$ is the initial-state four-momentum and $p_{\mathrm{vis}}$ is the total visible four-momentum, determined from all charged tracks from the collision point, identified pairs of charged tracks from $K_S$, $\Lambda$ and $\gamma \to e^+e^-$, and energy deposits in the electromagnetic calorimeter.
The lepton-momentum spectrum is affected by large backgrounds from $B \to X_c\ell\nu$ via the $D\ell\nu$, $D^*\ell\nu$ and $D^{**}\ell\nu$ channels (where $D^{**}$ denotes a mixture of excited charm states and nonresonant $D^{(*)}\,n\pi$ transitions), $D_s K\ell\nu X$, and secondary leptons from D-meson decays, as well as a background from $e^+e^- \to q\bar q$ events, dominated by $c\bar c$, which is assessed from control data samples recorded below the $\Upsilon(4S)$ resonance. Because of the large background, the signal is usually extracted only in the high-momentum region, typically $p_\ell > 1.9$-$2.1$ GeV. Earlier analyses of the lepton endpoint come from CLEO [393], Belle [394] and BaBar [395].
Recently, BaBar published a study [396] of the lepton spectrum using the full data set, exploiting the available knowledge of the rates and form factors of the various exclusive $B \to X_c\ell\nu$ decays that constitute the major backgrounds. The signal is extracted from a fit to the electron momentum spectrum, described as the sum of the predicted signal (a model-dependent shape) and various specific background yields with shapes fixed by Monte Carlo simulation. The fit covers lepton momenta in the $\Upsilon(4S)$ rest frame from 0.8 to 2.7 GeV in 50 MeV bins, except that the data in the interval 2.1 to 2.7 GeV are combined into a single bin to avoid effects from differences in the shape of the theoretically predicted signal spectrum. In a given momentum interval, the excess of events above the sum of the fitted background contributions is taken as the number of signal events.
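A binned fit with fixed template shapes and floating yields reduces to weighted linear least squares. The template shapes, yields, and binning below are generic placeholders, not the BaBar spectra.

```python
import numpy as np

rng = np.random.default_rng(2)
nbins = 20

# Fixed, unit-normalized template shapes: stand-ins for the predicted
# signal spectrum and two MC background shapes (all placeholders).
x = np.linspace(0.8, 2.7, nbins)
sig = np.exp(-0.5 * ((x - 2.2) / 0.25) ** 2); sig /= sig.sum()
bkg1 = np.exp(-1.5 * x);                      bkg1 /= bkg1.sum()
bkg2 = np.full(nbins, 1.0 / nbins)

true_yields = np.array([800., 6000., 3000.])
T = np.column_stack([sig, bkg1, bkg2])
data = rng.poisson(T @ true_yields).astype(float)

# Binned chi-square fit for the yields, template shapes held fixed.
w = 1.0 / np.sqrt(np.maximum(data, 1.0))
yields, *_ = np.linalg.lstsq(T * w[:, None], data * w, rcond=None)
print(yields)
```

The signal-model dependence discussed in the text enters through the shape of `sig`: changing it redistributes events between the signal and the steeply falling backgrounds near the endpoint.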
An important difference of this analysis with respect to the others is that several theoretical models are considered in the extraction of the partial branching fractions. By contrast, all other measurements determine the partial branching fraction using a single model, and the partial rate is then converted into a measurement of $|V_{ub}|$ by taking the corresponding partial rate predicted by the theory calculations.
The extracted inclusive signal branching fractions and the values of $|V_{ub}|$ agree well between GGOU and DGE, although they are about 13% smaller than the average of the other measurements. This difference can be attributed to the shape of the predicted signal spectrum and/or to the shapes of some of the large background contributions above 2 GeV, where the signal fraction is largest. On the other hand, the value of $|V_{ub}|$ based on BLNP agrees well with the other measurements.
A subset of all the measurements of the inclusive $|V_{ub}|$ is reported in Fig. 6.1 for the various frameworks considered; see [47] for more details.

Lessons learned from the past
The measurements based on tagged samples have considerably larger statistical uncertainties. The sample size allows for only a few bins in the 2D fit, but there are regions of phase space (e.g. low $M_X$) where the background fractions are modest; the current sensitivity to the details of the shapes of the signal and background distributions is, however, limited. For untagged measurements, only the high end of the spectrum is sensitive to the signal, and also to the backgrounds near their kinematic endpoints. Both approaches have their pros and cons, given the size of the currently available data sets. The latest BaBar measurement of the lepton spectrum shows a strong dependence of the result on the signal model. The same effect, though less directly evident, was also observed in tagged measurements, through the sensitivity of the signal-yield extraction to the shape-function parameters in the analyses that cover larger portions of phase space.
Semileptonic $B \to X_u\ell\nu$ decays are simulated as a combination of resonant decays with $X_u = \pi, \eta, \eta', \rho, \omega$ and decays to nonresonant hadronic final states $X_u$. The latter are simulated with a continuous invariant-mass spectrum following the theory predictions by De Fazio and Neubert [369], which depend on the SF parameters and $m_b$. The nonresonant and resonant parts are combined such that the sum of their branching fractions equals the measured inclusive $B \to X_u\ell\nu$ branching fraction. The events generated with this model are reweighted to obtain predictions for different SF parameters and different branching fractions of the resonant states. This is usually called a "hybrid model". Belle [391] corrects the hybrid model to match the moments of the $M_X$ and $q^2$ distributions predicted by the GGOU model. The invariant-mass $M_X$ shape used to describe $B \to X_u\ell\nu$ is shown in Fig. 6.2.
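The bookkeeping behind the hybrid combination can be sketched as follows; all branching fractions below are placeholders, not measured values, and a real implementation applies such weights in bins of the decay kinematics rather than globally.

```python
# Hybrid-model bookkeeping: scale the nonresonant component so that
# resonant + nonresonant reproduces the measured inclusive branching
# fraction.  All numbers are illustrative placeholders.
B_incl = 2.1e-3                       # "measured" inclusive B -> Xu l nu
B_res = {"pi": 1.5e-4, "eta": 0.4e-4, "etap": 0.2e-4,
         "rho": 2.9e-4, "omega": 1.2e-4}
B_res_tot = sum(B_res.values())

# Weight applied to generated nonresonant (De Fazio-Neubert) events:
B_nres_gen = 1.8e-3                   # BF assigned at generation (placeholder)
w_nres = (B_incl - B_res_tot) / B_nres_gen
print(w_nres)
```

When the weights are instead computed in bins of ($q^2$, $E_\ell$, $M_X$), the same construction also reshapes the nonresonant spectrum, which is how SF-parameter variations are propagated.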
Another effect not considered so far is the impact of the fragmentation of the generated u quark into final-state hadrons, which is performed using JETSET. The modeling of the final-state multiplicity could affect both the signal efficiency and the signal templates used to separate signal from background.
The measurement of the partial branching fraction separately for neutral and charged B mesons has been used to constrain the WA contribution. Both the tagged approach, in various regions of phase space [392], and the untagged approach, in the high lepton-momentum region [397], have been employed, but they have yielded only weak upper limits, mainly because of the large statistical uncertainties. A more stringent upper limit on WA has been obtained by CLEO, which used a model-dependent approach studying the high-$q^2$ region in $B \to X_u\ell\nu$ decays [398]. Both these bounds are milder than those estimated from D and $D_s$ semileptonic decays in Refs. [388,389], mentioned above.
In the tagged measurements, the suppression of the $b \to c$ background is performed by vetoing events in which a $K^+$ or a $K^0_S$ is detected in the hadronic X system. This causes a loss of signal when an $s\bar s$ pair is produced (usually called $s\bar s$-popping). The fraction of these events is about 12% of the nonresonant component and is fixed by the fragmentation parameters of JETSET/PYTHIA. The uncertainty on this fraction is assumed to be about 30%, so for analyses that aim to cover larger regions of phase space with higher statistics, this could become an irreducible source of systematic uncertainty. This is another point that should be improved in future analyses at Belle II.

Fitting distributions: SIMBA and NNVub
As we discussed above, SF modelling is an important source of theoretical uncertainty in the study of B → X u lν and particularly in the extraction of |V ub | from these decays. While the first few moments of the SFs must satisfy OPE constraints, direct experimental information on the SFs is somewhat limited. Indeed, the measured photon spectrum in B → X s γ is sensitive to a different set of subleading SFs. However, differential distributions in B → X u lν, such as the lepton-energy and invariant-mass distributions, depend directly on all the SFs and can therefore be used to constrain them. Conversely, they can be used to validate SF models and approaches where the SFs are calculated, such as DGE. The high luminosity expected at Belle II makes the measurement of these differential distributions possible.
The extraction of |V ub | performed by HFLAV in the BLNP and GGOU frameworks assumes a set of two-parameter functional forms, and it is unclear to what extent the chosen set is representative of the available functional space, and whether the estimated uncertainty really reflects the limited knowledge of the SFs. This point was first emphasized in Ref. [380], where a different strategy was proposed, based on the expansion of the leading SF in a basis of orthogonal functions, whose coefficients are fitted to the B → X s γ spectrum, and on the modeling of the subleading SFs. The SIMBA project [399] aims at performing a global fit to B → X s γ and B → X u lν spectra, to simultaneously determine |V ub |, m b, the leading SF, as well as the Wilson coefficient of radiative b decays. Additional external constraints, such as those from B → X c lν, can also be employed.
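The basis-expansion idea can be illustrated with a toy least-squares fit of expansion coefficients to a binned spectrum. The envelope and Legendre basis below are stand-ins for the orthonormal basis of Ref. [380], and all names and numbers are illustrative assumptions, not the SIMBA setup.

```python
import numpy as np

def basis(x, n_max):
    """Columns are envelope * Legendre_n(2x - 1), n = 0..n_max.
    The exponential envelope mimics the fall-off of the shape function."""
    env = np.exp(-x)
    return np.column_stack(
        [env * np.polynomial.legendre.Legendre.basis(n)(2 * x - 1)
         for n in range(n_max + 1)])

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 40)

# pseudo-data: a "spectrum" built from three known coefficients plus noise
true_c = np.array([1.0, 0.4, -0.1])
spectrum = basis(x, 2) @ true_c + rng.normal(0, 0.01, x.size)

# least-squares fit of the expansion coefficients to the spectrum
B = basis(x, 2)
c_fit, *_ = np.linalg.lstsq(B, spectrum, rcond=None)
```

In a realistic fit the coefficients would be determined together with |V ub | and m b from the measured B → X s γ and B → X u lν spectra, and the truncation of the basis would itself be a source of (quantifiable) uncertainty.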
Another strategy, called NNVub and explored in Ref. [400] for the GGOU approach, employs artificial neural networks as unbiased interpolants for the SFs, similarly to what the NNPDF Collaboration does in fitting Parton Distribution Functions [401]. This method allows for unbiased estimates of the SF functional-form uncertainty, and for a straightforward implementation of new experimental data, including B → X s γ and B → X u lν spectra and other inputs on quark masses and OPE matrix elements. Both SIMBA and NNVub appear well placed to analyse the Belle II data in a model-independent and efficient way. Measurements of fully differential spectra in the kinematic variables, e.g. q 2, M 2 X, p ± X, E l, and separate measurements for charged and neutral B-meson decays are required to allow for an improved extraction of |V ub | in the long term. Future measurements should therefore provide these unfolded spectra independently of theoretical assumptions.
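A minimal numpy sketch of the neural-network-interpolant idea is given below: a one-hidden-layer network whose squared output enforces positivity of the shape function, trained on a few pseudo-data points. The architecture, toy target, and crude finite-difference training loop are illustrative assumptions, far simpler than the actual NNVub machinery.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, W1, b1, W2, b2):
    """One hidden tanh layer; the squared output keeps the SF positive."""
    h = np.tanh(np.outer(x, W1) + b1)   # shape (n_points, n_hidden)
    return (h @ W2 + b2) ** 2

# pseudo-data: a few points of a toy shape function
x = np.linspace(0.05, 1.0, 12)
y = 9.0 * x * np.exp(-3.0 * x)

# small network: 8 hidden units
params = [rng.normal(0, 1, 8),      # W1
          rng.normal(0, 1, 8),      # b1
          rng.normal(0, 0.5, 8),    # W2
          np.array([0.1])]          # b2

def loss(ps):
    return np.mean((mlp(x, *ps) - y) ** 2)

# crude training by central finite-difference gradient descent
lr, eps = 0.01, 1e-6
loss_start = loss(params)
for _ in range(1500):
    for p in params:
        g = np.zeros_like(p)
        for i in range(p.size):
            p[i] += eps; up = loss(params)
            p[i] -= 2 * eps; dn = loss(params)
            p[i] += eps
            g[i] = (up - dn) / (2 * eps)
        p -= lr * g
```

The point of the construction is that no functional form is imposed on the SF beyond smoothness and positivity; in the real fit the moment constraints from the OPE would enter as penalty terms or Lagrange multipliers in the loss.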
Combining both B → X u lν and B → X s γ, as well as constraints on the SF moments from B → X c lν, in a global fit can simultaneously provide the inclusive |V ub | and the leading-SF functional form, with uncertainties that follow from those of the included experimental measurements. Fig. 6.3 shows the projections for a global fit in the SIMBA framework with two projected single-differential spectra of M X and E l for B → X u lν and an E γ spectrum for B → X s γ from 1 ab −1 and 5 ab −1 Belle II data sets [107].
The new tagging algorithm developed for Belle II performs better than the neural-network method used in previous Belle publications, with about 3 times higher efficiency [229]. With a larger data set, the systematic uncertainties associated with reconstruction efficiencies, fake leptons, and knowledge of the continuum background are expected to improve for this measurement. The projections for inclusive |V ub | are summarized in Table 6.1.

Outlook
We have summarized our main results in Sec. 1. In this final Section, we would like to look at the prospects of our field over the next five years. What can we expect for semileptonic b decays at the two main experiments? What kind of progress can we reasonably anticipate in lattice QCD and continuum calculations?
Belle II started data taking with a complete detector in March 2019 and recorded about 10/fb in its first year of operation. The β * y = 1 mm optics, commissioned in the autumn 2019 run, were also used in spring 2020 and allowed a peak instantaneous luminosity of 1.94 × 10 34 /cm 2 /s to be reached, already approaching the Belle record of 2.2 × 10 34 /cm 2 /s. By June 3rd, 2020, Belle II had recorded an integrated luminosity of 55/fb on the Υ (4S) resonance and expects to reach close to 100/fb by the end of the run in July 2020. Assuming the luminosity evolves as planned, Belle II will accumulate a data set equivalent to the Belle luminosity of about 1/ab by the end of 2021. In 2022 the experiment will enter a long shutdown to install the second pixel-detector layer and replace the silicon photomultipliers in the barrel particle-identification device. Data taking will resume in 2023, and by 2025 Belle II expects to have recorded a data sample exceeding 10/ab. Given these luminosity prospects, competitive Belle II results for semileptonic B decays can be expected in the years to follow. In addition, a three times more efficient hadronic tag and better low-momentum tracking of the slow pion from the D * decay will further benefit semileptonic analyses in particular. This will allow a fresh look at the CKM matrix element magnitudes |V cb | and |V ub |, and improvements of measurements that are still statistically limited, such as R(D) and R(D * ).
The LHCb experiment has shown great capabilities with the results on R(D * ), |V ub |/|V cb | with Λ b decays, and |V cb | with B s decays. These measurements are based on the data collected in 2011 and 2012 (Run 1), corresponding to 3/fb of integrated luminosity. The data collected in 2015-2018 (Run 2) at a pp collision energy of √ s = 13 TeV correspond to about 6/fb of integrated luminosity. Various analyses of the full dataset are ongoing. Most of the measurements are limited by systematic uncertainties, among which the largest are generally due to external inputs from other experiments and to the limited available samples of Monte Carlo simulations. Nevertheless, the large available dataset is going to be fully exploited.
The LHCb experiment is at present undergoing a major upgrade of the detector. The construction and commissioning should end in 2021, when the LHC resumes operation. The upgrade will allow data to be collected at higher instantaneous luminosity: about five pp collisions per bunch crossing are foreseen, compared with about one to two pp collisions in Run 1 and Run 2. To handle the higher occupancy expected in the detector, besides the improvements in the various subdetectors, a full-software L0 trigger will be employed. The software L0 trigger will add flexibility to the data taking, allowing lower thresholds for muon and hadron trigger decisions and thereby enlarging the physics capabilities. The analyses of semileptonic decays with taus and electrons will benefit from the lower trigger thresholds in terms of signal efficiencies. With this upgraded detector, LHCb plans to integrate a luminosity of 23/fb by 2024, and to collect a total sample of 50/fb by 2028-2029, after the LHC has switched to higher luminosity.
By now, lattice QCD is the tool of choice for the form factors describing semileptonic decays of b-hadrons. At present, the most urgent need is the q 2 (or, equivalently, w) dependence of the form factors of B → D * lν, both to see how the form-factor slopes affect the |V cb | determination and to solidify the SM prediction of R(D * ). A few such calculations are underway. Given the success of LHCb with Λ b semileptonic decays, updates of the baryon form factors are desirable, and we encourage other lattice-QCD practitioners to turn their attention to these decays. Another topic for future research is rigorous calculations with a ρ or φ vector meson in the final state.
The leptonic decay constants are now known at the subpercent level of uncertainty, and efforts to extend these methods to semileptonic form factors are underway. In general, near-term lattice-QCD calculations of this precision will be based on the MILC Collaboration's HISQ ensembles, which, among all lattice data sets, span the largest range of lattice spacings at physical light-quark masses and with high statistics. We consider it important that other ensemble sets be extended to a similar range, to enable further (sub)percent-level calculations with different fermion-discretization systematics.
The inclusive determination of |V cb | will benefit from the calculation of new higher-order effects, such as the O(α 3 s ) contributions to the total width, and from a reassessment of QED effects. However, the next frontier is the integration with lattice-QCD calculations to improve the determination of HQE matrix elements, and eventually the calculation of the inclusive rates directly on the lattice. As far as inclusive charmless decays are concerned, the general theoretical framework appears solid, but it needs to be updated in the light of recent higher-order calculations and should be extensively validated with the experimental data that will become available at Belle II. In particular, measurements of the lepton-energy and hadronic invariant-mass distributions will provide important information on the Shape Functions, while the q 2 distribution will allow us to constrain, and possibly avoid, the effect of Weak Annihilation. The wealth of data expected at Belle II, close cooperation between theorists and experimentalists, and hopefully new lattice data should help resolve various open issues, so that we might eventually expect the uncertainty on inclusive |V ub | to drop below 3%.