Past, present, and future of precision determinations of the QCD coupling from lattice QCD

Non-perturbative scale-dependent renormalization problems are ubiquitous in lattice QCD as they enter many relevant phenomenological applications. They require solving non-perturbatively the renormalization group equations for the QCD parameters and matrix elements of interest in order to relate their non-perturbative determinations at low energy to their high-energy counterparts needed for phenomenology. Bridging the large energy separation between the hadronic and perturbative regimes of QCD, however, is a notoriously difficult task. In this contribution we focus on the case of the QCD coupling. We critically address the common challenges that state-of-the-art lattice determinations have to face in order to be significantly improved. In addition, we review a novel strategy that has been recently put forward in order to solve this non-perturbative renormalization problem and discuss its implications for future precision determinations. The new ideas exploit the decoupling of heavy quarks to match N_f-flavor QCD and the pure Yang–Mills theory. Through this matching, the computation of the non-perturbative running of the coupling in QCD can be shifted to the pure-gauge theory, which is computationally much easier to solve.
We shall present results for the determination of the Λ-parameter of N_f = 3-flavor QCD, where this strategy has been applied and proven successful. The results demonstrate that these techniques have the potential to unlock unprecedented precision determinations of the QCD coupling from the lattice. The ideas are moreover quite general and can be considered for solving other non-perturbative renormalization problems.


Introduction
Renormalization is a fundamental step in order to extract meaningful, phenomenologically relevant results from lattice QCD calculations. For the lattice theorist it is natural to renormalize the bare parameters of the lattice QCD Lagrangian and the composite operators of interest in terms of some hadronic renormalization schemes (cf. Refs. [1,2]). In order to make the determinations accessible to phenomenologists, however, it is often necessary to translate the results obtained in the chosen hadronic schemes to results in the (perturbative) schemes and at the scales commonly considered in phenomenology. In practice, this requires the determination of the non-perturbative renormalization group (RG) running of the renormalized QCD parameters and operators in some convenient intermediate scheme, from the hadronic scales where they were originally defined, up to some high-energy scale, where perturbation theory eventually applies and a matching to phenomenological schemes can be performed.
Over the last decade or so, lattice QCD has entered a precision era for an increasingly large set of quantities (cf. Ref. [3]). Renormalization is a relevant part of many of these computations, where it can significantly impact the quality of the final results. Hence, as we are forced to become more aware of all possible sources of uncertainties in the determination of the bare lattice quantities, the same care must be devoted to their renormalization. In particular, like any other lattice calculation, besides the statistical errors the determinations of renormalized parameters and operators have their systematics to deal with, i.e. discretization effects, finite-volume effects, quark-mass effects, and, when a matching to phenomenological schemes is necessary, also perturbative uncertainties. It is therefore important that the development in strategies to compute (bare) lattice quantities is accompanied by new ideas to improve their renormalization, so as to guarantee a precise and robust end result.
An extreme example of this situation, so to speak, is the determination of the QCD parameters. In this case we can say that the problem is entirely a renormalization problem, which, however, has very important phenomenological applications. On the lattice, the QCD coupling and quark masses are renormalized in terms of hadronic masses and decay constants, while in phenomenology the QCD parameters are needed at energies of the order of a hundred GeV and above. Given the very high energies involved, one might thus think that lattice QCD is not the right tool for providing this information; it would appear more natural to obtain these parameters directly from high-energy quantities, rather than from the hadronic spectrum. As we shall recall later in this contribution, this is actually not the case, as lattice techniques offer an ideal framework for these computations.
For the last 10-15 years, lattice QCD has consistently delivered some of the most precise determinations for the QCD parameters, in particular for the QCD coupling α_s (see e.g. Refs. [3-5]).¹ The current world average for the QCD coupling evaluated for reference at the Z-boson pole mass M_Z is α_s(M_Z) = 0.1179(10) [5], and has a precision of about 0.8%. The lattice determinations alone give α_s(M_Z) = 0.1182(8) [3], and are the most precise subcategory of those considered by the PDG. Besides the high precision of the individual state-of-the-art determinations, it is important to emphasize also their overall consistency. This is a rather non-trivial result considering the fact that, even though all lattice determinations share some common systematics, these are probed quite differently by considering very different strategies [3]. It is fair to say that such a variety of approaches within a PDG subcategory is in fact unique [5].
Despite the tremendous efforts on and off the lattice, however, the current uncertainty on α_s is still large. It is one of the largest sources of uncertainty in several key processes, particularly so within the Higgs sector, and it is expected to be a limiting factor in many high-precision studies at future colliders (see e.g. Refs. [4,6]). An uncertainty on α_s(M_Z) comfortably below the percent level is desired for precision applications. For these reasons, there are plans for future phenomenological determinations of α_s(M_Z) aiming at reaching the extremely competitive accuracy of 0.2% using high-luminosity high-energy data (see e.g. Refs. [7-10]). The lattice community needs to meet the challenge.

¹ We here adopt the common notation α_s(μ) ≡ α_MS^(5)(μ), where α_MS^(5)(μ) is the QCD coupling of the 5-flavor theory renormalized in the MS-scheme (see e.g. Ref. [5]). Note that for ease of notation we often omit to write explicitly the μ-dependence of the coupling.
Reducing the current uncertainties on lattice determinations of α_s by such an important factor is not easy. Similarly to several phenomenological determinations, most lattice determinations of α_s are currently limited by systematic uncertainties related to the use of perturbation theory at relatively low scales [3]. The issue is that reaching high energies on the lattice requires simulating small lattice spacings, and this is in general difficult without a dedicated strategy.
A way around this has been known for a long time and is based on the concepts of finite-volume renormalization schemes and finite-size scaling (or step-scaling) techniques [11,12]. These methods have recently been applied to obtain one of the most precise determinations of α_s [13]. The key feature of the approach is that it allows for reaching high energy with all systematics under control. This puts the lattice determinations in the privileged position of being able to reach high energies fully non-perturbatively, in a clean and controlled way. The systematics due to the application of perturbation theory, in particular, can be entirely avoided at the expense of the statistical errors accumulated in running from low up to high-enough energy. The net advantage of this situation is that, differently from systematic uncertainties, statistical errors can be straightforwardly reduced. Nonetheless, a reduction of the current uncertainties on α_s by an important factor remains a computationally expensive task, even employing a step-scaling strategy (cf. Ref. [13]).
In this contribution we want to review the recent proposal made in Ref. [14], which may allow for such an error reduction in a substantially cheaper way. The key feature of this proposal is that one can replace the computation of the RG running of the coupling in N_f-flavor QCD with that in the pure-gauge theory. It is clear that, regardless of the chosen strategy, this allows for a substantial simplification of the problem.
In short, the idea is built on three main steps and exploits the decoupling of heavy quarks in a couple of ways. In the first step, heavy-quark decoupling is used to connect a low-energy scale μ_dec in N_f-flavor QCD with the corresponding scale in the pure-gauge theory. This is achieved through the computation of a massive renormalized coupling in an (unphysical) theory with N_f heavy quarks of mass M ≫ μ_dec. In a second step, by computing the non-perturbative RG running of a convenient coupling in the pure Yang–Mills theory, one obtains the pure-gauge Λ-parameter in units of μ_dec, i.e. Λ^(0)/μ_dec. In the third and final step, perturbative decoupling relations are used to convert the pure-gauge Λ-parameter into that of N_f-flavor QCD; one then applies the decoupling relations at the charm and/or bottom quark-mass scale to estimate Λ_MS^(N_f=5) and from it α_s(M_Z). The strategy has already been proven successful in the determination of Λ_MS^(N_f=3) [14]. The ideas presented in this reference are however general and may be applied to solve other non-perturbative scale-dependent renormalization problems that face analogous challenges.
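The three steps just described can be summarized schematically as follows. This is our own shorthand rendering; the precise definitions of the massive coupling and of the matching functions are those of Ref. [14] and Sect. 4.

```latex
% Schematic chain of matchings in the decoupling strategy (shorthand;
% see Sect. 4 and Ref. [14] for the precise definitions).
\begin{align*}
  \text{(1)}\quad & \bar{g}^{2\,({N_{\rm f}})}(\mu_{\rm dec}, M)
      \;\approx\; \bar{g}^{2\,(0)}(\mu_{\rm dec})\,,
      \qquad M \gg \mu_{\rm dec}\,, \\[2pt]
  \text{(2)}\quad & \frac{\Lambda^{(0)}}{\mu_{\rm dec}}
      \quad \text{from the non-perturbative pure-gauge running}\,, \\[2pt]
  \text{(3)}\quad & \Lambda^{(0)}
      \;\longrightarrow\; \Lambda^{({N_{\rm f}}=3)}_{\overline{\rm MS}}
      \;\longrightarrow\; \Lambda^{({N_{\rm f}}=5)}_{\overline{\rm MS}}
      \;\longrightarrow\; \alpha_s(M_Z)\,.
\end{align*}
```

The arrows in step (3) denote the perturbative decoupling relations applied at the heavy-quark mass scales.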
The outline of this contribution is the following. We begin in Sect. 2 by recalling the main challenges in solving scale-dependent renormalization problems on the lattice. The emphasis will be on the determination of the QCD coupling. Besides introducing important concepts for later sections, the presentation gives us the opportunity to discuss some recent interesting determinations. These clearly illustrate the difficulties that state-of-the-art computations of the coupling have to face in order to be significantly improved.
In Sect. 3, we introduce the theory of heavy-quark decoupling and present the results of several recent studies that systematically assessed the size of non-perturbative effects induced by heavy quarks. More precisely, the accuracy of using perturbative decoupling relations to match the Λ-parameters of different N_f-flavor theories is investigated, as well as the corrections due to the heavy quarks in low-energy quantities. These studies not only set the foundation for the renormalization strategy based on decoupling, but also establish the precision at which α_s can be obtained from results in N_f = 3 QCD.
In Sect. 4, the application of heavy-quark decoupling to the determination of the N_f-flavor QCD coupling is described in detail and the results of Ref. [14] for Λ_MS^(N_f=3) are presented. We conclude in Sect. 5 with some comments on the future prospects for α_s determinations in view of this new strategy.
We care to note that it is not the aim of the present contribution to discuss in detail the many different lattice approaches that are currently considered to determine the QCD coupling. In particular, we do not provide a complete account of all recent determinations. For such a discussion, we refer the interested reader to the comprehensive work of FLAG [3] and to other interesting recent reviews (see e.g. Refs. [15,16]).

Precision determinations: the case of α_s
Before presenting the renormalization ideas based on decoupling, we believe it is important to put these into context. The aim of these strategies, in fact, is not simply that of providing alternative ways to solve non-perturbative scale-dependent renormalization problems. The goal is to develop a framework that will allow us to improve significantly our control over the current most relevant uncertainties. In this section, we thus want to recall what the main challenges are in solving this class of problems and which are the common approaches that are used to tackle them. Many of the concepts and observations that will be presented are for the most part well known. However, these issues are now more pressing than ever given the high precision that lattice QCD calculations have achieved, in particular in the determination of the QCD parameters. For this reason, we think it is important to address them also here. This gives us the opportunity to discuss some new insight that has been gathered from several recent high-precision studies, as well as to introduce key concepts for later sections.
As anticipated, the discussion will focus on the case of the QCD coupling α_s. This allows us to analyze in simpler terms the main challenges that we need to face in high-precision non-perturbative determinations of RG runnings, while capturing all the relevant issues. Moreover, lattice determinations of the strong coupling are a distinct case of competitive calculations which have the potential to deliver unprecedentedly precise results for a very relevant and fundamental quantity. Making significant progress over the present state-of-the-art determinations by mere brute force, however, is extremely demanding from the computational point of view. It is therefore mandatory to develop new strategies with the clear aim of improving our control over all sources of uncertainty.

Determinations of α_s on and off the lattice
As already mentioned, for more than a decade lattice QCD has been providing the Particle Physics community with the most accurate determinations of α_s (see Refs. [3-5]). The reason behind this is that, as we shall recall, lattice determinations have some important advantages over their phenomenological counterparts (see e.g. Refs. [2,6,15] for some reviews).
Any determination of α_s, whether on the lattice or not, relies on the following basic strategy. One considers a short-distance observable O(q) which depends on a characteristic energy scale q. In the limit q → ∞, this observable is compared with its theoretical prediction, O_th(q), in terms of a perturbative expansion,²

O_th(q) = k_1(s) α_s(μ) + k_2(s) α_s^2(μ) + ... + k_N(s) α_s^N(μ) + O(α_s^{N+1}) + O((Λ/q)^p),   μ = q/s.   (1)

The functions k_n(s) appearing in this equation are the coefficient functions defining the perturbative series. They are known up to some order N and depend on the scale factor, s > 0, that relates the renormalization scale μ at which α_s is extracted to the scale q. The basic difference between phenomenological and lattice determinations of α_s is the choice of the observable O(q).

² Note that in general the coupling α_s to be considered here should be the QCD coupling of the relevant N_f-flavor theory, i.e. α_s(μ) ≡ α_MS^(N_f)(μ), from which α_MS^(5)(M_Z) can eventually be extracted (cf. e.g. Ref. [3] and Sect. 3.1.2). At this stage, however, we prefer to keep the discussion simple. We shall return to this point later in detail.
Requiring O_th(q) = O(q) at some finite q clearly fixes the value of α_s(μ) only up to some error. This error comes from several different sources, many of which are common to all types of determinations. First of all, there is the precision δO(q) to which the observable O(q) is known. This of course depends on the relevant statistical and systematic uncertainties associated with the determination of O(q). Secondly, there is the effect of the truncation of the perturbative series at a given order, i.e. the size of the O(α_s^{N+1}) terms in Eq. (1). In addition to these there are contaminations from "non-perturbative contributions". These are represented in Eq. (1) by power corrections to the perturbative expansion of O((Λ/q)^p), where p > 0 and Λ is some characteristic non-perturbative scale of QCD.³ Thus, regardless of the chosen strategy, an accurate determination of α_s needs to have, at least, these general sources of error under control. Note that for the most part these are systematic in nature.
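To make the role of the truncation order concrete, the matching condition O_th(q) = O(q) can be inverted numerically for α_s. The following sketch uses invented coefficients k_n and an invented "exact" value of the observable, purely to illustrate how the extracted coupling drifts with the truncation order N; it does not correspond to any real scheme.

```python
def invert_series(O_val, k, lo=0.0, hi=1.0):
    """Solve sum_n k[n-1] * alpha^n = O_val for alpha by bisection."""
    f = lambda a: sum(c * a ** (n + 1) for n, c in enumerate(k)) - O_val
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Invented coefficients k_1..k_4 of a toy observable, and a toy "true"
# coupling used to generate the exact value O(q):
k = [1.0, 0.5, 0.3, 0.2]
alpha_true = 0.2
O_val = sum(c * alpha_true ** (n + 1) for n, c in enumerate(k))

# Extract alpha_s from the series truncated after 2 and after 3 terms:
alpha_N2 = invert_series(O_val, k[:2])
alpha_N3 = invert_series(O_val, k[:3])
# The residual error shrinks as more perturbative orders are included.
assert abs(alpha_N3 - alpha_true) < abs(alpha_N2 - alpha_true)
```

At larger values of the coupling (lower q) the drift between truncation orders grows, which is the quantitative content of the truncation-error discussion above.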
Lattice determinations of α_s are in principle favored in several ways in succeeding at this task. Firstly, on the lattice the QCD parameters are first renormalized in terms of some precisely measured hadronic quantities (e.g. hadron masses, decay constants, etc.), for which experimental uncertainties typically contribute only marginally to the end result. Once these are fixed, one has great freedom in choosing an observable O(q) as the gateway to extract α_s. One can therefore devise convenient observables which have small statistical and systematic uncertainties; in particular, there is no need for these quantities to be accessible in experiments. Phenomenological determinations of α_s, instead, rely on experimental data for the observable O(q). Typically, when q becomes large, and therefore many sources of systematic uncertainty in Eq. (1) become small, the experimental errors δO(q) become large. It is thus difficult in general to find a single experimental quantity O(q) that allows one to accurately follow its scale dependence over a wide range of q-values. On the lattice, on the other hand, if carefully chosen, O(q) can be computed precisely from low up to very high energy scales. This gives a unique handle on controlling non-perturbative corrections and the contribution of the missing perturbative orders in Eq. (1).
Another advantage for the lattice theorist is that O is defined within QCD alone. Consequently, the theoretical description O_th of Eq. (1) does not need to include contributions besides those from QCD. In addition, no modeling of hadronization is needed when comparing the observable O with its perturbative prediction O_th. The situation is again different for phenomenological determinations. In these cases, other Standard Model (SM) contributions may be needed in order to extract α_s, and some modeling of hadronization is necessary. Depending on the process, these are known only up to some accuracy and typically depend on the value of other SM parameters as well. The precision one can aim at for α_s can therefore be limited by these factors.⁴ Of course, lattice QCD determinations are not entirely exempt from this issue. In this case, however, the problem of disentangling the QCD contributions from "the rest" is confined to the hadronic quantities entering the renormalization of the theory, rather than to the observable O itself. As mentioned earlier, the uncertainties on the hadronic quantities have, at present, limited impact on the results for α_s.

³ Our knowledge of the form of non-perturbative effects is in fact rather limited. In addition, strictly speaking, perturbative and non-perturbative contributions cannot really be separated due to the asymptotic nature of the series (see e.g. Ref. [17]). However, as the discussion is at this point qualitative, we simply adopt the simplistic representation of non-perturbative effects as power corrections to the perturbative expansion.
All current lattice QCD determinations of α_s, on the other hand, have to deal with the fact that their calculations are performed with an unphysical number of quark flavors. The bottom and typically also the charm quark are in fact not included in the simulations. This brings up the issue of having to account for their missing contributions. We shall leave this very important discussion aside for the moment and come back to it in detail in the following (see Sect. 3).

The challenge of reaching high energy
Having for the most part presented the pros that lattice determinations of α s in principle have, we now address what the main difficulties are in practice.
Like any lattice QCD observable, O(q) is affected, besides statistical uncertainties, by several systematics that need to be controlled. These include general ones, i.e. discretization errors, finite-volume effects, an unphysical number of quarks, and quark-mass effects, as well as others which depend on the specific choice of O and set-up that we make (e.g. excited-state contaminations, finite-temperature effects, Gribov copies, topology freezing, etc.). Finite quark-mass effects are typically not a relevant issue in determinations of the QCD coupling [3]. As anticipated, we leave the problem of having an unphysical quark content for later. We also ignore observable-specific issues. Here we focus instead on discretization and finite-volume effects. The combination of having the two under control, in fact, can severely restrict the accessible range of q-values if the renormalization strategy is not carefully chosen.
In particular, if one insists on resolving within the same lattice simulation both the hadronic energy scales relevant for the renormalization of the lattice theory and the energy scale q at which α_s is extracted, then one is necessarily limited by the simultaneous constraints

L^{-1} ≪ Λ,   Λ ≪ q ≪ a^{-1}.   (2)

The first inequality expresses the fact that finite-volume effects must be under control in hadronic quantities. The infrared cutoff set by the finite extent L of the lattice must be smaller than the typical non-perturbative scales of QCD, denoted here by Λ. The second inequality, instead, encodes two separate requirements. On the one hand, the scale q must be much lower than the ultraviolet cutoff set by the lattice spacing a. In this way, discretization errors in O(q) are kept under control, and O(q) can be obtained in the continuum limit with controlled errors.⁵ On the other hand, q needs to be much larger than the scales Λ. Only in this situation can perturbation theory reliably be applied to extract α_s by comparing O(q) with Eq. (1). The typical lattices that are simulated today have sizes L/a ≲ 100. Taking for definiteness m_π L = 4, with m_π = 140 MeV, the first condition in Eq. (2) implies that for such ensembles q ≪ a^{-1} ≈ 3.5 GeV. Of course this is a crude estimate, and somewhat higher energies might be achieved by compromising at different stages of the calculation (e.g. considering heavier pion masses or smaller volumes in order to reach smaller values of a, or taking aq ∼ 1). Nonetheless, it is clear that, although convenient in practice, using lattices devised for studying hadronic physics to compute short-distance quantities necessarily poses severe challenges on the feasibility of the approach, as the accessible energies q are quite limited.
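The crude estimate just quoted follows from elementary unit conversion (with ħc ≈ 0.1973 GeV·fm):

```python
# Scale window for a large-volume ensemble with L/a = 100 and m_pi * L = 4.
hbar_c = 0.1973              # GeV * fm
m_pi = 0.140                 # GeV
L = (4.0 / m_pi) * hbar_c    # box size in fm      -> ~5.64 fm
a = L / 100.0                # lattice spacing, fm -> ~0.056 fm
a_inv = hbar_c / a           # UV cutoff 1/a, GeV  -> 3.5 GeV
print(f"L = {L:.2f} fm, a = {a:.3f} fm, 1/a = {a_inv:.1f} GeV")
```

Note that with L/a fixed, a^{-1} = (L/a) m_π/(m_π L) = 25 m_π, so the cutoff is tied directly to the pion mass and the box size in pion units.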
As a concrete example of this situation, we can point to the recent determination of α_s of Ref. [19]. For their computation the authors employ state-of-the-art large-volume simulations by the CLS initiative [20,21]. The two-point functions of both the axial and vector currents at short distances are used to extract α_s. Given the range of lattice spacings at their disposal, a ≈ 0.04 - 0.08 fm, the accessible energies for which continuum limit extrapolations could be performed in a controlled way are limited to q ≲ 2 GeV.
As we shall see below with explicit examples, a limited range of low q-values unavoidably implies a limited attainable precision for α_s, as perturbative truncation errors become a major issue.

⁵ Here and in the following we assume that O(q) is a properly renormalized lattice quantity with a well-defined continuum limit and free from infrared divergences. Alternative strategies extract α_s from bare lattice quantities by taking q ∝ a^{-1} and expressing their expansion in lattice perturbation theory in terms of α_s (see e.g. Ref. [18]). In these cases, discretization errors and the scale dependence of O(q) are entangled. Addressing systematic uncertainties becomes more subtle and requires a separate discussion (see Ref. [3]).

The finite volume is your friend
In order to reach high precision we must tailor the lattice simulations to the problem at hand. Going back to Eq. (2), there is in fact no reason to try to satisfy simultaneously the two conditions as these belong to separate problems. On the one hand, there is the determination of the low-energy quantities used for the hadronic renormalization of the lattice theory, while, on the other hand, there is the determination of O(q) for large q.
A more natural strategy is therefore to split the problem over several sets of lattice simulations, each one covering a different range of energy scales. In this way, systematic effects can be more easily kept under control, as the relevant conditions will be milder for each individual set of simulations. The way to effectively achieve this in practice is to employ what are known as finite-volume renormalization schemes [11]. In this case, the scale q at which the observable O(q) is evaluated is identified with the inverse linear extent of the finite volume, i.e. q = L^{-1}. One may say that, in fact, the observable O considered is a finite-volume effect. With this choice, one computes the non-perturbative RG running of O(L^{-1}) by simulating lattices with different physical extent L. This strategy goes under the name of finite-size (or step-)scaling [11,12] (see Ref. [2] for a recent account).
More precisely, having fixed the bare QCD parameters through some hadronic quantities, one computes O(L_had^{-1}) at a low-energy scale q_had ≡ L_had^{-1} ≈ Λ, and determines L_had in physical units. This is achieved by computing m_had L_had = lim_{a→0} (a m_had)(L_had/a) = O(1), where m_had is a known low-energy scale. No large scale separations are involved in this step, and at common bare parameters one can satisfy the conditions L_had/a ≫ 1 and a m_had ≪ 1, as well as having finite-volume effects in m_had under control.
Secondly, one computes in the continuum limit the change in O(L^{-1}) as L is varied by a known factor, say, L → L/2. This step is repeated a number of times n, going from each new L to the next one. Once the energy scale reached, q_n = 2^n/L_had, is large compared to the hadronic scales, perturbation theory can safely be applied to extract α_s(μ_PT = q_n/s) from the value of O(q_n) (cf. Eq. (1)).
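The recursion can be mimicked with a toy continuum calculation. The sketch below evolves a coupling u = ḡ²(1/L) through successive halvings of L by integrating the universal 2-loop β-function for N_f = 3; the starting value u = 2.0 is invented, so the numbers are purely illustrative of the step-scaling logic, not of any real scheme.

```python
import math

NF = 3
b0 = (11 - 2 * NF / 3) / (4 * math.pi) ** 2
b1 = (102 - 38 * NF / 3) / (4 * math.pi) ** 4

def beta(g):
    """Universal 2-loop beta function (toy stand-in for the full beta_O)."""
    return -g**3 * (b0 + b1 * g**2)

def halve_box(u, s=2.0, nsteps=10_000):
    """g^2 at scale s/L, given g^2 = u at scale 1/L (Euler integration)."""
    g, dt = math.sqrt(u), math.log(s) / nsteps
    for _ in range(nsteps):
        g += beta(g) * dt
    return g * g

u = 2.0  # invented hadronic-scale coupling, u = gbar^2(1/L_had)
for n in range(1, 9):
    u = halve_box(u)  # L -> L/2, i.e. q -> 2q
    print(f"q = 2^{n}/L_had: gbar^2 = {u:.4f}")
# after 8 halvings the coupling is markedly weaker (asymptotic freedom)
```

In the real calculation each step is a continuum-extrapolated Monte Carlo measurement of the step-scaling function, rather than an integration of a known β-function; the sketch only illustrates how a factor 2^8 in scale is accumulated.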
It is important to emphasize that, if the observable is carefully chosen, the only source of systematic errors affecting the determination of O(L^{-1}) is discretization effects. In particular, no matter what the scale q = L^{-1} is, discretization effects are under control once L^{-1} = q ≪ a^{-1}, i.e. L/a ≫ 1. This approach elegantly exploits the freedom that we have in lattice QCD in choosing the observable O(q) to completely circumvent the limitations of a necessarily finite volume. In particular, within this strategy the computational power is entirely invested into controlling a single systematic uncertainty.
In principle there is quite some freedom in choosing the finite-volume observable O(q). For the strategy to be successful in practice, however, it should have a number of desirable properties (see e.g. Ref. [2]). First of all, it should be easily and precisely measurable in Monte Carlo simulations. It should be computable in perturbation theory to a sufficiently high loop order, in order to guarantee good precision in extracting α_s through Eq. (1). It should preferably be gauge invariant, in order to avoid issues with Gribov copies once studied non-perturbatively, and also be directly measurable at zero quark masses (see below). Finally, it should have, in general, small lattice artifacts. In fact, it is not straightforward to find a single observable that has all these nice features over the whole range of q-values. This, however, is not a real issue, as different observables may be accurately combined in order to cover all the relevant range of scales q. We shall see explicit examples of good complementary observables in forthcoming sections.

Λ-parameters, β-functions, and all that
As we leave the general discussion to enter a more quantitative analysis of the challenges of extracting α_s, we find it convenient to reformulate the problem in slightly different terms. First of all, it is useful to associate with the observable O a renormalized coupling ḡ_O through the relation

α_O(μ) ≡ ḡ_O^2(μ)/(4π) ≡ O(q)/k_1,   μ = q,   (3)

where k_1 ≡ k_1(1) is the leading coefficient appearing in the perturbative expansion, Eq. (1), at s = 1. This simple procedure defines a non-perturbative, regularization-independent QCD coupling, normalized such that α_O(μ) = α_s(μ) + O(α_s^2). In terms of these couplings, the extraction of α_s is interpreted as the perturbative matching between α_O and α_s, where different observables O define different renormalization schemes. The common normalization allows us to compare the value of the couplings in different schemes as they approach the high-energy limit. This is useful in assessing the regime of applicability of the perturbative matching, as the latter can be characterized by the value that α_O should reach. We recall at this point that the non-perturbative couplings studied within lattice QCD are implicitly defined for a given number of quark flavors, N_f. A more proper notation for the couplings is therefore α_O^(N_f)(μ) ≡ [ḡ_O^(N_f)(μ)]^2/(4π), which emphasizes the fact that ḡ_O(μ) must be considered as a coupling within the N_f-flavor theory. For ease of notation, however, in this section we will often take the liberty of dropping the superscripts N_f and leave these understood, unless they are needed to avoid any confusion or for later reference.
With this in mind, a particularly convenient class of schemes to consider are mass-independent (or simply massless) renormalization schemes [22]. These are defined in terms of observables O evaluated at vanishing quark masses. As a result, the RG running of these couplings is decoupled from that of the renormalized quark masses, and therefore simpler to solve. On the other hand, differently from the physical case of massive schemes, quarks do not decouple in the RG running of massless schemes [23,24]. Hence, in the N_f-flavor theory the latter is characterized by a fixed number of active flavors corresponding to N_f.⁶

To the coupling ḡ_O^(N_f) in a given mass-independent scheme we can associate a quark-mass-independent Λ-parameter, Λ_O^(N_f), defined as (N_f-superscripts understood)

Λ_O = μ [b_0 ḡ_O^2(μ)]^{-b_1/(2b_0^2)} e^{-1/(2b_0 ḡ_O^2(μ))} × exp{ -∫_0^{ḡ_O(μ)} dx [ 1/β_O(x) + 1/(b_0 x^3) - b_1/(b_0^2 x) ] }.   (4)

Through this relation, the value of the coupling ḡ_O(μ) at any scale μ is known once Λ_O and the β-function are known. The β-function describes the dependence of the coupling on the renormalization scale,

β_O(ḡ_O(μ)) ≡ μ (dḡ_O(μ)/dμ).   (5)

In perturbation theory, it has an expansion which at the N-loop order reads

β_O(ḡ) ≈ -ḡ^3 (b_0 + b_1 ḡ^2 + ... + b_{N-1} ḡ^{2N-2}).   (6)

The coefficients b_0(N_f), b_1(N_f) are universal and shared by all mass-independent renormalization schemes,

b_0 = (1/(4π)^2) (11 - (2/3) N_f),   b_1 = (1/(4π)^4) (102 - (38/3) N_f).   (7)

The scheme dependence only enters through the higher-order coefficients, b_{n≥2}.

A first compelling property of the Λ-parameters is that, differently from the case of the couplings, their scheme dependence is in fact trivial. Leaving the N_f-dependence implicit and taking the Λ-parameter in the MS-scheme, Λ_MS, as reference, any other Λ-parameter Λ_O is exactly related to Λ_MS by

Λ_O = Λ_MS e^{-c_1(1)/(2b_0)}.   (8)

In this equation c_1(1) is the 1-loop coefficient of the perturbative matching relation between the corresponding couplings, which at M-loop order reads

ḡ_MS^2(sμ) = ḡ_O^2(μ) [ 1 + c_1(s) ḡ_O^2(μ) + ... + c_M(s) ḡ_O^{2M}(μ) ].   (9)

It is clear that the Λ-parameters are non-perturbatively defined once the corresponding couplings and β-functions are.⁷ In addition, they are exact solutions of the Callan-Symanzik equations [25-27], and therefore RG invariants, i.e. dΛ_O/dμ = 0.
From a non-perturbative perspective, this makes the Λ-parameters natural quantities to compute, as they provide reference scales for both the low- and high-energy regimes of QCD. We finally stress that, like their corresponding couplings, the Λ-parameters are defined for a given number of quark flavors, N_f (cf. Eq. (4)). Each N_f-flavor theory therefore has its own Λ-parameters.
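The RG invariance of the Λ-parameter, Eq. (4), is easy to verify numerically. In the sketch below we treat the 2-loop β-function as if it were the exact β_O of some scheme; for this choice the subtracted integrand of Eq. (4) simplifies to -b_1² x/(b_0²(b_0 + b_1 x²)) and the integral can be done in closed form, exp(-∫) = (1 + b_1 ḡ²/b_0)^{b_1/(2b_0²)}. All numerical inputs (scales, starting coupling) are invented.

```python
import math

NF = 3
b0 = (11 - 2 * NF / 3) / (4 * math.pi) ** 2
b1 = (102 - 38 * NF / 3) / (4 * math.pi) ** 4
p = b1 / (2 * b0**2)

def beta(g):
    """2-loop beta function, treated here as the *exact* beta_O."""
    return -g**3 * (b0 + b1 * g**2)

def lambda_over_mu(g):
    """Eq. (4) for this beta function; the subtracted integral is done
    analytically: exp(-integral) = (1 + b1 g^2 / b0)**p."""
    return ((b0 * g**2) ** (-p) * math.exp(-1 / (2 * b0 * g**2))
            * (1 + b1 * g**2 / b0) ** p)

def run(g, t, nsteps=2000):
    """Integrate dg/dln(mu) = beta(g) over an interval t in ln(mu) (RK4)."""
    dt = t / nsteps
    for _ in range(nsteps):
        k1 = beta(g)
        k2 = beta(g + 0.5 * dt * k1)
        k3 = beta(g + 0.5 * dt * k2)
        k4 = beta(g + dt * k3)
        g += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return g

mu1, g1 = 1.0, 1.2            # invented scale (arbitrary units) and coupling
mu2 = 10.0
g2 = run(g1, math.log(mu2 / mu1))
lam1 = mu1 * lambda_over_mu(g1)
lam2 = mu2 * lambda_over_mu(g2)
print(lam1, lam2)             # identical up to the integration error
assert abs(lam1 - lam2) / lam1 < 1e-8
```

The same cancellation between the μ-prefactor and the ḡ_O(μ)-dependence holds for any exact β-function, which is precisely the statement dΛ_O/dμ = 0.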

Systematic uncertainties in extracting Λ_MS
Once the overall energy scale of the given N_f-flavor theory has been set (the issue of fixing the scale of a theory with an unphysical value of N_f will be addressed in Sect. 3), the non-perturbative value of the coupling, ḡ_PT ≡ ḡ_O(μ_PT), in any scheme O, at some high-energy scale μ_PT, allows for estimating Λ_MS^(N_f). Below we present two commonly employed strategies.

Strategy 1. Using Eq. (4) we first get an asymptotic estimate for Λ_O/μ_PT as

Λ_O/μ_PT ≈ [b_0 ḡ_PT^2]^(−b_1/(2b_0^2)) e^(−1/(2b_0 ḡ_PT^2)) × exp{ −∫_0^{ḡ_PT} dx [ 1/β_O^PT(x) + 1/(b_0 x^3) − b_1/(b_0^2 x) ] } ,   (10)

where β_O^PT is the N-loop approximation to the β-function, Eq. (6). The exact relation, Eq. (8), then gives Λ_MS/μ_PT, and the knowledge of μ_PT in physical units finally gives Λ_MS. (Interestingly, Eq. (8) provides in this way an indirect non-perturbative definition of Λ_MS through the Λ-parameter of any non-perturbative scheme.) Besides the systematic uncertainty due to the truncation of the β-function, the estimate for Λ_MS/μ_PT is in principle also affected by "non-perturbative corrections", if the coupling ḡ_PT is not small enough for perturbation theory to be in the regime of applicability. We shall come back to this issue shortly.

Strategy 2.
A second possibility to estimate Λ_MS is to first obtain from the non-perturbative value of ḡ_PT an estimate for ḡ_MS(sμ_PT) using the M-loop relation, Eq. (9). Given this, we can estimate Λ_MS/(sμ_PT) using Eq. (4) and the perturbative expression for the β-function, Eq. (6), in the MS-scheme. As before, the knowledge of μ_PT in physical units then gives us Λ_MS.
In order to establish the perturbative uncertainties associated with this second approach, we first recall that the β-function in the MS-scheme is currently known to 5-loop order [28–30]. Truncating β_MS at this order introduces in Λ_MS/(sμ_PT) a systematic uncertainty of O(ḡ_PT^8) (cf. Eqs. (10) and (9)). Secondly, using Eq. (4) it is easy to show that the perturbative matching at M-loop order between ḡ_PT and ḡ_MS(sμ_PT) translates into a systematic uncertainty in Λ_MS/μ_PT of O(ḡ_PT^{2M}). For all schemes ḡ_O used in lattice QCD determinations of Λ_MS we have that M ≤ 3 (see e.g. Ref. [3]). The systematic error coming from the matching between couplings is therefore parametrically larger than the one from the truncation of β_MS. Moreover, note that for all these schemes we have M = N − 1, where N is the loop order at which the corresponding perturbative β-function, β_O^PT, is known (cf. Eq. (6)). This means that, for the schemes ḡ_O commonly used, Strategy 1. and Strategy 2. result in the same parametric uncertainties of O(ḡ_PT^{2N−2}). Clearly, although parametrically the same, the actual size of the corrections might be different. In this second strategy, in particular, when matching the couplings we have the freedom to choose the parameter s (cf. Eq. (9)). Different choices can result in different perturbative corrections to Λ_MS/μ_PT.
Devising different strategies like the ones above and comparing their outcome can help us assess the systematic uncertainties in Λ_MS coming from the use of perturbation theory at μ_PT. Ideally, one computes Λ_MS/μ_ref, with μ_ref some convenient non-perturbative reference scale, for a range of values of ḡ_PT, and checks that the different determinations converge to a common value as ḡ_PT → 0, with the expected O(ḡ_PT^{2N−2}) corrections. If this is the case, one may be confident that the asymptotic regime of the perturbative expansion is reached, no "non-perturbative corrections" are relevant, and one can therefore take as final estimate for Λ_MS/μ_ref the result extrapolated to ḡ_PT → 0. Clearly, the program above is ambitious. The running of the coupling ḡ_PT at high energies is only logarithmic in μ_PT/Λ_O. Reducing the size of the perturbative truncation errors by a given factor hence requires an exponentially larger change in the energy scale μ_PT. In order to accurately estimate the systematic uncertainties coming from the use of perturbation theory one therefore needs to cover, non-perturbatively, a wide range of energies, reaching up to very high scales.
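The exponential cost of the logarithmic running can be quantified already at 1-loop order. The short sketch below (illustrative only: 1-loop running, N_f = 3, and round coupling values chosen for the example) shows that halving α_PT, i.e. reducing an O(α_PT^2) truncation error by a factor 4, requires increasing the energy scale by more than two orders of magnitude:

```python
import math

# 1-loop running: alpha(mu) = 1 / (8*pi*b0 * ln(mu/Lambda)), here for Nf = 3
NF = 3
B0 = (11 - 2 * NF / 3) / (4 * math.pi) ** 2

def mu_over_lambda(alpha):
    """Scale mu (in units of Lambda) at which the 1-loop coupling equals alpha."""
    return math.exp(1.0 / (8 * math.pi * B0 * alpha))

if __name__ == "__main__":
    for alpha in (0.24, 0.12, 0.06):
        print(f"alpha_PT = {alpha:.2f}:  mu/Lambda ~ {mu_over_lambda(alpha):.3e}")
```

With these numbers, going from α_PT ≈ 0.12 to α_PT ≈ 0.06 costs a factor of roughly 300 in μ, which is why finite-volume step-scaling techniques are needed to bridge such ranges non-perturbatively.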
If the chosen strategy to determine Λ_MS does not allow for this and the accessible range of ḡ_PT is quite limited, one might be tempted to estimate the uncertainties due to the application of perturbation theory in more simplistic ways. For example, one might opt for simply adding to the final result an uncertainty δΛ_MS ∝ ḡ_PT^k, where the power k is estimated in some way from the available perturbative information. Alternatively, one might estimate these uncertainties based on the spread of the results obtained at the smallest available ḡ_PT from different strategies (e.g. Strategy 1. vs Strategy 2.). Given the asymptotic nature of the perturbative expansion, however, these practices cannot be considered reliable in general. From the very definition of an asymptotic series, the only reliable way to assess its accuracy is to compare the series with the full function as ḡ_PT → 0. In order to do so, the coupling ḡ_PT must be varied by a sensible amount, reaching down to small values.
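The unreliability of judging an asymptotic expansion at a fixed value of the expansion parameter can be visualized on a toy model. The sketch below is purely illustrative (the Stieltjes function is a standard textbook example of an asymptotic series, unrelated to the QCD expansions discussed here): at fixed λ, the error of the truncated series first decreases and then grows with the order.

```python
import math

def f_exact(lam, t_max=60.0, n=6000):
    """Stieltjes function f(lam) = int_0^inf e^(-t)/(1 + lam*t) dt (Simpson rule)."""
    h = t_max / n
    g = lambda t: math.exp(-t) / (1.0 + lam * t)
    s = g(0.0) + g(t_max)
    s += sum((4 if i % 2 else 2) * g(i * h) for i in range(1, n))
    return s * h / 3.0

def partial_sum(lam, order):
    """Truncated asymptotic expansion sum_{n=0}^{order} (-1)^n n! lam^n."""
    total, term = 0.0, 1.0            # term = n! * lam^n, built up iteratively
    for n in range(order + 1):
        if n > 0:
            term *= n * lam
        total += term if n % 2 == 0 else -term
    return total

if __name__ == "__main__":
    lam = 0.1
    f = f_exact(lam)
    for order in (2, 5, 9, 15, 20):
        err = abs(partial_sum(lam, order) - f)
        print(f"order N = {order:2d}:  |series - f| = {err:.2e}")
```

In QCD the role of λ is played by ḡ_PT^2, which is why comparing the series with the full (non-perturbative) result as ḡ_PT → 0 is the only safe diagnostic.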
For the same reasons, it is not advisable to estimate the size of "non-perturbative corrections" using some model assumption, or to use some model to extrapolate the results for Λ_MS/μ_ref to ḡ_PT → 0. Our knowledge of the form of non-perturbative effects is rather limited, and the separation between what is perturbative and non-perturbative is all but well defined. Hence, it is always debatable whether any model that tries to capture non-analytic terms in the coupling is really adequate to describe the data within the given accuracy. Moreover, if the coupling ḡ_PT cannot be varied much, it is difficult to really distinguish, e.g. a power correction, from some higher-order term in ḡ_PT, when statistical errors and other uncertainties are present. A more reliable practice is thus to avoid regions of large ḡ_PT where the O(ḡ_PT^{2N−2}) behavior has not clearly set in. We recall in this respect that a series Σ_n c_n λ^n is said to be asymptotic to the function f(λ) if |f(λ) − Σ_{n=0}^{N} c_n λ^n| = O(λ^{N+1}) as λ → 0, ∀N. Note in particular that at fixed λ, larger N does not necessarily imply a better approximation of the series to the function.

(From the caption of Fig. 1 [32]: when the extraction is performed at high-enough energies, α_PT ≈ 0.1, all schemes nicely agree.)

2.3 The accuracy of perturbation theory at high energy

In this section we want to review some recent determinations of Λ_MS which paid particular attention to the issue of the accuracy of perturbation theory in extracting Λ_MS [31–35]. As we shall see, the concerns exposed in the previous sections are legitimate once the precision goals become competitive. A robust analysis of perturbative truncation errors is essential to reach high accuracy.

The high-energy regime of N_f = 3 QCD
We begin with the high-energy studies of Refs. [31,32] in N_f = 3 QCD. In Fig. 1 their results for Λ_MS^(3)/μ_ref, with μ_ref a convenient non-perturbative reference scale, are shown as obtained from a family of finite-volume Schrödinger functional (SF) schemes [36–38]. Different schemes within the family are identified by different values of the parameter ν. The precise definition of the schemes is not important here and can be found in Refs. [31,32,39]. In order to estimate Λ_MS^(3)/μ_ref, Strategy 1. of Sect. 2.2.3 is applied. Looking at Fig. 1, we see that in all cases the results are well described by an O(α_PT^2) dependence over the whole range of investigated couplings. This is compatible with the expectation from the known leading non-analytic term in the expansion, which is expected to be quite small at these couplings, i.e. O(e^{−2.6/α}) [32]. However, we clearly see a substantial difference in the size of the O(α_PT^2) corrections depending on the SF_ν-scheme that is considered. While one can find cases where these corrections are quite small, in others they are clearly sizable with respect to the accuracy of the final estimate quoted in Ref. [32], which corresponds to the gray band in the plot.
As the value of the coupling at which perturbation theory is applied becomes smaller, any significant difference among the different determinations of Λ_MS^(3)/μ_ref must disappear. The final estimate given above, for instance, corresponds to μ_PT = 2^4 μ_ref ≈ 70 GeV from the ν = 0 scheme (cf. ν = 0 (fit C) in Fig. 1) [32].
The first important message from this study is that it is in fact impossible to predict the actual size of perturbative truncation errors only from the available perturbative information. To reliably assess these errors, perturbation theory must be tested against non-perturbative data over a wide range of energy scales. From the study we presented, in particular, we conclude that in order to be able to quote in full confidence a competitive precision of 2–3% on Λ_MS^(3)/μ_ref, one must reach non-perturbatively α_PT ≈ 0.1. At these couplings perturbative truncation errors are fully under control, and the error on Λ_MS^(3)/μ_ref is entirely dominated by the statistical uncertainties coming from the non-perturbative running of the coupling.
It is now instructive to look at the result of the analysis of the same data according to Strategy 2. of Sect. 2.2.3. The corresponding estimates for Λ_MS^(3)/μ_ref are shown in Fig. 2 for the two cases, ν = 0, −0.5 [32]. The different determinations in each plot are obtained by varying the parameter s entering the perturbative matching between the MS-coupling and the SF_ν-couplings (cf. Eq. (9)). The values of s considered vary by about a factor 2–3 around the value of fastest apparent convergence, s*. (For comparison, the 3-loop coefficient of β_MS for N_f = 3 is b_2 ≈ 0.324 (4π)^{−3}.) In phenomenological determinations of the QCD coupling the spread of the results obtained by varying s around some "optimal" value, typically by a factor 2 or so, is commonly used to get an estimate of perturbative truncation errors (see e.g. Ref. [5]). Our intention is to test how this approach works in the present case.
As one can see from Fig. 2, for all choices of s the data show the expected O(α_PT^2) scaling. The slope of the data, however, can vary significantly depending on the choice of the parameter s. As expected, the significance of these differences is reduced as α_PT → 0, and the different determinations come together once α_PT ≲ 0.1.
What is clear from the results of Fig. 2 is that the procedure of assigning a systematic error based on the spread of the results with s at some fixed coupling is not always reliable. In the case of the ν = 0 scheme (left panel), the spread in the results between, say, s*/2 and 2s*, encloses the final estimate (gray band in the plot) for all coupling values in the range. If this uncertainty was added to the statistical errors, it would give a conservative estimate for the total uncertainty. On the other hand, in the case of the ν = −0.5 scheme (right panel), the procedure significantly underestimates the actual size of the O(α_PT^2) corrections. Again, α_PT ≈ 0.1 has to be reached for the perturbative uncertainties to be small compared to the statistical ones.
From this second analysis we reaffirm the conclusion that it is very difficult to reliably estimate perturbative truncation errors if the coupling α_PT cannot be varied much, and if it is confined to values significantly larger than α_PT ≈ 0.1.

The case of the pure Yang-Mills theory
Finite-volume schemes. The second example that we consider is taken from the recent study of Ref. [35] in the pure Yang–Mills (YM) theory. This work presents an independent analysis of the results from a previous study [33], using novel techniques. Before entering the discussion, we care to stress that the case of the pure YM theory is not just a curious example. As we shall see in the following section, through the strategy of renormalization by decoupling, precise results for Λ_MS^(5) can be obtained from accurate knowledge of Λ_MS^(0). The quantity considered in Fig. 3 is Λ_MS^(0)/μ_had, where μ_had is a hadronic scale defined in terms of the gradient flow time t_0 [47], while α_PT is once again the value of the relevant coupling at the renormalization scale μ_PT where perturbation theory is applied. Similarly to the case of N_f = 3 QCD discussed above, different schemes and strategies have been considered in order to extract Λ_MS^(0)/μ_had and study the perturbative truncation errors. In all cases, the non-perturbative RG running from μ_had up to μ_PT is obtained using a finite-volume scheme based on the YM gradient flow (GF) [47–50]. The interested reader can find more details about the scheme in the main Refs. [33,35]. Once μ_PT is reached, Λ_MS^(0)/μ_had is estimated in a number of ways.

(We recall that the scale factor s* of fastest apparent convergence is defined by s* = e^{c_1(1)/(2b_0)} (cf. Eq. (8)), where b_0 is the 1-loop coefficient of the β-function, Eq. (7), and c_1(1) is the 1-loop coefficient of the matching relation between the MS-coupling ḡ_MS^2(μ) and the coupling of interest ḡ_O^2(μ) (cf. Eq. (9)). With the choice s = s*, the k = 1 term in Eq. (9) vanishes.)

(From the caption of Fig. 2 [32]: the left (right) panel uses the SF_ν-scheme with ν = 0 (ν = −0.5), cf. text.)

(From the caption of Fig. 3 [35]: Λ_MS^(0)/μ_had as a function of α_PT. The empty symbols represent the data at the given α_PT, while the filled symbols are extrapolations to α_PT → 0 (shifted for better readability) of the different approaches to the perturbative matching (see text for more details). The gray band is the result of Ref. [33], while the data point labeled FlowQCD is the result of Ref. [46].)
For the case labeled (GF) in the plot, Strategy 1. of Sect. 2.2.3 is employed, using the 3-loop β-function in the GF-scheme of choice [51]. In the other cases, the GF-scheme is first non-perturbatively matched to the SF_ν=0-scheme introduced in the previous section. Perturbation theory is then applied either following Strategy 1. based on the SF_ν=0-scheme (label (SF) in the plot), or following Strategy 2. and matching the SF_ν=0- and MS-couplings (labels MS(s = 1, 2) in the figure). In the latter case, two values of the s-parameter, s = 1, 2, are studied; note that s* ≈ 2 in this case. In all cases, the leading parametric uncertainties in Λ_MS^(0)/μ_had from the truncation of the perturbative expansion are of O(α_PT^2). Going back to Fig. 3, we see how two out of the four strategies ((SF) and MS(s = 2)) give results which are essentially independent of α_PT over the whole range of couplings considered for the extraction of Λ_MS^(0)/μ_had. Note that in going from the largest to the smallest couplings the energy scale varies by a factor 32, while α_PT changes by about a factor 2. On the other hand, the other two types of determinations ((GF) and MS(s = 1)) show a significant α_PT dependence, roughly compatible with the expected O(α_PT^2) scaling. What is remarkable is that even considering values of α_PT ≈ 0.08 the different strategies give estimates for Λ_MS^(0)/μ_had which vary by up to ≈ 3%. This is about twice as large as the statistical errors on the points (cf. Table 3 of Ref. [35]). In the case of the (GF) and MS(s = 1) determinations, it is clear that a trustworthy estimate for Λ_MS^(0)/μ_had can be quoted only by extrapolating the results to α_PT → 0. In general, given the precision one can reach, perturbative truncation errors are large also in the pure YM theory.
The results above show us once again the importance of an explicit non-perturbative computation of the running of the coupling over a significant range of values, reaching down to small couplings, in order to assess the actual size of the perturbative corrections. We join the authors of Ref. [35] in concluding that only by studying non-perturbatively the limit α_PT → 0 can one avoid the dangerous game of estimating perturbative uncertainties at some finite (potentially large) value of α_PT. Without studying this limit, the determinations can easily be affected by perturbative truncation errors, even at surprisingly small values of the coupling.
Large-volume schemes. A precise determination of the Λ-parameter in the pure YM theory is certainly much easier from the computational point of view than in QCD. However, as we have seen in the previous example, it remains a non-trivial challenge to control perturbative truncation errors once a 1–2% precision in Λ_MS is reached.
The disagreement among some recent determinations of Λ_MS^(0) is a clear signal that these difficulties should not be underestimated. The issue is well illustrated in Fig. 3, where the very precise results labeled FlowQCD from Ref. [46] show a clear tension with the determinations of Refs. [33,35]. We recall that the former result is based on extracting Λ_MS^(0)/μ_had from the plaquette expectation value calculated in large-volume simulations. Bare lattice perturbation theory at couplings α_PT ≈ 0.095–0.12 is used, with parametric uncertainties of O(α_PT^2). We refer the reader to the given reference for the details. Here we just note that all the above determinations satisfy the most stringent criteria set by FLAG (cf. Ref. [3]). Yet, one or more of these results must have underestimated uncertainties.
Other groups have recently engaged in precision determinations of Λ_MS^(0), also with the intent of resolving the disagreement above. The recent results of Ref. [34], based on the qq-coupling, α_qq, defined from the static potential [52], are particularly interesting in this respect. We report them in Fig. 4. In the case of α_qq the corresponding β-function, β_qq, is known up to 4-loop order, and some partial information is available also at 5 loops (see e.g. Ref. [55]). Determinations of Λ_MS^(0)/μ_had from α_qq are hence expected to have asymptotically O(α_qq^3) corrections. In the plot, α_qq refers to the coupling at which perturbation theory is used according to Strategy 1. of Sect. 2.2.3, i.e. it corresponds to α_PT in our previous discussions.
Despite the accurate perturbative knowledge, there are a few challenges when using the qq-scheme for precision determinations of Λ_MS^(0) [34]. The most relevant ones for our discussion are the following. First of all, the scheme is conventionally defined in an infinite space-time volume. In order to measure the coupling at small lattice spacings one therefore needs large lattice sizes, so as to keep the physical extent of the lattice large. In the computation of Ref. [34] lattice spacings down to a ≈ 0.01 fm are reached while keeping the lattice extent L ≈ 2 fm. This means simulating lattices with up to L/a ≈ 200. Secondly, the perturbative expansion of α_qq displays some infrared divergences starting at 3-loop order in β_qq. When resummed, these give rise to terms of the form α_qq^n log(α_qq)^m, n ≥ 3, 1 ≤ m ≤ n − 2, which are enhanced at small couplings (cf. Ref. [34]). As one can see from Fig. 4, at the larger values of the coupling the results for Λ_MS^(0)/μ_had have good precision, but perturbative uncertainties are large. This can be seen by looking at the difference between the 3-loop and 4-loop results (or, analogously, between the 4-loop and 4-loop + 5-loop log-terms results). At these large couplings, the perturbative expansion seems to have reached its limit of applicability. This severely limits the precision one can aim at for Λ_MS^(0)/μ_had if one is restricted to this range of couplings.

(A similar earlier study on the challenges of extracting Λ_MS in both N_f = 2 QCD and the pure-gauge theory using the qq-coupling can be found in Ref. [53]. For a recent application of this scheme to the computation of α_s, and a detailed account of the most recent results and developments, see Refs. [16,54].)

(From the caption of Fig. 4: Λ_MS^(0) in units of √(8t_0), determined at various values of α_qq ≡ α_PT [34]. The results using different orders of perturbation theory for β_qq are shown, as well as a comparison with the determinations of Refs. [33] and [46]. The reference numbers in the plot are those of Ref. [34], from which the plot is taken.)
For couplings α_qq ≲ 0.21, the different orders of perturbation theory seem to start converging. On the other hand, the errors on the data become large. This is due to the difficulties in extrapolating the results to the continuum limit [34]. In fact, the errors are too large to draw definite conclusions in the relevant limit α_qq → 0.
All in all, we see from this last example that a precise determination of Λ_MS^(0) is a challenge. Finite-volume renormalization schemes allow us to cover a wide range of couplings, reaching down to rather small values. Yet, keeping control over perturbative truncation errors requires care. When using large-volume schemes the situation is further complicated by the need to control continuum limit extrapolations at the smallest (most relevant) couplings. Small couplings require small lattice spacings, which in turn require large lattice sizes in order to keep the physical volume large. As a result, even in the computationally simpler case of the pure YM theory, one might have precise data confined for the most part to a region of couplings too large for perturbative uncertainties to be fully under control, while at smaller couplings the data are not precise enough for a competitive determination of Λ.

The tricky business of continuum extrapolations
Having discussed the difficulties of estimating perturbative truncation errors in precision determinations of the Λ-parameters, we now want to touch on the issue of systematic uncertainties related to the continuum limit extrapolations of the relevant couplings.
To give the reader a feeling of the pitfalls that these continuum extrapolations can conceal, we first consider the N_f = 3 results of Ref. [50]. The relevant quantity to look at in this case is the step-scaling function (SSF) of the finite-volume GF-coupling ḡ_GF^2(μ) with SF boundary conditions (see Refs. [49,50] and Eq. (53) for the definition of this scheme). We recall that the SSF, σ(u), encodes the change in the coupling u when the renormalization scale is varied by a factor of 2 [11]. Specifically, having set μ = L^{-1},

σ(u) = ḡ_GF^2((2L)^{-1}) |_{ḡ_GF^2(L^{-1}) = u} .

It is clear from its definition that the SSF is a discrete version of the β-function. The latter can in fact be obtained once the SSF is known in a range of couplings (cf. Ref. [50]). On the lattice, the SSF is determined by extrapolating to the continuum limit its discrete approximations, Σ(u, a/L). In order to compute the latter one must first identify a set of lattice sizes L/a and corresponding values of the bare coupling g_0 for which ḡ_GF^2(L^{-1}) = u, with u a specific value. The lattice SSFs Σ(u, a/L) are then given by the couplings ḡ_GF^2((2L)^{-1}) measured at the values of g_0 previously determined, but on lattices with sizes 2L/a.
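At 1-loop order the SSF is known in closed form. The minimal sketch below (using only the universal 1-loop coefficient for N_f = 3; it is not the non-perturbative SSF of Ref. [50]) evaluates it and the associated constant 1/σ(u) − 1/u = −2b_0 ln 2 ≈ −0.079:

```python
import math

# Universal 1-loop coefficient for Nf = 3 (cf. Eq. (7))
NF = 3
B0 = (11 - 2 * NF / 3) / (4 * math.pi) ** 2

def sigma_1loop(u):
    """1-loop SSF: solves 1/sigma = 1/u - 2*b0*ln(2), i.e. running from L to 2L."""
    return u / (1.0 - 2.0 * B0 * math.log(2.0) * u)

if __name__ == "__main__":
    # At 1-loop order, 1/sigma(u) - 1/u is the same constant for every u
    for u in (2.1, 3.0, 4.5, 6.5):
        print(f"u = {u:.1f}:  sigma(u) = {sigma_1loop(u):6.3f},  "
              f"1/sigma - 1/u = {1.0 / sigma_1loop(u) - 1.0 / u:.4f}")
```

The non-perturbative data of Ref. [50] turn out to be described by a constant of this form, with a slightly different value of the constant.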
The results for the lattice SSFs of the GF-coupling of Ref. [50] are shown in Fig. 5. They correspond to 9 values of the coupling u i ∈ [2.1, 6.5], i = 1, . . . , 9. As one can see from the figure, the lattice data are very precise. On the other hand, discretization effects are in general large, particularly so at the largest couplings. The results for Σ(u, a/L) vary in fact by up to 20% in the range of L/a considered, which is quite a significant change compared to the statistical errors on the points.
Given the results in Fig. 5, we may expect that a simple fit of the data, linear in (a/L)^2, is all that is needed to extrapolate these to the continuum limit. In particular, we may consider individual continuum extrapolations for each u_i value using the functional form (fits A)

Σ(u_i, a/L) = σ_i^(A) + ρ_i^(A) (a/L)^2 ,   (13)

where σ_i^(A), ρ_i^(A) are fit parameters. Within the uncertainties, the linear dependence of the data on (a/L)^2 is in fact excellent, and the above fits are very good (χ^2/dof ≈ 0.7). One is thus tempted to take the precise values for σ_i^(A) as estimates for the continuum SSF. The continuum results so obtained are well described by the simple relation

1/σ_i^(A) − 1/u_i ≈ const.

Note that this is the functional form expected from the perturbative expansion of σ(u) at 1-loop order, although the coefficient predicted by perturbation theory is slightly different, i.e. ≈ −0.079. This observation suggests performing alternative fits to the data in Fig. 5 considering the functional form (fits B)

1/Σ(u_i, a/L) = 1/σ_i^(B) + ρ_i^(B) (a/L)^2 ,   (14)

with σ_i^(B), ρ_i^(B) new fit parameters. The quality of these fits is as good as for the fits A of Eq. (13). Distinguishing between the two fit forms would require significantly higher statistical precision than the present one.
It is important to note at this point that any functional form that we consider for the continuum extrapolations necessarily comes with assumptions. Both fits A and B above, for instance, assume discretization errors of O(a^n), n ≥ 3, to be negligible. Moreover, even focusing only on the leading O(a^2) effects, we know from Symanzik effective theory (SymEFT) [56–58] that these are not simply given by a "classical" term ∝ (a/L)^2. They are in fact a non-trivial combination of different terms which in the limit a/L → 0 are asymptotically ∝ (a/L)^2 [ḡ^2(a^{-1})]^{Γ_i}, where the exponents Γ_i are related to the anomalous dimensions of the operators entering the SymEFT, and ḡ^2(a^{-1}) is the given renormalized coupling of the effective theory evaluated at the scale μ = a^{-1} (cf. Refs. [59–62]).
If terms of higher order than a^2, as well as logarithmic corrections to pure a^2 scaling, were completely negligible in the data, the fit parameters σ_i^(A) and σ_i^(B) should perfectly agree. From the results in Fig. 5 we see that there is in fact agreement within one standard deviation. However, the difference between the results from the two fits is clearly systematic, with the results from fit B always larger than those from fit A.
The issue becomes more evident if one tries to obtain a smooth parameterization of the continuum SSF from the fitted continuum values σ_i. As noticed earlier, a fit of Δσ_i = 1/σ_i − 1/u_i to a constant Δσ provides a good description of the continuum data σ_i in the whole range u ∈ [2.1, 6.5]; this is the case for both fits A and B, with, however, different values for the fitted constants Δσ^(A) and Δσ^(B), respectively. The previous considerations show that the description of discretization errors as pure (a/L)^2 effects is in this case not accurate enough for the level of precision claimed in the continuum limit. Even though the different functional forms in Eqs. (13) and (14) fit the data well and perfectly agree with each other at finite L/a, the corresponding extrapolations for a/L → 0 are clearly affected by some systematics. In Ref. [50], more conservative error estimates and robust central values for the continuum results are eventually obtained by carefully accounting, as systematic uncertainties in the data, for the not entirely negligible effects of the higher-order terms neglected in Eqs. (13) and (14) (cf. Ref. [50] for the details).

(We note that perturbation theory predicts Δσ = −2b_0 ln 2 ≈ −0.079 at 1-loop order, with b_0 given in Eq. (7) (see e.g. Ref. [32]). The close agreement between the non-perturbative data for σ(u) and 1-loop perturbation theory is quite peculiar, considering the fact that it holds up to σ(u) = O(10). We refer the interested reader to Ref. [50] for a detailed discussion of this point.)
The example above might not seem too pessimistic. However, it should come as a warning for the more general situation. Estimating properly the systematic uncertainties related to continuum limit extrapolations of high-precision data can easily become a challenge, particularly so if discretization errors are not small.
As recalled earlier, the leading asymptotic dependence of renormalized lattice quantities on the lattice spacing as a → 0 is given by a combination of terms ∝ a^n [ḡ^2(a^{-1})]^{Γ_i}, where the number and values of the Γ_i, as well as n, depend on the chosen discretization and set-up. The Γ_i are in fact inferred from the anomalous dimensions of the fields defining the O(a^n) counterterms in the SymEFT, and from the order of perturbative improvement that has possibly been implemented (cf. Ref. [61]). Hence, when one considers a pure a^n dependence for the discretization errors, one is implicitly assuming that all Γ_i ≈ 0. This, however, cannot be taken for granted.
For most cases of interest, the leading discretization effects have n = 2, i.e. they are of O(a^2). The results of Refs. [61,62] then show that in the case of QCD there are O(10) different terms that contribute in general, with Γ_i ≈ [−0.1, 3] for several common discretizations and values of N_f = 2–4. Having all Γ_i ≳ 0 is certainly positive. In particular, the contributions relevant in the massless theory all have Γ_i > 0, which implies a faster approach to the continuum limit with respect to pure a^2 terms. However, the large number of contributing terms makes for a complicated pattern of discretization errors in the general case, with no clear contribution(s) dominating. As a result, it may be difficult in practice to identify the terms that are actually relevant. Moreover, terms of the form a^2 [ḡ^2(a^{-1})]^{Γ_i} with sizable Γ_i can be hard to distinguish from a^3 or a^4 terms in a limited range of lattice spacings when statistical uncertainties are present. The continuum estimates obtained by including different contributions, on the other hand, may vary appreciably. In this situation, precise and robust final estimates are not easily achieved.
We stress that it is particularly important to take these considerations into account when aiming for precise determinations of short-distance quantities like the couplings. As discussed in previous sections, in the most interesting region of high energy, μ ≫ Λ, aμ may not be so small. Continuum extrapolations are thus likely to be difficult and to require special attention. Following the lines of Refs. [61,62], one should take the non-trivial a-dependence predicted by SymEFT into account, provided the information is available. If this is not the case, one should at least try to estimate the uncertainties associated with neglecting logarithmic corrections to classical scaling, e.g. by considering terms ∝ a^2 [ḡ^2(a^{-1})]^{Γ_i}, with Γ_i ≈ 1–3, in the fit ansätze. Ideally, one would like to be in the situation where, within the uncertainties, the continuum estimates do not sensibly depend on whether these terms are considered or not.

Some further remarks are in order. A relevant exception to n = 2 is the case of the SF, for which the leading discretization errors are parametrically of O(a) (cf. Sect. 4.2.3). In applications, however, the O(a) effects are subdominant with respect to the O(a^2) effects, and often also compared to the statistical errors. The precision studies of Refs. [32,33,50] thus opt for treating the O(a) effects as (small) systematic uncertainties in the data, and perform continuum extrapolations assuming leading O(a^2) effects. In this respect, we note that in Refs. [61,62] the Γ_i relevant for the O(a) effects in the (pure-gauge) SF have been computed; the results support the treatment of the O(a) effects pursued in Refs. [32,33,50] (cf. the given references for the details). The quoted ranges of Γ_i refer to the contributions to discretization effects coming from the lattice action, considering several popular options (cf. Ref. [62]). If the relevant observable is not a spectral quantity, additional effects originating from the lattice fields that define it are present; these depend on the specific observable and choice of discretization (see, e.g. Refs. [61,62]). In the case of the pure-gauge theory, only two terms from the lattice action contribute to the O(a^2) effects; the Γ_i for different options can be found in Ref. [61], and in all cases Γ_i ≳ 0.6. Finally, it is clear that even though the SymEFT can predict the form of the leading asymptotic discretization errors, it cannot predict the region where these dominate over formally suppressed contributions. In practice, it may thus be difficult to establish the regime of applicability of the results from SymEFT.
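The danger of ignoring the [ḡ^2(a^{-1})]^{Γ_i} factors can be made concrete with synthetic data. In the sketch below all numbers are invented for illustration (the continuum value, the amplitude, the exponent Γ = 2, and the constant inside the log): exact pseudo-data with an a^2 log-corrected cutoff effect are fitted with a pure a^2 ansatz. The fit describes the data to within a small fraction of the cutoff effects, yet the extrapolated continuum value is biased by more than the entire cutoff effect on the finest lattice:

```python
import math

SIGMA_TRUE = 1.0      # continuum value (invented)
C, GAMMA = 0.5, 2.0   # amplitude and exponent of the a^2 [g^2(1/a)]^Gamma term (invented)

def cutoff_effect(x):
    """Model a^2 [gbar^2(1/a)]^Gamma effect, with gbar^2(1/a) ~ 1/log(...); x = a/L."""
    return C * x**2 * (1.0 / (math.log(1.0 / x) + 2.0)) ** GAMMA

def fit2(xs, ys, basis):
    """Closed-form least squares for y = A + B*basis(x), via the 2x2 normal equations."""
    n, b = len(xs), [basis(x) for x in xs]
    s1, s2 = sum(b), sum(v * v for v in b)
    sy, sby = sum(ys), sum(v * y for v, y in zip(b, ys))
    det = n * s2 - s1 * s1
    B = (n * sby - s1 * sy) / det
    return (sy - B * s1) / n, B            # A (continuum value), B (slope)

if __name__ == "__main__":
    xs = [1.0 / L for L in (8, 12, 16, 24, 32)]           # a/L of the pseudo-lattices
    ys = [SIGMA_TRUE + cutoff_effect(x) for x in xs]      # exact pseudo-data, no noise

    a_pure, _ = fit2(xs, ys, lambda x: x**2)              # naive pure-a^2 ansatz
    a_log, _ = fit2(xs, ys, lambda x: cutoff_effect(x))   # ansatz including the log factor

    print(f"true continuum value      : {SIGMA_TRUE}")
    print(f"pure a^2 extrapolation    : {a_pure:.6f}")
    print(f"log-corrected ansatz      : {a_log:.6f}")
    print(f"finest-lattice cutoff eff.: {cutoff_effect(min(xs)):.1e}")
```

The naive extrapolation even overshoots below the true continuum value, although every data point lies above it, illustrating that a good-looking fit does not guarantee an unbiased extrapolation.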
Given the observations above, we want to bring the reader's attention to a recent study where the non-trivial a-dependence of discretization effects was found to be a relevant issue. Specifically, we consider the computations of Refs. [51,63] of the GF-coupling in the pure Yang–Mills theory using Numerical Stochastic Perturbation Theory (NSPT). In this framework, the lattice theory is numerically solved through a Monte Carlo simulation up to a finite order in the bare coupling g_0 [64,65]. From expectation values in this "truncated theory" one can obtain the perturbative coefficients of the expansion of lattice quantities in g_0.
In Refs. [51,63], the GF-coupling with SF boundary conditions, ḡ_GF^2(μ), has been computed up to two-loop order in g_0^2. Using the relation between α_0 ≡ g_0^2/(4π) and α_MS ≡ α_MS^(N_f=0) [66,67], one can thus infer the relation

α_GF(μ) = α_MS(μ) + k_1(a/L) α_MS^2(μ) + k_2(a/L) α_MS^3(μ) + … ,   (15)

where the coefficients k_1(a/L), k_2(a/L) are functions of the resolution a/L considered for the lattice. In order to obtain the matching relation between the couplings in the continuum limit, the coefficients must be extrapolated for a/L → 0. Focusing on the 1-loop coefficient, k_1(a/L), from SymEFT we expect that (see Refs. [51,63])

k_1(a/L) ≈ k_1(0) + Σ_{m≥2, n≥0} r_{mn} (a/L)^m [ln(L/a)]^n ,   (16)

with r_{mn} some constants. Note that the coefficient r_{21} of the leading term ∝ (a/L)^2 ln(L/a) implicitly depends on the Γ_i predicted by SymEFT (cf. Sect. 5.2 of Ref. [61] and also Ref. [62]). Compared to the case of the full theory, the results from the truncated theory have a simpler (yet non-trivial) cutoff dependence. Given the high precision reached in these calculations, this allows for a clean illustration of the difficulties in continuum extrapolations. In Fig. 6 we show the results from Ref. [63] for k_1(a/L) for two different values of the parameter c, 0.3 and 0.4, which specifies the GF-scheme (cf. Refs. [51,63] and Eq. (53)), and for two different discretizations of the observable defining the coupling.

(A recently expanded discussion in Ref. [35] provides another clear illustration of the difficulties of continuum limit extrapolations of precise coupling data, using results from the pure-gauge theory (cf. Figs. 6 and 7 of that reference and the related discussion). We strongly recommend the interested reader to consult this reference. We moreover refer to the important pioneering studies of Refs. [59,60] in the non-linear σ-model in two dimensions.)
Starting from the results for c = 0.3 (left panel of Fig. 6), we see that the data are very precise but discretization effects are sizable. In the plot we then show two types of extrapolations to the continuum limit. For the first type (solid lines), lattices with L/a = 12–32 are fitted to the asymptotic form, Eq. (16), considering the leading terms m = 2, n = 0, 1. The fits are good, χ²/dof ∼ 1, and the extrapolated results for the two discretizations agree well. The m = 2, n = 1 term is in fact crucial to obtain good fits. For the second set of extrapolations (dashed lines), we consider instead lattices with L/a = 12–24. In this case the data can be very well described by a pure (a/L)² term (m = 2, n = 0) over the whole range of lattice sizes. The continuum-extrapolated values obtained from these fits have significantly smaller statistical errors than the ones from the previous fits, and yet there is perfect agreement between the two discretizations. On the other hand, the results deviate from the previous estimates by several of their standard deviations.
The results for c = 0.4 exhibit qualitatively the same features, although the statistical errors on k₁(a/L) are about a factor of 2 larger and the two discretizations now show rather different lattice artifacts. On the other hand, cutoff effects are generally smaller than for c = 0.3, and we thus include L/a = 10 in the fits. It is clear that in both cases, c = 0.3, 0.4, a reliable continuum extrapolation for k₁(a/L) is challenging due to the non-trivial a-dependence of the data. In particular, lattices larger than the ones considered here are clearly needed in order to obtain accurate continuum results (cf. Ref. [51] for the final determination).
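The systematic effect at play in these fits can be illustrated with a small numerical sketch. The numbers below are entirely synthetic (hypothetical parameters, not the data of Refs. [51,63]): we generate k₁(a/L) with an (a/L)² ln(L/a) term present and then extrapolate both with and without the logarithm, showing how the pure (a/L)² ansatz can return a precise but shifted continuum value.

```python
import numpy as np

# Synthetic illustration (hypothetical parameters, not the data of
# Refs. [51,63]): generate k1(a/L) with the SymEFT-motivated form
#   k1(a/L) = k1(0) + (a/L)^2 * (r20 + r21 * ln(L/a)),
# and extrapolate a/L -> 0 with and without the logarithmic term.
k1_cont, r20, r21 = 1.0, 0.5, 0.8
L_over_a = np.array([12., 16., 20., 24., 28., 32.])
x = 1.0 / L_over_a                      # a/L
data = k1_cont + x**2 * (r20 + r21 * np.log(L_over_a))

# Fit 1: asymptotic form with the leading terms m = 2, n = 0, 1
A_full = np.column_stack([np.ones_like(x), x**2, x**2 * np.log(L_over_a)])
fit_full = np.linalg.lstsq(A_full, data, rcond=None)[0]

# Fit 2: pure (a/L)^2 ansatz (m = 2, n = 0 only)
A_pure = np.column_stack([np.ones_like(x), x**2])
fit_pure = np.linalg.lstsq(A_pure, data, rcond=None)[0]

print("true k1(0):          ", k1_cont)
print("fit with log term:   ", round(fit_full[0], 5))
print("fit without log term:", round(fit_pure[0], 5))  # systematically shifted
```

In this noiseless toy the fit including the logarithm recovers k₁(0) exactly, while the pure (a/L)² fit lands a small but resolvable distance away; with very precise data such a shift can exceed the quoted statistical errors, which is precisely the mechanism discussed above.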
In conclusion, these examples show how challenging it can be to assess the systematics of continuum limit extrapolations of couplings. This is especially true when one wants to maintain in the continuum limit the high precision reached on the lattice data while discretization errors are large. It then becomes hard to avoid systematic biases in the final determinations. To this end, it is crucial to test all the assumptions that enter the functional forms chosen for the extrapolations. In particular, we must keep in mind that good fits do not necessarily mean good results for the fit parameters, especially for extrapolations outside the range covered by the data.

Heavy-quark decoupling
So far we have focused on the main challenges that stand in the way of a precise determination of Λ^(Nf)_MS and discussed in detail the cases of Nf = 0, 3. The quantity of interest for phenomenology, however, is Λ^(5)_MS. At present, lattice estimates of Λ^(5)_MS are for the most part based on determinations of Λ^(3)_MS (cf. Ref. [3]). As we shall recall in the next subsection, the most common strategy to obtain Λ^(5)_MS is in fact to non-perturbatively compute Λ^(3)_MS through simulations of the Nf = 3 theory and then rely on perturbative decoupling relations for the heavy quarks to estimate the ratios Λ^(4)_MS/Λ^(3)_MS and Λ^(5)_MS/Λ^(4)_MS (see e.g. Ref. [3]).
The main reason for this is that, as is well known, simulating the charm quark dynamically is at present challenging, let alone the case of the bottom quark. While the inclusion of the charm quark in the computation of the running of the QCD coupling may be only moderately challenging with a suitable strategy (see e.g. Ref. [68]), it does pose important difficulties in large-volume hadronic simulations. Besides the increased computational cost of simulating an additional quark with respect to Nf = 2+1 simulations, and the more complicated tuning of the bare QCD parameters necessary to define proper lines of constant physics, discretization effects are a serious source of concern. Given the currently most accessible lattice spacings in hadronic simulations, say a ≈ 0.05 fm, we have that am_c ≈ 0.3 and am_b ≈ 1, where for definiteness we took m_c ≈ 1.27 GeV and m_b ≈ 4.2 GeV. In the hadronic regime it is therefore a real challenge to control the discretization effects induced by including the charm quark in simulations, and unrealistic for the case of the bottom quark. This is particularly true for the case of Wilson quarks, where the charm quark can potentially introduce large O(am_c) effects, unless a complete Symanzik O(a) improvement programme is carried out, which is certainly no simple task (see e.g. Ref. [69]).
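The quoted numbers follow from a simple unit conversion using ħc ≈ 0.1973 GeV·fm; a minimal check:

```python
# Convert quark masses to lattice units for a ~ 0.05 fm using
# hbar*c ~ 0.1973 GeV*fm; values of m_c and m_b as quoted in the text.
hbar_c = 0.1973       # GeV * fm
a = 0.05              # fm
m_c, m_b = 1.27, 4.2  # GeV

am_c = a * m_c / hbar_c
am_b = a * m_b / hbar_c
print("a*m_c ~", round(am_c, 2))  # about 0.3
print("a*m_b ~", round(am_b, 2))  # about 1
```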
In this situation, it is mandatory to assess the reliability of the strategy presented above for the determination of Λ^(5)_MS. To this end, in the following we shall recall the general theory of decoupling of heavy quarks and critically address its application in lattice determinations of Λ^(5)_MS. This includes both the usage of perturbation theory for the inclusion of heavy-quark loops in the running of the QCD coupling, that is, to estimate the ratios Λ^(4)_MS/Λ^(3)_MS and Λ^(5)_MS/Λ^(4)_MS, as well as the determination of the physical units of Λ^(5)_MS from scale setting in the Nf = 2+1 theory. As we shall see, given the current precision on Λ^(3)_MS, accounting for heavy-quark effects by means of perturbation theory in the running of the QCD coupling is remarkably accurate, even for the case of the charm quark. In addition, charm-quark effects in (dimensionless) low-energy quantities are found to be quite small, supporting the fact that Nf = 2+1 QCD is accurate enough for establishing the physical scale. As a result, competitive determinations of Λ^(5)_MS are possible from results in the Nf = 3 flavor theory.

The effective theory for heavy-quark decoupling and the QCD couplings
In this subsection we introduce the effective theory of heavy quarks and recall how this is conventionally applied in the determination of Λ (5) MS . We refer the reader to Refs. [70,71] for a more detailed presentation.

The effective theory for heavy-quark decoupling
We begin by considering QCD with Nf flavors of quarks, which in short we denote QCD_Nf. Of these, N are considered to be light, while the other N_h ≡ Nf − N are heavy. For simplicity, we assume that the light quarks are degenerate with mass m, while the heavy quarks are also degenerate but with a mass M ≫ Λ. The effective theory associated with the decoupling of the heavy quarks is formally obtained by integrating out in the functional integral the fields associated with the heavy quarks [72]. The field theory that results is characterized by having an infinite number of non-renormalizable interactions, which are suppressed at low energies by negative powers of the heavy-quark mass M. The couplings of the effective theory can be fixed order by order in M⁻¹ by requiring that, at each given order, a finite number of observables is equal to the corresponding ones in the fundamental theory. Once the couplings are fixed up to a certain order M⁻ⁿ, the effective theory is said to be matched to the fundamental one at this order, and can be used to describe the effects of the heavy quarks at low energies up to corrections of O(M⁻⁽ⁿ⁺¹⁾). In this sense, we say that as M → ∞ the heavy quarks decouple from low-energy physics as their effects eventually fade away [73].
In formulas, the Lagrangian of the effective theory is of the general form (see e.g. Ref. [71])

L_eff = L₀ + Σ_{k≥1} M⁻ᵏ L_k,    (17)

where the leading order corresponds to the Lagrangian of QCD with N light quarks, i.e. L₀ = L_QCD_N, while the corrections L_k, k ≥ 1, consist of linear combinations of local fields of mass dimension 4 + k, i.e.

L_k = Σ_i ω_i^(k) Φ_i^(k),    (18)

with ω_i^(k) dimensionless couplings. The fields Φ_i^(k) are built from the light-quark and gluon fields, and include possible powers of the light-quark masses. They must respect the symmetries of the fundamental QCD_Nf theory, in particular gauge invariance, Euclidean (or Lorentz) symmetry, and chiral symmetry.
In the case where the light quarks are massless, the leading-order effective theory, QCD_N, has a single parameter: the gauge coupling ḡ^(N)(μ). The effective and fundamental theories are therefore matched at leading order in M⁻¹ once ḡ^(N)(μ) is matched. This requires that ḡ^(N)(μ) is properly prescribed at a given renormalization scale in a given renormalization scheme in terms of the coupling ḡ^(Nf)(μ) of the fundamental theory and the heavy-quark masses M.²¹ In addition, provided that the fundamental theory is defined on a manifold without boundaries,²² it is possible to show that L₁ = 0 and therefore O(M⁻¹) corrections are absent [71].²³ In this situation, the leading-order corrections induced by the heavy quarks are suppressed as M⁻² at low energy.
In the case where the light quarks have a non-vanishing mass, the only mass-dimension 5 fields allowed in L₁ are given by the fields of the leading-order Lagrangian L_QCD_N multiplied by the light-quark masses m [71]. Their effect can be reabsorbed into a redefinition of the gauge coupling and light-quark masses of O(m/M). Of course, in the case of massive light quarks the matching of the effective and fundamental theory at leading order also requires the matching of the light-quark masses. In addition, the couplings ω_i^(k) of the corrections L_{k≥2} will in general depend on the light-quark masses, too.

Footnote 21: For ease of notation we do not use any symbol to indicate the generic scheme of renormalized couplings. We however assume that, unless otherwise stated, the couplings are defined in a mass-independent scheme.

Footnote 22: This means that the theory lives in infinite space-time or in a finite volume with (some variant of) periodic boundary conditions. The special but relevant case of a finite volume with boundaries will be considered in Sect. 4.2.3.

Footnote 23: Note that for the sake of argument we exclude the uninteresting case of N = 1, for which a proof of this result is to our knowledge missing.

The effective theories and their couplings
The application of the effective theory for heavy quarks to the determination of the QCD couplings was first advocated by Weinberg in his seminal paper on effective field theories [72]. The idea is based on the observation that for mass-independent renormalization schemes the RG equations of the renormalizable couplings of the effective theory completely decouple from the others. This means that the couplings of the non-renormalizable interactions can be completely ignored when determining the variation of the running coupling ḡ^(N)(μ) of the effective theory with the energy scale μ. The heavy quarks affect the value of the coupling of the effective theory only through the matching with the coupling of the fundamental theory, ḡ^(Nf)(μ).
The matching relation between ḡ^(N)(μ) and ḡ^(Nf)(μ) can in principle be established in perturbation theory. This is best done at a scale μ_match ≈ M [23,72]. Assuming the validity of perturbation theory at this scale, if the running coupling ḡ^(N)(μ_match) in the effective theory is known, one can turn the tables and obtain ḡ^(Nf)(μ_match) by inverting the matching conditions.
In phenomenological applications of this strategy (see e.g. Ref. [5]), the value of ḡ^(N)(μ_match) is extracted from the value of the coupling ḡ^(N)(μ_low) at some lower energy scale, μ_low ≪ M. The latter is obtained by comparing the perturbative expansion for some process O(q) with characteristic energy scale q ≈ μ_low with its experimental results. The effects of the heavy quarks in O(q) are expected to be suppressed as O((q/M)²) (cf. Sect. 3.1.1). Hence, assuming that these effects can be neglected, the perturbative expansion of O(q) can be considered in the N-flavor theory, which allows the coupling ḡ^(N)(μ_low) to be extracted. As observed above, the determination of the running of the effective coupling in the N-flavor theory does not require any input from the fundamental theory: one can thus readily obtain ḡ^(N)(μ_match) from ḡ^(N)(μ_low). Clearly, for this strategy to work in practice the energy scale μ_low ≪ M must nevertheless be sufficiently high for perturbation theory to apply.
In non-perturbative applications on the lattice, the strategy presented above is realized in the following way (see e.g. Ref. [3]). Firstly, as discussed in Sect. 2.2.3, through the study of the non-perturbative running of a given massless coupling within the effective N-flavor theory, one determines the ratio Λ^(N)_MS/μ_had, where μ_had is a convenient (not necessarily physical) low-energy scale. Assuming that the effects of the heavy quarks can be neglected in the ratio of low-energy scales μ_had/μ_phys, where μ_phys is an experimentally accessible hadronic quantity, this ratio can also be computed within N-flavor QCD. The physical units of μ_had, and therefore of Λ^(N)_MS, can then be established by taking μ_phys from experiment. As a second step, as we shall see in detail in the following subsection, the matching relation between the couplings of the effective and fundamental theory is expressed as a relation between their Λ-parameters. The ratio Λ^(Nf)_MS/Λ^(N)_MS is thus estimated using perturbation theory at a scale μ ≈ M. From this estimate and the result for Λ^(N)_MS one finally obtains Λ^(Nf)_MS.

It is important to note that in extracting Λ^(N)_MS from the running of the chosen non-perturbative scheme, perturbation theory can be applied at arbitrarily large energy scales. The perturbative matching between the effective and fundamental theory, instead, is best performed at a scale μ ≈ M, where M is set by the mass of the given heavy quarks that decouple. Whether perturbation theory is accurate in this step hence depends on how heavy these quarks are. As we shall see, in the MS-scheme the perturbative matching is remarkably accurate already for masses M close to that of the charm quark.
In the next subsection we describe in detail the perturbative matching of the QCD couplings. We follow the lines of the presentation in Ref. [71], and start by reformulating this matching in terms of Λ-parameters [23]. After doing so, we investigate the accuracy of the perturbative matching within perturbation theory itself, and later discuss the typical size of the non-perturbative corrections one can expect in these relations.

Definitions
As we are interested only in the QCD coupling, we assume for simplicity that the relevant effective theory is given by massless QCD_N. As discussed above, at leading order in M⁻¹ the only parameter of the effective theory that needs to be fixed is therefore the running coupling ḡ^(N)(μ). The effective theory hence predicts any observable once the coupling in the chosen scheme is specified at some scale.
The general form of the relation between the couplings of the leading-order effective theory, ḡ^(N), and of the fundamental theory, ḡ^(Nf), reads [23,24,72]

[ḡ^(N)(μ/Λ^(N))]² = F_O([ḡ^(Nf)(μ/Λ^(Nf))]², M/μ),    (19)

where for later convenience we explicitly wrote the dependence of the couplings on their corresponding Λ-parameter. The function F_O depends in principle on the specific observable O that is used to establish the matching between the two theories. The dependence on the observable, however, is suppressed by powers of M⁻¹. In perturbation theory, these power corrections can be uniquely isolated from the logarithmic terms in M and can therefore be dropped. This is consistent with matching the theories at leading order in M⁻¹. In doing so, the relation between the couplings becomes universal, i.e. independent of the specific matching condition. It only depends on the renormalization schemes chosen for the couplings.
In the MS-scheme the matching relation (also referred to as decoupling relation) is known up to 4-loop order. Below, we consider it only for the convenient choice of matching scale μ = m_*, where m_* is implicitly defined by the equation

m_MS(m_*) = m_*,

with m_MS(μ) the running masses of the heavy quarks in the fundamental theory in the MS-scheme. Given this choice, the matching relation reads

[ḡ^(N)(m_*)]² = g_*² C(g_*),  C(g_*) = 1 + Σ_{i≥2} h_i g_*^{2i},  g_* ≡ ḡ^(Nf)(m_*),    (20)

where the h_i are pure numbers that depend on both N_h and N [74–78]. As explained in these references, this particular choice of matching scale makes all contributions ∝ log(m_MS(μ)/μ) appearing in the matching relation vanish, and it is considered to be optimal. This implies in particular that the g_*² term in C(g_*) is absent. In the general case, the scales m_MS(μ) and μ should anyway not be chosen too separated, since large coefficients otherwise appear in the perturbative matching relation, which can compromise its applicability.
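As a purely illustrative sketch of this scale choice, one can solve the implicit condition m_MS(m_*) = m_* by fixed-point iteration. The example below assumes strictly 1-loop running in an Nf = 4 theory; the values used for Λ and for the reference running mass are hypothetical placeholders, not determinations.

```python
import math

Lambda = 0.29          # hypothetical Lambda^(4) in GeV (placeholder value)
beta0 = 11 - 2*4/3     # 1-loop beta-function coefficient for Nf = 4

def alpha(mu):
    """1-loop running coupling."""
    return 2*math.pi / (beta0 * math.log(mu / Lambda))

mu_ref, m_ref = 3.0, 0.99   # hypothetical reference: mbar(3 GeV) in GeV

def mbar(mu):
    """1-loop running mass; LO exponent d0/(2 b0) = 12/(33 - 2 Nf) = 12/25."""
    return m_ref * (alpha(mu) / alpha(mu_ref)) ** (12/25)

m_star = 1.3   # starting guess (GeV)
for _ in range(50):          # fixed-point iteration m* <- mbar(m*)
    m_star = mbar(m_star)
print("m* ~", round(m_star, 3), "GeV")
```

The iteration converges quickly because the running mass depends only logarithmically on the scale; the matching scale obtained this way sits close to the heavy-quark mass itself.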
As anticipated, from the perspective of lattice applications it is compelling to recast the matching relation, Eq. (20), as a relation between the Λ-parameters of the two theories. To this end, we recall the definition of the renormalization group invariant (RGI) quark mass,

M = m_MS(μ) [2 b₀(Nf) ḡ²(μ)]^(−d₀/(2 b₀(Nf))) exp{ −∫₀^{ḡ(μ)} dg [ τ(g)/β(g) − d₀/(b₀(Nf) g) ] },    (21)

where b₀(Nf) is the 1-loop coefficient of the β-function, Eq. (7), and the function

τ(g) = −g² (d₀ + d₁ g² + …),  d₀ = 8/(4π)²,

encodes the scale-dependence of the quark masses in the Nf-flavor theory (in the MS-scheme). It has a perturbative expansion known up to 5-loops [79–82], where the actual scheme- and Nf-dependence starts from the d_i, i ≥ 1. Note that even though in Eq. (21) we conveniently defined the RGI mass M through the MS-scheme, its value is in fact scheme independent, as long as mass-independent schemes are considered for the quark masses. This means in particular that M can be non-perturbatively defined through any non-perturbative (massless) renormalization scheme. Using the above definitions, together with the definitions in Eq. (4), and the matching relation Eq. (20), it is immediate to conclude that [70,71]

Λ^(N)_MS / Λ^(Nf)_MS = P_{ℓ,f}(M/Λ^(Nf)_MS),

where ℓ ≡ N labels the effective theory and f ≡ Nf the fundamental one. As anticipated by our notation, P_{ℓ,f} can be considered as a function of M/Λ^(Nf)_MS, since the value of the coupling g_* can be expressed in terms of the RGI parameters through the relation between M/Λ^(Nf)_MS and g_* implied by Eqs. (4) and (21). In the limit where M/Λ^(Nf)_MS → ∞, the coupling g_* goes to zero and the function P_{ℓ,f} admits an asymptotic perturbative expansion in terms of g_*².

Perturbative uncertainties
We now want to study the behavior of the perturbative expansion of P_{ℓ,f}(M/Λ). We shall focus our attention on the functions P_{3,4} and P_{4,5}, which are the relevant ones for estimating Λ^(5)_MS given Λ^(3)_MS. In Fig. 7 we present the results taken from Ref. [71] for the relative deviation of P_{3,4} (left plot) and P_{4,5} (right plot) from their unsystematic 1-loop approximation P^(1)_{ℓ,f} [71].²⁴ The results are shown as a function of M/Λ and for different orders of perturbation theory, where the order refers to the order at which the matching relation and the relevant RG functions entering P_{ℓ,f} are evaluated.

Starting from the case of P_{4,5}, we see that the difference between the 3-loop and 2-loop results is around 2% at the b-quark mass. The 4- and 5-loop results then differ only by very tiny corrections from the 3-loop approximation. Looking at the behavior of the perturbative series alone, the perturbative prediction for the decoupling of the b-quark appears to be very reliable and accurate. Similar conclusions can be drawn for the decoupling of the charm quark. Also in the case of P_{3,4}, the difference between the 3- and 2-loop results is about 2% at the charm-quark mass, and higher-order corrections are all much smaller.

Footnote 24: The unsystematic approximation P^(1)_{ℓ,f} obtained in Ref. [71] happens to be quite close to the different orders of the perturbative expansion of P_{ℓ,f} for the values of Nf and N considered and M/Λ ≳ 30. For this reason it is used here to set the overall magnitude of the function P_{ℓ,f}. On the other hand, the different orders of perturbation theory are not expected to converge to P^(1)_{ℓ,f}.
In conclusion, the perturbative results suggest that a perturbative treatment of the matching between the relevant effective and fundamental theories introduces only errors at the sub-percent level in the functions P_{ℓ,f}, and therefore in the connection of their Λ-parameters.
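At the lowest order the connection of Λ-parameters can be made completely explicit. Assuming strictly 1-loop running, Λ = μ exp(−1/(2b₀ḡ²(μ))), and the trivial matching ḡ^(N)(m_*) = ḡ^(Nf)(m_*) ≡ g_*, one finds Λ^(N)/Λ^(Nf) = exp[1/(2b₀(Nf)g_*²) − 1/(2b₀(N)g_*²)]. The sketch below evaluates this leading-order estimate for an illustrative (hypothetical) value of the coupling near the charm scale:

```python
import math

def b0(nf):
    """1-loop beta-function coefficient."""
    return (11 - 2*nf/3) / (4*math.pi)**2

def P_1loop(N, Nf, g2_star):
    """1-loop estimate of Lambda^(N)/Lambda^(Nf) with trivial matching."""
    return math.exp(1/(2*b0(Nf)*g2_star) - 1/(2*b0(N)*g2_star))

g2_star = 4*math.pi*0.39     # illustrative alpha(m*) ~ 0.39 near the charm
P34 = P_1loop(3, 4, g2_star)
print("1-loop Lambda^(3)/Lambda^(4) ~", round(P34, 3))  # larger than 1
```

The ratio comes out larger than one, in line with the known hierarchy Λ^(3)_MS > Λ^(4)_MS > Λ^(5)_MS; the higher-loop corrections discussed above then shift this leading estimate at the few-percent level.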

Definitions
Judging from perturbation theory alone, the perturbative description of the decoupling of heavy quarks seems to work remarkably well. The rapid convergence of the different orders of the expansion of P_{ℓ,f} at the values of both the charm- and bottom-quark masses suggests that the series is well within its regime of applicability and that higher-order corrections are small. On the other hand, the perturbative expansion cannot tell us anything about the size of non-perturbative corrections to the decoupling relations, nor whether perturbation theory actually applies at all. Whether non-perturbative effects are significant within the target precision at the relevant quark masses can only be established through a non-perturbative investigation.
In order to set the grounds for estimating the error that one makes when using a perturbative approximation for P_{ℓ,f} to extract Λ^(Nf)_MS from Λ^(N)_MS, let us begin by recasting the matching of the effective and fundamental theory in more non-perturbative terms.²⁵ The leading-order effective theory describes the fundamental theory at low energy only when the corresponding Λ-parameter Λ^(N) is a properly chosen function of the scale M,

S^(N) = S^(Nf)(M)  ⇒  Λ^(N) = Λ^(Nf) P^S_{ℓ,f}(M/Λ^(Nf)).    (27)

In these equations S^(N) and S^(Nf)(M) refer to a given low-energy scale computed in the effective and fundamental theory, respectively. Note in particular that the matching function P^S_{ℓ,f}(M/Λ^(Nf)) depends on the scale S that is considered in the matching relation. We also stress again that while the value of Λ^(Nf) does not depend on M, the one of Λ^(N) does depend on it through the matching condition.
Once Λ^(N) is properly fixed through Eq. (27) in terms of S, for any other low-energy quantity S′ we expect that

S′^(N) = S′^(Nf)(M) [1 + O(Λ²/M²)].    (28)

Note that ratios of low-energy scales, instead, do not depend on the value of the Λ-parameters and are therefore insensitive to their matching. For these, it readily holds that

S′^(Nf)(M)/S^(Nf)(M) = S′^(N)/S^(N) + O(M⁻²),    (29)

with S and S′ any two low-energy scales. Given this observation, from Eq. (27) we conclude that, at the non-perturbative level,

P^S_{ℓ,f}(M/Λ^(Nf)) = [S^(Nf)(M)/Λ^(Nf)] / [S^(N)/Λ^(N)].    (30)

Through this simple manipulation we find that [70,71]

S^(Nf)(M)/S^(Nf)(0) = Q^S_{ℓ,f} P^S_{ℓ,f}(M/Λ^(Nf)),    (31)

where the factor

Q^S_{ℓ,f} = [S^(N)/Λ^(N)] / [S^(Nf)(0)/Λ^(Nf)]    (32)

is defined in terms of the massless Nf- and N-flavor theories, and S^(Nf)(0) stands for the low-energy quantity S computed in the chiral limit of the Nf-flavor theory. The interesting aspect of Eq. (31) is that the ratio on the l.h.s. can be computed within the fundamental theory, while the r.h.s. is a consequence of the decoupling of the heavy quarks. In particular, we see how the mass dependence of the ratio S^(Nf)(M)/S^(Nf)(0) is expressed in terms of the function P^S_{ℓ,f}, while the factor Q^S_{ℓ,f} is just an overall constant. In the limit of large mass M, the mass dependence of this ratio is therefore expected to be universal and described by perturbation theory. As a result, this relation allows us to put to the test the perturbative expansion of P_{ℓ,f} and estimate the typical size of non-perturbative corrections.
To this end, it is convenient in practice to introduce the mass-scaling function [71]

η^{M,S}_{ℓ,f} ≡ ∂ log(P^S_{ℓ,f}(M/Λ^(Nf))) / ∂ log(M).    (33)

Considering Eq. (31), this can be computed from the mass dependence of any hadronic quantity as

η^{M,S}_{ℓ,f} = ∂ log(S^(Nf)(M)) / ∂ log(M),    (34)

with no need for determining S^(Nf)(0) or Q^S_{ℓ,f}. As for the functions P^S_{ℓ,f}, the η^{M,S}_{ℓ,f} obtained from different low-energy quantities S are expected to differ by O((Λ^(Nf)/M)²) contributions. In the limit M/Λ^(Nf) → ∞, however, the mass-scaling function becomes universal and can asymptotically be estimated in perturbation theory [71]. In the following we shall see how, by studying the mass-scaling function, we will be able to obtain valuable insight on the applicability of perturbation theory in computing P_{ℓ,f}.
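In practice Eq. (34) is estimated from simulations at a discrete set of masses. A minimal numerical sketch (with entirely synthetic values, not the data of Ref. [71]) of extracting η as a log-log derivative:

```python
import numpy as np

# Synthetic table of a low-energy scale S^(Nf)(M) at several RGI masses M
# (hypothetical numbers for illustration only).
M = np.array([0.4, 0.8, 1.6, 3.2])       # GeV
S = np.array([1.00, 1.03, 1.08, 1.15])   # S^(Nf)(M) in some fixed units

# eta = d log S / d log M via finite differences; the constants Q and
# S^(Nf)(0) drop out of the logarithmic derivative, as noted in the text.
eta = np.gradient(np.log(S), np.log(M))
for m, e in zip(M, eta):
    print(f"M = {m:4.1f} GeV : eta ~ {e:.3f}")
```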

Non-perturbative corrections to decoupling
Non-perturbative effects in the decoupling of heavy quarks have been systematically investigated in a series of recent papers [70,71,84–87], which focus on the relevant case of the charm quark. The main question that these works help to answer is how good an approximation Nf = 2+1 QCD is to the Nf = 2+1+1 flavor theory. From our perspective, we are particularly interested in understanding how precisely we can expect to obtain Λ^(4)_MS (and consequently Λ^(5)_MS) from results in Nf = 2+1 QCD. As already mentioned, the first issue is to understand how accurately we can estimate Λ^(4)_MS from Λ^(3)_MS by relying on a perturbative approximation for P_{3,4}(M_c/Λ^(4)_MS). Secondly, we must ask how much we can rely on setting the physical units of the theory using Nf = 2+1 QCD.
Studying the decoupling of the charm quark through simulations of Nf = 2+1+1 and Nf = 2+1 QCD is very challenging from the computational point of view. As a result, the physical effects one is after can easily end up being masked by the final uncertainties. For this reason, the authors of Refs. [70,71,84] have investigated non-perturbatively a model system given by QCD with Nf = 2 degenerate heavy quarks. When the mass of this doublet of quarks becomes large the heavy quarks eventually decouple, and the theory is expected to be described at low energy by the pure Yang–Mills theory, i.e. N = 0 flavor QCD. Studying this model rather than the realistic case avoids the complications of simulating light quarks. This allows one to reach much finer lattice spacings than typically possible in large-volume simulations with light quarks, because the volumes are similar to those in the pure-gauge theory. Fine lattice spacings are essential to keep discretization errors under control in the presence of quarks with masses close to that of the charm. By studying this model one expects to be able to reliably estimate the typical size of the effects induced by the charm in low-energy physics. In particular, the absence of the light quarks is not expected to change the picture very much, and their effect is likely more than compensated by the additional heavy quark present in the model. In fact, as we shall report below, some first results have recently been obtained for the more realistic situation where a single charm quark decouples in the presence of 3 mass-degenerate lighter quarks [87]. The results collected so far confirm the findings of the model study.

Ratios of Λ-parameters
The first set of results that we want to discuss are from Ref. [71] and are shown in Fig. 8. They correspond to the determination of the mass-scaling functions η^{M,S}_{0,2} based on the gluonic low-energy scales S considered there (cf. Eq. (34)). The precise definition of these scales can be found in the given reference. The results refer to the continuum limit of the model system introduced above, i.e. Nf = 2 QCD with two heavy quarks. The RGI mass M of the heavy quarks is varied from about M_c/8 to 1.2 M_c, where M_c is the RGI mass of the charm quark.²⁷ The plot also includes the results for η^M_{0,2} at 1- and 4-loop order in perturbation theory [71].
As one can see from the figure, the results for the mass-scaling functions corresponding to different low-energy observables differ significantly at the smaller values of M in the range. As expected, however, they consistently approach the universal perturbative behavior as M is increased.

Footnote 27: The value of the charm-quark mass in simulations is set by targeting M_c/Λ^(2)_MS = 4.87, which is obtained using Λ^(2)_MS = 310 MeV from Ref. [88] and M_c = 1510 MeV from Ref. [89] (cf. Ref. [71] for the details).

Fig. 9: Continuum-extrapolated values for √(t_c/t₀) (left) and √t₀/w₀ (right) as a function of Λ/M [84]. The line in the blue band is the leading-order effective-theory prediction.

The results for η^M_{0,2} are about 1/10 for M ≈ M_c, both in perturbation theory and non-perturbatively. In fact, within the uncertainties of roughly 10% the perturbative and non-perturbative results perfectly agree. Note that even though the relative precision on η^M_{0,2} might not seem impressive, it corresponds to an absolute error on η^M_{0,2} of about 0.01. This means, in particular, that it is reasonable to assume that the difference, Δη^M_{0,2}, between the non-perturbative η^M_{0,2} and its (4-loop) perturbative approximation is bounded by this error for masses M ≲ M_c. The scaling of the non-perturbative corrections Δη^M_{0,2} as a function of M can then be assessed by studying the M-dependence seen in Fig. 8.

Putting all this information together, the authors of Ref. [71] obtain a conservative estimate for the size of the non-perturbative contributions ΔP_{0,2} to P_{0,2}. We shall not report their detailed discussion here and refer the interested reader to Ref. [71].²⁸ Summarizing their conclusions, given the results for Δη^M_{0,2} obtained in the model, one can safely state that the non-perturbative contributions to P_{0,2}(M_c/Λ^(2)_MS) are at most 2% and quite likely at the level of 0.4%. This translates into at least a 2% precision of perturbation theory in the conversion of Λ-parameters for the investigated case.

What can we conclude from this about the phenomenologically interesting case of P_{3,4}(M_c/Λ^(4)_MS)? We first note that the dependence in perturbation theory of η^M_{ℓ,f} on N at fixed N_h is very mild. At leading order it amounts to about a 20% effect in going from N = 0 to N = 3 [71]. For this reason, the authors of Ref. [71] include an additional 50% contribution in their estimate of the non-perturbative corrections Δη^M_{0,2} to account for the missing light-quark effects. Secondly, our intuition from both perturbative and non-perturbative considerations suggests that the effects of the decoupling of a single charm quark are most likely about half of those of two quarks. From these observations, one concludes that one can safely neglect non-perturbative effects in connecting Λ^(3)_MS and Λ^(4)_MS down to a precision of 1.5% or better [71].

Footnote 28: The deviation Δlog(P_{0,2}) of the full function log(P_{0,2}) from its perturbative approximation is obtained by integrating Δη^M_{0,2} in log(M/Λ) from the value of M/Λ of interest up to M/Λ = ∞ (cf. Eq. (34) and Ref. [71]). In order to translate to P_{0,2} the estimate for the non-perturbative corrections to η^M_{0,2} made for M ≲ M_c, one therefore needs to make some assumptions on how these approach the limit M/Λ → ∞. Depending on the assumptions, more or less conservative estimates are obtained.

Ratios of hadronic quantities
The second important category of effects that we address are non-perturbative contributions of heavy quarks to dimensionless ratios of low-energy quantities. For us these are relevant in the context of setting the physical scale of the theory (see e.g. Ref. [90]). In fact, as is well known, besides the technical difficulties of computing the relevant ratios, an important issue that one faces in setting the scale in lattice QCD is that one never really simulates the "real world" where experiments are conducted. Hence, when comparing the lattice results with experiments in order to set the scale of the theory, one must "correct" the experimental quantities for effects that are not taken into account in the lattice simulations, or at least verify that these are irrelevant compared with the rest of the uncertainties. The most common examples of effects to be considered are the difference between the u- and d-quark masses, electromagnetic effects, and the unphysical number of quark flavors.
In the following we focus on the issue of estimating the effects of the charm quark on low-energy quantities, and to which extent the charm can be omitted in lattice simulations. From the point of view of determining Λ^(4)_MS from results for Λ^(3)_MS, the relevant question is thus how accurate the determination of the physical scale is from simulations in the 3-flavor theory. This amounts to quantifying how accurate it is to compute in Nf = 2+1 rather than Nf = 2+1+1 QCD the ratios of low-energy quantities that are used to set the physical scale.
As presented in Eq. (29), the effects of heavy quarks in dimensionless ratios of low-energy quantities are expected to be of O(M⁻²), provided that the mass of the heavy quarks is large enough compared to the energy scales of the observables considered. In Ref. [84] a systematic study was conducted in order to assess the range of heavy-quark masses for which the O(M⁻²) scaling of the heavy-quark effects predicted by the effective theory actually sets in. In addition, the authors estimated the typical size of these contributions when M ≈ M_c. For their computations, they considered the very same model of QCD with two degenerate heavy quarks introduced previously, and the same range of masses. The results for several different ratios of low-energy quantities obtained in the fundamental Nf = 2 QCD theory and in the effective pure-gauge theory were compared. Note that since the effective theory is purely gluonic, only gluonic quantities were considered. These, however, are all relevant observables entering realistic scale-setting determinations.
In Fig. 9 we show two examples given by the ratios √(t_c/t_0) (left panel) and √t_0/w_0 (right panel), evaluated in the continuum limit. The results from N_f = 2 QCD for different values of the heavy-quark masses M ≳ M_c/4 are shown, as well as those from the effective pure-gauge theory, which correspond to M = ∞. Several fits to the data are proposed. Let us focus first on the quadratic (blue) and linear (green) fits in Λ/M. Among these fits, the data seem to favor the leading-order effective theory prediction, although the results do not completely exclude a linear Λ/M dependence. The fits obtained by adding to the leading-order prediction a next-to-leading term ∝ (Λ/M)⁴ and including data down to M ≈ M_c/4 (red dashed lines) further support the findings from the previous fits. Indeed, the close agreement for M ≳ M_c/2 between these fits and the leading-order fits restricted to this range reinforces the conclusion that the O(M⁻²) scaling sets in for masses M ≳ M_c/2, while for smaller masses higher-order contributions become relevant, or the expansion has broken down entirely.
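To make the fit logic concrete, here is a minimal sketch (with synthetic numbers, not the actual data of Ref. [84]) of how such mass-dependence fits can be set up: a leading-order fit linear in (Λ/M)², and an extended fit including a (Λ/M)⁴ term.

```python
import numpy as np

# Toy illustration (not the data of Ref. [84]): fit the mass dependence of a
# dimensionless ratio R(M) with the leading-order decoupling form
#   R(M) = R_inf + c2*(Lam/M)^2
# and with an added next-to-leading term c4*(Lam/M)^4.
Lam = 0.34                               # GeV, stand-in for the Lambda-parameter
M = np.array([0.4, 0.8, 1.6, 3.2])       # GeV, heavy-quark masses ~ Mc/4 ... 2Mc
x = (Lam / M) ** 2
R = 1.0 + 0.03 * x + 0.02 * x**2         # synthetic "data" with small M^-2 effects

# Linear least squares in the variable x = (Lam/M)^2
A_lo = np.vstack([np.ones_like(x), x]).T           # leading-order model
A_nlo = np.vstack([np.ones_like(x), x, x**2]).T    # model with (Lam/M)^4 term
p_lo, *_ = np.linalg.lstsq(A_lo, R, rcond=None)
p_nlo, *_ = np.linalg.lstsq(A_nlo, R, rcond=None)
print(p_lo[0], p_nlo[0])   # two estimates of the M -> infinity limit R_inf
```

Comparing the extrapolated value R_inf between the two fit forms (and between different mass cuts) is what probes whether the O(M⁻²) scaling has set in.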
In general the corrections induced by the heavy quarks are small. They amount to 2–3% at the smallest masses in the plot, around M_c/4, but they are reduced significantly below the percent level, to about 0.4%, once M ≈ M_c. As discussed above, we expect this result to provide a reliable estimate for the magnitude of these effects in the realistic case of the decoupling of the charm quark in N_f = 2+1+1 QCD. The simultaneous decoupling of two charm-like quarks rather than just one likely compensates for the missing effects of the light quarks, at least in the purely gluonic quantities under consideration.
These expectations are confirmed by the recent results of Ref. [87]. In Fig. 10 we show their continuum limit extrapolations for different discretizations of the ratios √(t_0/t_c) and √t_0/w_0. In this case, the results from N_f = 3 and N_f = 3+1 QCD are compared. Both simulations include 3 mass-degenerate quarks with a mass around the physical average mass of the u-, d- and s-quarks. The N_f = 3+1 simulations include in addition a fourth quark with a mass set to the physical value of the charm. By comparing the results of the two sets of simulations one can directly test decoupling in a close-to-real situation. As one can see from the figure, the differences in these very precise ratios of low-energy gluonic quantities are far below the percent level and of the order of magnitude found in the model. Interestingly, the ensembles generated in Ref. [87] open the possibility to study systematically the effects of the charm quark also in fermionic low-energy observables.

Fig. 10 Continuum limit extrapolations of √t_0/w_0 (top) of the N_f = 3+1 data from Ref. [87] (right) with the corresponding N_f = 3 CLS results (left) from Refs. [20,91], including the finest ensemble J500, cf. Ref. [92]

Conclusions

Let us summarize what we have learned from the studies above. 1) The ratio Λ_MS^(4)/Λ_MS^(3) can be safely computed in perturbation theory with a precision of at least 1.5%, and realistically much better. In fact, it is important to stress that the estimate for the non-perturbative corrections to the function P_{3,4}(M_c/Λ_MS^(4)) is very conservative, and the actual size of these effects is most likely quite a bit smaller, i.e. at the level of 0.3% or so [71]. 2) Power corrections of O(M_c⁻²) in low-energy observables are also found to be very small, i.e. well below the percent level [84]. All in all, this means that Λ_MS^(5) can be accurately predicted at the 1–2% level from Λ_MS^(3). In this respect, we note that the competitive precision of about 0.7% on the α_s(M_Z) determination of Ref. [13] (to be discussed below) corresponds to an uncertainty of 3.5% on Λ_MS^(3). This uncertainty is also conservative. In conclusion, there is plenty of room for improvement within N_f = 3 QCD.
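The large mismatch between a 3.5% error on the Λ-parameter and a 0.7% error on the coupling can be understood already at 1 loop: the relative error on Λ is damped by a factor 1/ln(M_Z/Λ) when propagated to α_s(M_Z). A quick numerical sketch (my own 1-loop illustration with round input numbers, not the error analysis of Ref. [13]):

```python
import math

# At leading order alpha_s(mu) = 1/(b0 * ln(mu^2/Lambda^2)), so
# d ln(alpha_s) / d ln(Lambda) = 1/ln(mu/Lambda): a relative error on Lambda
# is damped by this logarithm when propagated to alpha_s(MZ).
nf = 5
b0 = (33 - 2 * nf) / (12 * math.pi)
MZ, Lam5 = 91.19, 0.21                    # GeV; Lam5 ~ 210 MeV as a rough input
alpha = 1.0 / (b0 * math.log(MZ**2 / Lam5**2))
damping = 1.0 / math.log(MZ / Lam5)       # sensitivity factor, ~ 1/6
rel_err_alpha = 0.035 * damping           # propagate a 3.5% error on Lambda
print(f"alpha_s(MZ) ~ {alpha:.3f}, relative error ~ {100*rel_err_alpha:.2f}%")
```

The damping factor of roughly 1/6 turns the 3.5% uncertainty on the Λ-parameter into an uncertainty on α_s(M_Z) of about half a percent, in line with the quoted 0.7% once the remaining contributions are included.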

The QCD coupling from N f = 3 QCD
In the previous section we established that, by relying on perturbative decoupling relations for the charm and bottom quarks, precise determinations of Λ_MS^(5) are possible from results in the N_f = 3-flavor theory. The systematic studies of the non-perturbative effects induced by the charm quark in low-energy quantities were instrumental in establishing this.
Having settled this, as discussed in detail in Sect. 2, the main challenges for an accurate determination of α_s on the lattice are: (1) controlling discretization errors in continuum limit extrapolations of the chosen non-perturbative coupling(s), and (2) estimating the uncertainties associated with the use of perturbation theory in extracting the Λ-parameter. In this respect, the combined application of finite-volume renormalization schemes and finite-size scaling techniques has proven to be extremely effective in dealing with these difficulties, paving the way for robust and precise lattice determinations of the QCD coupling.
The determination of Ref. [13], in particular, relies on these techniques to compute Λ_MS^(3). The calculation reaches a final precision on Λ_MS^(3) of about 3.5%, which translates into a 0.7% uncertainty on α_s(M_Z). The strength of this result lies in the fact that all systematic uncertainties are carefully kept under control at this competitive level of accuracy. The calculation is therefore a prominent example of the current state-of-the-art determinations of α_s from the lattice [3]. Below we briefly recall the main steps that led to this result in order to understand what the challenges are in improving on it. For a more detailed presentation we refer the interested reader to the original references [13,31,32,50] and reviews [93-97].
The Λ_MS determination of Ref. [13] was obtained from the study of the non-perturbative running of some convenient finite-volume schemes from a scale of about 0.2 GeV up to roughly 70 GeV. The very high energies reached non-perturbatively allowed for a systematic and robust assessment of the uncertainties related to the application of perturbation theory. This study has been presented in Sect. 2.3.1, where the high-energy end of the running in the SF-schemes and the result for Λ_MS^(3)/μ_0 with μ_0 ≈ 4.3 GeV have been discussed in detail. The rest of the determination is built on the following steps.
Firstly, we have the computation of the lower end of the running and the corresponding determination of the ratio of finite-volume scales μ_0/μ_had, with μ_had ≈ 200 MeV [50]. For this step, a novel finite-volume coupling defined in terms of the Yang-Mills gradient flow was employed. This allowed for reaching much greater precision than otherwise possible using the schemes considered at high energy [98]. Note that a non-perturbative matching between the finite-volume schemes at μ_0 was performed in order to continue the running at lower energies. For illustration, we show in Fig. 11 the non-perturbative running of the finite-volume couplings over the energy range covered, together with the results for the coupling in the MS-scheme obtained from the corresponding determination of Λ_MS^(3) (see below). In a second step, the relation μ_had/f_πK was established, passing through the intermediate technical scale μ*_ref [13]. Here, f_πK ≡ (1/3)(2f_K + f_π) is a convenient combination of the pion and kaon decay constants, while the scale μ*_ref is given in terms of the flow time t_0* defined in the SU(3) flavor-symmetric limit [91]. This step involved a combination of small-volume and large-volume simulations in the hadronic regime [13]. From the experimental value of f_πK the precise physical units of μ_had could be inferred, and hence those of Λ_MS^(3).

Fig. 11 Running couplings of N_f = 3 QCD from integrating the non-perturbative β-functions in the SF- and GF-schemes [31,50]. They are matched non-perturbatively at the scale μ_0, defined by ḡ²_SF,ν=0(μ_0) = 2.012, by computing ḡ²_GF(μ_0/2) = 2.6723(64) [50]. The scales μ_PT = 16μ_0 and μ_had, defined by ḡ²_GF(μ_had) = 11.31, are also shown, as well as the perturbative prediction for the SF_ν=0-coupling for μ > μ_PT using the 3-loop β-function. The red curves correspond to the results for the MS-scheme coupling obtained from the determination of Λ_MS^(3) of Ref. [13], considering different perturbative orders for the β-function in the MS-scheme
Finally, perturbation theory was used for the functions P_{3,4}(M_c/Λ_MS^(4)) and P_{4,5}(M_b/Λ_MS^(5)) to obtain Λ_MS^(5) and, from this, α_s(M_Z). Splitting the determination of Λ_MS^(5) over the above steps was the key to keeping all systematic errors under control. With the proper choice of observables and techniques, the hard multi-scale problem of relating the low- and high-energy sectors of QCD could be solved in full confidence.
It is now instructive to look at the error budget of this α_s determination. This is given in Fig. 12, which shows the contribution in percentage to the relative error squared on α_s(M_Z) from the different steps described above [93]. As is clear from the figure, the main source of uncertainty comes from the determination of the non-perturbative running from μ_had ≈ 0.2 GeV up to μ_PT ≈ 70 GeV, where perturbation theory is applied to extract Λ_MS^(3)/μ_PT (cf. Sect. 2.3.1). In particular, the error accumulated by running from μ_0 to μ_PT (labeled as Λ_MS^(N_f=3)/μ_0 in the plot) contributes roughly 60% of the total budget.
It is important to recall at this point that the error coming from the running is completely dominated by statistical uncertainties. In particular, thanks to the fact that μ_PT ≈ 70 GeV was reached non-perturbatively, the uncertainties due to the use of perturbation theory are well below the statistical errors (cf. Sect. 2.3.1). In this respect, we want to stress the important difference between this and the majority of other lattice determinations of Λ_MS^(3), where perturbation theory is applied at scales μ_PT ≲ 2–3 GeV. In these cases, a large fraction of the final error comes from the systematic uncertainties related to the truncation of the perturbative series and possible remnants of non-perturbative contributions (cf. Ref. [3]). As we have seen, estimating these systematics reliably is very challenging, particularly so when precision is desired but the accessible range of scales is limited to low energy. In this situation, a reduction of the final uncertainties is highly non-trivial, and can eventually come only from reaching significantly higher energy scales. Without a step-scaling approach this is in practice extremely demanding in QCD, given the present computational and algorithmic capabilities.

Fig. 12 Contribution in percentage to the relative error squared of α_s(M_Z) from the different steps of the determination of Ref. [13] (cf. text for more details). The dominant source of uncertainty is the non-perturbative running at high energy, μ ≈ 4–70 GeV (Λ_MS^(N_f=3)/μ_0) [31,32], followed by the running at low energy, μ ≈ 0.2–4 GeV (μ_0/μ_had) [50], and by scale setting (μ*_ref) [91]. Note that the error from decoupling (Λ_MS^(5)/Λ_MS^(3)) is only perturbative. However, even adding the very conservative uncertainty estimated in Sect. 3.3.2 for the non-perturbative contributions to the decoupling of the charm quark, the total error is still dominated by the one from the running
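The arithmetic behind such an error budget is simply addition in quadrature; a short sketch (with invented placeholder numbers, not those of Ref. [13]) shows how each step's percentage share of the error squared is obtained:

```python
import math

# Sketch of how a budget like Fig. 12 is assembled: independent relative
# errors add in quadrature, and each step's share of the total is its
# squared contribution divided by the sum (numbers below are hypothetical).
steps = {
    "running mu0 -> mu_PT": 0.55,    # percent-level errors, invented
    "running mu_had -> mu0": 0.30,
    "scale setting": 0.25,
    "perturbative decoupling": 0.10,
}
total = math.sqrt(sum(e**2 for e in steps.values()))
shares = {k: 100 * (e / total) ** 2 for k, e in steps.items()}
for k, s in shares.items():
    print(f"{k}: {s:.0f}% of the error squared")
```

Because contributions enter squared, a single dominant step (here the high-energy running) quickly takes over the budget, which is why reducing that one statistical error is the key to improving α_s.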
In the case of the step-scaling method, on the other hand, reducing the uncertainties on the current α_s(M_Z) determination is a question of reducing the statistical uncertainties coming from the running of the coupling(s) in N_f = 3 QCD. This is in principle a straightforward task. However, reducing the total error by an important factor, say a factor of 2 or so, remains a non-trivial challenge from the computational point of view.
Rather than following a brute-force approach to reducing the error in the computation of the running in N_f = 3 QCD, in the following we discuss a novel strategy which promises the desired error reduction in a significantly cheaper way [14]. It is based on the ideas on heavy-quark decoupling presented in the previous section. The distinct feature of the approach is that one can replace the non-perturbative computation of the running in N_f = 3 QCD needed to determine Λ_MS^(3) with the running in the pure Yang-Mills theory.

General strategy and master formula
Let us begin by considering QCD with N_f flavors of heavy quarks of RGI mass M. In this theory all quarks are massive and there are no light quarks. As we have seen in Sect. 3.1.1, as the mass M becomes larger and larger, this theory is expected to be approximated better and better by an effective theory given by the pure Yang-Mills theory. In particular, once the Λ-parameters of the fundamental and effective theory are properly matched, dimensionless low-energy observables can be computed in the effective theory up to corrections of O(M⁻ⁿ), where, in general, n = 2 if the theories are matched at leading order.
The decoupling of heavy quarks is also valid in the interesting case of couplings defined in massive renormalization schemes [23,73]. This is a direct consequence of the fact that such couplings are defined in terms of dimensionless observables in the massive theory. Following the notation of Sect. 2.2.2, we indicate the generic renormalized massive coupling in the N_f-flavor theory as ḡ²_O(μ, M), where O denotes the short-distance observable used to define the coupling (cf. Eq. (3)), while μ is the renormalization scale. From the decoupling of heavy quarks applied to the observable O it follows that (Eq. (35))

ḡ²_O(μ, M) = ḡ²_{O,0}(μ) + O(M⁻²),

where ḡ²_{O,0} is the corresponding coupling in the effective pure-gauge theory. The decoupling relation, Eq. (35), can be equivalently recast in terms of the renormalization scales implicitly defined by the couplings. Specifically, given a numerical value g_M for the coupling, we define the scales μ_dec^(N_f) and μ_dec^(0) through the conditions (Eq. (36))

ḡ²_O(μ_dec^(N_f), M) = g_M²,  ḡ²_{O,0}(μ_dec^(0)) = g_M².

From the theory of decoupling it then follows that μ_dec^(N_f) = μ_dec^(0) × (1 + O(M⁻²)). We stress again that we assume that the Λ-parameters of the two theories are properly matched.
From this basic observation the master formula proposed in Ref. [14] follows. We start by considering the relation in Eq. (27), and take for the low-energy scale S the renormalization scale μ_dec defined above in terms of the given couplings. In formulas,

Λ_MS^(0)/μ_dec^(0) = P_{0,f}(M/Λ_MS^(f)) × Λ_MS^(f)/μ_dec^(f) × (1 + O(M⁻²)),

where for later convenience we took the Λ-parameters in the MS-scheme. Now, rather than interpreting the above equation as a matching relation for the Λ-parameters that defines the function P_{0,f}, we turn the tables and use it to predict the ratio Λ_MS^(f)/μ_dec^(f). To this end, we first replace the function P_{0,f} with its perturbative expansion P^PT_{0,f} in the MS-scheme to some order n (cf. Eqs. (24), (10) and Sects. 3.2.2, 3.3.1). As a second step, using Eq. (4) and the definition in Eq. (36), the ratio Λ_MS^(0)/μ_dec^(0) can be expressed entirely in terms of quantities in the pure-gauge theory. We recall that O appearing here is the observable used to define the coupling in Eq. (36). Moreover, as discussed in Sect. 2.2.2, the change of scheme from the O-scheme to the MS-scheme can be computed exactly through a 1-loop calculation (cf. Eq. (8)).
Finally, introducing the dimensionless variables ρ ≡ Λ_MS^(f)/μ_dec^(f) and z ≡ M/μ_dec^(f), and using Eqs. (40) and (41), we arrive at the master equation (Eq. (43)),

ρ × P^PT_{0,f}(z/ρ) = ϕ^(0)_{g,O}(g_M),

valid up to the perturbative truncation of P^PT_{0,f} and non-perturbative O(M⁻²) corrections, which can be solved for ρ once the pure-gauge function ϕ^(0)_{g,O}(g_M) is known. As promised, the master formula allows us to replace the non-perturbative computation of the running of the coupling from the low-energy scale μ_dec up to infinite energy in N_f-flavor QCD with the corresponding running in the pure-gauge theory. A few remarks are in order at this point.
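The structure of the master equation can be illustrated with a toy numerical solve (everything here is invented for illustration: the stand-in function P below only mimics the qualitative behaviour of P^PT_{0,f}, and ϕ0 and z are arbitrary numbers):

```python
import math

# Toy illustration of solving  rho * P(z/rho) = phi0  for rho = Lambda/mu_dec
# by bisection. P is a placeholder that tends to a constant at large argument,
# mimicking the slow logarithmic drift of the perturbative matching function.
def P(x):
    return 1.0 + 0.1 / math.log(x + 2.0)

def solve_rho(z, phi0, lo=1e-3, hi=10.0, tol=1e-12):
    f = lambda rho: rho * P(z / rho) - phi0   # monotone in rho here
    assert f(lo) < 0 < f(hi)                  # bracket the root
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

rho = solve_rho(z=8.0, phi0=0.5)
print(rho)
```

At fixed "pure-gauge input" ϕ0, increasing z changes ρ only through the slowly varying P, which is the toy analogue of the decoupling corrections dying off at large M.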
First of all, it is important to stress that Eq. (43) becomes exact in the limit M → ∞. In this limit both the perturbative O(g_*^{2n−2}) corrections and the non-perturbative O(M⁻²) contributions vanish. The basic idea applied in this strategy is in fact similar to extracting Λ_MS^(4) from Λ_MS^(3) by replacing the non-perturbative function P_{3,4}(M_c/Λ_MS^(4)) with its perturbative approximation to some order, neglecting both higher-order terms and non-perturbative corrections. The crucial difference in the present case is that the approximation can be made systematically better by considering larger values of M, which, at least in principle, is a free parameter of the strategy. In this respect, note that the perturbative corrections become smaller as M increases, since the relevant matching coupling decreases. The low-energy scale μ_dec, on the other hand, must still be related to its experimental counterpart. Of course, it goes without saying that, in order to be able to set the scale accurately in terms of experimentally measurable quantities, as well as to perturbatively match Λ_MS^(N_f) and Λ_MS^(5), we must consider N_f = 3 or 4.

Another hard multi-scale problem?
The general strategy presented above is certainly very compelling. Like any other strategy, though, in practical implementations it comes with its challenges. In particular, in order to have all systematic effects under control, it is necessary to carefully address how to accommodate in lattice simulations the different scales that enter the problem. As we shall see, a naive approach can easily end up facing severe limitations.
In the general situation, first of all, the space-time volume has to be large enough for finite-volume effects to be under control in all relevant observables. This means that the infrared cutoff set by the linear extent L of the lattice must be much smaller than all other scales. Secondly, in order to have small decoupling corrections in Eq. (43), the heavy-quark mass M must be larger than all other physical scales; in particular, while μ_dec is in principle arbitrary, in practice it is not convenient to take this scale much larger than Λ. Last but not least, all scales have to be well below the ultraviolet cutoff set by the lattice spacing. Putting all these constraints together we find (cf. Eq. (2))

L⁻¹ ≪ μ_dec ∼ Λ ≪ M ≪ a⁻¹.   (44)

It is clear from this series of inequalities that having all these scales comfortably resolved on a single lattice is challenging and requires a very large L/a. Just to give an example, if one attempts the calculation using a state-of-the-art large-volume ensemble with, say, L/a = 100, m_π L = 4 and m_π = 140 MeV, which results in a ≈ 0.056 fm, Eq. (44) translates into M ≪ 3.5 GeV. By employing finite-volume couplings and scales, the only constraints that we have to meet are to have small decoupling corrections in Eq. (43), and to keep discretization errors under control. The first condition requires z = M L_dec ≫ 1, while discretization effects are small once aM ≪ 1 and a/L_dec ≪ 1. Putting these inequalities together we find the conditions (Eq. (45))

1 ≪ z = M L_dec ≪ L_dec/a.
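The numerical example in the text can be checked in a few lines (using ħc ≈ 0.1973 GeV·fm to convert between units):

```python
# Check of the large-volume example: L/a = 100, m_pi * L = 4, m_pi = 140 MeV.
hbarc = 0.1973  # GeV * fm
m_pi = 0.140    # GeV
L = 4 / m_pi * hbarc        # spatial extent in fm
a = L / 100                 # lattice spacing in fm
cutoff = hbarc / a          # ultraviolet cutoff a^-1 in GeV
print(f"L = {L:.2f} fm, a = {a:.4f} fm, 1/a = {cutoff:.2f} GeV")
```

With a cutoff of only about 3.5 GeV, masses much larger than the charm mass cannot be accommodated on such a lattice, which is precisely the limitation the finite-volume setup removes.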

Finite-volume couplings rescue us again
If we take, say, μ_dec in the hadronic regime of a few hundred MeV, the conditions in Eq. (45) can be met comfortably with moderate lattice sizes.

Heavy-quark decoupling in a finite volume

As pointed out for the case of computing the running through a step-scaling procedure (cf. Sect. 2.2.1), the choice of finite-volume coupling is dictated by several technical aspects, for instance statistical precision and discretization errors. For the strategy based on decoupling, an additional factor becomes relevant, which is the size of the non-perturbative contributions in the decoupling of heavy quarks (cf. Eq. (35)). In a finite volume, the situation can be quite different from one coupling definition to another, as even the leading power in M⁻¹ may be different.
Most finite-volume couplings used in practice are based on the QCD Schrödinger functional (SF) [36-38]. In the SF the quark and gluon fields satisfy Dirichlet boundary conditions at the space-time boundaries located at x_0 = 0 and T, where T is the temporal extent of the space-time volume (cf. Eqs. (47)-(48)). These boundary conditions guarantee many compelling features [36,37]. However, they come at the price of having, for instance, additional discretization effects of O(a) [36,101]. Using Symanzik effective theory these can be understood as dimension-4 counterterms localized at the space-time boundaries [36,101]. In close analogy with Symanzik effective theory, an analysis of the effective Lagrangian for heavy quarks in the presence of SF boundary conditions shows that the same boundary fields appear as O(M⁻¹) counterterms. More precisely, considering the relevant case in which all quarks are heavy, the Lagrangian L_1 in Eq. (17) takes the form of a boundary term built from the gluon-field strength tensor F_μν(x) (cf. Eq. (18)), with a coefficient function ω_b that can be fixed by matching with the fundamental N_f-flavor theory. Note that for simplicity we listed only the gluonic operator that is relevant for the class of SF boundary conditions normally employed (cf. Eq. (47)). As a result, if couplings based on the SF are considered, the decoupling relations, Eqs. (35), (43), must be corrected to have leading O(M⁻¹) corrections rather than O(M⁻²) [14,41,102]. On the other hand, finite-volume couplings defined through some variant of periodic boundary conditions, for instance regular periodic [48] or twisted [105,106] boundary conditions, do not suffer from these O(M⁻¹) boundary contaminations. More elegant solutions have been proposed which eliminate the issue entirely. For example, one could consider for the heavy quarks a twisted rather than a standard mass [107-110]. In this case, one can show that L_1 = 0 and the decoupling in the SF is realized with O(M_tw⁻²) corrections, where M_tw is the heavy twisted mass of the quarks [102].
An equivalent situation in the continuum is when the heavy quarks have a standard mass but the SF boundary conditions are chirally rotated [102,111,112] (see also Refs. [113-118]). The issue with these solutions is that they require an even number of flavors N_f. They are therefore a promising approach for the case of N_f = 4 QCD. For N_f = 3, one may consider having a doublet of twisted-mass quarks and a regular massive quark (or, equivalently, a doublet of chirally rotated quarks and a regular SF quark (see e.g. Ref. [118])). This would reduce the O(M⁻¹) contributions to those of only a single flavor.
Another possibility that we want to mention, which is valid for any choice of N_f, is to match the effective and fundamental theories at O(M⁻¹) [119]. In other words, by equating the results for some convenient observable in the effective and fundamental theories one can determine the coefficient ω_b appearing in Eq. (46). Once this is determined, the O(M⁻¹) terms can be taken into account in the effective theory by computing the insertion of the counterterm in Eq. (46) in the relevant observables. This guarantees that the decoupling corrections are of O(M⁻²).

Λ_MS^(3) from the decoupling of heavy quarks

Definitions
In this subsection we present the results of Ref. [14], where the master formula, Eq. (43), was first applied to the computation of Λ_MS^(3). The study considers N_f = 3 QCD, formulated on the lattice in terms of non-perturbatively O(a)-improved Wilson quarks and the tree-level Symanzik O(a²)-improved gauge action [120]. The theory is defined in a finite volume with time extent T and spatial size L. The quark fields ψ, ψ̄ and the gauge field A_μ satisfy SF Dirichlet boundary conditions in the time direction (cf. Eqs. (47)-(48)), with the projectors P_± ≡ (1/2)(1 ± γ_0) acting on the quark fields at x_0 = 0 and T [36,37,101], while in the spatial directions the fields are periodic. The finite-volume couplings that we consider in the following are constructed in terms of the gradient flow (GF) field B_μ(t, x), which is defined by the equations [47,121]

∂_t B_μ(t, x) = D_ν G_νμ(t, x),  B_μ(0, x) = A_μ(x),

where t > 0 is the flow time and G_μν = ∂_μ B_ν − ∂_ν B_μ + [B_μ, B_ν] is the flow-field strength tensor. On the lattice, several discretizations of the GF equations have been considered; here we employ the Symanzik O(a²)-improved definition proposed in Ref. [122], also known as the Zeuthen flow.
Gauge-invariant composite fields made out of the flow field B_μ(t, x) are automatically finite [123], and are thus ideal quantities to define renormalized couplings [47-49]. The definition to which we apply decoupling (cf. Eq. (35)) is the massive finite-volume scheme given by

ḡ²_GFT(μ, M) = N⁻¹ t² ⟨E_mag(t, x)⟩_SF,Q=0 |_{√(8t)=cL, μ=1/L, x_0=T/2},   (51)

where N is a normalization constant [49] and E_mag(t, x) is the magnetic component of the energy density of the flow field B_μ(t, x). On the lattice, we define E_mag in terms of the O(a²)-improved definition given in Ref. [122]. Note how in Eq. (51) the renormalization scale μ is set in terms of the finite spatial extent L, as appropriate for a finite-volume coupling. In order for the coupling to depend on a single scale (apart from M), the flow time t is also linked to L through the constant c [48,49]. The value of this constant is in principle arbitrary, but experience suggests that c = 0.3 is a good compromise between statistical precision and discretization errors [49]. From the point of view of decoupling, we note that the larger the value of c, the more sensitive the coupling is to the O(M⁻¹) counterterms located at x_0 = 0, T. This is so because for larger values of t the footprint of the flow field B_μ(t, x) extends closer to the boundaries.
In order to attenuate the sensitivity to the O(M⁻¹) terms, in Eq. (51) we consider a space-time volume with T = 2L, and place the energy density E_mag(t, x) at x_0 = T/2, in order to maximize the distance from the boundaries (cf. Sect. 4.2.3). Taking only the magnetic part of the flow energy density also helps in reducing the O(M⁻¹) contaminations. Lastly, we note that the expectation value ⟨···⟩_SF,Q=0 in Eq. (51) is taken in the presence of the SF boundary conditions, Eqs. (47)-(48), and restricted to gauge fields in the trivial topological sector [50,124]. The latter constraint is imposed in order to circumvent issues related to topology freezing at small lattice spacings [125,126].
Having introduced the massive scheme of choice, we now move to the definition of the decoupling scale μ_dec^(3). We define this in terms of a massless finite-volume coupling ḡ²_GF(μ), whose definition slightly differs from that of Eq. (51) [49,50], where as before we set c = 0.3. The main differences with respect to the definition in Eq. (51) are that the coupling is evaluated at vanishing (renormalized) quark masses and the temporal extent is shorter, i.e. T = L. The reason for considering this specific definition is that its non-perturbative running in the N_f = 3 theory is known very precisely in the range of scales μ ≈ 0.2–4 GeV (see Ref. [50] and Sect. 4.1). This gives us the freedom to choose for μ_dec^(3) the most convenient value in this range. Its physical units can in fact be inferred by combining the knowledge of the non-perturbative β-function of ḡ^(3)_GF(μ) [50] and the physical scales determined in large-volume hadronic simulations, given by a combination of pion and kaon decay constants [13,91]. In this study, the decoupling scale is specified by fixing the value of the massless GF-coupling, ḡ²_GF(μ_dec^(3)) = u_0 (Eq. (54)), which, using the information mentioned above, is found to correspond to μ_dec^(3) ≈ 790 MeV (Eq. (55)). This scale is convenient in practice as, given our choice of lattice resolutions (see below), it allows us to simulate at values of the lattice spacing appreciably smaller than those typically accessible to large-volume simulations. As we shall see, this enables us to simulate quark masses up to M ≈ 4M_c, while keeping aM effects under good control. Additionally, we can profit from some perturbative information in the treatment of O(aM) effects, which is expected to be accurate enough at the values of the bare coupling corresponding to the relevant lattice spacings. Lastly, the value of μ_dec^(3) is low enough in energy that only a very limited part of the running in N_f = 3 QCD is needed in order to connect it to the physical scales and set its physical units.
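A quick consistency check of the scales involved (taking μ_dec^(3) ≈ 0.79 GeV, the approximate value implied by the z and M ranges quoted in this study, with L_dec = 1/μ_dec):

```python
# With mu_dec^(3) ~ 0.79 GeV and L_dec = 1/mu_dec, the simulated range
# z = M * L_dec translates directly into RGI masses M = z * mu_dec.
mu_dec = 0.79  # GeV, approximate decoupling scale (inferred, see text)
Ms = [z * mu_dec for z in (2, 4, 8)]
print([f"{M:.2f} GeV" for M in Ms])   # from about M_c up to roughly 4*M_c
```

This makes explicit why a decoupling scale of several hundred MeV is convenient: even z = 8 corresponds to quark masses that remain well below the cutoffs reachable in small volumes.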

Determinations of the massive coupling
The next step in the strategy is to determine the value of the coupling ḡ²_GFT(μ_dec^(3), M) for some large quark masses. To this end, we must evaluate the coupling for several values of the lattice spacing at fixed μ_dec^(3) and given M, and extrapolate it to the continuum limit.
The condition in Eq. (54) defines the line of constant physics along which μ_dec^(3) is constant. Given a set of lattice sizes L/a, by tuning the bare coupling g_0 so that the massless GF-coupling takes the prescribed value u_0, we can identify the values of a for which L = L_dec is fixed in physical units as a/L → 0. In this respect, note that in order to compute the massive coupling ḡ²_GFT(μ_dec^(3), M) and the massless coupling ḡ²_GF(μ_dec^(3)) at matching values of the lattice spacing up to O((aM)²) corrections, the two must be evaluated at the same value of the O(a)-improved bare coupling g̃_0 [101]. The latter, we recall, is defined as [101]

g̃_0² = g_0²(1 + b_g(g_0) a m_q),  m_q = m_0 − m_cr,   (56)

where m_0 is the bare quark mass, m_cr is its critical value at which the (renormalized) quark masses vanish, and b_g(g_0) is a function of the bare coupling to be determined. In the massless theory m_q = 0, and the improved bare coupling coincides with g_0. The values of g_0 determined from the condition (54) in terms of the massless coupling therefore specify the values of g̃_0 at which the massive couplings should be evaluated. According to the Symanzik improvement programme, the coefficient b_g(g_0) can be tuned in order to remove O(am_q) effects in the matching between the massless and massive renormalization schemes. At present, however, this is only known to 1-loop order in lattice perturbation theory, where b_g(g_0) = 0.012 N_f g_0² + O(g_0⁴) [41,101]. Together with the bare coupling, the bare quark masses must be set in order to guarantee a given value of the RGI mass M in the continuum limit.
This is achieved by considering a value of z = M L_dec and by solving, for a given lattice size L_dec/a, the following equation for the bare quark masses (Eq. (57)):

z = M L_dec = (L_dec/a) × (M/m_SF(μ_dec^(3))) × Z_m(g_0, a/L) × a m̃_q,

where m̃_q ≡ m_q(1 + b_m(g_0) a m_q) is the O(a)-improved definition of the bare quark mass, which replaces the regular bare mass m_q in order to eliminate O(am_q) effects in massive schemes [101]. To this end, the function b_m(g_0) must be properly chosen. The function Z_m(g_0, a/L) is the renormalization constant that relates the bare quark mass to the renormalized quark mass m_SF(μ) in the SF-scheme of Refs. [127-130]. This, together with the improvement coefficient b_m and the critical mass m_cr, is known non-perturbatively for the relevant parameters (see Ref. [14]). The matching factor M/m_SF(μ_dec^(3)) then allows us to convert the renormalized quark mass in the SF-scheme at the scale μ_dec^(3) to the RGI mass M; it can be obtained from the results of Ref. [130]. Once a m̃_q for the given z is known from Eq. (57) at the values of g̃_0 given by the condition Eq. (54), using the 1-loop result for b_g we can infer from Eq. (56) the values of g_0 at which the massive couplings must be computed in simulations [14].
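The size of the 1-loop improvement of the bare coupling can be gauged numerically (g_0² below is an arbitrary illustrative value, not one of the simulation points of Ref. [14]):

```python
# 1-loop improved bare coupling: g0_tilde^2 = g0^2 * (1 + b_g(g0) * a*m_q),
# with b_g(g0) = 0.012 * Nf * g0^2 at 1 loop.
nf = 3
g0_sq = 1.6                               # illustrative bare coupling squared
bg = 0.012 * nf * g0_sq
shifts = {amq: g0_sq * (1 + bg * amq) for amq in (0.05, 0.1, 0.2)}
for amq, g0t_sq in shifts.items():
    print(f"a m_q = {amq}: g0_tilde^2 = {g0t_sq:.4f}")
```

Even for sizeable a m_q the shift stays at the percent level or below, which is why the 1-loop knowledge of b_g is expected to be adequate at the fine lattice spacings used here.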
Having set the bare parameters, we can finally evaluate the continuum-limit functions

Ψ_M(u_0, z) = lim_{a/L→0} ḡ²_GFT(μ_dec^(3), M)|_{ḡ²_GF(μ_dec^(3))=u_0, M L_dec=z}.   (59)

In Fig. 13 the results for the extrapolations in Eq. (59) are shown. Several values of z have been considered, ranging from z ≈ 2–8, which correspond to RGI masses M ≈ 1.6–6.3 GeV. The different values of z will allow us to assess the size of the non-perturbative corrections to decoupling in Eq. (43). The range of lattice sizes considered is L/a = 12–32.

Fig. 13 Continuum limit extrapolations of the massive couplings (Eq. (59)) [14]. Two cuts, (aM)² < 1/8, 1/4, are applied in order to estimate the systematic uncertainties in the extrapolations
As one can see from the figure and as expected, the continuum limit extrapolations become more challenging as z becomes larger. However, at the smaller values of a/L, the data seem to be well described by O(a²) discretization errors. In order to assess systematic effects in the extrapolations, fits with different cuts in aM have been considered, specifically (aM)² < 1/8, 1/4. The results from the different fits are compatible, with the results for (aM)² < 1/8 having significantly larger errors at large values of z, where fewer points are left after the cut is imposed. We take as final results those with the cut (aM)² < 1/8. In order to apply the master formula, the running of the pure-gauge coupling in the GFT-scheme, ḡ^(0)_GFT(μ), should be known. This, however, has never been computed. On the other hand, the running of the pure-gauge coupling in the GF-scheme, ḡ^(0)_GF(μ), is known very precisely [33]. In order to resolve the issue, all we have to do is match non-perturbatively the GFT- and GF-schemes in the pure Yang-Mills theory. More precisely, for each value g_M = ḡ^(0)_GFT(μ_dec^(0)) we need to determine the corresponding value of the GF-coupling through this non-perturbative scheme matching.
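The mechanics of the cut fits can be sketched as follows (synthetic numbers, not the data of Ref. [14]): fit ḡ²_GFT against (a/L)² using only points that pass a given (aM)² cut.

```python
import numpy as np

# Continuum-limit fits with a cut in (aM)^2: y = c0 + c1*(a/L)^2, keeping
# only points with (a*M)^2 below the cut. Data below are synthetic.
L_over_a = np.array([12, 16, 20, 24, 32], dtype=float)
z = 8.0                                  # illustrative: the largest mass
aM = z / L_over_a                        # aM = z * (a/L), since L_dec*M = z
x = (1.0 / L_over_a) ** 2
y = 5.0 + 2.0 * x + 0.05 * (z**2 * x) ** 2   # synthetic data, (aM)^4 artefact

results = {}
for cut in (0.25, 0.125):                # (aM)^2 < 1/4, 1/8
    keep = aM**2 < cut
    if keep.sum() < 2:
        continue                          # too few points for a linear fit
    A = np.vstack([np.ones(keep.sum()), x[keep]]).T
    c, *_ = np.linalg.lstsq(A, y[keep], rcond=None)
    results[cut] = c[0]
print(results)                            # continuum values for each cut
```

The tension here mirrors the one described in the text: the stricter cut suppresses the residual (aM) artefacts, but at large z it discards most of the points, inflating the statistical error of the extrapolation.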

Results for Λ^(3)_MS/μ^(3)_dec
Given this relation we can compute ρ ≡ Λ^(3)_MS/μ^(3)_dec, where the function χ(ḡ_MS), which relates the value of the coupling to the Λ-parameter, is evaluated at 5-loop order. In this respect we note that, as expected from the discussion in Sect. 3.2.2, the perturbative uncertainties in ρ estimated from the effect of the last known terms of P^PT_{0,3} are completely negligible compared to the other sources of uncertainty (cf. Table 2 of Ref. [14]). As one can see from the figure, excluding the point at z ≈ 2, the non-perturbative corrections to decoupling are small. At larger z values they are compatible with O(z^{-2}) scaling, indicating that the O(z^{-1}) corrections due to the SF boundary conditions are subdominant. For values of M ≈ 6.3 GeV (i.e. z = 8) the estimated ρ agrees well with the full N_f = 3-flavor theory result for Λ^(3)_MS/μ^(3)_dec, where Λ^(3)_MS is given by the FLAG average value [3] and μ^(3)_dec is taken from Eq. (55). If one attempts a z → ∞ extrapolation of the data, the agreement becomes even better.
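For orientation, the universal 2-loop part of the relation Λ/μ = χ(ḡ(μ)) can be written in closed form; Ref. [14] uses the full 5-loop MS-bar expression, so the truncation below is for illustration only:

```python
import math

def chi_2loop(gbar_sq, n_f):
    """2-loop (universal) approximation to Lambda/mu = chi(gbar), as a
    function of the coupling gbar^2(mu) with n_f quark flavors."""
    b0 = (11.0 - 2.0 * n_f / 3.0) / (4.0 * math.pi) ** 2
    b1 = (102.0 - 38.0 * n_f / 3.0) / (4.0 * math.pi) ** 4
    return (b0 * gbar_sq) ** (-b1 / (2.0 * b0 ** 2)) \
        * math.exp(-1.0 / (2.0 * b0 * gbar_sq))
```

At this order Λ/μ is a monotonically increasing function of the coupling, which is why a precise coupling at μ^(3)_dec translates directly into a precise ρ.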

Summary and miscellaneous remarks
The results of Ref. [14] put on solid grounds the application of the decoupling relation Eq. (43) as a novel strategy to determine the QCD coupling from lattice QCD. The remarkable feature of this approach is that the non-perturbative running of the coupling from the low-energy scale μ dec up to high energy is done entirely in the pure-gauge theory. This opens up the possibility to significantly reduce the current error on α s (cf. Sects. 5 and 4.1).
In order to translate the results at the decoupling scale μ^(3)_dec into a determination of Λ^(3)_MS/μ^(3)_dec, the strategy relies on two crucial ingredients. The first ingredient is the computation of a massive coupling at the scale μ^(3)_dec. Precise results for the massive coupling can be achieved by employing a suitable finite-volume scheme: thanks to the fact that the physical volume does not need to be large, small lattice spacings can be simulated, and safe continuum limit extrapolations of the coupling can be performed. The second ingredient is the precise knowledge of the non-perturbative running of the coupling in the pure-gauge theory [33].

[Fig. 14: Results for ρ = Λ^(3)_MS/μ^(3)_dec determined from the decoupling relation, Eq. (43) [14]. As z = M/μ^(3)_dec gets larger, the results for ρ approach the FLAG value for Λ^(3)_MS/μ^(3)_dec. (Note that μ^(3)_dec gives a negligible contribution to the uncertainty of Λ^(3)_MS/μ^(3)_dec from FLAG.) The uncertainties on ρ may be further reduced with modest computational effort by improving the determination of Λ^(0)_MS/μ^(0)_dec from Ref. [33]. The data both at finite z and the extrapolations for z → ∞ have been slightly shifted horizontally for better clarity.]

From the results of Fig. 14 we can appreciate how the strategy based on decoupling promises great accuracy. The z → ∞ extrapolations in fact give results for ρ which are about a factor 2 more precise than those obtained using the current FLAG estimate for Λ^(3)_MS and μ^(3)_dec from Eq. (55). In order to set this result on firmer grounds, however, a robust z → ∞ extrapolation must be performed. To this end, it is important that the continuum limit extrapolations for the massive couplings Ψ_M(u_0, z) are made more solid at the largest (most relevant) z values by investing some additional computational effort. The lattices used for the computations entering Fig. 13 are in fact rather modest: the largest simulated lattices have L/a = 32, T/a = 64. In addition, the large quark masses make these simulations significantly cheaper than the more common massless SF simulations. A factor of two finer lattices is hence within reach with affordable computational resources.
These lattices will allow us to consider larger quark masses too, and thus improve even further the control on the z → ∞ extrapolations. All in all, we can expect that after these improvements the final determination for z → ∞ will be at least as precise as the results in Fig.  14 promise, but will include conservative estimates for all systematics.
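The z → ∞ extrapolation itself is a weighted linear fit in 1/z², assuming the O(z^{-2}) leading decoupling corrections discussed above. The sketch below, with made-up inputs in the accompanying example, is illustrative only:

```python
import math

def extrapolate_to_infinite_z(zs, rhos, sigmas):
    """Weighted least-squares fit rho(z) = rho_inf + c/z^2.
    Returns (rho_inf, c, statistical error on rho_inf)."""
    xs = [1.0 / z**2 for z in zs]
    ws = [1.0 / s**2 for s in sigmas]
    W = sum(ws)
    xbar = sum(w * x for w, x in zip(ws, xs)) / W
    ybar = sum(w * y for w, y in zip(ws, rhos)) / W
    sxx = sum(w * (x - xbar) ** 2 for w, x in zip(ws, xs))
    c = sum(w * (x - xbar) * (y - ybar)
            for w, x, y in zip(ws, xs, rhos)) / sxx
    rho_inf = ybar - c * xbar
    err = math.sqrt(1.0 / W + xbar**2 / sxx)  # error on the intercept
    return rho_inf, c, err
```

Varying the fit ansatz (e.g. adding a z^{-1} term to probe the SF boundary effects) is the natural way to assign a systematic error to the extrapolated ρ.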
It is important to note at this point that a significant fraction of the error on ρ at finite z comes from the uncertainty on Λ^(0)_MS/μ^(0)_dec from Ref. [33], which is about 1.5% (cf. Fig. 14). A reduction of this error down to 0.5% or so is desirable and in principle possible. Given the importance of the result for the determination of Λ^(3)_MS, however, it is crucial that this error reduction is achieved robustly. As discussed in Sect. 2.3.2, even though the determination of Λ in the pure Yang–Mills theory is very much simplified from the computational point of view compared to QCD, the problem is still non-trivial and care must be taken, especially if such a high precision is desired. For this reason, it is mandatory that the results for Λ^(0)_MS are corroborated by investigating different strategies where the estimates of systematic uncertainties are put to a stringent test. As we have seen, there is currently tension among different determinations of Λ^(0)_MS, some of which quote the desired sub-percent precision (cf. Sect. 2.3.2 and Ref. [33]). Studies such as those of Refs. [34,35] are hence encouraged in order to establish the actual accuracy at which we currently know Λ^(0)_MS. In this respect we point out the results of Ref. [35], where an alternative way to do step-scaling for GF-based couplings was proposed and tested. In short, the change of renormalization scale in the coupling is first achieved by changing the flow time at fixed physical volume, and in a second step the physical volume is changed at fixed flow time. This has to be compared with the traditional situation where both flow time and spatial size are changed at once. One of the interesting features of the approach is that it amounts to a reanalysis of data gathered from a traditional step-scaling study. However, the systematics to deal with are quite independent, given the different continuum limit extrapolations involved.
By comparing the two analyses one can stringently test the assumptions made in one or the other approach. Furthermore, it would be interesting to employ other definitions of finite-volume schemes based on either different observables and/or set-ups. For instance, the GF-coupling with twisted boundary conditions explored in Refs. [106,131–133] is promising. Unlike the SF case, it enjoys full translational invariance, and yet its perturbative expansion in finite volume appears feasible [134]. Of course, standard periodic boundary conditions are also an option [48], despite the difficulties with perturbation theory in this set-up. In fact, in cases where the perturbative information is limited or the relevant perturbative expansion is poorly convergent, a viable option is to non-perturbatively match the given finite-volume scheme to some other scheme for which the perturbative β-function is known to high-loop order and is well behaved (see e.g. Refs. [33,50]). This may allow for a significantly more precise determination of Λ^(0)_MS (cf. Sect. 2.3.2). In this respect, we note that a powerful framework for automated numerical high-loop calculations in finite volume has been recently developed and successfully applied [51,135].
Another idea that may be interesting to consider is the determination of Λ^(0)_MS based on the infinite-volume β-function of GF-based couplings, following the strategy of Refs. [136–138] (see also Ref. [103]). In this approach, the infinite-volume results are obtained by extrapolations from small-volume simulations, which might already be at hand from a conventional step-scaling study. If the (non-trivial) infinite-volume extrapolations can be performed in a controlled way and the convergence to the perturbative regime of the chosen scheme is fast enough, this strategy may allow for interesting cross-checks of the results from step-scaling. In this case, the framework developed in Ref. [139] can be used to obtain the necessary infinite-volume perturbative information to high-loop order.
Besides applying different strategies for the calculation of Λ^(0)_MS, different techniques can be considered for the QCD part of the decoupling strategy as well. A simple extension is to consider different schemes for the massive finite-volume coupling. In the case of N_f = 3 QCD, periodic and twisted boundary conditions would avoid entirely the issue with O(M^{-1}) contaminations. For N_f = 4 QCD twisted boundary conditions cannot be implemented [140], but twisted-mass fermions with SF boundary conditions or regular massive quarks with chirally rotated boundary conditions are available options (cf. Sect. 4.2.3).
A substantially different approach is to avoid entirely finite-volume couplings and rely on heavy-quark decoupling in hadronic quantities. Particularly interesting observables to consider are the popular gluonic scales derived from the gradient flow, such as those based on t_0 and w_0. As discussed in Sect. 4.2.2, it might be difficult to reach large masses M with this approach while keeping discretization errors and finite-volume effects under control. Some compromises are likely necessary in order to reach high enough M values to be able to control decoupling corrections. On the other hand, the studies of Refs. [71,84] show that interesting results may be obtained if masses close to that of the charm can be reliably reached. This makes the strategy worth exploring.

Conclusions
In this contribution we presented a novel strategy for the determination of the QCD coupling using lattice QCD. It exploits the decoupling of heavy quarks at low energy to connect the pure Yang–Mills theory and QCD with N_f flavors of quarks. The main result is that the computation of the running of the coupling from a known low-energy scale μ_dec = O(1 GeV) up to high energies can be done entirely in the pure-gauge theory instead of N_f-flavor QCD. Considering N_f = 3 or 4, this paves the way for unprecedented precision determinations of Λ^(N_f)_MS, from which Λ^(5)_MS and α_s can be obtained. In Ref. [14] the potential of these methods was successfully established in the determination of Λ^(3)_MS. We now want to put this result into the context of a future precision α_s extraction.
As presented in Sect. 4.3, the results for Λ^(3)_MS/μ^(3)_dec from decoupling have an uncertainty which is about half of that obtained using the FLAG average for Λ^(3)_MS (cf. Fig. 14). As discussed in Sect. 4.4, by investing some modest computational effort, this result can be set on very solid grounds by improving the continuum limits of the massive couplings (cf. Fig. 13) and performing a robust z → ∞ extrapolation (cf. Fig. 14). Considering lattices twice as large as the ones simulated in Ref. [14] is in fact affordable. With such lattices we can expect that the continuum limit extrapolations of Ψ_M(u_0, z) in Eq. (59) can be performed with confidence also at the largest masses investigated so far (M ≈ 6 GeV). In addition, we will be able to consider some larger z values, too.
The precision on Λ^(3)_MS/μ^(3)_dec can be further improved significantly by reducing the uncertainties coming from the pure-gauge determination of Λ^(0)_MS/μ^(0)_dec, which has a (conservative) error of 1.5% [33]. As noticed in Sect. 4.4, this error can in principle be reduced by a substantial factor, e.g. down to 0.5%. However, while it is certainly possible to reach such a high precision in a given computation, it is crucial that the results of different analyses and groups corroborate it. At present, there is in fact tension among determinations of Λ^(0)_MS involving results with sub-percent accuracy (cf. Sect. 2.3.2). It is important to understand the origin of these differences. We hope that the renewed interest in this quantity brought by this new strategy motivates the community to resolve the issue and contribute to a high-precision determination.
Once the above steps are achieved, the precision on Λ^(3)_MS will be limited by the present error on μ^(3)_dec, which is about 2%. A reduction of this error down to 1% is however foreseeable. It requires, first of all, improving the results for the running of the GF-coupling ḡ^(3)_GF(μ) at energies μ < μ_dec [50]. We recall that this is needed in order to connect μ^(3)_dec with the hadronic scales μ^(3)_phys used to set the physical units of the theory (cf. Sect. 4.3). Work in this direction has already started as part of the HQET efforts by the ALPHA Collaboration (cf. Ref. [141]). Secondly, the scale setting in terms of the physical scales μ^(3)_phys must be improved as well. In practice, this means obtaining a more precise determination of a convenient low-energy reference scale in N_f = 3 QCD in physical units, like, for example, μ*_ref = 1/√(8t*_0) (cf. Ref. [91]). A precision of 1% or better on this or similar scales is desirable. This is expected to be possible by exploiting the new CLS ensembles close to the physical point [20,92,142]. Also in this case, however, corroboration from different strategies and groups is important.
Through all these steps a determination of Λ^(3)_MS with a final uncertainty of 1–2% appears feasible. As discussed in Sect. 3.3.2, at this level of precision Λ^(5)_MS can still be obtained from Λ^(3)_MS by relying on the perturbative decoupling of the charm quark, possibly including some conservative estimate for the unaccounted non-perturbative corrections. A determination of α_s(M_Z) at the level of 0.4% is therefore within reach thanks to the novel techniques. To further halve the error on α_s(M_Z), on the other hand, requires several issues to be reconsidered: non-perturbative decoupling effects might become relevant, and one might need to include electromagnetic and m_u ≠ m_d effects in the lattice computations in order to set the physical scale of the theory to a greater level of accuracy (cf. Sect. 3.3.2 and see also the discussion in Ref. [15]). Before concluding we want to note that, even though our emphasis was on the determination of Λ^(N_f)_MS, the ideas presented can be extended to solve other RG problems. A clear case is that of the quark masses, where one can replace their running in N_f-flavor QCD with the one in the quenched approximation. In Ref. [86] a similar application was in fact explored in order to study the non-perturbative charm-quark effects in the determination of the charm-quark mass itself. More complicated composite operators, like for instance four-quark operators, require more thought. First of all, a study of the quality of their perturbative decoupling relations is necessary in order to establish whether the strategy has any chance to be applied in the first place. Then, an investigation of the non-perturbative decoupling corrections must follow.
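To give a feel for the final step, the sketch below inverts the 2-loop approximation of Λ/μ = χ(ḡ) to obtain α_s(M_Z) from a given Λ^(5)_MS. A real analysis would use the 5-loop running, and the input value of 0.21 GeV in the usage example is only a placeholder of the right order of magnitude:

```python
import math

def chi_2loop(gbar_sq, n_f):
    """2-loop (universal) approximation to Lambda/mu = chi(gbar)."""
    b0 = (11.0 - 2.0 * n_f / 3.0) / (4.0 * math.pi) ** 2
    b1 = (102.0 - 38.0 * n_f / 3.0) / (4.0 * math.pi) ** 4
    return (b0 * gbar_sq) ** (-b1 / (2.0 * b0 ** 2)) \
        * math.exp(-1.0 / (2.0 * b0 * gbar_sq))

def alpha_s(mu, lam, n_f, lo=0.5, hi=4.0, tol=1e-12):
    """Solve Lambda/mu = chi(gbar^2) for gbar^2(mu) by bisection
    (chi is monotonically increasing) and return alpha_s = gbar^2/(4 pi)."""
    target = lam / mu
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if chi_2loop(mid, n_f) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi) / (4.0 * math.pi)
```

The percent-level error on Λ^(5)_MS shrinks in α_s(M_Z) because Λ enters only logarithmically in the running of the coupling, which is why a 1–2% determination of Λ translates into a sub-half-percent α_s(M_Z).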
In conclusion, we can affirm that the decoupling of heavy quarks rightfully enters the renormalization toolkit of the lattice field theorist. Many more applications of these powerful ideas in lattice QCD, and in lattice field theory in general, are likely to come.