Abstract
We review lattice results related to pion, kaon, \(D\)- and \(B\)-meson physics with the aim of making them easily accessible to the particle-physics community. More specifically, we report on the determination of the light-quark masses, the form factor \(f_+(0)\), arising in semileptonic \(K \rightarrow \pi \) transitions at zero momentum transfer, as well as the ratio \(f_K/f_\pi \) of decay constants and its consequences for the CKM matrix elements \(V_{us}\) and \(V_{ud}\). Furthermore, we describe the results obtained on the lattice for some of the low-energy constants of \(\hbox {SU}(2)_L\times \hbox {SU}(2)_R\) and \(\hbox {SU}(3)_L\times \hbox {SU}(3)_R\) Chiral Perturbation Theory and review the determination of the \(B_K\) parameter of neutral kaon mixing. The inclusion of heavy-quark quantities significantly expands the scope of FLAG with respect to the previous review. Therefore, we focus here on \(D\)- and \(B\)-meson decay constants, form factors, and mixing parameters, since these are most relevant for the determination of CKM matrix elements and the global CKM unitarity-triangle fit. In addition, we review the status of lattice determinations of the strong coupling constant \(\alpha _\mathrm{s}\).
1 Introduction
Flavour physics provides an important opportunity for exploring the limits of the Standard Model of particle physics and for constraining possible extensions that go beyond it. As the LHC explores a new energy frontier and as experiments continue to extend the precision frontier, the importance of flavour physics will grow, both in terms of searches for signatures of new physics through precision measurements and in terms of attempts to unravel the theoretical framework behind direct discoveries of new particles. A major theoretical limitation consists in the precision with which strong interaction effects can be quantified. Large-scale numerical simulations of lattice QCD allow for the computation of these effects from first principles. The scope of the Flavour Lattice Averaging Group (FLAG) is to review the current status of lattice results for a variety of physical quantities in low-energy physics. Set up in November 2007,^{Footnote 1} it comprises experts in Lattice Field Theory and Chiral Perturbation Theory. Our aim is to provide an answer to the frequently posed question “What is currently the best lattice value for a particular quantity?”, in a way which is readily accessible to non-lattice experts. This is generally not an easy question to answer; different collaborations use different lattice actions (discretisations of QCD) with a variety of lattice spacings and volumes, and with a range of masses for the \(u\)- and \(d\)-quarks. Not only are the systematic errors different, but also the methodology used to estimate these uncertainties varies between collaborations. In the present work we summarise the main features of each of the calculations and provide a framework for judging and combining the different results. Sometimes it is a single result which provides the “best” value; more often it is a combination of results from different collaborations.
Indeed, the consistency of values obtained using different formulations adds significantly to our confidence in the results.
The first edition of the FLAG review was published in 2011 [1]. It was limited to lattice results related to pion and kaon physics: light-quark masses (\(u\), \(d\) and \(s\) flavours), the form factor \(f_+(0)\) arising in semileptonic \(K \rightarrow \pi \) transitions at zero momentum transfer and the decay-constant ratio \(f_K/f_\pi \), as well as their implications for the CKM matrix elements \(V_{us}\) and \(V_{ud}\). Furthermore, results were reported for some of the low-energy constants of \(\hbox {SU}(2)_L \otimes \hbox {SU}(2)_R\) and \(\hbox {SU}(3)_L \otimes \hbox {SU}(3)_R\) Chiral Perturbation Theory and the \(B_K\) parameter of neutral kaon mixing. Results for all of these quantities have been updated in the present paper. Moreover, the scope of the present review has been extended by including lattice results related to \(D\)- and \(B\)-meson physics. We focus on \(B\)- and \(D\)-meson decay constants, form factors, and mixing parameters, which are most relevant for the determination of CKM matrix elements and the global CKM unitarity-triangle fit. Last but not least, the current status of lattice results on the QCD coupling \(\alpha _\mathrm{s}\) is also reviewed. Bottom- and charm-quark masses, though important parametric inputs to Standard Model calculations, have not been covered in the present edition. They will be included in a future FLAG report.
Our plan is to continue providing FLAG updates, in the form of a peer-reviewed paper, roughly on a biennial basis. This effort is supplemented by our more frequently updated website http://itpwiki.unibe.ch/flag, where figures as well as pdf files for the individual sections can be downloaded. The papers reviewed in the present edition appeared before the closing date of 30 November 2013.
Finally, we draw attention to a particularly important point. As stated above, our aim is to make lattice QCD results easily accessible to non-lattice experts, and we are well aware that it is likely that some readers will only consult the present paper and not the original lattice literature. We consider it very important that this paper should not be the only one cited when the lattice results discussed and analysed here are quoted. Readers who find the review and compilations offered in this paper useful are therefore kindly requested to also cite the original sources. The bibliography at the end of this paper should make this task easier. Indeed, we hope that the bibliography will be one of the most widely used elements of the whole paper.
This review is organised as follows. In the remainder of Sect. 1 we summarise the composition and rules of FLAG, describe the goals of the FLAG effort and general issues that arise in modern lattice calculations. For the reader’s convenience, Table 1 summarises the main results (averages and estimates) of the present review. In Sect. 2 we explain our general methodology for evaluating the robustness of lattice results which have appeared in the literature. We also describe the procedures followed for combining results from different collaborations in a single average or estimate (see Sect. 2.2 for our use of these terms). The rest of the paper consists of sections, each of which is dedicated to a single (or groups of closely connected) physical quantity(ies). Each of these sections is accompanied by an Appendix with explicatory notes.
1.1 FLAG enlargement
Upon completion of the first review, it was decided to extend the project by adding new physical quantities and co-authors. FLAG became more representative of the lattice community, both in terms of the geographical location of its members and the lattice collaborations to which they belong. At the time, a parallel effort had been made [2, 3]; the two efforts have now merged in order to provide a single source of information on lattice results to the particle-physics community.
The experience gained in managing the activities of a medium-sized group of co-authors taught us that it was necessary to have a more formal structure and a set of rules by which all concerned had to abide, in order to make the inner workings of FLAG function smoothly. The collaboration presently consists of an Advisory Board (AB), an Editorial Board (EB), and seven Working Groups (WG). The rôle of the Advisory Board is that of general supervision and consultation. Its members may intervene at any point in the process of drafting the paper, expressing their opinion and offering advice. They also give their approval of the final version of the preprint before it is made public. The Editorial Board coordinates the activities of FLAG, sets priorities and intermediate deadlines, and takes care of the editorial work needed to amalgamate the sections written by the individual working groups into a uniform and coherent review. The working groups concentrate on writing up the review of the physical quantities for which they are responsible, which is subsequently circulated to the whole collaboration for criticisms and suggestions.
The most important internal FLAG rules are the following:

members of the AB have a four-year mandate (to avoid a simultaneous change of all members, some of the current members of the AB will have a shorter mandate);

the composition of the AB reflects the main geographical areas in which lattice collaborations are active: one member comes from America, one from Asia/Oceania and one from Europe;

the mandate of regular members is not limited in time, but we expect that a certain turnover will occur naturally;

whenever a replacement becomes necessary this has to keep, and possibly improve, the balance in FLAG;

in all working groups the three members must belong to three different lattice collaborations;^{Footnote 2}

a paper is in general not reviewed (nor colour-coded, as described in the next section) by one of its authors;

lattice collaborations not represented in FLAG will be asked to check whether the colour coding of their calculation is correct.
The current list of FLAG members and their Working Group assignments is:
\(\bullet \) Advisory Board (AB): S. Aoki, C. Bernard, C. Sachrajda
\(\bullet \) Editorial Board (EB): G. Colangelo, H. Leutwyler,
A. Vladikas, U. Wenger
\(\bullet \) Working Groups (WG)
(each WG coordinator is listed first):

Quark masses: L. Lellouch, T. Blum, V. Lubicz

\(V_{us},V_{ud}\): A. Jüttner, T. Kaneko, S. Simula

LEC: S. Dürr, H. Fukaya, S. Necco

\(B_K\): H. Wittig, J. Laiho, S. Sharpe

\(f_{B_{(s)}}\), \(f_{D_{(s)}}\), \(B_B\): A. El-Khadra, Y. Aoki, M. Della Morte

\(B_{(s)}\), \(D\) semileptonic and radiative decays: R. Van de Water, E. Lunghi, C. Pena, J. Shigemitsu^{Footnote 3}

\(\alpha _\mathrm{s}\): R. Sommer, R. Horsley, T. Onogi
1.2 General issues and summary of the main results
The present review aims at two distinct goals:

(a)
offer a description of the work done on the lattice concerning lowenergy particle physics;

(b)
draw conclusions on the basis of that work, which summarise the results obtained for the various quantities of physical interest.
The core of the information about the work done on the lattice is presented in the form of tables, which not only list the various results, but also describe the quality of the data that underlie them. We consider it important that this part of the review represents a generally accepted description of the work done. For this reason, we explicitly specify the quality requirements used and provide sufficient details in the appendices so that the reader can verify the information given in the tables.
The conclusions drawn on the basis of the available lattice results, on the other hand, are the responsibility of FLAG alone. We aim at staying on the conservative side and in several cases reach conclusions which are more cautious than what a plain average of the available lattice results would give, in particular when this is dominated by a single lattice result. An additional issue occurs when only one lattice result is available for a given quantity. In such cases one does not have the same degree of confidence in results and errors as one has when there is agreement among many different calculations using different approaches. Since this degree of confidence cannot be quantified, it is not reflected in the quoted errors, but it should be kept in mind by the reader. At present, the issue of having only a single result occurs much more often in heavy-quark physics than in light-quark physics. We are confident that the heavy-quark calculations will soon reach the state that pertains in light-quark physics.
Several general issues concerning the present review are thoroughly discussed in Sect. 1.1 of our initial paper [1] and we encourage the reader to consult the relevant pages. In the remainder of the present section, we focus on a few important points.
Each discretisation has its merits but also its shortcomings. For the topics covered already in the first edition of the FLAG review, we have by now a remarkably broad database, and for most quantities lattice calculations based on totally different discretisations are now available. This is illustrated by the dense population of the tables and figures shown in the first part of this review. Those calculations which do satisfy our quality criteria indeed lead to consistent results, confirming universality within the accuracy reached. In our opinion, the consistency between independent lattice results, obtained with different discretisations, methods and simulation parameters, is an important test of lattice QCD, and observing such consistency then also provides further evidence that systematic errors are fully under control.
In the sections dealing with heavy quarks and with \(\alpha _\mathrm{s}\), the situation is not the same. Since the \(b\)-quark mass cannot be resolved with current lattice spacings, all lattice methods for treating \(b\) quarks use effective field theory at some level. This introduces additional complications not present in the light-quark sector. An overview of the issues specific to heavy-quark quantities is given in the introduction of Sect. 8. For \(B\)- and \(D\)-meson leptonic decay constants, there already exist a good number of different independent calculations that use different heavy-quark methods, but there are only one or two independent calculations of semileptonic \(B\)- and \(D\)-meson form factors and \(B\)-meson mixing parameters. For \(\alpha _\mathrm{s}\), most lattice methods involve a range of scales that need to be resolved, and controlling the systematic error over a large range of scales is more demanding. The issues specific to determinations of the strong coupling are summarised in Sect. 9.
The lattice spacings reached in recent simulations go down to 0.05 fm or even smaller. In that region, growing autocorrelation times slow down the sampling of the configurations [4–8]. Many groups check for autocorrelations in a number of observables, including the topological charge, for which a rapid growth of the autocorrelation time is observed if the lattice spacing becomes small. In the following, we assume that the continuum limit can be reached by extrapolating the existing simulations.
Lattice simulations of QCD currently involve at most four dynamical quark flavours. Moreover, most of the data concern simulations for which the masses of the two lightest quarks are set equal. This is indicated by the notation \(N_\mathrm{f}=2+1+1\) which, in this case, denotes a lattice calculation with four dynamical quark flavours and \(m_{u} = m_{d} \ne m_{s} \ne m_{c}\). Note that calculations with \(N_\mathrm{f}=2\) dynamical flavours often include strange valence quarks interacting with gluons, so that bound states with the quantum numbers of the kaons can be studied, albeit neglecting strange sea quark fluctuations. The quenched approximation (\(N_\mathrm{f}=0\)), in which the sea quarks are treated as a mean field, is no longer used in modern lattice simulations. Accordingly, we will review results obtained with \(N_\mathrm{f}=2\), \(N_\mathrm{f}=2+1\) and \(N_\mathrm{f} = 2+1+1\), but we omit earlier results with \(N_\mathrm{f}=0\). On the other hand, the dependence of the QCD coupling constant \(\alpha _\mathrm{s}\) on the number of flavours is a theoretical issue of considerable interest, and we therefore include results obtained for gluodynamics in the \(\alpha _\mathrm{s}\) section. We stress, however, that only results with \(N_\mathrm{f} \ge 3\) are used to determine the physical value of \(\alpha _\mathrm{s}\) at a high scale.
The remarkable recent progress in the precision of lattice calculations is due to improved algorithms, better computing resources and, last but not least, conceptual developments, such as improved actions which reduce lattice artefacts, actions which preserve (remnants of) chiral symmetry, understanding finitesize effects, nonperturbative renormalisation, etc. A concise characterisation of the various discretisations that underlie the results reported in the present review is given in Appendix A.1.
Lattice simulations are performed at fixed values of the bare QCD parameters (gauge coupling and quark masses) and physical quantities with mass dimensions (e.g. quark masses, decay constants...) are computed in units of the lattice spacing; i.e. they are dimensionless. Their conversion to physical units requires knowledge of the lattice spacing at the fixed values of the bare QCD parameters of the simulations. This is achieved by requiring agreement between the lattice calculation and experimental measurement of a known quantity, which “sets the scale” of a given simulation. A few details on this procedure are provided in Appendix A.2.
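As a rough illustration of this scale-setting step, the sketch below converts hypothetical dimensionless lattice numbers into physical units by matching one quantity to experiment. All lattice values here are invented for illustration, and the choice of the \(\Omega \)-baryon mass as the input quantity is just one commonly used option, not a prescription of this review.

```python
# Illustrative scale setting: all lattice numbers below are hypothetical.
M_omega_phys_MeV = 1672.45   # experimental Omega-baryon mass
aM_omega = 0.731             # hypothetical lattice result, in units of the spacing a

hbar_c_MeV_fm = 197.327      # conversion constant (hbar * c)

# Matching the lattice result to experiment fixes the lattice spacing.
a_inv_MeV = M_omega_phys_MeV / aM_omega   # inverse lattice spacing in MeV
a_fm = hbar_c_MeV_fm / a_inv_MeV          # lattice spacing in fm

# Any other dimensionless lattice result now converts to physical units:
af_pi = 0.0571                            # hypothetical decay constant in lattice units
f_pi_MeV = af_pi * a_inv_MeV
```

With these invented inputs the spacing comes out near 0.086 fm, i.e. within the range of spacings discussed in this review.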
Several of the results covered by this review, such as quark masses, the gauge coupling, and \(B\)-parameters, are quantities defined in a given renormalisation scheme and scale. The schemes employed are often chosen because of their specific merits when combined with the lattice regularisation. For a brief discussion of their properties, see Appendix A.3. The conversion of the results, obtained in these so-called intermediate schemes, to more familiar regularisation schemes, such as the \({\overline{\mathrm{MS}}}\) scheme, is done with the aid of perturbation theory. It must be stressed that the renormalisation scales accessible in the simulations are subject to limitations, naturally arising in field-theory computations at finite UV and small nonzero IR cutoff. Typically, such scales are of the order of the UV cutoff, or of \(\Lambda _\mathrm{QCD}\), depending on the chosen scheme. To safely match to \({\overline{\mathrm{MS}}}\), a scheme defined in perturbation theory, Renormalisation Group (RG) running to higher scales is performed, either perturbatively or nonperturbatively (the latter using finite-size scaling techniques).
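As a minimal illustration of perturbative RG running between scales, the sketch below implements the one-loop running of the strong coupling. This is not the procedure used in the calculations reviewed here (which may be nonperturbative, via finite-size scaling), and the \(\Lambda \) value and scales are purely indicative.

```python
import math

def alpha_s_one_loop(mu_GeV, Lambda_GeV=0.34, nf=3):
    # One-loop running: alpha_s(mu) = 1 / (b0 * ln(mu^2 / Lambda^2)),
    # with b0 = (33 - 2*nf) / (12*pi). Lambda_GeV is an indicative value.
    b0 = (33 - 2 * nf) / (12 * math.pi)
    return 1.0 / (b0 * math.log(mu_GeV**2 / Lambda_GeV**2))

alpha_low = alpha_s_one_loop(2.0)     # coupling at a low scale
alpha_high = alpha_s_one_loop(100.0)  # smaller at high scales: asymptotic freedom
```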
Because of limited computing resources, lattice simulations are often performed at unphysically heavy pion masses, although results at the physical point have recently become available. Further, numerical simulations must be done at finite lattice spacing. In order to obtain physical results, lattice data are generated at a sequence of pion masses and a sequence of lattice spacings, and then extrapolated to \(M_\pi \approx 135\) MeV and \(a \rightarrow 0\). To control the associated systematic uncertainties, these extrapolations are guided by effective theory. For light-quark actions, the lattice-spacing dependence is described by Symanzik’s effective theory [9, 10]; for heavy quarks, this can be extended and/or supplemented by other effective theories such as Heavy-Quark Effective Theory (HQET). The pion-mass dependence can be parameterised with Chiral Perturbation Theory (\(\chi \)PT), which takes into account the Nambu–Goldstone nature of the lowest excitations that occur in the presence of light quarks; similarly, one can use Heavy-Light Meson Chiral Perturbation Theory (HM\(\chi \)PT) to extrapolate quantities involving mesons composed of one heavy (\(b\) or \(c\)) and one light quark. One can combine Symanzik’s effective theory with \(\chi \)PT to simultaneously extrapolate to the physical pion mass and the continuum; in this case, the form of the effective theory depends on the discretisation. See Appendix A.4 for a brief description of the different variants in use and some useful references.
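The combined extrapolation described above can be sketched with the simplest possible ansatz, \(Q(M_\pi ,a) = c_0 + c_1 M_\pi ^2 + c_2 a^2\), fitted to synthetic data. Real analyses use \(\chi \)PT-motivated fit forms and fully correlated fits, so this is only a schematic illustration with invented numbers.

```python
import numpy as np

# Synthetic data generated from Q = c0 + c1*Mpi^2 + c2*a^2 plus small noise.
rng = np.random.default_rng(1)
Mpi = np.array([200.0, 250.0, 300.0, 350.0, 200.0, 300.0])  # MeV (hypothetical)
a = np.array([0.09, 0.09, 0.09, 0.09, 0.06, 0.06])          # fm (hypothetical)
c0_true, c1_true, c2_true = 1.00, 2.0e-6, 5.0
Q = c0_true + c1_true * Mpi**2 + c2_true * a**2 + rng.normal(0.0, 1e-3, Mpi.size)

# Linear least-squares fit of the three coefficients (c0, c1, c2).
A = np.column_stack([np.ones_like(Mpi), Mpi**2, a**2])
coef, *_ = np.linalg.lstsq(A, Q, rcond=None)

# Extrapolate to the physical point: Mpi = 135 MeV and a -> 0.
Q_phys = coef[0] + coef[1] * 135.0**2
```

The fitted `Q_phys` reproduces the input value \(c_0 + c_1 (135\,\mathrm{MeV})^2\) up to the injected noise, which is the basic logic of a chiral-continuum extrapolation.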
2 Quality criteria
The essential characteristics of our approach to the problem of rating and averaging lattice quantities reported by different collaborations have been outlined in our first publication [1]. Our aim is to help the reader assess the reliability of a particular lattice result without necessarily studying the original article in depth. This is a delicate issue, which may make things appear simpler than they are. However, it safeguards against the common practice of using lattice results, and drawing physics conclusions from them, without a critical assessment of the quality of the various calculations. We believe that, despite the risks, it is important to provide some compact information about the quality of a calculation. However, the importance of the accompanying detailed discussion of the results presented in the bulk of the present review cannot be overstated.
2.1 Systematic errors and colour-coding
In Ref. [1], we identified a number of sources of systematic errors, for which a systematic improvement is possible, and assigned one of three coloured symbols to each calculation: green star, amber disc or red square. The appearance of a red tag, even in a single source of systematic error of a given lattice result, disqualified it from the global averaging. Since results with green and amber tags entered the averages, and since this policy has been retained in the present edition, we have decided to replace the amber disc with a green unfilled circle. Thus the new colour coding is as follows:

Green star: the systematic error has been estimated in a satisfactory manner and convincingly shown to be under control;

Green open circle: a reasonable attempt at estimating the systematic error has been made, although this could be improved;

Red square: no, or a clearly unsatisfactory, attempt at estimating the systematic error has been made.

We stress once more that only results without a red tag in the systematic errors are averaged in order to provide a given FLAG estimate.
The precise criteria used in determining the colour coding are unavoidably time-dependent; as lattice calculations become more accurate, the standards against which they are measured become tighter. For quantities related to the light-quark sector, which were dealt with in the first edition of the FLAG review [1], some of the quality criteria have remained the same, while others have been tightened up. We will compare them to those of Ref. [1], case by case, below. For the newly introduced physical quantities, related to heavy-quark physics, the adoption of new criteria was necessary. This is due to the fact that, in most cases, the discretisation of the heavy-quark action follows a very different approach to that of the light flavours. Moreover, the two Working Groups dedicated to heavy flavours have opted for a somewhat different rating of the extrapolation of lattice results to the continuum limit. Finally, the strong coupling is in a class of its own as far as methods for its computation are concerned, and this led to the introduction of dedicated rating criteria for it.
Of course any colour coding has to be treated with caution; we repeat that the criteria are subjective and evolving. Sometimes a single source of systematic error dominates the systematic uncertainty and it is more important to reduce this uncertainty than to aim for green stars for other sources of error. In spite of these caveats we hope that our attempt to introduce quality measures for lattice results will prove to be a useful guide. In addition we would like to stress that the agreement of lattice results obtained using different actions and procedures evident in many of the tables presented below provides further validation.
For a coherent assessment of the present situation, the quality of the data plays a key role, but the colour coding cannot be carried over to the figures. On the other hand, simply showing all data on equal footing would give the misleading impression that the overall consistency of the information available on the lattice is questionable. As a way out, the figures do indicate the quality in a rudimentary way:

results included in the average;

results that are not included in the average but pass all quality criteria;

all other results.
The reason for not including a given result in the average is not always the same: the paper may fail one of the quality criteria, may not be published, be superseded by other results or not offer a complete error budget. Symbols other than squares are used to distinguish results with specific properties and are always explained in the caption.
There are separate criteria for light-flavour, heavy-flavour, and \(\alpha _\mathrm{s}\) results. In the following, the criteria for the former two are discussed in detail, while those for the \(\alpha _\mathrm{s}\) results are presented separately in Sect. 9.2.
2.1.1 Light-quark physics
The colour code used in the tables is specified as follows:
\(\bullet \) Chiral extrapolation:

Green star: \(M_{\pi ,{\mathrm {min}}}< 200\) MeV

Green open circle: 200 MeV \(\le M_{\pi ,{\mathrm {min}}} \le \) 400 MeV

Red square: 400 MeV \( < M_{\pi ,{\mathrm {min}}}\)

It is assumed that the chiral extrapolation is done with at least a three-point analysis; otherwise this will be explicitly mentioned. Note that, compared to Ref. [1], chiral extrapolations are now treated in a somewhat more stringent manner and the cutoff between green star and green open circle (formerly amber disc), previously set at 250 MeV, is now lowered to 200 MeV.
\(\bullet \) Continuum extrapolation:

Green star: three or more lattice spacings, at least two points below 0.1 fm

Green open circle: two or more lattice spacings, at least one point below 0.1 fm

Red square: otherwise
It is assumed that the action is \(O(a)\)-improved (i.e. the discretisation errors vanish quadratically with the lattice spacing); otherwise this will be explicitly mentioned. Moreover, for non-improved actions an additional lattice spacing is required. This criterion is the same as the one adopted in Ref. [1].
\(\bullet \) Finitevolume effects:

Green star: \(M_{\pi ,{\mathrm {min}}} L > 4\) or at least three volumes

Green open circle: \(M_{\pi ,{\mathrm {min}}} L > 3\) and at least two volumes

Red square: otherwise
These ratings apply to calculations in the \(p\)-regime, and it is assumed that \(L_\mathrm{min}\ge 2\) fm; otherwise this will be explicitly mentioned and a red square will be assigned.
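The numeric cutoffs above can be collected into a small helper that assigns the colour code for the three criteria with quantitative thresholds. The function names and interfaces are our own illustration, not part of the FLAG machinery, and the sketch ignores the caveats stated above (O(a)-improvement, \(p\)-regime, \(L_\mathrm{min}\ge 2\) fm).

```python
# Hedged sketch of the light-quark colour-code thresholds listed above
# (chiral extrapolation, continuum extrapolation, finite volume).

def chiral_rating(mpi_min_mev):
    # Rating based on the minimum pion mass used in the simulation.
    if mpi_min_mev < 200:
        return "green star"
    if mpi_min_mev <= 400:
        return "green open circle"
    return "red square"

def continuum_rating(spacings_fm):
    # Rating based on the set of lattice spacings (O(a)-improved actions assumed).
    below = sum(1 for a in spacings_fm if a < 0.1)
    if len(spacings_fm) >= 3 and below >= 2:
        return "green star"
    if len(spacings_fm) >= 2 and below >= 1:
        return "green open circle"
    return "red square"

def finite_volume_rating(mpi_min_L, n_volumes):
    # Rating based on Mpi_min * L and the number of simulated volumes.
    if mpi_min_L > 4 or n_volumes >= 3:
        return "green star"
    if mpi_min_L > 3 and n_volumes >= 2:
        return "green open circle"
    return "red square"
```

Recall that a single red square in any category removes a result from the averages.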
\(\bullet \) Renormalisation (where applicable):

Green star: nonperturbative

Green open circle: one-loop perturbation theory or higher with a reasonable estimate of truncation errors

Red square: otherwise
In Ref. [1], we assigned a red square to all results which were renormalised at one loop in perturbation theory. We now feel that this is too restrictive, since the error arising from renormalisation constants, calculated in perturbation theory at one loop, is often estimated conservatively and reliably.
\(\bullet \) Running (where applicable):

For scale-dependent quantities, such as quark masses or \(B_K\), it is essential that contact with continuum perturbation theory can be established. Various different methods are used for this purpose (cf. Appendix A.3): Regularisation-independent Momentum Subtraction (RI/MOM), the Schrödinger functional, and direct comparison with (resummed) perturbation theory. Irrespective of the particular method used, the uncertainty associated with the choice of intermediate renormalisation scales in the construction of physical observables must be brought under control. This is best achieved by performing comparisons between nonperturbative and perturbative running over a reasonably broad range of scales. These comparisons were initially only made in the Schrödinger functional (SF) approach, but they are now also being performed in RI/MOM schemes. We mark the data for which information about nonperturbative running checks is available and give some details, but we do not attempt to translate this into a colour code.
The pion mass plays an important rôle in the criteria relevant for the chiral extrapolation and finite volume. For some of the regularisations used, however, it is not a trivial matter to identify this mass. In the case of twisted-mass fermions, discretisation effects give rise to a mass difference between charged and neutral pions even when the up- and down-quark masses are equal, with the charged pion being the heavier of the two. The discussion of the twisted-mass results presented in the following sections assumes that the artificial isospin-breaking effects which occur in this regularisation are under control. In addition, we assume that the mass of the charged pion may be used when evaluating the chiral-extrapolation and finite-volume criteria. In the case of staggered fermions, discretisation effects give rise to several light states with the quantum numbers of the pion.^{Footnote 4} The mass splitting among these “taste” partners represents a discretisation effect of \({\mathcal {O}}(a^2)\), which can be significant at large lattice spacings but shrinks as the spacing is reduced. In the discussion of the results obtained with staggered quarks given in the following sections, we assume that these artefacts are under control. When evaluating the chiral-extrapolation criteria, we conservatively identify \(M_{\pi ,\mathrm{min}}\) with the root-mean-square (RMS) average of the masses of all taste partners. These masses are also used in Sects. 4 and 6 when evaluating the finite-volume criteria, while in Sects. 3, 5, 7 and 8 a more stringent finite-volume criterion is applied: \(M_{\pi ,\mathrm{min}}\) is identified with the mass of the lightest state.
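For the staggered case just described, the RMS taste-partner mass is a simple quadratic average. The multiplet masses in the sketch below are hypothetical, chosen only to show that the RMS value sits above the lightest taste.

```python
import math

# Hypothetical masses of the staggered taste partners of the pion, in MeV.
taste_masses_MeV = [240.0, 280.0, 300.0, 310.0, 320.0]

# Root-mean-square average used for the chiral-extrapolation criterion.
Mpi_rms = math.sqrt(sum(m * m for m in taste_masses_MeV) / len(taste_masses_MeV))
```

For these invented numbers the RMS mass is about 291 MeV, noticeably heavier than the lightest taste at 240 MeV, which is why the RMS identification is the conservative choice.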
2.1.2 Heavyquark physics
This subsection discusses the criteria adopted for the heavy-quark quantities included in this review, characterised by nonzero charm and bottom quantum numbers. There are several different approaches to treating heavy quarks on the lattice, each with its own issues and considerations. In general, all \(b\)-quark methods rely on the use of Effective Field Theory (EFT) at some point in the computation, either via direct simulation of the EFT, use of the EFT to estimate the size of cutoff errors, or use of the EFT to extrapolate from the simulated lattice quark mass up to the physical \(b\)-quark mass. Some simulations of charm-quark quantities use the same heavy-quark methods as for bottom quarks, but there are also computations that use improved light-quark actions to simulate charm quarks. Hence, with some methods and for some quantities, truncation effects must be considered together with discretisation errors. With other methods, discretisation errors are more severe for heavy-quark quantities than for the corresponding light-quark quantities.
In order to address these complications, we add a new heavy-quark treatment category to the rating system. The purpose of this criterion is to provide a guideline for the level of action and operator improvement needed in each approach to make reliable calculations possible, in principle. In addition, we replace the rating criteria for the continuum extrapolations of Sect. 2.1.1 with a new empirical approach based on the size of observed discretisation errors in the lattice simulation data. This accounts for the fact that whether discretisation and truncation effects in a given calculation are sufficiently small as to be controllable depends not only on the range of lattice spacings used in the simulations, but also on the simulated heavy-quark masses and on the level of action and operator improvement. For the other categories, we adopt the same strict criteria as in Sect. 2.1.1, with one minor modification, as explained below.
\(\bullet \) Heavy-quark treatment

A description of the different approaches to treating heavy quarks on the lattice is given in Appendix A.1.3, including a discussion of the associated discretisation, truncation, and matching errors. For truncation errors we use HQET power counting throughout, since this review is focussed on heavy-quark quantities involving \(B\) and \(D\) mesons. Here we describe the criteria for how each approach must be implemented in order to receive an acceptable rating for both the heavy-quark actions and the weak operators. Heavy-quark implementations without the level of improvement described below are rated not acceptable. The matching is evaluated together with renormalisation, using the renormalisation criteria described in Sect. 2.1.1. We emphasise that the heavy-quark implementations rated as acceptable and described below have been validated in a variety of ways, such as via phenomenological agreement with experimental measurements, consistency between independent lattice calculations, and numerical studies of truncation errors. These tests are summarised in Sect. 8.
Relativistic heavy-quark actions:

at least a tree-level \(O(a)\)-improved action and weak operators
This is similar to the requirements for light-quark actions. All current implementations of relativistic heavy-quark actions satisfy these criteria.
NRQCD:

tree-level matched through \(O(1/m_{h})\) and improved through \(O(a^2)\)
The current implementations of NRQCD satisfy these criteria, and they also include tree-level corrections of \(O(1/m_{h}^2)\) in the action.
HQET:

tree-level matched through \(O(1/m_{h})\), with discretisation errors starting at \(O(a^2)\)
The current implementation of HQET by the ALPHA collaboration satisfies these criteria, with an action and weak operators that are non-perturbatively matched through \(O(1/m_{h})\). Calculations that exclusively use a static-limit action do not satisfy these criteria, since the static-limit action, by definition, does not include \(1/m_{h}\) terms. However, for SU(3)-breaking ratios such as \(\xi \) and \(f_{B_{s}}/f_B\), truncation errors start at \(O((m_{s}-m_{d})/m_{h})\). We therefore consider lattice calculations of such ratios that use a static-limit action to still have controllable truncation errors.
Light-quark actions for heavy quarks:

discretisation errors starting at \(O(a^2)\) or higher. This applies to calculations that use the tmWilson action, a non-perturbatively improved Wilson action, or the HISQ action for charm-quark quantities. It also applies to calculations that use these light-quark actions in the charm region and above, together with either the static limit or an HQET-inspired extrapolation, to obtain results at the physical \(b\)-quark mass. In these cases, the continuum-extrapolation criteria must be applied to the entire range of heavy-quark masses used in the calculation.
\(\bullet \) Continuum extrapolation:

First we introduce the following definitions:
$$\begin{aligned} D(a) = \frac{Q(a) - Q(0)}{Q(a)}, \end{aligned}$$(1)where \(Q(a)\) denotes the central value of the quantity \(Q\) obtained at lattice spacing \(a\) and \(Q(0)\) denotes the continuum-extrapolated value. \(D(a)\) is a measure of how far the continuum-extrapolated result is from the lattice data. We evaluate this quantity at the smallest lattice spacing used in the calculation, \(a_\mathrm{min}\).
$$\begin{aligned} \delta (a) = \frac{Q(a) - Q(0)}{\sigma _Q}, \end{aligned}$$(2)where \(\sigma _Q\) is the combined statistical and systematic (due to the continuum extrapolation) error. \(\delta (a)\) is a measure of how well the continuum-extrapolated result agrees with the lattice data within the statistical and systematic errors of the calculation. Again, we evaluate this quantity at the smallest lattice spacing used in the calculation, \(a_\mathrm{min}\).

Green star: all of the following hold:

(i) Three or more lattice spacings,

(ii) \(a^2_\mathrm{max} / a^2_\mathrm{min} \ge 2\),

(iii) \(D(a_\mathrm{min}) \le 2\,\%\), and

(iv) \(\delta (a_\mathrm{min}) \le 1\).

Open circle: all of the following hold:

(i) Two or more lattice spacings,

(ii) \(a^2_\mathrm{max} / a^2_\mathrm{min} \ge 1.4\),

(iii) \(D(a_\mathrm{min}) \le 10\,\%\), and

(iv) \(\delta (a_\mathrm{min}) \le 2\).

Red square: otherwise.
For the time being, these new criteria for the quality of the continuum extrapolation have only been adopted for the heavy-quark quantities, but their use may be extended to all FLAG quantities in future reviews.
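For concreteness, the criteria of Eqs. (1) and (2) can be cast as a small routine. The following sketch is purely illustrative (the function name and the textual rating labels are ours, and absolute values are taken since only the size of the shift matters):

```python
# Hypothetical helper implementing the continuum-extrapolation rating
# described above; names and rating labels are illustrative only.

def continuum_rating(n_spacings, a2_ratio, Q_amin, Q_cont, sigma_Q):
    """Rate a continuum extrapolation from data at the smallest lattice spacing.

    n_spacings : number of distinct lattice spacings used
    a2_ratio   : a_max^2 / a_min^2
    Q_amin     : central value of the quantity at the smallest spacing
    Q_cont     : continuum-extrapolated central value
    sigma_Q    : combined statistical + continuum-extrapolation error
    """
    D = abs(Q_amin - Q_cont) / Q_amin        # Eq. (1): relative shift
    delta = abs(Q_amin - Q_cont) / sigma_Q   # Eq. (2): shift in units of sigma

    if n_spacings >= 3 and a2_ratio >= 2 and D <= 0.02 and delta <= 1:
        return "green star"
    if n_spacings >= 2 and a2_ratio >= 1.4 and D <= 0.10 and delta <= 2:
        return "open circle"
    return "red square"

# e.g. three spacings, an a^2 range of 2.5, and a ~1% shift within one sigma:
print(continuum_rating(3, 2.5, 1.01, 1.00, 0.02))  # -> green star
```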
\(\bullet \) Finite-volume:

Green star: \(M_{\pi ,\mathrm{min}} L \gtrsim 3.7\), or two volumes at fixed parameters.

Open circle: \(M_{\pi ,\mathrm{min}} L \gtrsim 3\).

Red square: otherwise.

Here the boundary between green star and open circle is slightly relaxed compared to that in Sect. 2.1.1, to account for the fact that heavy-quark quantities are less sensitive to this systematic error than light-quark quantities. An acceptable rating requires an estimate of the finite-volume error, either by analysing data on two or more physical volumes (with all other parameters fixed) or by using finite-volume chiral perturbation theory. In the case of staggered sea quarks, \(M_{\pi ,\mathrm{min}}\) refers to the lightest (taste-Goldstone) pion mass.
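As a concrete illustration of the finite-volume criterion, the dimensionless product \(M_{\pi }L\) can be computed from a pion mass in MeV and a box size in fm via \(\hbar c \simeq 197.327\,\mathrm{MeV\,fm}\). The helper below is hypothetical (names and interface are ours):

```python
# Illustrative check of the finite-volume criterion: M_pi * L in natural
# units, with M_pi in MeV and L in fm (hbar*c = 197.327 MeV fm).
HBARC_MEV_FM = 197.327

def m_pi_L(m_pi_mev, L_fm):
    return m_pi_mev * L_fm / HBARC_MEV_FM

def finite_volume_rating(m_pi_mev, L_fm, two_volumes=False):
    x = m_pi_L(m_pi_mev, L_fm)
    if x >= 3.7 or two_volumes:
        return "green star"
    if x >= 3:
        return "open circle"
    return "red square"

# A 250 MeV pion in a 3 fm box gives M_pi L of about 3.8:
print(round(m_pi_L(250, 3.0), 2))  # -> 3.8
```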
2.2 Averages and estimates
For many observables there are enough independent lattice calculations of good quality that it makes sense to average them and propose such an average as the best current lattice number. In order to decide whether this is true for a certain observable, we rely on the colour coding. We restrict the averages to data for which the colour code does not contain any red tags. In some cases, the averaging procedure nevertheless leads to a result which in our opinion does not cover all uncertainties. This is related to the fact that procedures for estimating errors and the resulting conclusions necessarily have an element of subjectivity, and would vary between groups even with the same data set. In order to stay on the conservative side, we may replace the average by an estimate (or a range), which we consider as a fair assessment of the knowledge acquired on the lattice at present. This estimate is not obtained with a prescribed mathematical procedure, but it is based on a critical analysis of the available information.
There are two other important criteria which also play a role in this respect, but which cannot be colour coded, because a systematic improvement is not possible. These are: (i) the publication status, and (ii) the number of flavours \(N_\mathrm{f}\). As far as the former criterion is concerned, we adopt the following policy: we average only results which have been published in peer-reviewed journals, i.e. which have been endorsed by referee(s). The only exception to this rule consists in obvious updates of previously published results, typically presented in conference proceedings. Such updates, which supersede the corresponding results in the published papers, are included in the averages. Nevertheless, all results are listed, and their publication status is identified by the following symbols:
\(\bullet \) Publication status:

A: published or a plain update of published results

P: preprint

C: conference contribution
Note that updates of earlier results rely, at least partially, on the same gauge-field configuration ensembles. For this reason, we do not average updates with the earlier results they supersede. In the present edition, the publication status on November 30, 2013 is relevant. If a paper appeared in print after that date, this is accounted for in the bibliography, but it does not affect the averages.
In this review we present results from simulations with \(N_\mathrm{f}=2\), \(N_\mathrm{f}=2+1\) and \(N_\mathrm{f}=2+1+1\) (for \( r_0 \Lambda _{\overline{\mathrm{MS}}}\) also with \(N_\mathrm{f}=0\)). We are not aware of an a priori way to quantitatively estimate the difference between results produced in simulations with a different number of dynamical quarks. We therefore average results at fixed \(N_\mathrm{f}\) separately; averages of calculations with different \(N_\mathrm{f}\) will not be provided.
To date, no significant differences between results with different values of \(N_\mathrm{f}\) have been observed. In the future, as the accuracy of lattice calculations and the control over their systematic effects increase, it will hopefully be possible to see a difference between \(N_\mathrm{f}= 2\) and \(N_\mathrm{f}= 2 + 1\) calculations and thereby determine the size of the Zweig-rule violations related to strange-quark loops. This is a very interesting issue per se, and one which can be quantitatively addressed only with lattice calculations.
2.3 Averaging procedure and error analysis
In [1], the FLAG averages and their errors were estimated through the following procedure: having added statistical and systematic errors in quadrature for each individual result, we obtained their weighted \(\chi ^2\) average. This was our central value. If the fit was of good quality (\(\chi _\mathrm{min}^2/\hbox {dof} \le 1\)), we calculated the net uncertainty \(\delta \) from \(\chi ^2 = \chi _\mathrm{min}^2 + 1\); otherwise, we inflated the error obtained in this way by the factor \(S = \sqrt{\chi _\mathrm{min}^2/\hbox {dof}}\). Whenever this \(\chi ^2\) minimisation procedure resulted in a total error which was smaller than the smallest systematic error of any individual lattice result, we assigned the smallest systematic error of that result to the total systematic error in the average.
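The procedure just described can be summarised in a few lines of code. The following sketch is ours, not FLAG's actual implementation, with statistical and systematic errors assumed to be already combined in `errors`:

```python
import math

def weighted_average(values, errors):
    """Uncorrelated chi^2-weighted average with error inflation by
    S = sqrt(chi^2/dof) when chi^2/dof > 1, as described above (a sketch)."""
    w = [1.0 / e**2 for e in errors]
    W = sum(w)
    mean = sum(wi * xi for wi, xi in zip(w, values)) / W
    err = math.sqrt(1.0 / W)            # net uncertainty from chi^2 = chi^2_min + 1
    chi2 = sum(wi * (xi - mean)**2 for wi, xi in zip(w, values))
    dof = len(values) - 1
    if dof > 0 and chi2 / dof > 1:      # inflate by S = sqrt(chi^2/dof)
        err *= math.sqrt(chi2 / dof)
    return mean, err

# e.g. two hypothetical results, 101(3) and 95(6):
mean, err = weighted_average([101.0, 95.0], [3.0, 6.0])
print(round(mean, 1))  # -> 99.8
```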
One of the problems arising when forming such averages is that not all of the data sets are independent; in fact, some rely on the same ensembles. In particular, the same gauge-field configurations, produced with a given fermion discretisation, are often used by different research teams with different valence-quark lattice actions, yielding results which are not really independent. In the present paper we have modified our averaging procedure in order to account for such correlations. To start with, we examine the error budgets of the individual calculations and look for potentially correlated uncertainties. Specific problems encountered in connection with correlations between different data sets are commented on in the text. If there is any reason to believe that a source of error is correlated between two calculations, a 100 % correlation is assumed. We then obtain the central value from a \(\chi ^2\)-weighted average, evaluated by adding statistical and systematic errors in quadrature (just as in Ref. [1]): for a set of individual measurements \(x_i\) with errors \(\sigma _i\) and correlation matrix \(C_{ij}\), the central value and error of the average are given by
$$\begin{aligned} \langle x \rangle = \sum _i \omega _i\, x_i, \qquad \sigma ^2_{\langle x \rangle } = \sum _{i,j} \omega _i\, \omega _j\, C_{ij}\, \sigma _i\, \sigma _j, \qquad \omega _i = \frac{\sigma _i^{-2}}{\sum _j \sigma _j^{-2}}. \end{aligned}$$
The correlation matrix for the set of correlated lattice results is estimated with Schmelling’s prescription [16]. When necessary, the statistical and systematic error bars are stretched by a factor \(S\), as specified in the previous paragraph.
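A minimal sketch of this correlated average (the construction of the correlation matrix via Schmelling's prescription is not reproduced; \(C\) is taken as input, with 1.0 on the diagonal):

```python
import math

def correlated_average(x, sigma, C):
    """Correlated weighted average: weights from the individual errors,
    variance from the correlation-induced covariance. A sketch of the
    procedure described in the text, not FLAG's actual code."""
    w = [1.0 / s**2 for s in sigma]
    W = sum(w)
    omega = [wi / W for wi in w]                       # normalised weights
    mean = sum(o * xi for o, xi in zip(omega, x))
    var = sum(omega[i] * omega[j] * C[i][j] * sigma[i] * sigma[j]
              for i in range(len(x)) for j in range(len(x)))
    return mean, math.sqrt(var)

# Two fully correlated measurements of equal error:
m, e = correlated_average([1.0, 1.0], [1.0, 1.0], [[1.0, 1.0], [1.0, 1.0]])
print(e)  # -> 1.0
```

With two fully correlated measurements of equal error, the combined error equals the individual one: the correlation removes the naive \(1/\sqrt{2}\) reduction that an uncorrelated average would give.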
3 Masses of the light quarks
Quark masses are fundamental parameters of the Standard Model. An accurate determination of these parameters is important for both phenomenological and theoretical applications. The charm- and bottom-quark masses, for instance, enter the theoretical expressions of several cross sections and decay rates in heavy-quark expansions. The up-, down- and strange-quark masses govern the amount of explicit chiral symmetry breaking in QCD. From a theoretical point of view, the values of quark masses provide information about the flavour structure of physics beyond the Standard Model. The Review of Particle Physics of the Particle Data Group contains a review of quark masses [17], which covers light as well as heavy flavours. The present summary only deals with the light-quark masses (those of the up, down and strange quarks), but it discusses the lattice results for these in more detail.
Quark masses cannot be measured directly in experiments because quarks cannot be isolated, as they are confined inside hadrons. On the other hand, quark masses are free parameters of the theory and, as such, cannot be obtained on the basis of purely theoretical considerations. Their values can only be determined by comparing the theoretical prediction for an observable, which depends on the quark mass of interest, with the corresponding experimental value. What makes light-quark masses particularly difficult to determine is the fact that they are very small (for the up and down) or small (for the strange) compared to typical hadronic scales. Thus, their impact on typical hadronic observables is minute, and it is difficult to isolate their contribution accurately.
Fortunately, the spontaneous breaking of SU(3)\(_L\otimes \)SU(3)\(_R\) chiral symmetry provides observables which are particularly sensitive to the light-quark masses: the masses of the resulting Nambu–Goldstone bosons (NGBs), i.e. pions, kaons and etas. Indeed, the Gell-Mann–Oakes–Renner relation [18] predicts that the squared mass of an NGB is directly proportional to the sum of the masses of the quark and antiquark which compose it, up to higher-order mass corrections. Moreover, because these NGBs are light and are composed of only two valence particles, their masses have a particularly clean statistical signal in lattice-QCD calculations. In addition, the experimental uncertainties on these meson masses are negligible.
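At leading order, the Gell-Mann–Oakes–Renner relation, \(M_P^2 = B\,(m_q + m_{\bar{q}})\), turns the meson masses directly into a rough quark-mass ratio. The numbers below are approximate, and the exercise is purely illustrative, not a lattice determination:

```python
# Leading-order Gell-Mann--Oakes--Renner illustration: M_P^2 = B (m_q + m_qbar),
# so the meson masses alone give a rough quark-mass ratio (approximate values).
M_PI = 135.0   # MeV, neutral pion
M_K  = 495.0   # MeV, isospin-averaged kaon

# M_pi^2 = 2 B m_ud  and  M_K^2 = B (m_s + m_ud)  imply, at leading order,
# m_s / m_ud = (2 M_K^2 - M_pi^2) / M_pi^2:
ratio = (2 * M_K**2 - M_PI**2) / M_PI**2
print(round(ratio, 1))  # -> 25.9
```

Higher-order corrections shift this leading-order estimate, which is why the lattice determinations reviewed below are needed.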
Three-flavour QCD has four free parameters: the strong coupling \(\alpha _\mathrm{s}\) (alternatively \(\Lambda _\mathrm{QCD}\)) and the up-, down- and strange-quark masses, \(m_{u}\), \(m_{d}\) and \(m_{s}\). However, present-day lattice calculations are often performed in the isospin limit, and the up- and down-quark masses (especially those in the sea) usually get replaced by a single parameter: the isospin-averaged up- and down-quark mass, \(m_{ud}=\frac{1}{2}(m_{u}+m_{d})\). A lattice determination of these parameters requires two steps:

1.
Calculations of three experimentally measurable quantities are used to fix the three bare parameters. As already discussed, NGB masses are particularly appropriate for fixing the light-quark masses. Another observable, such as the mass of a member of the baryon octet, can be used to fix the overall scale. It is important to note that until recently, most calculations were performed at values of \(m_{ud}\) which were still substantially larger than its physical value, typically four times as large. Reaching the physical up- and down-quark mass point required a significant extrapolation. This situation is changing fast. The PACS-CS [19–21] and BMW [22, 23] calculations were performed with masses all the way down to their physical value (and even below in the case of BMW), albeit in very small volumes for PACS-CS. More recently, MILC [24] and RBC/UKQCD [25] have also extended their simulations almost down to the physical point, by considering pions with \(M_\pi \gtrsim 170\,\mathrm{MeV}\).^{Footnote 5} Regarding the strange quark, modern simulations can easily include it with masses that bracket its physical value, so that only interpolations are needed.

2.
Renormalisations of these bare parameters must be performed to relate them to the corresponding cutoff-independent, renormalised parameters.^{Footnote 6} These are short-distance calculations, which may be performed perturbatively. Experience shows that one-loop calculations are unreliable for the renormalisation of quark masses: usually at least two loops are required to obtain trustworthy results. It is therefore best to perform the renormalisations non-perturbatively, to avoid potentially large perturbative uncertainties due to neglected higher-order terms. However, we will include in our averages one-loop results which carry a solid estimate of the systematic uncertainty due to the truncation of the series.
Of course, in quark-mass ratios the renormalisation factor cancels, so that this second step is no longer relevant.
3.1 Contributions from the electromagnetic interaction
As mentioned in Sect. 2.1, the present review relies on the hypothesis that, at low energies, the Lagrangian \(\mathcal{L}_{\mathrm{QCD}}+\mathcal{L}_{\mathrm{QED}}\) describes nature to a high degree of precision. Moreover, we assume that, at the accuracy reached by now and for the quantities discussed here, the difference between the results obtained from simulations with three dynamical flavours and full QCD is small in comparison with the quoted systematic uncertainties. This will soon no longer be the case. The electromagnetic (e.m.) interaction, on the other hand, cannot be ignored. Quite generally, when comparing QCD calculations with experiment, radiative corrections need to be applied. In lattice simulations, where the QCD parameters are fixed in terms of the masses of some of the hadrons, the electromagnetic contributions to these masses must be accounted for.^{Footnote 7}
The electromagnetic interaction plays a crucial role in determinations of the ratio \(m_{u}/m_{d}\), because the isospin-breaking effects generated by this interaction are comparable to those from \(m_{u}\ne m_{d}\) (see Sect. 3.4). In determinations of the ratio \(m_{s}/m_{ud}\), the electromagnetic interaction is less important, but at the accuracy reached, it cannot be neglected. The reason is that, in the determination of this ratio, the pion mass enters as an input parameter. Because \(M_\pi \) represents a small symmetry-breaking effect, it is rather sensitive to the perturbations generated by QED.
We distinguish the physical mass \(M_P\), \(P\in \{\pi ^+, \pi ^0\), \(K^+\), \(K^0\}\), from the mass \(\hat{M}_P\) within QCD alone. The e.m. self-energy is the difference between the two, \(M_P^\gamma \equiv M_P-\hat{M}_P\). Because the self-energy of the Nambu–Goldstone bosons diverges in the chiral limit, it is convenient to replace it by the contribution of the e.m. interaction to the square of the mass,
$$\begin{aligned} \Delta _{P}^{\gamma } \equiv M_{P}^{2}-\hat{M}_{P}^{2}. \end{aligned}$$
The main effect of the e.m. interaction is an increase in the mass of the charged particles, generated by the photon cloud that surrounds them. The self-energies of the neutral ones are comparatively small, particularly for the Nambu–Goldstone bosons, which do not have a magnetic moment. Dashen's theorem [31] confirms this picture, as it states that, to leading order (LO) of the chiral expansion, the self-energies of the neutral NGBs vanish, while the charged ones obey \(\Delta _{K^+}^\gamma = \Delta _{\pi ^+}^\gamma \). It is convenient to express the self-energies of the neutral particles as well as the mass difference between the charged and neutral pions within QCD in units of the observed mass difference, \(\Delta _\pi \equiv M_{\pi ^+}^2-M_{\pi ^0}^2\):
$$\begin{aligned} \Delta _{\pi ^0}^{\gamma }\equiv \epsilon _{\pi ^0}\,\Delta _\pi ,\qquad \Delta _{K^0}^{\gamma }\equiv \epsilon _{K^0}\,\Delta _\pi ,\qquad \hat{M}_{\pi ^+}^{2}-\hat{M}_{\pi ^0}^{2}\equiv \epsilon _{m}\,\Delta _\pi . \end{aligned}$$
In this notation, the self-energies of the charged particles are given by
$$\begin{aligned} \Delta _{\pi ^+}^{\gamma } = (1+\epsilon _{\pi ^0}-\epsilon _{m})\,\Delta _\pi ,\qquad \Delta _{K^+}^{\gamma } = (1+\epsilon +\epsilon _{K^0}-\epsilon _{m})\,\Delta _\pi , \end{aligned}$$
where the dimensionless coefficient \(\epsilon \) parameterises the violation of Dashen's theorem,^{Footnote 8}
$$\begin{aligned} \epsilon = \frac{\Delta _{K^+}^{\gamma }-\Delta _{K^0}^{\gamma }-\Delta _{\pi ^+}^{\gamma }+\Delta _{\pi ^0}^{\gamma }}{\Delta _\pi }. \end{aligned}$$
Any determination of the light-quark masses based on a calculation of the masses of \(\pi ^+\), \(K^+\) and \(K^0\) within QCD requires an estimate for the coefficients \(\epsilon \), \(\epsilon _{\pi ^0}\), \(\epsilon _{K^0}\) and \(\epsilon _{m}\).
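To illustrate how these coefficients feed into the analysis, the sketch below evaluates the parameterisation described above for a representative set of inputs. The \(\epsilon \) values are the estimates adopted later in this section, the meson masses are approximate PDG values, and the conversion \(M_P^\gamma \approx \Delta _P^\gamma /(2 M_P)\) is only a leading-order illustration:

```python
# Illustration of how the epsilon coefficients translate into e.m. self-energies,
# using the parameterisation described above. Inputs are the estimates quoted
# later in this section; meson masses are approximate PDG values (MeV).
M_PI_P, M_PI_0 = 139.57, 134.98
M_K_P = 493.68
eps, eps_pi0, eps_K0, eps_m = 0.7, 0.07, 0.3, 0.04

delta_pi = M_PI_P**2 - M_PI_0**2          # observed pi+ - pi0 splitting, MeV^2

# squared-mass self-energies of the charged mesons, in units of delta_pi:
d_pi_p = (1 + eps_pi0 - eps_m) * delta_pi
d_K_p  = (1 + eps + eps_K0 - eps_m) * delta_pi

# convert to mass shifts via M^gamma ~ Delta^gamma / (2 M):
print(round(d_pi_p / (2 * M_PI_P), 2))  # pi+ self-energy, MeV -> 4.65
print(round(d_K_p / (2 * M_K_P), 2))    # K+ self-energy, MeV -> 2.5
```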
The first determination of the self-energies on the lattice was carried out by Duncan et al. [33]. Using the quenched approximation, they arrived at \(M_{K^+}^\gamma -M_{K^0}^\gamma = 1.9\,\hbox {MeV}\). Actually, the parameterisation of the masses given in that paper yields an estimate for all but one of the coefficients introduced above (since the mass splitting between the charged and neutral pions in QCD is neglected, the parameterisation amounts to setting \(\epsilon _{m}=0\) ab initio). Evaluating the differences between the masses obtained at the physical value of the electromagnetic coupling constant and at \(e=0\), we obtain \(\epsilon = 0.50(8)\), \(\epsilon _{\pi ^0} = 0.034(5)\) and \(\epsilon _{K^0} = 0.23(3)\). The errors quoted are statistical only: an estimate of the lattice systematic errors is not possible from the limited results of Duncan et al. [33]. The result for \(\epsilon \) indicates that the violation of Dashen's theorem is sizeable: according to this calculation, the non-leading contributions to the self-energy difference of the kaons amount to 50 % of the leading term. The result for the self-energy of the neutral pion cannot be taken at face value, because it is small, comparable to the neglected mass difference \(\hat{M}_{\pi ^+}-\hat{M}_{\pi ^0}\). To illustrate this, we note that the numbers quoted above are obtained by matching the parameterisation with the physical masses for \(\pi ^0\), \(K^+\) and \(K^0\). This gives a mass for the charged pion that is too high by 0.32 MeV. Tuning the parameters instead such that \(M_{\pi ^+}\) comes out correctly, the result for the self-energy of the neutral pion becomes larger: \(\epsilon _{\pi ^0}=0.10(7)\) where, again, the error is statistical only.
In an update of this calculation by the RBC collaboration [34] (RBC 07), the electromagnetic interaction is still treated in the quenched approximation, but the strong interaction is simulated with \(N_\mathrm{f}=2\) dynamical quark flavours. The quark masses are fixed with the physical masses of \(\pi ^0\), \(K^+\) and \(K^0\). The outcome for the difference in the electromagnetic self-energy of the kaons reads \(M_{K^+}^\gamma -M_{K^0}^\gamma = 1.443(55)\,\hbox {MeV}\). This corresponds to a remarkably small violation of Dashen's theorem. Indeed, a recent extension of this work to \(N_\mathrm{f}=2+1\) dynamical flavours [32] leads to a significantly larger self-energy difference: \(M_{K^+}^\gamma -M_{K^0}^\gamma = 1.87(10)\,\hbox {MeV}\), in good agreement with the estimate of Eichten et al. Expressed in terms of the coefficient \(\epsilon \) that measures the size of the violation of Dashen's theorem, it corresponds to \(\epsilon =0.5(1)\).
The input for the electromagnetic corrections used by MILC is specified in [35]. In their analysis of the lattice data, \(\epsilon _{\pi ^0}\), \(\epsilon _{K^0}\) and \(\epsilon _{m}\) are set equal to zero. For the remaining coefficient, which plays a crucial role in determinations of the ratio \(m_{u}/m_{d}\), the very conservative range \(\epsilon =1\pm 1\) was used in MILC 04 [36], while in more recent work, in particular in MILC 09 [15] and MILC 09A [37], this input is replaced by \(\epsilon =1.2\pm 0.5\), as suggested by phenomenological estimates for the corrections to Dashen's theorem [38, 39]. Results of an evaluation of the electromagnetic self-energies based on \(N_\mathrm{f}=2+1\) dynamical quarks in the QCD sector and on the quenched approximation in the QED sector are also reported by MILC [40–42]. Their preliminary result is \(\bar{\epsilon }=0.65(7)(14)(10)\), where the first error is statistical, the second systematic, and the third a separate systematic for the combined chiral and continuum extrapolation. The estimate of the systematic error does not yet include finite-volume effects. With the estimate for \(\epsilon _{m}\) given in (9), this result corresponds to \(\epsilon = 0.62(7)(14)(10)\). Similar preliminary results were previously reported by the BMW collaboration in conference proceedings [43, 44].
The RM123 collaboration employs a new technique to compute e.m. shifts in hadron masses in two-flavour QCD: the effects are included at leading order in the electromagnetic coupling \(\alpha \) through simple insertions of the fundamental electromagnetic interaction in the quark lines of the relevant Feynman graphs [45]. They find \(\epsilon =0.79(18)(18)\), where the first error is statistical and the second is the total systematic error resulting from chiral, finite-volume, discretisation, quenching and fitting errors, all added in quadrature.
The effective Lagrangian that governs the self-energies to next-to-leading order (NLO) of the chiral expansion was set up in [46]. The estimates in [38, 39] are obtained by replacing QCD with a model, matching this model with the effective theory and assuming that the effective coupling constants obtained in this way represent a decent approximation to those of QCD. For alternative model estimates and a detailed discussion of the problems encountered in models based on saturation by resonances, see [47–49]. In the present review of the information obtained on the lattice, we avoid the use of models altogether.
There is an indirect phenomenological determination of \(\epsilon \), which is based on the decay \(\eta \rightarrow 3\pi \) and does not rely on models. The result for the quark-mass ratio \(Q\), defined in (24) and obtained from a dispersive analysis of this decay, implies \(\epsilon = 0.70(28)\) (see Sect. 3.4). While the values found in older lattice calculations [32–34] are a little less than one standard deviation lower, the most recent determinations [40–45, 50], though still preliminary, are in excellent agreement with this result and have significantly smaller error bars. However, even in the more recent calculations, e.m. effects are treated in the quenched approximation. Thus, we choose to quote \(\epsilon = 0.7(3)\), which is essentially the \(\eta \rightarrow 3\pi \) result and generously covers the range of post-2010 lattice results. Note that this value has an uncertainty which is reduced by about 40 % compared to the result quoted in the first edition of the FLAG review [1].
We add a few comments concerning the physics of the self-energies and then specify the estimates used as an input in our analysis of the data. The Cottingham formula [51] represents the self-energy of a particle as an integral over electron-scattering cross sections; elastic as well as inelastic reactions contribute. For the charged pion, the term due to elastic scattering, which involves the square of the e.m. form factor, makes a substantial contribution. In the case of the \(\pi ^0\), this term is absent, because the form factor vanishes on account of charge-conjugation invariance. Indeed, the contribution from the form factor to the self-energy of the \(\pi ^+\) roughly reproduces the observed mass difference between the two particles. Furthermore, the numbers given in [52–54] indicate that the inelastic contributions are significantly smaller than the elastic contributions to the self-energy of the \(\pi ^+\). The low-energy theorem of Das et al. [55] ensures that, in the limit \(m_{u},m_{d}\rightarrow 0\), the e.m. self-energy of the \(\pi ^0\) vanishes, while that of the \(\pi ^+\) is given by an integral over the difference between the vector and axial-vector spectral functions. The estimates for \(\epsilon _{\pi ^0}\) obtained in [33] are consistent with the suppression of the self-energy of the \(\pi ^0\) implied by chiral SU(2) \(\times \) SU(2). In our opinion, \(\epsilon _{\pi ^0}=0.07(7)\) is a conservative estimate for this coefficient. The self-energy of the \(K^0\) is suppressed less strongly, because it remains different from zero if \(m_{u}\) and \(m_{d}\) are taken massless and only disappears if \(m_{s}\) is turned off as well. Note also that, since the e.m. form factor of the \(K^0\) is different from zero, its self-energy does pick up an elastic contribution. The lattice result for \(\epsilon _{K^0}\) indicates that the violation of Dashen's theorem is smaller than in the case of \(\epsilon \).
In the following, we use \(\epsilon _{K^0}=0.3(3)\).
Finally, we consider the mass splitting between the charged and neutral pions in QCD. This effect is known to be very small, because it is of second order in \(m_{u}-m_{d}\). There is a parameter-free prediction, which expresses the difference \(\hat{M}_{\pi ^+}^2-\hat{M}_{\pi ^0}^2\) in terms of the physical masses of the pseudoscalar octet and is valid to NLO of the chiral perturbation series. Numerically, the relation yields \(\epsilon _{m}=0.04\) [56], indicating that this contribution does not play a significant role at the present level of accuracy. We attach a conservative error also to this coefficient: \(\epsilon _{m}=0.04(2)\). The lattice result for the self-energy difference of the pions, reported in [32], \(M_{\pi ^+}^\gamma -M_{\pi ^0}^\gamma = 4.50(23)\,\hbox {MeV}\), agrees with this estimate: expressed in terms of the coefficient \(\epsilon _{m}\) that measures the pion mass splitting in QCD, the result corresponds to \(\epsilon _{m}=0.04(5)\). The corrections of next-to-next-to-leading order (NNLO) have been worked out [57], but the numerical evaluation of the formulae again meets with the problem that the relevant effective coupling constants are not reliably known.
In summary, we use the following estimates for the e.m. corrections:
$$\begin{aligned} \epsilon = 0.7(3),\qquad \epsilon _{\pi ^0} = 0.07(7),\qquad \epsilon _{K^0} = 0.3(3),\qquad \epsilon _{m} = 0.04(2). \end{aligned}$$(9)
While the range used for the coefficient \(\epsilon \) affects our analysis in a significant way, the numerical values of the other coefficients only serve to set the scale of these contributions. The range given for \(\epsilon _{\pi ^0}\) and \(\epsilon _{K^0}\) may be overly generous, but because of the exploratory nature of the lattice determinations, we consider it advisable to use a conservative estimate.
Treating the uncertainties in the four coefficients as statistically independent and adding errors in quadrature, the numbers in Eq. (9) yield the following estimates for the e.m. self-energies,
and for the pion and kaon masses occurring in the QCD sector of the Standard Model,
The self-energy difference between the charged and neutral pion involves the same coefficient \(\epsilon _{m}\) that describes the mass difference in QCD; this is why the estimate for \(M_{\pi ^+}^\gamma -M_{\pi ^0}^\gamma \) is so sharp.
3.2 Pion and kaon masses in the isospin limit
As mentioned above, most of the lattice calculations concerning the properties of the light mesons are performed in the isospin limit of QCD (\(m_{u}-m_{d}\rightarrow 0\) at fixed \(m_{u}+m_{d}\)). We denote the pion and kaon masses in that limit by \(\overline{M}_{\pi }\) and \(\overline{M}_{K}\), respectively. Their numerical values can be estimated as follows. Since the operation \(u\leftrightarrow d\) interchanges \(\pi ^+\) with \(\pi ^-\) and \(K^+\) with \(K^0\), the expansion of the quantities \(\hat{M}_{\pi ^+}^2\) and \(\frac{1}{2}(\hat{M}_{K^+}^2+\hat{M}_{K^0}^2)\) in powers of \(m_{u}-m_{d}\) only contains even powers. As shown in [58], the effects generated by \(m_{u}-m_{d}\) in the mass of the charged pion are strongly suppressed: the difference \(\hat{M}_{\pi ^+}^2-\overline{M}_{\pi }^{\,2}\) represents a quantity of \(O[(m_{u}-m_{d})^2(m_{u}+m_{d})]\) and is therefore small compared to the difference \(\hat{M}_{\pi ^+}^2-\hat{M}_{\pi ^0}^2\), for which an estimate was given above. In the case of \(\frac{1}{2}(\hat{M}_{K^+}^2+\hat{M}_{K^0}^2)-\overline{M}_{K}^{\,2}\), the expansion does contain a contribution at NLO, determined by the combination \(2L_8-L_5\) of low-energy constants, but the lattice results for that combination show that this contribution is very small, too. Numerically, the effects generated by \(m_{u}-m_{d}\) in \(\hat{M}_{\pi ^+}^2\) and in \(\frac{1}{2}(\hat{M}_{K^+}^2+\hat{M}_{K^0}^2)\) are negligible compared to the uncertainties in the electromagnetic self-energies. The estimates for these given in Eq. (11) thus imply
This shows that, for the convention used above to specify the QCD sector of the Standard Model, and within the accuracy to which this convention can currently be implemented, the mass of the pion in the isospin limit agrees with the physical mass of the neutral pion: \(\overline{M}_{\pi }-M_{\pi ^0}=-0.2(3)\) MeV.
3.3 Lattice determination of \(m_{s}\) and \(m_{ud}\)
We now turn to a review of the lattice calculations of the light-quark masses and begin with \(m_{s}\), the isospin-averaged up- and down-quark mass \(m_{ud}\), and their ratio. Most groups quote only \(m_{ud}\), not the individual up- and down-quark masses. We then discuss the ratio \(m_{u}/m_{d}\) and the individual determinations of \(m_{u}\) and \(m_{d}\).
Quark masses have been calculated on the lattice since the mid-1990s. However, early calculations were performed in the quenched approximation, leading to unquantifiable systematics. Thus, in the following, we only review modern, unquenched calculations, which include the effects of light sea quarks.
Tables 2 and 3 list the results of \(N_\mathrm{f}=2\) and \(N_\mathrm{f}=2+1\) lattice calculations of \(m_{s}\) and \(m_{ud}\). These results are given in the \({\overline{\mathrm{MS}}}\) scheme at \(2\,\mathrm{GeV}\), which is standard nowadays, though some groups are starting to quote results at higher scales (e.g. [25]). The tables also show the colour coding of the calculations leading to these results. The corresponding results for \(m_{s}/m_{ud}\) are given in Table 4. As indicated earlier in this review, we treat \(N_\mathrm{f}=2\) and \(N_\mathrm{f}=2+1\) calculations separately. The latter include the effects of a strange sea quark, but the former do not.
3.3.1 \(N_\mathrm{f}=2\) lattice calculations
We begin with \(N_\mathrm{f}=2\) calculations. A quick inspection of Table 2 indicates that only the most recent calculations, ALPHA 12 [59] and ETM 10B [60], control all systematic effects; the special case of Dürr 11 [61] is discussed below. Only ALPHA 12 [59], ETM 10B [60] and ETM 07 [62] really enter the chiral regime, with pion masses down to about 270 MeV for ALPHA and ETM. Because this pion mass is still quite far from the physical pion mass, ALPHA 12 refrain from determining \(m_{ud}\) and give only \(m_{s}\). All the other calculations have significantly more massive pions, the lightest being about 430 MeV, in the calculation by CP-PACS 01 [63]. Moreover, the latter calculation is performed on very coarse lattices, with lattice spacings \(a\ge 0.11\,\,{\mathrm {fm}}\), and only one-loop perturbation theory is used to renormalise the results.
ETM 10B’s [60] calculation of \(m_{ud}\) and \(m_{s}\) is an update of the earlier twisted-mass determination of ETM 07 [62]. In particular, they have added ensembles with a larger volume and three new lattice spacings, \(a = 0.054, 0.067\) and \(0.098\,\mathrm{fm}\), allowing for a continuum extrapolation. In addition, they present analyses performed in \(\hbox {SU}(2)\) and \(\hbox {SU}(3)\) \(\chi \)PT.
The new ALPHA 12 [59] calculation of \(m_{s}\) is an update of ALPHA 05 [64] that pushes computations to finer lattices and much lighter pion masses. Importantly, it also includes a determination of the lattice spacing with the decay constant \(F_K\), whereas ALPHA 05 converted results to physical units using the scale parameter \(r_0\) [65], defined via the force between static quarks. In particular, the conversion relied on measurements of \(r_0/a\) by QCDSF/UKQCD 04 [66] which differ significantly from the new determination by ALPHA 12. As in ALPHA 05, in ALPHA 12 both nonperturbative running and nonperturbative renormalisation are performed in a controlled fashion, using Schrödinger functional methods.
The conclusion of our analysis of \(N_\mathrm{f}=2\) calculations is that the results of ALPHA 12 [59] and ETM 10B [60] (which update and extend ALPHA 05 [64] and ETM 07 [62], respectively) are the only ones to date which satisfy our selection criteria. Thus we average those two results for \(m_{s}\), obtaining 101(3) MeV. Regarding \(m_{ud}\), for which only ETM 10B [60] gives a value, we do not offer an average but simply quote ETM’s number. Because ALPHA’s result induces a 7 % increase in our earlier average for \(m_{s}\) [1], while \(m_{ud}\) remains unchanged, our average for \(m_{s}/m_{ud}\) also increases by 7 %. For the latter, however, we retain the percent error quoted by ETM, who directly estimate this ratio, and add it in quadrature to the percent error on ALPHA’s \(m_{s}\). Thus, we quote as our estimates:
\(m_{s} = 101(3)\,\mathrm{MeV}, \quad m_{ud} = 3.6(2)\,\mathrm{MeV}, \quad m_{s}/m_{ud} = 28.1(1.2).\)  (13)
The errors on these results are 3, 6 and 4 %, respectively. The error is smaller in the ratio than one would get from combining the errors on \(m_{ud}\) and \(m_{s}\), because statistical and systematic errors cancel in ETM’s result for this ratio, most notably those associated with renormalisation and the setting of the scale. It is worth noting that, thanks to ALPHA 12 [59], the total error on \(m_{s}\) has been reduced significantly, from 7 % in the last edition of our report to 3 % now. It is also interesting to remark that ALPHA 12’s [59] central value for \(m_{s}\) is about 1 \(\sigma \) larger than that of ETM 10B [60] and nearly 2 \(\sigma \) larger than our present \(N_\mathrm{f}=2+1\) determination given in (14). Moreover, this larger value for \(m_{s}\) increases our \(N_\mathrm{f}=2\) determination of \(m_{s}/m_{ud}\), making it larger than ETM 10B’s direct measurement, though compatible within errors.
We have not yet discussed the precise results of Dürr 11 [61], which satisfy our selection criteria. This is because Dürr 11 pursue an approach which is sufficiently different from that of the other calculations that we prefer not to include it in an average at this stage. Following HPQCD 09A, 10 [72, 73], the observable which they actually compute is \(m_{c}/m_{s}=11.27(30)(26)\), with an accuracy of 3.5 %. This result is about 1.5 combined standard deviations below ETM 10B’s [60] result \(m_{c}/m_{s}=12.0(3)\). The strange-quark mass \(m_{s}\) is subsequently obtained using lattice and phenomenological determinations of \(m_{c}\) which rely on perturbation theory. The value of the charm-quark mass which they use is an average of those determinations, which they estimate to be \(m_{c}(2\,\mathrm{GeV})=1.093(13)\,\mathrm{GeV}\), with a 1.2 % total uncertainty. Note that this value is consistent with the PDG average \(m_{c}(2\,\mathrm{GeV})=1.094(21)\,\mathrm{GeV}\) [74], though the latter has a larger 2.0 % uncertainty. Dürr 11’s value of \(m_{c}\) leads to the \(m_{s}=97.0(2.6)(2.5)\,\mathrm{MeV}\) given in Table 2, which has a total error of 3.7 %. The use of the PDG value for \(m_{c}\) [74] would lead to a very similar result. The result for \(m_{s}\) is perfectly compatible with our estimate given in (13) and has a comparable error bar. To determine \(m_{ud}\), Dürr 11 combine their result for \(m_{s}\) with the \(N_\mathrm{f}=2+1\) calculation of \(m_{s}/m_{ud}\) of BMW 10A, 10B [22, 23] discussed below. They obtain \(m_{ud}=3.52(10)(9)\,\mathrm{MeV}\), with a total uncertainty of less than 4 %, which is again fully consistent with our estimate of (13) and its uncertainty.
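The arithmetic behind Dürr 11’s strategy is easy to check; the short Python sketch below propagates the quoted errors through \(m_{s}=m_{c}/(m_{c}/m_{s})\). Only the input numbers are taken from the text; the bookkeeping of the individual error components is our own illustrative assumption, not the collaboration’s actual procedure.

```python
import math

# Inputs quoted in the text (Durr 11); the error split below is an
# illustrative assumption, not the collaboration's actual procedure.
m_c, dm_c = 1093.0, 13.0          # m_c(2 GeV) in MeV, 1.2 % total uncertainty
r, dr1, dr2 = 11.27, 0.30, 0.26   # m_c/m_s with its two quoted errors

m_s = m_c / r                                  # central value
err1 = m_s * dr1 / r                           # first ratio error propagated
err2 = m_s * math.hypot(dr2 / r, dm_c / m_c)   # second ratio error (+) m_c error
total = math.hypot(err1, err2)

print(f"m_s = {m_s:.1f}({err1:.1f})({err2:.1f}) MeV, total {100*total/m_s:.1f} %")
# -> m_s = 97.0(2.6)(2.5) MeV, total 3.7 %
```

With this (assumed) split, the quoted central value, both errors, and the 3.7 % total are all reproduced.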
3.3.2 \(N_\mathrm{f}=2+1\) lattice calculations
We turn now to \(N_\mathrm{f}=2+1\) calculations. These and the corresponding results are summarised in Tables 3 and 4. Somewhat paradoxically, these calculations are more mature than those with \(N_\mathrm{f}=2\). This is thanks, in large part, to the head start and sustained effort of MILC, who have been performing \(N_\mathrm{f}=2+1\) rooted staggered fermion calculations for the past ten or so years. They have covered an impressive range of parameter space, with lattice spacings which, today, go down to 0.045 fm and valence pion masses down to approximately 180 MeV [37]. The most recent updates, MILC 10A [75] and MILC 09A [37], include significantly more data and use twoloop renormalisation. Since these data sets subsume those of their previous calculations, these latest results are the only ones that must be kept in any world average.
Since our last report [1] the situation for \(N_\mathrm{f}=2+1\) determinations of the light-quark masses has undergone some evolution. There are new computations by RBC/UKQCD 12 [25], PACS-CS 12 [76] and Laiho 11 [77]. Furthermore, the results of BMW 10A, 10B [22, 23] have been published and can now be included in our averages.
The RBC/UKQCD 12 [25] computation improves on the one of RBC/UKQCD 10A [78] in a number of ways. In particular it involves a new simulation performed at a rather coarse lattice spacing of 0.144 fm, but with unitary pion masses down to 171(1) MeV and valence pion masses down to 143(1) MeV in a volume of \((4.6\,\mathrm{fm})^3\), compared, respectively, to 290 MeV, 225 MeV and \((2.7\,\mathrm{fm})^3\) in RBC/UKQCD 10A. This provides them with significantly better control over the extrapolation to physical \(M_\pi \) and to the infinite-volume limit. As before, they perform nonperturbative renormalisation and running in RI/SMOM schemes. The only weaker point of the calculation comes from the fact that two of their three lattice spacings are larger than 0.1 fm and correspond to different discretisations, while the finest is only 0.085 fm, making it difficult to convincingly claim full control over the continuum limit. This is mitigated by the fact that the scaling violations which they observe on their coarsest lattice are, for many quantities, small, around 5 %.
The Laiho 11 results [77] are based on MILC staggered ensembles at the lattice spacings 0.15, 0.09 and 0.06 fm, on which they propagate domain-wall quarks. Moreover, they work in volumes of up to \((4.8\,\mathrm{fm})^3\). These features give them full control over the continuum and infinite-volume extrapolations. Their lightest RMS sea-pion mass is 280 MeV and their valence pions have masses down to 210 MeV. The fact that their sea pions do not enter deeply into the chiral regime somewhat penalises their extrapolation to physical \(M_\pi \). Moreover, to renormalise the quark masses, they use one-loop perturbation theory for \(Z_A/Z_S-1\), which they combine with \(Z_A\) determined nonperturbatively from the axial-vector Ward identity. Although they conservatively estimate the uncertainty associated with this procedure to be 5 %, which is the size of their largest one-loop correction, this represents a weaker point of the calculation.
The new PACS-CS 12 [76] calculation represents an important extension of the collaboration’s earlier 2010 computation [21], which already probed pion masses down to \(M_\pi \simeq 135\,\mathrm{MeV}\), i.e. down to the physical-mass point. This was achieved by reweighting the simulations performed in PACS-CS 08 [19] at \(M_\pi \simeq 160\,\mathrm{MeV}\). If adequately controlled, this procedure eliminates the need to extrapolate to the physical-mass point and, hence, the corresponding systematic error. The new calculation now applies similar reweighting techniques to include electromagnetic and \(m_{u}\ne m_{d}\) isospin-breaking effects directly at the physical pion mass. Technically, it goes beyond Blum 10 [32] and BMW’s preliminary results of [43, 44] by including these effects not only for valence but also for sea quarks, as is also done in [86]. Further, as in PACS-CS 10 [21], renormalisation of quark masses is implemented nonperturbatively, through the Schrödinger functional method [87]. As it stands, the main drawback of the calculation, which makes the inclusion of its results in a world average of lattice results inappropriate at this stage, is that for the lightest quark mass the volume is very small, corresponding to \(LM_\pi \simeq 2.0\), a value for which finite-volume effects will be difficult to control. Another problem is that the calculation was performed at a single lattice spacing, forbidding a continuum extrapolation. Further, it is unclear at this point what might be the systematic errors associated with the reweighting procedure.
As shown by the colour-coding in Tables 3 and 4, the BMW 10A, 10B [22, 23] calculation is still the only one to have addressed all sources of systematic effects while reaching the physical up- and down-quark mass by interpolation instead of by extrapolation. Moreover, their calculation was performed at five lattice spacings ranging from 0.054 to 0.116 fm, with full nonperturbative renormalisation and running, and in volumes of up to (6 fm)\(^3\), guaranteeing that the continuum limit, renormalisation and infinite-volume extrapolation are controlled. It does neglect, however, isospin-breaking effects, which are small on the scale of their error bars.
Finally we come to another calculation which satisfies our selection criteria, HPQCD 10 [73] (which updates HPQCD 09A [72]). The strange-quark mass is computed using a precise determination of the charm-quark mass, \(m_{c}(m_{c})=1.273(6)\) GeV [73, 85], whose accuracy is better than 0.5 %, and a calculation of the quark-mass ratio \(m_{c}/m_{s}=11.85(16)\) [72], which achieves a precision slightly above 1 %. The determination of \(m_{s}\) via the ratio \(m_{c}/m_{s}\) displaces the problem of lattice renormalisation in the computation of \(m_{s}\) to one of renormalisation in the continuum for the determination of \(m_{c}\). To calculate \(m_{ud}\), HPQCD 10 [73] use the MILC 09 determination of the quark-mass ratio \(m_{s}/m_{ud}\) [15].
The high precision quoted by HPQCD 10 on the strange-quark mass relies in large part on the precision reached in the determination of the charm-quark mass [73, 85]. This calculation uses an approach based on the lattice determination of moments of charm-quark pseudoscalar, vector and axial-vector correlators. These moments are then combined with four-loop results from continuum perturbation theory to obtain a determination of the charm-quark mass in the \({\overline{\mathrm{MS}}}\) scheme. In the preferred case, in which pseudoscalar correlators are used for the analysis, there are no lattice renormalisation factors required, since the corresponding axial-vector current is partially conserved in the staggered lattice formalism.
Instead of combining the result for \(m_{c}/m_{s}\) of [72] with \(m_{c}\) from [73], one can use it with the PDG [74] average \(m_{c}(m_{c})=1.275(25)\,\mathrm{GeV}\), whose error is four times as large as the one obtained by HPQCD 10. If one does so, one obtains \(m_{s}=92.3(2.2)\,\mathrm{MeV}\) in lieu of the value \(m_{s}=92.2(1.3)\,\mathrm{MeV}\) given in Table 3, thereby nearly doubling HPQCD 10’s error. Though we plan to do so in the future, we have not yet performed a review of lattice determinations of \(m_{c}\). Thus, as for the results of Dürr 11 [61] in the \(N_\mathrm{f}=2\) case, we postpone the inclusion of HPQCD 10’s result in our final averages until we have performed an independent analysis of \(m_{c}\), emphasising that this novel strategy for computing the light-quark masses may very well turn out to be the best way to determine them.
This discussion leaves us with three results for our final average for \(m_{s}\), those of MILC 09A [37], BMW 10A, 10B [22, 23] and RBC/UKQCD 12 [25], and the result of HPQCD 10 [73] as an important cross-check. Thus, we first check that the three other results which will enter our final average are consistent with HPQCD 10’s result. To do this we implement the averaging procedure described in Sect. 2.2 on all four results. This yields \(m_{s}=93.0(1.0)\,\mathrm{MeV}\) with a \(\chi ^2/\hbox {dof} = 3.0/3=1.0\), indicating overall consistency. Note that in making this average, we have accounted for correlations in the small statistical errors of HPQCD 10 and MILC 09A. Omitting HPQCD 10 from our final average increases the average’s uncertainty by 50 % and shifts its central value by 0.8 \(\sigma \). Thus, we obtain \(m_{s}=93.8(1.5)\,\mathrm{MeV}\) with a \(\chi ^2/\hbox {dof} = 2.26/2=1.13\). When repeating the exercise for \(m_{ud}\), we replace MILC 09A by the more recent analysis reported in MILC 10A [75]. A fit of all four results yields \(m_{ud}=3.41(5)\,\mathrm{MeV}\) with a \(\chi ^2/\hbox {dof} = 2.6/3=0.9\) and including only the same three as above gives \(m_{ud}=3.42(6)\,\mathrm{MeV}\) with a \(\chi ^2/\hbox {dof} = 2.4/2=1.2\). Here the results are barely distinguishable, indicating full compatibility of all four results. Note that the outcome of the averaging procedure amounts to determinations of \(m_{s}\) and \(m_{ud}\) with precisions of 1.6 and 1.8 %, respectively.
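At its core, the averaging procedure referred to above (Sect. 2.2) is an inverse-variance weighted fit to a constant. The minimal Python sketch below ignores the correlations mentioned in the text (e.g. between HPQCD 10 and MILC 09A), and the numbers in the demo call are placeholders, not the actual table entries.

```python
def weighted_average(values, errors):
    """Uncorrelated inverse-variance weighted average with chi^2/dof.

    A minimal sketch of a fit to a constant; the full FLAG procedure
    additionally handles correlated statistical and systematic errors.
    """
    weights = [1.0 / e**2 for e in errors]
    wsum = sum(weights)
    mean = sum(w * v for w, v in zip(weights, values)) / wsum
    error = wsum ** -0.5
    chi2 = sum(w * (v - mean) ** 2 for w, v in zip(weights, values))
    return mean, error, chi2 / (len(values) - 1)

# placeholder m_s-like inputs in MeV (NOT the actual table values)
mean, err, chi2dof = weighted_average([95.0, 92.2, 94.0, 96.0],
                                      [2.0, 1.3, 3.0, 2.5])
print(f"average = {mean:.1f}({err:.1f}) MeV, chi2/dof = {chi2dof:.2f}")
```

A \(\chi^2/\hbox{dof}\) near 1, as in the text, signals that the spread of the inputs is consistent with their quoted errors.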
The heavy sea quarks affect the determination of the light-quark masses only through contributions of order \(1/m_{c}^2\), which moreover are suppressed by the Okubo–Zweig–Iizuka rule. We expect these contributions to be small. However, note that the effect of omitted sea quarks on a given quantity is not uniquely defined: the size of the effect depends on how the theories with and without these flavours are matched. One way to set conventions is to ensure that the bare parameters common to both theories are fixed by the same physical observables and that the renormalisations are performed in the same scheme and at the same scale, with the appropriate numbers of flavours.
An upper bound on the heavy-quark contributions can be obtained by looking at the presumably much larger effect associated with omitting the strange quark in the sea. Within errors, the average value \(m_{ud} = 3.42(6)\) MeV obtained above from the data with \(N_\mathrm{f} = 2+1\) agrees with the result \(m_{ud} = 3.6(2)\) MeV for \(N_\mathrm{f} = 2\) quoted in (13): assuming that the underlying calculations more or less follow the above matching prescription, the effects generated by the quenching of the strange quark in \(m_{ud}\) are within the noise. Interpreting the two results as Gaussian distributions, the probability distribution of the difference \(\Delta m_{ud} \equiv (m_{ud})_{N_\mathrm{f}=2}-(m_{ud})_{N_\mathrm{f}=3}\) is also Gaussian, with \(\Delta m_{ud}=0.18(21)\) MeV. The corresponding root-mean-square \(\langle \Delta m_{ud}^2\rangle ^{1/2}= 0.28\) MeV provides an upper bound for the size of the effects due to strange-quark quenching; it amounts to 8 % of \(m_{ud}\). In the case of \(m_{s}\), the analogous calculation yields \(\langle \Delta m_{s}^2\rangle ^{1/2}=7.9\) MeV and thus also amounts to an upper bound of about 8 %. Taking any of these numbers as an upper bound on the omission of charm effects in the \(N_\mathrm{f}=2+1\) results is, we believe, a significant overestimate.
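The arithmetic of this bound is simple enough to check directly; the sketch below reproduces the numbers quoted above for \(m_{ud}\).

```python
import math

# N_f = 2 and N_f = 2+1 results for m_ud quoted in the text, in MeV
mud_nf2, err_nf2 = 3.6, 0.2
mud_nf3, err_nf3 = 3.42, 0.06

delta = mud_nf2 - mud_nf3             # central difference
sigma = math.hypot(err_nf2, err_nf3)  # Gaussian error of the difference
rms = math.hypot(delta, sigma)        # root-mean-square <Delta^2>^(1/2)

print(f"Delta m_ud = {delta:.2f}({sigma:.2f}) MeV, RMS = {rms:.2f} MeV, "
      f"i.e. {100 * rms / mud_nf3:.0f} % of m_ud")
# -> Delta m_ud = 0.18(0.21) MeV, RMS = 0.28 MeV, i.e. 8 % of m_ud
```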
An underestimate of the upper bound on the sea-charm contributions to \(m_{s}\) can be obtained by transposing, to the \(s\bar{s}\) system, the perturbative, heavy-quarkonium arguments put forward in [94] to determine the effect of sea charm on the \(\eta _{c}\) and \(J/\psi \) masses. An estimate using constituent quark masses [95] leads very roughly to a 0.05 % effect on \(m_{s}\), from which [95] concludes that the error on \(m_{s}\) and \(m_{ud}\) due to the omission of charm is of order 0.1 %.
One could also try to estimate the effect by analysing the relation between the parameters of QCD\(_3\) and those of full QCD in perturbation theory. The \(\beta \) and \(\gamma \)-functions, which control the renormalisation of the coupling constants and quark masses, respectively, are known to four loops [83, 84, 96, 97]. The precision achieved in this framework for the decoupling of the \(t\)- and \(b\)-quarks is excellent, but the \(c\)-quark is not heavy enough: at the percent level, we believe that the corrections of order \(1/m_{c}^2\) cannot be neglected and the decoupling formulae of perturbation theory do not provide a reliable evaluation, because the scale \(m_{c}(m_{c})\simeq 1.28\,\mathrm{GeV}\) is too low for these formulae to be taken at face value. Consequently, the accuracy to which it is possible to identify the running masses of the light quarks of full QCD in terms of those occurring in QCD\(_3\) is limited. For this reason, it is preferable to characterise the masses \(m_{u}\), \(m_{d}\), \(m_{s}\) in terms of QCD\(_4\), where the connection with full QCD is under good control.
The role of the \(c\)-quarks in the determination of the light-quark masses will soon be studied in detail—some simulations with \(2+1+1\) dynamical quarks have already been carried out [24, 98]. For the moment, we choose to consider a crude, and hopefully reasonably conservative, upper bound on the size of the effects due to the neglected heavy quarks that can be established within the \(N_\mathrm{f}=2+1\) simulations themselves, without invoking perturbation theory. In [99] it is found that when the scale is set by \(M_\Xi \), the result for \(M_\Lambda \) agrees well with experiment within the 2.3 % accuracy of the calculation. Because of the very strong correlations between the statistical and systematic errors of these two masses, we expect the uncertainty in the difference \(M_\Xi -M_\Lambda \) to also be of order 2 %. To leading order in the chiral expansion this mass difference is proportional to \(m_{s}-m_{ud}\). Barring accidental cancellations, we conclude that the agreement of \(N_\mathrm{f}= 2+1\) calculations with experiment suggests an upper bound on the sensitivity of \(m_{s}\) to heavy sea quarks of order 2 %.
Taking this uncertainty into account yields the following averages:
\(m_{ud} = 3.42(6)(7)\,\mathrm{MeV}, \quad m_{s} = 93.8(1.5)(1.9)\,\mathrm{MeV},\)  (14)
where the first error comes from the averaging of the lattice results, and the second is the one that we add to account for the neglect of sea effects from the charm and more massive quarks. This corresponds to determinations of \(m_{ud}\) and \(m_{s}\) with a precision of 2.7 and 2.6 %, respectively. These estimates represent the conclusions we draw from the information gathered on the lattice until now. They are shown as vertical bands in Figs. 1 and 2, together with the \(N_\mathrm{f}=2\) results (13).
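A quick numerical sketch of how the second error and the quoted precisions arise: the 2 % heavy sea-quark bound from the text is converted to an absolute error and added in quadrature to the error of the lattice average.

```python
import math

def add_heavy_quark_bound(value, lattice_err, bound=0.02):
    """Combine a lattice error with a relative heavy sea-quark bound."""
    extra = bound * value                  # second error from the 2 % bound
    total = math.hypot(lattice_err, extra)
    return extra, total, 100.0 * total / value

extra_ud, tot_ud, pct_ud = add_heavy_quark_bound(3.42, 0.06)  # m_ud in MeV
extra_s, tot_s, pct_s = add_heavy_quark_bound(93.8, 1.5)      # m_s in MeV
print(f"m_ud: second error {extra_ud:.2f} MeV, precision {pct_ud:.1f} %")
print(f"m_s:  second error {extra_s:.1f} MeV, precision {pct_s:.1f} %")
# -> m_ud: second error 0.07 MeV, precision 2.7 %
# -> m_s:  second error 1.9 MeV, precision 2.6 %
```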
In the ratio \(m_{s}/m_{ud}\), one of the sources of systematic error—the uncertainties in the renormalisation factors—drops out. Also, we can compare the lattice results with the leading-order formula of \(\chi \)PT,
\(m_{s}/m_{ud} \mathop {=}\limits ^{\mathrm{LO}} \big (\hat{M}_{K^0}^2+\hat{M}_{K^+}^2-\hat{M}_{\pi ^+}^2\big )/\hat{M}_{\pi ^+}^2,\)  (15)
which relates the quantity \(m_{s}/m_{ud}\) to a ratio of meson masses in QCD. Expressing these in terms of the physical masses and the four coefficients introduced in (6)–(8), linearising the result with respect to the corrections and inserting the observed mass values, we obtain
If the coefficients \(\epsilon \), \(\epsilon _{\pi ^0}\), \(\epsilon _{K^0}\) and \(\epsilon _{m}\) are set equal to zero, the right-hand side reduces to the value \(m_{s}/m_{ud}=25.9\) that follows from Weinberg’s leading-order formulae for \(m_{u}/m_{d}\) and \(m_{s}/m_{d}\) [100], in accordance with the fact that these do account for the e.m. interaction at leading chiral order and neglect the mass difference between the charged and neutral pions in QCD. Inserting the estimates (9) gives the effect of chiral corrections to the e.m. self-energies and of the mass difference between the charged and neutral pions in QCD. With these, the LO prediction in QCD becomes
\(m_{s}/m_{ud} \mathop {=}\limits ^{\mathrm{LO}} 25.9(1).\)  (17)
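Weinberg’s value can be reproduced with a few lines of Python: apply Dashen’s theorem to the physical meson masses and evaluate the LO mass formulae. The PDG masses inserted below are our own inputs, not numbers quoted in the text.

```python
# LO quark-mass ratios from meson masses, using Dashen's theorem to remove
# the e.m. self-energies (masses in MeV, PDG values inserted by us).
M_pi_p, M_pi_0 = 139.570, 134.977
M_K_p, M_K_0 = 493.677, 497.611

pi_p, pi_0, K_p, K_0 = (m * m for m in (M_pi_p, M_pi_0, M_K_p, M_K_0))

# Weinberg's LO formulae (Dashen-corrected squared masses)
mu_over_md = (K_p - K_0 + 2 * pi_0 - pi_p) / (K_0 - K_p + pi_p)
ms_over_md = (K_0 + K_p - pi_p) / (K_0 - K_p + pi_p)
ms_over_mud = 2 * ms_over_md / (1 + mu_over_md)

print(f"m_u/m_d = {mu_over_md:.2f}, m_s/m_ud = {ms_over_mud:.1f}")
# -> m_u/m_d = 0.56, m_s/m_ud = 25.9
```

The same inputs also reproduce the Weinberg value \(m_{u}/m_{d}\simeq 0.558\) referred to later in the text.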
The quoted uncertainty does not include an estimate of the higher-order contributions; it only accounts for the error bars in the coefficients, which are dominated by the one in the estimate given for \(\epsilon _{\pi ^0}\). The fact that the central value remains unchanged indicates that chiral corrections to the e.m. self-energies and mass-difference corrections are small in this particular quantity. However, given the high accuracy reached in lattice determinations of the ratio \(m_{s}/m_{ud}\), the uncertainties associated with e.m. corrections are no longer completely irrelevant. This is seen by comparing the 0.1 in (17) with the 0.15 in (18). Nevertheless, this uncertainty is still smaller than our \(\sim 1\)–\(1.5\,\%\) upper bound on possible \(1/m_{c}^2\) corrections (Fig. 3).
The lattice results in Table 4, which satisfy our selection criteria, indicate that the corrections generated by the nonleading terms of the chiral perturbation series are remarkably small, in the range 3–10 %. Despite the fact that the SU(3) flavour-symmetry-breaking effects in the Nambu–Goldstone boson masses are very large (\(M_K^2\simeq 13\, M_\pi ^2\)), the mass spectrum of the pseudoscalar octet obeys the SU(3) \(\times \) SU(3) formula (15) very well.
Our average for \(m_{s}/m_{ud}\) is based on the results of MILC 09A, BMW 10A, 10B and RBC/UKQCD 12—the value quoted by HPQCD 10 does not represent independent information as it relies on the result for \(m_{s}/m_{ud}\) obtained by the MILC collaboration. Averaging these results according to the prescription of Sect. 2.3 gives \(m_{s}/m_{ud}=27.46(15)\) with \(\chi ^2/\hbox {dof}=0.2/2\). The fit is dominated by MILC 09A and BMW 10A, 10B. Since the errors associated with renormalisation drop out in the ratio, the uncertainties are even smaller than in the case of the quark masses themselves: the above number for \(m_{s}/m_{ud}\) amounts to an accuracy of 0.5 %.
At this level of precision, the uncertainties in the electromagnetic and strong isospin-breaking corrections are not completely negligible. The error estimate in the LO result (17) indicates the expected order of magnitude. The uncertainties in \(m_{s}\) and \(m_{ud}\) associated with the heavy sea quarks cancel at least partly. In view of this, we ascribe a total 1.5 % uncertainty to these two sources of error. Thus, we are convinced that our final estimate,
\(m_{s}/m_{ud} = 27.46(15)(41),\)  (18)
is on the conservative side, with a total 1.5 % uncertainty. It is also fully consistent with the ratio computed from our individual quark masses in (14), \(m_{s}/m_{ud}=27.6(6)\), which has a larger 2.2 % uncertainty. In (18) the first error comes from the averaging of the lattice results, and the second is the one that we add to account for the neglect of isospinbreaking and heavy seaquark effects.
The lattice results show that the LO prediction of \(\chi \)PT in (17) receives only small corrections from higher orders of the chiral expansion: according to (18), these generate a shift of \(5.7\pm 1.5\, \%\). Our estimate therefore does not represent a very sharp determination of the higher-order contributions.
The ratio \(m_{s}/m_{ud}\) can also be extracted from the masses of the neutral Nambu–Goldstone bosons: neglecting effects of order \((m_{u}-m_{d})^2\) also here, the leading-order formula reads \(m_{s}/m_{ud}\mathop {=}\limits ^{\mathrm{LO}}\frac{3}{2}\hat{M}_\eta ^2/\hat{M}_\pi ^2-\frac{1}{2}\). Numerically, this gives \(m_{s}/m_{ud}\mathop {=}\limits ^{\mathrm{LO}}24.2\). The relation has the advantage that the e.m. corrections are expected to be much smaller here, but it is more difficult to calculate the \(\eta \) mass on the lattice. The comparison with (18) shows that, in this case, the contributions of NLO are somewhat larger: \(14\pm 2\) %.
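The neutral-channel number is equally easy to check; below, physical masses (PDG values in MeV, our own inputs) are used as a proxy for the hatted QCD masses, which is a good approximation for these neutral states.

```python
M_eta, M_pi0 = 547.862, 134.977   # PDG masses in MeV (our inputs)

# LO neutral-channel formula: m_s/m_ud = (3/2) M_eta^2 / M_pi^2 - 1/2
lo_ratio = 1.5 * (M_eta / M_pi0) ** 2 - 0.5
nlo_shift = 100 * (27.46 - lo_ratio) / lo_ratio  # vs. the N_f=2+1 average

print(f"LO: m_s/m_ud = {lo_ratio:.1f}, NLO contribution = {nlo_shift:.1f} %")
# -> LO: m_s/m_ud = 24.2, NLO contribution = 13.4 %
```

The central shift comes out near 13 %, consistent with the \(14\pm 2\,\%\) quoted once the uncertainty of the average is propagated.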
3.4 Lattice determination of \(m_{u}\) and \(m_{d}\)
The determination of \(m_{u}\) and \(m_{d}\) separately requires additional input. MILC 09A [37] uses the mass difference between \(K^0\) and \(K^+\), from which they subtract electromagnetic effects using Dashen’s theorem with corrections, as discussed in Sect. 3.1. The up and down sea quarks remain degenerate in their calculation, fixed to the value of \(m_{ud}\) obtained from \(M_{\pi ^0}\).
To determine \(m_{u}/m_{d}\), BMW 10A, 10B [22, 23] follow a slightly different strategy. They obtain this ratio from their result for \(m_{s}/m_{ud}\) combined with a phenomenological determination of the isospin-breaking quark-mass ratio \(Q=22.3(8)\), defined below in (24), from \(\eta \rightarrow 3\pi \) decays [30] (the decay \(\eta \rightarrow 3\pi \) is very sensitive to QCD isospin breaking but fairly insensitive to QED isospin breaking). As discussed in Sect. 3.5, the central value of the e.m. parameter \(\epsilon \) in (9) is taken from the same source.
RM123 11 [105] actually uses the e.m. parameter \(\epsilon =0.7(5)\) from the first edition of the FLAG review [1]. However, they estimate the effects of strong isospin breaking at first nontrivial order, by inserting the operator \(\frac{1}{2}(m_{u}-m_{d})\int (\bar{u}u-\bar{d}d)\) into correlation functions, while performing the gauge averages in the isospin limit. Applying these techniques, they obtain \((\hat{M}_{K^0}^2-\hat{M}_{K^+}^2)/(m_{d}-m_{u})=2.57(8)\,\mathrm{GeV}\). Combining this result with the phenomenological \((\hat{M}_{K^0}^2-\hat{M}_{K^+}^2)=6.05(63)\times 10^3\,\mathrm{MeV}^2\) determined with the above value of \(\epsilon \), they get \((m_{d}-m_{u})=2.35(8)(24)\,\mathrm{MeV}\), where the first error corresponds to the lattice statistical and systematic uncertainties combined in quadrature, while the second arises from the uncertainty on \(\epsilon \). Note that below we quote results from RM123 11 for \(m_{u}\), \(m_{d}\) and \(m_{u}/m_{d}\). As described in Table 5, we obtain them by combining RM123 11’s result for \((m_{d}-m_{u})\) with ETM 10B’s result for \(m_{ud}\).
Instead of subtracting electromagnetic effects using phenomenology, RBC 07 [34] and Blum 10 [32] actually include a quenched electromagnetic field in their calculation. This means that their results include corrections to Dashen’s theorem, albeit only in the presence of quenched electromagnetism. Since the up- and down-quarks in the sea are treated as degenerate, very small isospin corrections are neglected, as in MILC’s calculation.
PACS-CS 12 [76] takes the inclusion of isospin-breaking effects one step further. Using reweighting techniques, it also includes electromagnetic and \(m_{u}\ne m_{d}\) effects in the sea.
Lattice results for \(m_{u}\), \(m_{d}\) and \(m_{u}/m_{d}\) are summarised in Table 5. In order to discuss them, we consider the LO formula
\(m_{u}/m_{d} \mathop {=}\limits ^{\mathrm{LO}} \big (\hat{M}_{K^+}^2-\hat{M}_{K^0}^2+\hat{M}_{\pi ^+}^2\big )/\big (\hat{M}_{K^0}^2-\hat{M}_{K^+}^2+\hat{M}_{\pi ^+}^2\big ).\)  (19)
Using Eqs. (6)–(8) to express the meson masses in QCD in terms of the physical ones and linearising in the corrections, this relation takes the form
Inserting the estimates (9) and adding errors in quadrature, the LO prediction becomes
\(m_{u}/m_{d} \mathop {=}\limits ^{\mathrm{LO}} 0.50(2).\)  (21)
Again, the quoted error exclusively accounts for the errors attached to the estimates (9) for the epsilons—contributions of nonleading order are ignored. The uncertainty in the leading-order prediction is dominated by the one in the coefficient \(\epsilon \), which specifies the difference between the meson squared-mass splittings generated by the e.m. interaction in the kaon and pion multiplets. The reduction in the error on this coefficient since the previous review [1] reduces the uncertainty on the LO value of \(m_{u}/m_{d}\) given in (21) by a factor of a little less than 2.
It is interesting to compare the assumptions made or results obtained by the different collaborations for the violation of Dashen’s theorem. The input used in MILC 09A is \(\epsilon =1.2(5)\) [37], while the \(N_\mathrm{f}=2\) computation of RM123 13 finds \(\epsilon =0.79(18)(18)\) [45]. As discussed in Sect. 3.5, the value of \(Q\) used by BMW 10A, 10B [22, 23] gives \(\epsilon =0.70(28)\) at NLO (see (31)). On the other hand, RBC 07 [34] and Blum 10 [32] obtain the results \(\epsilon =0.13(4)\) and \(\epsilon =0.5(1)\). Note that PACS-CS 12 [76] do not provide results which allow us to determine \(\epsilon \) directly. However, using their result for \(m_{u}/m_{d}\), together with (20), and neglecting NLO terms, one finds \(\epsilon =1.6(6)\), which is difficult to reconcile with what is known from phenomenology (see Sects. 3.1 and 3.5). Since the values assumed or obtained for \(\epsilon \) differ, it does not come as a surprise that the determinations of \(m_{u}/m_{d}\) are different.
These values of \(\epsilon \) are also interesting because they allow us to estimate the chiral corrections to the LO prediction (21) for \(m_{u}/m_{d}\). Indeed, evaluating the relation (20) for the values of \(\epsilon \) given above, and neglecting all other corrections in this equation, yields the LO values \((m_{u}/m_{d})^{\mathrm {LO}}=0.46(4)\), 0.547(3), 0.52(1), 0.50(2), 0.49(2) for MILC 09A, RBC 07, Blum 10, BMW 10A, 10B and RM123 13, respectively. However, in comparing these numbers to the nonperturbative results of Table 5 one must be careful not to double count the uncertainty arising from \(\epsilon \). One way to obtain a sharp comparison is to consider the ratio of the results of Table 5 to the LO values \((m_{u}/m_{d})^\mathrm{LO}\), in which the uncertainty from \(\epsilon \) cancels to good accuracy. Here we will assume for simplicity that they cancel completely and will drop all uncertainties related to \(\epsilon \). For \(N_\mathrm{f} = 2\) we consider RM123 13 [45], which updates RM123 11 and has no red dots. Since the uncertainties common to \(\epsilon \) and \(m_{u}/m_{d}\) are not explicitly given in [45], we have to estimate them. For that we use the leadingorder result for \(m_{u}/m_{d}\), computed with RM123 13’s value for \(\epsilon \). Its error bar is the contribution of the uncertainty on \(\epsilon \) to \((m_{u}/m_{d})^\mathrm{LO}\). To good approximation this contribution will be the same for the value of \(m_{u}/m_{d}\) computed in [45]. Thus, we subtract it in quadrature from RM123 13’s result in Table 5 and compute \((m_{u}/m_{d})/(m_{u}/m_{d})^\mathrm{LO}\), dropping uncertainties related to \(\epsilon \). We find \((m_{u}/m_{d})/(m_{u}/m_{d})^\mathrm{LO} = 1.02(6)\). This result suggests that chiral corrections in the case of \(N_\mathrm{f}=2\) are negligible. For the two most accurate \(N_\mathrm{f}=2+1\) calculations, those of MILC 09A and BMW 10A, 10B, this ratio of ratios is 0.94(2) and 0.90(1), respectively. 
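The de-correlation step described above can be sketched as follows; the inputs in the demo call are hypothetical placeholders, not the actual Table 5 entries.

```python
import math

def ratio_to_lo(result, err_result, lo_value, err_from_eps):
    """(m_u/m_d) / (m_u/m_d)^LO with the epsilon-related error removed.

    The epsilon-induced part of the error is subtracted in quadrature from
    the full error of the lattice result before forming the ratio, so that
    the common epsilon uncertainty is not double-counted.
    """
    cleaned = math.sqrt(max(err_result**2 - err_from_eps**2, 0.0))
    ratio = result / lo_value
    return ratio, ratio * cleaned / result

# hypothetical inputs: lattice result 0.50(4), LO value 0.49 with an
# epsilon-induced error of 0.02 (placeholders, NOT the Table 5 numbers)
r, dr = ratio_to_lo(0.50, 0.04, 0.49, 0.02)
print(f"(m_u/m_d)/(m_u/m_d)^LO = {r:.2f}({dr:.2f})")
# -> (m_u/m_d)/(m_u/m_d)^LO = 1.02(0.07)
```

The assumption that the subtraction removes the full correlation is the same simplification made in the text.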
Though these two numbers are not fully consistent within our rough estimate of the errors, they indicate that higher-order corrections to (21) are negative and about 8 % when \(N_\mathrm{f}=2+1\). In the following, we will take them to be \(-8(4)\,\%\). The fact that these corrections are seemingly larger than, and of opposite sign to, those in the \(N_\mathrm{f}=2\) case is not understood at this point. It could be an effect associated with the quenching of the strange quark. It could also be due to the fact that the RM123 13 calculation does not probe deeply enough into the chiral regime—it has \(M_\pi \gtrsim 270\,\mathrm{MeV}\)—to pick up on important chiral corrections. Of course, being less than a two-standard-deviation effect, it may be that there is no problem at all and that differences from the LO result are actually small.
Given the exploratory nature of the RBC 07 calculation, its results do not allow us to draw solid conclusions about the e.m. contributions to \(m_{u}/m_{d}\) for \(N_\mathrm{f}=2\). As discussed in Sect. 3.3.2, the \(N_\mathrm{f}=2+1\) results of Blum 10 and PACS-CS 12 do not pass our selection criteria either. We therefore resort to the phenomenological estimates of the electromagnetic self-energies discussed in Sect. 3.1, which are validated by recent, preliminary lattice results.
Since RM123 13 [45] includes a lattice estimate of e.m. corrections, for the \(N_\mathrm{f}=2\) final results we simply quote the values of \(m_{u}\), \(m_{d}\) and \(m_{u}/m_{d}\) from RM123 13 given in Table 5:
with errors of roughly 10, 5 and 8 %, respectively. In these results, the errors are obtained by combining the lattice statistical and systematic errors in quadrature.
For \(N_\mathrm{f}=2+1\) there is to date no final, published computation of e.m. corrections. Thus, we take the LO estimate for \(m_{u}/m_{d}\) of (21) and use the \(-\)8(4) % obtained above as an estimate of the size of the corrections from higher orders in the chiral expansion. This gives \(m_{u}/m_{d}=0.46(3)\). The two individual masses can then be worked out from the estimate (14) for their mean. Therefore, for \(N_\mathrm{f}=2+1\) we obtain
In these results, the first error represents the lattice statistical and systematic errors, combined in quadrature, while the second arises from the uncertainties associated with e.m. corrections of (9). The estimates in (23) have uncertainties of order 5, 3 and 7 %, respectively.
Naively propagating errors to the end, we obtain \((m_{u}/m_{d})_{N_\mathrm{f}=2}/(m_{u}/m_{d})_{N_\mathrm{f}=2+1}=1.09(10)\). If instead of (22) we use the results from RM123 11, modified by the e.m. corrections in (9), as was done in our previous review, we obtain \((m_{u}/m_{d})_{N_\mathrm{f}=2}/(m_{u}/m_{d})_{N_\mathrm{f}=2+1}=1.11(7)(1)\), confirming again the strong cancellation of e.m. uncertainties in the ratio. The \(N_\mathrm{f}=2\) and \(2+1\) results are compatible at the 1 to 1.5 \(\sigma \) level.
It is interesting to note that in the results above, the errors are no longer dominated by the uncertainties in the input used for the electromagnetic corrections, though these are still significant at the level of precision reached in the \(N_\mathrm{f}=2+1\) results. This is due to the reduction in the error on \(\epsilon \) discussed in Sect. 3.1. Nevertheless, the comparison of Eqs. (21) and (23) indicates that more than half of the difference between the prediction \(m_{u}/m_{d}=0.558\) obtained from Weinberg’s mass formulae [100] and the result for \(m_{u}/m_{d}\) obtained on the lattice stems from electromagnetism, with the higher orders in the chiral expansion generating a comparable correction.
In view of the fact that a massless up-quark would solve the strong CP problem, many authors have considered this an attractive possibility, but the results presented above exclude it: the value of \(m_{u}\) in (23) differs from zero by 20 standard deviations. We conclude that nature solves the strong CP problem differently. This conclusion relies on lattice calculations of kaon masses and on the phenomenological estimates of the e.m. self-energies discussed in Sect. 3.1. The uncertainties therein currently represent the limiting factor in determinations of \(m_{u}\) and \(m_{d}\). As demonstrated in [32–34, 40–44, 50], lattice methods can be used to calculate the e.m. self-energies. Further progress on the determination of the light-quark masses hinges on an improved understanding of the e.m. effects.
3.5 Estimates for \(R\) and \(Q\)
The quarkmass ratios
compare SU(3) breaking with isospin breaking. The quantity \(Q\) is of particular interest because of a low-energy theorem [106], which relates it to a ratio of meson masses,
Chiral symmetry implies that the expansion of \(Q_M^2\) in powers of the quark masses (i) starts with \(Q^2\) and (ii) does not receive any contributions at NLO:
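For orientation, the quantities involved and the low-energy theorem can be written in the standard notation of the \(\chi \)PT literature (a sketch, with meson masses understood in pure QCD):

```latex
R \equiv \frac{m_s - m_{ud}}{m_d - m_u}\,, \qquad
Q^2 \equiv \frac{m_s^2 - m_{ud}^2}{m_d^2 - m_u^2}\,, \qquad
Q_M^2 = \frac{M_K^2}{M_\pi^2}\,
        \frac{M_K^2 - M_\pi^2}{M_{K^0}^2 - M_{K^+}^2}\,, \qquad
Q_M^2 = Q^2\,\bigl[1 + \mathcal{O}(m_q^2)\bigr].
```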
Inserting the estimates for the mass ratios \(m_{s}/m_{ud}\) and \(m_{u}/m_{d}\) given for \(N_\mathrm{f}=2\) in Eqs. (13) and (22), respectively, we obtain
where the errors have been propagated naively and the e.m. uncertainty has been separated out, as discussed in the third paragraph after (21). Thus, the meaning of the errors is the same as in (23). These numbers agree within errors with those reported in [45] where values for \(m_{s}\) and \(m_{ud}\) are taken from ETM 10B [60].
For \(N_\mathrm{f}=2+1\), we use Eqs. (18) and (23) and obtain
where the meaning of the errors is the same as above. The \(N_\mathrm{f}=2\) and \(N_\mathrm{f}=2+1\) results are compatible within 2\(\sigma \), even taking the correlations between e.m. effects into account.
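The central values quoted for \(N_\mathrm{f}=2+1\) can be checked by expressing \(R\) and \(Q\) through the two mass ratios: dividing numerators and denominators by \(m_{ud}\) gives \(R=(S-1)(1+r)/[2(1-r)]\) and \(Q^2=(S^2-1)(1+r)/[4(1-r)]\), with \(S=m_{s}/m_{ud}\) and \(r=m_{u}/m_{d}\). A sketch, taking \(S\simeq 27.46\) as an assumed central value for Eq. (18) and \(r=0.46\) from (23):

```python
import math

def R_and_Q(S, r):
    """R = (m_s - m_ud)/(m_d - m_u) and Q^2 = (m_s^2 - m_ud^2)/(m_d^2 - m_u^2),
    rewritten in terms of S = m_s/m_ud and r = m_u/m_d."""
    R = (S - 1.0) * (1.0 + r) / (2.0 * (1.0 - r))
    Q = math.sqrt((S**2 - 1.0) * (1.0 + r) / (4.0 * (1.0 - r)))
    return R, Q

# Central values only; assumed inputs, errors not propagated here.
R, Q = R_and_Q(27.46, 0.46)
print(f"R = {R:.1f}, Q = {Q:.1f}")
```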
It is interesting to use these results to study the size of chiral corrections in the relations of \(R\) and \(Q\) to their expressions in terms of meson masses. To investigate this issue, we use \(\chi \)PT to express the quarkmass ratios in terms of the pion and kaon masses in QCD and then again use Eqs. (6)–(8) to relate the QCD masses to the physical ones. Linearizing in the corrections, this leads to
While the first relation only holds to LO of the chiral perturbation series, the second remains valid at NLO, on account of the low-energy theorem mentioned above. The first terms on the right-hand side represent the values of \(R\) and \(Q\) obtained with the Weinberg leading-order formulae for the quark-mass ratios [100]. Inserting the estimates (9), we find that the e.m. corrections lower the Weinberg values to \(R_M= 36.7(3.3)\) and \(Q_M= 22.3(9)\), respectively.
Comparison of \(R_M\) and \(Q_M\) with the full results quoted above gives a handle on higherorder terms in the chiral expansion. Indeed, the ratios \(R_M/R\) and \(Q_M/Q\) give NLO and NNLO (and higher) corrections to the relations \(R \mathop {=}\limits ^{\mathrm{LO}}R_M\) and \(Q\mathop {=}\limits ^{\mathrm{NLO}}Q_M\), respectively. The uncertainties due to the use of the e.m. corrections of (9) are highly correlated in the numerators and denominators of these ratios, and we make the simplifying assumption that they cancel in the ratio. Thus, for \(N_\mathrm{f}=2\) we evaluate (29) and (30) using \(\epsilon =0.79(18)(18)\) from RM123 13 [45] and the other corrections from (9), dropping all uncertainties. We divide them by the results for \(R\) and \(Q\) in (27), omitting the uncertainties due to e.m. We obtain \(R_M/R\simeq 0.88(8)\) and \(Q_M/Q\simeq 0.91(5)\). We proceed analogously for \(N_\mathrm{f}=2+1\), using \(\epsilon =0.70(3)\) from (9) and \(R\) and \(Q\) from (28), and find \(R_M/R\simeq 1.02(5)\) and \(Q_M/Q\simeq 0.99(3)\). The chiral corrections appear to be small for \(N_\mathrm{f}=2+1\), especially those in the relation of \(Q\) to \(Q_M\). This is less true for \(N_\mathrm{f}=2\), where the NNLO and higher corrections to \(Q=Q_M\) could be significant. However, as for other quantities which depend on \(m_{u}/m_{d}\), this difference is not significant.
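The central values of these ratios are straightforward to verify; the \(R\) and \(Q\) inputs below are assumed stand-ins for the \(N_\mathrm{f}=2+1\) values of Eq. (28):

```python
# Assumed stand-ins for Eq. (28): R ~ 35.8, Q ~ 22.6 (central values only).
R, Q = 35.8, 22.6
R_M, Q_M = 36.7, 22.3          # e.m.-corrected Weinberg values quoted above

rR = R_M / R                   # compatible with the quoted 1.02(5)
rQ = Q_M / Q                   # compatible with the quoted 0.99(3)
print(round(rR, 2), round(rQ, 2))
```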
As mentioned in Sect. 3.1, there is a phenomenological determination of \(Q\) based on the decay \(\eta \rightarrow 3\pi \) [107, 108]. The key point is that the transition \(\eta \rightarrow 3\pi \) violates isospin conservation. The dominant contribution to the transition amplitude stems from the mass difference \(m_{u}-m_{d}\). At NLO of \(\chi \)PT, the QCD part of the amplitude can be expressed in a parameter-free manner in terms of \(Q\). It is well known that the electromagnetic contributions to the transition amplitude are suppressed (a thorough recent analysis is given in [109]). This implies that the result for \(Q\) is less sensitive to the electromagnetic uncertainties than the value obtained from the masses of the Nambu–Goldstone bosons. For a recent update of this determination and for further references to the literature, we refer to [110]. Using dispersion theory to pin down the momentum dependence of the amplitude, the observed decay rate implies \(Q=22.3(8)\) (since the uncertainty quoted in [110] does not include an estimate for all sources of error, we have retained the error estimate given in [104], which is twice as large). The formulae for the corrections of NNLO are available also in this case [111]—the poor knowledge of the effective coupling constants, particularly of those that are relevant for the dependence on the quark masses, is currently the limiting factor encountered in the application of these formulae.
As was to be expected, the central value of \(Q\) obtained from \(\eta \) decay agrees exactly with the central value obtained from the low-energy theorem: we have used that theorem to estimate the coefficient \(\epsilon \), which dominates the e.m. corrections. Using the numbers for \(\epsilon _{m}\), \(\epsilon _{\pi ^0}\) and \(\epsilon _{K^0}\) in (9) and adding the corresponding uncertainties in quadrature to those in the phenomenological result for \(Q\), we obtain
The estimate (9) for the size of the coefficient \(\epsilon \) is taken from here, as it is confirmed by the most recent, preliminary lattice determinations [40–45].
Our final results for the masses \(m_{u}\), \(m_{d}\), \(m_{ud}\), \(m_{s}\) and the mass ratios \(m_{u}/m_{d}\), \(m_{s}/m_{ud}\), \(R\), \(Q\) are collected in Tables 6 and 7. We separate \(m_{u}\), \(m_{d}\), \(m_{u}/m_{d}\), \(R\) and \(Q\) from \(m_{ud}\), \(m_{s}\) and \(m_{s}/m_{ud}\), because the latter are completely dominated by lattice results while the former still include some phenomenological input.
4 Leptonic and semileptonic kaon and pion decay and \(V_{ud}\) and \(V_{us}\)
This section summarises state-of-the-art lattice calculations of the leptonic kaon and pion decay constants and the kaon semileptonic decay form factor, and provides an analysis in view of the Standard Model. With respect to the previous edition of the FLAG review [1], the data in this section have been updated, correlations of lattice data are now taken into account in all the analyses, and a subsection on the individual decay constants \(f_K\) and \(f_\pi \) (rather than only their ratio) has been included. Furthermore, when combining lattice data with experimental results we now take into account the strong SU(2) isospin-breaking correction in chiral perturbation theory for the ratio of leptonic decay constants \(f_K/f_\pi \).
4.1 Experimental information concerning \(V_{ud}\), \(V_{us}\), \(f_+(0)\) and \( {f_{K^\pm }}/{f_{\pi ^\pm }}\)
The following review relies on the fact that precision experimental data on kaon decays very accurately determine the product \(|V_{us}|f_+(0)\) and the ratio \(|V_{us}/V_{ud}|\,{f_{K^\pm }}/{f_{\pi ^\pm }}\) [112]:
Here and in the following, \(f_{K^\pm }\) and \(f_{\pi ^\pm }\) are the isospin-broken decay constants in QCD (the electromagnetic effects have already been subtracted in the experimental analysis using chiral perturbation theory). We will refer to the decay constants in the SU(2) isospin-symmetric limit as \(f_{K}\) and \(f_{\pi }\). \(V_{ud}\) and \(V_{us}\) are elements of the Cabibbo–Kobayashi–Maskawa matrix and \(f_+(t)\) represents one of the form factors relevant for the semileptonic decay \(K^0\rightarrow \pi ^-\ell ^+\nu \), which depends on the momentum transfer \(t\) between the two mesons. What matters here is the value at \(t=0\): \(f_+(0)\equiv f_+^{K^0\pi ^-}(t)\big |_{\,t\rightarrow 0}\). The pion and kaon decay constants are defined by^{Footnote 9}
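In the convention assumed throughout this section, the defining axial-current matrix elements take the standard form (our sketch):

```latex
\langle 0 \,|\, \bar d\, \gamma_\mu \gamma_5\, u \,|\, \pi^+(p) \rangle
   = i p_\mu\, f_{\pi^+}\,, \qquad
\langle 0 \,|\, \bar s\, \gamma_\mu \gamma_5\, u \,|\, K^+(p) \rangle
   = i p_\mu\, f_{K^+}\,.
```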
In this normalisation, \(f_{\pi ^\pm } \simeq 130\) MeV, \(f_{K^\pm }\simeq 155\) MeV.
The measurement of \(V_{ud}\) based on superallowed nuclear \(\beta \) transitions has now become remarkably precise. The result of the update of Hardy and Towner [115], which is based on 20 different superallowed transitions, reads^{Footnote 10}
The matrix element \(V_{us}\) can be determined from semi-inclusive \(\tau \) decays [122–125]. Separating the inclusive decay \(\tau \rightarrow \hbox {hadrons}+\nu \) into non-strange and strange final states, e.g. HFAG 12 [126] obtain
Maltman et al. [124, 127, 128] and Gamiz et al. [129, 130] arrive at very similar values.
In principle, \(\tau \) decay offers a clean measurement of \(V_{us}\), but a number of open issues remain to be clarified. In particular, the value of \(V_{us}\) as determined from inclusive \(\tau \) decays differs from the result one obtains by assuming three-flavour SM unitarity by more than three standard deviations [126]. It is important to understand this apparent tension better. The most interesting possibility is that \(\tau \) decay involves new physics, but more work on both the theoretical (see e.g. [131–134]) and experimental side is required.
The experimental results in Eq. (32) are for the semileptonic decay of a neutral kaon into a negatively charged pion and the charged pion and kaon leptonic decays, respectively, in QCD. In the case of the semileptonic decays the corrections for strong and electromagnetic isospin breaking in chiral perturbation theory at NLO have allowed for averaging the different experimentally measured isospin channels [112]. This is quite a convenient procedure as long as lattice QCD does not include strong or QED isospin-breaking effects. Lattice results for \(f_K/f_\pi \) are typically quoted for QCD with (squared) pion and kaon masses of \(M_\pi ^2=M_{\pi ^0}^2\) and \(M_K^2=\frac{1}{2} (M_{K^\pm }^2+M_{K^0}^2-M_{\pi ^\pm }^2+M_{\pi ^0}^2)\), for which the leading strong and electromagnetic isospin violations cancel. While progress is being made towards including strong and electromagnetic isospin breaking in the simulations (e.g. [19, 86, 105, 135–137]), for now contact with experimental results is made by correcting for the leading SU(2) isospin breaking, guided by chiral perturbation theory.
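The isospin-symmetric reference masses follow directly from this prescription; a quick check with the charged and neutral PDG masses reproduces the rounded values \(M_\pi =135\) MeV and \(M_K=495\) MeV used later in Sect. 4.3:

```python
import math

# PDG meson masses in MeV
M_pi_pm, M_pi_0 = 139.570, 134.977
M_K_pm,  M_K_0  = 493.677, 497.611

M_pi = M_pi_0                                        # M_pi^2 = M_{pi^0}^2
M_K = math.sqrt(0.5 * (M_K_pm**2 + M_K_0**2
                       - M_pi_pm**2 + M_pi_0**2))    # combination in the text
print(f"M_pi = {M_pi:.0f} MeV, M_K = {M_K:.0f} MeV")
```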
In the following we will start by presenting the lattice results for isospinsymmetric QCD. For any Standard Model analysis based on these results we then utilise chiral perturbation theory to correct for the leading isospinbreaking effects.
4.2 Lattice results for \(f_+(0)\) and \(f_K/f_\pi \)
The traditional way of determining \(V_{us}\) relies on using theory for the value of \(f_+(0)\), invoking the Ademollo–Gatto theorem [150]. Since this theorem only holds to leading order of the expansion in powers of \(m_{u}\), \(m_{d}\) and \(m_{s}\), theoretical models are used to estimate the corrections. Lattice methods have now reached the stage where quantities like \(f_+(0)\) or \(f_K/f_\pi \) can be determined to good accuracy. As a consequence, the uncertainties inherent in the theoretical estimates for the higherorder effects in the value of \(f_+(0)\) do not represent a limiting factor any more and we shall therefore not invoke those estimates. Also, we will use the experimental results based on nuclear \(\beta \) decay and \(\tau \) decay exclusively for comparison—the main aim of the present review is to assess the information gathered with lattice methods and to use it for testing the consistency of the SM and its potential to provide constraints for its extensions.
The database underlying the present review of the semileptonic form factor and the ratio of decay constants is listed in Tables 8 and 9. The properties of the lattice data play a crucial role for the conclusions to be drawn from these results: range of \(M_\pi \), size of \(L M_\pi \), continuum extrapolation, extrapolation in the quark masses, finite-size effects, etc. The key features of the various data sets are characterised by means of the colour code specified in Sect. 2.1. More detailed information on individual computations is compiled in Appendix B.2.
The quantity \(f_+(0)\) represents a matrix element of a strangeness-changing null-plane charge, \(f_+(0)=\langle K|Q^{us}|\pi \rangle \). The vector charges obey the commutation relations of the Lie algebra of SU(3), in particular \([Q^{us},Q^{su}]=Q^{uu-ss}\). This relation implies the sum rule \(\sum _n |\langle K|Q^{us}|n\rangle |^2-\sum _n |\langle K|Q^{su}|n\rangle |^2=1\). Since the contribution from the one-pion intermediate state to the first sum is given by \(f_+(0)^2\), the relation amounts to an exact representation for this quantity [151]:
While the first sum on the right extends over nonstrange intermediate states, the second runs over exotic states with strangeness \(\pm 2\) and is expected to be small compared to the first.
The expansion of \(f_+(0)\) in SU(3) chiral perturbation theory in powers of \(m_{u}\), \(m_{d}\) and \(m_{s}\) starts with \(f_+(0)=1+f_2+f_4+\cdots \,\) [56]. Since all of the low-energy constants occurring in \(f_2\) can be expressed in terms of \(M_\pi \), \(M_K\), \(M_\eta \) and \(f_\pi \) [152], the NLO correction is known. In the language of the sum rule (35), \(f_2\) stems from non-strange intermediate states with three mesons. Like all other non-exotic intermediate states, it lowers the value of \(f_+(0)\): \(f_2=-0.023\) when using the experimental value of \(f_\pi \) as input. The corresponding expressions have also been derived in quenched or partially quenched (staggered) chiral perturbation theory [140, 153]. At the same order in the SU(2) expansion [154], \(f_+(0)\) is parameterised in terms of \(M_\pi \) and two a priori unknown parameters. The latter can be determined from the dependence of the lattice results on the masses of the quarks. Note that any calculation that relies on the \(\chi \)PT formula for \(f_2\) is subject to the uncertainties inherent in NLO results: instead of using the physical value of the pion decay constant \(f_\pi \), one may, for instance, work with the constant \(f_0\) that occurs in the effective Lagrangian and represents the value of \(f_\pi \) in the chiral limit. Although trading \(f_\pi \) for \(f_0\) in the expression for the NLO term affects the result only at NNLO, it may make a significant numerical difference in calculations where the latter are not explicitly accounted for (the lattice results concerning the value of the ratio \(f_\pi /f_0\) are reviewed in Sect. 5.2).
The lattice results shown in the left panel of Fig. 4 indicate that the higher-order contributions \(\Delta f\equiv f_+(0)-1-f_2\) are negative and thus amplify the effect generated by \(f_2\). This confirms the expectation that the exotic contributions are small. The entries in the lower part of the left panel represent various model estimates for \(f_4\). In [175] the symmetry-breaking effects are estimated in the framework of the quark model. The more recent calculations are more sophisticated, as they make use of the known explicit expression for the \(K_{\ell 3}\) form factors to NNLO in \(\chi \)PT [174, 176]. The corresponding formula for \(f_4\) accounts for the chiral logarithms occurring at NNLO and is not subject to the ambiguity mentioned above.^{Footnote 11} The numerical result, however, depends on the model used to estimate the low-energy constants occurring in \(f_4\) [171–174]. The figure indicates that the most recent numbers obtained in this way correspond to a positive rather than a negative value for \(\Delta f\). We note that FNAL/MILC 12 [140] have made an attempt at determining some of the low-energy constants appearing in \(f_4\) from lattice data.
4.3 Direct determination of \(f_+(0)\) and \(f_{K^\pm }/f_{\pi ^\pm }\)
All lattice results for the form factor and the ratio of decay constants that we summarise here (Tables 8, 9) have been computed in isospin-symmetric QCD. The reason for this unphysical parameter choice is that simulations of SU(2) isospin-breaking effects in lattice QCD, while ultimately the cleanest way of predicting these effects, are still rare and in their infancy [32, 33, 40, 43, 105, 136, 137]. In the meantime one either relies on chiral perturbation theory [36, 56] to estimate the correction to the isospin limit or calculates the breaking at leading order in \((m_{u}-m_{d})\) in the valence-quark sector by making a suitable choice of the physical point to which the lattice data are extrapolated. Aubin 08, MILC and Laiho 11, for example, extrapolate their simulation results for the kaon decay constant to the physical value of the \(u\)-quark mass (the results for the pion decay constant are extrapolated to the value of the average light-quark mass \(\hat{m}\)). This then defines their prediction for \(f_{K^\pm }/f_{\pi ^\pm }\).
As long as the majority of collaborations present their final results in the isospinsymmetric limit (as we will see this comprises the majority of results which qualify for inclusion into a FLAG average) we prefer to provide the overview of world data in Fig. 4 in this limit.
To this end we compute the isospinsymmetric ratio \(f_{K}/f_{\pi }\) for Aubin 08, MILC and Laiho 11 using NLO chiral perturbation theory [56, 177] where
and where [177],
We use as input \(\epsilon _\mathrm{SU(2)}=\sqrt{3}/(4R)\) with the FLAG result for \(R\) of Eq. (28), \(F_0=f_0/\sqrt{2}=80(20)\) MeV, \(M_\pi =135\) MeV and \(M_K=495\) MeV (we decided to choose a conservative uncertainty on \(f_0\) in order to reflect the magnitude of potential higher-order corrections) and obtain for example
Collaboration | \(f_{K^\pm }/f_{\pi ^\pm }\) | \(\delta _\mathrm{SU(2)}\) | \(f_K/f_\pi \)
Aubin 08 | 1.202(11)(9)(2)(5) | \(-\)0.0044(8) | 1.205(11)(2)(9)(2)(5)
MILC 10 | 1.197(2)(\(^{+3}_{-7}\)) | \(-\)0.0043(7) | 1.200(2)(2)(\(^{+3}_{-7}\))
Laiho 11 | 1.191(16)(17) | \(-\)0.0041(9) | 1.193(16)(2)(17)
(and similarly also for all other \(N_\mathrm{f}=2+1\) and \(N_\mathrm{f}=2+1+1\) results where applicable). In the last column the first error is statistical and the second is the one from the isospin correction (the remaining errors are quoted in the same order as in the original data). For \(N_\mathrm{f}=2\) a dedicated study of the strong-isospin correction in lattice QCD does exist. The result of the RM123 collaboration [105] amounts to \(\delta _\mathrm{SU(2)}=-0.0078(7)\) and we will later use this result for the correction in the case of \(N_\mathrm{f}=2\). We note that this value for the strong-isospin correction is incompatible with the above results based on SU(3) chiral perturbation theory. One would not expect the strange sea-quark contribution to be responsible for such a large effect. Whether higher-order effects in chiral perturbation theory or other sources are responsible still needs to be understood. To remain on the conservative side we attach the difference between the two- and three-flavour results as an additional uncertainty to the result based on chiral perturbation theory. For the further analysis we add both errors in quadrature.
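The numbers in the table can be reproduced from the inputs listed above, assuming that Eq. (36) takes the multiplicative form \(f_{K^\pm }/f_{\pi ^\pm }=(f_K/f_\pi )\sqrt{1+\delta _\mathrm{SU(2)}}\) and that Eq. (37) has the standard NLO structure (both are our reconstructions; \(R\simeq 35.8\) is likewise an assumed stand-in for Eq. (28)):

```python
import math

def delta_su2(fK_fpi, R=35.8, F0=80.0, M_pi=135.0, M_K=495.0):
    """NLO SU(3) ChPT estimate of the strong-isospin correction,
    assuming the standard form of Eq. (37) (our reconstruction)."""
    eps_su2 = math.sqrt(3.0) / (4.0 * R)       # epsilon_SU(2) = sqrt(3)/(4R)
    f0_sq = 2.0 * F0**2                        # f0 = sqrt(2) * F0
    chi_log = (M_K**2 - M_pi**2
               - M_pi**2 * math.log(M_K**2 / M_pi**2))
    return math.sqrt(3.0) * eps_su2 * (
        -(4.0 / 3.0) * (fK_fpi - 1.0)
        + 2.0 * chi_log / (3.0 * (4.0 * math.pi)**2 * f0_sq))

# Aubin 08: invert the (small) correction starting from the
# isospin-broken ratio quoted in the table above.
ratio_pm = 1.202                      # f_{K^+-}/f_{pi^+-}
delta = delta_su2(1.205)              # evaluated at the corrected ratio
ratio_sym = ratio_pm / math.sqrt(1.0 + delta)
print(f"delta = {delta:.4f}, f_K/f_pi = {ratio_sym:.3f}")
```

Up to rounding in the inputs, this reproduces the Aubin 08 row (\(\delta \simeq -0.0045\), \(f_K/f_\pi \simeq 1.205\)).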
The plots in Fig. 4 illustrate our compilation of data for \(f_+(0)\) and \(f_K/f_\pi \). In both cases the lattice data are largely consistent even when comparing simulations with different \(N_\mathrm{f}\). We now proceed to form the corresponding averages, separately for the data with \(N_\mathrm{f}=2+1+1\), \(N_\mathrm{f}=2+1\) and \(N_\mathrm{f}=2\) dynamical flavours and in the following will refer to these averages as the “direct” determinations.
For \(f_+(0)\) there are currently two computational strategies: FNAL/MILC 12 and FNAL/MILC 13 use the Ward identity relating the \(K\rightarrow \pi \) form factor at zero momentum transfer to the matrix element \(\langle \pi |S|K\rangle \) of the flavour-changing scalar current. Peculiarities of the staggered fermion discretisation used by FNAL/MILC (see [140]) make this the favoured choice. The other collaborations instead compute the vector-current matrix element \(\langle \pi |V_\mu |K\rangle \). Apart from MILC 13C, all simulations in Table 8 involve unphysically heavy quarks and therefore the lattice data need to be extrapolated to the physical pion and kaon masses corresponding to the \(K^0\rightarrow \pi ^-\) channel. We note that all state-of-the-art computations of \(f_+(0)\) use partially twisted boundary conditions, which allow one to determine the form-factor results directly at the relevant kinematical point \(q^2=0\) [178, 179].
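The Ward identity in question can be sketched in standard notation (our reconstruction, with \(m_l\) the light-quark mass): since the combination \((m_s-m_l)S\) is renormalisation-group invariant, the scalar matrix element yields the form factor without any current renormalisation,

```latex
\langle \pi \,|\, S \,|\, K \rangle
  = \frac{M_K^2 - M_\pi^2}{m_s - m_l}\; f_0(q^2)\,,
\qquad f_0(0) = f_+(0)\,.
```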
The colour code in Table 8 shows that for \(f_+(0)\), presently only the result of ETM (we will be using ETM 09A [146]) with \(N_\mathrm{f}=2\) and the results by the FNAL/MILC and RBC/UKQCD collaborations with \(N_\mathrm{f}=2+1\) dynamical flavours of fermions are without a red tag. The latter two results, \(f_+(0) =0.9670(20)(^{+18}_{-46})\) (RBC/UKQCD 13) and \(f_+(0) =0.9667(23)(33)\) (FNAL/MILC 12), agree very well. This is reassuring given that the two collaborations use different fermion discretisations (staggered fermions in the case of FNAL/MILC and domain-wall fermions in the case of RBC/UKQCD). Moreover, in the case of FNAL/MILC the form factor has been determined from the scalar-current matrix element while in the case of RBC/UKQCD it has been determined from the matrix element of the vector current. To a certain extent both simulations are therefore expected to be affected by different systematic effects.
The result FNAL/MILC 12 is from simulations reaching down to a lightest RMS pion mass of about 380 MeV (the lightest valence pion mass for one of their ensembles is about 260 MeV). Their combined chiral and continuum extrapolation (results for two lattice spacings) is based on NLO staggered chiral perturbation theory supplemented by the continuum NNLO expression [174] and a phenomenological parameterisation of the breaking of the Ademollo–Gatto theorem at finite lattice spacing inherent in their approach. The \(p^4\) lowenergy constants entering the NNLO expression have been fixed in terms of external input [57].
RBC/UKQCD 13 has analysed results on ensembles with pion masses down to 170 MeV, mapping out nearly the complete range from the SU(3)-symmetric limit to the physical point. Although no finite-volume or cutoff effects were observed in the simulation results, the expected residual finite-volume effects, estimated in NLO chiral perturbation theory, and an order-of-magnitude estimate of cutoff effects were included in the overall error budget. The dominant systematic uncertainty is the one due to the extrapolation in the light-quark mass to the physical point, which RBC/UKQCD performed with the help of a model motivated by, and partly based on, chiral perturbation theory. The model dependence is estimated by comparing different ansätze for the mass extrapolation.
The ETM collaboration, which uses the twisted-mass discretisation, provides a comprehensive study of the systematics by presenting results for three lattice spacings [180] and simulating at light pion masses (down to \(M_\pi =260\) MeV). This allows one to constrain the chiral extrapolation, using both SU(3) [152] and SU(2) [154] chiral perturbation theory. Moreover, a rough estimate of the size of the effects due to quenching the strange quark is given, based on the comparison of the result for \(N_\mathrm{f}=2\) dynamical quark flavours [169] with the one in the quenched approximation, obtained earlier by the SPQcdR collaboration [181]. We note for completeness that ETM extrapolate their lattice results to the point corresponding to \(M_K^2\) and \(M_\pi ^2\) as defined at the end of Sect. 4.1. At the current level of precision, though, this is expected to be a tiny effect.
We now compute the \(N_\mathrm{f} =2+1\) FLAGaverage for \(f_+(0)\) based on FNAL/MILC 13 and RBC/UKQCD 12, which we consider uncorrelated, and for \(N_\mathrm{f}=2\) the only result fulfilling the FLAG criteria is ETM 09A,
The brackets in the second line indicate the statistical and systematic errors, respectively. The dominant source of systematic uncertainty in these simulations of \(f_+(0)\), the chiral extrapolation, will soon be removed by simulations with physical light-quark masses (see FNAL/MILC 13C [138] and RBC/UKQCD [182]).
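As a rough cross-check of the averaging, a naive inverse-variance combination of the two \(N_\mathrm{f}=2+1\) results quoted above (errors added in quadrature, the asymmetric RBC/UKQCD error symmetrised to its larger wing) can be coded as follows; FLAG's actual procedure treats correlations and asymmetric errors more carefully, so this is an illustration only:

```python
import math

def weighted_avg(results):
    """Naive inverse-variance weighted average of (value, error) pairs."""
    weights = [1.0 / e**2 for _, e in results]
    mean = sum(w * v for w, (v, _) in zip(weights, results)) / sum(weights)
    err = 1.0 / math.sqrt(sum(weights))
    return mean, err

fnal = (0.9667, math.hypot(0.0023, 0.0033))   # FNAL/MILC 12
rbc  = (0.9670, math.hypot(0.0020, 0.0046))   # RBC/UKQCD 13, larger wing
mean, err = weighted_avg([fnal, rbc])
print(f"f_+(0) = {mean:.4f} +/- {err:.4f}")
```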
In the case of the ratio of decay constants the data sets that meet the criteria formulated in the introduction are MILC 13A [157] and HPQCD 13A [156] with \(N_\mathrm{f}=2+1+1\), MILC 10 [159], BMW 10 [161], HPQCD/UKQCD 07 [165] and RBC/UKQCD 12 [25] (which is an update of RBC/UKQCD 10A [78]) with \(N_\mathrm{f}=2+1\) and ETM 09 [169] with \(N_\mathrm{f}=2\) dynamical flavours.
MILC 13A have determined the ratio of decay constants from a comprehensive set of ensembles of Highly Improved Staggered Quarks (HISQ) which have been tailored to reduce staggered taste-breaking effects. They have generated ensembles for four values of the lattice spacing (0.06–0.15 fm, scale set with \(f_\pi \)) and with the Goldstone pion masses approximately tuned to the physical point, which at least on their finest lattice approximately agrees with the RMS pion mass (averaged over the pion species, whose mass splittings originate from staggered taste breaking). Supplementary simulations with slightly heavier Goldstone pion mass allow one to extract the ratio of decay constants for the physical value of the light-quark masses by means of polynomial interpolations. In a second step MILC extrapolates the data to the continuum limit where eventually the ratio \( {f_{K^\pm }}/{f_{\pi ^\pm }}\) is extracted. The final result of their analysis is \( {f_{K^\pm }}/{f_{\pi ^\pm }}=1.1947(26)(33)(17)(2)\), where the errors are statistical, due to the continuum extrapolation, due to finite-volume effects and due to electromagnetic effects. MILC have found an increase in the central value of the ratio when going from the second-finest to their finest ensemble, and from this observation they derive the quoted 0.28 % uncertainty in the continuum extrapolation. They use NLO staggered chiral perturbation theory to correct for finite-volume effects and estimate the uncertainty of this approach by comparing to the alternative correction in NLO and NNLO continuum chiral perturbation theory.
Although MILC and HPQCD are independent collaborations, MILC shares its gaugefield ensembles with HPQCD 13A, whose study of \( {f_{K^\pm }}/{f_{\pi ^\pm }}\) is therefore based on the same set of ensembles bar the one for the finest lattice spacing (\(a=\) 0.09–0.15 fm, scale set with \(f_{\pi ^+}\) and relative scale set with the Wilson flow [183, 184]) supplemented by some simulation points with heavier quark masses. HPQCD employed a global fit based on continuum NLO SU(3) chiral perturbation theory for the decay constants supplemented by a model for higherorder terms including discretisation and finitevolume effects (61 parameters for 39 data points supplemented by Bayesian priors). Their final result is \(f_{K^\pm }/f_{\pi ^\pm }=1.1916(15)(12)(1)(10)\), where the errors are statistical, due to the continuum extrapolation, due to finitevolume effects and the last error contains the combined uncertainties from the chiral extrapolation, the scalesetting uncertainty, the experimental input in terms of \(f_{\pi ^+}\) and from the uncertainty in \(m_{u}/m_{d}\).
Despite the large overlap in primary lattice data, both collaborations arrive at surprisingly different error budgets. In the preparation of this report we interacted with both collaborations, trying to understand the origin of the differences. HPQCD use a rather new method to set the relative lattice scale for their ensembles, which, together with their more aggressive binning of the statistical samples, could explain the reduction in statistical error by a factor of 1.7 compared to MILC. Concerning the cutoff dependence, the finest lattice included in MILC’s analysis is \(a=0.06\) fm while the finest lattice in HPQCD’s case is \(a=0.09\) fm. MILC estimate the residual systematic after extrapolating to the continuum limit by taking as their systematic the split between the results of extrapolations with up to quartic and with only up to quadratic terms in \(a\). HPQCD on the other hand model cutoff effects within their global fit ansatz, up to and including terms of order \(a^8\). In this way HPQCD arrive at a systematic error due to the continuum limit which is smaller than MILC’s estimate by about a factor of 2.8. HPQCD explain^{Footnote 12} that in their setup, despite lacking the information from the fine ensemble (\(a=0.06\) fm), the approach to the continuum limit is reliably described by the chosen fit formula, leaving no room for the shift in the result on the finest lattice observed by MILC. They further explain that their different way of setting the relative lattice scale leads to reduced cutoff effects compared to MILC’s study. We now turn to finite-volume effects, which in the MILC result are the second-largest source of systematic uncertainty. NLO staggered chiral perturbation theory (MILC) or continuum chiral perturbation theory (HPQCD) was used to correct the lattice data towards the infinite-volume limit.
MILC then compared the finite-volume correction to the one obtained from the NNLO expression and took the difference as their estimate of the residual finite-volume error. In addition they checked the compatibility of the effective-theory predictions (NLO continuum, staggered and NNLO continuum chiral perturbation theory) against lattice data of different spatial extent. MILC's final estimate of the residual finite-volume systematic on \( {f_{K^\pm }}/{f_{\pi ^\pm }}\) is an order of magnitude larger than HPQCD's. We note that only HPQCD allows for taste-breaking terms in their fit model, while MILC postpones such studies to future work.
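The role that binning plays in the statistical-error comparison above can be illustrated with a standard blocking analysis. The sketch below uses synthetic autocorrelated data (an AR(1) toy series, not actual lattice measurements), and the function name is our own:

```python
import numpy as np

def blocked_error(samples, block_size):
    """Naive standard error after averaging consecutive samples into blocks.

    Autocorrelated Monte Carlo data underestimate the true error unless the
    samples are binned into blocks longer than the autocorrelation time.
    """
    n_blocks = len(samples) // block_size
    blocks = samples[:n_blocks * block_size].reshape(n_blocks, block_size).mean(axis=1)
    return blocks.std(ddof=1) / np.sqrt(n_blocks)

# Synthetic autocorrelated series (AR(1) toy data, not real lattice measurements).
rng = np.random.default_rng(0)
x, series = 0.0, []
for _ in range(100_000):
    x = 0.9 * x + rng.normal()
    series.append(x)
series = np.array(series)

# The error estimate grows with block size until the blocks are independent.
for b in (1, 10, 100):
    print(b, blocked_error(series, b))
```

For strongly autocorrelated data the unbinned (block size 1) error is a severe underestimate; the estimate stabilises once the block length exceeds the autocorrelation time, which is the effect invoked in the discussion of the binning choices above.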
The above comparison shows that MILC and HPQCD have studied similar sources of systematic uncertainty, e.g. by varying parts of the analysis procedure or by changing the functional form of a given fit ansatz. One observation worth mentioning in this context is the way in which the resulting variations of the fit result are treated. MILC tends to include the spread of central values from different ansätze in the systematic errors. HPQCD, on the other hand, determines the final result and its errors from the preferred fit ansatz and then confirms that it agrees within errors with the results from other ansätze, without including the spreads in their error budget. In this way HPQCD lifts the calculation of \( {f_{K^\pm }}/{f_{\pi ^\pm }}\) to a new level of precision. FLAG is looking forward to independent confirmations of the result for \( {f_{K^\pm }}/{f_{\pi ^\pm }}\) at the same level of precision. For now we only provide a range for the \(N_\mathrm{f}=2+1+1\) result that covers both HPQCD 13A and MILC 13A,
Concerning simulations with \(N_\mathrm{f}\!=\!2+1\), MILC 10 and HPQCD/UKQCD 07 are based on staggered fermions, BMW 10 has used improved Wilson fermions and RBC/UKQCD 12's result is based on the domain-wall formulation. For \(N_\mathrm{f}=2\), ETM has simulated twisted-mass fermions. In contrast to MILC 13A, all these latter simulations are for unphysically heavy quark masses (corresponding to smallest pion masses in the range 240–260 MeV for MILC 10, HPQCD/UKQCD 07 and ETM 09, and around 170 MeV for RBC/UKQCD 12), and therefore somewhat more sophisticated extrapolations had to be controlled. Various ansätze for the mass and cutoff dependence, comprising SU(2) and SU(3) chiral perturbation theory or simply polynomials, were used and compared in order to estimate the model dependence.
We now provide the FLAG average for these data. While BMW 10 and RBC/UKQCD 12 are entirely independent computations, subsets of the MILC gauge ensembles used by MILC 10 and HPQCD/UKQCD 07 are the same. MILC 10 is certainly based on a larger and more advanced set of gauge configurations than HPQCD/UKQCD 07, which allows for a more reliable estimate of systematic effects. In this situation we consider only their statistical, but not their systematic, uncertainties to be correlated. For \(N_\mathrm{f}=2\) the FLAG average is just the result by ETM 09, illustrated by the vertical grey band in the right-hand panel of Fig. 4. For the purpose of this plot only, the isospin correction has been removed along the lines laid out earlier. For the average indicated in the case of \(N_\mathrm{f}=2+1\) we take the original data of BMW 10, HPQCD/UKQCD 07 and RBC/UKQCD 12 and use the MILC 10 result as computed above. The resulting fit is of good quality, with \(f_K/f_\pi =1.194(4)\) and \(\chi ^2/\hbox {dof}=0.4\). The systematic errors of the individual data sets (MILC 10, BMW 10, HPQCD/UKQCD 07 and RBC/UKQCD 12) are larger than this, and following again the prescription of Sect. 2.3 we replace the error by the smallest of these, leading to \(f_K / f_\pi = 1.194(5)\) for \(N_\mathrm{f}=2+1\).
Before determining the average for \(f_{K^\pm }/f_{\pi ^\pm }\) to be used in applications to Standard Model phenomenology, we apply the isospin correction individually to all those results which have been published in the isospin-symmetric limit, i.e. BMW 10, HPQCD/UKQCD 07 and RBC/UKQCD 12. To this end we invert Eq. (36) and use
The results are:
Collaboration      \(f_K/f_\pi \)   \(\delta _\mathrm{SU(2)}\)   \(f_{K^\pm }/f_{\pi ^\pm }\)
HPQCD/UKQCD 07     1.189(2)(7)      \(-\)0.0040(7)               1.187(2)(2)(7)
BMW 10             1.192(7)(6)      \(-\)0.0041(7)               1.190(7)(2)(6)
RBC/UKQCD 12       1.199(12)(14)    \(-\)0.0043(9)               1.196(12)(2)(14)
As before, in the last column the first error is statistical, the second is due to the isospin correction and the third is systematic. Using these results we obtain
for QCD with broken isospin.
It is instructive to convert the above results for \(f_+(0)\) and \( {f_{K^\pm }}/{f_{\pi ^\pm }}\) into a corresponding range for the CKM matrix elements \(V_{ud}\) and \(V_{us}\), using the relations (32). Consider first the results for \(N_\mathrm{f}=2+1\). The range for \(f_+(0)\) in (38) is mapped into the interval \(V_{us}=0.2239(7)\), depicted as a horizontal green band in Fig. 5, while the one for \( {f_{K^\pm }}/{f_{\pi ^\pm }}\) in (41) is converted into \(V_{us}/V_{ud}= 0.2314(11)\), shown as a tilted green band. The smaller green ellipse is the intersection of these two bands.
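A minimal sketch of this conversion, assuming the experimental combinations \(|V_{us}|f_+(0)=0.2163(5)\) and \(|V_{us}/V_{ud}|\,f_{K^\pm}/f_{\pi^\pm}=0.2758(5)\) of Eq. (32) and the \(N_\mathrm{f}=2+1\) lattice inputs quoted in the text (errors combined in naive quadrature, ignoring correlations):

```python
import math

# Experimental combinations from K_l3 and K_l2 decays (assumed to be the
# values of Eq. (32); errors propagated in naive quadrature).
Vus_f       = (0.2163, 0.0005)   # |V_us| f_+(0)
Vusud_fkfpi = (0.2758, 0.0005)   # |V_us/V_ud| f_{K+-}/f_{pi+-}

# Lattice inputs for N_f = 2+1 (assumed central values from the text).
f_plus = (0.9661, 0.0032)        # f_+(0)
fk_fpi = (1.192, 0.005)          # f_{K+-}/f_{pi+-}

def ratio(a, b):
    """Divide two (value, error) pairs, combining relative errors in quadrature."""
    r = a[0] / b[0]
    dr = r * math.hypot(a[1] / a[0], b[1] / b[0])
    return r, dr

Vus = ratio(Vus_f, f_plus)                 # horizontal band in Fig. 5
Vus_over_Vud = ratio(Vusud_fkfpi, fk_fpi)  # tilted band in Fig. 5
print(Vus, Vus_over_Vud)                   # central values ~0.2239 and ~0.2314
```

The central values reproduce the two bands quoted above; the quoted uncertainties additionally account for correlations that this naive quadrature ignores.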
More precisely, it represents the 68 % likelihood contour (note also that the ellipses shown in Fig. 5 of Ref. [1] have to be interpreted as 39 % likelihood contours), obtained by treating the above two results as independent measurements. Values of \(V_{us}\), \(V_{ud}\) in the region enclosed by this contour are consistent with the lattice data for \(N_\mathrm{f}=2+1\), within one standard deviation. In particular, the plot shows that the nuclear \(\beta \) decay result for \(V_{ud}\) is in good agreement with these data. We note that with respect to the previous edition of the FLAG review the reanalysis including new results has moved the ellipse representing QCD with \(N_\mathrm{f}=2+1\) slightly down and to the left.
Repeating the exercise for \(N_\mathrm{f}=2\) leads to the larger blue ellipse. The figure indicates a slight tension between the \(N_\mathrm{f}=2\) and \(N_\mathrm{f}=2+1\) results, which, at the current level of precision, is not visible if one considers the \(N_\mathrm{f}=2\) and \(N_\mathrm{f}=2+1\) results for \(f_+(0)\) and \( {f_{K^\pm }}/{f_{\pi ^\pm }}\) in Fig. 4 on their own. It remains to be seen whether this is a first indication of the effect of quenching the strange quark.
In the case of \(N_\mathrm{f}=2+1+1\) only the results for \( {f_{K^\pm }}/{f_{\pi ^\pm }}\) are without red tags. We have therefore only plotted the corresponding band for \(V_{us}\) from \(f_{K^\pm }/f_{\pi ^\pm }\), corresponding to \(V_{us}/V_{ud}=0.2310(11)\).
4.4 Testing the Standard Model
In the Standard Model, the CKM matrix is unitary. In particular, the elements of the first row obey
The tiny contribution from \(V_{ub}\) is known much better than needed in the present context: \(V_{ub}= 4.15 (49) \cdot 10^{-3}\) [74]. In the following, we first discuss the evidence for the validity of the relation (42) and only then use it to analyse the lattice data within the Standard Model.
In Fig. 5, the correlation between \(V_{ud}\) and \(V_{us}\) imposed by the unitarity of the CKM matrix is indicated by a dotted arc (more precisely, in view of the uncertainty in \(V_{ub}\), the correlation corresponds to a band of finite width, but the effect is too small to be seen here).
The plot shows that there is a slight tension with unitarity in the data for \(N_\mathrm{f} = 2 + 1\): numerically, the outcome for the sum of the squares of the first row of the CKM matrix reads \(V_{u}^2 = 0.987(10)\). Still, it is fair to say that at this level the Standard Model passes a non-trivial test that exclusively involves lattice data and well-established kaon decay branching ratios. Combining the lattice results for \(f_+(0)\) and \( {f_{K^\pm }}/{f_{\pi ^\pm }}\) in (38) and (41) with the \(\beta \) decay value of \(V_{ud}\) quoted in (33), the test sharpens considerably: the lattice result for \(f_+(0)\) leads to \(V_{u}^2 = 0.9993(5)\), while the one for \( {f_{K^\pm }}/{f_{\pi ^\pm }}\) implies \(V_{u}^2 = 1.0000(6)\), thus confirming CKM unitarity at the per-mille level.
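The arithmetic of this test is elementary. The following sketch assumes the \(\beta \)-decay value \(V_{ud}=0.97425\) of Eq. (33) together with the lattice-based \(V_{us}\) central values used above:

```python
# Check of first-row CKM unitarity, |V_ud|^2 + |V_us|^2 + |V_ub|^2 = 1,
# using the beta-decay value of V_ud (assumed 0.97425) together with the
# lattice-based V_us determinations for N_f = 2+1 quoted in the text.
V_ud = 0.97425
V_ub = 4.15e-3

V_us_from_fplus = 0.2163 / 0.9661            # K_l3 route: |V_us| f_+(0)
V_us_from_ratio = (0.2758 / 1.192) * V_ud    # K_l2 route: |V_us/V_ud| f_K/f_pi

for V_us in (V_us_from_fplus, V_us_from_ratio):
    V_u_sq = V_ud**2 + V_us**2 + V_ub**2
    print(round(V_u_sq, 4))   # ~0.9993 (K_l3) and ~1.0000 (K_l2)
```

Both routes reproduce the central values quoted in the text; the quoted uncertainties require the full error propagation.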
Repeating the analysis for \(N_\mathrm{f} = 2\), we find \(V_{u}^2 = 1.029(35)\) with the lattice data alone. This number is fully compatible with 1, in accordance with the fact that the dotted curve penetrates the blue contour. Taken by themselves, these results are perfectly consistent with the value of \(V_{ud}\) found in nuclear \(\beta \) decay: combining this value with the data on \(f_+(0)\) yields \(V_{u}^2=1.0004(10)\), while combining it with the data on \( {f_{K^\pm }}/{f_{\pi ^\pm }}\) gives \(V_{u}^2= 0.9989(16)\). With respect to the first edition of the FLAG report, the ellipse for \(N_\mathrm{f}=2\) has moved slightly to the left because we have now taken isospin-breaking effects into account.
For \(N_\mathrm{f}=2+1+1\) we can carry out the test of unitarity only with input from \( {f_{K^\pm }}/{f_{\pi ^\pm }}\) which leads to \(V_{u}^2=0.9998(7)\).
Note that the above tests also offer a check of the basic hypothesis that underlies our analysis: we assume that the weak interaction between quarks and leptons is governed by the same Fermi constant as the one that determines the strength of the weak interaction among leptons and fixes the lifetime of the muon. In certain modifications of the Standard Model this is not the case, and it need then not be true that the rates of the decays \(\pi \rightarrow \ell \nu \), \(K\rightarrow \ell \nu \) and \(K\rightarrow \pi \ell \nu \) can be used to determine the combinations \(V_{ud}f_\pi \), \(V_{us}f_K\) and \(V_{us}f_+(0)\), respectively, nor that \(V_{ud}\) can be measured in nuclear \(\beta \) decay. The fact that the lattice data are consistent with unitarity and with the value of \(V_{ud}\) found in nuclear \(\beta \) decay thus also indirectly checks the equality of the Fermi constants.
4.5 Analysis within the Standard Model
The Standard Model implies that the CKM matrix is unitary. The precise experimental constraints quoted in (32) and the unitarity condition (42) then reduce the four quantities \(V_{ud},V_{us},f_+(0), {f_{K^\pm }}/{f_{\pi ^\pm }}\) to a single unknown: any one of these determines the other three within narrow uncertainties.
Figure 6 shows that the results obtained for \(V_{us}\) and \(V_{ud}\) from the data on \( {f_{K^\pm }}/{f_{\pi ^\pm }}\) (squares) are quite consistent with the determinations via \(f_+(0)\) (triangles). In order to calculate the corresponding average values, we restrict ourselves to those determinations that we have considered best in Sect. 4.3. The corresponding results for \(V_{us}\) are listed in Table 10 (the error in the experimental numbers used to convert the values of \(f_+(0)\) and \( {f_{K^\pm }}/{f_{\pi ^\pm }}\) into values for \(V_{us}\) is included in the statistical error).
We consider the fact that the results from the six \(N_\mathrm{f}=2+1\) data sets FNAL/MILC 12 [140], RBC/UKQCD 13 [139], RBC/UKQCD 12 [25], BMW 10 [161], MILC 10 [159] and HPQCD/UKQCD 07 [165] are consistent with each other to be an important reliability test of the lattice work. Applying the prescription of Sect. 2.3, where we consider MILC 10, FNAL/MILC 12 and HPQCD/UKQCD 07 on the one hand, and RBC/UKQCD 12 and RBC/UKQCD 13 on the other, as mutually statistically correlated, since the analyses in the two cases start from partly the same sets of gauge ensembles, we arrive at \(V_{us} = 0.2247(7)\) with \(\chi ^2/\hbox {dof}=0.8\). This result is indicated on the left-hand side of Fig. 6 by the narrow vertical band. The value for \(N_\mathrm{f}=2\), \(V_{us}= 0.2253(21)\) with \(\chi ^2/\hbox {dof}=0.9\), where we have considered ETM 09 and ETM 09A as statistically correlated, is also indicated by a band. For \(N_\mathrm{f}=2+1+1\) we only consider the data for \( {f_{K^\pm }}/{f_{\pi ^\pm }}\), yielding \(V_{us}=0.2251(10)\). The figure shows that the results obtained for \(N_\mathrm{f}=2\), \(N_\mathrm{f}=2+1\) and \(N_\mathrm{f}=2+1+1\) are perfectly consistent.
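A generic sketch of an inverse-variance weighted average with correlations, in the spirit of (but not identical to) the Sect. 2.3 prescription; the numerical inputs below are purely illustrative:

```python
import numpy as np

def correlated_average(values, errors, corr):
    """Weighted average of results with a given correlation matrix.

    A generic sketch of the kind of averaging described in Sect. 2.3, not
    FLAG's exact prescription: correlations between data sets are modelled
    through the assumed correlation matrix `corr`.
    """
    values = np.asarray(values, dtype=float)
    cov = np.outer(errors, errors) * np.asarray(corr)
    w = np.linalg.solve(cov, np.ones_like(values))
    w /= w.sum()                              # normalised weights
    mean = w @ values
    err = np.sqrt(w @ cov @ w)
    resid = values - mean
    chi2 = resid @ np.linalg.solve(cov, resid)
    return mean, err, chi2 / (len(values) - 1)

# Hypothetical illustration with three results, the first two partially
# correlated in their (dominant) statistical errors.
corr = [[1.0, 0.5, 0.0],
        [0.5, 1.0, 0.0],
        [0.0, 0.0, 1.0]]
print(correlated_average([0.2247, 0.2245, 0.2252], [0.0009, 0.0010, 0.0015], corr))
```

With positive correlations the combined error shrinks less than naive \(1/\sqrt{N}\) counting would suggest, which is why the treatment of shared gauge ensembles matters for the quoted averages.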
Alternatively, we can solve the relations for \(V_{ud}\) instead of \(V_{us}\). Again, the result \(V_{ud}=0.97434(22)\), which follows from the lattice data with \(N_\mathrm{f}=2+1+1\), is perfectly consistent with the values \(V_{ud}=0.97447(18)\) and \(V_{ud}=0.97427(49)\) obtained from those with \(N_\mathrm{f}=2+1\) and \(N_\mathrm{f}=2\), respectively. The reduction of the uncertainties in the result for \(V_{ud}\) due to CKM unitarity is to be expected from Fig. 5: the unitarity condition reduces the region allowed by the lattice results to a nearly vertical interval.
Next, we determine the value of \(f_+(0)\) that follows from the lattice data within the Standard Model. Using CKM unitarity to convert the lattice determinations of \( {f_{K^\pm }}/{f_{\pi ^\pm }}\) into corresponding values for \(f_+(0)\) and then combining these with the direct determinations of \(f_+(0)\), we find \(f_+(0)= 0.9634(32)\) from the data with \(N_\mathrm{f}=2+1\) and \(f_+(0)= 0.9595(90)\) for \(N_\mathrm{f}=2\). In the case \(N_\mathrm{f}=2+1+1\) we obtain \(f_+(0)=0.9611(47)\).
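The chain of steps can be checked numerically. The sketch below assumes \(|V_{us}/V_{ud}|=0.2310\) for \(N_\mathrm{f}=2+1+1\) and the experimental combination \(|V_{us}|f_+(0)=0.2163\), and reproduces the quoted central values up to rounding of the inputs:

```python
import math

# Within the Standard Model the K_l2 ratio alone fixes the whole first row:
# from r = |V_us/V_ud| and |V_ud|^2 + |V_us|^2 = 1 - |V_ub|^2 one obtains
# V_ud and V_us, and then f_+(0) via |V_us| f_+(0) = 0.2163 (assumed input).
r = 0.2310                  # |V_us/V_ud| for N_f = 2+1+1, from the text
V_ub = 4.15e-3

V_ud = math.sqrt((1 - V_ub**2) / (1 + r**2))
V_us = r * V_ud
f_plus = 0.2163 / V_us

# Central values ~0.97434, ~0.2251 and ~0.9611, as quoted in the text.
print(round(V_ud, 4), round(V_us, 4), round(f_plus, 4))
```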
Finally, we work out the analogous Standard Model fits for \( {f_{K^\pm }}/{f_{\pi ^\pm }}\), converting the direct determinations of \(f_+(0)\) into corresponding values for \( {f_{K^\pm }}/{f_{\pi ^\pm }}\) and combining the outcome with the direct determinations of that quantity. The results read \( {f_{K^\pm }}/{f_{\pi ^\pm }}=1.197(4)\) for \(N_\mathrm{f}=2+1\) and \( {f_{K^\pm }}/{f_{\pi ^\pm }}= 1.192(12) \) for \(N_\mathrm{f}=2\), respectively.
The results obtained by analysing the lattice data in the framework of the Standard Model are collected in the upper half of Table 11. In the lower half of this table, we list the analogous results, found by working out the consequences of CKM unitarity for the experimental values of \(V_{ud}\) and \(V_{us}\) obtained from nuclear \(\beta \) decay and \(\tau \) decay, respectively. The comparison shows that the lattice result for \(V_{ud}\) not only agrees very well with the totally independent determination based on nuclear \(\beta \) transitions, but it is also remarkably precise. On the other hand, the values of \(V_{ud}\), \(f_+(0)\) and \( {f_{K^\pm }}/{f_{\pi ^\pm }}\) which follow from the \(\tau \) decay data if the Standard Model is assumed to be valid, are not in good agreement with the lattice results for these quantities. The disagreement is reduced considerably if the analysis of the \(\tau \) data is supplemented with experimental results on electroproduction [128]: the discrepancy then amounts to little more than one standard deviation.
4.6 Direct determination of \(f_K\) and \(f_\pi \)
It is useful for flavour physics to provide not only the lattice average of \(f_K / f_\pi \), but also the average of the decay constant \(f_K\). Indeed, the \(\Delta S = 2\) hadronic matrix element for neutral kaon mixing is generally parameterised by \(M_K\), \(f_K\) and the kaon bag parameter \(B_K\). The knowledge of both \(f_K\) and \(B_K\) is therefore crucial for a precise theoretical determination of the CP-violation parameter \(\epsilon _K\) and for the constraint on the apex of the CKM unitarity triangle.
The case of the decay constant \(f_\pi \) is somewhat different, since the experimental value of this quantity is often used for setting the scale in lattice QCD (see Appendix A.2). However, the physical scale can be set in different ways, namely by using as input the mass of the \(\Omega \)-baryon (\(m_\Omega \)) or the \(\Upsilon \)-meson spectrum (\(\Delta M_\Upsilon \)), which are less sensitive than \(f_\pi \) to the uncertainties of the chiral extrapolation in the light-quark mass. In such cases the value of the decay constant \(f_\pi \) becomes a direct prediction of the lattice QCD simulations. It is therefore interesting to provide also the average of the decay constant \(f_\pi \), obtained when the physical scale is set through another hadronic observable, in order to check the consistency of different scale-setting procedures.
Our compilation of the values of \(f_\pi \) and \(f_K\) with the corresponding colour code is presented in Table 12. With respect to the case of \(f_K / f_\pi \) we have added two columns, indicating which quantity is used to set the physical scale and the possible use of a renormalisation constant for the axial current. Indeed, for several lattice formulations the use of the non-singlet axial-vector Ward identity allows one to avoid any renormalisation constant.
One can see that the determinations of \(f_\pi \) and \(f_K\) suffer from larger uncertainties than those of the ratio \(f_K / f_\pi \), which is less sensitive to various systematic effects (including the uncertainty of a possible renormalisation constant) and, moreover, is less exposed to the uncertainties of the procedure used to set the physical scale.
According to the FLAG rules, three data sets can enter the average of \(f_\pi \) and \(f_K\) for \(N_\mathrm{f} = 2 + 1\): RBC/UKQCD 12 [25] (update of RBC/UKQCD 10A), HPQCD/UKQCD 07 [165] and MILC 10 [159], which is the latest update of the MILC program.^{Footnote 13} We consider HPQCD/UKQCD 07 and MILC 10 as statistically correlated and use the prescription of Sect. 2.3 to form an average. For \(N_\mathrm{f} = 2\) the average cannot be formed for \(f_\pi \), and only one data set (ETM 09) satisfies the FLAG rules in the case of \(f_K\). Following the discussion around the \(N_\mathrm{f}=2+1+1\) result for \(f_{K^\pm }/f_{\pi ^\pm }\), we refrain from providing a FLAG average of \(f_K\) for this case.
Thus, our estimates (in the isospin-symmetric limit of QCD) read
The lattice results of Table 12 and our estimates (43)–(44) are reported in Fig. 7. The latter agree within errors with the latest experimental determinations of \(f_\pi \) and \(f_K\) from the PDG:
which, we recall, do not correspond to pure QCD results in the isospin-symmetric limit. Moreover, the values of \(f_\pi \) and \(f_K\) quoted by the PDG are obtained assuming Eq. (32) for the value of \(V_{ud}\) and adopting the RBC/UKQCD 07 result for \(f_+(0)\).
5 Low-energy constants
In the study of the quark-mass dependence of QCD observables calculated on the lattice, it is common practice to invoke Chiral Perturbation Theory (\(\chi \)PT). For a given quantity this framework predicts the non-analytic quark-mass dependence and provides symmetry relations among different observables. These relations are best expressed with the help of a set of linearly independent and universal (i.e. process-independent) low-energy constants (LECs), which appear as coefficients of the polynomial terms (in \(m_{q}\) or \(M_\pi ^2\)) in different observables. If one expands around the SU(2) chiral limit, there appear in the Chiral Effective Lagrangian two LECs at order \(p^2\), namely the decay constant \(F\) and the condensate parameter \(B\),
and seven at order \(p^4\), indicated by \(\bar{\ell }_i\) with \(i=1,\ldots ,7\). In the analysis of the SU(3) chiral limit there are also just two LECs at order \(p^2\), namely \(F_0\) and \(B_0\),
but ten at order \(p^4\), indicated by the capital letter \(L_i(\mu )\) with \(i=1,\ldots ,10\). These constants are independent of the quark masses^{Footnote 14}, but they become scale dependent after renormalisation (sometimes a superscript \(r\) is added). The SU(2) constants \(\bar{\ell }_i\) are scale independent, since they are defined at \(\mu =M_\pi \) (as indicated by the bar). For the precise definition of these constants and their scale dependence we refer the reader to [56, 58].
First of all, lattice calculations can be used to test if chiral symmetry is indeed broken as SU\((N_\mathrm{f})_L \times \)SU\((N_\mathrm{f})_R \rightarrow \)SU\((N_\mathrm{f})_{L+R}\) by measuring nonzero chiral condensates and by verifying the validity of the GMOR relation \(M_\pi ^2\propto m\) close to the chiral limit. If the chiral extrapolation of quantities calculated on the lattice is made with the help of \(\chi \)PT, apart from determining the observable at the physical value of the quark masses one also obtains the relevant LECs. This is a very important byproduct for two reasons:

1.
All LECs up to order \(p^4\) (with the exception of \(B\) and \(B_0\), since only their products with the quark masses can be estimated from phenomenology) have either been determined by comparison to experiment or estimated theoretically. A lattice determination of the better-known ones thus provides a test of the \(\chi \)PT approach.

2.
The less well-known LECs are those which describe the quark-mass dependence of observables: these cannot be determined from experiment, and therefore the lattice provides unique quantitative information. This information is essential for improving phenomenological \(\chi \)PT predictions in which these LECs play a role.
We stress that this program is based on the nonobvious assumption that \(\chi \)PT is valid in the region of masses used in the lattice simulations under consideration.
The fact that, at large volume, the finite-size effects which occur when a system undergoes spontaneous symmetry breakdown are controlled by the Nambu–Goldstone modes was first noted in solid-state physics, in connection with magnetic systems [187, 188]. As pointed out in [189] in the context of QCD, the thermal properties of such systems can be studied in a systematic and model-independent manner by means of the corresponding effective field theory, provided the temperature is low enough. While finite volumes are not of physical interest in particle physics, lattice simulations are necessarily carried out in a finite box. As shown in [190–192], the ensuing finite-size effects can also be studied on the basis of the effective theory (\(\chi \)PT in the case of QCD), provided the simulation is close enough to the continuum limit, the volume is sufficiently large and the explicit breaking of chiral symmetry generated by the quark masses is sufficiently small. Indeed, \(\chi \)PT also represents a useful tool for the analysis of finite-size effects in lattice simulations.
In the following two subsections we summarise the lattice results for the SU(2) and SU(3) LECs, respectively. In either case we first discuss the \(O(p^2)\) constants and then proceed to their \(O(p^4)\) counterparts. The \(O(p^2)\) LECs are determined from the chiral extrapolation of masses and decay constants or, alternatively, from a finite-size study of correlators in the \(\epsilon \)-regime. At order \(p^4\) some LECs affect two-point functions while others appear only in three- or four-point functions; the latter need to be determined from form factors or scattering amplitudes. The \(\chi \)PT analysis of the (non-lattice) phenomenological quantities is nowadays^{Footnote 15} based on \(O(p^6)\) formulae. At this level the number of LECs explodes and we will not discuss any of these. We will, however, discuss how comparing different orders and different expansions (in particular the \(x\) versus the \(\xi \)-expansion; see below) can help to assess the theoretical uncertainties of the LECs determined on the lattice.
5.1 SU(2) low-energy constants
5.1.1 Quark-mass dependence of pseudoscalar masses and decay constants
The expansions^{Footnote 16} of \(M_\pi ^2\) and \(F_\pi \) in powers of the quark mass are known to next-to-next-to-leading order in the SU(2) chiral effective theory. In the isospin limit, \(m_{u}=m_{d}=m\), the explicit expressions may be written in the form [193]
Here the expansion parameter is given by
but there is another option as discussed below. The scales \(\Lambda _3,\Lambda _4\) are related to the effective coupling constants \(\bar{\ell }_3,\bar{\ell }_4\) of the chiral Lagrangian at running scale \(M_\pi \equiv M_\pi ^\mathrm{phys}\) by
Note that in Eq. (48) the logarithms are evaluated at \(M^2\), not at \(M_\pi ^2\). The coupling constants \(k_M,k_F\) in Eq. (48) are mass-independent. The scales of the squared logarithms can be expressed in terms of the \(O(p^4)\) coupling constants as
Hence, by analysing the quark-mass dependence of \(M_\pi ^2\) and \(F_\pi \) with Eq. (48), possibly truncated at NLO, one can determine^{Footnote 17} the \(O(p^2)\) LECs \(B\) and \(F\), as well as the \(O(p^4)\) LECs \(\bar{\ell }_3\) and \(\bar{\ell }_4\). The quark condensate in the chiral limit is given by \(\Sigma =F^2B\). With sufficiently precise data at several small enough pion masses, one could in principle also determine \(\Lambda _M\), \(\Lambda _F\) and \(k_M\), \(k_F\). To date this is not yet possible. The results for the LO and NLO constants will be presented in Sect. 5.1.6.
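For the reader's convenience we reproduce the expansions referred to as Eq. (48), in our transcription of the standard form of Ref. [193] (conventions should be checked against the original):

```latex
\begin{aligned}
M_\pi^2 &= M^2\left\{1-\frac{x}{2}\,\ln\frac{\Lambda_3^2}{M^2}
  +\frac{17}{8}\,x^2\left[\ln\frac{\Lambda_M^2}{M^2}\right]^2
  +x^2 k_M+O(x^3)\right\},\\[2pt]
F_\pi &= F\left\{1+x\,\ln\frac{\Lambda_4^2}{M^2}
  -\frac{5}{4}\,x^2\left[\ln\frac{\Lambda_F^2}{M^2}\right]^2
  +x^2 k_F+O(x^3)\right\},
\end{aligned}
\qquad
x=\frac{M^2}{(4\pi F)^2},\quad M^2=2Bm,\quad
\bar{\ell}_i=\ln\frac{\Lambda_i^2}{M_\pi^2}.
```

Truncating at NLO (dropping the \(x^2\) terms) gives the four-parameter fit ansatz in \(B\), \(F\), \(\Lambda_3\) and \(\Lambda_4\) described above.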
Alternatively, one can invert Eq. (48) and express \(M^2\) and \(F\) as an expansion in
and the corresponding expressions then take the form
The scales of the quadratic logarithms are determined by \(\Lambda _1,\ldots ,\Lambda _4\) through
5.1.2 Two-point correlation functions in the epsilon-regime
The finite-size effects encountered in lattice calculations can be used to determine some of the LECs of QCD. In order to illustrate this point, we focus on the two lightest quarks, take the isospin limit \(m_{u}=m_{d}=m\) and consider a box of size \(L_{s}\) in the three space directions and size \(L_{t}\) in the time direction. If \(m\) is sent to zero at fixed box size, chiral symmetry is restored. The behaviour of the various observables in the symmetry-restoration region is controlled by the parameter \(\mu \equiv m\,\Sigma \,V\), where \(V=L_{s}^3L_{t}\) is the four-dimensional volume of the box. Up to a sign and a factor of two, the parameter \(\mu \) represents the minimum of the classical action that belongs to the leading-order effective Lagrangian of QCD.
For \(\mu \gg 1\), the system behaves qualitatively as if the box were infinitely large. In that region the \(p\)-expansion, which counts \(1/L_{s}\), \(1/L_{t}\) and \(M\) as quantities of the same order, is adequate. In view of \(\mu =\frac{1}{2}F^2 M^2V \), this region includes configurations with \(ML\gtrsim \! 1\), where the finite-size effects due to pion loop diagrams are suppressed by the factor \(e^{-ML}\).
If \(\mu \) is comparable to or smaller than 1, however, the chiral perturbation series must be reordered. The \(\epsilon \)-expansion achieves this by counting \(1/L_{s}, 1/L_{t}\) as quantities of \(O(\epsilon )\), while the quark mass \(m\) is booked as a term of \(O(\epsilon ^4)\). This ensures that the symmetry-restoration parameter \(\mu \) represents a term of order \(O(\epsilon ^0)\), so that the manner in which chiral symmetry is restored can be worked out.
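As an illustration of the size of \(\mu \) in practice, the estimate below uses assumed, merely typical values (\(m=3\) MeV, \(\Sigma =(270\,\mathrm{MeV})^3\), \(L_{s}=L_{t}=3\) fm); none of these inputs are taken from a specific simulation:

```python
# Illustrative estimate of the symmetry-restoration parameter mu = m*Sigma*V,
# which separates the p-regime (mu >> 1) from the epsilon-regime (mu <~ 1).
# All input values are assumed, merely typical numbers.
hbar_c = 197.327             # MeV fm (conversion constant)
m      = 3.0                 # light-quark mass in MeV (assumed)
Sigma  = 270.0**3            # chiral condensate in MeV^3 (assumed)
L_s = L_t = 3.0              # box extent in fm (assumed)

V  = (L_s / hbar_c)**3 * (L_t / hbar_c)   # four-volume in MeV^-4
mu = m * Sigma * V
print(round(mu, 1))          # O(1) for this box: between the two regimes
```

A value of order one shows that boxes of a few fm with near-physical quark masses sit between the two regimes, which is why the reordered counting is relevant in practice.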
As an example, we consider the correlator of the axial charge carried by the two lightest quarks, \(q(x)\!=\!\{u(x),d(x)\}\). The axial current and the pseudoscalar density are given by
where \(\tau ^1, \tau ^2,\tau ^3\), are the Pauli matrices in flavour space. In Euclidean space, the correlators of the axial charge and of the space integral over the pseudoscalar density are given by
\(\chi \)PT yields explicit finite-size scaling formulae for these quantities [192, 194, 195]. In the \(\epsilon \)-regime, the expansion starts with
where the coefficients \(a_A\), \(b_A\), \(a_P\), \(b_P\) stand for quantities of \(O(\epsilon ^0)\). They can be expressed in terms of the variables \(L_{s}\), \(L_{t}\) and \(m\) and involve only the two leading low-energy constants \(F\) and \(\Sigma \). In fact, at leading order only the combination \(\mu =m\,\Sigma \,L_{s}^3 L_{t}\) matters, the correlators are \(t\)-independent and the dependence on \(\mu \) is fully determined by the structure of the groups involved in the SSB pattern. In the case of SU(2) \(\times \) SU(2) \(\rightarrow \) SU(2), relevant for QCD in the symmetry-restoration region with two light quarks, the coefficients can be expressed in terms of Bessel functions. The \(t\)-dependence of the correlators starts showing up at \(O(\epsilon ^2)\), in the form of a parabola, viz. \(h_1(\tau )=\frac{1}{2}\left[\left(\tau -\frac{1}{2}\right)^2-\frac{1}{12}\right]\). Explicit expressions for \(a_A\), \(b_A\), \(a_P\), \(b_P\) can be found in [192, 194, 195], where some of the correlation functions are worked out to NNLO. By matching the finite-size scaling of correlators computed on the lattice with these predictions one can extract \(F\) and \(\Sigma \). A way to deal with the numerical challenges genuine to the \(\epsilon \)-regime has been described in [196].
The fact that the representation of the correlators to NLO is not “contaminated” by higher-order unknown LECs makes the \(\epsilon \)-regime potentially convenient for a clean extraction of the LO couplings. The determination of these LECs is then affected by systematic uncertainties different from those of the standard case; simulations in this regime yield complementary information which can serve as a valuable cross-check to obtain a comprehensive picture of the low-energy properties of QCD.
The effective theory can also be used to study the distribution of the topological charge in QCD [197], and the various quantities of interest may be defined for a fixed value of this charge. The expectation values and correlation functions then depend not only on the symmetry-restoration parameter \(\mu \), but also on the topological charge \(\nu \). The dependence on these two variables can be calculated explicitly. It turns out that the two-point correlation functions considered above retain the form (57), but the coefficients \(a_A\), \(b_A\), \(a_P\), \(b_P\) now depend on the topological charge as well as on the symmetry-restoration parameter (see [198–200] for explicit expressions).
A specific issue with \(\epsilon \)-regime calculations is the scale setting. Ideally one would perform a \(p\)-regime study with the same bare parameters to measure a hadronic scale (e.g. the proton mass). In the literature, a gluonic scale (e.g. \(r_0\)) is sometimes used instead, to avoid this expense. Obviously, the issues inherent in scale setting are aggravated if the \(\epsilon \)-regime simulation is restricted to a fixed sector of topological charge.
It is important to stress that in the \(\epsilon \)-expansion higher-order finite-volume corrections might be significant, and the physical box size (in fm) should still be large in order to keep these contributions under control. The criteria for the chiral extrapolation and finite-volume effects are obviously different from those in the \(p\)-regime. For these reasons we have to adjust the colour coding defined in Sect. 2.1 (see Sect. 5.1.6 for more details).
Recently, the effective theory has been extended to the “mixed regime” where some quarks are in the \(p\)-regime and some in the \(\epsilon \)-regime [201, 202]. In [203] a technique is proposed to smoothly connect the \(p\)- and \(\epsilon \)-regimes. In [204] the issue is reconsidered with a counting rule which is essentially the same as in the \(p\)-regime. In this new scheme, the theory remains IR finite even in the chiral limit, while the chiral-logarithmic effects are kept present.
5.1.3 Energy levels of the QCD Hamiltonian in a box and \(\delta \)-regime
At low temperature, the properties of the partition function are governed by the lowest eigenvalues of the Hamiltonian. In the case of QCD, the lowest levels are due to the Nambu–Goldstone bosons and can be worked out with \(\chi \)PT [205]. In the chiral limit the level pattern follows that of a quantum-mechanical rotator, i.e. \(E_\ell =\ell (\ell +1)/(2\,\Theta )\) with \(\ell = 0, 1,2,\ldots \). For a cubic spatial box and to leading order in the expansion in inverse powers of the box size \(L_{s}\), the moment of inertia is fixed by the value of the pion decay constant in the chiral limit, i.e. \(\Theta =F^2L_{s}^3\).
In order to analyse the dependence of the levels on the quark masses and on the parameters that specify the size of the box, a reordering of the chiral series is required, the so-called \(\delta \)-expansion; the region where the properties of the system are controlled by this expansion is referred to as the \(\delta \)-regime. Evaluating the chiral perturbation series in this regime, one finds that the expansion of the partition function goes in even inverse powers of \(FL_{s}\), that the rotator formula for the energy levels holds up to NNLO and that the expression for the moment of inertia is now also known up to and including terms of order \((FL_{s})^{-4}\) [206–208]. Since the level spectrum is governed by the value of the pion decay constant in the chiral limit, an evaluation of this spectrum on the lattice can be used to measure \(F\). More generally, the evaluation of various observables in the \(\delta \)-regime offers an alternative method for the determination of some of the low-energy constants occurring in the effective Lagrangian. At present, however, the numerical results obtained in this way [209, 210] are not yet competitive with those found in the \(p\)- or \(\epsilon \)-regimes.
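To get a feeling for the scales involved, the sketch below evaluates the leading-order rotator gap for assumed, merely illustrative inputs (\(F=86\) MeV, \(L_{s}=2\) fm):

```python
# Rotator spectrum in the delta-regime: E_l = l(l+1)/(2*Theta) with
# Theta = F^2 * L_s^3 at leading order. Input values are assumed.
hbar_c = 197.327       # MeV fm (conversion constant)
F      = 86.0          # pion decay constant in the chiral limit, MeV (assumed)
L_s    = 2.0           # spatial box size in fm (assumed)

Theta = F**2 * (L_s / hbar_c)**3      # moment of inertia in MeV^-1
gap   = 1 * (1 + 1) / (2 * Theta)     # l = 0 -> 1 excitation energy in MeV
print(round(gap, 1))                  # ~130 MeV for these inputs
```

The resulting gap of order 100 MeV illustrates why the lowest rotator levels, and hence \(F\), are in principle accessible in boxes of a few fm.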
5.1.4 Other methods for the extraction of the low-energy constants
An observable that can be used to extract the LECs is the topological susceptibility
where \(\omega (x)\) is the topological charge density,
At infinite volume, the expansion of \(\chi _{t}\) in powers of the quark masses starts with [211]
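For the reader's convenience we recall the standard expressions (in common conventions; normalisations may differ slightly between references):

```latex
\chi_t \;=\; \int \mathrm{d}^4x\,\langle\,\omega(x)\,\omega(0)\,\rangle\,,
\qquad
\omega(x) \;=\; \frac{1}{32\pi^2}\,\epsilon^{\mu\nu\rho\sigma}\,
  \mathrm{tr}\bigl[F_{\mu\nu}(x)\,F_{\rho\sigma}(x)\bigr]\,,
```
and, at infinite volume,
```latex
\chi_t \;=\; \overline{m}\,\Sigma\,\bigl\{1+O(m)\bigr\}\,,
\qquad
\frac{1}{\overline{m}} \;=\; \sum_{i=1}^{N_\mathrm{f}}\frac{1}{m_i}\,,
```
so that for \(N_\mathrm{f}\) degenerate flavours of mass \(m\) the leading term reduces to \(\chi_t = m\,\Sigma/N_\mathrm{f}\).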
The condensate \(\Sigma \) can thus be extracted from the properties of the topological susceptibility close to the chiral limit. The behaviour at finite volume, in particular in the region where the symmetry is restored, is discussed in [195]. The dependence on the vacuum angle \(\theta \) and the projection on sectors of fixed \(\nu \) have been studied in [197]. For a discussion of the finite-size effects at NLO, including the dependence on \(\theta \), we refer to [200, 212].
The role that the topological susceptibility plays in attempts to determine whether there is a large paramagnetic suppression when going from the \(N_\mathrm{f}=2\) to the \(N_\mathrm{f}=2+1\) theory has been highlighted in Ref. [213]. The potential usefulness of higher moments of the topological charge distribution to determine LECs has been investigated in [214].
Another method for computing the quark condensate has been proposed in [215], where it is shown that starting from the Banks–Casher relation [216] one may extract the condensate from suitable (renormalisable) spectral observables, for instance the number of Dirac operator modes in a given interval. For those spectral observables higher-order corrections can be systematically computed in terms of the chiral effective theory. A recent paper based on this strategy is ETM 13 [217]. As an aside let us remark that corrections to the Banks–Casher relation that come from a finite quark mass, a finite four-dimensional volume and (with Wilson-type fermions) a finite lattice spacing can be parameterised in a properly extended version of the chiral framework [218].
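The starting point of this method, the Banks–Casher relation [216], links \(\Sigma \) to the spectral density \(\rho(\lambda,m)\) of the Dirac operator (per unit four-volume) near the origin:

```latex
\lim_{\lambda\to 0}\;\lim_{m\to 0}\;\lim_{V\to\infty}\;
\rho(\lambda,m) \;=\; \frac{\Sigma}{\pi}\,.
```

As a consequence, the number of modes below a given threshold grows linearly with the threshold, with a slope proportional to \(\Sigma V\), which is what makes the mode number a convenient (and renormalisable) probe of the condensate.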
An alternative strategy is based on the fact that at LO in the \(\epsilon \)-expansion the partition function in a given topological sector \(\nu \) is equivalent to the one of a chiral Random Matrix Theory (RMT) [219–222]. In RMT it is possible to extract the probability distributions of individual eigenvalues [223–225] in terms of two dimensionless variables \(\zeta =\lambda \Sigma V\) and \(\mu =m\Sigma V\), where \(\lambda \) represents the eigenvalue of the massless Dirac operator and \(m\) is the sea quark mass. More recently this approach has been extended to the Hermitian (Wilson) Dirac operator [226], which is easier to study in numerical simulations. Hence, if it is possible to match the QCD low-lying spectrum of the Dirac operator to the RMT predictions, then one may extract^{Footnote 18} the chiral condensate \(\Sigma \). One issue with this method is that for the distributions of individual eigenvalues higher-order corrections are still not known in the effective theory, and this may introduce systematic effects which are hard^{Footnote 19} to control. Another open question is that, while it is clear how the spectral density is renormalised [230], this is not the case for the individual eigenvalues, and one relies on assumptions. There have been many lattice studies [231–235] which investigate the matching of the low-lying Dirac spectrum with RMT. In this review the results for the LECs obtained in this way^{Footnote 20} are not included.
5.1.5 Pion form factors
The scalar and vector form factors of the pion are defined by the matrix elements
where the operators contain only the lightest two quark flavours, i.e. \(\tau ^1\), \(\tau ^2\), \(\tau ^3\) are the Pauli matrices, and \(t\equiv (p_1-p_2)^2\) denotes the momentum transfer.
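In standard conventions (a reconstruction, shown for the \(\pi^+\) and with \(q=(u,d)\)), these matrix elements take the form

```latex
\langle \pi^+(p_2)\,|\,\bar{q}\,\gamma^\mu\tfrac{\tau^3}{2}\,q\,|\,\pi^+(p_1)\rangle
 \;=\; (p_1+p_2)^\mu\, F_V^{\pi}(t)\,,
\qquad
\langle \pi^+(p_2)\,|\,\bar{q}\,q\,|\,\pi^+(p_1)\rangle \;=\; F_S^{\pi}(t)\,.
```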
The vector form factor has been measured by several experiments for timelike as well as for spacelike values of \(t\). The scalar form factor is not directly measurable, but it can be evaluated theoretically from data on the \(\pi \pi \) and \(\pi K\) phase shifts [236] by means of analyticity and unitarity, i.e. in a model-independent way. Lattice calculations can be compared with data or model-independent theoretical evaluations at any given value of \(t\). At present, however, most lattice studies concentrate on the region close to \(t=0\) and on the evaluation of the slope and curvature which are defined as
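Near \(t=0\) the standard expansion (our reconstruction; \(F_V(0)=1\) by charge conservation) defines the slope and curvature via

```latex
F_X(t) \;=\; F_X(0)\,\Bigl\{1+\tfrac{1}{6}\,\langle r^2\rangle_X\, t
 \;+\; c_X\, t^2+\cdots\Bigr\}\,,\qquad X=V,S\,.
```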
The slopes are related to the mean-square vector and scalar radii, which are the quantities on which most experiments and lattice calculations concentrate.
In chiral perturbation theory, the form factors are known at NNLO [237]. The corresponding formulae are available in fully analytical form and are compact enough that they can be used for the chiral extrapolation of the data (as done, for example, in [238, 239]). The expressions for the scalar and vector radii and for the \(c_{S,V}\) coefficients at two-loop level read
where
and \(k_{r_S},k_{r_V}\) and \(k_{c_S},k_{c_V}\) are independent of the quark masses. Their expression in terms of the \(\ell _i\) and of the \(O(p^6)\) constants \(c_M,c_F\) is known but will not be reproduced here.
The difference between the quark-line connected and the full (i.e. containing the connected and the disconnected piece) scalar pion form factor has been investigated by means of Chiral Perturbation Theory in [240]. It is expected that the technique used can be applied to a large class of observables relevant in QCD phenomenology.
As a point of practical interest let us remark that there are no finite-volume correction formulae for the mean-square radii \({\langle }r^2{\rangle }_{V,S}\) and the curvatures \(c_{V,S}\). The lattice data for \(F_{V,S}(t)\) need to be corrected, point by point in \(t\), for finite-volume effects. In fact, if a given \(t\) is realised through several inequivalent \((p_1,p_2)\) combinations, the level of agreement after the correction has been applied is indicative of how well higher-order effects are under control.
5.1.6 Lattice determinations
In this section we summarise the lattice results for the SU(2) couplings in Tables 13, 14, 15 and 16 and in Figs. 8, 9 and 10. The tables present our usual colour coding, which summarises the main aspects related to the treatment of the systematic errors of the various calculations.
A delicate issue in the lattice determination of chiral LECs (in particular at NLO) which cannot be reflected by our colour coding is a reliable assessment of the theoretical error that comes from the chiral expansion. We add a few remarks on this point:

1.
Using both the \(x\) and the \(\xi \) expansion is a good way to test how the ambiguity of the chiral expansion (at a given order) affects the numerical values of the LECs that are determined from a particular set of data. For instance, to determine \(\bar{\ell }_4\) (or \(\Lambda _4\)) from lattice data for \(F_\pi \) as a function of the quark mass, one may compare the fits based on the parameterisation \(F_\pi =F\{1+x\ln (\Lambda _4^2/M^2)\}\) [see Eq. (48)] with those obtained from \(F_\pi =F/\{1-\xi \ln (\Lambda _4^2/M_\pi ^2)\}\) [see Eq. (53)]. The difference between the two results provides an estimate of the uncertainty due to the truncation of the chiral series. Which central value one chooses is in principle arbitrary, but we find it advisable to use the one obtained with the \(\xi \) expansion,^{Footnote 21} in particular because it makes the comparison with phenomenological determinations (where it is standard practice to use the \(\xi \) expansion) more meaningful.

2.
Alternatively one could try to estimate the influence of higher chiral orders by reshuffling irrelevant higher-order terms. For instance, in the example mentioned above one might use \(F_\pi =F/\{1-x\ln (\Lambda _4^2/M^2)\}\) as a different functional form at NLO. Another way to establish such an estimate is by introducing by hand “analytical” higher-order terms (e.g. “analytical NNLO” as done, in the past, by MILC [15]). In principle it would be preferable to include all NNLO terms or none, such that the structure of the chiral expansion is preserved at any order (this is what ETM [241] and JLQCD/TWQCD [67] have done for SU(2) \(\chi \)PT and MILC for SU(3) \(\chi \)PT [37]). There are different opinions in the field as to whether it is advisable to include terms to which the data are not sensitive. In case one is willing to include external (typically: nonlattice) information, the use of priors is a theoretically well-founded option (e.g. priors for NNLO LECs if one is interested in LECs at LO/NLO).

3.
Another issue concerns the \(s\)-quark mass dependence of the LECs \(\bar{\ell }_i\) or \(\Lambda _i\) of the SU(2) framework. As far as variations of \(m_{s}\) around \(m_{s}^\mathrm{phys}\) are concerned (say, for \(0<m_{s}<1.5\,m_{s}^\mathrm{phys}\) at best) the issue can be studied in SU(3) ChPT, and this has been done in a series of papers [56, 242, 243]. However, the effect of sending \(m_{s}\) to infinity, as is the case in \(N_\mathrm{f}=2\) lattice studies of SU(2) LECs, cannot be addressed in this way. The only way to analyse this difference is to compare the numerical values of LECs determined in \(N_\mathrm{f}=2\) lattice simulations to those determined in \(N_\mathrm{f}=2+1\) lattice simulations (see e.g. [244] for a discussion).

4.
Last but not least let us recall that the determination of the LECs is affected by discretisation effects, and it is important that these are removed by means of a continuum extrapolation. In this step invoking an extended version of the chiral Lagrangian [245–247] may be useful^{Footnote 22} in case one aims for a global fit of lattice data involving several \(M_\pi \) and \(a\) values and several chiral observables.
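As an illustration of remark 1 above, the following sketch compares the \(x\)- and \(\xi \)-parameterisations of \(F_\pi \) at the same \(\Lambda _4\). The input values (\(F=86\) MeV, \(\Lambda _4=1.2\) GeV) are illustrative assumptions, not FLAG numbers, and we approximate \(M_\pi \approx M\) in the logarithm, which only affects the result at NNLO.

```python
import math

# NLO chiral expansion of F_pi in two NLO-equivalent parameterisations
# (illustrative sketch; F and LAM4 below are assumed values in MeV).
F, LAM4 = 86.0, 1200.0

def fpi_x(M):
    """x-expansion: F_pi = F * (1 + x ln(Lam4^2/M^2)), x = M^2/(4 pi F)^2."""
    x = (M / (4.0 * math.pi * F)) ** 2
    return F * (1.0 + x * math.log(LAM4**2 / M**2))

def fpi_xi(M):
    """xi-expansion: F_pi = F / (1 - xi ln(Lam4^2/M^2)), solved by fixed-point
    iteration since xi = M^2/(4 pi F_pi)^2 involves F_pi itself."""
    fpi = F
    for _ in range(50):  # damped oscillating iteration, converges quickly
        xi = (M / (4.0 * math.pi * fpi)) ** 2
        fpi = F / (1.0 - xi * math.log(LAM4**2 / M**2))
    return fpi

# The two forms differ only at NNLO: the spread shrinks rapidly as M -> 0.
for M in (135.0, 270.0):
    print(M, fpi_x(M), fpi_xi(M))
```

The spread between the two curves at a given pion mass is exactly the kind of truncation uncertainty discussed in the text: it is at the sub-percent level near the physical point but grows quickly with the simulated pion mass.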
In the tables and figures we summarise the results of various lattice collaborations for the SU(2) LECs at LO (\(F\) or \(F/F_\pi \), \(B\) or \(\Sigma \)) and at NLO (\(\bar{\ell }_1-\bar{\ell }_2\), \(\bar{\ell }_3\), \(\bar{\ell }_4\), \(\bar{\ell }_5\), \(\bar{\ell }_6\)). Throughout we group the results into those which stem from \(N_\mathrm{f}=2+1+1\) calculations, those which come from \(N_\mathrm{f}=2+1\) calculations and those which stem from \(N_\mathrm{f}=2\) calculations (since, as mentioned above, the LECs are logically distinct even if the current precision of the data is not sufficient to resolve the differences). Furthermore, we distinguish whether the results are obtained from simulations in the \(p\)-regime or whether alternative methods (\(\epsilon \)-regime, spectral quantities, topological susceptibility, etc.) have been used (this should not affect the result). For comparison we add, in each case, a few phenomenological determinations with high standing.
A generic comment applies to the issue of the scale setting. In the past none of the lattice studies with \(N_\mathrm{f}\ge 2\) involved simulations in the \(p\)-regime at the physical value of \(m_{ud}\). Accordingly, the setting of the scale \(a^{-1}\) via an experimentally measurable quantity necessarily involved a chiral extrapolation, and as a result of this dimensionful quantities used to be particularly sensitive to this extrapolation uncertainty, while in dimensionless ratios such as \(F_\pi /F\), \(F/F_0\), \(B/B_0\), \(\Sigma /\Sigma _0\) this particular problem is much reduced (and often finite lattice-to-continuum renormalisation factors drop out). Now there is a new generation of lattice studies [20, 22, 23, 140, 249, 250] which does involve simulations at physical pion masses. In such studies even the uncertainty that the scale setting entails for dimensionful quantities is much mitigated.
It is worth repeating here that the standard colour-coding scheme of our tables is necessarily schematic and cannot do justice to every calculation. In particular there is some difficulty in coming up with a fair adjustment of the rating criteria to finite-volume regimes of QCD. For instance, in the \(\epsilon \)-regime^{Footnote 23} we re-express the “chiral-extrapolation” criterion in terms of \(\sqrt{2m_\mathrm{min}\Sigma }/F\), with the same threshold values (in MeV) between the three categories as in the \(p\)-regime. Also the “infinite-volume” assessment is adapted to the \(\epsilon \)-regime, since the \(M_\pi L\) criterion does not make sense here; we assign a green star if at least two volumes with \(L>2.5\,{\mathrm {fm}}\) are included, an open symbol if at least one volume with \(L>2\,{\mathrm {fm}}\) is invoked and a red square if all boxes are smaller than \(2\,{\mathrm {fm}}\). Similarly, in the calculation of form factors and charge radii the tables do not reflect whether an interpolation to the desired \(q^2\) has been performed or whether the relevant \(q^2\) has been engineered by means of “partially twisted boundary conditions” [253]. In spite of these limitations we feel that these tables give an adequate overview of the qualities of the various calculations.
We begin with a discussion of the lattice results for the SU(2) LEC \(\Sigma \). We present the results in Table 13 and Fig. 8. We add that results which include only a statistical error are listed in the table but omitted from the plot. Regarding the \(N_\mathrm{f}=2\) computations there are five entries without a red tag (ETM 08, ETM 09C, ETM 12, ETM 13, Brandt 13). We form the average based on ETM 09C, ETM 13 (here we deviate from our “superseded” rule, since the latter work has a much bigger error) and Brandt 13. Regarding the \(N_\mathrm{f}=2+1\) computations there are three published papers (RBC/UKQCD 10A, MILC 10A and Borsanyi 12) which make it into the \(N_\mathrm{f}=2+1\) average and a preprint (BMW 13) which will be included in a future update. We also remark that among the three works included RBC/UKQCD 10A is inconsistent with the other two (MILC 10A and Borsanyi 12). For the time being we inflate the error of our \(N_\mathrm{f}=2+1\) average such that it includes all three central values it is based on. This yields
where the errors include both statistical and systematic uncertainties. In accordance with our guidelines we plead with the reader to cite [217, 241, 257] (for \(N_\mathrm{f}=2\)) or [75, 78, 249] (for \(N_\mathrm{f}=2+1\)) when using these numbers. Finally, for \(N_\mathrm{f}=2+1+1\) there is only one calculation, and we recommend to use the result of [217] as given in Table 13. Another look at Fig. 8 confirms that these values are well consistent with each other.
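The error-inflation step described above can be sketched generically. The sketch below shows the procedure of widening a weighted average so that its one-sigma band covers all input central values; the numerical inputs are invented placeholders, not the FLAG data.

```python
import math

def inflated_average(values, errors):
    """Weighted average whose error is inflated, if necessary, so that the
    one-sigma band covers every input central value (a sketch of the
    procedure described in the text, not the exact FLAG code)."""
    w = [1.0 / e**2 for e in errors]
    mean = sum(wi * v for wi, v in zip(w, values)) / sum(w)
    err = 1.0 / math.sqrt(sum(w))          # naive weighted error
    spread = max(abs(v - mean) for v in values)
    return mean, max(err, spread)          # inflate to cover all central values

# invented placeholder inputs (in MeV):
mean, err = inflated_average([270.0, 280.0, 290.0], [6.0, 5.0, 7.0])
print(mean, err)  # the band [mean - err, mean + err] covers all three inputs
```

This is deliberately conservative: mutually inconsistent inputs translate into a larger quoted uncertainty rather than an artificially precise average.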
The next quantity considered is \(F\), i.e. the pion decay constant in the SU(2) chiral limit (\(m_{ud}\rightarrow 0\) at fixed physical \(m_{s}\)) in the Bernese normalisation. As argued on previous occasions we tend to give preference to \(F_\pi /F\) (here the numerator is meant to refer to the physical-pion-mass point) wherever it is available, since often some of the systematic uncertainties are mitigated. We collect the results in Table 14 and Fig. 9. In those cases where the collaboration provides only \(F\), the ratio is computed on the basis of the phenomenological value of \(F_\pi \), and the corresponding entries in Table 14 are in slanted fonts. Among the \(N_\mathrm{f}=2\) determinations only three (ETM 08, ETM 09C and Brandt 13) are without red tags. Since the first two are by the same collaboration, only the latter two enter the average. Among the \(N_\mathrm{f}=2+1\) determinations three values (MILC 09A as an obvious update of MILC 09, NPLQCD 11 and Borsanyi 12) make it into the average. Finally, there is a single \(N_\mathrm{f}=2+1+1\) determination (ETM 10) which forms the current best estimate in this category.
Given this input our averaging procedure yields
where the errors include both statistical and systematic uncertainties. We plead with the reader to cite [241, 257] (for \(N_\mathrm{f}=2\)) or [37, 249, 267] (for \(N_\mathrm{f}=2+1\)) when using these numbers. Finally, for \(N_\mathrm{f}=2+1+1\) we recommend to use the result of [98]; see Table 14 for the numerical value. From these numbers (or from a look at Fig. 9) it is obvious that the \(N_\mathrm{f}=2+1\) and \(N_\mathrm{f}=2+1+1\) results are not quite consistent. From a theoretical viewpoint this is rather surprising, since the only difference (the presence or absence of a dynamical charm quark) is expected to have a rather insignificant effect on this ratio (which, in addition, would be monotonic in \(N_\mathrm{f}\), contrary to what is seen in Fig. 9). In our view this indicates that—in spite of the conservative attitude taken in this report—the theoretical uncertainties in at least one of the two cases are likely underestimated. We hope that a future release of the FLAG report can clarify the issue.
We move on to a discussion of the lattice results for the NLO LECs \(\bar{\ell }_3\) and \(\bar{\ell }_4\). We remind the reader that on the lattice the former LEC is obtained as a result of the tiny deviation from linearity seen in \(M_\pi ^2\) versus \(Bm_{ud}\), whereas the latter LEC is extracted from the curvature in \(F_\pi \) versus \(Bm_{ud}\). The available determinations are presented in Table 15 and Fig. 10. Among the \(N_\mathrm{f}=2\) determinations ETM 08, ETM 09C and Brandt 13 are published and without red tags, and our rules imply that the latter two determinations enter our average. The colour coding of the \(N_\mathrm{f}=2+1\) results looks very promising; there is a significant number of lattice determinations without any red tag. At first sight it seems that RBC/UKQCD 10A, MILC 10A, NPLQCD 11, Borsanyi 12 and RBC/UKQCD 12 make it into the average. Unfortunately, \(\bar{\ell }_3\) and \(\bar{\ell }_4\) of RBC/UKQCD 10A have no systematic error; therefore we exclude this work from the \(N_\mathrm{f}=2+1\) average. Among the \(N_\mathrm{f}=2+1+1\) determinations only ETM 10 qualifies for an average.
Given this input our averaging procedure yields
where the errors include both statistical and systematic uncertainties. Again we plead with the reader to cite [241, 257] (for \(N_\mathrm{f}=2\)) or [25, 75, 249, 267] (for \(N_\mathrm{f}=2+1\)) when using these numbers. For \(N_\mathrm{f}=2+1+1\) we stay with the recommendation to use the results of [98], see Table 15 for the numerical values.
Let us add two remarks. On the input side our procedure^{Footnote 24} symmetrises the asymmetric error of ETM 09C with a slight adjustment of the central value. On the output side the error of the \(\bar{\ell }_3\) average for \(N_\mathrm{f}=2\) and of the \(\bar{\ell }_3,\bar{\ell }_4\) averages for \(N_\mathrm{f}=2+1\), according to the FLAG procedure, was inflated by hand to cover all central values. From these numbers (or from a look at Fig. 10) it is clear that the lattice results for \(\bar{\ell }_3\) do not show any obvious \(N_\mathrm{f}\)-dependence—thanks, chiefly, to our conservative error treatment strategy. On the other hand, in the case of \(\bar{\ell }_4\) even our practice of inflating the error of the \(N_\mathrm{f}=2+1\) average did not manage to avoid some mild inconsistency between the \(N_\mathrm{f}=2+1\) average on one side and either the \(N_\mathrm{f}=2\) or the \(N_\mathrm{f}=2+1+1\) average on the other side. Again, the dependence of the average on the number of active flavours is not monotonic, and this raises the suspicion that some of the systematic errors might still be underestimated.
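One common convention for symmetrising an asymmetric error \(c^{+a}_{-b}\) shifts the central value by half the asymmetry while keeping the covered interval fixed. The sketch below implements this convention; it is a plausible reading of the procedure mentioned above, not necessarily the exact FLAG prescription, and the numerical input is hypothetical.

```python
def symmetrise(central, err_plus, err_minus):
    """Symmetrise c^{+a}_{-b} -> (c + (a - b)/2) +/- (a + b)/2.
    A common convention; not necessarily the exact FLAG prescription."""
    shift = 0.5 * (err_plus - err_minus)
    return central + shift, 0.5 * (err_plus + err_minus)

# hypothetical illustrative input: 270 +27/-14 (MeV)
c, s = symmetrise(270.0, 27.0, 14.0)
print(c, s)
```

A convenient property of this convention is that the symmetrised band \([c'-\sigma ,\,c'+\sigma ]\) coincides exactly with the original asymmetric interval \([c-b,\,c+a]\).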
More specifically, it seems that again the \(N_\mathrm{f}=2+1+1\) value by ETM shows some tension relative to the average \(N_\mathrm{f}=2+1\) value quoted above, in close analogy to what happened for \(F\) or \(F_\pi /F\); see the discussion around (66). Since both \(F\) and \(\bar{\ell }_4\) are determined from the quarkmass dependence of the pseudoscalar decay constant, perhaps the formulae in Refs. [273, 274] for dealing with cutoff and finitevolume effects with twistedmass data might prove useful in future analysis.
From a more phenomenological viewpoint there is a notable difference between \(\bar{\ell }_3\) and \(\bar{\ell }_4\) in Fig. 10. For \(\bar{\ell }_4\) the precision of the phenomenological determination achieved in Colangelo 01 [193] represents a significant improvement compared to Gasser 84 [58]. Picking any \(N_\mathrm{f}\), the lattice average of \(\bar{\ell }_4\) is consistent with both of the phenomenological values and comes with an error which is roughly comparable to the uncertainty of the result in Colangelo 01 [193]. By contrast, for \(\bar{\ell }_3\) the error of the lattice determination is significantly smaller than the error of the estimate given in Gasser 84 [58]. In other words, here the lattice really provides some added value.
We finish with a discussion of the lattice results for \(\bar{\ell }_6\) and \(\bar{\ell }_1-\bar{\ell }_2\). The LEC \(\bar{\ell }_6\) determines the leading contribution in the chiral expansion of the pion charge radius—see (63). Hence from a lattice study of the vector form factor of the pion with several \(M_\pi \) one may extract the radius \({\langle }r^2{\rangle }_V^\pi \), the curvature \(c_V\) (both at the physical pion-mass point) and the LEC \(\bar{\ell }_6\) in one go. Similarly, the leading contribution in the chiral expansion of the scalar radius of the pion determines \(\bar{\ell }_4\)—see (63). This LEC is also present in the pion-mass dependence of \(F_\pi \), as we have seen. The difference \(\bar{\ell }_1-\bar{\ell }_2\), finally, may be obtained from the momentum dependence of the vector and scalar pion form factors, based on the two-loop formulae of [237]. The top part of Table 16 collects the results obtained from the vector form factor of the pion (charge radius, curvature and \(\bar{\ell }_6\)). Regarding this low-energy constant two \(N_\mathrm{f}=2\) calculations are published works without a red tag; we thus arrive at the estimate
which is represented as a grey band in the last panel of Fig. 10. Here we plead with the reader to cite [238, 257] when using this number.
The experimental information concerning the charge radius is excellent and the curvature is also known very accurately, based on \(e^+e^-\) data and dispersion theory. The vector form factor calculations thus present an excellent testing ground for the lattice methodology. The table shows that most of the available lattice results pass the test. There is, however, one worrisome point. For \(\bar{\ell }_6\) the agreement seems less convincing than for the charge radius, even though the two quantities are closely related. So far we have no explanation, but we urge the groups to pay special attention to this point. Similarly, the bottom part of Table 16 collects the results obtained for the scalar form factor of the pion and the combination \(\bar{\ell }_1-\bar{\ell }_2\) that is extracted from it.
Perhaps the most important physics result of this section is that the lattice simulations confirm the approximate validity of the Gell-Mann–Oakes–Renner formula and show that the square of the pion mass indeed grows in proportion to \(m_{ud}\). The formula represents the leading term of the chiral perturbation series and necessarily receives corrections from higher orders. At first nonleading order, the correction is determined by the effective coupling constant \(\bar{\ell }_3\). The results collected in Table 15 and in the top panel of Fig. 10 show that \(\bar{\ell }_3\) is now known quite well. They corroborate the conclusion drawn already in Ref. [278]: the lattice confirms the estimate of \(\bar{\ell }_3\) derived in [58]. In the graph of \(M_\pi ^2\) versus \(m_{ud}\), the values found on the lattice for \(\bar{\ell }_3\) correspond to remarkably little curvature: the Gell-Mann–Oakes–Renner formula represents a reasonable first approximation out to values of \(m_{ud}\) that exceed the physical value by an order of magnitude.
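The smallness of the curvature can be made quantitative. With the standard NLO formula \(M_\pi ^2=M^2\{1-\tfrac{x}{2}\,\bar{\ell }_3\}\), where \(x=M^2/(4\pi F)^2\) and \(\bar{\ell }_3=\ln (\Lambda _3^2/M_\pi ^2)\), and illustrative (assumed) inputs \(\bar{\ell }_3\approx 3\) and \(F\approx 86\) MeV, the correction at the physical point comes out at the few-percent level:

```python
import math

# Size of the NLO correction to the Gell-Mann-Oakes-Renner relation:
# M_pi^2 = M^2 * (1 - x/2 * lbar3), with x = M^2/(4 pi F)^2.
# Illustrative (assumed) inputs: F = 86 MeV, lbar3 = 3, evaluated at M = 135 MeV.
F, M, LBAR3 = 86.0, 135.0, 3.0

x = (M / (4.0 * math.pi * F)) ** 2
rel_correction = -0.5 * x * LBAR3
print(rel_correction)  # about -0.02, i.e. a correction of roughly 2 percent
```

Since \(x\) grows linearly with \(m_{ud}\), even a tenfold increase of the quark mass keeps the NLO term at the level of tens of percent, consistent with the statement that the leading-order formula remains a reasonable first approximation over that range.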
As emphasised by Stern and collaborators [279–281], the analysis in the framework of \(\chi \)PT is coherent only if (i) the leading term in the chiral expansion of \(M_\pi ^2\) dominates over the remainder and (ii) the ratio \(m_{s}/m_{ud}\) is close to the value 25.6 that follows from Weinberg’s leading-order formulae. In order to investigate the possibility that one or both of these conditions might fail, the authors proposed a more general framework, referred to as “Generalised \(\chi \)PT”, which includes \(\chi \)PT as a special case. The results found on the lattice demonstrate that QCD does satisfy both of the above conditions—in the context of QCD, the proposed generalisation of the effective theory does not appear to be needed. There is a modified version, however, referred to as “Resummed \(\chi \)PT” [282], which is motivated by the possibility that the Zweig-rule-violating couplings \(L_4\) and \(L_6\) might be larger than expected. The available lattice data do not support this possibility, but they do not rule it out either (see Sect. 5.2.4 for details).
5.2 SU(3) low-energy constants
5.2.1 Quarkmass dependence of pseudoscalar masses and decay constants
In the isospin limit, the relevant SU(3) formulae take the form [56]
where \(m_{ud}\) is the common up and down quark mass (which may be different from the one in the real world), and \(B_0=\Sigma _0/F_0^2\), \(F_0\) denote the condensate parameter and the pseudoscalar decay constant in the SU(3) chiral limit, respectively. In addition, we use the notation
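In the conventions of Gasser and Leutwyler the abbreviation in question is presumably the standard chiral logarithm (our reconstruction):

```latex
\mu_P \;=\; \frac{M_P^2}{32\pi^2 F_0^2}\,
 \ln\!\frac{M_P^2}{\mu^2}\,,\qquad P=\pi ,K,\eta\,.
```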
At the order of the chiral expansion used in these formulae, the quantities \(\mu _\pi \), \(\mu _K\), \(\mu _\eta \) can equally well be evaluated with the leadingorder expressions for the masses,
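Those leading-order mass formulae are the familiar ones (a reconstruction, consistent with the definitions above):

```latex
M_\pi^2 \;=\; 2B_0\,m_{ud}\,,\qquad
M_K^2 \;=\; B_0\,(m_{ud}+m_s)\,,\qquad
M_\eta^2 \;=\; \tfrac{2}{3}\,B_0\,(m_{ud}+2m_s)\,.
```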
Throughout, \(L_i\) denotes the renormalised low-energy constant/coupling (LEC) at scale \(\mu \), and we adopt the convention which is standard in phenomenology, \(\mu =770\,{\mathrm {MeV}}\). The normalisation used for the decay constants is specified in footnote 16.
5.2.2 Charge radius
The SU(3) formula for the slope of the pion vector form factor reads [152]
while the expression \({\langle }r^2\rangle _S^{\mathrm {oct}}\) for the octet part of the scalar radius does not contain any NLO low-energy constant at the one-loop order [152] (cf. Sect. 5.1.5 for the situation in SU(2)).
5.2.3 Partially quenched formulae
The term “partially quenched QCD” is used in two ways. For heavy quarks (\(c,b\) and sometimes \(s\)) it usually means that these flavours are included in the valence sector, but not in the functional determinant. For the light quarks (\(u,d\) and sometimes \(s\)) it means that they are present in both the valence and the sea sector of the theory, but with different masses (e.g. a series of valence quark masses is evaluated on an ensemble with a fixed sea quark mass).
The program of extending the standard (unitary) SU(3) theory to the (second version of) “partially quenched QCD” has been completed at the two-loop (NNLO) level for masses and decay constants [283]. These formulae tend to be complicated, with the consequence that a state-of-the-art analysis with \(O(2000)\) bootstrap samples on \(O(20)\) ensembles with \(O(5)\) masses each [and hence \(O(200'000)\) different fits] will require significant computational resources for the global fits. For an up-to-date summary of recent developments in Chiral Perturbation Theory relevant to lattice QCD we refer to [284].
The theoretical underpinning of how “partial quenching” is to be treated in the (properly extended) chiral framework is given in [285]. Specifically for partially quenched QCD with staggered quarks it is shown that a transfer matrix can be constructed which is not Hermitian but bounded, and can thus be used to construct correlation functions in the usual way.
5.2.4 Lattice determinations
To date, there are three comprehensive SU(3) papers with results based on lattice QCD with \(N_\mathrm{f}= 2 + 1\) dynamical flavours [15, 19, 79], and one more with results based on \(N_\mathrm{f}= 2 + 1 + 1\) dynamical flavours [156]. It is an open issue whether the data collected at \(m_{s} \simeq m_{s}^\mathrm{phys}\) allow for an unambiguous determination of SU(3) lowenergy constants (cf. the discussion in [