Abstract
Since Wilson’s seminal papers of the mid-1970s, the lattice approach to Quantum Chromodynamics has become increasingly important for the study of the strong interaction at low energies, and has now turned into a mature and established technique. Although the lattice formulation of Quantum Field Theory has been applied to virtually all fundamental interactions, it is appropriate to discuss this topic in a chapter devoted to QCD, since by far the largest part of the activity is focused on the strong interaction. Lattice QCD is, in fact, the only known method that allows ab initio investigations of hadronic properties, starting from the QCD Lagrangian formulated in terms of quarks and gluons.
5.1 Introduction and Outline
Since Wilson’s seminal papers of the mid-1970s, the lattice approach to Quantum Chromodynamics has become increasingly important for the study of the strong interaction at low energies, and has now turned into a mature and established technique. Although the lattice formulation of Quantum Field Theory has been applied to virtually all fundamental interactions, it is appropriate to discuss this topic in a chapter devoted to QCD, since by far the largest part of the activity is focused on the strong interaction. Lattice QCD is, in fact, the only known method that allows ab initio investigations of hadronic properties, starting from the QCD Lagrangian formulated in terms of quarks and gluons.
5.1.1 Historical Perspective
In order to illustrate the wide range of applications of the lattice formulation, we give a brief historical account below.
First applications of the lattice approach in the late 1970s employed analytic techniques, predominantly the strong coupling expansion, in order to investigate colour confinement and also the spectrum of glueballs. While these attempts gave valuable insights, it soon became clear that in the case of non-Abelian gauge theories such expansions were not sufficient to produce quantitative results.
First numerical investigations via Monte Carlo simulations, focusing in particular on the confinement mechanism in pure Yang–Mills theory, were carried out around 1980. The following years already saw several valiant attempts to study QCD numerically, yet it was realized that the available computer power was grossly inadequate to incorporate the effects of dynamical quarks. It was then that the so-called “quenched approximation” of QCD was proposed as a first step towards solving full QCD numerically. This approximation rests on the ad hoc assumption that the dominant nonperturbative effects are mediated by the gluon field. Hadronic observables can then be computed on a pure gauge background with far less numerical effort compared to the real situation, in which the quarks have a feedback on the gluon field. The main focus of activity during the 1980s was on bosonic theories: numerical simulations were used to compute the glueball spectrum in pure Yang–Mills theory. Another important result during this period concerned ϕ^4 theory and the implications of its supposed “triviality” for the Higgs–Yukawa sector of the Standard Model. Using a combination of analytic and numerical techniques, the triviality of ϕ^4 theory could be rigorously established.
Except for a brief spell of activity around the turn of the decade aimed at simulating QCD with dynamical fermions, most projects in the 1990s were devoted to exploring quenched QCD. Having recognized that the available computers and the efficiency of known algorithms were far from sufficient to perform “realistic” simulations of QCD with controlled errors, lattice physicists resorted to exploring the quenched approximation and its limitations for a number of phenomenologically interesting quantities. Although the systematic error that arises from neglecting dynamical quarks could not be quantified reliably, many important quantities, such as quark and hadron masses, the strong coupling constant and weak hadronic matrix elements, were computed for the first time. One of the icons of that period was surely the plot of the masses of the lightest hadrons in the continuum limit of quenched QCD produced by the CP-PACS Collaboration: their results indicated that the quenched approximation works surprisingly well (at least for these quantities), since the computed spectrum agreed with experimental determinations at the level of 10%. Simultaneously, a number of sophisticated techniques were developed during the 1990s, helping to control systematic effects, mainly pertaining to the influence of lattice artefacts, as well as the renormalization of local operators in the lattice-regularized theory and their relation to continuum schemes such as \({\overline {{\mathrm {MS}}}}\). Perhaps the most significant development at the end of the 1990s was the clarification of the issue of chiral symmetry and lattice regularization. Following this work it is now understood under which conditions the lattice formulation is compatible with chiral symmetry. The importance of this development extends far beyond QCD and implies new prospects for the nonperturbative study of chiral gauge theories.
Since 2000 the focus has decidedly shifted from the quenched approximation to serious attempts to simulate QCD with dynamical quarks, thereby tackling the biggest remaining systematic uncertainty. Progress in this area has not been driven merely by the vast increase in computer power since the very first Monte Carlo simulations, but rather by the development of new algorithmic ideas, combined with the use of alternative discretizations that are numerically more efficient. At the time of writing this contribution (2007), the whole field is in a state of transition: although the quenched approximation is being abandoned, the latest results from simulations with dynamical quarks have not yet reached the same level of accuracy in controlling systematic errors due to lattice artefacts and renormalization effects as earlier quenched calculations. It can thus be expected that many of the results discussed later in this chapter will soon be superseded by more accurate numbers. In turn, the quenched approximation will become completely obsolete in a few years’ time, except perhaps to test new ideas or for exploratory studies of more complex quantities.
5.1.2 Outline
We begin with an introduction of the basic concepts of the lattice formulation of QCD. This shall include the field theoretical foundations, discretizations of the QCD Lagrangian, as well as simulation algorithms and other technical aspects related to the actual calculation of physical observables from suitable correlation functions. The following sections deal with various applications. Lattice calculations of the hadron spectrum are described in Sect. 5.3. Section 5.4 is devoted to lattice investigations of the confinement phenomenon. Determinations of the fundamental parameters of QCD, namely the strong coupling constant and the quark masses, are a major focus of this article, and are presented in Sect. 5.5. Another important property of QCD, namely the spontaneously broken chiral symmetry, is discussed in some detail in Sect. 5.6, which also includes a brief introduction to analytical nonperturbative approaches to the strong interaction based on effective field theories. Lattice calculations of weak hadronic matrix elements, which serve to pin down the elements of the Cabibbo–Kobayashi–Maskawa matrix, are covered in Sect. 5.7. We end this contribution with a few concluding remarks.
In addition to the topics listed above, lattice simulations of QCD have also made important contributions to the determination of the phase structure of QCD, including results for the critical temperature of the deconfinement phase transition. Nevertheless, in this chapter we restrict the discussion to QCD at zero temperature and refer the reader to other parts of this volume.
5.2 The Lattice Approach to QCD
The essential features of the lattice formulation can be summarized by the following statement:
Lattice QCD is the nonperturbative approach to the gauge theory of the strong interaction through regularized, Euclidean functional integrals. The regularization is based on a discretization of the QCD action which preserves gauge invariance at all stages.
This definition includes all basic ingredients: starting from the functional integral itself avoids any particular reference to perturbation theory. This is what we mean when we call lattice QCD an ab initio method. The Euclidean formulation, which is obtained by rotating to imaginary time, reveals the close relation between Quantum Field Theory and Statistical Mechanics. In particular, the Euclidean functional integral is equivalent to the partition function of the corresponding statistical system. This equivalence is particularly transparent if the field theory is formulated on a discrete spacetime lattice. Via this relation, the whole toolkit of condensed matter physics, including high-temperature expansions and, perhaps most importantly, Monte Carlo simulations, is at the disposal of the field theorist.
Many of the basic concepts introduced in this section are discussed in several common textbooks on the subject [1,2,3,4], which can be consulted for further details.
5.2.1 Euclidean Quantization
The generic steps in the Euclidean quantization procedure of a lattice field theory are the following:

1.
Define the classical, Euclidean field theory in the continuum;

2.
Discretize the corresponding Lagrangian;

3.
Quantize the theory by defining the functional integral;

4.
Determine the particle spectrum from Euclidean correlation functions.
We shall now illustrate this procedure for a simple example, namely the theory for a neutral scalar field.
Step 1
Consider a real, classical field ϕ(x), with x = (x ^{0}, x ^{1}, x ^{2}, x ^{3}), whose time variable x ^{0} is obtained by analytically continuing t to −ix ^{0}. The Euclidean action S _{E}[ϕ] is defined as
where
Step 2
In order to discretize the theory, a hypercubic lattice, Λ_{E}, is introduced as the set of discrete spacetime points, i.e.
Thus, each coordinate of a lattice site is an integer multiple of the lattice spacing a. The total number of lattice sites is \(N_{\mathrm {t}}\times N_{\mathrm {s}}^3\), while the physical spacetime volume is T × L^3, with T = aN_t and L = aN_s. The discretized action is then given by
where the lattice derivatives can be defined as
Here and below \(\hat \mu \) denotes a unit vector in the direction of μ. Via a Fourier transform, the Euclidean lattice Λ_{E} is related to the dual lattice, \(\Lambda _{\mathrm {E}}^*\), defined by
This not only implies that the momenta p _{0} and p _{j} are quantized in units of 2π∕T and 2π∕L, respectively, but also that a momentum cutoff has been introduced, since
As we shall see below, this way of introducing a momentum cutoff can be extended to gauge theories in such a way that gauge invariance is respected. An important point to realize is that the lattice action is not unique: it is only required that the discretized expression for S _{E} reproduces the continuum result as the lattice spacing a is taken to zero.
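The momentum quantization and the implicit cutoff can be made concrete in a few lines. The following sketch (with illustrative values for the lattice spacing a and the number of sites N in one periodic direction) lists the allowed momenta of the dual lattice:

```python
import numpy as np

# Illustrative values: N sites with spacing a in one periodic direction
a, N = 0.5, 8
L = N * a                      # physical extent of this direction

# Allowed momenta p = (2*pi/L) * n, with n = -N/2 + 1, ..., N/2
n = np.arange(-N // 2 + 1, N // 2 + 1)
p = 2.0 * np.pi * n / L

print(p[1] - p[0])             # momenta are quantized in units of 2*pi/L
print(p.max())                 # the largest momentum, pi/a, acts as the cutoff
```

All momenta lie within the first Brillouin zone, |p| ≤ π∕a, which is the statement that the lattice provides a momentum cutoff.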
Step 3
The theory is quantized via the Euclidean functional integral
Here one sees explicitly that the discretization procedure has given a mathematical meaning to the integration measure, which reduces to that of an ordinary, multi-dimensional integral.
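Since the measure is now an ordinary multi-dimensional one, expectation values can be estimated by Monte Carlo methods. The toy sketch below applies a Metropolis update to a discretized scalar field on a one-dimensional periodic lattice; the lattice size, bare parameters and proposal step are illustrative choices, not tuned values:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-d lattice scalar field with illustrative bare parameters
N, m2, lam = 32, 1.0, 0.0
phi = np.zeros(N)              # "cold" start

def action(phi):
    # Discretized Euclidean action: nearest-neighbour kinetic term
    # (periodic boundary conditions) plus mass and quartic terms
    dphi = np.roll(phi, -1) - phi
    return np.sum(0.5 * dphi**2 + 0.5 * m2 * phi**2 + lam * phi**4)

def sweep(phi, step=1.0):
    # One Metropolis sweep: propose a local change at each site and
    # accept it with probability min(1, exp(-dS))
    for i in range(N):
        old = phi[i]
        S_old = action(phi)
        phi[i] = old + step * rng.uniform(-1, 1)
        if rng.random() >= np.exp(-(action(phi) - S_old)):
            phi[i] = old       # reject: restore the old value
    return phi

for _ in range(200):
    phi = sweep(phi)
print(np.mean(phi**2))         # crude single-configuration estimate of <phi^2>
```

In a production simulation one would of course average over many decorrelated configurations and estimate statistical errors; the sketch only illustrates that the functional integral has become a finite-dimensional integral amenable to importance sampling.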
One can now define Euclidean correlation functions of local fields through
In the continuum limit, these correlation functions approach the Schwinger functions, which encode the physical information about the spectrum within the Euclidean formulation. Osterwalder and Schrader [5] have laid down the general criteria which must be satisfied such that the information in Minkowskian spacetime can be reconstructed from the Schwinger functions.
Step 4
The particle spectrum is extracted from the exponential falloff of the Euclidean two-point correlation function. To this end, one must define the Euclidean time evolution operator. The transfer matrix T describes time propagation by a finite Euclidean time interval a. The functional integral can be expressed in terms of the transfer matrix as
where the trace is taken over a basis |α〉 of the Hilbert space of physical states. In order to obtain expressions which are more reminiscent of those in Minkowski spacetime, one can define a Hamiltonian H_E by
If |α〉 denotes an eigenstate of the transfer matrix with eigenvalue λ_α, i.e.
then one can work out the spectral decomposition of the two-point correlation function, viz.
Here, the quantity (E_α − E_0) is the so-called mass gap, i.e. the energy of the state |α〉 above the vacuum. For large Euclidean time separations (x_0 − y_0) the lowest state dominates the two-point function, i.e. all higher states die out exponentially. The spectral decomposition of the two-point function forms the basis for numerical simulations of lattice field theories, as the mass (or energy) of a given state is given by the dominant exponential falloff at large Euclidean times (see Sect. 5.2.3).
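In practice one often monitors the "effective mass" m_eff(t) = ln[C(t)∕C(t + a)], which approaches the mass gap once the excited-state contamination has died out. A minimal sketch with synthetic two-point data (the energies and amplitudes below are invented purely for illustration, in lattice units):

```python
import numpy as np

# Synthetic two-point function: ground state E0 = 0.5 plus one
# excited state E1 = 1.2 (illustrative values, lattice units a = 1)
t = np.arange(0, 20)
C = 1.0 * np.exp(-0.5 * t) + 0.3 * np.exp(-1.2 * t)

# Effective mass m_eff(t) = log(C(t)/C(t+1)): biased upward at early
# times, it plateaus at the mass gap once excited states have decayed
m_eff = np.log(C[:-1] / C[1:])
print(m_eff[0])    # early-time value, above the true gap
print(m_eff[-1])   # plateau value, close to E0 = 0.5
```

In an actual simulation C(t) carries statistical noise, and the plateau must be identified by fitting over a suitable time window.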
5.2.2 Lattice Actions for QCD
Our goal now is to find a lattice transcription of the Euclidean QCD action in the continuum, i.e.
where g _{0} denotes the gauge coupling, and our conventions are chosen such that the covariant derivative is defined through
while the field tensor reads
Before attempting to write down a discretized version, we must first elucidate the notion of a lattice gauge field in a non-Abelian theory. In fact, in this case it turns out that the gauge potential A_μ must be abandoned when the theory is discretized. The reason is that the familiar non-Abelian transformation law, i.e.
no longer holds exactly when ∂ _{μ} is replaced by its discrete counterpart d _{μ} of Eq. (5.5). Strict gauge invariance at the level of the regularized theory cannot be maintained in this fashion.
The definition of a lattice gauge field relies on the concept of the parallel transporter. If a quark moves in the presence of a background gauge field from y to x, it picks up a non-Abelian phase factor, given by
where “P.O.” denotes path ordering, a consequence of the non-Abelian nature of the gauge field. In contrast to the gauge potential A_μ, which is an element of the Lie algebra of SU(3), the parallel transporter U(x, y) is an element of the gauge group itself. On the lattice, the parallel transporter between neighbouring lattice sites x and \(x+a\hat \mu \) is called the link variable:
A consistent and manifestly gauge invariant discretization of QCD is obtained by identifying the gauge degrees of freedom with the link variables U _{μ}(x), which transform under the gauge group as
The connection with the gauge potential A _{μ}(x) is somewhat subtle: if U _{μ}(x) denotes a given link variable in the discretized theory, it can be used to define a vector field A _{μ}(x) as an element of the Lie algebra of SU(3) via
In turn, if \(A_\mu ^{\mathrm {c}}\) is a given gauge potential in the continuum theory, one can always find a link variable which approximates \(A_\mu ^{\mathrm {c}}\) up to cutoff effects.
Now we turn to the problem of defining a discretized version of the Yang–Mills action. To this end we define the plaquette P_{μν}(x) as the product of link variables around an elementary square of the lattice:
A graphical representation is shown in Fig. 5.1. Using the transformation property in Eq. (5.22), it is easy to convince oneself that this object is manifestly gauge invariant. Moreover, it serves to define the simplest discretization of the Yang–Mills action, the Wilson plaquette action [6]
It has become a standard textbook exercise to verify that for small lattice spacings
provided that one relates the parameter β to the bare gauge coupling via \(\beta =6/g_0^2\) in Eq. (5.25). We have remarked already that the discretization of a field theory is not unique, and hence one is free to add further gauge invariant terms to the plaquette action which formally vanish as a → 0, but which produce a discretization with an accelerated rate of convergence to the continuum limit. The most widely chosen alternatives are the Symanzik [7] and Iwasaki [8] actions.
Quark and antiquark fields, ψ(x) and \(\bar {\psi }(x)\), are associated with the lattice sites and transform under the gauge group as
Using the transformation property of the link variables, it is straightforward to write down a discretized version of the covariant derivative, i.e.
where ∇_{μ} and \(\nabla _\mu ^*\) denote the “forward” and “backward” derivatives, respectively. Finally, we note that in Euclidean spacetime, the Dirac matrices can be defined to satisfy \(\left \{\gamma _\mu ,\gamma _\nu \right \}=2\delta _{\mu \nu }\).
Before we attempt to construct the fermionic part of the action of lattice QCD, it is useful to identify the basic properties that the discretized, massless Dirac operator, D, should satisfy:

(a)
D is local;

(b)
\(\widetilde {D}(p)={\mathrm {i}}\gamma _\mu p_\mu +{\mathrm {O}}(ap^2)\);

(c)
\(\widetilde {D}(p)\) is invertible for p ≠ 0;

(d)
γ _{5}D + Dγ _{5} = 0.
Locality, i.e. the absence of long-ranged interactions, is a basic property of any quantum field theory describing elementary particles. Property (b) implies that the correct continuum behaviour of the quark–gluon interaction is reproduced. Furthermore, condition (c) ensures that the correct fermion spectrum is obtained: fermion masses are associated with poles of \(\{\widetilde {D}(p)\}^{-1}\), which, in the continuum theory, occur only at vanishing four-momentum. Finally, property (d) ensures that the massless theory respects chiral symmetry.
Using the definition of the covariant derivative and the conventions for the Dirac matrices in Euclidean spacetime, we can now write down the simplest discretized version of the massless lattice Dirac operator:
It turns out, however, that this “naïve” discretization violates condition (c) and therefore produces spurious fermionic degrees of freedom. This is the so-called fermion doubling problem, which is most easily explained by considering D_disc in momentum space for the free theory. The Fourier transform yields
The discretization procedure has thus replaced p_μ by a sine function. While the Taylor expansion guarantees that condition (b) is satisfied, the occurrence of \(\sin {}(ap_\mu )\) implies that \(\widetilde {D}_{\text{disc}}(p)\) vanishes not only at p = 0, but also whenever any of the components p_μ equals π∕a within the permitted range of momenta, thereby violating condition (c). The massless propagator \(\{\widetilde {D}_{\text{disc}}(p)\}^{-1}\) therefore has 2^4 = 16 poles, and thus there is a 16-fold degeneracy of the fermion spectrum.
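The 16-fold degeneracy can be exhibited directly by enumerating the momenta at which every component of sin(ap_μ) vanishes; a short sketch (in lattice units):

```python
from itertools import product
import numpy as np

a = 1.0
# The free naive operator, (i/a) * gamma_mu * sin(a p_mu), vanishes whenever
# every component satisfies sin(a p_mu) = 0, i.e. p_mu in {0, pi/a}
corners = [p for p in product([0.0, np.pi / a], repeat=4)
           if np.allclose(np.sin(a * np.array(p)), 0.0, atol=1e-12)]
print(len(corners))   # -> 16: one physical pole plus 15 doublers
```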
As we shall see below, the fermion doubling problem is closely linked with the issue of chiral symmetry on the lattice. For now we simply list the various methods that have been devised to address fermion doubling. Historically the first was due to Wilson (“Wilson fermions”) [6]. Here, the degeneracy is lifted completely, but the price to pay is the explicit breaking of chiral symmetry at the level of the regularized theory. Another method, due to Kogut and Susskind (“staggered fermions”) [9], is based on the idea of spreading individual spinor components over the corners of an elementary hypercube of the lattice. Although the degeneracy is only lifted partially (from 16 to 4), this formulation has the advantage of leaving a subgroup of chiral symmetry unbroken. More recent developments include the use of so-called “domain wall” [10, 11] or “overlap” [12] fermions. These formulations leave chiral symmetry unbroken in principle, and also succeed in lifting the degeneracy completely. Finally, there are the so-called “perfect” actions [13], which are based on a renormalization group approach and which are in principle completely free of lattice artefacts. An exact realization of the perfect action which can be used in simulations is, however, difficult to obtain. In practice, one typically uses a so-called truncated fixed-point action. Domain wall and overlap fermions, as well as perfect actions, are particular realizations of a class of discretizations dubbed “Ginsparg–Wilson fermions”. They have the remarkable feature that chiral symmetry is preserved, while the fermion doubling problem is completely avoided. We shall come back to this issue in more detail below.
For now we turn specifically to Wilson’s treatment of the fermion doubling problem. It exploits the fact that the discretization is not unique. Thus, one can add a term to D_disc which formally vanishes as a → 0, but which pushes the masses of the unwanted doubler states to the cutoff scale at any nonzero value of the lattice spacing. Explicitly, the massless Wilson–Dirac operator D_w reads
where r is the so-called Wilson parameter, which is usually set to one. The Fourier transform of D_w for a trivial gauge field reads
which explicitly demonstrates (for the free theory, at least) that the doubler states at p_μ = π∕a receive additional mass contributions proportional to r∕a, which is of the order of the cutoff for r = O(1). Although this procedure leads to a complete lifting of the degeneracy, it has a number of unwanted features: first, the Wilson fermion action differs from the classical continuum action by terms of order a, as a result of adding the counterterm proportional to r. By contrast, the leading discretization effects of the Wilson plaquette action for Yang–Mills theory are only O(a^2). The Wilson fermion formulation will thus have a reduced rate of convergence towards the continuum limit. Secondly, the addition of the Wilson term results in an explicit breaking of chiral symmetry, since the massless theory is no longer invariant under global axial rotations, such as
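The effect of the Wilson term on the doublers can be read off from the momentum-dependent mass it generates. A small sketch evaluating M(p) = (r∕a) Σ_μ (1 − cos ap_μ) at the corners of the Brillouin zone (in lattice units, r = a = 1):

```python
from itertools import product
import numpy as np

a, r = 1.0, 1.0
# Momentum-dependent Wilson mass M(p) = (r/a) * sum_mu (1 - cos(a*p_mu)),
# evaluated at all 16 corners p_mu in {0, pi/a} of the Brillouin zone
masses = sorted((r / a) * np.sum(1.0 - np.cos(a * np.array(p)))
                for p in product([0.0, np.pi / a], repeat=4))
print(masses[0])    # -> 0.0 at p = 0: the physical fermion stays massless
print(masses[1])    # -> 2.0 = 2r/a: the lightest doubler sits at the cutoff
```

Each doubler acquires a mass 2rk∕a, where k is the number of momentum components equal to π∕a, so only the physical mode at p = 0 survives the continuum limit.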
which implies that property (d) is violated. While the rate of convergence to the continuum limit can be accelerated by employing what is known as “O(a) improvement” (see below), the explicit breaking of chiral symmetry cannot be cured within the Wilson theory. Thus, quantities like the quark condensate, which arises from the spontaneous breaking of chiral symmetry, cannot be studied in a conceptually “clean” manner using Wilson fermions. A detailed discussion of how this can be achieved with the help of a more sophisticated fermionic discretization (“Ginsparg–Wilson fermions”) is presented in Sect. 5.6. However, for most applications of lattice QCD, explicit chiral symmetry breaking is merely an inconvenience, not a serious obstacle.
We have already remarked when discussing the discretized Yang–Mills part of the QCD action that the non-uniqueness of the discretization opens the possibility to construct lattice actions with an accelerated rate of convergence towards the continuum limit. A systematic way to achieve this is the so-called Symanzik improvement programme [14], in which lattice artefacts can be removed order by order in the lattice spacing. In a nutshell, the improvement programme amounts to extending the renormalization procedure of a field theory to the level of irrelevant operators, i.e. operators that formally vanish as a → 0. In this sense one adds suitable counterterms which, for any nonzero value of a, produce a cancellation of the cutoff effects at a given order, provided that their coefficients are tuned appropriately. For QCD with Wilson fermions, Sheikholeslami and Wohlert [15] have shown that the Symanzik improvement programme to lowest order is realized by adding a single O(a) counterterm to the Wilson–Dirac operator D_w. The resulting expression in the massless case reads
where \(\sigma _{\mu \nu }=\frac {{\mathrm {i}}}{2}[\gamma _\mu ,\gamma _\nu ]\), and \(\widehat {F}_{\mu \nu }\) is a lattice transcription of the gluon field strength tensor F _{μν}. A suitable representation of \(\widehat {F}_{\mu \nu }\) in terms of plaquette variables is given by
where Q _{μν}(x) is the sum of the four plaquettes emanating from the site x, as depicted in Fig. 5.2. The object Q _{μν}(x) is aptly called “clover” leaf. In order to remove all lattice artefacts of order a in hadron masses, the improvement coefficient c _{sw} must be fixed by imposing a suitable improvement condition. Without going into details here, we note that it is possible to find such a condition, which can also be evaluated at the nonperturbative level [16, 17]. The resulting, nonperturbatively O(a) improved Wilson action can then be used to compute, say, hadron masses whose values differ from the continuum result by terms of only O(a ^{2}).
The Wilson–Dirac operator for a quark with bare mass m_0 is simply (D_w + m_0). However, the form of the Wilson fermion action, \(S_{\mathrm {F}}^{\mathrm {W}}[U,\bar {\psi },\psi ]\), found in the literature is usually expressed in terms of the “hopping parameter” κ rather than m_0. By rescaling the fermion fields according to
one obtains
The hopping parameter κ is related to the bare mass m _{0} via
while the dimensionless parameter r is usually set to one. Taken together with the plaquette action of Eq. (5.25), the Wilson action for QCD is thus conveniently parameterized in terms of the bare parameters (β, κ), with \(\beta =6/g_0^2\) and κ as above, instead of the bare gauge coupling and quark mass (g _{0}, m _{0}).
Another consequence of adding the Wilson term to the naïve lattice action is the resulting additive renormalization of the quark mass. In other words, the point where the quark mass vanishes is a priori unknown. The value that must be subtracted is called the critical quark mass, which corresponds to the critical value of the hopping parameter, κ _{c}. The bare subtracted quark mass is then given by
From Eq. (5.38) one easily infers that the critical value of κ in the free theory occurs at
while for nonzero g _{0} the value of κ _{c} must be determined, for instance, by adjusting κ to the point where the pion mass vanishes.
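The bookkeeping between κ and the bare quark mass is elementary; a sketch of the free-theory relations (with r = 1):

```python
# Hopping-parameter bookkeeping for Wilson fermions in the free theory:
# kappa = 1 / (2*(a*m0 + 4*r)), so the massless point is kappa_c = 1/8
def kappa(am0, r=1.0):
    return 1.0 / (2.0 * (am0 + 4.0 * r))

def am_q(kappa_val, kappa_c=1.0 / 8.0):
    # Bare subtracted quark mass: a*m_q = 1/(2*kappa) - 1/(2*kappa_c)
    return 0.5 / kappa_val - 0.5 / kappa_c

print(kappa(0.0))          # -> 0.125, i.e. kappa_c of the free theory
print(am_q(kappa(0.05)))   # recovers a*m0 = 0.05, as it must
```

In the interacting theory κ_c is shifted away from 1∕8 by the additive mass renormalization and has to be determined numerically, as described above.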
We now turn to one alternative to Wilson’s solution of the fermion doubling problem, namely the so-called “staggered” (or Kogut–Susskind) fermions. One might think that the doubling problem arises because there are too many fermionic degrees of freedom in the discretized theory if one associates a four-component Dirac spinor with each individual lattice site. Pictorially, the main idea of Kogut and Susskind was to “thin out” the degrees of freedom by distributing single spinor components over different lattice sites. In their particular formulation, the 16 corners of a four-dimensional hypercube serve to accommodate the individual components of four Dirac spinors. Therefore, if these hypercubes are regarded as the main building blocks of the fermionic discretization, rather than the lattice sites themselves, this procedure results in a partial lifting of the degeneracy, from 16 fermion species down to four. It is clear, though, that a simple distribution of spinor components is not sufficient to define the action, since the Dirac matrices mix different spinor components. Thus, the staggered fermion action is only obtained after performing a diagonalization in spinor space, which then decouples the individual components.
Rather than describing the details of this procedure, which can be found in most textbooks, we simply state the result. Starting from the usual four-component spinor and performing a spin diagonalization, one derives the lattice action for staggered fermions with bare mass m_0 coupled to the gauge field as
where χ_α denotes a one-component Grassmann variable. The spin diagonalization has thus replaced the Dirac matrices γ_μ by real, position-dependent phase factors η_μ(x), which are given by
At the level of the classical action, the spinor components are completely decoupled, and the action decomposes into four identical pieces. In order to occupy all 16 corners of a four-dimensional hypercube with one-component Grassmann variables, one needs four Dirac spinors, each of which contributes a term like Eq. (5.41) to the overall action. This produces the four-fold degeneracy of staggered fermions, with the remnant doubler states being referred to as “tastes”, in order to distinguish them from physical flavours. The formulation using the one-component fields within a hypercube can be re-expressed in terms of the spin-taste basis [18], from which one can infer directly that the taste symmetry is broken. However, one axial generator of the taste symmetry remains unbroken. The fermion mass in the staggered approach is therefore protected against any additive renormalization through the associated global axial U(1) symmetry, unlike in the case of the Wilson action. While the various tastes decouple in the continuum limit, nonvanishing interactions between the tastes at O(a^2) in the lattice spacing are induced, leading to large lattice artefacts. The Symanzik improvement programme can be employed to reduce these taste-changing interactions [19], and the resulting “improved staggered fermions” (the so-called “Asqtad” action being one particular example [20]) have been widely used in a series of simulations.
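The staggered phases are simple sign factors. A sketch of one common convention, η_μ(x) = (−1)^{x_0 + ⋯ + x_{μ−1}} with η_0(x) = 1 (conventions differ in the literature), together with the anticommutation property that lets the phases take over the role of the Dirac matrices:

```python
import numpy as np

def eta(mu, x):
    # Staggered phase eta_mu(x) = (-1)^(x_0 + ... + x_{mu-1}); eta_0(x) = 1.
    # (One common convention; the ordering of directions is convention-dependent.)
    return (-1) ** int(np.sum(x[:mu]))

x = np.array([1, 0, 1, 0])
print([eta(mu, x) for mu in range(4)])   # -> [1, -1, -1, 1]

# On the lattice the phases reproduce the Clifford-algebra signs:
# eta_mu(x) eta_nu(x+mu_hat) = -eta_nu(x) eta_mu(x+nu_hat) for mu != nu
mu, nu = 1, 2
xm = x.copy(); xm[mu] += 1
xn = x.copy(); xn[nu] += 1
print(eta(mu, x) * eta(nu, xm) == -eta(nu, x) * eta(mu, xn))   # -> True
```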
For a long time lattice physicists struggled to find a fermionic discretization which would both solve the doubling problem and be compatible with chiral symmetry. In fact, physicists grew increasingly doubtful that this could be achieved, following the proof of a “No-Go theorem” by Nielsen and Ninomiya [21], which stated that the conditions (a)–(d) mentioned above cannot be satisfied simultaneously. Since one does not want to give up locality and property (b), this implies that either (c) or (d) must be violated. Indeed, the Wilson and staggered discretizations seem to confirm this expectation: while the Wilson fermion action removes all doublers, it breaks chiral symmetry, leading to an additive renormalization of the quark mass, among several other consequences. By contrast, the staggered formulation preserves a U(1) subgroup of chiral symmetry at the price of only partially removing the spurious degrees of freedom.
A way to circumvent the Nielsen–Ninomiya theorem was already pointed out by Ginsparg and Wilson in 1982 [22], when they suggested to relax condition (d) in favour of
However, it was not until 1997 that a nontrivial solution to this condition, now commonly referred to as the Ginsparg–Wilson relation, was found. It was shown [23] that the so-called “perfect action” constructed from a renormalization group approach satisfies Eq. (5.43). It was also realized that any lattice Dirac operator which is a solution to the Ginsparg–Wilson relation also satisfies the Atiyah–Singer index theorem, i.e.
such that the operator D exhibits n_− − n_+ exact chiral zero modes. Finally, it was shown [24] that the Ginsparg–Wilson relation implies an exact symmetry of the associated action, with infinitesimal variations proportional to
Moreover, this symmetry reproduces the correct chiral anomaly in the flavour singlet case, and therefore all the hallmarks of the correct chiral behaviour are present in the lattice theory: chiral zero modes, an exact index theorem and the chiral anomaly derived from the Ward identities associated with the exact symmetry.
Another line in the development of lattice fermion actions that preserve chiral symmetry goes back to Kaplan’s domain wall fermion approach [10], which was subsequently applied to QCD by Furman and Shamir [11]. Without going into detail, we state that the basic idea is to introduce an extra, fifth dimension and to couple the fermions to a mass defect (the so-called “domain wall height”) in that extra dimension. To make this more explicit, let x, y denote the coordinates in the four-dimensional bulk, and s, t the coordinates in the fifth dimension, which has finite extent N_5. The gauge fields are trivial in the fifth direction, and the Dirac operator then has the general structure
where D ^{∥}(x, y) is the usual Wilson–Dirac operator with a negative mass term, − M, which represents the domain wall height. The operator \(D_{st}^\perp \) couples fermions in the 5th dimension and contains the physical bare quark mass m _{0}. It can then be shown that for m _{0} = 0 and in the limit N _{5} →∞ there are no fermion doublers and, more importantly, chiral modes of opposite chirality are trapped in the four-dimensional domain walls at s = 1, N _{5}.
However, in a real lattice simulation of domain wall fermions, one has to work with a finite value of N _{5}, so that the decoupling of chiral modes is not exact. One expects, though, an exponential suppression of the remnant chiral symmetry breaking effects, and this has been confirmed in several simulations. Furthermore, the rate of suppression may be accelerated by optimizing the choice of lattice action for the gauge fields. Hence, the domain wall formulation of QCD offers a method to realize almost exact chiral symmetry at non-zero lattice spacing at the expense of simulating a five-dimensional theory.
Another operator which correctly reproduces the chiral properties of QCD at nonzero lattice spacing was constructed by Neuberger [12]. Its definition is
where D _{w} is the massless Wilson–Dirac operator, and s < 1 is a tunable parameter. By defining Q = −γ _{5}A, one can rewrite Eq. (5.47) as
The Neuberger–Dirac operator D _{N} removes all doublers from the spectrum, and can easily be shown to satisfy the Ginsparg–Wilson relation [12]. The occurrence of an inverse square root in D _{N} raises two issues. First, it is a priori not clear whether or not D _{N} is local. Second, the application of D _{N} in a computer program is potentially very costly, since the sign function of the matrix Q must be implemented using, for instance, a polynomial approximation.
In order to qualify as a viable discretization of the quark action, “strict” locality, meaning that only fields in a local neighbourhood of a given lattice site are coupled, is not actually required. If D(x, y) denotes a generic lattice Dirac operator which couples fields at sites x and y, then a sufficient condition for locality of D is the exponential suppression of nonlocal interactions, i.e.
where x − y is the distance between sites and ∥⋅∥ denotes a suitably defined matrix norm. In Ref. [25] it was shown that the Neuberger–Dirac operator D _{N} is local in the sense of Eq. (5.49), provided that the lattice spacing in physical units^{Footnote 2} is not larger than about 0.13 fm. As far as the issue of numerical efficiency is concerned, we note that the most widely used approximations of sign(Q) with good convergence properties include Chebyshev polynomials as well as rational approximations such as Zolotarev’s.
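To make the polynomial approximation of the sign function concrete, here is a small numpy sketch (an illustration, not production lattice code): since sign(x) = x∕√(x²), one may fit a Chebyshev polynomial p(y) to 1∕√y on an interval [ε², 1] and use x · p(x²). The degree, the window ε and the assumed normalization of the spectrum of Q to |λ| ∈ [ε, 1] are illustrative choices.

```python
import numpy as np

def make_sign_approx(deg=20, eps=0.3):
    """Approximate sign(x) = x/sqrt(x^2) for eps <= |x| <= 1 by
    fitting a Chebyshev polynomial p(y) ~ 1/sqrt(y) on [eps^2, 1]."""
    y = np.linspace(eps**2, 1.0, 4000)
    p = np.polynomial.Chebyshev.fit(y, 1.0 / np.sqrt(y), deg)
    return lambda x: x * p(x * x)

sign_approx = make_sign_approx()

# test the approximation on both branches of sign(x)
x = np.concatenate([np.linspace(-1.0, -0.3, 500), np.linspace(0.3, 1.0, 500)])
max_err = np.max(np.abs(sign_approx(x) - np.sign(x)))
print(f"maximal deviation from sign(x): {max_err:.1e}")
```

Applied to the Hermitian matrix Q, the same polynomial gives sign(Q) ≈ Q p(Q²) in terms of matrix–vector multiplications only; Zolotarev’s optimal rational approximation reaches a given accuracy with a much lower degree, which is one reason it is often preferred in practice.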
The last fermionic discretization we wish to mention here was originally constructed to address another problem of Wilson’s discretization, namely that it is not protected against the occurrence of zero modes for any non-zero value of the bare quark mass. These unphysical zero modes manifest themselves as “exceptional” configurations, which occur with a certain frequency in numerical simulations with Wilson quarks and which can lead to strong statistical fluctuations. The problem can be cured by introducing a so-called “chirally twisted” mass term, after which the fermionic part of the QCD action in the continuum assumes the form
Here, μ _{q} is the twisted mass parameter, and τ ^{3} is a Pauli matrix. The standard action in the continuum can be recovered via a global chiral field rotation:
Fixing the twist angle α by requiring that \(\tan \alpha =\mu _{\mathrm {q}}/m\) one finds
which demonstrates the complete equivalence of the twisted formulation with “ordinary” QCD. The lattice action of twisted mass QCD for N _{f} = 2 flavours is defined as [26]
Although this formulation breaks physical parity and flavour symmetries, it has a number of advantages over standard Wilson fermions. In particular, the presence of the twisted mass parameter μ _{q} protects the discretized theory against unphysical zero modes. Another attractive feature of twisted mass lattice QCD is the fact that the leading lattice artefacts are of order a ^{2} without the need to add the Sheikholeslami–Wohlert term [27], even though the Wilson–Dirac operator is used in Eq. (5.53). Although the problem of explicit chiral symmetry breaking remains, the twisted formulation is particularly useful for circumventing some of the problems encountered in connection with the renormalization of local operators on the lattice. Recent reviews of twisted mass lattice QCD can be found in [28, 29].
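The equivalence established by the chiral rotation can also be checked numerically. The following sketch (with illustrative mass values; the sign of the rotation angle depends on conventions and may differ from the one adopted in the text) verifies that the twisted mass term m + iμ _{q}γ _{5}τ ^{3} is rotated into a standard mass term of magnitude \(\sqrt{m^2+\mu _{\mathrm {q}}^2}\):

```python
import numpy as np

m, mu = 0.04, 0.01                         # bare and twisted masses (lattice units)
alpha = np.arctan(mu / m)                  # twist angle, tan(alpha) = mu/m

gamma5 = np.diag([1.0, 1.0, -1.0, -1.0])   # chiral representation
tau3   = np.diag([1.0, -1.0])
G = np.kron(gamma5, tau3)                  # gamma5 x tau3 on spinor x flavour space

I = np.eye(8)
M_twisted = m * I + 1j * mu * G            # twisted mass term  m + i mu gamma5 tau3

# chiral rotation exp(-i alpha gamma5 tau3 / 2); G is diagonal, so expm is trivial
R = np.diag(np.exp(-1j * alpha * np.diag(G) / 2))

# the fields rotate as psi -> R psi, psibar -> psibar R, so the mass term gets R M R
M_rotated  = R @ M_twisted @ R
M_expected = np.sqrt(m**2 + mu**2) * I

print(np.allclose(M_rotated, M_expected))  # True: untwisted mass sqrt(m^2 + mu^2)
```

At “maximal twist” (m = 0) the physical quark mass is carried entirely by μ _{q}.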
We wish to end this part with a few general remarks. Although we have discussed discretizations of the QCD action in some detail, including the most recent developments, many more variants of the basic types of action—including several different combinations of fermionic and pure gauge parts—can be found in the literature. This reflects the fact that the discretization is not unique. The actual choice of lattice action in a particular simulation will influence the convergence rate to the continuum limit, the algorithmic efficiency, the renormalization properties of local operators, or—in the case of domain wall fermions—the extent to which chiral symmetry is realized. Depending on the properties of a particular discretization, the choice of lattice action can be optimized for the physics one wishes to study.
5.2.3 Functional Integral and Observables
The lattice formulation provides a regularization of non-Abelian gauge theories whilst preserving the gauge invariance at all stages of the calculation. This comes at a price, since all continuous spacetime symmetries are broken explicitly and must be recovered in the continuum limit. Nevertheless, the lattice regularized theory inherits all consequences of gauge invariance, including renormalizability. Moreover, the lattice regularizes the theory without any reference to perturbation theory. By contrast, in continuum schemes like the \({\overline {{\mathrm {MS}}}}\) scheme of dimensional regularization the cutoff is only defined after fixing the order of the perturbative expansion. As we shall see below, observables in lattice QCD are directly given in terms of functional integrals, which can be evaluated stochastically using Monte Carlo integration. In this way, any use of perturbation theory is completely avoided.
For concreteness, let us assume that we have made a particular choice for the Yang–Mills part S _{G}[U] and the fermionic part \(S_{\text{F}}[U,\bar {\psi },\psi ]\), for instance, the Wilson plaquette action and Wilson fermions. Let Ω denote an observable, which is represented by a polynomial in the quark and antiquark fields and the link variables. The expectation value, 〈 Ω〉, is defined through the Euclidean functional integral^{Footnote 3}
where the normalization factor Z is fixed by the condition \(\langle 1\rangle =1\). The functional integral involves an integration over the gauge group and over all fermionic degrees of freedom, the latter being represented by anticommuting (Grassmann) variables. Since the fermionic action, \(S_{\mathrm {F}}[U,\bar {\psi },\psi ]\), is bilinear in the quark and antiquark fields, the integration over the Grassmann variables is Gaussian and can be performed analytically. This yields
Equation (5.55) requires some further explanation:

\(\widetilde {\Omega }\) denotes the representation of Ω in the (effective) theory, where the quark fields have been integrated out and only the link variables remain in the functional integral measure;

D _{lat} denotes a generic, massive lattice Dirac operator. For instance, for Wilson quarks one has D _{lat} = D _{w} + m _{0}. For simplicity we have displayed the expression for QCD with N _{f} flavours of equal mass m _{0}, which accounts for the power N _{f}. In the case of non-degenerate quarks \(\{\det {D_{\text{lat}}}\}^{N_{\mathrm {f}}}\) must be replaced by a product of determinants, in which each factor represents the contribution from a single flavour:

The lattice formulation has given a welldefined meaning to the measure D[U]. The integration over the gauge degrees of freedom reduces to a finitedimensional integration over the gauge group, based on the invariant group (Haar) measure.
The numerical evaluation of 〈 Ω〉 via Monte Carlo integration proceeds as follows. One starts by generating a set of gauge configurations using a computer program. One configuration in the set represents the collection of all link variables on a given lattice, i.e.
for which we shall use the shorthand {U _{μ}(x)} below. A collection of an infinite number of configurations is called an ensemble. The statistical weight, W, of an individual configuration is given by
In other words, the composition of the ensemble is determined by a probability distribution, which is given by the negative exponentiated classical action in the integrand of the Euclidean functional integral. Owing to the weight factor, the integrand of the functional integral will be strongly peaked around those configurations for which W is large. This particular feature makes the expectation value amenable to a Monte Carlo treatment. The key idea is to replace the ensemble by a finite sample of N _{cfg} gauge configurations, which is dominated by those configurations for which W is large. Provided that one can construct a suitable algorithm, the sample will then consist predominantly of those configurations which give a large contribution to the Euclidean functional integral and thus 〈 Ω〉. Such a procedure is called importance sampling.
Technically, the sample is produced by generating a sequence of configurations via a Markov process:
One assigns a probability for the transition from \(\left \{U_\mu (x)\right \}_i\) to \(\left \{U_\mu (x)\right \}_{i+1}\), which is usually a function of the statistical weights of the two configurations, W _{i} and W _{i+1}, respectively. For each individual configuration in the sequence one then evaluates the observable, which yields the estimates Ω_{i}, i = 1, …, N _{cfg}. The expectation value 〈 Ω〉 is related to the mean value \(\overline \Omega \) via
In other words, in the limit of infinite statistics the mean value converges to the ensemble average which is identical to the expectation value. An important consequence of approximating the ensemble average by the sample average is a nonzero value of the variance. Hence, in order to specify the results from a Monte Carlo integration completely, one must also quote the statistical error which is given by the square root of the variance.
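The Markov-chain procedure and the error estimate can be illustrated with a deliberately simple toy model with a single degree of freedom and action S(φ) = φ²∕2 (a sketch only; lattice QCD applies the same accept/reject logic to the link variables):

```python
import numpy as np

rng = np.random.default_rng(7)

def S(phi):                      # toy action: S = phi^2/2, weight W ~ exp(-S)
    return 0.5 * phi**2

phi, samples = 0.0, []
for i in range(200_000):
    phi_new = phi + rng.uniform(-1.5, 1.5)           # local proposal
    if rng.uniform() < np.exp(S(phi) - S(phi_new)):  # accept with min(1, W_new/W_old)
        phi = phi_new
    if i >= 10_000 and i % 10 == 0:                  # thermalize, then thin the chain
        samples.append(phi**2)                       # "observable" Omega = phi^2

samples = np.array(samples)
mean = samples.mean()
err  = samples.std(ddof=1) / np.sqrt(len(samples))   # naive statistical error
print(f"<phi^2> = {mean:.3f} +/- {err:.3f}  (exact: 1.0)")
```

The naive error formula assumes independent measurements; successive configurations in a Markov chain are correlated, so in practice the variance must be corrected for autocorrelations, e.g. by binning the data.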
In the standard algorithms that implement Markov processes (such as the Metropolis algorithm [30]), the transition probabilities for going from one configuration to another are determined by comparing the statistical weights for local variations in the field variables. This guarantees computational efficiency, since the variation of individual link variables does not involve global information from the entire lattice. In Eq. (5.55) the dynamical effects of the quark fields are incorporated via the determinant of the lattice Dirac operator. The determinant, however, is a nonlocal object, which is expensive to compute. When the first efforts were made to compute observables in QCD in the 1980s, the available computer power did not allow for the inclusion of the quark determinant. Instead, lattice physicists resorted to what is known as the “quenched approximation”, which is based on the assumption that the bulk of nonperturbative contributions is carried by the gauge field, so that the determinant is set to a constant:
The resulting gain in computer time amounts to several orders of magnitude. In the quenched approximation the effects of virtual quark loops are entirely suppressed. As a consequence, results for observables are afflicted with an unknown systematic error. As we shall see later, there are several quantities (for instance, the masses of the lightest hadrons) for which the quenching error amounts to just 10–15%. Although this justifies the use of the quenched approximation to some extent, it is clear that dynamical quark effects must be taken into account, in order to arrive at reliable, nonperturbative predictions with a total accuracy at the percent level.
Modern algorithms for dynamical quarks, such as the Hybrid Monte Carlo algorithm [31], do not evaluate the quark determinant directly. Rather, one exploits the property that the determinant can be rewritten as a functional integral over bosonic fields, which is then evaluated stochastically. Thereby one avoids computing a global object, but the computational effort involved in the stochastic estimation of the quark determinant is still large compared with the quenched approximation. More details can be found in Sect. 5.2.6 below.
Correlation functions, i.e. the expectation values of polynomials in the quark and gluon fields, are the most important quantities, since they determine implicitly the particle spectrum of the theory. As was discussed already in Sect. 5.2.1, the link between correlation functions and the particle spectrum is provided by the transfer matrix T. For lattice QCD with Wilson fermions, the existence of a positive transfer matrix was rigorously established [32].
As a concrete example we shall discuss the two-point correlation function of a charged kaon. A polynomial of quark fields with the quantum numbers of the kaon is given by
where the parentheses indicate summation over spinor and colour components of the fields. Mostly one is interested in correlation functions in which all spatial points have been summed over and which therefore only depend on the Euclidean time separation. We define
The inclusion of the phase factor in conjunction with the summation over \(\vec {x}\) amounts to a projection onto spatial momentum \(\vec {p}\). On a finite lattice with periodic boundary conditions \(C_{\mathrm {K}}(x_0;\vec {p})\) must be symmetric under x _{0} ↔ T − x _{0}. Therefore, the spectral decomposition of \(C_{\mathrm {K}}(x_0;\vec {p})\) reads
where the sum runs over all states in the kaon channel with fixed momentum \(\vec {p}\), and \(\epsilon _\alpha (\vec {p})\) is the mass gap (see Sect. 5.2.1).^{Footnote 4} For large Euclidean times x _{0} the ground state dominates. If we further set \(\vec {p}=0\), then the asymptotic form of the twopoint function reads
where \(m_{\mathrm {K}}=\epsilon _0(\vec {p})\big |_{\vec {p}=0}\) is the mass of the kaon, and the sum of the two exponentials has been re-expressed using the cosh function. Owing to the ordering \(\epsilon _0(\vec {p})<\epsilon _1(\vec {p})<\ldots \), the higher excited states are exponentially suppressed. The functional form of Eq. (5.64) is nicely illustrated by the plot in Fig. 5.3, where simulation data for \(C_{\mathrm {K}}(x_0;\vec {p}=0)\) are compared to its asymptotic form. The data indeed show the expected cosh behaviour. Furthermore, one observes how the contributions from higher excited states, which are clearly visible at small values of x _{0}∕a, quickly die out as the time separation increases. From the two-point function we can extract two important quantities: the fall-off of \(C_{\mathrm {K}}(x_0;\vec {p}=0)\) is characteristic of the kaon mass, i.e. the energy of the ground state, while the prefactor of the \(\cosh \)-function yields the transition amplitude between a kaon state and the vacuum, and thus contains information on the kaon’s decay properties.
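In practice the mass is extracted from the measured correlator, for instance via an “effective mass”. The sketch below applies the exact cosh identity C(x _{0} − a) + C(x _{0} + a) = 2 cosh(am) C(x _{0}) to synthetic data (the masses and amplitudes are invented for illustration):

```python
import numpy as np

T = 48                                      # time extent in lattice units
t = np.arange(T, dtype=float)

def corr(amp, am):
    """periodic two-point function of a single state with mass a*m"""
    return amp * (np.exp(-am * t) + np.exp(-am * (T - t)))

# synthetic correlator: ground state (a*m = 0.25) plus one excited state
C = corr(1.0, 0.25) + corr(0.8, 0.60)

# effective mass from  C(t-1) + C(t+1) = 2 cosh(a m_eff) C(t)
am_eff = np.arccosh((C[:-2] + C[2:]) / (2.0 * C[1:-1]))

print(f"a*m_eff at t=2: {am_eff[1]:.3f},  near T/2: {am_eff[T//2 - 2]:.3f}")
```

The plateau near T∕2 reproduces the input ground-state mass, while the excited-state contamination is visible at small t, mirroring the behaviour seen in Fig. 5.3.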
5.2.4 Continuum Limit, Scale Setting and Renormalization
In Sect. 5.2.2 we have discussed how to discretize the QCD action. The main principle in the construction of the various actions was the condition that the corresponding expressions reproduce the continuum action in the formal limit a → 0, regardless of the values of the bare parameters (such as β and the hopping parameter κ in the case of QCD with Wilson fermions). Beyond the classical theory this is no longer possible: it is a general property of quantum field theory that the parameters of the regularized theory (masses and couplings) must be adjusted as the regulator is removed. In the context of lattice QCD this implies that the continuum limit, a → 0, is reached by a suitable tuning of the bare parameters.
To make this statement more precise, we shall invoke the close connection between Euclidean lattice field theory and a system in statistical mechanics. Models in statistical physics (think of the Ising model as an example) usually have a phase structure. Depending on the choice of parameters, the different phases may exhibit entirely different physical properties. The analogy with lattice field theory then implies that a particular discretization of QCD also possesses a phase structure in the space of bare parameters (β and κ, for example).^{Footnote 5} We shall now explain that the continuum limit of QCD is associated with a critical point in the phase diagram, which corresponds to a second-order phase transition. In the previous section we have considered hadronic two-point correlation functions, and how the mass in a given channel can be extracted from the asymptotic behaviour at large Euclidean times. Actually, this procedure yields the dimensionless combination (aM), i.e. the hadron mass in lattice units. In order to take the continuum limit, one must take a → 0, while the physical mass M must remain constant. This implies
In other words, the correlation length ξ diverges in the continuum limit. In the language of statistical physics, a divergent correlation length signals a second-order phase transition. The existence of the continuum limit in lattice QCD is therefore equivalent to the existence of a second-order transition in the space of bare parameters.
For simplicity we shall now consider Yang–Mills theory on the lattice, which we choose to describe by Wilson’s plaquette action and the bare coupling parameter \(\beta \equiv 6/g_0^2\). The existence of a second-order phase transition corresponds to a critical value of the bare gauge coupling, g _{0,c}. Furthermore, it implies that the bare coupling g _{0} and the lattice spacing a (or, equivalently, the correlation length ξ) cannot be varied independently when the continuum limit is approached.^{Footnote 6} In this way we may regard the bare coupling as a function of the lattice spacing, g _{0}(a), such that
Let P be an observable, computed for a particular value of g _{0}, i.e. P = P(g _{0}, a). Since P is a physical quantity it must stay constant as the continuum limit is taken, i.e.
This leads to the Callan–Symanzik equation
We can define the renormalization group β-function β _{lat} as
which describes the change in g _{0} when a is varied. Note that β _{lat} depends on the choice of discretization. In perturbation theory, however, one recovers the familiar universal coefficients at one- and two-loop order. For gauge group SU(N) one has
where
and N _{f} = 0 in pure Yang–Mills theory. Starting from the perturbative expansion of β _{lat} one can integrate the Callan–Symanzik equation, which gives
where the integration constant Λ_{lat} represents a characteristic scale of the theory. The above expression establishes the connection between the lattice spacing and the bare coupling in perturbation theory. One reads off that
and hence the critical point occurs at g _{0,c} = 0. These findings are a consequence of asymptotic freedom. Taking Eq. (5.72) at face value one would conclude that the relation between P(a, g _{0}) and \(P(a^\prime ,g_0^\prime )\), computed for two different values of the bare coupling g _{0} and \(g_0^\prime \) near the critical point, was simply given by the ratio of Eq. (5.72) evaluated for g _{0} and \(g_0^\prime \). However, actual simulations do not confirm this expectation. The reason for the failure to observe “asymptotic scaling”, i.e. a variation of P(a, g _{0}) with g _{0} which is consistent with Eq. (5.72), is that the values of g _{0} accessible in simulations are still much too far from the critical point for perturbation theory to be a good approximation.
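For orientation, Eq. (5.72) is nevertheless easy to evaluate; the sketch below does so for the pure SU(3) gauge theory (N _{f} = 0) and shows how aΛ_{lat} shrinks as β = 6∕g _{0}^{2} is increased — bearing in mind that, as just discussed, these perturbative numbers are not quantitatively reliable at accessible couplings:

```python
import numpy as np

N, Nf = 3, 0                                   # pure SU(3) Yang-Mills theory
b0 = (11*N/3 - 2*Nf/3) / (4*np.pi)**2
b1 = (34*N**2/3 - (13*N/3 - 1/N)*Nf) / (4*np.pi)**4

def a_times_Lambda(g0):
    """Two-loop relation a(g0)*Lambda_lat of Eq. (5.72), up to O(g0^2) corrections."""
    return (b0 * g0**2)**(-b1 / (2*b0**2)) * np.exp(-1.0 / (2*b0*g0**2))

for beta in (5.7, 6.0, 6.3):                   # beta = 6/g0^2
    g0 = np.sqrt(6.0 / beta)
    print(f"beta = {beta}:  a*Lambda_lat = {a_times_Lambda(g0):.3e}")
```

The output decreases monotonically with β: tuning the bare coupling towards zero drives the lattice spacing, in units of Λ_{lat}, towards the continuum limit.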
Let P and P ^{′} be two different observables that both satisfy Eq. (5.68). Then, regardless of whether or not asymptotic scaling holds, one would expect the ratio aP(a, g _{0})∕aP ^{′}(a, g _{0}) to be equal to the physical ratio P∕P ^{′} for all values of g _{0}. However, even this weaker scaling criterion is usually not observed, the reason being that the righthand side of Eq. (5.68) is not strictly zero. Rather one has
where p is a positive integer. These socalled scaling violations on the righthand side depend both on the lattice action and the observable in question. As a consequence, the ratio considered above behaves like
In other words, as g _{0} is tuned towards zero, dimensionless ratios of observables converge to the continuum limit with a rate proportional to a ^{p}, where the power p is characteristic of the particular discretization employed in the lattice calculation. In Table 5.1 we have already listed the leading scaling violations (lattice artefacts) for several widely used fermionic lattice actions. Discretizations of the Yang–Mills part, such as the plaquette action, have leading lattice artefacts of O(a ^{2}). The Symanzik improvement programme allows one to construct lattice actions with an accelerated rate of convergence to the continuum limit.
In actual lattice calculations, the continuum limit must be taken by performing simulations at several different values of the lattice spacing and extrapolating the results to a = 0. The functional form of the extrapolation is chosen such that it is consistent with the leading discretization errors for a given lattice action. Such a procedure is only viable if the relation between the lattice spacing in physical units and the dimensionless coupling parameter g _{0} (which is an input parameter in the simulation) is known with good accuracy. Since the perturbative formula Eq. (5.72) is not of any practical use, the relation between the scale and the coupling must be mapped out nonperturbatively. To this end one picks a value for g _{0} and computes in a Monte Carlo simulation a dimensionful quantity Q, whose value is known from experiment. Common choices for Q in the pure gauge theory are the string tension or the hadronic radius r _{0} [33, 34], while in full QCD one may choose the mass of the nucleon. The Monte Carlo procedure yields Q in lattice units, (aQ), and the calibration of the lattice spacing is achieved via
Knowledge of (aQ) over a range of bare couplings is a prerequisite for performing the continuum extrapolation. In Fig. 5.4 we show a particular example, namely the continuum extrapolation of the combination \(M_{\mathrm {s}}+\textstyle {1\over 2}(M_{\mathrm {u}}+M_{\mathrm {d}})\) of quark masses, normalized by the kaon decay constant, computed using O(a) improved Wilson fermions in the quenched approximation [35]. The expected linear convergence in a ^{2} is clearly exhibited by the lattice data.
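Such a continuum extrapolation is a weighted fit linear in a ^{2}. The sketch below uses invented data points for a generic dimensionless ratio P, assuming an improved action with leading artefacts of O(a ^{2}):

```python
import numpy as np

# hypothetical dimensionless ratios measured at several lattice spacings (fm)
a  = np.array([0.10, 0.08, 0.07, 0.05])
P  = np.array([1.152, 1.118, 1.103, 1.078])        # illustrative values only
dP = np.array([0.010, 0.009, 0.008, 0.009])

# leading artefacts O(a^2), so fit P(a) = P0 + c*a^2
coeff, cov = np.polyfit(a**2, P, 1, w=1.0/dP, cov=True)
c, P0 = coeff

print(f"continuum limit P0 = {P0:.4f} +/- {np.sqrt(cov[1,1]):.4f}")
```

In a real analysis one would also vary the fit range and the fit ansatz to check that higher-order artefacts are under control.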
So far we have restricted the discussion to the pure gauge theory which contains only one bare parameter, the gauge coupling g _{0} (sometimes expressed in terms of \(\beta =6/g_0^2\)). When quarks are incorporated, the set of parameters must be enlarged by the values of the bare masses, one for each flavour. Lattice QCD is thus parameterized by the set of bare parameters
In order to be predictive, the theory must be renormalized, by expressing the bare parameters in terms of renormalized ones.
A convenient and practical method for lattice QCD is based on so-called hadronic renormalization schemes. Here the bare coupling and quark masses are eliminated in favour of renormalized quantities such as hadron masses or decay constants. An example of how this works in the pure gauge theory was already given in the preceding discussion on scale setting, where the bare coupling was eliminated by assigning a value in physical units to the lattice spacing. In the process one has to choose a quantity that sets the scale and which cannot be predicted anymore.
Replacing the values of the bare quark masses m _{u}, m _{d}, … in favour of hadronic quantities works as follows. Like the bare coupling, the bare quark mass is an input parameter for the simulation and thus freely adjustable. Therefore, simulations yield hadron masses (in lattice units) as a function of the input quark masses. For instance, am _{PS}(m _{1}, m _{2}) denotes the mass in lattice units of a generic pseudoscalar meson composed of a quark and antiquark with bare masses m _{1} and m _{2}, respectively. Let us assume that the lattice spacing a has been calibrated using some input quantity Q. If we further assume exact isospin symmetry we can then determine the value of the bare isospinsymmetrized light quark mass \({\hat {m}}=\textstyle {1\over 2}(m_u+m_d)\), by requiring that
i.e. the value of \({\hat {m}}\) is fixed by adjusting the input mass m _{1} until m _{PS}(m _{1}, m _{2})∕Q coincides with the experimental result. We can extend this procedure to include more massive flavours. The bare strange quark mass is found by tuning m _{2} such that
Alternatively one can fix m _{s} via the condition \( {m_{\mathrm {V}}({\hat {m}},m_2)}/{Q} = \left .{m_{\mathrm {K}}^\ast }/{Q}\right |_{\text{exp}}, \) where m _{V} denotes the mass in the vector channel. An example of a particular hadronic renormalization scheme is shown below:
Parameter  Quantity
g _{0}  f _{π}
\(\textstyle {1\over 2}\)(m _{u} + m _{d})  m _{π}
m _{s}  m _{K}
m _{c}  \(m_{\mathrm {D}_{\mathrm {s}}}\)
m _{b}  \(m_{\mathrm {B}_{\mathrm {s}}}\)
All quantities in a lattice calculation are genuine predictions, except for those listed in the right-hand column of the table, which are used to eliminate the bare parameters.
Given the multitude of hadronic states, it is obvious that there is considerable freedom in choosing hadronic renormalization schemes. Usually, masses or mass splittings of hadrons that are stable in QCD are suitable to define a scheme. Resonances, such as the ρ, should be avoided, since they do not have a sharply defined energy, owing to their large width.
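The tuning described above is, in essence, a one-dimensional root find on a bare parameter. In the toy sketch below the “simulation” is replaced by the lowest-order ChPT formula \(m_{\mathrm {PS}}^2 = 2B_0 m\) with an invented value of B _{0}, and the lattice spacing is assumed to have been calibrated already:

```python
import numpy as np

B0_lat = 2.2          # hypothetical LO coefficient in lattice units (toy model)

def amPS(am_q):
    """stand-in for a simulation: pseudoscalar mass for degenerate quarks am_q"""
    return np.sqrt(2.0 * B0_lat * am_q)

a_inv  = 2.0e3        # MeV, inverse lattice spacing calibrated beforehand
target = 135.0        # MeV, physical pion mass

# bisect on the bare quark mass until m_PS matches the physical pion mass
lo, hi = 1e-6, 1e-1
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if amPS(mid) * a_inv < target:
        lo = mid
    else:
        hi = mid

print(f"tuned bare light-quark mass: a*m_hat = {mid:.3e}")
```

In an actual calculation each function evaluation is a full simulation (or a reanalysis at a different valence mass), so the tuning is correspondingly expensive.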
5.2.5 Limitations and Systematic Effects
The lattice formulation is the basis for an exact nonperturbative treatment of QCD. The accuracy of lattice results is chiefly limited by the algorithmic performance and the available computer power. In particular, the set of bare parameters that can be simulated efficiently for a given number of lattice sites is restricted. This has the important consequence that the quark masses at the very extremes of the physical mass scale (i.e. the up/down quarks and the bquark) cannot be simulated directly with currently available methods and machines. These technical limitations are usually translated into a systematic error, which is quoted alongside the statistical one. The most important systematic effects are due to

lattice artefacts (cutoff effects),

finite volume effects, and

extrapolations in the quark mass.
In order to have sufficient control over these effects, the simulation parameters must be chosen such that the following inequalities are satisfied:
where m _{had} is the mass of a generic hadron in physical units computed in the simulation. The inequality on the left of (5.79) states that the hadron’s correlation length must be much smaller than the linear extent of the spatial box (in lattice units), as otherwise its value will be strongly distorted by finite volume effects. The inequality on the right states that the hadron mass must be significantly smaller than the inverse lattice spacing. If this is not the case, lattice artefacts will be uncontrollably large, meaning that the presence of higherorder cutoff effects cannot be excluded, so that a reliable extrapolation to the continuum limit as a linear function of the leading power of lattice artefacts cannot be performed. With current algorithms and machines, lattice sizes of up to L∕a = 48 and lattice spacings down to 0.05 fm are affordable, even if dynamical quarks are included. Since a = 0.05 fm corresponds to a ^{−1} ≈ 4 GeV, it is obvious that the bquark mass is too large to be simulated directly. Several techniques have been devised to address this problem, and a brief account can be found in Sect. 5.7.2.
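The quoted numbers follow from the conversion ħc ≈ 0.1973 GeV fm. A quick check of the inequalities (5.79) for the parameters mentioned above, taking the kaon as the generic hadron (the rule of thumb m _{had}L ≳ 4 is a common convention, not a statement from the text):

```python
hbar_c = 0.1973            # GeV*fm

a   = 0.05                 # lattice spacing in fm
L_a = 48                   # number of sites in each spatial direction

a_inv = hbar_c / a         # inverse lattice spacing in GeV
L     = L_a * a            # spatial box size in fm

m_had = 0.494              # GeV, e.g. the kaon
mL    = m_had * L / hbar_c # dimensionless combination m_had * L

# inequalities (5.79): m_had*L >> 1 (rule of thumb: > 4) and a*m_had << 1
print(f"a^-1 = {a_inv:.2f} GeV, 1/(a*m_had) = {a_inv/m_had:.1f}, m_had*L = {mL:.1f}")
```

With these parameters one indeed finds a ^{−1} ≈ 4 GeV, while a bottom quark with m _{b} ≈ 4.2 GeV would clearly violate the right-hand inequality.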
In the light quark sector, the primary limitation that forbids making direct contact with the physical values of the up and down quark masses is mostly due to algorithmic performance, rather than finite size effects. A detailed discussion of the algorithmic difficulties associated with the simulation of light dynamical quarks is presented separately in the following section. Moreover, it is difficult even in the quenched approximation to reach quark masses significantly smaller than half the physical strange quark mass, in particular with Wilson fermions. This is related to the occurrence of arbitrarily small eigenvalues in the spectrum of the Wilson–Dirac operator, even for small but non-vanishing values of the bare mass. As a result, observables computed on individual, so-called “exceptional” configurations may differ from the Monte Carlo average by orders of magnitude, and thus a reliable determination of the result and its error is virtually impossible. As already mentioned in Sect. 5.2.2, the problem of exceptional configurations can be cured by employing alternative discretizations such as twisted mass QCD or the overlap operator. A related problem arising from the particular spectral properties of the Wilson–Dirac operator is the bad performance of standard algorithms for dynamical quarks, discussed in the next section.
For these reasons, many simulations (quenched and unquenched) were restricted to quark masses not much smaller than m _{s}∕2. This value translates into a minimum mass of about 490 MeV in the pseudoscalar meson channel, so that in these simulations the pion is as heavy as the physical kaon. In this region of parameter space it is known empirically that a spatial lattice length of 2–3 fm is sufficient to satisfy the first inequality in (5.79) and to rule out significant finite volume effects. Moreover, an important analytic result derived by Lüscher [36] implies that the asymptotic convergence to the infinite-volume result is exponential.
In order to make contact with the chiral regime, lattice results must be extrapolated to the physical values of the up and down quark masses. The functional form for the dependence of observables on the quark mass is usually provided by Chiral Perturbation Theory (ChPT). For instance, at lowest order the relation between the mass of a pseudoscalar meson and the masses m _{1} and m _{2} of its constituent quarks reads
where the ellipses represent higher orders in the chiral expansion. Similar expressions are derived for vector meson and baryon masses, e.g.
and also for other quantities such as pseudoscalar decay constants. In the above formulae, M ^{2} ≡ B _{0}(m _{1} + m _{2}), and \(m_{\mathrm {V}}^0\) and \(m_{\mathrm {N}}^0\) denote the (nonvanishing) masses in the chiral limit. A more formal introduction to the basic concepts of ChPT is presented in Sect. 5.6.1.
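As an illustration of such a chiral extrapolation, the sketch below fits the lowest-order ansatz \(m_{\mathrm {N}} = m_{\mathrm {N}}^0 + c\,M^2\) to invented data points that deliberately sit at the heavy pion masses typical of such simulations — which is precisely why the extrapolated value carries a sizeable systematic uncertainty:

```python
import numpy as np

# hypothetical lattice data at heavy simulated quark masses (all in GeV)
mPS = np.array([0.70, 0.60, 0.50, 0.40])           # pseudoscalar ("pion") masses
mN  = np.array([1.62, 1.50, 1.40, 1.31])           # illustrative nucleon masses

# lowest-order ansatz  m_N = m_N^0 + c * M^2,  with  M^2 ~ m_PS^2
c, mN0 = np.polyfit(mPS**2, mN, 1)

# extrapolate to the physical pion mass
mN_phys = mN0 + c * 0.135**2
print(f"m_N at the physical point: {mN_phys:.2f} GeV")
```

Comparing such a leading-order extrapolation with fits that include the higher-order terms of Eq. (5.81) is one standard way to estimate the associated systematic error.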
It remains largely unknown whether or not the expressions of ChPT considered at a given order in the expansion can be applied in the quark mass range that is accessible in current simulations. Therefore, chiral extrapolations can lead to substantial systematic uncertainties. For instance, lattice predictions for the ratio of decay constants of the B and B _{s} mesons, \(f_{\mathrm {B}_{\mathrm {s}}}/f_{\mathrm {B}}\), may differ by 10%, depending on whether the LO or NLO expressions are used as an ansatz for the extrapolation from quark masses around m _{s}∕2. Currently it is estimated that pseudoscalar meson masses of 300 MeV and below must be reached in simulations, in order that ChPT at one or even twoloop order provides an accurate prediction for the quark mass dependence of hadron masses and matrix elements.
In the quenched approximation, chiral extrapolations are particularly problematic, since the chiral limit is intrinsically pathological, due to the appearance of singularities in the quark mass dependence. This is illustrated by the NLO expression for the ratio \(m_{\mathrm {PS}}^2/(m_1+m_2)\), i.e.
where B _{0}, α _{5}, α _{8}, δ and α _{Φ} are low-energy constants. For notational convenience we have introduced
where F _{0} denotes the pion decay constant in the chiral limit. The low-energy constants δ and α _{Φ}, which multiply the so-called “quenched chiral logarithms”, have no counterpart in the unquenched case. Since δ has a non-zero value [37], the quenched chiral logarithm in Eq. (5.82) gives rise to a singularity in the chiral limit. For many applications the singularity can be ignored, since its effect is numerically small even at the physical pion mass. However, it signals that the quenched approximation suffers from fundamental conceptual problems.
5.2.6 Simulations with Dynamical Quarks
Although one may argue that the quenched approximation describes hadronic properties fairly well, it is clearly unsatisfactory, both from a conceptual point of view and because it introduces an unknown systematic error. Below we discuss some general issues relating to simulations with dynamical quarks. It must be stressed that several different techniques for treating the quark determinant of Eq. (5.57) efficiently are currently being explored. A preferred or clearly superior method has not yet emerged, and it is likely that some of the approaches presented below will become obsolete in the years to come.
In order to illustrate the main difficulties, we start by introducing the Hybrid Monte Carlo (HMC) algorithm [31], which has been the standard algorithm for simulations with dynamical quarks for many years. In order to produce one step in the Markov chain, the algorithm evolves the link variables according to the equations of motion of a classical Hamiltonian system. To this end one introduces a conjugate momentum variable Π_{μ}(x) for every link U _{μ}(x). The Hamiltonian is defined as
where S _{G}[U] is the lattice gauge action, and \(S_{\mathrm {F}}^{\mathrm {eff} }[U,\phi ^*,\phi ]\) denotes an effective lattice fermion action, which is obtained by rewriting the quark determinant as a functional integral over complex bosonic fields ϕ(x) and ϕ ^{∗}(x). Explicitly, for N _{f} = 2 one has
For each step in the Markov chain, the conjugate momenta are drawn randomly from a Gaussian distribution (“momentum refreshment”). The Hamiltonian H[U, Π] governs the dynamics of the variables U _{μ}(x) and Π_{μ}(x) with respect to “simulation time” τ, which parameterizes the evolution of U _{μ}(x) and Π_{μ}(x) as the simulation algorithm progresses. The evolution is described by Hamilton’s equations, which read
where
are the forces associated with the gluon and quark fields, respectively. The algorithm then proceeds by integrating the equations of motion numerically. As in any numerical integration scheme, the total time interval is divided into a number of subintervals of finite length Δτ, called the step size. Starting from an initial gauge configuration {U _{μ}(x)} and a set of conjugate momenta { Π_{μ}(x)}, one obtains new sets \(\{U_\mu ^\prime (x)\}\), \(\{\Pi _\mu ^\prime (x)\}\) after the integration. In the language of classical mechanics, the variables U _{μ}(x) and Π_{μ}(x) evolve along a trajectory in phase space which connects the initial and final configurations. However, since the numerical integration is not exact, owing to the finite step size, the energy is not conserved. In the HMC algorithm this is rectified by introducing a global accept/reject step: if ΔH denotes the energy difference between the final and initial configurations, i.e.
$$\Delta H = H[U^\prime,\Pi^\prime] - H[U,\Pi],$$
then the new configuration \(\{U_\mu ^\prime (x)\}\) is accepted with probability^{Footnote 7}
$$P_{\mathrm{acc}} = \min\left(1,\,\mathrm{e}^{-\Delta H}\right).$$
In other words, a configuration \(\{U_\mu ^\prime (x)\}\) associated with a large value for the energy violation ΔH is less likely to be accepted. This final step completes the Monte Carlo update. The name “Hybrid Monte Carlo” reflects the fact that one combines a deterministic classical dynamics procedure with a pseudorandom accept/reject step.
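The update cycle described above — momentum refreshment, leapfrog integration of Hamilton's equations, and the global accept/reject step — can be sketched for a single real degree of freedom with Gaussian action S(φ) = φ²/2, as a stand-in for the full gauge-field dynamics. All parameters (step size, trajectory length, sample counts) are invented for illustration:

```python
import numpy as np

# Minimal HMC sketch for one real variable with action S(phi) = phi^2 / 2.
rng = np.random.default_rng(1)

def grad_S(phi):                                # the "force" term dS/dphi
    return phi

def leapfrog(phi, pi, dtau, nsteps):
    pi = pi - 0.5 * dtau * grad_S(phi)          # initial half step in momentum
    for _ in range(nsteps - 1):
        phi = phi + dtau * pi                   # full step in the "field"
        pi  = pi  - dtau * grad_S(phi)          # full step in the momentum
    phi = phi + dtau * pi
    pi  = pi - 0.5 * dtau * grad_S(phi)         # final half step in momentum
    return phi, pi

def hmc_update(phi, dtau=0.1, nsteps=10):
    pi = rng.normal()                           # momentum refreshment
    H_old = 0.5 * pi**2 + 0.5 * phi**2
    phi_new, pi_new = leapfrog(phi, pi, dtau, nsteps)
    dH = (0.5 * pi_new**2 + 0.5 * phi_new**2) - H_old
    if rng.random() < min(1.0, np.exp(-dH)):    # global accept/reject step
        return phi_new, True
    return phi, False                           # reject: keep old configuration

phi, acc, samples = 0.0, 0, []
for _ in range(20000):
    phi, accepted = hmc_update(phi)
    acc += accepted
    samples.append(phi)
print(f"acceptance = {acc/20000:.2f}, <phi^2> = {np.mean(np.array(samples[1000:])**2):.2f}")
```

Because the leapfrog integrator is reversible and area-preserving, detailed balance holds exactly despite the finite step size; the measured ⟨φ²⟩ approaches the exact Gaussian value 1.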
One major problem which has plagued simulations with dynamical quarks over many years is the fact that the efficiency of the conventional HMC algorithm deteriorates sharply when the lattice spacing is decreased and the masses of the light (up and down) quarks are tuned to their physical values. The poor scaling behaviour is driven by the condition number of the lattice Dirac operator D _{lat}, i.e. the ratio of its largest to its smallest eigenvalue. This quantity is known to grow in inverse proportion to the lattice spacing and the quark mass. In particular, the cost of the HMC algorithm scales with the second, perhaps even the third, power of the light quark mass. Thus, simulations based on the Wilson-Dirac operator were found to be impractical for lattice spacings below 0.1 fm and quark masses significantly smaller than half the strange quark mass.^{Footnote 8} This is related to the aforementioned fact that even the massive Wilson-Dirac operator is not protected against arbitrarily small eigenvalues. Its condition number may thus fluctuate strongly in the course of the simulation, leading not only to numerical instabilities, but also to large fluctuations in the quark force term F _{F,μ}(x) and, in turn, in ΔH. In order to keep a reasonably large acceptance rate of well over 75%, one must reduce the step size Δτ accordingly, and thus the numerical effort to integrate the equations of motion for an interval τ of fixed length increases.
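The growth of the condition number as the mass is lowered can be seen in a toy linear-algebra example. The random positive semi-definite matrix below merely stands in for a Dirac-like operator whose massless part has eigenvalues reaching down towards zero; it is not a lattice Dirac operator:

```python
import numpy as np

# Toy illustration of the condition-number problem: for D + m*I, where D
# has eigenvalues reaching down to ~0 (mimicking an operator that is not
# protected against small eigenvalues), the ratio of largest to smallest
# eigenvalue grows roughly like 1/m as the "quark mass" m is lowered.
rng = np.random.default_rng(0)
A = rng.normal(size=(200, 200))
D = A @ A.T / 200                  # positive semi-definite toy operator

conds = []
for m in (0.1, 0.01, 0.001):
    ev = np.linalg.eigvalsh(D + m * np.eye(200))
    conds.append(ev[-1] / ev[0])   # largest over smallest eigenvalue
    print(f"m = {m:7.3f}   condition number = {conds[-1]:10.1f}")
```

The number of iterations needed by Krylov-space solvers for the quark force grows with (the square root of) this condition number, which is the origin of the cost scaling quoted above.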
Two basic strategies have been followed to address this problem: the first is based on fermionic discretizations that avoid the problem of arbitrarily small eigenvalues, while the second aims to improve the simulation algorithms.
Staggered fermions have been advocated as a numerically more efficient alternative to the Wilson-Dirac formulation: since the staggered Dirac operator couples one-component Grassmann fields rather than four-component spinors, fewer floating-point operations are required for one application of the operator. Moreover, the residual U(1) ⊗U(1) symmetry protects the quark mass against additive renormalization and thus prevents the occurrence of very small eigenvalues. However, the fact that the staggered formulation describes four “tastes” per quark flavour complicates the physical interpretation. Technically, the degeneracy implies that the statistical weight of the quark determinant is too large compared with that of one physical flavour. An ad hoc method to compensate for this is to take fractional powers of the staggered quark determinant. For instance, to simulate QCD with a doublet of degenerate up and down quarks with mass \(\hat {m}\), and a single heavier (strange) quark with mass m _{s}, the probability measure is taken as
where D _{stagg} is the massless staggered Dirac operator. This procedure is known as the “fourth root trick”. The main, hotly debated question is whether the rooted staggered operator corresponds to a local field theory, or whether it induces spurious interactions among the fermionic degrees of freedom, which might lead to a violation of the universality of the continuum limit. A thorough analysis of this problem was given in [39], but so far no firm conclusion has been reached. Nevertheless, the probability measure of Eq. (5.90), and the “rooting trick” it is based on, have been employed in large-scale simulations (see, e.g. Ref. [40]).
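The arithmetic behind the rooting prescription can be illustrated with plain linear algebra. For an exactly taste-degenerate operator the fourth root of the determinant reproduces the single-species weight; the matrices below are invented, and the caveat is precisely that the real staggered operator is not exactly taste-degenerate at non-zero lattice spacing:

```python
import numpy as np

# Linear-algebra illustration of the "fourth root trick": for an exactly
# taste-degenerate operator kron(I_4, M) the determinant is det(M)^4, so
# its fourth root reproduces the weight of a single species.  (For the
# actual staggered operator the four tastes are only degenerate in the
# continuum limit, which is the crux of the locality debate.)
rng = np.random.default_rng(2)
B = rng.normal(size=(5, 5))
M = B @ B.T + 5.0 * np.eye(5)          # positive-definite "one-taste" operator

M4 = np.kron(np.eye(4), M)             # four exactly degenerate tastes
det_single = np.linalg.det(M)
det_rooted = np.linalg.det(M4) ** 0.25
print(det_single, det_rooted)          # identical up to rounding
```

When taste symmetry is broken by a small perturbation of each block, det(M₄)^{1/4} no longer equals the determinant of any local one-species operator — a toy version of the universality concern raised in [39].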
Discretizations based on twisted mass QCD have also been proposed as a numerically more efficient quark action. Here, the twisted mass parameter μ _{q} protects the operator against arbitrarily small eigenvalues. The smallest mass in the pion channel that has been reached with this formulation was as low as 300 MeV [41]. This corresponds to a physical quark mass of about m _{s}∕5, which may be sufficient to enter the regime where the quark mass behaviour of observables can be described analytically using Chiral Perturbation Theory.
Owing to several major algorithmic improvements, simulations based on the Wilson-Dirac operator can now be performed much more efficiently. Without going into much detail, we simply state that most of the gain is due to the use of suitably chosen factorizations of the Wilson-Dirac operator into its low- and high-frequency parts. The various factors are then “better conditioned”. In particular, fluctuations in the condition number can be controlled via a separate and optimized treatment of the low-energy part. In this way the step size Δτ can be increased whilst keeping a reasonably high acceptance rate for fixed total trajectory length τ. Algorithmic implementations of factorization range from Hasenbusch’s “mass preconditioning” [42, 43] and Lüscher’s domain decomposition technique based on the Schwarz Alternating Procedure (DD-HMC algorithm) [44], to factorizations based on mass preconditioning combined with rational approximations of the contributions from multiple pseudofermion fields [45]. Thanks to these developments, it appears that the spectral properties of the Wilson-Dirac operator are no longer an obstacle to the efficient simulation of lattice QCD with light dynamical quarks. At the same time, large-scale simulations employing the recent algorithmic improvements are only just starting.
5.3 Hadron Spectroscopy
The determination of the spectrum of hadrons, i.e. mesons, baryons, glueballs, and possibly “exotic” hadronic states, starting from the underlying gauge theory of quarks and gluons has traditionally been one of the main applications of lattice QCD. The rôle of lattice calculations in this context is twofold: first, the determination of the experimentally known values of hadron masses from first principles represents a stringent test of QCD. Second, lattice calculations can make predictions for the masses of undiscovered or poorly established states. For instance, lattice results have been instrumental in the search for glueball candidates, and have also contributed significantly to the debate on the existence of pentaquarks.
The principles of hadronic mass calculations have already been outlined at the end of Sect. 5.2.3: after defining a suitable interpolating operator with the quantum numbers of the desired hadronic channel, one computes its Euclidean two-point function. The mass (energy) of the ground state in that channel is then extracted from the exponential fall-off of the correlation function at large Euclidean times. The detailed functional form of the asymptotic behaviour depends on the choice of boundary conditions. Thus, it is not always described by a \(\cosh \) function, as in the example of a pseudoscalar meson on a lattice with periodic boundary conditions in time, cf. Eq. (5.64). In the limit of infinite temporal lattice size T, the effect of the boundary conditions is sufficiently weak that one may approximate the functional form of the correlation function for a generic interpolating operator ϕ _{had}(x) by
Here, the quantity \(w_\alpha (\vec {p})\) is referred to as the spectral weight of the state |α〉. A large value of the spectral weight of the ground state, \(w_1(\vec {p})\), leads to an early domination of the correlation function by the ground-state contribution. The choice of ϕ _{had} in a given channel can be optimized such that
An optimal choice of interpolating operator is not only important for a reliable determination of the ground-state energy: in order to determine the energies of excited states, the associated spectral weights must be maximized by specifying appropriate operators.
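The extraction of the ground-state energy from the exponential fall-off can be sketched with a synthetic correlator. The weights and energies below are invented (lattice units); the "effective mass" m_eff(t) = ln[C(t)/C(t+1)] plateaus at the ground-state energy once the excited-state contribution has died out:

```python
import numpy as np

# Synthetic two-state Euclidean correlator and its effective mass.
w1, E1 = 1.0, 0.5        # ground-state weight and energy (invented)
w2, E2 = 0.8, 1.2        # one excited state (invented)
t = np.arange(0, 20)
C = w1 * np.exp(-E1 * t) + w2 * np.exp(-E2 * t)

# m_eff(t) = ln[C(t)/C(t+1)] approaches E1 from above at large t
m_eff = np.log(C[:-1] / C[1:])
print(np.round(m_eff[[1, 5, 15]], 4))
```

A larger ground-state weight w₁ (the goal of optimizing ϕ_had) makes the plateau set in earlier, which matters in practice because the statistical noise grows with t.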
Below we provide examples for interpolating operators in several mesonic and baryonic channels:
Here, parentheses indicate summation over spinor and colour indices, while curly brackets denote that only spinor indices are summed over.
5.3.1 Light Hadron Spectrum
The determination of the spectrum of light hadrons was historically one of the first attempts to compute hadronic properties on the lattice. Since the masses of the low-lying hadrons are known from experiment, such calculations serve as benchmarks to test the intrinsic accuracy of the lattice approach.
The quenched approximation has been widely used to compute a number of quantities that are of great phenomenological interest. However, these results are of limited value, since the inherent quenching error is left undetermined. A precise calculation of the masses of the lowest-lying hadrons in quenched QCD exposes the typical magnitude of the systematic error incurred by neglecting dynamical quark effects. To this end, several calculations of the quenched light hadron spectrum, using different lattice actions, have been performed [46,47,48,49,50,51].
In Ref. [47], the CP-PACS Collaboration presented a comprehensive study of the masses of the lowest pseudoscalar and vector mesons, as well as octet and decuplet baryons. The Wilson fermion action without O(a) improvement was used at four different values of the lattice spacing, and a continuum extrapolation linear in a was performed for all quantities. CP-PACS adopted a hadronic renormalization scheme in which the lattice scale was fixed using the mass of the ρ-meson. The average up and down quark mass was set using m _{π}. In order to fix m _{s}, either the kaon mass (“K”-input) or the mass of the ϕ-meson (“ϕ”-input) was used. Chiral extrapolations were either based on the form expected from quenched Chiral Perturbation Theory at NLO (see Eq. (5.82)), or on the leading-order formula supplemented by a quadratic term in the quark mass. The resulting (small) differences in the extrapolated values were added as systematic errors to the final results, which are displayed in Fig. 5.5. Although the lattice results are in remarkable overall agreement with the experimentally observed spectrum, one finds significant deviations. For instance, the ratio of the nucleon and ρ-meson masses is determined as
where the first error is statistical, and the second is an estimate of systematic uncertainties other than quenching. The above value is 6.7% (2.5 standard deviations) below the experimental value of 1.218. Similarly, vector–pseudoscalar mass splittings, such as \(m_{\mathrm {K}^*}-m_{\mathrm {K}}\), are underestimated by 10–15% (4–6σ), depending on whether m _{K} or m _{ϕ} is used to fix the strange quark mass.
The findings reported by CP-PACS, which were based on unimproved Wilson fermions, have been broadly confirmed by other collaborations employing different lattice actions [48,49,50,51]. Thereby, the universality of the continuum limit of quenched QCD has been established: although different discretizations may yield statistically inconsistent results at non-zero lattice spacing, they converge to a common continuum limit, provided that the same hadronic renormalization scheme has been employed. The latter requirement is important, as there is considerable freedom in choosing a particular scheme. This leads to ambiguities in the quenched approximation, since different quantities are affected in different ways by quark loops. In Ref. [51] it was found that, by using only stable or narrow states to define the hadronic renormalization scheme, the discrepancies between the quenched and experimental spectra could be shifted to the broad resonances ρ, Δ, N ^{∗}, while the agreement for states like K, ϕ, N, Ω could be improved. Yet this observation does not alter the conclusion that the quenched approximation is unable to reproduce the spectrum of light hadrons with an accuracy better than 10%.
The obvious question is whether sea quark effects can account for the observed deviation between the quenched and experimental spectra. Owing to the larger numerical effort required to simulate QCD with dynamical quarks, unquenched studies have not yet reached the same level of control over systematic effects—notably lattice artefacts and chiral extrapolations—as the quenched benchmark [47]. A “definitive” unquenched calculation of the light hadron spectrum is therefore still lacking, and we refrain from presenting an overview of recent results.
Nevertheless, the observed tendency in all simulations performed to date is that dynamical quarks “do the right thing”, i.e. the deviation from experiment is decreased. An example is shown in Fig. 5.6, where continuum extrapolations of meson masses in the quenched and unquenched theories are compared. The plot shows that the data obtained for N _{f} = 2 are closer to the experimental results in the continuum limit in comparison with their quenched counterparts. However, the figure also shows that the extrapolation of unquenched data is not well constrained, since only three data points are available. Clearly, additional simulations at smaller lattice spacings and quark masses are required for a solid determination of the total error in unquenched calculations of the light hadron spectrum.
It should also be noted that the various discretizations of the quark action have complementary advantages and shortcomings. While simulations with Wilson quarks have in the past been restricted to quark masses not much smaller than half the strange quark mass for algorithmic reasons, the use of staggered fermions in conjunction with the rooting procedure may be afflicted with conceptual problems (see the discussion in Sect. 5.2.6). Domain wall and overlap fermions are per se more expensive to simulate. In simulations based on tmQCD the incorporation of a third, heavier quark flavour is quite complicated. Thus, progress in this area is likely to be made through the combined information from different discretizations.
5.3.2 Glueballs
In addition to bound states composed of a quark-antiquark pair or, alternatively, three quarks, QCD is also widely believed to support the existence of glueballs, i.e. bound states consisting mainly of gluonic degrees of freedom. Although several candidates for such states have been proposed (e.g. the f _{0}(1370), f _{0}(1500) and f _{0}(1710)), the experimental difficulty consists in their unambiguous identification as glueballs. To this end, they must be distinguished from “conventional” flavour-singlet meson resonances in the scalar channel. Predictions for the masses and widths of glueballs from lattice QCD provide crucial input for this task.
The basic principles of mass calculations for glueballs in lattice QCD are the same as for bound states composed of quark degrees of freedom: first one must define an interpolating operator with the appropriate quantum numbers of the glueball state in question. That is, the operator must transform correctly under spin, parity and charge conjugation. At this point a complication arises: the lattice breaks all continuous space-time symmetries, such that Lorentz invariance or—in the language of Euclidean field theory—rotational invariance is only recovered in the continuum limit. At non-zero lattice spacing the spin assignment is therefore ambiguous. Since the gluon field is represented by link variables, any glueball operator must be constructed from particular combinations of Wilson loops, i.e. products of link variables along closed paths on a hypercubic lattice (see Fig. 5.7).
Operators constructed in this way transform under irreducible representations (IRs) of the octahedral group O _{h}, which are conventionally labelled A _{1}, A _{2}, E, T _{1} and T _{2}. By computing the relations between the IRs of O _{h} and SU(2) one finds that each IR in the set {A _{1}, A _{2}, E, T _{1}, T _{2}} corresponds to infinitely many spins in the continuum. For instance, A _{1} transforms not only like a scalar (spin 0) state, but also contributes to spin 4 and yet higher spin states. Similarly, the lowest states to which T _{1} makes a contribution are spin 1 and spin 3, while E corresponds to spins 2, 4, 5,…. In order to fully classify lattice glueball operators, the representations of O _{h} are supplemented by the transformation properties under parity and charge conjugation, in full analogy with the usual J ^{PC}assignment in the continuum. For example, an operator labelled \(A_1^{++}\) corresponds to the scalar channel 0^{++} in the continuum.
The above discussion implies that the two-point correlation function of an operator transforming under \(A_1^{++}\), which is used to describe the scalar glueball, will be contaminated by contributions from a spin-4 state. However, in accordance with Regge theory one may expect that the latter die out quickly, since higher-spin states are more massive.
Another technical complication arises from the empirical observation that the spectral weight, \(w_1(\vec {p})\), of the ground state in Eq. (5.91) is usually quite small. This implies that the asymptotic behaviour of the two-point correlation function can only be isolated at large Euclidean times. However, the statistical accuracy deteriorates quickly as x _{0} is increased, and in the asymptotic regime the correlation function is numerically comparable to the statistical noise. This precludes a precise determination of the mass of the ground state. A heuristic explanation for the small spectral weight can be given by noting that operators constructed from the usual link variables are point-like and thus have little projection onto an extended object such as a glueball. The situation can be much improved if the links in the Wilson loops of Fig. 5.7 are replaced by so-called “smeared” or “fuzzed” links [54, 55]. For instance, the approach of [54] replaces the spatial link U _{j}(x) by the combination
where α is a real, tunable parameter, and the symbol \({\mathcal {P}}\) denotes the projection back into the group manifold of SU(3). The procedure can be iterated, so that links at smearing level s, i.e. \(U_j^s(x)\), are constructed from those at level s − 1 via Eq. (5.95). One may say that smearing reduces the UV fluctuations of the gauge field, so that the smeared, extended link variables are better suited to project onto the IR regime, i.e. the long-distance properties. It should be stressed that links in the temporal direction do not undergo the fuzzing procedure: fuzzed temporal links would alter the transfer matrix and the spectral information it contains.
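The smoothing effect of iterated smearing can be illustrated in a drastically simplified setting: a one-dimensional chain of U(1) "links" (complex phases), where the projection back onto the group is just normalization to the unit circle. This is only a schematic analogue — real APE smearing sums perpendicular staples on a four-dimensional SU(3) lattice — and all parameters are invented:

```python
import numpy as np

# Schematic 1d U(1) analogue of link smearing: each "link" (a phase) is
# replaced by a projected average of itself and its neighbours,
#   U' = P[ U_j + alpha * (U_{j-1} + U_{j+1}) ],
# where P normalizes back to the unit circle, mimicking the projection
# back into SU(3).  The roughness measure tracks UV fluctuations.
rng = np.random.default_rng(3)
U = np.exp(1j * rng.normal(scale=1.0, size=256))

def smear(U, alpha=0.5):
    V = U + alpha * (np.roll(U, 1) + np.roll(U, -1))
    return V / np.abs(V)                       # projection step

def roughness(U):                              # mean nearest-neighbour mismatch
    return np.mean(np.abs(U - np.roll(U, 1))**2)

levels = []
for s in range(4):
    levels.append(roughness(U))
    print(f"smearing level {s}: roughness = {levels[-1]:.3f}")
    U = smear(U)
```

Each smearing level damps the short-distance fluctuations while leaving long-wavelength structure intact, which is why smeared operators project better onto extended objects such as glueballs.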
In order to obtain detailed information on the glueball spectrum one also seeks to determine the masses of the excited states in a given channel. This requires another level of refinement, since one normally hopes that excited state contributions die out quickly, while they now become the very focus of interest. A widely used method to gain information on the higher excitations is to construct a whole set of interpolating operators {O _{1}, …, O _{r}} in a given channel, say, \(A_1^{++}\). This is achieved either by considering different shapes of Wilson loops that share the same transformation properties, or by applying several different smearing levels to one particular Wilson loop. Thus, each individual member of the set {O _{1}, …, O _{r}} is a perfectly valid operator in a given channel, but the projection properties, i.e. the associated spectral weights \(w_{\alpha }^{(i)}\) for a particular state α in the spectral sum will in general be different for each member i = 1, …, r. One then computes the matrix
whose elements consist of the correlations of all combinations of operators in the set. The diagonalization of the matrix correlator then yields the appropriate linear combination of operators which correspond to the states α = 1, 2, … in the spectral decomposition. Diagonalization is achieved by solving the generalized eigenvalue problem
where ϕ denotes a vector, \(x_0^\prime \) is fixed, and C(x _{0}), \(C(x_0^\prime )\) denote the matrix correlators taken at Euclidean times x _{0} and \(x_0^\prime \), respectively. As shown in [56], the set of eigenvalues \(\lambda (x_0,x_0^\prime )\) converges rapidly towards
where 𝜖 _{α} is the mass (energy) of the state α in the spectral sum.
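The variational procedure can be made concrete with a synthetic 2 × 2 matrix correlator. The energies and overlap factors below are invented; solving the generalized eigenvalue problem C(t)v = λC(t₀)v then yields eigenvalues behaving as exp(−E_α(t − t₀)), from which both energies can be read off:

```python
import numpy as np

# GEVP sketch on a synthetic matrix correlator
#   C_ij(t) = sum_a V[i,a] V[j,a] exp(-E_a t)
# with invented energies E and overlaps V (lattice units).
E = np.array([0.5, 0.9])                 # two "states" in the channel
V = np.array([[1.0, 0.6],                # V[i, a]: overlap of operator i
              [0.4, 1.0]])               # with state a (made up)

def C(t):
    return (V * np.exp(-E * t)) @ V.T

t0, t = 2, 5
# generalized eigenvalues of C(t) v = lambda C(t0) v
lam = np.sort(np.linalg.eigvals(np.linalg.solve(C(t0), C(t))).real)[::-1]
energies = -np.log(lam) / (t - t0)
print(np.round(energies, 8))             # prints [0.5 0.9]
```

With as many operators as contributing states the extraction is exact, as here; in a real calculation the neglected higher states and statistical noise make the choice of t₀ and of the operator basis a matter of careful tuning.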
After all these technicalities, we now report on the status of glueball calculations. Recent results obtained in the quenched approximation were published in [53, 57,58,59,60]. In Fig. 5.8 we show the results from Ref. [57]. The three lowestlying states are the scalar (0^{++}), tensor (2^{++}) and the 0^{−+} glueballs, whose masses are determined as
Here, the first error is statistical, while the second is an estimate of systematic uncertainties, which is dominated by the ambiguity in the scale setting in the quenched approximation.
While it is tempting, in the light of the above results, to identify the experimentally established resonance f _{0}(1710) as a scalar glueball, the situation is more complicated. Since lattice predictions for the mass of the lightest glueball fall into the mass range of conventional scalar mesons, mixing of glueballs with conventional \(q\bar {q}\) states, in conjunction with the observed decay patterns, must be considered before drawing any definite conclusions. More details on the current phenomenological and experimental situation can be found in [61, 62]. So far, there have been only exploratory attempts to study glueball-meson mixing directly on the lattice. Any meaningful investigation must inevitably include dynamical quark effects, whose influence on the glueball spectrum has so far been poorly understood.
5.4 Confinement and String Breaking
The empirical fact that quarks and gluons are not observed as free particles is commonly referred to as confinement. Since all experimentally observed states are singlets under SU(3)_{colour}, confinement is tantamount to saying that isolated colour charges are not allowed. A theoretical understanding of this phenomenon must inevitably go beyond the perturbative level, since QCD is a strongly coupled theory.
In Ref. [6], Wilson formulated a criterion for the confinement of colour charges known as the “area law”. Let \(U({\mathcal {C}})\) denote the product of link variables around a closed loop \({\mathcal {C}}\) on a hypercubic lattice. The trace over colour indices is called the “Wilson loop”, i.e.
The area law then states that colour charges are confined if the expectation value of \(W({\mathcal {C}})\) decays exponentially with a rate proportional to the area \(A({\mathcal {C}})\) enclosed by the curve \({\mathcal {C}}\), i.e.
where σ is a constant. An example for a rectangular Wilson loop is shown in Fig. 5.9.
The interpretation of the area law rests on the observation that a Wilson loop of area r⋅t is equal to the Euclidean correlator which describes the propagation of a static, i.e. infinitely heavy, quark-antiquark pair separated by a distance r over a Euclidean time interval t. If t is taken to infinity at fixed r, the correlator yields the energy of the quark-antiquark pair:
The area law then implies \({\sigma }A({\mathcal {C}})=V(r)t\), and for a rectangular loop one obtains
Hence the energy of a static quark-antiquark pair increases linearly with the distance r. To achieve a full separation of static colour sources would therefore require an infinite amount of energy.
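The logic of extracting V(r) from the t-dependence of Wilson loops can be sketched with synthetic data obeying the area law exactly. The string tension and offset below are invented (lattice units):

```python
import numpy as np

# Synthetic Wilson-loop "data" obeying the area law,
#   <W(r,t)> = exp(-(sigma*r + V0) * t),
# from which the static potential follows at fixed r via the ratio
#   V(r) = ln[ W(r,t) / W(r,t+1) ].
sigma, V0 = 0.05, 0.6                       # invented, lattice units
r = np.arange(1, 7)
t = np.arange(1, 9)
W = np.exp(-np.outer(sigma * r + V0, t))    # W[i, j] = <W(r_i, t_j)>

Veff = np.log(W[:, 4] / W[:, 5])            # effective potential at t = 5
slope, intercept = np.polyfit(r, Veff, 1)
print(f"string tension = {slope:.4f}, V0 = {intercept:.4f}")
```

In a real simulation the ratio must be taken at t large enough for excited flux-tube states to die out, and the noise problem discussed above limits how far in r and t the data can be followed.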
It has long been believed that SU(3) gauge theory is related to some kind of string theory. Heuristically, confinement may be viewed as arising from the formation of a narrow tube of chromoelectric and chromomagnetic flux between static colour charges, the dynamics of which can be described by a string theory. The bosonic string model yields an asymptotic expansion for the static quark potential,
$$V(r) = V_0 + \sigma r + \frac{c}{r} + \ldots,$$
where V _{0} = const, and the universal coefficient c has been computed as [63]
$$c = -\frac{\pi}{12}$$
in the four-dimensional theory. The proportionality factor σ is called the “string tension”. Instead of the potential one often considers the force, F(r) ≡ dV (r)∕dr. The ansatz of Eq. (5.104) yields
$$F(r) = \sigma - \frac{c}{r^2} + \ldots,$$
so that the string tension is obtained as the limiting value of the force as r →∞,
$$\sigma = \lim_{r\to\infty} F(r).$$
String models of hadrons have been known since the late 1960s, and a phenomenological value for σ has been determined from Regge theory, \(\sqrt {\sigma }=440\,{\text{MeV}}\).
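The string ansatz is linear in its parameters (V₀, σ, c), so fitting it to potential data is a plain least-squares problem. The sketch below fits synthetic data generated with the universal value c = −π/12; all other numbers are invented (lattice units):

```python
import numpy as np

# Fit the bosonic-string ansatz V(r) = V0 + sigma*r + c/r to synthetic
# potential values.  The string tension is the limiting value of the
# force F(r) = sigma - c/r^2 as r -> infinity, i.e. the fitted sigma.
sigma_true, V0_true, c_true = 0.045, 0.55, -np.pi / 12   # invented + string value
r = np.arange(2.0, 10.0)
Vdata = V0_true + sigma_true * r + c_true / r

# design matrix: the ansatz is linear in the parameters (V0, sigma, c)
A = np.column_stack([np.ones_like(r), r, 1.0 / r])
V0_fit, sigma_fit, c_fit = np.linalg.lstsq(A, Vdata, rcond=None)[0]
print(f"sigma = {sigma_fit:.4f}, c = {c_fit:.4f}  (-pi/12 = {-np.pi/12:.4f})")
```

With exact data the fit is exact; with real lattice data the strong correlation between σ and c over a narrow r-window is precisely what makes the extrapolation r → ∞ delicate, as discussed below.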
In QCD with light sea quarks the linear rise of the potential cannot persist to arbitrarily large distances. Instead, the creation of a light quark-antiquark pair from the vacuum will cause the hadronization of the static colour charges, leading to the formation of two static-light mesonic states. Thus, the string or flux-tube is expected to “break” when the two-meson state is energetically favoured over the linearly rising potential. The breaking of the string should set in at a characteristic separation distance, r _{b}, causing the potential to flatten off for r ≳ r _{b}, since the energy of a state of two mesons is independent of their separation.
Lattice simulations have been instrumental in establishing that the area law, the string picture of confinement, as well as string breaking (i.e. hadronization) are indeed properties of SU(3) gauge theory and/or QCD. However, computations of large Wilson loops in lattice simulations suffer from the same problem encountered in glueball mass calculations: due to the strong exponential fall-off, the correlator in the asymptotic region, r, t →∞, is of the same order of magnitude as the statistical noise. Consequently, the same techniques have been applied, namely the smearing of link variables and the variational approach based on the diagonalization of a matrix correlator. By combining these techniques with procedures designed to reduce statistical fluctuations in the computation of large Wilson loops [64], the linear rise of the potential could be verified up to distances of r ≲ 1.5 fm [65, 66] (see Fig. 5.10).
Since a phenomenological value for \(\sqrt {\sigma }\) could be inferred from Regge theory, the string tension used to be a popular quantity for setting the lattice scale. However, as lattice calculations became increasingly precise, it was realized that the extrapolation r →∞ is not easy to perform on the basis of lattice data restricted to r ≲ 1.5 fm. An alternative, conceptually much more reliable scale is obtained from the force between static colour charges [33]. The hadronic radius r _{0} is defined by requiring that the force F(r), evaluated at r = r _{0}, assumes a given reference value, which is fixed by matching F(r) to phenomenological, non-relativistic potential models for heavy quarkonia. The scale r _{0} is defined as the solution of
$$F(r_0)\,r_0^2 = 1.65,$$
where the constant on the right-hand side is chosen such that r _{0} ≈ 0.5 fm in QCD. Choosing r _{0} to set the scale avoids the systematic uncertainty associated with the extrapolation of the force to infinite distance. Furthermore, r _{0} remains well-defined in QCD with dynamical quarks, where string breaking must occur and the concept of a string tension as the limiting value of the force is intrinsically flawed. The quantity r _{0}∕a has been determined numerically with good statistical accuracy over a wide range of bare couplings, corresponding to lattice spacings between 0.026 and 0.17 fm [34, 65].
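Numerically, the defining condition r₀²F(r₀) = 1.65 is a simple root-finding problem. The sketch below solves it by bisection for the model force F(r) = σ − c/r² with an invented σ (lattice units) and the string value c = −π/12; in practice F(r) would be an interpolation of measured lattice forces:

```python
import numpy as np

# Determine r0 from r^2 F(r)|_{r=r0} = 1.65 for the model force
# F(r) = sigma - c/r^2 (sigma invented, c the string value).
sigma, c = 0.045, -np.pi / 12

def g(r):                          # r^2 * F(r) - 1.65, monotonic in r
    return r * r * sigma - c - 1.65

lo, hi = 0.1, 20.0                 # bracket: g(lo) < 0 < g(hi)
for _ in range(60):                # bisection to machine-level precision
    mid = 0.5 * (lo + hi)
    lo, hi = (lo, mid) if g(mid) > 0 else (mid, hi)
r0 = 0.5 * (lo + hi)
print(f"r0 = {r0:.4f}   (closed form: {np.sqrt((1.65 + c) / sigma):.4f})")
```

For this simple model the closed-form solution r₀ = √((1.65 + c)/σ) is available and serves as a check; with interpolated lattice data only the numerical root search remains.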
To test whether the bosonic string model of confinement is consistent with lattice data, one must confront the Coulombic coefficient c in Eq. (5.104) with the predicted value c = −π∕12. As in the case of the string tension, such a comparison is difficult to perform reliably, since − π∕12 represents the asymptotic value at infinite distance, which must be determined from data computed over a narrow range of accessible distances. Using highly accurate data for the potential V (r), generated by an algorithm which allows for an exponential suppression of statistical fluctuations at large r and t, it could be shown [67] that the quantity
$$c(r) \equiv \frac{1}{2}\,r^3\,V^{\prime\prime}(r)$$
indeed converges towards the predicted value of − π∕12. This result confirms the string picture of confinement and suggests that string-like behaviour already sets in at rather small distances.
The incorporation of dynamical quarks should drastically change the string picture beyond a characteristic scale r _{b}, where string breaking occurs due to \(q\bar {q}\) pair creation, since a two-meson state is energetically favoured over the flux-tube. However, the static quark potential determined from Wilson loops on dynamical configurations typically does not show any clear sign of flattening off, even at distances as large as 1 fm, where one expects hadronization to set in. This is attributed to the Wilson loop having little overlap with the state of a broken string, such that the spectral weight associated with the broken string is extremely small. Extracting its energy reliably would therefore require large Euclidean time separations, for which the statistical signal is usually lost.
It was thus proposed to address this problem by constructing a matrix correlator of Wilson loops supplemented by operators that directly project onto a two-meson state, and to consider their cross-correlations with the unbroken flux tube. This strategy was first applied to Higgs models, i.e. non-Abelian gauge theory coupled to bosonic matter fields (“scalar QCD”), which are computationally much more efficient, whilst preserving the mechanism for string breaking to occur [68, 69]. The method was later extended to QCD with two flavours of dynamical quarks [70]. The plots in Fig. 5.11 clearly show that the ground state energy at short distances is linearly rising, while the first excited state (i.e. the two-meson state) is constant in r. At a certain separation r _{b} one observes a crossing of energy levels and a continuing flat behaviour of the ground state energy. Near the crossing point one actually observes a repulsion of the energy levels, which is characteristic for the breaking phenomenon. The diagonalization of the matrix correlator also yields information on the composition of the states in the spectral decomposition. Indeed, for distances r < r _{b} the combination of operators describing the ground state is dominated by Wilson loops, whereas for r > r _{b}, two-meson operators are the most relevant.
5.5 Fundamental Parameters of QCD
We have noted already that QCD is parameterized in terms of the gauge coupling and the masses of the quarks. In order to make predictions for cross sections, decay rates and other observables, their values must be fixed from experiment. As was discussed in detail in Sect. 4.3, the renormalization of QCD leads to the concept of a “running” coupling constant, which depends on some momentum (energy) scale μ, and the same applies to the quark masses^{Footnote 9}:
The property of asymptotic freedom implies that the coupling becomes weaker as the energy scale μ is increased. This explains why the perturbative expansion of cross sections in the highenergy domain allows for an accurate determination of α _{s} from experimental data.
The scale dependence of the coupling and the quark masses is encoded in the renormalization group (RG) equations, which are formulated in terms of the β-function and the anomalous dimension τ,
At high enough energy the RG functions β and τ admit perturbative expansions according to
Here, b _{0}, b _{1} and d _{0} = 8∕(4π)^{2} are universal, while the higher coefficients depend on the adopted renormalization scheme.
From the asymptotic scaling behaviour at high energies one can extract the fundamental scale parameter of QCD via
Like the running coupling itself, the Λ-parameter depends on the chosen renormalization scheme.^{Footnote 10} A related, but less commonly used, variable is the renormalization group invariant (RGI) quark mass
Unlike Λ, the RGI quark masses are scheme-independent quantities. Instead of using the running coupling and quark masses of Eq. (5.110), one can parameterize QCD in an entirely equivalent way through the set
At the nonperturbative level these quantities represent the most appropriate parameterization of QCD, since their values are defined without any truncation of perturbation theory.
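As a rough numerical illustration of how Eq. (5.113) is used in practice, the sketch below evaluates the Λ-parameter with the β-function truncated at two loops, using only the universal coefficients b _{0} and b _{1}. The input value α _{s}(m _{Z}) ≈ 0.118 and the choice N _{f} = 5 are illustrative and not taken from the text:

```python
import math

def lambda_par_2loop(alpha_mu, mu_gev, nf):
    """Lambda-parameter from a coupling alpha(mu), with the beta-function
    truncated at two loops, i.e. the standard closed-form truncation of
    Eq. (5.113): Lambda = mu * (b0*g2)**(-b1/(2*b0**2)) * exp(-1/(2*b0*g2))."""
    b0 = (11.0 - 2.0 * nf / 3.0) / (4.0 * math.pi) ** 2    # universal
    b1 = (102.0 - 38.0 * nf / 3.0) / (4.0 * math.pi) ** 4  # universal
    g2 = 4.0 * math.pi * alpha_mu                          # g^2 = 4*pi*alpha
    return mu_gev * (b0 * g2) ** (-b1 / (2.0 * b0 ** 2)) * math.exp(-1.0 / (2.0 * b0 * g2))

# Illustrative input (not from the text): alpha_s(m_Z) ~ 0.118, nf = 5
lam5 = lambda_par_2loop(0.118, 91.19, 5)   # in GeV, of order 0.2
```

At higher orders the closed form is replaced by the exact integral representation, but the two-loop truncation already shows the exponential sensitivity of Λ to the coupling.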
The perturbative renormalization of QCD is accomplished by replacing the bare parameters with renormalized ones, whose values are fixed by considering the high-energy behaviour of Green’s functions, usually computed in the \({\overline {{\mathrm {MS}}}}\)scheme of dimensional regularization. However, at low energies it is convenient to adopt a hadronic renormalization scheme, in which the bare parameters are eliminated in favour of quantities such as hadron masses and decay constants (see Sect. 5.2.4). Since QCD is expected to describe both the low- and high-energy regimes of the strong interaction, one should be able to express the quantities of Eq. (5.115), which are determined from the high-energy behaviour, in terms of hadronic quantities. In other words, by matching a hadronic renormalization scheme to a perturbative scheme like \({\overline {{\mathrm {MS}}}}\) one achieves the nonperturbative renormalization of QCD at all scales. In particular, one can express the fundamental parameters of QCD (running coupling and masses, or, equivalently, the Λ-parameter and RGI quark masses) in terms of low-energy, hadronic quantities. This amounts to predicting the values of these fundamental parameters from first principles.
5.5.1 Nonperturbative Renormalization
To illustrate the problem of matching hadronic and perturbative schemes like \({\overline {{\mathrm {MS}}}}\), it is instructive to discuss the determination of the light quark masses. A convenient starting point is the PCAC relation, which for a charged kaon can be written as
In order to determine the sum of quark masses \((\bar {m}_u+\bar {m}_s)\), using the experimentally determined values of f _{K} and m _{K}, it suffices to compute the matrix element \(\langle 0|\bar {u}\gamma _5{s}|{K^+}\rangle \) in a lattice simulation, as outlined in Sect. 5.2.3 (see Eq. (5.64)). The dependence on the renormalization scale and scheme cancels in Eq. (5.116), since the quantities on the left-hand side are physical observables. Thus, in order to determine the combination \((\bar {m}_u+\bar {m}_s)\) in the \({\overline {{\mathrm {MS}}}}\)scheme, one must compute the relation between the bare matrix element of the pseudoscalar density evaluated on the lattice and its counterpart in the \({\overline {{\mathrm {MS}}}}\)scheme:
Here, μ is the subtraction point (renormalization scale) in the \({\overline {{\mathrm {MS}}}}\)scheme. Provided that Z _{P} and the matrix element of \((\bar {u}\gamma _5{s})_{\text{lat}}\) are known, one can use Eq. (5.116) to compute \((\bar {m}_u+\bar {m}_s)/f_{\mathrm {K}}\), which is just the ratio of a renormalized fundamental parameter expressed in terms of a hadronic quantity, up to lattice artefacts. In Fig. 5.4 we have already shown the continuum extrapolation of this ratio.^{Footnote 11}
The factor Z _{P} is obtained by imposing a suitable renormalization condition involving Green’s functions of the pseudoscalar densities in the \({\overline {{\mathrm {MS}}}}\) as well as the hadronic scheme. Since the \({\overline {{\mathrm {MS}}}}\)scheme is intrinsically perturbative, in the sense that masses and couplings are only defined at a given order in the perturbative expansion, it is actually impossible to formulate such a condition at the nonperturbative level. In perturbation theory at one loop one finds
where C is a constant that depends on the chosen discretization of the QCD action. Expressions like these are actually not very useful, since perturbation theory formulated in terms of the bare coupling g _{0} converges rather slowly, so that reliable estimates of renormalization factors at one- or even two-loop order in the expansion cannot be obtained. Thus it seems that the problem of nonperturbative renormalization is severely hampered by the intrinsically perturbative nature of the \({\overline {{\mathrm {MS}}}}\) scheme in conjunction with the bad convergence properties of lattice perturbation theory.
This problem can, in fact, be resolved by introducing an intermediate renormalization scheme. Schematically, the matching procedure for the pseudoscalar density (or, equivalently, the quark mass) via such a scheme is sketched in Fig. 5.12. At low energies, corresponding to typical hadronic scales, it involves computing a nonperturbative matching relation between the hadronic and the intermediate scheme X at some scale μ _{0}. This matching step can be performed reliably if μ _{0} is much smaller than the regularization scale a ^{−1}. In the following step one computes the scale dependence within the intermediate scheme nonperturbatively from μ _{0} up to a scale \(\bar \mu \gg \mu _0\), which is large enough so that perturbation theory can be safely applied. At that point one may then determine the matching relation to the \({\overline {{\mathrm {MS}}}}\)scheme perturbatively. Alternatively, one can continue to compute the scale dependence within the intermediate scheme to infinite energy via a numerical integration of the perturbative RG functions. According to Eq. (5.114) this yields the relation to the RGI quark mass. Since the latter is scale- and scheme-independent, one can use directly the perturbative RG functions, which in the \({\overline {{\mathrm {MS}}}}\)scheme are known to four-loop order [71], to compute the relation to \(\bar {m}_{{\overline {{\mathrm {MS}}}}}\) at some chosen reference scale. By applying this procedure, the direct perturbative matching between the hadronic and \({\overline {{\mathrm {MS}}}}\)schemes (upper two boxes in Fig. 5.12), using the expression in Eq. (5.118), is thus completely avoided.
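The final perturbative step, the relation to the RGI mass in Eq. (5.114), can likewise be sketched at leading order, where only the universal coefficients b _{0} and d _{0} enter; the input coupling below is purely illustrative:

```python
import math

def rgi_mass_factor_lo(alpha_mu, nf):
    """Leading-order factor M / mbar(mu) from Eq. (5.114):
    M = mbar(mu) * (2*b0*gbar^2)**(-d0/(2*b0)) * (1 + O(gbar^2)),
    with the universal coefficients b0 and d0 = 8/(4*pi)^2.
    The exponent d0/(2*b0) equals 12/(33 - 2*nf)."""
    b0 = (11.0 - 2.0 * nf / 3.0) / (4.0 * math.pi) ** 2
    d0 = 8.0 / (4.0 * math.pi) ** 2
    g2 = 4.0 * math.pi * alpha_mu
    return (2.0 * b0 * g2) ** (-d0 / (2.0 * b0))

factor = rgi_mass_factor_lo(0.2, nf=3)   # illustrative input coupling
```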
Decay constants of pseudoscalar mesons provide another example for which the renormalization of local operators is a relevant issue. For instance, the kaon decay constant is defined by the matrix element of the axial current, i.e.
If the matrix element on the right hand side is evaluated in a lattice simulation, then the axial current in the discretized theory must be related to its counterpart in the continuum via a renormalization factor Z _{A}:
Normally one would expect that the chiral Ward identities ensure that the axial current does not get renormalized. However, this no longer applies if the discretization conflicts with the symmetries of the classical action. This is clearly the case for Wilson fermions, which break chiral symmetry, such that the resulting short-distance corrections must be absorbed into a renormalization factor Z _{A}. Similar considerations apply to the vector current: if the discretization does not preserve chiral symmetry, current conservation is only guaranteed if the vector current is suitably renormalized by a factor Z _{V}, which must be considered even in the massless theory. Unlike the case of the renormalization factor of the pseudoscalar density, Z _{A} and Z _{V} are scale-independent, i.e. they only depend on the bare coupling g _{0}. From the above discussion it is obvious that perturbative estimates of Z _{A} and Z _{V} are inadequate for computing hadronic matrix elements of the axial and vector currents with controlled errors. A nonperturbative determination of Z _{A} and Z _{V} can be achieved by imposing the chiral Ward identities as a renormalization condition.
Two widely used intermediate schemes, namely the Schrödinger functional (SF) and the regularization-independent momentum subtraction (RI/MOM) schemes, are briefly reviewed in the following. We strongly recommend that the reader consult the original articles (Refs. [72,73,74,75] for the SF, and [76] for RI/MOM) for further details.
5.5.2 Finite Volume Scheme: The Schrödinger Functional
The Schrödinger functional is based on the formulation of QCD in a finite volume of size L ^{3} ⋅ T—regardless of whether spacetime is discretized or not—with suitable boundary conditions. Assuming that lattice regularization is employed, one imposes periodic boundary conditions on the fields in all spatial directions, while Dirichlet boundary conditions are imposed at Euclidean times x _{0} = 0 and x _{0} = T. In order to make this more precise, let C and C ^{′} denote classical configurations of the gauge potential. For the link variables at the temporal boundaries one then imposes
In other words, the links assume prescribed values at the temporal boundaries, but remain unconstrained in the bulk (see Fig. 5.13).
Quark fields are easily incorporated into the formalism. Since the Dirac equation is first order, only two components of a full Dirac spinor can be fixed at the boundaries. By defining the projection operator \(P_\pm ={\textstyle \frac {1}{2}}(1\pm \gamma _0)\), one requires that the quark fields at the boundaries satisfy
where \(\rho ,\ldots ,\bar \rho ^\prime \) denote prescribed values of the fields. The functional integral over all dynamical fields in a finite volume with the above boundary conditions is called the Schrödinger functional of QCD:
The classical field configurations at the boundaries are not integrated over. Using the transfer matrix formalism, one can show that this expression is the quantum mechanical amplitude for going from the classical field configuration \(\{C,\rho ,\bar \rho \}\) at x _{0} = 0 to \(\{C^\prime ,\rho ^\prime ,\bar \rho ^\prime \}\) at x _{0} = T.
Functional derivatives with respect to \(\rho ,\ldots ,\bar \rho ^\prime \) behave like quark fields located at the temporal boundaries, and hence one may identify
The boundary fields \(\zeta , \bar \zeta ,\ldots \) can be combined with local composite operators (such as the axial current or the pseudoscalar density) of fields in the bulk to define correlation functions. Particular examples are the correlation function of the pseudoscalar density, f _{P}, and the boundary-to-boundary correlation f _{1},
which are shown schematically in the middle and right panels of Fig. 5.13. In the above expressions, the Pauli matrices act on the first two flavour components of the fields.
The specific boundary conditions of the Schrödinger functional ensure that the Dirac operator has a minimum eigenvalue proportional to 1∕T in the massless case [73]. As a consequence, renormalization conditions can be imposed at vanishing quark mass. If the aspect ratio T∕L is set to some fixed value, the spatial length L is the only scale in the theory, and thus the masses and couplings in the SF scheme run with the box size. The recursive finite-size scaling study described below can then be used to map out the scale dependence of running quantities nonperturbatively from low to high energies. It is important to realize that in this way the relevant scale for the RG running (the box size L) is decoupled from the regularization scale (the lattice cutoff a). It is this feature which ensures that the running of masses and couplings can be obtained in the continuum limit.
Let us now return to our earlier example of the renormalization of quark masses. The transition from lattice regularization and the associated hadronic scheme to the SF scheme is achieved by computing the scaledependent renormalization factor which links the pseudoscalar density in the intermediate scheme to the bare one, i.e.
A renormalization condition that defines Z _{P} can be formulated in terms of SF correlation functions:
where the constant c must be chosen such that Z _{P} = 1 in the free theory. In order to determine the RG running of the quark mass nonperturbatively one can perform a sequence of finite-size scaling steps, as illustrated in Fig. 5.14. To this end one simulates pairs of lattices with box lengths L and 2L, at fixed lattice spacing a. The ratio of Z _{P} evaluated for each box size yields the ratio \(\bar {m}_{\mathrm {SF}}(L)/\bar {m}_{\mathrm {SF}}(2L)\) (upper horizontal step in Fig. 5.14), which amounts to the change in the quark mass when the volume is scaled by a factor 2. In a subsequent step, the physical volume can be doubled once more, which gives \(\bar {m}_{\mathrm {SF}}(2L)/\bar {m}_{\mathrm {SF}}(4L)\). The important point to realize is that the lattice spacing can be adjusted for a given physical box size. In this way the number of lattice sites can be kept at a manageable level, while the physical volume is gradually scaled over several orders of magnitude, as indicated by the zigzag pattern in Fig. 5.14. Furthermore, each horizontal step can be performed for several lattice resolutions, so that the continuum limit can be taken. By contrast, if one attempted to scale the physical volume for fixed lattice spacing, one would, after only a few iterations, end up with systems so large that they would not fit into any computer’s memory.
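The recursion above can be sketched numerically with one-loop "toy" step-scaling functions standing in for the simulated, nonperturbatively determined ones; the starting coupling and the flavour number are illustrative:

```python
import math

NF = 2                                              # illustrative flavour number
B0 = (11.0 - 2.0 * NF / 3.0) / (4.0 * math.pi) ** 2
D0 = 8.0 / (4.0 * math.pi) ** 2                     # universal coefficients

def halve_box(u):
    """One-loop evolution of u = gbar^2(L) under L -> L/2 (higher energy)."""
    return u / (1.0 + 2.0 * B0 * math.log(2.0) * u)

def mass_ratio(u):
    """One-loop ratio mbar_SF(L/2) / mbar_SF(L), i.e. the change of the
    running mass in one step; in practice this is obtained from Z_P on the
    two box sizes. A toy stand-in for the simulated step-scaling function."""
    return 1.0 - D0 * math.log(2.0) * u

u = 2.0        # illustrative starting value gbar^2(L_max)
running = 1.0  # accumulates mbar_SF(L_max / 2^n) / mbar_SF(L_max)
for _ in range(7):                 # seven halvings, of the order quoted in the text
    running *= mass_ratio(u)
    u = halve_box(u)
```

Each factor in the product corresponds to one horizontal step of Fig. 5.14; in a real calculation every factor is extrapolated to the continuum limit before the product is taken.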
In an entirely analogous fashion one can set up the finite-size scaling procedure for the running coupling constant in the SF scheme, \(\bar {g}_{\mathrm {SF}}(L)\).^{Footnote 12} Setting a value for the coupling actually corresponds to fixing the box size L, since the renormalization scale and the coupling in a particular scheme are in one-to-one correspondence. The sequence of scaling steps begins at the matching scale \(\mu _0=1/L_{\max }\) between the hadronic and SF schemes, and in order to express the scale evolution in physical units, the maximum box size \(L_{\max }\) must be determined in terms of some hadronic quantity, such as f _{π} or r _{0}. In typical applications of the method, \(L_{\max }\) corresponds to an energy scale of about 250 MeV. After n steps, the box size has decreased by a factor 2^{n} (typically n = 7–9), and at this point one is surely in the regime where the perturbative approximations to the RG functions are reliable enough to extract the Λ-parameter (in the SF scheme) and the RGI quark masses according to Eqs. (5.113) and (5.114). The transition to the \({\overline {{\mathrm {MS}}}}\)scheme is easily performed, since the ratios \(\Lambda _{\mathrm {SF}}/\Lambda _{{\overline {{\mathrm {MS}}}}}\), as well as \(\bar {m}_{{\overline {{\mathrm {MS}}}}}/M\), are computable in perturbation theory. At that point one has completed the steps in Fig. 5.12, and all reference to the intermediate SF scheme has dropped out in the final result.
As examples we show the running coupling and quark mass in the SF scheme from actual simulations of lattice QCD with N _{f} = 2 flavours of dynamical quarks in Fig. 5.15. The numerical data points in these plots originate from simulations with two flavours of O(a)-improved Wilson fermions and have been extrapolated to the continuum limit.
5.5.3 Regularization-Independent Momentum Subtraction Scheme
An alternative choice of intermediate renormalization scheme is based on imposing renormalization conditions in terms of Green’s functions of external quark states in momentum space, evaluated in a fixed gauge (e.g. Landau gauge) [76]. The external quark fields are offshell, and their virtualities are identified with the momentum scale. Here we summarize the basic steps in this procedure by considering a quark bilinear nonsinglet operator \(O_\Gamma =\bar {\psi }_1\Gamma \psi _2\), where Γ denotes a generic Dirac structure, e.g. Γ = γ _{5} in the case of the pseudoscalar density. The corresponding renormalization factor Z _{Γ} is fixed by requiring that a suitably chosen renormalized vertex function Λ_{Γ,R}(p) be equal to its treelevel counterpart:
This condition defines Z _{Γ} up to quark field renormalization. Such a prescription can be formulated in any chosen regularization, which is why the method is said to define a regularizationindependent momentum subtraction (RI/MOM) scheme. However, Z _{Γ} does depend on the external states and the gauge.
In order to connect to our previous example of the renormalization of quark masses, we consider the pseudoscalar density for concreteness: Γ = γ _{5} = “P”. In this case the tree-level vertex function is simply γ _{5}, and Eq. (5.128) can be cast into the form
where the trace is taken over Dirac and colour indices.
In practice, the unrenormalized vertex function Λ_{P}(p) is obtained by computing the quark propagator in a fixed gauge in momentum space and using it to amputate the external legs of the Green’s function of the operator in question, evaluated between quark states, i.e.
The quark field renormalization constant \(Z_\psi ^{1/2}\) can be fixed, e.g. via the vertex function of the vector current^{Footnote 13}:
The numerical evaluation of the Green’s function and quark propagators in momentum space is performed on a finite lattice with periodic boundary conditions. Unlike the situation encountered in the Schrödinger functional, there is thus no additional infrared scale, so that the renormalization conditions cannot be evaluated directly at vanishing bare quark mass. A chiral extrapolation is then required to determine massindependent renormalization factors.
Equation (5.128) is also imposed to define the subsequent matching of the RI/MOM and \({\overline {{\mathrm {MS}}}}\) schemes. In this case, the unrenormalized vertex function on the left-hand side is evaluated to a given order in perturbation theory, using the \({\overline {{\mathrm {MS}}}}\)scheme of dimensional regularization. For a generic quark bilinear this yields the factor \(Z_{\Gamma }^{{\overline {{\mathrm {MS}}}}}(\bar {g}_{{\overline {{\mathrm {MS}}}}}(\mu ))\). In our specific example of the pseudoscalar density operator in the PCAC relation, Eq. (5.116), the transition between the RI/MOM and \({\overline {{\mathrm {MS}}}}\) schemes is provided by
The ratio R _{P} admits a perturbative expansion in terms of the coupling in the \({\overline {{\mathrm {MS}}}}\)scheme, i.e.
which is not afflicted with the bad convergence properties encountered in the direct matching of hadronic and \({\overline {{\mathrm {MS}}}}\)schemes. Finally, for the whole method to work, one must be able to fix the virtualities μ of the external fields such that
In other words, the method relies on the existence of a “window” of scales in which lattice artefacts in the numerical evaluation are controlled, μ ≪ 1∕a, and where μ is also large enough such that the perturbative matching to the \({\overline {{\mathrm {MS}}}}\) scheme can be performed reliably. In the ideal situation one expects that the dependence of \(Z_{\Gamma }^{\text{MOM}}(g_0,a\mu )\) on the virtuality μ inside the “window” is well described by the perturbative RG function.
The RI/MOM prescription is a flexible method to introduce an intermediate renormalization scheme and can easily be adapted to a range of operators and lattice actions. In particular, the extension to discretizations of the quark action based on the Ginsparg-Wilson relation is straightforward. This contrasts with the situation encountered in the Schrödinger functional, where extra care must be taken to ensure that imposing Schrödinger functional boundary conditions is compatible with the Ginsparg-Wilson relation [79,80,81]. On the other hand, the nonperturbative scale evolution, for which the Schrödinger functional is tailored, is not so easy to incorporate into the RI/MOM framework. Hence, the matching between RI/MOM and \({\overline {{\mathrm {MS}}}}\) schemes is usually performed at fairly low scales, i.e. \(\bar \mu =\mu _0\) in the notation of Fig. 5.12. Furthermore, the accessible momentum scales in the matching of hadronic and RI/MOM schemes are typically quite narrow, i.e. aμ _{0} ≈ 1. Special care must also be taken when one considers operators that couple to the pion, such as the pseudoscalar density. In this case the vertex function receives a contribution from the Goldstone pole, which for p ≡ μ = 0 diverges in the limit of vanishing quark mass. The fact that the chiral limit is ill-defined may spoil a reliable determination of the renormalization factor, in particular when the accessible “window” is narrow such that μ cannot be set to large values.
5.5.4 Mean-Field Improved Perturbation Theory
Another widely used strategy is to avoid the introduction of an intermediate renormalization scheme altogether and attempt the direct, perturbative matching between hadronic and \({\overline {{\mathrm {MS}}}}\) schemes via an effective resummation of higher orders in the expansion. In this sense one regards the bare coupling and masses as parameters that run with the cutoff scale a ^{−1}.
The bad convergence properties of perturbative expansions such as Eq. (5.118) have been attributed to the presence of large gluonic tadpole contributions in the relation between the link variable U _{μ}(x) and the continuum gauge potential A _{μ}(x). It was already suggested by Parisi [82] that the convergence of lattice perturbation theory could be accelerated by replacing the bare coupling \(g_0^2\) by an “improved” coupling \(\tilde {g}^2\equiv g_0^2/u_0^4\), where \(u_0^4\) denotes the average plaquette:
A more systematic extension of the idea of setting up such a “tadpole” or “mean-field” improved version of lattice perturbation theory was presented in Ref. [83]. The main strategy is to factor out tadpole contributions through a redefinition of the link variable:
where u _{0} is the average link, defined e.g. via the average plaquette. A factor of u _{0} is then absorbed into the normalization of the quark fields. According to [83], the mismatch between nonperturbative estimates for u _{0} and its expression in lattice perturbation theory can be used to improve the convergence properties of lattice perturbation theory via a relative rescaling of quark fields in the continuum and lattice formulations. To make this more explicit, we consider Wilson fermions (see Sect. 5.2.2). Factoring out the average link u _{0} modifies the quark field normalization of Eq. (5.36) according to
The general expression for the perturbative expansion of Z _{P} in powers of the bare coupling reads
where \(Z_{\mathrm {P}}^{(1)}(a\mu )\) denotes the one-loop expansion coefficient. The convergence of Eq. (5.138) can be accelerated by dividing out u _{0} in the rescaling factors of the quark and antiquark fields, using its perturbative expansion and replacing it by its nonperturbative estimate computed in simulations. In other words, the rescaling of the quark fields is exploited to divide out the relative mismatch between the perturbative and nonperturbative estimates for the average link in expressions like Eq. (5.138):
where the one-loop coefficient is \(u_0^{(1)}=1/12\) for the average plaquette. In this way, i.e. by combining nonperturbatively determined values for u _{0} with its perturbative expansion, and after replacing the bare coupling by \(\tilde {g}^2\), one arrives at the mean-field improved version of Eq. (5.138), viz.
Instead of Parisi’s “boosted” coupling \(\tilde {g}\), other expansion parameters have been suggested which are expected to accelerate the convergence of the perturbative series [83]. While mean-field improvement is a general procedure, which is easily adapted to a wide range of actions and operators, it is difficult to estimate the effectiveness of the resummation and, in turn, the size of higher-order corrections. Also, a principal problem is the identification of the running scale with the cutoff, since it is difficult to separate renormalization effects from lattice artefacts.
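A minimal numerical sketch of the boosted coupling and the mismatch factor discussed above; the plaquette value is an illustrative number of roughly the right magnitude for quenched SU(3) near β = 6.0, not a result quoted in the text:

```python
# All numbers below are illustrative placeholders, not results from the text.
beta = 6.0
g0_sq = 6.0 / beta        # bare coupling: beta = 6/g0^2 for SU(3)
plaquette = 0.594         # "measured" <P> = u0^4 (illustrative magnitude)

u0 = plaquette ** 0.25            # mean link from the average plaquette
g_tilde_sq = g0_sq / plaquette    # boosted coupling g~^2 = g0^2 / u0^4

# One-loop mean link u0 = 1 - u0^(1) * g0^2 with u0^(1) = 1/12 (see text);
# the mismatch with the nonperturbative value is divided out of Z_P
u0_pt = 1.0 - g0_sq / 12.0
mismatch = u0 / u0_pt
```

The boosted coupling is noticeably larger than the bare one, which is precisely the effect that the resummation of tadpole contributions is meant to capture.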
5.5.5 The Running Coupling from the Lattice
Having discussed the nonperturbative renormalization of QCD in detail, we shall now present results for the running coupling constant, α _{s}, from two different approaches. This complements the discussion in Sect. 4.6, where the determination of α _{s} from experimental data has been described in detail. Any lattice calculation of α _{s} proceeds along the following steps:

1.
A nonperturbative definition of the coupling must be provided in terms of some quantity which can be evaluated in lattice simulations with high precision. This amounts to specifying the running coupling in a particular renormalization scheme, α _{X}(aμ _{0}), which can be related to the \({\overline {{\mathrm {MS}}}}\) scheme of dimensional regularization.

2.
Scale setting: the matching to a hadronic scheme is performed via the calibration of the lattice spacing, which yields the scale μ _{0} at which α _{X} is evaluated in units of some physical quantity Q:
$$\displaystyle \begin{aligned} \mu_0\,[\text{MeV}] = (a\mu_0)\cdot a^{-1}\,[\text{MeV}] = (a\mu_0)\cdot \frac{Q\,[\text{MeV}]}{(aQ)}. \end{aligned} $$(5.141) 
3.
Running and matching: provided that the energy scale at which α _{X} has been determined is large enough, one can use perturbation theory to relate α _{X} to the coupling in the \({\overline {{\mathrm {MS}}}}\) scheme, e.g.
$$\displaystyle \begin{aligned} \alpha_{{\overline{{\mathrm{MS}}}}}(\bar\mu) = \alpha_{\mathrm{X}}(\mu) +c_{\mathrm{X}}^{(1)}(\bar\mu/\mu)\alpha_{\mathrm{X}}(\mu)^2 +\ldots. \end{aligned} $$(5.142) 
4.
The Λ-parameter can be determined from the asymptotic behaviour of α _{X} via Eq. (5.113).
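Step 2 above amounts to a simple unit conversion; the following sketch evaluates Eq. (5.141) with Q = 1∕r _{0} and r _{0} = 0.5 fm, where the value of r _{0}∕a is an illustrative placeholder:

```python
HBARC_MEV_FM = 197.327    # hbar*c conversion constant in MeV*fm

def inverse_lattice_spacing_mev(r0_over_a, r0_fm=0.5):
    """Eq. (5.141) with Q = 1/r0: calibrate the cutoff a^{-1} in MeV from
    a dimensionless lattice measurement of r0/a, using r0 = 0.5 fm."""
    return r0_over_a * HBARC_MEV_FM / r0_fm

a_inv = inverse_lattice_spacing_mev(5.37)   # r0/a = 5.37 is illustrative
mu0 = 0.25 * a_inv                          # e.g. a scale with a*mu0 = 0.25
```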
The attentive reader has surely noticed that the above steps follow closely the general strategy for nonperturbative renormalization via an intermediate renormalization scheme outlined in Sect. 5.5.1 and Fig. 5.12.
First we discuss the determination of α _{s} from the Schrödinger functional. The definition of the running coupling is somewhat technical in this case. The starting point is the effective action of Eq. (5.123); the classical field configurations at the boundaries at x _{0} = 0, T can be parameterized in terms of a real variable η:
For explicit expressions we refer the reader to the original article [84]. The associated effective action is defined by
and admits a perturbative expansion in terms of the bare coupling g _{0}, viz.
A renormalized coupling can then be defined in terms of the effective action via
This definition is imposed at vanishing quark mass, m = 0, and provided that the aspect ratio T∕L has been fixed, the spatial dimension is the only scale in the theory, such that \(\bar {g}_{\text{SF}}(L)\) runs with the box size L. From the perturbative expansion of Γ(η) one easily infers that \(\bar {g}_{\mathrm {SF}}^2(L)=g_0^2\) at tree level. The quantity on the righthand side is given in terms of plaquettes attached to the SF boundaries and can be computed with good statistical precision.
If \(L_{\max }\) denotes the largest box size for which \(\bar {g}_{\mathrm {SF}}\) is computed, then the scale is set by expressing \(L_{\max }\) in terms of some known dimensionful quantity, for instance, by computing the combination \(L_{\max }/r_0\) in the continuum limit and using r _{0} = 0.5 fm.
The finite-size scaling procedure described earlier in Sect. 5.5.1 allows one to compute the scale evolution of \(\bar {g}_{\mathrm {SF}}\) over several orders of magnitude. In particular, each of the horizontal steps in Fig. 5.14 can be repeated for several values of the lattice spacing, so that the continuum limit is reached by taking a∕L → 0 at fixed physical box size L. The resulting scale evolution of \(\alpha _{\mathrm {SF}}\equiv \bar {g}_{\mathrm {SF}}^2/4\pi \) is shown in Fig. 5.15 and compared to the perturbative evolution. Although the nonperturbatively determined points are described very well by perturbation theory, using the three-loop expression for the RG function, one should realize that this behaviour may be specific to the SF scheme and should not be generalized to other schemes.
Starting from \(\mu _0=1/L_{\max }\) one obtains the coupling at \(\mu =2^9/L_{\max }\) after nine steps in the scaling procedure. At that point one can extract the Λ-parameter by evaluating the exact expression
where \(\mu =2^9/L_{\max }\). The integral can be computed using the three-loop approximation to the RG β-function in the SF scheme. Equation (5.147) yields the combination \(\Lambda _{\mathrm {SF}}L_{\max }\), and knowledge of \(L_{\max }\) in physical units allows one to express the Λ-parameter in MeV. Conversion to the \({\overline {{\mathrm {MS}}}}\) scheme is easily achieved, since the ratio of Λ-parameters in two different schemes is computable via a one-loop calculation in which \(\bar {g}_{{\overline {{\mathrm {MS}}}}}^2\) is expanded in powers of \(\bar {g}_{\mathrm {SF}}^2\). This gives
The entire procedure of determining the Λ-parameter via the Schrödinger functional has so far been carried out for the pure SU(3) gauge theory (N _{f} = 0) and for QCD with two flavours of dynamical quarks. The values of the coefficient c _{Λ} are c _{Λ} = 2.04872(4) for N _{f} = 0 [84] and c _{Λ} = 2.382035(3) for N _{f} = 2 [85], and the resulting values for \(\Lambda _{{\overline {{\mathrm {MS}}}}}\) are [75, 77]
where r _{0} = 0.5 fm is used to convert into physical units. There is room for improvement in several respects: for N _{f} = 2 the extrapolation to the continuum limit can be made more reliable by including simulations at smaller lattice spacings, which should reduce the first of the two quoted errors. Also, the conversion into physical units should be performed in terms of a quantity such as f _{π}, which is directly accessible in experiment. Finally, the calculation must be repeated with more dynamical quark flavours, in order to allow for a direct comparison with phenomenology, since all experimental determinations yield the Λ-parameter for N _{f} = 4 or 5 quark flavours.
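Schematically, the last two steps combine Eq. (5.147) with the one-loop ratio c _{Λ} quoted above; in the sketch below the values of \(\Lambda _{\mathrm {SF}}L_{\max }\) and \(L_{\max }/r_0\) are hypothetical placeholders, not the published results of Refs. [75, 77]:

```python
HBARC_MEV_FM = 197.327             # hbar*c in MeV*fm

c_lambda_nf2 = 2.382035            # one-loop ratio for N_f = 2 (see text)
lambda_sf_times_lmax = 0.19        # hypothetical output of Eq. (5.147)
lmax_over_r0 = 0.72                # hypothetical continuum value of L_max/r0

lmax_fm = lmax_over_r0 * 0.5                                   # r0 = 0.5 fm
lambda_sf_mev = lambda_sf_times_lmax * HBARC_MEV_FM / lmax_fm  # Lambda_SF in MeV
lambda_msbar_mev = c_lambda_nf2 * lambda_sf_mev                # convert to MSbar
```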
The determination of α _{s} and \(\Lambda _{{\overline {{\mathrm {MS}}}}}\) via the Schrödinger functional is quite involved. However, it is so far the only method which allows one to map out the running of α _{s} in a completely nonperturbative manner, including the systematic elimination of lattice artefacts. In particular, perturbation theory is used only for energy scales well above 50 GeV.
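To make the Λ-parameter extraction concrete, here is a minimal sketch that evaluates the two-loop truncation of the exact expression quoted above; the actual analysis uses the three-loop SF β-function, and all inputs below are illustrative.

```python
import math

# two-loop beta-function coefficients for N_f = 2
NF = 2
B0 = (11 - 2 * NF / 3) / (4 * math.pi) ** 2
B1 = (102 - 38 * NF / 3) / (4 * math.pi) ** 4

def lam(mu, g2):
    """Two-loop truncation of the exact expression
    Lambda = mu (b0 g^2)^(-b1/(2 b0^2)) exp(-1/(2 b0 g^2)) x exp(-integral);
    the residual integral over the beta-function is dropped here."""
    return mu * (B0 * g2) ** (-B1 / (2 * B0 ** 2)) * math.exp(-1 / (2 * B0 * g2))

def evolve(g2, s, n=4000):
    """Evolve gbar^2(mu) to gbar^2(s*mu) with the two-loop beta-function:
    d g^2 / d ln mu = -2 (b0 g^4 + b1 g^6)."""
    dt = math.log(s) / n
    for _ in range(n):
        g2 -= dt * 2 * (B0 * g2 ** 2 + B1 * g2 ** 3)
    return g2
```

Since Λ is a renormalization group invariant, evaluating `lam` along a consistently evolved trajectory (μ, ḡ²(μ)) returns approximately the same number at every scale.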
The second method that we will discuss here in some detail is the determination of α _{s} via heavy quarkonia. Below we present an account of the calculation published in [86]. Here, the dynamical effects of the light (u, d, s) quarks have been accounted for in simulations with improved staggered quarks employing the fourth-root trick (see Sect. 5.2.6). In this approach, the coupling constant is defined in the so-called “V scheme” via the heavy quark potential in momentum space:
Small Wilson loops such as the plaquette can be expanded in powers of α _{V}
where s _{P} is a real dimensionless variable which can be chosen to optimize the convergence properties of the expansion [83]. Equation (5.151) thus provides the link between the coupling and a quantity that is easily computed in lattice simulations. The above expression can be generalized to (small) rectangular Wilson loops W _{rt} with area r ⋅ t:
Knowledge of the expansion coefficients in conjunction with lattice data for the quantity on the left-hand side allows for the determination of α _{V}.
The second step, namely the calibration of the momentum scale which appears in the argument of α _{V}, is done by determining the lattice spacing from mass splittings in the bottomonium system. Here one typically considers the mass differences between the Υ and Υ^{′}, or alternatively, between the χ _{b} and Υ states. Of course, any other low-energy quantity like f _{π} or r _{0} could be used. It can be argued, however, that mass splittings in heavy quarkonia are a natural choice for setting the scale in this particular approach, chiefly because of their relative insensitivity to the exact value of the heavy quark mass. Since the b-quark mass of m _{b} ≈ 4 GeV is greater than typical values of the inverse lattice spacing a ^{−1}, one must employ special techniques to deal with heavy quarks on the lattice. In [86] this is done via an approach based on nonrelativistic QCD. A detailed discussion of the specific treatment of heavy quarks in lattice simulations is deferred to Sect. 5.7.2.
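A minimal sketch of this scale-setting step; the experimental input is the splitting M(Υ^{′}) − M(Υ) = 10.023 − 9.460 = 0.563 GeV, while the lattice numbers fed in would come from a simulation.

```python
# Experimental input: M(Upsilon') - M(Upsilon) = 10.023 - 9.460 = 0.563 GeV
UPSILON_SPLITTING_GEV = 0.563

def inverse_lattice_spacing(a_delta_e):
    """Given the splitting measured in lattice units, a*DeltaE,
    return the inverse lattice spacing a^{-1} in GeV."""
    return UPSILON_SPLITTING_GEV / a_delta_e

def to_gev(a_quantity, a_delta_e):
    """Convert any quantity measured in lattice units into GeV."""
    return a_quantity * inverse_lattice_spacing(a_delta_e)
```

The relative insensitivity of the splitting to m _{b} is what makes this calibration robust: a small mistuning of the heavy quark mass barely shifts the inferred value of a^{−1}.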
After setting the scale, the Wilson loops \(\left \langle {W_{rt}}\right \rangle \) computed on ensembles with N _{f} = 3 flavours of rooted staggered quarks are used to determine α _{V} via a global fit involving data at three different values of the lattice spacing. This yields
where the superscript on the coupling reminds us that the result is valid in the three-flavour theory. The relation to the coupling in the \({\overline {{\mathrm {MS}}}}\) scheme at the Z-pole is determined in perturbation theory, by employing the third-order expansion of \(\alpha _{{\overline {{\mathrm {MS}}}}}\) in terms of α _{V} [87]:
which yields \(\alpha _{{\overline {{\mathrm {MS}}}}}^{(3)}(3.26\,{\mathrm {GeV}})\). This coupling is then translated to \(\alpha _{{\overline {{\mathrm {MS}}}}}^{(5)}(M_Z)\) via the numerical integration of the four-loop RG β-function, including the effects from quark mass thresholds at m _{c} and m _{b}, which finally yields
This result is included in the world average of \(\alpha _{{\overline {{\mathrm {MS}}}}}^{(5)}(M_Z) = 0.1176\pm 0.002\) in Ref. [61]. It is also in very good agreement with the nonlattice global estimate of \(\alpha _{{\overline {{\mathrm {MS}}}}}^{(5)}(M_Z) = 0.1182\pm 0.0027\) [88].
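The threshold-crossing evolution can be illustrated at one loop (the published analysis uses the four-loop β-function; the matching below is the naive continuous one, and the threshold value of m _{b} is illustrative).

```python
import math

def b0(nf):
    """One-loop coefficient in  1/alpha(mu') = 1/alpha(mu) + 2 b0 ln(mu'/mu)."""
    return (33 - 2 * nf) / (12 * math.pi)

def run_alpha(alpha, mu, mu_target, nf):
    """One-loop running of alpha_s with nf active flavours."""
    return 1.0 / (1.0 / alpha + 2 * b0(nf) * math.log(mu_target / mu))

def alpha_mz(alpha_mu0, mu0=3.26, mb=4.2, mz=91.19):
    """Sketch: treat the coupling as effectively four-flavour above mu0,
    switch to nf = 5 at the b threshold with naive continuous matching,
    then run up to M_Z.  One loop only, illustrative thresholds."""
    a = run_alpha(alpha_mu0, mu0, mb, 4)
    return run_alpha(a, mb, mz, 5)
```

Even this crude one-loop version reproduces the qualitative picture: a coupling of order 0.25 at a few GeV runs down to roughly 0.12 at the Z-pole.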
The running and matching in this approach are done perturbatively, involving energy scales from M _{Z} down to m _{c}. In this sense the method may be regarded as similar in spirit to, say, the determination of α _{s} from the semileptonic branching ratio of τ decays, as in both cases the coupling is extracted from the perturbative expansion of a particular observable. While for τ-lepton decays an experimentally measured quantity is considered, in the lattice approach it is the nonperturbatively computed data for the Wilson loops which are expressed in terms of the running coupling. This contrasts with the Schrödinger functional approach, where the running itself is also computed nonperturbatively, albeit with considerable numerical effort.
The error on the result in Eq. (5.155) is rather small. It is left for future studies to confirm this level of precision, which must entail further investigations into the influence of lattice artefacts, as well as the validity of the fourth-root trick.
5.5.6 Light Quark Masses
We shall now apply the general framework of nonperturbative renormalization to the determination of quark masses. Typically one distinguishes the “light” u, d, s quarks from the “heavy” c, b, t quarks. At first, this distinction may seem rather arbitrary. It is actually based on the relative magnitude of the quark masses compared with the chiral symmetry breaking scale Λ_{χ}, which separates “soft” from “hard” momentum scales. Masses and momenta well below Λ_{χ} break chiral symmetry only softly, so that spontaneous chiral symmetry breaking still dominates over the explicit breaking generated by nonzero values of the quark masses. Gasser and Leutwyler [89, 90] have demonstrated that QCD with u, d, s flavours can be studied via an “effective” theory of Goldstone boson fields. This approach, called Chiral Perturbation Theory (ChPT), has an SU(3)_{L} ⊗ SU(3)_{R} chiral symmetry, which is spontaneously broken to the SU(3) vector subgroup. The associated Goldstone bosons are then identified with the pions, kaons and η mesons, whose masses are indeed small compared to typical hadronic scales, such as the mass of the nucleon, for instance. Thus, the magnitude of Λ_{χ} is identified with a value close to 1 GeV. In ChPT, quantities like hadron masses, decay rates or cross sections are computed through an expansion in powers of quark masses (and 4-momenta) about the chiral limit. The inclusion of the charm quark into the formalism is of little use, since the masses of the lightest charmed pseudoscalar mesons are far greater than Λ_{χ} ≈ 1 GeV.
The top quark can be safely ignored in this context, since its lifetime is an order of magnitude shorter than the timescale of typical QCD processes. As a consequence, the top quark does not undergo any hadronization (for instance, “toponium”, i.e. \(t\bar {t}\) bound states, has never been observed), but rather decays weakly into a W boson and a b-quark.
The mass of the b-quark is rather large (and to some extent this is also true for the charm quark), so that one may attempt to determine their values from perturbative expansions in α _{s} of some mass-dependent quantity. By contrast, in the light quark sector nonperturbative effects such as spontaneous chiral symmetry breaking dominate. As far as the determination of the masses of the u, d, s quarks is concerned, ChPT is of limited value, since only ratios of quark masses can be predicted, but not their absolute values. The reason is that although the light quark masses appear as parameters of ChPT, their values cannot be fixed by chiral symmetry (see Sect. 5.6.1 for more details). The absolute normalization must therefore be provided by nonperturbative methods such as lattice simulations or QCD sum rules.
Below we will focus on attempts to compute the values of the light quark masses in units of some hadronic quantity. As indicated in Sect. 5.5.1, this entails the knowledge of the renormalization factor that links lattice regularization to the chosen continuum scheme. Lattice simulations have maximum impact in the light quark sector, owing to the dominance of nonperturbative effects, which is in fact signified by the large uncertainties quoted for the values of the u, d and s quark masses in the particle data book [61].
The general procedure for the determination of light quark masses in lattice QCD starts from the PCAC relation, Eq. (5.116). Assuming exact isospin symmetry, m _{u} = m _{d}, one can consider a generic light flavour ℓ with mass \(m_\ell \equiv \hat {m}=\textstyle {1\over 2}(m_u+m_d)\). In order to determine, say, the combination \(\hat {m}+m_s\), one must define a particular hadronic renormalization scheme, by specifying the lattice scale and the hadronic quantity that fixes the value of \(\hat {m}+m_s\). Furthermore, the renormalization factor which connects hadronic and continuum schemes must be known. Equation (5.116) can then be rewritten such that it yields the sum of RG-invariant quark masses \(\hat {M}+M_s\) in units of the quantity Q which sets the lattice spacing:
In this expression, the subscript “exp” denotes the experimental values for the respective quantities, while the matrix element \(G_{\mathrm {PS}}^{\text{bare}}\) is given by
The pseudoscalar decay constant \(f_{\mathrm {PS}}^{\text{bare}}\) parameterizes the matrix element of the unrenormalized axial current, i.e.
The renormalization factor Z _{M} relates the bare current quark mass to the RG-invariant mass. Thus, the task for lattice calculations is to compute the ratio \({f_{\mathrm {PS}}^{\text{bare}}}Q/G_{\mathrm {PS}}^{\text{bare}}\) for a generic pseudoscalar state and tune the bare quark mass such that m _{PS} = m _{K}. By combining the result with the renormalization factor Z _{M} and the experimental value of \(m_{\mathrm {K}}^2/Q^2\), the RGI quark masses in units of Q are obtained up to lattice artefacts of order a ^{p}, where p is characteristic of the details of the discretization. Since the RGI quark masses are scale- and scheme-independent quantities, the factor Z _{M} depends only on the bare coupling g _{0}. Using the Schrödinger functional as the intermediate renormalization scheme, nonperturbative estimates of Z _{M}, computed for O(a) improved Wilson fermions within a wide range of bare couplings, have been published in Refs. [75] and [78]. In this case, Z _{M} is given by
where the ratio \({M}/{\bar {m}_{\mathrm {SF}}(\mu _0)}\) is computed via the finite-size scaling procedure. The transition between lattice regularization and the SF scheme is accomplished by determining Z _{P} and the renormalization factor Z _{A} of the axial current.^{Footnote 14} Note that the dependence on the intermediate matching scale μ _{0} drops out completely in this expression. Finally, the conversion to the \({\overline {{\mathrm {MS}}}}\) scheme is performed by considering
where the ratio \({\bar {m}_{{\overline {{\mathrm {MS}}}}}(\mu )}/{M}\) can be computed through the numerical integration of the perturbative approximation of the anomalous dimension τ and the β-function at four loops. This yields [35, 78]
Estimates for the strange quark mass itself can be obtained in two ways: first, one combines \(\hat {M}+M_s\) with the ratio \(M_s/\hat {M}=24.4\pm 1.5\) estimated in ChPT [38]. Alternatively, one might attempt to compute \(\hat {M}\) directly from lattice data, by considering Eq. (5.116) for a pion. In this case, however, one relies on chiral extrapolations, because of the difficulties involved in tuning the masses of the light quarks towards the values of the physical up- and down-quark masses.
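Schematically, the chain from bare lattice quantities to the RGI masses described above can be written in a few lines; every numerical input here is a placeholder for real simulation data.

```python
def z_m(m_over_msf_mu0, z_a, z_p):
    """Z_M = (M / mbar_SF(mu0)) * Z_A(g0) / Z_P(g0, a*mu0); the
    intermediate matching scale mu0 cancels between the two factors."""
    return m_over_msf_mu0 * z_a / z_p

def rgi_mass_sum_over_q(zm, f_q_over_g_lat, mk2_over_q2_exp):
    """(Mhat + M_s)/Q = Z_M * (f_PS^bare Q / G_PS^bare)_lat * (m_K^2/Q^2)_exp,
    the structure of Eq. (5.156)."""
    return zm * f_q_over_g_lat * mk2_over_q2_exp

def strange_mass(mass_sum, ratio_ms_over_mhat=24.4):
    """Split Mhat + M_s into M_s using the ChPT ratio M_s/Mhat."""
    r = ratio_ms_over_mhat
    return mass_sum * r / (r + 1)
```

The helper names and argument lists are ours, introduced only to make the bookkeeping explicit; the published analyses organize the same steps differently.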
In Table 5.2 we present a selection of results for the mass of the strange quark in the quenched approximation, normalized in the \({\overline {{\mathrm {MS}}}}\) scheme at μ = 2 GeV, as well as the ratio \(M_s/\hat {M}\). Two observations are worth mentioning: first, direct determinations of \(M_s/\hat {M}\) via chiral extrapolations agree well with the estimate from ChPT, even though the chiral limit is ill-defined in the quenched approximation. Second, the different systematics in the simulations (lattice actions, renormalization of local operators) generate a spread of seemingly incompatible results for the mass of the strange quark. However, the spread can be traced to the particular choice of hadronic renormalization scheme. To this end one can compute the relation between quark masses computed for two different lattice scales, Q and Q ^{′}. From Eq. (5.156) one easily infers that the strange quark mass \(m_s^{(Q^\prime )}\) estimated using Q ^{′} is related to its counterpart \(m_s^{(Q)}\) via [37]
Here, the subscripts “lat” and “exp” refer to lattice and experimental estimates of the scale ratios. The ratio (Q ^{′}∕Q)_{lat} can be determined in the continuum limit using published lattice data, and the deviation of the proportionality factor from unity is a measure of the relative quenching effects, when either Q or Q ^{′} is chosen to set the scale. Once the results have been converted to the common scale r _{0}, the estimates for m _{s} in the continuum limit show remarkable consistency, despite the very different systematic effects among the simulations included in this analysis (cf. Table 5.2). This demonstrates that lattice artefacts and renormalization effects can be controlled at the level of a few percent with the available techniques.
The challenge for current and future simulations is to eliminate the remaining uncertainty due to quenching. Several simulations with N _{f} = 2 or 2 + 1 flavours of dynamical quarks^{Footnote 15} based on different fermionic discretizations have produced results for the light quark masses, which are shown in Table 5.3. Despite the enormous progress that has been made in simulating light dynamical quarks, it is important to realize that systematic effects such as lattice artefacts and/or renormalization effects are currently not as well controlled as in the quenched theory. The fact that affordable lattice spacings are still relatively large implies that extrapolations to the continuum limit must in general bridge a larger range than in the quenched approximation, thereby leading to larger errors. In some cases it is not even clear whether the leading lattice artefacts in dynamical simulations have been isolated. Also, the quantity Q that sets the scale must be known at least as accurately as the quark mass itself, and hence the determination of these observables may prove just as costly. Finally, dynamical quark masses are still fairly large, especially in many simulations using Wilson fermions, and thus the long and potentially uncontrolled chiral extrapolations significantly affect estimates for the isospin-averaged light quark mass \(\hat {m}\).
5.6 Spontaneous Chiral Symmetry Breaking
Chiral symmetry has already been mentioned in connection with the masses of the light quarks. Here we will extend the general framework and elaborate on effective descriptions of QCD at low energies, which can be treated analytically. As we shall see, much can be learnt via the interplay of such effective theories and lattice simulations of QCD.
Massless QCD with N _{f} flavours is invariant under independent rotations of the left- and right-handed components of the quark fields. If one defines the field Ψ as the vector of N _{f} Dirac spinors ψ _{i} via
its left and righthanded components are given by
The action of the massless theory is then invariant under transformations like
where ω _{L}, ω _{R} are real vectors, and T denotes the generators of SU(N _{f}), which satisfy
The above transformation can be rewritten in terms of vector and axial rotations, i.e.
where \({\boldsymbol {\alpha }_{\boldsymbol {V}}} \equiv \textstyle {1\over 2}\left ( {\boldsymbol {\omega }_{\boldsymbol {R}}}+{\boldsymbol {\omega }_{\boldsymbol {L}}}\right )\) and \({\boldsymbol {\alpha }_{\boldsymbol {A}}} \equiv \textstyle {1\over 2}\left ( {\boldsymbol {\omega }_{\boldsymbol {R}}}-{\boldsymbol {\omega }_{\boldsymbol {L}}}\right )\). Invariance under these transformation laws is what one usually means when one says that (massless) QCD is invariant under a global SU(N _{f})_{L} ⊗SU(N _{f})_{R} symmetry.
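For N _{f} = 2 the decomposition into vector and axial parameters can be checked explicitly with the Pauli matrices; the rotation parameters below are arbitrary, and the closed-form SU(2) exponential is used for convenience.

```python
import math
import numpy as np

# Pauli matrices; for N_f = 2 the generators are T^a = sigma^a / 2
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def su2(omega):
    """exp(i omega.T) evaluated with the SU(2) closed form
    cos(|w|/2) 1 + i sin(|w|/2) (w_hat . sigma)."""
    th = math.sqrt(sum(w * w for w in omega))
    if th == 0.0:
        return np.eye(2, dtype=complex)
    n = [w / th for w in omega]
    return (math.cos(th / 2) * np.eye(2)
            + 1j * math.sin(th / 2) * sum(c * s for c, s in zip(n, sigma)))

# arbitrary left/right rotation parameters and their vector/axial combinations
omega_L = [0.3, -0.1, 0.2]
omega_R = [0.1, 0.4, -0.2]
alpha_V = [(r + l) / 2 for r, l in zip(omega_R, omega_L)]
alpha_A = [(r - l) / 2 for r, l in zip(omega_R, omega_L)]
```

One can verify that `su2` produces unitary matrices of unit determinant, and that the parameters recombine as ω_R = α_V + α_A and ω_L = α_V − α_A.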
Actually, QCD has even more global symmetries, namely a U(1)_{V} symmetry, which corresponds to a common rotation of all quark flavours. The conserved charge derived from the Noether current, which is associated with this unbroken symmetry, is the quark number. The conservation of the axial current associated with the remaining axial U(1) symmetry is, however, severely broken by an anomalous term, which gives rise to strong nonperturbative effects generated by instantons. Without going into further detail here, we refer to common textbooks.
Returning now to SU(N _{f})_{L} ⊗SU(N _{f})_{R}, we note that symmetries in subnuclear physics are usually deduced from the particle spectrum. That is, symmetries manifest themselves through the occurrence of mass-degenerate (or nearly degenerate) particle multiplets that can be grouped according to the irreducible representations of the symmetry group. Indeed, for N _{f} = 3 one finds that the light pseudoscalar mesons, i.e. the pions, kaons and η mesons, form an octet. The mass splittings among the members of the octet are small when viewed on typical hadronic scales, and arise due to the unequal, nonzero masses of the light quarks. However, if the pseudoscalar octet were interpreted as a manifestation of an (approximate) SU(3)_{L} ⊗ SU(3)_{R} chiral symmetry, one would expect each member of the octet to be accompanied by a parity partner, i.e. a scalar meson, whose mass is of the same order of magnitude. This is not observed in experiment, where the lightest scalar mesons are found to lie 600–700 MeV above the pseudoscalar octet. One therefore concludes that the symmetry must be spontaneously broken. The term “spontaneous breaking” refers to the fact that theories like QCD possess more internal symmetries than those that can be inferred from the particle spectrum. In general, spontaneously broken symmetries are not realized as symmetry transformations involving the physical states of the theory. In particular, the ground state, i.e. the vacuum, is not invariant under the transformation. As discussed in many textbooks, it is precisely the invariance of the vacuum under the symmetry transformation that is required to ensure the degeneracy of the particle spectrum. If the vacuum is not invariant, certain operators may acquire a nonvanishing expectation value.
In fact, a sufficient condition for the spontaneous breaking of the physical SU(3)_{L} ⊗ SU(3)_{R} chiral symmetry is fulfilled if the expectation value of the scalar density, \(\bar \Psi \Psi \), is nonzero, i.e.
Furthermore, according to Goldstone’s theorem [101], the generator of each broken symmetry is associated with a massless particle. Since the masses of the members of the pseudoscalar octet are rather small in comparison with the proton mass, they are identified as the Goldstone bosons of the spontaneously broken chiral symmetry.
Spontaneous chiral symmetry breaking is an entirely nonperturbative phenomenon. The task is then to explore the breaking mechanism and compute the value of the quark condensate \(\left \langle \bar \Psi \Psi \right \rangle \). As shall be outlined below, this can be achieved through the interplay of lattice simulations and effective low-energy descriptions of QCD.
5.6.1 Chiral Perturbation Theory
Chiral Perturbation Theory (ChPT) has already been mentioned in connection with extrapolations of lattice data to the physical values of the up- and down-quark masses, and also in the context of lattice determinations of the strange quark mass. Here we present a brief introduction to the general formalism. More thorough reviews can be found in Refs. [102, 103].
Chiral Perturbation Theory is an effective theory, based on a systematic expansion of the low-energy dynamics of QCD in powers of the 4-momentum and the quark mass about the chiral limit [89, 90], i.e.
where the superscripts label the order of the expansion in powers of p. In contrast to QCD, the basic degrees of freedom which appear in \({\mathcal {L}}_{\mathsf {eff}}\) are the Goldstone bosons, rather than the fundamental quarks and gluons. ChPT is parameterized in terms of a set of empirical couplings, usually called “low-energy constants” (LECs). At lowest order, the effective chiral Lagrangian (in Euclidean spacetime) reads
where \({\mathcal {M}}=\text{diag}(m_u,\,m_d,\,m_s)\) is the quark mass matrix, and U(x) collects the Goldstone boson fields, i.e.
The λ ^{a}’s denote the Gell-Mann matrices, which are normalized as Tr (λ ^{a}λ ^{b}) = 2δ ^{ab}. The LECs at leading order are B _{0} and F _{0}, where the latter corresponds to the pion decay constant in the chiral limit.^{Footnote 16} The expression for \({\mathcal {L}}_{\mathsf {eff}}^{(4)}\), i.e. the interaction terms at next-to-leading order in the chiral expansion, contains 12 additional interaction terms, multiplied by the LECs L _{1}, …, L _{10}, H _{1}, H _{2}. The values of the LECs are usually determined by matching the expressions of ChPT for physical observables to experimental data. However, it turns out that the complete set of LECs cannot be obtained in this way. Rather, in order to fix the values of some LECs, one must resort to additional theoretical assumptions. One particular example is the value of B _{0}, which appears in the chiral expansion of the pion mass at lowest order (see also Eq. (5.80)):
From this expression it is clear that B _{0} can only be determined using m _{π} as input if the physical values of the quark masses are known in the first place. By the same token, the value of \(\hat {m}=\textstyle {1\over 2}(m_u+m_d)\) can only be inferred if an estimate for B _{0} is available. However, the a priori unknown parameter B _{0} drops out in suitably chosen ratios of \(m_\pi ^2, m_{\mathrm {K}}^2,\ldots \). This explains why ChPT can be used to predict the ratios of the light quark masses but fails to provide an absolute mass scale. Another reason why the complete set of LECs cannot be determined from chiral symmetry considerations alone is the fact that the effective Lagrangian beyond leading order is invariant under a symmetry transformation which involves the LECs and the mass matrix \({\mathcal {M}}\), but which is absent in QCD. This is the so-called “Kaplan–Manohar ambiguity” [104]. At this point it is clear that lattice simulations of QCD can provide valuable input for the determination of LECs. For instance, since the values of the quark masses are input parameters in the simulations, lattice QCD allows one to map out the quark mass dependence of the masses of the Goldstone bosons and thus determine the LEC B _{0}. We shall see below that B _{0} is related to the quark condensate \(\langle \bar \Psi \Psi \rangle \), which can be considered as the order parameter of spontaneous chiral symmetry breaking. Furthermore, as we have already discussed in Sect. 5.5.6, absolute values of quark masses are accessible via lattice QCD.
We end our brief introduction to ChPT with the derivation of a few relations which will be useful for our discussion of chiral symmetry breaking below. In particular, we shall derive the leading-order mass formulae such as Eq. (5.80) and establish a link between the quark condensate and B _{0}. To this end we expand the field U in the chiral Lagrangian \({\mathcal {L}}_{\mathsf {eff}}^{(2)}\) in powers of the Goldstone boson fields. Assuming exact isospin symmetry, m _{u} = m _{d}, one finds at lowest order in ϕ _{a}:
After identifying ϕ _{1}, ϕ _{2}, ϕ _{3} with the pions, ϕ _{4}, …, ϕ _{7} with the kaons, and ϕ _{8} ≡ η, one derives the leadingorder relations between the quark masses and the masses of the Goldstone bosons, viz.
Thus, the relation for a generic pseudoscalar Goldstone boson made up of quarks with masses m _{1} and m _{2} is precisely what was already shown in Eq. (5.80). We note that from Eq. (5.174) one easily derives the Gell-Mann–Okubo mass relation, i.e.
which is satisfied experimentally within a few percent. Furthermore, Eq. (5.174) yields the ratio \(m_s/\hat {m}\) at lowest order, viz.
which is already close to the estimate at next-to-leading order of \(m_s/\hat {m}= 24.4\pm 1.5\) [38], quoted in Sect. 5.5.6.
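Both leading-order statements are easily checked numerically with the physical meson masses (charged pion, isospin-averaged kaon, η, in GeV):

```python
# Physical masses in GeV (charged pion, isospin-averaged kaon, eta)
m_pi, m_K, m_eta = 0.1396, 0.4957, 0.5479

# Gell-Mann-Okubo relation: 3 m_eta^2 = 4 m_K^2 - m_pi^2,
# violated at the level of a few percent in the mass-squared values
gmo_violation = abs(3 * m_eta ** 2 - (4 * m_K ** 2 - m_pi ** 2)) / (3 * m_eta ** 2)

# Leading-order quark-mass ratio: m_s/mhat = (2 m_K^2 - m_pi^2) / m_pi^2
ratio = (2 * m_K ** 2 - m_pi ** 2) / m_pi ** 2
```

With these inputs the Gell-Mann–Okubo relation holds to about 7% in the squared masses, and the leading-order ratio comes out close to 24, consistent with the next-to-leading-order ChPT estimate quoted above.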
For the discussion of spontaneous symmetry breaking, it is useful to establish a connection between the quark condensate in QCD, \(\left \langle \bar \Psi \Psi \right \rangle \), and the LECs which parameterize the effective chiral Lagrangian. This link is provided by the so-called Gell-Mann–Oakes–Renner relation [105], which we are going to derive below. To this end we consider the QCD Lagrangian in the continuum:
The path integral is defined as
and the expression for the quark condensate can be formally derived by taking derivatives with respect to the light quark masses, i.e.
What is the analogue of this expression in the effective chiral theory? To answer this question one takes the lowest-order chiral Lagrangian of Eq. (5.170) and defines the corresponding path integral^{Footnote 17}
Since \({\mathcal {L}}_{\mathsf {eff}}^{(2)}\) contains the quark mass matrix one can consider similar derivatives, i.e.
and comparison with Eq. (5.179) yields
In other words, the quark condensate is related to the slope parameter in the lowest-order mass formulae and the pion decay constant in the chiral limit, F _{0}. This result is known as the Gell-Mann–Oakes–Renner relation.
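A quick numerical illustration of the Gell-Mann–Oakes–Renner relation, with F _{0} approximated by the physical pion decay constant and an \({\overline {{\mathrm {MS}}}}\)-like light-quark mass used purely for orientation (all inputs illustrative):

```python
# GMOR: m_pi^2 F0^2 = 2 mhat Sigma  =>  Sigma = m_pi^2 F0^2 / (2 mhat)
# Illustrative inputs in GeV: F0 ~ f_pi = 92.4 MeV, mhat ~ 3.5 MeV
m_pi, f0, mhat = 0.1396, 0.0924, 0.0035

sigma = m_pi ** 2 * f0 ** 2 / (2 * mhat)   # condensate in GeV^3
sigma_third_root = sigma ** (1 / 3)        # characteristic scale in GeV
```

The resulting value of Σ^{1∕3} is close to 290 MeV, i.e. of the typical hadronic size one expects for the order parameter of chiral symmetry breaking.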
5.6.2 Lattice Calculations of the Quark Condensate
The Gell-Mann–Oakes–Renner relation is the starting point for many lattice determinations of the quark condensate. For a generic pseudoscalar meson consisting of a mass-degenerate quark and antiquark, i.e. m _{1} = m _{2} ≡ m, the LEC Σ is given by
The technical drawback of this straightforward approach is that the chiral limit in the above expression is difficult to take in practice, as we have mentioned several times already. In the quenched approximation the situation is even worse: due to the appearance of quenched chiral logarithms (cf. Eq. (5.82)) the ratio \(m_{\mathrm {PS}}^2/m\) becomes singular at vanishing quark mass, and hence the chiral limit does not exist. Since the quenched approximation is being abandoned, this issue will gradually become irrelevant.
However, a more serious obstacle remains in the case of dynamical simulations with Wilson fermions: since this particular type of regularization breaks chiral symmetry explicitly, the matching of simulation data at nonzero lattice spacing to the expressions of ChPT is—strictly speaking—not permitted. Matching is certainly justified if a fermionic discretization is employed which preserves chiral symmetry, such as overlap or domain wall fermions, or if results obtained using Wilson fermions are extrapolated to the continuum limit before a comparison to ChPT is performed.
A complementary approach for determining the condensate on the lattice is based on the Banks–Casher relation [106]. It provides a link between the LEC Σ and the spectral properties of the Dirac operator, viz.
where V is the spacetime volume. The spectral density ρ(λ) is defined as follows: Let \({\mathcal {D}}\) denote the massless Dirac operator in the continuum, satisfying \(\left \{\gamma _5,{\mathcal {D}}\right \}=0\). Its eigenvalue equation reads
where the eigenvalues and eigenfunctions depend on the gauge field. A suitable definition of the spectral density is then represented by
where the expectation value is taken with respect to the QCD functional integral.^{Footnote 18} Note that in Eq. (5.184) the ordering of limits must be obeyed. In particular, since the spontaneous breaking of a continuous symmetry cannot occur in finite volume, the limit V →∞ must be taken before the chiral limit and the spectrum in the deep infrared are considered.
The Banks–Casher relation not only provides a method to determine the condensate, but also suggests a mechanism by which spontaneous chiral symmetry breaking comes about. Indeed, Eq. (5.184) implies that a nonzero value of the quark condensate is generated through a nonvanishing value of the spectral density in the deep infrared. In other words, spontaneous chiral symmetry breaking is driven by an accumulation of small eigenvalues. An immediate consequence of the Banks–Casher relation is that the level spacing Δλ between the small eigenvalues is given by
Hence, as V →∞ the level spacing becomes arbitrarily small. In the free theory, i.e. in the absence of a nontrivial gauge field, one finds that ρ(λ) ∝ λ ^{3}, which vanishes as λ → 0. The accumulation of eigenvalues near zero at the rate predicted by Eq. (5.187) must therefore arise through the interaction with the gauge field.
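The size of this level spacing is easily estimated; the values of Σ and the box size used below are illustrative.

```python
import math

HBARC = 0.1973  # conversion constant hbar*c in GeV fm

def level_spacing(box_size_fm, sigma_third_root_gev=0.25):
    """Delta(lambda) = pi / (Sigma V) for a hypercubic box V = L^4.
    Sigma^(1/3) = 0.25 GeV is an illustrative value of the condensate."""
    sigma = sigma_third_root_gev ** 3        # GeV^3
    volume = (box_size_fm / HBARC) ** 4      # GeV^-4
    return math.pi / (sigma * volume)        # GeV
```

For a box of linear size 2 fm this gives a spacing of roughly 20 MeV near the origin, and the spacing shrinks like 1∕L ^{4} as the volume grows.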
In order to test the Banks–Casher scenario, a possible strategy is to compute the spectral density and check whether it actually produces an arbitrarily dense spectrum near the origin. Analytic predictions for ρ(λ) can be derived in the framework of effective theories of QCD at low energies, namely ChPT, as well as chiral Random Matrix Theory (RMT). The latter also yields predictions for the distributions of individual eigenvalues, in addition to the spectral density.
Chiral Random Matrix Theory goes back to an idea of Wigner, who sought to exploit statistical properties in the theoretical description of systems with many degrees of freedom and complicated dynamics, such as nuclear resonances. Rather than modelling the local interactions within such a system explicitly, one assumes that all possible interactions consistent with the symmetries of the theory are equally likely. The Hamiltonian is then approximated by a matrix whose elements are uncorrelated but obey a particular probability distribution. The main guiding principle for the RMT description of QCD is the requirement that all global symmetries must be respected. The massless Dirac operator can then be represented by an N × N matrix \(\hat {D}\) with an off-diagonal block structure which is characteristic of systems with chiral symmetry:
As illustrated by the above expression, the matrix W is, in general, rectangular with N _{+} rows and N _{−} columns, such that N = N _{+} + N _{−}. For N _{+} ≠ N _{−} the matrix \(\hat {D}\) has N _{+} − N _{−} zero modes, and the index ν ≡ N _{+} − N _{−} may be identified with the topological charge in QCD. With this definition, \(\hat {D}\) is antihermitian and has purely imaginary eigenvalues which come in complex conjugate pairs:
One can define the system’s partition function in a sector of fixed topological charge ν via
where N _{f} is—as usual—the number of dynamical quark flavours. It makes sense to identify the matrix size N with the physical volume V of the theory (up to some proportionality constant).
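The block structure and its spectral consequences can be verified directly; the sketch below uses Gaussian matrix elements, which is one standard choice of probability distribution.

```python
import numpy as np

def chiral_dirac_matrix(n_plus, n_minus, rng):
    """Random antihermitian matrix with the chiral block structure
    D = [[0, W], [-W^dagger, 0]], where W is (n_plus x n_minus) with
    Gaussian entries.  D has |n_plus - n_minus| exact zero modes, and
    its nonzero eigenvalues are purely imaginary, in pairs +/- i*lambda."""
    W = (rng.standard_normal((n_plus, n_minus))
         + 1j * rng.standard_normal((n_plus, n_minus)))
    N = n_plus + n_minus
    D = np.zeros((N, N), dtype=complex)
    D[:n_plus, n_plus:] = W
    D[n_plus:, :n_plus] = -W.conj().T
    return D
```

Choosing n_plus − n_minus = ν realizes a sector of topological charge ν; diagonalizing many such samples builds up the spectral correlations predicted by RMT.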
In order to study the spectral properties of \(\hat {D}\) in the deep infrared, it is useful to rescale the eigenvalues by the system size
since, according to Eq. (5.187), the level spacing of the scaled eigenvalues z is of order one. The so-called microscopic spectral density in the sector of topological charge ν is then defined as
where the expectation value 〈⋯ 〉_{ν} is taken with respect to the partition function \({\mathcal {Z}}_\nu \). An explicit expression for \(\rho _s^{(\nu )}(z)\) in terms of Bessel functions has been worked out by Verbaarschot and Zahed [107]
The microscopic spectral density is the sum of the distribution functions \(p_k^{(\nu )}\) of the individual scaled eigenvalues, i.e.
Chiral RMT yields predictions for these distributions. For instance, for the lowest eigenvalue in the sector with ν = 0 one obtains for N _{f} = 0
For further illustration the microscopic spectral density and the distribution functions for a few of the lowest eigenvalues are plotted in Fig. 5.16. The result for \(\rho _s^{(\nu )}(z)\) indicates that an accumulation of small eigenvalues does indeed take place. Since one considers the simultaneous limits λ → 0 and N →∞ at fixed z, a nonzero value of \(\rho _s^{(\nu )}(z)\) for finite z signals that the spectrum is packed more and more densely near the origin.
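The quenched formulae underlying these plots can be evaluated with a hand-rolled Bessel series (a sketch, adequate only for moderate z; for N _{f} = 0 the microscopic density is the Verbaarschot–Zahed result, and the lowest-eigenvalue distribution in the ν = 0 sector has a simple closed form).

```python
import math

def bessel_j(n, z, terms=40):
    """Integer-order Bessel function J_n(z) from its power series
    (sufficient for moderate z; uses J_{-n} = (-1)^n J_n)."""
    if n < 0:
        return (-1.0) ** (-n) * bessel_j(-n, z, terms)
    return sum((-1.0) ** k / (math.factorial(k) * math.factorial(k + n))
               * (z / 2) ** (2 * k + n) for k in range(terms))

def rho_s(z, nu=0):
    """Quenched microscopic spectral density (Verbaarschot-Zahed):
    rho_s(z) = (z/2) [J_nu(z)^2 - J_{nu+1}(z) J_{nu-1}(z)]."""
    return z / 2 * (bessel_j(nu, z) ** 2
                    - bessel_j(nu + 1, z) * bessel_j(nu - 1, z))

def p_lowest(z):
    """Distribution of the smallest scaled eigenvalue for N_f = 0, nu = 0:
    p(z) = (z/2) exp(-z^2/4), normalized to unity."""
    return z / 2 * math.exp(-z * z / 4)
```

For small z the density behaves as ρ_s(z) ≈ z∕2, while for large z it approaches the constant 1∕π, reflecting the dense accumulation of eigenvalues discussed above.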
Can the predictions of RMT be verified from first principles in simulations of lattice QCD? The answer is ‘yes’, provided one considers a particular kinematical situation, commonly referred to as the “𝜖-regime” of QCD. It is based on the formulation of QCD in a large but finite volume of spatial size L and for arbitrarily small quark mass. The Compton wavelength of the pion then exceeds the spatial size, and thus the 𝜖-regime is characterized by
In this particular situation the path integral of the theory is dominated by zero-momentum modes. In a symmetric finite box with volume V = L ^{4}, the minimum nonzero momentum is given by p _{min} ∝ 1∕L. Let us recall the expression for the lowest-order effective chiral Lagrangian, i.e.
where we have included the vacuum angle θ and assumed, for simplicity, a single (degenerate) quark mass m. If the quark mass m is tuned so that
the statistical weight of fields with ∂ _{μ}U ≠ 0 will be strongly suppressed in the path integral. In other words, the mass term will dominate over the kinetic term, except for fields U with ∂ _{μ}U = 0. Since \(2m\varSigma /F_0^2 = m_{\mathrm {PS}}^2\), the conditions in Eq. (5.196), which define the kinematical situation of the 𝜖regime, are equivalent to
The zero-momentum part can be represented by a constant SU(3) matrix U _{0} such that
where the field ξ incorporates the fluctuations about the zero momentum mode. According to Leutwyler and Smilga [108], the path integral of the theory in topological sector ν can be written in the form
After this somewhat lengthy preparatory discussion, the connection between QCD in the 𝜖-regime and chiral RMT can finally be established. An important result derived by Shuryak and Verbaarschot [109] states that the path integral \(Z_\nu ^{(0)}\) can be mapped exactly onto the partition function \({\mathcal {Z}}_\nu \) of RMT. One therefore expects that the low-lying eigenvalues of QCD in the 𝜖-regime are distributed in the same way as those in RMT. By computing the former in a lattice simulation and performing a comparison to the analytically known distributions in RMT, one may verify the Banks–Casher scenario of spontaneous chiral symmetry breaking.
The Neuberger–Dirac operator D _{N} of Eq. (5.47) is ideally suited for this task. Since it satisfies the Ginsparg–Wilson relation, chiral symmetry is preserved at the level of the discretized theory. Furthermore, D _{N} can be shown to satisfy an exact index theorem, so that it supports ν exact zero modes on gauge configurations with topological charge ν. This allows for an unambiguous identification of the topological sectors to which the path integral \(Z_\nu ^{(0)}\) is restricted [110]. Therefore, the investigation of spontaneous chiral symmetry breaking is a prime example where it is absolutely vital that the lattice-regularized theory obeys the same symmetries that are present in the continuum.
Before we proceed we must elucidate the relation between the spectra of the random matrix \(\hat {D}\) and of the Neuberger–Dirac operator. While the eigenvalues of \(\hat {D}\) are purely imaginary, the combination \(1-\overline {a}D_{\mathrm {N}}\) is unitary, and hence the eigenvalues of D _{N} lie on a circle with radius \(1/\overline {a}\) in the complex plane, centered around the point \(1/\overline {a}\) on the real axis. Thus, if γ denotes an eigenvalue of D _{N}, it can be parameterized as
Since the radius of the circle diverges in the continuum limit, the low-lying part of the spectrum satisfies \(\gamma \ll 1/\overline {a}\), and hence Reγ ≃ 0. One can then identify an eigenvalue μ of \(\hat {D}\) with Imγ, i.e.
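A convenient way to make this explicit, assuming the standard parameterization of the circle, is

```latex
\gamma \;=\; \frac{1}{\overline{a}}\left( 1 - e^{i\varphi} \right),
\qquad
\mu \;\simeq\; \operatorname{Im}\gamma \;=\; \frac{\sin\varphi}{\overline{a}} ,
```

so that for \(|\varphi|\ll 1\) one has \(\operatorname{Re}\gamma = (1-\cos\varphi)/\overline{a} \simeq 0\).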
A simple but effective check of the RMT description of the low-lying spectrum can be performed by comparing ratios of scaled eigenvalues. The combination γ _{k}ΣV of the kth eigenvalue in QCD corresponds to μ _{k}N in RMT. If the low-lying spectra in the two theories indeed coincide, one expects the following equalities in a given topological sector ν
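Explicitly, the expected equalities read

```latex
\frac{\langle \gamma_k \rangle_\nu}{\langle \gamma_j \rangle_\nu}
\;=\;
\frac{\langle z_k \rangle_\nu}{\langle z_j \rangle_\nu},
\qquad
\langle z_k \rangle_\nu \;=\; \int_0^\infty \mathrm{d}z \; z\, p_k^{(\nu)}(z) ,
```

where z _{k} = μ _{k}N denotes the kth rescaled eigenvalue of RMT.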
While the ratio 〈γ _{k}〉_{ν}∕〈γ _{j}〉_{ν} is determined in the simulation, the two integrals on the right-hand side can be evaluated analytically for the first few eigenvalues.
In Refs. [111, 112] ratios for some of the lowest eigenvalues have been computed in the quenched approximation. The results from [111] are shown in Fig. 5.17 for a box size L = 1.49 fm. The agreement between lattice results and RMT is excellent. By contrast, a smaller box size of about 1 fm yields significant discrepancies between QCD and RMT, which can be as large as 10 standard deviations. This reflects the fact that the large-volume limit must be taken before the RMT behaviour sets in. Similar findings have been reported for QCD with N _{f} = 2 flavours of dynamical overlap quarks [113].
The confirmation of the RMT prediction for the distribution of the low-lying eigenvalues supports the Banks–Casher scenario of spontaneous chiral symmetry breaking. In a subsequent step one may therefore extract the LEC Σ via the relation
If Σ is identified with the expectation value of the scalar density, as suggested by the effective lowenergy description of QCD, it must be related to a particular continuum scheme, like the \({\overline {{\mathrm {MS}}}}\)scheme of dimensional regularization. If the regularization prescription obeys chiral symmetry, the corresponding renormalization factor, Z _{S}, satisfies
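If chiral symmetry holds in the regularization, the scalar density renormalizes with the inverse of the quark-mass renormalization, i.e.

```latex
Z_S(g_0,\,a\mu)\; Z_m(g_0,\,a\mu) \;=\; 1 ,
```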
where Z _{m} relates the bare quark mass to the chosen continuum scheme (for instance, \({\overline {{\mathrm {MS}}}}\)). Provided that Z _{S} or, equivalently, Z _{m} has been computed for a range of bare couplings, the lattice estimates for Σ can be used to determine the renormalized condensate in units of some scale, e.g.
For the Neuberger–Dirac operator, Z _{S} has been computed nonperturbatively in the quenched approximation [114], employing the technique outlined in Ref. [115]. The resulting values for Z _{S} could then be combined with the results for Σ extracted from the matching to RMT from [111]. A subsequent extrapolation to vanishing lattice spacing yields the results for the renormalized condensate in the continuum limit:
The quoted error represents the total uncertainty arising from statistics, the uncertainty in the renormalization factor, and the continuum extrapolation. If the nucleon mass is used to set the scale the central value drops to 261 MeV, as a consequence of the scale ambiguity encountered in the quenched approximation. We stress once more that the chiral condensate is ill-defined in the quenched theory, and thus great care must be taken when the results are interpreted in the context of the full theory. Nevertheless, it is encouraging that for N _{f} = 2 flavours of dynamical quarks, a similar calculation [113] finds \(\varSigma _{{\overline {{\mathrm {MS}}}}}(2\,\text{GeV}) = (251\pm 7\pm 11\,\text{MeV})^3\) at a ≃ 0.11 fm, in good agreement with the quenched result, given the inherent ambiguities and inconsistencies of the latter.
Lattice results for the condensate have been reported by many other authors (e.g. [116,117,118,119,120,121,122,123,124,125]), employing a variety of approaches. Although the various calculations are subject to different systematics, the overall picture is rather consistent, with values for the condensate centering around (250 MeV)^{3}. As for many other quantities, the influence of lattice artefacts and renormalization effects must be studied in more detail, especially in the case of fully dynamical calculations. It is also worth mentioning that analytic nonperturbative approaches to the strong interaction, such as QCD sum rules, give results that are broadly consistent with lattice simulations within the quoted uncertainties (see e.g. [126,127,128] and references therein). This completes the consistent picture of chiral symmetry and its spontaneous breaking in QCD.
5.7 Hadronic Weak Matrix Elements
The experimental programme at the B-factories BaBar and Belle, as well as many other experiments at high-energy colliders such as the Tevatron and LEP, has greatly enhanced the accuracy of many observables related to flavour physics and the Cabibbo–Kobayashi–Maskawa (CKM) matrix. The main motivation for studying flavour physics is to gain a proper understanding of CP violation and, in turn, of the matter-antimatter asymmetry observed in the universe. CP violation is incorporated into the Standard Model via a complex phase in the CKM matrix, and therefore a precise knowledge of its elements is required to decide whether or not additional sources of CP violation must be considered.
In order to make these statements more precise we recall some basic definitions. As is well known, the CKM matrix V _{CKM} relates flavour to mass eigenstates. For flavour-changing charged-current transitions between up- and down-type quarks this implies that, in addition to the dominant transitions like u ↔ d, c ↔ s and t ↔ b, there are further transitions of lesser strength. The CKM matrix is therefore expected to possess a hierarchical structure, with the diagonal elements V _{ud}, V _{cs} and V _{tb} being of order one. An approximate parameterization that takes this into account is due to Wolfenstein [129]. By expanding V _{CKM} in powers of the Cabibbo angle V _{us}≡ λ ≃ 0.22 one obtains
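In its familiar form, quoting the expansion to \(O(\lambda^4)\) (at this order \(\bar\rho\simeq\rho\,(1-\lambda^2/2)\) and \(\bar\eta\simeq\eta\,(1-\lambda^2/2)\)), this reads

```latex
V_{\mathrm{CKM}} \;=\;
\begin{pmatrix}
1-\lambda^2/2 & \lambda & A\lambda^3(\rho - i\eta)\\[2pt]
-\lambda & 1-\lambda^2/2 & A\lambda^2\\[2pt]
A\lambda^3(1-\rho-i\eta) & -A\lambda^2 & 1
\end{pmatrix}
\;+\; O(\lambda^4),
```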
with the remaining parameters \(A,\bar \rho \) and \(\bar \eta \) of order one. In the Standard Model, V _{CKM} is unitary, and, provided that one can determine its elements with sufficient precision, any deviation from unitarity would be a signature of “new physics”. Unitarity gives rise to relations such as
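The most familiar of these relations reads

```latex
V_{ud}\,V_{ub}^{*} \;+\; V_{cd}\,V_{cb}^{*} \;+\; V_{td}\,V_{tb}^{*} \;=\; 0 ,
```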
which can be represented by a triangle. The strategy adopted in the search for hints of new physics is to use experimental and theoretical input to overconstrain unitarity relations like those in Eq. (5.210). The current status is depicted in Fig. 5.18, where the unitarity triangle is plotted in the \((\bar \rho ,\bar \eta )\)-plane [130].
The experimentally measured quantities, i.e. the mass differences ΔM _{s} and ΔM _{d}, as well as 𝜖 _{K}, which parameterizes indirect CP violation in the kaon system, serve to constrain the apex of the unitarity triangle. They are proportional to the relevant CKM matrix elements, i.e.
where G _{F} is the Fermi constant, and M _{W}, m _{t} denote the masses of the W-boson and the top quark, respectively. The proportionality factors in the above expressions involve the leptonic B-meson decay constants f _{B} and \(f_{\mathrm {B}_{\mathrm {s}}}\), as well as the B-parameters \(\hat {B}_{\mathrm {B}}\), \(\hat {B}_{\mathrm {B}_{\mathrm {s}}}\) and \(\hat {B}_{\mathrm {K}}\), which in turn parameterize the transition amplitudes for \(B^0\bar {B}^0\), \(B_s^0\bar {B}_s^0\), and \(K^0\bar {K}^0\) mixing. While the decay constants are difficult to measure with sufficient accuracy, because the leptonic decay rates are suppressed, the B-parameters are not accessible in experiment at all. One must therefore resort to theoretical estimates of these quantities. Since nonperturbative effects must inevitably be included, lattice simulations of QCD are ideally suited for this task.
Lattice calculations of weak hadronic matrix elements constitute a major activity within the lattice community, and a thorough coverage of all aspects would easily fill an entire chapter. We shall therefore concentrate on some of the most important quantities and point out the main conceptual issues. The reader is strongly encouraged to consult the regular reviews of the topic at the annual lattice conferences, e.g. [131,132,133,134,135].
5.7.1 Weak Matrix Elements in the Kaon Sector
In the kaon sector, \(K^0\bar {K}^0\) mixing is one of the most important processes. The B-parameter B _{K} parameterizes the nonperturbative contribution to indirect CP violation. It is defined by the ratio of the relevant operator matrix element to its value in the so-called “vacuum saturation approximation”:
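In standard continuum notation this definition amounts to

```latex
B_K(\mu) \;=\;
\frac{\langle \bar K^0 |\, Q^{\Delta S=2}(\mu)\, | K^0 \rangle}
     {\tfrac{8}{3}\, f_K^2\, m_K^2},
\qquad
Q^{\Delta S=2} \;=\;
\big(\bar s\, \gamma_\mu (1-\gamma_5)\, d\big)\,
\big(\bar s\, \gamma^\mu (1-\gamma_5)\, d\big) ,
```

where the denominator is the value of the matrix element in the vacuum saturation approximation.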
Here, μ denotes the renormalization scale at which the ΔS = 2 four-quark operator Q ^{ΔS=2}, defined by
is considered. The relation between 𝜖 _{K} and the CKM matrix elements is provided by the RG-invariant B-parameter \(\hat {B}_{\mathrm {K}}\). In NLO perturbation theory \(\hat {B}_{\mathrm {K}}\) is related to B _{K}(μ) via
where γ _{0}, γ _{1} denote the coefficients in the perturbative expansion of the anomalous dimension of Q ^{ΔS=2}. Since QCD is parity-conserving, the physically relevant operator in the above expression is the parity-even combination O _{VV+AA}. The typical left-handed chiral structure of this operator, which is characteristic of weak transitions, poses a problem for lattice calculations if Wilson fermions are employed. In this case the discretization breaks chiral symmetry explicitly, and thus O _{VV+AA} mixes under renormalization with operators of opposite chirality. Therefore, the general renormalization pattern is
Thus, in order to determine the physical matrix element, one must not only determine the overall renormalization factor Z, but also the mixing coefficients Δ_{i}. Several techniques have been developed [136,137,138] to address this problem, which is merely an inconvenience rather than a serious obstacle. In a formulation based on staggered fermions the problem is absent, since the remnant U(1) ⊗ U(1) symmetry protects the operator from mixing with other chiralities. However, a drawback of the staggered formulation is the broken flavour (“taste”) symmetry, which may lead to significant complications [139]. Fermionic discretizations based on the Ginsparg–Wilson relation, such as domain wall or overlap fermions, do not suffer from the mixing problem, whilst preserving all flavour symmetries. Finally, the mixing problem can also be circumvented for Wilson-like discretizations in the context of twisted-mass QCD [140, 141]. With the help of a suitably chosen flavour rotation (see Eq. (5.51)), the matrix element of O _{VV+AA} in QCD can be mapped exactly onto that of the parity-odd operator O _{VA+AV} in the chirally twisted theory, viz.
It has been shown that O _{VA+AV} renormalizes purely multiplicatively [142], i.e. all mixing coefficients vanish. The overall multiplicative, scale-dependent renormalization factor of O _{VA+AV} which yields the physical matrix element has been determined nonperturbatively [143], using the finite-size scaling procedure based on the Schrödinger functional formalism described in Sect. 5.5.2.
We now give a summary of the current status of B _{K}. Here, the calculation by the JLQCD Collaboration [154], based on staggered quarks in the quenched approximation, has served as a benchmark result for a long time. Their result, for which the perturbatively renormalized matrix element was extrapolated to the continuum limit, has since been confirmed by many other calculations employing different fermionic discretizations and different renormalization techniques. These include domain wall [148, 149] and overlap quarks [150, 151], as well as the Wilson formulation [153, 155]. Moreover, a calculation employing twisted mass QCD has been completed [152], which includes nonperturbative renormalization and a thorough investigation of the continuum limit.
Recently, results for B _{K} from simulations with dynamical quarks have become available, both for N _{f} = 2 [146, 147] and N _{f} = 2 + 1 flavours [144, 145]. A compilation of quenched and unquenched results is shown in Fig. 5.19. Although the figure suggests a trend in the data towards slightly lower estimates for \(\hat {B}_{\mathrm {K}}\) when dynamical quarks are switched on, the quoted uncertainties are still too large to establish a significant deviation. In particular, a systematic study of the continuum limit in the unquenched case is not yet available. It is interesting to compare the results for \(\hat {B}_{\mathrm {K}}\) to the non-lattice determination of Ref. [130]. Here, the determinations of the angles of the unitarity triangle from experimental data, in conjunction with direct measurements of ΔM _{d}, ΔM _{s} and 𝜖 _{K}, make it possible to fit the values of several of the quantities in Eq. (5.211) which incorporate the hadronic uncertainties. In this way one obtains \(\hat {B}_{\mathrm {K}}^{\text{non-lattice}} = 0.94\pm 0.17\), which is shown as the vertical band in Fig. 5.19. Clearly, within the rather large error margins, this result is compatible with all lattice determinations, quenched or unquenched.
First Row Unitarity and the Value of V _{us}
In addition to Eq. (5.210), the unitarity of the CKM matrix implies many other constraints on its elements, such as those which appear in the first row:
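Explicitly, the first-row constraint reads

```latex
|V_{ud}|^2 \;+\; |V_{us}|^2 \;+\; |V_{ub}|^2 \;=\; 1 .
```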
Owing to the smallness of V _{ub}, i.e. |V _{ub}|^{2} ≃ 2 ⋅ 10^{−5}, the direct verification of first-row unitarity with the current experimental and theoretical accuracy rests on precise knowledge of V _{ud} and V _{us}. The value of V _{ud} can be determined with high accuracy from superallowed nuclear β-decays (0^{+} → 0^{+} transitions), and in the current edition of the particle data book the best estimate is quoted as [61]
The value of V _{us} can be extracted from the decay rate of K _{ℓ3} transitions, i.e.
where \(f_{+}^{K\pi }\) is one of the two form factors which parameterize the hadronic matrix element for semileptonic K → πℓν _{ℓ} transitions, i.e.
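In the usual decomposition this matrix element reads

```latex
\langle \pi(p') |\, \bar s \gamma_\mu u \,| K(p) \rangle
\;=\;
f_{+}^{K\pi}(q^2)\, (p+p')_\mu
\;+\;
f_{-}^{K\pi}(q^2)\, (p-p')_\mu ,
\qquad q = p - p' .
```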
In order to arrive at a precise estimate for V _{us}, \(f_{+}^{K\pi }(q^2)\) must be determined with an accuracy at the level of 1%, since the decay rate, and hence the combination \(V_{us}{ }^2\,[f_{+}^{K\pi }]^2 \), can be measured rather precisely. The form factor \(f_{+}^{K\pi }\) admits a chiral expansion; at zero momentum transfer it reads
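Since the Ademollo–Gatto theorem protects \(f_{+}^{K\pi}(0)\) against first-order SU(3)-breaking corrections, the expansion takes the form

```latex
f_{+}^{K\pi}(0) \;=\; 1 \;+\; f_2 \;+\; f_4 \;+\; \cdots ,
```

where \(f_{2n}\) denotes the correction of chiral order \(p^{2n}\).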
While the leading chiral correction, f _{2} = −0.023, was computed long ago [156], knowledge of f _{4} and the higher corrections is still fairly limited. The strategy pursued in lattice calculations [157] is based on computing the quantity
which is a measure of the contributions beyond leading order. An old phenomenological estimate by Leutwyler and Roos [158] yields the value Δf = −0.016(8). It is clearly desirable to check this result and ultimately replace it by a modelindependent estimate based on QCD.
Semileptonic form factors can be determined in lattice simulations by computing suitable threepoint correlation functions, in which the initial and final hadronic states are projected onto nonvanishing momentum. The main issues that must be addressed in order to judge the accuracy of the form factor determination are listed in the following:
- The dependence of the form factors on the momentum transfer q ^{2} must be modelled, in order to interpolate their values to q ^{2} = 0. Typical ansätze for the interpolation include linear or quadratic functions of q ^{2}, as well as formulae based on pole dominance [159]. The freedom of choosing a particular ansatz introduces a certain ambiguity, since different model functions yield slightly different results. Via the introduction of so-called twisted boundary conditions [159,160,161,162,163,164], the q ^{2} resolution of form factors can be significantly improved;
- As for all quantities involving pions, a chiral extrapolation of lattice results must be performed. Clearly, in order to obtain \(f_{+}^{K\pi }(0)\) and hence Δf with small, controlled errors, a reliable chiral extrapolation is perhaps the single most important issue. Thus, the ability to simulate as deeply as possible in the chiral regime will be decisive for the final accuracy;
- Other systematic uncertainties include control over lattice artefacts, which is closely related to the renormalization of local operators, such as the vector current, which appears in Eq. (5.220). If chiral symmetry is broken explicitly, the (local) vector current is not conserved, and in order to guarantee a smooth approach to the continuum limit, its renormalization factor, Z _{V}, must be included. However, in all recent simulations the form factor has been extracted from suitably chosen ratios in which Z _{V} drops out.
A compilation of recent results for the form factor \(f_{+}^{K\pi }(0)\) and the quantity Δf is presented in Table 5.4, where they are compared to analytical estimates. The agreement with the old result by Leutwyler and Roos is quite striking. Despite a tendency among the more recent analytical calculations to produce slightly larger estimates for Δf, all results are in good agreement within the quoted uncertainties.
An alternative method to determine V _{us} from experimental data was proposed by Marciano [170]. Instead of considering semileptonic decays, it is based on the leptonic decay rates, i.e.
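Schematically, the ratio of the radiation-inclusive leptonic decay rates takes the form (here \(\delta_{\mathrm{em}}\) is an illustrative placeholder for the radiative-correction factor)

```latex
\frac{\Gamma(K \to \mu \bar\nu_\mu(\gamma))}{\Gamma(\pi \to \mu \bar\nu_\mu(\gamma))}
\;=\;
\frac{|V_{us}|^2}{|V_{ud}|^2}\,
\frac{f_K^2}{f_\pi^2}\,
\frac{m_K \left( 1 - m_\mu^2/m_K^2 \right)^2}
     {m_\pi \left( 1 - m_\mu^2/m_\pi^2 \right)^2}\,
\big( 1 + \delta_{\mathrm{em}} \big) .
```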
Hence, the task is to provide an input value for the ratio of decay constants, f _{K}∕f _{π}. This quantity is well-suited for lattice calculations in several respects: first, ratios of quantities can be computed with high statistical accuracy, owing to the fact that the fluctuations in the numerator and denominator are correlated. Second, the renormalization factor of the axial current, Z _{A}, drops out in the ratio f _{K}∕f _{π}. However, since the quantity of interest involves a chiral extrapolation, the same caveats as in the case of the pion form factor apply here. In particular, it is mandatory to get as close as possible to the physical mass of the pion. The quenched approximation is clearly of very limited value in this context, since the chiral behaviour, and hence the actual value of f _{K}∕f _{π}, may depend strongly on the number of active sea quarks. Furthermore, it is known that in the continuum limit of the quenched approximation the value of f _{K}∕f _{π} is underestimated by about 10% [171].
Recent results for f _{K}∕f _{π} in lattice QCD with dynamical quarks are listed in Table 5.5. A caveat that applies to all such compilations is that systematic errors are not estimated in a uniform manner. For instance, none of the listed results (with the exception of [52]) is based on a systematic scaling study aimed at separating cutoff effects from the actual mass dependence, although the influence of lattice artefacts has been accounted for in some error estimates by including cutoff effects in a generalized chiral fit. Moreover, not all of the listed values of f _{K}∕f _{π} include finite-volume corrections, which can be computed in ChPT and incorporated into the ansatz for the chiral fit [177, 178]. Despite these caveats, the estimates for f _{K}∕f _{π} based on fits including pion masses well below 500 MeV appear to be compatible with each other.
5.7.2 Weak Matrix Elements in the Heavy Quark Sector
The main obstacle for calculations of weak matrix elements involving heavy quarks, and in particular the b-quark, is that one is faced with a multi-scale problem. In Sect. 5.2.5 we have already discussed systematic effects in lattice calculations that arise from finite-size effects and lattice artefacts. Translating the relations in (5.79) directly to the b-quark sector, one finds that the following inequalities cannot be satisfied simultaneously, at least not with the currently available computer power:
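Written out, the three conditions are, schematically (the numerical bound on \(L/a\) is merely an illustrative assumption about present-day lattice sizes),

```latex
a\, m_b \;\ll\; 1,
\qquad
m_\pi L \;\gg\; 1,
\qquad
L/a \;\lesssim\; O(100) .
```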
Violation of the first relation implies the presence of large lattice artefacts, the second inequality must be satisfied if one wants to avoid uncontrolled finite-volume effects, and the third is dictated by the memory capacities of current computers. With a b-quark mass of m _{b} ≈ 4 GeV and typical inverse lattice spacings of \(a^{-1}\;\lesssim \;4.5\,\text{GeV}\), it is evident that the b-quark cannot be studied directly, since its Compton wavelength is smaller than, or of the same order of magnitude as, the lattice spacing itself.
Several strategies for dealing with this problem have been applied over many years, among them the “static approximation” [179], the nonrelativistic formulation (NRQCD) [180], the so-called “Fermilab approach” [181] and finite-size scaling techniques [182, 183].
Since the charm quark is lighter than the b-quark by roughly a factor of three, one may attempt to treat charm as a fully relativistic, propagating quark in simulations. Still, one can incur large lattice artefacts in this way, and a careful extrapolation to the continuum limit is then required. Such an extrapolation may be spoilt, however, if the leading lattice artefacts cannot be isolated in the results, owing to the relatively large mass of the charm quark. If one has reason to trust the results obtained for relativistic charm quarks, one may extrapolate them to the mass of the b-quark, which is yet another way of circumventing the problem that the b-quark is too heavy to be treated relativistically. Typically, the ansatz for the extrapolation of a particular quantity to the mass of the b-quark is motivated by its expected quark-mass dependence in Heavy Quark Effective Theory (HQET).
In the static approximation the b-quark is assumed to be infinitely heavy [179]. In this formalism it is convenient to represent the b-quark by a pair of spinors, \((\psi _h,\psi _{\bar h})\), which propagate forward and backward in time, respectively, and which satisfy
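The constraints in question are the usual static projections,

```latex
P_{+}\,\psi_h = \psi_h,
\qquad
P_{-}\,\psi_{\bar h} = \psi_{\bar h},
\qquad
P_{\pm} = \tfrac{1}{2}\,(1 \pm \gamma_0) .
```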
While the field ψ _{h} annihilates a heavy quark, \(\psi _{\bar h}\) creates a heavy antiquark. The dynamics of these fields in the discretized version of the theory is described by the Eichten–Hill action [184]
where \(\nabla _0,\,\nabla _0^*\) denote the forward and backward covariant lattice derivatives in the temporal direction. Although the numerical computation of the quark propagator based on the Eichten–Hill action is relatively “cheap”, simulation results in the static approximation typically suffer from relatively large statistical noise. Without going into detail we note that the signal-to-noise ratio can be significantly improved if one replaces the temporal link variables in ∇_{0} and \(\nabla _0^*\) by suitably chosen generalized parallel transporters. A full account can be found in Ref. [185].
Obviously, the static approximation represents only the leading term in an expansion of the quark action in inverse powers of the heavy quark mass, and thus one expects corrections in powers of 1∕m _{h}. As described in Ref. [182], one can set up a formalism in which the leading corrections to physical observables can be systematically computed as operator insertions in correlation functions defined with respect to the static action S ^{stat}. Again, we refrain from describing any further details and refer the reader to the original literature [182, 186].
Higher-order corrections to the static approximation can also be incorporated into the theory by adding the appropriate 1∕m _{h} terms to the action itself. In this way one obtains a nonrelativistic version of QCD (NRQCD) [180], in which the mass of the heavy quark is imposed as a cutoff on relativistic momentum modes, i.e.
where v denotes the four-velocity of the heavy quark. Heuristically, the introduction of the cutoff is justified since the typical internal momentum modes of hadrons containing a heavy quark are much smaller than the mass of the latter. The loss of relativistic states can be compensated by adding new local interaction terms, order by order in p∕m _{h} ∼ v, to \({\mathcal {L}}_h^{\text{stat}}\) and \({\mathcal {L}}_{\bar h}^{\text{stat}}\). In general, these additional interaction terms will generate mixing between quark and antiquark. However, by applying a Foldy–Wouthuysen transformation, the fields can be decoupled. At the level of the classical theory, the 1∕m _{h} correction to the NRQCD Lagrangian for the forward-propagating field reads
and D is the vector of the covariant derivatives in the spatial directions.
In the quantized version of the theory, the coefficients which multiply the fields in the above expression become dependent on the gauge coupling and must be appropriately tuned to guarantee the correct matching of the nonrelativistic theory to standard QCD at order 1∕m _{h}. Thus, the lattice-regularized version of the 1∕m _{h} correction reads
where \({\boldsymbol {\hat B}}\) denotes a lattice representation of the magnetic field. The coefficients ω _{1} and ω _{2} are formally of order 1∕m _{h} and are found to be linearly divergent in the lattice spacing a. Therefore, at a given order in the nonrelativistic expansion of the action, a finite cutoff must be kept, and in this sense the effective theory is nonrenormalizable. All this implies that in NRQCD the continuum limit, a → 0, cannot be taken. Instead, one must argue that lattice artefacts are small in the range of lattice spacings where the calculations are performed.
Another approach is based on the idea that the Wilson fermion action can be suitably adapted for heavy quarks, such that the Wilson quark propagator does not deviate from the continuum behaviour even for quark masses near or above the cutoff [181]. According to Ref. [181] this can be achieved by modifying the normalization of the quark fields (see Eq. (5.36)) in the discretized lattice theory, i.e.
where the “pole mass” am _{P} of the Wilson propagator is given by
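In the standard Wilson conventions this relation reads

```latex
a m_{\mathrm{P}} \;=\; \ln\!\left( 1 + a m \right) ,
```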
and am denotes the bare subtracted quark mass in the Wilson theory (see Eq. (5.39)). The factor \(\sqrt {2\kappa }\,{\mathrm {e}}^{am_{\mathrm {P}}/2}\) is designed to interpolate smoothly between the relativistic and nonrelativistic regimes. As a consequence, in order to cancel the effects of large quark masses in hadronic matrix elements involving b-quarks, the normalization of the quark fields is modified according to the above prescription. The so-called “Fermilab approach” to heavy quark physics on the lattice is based on the normalization in Eq. (5.230). Essentially it amounts to formulating an effective theory for quarks whose spatial momenta are small, \(a\vec {p}\ll 1\), with mass-dependent coefficients. As in the case of the static approximation, the formalism allows the continuum limit to be taken. Approaches related to the Fermilab method have been presented in Refs. [187, 188].
Finally, we briefly introduce another strategy for dealing with heavy quarks on the lattice and the related multi-scale problem [182, 183]. Here the condition m _{π}L ≫ 1 in Eq. (5.224) is sacrificed in favour of am _{b} ≪ 1. In this way one is able to accommodate a fully relativistic b-quark, at the expense of having to deal with strong finite-volume effects. The key observation is that the “distortion” due to unphysically small volumes can be computed in a series of finite-size scaling steps, which relate the results obtained on a sequence of lattice sizes L _{0}, L _{1}, …. As in the case of the nonperturbative determination of the RG running of the coupling and the quark mass discussed in Sect. 5.5.2, one can set up a recursive finite-size scaling procedure which traces the volume dependence of observables. In practice, two or three steps in the scaling sequence are usually sufficient.
In the remainder of this section we discuss a few selected results. Given the vast number of individual results, we do not attempt to provide a complete review of the current status of lattice calculations of weak matrix elements in the heavy quark sector; regular appraisals of the progress made in studying these systems can be found in the rapporteur talks on the subject at the annual conferences on lattice field theory [132, 133, 135]. Instead, we discuss the relation between CKM matrix elements and the quantities that must be computed in order to extract the former from experimental data without resorting to model assumptions.
Heavy-Light Decay Constants
From Eq. (5.211) and Fig. 5.18 one infers that the ratio \(\xi \equiv f_{\mathrm {B}_{\mathrm {s}}}\sqrt {\hat {B}_{\mathrm {B}_{\mathrm {s}}}}/f_{\mathrm {B}}\sqrt {\hat {B}_{\mathrm {B}}}\) of decay constants and B-parameters is a key quantity, since it links ΔM _{s}∕ ΔM _{d} to the ratio |V _{ts}|^{2}∕|V _{td}|^{2} of CKM matrix elements. Typically, one determines decay constants and B-parameters separately, since the former can easily be extracted from hadronic two-point functions, while the latter may undergo complicated mixing patterns, depending on the fermionic discretization. The decay constant of, say, a B ^{+} meson is defined via the matrix element of the heavy-light axial current, i.e.
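Up to phase conventions, the defining relation is

```latex
\langle 0 |\, A_\mu(0)\, | B^{+}(p) \rangle \;=\; i f_{\mathrm B}\, p_\mu,
\qquad
A_\mu \;=\; \bar b\, \gamma_\mu \gamma_5\, u .
```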
If the matrix element on the right-hand side is computed in a lattice simulation, the axial current defined in the discretized theory must be matched to its counterpart in the continuum formulation. The details of the matching procedure depend on the type of fermionic discretization and on the chosen representation of the heavy-light axial current on the lattice (e.g. static approximation, NRQCD, etc.). If the b-quark is treated in the static approximation, the axial current has a nonvanishing anomalous dimension, and hence its running must be determined as well. Therefore, the various techniques which have been developed to compute the renormalization factors of local operators nonperturbatively are of particular relevance also in the study of heavy-light decay constants [189]. In particular, nonperturbative estimates for the renormalization factor of the axial current, Z _{A}, are required to ensure a smooth convergence towards the continuum limit.
We now present results for f _{B} and \(f_{\mathrm {B}_{\mathrm {s}}}\). From Chiral Perturbation Theory one expects that the bulk of the SU(3) flavour-breaking effect in ξ (i.e. the deviation of ξ from unity) is carried by the decay constants. The full expression at NLO for \(f_{\mathrm {B}_{\mathrm {s}}}/f_{\mathrm {B}}\) reads [190]
where \(I_{\mathrm {P}}(m_{\mathrm {PS}})=m_{\mathrm {PS}}^2\ln (m_{\mathrm {PS}}^2/\mu ^2)\), f _{2} is a low-energy constant, and g ^{2} is the strength of the B ^{∗}Bπ vertex. As was pointed out by Kronfeld and Ryan [191], the contribution from the chiral logarithms can be sizeable, so that a naïve linear extrapolation of lattice data from the region of the strange quark mass tends to underestimate \(f_{\mathrm {B}_{\mathrm {s}}}/f_{\mathrm {B}}\). By contrast, the corresponding ratio \(B_{\mathrm {B}_{\mathrm {s}}}/B_{\mathrm {B}}\) is expected to be close to one, since the coefficient of the chiral logarithm nearly vanishes. Since \(f_{\mathrm {B}_{\mathrm {s}}}/f_{\mathrm {B}}\) enters directly into fits to the CKM parameters, many attempts have been made to pin down its value precisely. As in the case of f _{K}∕f _{π} discussed earlier, the main issue for lattice calculations is whether the quark masses employed in simulations are small enough to allow for a controlled chiral extrapolation based on the NLO formulae. The influence of the chiral logarithms has so far been detected only in simulations based on N _{f} = 2 + 1 flavours of rooted staggered quarks. Using NRQCD to treat the b-quark, the authors of [192] find
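The curvature carried by the chiral logarithm, which a linear extrapolation misses, can be made explicit by evaluating I _{P} directly. The scale choice below is an assumption for illustration only:

```python
import math

def I_P(m2, mu2):
    """Chiral logarithm I_P(m_PS) = m_PS^2 * ln(m_PS^2 / mu^2)."""
    return m2 * math.log(m2 / mu2)

mu2 = 1.0  # renormalization scale squared in GeV^2 (assumption)
# A few squared pseudoscalar masses (GeV^2), roughly pion-to-kaon range:
points = [0.1, 0.2, 0.3]
vals = [I_P(m2, mu2) for m2 in points]

# For a function linear in m2 the middle value would equal the average of
# the outer two; the nonzero difference quantifies the curvature that a
# naive linear fit from the strange-quark region cannot reproduce.
curvature = vals[1] - 0.5 * (vals[0] + vals[2])
```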
where the first error is statistical, while the second is an estimate of the systematic uncertainty. This result awaits confirmation from simulations with sea quark masses as small as those used in [192], but employing different fermionic discretizations, both in the sea and valence quark sectors. This is of particular relevance, since the typical spread among recently published results is of the same order as, or even larger than, the uncertainty quoted above. Further discussions and compilations of lattice data for \({f_{\mathrm {B}_{\mathrm {s}}}}/{f_{\mathrm {B}}}\) can be found in [133, 193].
Estimates for the absolute values of heavy-light decay constants are also highly desirable, especially since f _{B} is hard to determine experimentally, even at the B-factories, because the B → τν _{τ} decay rate is suppressed. For \(f_{\mathrm {B}_{\mathrm {s}}}\) the suppression is even stronger, and thus the prospects for an experimental determination of this quantity are extremely uncertain. The main issues facing lattice calculations are the influence of lattice artefacts in conjunction with the renormalization of the axial current, and the dependence of results on the number of dynamical quark flavours.
As an example of one of the most advanced quenched calculations of \(f_{\mathrm {B}_{\mathrm {s}}}\) we shall briefly discuss the result by the ALPHA collaboration [198], which also illustrates the interplay between various methods to treat the b-quark. In Ref. [198] the results obtained in the static approximation were combined with data computed around the mass of the charm quark. Provided that estimates for the decay constants in both datasets have been extrapolated to the continuum limit, a subsequent interpolation in the heavy quark mass yields the desired result for \(f_{\mathrm {B}_{\mathrm {s}}}\). The ansatz for the interpolation is based on the expression
where f _{PS} is a generic heavy-light decay constant, γ, δ are real constants, and the factor C _{PS} arises from the matching between the static approximation and QCD with fully relativistic quarks. Thus, using the static approximation as the limiting case removes the systematic error due to the uncontrolled extrapolation to the mass of the b-quark. The resulting estimate for \(f_{\mathrm {B}_{\mathrm {s}}}\) is [198]
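The interpolation strategy can be sketched as a linear fit in the inverse heavy mass for the matched quantity Φ(m) = f _{PS}√m _{PS}∕C _{PS}, with the static point sitting at 1∕m = 0. All numbers below are synthetic, chosen only to show the mechanics:

```python
import numpy as np

# Synthetic "data" for Phi(m) = f_PS * sqrt(m_PS) / C_PS (GeV^(3/2)):
# one static point at 1/m = 0 and two points around the charm region.
inv_m = np.array([0.0, 1/1.8, 1/2.2])   # 1/m_PS in GeV^-1 (illustrative)
phi   = np.array([0.95, 0.70, 0.73])    # illustrative values, not real data

# Fit Phi = gamma + delta / m_PS, then interpolate to the B_s mass,
# so no extrapolation beyond the data range is needed.
delta, gamma = np.polyfit(inv_m, phi, 1)
m_Bs = 5.37                             # GeV
phi_Bs = gamma + delta / m_Bs
```

The essential point is that the physical b-mass lies between the static limit and the charm region, so the fit is an interpolation rather than an extrapolation.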
Non-perturbative renormalization has been employed in both the static approximation and the relativistic formulation. Except for the unknown systematic error due to quenching, the quoted error contains all uncertainties. The above result has been confirmed by the approach based on the finite-size scaling method [199].
Turning now to unquenched simulations, we compare the above value to the result by the HPQCD Collaboration [192], which was obtained using NRQCD for the b-quark, while N _{f} = 2 + 1 rooted staggered quarks were used as sea quarks. Here, the estimate for \(f_{\mathrm {B}_{\mathrm {s}}}\) results from a combination of the value for f _{B} and the ratio \(f_{\mathrm {B}_{\mathrm {s}}}/f_{\mathrm {B}}\) already quoted in Eq. (5.234). In this way one obtains
Thus, in spite of the large error, it appears that the inclusion of dynamical quark effects increases the estimate for heavy-light decay constants. This is also supported by other simulations. For instance, using their simulation results in quenched QCD and with N _{f} = 2 flavours of dynamical Wilson quarks, the CP-PACS Collaboration find [203]
The “non-lattice” determination of \(f_{\mathrm {B}_{\mathrm {s}}}\) via fits using the experimental results for the angles of the unitarity triangle as input [130] also points to a larger value compared to the quenched theory, as can be seen from the horizontal band in the compilation in Fig. 5.20.
In current unquenched simulations, systematic effects such as lattice artefacts and the renormalization of local operators are not yet controlled at a level similar to that achieved in the quenched approximation. Thus, despite the fact that these calculations are much more “realistic” in that they include sea quarks, the quoted overall uncertainties are still relatively large.
B-Parameters \(\hat {\boldsymbol {B}}_{\mathrm {B}_{\mathrm {d}}}\) and \(\hat {\boldsymbol {B}}_{\mathrm {B}_{\mathrm {s}}}\)
Following the recent experimental determination of the mass difference ΔM _{s} at the Tevatron [211, 212], lattice determinations of the B-parameters \(\hat {B}_{\mathrm {B}_{\mathrm {d}}}\) and \(\hat {B}_{\mathrm {B}_{\mathrm {s}}}\) have received much attention. Although the first calculations date back to the 1990s, relatively few results are available, due to several specific technical difficulties. First, the complicated renormalization and mixing patterns of four-quark operators which afflict lattice calculations of the kaon B-parameter \(\hat {B}_{\mathrm {K}}\) are also encountered in the b-quark sector. Second, there is the added complication which arises from the fact that the b-quark cannot be simulated directly.
In Table 5.6 we list published results for \(B_{\mathrm {B}_{\mathrm {d}}}(m_b)\) and the ratio \(B_{\mathrm {B}_{\mathrm {s}}}/B_{\mathrm {B}_{\mathrm {d}}}\) from a variety of methods to treat the heavy quark. The table shows that all results are broadly consistent with each other at the level of 10%, despite the different systematics. However, none of the listed estimates is based on non-perturbative renormalization factors, and furthermore all entries have been computed for a fixed value of the lattice spacing, i.e. a systematic study of the continuum limit is lacking even in the quenched approximation. As for the ratio \(B_{\mathrm {B}_{\mathrm {s}}}/B_{\mathrm {B}_{\mathrm {d}}}\), it should be mentioned that the quark masses in the simulations correspond to pion masses not much smaller than 500 MeV. However, in view of the fact that the bulk of the relevant SU(3) flavour-breaking effect in ΔM _{s}∕ ΔM _{d} is expected to come from the ratio of decay constants, \(f_{\mathrm {B}_{\mathrm {s}}}/f_{\mathrm {B}_{\mathrm {d}}}\), this may not be such a serious limitation. Results for \(B_{\mathrm {B}_{\mathrm {d}}}\) and \(B_{\mathrm {B}_{\mathrm {s}}}\) computed on dynamical gauge configurations with rooted staggered quarks should be published soon.
Another recent development is the implementation of non-perturbative renormalization for heavy-light four-quark operators in the static approximation [220, 221]. If the b-quark is treated in the static approximation, the ΔB = 2 four-quark operator must be matched to its counterpart in the static theory, i.e.
where
with ℓ denoting the light (d or s) flavour. For the physical matrix element only the parity-even operators \(\widetilde {O}_{\mathrm {VV+AA}}\) and \(\widetilde {O}_{\mathrm {SS+PP}}\) are relevant. If chiral symmetry is not preserved by the discretization, four-quark operators such as \(\widetilde {O}_{\mathrm {VV+AA}}\) undergo complicated mixing patterns under renormalization, which necessitate finite subtractions similar to those required for the operator O _{VV+AA} in Eq. (5.215). However, just as in the case of \(K^0\bar K^0\) mixing, the parity-even operators \(\widetilde {O}_{\mathrm {VV+AA}}\) and \(\widetilde {O}_{\mathrm {SS+PP}}\) can be mapped onto their parity-odd counterparts \(\widetilde {O}_{\mathrm {VA+AV}}\) and \(\widetilde {O}_{\mathrm {SP+PS}}\) by a flavour rotation, which realizes the transition to tmQCD at maximal twist angle. Moreover, it can be shown [220] that the combinations
renormalize purely multiplicatively. The RG running of these operators, as well as the matching to hadronic schemes based on tmQCD, have been determined non-perturbatively in the SF scheme for N _{f} = 0 [221] and N _{f} = 2 [222], which will eventually allow for a determination of \(\hat {B}_{\mathrm {B}_{\mathrm {s}}}\) and \(\hat {B}_{\mathrm {B}}\) with full control over renormalization and discretization effects. Corrections of order 1∕m _{b} can be taken into account through an interpolation between the results obtained in the static approximation and for relativistic heavy quarks with masses in the region of that of the charm quark.
Semi-Leptonic B-Decays
The CKM elements V _{ub} and V _{cb}, which appear in the unitarity triangle relation, Eq. (5.210), can be extracted from both inclusive and exclusive B-meson decays. However, V _{ub} is still one of the most poorly constrained CKM elements. Its value can be determined by combining lattice calculations of semileptonic form factors for exclusive decays such as \(\bar {B}^0\to \pi ^{+}\ell ^{-}\bar \nu _\ell \) with the experimentally measured decay rate. If the leptons are assumed to be massless, the latter yields the combination [V _{ub} f _{+}(q ^{2})]^{2}, while the form factor f _{+}(q ^{2}) can be extracted from the matrix element
Here, q _{μ} ≡ (p _{B} − p _{π})_{μ} denotes the momentum transfer. For a B-meson at rest one has
In order to avoid large lattice artefacts, typical values of the pion momentum in simulations are restricted to
Therefore, lattice calculations typically yield the form factors f _{+} and f _{0} near \(q^2=q^2_{\max }\). By contrast, the bulk of the experimental data is recorded in bins with small values of q ^{2}, since the decay rate is suppressed near \(q_{\max }^2\). Hence, an extrapolation to small values of q ^{2} must be performed, which requires an ansatz for the shape of the form factor. Although a parameterization of the q ^{2}-dependence which goes beyond vector pole dominance and is also consistent with the expected heavy-quark scaling laws has been proposed [223], the extrapolation to small momentum transfers typically introduces some model dependence in the result for V _{ub}.
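A widely used ansatz of the kind referred to here is the Becirevic–Kaidalov-type pole form; the sketch below shows how such a parameterization carries lattice data from the high-q ^{2} region down to q ^{2} = 0. The parameter values are assumptions for illustration, not fitted results:

```python
def f_plus(q2, f0, alpha, m_Bstar2):
    """BK-type ansatz: f_+(q^2) = f(0) / ((1 - x) * (1 - alpha*x)),
    with x = q^2 / m_B*^2 (vector pole plus an effective second pole)."""
    x = q2 / m_Bstar2
    return f0 / ((1.0 - x) * (1.0 - alpha * x))

# Illustrative parameters (assumptions): f(0), shape parameter alpha,
# and the B* pole mass squared in GeV^2.
f0, alpha, m_Bstar2 = 0.25, 0.40, 5.325**2

# Lattice data constrain the region near q2_max (~20 GeV^2 and above);
# the ansatz extrapolates the shape to q^2 = 0, where the model
# dependence discussed in the text enters.
low  = f_plus(0.0, f0, alpha, m_Bstar2)    # equals f(0) by construction
high = f_plus(20.0, f0, alpha, m_Bstar2)   # steep rise towards the pole
```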
Figure 5.21 shows a compilation of lattice data for the form factors as a function of q ^{2}, together with curves representing the extrapolations to q ^{2} = 0. The model dependence introduced by the extrapolation to small momentum transfer can be avoided by combining form factors from lattice simulations with the decay rate measured in restricted intervals of q ^{2} which overlap with the range of momentum transfers directly accessible in simulations. Such a procedure has been performed by the CLEO Collaboration [231]. The result for V _{ub} obtained in this way is somewhat smaller than that from the standard method based on form factor extrapolations, but the uncertainties are still quite large. For the actual estimates of V _{ub} obtained in this way, the reader may consult the original papers.
Semileptonic heavy-to-heavy decays such as \(\bar {B}\to (D,D^*)\ell \bar \nu _\ell \) offer a way to determine V _{cb}. In this case it is convenient to use the four-velocities of the two mesons as the kinematical variables instead of the four-momenta. The decay amplitudes are then parameterized in terms of six form factors, i.e.
where ω = v⋅v ^{′}. In the limit of infinite heavy quark mass, four out of these six form factors can be replaced by a single, universal form factor, ξ(ω), which is called the Isgur-Wise function [232]
while h _{−}(ω) and \(h_{\mathrm {A}_2}(\omega )\) vanish as m _{b}, m _{c} become infinitely heavy. Outside the exact heavy-quark limit, the relation between the Isgur-Wise function and the form factors is modified. For instance,
where β _{+}, γ _{+} parameterize radiative corrections and corrections arising from operators of higher dimension, which are suppressed by additional inverse powers of the heavy quark mass. Similar relations hold for \(h_{\mathrm {A}_1}, h_{\mathrm {A}_3}\) and h _{V}. Another important result, known as Luke’s Theorem [233], states that at zero recoil, v = v ^{′}, i.e. ω = 1, the leading corrections to the form factors h _{+} and \(h_{\mathrm {A}_1}\) are quadratic in the inverse heavy quark mass.
With this setup one may devise a strategy to determine V _{cb} by combining the experimentally determined decay rate with lattice calculations of the form factors. The differential decay rate for \(\bar {B}\to D^*\ell \bar \nu _\ell \) in the limit of zero recoil reads
which, owing to Luke’s Theorem, receives corrections of order \(1/m_c^2\) only. For ω > 1 the single axial form factor \(h_{\mathrm {A}_1}\) must be replaced by a linear combination of several form factors. Thus, the theoretical uncertainties appear to be controlled best at zero recoil. Since the rate is suppressed near ω = 1, the measured decay rate must be extrapolated to that value to determine V _{cb}. Most of the published lattice calculations of the form factors and the Isgur-Wise function [234,235,236,237,238] are therefore focused on the determination of the slope of ξ(ω) at ω = 1. The measured decay rate can then be extrapolated to zero recoil using a particular parameterization of ξ(ω), with its slope constrained via the lattice calculation. After taking radiative and power corrections into account, a value for V _{cb} can be extracted.
A different but related strategy is to compute the form factors h _{+}(1) and \(h_{\mathrm {A}_1}(1)\) directly via suitably chosen double ratios of hadronic matrix elements in which many systematic effects can be expected to cancel [239, 240]. Using the “Fermilab approach” for the heavy quarks in the quenched approximation, the authors of Ref. [240] find
where the first error is statistical, while the second represents an estimate of various systematic uncertainties added in quadrature. Again, this result can be combined with the experimental decay rate to determine V _{cb}. More details can be found in [240].
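The final step of this strategy is elementary: the experimentally measured zero-recoil combination F(1)|V _{cb}| is divided by the lattice value of the form factor. The numbers below are illustrative assumptions, not the actual inputs of Ref. [240]:

```python
def extract_vcb(F_Vcb_exp, h_A1_lat):
    """|V_cb| from the zero-recoil combination F(1)|V_cb| measured in
    experiment, divided by the lattice form factor h_A1(1) = F(1)."""
    return F_Vcb_exp / h_A1_lat

# Illustrative numbers only (assumptions): an experimental value for
# F(1)|V_cb| and a lattice result for h_A1(1) near unity.
vcb = extract_vcb(F_Vcb_exp=0.0359, h_A1_lat=0.91)
```

Because h _{A1}(1) is close to one and protected by Luke’s Theorem, the relative error on |V _{cb}| is dominated by the uncertainties on the two inputs added in quadrature.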
Most lattice studies of heavy-to-heavy semileptonic B-decays have been restricted to the quenched approximation. However, results for the form factors from dynamical simulations can be expected in the near future. Clearly, in order to have maximum impact on the determination of V _{cb}, systematic effects arising from lattice artefacts and the formulation used to treat the heavy quark must be controlled to a high degree.
5.8 Concluding Remarks
In this article we have introduced the lattice approach to QCD and discussed a variety of applications, which range from hadron spectroscopy, confinement, quark masses and the running coupling, to spontaneous chiral symmetry breaking and hadronic matrix elements for flavour physics. This illustrates not only the versatility of the lattice method, but also indicates that lattice calculations have become ever more important for making quantitative predictions in the notoriously difficult sector of nonperturbative QCD. Still, a great number of other applications have not even been covered here, including nucleon structure functions and form factors, calculations at finite temperature and/or chemical potential, or detailed investigations of the QCD vacuum structure.
That lattice calculations have reached this standing is owed to the enormous progress which has been made in developing more efficient algorithms for dynamical fermions, better discretizations, as well as a number of new theoretical concepts such as non-perturbative renormalization. These developments, in conjunction with the availability of ever more powerful computers, will allow for precise computations of many phenomenologically relevant quantities which previously seemed virtually intractable.
5.9 Addendum: QCD on the Lattice
5.9.1 Introduction
Since the first edition of this article [241] the field of lattice QCD has undergone a huge transformation. While the basic methodology was well established at the time of writing (2007), few simulations employing dynamical quarks had produced results with controlled errors and a direct impact on phenomenology and experiment. During the past ten years or so this has changed dramatically. Simulations with light dynamical quarks, whose masses correspond to the physical value of the pion mass, have become the state of the art, and the effects of dynamical strange and charm quarks are now routinely included as well. In fact, lattice calculations of certain observables have reached (or are aiming for) a level of precision at which the effects of the breaking of isospin symmetry can no longer be ignored. Lattice QCD must then account not only for the effects of unequal u and d quark masses but also for corrections due to electromagnetism, owing to the different electric charges of up- and down-type quarks.
In this context it is interesting to quote a remark by Ken Wilson, made at the 1989 International Conference on Lattice Field Theory [242]: “I still believe that an extraordinary increase in computing power (10^{8} is I think not enough) and equally powerful algorithmic advances will be necessary before a full interaction with experiment takes place.” Given that, in 1989, the most powerful supercomputers could sustain 10 GFlops (i.e. 10^{10} floating point operations per second), Wilson’s estimate was tantamount to requiring ExaFlops capabilities (10^{18} Flops) for lattice QCD to make an impact, a performance figure that has only recently been reached by fewer than a handful of machines. The enormous progress that the field of lattice QCD has already seen over the past decade proves that Wilson’s view was far too pessimistic.^{Footnote 21} For instance, results from lattice calculations for the decay constants and form factors of mesons and baryons containing heavy quarks are vital input for global analyses of observables in flavour physics, designed to constrain the elements of the Cabibbo–Kobayashi–Maskawa matrix. Furthermore, lattice QCD yields precise values for the masses of the light (u, d, s) quarks [244].
An impressive testimony to the importance of lattice QCD for the entire field of particle physics is the regular report provided by the Flavour Lattice Averaging Group (FLAG). Since its inception in 2007, FLAG has been charting the progress in lattice QCD by collecting results for a range of phenomenologically relevant quantities. Taking inspiration from the Particle Data Group, FLAG assesses the quality of individual calculations and produces world averages by combining those results that satisfy a defined set of requirements regarding the overall control over systematic effects. Three editions of the FLAG report were published in 2010 [245], 2013 [246] and 2016 [247], and a fourth appeared in 2019 [248]. In fact, the current status of lattice calculations of many observables reviewed in the first edition of this article can be found in these comprehensive reports.
This short review is organized as follows. In Sects. 5.9.2 and 5.9.3 we give an update of lattice calculations applied to hadron spectroscopy, weak hadronic matrix elements and the determination of Standard Model parameters such as quark masses and the strong coupling constant. These quantities were covered extensively in the original edition of [241]. Then, in Sect. 5.9.4 we extend the discussion to the determination of quantities that describe structural and other properties of the nucleon, such as form factors and the axial charge. Finally, in Sect. 5.9.5 we discuss lattice calculations of the hadronic contributions to the muon anomalous magnetic moment, which is a key quantity to study possible deviations from the Standard Model. The review concludes with a few remarks on the progress achieved over the past decade and an outlook for future calculations.
5.9.2 Hadron Spectroscopy
The calculation of the light hadron spectrum, i.e. the masses of the lowest-lying mesons and baryons, has long been regarded as a benchmark for lattice QCD. In the quenched approximation, i.e. in the absence of dynamical quarks, a significant deviation between the calculated spectrum and experiment at the level of 10–15% was observed. When the light hadron spectrum could eventually be accurately reproduced within the overall uncertainty after the inclusion of light dynamical quarks [249,250,251,252] (see Fig. 5.22), this was hailed as a major success of lattice QCD. Thanks to these milestone results, the credibility of lattice calculations was firmly established throughout the particle and hadron physics communities.
Calculations of the light hadron spectrum have since been further refined by taking the effects of isospin breaking into account. Strong isospin breaking arises from the mass splitting between the u and d quarks, m _{u} ≠ m _{d}. Since the electric charges of the u and d quarks differ as well, electromagnetic interactions are another source of isospin breaking. The formulation of QED on a lattice of finite volume poses considerable technical challenges, since the photon is massless. There are several strategies to address the problem of the associated zero mode, and we refer the reader to recent reviews of the subject [253,254,255], which also serve as a guide to the literature.
After the inclusion of strong and electromagnetic isospin breaking effects, it became possible to perform another benchmark calculation, namely the accurate determination of the neutron-proton mass difference, as well as the mass splittings of other baryonic isomultiplets [256,257,258,259]. The ability to determine isospin breaking effects arising from QED was also instrumental for calculations of the electromagnetic mass splittings of pions and kaons [260,261,262,263,264,265], which can be used to study violations of Dashen’s theorem [266]. The latter states that the electromagnetic self-energies of the charged pions and kaons are identical, while those of their neutral counterparts vanish. More details are found in Section 3.1.1 of the FLAG report [247].
Another recent focus of lattice spectroscopy has been the determination of the excitation spectrum and the properties of hadronic resonances. This is a major refinement of previous calculations in which the masses of resonances (the simplest being the ρ-meson) were extracted naively from the exponential decay of the vector correlation function, thereby ignoring the fact that resonances are characterized both by a mass and a width. The general framework for the study of resonance properties in lattice QCD was developed by Lüscher already in the 1980s and 1990s [267,268,269,270], and it is only now that the potential of this elegant and powerful formalism can be fully exploited. The key idea underlying the Lüscher method is the realization that computing the energy levels of multi-particle states in a finite volume gives access to the scattering phase shifts in infinite volume, provided that the spectrum (including excited states) can be determined sufficiently well for a range of kinematical situations. The latter are typically determined by the lattice volume and/or the total momentum of the multi-particle system in question.
To be more specific, let us consider the simplest resonance, the ρ-meson, whose properties can be accessed in p-wave ππ scattering. For energies below the inelastic threshold, the Lüscher condition reads
where ϕ(q) is a known kinematic function of the scaled scattering momentum in units of the box size, q = kL∕2π, and δ _{1} is the scattering phase shift. The scattering momentum k is determined from the nth energy level ω _{n} in a finite volume, according to
where m _{π} is the pion mass. Figure 5.23 shows an example of a calculation of the p-wave scattering phase shift as a function of the centre-of-mass energy [271].
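The kinematical part of the method is simple enough to sketch. Assuming the standard relations ω _{n} = 2√(m _{π}² + k²) and δ _{1}(k) = nπ − ϕ(q) with q = kL∕2π, one can solve for the phase shift once ϕ is available; implementing ϕ itself requires the Lüscher zeta function, which is omitted here:

```python
import math

def scattering_momentum(omega_n, m_pi):
    """k from the two-pion energy level omega_n = 2*sqrt(m_pi^2 + k^2)."""
    return math.sqrt((omega_n / 2.0) ** 2 - m_pi ** 2)

def phase_shift(omega_n, m_pi, L, phi, n=1):
    """delta_1(k) = n*pi - phi(q), q = k*L/(2*pi). The kinematic function
    phi must be supplied externally (e.g. via the Luscher zeta function,
    which is not implemented in this sketch)."""
    k = scattering_momentum(omega_n, m_pi)
    q = k * L / (2.0 * math.pi)
    return n * math.pi - phi(q)

# Kinematics only, in lattice units (illustrative numbers):
k = scattering_momentum(omega_n=0.40, m_pi=0.10)
```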
A crucial ingredient for the reliable determination of not just the energy level of the ground state but also the excitation spectrum is the use of correlator matrices computed using a suitable basis of interpolating operators (see Section 5.3 in Ref. [241]). The diagonalization of the correlator matrix can be achieved by solving a generalized eigenvalue problem from which the energy levels in a given channel can be determined [272,273,274]. The sometimes arduous task of constructing efficient interpolators for multi-particle states has been helped enormously by practical methods to compute “all-to-all” quark propagators [275] and, in particular, the so-called “distillation” technique [276, 277]. With these new developments it has been possible to perform lattice investigations of ππ scattering and the ρ resonance [278,279,280,281,282,283,284,285,286,287,288,289,290,291], as well as determinations of Kπ [292, 293] and KK scattering lengths [294, 295]. The formalism has also been used to study meson-baryon [296,297,298,299,300] and baryon-baryon [301, 302] interactions.
While the original Lüscher formalism was derived for the case of elastic two-particle scattering, it has now been generalized to coupled-channel systems [303,304,305,306,307], including the treatment of three-particle thresholds [308,309,310,311,312,313,314,315]. It also opens the possibility to study weak non-leptonic kaon decays [316] and to compute form factors for time-like momentum transfers [317,318,319,320].
Moreover, the experimental discovery of new charmonium-like resonances, commonly referred to as the X, Y and Z states, has kindled a new interest in hadron spectroscopy. A distinctive feature of the new resonances is their closeness to particle thresholds, and efforts are underway to gain a detailed understanding of the resonance structure in the charm sector. Using the formalism described above, there have been many calculations of a variety of charmonium-like resonances in lattice QCD. In view of the vast literature, we refer the reader to several recent reviews of the subject [321,322,323].
5.9.3 Parameters of the Standard Model
The Standard Model (SM) contains 19 parameters (excluding the neutrino sector) whose values are not predicted by the theory itself but must instead be fixed using experimental input. In many cases the relations between experimentally accessible observables and SM parameters involve quantities that encode the effects of the strong interactions. A well-known example is the kaon B-parameter B _{K} that enters the relation between the quantity 𝜖 _{K}, which is a measure of indirect CP violation, and a particular combination of Cabibbo–Kobayashi–Maskawa (CKM) matrix elements V _{td}, V _{ts}, i.e.
While 𝜖 _{K} can be determined experimentally from a ratio of decay amplitudes of long- and short-lived K-mesons, K _{L,S} → (ππ)_{I=0}, the parameter \(\hat {B}_K\) must be extracted from the hadronic matrix element of a four-quark operator between K ^{0} and \(\bar {K}^0\) states. Obviously, such a calculation must be performed in the non-perturbative regime of QCD, since it involves typical hadronic scales.
Other CKM matrix elements, such as V _{us}, V _{ub} and V _{cb}, are related to weak processes involving kaons, D- and B-mesons, which are described by a variety of leptonic decay constants (and their ratios), form factors of semileptonic meson and baryon decays, as well as the B-parameters that encode strong interaction contributions to \(B^0\bar {B}^0\) and \(B_s^0\bar {B}_s^0\) mixing. All these quantities have been studied in lattice QCD for many years, and increasingly precise estimates with controlled systematic errors have appeared over the past decade. They have been instrumental for recent analyses of the unitarity of the CKM matrix [324,325,326,327].
Similar considerations apply to SM parameters such as the strong coupling constant α _{s} and the masses of the quarks. While the asymptotic scaling behaviour of α _{s} gives rise to the dimensionful Λ-parameter that encodes the intrinsic scale of QCD, the quark masses are external parameters. Providing the link between experimentally accessible quantities and quark masses, as well as expressing the Λ-parameter in units of some measurable low-energy quantity, has been a primary task for lattice QCD. Lattice calculations have also been instrumental for determining the coupling constants of effective descriptions of QCD, such as the low-energy constants of Chiral Perturbation Theory.
The importance of accurate, model-independent determinations of SM parameters and input quantities for flavour physics has led to the foundation of the Flavour Lattice Averaging Group (FLAG). Updates of the FLAG report have appeared at regular intervals since the publication of its first edition in 2010 [245]. As part of its mission, FLAG issues global estimates and averages of lattice results, provided that they satisfy a set of defined quality criteria. FLAG estimates are quoted separately according to the sea quark content of the calculations that enter the global analyses, i.e. whether they have been obtained with a degenerate doublet of u, d quarks (N _{f} = 2) or with an additional dynamical strange (N _{f} = 2 + 1) and charm quark (N _{f} = 2 + 1 + 1). The current status of lattice QCD calculations of quark masses, the strong coupling, decay constants, form factors, mixing parameters and low-energy constants is summarized in Tables 1 and 2 of the 2016 FLAG report [247]. The FLAG webpage^{Footnote 22} contains additional updates. Below we comment on the current status of a few selected quantities.
Quark Masses
According to FLAG, the strange quark mass is known to 1% precision, while the accuracy in the determination of the average u and d quark mass, \(\hat {m}\equiv \frac {1}{2}(m_u+m_d)\), varies between 1–5%, depending on the sea quark content [328,329,330,331,332,333,334,335,336,337,338,339,340]. Thanks to the recent progress in including the effects of isospin breaking in lattice QCD calculations, estimates for the masses of the individual u and d quarks could also be obtained, typically with 2–5% precision [261, 262, 264, 330]. Furthermore, the masses of the heavy quarks have been determined with excellent precision [328, 330,331,332, 335, 337, 341,342,343,344,345,346,347,348].
Running Coupling
A milestone was achieved by the ALPHA collaboration, who published [349] an estimate for \(\alpha _s(M_Z^2)\) obtained by tracing the scale evolution of the strong coupling non-perturbatively over several orders of magnitude into an energy range where the application of perturbation theory can be considered safe (at least as far as the quoted precision is concerned). Their main result is the determination of the Λ-parameter in three-flavour QCD, i.e. \(\Lambda _{{\overline {{\mathrm {MS}}}}}^{(3)}=341(12)\,\text{MeV}\), which can be matched to the Λ-parameter in the five-flavour theory using perturbation theory, giving \(\Lambda _{{\overline {{\mathrm {MS}}}}}^{(5)}=215(10)(3)\,\text{MeV}\). Finally, this is translated into the result for the strong coupling [349]:
The quoted error is 30% smaller than that of the 2016 PDG estimate of α _{s} = 0.1181(11) [244]. The latter includes lattice results from Refs. [331, 335, 350,351,352,353,354].
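The last step, translating a five-flavour Λ-parameter into a value of α _{s}(M _{Z}), can be sketched with the perturbative running coupling. The snippet below uses only the two-loop truncation, so it reproduces the quoted numbers only approximately and is not the procedure of Ref. [349]:

```python
import math

def alpha_s_two_loop(mu, Lambda, nf=5):
    """Two-loop perturbative running coupling alpha_s(mu) from Lambda,
    with the standard beta-function coefficients for nf flavours."""
    b0 = (33.0 - 2.0 * nf) / (12.0 * math.pi)
    b1 = (153.0 - 19.0 * nf) / (24.0 * math.pi ** 2)
    t = math.log(mu ** 2 / Lambda ** 2)
    return (1.0 / (b0 * t)) * (1.0 - b1 * math.log(t) / (b0 ** 2 * t))

# Lambda^(5) = 0.215 GeV from the text, evaluated at the Z mass (GeV).
# At two loops this lands near 0.117; higher orders shift it slightly.
alpha = alpha_s_two_loop(mu=91.19, Lambda=0.215)
```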
Kaon Weak Matrix Elements
The kaon B-parameter B _{K} is now known with an overall accuracy of 1.3% [336, 355,356,357,358,359]. Moreover, the calculations of matrix elements relevant for \(K^0\bar {K}^0\) mixing have been extended to include operators that arise in extensions of the Standard Model [355, 358,359,360,361,362,363].
Lattice QCD results for kaon leptonic decay constants (more precisely: the ratio \(f_{K^+}/f_{\pi ^+}\)) and the form factor f _{+}(0) describing semileptonic K → πℓν _{ℓ} decays have now reached a level of precision that enables a competitive and model-independent determination of V _{us} (see Sect. 5.7.1 of the original review article). Moreover, it is possible to test the unitarity of the first row of the CKM matrix, i.e. the relation \(|V_{ud}|^2+|V_{us}|^2+|V_{ub}|^2=1\),
by combining experimental information with lattice results for f _{+}(0) and \(f_{K^+}/f_{\pi ^+}\). Neglecting the contribution from |V _{ub}|^{2} ≈ 1.7 ⋅ 10^{−5}, one finds that |V _{ud}|^{2} + |V _{us}|^{2} can be determined with a total precision at the percent level, by combining the FLAG estimates^{Footnote 23} for f _{+}(0) and \(f_{K^+}/f_{\pi ^+}\) with the experimentally accessible combinations |V _{us}|f _{+}(0) = 0.2165(4) and \(|V_{us}/V_{ud}|\,f_{K^+}/f_{\pi ^+}=0.2760(4)\) [244, 364]. In QCD with dynamical light, strange and charm quarks (N _{f} = 2 + 1 + 1) the result is |V _{ud}|^{2} + |V _{us}|^{2} = 0.9797(74), which signals a slight tension of 2.7 standard deviations with the Standard Model. The precision of the unitarity test can be sharpened considerably by replacing |V _{ud}| with the value extracted from neutron β-decay, i.e. |V _{ud}| = 0.97417(21) [365]. It is then sufficient to provide one additional constraint from lattice QCD, either in the form of f _{+}(0) or the ratio \(f_{K^+}/f_{\pi ^+}\). Inserting the lattice result for f _{+}(0) yields |V _{ud}|^{2} + |V _{us}|^{2} = 0.99884(53), which again differs from unitarity by about 2σ. Using instead the lattice result for \(f_{K^+}/f_{\pi ^+}\) implies |V _{ud}|^{2} + |V _{us}|^{2} = 0.99986(46). Thus, first-row unitarity can be probed with per-mille precision [247].
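The arithmetic behind this unitarity test is easily reproduced. The sketch below propagates only central values and assumes a hypothetical placeholder f_+(0) = 0.970 for the lattice input (not an official FLAG average); the combination |V_us| f_+(0) = 0.2165 and the neutron β-decay value |V_ud| = 0.97417 are those quoted above.

```python
def first_row_sum(Vus_fplus, f_plus, Vud):
    """Central value of |V_ud|^2 + |V_us|^2, with V_us extracted from the
    experimental combination V_us * f_+(0) and a lattice value of f_+(0)."""
    Vus = Vus_fplus / f_plus
    return Vud**2 + Vus**2

# f_plus = 0.970 is an assumed, illustrative lattice value of f_+(0);
# the other two numbers are the experimental inputs quoted in the text.
s = first_row_sum(0.2165, 0.970, 0.97417)
print(f"|V_ud|^2 + |V_us|^2 = {s:.5f}")  # slightly below 1, as in the text
```

A full analysis would of course propagate the quoted uncertainties on all three inputs as well.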
Heavy-Light Decay Constants and Form Factors
The treatment of heavy quarks on the lattice presents additional significant challenges: since the mass of the charm quark is close to typical values of the inverse lattice spacing, which acts as the ultraviolet cutoff, lattice results are prone to suffering from large discretisation errors. Moreover, the mass of the bottom quark exceeds currently accessible values of a ^{−1}, and specially designed methods are required for a consistent treatment. This has been discussed extensively in Sect. 5.7.2 of the original review.
The overall precision of lattice estimates for weak hadronic matrix elements involving charm and bottom quarks has vastly improved over the past decade. As shown in Table 2 of FLAG 2016 [247], the leptonic decay constants of the B and B _{s} mesons are now known at the level of 2%, while ratios such as \(f_{B_s}/f_B\) have been determined with even better accuracy [347, 366,367,368,369,370,371,372,373]. Since the 2016 edition of the FLAG report, new results obtained with N _{f} = 2 + 1 + 1 flavours of dynamical quarks [343, 374, 375] have pushed the overall precision to the sub-percent level, which is an impressive achievement. The estimates of the individual B-parameters \(\hat {B}_B\) and \(\hat {B}_{B_s}\), their ratios and their combinations with the leptonic decay constants are also known with overall errors at the percent level [347, 370, 376, 377].
Results for form factors describing semileptonic decays of hadrons containing b-quarks, such as B → (D, D ^{∗})ℓν, or even Λ_{b} → pℓν, have reached a level of precision that is sufficient for competitive determinations of the CKM matrix elements V _{cb} and V _{ub} from exclusive processes. An extensive discussion is presented in the web update of the FLAG report.
5.9.4 Nucleon Matrix Elements
The understanding of the internal structure of the nucleon in terms of the fundamental interactions between its constituents, the quarks and gluons, has become a major activity within the field of lattice QCD. Structural information is encoded in quantities such as form factors, structure functions and (generalized) parton distribution functions (PDFs). An open problem in this context is the decomposition of the proton’s spin in terms of the spins of quarks and gluons, as well as their angular momentum [378, 379]. Another important issue is the so-called “proton radius puzzle” [380], which arises from the observed discrepancy between the proton radius extracted from the Lamb shift in muonic hydrogen [381, 382] and the more traditional determinations from electron-proton scattering [383] or the Lamb shift in electronic hydrogen [384]. Accurate knowledge of the electromagnetic form factors of the proton is indispensable in order to resolve—or corroborate—this puzzle.
The determination of quantities such as nucleon form factors in lattice QCD proceeds by calculating the corresponding hadronic matrix elements between nucleon initial and final states. A strong motivation for computing such quantities is provided by the fact that fundamental interactions are often probed in scattering experiments involving nuclear targets. For instance, probing the neutrino sector requires accurate knowledge of the scattering cross sections of neutrinos with nuclear targets. Similar considerations apply to the search for dark matter candidates. Therefore, precise determinations of the corresponding nucleon matrix elements are indispensable for exploring the limits of the SM.
The past decade has seen a huge rise in the number of publications describing lattice calculations of nucleon matrix elements. Quantities that have been studied include

the electromagnetic form factors of the nucleon, G _{E}(Q ^{2}) and G _{M}(Q ^{2}), which give access to the electric and magnetic charge radii of the nucleon and its magnetic moment [385,386,387,388,389,390,391,392,393,394,395,396,397];

the isovector axial charge of the nucleon, g _{A}, which is a measure of the strength of the weak interaction in neutron β-decay [386, 387, 389, 392, 397,398,399,400,401,402,403,404,405,406,407,408,409,410,411,412,413,414], as well as the scalar and tensor charges, g _{S} and g _{T} [386, 393, 404,405,406, 411, 412, 414,415,416,417,418,419];

axial and induced pseudoscalar form factors of the nucleon [397, 407, 409, 420, 421], as well as the strange electromagnetic and axial form factors [421,422,423,424,425,426, 529] which probe the quark sea inside the nucleon;

the pion-nucleon σ-term σ _{πN} [412, 427,428,429,430,431,432,433,434,435,436,437,438] and the strange content of the nucleon σ _{s} [412, 429,430,431, 435,436,437,438,439,440,441,442,443,444]. These σ-terms are proportional to the nucleon matrix element of the flavour-diagonal scalar density, \(\overline {q}q\), which parameterizes the rate of change of the nucleon mass due to a non-zero value of the corresponding quark mass.
Recent reviews, presented at the annual conference on lattice field theory, can be found in Refs. [445,446,447]. Some results on nucleon form factors and other matrix elements are reviewed in section 3.2.5 of [448], and a dedicated chapter has been prepared for the 2019 edition of the FLAG report. In addition, there has been a community effort in the form of a white paper [449] in which lattice results are used to reduce the overall uncertainties in polarized and unpolarized proton PDFs and their moments.
The relevant nucleon hadronic matrix elements are extracted from suitable three-point correlation functions of quark bilinears between interpolating operators representing the initial- and final-state nucleons. Examples of the corresponding diagrams, with the initial-state nucleon placed at Euclidean time t = 0 (the source), the final-state nucleon at time t _{s} (the sink) and the operator insertion at time t, are shown in Fig. 5.24. In addition to the quark-connected diagram, in which the operator is inserted on a valence quark line, there are also quark-disconnected diagrams in which the operator probes the quark sea. The latter class of diagrams must be computed to determine, for instance, isoscalar quantities, the strangeness form factors and the σ-terms.
Precise determinations of nucleon matrix elements with controlled statistical and systematic errors are particularly challenging. This is a consequence of the fact that the noise-to-signal ratio in three-point correlation functions corresponding to the diagrams in Fig. 5.24 grows exponentially with a rate proportional to \(\exp \{(m_N-\frac {3}{2}m_\pi )t_s\}\), where m _{N} and m _{π} denote the nucleon and pion masses, respectively, and t _{s} is the source-sink separation. Techniques designed to enhance the statistical signal at affordable numerical cost have been developed and applied, including the truncated solver method [450] and “all-mode-averaging” [451]. Furthermore, a technique to achieve an exponential error reduction via domain decomposition and multilevel integration has been proposed and tested in [452, 453]. So far, it has not been employed in actual calculations of nucleon matrix elements with dynamical quarks.
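The severity of this noise problem is easy to quantify at physical masses. The following back-of-the-envelope sketch (our own illustration, using m_N ≈ 0.94 GeV, m_π ≈ 0.14 GeV and ħc ≈ 0.1973 GeV·fm to convert units) evaluates the growth factor of the noise-to-signal ratio as the source-sink separation is increased:

```python
import math

HBARC = 0.1973  # GeV*fm, used to convert fm to GeV^-1

def noise_to_signal_growth(t_s_fm, m_N=0.94, m_pi=0.14):
    """Growth factor exp{(m_N - 3/2 m_pi) t_s} of the noise-to-signal
    ratio of a nucleon three-point function at source-sink separation
    t_s (given in fm, masses in GeV)."""
    t_s = t_s_fm / HBARC  # separation in GeV^-1
    return math.exp((m_N - 1.5 * m_pi) * t_s)

# Pushing the separation from 1.0 fm to 1.5 fm at physical masses costs
# roughly an extra factor of six in the relative error:
r = noise_to_signal_growth(1.5) / noise_to_signal_growth(1.0)
print(f"error growth from 1.0 fm to 1.5 fm: factor {r:.1f}")
```

Since statistical errors scale like \(1/\sqrt{N}\), compensating such a factor requires roughly its square in additional measurements, which illustrates why the variance reduction techniques mentioned above are essential.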
Quark-disconnected diagrams of the type shown on the right of Fig. 5.24 are intrinsically even noisier than their quark-connected counterparts and require special techniques that balance statistical accuracy against numerical cost. Commonly applied variance reduction techniques for quark-disconnected diagrams include hierarchical probing [454, 455], the coherent source sequential propagator method [389, 456], low-mode averaging [457, 458], the hopping parameter expansion [450, 459,460,461] and partitioning/dilution [275, 462].
Despite these improvements, typical values of the source-sink separation t _{s} for which the signal has not yet disappeared into the noise are limited to t _{s} ≲ 1.5 fm. Since the correlation function is dominated by the ground state only for t, (t _{s} − t) →∞, it is not guaranteed that the matrix element of interest can be extracted without incurring a bias from unsuppressed excited-state contributions, as long as one cannot probe the region t _{s} > 1.5 fm. Hence, in addition to “standard” systematic effects such as lattice artefacts or finite-volume effects, one must also ensure that the asymptotic regime of nucleon correlation functions has been correctly isolated. Indeed, controlling excited-state effects has become perhaps the most important issue in current lattice calculations of nucleon matrix elements. The commonly used strategies include

fits to three-point correlation functions or suitably defined ratios of correlators, including subleading contributions from excited states [393, 394];

calculations of three-point correlators summed over the operator insertion time t [463,464,465,466,467]. Contributions from excited states can be shown to be parametrically more strongly suppressed than in the standard case [468];

increasing the projection of nucleon interpolators onto the ground state [404, 469], as well as the construction of an operator basis for the variational method, which allows for the projection onto the approximate ground state [456, 469, 470].
The first two approaches proceed by fitting data obtained in a finite interval of source-sink separations t _{s} to a function that describes the approach to the asymptotic behaviour. To be able to resolve the subleading contributions from excited states in such a fit obviously requires sufficiently precise input data.
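The mechanics of the summation method can be illustrated with a toy model. In the sketch below (all parameters invented for demonstration, not taken from any simulation), the ratio of three- to two-point functions contains a single excited-state term with energy gap Δ; summing over the insertion time and fitting the summed quantity linearly in t_s recovers the ground-state matrix element up to corrections of order e^{−Δ t_s}:

```python
import numpy as np

# Toy parameters (invented): ground-state matrix element, excited-state
# amplitude, and energy gap Delta in lattice units.
g_true, c, gap = 1.25, 0.15, 0.25
a_skip = 1  # exclude insertion times touching source/sink

def ratio(t, t_s):
    """Plateau-style ratio with leading excited-state contamination."""
    return g_true * (1.0 + c * (np.exp(-gap * t) + np.exp(-gap * (t_s - t))))

def summed(t_s):
    """Sum the ratio over insertion times t = a_skip .. t_s - a_skip."""
    t = np.arange(a_skip, t_s - a_skip + 1)
    return ratio(t, t_s).sum()

# The summed correlator is linear in t_s up to exp(-gap*t_s) corrections;
# its slope estimates the ground-state matrix element.
ts_vals = np.array([8, 10, 12, 14, 16])
S = np.array([summed(ts) for ts in ts_vals])
slope, _ = np.polyfit(ts_vals, S, 1)
print(f"fitted slope = {slope:.4f}, true value = {g_true}")
```

With these invented numbers the fitted slope differs from the true value by about 2%, and the residual bias shrinks as larger t_s values enter the fit, which is the parametric suppression referred to above.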
Another challenge for lattice calculations of nucleon matrix elements is the accurate description of the pion mass dependence. Although simulations at or near the physical pion mass are now routinely performed, the result at the physical point is often obtained via an extrapolation in the pion mass. The fit ansatz for the pion mass dependence is usually derived from chiral effective theory. However, the convergence properties of baryonic chiral perturbation theory are not as well understood as in the mesonic sector, and it is still unclear whether the predicted functional form provides a good description in the pion mass range over which it is applied. It is thus mandatory to gather sufficiently precise results at small enough pion mass, in order to control the systematic uncertainty associated with the chiral extrapolation.
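Schematically, such a chiral extrapolation amounts to fitting results obtained at several pion masses to a ChPT-inspired ansatz and evaluating the fit at m_π ≈ 135 MeV. A minimal sketch with invented data points and the simplest polynomial ansatz (real analyses use ChPT-motivated functional forms, correlated fits and full error budgets) might look as follows:

```python
import numpy as np

# Hypothetical lattice data (m_pi in GeV, g_A values); purely illustrative.
m_pi = np.array([0.20, 0.26, 0.30, 0.34])
g_A  = np.array([1.24, 1.22, 1.21, 1.19])

# Leading-order ansatz g_A(m_pi) = g0 + c * m_pi^2, a drastic
# simplification of the baryon ChPT prediction.
A = np.column_stack([np.ones_like(m_pi), m_pi**2])
(g0, c), *_ = np.linalg.lstsq(A, g_A, rcond=None)

m_phys = 0.135  # GeV, physical pion mass
print(f"g_A at the physical point: {g0 + c * m_phys**2:.3f}")
```

The systematic question raised in the text is precisely whether such an ansatz is trustworthy over the fitted mass range; adding or removing the heaviest point and comparing extrapolated values is a standard way to probe this.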
Instead of performing a detailed survey of a variety of nucleon observables, we single out one particular quantity—the isovector axial charge of the nucleon, g _{A}, which is perhaps the most widely studied nucleon matrix element in lattice QCD and serves to illustrate the current state of the art. The axial charge describes the coupling of the W boson to the nucleon. In Minkowski space notation it is defined by \(\langle p(k,s^{\prime })|\,\overline {u}\gamma ^\mu \gamma _5 d\,|n(k,s)\rangle = g_A\,\overline {u}_p(k,s^{\prime })\gamma ^\mu \gamma _5\,u_n(k,s)\), (5.255)
where u _{n}(k, s) and u _{p}(k, s ^{′}) denote the Dirac spinors of the neutron and proton with four-momentum k and spins s and s ^{′}, respectively. The axial charge has been measured experimentally in neutron β-decay, and the current world average quoted by the PDG is g _{A} = 1.2724 ± 0.0023 [471]. Provided that the experimental sensitivity is sufficient, it may be possible to probe for scalar and tensor interactions that are generated by loop effects or arise due to new forces in extensions of the SM. The definitions of the associated scalar and tensor charges, g _{S} and g _{T}, are derived from Eq. (5.255) by replacing the axial current \(\overline {u}\gamma ^\mu \gamma _5 d\) by the scalar density \(\overline {u} d\) and the tensor current \(\overline {u}\sigma ^{\mu \nu } d\), respectively.
The calculation of g _{A} is facilitated by the fact that it is derived from a forward matrix element without any momentum transfer and, secondly, by the fact that the contributions from quark-disconnected diagrams cancel in the isovector combination for mass-degenerate up and down quarks. Coupled with the fact that a precise experimental value is known, the isovector axial charge is a benchmark quantity for lattice calculations of nucleon matrix elements. Obviously, the ability of state-of-the-art lattice calculations to reproduce the experimental result will enhance the credibility of lattice predictions for the unmeasured charges g _{S} and g _{T}. Figure 5.25 shows a compilation of recent results for g _{A}, obtained in lattice QCD with N _{f} = 2, 2 + 1 and 2 + 1 + 1 flavours of dynamical quarks. While most estimates agree with the experimental result within errors, it is clear that the overall precision of current lattice calculations does not match that of the experiments: the typical total error of current lattice results is at the level of 1–3%, while experiment is an order of magnitude more precise. It should also be mentioned that, more often than not, lattice results tend to be slightly lower than the PDG average. Whether this is due to a remnant bias from excited-state contributions or indeed to some other systematic effect must be investigated in future calculations able to realize larger source-sink separations.
The tendency of early lattice calculations to underestimate g _{A} has been attributed to unsuppressed excited-state effects. In this context it is interesting to note that recent analyses of the contributions from Nπ states to nucleon matrix elements based on chiral effective theory [472, 473] suggest that the asymptotic (physical) value of g _{A} is approached from above. The different conclusions drawn from numerical and analytic studies can only be reconciled if one succeeds in simulating significantly larger source-sink separations at affordable cost.
Given that lattice QCD calculations reproduce the experimental value of benchmark quantities such as the axial charge at the level of a few percent, it is interesting to look at quantities that have not been measured so far. Results for the (isovector) scalar and tensor charges have been reported in [386, 393, 404,405,406, 411, 412, 414,415,416,417,418,419]. For both quantities one obtains g _{S}, g _{T} ≈ 1, and while the typical overall uncertainty in g _{S} is at the level of 10%, the tensor charge is determined with 3% precision, similar to that of g _{A}. The 2019 edition of the FLAG report contains a detailed compilation and comparison of results for the axial, scalar and tensor charges, as well as flavour-singlet charges and σ-terms. Calculations of these quantities have matured to a level which allows for global averages to be determined.
Lattice calculations of nucleon matrix elements constitute a rich subject, and while a comprehensive discussion of other quantities such as form factors and moments of PDFs is beyond the scope of this short review, we refer the reader to recent reviews [445,446,447], specific sections of [448] and the white paper on PDFs [449].
5.9.5 Hadronic Contributions to the Muon Anomalous Magnetic Moment
The SM describes with great accuracy and precision the properties of the constituents of the visible matter in the universe but leaves several profound questions unanswered. For instance, it cannot account for the matter-antimatter asymmetry and does not explain the vast hierarchy between the electroweak scale and the Planck mass. Most prominently, the SM cannot account for the presence of dark matter in the universe, for which there is overwhelming observational evidence. Against this backdrop, the exploration of the limits of the SM and the search for “new physics” has become a major activity in particle physics. Traditionally, high-energy particle colliders have had the highest discovery potential. However, despite the fact that the LHC is the most powerful accelerator in the world, new particles that can, for instance, explain the dark matter puzzle have not been observed in the expected region. Therefore, additional search strategies must be pursued to detect evidence for physics beyond the SM.
Observables that can be measured with very high precision and for which similarly accurate theoretical predictions exist at the same time, play an increasingly important rôle for exploring the limits of the SM. One such quantity is the anomalous magnetic moment of the muon, \(a_\mu \equiv \frac {1}{2}(g_\mu -2)\), where g _{μ} denotes the muon’s gyromagnetic ratio. There has been a persistent tension of about 3.5 standard deviations between the measured value and the SM prediction [244]:
As described in detail in the extensive reviews in Refs. [474] and [475], the SM estimate of the anomalous magnetic moment receives contributions from QED, the weak and the strong interactions, i.e. \(a_\mu ^{\mathrm {SM}}=a_\mu ^{\mathrm {QED}}+a_\mu ^{\text{weak}}+a_\mu ^{\text{strong}}\).
While QED effects account for about 99.994% of the absolute value of \(a_\mu ^{\mathrm {SM}}\), its total uncertainty is completely dominated by the contribution from \(a_\mu ^{\text{strong}}\). Since the latter is mostly due to hadronic effects that are intrinsically nonperturbative, it is clear that special attention must be paid to their reliable evaluation.
The most important quantum corrections to \(a_\mu ^{\mathrm {SM}}\) arising from strong interaction physics are the leading hadronic vacuum polarization (HVP) and hadronic light-by-light scattering (HLbL) contributions. The HVP contribution, \(a_\mu ^{\text{hvp}}\), which arises at order α ^{2} (where α is the fine structure constant), can be expressed in terms of a dispersion integral of the cross section ratio R(s) = σ(e ^{+}e ^{−}→hadrons)∕σ(e ^{+}e ^{−}→ μ ^{+}μ ^{−}), multiplied by a known kernel function. At small values of the centre-of-mass energy s, the dispersion integral is evaluated using experimental data for the R-ratio R(s) as input [476,477,478,479,480]. For instance, the recent analysis of Ref. [479], which is based on the available data for e ^{+}e ^{−}→hadrons, produced an estimate of \(a_\mu ^{\text{hvp}}=(693.1\pm 3.4)\cdot 10^{-10}\). While the total error is at the level of 0.5%, it is clear that experimental uncertainties enter the SM prediction for a _{μ} in this approach.
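In one common convention for the dispersive master formula (used, e.g., in standard reviews of the subject), \(a_\mu ^{\text{hvp}}=\frac {\alpha ^2}{3\pi ^2}\int _{m_\pi ^2}^\infty \frac {ds}{s}\,R(s)\,K(s)\) with the kernel \(K(s)=\int _0^1 dx\,x^2(1-x)/[x^2+(1-x)s/m_\mu ^2]\). The sketch below (our own illustration; the quadrature scheme and sample point are arbitrary choices) evaluates this kernel numerically and checks its known large-s behaviour K(s) → m_μ²/(3s):

```python
M_MU = 0.1056584  # GeV, muon mass

def kernel_K(s, n=20000):
    """QED kernel K(s), in one standard convention for the leading-order
    HVP dispersion integral, evaluated by midpoint quadrature:
    K(s) = int_0^1 dx x^2 (1-x) / (x^2 + (1-x) s / m_mu^2),  s in GeV^2."""
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        total += x * x * (1.0 - x) / (x * x + (1.0 - x) * s / M_MU**2)
    return total * h

# Asymptotically K(s) -> m_mu^2 / (3 s); compare at s = 10 GeV^2.
s = 10.0
print(f"K({s}) = {kernel_K(s):.6e}, asymptotic = {M_MU**2 / (3 * s):.6e}")
```

The kernel strongly weights the low-s region, which is why the ρ-resonance region of R(s) dominates the integral and why the infrared behaviour is so critical in the lattice representations discussed below.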
The HLbL contribution has been quantified mostly using hadronic models, although efforts are under way to formulate and apply a dispersive or data-driven framework to treat some of the dominant subprocesses [481,482,483,484,485,486,487,488,489,490,491]. The current SM estimate \(a_\mu ^{\mathrm {SM}}\) is based on model calculations such as the “Glasgow consensus”, i.e. \(a_\mu ^{\text{hlbl}}=(105\pm 26)\cdot 10^{-11}\) [492]. Other studies, which have produced consistent results, can be found in Refs. [474, 478, 493].
Given the importance of a _{μ} for testing the limits of the SM, it is crucial to verify the current estimates of \(a_\mu ^{\text{hvp}}\) and \(a_\mu ^{\text{hlbl}}\) and possibly reduce their overall errors using an ab initio approach such as lattice QCD. Given that two new experiments (E989 at Fermilab and E34 at J-PARC) are set to improve the precision of the measurement of a _{μ} by a factor of four, reliable estimates of the hadronic contributions have become even more important. In order to make an impact, lattice QCD must be able to constrain \(a_\mu ^{\text{hvp}}\) with sub-percent accuracy, while an estimate of \(a_\mu ^{\text{hlbl}}\) at the level of 10% would already be a major step forward. Both tasks, however, present a considerable challenge to lattice QCD. The current status of lattice calculations of \(a_\mu ^{\text{hvp}}\) and \(a_\mu ^{\text{hlbl}}\) was reviewed extensively in Ref. [494], which can be consulted for details. Here we present merely an overview of the main issues and a guide to the literature.
The hadronic vacuum polarization contribution, \(a_\mu ^{\text{hvp}}\), is accessible in lattice QCD via different integral representations involving the correlator of the electromagnetic current. The first possibility is to consider a convolution integral over Euclidean momenta Q ^{2} of the subtracted vacuum polarization function [500, 501]. The second possibility is the so-called time-momentum representation defined in Ref. [502], in which the product of the spatially summed vector correlator G(x _{0}) and a kernel function is integrated over the Euclidean time x _{0}. A variant of the time-momentum representation uses the time moments of G(x _{0}) [503]. Finally, there also exists a Lorentz-covariant formulation in coordinate space [504] involving the point-to-point vector correlator G(x, y).
In order to meet the precision goal of sub-percent uncertainty, it is mandatory to have good control over the infrared regime, which makes a sizeable contribution to \(a_\mu ^{\text{hvp}}\). In the formulation of Refs. [500, 501] this implies that momenta corresponding to \(Q^2\lesssim m_\mu ^2\) must be included, since this is where the convolution integral receives its dominant contribution. In the time-momentum representation or the Lorentz-covariant formulation one must instead constrain the long-distance regime of the correlator sufficiently well. The statistical accuracy that one can attain for \(a_\mu ^{\text{hvp}}\) is affected by the well-known noise problem encountered for the vector correlator, i.e. the fact that the noise-to-signal ratio grows exponentially at large distances.^{Footnote 24} Another limiting factor for the overall precision of \(a_\mu ^{\text{hvp}}\) in lattice QCD is the knowledge of the lattice scale [499, 505]. At first sight this may seem surprising, given that \(a_\mu ^{\text{hvp}}\) is a dimensionless quantity. However, employing the time-momentum representation, one easily sees that the lattice scale enters through the combination (x _{0}m _{μ})^{2} in the kernel function. Similar arguments exist for the other representations of \(a_\mu ^{\text{hvp}}\). Furthermore, at the level of sub-percent precision, it is necessary to include the contributions from quark-disconnected diagrams and the effects of isospin breaking (see Sect. 5.9.2). All of this is explained in great detail in Ref. [494].
First exploratory calculations of \(a_\mu ^{\text{hvp}}\) in full QCD were published in 2008 [506], and in the following years several studies appeared [497, 507,508,509], employing a range of different discretisations of the quark action, which were mostly aimed at investigating systematic effects. The most recent calculations are focussed on reducing the overall uncertainties [495, 496, 498, 499, 510,511,512,513,514,515, 530]. A comparison of recent estimates for \(a_\mu ^{\text{hvp}}\) from lattice QCD to results obtained via the dispersive approach is shown in Fig. 5.26. As of now, current calculations cannot match the accuracy of the dispersive approach, but efforts are under way to reduce the uncertainties to a level that makes the lattice approach competitive with datadriven methods [494, 516].
In order to determine the hadronic light-by-light scattering contribution, it is necessary to formulate the problem in such a way that \(a_\mu ^{\text{hlbl}}\) is expressed in terms of quantities that can be computed on the lattice with affordable effort. Several different strategies have been proposed and are currently being pursued:
In a first method, the matrix element of the electromagnetic current between explicit muon initial and final states is computed in QCD+QED [517]. In order to isolate the desired light-by-light scattering contribution, one has to perform a non-perturbative subtraction. While the method has produced estimates in the expected range, statistical errors are large, as a result of the cancellation between two large numbers [518].
In another method, proposed by the RBC/UKQCD Collaboration [519, 520], the light-by-light scattering diagram is evaluated by inserting three explicit photon propagators. The positions of the insertions of these propagators are then sampled stochastically. In this way, results for the quark-connected and the leading quark-disconnected contributions have been obtained, i.e.
The sum of the two contributions gives \(a_\mu ^{\text{hlbl}}=(53.5\pm 13.5)\cdot 10^{-11}\), which differs from the Glasgow consensus by about a factor of two. However, before jumping to conclusions one must take into account that systematic effects have not yet been fully quantified in these calculations.
The Mainz group has proposed a method in which the QED kernel function is computed semi-analytically in infinite volume [521,522,523,524]. This has the advantage that large finite-volume effects arising from the massless photon mode are absent. The method has yet to produce explicit estimates for \(a_\mu ^{\text{hlbl}}\). A variant was proposed by RBC/UKQCD in Ref. [525]. Another project of the Mainz group has focussed on the forward light-by-light scattering amplitude, which can be linked via the optical theorem and dispersive sum rules to models of the cross section for the process γ ^{∗}γ ^{∗}→hadrons [526, 527]. The results provide an important test for model estimates of \(a_\mu ^{\text{hlbl}}\).
Finally, lattice QCD calculations can also be used to directly test model estimates of the expected dominant contribution to \(a_\mu ^{\text{hlbl}}\) from the pion pole, which requires knowledge of the transition form factor for π ^{0} → γ ^{∗}γ ^{∗}. The calculation of Ref. [528], which was performed in two-flavour QCD, gives
which is in very good agreement with model estimates [491]. It will be interesting to extend this calculation by including the corresponding contributions of the η and η ^{′} mesons.
This brief survey demonstrates that lattice QCD contributes in many different and complementary ways to constrain the hadronic contributions to the muon g − 2 more precisely.
5.9.6 Concluding Remarks
In this short review we have charted the progress of lattice QCD calculations over more than a decade, i.e. since the publication of the original review article. Back in 2007, lattice QCD was on the verge of providing estimates for hadronic observables from first principles which were of immediate phenomenological relevance. In the meantime, lattice QCD has become an indispensable tool in particle and hadron physics: in addition to providing accurate estimates of SM parameters and input quantities for analyses in flavour physics, lattice QCD is now also making inroads into fields such as nucleon structure and precision observables. This underlines the important rôle of lattice calculations in exploring the limits of the SM and searching for new physics.
Furthermore, studying hadronic interactions, i.e. the physics of resonances and multi-hadron systems, has become a major activity in lattice QCD and also serves as a basis for the understanding of light nuclei from first principles. Other important applications of the lattice formulation that have not been covered in this article are studies of matter under extreme conditions. Indeed, many features of the QCD phase diagram and properties of the quark-gluon plasma that are otherwise inaccessible can nowadays be obtained reliably from lattice calculations. Perhaps the most significant development since Ken Wilson’s 1989 remark, quoted in the introduction, is the fact that there is now a vigorous interaction between lattice QCD and experiment.
Notes
 1.
That the degeneracy is indeed completely lifted in the presence of a nontrivial gauge field can be verified in numerical simulations.
 2.
So far we have not discussed how to assign physical units to the lattice spacing a. This is described in Sect. 5.2.4.
 3.
Here and in the following we drop the subscript “E” on the partition function Z.
 4.
In the commonly used normalization of hadron states one includes a factor \(2\epsilon _\alpha (\vec {p})\) in the denominator.
 5.
This phase diagram must not be confused with the physical phase diagram of QCD in the plane defined by the temperature and the chemical potential, which is explored at heavyion colliders.
 6.
Otherwise, an arbitrarily chosen value of g _{0} would always correspond to a critical point.
 7.
In practice this is achieved by drawing a random number r, with 0 < r ≤ 1. If r > e^{− ΔH} the new configuration is rejected.
 8.
This should be compared to the physical mass ratio of \(\hat {m}\approx m_s/24\) [38].
 9.
As usual we denote the running parameters by a bar across the symbol.
 10.
The expressions for b _{0} and b _{1}, as well as the Λparameter have already been shown in Sect. 5.2.4.
 11.
The figure actually shows the ratio for the RGI quark masses, instead of those renormalized in the \({\overline {{\mathrm {MS}}}}\)scheme.
 12.
The precise definition of \(\bar {g}_{\mathrm {SF}}\) is specified in Sect. 5.5.5 below.
 13.
In this expression \(V_\mu ^{\mathrm {C}}\) denotes the conserved lattice vector current, which involves quark fields at neighbouring lattice sites, and which is known not to undergo any finite renormalization, such that Z _{V} ≡ 1.
 14.
 15.
N _{f} = 2 usually denotes a degenerate doublet of light (u, d) quarks, while N _{f} = 2 + 1 denotes a degenerate doublet together with a heavier third flavour, i.e. the strange quark.
 16.
We use capital symbols for decay constants whenever we refer to a normalization in which F _{π} ≃ 93 MeV.
 17.
It should be obvious that the field U must not be confused with the link variable considered in previous sections.
 18.
A normalization factor of V ^{−1} must be included in Eq. (5.184) since ρ(λ) is proportional to the volume.
 19.
The expressions for the distributions \(p_{k}^{(\nu )}(z)\) become rapidly more complicated as k increases, so that one may have to resort to numerical evaluations of the integrals.
 20.
The relation of the rescaled parameter \(\bar \rho \) to ρ is given by \(\bar \rho =\rho (1-\lambda ^2/2+{\mathrm {O}}(\lambda ^4))\), and a similar relation holds for \(\bar \eta \) and η.
 21.
Even Wilson himself acknowledged, at least partially, that this was the case [243].
 22.
 23.
See the web update at http://flag.unibe.ch/.
 24.
This is similar to, but less severe than, the noise problem encountered in nucleon correlation functions discussed in Sect. 5.9.4 of this review.
References
M. Creutz, Quarks, Gluons and Lattices, Cambridge University Press (1983), Cambridge, UK.
I. Montvay and G. Münster, Quantum fields on a lattice, Cambridge University Press (1994), Cambridge, UK.
J. Smit, Introduction to quantum fields on a lattice: A robust mate, Cambridge Lect. Notes Phys. 15 (2002) 1.
H.J. Rothe, Lattice gauge theories: An Introduction, World Sci. Lect. Notes Phys. 74 (2005) 1.
K. Osterwalder and R. Schrader, Commun. Math. Phys. 31 (1973) 83; Commun. Math. Phys. 42 (1975) 281.
K.G. Wilson, Phys. Rev. D10 (1974) 2445.
P. Weisz, Nucl. Phys. B212 (1983) 1; P. Weisz and R. Wohlert, Nucl. Phys. B236 (1984) 397.
Y. Iwasaki, Nucl. Phys. B258 (1985) 141.
L. Susskind, Phys. Rev. D16 (1977) 3031.
D.B. Kaplan, Phys. Lett. B288 (1992) 342, hep-lat/9206013.
V. Furman and Y. Shamir, Nucl. Phys. B439 (1995) 54, hep-lat/9405004.
H. Neuberger, Phys. Lett. B417 (1998) 141, hep-lat/9707022; Phys. Lett. B427 (1998) 353, hep-lat/9801031.
P. Hasenfratz, Nucl. Phys. B (Proc. Suppl.) 63 (1998) 53, hep-lat/9709110.
K. Symanzik, Nucl. Phys. B226 (1983) 187; Nucl. Phys. B226 (1983) 205.
B. Sheikholeslami and R. Wohlert, Nucl. Phys. B259 (1985) 572.
M. Lüscher, S. Sint, R. Sommer and P. Weisz, Nucl. Phys. B478 (1996) 365, hep-lat/9605038.
M. Lüscher, S. Sint, R. Sommer, P. Weisz and U. Wolff, Nucl. Phys. B491 (1997) 323, hep-lat/9609035.
H. Kluberg-Stern, A. Morel, O. Napoly and B. Petersson, Nucl. Phys. B220 (1983) 447.
G.P. Lepage, Phys. Rev. D59 (1999) 074502, hep-lat/9809157.
MILC Collaboration, K. Orginos, D. Toussaint and R.L. Sugar, Phys. Rev. D60 (1999) 054503, hep-lat/9903032.
H.B. Nielsen and M. Ninomiya, Nucl. Phys. B185 (1981) 20; Nucl. Phys. B193 (1981) 173.
P.H. Ginsparg and K.G. Wilson, Phys. Rev. D25 (1982) 2649.
P. Hasenfratz, V. Laliena and F. Niedermayer, Phys. Lett. B427 (1998) 125, hep-lat/9801021.
M. Lüscher, Phys. Lett. B428 (1998) 342, hep-lat/9802011.
P. Hernández, K. Jansen and M. Lüscher, Nucl. Phys. B552 (1999) 363, hep-lat/9808010.
ALPHA Collaboration, R. Frezzotti, P.A. Grassi, S. Sint and P. Weisz, JHEP 08 (2001) 058, hep-lat/0101001.
R. Frezzotti and G.C. Rossi, JHEP 08 (2004) 007, hep-lat/0306014.
S. Sint, Lattice QCD with a chiral twist, (2007), hep-lat/0702008.
A. Shindler, Twisted mass lattice QCD, (2007), arXiv:0707.4093 [hep-lat].
N. Metropolis, A.W. Rosenbluth, M.N. Rosenbluth, A.H. Teller and E. Teller, J. Chem. Phys. 21 (1953) 1087.
S. Duane, A.D. Kennedy, B.J. Pendleton and D. Roweth, Phys. Lett. B195 (1987) 216.
M. Lüscher, Commun. Math. Phys. 54 (1977) 283.
R. Sommer, Nucl. Phys. B411 (1994) 839, hep-lat/9310022.
ALPHA Collaboration, M. Guagnelli, R. Sommer and H. Wittig, Nucl. Phys. B535 (1998) 389, hep-lat/9806005.
ALPHA & UKQCD Collaborations, J. Garden, J. Heitger, R. Sommer and H. Wittig, Nucl. Phys. B571 (2000) 237, hep-lat/9906013.
M. Lüscher, Commun. Math. Phys. 104 (1986) 177.
H. Wittig, Nucl. Phys. B (Proc. Suppl.) 119 (2003) 59, hep-lat/0210025.
H. Leutwyler, Phys. Lett. B378 (1996) 313, hep-ph/9602366.
S.R. Sharpe, PoS LAT2006 (2006) 022, hep-lat/0610094.
HPQCD Collaboration, C.T.H. Davies et al., Phys. Rev. Lett. 92 (2004) 022001, hep-lat/0304004.
ETM Collaboration, P. Boucaud et al., Phys. Lett. B650 (2007) 304, hep-lat/0701012.
M. Hasenbusch, Phys. Lett. B519 (2001) 177, hep-lat/0107019.
C. Urbach, K. Jansen, A. Shindler and U. Wenger, Comput. Phys. Commun. 174 (2006) 87, hep-lat/0506011.
M. Lüscher, Comput. Phys. Commun. 165 (2005) 199, hep-lat/0409106.
M.A. Clark and A.D. Kennedy, Phys. Rev. Lett. 98 (2007) 051601, hep-lat/0608015.
F. Butler, H. Chen, J. Sexton, A. Vaccarino and D. Weingarten, Nucl. Phys. B430 (1994) 179, hep-lat/9405003.
CP-PACS Collaboration, S. Aoki et al., Phys. Rev. Lett. 84 (2000) 238, hep-lat/9904012; Phys. Rev. D67 (2003) 034503, hep-lat/0206009.
UKQCD Collaboration, K.C. Bowler et al., Phys. Rev. D62 (2000) 054506, hep-lat/9910022.
MILC Collaboration, C.W. Bernard et al., Phys. Rev. Lett. 81 (1998) 3087, hep-lat/9805004.
C.W. Bernard et al., Phys. Rev. D64 (2001) 054506, hep-lat/0104002.
BGR Collaboration, C. Gattringer et al., Nucl. Phys. B677 (2004) 3, hep-lat/0307013.
CP-PACS Collaboration, A. Ali Khan et al., Phys. Rev. D65 (2002) 054505, hep-lat/0105015.
C.J. Morningstar and M.J. Peardon, Phys. Rev. D60 (1999) 034509, hep-lat/9901004.
APE Collaboration, M. Albanese et al., Phys. Lett. 192B (1987) 163.
M. Teper, Phys. Lett. B183 (1987) 345.
M. Lüscher and U. Wolff, Nucl. Phys. B339 (1990) 222.
Y. Chen et al., Phys. Rev. D73 (2006) 014516, hep-lat/0510074.
UKQCD Collaboration, G.S. Bali et al., Phys. Lett. B309 (1993) 378, hep-lat/9304012.
J. Sexton, A. Vaccarino and D. Weingarten, Phys. Rev. Lett. 75 (1995) 4563, hep-lat/9510022.
M.J. Teper, Glueball masses and other physical properties of SU(N) gauge theories in D = (3+1): A Review of lattice results for theorists, (1998), hep-th/9812187.
Particle Data Group, W.M. Yao et al., J. Phys. G33 (2006) 1.
E. Klempt and A. Zaitsev, Phys. Rept. 454 (2007) 1, arXiv:0708.4016 [hep-ph].
M. Lüscher, K. Symanzik and P. Weisz, Nucl. Phys. B173 (1980) 365; M. Lüscher, Nucl. Phys. B180 (1981) 317.
G. Parisi, R. Petronzio and F. Rapuano, Phys. Lett. 128B (1983) 418.
S. Necco and R. Sommer, Nucl. Phys. B622 (2002) 328, hep-lat/0108008.
G.S. Bali, Phys. Rept. 343 (2001) 1, hep-ph/0001312.
M. Lüscher and P. Weisz, JHEP 07 (2002) 049, hep-lat/0207003.
O. Philipsen and H. Wittig, Phys. Rev. Lett. 81 (1998) 4056, hep-lat/9807020.
ALPHA Collaboration, F. Knechtli and R. Sommer, Phys. Lett. B440 (1998) 345, hep-lat/9807022; Nucl. Phys. B590 (2000) 309, hep-lat/0005021.
SESAM Collaboration, G.S. Bali, H. Neff, T. Duessel, T. Lippert and K. Schilling, Phys. Rev. D71 (2005) 114513, hep-lat/0505012.
T. van Ritbergen, J.A.M. Vermaseren and S.A. Larin, Phys. Lett. B400 (1997) 379, hep-ph/9701390; K.G. Chetyrkin, Phys. Lett. B404 (1997) 161, hep-ph/9703278; J.A.M. Vermaseren, S.A. Larin and T. van Ritbergen, Phys. Lett. B405 (1997) 327, hep-ph/9703284.
M. Lüscher, R. Narayanan, P. Weisz and U. Wolff, Nucl. Phys. B384 (1992) 168, hep-lat/9207009.
S. Sint, Nucl. Phys. B421 (1994) 135, hep-lat/9312079.
S. Sint, Nucl. Phys. B451 (1995) 416, hep-lat/9504005.
ALPHA Collaboration, S. Capitani, M. Lüscher, R. Sommer and H. Wittig, Nucl. Phys. B544 (1999) 669, hep-lat/9810063.
G. Martinelli, C. Pittori, C.T. Sachrajda, M. Testa and A. Vladikas, Nucl. Phys. B445 (1995) 81, hep-lat/9411010.
ALPHA Collaboration, M. Della Morte et al., Nucl. Phys. B713 (2005) 378, hep-lat/0411025.
ALPHA Collaboration, M. Della Morte et al., Nucl. Phys. B729 (2005) 117, hep-lat/0507035.
Y. Taniguchi, JHEP 10 (2006) 027, hep-lat/0604002.
S. Sint, PoS LAT2005 (2006) 235, hep-lat/0511034.
M. Lüscher, JHEP 05 (2006) 042, hep-lat/0603029.
G. Parisi, Presented at 20th Int. Conf. on High Energy Physics, Madison, Wis., Jul 17–23, 1980.
G.P. Lepage and P.B. Mackenzie, Phys. Rev. D48 (1993) 2250, hep-lat/9209022.
M. Lüscher, R. Sommer, P. Weisz and U. Wolff, Nucl. Phys. B413 (1994) 481, hep-lat/9309005.
S. Sint and R. Sommer, Nucl. Phys. B465 (1996) 71, hep-lat/9508012.
HPQCD Collaboration, Q. Mason et al., Phys. Rev. Lett. 95 (2005) 052002, hep-lat/0503005.
Y. Schröder, Phys. Lett. B447 (1999) 321, hep-ph/9812205.
S. Bethke, Nucl. Phys. Proc. Suppl. 135 (2004) 345, hep-ex/0407021.
J. Gasser and H. Leutwyler, Ann. Phys. 158 (1984) 142.
J. Gasser and H. Leutwyler, Nucl. Phys. B250 (1985) 465.
M. Lüscher, S. Sint, R. Sommer and H. Wittig, Nucl. Phys. B491 (1997) 344, hep-lat/9611015.
M. Della Morte, R. Hoffmann, F. Knechtli, R. Sommer and U. Wolff, JHEP 07 (2005) 007, hep-lat/0505026.
SPQ_{cd}R Collaboration, D. Bećirević, V. Lubicz and C. Tarantino, Phys. Lett. B558 (2003) 69, hep-lat/0208003.
JLQCD Collaboration, S. Aoki et al., Phys. Rev. Lett. 82 (1999) 4392, hep-lat/9901019.
JLQCD Collaboration, T. Ishikawa et al., Phys. Rev. D78 (2008) 011502.
HPQCD Collaboration, Q. Mason, H.D. Trottier, R. Horgan, C.T.H. Davies and G.P. Lepage, Phys. Rev. D73 (2006) 114501, hep-ph/0511160.
M. Göckeler et al., Phys. Rev. D73 (2006) 054508, hep-lat/0601004.
D. Bećirević et al., Nucl. Phys. B734 (2006) 138, hep-lat/0510014.
CP-PACS Collaboration, A. Ali Khan et al., Phys. Rev. Lett. 85 (2000) 4674, hep-lat/0004010.
ETM Collaboration, B. Blossier et al., JHEP 04 (2008) 020.
J. Goldstone, Nuovo Cim. 19 (1961) 154.
S. Scherer, Adv. Nucl. Phys. 27 (2003) 277, hep-ph/0210398.
V. Bernard and U.G. Meißner, Chiral perturbation theory, (2006), hep-ph/0611231.
D.B. Kaplan and A.V. Manohar, Phys. Rev. Lett. 56 (1986) 2004.
M. Gell-Mann, R.J. Oakes and B. Renner, Phys. Rev. 175 (1968) 2195.
T. Banks and A. Casher, Nucl. Phys. B169 (1980) 103.
J.J.M. Verbaarschot and I. Zahed, Phys. Rev. Lett. 70 (1993) 3852, hep-th/9303012.
H. Leutwyler and A. Smilga, Phys. Rev. D46 (1992) 5607.
E.V. Shuryak and J.J.M. Verbaarschot, Nucl. Phys. A560 (1993) 306, hep-th/9212088.
L. Giusti, C. Hoelbling, M. Lüscher and H. Wittig, Comput. Phys. Commun. 153 (2003) 31, hep-lat/0212012.
L. Giusti, M. Lüscher, P. Weisz and H. Wittig, JHEP 11 (2003) 023, hep-lat/0309189.
W. Bietenholz, K. Jansen and S. Shcheredin, JHEP 07 (2003) 033, hep-lat/0306022.
H. Fukaya et al., Phys. Rev. D76 (2007) 054503, arXiv:0705.3322 [hep-lat].
J. Wennekers and H. Wittig, JHEP 09 (2005) 059, hep-lat/0507026.
P. Hernández, K. Jansen, L. Lellouch and H. Wittig, JHEP 07 (2001) 018, hep-lat/0106011.
L. Giusti, F. Rapuano, M. Talevi and A. Vladikas, Nucl. Phys. B538 (1999) 249, hep-lat/9807014.
P. Hernández, K. Jansen and L. Lellouch, Phys. Lett. B469 (1999) 198, hep-lat/9907022.
T. Blum et al., Phys. Rev. D69 (2004) 074502, hep-lat/0007038.
MILC Collaboration, T.A. DeGrand, Phys. Rev. D64 (2001) 117501, hep-lat/0107014.
L. Giusti, C. Hoelbling and C. Rebbi, Phys. Rev. D64 (2001) 114508, hep-lat/0108007.
P. Hernández, K. Jansen, L. Lellouch and H. Wittig, Nucl. Phys. B (Proc. Suppl.) 106 (2002) 766, hep-lat/0110199.
P. Hasenfratz, S. Hauswirth, T. Jörg, F. Niedermayer and K. Holland, Nucl. Phys. B643 (2002) 280, hep-lat/0205010.
D. Bećirević and V. Lubicz, Phys. Lett. B600 (2004) 83, hep-ph/0403044.
V. Gimenez, V. Lubicz, F. Mescia, V. Porretti and J. Reyes, Eur. Phys. J. C41 (2005) 535, hep-lat/0503001.
C. McNeile, Phys. Lett. B619 (2005) 124, hep-lat/0504006.
H.G. Dosch and S. Narison, Phys. Lett. B417 (1998) 173, hep-ph/9709215.
S. Narison, (2002), hep-ph/0202200.
M.R. Pennington, (2002), hep-ph/0207220.
L. Wolfenstein, Phys. Rev. Lett. 51 (1983) 1945.
UTfit Collaboration, M. Bona et al., JHEP 10 (2006) 081, hep-ph/0606167.
C. Dawson, PoS LAT2005 (2006) 007.
M. Okamoto, PoS LAT2005 (2006) 013, hep-lat/0510113.
T. Onogi, PoS LAT2006 (2006) 017, hep-lat/0610115.
A. Jüttner, PoS LAT2007 (2007) 014, arXiv:0711.1239 [hep-lat].
M. Della Morte, PoS LAT2007 (2007) 008, arXiv:0711.3160 [hep-lat].
L. Conti et al., Phys. Lett. B421 (1998) 273, hep-lat/9711053.
JLQCD Collaboration, S. Aoki et al., Phys. Rev. D60 (1999) 034511, hep-lat/9901018.
D. Bećirević et al., Phys. Lett. B487 (2000) 74, hep-lat/0005013.
R.S. Van de Water and S.R. Sharpe, Phys. Rev. D73 (2006) 014003, hep-lat/0507012.
M. Guagnelli, J. Heitger, C. Pena, S. Sint and A. Vladikas, Nucl. Phys. B (Proc. Suppl.) 106 (2002) 320, hep-lat/0110097.
R. Frezzotti and G.C. Rossi, JHEP 10 (2004) 070, hep-lat/0407002.
A. Donini, V. Gimenez, G. Martinelli, M. Talevi and A. Vladikas, Eur. Phys. J. C10 (1999) 121, hep-lat/9902030.
ALPHA Collaboration, M. Guagnelli, J. Heitger, C. Pena, S. Sint and A. Vladikas, JHEP 03 (2006) 088, hep-lat/0505002.
RBC and UKQCD Collaborations, D.J. Antonio et al., Phys. Rev. Lett. 100 (2008) 032001, hep-ph/0702042.
HPQCD and UKQCD Collaborations, E. Gamiz et al., Phys. Rev. D73 (2006) 114502, hep-lat/0603023.
Y. Aoki et al., Phys. Rev. D72 (2005) 114505, hep-lat/0411006.
UKQCD Collaboration, J.M. Flynn, F. Mescia and A.S.B. Tariq, JHEP 11 (2004) 049, hep-lat/0406013.
RBC Collaboration, T. Blum et al., Phys. Rev. D68 (2003) 114506, hep-lat/0110075.
CP-PACS Collaboration, A. Ali Khan et al., Phys. Rev. D64 (2001) 114506, hep-lat/0105020.
N. Garron, L. Giusti, C. Hoelbling, L. Lellouch and C. Rebbi, Phys. Rev. Lett. 92 (2004) 042001, hep-ph/0306295.
MILC Collaboration, T.A. DeGrand, Phys. Rev. D69 (2004) 014504, hep-lat/0309026.
ALPHA Collaboration, P. Dimopoulos et al., Nucl. Phys. B749 (2006) 69, hep-ph/0601002; Nucl. Phys. B776 (2007) 258, hep-lat/0702017.
D. Bećirević, P. Boucaud, V. Gimenez, V. Lubicz and M. Papinutto, Eur. Phys. J. C37 (2004) 315, hep-lat/0407004.
JLQCD Collaboration, S. Aoki et al., Phys. Rev. Lett. 80 (1998) 5271, hep-lat/9710073.
D. Bećirević, D. Meloni and A. Retico, JHEP 01 (2001) 012, hep-lat/0012009.
J. Gasser and H. Leutwyler, Nucl. Phys. B250 (1985) 517.
D. Bećirević et al., Nucl. Phys. B705 (2005) 339, hep-ph/0403217.
H. Leutwyler and M. Roos, Z. Phys. C25 (1984) 91.
UKQCD Collaboration, P.A. Boyle, J.M. Flynn, A. Jüttner, C.T. Sachrajda and J.M. Zanotti, JHEP 05 (2007) 016, hep-lat/0703005.
P.F. Bedaque, Phys. Lett. B593 (2004) 82, nucl-th/0402051.
G.M. de Divitiis, R. Petronzio and N. Tantalo, Phys. Lett. B595 (2004) 408, hep-lat/0405002.
C.T. Sachrajda and G. Villadoro, Phys. Lett. B609 (2005) 73, hep-lat/0411033.
UKQCD Collaboration, J.M. Flynn, A. Jüttner and C.T. Sachrajda, Phys. Lett. B632 (2006) 313, hep-lat/0506016.
F.J. Jiang and B.C. Tiburzi, Phys. Lett. B645 (2007) 314, hep-lat/0610103.
C. Dawson, T. Izubuchi, T. Kaneko, S. Sasaki and A. Soni, Phys. Rev. D74 (2006) 114502, hep-ph/0607162.
UKQCD and RBC Collaborations, D.J. Antonio et al., (2007), hep-lat/0702026; UKQCD and RBC Collaborations, P.A. Boyle et al., Phys. Rev. Lett. 100 (2008) 141601, arXiv:0710.5136 [hep-lat].
J. Bijnens and P. Talavera, Nucl. Phys. B669 (2003) 341, hep-ph/0303103.
M. Jamin, J.A. Oller and A. Pich, JHEP 02 (2004) 047, hep-ph/0401080.
V. Cirigliano et al., JHEP 04 (2005) 006, hep-ph/0503108.
W.J. Marciano, Phys. Rev. Lett. 93 (2004) 231803, hep-ph/0402299.
ALPHA Collaboration, J. Heitger, R. Sommer and H. Wittig, Nucl. Phys. B588 (2000) 377, hep-lat/0006026.
JLQCD Collaboration, S. Aoki et al., Phys. Rev. D68 (2003) 054502, hep-lat/0212039.
MILC Collaboration, C. Aubin et al., Phys. Rev. D70 (2004) 114501, hep-lat/0407028.
S.R. Beane, P.F. Bedaque, K. Orginos and M.J. Savage, Phys. Rev. D75 (2007) 094501, hep-lat/0606023.
RBC and UKQCD Collaborations, C. Allton et al., Phys. Rev. D76 (2007) 014504, hep-lat/0701013.
HPQCD Collaboration, E. Follana, C.T.H. Davies, G.P. Lepage and J. Shigemitsu, Phys. Rev. Lett. 100 (2008) 062002, arXiv:0706.1726 [hep-lat].
D. Bećirević and G. Villadoro, Phys. Rev. D69 (2004) 054010, hep-lat/0311028.
G. Colangelo, S. Durr and C. Haefeli, Nucl. Phys. B721 (2005) 136, hep-lat/0503014.
E. Eichten, Nucl. Phys. B (Proc. Suppl.) 4 (1988) 170.
B.A. Thacker and G.P. Lepage, Phys. Rev. D43 (1991) 196; G.P. Lepage, L. Magnea, C. Nakhleh, U. Magnea and K. Hornbostel, Phys. Rev. D46 (1992) 4052, hep-lat/9205007.
A.X. El-Khadra, A.S. Kronfeld and P.B. Mackenzie, Phys. Rev. D55 (1997) 3933, hep-lat/9604004.
ALPHA Collaboration, J. Heitger and R. Sommer, JHEP 02 (2004) 022, hep-lat/0310035.
M. Guagnelli, F. Palombi, R. Petronzio and N. Tantalo, Phys. Lett. B546 (2002) 237, hep-lat/0206023.
E. Eichten and B.R. Hill, Phys. Lett. B234 (1990) 511.
M. Della Morte, A. Shindler and R. Sommer, JHEP 08 (2005) 051, hep-lat/0506008.
R. Sommer, Nonperturbative QCD: Renormalization, O(a)-improvement and matching to heavy quark effective theory, (2006), hep-lat/0611020.
S. Aoki, Y. Kuramashi and S.-i. Tominaga, Prog. Theor. Phys. 109 (2003) 383, hep-lat/0107009.
N.H. Christ, M. Li and H.W. Lin, Phys. Rev. D76 (2007) 074505, hep-lat/0608006.
ALPHA Collaboration, J. Heitger, M. Kurth and R. Sommer, Nucl. Phys. B669 (2003) 173, hep-lat/0302019.
B. Grinstein, E.E. Jenkins, A.V. Manohar, M.J. Savage and M.B. Wise, Nucl. Phys. B380 (1992) 369, hep-ph/9204207; J.L. Goity, Phys. Rev. D46 (1992) 3929, hep-ph/9206230; M.J. Booth, Phys. Rev. D51 (1995) 2338, hep-ph/9411433; S.R. Sharpe and Y. Zhang, Phys. Rev. D53 (1996) 5125, hep-lat/9510037.
A.S. Kronfeld and S.M. Ryan, Phys. Lett. B543 (2002) 59, hep-ph/0206058.
HPQCD Collaboration, A. Gray et al., Phys. Rev. Lett. 95 (2005) 212001, hep-lat/0507015.
N. Tantalo, Lattice calculations for B and K mixing, (2007), hep-ph/0703241.
D. Bećirević et al., Nucl. Phys. B618 (2001) 241, hep-lat/0002025.
UKQCD Collaboration, L. Lellouch and C.J.D. Lin, Phys. Rev. D64 (2001) 094501, hep-ph/0011086.
UKQCD Collaboration, K.C. Bowler et al., Nucl. Phys. B619 (2001) 507, hep-lat/0007020.
G.M. de Divitiis, M. Guagnelli, F. Palombi, R. Petronzio and N. Tantalo, Nucl. Phys. B672 (2003) 372, hep-lat/0307005.
M. Della Morte et al., JHEP 0802 (2008) 078, arXiv:0710.2201 [hep-lat].
D. Guazzini, R. Sommer and N. Tantalo, JHEP 0801 (2008) 076, arXiv:0710.2229 [hep-lat].
C.W. Bernard et al., Phys. Rev. Lett. 81 (1998) 4812, hep-ph/9806412.
A.X. El-Khadra, A.S. Kronfeld, P.B. Mackenzie, S.M. Ryan and J.N. Simone, Phys. Rev. D58 (1998) 014506, hep-ph/9711426.
JLQCD Collaboration, S. Aoki et al., Phys. Rev. Lett. 80 (1998) 5711.
CP-PACS Collaboration, A. Ali Khan et al., Phys. Rev. D64 (2001) 034505, hep-lat/0010009.
MILC Collaboration, C. Bernard et al., Phys. Rev. D66 (2002) 094501, hep-lat/0206016.
A. Ali Khan et al., Phys. Lett. B427 (1998) 132, hep-lat/9801038.
JLQCD Collaboration, K.I. Ishikawa et al., Phys. Rev. D61 (2000) 074501, hep-lat/9905036.
CP-PACS Collaboration, A. Ali Khan et al., Phys. Rev. D64 (2001) 054504, hep-lat/0103020.
S. Collins et al., Phys. Rev. D60 (1999) 074504, hep-lat/9901001.
JLQCD Collaboration, S. Aoki et al., Phys. Rev. Lett. 91 (2003) 212001, hep-ph/0307039.
M. Wingate, C.T.H. Davies, A. Gray, G.P. Lepage and J. Shigemitsu, Phys. Rev. Lett. 92 (2004) 162001, hep-ph/0311130.
D0 Collaboration, V.M. Abazov et al., Phys. Rev. Lett. 97 (2006) 021802, hep-ex/0603029.
CDF Collaboration, A. Abulencia et al., Phys. Rev. Lett. 97 (2006) 062003, hep-ex/0606027.
V. Gadiyak and O. Loktik, Phys. Rev. D72 (2005) 114504, hep-lat/0509075.
V. Gimenez and G. Martinelli, Phys. Lett. B398 (1997) 135, hep-lat/9610024.
V. Gimenez and J. Reyes, Nucl. Phys. B545 (1999) 576, hep-lat/9806023.
UKQCD Collaboration, A.K. Ewing et al., Phys. Rev. D54 (1996) 3526, hep-lat/9508030.
J.C. Christensen, T. Draper and C. McNeile, Phys. Rev. D56 (1997) 6993, hep-lat/9610026.
D. Bećirević et al., Nucl. Phys. B618 (2001) 241, hep-lat/0002025.
D. Bećirević, V. Gimenez, G. Martinelli, M. Papinutto and J. Reyes, JHEP 04 (2002) 025, hep-lat/0110091.
F. Palombi, M. Papinutto, C. Pena and H. Wittig, JHEP 08 (2006) 017, hep-lat/0604014.
F. Palombi, M. Papinutto, C. Pena and H. Wittig, JHEP 09 (2007) 062, arXiv:0706.4153 [hep-lat].
P. Dimopoulos et al., PoS LAT2007 (2007) 368, arXiv:0710.2862 [hep-lat]; ALPHA Collaboration, P. Dimopoulos et al., JHEP 0805 (2008) 065, arXiv:0712.2429 [hep-lat].
D. Bećirević and A.B. Kaidalov, Phys. Lett. B478 (2000) 417, hep-ph/9904490.
S. Hashimoto, Int. J. Mod. Phys. A20 (2005) 5133, hep-ph/0411126.
UKQCD Collaboration, K.C. Bowler et al., Phys. Lett. B486 (2000) 111, hep-lat/9911011.
A. Abada et al., Nucl. Phys. B619 (2001) 565, hep-lat/0011065.
A.X. El-Khadra, A.S. Kronfeld, P.B. Mackenzie, S.M. Ryan and J.N. Simone, Phys. Rev. D64 (2001) 014502, hep-ph/0101023.
JLQCD Collaboration, S. Aoki et al., Phys. Rev. D64 (2001) 114505, hep-lat/0106024.
M. Okamoto et al., Nucl. Phys. B (Proc. Suppl.) 140 (2005) 461, hep-lat/0409116.
HPQCD Collaboration, E. Dalgic et al., Phys. Rev. D73 (2006) 074502, hep-lat/0601021.
CLEO Collaboration, S.B. Athar et al., Phys. Rev. D68 (2003) 072003, hep-ex/0304019.
N. Isgur and M.B. Wise, Phys. Lett. B232 (1989) 113; Phys. Lett. B237 (1990) 527.
M.E. Luke, Phys. Lett. B252 (1990) 447.
C.W. Bernard, Y. Shen and A. Soni, Phys. Lett. B317 (1993) 164, hep-lat/9307005.
UKQCD Collaboration, S.P. Booth et al., Phys. Rev. Lett. 72 (1994) 462, hep-lat/9308019.
UKQCD Collaboration, K.C. Bowler et al., Phys. Rev. D52 (1995) 5067, hep-ph/9504231.
UKQCD Collaboration, K.C. Bowler, G. Douglas, R.D. Kenway, G.N. Lacagnina and C.M. Maynard, Nucl. Phys. B637 (2002) 293, hep-lat/0202029.
G.M. de Divitiis, E. Molinaro, R. Petronzio and N. Tantalo, Phys. Lett. B655 (2007) 45, arXiv:0707.0582 [hep-lat]; G.M. de Divitiis, R. Petronzio and N. Tantalo, JHEP 0710 (2007) 062, arXiv:0707.0587 [hep-lat].
S. Hashimoto et al., Phys. Rev. D61 (2000) 014502, hep-ph/9906376.
S. Hashimoto, A.S. Kronfeld, P.B. Mackenzie, S.M. Ryan and J.N. Simone, Phys. Rev. D66 (2002) 014503, hep-ph/0110253.
H. Wittig, (2008).
K.G. Wilson, Nucl. Phys. Proc. Suppl. 17 (1990) 82.
K.G. Wilson, Nucl. Phys. Proc. Suppl. 140 (2005) 3, hep-lat/0412043.
Particle Data Group, C. Patrignani et al., Chin. Phys. C40 (2016) 100001.
G. Colangelo et al., Eur. Phys. J. C71 (2011) 1695, 1011.4408.
S. Aoki et al., Eur. Phys. J. C74 (2014) 2890, 1310.8555.
S. Aoki et al., Eur. Phys. J. C77 (2017) 112, 1607.00299.
S. Aoki et al., Eur. Phys. J. C80 (2020) 113. https://doi.org/10.1140/epjc/s10052-019-7354-7
PACS-CS, S. Aoki et al., Phys. Rev. D79 (2009) 034503, 0807.1661.
S. Dürr et al., Science 322 (2008) 1224, 0906.3599.
ETM, C. Alexandrou et al., Phys. Rev. D78 (2008) 014509, 0803.3190.
MILC, A. Bazavov et al., Rev. Mod. Phys. 82 (2010) 1349, 0903.3598.
N. Tantalo, PoS LATTICE2013 (2014) 007, 1311.2797.
A. Portelli, PoS LATTICE2014 (2015) 013, 1505.07057.
A. Patella, PoS LATTICE2016 (2017) 020, 1702.03857.
T. Blum, R. Zhou, T. Doi, M. Hayakawa, T. Izubuchi, S. Uno and N. Yamada, Phys. Rev. D82 (2010) 094508, 1006.1311.
Budapest-Marseille-Wuppertal, S. Borsanyi et al., Phys. Rev. Lett. 111 (2013) 252001, 1306.2287.
S. Borsanyi et al., Science 347 (2015) 1452, 1406.4088.
R. Horsley et al., J. Phys. G43 (2016) 10LT02, 1508.06401.
T. Blum, T. Doi, M. Hayakawa, T. Izubuchi and N. Yamada, Phys. Rev. D76 (2007) 114508, 0708.0484.
RM123, G.M. de Divitiis et al., Phys. Rev. D87 (2013) 114505, 1303.4896.
D. Giusti, V. Lubicz, C. Tarantino, G. Martinelli, S. Sanfilippo, S. Simula and N. Tantalo, Phys. Rev. D95 (2017) 114504, 1704.06561.
R. Horsley et al., JHEP 04 (2016) 093, 1509.00799.
Z. Fodor et al., Phys. Rev. Lett. 117 (2016) 082001, 1604.07112.
MILC, S. Basak et al., Phys. Rev. D99 (2019) 034503, 1807.05556.
R.F. Dashen, Phys. Rev. 183 (1969) 1245.
M. Lüscher, Commun. Math. Phys. 104 (1986) 177.
M. Lüscher, Commun. Math. Phys. 105 (1986) 153.
M. Lüscher, Nucl. Phys. B354 (1991) 531.
M. Lüscher, Nucl. Phys. B364 (1991) 237.
C. Andersen, J. Bulava, B. Hörz and C. Morningstar, Nucl. Phys. B939 (2019) 145, 1808.05007.
C. Michael, Nucl. Phys. B259 (1985) 58.
M. Lüscher and U. Wolff, Nucl. Phys. B339 (1990) 222.
B. Blossier, M. Della Morte, G. von Hippel, T. Mendes and R. Sommer, JHEP 04 (2009) 094, 0902.1265.
J. Foley, K. Jimmy Juge, A. O’Cais, M. Peardon, S.M. Ryan and J.I. Skullerud, Comput. Phys. Commun. 172 (2005) 145, hep-lat/0505023.
Hadron Spectrum, M. Peardon et al., Phys. Rev. D80 (2009) 054506, 0905.2160.
C. Morningstar, J. Bulava, J. Foley, K.J. Juge, D. Lenkner, M. Peardon and C.H. Wong, Phys. Rev. D83 (2011) 114505, 1104.3870.
CP-PACS, T. Yamazaki et al., Phys. Rev. D70 (2004) 074513, hep-lat/0402025.
NPLQCD, S.R. Beane, P.F. Bedaque, K. Orginos and M.J. Savage, Phys. Rev. D73 (2006) 054503, hep-lat/0506013.
S.R. Beane, T.C. Luu, K. Orginos, A. Parreno, M.J. Savage, A. Torok and A. Walker-Loud, Phys. Rev. D77 (2008) 014505, 0706.3026.
X. Feng, K. Jansen and D.B. Renner, Phys. Lett. B684 (2010) 268, 0909.3255.
T. Yagi, S. Hashimoto, O. Morimatsu and M. Ohtani, (2011), 1108.2970.
Z. Fu, Phys. Rev. D87 (2013) 074501, 1303.0517.
PACS-CS, K. Sasaki, N. Ishizuka, M. Oka and T. Yamazaki, Phys. Rev. D89 (2014) 054502, 1311.7226.
D.J. Wilson, R.A. Briceño, J.J. Dudek, R.G. Edwards and C.E. Thomas, Phys. Rev. D92 (2015) 094502, 1507.02599.
ETM, C. Helmes et al., JHEP 09 (2015) 109, 1506.00408.
RQCD, G.S. Bali, S. Collins, A. Cox, G. Donald, M. Göckeler, C.B. Lang and A. Schäfer, Phys. Rev. D93 (2016) 054509, 1512.08678.
J. Bulava, B. Fahy, B. Hörz, K.J. Juge, C. Morningstar and C.H. Wong, Nucl. Phys. B910 (2016) 842, 1604.05593.
L. Liu et al., Phys. Rev. D96 (2017) 054516, 1612.02061.
D. Guo, A. Alexandru, R. Molina and M. Döring, Phys. Rev. D94 (2016) 034501, 1605.03993.
C. Morningstar, J. Bulava, B. Singha, R. Brett, J. Fallica, A. Hanlon and B. Hörz, Nucl. Phys. B924 (2017) 477, 1707.05817.
S. Prelovsek, L. Leskovec, C.B. Lang and D. Mohler, Phys. Rev. D88 (2013) 054508, 1307.0736.
R. Brett, J. Bulava, J. Fallica, A. Hanlon, B. Hörz and C. Morningstar, Nucl. Phys. B932 (2018) 29, 1802.03100.
C. Helmes, C. Jost, B. Knippschild, B. Kostrzewa, L. Liu, C. Urbach and M. Werner, PoS LATTICE2016 (2016) 135, 1611.09584.
C. Helmes, C. Jost, B. Knippschild, B. Kostrzewa, L. Liu, C. Urbach and M. Werner, Phys. Rev. D96 (2017) 034510, 1703.04737.
A. Torok, S.R. Beane, W. Detmold, T.C. Luu, K. Orginos, A. Parreno, M.J. Savage and A. Walker-Loud, Phys. Rev. D81 (2010) 074506, 0907.1913.
C.B. Lang and V. Verduci, Phys. Rev. D87 (2013) 054502, 1212.5055.
W. Detmold and A. Nicholson, Phys. Rev. D93 (2016) 114511, 1511.02275.
C.B. Lang, L. Leskovec, M. Padmanath and S. Prelovsek, Phys. Rev. D95 (2017) 014510, 1610.01422.
C.W. Andersen, J. Bulava, B. Hörz and C. Morningstar, Phys. Rev. D97 (2018) 014506, 1710.01557.
K. Orginos, A. Parreno, M.J. Savage, S.R. Beane, E. Chang and W. Detmold, Phys. Rev. D92 (2015) 114512, 1508.07583.
A. Francis, J.R. Green, P.M. Junnarkar, C. Miao, T.D. Rae and H. Wittig, (2018), 1805.03966.
S. He, X. Feng and C. Liu, JHEP 07 (2005) 011, hep-lat/0504019.
V. Bernard, M. Lage, U.G. Meissner and A. Rusetsky, JHEP 01 (2011) 019, 1010.6018.
M.T. Hansen and S.R. Sharpe, Phys. Rev. D86 (2012) 016007, 1204.0826.
R.A. Briceño and Z. Davoudi, Phys. Rev. D88 (2013) 094507, 1204.1110.
P. Guo, J. Dudek, R. Edwards and A.P. Szczepaniak, Phys. Rev. D88 (2013) 014501, 1211.0929.
L. Roca and E. Oset, Phys. Rev. D85 (2012) 054507, 1201.0438.
K. Polejaeva and A. Rusetsky, Eur. Phys. J. A48 (2012) 67, 1203.1241.
R.A. Briceño and Z. Davoudi, Phys. Rev. D87 (2013) 094507, 1212.3398.
M.T. Hansen and S.R. Sharpe, Phys. Rev. D90 (2014) 116003, 1408.5933.
M.T. Hansen and S.R. Sharpe, Phys. Rev. D93 (2016) 096006, 1602.00324, [Erratum: Phys. Rev. D96, no. 3, 039901(2017)].
M.T. Hansen and S.R. Sharpe, Phys. Rev. D95 (2017) 034501, 1609.04317.
R.A. Briceño, M.T. Hansen and S.R. Sharpe, Phys. Rev. D95 (2017) 074510, 1701.07465.
R.A. Briceño, M.T. Hansen and S.R. Sharpe, Phys. Rev. D98 (2018) 014506, 1803.04169.
L. Lellouch and M. Lüscher, Commun. Math. Phys. 219 (2001) 31, hep-lat/0003023.
H.B. Meyer, Phys. Rev. Lett. 107 (2011) 072002, 1105.1892.
X. Feng, S. Aoki, S. Hashimoto and T. Kaneko, Phys. Rev. D91 (2015) 054504, 1412.6319.
J. Bulava, B. Hörz, B. Fahy, K.J. Juge, C. Morningstar and C.H. Wong, PoS LATTICE2015 (2016) 069, 1511.02351.
F. Erben, J.R. Green, D. Mohler and H. Wittig, Phys. Rev. D101 (2020) 054504, 1910.01083.
S. Prelovsek, PoS LATTICE2014 (2014) 015, 1411.0405.
C. Liu, PoS LATTICE2016 (2017) 006, 1612.00103.
D. Mohler, EPJ Web Conf. 181 (2018) 01027.
CKMfitter Group, J. Charles et al., Eur. Phys. J. C41 (2005) 1, hep-ph/0406184.
J. Charles et al., Phys. Rev. D91 (2015) 073007, 1501.05013.
UTfit, M. Bona et al., JHEP 10 (2006) 081, hep-ph/0606167.
UTfit, M. Bona et al., JHEP 03 (2008) 049, 0707.0636.
Fermilab Lattice, MILC, TUMQCD, A. Bazavov et al., Phys. Rev. D98 (2018) 054517, 1802.04248.
Y. Maezawa and P. Petreczky, Phys. Rev. D94 (2016) 034507, 1606.08798.
ETM, N. Carrasco et al., Nucl. Phys. B887 (2014) 19, 1403.4504.
B. Chakraborty et al., Phys. Rev. D91 (2015) 054508, 1408.4169.
Fermilab Lattice, MILC, A. Bazavov et al., Phys. Rev. D90 (2014) 074509, 1407.3772.
S. Dürr et al., Phys. Lett. B701 (2011) 265, 1011.2403.
S. Dürr et al., JHEP 08 (2011) 148, 1011.2711.
C. McNeile, C.T.H. Davies, E. Follana, K. Hornbostel and G.P. Lepage, Phys. Rev. D82 (2010) 034512, 1004.4285.
RBC/UKQCD, T. Blum et al., Phys. Rev. D93 (2016) 074505, 1411.7017.
ETM, B. Blossier, P. Dimopoulos, R. Frezzotti, V. Lubicz, M. Petschlies, F. Sanfilippo, S. Simula and C. Tarantino, Phys. Rev. D82 (2010) 114513, 1010.3659.
P. Fritzsch, F. Knechtli, B. Leder, M. Marinkovic, S. Schaefer, R. Sommer and F. Virotta, Nucl. Phys. B865 (2012) 397, 1205.5380.