Quantum Field Theory

2 Scalar Fields
  2.1 Classical scalar fields
  2.2 Quantisation
  2.3 Plane wave expansion
  2.4 The quantum Hamiltonian
  2.5 Heisenberg picture
  2.6 The Feynman propagator
  2.7 Consistency with causality


Introduction
Quantum Field Theory is a cornerstone of modern physics. It underlies, for example, the description of elementary particles: the Standard Model of particle physics is a QFT. There is currently no observational evidence to suggest that QFT is insufficient to describe particle behaviour, and indeed many theories of beyond-the-Standard-Model physics (e.g. supersymmetry, extra dimensions) are QFTs. There are some theoretical reasons, however, for believing that QFT will not work at energies above the Planck scale, at which gravity becomes important. Aside from particle physics, QFT is also widely used in the description of condensed matter systems, and there has been a fruitful interplay between the fields of condensed matter and high energy physics.
We will see that the need for QFT arises when one tries to unify special relativity and quantum mechanics, which explains why theories of use in high energy particle physics are quantum field theories. Historically, Quantum Electrodynamics (QED) emerged as the prototype of modern QFTs. It was developed in the late 1940s and early 1950s chiefly by Feynman, Schwinger and Tomonaga, and has the distinction of being the most accurately verified theory of all time: the anomalous magnetic dipole moment of the electron predicted by QED agrees with experiment with a stunning accuracy of one part in $10^{10}$! Since then, QED has been understood as forming part of a larger theory, the Standard Model of particle physics, which also describes the weak and strong nuclear forces. As you will learn at this school, electromagnetism and the weak interaction can be unified into a single "electroweak" theory, and the theory of the strong force is described by Quantum Chromodynamics (QCD). QCD has been verified in a wide range of contexts, albeit not as accurately as QED (because the QED coupling is much weaker, allowing more accurate perturbative calculations to be carried out).
As is clear from the above discussion, QFT is a type of theory, rather than a particular theory. In this course, our aim is to introduce what a QFT is, and how to derive scattering amplitudes in perturbation theory (in the form of Feynman rules). For this purpose, it is sufficient to consider the simple example of a single, real scalar field. More physically relevant examples will be dealt with in the other courses. Throughout, we will follow the so-called canonical quantisation approach to QFT, rather than the path integral approach. Although the latter approach is more elegant, it is less easily presented in such a short course.
The structure of these notes is as follows. In the rest of the introduction, we review those aspects of classical and quantum mechanics which are relevant in discussing QFT. In particular, we go over the Lagrangian formalism in point particle mechanics, and see how this can also be used to describe classical fields. We then look at the quantum mechanics of non-relativistic point particles, and recall the properties of the quantum harmonic oscillator, which will be useful in what follows. We then briefly show how attempts to construct a relativistic analogue of the Schrödinger equation lead to inconsistencies. Next, we discuss classical field theory, deriving the equations of motion that a relativistic scalar field theory has to satisfy, and examining the relationship between symmetries and conservation laws. We then discuss the quantum theory of free fields, and interpret the resulting theory in terms of particles, before showing how to describe interactions via the S-matrix and its relation to Green's functions. Finally, we describe how to obtain explicit results for scattering amplitudes using perturbation theory, which leads (via Wick's theorem) to Feynman diagrams.

Classical Mechanics
Let us begin this little review by considering the simplest possible system in classical mechanics, a single point particle of mass $m$ in one dimension, whose coordinate and velocity are functions of time, $x(t)$ and $\dot{x}(t) = dx(t)/dt$, respectively. Let the particle be exposed to a time-independent potential $V(x)$. Its motion is then governed by Newton's law,
$$ m\,\frac{d^2 x}{dt^2} = -\frac{dV(x)}{dx} = F(x), $$
where $F(x)$ is the force exerted on the particle. Solving this equation of motion involves two integrations, and hence two arbitrary integration constants to be fixed by initial conditions. Specifying, e.g., the position $x(t_0)$ and velocity $\dot{x}(t_0)$ of the particle at some initial time $t_0$ completely determines its motion: knowing the initial conditions and the equations of motion, we also know the evolution of the particle at all times (provided we can solve the equations of motion).
We can also derive the equation of motion using an entirely different approach, via the Lagrangian formalism. This is perhaps more abstract than Newton's force-based approach, but in fact is easier to generalise and technically simpler in complicated systems (such as field theory!), not least because it avoids us having to think about forces at all. First, we introduce the Lagrangian,
$$ L(x, \dot{x}) = T - V = \frac{1}{2}\,m\dot{x}^2 - V(x), \qquad (2) $$
which is a function of coordinates and velocities, given by the difference between the kinetic and potential energies of the particle. Next, we define the action,
$$ S = \int_{t_0}^{t_1} dt\; L(x, \dot{x}). $$

Figure 1: Variation of particle trajectory with identified initial and end points.
The equations of motion are then given by the principle of least action, which says that the trajectory $x(t)$ followed by the particle is precisely that for which $S$ is extremised. To verify this in the present case, let us rederive Newton's Second Law.
First let us suppose that $x(t)$ is indeed the trajectory that extremises the action, and then introduce a small perturbation,
$$ x(t) \to x(t) + \delta x(t), \qquad \delta x(t_0) = \delta x(t_1) = 0, $$
such that the end points are fixed. This sends $S$ to some $S + \delta S$, where $\delta S = 0$ if $S$ is extremised. One may Taylor expand to give
$$ S + \delta S = \int_{t_0}^{t_1} dt\; L(x + \delta x,\, \dot{x} + \delta\dot{x}) = \int_{t_0}^{t_1} dt \left\{ L(x, \dot{x}) + \frac{\partial L}{\partial x}\,\delta x + \frac{\partial L}{\partial \dot{x}}\,\delta\dot{x} \right\} $$
$$ = S + \left.\frac{\partial L}{\partial \dot{x}}\,\delta x\right|_{t_0}^{t_1} + \int_{t_0}^{t_1} dt \left\{ \frac{\partial L}{\partial x} - \frac{d}{dt}\frac{\partial L}{\partial \dot{x}} \right\}\delta x, $$
where we performed an integration by parts on the last term in the first line. The second and third terms in the last line are the variation of the action, $\delta S$, under variations of the trajectory, $\delta x$. The second term vanishes because of the boundary conditions for the variation, and we are left with the third. Now the principle of least action demands $\delta S = 0$. For the remaining integral to vanish for arbitrary $\delta x$, the integrand must vanish, leaving us with the Euler-Lagrange equation:
$$ \frac{\partial L}{\partial x} - \frac{d}{dt}\frac{\partial L}{\partial \dot{x}} = 0. $$
If we insert the Lagrangian of our point particle, Eq. (2), into the Euler-Lagrange equation we obtain
$$ \frac{\partial L}{\partial x} = -\frac{dV(x)}{dx} = F(x), \qquad \frac{d}{dt}\frac{\partial L}{\partial \dot{x}} = \frac{d}{dt}\,(m\dot{x}) = m\ddot{x}, $$
so that $m\ddot{x} = F(x)$. Hence, we have derived the equation of motion (the Euler-Lagrange equation) using the principle of least action and found it to be equivalent to Newton's Second Law. The benefit of the former is that it can be easily generalised to other systems in any number of dimensions, multi-particle systems, or systems with an infinite number of degrees of freedom, as needed for field theory.
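The principle of least action is easy to check numerically. The sketch below (an illustration, not part of the notes' problem sets; the discretisation and perturbation shapes are arbitrary choices) compares the discretised action of the straight-line free-particle path ($V = 0$, $m = 1$) with perturbed paths sharing the same endpoints:

```python
import numpy as np

# Numerical check of the principle of least action for a free particle
# (V = 0, m = 1): the straight-line path between fixed endpoints
# minimises S = integral of (1/2) m xdot^2 dt.

def action(x, dt, m=1.0):
    """Discretised action for V = 0: sum of (1/2) m (dx/dt)^2 * dt."""
    v = np.diff(x) / dt
    return 0.5 * m * np.sum(v**2) * dt

t = np.linspace(0.0, 1.0, 201)
dt = t[1] - t[0]
x_true = t.copy()               # straight line from x(0) = 0 to x(1) = 1

S_true = action(x_true, dt)
for eps in (0.05, 0.1, 0.2):
    # perturbation vanishing at the endpoints, as the derivation requires
    x_pert = x_true + eps * np.sin(np.pi * t)
    assert action(x_pert, dt) > S_true   # any such variation increases S

print(f"S[straight line] = {S_true:.4f}")   # analytic value: 1/2
```

Every admissible perturbation raises the action, consistent with the straight line being the extremal (here minimal) trajectory.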
For example, a general system of point particles has a set $\{q_i\}$ of generalised coordinates, which may not be simple positions but also angles etc. The equations of motion are then given by
$$ \frac{d}{dt}\,\frac{\partial L}{\partial \dot{q}_i} = \frac{\partial L}{\partial q_i}, $$
by analogy with the one-dimensional case. That is, each coordinate has its own Euler-Lagrange equation (which may nevertheless depend on the other coordinates, so that the equations of motion are coupled). Another advantage of the Lagrangian formalism is that the relationship between symmetries and conserved quantities is readily understood -- more on this later. First, let us note that there is yet another way to think about classical mechanics (that we will see again in quantum mechanics / field theory), namely via the Hamiltonian formalism. Given a Lagrangian depending on generalised coordinates $\{q_i\}$, we may define the conjugate momenta
$$ p_i = \frac{\partial L}{\partial \dot{q}_i}; $$
e.g. in the simple one-dimensional example given above, there is a single momentum $p = m\dot{x}$ conjugate to $x$. We recognise this as the familiar definition of momentum, but it is not always true that $p_i = m\dot{q}_i$.
We may now define the Hamiltonian,
$$ H(\{q_i\}, \{p_i\}) = \sum_i p_i\,\dot{q}_i - L. $$
As an example, consider again
$$ L = \frac{1}{2}\,m\dot{x}^2 - V(x). $$
It is easy to show from the above definition that
$$ H = \frac{p^2}{2m} + V(x), $$
which we recognise as the total energy of the system. From the definition of the Hamiltonian one may derive (problem 1.1)
$$ \frac{\partial H}{\partial q_i} = -\dot{p}_i, \qquad \frac{\partial H}{\partial p_i} = \dot{q}_i, $$
which constitute Hamilton's equations. These are useful in proving the relation between symmetries and conserved quantities. For example, one readily sees from the above equations that the momentum $p_i$ is conserved if $H$ does not depend explicitly on $q_i$. That is, conservation of momentum is related to invariance under spatial translations, if $q_i$ can be interpreted as a simple position coordinate.
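Hamilton's equations can likewise be illustrated numerically. The sketch below (an illustration with arbitrary parameter values; the leapfrog splitting is just one convenient integrator) evolves the harmonic oscillator and checks that $H$, the total energy, stays constant:

```python
import numpy as np

# Integrating Hamilton's equations qdot = dH/dp, pdot = -dH/dq for the
# harmonic oscillator H = p^2/(2m) + (1/2) m w^2 q^2, using a leapfrog
# step so that the energy stays (almost) exactly constant, illustrating
# that H is the conserved total energy of the system.

m, w, dt = 1.0, 2.0, 1e-3
q, p = 1.0, 0.0
H0 = p**2 / (2 * m) + 0.5 * m * w**2 * q**2

for _ in range(10_000):
    p -= 0.5 * dt * m * w**2 * q    # half kick:  pdot = -dH/dq
    q += dt * p / m                 # full drift: qdot =  dH/dp
    p -= 0.5 * dt * m * w**2 * q    # half kick

H = p**2 / (2 * m) + 0.5 * m * w**2 * q**2
print(abs(H - H0))   # energy drift: tiny for a symplectic integrator
```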

Quantum mechanics
Having set up some basic formalism for classical mechanics, let us now move on to quantum mechanics. In doing so we shall use canonical quantisation, which is historically what was used first and what we shall later use to quantise fields as well. We remark, however, that one can also quantise a theory using path integrals. Canonical quantisation consists of two steps. Firstly, the dynamical variables of a system are replaced by operators, which we denote by a hat. Secondly, one imposes commutation relations on these operators,
$$ [\hat{x}_i, \hat{p}_j] = i\hbar\,\delta_{ij}, \qquad [\hat{x}_i, \hat{x}_j] = [\hat{p}_i, \hat{p}_j] = 0. \qquad (9) $$
The physical state of a quantum mechanical system is encoded in state vectors $|\psi\rangle$, which are elements of a Hilbert space $\mathcal{H}$. The hermitian conjugate state is $\langle\psi| = (|\psi\rangle)^\dagger$, and the modulus squared of the scalar product between two states gives the probability for the system to go from state 1 to state 2,
$$ |\langle\psi_1|\psi_2\rangle|^2 = \text{probability for } |\psi_1\rangle \to |\psi_2\rangle. $$
Hermiticity ensures that expectation values are real, as required for measurable quantities. Due to the probabilistic nature of quantum mechanics, expectation values correspond to statistical averages, or mean values,
$$ \langle \hat{O} \rangle = \langle\psi|\hat{O}|\psi\rangle, $$
with a variance
$$ (\Delta O)^2 = \langle\psi|(\hat{O} - \langle\hat{O}\rangle)^2|\psi\rangle = \langle\hat{O}^2\rangle - \langle\hat{O}\rangle^2. $$
An important concept in quantum mechanics is that of eigenstates of an operator, defined by
$$ \hat{O}\,|\psi\rangle = O\,|\psi\rangle, $$
where $O$ is the eigenvalue. Evidently, between eigenstates we have $\Delta O = 0$. Examples are coordinate eigenstates, $\hat{x}|x\rangle = x|x\rangle$, and momentum eigenstates, $\hat{p}|p\rangle = p|p\rangle$, describing a particle at position $x$ or with momentum $p$, respectively. However, a state vector cannot be a simultaneous eigenstate of non-commuting operators. This leads to the Heisenberg uncertainty relation for any two non-commuting operators $\hat{A}$, $\hat{B}$:
$$ \Delta A\,\Delta B \geq \frac{1}{2}\,\big|\langle\psi|[\hat{A},\hat{B}]|\psi\rangle\big|. $$
Finally, sets of eigenstates can be orthonormalised and we assume completeness, i.e. they span the entire Hilbert space:
$$ \langle p'|p\rangle = \delta(p - p'), \qquad \int d^3p\; |p\rangle\langle p| = 1. $$
As a consequence, an arbitrary state vector can always be expanded in terms of a set of eigenstates. We may then define the position space wavefunction $\psi(x) = \langle x|\psi\rangle$, so that
$$ \langle\psi_1|\psi_2\rangle = \int d^3x\; \langle\psi_1|x\rangle\langle x|\psi_2\rangle = \int d^3x\; \psi_1^*(x)\,\psi_2(x). $$
Acting on the wavefunction, the explicit form of the position and momentum operators is
$$ \hat{x} = x, \qquad \hat{p} = -i\hbar\nabla, $$
so that the Hamiltonian operator is
$$ \hat{H} = \frac{\hat{p}^2}{2m} + V(\hat{x}) = -\frac{\hbar^2\nabla^2}{2m} + V(x). $$
Having quantised our system, we now want to describe its time evolution. This can be done in different "pictures", depending on whether we consider the state vectors or the operators (or both) to depend explicitly on $t$, such that expectation values remain the same. Two extreme cases are those where the operators do not depend on time (the Schrödinger picture), and where the state vectors do not depend on time (the Heisenberg picture). We discuss these two choices in the following sections.

The Schrödinger picture
In this approach state vectors are functions of time, $|\psi(t)\rangle$, while operators are time independent, $\partial_t\hat{O} = 0$. The time evolution of a system is described by the Schrödinger equation,
$$ i\hbar\,\frac{\partial}{\partial t}\,\Psi(x,t) = \hat{H}\,\Psi(x,t). $$
If at some initial time $t_0$ our system is in the state $\Psi(x, t_0)$, then the time dependent state vector
$$ \Psi(x,t) = e^{-\frac{i}{\hbar}\hat{H}(t-t_0)}\;\Psi(x,t_0) \qquad (21) $$
solves the Schrödinger equation for all later times $t$. The expectation value of some hermitian operator $\hat{O}$ at a given time $t$ is then defined as
$$ \langle\hat{O}\rangle_t = \int d^3x\; \Psi^*(x,t)\,\hat{O}\,\Psi(x,t), \qquad (22) $$
and the normalisation of the wavefunction is given by
$$ \int d^3x\; \Psi^*(x,t)\,\Psi(x,t) = \langle 1\rangle_t = 1. \qquad (23) $$
Since $\Psi^*\Psi$ is positive, it is natural to interpret it as the probability density for finding a particle at position $x$. Furthermore one can derive a conserved current $\mathbf{j}$, as well as a continuity equation, by considering
$$ \Psi^* \times (\text{Schr. eq.}) - \Psi \times (\text{Schr. eq.})^*. $$
The continuity equation reads
$$ \frac{\partial}{\partial t}\,\rho + \nabla\cdot\mathbf{j} = 0, $$
where the density $\rho$ and the current $\mathbf{j}$ are given by
$$ \rho = \Psi^*\Psi, \qquad \mathbf{j} = \frac{\hbar}{2im}\left( \Psi^*\nabla\Psi - (\nabla\Psi^*)\Psi \right). $$
Now that we have derived the continuity equation let us discuss the probability interpretation of Quantum Mechanics in more detail. Consider a finite volume $V$ with boundary $S$. The integrated continuity equation is
$$ \frac{\partial}{\partial t}\int_V d^3x\;\rho = -\int_V d^3x\;\nabla\cdot\mathbf{j} = -\int_S d\mathbf{s}\cdot\mathbf{j}, $$
where in the last step we have used Gauss's theorem. Using Eq. (23) the left-hand side can be rewritten and we obtain
$$ \frac{\partial}{\partial t}\,\langle 1\rangle_t = -\int_S d\mathbf{s}\cdot\mathbf{j} = 0 \quad \text{for } \mathbf{j} = 0 \text{ on } S. $$
In other words, provided that $\mathbf{j} = 0$ everywhere at the boundary $S$, we find that the time derivative of $\langle 1\rangle_t$ vanishes. Since $\langle 1\rangle_t$ represents the total probability for finding the particle anywhere inside the volume $V$, we conclude that this probability must be conserved: particles cannot be created or destroyed in our theory. Non-relativistic Quantum Mechanics thus provides a consistent formalism to describe a single particle. The quantity $\Psi(x,t)$ is interpreted as a one-particle wave function.
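Probability conservation can be seen concretely by evolving a wave packet with a unitary discretisation of the Schrödinger equation. The sketch below (an illustration; the Crank-Nicolson scheme, grid size, and packet shape are all arbitrary choices, with $\hbar = m = 1$) checks that the total probability stays fixed:

```python
import numpy as np

# Free-particle Schrödinger evolution (hbar = m = 1) with the
# Crank-Nicolson scheme, which is unitary: the total probability
# integral |Psi|^2 dx stays constant, as the continuity equation demands.

N, L, dt = 400, 40.0, 0.01
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]

# Gaussian wave packet with mean momentum k0 = 2, normalised to 1
psi = np.exp(-x**2) * np.exp(2j * x)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

# H = -(1/2) d^2/dx^2 as a (real symmetric) tridiagonal matrix
H = (np.diag(np.full(N, 1.0 / dx**2))
     + np.diag(np.full(N - 1, -0.5 / dx**2), 1)
     + np.diag(np.full(N - 1, -0.5 / dx**2), -1))

A = np.eye(N) + 0.5j * dt * H        # (1 + i dt H / 2)
B = np.eye(N) - 0.5j * dt * H        # (1 - i dt H / 2)
step = np.linalg.solve(A, B)         # one Crank-Nicolson time step

for _ in range(200):
    psi = step @ psi

norm = np.sum(np.abs(psi)**2) * dx
print(norm)   # stays 1 up to rounding: probability is conserved
```

The Crank-Nicolson step is the Cayley transform of $\hat{H}$, hence exactly unitary for hermitian $\hat{H}$; the norm is preserved to machine precision.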

The Heisenberg picture
Here the situation is the opposite to that in the Schrödinger picture, with the state vectors regarded as constant, $\partial_t|\Psi_H\rangle = 0$, and operators which carry the time dependence, $\hat{O}_H(t)$. This is the concept which later generalises most readily to field theory. We make use of the solution, Eq. (21), to the Schrödinger equation in order to define a Heisenberg state vector through
$$ \Psi(x,t) = e^{-\frac{i}{\hbar}\hat{H}(t-t_0)}\,\Psi(x,t_0) \equiv e^{-\frac{i}{\hbar}\hat{H}(t-t_0)}\,\Psi_H(x), $$
i.e. $\Psi_H(x) = \Psi(x, t_0)$. In other words, the Schrödinger vector at some time $t_0$ is defined to be equivalent to the Heisenberg vector, and the solution to the Schrödinger equation provides the transformation law between the two for all times. This transformation of course leaves the physics, i.e. expectation values, invariant, with
$$ \langle\hat{O}\rangle_t = \langle\Psi(t)|\hat{O}|\Psi(t)\rangle = \langle\Psi_H|\hat{O}_H(t)|\Psi_H\rangle, \qquad \hat{O}_H(t) = e^{\frac{i}{\hbar}\hat{H}(t-t_0)}\;\hat{O}\;e^{-\frac{i}{\hbar}\hat{H}(t-t_0)}. $$
From this last equation it is now easy to derive the equivalent of the Schrödinger equation for the Heisenberg picture, the Heisenberg equation of motion for operators:
$$ i\hbar\,\frac{d\hat{O}_H(t)}{dt} = \left[\hat{O}_H, \hat{H}\right]. $$
Note that all commutation relations, like Eq. (9), with time dependent operators are now intended to be valid for all times. Substituting $\hat{x}$, $\hat{p}$ for $\hat{O}$ into the Heisenberg equation readily leads to
$$ \frac{d\hat{x}}{dt} = \frac{\hat{p}}{m}, \qquad \frac{d\hat{p}}{dt} = -\frac{\partial V(\hat{x})}{\partial \hat{x}}, $$
the quantum mechanical equivalent of the Hamilton equations of classical mechanics.

The quantum harmonic oscillator
Because of similar structures later in quantum field theory, it is instructive to also briefly recall the harmonic oscillator in one dimension. Its Hamiltonian is given by
$$ \hat{H} = \frac{\hat{p}^2}{2m} + \frac{1}{2}\,m\omega^2\hat{x}^2. $$
Employing the canonical formalism we have just set up, we easily identify the momentum operator to be $\hat{p}(t) = m\,\partial_t\hat{x}(t)$, and from the Hamilton equations we find the equation of motion to be $\partial_t^2\hat{x} = -\omega^2\hat{x}$, which has the well known plane wave solution $\hat{x} \sim \exp(\pm i\omega t)$. An alternative path useful for later field theory applications is to introduce new operators, expressed in terms of the old ones,
$$ \hat{a} = \sqrt{\frac{m\omega}{2}}\left(\hat{x} + \frac{i}{m\omega}\,\hat{p}\right), \qquad \hat{a}^\dagger = \sqrt{\frac{m\omega}{2}}\left(\hat{x} - \frac{i}{m\omega}\,\hat{p}\right). $$
Using the commutation relation for $\hat{x}$, $\hat{p}$, one readily derives (see the preschool problems)
$$ [\hat{a}, \hat{a}^\dagger] = 1, \qquad [\hat{a}, \hat{a}] = [\hat{a}^\dagger, \hat{a}^\dagger] = 0. \qquad (37) $$
With the help of these the Hamiltonian can be rewritten in terms of the new operators:
$$ \hat{H} = \omega\left(\hat{a}^\dagger\hat{a} + \frac{1}{2}\right). $$
With this form of the Hamiltonian it is easy to construct a complete basis of energy eigenstates $|n\rangle$, $\hat{H}|n\rangle = E_n|n\rangle$.
Thus, the state $\hat{a}^\dagger|n\rangle$ has energy $E_n + \omega$, so that $\hat{a}^\dagger$ may be regarded as a "creation operator" for a quantum with energy $\omega$. Along the same lines one finds that $\hat{a}|n\rangle$ has energy $E_n - \omega$, and $\hat{a}$ is an "annihilation operator". Let us introduce a vacuum state $|0\rangle$ with no quanta excited, for which $\hat{a}|0\rangle = 0$, because there cannot be any negative energy states. Acting with the Hamiltonian on that state we find
$$ \hat{H}\,|0\rangle = \frac{\omega}{2}\,|0\rangle, $$
i.e. the quantum mechanical vacuum has a non-zero energy, known as vacuum oscillation or zero point energy. Acting with a creation operator onto the vacuum state one easily finds the state with one quantum excited, $|1\rangle = \hat{a}^\dagger|0\rangle$, and this can be repeated $n$ times to get
$$ |n\rangle = \frac{(\hat{a}^\dagger)^n}{\sqrt{n!}}\;|0\rangle. $$
The root of the factorial is there to normalise all eigenstates to one. Finally, the number operator $\hat{N} = \hat{a}^\dagger\hat{a}$ returns the number of quanta in a given energy eigenstate,
$$ \hat{N}\,|n\rangle = n\,|n\rangle. $$
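The ladder-operator algebra is easy to verify in a truncated matrix representation. The sketch below (an illustration; the truncation size $N$ is arbitrary) builds $\hat{a}$, $\hat{a}^\dagger$ in the number basis and checks both the commutator and the spectrum $E_n = \omega(n + \tfrac{1}{2})$:

```python
import numpy as np

# Truncated matrix representation of the ladder operators: in the basis
# |0>, |1>, ..., |N-1>, the annihilation operator a has entries sqrt(n)
# on the superdiagonal. We check [a, a^dag] = 1 and that
# H = w (a^dag a + 1/2) has the spectrum w (n + 1/2).

N, w = 8, 1.0
a = np.diag(np.sqrt(np.arange(1, N)), 1)   # annihilation operator
adag = a.T                                  # creation operator

comm = a @ adag - adag @ a
# [a, a^dag] = identity, up to the unavoidable truncation in the last row
assert np.allclose(comm[:-1, :-1], np.eye(N - 1))

H = w * (adag @ a + 0.5 * np.eye(N))        # diagonal in this basis
E = np.sort(np.diagonal(H))
print(E[:4])   # -> [0.5 1.5 2.5 3.5], i.e. E_n = w (n + 1/2)
```

The defect in the last diagonal entry of the commutator is a pure truncation artifact; in the full (infinite-dimensional) Hilbert space the relation $[\hat{a},\hat{a}^\dagger]=1$ is exact.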

Relativistic Quantum Mechanics
So far we have only considered non-relativistic particles. In this section, we see what happens when we try to formulate a relativistic analogue of the Schrödinger equation. First, note that we can derive the non-relativistic equation starting from the energy relation
$$ E = \frac{\mathbf{p}^2}{2m} + V(\mathbf{x}) $$
and replacing variables by their appropriate operators,
$$ E \to i\hbar\,\frac{\partial}{\partial t}, \qquad \mathbf{p} \to -i\hbar\nabla, \qquad \mathbf{x} \to \mathbf{x}, $$
acting on a position space wavefunction $\psi(\mathbf{x}, t)$, to give
$$ i\hbar\,\frac{\partial\psi}{\partial t} = \left( -\frac{\hbar^2\nabla^2}{2m} + V(\mathbf{x}) \right)\psi(\mathbf{x},t). $$
As we have already seen, there is a corresponding positive definite probability density
$$ \rho = |\psi(\mathbf{x},t)|^2 \geq 0, $$
with corresponding current
$$ \mathbf{j} = \frac{\hbar}{2im}\left( \psi^*\nabla\psi - (\nabla\psi^*)\psi \right). $$
Can we also make a relativistic equation? By analogy with the above, we may start with the relativistic energy relation
$$ E^2 = \mathbf{p}^2 c^2 + m^2 c^4, $$
and making the appropriate operator replacements leads to the equation
$$ \left( \frac{1}{c^2}\frac{\partial^2}{\partial t^2} - \nabla^2 + \frac{m^2 c^2}{\hbar^2} \right)\phi(\mathbf{x},t) = 0 $$
for some wavefunction $\phi(\mathbf{x},t)$. This is the Klein-Gordon equation, and one may try to form a probability density and current, as in the non-relativistic case. Firstly, one notes that to satisfy relativistic invariance, the probability density should be the zeroth component of a 4-vector $j^\mu = (\rho, \mathbf{j})$ satisfying
$$ \partial_\mu j^\mu = 0. $$
In fact, one finds
$$ \rho = \frac{i\hbar}{2mc^2}\left( \phi^*\,\frac{\partial\phi}{\partial t} - \frac{\partial\phi^*}{\partial t}\,\phi \right), $$
with $\mathbf{j}$ given as before. This is not positive definite! That is, this may (and will) become negative in general, so we cannot interpret it as the probability density of a single particle.
There is another problem with the Klein-Gordon equation as it stands, that is perhaps less abstract to appreciate. The relativistic energy relation gives
$$ E = \pm\sqrt{\mathbf{p}^2 c^2 + m^2 c^4}, $$
and thus one has positive and negative energy solutions. For a free particle, one could restrict to having positive energy states only. However, an interacting particle may exchange energy with its environment, and there is nothing to stop it cascading down to states of more and more negative energy, thus emitting infinite amounts of energy.
We conclude that the Klein-Gordon equation does not make sense as a consistent quantum theory of a single particle. We thus need a different approach in unifying special relativity and quantum mechanics. This, as we will see, is QFT, in which we will be able to reinterpret the Klein-Gordon function as a field $\phi(\mathbf{x}, t)$ describing many particles.

Figure 2: System of masses $m$ joined by springs (of constant $k$), whose longitudinal displacements are $\{f_i\}$, and whose separation at rest is $\delta x$.
From now on, it will be extremely convenient to work in natural units, in which one sets $\hbar = c = 1$. The correct factors can always be reinstated by dimensional analysis. In these units, the Klein-Gordon equation becomes
$$ (\Box + m^2)\,\phi(\mathbf{x}, t) = 0, \qquad \Box \equiv \partial_\mu\partial^\mu = \frac{\partial^2}{\partial t^2} - \nabla^2. $$
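A plane wave solves this equation precisely when its frequency and wave number satisfy the relativistic dispersion relation. The following symbolic sketch (an illustration, restricted to one space dimension for brevity) checks this directly:

```python
import sympy as sp

# Symbolic check that a plane wave solves the Klein-Gordon equation
# (box + m^2) phi = 0, with box = d^2/dt^2 - d^2/dx^2 in one space
# dimension, precisely when k0^2 = k^2 + m^2 (natural units).

t, x, k, m = sp.symbols('t x k m', real=True)
k0 = sp.sqrt(k**2 + m**2)                    # on-shell energy E(k)
phi = sp.exp(sp.I * (k0 * t - k * x))        # positive-energy plane wave

box_phi = sp.diff(phi, t, 2) - sp.diff(phi, x, 2)
residual = sp.simplify(box_phi + m**2 * phi)
print(residual)   # -> 0: the Klein-Gordon equation is satisfied
```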

Classical Field Theory
In the previous section, we have seen how to describe point particles, both classically and quantum mechanically. In this section, we discuss classical field theory, as a precursor to considering quantum fields. A field associates a mathematical object (e.g. scalar, vector, tensor, spinor...) with every point in spacetime. Examples are the temperature distribution in a room (a scalar field), or the E and B fields in electromagnetism (vector fields). Just as point particles can be described by Lagrangians, so can fields, although it is more natural to think in terms of Lagrangian densities.

Example: Model of an Elastic Rod
Let us consider a particular example, namely a set of point masses connected together by springs, as shown in figure 2. Assume the masses $m$ are equal, as also are the force constants of the springs, $k$. Furthermore, we assume that the masses may move only longitudinally, where the $i$th displacement is $f_i$, and that the separation of adjacent masses is $\delta x$ when all $f_i$ are zero. This system is an approximation to an elastic rod, with a displacement field $f(x,t)$. To see what this field theory looks like, we may first write the total kinetic and potential energies as
$$ T = \sum_i \frac{1}{2}\,m\dot{f}_i^2, \qquad V = \sum_i \frac{1}{2}\,k\,(f_{i+1} - f_i)^2, $$
respectively, where we have used Hooke's law for the potential energy. Thus, the Lagrangian is
$$ L = T - V = \sum_i \left[ \frac{1}{2}\,m\dot{f}_i^2 - \frac{1}{2}\,k\,(f_{i+1} - f_i)^2 \right]. $$
Clearly this system becomes a better approximation to an elastic rod as the continuum limit is approached, in which the number of masses $N \to \infty$ and the separation $\delta x \to 0$. We can then rewrite the Lagrangian as
$$ L = \sum_i \delta x \left[ \frac{1}{2}\,\frac{m}{\delta x}\,\dot{f}_i^2 - \frac{1}{2}\,k\,\delta x\left( \frac{f_{i+1} - f_i}{\delta x} \right)^2 \right]. $$
We may recognise
$$ \lim_{\delta x \to 0}\,\frac{m}{\delta x} = \rho $$
as the density of the rod, and also define the tension
$$ \kappa = \lim_{\delta x \to 0}\, k\,\delta x. $$
Furthermore, the position index $i$ gets replaced by the continuous variable $x$, and one has
$$ \lim_{\delta x \to 0}\,\frac{f_{i+1} - f_i}{\delta x} = \frac{\partial f(x,t)}{\partial x}. $$
Finally, the sum over $i$ becomes an integral, so that the continuum Lagrangian is
$$ L = \int dx \left[ \frac{1}{2}\,\rho\,\dot{f}^2 - \frac{1}{2}\,\kappa\left( \frac{\partial f}{\partial x} \right)^2 \right]. $$
This is the Lagrangian for the displacement field $f(x,t)$. It depends on a function of $f$ and $\dot{f}$ which is integrated over all space coordinates (in this case there is only one, the position along the rod). We may therefore write the Lagrangian manifestly as
$$ L = \int dx\; \mathcal{L}\big(f(x,t), \dot{f}(x,t)\big), $$
where $\mathcal{L}$ is the Lagrangian density
$$ \mathcal{L} = \frac{1}{2}\,\rho\,\dot{f}^2(x,t) - \frac{1}{2}\,\kappa\left( \frac{\partial f(x,t)}{\partial x} \right)^2. $$
It is perhaps clear from the above example that for any field, there will always be an integration over all space dimensions, and thus it is more natural to think about the Lagrangian density rather than the Lagrangian itself. Indeed, we may construct the following dictionary between quantities in point particle mechanics, and corresponding field theory quantities (which may or may not be helpful to you in remembering the differences between particles and fields...!).
Classical Mechanics $\longrightarrow$ Classical Field Theory:
$$ x(t) \longrightarrow \phi(\mathbf{x}, t), \qquad \dot{x}(t) \longrightarrow \dot{\phi}(\mathbf{x}, t), \qquad \text{index } i \longrightarrow \text{argument } \mathbf{x}, \qquad L \longrightarrow \mathcal{L}. $$
Note that the action for the above field theory is given, as usual, by the time integral of the Lagrangian:
$$ S = \int dt\; L = \int dt\int dx\; \mathcal{L}\big(f, \dot{f}, \partial_x f\big). $$

Relativistic Fields
In the previous section we saw how fields can be described using Lagrangian densities, and illustrated this with a non-relativistic example. Rather than derive the field equations for this case, we do this explicitly here for relativistic theories, which we will be concerned with for the rest of the course (and, indeed, the school).
In special relativity, coordinates are combined into four-vectors, $x^\mu = (t, x^i)$ or $x = (t, \mathbf{x})$, whose length $x^2 = t^2 - \mathbf{x}^2$ is invariant under Lorentz transformations,
$$ x^\mu \to x'^\mu = \Lambda^\mu{}_\nu\, x^\nu. \qquad (70) $$
A general function transforms as $f(x) \to f'(x')$, i.e. both the function and its argument transform. A Lorentz scalar is a function $\phi(x)$ which at any given point in space-time will have the same amplitude, regardless of which inertial frame it is observed in. Consider a space-time point given by $x$ in the unprimed frame, and $x'(x)$ in the primed frame, where the function $x'(x)$ can be derived from eq. (70). Observers in both the primed and unprimed frames will see the same amplitude $\phi(x)$, although an observer in the primed frame will prefer to express this in terms of his or her own coordinate system $x'$, hence will see
$$ \phi(x) = \phi\big(x(x')\big) \equiv \phi'(x'), \qquad (71) $$
where the latter equality defines $\phi'$. Equation (71) defines the transformation law for a Lorentz scalar. A vector function transforms as
$$ V'^\mu(x') = \Lambda^\mu{}_\nu\, V^\nu(x). $$
We will work in particular with $\partial_\mu\phi(x)$, where $x \equiv x^\mu$ denotes the 4-position. Note in particular that
$$ (\partial_\mu\phi)(\partial^\mu\phi) = \dot{\phi}^2 - (\nabla\phi)^2 $$
is a Lorentz scalar. In general, a relativistically invariant scalar field theory has action
$$ S = \int d^4x\; \mathcal{L}(\phi, \partial_\mu\phi), \qquad (73) $$
where $d^4x \equiv dt\, d^3x$, and $\mathcal{L}$ is the appropriate Lagrangian density. We can find the equations of motion satisfied by the field $\phi$ using, as in point particle mechanics, the principle of least action. The field theory form of this is that the field $\phi(x)$ is such that the action of eq. (73) is extremised. Assuming $\phi(x)$ is indeed such a field, we may introduce a small perturbation,
$$ \phi(x) \to \phi(x) + \delta\phi(x), $$
which correspondingly perturbs the action according to
$$ S \to S + \delta S = \int d^4x \left[ \mathcal{L}(\phi, \partial_\mu\phi) + \frac{\partial\mathcal{L}}{\partial\phi}\,\delta\phi + \frac{\partial\mathcal{L}}{\partial(\partial_\mu\phi)}\,\partial_\mu\,\delta\phi \right]. $$
Recognising the first term as the unperturbed action, one thus finds
$$ \delta S = \int d^4x \left[ \frac{\partial\mathcal{L}}{\partial\phi}\,\delta\phi + \frac{\partial\mathcal{L}}{\partial(\partial_\mu\phi)}\,\partial_\mu\,\delta\phi \right] = \left[\frac{\partial\mathcal{L}}{\partial(\partial_\mu\phi)}\,\delta\phi\right]_{\text{boundary}} + \int d^4x \left[ \frac{\partial\mathcal{L}}{\partial\phi} - \partial_\mu\,\frac{\partial\mathcal{L}}{\partial(\partial_\mu\phi)} \right]\delta\phi, $$
where we have integrated by parts in the second step. Assuming the fields die away at infinity so that $\delta\phi = 0$ at the boundary of spacetime, the principle of least action $\delta S = 0$ implies
$$ \partial_\mu\,\frac{\partial\mathcal{L}}{\partial(\partial_\mu\phi)} - \frac{\partial\mathcal{L}}{\partial\phi} = 0. $$
This is the Euler-Lagrange field equation. It tells us, given a particular Lagrangian density (which defines a particular field theory), the classical equation of motion which must be satisfied by the field $\phi$.
As a specific example, let us consider the Lagrangian density
$$ \mathcal{L} = \frac{1}{2}\,(\partial_\mu\phi)(\partial^\mu\phi) - \frac{1}{2}\,m^2\phi^2, $$
from which one finds
$$ \frac{\partial\mathcal{L}}{\partial(\partial_\mu\phi)} = \partial^\mu\phi, \qquad \frac{\partial\mathcal{L}}{\partial\phi} = -m^2\phi, $$
so that the Euler-Lagrange equation gives
$$ \partial_\mu\partial^\mu\phi + m^2\phi = (\Box + m^2)\,\phi(x) = 0. \qquad (80) $$
This is the Klein-Gordon equation! The above Lagrangian density thus corresponds to the classical field theory of a Klein-Gordon field. We see in particular that the coefficient of the quadratic term in the Lagrangian can be interpreted as the mass.
By analogy with point particle mechanics, one can define a canonical momentum field conjugate to $\phi$:
$$ \pi(x) = \frac{\partial\mathcal{L}}{\partial\dot{\phi}}. $$
Then one can define the Hamiltonian density
$$ \mathcal{H}(\phi, \pi) = \pi\,\dot{\phi} - \mathcal{L}, \qquad (84) $$
such that
$$ H = \int d^3x\; \mathcal{H}(\phi, \pi) $$
is the Hamiltonian (total energy carried by the field). For example, the Klein-Gordon field has conjugate momentum $\pi = \dot{\phi}$, and Hamiltonian density
$$ \mathcal{H} = \frac{1}{2}\,\pi^2(x) + \frac{1}{2}\,(\nabla\phi)^2 + \frac{1}{2}\,m^2\phi^2. $$

Plane wave solutions to the Klein-Gordon equation
Let us consider real solutions to Eq. (80), characterised by $\phi^*(x) = \phi(x)$. To find them we try an ansatz of plane waves,
$$ \phi(x) \propto e^{i(k^0 t - \mathbf{k}\cdot\mathbf{x})}. $$
The Klein-Gordon equation is satisfied if $(k^0)^2 - \mathbf{k}^2 = m^2$, so that
$$ k^0 = \pm\sqrt{\mathbf{k}^2 + m^2}. \qquad (87) $$
Defining the energy as
$$ E(\mathbf{k}) = +\sqrt{\mathbf{k}^2 + m^2} > 0, $$
we obtain two types of solution, which read
$$ \phi_+(x) \propto e^{i(E(\mathbf{k})t - \mathbf{k}\cdot\mathbf{x})}, \qquad \phi_-(x) \propto e^{-i(E(\mathbf{k})t - \mathbf{k}\cdot\mathbf{x})}. $$
We may interpret these as positive and negative energy solutions, such that it does not matter which branch of the square root we take in eq. (87) (it is conventional, however, to define energy as a positive quantity). The general solution is a superposition of $\phi_+$ and $\phi_-$. Using
$$ E(\mathbf{k})\,t - \mathbf{k}\cdot\mathbf{x} = k^\mu x_\mu = k\cdot x, \qquad k^\mu = \big(E(\mathbf{k}), \mathbf{k}\big), $$
this solution reads
$$ \phi(x) = \int \frac{d^3k}{(2\pi)^3\, 2E(\mathbf{k})} \left( \alpha(\mathbf{k})\,e^{-ik\cdot x} + \alpha^*(\mathbf{k})\,e^{ik\cdot x} \right), \qquad (90) $$
where $\alpha(\mathbf{k})$ is an arbitrary complex coefficient. Note that the coefficients of the positive and negative exponentials are related by complex conjugation. This ensures that the field $\phi(x)$ is real (as can easily be verified from eq. (90)), consistent with the Lagrangian we wrote down. Such a field has applications in e.g. the description of neutral mesons. We can also write down a Klein-Gordon Lagrangian for a complex field $\phi$. This is really two independent fields (i.e. $\phi$ and $\phi^*$), and thus can be used to describe a system of two particles (e.g. charged meson pairs). To simplify the discussion in this course, we will explicitly consider the real Klein-Gordon field. Note that the factors of 2 and $\pi$ in eq. (90) are conventional, and the inverse power of the energy is such that the measure of integration is Lorentz invariant (problem 2.1), so that the whole solution is written in a manifestly Lorentz invariant way.
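The Lorentz invariance of the measure $d^3k / (2E(\mathbf{k}))$ (the subject of problem 2.1) can also be checked numerically. Under a boost with velocity $\beta$ along $z$ one has $E' = \gamma(E + \beta k_z)$ and $k_z' = \gamma(k_z + \beta E)$, and the Jacobian $dk_z'/dk_z$ at fixed transverse momentum equals $E'/E$, which is exactly what cancels the change of $1/(2E)$. The sketch below verifies this with finite differences (the specific values of $m$, $\beta$ and $\mathbf{k}$ are arbitrary):

```python
import numpy as np

# Numerical check that d^3k / (2 E(k)) is Lorentz invariant: the
# Jacobian dkz'/dkz of a boost along z equals E'/E, and the mass-shell
# combination E^2 - k^2 is unchanged.

m, beta = 0.5, 0.6
gamma = 1.0 / np.sqrt(1.0 - beta**2)

def E(kz, kperp2=1.3):
    """On-shell energy for fixed transverse momentum squared."""
    return np.sqrt(kperp2 + kz**2 + m**2)

kz, h = 0.7, 1e-6
Ek = E(kz)
Ep = gamma * (Ek + beta * kz)       # boosted energy E'
kzp = gamma * (kz + beta * Ek)      # boosted momentum kz'

# finite-difference Jacobian dkz'/dkz at fixed transverse momentum
jac = (gamma * (kz + h + beta * E(kz + h)) - kzp) / h

assert abs(jac - Ep / Ek) < 1e-4                          # Jacobian = E'/E
assert abs((Ep**2 - kzp**2) - (Ek**2 - kz**2)) < 1e-9     # mass shell invariant
print("d^3k/(2E) is boost invariant: dkz'/dkz = E'/E")
```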

Symmetries and Conservation Laws
As was the case in point particle mechanics, one may relate symmetries of the Lagrangian density to conserved quantities in field theory. For example, consider the invariance of $\mathcal{L}$ under space-time translations,
$$ x^\mu \to x^\mu + \epsilon^\mu, $$
where $\epsilon^\mu$ is constant. Under such a transformation one has
$$ \phi(x^\mu + \epsilon^\mu) = \phi(x^\mu) + \epsilon^\mu\,\partial_\mu\phi(x^\mu) + \ldots,
where we have used Taylor's theorem, and similarly
$$ \mathcal{L}(x^\mu + \epsilon^\mu) = \mathcal{L}(x^\mu) + \epsilon^\mu\,\partial_\mu\mathcal{L}(x^\mu) + \ldots $$
But if $\mathcal{L}$ does not explicitly depend on $x^\mu$ (i.e. only through $\phi$ and $\partial_\mu\phi$) then one has
$$ \mathcal{L}(x^\mu + \epsilon^\mu) = \mathcal{L} + \frac{\partial\mathcal{L}}{\partial\phi}\,\delta\phi + \frac{\partial\mathcal{L}}{\partial(\partial_\mu\phi)}\,\partial_\mu\,\delta\phi + \ldots, $$
where we have used the fact that $\delta\phi = \epsilon^\nu\partial_\nu\phi$, and all functions on the right-hand side are evaluated at $x^\mu$. One may replace $\partial\mathcal{L}/\partial\phi$ by the LHS of the Euler-Lagrange equation to get
$$ \mathcal{L}(x^\mu + \epsilon^\mu) = \mathcal{L} + \partial_\mu\!\left(\frac{\partial\mathcal{L}}{\partial(\partial_\mu\phi)}\right)\delta\phi + \frac{\partial\mathcal{L}}{\partial(\partial_\mu\phi)}\,\partial_\mu\,\delta\phi = \mathcal{L} + \partial_\mu\!\left[\frac{\partial\mathcal{L}}{\partial(\partial_\mu\phi)}\,\epsilon^\nu\,\partial_\nu\phi\right], $$
and equating this with the alternative expression above, one finds
$$ \partial_\mu\!\left[\frac{\partial\mathcal{L}}{\partial(\partial_\mu\phi)}\,\partial_\nu\phi - \delta^\mu{}_\nu\,\mathcal{L}\right]\epsilon^\nu = 0. $$
If this is true for all $\epsilon^\nu$, then one has
$$ \partial_\mu\,\Theta^{\mu\nu} = 0, \qquad (100) $$
where
$$ \Theta^{\mu\nu} = \frac{\partial\mathcal{L}}{\partial(\partial_\mu\phi)}\,\partial^\nu\phi - g^{\mu\nu}\,\mathcal{L} $$
is the energy-momentum tensor. We can see how this name arises by considering the components explicitly, for the case of the Klein-Gordon field. One then finds
$$ \Theta^{00} = \dot{\phi}^2 - g^{00}\,\mathcal{L} = \frac{1}{2}\,\dot{\phi}^2 + \frac{1}{2}\,(\nabla\phi)^2 + \frac{1}{2}\,m^2\phi^2 = \mathcal{H}, \qquad \Theta^{0j} = \dot{\phi}\,\partial^j\phi. $$
One then sees that $\Theta^{00}$ is the energy density carried by the field. Its conservation can then be shown by considering
$$ \frac{\partial}{\partial t}\int_V d^3x\;\Theta^{00} = -\int_V d^3x\;\partial_j\,\Theta^{j0} = -\int_S ds_j\;\Theta^{j0}, $$
where we have used Eq. (100) in the first step. The Hamiltonian density is a conserved quantity, provided that there is no energy flow through the surface $S$ which encloses the volume $V$. In a similar manner one can show that the 3-momentum $p^j$, which is related to $\Theta^{0j}$, is conserved as well. It is then useful to define a conserved energy-momentum four-vector
$$ P^\mu = \int d^3x\;\Theta^{0\mu}. $$
In analogy to point particle mechanics, we thus see that invariances of the Lagrangian density correspond to conservation laws. An entirely analogous procedure leads to conserved quantities like angular momentum and spin. Furthermore one can study so-called internal symmetries, i.e. ones which are related not to coordinate transformations but to other transformations. Examples are conservation of all kinds of charges, isospin, etc. We have thus established the Lagrange-Hamilton formalism for classical field theory: we derived the equation of motion (Euler-Lagrange equation) from the Lagrangian and introduced the conjugate momentum. We then defined the Hamiltonian (density) and considered conservation laws by studying the energy-momentum tensor $\Theta^{\mu\nu}$.
3 Quantum Field Theory: Free Fields

Canonical Field Quantisation
In the previous sections we have reviewed the classical and quantum mechanics of point particles, and also classical field theory. We used the canonical quantisation procedure in discussing quantum mechanics, whereby classical variables are replaced by operators, which have non-trivial commutation relations. In this section, we see how to apply this procedure to fields, taking the explicit example of the Klein-Gordon field discussed previously. This is, as yet, a non-interacting field theory, and we will discuss how to deal with interactions later on in the course.
The Klein-Gordon Lagrangian density has the form
$$ \mathcal{L} = \frac{1}{2}\,\dot{\phi}^2 - \frac{1}{2}\,(\nabla\phi)^2 - \frac{1}{2}\,m^2\phi^2. $$
We have seen that in field theory the field $\phi(x)$ plays the role of the coordinates in ordinary point particle mechanics, and we defined a canonically conjugate momentum, $\pi(x) = \partial\mathcal{L}/\partial\dot{\phi} = \dot{\phi}(x)$. We then continue the analogy to point mechanics through the quantisation procedure, i.e. we now take our canonical variables to be operators,
$$ \phi(x) \to \hat{\phi}(x), \qquad \pi(x) \to \hat{\pi}(x). $$
Next we impose equal-time commutation relations on them,
$$ \left[\hat{\phi}(\mathbf{x},t),\, \hat{\pi}(\mathbf{y},t)\right] = i\,\delta^3(\mathbf{x}-\mathbf{y}), \qquad \left[\hat{\phi}(\mathbf{x},t),\, \hat{\phi}(\mathbf{y},t)\right] = \left[\hat{\pi}(\mathbf{x},t),\, \hat{\pi}(\mathbf{y},t)\right] = 0. \qquad (109) $$
As in the case of quantum mechanics, the canonical variables commute among themselves, but the canonical coordinate and momentum do not commute with each other. Note that the commutation relation is entirely analogous to the quantum mechanical case. There would be a factor of $\hbar$, if it hadn't been set to one earlier, and the delta-function accounts for the fact that we are dealing with fields: it is zero if the fields are evaluated at different points in space. After quantisation, our fields have turned into field operators. Note that within the relativistic formulation they depend on time, and hence they are Heisenberg operators.
In the previous paragraph we have formulated commutation relations for fields evaluated at equal time, which is clearly a special case when considering fields at general $x$, $y$. The reason has to do with maintaining causality in a relativistic theory. Let us recall the light cone about an event at $y$, as in Fig. 3. One important postulate of special relativity states that no signal and no interaction can travel faster than the speed of light. This has important consequences for the way in which different events can affect each other. For instance, two events which are characterised by space-time points $x^\mu$ and $y^\mu$ are causally connected if their separation $(x-y)^2$ is time-like, i.e. $(x-y)^2 > 0$. By contrast, two events characterised by a space-like separation, i.e. $(x-y)^2 < 0$, cannot affect each other, since the point $x$ is not contained inside the light cone about $y$.
In non-relativistic Quantum Mechanics the commutation relations among operators indicate whether precise and independent measurements of the corresponding observables can be made. If the commutator does not vanish, then a measurement of one observable affects that of the other. From the above it is then clear that the issue of causality must be incorporated into the commutation relations of the relativistic version of our quantum theory: whether or not independent and precise measurements of two observables can be made depends also on the separation of the 4-vectors characterising the points at which these measurements occur. Clearly, events with space-like separations cannot affect each other, and hence all fields must commute,
$$ \left[\hat{\phi}(x),\, \hat{\phi}(y)\right] = 0 \qquad \text{for} \quad (x-y)^2 < 0. $$
This condition is sometimes called micro-causality. Writing out the components of the invariant interval, we see that the commutator vanishes in a finite time interval $|t - t'|$ as long as $|t - t'| < |\mathbf{x} - \mathbf{y}|$. It also vanishes for $t = t'$, as long as $\mathbf{x} \neq \mathbf{y}$. Only if the fields are evaluated at an equal space-time point can they affect each other, which leads to the equal-time commutation relations above. They can also affect each other everywhere within the light cone, i.e. for time-like intervals. It is not hard to show that in this case (e.g. problem 3.1)
$$ \left[\hat{\phi}(\mathbf{x},t),\, \hat{\phi}(\mathbf{y},t')\right] \neq 0 \qquad \text{for} \quad (x-y)^2 > 0. $$
N.b. since the 4-vector dot product $p\cdot(x-y)$ that appears in the commutator depends on $p^0 = \sqrt{\mathbf{p}^2 + m^2}$, one cannot trivially carry out the integrals over $d^3p$ here.

Creation and annihilation operators
After quantisation, the Klein-Gordon equation we derived earlier turns into an equation for operators. For its solution we simply promote the classical plane wave solution, Eq. (90), to operator status, φ̂. Note that the complex conjugation of the Fourier coefficient turned into Hermitian conjugation for an operator.
Let us now solve for the operator coefficients of the positive and negative energy solutions. In order to do so, we invert the Fourier integrals for the field and its time derivative, and then build the linear combination iE(k)·(114) − (115) to find â(k). Following a similar procedure for â†(k), and using π̂(x) = ∂φ̂(x)/∂t, we find the corresponding expressions. Note that, as Fourier coefficients, these operators do not depend on time, even though the right-hand side does contain time variables. Having expressions in terms of the canonical field variables φ̂(x), π̂(x), we can now evaluate the commutators for the Fourier coefficients. Expanding everything out and using the commutation relations Eq. (109), we find that for every k these are exactly the commutation relations for the harmonic oscillator, Eq. (37). This motivates us to also express the Hamiltonian and the energy-momentum four-vector of our quantum field theory in terms of these operators. To do this, first note that the Hamiltonian is given by the integral of the Hamiltonian density (eq. (84)) over all space. One may then substitute eq. (113) to yield (see the problem sheet) the result: the Hamiltonian and the momentum operator are nothing but a continuous sum of excitation energies/momenta of one-dimensional harmonic oscillators! After a minute of thought this is not so surprising. We expanded the solution of the Klein-Gordon equation into a superposition of plane waves with momenta k. But of course a plane wave solution with energy E(k) is also the solution to a one-dimensional harmonic oscillator with the same energy. Hence, our free scalar field is simply a collection of infinitely many harmonic oscillators distributed over the whole energy/momentum range. These energies sum up to that of the entire system. We have thus reduced the problem of handling our field theory to oscillator algebra.
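The oscillator algebra underlying each Fourier mode can be checked numerically. The following sketch (not part of the lecture notes; the truncation size N is an arbitrary choice) represents a single mode as a truncated harmonic oscillator and verifies the commutator [â, â†] = 1 and the spectrum ω(n + 1/2):

```python
import numpy as np

# Toy model: one Fourier mode of the field as a truncated oscillator
# in the finite Fock basis |0>, ..., |N-1>.
N = 20
n = np.arange(N - 1)
a = np.diag(np.sqrt(n + 1), k=1)   # annihilation operator
adag = a.T.conj()                  # creation operator

# [a, a†] = 1 holds exactly except in the highest basis state,
# which is an artifact of the truncation.
comm = a @ adag - adag @ a
print(np.allclose(np.diag(comm)[:-1], 1.0))

# H = omega (a†a + 1/2): the spectrum is omega (n + 1/2), one such
# oscillator for each momentum k of the field.
omega = 1.0
H = omega * (adag @ a + 0.5 * np.eye(N))
print(np.allclose(np.diag(H)[:5], omega * (np.arange(5) + 0.5)))
```

Both checks print True; the field-theory Hamiltonian is a continuous sum of such oscillators, one per momentum mode.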
From the harmonic oscillator we know already how to construct a complete basis of energy eigenstates, and thanks to the analogy of the previous section we can take this over to our quantum field theory.

Energy of the vacuum state and renormalisation
In complete analogy we begin again with the postulate of a vacuum state |0⟩ with norm one, which is annihilated by the action of the operator â:

⟨0|0⟩ = 1,   â(k)|0⟩ = 0 for all k.
Let us next evaluate the energy of this vacuum state, by taking the expectation value of the Hamiltonian. The first term in curly brackets vanishes, since â annihilates the vacuum. The second can be rewritten as â(k)â†(k′) = [â(k), â†(k′)] + â†(k′)â(k). It is now the second term which vanishes, whereas the first can be replaced by the value of the commutator. Thus we obtain a divergent integral over the zero-point energies ½E(k), which means that the energy of the ground state is infinite! This result seems rather paradoxical, but it can be understood again in terms of the harmonic oscillator. Recall that the simple quantum mechanical oscillator has a finite zero-point energy. As we have seen above, our field theory corresponds to an infinite collection of harmonic oscillators, i.e. the vacuum receives an infinite number of zero-point contributions, and its energy thus diverges.
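The divergence can be made tangible with a toy computation (an illustration, not from the notes; the grid spacing and cutoff values are arbitrary choices): sum the zero-point energies ½E(k) over momentum modes up to a cutoff Λ and watch the sum grow without bound as Λ increases.

```python
import numpy as np

# Sum the zero-point energies (1/2) sqrt(k^2 + m^2) over a 3D grid of
# momentum modes with |k_i| <= cutoff. The sum diverges as the cutoff
# is removed, mimicking the infinite vacuum energy.
m = 1.0

def vacuum_energy(cutoff, spacing=0.5):
    ks = np.arange(-cutoff, cutoff + spacing, spacing)
    kx, ky, kz = np.meshgrid(ks, ks, ks, indexing="ij")
    E = np.sqrt(kx**2 + ky**2 + kz**2 + m**2)
    return 0.5 * E.sum()

energies = [vacuum_energy(L) for L in (2.0, 4.0, 8.0)]
print(energies[0] < energies[1] < energies[2])  # True: grows ~ cutoff^4
```

Doubling the cutoff roughly multiplies the sum by 16, consistent with a quartic divergence.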
This is the first of frequent occurrences of infinities in quantum field theory. Fortunately, it is not too hard to work around this particular one. Firstly, we note that nowhere in nature can we observe absolute values of energy; all we can measure are energy differences relative to some reference scale, at best that of the vacuum state, |0⟩. In this case it does not really matter what the energy of the vacuum is. This then allows us to redefine the energy scale, by always subtracting the (infinite) vacuum energy from any energy we compute. This process is called "renormalisation". We then define the renormalised vacuum energy to be zero, and take it to be the expectation value of a renormalised Hamiltonian. According to this recipe, the renormalised Hamiltonian is our original one, minus the (unrenormalised) vacuum energy. Here the subtraction of the vacuum energy is shown explicitly, and we can rewrite it using an operator Ĥ_vac, which ensures that the vacuum energy is properly subtracted: if |ψ⟩ and |ψ′⟩ denote arbitrary N-particle states, then one can convince oneself that ⟨ψ′|Ĥ_vac|ψ⟩ = 0. In particular we now find that ⟨0|Ĥ_R|0⟩ = 0, as we wanted. A simple way to automatise the removal of the vacuum contribution is to introduce normal ordering. Normal ordering means that all annihilation operators appear to the right of any creation operator; the notation is that the normal-ordered operators are enclosed within colons, : · · · :. It is important to keep in mind that â and â† always commute inside : · · · :. This is true for an arbitrary string of â and â†. With this definition we can write the normal-ordered Hamiltonian :Ĥ:, and thus have the relation Ĥ_R = :Ĥ: + Ĥ_vac.
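The effect of normal ordering on vacuum expectation values can be seen in the single-mode toy model (a sketch, not from the text): the unordered product â â† picks up the zero-point contribution from the commutator, while the normal-ordered version â†â annihilates the vacuum.

```python
import numpy as np

# Single truncated mode: compare <0| a a† |0> with the normal-ordered
# <0| :a a†: |0> = <0| a†a |0>.
N = 10
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
adag = a.T.conj()
vac = np.zeros(N); vac[0] = 1.0

unordered = vac @ (a @ adag) @ vac   # = 1: the zero-point piece survives
normal    = vac @ (adag @ a) @ vac   # = 0: normal ordering removes it
print(unordered, normal)             # 1.0 0.0
```

This is precisely why ⟨0|:Ĥ:|0⟩ = 0 while ⟨0|Ĥ|0⟩ diverges when summed over all modes.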

Fock space and Particles
After this lengthy grappling with the vacuum state, we can continue to construct our basis of states in analogy to the harmonic oscillator, making use of the commutation relations for the operators â, â†. In particular, we define the state |k⟩ to be the one obtained by acting with the operator â†(k) on the vacuum, |k⟩ = â†(k)|0⟩. Using the commutator, its norm is readily found, since the last term in the first line vanishes (â(k) acting on the vacuum). Next we compute the energy of this state, making use of the normal-ordered Hamiltonian, :Ĥ:|k⟩ = E(k)|k⟩, and similarly one finds :P̂:|k⟩ = k|k⟩.
Observing that the normal ordering did its job and we obtain renormalised, finite results, we may now interpret the state |k⟩. It is a one-particle state for a relativistic particle of mass m and momentum k, since acting on it with the energy-momentum operator returns the relativistic one-particle dispersion relation, E(k) = √(k² + m²). The â†(k), â(k) are creation and annihilation operators for particles of momentum k. In analogy to the harmonic oscillator, the procedure can be continued to higher states. One easily checks (problem 3.4) that the state (1/√2!) â†(k₂)â†(k₁)|0⟩ is a two-particle state (the factorial is there to have it normalised in the same way as the one-particle state), and so on for higher states. These are called Fock states in the textbooks (formally speaking, a Fock space is a tensor product of Hilbert spaces, where the latter occur in ordinary Quantum Mechanics). At long last we can now see how the field in our free quantum field theory is related to particles. A particle of momentum k corresponds to an excited Fourier mode of a field. Since the field is a superposition of all possible Fourier modes, one field is enough to describe all possible configurations representing one or many particles of the same kind in any desired momentum state. There are some rather profound ideas here about how nature works at fundamental scales. In classical physics we have matter particles, and forces which act on those particles. These forces can be represented by fields, such that fields and particles are distinct concepts. In non-relativistic quantum mechanics, one unifies the concepts of waves and particles (particles can have wave-like characteristics), but fields are still distinct (e.g. one may quantise a particle in an electromagnetic field in QM, provided the latter is treated classically). Taking into account the effects of relativity for both particles and fields, one finds in QFT that all particles are excitation quanta of fields.
That is, the concepts of field and particle are no longer distinct, but are actually manifestations of the same thing, namely quantum fields. In this sense, QFT is more fundamental than either of its preceding theories. Each force field and each matter field has particles associated with it. Returning to our theory for the free Klein-Gordon field, let us investigate what happens under interchange of the two particles. Since [â†(k₁), â†(k₂)] = 0 for all k₁, k₂, we see that â†(k₁)â†(k₂)|0⟩ = â†(k₂)â†(k₁)|0⟩, and hence the state is symmetric under interchange of the two particles. Thus, the particles described by the scalar field are bosons. Finally we complete the analogy to the harmonic oscillator by introducing a number operator n̂(k) = â†(k)â(k), which gives us the number of bosons described by a particular Fock state. Of course the normal-ordered Hamiltonian can now simply be given in terms of this operator: when acting on a Fock state it sums up the energies of the individual particles. This concludes the quantisation of our free scalar field theory. We have followed the canonical quantisation procedure familiar from quantum mechanics. Due to the infinite number of degrees of freedom, we encountered a divergent vacuum energy, which we had to renormalise. The renormalised Hamiltonian and the Fock states that we constructed describe free relativistic, uncharged spin-zero particles of mass m, such as neutral pions, for example.
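The bosonic symmetry and the number operator can be verified explicitly in a toy two-mode model (an illustration with an arbitrary truncation, not part of the notes):

```python
import numpy as np

# Two momentum modes k1, k2, each a truncated oscillator; the two-mode
# Fock space is the tensor product of the single-mode spaces.
N = 4
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
I = np.eye(N)
a1, a2 = np.kron(a, I), np.kron(I, a)           # annihilators for the modes
a1d, a2d = a1.T.conj(), a2.T.conj()

vac = np.zeros(N * N); vac[0] = 1.0
state12 = a1d @ a2d @ vac
state21 = a2d @ a1d @ vac
print(np.allclose(state12, state21))             # True: bosonic symmetry

Ntot = a1d @ a1 + a2d @ a2                       # total number operator
print(np.allclose(Ntot @ state12, 2 * state12))  # True: a two-particle state
```

The state built from two creation operators is the same regardless of their order, and the number operator counts exactly two quanta.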
If we want to describe charged pions as well, we need to introduce complex scalar fields, the real and imaginary parts being necessary to describe opposite charges. For particles with spin we need still more degrees of freedom and use vector or spinor fields, which have the appropriate rotation and Lorentz transformation properties. For fermion fields (which satisfy the Dirac equation rather than the Klein-Gordon equation), one finds that the condition of a positive-definite energy density requires that one impose anti-commutation relations rather than commutation relations. This in turn implies that multiparticle states are antisymmetric under interchange of identical fermions, which we recognise as the Pauli exclusion principle. Thus, not only does QFT provide a consistent theory of relativistic multiparticle systems; it also allows us to "derive" the Pauli principle, which is put in by hand in non-relativistic quantum mechanics.
More details on vector and spinor fields can be found in the other courses at this school. Here, we continue to restrict our attention to scalar fields, so as to more clearly illustrate what happens when interactions are present.

Quantum Field Theory: Interacting Fields
So far we have seen how to quantise the Klein-Gordon Lagrangian, and seen that this describes free scalar particles. For interesting physics, however, we need to know how to describe interactions, which lead to nontrivial scattering processes. This is the subject of this section.
From now on we shall always discuss quantised real scalar fields. It is then convenient to drop the "hats" on the operators that we have considered up to now. Interactions can be described by adding a term L_int to the Lagrangian density, so that the full result is L = L₀ + L_int, where L₀ is the free Lagrangian density discussed before. The Hamiltonian density is correspondingly H = H₀ + H_int, where H₀ is the free Hamiltonian density. If the interaction Lagrangian only depends on φ (we will consider such a case later in the course), one has H_int = −L_int, as can be easily shown from the definition above. We shall leave the details of L_int unspecified for the moment. What we will be concerned with mostly are scattering processes, in which two initial particles with momenta p₁ and p₂ scatter, thereby producing a number of particles in the final state, characterised by momenta k₁, . . . , kₙ. This is schematically shown in Fig. 4. Our task is to find a description of such a scattering process in terms of the underlying quantum field theory.
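The statement H_int = −L_int follows from a one-line Legendre transform; a sketch of the reasoning, in the notation used above:

```latex
% If L_int = L_int(phi) contains no time derivatives, the conjugate
% momentum pi = \partial L / \partial \dot\phi = \partial L_0 / \partial \dot\phi
% is unchanged by the interaction, and hence
\begin{align}
\mathcal{H} &= \pi\dot\phi - \mathcal{L}
             = \pi\dot\phi - \mathcal{L}_0 - \mathcal{L}_{\rm int}
             = \mathcal{H}_0 + \mathcal{H}_{\rm int},
\qquad
\mathcal{H}_{\rm int} = -\mathcal{L}_{\rm int}.
\end{align}
```

The key point is simply that the interaction drops through the Legendre transform untouched whenever it does not involve φ̇.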

The S-matrix
The timescales over which interactions happen are extremely short. The scattering (interaction) process takes place during a short interval around some particular time t with −∞ ≪ t ≪ ∞. Figure 4: Scattering of two initial particles with momenta p₁ and p₂ into n particles with momenta k₁, . . . , kₙ in the final state.
Long before t, the incoming particles evolve independently and freely. They are described by a field operator φ_in, defined through lim_{t→−∞} φ(x) = φ_in(x), which acts on a corresponding basis of |in⟩ states. Long after the collision the particles in the final state evolve again like in the free theory, and the corresponding operator φ_out acts on states |out⟩. The fields φ_in, φ_out are the asymptotic limits of the Heisenberg operator φ. They both satisfy the free Klein-Gordon equation, i.e. (□ + m²)φ_in(x) = 0 and (□ + m²)φ_out(x) = 0.
Operators describing free fields can be expressed as a superposition of plane waves (see Eq. (113)). Thus, φ_in has an expansion of exactly that form, with an entirely analogous expression for φ_out(x). Note that the operators a† and a also carry subscripts "in" and "out".
The above discussion assumes that the interaction is such that we can talk about free particles at asymptotic times t → ±∞, i.e. that the interaction is only present at intermediate times. This is not always a reasonable assumption: e.g. it does not encompass the phenomenon of bound states, in which incident particles form a composite object at late times, which no longer consists of free particles. Nevertheless, the assumption will indeed allow us to discuss scattering processes, which is the aim of this course. Note that we can only talk about well-defined particle states at t → ±∞ (the states labelled by "in" and "out" above), as only at these times do we have a free theory, and thus know what the spectrum of states is (using the methods of section 3). At general times t, the interaction is present, and it is not possible in general to solve for the states of the quantum field theory. Remarkably, we will end up seeing that we can ignore all the complicated stuff at intermediate times, and solve for scattering probabilities purely using the properties of the asymptotic fields.
At the asymptotic times t = ±∞, we can use the creation operators a†_in and a†_out to build up Fock states from the vacuum. For instance a†_out(k₁) · · · a†_out(kₙ)|0⟩ = |k₁, . . . , kₙ; out⟩.
We must now distinguish between Fock states generated by a†_in and a†_out, and therefore we have labelled the Fock states accordingly. In eqs. (160) and (161) we have assumed that there is a stable and unique vacuum state of the free theory (the vacuum at general times t will be that of the full interacting theory, and thus differ from this in general): |0⟩ = |0; in⟩ = |0; out⟩. Mathematically speaking, the a†_in's and a†_out's generate two different bases of the Fock space. Since the physics that we want to describe must be independent of the choice of basis, expectation values expressed in terms of "in" and "out" operators and states must agree. Here |in⟩ and |out⟩ denote generic "in" and "out" states. We can relate the two bases by introducing a unitary operator S such that |in⟩ = S|out⟩. S is called the S-matrix or S-operator. Note that the plane wave solutions of φ_in and φ_out also imply that φ_in(x) = S φ_out(x) S⁻¹. By comparing "in" with "out" states one can extract information about the interaction — this is the very essence of detector experiments, where one tries to infer the nature of the interaction by studying the products of the scattering of particles that have been collided with known energies. As we will see below, this information is contained in the elements of the S-matrix. By contrast, in the absence of any interaction, i.e. for L_int = 0, the distinction between φ_in and φ_out is not necessary. They can thus be identified, and then the relation between different bases of the Fock space becomes trivial, S = 1, as one would expect. What we are ultimately interested in are transition amplitudes between an initial state i of, say, two particles of momenta p₁, p₂, and a final state f, for instance n particles of unequal momenta. The transition amplitude is then given by S_fi = ⟨f; out|i; in⟩. The S-matrix element S_fi therefore describes the transition amplitude for the scattering process in question. The scattering cross section, which is a measurable quantity, is then proportional to |S_fi|².
All information about the scattering is thus encoded in the S-matrix, which must therefore be closely related to the interaction Hamiltonian density H int . However, before we try to derive the relation between S and H int we have to take a slight detour.

More on time evolution: Dirac picture
The operators φ(x, t) and π(x, t) which we have encountered are Heisenberg fields and thus time-dependent. The state vectors are time-independent in the sense that they do not satisfy a non-trivial equation of motion. Nevertheless, state vectors in the Heisenberg picture can carry a time label. For instance, the "in"-states of the previous subsection are defined at t = −∞. The relation of the Heisenberg operator φ_H(x) with its counterpart φ_S in the Schrödinger picture is given by

φ_H(x, t) = e^{iHt} φ_S(x) e^{−iHt}.

Note that this relation involves the full Hamiltonian H = H₀ + H_int in the interacting theory. We have so far found solutions to the Klein-Gordon equation in the free theory, and so we know how to handle time evolution in this case. However, in the interacting case the Klein-Gordon equation has an extra term, due to the potential of the interactions. Apart from very special cases of this potential, the equation cannot be solved anymore in closed form, and thus we no longer know the time evolution. It is therefore useful to introduce a new quantum picture for the interacting theory, in which the time dependence is governed by H₀ only. This is the so-called Dirac or Interaction picture. The relation between fields in the Interaction picture, φ_I, and in the Schrödinger picture, φ_S, is given by

φ_I(x, t) = e^{iH₀t} φ_S(x) e^{−iH₀t}.

At t = −∞ the interaction vanishes, i.e. H_int = 0, and hence the fields in the Interaction and Heisenberg pictures are identical, i.e. φ_H(x, t) = φ_I(x, t) for t → −∞. The relation between φ_H and φ_I can be worked out easily:

φ_H(x, t) = U⁻¹(t) φ_I(x, t) U(t),

where we have introduced the unitary operator U(t) = e^{iH₀t} e^{−iHt}. The field φ_H(x, t) contains the information about the interaction, since it evolves over time with the full Hamiltonian.
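The picture relations can be checked numerically in a toy two-level model (an illustration with arbitrarily chosen H₀, H_int, and observable; not part of the notes): with U(t) = e^{iH₀t} e^{−iHt}, the Heisenberg operator equals U⁻¹ times the Interaction-picture operator times U.

```python
import numpy as np

def expm_herm(H, t):
    """exp(-i H t) for a Hermitian matrix H via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * w * t)) @ V.conj().T

H0   = np.diag([0.0, 1.0])                        # free Hamiltonian
Hint = 0.3 * np.array([[0.0, 1.0], [1.0, 0.0]])   # interaction
H    = H0 + Hint
A    = np.array([[1.0, 0.0], [0.0, -1.0]])        # a Schroedinger operator
t    = 0.7

U   = expm_herm(H0, -t) @ expm_herm(H, t)          # U(t) = e^{iH0 t} e^{-iHt}
A_H = expm_herm(H, -t) @ A @ expm_herm(H, t)       # Heisenberg picture
A_I = expm_herm(H0, -t) @ A @ expm_herm(H0, t)     # interaction picture
print(np.allclose(A_H, U.conj().T @ A_I @ U))      # True: A_H = U^{-1} A_I U
```

Since U is a product of unitaries it is unitary, and the identity holds for any t.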
In order to describe the "in" and "out" field operators, we can now make the following identifications: φ_in(x, t) = φ_I(x, t) = φ_H(x, t) as t → −∞, and φ_out(x, t) = φ_H(x, t) as t → +∞. Furthermore, since the fields φ_I evolve over time with the free Hamiltonian H₀, they always act in the basis of "in" vectors, such that φ_in(x, t) = φ_I(x, t) for all t. The relation between φ_I and φ_H at any time t is given by φ_I(x, t) = U(t) φ_H(x, t) U⁻¹(t). As t → ∞ the identifications of eqs. (174) and (175) yield φ_in = U(∞) φ_out U⁻¹(∞). From the definition of the S-matrix, Eq. (164), we then read off that

lim_{t→∞} U(t) = S.

We have thus derived a formal expression for the S-matrix in terms of the operator U(t), which tells us how operators and state vectors deviate from the free theory at time t, measured relative to t₀ = −∞, i.e. long before the interaction process. An important boundary condition for U(t) is

lim_{t→−∞} U(t) = 1.

What we mean here is the following: the operator U actually describes the evolution relative to some initial time t₀, which we will normally suppress, i.e. we write U(t) instead of U(t, t₀). We regard t₀ merely as a time label and fix it at −∞, where the interaction vanishes. Equation (179) then simply states that U becomes unity as t → t₀, which means that in this limit there is no distinction between Heisenberg and Dirac fields. Using the definition of U(t), Eq. (172), it is an easy exercise to derive the equation of motion for U(t):

i dU(t)/dt = H_int(t) U(t).

The time-dependent operator H_int(t) is defined in the interaction picture, and depends on the fields φ_in, π_in in the "in" basis. Let us now solve the equation of motion for U(t) with the boundary condition lim_{t→−∞} U(t) = 1. Integrating gives

U(t) = 1 − i ∫_{−∞}^{t} dt₁ H_int(t₁) U(t₁).

The right-hand side still depends on U, but we can substitute our new expression for U(t) into the integrand, which gives

U(t) = 1 − i ∫_{−∞}^{t} dt₁ H_int(t₁) + (−i)² ∫_{−∞}^{t} dt₁ ∫_{−∞}^{t₁} dt₂ H_int(t₁) H_int(t₂) U(t₂),

where t₂ < t₁ < t. This procedure can be iterated further, so that the nth term in the sum is

(−i)ⁿ ∫_{−∞}^{t} dt₁ ∫_{−∞}^{t₁} dt₂ · · · ∫_{−∞}^{t_{n−1}} dtₙ H_int(t₁) H_int(t₂) · · · H_int(tₙ).

This iterative solution could be written in much more compact form, were it not for the fact that the upper integration bounds are all different, and that the ordering tₙ < t_{n−1} < . . . < t₁ < t has to be obeyed.
Time ordering is an important issue, since one has to ensure that the interaction Hamiltonians act at the proper time, thereby ensuring the causality of the theory. By introducing the time-ordered product of operators, one can use a compact notation, such that the resulting expressions still obey causality. The time-ordered product of two fields φ(t₁) and φ(t₂) is defined as

T{φ(t₁)φ(t₂)} = θ(t₁ − t₂) φ(t₁)φ(t₂) + θ(t₂ − t₁) φ(t₂)φ(t₁),

where θ denotes the step function. The generalisation to products of n operators is obvious. Using time ordering for the nth term of Eq. (183) we obtain

((−i)ⁿ/n!) ∫_{−∞}^{t} dt₁ · · · ∫_{−∞}^{t} dtₙ T{H_int(t₁) · · · H_int(tₙ)}.

Here we have replaced each upper limit of integration with t. Each specific ordering in the time-ordered product gives a term identical to eq. (183), where applying the T operator corresponds to setting the upper limit of integration to the relevant t_i in each integral. However, we have overcounted by a factor n!, corresponding to the number of ways of ordering the fields in the time-ordered product. Thus one must divide by n! as shown. We may recognise eq. (185) as the nth term in the series expansion of an exponential, and thus can finally rewrite the solution for U(t) in compact form as

U(t) = T exp{ −i ∫_{−∞}^{t} dt′ H_int(t′) },

where the "T" in front ensures the correct time ordering.
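The n! overcounting argument can be checked numerically for n = 2 (a sketch with an arbitrarily chosen, non-commuting toy H_int(t); not from the notes): the ordered double integral over t₂ < t₁ equals 1/2! times the time-ordered integral over the full square.

```python
import numpy as np

sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]])

def Hint(t):
    # A time-dependent toy interaction that does not commute with itself
    # at different times, so the ordering genuinely matters.
    return np.cos(t) * sx + np.sin(t) * sz

T_tot, n = 1.0, 200
ts = np.linspace(0.0, T_tot, n)
dt = ts[1] - ts[0]

ordered = np.zeros((2, 2))   # integral over the ordered region t2 < t1
square  = np.zeros((2, 2))   # time-ordered integral over the full square
for t1 in ts:
    for t2 in ts:
        if t2 < t1:
            ordered = ordered + Hint(t1) @ Hint(t2) * dt * dt
        A, B = (Hint(t1), Hint(t2)) if t1 >= t2 else (Hint(t2), Hint(t1))
        square = square + A @ B * dt * dt

# The ordered region is 1/2! of the time-ordered square integral.
print(np.allclose(ordered, 0.5 * square, atol=1e-2))  # True up to grid error
```

The same bookkeeping generalises: the n nested integrals cover 1/n! of the hypercube, which is exactly the factor appearing in the time-ordered exponential.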

S-matrix and Green's functions
The S-matrix, which relates the "in" and "out" fields before and after the scattering process, can be written as

S = 1 + iT,

where T is commonly called the T-matrix. The fact that S contains the unit operator means that the case where none of the particles scatter is also encoded in S. On the other hand, the non-trivial case is described by the T-matrix, and this is what we are interested in. So far we have derived an expression for the S-matrix in terms of the interaction Hamiltonian, and we could use this in principle to calculate scattering processes. However, there is a slight complication owing to the fact that the vacuum of the free theory is not the same as the true vacuum of the full, interacting theory. Instead, we will follow the approach of Lehmann, Symanzik and Zimmermann, which relates the S-matrix to n-point Green's functions, i.e. vacuum expectation values of Heisenberg fields. We will see later how to calculate these in terms of vacuum expectation values of "in" fields (i.e. in the free theory).
In order to relate S-matrix elements to Green's functions, we have to express the "in/out"-states in terms of creation operators a†_in/out and the vacuum, then express the creation operators by the fields φ_in/out, and finally use the time evolution to connect those with the fields φ in our Lagrangian. Let us consider again the scattering process depicted in Fig. 4. The S-matrix element in this case is

S_fi = ⟨k₁, . . . , kₙ; out|p₁, p₂; in⟩ = ⟨k₁, . . . , kₙ; out|a†_in(p₁)|p₂; in⟩,

where a†_in is the creation operator pertaining to the "in" field φ_in. Our task is now to express a†_in in terms of φ_in, and repeat this procedure for all other momenta labelling our Fock states.
The following identities, which express the creation operators in terms of the fields themselves, will prove useful. The S-matrix element can then be rewritten with the field φ in place of a†_in, where in the last line we have used Eq. (156) to replace φ_in by φ. We can now rewrite lim_{t₁→−∞} using the following identity, which holds for an arbitrary, differentiable function f(t) whose limits t → ±∞ exist:

lim_{t→−∞} f(t) = lim_{t→+∞} f(t) − ∫_{−∞}^{+∞} dt (df/dt).

The first term in the resulting expression for the S-matrix element involves lim_{t₁→+∞} φ = φ_out, which gives rise to a contribution ∝ ⟨k₁, . . . , kₙ; out|a†_out(p₁)|p₂; in⟩.
This is non-zero only if p₁ is equal to one of k₁, . . . , kₙ. This, however, means that the particle with momentum p₁ does not scatter, and hence the first term does not contribute to the T-matrix of Eq. (187). We are then left with the remaining expression for S_fi. The time derivatives in the integrand can be worked out, where we have used that −∇² e^{−ip₁·x₁} = p₁² e^{−ip₁·x₁}. For the S-matrix element one then obtains an expression in which the Klein-Gordon operator acts on the field, where we have used integration by parts twice so that ∇² acts on φ(x₁) rather than on e^{−ip₁·x₁}.
What we have obtained after this rather lengthy step of algebra is an expression in which the (Heisenberg) field operator is sandwiched between Fock states, one of which has been reduced to a one-particle state. We can now successively eliminate all momentum variables from the Fock states, by repeating the procedure for the momentum p₂, as well as the n momenta of the "out" state. The final expression for S_fi is

S_fi = (i)^{n+2} ∫ d⁴x₁ d⁴x₂ d⁴y₁ · · · d⁴yₙ e^{−ip₁·x₁ − ip₂·x₂ + ik₁·y₁ + · · · + ikₙ·yₙ} (□_{x₁} + m²)(□_{x₂} + m²)(□_{y₁} + m²) · · · (□_{yₙ} + m²) ⟨0|T{φ(x₁)φ(x₂)φ(y₁) · · · φ(yₙ)}|0⟩,

where the time-ordering inside the vacuum expectation value (VEV) ensures that causality is obeyed. The above expression is known as the Lehmann-Symanzik-Zimmermann (LSZ) reduction formula. It relates the formal definition of the scattering amplitude to a vacuum expectation value of time-ordered fields. Since the vacuum is uniquely the same for "in/out", the VEV in the LSZ formula for the scattering of two initial particles into n particles in the final state is recognised as the (n + 2)-point Green's function

G_{n+2}(x₁, x₂, y₁, . . . , yₙ) = ⟨0|T{φ(x₁)φ(x₂)φ(y₁) · · · φ(yₙ)}|0⟩.

You will note that we still have not calculated or evaluated anything, but merely rewritten the expression for the scattering matrix elements. Nevertheless, the LSZ formula is of tremendous importance and a central piece of QFT. It provides the link between fields in the Lagrangian and the scattering amplitude S_fi, whose square |S_fi|² yields the cross section, measurable in an experiment. Up to here no assumptions or approximations have been made, so this connection between physics and formalism is rather tight. It also illustrates a profound phenomenon of QFT and particle physics: the scattering properties of particles, in other words their interactions, are encoded in the vacuum structure, i.e. the vacuum is non-trivial!

How to compute Green's functions
Of course, in order to calculate cross sections, we need to compute the Green's functions. Alas, for any physically interesting interacting theory this cannot be done exactly, in contrast to the free theory discussed earlier. Instead, approximation methods have to be used in order to simplify the calculation, while hopefully still giving reliable results. Alternatively, one can reformulate the entire QFT as a lattice field theory, which in principle allows one to compute Green's functions without any approximations (in practice this still turns out to be a difficult task for physically relevant systems). This is what many theorists do for a living. But the formalism stands, and if there are discrepancies between theory and experiments, one "only" needs to check the accuracy with which the Green's functions have been calculated or measured, before approving or discarding a particular Lagrangian.
In the next section we shall discuss how to compute the Green's functions of scalar field theory in perturbation theory. Before we can tackle the actual computation, we must take a further step. Let us consider the n-point Green's function

G_n(x₁, . . . , xₙ) = ⟨0|T{φ(x₁) · · · φ(xₙ)}|0⟩.

The fields φ which appear in this expression are Heisenberg fields, whose time evolution is governed by the full Hamiltonian H₀ + H_int. In particular, the φ's are not the φ_in's. We know how to handle the latter, because they correspond to a free field theory, but not the former, whose time evolution is governed by the interacting theory, whose solutions we do not know. Let us thus start to isolate the dependence of the fields on the interaction Hamiltonian. Recall the relation between the Heisenberg fields φ(t) and the "in"-fields, φ(x, t) = U⁻¹(t) φ_in(x, t) U(t). We now assume that the fields are properly time-ordered, i.e. t₁ > t₂ > . . . > tₙ, so that we can forget about writing T(· · ·) everywhere. After inserting Eq. (202) into the definition of G_n, one obtains a string of factors U⁻¹(t_i) φ_in(x_i) U(t_i). Now we introduce another time label t such that t ≫ t₁ and −t ≪ tₙ. For the n-point function we then obtain an expression whose central part, in curly braces, is time-ordered by construction. An important observation at this point is that it involves pairs of U and its inverse, for instance U(t) U⁻¹(t₁) ≡ U(t, t₁). One can easily convince oneself that U(t, t₁) provides the net time evolution from t₁ to t. We can now write G_n in terms of such net evolution operators, where we have used the fact that we may commute the U operators within the time-ordered product.
Let us now take t → ∞. The relation between U(t) and the S-matrix, Eq. (178), as well as the boundary condition, Eq. (179), tell us that lim_{t→∞} U(t) = S, which can be inserted into the above expression. We still have to work out the meaning of ⟨0|U⁻¹(∞) in the expression for G_n. In a paper by Gell-Mann and Low it was argued that the time evolution operator must leave the vacuum invariant (up to a phase), which justifies the ansatz

⟨0|U⁻¹(∞) = K⟨0|,

with K being the phase⁵. Multiplying this relation with |0⟩ from the right gives K = ⟨0|U⁻¹(∞)|0⟩. Furthermore, multiplying the ansatz with U(∞)|0⟩ from the right gives 1 = K⟨0|S|0⟩, which implies K = 1/⟨0|S|0⟩. After inserting all these relations into the expression for G_n we obtain

G_n(x₁, . . . , xₙ) = ⟨0|T{φ_in(x₁) · · · φ_in(xₙ) S}|0⟩ / ⟨0|S|0⟩.

The S-matrix is given by S = T exp{−i ∫_{−∞}^{+∞} dt H_int(t)}, and thus we have finally succeeded in expressing the n-point Green's function exclusively in terms of the "in"-fields. This completes the derivation of a relation between the general definition of the scattering amplitude S_fi and the VEV of time-ordered "in"-fields. This has been a long and technical discussion, but the main points are the following: scattering probabilities are related to S-matrix elements. To calculate S-matrix elements for an n-particle scattering process, one must first calculate the n-particle Green's function (eq. (212)), and then plug this into the LSZ formula (eq. (199)).
In fact, the Green's functions cannot be calculated exactly using eq. (212). Instead, one can only obtain answers in the limit in which the interaction strength λ is small. This is the subject of the following sections.

Perturbation Theory
In this section we are going to calculate the Green's functions of scalar quantum field theory explicitly. We will specify the interaction Lagrangian in detail and use an approximation known as perturbation theory. At the end we will derive a set of rules, which represent a systematic prescription for the calculation of Green's functions, and can be easily generalised to apply to other, more complicated field theories. These are the famous Feynman rules.
We start by making a definite choice for the interaction Lagrangian L_int. Although one may think of many different expressions for L_int, one has to obey some basic principles: firstly, L_int must be chosen such that the potential it generates is bounded from below — otherwise the system has no ground state. Secondly, our interacting theory should be renormalisable. Despite being of great importance, the second issue will not be addressed in these lectures. The requirement of renormalisability arises because the non-trivial vacuum, much like a medium, interacts with particles to modify their properties. Moreover, if one computes quantities like the energy or charge of a particle, one typically obtains a divergent result 6 . There are classes of quantum field theories, called renormalisable, in which these divergences can be removed by suitable redefinitions of the fields and the parameters (masses and coupling constants). For our theory of a real scalar field in four space-time dimensions, it turns out that the only interaction term which leads to a renormalisable theory must be quartic in the fields. Thus we choose

L_int = −(λ/4!) φ⁴(x),

where the coupling constant λ describes the strength of the interaction between the scalar fields, much like, say, the electric charge describing the strength of the interaction between photons and electrons. The factor 4! is for later convenience. The full Lagrangian of the theory then reads

L = L₀ + L_int = ½ (∂_µφ)(∂^µφ) − ½ m²φ² − (λ/4!) φ⁴,

and the explicit expressions for the interaction Hamiltonian and the S-matrix are

H_int = −L_int = (λ/4!) φ⁴_in,   S = T exp{ −i (λ/4!) ∫ d⁴x φ⁴_in(x) }.

The n-point Green's function is then

G_n(x₁, . . . , xₙ) = ⟨0|T{φ_in(x₁) · · · φ_in(xₙ) S}|0⟩ / ⟨0|S|0⟩.

This expression cannot be dealt with as it stands. In order to evaluate it we must expand G_n in powers of the coupling λ and truncate the series after a finite number of terms. This only makes sense if λ is sufficiently small. In other words, the interaction Lagrangian must act as a small perturbation on the system. As a consequence, the procedure of expanding Green's functions in powers of the coupling is referred to as perturbation theory.
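The "bounded from below" requirement can be illustrated directly (a toy numeric check, not from the notes; the parameter values are arbitrary): the classical potential V(φ) = ½m²φ² + (λ/4!)φ⁴ has a ground state only for λ ≥ 0.

```python
import numpy as np

# Classical phi^4 potential: bounded below for lam >= 0, but for lam < 0
# the quartic term drives V to -infinity at large |phi|, so there is no
# ground state.
def V(phi, m=1.0, lam=0.5):
    return 0.5 * m**2 * phi**2 + lam / 24.0 * phi**4

phi = np.linspace(-50, 50, 1001)
print(V(phi, lam=+0.5).min() >= 0.0)   # True: minimum at phi = 0
print(V(phi, lam=-0.5).min() < -1e4)   # True: unbounded below for lam < 0
```

Scanning over a grid is enough to see the qualitative difference; for negative λ the minimum on the grid keeps dropping as the grid is widened.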
We will see that there is a natural diagrammatic representation of this expansion (Feynman diagrams). First, we need to know how to calculate the vacuum expectation values of time ordered products. This is the subject of the next section.

Wick's Theorem
The n-point Green's function in Eq. (217) involves the time-ordered product of at least n fields.
There is a method to express VEVs of n fields, i.e. ⟨0|T{φ_in(x_1) ... φ_in(x_n)}|0⟩, in terms of VEVs involving two fields only. This is known as Wick's theorem. Let us for the moment ignore the subscript "in" and return to the definition of normal-ordered fields. The normal-ordered product :φ(x_1)φ(x_2): is obtained by writing the fields in terms of creation and annihilation operators and moving all annihilation operators to the right of all creation operators, so that its VEV vanishes. We are now going to combine normal-ordered products with time ordering. The time-ordered product T{φ(x_1)φ(x_2)} is given by

    T{φ(x_1)φ(x_2)} = :φ(x_1)φ(x_2): + ⟨0|T{φ(x_1)φ(x_2)}|0⟩.    (219)

Here we have used the important observation that

    :φ(x_1)φ(x_2): = :φ(x_2)φ(x_1):,

which means that normal-ordered products of fields are automatically time-ordered.⁷ Equation (219) is Wick's theorem for the case of two fields. For the case of three fields, Wick's theorem yields

    T{φ_1 φ_2 φ_3} = :φ_1 φ_2 φ_3: + :φ_1: ⟨0|T{φ_2 φ_3}|0⟩ + :φ_2: ⟨0|T{φ_1 φ_3}|0⟩ + :φ_3: ⟨0|T{φ_1 φ_2}|0⟩,

where φ_i ≡ φ(x_i). At this point the general pattern becomes clear: any time-ordered product of fields is equal to its normal-ordered version plus terms in which pairs of fields are removed from the normal-ordered product and sandwiched between the vacuum to form 2-point functions; one then sums over all permutations. Without proof we give the expression for the general case of n fields (n even):

    T{φ_1 ... φ_n} = :φ_1 ... φ_n:
        + Σ_{i<j} ⟨0|T{φ_i φ_j}|0⟩ :φ_1 ... φ̂_i ... φ̂_j ... φ_n:
        + (terms with two, three, ... pairs removed)
        + Σ_{pairings} ⟨0|T{φ_{i_1} φ_{i_2}}|0⟩ ... ⟨0|T{φ_{i_{n-1}} φ_{i_n}}|0⟩.    (223)

The symbol φ̂_i indicates that φ(x_i) has been removed from the normal-ordered product.
Let us now go back to ⟨0|T{φ(x_1) ... φ(x_n)}|0⟩. If we insert Wick's theorem, then we find that only the contribution in the last line of Eq. (223) survives: by definition the VEV of a normal-ordered product of fields vanishes, and it is precisely the last line of Wick's theorem in which no normal-ordered products are left. The only surviving contribution is that in which all fields have been paired or "contracted". A contraction of a pair φ(x_i)φ(x_j) is sometimes represented by joining the two fields with a brace, and stands for the 2-point function ⟨0|T{φ(x_i)φ(x_j)}|0⟩. Wick's theorem can now be rephrased as: ⟨0|T{φ(x_1) ... φ(x_n)}|0⟩ = sum of all possible contractions of n fields.
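The counting behind this statement can be checked by brute force: the number of full contractions of n fields is (n-1)!! = (n-1)(n-3)...1. The following sketch (not part of the original notes; the helper name `pairings` is our own) enumerates all ways of joining n labelled fields into pairs:

```python
def pairings(points):
    """Recursively enumerate all ways of joining an even number of
    labelled points into unordered pairs (the full contractions
    appearing in Wick's theorem)."""
    if not points:
        return [[]]
    first, rest = points[0], points[1:]
    result = []
    for partner in rest:
        remaining = [p for p in rest if p != partner]
        for sub in pairings(remaining):
            result.append([(first, partner)] + sub)
    return result

# (n-1)!! full contractions of n fields:
assert len(pairings(["x1", "x2"])) == 1                          # 1!!  = 1
assert len(pairings(["x1", "x2", "x3", "x4"])) == 3              # 3!!  = 3
assert len(pairings(["x1", "x2", "x3", "x4", "x5", "x6"])) == 15  # 5!! = 15
```

The three contractions for n = 4 are exactly the three terms of the 4-point function quoted below.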
An example of this result is the 4-point function:

    ⟨0|T{φ_1 φ_2 φ_3 φ_4}|0⟩ = ⟨0|T{φ_1 φ_2}|0⟩ ⟨0|T{φ_3 φ_4}|0⟩
        + ⟨0|T{φ_1 φ_3}|0⟩ ⟨0|T{φ_2 φ_4}|0⟩ + ⟨0|T{φ_1 φ_4}|0⟩ ⟨0|T{φ_2 φ_3}|0⟩.

The Feynman propagator
Using Wick's theorem one can relate any n-point Green's function to an expression involving only 2-point functions. Let us have a closer look at

    G_2(x, y) = ⟨0|T{φ(x)φ(y)}|0⟩.

We can now insert the solution for φ in terms of â and â†. If we assume t_x > t_y, then G_2(x, y) can be written as

    G_2(x, y) = ∫ d³p d³q / [(2π)⁶ 4E(p)E(q)] ⟨0| â(p) â†(q) |0⟩ e^{-i(p·x - q·y)}.

This shows that G_2 can be interpreted as the amplitude for a meson which is created at y and destroyed again at point x. We can now replace â(p)â†(q) by its commutator,

    ⟨0| â(p) â†(q) |0⟩ = ⟨0| [â(p), â†(q)] |0⟩ = (2π)³ 2E(p) δ³(p - q),

and the general result, after restoring time-ordering, reads

    G_2(x, y) = ∫ d³p / [(2π)³ 2E(p)] { θ(t_x - t_y) e^{-ip·(x-y)} + θ(t_y - t_x) e^{ip·(x-y)} }.

Furthermore, using contour integration one can show that this expression can be rewritten as a 4-dimensional integral,

    G_2(x, y) = ∫ d⁴p/(2π)⁴ · i e^{-ip·(x-y)} / (p² - m² + iε),    (231)

where ε is a small parameter which ensures that the integration contour does not hit the poles of the integrand at p² = m². This calculation has established that G_2(x, y) actually depends only on the difference (x - y). Equation (231) defines the Feynman propagator G_F(x - y):

    G_F(x - y) = ⟨0|T{φ(x)φ(y)}|0⟩ = ∫ d⁴p/(2π)⁴ · i e^{-ip·(x-y)} / (p² - m² + iε).

The Feynman propagator is a Green's function of the Klein-Gordon equation, i.e. it satisfies

    (□_x + m²) G_F(x - y) = -i δ⁴(x - y),

and describes the propagation of a meson between the space-time points x and y.
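The contour-integral step can be checked numerically at fixed spatial momentum. The sketch below (our own check, with illustrative parameter values; a fairly large ε is used so that the poles are easy to resolve on a uniform grid) compares a direct numerical p⁰ integral of the propagator against the residue-theorem result obtained by closing the contour in the lower half-plane for t > 0:

```python
import numpy as np

E, t, eps = 1.0, 1.0, 0.1   # illustrative values: on-shell energy, time, i-epsilon

# (1/2pi) * Integral dp0 of  i exp(-i p0 t) / (p0^2 - E^2 + i eps),
# evaluated by the trapezoidal rule on a truncated range:
dp = 0.005
p0 = np.arange(-500.0, 500.0, dp)
f = 1j * np.exp(-1j * p0 * t) / (p0**2 - E**2 + 1j * eps)
numeric = np.sum(0.5 * (f[:-1] + f[1:])) * dp / (2 * np.pi)

# Closing the contour below (t > 0) picks up the pole at p0 = sqrt(E^2 - i eps),
# which lies in the lower half-plane; the residue gives exp(-i z t) / (2 z):
z = np.sqrt(E**2 - 1j * eps)
analytic = np.exp(-1j * z * t) / (2 * z)

assert abs(numeric - analytic) < 1e-2
```

As ε → 0 this reduces to the familiar e^{-iE|t|}/(2E) appearing in the 3-dimensional form of G_2.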

Two-particle scattering to O(λ)
Let us now consider a scattering process in which two incoming particles with momenta p_1 and p_2 scatter into two outgoing ones with momenta k_1 and k_2, as shown in Fig. 5. The S-matrix element in this case is

    S_fi = ⟨k_1, k_2; out | p_1, p_2; in⟩ = ⟨k_1, k_2; in | S | p_1, p_2; in⟩,

with S = 1 + iT.

Figure 5: Scattering of two initial particles with momenta p_1 and p_2 into 2 particles with momenta k_1 and k_2.

The LSZ formula, Eq. (199), tells us that we must compute G_4 in order to obtain S_fi. Let us work out G_4 in powers of λ using Wick's theorem.
Suppressing the subscripts "in" from now on, the expression we have to evaluate order by order in λ is

    G_4(x_1, ..., x_4) = ⟨0| T{ φ(x_1) ... φ(x_4) S } |0⟩ / ⟨0| S |0⟩.

At O(λ⁰), the denominator is 1, and the numerator gives

    ⟨0|T{φ_1 φ_2 φ_3 φ_4}|0⟩ = G_F(x_1 - x_2) G_F(x_3 - x_4) + G_F(x_1 - x_3) G_F(x_2 - x_4) + G_F(x_1 - x_4) G_F(x_2 - x_3),

where we have used Wick's theorem. Graphically this corresponds to three disconnected diagrams, in each of which the four external points are joined pairwise by propagators. But this is the same answer as if we had set λ = 0, so O(λ⁰) does not describe scattering and hence is not a contribution to the T-matrix.
The first non-trivial scattering happens at O(λ). For example, the expansion of the above formula includes the contribution (from the numerator)

    (-iλ/4!) ∫ d⁴y ⟨0|T{φ(x_1) ... φ(x_4) φ⁴(y)}|0⟩ ⊃ -iλ ∫ d⁴y G_F(x_1 - y) G_F(x_2 - y) G_F(x_3 - y) G_F(x_4 - y),

where the factor 4! inside the integral, which arises from all possible contractions in Wick's theorem, cancels the 1/4! in the coupling. The graphical representation is a single vertex y joined to each of the four external points x_1, ..., x_4, where each line corresponds to a propagator, and we have assigned a vertex to each space-time point. Also at this order, we have graphs in which some external points are joined directly to each other, possibly accompanied by closed loops. We will see later on that neither of these graphs contributes to the S-matrix element (after substituting the Green's function into the LSZ formula of Eq. (199)), as they are not fully connected. By this we mean that not all external particle vertices are connected to the same graph. At yet higher orders, we may have graphs which involve fully connected pieces, dressed by additional "vacuum bubbles" (subdiagrams not attached to any external point). These vacuum bubbles are cancelled by the denominator in Eq. (212) which, given that it contains no external fields, generates all possible vacuum graphs. The presence of these vacuum graphs explains why the vacuum of the interacting theory is different to that of the free theory, as mentioned earlier.
To summarise, the final answer for the scattering amplitude to O(λ) is given by Eq. (237).

Graphical representation of the Wick expansion: Feynman rules
We have already encountered the graphical representation of the expansion of Green's functions in perturbation theory after applying Wick's theorem. It is possible to formulate a simple set of rules which allow us to draw the graphs directly, without using Wick's theorem, and to write down the corresponding algebraic expressions. We again consider a neutral scalar field whose Lagrangian is

    L = (1/2)(∂_μ φ)(∂^μ φ) - (m²/2) φ² - (λ/4!) φ⁴.

Suppose now that we want to compute the O(λ^m) contribution to the n-point Green's function G_n(x_1, ..., x_n). This is achieved by going through the following steps:
(1) Draw all distinct diagrams with n external lines and m 4-fold vertices:
• Draw n dots and label them x_1, ..., x_n (external points)
• Draw m dots and label them y_1, ..., y_m (vertices)
• Join the dots according to the following rules:
– only one line emanates from each x_i
– exactly four lines run into each y_j
– the resulting diagram must be connected, i.e. there must be a continuous path between any two points.
(2) Assign a factor -(iλ/4!) ∫ d⁴y_i to the vertex at y_i.
(3) Assign a factor G_F(x_i - y_j) to the line joining x_i and y_j.
(4) Multiply by the number of contractions C from the Wick expansion which lead to the same diagram.
These are the Feynman rules for scalar field theory in position space.
Let us look at an example, namely the 2-point function. According to the Feynman rules, the contributions up to order λ² are the free propagator, the one-vertex correction, and the two-vertex diagrams. The combinatorial factor for this contribution is worked out as C = 4 · 4!. Note that the same graph, but with the positions of y_1 and y_2 interchanged, is topologically distinct. Numerically it has the same value as the above graph, and so the corresponding expression has to be multiplied by a factor 2. Another contribution at order λ² contains a vacuum bubble disconnected from the external points. This contribution must be discarded, since not all of the points are connected via a continuous line.
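The combinatorial factor C for a given diagram can always be obtained by brute-force enumeration of Wick contractions. As a simpler illustration than the λ² example above (our own check, with our own helper names), consider the O(λ) correction to the 2-point function: the two external points each attach to the single vertex y, whose remaining two fields are contracted with each other, giving C = 4 · 3 = 12:

```python
def pairings(points):
    """Enumerate all full contractions (perfect pairings) of the points."""
    if not points:
        return [[]]
    first, rest = points[0], points[1:]
    out = []
    for partner in rest:
        remaining = [p for p in rest if p != partner]
        for sub in pairings(remaining):
            out.append([(first, partner)] + sub)
    return out

# Fields in <0|T{phi(x1) phi(x2) phi(y)^4}|0>; the four vertex fields
# are distinguishable for counting purposes:
fields = ["x1", "x2", "ya", "yb", "yc", "yd"]

# Connected contractions: x1 and x2 each pair with a vertex field,
# i.e. the direct x1-x2 contraction is excluded.
connected = [c for c in pairings(fields) if ("x1", "x2") not in c]

assert len(pairings(fields)) == 15   # 5!! contractions in total
assert len(connected) == 4 * 3       # C = 12 for the connected diagram
```

The 3 remaining contractions pair x1 with x2 directly and are the disconnected (non-scattering) pieces.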

Feynman rules in momentum space
It is often simpler to work in momentum space, and hence we will discuss the derivation of Feynman rules in this case. This also reflects what is typically done in scattering experiments (i.e. incoming and outgoing particles have definite momentum). If one works in momentum space, the Green's functions are related to those in position space by a Fourier transform,

    G̃_n(p_1, ..., p_n) = ∫ d⁴x_1 ... d⁴x_n e^{i p_1·x_1} ... e^{i p_n·x_n} G_n(x_1, ..., x_n).    (239)

The Feynman rules then serve to compute the Green's function G̃_n(p_1, ..., p_n) order by order in the coupling.
Let us see how this works for the 2 → 2 scattering example we considered above. At O(λ) this was given in Eq. (237), which we may simplify slightly to

    G_4(x_1, ..., x_4) = -iλ ∫ d⁴y G_F(x_1 - y) G_F(x_2 - y) G_F(x_3 - y) G_F(x_4 - y).

We may now substitute in the momentum space form of each propagator (Eq. (232)) to give

    G_4(x_1, ..., x_4) = -iλ ∫ d⁴y Π_{i=1}^{4} ∫ d⁴q_i/(2π)⁴ · i e^{-i q_i·(x_i - y)} / (q_i² - m² + iε)
                       = -iλ ∫ [Π_{i=1}^{4} d⁴q_i/(2π)⁴ · i e^{-i q_i·x_i} / (q_i² - m² + iε)] (2π)⁴ δ⁴(q_1 + q_2 + q_3 + q_4),

where we have carried out the y integration in the second line. Substituting this into Eq. (239) and carrying out the integrals over each x_i, one finds

    G̃_4(p_1, ..., p_4) = -iλ (2π)⁴ δ⁴(p_1 + p_2 + p_3 + p_4) Π_{i=1}^{4} i / (p_i² - m² + iε).

We will not repeat the above derivation for a general Green's function. Rather, we now state the Feynman rules in momentum space, and the reader may easily verify that the above example is a special case.
Feynman rules (momentum space)
(1) Draw all distinct diagrams with n external lines and m 4-fold vertices:
• Assign momenta p_1, ..., p_n to the external lines
• Assign momenta k_j to the internal lines
(2) Assign to each external line a factor

    i / (p_i² - m² + iε).

(3) Assign to each internal line a factor

    ∫ d⁴k_j/(2π)⁴ · i / (k_j² - m² + iε).

(4) Assign to each vertex a factor -(iλ/4!) (2π)⁴ δ⁴(Σ momenta) (the delta function ensures that momentum is conserved at each vertex).
(5) Multiply by the combinatorial factor C, which is the number of contractions leading to the same momentum space diagram (note that C may be different from the combinatorial factor for the same diagram considered in position space!).
Alternatively, one may rephrase (4) and (5) as follows:
(4*) Each vertex carries a factor -iλ (2π)⁴ δ⁴(Σ momenta),
(5*) Divide by the symmetry factor, i.e. the order of the group of symmetry transformations that leaves the diagram invariant.

S-matrix and truncated Green's functions
The final topic in these lectures is the derivation of a simple relation between the S-matrix element and a particular momentum space Green's function which has its external legs amputated: the so-called truncated Green's function. This further simplifies the calculation of scattering amplitudes using Feynman rules. Let us return to the LSZ formalism and consider the scattering of m initial particles (momenta p_1, ..., p_m) into n final particles with momenta k_1, ..., k_n. The LSZ formula (Eq. (199)) tells us that the S-matrix element is given schematically by

    ⟨k_1, ..., k_n; out | p_1, ..., p_m; in⟩
        = i^{n+m} ∫ Π_{i=1}^{m} d⁴x_i Π_{j=1}^{n} d⁴y_j e^{-i p_i·x_i + i k_j·y_j}
          Π_i (□_{x_i} + m²) Π_j (□_{y_j} + m²) G_{n+m}(x_1, ..., x_m, y_1, ..., y_n).

Let us have a closer look at G_{n+m}(x_1, ..., x_m, y_1, ..., y_n). As shown in Fig. 6, it can be split into Feynman propagators, which connect the external points to the vertices at z_1, ..., z_{n+m}, and a remaining Green's function Ḡ_{n+m}, according to

    G_{n+m}(x_1, ..., y_n) = ∫ d⁴z_1 ... d⁴z_{n+m} G_F(x_1 - z_1) ... G_F(y_n - z_{n+m}) Ḡ_{n+m}(z_1, ..., z_{n+m}),

where, perhaps for obvious reasons, Ḡ_{n+m} is called the truncated Green's function. Putting this back into the LSZ expression for the S-matrix element, and using that

    (□_x + m²) G_F(x - z) = -i δ⁴(x - z),

one obtains, after performing all the integrations over the z_k's, the final relation

    ⟨k_1, ..., k_n; out | p_1, ..., p_m; in⟩ = G̃_{n+m}(p_1, ..., p_m, -k_1, ..., -k_n)

(up to convention-dependent factors of i), where G̃_{n+m} is the truncated (n+m)-point function in momentum space. This result shows that the scattering matrix element is directly given by the truncated Green's function in momentum space. In other words, calculating the S-matrix is much the same as calculating the Green's function, but without the free propagators associated with the external legs. Note that this renders zero any graph which is not fully connected: any diagram in which not all external points are connected to the same graph vanishes upon multiplication by the (p_i² - m²) factors. This is what allowed us to neglect such graphs in the previous section.

Summary
That completes this introductory look at quantum field theory. Although we did not get as far as some of the more relevant physical applications of QFT, we have looked in detail at what a QFT is, and how the description of scattering amplitudes leads to Feynman diagrams. To recap how we did this: 1. We reviewed the Lagrangian formalism for classical field theory, and also the canonical quantisation approach to quantum mechanics.
2. We constructed the Lagrangian for a relativistic field theory (the free Klein-Gordon field), and applied the techniques of canonical quantisation to this field theory.
3. States in this theory were found to represent particle excitations, such that a particle of momentum p was found to be a quantum of excitation in the relevant Fourier mode of the field.
4. We then studied the interacting theory, arguing that at initial and final times (when the interaction dies away) we can work with free fields. These were related by an operator S, whose matrix elements represented the transition probability to go from a given initial to a given final state.
5. Using the interaction picture for time evolution, we found an expression for the S matrix in terms of an evolution operator U , describing how the fields at general time t deviate from the initial free fields.
6. We also found a formula which related S matrix elements to n-particle Green's functions (vacuum expectation values of time-ordered fields). This was the LSZ formula of eq. (199).
7. We related the Green's functions involving Heisenberg fields to those involving the "in" fields at time t → −∞ (eq. (212)).
8. We then found how to compute these Green's functions in perturbation theory, valid when the strength of the interaction is weak. This involved having to calculate vacuum expectation values of time-ordered products, for which we could use Wick's theorem.
9. We developed a graphical representation of Wick's theorem, which led to simple rules (Feynman rules) for the calculation of Green's functions in position or momentum space.
10. These can easily be converted to S matrix elements by truncating the free propagators associated with the external lines.
Needless to say, there are many things we did not have time to talk about. Some of these will be explored by the other courses at this school: • Here we calculated S-matrix elements without explaining how to turn these into decay rates or cross-sections, which are the measurable quantities. This is dealt with in the QED / QCD course.
• The Klein-Gordon field involves particles of spin zero, which are bosons. One may also construct field theories for fermions of spin 1/2, and vector bosons (spin 1). Physical examples include QED and QCD.
• Fields may have internal symmetries (e.g. local gauge invariance). Again, see the QED / QCD and Standard Model courses.
• Diagrams involving loops are divergent, ultimately leading to infinite renormalisation of the couplings and masses. The renormalisation procedure can only be carried out in certain theories. The Standard Model is one example, but other well-known physical theories (e.g. general relativity) fail this criterion.
• There is an alternative formulation of QFT in terms of path integrals (i.e. sums over all possible configurations of fields). This alternative formulation involves some extra conceptual overhead, but allows a much more straightforward derivation of the Feynman rules. More than this, the path integral approach makes many aspects of field theory manifest, and is central to our understanding of what a quantum field theory is. This will not be covered at all in this school, but the interested student will find many excellent textbooks on the subject.
There are other areas which are not covered at this school, but nonetheless are indicative of the fact that field theory is still very much an active research area, with many exciting new developments: • Calculating Feynman diagrams at higher orders is itself a highly complicated subject, and there are a variety of interesting mathematical ideas (e.g. from number theory and complex analysis) involved in current research.
• Sometimes perturbation theory is not well-behaved, in that there are large coefficients at each order of the expansion in the coupling constant. Often the physics of these large contributions can be understood, and summed up to all orders in the coupling. This is known as resummation, and is crucial to obtaining sensible results for many cross-sections, especially in QCD.
• Here we have "solved" for scattering probabilities using a perturbation expansion. It is sometimes possible to numerically solve the theory fully non-perturbatively. Such approaches are known as lattice field theory, due to the fact that one discretizes space and time into a lattice of points. It is then possible (with enough supercomputing power!) to calculate things like hadron masses, which are completely incalculable in perturbation theory.
• Here we set up QFT in Minkowski (flat space). If one attempts to do the same thing in curved space (i.e. a strong gravitational field), many weird things happen that give us tantalising hints of what a quantum field of gravity should look like.
• There are some very interesting recent correspondences between certain limits of certain string theories, and a particular quantum field theory in the strong coupling limit. This has allowed us to gain new insights into nonperturbative field theory from an analytic point of view, and there have been applications in heavy ion physics and even condensed matter systems.
I could go on of course, and many of the more formal developments of current QFT research are perhaps not so interesting to a student in experimental particle physics. However, at the present time some of the more remarkable and novel extensions to the Standard Model (SUSY, extra dimensions) are not only testable, but are actively being looked for. Thus QFT, despite its age, is very much at the forefront of current research efforts and may yet surprise us!

Acknowledgments
I am very grateful to Chris White and Mrinal Dasgupta for providing a previous set of lecture notes, on which these notes are heavily based.

A Books on QFT
There are numerous textbooks already and a surprisingly high number of new books are appearing all the time. As with anything in theoretical physics, exploring a multitude of approaches to a certain field is encouraged.
In the following list, [1] is said to be a good introductory text and a lot of my colleagues use this one for their introduction to QFT classes. Mark has also put a "try-before-buy" version on his webpage, which is an early version of the entire textbook. You can judge yourself if it's worth the investment. My first encounter with QFT was [2]. It's a very good book that heavily makes use of the Path Integral Formalism (not discussed in these lectures); it also includes topics which are normally not featured in general-purpose QFT books (e.g. SUSY, topological aspects). A modern classic is [3], which many use as a standard text. It covers a lot of ground and develops an intuitive approach to QFT (but you aren't spared the hard bits!). It also touches other areas where QFT finds application (e.g. Statistical Physics). In my opinion, it isn't very good for looking things up, because Peskin's pedagogical approach forces logically connected topics to be scattered across the text. Unless you are very familiar with the book, it can take ages to find certain things again. My personal favorite by far is [4], probably owing to the authors' focus on particle theory applications of QFT. But you'll probably need a bit of exposure to one of the introductory texts to fully appreciate the depth and technical details that the authors have put into it. Yes, it's expensive (like most of the graduate-level textbooks), but having an advanced QFT book by a bunch of German authors on your shelf will not go unnoticed by your colleagues. Another good text is [5]. Finally, those who are not faint of heart and who like their field theory from the horse's mouth may like to consult Weinberg's monumental three-volume set [6].

Renormalisation
This course runs in parallel with the Quantum Field Theory course, from which we will use some results. Some topics mentioned in this course will be covered in more detail in the Standard Model and Phenomenology courses next week.

Textbooks: These notes are intended to be self-contained, but only provide a short introduction to a complex and fascinating topic. You may find the following textbooks useful: 1. Aitchison and Hey, Gauge Theories in Particle Physics, CRC Press. The first two are more practical and closer to the spirit of this course, while the others contain many more mathematical details. The last one is very recent. If you are particularly interested in (or confused by) a particular topic, I encourage you to take a look at it. If there are other textbooks which you find particularly helpful, please tell me and I will update the list.
These notes are based heavily on the content of previous versions of this course, in particular the 2013 version by Jennifer Smillie. Throughout, we will use "natural units" where ℏ = c = 1 and the metric signature (+ − − −).
Please email any comments, questions or corrections to a.banfi@sussex.ac.uk.

Relativistic Quantum Mechanics
In order to describe the dynamics of particles involved in high-energy collisions we must be able to combine the theory of phenomena occurring at the smallest scales, i.e. quantum mechanics, with the description of particles moving close to the speed of light, i.e. special relativity. To do this we must develop wave equations which are relativistically invariant (i.e. invariant under Lorentz transformations). In this section we will derive relativistic equations of motion for scalar particles (spin-0) and particles with spin-1/2.

The Klein-Gordon Equation
We start with the Hamiltonian for a particle in classical mechanics:

    E = p²/(2m) + V(x).    (1)

To convert this into a wave equation, we make the replacements E → i∂_t and p → -i∇, so that a plane-wave solution has the energy-momentum relation given in Eq. (1). Applied to a general wavefunction φ, a linear superposition of plane waves, this gives

    i ∂φ/∂t = [ -∇²/(2m) + V(x) ] φ = Hφ,    (3)

where H is the Hamiltonian operator. We recognise this as the Schrödinger equation, the cornerstone of quantum mechanics. From this form, we can deduce that Eq. (3) cannot be relativistically invariant, because time appears only through a first-order derivative on the left-hand side while space appears through second-order derivatives on the right-hand side. Yet we know that a Lorentz transformation, in the x direction for example, mixes the x and t components, and therefore they cannot have different rôles.
The problem with the Schrödinger equation arose because we started from a non-relativistic energy-momentum relation. Let us then start from the relativistic equation for the energy. For a particle with 4-momentum p^μ = (E, p) and mass m,

    E² = p² + m².    (4)

Again we convert this to an operator equation by setting p^μ = i∂^μ, so that the corresponding wave equation for an arbitrary scalar wavefunction φ(x, t) is

    (∂_μ ∂^μ + m²) φ(x) = (□ + m²) φ(x) = 0,    (5)

where we have introduced the four-vector x^μ = (t, x). This is the "Klein-Gordon equation", which is the equation of motion for a free scalar field. We can explicitly check that it is indeed Lorentz invariant. Under a Lorentz transformation,

    x^μ → x'^μ = Λ^μ_ν x^ν.

The field φ is a scalar, i.e. it has the transformation property

    φ'(x') = φ(x).

Therefore, in the primed system,

    (∂'_μ ∂'^μ + m²) φ'(x') = (∂_μ ∂^μ + m²) φ(x) = 0,

since ∂_μ∂^μ is Lorentz invariant, and the equation still holds.
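As a small sanity check (our own addition, not part of the notes), one can verify numerically via central finite differences that a plane wave with the relativistic dispersion relation satisfies the Klein-Gordon equation in 1+1 dimensions:

```python
import numpy as np

m, px = 1.0, 0.7                    # mass and spatial momentum (arbitrary values)
E = np.sqrt(px**2 + m**2)           # relativistic dispersion relation, eq. (4)
h = 1e-3                            # step size for central differences

def phi(t, x):
    """Plane wave exp(-i(E t - p x)) in 1+1 dimensions."""
    return np.exp(-1j * (E * t - px * x))

t0, x0 = 0.3, -0.8                  # arbitrary test point
d2t = (phi(t0 + h, x0) - 2 * phi(t0, x0) + phi(t0 - h, x0)) / h**2
d2x = (phi(t0, x0 + h) - 2 * phi(t0, x0) + phi(t0, x0 - h)) / h**2

# The Klein-Gordon operator applied to the plane wave should vanish:
kg = d2t - d2x + m**2 * phi(t0, x0)
assert abs(kg) < 1e-5
```

The residual is set by the O(h²) truncation error of the finite differences; the wave only solves the equation because E² = p² + m² holds exactly.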

The Dirac Equation
The Klein-Gordon equation admits negative-energy solutions, because the energy E appearing in the plane wave in Eq. (2) can take the two values ±√(p² + m²). Dirac sought an alternative relativistic equation which was linear in ∂_t like the Schrödinger equation (this was an attempt to solve the problem of negative-energy solutions to Eq. (5); in fact he didn't solve this problem, but a different one). If the equation is linear in ∂_t, it must also be linear in ∇ if it is to be invariant under Lorentz transformations. We therefore start with the general form

    i ∂_t ψ(t, x) = ( -i α · ∇ + β m ) ψ(t, x).    (9)

Dirac also required that the solutions of his equation would be solutions of the Klein-Gordon equation as well, or equivalently, that the energy relation in Eq. (4) be the correct energy-momentum relation for plane-wave solutions e^{-ip·x} of the Dirac equation. To see what constraints this imposes, we must square Eq. (9):

    -∂_t² ψ = [ -α_i α_j ∇_i ∇_j - i m (α_i β + β α_i) ∇_i + β² m² ] ψ.

However, the Klein-Gordon equation requires that the right-hand side be equal to [-∇² + m²] ψ(t, x), and therefore α and β must satisfy

    {α_i, α_j} ≡ α_i α_j + α_j α_i = 2 δ_ij,   {α_i, β} = 0,   β² = 1.    (11)

If α_i and β are just numbers, these equations cannot be solved. Dirac solved them by instead taking α_i and β to be n × n matrices, and ψ(t, x) to be a column vector. Even now, the solution is not immediate. One can show that the conditions in Eq. (11) require

    tr α_i = tr β = 0,

and further that the eigenvalues of the above matrices are ±1. This in turn means that n must be even (do you understand why?). In 2 dimensions, there are still not enough linearly independent matrices to satisfy Eq. (11). There do exist solutions in four dimensions. One such solution is

    α_i = [ 0    σ_i
            σ_i  0 ],     β = [ 1_2   0
                                0   -1_2 ],    (13)

where σ are the usual Pauli matrices and 1_2 represents the 2 × 2 identity matrix. Now we have formed an equation which may be thought of as a square root of the Klein-Gordon equation, but which is not obviously Lorentz invariant. To show that, we first define the new matrices

    γ⁰ = β,   γ^i = β α_i.    (14)

Then we form γ^μ = (γ⁰, γ), where μ is a Lorentz index. Each component is a 4 × 4 matrix.
In terms of the γ-matrices, one can write the conditions in Eq. (11) in a Lorentz covariant form,

    {γ^μ, γ^ν} = γ^μ γ^ν + γ^ν γ^μ = 2 g^{μν} 1.    (15)

This is an example of a Clifford algebra. Any matrices satisfying the condition in Eq. (15) may be used to construct the Dirac equation. The representation in Eqs. (13) and (14) is just one example, known as the Dirac representation. Note, for example, that any other matrices

    γ'^μ = U γ^μ U†,

where U is a unitary matrix, will also be suitable.
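As a quick numerical cross-check (our own addition), the Clifford algebra of Eq. (15) can be verified in the Dirac representation with a few lines of NumPy; the matrix layout follows Eqs. (13) and (14):

```python
import numpy as np

I2 = np.eye(2)
sigma = [np.array([[0, 1], [1, 0]]),          # Pauli matrices
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]])]

# Dirac representation: gamma^0 = diag(1_2, -1_2), gamma^i = beta alpha_i
Z = np.zeros((2, 2))
gamma = [np.block([[I2, Z], [Z, -I2]])]
gamma += [np.block([[Z, s], [-s, Z]]) for s in sigma]

g = np.diag([1.0, -1.0, -1.0, -1.0])          # metric (+,-,-,-)

# Check {gamma^mu, gamma^nu} = 2 g^{mu nu} 1 for all index pairs:
for mu in range(4):
    for nu in range(4):
        anticomm = gamma[mu] @ gamma[nu] + gamma[nu] @ gamma[mu]
        assert np.allclose(anticomm, 2 * g[mu, nu] * np.eye(4))
```

The same loop run on a unitarily rotated set U γ^μ U† would pass as well, illustrating that the representation is not unique.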
Multiplying through by γ⁰, we may rewrite Eq. (9) in the covariant form

    (i γ^μ ∂_μ - m) ψ(x) = (i ∂̸ - m) ψ(x) = 0,    (17)

where a̸, a vector with a slash, is a short-hand notation for γ^μ a_μ. The equation above is known as the Dirac equation. In momentum space, i.e. after a Fourier transformation, ∂_μ → -i p_μ, and the Dirac equation becomes

    (p̸ - m) ψ̃(p) = 0,    (18)

where ψ̃(p) is the Fourier transform of a solution of the Dirac equation ψ(x).
We mentioned in passing that ψ(t, x) is a column vector rather than a scalar. This means that it contains more than one degree of freedom. Dirac exploited this property to interpret his equation as the wave equation for spin-1/2 particles, fermions, which can be either spin-up or spin-down. The column vector ψ is known as a Dirac spinor.
Comparing Eq. (9) to the Schrödinger equation in Eq. (3) gives the Hamiltonian for a free spin-1/2 particle:

    H_Dirac = -i α · ∇ + β m.

The trace of the Hamiltonian gives the sum of the energy eigenvalues. The condition that the matrices α and β are traceless therefore means that the eigenvalues of H_Dirac must sum to zero. Therefore, like the Klein-Gordon equation, the Dirac equation also has negative-energy solutions.
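The spectrum of H_Dirac is easy to check numerically (our own sketch, with arbitrary parameter values): in momentum space H = α·p + βm, and its four eigenvalues come in pairs ±√(p² + m²), so they indeed sum to zero:

```python
import numpy as np

sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]
Z = np.zeros((2, 2), dtype=complex)
I2 = np.eye(2, dtype=complex)

alpha = [np.block([[Z, s], [s, Z]]) for s in sigma]   # Dirac representation
beta = np.block([[I2, Z], [Z, -I2]])

m = 0.5
p = np.array([0.3, -0.2, 0.7])                        # arbitrary momentum
H = sum(pi * ai for pi, ai in zip(p, alpha)) + m * beta

E = np.sqrt(p @ p + m**2)
eigs = np.sort(np.linalg.eigvalsh(H))

# Two positive- and two negative-energy eigenvalues, summing to zero:
assert np.allclose(eigs, [-E, -E, E, E])
assert abs(np.trace(H)) < 1e-12
```

The twofold degeneracy of each sign is the spin degree of freedom discussed below.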
Dirac himself proposed a solution for this problem which became known as the "Dirac sea". He accepted the existence of negative-energy states, but took the vacuum as the state in which all these states are filled, see fig. 1. There is a conceptual problem with this in that the vacuum has infinite negative charge and energy. However, any observation relies only on energy differences, so this picture can give an acceptable theory.
As the negative-energy states are already full, the Pauli exclusion principle forbids any positive-energy electron from falling into one of the negative-energy states. If instead energy is supplied, an electron is excited from a negative-energy state to a positive-energy state and an "electron-hole" pair is created. The absence of the negative-energy electron, the hole, is interpreted as the presence of a state with positive energy and positive charge, i.e. a positron. Dirac predicted the existence of the positron in 1931, and the particle was discovered by Anderson the following year.
However, Dirac's argument only holds for spin-1/2 particles, which obey the Pauli exclusion principle. A consistent solution for all particles is provided by Quantum Field Theory, in a picture developed by Feynman and Stückelberg, in which positive-energy particles travel only forward in time, whereas negative-energy particles travel only backwards in time. In this way, a negative-energy particle with momentum p^μ, travelling backward in time, is re-interpreted as a positive-energy anti-particle with momentum -p^μ travelling forward in time. Let us see how this picture naturally arises by considering two processes, the scattering e⁻μ⁻ → e⁻μ⁻, and Compton scattering e⁻γ → e⁻γ. In non-relativistic quantum mechanics, the scattering e⁻μ⁻ → e⁻μ⁻ corresponds to the scattering of an electron from an external Coulomb potential. This is represented on the left-hand side of fig. 2. The horizontal axis represents the time at which a given elementary process occurs. In non-relativistic quantum mechanics, scattering happens instantaneously, so that the time t_1 at which a photon is emitted by the incoming electron coincides with the time t_2 at which it is absorbed by a muon, which stays at rest as a source of a static potential. In quantum field theory the scattering cannot occur instantaneously, because we need to take into account the fact that the photon mediating the scattering travels at the speed of light. The corresponding scattering amplitude is given by the sum of the contributions of the two diagrams on the right-hand side of fig. 2. It is clear that, in the limit in which c can be taken to be infinite, the two diagrams coincide and give the non-relativistic contribution. From the point of view of the electron, the first diagram can be interpreted as the emission of a positive-energy photon at t = t_1 that travels forward in time, and is later absorbed by a muon at t = t_2.
The second diagram has an awkward interpretation from the point of view of the electron, because it corresponds to the emission of a negative-energy photon at t = t_2 that travels backwards in time. However, the graph makes perfect sense if one considers that it is the muon that emits a photon at a time t_1, which is later reabsorbed by the electron at a time t_2. A similar interpretation can be applied to the Compton scattering diagrams in Fig. 3, and clarifies the Feynman-Stückelberg interpretation of negative-energy states. In the left diagram, an electron emits a photon at time t_1 and later, at time t_2, absorbs another one. In the right-hand diagram it appears as if an electron emits a photon and then travels backwards in time to absorb another photon. Feynman and Stückelberg reasoned instead that the incoming photon splits into an electron-positron pair, and then at a later time the positron annihilates the other electron, emitting a photon.

Spin
In the previous section, we introduced a Dirac spinor as a solution to the Dirac equation in the form of a column vector. In this section, we will discuss the explicit form of the solutions to the Dirac equation, and verify that they indeed correspond to the wave functions for particles with spin-1/2.

Figure 3: Diagrams illustrating the Feynman-Stückelberg interpretation of negative-energy particles, which correspond to those travelling backwards in time, as in the right-hand diagram. They interpreted a negative-energy particle travelling backwards in time as a positive-energy anti-particle travelling forwards in time, see text.

Plane-Wave Solutions of the Dirac Equation
We begin by seeking plane-wave solutions to the Dirac equation. Given the 2 × 2 block nature of the γ-matrices, we will start with the form

    ψ(x) = e^{-ip·x} (χ, φ)^T,    (21)

where χ and φ are two-component spinors. Substituting this into Eq. (18) and using Eqs. (13) and (14), we find

    (p⁰ - m) χ = (σ · p) φ,   (p⁰ + m) φ = (σ · p) χ.    (22)

From the identity (σ · p)² = p², these equations are only consistent for particles with p⁰ = ±√(p² + m²) (consistent with having solutions of the Klein-Gordon equation).
For a massive fermion at rest (p = 0), we have

    (p⁰ - m) χ = 0,   (p⁰ + m) φ = 0.

Positive-energy solutions ψ₊^{p=0} must therefore have φ = 0, and negative-energy solutions ψ₋^{p=0} have χ = 0, as follows:

    ψ₊^{p=0} = e^{-imt} (χ, 0)^T,   ψ₋^{p=0} = e^{+imt} (0, φ)^T.    (24)

For particles which are not at rest (p ≠ 0), the solution is then dictated by Eq. (22), with the requirement that it reduces to Eq. (24) for p = 0. For positive-energy solutions, we therefore write

    ψ₊(x) = u_r(p) e^{-ip·x},   u_r(p) = N ( χ_r, (σ·p)/(E+m) χ_r )^T,

where r = 1, 2 and N is a normalisation conventionally chosen such that u_r†(p) u_s(p) = 2E δ_rs, which gives N = √(E + m). The spinors χ₁ and χ₂ cover the two (spin) degrees of freedom:

    χ₁ = (1, 0)^T,   χ₂ = (0, 1)^T.

Similarly, negative-energy solutions are conventionally written as

    ψ₋(x) = v_r(p) e^{+ip·x},   v_r(p) = N ( (σ·p)/(E+m) φ_r, φ_r )^T,

with the spinors φ₁ and φ₂ again covering the two (spin) degrees of freedom:

    φ₁ = (1, 0)^T,   φ₂ = (0, 1)^T.

The spinors u_r(p) and v_r(p) therefore represent particle and anti-particle solutions with momentum p and energy E = √(p² + m²).
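These solutions are straightforward to verify numerically. The sketch below (our own check, with arbitrary momentum values) builds u_r(p) in the Dirac representation and confirms both the normalisation u†u = 2E and the momentum-space Dirac equation (p̸ - m)u = 0:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
Z = np.zeros((2, 2), dtype=complex)
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]

# Dirac-representation gamma matrices
g0 = np.block([[I2, Z], [Z, -I2]])
gam = [np.block([[Z, s], [-s, Z]]) for s in sigma]

m = 1.0
p = np.array([0.4, 0.1, -0.3])              # arbitrary 3-momentum
E = np.sqrt(p @ p + m**2)
sp = sum(pi * s for pi, s in zip(p, sigma))  # sigma . p
N = np.sqrt(E + m)
chi = [np.array([1, 0], dtype=complex), np.array([0, 1], dtype=complex)]

pslash = E * g0 - sum(pi * gi for pi, gi in zip(p, gam))

for r in range(2):
    u = N * np.concatenate([chi[r], sp @ chi[r] / (E + m)])
    assert np.isclose(np.vdot(u, u).real, 2 * E)   # u^dagger u = 2E
    assert np.allclose(pslash @ u, m * u)          # (pslash - m) u = 0
```

An analogous loop with the blocks swapped checks the v_r(p) against (p̸ + m)v = 0.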

Spin
Each Dirac spinor has two linearly independent solutions, which we stated earlier correspond to the two possible spin states of a fermion. In this subsection we will define the corresponding spin operator. If we again consider a particle at rest, the positive-energy solutions are ψ₊ = e^{-imt} (χ_r, 0)^T. These have eigenvalues ±1/2 under the matrix

    (1/2) [ σ₃  0
            0   σ₃ ].

One can repeat the same exercise for anti-particles and generalise to all the Pauli matrices to deduce the "spin operator"

    S = (1/2) [ σ  0
                0  σ ].

You can check explicitly that S² = (3/4) 1₄, as we would expect for spin 1/2. Therefore, for particles at rest, p = 0, the top two components of ψ₊ describe fermions with S_z = +1/2 (spin up) and S_z = -1/2 (spin down) respectively.
In the case of a general p, one can consider the projection of the spin operator along the direction of motion of the particle, p/|p|. This gives the helicity operator, h(p). This operator satisfies h(p)² = 1, and hence its eigenvalues are ±1.
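As a quick numerical sanity check (not part of the original notes), one can build the spin and helicity operators explicitly with NumPy; the block-diagonal form S_i = ½ diag(σ_i, σ_i) assumes the Dirac representation, and the sample momentum is an arbitrary illustrative choice:

```python
import numpy as np

# Pauli matrices
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def block_diag(a):
    """Return the 4x4 block-diagonal matrix diag(a, a)."""
    z = np.zeros((2, 2), dtype=complex)
    return np.block([[a, z], [z, a]])

# Spin operator S_i = (1/2) diag(sigma_i, sigma_i) (Dirac representation)
S = [0.5 * block_diag(s) for s in sigma]

# S^2 = Sx^2 + Sy^2 + Sz^2 should equal (3/4) * 1_4, as stated in the text
S2 = sum(Si @ Si for Si in S)
print(np.allclose(S2, 0.75 * np.eye(4)))   # True

# Helicity operator h(p) = 2 S . p/|p|; h(p)^2 = 1, so eigenvalues are +-1
p = np.array([1.0, 2.0, 2.0])              # arbitrary sample momentum
phat = p / np.linalg.norm(p)
h = sum(2 * Si * ni for Si, ni in zip(S, phat))
print(np.allclose(h @ h, np.eye(4)))       # True
```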

Working with Dirac Spinors
So far we have discussed Dirac spinors, ψ, describing spin-1/2 particles, and how Dirac used his equation to predict anti-particles. To generate an equation for anti-particles, we first take the Hermitian conjugate of the Dirac equation, where the arrows over the derivatives just mean that they act to the left, and we have used the fact that γ^0† = γ^0 and γ^i† = −γ^i. All matrices have to be written on the right because they multiply matrices and ψ† is a row vector. The above equation does not seem Lorentz covariant. This can be rectified by multiplying the equation by γ^0 on the right-hand side and using {γ^0, γ^i} = 0. The interpretation of the resulting equation is that the field ψ̄ ≡ ψ†γ^0 represents an anti-particle.
By construction, the spinors u(p) and v(p) satisfy their respective Dirac equations in momentum space. They also satisfy a number of relations which will prove very useful in calculations of scattering amplitudes. Firstly, they are orthonormal. If one instead takes the outer product of spinor and anti-spinor, they also satisfy completeness relations. These relations can be checked explicitly (see problem sheet).
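These orthonormality and completeness relations can be checked numerically. The sketch below assumes the Dirac representation of the γ-matrices and the normalisation u_r†(p)u_s(p) = 2E δ_rs used in the text; the mass and momentum values are arbitrary:

```python
import numpy as np

s0 = np.eye(2, dtype=complex)
sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]

# gamma matrices in the Dirac representation
g0 = np.block([[s0, 0*s0], [0*s0, -s0]])
gi = [np.block([[0*s0, s], [-s, 0*s0]]) for s in sig]

m = 0.5                                    # arbitrary mass
p = np.array([0.3, -0.4, 1.2])             # arbitrary three-momentum
E = np.sqrt(p @ p + m*m)
sp = sum(s * pk for s, pk in zip(sig, p))  # sigma . p

chi = [np.array([1, 0], dtype=complex), np.array([0, 1], dtype=complex)]
N = np.sqrt(E + m)
u = [N * np.concatenate([c, (sp @ c) / (E + m)]) for c in chi]  # particles
v = [N * np.concatenate([(sp @ c) / (E + m), c]) for c in chi]  # anti-particles

# orthonormality: u_r^dag u_s = 2 E delta_rs
for r in range(2):
    for s in range(2):
        assert np.isclose(np.vdot(u[r], u[s]), 2*E if r == s else 0)

# completeness: sum_r u_r ubar_r = pslash + m, sum_r v_r vbar_r = pslash - m
pslash = E * g0 - sum(g * pk for g, pk in zip(gi, p))
Pu = sum(np.outer(ur, ur.conj() @ g0) for ur in u)
Pv = sum(np.outer(vr, vr.conj() @ g0) for vr in v)
print(np.allclose(Pu, pslash + m*np.eye(4)))  # True
print(np.allclose(Pv, pslash - m*np.eye(4)))  # True
```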

Lorentz transformations on spinors
Let us consider the Lorentz transformation of eq. (6). The field ψ has the transformation property ψ′(x′) = S(Λ) ψ(x), with S(Λ) a suitable 4 × 4 matrix. Its explicit form is derived by imposing that the Dirac equation is Lorentz invariant. Imposing that S(Λ) satisfies the condition of eq. (40), one finds that ψ′(x′) is a solution of the transformed Dirac equation, provided ψ(x) is a solution of the original one.
Eq. (40) is enough to construct the matrices S(Λ) by direct inspection. The fact that S⁻¹(Λ) ≠ S†(Λ) is not surprising: the Lorentz group is non-compact, and therefore it does not admit unitary finite-dimensional representations.
One can construct bi-linear products ψ̄Γψ, with Γ a 4 × 4 matrix. We now show that Γ can be decomposed into a set of bi-linears, each having a definite transformation property under the Lorentz group. Since Γ is a 4 × 4 matrix, we expect to find 16 such bi-linear products, constructed out of linearly independent matrices. Five such bi-linears are immediate: the one built from the identity and the four built from the γ-matrices. We can construct 6 more matrices from the antisymmetrised products of two γ-matrices; note that γ^µγ^ν is not otherwise independent of the previous matrices, because {γ^µ, γ^ν} = 2g^{µν} 1.

In addition to the four γ-matrices, we can construct their product, conventionally known as γ⁵: γ⁵ = iγ^0γ^1γ^2γ^3, which satisfies (γ⁵)² = 1 and {γ⁵, γ^µ} = 0. The factor of i is there to make the matrix Hermitian. Using γ⁵, we can construct 5 more bi-linears. We have then found a set of 16 linearly independent matrices (check that they are linearly independent!), so that any bi-linear ψ̄Γψ can be written as a sum of terms with definite transformation properties, i.e. transforming in a clear way as a scalar, pseudo-scalar, vector, pseudo-vector and tensor. (This is why the Feynman rule for a pseudo-scalar interacting with a particle-anti-particle pair contains a γ⁵, for example.)

The most common use of γ⁵ is in the projectors P_L = (1 − γ⁵)/2 and P_R = (1 + γ⁵)/2. You can check explicitly that these behave like projectors (i.e. P² = P and P_L P_R = 0). When they act upon a Dirac spinor, they project out either the component with "left-handed" chirality or that with "right-handed" chirality. These projectors therefore appear when considering weak interactions, for example, as W bosons only couple to left-handed particles. One has to take care when defining the handedness of anti-particles: a left-handed anti-particle appears with a right-handed projection operator next to it, and vice versa.
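A short numerical check of these properties of γ⁵ and of the chiral projectors (again assuming the Dirac representation for illustration):

```python
import numpy as np

s0 = np.eye(2, dtype=complex)
sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]
g0 = np.block([[s0, 0*s0], [0*s0, -s0]])
gi = [np.block([[0*s0, s], [-s, 0*s0]]) for s in sig]

g5 = 1j * g0 @ gi[0] @ gi[1] @ gi[2]      # gamma5 = i g0 g1 g2 g3
I4 = np.eye(4)

print(np.allclose(g5, g5.conj().T))       # Hermitian (hence the factor of i)
print(np.allclose(g5 @ g5, I4))           # (gamma5)^2 = 1
for gm in [g0] + gi:
    assert np.allclose(g5 @ gm + gm @ g5, np.zeros((4, 4)))  # anticommutes

# chiral projectors
PL = (I4 - g5) / 2
PR = (I4 + g5) / 2
print(np.allclose(PL @ PL, PL) and np.allclose(PR @ PR, PR))  # P^2 = P
print(np.allclose(PL @ PR, np.zeros((4, 4))))                 # PL PR = 0
```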

Quantum Electro-Dynamics
In this section, we will develop the theory of quantum electro-dynamics (QED) which describes the interaction between electrically charged fermions and a vector field (the photon A_µ).

The QED Lagrangian
In this course, we have so far considered spin-0 and spin-1/2 particles. We will postpone a detailed discussion of spin-1 particles until section 5.1. For the time being, we start from Maxwell's equations in the vacuum in relativistic notation, ∂_µ F^{µν} = J^ν, where J^ν is a conserved current, i.e. one satisfying ∂_ν J^ν = 0. Maxwell's equations can be derived from the Lagrangian L_em by applying the Euler-Lagrange equations. The Dirac equation for ψ and its equivalent for ψ̄ can be derived from the Lagrangian L_Dirac.

The starting point for the QED Lagrangian is then the sum of L_em and L_Dirac. However, in order to make the theory describe interactions, we must include a term which couples A_µ to ψ and ψ̄. If we wish Maxwell's equations to remain valid, this term has to be of the form L_int = −J^µ A_µ, with J^µ a conserved vector current. We then observe that the vector current J^µ = ψ̄γ^µψ is conserved if ψ is a solution of the Dirac equation. Therefore, a good candidate for the electromagnetic current describing an electron of charge −e is J^µ = −e ψ̄γ^µψ, where the factor of −e multiplies the vector current so that the Coulomb potential arising from the solution of the static Maxwell's equations is the expected one.

Using the above current, we obtain the QED Lagrangian. Notice that L is invariant with respect to "gauge" transformations, in which A_µ is shifted by the gradient of an arbitrary function and ψ is rotated by a compensating phase. Notice also that the addition of the interaction term L_int is equivalent to replacing the derivative ∂_µ by a covariant derivative D_µ. This prescription is known as "minimal coupling" and automatically ensures that the Lagrangian is gauge invariant. The use of gauge invariance to introduce interactions will be covered in detail in the Standard Model course next week.

The fact that L is invariant under the gauge transformations in eq. (62) means that A_µ contains unphysical degrees of freedom. This is clear in view of the fact that a massless vector field has two physical polarisations, whereas A_µ has four degrees of freedom. In order to eliminate this degeneracy, a "gauge-fixing" condition is imposed.
A possible choice of gauge condition is the so-called Coulomb gauge, in which ∇ · A = 0. Although this condition eliminates the two additional degrees of freedom, it breaks Lorentz covariance. A common choice that preserves Lorentz covariance is the Lorentz gauge, ∂_µ A^µ = 0. This corresponds to choosing the gauge parameter α such that □α = −∂_µ A^µ above. In this gauge, the Maxwell equations become □A^ν = 0.
Notice that the Lorentz gauge condition reduces the number of degrees of freedom in A from four to three. Even now, though, A_µ is not unique: a further transformation A_µ → A_µ + ∂_µλ with □λ = 0 will also leave the Lagrangian unchanged. At the classical level we can eliminate the extra polarisation "by hand", but at the quantum level this cannot be done without giving up covariant canonical commutation rules. The way out, which can only be summarised here, is to add a gauge-fixing Lagrangian L_gf, so that the full QED Lagrangian becomes the sum of the terms above plus L_gf. Using this Lagrangian as a starting point, together with an extra condition on physical states, only the two physical polarisations propagate on-shell. Notice that setting ξ = 0 corresponds to enforcing the Lorentz gauge condition ∂_µ A^µ = 0; otherwise, the equations of motion give □(∂_µ A^µ) = 0, i.e. ∂_µ A^µ is a free field.

Feynman Rules
Feynman developed a method of organising the calculation of scattering amplitudes in terms of diagrams. Starting from a set of vertices (or interactions), each corresponding to a term in the Lagrangian, and a set of links (or propagators), you build every possible diagram connecting your initial and final state. Each piece comes with a "rule", and the combination of these gives the scattering amplitude (actually iM).

Figure 4: The Feynman rules for QED. Wavy lines represent a photon and straight lines represent any charged fermion. The arrow on the straight line tells you whether it is a particle or an anti-particle, depending on whether it points with or against the momentum flow. The polarisation vectors ε_µ(p) will be discussed in section 5.2.
In the quantum field theory course at this school, you learn how to derive the "Feynman rules" for scalar φ 4 theory. The principles are the same here so in this course we will state the Feynman rules for QED and learn how to work with them. The Feynman rules are shown in figure 4. The left-hand column represents internal parts of the diagram while the right-hand column gives the rules for external fermions and photons.
A few comments are necessary here: 1. Individual pieces of a Feynman diagram are a mixture of matrices, vectors, co-vectors and scalars. They do not commute. The final amplitude is a number, and therefore you must follow each fermion line from a barred spinor (either an outgoing particle or an incoming anti-particle) through the series of matrices to finish on an unbarred spinor (either an incoming particle or an outgoing anti-particle). This corresponds to working backwards along the fermion line. We will see this in the examples which follow. Similarly, all Lorentz indices corresponding to photons have to be contracted.
2. The photon propagator term has a free parameter ξ. This is due to the gauge freedom we discussed in the previous section. It does not represent a physical degree of freedom and therefore any calculation of a physical observable will be independent of ξ. We will most commonly work in Feynman gauge ξ = 1.
3. The propagators come with factors of iε in the denominator; otherwise they would have poles on the real axis and any integral over p would not be well-defined. The factor of iε prescribes which direction to travel around the poles. This choice corresponds to the "Feynman prescription", which ensures causality.

Figure 5: Building the leading-order Feynman diagram for Coulomb scattering. We start from the initial and final states on the left-hand side. The diagram on the right is the only way to connect these with up to two vertices.
4. The interaction vertex contains only one flavour of fermion. We know that the emission of a photon does not change an electron to a quark for example.
5. There are additional factors of (−1) in the following scenarios: (a) an anti-fermion line runs continuously from an initial to a final state; (b) there is a closed fermion loop; (c) between diagrams with identical fermions in the final state.
These arise from the anti-commutation properties of fermionic operators which is beyond the scope of this course. This sign can be important to get the relative phase between diagrams correct, as happens for instance in Bhabha scattering.

Examples: Coulomb Scattering
As a first example, we consider Coulomb scattering, e⁻(p) µ⁻(k) → e⁻(p′) µ⁻(k′). We start by drawing the external particles, see the left-hand side of fig. 5. We now want to find all possible ways to connect these. There is no direct interaction between an electron and a muon, but both interact with a photon, so a possible connected diagram is the one shown on the right-hand side. In fact, this is the only possible diagram with no more than two vertices. The number of vertices is directly related to the powers of the coupling e, and therefore the diagram shown on the right is the leading-order (or tree-level) process.
If we consider e⁻(p) e⁻(k) → e⁻(p′) e⁻(k′) or e⁺(p) e⁻(k) → e⁺(p′) e⁻(k′) instead, there are two diagrams with two vertices, i.e. at O(e²) (try this!). Both have to be added before squaring the amplitude to obtain the tree-level contribution to the cross section.
If we allow ourselves more than two vertices, there are many more diagrams we can draw. Since the number of external particles does not increase, these must contain closed loops, and therefore they represent higher-loop processes. In this course, we will limit ourselves to tree-level processes. Loop diagrams will be covered in the phenomenology course.

Now we will construct the tree-level amplitude for Coulomb scattering from the rules in fig. 4. Keeping in mind the earlier warning about the ordering of matrices and spinors, we take each fermion line in turn. The electron line gives a factor which, in spin-space, is co-vector-matrix-vector, and hence a number; in Lorentz space it has one free index µ and is therefore a vector. Similarly for the muon line. Lastly, including the propagator with momentum q = p − p′ = k′ − k in Feynman gauge, we obtain the full amplitude. We will drop the iε from now on, as we will not need it in this example.
Just as in quantum mechanics, in order to compute the probability of this process happening, we must calculate |M|². We now add specific indices to label the spins: r, r′, s, s′. In order to describe an unpolarised physical scattering process, we average over initial-state spins and sum over final-state spins. This convention is represented by a bar, |M̄|²; in the corresponding expression we have explicitly evaluated the metric contractions for brevity.
To evaluate the products in eq. (69) we will use the results from section 2.1, taking the pieces corresponding to the electron line first. Since [ū_{r′}(p′) γ^ρ u_r(p)]* is a number, its complex conjugate is its Hermitian conjugate. Therefore we can use γ^ν† = γ^0 γ^ν γ^0, which you showed on the problem sheet, together with eq. (37). We will use m for the electron mass and M for the muon mass. It is now useful to add a component index in spinor-space, as you would in ordinary linear algebra. Schematically, the electron-line contribution is a chain of γ-matrices, Γ, sandwiched between spinor sums. Now that we are explicitly labelling the components, we can swap the order of the terms to obtain a trace. We could have anticipated this, as we need to get a single number from a series of matrices. Working from the anti-commutation relations, one can readily show the following identities (see problem sheet):

Tr(odd number of γ-matrices) = 0 ,
Tr(γ^µ γ^ν) = 4 g^{µν} ,
Tr(γ^µ γ^ν γ^ρ γ^σ) = 4 (g^{µν} g^{ρσ} − g^{µρ} g^{νσ} + g^{µσ} g^{νρ}) .
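The trace identities above can be verified by brute force over all index combinations; a sketch (Dirac representation assumed, metric g = diag(1, −1, −1, −1)):

```python
import numpy as np
from itertools import product

s0 = np.eye(2, dtype=complex)
sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]
gam = [np.block([[s0, 0*s0], [0*s0, -s0]])] + \
      [np.block([[0*s0, s], [-s, 0*s0]]) for s in sig]   # gamma^0..gamma^3
gmet = np.diag([1.0, -1.0, -1.0, -1.0])                  # metric tensor

# Tr(odd number of gamma matrices) = 0
for mu in range(4):
    assert np.isclose(np.trace(gam[mu]), 0)
for mu, nu, rho in product(range(4), repeat=3):
    assert np.isclose(np.trace(gam[mu] @ gam[nu] @ gam[rho]), 0)

# Tr(gamma^mu gamma^nu) = 4 g^{mu nu}
for mu, nu in product(range(4), repeat=2):
    assert np.isclose(np.trace(gam[mu] @ gam[nu]), 4 * gmet[mu, nu])

# Tr(g^mu g^nu g^rho g^sig) = 4(g^{mn}g^{rs} - g^{mr}g^{ns} + g^{ms}g^{nr})
for mu, nu, rho, si in product(range(4), repeat=4):
    lhs = np.trace(gam[mu] @ gam[nu] @ gam[rho] @ gam[si])
    rhs = 4 * (gmet[mu, nu]*gmet[rho, si] - gmet[mu, rho]*gmet[nu, si]
               + gmet[mu, si]*gmet[nu, rho])
    assert np.isclose(lhs, rhs)
print("all trace identities verified")
```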
The same series of steps gives the corresponding result for the muon line. Substituting these results into eq. (69), we then rewrite the invariants which appear in terms of the centre-of-mass energy squared, s, and the exchanged momentum squared, q² = t. The expression can be further simplified by introducing the further invariant u = (p − k′)² = (p′ − k)². The resulting expression gives the probability that the corresponding process occurs at a given point in phase space. In the next section, we will derive how to calculate a total cross section (or a total decay width) from amplitudes squared.

Calculation of Cross Sections
Ultimately it is not the amplitude we really want to calculate, but its integral over phase space to give the total cross section if it is a scattering process or the total decay width if it is a decay.

Phase Space Integrals
We must integrate over all the allowed phase space, which means all possible momentum configurations of the final-state particles. This result, divided by the flux of incoming particles, will give the total cross section.
In principle, we must integrate over a 4-dimensional phase space for each particle f in the final state, but we must impose that each satisfies its on-shell condition p_f² = m_f². This leads to the measure in eq. (81). Although the final expression explicitly separates the dependence on E and p, it is still Lorentz invariant, as the original expression is manifestly Lorentz invariant. Eq. (81) is frequently referred to as the Lorentz-Invariant Phase Space measure (LIPS). The factors of 2π correspond to the conventions used for momentum-space integrations in QFT.
We now need to normalise this expression to the flux of incoming particles. This is done by multiplying by the flux factor, F. For the scattering of two incoming particles, this is usually written as F = 1/(4E₁E₂|v₁ − v₂|), where E_i and v_i are the energy and velocity of each incoming particle.¹ A neater, equivalent form which explicitly demonstrates the Lorentz invariance of this quantity is F = 1/(4√((p₁ · p₂)² − m₁² m₂²)). In the massless limit s ≫ m₁², m₂², this simplifies to F ≃ 1/(2s). Finally, we must impose total conservation of momentum to obtain the cross section. If you wish to calculate a total decay width instead, the expression is very similar; the only difference is that the flux factor becomes F = 1/(2M), where M is the mass of the decaying particle. The total decay width, Γ, is then given by the corresponding phase-space integral.
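The equivalence of the two forms of the flux factor is easy to check numerically. The helper below computes the quantity 4E₁E₂|v₁ − v₂| (the inverse of F as defined above) for collinear beams along the z-axis; the function names and kinematic values are illustrative assumptions:

```python
import numpy as np

def inv_flux_velocity(pz1, pz2, m1, m2):
    """4 E1 E2 |v1 - v2| for beams along the z-axis (signed momenta)."""
    E1 = np.sqrt(pz1**2 + m1**2)
    E2 = np.sqrt(pz2**2 + m2**2)
    return 4 * E1 * E2 * abs(pz1/E1 - pz2/E2)

def inv_flux_covariant(pz1, pz2, m1, m2):
    """4 sqrt((p1.p2)^2 - m1^2 m2^2) for the same collinear kinematics."""
    E1 = np.sqrt(pz1**2 + m1**2)
    E2 = np.sqrt(pz2**2 + m2**2)
    p1dotp2 = E1*E2 - pz1*pz2          # Minkowski product, momenta along z
    return 4 * np.sqrt(p1dotp2**2 - (m1*m2)**2)

# head-on collision: p1 = +3, p2 = -2 (arbitrary units)
A = inv_flux_velocity(3.0, -2.0, 0.1, 0.2)
B = inv_flux_covariant(3.0, -2.0, 0.1, 0.2)
print(np.isclose(A, B))                 # True: the two forms agree

# massless limit: 1/F -> 2s, i.e. F -> 1/(2s)
s = (3.0 + 2.0)**2 - (3.0 - 2.0)**2     # s = (E1+E2)^2 - (pz1+pz2)^2
print(np.isclose(inv_flux_covariant(3.0, -2.0, 0.0, 0.0), 2*s))  # True
```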

Return to Coulomb Scattering
We may now calculate the relativistic cross section for Coulomb scattering, using our result from section 3.2. Eq. (84) applied to this example gives the phase-space integral. As this expression is Lorentz invariant, we are free to choose the frame in which to evaluate it. This is an extremely powerful tool, as a careful choice can lead to considerable simplifications. We choose the centre-of-mass frame here, so that p = −k. We can easily do the k′ integration using three of the δ-functions. We then proceed by transforming to spherical polar coordinates, d³p′ = |p′|² d|p′| dΩ, where we have written the solid angle, sin θ dθ dφ, as dΩ. We now make the change of variable |p′| → E = E_{p′} + E_{k′}, which has a Jacobian factor, where it is understood that k′ = −p′, with |p′| determined from E = √s. The only undefined variables are the angles, which remain to be integrated over. We could now substitute the expression for |M̄|² explicitly in terms of these angles, but it is actually more informative to study the differential cross section.

We now consider the high-energy limit, where s ≫ m_e², m_µ². In this limit, the three Mandelstam invariants take simple forms in terms of s and the scattering angle, and the resulting amplitude squared has no dependence on the azimuthal angle φ. Using the conventional notation α = e²/(4π), we obtain the differential cross section dσ/dΩ.
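A small numerical sketch of this high-energy kinematics: the invariants t and u at centre-of-mass angle θ, the constraint s + t + u = 0, and the forward (t → 0) peaking of the amplitude. The explicit form |M̄|² = 2e⁴(s² + u²)/t², not reproduced in the text above, is the standard massless e⁻µ⁻ → e⁻µ⁻ result and is assumed here:

```python
import numpy as np

def mandelstam_tu(s, costheta):
    """Massless 2 -> 2 invariants at centre-of-mass scattering angle theta."""
    t = -s * (1 - costheta) / 2
    u = -s * (1 + costheta) / 2
    return t, u

def msq(s, costheta, alpha=1/137.036):
    """Spin-averaged |M|^2 = 2 e^4 (s^2 + u^2)/t^2 (standard massless result)."""
    e2 = 4 * np.pi * alpha
    t, u = mandelstam_tu(s, costheta)
    return 2 * e2**2 * (s**2 + u**2) / t**2

s = 100.0                                   # arbitrary units
for c in (-0.5, 0.0, 0.9):
    t, u = mandelstam_tu(s, c)
    assert np.isclose(s + t + u, 0.0)       # massless constraint

# the t-channel pole makes the cross section peak in the forward direction
print(msq(s, 0.99) > msq(s, 0.0))           # True
```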

The Coulomb Potential
The same calculation may be used to calculate the cross section for the scattering of a relativistic particle from an external Coulomb potential, by working in the rest frame of the muon and taking m_µ → ∞. This is illustrated in fig. 6.

Figure 6: Scattering by an external Coulomb potential.
Repeating the same calculation in this limit yields where is the Rutherford cross section which was calculated in preschool problem 9. The extra v 2 -term in eq. (96) then gives the relativistic correction to this. This result is entirely due to the electron being a spin-1/2 particle. If it were spin-0 instead, |M| 2 would look much simpler as there are no fermion traces to be performed and in that case we would find that there is no relativistic correction.
Although this now involves anti-particles, there is still a single diagram at leading order, and the trace algebra is very similar. Indeed, we can re-interpret the incoming e⁺ as an outgoing e⁻ with momentum −p′, and the outgoing µ⁺ as an incoming µ⁻ with momentum −k. Then we do find explicitly that the amplitudes agree. This is an example of "crossing symmetry". Note in general that there is an additional minus sign for each fermion which swaps from the initial to the final state or vice versa; this follows from the anti-commutation properties of the fermionic operators. In this case there are two minus signs, whose combined effect is unity.
If in e⁺e⁻ annihilation we take the approximation m_e = 0 and again choose to work in the centre-of-mass frame, we find the amplitude squared. If we also take the high-energy limit, s ≫ m_µ², this simplifies further. We can then convert the result to a total cross section by performing the integral over the solid angle, giving σ(e⁺e⁻ → µ⁺µ⁻) = 4πα²/(3s). Now, when an electron and positron annihilate, other fermions may be produced. If these are quarks, they are then observed in the detector as hadrons. The same calculation gives the hadronic cross section, plus higher-order corrections, where there are N_c colours in each of the n_f massless flavours of quarks with charge Q_i. Therefore the ratio R = σ(e⁺e⁻ → hadrons)/σ(e⁺e⁻ → µ⁺µ⁻) = N_c Σᵢ Qᵢ² has been used to measure the number of colours to be N_c = 3.
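The ratio R = N_c Σᵢ Qᵢ² is simple enough to evaluate directly; the flavour thresholds noted below are standard context, not taken from the text:

```python
# quark charges in units of e
charges = {"u": 2/3, "d": -1/3, "s": -1/3, "c": 2/3, "b": -1/3}
Nc = 3  # number of colours

def R(active_flavours):
    """R = sigma(e+e- -> hadrons)/sigma(e+e- -> mu+mu-) = Nc * sum_i Q_i^2."""
    return Nc * sum(charges[q]**2 for q in active_flavours)

print(R(["u", "d", "s"]))            # ~2     (below the charm threshold)
print(R(["u", "d", "s", "c"]))       # ~10/3  (below the bottom threshold)
print(R(["u", "d", "s", "c", "b"]))  # ~11/3
```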

Photon Scattering
In this section we will calculate the scattering amplitude for eγ → eγ. In order to do that, we first need to consider how to treat incoming and outgoing photons.

Photon Polarisation
We seek a plane-wave solution corresponding to a free photon (analogous to our treatment of Dirac particles in section 2.1). It has the form A^µ(x) = ε^µ(k) e^{−ik·x}, where ε^µ(k) is the polarisation vector of the photon. In the Lorentz gauge of eq. (61), the photon equation of motion of eq. (51), □A^µ = 0, is automatically satisfied by a solution of the form in eq. (106), provided k² = 0. The Lorentz gauge condition gives an additional constraint on the polarisation vector, k_µ ε^µ(k) = 0. However, there is still freedom here because, given a polarisation vector ε which solves this equation, any other vector of the form ε′ = ε + λk will also be a solution; this corresponds to the propagation of an extra unphysical longitudinal photon, with a polarisation proportional to k^µ. This freedom is usually used to set ε⁰ = 0, such that k · ε = 0 and the two physical polarisations ε_α, with α = 1, 2, are in the transverse directions, chosen to be orthonormal. A useful relation, which we will use in the following, is the completeness sum over these two transverse polarisations, eq. (109). The Feynman rule for an incoming photon is simply ε_µ(k), while for an outgoing photon it is ε*_µ(k), as shown in fig. 4.
As for fermion spins, for unpolarised processes one computes the total cross section by averaging over incoming polarisations and summing over outgoing polarisations. Let us consider a general process with one external incoming photon. The matrix element has the form M = A^µ ε_µ(k). The left-hand side is a physical quantity, hence it should give the same result for any choice of gauge. Had we chosen ε + λk instead, we would require A^µ k_µ to vanish. This is a "Ward identity" for QED, and is therefore a test of gauge invariance.
Squaring the scattering amplitude and summing over the physical polarisations, using eq. (109) together with the Ward identity, one finds that the polarisation sum can be replaced by the metric. This could be done for each photon in turn if there were more in the process, and we find the general result that, in practical calculations, one may make the replacement Σ_pol ε_µ ε*_ν → −g_µν. We have used the "→" notation of Peskin and Schroeder here, as the result is not an exact equality in the absence of the rest of the matrix element, but it is nonetheless true in any practical calculation.
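The legitimacy of this replacement, given the Ward identity, can be demonstrated numerically. Below, a random "matrix element" A^µ is projected so that k_µA^µ = 0; the auxiliary vector k̃ and the random seed are arbitrary choices for this sketch:

```python
import numpy as np

gmet = np.diag([1.0, -1.0, -1.0, -1.0])

def mdot(a, b):
    """Minkowski product a_mu b^mu with metric (+,-,-,-)."""
    return a @ gmet @ b

omega = 2.0
k = np.array([omega, 0, 0, omega])              # light-like photon momentum
eps = [np.array([0, 1, 0, 0], dtype=complex),   # the two transverse
       np.array([0, 0, 1, 0], dtype=complex)]   # physical polarisations

# random complex A^mu, projected so that the Ward identity k_mu A^mu = 0 holds
rng = np.random.default_rng(1)
B = rng.normal(size=4) + 1j * rng.normal(size=4)
ktil = np.array([omega, 0, 0, -omega])          # auxiliary light-like vector
A = B - (mdot(k, B) / mdot(k, ktil)) * ktil
print(np.isclose(mdot(k, A), 0))                # True: Ward identity holds

# sum over the two physical polarisations ...
phys = sum(abs(mdot(e.conj(), A))**2 for e in eps)
# ... versus the covariant replacement -g_{mu nu} A^mu A^nu*
cov = -mdot(A.conj(), A)
print(np.isclose(phys, cov.real))               # True: replacement is valid
```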

Compton Scattering
There are two diagrams at leading order for this process, shown in fig. 7. Following the Feynman rules in fig. 4 and the rules for external photons in the previous subsection, we find the sum of the two diagrams. You can check explicitly that the resulting amplitude does indeed satisfy the appropriate QED Ward identities, i.e. replacing ε_ν(k) with k_ν gives M = 0, and similarly when replacing ε*_µ(k′) with k′_µ (see tutorial sheet).
We now square the amplitude to obtain |M̄|². The calculation of the spin traces in this case requires further trace identities from the problem sheet.

Figure 8: The Compton scattering process in the rest frame of the incoming electron.

We will again choose a suitable reference frame to simplify the calculation. In this case, it is convenient to work in the rest frame of the incoming electron, as shown in fig. 8. We can use energy conservation to compute ω′, with the result ω′ = ω / [1 + (ω/m)(1 − cos θ)]. In this frame, the invariants take a simple form. The explicit dependence on the electron mass cancels with the factors of m in ω′; it is, however, present in the flux factor F = 1/(4mω). We now compute the integral over the phase space to get σ. We can again do the integral over d³k′ using the spatial parts of the δ-function, and then transfer to spherical polar coordinates. A nice check of this result is to take the low-energy limit, where ω ≪ m.
Then ω′ ≃ ω and we find the differential cross section dσ/dΩ of Thomson scattering: this is the Thomson cross section for the scattering of classical electromagnetic radiation by a free electron. In the other, high-energy limit, where ω ≫ m, the cross section is strongly peaked at small angles. This leads to a logarithmic enhancement when you perform the angular integration. These "collinear" logarithms arise whenever massless particles are emitted; this will be discussed in more detail in the phenomenology course.
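The limits quoted above can be explored numerically. The Compton relation ω′ = ω/[1 + (ω/m)(1 − cos θ)] and the Thomson cross section σ_T = 8πα²/(3m²) are the standard results assumed in this sketch; the unit conversion (ħc)² ≈ 0.3894 GeV²·mb is an assumed constant:

```python
import numpy as np

alpha = 1 / 137.035999
m = 0.511e-3                     # electron mass in GeV

def omega_prime(omega, costheta):
    """Compton relation in the rest frame of the incoming electron."""
    return omega / (1 + (omega / m) * (1 - costheta))

# low-energy limit omega << m: omega' ~ omega (Thomson regime)
w = 1e-9
print(np.isclose(omega_prime(w, -0.3), w, rtol=1e-4))   # True

# Thomson cross section in natural units (GeV^-2), then converted to barn
sigma_T = 8 * np.pi * alpha**2 / (3 * m**2)
sigma_T_barn = sigma_T * 0.3894e-3       # 1 GeV^-2 = 0.3894 mb = 0.3894e-3 b
print(sigma_T_barn)                      # ~0.665 barn
```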

Strong Interactions
In this section we will develop the theory of the strong interactions, quantum chromodynamics (QCD). The major difference between QED and QCD is that the gluons are self-interacting because they also carry colour charge (unlike the charge-neutral photon).

QCD Lagrangian
The particles which carry colour charge are the quarks and the gluons. The QCD Lagrangian for a quark of mass m involves gauge-group indices a, i and j, which are discussed further below; the sum over these is implicit in eq. (124). Each t^a is a 3 × 3 matrix in colour space. The t^a matrices do not commute with each other, but obey the algebra [t^a, t^b] = i f^{abc} t^c, which is reminiscent of the algebra of angular-momentum operators. Here, in place of the alternating tensor ε_{ijk}, we have the "structure constants" f^{abc} (which also appear in F^a_{µν}). These are also completely anti-symmetric under the swapping of any pair of indices.
Just as the J_i generate the rotation group, SU(2), the t^a generate the colour symmetry group, SU(3). For SU(2) we can take the Pauli matrices as a representation; for SU(3) we choose the representation t^a = (1/2)λ^a, where the λ^a are the Gell-Mann matrices. In practice, we are not interested in calculating one particular colour component, and instead work with sums over all colours, which ultimately leads to traces over the t^a-matrices. We will see explicit examples of this in the sections that follow, and here just collect some useful identities. The QCD Lagrangian L_QCD is invariant under infinitesimal "gauge" transformations, where D^{ab}_µ is the covariant derivative in the "adjoint" representation, the one under which the gluon field transforms under SU(3), as opposed to the "fundamental" representation, which governs the transformation of the quark fields. The matrices T^a, as required for the generators of any representation of SU(3), satisfy the same commutation rules as the t^a.
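Typical colour-algebra identities, such as Tr(t^a t^b) = ½δ^{ab} and the fundamental Casimir Σₐ t^a t^a = C_F·1 with C_F = (N_c² − 1)/(2N_c) = 4/3, can be checked directly from the Gell-Mann matrices; the explicit entries below are the standard ones:

```python
import numpy as np

# the eight Gell-Mann matrices
lam = [np.zeros((3, 3), dtype=complex) for _ in range(8)]
lam[0][0, 1] = lam[0][1, 0] = 1
lam[1][0, 1] = -1j; lam[1][1, 0] = 1j
lam[2][0, 0] = 1;   lam[2][1, 1] = -1
lam[3][0, 2] = lam[3][2, 0] = 1
lam[4][0, 2] = -1j; lam[4][2, 0] = 1j
lam[5][1, 2] = lam[5][2, 1] = 1
lam[6][1, 2] = -1j; lam[6][2, 1] = 1j
lam[7][0, 0] = lam[7][1, 1] = 1/np.sqrt(3); lam[7][2, 2] = -2/np.sqrt(3)

t = [l / 2 for l in lam]               # generators t^a = lambda^a / 2

# Tr(t^a t^b) = (1/2) delta^{ab}
for a in range(8):
    for b in range(8):
        assert np.isclose(np.trace(t[a] @ t[b]), 0.5 if a == b else 0)

# Casimir: sum_a t^a t^a = C_F * 1 with C_F = 4/3
C = sum(ta @ ta for ta in t)
print(np.allclose(C, (4/3) * np.eye(3)))   # True
```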

These are nothing other than the Jacobi identity satisfied by the structure constants f^{abc}. Notice that the gauge transformation for A^a_µ involves the strong coupling g_s, and only at lowest order in g_s does it reduce to the analogous transformation of QED.
As in QED, in order to quantise the QCD Lagrangian, we need to introduce a "gauge-fixing" term. We now describe the Feynman rules for QCD. The quark and gluon propagators are identical to those of QED, except that they are also accompanied by the appropriate delta-function in colour space (see fig. 9). The coupling between two quarks and a gluon is now given in terms of colour matrices t^a_{ij}, as shown in fig. 10. Notice that the Dirac matrix γ^µ still appears, as it must for spin-1/2 particles. The colour matrices and Dirac matrices do not interact with each other (they act on different vector spaces). The 'a' is the "adjoint" index and is associated with the gluon. The k and j are "fundamental" indices associated with the outgoing and incoming fermion lines respectively.
Figure 11: Three- and four-gluon vertices which arise from eq. (124). All momenta are taken to be incoming.
Returning to the Lagrangian, in QCD F^a_{µν} has an extra term compared to QED, as required by gauge invariance. (Technically this term is present for QED too, but QED is an "Abelian" gauge theory, which means that the structure constants vanish.) Multiplying out F^{aµν} F^a_{µν} gives extra terms with 3 and 4 gauge fields. These correspond to new three- and four-gluon vertices, as shown in fig. 11.

Gauge Invariance
The presence of the non-commuting colour matrices illustrates that SU(3) is a non-Abelian gauge group. We can see the effect of this by studying the QCD equivalent of photon pair production, q(p) q̄(p′) → g(k) g(k′), shown in fig. 12. In QED, the matrix element squared for this process can be obtained from that of Compton scattering via crossing.
One immediate effect is obvious: there is now a third diagram involving the three-gluon vertex. We first sum the contributions of the first two diagrams, where we implicitly assume that gluon k has colour a and polarisation index µ, and gluon k′ has colour b and polarisation index ν. At this order, see eq. (133), gauge invariance corresponds to testing whether the replacement ε^µ → ε^µ + λk^µ leaves the amplitude invariant. This is equivalent to testing the condition for the Ward identity, A^{(a)+(b)}_{µν} k^µ = 0. The non-zero commutator means that these diagrams alone are not gauge invariant. Adding diagram (c) gives a contribution which exactly cancels this (try this!), but yields another term proportional to k′_ν. This vanishes when we remember that the whole expression is contracted with ε*_ν(k′), and so gauge invariance is only obeyed once we project onto physical polarisations. This was not necessary in QED.
Recall that in the QED case in section 3.2 we used A^{µν} k_µ = 0 to show that, in practical calculations, we can always make the replacement of eq. (137). Although the right-hand side sums over all polarisations, and not only the physical transverse ones, in actual calculations the unphysical longitudinal polarisations automatically cancelled. This is no longer the case in QCD, where one has to sum strictly over physical polarisations. This can make calculations more cumbersome, so it may still be useful to sum over all polarisations and to cancel the unphysical degrees of freedom in some other way. How this cancellation is performed depends on the gauge. In covariant gauges, like the Feynman gauge, it is done by introducing extra fields, called ghost fields. The alternative is to use the so-called physical gauges, which ensure that only physical degrees of freedom propagate on shell.

Ghost Fields
To understand how the cancellation of unphysical polarisations actually arises in a covariant gauge, we return to the case of photon pair production in QED. When we make the replacement in eq. (137), we are exploiting the fact that QED is unitary, i.e. probability is conserved through time evolution. A non-trivial implication of unitarity is that, at lowest order in perturbation theory, twice the imaginary part of the forward amplitude for the process e⁺e⁻ → e⁺e⁻ has to be equal to the amplitude squared for the process e⁺e⁻ → γγ, when we integrate over the photon phase space and sum over physical photon polarisations. This is illustrated in Fig. 13, which shows the relevant intermediate states. Let us call A^{µν} the contribution to the diagram on the left of the cut in Fig. 14. From the Ward identity k_µ A^{µν} = 0, we obtain that the contribution of photon k to the imaginary part of the amplitude reduces to a sum over the physical polarisations, labelled by α = 1, 2. This verifies explicitly the unitarity relation represented in Fig. 13. The latter means that, in QED, making the replacement in eq. (137) corresponds to exploiting the unitarity of the theory to compute an amplitude squared through the imaginary part of the corresponding forward amplitude.
In the case of QCD, as we have seen in the previous section, the fact that k_µ A^{µν} ≠ 0 implies that the amplitude squared for the process q q̄ → gg is not given by the imaginary part of the forward amplitude q q̄ → q q̄, when only gluons are considered as intermediate states.
In fact, the cut forward amplitude contains the contribution of non-physical longitudinal polarisations, which do not contribute to the amplitude squared for q q̄ → gg. This would violate unitarity, so there have to be additional fields that cancel this contribution: the ghost fields. If we now consider the imaginary part of the forward q q̄ amplitude, at lowest order in perturbation theory we need to include not only gluons as intermediate states, but ghosts as well, as pictorially illustrated in Fig. 16. The ghost-antighost loop contributes to the imaginary part of the forward amplitude with a factor (−1), just like a normal fermion loop, so as to cancel the contribution of the unphysical longitudinal gluon polarisations when summing over all diagrams. The resulting imaginary part equals the amplitude squared for the process q q̄ → gg, integrated over the gluon phase space and summed over physical gluon polarisations, as required by the unitarity of QCD.

Physical Gauges
Alternatively, we can impose a so-called "physical gauge" condition on the gluon fields to eliminate unphysical polarisations from the start. This eliminates the need for ghosts, which no longer interact with gluons, but complicates the gluon propagator. In place of the Lorentz gauge condition ∂ µ A a µ = 0, we impose n µ A a µ = 0 for some arbitrary reference vector n µ . This is done by adding a gauge-fixing Lagrangian proportional to (n µ A a µ ) 2 /ξ and taking the limit ξ → 0, thus enforcing the gauge condition in eq. (140).
The new expression for the propagator (for ξ = 0) is shown in fig. 17. When we use a physical gauge, whenever we sum over polarisations we can make a corresponding replacement. The different choices of reference vector n µ correspond to different choices of the gauge. One can explicitly check that results for physical quantities, such as cross sections, are independent of this choice.
A relevant example of a physical gauge is the light-cone gauge, in which n 2 = 0. In such a gauge, if we have an on-shell gluon q = (ω, q), we can choose n = (1, −q/ω). In this case the polarisation sum simplifies further.
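The polarisation-sum replacement referred to above is, in standard axial-gauge conventions (quoted here as a reconstruction, since the display equations are not shown), the following:

```latex
% Physical polarisation sum in an axial gauge n.A = 0:
\sum_{\alpha=1,2}\epsilon_\mu^{(\alpha)}(q)\,\epsilon_\nu^{(\alpha)*}(q)
\;=\;-g_{\mu\nu}
+\frac{q_\mu n_\nu + n_\mu q_\nu}{q\cdot n}
-\frac{n^2\,q_\mu q_\nu}{(q\cdot n)^2}\,,
% which in the light-cone gauge (n^2 = 0) reduces to
\qquad
n^2=0:\quad
\sum_{\alpha}\epsilon_\mu^{(\alpha)}\epsilon_\nu^{(\alpha)*}
=-g_{\mu\nu}+\frac{q_\mu n_\nu + n_\mu q_\nu}{q\cdot n}\,.
```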

Dimensional regularisation and renormalisation scale
As mentioned in section 3.2, starting from the Feynman rules one can construct diagrams with loops, such as the diagrams shown in fig. 18. The presence of loops means that momentum conservation at each interaction vertex is no longer sufficient to determine the momentum in each leg. For example, k can take any value in the diagrams shown in fig. 18. We must therefore integrate over all possible values of unconstrained loop momenta. Consider, for example, the diagram in fig. 18, with p the photon momentum and d the number of space-time dimensions. As the integral runs over all values of k, it includes very large values of k. Counting the powers of k, there are six of them in the numerator and four in the denominator, which implies that this integral diverges. In general, for any integral of this form we define the superficial degree of divergence, D, to be the result of naïve power counting. If D ≥ 0, then the integral is said to be superficially divergent. Such divergences are called ultra-violet (UV) because they arise whenever loop momenta get large. The boundary case D = 0 is a logarithmic divergence (think of ∫ dk/k). The term "superficial" is used because other factors can affect the actual degree of divergence. In the example above, gauge invariance actually implies that the final result of the integral in eq. (144) must be proportional to (p 2 g µν − p µ p ν ). Therefore the divergence is only logarithmic, and not quadratic as naïve power counting suggests.
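As an illustration of the power counting just described (a sketch assuming, as the text suggests, that fig. 18 shows the one-loop photon self-energy):

```latex
% One-loop photon self-energy; schematic, ignoring overall factors:
\Pi^{\mu\nu}(p)\;\sim\;\int\!\frac{d^d k}{(2\pi)^d}\,
\frac{\mathrm{Tr}\big[\gamma^\mu\,(\gamma\cdot k)\,\gamma^\nu\,(\gamma\cdot(k+p))\big]}
     {k^2\,(k+p)^2}\,,
\qquad
D=\underbrace{4}_{d^4k}+\underbrace{2}_{\text{numerator}}
  -\underbrace{4}_{\text{denominator}}=2\,,
```

so the integral is superficially quadratically divergent; gauge invariance forces the result to be proportional to (p² g µν − p µ p ν ), reducing the actual divergence to logarithmic.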
The main point, though, is not the degree of divergence, but the fact that one finds divergences at all. These higher-loop corrections were supposed to be corrections in the perturbative series, hence smaller than those appearing at the previous perturbative order. For many years, this caused a major problem for the development of perturbation theory. However, there exists a well-defined procedure to "remove" these divergences, which is called renormalisation. The basic idea behind renormalisation is that the parameters appearing in the Lagrangian do not need to be physical quantities; their values are instead determined by comparing perturbative predictions to actual experimental data. For instance, the value of e can be extracted by measuring the Compton differential cross section at small angles. Therefore, infinities that eventually appear in perturbative calculations can in principle be reabsorbed in a redefinition of the parameters entering the Lagrangian. In practice, this amounts to rescaling all quantities in the Lagrangian by a "renormalisation constant", Z. For instance, for a field φ we write φ 0 = √Z φ φ R . The field φ 0 is called the "bare" field, as opposed to the "renormalised" field φ R , and Z φ is the corresponding renormalisation constant. This procedure has to be repeated for all fields, masses and coupling constants. Provided that all infinities in the theory can be removed with a finite number of renormalisation constants Z, the theory is said to be renormalisable. After the renormalisation constants have been fixed, we can calculate all physical quantities in terms of the renormalised quantities, and the results will be both finite and unambiguously defined.
The renormalisation constants are calculated according to some procedure called a "renormalisation scheme". This consists of computing a suitable set of correlation functions and imposing that these functions are finite at any order in perturbation theory. In this procedure one finds divergent integrals, which have to be regularised in some way. The regularisation provides a means to parameterise the divergences. One approach is to implement a momentum cut-off, Λ, so as to artificially remove the region of large momentum. The most common approach, though, is called "dimensional regularisation".
Here we lower the dimension d in eq. (146), so that we calculate all integrals in d = 4 − 2ε dimensions instead of d = 4. The integration measure changes accordingly, and for each dimensionless coupling g R one performs the replacement g → µ ε g R (µ). The factor of µ ε is essential to preserve the correct dimensions of the bare coupling in d dimensions. The renormalised coupling g R stays dimensionless and now depends on the scale µ. The latter quantity is the famous renormalisation scale, and it is the price that we pay for renormalisation, as our finite calculations are now all dependent upon µ.
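Explicitly, with d = 4 − 2ε, the standard dimensional-regularisation replacements are (conventions for the renormalisation constant vary between texts):

```latex
% Dimensional regularisation: measure and bare coupling, d = 4 - 2*epsilon
\int\!\frac{d^4k}{(2\pi)^4}\;\longrightarrow\;
\mu^{2\epsilon}\!\int\!\frac{d^dk}{(2\pi)^d}\,,
\qquad
g_0=\mu^{\epsilon}\,Z_g\,g_R(\mu)\,,
```

so that g R remains dimensionless while the bare coupling g 0 carries mass dimension ε.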
To summarise, the steps to perform renormalisation within dimensional regularisation are: 1. Compute all integrals in terms of renormalised quantities. 2. All UV divergences appear as 1/ε poles.
3. Define the renormalisation constants Z so as to cancel the poles in ε (and possibly some finite terms).
After renormalisation, eq. (147) depends on both ε and µ: φ 0 = √Z φ (ε, µ) φ R (µ), and a similar expression holds for all couplings and masses. Both φ 0 and Z φ are infinite for ε → 0, whereas φ R (µ) stays finite, but depends on the unphysical renormalisation scale µ.
In a renormalised theory, then, even tree-level diagrams depend on the renormalisation scale, for example through the coupling. The dependence on the renormalisation scale would disappear only if we were able to calculate physical quantities to all orders in perturbation theory. Although this is impractical, calculating one or two extra orders in perturbation theory can reduce the dependence considerably. However, this does mean that any theoretical calculation now depends on a free parameter, and it is exactly this parameter which leads to a way to estimate the "theory uncertainty". In fact, consider an observable computed at order n in perturbation theory: the variation of µ around some central value µ 0 automatically produces a higher-order term. Notice that O (n) (α R (µ), µ, {Q i }) might contain ln(Q i /µ). This is why the central scale µ 0 is normally chosen to be of the order of the typical value that the scales Q i can assume. For example, in gg → H one would typically take µ 0 ∼ m H .
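The statement that scale variation generates a higher-order term can be sketched as follows: if O (n) denotes the observable computed through order α R n , then

```latex
% Residual scale dependence of a truncated perturbative series:
O^{(n)}\big(\alpha_R(\mu),\mu,\{Q_i\}\big)
-O^{(n)}\big(\alpha_R(\mu_0),\mu_0,\{Q_i\}\big)
=\mathcal{O}\big(\alpha_R^{\,n+1}\big)\,,
```

since the exact (all-orders) observable is µ-independent, so the residual µ-dependence of the truncated series starts at the first uncalculated order.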
The obvious way to gauge the strength of the scale dependence of a calculation is to vary the scale and see how the result changes. If the dependence is very weak, the variation will be negligible; if the dependence is very strong, the variation will be large. The consensus of the community is to quote as theoretical uncertainty the variation of the result when the central scale is varied by a factor of 2 in each direction. One should remember that this is only an estimate of the sensitivity to the renormalisation scale, and not a strict error bar. This is illustrated by the plot in fig. 19, taken from Gehrmann-De Ridder, Gehrmann, Glover & Pires, arXiv:1301.7310, which shows the scale dependence of inclusive jet production in the gluon-gluon channel at LO, NLO and NNLO. Indeed, the variation decreases with each order, indicating that the sensitivity to the scale is decreasing. The fact that the bands do not overlap is a clear sign that these uncertainty bands are not error bands.

Running Coupling
By extracting the coupling from various observables, characterised by different typical scales µ, one can actually measure the dependence of the coupling on the renormalisation scale µ. This dependence can be predicted theoretically, and the comparison of the predicted dependence with the one that is actually observed represents one of the most stringent tests of the validity of a given QFT. This is illustrated for QCD in fig. 20, where one sees an astonishing agreement between the predicted "running" of the QCD coupling with the renormalisation scale Q, and what is observed in experimental data.
The theoretical object that dictates how a coupling evolves with the renormalisation scale is the beta function β(α R ). There are various ways to compute the beta function, which in general depends on the renormalisation scheme used. However, one can show that the first two coefficients of the beta function, β 0 and β 1 , are independent of the renormalisation scheme. If we consider a scheme tied to dimensional regularisation (e.g. the so-called MS scheme), one has the relation α 0 = µ 2ε Z g 2 α R (µ), where α 0 = g 0 2 /(4π) and α R = g R 2 /(4π). The crucial observation is that the bare coupling α 0 does not depend on µ. Therefore, its logarithmic derivative with respect to µ 2 is zero, which yields the beta function in terms of Z g . In any scheme based on dimensional regularisation, the first term of the beta function is obtained from the 1/ε pole of Z g , i.e. from Z g (1) . The calculation of Z g (1) can be performed using any quantity that involves an interaction vertex. A way that is common to both QED and QCD is to consider the renormalised interaction Lagrangian. Here we have used the ubiquitous notation Z ψ = √Z 2 and Z A = √Z 3 . The function Z 1 contains all UV divergences associated with loop corrections to the interaction vertex, whereas Z 2 and Z 3 contain UV divergences arising in the calculations of the fermion and gauge-boson propagators respectively. In QED, a powerful Ward identity implies Z 1 = Z 2 , so that the beta function can be calculated just from the loop corrections to the photon propagator in the unrenormalised theory. For the case of QED, inserting the resulting one-loop expression into the beta function, one obtains a positive result, which means that the QED coupling, at least while the beta function is dominated by its first term, becomes stronger with energy.
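The missing steps can be sketched as follows, in MS-bar-like conventions (a reconstruction; the notes' exact normalisation of β is not shown):

```latex
% Beta function defined with respect to ln(mu^2):
\beta(\alpha_R)\equiv\mu^2\frac{d\alpha_R}{d\mu^2}\,,
\qquad
\alpha_0=\mu^{2\epsilon}\,Z_g^2\big(\alpha_R(\mu),\epsilon\big)\,\alpha_R(\mu)\,,
\qquad
Z_g=1+\frac{Z_g^{(1)}(\alpha_R)}{\epsilon}+\dots
% Imposing mu^2 d(alpha_0)/d(mu^2) = 0 and taking epsilon -> 0:
\qquad
\mu^2\frac{d\alpha_0}{d\mu^2}=0
\;\Longrightarrow\;
\beta(\alpha_R)=2\,\alpha_R^2\,\frac{\partial Z_g^{(1)}}{\partial\alpha_R}
+\mathcal{O}(\alpha_R^3)\,.
% QED at one loop (Z_1 = Z_2, so Z_g = Z_3^{-1/2}):
\qquad
Z_g=Z_3^{-1/2}=1+\frac{\alpha_R}{6\pi\epsilon}
\;\Longrightarrow\;
\beta(\alpha_R)=\frac{\alpha_R^2}{3\pi}+\dots>0\,.
```

The positive one-loop QED beta function is what makes the QED coupling grow with the scale µ.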
In QCD, instead, the Ward identity Z 1 = Z 2 no longer holds. However, it does hold for the part of these renormalisation constants that depends on C F . Since, at one loop, Z 2 is proportional to C F , its contribution to the beta function cancels exactly against the abelian contribution to Z 1 , leaving the non-abelian part Z 1 | n.a. . The latter and Z 3 depend on the gauge, but this gauge dependence cancels in the combination entering the beta function. For instance, one can work in the Feynman gauge; here α s = g s 2 /(4π) and n f is the number of massless (a.k.a. "active") quark flavours contributing to the renormalisation of the gluon propagator. The resulting one-loop beta function is negative for n f = 6 active flavours, as is the case at very high momentum scales. The fact that the beta function of QCD is negative when α s is small means that the QCD coupling decreases with energy. This property is known as asymptotic freedom, and is crucial for the ability to compute hadronic cross sections in terms of quarks and gluons. In fact, when probed at short distances, hadrons appear as made up of pointlike constituents, quarks and gluons, which interact very feebly. Therefore, the Feynman rules we have learnt so far are enough to compute high-energy observables, for instance jet cross sections, as will be explained in the phenomenology course. At larger distances, the QCD coupling becomes stronger and stronger, to the point that quarks and gluons bind together to form hadrons. This phenomenon is known as confinement.
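For reference, the standard one-loop QCD result being quoted here is, in the same d ln µ² convention (a standard-textbook reconstruction):

```latex
% One-loop QCD beta function:
\beta(\alpha_s)=-\alpha_s^2\big(b_0+b_1\,\alpha_s+\dots\big)\,,
\qquad
b_0=\frac{11\,C_A-2\,n_f}{12\pi}
=\frac{33-2\,n_f}{12\pi}
\;\overset{n_f=6}{=}\;\frac{7}{4\pi}\,,
```

which is positive for n f ≤ 16, so β(α s ) < 0 and the coupling decreases with energy.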

Summary
This has been a very quick tour through some very important, deep and interesting material. I hope it has provided some insight into the quantum field theory descriptions of QED and QCD, and provided you with useful tools for the future.

1 Abelian and non-Abelian local gauge theories

The Standard Model is based on a product of groups SU(3) c ×SU(2) L ×U(1) Y , describing QCD, the chiral SU(2) L electroweak sector and the hypercharge U(1) Y sector in which QED is embedded. The first two of these groups are non-abelian, and are based on non-commuting group generators. The final group is abelian. We shall review in what follows how such gauge theories can be constructed from the principle of local gauge invariance, beginning with the simplest case of QED, and generalising this recipe to the construction of the non-abelian SU(N ) theories.

QED Lagrangian from local gauge invariance
The QED Lagrangian can be defined more fundamentally by demanding local gauge invariance. The Dirac Lagrangian has an obvious invariance under the global gauge transformation ψ → e iα ψ, where the phase α is independent of the spacetime position x. Each term is simply multiplied by e iα e −iα = 1. Local gauge invariance corresponds to demanding invariance under phases α(x) which are chosen independently at each spacetime point.
One now finds that local gauge invariance does not hold, since the derivative now acts on the phase as well: the ∂ µ α(x) term violates the local gauge invariance. The resolution is to replace the ordinary derivative ∂ µ by the covariant derivative D µ . To ensure local gauge invariance one needs to ensure that, under a gauge transformation, D µ ψ(x) transforms in exactly the same way as ψ(x) itself. It is in this sense that one has a "covariant derivative".
This transformation rule holds if we define the covariant derivative such that, under a local gauge transformation, the gauge field A µ transforms appropriately. The gauge transformation of A µ is exactly the same as the classical EM transformation, but the idea is that the covariant derivative D µ and gauge fields A µ provide a general recipe for constructing general non-abelian gauge theories. Having changed ∂ µ to D µ , and adding the "kinetic energy" term − 1 4 F µν F µν , one has the QED Lagrangian. Crucially, F µν can be defined in terms of the commutator of covariant derivatives D µ . This involves introducing a "gauge comparator" and is analogous to parallel transport in General Relativity. In the case of abelian QED one finds the classical EM result (1.10). How does this generalise to non-Abelian gauge groups?
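In one common sign convention (signs and the placement of the charge e vary between texts; this is a sketch, not necessarily the notes' exact normalisation), the objects just introduced are:

```latex
% QED covariant derivative, gauge transformations, and field strength:
D_\mu=\partial_\mu-ieA_\mu\,,
\qquad
\psi\to e^{i\alpha(x)}\psi\,,
\qquad
A_\mu\to A_\mu+\frac{1}{e}\,\partial_\mu\alpha(x)\,,
% F_{mu nu} from the commutator of covariant derivatives:
\qquad
F_{\mu\nu}=\frac{i}{e}\,[D_\mu,D_\nu]
=\partial_\mu A_\nu-\partial_\nu A_\mu\,,
% giving the QED Lagrangian:
\qquad
\mathcal{L}_{\rm QED}=\bar\psi\big(i\gamma^\mu D_\mu-m\big)\psi
-\tfrac14 F_{\mu\nu}F^{\mu\nu}\,.
```

With this choice, D µ ψ transforms with the same phase e iα(x) as ψ itself, which is exactly the covariance property demanded above.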

The Non-Abelian Recipe Book
Local gauge transformations will be of the form ψ → exp(i Σ i α i (x)T i ) ψ. Here the sum is over the N 2 −1 generators of the Lie group. These satisfy the Lie algebra [T i , T j ] = i c ijk T k , where the c ijk are the real structure constants of the group. Abelian groups have commuting generators, and so for the U(1) of QED c ijk = 0. For SU(2) the generators involve the three Pauli matrices, T i = σ i /2, and the structure constants are c ijk = ǫ ijk , whilst for SU(3) the generators involve the eight Gell-Mann matrices. The matter fields form multiplets of the gauge group, for example weak isospin doublets in SU(2) or quark colour triplets (red, green and blue, RGB) in SU(3) QCD.
The gauge fields are linear combinations of the generators of the gauge group, and one defines the covariant derivative accordingly. Here g is the gauge coupling. For local gauge invariance one requires that D µ ψ transforms in the same way as ψ, and hence A µ transforms accordingly. The locally gauge invariant Lagrangian is then obtained by replacing ∂ µ → D µ in the free Dirac Lagrangian. The non-Abelian expression for F µν follows from the commutator of covariant derivatives. One can easily check that under a local gauge transformation F µν transforms covariantly, and so the kinetic energy term is locally gauge invariant, since the trace is cyclic.
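A sketch of the non-abelian recipe in one standard convention (again, signs and the placement of g differ between texts):

```latex
% Matrix-valued gauge field, covariant derivative, field strength:
A_\mu=A_\mu^i\,T^i\,,
\qquad
D_\mu=\partial_\mu-igA_\mu\,,
\qquad
F_{\mu\nu}=\frac{i}{g}\,[D_\mu,D_\nu]
=\partial_\mu A_\nu-\partial_\nu A_\mu-ig\,[A_\mu,A_\nu]\,,
% or in components, using [T^j,T^k] = i c^{jkl} T^l:
\qquad
F^i_{\mu\nu}=\partial_\mu A^i_\nu-\partial_\nu A^i_\mu
+g\,c^{ijk}A^j_\mu A^k_\nu\,.
```

The commutator term survives precisely because the generators do not commute; it is absent in the abelian case, and is the origin of the gauge-boson self-interactions.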
Defining the generators so that Tr[T i T j ] = 1 2 δ ij one arrives at the kinetic energy term (1.26)

The Lagrangian of QCD
Quantum Chromodynamics (QCD) is a non-abelian gauge theory of interacting quarks and gluons. The gauge group is SU(N c ), and there are N c 2 − 1 gluons. Experimental indications are that N c = 3. The quark fields carry colour, R, G, B, and transform as a triplet in the fundamental representation. The field strength tensor G a µν contains the abelian (QED) result and an extra term, proportional to the structure constants f abc , which is responsible for the three- and four-point self-interactions of gluons, not present for photons in QED.
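The Lagrangian density described in words above has the standard form:

```latex
% QCD Lagrangian (gauge-fixing and ghost terms discussed separately):
\mathcal{L}_{\rm QCD}=\sum_{q}\bar q\,\big(i\gamma^\mu D_\mu-m_q\big)\,q
-\tfrac14\,G^a_{\mu\nu}G^{a\,\mu\nu}\,,
\qquad
G^a_{\mu\nu}=\partial_\mu A^a_\nu-\partial_\nu A^a_\mu
+g_s\,f^{abc}A^b_\mu A^c_\nu\,,
```

where the sum runs over quark flavours, D µ acts on colour triplets, and the f abc term generates the three- and four-gluon self-interactions.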
For QCD (but not QED) one also needs to include unphysical ghost particles. These are scalar Grassmann (anti-commuting) fields, needed to cancel unphysical polarization states for the gluons, which enter L QCD through an extra Faddeev-Popov term. In both QED and QCD one also needs to include a gauge-fixing term if inverse propagators are to be defined.
There is only one other gauge-invariant structure that we could add, involving the dual field strength tensor G̃ a µν . This is a total derivative and so produces no effects at the perturbative level. However, if θ ≠ 0, non-perturbative effects would induce a CP-violating electric dipole moment for the neutron; experimental constraints on this provide a bound |θ| < 3 × 10 −10 .
2 Glashow's Model SU(2) L × U(1) Y

We begin by defining a weak isospin doublet χ L containing a left-handed electron and electron neutrino, with the corresponding adjoint row vector χ̄ L . We shall introduce a weak isospin quantum number T . The doublet has T = 1 2 ; the upper and lower members of the doublet have T 3 = ± 1 2 , respectively. These row and column matrices are acted on by isospin generators in the form of 2 × 2 Pauli matrices. The generators 1 2 τ i satisfy the SU(2) Lie algebra, and the isospin raising and lowering operators are τ ± = 1 2 (τ 1 ± iτ 2 ). One can then write an isospin triplet of weak currents; putting in row vectors, column vectors and matrices, these can be written out explicitly on multiplying out. The charge raising and lowering V−A currents can be written in terms of J 1 µ and J 2 µ . The isospin triplet of currents has corresponding charges, and these satisfy an SU(2) algebra. To construct a combined weak and electromagnetic theory we will also require the electromagnetic current, where Q denotes the charge of the particle (in this case an electron) in units of e ≈ 0.303 (α = e 2 /4π is the fine structure constant); so Q = −1 for e − . In terms of the net charge of the interacting particles, J 3 µ and J em µ are neutral currents, whereas J 1 µ and J 2 µ are charged currents. J 3 µ does not involve e R whereas electromagnetism does, and so to have a gauge theory involving both weak and electromagnetic interactions we must add an extra current J Y µ to J 3 µ . The simplest approach is to write J em µ = J 3 µ + 1 2 J Y µ ; putting in the expressions for J em µ and J 3 µ we obtain J Y µ (2.12). By virtue of the above identity between J em µ , J 3 µ and J Y µ , the corresponding charges Q (electric charge in units of e), T 3 (third component of weak isospin) and Y (termed hypercharge) satisfy Q = T 3 + Y /2. This is identical to the Gell-Mann–Nishijima relation obtained in the quark model of hadrons. The 1 2 coefficient in front of J Y µ is purely conventional.
T 3 , Q and Y may be read off from the coefficients of the ν̄ L γ µ ν L , ē L γ µ e L and ē R γ µ e R terms in J 3 µ , J em µ and J Y µ above. The charge assignments (T, T 3 , Q, Y ) for the particles in the model are ν L = ( 1 2 , 1 2 , 0, −1), e L = ( 1 2 , − 1 2 , −1, −1), e R = (0, 0, −1, −2). (2.14) Each generation of leptons will have a similar weak isospin doublet with the same quantum numbers. We have an SU(2) L × U(1) Y structure where the generators of U(1) Y commute with those of SU(2) L . This implies that members of an isospin doublet must have the same hypercharge.
We have the following commutation relations for the generators T i , Q, Y (i = 1, 2, 3): Q, T 3 and Y form a mutually commuting set of generators, but only two are independent because of the relation Q = T 3 + Y /2. The maximum number of independent mutually commuting generators defines the rank of the group; SU(2) L × U(1) Y has rank 2.
Notice that U(1) Y is chiral, since e − L and e − R have different hypercharges whereas their electromagnetic charges are the same. To complete the specification of an SU(2) L × U(1) Y gauge theory invariant under local gauge transformations, we need to introduce suitable vector fields to couple to these currents.
QED is based on the interaction −eJ em µ A µ of the electromagnetic current Qψ̄γ µ ψ with the photon field A µ . This leads to a term in the Lagrangian ψ̄γ µ (i∂ µ + eA µ )ψ. Analogously, we introduce an isotriplet of vector gauge bosons W i µ (i = 1, 2, 3) to gauge the SU(2) L symmetry with coupling g, and a vector boson B µ to gauge the U(1) Y symmetry with coupling g ′ /2. The interaction (analogous to QED) will be −gJ iµ W i µ − (g ′ /2) J Y µ B µ , leading to the lepton-gauge boson portion of L (2.17). The ( 1 2 ), (−1), (−2) in brackets are, respectively, the weak isospin of the doublet χ L , Y (e L ), and Y (e R ). The notation τ · W µ is shorthand for τ i W i µ , summed over i. The full lepton-gauge boson Lagrangian will contain Σ l=e,µ,τ L(l), a sum over the three generations.
The SU(2) L and U(1) Y gauge transformations under which L(l) is invariant are given in (2.18). Here Λ(x) specifies the local U(1) Y gauge transformations and ∆(x) = (∆ 1 (x), ∆ 2 (x), ∆ 3 (x)) the local SU(2) L gauge transformations. The transformation of the W µ field is given for an infinitesimal SU(2) L gauge transformation. Separating off the interaction piece of L(l), we want to decompose it into a charged current (exchange of electrically charged W ± ) and a neutral current (exchange of the electrically neutral Z 0 ). Consider the τ · W µ term in L I . Here we define the charged vector fields W ± µ = (W 1 µ ∓ iW 2 µ )/√2. The W 3 µ term is neutral and so belongs in L N C . So the V−A charge raising and lowering currents of Eq. (2.7) couple to the charged W ± µ fields. The rest of L I gives the neutral-current piece. The next step is to identify the physical neutral vector fields Z µ and A µ . We therefore write W 3 µ and B µ as an orthogonal mixture of Z µ and A µ .
The angle θ w is the weak mixing angle, so L I can be re-expressed in terms of Z µ and A µ . We must have that J em µ = J 3 µ + 1 2 J Y µ is coupled to A µ with strength e, so we need e = g sin θ w = g ′ cos θ w (2.27), or equivalently 1/e 2 = 1/g 2 + 1/g ′ 2 (2.28). We then obtain the neutral-current couplings, where J Y µ has been eliminated using J Y µ = 2(J em µ − J 3 µ ). The terms in the square-bracket coefficient of Z µ can then be rewritten, where g ′ = g sin θ w / cos θ w has been used; setting sin 2 θ w + cos 2 θ w = 1 gives the final form. So, finally assembling all this, we obtain the interaction Lagrangian. Expressing the currents in terms of the full fermion fields ν, e, and taking the coefficients of the l̄lV terms (l = e, ν; V = A(γ), W ± , Z) multiplied by i, we obtain the fermion-gauge boson vertex factors given in the Appendix.
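The key relations in this construction are standard results, quoted here as a reconstruction of the dropped display equations:

```latex
% Coupling unification condition and neutral-current Lagrangian:
e=g\sin\theta_w=g'\cos\theta_w
\quad\Longleftrightarrow\quad
\frac{1}{e^2}=\frac{1}{g^2}+\frac{1}{g'^2}\,,
\qquad
\mathcal{L}_{NC}=-e\,J^{\rm em}_\mu A^\mu
-\frac{g}{\cos\theta_w}\,\big(J^3_\mu-\sin^2\theta_w\,J^{\rm em}_\mu\big)Z^\mu\,.
```

The first relation fixes the photon to couple to J em µ with strength e; the second exhibits the characteristic Z coupling proportional to T 3 − Q sin 2 θ w .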

Kinetic Energy Terms for Glashow's Model
To complete the Glashow model Lagrangian, we need SU(2) L × U(1) Y gauge invariant kinetic energy terms for the vector boson fields. In QED we have the kinetic energy term − 1 4 F µν F µν with F µν = ∂ µ A ν − ∂ ν A µ . The relevant terms are L W for the fields W i µ (i = 1, 2, 3), which gauge SU(2) L , and L B for B µ . For the U(1) Y field B µ one has the Abelian field strength tensor B µν = ∂ µ B ν − ∂ ν B µ and the corresponding kinetic energy term. These terms can of course be rewritten in terms of the physical fields W + , W − , Z µ , A µ .
Having so rewritten L W and L B we can pick out the (∂ µ V )V V and V V V V cross terms in the physical fields. The Feynman Rules are in momentum space so i∂ µ V should be replaced by p µ V , where p µ is the momentum of the vector boson V. We have therefore generated the three and four-point self-interactions of W ± , Z and γ. The relevant Feynman Rules are given in the Appendix.
We now have all the Feynman rules for the Glashow model Lagrangian. Notice, however, that a mass term M 2 W W µ W µ is not invariant under an SU(2) L gauge transformation. The same comment would apply in QED and forbid the photon mass term 1 2 M 2 γ A µ A µ ; of course this is not a problem, since we know experimentally that M γ = 0 and that photons are massless particles. A Dirac mass term for the leptons is also disallowed, since mψ̄ψ = m(ψ̄ R ψ L + ψ̄ L ψ R ), written in terms of chiral L and R components. This is gauge invariant in QED, which is L/R symmetric, but in the chiral SU(2) L × U(1) Y theory ψ R and ψ L have different gauge transformations in Eq. (2.18). Simply adding mass terms by brute force would lead to a sick theory. For massless vector bosons, e.g. a photon in QED, one only has transverse polarization degrees of freedom; gauge invariance implies the absence of the longitudinal (L) modes. For massive W bosons one can consider the scattering of longitudinally polarized W pairs. The propagator for a massive vector boson of virtuality q 2 involves (g µν − q µ q ν /M 2 W )/(q 2 − M 2 W ). The longitudinally polarized W bosons are described by polarization vectors with ǫ L µ → q µ /M W as q 2 → ∞, so the propagator approaches a constant at large q 2 . This implies that the longitudinally polarized W scattering amplitude grows like the square of the c.m. energy, and unitarity is violated, since at most a logarithmic growth is allowed. We therefore need to generate mass more subtly. One possibility is to exploit the so-called Higgs mechanism, suggested by Peter Higgs in 1964 and motivated by the generation of Cooper pairs in superconductivity, involving the concept of spontaneous symmetry breaking.

Spontaneous Symmetry Breaking
In what follows we shall introduce the concept of Spontaneous Symmetry Breaking (SSB) using the physical example of the Heisenberg spin chain model for a ferromagnet. This involves spontaneous breaking of rotational invariance. Treated in Landau mean field theory, we shall see that the Free Energy of the ferromagnet below the critical Curie temperature T c has a form similar to the wine-bottle or mexican-hat potential which we shall use later in the context of breaking local gauge symmetry. We will develop this idea via a series of toy models involving a complex doublet of scalar fields, first discussing SSB of a global gauge symmetry and then the more relevant SSB of local gauge invariance.

The Heisenberg Ferromagnet
We consider a ferromagnetic material in a zero external magnetic B field. The Hamiltonian of the system is given by H = −J Σ ⟨i,j⟩ σ i · σ j (with J > 0), where the sum is over nearest-neighbour pairs of spins (i, j), σ i denoting the spin on site i. This Hamiltonian is rotationally invariant, so that it commutes with the unitary rotation operator of three-dimensional spatial rotations, U (R).
However, below a critical temperature T c , the Curie temperature (T c = 1043 K for Iron), the ground state of the system has an overall net magnetization M ≠ 0. This overall magnetization will point in a particular direction, and hence the rotational invariance has been broken.
Heating up the ferromagnet so that T > T c , one finds that above the Curie temperature the overall magnetization vanishes, M = 0, as the magnetic domains are randomized, and the system is rotationally invariant. Cooling down below T c selects a new non-zero magnetization. It is interesting to study the free energy, F , of the ferromagnet. This may be analysed using Landau mean field theory. Here V is the volume, N is a degeneracy-of-states normalization factor, and β > 0 is a parameter. Plotting F versus | M | for T > T c reveals a monotonically increasing curve with a minimum at | M | = 0. For T < T c , however, one has a non-trivial minimum at | M | ≠ 0. So the system has a degenerate set of rotationally equivalent ground states. Rotating the T < T c curve around the F axis, one finds a surface of the same form as the famous "wine-bottle" or "mexican hat" potential which we shall encounter in the Higgs Mechanism.

SSB of gauge symmetry-general considerations
The analogue of the ground state in the ferromagnet example will be the field theory vacuum.
Crucially, physical symmetries such as rotational and translational invariance must hold for the vacuum state. We want to spontaneously break the internal gauge symmetry, leaving rotational invariance unbroken. If the vacuum is specified by a (Higgs) field, we require a scalar field with J = 0; otherwise the vacuum would have an intrinsic angular momentum and rotational invariance would be broken. We should therefore require a scalar operator φ̂(x) with some non-vanishing vacuum expectation value (vev) φ c (x). For translational invariance we must have a constant vev, φ c (x) = v. We now turn to some specific toy models of SSB involving complex scalar fields.

SSB of a global Gauge Symmetry: Nambu-Goldstone mechanism
We consider the Lagrangian for a complex scalar field φ with real components φ 1 and φ 2 . This has a global gauge invariance under φ → φ ′ = e iα φ, with α a constant. We can then write L in terms of φ 1 and φ 2 , where the scalar potential is V (φ). We can distinguish between two cases. If λ < 0 and µ 2 > 0 there is an overall minimum of V (φ) at φ 1 = φ 2 = 0. The term −µ 2 φ * φ is then a conventional mass term for a scalar particle, as in the Klein-Gordon Lagrangian. If, however, λ < 0 and also µ 2 < 0, then we have a "wrong-sign" (imaginary) mass term. The true vacuum is no longer at φ = 0; we obtain the mexican-hat (wine-bottle) potential, with a degenerate circle of minima in the φ 1 − φ 2 plane. Introducing X 2 = φ 2 1 + φ 2 2 , we find the minimum of the potential at X 2 = v 2 (3.14). A gauge transformation moves one around the degenerate circle of minima. By picking a particular vacuum state defined by a non-zero vev, one spontaneously breaks the gauge invariance. We shall choose for simplicity to give a non-zero vev to the φ 1 direction, with φ 2 having a zero vev.
We now rewrite the field φ in terms of new fields ξ and η reflecting the deviation from this true vacuum state, φ(x) = (v + ξ(x) + iη(x))/√2. Rewriting V (φ) in terms of ξ and η (where the ellipsis in the result denotes constant, cubic and quartic terms which we shall ignore), and substituting this back into the Lagrangian, we see that we have a correct-sign mass term, µ 2 ξ 2 , for the ξ scalar boson, corresponding to m ξ = √(−2µ 2 ) (recall that µ 2 < 0). ξ is the Higgs boson and corresponds to the field direction given a non-zero vev v. We also have a massless η scalar boson. This is the Goldstone boson, which corresponds to a field direction given a zero vev. Thinking of these field directions as analogous to normal modes, the Higgs excitations are radial oscillations against the walls of the potential, whereas the Goldstone excitations run around the flat bottom of the well.

SSB of local Gauge Symmetry
We now consider the same scalar Lagrangian as in Eq. (3.6), but rewritten using the covariant derivative D µ = ∂ µ + ieA µ , where under the local gauge transformation φ → φ ′ = e iα(x) φ the gauge field transforms in the usual way. We then have the locally gauge invariant Lagrangian. We perform SSB in exactly the same way as in the previous example, so that φ 1 acquires a vev and φ 2 is the Goldstone mode which doesn't. Substituting φ(x) = (v + ξ(x) + iη(x))/√2 into the Lagrangian then yields a mass term for the A µ field. We also have a massive Higgs ξ with m ξ = √(−2µ 2 ). Less easy to interpret is the +evA µ ∂ µ η cross-term. The clue to how this unwanted cross-term can be removed lies in counting the number of field degrees of freedom before and after SSB. Rewriting the fields by identifying a different non-trivial vacuum cannot change this number. Before SSB we have two transverse polarization states for the originally massless A µ field (the longitudinal polarization state is absent for massless vector particles), plus two scalar fields, so overall we have four degrees of freedom. After SSB the A µ field is massive and so acquires an extra longitudinal polarization degree of freedom, so naïvely we have five field degrees of freedom after SSB. The explanation is that the Goldstone scalar field η is an unphysical spurion or ghost field which can be gauged away. We can say that it is "eaten" to provide the extra longitudinal polarization degree of freedom for the A µ field. To see this we can locally gauge transform φ(x), dropping O(η 2 ) terms. We see that the η ghost field has been gauged away and is not present in the unitary gauge. We can also see that the unwanted cross-term is removed, since it can be absorbed into a redefined field A ′ µ , where A ′ µ denotes the A µ field in unitary gauge.
In unitary gauge one finally has the Lagrangian above. The unitary gauge is not suitable for practical calculations; in other gauges one needs to introduce extra Feynman rules for the Goldstone/ghost scalars. We have not listed these rules in the Appendix, which assumes unitary gauge. In the next section we finally move on to discuss SSB for SU(2) L × U(1) Y .

The Higgs Mechanism for SU(2) L × U(1) Y
We introduce an SU(2) L doublet of complex scalar Higgs fields. The doublet has weak isospin T = 1/2 and hypercharge Y = 1, leading to electromagnetic charges +1, 0 for the T 3 = ±1/2 upper and lower members of the doublet (recall Q = T 3 + Y/2).
In terms of real scalar fields φ i one has the decomposition above. We then add to the massless Glashow model Lagrangian of Eq.(2.39) the scalar contribution L Φ . The conjugate Φ † contains the antiparticles (φ ⁻ , φ̄ ⁰ ).
The most general SU (2) L × U (1) Y invariant and renormalisable scalar potential V (Φ) is We arrange that as before λ < 0 and µ 2 < 0 so that L Φ contains a wrong-sign −µ 2 Φ † Φ mass term. V (Φ) is then bounded below so there will be an SU (2) L × U (1) Y invariant manifold of minima lying below V (Φ) = 0, and we obtain the "wine-bottle" or "mexican hat" potential.
so that the degenerate minima are specified by Φ † Φ = v²/2, or in terms of the real scalar fields, φ 1 ² + φ 2 ² + φ 3 ² + φ 4 ² = v². We need to spontaneously break SU(2) L × U(1) Y by picking the vacuum from this set of minima of the potential V. We shall choose the vacuum expectation values (vevs) of the fields φ 1 , φ 2 and φ 4 to be zero, and assign a non-zero vev v to the field φ 3 . Of course, we should be able to pick the vacuum direction completely arbitrarily, but in order for the photon to remain massless after the spontaneous symmetry breaking, as it must, we need to give the non-zero vev to a neutral field. To do things completely generally we should only assign charges and other quantum numbers after performing the symmetry breaking; we shall simply proceed with these particular choices.
We now expand Φ around this chosen vacuum, setting φ 3 = H + v, where H is the neutral scalar Higgs field. It is possible to choose a special gauge, the unitary gauge, in which the "Goldstone" fields with zero vevs, φ 1 , φ 2 , φ 4 , are eliminated. To see this we can apply the local gauge transformation exp(i τ · θ(x)/v) to this unitary gauge form to obtain a general Φ; expanding the exponential to O(θ) we find Eq.(4.14). So we see that the unitary gauge field of Eq.(4.12) is a gauge transformation of a general Φ with four independent scalar fields. The idea is that the three originally massless gauge fields W ± , Z 0 will become massive and acquire three extra longitudinal polarization degrees of freedom by "eating" the three unphysical Goldstone bosons. Notice that the above gauge transformation accordingly uses only three of the four possible SU(2) L × U(1) Y gauge transformation parameters (the fourth is set to zero). As we noted earlier, the unitary gauge is unsuited for calculations: one will need to add extra Feynman rules for the Goldstone bosons, analogous to the extra Feynman rules for Faddeev-Popov ghost particles in QCD.
We can now evaluate L Φ in unitary gauge explicitly and exhibit the spontaneously generated mass terms for W ± and Z 0 . From Eq.(3.1) we find that the photon field A µ is no longer involved, only W ± µ and Z µ . The photon will therefore not acquire a ½M²A µ A µ mass term. The masslessness of the photon is guaranteed by the U(1) em gauge invariance of the Lagrangian: U(1) em is a residual symmetry. SU(2) L × U(1) Y has been spontaneously broken to U(1) em , and the originally massless W ± , Z 0 gauge bosons have acquired masses in the process. Evaluating the conjugate (D µ Φ) † as well, we finally obtain the Lagrangian in unitary gauge, where we have used the relation (g cos θ w + g ′ sin θ w )² = g² + g ′ ². The masses of W ± and Z can now be read off by identifying the terms M² W W ⁺ µ W ⁻µ and ½M² Z Z µ Z µ in Eq.(4.17). For the Higgs scalar we identify the overall H² mass term and, recalling that µ² = λv², we obtain the Higgs mass M H = √(−2µ²). There are also VVH and VVHH gauge-Higgs interactions, and HHH, HHHH Higgs self-interactions. The corresponding Feynman rules and vertex factors are contained in the Appendix.

An immediate consequence of the above vector boson masses is that M W = M Z cos θ w , i.e. the tree-level ratio ρ = M² W /(M² Z cos²θ w ) = 1.
This is often referred to as the "weak ∆I = 1/2 rule" and is connected with our choice of a Higgs doublet to perform the spontaneous symmetry breaking.
Notice that from the measured fine structure constant α = e²/4π and the vector boson masses M W and M Z we can determine sin²θ w , v and g, but not µ. This means that the Higgs mass M H is not determined directly by other experimentally measured parameters. We shall return a little later to a discussion of the number of independent Standard Model parameters.
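A quick numerical sketch of this logic, using the tree-level relations M W = M Z cos θ w , e = g sin θ w and M W = gv/2 from the text, with illustrative input values:

```python
import math

# Tree-level electroweak relations: M_W = M_Z cos(theta_w), e = g sin(theta_w),
# M_W = g v / 2.  The numerical inputs below are illustrative values only.
alpha = 1 / 137.036            # low-energy fine structure constant
MW, MZ = 80.4, 91.19           # vector boson masses in GeV

sin2w = 1.0 - (MW / MZ)**2     # weak mixing angle from the mass ratio
e = math.sqrt(4 * math.pi * alpha)
g = e / math.sqrt(sin2w)       # SU(2)_L coupling
v = 2 * MW / g                 # Higgs vev

print(f"sin^2(theta_w) = {sin2w:.3f}, g = {g:.3f}, v = {v:.0f} GeV")
```

Note that, as stated in the text, nothing here fixes µ, so M H remains undetermined by these inputs.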

Yukawa terms for lepton masses
To give charged leptons a mass one adds a so-called Yukawa term L Y (l) to the Lagrangian, where l = e, µ, τ labels the lepton. For an electron, for instance, substituting the unitary-gauge Φ into L Y (e), one can identify the electron mass m e = G e v/√2 and the electron-Higgs coupling g(Hēe) = m e /v = gm e /(2M W ). Notice that the ν L upper element of the doublet does not appear, since in unitary gauge the upper entry in Φ is zero; so, as required, we do not generate a neutrino mass term or a neutrino interaction with the Higgs. We see that the coupling between leptons and the Higgs is proportional to the lepton mass, so signatures involving the heaviest lepton, the τ, will be important for Higgs searches at colliders. Similarly for quarks, bb̄ and tt̄ signatures will be important. The vertex factor and Feynman rule for the Yukawa term are contained in the Appendix.

Electroweak quark sector
So far we have just considered the lepton sector. We also need to include a Lagrangian L(q) to describe electroweak quark interactions. We have six quarks (three generations): u, d, s, c, b, t, with Q u = Q c = Q t = 2/3 and Q d = Q s = Q b = −1/3. We can construct SU(2) L isospin doublets analogous to the leptonic case, with U 1 = u, U 2 = c, U 3 = t and D 1 = d, D 2 = s, D 3 = b. However, experimentally one observes n → pe⁻ν̄ e and also Λ → pe⁻ν̄ e decays, corresponding to d → u and s → u transitions. This implies that the weak interaction eigenstates are mixtures of flavour eigenstates. We therefore replace the above χ f L by doublets containing D ′ f , a flavour-rotated mixture. Here θ c is the Cabibbo angle; experimentally one finds θ c ≈ 13 degrees, or cos θ c ≈ 0.97. The full three-generation CKM matrix has a characteristic hierarchy in the magnitudes |V ij | of its elements. The matrix involves 4 parameters: 3 angles and 1 complex phase. The presence of this complex phase enables CP violation to occur.
In analogy with the leptonic isotriplet of currents one then defines the quark isotriplet As before i = 1, 2 are charged currents, and J f 3 µ is a neutral current.
Notice that D ′ f L , the rotated flavour mixture, has been replaced by D f L in the final line. This follows from the unitarity property V V † = 1. It has the important consequence that flavour-changing neutral current processes are forbidden. We can now determine the electromagnetic quark currents, where the (2/3), (−1/3) in brackets denote the electric charges of the quarks. If we define the hypercharge current J f Y µ in the same way as for the leptons, then, analogous to L(l) for the leptons, one obtains the quark electroweak Lagrangian L(q). To give masses to the quarks we shall require a corresponding quark Yukawa term L Y (q).
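The role of unitarity in forbidding FCNCs can be illustrated numerically in the two-generation (Cabibbo) case; the rotation matrix below is the standard parameterisation, not taken from these notes:

```python
import numpy as np

# Two-generation (Cabibbo) illustration.  The weak eigenstates are rotations
# of the flavour eigenstates, D' = V D.  Because V is unitary, the neutral
# current ~ D'^dag D' = D^dag (V^dag V) D = D^dag D is flavour diagonal,
# so flavour-changing neutral currents are absent (the GIM mechanism).
theta_c = np.radians(13.0)                     # Cabibbo angle ~ 13 degrees
V = np.array([[np.cos(theta_c), np.sin(theta_c)],
              [-np.sin(theta_c), np.cos(theta_c)]])

print(np.allclose(V.conj().T @ V, np.eye(2)))  # True: V is unitary

# The charged current does mix flavours: d -> u goes like cos(theta_c),
# while s -> u goes like sin(theta_c) and is Cabibbo suppressed.
print(V[0, 0], V[0, 1])
```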
After spontaneous symmetry breaking one has in unitary gauge In this way one generates a quark mass matrix and qqH interactions. We shall not pursue the details any further.

SM Lagrangian and independent parameter count
Assembling all the pieces we have discussed, we can now arrive at the Glashow-Weinberg-Salam Standard Model Lagrangian. The ellipsis denotes further gauge-fixing and ghost contributions. The Standard Model as specified by this Lagrangian has been shown to be renormalisable by 't Hooft and Veltman. The unitarity problem for W ⁺ L W ⁻ L → W ⁺ L W ⁻ L scattering is also cured: it is solved by extra diagrams involving virtual Higgs exchange, which now appear due to the WWH interaction terms.
It is interesting to count how many of the parameters in the Standard Model are independent. There are fifteen parameters overall if we ignore the quark sector, which may be divided into couplings: e(α), g, g ′ , G e , G µ , G τ ; masses: M W , M Z , M H , m e , m µ , m τ ; the Higgs sector parameters µ², λ (v² = µ²/λ); and last but not least the weak mixing angle sin²θ w . There are clearly many relations between the parameters, such as M W = ½gv or e = g sin θ w for instance. It turns out that there are in fact seven independent parameters which, if specified, can then predict all fifteen. One can choose for instance the set g, g ′ , G e , G µ , G τ , µ², λ; other choices are possible. A model with at least 19 undetermined parameters, in which the particular representations containing fermions and scalars are not compellingly motivated, and with a mysterious replication of three generations, does not seem a likely candidate for a complete theory of everything, even though it has proved consistent with experiment in essentially every detail checked, with the Higgs, confirmed by the LHC earlier this year, being the last ingredient to fall into place.

Appendix of Feynman rules
The following pages summarize the Feynman Rules in unitary gauge for one generation of leptons. All the Lagrangian terms needed to derive the vertex factors for the different interactions are contained in these lecture notes.

Feynman Rules in the Unitary Gauge (for one Generation of Leptons)
Propagators: All propagators carry momentum p.

Introduction
Historically the lecture notes for the phenomenology course have consisted of the slides presented in the lectures. These notes are intended to provide additional information, and more mathematical detail, on the more theoretical aspects of the course which don't change from year to year. The recent experimental results, which change from day to day as the LHC experiments take more and more data, will continue to be presented solely on the slides used in the lectures.
These notes have been adapted from notes by Peter Richardson. In order to study hadron collisions we need to understand the basics of cross section calculations, Quantum Chromodynamics (QCD) and jets, which we will first consider in the simpler environments of e + e − and lepton-hadron collisions before going on to study hadron-hadron collisions.
Unfortunately there is no single good book on modern phenomenology. Two old classics but now a bit dated are:
Two good books, although mainly focused on QCD and probably at a bit too high a level for this course, are: • QCD and Collider Physics Ellis, Stirling and Webber [3]; • Quantum Chromodynamics Dissertori, Knowles and Schmelling [4]; and of course the classic on Higgs physics • The Higgs Hunter's Guide Gunion, Haber, Kane and Dawson [5].
In addition the recent reviews: • Towards Jetography [6] which provides a good primer on jet physics; • General-purpose event generators for LHC physics [7] which gives a detailed description of the physics of Monte Carlo event generators; are good sources of additional information.

e + e − Annihilation
While electron-positron colliders are less relevant for current phenomenology than they once were, they are a good starting point for discussing many concepts one also finds at hadron colliders.
If we consider what happens when electrons and positrons collide, then the most likely outcome is that some hadrons are produced. However, none of the Lagrangians or Feynman rules you've learnt involve hadrons. This is the key issue in most collider physics: we can calculate things for quarks and gluons, but we observe hadrons.

Leading Order
We will start by studying one of the simplest possible processes, e + e − annihilation via the exchange of a photon or Z 0 boson, as shown in Fig. 1. This process can produce either quarks or leptons. Unfortunately due to quark confinement we cannot observe free quarks directly, instead quarks and antiquarks will produce hadrons with unit probability. Much of what we will study in this course will be concerned with the question, given that we observe hadrons how do we infer what was going on in the fundamental process involving quarks?
We will start with the simplest example. Given that quarks and antiquarks produce hadrons with unit probability, we can measure the cross section for the process e + e − → qq̄, which we can calculate perturbatively, by measuring the cross section for e + e − → hadrons. This is the case because gluons (which also produce hadrons) do not couple directly to the leptons. This is the basis of most collider phenomenology: we want to measure things using hadrons that we can calculate using partons. The total cross section for e + e − annihilation into hadrons is the simplest such observable.
Using the techniques you have learnt in the other courses you can now calculate the total cross section for e + e − annihilation. In reality it is more common to study the ratio R = σ(e + e − → hadrons)/σ(e + e − → µ + µ − ), as this reduces experimental uncertainties. At low energies this process is dominated by photon exchange, so we can neglect the Z 0 boson. In this limit σ(e + e − → µ + µ − ) = 4πα²/(3s), where s is the centre-of-mass energy of the collision squared. The cross section for the production of quarks is the same up to the quark charge squared and a colour factor N c . The expected picture is shown in Fig. 2; the experimental measurement of this ratio is shown in Fig. 3.
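For instance, the standard counting R = N c Σ q e q ² over the kinematically accessible flavours can be checked numerically (a sketch: thresholds and QCD corrections are ignored):

```python
from fractions import Fraction

# Below the Z0 pole, R = sigma(e+e- -> hadrons)/sigma(e+e- -> mu+mu-)
# is N_c times the sum of squared charges of the accessible quark flavours.
charges = {"u": Fraction(2, 3), "d": Fraction(-1, 3), "s": Fraction(-1, 3),
           "c": Fraction(2, 3), "b": Fraction(-1, 3)}
Nc = 3  # number of colours

def R(flavours):
    return Nc * sum(charges[q]**2 for q in flavours)

print(R("uds"))    # below charm threshold: 3*(4/9 + 1/9 + 1/9) = 2
print(R("udsc"))   # above charm threshold: 10/3
print(R("udscb"))  # above bottom threshold: 11/3
```

The steps in R at the quark thresholds were historically direct evidence for N c = 3.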

Higher Order Corrections
When we draw Feynman diagrams we are performing a perturbative expansion in the (hopefully) small coupling constant. Unfortunately the strong coupling often isn't very small: at the Z 0 mass, α S (M Z ) = 0.118. We therefore need to consider higher orders in the perturbative expansion. There are always two types of correction:
• real gluon emission;
• virtual gluon loops.

Figure 3: R = σ(e + e − → hadrons)/σ(e + e − → µ + µ − ) as a function of energy, taken from Ref. [8].

Real Emission
There are two possible diagrams for gluon emission. The matrix element, considering photon exchange for simplicity, is written in terms of p a,b , the 4-momenta of the incoming electron and positron, respectively. The outgoing quark, antiquark and gluon have 4-momenta p 1,2,3 , respectively. The total momentum of the system is q = p a + p b = p 1 + p 2 + p 3 . The gluon has colour index a = 1, . . . , N c ² − 1, whereas the quark/antiquark have colour indices i, j = 1, . . . , N c .
Summing/averaging over spins and colours, the colour algebra gives a colour factor involving the colour charges in the fundamental (quarks and antiquarks) and adjoint (gluons) representations; more about the colour algebra can be found in Appendix C. The three-body phase space is

dΦ 3 = 1/(8(2π)⁹) p 1 dp 1 d cos θ dφ p 2 dp 2 d cos β dα,

where θ and φ are the polar and azimuthal angles, respectively, of the outgoing quark with respect to the beam direction, and β and α are the polar and azimuthal angles, respectively, of the antiquark with respect to the quark direction. We have integrated over p 3 using the δ-function and assumed that the outgoing particles are massless. Using momentum conservation, the integral over the remaining δ-function gives

dΦ 3 (p a + p b ; p 1 , p 2 , p 3 ) = 1/(8(2π)⁹) dp 1 d cos θ dφ dp 2 dα, (11)

where x i ≡ 2p i /√s. Momentum and energy conservation requires x 1 + x 2 + x 3 = 2. The total cross section follows; the contribution from the Z 0 boson is the same except for σ 0 . The integrand is divergent at the edge of phase space as x 1,2 → 1, so that the total cross section is σ = ∞! This is a common feature of all perturbative QCD calculations: configurations which are indistinguishable from the leading-order result are divergent. Physically there are two regions where this happens.
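The explicit matrix element is not reproduced above; as a hedged sketch, the standard textbook result for the normalised e + e − → qq̄g distribution (see e.g. Ellis-Stirling-Webber) makes the edge-of-phase-space divergence easy to see numerically:

```python
import math

# Standard O(alpha_S) textbook result (not quoted explicitly in these notes):
#   (1/sigma0) d sigma / dx1 dx2
#       = CF * alphaS/(2 pi) * (x1^2 + x2^2) / ((1 - x1)(1 - x2)).
# Evaluating it as x1 -> 1 exhibits the divergence at the edge of phase space.
CF, alphaS = 4.0 / 3.0, 0.118

def dsigma(x1, x2):
    return CF * alphaS / (2 * math.pi) * (x1**2 + x2**2) / ((1 - x1) * (1 - x2))

for eps in (1e-1, 1e-2, 1e-3):
    print(eps, dsigma(1 - eps, 0.5))  # grows like 1/eps as x1 -> 1
```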

1. Collinear limit: we take x 1 → 1 at fixed x 2 , or x 2 → 1 at fixed x 1 . We can see what happens physically by considering the dot product of the antiquark and gluon 4-momenta, 2p 2 · p 3 = (q − p 1 )² = s(1 − x 1 ).
So the limit x 1 → 1, where the matrix element diverges, corresponds to the angle between the antiquark and gluon θ 23 → 0, i.e. collinear emission of the gluon from the antiquark. Similarly the limit x 2 → 1 corresponds to collinear emission of the gluon from the quark.
2. Soft limit: we can consider what happens by looking at the energy of the gluon, E 3 = x 3 √s/2 = (2 − x 1 − x 2 )√s/2. The matrix element diverges in the soft limit, when the energy of the gluon is small.
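Both limits can be checked with explicit kinematics. A small sketch constructing massless three-parton momenta from (x 1 , x 2 ) and verifying 2p 2 · p 3 = s(1 − x 1 ):

```python
import math

# Explicit massless three-parton kinematics for e+e- -> q(p1) qbar(p2) g(p3),
# with energy fractions x_i = 2 E_i / sqrt(s).  We verify 2 p2.p3 = s(1 - x1):
# as x1 -> 1 the antiquark and gluon are forced collinear.
s = 1.0
x1, x2 = 0.8, 0.7
x3 = 2.0 - x1 - x2

# Put the quark along +z; fix the q-qbar opening angle from momentum
# conservation: |p3|^2 = |p1+p2|^2 => cos(theta12) = (x3^2-x1^2-x2^2)/(2 x1 x2)
c12 = (x3**2 - x1**2 - x2**2) / (2 * x1 * x2)
s12 = math.sqrt(1 - c12**2)
rs2 = math.sqrt(s) / 2
p1 = (x1 * rs2, 0.0, 0.0, x1 * rs2)
p2 = (x2 * rs2, x2 * rs2 * s12, 0.0, x2 * rs2 * c12)
p3 = (x3 * rs2, -p1[1] - p2[1], -p1[2] - p2[2], -p1[3] - p2[3])

def dot(p, q):  # Minkowski product, metric (+,-,-,-)
    return p[0]*q[0] - p[1]*q[1] - p[2]*q[2] - p[3]*q[3]

print(2 * dot(p2, p3), s * (1 - x1))  # the two expressions agree
```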
These are both universal features of QCD matrix elements. In general one can see how the divergences appear by looking at the propagator just before the emission of a gluon.
From this expression one can see that the propagator denominator vanishes (and therefore divergences appear) when the gluon is either soft (|k| → 0) or collinear (cos θ → 1). In these limits QCD matrix elements factorize, i.e. the matrix element including the emission of a soft or collinear gluon can be written as the convolution of the matrix element before the emission and a universal term describing the collinear or soft emission.
Collinear Limit If we first consider collinear emission, we take the momentum of the gluon p 3 parallel to p 2 (θ 23 = 0). We can therefore define p̄ 2 , the momentum of the antiquark before the gluon radiation, and z, the fraction of the original antiquark's momentum carried by the gluon. In this limit the matrix element factorizes, as does the phase space. Putting this together, the Dokshitzer-Gribov-Lipatov-Altarelli-Parisi (DGLAP) splitting function emerges as a universal probability distribution for the radiation of a collinear gluon in any process producing a quark.

Figure 5: Dipole radiation pattern for e + e − → qq̄γ and e + e − → qq̄g.
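The splitting functions themselves are not reproduced above; as an illustration, the familiar q → qg function quoted in standard references is P_qq(z) = C_F (1 + z²)/(1 − z), where here z is the momentum fraction retained by the quark (conventions for z vary between references):

```python
# Unregularised q -> qg DGLAP splitting function, as quoted in standard
# references: P_qq(z) = C_F (1 + z^2)/(1 - z), with z the momentum fraction
# kept by the quark.  It diverges as z -> 1 (soft gluon limit).
CF = 4.0 / 3.0

def P_qq(z):
    return CF * (1 + z**2) / (1 - z)

for z in (0.5, 0.9, 0.99, 0.999):
    print(z, P_qq(z))  # grows without bound as z -> 1
```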

Virtual Corrections
There are three diagrams involving virtual gluon loops. These contributions are also divergent, but negative, and will cancel the real divergence to give a finite answer. To show this we need to regularize both the real and virtual cross sections and add them together; the result should be finite when we remove the regularization. The standard way of doing this is to work in d = 4 − 2ǫ dimensions, where to regularize these infrared divergences ǫ < 0. In this case the sum is finite as ǫ → 0, where H(0) = 1. So finally, combining this correction with the leading-order result, measuring R(e + e − ) is one way of measuring the strong coupling, giving α S (m Z ) = 0.1226 ± 0.0038.
The second and third order corrections, and the results for the next-to-leading-order corrections including quark masses, are also known. This is the simplest example of an observable which we can calculate using perturbation theory involving quarks and gluons, but measure experimentally using hadrons. We now need to go on and consider more complicated observables. In addition to the infrared (soft and collinear) divergences we saw in the calculation of σ(e + e − → hadrons), it is possible to have ultraviolet divergences. The virtual corrections shown in Fig. 7 are divergent in the ultraviolet. These, and other similar corrections, lead to the strong coupling being renormalized to absorb the ultraviolet singularities. The renormalisation procedure introduces an unphysical renormalisation scale µ.

Running Coupling
This leads to: 1. the diagrams depend on µ; 2. α S is replaced by the running coupling α S (µ); 3. although we can't calculate the coupling itself, we can calculate how it changes with scale, where n f is the number of active quark flavours.
For β 0 > 0 the coupling displays asymptotic freedom, i.e. α S (µ) → 0 as µ → ∞, which allows us to perform perturbative calculations at high energies where the coupling is small. It is standard to quote the value of α S (M Z ). The value at other scales can be found by solving the evolution equation. Recent experimental measurements of the strong coupling evolved to the Z 0 mass, and the running of the coupling, are shown in Fig. 8.
It is common to define a scale Λ QCD so that α s (µ) = 4π/(β 0 ln(µ²/Λ² QCD )). In general there is a choice of precisely how we perform the renormalisation, which leads to both renormalisation scale and scheme dependence. Physical observables don't depend on µ or the renormalisation scheme, but fixed-order perturbative calculations do.
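A sketch of the one-loop running (illustrative numbers; higher-loop terms and flavour thresholds are ignored):

```python
import math

# One-loop running of the strong coupling:
#   mu^2 d(alphaS)/d(mu^2) = -(beta0 / 4 pi) alphaS^2,  beta0 = 11 - 2 nf / 3,
# solved relative to a reference value at the Z0 mass:
#   alphaS(mu) = alphaS(MZ) / (1 + alphaS(MZ) beta0/(4 pi) ln(mu^2/MZ^2)).
MZ, aS_MZ, nf = 91.19, 0.118, 5
beta0 = 11 - 2 * nf / 3

def alphaS(mu):
    return aS_MZ / (1 + aS_MZ * beta0 / (4 * math.pi) * math.log(mu**2 / MZ**2))

print(alphaS(10.0), alphaS(1000.0))  # larger at low scales: asymptotic freedom
```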

Higher order calculations
Since the strong coupling constant is not very small, the perturbative series converges more slowly than it does in QED. To get reliable QCD predictions we need at least NLO precision, and NNLO is preferable for important processes, but NNLO calculations are very challenging. Perturbative calculations for hadron colliders have two unphysical parameters: the factorisation and renormalisation scales. The former defines the separation between the perturbative and non-perturbative descriptions of the proton, and the latter is needed to remove the ultraviolet divergences and specifies at which scale the coupling constant should be evaluated. This dependence is an artefact of the truncation of the perturbative series: if we were able to compute the entire perturbative series to all orders, the dependence would drop out. The dependence on the factorisation and renormalisation scales is therefore used as a gauge of the theoretical error due to the missing higher orders.

Infrared safety
To enable a meaningful comparison between theory and experiment it is important that the observable is defined in a way that allows the perturbative prediction to be carried out at higher orders. One requirement is that the observable should be infrared safe. By this we mean that the value of the observable does not change under a collinear splitting or under the emission of a soft particle. Mathematically, the observable O has to fulfil the following properties.

Examples of infrared unsafe observables or procedures
• the number of partons;
• observables using incoming parton momentum fractions;
• observables based on older jet algorithms;
• using infrared unsafe observables as renormalisation or factorisation scales.
It is not always easy to find out whether an observable/procedure is infrared safe.
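The collinear-splitting test can be made concrete with a toy example (entirely illustrative, one-dimensional "momenta"):

```python
# Toy illustration of infrared safety: replace one parton of momentum p by
# two collinear partons z*p and (1-z)*p, and compare two "observables":
# the parton multiplicity (IR unsafe) and the total scalar momentum
# (unchanged by the splitting, hence IR safe in this toy sense).
event = [5.0, 3.0, 2.0]  # toy parton momenta (magnitudes only)

def split_collinear(ev, i, z):
    return ev[:i] + [z * ev[i], (1 - z) * ev[i]] + ev[i + 1:]

split = split_collinear(event, 0, 0.4)
print(len(event), len(split))   # 3 vs 4: multiplicity changes -> unsafe
print(sum(event), sum(split))   # 10.0 vs 10.0: unchanged -> safe
```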

Event Shapes
If we consider the e + e − annihilation events shown in Fig. 9, we see a collimated bunch of hadrons travelling in roughly the same direction as the original quarks or gluons. Often you can "see" the jets without any fancy mathematical definition. We will come back and consider jets in more detail when we consider hadron-hadron collisions later in the course, in Section 6. An alternative to defining jets is to define a more global measure of the event which is sensitive to its structure. We need a number of properties to achieve this, the most important of which is infrared safety: if there is soft or collinear emission the answer doesn't change. Formally, the result should not change if a parton splits into two collinear partons, or if a soft parton is emitted.

Figure 10: Phase space for e + e − → qq̄g. The requirement that x 3 ≤ 1 ensures that x 1 + x 2 ≥ 1 by momentum conservation, so that physical phase space is the upper half plane (the region x 3 > x 1,2 is one of its three symmetric sub-regions).

After the total cross section, the simplest infrared safe observable is the thrust

T = max n ( Σ i |p i · n| ) / ( Σ i |p i | ),

where the sum is over all the final-state particles and the direction of the unit vector n, the thrust axis, is chosen to maximize the projection of the momenta of the final-state particles along that direction. For a two-jet pencil-like event all the particles lie along the thrust axis, giving T = 1. For a totally spherical event the thrust can be calculated by taking a spherical distribution of particles in the limit of an infinite number of particles, giving T = 1/2. For three partons the thrust axis will lie along the direction of the most energetic parton; by momentum conservation there is an equal contribution to the thrust from the other two partons, giving T = max{x 1 , x 2 , x 3 }.
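The three-parton statement T = max{x 1 , x 2 , x 3 } can be verified by brute force (a sketch: the event is planar, so only in-plane thrust axes need to be scanned):

```python
import numpy as np

# Brute-force check that for a massless three-parton final state the thrust
# equals max(x1, x2, x3).  The event lies in a plane, so we scan candidate
# thrust axes in that plane and maximise sum_i |p_i . n| / sum_i |p_i|.
s = 1.0
x1, x2 = 0.9, 0.6
x3 = 2.0 - x1 - x2
c12 = (x3**2 - x1**2 - x2**2) / (2 * x1 * x2)     # q-qbar opening angle
p = np.array([[0.0, x1],                           # quark along the "z" axis
              [x2 * np.sqrt(1 - c12**2), x2 * c12]]) * np.sqrt(s) / 2
p = np.vstack([p, -p.sum(axis=0)])                 # gluon: momentum conservation

phis = np.linspace(0.0, np.pi, 20001)              # candidate axis directions
ns = np.stack([np.sin(phis), np.cos(phis)], axis=1)
T = np.abs(p @ ns.T).sum(axis=0) / np.linalg.norm(p, axis=1).sum()
print(T.max(), max(x1, x2, x3))                    # both equal 0.9
```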
In order to calculate the differential cross section with respect to the thrust for e + e − → qq̄g we can start from the differential cross section in Eqn. 12. In many cases when we wish to introduce a new quantity into a differential cross section it is easier to insert its definition using a δ-function rather than performing a Jacobian transform; in this case we insert δ(T − max{x 1 , x 2 , x 3 }), where σ 0 is the leading-order cross section for e + e − → qq̄. This expression can be evaluated in each of the three phase-space regions shown in Fig. 10. First, in the region x 1 > x 2,3 , we use the δ-function to integrate over x 1 , and the limits on x 2 are given by x 2 = x 1 = T for the upper limit and T = x 1 = x 3 = 2 − x 1 − x 2 = 2 − T − x 2 for the lower limit. Performing the integral gives the contribution from this region. The same result is obtained in the region x 2 > x 1,3 due to the symmetry of the formulae under x 1 ↔ x 2 .
In the final region we can take the integrals to be over x 2,3 and use the δ-function to eliminate the integral over x 3 ; after the integral over x 3 , x 1 = 2 − x 2 − T, and the limits are calculated in the same way as before.
Putting the results from the three regions together gives the full thrust distribution. This result clearly diverges as T → 1. We can use this result to define two- and three-jet rates. Writing τ ≡ 1 − T, similar logarithmically enhanced terms appear at all orders in the perturbative expansion, giving an extra ln²τ at every order in α S .

Figure 11: Thrust distribution at various centre-of-mass energies compared with Monte Carlo simulations, taken from Ref. [9].
Although α S is small, ln²τ is large, so the perturbative expansion breaks down. The solution is to resum the large α S ⁿ ln²ⁿτ terms to all orders, giving the Sudakov form factor. This is finite (zero) at τ = 0, i.e. the probability for no gluon radiation at all vanishes. In general the Sudakov form factor gives the probability of no radiation, P(no emission) = exp(−P naive (emission)).

An example of the experimental measurement of the thrust distribution is shown in Fig. 11, compared to various Monte Carlo simulations which include resummation of these large logarithmic contributions.
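The exponentiation statement can be illustrated with a toy Monte Carlo (assuming independent emissions, i.e. Poisson statistics; this illustrates the probabilistic interpretation only and is not a QCD calculation):

```python
import math
import random

# If emissions are independent with naive integrated "probability" P_naive,
# the emission count is Poisson-distributed with mean P_naive, so
# P(no emission) = exp(-P_naive) -- sensible (<= 1) even when P_naive > 1.
random.seed(1)
P_naive = 2.5

def poisson_sample(lam):
    # Knuth's algorithm: count uniforms until their product drops below e^-lam
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p < L:
            return k
        k += 1

trials = 100_000
frac = sum(poisson_sample(P_naive) == 0 for _ in range(trials)) / trials
print(frac, math.exp(-P_naive))  # the MC no-emission fraction matches exp(-P)
```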

Deep Inelastic Scattering
Historically measurements of deep inelastic scattering were very important for establishing the nature of QCD. Nowadays they are mainly important for the measurement of the parton distribution functions we need to calculate all cross sections for processes with incoming hadrons. As the proton isn't fundamental at sufficiently high energies the scattering is from the constituent quarks and gluons. Figure 12: Deep inelastic scattering kinematics.
In deep inelastic scattering processes it is conventional to use the kinematic variables shown in Fig. 12. The struck parton carries a fraction x of the four-momentum of the incoming hadron. The four-momentum of the exchanged boson is q, and the virtuality of the boson is Q² = −q². Using momentum conservation, p ′ = xp + q, where p ′ is the 4-momentum of the scattered quark. Therefore (xp + q)² = 0, giving x = Q²/(2p·q). Similarly the mass of the hadronic system is W² = (p + q)². By definition (k + p)² = 2k·p = s, and therefore y = (p·q)/(p·k) = Q²/(xs). Deep inelastic scattering has Q² ≫ M² (deep) and W² ≫ M² (inelastic), where M is the proton mass. Historically, the observation and understanding of DIS was one of the key pieces of evidence for quarks. On general grounds the cross section can be parameterized in terms of two unknown structure functions, F 1,2 (x, Q²). If we consider the proton as a bound state of partons we can calculate these structure functions: if the probability of a given type of quark carrying a fraction η of the proton's momentum is f q (η), the cross section for hadron scattering can be written in terms of those for partonic scattering.

Figure 13: The reduced cross section, which is equivalent to F 2 up to some small corrections, measured by the H1 and ZEUS experiments, from Ref. [10].
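A short numerical sketch of these kinematic definitions (massless four-vectors, illustrative beam energies and scattering angle), checking the relation Q² = xys:

```python
import math

# DIS kinematic variables from four-momenta (metric +,-,-,-):
#   Q^2 = -q^2 with q = k - k',  x = Q^2/(2 p.q),  y = (p.q)/(p.k),
# which satisfy Q^2 = x y s.  All inputs below are illustrative; masses
# are neglected throughout.
def dot(a, b):
    return a[0]*b[0] - a[1]*b[1] - a[2]*b[2] - a[3]*b[3]

E = 50.0
k = (E, 0.0, 0.0, E)                      # incoming lepton
p = (E, 0.0, 0.0, -E)                     # incoming proton (massless limit)
Ep, theta = 40.0, 0.5                     # scattered lepton energy and angle
kp = (Ep, Ep * math.sin(theta), 0.0, Ep * math.cos(theta))

q = tuple(a - b for a, b in zip(k, kp))   # exchanged boson momentum
Q2 = -dot(q, q)
x = Q2 / (2 * dot(p, q))
y = dot(p, q) / dot(p, k)
s = 2 * dot(k, p)                         # massless: s = (k + p)^2 = 2 k.p
print(Q2, x, y, Q2 / (x * y * s))         # the last ratio equals 1
```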
Taking the outgoing parton to be on-shell fixes the momentum fraction, and the cross section d²σ(e + proton) follows. The differential cross section for e ± (k) + q(p) → e ± (k ′ ) + q(p ′ ) via photon exchange, which dominates at low Q² for neutral current scattering, involves e q , the charge of the quark. So in the naive parton model the structure functions are functions of x only: this is Bjorken scaling. Bjorken scaling works reasonably well, see Fig. 13, but quantum corrections lead to scaling violations. If we consider the O(α S ) corrections we have the following divergent contributions: 1. soft gluon, E g → 0; 2. gluon collinear to the final-state quark; 3. gluon collinear to the initial-state quark; 4. the virtual matrix element, which has a negative divergence; corresponding to the diagrams shown in Fig. 14.
The contributions from (1), (2) and (4) are indistinguishable from the tree-level configuration and the divergences cancel between the real and virtual corrections. However (3) has momentum fraction η > x and (4) η = x so the initial-state divergences don't cancel.
Just as with final-state radiation, in the collinear limit the cross section can be shown to factorize. Here we have the unregularized DGLAP splitting function P̂ q→qg , which is singular as z → 1.
The virtual contribution contains a compensating singularity at exactly z = 1. The regularized splitting function is defined to be the sum of real and virtual contributions 3 . The total contribution involves R qq (x/η), a calculable finite correction. The integral over t is infrared divergent; it comes from long timescales and should be part of the hadronic wavefunction. We therefore introduce a factorization scale µ F and absorb contributions with t < µ F into the parton distribution function, so that f q (η) becomes f q (η, µ² F ).
The finite piece depends on exactly how we define the parton distribution function; this is the factorization scheme dependence. Physical cross sections are independent of µ F ; however, at any finite order in perturbation theory they do depend on the factorization scale.
Recall that in perturbation theory we cannot predict α S (M Z ) but we can predict its evolution, Eqn. 27. Similarly for the PDFs: we cannot calculate them, but we can predict their evolution with the factorization scale.

Hadron Collisions
In hadron collisions QCD processes dominate due to the strength of the strong coupling. The cross sections for electroweak processes, W ± , Z 0 and Higgs production, are much smaller. The values of x and Q² probed in hadron collisions, and examples of the cross sections for various processes, are shown in Fig. 15. In this section we will look at some of the basics of the production of the Z 0 boson, as a simple example of a hadron-hadron process; in the next section we will go on to study the physics of jets.

3 The +-prescription is defined by convolution with a well-defined function, g(z).
The calculation of the cross section for the production of an s-channel resonance in hadron-hadron collisions is described in more detail in Appendix A.3.1 where the cross section is given in Eqn. 126. The only dependence of the cross section on the rapidity of the Z 0 boson is via the PDFs, i.e. the rapidity distribution of Z 0 contains information on the PDFs of the partons a and b. The higher the mass of the produced system the more central it is, see Fig. 15. The Z 0 boson is centrally produced in both pp and pp collisions. The experimental results, for example those from the Tevatron shown in Fig. 16, are in good agreement with the theoretical predictions.
At leading order the transverse momentum of the gauge boson is zero. As before, we have to include real and virtual corrections, as shown in Fig. 17. In the same way as in DIS, the initial-state singularities must be factorized into the PDFs. At low transverse momentum we need to resum the multiple soft emissions whereas, as with the e + e − event shapes, at large p ⊥ the fixed-order approach is more reliable. The transverse momentum of the Z 0 boson at the Tevatron is shown in Fig. 18.
In hadron-hadron collisions we would like at least next-to-leading order (NLO) calculations; this is the first order at which we have a reliable calculation of the cross section. If possible we would like next-to-next-to-leading order (NNLO) calculations, but that is much harder and takes a long time, e.g. e + e − → 3 jets was calculated at LO in 1974 [15], NLO in 1980 [16] and NNLO in 2007 [17]. Calculating NNLO corrections is still extremely challenging in hadron collisions; only the Drell-Yan process and gg → H are known. However, we need higher order calculations because, while the factorization scale uncertainty is significantly smaller at NLO than at leading order, it can still be significant, see for example the scale uncertainty on the rapidity of the Z 0 boson shown in Fig. 19.

Figure 16: Rapidity of the Z 0 boson measured by the CDF experiment, taken from Ref. [12].

Jets
While we can often see the jets in an event when we look at an event display, we need a precise definition to perform quantitative analyses. 4 Jets are normally related to the underlying perturbative dynamics, i.e. quarks and gluons. The purpose of a jet algorithm is to reduce the complexity of the final state, combining a large number of final-state particles into a few jets.
We need a number of properties to achieve this (Snowmass accord):
• simple to implement in experimental analyses and theoretical calculations;
• defined at any order in perturbation theory and giving finite cross sections at any order in perturbation theory (i.e. infrared safe);
• insensitive to hadronization effects.
The most important of these properties is infrared safety, as with the event shapes we considered earlier. Provided the jet algorithm is infrared safe there are a range of different approaches.
The two main types of jet algorithm are: 1. cone algorithms; 2. sequential recombination algorithms.
There is a long history to this subject with: theorists and e + e − experimentalists generally preferring recombination algorithms for their better theoretical properties; hadron collider experimentalists preferring cone algorithms for their more intuitive picture and because applying many experimental corrections was easier. However, with the start of the LHC we have converged on a specific recombination algorithm.

Cone Algorithms
The simplest, oldest, and most intuitively appealing idea is a cone algorithm. The most widely used algorithms are iterative cone algorithms, where the initial direction of the cone is determined by a seed particle i. The sum of the momenta of all the particles within a cone of radius R, the jet radius, in azimuthal angle φ and rapidity 5 y is then used as a new seed direction and the procedure iterated until the direction of the resulting cone is stable, i.e. the momenta of all the particles j with (y_j − y_C)² + (φ_j − φ_C)² ≤ R² are summed, where (y_C, φ_C) is the cone axis. As these algorithms are almost exclusively used in hadron-hadron collisions it is normal to use the kinematic variables defined in Appendix A.1. While this may seem simple, there are a lot of complications in the details of the algorithm, in particular: what should be used as the seeds; and what happens when the cones obtained from two different seeds share particles, i.e. overlap. The details of the treatment of these issues can lead to problems with infrared safety, which can often be very subtle.
Consider a simple approach where we take all the particles to be seeds. If we have two partons separated in (y, φ) by more than the cone radius but by less than twice it, then two jets, with the directions of the two partons, are formed. If, however, a soft gluon is emitted between the two partons it can also act as a seed, and with this additional seed the approach can give only one jet, i.e. the algorithm is infrared unsafe. A simple solution was to use the midpoint between all the seeds as an additional seed, the midpoint algorithm. This solves the problem at this level, but similar problems appear at higher multiplicities. The final solution, for the only known infrared-safe cone algorithm, SISCone, is to avoid the use of seeds and treat overlapping jets carefully.
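The infrared-safety problem described above can be illustrated with a toy seeded cone reduced to one dimension (rapidity only, fixed azimuth). The implementation below is my own minimal sketch, not a real cone algorithm; it only shows how an arbitrarily soft extra seed can change the set of stable cones found for two hard partons separated by between R and 2R.

```python
# Toy 1-D seeded iterative cone: each particle is (pT, y).
# Adding an arbitrarily soft seed between two hard partons separated by
# R < dy < 2R changes the set of stable cones, i.e. the algorithm is IR unsafe.

def iterate_cone(seed_y, particles, R):
    """Iterate the cone axis (pT-weighted centroid) until it is stable."""
    y = seed_y
    while True:
        inside = [(pt, py) for pt, py in particles if abs(py - y) <= R]
        new_y = sum(pt * py for pt, py in inside) / sum(pt for pt, py in inside)
        if abs(new_y - y) < 1e-9:
            return new_y
        y = new_y

def cone_jets(particles, R):
    """Use every particle as a seed; the distinct stable cones are the jets."""
    axes = set()
    for _, seed_y in particles:
        axes.add(round(iterate_cone(seed_y, particles, R), 6))
    return sorted(axes)

hard = [(100.0, 0.0), (100.0, 1.5)]     # two hard partons, dy = 1.5 R
print(cone_jets(hard, R=1.0))            # two stable cones -> two jets
soft = hard + [(1e-3, 0.75)]             # add an infinitely soft seed between them
print(cone_jets(soft, R=1.0))            # an extra (midpoint) stable cone appears,
                                         # containing both hard partons
```

This is exactly the failure the midpoint algorithm patches by hand, and which SISCone removes by finding all stable cones without seeds.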

Sequential Recombination Algorithms
In this approach jets are constructed by sequential recombination. We define a distance measure between two objects, d_ij; in hadron collisions we must also define a distance measure d_iB with respect to the beam direction. There are two variants of the algorithm: the inclusive, where all jets are retained, and the exclusive, where only jets above the cut-off value of the jet measure, d_cut, the jet resolution scale, are kept. The algorithm proceeds as follows:
1. the distance measure is computed for each pair of particles, and with the beam direction in hadronic collisions, and the minimum found;
2. if the minimum value is for a final-state merging, the particles i and j are recombined into a pseudoparticle, provided d_ij ≤ d_cut in the exclusive approach, while in the inclusive algorithm they are always recombined;
3. otherwise, if a beam merging is selected, in the inclusive approach the particle is declared to be a jet, while in the exclusive approach it is discarded if d_iB ≤ d_cut;
4. in the inclusive approach we continue until no particles remain, while in the exclusive approach we stop when the selected merging has min{d_iB, d_ij} ≥ d_cut.
In the inclusive approach the jets are all those selected from merging with the beam, whereas in the exclusive approach the jets are all the remaining particles when the iteration is terminated. The choice of the distance measure, and to a lesser extent the recombination procedure, 6 defines the algorithm.
The earliest JADE algorithm for e+e− collisions uses the distance measure d_ij = 2 E_i E_j (1 − cos θ_ij), where E_{i,j} are the energies of the particles and θ_ij the angle between them. In e+e− collisions we have to use the exclusive algorithm and it is conventional to use a dimensionless measure y_ij = d_ij/Q², where Q is the total energy in the event. While this choice can easily be proved to be safe in the soft and collinear limits, there are problems with the calculation of higher-order corrections. Therefore a class of kT algorithms was developed, in which the distance measure is chosen to be the relative transverse momentum of the two particles in the collinear limit.
In e+e− collisions the conventional choice is d_ij = 2 min{E_i², E_j²}(1 − cos θ_ij). In hadron collisions it is best to use a choice which is invariant under longitudinal boosts along the beam direction. The standard choice is
d_ij = min{p_{i,⊥}², p_{j,⊥}²} ΔR_ij²/R², with ΔR_ij² = (y_i − y_j)² + (φ_i − φ_j)², (60)
where R is the "cone size" and p_{i,⊥} is the transverse momentum of particle i with respect to the beam direction. The standard choice for the beam distance is d_iB = p_{i,⊥}². There are other definitions, particularly of the distance d_ij, which are invariant under longitudinal boosts, but that in Eqn. 60 is the most common choice.
In general there is a whole class of measures defined by d_ij = min{p_{i,⊥}^{2p}, p_{j,⊥}^{2p}} ΔR_ij²/R² and d_iB = p_{i,⊥}^{2p}. The parameter p = 1 for the kT algorithm and p = 0 for the Cambridge/Aachen algorithm. Recently a new approach, the anti-kT algorithm, with p = −1, was proposed, which favours clusterings of soft particles with hard particles rather than clusterings among soft particles, as in the kT and Cambridge/Aachen algorithms. The anti-kT algorithm is still infrared safe, gives "conical" jets due to the angular part of the distance measure, and is the algorithm preferred by both general-purpose LHC experiments.
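The inclusive clustering procedure with the generalized-kT distances can be sketched as follows. This is my own minimal illustration, not a production implementation (real codes use the E-scheme for recombination; here the merged axis is a naive pT-weighted average, which ignores azimuthal wrap-around and is adequate only for this toy example).

```python
import math

# Inclusive generalized-kT clustering sketch. Particles are (pT, y, phi);
# p = 1 gives kT, p = 0 Cambridge/Aachen, p = -1 anti-kT.

def delta_R2(a, b):
    dy = a[1] - b[1]
    dphi = abs(a[2] - b[2])
    if dphi > math.pi:
        dphi = 2 * math.pi - dphi
    return dy * dy + dphi * dphi

def cluster(particles, R=0.4, p=-1):
    objs = list(particles)
    jets = []
    while objs:
        # d_ij = min(pT_i^2p, pT_j^2p) dR_ij^2 / R^2  and  d_iB = pT_i^2p
        best, dmin = None, None
        for i in range(len(objs)):
            diB = objs[i][0] ** (2 * p)
            if dmin is None or diB < dmin:
                best, dmin = (i,), diB
            for j in range(i + 1, len(objs)):
                dij = (min(objs[i][0] ** (2 * p), objs[j][0] ** (2 * p))
                       * delta_R2(objs[i], objs[j]) / R ** 2)
                if dij < dmin:
                    best, dmin = (i, j), dij
        if len(best) == 1:          # beam distance smallest: declare a jet
            jets.append(objs.pop(best[0]))
        else:                        # recombine i and j (naive pT-weighted axis)
            i, j = best
            a, b = objs[j], objs[i]
            objs.pop(j); objs.pop(i)
            pt = a[0] + b[0]
            objs.append((pt, (a[0]*a[1] + b[0]*b[1]) / pt,
                             (a[0]*a[2] + b[0]*b[2]) / pt))
    return sorted(jets, reverse=True)

# Two nearby particles merge into one hard jet; the distant one is its own jet.
print(cluster([(50.0, 0.0, 0.0), (30.0, 0.1, 0.1), (20.0, 2.0, 3.0)], R=0.4, p=-1))
```

With p = −1 the smallest distances always involve the hardest particle, which is why anti-kT grows jets outwards from hard cores and produces the "conical" shapes mentioned above.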

Jet Cross Sections
All cone jet algorithms, except SISCone, are infrared unsafe. The best ones typically fail in processes where we consider extra radiation from three-parton configurations, while some already fail when we consider radiation from two-parton configurations; see the summary in Table 1.

Table 1: Comparison of various cone algorithms for hadron-hadron processes. The middle columns give the last meaningful order for each algorithm. Adapted from Ref. [6].

Process                                      | JetClu, MidPoint | Atlas cone, CMS cone | Known at
inclusive jet cross section                  | LO               | NLO                  | NLO (→ NNLO)
W±/Z0 + 1-jet cross section                  | LO               | NLO                  | NLO
3-jet cross section                          | none             | LO                   | NLO
W±/Z0 + 2-jet cross section                  | none             | LO                   | NLO
jet masses in 3-jet and W±/Z0 + 2-jet events | none             | none                 | LO
Examples of the jets, and their areas, formed by running different algorithms on a sample parton-level event are shown in Fig. 22. As can be seen, the kT and Cambridge/Aachen algorithms tend to cluster many soft particles, giving jets with an irregular area, whereas the jets produced by the cone and anti-kT algorithms are more regular, making it easier to apply corrections for pile-up and underlying-event contamination.
In order to study jet production in hadron collisions we need to understand both the jet algorithm and the production of the partons which give rise to the jets. The spin/colour summed/averaged matrix elements are given in Table 2. Many of these matrix elements have t-channel dominance, typically t → 0 ⇐⇒ p⊥² → 0. As a consequence the parton-parton scattering cross section grows quickly as p⊥ → 0, an effect which is further enhanced by the running of αs when using µR = p⊥ as the renormalisation scale. An example of the p⊥ spectrum of jets for different rapidities, measured using the midpoint cone algorithm, is shown in Fig. 23.

Table 2: Spin and colour summed/averaged matrix elements for 2 → 2 parton scattering processes with massless partons, taken from Ref. [3]. A common factor of g⁴ = (4παs)² (QCD), or g²e²e_q² (photon production), has been removed.

Jet Properties
In general the computation of jet properties in hadron-hadron collisions is extremely complicated; however, for some quantities we can estimate the size of various effects. The simplest of these is the change in p⊥ between a parton and the jet it forms.
We can start by considering the change due to perturbative QCD radiation. Suppose we have a quark with transverse momentum p⊥ which radiates a gluon such that the quark carries a fraction z of its original momentum and the gluon a fraction 1 − z, as shown in Fig. 24. After the radiation the centre of the jet will be the parton with the highest transverse momentum after the branching, i.e. the quark if z > 1 − z or the gluon if z < 1 − z. If the other parton is at an angular distance θ > R it will no longer be in the jet, and the jet will have a smaller transverse momentum than the original parton. We can use the splitting probabilities given in Eqn. 18 to compute the average transverse momentum loss; the loss can be calculated for gluon jets in the same way using the gluon splitting functions. These calculations show that, for a jet with R = 0.4, quark and gluon jets will have 5% and 11% less transverse momentum than the parent parton, respectively. These results are subject to significant finite-R and higher-order corrections. The result will also depend on the precise details of the recombination scheme; for example SISCone has a different recombination scheme, in which the centre of the cone is the direction of the sum of the partons and we require one parton to fall outside the cone.

Figure 24: Kinematics of jet branching.

While this gives the perturbative energy loss by the jet, there are other effects which can change the transverse momentum of the jet. In particular the jet can also lose energy in the hadronization process and can gain energy from the underlying event.
While these effects cannot be calculated from first principles we can use some simple models to gauge the size of the effects.
One model for the effect of hadronization on event shapes in e+e− collisions, due to Dokshitzer and Webber, is to perform a perturbative calculation and, instead of stopping the calculation at some small energy scale µ_I because the strong coupling becomes non-perturbative, continue the calculation into the infrared regime with a model of the strong coupling in this regime which does not diverge.

This model can also be used to assess the size of the hadronization corrections to the jet transverse momentum. The hadronization is modelled by soft gluons with k⊥ ∼ Λ_QCD. As we are dealing with soft gluons, z ∼ 1 so 1 + z² ≃ 2. In this case we will not use a fixed value of αS but evaluate it at the scale of the transverse momentum of the gluon with respect to the quark, k⊥ = p⊥(1 − z)θ, and transform the integration variables to k⊥ and θ, taking the coefficients from fits to the e+e− thrust distribution. The resulting hadronization correction has a 1/R dependence on the size of the jet, unlike the ln(1/R) dependence of the perturbative radiation.

We can estimate the underlying-event contribution by assuming there is Λ_UE energy per unit rapidity due to soft particles from the underlying event, giving a correction to the jet transverse momentum proportional to the jet area. This is a useful estimate, although strictly the area of the jet is only πR² for the anti-kT algorithm. An example of the various contributions to the shift between the partonic and jet transverse momentum is shown in Fig. 25.
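The competing R-dependences just described can be made concrete with a toy parametrization. The normalizations c_pert, c_had and c_ue below are placeholders of my own choosing, picked purely for illustration; only the parametric scalings (−ln(1/R) for perturbative loss, −1/R for hadronization, +R² for the underlying event) come from the discussion above.

```python
import math

# Illustrative R-dependence of the three contributions to the jet-parton
# pT shift. The coefficients are arbitrary placeholders; only the scaling
# with R reflects the discussion in the text.

def pt_shift(R, c_pert=1.0, c_had=0.5, c_ue=0.25):
    d_pert = -c_pert * math.log(1.0 / R)   # perturbative radiation outside the cone
    d_had  = -c_had / R                    # hadronization loss, grows at small R
    d_ue   =  c_ue * R * R                 # underlying-event gain, grows with jet area
    return d_pert + d_had + d_ue

# Small R: hadronization loss dominates; large R: the underlying event takes over.
for R in (0.2, 0.4, 0.7, 1.0):
    print(R, round(pt_shift(R), 3))
```

The minimum of the total shift as a function of R is the qualitative origin of the preference for intermediate jet radii.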
All observables are functions of these 18 parameters. In principle we could choose 18 well-measured observables, define them to be the fundamental parameters of the theory, and calculate everything else in terms of them.
For the electroweak part of the theory we need m_t, m_h and three other parameters to specify everything, neglecting the masses of the other Standard Model fermions. Everything else can then be calculated from these parameters.
It is common to include the Fermi constant, G_F, from the effective theory of weak interactions at low energies as a parameter.
Different choices for the input parameters give different values for the calculated parameters.
For example, with inputs α(m_Z), G_F and sin²θ_W the extracted values of the remaining parameters differ from those obtained with other input choices. This is due to the quantum corrections. It was the great triumph of the LEP/SLD and Tevatron physics programmes that the quantum corrections to the theory were probed. The normal choice of input parameters is:
1. α = 1/137.035999679(94): the fine-structure constant at q² = 0 is accurately measured, however the error on its evolution to q² = m_Z² is larger due to hadronic corrections;
2. G_F = 1.166367(5) × 10⁻⁵ GeV⁻²: very accurately measured in muon decay, µ− → e−ν̄_eν_µ;
3. m_Z = 91.1876 ± 0.0021 GeV, from the LEP1 lineshape scan;
as these are the most accurately measured. We have already considered the running of the coupling and corrections to cross sections and other observables. However, masses are also renormalized in the Standard Model. If we consider the propagator for a massive gauge boson we get corrections of the form shown in Fig. 26. If we omit the Lorentz structures this gives a propagator

Quantum Corrections to Masses
where Π(q²) is the gauge boson self-energy. This is a geometric progression; summing the series gives a propagator proportional to 1/(q² − m² − Π(q²)).
If the particle can decay to the particles in the loop, the self-energy Π(q²) has an imaginary part, which is related to the width of the particle, Im Π(q²) = −qΓ(q).
The real part of the self-energy correction renormalizes the particle's mass. As we have defined the mass of the Z0 boson to be a fundamental parameter, δm_Z² = 0 by definition.
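The resummation of the self-energy insertions can be written out explicitly. The signs below follow one common convention (conventions differ between texts, so take them as illustrative):

```latex
\frac{i}{q^2 - m_0^2}\sum_{n=0}^{\infty}\left[\frac{\Pi(q^2)}{q^2 - m_0^2}\right]^n
  \;=\; \frac{i}{q^2 - m_0^2 - \Pi(q^2)}
  \;\xrightarrow{\ \text{near resonance}\ }\;
  \frac{i}{q^2 - m^2 + i\,q\,\Gamma(q)},
```

where the renormalized mass satisfies m² = m_0² + Re Π(m²), and the imaginary part of the self-energy produces the Breit-Wigner shape of the resonance.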
The dominant corrections to the W± boson mass come from top-bottom and Higgs loop corrections, as shown in Fig. 27.

Electroweak Observables
A number of observables are used in the electroweak fit performed by the LEP Electroweak Working Group (LEPEWWG):
1. the Z0 mass and width, m_Z, Γ_Z;
2. the hadronic cross section at the Z0 pole, σ(had) ≡ 12πΓ(e+e−)Γ(had)/(m_Z²Γ_Z²);
3. the ratio of the hadronic to leptonic partial widths of the Z0, R_ℓ ≡ Γ(had)/Γ(ℓ+ℓ−), and the ratios of the bottom, R_b ≡ Γ(bb̄)/Γ(had), and charm, R_c ≡ Γ(cc̄)/Γ(had), quark partial widths to the hadronic partial width of the Z0;
4. the forward-backward asymmetry for e+e− → f f̄ for charged leptons, A^{0,ℓ}_{fb}, bottom, A^{0,b}_{fb}, and charm, A^{0,c}_{fb}, quarks;
5. the couplings of the fermions to the Z0, which can be extracted from the forward-backward asymmetry in polarized scattering at SLD; the couplings for the bottom, A_b, and charm, A_c, quarks can be extracted from these measurements, and there are a number of possible ways of extracting A_ℓ;
6. sin²θ^{lept}_{eff}(Q_{fb}), extracted from the hadronic charge asymmetry;
7. the W mass, m_W, and width, Γ_W, measured in a range of ways;
8. the top quark mass, m_t, measured at the Tevatron.
The results of the precision electroweak fit are in good agreement with the experimental results, as shown in Fig. 28, and show, for example, that there are three light neutrino species which couple to the Z boson.
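The pole observables above can be checked numerically. The widths used below are approximate PDG values which I supply for illustration (they are not quoted in these notes); the point is that the simple tree-level pole formula reproduces the measured hadronic pole cross section of about 41.5 nb.

```python
import math

# Numerical check of the Z-pole observables, with approximate PDG inputs.
m_Z       = 91.1876   # GeV
Gamma_Z   = 2.4952    # GeV, total width
Gamma_ee  = 0.08391   # GeV, Gamma(e+e-)
Gamma_had = 1.7444    # GeV, hadronic width

# sigma0(had) = 12 pi Gamma(e+e-) Gamma(had) / (m_Z^2 Gamma_Z^2)
sigma_gev2 = 12 * math.pi * Gamma_ee * Gamma_had / (m_Z**2 * Gamma_Z**2)
GEV2_TO_NB = 0.3894e6            # (hbar c)^2 in nb GeV^2
sigma_nb = sigma_gev2 * GEV2_TO_NB
print(round(sigma_nb, 1))        # ~41.5 nb

# R_l = Gamma(had) / Gamma(l+l-)
print(round(Gamma_had / Gamma_ee, 2))   # ~20.8
```

Both numbers land close to the fitted LEP values, which is what makes these observables such sharp tests once the per-mille quantum corrections are included.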

W mass measurements
One of the most important quantities in the electroweak sector is the mass of the W± boson. The first measurements of the W mass were in hadronic collisions. The QCD backgrounds and resolution mean that the hadronic W± decay mode cannot be used, and the mass cannot be directly reconstructed in the leptonic mode due to the unobserved neutrino. Instead the transverse mass,
M⊥^{ℓν2} = 2 p⊥^ℓ E̸⊥ (1 − cos φ_{ℓ,miss}),
where p⊥^ℓ is the transverse momentum of the observed lepton, E̸⊥ is the missing transverse energy and φ_{ℓ,miss} is the azimuthal angle between the lepton and the direction of the missing transverse energy, is used. The transverse mass is bounded by the W mass, M⊥^{ℓν2} ≤ m_W², and the position of the endpoint can be used to extract the W± mass. This approach was used by the UA1 and UA2 experiments for the original W mass measurements and for the recent results at the Tevatron, for example Fig. 29. The endpoint is smeared by the non-zero p⊥ and width of the W boson.
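The endpoint behaviour of the transverse mass can be checked with a few lines of arithmetic. The numbers below are illustrative inputs of my own, not experimental values:

```python
import math

# Transverse mass for W -> l nu: M_T^2 = 2 pT(l) MET (1 - cos dphi).

def m_T(pt_lep, met, dphi):
    return math.sqrt(2.0 * pt_lep * met * (1.0 - math.cos(dphi)))

# Back-to-back lepton and neutrino, each with pT = m_W/2 (taking m_W ~ 80.4 GeV):
# M_T reaches its endpoint at m_W.
print(m_T(40.2, 40.2, math.pi))   # -> 80.4
# Any smaller azimuthal separation gives M_T below the endpoint.
print(m_T(40.2, 40.2, 2.0))
```

Since M_T never exceeds m_W at leading order, the sharp upper edge of the M_T spectrum carries the mass information, smeared in practice by the W width and transverse motion as noted above.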
A major result of the LEP2 programme was the study of the production of pairs of electroweak gauge bosons, W+W− and Z0Z0. The mass of the W can be extracted in two ways: 1. measuring the cross section near threshold, which is theoretically clean but limited by statistics, see Fig. 30; 2. reconstructing the mass from the W decay products above threshold.

ρ parameter
In principle we should compare the full predictions of the Standard Model, or of any model of new physics, with all the electroweak observables. However it is often useful, particularly in new physics models where corrections from new particles can be large, to consider the ρ parameter. Naively
ρ = m_W² / (m_Z² cos²θ_W) = 1
connects the Z0 and W± masses with the weak mixing angle. The dominant loop corrections to it come from self-energies and relate m_W, m_t and m_H. For a long time m_t was the most significant uncertainty in this relation; by now m_W has more than caught up.
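A quick numerical illustration of the size of the loop effects: evaluating the naive tree-level relation with the measured masses and the effective leptonic mixing angle (the inputs below are approximate values I supply for illustration) gives a ρ that deviates from 1 at the per-cent level.

```python
# Naive rho parameter, rho = m_W^2 / (m_Z^2 cos^2 theta_W), evaluated with
# measured masses and sin^2 theta_eff ~ 0.2315 (approximate inputs, mine).
m_W, m_Z, sin2_eff = 80.40, 91.1876, 0.2315

rho = m_W**2 / (m_Z**2 * (1.0 - sin2_eff))
print(round(rho, 4))   # ~1.01: the per-cent deviation from 1 is the loop effect
```

That per-cent shift, dominated by the large top-bottom mass splitting, is exactly the kind of quantum correction the precision fit is sensitive to.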

Higgs Boson
So far we have concentrated on the Standard Model particles which have already been observed; however, there is one remaining SM particle which has not yet been discovered, the Higgs boson.
Adding explicit mass terms for the gauge bosons and fermions violates gauge invariance. Under the gauge transformation A_µ → A_µ + (1/g)∂_µθ, the mass term A_µA^µ gives terms proportional to the gauge transformation parameter θ, i.e. the gauge boson mass term is not gauge invariant. As the fields Ψ_L and Ψ_R transform differently under SU(2)_L, the fermion mass term is likewise not gauge invariant.
Adding these mass terms by hand is obviously a bad idea. Instead we add a complex scalar doublet under the SU(2) L gauge group which introduces an additional four degrees of freedom. This scalar field can be coupled gauge invariantly to the gauge bosons, i.e.
A gauge-invariant interaction term with the fermions can also be included. 7 In addition we need the Higgs potential. For µ² < 0 this potential has an infinite number of equivalent minima, as shown in Fig. 31. We expand around one of these minima, giving one radial and three circular modes. The circular modes are "gauged away", i.e. "eaten" by the gauge bosons to give them mass via the vacuum expectation value (vev), the minimum of the potential.

Figure 31: The Higgs boson potential.
This gives a fixed relation between the masses of the particles and their couplings to the (surviving) scalar Higgs boson.
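The minimisation can be sketched explicitly. Using the µ² < 0 convention of the text, with λ > 0 (a standard but here assumed normalisation):

```latex
V(\phi) = \mu^2\,\phi^\dagger\phi + \lambda\,(\phi^\dagger\phi)^2,
\qquad
\frac{\partial V}{\partial(\phi^\dagger\phi)} = 0
\;\Rightarrow\;
\langle\phi^\dagger\phi\rangle = -\frac{\mu^2}{2\lambda} \equiv \frac{v^2}{2},
```

so the vev is v = √(−µ²/λ), and expanding around the minimum the radial mode acquires a mass m_h² = 2λv² = −2µ², while the gauge boson and fermion masses are proportional to v times the corresponding coupling, which is the fixed mass-coupling relation referred to above.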

Unitarity
While in the Standard Model introducing the Higgs boson is the only way to give mass to the particles in a gauge-invariant manner, there are other arguments for the existence of the Higgs boson, and it is interesting to ask what would happen if it did not exist. If we consider W+W− → W+W− scattering, via the Feynman diagrams shown in Fig. 32, in the high-energy limit the matrix element grows with the centre-of-mass energy. So without the Higgs boson the cross section grows for s ≫ m_W², which eventually violates unitarity. We need something to cancel this bad high-energy behaviour of the cross section, and we can arbitrarily invent a particle to cure it. This particle must be a scalar; suppose it has coupling λ to W+W−. This gives a contribution via the Feynman diagrams in Fig. 33.

Figure 33: Higgs boson contributions to WW scattering.

Higgs Searches
As with all searches, in Higgs searches we want:
• channels with a high signal rate;
• a low background rate.
Unfortunately the channels with the highest signal rate often have the largest backgrounds. We need to be able to trigger on a given signal. Good mass resolution for the Higgs boson and its decay products can help to suppress backgrounds. We should also try to measure things that are well understood theoretically.
In order to consider the signals we need to understand how the Higgs boson is produced and then decays in hadron-hadron collisions.
The analytic results for the partial widths for various Higgs boson decay modes are given in Table 3. The important search channels depend on the collider energy. At the Tevatron typical channels include:
• gg → H → W+W− → ℓℓ′ + E̸⊥: the "golden-plated" channel because, although there is no mass peak, the background can be reduced by using quantities, such as the angle between the leptons, which differ between signal and background due to the different W boson production mechanisms;
• qq̄ → ZH → ℓℓbb̄: the key ingredients are the b-tagging efficiency and the mass resolution for jets, in order to suppress the QCD backgrounds;
• qq̄′ → WH → ℓνbb̄: has similar features to qq̄ → ZH → ℓℓbb̄;
• qq̄′ → ZH → E̸⊥ + bb̄: the key features are again the b-tagging efficiency and the jet mass resolution, in order to suppress the QCD backgrounds;
• qq̄′ → W±H → W±W+W−: here there is the possibility of same-sign lepton production, which has a low background, together with the decay of the remaining W to hadrons in order to increase the cross section.
Typical channels at the LHC include:
• gg → H → ZZ → 4µ, 2e2µ: the "golden-plated" channel for m_H > 140 GeV; the key ingredient is the excellent resolution of the Z mass peak from the leptonic decay;
• gg → H → W+W− → ℓℓ′ + E̸⊥: similar to the Tevatron analysis but with better statistics due to the larger production cross section;
• gg → H → γγ: good for low-mass, m_H ≲ 120 GeV, Higgs bosons; although the branching ratio is small, the key ingredients are the mass resolution for photon pairs and a veto on photons from π0 decays;
• VBF → H → ττ: a popular mode where the key ingredient is that QCD backgrounds are reduced by requiring a rapidity gap between the two tagging jets;
• VBF → H → WW: as for VBF → H → ττ;
• VBF → H → bb̄: in principle similar to the other VBF modes, but it is hard to trigger on pure QCD-like objects (jets).

Extended Higgs Sectors
Adding a single Higgs doublet is the simplest choice for the Higgs sector. As we have yet to observe the Higgs boson, it is possible that the Higgs sector is more complicated. There is some tension in the Standard Model between the value of the Higgs mass preferred by precision electroweak fits (M_H ∼ 100 GeV) and the experimental limit (M_H > 114 GeV). Many theoretically attractive models, such as SUSY, naturally have a larger Higgs sector. However, we need to be careful to respect the constraints from flavour-changing neutral currents (FCNCs) and the electroweak precision data.

The Two Higgs Doublet Model
The simplest extension of the Standard Model is the Two Higgs Doublet Model (THDM), which is also the structure required for the Higgs sector of SUSY models. At tree level in SUSY m_{h0} ≤ M_Z; however, there are large quantum corrections (m_{h0} ≲ 140 GeV).

Beyond the Standard Model Physics
As discussed in Section 7 the Standard Model has 18 free parameters, although in principle we should also include the Θ parameter of QCD. We now need more parameters to incorporate neutrino masses. Despite the excellent description of all current experimental data there are still a number of important questions the Standard Model does not answer.
• What are the values of these parameters?
• Why is the top quark so much heavier than the electron?
• Why is the Θ parameter so small?
• Is there enough CP-violation to explain why we are here, i.e. the matter-antimatter asymmetry of the universe?
• What about gravity?
While these are all important questions there is no definite answer to any of them.
There are, however, a large number of models of Beyond the Standard Model (BSM) physics which are motivated by trying to address problems in the Standard Model. Given the lack of any experimental evidence for BSM physics, the field is driven by theoretical and aesthetic arguments, and unfortunately by fashion.
All models of BSM physics predict either new particles or differences from the Standard Model; otherwise they cannot be distinguished experimentally from it. There are a number of ways of looking for BSM effects, from direct searches at high-energy colliders to precision measurements at lower energies. In many ways these approaches are complementary. Some effects, e.g. CP-violation, are best studied by dedicated experiments, but if the result of these experiments differs from the SM there should be new particles which are observable at collider experiments.
We will consider the collider signals of BSM physics in detail but only look at the constraints from low-energy physics as we examine the various models. The most important low-energy constraints are flavour-changing neutral currents and proton decay. Often other constraints, e.g. from astrophysics and cosmology, are also considered.

Models
We will briefly review some of the more promising models and then look at the implications of these models for collider physics taking a pragmatic view looking at the different possible signatures rather than the details of specific models.
There is a wide range of models: grand unified theories; Technicolor; supersymmetry; large extra dimensions; small extra dimensions; little Higgs models; unparticles, and so on. Depending on which model builder you talk to, they may be almost fanatical in their belief that one of these models is realized in nature.

Grand Unified Theories
The first attempts to answer the problems of the Standard Model were Grand Unified Theories (GUTs). The basic idea is that the Standard Model gauge group SU(3)_c × SU(2)_L × U(1)_Y is a subgroup of some larger gauge symmetry. The simplest group is SU(5), which we will consider here; other examples include SO(10). SU(5) has 5² − 1 = 24 generators, which means there are 24 gauge bosons. In the Standard Model there are 8 gluons and 4 electroweak gauge bosons (W±, W0, B0 ⇒ W±, γ, Z0). Therefore there are 12 new gauge bosons, X^{±4/3} and Y^{±1/3}. The right-handed down-type quarks and left-handed leptons form a 5̄ representation of SU(5); the rest of the particles form a 10 representation of the gauge group. In this model there are two stages of symmetry breaking: at the GUT scale the SU(5) symmetry is broken and the X and Y bosons get masses; at the electroweak scale the SU(2) × U(1) symmetry is broken as before. There are three problems with this theory: the couplings do not unify at the GUT scale; why is the GUT scale so much higher than the electroweak scale; and proton decay. We will come back to the first two of these questions.

Proton Decay

Grand unified theories predict the decay of the proton via the exchange of the X and Y bosons, as shown in Fig. 37. On dimensional grounds we would expect the decay rate to go like
Γ ∼ m_p⁵ / M_X⁴,
where M_X is the mass of the X boson and m_p the mass of the proton. There are limits on the proton lifetime from water Čerenkov experiments. The decay of the proton will produce an electron which travels faster than the speed of light in water, giving Čerenkov radiation, just as the electron produced in the weak interaction of a neutrino does. This is used to search for proton decay. As there is no evidence of proton decay there is a limit of τ_P ≥ 1.6 × 10³² years (91) on the proton lifetime. This means M_X > 10^{16−17} GeV, which is larger than preferred by coupling unification. Proton decay gives important limits on other models.
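The dimensional estimate above can be turned into a number. The sketch below takes all couplings to be of order one, which grossly simplifies the real GUT calculation; it only shows that M_X ∼ 10¹⁶ GeV already pushes the lifetime to the scale of the experimental limit.

```python
# Rough dimensional-analysis estimate of the proton lifetime from X-boson
# exchange, Gamma ~ m_p^5 / M_X^4 (order-one couplings suppressed; a sketch,
# not the full GUT calculation).
m_p  = 0.938        # GeV, proton mass
M_X  = 1.0e16       # GeV, assumed GUT-scale X-boson mass
hbar = 6.582e-25    # GeV s
SECONDS_PER_YEAR = 3.156e7

Gamma = m_p**5 / M_X**4                     # decay width in GeV
tau_years = hbar / Gamma / SECONDS_PER_YEAR
print(f"{tau_years:.1e}")                    # ~1e32 years
```

Raising M_X by a factor of a few raises the lifetime by orders of magnitude (τ ∝ M_X⁴), which is why the non-observation of proton decay translates into such a strong lower bound on M_X.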

Hierarchy Problem
Figure 38: Quantum correction to the Higgs mass from a fermion loop.
The vast majority of new physics models are motivated by considering the hierarchy problem, i.e. why is the electroweak scale so much less than the GUT or Planck (where gravity becomes strong) scales? It is more common to discuss the technical hierarchy problem, which is related to the Higgs boson mass: quantum corrections such as the fermion loop of Fig. 38 give contributions to the Higgs mass which depend quadratically on the ultraviolet cut-off.

Technicolor
Technicolor is one of the oldest solutions to the hierarchy problem. The main idea is that, as the problems in the theory come from having a fundamental scalar particle, they can be solved by not having one. The model postulates a new set of gauge interactions, Technicolor, which acts on new technifermions. We think of this interaction as being like QCD, although different gauge groups have been considered. The technifermions form bound states, the lightest being technipions. Via the Higgs mechanism these technipions give the longitudinal components of the W± and Z bosons, and hence generate the gauge boson masses. There must also be a way to generate the fermion masses, Extended Technicolor. It has proved hard to construct realistic models which are not already ruled out. For many years Technicolor fell out of fashion; however, following the introduction of little Higgs models there has been a resurgence of interest, and the new walking Technicolor models look more promising.

Supersymmetry
If there is a scalar loop in the Higgs propagator, as shown in Fig. 40, we get a new contribution to the Higgs mass, where M_S is the mass of the new scalar particle. If there are two scalars for every fermion, with the same mass and λ_s = |g_f|², the quadratic dependence cancels. Theorists like to have symmetries to explain cancellations like this: Supersymmetry (SUSY). For every fermionic degree of freedom there is a corresponding bosonic degree of freedom: all the SM fermions have two spin-0 partners; all the SM gauge bosons have a spin-1/2 partner. The full particle content of the theory is given in Table 4. In SUSY models we need two Higgs doublets to give mass to both the up- and down-type quarks in a way which is invariant under the supersymmetric transformations.
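The cancellation can be written out schematically. The loop coefficients below follow one common convention and should be taken as illustrative: a fermion loop with Yukawa coupling λ_f and a pair of complex scalars with quartic coupling λ_s give, keeping only the pieces quadratic in the ultraviolet cut-off Λ,

```latex
\delta m_H^2 \;\supset\; -\frac{|\lambda_f|^2}{8\pi^2}\,\Lambda^2
  \;+\; 2\times\frac{\lambda_s}{16\pi^2}\,\Lambda^2
  \;=\; \frac{\Lambda^2}{8\pi^2}\left(\lambda_s - |\lambda_f|^2\right),
```

which vanishes when λ_s = |λ_f|², exactly the relation between couplings that supersymmetry enforces.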
There are two major reasons, in addition to the solution of the hierarchy problem, to favour SUSY as an extension of the SM. SUSY coupling unification: in SUSY GUTs the additional SUSY particles change the running of the couplings and allow the couplings to truly unify at the GUT scale, as shown in Fig. 41. However, with increasingly accurate experimental measurements of the strong coupling this is no longer exactly true.

Coleman-Mandula theorem
In the modern view of particle physics we construct a theory by specifying the particle content and symmetries. All the terms allowed by the symmetries are then included in the Lagrangian. If we do this in supersymmetric models we naturally get terms which do not conserve lepton and baryon number. This leads to proton decay as shown in Fig. 42. Proton decay requires that both lepton and baryon number conservation are violated. The limits on the proton lifetime lead to very stringent limits on the product of the couplings leading to proton decay.
The only natural way for this to happen is if some symmetry requires that one or both couplings are zero. Normally a multiplicatively conserved symmetry, R-parity, such that Standard Model particles have R_p = +1 and SUSY particles have R_p = −1, is introduced, which forbids both terms. Alternatively, symmetries can be imposed which forbid only the lepton- or only the baryon-number-violating terms. The simplest SUSY extension of the Standard Model has R_p conservation and is called the Minimal Supersymmetric Standard Model (MSSM). The multiplicative conservation of R-parity has two important consequences: SUSY particles are only pair produced; and the lightest SUSY particle is stable, and therefore must be neutral on cosmological grounds. It is therefore a good dark matter candidate.
So far we haven't dealt with the biggest problem in SUSY. Supersymmetry requires that the SUSY particles have the same masses as their Standard Model partners, but the SUSY partners have not been observed. SUSY must therefore be a broken symmetry, broken in such a way that the Higgs mass still does not depend quadratically on the ultraviolet cut-off, called soft SUSY breaking. This introduces over 120 parameters into the model. Many of these parameters involve either flavour-changing or CP-violating couplings and are constrained by limits on flavour-changing neutral currents.
Flavour Changing Neutral Currents In the Standard Model the only interactions which change the quark flavour are those with the $W^\pm$ boson. Any process which changes the flavour of the quarks but not their charge, a Flavour Changing Neutral Current (FCNC), must therefore be loop mediated.
There are two important types: those which change the quark flavour with the emission of a photon, i.e. $b \to s\gamma$; and those which give meson-antimeson mixing, e.g. $B$-$\bar B$ mixing. Both are important in the Standard Model and in constraining possible new physics models.
In the Standard Model flavour changing neutral currents are suppressed by the Glashow-Iliopoulos-Maiani (GIM) mechanism. Consider neutral kaon mixing, as shown in Fig. 43, and the rare kaon decays $K^0_L \to \mu^+\mu^-$ and $K^0_L \to \gamma\gamma$, as shown in Fig. 44. Considering only two generations for simplicity, all these diagrams go like $(m_c^2 - m_u^2)/M^2$, times a factor due to the Cabibbo mixing angle, where $M$ is the largest mass left after the removal of one $W$ propagator, i.e. $M_W$ for $K^0$-$\bar K^0$ mixing and $K^0_L \to \mu^+\mu^-$, and $m_c$ for $K^0_L \to \gamma\gamma$. This suppression is called the GIM mechanism and explains why the FCNC kaon rates are so small.

Provided the SUSY breaking masses are flavour independent this is not a problem, as the mass differences are the same as in the SM. It is also not a problem if there is no flavour mixing in the model. In general both these things are possible and must be considered.

SUSY Breaking
What are the 120 SUSY breaking parameters? In general there are: SUSY breaking masses for the scalars; SUSY breaking masses for the gauginos; $A$ terms which mix three scalars; and mixing angles and CP-violating phases. We need a model of where these parameters come from in order to do any phenomenological or experimental studies. We therefore use models which predict these parameters from physics at higher energy scales, i.e. the GUT or Planck scale. In all these models SUSY is broken in a hidden sector; the models differ in how this SUSY breaking is transmitted to the visible sector, i.e. the MSSM particles.
SUGRA SUSY breaking is transmitted via gravity. All the scalar ($M_0$) and gaugino ($M_{1/2}$) masses are unified at the GUT scale. The $A$ and $B$ terms are also universal. The known value of $M_Z$ is used to constrain the $\mu$ and $B$ parameters, leaving $\tan\beta = v_1/v_2$ as a free parameter. There are five parameters which give the mass spectrum: $M_0$, $M_{1/2}$, $\tan\beta$, ${\rm sgn}\,\mu$ and $A$. The gluino mass is correlated with $M_{1/2}$ and the slepton mass with $M_0$.
Figure 46: Examples of the mass spectra in different SUSY breaking models.
GMSB In gauge mediated SUSY breaking (GMSB) the flavour-changing neutral current problem is solved by using gauge fields instead of gravity to transmit the SUSY breaking. Messenger particles, $X$, transmit the SUSY breaking; the simplest choice is a complete SU(5) 5 or 10 multiplet of particles, which preserves the GUT symmetry. The fundamental SUSY breaking scale, $\sim 10^{10}$ GeV, is lower than in gravity mediated models. The gaugino masses occur at one loop, $M_{\tilde g} \sim \alpha_s N_X \Lambda$, while the scalar masses occur at two loops, $M_{\tilde q} \sim \alpha_s \sqrt{N_X}\,\Lambda$, where $\Lambda$ is the breaking scale and $N_X$ the number of messenger fields. The true LSP is the almost massless gravitino. The lightest superpartner of the Standard Model particles is therefore unstable, decaying to the gravitino, and can be neutral, e.g. $\tilde\chi^0_1$, or charged, e.g. $\tilde\tau_1$.
AMSB The superconformal anomaly is always present and can give anomaly mediated SUSY breaking (AMSB). This predicts the sparticle masses in terms of the gravitino mass, $M_{3/2}$. The simplest version of the model predicts tachyonic particles, so another SUSY breaking mechanism is required to get a realistic spectrum, e.g. adding universal scalar masses ($M_0$). The model then has four parameters: $M_0$, $M_{3/2}$, $\tan\beta$ and ${\rm sgn}\,\mu$. In this model the lightest chargino is almost degenerate with the lightest neutralino.
The mass spectrum in the models is different, as shown in Fig. 46. The main differences are: the mass splitting between gluino and electroweak gauginos; the mass splitting of the squarks and sleptons; and the nature of the LSP.
Muon g-2 Another important low energy constraint on BSM physics is the anomalous magnetic moment of the muon. The magnetic moment of any fundamental fermion is
$$\vec{\mu} = g\,\frac{e}{2m}\,\vec{S},$$
where $g$ is the g-factor, $m$ the mass and $\vec S$ the spin of the particle. The Dirac equation predicts $g = 2$. However, there are quantum corrections, as shown in Fig. 47, which lead to an anomalous magnetic moment, $g - 2$. There are also quark loops in the photon propagator, as shown in Fig. 48. This is a low energy process, so we cannot use perturbative QCD; instead we must use the measured $e^+e^-$ total cross section and the optical theorem to obtain the hadronic corrections, which introduces an experimental error into the theoretical prediction. Many BSM theories give additional loop contributions to $g-2$. The original experimental result disagreed with the SM at $2.6\sigma$, but there was a sign error in one of the terms in the theoretical calculation, reducing the significance to about $1.4\sigma$. In any case, if you measure enough quantities, some of them should disagree with the prediction by more than 1 sigma (about 1/3 of them), and some by 2 sigma (4.6%) or 3 sigma (0.3%). This is why we define a discovery to be 5 sigma ($6\times10^{-5}\,\%$), so this is nothing to worry about.
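The quoted probabilities are just Gaussian tail probabilities and can be checked with the error function; the two-sided probability of a fluctuation of at least $n\sigma$ is ${\rm erfc}(n/\sqrt{2})$:

```python
from math import erfc, sqrt

# Two-sided probability of a Gaussian fluctuation of at least n standard deviations.
def p_fluctuation(n_sigma):
    return erfc(n_sigma / sqrt(2.0))

for n in (1, 2, 3, 5):
    print(f"{n} sigma: {100 * p_fluctuation(n):.2g}%")
```

This reproduces the numbers in the text: about 32% for 1 sigma, 4.6% for 2 sigma, 0.27% for 3 sigma and roughly $6\times10^{-5}\,\%$ for 5 sigma.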
Rare B decays There is an amazing consistency of the current flavour physics measurements. Many new physics models can nevertheless have a similar pattern in their flavour sector; indeed, a new physics model must reproduce this pattern, otherwise it is experimentally excluded. However, there can still be new physics in rare processes (like $B^+ \to \tau^+\nu_\tau$) and in CP-asymmetries. One promising example is the decay $B_s \to \mu^+\mu^-$. There are two Standard Model contributions, from box and penguin diagrams. This gives a simple leptonic final state with small theoretical uncertainties but a huge background, so the mass resolution is paramount; the expected mass resolution for the LHC experiments is given in Table 5.
In the MSSM, however, the amplitude involves three powers of $\tan\beta$, so that the rate grows like $\tan^6\beta$, which leads to an enhancement over the SM value by up to three orders of magnitude.

Extra Dimensions
Many theorists believe there are more than 4 dimensions; for example, string theories can only exist in 10 or 11 dimensions. The hierarchy problem can be solved (or at least redefined) in these models in one of two ways.
1. There is a large extra dimension with size $\sim 1$ mm. In this case
$$M_{\rm Planck}^2 \sim M^{n+2} R^n,$$
where $M_{\rm Planck}$ is the observed Planck mass, $M$ is the extra-dimensional Planck mass and $R$ the radius of the additional $n$ dimensions. The extra-dimensional Planck mass can then be of order 1 TeV, so there is no hierarchy problem; however, the hierarchy in the sizes of the dimensions must be explained instead.
2. Small extra dimensions, in which case the extra dimension is warped. The model has two branes: we live on one and the other is at the Planck scale. The Higgs VEV is suppressed by a warp factor, $\exp(-k r_c \pi)$, where $r_c$ is the compactification radius of the extra dimension and $k$ a scale of the order of the Planck scale.
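The large extra dimension option (point 1) can be illustrated numerically. Assuming the standard ADD relation $M_{\rm Planck}^2 \sim M^{n+2} R^n$ with a fundamental scale $M$ of 1 TeV (an assumed benchmark value), the required size of $n$ equal extra dimensions is:

```python
# Illustrative sketch: size of n extra dimensions from M_Planck^2 ~ M^(n+2) R^n.
HBARC_M_PER_GEV = 1.9732698e-16  # hbar*c in metre*GeV, to convert GeV^-1 to metres
M_PLANCK = 1.22e19               # observed Planck mass in GeV
M_STAR   = 1.0e3                 # assumed fundamental scale, ~1 TeV, in GeV

def radius_metres(n):
    """Common radius R of n extra dimensions, in metres."""
    r_inv_gev = (M_PLANCK**2 / M_STAR**(n + 2)) ** (1.0 / n)  # R in GeV^-1
    return r_inv_gev * HBARC_M_PER_GEV

for n in (1, 2, 3):
    print(f"n = {n}: R ~ {radius_metres(n):.1e} m")
```

For $n = 2$ this gives $R$ of order a millimetre, matching the size quoted above; $n = 1$ would require a solar-system-sized extra dimension and is excluded.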
We can consider what happens in extra-dimensional models by studying a scalar field in 5 dimensions. In this case the equation of motion for the scalar field is
$$\partial_M \partial^M \phi = 0,$$
where $\partial_M \partial^M$ is the 5-dimensional Laplace operator. If the 5th dimension is circular, of radius $R$, we can Fourier decompose the field,
$$\phi(x, y) = \sum_n \phi_n(x)\, e^{i n y/R}.$$
The equation of motion therefore becomes
$$\partial_\mu \partial^\mu \phi_n + \frac{n^2}{R^2}\,\phi_n = 0.$$
This gives a Kaluza-Klein (KK) tower of states with mass splitting $\sim 1/R$. There are a number of different models.
Large Extra Dimensions Only gravity propagates in the bulk, i.e. in the extra dimensions, so we only get Kaluza-Klein excitations of the graviton. In large extra dimension models the mass splitting between the KK excitations is small and all the gravitons contribute to a given process; phenomenologically, this gives deviations from the SM prediction for SM processes.
Small Extra Dimensions Again only gravity propagates in the bulk so there are only KK excitations of the graviton. In this case the mass splitting is large leading to resonant graviton production.
Universal Extra Dimensions Another alternative is to let all the Standard Model fields propagate in the bulk, Universal Extra Dimensions (UED). All the particles have Kaluza-Klein excitations. It is possible to have a Kaluza-Klein parity, like R-parity in SUSY. The most studied model has one extra dimension and a similar particle content to SUSY, apart from the spins. There are also some 6-dimensional models.

Little Higgs Models
In little Higgs models the Higgs fields are Goldstone bosons associated with the breaking of a global symmetry at a high scale, $\Lambda_S$. The Higgs fields acquire a mass and become pseudo-Goldstone bosons via symmetry breaking at the electroweak scale, but remain light as they are protected by the approximate global symmetry. The model has heavy partners for the photon, $Z^0$ and $W^\pm$ bosons and the top quark, as well as extra Higgs bosons. The non-linear $\sigma$-model used for the high energy theory is similar to the low energy effective theory of pions which can be used to describe QCD, and to the effective theories used in Technicolor models; this similarity is one of the reasons for the resurgence of Technicolor models in recent years.
The original Little Higgs models had problems with electroweak constraints. The solution is to introduce a discrete symmetry called T-parity, analogous to R-parity in SUSY models. This solves the problems with the precision electroweak data and provides a possible dark matter candidate. This model has a much larger particle content than the original Little Higgs model and is more SUSY-like, with a partner for each Standard Model particle.

Unparticles
In these models a new sector with a non-trivial infrared (IR) fixed point is introduced at a high energy scale. This sector interacts with the Standard Model via the exchange of particles with a large mass scale, leading to an effective theory in which the Standard Model fields couple to operators of the new sector with non-integer scaling dimension, the "unparticles".

Deviations from the Standard Model
There can be deviations from what is expected in the Standard Model due to: compositeness; the exchange of towers of Kaluza-Klein gravitons in large extra dimension models; unparticle exchange; and so on. These effects tend to change the shapes of spectra, so in order to see a difference you need to know the shape of the Standard Model prediction.
Example I: High $p_\perp$ jets One possible signal of compositeness is the production of high $p_\perp$ jets. At one point there was a disagreement between theory and experiment at the Tevatron; however, this was due not to new physics but to too little high-$x$ gluon in the PDFs. Now, as well as looking at the $p_\perp$ spectrum at central rapidities, where we expect to see a signal of BSM physics, we also look at high rapidity, as a disagreement at both central and high rapidities is more likely to be due to the parton distribution functions. An example of the jet $p_\perp$ spectrum in a range of rapidities is shown in Fig. 23.
Example II: Unparticles Many models predict deviations in the Drell-Yan mass spectra, for example in an unparticle model with the exchange of virtual spin-1 unparticles, see Fig. 51. However, we need to be careful as higher order weak corrections which can also change the shape are often neglected.
Example III: PDF uncertainty or new physics In the ADD model of large extra dimensions there are changes in the shape of the jet p ⊥ and dijet mass spectra due to the exchange of KK towers of gravitons and their destructive interference with SM, as shown in Fig. 52.

Monojets
There are a range of models which predict monojet signals, with the production of a quark or gluon recoiling against: a stable neutral particle; a tower of KK gravitons in large extra dimension models; unparticles; and so on.
Example IV: Mono-jets at the SppS In Ref. [23] the UA1 collaboration reported 5 events with $E_{\perp,\rm miss} > 40$ GeV and a narrow jet, and 2 events with $E_{\perp,\rm miss} > 40$ GeV and a neutral EM cluster. They could "not find a Standard Model explanation", and compared their findings with a calculation of SUSY pair-production [24], deducing a gluino mass larger than around 40 GeV. In Ref. [25] the UA2 collaboration described similar events, also after 113 nb$^{-1}$, without indicating any interpretation as strongly as UA1. In Ref. [26] S. Ellis, R. Kleiss and J. Stirling calculated the backgrounds to this process more carefully and showed agreement with the Standard Model. There are many different Standard Model electroweak backgrounds, and a careful comparison shows the data are currently in agreement with the Standard Model, see Fig. 53.

New Particle Production
In general there are two cases for models in which new particles are produced. In the first type of model the main signal is the production of s-channel resonances while in the second class of models the signals are more varied and complex.

Resonance Production
The easiest and cleanest signal in hadron collisions is the production of an s-channel resonance which decays to $e^+e^-$ or $\mu^+\mu^-$. Resonances in these and other channels are possible in: Little Higgs models; $Z'$ models; UED; and Small Extra Dimensions. Backgrounds can be removed using sideband subtraction.
Example V: Resonant Graviton Production The best channel, $e^+e^-$, gives a reach of order 2 TeV, depending on the cross section, for the LHC running at $\sqrt{s} = 14$ TeV. Other channels, $\mu^+\mu^-$, $gg$ and $W^+W^-$, are possible. If the graviton is light enough, the angular distribution of the decay products can be used to measure the spin of the resonance. An example of the dilepton mass spectrum in this model is shown in Fig. 54. Many models predict hadronic resonances. This is much more problematic due to the mass resolution, which smears out narrow resonances, and the often huge QCD backgrounds. Although background subtraction can be used, the ratio of signal to background is often tiny; for example, Fig. 55 shows the measured $Z \to b\bar b$ peak at the Tevatron.
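The sideband subtraction mentioned above can be sketched with a toy simulation: generate a narrow resonance on top of a smooth background, then use two sidebands to estimate the background under the peak (all numbers below are invented for illustration):

```python
import random

random.seed(2)

# Toy mass spectrum: flat background plus a narrow Gaussian resonance at 91 GeV.
N_SIG, N_BKG = 2000, 20000
masses = [random.gauss(91.0, 2.5) for _ in range(N_SIG)]
masses += [60.0 + 60.0 * random.random() for _ in range(N_BKG)]  # bkg, 60-120 GeV

def count(lo, hi):
    return sum(lo < m < hi for m in masses)

# Signal window and two equal-width sidebands on either side of the peak.
n_window = count(86.0, 96.0)
n_side   = count(76.0, 86.0) + count(96.0, 106.0)

# For a smooth background the sidebands predict the background in the window.
n_signal = n_window - 0.5 * n_side
print(f"estimated signal: {n_signal:.0f} (true: {N_SIG})")
```

The estimate comes out close to the true signal yield, up to statistical fluctuations and a small bias from signal leaking into the sidebands; in real analyses the background shape is usually fitted rather than assumed flat.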

SUSY-like models
Most of the other models are "SUSY"-like, i.e. they contain: a partner of some kind for every Standard Model particle; often some additional particles such as extra Higgs bosons; a lightest new particle which is stable and a dark matter candidate.
A lot of new particles should be produced in these models. While some particles may be stable, the majority decay to Standard Model particles. Therefore we expect to see: charged leptons; missing transverse energy from stable neutral particles or neutrinos; jets from quarks, perhaps with bottom and charm quarks; tau leptons; Higgs boson production; photons; and stable charged particles. It is worth noting that seeing an excess of these does not necessarily tell us which model has been observed. The archetypal model containing large numbers of new particles which may be accessible at the LHC is SUSY. Other examples are UED and the Little Higgs Model with T-parity. However, in practice UED is mainly used as a straw-man model in studies trying to show that a potential excess is SUSY.
Figure 55: Dijet mass spectrum for bottom quark jets at the Tevatron, taken from Ref. [28].
Two statements are commonly made: the LHC will discover the Higgs boson; and the LHC will discover low-energy SUSY if it exists. The first is almost certainly true; the second, however, is only partially true.
In hadron collisions the strongly interacting particles are dominantly produced. In SUSY, therefore, squark and gluino production has the highest cross section, for example via the processes shown in Fig. 56.
These particles then decay in a number of ways. Some of them have strong decays to other strongly interacting SUSY particles, for example via the processes shown in Fig. 57. However, the lightest strongly interacting SUSY particle, squark or gluino, can only decay weakly, as shown in Fig. 58; the gluino can only have weak decays, via virtual squarks or loop diagrams. This is the main production mechanism for the weakly interacting SUSY particles.
The decays of the squarks and gluinos will produce many quarks and antiquarks. The weakly interacting SUSY particles will then decay, giving more quarks and leptons. Eventually the lightest SUSY particle, which is stable, will be produced; it behaves like a neutrino and gives missing transverse energy. So the signal for SUSY is large numbers of jets and leptons with missing transverse energy. This could, however, be the signal for many models containing new heavy particles.
All SUSY studies fall into two categories: search studies, designed to show that SUSY can be discovered by looking for inclusive signatures and counting events; and measurement studies, designed to show that some parameters of the model, usually masses, can be measured.
There is a large reach looking for a number of high transverse momentum jets and leptons, and missing transverse energy, see Figs. 59 and 60. It is also possible to have the production of the Z 0 and Higgs bosons and top quarks. In many cases the tau lepton may be produced more often than electrons or muons.
Once we observe a signal of SUSY there are various approaches to determine the properties of the model. The simplest of these is the effective mass, which is strongly correlated with the mass of the strongly interacting SUSY particles and can be used to measure the squark/gluino mass to about 15%, see Fig. 61. The analyses we have just looked at are those used to claim that the LHC will discover SUSY, but this is not really what they tell us. They do not really discover SUSY: what they see is the production of massive strongly interacting particles, which does not have to be SUSY and could easily be something else. In order to claim that a signal is SUSY we would need to know more about it. SUSY analyses tend to proceed by looking for characteristic decay chains, and using these to measure the masses of the SUSY particles and determine more properties of the model.
Figure 60: Expected limits in SUSY parameter space for searches using jets, leptons and missing transverse energy for the LHC running at $\sqrt{s} = 14$ TeV, taken from Ref. [29].

A.1 Kinematics
The basic language of all phenomenology is that of relativistic kinematics, in particular four-vectors. In hadron collisions, because we do not know what fraction of the beam momenta is transferred to the partonic system, it is preferable to describe the kinematics using quantities, such as the transverse momentum $p_\perp$ with respect to the beam direction, which are invariant under longitudinal boosts along the beam direction. In addition to the transverse momentum we use the rapidity, $y$, and the pseudorapidity, $\eta$ (its massless limit),
$$y = \frac{1}{2}\ln\frac{E + p_z}{E - p_z}, \qquad \eta = -\ln\tan\frac{\theta}{2},$$
where $\theta$ is the polar angle with respect to the beam direction. The pseudorapidity is more often used experimentally as it is related to the measured scattering angle. The four-momentum can be written as
$$p^\mu = (E, p_x, p_y, p_z) = (m_\perp\cosh y,\; p_\perp\cos\phi,\; p_\perp\sin\phi,\; m_\perp\sinh y),$$
where $m_\perp^2 = p_\perp^2 + m^2$. The one-particle phase-space element can also be rewritten in terms of $y$ and $p_\perp$ as
$$\frac{{\rm d}^3 p}{(2\pi)^3\, 2E} = \frac{{\rm d}y\,{\rm d}^2 p_\perp}{2\,(2\pi)^3}.$$
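These kinematic relations are easy to verify numerically; the following sketch builds a four-momentum from $(m, p_\perp, y, \phi)$ and checks the mass-shell condition and the definitions of rapidity and pseudorapidity (the numerical values are arbitrary):

```python
from math import sqrt, sinh, cosh, log, tan, atan2, cos, sin

# Arbitrary massive particle: mass, transverse momentum, rapidity, azimuth.
m, pT, y, phi = 0.106, 25.0, 1.3, 0.7   # GeV, GeV, -, rad (a muon, say)

mT = sqrt(pT**2 + m**2)                 # transverse mass
E, px, py, pz = mT*cosh(y), pT*cos(phi), pT*sin(phi), mT*sinh(y)

# Mass-shell condition E^2 - |p|^2 = m^2
m_check = sqrt(E**2 - px**2 - py**2 - pz**2)

# Rapidity y = (1/2) ln[(E+pz)/(E-pz)] and pseudorapidity eta = -ln tan(theta/2)
y_check = 0.5 * log((E + pz) / (E - pz))
theta   = atan2(pT, pz)
eta     = -log(tan(theta / 2.0))

print(m_check, y_check, eta)
```

For this nearly massless muon $\eta \approx y$; the two coincide exactly for $m = 0$.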

A.2 Cross Sections
The starting point of all collider physics calculations is the calculation of the scattering cross section. The cross section for a $2 \to n$ scattering process, $a + b \to 1 \ldots n$, is
$$\sigma = \frac{1}{4\sqrt{(p_a\cdot p_b)^2 - m_a^2 m_b^2}} \int {\rm d}\Phi_n\, |\mathcal{M}|^2,$$
where $p_{a,b}$ and $p_{i=1,\ldots,n}$ are the momenta of the incoming and outgoing particles, respectively. The matrix element squared $|\mathcal{M}|^2$ is summed/averaged over the spins and colours of the outgoing/incoming particles. The $n$-particle phase-space element is
$${\rm d}\Phi_n = (2\pi)^4\,\delta^4\Big(p_a + p_b - \sum_{i=1}^n p_i\Big) \prod_{i=1}^n \frac{{\rm d}^3 p_i}{(2\pi)^3\, 2E_i},$$
where $E_i$ is the energy of the $i$th particle. It is conventional to define $s = (p_a + p_b)^2$. For massless incoming particles $4\sqrt{(p_a\cdot p_b)^2 - m_a^2 m_b^2} = 2s$.
Although modern theoretical calculations involve ever higher multiplicity final states, in these lectures we will primarily deal with $2 \to 2$ scattering processes, in which case
$${\rm d}\Phi_2 = \frac{1}{(2\pi)^2}\,\frac{|p_1|}{4\sqrt{s}}\;{\rm d}\!\cos\theta\,{\rm d}\phi,$$
where $|p_1|$ is the magnitude of the three-momenta of either of the outgoing particles and $\theta$ and $\phi$ are the polar and azimuthal scattering angles, respectively. It is conventional to describe the scattering process in terms of the Mandelstam variables
$$s = (p_a + p_b)^2, \qquad t = (p_a - p_1)^2, \qquad u = (p_a - p_2)^2.$$
There are only two independent Mandelstam variables, since
$$s + t + u = m_a^2 + m_b^2 + m_1^2 + m_2^2.$$
In terms of these variables, for massless particles,
$${\rm d}\sigma = \frac{1}{16\pi s^2}\,{\rm d}t\,|\mathcal{M}|^2. \qquad (118)$$
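The Mandelstam relations can be checked on an explicit massless $2\to2$ configuration (arbitrary beam energy and scattering angle):

```python
from math import cos, sin

def dot(p, q):
    # Minkowski product with metric (+,-,-,-)
    return p[0]*q[0] - p[1]*q[1] - p[2]*q[2] - p[3]*q[3]

def plus(p, q):
    return tuple(a + b for a, b in zip(p, q))

def minus(p, q):
    return tuple(a - b for a, b in zip(p, q))

E, theta = 50.0, 0.8                          # beam energy (GeV), c.m. angle (rad)
pa = (E, 0.0, 0.0,  E)                        # massless incoming partons
pb = (E, 0.0, 0.0, -E)
p1 = (E,  E*sin(theta), 0.0,  E*cos(theta))   # massless outgoing particles
p2 = (E, -E*sin(theta), 0.0, -E*cos(theta))

s = dot(plus(pa, pb), plus(pa, pb))
t = dot(minus(pa, p1), minus(pa, p1))
u = dot(minus(pa, p2), minus(pa, p2))

print(s, t, u, s + t + u)   # s = 4E^2 and s + t + u = 0 for massless particles
```

One also sees explicitly that $t = -\tfrac{s}{2}(1-\cos\theta)$ and $u = -\tfrac{s}{2}(1+\cos\theta)$ in the massless case.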

A.3 Cross Sections in Hadron Collisions
In hadron collisions there is an additional complication, as the partons inside the hadrons interact. The hadron-hadron cross section is
$$\sigma_{AB} = \sum_{a,b}\int_0^1 {\rm d}x_1\,{\rm d}x_2\; f_{a/A}(x_1,\mu_F^2)\, f_{b/B}(x_2,\mu_F^2)\;\hat{\sigma}_{ab}(\hat s,\mu_F^2,\mu_R^2),$$
where $x_{1,2}$ are the momentum fractions of the interacting partons with respect to the incoming hadrons, $\hat s = x_1 x_2 s$, $\hat\sigma_{ab}(\hat s,\mu_F^2,\mu_R^2)$ is the parton-level cross section for the partons $a$ and $b$ to produce the relevant final state, $f_{a/A}(x,\mu_F^2)$ is the parton distribution function (PDF) giving the probability of finding the parton $a$ in the hadron $A$, and similarly for $f_{b/B}(x,\mu_F^2)$. The factorisation and renormalisation scales are $\mu_F$ and $\mu_R$, respectively.
In hadron collisions we usually denote the variables of the partonic process with a hat, e.g. $\hat s$, $\hat t$ and $\hat u$ for the Mandelstam variables.
A.3.1 Resonance production (2 → 1 processes) The simplest example of a hadronic cross section is the production of an s-channel resonance, for example the $Z^0$ or Higgs boson. We assume that the incoming partons are massless, so that their 4-momenta are
$$p_{a,b} = x_{1,2}\,(E, 0, 0, \pm E),$$
where $E$ is the beam energy in the hadron-hadron centre-of-mass system, such that $s = 4E^2$. The partonic cross section, e.g. for $Z$ production, has the Breit-Wigner form, proportional to $\big[(\hat s - M_Z^2)^2 + M_Z^2\Gamma_Z^2\big]^{-1}$. In the limit that the width is much less than the mass,
$$\frac{1}{(\hat s - M_Z^2)^2 + M_Z^2\Gamma_Z^2} \longrightarrow \frac{\pi}{M_Z\Gamma_Z}\,\delta(\hat s - M_Z^2),$$
the narrow width limit, the partonic centre-of-mass system is constrained to have $\hat s = M_Z^2$. The rapidity $\hat y$ of the partonic system and $\hat s$ are related to the momentum fractions $x_{1,2}$ by
$$\hat s = x_1 x_2 s \qquad {\rm and} \qquad \hat y = \frac{1}{2}\ln\frac{x_1}{x_2}.$$
Inverting these relationships we obtain
$$x_{1,2} = \sqrt{\frac{\hat s}{s}}\; e^{\pm\hat y}.$$
This allows us to change the variables in the integration using
$${\rm d}x_1\,{\rm d}x_2 = \frac{{\rm d}\hat s\,{\rm d}\hat y}{s},$$
giving the differential cross section
$$\frac{{\rm d}\sigma_{AB\to Z^0\to\mu^+\mu^-}}{{\rm d}\hat y} = \sum_{a,b=q\bar q} \frac{1}{s}\, f_{a/A}(x_1,\mu_F^2)\, f_{b/B}(x_2,\mu_F^2) \int {\rm d}\hat s\;\hat\sigma_{ab\to Z^0\to\mu^+\mu^-}(\hat s),$$
where in the narrow-width limit the $\hat s$ integral is saturated at $\hat s = M_Z^2$, so that $x_{1,2} = (M_Z/\sqrt{s})\, e^{\pm\hat y}$.
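The relations between $(x_1, x_2)$ and $(\hat s, \hat y)$ invert as stated, which can be checked numerically (arbitrary illustrative momentum fractions):

```python
from math import sqrt, log, exp

S = (14.0e3)**2          # hadronic centre-of-mass energy squared, (14 TeV)^2 in GeV^2
x1, x2 = 0.012, 0.0008   # arbitrary parton momentum fractions

shat = x1 * x2 * S                   # partonic centre-of-mass energy squared
yhat = 0.5 * log(x1 / x2)            # rapidity of the partonic system

# Invert: x_{1,2} = sqrt(shat/S) * exp(+-yhat)
x1_back = sqrt(shat / S) * exp(+yhat)
x2_back = sqrt(shat / S) * exp(-yhat)

print(sqrt(shat), yhat, x1_back, x2_back)
```

Asymmetric momentum fractions, as here, correspond to a partonic system boosted to large rapidity, which is why resonance production at hadron colliders populates a broad rapidity range.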

B Flavour Physics
While most of the interactions in the Standard Model preserve the flavour of quarks and leptons, the interaction of fermions with the $W$ boson can change the flavour of the quarks and violate CP-conservation. In order to understand the interactions of the quarks with the $W$ boson we first need to consider the generation of quark masses in the Standard Model. The masses of the quarks come from the Yukawa interaction with the Higgs field,
$$\mathcal{L}_Y = -\,Y^d_{ij}\,\bar{Q}^I_{Li}\,\phi\, d^I_{Rj} \;-\; Y^u_{ij}\,\bar{Q}^I_{Li}\,\tilde{\phi}\, u^I_{Rj} \;+\; {\rm h.c.},$$
where $Y^{u,d}$ are complex $3\times3$ matrices, $\phi$ is the Higgs field (with $\tilde\phi = i\sigma_2\phi^*$), $i, j$ are generation indices, $Q^i_L$ are the left-handed quark doublets and $d^I_R$ and $u^I_R$ are the right-handed down- and up-type quark singlets. When the Higgs field acquires a vacuum expectation value $\langle\phi\rangle = (0, v/\sqrt{2})$ we get the mass terms for the quarks.
The physical states come from diagonalising $Y^{u,d}$ using four unitary $3\times3$ matrices, $V^{u,d}_{L,R}$:
$$M^f_{\rm diag} = V^f_L\, Y^f\, V^{f\dagger}_R\,\frac{v}{\sqrt{2}}, \qquad f = u, d.$$
The interaction of the $W^\pm$ and the quarks is given by
$$\mathcal{L}_W = -\frac{g}{\sqrt{2}}\,\bar{u}^I_{Li}\,\gamma^\mu\, d^I_{Li}\, W^+_\mu + {\rm h.c.}$$
In terms of the mass eigenstates, $f^M_L = V^f_L f^I_L$, the interaction is
$$\mathcal{L}_W = -\frac{g}{\sqrt{2}}\,\bar{u}^M_{Li}\,\gamma^\mu\,(V_{\rm CKM})_{ij}\, d^M_{Lj}\, W^+_\mu + {\rm h.c.},$$
where the Cabibbo-Kobayashi-Maskawa (CKM) matrix $V_{\rm CKM} = V^u_L V^{d\dagger}_L$ can be written in the standard parameterisation as
$$V_{\rm CKM} = \begin{pmatrix} c_{12}c_{13} & s_{12}c_{13} & s_{13}e^{-i\delta}\\ -s_{12}c_{23} - c_{12}s_{23}s_{13}e^{i\delta} & c_{12}c_{23} - s_{12}s_{23}s_{13}e^{i\delta} & s_{23}c_{13}\\ s_{12}s_{23} - c_{12}c_{23}s_{13}e^{i\delta} & -c_{12}s_{23} - s_{12}c_{23}s_{13}e^{i\delta} & c_{23}c_{13}\end{pmatrix},$$
where $s_{ij} = \sin\theta_{ij}$ and $c_{ij} = \cos\theta_{ij}$. As experimentally $s_{13} \ll s_{23} \ll s_{12} \ll 1$, it is convenient to use the Wolfenstein parameterisation: $s_{12} = \lambda$; $s_{23} = A\lambda^2$; and $s_{13}e^{i\delta} = A\lambda^3(\rho + i\eta)$.
To $O(\lambda^3)$ this gives
$$V_{\rm CKM} \simeq \begin{pmatrix} 1 - \frac{\lambda^2}{2} & \lambda & A\lambda^3(\rho - i\eta)\\ -\lambda & 1 - \frac{\lambda^2}{2} & A\lambda^2\\ A\lambda^3(1 - \rho - i\eta) & -A\lambda^2 & 1 \end{pmatrix}.$$
If we assume that the neutrinos are massless there is no mixing for leptons. We now know that the neutrinos have small masses, so there is mixing in the lepton sector; the analogue of the CKM matrix is the Maki-Nakagawa-Sakata (MNS) matrix, $U_{\rm MNS}$.
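As a quick numerical check of the Wolfenstein parameterisation, one can build the matrix to $O(\lambda^3)$ with indicative parameter values ($\lambda \approx 0.225$, $A \approx 0.81$, $\rho \approx 0.14$, $\eta \approx 0.35$; illustrative numbers, not a fit) and verify that it is unitary up to terms of $O(\lambda^4)$:

```python
lam, A, rho, eta = 0.225, 0.81, 0.14, 0.35  # indicative values, not a fit

# Wolfenstein matrix to O(lambda^3)
V = [
    [1 - lam**2/2,                lam,            A*lam**3*(rho - 1j*eta)],
    [-lam,                        1 - lam**2/2,   A*lam**2],
    [A*lam**3*(1 - rho - 1j*eta), -A*lam**2,      1],
]

# Largest element of |V V^dagger - 1|; should be of order lambda^4
dev = max(
    abs(sum(V[i][k] * V[j][k].conjugate() for k in range(3)) - (i == j))
    for i in range(3) for j in range(3)
)
print(f"max deviation from unitarity: {dev:.1e}  (lambda^4 = {lam**4:.1e})")
```

The deviation is indeed of order $\lambda^4 \approx 2.6\times10^{-3}$, confirming the truncation order of the expansion.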
A number of unitarity triangles can be constructed using the properties of the CKM matrix. The most useful one is
$$V_{ud}V_{ub}^* + V_{cd}V_{cb}^* + V_{td}V_{tb}^* = 0,$$
which can be represented as a triangle, as shown in Fig. 63. The area of all the unitarity triangles is $\frac{1}{2}J$, where $J$ is the Jarlskog invariant, a convention-independent measure of CP-violation,
$$J = {\rm Im}\{V_{ud}V_{cs}V_{us}^*V_{cd}^*\}.$$
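In the Wolfenstein parameterisation the Jarlskog invariant reduces to $J \approx A^2\lambda^6\eta$, which is easy to verify numerically (the parameter values below are again indicative, not a fit):

```python
lam, A, rho, eta = 0.225, 0.81, 0.14, 0.35  # indicative values, not a fit

# Wolfenstein matrix elements to O(lambda^3)
V_us, V_cb = lam, A*lam**2
V_ub = A*lam**3*(rho - 1j*eta)
V_cs = 1 - lam**2/2

# Jarlskog invariant from one quartet of elements
# (any quartet of this type gives the same |J|, up to the truncation order)
J = (V_us * V_cb * V_ub.conjugate() * V_cs.conjugate()).imag
print(f"J = {J:.2e}  vs  A^2 lambda^6 eta = {A**2 * lam**6 * eta:.2e}")
```

Both come out around $3\times10^{-5}$: the smallness of $J$ is the quantitative statement that CP-violation in the quark sector is a small effect.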
There are a large number of measurements which constrain the parameters in the unitarity triangle. They all measure different combinations of the parameters and overconstrain the location of the vertex of the unitarity triangle.
The magnitudes of the CKM elements control the lengths of the sides: 1. $|V_{ud}|$ is accurately measured in nuclear beta decay; 2. $|V_{cd}|$ can be measured using either semi-leptonic charm meson decays or neutrino DIS cross sections; 3. $|V_{ub}|$ is measured using inclusive and exclusive semi-leptonic B meson decays to light mesons, $B\to X_u\ell\nu$ or $B\to\pi\ell\nu$; 4. $|V_{cb}|$ is measured using inclusive and exclusive semi-leptonic B meson decays to charm mesons, $B\to X_c\ell\nu$ or $B\to D\ell\nu$.
The CKM matrix elements which give the length of the remaining side can only be measured in loop-mediated processes. The most important of these, FCNCs, have already been discussed in the context of BSM physics in Section 9.1.4. These also give rise to $B$-$\bar B$ mixing and oscillations, via the Feynman diagrams shown in Fig. 64.
The time-dependent probability for a neutral B meson to oscillate into its antiparticle is
$$P_{B\to\bar B}(t) = \frac{e^{-\Gamma t}}{2}\left[\cosh\!\left(\frac{\Delta\Gamma\, t}{2}\right) - \cos(\Delta m\, t)\right],$$
where $\Gamma$ is the average width of the mesons, $\Delta\Gamma$ is the width difference between the mesons and $\Delta m$ is the mass difference of the mesons. For both $B_d$ and $B_s$ mesons the $\Delta m$ term dominates. From the box diagram,
$$\Delta m_q = \frac{G_F^2}{6\pi^2}\,\eta_B\, m_{B_q}\, B_{B_q} f_{B_q}^2\, M_W^2\, S_0(x_t)\,|V_{tq}V_{tb}^*|^2.$$
The decay constant $f_{B_q}$ can be measured from leptonic decays $B_q\to\ell^+\nu_\ell$, but $B_{B_q}$ comes from lattice QCD results. The QCD correction $\eta_B \sim O(1)$. The B-factories have studied $B^0$-$\bar B^0$ mixing in great detail, giving $\Delta m_d = 0.507 \pm 0.005\;{\rm ps}^{-1}$.
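As a rough numerical illustration, the measured $\Delta m_d$ corresponds to a sizeable time-integrated mixing probability, $\chi_d = x_d^2/(2(1+x_d^2))$ with $x_d = \Delta m_d\,\tau_{B_d}$ (this formula neglects $\Delta\Gamma$, consistent with the $\Delta m$ term dominating; the lifetime value below is an assumed standard number):

```python
dm_d  = 0.507   # ps^-1, B0d mass difference quoted in the text
tau_d = 1.52    # ps, B0d lifetime (assumed standard value)

x_d   = dm_d * tau_d                        # oscillation parameter
chi_d = x_d**2 / (2.0 * (1.0 + x_d**2))     # time-integrated mixing probability
print(f"x_d = {x_d:.2f}, chi_d = {chi_d:.3f}")
```

This gives $x_d \approx 0.77$ and $\chi_d \approx 0.19$: roughly one in five $B_d$ mesons decays as its antiparticle.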
It is important to measure both $B_d$-$\bar B_d$ and $B_s$-$\bar B_s$ mixing, as some hadronic uncertainties cancel in the ratio. The $B_s$ rate is $\propto |V_{ts}V_{tb}^*|^2$ due to the GIM mechanism. However, the high oscillation frequency makes $B_s$-$\bar B_s$ mixing tricky to observe. The Tevatron observation relied on tagging the flavour of the B meson at production by observing an associated kaon from the fragmentation. The final result is
$$\Delta m_s = 17.77 \pm 0.10({\rm stat}) \pm 0.07({\rm sys})\;{\rm ps}^{-1}, \qquad (142)$$
$$\frac{|V_{td}|}{|V_{ts}|} = 0.2060 \pm 0.0007({\rm exp}) \pm 0.008({\rm theo}).$$
Figure 65: Examples of tree and penguin mediated processes, taken from Ref. [8].
The only source of CP-violation in the Standard Model is the complex phase in the CKM matrix. In order to see any effect we need at least two diagrams for the process with different CP-phases. There are three possibilities: CP-violation in the decay (direct); CP-violation in the mixing (indirect); and CP-violation in the interference between decay and mixing. Example amplitudes are shown in Fig. 65.
The simplest type of CP-violation is direct CP-violation. This is the only possible type of CP-violation for charged mesons and is usually observed by measuring an asymmetry of the form
$$A_{CP} = \frac{\Gamma(M\to f) - \Gamma(\bar M\to\bar f)}{\Gamma(M\to f) + \Gamma(\bar M\to\bar f)}.$$
If CP-symmetry held, $|K_L\rangle = \frac{1}{\sqrt{2}}\left(|K^0\rangle + |\bar K^0\rangle\right)$ would be a CP-eigenstate with $|\bar K_L\rangle = |K_L\rangle$. If we take $M = K_L$ and $f = \pi^-e^+\nu_e$, the corresponding CP-asymmetry is $A_{CP} = (0.327 \pm 0.012)\%$, which means that $K_L$ is not a CP-eigenstate and there is CP-violation. There are many possible modes which measure different combinations of the angles in the unitarity triangle. The observed flavour and CP-violation is consistent with the Standard Model, i.e. with the description by the CKM matrix, see Fig. 66.
There is one final area of flavour physics which is important. The matter in the universe consists of particles and not antiparticles. There are three Sakharov conditions required for this to happen: 1. baryon number violation; 2. C-symmetry and CP-symmetry violation; 3. interactions out of thermal equilibrium.
There are non-perturbative effects in the SM which violate baryon number. However, the amount of CP-violation in the quark sector is not enough to give the observed matter-antimatter asymmetry. There might be more CP-violation in the lepton sector; otherwise we need a new physics source of CP-violation.

C Color algebra
The color factors C F and C A correspond to the factors one gets for emitting a gluon off a quark or gluon line respectively.
The color factor for the splitting of a gluon into a quark-antiquark pair is given by T R .
One can compute color factors using a set of pictorial rules, all of which follow from the properties of the SU(3) color group.
The three-gluon vertex can be rewritten in terms of the generators as $f^{abc} = -2i\,{\rm Tr}\big([T^a, T^b]\,T^c\big)$. Here is an example of a calculation of a color factor with the pictorial method.
We have used the fact that a closed fermion loop with no gluon attachments amounts to a factor of N c , while a closed gluon loop would give a factor of N 2 c − 1.
A gluon loop on a gluon line can be written as the same line without the loop but with a factor of N c .
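These statements can be checked explicitly from the SU(3) generators. A minimal standalone check, using the Gell-Mann matrices $T^a = \lambda^a/2$ and plain Python matrix helpers:

```python
from math import sqrt

# Gell-Mann matrices; the generators are T^a = lambda^a / 2.
s3 = 1.0 / sqrt(3.0)
GELL_MANN = [
    [[0, 1, 0], [1, 0, 0], [0, 0, 0]],
    [[0, -1j, 0], [1j, 0, 0], [0, 0, 0]],
    [[1, 0, 0], [0, -1, 0], [0, 0, 0]],
    [[0, 0, 1], [0, 0, 0], [1, 0, 0]],
    [[0, 0, -1j], [0, 0, 0], [1j, 0, 0]],
    [[0, 0, 0], [0, 0, 1], [0, 1, 0]],
    [[0, 0, 0], [0, 0, -1j], [0, 1j, 0]],
    [[s3, 0, 0], [0, s3, 0], [0, 0, -2 * s3]],
]
T = [[[x / 2 for x in row] for row in lam] for lam in GELL_MANN]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def trace(a):
    return sum(a[i][i] for i in range(3))

# T_R: Tr(T^a T^b) = T_R * delta^ab with T_R = 1/2
T_R = trace(matmul(T[0], T[0])).real

# C_F: sum_a T^a T^a = C_F * identity, so 3 C_F = sum_a Tr(T^a T^a) = 8 T_R
C_F = sum(trace(matmul(t, t)) for t in T).real / 3.0

print(f"T_R = {T_R}, C_F = {C_F:.4f}, C_A = N_c = 3")
```

This recovers $T_R = 1/2$ and $C_F = (N_c^2-1)/(2N_c) = 4/3$; $C_A = N_c = 3$ follows from the structure constants in the same way.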

DARK MATTER
Dr David G. Cerdeño (IPPP, University of Durham)
Dark Matter: From production to detection
These notes are a write-up of lectures given at the HEP Summer School, which took place at the University of Lancaster in September 2015.

Motivation for Dark Matter
The existence of a vast amount of dark matter (DM) in the Universe is supported by many astrophysical and cosmological observations. The latest measurements indicate that approximately 27% of the energy density of the Universe is in the form of a new type of non-baryonic cold DM. Given that the Standard Model (SM) of particle physics does not contain any viable candidate to account for it, DM can be regarded as one of the clearest hints of new physics.

Evidence for Dark Matter
Astrophysical and cosmological observations have provided substantial evidence pointing towards the existence of vast amounts of a new type of matter that does not emit or absorb light. All astrophysical evidence for DM is based solely on gravitational effects (either through the observation of dynamical effects, the deflection of light by gravitational lensing, or measurements of the gravitational potential of galaxy clusters), which cannot be accounted for by just the observed luminous matter. The simplest way to solve these problems is the inclusion of more matter (which does not emit light and is therefore dark in the astronomical sense 2 ). Modifications of the Newtonian equation relating force and acceleration have also been suggested to address the problem at galactic scales, but this hypothesis is insufficient to account for effects at other scales (e.g., clusters of galaxies) or to reproduce the anisotropies in the CMB.
No known particle can play the role of the DM (we will later argue that neutrinos contribute only a small part of it). Thus, this is one of the clearest hints of physics beyond the Standard Model and provides a window onto new particle physics models. In the following I summarise some of the main pieces of evidence for DM at different scales.
I recommend completing this section with the first chapters of Ref. [1] and the recent article [2].

Galactic scale
Rotation curves of spiral galaxies Rotation curves of spiral galaxies are probably the best-known examples of how the dynamical properties of astrophysical objects are affected by DM. Applying Gauss's law to a spiral galaxy (one can safely ignore the contribution from the spiral arms and assume a spherical distribution of matter in the bulge) leads to a simple relation between the rotation velocity of objects which are gravitationally bound to the galaxy and their distance to the galactic centre:
$$v_{\rm rot} = \sqrt{\frac{G\,M(r)}{r}}, \qquad (1)$$
where $M(r)$ is the mass contained within the radius $r$. In the outskirts of the galaxy, where we expect that $M$ no longer increases, we would therefore expect a decay $v_{\rm rot} \propto r^{-1/2}$. Vera Rubin's observations of rotation curves of spiral galaxies [3, 4] instead showed a very slow decrease with the galactic radius. The careful work of Bosma [5] and of van Albada and Sancisi [6] showed that this flatness could not be accounted for by simply modifying the relative weight of the diverse galactic components (bulge, disc, gas); a new component with a different spatial distribution was needed (see Fig. 1).
Notice that flat rotation curves are obtained if a new mass component is introduced whose mass distribution satisfies M(r) ∝ r in eq. (1). This is precisely the relation that one expects for a self-gravitating gas of non-interacting particles. This halo of DM can extend up to ten times the size of the galactic disc and contains approximately 80% of the total mass of the galaxy.
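The effect of an M(r) ∝ r halo on eq. (1) can be illustrated with a few lines of code. The sketch below uses purely illustrative numbers (the bulge mass and halo slope are hypothetical) to contrast a bulge-only galaxy, where v_rot falls as r^(−1/2), with one hosting an isothermal halo, where v_rot flattens out:

```python
import math

G = 4.30e-6  # gravitational constant in kpc (km/s)^2 / M_sun

def v_rot(M_enclosed, r_kpc):
    """Circular velocity from eq. (1): v = sqrt(G M(r) / r)."""
    return math.sqrt(G * M_enclosed / r_kpc)

M_bulge = 1e10     # M_sun, taken as fully enclosed beyond a few kpc (toy value)
halo_slope = 1e9   # M_sun per kpc, so M_halo(r) = halo_slope * r (toy value)

for r in (10.0, 20.0, 40.0):
    v_no_halo = v_rot(M_bulge, r)                 # falls as r^(-1/2)
    v_halo = v_rot(M_bulge + halo_slope * r, r)   # flattens towards sqrt(G * halo_slope)
    print(f"r = {r:4.0f} kpc: no halo {v_no_halo:5.1f} km/s, with halo {v_halo:5.1f} km/s")
```

As r grows, the halo curve tends to the constant √(G × halo_slope), whereas the bulge-only curve keeps falling, which is the qualitative difference Rubin's data exposed.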
Since then, flat rotation curves have been found in many spiral galaxies, further strengthening the DM hypothesis. Our own galaxy, the Milky Way, is of course no exception. N-body simulations have proved to be very important tools in determining the properties of DM haloes, which can be characterised in terms of their density profile ρ(r) and the velocity distribution function f(v).
Observations of the local dynamics provide a measurement of the DM density at our position in the Galaxy. Within substantial uncertainties, the local DM density can vary in the range ρ₀ = 0.2 − 1 GeV cm⁻³. It is customary to describe the DM halo in terms of a spherical isothermal halo, in which the velocity distribution follows a Maxwell-Boltzmann law, but deviations from this are also expected. Due to numerical limitations, current N-body simulations cannot predict the DM distribution at the centre of the galaxy: whereas some results suggest the existence of a cusp of DM in the galactic centre, other simulations seem to favour a core. Finally, the effect of baryons is not easy to simulate, although substantial improvements have been made recently.
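The Maxwell-Boltzmann speed distribution of the isothermal halo is easy to explore numerically. The sketch below assumes a dispersion parameter v0 = 220 km/s (a commonly used benchmark, not a measurement) and checks that the distribution is normalised and that its mean speed equals the analytic value 2 v0/√π:

```python
import math

v0 = 220.0  # km/s, circular-speed parameter of the isothermal halo (assumed benchmark)

def f_speed(v):
    """Maxwell-Boltzmann speed distribution, normalised to unity."""
    return 4.0 / (math.sqrt(math.pi) * v0**3) * v**2 * math.exp(-(v / v0)**2)

# simple numerical integration up to 5*v0 (the tail beyond is negligible)
dv = 0.1
grid = [i * dv for i in range(int(5 * v0 / dv) + 1)]
norm = sum(f_speed(v) * dv for v in grid)
mean = sum(v * f_speed(v) * dv for v in grid)

print(f"normalisation ≈ {norm:.4f}")   # should be ≈ 1
print(f"mean speed ≈ {mean:.1f} km/s")  # analytic value 2*v0/sqrt(pi) ≈ 248 km/s
```

Real halo models truncate this distribution at the Galactic escape speed and may include anisotropies, which is what the "deviations" mentioned above refer to.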

Galaxy Clusters
Peculiar motion of clusters. Fritz Zwicky studied the peculiar motions of galaxies in the Coma cluster [8,9]. Assuming that the galaxy cluster is an isolated system, the virial theorem, 2⟨T⟩ = −⟨V⟩, can be used to relate the average velocity of its members to the gravitational potential (or, equivalently, to the total mass of the system, M ∼ ⟨v²⟩R/G for a system of size R).
As in the case of galaxies, this determination of the mass is insensitive to whether the objects emit light or not. The result can then be contrasted with determinations based on the luminosity. This comparison yields an extremely large mass-to-light ratio, indicative of large amounts of missing mass, which can be attributed to a DM component.
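A back-of-the-envelope version of Zwicky's argument can be sketched with illustrative numbers for a Coma-like cluster (the velocity dispersion σ ≈ 1000 km/s and radius R ≈ 1 Mpc below are assumed round values, and the O(1) geometry factor depends on the mass distribution):

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
MPC = 3.086e22     # megaparsec, m

sigma = 1.0e6      # line-of-sight velocity dispersion, m/s (~1000 km/s, assumed)
R = 1.0 * MPC      # characteristic cluster radius (assumed)

# virial theorem 2<T> = -<V> gives M ~ <v^2> R / G up to O(1) factors;
# the factor 3 converts the line-of-sight dispersion to the 3D one
M_virial = 3.0 * sigma**2 * R / G

print(f"virial mass ≈ {M_virial / M_SUN:.1e} M_sun")
```

The result, of order 10^15 solar masses, vastly exceeds luminosity-based estimates of the stellar mass, which is the mass-to-light discrepancy described above.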
Modern determinations through weak-lensing techniques provide a better gravitational determination of the cluster masses [10,7] (see Fig. 2). I recommend reading through Ref. [9] for a derivation of the virial theorem in the context of galaxy clusters.

Dynamical systems. The Bullet Cluster (1E 0657-558) is a paradigmatic example of the effect of dark matter in dynamical systems. It consists of two galaxy clusters which have undergone a collision. The visible components of the clusters, observed by the Chandra X-ray satellite, display a characteristic shock wave (which gives the system its name). On the other hand, weak-lensing analyses, which make use of data from the Hubble Space Telescope, have revealed that most of the mass of the system is displaced from the visible components. The accepted interpretation is that the dark matter components of the two clusters have passed through each other without interacting significantly (see e.g., Refs. [11,12]).
The Bullet Cluster is considered one of the best arguments against MOND theories (since the gravitational effects occur where there is no visible matter). It also sets an upper bound on the self-interaction strength of dark matter particles.
DM filaments. Observations of the distribution of luminous matter at large scales have shown that it follows a filamentary structure. Numerical simulations of structure formation with cold DM have been able to reproduce this feature, and it is now well understood that DM plays a fundamental role in creating that filamentary network, gravitationally trapping the luminous matter. Recently, the comparison of the distribution of luminous matter in the Abell 222/223 supercluster with weak-lensing data has shown the existence of a dark filament joining the two clusters of the system. That filament, having no visible counterpart, is believed to be made of DM.

Cosmological scale
Finally, DM has also left its footprint in the anisotropies of the Cosmic Microwave Background (CMB). The analysis of the CMB constitutes a primary tool for determining the cosmological parameters of the Universe. The data obtained by dedicated satellites in the past decades have confirmed that we live in a flat Universe (COBE), dominated by dark matter and dark energy (WMAP), whose cosmological abundances have been determined with great precision (Planck). The abundance of DM is normally expressed in terms of the cosmological density parameter Ω_DM = ρ_DM/ρ_c, where ρ_c is the critical density necessary to recover a flat Universe; it is usually quoted as Ω_DM h², with h ≈ 0.7 the normalised Hubble parameter. The most recent measurements by the Planck satellite, combined with data obtained from supernovae (which trace the expansion of the Universe), yield

Ω_CDM h² = 0.1196 ± 0.0031 .
Given that Ω ≈ 1, this means that dark matter is responsible for approximately 26% of the energy density of the Universe today. Even more surprising is the fact that another exotic component is needed, dark energy, which makes up approximately 69% of the total energy density (see Fig. 4).
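The quoted percentage follows directly from dividing the measured Ω_CDM h² by h². A quick check, taking h ≈ 0.68 (close to the Planck value; the exact number depends on the dataset used):

```python
omega_cdm_h2 = 0.1196   # Planck measurement quoted above
h = 0.68                # normalised Hubble parameter (assumed value)

omega_cdm = omega_cdm_h2 / h**2
print(f"Omega_CDM ≈ {omega_cdm:.3f}")   # ≈ 0.26, i.e. ~26% of the energy density
```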

Neutral
It is generally argued that DM particles must be electrically neutral; otherwise they would scatter light and thus not be dark. Similarly, constraints on charged DM particles can be extracted from unsuccessful searches for exotic atoms. Constraints on heavy millicharged particles are inferred from cosmological and astrophysical observations as well as direct laboratory tests [13,14,15]. Millicharged DM particles scatter off electrons and protons at the recombination epoch via Rutherford-like interactions. If millicharged particles couple tightly to the baryon-photon plasma during the recombination epoch, they behave like baryons, thus affecting the CMB power spectrum in several ways [13,14]. For particles much heavier than the proton, this results in an upper bound on their charge [14],

ε ≤ 2.24 × 10⁻⁴ (M/1 TeV)^(1/2) ,

where ε is the particle's charge in units of the electron charge.
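Evaluating the bound for a few masses makes its scaling explicit (a sketch; ε is the charge in units of e, as above):

```python
def epsilon_max(mass_tev):
    """Upper bound on the millicharge from the CMB constraint quoted above."""
    return 2.24e-4 * mass_tev**0.5

# the allowed charge grows only as the square root of the mass
for m in (0.1, 1.0, 10.0):   # masses in TeV
    print(f"M = {m:5.1f} TeV: epsilon <= {epsilon_max(m):.2e}")
```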

Non-relativistic
Numerical simulations of structure formation in the early Universe have become a very useful tool for understanding some of the properties of dark matter. In particular, it was soon found that dark matter has to be non-relativistic (cold) at the epoch of structure formation. Relativistic (hot) dark matter has a larger free-streaming length (the average distance travelled by a dark matter particle before it falls into a potential well), which washes out structure below that scale and leads to inconsistencies with observations.
However, at the galactic scale, cold dark matter simulations produce too much substructure in dark matter haloes, predicting a larger number of subhaloes (observable through the luminous matter that falls into their potential wells) than is observed. It was argued that if dark matter were warm (with a mass of approximately 2−3 keV) this problem would be alleviated.
Modern simulations, where the effect of baryons is included, are fundamental in order to fully understand structure formation in our Galaxy and determine whether dark matter is cold or warm.

Non-baryonic
The results of the CMB, together with the predictions of Big Bang nucleosynthesis, suggest that only 4−5% of the total energy budget of the Universe is made of ordinary (baryonic) matter. Given the mismatch between this and the total matter content, we must conclude that DM is non-baryonic.
Neutrinos. Neutrinos deserve special mention in this section, being the only viable non-baryonic DM candidate within the SM. Neutrinos are very abundant particles in the Universe and are known to have a (very small) mass. Given that they also interact very feebly with ordinary matter (only through the electroweak force), they are in fact a component of the DM. There are, however, various arguments showing that they contribute only a very small part.
First, neutrinos are too light. Through the study of the decoupling of neutrinos in the early Universe we can compute their thermal relic abundance. Since neutrinos are relativistic particles at the time of decoupling, this is in fact a very easy computation (we will come back to this in Section 2.2.1), and yields

Ω_ν h² = Σ m_ν / 93.14 eV .

Using current upper bounds on the neutrino mass, we obtain Ω_ν h² < 0.003, a small fraction of the total DM abundance.
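Taking a representative cosmological bound Σm_ν ≲ 0.23 eV (an assumed benchmark; the precise value depends on the dataset, and the denominator 93.14 eV varies slightly in the literature), the quoted number can be checked directly:

```python
sum_mnu_ev = 0.23                   # representative bound on the summed neutrino mass, eV
omega_nu_h2 = sum_mnu_ev / 93.14    # standard relic-abundance formula (denominator in eV)

print(f"Omega_nu h^2 ≈ {omega_nu_h2:.4f}")   # ≈ 0.0025, below the 0.003 quoted above
```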
Second, neutrinos are relativistic (hot) at the epoch of structure formation. As mentioned above, hot DM leads to a different hierarchy of structure formation at large scales, with large objects forming first and small ones occurring only after fragmentation. This is inconsistent with observations.
The neutrino-nucleus cross section in the SM reads

dσ/dE_R = (G_F² m_N / 4π) Q_v² (1 − m_N E_R / 2E_ν²) F²(E_R) ,

where F²(E_R) is the nuclear form factor, for which we have taken the parametrisation given by Helm [21], m_N is the nuclear mass and E_ν the neutrino energy. Q_v parametrises the coherent interaction with protons (Z) and neutrons (N = A − Z) in the nucleus,

Q_v = N − (1 − 4 sin²θ_W) Z .
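Since 1 − 4 sin²θ_W ≈ 0.05, the coherent weak charge is dominated by the neutron number. A quick evaluation for a xenon target (Z = 54, A = 131 taken as an illustrative isotope; the low-energy value of sin²θ_W is assumed):

```python
SIN2_THETA_W = 0.238   # weak mixing angle, low-energy value (assumed)

def q_v(Z, A):
    """Coherent weak charge Q_v = N - (1 - 4 sin^2(theta_W)) Z."""
    N = A - Z
    return N - (1.0 - 4.0 * SIN2_THETA_W) * Z

qv_xe = q_v(54, 131)
print(f"Xe: Q_v ≈ {qv_xe:.1f} (neutron number N = {131 - 54})")
```

The proton contribution is only a few percent, which is why coherent neutrino-nucleus scattering rates effectively scale as N².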

Inelastic scattering of DM particles
WIMPs can also have inelastic scattering off nuclei [22]. The WIMP needs to have sufficient speed to interact with the nucleus and be promoted to an excited state (with energy separation δ). This leads to the condition

v ≥ √(2δ/μ_N) ,

where μ_N is the WIMP-nucleus reduced mass. Therefore, the main effect at a given experiment is to limit the sensitivity to only a part of the phase space of the halo. This favours heavy nuclei (since they can transfer more energy to the outgoing WIMP) and can account for observation in targets such as iodine (DAMA/LIBRA) while avoiding observation in lighter ones such as Ge (CDMS).