Exercise 1.1: Manipulating Spinor Indices

The sigma-matrix four-vector is defined as \((\bar {\sigma }^{\mu })^{\dot {\alpha } \alpha } := ({\mathbb{1} },-\boldsymbol {\sigma })^{\top }\), where \(\boldsymbol {\sigma } = (\hat \sigma _1,\hat \sigma _2, \hat \sigma _3)\) is the vector of Pauli matrices \(\hat \sigma _i\). We rewrite \((\sigma ^{\mu })_{\alpha \dot \alpha } = - \epsilon _{\dot \alpha \dot \beta } (\bar {\sigma }^{\mu })^{\dot \beta \beta } \epsilon _{\beta \alpha }\) in matrix notation, as

$$\displaystyle \begin{aligned} \sigma^{\mu} = - \begin{pmatrix} 0 & 1 \\ -1 & 0 \\ \end{pmatrix} \cdot \bar{\sigma}^{\mu} \cdot \begin{pmatrix} 0 & 1 \\ -1 & 0 \\ \end{pmatrix} \,. \end{aligned} $$
(5.1)

Substituting the explicit expressions for \(\bar {\sigma }^{\mu }\) gives \(\sigma ^0 = {\mathbb{1} }\) and \(\sigma ^i = \hat \sigma _i\), hence \(\sigma ^{\mu } = ({\mathbb{1} }, \boldsymbol {\sigma })^{\top }\). Multiplying the latter by the metric tensor proves the second identity,

$$\displaystyle \begin{aligned} {} \sigma_{\mu} = \eta_{\mu \nu} \sigma^{\nu} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -1 \\ \end{pmatrix} \cdot \begin{pmatrix} {\mathbb{1}} \\ \hat\sigma_1 \\ \hat\sigma_2 \\ \hat\sigma_3 \\ \end{pmatrix} = ({\mathbb{1}}, -\boldsymbol{\sigma})^{\top} \,. \end{aligned} $$
(5.2)

To prove the third identity, \(\mathrm {Tr}\left (\sigma ^{\mu } \bar {\sigma }^{\nu }\right ) = 2 \eta ^{\mu \nu }\), we consider it for fixed values of \(\mu \) and \(\nu \). The facts that the Pauli matrices have vanishing trace and obey the anti-commutation relation \(\{\hat \sigma _i,\hat \sigma _j \} = 2 \delta _{ij}\, {\mathbb{1} }_2\) imply that

$$\displaystyle \begin{aligned} \begin{array}{ll} \mathrm{Tr}\left(\sigma^0 \bar{\sigma}^0\right) = \mathrm{Tr}\left({\mathbb{1}}\right) = 2 \,, & \quad \mathrm{Tr}\left(\sigma^0 \bar{\sigma}^i\right) = -\mathrm{Tr}\left( \hat{\sigma}_i\right) = 0\,, \\ \mathrm{Tr}\left(\sigma^i \bar{\sigma}^0\right) = \mathrm{Tr}\left(\hat{\sigma}_i\right) = 0 \,, & \quad \mathrm{Tr}\left(\sigma^i \bar{\sigma}^j\right) = -\mathrm{Tr}\left( \hat\sigma_i \hat\sigma_j \right) = - 2\delta_{ij}\,, \\ \end{array} \end{aligned} $$
(5.3)

for \(i,j=1,2,3\). Putting these together gives the third identity.

The Pauli matrices and the identity matrix form a basis of all \(2\times 2\) matrices. Any \(2\times 2\) matrix \(M_{\alpha \dot \alpha }\) can thus be expressed as

$$\displaystyle \begin{aligned} M_{\alpha \dot\alpha} = m_{\mu} (\sigma^{\mu})_{\alpha \dot\alpha} \,. \end{aligned} $$
(5.4)

By contracting both sides by \((\bar \sigma ^{\nu })^{\dot \alpha \beta }\) and computing the trace using the third identity, we can express the coefficients of the expansion in terms of M as \(m^{\mu } = \mathrm {Tr}\left (M \bar {\sigma }^{\mu }\right )/2\). Substituting this into the expansion of \(M_{\alpha \dot \alpha }\) gives

$$\displaystyle \begin{aligned} 2 \, M_{\alpha\dot\alpha} = M_{\beta\dot\beta} (\bar{\sigma}_{\mu})^{\dot\beta \beta} (\sigma^{\mu})_{\alpha\dot\alpha} \,. \end{aligned} $$
(5.5)

Since this holds for any matrix M, it follows that

$$\displaystyle \begin{aligned} (\sigma^{\mu})_{\alpha\dot\alpha} (\bar{\sigma}_{\mu})^{\dot\beta \beta} = 2 \, \delta^{\beta}_{\, \alpha} \delta^{\dot\beta}_{\, \dot\alpha} \,. \end{aligned} $$
(5.6)

Contracting both sides with suitable Levi-Civita symbols gives the fourth identity.
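
The sigma-matrix identities derived above are easy to cross-check numerically. The following sketch is our own illustration (it is not part of the original solution) and uses explicit Pauli matrices together with the numerical convention \(\epsilon_{12}=+1\); since \(\epsilon\) enters quadratically, the overall sign convention drops out:

```python
import numpy as np

# Explicit Pauli matrices and epsilon (our numerical convention: eps_{12} = +1).
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
id2 = np.eye(2, dtype=complex)
eps = np.array([[0, 1], [-1, 0]], dtype=complex)
eta = np.diag([1.0, -1.0, -1.0, -1.0])

sigma = [id2, s1, s2, s3]            # (sigma^mu)_{alpha alpha-dot}
sigma_bar = [id2, -s1, -s2, -s3]     # (sigma-bar^mu)^{alpha-dot alpha}

# First identity with all indices spelled out:
# (sigma^mu)_{a ad} = -eps_{ad bd} (sigma-bar^mu)^{bd b} eps_{b a}
for mu in range(4):
    lhs = -np.einsum('ab,bc,cd->da', eps, sigma_bar[mu], eps)
    assert np.allclose(lhs, sigma[mu])

# Third identity: Tr(sigma^mu sigma-bar^nu) = 2 eta^{mu nu}
traces = np.array([[np.trace(sigma[m] @ sigma_bar[n]) for n in range(4)] for m in range(4)])
assert np.allclose(traces, 2 * eta)

# Eq. (5.6): (sigma^mu)_{a ad} (sigma-bar_mu)^{bd b} = 2 delta_a^b delta_ad^bd
lhs = sum(eta[m, m] * np.einsum('ij,kl->ijkl', sigma[m], sigma_bar[m]) for m in range(4))
rhs = 2 * np.einsum('il,jk->ijkl', np.eye(2), np.eye(2))
assert np.allclose(lhs, rhs)
print("sigma-matrix identities verified")
```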

Exercise 1.2: Massless Dirac Equation and Weyl Spinors
  1. (a)

    Any Dirac spinor \(\xi \) can be decomposed as \(\xi = \xi _+ + \xi _-\), where \(\xi _{\pm }\) satisfy the helicity relations

    $$\displaystyle \begin{aligned} P_{\pm} \xi_{\pm} = \xi_{\pm} \,, \qquad \quad P_{\pm} \xi_{\mp} = 0 \,, \end{aligned} $$
    (5.7)

    with \(P_{\pm } = ({\mathbb{1} } \pm \gamma ^{5})/2\). Using the Dirac representation of the \(\gamma \) matrices in Eq. (1.24) we have that

    $$\displaystyle \begin{aligned} P_{\pm} = \frac{1}{2} \begin{pmatrix} {\mathbb{1}}_2 & \pm {\mathbb{1}}_2 \\ \pm {\mathbb{1}}_2 & {\mathbb{1}}_2 \end{pmatrix} \,. \end{aligned} $$
    (5.8)

    The helicity relations then constrain the form of \(\xi _{\pm }\) to have only two independent components:

    $$\displaystyle \begin{aligned} \xi_{+} = \left(\xi^0, \xi^1, \xi^0, \xi^1 \right)^{\top} \,, \qquad \xi_{-} = \left(\xi^0, \xi^1, -\xi^0, -\xi^1 \right)^{\top} \,. \end{aligned} $$
    (5.9)

    Indeed, \(u_+\) and \(v_-\) (\(u_-\) and \(v_+\)) have the form of \(\xi _+\) (\(\xi _-\)). We now focus on \(\xi _+\). We change variables from \(k^{\mu }\) to \(k^{\pm }\) and \(\mathrm {e}^{{\mathrm {i}} \phi }\), which have the benefit of implementing \(k^2=0\). Then we have that

    $$\displaystyle \begin{aligned} \gamma^{\mu} k_{\mu} = \begin{pmatrix} \frac{k^++k^-}{2} & 0 & \frac{k^--k^+}{2} & - \mathrm{e}^{-{\mathrm{i}} \phi} \sqrt{k^+ k^-} \\ 0 & \frac{k^++k^-}{2} & - \mathrm{e}^{{\mathrm{i}} \phi} \sqrt{k^+ k^-} & \frac{k^+-k^-}{2} \\ \frac{k^+-k^-}{2} & \mathrm{e}^{-{\mathrm{i}} \phi} \sqrt{k^+ k^-} & - \frac{k^++k^-}{2} & 0 \\ \mathrm{e}^{{\mathrm{i}} \phi} \sqrt{k^+ k^-} & \frac{k^--k^+}{2} & 0 & -\frac{k^++k^-}{2} \\ \end{pmatrix} \,. \end{aligned} $$
    (5.10)

    Plugging the generic form of \(\xi _{+}\) into the Dirac equation \(\gamma ^{\mu } k_{\mu } \xi _{+} = 0\) gives one independent equation, which fixes \(\xi ^1\) in terms of \(\xi ^0\),

    $$\displaystyle \begin{aligned} \xi^1 = \mathrm{e}^{{\mathrm{i}} \phi} \sqrt{\frac{k^-}{k^+}} \, \xi^0 \,. \end{aligned} $$
    (5.11)

    Since the equation is homogeneous, the overall normalisation is arbitrary. Choosing \(\xi ^0 = \sqrt {k^+}/\sqrt {2}\) gives the expressions for \(\xi _+=u_+=v_-\) given above.

  2. (b)

    For any Dirac spinor \(\xi \) we have

    $$\displaystyle \begin{aligned} \bar{\xi} P_{\pm} = \xi^{\dagger} \gamma^0 P_{\pm} = \xi^{\dagger} P_{\mp} \gamma^0 = \left(\gamma^0 P_{\mp} \xi \right)^{\dagger} \,, \end{aligned} $$
    (5.12)

    where we used that \((\gamma ^5)^{\dagger } = \gamma ^5\) and \(\{\gamma ^5,\gamma ^0\} = 0\). From this it follows that

    $$\displaystyle \begin{aligned} \bar{u}_{\pm} P_{\pm} = 0 \,, \qquad \bar{u}_{\pm} P_{\mp} = \bar{u}_{\pm} \,, \qquad \bar{v}_{\pm} P_{\pm} = \bar{v}_{\pm} \,, \qquad \bar{v}_{\pm} P_{\mp} = 0 \,. \end{aligned} $$
    (5.13)
  3. (c)

    Through matrix multiplication we obtain the explicit expression of U,

    $$\displaystyle \begin{aligned} U = \frac{1}{\sqrt{2}} \begin{pmatrix} {\mathbb{1}}_{2} & -{\mathbb{1}}_{2} \\ {\mathbb{1}}_{2} & {\mathbb{1}}_{2} \\ \end{pmatrix} \,, \end{aligned} $$
    (5.14)

    which is indeed a unitary matrix. The Dirac matrices in the chiral basis then are

    $$\displaystyle \begin{aligned} \gamma^{0}_{\text{ch}}=U \gamma^{0} U^{\dagger} = \begin{pmatrix} 0 & {\mathbb{1}}_{2} \\ {\mathbb{1}}_{2} & 0 \end{pmatrix}\,, \qquad \gamma^{i}_{\text{ch}}= U \gamma^{i} U^{\dagger} = \begin{pmatrix} 0 & \hat\sigma_{i} \\ -\hat\sigma_{i} & 0 \end{pmatrix}\, , \end{aligned} $$
    (5.15)

    with \(i=1,2,3\). Putting these together gives

    $$\displaystyle \begin{aligned} {} \gamma^{\mu}_{\text{ch}} = \begin{pmatrix} 0 & \sigma^{\mu} \\ \overline{\sigma}^{\mu} & 0 \\ \end{pmatrix} \,. \end{aligned} $$
    (5.16)

    Similarly, we obtain the expression of \(\gamma ^5\), which in this basis is diagonal,

    $$\displaystyle \begin{aligned} {} \gamma^5_{\text{ch}}=U \gamma^{5} U^{\dagger} = \begin{pmatrix}- {\mathbb{1}}_{2} & 0 \\ 0 & {\mathbb{1}}_{2} \end{pmatrix} \,. \end{aligned} $$
    (5.17)

    Finally, the solutions to the Dirac equation in the chiral basis are given by

    $$\displaystyle \begin{aligned} U u_+ = \left( 0, 0, \sqrt{k^+}, \mathrm{e}^{{\mathrm{i}} \phi} \sqrt{k^-} \right)^{\top} \,, \qquad U u_- = \left( \sqrt{k^-} \mathrm{e}^{-{\mathrm{i}} \phi}, -\sqrt{k^+}, 0, 0 \right)^{\top}\,, \end{aligned} $$
    (5.18)

    and similarly for \(v_{\pm }\).

  4. (d)

    The product of four Dirac matrices in the chiral representation (5.16) is given by

    $$\displaystyle \begin{aligned} \gamma^{\mu} \gamma^{\nu} \gamma^{\rho} \gamma^{\tau} = \begin{pmatrix} \sigma^{\mu} \bar{\sigma}^{\nu} \sigma^{\rho} \bar{\sigma}^{\tau} & 0 \\ 0 & \bar{\sigma}^{\mu} \sigma^{\nu} \bar{\sigma}^{\rho} \sigma^{\tau} \\ \end{pmatrix} \,. \end{aligned} $$
    (5.19)

    Multiplying to the right by

    $$\displaystyle \begin{aligned} \frac{1}{2} \left( {\mathbb{1}} - \gamma_5 \right) = \begin{pmatrix} {\mathbb{1}} & 0 \\ 0 & 0 \\ \end{pmatrix} \end{aligned} $$
    (5.20)

    selects the top-left block,

    $$\displaystyle \begin{aligned} \frac{1}{2} \gamma^{\mu} \gamma^{\nu} \gamma^{\rho} \gamma^{\tau} \left( {\mathbb{1}} - \gamma_5 \right) = \begin{pmatrix} \sigma^{\mu} \bar{\sigma}^{\nu} \sigma^{\rho} \bar{\sigma}^{\tau} & 0 \\ 0 & 0 \\ \end{pmatrix} \,. \end{aligned} $$
    (5.21)

    Taking the trace of both sides of this equation finally gives Eq. (1.29). Note that this result does not depend on the representation of the Dirac matrices, as the unitary matrices relating different representations drop out of the trace. Using \({\mathbb{1} }+\gamma _5\) instead gives a relation for \({\mathrm {Tr}}\left (\bar {\sigma }^{\mu } \sigma ^{\nu } \bar {\sigma }^{\rho } \sigma ^{\tau }\right )\).
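
As a quick cross-check of parts (a) and (c), the following sketch (our own addition, with arbitrary numerical values for \(k^{\pm}\) and \(\phi\)) verifies that the positive-helicity solution satisfies the massless Dirac equation in the Dirac representation, and that the matrix \(U\) of Eq. (5.14) indeed rotates the Dirac matrices into the chiral form of Eqs. (5.16) and (5.17):

```python
import numpy as np

s = [np.array(m, dtype=complex) for m in
     ([[0, 1], [1, 0]], [[0, -1j], [1j, 0]], [[1, 0], [0, -1]])]
id2, z2 = np.eye(2, dtype=complex), np.zeros((2, 2), dtype=complex)

# Dirac representation of the gamma matrices and gamma^5
g = [np.block([[id2, z2], [z2, -id2]])] + \
    [np.block([[z2, si], [-si, z2]]) for si in s]
g5 = np.block([[z2, id2], [id2, z2]])

kp, km, phi = 3.0, 2.0, 0.7                                 # arbitrary k^+, k^-, phi
k = np.array([(kp + km) / 2, np.sqrt(kp * km) * np.cos(phi),
              np.sqrt(kp * km) * np.sin(phi), (kp - km) / 2])
kslash = k[0] * g[0] - sum(k[i] * g[i] for i in (1, 2, 3))

u_plus = np.array([np.sqrt(kp), np.exp(1j * phi) * np.sqrt(km)] * 2) / np.sqrt(2)
assert np.allclose(kslash @ u_plus, 0)                      # massless Dirac equation

U = np.block([[id2, -id2], [id2, id2]]) / np.sqrt(2)
assert np.allclose(U @ U.conj().T, np.eye(4))               # U is unitary
assert np.allclose(U @ g5 @ U.conj().T, np.block([[-id2, z2], [z2, id2]]))  # Eq. (5.17)

sigma = [id2] + s
sigma_bar = [id2] + [-si for si in s]
for mu in range(4):
    chiral = np.block([[z2, sigma[mu]], [sigma_bar[mu], z2]])
    assert np.allclose(U @ g[mu] @ U.conj().T, chiral)      # Eq. (5.16)
print("Dirac-equation and chiral-basis checks passed")
```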

Exercise 1.3: SU(\(N_c\)) Identities
  1. (a)

    The Jacobi identity for the generators (1.49) can be proven directly by expanding all commutators. We recall that the commutator is bilinear. The first term on the left-hand side gives

    $$\displaystyle \begin{aligned} \left[T^{a}, [ T^{b}, T^{c}]\right] = T^a T^b T^c - T^a T^c T^b - T^b T^c T^a + T^c T^b T^a \,. \end{aligned} $$
    (5.22)

    Summing both sides of this equation over the cyclic permutations of the indices (\(\{a,b,c\}\), \(\{b,c,a\}\), \(\{c,a,b\}\)) makes all terms on the right-hand side cancel pairwise, which proves Eq. (1.49).

  2. (b)

    We substitute the commutation relations (1.46) into the Jacobi identity for the generators (1.49). The first term gives

    $$\displaystyle \begin{aligned} \left[T^{a}, [ T^{b}, T^{c}]\right] = - 2 f^{bce} f^{aeg} T^{g} \,. \end{aligned} $$
    (5.23)

    By summing over the cyclic permutations of the indices and removing the overall constant factor we obtain

    $$\displaystyle \begin{aligned} {} \left(f^{abe} f^{ceg} + f^{bce} f^{aeg} + f^{cae} f^{beg} \right) T^{g} = 0 \,. \end{aligned} $$
    (5.24)

    We recall that repeated indices are summed over. Since the generators \(T^{g}\) are linearly independent, their coefficients in Eq. (5.24) have to vanish separately. This gives the Jacobi identity (1.50).

  3. (c)

    Any \(N_c \times N_c\) complex matrix M can be decomposed into the identity \({\mathbb{1} }_{N_c}\) and the \(su(N_c)\) generators \(T^a\) (with \(a=1,\ldots , N_c^2-1\)),

    $$\displaystyle \begin{aligned} M = m_0 \, {\mathbb{1}}_{N_c} + m_a \, T^a \,. \end{aligned} $$
    (5.25)

    As usual, repeated indices are summed over. The coefficients of the expansion can be obtained by multiplying both sides by \({\mathbb{1} }_{N_c}\) and \(T^a\), and taking the trace. Using the tracelessness of \(T^a\) and \({\mathrm {Tr}}(T^a T^b) = \delta ^{ab}\) we obtain that \(m_0 = {\mathrm {Tr}}(M)/N_c\) and \(m_a = {\mathrm {Tr}} (M T^a)\). We then rewrite the expansion as

    $$\displaystyle \begin{aligned} \left[ \left(T^a\right)_{i_1}^{\ j_1} \left(T^a\right)_{i_2}^{\ j_2} - \delta_{i_1}^{\ j_2} \delta_{i_2}^{\ j_1} + \frac{1}{N_c} \delta_{i_1}^{\ j_1} \delta_{i_2}^{\ j_2} \right] M_{j_2}^{\ i_2} = 0\,. \end{aligned} $$
    (5.26)

    Since this equation holds for any complex matrix M, it follows that the coefficient of \(M_{j_2}^{\ i_2}\) vanishes. This yields the desired relation.
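
The completeness relation (1.51) used in this argument can also be verified numerically. The generator construction below is our own (the generalised Gell-Mann basis, rescaled so that \(\mathrm{Tr}(T^aT^b)=\delta^{ab}\)) and is meant only as an illustration:

```python
import numpy as np

def su_n_generators(N):
    """Generalised Gell-Mann matrices normalised to Tr(T^a T^b) = delta^{ab}."""
    gens = []
    for i in range(N):
        for j in range(i + 1, N):
            S = np.zeros((N, N), dtype=complex); S[i, j] = S[j, i] = 1
            A = np.zeros((N, N), dtype=complex); A[i, j] = -1j; A[j, i] = 1j
            gens += [S / np.sqrt(2), A / np.sqrt(2)]
    for k in range(1, N):
        D = np.diag([1.0] * k + [-float(k)] + [0.0] * (N - k - 1)).astype(complex)
        gens.append(D / np.sqrt(k * (k + 1)))
    return gens

for N in (2, 3, 4):
    T = su_n_generators(N)
    assert len(T) == N * N - 1
    # orthonormality: Tr(T^a T^b) = delta^{ab}
    G = np.array([[np.trace(a @ b) for b in T] for a in T])
    assert np.allclose(G, np.eye(N * N - 1))
    # completeness: sum_a (T^a)_{ij} (T^a)_{kl} = delta_{il} delta_{kj} - (1/N) delta_{ij} delta_{kl}
    lhs = sum(np.einsum('ij,kl->ijkl', a, a) for a in T)
    d = np.eye(N)
    rhs = np.einsum('il,kj->ijkl', d, d) - np.einsum('ij,kl->ijkl', d, d) / N
    assert np.allclose(lhs, rhs)
print("completeness relation verified for N = 2, 3, 4")
```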

Exercise 1.4: Casimir Operators
  1. (a)

    The commutator of \(T^a T^a\) with the generators \(T^b\) is given by

    $$\displaystyle \begin{aligned} \begin{aligned} \left[ T^a T^a, T^b \right] & = T^a \left[ T^a, T^b \right] + \left[ T^a, T^b \right] T^a \\ & = {\mathrm{i}} \sqrt{2} f^{abc} \left( T^{a} T^{c} + T^{c} T^{a} \right) \,, \end{aligned} \end{aligned} $$
    (5.27)

    which vanishes because of the anti-symmetry of \(f^{abc}\). In the first line we used that \([A B,C] = A [B,C] + [A,C] B\), which can be proven by expanding all commutators, and in the second line we applied the commutation relations (1.46).

  2. (b)

    The Casimir invariant of the fundamental representation follows directly from the completeness relation (1.51),

    $$\displaystyle \begin{aligned} \begin{aligned} \left(T^a_F \right)_{i}^{\ k} \left(T^a_F \right)_{k}^{\ j} & = \delta_{i}^{\ j} \delta_{k}^{\ k} - \frac{1}{N_c} \delta_{i}^{\ k} \delta_{k}^{\ j} \\ & = \frac{N_c^2-1}{N_c} \, \delta_{i}^{\ j} \,, \end{aligned} \end{aligned} $$
    (5.28)

    from which we read off that \(C_F = (N_c^2-1)/N_c\). For the adjoint representation, we use Eq. (1.56) to express the generators in terms of structure constants,

    $$\displaystyle \begin{aligned} \left(T^a_A T^a_A \right)^{bc} = 2 f^{bak} f^{cak} \,. \end{aligned} $$
    (5.29)

    We express one of the structure constants in terms of generators through Eq. (1.48),

    $$\displaystyle \begin{aligned} \left(T^a_A T^a_A \right)^{bc} = - {\mathrm{i}} 2 \sqrt{2} {\mathrm{Tr}}\left(T^b_F T^a_F T^k_F \right) f^{cak} \,, \end{aligned} $$
    (5.30)

    where we also used the anti-symmetry of \(f^{cak}\) to remove the commutator from the trace. Next, we move \(f^{cak}\) into the trace and rewrite \( f^{cak} T^k_F \) in terms of a commutator,

    $$\displaystyle \begin{aligned} \left(T^a_A T^a_A\right)^{bc} = 2 {\mathrm{Tr}} \left( T^b_F T^a_F [T^a_F, T^c_F] \right) \,. \end{aligned} $$
    (5.31)

    Applying the completeness relation (1.51) finally gives

    $$\displaystyle \begin{aligned} \left(T^a_A T^a_A\right)^{bc} = 2 \, N_c \left( {\mathbb{1}}\right)^{bc} \,, \end{aligned} $$
    (5.32)

    from which we see that \(C_A = 2 \, N_c\).

    Note that in many QCD contexts it is customary to normalise the generators so that \(\mathrm {Tr}(T^a T^b) = \delta ^{ab}/2\), as opposed to \(\mathrm {Tr}(T^a T^b) = \delta ^{ab}\) as we do here. This different normalisation results in \(C_A = N_c\) and \(C_F = (N_c^2-1)/(2 N_c)\).
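
As a concrete illustration (our own, not part of the original solution), the following sketch checks both Casimir values for \(N_c=2\), taking \(T^a=\hat\sigma_a/\sqrt{2}\) so that \(\mathrm{Tr}(T^aT^b)=\delta^{ab}\) and extracting \(f^{abc}\) from the commutation relations used above:

```python
import numpy as np

Nc = 2
s = [np.array(m, dtype=complex) for m in
     ([[0, 1], [1, 0]], [[0, -1j], [1j, 0]], [[1, 0], [0, -1]])]
T = [si / np.sqrt(2) for si in s]                 # Tr(T^a T^b) = delta^{ab}

# fundamental Casimir: sum_a T^a T^a = C_F * identity, C_F = (Nc^2-1)/Nc
CF = sum(t @ t for t in T)
assert np.allclose(CF, (Nc**2 - 1) / Nc * np.eye(Nc))

# structure constants from [T^a, T^b] = i sqrt(2) f^{abc} T^c
f = np.zeros((3, 3, 3), dtype=complex)
for a in range(3):
    for b in range(3):
        for c in range(3):
            f[a, b, c] = np.trace((T[a] @ T[b] - T[b] @ T[a]) @ T[c]) / (1j * np.sqrt(2))

# adjoint Casimir as in Eq. (5.29): (T^a_A T^a_A)^{bc} = 2 f^{bak} f^{cak} = 2 Nc delta^{bc}
CA = 2 * np.einsum('bak,cak->bc', f, f)
assert np.allclose(CA, 2 * Nc * np.eye(3))
print("C_F and C_A verified for SU(2)")
```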

Exercise 1.5: Spinor Identities

The identities (a) and (b) follow straightforwardly from the definition of the bra-ket notation and from the expression of \(\gamma ^{\mu }\) in terms of Pauli matrices,

$$\displaystyle \begin{aligned} {}[ i | \gamma^{\mu} | j \rangle & = \begin{pmatrix} 0 & \tilde{\lambda}_i \\ \end{pmatrix} \cdot \begin{pmatrix} 0 & \sigma^{\mu} \\ \bar{\sigma}^{\mu} & 0 \\ \end{pmatrix} \cdot \begin{pmatrix} \lambda_j \\ 0 \\ \end{pmatrix} = (\tilde{\lambda}_i)_{\dot\alpha} (\bar{\sigma}^{\mu})^{\dot{\alpha} \alpha} (\lambda_j)_{\alpha} \,, \end{aligned} $$
(5.33)
$$\displaystyle \begin{aligned} \langle i | \gamma^{\mu} | j ] & = \begin{pmatrix} \lambda_i & 0 \\ \end{pmatrix} \cdot \begin{pmatrix} 0 & \sigma^{\mu} \\ \bar{\sigma}^{\mu} & 0 \\ \end{pmatrix} \cdot \begin{pmatrix} 0 \\ \tilde{\lambda}_j \\ \end{pmatrix} = \lambda_i^{\alpha} (\sigma^{\mu})_{\alpha\dot{\alpha}} \tilde{\lambda}_j^{\dot\alpha}\,. \end{aligned} $$
(5.34)

Setting \(j=i\) in the previous identities and using that \((\sigma ^{\mu })_{\beta \dot {\beta }} = \epsilon _{\beta \alpha } \epsilon _{\dot {\beta }\dot {\alpha }} (\bar {\sigma }^{\mu })^{\dot {\alpha }\alpha }\) gives the relation (c),

$$\displaystyle \begin{aligned} [ i | \gamma^{\mu} | i \rangle = \epsilon_{\dot{\alpha}\dot{\beta}} \tilde{\lambda}_i^{\dot\beta} (\bar{\sigma}^{\mu})^{\dot{\alpha} \alpha} \epsilon_{\alpha \beta} \lambda_i^{\beta} = \lambda_i^{\beta} (\sigma^{\mu})_{\beta \dot{\beta}} \tilde{\lambda}_i^{\dot{\beta}} = \langle i | \gamma^{\mu} | i ] \,. \end{aligned} $$
(5.35)

We obtain the relation (d) by substituting the identities \(\tilde {\lambda }_i^{\dot {\alpha }} \lambda _i^{\alpha } = p_i^{\mu } (\bar {\sigma }_{\mu })^{\dot {\alpha }\alpha }\) and \(\text{tr}\left (\sigma ^{\mu } \bar {\sigma }^{\nu }\right ) = 2 \eta ^{\mu \nu }\) into (b) with \(j=i\),

$$\displaystyle \begin{aligned} \langle i | \gamma^{\mu} | i ] = (\sigma^{\mu})_{\alpha\dot{\alpha}} (p_i)_{\nu} (\bar{\sigma}^{\nu})^{\dot\alpha \alpha} = 2 p_i^{\mu} \,. \end{aligned} $$
(5.36)

In order to prove the Schouten identity, we recall that a spinor \(\lambda _i\) is a two-dimensional object. We can therefore expand \(\lambda _3\) in a basis constructed from \(\lambda _1\) and \(\lambda _2\),

$$\displaystyle \begin{aligned} {} \lambda_3^{\alpha} = c_1 \lambda_1^{\alpha} + c_2 \lambda_2^{\alpha} \,. \end{aligned} $$
(5.37)

Contracting both sides of this equation by \(\lambda _1\) and \(\lambda _2\) gives a linear system of equations for the coefficients \(c_1\) and \(c_2\),

$$\displaystyle \begin{aligned} \begin{cases} \langle 3 1 \rangle = c_2 \langle 2 1 \rangle \\ \langle 3 2 \rangle = c_1 \langle 1 2 \rangle \\ \end{cases}\,. \end{aligned} $$
(5.38)

Substituting the solution of this system into Eq. (5.37) and rearranging the terms gives the Schouten identity. Finally, identity (a) and \((\bar {\sigma }^{\mu })^{\dot {\alpha } \beta } (\bar {\sigma }_{\mu })^{\dot {\beta } \alpha } = 2 \epsilon ^{\dot {\alpha }\dot {\beta }} \epsilon ^{\beta \alpha }\) give the Fierz rearrangement,

$$\displaystyle \begin{aligned} \begin{aligned}{}[ i | \gamma^{\mu} | j \rangle [ k | \gamma_{\mu} | l \rangle & = (\tilde{\lambda}_i)_{\dot{\alpha}} (\bar{\sigma}^{\mu})^{\dot{\alpha} \beta} (\lambda_j)_{\beta} (\tilde{\lambda}_k)_{\dot{\beta}} (\bar{\sigma}_{\mu})^{\dot{\beta} \alpha} (\lambda_l)_{\alpha} \\ & = 2 (\tilde{\lambda}_i)_{\dot{\alpha}} \epsilon^{\dot{\alpha}\dot{\beta}} (\tilde{\lambda}_k)_{\dot{\beta}} (\lambda_j)_{\beta} \epsilon^{\beta \alpha} (\lambda_l)_{\alpha} \\ & = 2 [ i k ] \langle l j \rangle \,. \end{aligned} \end{aligned} $$
(5.39)
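
The identities of this exercise lend themselves to a quick numerical test. In the sketch below (our own addition) the brackets are defined through \(2\times2\) determinants in one self-consistent sign convention, which may differ from the book's conventions by overall signs:

```python
import numpy as np

rng = np.random.default_rng(7)
cplx = lambda: rng.normal(size=2) + 1j * rng.normal(size=2)
l  = [cplx() for _ in range(4)]          # lambda_i       (lower undotted index)
lt = [cplx() for _ in range(4)]          # tilde-lambda_i (lower dotted index)

pauli = [np.array(m, dtype=complex) for m in
         ([[0, 1], [1, 0]], [[0, -1j], [1j, 0]], [[1, 0], [0, -1]])]
sigma_bar = [np.eye(2, dtype=complex)] + [-m for m in pauli]
eta = np.diag([1.0, -1.0, -1.0, -1.0])
mink = lambda u, v: u @ eta @ v

ang = lambda i, j: l[i][0]*l[j][1] - l[i][1]*l[j][0]        # <ij>
sq  = lambda i, j: lt[j][0]*lt[i][1] - lt[j][1]*lt[i][0]    # [ij]
cur = lambda i, j: np.array([lt[i] @ sigma_bar[m] @ l[j] for m in range(4)])  # [i|gamma^mu|j>

p = [0.5 * cur(i, i) for i in range(4)]                     # relation (d): [i|gamma^mu|i> = 2 p_i^mu
assert all(np.isclose(mink(q, q), 0) for q in p)            # the momenta are null
assert np.isclose(ang(0, 1) * sq(1, 0), 2 * mink(p[0], p[1]))   # s_ij = <ij>[ji]
# Schouten identity: lambda_1 <23> + lambda_2 <31> + lambda_3 <12> = 0
assert np.allclose(l[0]*ang(1, 2) + l[1]*ang(2, 0) + l[2]*ang(0, 1), 0)
# Fierz rearrangement (5.39): [1|gamma^mu|2> [3|gamma_mu|4> = 2 [13] <42>
assert np.isclose(mink(cur(0, 1), cur(2, 3)), 2 * sq(0, 2) * ang(3, 1))
print("spinor identities verified numerically")
```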
Exercise 1.6: Lorentz Generators in the Spinor-Helicity Formalism
  1. (a)

    The Lorentz generators in the scalar representation are obtained by setting to zero the x-independent representation matrices \(S^{\mu \nu }\) in Eq. (1.10):

    $$\displaystyle \begin{aligned} M^{\mu\nu} = {\mathrm{i}}\, \left (x^{\mu}\,\frac{\partial}{\partial x_{\nu}} - x^{\nu}\,\frac{\partial}{\partial x_{\mu}} \right ) \, . \end{aligned} $$
    (5.40)

    We act with \(M^{\mu \nu }\) on a generic function \(f(x)\), which we express in terms of its Fourier transform as \(f(x) = \int {\mathrm {d}}^4 p \, \mathrm {e}^{{\mathrm {i}} p\cdot x} \tilde {f}(p)\). By integrating by parts and using \(x^{\mu } \, \mathrm {e}^{{\mathrm {i}} p\cdot x} = - {\mathrm {i}} \frac {\partial }{\partial p_{\mu }} \, \mathrm {e}^{{\mathrm {i}} p\cdot x}\) we obtain

    $$\displaystyle \begin{aligned} M^{\mu \nu} f(x) = \int {\mathrm{d}}^4 p \, \mathrm{e}^{{\mathrm{i}} p\cdot x} \tilde{M}^{\mu \nu} \tilde{f}(p) \,, \end{aligned} $$
    (5.41)

    where

    $$\displaystyle \begin{aligned} \tilde{M}^{\mu \nu} = {\mathrm{i}} \left( p^{\mu} \frac{\partial}{\partial p_{\nu}} - p^{\nu} \frac{\partial}{\partial p_{\mu}} \right) \end{aligned} $$
    (5.42)

    is the momentum-space realisation of the Lorentz generators. Indeed, one can verify that this form of the generators satisfies the commutation relations of the Poincaré algebra in Eqs. (1.8) and (1.9).

  2. (b)

    We begin with \(m_{\alpha \beta }\). It is instructive to spell out the indices of \(S^{\mu \nu }_{\mathrm {L}}\),

    $$\displaystyle \begin{aligned} \left(S^{\mu\nu}_{\mathrm{L}}\right)_{\alpha\beta} = \frac{{\mathrm{i}}}{4} \epsilon_{\beta\gamma} \left[ \left(\sigma^{\mu}\right)_{\alpha\dot\alpha} \left(\bar{\sigma}^{\nu}\right)^{\dot\alpha\gamma} - \left(\sigma^{\nu}\right)_{\alpha\dot\alpha} \left(\bar{\sigma}^{\mu}\right)^{\dot\alpha\gamma} \right] \,. \end{aligned} $$
    (5.43)

    Contracting it with \(\tilde {M}_{\mu \nu }\) and doing a little spinor algebra gives

    $$\displaystyle \begin{aligned} {} m_{\alpha\beta} = \frac{1}{2} \lambda_{\beta} \left(\sigma^{\mu}\right)_{\alpha\dot\alpha} \tilde{\lambda}^{\dot\alpha} \frac{\partial}{\partial p^{\mu}} + \frac{1}{2} \lambda_{\alpha} \tilde{\lambda}_{\dot \alpha} \left(\bar{\sigma}^{\mu}\right)^{\dot\alpha\gamma} \epsilon_{\gamma\beta} \frac{\partial}{\partial p^{\mu}} \,. \end{aligned} $$
    (5.44)

    We now need to express the derivatives with respect to \(p^{\mu }\) in terms of derivatives with respect to \(\lambda ^{\alpha }\) and \(\tilde {\lambda }^{\dot \alpha }\). For this purpose, we use the identity \(p^{\mu } = \tilde {\lambda }^{\dot \alpha } \lambda ^{\alpha } \left (\sigma ^{\mu }\right )_{\alpha \dot \alpha }/2\) (see Exercise 1.5), which allows us to use the chain rule,

    $$\displaystyle \begin{aligned} {} \frac{\partial}{\partial \lambda^{\alpha}} = \frac{\partial p^{\mu}}{\partial \lambda^{\alpha}} \frac{\partial}{\partial p^{\mu}} = \frac{1}{2} \left(\sigma^{\mu}\right)_{\alpha\dot\alpha} \tilde{\lambda}^{\dot\alpha} \frac{\partial}{\partial p^{\mu}} \,. \end{aligned} $$
    (5.45)

    This takes care of the first term on the RHS of Eq. (5.44). For the second term, we do the same but with the equivalent identity \(p^{\mu } = \lambda _{\alpha } \tilde {\lambda }_{\dot \alpha } \left (\bar {\sigma }^{\mu }\right )^{\dot \alpha \alpha }/2\). Using that \(\frac {\partial \lambda _{\beta }}{\partial \lambda ^{\alpha }} = \epsilon _{\beta \alpha }\) we obtain

    $$\displaystyle \begin{aligned} {} \frac{\partial}{\partial \lambda^{\alpha}} = \frac{1}{2} \tilde{\lambda}_{\dot\beta} \left(\bar{\sigma}^{\mu}\right)^{\dot\beta \beta} \epsilon_{\beta \alpha} \frac{\partial}{\partial p^{\mu}} \,. \end{aligned} $$
    (5.46)

    Substituting Eqs. (5.45) and (5.46) into Eq. (5.44) finally gives the desired expression of \(m_{\alpha \beta }\). The computation of \(\overline {m}_{\dot \alpha \dot \beta }\) is analogous.

  3. (c)

    The n-particle generators are given by

    $$\displaystyle \begin{aligned} \begin{aligned} & m_{\alpha \beta} = \sum_{k=1}^n \left( \lambda_{k \alpha} \frac{\partial}{\partial \lambda_k^{\beta}} + \lambda_{k \beta} \frac{\partial}{\partial \lambda_k^{\alpha}}\right) \,, \quad \overline{m}_{\dot\alpha \dot\beta} = \sum_{k=1}^n \left( \tilde{\lambda}_{k \dot\alpha} \frac{\partial}{\partial \tilde{\lambda}_k^{\dot\beta}} + \tilde{\lambda}_{k \dot\beta} \frac{\partial}{\partial \tilde{\lambda}_k^{\dot\alpha}}\right) \,, \\ & \tilde{M}^{\mu \nu} = {\mathrm{i}} \sum_{k=1}^n \left( p_k^{\mu} \frac{\partial}{\partial p_{k \nu}} - p_k^{\nu} \frac{\partial}{\partial p_{k \mu}} \right) \,. \end{aligned} \end{aligned} $$
    (5.47)

    We act with \(m_{\alpha \beta }\) and \(\overline {m}_{\dot \alpha \dot \beta }\) on \(\langle ij \rangle = \lambda _i^{\gamma } \lambda _{j\,\gamma }\) and \([ ij ] = \tilde {\lambda }_{i \, \dot \gamma } \tilde {\lambda }_{j}^{\dot \gamma }\). \(\langle ij \rangle \) (\([ij]\)) depends only on the \(\lambda _i\) (\(\tilde {\lambda }_i\)) spinors, and is thus trivially annihilated by \(\overline {m}_{\dot \alpha \dot \beta }\) (\(m_{\alpha \beta }\)). With a bit more spinor algebra we can show that \(\langle ij \rangle \) is annihilated also by \(m_{\alpha \beta }\),

    $$\displaystyle \begin{aligned} \begin{aligned} m_{\alpha \beta} \langle ij \rangle & = \sum_{k=1}^n \left[ \delta_{ik} \delta^{\gamma}_{\, \beta} \lambda_{k \, \alpha} \lambda_{j \, \gamma} + \delta_{jk} \epsilon_{\gamma \beta} \lambda_{k \, \alpha} \lambda_i^{\gamma} + \left( \alpha \leftrightarrow \beta \right) \right] = \\ & = \lambda_{i \, \alpha} \lambda_{j \, \beta} - \lambda_{i \, \beta} \lambda_{j \, \alpha} + \left( \alpha \leftrightarrow \beta \right) = 0 \,. \end{aligned} \end{aligned} $$
    (5.48)

    Similarly we can show that \(\overline {m}_{\dot \alpha \dot \beta } [ij] =0 \). The Lorentz generators are first-order differential operators. As a result, any function of a Lorentz-invariant object is Lorentz invariant as well. We can thus immediately conclude that \(s_{ij} = \langle ij\rangle [ji]\) is annihilated by \(m_{\alpha \beta }\) and \(\overline {m}_{\dot \alpha \dot \beta }\). Alternatively, we can show that

    $$\displaystyle \begin{aligned} \tilde{M}_{\mu \nu} s_{ij} = 2 {\mathrm{i}} \left[ p_{i \mu} p_{j \nu} + p_{i \nu} p_{j \mu} - \left(\mu \leftrightarrow \nu \right) \right] = 0 \,. \end{aligned} $$
    (5.49)
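
A finite counterpart of this infinitesimal statement is that an SL(2,\(\mathbb{C}\)) rotation of the \(\lambda\) spinors, \(\lambda_i\to M\lambda_i\) with \(\det M=1\), leaves the angle bracket invariant, since the bracket is a \(2\times2\) determinant. The short sketch below (our own illustration) checks this:

```python
import numpy as np

rng = np.random.default_rng(6)
cplx = lambda shape: rng.normal(size=shape) + 1j * rng.normal(size=shape)
M = cplx((2, 2))
M = M / np.sqrt(np.linalg.det(M))            # enforce det M = 1
l1, l2 = cplx(2), cplx(2)
ang = lambda a, b: a[0]*b[1] - a[1]*b[0]     # <12> as a 2x2 determinant
assert np.isclose(np.linalg.det(M), 1)
assert np.isclose(ang(M @ l1, M @ l2), ang(l1, l2))
print("angle bracket invariant under SL(2,C)")
```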
Exercise 1.7: Gluon Polarisations
  1. (a)

    In order to construct an explicit expression for the polarisation vectors we will write a general ansatz and apply constraints to fix all free coefficients. The polarisation vector \(\epsilon ^{\dot {\alpha }\alpha }_i\) is a four-dimensional object which satisfies constraints involving the corresponding external momentum \(p_i^{\dot {\alpha } \alpha } = \tilde {\lambda }_i^{\dot {\alpha }} \lambda _i^{\alpha }\) and reference vector \(r_i^{\dot {\alpha } \alpha } = \tilde {\mu }_i^{\dot {\alpha }} \mu _i^{\alpha }\). For generic kinematics, i.e. for \(p_i \cdot r_i \neq 0\) (and thus \(\langle \lambda _i \mu _i \rangle \neq 0\) and \([ \tilde {\lambda }_i \tilde {\mu }_i ]\neq 0\)), one can show that \(\tilde {\lambda }_i^{\dot {\alpha }} \lambda _i^{\alpha }\), \(\tilde {\mu }_i^{\dot {\alpha }} \mu _i^{\alpha }\), \(\tilde {\lambda }_i^{\dot {\alpha }} \mu _i^{\alpha }\) and \(\tilde {\mu }_i^{\dot {\alpha }} \lambda _i^{\alpha }\) are linearly independent, and thus form a basis in which we can expand \(\epsilon ^{\dot {\alpha }\alpha }_i\). Our ansatz for \(\epsilon ^{\dot {\alpha }\alpha }_i\) therefore is

    $$\displaystyle \begin{aligned} \epsilon^{\dot{\alpha}\alpha}_i = c_1 \, \tilde{\lambda}_i^{\dot{\alpha}} \lambda_i^{\alpha} + c_2 \, \tilde{\mu}_i^{\dot{\alpha}} \mu_i^{\alpha} + c_3 \, \tilde{\lambda}_i^{\dot{\alpha}} \mu_i^{\alpha} + c_4 \, \tilde{\mu}_i^{\dot{\alpha}} \lambda_i^{\alpha} \,. \end{aligned} $$
    (5.50)

    The transversality and the gauge choice,

    $$\displaystyle \begin{aligned} \begin{aligned} & \epsilon^{\dot{\alpha}\alpha}_i \left(p_i\right)_{\alpha \dot{\alpha}} = c_2 \langle \mu_i \lambda_i \rangle [ \tilde{\lambda}_i \tilde{\mu}_i ] = 0 \,, \\ & \epsilon^{\dot{\alpha}\alpha}_i \left(r_i\right)_{\alpha \dot{\alpha}} = c_1 \langle \lambda_i \mu_i \rangle [ \tilde{\mu}_i \tilde{\lambda}_i ] = 0 \,, \end{aligned} \end{aligned} $$
    (5.51)

    imply that \(c_1 = c_2 = 0\). The light-like condition,

    $$\displaystyle \begin{aligned} \epsilon^{\dot{\alpha}\alpha}_i \left(\epsilon_i\right)_{\alpha \dot{\alpha}}= 2 c_3 c_4 \langle \lambda_i \mu_i \rangle [ \tilde{\lambda}_i \tilde{\mu}_i ] = 0 \,, \end{aligned} $$
    (5.52)

    has two solutions: \(c_3=0\) and \(c_4 = 0\). We parametrise the two solutions as

    $$\displaystyle \begin{aligned} \epsilon_{A,i}^{\dot{\alpha}\alpha} = n_A \tilde{\lambda}_i^{\dot{\alpha}} \mu_i^{\alpha} \,, \qquad \quad \epsilon_{B,i}^{\dot{\alpha}\alpha} = n_B \tilde{\mu}_i^{\dot{\alpha}} \lambda_i^{\alpha} \,. \end{aligned} $$
    (5.53)

    Next, we normalise the two solutions such that \(\epsilon _{A,i} \cdot \epsilon _{B,i}=-1\) and \(\epsilon _{A,i}^* = \epsilon _{B,i}\). This implies that

    $$\displaystyle \begin{aligned} {} n_A n_B = \frac{-\sqrt{2}}{\langle \lambda_i \mu_i \rangle} \frac{\sqrt{2}}{[ \tilde{\lambda}_i \tilde{\mu}_i ]} \,, \qquad \qquad n_A^*=n_B \,. \end{aligned} $$
    (5.54)

    There is now some freedom in fixing \(n_A\) and \(n_B\), which we must use to ensure that the two solutions have the correct helicity scaling. We may parametrise \(n_A = n \, \mathrm {e}^{{\mathrm {i}} \varphi }\) and \(n_B = n \, \mathrm {e}^{-{\mathrm {i}} \varphi }\) with real n and \(\varphi \), and fix the phase \(\varphi \) by requiring that the solutions are eigenvectors of the helicity operator. It is however simpler to follow a heuristic approach. Recalling that \(\langle \lambda _i \mu _i \rangle ^* = - [ \tilde {\lambda }_i \tilde {\mu }_i ]\), we notice that a particularly simple solution to the constraints (5.54) is given by \(n_A = -\sqrt {2}/\langle \lambda _i \mu _i \rangle \) and \(n_B = \sqrt {2}/[ \tilde {\lambda }_i \tilde {\mu }_i ]\). Following this guess, we have two fully determined vectors which satisfy all constraints of the polarisation vectors:

    $$\displaystyle \begin{aligned} \epsilon_{A,i}^{\dot{\alpha}\alpha} = -\sqrt{2} \, \frac{\tilde{\lambda}_i^{\dot{\alpha}} \mu_i^{\alpha}}{\langle \lambda_i \mu_i \rangle} \,, \qquad \quad \epsilon_{B,i}^{\dot{\alpha}\alpha} = \sqrt{2} \, \frac{\tilde{\mu}_i^{\dot{\alpha}} \lambda_i^{\alpha}}{[ \tilde{\lambda}_i \tilde{\mu}_i ]} \,. \end{aligned} $$
    (5.55)

    Finally, we need to check that \(\epsilon _{A,i}^{\dot {\alpha }\alpha }\) and \(\epsilon _{B,i}^{\dot {\alpha }\alpha }\) are indeed eigenvectors of the helicity generator h in Eq. (1.122), which in this case takes the form

    $$\displaystyle \begin{aligned} h = \frac{1}{2} \left[ -\lambda_{i}^{\alpha}\frac{\partial}{\partial \lambda_{i}^{\alpha}} -\mu_{i}^{\alpha}\frac{\partial}{\partial \mu_{i}^{\alpha}} + \tilde\lambda_{i}^{{\dot{\alpha}}}\frac{\partial}{\partial \tilde\lambda_{i}^{{\dot{\alpha}}}} + \tilde{\mu}_{i}^{{\dot{\alpha}}}\frac{\partial}{\partial \tilde{\mu}_{i}^{{\dot{\alpha}}}} \right]\, . \end{aligned} $$
    (5.56)

    The explicit computation yields that

    $$\displaystyle \begin{aligned} h \epsilon_{A,i}^{\dot{\alpha}\alpha} = +\epsilon_{A,i}^{\dot{\alpha}\alpha}\,, \qquad \quad h \epsilon_{B,i}^{\dot{\alpha}\alpha} = -\epsilon_{B,i}^{\dot{\alpha}\alpha}\,. \end{aligned} $$
    (5.57)

    We can therefore identify \(\epsilon _{A,i}^{\dot {\alpha }\alpha }=\epsilon _{+,i}^{\dot {\alpha }\alpha }\) and \(\epsilon _{B,i}^{\dot {\alpha }\alpha }=\epsilon _{-,i}^{\dot {\alpha }\alpha }\), which completes the derivation.

  2. (b)

    We rewrite the spinor expressions for the polarisation vectors as Lorentz vectors using the identities of Exercise 1.5,

    $$\displaystyle \begin{aligned} \epsilon_{+,i}^{\mu} = -\frac{1}{\sqrt{2}} \frac{ \mu_i^{\alpha} (\sigma^{\mu})_{\alpha\dot{\alpha}} \tilde{\lambda}_i^{\dot{\alpha}}}{\langle \lambda_i \mu_i \rangle} \,, \qquad \quad \epsilon_{-,i}^{\mu} = \frac{1}{\sqrt{2}} \frac{ \lambda_i^{\alpha} (\sigma^{\mu})_{\alpha\dot{\alpha}} \tilde{\mu}_i^{\dot{\alpha}}}{[ \tilde{\lambda}_i \tilde{\mu}_i ]} \,. \end{aligned} $$
    (5.58)

    Plugging these expressions into the polarisation sum gives

    $$\displaystyle \begin{aligned} \sum_{h=\pm} \epsilon_{h,i}^{\mu} \epsilon_{h,i}^{* \nu} = -\frac{1}{2} \frac{ (\sigma^{\mu})_{\alpha\dot{\alpha}} \tilde{\lambda}_i^{\dot{\alpha}} \lambda_i^{\beta} (\sigma^{\nu})_{\beta\dot{\beta}} \tilde{\mu}_i^{\dot{\beta}} \mu_i^{\alpha} + \left(\mu \leftrightarrow \nu \right)}{\langle \lambda_i \mu_i \rangle [ \tilde{\lambda}_i \tilde{\mu}_i ]} \,. \end{aligned} $$
    (5.59)

    Next, we rewrite the numerator in terms of traces as

    $$\displaystyle \begin{aligned} \sum_{h=\pm} \epsilon_{h,i}^{\mu} \epsilon_{h,i}^{* \nu} = -\frac{1}{2} \frac{\text{Tr}\left[\sigma^{\mu} p_i \sigma^{\nu} r_i \right]+ \left(\mu \leftrightarrow \nu \right)}{\langle \lambda_i \mu_i \rangle [ \tilde{\lambda}_i \tilde{\mu}_i ]} \,. \end{aligned} $$
    (5.60)

    We then use the identity (1.29) to rewrite the trace of Pauli matrices in terms of Dirac matrices. Finally, by using

    $$\displaystyle \begin{aligned} {} \begin{aligned} & {\mathrm{Tr}}\left(\gamma^{\mu} \gamma^{\nu} \gamma^{\rho} \gamma^{\tau}\right) = 4 \left(\eta^{\mu \nu} \eta^{\rho \tau} - \eta^{\mu \rho} \eta^{\nu \tau} + \eta^{\mu \tau} \eta^{\nu \rho} \right) \,, \\ & {\mathrm{Tr}}\left(\gamma^{\mu} \gamma^{\nu} \gamma^{\rho} \gamma^{\tau} \gamma_5 \right) = - 4 \, {\mathrm{i}} \, \epsilon^{\mu \nu \rho \tau} \,, \end{aligned} \end{aligned} $$
    (5.61)

    we obtain

    $$\displaystyle \begin{aligned} \sum_{h=\pm} \epsilon_{h,i}^{\mu} \epsilon_{h,i}^{* \nu} = -\eta^{\mu\nu} + \frac{p_i^{\mu} r_i^{\nu}+p_i^{\nu} r_i^{\mu}}{p_i \cdot r_i} \,. \end{aligned} $$
    (5.62)
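
As a sanity check of the completeness relation (5.62), one can evaluate both sides for an explicit configuration. The sketch below (our own addition) uses a momentum along the \(z\) axis, a reference vector along \(-z\), and the standard circular polarisation vectors; this explicit choice is an assumption made here for the test and is not derived from the spinor construction above:

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])
E, Er = 3.0, 5.0
p = np.array([E, 0, 0, E])                    # massless momentum along +z
r = np.array([Er, 0, 0, -Er])                 # reference vector along -z
eps_p = np.array([0, 1, 1j, 0]) / np.sqrt(2)  # circular polarisations for this p, r
eps_m = eps_p.conj()

lhs = np.outer(eps_p, eps_p.conj()) + np.outer(eps_m, eps_m.conj())
pr = p @ eta @ r
rhs = -eta + (np.outer(p, r) + np.outer(r, p)) / pr
assert np.allclose(lhs, rhs)
print("polarisation sum matches Eq. (5.62) for this configuration")
```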
Exercise 1.8: Colour-Ordered Feynman Rules

We start from the full Feynman rule four-point vertex (1.66) contracted with dummy polarisation vectors \(\epsilon _{i}\),

$$\displaystyle \begin{aligned} V_{4}&= -{\mathrm{i}} g^2\, f^{abe}\, f^{cde}\, \left[ (\epsilon_{1}\cdot\epsilon_{3})(\epsilon_{2}\cdot\epsilon_{4})- (\epsilon_{1}\cdot\epsilon_{2})(\epsilon_{3}\cdot\epsilon_{4}) \right] +\text{cyclic}\,, \end{aligned} $$
(5.63)

and use \(f^{abe}\, f^{cde}=-{\mathrm {Tr}}([T^{a},T^{b}]\, [T^{c},T^{d}]) / 2\), which is obtained from Eqs. (1.48) and (1.51). Note that the \(U(1)\) piece cancels out here. Expanding out the commutators in the traces and collecting terms of identical colour ordering gives

$$\displaystyle \begin{aligned} V_{4}&= \frac{{\mathrm{i}} g^{2}}{2} {\mathrm{Tr}}\left(T^{a}T^{b}T^{c}T^{d}\right)\, \big[ 2(\epsilon_{1}\cdot\epsilon_{2})(\epsilon_{3}\cdot\epsilon_{4}) -(\epsilon_{1}\cdot\epsilon_{3})(\epsilon_{2}\cdot\epsilon_{4})\\ & \qquad -(\epsilon_{1}\cdot\epsilon_{4})(\epsilon_{2}\cdot\epsilon_{3}) \big]+\text{cyclic}\, , \end{aligned} $$
(5.64)

which is the result quoted in Eq. (1.149).

Exercise 1.9: Independent Gluon Partial Amplitudes
  1. (a)

    Taking parity and cyclicity into account we have the following independent four-gluon tree-level amplitudes:

    $$\displaystyle \begin{aligned} A^{\text{tree}}_{4}(1^{+},2^{+},3^{+},4^{+}) \, ,\qquad & A^{\text{tree}}_{4}(1^{-},2^{+},3^{+},4^{+}) \, , \\ A^{\text{tree}}_{4}(1^{-},2^{-},3^{+},4^{+})\, , \qquad & A^{\text{tree}}_{4}(1^{-},2^{+},3^{-},4^{+})\, . \end{aligned} $$

    The last two are related via the \(U(1)\) decoupling theorem as

    $$\displaystyle \begin{aligned} \begin{aligned} A^{\text{tree}}_{4}(1^{-},2^{+},3^{-},4^{+}) &= - A^{\text{tree}}_{4}(1^{-},2^{+},4^{+},3^{-}) - A^{\text{tree}}_{4}(1^{-},4^{+},2^{+},3^{-}) \\ &= - A^{\text{tree}}_{4}(3^{-},1^{-},2^{+},4^{+}) - A^{\text{tree}}_{4}(3^{-},1^{-},4^{+},2^{+}) \, . \end{aligned} \end{aligned} $$
    (5.65)

    Hence only the three amplitudes \(A^{\text{tree}}_{4}(1^{+},2^{+},3^{+},4^{+})\), \(A^{\text{tree}}_{4}(1^{-},2^{+},3^{+},4^{+})\) and \(A^{\text{tree}}_{4}(1^{-},2^{-},3^{+},4^{+}) \) are independent. In fact the first two of this list vanish, so there is only one independent four-gluon amplitude at tree-level to be computed.

  2. (b)

    Moving on to the five-gluon case, we have the four cyclic and parity independent amplitudes

    $$\displaystyle \begin{aligned} A^{\text{tree}}_{5}(1^{+},2^{+},3^{+},4^{+},5^{+}) \, ,\qquad & A^{\text{tree}}_{5}(1^{-},2^{+},3^{+},4^{+},5^{+}) \, , \\ A^{\text{tree}}_{5}(1^{-},2^{-},3^{+},4^{+},5^{+})\, , \qquad & A^{\text{tree}}_{5}(1^{-},2^{+},3^{-},4^{+},5^{+})\, . \end{aligned} $$

    Looking at the following \(U(1)\) decoupling relation we may again relate the last amplitude in the above list to the third one

    $$\displaystyle \begin{aligned} \begin{aligned} &A^{\text{tree}}_{5}(2^{+},3^{-},4^{+},5^{+},1^{-}) = \\ & - A^{\text{tree}}_{5}(3^{-},2^{+},4^{+},5^{+},1^{-}) - A^{\text{tree}}_{5}(3^{-},4^{+},2^{+},5^{+},1^{-})\\ &- A^{\text{tree}}_{5}(3^{-},4^{+},5^{+},2^{+},1^{-}) \\ &= - A^{\text{tree}}_{5}(1^{-},3^{-},2^{+},4^{+},5^{+}) - A^{\text{tree}}_{5}(1^{-},3^{-},4^{+},2^{+},5^{+})\\ &- A^{\text{tree}}_{5}(1^{-},3^{-},4^{+},5^{+},2^{+}) \,. \end{aligned} \end{aligned} $$
    (5.66)

    Hence also for the five-gluon case there are only three independent amplitudes: \(A^{\text{tree}}_{5}(1^{+},2^{+},3^{+},4^{+},5^{+})\), \(A^{\text{tree}}_{5}(1^{-},2^{+},3^{+},4^{+},5^{+})\), \(A^{\text{tree}}_{5}(1^{-},2^{-},3^{+},\)\(4^{+},5^{+})\). The first two in this list vanish, leaving us with one independent and non-trivial five-gluon tree-level amplitude, of the MHV type.

Exercise 1.10: The \(\overline {\text{MHV}}_3\) Amplitude

Using the three-point vertex in Eq. (1.149) we obtain

$$\displaystyle \begin{aligned} {} \begin{aligned} & A_3^{\text{tree}}\left(1^+, 2^+, 3^-\right) = \frac{{\mathrm{i}} g}{\sqrt{2}} \big\{ (\epsilon_{+,1}\cdot \epsilon_{+,2})\, (p_1-p_2)\cdot \epsilon_{-,3} \\ & \quad + (\epsilon_{+,2}\cdot \epsilon_{-,3}) \, (p_2-p_3) \cdot \epsilon_{+,1} + (\epsilon_{-,3}\cdot \epsilon_{+,1})\, (p_3-p_1) \cdot \epsilon_{+,2} \big\} \,. \end{aligned} \end{aligned} $$
(5.67)

Choosing the same reference momentum for all polarisations, \(r^{\dot \alpha \alpha } = \tilde {\mu }^{\dot \alpha } \mu ^{\alpha }\), we have

$$\displaystyle \begin{aligned} \epsilon_{+,1} \cdot \epsilon_{+,2} = 0 \,, \quad \epsilon_{+,i} \cdot \epsilon_{-,j} = - \frac{\langle\mu j\rangle [\mu i]}{\langle i \mu\rangle [j\mu]}\, , \quad (p_{i}-p_{j})\cdot\epsilon_{+,k}= \sqrt{2}\, \frac{ [ki]\langle\mu i\rangle}{\langle k\mu\rangle}\,, \end{aligned} $$
(5.68)

where we used that \(p_i+p_j+p_k=0\). Substituting these into Eq. (5.67) yields

$$\displaystyle \begin{aligned} {} \begin{aligned} A_3^{\text{tree}}\left(1^+, 2^+, 3^-\right) & = {\mathrm{i}} g \frac{ \langle \mu 3 \rangle }{[3\mu] \langle 1 \mu \rangle \langle 2 \mu \rangle } \Big( [12][\mu2]\langle 2 \mu \rangle - [1\mu] \overbrace{ \langle \mu 3 \rangle[32] }^{-\langle \mu 1 \rangle[12] }\Big) \\ & = {\mathrm{i}} g \frac{ \langle \mu 3 \rangle [12] }{[3\mu] \langle 1 \mu \rangle \langle 2 \mu \rangle } \Big( \overbrace{ [\mu2]\langle 2 \mu \rangle + \langle \mu 1 \rangle [1\mu]}^{-[\mu3]\langle 3 \mu \rangle} \Big) \\ & = -{\mathrm{i}} g \frac{\langle\mu 3\rangle^{2} [12]}{\langle1\mu\rangle\langle2\mu\rangle} \, . \end{aligned} \end{aligned} $$
(5.69)

Since the left-handed spinors are collinear, we may set \(\lambda _{2}=a\lambda _{1}\) and \(\lambda _{3}=b\lambda _{1}\). Momentum conservation \(\lambda _{1}(\tilde \lambda _{1} + a\tilde \lambda _{2}+b\tilde \lambda _{3})=0\) then implies that \(a = [31]/ [23]\) and \(b= [12]/ [23]\). Substituting these into Eq. (5.69) finally gives

$$\displaystyle \begin{aligned} A_3^{\text{tree}}\left(1^+, 2^+, 3^-\right) = -{\mathrm{i}} g \frac{ [12]^{3}}{ [23] [31]}\, . \end{aligned} $$
(5.70)
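
The final simplification, from Eq. (5.69) to Eq. (5.70), can be confirmed numerically on random three-point kinematics with proportional angle spinors. The sketch below (our own cross-check, with the constant prefactor \(-{\mathrm{i}}g\) dropped and brackets defined in one self-consistent convention) does this:

```python
import numpy as np

rng = np.random.default_rng(1)
cplx = lambda n: rng.normal(size=n) + 1j * rng.normal(size=n)
ang = lambda a, b: a[0]*b[1] - a[1]*b[0]      # <ab>
sq  = lambda a, b: b[0]*a[1] - b[1]*a[0]      # [ab]

lt = [cplx(2) for _ in range(3)]              # generic square-bracket spinors
l1, mu = cplx(2), cplx(2)                     # lambda_1 and the reference spinor
a = sq(lt[2], lt[0]) / sq(lt[1], lt[2])       # a = [31]/[23]
b = sq(lt[0], lt[1]) / sq(lt[1], lt[2])       # b = [12]/[23]
l = [l1, a * l1, b * l1]                      # collinear angle spinors

# momentum conservation lambda_1 (lt_1 + a lt_2 + b lt_3) = 0 holds by Schouten
assert np.allclose(lt[0] + a * lt[1] + b * lt[2], 0)

lhs = ang(mu, l[2])**2 * sq(lt[0], lt[1]) / (ang(l[0], mu) * ang(l[1], mu))   # Eq. (5.69)
rhs = sq(lt[0], lt[1])**3 / (sq(lt[1], lt[2]) * sq(lt[2], lt[0]))             # Eq. (5.70)
assert np.isclose(lhs, rhs)
print("Eq. (5.69) reduces to Eq. (5.70) on three-point kinematics")
```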
Exercise 1.11: Four-Point Quark-Gluon Scattering

There are two colour-ordered diagrams contributing to \(A_{\bar {q}qgg}^{\text{tree}}(1^{-}_{\bar q}, 2^{+}_{q}, 3^{-}, 4^{+})\):

(5.71)

The first graph (I) is proportional to which vanishes for the reference-vector choice \(\mu _{4}\, \tilde \mu _{4}=p_{1}\). This is so as

(5.72)

Evaluating the second graph (II) with the colour-ordered Feynman rules we obtain

(5.73)

where \(q=p_1+p_2\) and \(p_{ij} = p_i-p_j\). The term \((2)\) vanishes for our choice \(\mu _{4}\, \tilde \mu _{4}=p_{1}\),

(5.74)

For the term \((3)\), we note that to find

(5.75)

which is killed by the choice \(\mu _3\tilde {\mu }_3 = p_2\). Hence, for this choice of reference vectors only the term \((1)\) in Eq. (5.73) contributes. One has

$$\displaystyle \begin{aligned} \epsilon_{-,3}\cdot\epsilon_{+,4}= - \frac{\langle\mu_{4}3\rangle\, [\mu_{3}\,4]}{\langle4\mu_{4}\rangle \, [3\mu_{3}]} \stackrel{\mu_{3}=\lambda_2, \mu_{4}=\lambda_1}{=} -\frac{\langle13\rangle\, [24]}{\langle41\rangle\, [32]}\, , \end{aligned} $$
(5.76)

and, using momentum conservation,

(5.77)

Inserting these into the term \((1)\) of Eq. (5.73) and using \(q^{2}=\langle 12\rangle [21]\) yields

$$\displaystyle \begin{aligned} A^{\text{tree}}_{\bar q q gg}(1^-_{\bar{q}}, 2^+_q, 3^-, 4^+) &= - {\mathrm{i}} g^2 \frac{\langle13\rangle^{2}}{\langle12\rangle\langle41\rangle} \, \overbrace{\frac{ [24]\langle43\rangle}{ [21]\langle43\rangle}}^{- [21]\langle13\rangle} = - {\mathrm{i}} g^2 \frac{\langle13\rangle^{3}\langle23\rangle}{\langle12\rangle\langle23\rangle\langle34\rangle\langle41\rangle}\,, \end{aligned} $$
(5.78)

as claimed. The helicity count of our result is straightforward and correct:

$$\displaystyle \begin{aligned} &h_{1} [ A^{\text{tree}}_{\bar q q gg} ]=-\frac{1}{2}(3-1-1)=-{\textstyle\frac{1}{2}} \,, \qquad & h_{2}[A^{\text{tree}}_{\bar q q gg}]=-{\textstyle\frac{1}{2}}(1-1-1)=+{\textstyle\frac{1}{2}}\, , \\ & h_{3}[A^{\text{tree}}_{\bar q q gg} ]=-{\textstyle\frac{1}{2}}(4-1-1)=-1\, , \qquad & h_{4}[A^{\text{tree}}_{\bar q q gg}]=-{\textstyle\frac{1}{2}}(0-1-1)=+1\,. \end{aligned} $$
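
The helicity count can equivalently be phrased as a little-group scaling test. The sketch below (our own cross-check, dropping the constant prefactor) rescales each \(\lambda_i\) and verifies that the result (5.78) scales as \(\prod_i t_i^{-2h_i}\):

```python
import numpy as np

rng = np.random.default_rng(2)
l = rng.normal(size=(5, 2)) + 1j * rng.normal(size=(5, 2))   # l[1..4] are used
ang = lambda i, j: l[i][0]*l[j][1] - l[i][1]*l[j][0]
# holomorphic part of Eq. (5.78), without the prefactor -i g^2
amp = lambda: ang(1, 3)**3 * ang(2, 3) / (ang(1, 2)*ang(2, 3)*ang(3, 4)*ang(4, 1))

a0 = amp()
t = np.array([0, 1.3, 0.7, 2.1, 0.5])                        # t[0] unused
h = {1: -0.5, 2: 0.5, 3: -1.0, 4: 1.0}
for i in (1, 2, 3, 4):
    l[i] = t[i] * l[i]                                       # lambda_i -> t_i lambda_i
assert np.isclose(amp(), a0 * np.prod([t[i]**(-2*h[i]) for i in (1, 2, 3, 4)]))
print("little-group weights of Eq. (5.78) confirmed")
```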
Exercise 2.1: The Vanishing Splitting Function \(\mathrm {Split}^{\mathrm {tree}}_+(x,a^+,b^+)\)

We parametrise the collinear limit \(5^+ \parallel 6^+\) by

$$\displaystyle \begin{aligned} \lambda_5 = \sqrt{x} \, \lambda_P \,, \qquad \tilde{\lambda}_5 = \sqrt{x} \, \tilde{\lambda}_P \,, \qquad \lambda_6 = \sqrt{1-x} \, \lambda_P \,, \qquad \tilde{\lambda}_6 = \sqrt{1-x} \, \tilde{\lambda}_P \,, \end{aligned} $$
(5.79)

with \(P = \lambda _P \tilde {\lambda }_P = p_5 + p_6\). Substituting this into the Parke-Taylor formula (1.192) for \(A_{6}^{\mathrm {tree}}(1^{-}, 2^{-}, 3^{+}, 4^{+}, 5^{+}, 6^{+})\) gives

$$\displaystyle \begin{aligned} A_{6}^{\mathrm{tree}}(1^{-}, 2^{-}, 3^{+}, 4^{+}, 5^{+}, 6^{+}) \stackrel{5 \parallel 6}{\longrightarrow} \frac{g}{\sqrt{x(1-x)}\,\langle56\rangle}\, \frac{{\mathrm{i}} g^3 \langle12\rangle^{4}}{\langle12\rangle\langle23\rangle\langle34\rangle\langle4P\rangle\langle P1\rangle} \,. \end{aligned} $$
(5.80)

Comparing this to the expected collinear behaviour from Eq. (2.5),

$$\displaystyle \begin{aligned} \begin{aligned} & A_{6}^{\mathrm{tree}}(1^{-}, 2^{-}, 3^{+}, 4^{+}, 5^{+}, 6^{+}) \stackrel{5 \parallel 6}{\longrightarrow} \, {\mathrm{Split}}_{-}^{\mathrm{tree}}(x,5^{+},6^{+})\, A_{5}^{\mathrm{tree}}(1^{-}, 2^{-}, 3^{+}, 4^{+}, P^{+}) \\ & \quad + {\mathrm{Split}}_{+}^{\mathrm{tree}}(x,5^{+},6^{+}) \, A_{5}^{\mathrm{tree}}(1^{-}, 2^{-}, 3^{+}, 4^{+}, P^{-}) \,, \end{aligned} \end{aligned} $$
(5.81)

and using Eq. (1.192) for the 5-gluon amplitudes, we see that the term with \({\mathrm {Split}}_{+}^{\mathrm {tree}}(x,5^{+},6^{+})\) is absent. Since \( A_{5}^{\mathrm {tree}}(1^{-}, 2^{-}, 3^{+}, 4^{+}, P^{-}) \neq 0\), we deduce that

$$\displaystyle \begin{aligned} {\mathrm{Split}}_{+}^{\mathrm{tree}}(x,5^{+},6^{+}) = 0 \,, \end{aligned} $$
(5.82)

as claimed.

Exercise 2.2: Soft Functions in the Spinor-Helicity Formalism

The leading soft function for a positive-helicity gluon with colour-ordered neighbours a and b is given by Eq. (2.19) with \(n=a\) and \(1=b\),

$$\displaystyle \begin{aligned} {} \mathcal{S}^{[0]}_{\text{YM}}\left(a,q^+,b\right) = \frac{g}{\sqrt{2}} \left(\frac{\epsilon_+ \cdot p_b}{p_b \cdot q} - \frac{\epsilon_+ \cdot p_a}{p_a \cdot q} \right) \,. \end{aligned} $$
(5.83)

Using Eq. (1.124) for the polarisation vector with \(\mu \) as reference spinor, we have that

$$\displaystyle \begin{aligned} \frac{\epsilon_+ \cdot p_i}{p_i \cdot q} = \sqrt{2} \frac{\langle \mu i \rangle}{\langle q i \rangle \langle \mu q \rangle} \,. \end{aligned} $$
(5.84)

Substituting this with \(i=a,b\) into Eq. (5.83) and using the Schouten identity give

$$\displaystyle \begin{aligned} \mathcal{S}^{[0]}_{\text{YM}}\left(a,q^+,b\right) = g \, \frac{\langle a b \rangle}{\langle a q \rangle \langle q b \rangle} \,, \end{aligned} $$
(5.85)

as claimed. We can obtain the negative-helicity soft function by acting with spacetime parity on the positive-helicity one. Parity exchanges \(\lambda ^{\alpha }\) and \(\tilde {\lambda }^{\dot \alpha }\), which amounts to swapping \(\langle i j \rangle \) with \([ j i ]\).
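
The Schouten step used here is straightforward to confirm numerically for arbitrary spinors. In the sketch below (our own addition) the angle brackets are defined as \(2\times2\) determinants:

```python
import numpy as np

rng = np.random.default_rng(3)
sp = {k: rng.normal(size=2) + 1j*rng.normal(size=2) for k in "abqm"}   # m = reference mu
ang = lambda x, y: sp[x][0]*sp[y][1] - sp[x][1]*sp[y][0]

# two-term combination of Eqs. (5.83)/(5.84), with the coupling g stripped off
two_terms = (ang("m", "b") / (ang("q", "b") * ang("m", "q"))
             - ang("m", "a") / (ang("q", "a") * ang("m", "q")))
# Schouten identity collapses it to <ab>/(<aq><qb>), Eq. (5.85)
assert np.isclose(two_terms, ang("a", "b") / (ang("a", "q") * ang("q", "b")))
print("soft-function Schouten step verified")
```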

We now turn to a positive-helicity graviton. The starting point is again Eq. (2.19),

$$\displaystyle \begin{aligned} {} \mathcal{S}^{[0]}_{\text{GR}}\left(q^{++},1,\ldots,n\right) = \kappa \sum_{a=1}^n \frac{\epsilon^{++}_{\mu\nu} \, p_a^{\mu} p_a^{\nu}}{p_a \cdot q} \,. \end{aligned} $$
(5.86)

We parametrise the graviton’s polarisation tensor as the product of two copies of the gauge-field polarisation vector,

$$\displaystyle \begin{aligned} \epsilon_{++}^{\mu\nu}(q) = \epsilon_+^{\mu}(q,x) \, \epsilon_+^{\nu}(q,y) \,, \end{aligned} $$
(5.87)

where we spelled out the arbitrary reference vectors x and y. Substituting this into Eq. (5.86) and using Eq. (1.124) for the polarisation vectors gives the desired result:

$$\displaystyle \begin{aligned} \mathcal{S}^{[0]}_{\text{GR}}\left(q^{++},1,\ldots,n\right) = \kappa \sum_{a=1}^n \frac{\langle x a \rangle \langle y a \rangle [ a q ]}{\langle x q \rangle \langle y q \rangle \langle a q \rangle} \,. \end{aligned} $$
(5.88)

As above, the negative-helicity result can be obtained through parity conjugation.

Exercise 2.3: A \(\bar {q}qggg\) Amplitude from Collinear and Soft Limits

Let us consider the collinear limit \(3^{-} \parallel 4^{+}\) of the quark-gluon amplitude \(A_{\bar {q}qggg}^{\text{tree}}(1_{\bar q}^{-}, 2_{q}^{+}, 3^{-}, 4^{+}, 5^{+})\). We parametrise the limit with

$$\displaystyle \begin{aligned} \lambda_{3}\to \sqrt{x}\, \lambda_{P}\, , \quad \lambda_{4}\to\sqrt{1-x}\, \lambda_{P} \,, \end{aligned}$$

where \(P=p_3+p_4\). The collinear factorisation theorem implies that

$$\displaystyle \begin{aligned} {} \begin{aligned} A_{\bar{q}qggg}^{\text{tree}}(1_{\bar q}^{-}, 2_{q}^{+}, 3^{-}, 4^{+}, 5^{+}) \stackrel{3 \parallel 4}{\longrightarrow} \, & \text{Split}^{\text{tree}}_{+}(x,3^{-},4^{+})\, A_{\bar{q}qgg}^{\text{tree}} (1_{\bar q}^{-}, 2_{q}^{+}, P^{-}, 5^{+}) \\ & + \text{Split}^{\text{tree}}_{-}(x,3^{-},4^{+})\, A_{\bar{q}qgg}^{\text{tree}}(1_{\bar q}^{-}, 2_{q}^{+}, P^{+}, 5^{+}) \\ = \, & \frac{g \, x^{2}}{\sqrt{x (1-x)}\, \langle34\rangle}\, \frac{- {\mathrm{i}} g^2 \langle1P\rangle^{3}\langle2P\rangle}{\langle12\rangle\langle2P\rangle\langle P5\rangle \langle51\rangle} \,, \end{aligned} \end{aligned} $$
(5.89)

where we inserted Eq. (2.7) for the splitting functions, and Eqs. (2.36) and (2.37) for the amplitudes. The limiting form of Eq. (5.89) suggests that the amplitude before the limit takes the form

$$\displaystyle \begin{aligned} A_{\bar{q}qggg}^{\text{tree}}(1_{\bar q}^{-}, 2_{q}^{+}, 3^{-}, 4^{+}, 5^{+}) = - {\mathrm{i}} g^3 \frac{\langle13\rangle^{3}\langle23\rangle} {\langle12\rangle\langle23\rangle\langle34\rangle\langle45\rangle\langle51\rangle} \, . \end{aligned} $$
(5.90)

The form above leads us to conjecture the following n-particle generalisation:

$$\displaystyle \begin{aligned} A_{\bar{q}qg\ldots g}^{\text{tree}}(1_{\bar q}^{-}, 2_{q}^{+}, 3^{-}, 4^{+}, \ldots, n^{+}) = - {\mathrm{i}} g^{n-2} \frac{\langle13\rangle^{3}\langle23\rangle} {\langle12\rangle\langle23\rangle\langle34\rangle\ldots\langle n1\rangle}\, . {} \end{aligned} $$
(5.91)

By analogy with Eq. (5.89), we see that the conjectured form of the n-particle amplitude Eq. (5.91) is consistent with the collinear limits \(3^{-} \parallel 4^{+}\) and \(i^{+} \parallel (i+1)^{+}\) for \(i=4,\ldots , n-1\). Let us also study two soft limits. First we take \(\lambda _{3}\to 0\). Then we immediately see that

$$\displaystyle \begin{aligned} A_{\bar{q}qg\ldots g}^{\text{tree}}(1_{\bar q}^{-}, 2_{q}^{+}, 3^{-}, 4^{+},\ldots, n^{+}) \stackrel{3^{-}\to 0}{\longrightarrow} 0 \,. \end{aligned} $$
(5.92)

Since the expected behaviour in the limit is

$$\displaystyle \begin{aligned} A_{\bar{q}qg\ldots g}^{\text{tree}}(1_{\bar q}^{-}, 2_{q}^{+}, 3^{-}, 4^{+},\ldots, n^{+}) \stackrel{3^{-}\to 0}{\longrightarrow} &\mathcal{S}^{[0]}(2,3^{-},4) \\ & A_{\bar{q}qg\ldots g}^{\text{tree}}(1_{\bar q}^{-}, 2_{q}^{+}, 4^{+}, \ldots, n^{+}) \, , \end{aligned} $$
(5.93)

and the relevant soft function is not zero, this implies that

$$\displaystyle \begin{aligned} A_{\bar{q}qg\ldots g}^{\text{tree}}(1_{\bar q}^{-}, 2_{q}^{+}, 3^{+}, \ldots, n^{+}) = 0 \,, \end{aligned} $$
(5.94)

which is thus the conjectured n-particle generalisation of Eq. (2.36). Taking the soft limit \(4^{+}\to 0\) (or any other positive-helicity gluon leg) on the other hand again allows us to check the self-consistency of Eq. (5.91),

(5.95)
Exercise 2.4: The Six-Gluon Split-Helicity NMHV Amplitude

We want to determine the NMHV six-gluon amplitude \(A_{6}^{\text{tree}}(1^{+},2^{+},3^{+},4^{-},5^{-}, 6^{-})\). The \([6^-1^+\rangle \) shift leads to the BCFW recursion relation

$$\displaystyle \begin{aligned} \begin{aligned} & A_{6}^{\text{tree}}(1^{+}, 2^+, 3^+, 4^-, 5^-, 6^{-}) = \\ & \quad \sum_{i=3}^{5} \sum_{h=\pm} A_{i}^{\text{tree}}\left(\hat 1^+ , 2^+,\ldots , i-1, -{\hat P}^{-h}_{i}(z_{P_{i}})\right)\\ & \quad \frac{{\mathrm{i}}}{P_{i}^{2}} \, A_{8-i}^{\text{tree}}\left({\hat P}^{h}_{i}(z_{P_{i}}),i,\ldots , 5^-, \hat{6}^- \right)\,. \end{aligned} \end{aligned} $$
(5.96)

Since the all-plus/minus and single-plus/minus tree amplitudes vanish, only two contributions are non-zero. Diagrammatically, they are given by

where we used the short-hand notation \(P_{ij} = p_i+p_j\). The first diagram is given by

$$\displaystyle \begin{aligned} {} \text{(I)} = \frac{- {\mathrm{i}} g \, [ \hat{1} 2 ]^3}{[ 2 (-\hat P_{12}) ]\, [ (-\hat P_{12}) \hat 1 ]} \times \frac{{\mathrm{i}}}{\langle12\rangle [21]} \times \frac{-{\mathrm{i}} g^3 [ \hat P_{12} 3 ]^3}{[ 3 4 ]\,[ 4 5 ]\, [ 5 \hat{6} ]\, [ \hat{6} \hat P_{12} ]} \, . \end{aligned} $$
(5.97)

The corresponding z-pole is at \(z_{\text{I}} = P_{12}^2/\langle 6 | P_{12} | 1 ] = \langle 1 2 \rangle / \langle 6 2 \rangle \). Hence we have

$$\displaystyle \begin{aligned} |\hat{1}\rangle = \frac{\langle 1 6 \rangle}{\langle 6 2 \rangle} |2\rangle \,, \quad |\hat 1] = |1] \,, \quad |\hat{6}\rangle = |6\rangle \,, \quad |\hat{6}] = |6] + \frac{\langle 1 2 \rangle}{\langle 6 2 \rangle} |1] \,, \end{aligned} $$
(5.98)

where we used the Schouten identity to simplify \(|\hat {1}\rangle \), and, for \(\hat {P}_{12} = \hat {p}_1 + p_2\),

$$\displaystyle \begin{aligned} |\hat{P}_{12}\rangle = |2\rangle \,, \qquad |\hat{P}_{12}] = |2] + \frac{\langle 6 1 \rangle}{\langle 6 2 \rangle} |1] \,. \end{aligned} $$
(5.99)

By combining the above we obtain

$$\displaystyle \begin{aligned} \begin{aligned}{}[2 \hat{P}_{12}] &= \frac{\langle61\rangle}{\langle62\rangle}\, [21]\, , \quad [\hat{P}_{12} \hat{1}] = [21]\, , \quad [5\hat 6] = \frac{[5|P_{16}|2\rangle}{\langle 6 2 \rangle}\, , \\ {} [\hat{P}_{12} 3] &= \frac{\langle 6 | P_{12}|3]}{\langle62\rangle}\, , \quad [\hat 6 \hat{P}_{12}] = - \frac{P_{26}^{2}+P_{12}^{2}+P_{16}^{2}}{\langle62\rangle} \,. \end{aligned} \end{aligned} $$
(5.100)

The expression for \( [\hat 6 \,\hat P_{12}]\) can be further simplified by noting that \(P_{26}^{2}+P_{12}^{2}+P_{16}^{2} = (p_6+p_1+p_2)^2\). Substituting all these into Eq. (5.97), with the sign convention (1.113), gives our final expression for diagram (I):

$$\displaystyle \begin{aligned} \text{(I)} = {\mathrm{i}} g^4 \frac{\langle 6 | P_{12} | 3 ]^3}{\langle 6 1 \rangle \langle 1 2 \rangle [ 3 4 ] [ 4 5 ] [ 5 | P_{16} | 2 \rangle} \frac{1}{(p_6+p_1+p_2)^2}\,. \end{aligned} $$
(5.101)

For the second diagram we start with

$$\displaystyle \begin{aligned} {} \text{(II)} = \frac{{\mathrm{i}} g^3 \, \langle 4 \hat P_{56} \rangle^{3}}{\langle\hat P_{56}\hat{1}\rangle\langle\hat{1}2\rangle \langle23\rangle\langle34\rangle}\times \frac{{\mathrm{i}}}{\langle56\rangle [65]} \times \frac{{\mathrm{i}} g \, \langle5\hat 6\rangle^{3}} {\langle6 (-\hat{P}_{56})\rangle\langle(-\hat{P}_{56}) 5\rangle} \,. \end{aligned} $$
(5.102)

Now the shift parameter takes the value \(z_{\text{II}} = [ 6 5 ] / [ 5 1 ]\), which implies

$$\displaystyle \begin{aligned} |\hat{1}\rangle = |1\rangle + \frac{[ 5 6 ]}{[ 5 1 ]} |6\rangle \,, \quad |\hat{6}\rangle = |6\rangle \,, \quad |\hat{P}_{56}\rangle = |5\rangle + \frac{[ 1 6 ]}{[ 1 5 ]} |6\rangle \,, \end{aligned} $$
(5.103)

and hence

$$\displaystyle \begin{aligned} \begin{aligned} \langle4 \hat{P}_{56}\rangle &= \frac{\langle 4 | P_{56}|1]}{ [51]}\, , \quad \langle\hat{P}_{56} \hat{1}\rangle = \frac{(p_1+p_5+p_6)^2}{ [15]}\, ,\quad \langle5\hat{6}\rangle = \langle56\rangle\, ,\\ \langle6 \hat{P}_{56}\rangle&= \langle65\rangle\, ,\quad \langle\hat{P}_{56} 5\rangle= \frac{ [16]}{ [15]} \langle65\rangle\, , \quad \langle\hat{1}2\rangle= \frac{[5|P_{16}|2\rangle}{ [51]}\, . \end{aligned} \end{aligned} $$
(5.104)

Plugging these into Eq. (5.102) yields

$$\displaystyle \begin{aligned} \text{(II)} = {\mathrm{i}} g^4 \frac{\langle 4 | P_{56} | 1 ]^3}{ \langle23\rangle \langle34\rangle [16][65] [ 5 | P_{16} | 2 \rangle } \frac{1}{(p_1+p_5+p_6)^2} \,. \end{aligned} $$
(5.105)

Finally, by combining the two diagrams we obtain

$$\displaystyle \begin{aligned} \begin{aligned} A_{6}^{\text{tree}}(1^{+},2^{+},3^{+}, {} & 4^{-},5^{-},6^{-}) = {\mathrm{i}} g^4 \bigg( \frac{\langle 6 | P_{12} | 3 ]^3}{\langle 6 1 \rangle \langle 1 2 \rangle [ 3 4 ] [ 4 5 ] [ 5 | P_{16} | 2 \rangle} \frac{1}{(p_6+p_1+p_2)^2} \\ & + \frac{\langle 4 | P_{56} | 1 ]^3}{ \langle23\rangle \langle34\rangle [16][65] [ 5 | P_{16} | 2 \rangle } \frac{1}{(p_1+p_5+p_6)^2} \bigg) \,. \end{aligned} \end{aligned} $$
(5.106)
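
As a consistency check of the final result, one can verify its little-group weights numerically. The sketch below (our own addition, dropping the prefactor \({\mathrm{i}}g^4\) and using one self-consistent bracket convention; momentum conservation is not needed for this test) rescales the spinors and compares with \(\prod_i t_i^{-2h_i}\):

```python
import numpy as np

rng = np.random.default_rng(4)
l  = {i: rng.normal(size=2) + 1j*rng.normal(size=2) for i in range(1, 7)}
lt = {i: rng.normal(size=2) + 1j*rng.normal(size=2) for i in range(1, 7)}

def amp(l, lt):
    ang = lambda i, j: l[i][0]*l[j][1] - l[i][1]*l[j][0]
    sq  = lambda i, j: lt[j][0]*lt[i][1] - lt[j][1]*lt[i][0]
    sand_al = lambda a, S, b: sum(ang(a, k)*sq(k, b) for k in S)     # <a|P_S|b]
    sand_la = lambda a, S, b: sum(sq(a, k)*ang(k, b) for k in S)     # [a|P_S|b>
    s = lambda S: sum(ang(i, j)*sq(j, i) for i in S for j in S if i < j)
    t1 = sand_al(6, (1, 2), 3)**3 / (ang(6, 1)*ang(1, 2)*sq(3, 4)*sq(4, 5)
                                     * sand_la(5, (1, 6), 2) * s((6, 1, 2)))
    t2 = sand_al(4, (5, 6), 1)**3 / (ang(2, 3)*ang(3, 4)*sq(1, 6)*sq(6, 5)
                                     * sand_la(5, (1, 6), 2) * s((1, 5, 6)))
    return t1 + t2

h = {1: 1, 2: 1, 3: 1, 4: -1, 5: -1, 6: -1}                          # gluon helicities
t = dict(zip(range(1, 7), (1.7, 0.6, 2.3, 0.9, 1.1, 0.4)))           # arbitrary rescalings
a0 = amp(l, lt)
l2  = {i: t[i]*l[i] for i in l}
lt2 = {i: lt[i]/t[i] for i in lt}
scale = np.prod([t[i]**(-2*h[i]) for i in range(1, 7)])
assert np.isclose(amp(l2, lt2), a0 * scale)
print("little-group weights of Eq. (5.106) confirmed")
```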
Exercise 2.5: Soft Limit of the Six-Gluon Split-Helicity Amplitude

In the soft limit \(p_5 \to 0\) we have the reduced momentum-conservation condition \(p_{1}+p_{2}+p_{3}+p_{4}+p_{6}=0\), which implies that \((p_6+p_1+p_2)^2 = \langle 3 4 \rangle [ 4 3 ]\) and \((p_1+p_5+p_6)^2 = \langle 1 6 \rangle [ 6 1 ]\). Using these in Eq. (2.67) and pulling out the pole term \(( [45] [56])^{-1}\) gives

$$\displaystyle \begin{aligned} \begin{aligned} A_{6}^{\text{tree}}&(1^{+}, 2^{+}, 3^{+}, 4^{-}, 5^{-}, 6^{-}) \stackrel{5^- \to 0}{\longrightarrow} \\ & \quad \frac{{\mathrm{i}} g^4}{[5|P_{16}|2\rangle\, [45] [56]} \bigg( \frac{\langle 6|P_{12}|3]^{3} [56]}{\langle61\rangle\langle12\rangle [34]^2 \langle43\rangle} + \frac{ \langle 4|P_{56}|1]^{3} [54] }{\langle23\rangle\langle34\rangle [16]^2 \langle61\rangle}\, \bigg) \,. \end{aligned} \end{aligned} $$
(5.107)

We use the reduced momentum conservation and the Dirac equation to simplify

$$\displaystyle \begin{aligned} \langle 6|P_{12}|3] & = - \langle 6|(p_3+p_4+p_6)|3] \\ & = - \langle 6 4 \rangle [ 4 3 ]\,, \end{aligned} $$
(5.108)

and the soft limit for \(\langle 4|P_{56}|1] = \langle 4 6 \rangle [ 6 1 ]\), obtaining

$$\displaystyle \begin{aligned} \begin{aligned} A_{6}^{\text{tree}}&(1^{+}, 2^{+}, 3^{+}, 4^{-}, 5^{-}, 6^{-}) \stackrel{5^- \to 0}{\longrightarrow} \\ & \quad \frac{{\mathrm{i}} g^4}{[5|P_{16}|2\rangle\, [45] [56]}\, \frac{\langle46\rangle^{3}}{\langle12\rangle\langle23\rangle\langle34\rangle \langle61\rangle} \big( [34] [56] \langle23\rangle + [16] [45]\langle12\rangle \big) \,. \end{aligned} \end{aligned} $$
(5.109)

The two terms in the parentheses may be simplified using a Schouten identity as

(5.110)

By plugging this into the above we find

$$\displaystyle \begin{aligned} A_{6}^{\text{tree}}(1^{+}, 2^{+}, 3^{+}, 4^{-}, 5^{-}, 6^{-}) &\stackrel{5^{-}\to 0}{\longrightarrow} - g \frac{ [46]}{ [45] [56]} \times \frac{{\mathrm{i}} g^3 \, \langle46\rangle^{3}}{\langle12\rangle\langle23\rangle\langle34\rangle \langle61\rangle} \,, \end{aligned} $$
(5.111)

which indeed matches the expected factorisation,

$$\displaystyle \begin{aligned} A_{6}^{\text{tree}}(1^{+}, 2^{+}, 3^{+}, 4^{-}, 5^{-}, 6^{-}) &\stackrel{5^{-}\to 0}{\longrightarrow} \mathcal{S}^{[0]}_{\text{YM}}(4,5^{-},6) \times A_{5}^{\text{tree}}(1^{+}, 2^{+}, 3^{+}, 4^{-}, 6^{-})\,, \end{aligned} $$
(5.112)

with the soft function given in Eq. (2.25).

Exercise 2.6: Mixed-Helicity Four-Point Scalar-Gluon Amplitude

The \([4^- 1^+\rangle \) shift leads to the BCFW recursion

(5.113)

where we used Eqs. (2.78) and (2.79) for the three-point scalar-gluon amplitudes (with \(g=1\)), \(P=p_1+p_2\), and \(r_1\) (\(r_4\)) denotes the reference momentum of the gluon leg 1 (4). With the gauge choice \(r_{1}=\hat {p}_4\) and \(r_{4}=\hat {p}_1\) along with the identities \(|\hat 4\rangle =|4\rangle \) and \(|\hat 1]=|1]\) for the \([4^- 1^+\rangle \) shift one has

$$\displaystyle \begin{aligned} \langle r_1 \hat{1}\rangle = \langle41\rangle \,, \quad [\hat{4} \, r_4] = [4 1] \,, \quad \langle r_1 | \hat{P} | \hat{1} ] = - \langle 4 | p_3 | 1 ] \,, \quad \langle \hat{4} | p_3 | r_4 ] = \langle 4 | p_3 | 1 ] \,. \end{aligned} $$
(5.114)

Plugging these into the above yields the final compact result

$$\displaystyle \begin{aligned} A_{4}(1^{+},2_{\phi},3_{\bar\phi},4^{-}) = {\mathrm{i}} \, \frac{\langle 4| p_{3}| 1]^{2}}{ (p_1+p_4)^2 \, [(p_1+p_2)^2-m^2] } \,. \end{aligned} $$
(5.115)
Exercise 2.7: Conformal Algebra

The commutation relations with the dilatation operator d,

$$\displaystyle \begin{aligned} \left[d,p^{\alpha{\dot{\alpha}}} \right]=p^{\alpha{\dot{\alpha}}}\, , \quad \left[d,k_{\alpha{\dot{\alpha}}} \right]=-k_{\alpha{\dot{\alpha}}}\, , \quad \left[d,m_{\alpha\beta} \right]=0=\left[d,{\overline m}_{{\dot{\alpha}}{\dot{\beta}}}\right] \,, \end{aligned} $$
(5.116)

are manifest from dimensional analysis. We recall in fact that d measures the mass dimension, i.e. \([d,f] = [f] f\) where \([f]\) denotes the dimension of f in units of mass, and that the helicity spinors \(\lambda _i\) and \(\tilde {\lambda }_i\) have mass dimension \(1/2\). It remains for us to compute the commutator \([k_{\alpha {\dot {\alpha }}}, p^{\beta {\dot {\beta }}}]\), which is given by

$$\displaystyle \begin{aligned} \left[k_{\alpha{\dot{\alpha}}},p^{\beta{\dot{\beta}}} \right] = \left[\partial_{\alpha},\lambda^{\beta}\tilde\lambda^{{\dot{\beta}}} \right]\, \partial_{{\dot{\alpha}}}+ \partial_{\alpha} \left[ \partial_{{\dot{\alpha}}}, \lambda^{\beta}\tilde\lambda^{{\dot{\beta}}} \right] = \delta^{\beta}_{\ \alpha}\, \tilde\lambda^{{\dot{\beta}}} \, \partial_{{\dot{\alpha}}} + \delta^{{\dot{\beta}}}_{\ {\dot{\alpha}}}\, \lambda^{\beta}\partial_{\alpha} + \delta^{\beta}_{\ \alpha}\, \delta^{{\dot{\beta}}}_{{\dot{\alpha}}} \,. \end{aligned} $$
(5.117)

By using Eq. (2.102) for a single particle with raised index,

(5.118)

and the analogous equation with dotted indices, we obtain

(5.119)

which concludes the proof of Eq. (2.107).
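
This commutator can also be checked directly with a computer algebra system. The following Wolfram Language sketch (ours, with our own variable names; it ignores index raising and lowering and simply checks the component identity) realises \(p^{\beta \dot \beta }\) as multiplication by \(\lambda ^{\beta }\tilde \lambda ^{\dot \beta }\) and \(k_{\alpha \dot \alpha }\) as \(\partial _{\alpha }\partial _{\dot \alpha }\), acts with both sides of Eq. (5.117) on an arbitrary test monomial, and compares them component by component.

la = {la1, la2}; lt = {lt1, lt2};            (* components of lambda and lambda-tilde *)
test = la1^2 lt1 lt2^3;                      (* arbitrary test function f(lambda, lambda-tilde) *)
k[al_, ad_][f_] := D[f, la[[al]], lt[[ad]]];          (* k_{alpha alphadot} = d_alpha d_alphadot *)
p[be_, bd_][f_] := la[[be]] lt[[bd]] f;               (* p^{beta betadot} = lambda^beta lambdatilde^betadot *)
comm[al_, ad_, be_, bd_] := k[al, ad][p[be, bd][test]] - p[be, bd][k[al, ad][test]];
rhs[al_, ad_, be_, bd_] :=
  KroneckerDelta[al, be] lt[[bd]] D[test, lt[[ad]]] +
  KroneckerDelta[ad, bd] la[[be]] D[test, la[[al]]] +
  KroneckerDelta[al, be] KroneckerDelta[ad, bd] test;
Table[Simplify[comm[al, ad, be, bd] - rhs[al, ad, be, bd]], {al, 2}, {ad, 2}, {be, 2}, {bd, 2}]
(* a 2x2x2x2 array of zeros confirms Eq. (5.117) on this test function *)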

FormalPara Exercise 2.8: Inversion and Special Conformal Transformations
  1. (a)

    Using the inversion transformation \(I \, x^{\mu }= x^{\mu }/x^{2}\) and the translation transformation \(P^{\mu } \, x = x^{\mu }-a^{\mu }\) we have

    $$\displaystyle \begin{aligned} I \, P^{\mu} \, I \, x^{\mu}= I \, P^{\mu} \, \frac{x^{\mu}}{x^{2}} & = I \, \frac{x^{\mu}-a^{\mu}}{(x-a)^{2}} = \frac{\frac{x^{\mu}}{x^{2}}-a^{\mu}}{\left(\frac{x}{x^{2}}-a \right)^{2}}= \frac{x^{\mu}-a^{\mu}x^{2}}{1-2 \, a\cdot x + a^{2}\, x^{2}} \,, \end{aligned} $$
    (5.120)

    which equals the finite special conformal transformation in Eq. (2.111).

  2. (b)

    We begin by computing the Jacobian factor \(|\partial x^{\prime }/\partial x|\), i.e. the absolute value of the determinant of the matrix with entries \(\partial x^{\prime \mu }/\partial x^{\nu }\) for \(\mu ,\nu =0,1,2,3\). It is convenient to decompose the special conformal transformation \(x\to x^{\prime }\) as in part (a):

    $$\displaystyle \begin{aligned} x^{\mu} \quad \overset{I}{\longrightarrow} \quad y^{\mu} := \frac{x^{\mu}}{x^2} \quad \overset{P^{\mu}}{\longrightarrow} \quad z^{\mu} := y^{\mu} - a^{\mu} \quad \overset{I}{\longrightarrow} \quad x^{\prime \mu} := \frac{z^{\mu}}{z^2} \,. \end{aligned} $$
    (5.121)

    The Jacobian factor for \(x\to x^{\prime }\) then factorises into the product of the Jacobian factors for the three separate transformations:

    $$\displaystyle \begin{aligned} \left| \frac{\partial x^{\prime}}{\partial x} \right| = \left| \frac{\partial x^{\prime}}{\partial z} \right| \, \left| \frac{\partial z}{\partial y} \right| \, \left| \frac{\partial y}{\partial x} \right| \,. \end{aligned} $$
    (5.122)

    For the first inversion, \(x^{\mu } \to y^{\mu }\), we have that

    $$\displaystyle \begin{aligned} \frac{\partial y^{\mu}}{\partial x^{\nu}} = \frac{1}{x^2}\left( \eta^{\mu}{}_{\nu} - 2 \, \frac{x^{\mu} x_{\nu}}{x^2} \right) \,, \end{aligned} $$
    (5.123)

    so that the Jacobian factor takes the form

    $$\displaystyle \begin{aligned} \left| \frac{\partial y}{\partial x} \right| = \big(x^2\big)^{-4} \left| \mathrm{det}\left( \eta^{\mu}{}_{\nu} - 2 \, \frac{x^{\mu} x_{\nu}}{x^2} \right) \right| \,. \end{aligned} $$
    (5.124)

    We use the representation of the determinant in terms of Levi-Civita symbols:

    $$\displaystyle \begin{aligned} \left| \frac{\partial y}{\partial x} \right| &= \frac{\big(x^2\big)^{-4}}{4!} \left| \epsilon_{\mu_1 \mu_2 \mu_3 \mu_4} \epsilon^{\nu_1 \nu_2 \nu_3 \nu_4} \left( \eta^{\mu_1}{}_{\nu_1} - 2 \, \frac{x^{\mu_1} x_{\nu_1}}{x^2} \right)\right. \\ & \left.\quad \ldots \left( \eta^{\mu_4}{}_{\nu_4} - 2 \, \frac{x^{\mu_4} x_{\nu_4}}{x^2} \right) \right| \,. \end{aligned} $$
    (5.125)

    The contractions involving two, one, or no factors of \(\eta ^{\mu _i}{ }_{\nu _i}\) vanish because of the anti-symmetry of the Levi-Civita symbol. The contractions with three factors of \(\eta ^{\mu _i}{ }_{\nu _i}\) are equal. This leads us to

    $$\displaystyle \begin{aligned} \left| \frac{\partial y}{\partial x} \right| = \frac{\big(x^2\big)^{-4}}{4!} \left| \epsilon_{\mu \nu \rho \sigma} \epsilon^{\mu \nu \rho \sigma} + 4 \, \epsilon_{\mu \nu \rho \sigma_1} \epsilon^{\mu \nu \rho \sigma_2} \left( - 2 \, \frac{x^{\sigma_1} x_{\sigma_2}}{x^2} \right) \right| \,. \end{aligned} $$
    (5.126)

    Using the identities \( \epsilon _{\mu \nu \rho \sigma } \epsilon ^{\mu \nu \rho \sigma } = - 4!\) and \( \epsilon _{\mu \nu \rho \sigma _1} \epsilon ^{\mu \nu \rho \sigma _2} = - 3! \, \eta ^{\sigma _2}{ }_{\sigma _1}\)Footnote 2 gives

    $$\displaystyle \begin{aligned} \left| \frac{\partial y}{\partial x} \right| = \big(x^2\big)^{-4} \,. \end{aligned} $$
    (5.127)

    Similarly, for the second inversion, \(z \to x^{\prime }\), we have

    $$\displaystyle \begin{aligned} \begin{aligned} \left| \frac{\partial x^{\prime}}{\partial z} \right| & = \big(z^2\big)^{-4} \\ & = \left( \frac{1 - 2 \, a \cdot x + a^2 x^2}{x^2} \right)^{-4} \,. \end{aligned} \end{aligned} $$
    (5.128)

    The Jacobian factor of the translation \(y^{\mu } \to z^{\mu } = y^{\mu }-a^{\mu }\) is simply 1, as the translation parameter \(a^{\mu }\) does not depend on \(y^{\mu }\). Putting the above together gives

    $$\displaystyle \begin{aligned} \left| \frac{\partial x^{\prime}}{\partial x} \right| = \left(1 - 2 \, a \cdot x + a^2 x^2\right)^{-4} \,. \end{aligned} $$
    (5.129)

We now consider the transformation rule for the scalar field \({\varPhi }\) in Eq. (2.112),

$$\displaystyle \begin{aligned} {} {\varPhi}^{\prime}(x^{\prime}) = \left(1 - 2 \, a \cdot x + a^2 x^2\right)^{{\varDelta}} {\varPhi}(x) \,, \end{aligned} $$
(5.130)

and expand both sides in a Taylor series around \(a^{\mu } = 0\). For the LHS we obtain

$$\displaystyle \begin{aligned} {} \begin{aligned} \mathnormal{\varPhi}^{\prime}\left( x^{\prime} \right) & = \mathnormal{\varPhi}^{\prime}(x) + a^{\mu} \left( \frac{\partial x^{\prime \, \nu}}{\partial a^{\mu}} \right) \bigg|{}_{a=0} \partial_{\nu} \mathnormal{\varPhi}^{\prime}(x) + \mathcal{O}\big(a^2\big) \\ & = \mathnormal{\varPhi}^{\prime}(x) + a^{\mu} \left( - \eta_{\mu}{}^{\nu} x^2 + 2 \, x_{\mu} \, x^{\nu} \right) \partial_{\nu} \mathnormal{\varPhi}^{\prime}(x) + \mathcal{O}\big(a^2\big) \,. \end{aligned} \end{aligned} $$
(5.131)

Since \(\mathnormal {\varPhi }^{\prime }(x) - \mathnormal {\varPhi }(x) = \mathcal {O}(a)\), we can replace \(\partial _{\nu } \mathnormal {\varPhi }^{\prime }(x) \) by \(\partial _{\nu } \mathnormal {\varPhi }(x) \) in the above. Plugging this into Eq. (5.130) and expanding also the RHS gives

$$\displaystyle \begin{aligned} \mathnormal{\varPhi}^{\prime}(x) + a^{\mu} \left( - \eta_{\mu}{}^{\nu} x^2 + 2 \, x_{\mu} \, x^{\nu} \right) \partial_{\nu} \mathnormal{\varPhi}(x) = \mathnormal{\varPhi}(x) - 2 \, \mathnormal{\varDelta} \, (a \cdot x) \, \mathnormal{\varPhi}(x) + \mathcal{O}\big(a^2\big) \,. \end{aligned} $$
(5.132)

By comparing this to the defining equation of the generators (2.113),

$$\displaystyle \begin{aligned} \mathnormal{\varPhi}^{\prime}\left( x \right ) = \left[ 1 - {\mathrm{i}} \, a^{\mu} \, K_{\mu} + \mathcal{O}\big(a^2\big) \right] \, \mathnormal{\varPhi}(x) \,, \end{aligned} $$
(5.133)

we can read off the explicit form of the generator,

$$\displaystyle \begin{aligned} K_{\mu} = {\mathrm{i}} \left[ x^2 \, \partial_{\mu} - 2 \, x_{\mu} \left( x^{\nu} \partial_{\nu} + \varDelta \right) \right] \,, \end{aligned} $$
(5.134)

as claimed.
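
Both parts of this exercise are easy to cross-check symbolically. The sketch below (Wolfram Language, our own variable names, metric \(\mathrm{diag}(+,-,-,-)\); it is independent of the book's notebooks) composes inversion, translation and inversion on a generic point as in Eq. (5.121), compares with the finite special conformal transformation of Eq. (5.120), and then evaluates the Jacobian determinant of Eq. (5.129); the determinant comes out positive, so no absolute value is needed.

met = DiagonalMatrix[{1, -1, -1, -1}];
x = {x0, x1, x2, x3}; a = {a0, a1, a2, a3};
sq[v_] := v.met.v;                          (* Minkowski square *)
inv[v_] := v/sq[v];                         (* inversion I *)
xp = Simplify[inv[inv[x] - a]];             (* I P I acting on x, Eq. (5.121) *)
scf = (x - a sq[x])/(1 - 2 a.met.x + sq[a] sq[x]);      (* claimed result, Eq. (5.120) *)
Simplify[xp - scf]                          (* -> {0, 0, 0, 0} *)
jac = Det[Table[D[xp[[mu]], x[[nu]]], {mu, 4}, {nu, 4}]];
Simplify[jac (1 - 2 a.met.x + sq[a] sq[x])^4]           (* -> 1, cf. Eq. (5.129) *)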

FormalPara Exercise 2.9: Kinematical Jacobi Identity

We start from the expression of \(n_s\) given in Eq. (2.119). We choose the reference momenta \(r_i\) for the polarisation vectors \(\epsilon _i\) so as to kill as many terms as possible. We recall that \(\epsilon _i \cdot r_i = 0\). Choosing \(r_1 = p_2\), \(r_2 = p_1\), \(r_3 = p_4\), and \(r_4 = p_3\) yields

(5.135)

The other factors are obtained from \(n_s\) by replacing the particles’ labels as

$$\displaystyle \begin{aligned} n_t = n_s \big|{}_{1 \to 2, 2 \to 3, 3 \to 1} \,, \qquad \qquad n_u = n_s \big|{}_{1 \to 3, 2 \to 1, 3 \to 2} \,. \end{aligned} $$
(5.136)

Adding the three factors gives

(5.137)

which vanishes because of momentum conservation.

FormalPara Exercise 2.10: Five-Point KLT Relation

The squaring relation (2.139) in the five-point case reads

$$\displaystyle \begin{aligned} {} M_{5}^{\text{tree}}(1,2,3,4,5) = \sum_{\sigma,\rho\in S_{2}} A_{5}^{\text{tree}}(1,\sigma,4,5) \, S[\sigma|\rho] \, A_{5}^{\text{tree}}(1,\rho,5,4) \,, \end{aligned} $$
(5.138)

where \(S_2\) is the set of permutations of \(\{2,3\}\), namely \(S_2 = \big \{ \{2,3\}, \, \{3,2\} \big \}\). We recall that the KLT kernels \(S[\sigma |\rho ]\) are given by Eq. (2.140) with \(n=5\),

$$\displaystyle \begin{aligned} S[\sigma|\rho] = \prod_{i=2}^{3} \bigg[ 2 \, p_{1}\cdot p_{\sigma_{i}} + \sum_{j=2}^{i} 2 \, p_{\sigma_{i}}\cdot p_{\sigma_{j}}\, \theta(\sigma_{j}, \sigma_{i})_{\rho} \bigg] \,, \end{aligned} $$
(5.139)

where \(\theta (\sigma _{j}, \sigma _{i})_{\rho }=1\) if \(\sigma _{j}\) is before \(\sigma _{i}\) in the permutation \(\rho \), and zero otherwise. We then have

$$\displaystyle \begin{aligned} \begin{aligned} S\left[(2,3)|(2,3)\right] & = 2 \, p_1 \cdot p_2 \, \big[ 2 \, p_1 \cdot p_3 + 2 \, p_3 \cdot p_2 \, \overbrace{\theta(2,3)_{(2,3)}}^{= \, 1} \big] \\ & = s_{12} ( s_{13} + s_{23} ) \,, \end{aligned} \end{aligned} $$
(5.140)

where \(s_{ij} = 2 \, p_i \cdot p_j\), and similarly

$$\displaystyle \begin{aligned} \begin{aligned} & S\left[(3,2)|(2,3)\right] = s_{12} \, s_{13} = S\left[(2,3)|(3,2)\right] \,, \\ & S\left[(3,2)|(3,2)\right] = s_{13} (s_{12}+s_{23}) \,. \end{aligned} \end{aligned} $$
(5.141)

Plugging the above into the squaring relation (5.138) gives

$$\displaystyle \begin{aligned} {} \begin{aligned} & M_{5}^{\text{tree}}(1,2,3,4,5) = \\ & \quad s_{12} A_5^{\text{tree}}(1,2,3,4,5) \big[ s_{13} A^{\text{tree}}_5(1,3,2,5,4) + (s_{13}+s_{23}) A^{\text{tree}}_5(1,2,3,5,4) \big] \\ & \quad + s_{13} A_5^{\text{tree}}(1,3,2,4,5) \big[ s_{12} A_5^{\text{tree}}(1,2,3,5,4) + (s_{12}+s_{23}) A_5^{\text{tree}}(1,3,2,5,4) \big] \,. \end{aligned} \end{aligned} $$
(5.142)

The terms in the square brackets can be simplified using the BCJ relations (2.133). For instance, for the first term we use

$$\displaystyle \begin{aligned} \begin{aligned} & p_3 \cdot p_1 \, A^{\text{tree}}_5(1,3,2,5,4) + p_3 \cdot (p_1+p_2) \, A^{\text{tree}}_5(1,2,3,5,4) \\ & + p_3 \cdot (p_1+p_2+p_5) \, A^{\text{tree}}_5(1,2,5,3,4) = 0 \,, \end{aligned} \end{aligned} $$
(5.143)

which is obtained by replacing \(1\to 3\), \(2\to 1\), \(3\to 2\), \(4 \to 5\) and \(5 \to 4\) in Eq. (2.133) with \(n=5\). Substituting

$$\displaystyle \begin{aligned} \begin{aligned} s_{13} \, A^{\text{tree}}_5(1,3,2,5,4) + (s_{13}+s_{23}) \, A^{\text{tree}}_5(1,2,3,5,4) = s_{34} \, A^{\text{tree}}_5(1,2,5,3,4) \,, \\ s_{12} \, A_5^{\text{tree}}(1,2,3,5,4) + (s_{12}+s_{23}) \, A_5^{\text{tree}}(1,3,2,5,4) = s_{24} \, A_5^{\text{tree}}(1,3,5,2,4) \,, \end{aligned} \end{aligned} $$
(5.144)

into Eq. (5.142) finally gives

(5.145)

as claimed.
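
As a cross-check of Eqs. (5.140) and (5.141), the kernel of Eq. (5.139) can be implemented in a few lines. A minimal Wolfram Language sketch (ours), with the symbols \(s[i,j]=2\,p_i\cdot p_j\) kept abstract and set to zero for \(i=j\) (massless legs); the permutations \(\sigma \) and \(\rho \) are passed as the full orderings \(\{1,\sigma \}\) and \(\{1,\rho \}\):

s[i_, i_] := 0;                              (* 2 p_i.p_i = 0 for massless momenta *)
s[i_, j_] := s[j, i] /; i > j;               (* symmetric symbols s[i,j] = 2 p_i.p_j *)
theta[sj_, si_, rho_] := Boole[Position[rho, sj][[1, 1]] < Position[rho, si][[1, 1]]];
S5[sig_, rho_] := Product[s[1, sig[[i]]] +
    Sum[s[sig[[i]], sig[[j]]] theta[sig[[j]], sig[[i]], rho], {j, 2, i}], {i, 2, 3}];
{S5[{1, 2, 3}, {1, 2, 3}], S5[{1, 3, 2}, {1, 2, 3}],
 S5[{1, 2, 3}, {1, 3, 2}], S5[{1, 3, 2}, {1, 3, 2}]}
(* -> {s[1,2] (s[1,3] + s[2,3]), s[1,2] s[1,3], s[1,2] s[1,3], s[1,3] (s[1,2] + s[2,3])},
   in agreement with Eqs. (5.140) and (5.141) *)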

FormalPara Exercise 3.1: The Four-Gluon Amplitude in \(\mathcal {N}=4\) Super-Symmetric Yang-Mills Theory

We begin with the \(s_{12}\) channel. The cut integrand is given by the following product of tree amplitudes:

(5.146)

The constant factors multiplying the amplitudes deserve a few remarks. First, we have a factor counting each field’s multiplicity in the \(\mathcal {N}=4\) super-multiplet: 1 gluon (g), 4 gluinos (\(\mathnormal {\varLambda }\)), and 6 scalars (\(\phi \)). Next, the factors of imaginary unit “\({\mathrm {i}}\)” follow from the factorisation properties of tree-level amplitudes as discussed below Eq. (2.3). In particular, note that the factorisation of the fermion line does not require any factors of “\({\mathrm {i}}\)”, as opposed to gluons and scalars. Finally, the gluino’s contribution comes with a further factor of \(-1\) coming from the Feynman rule for the closed fermion loop. The only non-vanishing contribution comes from the product of purely gluonic amplitudes with \(h_1=-\) and \(h_2=+\),

$$\displaystyle \begin{aligned} \mathcal{C}_{12|34}^{\mathcal{N}=4} = {\mathrm{i}} \, A^{(0)}\left( (-l_1)^{+},1^-,2^-,(l_2)^{+} \right) \, {\mathrm{i}} \, A^{(0)}\left( (-l_2)^{-},3^+,4^+ ,(l_1)^{-} \right) \,. \end{aligned} $$
(5.147)

This is the same as in the non-supersymmetric YM theory computed in Sect. 3.2 (see Eq. (3.34)), hence we can immediately see that

$$\displaystyle \begin{aligned} \mathcal{C}_{12|34}\left( I^{(1)}_{\mathcal{N}=4}(1^-,2^-,3^+,4^+) \right)=\mathcal{C}_{12|34}\left( I^{(1)}(1^-,2^-,3^+,4^+) \right) \,, \end{aligned} $$
(5.148)

as claimed.

In contrast to the \(s_{12}\) channel, all fields contribute to the \(s_{23}\)-channel cut:

(5.149)

The second and fourth terms can be obtained by swapping \(1 \leftrightarrow 2\) and \(3 \leftrightarrow 4\) in the first and the third ones, respectively. We put all terms over a common denominator:

$$\displaystyle \begin{aligned} {\mathrm{D}} = \langle 1 l_1 \rangle \langle l_1 l_2 \rangle \langle l_2 4 \rangle \langle 4 1 \rangle \langle l_1 2 \rangle \langle 23 \rangle \langle 3 l_2 \rangle \langle l_2 l_1 \rangle \,. \end{aligned} $$
(5.150)

Factoring it out we have

$$\displaystyle \begin{aligned} \begin{aligned} {\mathrm{D}} \, \mathcal{C}_{23|41}^{\mathcal{N}=4} = \ & \langle 1 l_1 \rangle^4 \langle 2 l_2 \rangle^4 + \langle 2 l_1 \rangle^4 \langle 1 l_2 \rangle^4 - 4 \, \langle l_1 1 \rangle \langle l_1 2 \rangle^3 \langle l_2 2 \rangle \langle l_2 1 \rangle^3 \\ & - 4 \, \langle l_1 2 \rangle \langle l_1 1 \rangle^3 \langle l_2 1 \rangle \langle l_2 2 \rangle^3 + 6 \, \langle l_1 1 \rangle^2 \langle l_2 1 \rangle^2 \langle l_1 2 \rangle^2 \langle l_2 2 \rangle^2\,. \end{aligned} \end{aligned} $$
(5.151)

Here the “magic” of \(\mathcal {N}=4\) super Yang-Mills theory comes into play: the five terms above conspire together to form the fourth power of a binomial,

$$\displaystyle \begin{aligned} \mathcal{C}_{23|41}^{\mathcal{N}=4} = \frac{\big(\langle 1 l_2 \rangle \langle 2 l_1 \rangle - \langle 1 l_1 \rangle \langle 2 l_2 \rangle\big)^4}{{\mathrm{D}}} \,, \end{aligned} $$
(5.152)

which can be further simplified using a Schouten identity, obtaining

$$\displaystyle \begin{aligned} \mathcal{C}_{23|41}^{\mathcal{N}=4} & = \frac{ \langle l_1 l_2 \rangle^4 \langle 1 2 \rangle^4}{{\mathrm{D}}} \\ & = \frac{ \langle l_1 l_2 \rangle^2 \langle 1 2 \rangle^4}{\langle l_1 1 \rangle \langle l_1 2 \rangle \langle l_2 3 \rangle \langle l_2 4 \rangle \langle 2 3 \rangle \langle 1 4 \rangle} \,. \end{aligned} $$
(5.153)

This matches \(\mathcal {C}_{23|41}^{\text{box}}\) (see Eq. (3.46)), and we can thus conclude that

$$\displaystyle \begin{aligned} \mathcal{C}_{23|41}\left( I^{(1)}_{\mathcal{N}=4}(1^-,2^-,3^+,4^+) \right) =\mathcal{C}_{23|41}^{\text{box}}\left( I^{(1)}(1^-,2^-,3^+,4^+) \right) \,. \end{aligned} $$
(5.154)
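
The two algebraic steps leading from Eq. (5.151) to Eq. (5.153), namely the recombination into a fourth power and the Schouten identity, can be checked numerically with random spinors. A minimal Wolfram Language sketch (ours), using the convention \(\langle a b\rangle = a_1 b_2 - a_2 b_1\) for two-component holomorphic spinors:

SeedRandom[42];
ang[a_, b_] := a[[1]] b[[2]] - a[[2]] b[[1]];                      (* <ab> *)
{sp1, sp2, spl1, spl2} = RandomComplex[{-1 - I, 1 + I}, {4, 2}];   (* |1>, |2>, |l1>, |l2> *)
xx = ang[sp1, spl2] ang[sp2, spl1]; yy = ang[sp1, spl1] ang[sp2, spl2];
lhs = ang[sp1, spl1]^4 ang[sp2, spl2]^4 + ang[sp2, spl1]^4 ang[sp1, spl2]^4 -
   4 ang[spl1, sp1] ang[spl1, sp2]^3 ang[spl2, sp2] ang[spl2, sp1]^3 -
   4 ang[spl1, sp2] ang[spl1, sp1]^3 ang[spl2, sp1] ang[spl2, sp2]^3 +
   6 ang[spl1, sp1]^2 ang[spl2, sp1]^2 ang[spl1, sp2]^2 ang[spl2, sp2]^2;
Chop[{lhs - (xx - yy)^4,                                 (* the five terms of Eq. (5.151) vs the binomial of Eq. (5.152) *)
      (xx - yy)^4 - ang[spl1, spl2]^4 ang[sp1, sp2]^4}]  (* Schouten step leading to Eq. (5.153) *)
(* -> {0, 0} *)
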
FormalPara Exercise 3.2: Quadruple Cuts of Five-Gluon MHV Scattering Amplitudes
  1. (a)

    We parametrise the loop momentum \(l_1\) using the spinors of the external momenta as in Eq. (3.58). We then rewrite the quadruple cut equations,

    $$\displaystyle \begin{aligned} \begin{cases} l_1^2 = 0 \,, \\ l_2^2 = (l_1-p_2)^2 = 0 \,, \\ l_3^2 = (l_1-p_2-p_3)^2 = 0 \,, \\ l_4^2 = (l_1+p_1)^2 = 0 \,, \end{cases} \end{aligned} $$
    (5.155)

    in terms of the parameters \(\alpha _i\), as

    $$\displaystyle \begin{aligned} \begin{cases} \alpha_1 s_{12} = 0 \,, \\ \alpha_2 s_{12} = 0 \,, \\ (\alpha_1 \alpha_2-\alpha_3 \alpha_4)s_{12} = 0 \,, \\ \alpha_1 s_{13} + \alpha_2 s_{23} + \alpha_3 \langle 1 3 \rangle [ 3 2 ] + \alpha_4 \langle 2 3 \rangle [ 3 1 ] = s_{23} \,. \end{cases} \end{aligned} $$
    (5.156)

    For generic kinematics this system has two solutions:

    $$\displaystyle \begin{aligned} \left(l_1^{(1)}\right)^{\mu} = \frac{\langle 2 3 \rangle}{\langle 1 3 \rangle} \frac{1}{2} \langle 1 | \gamma^{\mu} | 2 ] \,, \qquad \left(l_1^{(2)}\right)^{\mu} = \frac{[ 2 3 ]}{[ 1 3 ]} \frac{1}{2} \langle 2 | \gamma^{\mu} | 1 ] \,. \end{aligned} $$
    (5.157)

    The spinors of the on-shell loop momenta on the first solution can be chosen as

    $$\displaystyle \begin{aligned} {} \begin{array}{llll} & |l_1^{(1)} \rangle = \frac{\langle 2 3 \rangle}{\langle 1 3 \rangle} |1\rangle \,, \qquad \quad && |l_1^{(1)}] = |2] \,, \\ & |l_2^{(1)} \rangle = \frac{\langle 2 1 \rangle}{\langle 1 3 \rangle} |3\rangle \,, && |l_2^{(1)}] = |2] \,, \\ & |l_3^{(1)} \rangle = |3\rangle \,, && |l_3^{(1)}] = \frac{\langle 2 1 \rangle}{\langle 1 3 \rangle} |2] - |3] \,, \\ & |l_4^{(1)} \rangle = |1\rangle \,, && |l_4^{(1)}] = |1] + \frac{\langle 2 3 \rangle}{\langle 1 3 \rangle} |2] \,. \end{array} \end{aligned} $$
    (5.158)

    The spinors for the second solution, \(l_1^{(2)}\), are obtained by swapping \(\langle \rangle \leftrightarrow []\) in the first one. For each solution \(l_1^{(s)}\), the quadruple cut is obtained by summing over all internal helicity configurations \(\mathbf h = (h_1,h_2,h_3,h_4)\) (with \(h_i = \pm \)) the product of four tree-level amplitudes,

    (5.159)

    Consider \(A_4\). The only non-vanishing four-gluon tree-level amplitude with two positive-helicity gluons is the MHV one, namely the one with \(h_4=-h_3=-\). \(A_3\) is thus \(\overline {\text{MHV}}\), and \(h_2=+\). Since MHV/\(\overline {\text{MHV}}\) three-point vertices cannot be adjacent, \(A_2\) must be MHV, and \(A_1\) \(\overline {\text{MHV}}\). This fixes the remaining helicity, \(h_1=+\). The quadruple cut therefore receives a contribution from one helicity configuration only, which we represent using the black/white notation as

    (5.160)

    Recall that the trivalent vertices impose constraints on the momenta. The \(\overline {\text{MHV}}\) vertex attached to \(p_1\) and the MHV vertex attached to \(p_2\) imply that \(|l_1\rangle \propto |1\rangle \) and \(|l_1] \propto |2]\), or equivalently that \(l_1^{\mu } \propto \langle 1 | \gamma ^{\mu } |2]\). Only the solution \(l_1^{(1)}\) is compatible with this constraint. Indeed, we can show explicitly that the contribution from the second solution vanishes, for instance

    (5.161)

    where we used \(|l_1^{(2)}] = \big ([23]/[13] \big ) |1]\) and \(|l_4^{(2)}] = |1]\). We assign spinors to \(-p\) according to the convention (1.113), namely \(|-p\rangle = {\mathrm {i}} |p\rangle \) and \(|-p] = {\mathrm {i}} |p]\). We thus have that

    $$\displaystyle \begin{aligned} \mathcal{C}_{1|2|3|45}\left( I^{(1)}(1^-,2^-,3^+,4^+,5^+) \right) \bigg|{}_{l_1^{(2)}} = 0 \,. \end{aligned} $$
    (5.162)

    On the first solution, the quadruple cut is given by

    $$\displaystyle \begin{aligned} \mathcal{C}_{1|2|3|45} \big|{}_{l_1^{(1)}} = \frac{[ l_1 l_4 ]^3}{[ 1 l_1 ] [ l_4 1 ]} \frac{\langle l_1 2 \rangle^3}{\langle 2 l_2 \rangle \langle l_2 l_1 \rangle} \frac{[ 3 l_3 ]^3}{[ l_3 l_2 ] [ l_2 3 ]} \frac{\langle l_4 l_3 \rangle^3}{\langle l_3 4 \rangle \langle 4 5 \rangle \langle 5 l_4 \rangle } \Bigg|{}_{l_1^{(1)}} \,, \end{aligned} $$
    (5.163)

    where we omitted the argument of \(\mathcal {C}\) for the sake of compactness. Plugging in the spinors from Eq. (5.158) and simplifying gives

    $$\displaystyle \begin{aligned} \mathcal{C}_{1|2|3|45}\left( I^{(1)}(1^-,2^-,3^+,4^+,5^+) \right) \bigg|{}_{l_1^{(1)}} = {\mathrm{i}} \, s_{12} s_{34} \left( \frac{{\mathrm{i}} \, \langle 1 2 \rangle^3}{\langle 2 3 \rangle \langle 3 4 \rangle \langle 4 5 \rangle \langle 5 1 \rangle} \right) \,, \end{aligned} $$
    (5.164)

    where in the parentheses we recognise the tree-level amplitude. Averaging over the two cut solutions (as in Eq. (3.73) for the four-gluon case) gives the four-dimensional coefficient of the scalar box integral,

    $$\displaystyle \begin{aligned} c_{0;1|2|3|45}(1^-,2^-,3^+,4^+,5^+) & \!=\! \frac{1}{2} \sum_{s=1}^2 \mathcal{C}_{1|2|3|45}\left( I^{(1)}(1^-,2^-,3^+,4^+,5^+) \right) \bigg|{}_{l_1^{(s)}} \\ & = \frac{{\mathrm{i}}}{2} s_{12} s_{34} A^{(0)}(1^-,2^-,3^+,4^+,5^+) \,, \end{aligned} $$
    (5.165)

    as claimed.

  2. (b)

    The solution of the quadruple cut can be obtained as in part (a) of this exercise. Alternatively, we can take a more direct route by exploiting the black/white formalism for the trivalent vertices. On each of the two solutions \(l_1^{(s)}\) the quadruple cut is given by

    (5.166)

    The only non-vanishing tree-level four-point amplitude is the MHV (or equivalently \(\overline {\text{MHV}}\)) one, so we have either \(h_1=h_2=-\) or \(h_1=h_2=+\). Specifying \(h_1\) and \(h_2\) and excluding adjacent black/white vertices fixes all the other helicities, so that the quadruple cut receives contributions from two helicity configurations:

    (5.167)

    In both cases the trivalent vertices constrain \(|l_4\rangle \propto |1\rangle \) and \(|l_4] \propto |5]\). The two configurations are thus non-vanishing only on one solution of the quadruple cut, say \(l_1^{(1)}\), which we parametrise starting from \(l_4\) as

    $$\displaystyle \begin{aligned} \left( l_4^{(1)} \right)^{\mu} = a \, \frac{1}{2} \langle 1 | \gamma^{\mu} |5] \,. \end{aligned} $$
    (5.168)

    The value of a is fixed by requiring that \(l_2^{(1)} = l_4^{(1)}+p_4+p_5\) is on shell (i.e. \(\big (l_2^{(1)}\big )^2=0\)), which gives \(a=\langle 4 5 \rangle / \langle 1 4 \rangle \). The spinors for the internal momenta on this solution can then be chosen as

    $$\displaystyle \begin{aligned} {} \begin{array}{llll} & |l_1^{(1)} \rangle = |1\rangle \,, && |l_1^{(1)}] = \frac{\langle 4 5 \rangle}{\langle 1 4 \rangle} |5] - |1] \,, \\ & |l_2^{(1)} \rangle = |4\rangle \,, && |l_2^{(1)}] = |4] + \frac{\langle 1 5 \rangle}{\langle 1 4 \rangle} |5] \,, \\ & |l_3^{(1)} \rangle = |4\rangle \,, && |l_3^{(1)}] = \frac{\langle 1 5 \rangle}{\langle 1 4 \rangle} |5] \,, \\ & |l_4^{(1)} \rangle = |1\rangle \,, \qquad \quad && |l_4^{(1)}] = \frac{\langle 4 5 \rangle}{\langle 1 4 \rangle} |5] \,. \end{array} \end{aligned} $$
    (5.169)

    The first contribution to the quadruple cut is given by

    $$\displaystyle \begin{aligned} \begin{aligned} \mathcal{C}_{1|23|4|5}^{(a)} \Big|{}_{l_1^{(1)}} & = \frac{[l_4 1]^3}{[1l_1][l_1l_4]} \frac{\langle3 l_2\rangle^3}{\langle l_2 l_1 \rangle \langle l_1 2 \rangle \langle 2 3 \rangle} \frac{[l_2 4]^3}{[4l_3][l_3l_2]} \frac{\langle 5 l_4 \rangle^3}{\langle l_4 l_3 \rangle \langle l_3 5 \rangle} \bigg|{}_{l_1^{(1)}} \\ & = {\mathrm{i}} \, s_{45} s_{15} \left( \frac{\langle 3 4 \rangle \langle 1 5 \rangle}{\langle 1 4 \rangle \langle 3 5 \rangle} \right)^4 \left(\frac{{\mathrm{i}} \, \langle 3 5 \rangle^4}{\langle 1 2 \rangle \langle 2 3 \rangle \langle 3 4 \rangle \langle 4 5 \rangle \langle 5 1 \rangle} \right) \,, \end{aligned} \end{aligned} $$
    (5.170)

    where in the right-most parentheses of the second line we recognise the tree-level amplitude \(A^{(0)}(1^+,2^+,3^-,4^+,5^-)\). The computation of the second term is analogous. Summing up the two contributions finally gives

    $$\displaystyle \begin{aligned} \mathcal{C}_{1|23|4|5} \Big|{}_{l_1^{(1)}} = {\mathrm{i}} \, s_{45} s_{15} \, A^{(0)} \, \left[ \left( \frac{\langle 3 4 \rangle \langle 1 5 \rangle}{\langle 1 4 \rangle \langle 3 5 \rangle} \right)^4 + \left( \frac{\langle 1 3 \rangle \langle 4 5 \rangle}{\langle 1 4 \rangle \langle 3 5 \rangle} \right)^4 \right] \,, \end{aligned} $$
    (5.171)

    where we omitted the argument of \(\mathcal {C}_{1|23|4|5}\) and \(A^{(0)}\) for compactness. The second solution, \(l_1^{(2)}\), is the complex conjugate of the first one. The quadruple cut vanishes on it by the argument above,

    $$\displaystyle \begin{aligned} \mathcal{C}_{1|23|4|5} \left(I^{(1)}(1^+,2^+,3^-,4^+,5^-)\right) \Big|{}_{l_1^{(2)}} = 0\,. \end{aligned} $$
    (5.172)

    Finally, we obtain the coefficient of the scalar box function at order \(\epsilon ^0\) by averaging over the two solutions:

    $$\displaystyle \begin{aligned} & c_{0;1|23|4|5}(1^+,2^+,3^-,4^+,5^-) \\ & \quad = \frac{{\mathrm{i}}}{2} s_{45} s_{15} \, A^{(0)} \, \left[ \left( \frac{\langle 3 4 \rangle \langle 1 5 \rangle}{\langle 1 4 \rangle \langle 3 5 \rangle} \right)^4 + \left( \frac{\langle 1 3 \rangle \langle 4 5 \rangle}{\langle 1 4 \rangle \langle 3 5 \rangle} \right)^4 \right] \,. \end{aligned} $$
    (5.173)
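
The spinor assignments of Eq. (5.158) are easy to validate numerically. Building each momentum as a \(2\times 2\) bispinor \(\lambda _\alpha \tilde \lambda _{\dot \alpha }\), the invariant \(p^2\) is its determinant up to a convention-dependent factor, so the four cut conditions of Eq. (5.155) amount to four vanishing determinants. A minimal Wolfram Language sketch (ours), with random complex spinors for legs 1, 2, 3; no momentum conservation is needed for this particular check:

SeedRandom[7];
ang[a_, b_] := a[[1]] b[[2]] - a[[2]] b[[1]];                  (* <ab> *)
bisp[la_, lt_] := Outer[Times, la, lt];                        (* p_{alpha alphadot} = lambda_alpha lambdatilde_alphadot *)
{lam1, lam2, lam3} = RandomComplex[{-1 - I, 1 + I}, {3, 2}];   (* |1>, |2>, |3> *)
{lmt1, lmt2, lmt3} = RandomComplex[{-1 - I, 1 + I}, {3, 2}];   (* |1], |2], |3] *)
{P1, P2, P3} = MapThread[bisp, {{lam1, lam2, lam3}, {lmt1, lmt2, lmt3}}];
L1 = bisp[ang[lam2, lam3]/ang[lam1, lam3] lam1, lmt2];         (* l_1^(1) from Eq. (5.158) *)
Chop[{Det[L1], Det[L1 - P2], Det[L1 - P2 - P3], Det[L1 + P1]}]
(* -> {0, 0, 0, 0}: all four propagators in Eq. (5.155) are indeed cut *)
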
FormalPara Exercise 3.3: Tensor Decomposition of the Bubble Integral
  1. (a)

    We contract both sides of the form-factor decomposition in Eq. (3.81) by the basis tensors \(\eta ^{\mu _1 \mu _2}\) and \(p_1^{\mu _1} p_1^{\mu _2}\), obtaining

    $$\displaystyle \begin{aligned} {} \begin{cases} F_2^{[D]} \left[k^2 \right] = a_{2,00} \, D + a_{2,11} \, p_1^2 \,, \\ F_2^{[D]}\left[(k\cdot p_1)^2 \right] = a_{2,00} \, p_1^2 + a_{2,11} \, (p_1^2)^2 \,. \\ \end{cases} \end{aligned} $$
    (5.174)

    For the sake of simplicity we omit the dependence of the bubble integrals on \(p_1\), and we introduce the short-hand notations \(D_1=k^2\) and \(D_2 = (k-p_1)^2\) for the inverse propagators. Solving the linear system (5.174) for the form factors gives

    $$\displaystyle \begin{aligned} {} \begin{aligned} a_{2,00} & = \frac{1}{D-1} \Big( F_2^{[D]}\left[k^2 \right] - \frac{1}{p_1^2} F_2^{[D]}\left[(k\cdot p_1)^2 \right] \Big) \,, \\ a_{2,11} & = \frac{1}{p_1^2 (D-1)} \Big( \frac{D}{p_1^2} F_2^{[D]}\left[(k\cdot p_1)^2 \right] - F_2^{[D]}\left[k^2 \right]\Big) \,. \end{aligned} \end{aligned} $$
    (5.175)

    The contraction of the rank-2 bubble with \(\eta ^{\mu _1\mu _2}\) is given by a scaleless integral and thus vanishes in dimensional regularisation,

    $$\displaystyle \begin{aligned} {} F_2^{[D]}\left[ k^2 \right] = \int_k \frac{1}{(k-p_1)^2} = 0 \,. \end{aligned} $$
    (5.176)

    The contraction with \(p_1^{\mu _1} p_1^{\mu _2}\) is instead given by

    $$\displaystyle \begin{aligned} {} F_2^{[D]}\left[ (k\cdot p_1)^2 \right] = \frac{1}{4} \int_k \left( \frac{(p_1^2)^2}{D_1 D_2} + \frac{D_1}{D_2} + \frac{D_2}{D_1} - 2 + 2 \frac{p_1^2}{D_2} - 2 \frac{p_1^2}{D_1} \right) \,, \end{aligned} $$
    (5.177)

    where we used that \(2\, k\cdot p_1 = D_1 - D_2 + p_1^2\). All terms but the first one vanish in dimensional regularisation. To see this explicitly, consider for instance the second term. By shifting the loop momentum by \(p_1\) we can rewrite it as a combination of manifestly scaleless integrals,

    $$\displaystyle \begin{aligned} \int_k \frac{k^2}{(k-p_1)^2} = \int_k 1 + p_1^2 \int_k \frac{1}{k^2} + 2 p_1^{\mu} \int_k \frac{k_{\mu}}{k^2} \,, \end{aligned} $$
    (5.178)

    which vanish in dimensional regularisation (see Sect. 4.2.1 in Chap. 4). Equation (5.177) thus reduces to

    $$\displaystyle \begin{aligned} {} F_2^{[D]}\left[ (k\cdot p_1)^2 \right] = \frac{(p_1^2)^2}{4} F_2^{[D]}[1] \,. \end{aligned} $$
    (5.179)

    Substituting Eqs. (5.176) and (5.179) into Eq. (5.175) finally gives

    $$\displaystyle \begin{aligned} \begin{aligned} a_{2,00} & = -\frac{p_1^2}{4 (D-1)} \, F_2^{[D]}\left[1 \right] \,, \\ a_{2,11} & = \frac{D}{4 (D-1)} \, F_2^{[D]}\left[1 \right] \,. \end{aligned} \end{aligned} $$
    (5.180)
  2. (b)

    We proceed as we did in part (a). For compactness, we define

    $$\displaystyle \begin{aligned} \begin{aligned} & T_{1}^{\mu_1 \mu_2 \mu_3} = \eta^{\mu_1\mu_2} p_1^{\mu_3} + \eta^{\mu_2\mu_3} p_1^{\mu_1} + \eta^{\mu_3\mu_1} p_1^{\mu_2} \,, \\ & T_{2}^{\mu_1 \mu_2 \mu_3} = p_1^{\mu_1} p_1^{\mu_2} p_1^{\mu_3} \,. \end{aligned} \end{aligned} $$
    (5.181)

    Note that \(F_2^{[D]}\left [k^{\mu _1} k^{\mu _2} k^{\mu _3} \right ]\) is symmetric under permutations of the Lorentz indices. While \(T_2\) enjoys this symmetry, the three separate terms of \(T_1\) do not. That is why they appear together in \(T_1\) rather than with distinct form factors. The symmetry property would in fact constrain the latter to be equal. We then contract both sides of the tensor decomposition (3.82) with the basis tensors, and solve the ensuing \(2\times 2\) linear system for the form factors. Using the following contractions,

    $$\displaystyle \begin{aligned} \begin{aligned} & F_2^{[D]}\left[k^{\mu_1} k^{\mu_2} k^{\mu_3} \right] \left(T_1\right)_{\mu_1 \mu_2 \mu_3} = 0 \,, \\ & F_2^{[D]}\left[k^{\mu_1} k^{\mu_2} k^{\mu_3} \right] \left(T_2\right)_{\mu_1 \mu_2 \mu_3} = \frac{(p_1^2)^3}{8} F_2^{[D]}\left[1\right]\,, \\ & T_1^{\mu_1\mu_2\mu_3} \left(T_1\right)_{\mu_1\mu_2\mu_3} = 3 p_1^2 (D+2) \,, \\ & T_1^{\mu_1\mu_2\mu_3} \left(T_2\right)_{\mu_1\mu_2\mu_3} = 3 (p_1^2)^2 \,, \\ & T_2^{\mu_1\mu_2\mu_3} \left(T_2\right)_{\mu_1\mu_2\mu_3} = (p_1^2)^3 \,, \end{aligned} \end{aligned} $$
    (5.182)

    we obtain

    $$\displaystyle \begin{aligned} \begin{aligned} a_{2,001} & = - \frac{p_1^2}{8 (D-1)} \, F_2^{[D]}\left[1\right] \,, \\ a_{2,111} & = \frac{D+2}{8(D-1)} \, F_2^{[D]} \left[1\right] \,. \end{aligned} \end{aligned} $$
    (5.183)
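
Both linear systems in this exercise can be solved in a couple of lines. The following Wolfram Language sketch (ours; the space-time dimension is called dim, \(p_1^2\) is pp, and F0 stands for the scalar bubble \(F_2^{[D]}[1]\)) reproduces Eqs. (5.180) and (5.183):

(* rank 2: the system (5.174) with the contractions (5.176) and (5.179) inserted *)
sol2 = Solve[{0 == a00 dim + a11 pp, pp^2/4 F0 == a00 pp + a11 pp^2}, {a00, a11}];
Simplify[sol2]    (* -> a00 = -pp F0/(4(dim-1)), a11 = dim F0/(4(dim-1)), cf. Eq. (5.180) *)
(* rank 3: contractions of the decomposition with T1 and T2, using Eq. (5.182) *)
sol3 = Solve[{0 == 3 pp (dim + 2) a001 + 3 pp^2 a111,
              pp^3/8 F0 == 3 pp^2 a001 + pp^3 a111}, {a001, a111}];
Simplify[sol3]    (* -> a001 = -pp F0/(8(dim-1)), a111 = (dim+2) F0/(8(dim-1)), cf. Eq. (5.183) *)
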
FormalPara Exercise 3.4: Spurious Loop-Momentum Space for the Box Integral
  1. (a)

    The physical space is 3-dimensional, and may be spanned by \(\{p_1,p_2,p_3\}\). The spurious space is 1-dimensional. In order to construct a vector \(\omega \) spanning the spurious space, we start from a generic ansatz made from the spinors associated with \(p_1\) and \(p_2\),

    $$\displaystyle \begin{aligned} \omega^{\mu} = \alpha_1 p_1^{\mu} + \alpha_2 p_2^{\mu} + \alpha_3 \frac{1}{2} \langle 1 | \gamma^{\mu} | 2 ] + \alpha_4 \frac{1}{2} \langle 2 | \gamma^{\mu} | 1 ] \,, \end{aligned} $$
    (5.184)

    and constrain it by imposing the orthogonality to the external momenta and the normalisation (\(\omega ^2 = 1\)). While \(\omega \cdot p_1 = 0\) and \(\omega \cdot p_2 = 0\) fix \(\alpha _1 = \alpha _2 = 0\), the orthogonality to \(p_3\) and the normalisation imply

    $$\displaystyle \begin{aligned} \alpha_3 \langle 1 3 \rangle [ 3 2 ] + \alpha_4 \langle 2 3 \rangle [ 3 1 ] = 0 \,, \qquad \quad \alpha_3 \alpha_4 \, s_{12} = -1 \,, \end{aligned} $$
    (5.185)

    where \(s_{ij} = (p_i + p_j)^2\). The solution is given by

    $$\displaystyle \begin{aligned} {} \omega^{\mu} = \frac{1}{2 \sqrt{s_{12} s_{23} s_{13}}} \big[ \langle 1 | \gamma^{\mu} | 2 ] \langle 2 3 \rangle [ 3 1 ] - \langle 2 | \gamma^{\mu} | 1 ] \langle 1 3 \rangle [ 3 2 ] \big] \,. \end{aligned} $$
    (5.186)
  2. (b)

    We rewrite the spinor chains in Eq. (5.186) in terms of traces of Pauli matrices,

    $$\displaystyle \begin{aligned} \omega^{\mu} = \frac{1}{2 \sqrt{s_{12} s_{23} s_{13}}} \big[ \text{Tr}\left( \sigma^{\mu} \bar{\sigma}^{\rho} \sigma^{\tau} \bar{\sigma}^{\nu} \right) - \text{Tr}\left( \sigma^{\mu} \bar{\sigma}^{\nu} \sigma^{\tau} \bar{\sigma}^{\rho} \right) \big] p_{1 \nu} p_{2 \rho} p_{3 \tau} \,. \end{aligned} $$
    (5.187)

    We trade the Pauli matrices for Dirac matrices through Eq. (1.29). The terms free of \(\gamma _5\) cancel out thanks to the cyclicity of the trace and the identity \(\text{Tr}\left ( \gamma ^{\mu } \gamma ^{\nu } \gamma ^{\rho } \gamma ^{\tau } \right ) = \text{Tr}\left ( \gamma ^{\tau } \gamma ^{\rho } \gamma ^{\nu } \gamma ^{\mu } \right )\). We rewrite the traces with \(\gamma _5\) in terms of the Levi-Civita symbol using \(\text{Tr}\left (\gamma ^{\mu } \gamma ^{\nu } \gamma ^{\rho } \gamma ^{\tau } \gamma _5\right ) = -4 {\mathrm {i}} \epsilon ^{\mu \nu \rho \tau }\), obtaining

    $$\displaystyle \begin{aligned} \omega^{\mu} = \frac{2 {\mathrm{i}}}{ \sqrt{s_{12} s_{23} s_{13}}} \epsilon^{\mu \nu \rho \tau} p_{1\nu} p_{2\rho} p_{3\tau} \,. \end{aligned} $$
    (5.188)
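
The final expression is straightforward to test numerically: with the metric \(\mathrm{diag}(+,-,-,-)\), the vector \(\omega \) of Eq. (5.188) built from three random massless momenta must be orthogonal to each of them and satisfy \(\omega ^2=1\). A minimal Wolfram Language sketch (ours); the overall sign convention of the Levi-Civita symbol drops out of both checks:

SeedRandom[1];
met = DiagonalMatrix[{1, -1, -1, -1}];
dot[a_, b_] := a.met.b;
rndMassless[] := Module[{en = RandomReal[{1, 2}], th = RandomReal[{0, Pi}], ph = RandomReal[{0, 2 Pi}]},
   en {1, Sin[th] Cos[ph], Sin[th] Sin[ph], Cos[th]}];
{p1, p2, p3} = Table[rndMassless[], {3}];
{s12, s23, s13} = {dot[p1 + p2, p1 + p2], dot[p2 + p3, p2 + p3], dot[p1 + p3, p1 + p3]};
eps = Normal[LeviCivitaTensor[4]];
v = Table[Sum[eps[[mu, nu, rho, tau]] (met.p1)[[nu]] (met.p2)[[rho]] (met.p3)[[tau]],
    {nu, 4}, {rho, 4}, {tau, 4}], {mu, 4}];
omega = 2 I v/Sqrt[s12 s23 s13];             (* Eq. (5.188) *)
Chop[{dot[omega, p1], dot[omega, p2], dot[omega, p3], dot[omega, omega]}]
(* -> {0, 0, 0, 1.}: omega spans the spurious space and is unit-normalised *)
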
FormalPara Exercise 3.5: Reducibility of the Pentagon in Four Dimensions
  1. (a)

    We rewrite the triangle integral as

    $$\displaystyle \begin{aligned} F^{[D]}_3(p_1,p_2) = \int_k \frac{1}{(-D_1) \, (-D_2) \, (-D_3)} \,, \end{aligned} $$
    (5.189)

    with inverse propagators

    $$\displaystyle \begin{aligned} D_1 = k^2 \,, \quad D_2 =(k-p_1)^2 \,, \quad D_3 = (k-p_1-p_2)^2 \,. \end{aligned} $$
    (5.190)

    The \({\mathrm {i}} 0\) is irrelevant here, and we thus omit it. We introduce a two-dimensional parametrisation of the loop momentum \(k^{\mu }\) by expanding it in a basis formed by two independent external momenta, say \(p_1^{\mu }\) and \(p_2^{\mu }\), as

    $$\displaystyle \begin{aligned} {} k^{\mu} = a_1 \, p_1^{\mu} + a_2 \, p_2^{\mu} \,. \end{aligned} $$
    (5.191)

    Since there are only two degrees of freedom, parametrised by \(a_1\) and \(a_2\), the three inverse propagators of the triangle integral cannot be algebraically independent (over the field of the rational functions of s). In order to find the relation among them, we express them in terms of \(a_1\) and \(a_2\),

    $$\displaystyle \begin{aligned} {} \begin{aligned} & D_1 = s \, a_1 (a_1-a_2) \,, \\ & D_2 = s \, (1-a_1) (1+a_2-a_1) \,, \\ & D_3 = s \, (1 - a_1) (a_2 - a_1) \,, \end{aligned} \end{aligned} $$
    (5.192)

    with \(s=p_1^2\). We then solve two of these equations to express \(a_1\) and \(a_2\) in terms of inverse propagators. Choosing \(D_1\) and \(D_3\) we obtain

    $$\displaystyle \begin{aligned} a_1 = \frac{D_1}{D_1-D_3} \,, \quad a_2 = \frac{(D_1-D_3)^2 - s \, D_1}{s \, (D_3-D_1)} \,. \end{aligned} $$
    (5.193)

    Plugging these into the expression of \(D_2\) in Eq. (5.192) gives a relation among the three inverse propagators,

    $$\displaystyle \begin{aligned} {} 1 = \frac{1}{s} \left( D_1+D_2-D_3 - \frac{D_1 D_2}{D_3} \right) \,. \end{aligned} $$
    (5.194)

    Inserting this into the numerator of the triangle integral, expanding, and removing the scaleless integrals which integrate to zero finally gives

    $$\displaystyle \begin{aligned} \begin{aligned} F^{[2-2\epsilon]}_{3}(p_1,p_2) & = \frac{1}{s} \int_k \frac{1}{k^2 (k-p_1)^2} + \text{terms missed in }2D \\ & = \frac{1}{s} \, F_2^{[2-2\epsilon]}(p_1) + \text{terms missed in }2D\,. \end{aligned} \end{aligned} $$
    (5.195)

    Up to terms which are missed by the two-dimensional analysis, the triangle integral in \(D=2-2\epsilon \) dimensions can be expressed in terms of a bubble integral, and is thus reducible.

  2. (b)

    In \(D=2\) dimensions, any three momenta are linearly dependent. The Gram matrix \(G(k,p_1,p_2)\) therefore has vanishing determinant,

    $$\displaystyle \begin{aligned} -\frac{1}{4} s^2 k^2 - s \, (k\cdot p_1) (k \cdot p_2) - s \, (k \cdot p_2)^2 = 0 \,, \end{aligned} $$
    (5.196)

    which can be verified using a two-dimensional parametrisation of \(k^{\mu }\) such as Eq. (5.191). In order to convert it into a relation among the inverse propagators, we express the scalar products of the loop momentum in terms of inverse propagators,

    $$\displaystyle \begin{aligned} k^2 = D_1 \,, \qquad k\cdot p_1 = \frac{D_1-D_2+s}{2} \,, \qquad k\cdot p_2 = \frac{D_2-D_3-s}{2} \,. \end{aligned} $$
    (5.197)

    Expressing the determinant of \(G(k,p_1,p_2)\) in terms of inverse propagators gives the relation (5.194).

  3. (c)

    The steps are the same as for the previous point, but the algebraic manipulations are cumbersome. We implement them in the Mathematica notebook Ex3.5_Reducibility.wl [1]. In \(D=4\) dimensions the following Gram determinant vanishes:

    $$\displaystyle \begin{aligned} {} \text{det} \, G\left(k,p_1,p_2,p_3,p_4\right) = 0\,. \end{aligned} $$
    (5.198)

    We aim to rewrite this in terms of the inverse propagators of the pentagon:

    $$\displaystyle \begin{aligned} {} \begin{array}{ll} D_1 = k^2 \,, & D_4 = (k-p_1-p_2-p_3)^2 \,, \\ D_2 = (k-p_1)^2 \,, \qquad \quad & D_5 = (k-p_1-p_2-p_3-p_4)^2 \,. \\ D_3 = (k-p_1-p_2)^2 \,, \qquad \qquad & \\ \end{array} \end{aligned} $$
    (5.199)

    The first step is to parametrise the kinematics in terms of independent invariants \(s_{ij} = (p_i+p_j)^2\). It is instructive to count the latter for a generic number of particles n. There are \(n(n+1)/2\) distinct scalar products \(p_i\cdot p_j\) with \(i,j=1,\ldots ,n\). Momentum conservation gives n constraints, as we may contract \(\sum _{i=1}^n p_i^{\mu } = 0\) by \(p_j^{\mu }\) for any \(j=1,\ldots ,n\). Moreover, we have n on-shell constraints: \(p_i^2=0\) for \(i=1,\ldots ,n\). We are thus left with \(\frac {n(n+1)}{2} -2 n = n (n-3)/2 \) independent invariants. For \(n=4\) that gives 2 independent invariants—the familiar s and t Mandelstam invariants—while for \(n=5\) we have 5. It is convenient to choose them as \(\mathbf {s} := \{s_{12},s_{23},s_{34},s_{45},s_{51}\}\). We now need to express all scalar products \(p_i\cdot p_j\) in terms of \(\mathbf {s}\). We may do so by solving the linear system of equations obtained from momentum conservation as discussed above:

    $$\displaystyle \begin{aligned} \sum_{i=1}^5 p_i \cdot p_j = 0 \,, \quad \forall \, j=1,\ldots, 5\,. \end{aligned} $$
    (5.200)

    We rewrite the latter in terms of \(s_{ij}\)’s and solve. We do this in the Mathematica notebook, obtaining—for example—that \(p_1 \cdot p_4 = (s_{23} - s_{45} - s_{51})/2\).

    We now turn our attention to the scalar products involving the loop momentum: \(k^2\), and \(k\cdot p_i\) for \(i=1,\ldots ,4\) (\(k\cdot p_5\) is related to the others by momentum conservation). Having parametrised the kinematics in terms of independent invariants \(\mathbf {s}\), we may solve the system (5.199) to express them in terms of inverse propagators and \(\mathbf {s}\). For example, we obtain that \(k \cdot p_3 = (D_3 - D_4 + s_{45} -s_{12})/2\). Using this result, we can rewrite the Gram-determinant condition (5.198) as

    $$\displaystyle \begin{aligned} 1 = \sum_{i=1}^5 A_i D_i + \sum_{i\le j=1}^5 B_{ij} D_i D_j \,, \end{aligned} $$
    (5.201)

    where \(A_i\) and \(B_{ij}\) are rational functions of the invariants \(\mathbf {s}\). Plugging this into the numerator of the pentagon integral and expanding finally gives the reduction into integrals with fewer propagators, up to terms missed in \(D=4\).

FormalPara Exercise 3.6: Parametrising the Bubble Integrand
  1. (a)

    We parametrise the loop momentum as in Eq. (3.159). We recall that \(p_1 \cdot \omega _i = 0\) and \(\omega _i \cdot \omega _j = \delta _{ij} \, \omega _i^2\). The coefficient \(\alpha _1\) can be expressed in terms of propagators and external invariants by noticing that \(\alpha _1 = k\cdot p_1/p_1^2\), and rewriting \(k\cdot p_1\) in terms of inverse propagators (3.158). This gives

    $$\displaystyle \begin{aligned} {} \alpha_1 = \frac{D_1-D_2 +p_1^2 + m_1^2-m_2^2}{2 \, p_1^2}\,. \end{aligned} $$
    (5.202)

    We thus see that \(\alpha _1\) does not depend on the loop momentum on the bubble cut \(D_1=D_2=0\). As a result, the loop-momentum parametrisation of the bubble numerator \(\mathnormal {\varDelta }_{1|2}\) depends only on three ISPs: \(k\cdot \omega _i\) for \(i=1,2,3\). The maximum tensor rank for a renormalisable gauge theory is two, hence a general parametrisation is

    $$\displaystyle \begin{aligned} {} \begin{aligned} \mathnormal{\varDelta}_{1|2}&(k \cdot \omega_1, k \cdot \omega_2, k \cdot \omega_3) = c_{000} \\ & + c_{100} (k \cdot \omega_1) + c_{010} (k \cdot \omega_2) + c_{001} (k \cdot \omega_3) \\ & + c_{110} (k \cdot \omega_1) (k \cdot \omega_2) + c_{101} (k \cdot \omega_1) (k \cdot \omega_3) + c_{011} (k \cdot \omega_2) (k \cdot \omega_3) \\ & + c_{200} (k \cdot \omega_1)^2 + c_{020} (k \cdot \omega_2)^2 + c_{002} (k \cdot \omega_3)^2 \,. \end{aligned} \end{aligned} $$
    (5.203)

    The cut condition \(D_1=0\) implies one more constraint on the loop-momentum dependence:

    $$\displaystyle \begin{aligned} {} \mathcal{C}_{1|2}\left(k_{\parallel}^2\right) + \sum_{i=1}^3 (k\cdot \omega_i)^2 \omega_i^2 -m_1^2 = 0 \,. \end{aligned} $$
    (5.204)

    Since \(\mathcal {C}_{1|2}\big (k_{\parallel }^2\big ) = (m_1^2 - m_2^2 + p_1^2)^2/(4 \, p_1^2)\) does not depend on the loop momentum on the cut, we may use Eq. (5.204) to eliminate, say, \((k\cdot \omega _3)^2\) from the numerator (5.203). It is however more convenient to implement the constraint (5.204) so as to maximise the number of terms which integrate to zero. The terms in the second and third line on the RHS of Eq. (5.203) contain odd powers of \(k\cdot \omega _i\), and thus vanish upon integration. Using transverse integration one can show that

    $$\displaystyle \begin{aligned} \int_k \frac{(k \cdot \omega_i)^2}{D_1 D_2} = \frac{\omega_i^2}{\omega_j^2} \int_k \frac{(k \cdot \omega_j)^2}{D_1 D_2} \,. \end{aligned} $$
    (5.205)

    We can then use the constraint (5.204) to group \((k\cdot \omega _1)^2\), \((k\cdot \omega _2)^2\) and \((k\cdot \omega _3)^2\) into two terms which vanish upon integration. This can be achieved for instance as

    $$\displaystyle \begin{aligned} \begin{aligned} \mathnormal{\varDelta}_{1|2}&(k \cdot \omega_1, k \cdot \omega_2, k \cdot \omega_3) = c_{0;1|2} \\ & + c_{1;1|2} (k \cdot \omega_1) + c_{2;1|2} (k \cdot \omega_2) + c_{3;1|2} (k \cdot \omega_3) \\ & + c_{4;1|2} (k \cdot \omega_1) (k \cdot \omega_2) + c_{5;1|2} (k \cdot \omega_1) (k \cdot \omega_3) + c_{6;1|2} (k \cdot \omega_2) (k \cdot \omega_3) \\ & + c_{7;1|2} \left[ (k \cdot \omega_1)^2 - \frac{\omega_1^2}{\omega_3^2} (k \cdot \omega_3)^2 \right] + c_{8;1|2} \left[ (k \cdot \omega_2)^2 - \frac{\omega_2^2}{\omega_3^2} (k \cdot \omega_3)^2 \right] \,, \end{aligned} \end{aligned} $$
    (5.206)

    such that only the term with coefficient \(c_{0;1|2}\) survives upon integration, as claimed.

  2. (b)

    The bubble cut of the one-loop amplitude \(A^{(1),[4-2\epsilon ]}_n\) is by definition given by

    $$\displaystyle \begin{aligned} {} \mathcal{C}_{1|2} \left(A^{(1),[4-2\epsilon]}_n\right) = \int_k \left[ I^{(1)}(k) \prod_{i=1}^2 \left( D_i \, (-2 \pi {\mathrm{i}}) \, \delta^{(+)}\left(D_i\right) \right) \right] \,, \end{aligned} $$
    (5.207)

    where \(I^{(1)}(k)\) denotes the integrand of \(A^{(1),[4-2\epsilon ]}_n\). We parametrise the latter in terms of boxes, triangles, and bubbles. The terms which survive on the bubble cut \(1|2\) are

    (5.208)

    Here, the ellipsis denotes terms which vanish on the cut. The sum over X runs over all triangle configurations which share the propagators \(1/(-D_1)\) and \(1/(-D_2)\); \(\omega _1^X\) and \(\omega _2^X\) are the vectors spanning the corresponding spurious loop-momentum space, and \(1/(-D_X)\) is the propagator which completes the triangle. Similarly, the sum over \(Y,Z\) runs over all box configurations sharing the propagators \(1/(-D_1)\) and \(1/(-D_2)\); \(\omega ^{YZ}\) spans their spurious loop-momentum space, and \(1/(-D_Y)\) and \(1/(-D_Z)\) are the propagators which complete the box. Equating Eqs. (5.207) and (5.208) and solving for \(\mathnormal {\varDelta }_{1|2}\) gives

    $$\displaystyle \begin{aligned} \begin{aligned} \mathnormal{\varDelta}_{1|2}&\left(k\cdot \omega_1, k\cdot \omega_2, k\cdot \omega_3\right) \bigg|{}_{D_i=0} = \bigg( I^{(1)}(k) \prod_{i=1}^2 D_i \\ & + \sum_X \frac{\mathnormal{\varDelta}_{1|2|X}\left( k \cdot \omega^X_1, k\cdot \omega^X_2 \right)}{D_X} - \sum_{Y,Z} \frac{\mathnormal{\varDelta}_{1|2|Y|Z}\left( k \cdot \omega^{YZ} \right)}{D_Y D_Z} \bigg) \bigg|{}_{D_i=0} \,, \end{aligned} \end{aligned} $$
    (5.209)

    as claimed.

FormalPara Exercise 3.7: Dimension-Shifting Relation at One Loop

We decompose the loop momentum into a four-dimensional and a \((-2\epsilon )\)-dimensional part as

$$\displaystyle \begin{aligned} k_1^{\mu} = k_1^{[4], \, \mu} + k_1^{[-2\epsilon], \, \mu} \,, \end{aligned} $$
(5.210)

with \(k_1^{[-2\epsilon ]} \cdot k_1^{[4]} = 0 = k_1^{[-2\epsilon ]} \cdot p_i\) and \( k_1^{[-2\epsilon ]} \cdot k_1^{[-2\epsilon ]} = - \mu _{11}\). Note that \(\mu _{11} > 0\). The loop-integration measure factorises as \({\mathrm {d}}^D k_1 = {\mathrm {d}}^4 k_1^{[4]} \, {\mathrm {d}}^{-2\epsilon } k_1^{[-2\epsilon ]}\). We rewrite the integral on the LHS of Eq. (3.184) as

$$\displaystyle \begin{aligned} {} F^{[4-2\epsilon]}_n(p_1,\ldots,p_{n-1})[\mu_{11}^r] = \int \frac{{\mathrm{d}}^D k_1}{{\mathrm{i}} \pi^{D/2}} \, \mu_{11}^r \, \mathcal{F}_n\left( k_1^{[4]}, \mu_{11} \right) \,. \end{aligned} $$
(5.211)

The integrand \(\mathcal {F}_n\) depends on the loop momentum only through its four-dimensional components \(k_1^{[4], \, \mu }\) and \(\mu _{11}\). In other words, the integrand does not depend on the angular coordinates of the \((-2\epsilon )\)-dimensional subspace. We thus introduce angular and radial coordinates as

$$\displaystyle \begin{aligned} {\mathrm{d}}^{-2\epsilon} k_1^{[-2\epsilon]} = \frac{1}{2} \, {\mathrm{d}} \mathnormal{\varOmega}_{-2\epsilon} \, \mu_{11}^{-1-\epsilon} {\mathrm{d}} \mu_{11} \,, \end{aligned} $$
(5.212)

and carry out the \((-2\epsilon )\)-dimensional angular integration in Eq. (5.211). We recall that the surface area of a unit-radius sphere in m dimensions is given by

$$\displaystyle \begin{aligned} {} \mathnormal{\varOmega}_{m} := \int {\mathrm{d}} \mathnormal{\varOmega}_{m} = \frac{2 \, \pi^{m/2}}{\mathnormal{\varGamma}\left(\frac{m}{2}\right)} \,. \end{aligned} $$
(5.213)

We obtain

$$\displaystyle \begin{aligned} {} F^{[4-2\epsilon]}_n(p_1,\ldots,p_{n-1})[\mu_{11}^r] = \frac{\mathnormal{\varOmega}_{-2\epsilon}}{2} \int \frac{{\mathrm{d}}^4 k_1^{[4]}}{{\mathrm{i}} \pi^{2-\epsilon}} \int_{0}^{\infty} {\mathrm{d}} \mu_{11} \, \mu_{11}^{r-1-\epsilon} \, \mathcal{F}_n\left( k_1^{[4]}, \mu_{11} \right) \,. \end{aligned} $$
(5.214)

We view the remaining \(\mu _{11}\) integration as the radial integration in a \((2r-2\epsilon )\)-dimensional subspace. The loop-integration measure in the latter is in fact given by

$$\displaystyle \begin{aligned} {\mathrm{d}}^{2r - 2\epsilon} k_1^{[2r - 2\epsilon]} = \frac{1}{2} \, {\mathrm{d}} \mathnormal{\varOmega}_{2r-2\epsilon} \, \mu_{11}^{r-1-\epsilon} {\mathrm{d}} \mu_{11} \,. \end{aligned} $$
(5.215)

Exploiting again the independence of the integrand on the angular coordinates, we rewrite Eq. (5.214) as

$$\displaystyle \begin{aligned} {} F^{[4-2\epsilon]}_n(p_1,\ldots,p_{n-1})[\mu_{11}^r] = \frac{\pi^r \mathnormal{\varOmega}_{-2\epsilon}}{\mathnormal{\varOmega}_{2 r-2\epsilon}} \int \frac{ {\mathrm{d}}^{4} k_1^{[4]} \, {\mathrm{d}}^{2r-2\epsilon} k_1^{[2r-2\epsilon]} }{{\mathrm{i}} \pi^{2+r-\epsilon}} \, \mathcal{F}_n\left( k_1^{[4]}, \mu_{11} \right) \,. \end{aligned} $$
(5.216)

Using Eq. (5.213) for the prefactor gives

$$\displaystyle \begin{aligned} \frac{\pi^r \mathnormal{\varOmega}_{-2\epsilon}}{\mathnormal{\varOmega}_{2 r-2\epsilon}} = \frac{\mathnormal{\varGamma}(r-\epsilon)}{\mathnormal{\varGamma}(-\epsilon)} \,, \end{aligned} $$
(5.217)

which we simplify using Eq. (3.185). The loop integration on the RHS of Eq. (5.216) matches the scalar one-loop integral (i.e., with numerator 1) with loop momentum in \(D=4+2r - 2\epsilon \) dimensions and four-dimensional external momenta, namely

$$\displaystyle \begin{aligned} F_n^{[4-2\epsilon]}(p_1,\ldots,p_{n-1})[\mu_{11}^r] = \left( \prod_{s=0}^{r-1}(s-\epsilon) \right) \, F_{n}^{[4+2 r - 2\epsilon]}(p_1,\ldots,p_{n-1})[1] \,, \end{aligned} $$
(5.218)

as claimed.
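
The prefactor identity (5.217), and hence the product in Eq. (5.218), can be confirmed with a few lines of Wolfram Language (ours; ep stands for \(\epsilon \)). The first check verifies Eq. (5.217) symbolically using the sphere area (5.213); the second verifies numerically, at an arbitrary value of ep, that the ratio of Gamma functions equals the product of Eq. (3.185) for the first few integer values of r:

area[m_] := 2 Pi^(m/2)/Gamma[m/2];                                       (* Eq. (5.213) *)
Simplify[Pi^r area[-2 ep]/area[2 r - 2 ep] - Gamma[r - ep]/Gamma[-ep]]   (* -> 0, Eq. (5.217) *)
Chop[Table[Gamma[r - ep]/Gamma[-ep] - Product[s - ep, {s, 0, r - 1}] /. ep -> 0.123456, {r, 1, 4}]]
(* -> {0, 0, 0, 0} *)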

FormalPara Exercise 3.8: Projecting Out the Triangle Coefficients

The solution follows from the theory of the discrete Fourier transform. Let N be a positive integer. The functions

$$\displaystyle \begin{aligned} \left\{\text{e}^{\frac{2 \pi {\mathrm{i}}}{N} k l}\,, \ l = 0,\ldots, N-1 \right\} \,, \end{aligned} $$
(5.219)

with \(k\in \mathbb {Z}\), form an orthogonal basis of the space of complex-valued functions on the set of the \(N^{\text{th}}\) roots of unity, \(\{ \text{e}^{2\pi {\mathrm {i}} l/N}, \, l=0,\ldots ,N-1\}\). In other words, they satisfy the orthogonality condition

$$\displaystyle \begin{aligned} {} \sum_{l=0}^{N-1} \text{e}^{\frac{2\pi{\mathrm{i}}}{N} (n-k) l} = \delta_{n,k} \, N \,. \end{aligned} $$
(5.220)

This is straightforward for \(n=k\). For \(n\neq k\), Eq. (5.220) follows from the identity

$$\displaystyle \begin{aligned} \sum_{l=0}^{N-1} z^l = \frac{1-z^N}{1-z} \end{aligned} $$
(5.221)

with \(z = \text{e}^{2\pi {\mathrm {i}} (n-k)/N}\), and hence \(z^N = 1\).

We can then use the orthogonality condition to project out the triangle coefficients \(d_{k;1|2|3}\). Using Eqs. (3.192) and (3.193) we have that

$$\displaystyle \begin{aligned} \sum_{l=-3}^3 \text{e}^{-{\mathrm{i}} k \theta_l} \mathnormal{\varDelta}_{1|2|3}(\theta_l) = \sum_{n=-3}^3 d_{n;1|2|3} \, \text{e}^{- 3 (n-k) \frac{2\pi{\mathrm{i}}}{7}} \sum_{l=0}^6 \text{e}^{\frac{2\pi {\mathrm{i}}}{7} (n-k) l} \,. \end{aligned} $$
(5.222)

Substituting Eq. (5.220) with \(N=7\), and solving for \(d_{k;1|2|3}\) gives

$$\displaystyle \begin{aligned} d_{k;1|2|3} = \frac{1}{7} \sum_{l=-3}^3 \text{e}^{-{\mathrm{i}} k \theta_l} \mathnormal{\varDelta}_{1|2|3}(\theta_l)\,, \end{aligned} $$
(5.223)

as claimed.

We now consider a rank-4 four-dimensional triangle numerator \(\mathnormal {\varDelta }_{1|2|3}^{(4)}(k\cdot \omega _1, k\cdot \omega _2)\). We parametrise the family of solutions to the triple cut by the angle \(\theta \) as in Eq. (3.190). Expanding sine and cosine into exponentials gives

$$\displaystyle \begin{aligned} \mathnormal{\varDelta}_{1|2|3}^{(4)}(\theta) = \sum_{k=-4}^4 d_{k;1|2|3}^{(4)} \, \text{e}^{{\mathrm{i}} k \theta} \,. \end{aligned} $$
(5.224)

The coefficients \(d_{k;1|2|3}^{(4)}\) can then be projected out using the \(9{\text{th}}\) roots of unity \(\text{e}^{{\mathrm {i}} \theta _l^{\prime }}\), with \(\theta _l^{\prime } = 2\pi \, l/9\), for \(l=-4,\ldots ,4\). By using the orthogonality condition (5.220) with \(N=9\) we obtain

$$\displaystyle \begin{aligned} d_{k;1|2|3}^{(4)} = \frac{1}{9} \sum_{l=-4}^4 \text{e}^{-{\mathrm{i}} k \theta_l^{\prime}} \mathnormal{\varDelta}_{1|2|3}^{(4)}\left(\theta_l^{\prime}\right)\,. \end{aligned} $$
(5.225)
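
The projection (5.223) is easily tested numerically: generate a Laurent polynomial of the form (3.192) with random coefficients, sample it at the seven angles \(\theta _l = 2\pi l/7\), and check that Eq. (5.223) returns the input coefficients. A minimal Wolfram Language sketch (ours):

SeedRandom[3];
dref = RandomComplex[{-1 - I, 1 + I}, 7];                     (* d_k for k = -3,...,3 *)
delta[th_] := Sum[dref[[k + 4]] Exp[I k th], {k, -3, 3}];     (* triangle numerator on the cut *)
theta[l_] := N[2 Pi l/7];                                     (* theta_l *)
drec = Table[Sum[Exp[-I k theta[l]] delta[theta[l]], {l, -3, 3}]/7, {k, -3, 3}];
Chop[drec - dref]                                             (* -> a list of seven zeros, Eq. (5.223) *)
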
FormalPara Exercise 3.9: Rank-One Triangle Reduction with Direct Extraction
  1. (a)

    After integration, the tensor integral \(F_3^{[D]}(P,Q)[k^{\mu }]\) can only be a function of \(P^{\mu }\) and \(Q^{\mu }\). We thus expand it as

    $$\displaystyle \begin{aligned} F_3^{[D]}(P,Q)[k^{\mu}] = c_1 \, P^{\mu} + c_2 \, Q^{\mu} \,. \end{aligned} $$
    (5.226)

    Contracting both sides by \(P^{\mu }\) and \(Q^{\mu }\), and solving for the coefficients gives

    $$\displaystyle \begin{aligned} {} \begin{aligned} & c_1 = \frac{1}{(P\cdot Q)^2-S \, T} \left[ (P\cdot Q) \, F_3^{[D]}(P,Q)[k\cdot Q] - T \, F_3^{[D]}(P,Q)[k\cdot P] \right] \,, \\ & c_2 = \frac{1}{(P\cdot Q)^2-S \, T} \left[ (P\cdot Q) \, F_3^{[D]}(P,Q)[k\cdot P] - S \, F_3^{[D]}(P,Q)[k\cdot Q] \right] \,. \end{aligned} \end{aligned} $$
    (5.227)

    Next, we need to rewrite the integrals above in terms of scalar integrals. To this end, we express the scalar products \(k \cdot P\) and \(k \cdot Q\) in terms of inverse propagators \(D_i\), as

    $$\displaystyle \begin{aligned} k \cdot P = \frac{1}{2} \big(D_1-D_2 + \hat{S} \big) \,, \qquad \quad k \cdot Q = \frac{1}{2} \big( D_2 - D_3 + \hat{T} \big) \,, \end{aligned} $$
    (5.228)

    where

    $$\displaystyle \begin{aligned} D_1 = k^2 - m_1^2 \,, \quad D_2 = (k-P)^2 - m_2^2 \,, \quad D_3 = (k-P-Q)^2-m_3^2\,, \end{aligned} $$
    (5.229)

    and

    $$\displaystyle \begin{aligned} \hat{S} := S + m_1^2 - m_2^2 \,, \qquad \hat{T} := T + m_2^2 - m_3^2 + 2 \, (Q\cdot P) \,. \end{aligned} $$
    (5.230)

    We thus have that

    $$\displaystyle \begin{aligned} F_3^{[D]} (P,Q)[k \cdot Q] & = \frac{1}{2} \left[ \int_k \frac{1}{D_1 \, D_2} - \int_k \frac{1}{D_1 \, D_3} - \int_k \frac{\hat{T}}{D_1 \, D_2 \, D_3} \right] \\ & = \frac{1}{2} \left[ F_2^{[D]}(P) - F_2^{[D]}(P+Q) + \hat{T} \, F_3^{[D]}(P,Q) \right] \,, \end{aligned} $$
    (5.231)

    while \(F_3^{[D]} (P,Q)[k \cdot P] \) contains no P-channel bubble. We recall that \(F_n^{[D]}(\cdots ) \equiv F_n^{[D]}(\cdots )[1]\). Putting the above together gives

    $$\displaystyle \begin{aligned} F_3^{[D]}(P,Q)[k\cdot Z] & = c_1 \, (P\cdot Z) + c_2 \, (Q\cdot Z) \\ & = \frac{1}{2} \frac{(P\cdot Q) (P\cdot Z) - (Q\cdot Z) \, S}{(P\cdot Q)^2-S \, T} \, F_2^{[D]}(P) + \ldots \,, \end{aligned} $$
    (5.232)

    where the ellipsis denotes terms which do not involve P-channel bubbles. Finally, we can read off that the coefficient of the P-channel scalar bubble integral is given by

    $$\displaystyle \begin{aligned} {} c_{0;P|QR} = \frac{(P\cdot Q) (P\cdot Z) - (Q\cdot Z) \, S}{2\big( (P\cdot Q)^2-S \, T \big) } \,, \end{aligned} $$
    (5.233)

    as claimed.

  2. (b)

    We outline here the main steps of the solution, while the computations are performed in the Mathematica notebook Ex3.9_DirectExtraction.wl [1]. Since we are considering a triangle integral, all quadruple cuts vanish. The coefficients of the box numerator are thus zero. We parametrise the triangle numerator \(\mathnormal {\varDelta }_{P|Q|R}\) as in Eq. (3.151),

    $$\displaystyle \begin{aligned} \begin{aligned} & \mathnormal{\varDelta}_{P|Q|R}(k\cdot \omega_{1,{\mathrm{tri}}}, k\cdot \omega_{2,{\mathrm{tri}}}) = c_{0;P|Q|R} + c_{1;P|Q|R} \, (k\cdot \omega_{1,{\mathrm{tri}}}) \\ & \quad + c_{2;P|Q|R} \, (k\cdot \omega_{2,{\mathrm{tri}}}) \\ & \quad + c_{3;P|Q|R} \left( (k\cdot \omega_{1,{\mathrm{tri}}})^2 - \frac{\omega_{1,{\mathrm{tri}}}^2}{\omega_{2,{\mathrm{tri}}}^2} (k\cdot \omega_{2,{\mathrm{tri}}})^2 \right) \\ & \quad + c_{4;P|Q|R} \, (k\cdot \omega_{1,{\mathrm{tri}}}) (k\cdot \omega_{2,{\mathrm{tri}}}) \\ & \quad + c_{5;P|Q|R} \, (k\cdot \omega_{1,{\mathrm{tri}}})^3 + c_{6;P|Q|R} \, (k\cdot \omega_{1,{\mathrm{tri}}})^2 (k\cdot \omega_{2,{\mathrm{tri}}}) \,, \end{aligned} \end{aligned} $$
    (5.234)

    with the spurious vectors as in Eq. (3.220),

    $$\displaystyle \begin{aligned} \omega_{1,{\mathrm{tri}}}^\mu &= \frac{1}{2}\langle \check{P} | \gamma^\mu | \check{Q} ] \, \mathnormal{\varPhi}_{\mathrm{tri}} + \frac{1}{2}\langle \check{Q} | \gamma^\mu | \check{P} ] \, \mathnormal{\varPhi}_{\mathrm{tri}}^{-1}\,, \end{aligned} $$
    (5.235)
    $$\displaystyle \begin{aligned} \omega_{2,{\mathrm{tri}}}^\mu &= \frac{1}{2}\langle \check{P} | \gamma^\mu | \check{Q} ] \, \mathnormal{\varPhi}_{\mathrm{tri}} - \frac{1}{2}\langle \check{Q} | \gamma^\mu | \check{P} ] \, \mathnormal{\varPhi}_{\mathrm{tri}}^{-1} \,. \end{aligned} $$
    (5.236)

    Here, \(\mathnormal {\varPhi }_{\mathrm {tri}}\) is an arbitrary factor which makes the summands phase-free. For instance, we may choose \(\mathnormal {\varPhi }_{\mathrm {tri}}=\langle \check {Q} | Z | \check {P} ]\), but its explicit expression is irrelevant, as it cancels out in the result. Moreover, we have the light-like projections

    $$\displaystyle \begin{aligned} {} \check{P}^{\mu} = \frac{\gamma \, \left(\gamma P^{\mu} - S \, Q^{\mu} \right)}{\gamma^2 - S \, T} \,, \qquad \quad \check{Q}^{\mu} = \frac{\gamma \, \left(\gamma Q^{\mu} - T \, P^{\mu}\right) }{\gamma^2 - S \, T} \,, \end{aligned} $$
    (5.237)

    where \(\gamma \) takes one of the two values \(\gamma _{\pm } = (P\cdot Q) \pm \sqrt { (P\cdot Q)^2 -S \, T}\). We parametrise the loop momentum on the triple cut \(P|Q|R\) (\(D_1 = D_2 = D_3 = 0\)) in terms of t as discussed in Sect. 3.5:

    $$\displaystyle \begin{aligned} {} \mathcal{C}_{P|Q|R}\left(k^{\mu}\right) = \beta_1 \, P^{\mu}+ \beta_2 \, Q^{\mu} + \frac{1}{2} \left(t+\frac{\gamma_{\mathrm{tri}}}{t}\right) \, \omega_{1, {\mathrm{tri}}}^{\mu} + \frac{1}{2} \left(t-\frac{\gamma_{\mathrm{tri}}}{t}\right) \, \omega_{2, {\mathrm{tri}}}^{\mu} \,, \end{aligned} $$
    (5.238)

    with

    $$\displaystyle \begin{aligned} \beta_1 = \frac{\hat{S} \, T- \hat{T} \, (P\cdot Q)}{2 \, \big(S \, T- (P \cdot Q)^2 \big)} \,, \qquad \quad \beta_2 = \frac{S \, \hat{T}- \hat{S} \, (P\cdot Q)}{2 \, \big(S \, T- (P \cdot Q)^2 \big)} \,. \end{aligned} $$
    (5.239)

    We use \(\gamma _{\mathrm {tri}}\) in Eq. (5.238) to distinguish it from the \(\gamma \) used in Eq. (5.237). Its value is fixed by the constraint \(D_1 = 0\), and we omit it here for conciseness. We now determine the triangle coefficients \(c_{i;P|Q|R}\) by solving

    $$\displaystyle \begin{aligned} {} \mathcal{C}_{P|Q|R}\left( \mathnormal{\varDelta}_{P|Q|R}(k\cdot \omega_{1,{\mathrm{tri}}}, k\cdot \omega_{2,{\mathrm{tri}}}) \right) = \mathcal{C}_{P|Q|R} \left(k \cdot Z \right) \,. \end{aligned} $$
    (5.240)

    We recall that the box subtraction terms are zero in this case. In Sect. 3.5 we have seen how to extract directly \(c_{0;P|Q|R}\) using the operation “\({\mathrm {Inf}}\)” (see Eq. (3.208)). Here however we need all triangle coefficients. The two sides of Eq. (5.240) are Laurent polynomials in t, with the loop-momentum parametrisation (5.238). As the equation holds for any value of t, we may solve it separately order by order in t. This gives enough constraints to fix all triangle coefficients. We find

    $$\displaystyle \begin{aligned} c_{1;P|Q|R} = - \frac{Z\cdot \omega_{1,{\mathrm{tri}}}}{2 \, \check{P}\cdot \check{Q}} \,, \qquad \quad c_{2;P|Q|R} = \frac{Z\cdot \omega_{2,{\mathrm{tri}}}}{2 \, \check{P}\cdot \check{Q}} \,. \end{aligned} $$
    (5.241)

    The coefficient of the scalar triangle integral, \(c_{0;P|Q|R}\), will not contribute to the bubble coefficient, and we thus omit it here. The higher-rank coefficients \(c_{i;P|Q|R}\) with \(i=3,\ldots ,6\) all vanish, as we could have guessed from the start by noticing that the example integral we are studying has a rank-one numerator.
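
Before moving on, one can verify symbolically that the projections (5.237) are indeed light-like when \(\gamma =\gamma _{\pm }\). A minimal Wolfram Language sketch (ours), working in the two-dimensional basis \((P,Q)\) with the abstract invariants \(S=P^2\), \(T=Q^2\) and pq \(=P\cdot Q\):

gram = {{S, pq}, {pq, T}};                    (* Gram matrix of (P, Q) *)
sq[v_] := v.gram.v;                           (* square of a vector given by its (P, Q) components *)
Pcheck = ga {ga, -S}/(ga^2 - S T);            (* components of P-check, Eq. (5.237) *)
Qcheck = ga {-T, ga}/(ga^2 - S T);            (* components of Q-check, Eq. (5.237) *)
Simplify[{sq[Pcheck], sq[Qcheck]} /. ga -> pq + Sqrt[pq^2 - S T]]   (* -> {0, 0} *)
Simplify[{sq[Pcheck], sq[Qcheck]} /. ga -> pq - Sqrt[pq^2 - S T]]   (* -> {0, 0} *)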

We can now move on to the bubble coefficients. We parametrise the bubble numerator as in Eq. (3.161),

$$\displaystyle \begin{aligned} \begin{aligned} & \mathnormal{\varDelta}_{P|QR}(k\cdot\omega_{1,{\mathrm{bub}}}, k\cdot\omega_{2,{\mathrm{bub}}}, k\cdot\omega_{3,{\mathrm{bub}}}) = c_{0;P|QR} \\ & \quad + c_{1;P|QR} (k\cdot\omega_{1,{\mathrm{bub}}}) + c_{2;P|QR} (k\cdot\omega_{2,{\mathrm{bub}}}) + c_{3;P|QR} (k\cdot\omega_{3,{\mathrm{bub}}}) \\ & \quad + c_{4;P|QR} (k\cdot\omega_{1,{\mathrm{bub}}}) (k\cdot\omega_{2,{\mathrm{bub}}}) + c_{5;P|QR} (k\cdot\omega_{1,{\mathrm{bub}}}) (k\cdot\omega_{3,{\mathrm{bub}}}) \\ & \quad + c_{6;P|QR} (k\cdot\omega_{2,{\mathrm{bub}}}) (k\cdot\omega_{3,{\mathrm{bub}}})\\ & \quad + c_{7;P|QR} \left((k\cdot\omega_{1,{\mathrm{bub}}})^2 - \frac{\omega_{1,{\mathrm{bub}}}^2}{\omega_{3,{\mathrm{bub}}}^2} (k\cdot\omega_{3,{\mathrm{bub}}})^2 \right) \\ & \quad + c_{8;P|QR} \left((k\cdot\omega_{2,{\mathrm{bub}}})^2 - \frac{\omega_{2,{\mathrm{bub}}}^2}{\omega_{3,{\mathrm{bub}}}^2} (k\cdot\omega_{3,{\mathrm{bub}}})^2 \right) \,, \end{aligned} \end{aligned} $$
(5.242)

with the spurious vectors as in Eq. (3.212),

$$\displaystyle \begin{aligned} \omega_{1,{\mathrm{bub}}}^\mu &= \frac{1}{2}\langle P^\flat | \gamma^\mu | n ] \, \mathnormal{\varPhi}_{\mathrm{bub}} + \frac{1}{2}\langle n | \gamma^\mu | P^\flat ] \, \mathnormal{\varPhi}_{\mathrm{bub}}^{-1} \,, \end{aligned} $$
(5.243)
$$\displaystyle \begin{aligned} \omega_{2,{\mathrm{bub}}}^\mu &= \frac{1}{2}\langle P^\flat | \gamma^\mu | n ] \, \mathnormal{\varPhi}_{\mathrm{bub}} - \frac{1}{2}\langle n | \gamma^\mu | P^\flat ] \, \mathnormal{\varPhi}_{\mathrm{bub}}^{-1} \,, \end{aligned} $$
(5.244)
$$\displaystyle \begin{aligned} \omega_{3,{\mathrm{bub}}}^\mu &= P^{\flat,\mu} - \frac{S}{2 \, P\cdot n} \, n^\mu \,, \end{aligned} $$
(5.245)

where \(n^{\mu }\) is an arbitrary light-like momentum, and

$$\displaystyle \begin{aligned} P^{\flat, \, \mu} = P^{\mu} - \frac{S}{2 \, P\cdot n} \, n^{\mu} \,. \end{aligned} $$
(5.246)

We may choose the phase factor e.g. as \(\mathnormal {\varPhi }_{\mathrm {bub}} = \langle n | Z | P^\flat ]\) but—just like \(\mathnormal {\varPhi }_{\mathrm {tri}}\)—this will not appear in the result. We parametrise the loop momentum on the double cut \(P|QR\) (\(D_1 = D_2 = 0\)) in terms of t and y as in Eq. (3.210):

$$\displaystyle \begin{aligned} {} \mathcal{C}_{P|QR}\left(k^{\mu}\right) = \alpha_1 \, P^{\flat, \, \mu}+ \alpha_2 \, n^{\mu} + \alpha_3 \, \frac{1}{2} \, \langle P^{\flat} | \gamma^{\mu} | n ] \, \mathnormal{\varPhi}_{\mathrm{bub}} + \alpha_4 \, \frac{1}{2} \langle n | \gamma^{\mu} | P^{\flat} ] \, \mathnormal{\varPhi}_{\mathrm{bub}}^{-1} \,, \end{aligned} $$
(5.247)

with

$$\displaystyle \begin{aligned} \alpha_1 = y \,, \qquad \alpha_2 = \frac{\hat{S}- S \, y}{2 \, n \cdot P} \,, \qquad \alpha_3 = t \,, \qquad \alpha_4 = \frac{y \, (\hat{S} - S \, y) - m_1^2}{2 \, t \, (n\cdot P)} \,. \end{aligned} $$
(5.248)

The bubble coefficients \(c_{i;P|QR}\) are fixed through Eq. (3.163) with the box subtraction term set to zero:

$$\displaystyle \begin{aligned} \mathcal{C}_{P|QR} \big( \mathnormal{\varDelta}_{P|QR}(\{k\cdot\omega_{i,{\mathrm{bub}}}\}) \big) = \mathcal{C}_{P|QR} \left( - \frac{k\cdot Z}{D_3} + \frac{\mathnormal{\varDelta}_{P|Q|R}( \{k\cdot\omega_{i,{\mathrm{tri}}}\} )}{D_3} \right) \,. \end{aligned} $$
(5.249)

We can extract the coefficient \(c_{0;P|QR}\) directly using Eq. (3.226), as

$$\displaystyle \begin{aligned} {} c_{0;P|QR} \!=\! \mathcal{P} \mathrm{Inf}_y \mathrm{Inf}_t \left[ \mathcal{C}_{P|QR}\left( - \frac{k\cdot Z}{D_3} \right) \!-\! \frac{1}{2} \sum_{\gamma=\gamma_{\pm}} \mathcal{C}_{P|QR}\left( \frac{\mathnormal{\varDelta}_{P|Q|R}( \{k\cdot\omega_{i,{\mathrm{tri}}}\} )}{-D_3} \right) \right], \end{aligned} $$
(5.250)

with the operator \(\mathcal {P}\) defined in Eq. (3.223),

$$\displaystyle \begin{aligned} \mathcal{P}\big(f(y,t) \big) = f \big|{}_{t^0,y^0} + \frac{\hat{S}}{2 \, S}f \big|{}_{t^0,y^1}+ \frac{1}{3}\left( \frac{\hat{S}^2}{S^2} - \frac{m_1^2}{S} \right)f\big|{}_{t^0,y^2} \,, \end{aligned} $$
(5.251)

while \(\mathrm {Inf}_x\) expands a rational function around \(x =\infty \) and keeps only the terms that do not vanish in the limit (see Eq. (3.205)). We obtain

$$\displaystyle \begin{aligned} {} \mathcal{P} \mathrm{Inf}_y \mathrm{Inf}_t \left[ \mathcal{C}_{P|QR}\left( - \frac{k\cdot Z}{D_3} \right) \right] = \frac{1}{2} \, \frac{\langle P^{\flat} | Z | n ]}{\langle P^{\flat} | Q | n ]} \,, \end{aligned} $$
(5.252)

and

$$\displaystyle \begin{aligned} {} \mathcal{P} \mathrm{Inf}_y \mathrm{Inf}_t \left[ \mathcal{C}_{P|QR}\left( \frac{\mathnormal{\varDelta}_{P|Q|R}( \{k\cdot\omega_{i,{\mathrm{tri}}}\} )}{ - D_3} \right) \right] = - \frac{\langle P^{\flat}|\check{P} Z \check{Q}|n] + \langle P^{\flat}|\check{Q} Z \check{P}|n] }{4 \, (\check{P}\cdot \check{Q}) \, \langle P^{\flat} | Q | n ] } \,. \end{aligned} $$
(5.253)

We may simplify the RHS of Eq. (5.253) by rewriting

$$\displaystyle \begin{aligned} \langle P^{\flat}|\check{P} Z \check{Q}|n] & = \langle \check{Q} | Z | \check{P} ] \, \langle \check{P} P^{\flat} \rangle [n\check{Q}] \\ & = \mathrm{Tr}\left(\sigma^{\mu} \bar{\sigma}^{\nu} \sigma^{\rho} \bar{\sigma}^{\tau} \right) \, Z_{\mu} \check{P}_{\nu} V_{\rho} \check{Q}_{\tau} \,, \end{aligned} $$
(5.254)

where we introduced the short-hand \( V^{\mu } = \langle P^{\flat } | \gamma ^{\mu } | n ]/2\). Using the identity (1.29) we can trade the \(\sigma \)-matrix trace for a \(\gamma \)-matrix trace, obtaining

(5.255)

The trace with \(\gamma _5\) is fully anti-symmetric in the four momenta, and thus cancels out between \(\langle P^{\flat }|\check {P} Z \check {Q}|n]\) and \(\langle P^{\flat }|\check {Q} Z \check {P}|n]\). The remaining traces can be expressed in terms of scalar products using the familiar rule for the Dirac matrices (see Eq. (5.61)). We then obtain

(5.256)

Averaging over the two solutions \(\gamma _{\pm }\) then gives

(5.257)

Substituting this and Eq. (5.252) into Eq. (5.250) finally gives

$$\displaystyle \begin{aligned} c_{0;P|QR} = \frac{(P\cdot Q) (P\cdot Z) - S \, (Q\cdot Z)}{2 \, \big( (P\cdot Q)^2 - S\,T \big)} \,, \end{aligned} $$
(5.258)

in agreement with the result of the Passarino-Veltman reduction given in Eq. (5.233).

FormalPara Exercise 3.10: Momentum-Twistor Parametrisations

The matrix Z in Eq. (3.258) has the form

$$\displaystyle \begin{aligned} Z = \left( Z_1 \, Z_2 \, Z_3 \, Z_4 \right) \,, \quad \text{with} \quad Z_i = \begin{pmatrix} \lambda_{i\alpha} \\ \mu_i^{\dot{\alpha}} \end{pmatrix} \,. \end{aligned} $$
(5.259)

We can thus read off \(\lambda _{i\alpha }\) and compute all \(\langle i j \rangle \) through \(\langle i j \rangle = - \lambda _{i \alpha } \epsilon ^{\alpha \beta } \lambda _{j \beta }\). Our conventions for \(\epsilon ^{\alpha \beta }\) are given in Exercise 1.1. For instance, we have that

$$\displaystyle \begin{aligned} \langle 1 2 \rangle = - ( 1 \ 0 ) \cdot \begin{pmatrix} 0 & -1 \\ 1 & 0 \\ \end{pmatrix} \cdot \begin{pmatrix} 0 \\ 1 \\ \end{pmatrix} = 1 \,. \end{aligned} $$
(5.260)

Repeating this for all \(\langle i j \rangle \) we get

$$\displaystyle \begin{aligned} {} \langle 1 2 \rangle = \langle 1 3 \rangle = \langle 1 4 \rangle = 1 \,, \quad \langle 2 3 \rangle = - \frac{1}{y} \,, \quad \langle 2 4 \rangle = y \,, \quad \langle 3 4 \rangle = \frac{1+y^2}{y} \,. \end{aligned} $$
(5.261)

From these we can see explicitly that the helicity information is obscured, as some \(\langle i j \rangle \) are set to constants. Next, we compute the \(\tilde {\lambda }_i\) through Eq. (3.257). E.g. we have

$$\displaystyle \begin{aligned} {} \tilde{\lambda}_1^{\dot{\alpha}} = \frac{\langle 1 2 \rangle \, \mu_4^{\dot{\alpha}} + \langle 2 4 \rangle \, \mu_1^{\dot{\alpha}} + \langle 4 1 \rangle \, \mu_2^{\dot{\alpha}}}{\langle 4 1 \rangle \langle 1 2 \rangle} \,. \end{aligned} $$
(5.262)

From Eq. (3.258) we read off \(\mu _1^{\dot \alpha } = \mu _2^{\dot \alpha } = (0,0)^{\top }\) and \(\mu _4^{\dot \alpha } = (0, x)^{\top }\). Substituting this and Eq. (5.261) into Eq. (5.262) gives \(\tilde {\lambda }_1^{\dot {\alpha }} = -\mu _4^{\dot {\alpha }} \). The other \(\tilde {\lambda }_i\) are obtained similarly:

$$\displaystyle \begin{aligned} {} \tilde{\lambda}_1^{\dot{\alpha}} = \begin{pmatrix} 0\\ -x \end{pmatrix} \,, \quad \tilde{\lambda}_2^{\dot{\alpha}} = \begin{pmatrix} x \\ 0 \end{pmatrix} \,, \quad \tilde{\lambda}_3^{\dot{\alpha}} = \frac{x \, y}{1+y^2} \begin{pmatrix} -y \\ 1 \end{pmatrix} \,, \quad \tilde{\lambda}_4^{\dot{\alpha}} = \frac{-x}{1+y^2} \begin{pmatrix} 1\\ y \end{pmatrix} \,. \end{aligned} $$
(5.263)

These allow us to determine all \([ i j ]\) through \([ i j ] = - \tilde {\lambda }_i^{\dot \alpha } \epsilon _{\dot \alpha \dot \beta } \tilde {\lambda }_j^{\dot \beta }\),

$$\displaystyle \begin{aligned} {} \begin{aligned}{}[ 1 2 ] &= -x^2 \,, & [ 1 3 ] &=\frac{x^2 y^2}{1+y^2} \,, & [ 1 4 ]&=\frac{x^2}{1+y^2} \,, \\ {} [ 2 3 ] &= - \frac{x^2 y}{1+y^2} \,, & [ 2 4 ] &=\frac{x^2 y}{1+y^2}\,, & [ 3 4 ] & = -\frac{x^2 y}{1+y^2} \,. \end{aligned} \end{aligned} $$
(5.264)

We calculate \(s_{ij}\) from \(\langle i j \rangle \) and \([ i j ]\) through \(s_{ij} = \langle i j \rangle [ j i ]\). Thanks to momentum conservation, only two are independent. We choose

$$\displaystyle \begin{aligned} {} s_{12} = x^2 \,, \qquad s_{23} = - \frac{x^2}{1+y^2} \,. \end{aligned} $$
(5.265)

The others are determined from these as \(s_{13}=s_{24}=-s_{12}-s_{23}\), \(s_{14}=s_{23}\), and \(s_{34}=s_{12}\). We obtain the momenta \(p_i^{\mu }\) from \(\lambda _{i\alpha }\) and \(\tilde {\lambda }_i^{\dot \alpha }\) through \(p_i^{\mu } = - \lambda _{i\alpha } \epsilon ^{\alpha \beta } (\sigma ^{\mu })_{\beta \dot \beta } \tilde {\lambda }_i^{\dot \beta }\). See Exercise 1.1 for our conventions on \(\sigma ^{\mu }\). We obtain

$$\displaystyle \begin{aligned} {} p_1^{\mu} = \frac{x}{2} \begin{pmatrix} -1 \\ 0 \\ 0 \\ - 1\end{pmatrix} \,, \qquad p_2^{\mu} = \frac{x}{2} \begin{pmatrix} - 1 \\ 0 \\ 0 \\ 1 \end{pmatrix} \,, \qquad p_3^{\mu} = \frac{x}{2} \begin{pmatrix} 1 \\ \frac{2 y}{1 + y^2} \\ 0 \\ \frac{1-y^2}{1+y^2} \end{pmatrix}\,, \end{aligned} $$
(5.266)

and \(p_4 = -p_1-p_2-p_3\). This parametrisation describes two incoming particles with momenta \(-p_1\) and \(-p_2\) travelling along the z axis with energy \(E = x/2\) in their centre-of-mass frame. The outgoing particles, with momenta \(p_3\) and \(p_4\), lie in the xz-plane. The angle \(\theta \) between the three-momentum \({\mathbf {p}}_3\) and the z axis is related to y through \(y = \tan {}(\theta /2)\).

Let us consider the tree-level four-gluon amplitude \(A^{(0)}_4(1^-,2^+,3^-, 4^+)\). Using the Parke-Taylor formulae (1.192) and (1.193) (with \(g=1\)) it may be written either as

$$\displaystyle \begin{aligned} A^{(0)}_{4, \mathrm{MHV}}(1^-,2^+,3^-,4^+) = {\mathrm{i}} \, \frac{\langle 13 \rangle^4 }{\langle 1 2 \rangle \langle 2 3 \rangle \langle 3 4 \rangle \langle 4 1 \rangle} \,, \end{aligned} $$
(5.267)

or as

$$\displaystyle \begin{aligned} A^{(0)}_{4, \overline{\mathrm{MHV}}}(1^-,2^+,3^-,4^+) = {\mathrm{i}} \, \frac{[ 24]^4 }{[12] [23] [34] [41]} \,. \end{aligned} $$
(5.268)

Using the momentum-twistor parametrisation in Eqs. (5.261) and (5.264) it is straightforward to see that both expressions evaluate to

$$\displaystyle \begin{aligned} A^{(0)}_4(1^-,2^+,3^-,4^+) = {\mathrm{i}} \, \frac{y^2}{1+y^2} \,. \end{aligned} $$
(5.269)
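
As a quick cross-check (a sketch, not needed for the argument), one may verify Eq. (5.269) symbolically by plugging the brackets of Eqs. (5.261) and (5.264) into both Parke-Taylor forms, e.g. with SymPy; the helper function below is ours:

```python
import sympy as sp

# Check that Eqs. (5.267) and (5.268) both reduce to i*y^2/(1+y^2), Eq. (5.269).
x, y = sp.symbols('x y', positive=True)

ang = {(1, 2): 1, (1, 3): 1, (1, 4): 1,
       (2, 3): -1/y, (2, 4): y, (3, 4): (1 + y**2)/y}
sqr = {(1, 2): -x**2, (1, 3): x**2*y**2/(1 + y**2), (1, 4): x**2/(1 + y**2),
       (2, 3): -x**2*y/(1 + y**2), (2, 4): x**2*y/(1 + y**2),
       (3, 4): -x**2*y/(1 + y**2)}

def br(d, i, j):          # antisymmetric bracket lookup
    return d[(i, j)] if (i, j) in d else -d[(j, i)]

A_MHV  = sp.I*br(ang, 1, 3)**4/(br(ang, 1, 2)*br(ang, 2, 3)*br(ang, 3, 4)*br(ang, 4, 1))
A_MHVb = sp.I*br(sqr, 2, 4)**4/(br(sqr, 1, 2)*br(sqr, 2, 3)*br(sqr, 3, 4)*br(sqr, 4, 1))

print(sp.simplify(A_MHV - sp.I*y**2/(1 + y**2)),
      sp.simplify(A_MHVb - sp.I*y**2/(1 + y**2)))   # -> 0 0
```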

Showing this with the spinor-helicity formalism alone requires some gymnastics with momentum conservation. For instance, we may proceed as

$$\displaystyle \begin{aligned} \frac{A^{(0)}_{4, \overline{\mathrm{MHV}}}(1^-,2^+,3^-,4^+)}{A^{(0)}_{4, \mathrm{MHV}}(1^-,2^+,3^-,4^+)} & = \frac{[24]^4 \langle 1 2 \rangle \langle 2 3 \rangle \langle 3 4 \rangle \langle 4 1 \rangle}{\langle 13 \rangle^4 [12] [23] [34] [41] } \\ & = \frac{\overbrace{\langle 1 2 \rangle [24]}^{- \langle 1 3 \rangle [34]} \, \overbrace{\langle 3 2 \rangle [24]}^{- \langle 3 1 \rangle [14]} \, \overbrace{\langle 3 4 \rangle [42]}^{-\langle 3 1 \rangle [12]} \, \overbrace{[24] \langle 4 1 \rangle}^{-[23] \langle 3 1 \rangle}}{\langle 13 \rangle^4 [12] [23] [34] [41] } \\ & = 1 \,. \end{aligned} $$
(5.270)

The proof for the adjacent MHV configuration \(A^{(0)}_4(1^-,2^-,3^+, 4^+)\) is analogous.

FormalPara Exercise 4.1: The Massless Bubble Integral
  1. (a)

    Applying the Feynman trick (4.12) to the bubble integral (4.15) gives

    $$\displaystyle \begin{aligned} F_2 = \frac{\mathnormal{\varGamma}(a_1+a_2)}{\mathnormal{\varGamma}(a_1) \mathnormal{\varGamma}(a_2)} \int \frac{{\mathrm{d}}\alpha_1 {\mathrm{d}}\alpha_2}{\text{GL}(1)} \int \frac{{\mathrm{d}}^D k}{{\mathrm{i}} \pi^{\frac{D}{2}}} \frac{\alpha_1^{a_1-1} \alpha_2^{a_2-1} (\alpha_1+\alpha_2)^{-a_1-a_2}}{ \left[-M^2 - {\mathrm{i}} 0 \right]^{a_1+a_2}} \,, \end{aligned} $$
    (5.271)

    where

    $$\displaystyle \begin{aligned} M^2 = k^2 - \frac{2 \, \alpha_2}{\alpha_1+\alpha_2} \, p \cdot k + \frac{\alpha_2}{\alpha_1+\alpha_2} p^2 \,. \end{aligned} $$
    (5.272)

    We complete the square in \(M^2\),

    $$\displaystyle \begin{aligned} M^2 = \bigg( k - \frac{\alpha_2}{\alpha_1+\alpha_2} p \bigg)^2 + p^2 \frac{\alpha_1 \alpha_2}{(\alpha_1+\alpha_2)^2} \,, \end{aligned} $$
    (5.273)

    and shift the loop momentum as \(k \to k-\alpha _2/(\alpha _1+\alpha _2) p\). This gives

    $$\displaystyle \begin{aligned} F_2 = \frac{\mathnormal{\varGamma}(a_1+a_2)}{\mathnormal{\varGamma}(a_1) \mathnormal{\varGamma}(a_2)} \int \frac{{\mathrm{d}}\alpha_1 {\mathrm{d}}\alpha_2}{\text{GL}(1)} \int \frac{{\mathrm{d}}^D k}{{\mathrm{i}} \pi^{\frac{D}{2}}} \frac{\alpha_1^{a_1-1} \alpha_2^{a_2-1} (\alpha_1+\alpha_2)^{-a_1-a_2}}{ \left[-k^2 - \frac{\alpha_1\alpha_2}{(\alpha_1+\alpha_2)^2} p^2 - {\mathrm{i}} 0 \right]^{a_1+a_2}} \,. \end{aligned} $$
    (5.274)

    We can now carry out the integration in k using the formula (4.6), obtaining

    $$\displaystyle \begin{aligned} {} F_2 = \frac{\mathnormal{\varGamma}\left(a_1+a_2-\frac{D}{2}\right)}{\mathnormal{\varGamma}(a_1) \mathnormal{\varGamma}(a_2)} \int \frac{{\mathrm{d}}\alpha_1 {\mathrm{d}}\alpha_2}{\text{GL}(1)} \alpha_1^{a_1-1} \alpha_2^{a_2-1} \frac{(\alpha_1+\alpha_2)^{a_1+a_2-D}}{\left( - \alpha_1 \alpha_2 p^2 - {\mathrm{i}} 0\right)^{a_1+a_2-\frac{D}{2}}} \,. \end{aligned} $$
    (5.275)

    This formula is the Feynman parametrisation for the massless bubble integral. It matches the one-loop master formula (4.14), with \(U=\alpha _1+\alpha _2\) and \(V=-\alpha _1 \alpha _2 p^2\).

  2. (b)

    We use the \(\text{GL}(1)\) invariance to fix \(\alpha _1+\alpha _2=1\), namely we insert \(\delta (\alpha _1+\alpha _2-1)\) under the integral sign in Eq. (5.275), and we absorb the \({\mathrm {i}} 0\) prescription into a small positive imaginary part of \(p^2\). We can carry out the remaining integration in terms of Gamma functions, obtaining

    $$\displaystyle \begin{aligned} F_2 & \!=\! \big(-p^2-{\mathrm{i}} 0\big)^{\frac{D}{2}-a_1-a_2} \frac{\mathnormal{\varGamma}\left(a_1+a_2-\frac{D}{2}\right)}{\mathnormal{\varGamma}(a_1) \mathnormal{\varGamma}(a_2)} \int_{0}^{1} {\mathrm{d}}\alpha_1 \, \alpha_1^{\frac{D}{2}-a_2-1} (1-\alpha_1)^{\frac{D}{2}-a_1-1} \\ & = \big(-p^2-{\mathrm{i}} 0\big)^{\frac{D}{2}-a_1-a_2} \frac{\mathnormal{\varGamma}\left(a_1+a_2-\frac{D}{2}\right) \mathnormal{\varGamma}\left(\frac{D}{2}-a_1\right) \mathnormal{\varGamma}\left(\frac{D}{2}-a_2\right) }{\mathnormal{\varGamma}(a_1) \mathnormal{\varGamma}(a_2) \mathnormal{\varGamma}(D - a_1 - a_2)} \,, \end{aligned} $$
    (5.276)

    as claimed.
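
The last step, the Beta-function integral in the first line of Eq. (5.276), is easy to check numerically; a small sketch with mpmath and sample values \(a_1=a_2=1\), \(D=4-2\epsilon \), \(\epsilon =0.1\) (our choice, purely illustrative) reads:

```python
import mpmath as mp

# Check the alpha_1 integral in Eq. (5.276) against its Gamma-function value.
eps = mp.mpf('0.1')
D, a1, a2 = 4 - 2*eps, 1, 1

integral = mp.quad(lambda a: a**(D/2 - a2 - 1)*(1 - a)**(D/2 - a1 - 1), [0, 1])
gammas = mp.gamma(D/2 - a1)*mp.gamma(D/2 - a2)/mp.gamma(D - a1 - a2)
print(integral, gammas)   # the two values agree
```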

FormalPara Exercise 4.2: Feynman Parametrisation

We draw the diagram of the triangle Feynman integral \(F_3\) (4.18) in Fig. 5.1, with both momentum-space and dual-space labelling. We assign the dual coordinate \(x_0\) to the region inside the loop, and relate the other dual coordinates to the external momenta according to Eq. (4.10):

$$\displaystyle \begin{aligned} p_1 = x_2 - x_1 \,, \quad \quad p_2 = x_3 - x_2 \,, \quad \quad p_3 = x_1 - x_3 \,. \end{aligned} $$
(5.277)

Note that momentum conservation (\(p_1+p_2+p_3 = 0\)) is automatically satisfied in terms of the dual coordinates. The loop momenta are then given by

$$\displaystyle \begin{aligned} {} k = x_0 - x_2 \,, \quad \quad k+p_1 = x_0 - x_1 \,, \quad \quad k-p_2 = x_0 - x_3 \,, \end{aligned} $$
(5.278)

so that the integral takes the form of Eq. (4.11),

$$\displaystyle \begin{aligned} {} F_3 = \int \frac{{\mathrm{d}}^D x_0}{{\mathrm{i}} \pi^{D/2}} \prod_{j=1}^3 \frac{1}{- x^2_{0j} - {\mathrm{i}} 0} \,. \end{aligned} $$
(5.279)

The relation between the loop momentum k and the dual variables in Eq. (5.278) differs from that given in Eq. (4.10), \(k=x_1-x_0\). We emphasise that this is just a convention, as we are free to redefine the loop integration variables. The dual regions, on the other hand, are invariant. This means that once we assign coordinates \(x_i\) to the dual regions, the integral takes the form of Eq. (5.279) (in agreement with the general formula (4.11)), regardless of the loop-momentum labelling we started from.

Fig. 5.1
Diagram of the triangle Feynman integral \(F_3\) defined in Eq. (4.18). We write next to each internal edge the corresponding momentum. The arrows denote the directions of the momenta. The edges of the graph divide the space into four regions, which we label by the dual coordinates \(x_i\)

The kinematic constraints \(p_1^2 = p_2^2 = 0\) and \(p_3^2 = s\) imply that

$$\displaystyle \begin{aligned} x^2_{12} = p_1^2 = 0 \,, \qquad x^2_{23} = p_2^2 = 0 \,, \qquad x^2_{13} = p_3^2 = s \,, \end{aligned} $$
(5.280)

as claimed. The Symanzik polynomials are given by

$$\displaystyle \begin{aligned} U = \alpha_1 + \alpha_2 + \alpha_3 \,, \qquad V = - s \, \alpha_1 \alpha_3 \,. \end{aligned} $$
(5.281)

Substituting the above into the one-loop master formula (4.14) with \(D=4-2 \epsilon \) gives the following Feynman parametrisation:

$$\displaystyle \begin{aligned} F_3 = \mathnormal{\varGamma}(1+\epsilon) \int_0^{\infty} \frac{{\mathrm{d}} \alpha_1 {\mathrm{d}} \alpha_2 {\mathrm{d}} \alpha_3}{\text{GL}(1)} \frac{1}{(\alpha_1+\alpha_2+\alpha_3)^{1-2\epsilon} (- s \, \alpha_1 \alpha_3 - {\mathrm{i}} 0)^{1+\epsilon} } \,. \end{aligned} $$
(5.282)
FormalPara Exercise 4.3: Taylor Series of the Log-Gamma Function
  1. (a)

    The recurrence relation of the digamma function follows from that of the \(\mathnormal {\varGamma }\) function (4.23). Differentiating the latter, and dividing both sides by \(x \mathnormal {\varGamma }(x)\) gives

    $$\displaystyle \begin{aligned} \frac{ \mathnormal{\varGamma}^{\prime}(x+1)}{\mathnormal{\varGamma}(x+1)} = \frac{1}{x} + \frac{\mathnormal{\varGamma}^{\prime}(x)}{\mathnormal{\varGamma}(x)} \,, \end{aligned} $$
    (5.283)

    where we have used \(x \mathnormal {\varGamma }(x) = \mathnormal {\varGamma }(x+1)\). Comparing to the definition of the digamma function in Eq. (4.29), we can rewrite this as

    $$\displaystyle \begin{aligned} {} \psi(x+1) = \frac{1}{x} + \psi(x) \,. \end{aligned} $$
    (5.284)

    We apply Eq. (5.284) recursively starting from \(\psi (x+n)\) with \(n\in \mathbb {N}\),

    $$\displaystyle \begin{aligned} \begin{aligned} \psi(x+n) & = \frac{1}{x+n-1} + \psi(x+n-1) \\ & = \frac{1}{x+n-1} + \frac{1}{x+n-2} + \psi(x+n-2) \\ & = \sum_{s=1}^n \frac{1}{x+n-s} + \psi(x) \,. \end{aligned} \end{aligned} $$
    (5.285)

    Changing the summation index to \(k=n-s\) in the last line gives Eq. (4.30).

  2. (b)

    Consider the difference \(\psi (x+n) - \psi (1+n)\). Using Eq. (4.30) we can rewrite it as

    $$\displaystyle \begin{aligned} {} \psi(x+n) - \psi(1+n) = \sum_{k=0}^{n-1} \left(\frac{1}{x+k} - \frac{1}{1+k}\right) + \psi(x) + \gamma_{\text{E}} \,, \end{aligned} $$
    (5.286)

    where we recall that \(\psi (1) = - \gamma _{\text{E}}\). In order to study the limit \(n \to \infty \) we use Stirling’s formula (4.32), which implies the following approximation for the digamma function at large argument,

    $$\displaystyle \begin{aligned} \psi(1+x) = \frac{1}{2 x} + \log(x) + \mathcal{O}\left(\frac{1}{x^2}\right) \,. \end{aligned} $$
    (5.287)

    It follows that

    $$\displaystyle \begin{aligned} \lim_{n\to \infty} \left[ \psi(x+n) - \psi(1+n) \right] = 0 \,. \end{aligned} $$
    (5.288)

    Taking the limit \(n\to \infty \) of both sides of Eq. (5.286) gives Eq. (4.31).

  3. (c)

    The series representation (4.31) of the digamma function allows us to compute the higher-order derivatives in the Taylor expansion (4.28),

    $$\displaystyle \begin{aligned} {} \begin{aligned} \frac{{\mathrm{d}}^n}{{\mathrm{d}} x^n} \log \mathnormal{\varGamma}(1+x) \bigg|{}_{x=0} & = \frac{{\mathrm{d}}^{n-1}}{{\mathrm{d}} x^{n-1}} \psi(1+x) \bigg|{}_{x=0} \\ & = (-1)^n \, (n-1)! \, \sum_{k=0}^{\infty} \frac{1}{(1+k)^{n}} \,, \end{aligned} \end{aligned} $$
    (5.289)

    for \(n\ge 2\). By changing the summation index to \(k^{\prime }=k+1\), we recognise in the last line the definition of the Riemann zeta constant \(\zeta _n\) given in Eq. (4.26). Substituting Eq. (5.289) into Eq. (4.28) and simplifying finally give Eq. (4.24).
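
A short SymPy sketch (ours) illustrates Eq. (5.289), here for \(n=4\):

```python
import sympy as sp

# Check Eq. (5.289) for n = 4: d^4/dx^4 log Gamma(1+x) at x=0 equals 3!*zeta(4).
x = sp.symbols('x')
n = 4
lhs = sp.diff(sp.loggamma(1 + x), x, n).subs(x, 0)
rhs = (-1)**n*sp.factorial(n - 1)*sp.zeta(n)
print(sp.simplify(lhs - rhs))   # -> 0
```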

FormalPara Exercise 4.4: Finite Two-Dimensional Bubble Integral

We start from the Feynman parametrisation in Eq. (4.34). We set \(D=2\), fix the \(\text{GL}(1)\) freedom such that \(\alpha _1+\alpha _2=1\), and absorb the \({\mathrm {i}} 0\) prescription in a small positive imaginary part of s. We obtain

$$\displaystyle \begin{aligned} F_2\big(s,m^2; D=2\big) = \int_0^1 \frac{{\mathrm{d}} \alpha_1}{-s \, \alpha_1 (1-\alpha_1 ) + m^2} \,. \end{aligned} $$
(5.290)

In order to carry out the integration, we factor the denominator and decompose the integrand into partial fractions w.r.t. \(\alpha _1\):

$$\displaystyle \begin{aligned} F_2\big(s,m^2; D=2\big) = \frac{1}{s \, (\alpha_1^+-\alpha_1^-)} \int_0^1 {\mathrm{d}} \alpha_1 \left( \frac{1}{\alpha_1 - \alpha_1^+} - \frac{1}{\alpha_1 - \alpha_1^-} \right) \,, \end{aligned} $$
(5.291)

where

$$\displaystyle \begin{aligned} \alpha_1^{\pm} = \frac{1}{2} \left( 1 \pm \sqrt{\mathnormal{\varDelta}} \right) \,, \qquad \mathnormal{\varDelta} = 1 - 4 \, \frac{m^2}{s} \,. \end{aligned} $$
(5.292)

For \(s<0\) and \(m^2>0\), we have that \(\alpha _1^+ >1\) and \(\alpha _1^-<0\). The integration then yields

$$\displaystyle \begin{aligned} {} F_2\big(s,m^2;D=2\big) = \frac{1}{s \, \sqrt{\mathnormal{\varDelta}}} \log \left[ \frac{\alpha_1^- (\alpha_1^+-1)}{\alpha_1^+ (\alpha_1^--1)} \right] \,. \end{aligned} $$
(5.293)

We may simplify the expression by changing variables to s and x through

$$\displaystyle \begin{aligned} {} m^2 = -s \frac{x}{(1-x)^2} \,, \end{aligned} $$
(5.294)

with \(0<x<1\). The discriminant \(\mathnormal {\varDelta }\) in fact becomes a perfect square, and \(\sqrt {\mathnormal {\varDelta }}\) a rational function. The choice of the branch of the square root is arbitrary. We choose

$$\displaystyle \begin{aligned} {} \sqrt{\mathnormal{\varDelta}} = \frac{1+x}{1-x} \,, \end{aligned} $$
(5.295)

which is positive for \(0<x<1\). Equation (5.293) then simplifies to

$$\displaystyle \begin{aligned} {} F_2\big(s,m^2;D=2\big) = \frac{2}{s} \frac{1-x}{1+x} \log(x) \,. \end{aligned} $$
(5.296)

Equation (5.296) is very simple, but hides a symmetry property. We said above that the choice of the branch of \(\sqrt {\mathnormal {\varDelta }}\) is arbitrary. In other words, \(F_2\) must be invariant under \(\sqrt {\mathnormal {\varDelta }} \to - \sqrt {\mathnormal {\varDelta }}\). Let us work out how x changes under this transformation. Solving Eq. (5.294) for x gives two solutions. We choose the one such that \(0<x<1\) for \(s<0\) and \(m^2>0\), which is compatible with Eq. (5.295):

$$\displaystyle \begin{aligned} x = 1 - \frac{1}{2} \frac{s}{m^2} \left(1-\sqrt{\mathnormal{\varDelta}}\right) \,. \end{aligned} $$
(5.297)

One may then verify that \(1/x = x \big |{ }_{\sqrt {\mathnormal {\varDelta }} \to - \sqrt {\mathnormal {\varDelta }}} \). Therefore, when changing the sign of \(\sqrt {\mathnormal {\varDelta }}\), both the logarithm in Eq. (5.296) and its coefficient gain a factor of \(-1\), so that \(F_2\) is indeed invariant. This property is very common in Feynman integrals involving square roots. A particularly convenient way to make it manifest is to rewrite the argument of the logarithm in the form

$$\displaystyle \begin{aligned} \log\left( \frac{\sqrt{\mathnormal{\varDelta}}-a}{\sqrt{\mathnormal{\varDelta}}+a} \right) \,, \end{aligned} $$
(5.298)

for some rational function a. In the case at hand, dimensional analysis tells us that a must be a constant. Indeed, one may verify that with \(a=1\) we recover \(\log (x)\). Our final expression for the two-dimensional bubble integral therefore is

$$\displaystyle \begin{aligned} F_2\big(s,m^2; D=2\big) = \frac{2}{s \, \sqrt{\mathnormal{\varDelta}}} \log \left( \frac{\sqrt{\mathnormal{\varDelta}}-1}{\sqrt{\mathnormal{\varDelta}}+1} \right) \,. \end{aligned} $$
(5.299)
FormalPara Exercise 4.5: Laurent Expansion of the Gamma Function
  1. (a)

    Since \(\mathnormal {\varGamma }(z)\) has a simple pole at \(z=0\), \(\mathnormal {\varGamma }(z+1) = z \mathnormal {\varGamma }(z)\) admits a Taylor expansion around \(z=0\),

    $$\displaystyle \begin{aligned} {} \mathnormal{\varGamma}(z+1) = 1 - z \, {\gamma_{\text{E}}} + \frac{z^2}{2} \mathnormal{\varGamma}^{\prime\prime}(1) + \mathcal{O}\big(z^3\big) \,. \end{aligned} $$
    (5.300)

    In order to evaluate the second derivative of \(\mathnormal {\varGamma }(z)\), we relate it to the digamma function through Eq. (4.29). Then we have that

    $$\displaystyle \begin{aligned} {} \mathnormal{\varGamma}^{\prime\prime}(1) = \psi^{\prime}(1) + {\gamma_{\text{E}}}^2 \,. \end{aligned} $$
    (5.301)

    Finally, we can evaluate \(\psi ^{\prime }(1)\) using the series representation of the digamma function (4.31), obtaining

    $$\displaystyle \begin{aligned} {} \psi^{\prime}(1) = \sum_{k=0}^{\infty} \frac{1}{(k+1)^2} = \zeta_2 \,. \end{aligned} $$
    (5.302)

    Substituting Eqs. (5.302) and (5.301) into Eq. (5.300), and dividing both sides of the equation by z, gives the desired Laurent expansion (4.40).

  2. (b)

    In order to exploit the Laurent expansion around \(z=0\) computed in the previous part, we apply the recurrence relation (4.23) iteratively until we get

    $$\displaystyle \begin{aligned} {} \mathnormal{\varGamma}(z) = \frac{\mathnormal{\varGamma}(z+n)}{z(z+1) \ldots (z+n-1) } \,. \end{aligned} $$
    (5.303)

    The Laurent expansion of \(\mathnormal {\varGamma }(z+n)\) around \(z=-n\) is then obtained from Eq. (4.40) by replacing z with \(z+n\). The remaining factors are regular at \(z=-n\), and their Taylor expansion is given in terms of harmonic numbers (4.42) by

    $$\displaystyle \begin{aligned} {} \prod_{k=0}^{n-1} \frac{1}{z+k} \!=\! \frac{(-1)^n}{n!} \left[ 1 \!+\! (z+n) \, H_n \!+\! \frac{(z+n)^2}{2} \left( H_{n,2} \!+\! H_n^2 \right) \!+\! \mathcal{O}\left((z+n)^3\right) \right] \,. \end{aligned} $$
    (5.304)

    Substituting Eq. (4.40) with \(z \to z+n\) and Eq. (5.304) into Eq. (5.303), and expanding up to order \((z+n)\) gives Eq. (4.41).
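
The expansion of part (a) is also easy to verify with a computer-algebra system; a minimal SymPy sketch (ours) reads:

```python
import sympy as sp

# Check the Laurent expansion (4.40): Gamma(z) = 1/z - gamma_E
#   + (gamma_E^2 + zeta_2)/2 * z + O(z^2), with zeta_2 = pi^2/6.
z = sp.symbols('z')
expansion = sp.series(sp.gamma(z), z, 0, 2).removeO()
target = 1/z - sp.EulerGamma + (sp.EulerGamma**2/2 + sp.pi**2/12)*z
print(sp.simplify(expansion - target))   # -> 0
```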

FormalPara Exercise 4.6: Massless One-Loop Box with Mellin-Barnes Parametrisation

We begin by rewriting the function B in Eq. (4.50) as

$$\displaystyle \begin{aligned} {} B = - \frac{2 \, \epsilon}{t^2} \, B^{(1)} + \mathcal{O}\big(\epsilon^2\big) \,, \end{aligned} $$
(5.305)

where

$$\displaystyle \begin{aligned} {} B^{(1)} = \int_{{\mathrm{Re}}(z)=c^{\prime}} \frac{{\mathrm{d}} z}{2\pi {\mathrm{i}}} \, x^{-z} \, \mathnormal{\varGamma}(-z)^3 \, \frac{\mathnormal{\varGamma}(1+z)^3}{1+z} \,. \end{aligned} $$
(5.306)

We applied the recurrence relation (4.23) twice—with \(n=-1-z\) and \(n=1+z\)—to simplify the expression w.r.t. Eq. (4.50). The pole structure of \(B^{(1)}\) is depicted in Fig. 5.2. From the latter we see that \(-1 < c^{\prime } <0\). E.g., we may set \(c^{\prime }=-1/2\). In order to carry out the integration we close the contour at infinity. Assuming that \(x>1\), we close the contour to the right, as shown in Fig. 5.2. The contribution from the semi-circle at infinity vanishes, and the integral is given by

$$\displaystyle \begin{aligned} {} B^{(1)} = - \sum_{n=0}^{\infty} \text{Res}\left[ x^{-z} \, \mathnormal{\varGamma}(-z)^3 \, \frac{\mathnormal{\varGamma}(1+z)^3}{1+z} , \, z=n \right] \,. \end{aligned} $$
(5.307)

The minus sign comes from the clockwise direction of the loop. To compute the residues, we make use of the Laurent expansions computed in Exercise 4.5. The factor of \(\mathnormal {\varGamma }(-z)^3\) entails a triple pole at \(z=n\) (with \(n=0,1,\ldots \)), so that we need the expansion of all functions involved around \(z=n\) up to the third order. We obtain the Laurent expansion of \(\mathnormal {\varGamma }(-z)\) around \(z=n\) by replacing \(z\to -z\) in Eq. (4.41),

$$\displaystyle \begin{aligned} {} \mathnormal{\varGamma}(-z) = \frac{(-1)^n}{n!} \left\{ \frac{-1}{z-n} + H_n-{\gamma_{\text{E}}} - \frac{z-n}{2} \left[\left(H_n - {\gamma_{\text{E}}}\right)^2 + \zeta_2 + H_{n,2} \right] \right\} + \ldots \,.\end{aligned} $$
(5.308)

In order to compute the Taylor expansion of \(\mathnormal {\varGamma }(1+z)\) we leverage what we learnt about the digamma function in Exercise 4.3. The first derivative is given by

$$\displaystyle \begin{aligned} \frac{{\mathrm{d}}}{{\mathrm{d}} z} \mathnormal{\varGamma}(1+z) \bigg|{}_{z=n} = \mathnormal{\varGamma}(1+n) \, \psi(1+n) = n! \, (H_n - \gamma_{\text{E}}) \,, \end{aligned} $$
(5.309)

where we used the recurrence relation (4.30). We recall the definition of the harmonic numbers in Eq. (4.42). For the second derivative we use again Eq. (4.30) to write \(\psi ^{\prime }(1+n) = \psi ^{\prime }(1) - H_{n,2}\), and Eq. (4.31) to evaluate \(\psi ^{\prime }(1) = \zeta _2\). We thus obtain

$$\displaystyle \begin{aligned} \frac{{\mathrm{d}}^2}{{\mathrm{d}} z^2} \mathnormal{\varGamma}(1+z) \bigg|{}_{z=n} =& n! \, \big[ \psi(1+n)^2 + \psi^{\prime}(1+n) \big] \\ =& n! \, \big[ (H_n-\gamma_{\text{E}})^2 + \zeta_2 - H_{n,2} \big] \,. \end{aligned} $$
(5.310)

Putting the above together gives the Taylor expansion of \(\mathnormal {\varGamma }(1+z)\) around \(z=n\):

$$\displaystyle \begin{aligned} {} \mathnormal{\varGamma}(1+z) \!=\! n! \left\{ 1 \!+\! (z-n) \big( H_n - \gamma_{\text{E}}\big) \!+\! \frac{(z-n)^2}{2} \big[ (H_n\!-\!\gamma_{\text{E}})^2\!+\!\zeta_2-H_{n,2} \big] + \ldots \right\}\,. \end{aligned} $$
(5.311)

Attentive readers may notice that the expansion of \(\mathnormal {\varGamma }(1+z) \mathnormal {\varGamma }(-z)\) is much simpler:

$$\displaystyle \begin{aligned} \mathnormal{\varGamma}(-z) \, \mathnormal{\varGamma}(1+z) = - (-1)^n \, \bigg[ \frac{1}{z-n} + \zeta_2 \, (z-n) \bigg] + \mathcal{O}\big((z-n)^2\big) \,. \end{aligned} $$
(5.312)

We could have arrived directly at this result through Euler’s reflection formula,

$$\displaystyle \begin{aligned} \mathnormal{\varGamma}(-z) \, \mathnormal{\varGamma}(1+z) = -\frac{\pi}{\sin{}(\pi z)} \,. \end{aligned} $$
(5.313)

The Taylor expansions of the other functions in Eq. (5.306) are straightforward. Substituting them into Eq. (5.307) and taking the residue gives

$$\displaystyle \begin{aligned} {} B^{(1)} = \sum_{n=0}^{\infty} \frac{(-x)^{-n}}{(1+n)^3} + \log(x) \sum_{n=0}^{\infty} \frac{(-x)^{-n}}{(1+n)^2} + \frac{1}{2} \big(\pi^2+\log^2(x)\big) \sum_{n=0}^{\infty} \frac{(-x)^{-n}}{1+n} \,. \end{aligned} $$
(5.314)

The series can be summed in terms of polylogarithms through their definition (4.56):

$$\displaystyle \begin{aligned} \sum_{n=0}^{\infty} \frac{(-x)^{-n}}{(1+n)^k} = - x \, \mathrm{Li}_k\left(-\frac{1}{x}\right) \,, \end{aligned} $$
(5.315)

for \(k=1,2,\ldots \) and \(x>1\). Plugging this into Eq. (5.314), and the latter into Eq. (5.305) gives our final expression for B:

$$\displaystyle \begin{aligned} {} B &= \frac{2 \, \epsilon}{s\, t} \, \bigg[ \mathrm{Li}_3\left(-\frac{1}{x}\right) + \log(x) \, \mathrm{Li}_2\left(-\frac{1}{x}\right) - \frac{1}{2}\big(\pi^2 + \log^2(x)\big) \log\left(1+\frac{1}{x}\right) \bigg] \\ & \quad + \mathcal{O}\big(\epsilon^2\big) \,. \end{aligned} $$
(5.316)

The expression of the massless one-loop box \(F_4\) in Eq. (4.55) is obtained by subtracting from B in Eq. (5.316) the Laurent expansion of the residue A in Eq. (4.49).
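
As a cross-check of the resummation, one may verify numerically that the series in Eq. (5.314), summed through Eq. (5.315), equals \(-x\) times the bracket appearing in Eq. (5.316). A small mpmath sketch (ours, with the arbitrary sample value \(x=2.5\)) reads:

```python
import mpmath as mp

# Sum the residue series (5.314) and compare with the polylogarithms.
x = mp.mpf('2.5')

series = ( mp.nsum(lambda n: (-x)**(-n)/(1 + n)**3, [0, mp.inf])
         + mp.log(x)*mp.nsum(lambda n: (-x)**(-n)/(1 + n)**2, [0, mp.inf])
         + (mp.pi**2 + mp.log(x)**2)/2*mp.nsum(lambda n: (-x)**(-n)/(1 + n), [0, mp.inf]) )

bracket = ( mp.polylog(3, -1/x) + mp.log(x)*mp.polylog(2, -1/x)
          - (mp.pi**2 + mp.log(x)**2)/2*mp.log(1 + 1/x) )

print(series, -x*bracket)   # the two values agree
```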

Fig. 5.2
Pole structure of the \(\mathnormal {\varGamma }\) functions in Eq. (5.306), and integration contour (dashed line)

FormalPara Exercise 4.7: Discontinuities
  1. (a)

    The logarithm of a complex variable \(z = |z| \text{e}^{{\mathrm {i}} \, \text{arg}(z)}\) is defined as

    $$\displaystyle \begin{aligned} \log(z) = \log|z| + {\mathrm{i}} \, \text{arg}(z) \,, \end{aligned} $$
    (5.317)

    where \(|z|\) is the absolute value of z, and the argument of z (\(\text{arg}(z)\)) is the counterclockwise angle from the positive real axis to the line connecting z with the origin. \(\log |z|\) is a continuous function of z, hence the discontinuity of \(\log (z)\) originates from \(\text{arg}(z)\). As z approaches the negative real axis from above (below), \(\text{arg}(z)\) approaches \(\pi \) (\(-\pi \)). In other words, we have that

    $$\displaystyle \begin{aligned} \lim_{\eta\to 0^+}\text{arg} (x \pm {\mathrm{i}} \eta) = \pm \pi \, \mathnormal{\varTheta}(-x) \,, \end{aligned} $$
    (5.318)

    for \(x \in \mathbb {R}\). The discontinuity of the logarithm across the real axis is thus given by

    $$\displaystyle \begin{aligned} \begin{aligned} \text{Disc}_x \left[ \log (x) \right] & = \lim_{\eta\to 0^+} {\mathrm{i}} \left[ \text{arg}(x+{\mathrm{i}} \eta) - \text{arg}(x-{\mathrm{i}} \eta) \right] \\ & = 2{\mathrm{i}} \pi \, \mathnormal{\varTheta}(-x) \,. \end{aligned} \end{aligned} $$
    (5.319)
  2. (b)

    We rewrite the identity (4.67) here for convenience:

    $$\displaystyle \begin{aligned} {} {\mathrm{Li}}_2(x) = - {\mathrm{Li}}_2(1-x) - \log(1-x) \log(x) + \zeta_2 \,. \end{aligned} $$
    (5.320)

    This equation is well defined for \(0<x<1\). Focusing on the RHS, however, we see that all functions are well defined for \(x>0\) except for \(\log (1-x)\). We can thus make use of Eq. (5.320) to reduce the analytic continuation of \({\mathrm {Li}}_2(x)\) to \(x>1\) to that of \(\log (1-x)\). For \(x>1\) we have that

    $$\displaystyle \begin{aligned} {\mathrm{Li}}_2(x \!+\! {\mathrm{i}} \eta) - {\mathrm{Li}}_2(x - {\mathrm{i}} \eta) \!=\! \log(x) \left[ \log(1-x+{\mathrm{i}} \eta ) \!-\! \log(1-x-{\mathrm{i}} \eta) \right] \!+\! \mathcal{O}(\eta) \,. \end{aligned} $$
    (5.321)

    Hence, the discontinuity of the dilogarithm follows from that of the logarithm, as

    $$\displaystyle \begin{aligned} \text{Disc}_x \left[ {\mathrm{Li}}_2 (x) \right] & = \log (x) \, \text{Disc}_x \left[ \log(1-x) \right] \\ & = 2 \pi {\mathrm{i}} \, \log(x) \, \mathnormal{\varTheta}(x-1) \,. \end{aligned} $$
    (5.322)
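
A numerical sketch (ours) of Eq. (5.322): just above and below the cut at \(x>1\) the dilogarithm jumps by \(2\pi {\mathrm{i}} \log (x)\),

```python
import mpmath as mp

# Discontinuity of Li2 across the cut, evaluated slightly off the real axis.
x, eta = mp.mpf('3'), mp.mpf('1e-8')
disc = mp.polylog(2, x + 1j*eta) - mp.polylog(2, x - 1j*eta)
print(disc, 2j*mp.pi*mp.log(x))   # agree up to terms of order eta
```
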
FormalPara Exercise 4.8: The Symbol of a Transcendental Function

We make use of the recursive definition of the symbol (4.98). The differential of \(\log (x) \log (1-x)\) is given by

$$\displaystyle \begin{aligned} {\mathrm{d}} \left[ \log(x) \log (1-x) \right] = \log (1-x) \, {\mathrm{d}} \log(x) + \log (x) \, {\mathrm{d}} \log (1-x) \,. \end{aligned} $$
(5.323)

Through Eq. (4.98) we then have that

$$\displaystyle \begin{aligned} \mathcal{S}\big( \log(x) \log (1-x) \big) = [ S(\log(1-x)) , x ] + [ S(\log(x)), 1-x] \,. \end{aligned} $$
(5.324)

Since we already know that \(S(\log a) = [a]\) (Eq. (4.99)), the final result is

$$\displaystyle \begin{aligned} {} \mathcal{S}\big( \log(x) \log (1-x) \big) = [x, 1-x] + [1-x, x] \,. \end{aligned} $$
(5.325)

Alternatively, one may replace each function in the product by its symbol,

$$\displaystyle \begin{aligned} {} \mathcal{S}\big( \log(x) \log (1-x) \big) = [x] \times [1-x] \,. \end{aligned} $$
(5.326)

By comparing Eqs. (5.325) and (5.326) we see that

$$\displaystyle \begin{aligned} [x] \times [1-x] = [x,1-x] + [1-x, x] \,. \end{aligned} $$
(5.327)

This is a very important property of the symbol, called the shuffle product. It allows us to express the product of two symbols of weights \(n_1\) and \(n_2\) as a linear combination of symbols of weight \(n_1+n_2\). For example, at weight three we have

$$\displaystyle \begin{aligned} [a] \times [b,c] = [a,b,c] + [b,a,c] + [b,c,a] \,. \end{aligned} $$
(5.328)

We refer the interested readers to ref. [2].
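
The shuffle product is straightforward to implement; a minimal Python sketch (ours), with symbols represented as tuples of letters, reproduces Eqs. (5.327) and (5.328):

```python
# Shuffle product of two symbols, each given as a tuple of letters.
def shuffle(u, v):
    if not u:
        return [v]
    if not v:
        return [u]
    return [(u[0],) + w for w in shuffle(u[1:], v)] + \
           [(v[0],) + w for w in shuffle(u, v[1:])]

print(shuffle(('x',), ('1-x',)))    # [('x', '1-x'), ('1-x', 'x')]
print(shuffle(('a',), ('b', 'c')))  # [('a','b','c'), ('b','a','c'), ('b','c','a')]
```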

FormalPara Exercise 4.9: Symbol Basis and Weight-Two Identities
  1. (a)

    The symbol method turns finding relations among special functions into a linear algebra problem. The first step is to put the symbols of Eq. (4.106) and the functions of Eq. (4.107) in two vectors:

    $$\displaystyle \begin{aligned} & \mathbf{b} = \big( [x,x] , [x,1-x], [1-x,x], [1-x,1-x] \big)^{\top} \,, \end{aligned} $$
    (5.329)
    $$\displaystyle \begin{aligned} & \mathbf{g} = \big( \log^2(x), \log^2 (1-x), \log(x) \log (1-x), {\mathrm{Li}}_{2}(1-x) \big)^{\top} \,. \end{aligned} $$
    (5.330)

    The elements of \(\mathbf {b}\) are a basis of all weight-two symbols in the alphabet \(\{x, 1-x\}\). We can thus express the symbol of \(\mathbf {g}\) in the basis \(\mathbf {b}\), as

    $$\displaystyle \begin{aligned} \mathcal{S}(\mathbf{g}) = \begin{pmatrix} 2 \, [x,x] \\ 2 \, [1-x,1-x] \\ {} [x,1-x]+[1-x,x] \\ - [x,1-x] \\ \end{pmatrix} = M \cdot \mathbf{b} \,, \qquad \text{with} \quad M = \begin{pmatrix} 2 & 0 & 0 & 0 \\ 0 & 0 & 0 & 2 \\ 0 & 1 & 1 & 0 \\ 0 & -1 & 0 & 0 \\ \end{pmatrix} \,. \end{aligned} $$
    (5.331)

    Since the matrix M has non-zero determinant, we can invert it to express the weight-two symbol basis \(\mathbf {b}\) in terms of the symbols of the functions in \(\mathbf {g}\),

    $$\displaystyle \begin{aligned} {} \mathbf{b} = M^{-1} \cdot \mathcal{S}(\mathbf{g}) \,, \qquad \text{with} \quad M^{-1} = \begin{pmatrix} \frac{1}{2} & 0 & 0 & 0 \\ 0 & 0 & 0 & -1 \\ 0 & 0 & 1 & 1 \\ 0 & \frac{1}{2} & 0 & 0 \\ \end{pmatrix} \,. \end{aligned} $$
    (5.332)

    Therefore, \(\mathbf {g}\) is a basis of the weight-two symbols in the alphabet \(\{x,1-x\}\) as well.

  2. (b)

    Let us start from \({\mathrm {Li}}_{2}\left (x/(x-1)\right )\). Using Eq. (4.96), its symbol is given by

    $$\displaystyle \begin{aligned} \mathcal{S}\bigg[{\mathrm{Li}}_{2}\left(\frac{x}{x-1}\right)\bigg] = - \left[1-\frac{x}{x-1}, \frac{x}{x-1} \right] \,. \end{aligned} $$
    (5.333)

    The properties (4.100) allow us to express the latter in terms of the letters \(\{x,1-x\}\):

    $$\displaystyle \begin{aligned} \mathcal{S}\bigg[{\mathrm{Li}}_{2}\left(\frac{x}{x-1}\right)\bigg] = [1-x, x ] - [1-x, 1-x] \,. \end{aligned} $$
    (5.334)

    We can then express the symbol of \({\mathrm {Li}}_{2}(x/(x-1))\) in the basis \(\mathbf {b}\), as

    $$\displaystyle \begin{aligned} \mathcal{S}\bigg[{\mathrm{Li}}_{2}\left(\frac{x}{x-1}\right)\bigg] = (0, 0, 1, -1) \cdot \mathbf{b} \,. \end{aligned} $$
    (5.335)

    In this sense, \({\mathrm {Li}}_{2}(x/(x-1))\) ‘lives’ in the space spanned by Eq. (4.106). We do the same for the other dilogarithms in Eq. (4.108):

    $$\displaystyle \begin{aligned} & \mathcal{S}\big[ {\mathrm{Li}}_{2}(x) \big] = - [1-x,x] = (0, 0, -1, 0) \cdot \mathbf{b} \,, \end{aligned} $$
    (5.336)
    $$\displaystyle \begin{aligned} & \mathcal{S}\bigg[ {\mathrm{Li}}_{2}\left(\frac{1}{x}\right)\bigg] = [1 - x, x] - [x, x] = (-1, 0, 1, 0) \cdot \mathbf{b} \,, \end{aligned} $$
    (5.337)
    $$\displaystyle \begin{aligned} & \mathcal{S}\bigg[ {\mathrm{Li}}_{2}\left(\frac{1}{1-x}\right)\bigg] = -[1 - x, 1 - x] + [x, 1 - x] = (0, 1, 0, -1) \cdot \mathbf{b} \,, \end{aligned} $$
    (5.338)
    $$\displaystyle \begin{aligned} & \mathcal{S}\bigg[ {\mathrm{Li}}_{2}\left(\frac{x-1}{x} \right)\bigg] = [x, 1 - x] - [x, x] = (-1, 1, 0, 0) \cdot \mathbf{b} \,. \end{aligned} $$
    (5.339)
  3. (c)

    Having expanded the symbols of the dilogarithms in Eq. (4.108) in the basis \(\mathbf {b}\), we can change basis from \(\mathbf {b}\) to \(\mathcal {S}(\mathbf {g})\) as in Eq. (5.332). For example, we have

    $$\displaystyle \begin{aligned} \begin{aligned} \mathcal{S}\bigg[{\mathrm{Li}}_{2}\left(\frac{x}{x-1}\right)\bigg] & = (0, 0, 1, -1) \cdot M^{-1} \cdot S(\mathbf{g}) \\ & \!=\! \mathcal{S}\bigg[-\frac{1}{2} \log^2 (1-x) + \log(x) \log (1-x) + {\mathrm{Li}}_{2}(1-x) \bigg] . \end{aligned} \end{aligned} $$
    (5.340)

    Doing the same for the other dilogarithms in Eq. (4.108) gives the following identities:

    $$\displaystyle \begin{aligned} & \mathcal{S}\bigg[ {\mathrm{Li}}_{2}(x) + {\mathrm{Li}}_{2}(1 - x) + \log(x) \log(1 - x) \bigg] = 0 \,, \end{aligned} $$
    (5.341)
    $$\displaystyle \begin{aligned} & \mathcal{S}\bigg[ {\mathrm{Li}}_{2}\left(\frac{1}{x}\right) - {\mathrm{Li}}_{2}(1 - x) - \log(x) \log(1 - x) + \frac{1}{2} \log^2(x) \bigg] = 0 \,, \end{aligned} $$
    (5.342)
    $$\displaystyle \begin{aligned} & \mathcal{S}\bigg[ {\mathrm{Li}}_{2}\left(\frac{1}{1-x}\right) + {\mathrm{Li}}_2(1 - x) + \frac{1}{2} \log^2(1 - x) \bigg] = 0 \,, \end{aligned} $$
    (5.343)
    $$\displaystyle \begin{aligned} & \mathcal{S}\bigg[ {\mathrm{Li}}_{2}\left(\frac{x-1}{x} \right) + {\mathrm{Li}}_2(1 - x) + \frac{1}{2} \log^2(x) \bigg] = 0 \,. \end{aligned} $$
    (5.344)

    The terms which are missed by the symbol may be fixed as done for Eq. (4.92).
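
For instance, for the first identity (5.341) the missing constant is \(\zeta_2\), as follows from Eq. (5.320); a quick numerical sketch (ours, with an arbitrary sample value of x) confirms it:

```python
import mpmath as mp

# Li2(x) + Li2(1-x) + log(x)*log(1-x) equals zeta_2 = pi^2/6 for 0 < x < 1.
x = mp.mpf('0.37')
combo = mp.polylog(2, x) + mp.polylog(2, 1 - x) + mp.log(x)*mp.log(1 - x)
print(combo, mp.pi**2/6)   # the two values agree
```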

FormalPara Exercise 4.10: Simplifying Functions Using the Symbol

We compute the symbol of \(f_1(u,v)\) in Eq. (4.109). The general strategy is the following: we use the rule in Eq. (4.96) for the dilogarithms; next, we put all letters over a common denominator and factor them; finally, we use the symbol properties (4.100) to expand the symbol until all letters are irreducible factors. This gives

$$\displaystyle \begin{aligned} & \mathcal{S}\bigg[ {\mathrm{Li}}_2\left(\frac{1-v}{u}\right) \bigg] = [u+v-1,u] - [u+v-1, 1-v] + [u,1-v] - [u,u] \,, \end{aligned} $$
(5.345)
$$\displaystyle \begin{aligned} & \begin{aligned} \mathcal{S}\bigg[ {\mathrm{Li}}_2\left(\frac{(1-u)(1-v)}{u v}\right) \bigg] = \ & \big( [u+v-1,u]-[u+v-1,1-u]+[u,1-u] \\ & +[u,1-v]-[u,u]-[u,v] \big) + \big(u \leftrightarrow v \big) \,. \end{aligned} \end{aligned} $$
(5.346)

\({\mathrm {Li}}_2((1-u)/v)\) is obtained from \({\mathrm {Li}}_2((1-v)/u)\) by exchanging \(u \leftrightarrow v\), and so is its symbol. The symbol of \(\pi ^2/6\) vanishes. Putting the above together gives

$$\displaystyle \begin{aligned} {} \mathcal{S}\big[ f_1(u,v) \big] = [u, 1 - u] + [v, 1 - v] - [u, v] - [v, u] \,, \end{aligned} $$
(5.347)

as claimed. Note that the letter \(u+v-1\) drops out in the sum. In other words, \(u+v=1\) is a branch point for the separate terms in the expression of \(f_1(u,v)\) given in Eq. (4.109), but not for \(f_1(u,v)\).

On the RHS of Eq. (5.347) we recognise in \( [u, 1 - u] \) (\([v, 1 - v]\)) the symbol of \(-{\mathrm {Li}}_2(u)\) (\(-{\mathrm {Li}}_2(v)\)), and in \([u, v] + [v, u]\) the symbol of \(\log u \log v\) (see Exercise 4.8). An alternative and simpler expression for \(f_1(u,v)\) is thus given by

$$\displaystyle \begin{aligned} \mathcal{S}\big[f_1(u,v)\big] =\mathcal{S}\big[ -{\mathrm{Li}}_2(u) -{\mathrm{Li}}_2(v) - \log u \log v \big] \,, \end{aligned} $$
(5.348)

which matches—at symbol level—the expression of \(f_2(u,v)\) in Eq. (4.112).

FormalPara Exercise 4.11: The Massless Two-Loop Kite Integral

We define the integral family as

$$\displaystyle \begin{aligned} {} G^{\text{kite}}_{a_1,a_2,a_3,a_4,a_5} := \int \frac{{\mathrm{d}}^D k_1}{{\mathrm{i}} \pi^{D/2}} \frac{{\mathrm{d}}^D k_2}{{\mathrm{i}} \pi^{D/2}} \frac{1}{D_1^{a_1} D_2^{a_2} D_3^{a_3} D_4^{a_4} D_5^{a_5}} \,, \end{aligned} $$
(5.349)

where the inverse propagators \(D_i\) are given byFootnote 3

$$\displaystyle \begin{aligned} {} \begin{array}{lll} D_1 = - k_1^2 \,, & D_2 = -(k_1-p)^2 \,, \quad \quad & D_3 = -k_2^2 \,, \\ D_4 = -(k_2+p)^2\,, \quad \quad & D_5 = -(k_1+k_2)^2\,. & \\ \end{array} \end{aligned} $$
(5.350)

Feynman’s \({\mathrm {i}} 0\) prescription is dropped, as it plays no part here. The desired integral is

$$\displaystyle \begin{aligned} F_{\text{kite}}\big(s; D\big) = G^{\text{kite}}_{1,1,1,1,1} \,. \end{aligned} $$
(5.351)

The IBP relations for the triangle sub-integral with loop momentum \(k_2\) are given by

$$\displaystyle \begin{aligned} \int \frac{{\mathrm{d}}^D k_1}{{\mathrm{i}} \pi^{D/2}} \frac{{\mathrm{d}}^D k_2}{{\mathrm{i}} \pi^{D/2}} \frac{\partial}{\partial k_2^{\mu}} \frac{ q^{\mu} }{D_1 D_2 D_3 D_4 D_5} = 0 \,, \end{aligned} $$
(5.352)

for any momentum q. Upon differentiating we rewrite the scalar products in terms of inverse propagators—and thus of integrals of the family (5.349)—by inverting the system of equations (5.350) (e.g. \(k_1 \cdot p =( D_2 - D_1 + s)/2\)). There are three independent choices for q: \(k_1\), \(k_2\), and p. We need to find a linear combination of these such that the resulting IBP relation contains only \(F_{\text{kite}}\) and bubble-type integrals. Using \(q=k_1+k_2\) gives

$$\displaystyle \begin{aligned} (D-4) \, G^{\text{kite}}_{1, 1, 1, 1, 1} & - G^{\text{kite}}_{1, 1, 1, 2, 0} - G^{\text{kite}}_{1, 1, 2, 1, 0} + G^{\text{kite}}_{0, 1, 2, 1, 1} + G^{\text{kite}}_{1, 0, 1, 2, 1} = 0 \,. \end{aligned} $$
(5.353)

The graph symmetries imply that

$$\displaystyle \begin{aligned} G^{\text{kite}}_{1, 1, 1, 2, 0} = G^{\text{kite}}_{1, 1, 2, 1, 0} \,, \qquad \quad G^{\text{kite}}_{1, 0, 1, 2, 1} = G^{\text{kite}}_{0, 1, 2, 1, 1} \,. \end{aligned} $$
(5.354)

\(G^{\text{kite}}_{1, 1, 2, 1, 0}\) is a product of bubble integrals. Using Eq. (4.16) for the latter gives

$$\displaystyle \begin{aligned} G^{\text{kite}}_{1, 1, 2, 1, 0} = B(1,1) \, B(1,2) \, (-s)^{D-5} \,. \end{aligned} $$
(5.355)

\(G^{\text{kite}}_{0, 1, 2, 1, 1}\) instead has a bubble sub-integral. By using Eq. (4.16) iteratively we obtain

(5.356)

where the numbers next to the propagators in the graphs are their exponents. Putting the above together gives

$$\displaystyle \begin{aligned} G^{\text{kite}}_{1, 1, 1, 1, 1} = \frac{2}{D-4} \, B(1,1) \left[ B(1,2) -B\left(3-\frac{D}{2}, 2\right) \right] (-s)^{D-5} \,, \end{aligned} $$
(5.357)

which can be expressed in terms of gamma functions through Eq. (4.17). Setting \(D=4-2\epsilon \) and expanding the \(\mathnormal {\varGamma }\) functions around \(\epsilon =0\) through Eq. (4.24) finally gives Eq. (4.130).
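
The ε-expansion can also be probed numerically. In the following sketch (ours) we take \(B(a_1,a_2)\) as read off from Eq. (5.276) and strip the factor \((-s)^{D-5}\); the combination in Eq. (5.357) is then finite as \(\epsilon \to 0\), approaching \(6\zeta_3\) (cf. Eq. (4.130)):

```python
import mpmath as mp

# Evaluate Eq. (5.357) at small eps, with B(a1,a2) from the massless bubble.
mp.mp.dps = 40

def B(a1, a2, D):
    return (mp.gamma(a1 + a2 - D/2)*mp.gamma(D/2 - a1)*mp.gamma(D/2 - a2)
            / (mp.gamma(a1)*mp.gamma(a2)*mp.gamma(D - a1 - a2)))

def kite(eps):
    D = 4 - 2*eps
    return 2/(D - 4)*B(1, 1, D)*(B(1, 2, D) - B(3 - D/2, 2, D))

print(kite(mp.mpf('1e-4')), 6*mp.zeta(3))   # close for small eps
```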

FormalPara Exercise 4.13: “\({\mathrm {d}} \log \)” Form of the Massive Bubble Integrand with \(D=2\)

We start from the integrand \(\omega _{1,1}\) in the first line of Eq. (4.169). We use the parametrisation \(k = \beta _1 p_1 + \beta _2 p_2\), with \(p=p_1+p_2\), \(p_1^2=p_2^2=0\) and \(2 \, p_1 \cdot p_2 = s\). We change integration variables from k to \(\beta _1\) and \(\beta _2\). The propagators are given by

$$\displaystyle \begin{aligned} {} \begin{aligned} & -k^2 + m^2 = -s \, \big( \beta_1 \beta_2 - x \big) \,, \\ & -(k+p)^2 + m^2 = -s \, \big[ (1+\beta_1)(1+\beta_2) - x \big] \,, \end{aligned} \end{aligned} $$
(5.358)

where we introduced the short-hand notation \(x=m^2/s\). The Jacobian factor is a function of s, and dimensional analysis tells us that \(J\propto s\). The constant prefactor is irrelevant here.Footnote 4 The integrand then reads

$$\displaystyle \begin{aligned} {} \omega_{1,1} \propto \frac{1}{s} \, \frac{{\mathrm{d}} \beta_1 {\mathrm{d}} \beta_2}{(\beta_1 \beta_2 - x) \big[ (1+\beta_1)(1+\beta_2)-x\big]} \,. \end{aligned} $$
(5.359)

Our goal is to rewrite \(\omega _{1,1}\) in a “\({\mathrm {d}} \log \)” form. First, we decompose \(\omega _{1,1}\) into partial fractions w.r.t. \(\beta _2\):

$$\displaystyle \begin{aligned} s \, \omega_{1,1} \propto \frac{{\mathrm{d}} \beta_1}{\beta_1^2 + \beta_1 + x} \, \frac{{\mathrm{d}} \beta_2}{\beta_2 - x/\beta_1} - \frac{{\mathrm{d}} \beta_1}{\beta_1^2 + \beta_1 + x} \, \frac{{\mathrm{d}} \beta_2}{\beta_2 + 1 - x/(1+\beta_1)} \,. \end{aligned} $$
(5.360)

We then push all \(\beta _2\)-dependent factors into \({\mathrm {d}} \log \) factors, obtaining

$$\displaystyle \begin{aligned} \begin{aligned} s \, \omega_{1,1} & \propto \frac{{\mathrm{d}} \beta_1}{\beta_1^2 + \beta_1 + x} \, {\mathrm{d}} \log\left(\beta_2 - \frac{x}{\beta_1} \right) - \frac{{\mathrm{d}} \beta_1}{\beta_1^2 + \beta_1 + x} \, {\mathrm{d}} \log\left(1+\beta_2 - \frac{x}{1+\beta_1} \right) \\ & \propto \frac{{\mathrm{d}} \beta_1}{\beta_1^2 + \beta_1 + x} \, {\mathrm{d}} \log \left( \frac{\beta_1 \beta_2 - x}{(1 + \beta_1)(1 + \beta_2) - x} \right) \,. \end{aligned} \end{aligned} $$
(5.361)

In order to do the same w.r.t. \(\beta _1\), we need to factor the polynomial in the denominator:

$$\displaystyle \begin{aligned} {} \beta_1^2 + \beta_1 + x = \big(\beta_1 - \beta_1^+ \big) \big(\beta_1 - \beta_1^- \big) \,, \qquad \beta_1^{\pm} = \frac{1}{2} \Big(-1 \pm \sqrt{1-4 x} \Big) \,. \end{aligned} $$
(5.362)

Partial fractioning w.r.t. \(\beta _1\) and rewriting all \(\beta _1\)-dependent factors into \({\mathrm {d}} \log \)’s yields

$$\displaystyle \begin{aligned} {} \omega_{1,1} \propto \frac{1}{s \, \sqrt{1-4 x}} \, {\mathrm{d}}\log\left( \frac{\beta_1 - \beta_1^+}{\beta_1 - \beta_1^-} \right) \, {\mathrm{d}} \log \left( \frac{\beta_1 \beta_2 - x}{(1 + \beta_1)(1 + \beta_2) - x} \right) \,. \end{aligned} $$
(5.363)

Note that there is a lot of freedom in the expression of the \({\mathrm {d}} \log \) form. For instance, we might have started with a partial fraction decomposition w.r.t. \(\beta _1\), and we would have obtained a different—yet equivalent—expression. While the expression of the \({\mathrm {d}} \log \) form may vary, all singularities are always manifestly of the type \({\mathrm {d}} x/x\), and the leading singularities stay the same (up to the irrelevant sign).

We now consider the momentum-space \({\mathrm {d}} \log \) form in Eq. (4.169), namely

$$\displaystyle \begin{aligned} {} \omega_{1,1} \propto \frac{1}{s \, \sqrt{1-4x }} \, {\mathrm{d}} \log(\tau_1) \, {\mathrm{d}} \log(\tau_2) \,, \end{aligned} $$
(5.364)

where

$$\displaystyle \begin{aligned} {} \tau_1 = \frac{-k^2+m^2}{(k-k_{\pm})^2} \,, \qquad \qquad \tau_2 = \frac{-(k+p)^2+m^2}{(k-k_{\pm})^2} \,. \end{aligned} $$
(5.365)

We rewrite the latter in terms of \(\beta _1\) and \(\beta _2\), and show that it matches Eq. (5.359). We recall that \({\mathrm {d}} = {\mathrm {d}}\beta _1 \, \partial _{\beta _1} + {\mathrm {d}}\beta _2 \, \partial _{\beta _2}\) and \({\mathrm {d}} \beta _2 \, {\mathrm {d}} \beta _1 = - {\mathrm {d}} \beta _1 \, {\mathrm {d}} \beta _2\), which imply that

$$\displaystyle \begin{aligned} {} {\mathrm{d}} \log(\tau_1) \, {\mathrm{d}} \log(\tau_2) = \left( \frac{\partial \tau_1}{\partial \beta_1} \frac{\partial \tau_2}{\partial \beta_2} - \frac{\partial \tau_1}{\partial \beta_2} \frac{\partial \tau_2}{\partial \beta_1} \right) \frac{{\mathrm{d}} \beta_1 \, {\mathrm{d}} \beta_2}{\tau_1 \, \tau_2} \,. \end{aligned} $$
(5.366)

In Eq. (5.365), \(k_{\pm }\) denotes either of the two solutions to the cut equations. We choose \(k_+ = \beta _1^+ \, p_1 + \beta _2^+ \, p_2\), where \(\beta _{1}^+\) is given by Eq. (5.362) and \(\beta _2^+=x/\beta _1^+\). We then have

$$\displaystyle \begin{aligned} {} (k-k_+)^2 = \frac{s}{2} \, \Big[2x+\beta_1 + \beta_2 + 2 \beta_1 \beta_2 + (\beta_1 - \beta_2) \sqrt{1 - 4 x} \Big]\,. \end{aligned} $$
(5.367)

We substitute Eqs. (5.367) and (5.358) into Eq. (5.365), and the latter into Eq. (5.366). Simplifying the result—possibly with a computer-algebra system—gives

$$\displaystyle \begin{aligned} \frac{1}{s \, \sqrt{1-4x }} \, {\mathrm{d}} \log(\tau_1) \, {\mathrm{d}} \log(\tau_2) = \frac{1}{s} \, \frac{{\mathrm{d}}\beta_1 \, {\mathrm{d}} \beta_2}{(\beta_1 \beta_2 - x) \big[(1 + \beta_1)(1 + \beta_2) - x \big] } \,, \end{aligned} $$
(5.368)

which matches the expression of \(\omega _{1,1}\) in Eq. (5.359), as claimed.
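
The simplification mentioned above is indeed a natural task for a computer-algebra system; a short SymPy sketch (ours) verifies Eq. (5.368), dropping the constant prefactors of \(\tau_1\) and \(\tau_2\) (they cancel in the d log's) and the overall \(1/s\):

```python
import sympy as sp

# Check that the wedge of the two d-log factors reproduces Eq. (5.359).
b1, b2, x = sp.symbols('beta1 beta2 x')
kk = sp.Rational(1, 2)*(2*x + b1 + b2 + 2*b1*b2 + (b1 - b2)*sp.sqrt(1 - 4*x))
t1 = (b1*b2 - x)/kk                       # tau_1 up to a constant factor
t2 = ((1 + b1)*(1 + b2) - x)/kk           # tau_2 up to a constant factor

wedge = (sp.diff(sp.log(t1), b1)*sp.diff(sp.log(t2), b2)
         - sp.diff(sp.log(t1), b2)*sp.diff(sp.log(t2), b1))
lhs = wedge/sp.sqrt(1 - 4*x)
rhs = 1/((b1*b2 - x)*((1 + b1)*(1 + b2) - x))

vals = {b1: sp.Rational(3, 7), b2: sp.Rational(-2, 5), x: sp.Rational(3, 16)}
print(sp.simplify((lhs - rhs).subs(vals)))   # -> 0
```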

FormalPara Exercise 4.14: An Integrand with Double Poles: The Two-Loop Kite in \(D=4\)

We complement \(p_1\) and \(p_2\) with two momenta constructed from their spinors,

$$\displaystyle \begin{aligned} p_3^{\mu} = \frac{1}{2} \langle 1 | \gamma^{\mu} | 2 ] \,, \quad p_4^{\mu} = \frac{1}{2} \langle 2 | \gamma^{\mu} | 1 ] \,, \end{aligned} $$
(5.369)

to construct a four-dimensional basis. We have that \(p_1 \cdot p_2 = - p_3 \cdot p_4 = s/2\), while all other scalar products \(p_i \cdot p_j\) vanish. We expand the loop momenta as

$$\displaystyle \begin{aligned} k_1^{\mu} = \sum_{i=1}^4 a_i \, p_i^{\mu} \,, \qquad k_2^{\mu} = \sum_{i=1}^4 b_i \, p_i^{\mu} \,, \end{aligned} $$
(5.370)

and change integration variables from \(k_1^{\mu }\) and \(k_2^{\mu }\) to \(a_i\) and \(b_i\). The inverse propagators defined in Eq. (5.350) are given by

$$\displaystyle \begin{aligned} {} \begin{aligned} & \begin{array}{ll} D_1 = \left(a_3 a_4 - a_1 a_2\right) s \,, \qquad \qquad & D_2 = \left(a_3 a_4 - a_1 a_2 + a_1 + a_2 -1 \right) s \,, \\ D_3 = \left(b_3 b_4 - b_1 b_2\right) s \,, \qquad \qquad & D_4 = \left(b_3 b_4 - b_1 b_2 - b_1 - b_2 -1 \right) s \,, \\ \end{array} \\ & \, D_5 = \left(a_3 a_4 - a_1 a_2 + b_3 b_4 - b_1 b_2 + a_3 b_4 + a_4 b_3 - a_1 b_2 - a_2 b_1 \right) s \,. \end{aligned} \end{aligned} $$
(5.371)

The Jacobian \(|J_1|\) of the change of variables \(\{k_1^{\mu }\} \to \{a_i\}\) is the determinant of the \(4 \times 4\) matrix with entries

$$\displaystyle \begin{aligned} \left(J_1\right)^{\mu}_i = \frac{\partial k_1^{\mu}}{\partial a_i} \,, \quad \mu=0,\ldots,3 \,, \quad i=1,\ldots, 4\,. \end{aligned} $$
(5.372)

Dimensional analysis tells us that \(|J_1| \propto s^2\). It is instructive to compute it explicitly as well. To do so, it is convenient to first consider

$$\displaystyle \begin{aligned} \left(J_1\right)^{\mu}_i \eta_{\mu \nu} \left(J_1\right)^{\nu}_j = p_i \cdot p_j \,. \end{aligned} $$
(5.373)

Taking the determinant on both sides gives

$$\displaystyle \begin{aligned} |J_1|{}^2 = - \frac{s^4}{16}\,, \end{aligned} $$
(5.374)

where the minus sign comes from the determinant of the metric tensor. The Jacobian for \(\{k_2^{\mu }\} \to \{b_i\}\) is similar. The maximal cut is then given by

$$\displaystyle \begin{aligned} F_{\text{kite}}^{\text{max cut}} \propto s^4 \int {\mathrm{d}} a_1 {\mathrm{d}} a_2 {\mathrm{d}} a_3 {\mathrm{d}} a_4 {\mathrm{d}} b_1 {\mathrm{d}} b_2 {\mathrm{d}} b_3 {\mathrm{d}} b_4 \prod_{i=1}^5 \delta\left(D_i \right) \,, \end{aligned} $$
(5.375)

where the inverse propagators are expressed in terms of \(a_i\) and \(b_i\) through Eq. (5.371), and the overall constant is neglected. We use the delta functions of \(D_1\) and \(D_3\) to fix \(a_1=a_3 a_4/a_2\) and \(b_1 = b_3 b_4/b_2\),

(5.376)

We then use the first two delta functions to fix \(a_3 = a_2(1-a_2)/a_4\) and \(b_3 =-b_2(1+b_2)/b_4\), and the remaining one to fix \(b_2 = a_2 b_4/a_4\). We obtain

$$\displaystyle \begin{aligned} F_{\text{kite}}^{\text{max cut}} \propto \frac{1}{s} \int \frac{{\mathrm{d}} a_2 {\mathrm{d}} a_4 {\mathrm{d}} b_4}{a_4 (a_4+b_4)} \,. \end{aligned} $$
(5.377)

New simple poles have appeared, and the integrand has a double pole at \(a_2 \to \infty \). We can make this manifest by the change of variable \(a_2 \to 1/\tilde {a}_2\), which maps the hidden double pole at \(a_2 \to \infty \) into a manifest double pole at \(\tilde {a}_2 = 0\),

$$\displaystyle \begin{aligned} F_{\text{kite}}^{\text{max cut}} \propto \frac{1}{s} \int \frac{{\mathrm{d}} \tilde{a}_2 {\mathrm{d}} a_4 {\mathrm{d}} b_4}{\tilde{a}_2^2 a_4 (a_4+b_4)} \,. \end{aligned} $$
(5.378)
FormalPara Exercise 4.16: The Box Integrals with the Differential Equations Method
  1. (a)

    We define and analyse the box integral family using LiteRed [3]. There are 3 master integrals. LiteRed’s algorithm selects them as the t- and s-channel bubbles, and the box (see Fig. 5.3). We denote them by \(\mathbf {g}\),

    $$\displaystyle \begin{aligned} {} \mathbf{g}(s,t;\epsilon) = \begin{pmatrix} I^{\text{box}}_{0,1,0,1} \\ I^{\text{box}}_{1,0,1,0} \\ I^{\text{box}}_{1,1,1,1} \end{pmatrix} \,. \end{aligned} $$
    (5.379)
    Fig. 5.3
    Master integrals of the box integral family in Eq. (5.379)

  2. (b)

    We differentiate \(\mathbf {g}\) w.r.t. s and t, and IBP-reduce the result. We obtain

    $$\displaystyle \begin{aligned} \begin{cases} \partial_s \mathbf{g}(s,t;\epsilon) = A_s(s,t;\epsilon) \cdot \mathbf{g}(s,t;\epsilon) \,, \\ \partial_t \mathbf{g}(s,t;\epsilon) = A_t(s,t;\epsilon) \cdot \mathbf{g}(s,t;\epsilon) \,, \\ \end{cases} \end{aligned} $$
    (5.380)

    where

    $$\displaystyle \begin{aligned} A_s = \begin{pmatrix} 0 & 0 & 0 \\ 0 & -\frac{\epsilon}{s} & 0 \\ \frac{2 (2 \epsilon-1)}{s t (s + t)} & \frac{2 (1 - 2 \epsilon)}{s^2 (s + t)} & - \frac{s + t + \epsilon t}{s (s + t)} \\ \end{pmatrix} \,, \qquad A_t = \begin{pmatrix} -\frac{\epsilon}{t} & 0 & 0 \\ 0 & 0 & 0 \\ \frac{2 (1 - 2 \epsilon)}{t^2 (s + t)} & \frac{2 (2 \epsilon -1 )}{s t (s + t)} & - \frac{s + t + \epsilon s}{t (s + t)} \\ \end{pmatrix} \,. \end{aligned} $$
    (5.381)

    We verify the integrability conditions,

    $$\displaystyle \begin{aligned} A_s \cdot A_t - A_t \cdot A_s+ \partial_t A_s - \partial_s A_t = 0 \,, \end{aligned} $$
    (5.382)

    and the scaling relation,

    $$\displaystyle \begin{aligned} s \, A_s + t \, A_t = \text{diag}( - \epsilon, - \epsilon , -2 -\epsilon ) \,. \end{aligned} $$
    (5.383)

    The diagonal entries on the RHS of the scaling relation match the power counting of the integrals in \(\mathbf {g}\) in units of s: the bubbles scale as \(s^{-\epsilon}\), and the box as \(s^{-2-\epsilon}\).
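    The relations above lend themselves to a quick symbolic check. The following sympy sketch, which is not part of the original solution and merely transcribes the matrices of Eq. (5.381), verifies the integrability condition (5.382) and the scaling relation (5.383).

    import sympy as sp

    s, t, eps = sp.symbols('s t epsilon')

    # DE matrices of Eq. (5.381).
    As = sp.Matrix([
        [0, 0, 0],
        [0, -eps/s, 0],
        [2*(2*eps - 1)/(s*t*(s + t)), 2*(1 - 2*eps)/(s**2*(s + t)), -(s + t + eps*t)/(s*(s + t))],
    ])
    At = sp.Matrix([
        [-eps/t, 0, 0],
        [0, 0, 0],
        [2*(1 - 2*eps)/(t**2*(s + t)), 2*(2*eps - 1)/(s*t*(s + t)), -(s + t + eps*s)/(t*(s + t))],
    ])

    # Integrability condition, Eq. (5.382).
    assert sp.simplify(As*At - At*As + sp.diff(As, t) - sp.diff(At, s)) == sp.zeros(3, 3)
    # Scaling relation, Eq. (5.383).
    assert sp.simplify(s*As + t*At) == sp.diag(-eps, -eps, -2 - eps)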

  3. (c)

    We express \(\mathbf {f}\) in terms of \(\mathbf {g}\) as \(\mathbf {f} = T^{-1} \cdot \mathbf {g}\) by IBP-reducing the integrals in Eq. (4.175).Footnote 5 We obtain

    $$\displaystyle \begin{aligned} T^{-1} = c(\epsilon) \, \begin{pmatrix} 0 & 0 & s t \\ 0 & \frac{2\epsilon-1}{\epsilon} & 0 \\ \frac{2\epsilon-1}{\epsilon} & 0 & 0 \\ \end{pmatrix} \,. \end{aligned} $$
    (5.384)

    The new basis \(\mathbf {f}\) satisfies a system of DEs in canonical form,

    $$\displaystyle \begin{aligned} {} \begin{cases} \partial_s \mathbf{f}(s,t;\epsilon) = \epsilon \, B_s(s,t) \cdot \mathbf{f}(s,t;\epsilon) \,, \\ \partial_t \mathbf{f}(s,t;\epsilon) = \epsilon \, B_t(s,t) \cdot \mathbf{f}(s,t;\epsilon) \,, \\ \end{cases} \end{aligned} $$
    (5.385)

    where

    $$\displaystyle \begin{aligned} {} B_s = \begin{pmatrix} \frac{1}{s + t}-\frac{1}{s} & \frac{2}{s + t}-\frac{2}{s} & \frac{2}{s + t} \\ 0 & - \frac{1}{s} & 0 \\ 0 & 0 & 0 \\ \end{pmatrix} \,, \qquad B_t = \begin{pmatrix} \frac{1}{s + t}-\frac{1}{t} & \frac{2}{s + t} & \frac{2}{s + t}-\frac{2}{t} \\ 0 & 0 & 0 \\ 0 & 0 & -\frac{1}{t} \\ \end{pmatrix} \,. \end{aligned} $$
    (5.386)

    We compute \(B_s\) and \(B_t\) as in point (b), or through the gauge transformation

    $$\displaystyle \begin{aligned} \epsilon \, B_s = T^{-1} \cdot A_s \cdot T - T^{-1} \cdot \partial_s T \,, \end{aligned} $$
    (5.387)

    and similarly for t. From Eq. (5.386) we see that the symbol alphabet of this family is \(\{s, t, s+t\}\). Thanks to the factorisation of \(\epsilon \) on the RHS of the canonical DEs (5.385), the integrability conditions split into

    $$\displaystyle \begin{aligned} B_s \cdot B_t - B_t \cdot B_s = 0 \,, \qquad \qquad \partial_s B_t - \partial_t B_s = 0 \,. \end{aligned} $$
    (5.388)

    The scaling relation is given by

    $$\displaystyle \begin{aligned} s \, B_s + t \, B_t = - {\mathbb{1}}_{3} \,. \end{aligned} $$
    (5.389)
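    As a further cross-check of points (b) and (c), the sympy sketch below (not part of the original solution; the normalisation \(c(\epsilon)\) is kept symbolic, since it drops out) verifies the gauge transformation (5.387) as well as the canonical integrability conditions (5.388) and the scaling relation (5.389).

    import sympy as sp

    s, t, eps, c = sp.symbols('s t epsilon c')

    # A_s of Eq. (5.381) and T^{-1} of Eq. (5.384).
    As = sp.Matrix([
        [0, 0, 0],
        [0, -eps/s, 0],
        [2*(2*eps - 1)/(s*t*(s + t)), 2*(1 - 2*eps)/(s**2*(s + t)), -(s + t + eps*t)/(s*(s + t))],
    ])
    Tinv = c*sp.Matrix([
        [0, 0, s*t],
        [0, (2*eps - 1)/eps, 0],
        [(2*eps - 1)/eps, 0, 0],
    ])
    T = Tinv.inv()

    # Canonical DE matrices of Eq. (5.386).
    Bs = sp.Matrix([
        [1/(s + t) - 1/s, 2/(s + t) - 2/s, 2/(s + t)],
        [0, -1/s, 0],
        [0, 0, 0],
    ])
    Bt = sp.Matrix([
        [1/(s + t) - 1/t, 2/(s + t), 2/(s + t) - 2/t],
        [0, 0, 0],
        [0, 0, -1/t],
    ])

    # Gauge transformation, Eq. (5.387); c(eps) cancels out.
    assert sp.simplify(Tinv*As*T - Tinv*T.diff(s) - eps*Bs) == sp.zeros(3, 3)
    # Canonical integrability conditions, Eq. (5.388).
    assert sp.simplify(Bs*Bt - Bt*Bs) == sp.zeros(3, 3)
    assert sp.simplify(sp.diff(Bt, s) - sp.diff(Bs, t)) == sp.zeros(3, 3)
    # Scaling relation, Eq. (5.389).
    assert sp.simplify(s*Bs + t*Bt) == -sp.eye(3)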
  4. (d)

    Viewed as a function of s and \(x=t/s\), \(\mathbf {f}\) satisfies the canonical DEs

    $$\displaystyle \begin{aligned} \begin{cases} \partial_s \mathbf{f}(s,x;\epsilon) = \epsilon \, C_s(s,x) \cdot \mathbf{f}(s,x;\epsilon) \,, \\ \partial_x \mathbf{f}(s,x;\epsilon) = \epsilon \, C_x(s,x) \cdot \mathbf{f}(s,x;\epsilon) \,, \\ \end{cases} \end{aligned} $$
    (5.390)

    where \(C_s\) and \(C_x\) are related to \(B_s\) and \(B_t\) in (5.386) through the chain rule,

    $$\displaystyle \begin{aligned} C_x = s \, B_t \Big|{}_{t=x s} \,, \qquad \qquad C_s = B_s + \frac{t}{s} \, B_t \Big|{}_{t=x s} \,. \end{aligned} $$
    (5.391)

    We observe that \(C_x\) is a function of x only, and \(C_s\) of s. Thanks to this separation of variables, we can straightforwardly rewrite the canonical DEs in differential form,

    $$\displaystyle \begin{aligned} {} {\mathrm{d}} \, \mathbf{f}(s,x;\epsilon) = \epsilon \, \big[ {\mathrm{d}} \, \tilde{C}(s,x) \big] \cdot \mathbf{f}(s,x;\epsilon) \,. \end{aligned} $$
    (5.392)

    The connection matrix \(\tilde {C}\) is given by a linear combination of logarithms of the alphabet letters \(\alpha _1 = s\), \(\alpha _2 = x\) and \(\alpha _3 = 1+x\),

    $$\displaystyle \begin{aligned} \tilde{C}(s,x) = \sum_{k=1}^3 C_k \, \log \alpha_k(s,x) \,, \end{aligned} $$
    (5.393)

    with constant matrix coefficients,

    $$\displaystyle \begin{aligned} C_1 = - {\mathbb{1}}_{3} \,, \qquad C_2 = \begin{pmatrix} -1 & 0 & -2 \\ 0 & 0 & 0 \\ 0 & 0 & -1 \\ \end{pmatrix} \,, \qquad C_3 = \begin{pmatrix} 1 & 2 & 2 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \\ \end{pmatrix}\,. \end{aligned} $$
    (5.394)
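    The separation of variables and the matrix residues \(C_k\) can also be checked symbolically. The following sympy sketch (not part of the original solution) implements the chain rule (5.391) and extracts the constant matrices of Eq. (5.394).

    import sympy as sp

    s, t, x = sp.symbols('s t x')

    # Canonical DE matrices of Eq. (5.386).
    Bs = sp.Matrix([
        [1/(s + t) - 1/s, 2/(s + t) - 2/s, 2/(s + t)],
        [0, -1/s, 0],
        [0, 0, 0],
    ])
    Bt = sp.Matrix([
        [1/(s + t) - 1/t, 2/(s + t), 2/(s + t) - 2/t],
        [0, 0, 0],
        [0, 0, -1/t],
    ])

    # Chain rule, Eq. (5.391).
    Cx = sp.simplify((s*Bt).subs(t, x*s))
    Cs = sp.simplify((Bs + (t/s)*Bt).subs(t, x*s))
    assert Cx.free_symbols == {x} and Cs.free_symbols == {s}   # separation of variables

    # Matrix residues of the dlog form, Eq. (5.394).
    C1 = sp.simplify(s*Cs)
    C2 = sp.Matrix(3, 3, lambda i, j: sp.limit(x*Cx[i, j], x, 0))
    C3 = sp.Matrix(3, 3, lambda i, j: sp.limit((1 + x)*Cx[i, j], x, -1))
    assert C1 == -sp.eye(3)
    assert sp.simplify(Cx - C2/x - C3/(1 + x)) == sp.zeros(3, 3)
    print(C2)   # Matrix([[-1, 0, -2], [0, 0, 0], [0, 0, -1]])
    print(C3)   # Matrix([[1, 2, 2], [0, 0, 0], [0, 0, 0]])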
  5. (e)

    We expand the pure integrals \(\mathbf {f}\) as a Taylor series around \(\epsilon =0\),Footnote 6

    $$\displaystyle \begin{aligned} {} \mathbf{f}(s,x;\epsilon) = \sum_{w\ge 0} \epsilon^w \, {\mathbf{f}}^{(w)}(s,x) \,, \end{aligned} $$
    (5.395)

    and define the weight-w boundary values at \(s=-1\) and \(x=1\) as \({\mathbf {b}}^{(w)} = {\mathbf {f}}^{(w)}(-1,1)\). Using Eq. (4.16) for the bubble-type integrals we obtain

    (5.396)

    The expression for \(f_3(s,x;\epsilon )\) is obtained by trading s for \(t = s x\) in \(f_2(s,x;\epsilon )\). We leave \(b_1^{(w)}\) as free parameters. The weight-0 boundary values are thus given by \( {\mathbf {b}}^{(0)} = (b_1^{(0)}, -1, -1)^{\top }\). We can now solve the canonical DEs in terms of symbols. In order to do so, we note that the canonical DEs (5.392) imply the following DEs for the coefficients of the \(\epsilon \) expansion,

    $$\displaystyle \begin{aligned} {} {\mathrm{d}} \, f_i^{(w)}(s,x) = \sum_{k=1}^3 \left[ \sum_{j=1}^3 (C_{k})_{ij} \, f_j^{(w-1)}(s,x) \right] \, {\mathrm{d}} \log\alpha_k(s,x) \,. \end{aligned} $$
    (5.397)

    The iteration starts from \({\mathbf {f}}^{(0)} = {\mathbf {b}}^{(0)}\). We spelled out all indices in Eq. (5.397) to facilitate the comparison against the recursive definition of the symbol given by Eqs. (4.97) and (4.98). From this, we find the following recursive formula for the symbol of the solution to the canonical DEs:

    $$\displaystyle \begin{aligned} \mathcal{S} \Big( f_i^{(w)} \Big) = \sum_{k=1}^3 \sum_{j=1}^3 (C_{k})_{ij} \, \bigg[ \mathcal{S} \Big( f_j^{(w-1)} \Big) , \, \alpha_k \bigg] \,, \end{aligned} $$
    (5.398)

    starting at weight \(w=0\) with \(\mathcal {S} \big ( f_i^{(0)} \big ) = b^{(0)}_i []\) (we recall that \([]\) denotes the empty symbol). With a slight abuse of notation, we may rewrite this in a more compact form as

    $$\displaystyle \begin{aligned} \mathcal{S}\big( {\mathbf{f}}^{(w)} \big) = \sum_{k=1}^3 C_{k} \cdot \left[ \mathcal{S}\big( {\mathbf{f}}^{(w-1)} \big) , \, \alpha_k \right] \,. \end{aligned} $$
    (5.399)

    At transcendental weight 1 we thus have that

    $$\displaystyle \begin{aligned} \mathcal{S}\big( {\mathbf{f}}^{(1)} \big) = C_1 \cdot {\mathbf{b}}^{(0)} \, [s] + C_2 \cdot {\mathbf{b}}^{(0)} \, [x] + C_3 \cdot {\mathbf{b}}^{(0)} \, [1 + x] \,. \end{aligned} $$
    (5.400)

    Since \([1+x]\) is the symbol of \(\log (1+x)\), \({\mathbf {f}}^{(1)}(x)\) would diverge at \(x=-1\) unless the coefficients of \([1+x]\) vanish. The finiteness at \(x=-1\) thus implies

    $$\displaystyle \begin{aligned} C_3 \cdot {\mathbf{b}}^{(0)} = 0 \,, \end{aligned} $$
    (5.401)

    which fixes \(b_1^{(0)} = 4\). We now have everything we need to write down the symbol of the solution up to any order in \(\epsilon \). For \(f_1\), for instance, we obtain

    $$\displaystyle \begin{aligned} \begin{aligned} & \mathcal{S}\left( f_1 \right) = 4 \, [] - 2 \, \epsilon \big( 2\, [s] + [x] \big) + 2 \, \epsilon^2 \big( 2 \, [s,s] + [s,x] + [x,s] \big) \\ & \, - 2 \epsilon^3 \Big( 2 \, [s,s,s] + [s,s,x] + [s,x,s] + [x,s,s] - [x,x,x]+[x,x,1+x] \Big) \\ & \quad + \mathcal{O}\big(\epsilon^4\big) \,. \end{aligned} \end{aligned} $$
    (5.402)
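    The recursion (5.399) is straightforward to implement. The sketch below (a minimal Python/sympy implementation, not part of the original solution, in which a symbol is stored as a dictionary mapping words of letters to vectors of coefficients) reproduces the expansion (5.402) up to weight 3.

    import sympy as sp
    from collections import defaultdict

    # Constant matrices of Eq. (5.394) and weight-0 boundary values b^(0).
    C1 = -sp.eye(3)
    C2 = sp.Matrix([[-1, 0, -2], [0, 0, 0], [0, 0, -1]])
    C3 = sp.Matrix([[1, 2, 2], [0, 0, 0], [0, 0, 0]])
    letters = [('s', C1), ('x', C2), ('1+x', C3)]
    b0 = sp.Matrix([4, -1, -1])

    # Weight 0: the empty word () with coefficient vector b^(0).
    symb = {0: {(): b0}}
    for w in range(1, 4):
        new = defaultdict(lambda: sp.zeros(3, 1))
        for word, coeff in symb[w - 1].items():
            for letter, Ck in letters:
                new[word + (letter,)] += Ck*coeff          # Eq. (5.399)
        symb[w] = {word: v for word, v in new.items() if any(c != 0 for c in v)}

    # Symbol of f_1 order by order in epsilon, to be compared with Eq. (5.402).
    for w in range(4):
        print(w, {word: v[0] for word, v in symb[w].items() if v[0] != 0})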
  6. (f)

    The dependence on s is given by an overall factor of \((-s)^{-\epsilon }\),Footnote 7 which is fixed by dimensional analysis. We thus define

    $$\displaystyle \begin{aligned} \mathbf{f}(s,x;\epsilon) = (-s)^{-\epsilon} \, \mathbf{h}(x;\epsilon) \,, \end{aligned} $$
    (5.403)

    where \(\mathbf {h}(x;\epsilon )\) does not depend on s. We expand \(\mathbf {h}(x;\epsilon )\) around \(\epsilon =0\) as in (5.395). The coefficients of the expansion \({\mathbf {h}}^{(w)}(x)\) satisfy the recursive DEs

    $$\displaystyle \begin{aligned} {} \partial_x \, {\mathbf{h}}^{(w)}(x) = \left[ \frac{C_2}{x} + \frac{C_3}{1+x} \right] \cdot {\mathbf{h}}^{(w-1)}(x) \,. \end{aligned} $$
    (5.404)

    Equation (5.396) implies that

    (5.405)

    which give us the boundary values \({\mathbf {e}}^{(w)} = {\mathbf {h}}^{(w)}(1)\) for the second and third integral. We have determined above that \(e_1^{(0)}=4\), and we leave the remaining \(e_1^{(w)}\)’s as free parameters. Integrating both sides of Eq. (5.404) gives

    $$\displaystyle \begin{aligned} {} {\mathbf{h}}^{(w)}(x) = \int_1^x \frac{{\mathrm{d}} x^{\prime}}{x^{\prime}} \, C_{2} \cdot {\mathbf{h}}^{(w-1)}(x^{\prime}) + \int_1^x \frac{{\mathrm{d}} x^{\prime}}{1+x^{\prime}} \, C_{3} \cdot {\mathbf{h}}^{(w-1)}(x^{\prime}) + {\mathbf{e}}^{(w)} \,, \end{aligned} $$
    (5.406)

    starting from \({\mathbf {h}}^{(0)} = {\mathbf {e}}^{(0)}\). For arbitrary values of the undetermined \(e_1^{(w)}\)’s, the second integral on the RHS of Eq. (5.406) diverges at \(x=-1\). We can thus fix the remaining boundary values by requiring that

    $$\displaystyle \begin{aligned} \lim_{x\to -1} C_3 \cdot {\mathbf{h}}^{(w)}(x) = 0 \,, \end{aligned} $$
    (5.407)

    order by order in \(\epsilon \). For instance, at weight one we have that

    $$\displaystyle \begin{aligned} C_3 \cdot {\mathbf{h}}^{(1)}(x) = \left( e_1^{(1)} , \, 0, \, 0 \right)^{\top} \,. \end{aligned} $$
    (5.408)

    The finiteness at \(x=-1\) thus fixes \(e_1^{(1)} = 0\). Iterating this up to weight 3 yields

    $$\displaystyle \begin{aligned} {} \begin{aligned} & h_1(x;\epsilon) = 4 + \epsilon \big[ - 2 \, \log(x) \big] + \epsilon^2 \bigg[ - \frac{4 \pi^2}{3} \bigg] \\ & \quad + \epsilon^3 \bigg[ 2 \, \text{Li}_3(-x) - 2 \, \log(x) \, \text{Li}_2(-x) \\ & \ + \frac{1}{3} \log^3(x) - \log^2(x) \log(1 + x) + \frac{7 \pi^2}{6} \log(x) - \pi^2 \log(1 + x) - \frac{34}{3} \zeta_3 \bigg] \\ & \quad + \mathcal{O}\big(\epsilon^4\big) \,. \end{aligned} \end{aligned} $$
    (5.409)
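    The first step of this iteration can be carried out explicitly. The following sympy sketch (not part of the original solution; the weight-one boundary constants are kept symbolic) performs the weight-one integration in Eq. (5.406) and shows that the logarithms cancel in \(C_3 \cdot {\mathbf{h}}^{(1)}(x)\), so that regularity at \(x=-1\) constrains only the boundary constants, in agreement with Eq. (5.408) once the bubble boundary values from Eq. (5.405) are inserted.

    import sympy as sp

    x, xp = sp.symbols('x xp', positive=True)
    e1, e2, e3 = sp.symbols('e1 e2 e3')      # weight-1 boundary constants e^(1)

    C2 = sp.Matrix([[-1, 0, -2], [0, 0, 0], [0, 0, -1]])
    C3 = sp.Matrix([[1, 2, 2], [0, 0, 0], [0, 0, 0]])
    h0 = sp.Matrix([4, -1, -1])              # h^(0) = e^(0)

    # One iteration of Eq. (5.406) at weight one.
    integrand = (C2*h0)/xp + (C3*h0)/(1 + xp)
    h1 = integrand.applyfunc(lambda f: sp.integrate(f, (xp, 1, x))) + sp.Matrix([e1, e2, e3])
    print(h1.T)                        # [e1 - 2*log(x), e2, e3 + log(x)]
    print(sp.expand((C3*h1).T))        # [e1 + 2*e2 + 2*e3, 0, 0]: the logs cancel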
  7. (g)

    Equation (4.55) is related to Eq. (5.409) through \(h_1 = \epsilon ^2 s t \mathrm {e}^{\epsilon \gamma _{\text{E}}} (-s)^{\epsilon } F_4\). Up to transcendental weight two the equality is manifest. At weight three we find agreement after applying the identities

    $$\displaystyle \begin{aligned} {} \begin{aligned} & \mathrm{Li}_2\left(-\frac{1}{x}\right) = - \mathrm{Li}_2(-x)-\frac{1}{2} \log^2(x)-\zeta_2 \,, \\ & \mathrm{Li}_3\left(-\frac{1}{x}\right) = \mathrm{Li}_3(-x)+\frac{1}{3!} \log^3(x)+\zeta_2 \log(x) \,, \end{aligned} \end{aligned} $$
    (5.410)

    for \(x>0\), which we may prove by the symbol method, as discussed in Sect. 4.4.4.
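    The inversion identities (5.410) can also be checked numerically. A short mpmath sketch (not part of the original solution), evaluated at an arbitrary sample point \(x>0\), confirms them to high precision.

    import mpmath as mp

    mp.mp.dps = 30
    x = mp.mpf('3.7')                      # any x > 0
    z2 = mp.zeta(2)

    # First identity of Eq. (5.410): the difference should vanish.
    print(mp.polylog(2, -1/x) + mp.polylog(2, -x) + mp.log(x)**2/2 + z2)
    # Second identity of Eq. (5.410): the difference should vanish.
    print(mp.polylog(3, -1/x) - mp.polylog(3, -x) - mp.log(x)**3/6 - z2*mp.log(x))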