Factorization of covariant Feynman graphs for the effective action

We prove a neat factorization property of Feynman graphs in covariant perturbation theory. The contribution of a graph to the effective action is written as a product of a massless scalar momentum integral that depends only on the basic graph topology, and a background-field dependent piece that contains all the information on spin, gauge representations, masses, etc. We give a closed expression for the momentum integral in terms of four graph polynomials whose properties we derive in some detail. Our results can also be useful for standard (non-covariant) perturbation theory.

Recently, we have proposed a new covariant formalism based on the heat kernel [36] which is applicable at any loop order [37]. In this approach, the effective action is calculated in the background field (BF) method in a manifestly covariant way. Recall that in the BF approach, one separates all fields into backgrounds and fluctuations. One then writes down all Feynman graphs to the desired loop order, where the edges of the graphs correspond to BF dependent propagators of fluctuations, and the vertices to BF dependent couplings. Notice that these graphs are connected, one-particle irreducible, and have no external lines.
In this work we will further develop this idea and show that the contribution of a given Feynman graph G to the effective Lagrangian can be factorized as in eq. (1.1). We will always use an index i to number the edges of the graph (1 ≤ i ≤ P), and the index n to number its vertices (1 ≤ n ≤ V). The τ_i are Schwinger parameters, and the x_n spacetime points associated to the vertices of the graph. The z_i are position variables dual to propagator momenta (in analogy to the vertex positions x_n being dual to vertex momenta). Working with the z_i variables greatly simplifies the treatment of fermions and derivatives of fields. SF denotes the symmetry factor of the graph, and N_CFL is the number of closed fermion loops. The first factor appearing in eq. (1.1), the function I^{G_0}(τ_i; p_n; z_i), is essentially the result of the momentum integral of the graph, which we will evaluate in closed form. The second one, a gauge-invariant V-point function Γ^G(τ_i; x_n; k_i), depends on the background fields and can be readily computed in terms of heat kernel coefficients and field-dependent couplings. We will see that the momentum integral I^{G_0} depends only on the topology of the graph G. Feynman graphs constitute so-called labeled graphs, which are graphs that carry an identity (or label) at each edge and each vertex. The label of an edge is the type of fluctuation field propagating through it, and the label of a vertex is the corresponding coupling. Notice that several edges (vertices) can carry the same label. While these labels do enter the calculation of Γ^G, the calculation of I^{G_0} only depends on the reduced graph G_0, which is obtained from the full graph by removing all the labels. In particular, all labeling information (mass, spin, and all other quantum numbers such as gauge representations, flavor, extra derivatives, etc.) can be completely ignored. An example is given in figure 1.
This paper is organized as follows. We give in sec. 2 a brief review of the construction of Γ, and also describe some of its properties glossed over in [37]. Section 3 is devoted to a formal derivation of eq. (1.1) and some comments on the relevance of our findings for standard (non-covariant) perturbation theory. We devote section 4 to the explicit calculation of I (that is, we perform the momentum integration in closed form), expressing the result entirely in terms of graph-theoretic quantities in sec. 5. This is the most technical section, and we have moved some of the graph theory and linear algebra background into two appendices. In sec. 6 we discuss how symmetries of the graph act on the factors I and Γ, and in sec. 7 we give our conclusions.

Figure 1: A number of Feynman graphs (a) that give rise to the common reduced graph (b). The gauge and Lorentz invariant 2-point functions Γ are all different, while the momentum integral I is common to all of them thanks to the same underlying reduced graph.

Table 1: The basic propagators. See eq. (2.2) for the formal definition of B.

The gauge-invariant function Γ
We define the V -point function Γ G for a given graph G as follows.
• For each edge of G write the expressions given in tables 1 and 2. We refer to these expressions somewhat loosely as propagators, even though they are only part of the latter. The basic propagators (without derivatives) are given by the expressions in table 1. Covariant derivatives of fields are indicated by a star and are subject to the rules given in table 2. Two or more derivatives (on either end of the propagator) are included in an obvious way, and so are derivatives on fermion propagators (the latter do not appear in renormalizable theories).
• For each vertex, write the appropriate field-dependent coupling C_n(x_n), obtained from differentiation of the interaction L_int with respect to the fluctuations. We stress that derivatives only appear in the propagators, never in the couplings.
The function Γ is then simply the product of all of these factors, eq. (2.1), where the A_i denote the expressions in tables 1 and 2. Gauge and Lorentz indices have to be contracted as dictated by the graph; the result is a total gauge and Lorentz singlet. Notice that there is no integration hidden in eq. (2.1). A comment on the arrows is in order. Edges corresponding to propagators involving the momentum k need an orientation. For complex fields, we conventionally take this to be the direction of particle-number flow. For real fields, this direction is arbitrary but needs to be used consistently in the calculation of Γ and I. Reversing the direction of edge i in effect corresponds to z_i → −z_i in I and k_i → −k_i in Γ, yielding the same result in eq. (1.1). The reversal of the arrow of real edges thus constitutes an isomorphism of graphs (see section 6).
The B function appearing in the Feynman rules is formally defined in eq. (2.2) as the ratio of two functions. It is a matrix-valued function analytic in t near t = 0 (the non-analyticities cancel between numerator and denominator). Here, X is a BF dependent mass that always features a universal piece X_spin = −S^{μν} F^a_{μν} t_a, where the S^{μν} are the Lorentz generators in the appropriate representation, but may include other contributions. In this work we will not be concerned with the evaluation of B (the heat kernel expansion), which has been extensively studied elsewhere [2-4, 7, 36, 38]; a Mathematica notebook has been included in the arXiv version of ref. [37] for an automated calculation of the expansion. In the following we give some useful identities that were omitted in ref. [37].
From the definition eq. (2.2) we have, for real τ, the conjugation property B(τ, X; x, y)† = B(τ, X†; y, x) (eq. (2.3)). For scalar, fermion and vector fields one has, respectively, eqs. (2.4)-(2.6), and therefore eq. (2.7). For real fields we have in addition that X and B are purely real. Further useful identities can be derived for the fermion propagator, which has an alternative form due to the identity eq. (2.10). This immediately gives the Hermiticity property (2.12). Eqns. (2.11) and (2.12) are valid only under the k and τ integrations.

Factorization of Feynman graphs
In terms of the V-point function Γ^G defined in the previous section, the contribution of the graph G to the effective action is given by eq. (3.1) [37]. Notice the double integration over both propagator momenta and vertex positions. We have defined the quantity Î^{G_0} in eq. (3.2). The factor of (−i)^P originates from the Wick rotation of the Schwinger parameters t_i = −iτ_i, the factor of i^V from the vertices (see footnote 2), and the factor of i^{−1} is due to the fact that a Feynman graph G actually computes iS^G_eff. We have furthermore defined eq. (3.3), as well as the so-called incidence matrix of the reduced graph G_0, which is a V × P matrix. We adopt the convention that edges which start and end at the same vertex (self-loops) have a column of zeros. An example of a graph and its incidence matrix is provided in fig. 2.
For the purpose of the following discussion, it is convenient to rewrite eq. (3.1) in a Dirac notation, eq. (3.4).

Figure 2: An example of a three-loop graph and its incidence matrix.
That is, one obtains eq. (3.5). In ref. [37] the interior product in eq. (3.5) was computed in the |x_n; k_i⟩ representation. One of the main results of the present paper is that it is much more advantageous to compute it in the |x_n; z_i⟩ representation, as the z_i integration turns into a simple derivative, analogous to the x_n integration. This works because Γ^G(τ_i; x_n; k_i) is polynomial in the k_i; Fourier-transforming the latter to the dual z_i variables therefore gives a distribution. The momentum-space loop integration is now entirely contained in Î^{G_0}|x_n; z_i⟩, which only depends on the reduced graph G_0 and is therefore very universal. We will derive a closed expression for this quantity in sec. 4. To prove eq. (1.1), consider Î^{G_0}(τ_i)|p_n; z_i⟩, which contains an overall momentum-conserving delta function, so we parametrize it in terms of a new function, eq. (3.9). Since we are after the local part of the effective action, we expand in the vertex momenta p_n; hence, switching from the p_n to the x_n variables is equally simple. In summary, both the x_n and z_i integrations have turned into simple derivatives, and in particular eq. (3.5) implies eq. (1.1). It remains to compute the function I^{G_0}; this is done in the next section, and the result can be found in eq. (4.16).
The presented method can be modified to work for standard QFT perturbation theory (without the covariant background field method and the local derivative expansion). A standard multi-loop Feynman graph gives rise to an expression where T^G is a tensor (polynomial in the propagator momenta k_i), consisting essentially of the nontrivial numerators of the propagators as well as polarization vectors and coupling constants, and the p_n are the ingoing momenta from external lines. After introduction of Schwinger parameters, we recognize this as the convolution eq. (3.12). Repeating the same reasoning as above gives the factorized form eq. (3.13), where I^{G_0} is given by eq. (4.16). The formalism thus completely bypasses the usual tensor reduction (e.g. à la Passarino-Veltman). At this point, let us summarize the main differences to the standard approach:
• Using Schwinger instead of Feynman parameters renders the momentum integration in I^{G_0}(τ_i; p_n; z_i) mass-independent. It is noteworthy that the integration over the overall scale Σ_i τ_i is always trivially possible at the end of the calculation, effectively returning to Feynman parametrization after the loop momentum integration.
• Switching from the k_i to the z_i integration in the convolution in eq. (3.12) results in the explicit factorization eq. (3.13) and renders the loop momentum integration independent of T^G. It also sidesteps the parametrization of the propagator momenta k_i in T^G in terms of loop and external momenta.
The remaining integration over the Feynman parameters is more difficult compared to the effective action (as one does not expand in external momenta). Powerful mathematical methods have been developed over the years to deal with this problem, see [39] and references therein.
In this section we will perform the loop integrations exactly; the result will be precisely the function I^{G_0} of eq. (1.1). As shown in sec. 3, this function only depends on the reduced graph G_0 (whose information is entirely contained in its incidence matrix).
In this section and section 5, "graph" will always refer to the reduced graph, and the subscript G 0 is generally omitted.

The fundamental spaces of the incidence matrix
Momentum conservation relates the propagator momenta k_i and the incoming vertex momenta p_n via the incidence matrix, eq. (4.1). The rank of B for a connected graph is V − 1 (in general V − C for graphs with C connected components). Each column has one +1 and one −1, which implies that Σ_n B_ni = 0; this is just overall momentum conservation. Another way of saying this is that (1, 1, …, 1)^⊺ is in the kernel of B^⊺ (a.k.a. the cokernel of B). If the graph is connected, this is the whole cokernel. The kernel of B gives the set of edges whose momenta sum to zero for zero external momenta, that is, precisely the loops. The kernel of B is referred to as the cycle space (loop space would be a more physics-based terminology). Some properties of this space are reviewed in app. A. The matrix B has nullity L (the number of loops) and the matrix B^⊺ has nullity C (the number of connected components). The equality of row and column rank gives P − L = V − C (eq. (4.2)). For a connected graph (C = 1) this is simply P − L = V − 1.
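The rank and kernel statements above are easy to check numerically. The following sketch uses a hypothetical 3-vertex, 4-edge graph of our own (not the one of figure 2) and the sign convention +1 at an edge's head and −1 at its tail; both choices are illustrative, not taken from the paper.

```python
import numpy as np

# Hypothetical example graph: V = 3 vertices, P = 4 directed edges;
# edge i runs from tail[i] to head[i] (a triangle plus one doubled edge).
tail = [0, 1, 2, 0]
head = [1, 2, 0, 1]
V, P = 3, 4

# Incidence matrix B (V x P): +1 at the head of an edge, -1 at its tail.
B = np.zeros((V, P))
for i, (a, b) in enumerate(zip(tail, head)):
    B[a, i] -= 1.0
    B[b, i] += 1.0

L = P - V + 1                          # number of loops for a connected graph
rank = np.linalg.matrix_rank(B)

assert rank == V - 1                   # rank B = V - C with C = 1
assert P - L == V - 1                  # eq. (4.2) for C = 1
assert np.allclose(B.sum(axis=0), 0)   # each column has one +1 and one -1
# (1, ..., 1)^T spans the cokernel (ker B^T) for a connected graph:
assert np.allclose(np.ones(V) @ B, 0)
```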

Loop integrations
It remains to calculate the function Î^{G_0}(τ_i; p_n; z_i) = Î^{G_0}(τ_i)|p_n; z_i⟩. In the remainder of this section, we suppress the subscript G_0. Inserting a complete set of |x_n; k_i⟩ states, using eq. (3.2) and performing the trivial x_n integrations, one gets an expression which features the usual momentum-conserving delta function for each vertex. The map B is not surjective, and, unless L = 0 (tree graphs), neither is it injective; hence the relation eq. (4.1) cannot be inverted (the k_i cannot be written in terms of the p_n). This motivates the introduction of loop momenta q_1, …, q_L and the restriction to some set of independent external momenta such as p_1, …, p_{V−1} (i.e., one parametrizes the kernel and image of B). This is of course the usual procedure of loop integrations. While we certainly could proceed this way, we opt for a different approach which does not single out any edges or vertices and hence is more symmetric. Besides being more transparent, it also simplifies the treatment of graph isomorphisms and automorphisms, which typically permute vertices and edges. Keeping symmetries manifest in the resulting expressions can be very helpful in many contexts (see e.g. sec. 6).
The key technical trick that facilitates this manifestly symmetric treatment of vertices and edges is a Gaussian regularization of the delta functions, which allows us to directly perform the Gaussian integration over the (Wick-rotated) k momenta without the need for a parametrization in terms of loop momenta. Defining the symmetric matrix T in eq. (4.5), we can express the result as eq. (4.6). We now proceed to carefully take the limit ε → 0, expecting the behavior of eq. (4.7). We thus expand T^{−1} in powers of ε, eq. (4.8). Comparing coefficients of ε^{−1} in the equation TT^{−1} = 𝟙 one gets B^⊺BQ = 0, and hence, since B and B^⊺B have the same kernel, BQ = 0 (4.9). (One way to quickly prove these facts is to diagonalize the B^⊺B/ε part of T, which is of the form diag(λ_1/ε, …, λ_{P−L}/ε, 0, …, 0) with λ_i ≠ 0, from which the small-ε behaviors of det T and T^{−1} follow immediately.)
At order ε^0, we have B^⊺BQ′ + T_0Q = 𝟙 and hence, by use of eq. (4.9), B^⊺BQ′B^⊺ = B^⊺, which implies that BQ′B^⊺ acts as the identity on im B. Since it is also zero on ker B^⊺, we have BQ′B^⊺ = P_{im B} (4.10), where P_{im B} is the projector onto the image of B. Eqns. (4.9) and (4.10) are important identities that we will use repeatedly in what follows. Using them in eq. (4.6), we see that the terms quadratic in p give eq. (4.11); in the second line we have used that ker B^⊺ for a connected graph is one-dimensional and spanned by (1, …, 1)^⊺. We can now safely take the limit ε → 0 in eq. (4.6), producing the expected delta function as well as some extra finite terms. Here we defined some convenient shorthands, and we recall that Q, Q′ and Q′′ were defined in eq. (4.8). In the second term in the exponential in eq. (4.6) we have used eq. (4.9). From this, we extract the function I from eq. (3.9) by dropping the delta function, arriving at eq. (4.16). Finally, we can derive two more interesting relations, eqs. (4.17) and (4.18), which easily follow from the ε expansion of TT^{−1} = 𝟙 and the relations (4.9) and (4.10).
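The identities (4.9) and (4.10) can be verified symbolically on a small example. The sketch below assumes the regularized matrix has the form T(ε) = B^⊺B/ε + T_0 suggested by footnote 6, with T_0 = diag(τ_i); the toy graph and the numerical values chosen for the τ_i are our own illustrative assumptions.

```python
import sympy as sp

# Toy 2-loop graph (hypothetical): 3 vertices, 4 edges;
# convention: +1 at an edge's head, -1 at its tail.
B = sp.Matrix([[-1,  0,  1, -1],
               [ 1, -1,  0,  1],
               [ 0,  1, -1,  0]])
eps = sp.symbols('epsilon', positive=True)
T0 = sp.diag(1, 2, 3, 5)          # some fixed Schwinger parameters tau_i

# Assumed form of the regularized matrix, cf. footnote 6:
T = (B.T * B) / eps + T0
Tinv = T.inv()

# Leading coefficients of the small-eps expansion T^{-1} = Q + eps*Q' + ...
Q = Tinv.applyfunc(lambda f: sp.limit(f, eps, 0))
Qp = (Tinv - Q).applyfunc(lambda f: sp.limit(f / eps, eps, 0))

P_imB = B * B.pinv()              # projector onto the image of B

assert (B * Q).is_zero_matrix                              # eq. (4.9): BQ = 0
assert sp.simplify(B * Qp * B.T - P_imB).is_zero_matrix    # eq. (4.10)
```

Both identities come out exactly (all entries are rational numbers here), independently of the chosen τ_i.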
Eq. (4.18) shows that the matrix U is actually not independent, and that the vertex momenta p enter I only in the combination Rp.

Simple examples at tree-level and one-loop
Although certainly not the main objective of the formalism, it can be applied to tree-level graphs, for instance for integrating out heavy fields diagrammatically (as opposed to simply applying the equations of motion). Since B has trivial kernel for a tree graph, B^⊺B has full rank and is invertible, and one simply has B^+ = (B^⊺B)^{−1}B^⊺. At one-loop order, consider the polygon graph from figure 3. One easily finds the relevant quantities from the definition and the explicit form of the incidence matrix. We give the expressions for R and U in terms of their polynomials; the first term only contributes a total derivative to the effective Lagrangian.
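The tree-level statement is quickly checked numerically. The sketch below uses a hypothetical 4-vertex star tree (our own example, with the head/tail sign convention of the earlier sketches) and verifies that B^⊺B is invertible and that (B^⊺B)^{−1}B^⊺ coincides with the Moore-Penrose pseudo-inverse.

```python
import numpy as np

# Hypothetical star tree: edges all pointing away from the central vertex 0.
tail, head, V, P = [0, 0, 0], [1, 2, 3], 4, 3
B = np.zeros((V, P))
for i, (a, b) in enumerate(zip(tail, head)):
    B[a, i] -= 1.0
    B[b, i] += 1.0

# For a tree, ker B is trivial, so the P x P matrix B^T B is invertible
# and B^+ = (B^T B)^{-1} B^T is a left inverse of B ...
Bplus = np.linalg.inv(B.T @ B) @ B.T
assert np.allclose(Bplus @ B, np.eye(P))
# ... and coincides with the Moore-Penrose pseudo-inverse:
assert np.allclose(Bplus, np.linalg.pinv(B))
```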
In the next section we will analyze these quantities in the general case.
5 Characterization of ∆, Q, R and U

The quantities ∆, Q, R and U appearing in I, eq. (4.16), are entirely encoded in the ε expansion of det T and T^{−1}, which provides a simple and efficient way of calculating them directly; this is likely the preferred method for a possible future automation of the formalism. In this section we collect some of their properties. The quantities ∆ and p^⊺Up are in fact well known: ∆ is called the first Symanzik polynomial, and ∆(p^⊺Up + Σ_i m_i²τ_i) the second Symanzik polynomial. Some of their graph-theoretic properties were studied in [40], see also ref. [39]. The polynomial ∆z^⊺Qz was studied in [41] with the objective of simplifying calculations in quantum electrodynamics.

Graph theory basics
Let us first define a few more graph-theoretic terms. A spanning tree of a connected graph (C = 1) is a connected tree subgraph that contains all vertices. It is easily shown that a spanning tree always has L edges less than the original graph. Removing further edges from a spanning tree, the graph becomes disconnected (C′ > 1); in this case one obtains so-called spanning C′-forests (spanning 2-forests, spanning 3-forests, etc.). Examples of a spanning tree and a spanning forest for the graph of figure 2 are shown in figure 4.
A cut-vertex is a vertex that, if removed, splits the graph into two disconnected subgraphs with non-empty edge sets. A cut-edge or bridge is an edge that, if removed, splits the graph into two disconnected subgraphs. All edges of a tree graph are bridges. Notice that the two vertices connected to a bridge are also cut-vertices (unless they are not connected to any other edge). A graph containing cut-edges and cut-vertices is shown in figure 5.
Moreover, edge cuts are sets of edges that, if removed, disconnect the graph. Bridges are edge cuts with just one element. A minimal edge cut (i.e., one none of whose proper subsets is an edge cut) is called a bond. For instance, the edges {e_1, e_2, e_3} and {e_3, e_4} in figure 2 are bonds, and so are any two edges of the polygon graph in figure 3.
Finally, we define an equivalence relation on edges as follows. For each edge e, let C(e) be the set of all cycles c for which e ∈ c. Define e_1 ∼ e_2 iff e_1 = e_2 or C(e_1) = C(e_2) ≠ ∅. Loosely speaking, equivalent edges always occur together in cycles. We show in appendix A that the following definition is equivalent: e_1 ∼ e_2 iff e_1 = e_2 or {e_1, e_2} form a bond. We call two equivalent edges allies and the equivalence classes alliances. The edges 3 and 4 in the graph in figure 2 form an alliance, and so do all edges of the polygon graph in figure 3.
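The bond characterization can be tested directly: a bond is an edge cut none of whose proper subsets is itself an edge cut. The sketch below checks, for a triangle (the smallest polygon), that every pair of edges is a bond while no single edge is a cut, matching the statement above.

```python
from itertools import combinations

def n_components(V, edge_list):
    # number of connected components, via union-find over V vertices
    parent = list(range(V))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for a, b in edge_list:
        parent[find(a)] = find(b)
    return len({find(v) for v in range(V)})

def is_edge_cut(V, edges, cut):
    rest = [e for i, e in enumerate(edges) if i not in cut]
    return n_components(V, rest) > n_components(V, edges)

def is_bond(V, edges, cut):
    # minimal edge cut: no proper subset is itself an edge cut
    if not is_edge_cut(V, edges, cut):
        return False
    return not any(is_edge_cut(V, edges, sub)
                   for r in range(1, len(cut))
                   for sub in combinations(cut, r))

# For a polygon (here a triangle), any two edges form a bond ...
V, edges = 3, [(0, 1), (1, 2), (2, 0)]
assert all(is_bond(V, edges, pair) for pair in combinations(range(3), 2))
# ... while no single polygon edge is an edge cut (the polygon has no bridges):
assert not any(is_edge_cut(V, edges, (i,)) for i in range(3))
```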

Explicit expressions in terms of subgraphs
Let us define the so-called edge-based Laplacian L ≡ B^⊺B. Our approach has to be contrasted with the usual one in terms of the (vertex-based) Laplacian BB^⊺. Since we will not be making use of the latter in this work, we will refer to L simply as the Laplacian in what follows.
Moreover, denote by L^{(ℓ_1…ℓ_K)}, with ℓ_1 < ℓ_2 < ⋯ < ℓ_K, the Laplacian with rows and columns ℓ_1, …, ℓ_K deleted. These matrices are themselves the Laplacians of the subgraphs obtained by removing the indexed edges from the original graph. Directly from the definition eq. (4.5), we can then write an explicit but slightly cumbersome expression for T^{−1}. Here, adj stands for the adjugate matrix (the transpose of the cofactor matrix), and the bar means to restore the deleted rows and columns and fill them with zeroes. As stated above, the matrices L^{(ℓ_1…ℓ_K)} are Laplacians of subgraphs, and hence the sums over the ℓ_i can be interpreted as sums over subgraphs. Let us characterize the type of subgraphs that contribute at a given K. The nullity of L (the same as that of B) is L; let L′ denote the nullity of L^{(ℓ_1…ℓ_K)}. Note that L′ ≤ L, since we are removing edges and potentially opening loops. Another thing that can happen is that the graph becomes disconnected; let us denote by C′ the number of connected components. Using the standard relation eq. (4.2) as well as V′ = V then gives, for a connected original graph, L′ = L − K + C′ − 1. By standard linear algebra results, eq. (5.4), the adjugate of an N-dimensional square matrix M has rank N if rank M = N, rank one if rank M = N − 1, and vanishes if rank M < N − 1. The first nonzero term in the numerator thus occurs when L′ ≤ 1; in the denominator, the first nonzero term occurs for L′ = 0, i.e. K = L, C′ = 1. Hence, ∆, Q, R and U get contributions from the subgraphs listed in table 3. In particular, one finds expressions in which the monomial ω̄ collects the τ parameters of the deleted edges.

The Kirchhoff polynomial ∆
The determinant of L_S can be computed with the Cauchy-Binet formula, det L_S = Σ_n (det B_S^{(n)})² (5.12), where B_S^{(n)} is the square matrix obtained from B_S by removing the nth row. One can show by induction that det B_S^{(n)} = ±1 [42], and hence, by eq. (5.12), det L_S = V (5.13). As a side remark, the proof implies that it does not matter whether one uses the directed or the undirected graph's incidence matrix.
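Eq. (5.13) is easy to test on random trees. The sketch below grows hypothetical random labeled trees (each new vertex attached to a random earlier one) and checks both det L_S = V and the fact that every (V−1)-square minor of B_S has determinant ±1.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_tree_incidence(V):
    # random labeled tree: attach vertex n to a random earlier vertex
    B = np.zeros((V, V - 1))
    for n in range(1, V):
        a = int(rng.integers(0, n))
        B[a, n - 1] -= 1.0
        B[n, n - 1] += 1.0
    return B

for V in range(2, 9):
    B_S = random_tree_incidence(V)
    L_S = B_S.T @ B_S                         # edge-based Laplacian of S
    assert round(np.linalg.det(L_S)) == V     # eq. (5.13): det L_S = V
    # Cauchy-Binet: det L_S = sum_n (det B_S^{(n)})^2, each term equal to 1
    dets = [np.linalg.det(np.delete(B_S, n, axis=0)) for n in range(V)]
    assert all(round(d * d) == 1 for d in dets)
```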
The Kirchhoff polynomial has the following properties:
1. It is a homogeneous polynomial of degree L in the τ_i parameters.
2. All the coefficients of the monomials appearing in ∆ are equal to one.
3. It is at most linear in each parameter τ_i.
4. The τ parameter of a cut-edge does not appear in ∆. (Sometimes the Kirchhoff polynomial is defined instead as ∆′ = Π_i τ_i Σ_S ω_S^{−1}; the latter may be given as the determinant of any cofactor of the matrix BT_0B^⊺. In this work we will reserve the name Kirchhoff polynomial for ∆.)

5. If the graph contains a cut-vertex, ∆ factorizes as ∆ = ∆_1∆_2, where ∆_1 and ∆_2 are the respective polynomials of the separated graphs.
6. The τ_i parameters of an alliance only enter as their sum.
7. It is an invariant polynomial with respect to the automorphism group of the reduced graph.
Properties 1-4 are elementary consequences of eq. (5.11). To see property 5, notice that the spanning trees S are in one-to-one correspondence with the pairs of spanning trees (S_1, S_2) of the subgraphs separated by the cut-vertex. Property 6 will be shown in sec. 5.2.2, and property 7 is clear from eq. (5.10): the determinant is invariant for any value of ε, and hence the coefficients of the ε expansion have to be separately invariant.
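Several of these properties can be seen concretely by building ∆ as a sum over spanning trees of the monomials of the deleted-edge τ parameters. The graph below, two triangles joined by a bridge, is a hypothetical example of ours chosen so that it has a cut-edge and two cut-vertices.

```python
import numpy as np
from itertools import combinations

def n_components(V, edge_list):
    # number of connected components, via union-find over V vertices
    parent = list(range(V))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for a, b in edge_list:
        parent[find(a)] = find(b)
    return len({find(v) for v in range(V)})

# Hypothetical graph: two triangles joined by a bridge (edge 3); L = 2.
V = 6
edges = [(0, 1), (1, 2), (2, 0),      # triangle 1: edges 0, 1, 2
         (2, 3),                      # bridge:     edge 3
         (3, 4), (4, 5), (5, 3)]      # triangle 2: edges 4, 5, 6
P = len(edges)
L = P - V + 1

def delta(tau):
    # Kirchhoff polynomial as used in the text: sum over spanning trees S
    # of the monomial of the L tau parameters NOT contained in S.
    total = 0.0
    for S in combinations(range(P), V - 1):
        if n_components(V, [edges[i] for i in S]) == 1:
            total += np.prod([tau[i] for i in range(P) if i not in S])
    return total

tau = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0])
d = delta(tau)

assert np.isclose(delta(2 * tau), 2**L * d)   # property 1: homogeneous, degree L
tau_b = tau.copy(); tau_b[3] = 99.0           # property 4: bridge tau is absent
assert np.isclose(delta(tau_b), d)
# property 5: factorization at the cut-vertices into the triangle polynomials
assert np.isclose(d, (tau[0] + tau[1] + tau[2]) * (tau[4] + tau[5] + tau[6]))
```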

The matrix Q
The matrix Q given in eq. (5.6) contains the Laplacians of subgraphs with P′ = P − (L − 1) = V edges, and hence the corresponding incidence matrices B_C are square matrices. According to eq. (5.4), the rank of adj B_C must be one. It is shown in app. A that adj B_C is given in terms of the vector v_C that spans the one-dimensional kernel of B_C, that is, the vector describing the single loop of C; its components are ±1 for edges belonging to the cycle and zero otherwise. Thus one arrives at eq. (5.16). Only the relative sign for different i, j matters: the entries have opposite signs if the corresponding edges are oppositely oriented ("antiparallel"). Therefore one obtains eq. (5.17), where, as previously, the bar instructs us to fill missing rows/columns (edges not belonging to the graph C) with zeros. We notice furthermore that v_C is in the kernel of B, which confirms our previous result eq. (4.9).
The objects v_C are elements of the so-called cycle space (see app. A); in particular, they are simple (one-loop) cycles. Since there are usually several one-loop graphs C that give rise to the same simple cycle c, one can simplify the expression for Q further by summing over the simple cycles c instead of over the graphs C. One gets eq. (5.19), where ∆_c is the Kirchhoff polynomial for the graph obtained from the original one by contracting the cycle c to a point. The equivalence of eqns. (5.18) and (5.19) is proven in appendix A. The polynomial ∆z^⊺Qz has been described previously in [41], where it was dubbed the cycle polynomial.
If there are any cut vertices, then the matrix Q is block diagonal, since any simple cycle can only belong to one of the two subgraphs separated by the cut-vertex.
The representation of Q in eq. (5.19) allows for a simple proof of the alliance property of ∆ (property 6 in the list above). For two allied edges there are two possibilities: either they are both in c, or both absent from it. If they are both in c, they cannot appear in ∆_c, because by construction the contracted graph does not contain them; in this case they clearly appear as a sum in ∆ (from the term c^⊺T_0c). If they do not appear in c, consider the contracted graph: it again features an alliance with the same two edges, so one can proceed recursively, q.e.d. It follows from the alliance property of ∆ and ∆_c that Q also has the same property.
Let us summarize the essential characteristics of Q:
4. Q can be written as a sum over simple cycles, eq. (5.19).
5. The τ parameters of alliances only enter as their sum.
7. If there are any cut-vertices, Q becomes block-diagonal.

The matrix R
Starting from eq. (5.7), we compute R explicitly as follows. We first notice that for a tree graph such as S, the matrix L_S is in fact regular, det L_S = V, see eq. (5.13), so one obtains eq. (5.23). Acting with B^+_S on B_S shows that the former is a left inverse of B_S, and it therefore satisfies the first relation in eq. (B.2) (with P_{im B_S^⊺} = 𝟙). Since it also satisfies the second relation in eq. (B.2), it is equal to the Moore-Penrose pseudo-inverse (see app. B for details).
It remains to calculate the MP pseudo-inverse explicitly. In view of momentum conservation, we expect that [(B_S)^+p]_i equals minus the momentum flowing through propagator i.
For a tree level graph such as S, this momentum can be found in a rather simple way.
For instance, consider the connected tree graph in figure 6a. The momentum flowing through edge i can be expressed through either the ingoing or the outgoing momenta (recall that all p_n enter the vertices), where T^1_i and T^2_i denote the disjoint tree graphs resulting from removing edge i from S, and where by definition the edge i points from T^1_i to T^2_i. Either of the two choices defines a left inverse for B_S, and many more are possible by using momentum conservation. We show in app. B which choice corresponds to the MP inverse. Filling in the missing rows of k_S with zeroes using the bar notation, we can write R as a sum over spanning trees, eq. (5.23). Let us summarize some of the properties of R:
1. R is a right pseudo-inverse of B, i.e., BR = P_{im B}, eq. (4.10).
2. Notice that R is not the MP inverse of B; however, B^+ ≡ P_{im B^⊺}R is. This follows from the characterization of the MP inverse in appendix B, see eq. (B.3).
4. R can be written as a sum over spanning trees, eq.(5.23).
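The momentum-routing interpretation of the pseudo-inverse can be checked numerically on a tree. In the sketch below (a hypothetical path tree; sign conventions as in the earlier sketches, so the sign of the solved momenta is a convention), k = B^+p solves the momentum-conservation relation Bk = p for any set of vertex momenta summing to zero, and reversing the orientation of an edge flips the sign of the corresponding momentum.

```python
import numpy as np

# Hypothetical path tree: 0 -> 1 -> 2 -> 3
V, P = 4, 3
edges = [(0, 1), (1, 2), (2, 3)]
B = np.zeros((V, P))
for i, (a, b) in enumerate(edges):
    B[a, i] -= 1.0
    B[b, i] += 1.0

Bplus = np.linalg.pinv(B)

# Vertex momenta summing to zero lie in im B for a connected graph,
# so B k = p can be solved by k = B^+ p:
rng = np.random.default_rng(2)
p = rng.normal(size=V)
p -= p.mean()                   # enforce sum_n p_n = 0
k = Bplus @ p
assert np.allclose(B @ k, p)

# Reversing the orientation of edge 0 flips the sign of k_0 only:
B2 = B.copy(); B2[:, 0] *= -1
k2 = np.linalg.pinv(B2) @ p
assert np.isclose(k2[0], -k[0]) and np.allclose(k2[1:], k[1:])
```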

Replacing B^+_S by any arbitrary left inverse, for instance k′_{S,i} = p_{T^1_i}, just adds a total derivative to the effective Lagrangian. The corresponding function I′_{G_0}(τ_i; p_n; z_i) is invariant under graph automorphisms only modulo momentum conservation.

The matrix U
The matrix U is closely related to the so-called second Symanzik polynomial. According to table 3, it can be written as a sum over spanning two-forests, i.e., tree subgraphs with two connected components containing all the vertices of the original graph. We first observe that eq. (5.13) generalizes to spanning two-forests.

Symmetries of graphs

In this section we discuss symmetries of Feynman graphs: we first define isomorphisms and automorphisms of labeled graphs, then study the action of these transformations on the I and Γ factors and, in the case of the latter, point out some computational simplifications that they imply; thirdly, we study complex conjugation of graphs in our formalism. We start by defining isomorphisms of graphs [43]. An isomorphism between two Feynman graphs G and G′ is a label-preserving bijection ϕ = (ϕ_v, ϕ_e, ϕ_o), where ϕ_v (ϕ_e) maps vertices (edges) of G to vertices (edges) of G′, and ϕ_o is a flip of the orientation of some subset of the real edges, such that eq. (6.1) holds. Here, the S_{ϕ_{v,e,o}} are the matrix representations of ϕ_{v,e,o}. The diagonal matrix S_{ϕ_o} has entries (S_{ϕ_o})_ii = −1 for flipped edges and (S_{ϕ_o})_ii = +1 for all other edges. "Label-preserving" simply means that ϕ_v only maps vertices to vertices of the same kind, and similarly for edges. Two isomorphic graphs give the same value when applying the Feynman rules.
An automorphism (or symmetry) of a graph G is an isomorphism between G and itself. In particular, an automorphism satisfies eq. (6.2). Automorphisms form a group, Aut(G), which is a subgroup of S_V × (S_P ⋉ C_2^P), where the first factor is the group of vertex permutations, and the second factor the semidirect product of edge permutations and orientation flips of real edges. It is clear that the underlying reduced graph G_0 is also invariant under the same operations. Notice that if G contains a real self-loop e_k, Aut(G) always contains an element that flips the orientation of this edge and does nothing else, i.e., S_{ϕ_v} = 𝟙, S_{ϕ_e} = 𝟙, and (S_{ϕ_o})_{ij} = δ_{ij}(−1)^{δ_{ik}}. We define the symmetry factor of the graph as SF = |Aut(G)|^{−1}, that is, the reciprocal of the order of the automorphism group. Nontrivial automorphism groups are more common for graphs without any external legs. Notice that x_n and p_n naturally transform under S_V, τ_i under S_P, and z_i and k_i under S_P ⋉ C_2^P. Since the graph remains unchanged under the action of Aut(G), the two factors in our master formula eq. (1.1) are separately invariant.
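For small unlabeled reduced graphs, |Aut(G)| can be found by brute force over the group S_V × (S_P ⋉ C_2^P). The sketch below is our own toy implementation, assuming all vertices and edges carry identical labels (so only the incidence structure constrains the maps); the example graphs are hypothetical.

```python
from itertools import permutations, product

def count_automorphisms(n_vertices, edges, real):
    # edges: list of (tail, head); real[i]: True if edge i is a real field,
    # i.e. its orientation may be flipped. All labels assumed identical here.
    count = 0
    E = len(edges)
    for sv in permutations(range(n_vertices)):
        for se in permutations(range(E)):
            for flips in product([1, -1], repeat=E):
                ok = True
                for i, (a, b) in enumerate(edges):
                    if flips[i] == -1 and not real[i]:
                        ok = False
                        break
                    img = (sv[a], sv[b]) if flips[i] == 1 else (sv[b], sv[a])
                    if img != edges[se[i]]:
                        ok = False
                        break
                if ok:
                    count += 1
    return count

# Toy vacuum bubble: two vertices joined by two identical real edges.
# Aut(G) contains the edge swap, the vertex swap combined with flipping
# both orientations, and their product: |Aut| = 4, so SF = 1/4.
n = count_automorphisms(2, [(0, 1), (0, 1)], [True, True])
assert n == 4

# Three identical real edges between two vertices ("theta" topology):
# |Aut| = 2 * 3! = 12.
assert count_automorphisms(2, [(0, 1)] * 3, [True] * 3) == 12
```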
Consider first the example of figure 7a. Since the two graphs are equal, they must give the same Γ. This can be verified by applying the transformation to Γ directly, eq. (6.7): the two expressions are the same due to the cyclic property of the trace. This is the only nontrivial symmetry, and therefore the symmetry factor is |Aut(G)|^{−1} = 1/2. Next, consider the graph from Yang-Mills theory in figure 7b. It has two nontrivial automorphisms, given by
{v_1, v_2; e_1, e_2, e_3} → {v_2, v_1; −e_1, −e_2, −e_3} , (6.8)
{v_1, v_2; e_1, e_2, e_3} → {v_1, v_2; e_2, e_1, e_3} . (6.9)
In the first transformation, the two vertices are permuted and the gauge boson edges are flipped. The graph is indeed invariant under this, and hence one obtains a symmetry factor of 1/2. Computation of symmetry factors is not the only purpose of studying symmetries. If Γ contains terms that are in the same orbit under the symmetry group Aut(G), it suffices to compute them only once. For instance, in the example of eq. (6.7), the cross terms
tr C(x_1)(i /D_1)B(τ_1; x_1, x_2)C(x_2) /k_2 B(τ_2; x_2, x_1) e^{−m²(τ_1+τ_2)} ↔ tr C(x_1) /k_1 B(τ_1; x_1, x_2)C(x_2)(i /D_2)B(τ_2; x_2, x_1) e^{−m²(τ_1+τ_2)} , (6.12)
transform into each other under the automorphism and only have to be computed once. This simplification can be quite convenient in sufficiently complicated graphs.
Given a graph G, we define its conjugate graph G* by reversing all arrows (this may imply a change of vertex labels, see the example below). This transformation acts as complex conjugation on the functions Γ and I appearing in eq. (1.1), eq. (6.13). The relation for Γ follows from the identities given in section 2, while the relation for I can be seen from eq. (4.16). If the graph G is isomorphic to G*, it is called self-conjugate or simply real. Real graphs contribute real operators to the effective action. Graphs G that are not isomorphic to G* are called complex; they contribute complex operators to the effective action.
As an example, consider the graph in Yukawa theory with a complex scalar field shown in eq. (6.15); again, we code the labels by colors. The two vertices of the graph are complex conjugates of each other and hence receive different labels (red and green). Notice that these labels change under complex conjugation due to the reversal of the arrows. The label-preserving map between the two graphs is an isomorphism and shows that the graph is self-conjugate. Taking the complex conjugate of the explicit expression for Γ, using eqns. (2.7) and (2.12), shows that the second line is indeed the expression we would obtain from the graph on the right-hand side of eq. (6.15), confirming eq. (6.13).

Conclusions
We have elaborated on the recently proposed covariant perturbation theory formalism of ref. [37], casting the contribution of each multi-loop Feynman graph into a factorized form, eq. (1.1), which allowed us to perform the momentum integral in closed form. The latter is expressed in terms of one polynomial ∆ and three matrices Q, R, and U. These matrices, when multiplied by ∆, are also polynomial in the Schwinger-Feynman parameters. We have expressed them both in terms of the graph's incidence matrix and as sums over subgraphs. The two representations are complementary in that they highlight different properties of the four quantities. We remark that the parametrization in terms of the incidence matrix is more suitable for a possible future automation of the formalism. Our findings can also be applied to standard perturbation theory for momentum-space correlation functions, as explained in section 3, bypassing the usual tensor reduction algorithms.
We furthermore have discussed automorphisms of graphs and shown that they leave each factor of eq. (1.1) separately invariant.

The cycles of the graph of figure 2 are such that any three of them are linearly independent and form a basis for ker B.
A note on terminology. In graph theory, the term "cycle space" usually refers to the kernel of B over the two-element field Z_2. This space has dimension L and consists of 2^L vectors (for instance, the complete cycle space of the graph in figure 2 consists of the vectors in eq. (A.2) modulo 2, as well as the zero vector and the vector (1, 1, 0, 0, 1, 1)^⊺). Throughout this work, cycle space means ker B (over the reals), and we will always use the term cycle to refer to elements of the cycle space normalized such that their entries equal ±1.
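The real cycle space ker B can be computed directly from the incidence matrix. A minimal sketch for a hypothetical graph, two vertices joined by three parallel edges, so that the loop number is L = P − V + 1 = 2, using sympy:

```python
import sympy as sp

# Incidence matrix B: rows = vertices, columns = edges.  All three edges
# are directed from v1 to v2 (entry -1 where an edge leaves, +1 where it
# enters).
B = sp.Matrix([
    [-1, -1, -1],   # v1
    [ 1,  1,  1],   # v2
])

# The (real) cycle space is ker B; its dimension equals the loop number L.
basis = B.nullspace()
print(len(basis))        # 2, matching L = P - V + 1 = 3 - 2 + 1
for c in basis:
    print(c.T)           # each basis vector satisfies B c = 0
```

Note that sympy returns a basis over the rationals; the cycles in the sense of the text are obtained by rescaling basis elements so that their entries equal ±1.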
The alliances defined in section 5.1 can be identified by looking at any cycle basis. Two edges e_i, e_j that are not bridges are allies iff for every element c in the basis, c_i = c_j mod 2. Removing e_i and e_j from the graph reduces the dimension of the cycle space by one (it must decrease because e_i and e_j are not bridges, and it cannot decrease by more than one, since in that case there would exist a cycle that contains one edge but not the other). Using P′ = P − 2, L′ = L − 1, V′ = V, we have C′ = C + 1; hence the number of disconnected components increases by one and {e_i, e_j} form a bond. Conversely, if {e_i, e_j} form a bond, removing these edges results in C′ = C + 1 and hence L′ = L − 1, which implies that there cannot be any cycle that contains one edge but not the other, as otherwise we would have had L′ = L − 2. This shows the equivalence of the two characterizations of alliances in section 5.1.

Let us now prove that eq. (5.19) follows from eq. (5.18). We need to show the statement for any given simple cycle c. Fix any i with c_i ≠ 0, that is, any edge that participates in c. Denote by j ≠ i the remaining edges of c. Then the graphs C are in one-to-one correspondence with the spanning trees S that do not contain i but contain all j (the correspondence being removing/adding edge i). Owing to the contraction and deletion relations [39], this amounts to first deleting edge i and then contracting the edges j. But this is the same as contracting the whole loop.

Consider now the case of trivial kernel but nontrivial cokernel (an injective but not surjective map). It is easily seen that M⊺M is regular, and that (M⊺M)^{−1}M⊺ is a left inverse for M. It is readily checked by either eq. (B.1) or eq. (B.2) that it is equal to the MP inverse M^+.
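The parity criterion for allies above can be checked on a small hypothetical example: a directed triangle v_1 → v_2 → v_3 → v_1 (edges e_1, e_2, e_3) with an extra edge e_4 parallel to e_1. Here {e_2, e_3} is a bond (cutting both isolates v_3), while {e_1, e_4} is not (removing both leaves the graph connected).

```python
import sympy as sp

# Incidence matrix of the triangle with a doubled edge:
# e1: v1->v2, e2: v2->v3, e3: v3->v1, e4: v1->v2.
B = sp.Matrix([
    [-1,  0,  1, -1],   # v1
    [ 1, -1,  0,  1],   # v2
    [ 0,  1, -1,  0],   # v3
])
basis = B.nullspace()   # integer cycle basis for this example

def allies(i, j):
    # Ally criterion from the text: c_i = c_j (mod 2) for every basis cycle.
    return all((c[i] - c[j]) % 2 == 0 for c in basis)

print(allies(1, 2))  # True:  {e2, e3} form a bond
print(allies(0, 3))  # False: {e1, e4} do not
```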
In eq. (5.25) we wrote particular left inverses for the incidence matrix B_S of some connected tree graph S. We now simply multiply any of these left inverses with P_{im B_S} from the right to get the Moore-Penrose pseudo inverse. Let us choose k^1_{S,i} = p_{T^1_i}, and call this left inverse X. Assume without loss of generality that the ingoing momenta are those of the first V^1_i vertices. Then the ith row of the left inverse X reads (−1, −1, . . . , −1, 0, . . . , 0). Multiplying by the projector then yields the ith row of the matrix B_S^+.
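The statement that any left inverse composed from the right with the projector onto im B_S yields the Moore-Penrose inverse can be checked numerically. This is a minimal sketch on a hypothetical tree, the path v_1 - v_2 - v_3; the particular left inverse X below is chosen by hand for this example and is not the specific one of eq. (5.25).

```python
import numpy as np

# Incidence matrix of the tree v1 - v2 - v3 (edges e1: v1->v2, e2: v2->v3).
# For a connected tree the incidence matrix is injective, so left inverses
# exist (the kernel of a general incidence matrix is the cycle space, which
# is trivial for a tree).
M = np.array([[-1.,  0.],
              [ 1., -1.],
              [ 0.,  1.]])

# A hand-picked left inverse: X @ M = identity.
X = np.array([[-1., 0., 0.],
              [ 0., 0., 1.]])

# Orthogonal projector onto the column space of M (M is injective, so
# M^T M is invertible).
P = M @ np.linalg.inv(M.T @ M) @ M.T

# Left inverse times projector gives the Moore-Penrose pseudo inverse.
Mp = X @ P
print(np.allclose(Mp, np.linalg.pinv(M)))  # True
```

The check is non-circular: the projector is built from M alone, and `np.linalg.pinv` serves only as the reference value.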

Figure 4: A spanning tree and a spanning 2-forest for the graph of figure 2.

Figure 5: A graph with cut-edges (in green) and cut-vertices (in red).

Table 3: The types of subgraphs (such as T_1 ⊕ T_2) contributing to the various ∆, Q, R and U. They all have the same number of vertices as the original graph, only fewer edges.

∆ is equal to the so-called Kirchhoff polynomial of the graph,

∆ = Σ_{spanning trees S} ω_S .   (5.11)

In order to show the equivalence of the two, we can use the Cauchy-Binet formula to compute the determinant of L_S. Since tr(Q T_0) = L,
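The spanning-tree sum of eq. (5.11) can be cross-checked against the weighted matrix-tree theorem on a hypothetical example, the triangle graph, using sympy. Here ω_S is taken, for illustration, as the product of the weights w_i of the edges contained in S; the determinant of the reduced weighted Laplacian then reproduces the Kirchhoff polynomial.

```python
import sympy as sp

# Triangle graph v1 -> v2 -> v3 -> v1 with symbolic edge weights w1, w2, w3.
w1, w2, w3 = sp.symbols('w1 w2 w3', positive=True)
B = sp.Matrix([
    [-1,  0,  1],   # v1
    [ 1, -1,  0],   # v2
    [ 0,  1, -1],   # v3
])
W = sp.diag(w1, w2, w3)
L = B * W * B.T                  # weighted graph Laplacian

# Matrix-tree theorem: delete one row and the matching column (any choice
# gives the same determinant).
delta = sp.expand(L[1:, 1:].det())

# The triangle has three spanning trees, each consisting of two edges:
kirchhoff = sp.expand(w1*w2 + w2*w3 + w3*w1)
print(delta == kirchhoff)        # True
```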

Figure 6: The definition of the subgraphs T_i^{1,2} appearing in R.

The second transformation, eq. (6.9), consists in a permutation of the first two edges, with the remaining components left unchanged. The symmetry factor in this case is |Aut(G)|^{−1} = 1/4. Finally, consider the graph in figure 7c from Yang-Mills theory with a scalar. It has a single nontrivial automorphism,

{v_1, v_2, v_3; e_1, e_2, e_3, e_4} → {v_1, v_2, v_3; −e_1, e_2, e_3, e_4} ,   (6.11)

and hence a symmetry factor of 1/2.

Moore-Penrose pseudo inverses. Consider an N × M real-valued matrix M with possibly nontrivial kernel and cokernel. The Moore-Penrose pseudo inverse of M is defined via the conditions [44]

MM^+M = M ,  M^+MM^+ = M^+ ,  (M^+M)^⊺ = M^+M ,  (MM^+)^⊺ = MM^+ .   (B.1)

These conditions fix M^+ uniquely. An equivalent formulation can be given in terms of the projectors onto the column and row spaces of M, denoted here by P_{im M} and P_{im M⊺} respectively. Either of the following pairs of equations is equivalent to the four equations in eq. (B.1):

M^+M = P_{im M⊺} ,  M^+ = M^+ P_{im M} .   (B.2)
MM^+ = P_{im M} ,  M^+ = P_{im M⊺} M^+ .   (B.3)

Proof. We will first show that each of (B.1), (B.2), (B.3) defines a unique matrix. Uniqueness of M^+ from eq. (B.1) is a classic result [44]. To see that eq. (B.2) fixes M^+ uniquely, consider any v ∈ im M. By standard linear algebra, there exists a unique w ∈ im M⊺ with v = Mw. For two matrices X, Y satisfying the relations in eq. (B.2), calculate Xv = XMw = w and Yv = YMw = w; hence Xv = Yv for all v ∈ im M, and therefore X P_{im M} = Y P_{im M}. The second relation in eq. (B.2) then implies X = Y. The uniqueness from eq. (B.3) is analogous. To see the existence of M^+, and to prove that all three definitions give the same M^+, one considers the singular value decomposition of M,

M = O_L ( M_diag 0 ; 0 0 ) O_R ,

where M_diag is the diagonal matrix containing the rank(M) nonzero singular values of M, the zeroes stand for rectangular zero matrices of the appropriate dimensions, and O_{L,R} are orthogonal matrices. One then defines the M × N matrix [45]

M^+ = O_R^⊺ ( M_diag^{−1} 0 ; 0 0 ) O_L^⊺ .

It is a trivial matter to show that M^+ defined in this way satisfies all equations in eqs. (B.1), (B.2) and (B.3); hence they all define the same matrix M^+ and are therefore equivalent characterizations of the MP inverse, q.e.d.

The characterizations eq. (B.2) and eq. (B.3) are useful in practice. For instance, consider any left pseudo-inverse X of M, i.e., any matrix that satisfies the first relation in eq. (B.2). Then M^+ = X P_{im M}.
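As a quick numerical illustration of eq. (B.1), numpy's built-in pseudo inverse can be checked against the four Penrose conditions for a rank-deficient matrix (a sketch; the matrix below is an arbitrary example with both nontrivial kernel and cokernel).

```python
import numpy as np

# A 2x3 matrix of rank 1: both ker M and coker M are nontrivial.
M = np.array([[1., 2., 3.],
              [2., 4., 6.]])
Mp = np.linalg.pinv(M)            # Moore-Penrose pseudo inverse

# Verify the four defining conditions of eq. (B.1).
ok = (np.allclose(M @ Mp @ M, M) and
      np.allclose(Mp @ M @ Mp, Mp) and
      np.allclose((Mp @ M).T, Mp @ M) and
      np.allclose((M @ Mp).T, M @ Mp))
print(ok)  # True
```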

Table 2: Propagators of derivatives of fields. See eq. (2.2) for the formal definition of B.
We now show that the two factors of eq. (1.1) are separately invariant. The invariance of I(τ_i; p_n; z_i) in eq. (4.16) is manifest due to the symmetric treatment of the momenta k_i and p_n in section 4. Let us give some examples. Consider the one-loop graph in Yukawa theory (with a real scalar coupling to a fermion-antifermion pair) in figure 7a. It has a single nontrivial automorphism, given by the following transformation (consisting of permuting both the vertices and the edges):

{v_1, v_2; e_1, e_2} → {v_2, v_1; e_2, e_1} .