Scattering amplitude annihilators

Several polynomial differential operators are shown to annihilate the dimension-neutral YM and GR tree scattering amplitudes. In particular, we prove a conjecture of Loebbert, Mojaza and Plefka from their investigation of a hidden conformal symmetry in GR. The amplitudes are defined using a recursion based on factorization of residues. (In the current version we assume without proof that a solution to the recursion exists.)


Introduction
The scattering amplitudes of YM and GR are natural objects in physics. As purely mathematical objects, they are interesting special functions of many variables. The dimension-neutral tree amplitudes are rational functions. (Readers new to such amplitudes may want to take a look at Appendix A for some explicit formulas.) As with other special functions, it is natural to ask what differential equations they satisfy; in other terminology, what is the annihilator ideal? The goal of this paper is to prove that a number of second order differential operators annihilate the dimension-neutral YM and GR tree amplitudes.

This paper is motivated by recent work of Loebbert, Mojaza and Plefka [1]. They investigated a potential hidden conformal symmetry of GR tree amplitudes in general spacetime dimension d, including but not limited to d = 4. This led them to conjecture certain auxiliary identities required for their computations. They verified these identities for amplitudes with n = 3, 4, 5, 6 legs and conjectured that they would continue to hold for n ≥ 7. In this paper we prove these identities for all n ≥ 3, by induction on n. (In [1] these auxiliary identities are not stated directly as differential equations for the GR amplitudes, but involve certain symmetrization prescriptions. The relation between the formulation of the conjecture in [1] and the present paper is explained in Appendix B.)

In this paper we exclusively work with the dimension-neutral version of the tree amplitudes; the spacetime dimension d will be absent. These amplitudes are rational functions on a complex vector space of dimension 2n(n − 2), with simple poles along a constellation of linear subspaces. The coordinates on this vector space will be denoted k_ij, c_ij, e_ij, with subscripts running over an index set I with |I| = n; the linear relations (1) among the coordinates cut the dimension of this vector space down to 2n(n − 2). The amplitudes are actually polynomial in the c_ij and e_ij variables.
To obtain the amplitudes in d spacetime dimensions, set k_ij = k_i · k_j, c_ij = k_i · ε_j, and e_ij = (1 − δ_ij) ε_i · ε_j, where k_i and ε_i are d-dimensional vectors, the momentum and polarization of the i-th particle. See Remark 1 for details on this.
The amplitudes for n = 3, 4 are in Appendix A, which also contains references. A recursion for general n is in Definition 14, and it defines the amplitudes for the purpose of this paper. It is based on the well-known factorization of residues, which comes in various guises. (An application of this factorization is the well-known BCFW recursion [2].) This recursion determines the amplitudes uniquely, as we will see, but it is on the face of it not obvious that a solution exists. Though this may seem overly cautious to some readers, we state existence as an assumption, and work under this assumption in this paper.
Assumption A. A sequence of rational functions satisfying the recursion in Definition 14 exists. We refer to them as YM and GR tree amplitudes.
Unfortunately we do not have a reference that proves this directly for the dimension-neutral version of the amplitudes. For a self-contained account of YM and GR tree amplitudes in spacetime dimension d = 4, see our [6], where the amplitudes are constructed as minimal model brackets.
There are then 2n + 1 first order differential operators that are well-known to annihilate the amplitudes; in the logic of this paper they are part of Definition 14. In our notation these annihilators are the operators X_i, Y_i, Z of (2), where h = 1 and s = 4 − n for YM, respectively h = 2 and s = 2 for GR. The D_kij, D_cij, D_eij are partial derivatives, but modified to be compatible with the linear relations (1). Geometrically speaking, they only take derivatives along the subspace defined by (1). They are explicitly defined in Lemma 3. Polynomiality in the c_ij and e_ij variables implies further obvious annihilators. The interpretation of all the annihilators mentioned so far, in terms of momenta and polarizations, is well-known, see Remark 1 below. But this paper is about the following second order annihilators.

Theorem 1. Define the YM and GR tree amplitudes by the recursion in Definition 14 and make Assumption A. The differential operators A_i, B_i, C_i defined in (3) annihilate the tree amplitudes for all n = |I| ≥ 3 and i ∈ I.

The operators D_kij, D_cij, D_eij are defined in Lemma 3; they are partial derivatives, but with corrections that make them compatible with the relations (1).
Variants of B_i, C_i appear in the conjecture in [1], see Appendix B. Our proof that they annihilate is by induction on n and uses the A_i to close the argument.
Beware that the YM amplitude requires a cyclic order on I, whereas the GR amplitude is permutation invariant and I is an unordered set.
Remark 1 (Interpretation of the amplitudes and of some annihilators). The amplitudes in d ≥ 4 spacetime dimensions are obtained from the dimension-neutral ones by setting k_ij = k_i · k_j and c_ij = k_i · ε_j and e_ij = (1 − δ_ij) ε_i · ε_j, where k_i and ε_i are elements of a d-dimensional complex vector space, and · (dot) is a nondegenerate symmetric bilinear pairing. The momentum vectors k_i are subject to k_i · k_i = 0 and momentum conservation Σ_i k_i = 0, and the polarization vectors ε_i are subject to k_i · ε_i = 0. Assume k_i ≠ 0 here, so ε_i lies in a subspace of dimension d − 1. For every i, the YM amplitude is linear in ε_i, the GR amplitude is quadratic in ε_i; this is witnessed by the annihilator Y_i. The physical polarization is extracted as follows. For every k ≠ 0 with k · k = 0 abbreviate P(k) = {ε | k · ε = 0}/⟨k⟩, which is a vector space of dimension d − 2, and observe that · induces a nondegenerate symmetric bilinear pairing on P(k). Then, separately for every i and fixed k_i ≠ 0:

• As a function of ε_i, the YM amplitude descends to a linear form on P(k_i).
• As a function of ε_i, the GR amplitude descends to a quadratic form on P(k_i), that is, a linear form on the symmetric tensor product S²P(k_i). One can decompose this into the trace and the traceless part relative to the symmetric bilinear pairing on P(k_i). This is witnessed by X_i and is known as gauge invariance. The amplitudes are also homogeneous jointly in all momenta; this is witnessed by Z. Hence the d-dimensional amplitudes are sections of certain vector bundles on the projective variety cut out by k_i · k_i = 0 and Σ_i k_i = 0. In this paper k_ij, c_ij, e_ij and (1) are primary. Occasional references to momenta and polarizations are informal.
There do exist algorithms to determine the annihilator of a rational function, as a left ideal in the Weyl algebra D of polynomial differential operators. To illustrate, in Macaulay2 one can compute the annihilator of x/y using [3, 4]:

    D = QQ[x,y,Dx,Dy, WeylAlgebra => {x=>Dx, y=>Dy}];
    loadPackage "Dmodules";
    RatAnn(x,y)

the answer being ∂_x², x∂_x − 1, y∂_y + 1. Such algorithms use methods from the theory of D-modules to reduce the problem to computing the kernel of a map of finitely presented left D-modules, done using Gröbner bases for D-modules. A simpler approach, to find annihilators of some fixed order in the derivatives, is to make an ansatz with polynomial coefficients, reducing the problem to computing the kernel of a map of modules over the polynomial ring, done using ordinary Gröbner bases. Even simpler, to find annihilators of some fixed order both in the variables and in the derivatives, only the kernel of a linear map between finite-dimensional vector spaces has to be computed, done using Gaussian elimination. It is not clear whether such methods can be used in practice for tree amplitudes, and note that we are interested in constructing annihilators for all n.
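The simplest of these methods, a fixed-order ansatz reduced to Gaussian elimination, can be sketched in a few lines. The following Python/sympy fragment is an illustration not taken from the paper; the degree and order cutoffs, and all names, are arbitrary choices, applied to the same function x/y as above.

```python
import sympy as sp

# Ansatz: operators sum_{m,a} c_{m,a} * m * d^a with monomial coefficients m
# of degree <= 1 and derivative order <= 1 (illustrative cutoffs). Applying
# the ansatz to x/y and clearing denominators gives a linear system for the
# unknown constants c; its kernel is the space of annihilators of this shape.
x, y = sp.symbols('x y')
f = x / y

coeff_monoms = [sp.Integer(1), x, y]   # polynomial coefficients, degree <= 1
derivs = [(0, 0), (1, 0), (0, 1)]      # identity, d/dx, d/dy

cs = sp.symbols(f'c0:{len(coeff_monoms) * len(derivs)}')
applied = sum(
    cs[i * len(derivs) + j] * m * sp.diff(f, (x, ax), (y, ay))
    for i, m in enumerate(coeff_monoms)
    for j, (ax, ay) in enumerate(derivs)
)

# The numerator over the common denominator must vanish identically in x, y;
# each monomial coefficient is a linear equation in the c's.
num, den = sp.fraction(sp.together(applied))
eqs = sp.Poly(sp.expand(num), x, y).coeffs()
mat = sp.Matrix([[sp.diff(e, c) for c in cs] for e in eqs])
kernel = mat.nullspace()   # basis of the annihilators within the ansatz
```

With these cutoffs the kernel is 2-dimensional, spanned by the vectors encoding x∂_x − 1 and y∂_y + 1 from the Macaulay2 output; the third generator ∂_x² lies outside this ansatz, since it is of second order in the derivatives.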
It would be interesting to understand what the full annihilator of the amplitudes is. In a Weyl algebra, every left ideal is finitely generated. (Somewhat surprisingly, by a theorem of Stafford, every left ideal in a Weyl algebra over R or C can in fact be generated by just two elements.) Rational functions are holonomic; this means that their annihilator in the Weyl algebra is in some sense as big as allowed by the Bernstein inequality. (The Bernstein inequality says that over R or C, and for a proper left ideal I ⊆ D in the Weyl algebra D in x_1, ..., x_m, ∂_1, ..., ∂_m, the dimension of D/I, defined to be the degree of a suitable Hilbert polynomial, is ≥ m. Note that a rational function g/f always has m first order annihilators of the form f(∂_i g) − g(∂_i f) − gf∂_i ∈ D, but they are in general of high polynomial degree.) The description of special functions in terms of their annihilator in the Weyl algebra is of course not limited to rational functions, cf. [5].

Some further remarks. One only has a vector bundle away from the singular locus. It would be interesting to see whether the annihilators A_i, B_i, C_i, or suitable linear combinations of them, imply annihilators for the d-dimensional amplitudes; the latter would be differential operators on the vector bundle on which the amplitudes live, possibly taking values in another vector bundle. The fact that A_i annihilates is vacuous for YM but not for GR amplitudes. For GR and n ≥ 4, the annihilator A_i is not in the left ideal generated by the obvious annihilators listed above Theorem 1. It suffices to provide a rational function f that is annihilated by all obvious annihilators but not by A_i, built from a nonzero homogeneous function h(k) whose degree is such that Zf = 0; such an f is polynomial in the c and e variables and annihilated by X_i, Y_i, Z, but it is not annihilated by A_i. To check this, use Lemma 3.
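The generic first order annihilators f(∂_i g) − g(∂_i f) − gf∂_i of g/f mentioned above are easy to verify symbolically. A quick Python/sympy check, for one arbitrary sample choice of f and g (any polynomials work):

```python
import sympy as sp

x, y = sp.symbols('x y')

# Sample polynomials; the operator below annihilates g/f for any choice.
f = x * y
g = x + y**2
h = g / f

# Apply the operator f*(d_i g) - g*(d_i f) - g*f*d_i to h = g/f, for i = x, y.
for v in (x, y):
    applied = (f * sp.diff(g, v) - g * sp.diff(f, v)) * h - g * f * sp.diff(h, v)
    assert sp.simplify(applied) == 0
```

The identity holds because g f ∂_i(g/f) = g(f ∂_i g − g ∂_i f)/f, which coincides with (f ∂_i g − g ∂_i f)(g/f).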
We prove Theorem 1 by brute force, using recursion on n. This uses the recursive definition of amplitudes mentioned above, by which the amplitudes have simple poles along a constellation of linear subspaces, and each residue is equal to a product of two amplitudes with lower n. Proving the theorem comes down to checking that these differential operators 'annihilate the poles'. (Naively, a second order operator hitting a simple pole can generate a third order pole.) Essentially, the recursion for the amplitudes implies a recursion for their annihilators. Hence the main computations in this paper do not actually involve rational functions, instead they are at Weyl algebra level.
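The pole-raising phenomenon in the parenthetical remark is visible in the simplest possible example: a second derivative turns a simple pole into a third order pole. A one-line Python/sympy check:

```python
import sympy as sp

x = sp.symbols('x')

# A second order operator hitting a simple pole can raise the pole order by
# two: d^2/dx^2 applied to 1/x produces a third order pole.
assert sp.diff(1 / x, x, 2) == 2 / x**3
```

So for a second order operator to annihilate a function with simple poles, the third and second order pole terms it generates must cancel; this is what 'annihilating the poles' amounts to.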

Preliminaries
Notation. All vector spaces and algebras are over the complex numbers. For every finite-dimensional vector space X we denote:

  X*          the dual vector space
  R_X         the commutative algebra of polynomials X → C
  D_X         the Weyl algebra on X
  Frac(R_X)   the field of rational functions on X

These things are defined independently of coordinates. This is important because the spaces we encounter do not have a canonical coordinate system, and we work with a linearly dependent set of coordinates. Canonically:

  X* ⊆ R_X    the linear functionals, we refer to them as coordinates
  X ⊆ D_X     the first order constant coefficient differential operators

As vector spaces, R_X ≃ SX* and D_X ≃ SX* ⊗ SX, where S is the symmetric tensor algebra. Here SX are the constant coefficient differential operators.
Coordinate dependent definitions. Even though we will never commit to a particular basis, we recall the coordinate dependent definitions. By a coordinate we mean an element of X*. A coordinate system is a basis x_1, ..., x_m ∈ X* of the dual space. The polynomial ring is then R_X = C[x_1, ..., x_m]. Let ∂_1, ..., ∂_m ∈ X be the dual basis. Then D_X is the associative algebra with identity generated by the variables x_1, ..., x_m, ∂_1, ..., ∂_m modulo the two-sided ideal generated by the relations [x_i, x_j] = 0, [∂_i, ∂_j] = 0, [∂_i, x_j] = δ_ij.

Linear maps. If X, Y are vector spaces then every linear map α : X → Y canonically induces several other linear maps, among them the adjoint α* : Y* → X*. We often find it convenient to specify a linear map by specifying the adjoint Y* → X*.
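For readers who want to experiment, the defining relations of the Weyl algebra can be checked by realizing the generators as operators on polynomials. The following Python/sympy sketch (illustrative, not part of the paper's formalism) verifies [∂_i, x_j] = δ_ij on a sample polynomial:

```python
import sympy as sp

x, y = sp.symbols('x y')

def mul(v):
    """Multiplication operator by the coordinate v."""
    return lambda p: sp.expand(v * p)

def d(v):
    """Constant coefficient derivative operator along v."""
    return lambda p: sp.diff(p, v)

def comm(A, B):
    """Commutator [A, B] of two operators, acting on a polynomial."""
    return lambda p: sp.expand(A(B(p)) - B(A(p)))

p = x**3 * y + 7 * x * y**2          # arbitrary test polynomial
assert comm(d(x), mul(x))(p) == p    # [d_x, x] = 1
assert comm(d(x), mul(y))(p) == 0    # [d_x, y] = 0
assert comm(d(x), d(y))(p) == 0      # [d_x, d_y] = 0
```

The same pattern (operators defined by their commutators with the coordinates, checked on polynomials) recurs throughout the paper, e.g. in Lemma 3.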
Direct sums. For a direct sum of finite-dimensional vector spaces, there are canonical isomorphisms R_{X⊕Y} ≃ R_X ⊗ R_Y and D_{X⊕Y} ≃ D_X ⊗ D_Y that we frequently use. The right hand sides refer to the tensor product of algebras, where all elements of D_X commute with all elements of D_Y. Canonical inclusions such as D_X ⊆ D_{X⊕Y} are understood.

Index sets. Instead of a standard index set such as {1, ..., n} we work with finite index sets denoted I, hence n is replaced by |I|. For YM, a cyclic ordering of the elements of I is required. In many calculations, I is a disjoint union I = J ⊔ K with |J|, |K| ≥ 2, and often (4a) is assumed. Sometimes we need index sets with a distinguished element, always denoted •. We abbreviate J• = J ⊔ {•} and K• = K ⊔ {•}. If J•, K• have a cyclic order then J ⊔ K acquires a cyclic order. Conversely, if I has a cyclic order and (4a) respects this, then J•, K• inherit a cyclic order.

Kinematic variables
Here we introduce the vector space on which the amplitudes live and related objects. Many explicit formulas are included that are useful for computations.
Definition 2. For every finite set I with |I| ≥ 3, consider first the complex 'ambient' vector space of dimension 3|I|² with coordinate system (5), consisting of the k_ij, c_ij, e_ij with i, j ∈ I. Denote by C(I) the linear subspace defined by the relations (1). In this paper, the elements (5) are understood to be elements of the dual space C(I)*.
The dimension of C(I) is 2n(n − 2), where n = |I|. If |I| = 3 then the k_ij vanish identically as elements of C(I)*. Note that the relations (1) do not refer to an ordering, hence there is a natural action of the group of permutations of I on the vector space C(I).
The elements (5) are a linearly dependent set of coordinates, not a coordinate system on C(I). Therefore we cannot define partial derivatives in the usual way. We work instead with the following linearly dependent set of constant coefficient differential operators, which are the closest analogues of partial derivatives.
Lemma 3. There are unique elements D_kij, D_cij, D_eij ∈ C(I) that, as elements of the Weyl algebra D_C(I), have the prescribed commutators with the coordinates (5), with all 'mixed' commutators equal to zero. They span C(I), and they satisfy (7).

Proof. The given commutators for D_kij at first only determine an operator on the ambient vector space. But since [D_kij, −] annihilates all left hand sides of the relations (1), we obtain a unique constant coefficient differential operator in C(I) as claimed. The rest of the lemma goes the same way.
We have introduced coordinates k ii , c ii , e ii and derivatives D kii , D cii , D eii that are identically zero and in some sense superfluous. These phantom objects are nevertheless useful because they allow us to write sums as in (3).
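The mechanism behind Lemma 3 can be illustrated in a toy model. On a subspace cut out by a single linear relation, a stand-in for the relations (1) and not the actual relations of this paper, the corrected derivative is the ordinary partial derivative minus a multiple of the sum of all derivatives; it annihilates the constraint at the cost of a corrected commutator. A Python/sympy sketch:

```python
import sympy as sp

# Toy model: subspace of C^n cut out by the single relation sum_i x_i = 0
# (an illustrative stand-in for the relations (1) of the paper).
n = 4
xs = sp.symbols(f'x0:{n}')

def D(i, p):
    """Corrected derivative D_i = d_i - (1/n) * sum_j d_j, tangent to the
    subspace sum_i x_i = 0."""
    return sp.diff(p, xs[i]) - sp.Rational(1, n) * sum(sp.diff(p, v) for v in xs)

# D_i annihilates the constraint function, i.e. it differentiates only
# along the subspace.
constraint = sum(xs)
assert all(sp.simplify(D(i, constraint)) == 0 for i in range(n))

# The price is a corrected commutator [D_i, x_j] = delta_ij - 1/n, the
# analogue of the modified commutation relations in Lemma 3.
p = xs[0]**2 * xs[1]
for j in range(n):
    commutator = sp.expand(D(0, xs[j] * p) - xs[j] * D(0, p))
    delta = 1 if j == 0 else 0
    assert commutator == sp.expand((delta - sp.Rational(1, n)) * p)
```

The phantom objects of the paper play the same role as the index pairs excluded by the relations here: they make uniform summation formulas possible.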
Convention (4) is in force from here on. We introduce notation required to discuss poles of amplitudes and the factorization of residues. The linear subspace ξ = 0 will later be a pole. We define Ξ, a derivative transversal to ξ = 0. We also define derivatives D⊥ tangential to ξ = 0 and related coordinates.

Definition 4. Also define the following.
The dependence of these definitions on the decomposition I = J ⊔ K is left implicit in the notation. The next lemma is immediate.

Lemma 5. The D⊥ span C(I)⊥. In the Weyl algebra D_C(I), the displayed commutators hold for all j, j′ ∈ J and k, k′ ∈ K. The k⊥, c⊥, e⊥ and D⊥ satisfy the linear relations (1) and (7), with k replaced by k⊥ and so forth. They additionally satisfy the identities (9); the same holds if the summation is over k, k′ ∈ K instead.
Definition 6 (Extension space). Consider first the complex 'ambient' vector space of dimension 2|J| + 2|K| = 2|I| with coordinate system (10). Let E be the subspace defined by the displayed relations. The elements (10) are understood to be elements of the dual space E*. There are unique derivative operators, analogous to those of Lemma 3, with the prescribed commutators for j, j′ ∈ J and k, k′ ∈ K; all other 'mixed' commutators are zero. (That is, commutators mixing c and e or mixing J indices with K indices are zero.)

We now define the master space M. The main calculations in this paper will be carried out in the Weyl algebra D_M of the master space.
Definition 7 (Master space). The master space corresponding to the decomposition I = J ⊔ K is M = C(I) ⊕ E. Its Weyl algebra is therefore D_M ≃ D_C(I) ⊗ D_E. We agree that elements of D_C(I) and D_E are extended to D_M; for example, Ξ is also viewed as an element of D_M from now on.

The factorization of residues requires relating C(I), via the master space M, to the spaces C(J•) and C(K•). We now construct four linear maps α_J, α_K, β_J, β_K. The maps α_J and α_K (which are used in the recursion for amplitudes in Definition 14) are surjective, and β_J and β_K are explicit right-inverses, so α_J β_J = 1 and α_K β_K = 1. The direction Ξ is in the kernel of these projections. One can, and we will, choose β_J and β_K so that their images are contained in M⊥ and so that α_K β_J = 0 and α_J β_K = 0. The result of this is a useful decomposition (12).

Lemma 8. There exists a unique linear map α_J whose adjoint takes the displayed values for all j, j′ ∈ J. Analogous for α_K.
Proof. We have specified the adjoint on a set that spans C(J•)*, hence the map is unique if it exists. It exists if it respects all relations. For instance, Σ_{j′∈J} c_j′j + c_•j is zero in C(J•)* and must be mapped to zero, which it is. More interestingly, Σ_{j∈J} k_•j is zero in C(J•)* and must be mapped to zero, and it is by (9). This example shows that one must use k⊥, not k, on M*.
Lemma 9 (Pushforward). The linear map α_J satisfies, and is equivalently defined by, the displayed assignments for all j, j′ ∈ J and k, k′ ∈ K. The elements on the left are in M ⊆ D_M; those on the right are in C(J•) ⊆ D_C(J•), using Lemma 3 for J•. Analogous for α_K.
Proof. Let X → Y be any of these claimed assignments. We must check that Y(y) = X(α*_J y) for all y ∈ C(J•)*, where Y(y) means applying Y to y as a function. Here α*_J is given by Lemma 8. Consider for example the case X = D⊥_kjk and y = k_ab ∈ C(J•)* with a, b ∈ J. Then X(α*_J y) is computed using Lemmas 3 and 5, and similarly Y(y) is computed using Lemma 3 for J•; in this example we see that X(α*_J(y)) = Y(y).
Lemma 10 (Right-inverse). The map α_J is surjective. There exists a unique right-inverse β_J : C(J•) → M whose image is the subspace spanned by the displayed elements. This right-inverse β_J maps the coordinates as displayed.

Proof. First check that these assignments define a map β_J. We have overspecified, so the map is unique if it exists. Check that, say, Σ_{j′∈J} D_cj′j + D_c•j, which is zero in C(J•), is mapped to zero. Now α_J β_J = 1 is by direct calculation using Lemma 9, and the image is as claimed.
Corollary 11. The internal direct sum decomposition (12) holds. Also:

• The subspace image π ⊆ M⊥ is the span of the displayed elements with j ∈ J, k ∈ K. Its dimension is (2|J| − 1)(2|K| − 1). In D_M its elements commute with all elements in the images of α*_J, α*_K; in particular the displayed commutators vanish for all j, j′ ∈ J and k, k′ ∈ K.
We now define an operator U whose purpose is to 'eat polarization vectors' associated to the symbol •. This operator is used in the recursion for amplitudes in Definition 14. (We note that for the recursion, it does not matter if the coefficients of U are taken to be k⊥, c⊥, e⊥ or their k, c, e counterparts.) Note that U is an operator that uses both summands in M = C(I) ⊕ E: its coefficients are polynomials on C(I), whereas the derivatives are along E.

Roughly, the recursion says that the amplitudes are rational functions with only simple poles. The poles are along a known family of linear subspaces, and the residues are given recursively in terms of products of lower amplitudes. This is known as factorization of residues. The case |I| = 3 serves as the base case. Define X_i, Y_i, Z as in (2) using the differential operators in Lemma 3.

Recursion for the amplitudes
Definition 14 (Recursion). By YM respectively GR tree amplitudes we mean a sequence of rational functions, one for every integer n ≥ 3. Using an index set I with n = |I|, these rational functions are denoted A_I and M_I, respectively. It is understood that I must have a cyclic order for the YM amplitudes A_I but no order for the GR amplitudes M_I. It is understood that I is merely an index, so if I ≃ I′ is a bijection, preserving the cyclic order in the case of YM, then A_I ≃ A_I′ respectively M_I ≃ M_I′ simply by relabeling coordinate indices. With these preliminaries, the following properties must hold:

• Base case. For n = 3 the amplitudes are given by Definition 13.

• Poles and residues. Both A_I and M_I have poles only along the subspaces ξ = 0 corresponding to decompositions I = J ⊔ K with |J|, |K| ≥ 2. In the case of A_I, there are poles only for decompositions that respect the cyclic order, thus a cyclic order is induced on J• and K• respectively. The poles are simple and the residue is given by (13), as an identity in Frac(R_M/ξR_M). The operator U is in Definition 12. The specification of the residue is recursive because |J•|, |K•| < |I|.
Intuitively, in (13), the 'purpose' of the operators U respectively U 2 is to remove all variables (10) on the right hand side, since there can be no such variables on the left hand side.
Lemma 15 (Uniqueness). If an amplitude as in Definition 14 exists, then it is unique.
Proof. The proof is not sensitive to the constants in (13), which are partly a matter of normalization. The proof is by induction on n = |I|. The n = 3 amplitudes are fixed. For n ≥ 4, the residues of any two candidate amplitudes are the same by (13) and by the induction hypothesis. So the difference between any two candidates, call it u ∈ Frac(R_C(I)), has no poles and is regular, except perhaps where the poles intersect. But the union of all pairwise intersections of poles has codimension two, and by the Hartogs extension theorem (cf. [6] for more comments about this in the context of d = 4 amplitudes), u is actually globally regular. This means that u ∈ R_C(I) is a polynomial. Since u is annihilated by X_i and Y_i and Z, Lemma 16 below implies u = 0.
Lemma 16 (Vanishing lemma). Suppose n = |I| ≥ 4 and suppose X_i, Y_i, Z are as in (2), either with the parameters for YM or with the parameters for GR. Suppose p_i ≥ 0, i ∈ I, are integers with Σ_{i∈I} p_i < n, and q ≥ 0 is another integer. Suppose a polynomial u ∈ R_C(I) satisfies X_i u = 0 and (Y_i + p_i)u = 0 for all i ∈ I, and (Z + q)u = 0. Then u = 0.
Proof. Distinguish four cases according to the schematic structure of u, viewed as a polynomial in the c, k variables with coefficients that are polynomials in the e variables: u = Σ P_abcd c_ab c_cd + Σ Q_ab k_ab (case 1), u = Σ P_ab c_ab (case 2), u = P (case 3), and u = 0 (case 4). The schematic structure follows from (Z + q)u = 0. The polynomials P(e) and Q(e) are homogeneous. Their homogeneity degrees are given in terms of p = Σ_{i∈I} p_i < n and follow from (Y_i + p_i)u = 0 for all i ∈ I. Note that P is a single homogeneous polynomial in case 3, whereas P and Q are schematic for several homogeneous polynomials in cases 1 and 2. Since p < n we have deg P > 0 and deg Q > 0; in particular P, Q cannot be constant, so they are zero if their derivatives are zero. We discuss each case:

• Case 4: Here u = 0 and there is nothing to prove.
• Case 3: Here P(e) is a single polynomial. Since X_i u = 0 for all i, we have Σ_{j∈I} c_ij D_eij P = 0 (there is no sum over i). For every fixed i the only relations between the (c_ij)_{j∈I} are c_ii = 0, which suffices to conclude D_eij P = 0. So u = P = 0.
• Case 2: Here u = Σ_{a,b∈I} P_ab c_ab for some polynomials P_ab = P_ab(e). Using (1) we may assume Σ_a P_ab = 0 and P_aa = 0. Since X_i u = 0 for all i, and the terms of X_i u scale differently in c and k, they are separately zero. We continue with Σ_j k_ji P_ji = 0 (there is no sum over i). Since n ≥ 4, for every fixed i the only relations between the (k_ji)_{j∈I} are k_ii = 0 and Σ_j k_ji = 0; hence all P_ji with j ≠ i agree, and then they vanish by Σ_a P_ab = 0. So u = 0.

• Case 1: Here u = Σ_{a,b,c,d} P_abcd c_ab c_cd + Σ_{a,b} Q_ab k_ab. Using (1), we may assume P_abcd = P_cdab, Σ_a P_abcd = 0, P_aacd = 0, Q_ab = Q_ba, Σ_a Q_ab = 0, Q_aa = 0. Using X_i u = 0 we have V_i + W_i = 0, where V_i and W_i abbreviate the parts that scale differently in c and k; hence V_i = W_i = 0.
• It follows from V_i = 0 (reasoning as in case 2) that if all P are zero then all derivatives of all Q are zero, and then Q = 0, since deg Q > 0. The problem is thus reduced to showing that all P are zero.
• Combining the identities W_i = 0 in a particular way that eliminates the Q terms (using D_eik = D_eki), we get identities for the P, valid for all a, i, k ∈ I. Applying D_kcd to these, one obtains linear identities with constant coefficients for the P, which force all P to vanish.

Some commutators
Lemma 17. Let |I| ≥ 3. In D_C(I) the displayed identities hold for all i, j ∈ I. All these identities hold for both YM and GR; recall that the parameters defining Y_i and Z are different in these two cases.
The corollary contains identities for A j f , B j f , C j f with f the amplitude. This is a step towards Theorem 1 which says that A j f , B j f , C j f are all zero.
Corollary 18. Suppose f is either the YM amplitude, f = A_I, or the GR amplitude, f = M_I, as in Definition 14. Then (16a) holds for all i, j ∈ I. If A_j f = 0 for all j then (16b) holds, and if B_j f = 0 for all j then (16c) holds.

Proof. Use Lemma 17 and the fact that X_i f = Y_i f = Zf = 0.

The elements X_i, Y_i, Z are well-known to be in the annihilator, and so are some constant coefficient operators of order 2 for YM, order 3 for GR, that witness polynomiality in the c and e variables. Theorem 1 asserts that A_i, B_i, C_i are in the annihilator as well. The proof, given at the end of this section, is by induction on |I|. The induction step will use the following lemma.
Lemma 20 (Key technical lemma). Let U be as in Definition 12. Suppose I = J ⊔ K with |J|, |K| ≥ 2 as before. Suppose Theorem 1 holds for the index sets J• and K•; cyclic orders are understood in the case of YM. Then A_i g, B_i g, C_i g have no pole along ξ = 0, for every rational function g whose only singularity along ξ = 0 is a simple pole with residue given by the right hand side of (13).

Proof. The computation will be in the Weyl algebra; we will not directly work with rational functions. The first step is to split off the Ξ direction, using the direct sum decomposition (11). At the Weyl algebra level, D_M ≃ D_Ξ ⊗ D_M⊥, where D_Ξ is the Weyl algebra generated by ξ and Ξ, with [Ξ, ξ] = 2. Every element of D_Ξ commutes with every element of D_M⊥. For every O we have a decomposition with unique coefficients S_00, ..., S_12 ∈ D_M⊥. This isolates all occurrences of ξ and Ξ; it is clear from (3) that at most one ξ and at most two Ξ appear. Set S_0, ..., S_3 as displayed. Note that Ξ commutes with U ∈ D_M⊥ from Definition 12, and the pushforward of Ξ under both α_J and α_K is zero. Hence it suffices to show (17a) for YM, corresponding (by definition of S_0, ..., S_3) to the absence of 1/ξ and 1/ξ² and 1/ξ³ terms respectively, and analogously it suffices to show (18a) for GR. Note that S_0 has dropped out of the computation, since it cannot generate a pole along ξ = 0. Note that:

• S_2 = S_3 = 0 for O = A_i, because A_i involves no k-derivative, hence no Ξ-derivatives. This also implies S_11 = 0 in this case.
• S_3 = 0 for O = B_i, because B_i involves no second k-derivatives, hence no second Ξ-derivatives. This also implies S_12 = 0 in this case.
• S_3 = 0 for O = C_i, but this requires a calculation. It suffices to show the analogous claim for the following terms in (3c), with a = 1/2 and b = −1. Use Definition 4 to replace k_ab = k⊥_ab + const_ab · ξ and D_kab = D⊥_kab + const_ab · Ξ, and keep only the k⊥_ab respectively Ξ terms. Using the first equation in (9) and 2a + b = 0, we get zero.
It now suffices to show (17a), (18a) for A j , B j , C j for all j ∈ J.
The restriction to i = j ∈ J is new and is without loss of generality. It entails that the rest of this proof is not symmetric under exchanging J and K. The rest of this proof exploits the direct sum decomposition of M⊥ in (12), see also Corollary 11. At the Weyl algebra level, D_M⊥ factors accordingly; the indicated isomorphisms are established by α_J, β_J and α_K, β_K. Call them γ_J and γ_K. They are explicitly given by the formulas in Lemmas 8, 9, 10. Let I_YM ⊆ D_M⊥ respectively I_GR ⊆ D_M⊥ be the left ideals generated by the following (the difference between YM and GR is implicit in the parameters defining the Y and Z elements):

• The left ideal I_π ⊆ D_image π generated by all partial derivatives. Equivalently, this is the annihilator of the constant functions on image π.
• The left ideal in D_image βJ generated by the image under γ_J of the elements (20a), which are known annihilators of A_J• respectively M_J•, and, for YM, of the elements (20b), for all x, y, z ∈ {c, e} and all j_1, j_2, j_3 ∈ J.
• The left ideal in D_image βK generated by the image under γ_K of the elements (21a), which are known annihilators of A_K• respectively M_K•, and, for YM, of the elements (21b), for all x, y, z ∈ {c, e} and all k_1, k_2, k_3 ∈ K.
It is part of the assumptions of this lemma, and a consequence of Definition 14, that (20) and (21) are known annihilators of A_J•, M_J• and A_K•, M_K•. More annihilators are known, but we will not need those. To show (19) it now suffices to show (22), by construction of I_YM, I_GR and using α_J π = 0, α_K π = 0. Thus the problem is reduced to one of checking membership in a left ideal in the Weyl algebra D_M⊥. (For every fixed |I| this can in principle be checked algorithmically using Gröbner bases. But we need to prove membership for all |I|.) To proceed, we use the canonical isomorphism D_image π/I_π ≃ R_image π, where R_image π are the polynomials on image π. This gives a canonical map, with components ρ_d that can only be nonzero for d = 0, 1, because all O in (3) have polynomial coefficients of order ≤ 1. We claim that ρ_1 actually vanishes for all O = A_j, B_j (an analogous statement fails for C_j). To see this, use the description of image π in Corollary 11. Here are some examples:

• Consider the term Σ_{a,b∈I} e_ab D_eaj D_ebj in A_j. This term is already in D_M⊥, since e = e⊥ and D_e = D⊥_e. If a, b ∈ J or a, b ∈ K then we get no contribution to ρ_1, since e⊥_ab commutes with all elements of image π by Corollary 11. If a ∈ K then D⊥_eaj ∈ I_π, and if b ∈ K then D⊥_ebj ∈ I_π, and we also get no contribution to ρ_1.
• Consider next Σ_{a,b∈I} c_ab D_caj D_ebj in A_j. By the same reasoning, it suffices to consider the sum over a ∈ K, b ∈ J. For a ∈ K, write D_caj = D⊥_caj as a sum (D⊥_caj − r) + r of two terms, with r the corresponding element of image π from Corollary 11. The first term does not contribute to ρ_1, since D⊥_caj − r ∈ I_π by Corollary 11. The second does not contribute to ρ_1, since Σ_{a∈K} c⊥_ab = −Σ_{a∈J} c⊥_ab, which for b ∈ J commutes with all elements of image π by Corollary 11.
• Terms involving k or D_k are more complicated. One must take into account k = k⊥ + const · ξ and D_k = D⊥_k + const · Ξ, as in Definition 4.

Using (23) one can see that ρ_d(S_1 U), ρ_d(S_2 U) can only be nonzero for d = 0, 1, and that ρ_d(S_1 U²), ρ_d(S_2 U²) can only be nonzero for d = 0, 1, 2. In each of these cases, and for O = A_j, B_j, C_j, Tables 1 and 2 list elements from (20) and (21) that suffice to prove membership in I_YM respectively I_GR, as in (22). The tag 'poly' subsumes all annihilators in (20b), (21b) that witness polynomiality. We now discuss these tables in detail. The identities below are in the space (24). To extract the various 'Taylor coefficients' we use the D defined in Corollary 11, understood here as mapping (R_image π)_d → (R_image π)_{d−1}. We will not make explicit the 'poly' pieces, and state some identities in the schematic form a = b mod poly, which asserts that a − b is in the left ideal of (24) generated, via γ_J and γ_K respectively, by (20b) and (21b). With these preliminaries, one row in Table 2 with O = C_j is proved by the identities (25) in the space (24); they hold for all j_1 ∈ J and k_1 ∈ K. It is essential here that derivatives such as D_ek1• are to the left of γ_K(X_•). On the other hand, since γ_K(D_ek1•) = D_ek1• by Lemmas 9 and 10, it does not matter if this derivative is written inside or outside of γ_K. We abbreviate R_1 and R_2 as displayed, with the understanding that j_1, j_2 ∈ J and k_1, k_2 ∈ K. The four identities (25) are now given, more succinctly, by the displayed formulas. We now state all identities needed for (22).
For YM, the identities are those referenced in Table 1. For GR, with reference to Table 2:

  ρ_0(S_2 U²) = 0
  D¹ ρ_1(S_2 U²) = 2 (|J|−1)(|I|²−|I|−|J|) / (|J|(|I|−1)(|I|−2)) · D_ej• R_1 γ_K(X_•)
  D² ρ_2(S_2 U²) = −2 |K|(|K|−1) / ((|I|−1)(|I|−2)) · R_2 γ_J(X_j)  mod poly

together with the analogous identities for the remaining rows. These Weyl algebra identities are by direct calculation; checking them is algorithmically straightforward, best done using symbolic computation. They imply (22) and hence Lemma 20.
Proof (of Theorem 1). The proof is by induction on |I|. The base case |I| = 3 is by direct calculation; as an example, if I = {1, 2, 3}, then the GR case is a short direct check. For the induction step, fix a decomposition I = J ⊔ K with |J|, |K| ≥ 2 and let g be a rational function whose only singularity along ξ = 0 is a simple pole with residue given by (13). Note that:

• By the recursion in Definition 14, the difference f − g does not have a pole along ξ = 0, and therefore neither do A_i(f − g), B_i(f − g), C_i(f − g).
• By the induction hypothesis, we can invoke Lemma 20 and conclude that also A i g, B i g, C i g do not have a pole along ξ = 0.
Hence A_i f, B_i f, C_i f have no pole along ξ = 0 for every decomposition I = J ⊔ K, and therefore (by Hartogs extension, as in the proof of Lemma 15) we have A_i f, B_i f, C_i f ∈ R_C(I), the ring of polynomials. To show that they are actually zero:

• Use (16a) and Lemma 16 (with u = A_i f) to conclude that A_i f = 0.
• Then use (16b) and Lemma 16 (with u = B i f ) to conclude that B i f = 0.
• Then use (16c) and Lemma 16 (with u = C i f ) to conclude that C i f = 0.

A Some tree amplitudes
Some readers may find it useful to see some amplitudes. Here, as in the rest of this paper, by amplitudes we mean the dimension-neutral ones. The expressions below are rational functions on the vector space C(I) in Definition 2, defined using the relations (1). Not all symmetries are manifest; for example, the permutation invariance of the GR amplitudes is not. The expressions are given up to an overall multiplicative constant that we do not care about.

B Relation to the conjecture in [1]

• The GR amplitudes are permutation invariant. So the conjecture in [1] that Y(F_i), Y(G_i) annihilate the GR amplitude is equivalent to Y_loc(F_i), Y_loc(G_i) annihilating the GR amplitude.
• The B_i, C_i in (3b), (3c) are, up to normalization, the operators Y_loc(F_i), Y_loc(G_i), but presented directly as differential operators on C = C(I). A detailed translation to (3b), (3c) is omitted. There are terms proportional to the conformal dimension Δ in F_i, G_i, see [1], but they do not contribute to Y_loc(F_i), Y_loc(G_i), which therefore are independent of Δ.