Orientations of BCFW Charts on the Grassmannian

The Grassmannian formulation of $\mathcal{N}=4$ super Yang-Mills theory expresses tree-level scattering amplitudes as linear combinations of residues of certain contour integrals. BCFW bridge decompositions using adjacent transpositions simplify the evaluation of individual residues, but orientation information is lost in the process. We present a straightforward algorithm to compute relative orientations between the resulting coordinate charts, and we show how to generalize the technique to charts corresponding to sequences of arbitrary, not-necessarily-adjacent transpositions. As applications of these results, we demonstrate the existence of a signed boundary operator that manifestly squares to zero, and we prove via our algorithm that the residues appearing in the tree amplitude sum are decorated with signs such that all non-local poles cancel exactly, not just mod 2 as in previous work.


Introduction
Scattering amplitudes in 4d planar N = 4 super Yang-Mills (SYM) theory can be formulated as contour integrals over the space of k × n matrices modulo left multiplication by GL(k) matrices; this space is the Grassmannian manifold Gr(k, n) [1]. Reformulating scattering amplitudes in this framework has led to many interesting and unexpected mathematical structures in N = 4 SYM, such as on-shell diagrams [1,2,3] and the amplituhedron [4,5,6]. Many of the results extend to 3d N = 6 ABJM theory [7,8], where several novel properties have emerged [9,10,11]. This paper will focus on planar N = 4 SYM theory, where the n-particle N^k MHV tree amplitude can be obtained by evaluating the following integral:

A_n^{N^k MHV} = A_n^{MHV} ∫_Γ [d^{k×n}C / vol GL(k)] δ^{4k|4k}(C · Z) / (M_1 M_2 · · · M_n) .   (1.1)

The prefactor A_n^{MHV} is the n-particle MHV amplitude, C is a k × n matrix of full rank, and Z encodes the external data (momentum, particle type, etc.) as momentum twistors. In the denominator of the measure, M_i is the i-th consecutive k-minor of C: the determinant of the submatrix of C with ordered columns i, i+1, . . . , i+k−1 (mod n). The contour on which the integral should be evaluated is designated by Γ. The result is a sum over residues computed at poles of the integrand. There are generally many families of contours which produce equivalent representations of the amplitude, due to residue theorems.
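For concreteness, the cyclic minors M_i are simple to evaluate on small examples. The following sketch (illustrative Python, not the authors' Mathematica positroids code; all names are ours) computes the n consecutive k-minors of a k × n matrix:

```python
from itertools import permutations

def det(m):
    """Determinant by Leibniz expansion (fine for small k)."""
    n = len(m)
    total = 0
    for perm in permutations(range(n)):
        sign = 1
        # count inversions to get the parity of perm
        for i in range(n):
            for j in range(i + 1, n):
                if perm[i] > perm[j]:
                    sign = -sign
        prod = 1
        for i in range(n):
            prod *= m[i][perm[i]]
        total += sign * prod
    return total

def consecutive_minors(C):
    """M_i = det of columns i, i+1, ..., i+k-1 (mod n), for i = 1..n."""
    k, n = len(C), len(C[0])
    minors = []
    for i in range(n):
        cols = [(i + j) % n for j in range(k)]
        minors.append(det([[C[r][c] for c in cols] for r in range(k)]))
    return minors

# A representative of the top cell of Gr(2,4): all cyclic 2-minors nonzero.
C = [[1, 0, -1, -2],
     [0, 1,  3,  2]]
print(consecutive_minors(C))  # [1, 1, 4, -2]
```

Since every cyclic minor is nonzero, this matrix represents a point in the top cell of Gr(2, 4).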
Each contour can be thought of as a product of circles wrapping around poles of the integrand, the points where certain minors vanish. The minors are degree-k polynomials in the variables of integration, so the pole structure is generally very difficult to describe in the formulation (1.1). However, a technique was presented in [1] for generating charts on the space such that any individual codimension-1 residue occurs at a simple logarithmic singularity. Using one of those charts, the measure on a d-dimensional submanifold can be decomposed into a product of dα/α = dlog α factors. We call this measure ω for future reference:

ω = dlog α_1 ∧ dlog α_2 ∧ · · · ∧ dlog α_d .   (1.2)

Advantages of this formulation are that such charts are easy to generate and that every codimension-1 residue can be reached as a dlog singularity using only a small atlas of charts. A potential disadvantage is that directly relating two distinct charts is non-trivial; this could lead to sign ambiguities when combining individual residues into the amplitude. This is especially important because the residues contain non-physical singularities that should cancel in the tree amplitude sum. Previously, such divergences were shown to appear in pairs, so they at least cancel mod 2 [1]. In Section 4 of this paper, we demonstrate that the cancellation is exact, which follows from the main result of this paper: the Master Algorithm introduced in Section 3.

Summary of Results
The key development of this paper is a systematic algorithm that generates the relative sign between any two BCFW charts on a submanifold, or cell, of the Grassmannian. Each chart is defined by a sequence of transpositions (a 1 b 1 )(a 2 b 2 ) . . . (a d b d ) acting on a permutation labeling a 0-dimensional cell. Equivalently, each chart can be represented by a path through the poset (partially ordered set) of Grassmannian cells. Every transposition (a i b i ) corresponds to a factor of dlog α i in (1.2).
To get a sense of the result, it is illustrative to consider the simple example of two charts defined by the sequences (ab)(cd) and (cd)(ab) with a < b < c < d < a+n. In the poset of cells, these sequences define distinct paths from a 0-dimensional cell labeled by σ_0 to a 2-dimensional cell labeled by σ, shown in (1.3) with edges labeled by the corresponding transpositions. If we associate the coordinate α_1 with (ab) and α_2 with (cd), then the dlog forms generated by the sequences are, respectively, ω = dlog α_1 ∧ dlog α_2 and ω′ = dlog α_2 ∧ dlog α_1 = −ω. The relative orientation is given by the product of edge signs along the two paths.
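The relative orientation in this two-chart example is just the antisymmetry of the wedge product; more generally, reordering dlog factors produces the parity of the relabeling permutation. A minimal illustrative sketch (Python, our naming):

```python
def reorder_sign(order):
    """Sign acquired when dlog a_{order[0]} ^ dlog a_{order[1]} ^ ... is
    rewritten in the reference order dlog a_1 ^ dlog a_2 ^ ...:
    the parity of the permutation `order`."""
    sign = 1
    for i in range(len(order)):
        for j in range(i + 1, len(order)):
            if order[i] > order[j]:
                sign = -sign
    return sign

# dlog a2 ^ dlog a1 = -(dlog a1 ^ dlog a2), the relative orientation above.
print(reorder_sign([2, 1]))     # -1
# A cyclic reordering of three factors is even, so no sign is produced.
print(reorder_sign([2, 3, 1]))  # 1
```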
We show in Section 3.3 that the relative orientation between any two charts is equal to the product of edge signs around a closed loop in the poset. All the edges can be weighted such that the product of signs around every quadrilateral is −1, just as in the example (1.5). The closed loop is obtained by concatenating each input path (blue and green solid lines in (1.6 A)) with a sawtooth path connecting their 0d cells (thick red line in (1.6 A)), times −1 for each 1d cell along the connecting path. We call this method the Master Algorithm.
To prove that the Master Algorithm correctly yields the relative orientation, we introduce two preliminary algorithms in Section 3.1. When the two paths meet at a common 0d cell, illustrated in (1.6 B) with blue and green solid lines, Algorithm 1 splits the big loop into smaller loops, e.g. using the dashed black lines in (1.6 B), for which the relative orientations can be computed directly from the corresponding plabic networks. If the paths end in different 0d cells, e.g. (1.6 C), Algorithm 2 additionally computes the relative sign between the source cells.
We then use Algorithms 1 and 2 in Section 3.2 to demonstrate that the big loop (1.6 A) can always be split up into quadrilaterals, each of which contributes a factor of −1 to the overall sign. Finally in Section 3.3 it is shown that if the edges are appropriately decorated with ±1, then only those that make up the loop contribute to the end result (in addition to the −1 from each 1d cell along the sawtooth path).
This eliminates the sign ambiguity between distinct residues, and in Section 4 we show that residues contributing to the tree amplitude are always decorated with signs so that all nonphysical singularities cancel exactly in the sum of residues. A similar argument demonstrates the existence of a boundary operator that manifestly squares to zero. In addition, we present an interpretation for paths corresponding to decompositions with non-adjacent transpositions. We review the necessary background in Section 2 before presenting the algorithm in Section 3, and a few details are relegated to appendices.
During the development of this analysis, a different method for determining relative orientations was proposed independently by Jacob Bourjaily and Alexander Postnikov using determinants of certain large matrices. We computed the orientations of 500 distinct charts on the 10d cell {5, 3, 8, 9, 6, 7, 12, 10, 14, 11} ∈ Gr(3, 10) with both methods and found perfect agreement [13]. The algorithm presented in this paper has also been verified by checking a variety of charts whose orientations are known by other methods. The cancellation of all doubly-appearing poles in the tree amplitude has been confirmed explicitly for all n = 5, . . . , 13 and k = 3, . . . , ⌊n/2⌋. All of the algorithms described here have been implemented in Mathematica as an extension of the positroids package included with the arXiv version of [1]. Copies of the new code can be provided upon request.

Positroid Stratification
The positroid stratification is a decomposition of the Grassmannian Gr(k, n) into submanifolds called positroid cells (or just cells). Cells can be classified according to the ranks of submatrices constructed out of cyclically consecutive columns of a representative matrix. The top cell of Gr(k, n) is the unique cell of highest dimension, d = k(n − k). All maximal (k × k) minors are non-vanishing in the top cell, so all chains of consecutive columns will be full rank. For example, a representative matrix of the top cell of Gr(2, 4) is

C = ( 1   0   c_13   c_14 )
    ( 0   1   c_23   c_24 ) ,   (2.1)

where c_ij ∈ ℂ are coordinates on the manifold such that none of the 2 × 2 minors vanish. All four parameters must be fixed to specify a point in the top cell, which is consistent with the expected dimension d = 2(4 − 2) = 4.
Lower dimensional cells are reached by fixing relations among the entries so that additional linear dependencies arise among consecutive columns. From a given cell, the accessible codimension-1 submanifolds are called the boundaries of that cell. The extra linear relations imply that various minors vanish in the measure of (1.1). Thus going to the boundary should be interpreted as taking a residue at the corresponding pole. Note that choosing a particular representative matrix amounts to selecting a chart on the cell, so some poles may not be accessible in certain charts. In the example above, the boundary where columns c 2 and c 3 are parallel is accessible by setting c 13 = 0, but there is no way to reach the boundary where columns c 1 and c 2 are parallel in the stated chart. The latter boundary could be accessed in a GL(2)-equivalent chart where a different set of columns were set to the identity.
A partial order can be defined on the set of positroid cells by setting C′ ≺ C whenever C′ is a codimension-1 boundary of C [14]. The poset structure is interesting for a number of reasons, several of which will be mentioned throughout the text. One such reason is that the poset of positroid cells in Gr(k, n) is isomorphic to a poset of decorated permutations. Decorated permutations are similar to standard permutations of the numbers 1, . . . , n, but differ in that k of the entries are shifted forward by n. To simplify the notation, we will often use 'permutation' to mean 'decorated permutation', since only the latter are relevant to us. Permutations will be written in single-line notation using curly brackets; an example is given in (2.2). When referencing specific elements of a permutation, we will use the notation σ(i) to mean the i-th element of the permutation σ, with the understanding that σ(i+n) = σ(i)+n. A decorated permutation encodes the ranks of cyclically consecutive submatrices by recording, for each column c_a, the first column c_b with a ≤ b ≤ a+n such that c_a ∈ span(c_{a+1}, c_{a+2}, . . . , c_b). The first inequality is saturated when c_a = 0, and the second is saturated when c_a is linearly independent of all other columns. Continuing with the example (2.1), the top cell of Gr(2, 4) corresponds to the permutation σ_top = {3, 4, 5, 6}, (2.2) which says that 1 → 3, 2 → 4, 3 → 5 ≡ 1, and 4 → 6 ≡ 2. In terms of the linear dependencies among columns of a representative matrix, it means c_1 ∈ span(c_2, c_3), c_2 ∈ span(c_3, c_4), c_3 ∈ span(c_4, c_5) ≡ span(c_4, c_1), and c_4 ∈ span(c_5, c_6) ≡ span(c_1, c_2).
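The map from a representative matrix to its decorated permutation can be implemented directly from the span condition above; a sketch in Python using exact rational arithmetic (the function names are ours, not from the positroids package):

```python
from fractions import Fraction

def rank(rows):
    """Matrix rank via Gaussian elimination over exact fractions."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [x - f * y for x, y in zip(m[i], m[r])]
        r += 1
    return r

def decorated_permutation(C):
    """sigma(a) = first b with a <= b <= a+n such that column c_a lies in
    span(c_{a+1}, ..., c_b), columns taken mod n (1-indexed output)."""
    k, n = len(C), len(C[0])
    sigma = []
    for a in range(n):
        if all(C[r][a] == 0 for r in range(k)):
            sigma.append(a + 1)             # zero column: sigma(a) = a
            continue
        for step in range(1, n + 1):
            cols = [(a + j) % n for j in range(1, step + 1)]
            span = [[C[r][c] for c in cols] for r in range(k)]
            with_a = [span[r] + [C[r][a]] for r in range(k)]
            if rank(with_a) == rank(span):
                sigma.append(a + 1 + step)  # may exceed n: decorated shift
                break
    return sigma

# Top cell of Gr(2,4): all cyclic 2-minors nonzero.
print(decorated_permutation([[1, 0, -1, -2],
                             [0, 1,  3,  2]]))  # [3, 4, 5, 6]
# Setting c13 = 0 makes columns 2 and 3 parallel: a boundary cell.
print(decorated_permutation([[1, 0, 0, -2],
                             [0, 1, 3,  2]]))   # [4, 3, 5, 6]
```

The second output illustrates taking a boundary: the entry σ(2) = 3 records that c_2 now lies in span(c_3).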
Going to the boundary of a cell involves changing the linear relations among consecutive columns, so in permutation language, the boundary is accessed by exchanging two entries in σ. The transposition operation that swaps the elements at positions a and b is denoted by (ab), with a < b < a+n; we call this the boundary operation. Note that σ only has positions 1, . . . , n, but b can be greater than n. To account for this, we identify positions mod n, with σ(i+n) = σ(i)+n. Not all exchanges are allowed; for instance, applying the same transposition twice would revert to the initial cell, which is clearly not a boundary. The allowed boundary transpositions satisfy the following criteria: a < b ≤ σ(a) < σ(b) ≤ a+n and σ(q) ∉ (σ(a), σ(b)) ∀q ∈ (a, b), (2.5) where (a, b) means the set {a+1, a+2, . . . , b−1}. We will call (ab) an adjacent transposition if all q ∈ (a, b) satisfy σ(q) ≡ q mod n [1]. The reason for this distinction will become clear in the next section. A strictly adjacent transposition is of the form (a a+1). For notational purposes, we define (ab) to act on the right, so if σ is a boundary of σ̂, then we write σ = σ̂ · (ab). Of course, (ab) is its own inverse, so acting on the right with (ab) again yields the inverse boundary operation σ · (ab) = σ̂. The boundary operation reduces the dimension by one, while the inverse boundary operation increases it by one. Although the notation is identical, it should be clear from context which is meant. Taking additional (inverse) boundaries leads to expressions like ρ = σ · (a_1 b_1)(a_2 b_2) . . ., which means first exchange σ(a_1) and σ(b_1), then swap the elements at positions a_2 and b_2, etc.
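The criteria (2.5) and the adjacency condition can be checked mechanically. A sketch (Python, 1-indexed positions, our function names), applied to the 3d cell {3, 5, 4, 6} ∈ Gr(2, 4) that reappears as a running example later in the text:

```python
def val(sigma, i):
    """sigma(i) extended by the convention sigma(i+n) = sigma(i) + n."""
    n = len(sigma)
    return sigma[(i - 1) % n] + n * ((i - 1) // n)

def is_boundary(sigma, a, b):
    """Criteria (2.5): a < b <= sigma(a) < sigma(b) <= a+n, and no q in
    (a, b) has sigma(q) inside the open interval (sigma(a), sigma(b))."""
    n = len(sigma)
    if not (a < b < a + n):
        return False
    if not (b <= val(sigma, a) < val(sigma, b) <= a + n):
        return False
    lo, hi = val(sigma, a), val(sigma, b)
    return all(not (lo < val(sigma, q) < hi) for q in range(a + 1, b))

def is_adjacent(sigma, a, b):
    """(ab) is adjacent if every q in (a, b) is self-identified mod n."""
    n = len(sigma)
    return is_boundary(sigma, a, b) and \
        all(val(sigma, q) % n == q % n for q in range(a + 1, b))

sigma = [3, 5, 4, 6]                        # a 3d cell of Gr(2,4)
n = len(sigma)
bnd = [(a, b) for a in range(1, n + 1)
       for b in range(a + 1, a + n) if is_boundary(sigma, a, b)]
adj = [(a, b) for (a, b) in bnd if is_adjacent(sigma, a, b)]
print(bnd)  # [(1, 2), (1, 3), (2, 4), (3, 4), (4, 5)]
print(adj)  # [(1, 2), (3, 4), (4, 5)]
```

Note that (1 3) is a valid boundary transposition of {3, 5, 4, 6} but is not adjacent, since it crosses the non-self-identified leg 2.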

Plabic Graphs
Permutations and positroid cells can be represented diagrammatically with plabic (planar-bicolored) graphs and plabic networks. Plabic graphs are planar graphs embedded in a disk, in which each vertex is colored either black or white. Any bicolored graph can be made bipartite by adding oppositely-colored bivalent vertices on edges between two identically-colored vertices, so we will assume all graphs have been made bipartite. Some edges are attached to the boundary; we will call these external legs and number them 1, 2, . . . , n in clockwise order. If a monovalent leaf is attached to the boundary, it will be called a lollipop together with its edge. Plabic networks are plabic graphs together with weights (t_1, t_2, . . . , t_e) assigned to the edges. The weights are related to coordinates on the corresponding cell, as will be discussed in the next subsection.
Given a plabic graph G, one can define a trip permutation by starting from an external leg a and traversing the graph, turning (maximally) left at each white vertex and (maximally) right at each black vertex until the path returns to the boundary at some leg b. When a ≠ b, the permutation associated with this trip has σ_G(a) = b, with b understood mod n so that a < σ_G(a) < a+n. We will explain the case a = b momentarily.
For tree amplitudes, we will be concerned with reduced plabic graphs. A plabic graph is reduced if it satisfies the following after deleting all lollipops [15]: 1. It has no leaves; 2. No trip is a cycle; 3. No trip uses a single edge twice; 4. No two trips share two edges e 1 and e 2 in the same order.
It follows that any external leg a for which σ G (a) ≡ a mod n must be a lollipop [15]. Specifically, we define σ G (a) = a to be a black lollipop, and σ G (a) = a+n to be a white lollipop.
Thus each reduced plabic graph/network corresponds to a unique decorated permutation. However, the correspondence is not a bijection; rather, each permutation labels a family of reduced plabic graphs/networks. Members of each family are related by equivalence moves that modify the edge weights but leave the permutation unchanged [1,16]:

(E1) GL(1) rotation (2.7): At any vertex, one can perform a GL(1) rotation that uniformly scales the weights on every attached edge by a scaling factor f.

(E2) Merge/delete (2.8): Any bivalent vertex whose edges both have weight 1 can be eliminated by merging its neighbors into one combined vertex and deleting the bivalent vertex and its edges.
If one of its neighbors is the boundary, then the bivalent vertex should be merged with the boundary instead. The inverse operation can also be used to 'unmerge' a vertex or boundary.
(E3) Square move (2.9): A four-vertex square with one pattern of coloring is equivalent to the four-vertex square with the opposite coloring.
These three moves are sufficient to transform any plabic network into any other in its equivalence class. In addition, using these moves one can always fix all but d of the edge weights to unity in any network that represents a d-dimensional cell; throughout the rest of this paper, edges with unspecified weights will have weight 1. As we will see shortly, certain plabic networks lead to especially convenient parameterizations of positroid cells. The equivalence moves will allow us to compare the resulting oriented forms.

Bridge Decompositions and Charts
To write down a coordinate chart on a given cell, it is sufficient to construct a representative matrix with the appropriate linear dependencies among its columns. However, in a general chart the boundary structure could be very difficult to identify. Fortunately, there is a straightforward method to construct charts with simple dlog forms as in (1.2) [1]; we review this technique below and how it relates to paths in the poset of cells. Some paths through the poset do not correspond to any such charts, so we suggest an interpretation for those paths in Section 2.3.2 and explain the consequences for residues in Section 4.1.

Standard BCFW Bridge Decompositions
Positroid cells of dimension zero in Gr(k, n) correspond to unique plabic graphs made solely out of lollipops with edge weight 1. The k legs with σ(a) = a+n have white vertices while the rest are black. The boundary measurement matrix is zero everywhere except the submatrix composed out of the k columns corresponding to the white vertices, which together form a k × k identity matrix. There are no degrees of freedom, so the differential form (1.2) is trivial, ω = 1.
Since there is a unique representative plabic network for each 0d cell, we will build higher-dimensional representatives out of the set of 0d networks. In the poset, higher-dimensional cells can be reached from 0d cells by repeatedly applying the inverse boundary operation defined below (2.5). Equivalently, a d-dimensional cell can be decomposed into a sequence of adjacent transpositions acting on a 0d permutation, e.g. (2.11). This is called a BCFW decomposition and leads to a convenient graphical representation. In a plabic network, an adjacent transposition (a_i b_i) amounts to simply adding a white vertex on leg a_i, a black vertex on leg b_i, and an edge between them with weight α_i; this is called a (BCFW) bridge. Note that if one of the legs is initially a lollipop, then the resulting leaf should be deleted after adding the bridge [15,16]. The example (2.11) generates the following sequence of graphs: We have added a black vertex between the first two bridges to make the graph bipartite; it is drawn slightly smaller to distinguish it from the bridge vertices. Many simplifications are possible using the equivalence moves (2.7)-(2.9).
Adding a BCFW bridge affects the trip permutation by exchanging σ_G(a) ↔ σ_G(b) as desired, and the boundary measurement matrix transforms in a simple way [1]: the bridge shifts column c_b by a multiple of column c_a,

c_b → c_b + α_i c_a .   (2.14)

One can easily check that the linear dependencies of the shifted matrix agree with the expected permutation as long as (a_i b_i) is an adjacent transposition. If ω_{i−1} is the differential form associated with the initial cell, then after adding the bridge, the new form is

ω_i = ω_{i−1} ∧ dlog α_i .   (2.15)

This prescription provides a robust way to generate coordinates on any cell in Gr(k, n).
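The bridge action can be verified on a tiny example. The sketch below (Python; it assumes, per our reading of (2.14), that the bridge (ab) shifts column c_b → c_b + α c_a, and the helper names are ours) adds a bridge to a 0d cell of Gr(2, 4) and recomputes the decorated permutation from the span condition defining decorated permutations:

```python
from fractions import Fraction

def rank(rows):
    """Matrix rank via Gaussian elimination over exact fractions."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [x - f * y for x, y in zip(m[i], m[r])]
        r += 1
    return r

def decorated_permutation(C):
    """sigma(a) = first b in [a, a+n] with c_a in span(c_{a+1}, ..., c_b)."""
    k, n = len(C), len(C[0])
    sigma = []
    for a in range(n):
        if all(C[r][a] == 0 for r in range(k)):
            sigma.append(a + 1)            # zero column: sigma(a) = a
            continue
        for step in range(1, n + 1):
            cols = [(a + j) % n for j in range(1, step + 1)]
            span = [[C[r][c] for c in cols] for r in range(k)]
            if rank([span[r] + [C[r][a]] for r in range(k)]) == rank(span):
                sigma.append(a + 1 + step)
                break
    return sigma

def add_bridge(C, a, b, alpha):
    """Assumed bridge action: c_b -> c_b + alpha * c_a (1-indexed, mod n)."""
    n = len(C[0])
    out = [row[:] for row in C]
    for r in range(len(C)):
        out[r][(b - 1) % n] += alpha * out[r][(a - 1) % n]
    return out

# 0d cell of Gr(2,4) with white lollipops on legs 1 and 4.
C0 = [[1, 0, 0, 0],
      [0, 0, 0, 1]]
print(decorated_permutation(C0))                       # [5, 2, 3, 8]
print(decorated_permutation(add_bridge(C0, 1, 2, 3)))  # [2, 5, 3, 8]
```

The bridge (1 2) exchanges the first two entries of the permutation, exactly the transposition acting on σ_0 = {5, 2, 3, 8}.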
There are generally many ways to decompose a d-dimensional permutation σ into a sequence of adjacent transpositions acting on a 0d cell. In fact, no single chart covers all boundaries of a generic cell. However, an atlas of at most n standard BCFW charts is sufficient to cover all boundaries [1]. Every such chart defines a unique path through the poset from σ to some σ_0. In the example (2.11), the path descends from σ = {3, 5, 4, 6} to a 0d cell. The important point is that every BCFW decomposition corresponds to a unique path of length d that starts at σ and ends in a 0d cell. The converse is not true: some paths of length d that start at σ and end in a 0d cell do not correspond to any BCFW decomposition.

Generalized Decompositions
When evaluating residues, one will often encounter paths through the poset that do not coincide with any single standard chart. These paths contain edges that represent non-adjacent transpositions. Continuing with the earlier example, the following path ends in the same 0d cell as (2.11), but the first transposition (13) crosses a non-self-identified leg: Nevertheless, this path is certainly a possible route when evaluating residues, since every codimension-1 boundary is accessible from some adjacent chart [1]. We could, for instance, take the decomposition from (2.11) and take α_2 → 0. It is easy to see from the boundary measurement matrix that this yields the desired linear dependencies among columns. The generalization to any path through the poset is clear; each successive step involves computing the residue at a logarithmic singularity in some chart. Thus every path corresponds to some dlog form, not just the paths for which we have explicit plabic graphical representations. We will call these generalized decompositions and their coordinates generalized charts. In practice, computing residues along a particular path through the poset can always be done using standard adjacent charts, though it may involve changing coordinates at several steps along the way. As we will see in Section 4.1, the sign of the resulting residue depends only on the path taken, and not on the choice of reference charts along the way.


The Master Algorithm

In this section, we show that the relative sign between the dlog forms generated by two distinct charts can be obtained systematically. We will first show this result for standard BCFW charts constructed using only adjacent transpositions. Subsequently, we will discuss the generalized situation where we do not always have a simple graphical representation; the convention for deriving signs will be extended to cover the additional possibilities while maintaining consistency with the standard setup. The extended conventions will also lead to a simpler method for comparing charts.

Standard BCFW Charts
There are two cases that we need to address depending on whether the decompositions end in identical 0d cells or distinct ones. We will cover the identical case first and then deal with the other situation.

Charts with Identical 0d Cells
We assume first that the 0d cell labeled by σ_0 is the same for both paths. The d-dimensional cell labeled by σ is connected to σ_0 by two sequences of transpositions,

σ = σ_0 · (a_1 b_1)(a_2 b_2) · · · (a_d b_d) = σ_0 · (a′_1 b′_1)(a′_2 b′_2) · · · (a′_d b′_d) .   (3.1)

Graphically, the concatenation of the two paths creates a closed loop of length 2d in the poset, shown here schematically with one path denoted by a blue solid line and the other by a green dashed line: The relative orientation of the two dlog forms corresponding to the sequences can be obtained by a simple algorithm. Before presenting that result, we will need the following lemma.

Lemma 1. Any cell can be connected to the top cell by a sequence of strictly adjacent transpositions.

Proof. Let σ be the permutation labeling C ∈ Gr(k, n). When σ contains two neighboring elements satisfying σ(i) > σ(i+1) (with σ(n+1) = σ(1)+n), this is called an inversion. Such an inversion can be removed by applying the (strictly adjacent) transposition (i i+1). Since all entries in σ_top are ordered, one can reach the top cell by iteratively eliminating all inversions.
For example, the top cell of Gr(2, 6) can be reached from the 5-dimensional cell {2, 3, 4, 6, 7, 11} by the sequence of transpositions (6 7)(1 2)(2 3):

{2, 3, 4, 6, 7, 11} → {5, 3, 4, 6, 7, 8} → {3, 5, 4, 6, 7, 8} → {3, 4, 5, 6, 7, 8} .

Using this procedure, any BCFW sequence can be extended to reach the top cell using only strictly adjacent transpositions. Since (i i+1) does not cross any legs, the resulting sequence will be a valid BCFW sequence. Therefore, we may assume without loss of generality that the cell on which we seek to compare orientations is the top cell, because two sequences that lead to a cell of lower dimension can be trivially extended to top-cell sequences by appending the same transpositions to both paths. This will not affect the relative sign of the forms, since both will have identical pieces appended to them.
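The inversion-removal procedure in the proof of Lemma 1 is easy to automate; a small sketch (Python, 1-indexed transpositions, our naming) reproduces the Gr(2, 6) example above:

```python
def lift_to_top(sigma):
    """Repeatedly remove the first inversion sigma(i) > sigma(i+1)
    (with sigma(n+1) = sigma(1) + n) via the strictly adjacent
    transposition (i i+1); return the sequence and the final permutation."""
    sigma = list(sigma)
    n = len(sigma)
    seq = []
    while True:
        for i in range(1, n + 1):
            nxt = sigma[i % n] + (n if i == n else 0)
            if sigma[i - 1] > nxt:
                break
        else:
            return seq, sigma          # no inversion left: top cell reached
        if i < n:
            sigma[i - 1], sigma[i] = sigma[i], sigma[i - 1]
        else:  # swap positions n and n+1 == 1, shifting entries by n
            sigma[n - 1], sigma[0] = sigma[0] + n, sigma[n - 1] - n
        seq.append((i, i + 1))

seq, top = lift_to_top([2, 3, 4, 6, 7, 11])
print(seq)  # [(6, 7), (1, 2), (2, 3)]
print(top)  # [3, 4, 5, 6, 7, 8]
```

The output matches the sequence (6 7)(1 2)(2 3) quoted in the text.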
We turn now to the sign-comparison algorithm. The idea is to compare each BCFW chart to specially chosen reference charts whose relative orientation is easy to compute. They are chosen so that at each iteration, the loop in the poset (initially of length 2d) is shortened. Then the final relative orientation is the product over all the intermediate orientations.
1) Find the first position j at which the two sequences differ, i.e., (a_i b_i) = (a′_i b′_i) for all i < j but (a_j b_j) ≠ (a′_j b′_j). The transpositions with i ≥ j yield a closed loop of length ≤ 2d. If there is no such position, then the paths are identical, so return +1.
2) Let σ label the j-dimensional cell reached by the sequence of transpositions (a_1 b_1)(a_2 b_2) . . . (a_j b_j) and σ′ label that reached by (a′_1 b′_1)(a′_2 b′_2) . . . (a′_j b′_j). Using the following rules, construct reference charts to which the initial charts should be compared. Comparing the two reference charts produces a known sign; the relevant parts are displayed with each step, and their relative signs are derived in Appendix A. There are several cases to consider (with a < b < c < d < a+n):

(i) The j-dimensional cells σ and σ′ have a shared (j+1)-dimensional neighbor σ̂ = σ · (cd) = σ′ · (ab). Let u be the sequence generated by Lemma 1 for the cell labeled by σ̂. Then the reference sequences are w̃ = (a_1 b_1) . . . (a_{j−1} b_{j−1})(ab)(cd)u and w̃′ = (a_1 b_1) . . . (a_{j−1} b_{j−1})(cd)(ab)u. • The relative sign between the reference charts is −1.
(ii) In this case σ and σ′ have a shared (j+1)-dimensional neighbor σ̂ = σ · (bc) = σ′ · (ab). Let u be the sequence generated by Lemma 1 for the cell labeled by σ̂. Then the reference sequences and relative sign are: • The relative sign between the reference charts is −1.
(iii) Again σ and σ′ have a shared (j+1)-dimensional neighbor σ̂ = σ · (bc) = σ′ · (ab). Let u be the sequence generated by Lemma 1 for the cell labeled by σ̂. Then the reference sequences and relative sign are: • The relative sign between the reference charts is −1.
(v) Similar to the previous case, σ and σ′ do not have a common (j+1)-dimensional neighbor using only adjacent transpositions. Nonetheless, with σ̂ = σ · (bc) and σ̂′ = σ′ · (bc), σ̂ and σ̂′ have a common (j+2)-dimensional neighbor ρ = σ̂ · (cd) = σ̂′ · (ab). Let u be the sequence generated by Lemma 1 for the cell labeled by ρ. Then the reference sequences and relative sign are: • The relative sign between the reference charts is −1.
(vi) In this case, we must look further to find a shared cell above σ and σ′ using only adjacent transpositions. They have a shared (j+3)-dimensional great-grandparent ρ̂ = σ · (ab)(cd)(bc) = σ′ · (bc)(ab)(cd). Let u be the sequence generated by Lemma 1 for the cell labeled by ρ̂. Then the reference sequences and relative sign are: • The relative sign between the reference charts is −1.
3) Repeat this algorithm to compare w to w̃ and w′ to w̃′.

4) Return the product of the relative sign from step (2) times the result of each comparison in step (3).
The relative orientation of any two BCFW charts with identical endpoints can be compared with this algorithm. In the second part of the proof below, we show that the signs and reference charts presented in case (i) of step (2) are correct; this should also serve as an illustrative example of the algorithm in action.
Proof. We need to show that the algorithm will terminate in a finite number of iterations, and that the sign generated at each step is correct.
• We will first show that the algorithm terminates after a finite number of iterations. The reference sequences constructed in step (2) are chosen so that when the algorithm is called again in step (3) to compare w to w̃, the new input sequences agree in all positions i ≤ j. The same is true for the comparison of w′ and w̃′.
Step (1) searches for the first point at which the input sequences differ, so by construction, the next position will be at least j + 1, which is larger than in the previous iteration. Since j is bounded by d, the algorithm will eventually terminate.
• Next we will explain the results presented in case (i) of step (2). Since a < b < c < d < a+n, the two transpositions can be applied in either order without violating the adjacency requirement of BCFW sequences. Therefore σ and σ′ have a common neighbor σ̂, and both (a_1 b_1)(a_2 b_2) . . . (a_{j−1} b_{j−1})(ab)(cd)u and (a_1 b_1)(a_2 b_2) . . . (a_{j−1} b_{j−1})(cd)(ab)u are valid BCFW sequences for the top cell. Moreover, since (a_i b_i) = (a′_i b′_i) for all i < j, their corresponding forms differ only in positions j and j+1:

ω̃ = · · · ∧ dlog α_j ∧ dlog α_{j+1} ∧ · · · and ω̃′ = · · · ∧ dlog β_j ∧ dlog β_{j+1} ∧ · · · ,

where α_j and α_{j+1} are the weights associated with (ab) and (cd) in the first sequence, while β_j and β_{j+1} are associated with (cd) and (ab) in the second sequence. To determine the relationship between the two forms, we will use the plabic graph representations of the transpositions as BCFW bridges. Focusing on the j and j+1 parts of the graph, we find the following: The left and right diagrams can only be equivalent if β_j = α_{j+1} and β_{j+1} = α_j, which implies

dlog β_j ∧ dlog β_{j+1} = dlog α_{j+1} ∧ dlog α_j = −dlog α_j ∧ dlog α_{j+1} .

Thus the relative sign between the reference charts is −1.
The remaining cases are described in Appendix A.
Thus we now have an algorithm that correctly computes the relative signs for BCFW forms generated from identical 0d cells.

Charts with Distinct 0d Cells
The next step will be to extend the result to decompositions that terminate in distinct 0d cells. Let us start with the simplest case: finding the relative sign between two charts on a 1d cell labeled by σ_1. Since the cell is 1-dimensional, all but two entries in σ_1 are self-identified mod n. The two non-trivial positions can be labeled a and b such that a < b < a+n.
The 1d cell has exactly two boundaries, which can be accessed respectively by (ab) or (b a+n), so the two BCFW sequences whose charts we will compare are (ab) and (b a+n). It is straightforward to find the relative orientation of the corresponding forms,

ω_1 = dlog α and ω′_1 = dlog β,   (3.8)

using basic plabic graph manipulations. As we see in the following diagrams, several GL(1) rotations (2.7) combined with a merge and unmerge with the boundary (2.8) are sufficient to discover that β = 1/α, so that dlog β = dlog(1/α) = −dlog α. Therefore, the forms are oppositely oriented, i.e. ω′_1 = −ω_1.
More generally, we could consider charts on d-dimensional cells whose corresponding BCFW sequences are identical everywhere except for the first transposition. For example, we could find a Lemma 1 sequence, u, for the 1d cell above and compare the two charts defined by σ_0 · (ab)u and σ′_0 · (b a+n)u, with associated forms

ω_d = dlog α_1 ∧ dlog α_2 ∧ · · · ∧ dlog α_d and ω′_d = dlog β_1 ∧ dlog α_2 ∧ · · · ∧ dlog α_d .

Since BCFW sequences such as u use only adjacent transpositions, no bridge will ever be attached to legs a and b further from the boundary than the first bridge (ab), resp. (b a+n). Therefore, one can apply very similar logic as in the 1d case to find that β_1 = 1/α_1, so ω′_d = −ω_d.
We can easily extend this to any two charts whose 0d endpoints share a common 1d neighbor, σ 1 , by applying Algorithm 1. Each sequence can be related to a reference sequence with the same 0d cell, but which goes through σ 1 and then follows some arbitrarily chosen path, say u, to the top cell. If the same path is chosen to compare to both charts, then the relative orientation of the reference charts is −1.
Finally, this can be extended to any two charts terminating in arbitrarily separated 0d cells by iterating the previous step together with Algorithm 1. We combine this into Algorithm 2:

1) If σ_0 = σ′_0, compare the charts directly with Algorithm 1 and return the result.

2) Else, let a be the smallest index such that σ_0(a) < σ′_0(a), and let b be the smallest index such that σ_0(b) > σ′_0(b). We assume that a < b; if not, then the roles of σ_0 and σ′_0 should be exchanged. Let σ̂ = σ_0 · (ab), whose boundaries are σ_0 and σ̂_0 = σ̂ · (b a+n), and define u to be the Lemma 1 sequence from σ̂ to the top cell. Construct two reference sequences: w̃ = (ab)u and w̃′ = (b a+n)u.

3) Use Algorithm 1 to compare w to w̃, and repeat Algorithm 2 to compare w′ to w̃′.

4) Return the product of the results from step (3) times −1, due to the relative sign between w̃ and w̃′.
Proof. Assuming that the cells and edges in step (2) exist, the sign at each iteration is valid, because the algorithm uses Algorithm 1 to compare charts with identical 0d cells and returns −1 for each pair of sequences that differ only in the first position. It remains to show that step (2) is correct. Since all entries in the permutations labeling 0d cells are self-identified mod n, the definitions of a and b imply that:

σ_0(a) = a, σ_0(b) = b+n, σ′_0(a) = a+n, and σ′_0(b) = b.   (3.12)

There exists another 0d cell σ̂_0, which is identical to σ_0 except that

σ̂_0(a) = a+n and σ̂_0(b) = b.   (3.13)

The new cell σ̂_0 has two important properties: • The first is that σ_0 and σ̂_0 have a common 1d neighbor, σ̂ = σ_0 · (ab) = σ̂_0 · (b a+n). Since all other entries in σ_0 and σ̂_0 are self-identified mod n, both (ab) and (b a+n) are adjacent. Thus the cells and reference sequences of step (2) are uniquely defined and satisfy the standard adjacency requirements.
• The second is that σ̃₀ differs from σ₀′ at fewer sites than σ₀ does. If there are m ≥ 1 locations where σ₀(i) ≠ σ₀′(i), then there are only m − 2 locations where σ̃₀(i) ≠ σ₀′(i). Since all entries in 0d cells are self-identified mod n, and k entries are greater than n, m must be even and no larger than 2k. Hence Algorithm 2 will complete after at most k iterations.
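The cell-hopping step of Algorithm 2 can be sketched in a few lines. The following Python snippet is an illustration only: the helper names are ours, and the full orientation bookkeeping of Algorithms 1 and 2 is omitted. It finds the indices a and b, constructs the intermediate 0d cell σ̃₀, and checks that the number of differing sites drops by two, using a pair of 0d cells of Gr(2, 4):

```python
def one_iteration(sigma0, sigma0p, n):
    """One cell-hopping step of Algorithm 2 (sketch only): find the
    indices a, b and build the intermediate 0d cell that differs from
    sigma0p at two fewer sites.  Cells are dicts {i: sigma(i)} with
    sigma(i) in {i, i+n}, since 0d cells are self-identified mod n."""
    a = min(i for i in sigma0 if sigma0[i] < sigma0p[i])  # sigma0(a)=a, sigma0p(a)=a+n
    b = min(i for i in sigma0 if sigma0[i] > sigma0p[i])  # sigma0(b)=b+n, sigma0p(b)=b
    assert a < b  # otherwise exchange the roles of sigma0 and sigma0p
    tilde = dict(sigma0)
    tilde[a], tilde[b] = a + n, b          # now agrees with sigma0p at a and b
    return a, b, tilde

def num_diffs(s, t):
    """Number of sites where two 0d cells differ (the m of the text)."""
    return sum(1 for i in s if s[i] != t[i])

# two 0d cells of Gr(2,4): exactly k = 2 entries are shifted by n = 4
sigma0  = {1: 1, 2: 6, 3: 3, 4: 8}
sigma0p = {1: 5, 2: 2, 3: 7, 4: 4}
a, b, tilde = one_iteration(sigma0, sigma0p, 4)
print(a, b, tilde)                                            # -> 1 2 {1: 5, 2: 2, 3: 3, 4: 8}
print(num_diffs(sigma0, sigma0p), num_diffs(tilde, sigma0p))  # -> 4 2
```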
Therefore Algorithms 1 and 2 are together sufficient to find the relative orientation of any two standard BCFW charts.

Generalized Decompositions
The plabic graph representation explained in Section 2.3.1 is convenient for the study of standard BCFW charts. These charts are a subset of the generalized decompositions, so the rules for defining reference charts and relative signs in Algorithms 1 and 2 still apply. Since cases (i)-(vi) in step (2) of Algorithm 1 cover all possible comparisons, the techniques introduced in the previous section are in fact sufficient to find the relative orientation of any two charts.
However, there is an advantage to studying the generalized charts more closely. Cases (iv)-(vi) in Algorithm 1 differ markedly from the first three cases because their reference charts required two or three steps to meet at a common cell, as opposed to only one step in the earlier cases. Comparing the reference paths of the first three cases shows that they define quadrilaterals, while cases (iv) and (v) define hexagons and (vi) defines an octagon. The relevant sections of the poset are depicted in (3.14), with the solid lines indicating the paths used in Algorithm 1 and the dashed lines showing the additional edges that were not used (the finely dotted lines indicate edges that do not always exist); we include the quadrilaterals from cases (i)-(iii) for completeness. The extra transpositions permitted in generalized charts allow the hexagons and octagons to be refined into quadrilaterals. Some of the internal quadrilaterals are equivalent to those from cases (i)-(iii), but there are also new ones, and the relative orientation of two charts which differ only by one of the new quadrilaterals needs to be determined. To fix the signs, we require that the refined polygons produce the same overall signs as above when split into charts that differ by the interior quadrilaterals.
The hexagon from case (iv) can be split into a pair of quadrilaterals two ways. Either way, the top quadrilateral appears in one of the first three cases, which implies that the relative orientation around the lower quadrilateral must be −1 in order to agree with the overall sign of +1 derived in Appendix A.
In case (v), the edges shown with dashed lines always exist, so the hexagon can be split into three quadrilaterals. The one on the lower left is equivalent to case (iii), the lower right is equivalent to case (ii), and the top quadrilateral is identical to case (i). Hence the product of the three individual signs is (−1)³ = −1, in agreement with the result found in Appendix A. In some situations, the hexagon can also be split up using the finely dotted lines; then the top two quadrilaterals are equivalent to cases (ii) and (iii), which implies that the relative orientation around the lower quadrilateral must also be −1.
Finally, the octagon from case (vi) can also be split into three quadrilaterals. The top one matches case (iii) and the middle is equivalent to case (ii), so the relative orientation around the lower one must be −1 to agree with Appendix A. This is unsurprising considering that the two transpositions are completely disjoint, so applying them in opposite orders should make the forms differ by a minus sign, similar to example (1.5) in the Introduction.
This exhausts all possible quadrilaterals that could appear in the poset. Hence the relative orientation between any two charts that differ by a quadrilateral is −1. A significant consequence of this result is the existence of a boundary operator which manifestly squares to zero, as we discuss further in Section 4.

The Master Algorithm
Before proceeding to discuss various applications, we present a more efficient method to compute the relative orientation of any two charts. In each iteration of Algorithm 1, every edge in the reference charts enters into two comparisons (once to the corresponding initial chart, and once to the other reference chart), while the edges in the initial charts enter only one comparison (to the associated reference chart). Therefore, if we assign ±1 to each edge such that the product of signs around any quadrilateral is −1, then the signs on the reference chart edges will appear twice and hence square to 1, while the product of signs on the initial chart edges will combine to produce the same overall sign as found by Algorithm 1. One method for producing a consistent set of edge signs is presented in Appendix B [12].
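The consistency condition on the edge signs (a product of −1 around every quadrilateral) can be illustrated on a toy poset. The subset lattice below is far simpler than the Grassmannian cell poset, and the sign rule is the standard simplicial one rather than the Appendix B construction, but the quadrilateral condition it satisfies is exactly the one required here; all names are ours:

```python
from itertools import combinations

def edge_sign(S, x):
    """Sign on the toy-poset edge S -> S ∪ {x}: the simplicial
    convention (-1)^(number of elements of S below x)."""
    return (-1) ** sum(1 for y in S if y < x)

# Check: the product of the four edge signs around every quadrilateral
# (add x then z, versus z then x) is -1.
ground = range(4)
for r in range(len(ground) - 1):
    for S in combinations(ground, r):
        S = frozenset(S)
        for x, z in combinations(sorted(set(ground) - S), 2):
            prod = (edge_sign(S, x) * edge_sign(S | {x}, z)
                    * edge_sign(S, z) * edge_sign(S | {z}, x))
            assert prod == -1
print("all quadrilaterals have sign product -1")
```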
When changing 0d cells with Algorithm 2, one should think of taking a closed loop that traverses down and back each branch in (3.10), encountering each edge twice. Consequently, every edge sign appears twice and squares to 1, so the only sign from this step will be the −1 due to the relative sign between the reference charts. However, one can easily check that applying Algorithm 1 will introduce one additional copy of each branch, so those edges should be included in the overall loop. The result of chaining several of these together is a sawtooth path between the two 0d cells (the bold red line in (3.15)) that contributes signs from each edge along the way and a minus sign for each 1d cell along the path. This method for computing signs is summarized in the Master Algorithm:

Master Algorithm
Input: Two BCFW sequences of length d = k(n − k): w = (a₁b₁)(a₂b₂) · · · (a_d b_d) and w′ = (a₁′b₁′)(a₂′b₂′) · · · (a_d′b_d′).

1) If both BCFW paths terminate in the same 0-dimensional cell, then the relative orientation is given by the product of edge signs along the two paths. Equivalently, it is the product of signs around the closed loop of length 2d obtained by traversing down one path and back up the other.
2) If they terminate in different 0d cells, then the relative orientation also depends on the signs along a path connecting the two 0d cells. One can obtain such a path by a sawtooth pattern between 0d and 1d cells. The sign of this path is given by the product of edge signs along the path, times (−1)^(m/2), where m is the number of locations i satisfying σ₀(i) ≠ σ₀′(i) (m/2 is the number of 1d cells in the sawtooth path). Thus the relative orientation is given by the product of signs along each BCFW path, times the connecting-path sign. Equivalently, it is the product of signs around the closed loop obtained by concatenating the three paths (3.15), times the signs for the 1d cells.
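Step (1) of the Master Algorithm can be illustrated in a toy poset: the product of edge signs around the closed loop formed by two maximal chains reproduces the relative orientation one would get by reordering a wedge product. This sketch uses the simplicial sign rule on the subset lattice, not the actual BCFW edge signs, and all helper names are ours:

```python
from itertools import permutations

def edge_sign(S, x):
    """Simplicial edge sign on the subset lattice (toy model)."""
    return (-1) ** sum(1 for y in S if y < x)

def chain_sign(order):
    """Product of edge signs along the maximal chain that adds the
    elements of `order` one at a time, starting from the empty set."""
    sign, S = 1, set()
    for x in order:
        sign *= edge_sign(S, x)
        S.add(x)
    return sign

def parity(p):
    """Sign of the permutation p, i.e. the wedge-reordering sign."""
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p))
              if p[i] > p[j])
    return (-1) ** inv

# The loop sign (down one chain, back up the other) is the product of
# the two chain signs, and it matches the relative wedge orientation.
for p in permutations(range(4)):
    for q in permutations(range(4)):
        assert chain_sign(p) * chain_sign(q) == parity(p) * parity(q)
print("loop sign reproduces the relative wedge-product orientation")
```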
So far, all the charts have been assumed to have minimal length, i.e. k(n − k) for charts on the top cell. In other words, each transposition in the sequence increases the dimension by 1. However, the Master Algorithm indicates that any path through the poset can be compared to any other path, even if they zig-zag up and down.
We have verified this algorithm by implementing it in Mathematica and applying it to a variety of charts whose orientations are known by other methods. This includes pairs of randomly generated NMHV charts of the type studied in [11], as well as higher k charts whose matrix representatives have identical GL(k) gauge fixings; the latter can be compared by directly equating the entries. In addition, the orientations of 500 distinct charts on the 10d cell {5, 3, 8, 9, 6, 7, 12, 10, 14, 11} ∈ Gr(3, 10) were computed using an independent method due to Jacob Bourjaily and Alexander Postnikov [13]. The results agreed perfectly with our algorithm.

Applications
In the remainder of this paper, we will discuss several areas in which the relative orientations are important. The end results are not surprising; they were anticipated and used in several previous works. The new contribution of this section will be to put these ideas on firm combinatorial footing, so for example, spurious poles in the tree contour will cancel exactly instead of mod 2 as in [1].

Comparing Residue Orientations
We have shown that any two charts can be compared by taking the product of edge signs around a closed loop in the poset of cells. This applies to any charts, even those corresponding to generalized decompositions with non-adjacent transpositions. As explained in Section 2.3.2, one can follow any path through the poset by taking residues out of order in standard BCFW charts and changing coordinates as needed. Due to the Master Algorithm, the final sign on the residue depends only on the path, not on the intermediate choices of charts. We will demonstrate this with a simple example.
There is a standard chart on {4, 3, 6, 5} obtained by the adjacent path P₁ down to a 0d cell. Taking either coordinate to vanish lands in a codimension-1 boundary, so we are allowed to take them to zero in either order. There is also a non-adjacent path P₂ ending in the same 0d cell, which begins with {4, 3, 6, 5} --(14)--> {5, 3, 6, 4}. We will now evaluate the 0d residue along both paths using the coordinate chart (4.2), ignoring the delta functions in (1.1). The convention for evaluating residues is to take a contour around α_i = 0 only when α_i is the first variable in the form. Along P₁ we first take α₂ → 0 and then α₁ → 0, which is the order in which they appear in the form; hence the residue is +1. We can follow P₂ by taking α₁ → 0 first, so we pick up a factor of −1 from reversing the order of the wedge product. Thus the residue along P₂ is −1. The relative sign is −1, exactly as our algorithm predicts, because the paths differ by a quadrilateral.
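The wedge-product bookkeeping in this example can be mimicked mechanically. In the sketch below (our own minimal convention, matching the rule that a residue is taken only when the variable stands first in the form), a form is an ordered tuple of dlog factors, and each residue contributes the sign needed to anticommute its variable to the front:

```python
def take_residue(form, var):
    """Take the residue in `var` from an ordered wedge of dlog factors:
    anticommute var to the front, picking up (-1)^position, then drop it."""
    i = form.index(var)
    return (-1) ** i, form[:i] + form[i + 1:]

def residue_along(form, order):
    """Sequential residues in the given order; returns the overall sign."""
    total = 1
    for var in order:
        s, form = take_residue(form, var)
        total *= s
    return total

form = ("a2", "a1")   # dlog a2 ^ dlog a1, as in the example above
print(residue_along(form, ["a2", "a1"]),   # P1: a2 first, then a1
      residue_along(form, ["a1", "a2"]))   # P2: a1 first, then a2
# -> 1 -1
```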
In terms of edge signs, there is a Z₂ symmetry at every vertex of the poset that allows us to flip the signs on all of the attached edges without changing the overall sign of any closed loop. Since the product of signs around a quadrilateral is −1, we can use this symmetry to fix the edge signs so that they exactly agree with the above computation at each step. The generalization to more complicated charts is straightforward.
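The Z₂ gauge moves are easy to check explicitly. The following sketch (an arbitrary small signed graph of our own devising, not a BCFW poset) verifies that flipping all edges at a vertex leaves every closed-loop sign unchanged, since a closed loop meets each vertex in an even number of its edges:

```python
import random

def loop_sign(signs, cycle):
    """Product of edge signs around a closed cycle of vertices."""
    s = 1
    for u, v in zip(cycle, cycle[1:] + cycle[:1]):
        s *= signs[frozenset((u, v))]
    return s

def flip_vertex(signs, v):
    """The Z2 gauge move: flip the sign of every edge attached to v."""
    return {e: (-s if v in e else s) for e, s in signs.items()}

random.seed(0)
# a quadrilateral a-b-c-d plus a diagonal a-c, with random edge signs
edges = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a"), ("a", "c")]
signs = {frozenset(e): random.choice((1, -1)) for e in edges}
cycles = [["a", "b", "c", "d"], ["a", "b", "c"], ["a", "c", "d"]]
for v in "abcd":
    flipped = flip_vertex(signs, v)
    assert all(loop_sign(signs, c) == loop_sign(flipped, c) for c in cycles)
print("closed-loop signs are invariant under vertex flips")
```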

Boundary Operator
Define the signed boundary operator ∂ acting on a cell C to be the sum of all boundaries of C, weighted by the ±1 sign on the edge connecting each boundary to C:

∂C = Σ_i w(C, C_i) C_i ,   (4.5)
where the sum is over all cells C_i in the boundary of C and w(C, C_i) = ±1 is the weight on the edge between C and C_i. Equivalently, we could take the sum over all cells of the appropriate dimension and define w(C, C_i) = 0 whenever there is no edge between them.⁷ Applying the boundary operator again therefore yields a sum of codimension-2 boundaries of C, each one weighted by the product of the sign on the edge connecting it to its parent times the sign on its parent from the first application of ∂:

∂²C = Σ_i Σ_{j(i)} w(C, C_i) w(C_i, C_{j(i)}) C_{j(i)} ,   (4.6)
where i runs over the boundaries C_i of C and j(i) runs over the boundaries C_{j(i)} of C_i. In order for this result to vanish, every codimension-2 cell must appear twice and with opposite signs.
We will first show that each cell appears twice,⁸ and then it will be clear from our setup that the signs are opposite. Let σ be the permutation labeling C. Each edge represents a transposition acting on σ, so codimension-2 cells arise from pairs of transpositions (ab)(cd). If a, b, c, d are all distinct, then the transpositions can be applied in either order, and thus each cell appears twice. If only three are distinct, then there are a few cases to consider:

(ac)(ab) ≡ (ab)(bc)                            if σ(a) < σ(c) < σ(b),
(ac)(bc) ≡ (bc)(ab)                            if σ(b) < σ(a) < σ(c),
(bc)(ac) ≡ (ab)(bc) and (ab)(ac) ≡ (bc)(ab)    if σ(a) < σ(b) < σ(c).

Thus there are two distinct routes from C to every codimension-2 cell in ∂²C. Each pair of routes defines a quadrilateral in the poset, which we have seen implies a relative minus sign between the residues. Hence the boundary operator manifestly squares to zero.
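The mechanism is the same one that makes the simplicial boundary square to zero, which can be verified directly in a toy model (the subset lattice with the standard simplicial signs, not the BCFW edge signs themselves; all names are ours):

```python
def edge_sign(S, x):
    """Simplicial sign on the toy-poset edge S -> S ∪ {x}."""
    return (-1) ** sum(1 for y in S if y < x)

def boundary(chain):
    """Signed boundary of a formal sum of subsets: each subset maps to
    the sum of its one-element-smaller boundaries, weighted by the edge
    sign.  Chains are dicts {frozenset: coefficient}."""
    out = {}
    for S, c in chain.items():
        for x in S:
            T = S - {x}
            out[T] = out.get(T, 0) + c * edge_sign(T, x)
    return {T: c for T, c in out.items() if c != 0}

# every codimension-2 subset appears twice with opposite signs, so the
# double boundary cancels identically
top = frozenset(range(5))
assert boundary(boundary({top: 1})) == {}
print("the signed boundary operator squares to zero")
```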

Locality of Tree Contours
The n-particle N^k MHV tree amplitude can be computed as a linear combination of residues of (1.1) with coefficients ±1. Each residue appearing in the amplitude corresponds to a 4k-dimensional cell, whose remaining degrees of freedom are fixed by the 4k bosonic delta functions in (1.1). A tree contour is defined as any choice of contour on the top cell that produces a valid representation of the tree amplitude; there are many equivalent representations due to residue theorems. Tree-level BCFW recursion relations [18,19,20] written in terms of on-shell diagrams [1] provide one technique for finding an appropriate set of cells, but the on-shell diagram formulation does not generate the relative signs between them. This will be resolved shortly.
The tree amplitude diverges for certain configurations of the external momenta; these are called local poles, and physically they are interpreted as factorization channels in which an internal propagator goes on-shell. In the Grassmannian residue representation, such poles correspond to (4k − 1)-dimensional cells, i.e. boundaries of the 4k-dimensional cells. The amplitude is not manifestly local in this formulation, meaning that some boundary cells translate to non-local poles: momentum configurations with non-physical divergences. A key feature of the tree contour is that all of the local poles appear precisely once in the residue representation of the amplitude, while any non-local poles appear twice [1]. It was conjectured that the two appearances of each non-local pole should come with opposite signs so they cancel in the sum, similar to the vanishing of ∂². We are now equipped to prove that claim.
The boundary operator (4.5) can be used to define signed residue theorems. Even though the sign of each term in (4.5) is not fixed, the relative sign between any two cells in the boundary, say C_1 and C_2, will always agree whenever they both appear in the boundary of a cell. It is easy to see that this is true if C_1 and C_2 share a common codimension-2 boundary, since they form a quadrilateral: the edges connecting C_1 and C_2 to their shared boundary are the same no matter which C is used in the initial boundary operation, so the relative sign between the edges connecting C to C_1 and C_2 cannot depend on that choice either. There is one situation in which they may not share a codimension-2 boundary, but in that case there will always be a third cell C_3 which has common boundaries with both of them; cf. case (vi) in Section 3.2. Now, following the intuition of [1], we find residue theorems by requiring that the boundary of every (4k + 1)-dimensional cell vanishes:

∂C = 0 for every (4k + 1)-dimensional cell C,   (4.8)

where each cell in ∂C stands for the corresponding residue. We define the tree contour to encircle each singularity of the measure exactly once, so any residue appearing in the amplitude will have a coefficient ±1. The residue theorems (4.8) can change which poles are included in the contour, but they will never cause residues to appear more than once. This implies that the relative sign between any two cells in the amplitude must match the relative sign of those cells in the residue theorems. Therefore, by the same logic that showed ∂² = 0, it follows that any (4k − 1)-dimensional cell appearing twice in the boundary of the tree amplitude will show up with opposite signs. Hence all non-local poles cancel in the sum.
We have checked numerically that this choice of signs correctly cancels all non-local poles for BCFW representations of the tree amplitude with n = 5, . . . , 13 and k = 3, . . . , ⌊n/2⌋.
To show that (bc) can be applied to σ, we need to show that σ(c) < σ(b) and that there is no q ∈ (b, c) such that σ(q) ∈ (σ(c), σ(b)). The condition σ(c) < σ(b) is satisfied since it is equivalent to σ′(a) < σ′(c), and since no legs are touched between b and c, the condition on q ∈ (a, c) implies that no q ∈ (b, c) has σ(q) ∈ (σ(c), σ(b)). Thus (bc) is an allowed transposition on σ, so the path w̃ exists.
Note that the above analysis was valid for any charts, not just adjacent ones. However, to compute the relative sign, we will focus on the restricted set of charts for which all q ∈ (a, c) satisfy σ(q) ≡ q mod n, namely the standard BCFW charts. The corresponding plabic graphs can be manipulated to a common layout using the equivalence moves (2.7) and (2.8); we find that they are equivalent only with certain identifications of the coordinates. Plugging these into the dlog forms and using

dlog α_j ∧ dlog(α_j α_{j+1}) = dlog α_j ∧ (dlog α_j + dlog α_{j+1}) = −dlog α_{j+1} ∧ dlog α_j ,   (A.4)

we find that the two forms are oppositely oriented.

iii) (ac) vs. (bc)
This situation is analogous to case (ii), so we can skip directly to comparing the graphs. Restricting again to the standard BCFW situation where σ(q) ≡ q mod n for all q ∈ (a, c), we can perform a sequence of merge/delete and GL(1) rotations on the corresponding plabic graphs, which shows that the two graphs are equivalent only with certain identifications of the coordinates. Then, using dlog(α_j /α_{j+1}) = dlog α_j − dlog α_{j+1}, we find that the two forms are oppositely oriented.

B.1 Reduced Words
Each decorated permutation can equivalently be represented by one or more reduced words: minimal-length sequences of letters obeying certain equivalence relations. In this case, the letters s_i are generators of the affine permutation group S̃_n. They are defined with the following properties and relations:

s_{i+n} = s_i ,
s_i s_i = 1 ,    s_i s_{i+1} s_i = s_{i+1} s_i s_{i+1} ,    s_i s_j = s_j s_i for |i − j| > 1 (mod n).   (B.1)

We will refer to the relations in the second line as, respectively, the reduction move, the braid move, and the swap move. As we showed in Lemma 1, any d-dimensional cell C in Gr(k, n) can be reached from the top cell by a sequence of k(n − k) − d transpositions s_i, and that sequence defines a reduced word labeling C. There are generally many distinct reduced words for a given cell, but they are all equivalent due to the relations in (B.1).
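The relations above are easy to verify for the finite symmetric group (the affine case additionally identifies indices mod n, which we do not model here); a quick check with explicit permutations, using our own helper names:

```python
def s(i, n):
    """Simple transposition s_i = (i, i+1) acting on range(n)."""
    p = list(range(n))
    p[i], p[i + 1] = p[i + 1], p[i]
    return tuple(p)

def mult(p, q):
    """Compose permutations: apply q first, then p."""
    return tuple(p[q_k] for q_k in q)

n = 5
for i in range(n - 1):
    assert mult(s(i, n), s(i, n)) == tuple(range(n))              # reduction
for i in range(n - 2):
    assert (mult(mult(s(i, n), s(i + 1, n)), s(i, n))
            == mult(mult(s(i + 1, n), s(i, n)), s(i + 1, n)))     # braid
for i in range(n - 1):
    for j in range(i + 2, n - 1):
        assert mult(s(i, n), s(j, n)) == mult(s(j, n), s(i, n))   # swap
print("reduction, braid, and swap relations hold")
```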
Any two cells C and C′ of dimensions d and d + 1 that share an edge in the poset, i.e. are related by a single boundary operation, are straightforwardly connected in the language of reduced words. Given a reduced word f of length δ = k(n − k) − d on the lower-dimensional cell C, there is a unique reduced word f′ on C′ of length δ′ = δ − 1 obtained by deleting a single letter from f [21]. Specifically, for f = s_{i_1} s_{i_2} · · · s_{i_j} · · · s_{i_δ}, there is a unique s_{i_j} such that f′ = s_{i_1} s_{i_2} · · · ŝ_{i_j} · · · s_{i_δ}, where the hat denotes deletion. It is easy to construct a (generally non-reduced) word t such that f ≡ t f′; one can check that t = s_{i_1} s_{i_2} · · · s_{i_{j−1}} s_{i_j} s_{i_{j−1}} · · · s_{i_2} s_{i_1} accomplishes the desired effect by repeated reduction moves. Given that the two cells are also related by a transposition of the form (ab), it is not surprising that repeated application of the relations (B.1) shows that t ≡ s_a s_{a+1} · · · s_{b−2} s_{b−1} s_{b−2} · · · s_{a+1} s_a, which is a reduced word representation of (ab).
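The identity f ≡ t f′ follows from repeated reduction moves alone, so it can be checked by multiplying out the words as permutations. The sketch below uses our own helper functions, and the word f and deleted position j are arbitrary sample choices:

```python
def perm_of_word(word, n):
    """Evaluate a word in the simple transpositions s_i = (i, i+1)
    as a permutation of range(n), multiplying letters left to right."""
    p = list(range(n))
    for i in word:
        q = list(range(n))
        q[i], q[i + 1] = q[i + 1], q[i]
        p = [q[p_k] for p_k in p]   # apply s_i after the word so far
    return tuple(p)

f = [0, 2, 1, 0, 2]            # a sample word in s_0, s_1, s_2 (n = 4)
j = 3                          # delete the letter at 0-indexed position j
f_prime = f[:j] + f[j + 1:]
t = f[:j + 1] + f[:j][::-1]    # s_{i_1} .. s_{i_j} .. s_{i_2} s_{i_1}

# the tail of t cancels the head of f' by reduction moves, leaving f
assert perm_of_word(t + f_prime, 4) == perm_of_word(f, 4)
print("t * f' reduces to f as a permutation")
```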

B.2 Decorating Edges
Choose a representative reduced word for every cell in the poset; we will refer to this as the standard word on that cell. This is similar to choosing a particular Lemma 1 sequence for each cell. From a cell C labeled by the standard word f, every cell C′ that has C as a boundary can be reached by deleting some s_{i_j} from f. This yields a word f′ on C′ as explained above. If f′ is identical to the standard word on C′, then weight the corresponding edge with (−1)^j. It is easy to check that deleting two letters s_{i_{j_1}} and s_{i_{j_2}} in opposite orders produces the desired factor of −1 around a quadrilateral: for one order we would find (−1)^(j_1 + j_2 − 1), since the position of the second letter shifts down by one after the first deletion, while for the other order we would find (−1)^(j_1 + j_2), so they differ by −1.
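The sign bookkeeping for the two deletion orders is a one-line parity check; the sketch below (a hypothetical helper of our own, with 1-indexed letter positions as in the text) confirms that the two orders always differ by −1:

```python
def two_deletion_sign(j1, j2, first):
    """Product of the edge signs (-1)^position for deleting the letters
    at 1-indexed positions j1 < j2, in the given order."""
    if first == j1:                     # delete j1 first; j2 shifts to j2-1
        return (-1) ** j1 * (-1) ** (j2 - 1)
    return (-1) ** j2 * (-1) ** j1      # delete j2 first; j1 is unshifted

for j1 in range(1, 6):
    for j2 in range(j1 + 1, 7):
        assert two_deletion_sign(j1, j2, j1) == -two_deletion_sign(j1, j2, j2)
print("the two deletion orders differ by -1")
```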
However, it is not always possible to delete both transpositions in either order: one may have to delete different transpositions to reach the same cell by two different routes, and the final reduced word in each case may differ. Thus we need to find the sign difference between two reduced words. Any two reduced words can be mutated into each other using just the braid and swap moves defined in (B.1). By decorating these rules with ±1, we can find the desired difference between the words. In fact, we already determined those signs in Appendix A, because the generators act as adjacent transpositions: the braid move is a special case of (iv), so it is decorated with a +1, and the swap move is a special case of (i), so it is decorated with a −1. Hence the relative sign between two words is (−1) raised to the number of swap moves required to transform one into the other.
The number of swaps can be determined directly from the inversion list of each word. A reduced word induces a unique ordering on the set of inversions of the permutation labeling the cell (not all orderings are possible). The number of swap moves needed to transform one word into another is then the number of pairs of inversions (i, j) and (k, l), with i, j, k, l all distinct, that appear in different order in the two inversion lists.
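Counting discordant pairs of letter-disjoint inversions is straightforward; a sketch with our own function name and a made-up pair of inversion lists:

```python
from itertools import combinations

def swap_sign(list1, list2):
    """(-1)^(number of pairs of inversions appearing in opposite order
    in the two lists), counting only pairs (i,j),(k,l) with all four
    letters distinct; pairs sharing a letter are related by braid
    moves, which carry a +1."""
    pos2 = {inv: k for k, inv in enumerate(list2)}
    count = 0
    for x, y in combinations(list1, 2):       # x before y in list1
        if set(x).isdisjoint(y) and pos2[x] > pos2[y]:
            count += 1
    return (-1) ** count

# two orderings of the same set of inversions
A = [(1, 2), (1, 3), (2, 4), (3, 4)]
B = [(3, 4), (1, 3), (1, 2), (2, 4)]
print(swap_sign(A, B))   # only the disjoint pair (1,2),(3,4) flips -> -1
```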