Properties of the contraction map for holographic entanglement entropy inequalities

We present a deterministic way of finding contraction maps for candidate holographic entanglement entropy inequalities modulo choices due to actual degeneracy. We characterize its complexity and give an argument for the completeness of the contraction map proof method as a necessary and sufficient condition for the validity of an entropy inequality for holographic entanglement.

1 Introduction

Holographic entropy inequalities constrain the allowed quantum states in conformal field theory that are consistent with the existence of a semi-classical gravity dual via the Ryu-Takayanagi formula [1]. These constraints on entanglement are stronger than those obeyed by generic quantum states, as first discovered in [2]. The program of characterizing them systematically was initiated in [3] and continued in [4, 5], where in particular the classification of the set of entropic constraints was completed for mixed states on five parties. Further work towards characterizing holographic entanglement includes the study of bit threads [6-11], the connection to quantum marginal independence problems [12-15], generalizations to other measures of holographic entanglement [16-20], and extensions of the tools and proof techniques beyond the holographic setting [21-24].
An essential tool introduced by [3] is a combinatorial strategy for proving the validity of holographic entropy inequalities, dubbed the contraction map proof method. The method states that an inequality is valid if there exists a certain map obeying a contraction property dictated by the inequality (this will be discussed in more detail in Section 2). Though powerfully general, its inner workings are poorly understood, resulting in highly inefficient implementations which prove onerous, when not practically impossible to run, for inequalities involving more than five parties. Up until now, the method employed to discover these contraction maps has been based on greedy algorithms, heavily reliant on the aid of heuristics, that brute-force search for global solutions to the contraction property. This is highly unsatisfactory not just at the practical level, but also at the fundamental one: the contraction condition is intrinsically geometric, and a better understanding of it should improve not only the proof method but also our holographic interpretation of these inequalities.
In this work, we develop a new deterministic technique for finding contraction maps for candidate inequalities. In particular, we find and prove a set of rules that provide unique solutions to the contraction condition that determines the map, thereby avoiding the arbitrary local choices that greedy approaches suffer from. We empirically demonstrate that our set of rules is strong enough that no backtracking is needed on any of the heretofore-proven inequalities for five, six and seven parties. We characterize the algorithmic complexity of implementing these deterministic rules and provide an argument that the contraction map proof method is a complete proof technique. In so doing, we show that this deterministic implementation of the contraction map proof technique provides a notable improvement, both performance-wise and conceptually, over the existing greedy approaches. In addition to the exponential speed-up, our algorithm can easily handle inequalities that are computationally intractable by the state-of-the-art greedy algorithm.
The organization of this paper is as follows. In Section 2, we review relevant aspects of the contraction map proof method as applied to holographic entanglement entropy. In Section 3, we introduce the new technical notions needed for the new deterministic techniques, state those techniques, and prove that they must be respected by all contraction maps. In Section 4, we characterize the algorithmic complexity of implementing these techniques and give an argument for the completeness of the contraction map proof method in general. Finally, in Section 5 we conclude with a discussion of our results and future directions.
2 Review of the Contraction Map Proof Method

We first briefly introduce the basic ingredients and notation necessary to state the contraction map proof method for holographic entanglement entropy inequalities (see [3, 5, 29] for more details and intuition). In holography, the entropy S(X) of a boundary region X is given by

S(X) = Area(X̃) / (4 G_N),    (2.1)

where X̃ is the Ryu-Takayanagi (RT) surface for X, and G_N is Newton's constant [1]. As usual, the RT surface obeys a homology condition by which there exists a homology surface W_X whose boundary is ∂W_X = X ∪ X̃. Given a set of n ≥ 1 boundary subsystems {X_i}_{i=1}^n, one can consider 2^n − 1 distinct subsystems X_I ≡ ∪_{i∈I} X_i, one for each non-empty subset I ⊆ [n] ≡ {1, ..., n}. Using the shorthand S(I) ≡ S(X_I), entropy inequalities can then be canonically written in the form

∑_{l=1}^{M} α_l S(I_l) ≥ ∑_{r=1}^{N} β_r S(J_r),    (2.2)

where α_l, β_r ≥ 1 by convention, and the I_l, J_r subsets are all distinct.
One can map the geometric picture of the Cauchy slice of the bulk geometry to a graph picture, and vice-versa, where each distinct subsystem in the partitioned geometry is assigned to a vertex on the graph. An entanglement entropy inequality that holds on the graph also holds for the holographic geometry. From here on, we will refer to such inequalities as entropy inequalities on graphs. The problem of proving an entropy inequality candidate for a holographic geometry can thus be translated into the problem of finding a contraction map from a hypercube representing the left-hand side (LHS) to a hypercube representing the right-hand side (RHS) of an inequality.
For the contraction map proof method, we consider the RT surfaces on the LHS, {X_{I_l}} for l = 1, ..., M, and use them to partition the bulk Cauchy slice where they lie into bulk regions. The resulting bulk regions W_x can be uniquely labelled by bitstrings x ∈ {0,1}^M via the following inclusion/exclusion scheme:

x_l = 1 if the region lies on the homology surface of X_{I_l}, and x_l = 0 otherwise.    (2.3)

A particularly relevant set of bitstrings are those labeling regions adjacent to boundary subsystems: for each i ∈ [n+1] there is a bitstring x^{(i)} whose l-th bit equals 1 if and only if i ∈ I_l, where the (n+1)-th bitstring is all zeroes and is associated with the purifier. These bitstrings are often referred to as occurrence bitstrings, and can be analogously defined for RHS subsystems using bitstrings y^{(i)} ∈ {0,1}^N. Bitstrings differing by a single bit label adjacent bulk regions sharing a portion of RT surface. This way, the labeling not only encodes all regions that make up homology surfaces, but also all portions that make up their RT surfaces. The idea of the contraction map proof method is to build a map that takes the LHS bulk regions W_x and uses them to construct homology regions for RHS subsystems.
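To make the occurrence bitstrings concrete, here is a minimal sketch in Python (the helper name and the set-based encoding of the I_l are ours, not part of the proof method itself), under the reading that the l-th bit of x^{(i)} is 1 precisely when party i appears in I_l:

```python
# Sketch: occurrence bitstrings for the LHS terms of an inequality.
# Each LHS term is encoded as the set I_l of parties it involves,
# e.g. S(ABC) -> {1, 2, 3}. Assumption: x^(i)_l = 1 iff i is in I_l.

def occurrence_bitstrings(lhs_terms, n):
    """Return the n+1 occurrence bitstrings: one per party, plus the purifier."""
    strings = []
    for i in range(1, n + 1):
        strings.append(tuple(1 if i in I else 0 for I in lhs_terms))
    strings.append(tuple(0 for _ in lhs_terms))  # purifier: all zeroes
    return strings

# Example: LHS terms of MMI, S(AB) + S(BC) + S(CA), on n = 3 parties.
terms = [{1, 2}, {2, 3}, {3, 1}]
print(occurrence_bitstrings(terms, 3))
# [(1, 0, 1), (1, 1, 0), (0, 1, 1), (0, 0, 0)]
```

Party 1 occurs in the first and third terms, hence its bitstring (1, 0, 1), and so on; the final all-zero string is the purifier's.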
The purpose of the contraction property is to make sure that no portion of the RT surface of any LHS subsystem appears on the boundary of the newly constructed RHS homology regions more times than on the LHS ones. This guarantees that the areas bounding the resulting RHS homology regions are no larger than those of the LHS RT surfaces. These homology regions for RHS subsystems will be bounded by bulk surfaces of not-necessarily-minimal area, so it is also guaranteed that the actual RT surfaces for RHS subsystems will have no larger area. The upshot is that if it is possible to find such a contraction map, then the inequality is valid for any possible holographic configuration. Explicitly, defining a weighted Hamming distance on bitstrings via

d_w(u, v) ≡ ∑_i w_i |u_i − v_i|,    (2.5)

with weights w = α on length-M bitstrings and w = β on length-N ones, the contraction map proof method can be stated as follows:

Theorem 1. If there exists a map f : {0,1}^M → {0,1}^N with the homology property

f(x^{(i)}) = y^{(i)} for all i ∈ [n+1],    (2.6)

obeying the contraction condition

d_α(x, x') ≥ d_β(f(x), f(x')) for all x, x' ∈ {0,1}^M,    (2.7)

then (2.2) is a valid entropy inequality on graphs.
As explained above, f builds homology regions for RHS subsystems out of the regions for LHS ones; in particular, the homology region for J_r is given by ∪_{x : f(x)_r = 1} W_x, i.e., the union of all LHS regions labeled by an x such that f(x)_r = 1. The homology property (2.6) on occurrence bitstrings thus simply makes sure that the constructed homology regions are adjacent to the desired RHS subsystem. This homology property can essentially be understood as providing the initial conditions for the problem of finding a contraction map. The challenge of obeying all the contraction conditions set by (2.7) is where the complexity of the proof method lies.
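Both conditions are mechanical to check for a candidate map. A minimal sketch of the contraction check (2.7), assuming the candidate map is stored as a dictionary from LHS to RHS bitstrings (all names are ours):

```python
# Sketch: verify the contraction condition (2.7) for a candidate map f,
# given the LHS weights alpha and RHS weights beta of the inequality.
from itertools import combinations

def weighted_hamming(u, v, weights):
    """d_w(u, v) = sum_i w_i |u_i - v_i|."""
    return sum(w for w, a, b in zip(weights, u, v) if a != b)

def is_contraction(f, alpha, beta):
    """Check d_alpha(x, x') >= d_beta(f(x), f(x')) for all pairs in f's domain."""
    return all(
        weighted_hamming(x, xp, alpha) >= weighted_hamming(f[x], f[xp], beta)
        for x, xp in combinations(f.keys(), 2)
    )

# Toy example with unit weights: the identity map is trivially a contraction.
f = {(0, 0): (0, 0), (0, 1): (0, 1), (1, 1): (1, 1)}
print(is_contraction(f, [1, 1], [1, 1]))  # True
```

Checking (2.6) amounts to comparing `f` on the occurrence bitstrings against the RHS occurrence bitstrings.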
To study the contraction maps relevant to Theorem 1, it is useful to think of their domain and co-domain spaces as unit hypercubes H_M ≡ {0,1}^M and H_N ≡ {0,1}^N respectively, where every bitstring labels a corresponding vertex. In such hypercubes, the Hamming distance between any pair of vertices is given by the minimal distance between them following edges of the hypercube (cf. the taxicab metric), which is equivalent to counting the number of differing bits between the bitstrings labeling the vertices. The weighted Hamming distance introduced in (2.5), relevant for the contraction condition in (2.7), simply scales each dimension of the hypercube by a multiplicative factor. Because bitstrings uniquely label hypercube vertices, we will refer to the two interchangeably in what follows.

The above constraints are the only ones that must be satisfied for a contraction map to exist for a given holographic entanglement entropy inequality. The traditional way of finding such contraction maps is via a greedy algorithm, wherein the map is built recursively through locally optimal choices of image bitstrings. In particular, one initially fixes some ordering of the domain bitstrings, with occurrence bitstrings first, as they provide a set of initial conditions for the contraction problem.
One then attempts to find an image bitstring for the next input bitstring that obeys all contraction conditions with the previous ones. If more than one solution exists, one of them is picked at random and the rest are stored. This step is iterated with subsequent input bitstrings until either a full contraction map is found, or one hits an input for which no output satisfies the contraction conditions. The latter generically happens (even if a contraction map does exist) because solutions are obtained only locally, following some ordering which accounts for previous constraints but not future ones, and random choices are made whenever more than one option is available. As a result, such a failure is only local, and simply requires revisiting previous solution choices and re-iterating the process for every such choice made. A definitive failure to find a contraction map, from which one can conclude that no such map exists, occurs only when all possible solution choices for all input bitstrings have been exhausted with local failures in all cases. Otherwise, a contraction map will always be found. This potential for assigning bitstrings incorrectly is the main downside of the greedy algorithm: it assigns bits at times when it is not clear whether those bits are free to be set to 1 or 0. As such, it oftentimes requires backtracking to previous solutions, where incorrectly assigned bits need to be flipped. Finding which bits are incorrectly assigned is an algorithmically time-consuming process, which only gets worse as the inequality acquires more terms. In fact, as the greedy algorithm does not exploit known structures of holographic entanglement entropies, its algorithmic complexity should be similar to that of 3-SAT, since it approaches the problem as if it were an unstructured constraint satisfaction problem. As such, it would be a worst-case NP-complete algorithm. This would clearly be too slow to scale beyond relatively small N and M.
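For concreteness, the greedy strategy described above can be sketched as a depth-first search with backtracking (a minimal sketch with unit weights; the ordering of `inputs` and the naive candidate enumeration stand in for the heuristics real implementations rely on, and all names are ours):

```python
# Sketch of the greedy approach: assign image bitstrings one input at a time,
# backtracking whenever no image satisfies the contraction conditions.
from itertools import product

def hamming(u, v):
    return sum(a != b for a, b in zip(u, v))

def greedy_contraction(inputs, n_out, initial):
    """inputs: ordered domain bitstrings still to assign; initial: dict of
    already-fixed images (the occurrence bitstrings / homology conditions)."""
    f = dict(initial)

    def extend(k):
        if k == len(inputs):
            return True  # full contraction map found
        x = inputs[k]
        for y in product((0, 1), repeat=n_out):  # candidate images for x
            if all(hamming(x, xp) >= hamming(y, yp) for xp, yp in f.items()):
                f[x] = y
                if extend(k + 1):
                    return True
                del f[x]  # local failure downstream: backtrack, try next candidate
        return False  # every candidate exhausted for this input

    return f if extend(0) else None
```

The exponential blow-up is visible directly: each input enumerates up to 2^{n_out} candidates, and a local failure forces re-exploration of earlier choices.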

3 Deterministic Approach to Contraction Maps
We will bypass the need for a greedy algorithm by directly solving deterministically for image bits that are uniquely fixed by the initial homology property and the requirements of the contraction conditions. The outcome of doing so can be one of three:

1. All image bits are fixed, thereby yielding a complete contraction map that is unique.

2. A subset of the image bits is uniquely fixed, and some remain arbitrary, resulting in a partial contraction map that may or may not admit a contracting completion.

3. At least one image bit admits no solution to the contraction conditions, thus definitively implying that no contraction map exists.
While doing so will not (and should not) yield a valid contraction map for arbitrary candidate inequalities, it will generate valid partial contraction maps, unless a contradiction is reached. If a contradiction is reached, however, that contradiction is demanded by the consistency of the initial data, and therefore cannot be fixed with backtracking. Once all deterministic assignments have been made, the remaining choices are in principle genuinely free, and backtracking should not be required prior to that point. It may nevertheless be that all choices after the deterministic fixing lead to contradictions, but empirically this has not occurred for any of the inequalities we studied. We will need the following definitions:

• A pair of vertices x, y ∈ H_M is said to be Hamming distance preserving if d_α(x, y) = d_β(f(x), f(y)).

• A vertex z ∈ H_M is said to be on a Hamming path between x and y if d_α(x, y) = d_α(x, z) + d_α(z, y). In words: if x and y have the same bits in some sub-bitstring, then any z sharing that same sub-bitstring is on a Hamming path between x and y.

It is worth quoting at this point an important defining property that distance functions obey. For any x, y, z ∈ H_M, the triangle inequality holds:

d_α(x, y) ≤ d_α(x, z) + d_α(z, y).    (3.1)

Hereafter, all results we prove involving a map f : H_M → H_N implicitly assume this map is a contraction map, i.e., f obeys d_α(x, y) ≥ d_β(f(x), f(y)) for all x, y ∈ H_M. In other words, we are proving general properties of the contraction maps relevant to proofs of holographic entropy inequalities via Theorem 1.
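These definitions translate directly into predicates. A sketch with unit weights (the weighted versions simply replace the bit count by ∑_l α_l |x_l − x'_l|; the function names are ours):

```python
def d(u, v):
    """Hamming distance (unit weights for simplicity)."""
    return sum(a != b for a, b in zip(u, v))

def on_hamming_path(z, x, y):
    """z lies on a Hamming path between x and y iff d(x,y) = d(x,z) + d(z,y),
    i.e. z agrees with x and y on every bit where they agree."""
    return d(x, y) == d(x, z) + d(z, y)

def distance_preserving(x, y, f, d_out):
    """x, y are Hamming distance preserving under f if the distance is
    exactly preserved: d(x, y) = d_out(f(x), f(y))."""
    return d(x, y) == d_out(f[x], f[y])

# Every z on a Hamming path between 000 and 011 must keep its first bit 0:
print(on_hamming_path((0, 0, 1), (0, 0, 0), (0, 1, 1)))  # True
print(on_hamming_path((1, 0, 1), (0, 0, 0), (0, 1, 1)))  # False
```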

Deterministic Method 1
From the definitions above, a useful result follows:

Lemma 1. If x, y ∈ H_M are Hamming distance preserving, then all vertices on Hamming paths between x and y map to vertices in H_N on Hamming paths between f(x) and f(y).
Proof. Let x, y ∈ H_M be Hamming distance preserving, and consider another vertex z ∈ H_M on a Hamming path between x and y, so that d_α(x, y) = d_α(x, z) + d_α(z, y). By the contraction condition it must be the case that d_α(x, z) ≥ d_β(f(x), f(z)) and d_α(z, y) ≥ d_β(f(z), f(y)). However, as a distance function on H_N, d_β obeys the triangle inequality (3.1), leading to

d_β(f(x), f(y)) ≤ d_β(f(x), f(z)) + d_β(f(z), f(y)) ≤ d_α(x, z) + d_α(z, y) = d_α(x, y) = d_β(f(x), f(y)),

so all inequalities are saturated and d_β(f(x), f(y)) = d_β(f(x), f(z)) + d_β(f(z), f(y)). This in turn implies f(z) is on a Hamming path between f(x) and f(y) in H_N, as claimed.
The above result implies that any two Hamming distance preserving vertices x, y ∈ H_M provide precise information about the image f(z) of any vertex z ∈ H_M on a Hamming path between them. Explicitly, for every r ∈ [N] with coincident image bits f(x)_r = f(y)_r ≡ b, one can immediately assign the value f(z)_r = b. In fact, the requirement that x and y be Hamming distance preserving can be relaxed to obtain an even stronger result:

Theorem 2. If x, y ∈ H_M satisfy 0 ≤ d_α(x, y) − d_β(f(x), f(y)) ≤ 1, then all vertices on Hamming paths between x and y map to vertices in H_N on Hamming paths between f(x) and f(y).

Proof. Consider a vertex z ∈ H_M on a Hamming path between x and y, such that d_α(x, y) = d_α(x, z) + d_α(y, z). If f(z) is on a Hamming path between f(x) and f(y), we would have d_β(f(x), f(z)) + d_β(f(y), f(z)) = d_β(f(x), f(y)). Consider an alternative map f' yielding an image f'(z) off of Hamming paths between f(x) and f(y), but otherwise equal to f. This requires f'(z) to have at least one bit flipped relative to f(x) and f(y) in a column where the latter two coincide. The resulting new map will thus give distances obeying

d_β(f(x), f'(z)) + d_β(f(y), f'(z)) ≥ d_β(f(x), f(y)) + 2 ≥ d_α(x, y) + 1 = d_α(x, z) + d_α(y, z) + 1,    (3.4)

where the first inequality holds column-by-column (the flipped coincident column contributes an extra 2β_r ≥ 2 on top of the triangle inequality on H_N), the second follows by hypothesis, and the equality by the assumption that z is on a Hamming path between x and y. That the LHS of (3.4) is strictly greater than d_α(x, z) + d_α(y, z) is a direct violation of the contraction condition, which requires d_α(x, z) ≥ d_β(f(x), f'(z)) and d_α(y, z) ≥ d_β(f(y), f'(z)), implying that f' is not a contraction map.

This result grants the following assignment rule, which deterministically fixes entries of a contraction map that uniquely solve the contraction conditions:

Rule 1. For every x, y ∈ H_M such that 0 ≤ d_α(x, y) − d_β(f(x), f(y)) ≤ 1 and every z ∈ H_M on a Hamming path between x and y, Theorem 2 uniquely fixes the following bits of f(z):

f(z)_r = f(x)_r for every r ∈ [N] such that f(x)_r = f(y)_r.

This constraint can therefore be used to fix bits of bitstrings in H_N on Hamming paths between f(x) and f(y) satisfying the above condition. This provides a powerful method of directly assigning bits of the H_N hypercube without the need to solve a naively NP-complete constraint satisfaction problem. Instead, one simply needs to determine all vertices along Hamming paths between Hamming distance preserving pairs of the initial data and fix their matched bits. Once new fully fixed bitstrings in H_N have been discovered, they may be appended to the initial data to search for new Hamming distance preserving pairs.
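A sketch of Rule 1 as an algorithm, under the simplifying assumptions of unit weights and fully fixed initial images (partial images are stored as lists with None marking unfixed bits; all names are ours):

```python
# Sketch of Rule 1: for pairs (x, y) with 0 <= d(x, y) - d(f(x), f(y)) <= 1,
# every z on a Hamming path between x and y inherits the bits on which
# f(x) and f(y) coincide.
from itertools import combinations, product

def d(u, v):
    return sum(a != b for a, b in zip(u, v))

def apply_rule_1(f, M, N):
    """f: dict mapping some length-M bitstrings to fully fixed length-N images.
    Returns partial images (lists with None) for all other domain vertices."""
    partial = {x: [None] * N for x in product((0, 1), repeat=M) if x not in f}
    for x, y in combinations(f, 2):
        if not 0 <= d(x, y) - d(f[x], f[y]) <= 1:
            continue
        for z, img in partial.items():
            if d(x, y) == d(x, z) + d(z, y):  # z on a Hamming path
                for r in range(N):
                    if f[x][r] == f[y][r]:
                        img[r] = f[x][r]  # uniquely fixed by Theorem 2
    return partial
```

In a full implementation, any image completed this way would be promoted into `f` and the search repeated, exactly as described above.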
The number of bits fixed by a single Hamming distance preserving pair can be computed as follows: if d_α(x, y) = D, and the dimensions of the H_M and H_N hypercubes are M and N respectively, then the number of fixed bits per path vertex is N − D, and the number of vertices for which fixing occurs is 2^D − 2. This means that the total number of fixed bits is (2^D − 2)(N − D). The total number of initially free bits is (2^M − I)N, where I is the number of initial conditions. The ratio of these goes to one as D approaches M, provided D ≪ N. Note that this analysis does not address double counting of fixed data from subsequent Hamming distance preserving pairs, so multiple pairs would provide an overestimate.
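The count of path vertices can be checked directly: for d(x, y) = D with unit weights there are 2^D − 2 vertices strictly between x and y. A brute-force sketch:

```python
# Check: the number of vertices strictly between x and y on Hamming paths
# is 2^D - 2, where D = d(x, y).
from itertools import product

def d(u, v):
    return sum(a != b for a, b in zip(u, v))

def path_vertices(x, y):
    """Vertices strictly between x and y on Hamming paths."""
    return [z for z in product((0, 1), repeat=len(x))
            if z not in (x, y) and d(x, y) == d(x, z) + d(z, y)]

x, y = (0, 0, 0, 0, 0), (1, 1, 1, 0, 0)  # D = 3 in an M = 5 hypercube
print(len(path_vertices(x, y)), 2 ** d(x, y) - 2)  # 6 6
```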

Deterministic Method 2
A second constraint comes from considering when all degrees of freedom of an H_N bitstring have been exhausted, in the sense that the contraction condition is saturated. In particular, consider a pair of vertices x, y ∈ H_M such that their images f(x) and f(y) have been partially fixed (i.e., some image bits have been uniquely determined, e.g. via Rule 1, but some remain undetermined). Then for any completion of the map f, we can lower-bound d_β(f(x), f(y)) by the partial distance d_β^fixed(f(x), f(y)), where d_β is only applied to the dimensions r ∈ [N] in which both f(x)_r and f(y)_r have already been fixed. Explicitly, letting I_x, I_y ⊆ [N] label the bits respectively in f(x) and f(y) that have already been uniquely fixed,

d_β^fixed(f(x), f(y)) ≡ ∑_{r ∈ I_x ∩ I_y} β_r |f(x)_r − f(y)_r|.

This way one easily obtains the following rule:

Rule 2. For every x, y ∈ H_M such that d_β^fixed(f(x), f(y)) = d_α(x, y), the following bits get newly fixed:

f(x)_r = f(y)_r for every r ∈ I_y \ I_x,

and similarly under the exchange x ↔ y.
In other words, the saturation of the contraction condition means that all remaining unfixed bits of f(x) and f(y) must match, which leads to uniquely determined images for all bits f(x)_r and f(y)_r with r ∈ I_x ∪ I_y. While it is clear that unfixed bits on [N] \ (I_x ∪ I_y) will necessarily also have to match between the two bitstrings, their specific value can however not be fixed by Rule 2.
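Rule 2 is similarly mechanical. A sketch, again with unit weights and None marking unfixed image bits (names are ours):

```python
# Sketch of Rule 2: if the partial distance over commonly fixed bits already
# saturates d(x, y), then every bit fixed in one image and unfixed in the
# other gets copied across.
def d(u, v):
    return sum(a != b for a, b in zip(u, v))

def apply_rule_2(x, y, fx, fy):
    """fx, fy: partial images of x, y (lists; None = unfixed). Mutates in place."""
    d_fixed = sum(a != b for a, b in zip(fx, fy) if a is not None and b is not None)
    if d_fixed == d(x, y):  # contraction condition saturated on fixed bits
        for r in range(len(fx)):
            if fx[r] is None and fy[r] is not None:
                fx[r] = fy[r]
            elif fy[r] is None and fx[r] is not None:
                fy[r] = fx[r]
    return fx, fy

# d(x, y) = 1 is already realized by the first image bit, so the rest match:
fx, fy = apply_rule_2((0, 0), (0, 1), [0, None, 1], [1, 0, None])
print(fx, fy)  # [0, 0, 1] [1, 0, 1]
```

Bits unfixed in both images are left untouched, matching the caveat above: they must agree, but their common value is not determined by the rule.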
This is a method that can be implemented after the first method to fix further bits.After it has been run, the first method can be run again as subsequent Hamming preserving paths are revealed.These methods are then alternated until no additional bits are fixed by either approach.

Deterministic Method 3
An additional definition is required to obtain our next rule:

Definition 2. Given x, y ∈ H_M, another vertex z ∈ H_M is said to be k-off Hamming paths between x and y if k is the minimal distance from z to any vertex e on Hamming paths between x and y,

k = min_{e ∈ HP(x, y)} d_α(z, e),

where HP(x, y) is the set of vertices appearing on the Hamming paths between x and y.
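Computationally, k need not be found by enumerating HP(x, y): with unit weights it equals the number of bits on which x and y agree but z differs. A sketch, with a brute-force check against the definition (names are ours):

```python
from itertools import product

def d(u, v):
    return sum(a != b for a, b in zip(u, v))

def k_off(z, x, y):
    """Distance from z to the nearest vertex on Hamming paths between x and y:
    count the bits where x and y agree but z disagrees."""
    return sum(1 for a, b, c in zip(x, y, z) if a == b and c != a)

def k_off_brute(z, x, y):
    """Direct minimization over HP(x, y), as in Definition 2."""
    hp = [e for e in product((0, 1), repeat=len(x))
          if d(x, y) == d(x, e) + d(e, y)]
    return min(d(z, e) for e in hp)

x, y, z = (0, 0, 0), (0, 1, 1), (1, 0, 1)
print(k_off(z, x, y), k_off_brute(z, x, y))  # 1 1
```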
This notion can be used to generalize Lemma 1 as follows: Lemma 2. If x, y ∈ H M are Hamming distance preserving, then any vertex that is k-off Hamming paths between x and y maps to a vertex in H N that is at most k-off Hamming paths between f (x) and f (y).
Proof. Let z ∈ H_M be a vertex k-off Hamming paths between x, y ∈ H_M. From the definition, let e ∈ H_M be on a Hamming path between x and y such that d_α(z, e) = k. By the contraction condition it must be the case that d_α(z, e) ≥ d_β(f(z), f(e)), so d_β(f(z), f(e)) ≤ k. Since by Lemma 1 every such f(e) lies on a Hamming path between f(x) and f(y), we conclude that f(z) is at most k-off Hamming paths between them.

Now we can prove our next main result. Formally,

Theorem 3. Let z be a vertex of H_M that is one-off from both a Hamming path between x and y and a Hamming path between a and b in H_M. Further, let x, y and a, b each be Hamming distance preserving, and suppose no vertex of H_N lies simultaneously on Hamming paths between f(x) and f(y) and on Hamming paths between f(a) and f(b). Then all matching bits between f(a), f(b), f(x), and f(y) are fixed for f(z). Furthermore, in some column r, f(z)_r will not match one of f(x)_r = f(y)_r or f(a)_r = f(b)_r, and whichever Hamming path it does not match in this column, f(z) will match all remaining matched bits of that Hamming path.

Proof. Let z, x, y, a, b be as in the statement of the theorem. By Lemma 2, f(z) is at most one-off Hamming paths between f(x) and f(y), and also at most one-off Hamming paths between f(a) and f(b). Since no vertex lies on Hamming paths between both image pairs, f(z) is either exactly one-off Hamming paths between f(x) and f(y), or exactly one-off Hamming paths between f(a) and f(b). This immediately requires that all matching bits between f(a), f(b), f(x), and f(y) are fixed for f(z), as there are no remaining degrees of freedom for f(z); if this were not the case, it would be at least two-off Hamming paths between f(x) and f(y), or at least two-off Hamming paths between f(a) and f(b). Furthermore, if WLOG f(z) is exactly one-off Hamming paths between f(a) and f(b) in some column r, then in every other column where f(a) and f(b) match, f(z) must match them as well, for the same reason that all available degrees of freedom have been exhausted.
The above generalizes easily to k-off for any integer k ≥ 0. Going to higher comparisons (for example six vertices, with three Hamming paths) does not help, as any column can only be either zero or one, and so any higher comparison would degenerate into comparisons among pairs of power sets of Hamming paths.
Indeed, the constraints generated by these three methods, and the third in particular, would seem to be complete, as any potential consequence of a contraction map constraint that yields an unconditionally fixed bit would fall into one of these three categories. Therefore, once these methods have been applied, the remaining choices are observed to be free. This has been empirically verified on all the five-party inequalities [4], the known 384 six-party inequalities [30], and the known seven-party inequalities [28, 31] for holographic entanglement entropy: in all of these cases, subsequent choices can be made without any need for backtracking.

Combining the Deterministic Methods
Practically, the first and second methods both run in trivial amounts of time, while the third method is slower. The precise time complexity of these methods will be described in Section 4. We therefore adopt a strategy of alternating the first and second methods until stability has been reached before running the third method. After this, the first and second methods are again run until stability is reached, at which point the third method is run once more, terminating when the third method fixes no additional bits.
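Schematically, this strategy is a fixed-point loop. A sketch in which `rule_1`, `rule_2` and `rule_3` stand for the three methods above and `state` tracks its own count of fixed bits (all names are ours):

```python
def run_deterministic(state, rule_1, rule_2, rule_3):
    """Alternate the cheap methods to a fixed point, then try the expensive
    one; terminate when method 3 fixes no additional bits."""
    while True:
        # Methods 1 and 2 are cheap: iterate them to stability first.
        while True:
            before = state.num_fixed()
            rule_1(state)
            rule_2(state)
            if state.num_fixed() == before:
                break
        # Only then pay for method 3; stop once it adds nothing new.
        before = state.num_fixed()
        rule_3(state)
        if state.num_fixed() == before:
            return state
```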

Choice Constraints
There is a constraint of a different type, which we state as the following theorem:

Theorem 4. If d_α(x, y) = 2 or d_α(x, y) = 3, and d_β(f(x), f(y)) = 0, then for any vertex z on a Hamming path between x and y, either f(z) = f(x) = f(y), or f(z) differs from f(x) = f(y) in a single bit.

In other words, z can only map to the same vertex that both x and y map to, or to a neighbor of that vertex. Not doing so would lead to a violation of the contraction condition. We introduce the following definition to state a related constraint.

Definition 3. Given x, y ∈ H_M, y is said to be a distance-k-neighbor of x (and vice-versa) if d_α(x, y) = k. For a given vertex z, we write w ∈ N_α(z, k) if w is a distance-k-neighbor of z, and call N_α(z, k) the set of distance-k-neighbors of z.
The following constraint is useful for choosing bitstrings that are fixed neither by the deterministic methods nor by the constraint in Theorem 4.
Theorem 5. Consider a vertex z ∈ H_M. Given the set of distance-1-neighbors of z, N_α(z, 1) ⊂ H_M, define the set I(z) ⊂ H_N of image vertices consistent with the contraction condition against all of them,

I(z) ≡ {v ∈ H_N : d_β(v, f(w)) ≤ d_α(z, w) for all w ∈ N_α(z, 1)}.

Then f(z) ∈ I(z).
Theorem 5 can be generalized to distance-k-neighbors for k > 1. Modulo these conditional constraints, one then simply chooses a free unfixed bit (say, the top-left bit in the tabular representation of H_N) and fixes it to be 1 or 0. For technical reasons, fixing it to be 1 results in faster convergence than fixing it to be 0, as the deterministically fixed bitstrings of H_N tend to have more 0's than 1's. After this bit is fixed, the deterministic fixing methods are run again until stability is achieved, at which point another choice is made. Choices are made in this way until a contradiction is reached, or a full contraction map has been specified. It is interesting to quantify the number of choices that must be made for any given known valid inequality.
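The choice stage can be sketched as follows (hypothetical names: `first_unfixed_bit` returns the next free slot or None, and `run_deterministic` re-runs the fixing methods to stability):

```python
def complete_map(state, run_deterministic, first_unfixed_bit):
    """Alternate free choices with deterministic fixing until the map is
    complete or a contradiction is reached."""
    run_deterministic(state)
    while not state.contradiction():
        slot = first_unfixed_bit(state)
        if slot is None:
            return state      # full contraction map specified
        state.fix(slot, 1)    # free choice; 1 empirically converges faster
        run_deterministic(state)
    return None               # contradiction reached
```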

Examples
As an example, consider the five-party inequality from [3] with 10 terms on the LHS and 18 terms on the expanded RHS. The contraction between hypercubes is then a map f : {0,1}^10 → {0,1}^18. There are 1024 domain bitstrings, each with an 18-bit image, totalling 18432 bits. Before making any free choice, deterministic methods 1 and 2 fully fix 170 bitstrings and partially fix 842 more, leaving a total of 3636 bits unfixed. Applying method 3 reduces the number of unfixed bits to 3558. The greedy algorithm leaves 3588 bits unfixed, i.e., it fixes 30 fewer bits than our methods 1, 2 and 3 combined. This inequality has a maximal number of choices (an upper bound, attained when choices are made so as to require the largest number of subsequent choices) of 142 bit choices, taking them all to be 0, plus the choice of an entire 18-bit string. The true number of choices is smaller than this maximum (for example, choosing 1 at every choice instead of 0 results in a total of 83 bit choices). This number of choices is representative of the number that must be made for the other inequalities at hand. We summarize the details of the contraction maps for the five five-party inequalities found in [3] in Table 1. We also give two examples of previously unknown six-party facet inequalities that we proved using the contraction map method, in (3.16) and (3.17).

Aside: Unphysical H M Vertices
As was first pointed out in [5], certain vertices in H_M are unphysical; for example, a vertex cannot be included in A but excluded in AB. By nesting of minimal-cardinality min-cuts, such vertices simply do not exist (holographically, entanglement wedge nesting guarantees that such spacetime regions are empty). These vertices can formally be characterized in terms of intersecting anti-chains among the collections of subsystems they label (we refer the interested reader to [5] for more details on this classification). It is therefore a question whether these vertices must be included in the domain H_M, and whether, if they were, they would result in over-stringent (and undesired) constraints for the contraction proof method. By analysis of the known inequalities, the answer to both of these questions appears to be no. Regarding the former, at the level of the proof method it is clearly unnecessary to include such vertices: the graph vertex they label does not exist, so no edges attached to it exist either, and thus there is no need to rearrange their contribution to left-hand-side cuts into contributions to right-hand-side cuts. In other words, a contraction map for H_M without unphysical bitstrings included suffices to prove the corresponding inequality valid. In principle, it is a logical possibility for a contraction map to exist for H_M without unphysical bitstrings, but not for the full H_M. This is the latter question, which we experimentally answer in the negative: for all known valid inequalities, contraction maps exist regardless of whether unphysical bitstrings are included. Something even stronger happens to hold: the inclusion of unphysical bitstrings turns out not to enforce the deterministic fixing of a single additional bit on the right-hand side. This suggests that the contraction conditions following from unphysical bitstrings are always strictly weaker than the rest, and thus redundant. We leave the proof of this statement for future work. Whether or not it is true, it is clear that at both the fundamental and practical levels, removing unphysical vertices from H_M is preferred.
It is an interesting question whether the number of such unphysical bitstrings decreases as the party number increases. While naively we have a way of analyzing this via the infinite family of cyclic holographic entanglement entropy inequalities, since an independent member of this family appears for each odd party number, the LHS terms in the cyclic family all together form an intersecting anti-chain (every term involves more than half of the parties, so any two of them cross), which means any subset of them is also an intersecting anti-chain. In other words, there are no unphysical bitstrings for the cyclic family. However, we find that statistically, the number of unphysical bitstrings is significantly higher for the six-party inequalities than for the five-party ones, and establishing whether this trend continues to higher party numbers is a useful course of study.
We can make this more precise with the following estimate. The unphysicality of bitstrings on H_M can be studied pairwise via the columns of the contraction map, asking for every pair of columns whether one is contained in the other or their intersection is trivial. As these possibilities are the only ones that can lead to a bitstring being deemed unphysical, and as for each of them only one combination of two bits yields an unphysical outcome (11 for trivial intersection and 10 for containment, WLOG), when an offending combination occurs a quarter of the naive bitstrings are rendered unphysical.
Consider a pair of columns defining H_M. Let one of them be an a-party column, and the other a b-party column. We can take a and b to both be at most n, the party number of the inequality, by purification. There are (n choose a) different possibilities for the a-party column; WLOG we can specialize to a specific one. Then (n−a choose b) out of the (n choose b) choices for the b-party column have a trivial intersection with the a-party column.
For a, b ≪ n, this is very close to all of them. Each such pair results in only 3/4 of the bitstrings having the possibility of remaining physical. This is, however, only the analysis for trivial intersection, and only for a single pair of columns. Once all columns with small a, b are considered, the number of such pairs selected without replacement gives the exponent to which the 3/4 is raised, and will generically eliminate almost all naive bitstrings as unphysical.
Even if only one of a, b ≪ n, with the other as large as symmetry allows so as to make the reduction as small as possible, e.g. a = n/2, the fraction of choices of the b-party column that results in some unphysicality as above goes as 1/2^b, which for small b is nontrivial. It is only when a, b ≫ 1 and at least one of a, b ∼ n/2 that the fraction of choices resulting in trivial intersections approaches zero.

4 Complexity
Recall that H_M and H_N are the Hamming hypercubes for the left- and right-hand sides of an inequality. In this section we write N for the problem size |H_M| = 2^M (not the number of RHS terms); both hypercube dimensions are then of order log N. If one were to assign every domain vertex the same image, one would have to fill in of order log N bits for each of the 2^M = N vertices, suggesting that even the most trivial toy map (not necessarily a contraction) has complexity O(N log N). Any realistic contraction map has a computational complexity greater than this.
The computational complexities of applying methods 1 and 2 are upper bounded by O(𝒩^3 log 𝒩) and O(𝒩^2 log 𝒩) respectively. One executes them sequentially, starting from the initial data, until a certain number of bits is fixed and a choice must be made to fix any more. Note that this upper bound assumes that the distance between two bitstrings on the LHS is maximal (M) and that every pair of bitstrings is Hamming-distance preserving from H_M to H_N, both of which are overestimates. We call the complexity associated with making choices the query complexity Q. It is empirically observed that Q ∼ O(𝒩).
For most inequalities, methods 1 and 2 suffice to fix as many bits as including method 3 would, i.e., method 3 fixes no additional bits (though for some inequalities it does). As methods 1 and 2 are executed after every bit choice, the estimated upper bound on the complexity of generating the entire contraction map is O(𝒩^4 log 𝒩).
Stubbornly applying method 3 rarely fixes a large number of bits while greatly increasing the computational cost. One first has to find all pairs of Hamming-distance-preserving vertices fixed by methods 1 and 2, whose complexity scales as O(𝒩^2 log 𝒩). In our algorithmic implementation, a single run of method 3 using two unique Hamming-distance-preserving pairs of vertices (x_1, x_2) and (x_3, x_4) is bounded above by O(𝒩^4 (log 𝒩)^2); running through all such pairs, the complexity scales as O(𝒩^8 (log 𝒩)^2). Note that this is again an inflated upper bound, since we assume the number of Hamming-distance-preserving pairs scales as O(𝒩^2), while empirically it has been observed to be of order O(𝒩). We therefore apply method 3 only once, after the first deterministic run, and rely on the power of methods 1 and 2 once the algorithm starts making choices. The above routines run until no new bits are fixed. The number of choices that must be made can be understood as the number of queries needed to complete a contraction proof. While there is no theoretical bound (short of the total number of initially unfixed bits) on the number of queries, in practice the number required is significantly lower, and has not exceeded O(𝒩) choices for any inequality to date. This number is also not optimized, and could in principle be even lower given a better 0/1 selection strategy when choices are made. A pseudocode summary of the algorithm is given in Algorithm 1.
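The O(𝒩^2 log 𝒩) bookkeeping step, collecting all Hamming-distance-preserving pairs among already-fixed vertices, can be sketched as follows (our own illustration, not the paper's code; `f` is a partial map from fixed LHS vertices to their RHS images):

```python
from itertools import combinations

def hamming(u, v):
    """Hamming distance between two equal-length bit tuples."""
    return sum(a != b for a, b in zip(u, v))

def distance_preserving_pairs(f):
    """All pairs of fixed vertices whose Hamming distance is preserved
    by the partial map f. Scanning the O(|f|**2) pairs, each at
    O(log Ncal) per distance check, gives the quoted scaling."""
    return [(x1, x2) for x1, x2 in combinations(f, 2)
            if hamming(x1, x2) == hamming(f[x1], f[x2])]

# Tiny example: two of the three pairs preserve distance.
f = {(0, 0): (0, 0, 1), (1, 1): (1, 1, 1), (0, 1): (0, 0, 0)}
print(len(distance_preserving_pairs(f)))  # 2
```

Each run of method 3 then draws two pairs from this list, which is where the extra powers of 𝒩 in its complexity come from.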

Completeness
As the constraints of the contraction map are built up, sub-graphs of the final graph are constructed step by step, with unfixed bits corresponding to regions for which not all adjacency conditions via wormholes are known. Such unknowns correspond to degrees of freedom remaining in the contraction map.
This suggests a path towards a proof of completeness of the contraction map method. Because every fixing of a bit reduces the remaining degrees of freedom of the graph, and thus of a wormhole geometry, one can ask what happens when the contraction map fails. If the contraction map is constructed bit by bit, failure must correspond to a situation where a single bit is forced to be both 1 and 0, which is impossible. This would mean that two different graph extensions of the graph corresponding to the state just prior to the contradictory bit are generated. These graphs serve as counterexample graphs, whose cuts violate the contraction property; this corresponds to the cutting-and-pasting strategy for holographic entanglement entropy/graph cut inequalities. What is still lacking is a constructive algorithm for generating the graph that serves as the counterexample to a particular false inequality: while our arguments show that such a graph must exist, they do not explicitly generate it from knowledge of the inequality alone. We leave the development of this constructive algorithm for future work.
There is also an argument that the methods we used, and straightforward generalizations thereof, encompass all possible deterministic fixings, specifically via combinations of Hamming-preserving paths. The argument proceeds by contradiction: assume there exists a rigid sub-mapping between hypercubes that is not a Hamming-preserving path (or an off-by-one, by the earlier parity arguments). Such a mapping could always be reduced to a Hamming-preserving path by modifying the LHS to take up the slack. It would therefore not be rigid, since a rigid mapping by definition admits no such slack, slack always being associated with a non-forced choice. Hence the only rigid maps permitted are Hamming-distance-preserving maps, and our deterministic mapping methods, appropriately generalized to n Hamming-distance-preserving maps, are complete. We leave the formal proof of this statement for future work.

Discussion
It is a tantalizing direction to consider what these deterministic methods would mean for the bulk spacetime in AdS/CFT directly. The Hamming-preserving maps correspond to isometries between certain sub-graphs of H_M and certain sub-graphs of H_N. Because these hypercubes are representations of the spacetime itself, this suggests isometries between sequences of certain bulk regions separated by Ryu-Takayanagi surfaces associated with portions of the LHS and RHS of the entropy inequalities. This has the potential to give novel constraints regarding the metric rigidity of AdS/CFT, possibly connecting to bulk metric reconstruction [33].
A major limitation of this work is that it gives no aid in generating candidate inequalities for holographic entanglement entropy. A complementary future direction is therefore to find a way of efficiently generating candidate inequalities, or of automatically generating inequalities that are true but whose tightness to the cone must be checked. To do this, one could study the problem of finding all contraction maps, or at least some families of maps that scale with N and M, between given hypercubes H_M and H_N with no single-party constraints. Once such families have been found, the single-party bitstrings of H_M and H_N can be assigned retroactively, which then specifies the entropies appearing in the candidate inequality. Finding such families of contraction maps appears to be a difficult combinatorial problem, but a clever solution would immediately generate novel families of entanglement entropy inequalities.

Table 1 .
Summary of 5-party contraction maps with LHS and RHS of lengths M and N respectively. The table shows the percentage of bits fixed by the deterministic run of methods 1 and 2 before the first choice is made, and finally the number of choices made to generate a contraction map.