Tensor chain and constraints in tensor networks

This paper accompanies our recent work on quantum error correction (QEC) and entanglement spectra (ES) in tensor networks (arXiv:1806.05007). We propose a general framework for planar tensor network states with tensor constraints as a model for the $AdS_3/CFT_2$ correspondence, which can be viewed as a generalization of the hyperinvariant tensor networks recently proposed by Evenbly. We elaborate our proposal on tensor chains in a tensor network tiling $H^2$ space and provide a diagrammatic description of general multi-tensor constraints in terms of tensor chains, which yields a generalized greedy algorithm. The behavior of tensor chains under the action of the greedy algorithm is investigated in detail. In particular, for a given set of tensor constraints, a critically protected (CP) tensor chain can be identified and evaluated by its average reduced interior angle. We classify tensor networks according to their capability for QEC and the flatness of their ES. The corresponding geometric description of critical protection over the hyperbolic space is also given.


I. INTRODUCTION
Tensor networks, as a powerful tool for building the ground state of a many-body system, have been intensively investigated in recent years [1]. One remarkable feature of tensor network states is their intuitive description of quantum entanglement among local degrees of freedom.
For a subsystem composed of some uncontracted edges in a tensor network, the entanglement entropy is bounded from above by the minimal number of cuts disconnecting this subsystem from its complement. This scenario can be viewed as a discretized description of the Ryu-Takayanagi (RT) formula in the holographic approach [2]. Inspired by this, it has been found that a holographic space can emerge from the entanglement renormalization of a many-body system [3,4]. It has further been conjectured in [5] and [6] that the classical connectivity of spacetime arises from entangling the degrees of freedom in two components. As a bridge between quantum entanglement and the structure of spacetime, tensor networks have provided a practical framework for exploring the emergence of spacetime in the context of gauge/gravity duality [7,8].
Another property of entanglement enjoyed by holographic duality is quantum error correction (QEC) [9]. Based on subsystem duality, operators in the bulk can be reconstructed from operators supported on a subsystem of the boundary [10-13]. In other words, there are subspaces of the bulk Hilbert space which can still be reconstructed even if some amount of information on the boundary is erased [14-17]. Great progress has also been made in realizing QEC by virtue of tensor networks [15, 16, 18-21]. In this framework, subsystem duality is reflected by the isometry between two sub-Hilbert spaces associated with sub tensor networks.
It remains a key open issue whether, and what kind of, tensor networks can reproduce all aspects of holography in the context of the AdS/CFT correspondence.
Taking $AdS_3/CFT_2$ as an example, we single out some important properties that such a tensor network is desired to possess.
• Such a tensor network is a discretization of 2-dimensional hyperbolic space ($H^2$ space), which is a time slice of $AdS_3$ spacetime in the global coordinate system. Correspondingly, the tensor network is endowed with a symmetry described by a discrete subgroup of SL(2, R), the isometry group of $H^2$ space.
• Such a tensor network respects the RT formula, and its entanglement entropy is characterized by a logarithmic law. Moreover, the entanglement spectrum (ES) of the ground state should be non-flat, such that one can reproduce the Cardy-Calabrese formula for the Renyi entropy of a $CFT_2$ with large central charge c, namely [27,28] $S_n(A) = \frac{c}{6}\left(1+\frac{1}{n}\right)\ln l_A$, where A is a spatial interval on the boundary and $l_A$ is its length in units of the UV cutoff.
• Such a tensor network possesses the function of QEC, as AdS spacetime enjoys.
• Such a tensor network can reproduce the behavior of Green's functions in $AdS_3/CFT_2$.
Of course, all these properties may not be independent of one another.
One candidate for capturing the above holographic features of AdS is the hyperinvariant tensor network, recently proposed by Evenbly in [25]. It is composed of identical polygons uniformly tiling hyperbolic space. The key idea is to impose constraints on products of multiple tensors so that they form isometric mappings. It turns out that this sort of network may combine the advantages of the multiscale entanglement renormalization ansatz (MERA) [1,3,4,29,30], which is characterized by a non-flat ES, and of networks composed of perfect tensors [18,22,31], which are usually endowed with the function of QEC.
But one key issue arises in this approach: what kind of multi-tensor constraints could endow a given tensor network with such features? Or, more quantitatively, is there any criterion to justify the capability for QEC and the non-flatness of the ES for a given tensor network with multi-tensor constraints? In [32] we provided affirmative answers to these questions with the proposal of critical protection of tensor chains. In this paper we elaborate our proposal, present a detailed analysis of tensor chains and constraints in tensor networks, and prove the statements on the classification of tensor networks made in [32].
We organize the paper as follows. In the next section we propose a generalized framework for tensor networks with multi-tensor constraints in the tiling of $H^2$ space. To classify different types of multi-tensor constraints efficiently and to describe the behavior of tensor contractions during the evaluation of the ES, we introduce the notion of a tensor chain to describe the contraction of tensor products. Moreover, we introduce a quantity, called the average reduced interior angle, to characterize the geometric structure of a CP chain. Based on this structure we introduce the concept of critical protection in Section III, which should be viewed as the core concept of this paper, because it plays an essential role in measuring the quality of QEC as well as the non-flatness of the ES in a quantitative manner. As a first consequence, we will immediately see that once the ES becomes non-flat under the multi-tensor constraints proposed in [25], the capability of QEC from bulk to boundary has to be weakened. Among this sort of networks, we find that most perfect tensor networks, as the limiting case, have the strongest capability of QEC, while they are always accompanied by a flat ES. Therefore, in order to construct tensor networks with a non-flat ES as in AdS spacetime, one has to pay the price of sacrificing some capability of QEC. All of the above investigation is based on tensor networks embedded into $H^2$ space, which can be viewed as a discretization of the hyperbolic geometry. Correspondingly, we may also describe QEC and the ES over the geometry of $H^2$ directly, which involves the notions of geodesics, curves of constant curvature, etc. We present the description based on $H^2$ geometry in Section IV; the relevant background is given in Appendix A.
Furthermore, to intuitively understand the role of critical protection in the evaluation of QEC and the ES, in Section V we present some specific examples of tensor networks and demonstrate how the realization of QEC is reflected by the structure of the CP tensor chain, and how the flatness of the ES is reflected by the region of critical protection. Moreover, we develop a generalized description of the greedy algorithm by imposing multi-tensor constraints on tensor chains. After that, we classify tensor networks with constraints by their QEC and ES properties. We first study the relation between CP and QEC in Section VI, presenting a criterion for the existence of QEC, and then focus on the relation between CP and the ES in Section VII, with detailed proofs of the propositions on various bounds for the flatness of the ES. Section VIII contains the conclusion and outlook.

II. TENSOR CHAINS IN A TENSOR NETWORK
In this section we present a general framework for tensor networks based on the tiling of hyperbolic space. We define the notion of a tensor chain, whose skeleton forms a polyline in the network. Associated with each tensor chain, a reduced interior angle can be defined, which in some sense can be viewed as a discrete description of the curvature of the polyline.

A. Tiling of $H^2$ space
In the global coordinate system of $AdS_3$ spacetime, each constant-time slice is an $H^2$ space, where L is the radius of the $H^2$ geometry, the only dimensionful quantity introduced in this paper. We are therefore free to set L = 1.
First, we intend to discretize $H^2$ space in a uniform fashion, which can be realized by tiling $H^2$ space with identical polygons. Consider many identical polygons with b edges on a 2-dimensional surface, and glue their edges together such that a edges share the same node. We call such a discretization the {b, a} tiling of $H^2$ space. In a space with negative curvature, because the sum of the interior angles of a triangle is less than π, one can realize a {b, a} tiling of $H^2$ space only if $(a-2)(b-2) > 4$. Obviously, a ≥ 3, b ≥ 3.
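The tiling condition above is easy to scan numerically. The following sketch (the helper name is ours, not the paper's) enumerates small hyperbolic tilings:

```python
# Check which {b, a} pairs admit a uniform tiling of H^2.
# A regular b-gon with vertex angle 2*pi/a fits the hyperbolic plane exactly
# when the polygon angle sum is below its Euclidean value,
# i.e. b * (2*pi/a) < (b - 2) * pi, equivalent to (a - 2) * (b - 2) > 4.

def admits_hyperbolic_tiling(b: int, a: int) -> bool:
    """Return True if the {b, a} tiling lives in H^2 (negative curvature)."""
    assert a >= 3 and b >= 3, "need at least triangles, meeting 3 per node"
    return (a - 2) * (b - 2) > 4

hyperbolic = [(b, a) for b in range(3, 9) for a in range(3, 9)
              if admits_hyperbolic_tiling(b, a)]
print(hyperbolic)  # includes (7, 3) and (4, 5), the two examples used later
```

Note that (a−2)(b−2) = 4, e.g. the {4, 4} square lattice, gives a flat (Euclidean) tiling and is excluded.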
When a tiling of $H^2$ is specified by {b, a}, the geometry is determined (up to the radius L). We call the polygon with b edges the elementary polygon, and the union of several elementary polygons a composite polygon. The length of each edge of the elementary polygon, length_P, is fixed by the tiling and satisfies $\cosh(\mathrm{length}_P/2) = \cos(\pi/b)/\sin(\pi/a)$. Two examples, the {7, 3} and {4, 5} tilings, are shown in Fig. 1(a) and Fig. 1(b), respectively.
B. Tensor network states

In the {b, a} tiling, we associate a rank-a tensor T with each node and a rank-2 tensor E with each edge. Because of the rotational invariance of $H^2$ space, we further demand that the indexes of the tensors T and E have cyclic symmetry.¹ In this paper we adopt the convention that an index of a tensor T can be lowered by contracting it with a tensor E. Correspondingly, the edge connecting two nodes represents the index contraction of two tensors T through a tensor E. For convenience, we require that for a tensor network state Ψ all the indexes of tensors T (with full upper indexes) are contracted, and that uncontracted indexes belong only to tensors E, as shown in Fig. 1(a).
A tensor network Ψ defines a state |Ψ⟩ in the Hilbert space on the uncontracted edges. In this paper, we investigate the algorithms of QEC and the entanglement of Ψ by manipulating tensor networks.

C. Tensor chain
Given a tensor network from the {b, a} tiling, we introduce the notion of a tensor chain to depict the product structure of multiple tensors with index contractions, which will be convenient for imposing a tensor constraint and quantitatively describing its geometric properties. First, in order to define a tensor chain efficiently, we adopt a compact form to denote a single tensor T of rank a which is subject to rotational symmetry. We divide all its indexes into four groups in cyclic order and label each group with an abstract index, called a collected index and denoted by a capital letter. The components of T can then be written as $T^{ABCD}$. For instance, for a tensor $T^{i_1 i_2 \cdots i_5}$ with 5 indexes, we may collect them in cyclic order, e.g. $i_1 i_2 = A$, $i_3 = B$, $i_4 = C$, $i_5 = D$, while a collection that breaks the cyclic order, such as $i_1 i_2 = A$, $i_3 = C$, $i_4 = B$, $i_5 = D$, is prohibited. Furthermore, collected indexes may also be lowered by tensors E. Now we can construct a tensor chain M by contracting k tensors T with tensors E, where $A = A_1 A_2 \cdots A_k$ and $B = B_1 B_2 \cdots B_k$ are uncontracted indexes, while $C_1, C_2, \cdots, C_k$ are the indexes which are contracted within the chain. Moreover, since each tensor T occupies a node in the tiling, we also call k the number of nodes of the tensor chain M. The summation index i in (9) runs from 1 to k, counting the nodes of M. Alternatively, a tensor chain M can be viewed as a mapping from the Hilbert space on the uncontracted indexes A to the Hilbert space on the uncontracted indexes B.
Obviously, in a {b, a} tiling any two nodes can be connected by at most a single edge. Thus, here we only consider tensor chains with $\#(C_i) = 1$ for $i = 2, 3, \cdots, k$. Furthermore, if $\#(C_1) = \#(C_{k+1}) = 1$ we call M a closed tensor chain, while if $\#(C_1) = \#(C_{k+1}) = 0$ we call M an open tensor chain. Two typical examples of tensor chains are illustrated in Fig. 2.
Since a diagram of a tensor chain can be specified by the numbers of uncontracted edges in the tensor product, we propose the notation $m_1 m_2 \cdots m_k\, n_1 n_2 \cdots n_k$ to denote an open tensor chain, where $m_i$ ($n_i$) is the number of uncontracted upper (lower) edges at the ith node. Similarly, we use $m_1 m_2 \cdots m_k\, n_1 n_2 \cdots n_k$ to denote a closed tensor chain. Since one can reconstruct $m_i$ from $n_i$, or vice versa, according to (10)(11), sometimes for convenience we abbreviate either the $m_i$ or the $n_i$ to ∗. For instance, $* * \cdots *\, n_1 n_2 \cdots n_k \equiv m_1 m_2 \cdots m_k\, * * \cdots * \equiv m_1 m_2 \cdots m_k\, n_1 n_2 \cdots n_k$ with (10) for an open tensor chain, and analogously with (11) for a closed tensor chain.
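The relations (10)(11) are not reproduced explicitly above, but they follow from edge counting. Under the natural assumption that every node has degree a, and that in an open chain the two end nodes each spend one edge on an internal contraction while interior nodes spend two (none for a single-node chain), the lower counts follow from the upper ones. The sketch below (function name is ours) encodes this assumption:

```python
# Sketch: recover the lower-index counts n_i of an open tensor chain from
# its upper-index counts m_i, assuming every node has degree a, interior
# nodes carry two contracted edges, and the two end nodes carry one each
# (none when k = 1).  This is our reading of the counting behind (10).

def lower_counts_open(m: list[int], a: int) -> list[int]:
    k = len(m)
    n = []
    for i, mi in enumerate(m):
        contracted = 0 if k == 1 else (1 if i in (0, k - 1) else 2)
        ni = a - mi - contracted
        assert ni >= 0, "chain not embeddable: too many upper edges"
        n.append(ni)
    return n

# Example for a = 3 (the {7, 3} tiling): the single-node step chain of
# Section III has m = 1 and n = a - 1 = 2.
print(lower_counts_open([1], a=3))        # [2]
print(lower_counts_open([1, 0, 1], a=3))  # [1, 1, 1]
```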
We provide the following definition to describe some relations among tensor chains.
Definition 1. Given a tensor network with a {b, a} tiling and an open tensor chain $M = m_1 m_2 \cdots m_k\, * * \cdots *$, we define its sub tensor chain $M_{p,q}$ as $m_p m_{p+1} \cdots m_q\, * * \cdots *$ for $1 \le p \le q \le k$. We say $M' \sqsubseteq M$ if $M'$ is a sub tensor chain of M. Given a closed tensor chain $M = m_1 m_2 \cdots m_k\, * * \cdots *$, its sub tensor chain $M_{p,q}$ is defined analogously, with the node labels understood cyclically. We always require that the subscripts p, q in $M_{p,q}$ are integers satisfying $1 \le p \le q \le k$, where k is the number of nodes in M.

D. The reduced interior angle
When a tensor chain is embedded into a tensor network in H 2 space, its skeleton can be marked by a directed polyline concisely, as shown in Fig.3. Along the direction of the polyline, we require that the sequence number of nodes increases and the edges on the left (right) hand side of the polyline are always associated with the upper (lower) indexes of the tensor chain. For a closed tensor chain, conventionally the direction of the closed polyline is specified to be anticlockwise, so the inward or left-handed (outward or right-handed) edges of the polyline are associated with the upper (lower) indexes of the tensor chain. We remark that a closed polyline with clockwise direction can be analyzed in parallel, with the requirement that its inward (outward) edges are associated with the lower (upper) indexes.
The curvature of a polyline at the ith node can be captured by its interior angle $\theta_i$, defined as the angle on the left-hand side of the polyline, which is a multiple of 2π/a. We further define the reduced interior angle as $s_i = \theta_i/\frac{2\pi}{a}$, which is an integer. Obviously, the reduced interior angle is related to the number of upper edges at each node, and we give the following definition.
Definition 2. For a closed tensor chain M = m 1 m 2 · · · m k * * · · · * , the reduced interior angle of the i-th tensor is s i = m i + 1. For an open tensor chain M = m 1 m 2 · · · m k * * · · · * , the reduced interior angles are s i = m i + 1 − δ i1 .
For later convenience, we further introduce several quantities based on reduced interior angles to evaluate the curvature of a tensor chain M. In particular, we define the prime tensor chain, which is the core notion for the construction of the algebra of tensor constraints in the next subsection.
Definition 3. Given a tensor chain M with k nodes, the average reduced interior angle κ(M) is defined as
$$\kappa(M) = \frac{1}{k}\sum_{i=1}^{k} s_i,$$
the sub reduced interior angle $\kappa_{p,q}(M)$ from the p-th tensor to the q-th tensor is defined as
$$\kappa_{p,q}(M) = \frac{1}{q-p+1}\sum_{i=p}^{q} s_i,$$
and the maximal reduced interior angle $\kappa_{\max}(M)$ is defined as
$$\kappa_{\max}(M) = \max_{1\le p\le q\le k}\kappa_{p,q}(M).$$
Based on the above definitions, we have the following theorems for tensor chains.
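Definitions 2 and 3 can be condensed into a few lines of code; the sketch below (helper names are ours) keeps the averages as exact rationals, since κ is compared against rational thresholds throughout the paper:

```python
from fractions import Fraction

# Reduced interior angles (Definition 2): s_i = m_i + 1 for a closed chain,
# with 1 subtracted at the first node of an open chain, and the window
# averages of Definition 3.

def reduced_angles(m, closed=True):
    return [mi + 1 - (0 if closed or i > 0 else 1) for i, mi in enumerate(m)]

def kappa(s, p=None, q=None):
    """Average reduced interior angle kappa_{p,q}; whole chain by default."""
    p, q = (1, len(s)) if p is None else (p, q)   # 1-indexed, inclusive
    window = s[p - 1:q]
    return Fraction(sum(window), len(window))

def kappa_max(s):
    k = len(s)
    return max(kappa(s, p, q) for p in range(1, k + 1) for q in range(p, k + 1))

s = reduced_angles([1, 2, 1, 2], closed=True)   # s = [2, 3, 2, 3]
print(kappa(s))       # 5/2
print(kappa_max(s))   # 3, from the single-node window [3]
```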
Theorem 4. For a given rational number κ, the prime tensor chain M with κ(M) = κ exists and is unique. Writing κ = u/v, where u and v are coprime integers, one has $M = m_1 m_2 \cdots m_v\, n_1 n_2 \cdots n_v$, with the $m_i$ determined by κ through the floor function [x] as in (17). Consequently, M has reversal symmetry, and its reduced interior angles are given by (18).

Proof. Set a prime tensor chain $M = m_1 m_2 \cdots m_k\, n_1 n_2 \cdots n_k$ satisfying κ(M) = κ. Then (19) holds for l = 1, 2, ⋯, k − 1. On the one hand, from (19) it follows that v ≥ k. On the other hand, from (20) one obtains the complementary bound. Combining the above two statements gives k = v; in particular, taking l = 1, we have proved (16). Moreover, using standard identities of the floor function we can derive (17). Then, according to Definition 2, we obtain (18). Finally, from (18) we find that the conditions in (10) can always be satisfied if we require (15). Because of Theorem 3, M is unique.
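The floor-function formula (17) is not shown explicitly here. One natural candidate consistent with the stated properties (v nodes, average exactly u/v, all angles within one unit of each other) is the balanced assignment $s_i = \lfloor iu/v \rfloor - \lfloor (i-1)u/v \rfloor$; the sketch below implements that assignment as an assumption about (17), not a verbatim reproduction of it:

```python
from fractions import Fraction

# Hypothetical sketch of prime-chain reduced interior angles for
# kappa = u/v with gcd(u, v) = 1: the "balanced" assignment
#   s_i = floor(i*u/v) - floor((i-1)*u/v)
# produces v integers whose average is exactly u/v and which take at most
# two adjacent values -- the qualitative behaviour Theorem 4 describes.

def prime_chain_angles(u: int, v: int) -> list[int]:
    return [(i * u) // v - ((i - 1) * u) // v for i in range(1, v + 1)]

s = prime_chain_angles(5, 3)                    # kappa = 5/3
print(s)                                        # [1, 2, 2]
assert Fraction(sum(s), len(s)) == Fraction(5, 3)
```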

III. CRITICAL PROTECTION OF TENSOR CHAINS
In this section we propose the notion of critical protection to describe the behavior of tensor networks under contractions of tensor products which are subject to tensor constraints.

A. Tensor constraint
The notion of a tensor chain provides a convenient way to describe a general constraint on products of tensors T and E, which plays an essential role in pushing operators through nodes or edges of the network in the context of QEC. Usually we impose the constraint that some contraction of tensors be proportional to an isometry. Of course, a contraction of tensors may or may not form a tensor chain. For simplicity, here we only consider tensor constraints imposed on tensor chains, which can be concisely written as (32), where $M = m_1 m_2 \cdots m_k\, n_1 n_2 \cdots n_k$ is an open tensor chain and $n_i$ is the number of edges contracted with the conjugate tensor at the ith node, as illustrated in Fig. 4. Notice that the contraction over the lower indexes B in (32) involves #(B) contractions of $\sum_j E_{ij}(E_{jk})^*$. For convenience, in the remainder of this paper, when we speak of a tensor constraint M, we refer to the constraint (32) in terms of the tensor chain M. Obviously, a non-trivial constraint M requires $\sum_{i=1}^{k} m_i \ge 1$. Moreover, an isometry can be realized only if the number of degrees of freedom in A is less than or equal to that in B. Thus, any non-trivial tensor constraint M should satisfy (34). One may immediately find that, for a set of tensor constraints, some of them may not be logically independent. In general, there are four fundamental operations for deriving new constraints from given tensor constraints, listed as follows.
Reversal: If m 1 m 2 · · · m k n 1 n 2 · · · n k is a tensor constraint, then m k m k−1 · · · m 1 n k n k−1 · · · n 1 is a tensor constraint as well.
As an example, we demonstrate the derivation of new constraints by contraction and reduction in Fig.5.
We remark that the strength of a tensor constraint M can be quantified by its maximal reduced interior angle $\kappa_{\max}(M)$. Comparing the $\kappa_{\max}$ of newly derived constraints with those of the original constraints, we have the following theorem.
Theorem 5. Any tensor constraint M″ derived from M and M′ through the above operations satisfies
$$\kappa_{\max}(M'') \le \max\{\kappa_{\max}(M),\, \kappa_{\max}(M')\}, \qquad (35)$$
where $M = m_1 m_2 \cdots m_k\, n_1 n_2 \cdots n_k$ and $M' = m'_1 m'_2 \cdots m'_l\, n'_1 n'_2 \cdots n'_l$.
Proof. For contraction and reduction, $M'' \sqsubseteq M$. Thanks to Theorem 1, we have (35).
For combination, there exist p, q such that $\kappa_{p,q}(M'') = \kappa_{\max}(M'')$. In the case q ≤ k or k < p, the window lies entirely within one of the two original chains, so we similarly have (35). In the case p ≤ k < q, we observe that the window average is a weighted average of the two sub-window averages taken in M and M′, and hence cannot exceed the larger of them, which leads to (35). In summary, we have (35).
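The mixed-window argument can be checked numerically. Below, combination is modelled (as an assumption about how it acts on reduced-interior-angle sequences) as concatenation of the two sequences:

```python
import random
from fractions import Fraction

# Numerical check of the inequality (35) for "combination", modelled as
# concatenation of two reduced-interior-angle sequences: any window of the
# concatenation either lies inside one part, or its average is a weighted
# average of two sub-window averages and so cannot exceed the larger
# kappa_max of the parts.

def kappa_max(s):
    k = len(s)
    return max(Fraction(sum(s[p:q]), q - p)
               for p in range(k) for q in range(p + 1, k + 1))

random.seed(0)
for _ in range(200):
    s1 = [random.randint(0, 4) for _ in range(random.randint(1, 5))]
    s2 = [random.randint(0, 4) for _ in range(random.randint(1, 5))]
    assert kappa_max(s1 + s2) <= max(kappa_max(s1), kappa_max(s2))
print("inequality verified on 200 random pairs")
```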
Next we intend to study sets of tensor constraints of the form (40), where $M_s = 1\; a{-}1$ (one upper edge and a − 1 lower edges on a single node) is called the step tensor chain, and the other tensor constraints are not specified.
The step tensor chain has the minimal average reduced interior angle, $\kappa(M_s) = 1$. Generally, the tensor constraints in S may not be mutually independent. With the help of the step tensor chain, we have the following theorem on the relations among tensor constraints: for any general set of tensor constraints S, we can find a unique set of tensor constraints $S_c = \{M_s, M_t\}$ which is logically equivalent to S and contains only two elements. $S_c$ is called the central set. $M_t$ is called the top tensor chain, which should be prime and satisfy the condition (34). The equivalence between S and $S_c$ requires $\kappa(M_t) = \max_{M \in S} \kappa_{\max}(M)$, as proved in Theorem 8. Without loss of generality, we will only consider the central set $S_c$ hereinafter.
Thanks to Theorem 3, given a {b, a} tiling, we have a one-to-one mapping between all possible top tensor chains $M_t$ and the rational numbers in [1, a/2]. Thus, we have classified all general sets of tensor constraints of the form (40) by the rational number $\kappa(M_t)$. Given $\kappa(M_t)$, with the use of (16) we can directly construct the top tensor chain $M_t$ as well as all tensor chains M satisfying $M \sqsubseteq M_t$.
Proof. Let $M_t = m_1 m_2 \cdots m_k\, n_1 n_2 \cdots n_k$ and $M = m'_1 m'_2 \cdots m'_l\, n'_1 n'_2 \cdots n'_l$. We denote by P1 the proposition that M can be derived from $S_c$.
If P1 is true then, thanks to Theorem 2 and Theorem 5, $\kappa_{\max}(M) \le \kappa(M_t)$. Next we apply induction on the length of M to prove the converse: if $\kappa_{\max}(M) \le \kappa(M_t)$, then P1 is true.
First, for the simplest case l = 1, $\kappa_{\max}(M) = m'_1$ is an integer not exceeding $\kappa(M_t)$, so M can be derived from $S_c$ and P1 is true. Now assume that P1 is true whenever the length of M is less than l; we are going to prove that P1 is also true when the length equals l.
At the current stage, the length l of the tensor chain M could be either longer or shorter than the length k of $M_t$. In either case, we compare the number of upper basic indexes at each node within the parts of common length min(k, l). To describe the difference between the two tensor chains over this part, it is convenient to define the proposition P2: there exists j ∈ {1, 2, ⋯, min(k, l)} such that $m'_j \ne m_j$. We split the situation into the following three cases.
1. P2 is false and l ≤ k. The tensor chain M is no longer than $M_t$ and has the same number of upper basic indexes as $M_t$ at each node. Then obviously $M \sqsubseteq M_t$, so P1 is true.
2. P2 is false and l > k. The tensor chain M has the same number of upper basic indexes as $M_t$ at the first k nodes, but it is longer. One can pick out the extra part of M by setting $M'' = m'_{k+1}{+}1\; m'_{k+2} \cdots m'_l\, * * \cdots *$. For p > 1, one easily derives $\kappa_{p,q}(M'') = \kappa_{p+k,q+k}(M) \le \kappa_{\max}(M) \le \kappa(M_t)$, and for p = 1 the same bound holds. Because of the induction assumption and l − k < l, M″ can be derived from $S_c$. Since M can then be derived from M″ and $M_t$ by reversal and combination, P1 must be true.
3. P2 is true. The number of upper basic indexes differs at some node of the two tensor chains. Let r be the minimal j satisfying $m'_j \ne m_j$. Treating the two possibilities $m'_r < m_r$ and $m'_r > m_r$ along the same lines as above, one finds that P1 holds in this case as well. In conclusion, P1 is true whenever $\kappa_{\max}(M) \le \kappa(M_t)$. One may ask whether there always exist tensors T and E satisfying the tensor constraints $S_c$. We do not have a general proof of existence here. Nevertheless, for some specific tensor constraints, we can actually solve them by constructing specific tensors.

Theorem 8. Any general set of tensor constraints S of the form (40) is logically equivalent to its central set $S_c = \{M_s, M_t\}$ with $\kappa(M_t) = \max_{M \in S} \kappa_{\max}(M)$.

Indeed, some examples are demonstrated in Appendix B; see more in [32].

B. Protection
In this subsection we describe the behavior of tensor chains under the action of tensor constraints. For this purpose we first give the following definitions.

Definition 6. Given a set of tensor constraints $S_c$, let $S_D$ denote the set of all tensor constraints that can be derived from $S_c$. A tensor chain M is said to be protected if it cannot be factorized by contracting its lower indexes with any $M' \in S_D$; otherwise it is unprotected.
The notion of protection can be intuitively understood as follows. If we can find such an M′ in $S_D$, then the contraction can be simplified under the constraint $S_c = \{M_s, M_t\}$. Diagrammatically, the tensor chain M becomes disconnected under the contraction with the tensor chain M′∗, as illustrated in Fig. 6. In other words, when we say a tensor chain is protected, it means that one cannot factorize it by contracting its lower indexes with any $M' \in S_D$ derived from $S_c$. Actually, the condition in Definition 6, namely "∃M′ ∈ $S_D$", can be simplified to "∃M′ ⊑ $M_t$".

C. CP tensor chains
In this subsection we point out that, given a tiling and $S_c$, there exists a tensor chain which is critically protected. We notice that whether a tensor chain M is protected or not is reflected by the values of its interior angles, which, roughly speaking, measure the curvature of the skeleton of the tensor chain. Specifically, the larger $\kappa_{\max}(M_t)$ is, the easier it is for M to become unprotected. Therefore, there is a critical value of κ at which a tensor chain is critically protected.

Definition 7. Given an open tensor chain $M = m_1 m_2 \cdots m_{k-1} m_k\, n_1 n_2 \cdots n_{k-1} n_k$, we define a periodic tensor chain $M_{\rm period}$ by joining infinitely many copies of M, with loop body M. Obviously, $\kappa(M_{\rm period}) = \kappa(M)$.
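The statement $\kappa(M_{\rm period}) = \kappa(M)$ can be checked on finite truncations over whole periods; the loop body below is an arbitrary illustrative choice, not one taken from the paper:

```python
from fractions import Fraction

# Finite-truncation check of kappa(M_period) = kappa(M) (Definition 7):
# repeating the loop body any whole number of times leaves the average
# reduced interior angle unchanged.

def kappa(s):
    return Fraction(sum(s), len(s))

loop = [2, 3, 2]                      # reduced interior angles of a loop body
assert all(kappa(loop * n) == kappa(loop) for n in (1, 2, 5))
print(kappa(loop))                    # 7/3
```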
Definition 8. Given a tiling and the set $S_c$, we define the critically protected (CP) tensor chain $M_c$ as the periodic tensor chain generated by $M_t$. We further define the CP reduced interior angle as $\kappa_c \equiv \kappa(M_c)$. We demonstrate the construction of $M_c$ with an example in Fig. 7.
The exact meaning of critical protection is characterized by the following theorem.
Theorem 9. With a given {b, a} tiling and a given $S_c$, an open tensor chain $M = * * \cdots *\, n_1 n_2 \cdots n_k$ is unprotected if and only if there exist p, q satisfying (44). A closed tensor chain $M = * * \cdots *\, n_1 n_2 \cdots n_k$ is unprotected if and only if there exist p, h satisfying the closed-chain analogue of (44). Proof. We present the proof in detail for the case of an open tensor chain and claim that it applies to a closed tensor chain in parallel; the main difference will be mentioned at the end of the proof.
We first prove the proposition: if there exist p, q satisfying 1 ≤ p ≤ q ≤ k such that (44) is true, then M is unprotected. Without loss of generality, we assume that p, q are chosen suitably; otherwise, we simply replace p, q by appropriate p′, q′.
Let h = q − p + 1 and $x_i = a - 1 - n_i$ for all i. There are two cases; in both, from (44), (46) and (47) we obtain the desired conclusion. Now we prove the converse proposition: if M is unprotected, then there exist p, q satisfying 1 ≤ p ≤ q ≤ k such that (44) is satisfied. Suppose the top tensor chain is $M_t = m_1 m_2 \cdots m_v\, n_1 n_2 \cdots n_v$, and that M is unprotected when its tensors from the pth to the qth are acted on by a tensor chain such as ⋯ 0 1 1 0 1 1 ⋯ over ⋯ 3 2 2 3 2 2 ⋯ (the upper and lower rows of the stacked notation).
As far as a closed tensor chain is concerned, the only difference is its cyclic symmetry modulo k, namely $* * \cdots *\, n_1 n_2 \cdots n_{k-1} n_k = * * \cdots *\, n_2 n_3 \cdots n_k n_1$; then the above $x_i$ are nothing but the reduced interior angles, namely $s_i = a - 1 - n_i = x_i$. The proposition can be proved with the same algebra. Similarly, thanks to Theorem 3, we have a one-to-one mapping between the CP tensor chain $M_c$ and the CP reduced interior angle $\kappa_c$.
Critical protection characterizes the limit of mapping information from one side (with upper indexes) to the other side (with lower indexes) with full fidelity. The physical correspondence of the CP tensor chain is thus the maximal boundary of the region from which the interior information can be mapped to the boundary without loss.

IV. GEOMETRIC DESCRIPTION
In this section we elaborate on some geometric properties of the tensor network with the {b, a} tiling in $H^2$ space, which will be essential for providing a quantitative description of QEC and the ES in tensor networks. The isometry group of $H^2$ space is SL(2, R). In $H^2$ space, the curves of constant curvature (CCC) include circles, hypercircles and horocircles, depending on the value of their geodesic curvature. A geodesic is a special kind of hypercircle.³ A brief review of SL(2, R) and CCC is given in Appendix A.
A. The curve of constant curvature corresponding to a periodic polyline

The {b, a} tiling breaks the isometry group SL(2, R) down to a discrete subgroup $G_{\rm tiling}$, the set of all transformations preserving the tiling. We are interested in two specific generators V, S of $G_{\rm tiling}$, where V is the anticlockwise rotation around a node by the angle 2π/a and S is the clockwise rotation by π around the midpoint of an edge linked to this node. V and S should satisfy Tr(V) = 2 cos(π/a), Tr(S) = 0 and Tr(V S) = 2 cos(π/b).
The explicit solutions, up to SL(2, R) conjugation, are expressed in terms of length_P, which is given in (4).
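The trace conditions can be verified with explicit matrices. The conventions below (V as a standard rotation block; S obtained by conjugating a rotation by π with a translation of half an edge along a geodesic) and the edge-length relation cosh(length_P/2) = cos(π/b)/sin(π/a) are our assumptions, not a verbatim copy of the solutions referred to above:

```python
import numpy as np

# Numerical sketch of SL(2, R) representatives for V and S (assumed
# conventions).  V rotates by 2*pi/a about a node; S rotates by pi about the
# midpoint of an adjacent edge, realized by conjugating the block J with a
# translation T of half an edge length along a geodesic.

def generators(b: int, a: int):
    c, s = np.cos(np.pi / a), np.sin(np.pi / a)
    V = np.array([[c, s], [-s, c]])           # rotation by 2*pi/a, trace 2*cos(pi/a)
    half_edge = np.arccosh(np.cos(np.pi / b) / np.sin(np.pi / a))
    T = np.diag([np.exp(half_edge / 2), np.exp(-half_edge / 2)])
    J = np.array([[0.0, 1.0], [-1.0, 0.0]])   # rotation by pi, trace 0
    S = T @ J @ np.linalg.inv(T)              # rotation by pi about the edge midpoint
    return V, S

V, S = generators(7, 3)                       # the {7, 3} tiling used later
print(np.trace(V), np.trace(S), np.trace(V @ S))
# |Tr(V)| = 2*cos(pi/3), |Tr(S)| = 0, |Tr(VS)| = 2*cos(pi/7), as required
```

Traces in SL(2, R) are only defined up to sign when lifting from PSL(2, R), which is why absolute values appear in the check.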
Recall that a tensor chain can be embedded into a tensor network in $H^2$ space, as discussed at the beginning of Subsection III A. Similarly, a periodic tensor chain can also be embedded, and its skeleton forms an endless, periodic polyline. When the scale of the chain is much greater than the period of the polyline, the roughness of the skeleton can be zoomed out, so that it looks like a CCC in $H^2$ space whose geodesic curvature λ is constant. In general, given the embedding of a periodic tensor chain, we can define a unique CCC corresponding to this chain by a specific procedure. Below we first present the procedure for locating such a CCC, and finally discuss some exceptional cases for which such a CCC cannot be defined.

³ In much of the literature, 'geodesic' in tensor networks often refers to the polyline with minimal cuts. However, this may not always coincide with the geometric geodesic in $H^2$ space. We will not use 'geodesic' to describe the polyline with minimal cuts throughout this paper.
The periodic tensor chain M can be constructed from an open tensor chain according to Definition 7. We choose a node of M and label it i. The loop body begins at this node: we take the center of the rotation generated by V to be the ith node, and the center of the rotation generated by S to be the midpoint of the edge between the ith node and the (i + 1)th node.
Then we further define a transformation W preserving the structure of the periodic polyline, which maps each period of the polyline to the next period along the direction of the polyline.
Starting from a point q in $H^2$ space, the set of all points generated by $W^n$, namely $Q_W = \{W^n q \mid n \in \mathbb{Z}\}$, is located on a CCC. When $|Q_W| \ge 3$, the CCC is uniquely determined.
We are interested in the case where the point q is the midpoint of an edge with a lower index in M. With some algebra, we find that the geodesic curvature λ of this kind of CCC can be calculated from W and $\pi_q$ as in (57), where $\pi_q \in$ SL(2, R) is the matrix of clockwise rotation by the angle π around the point q. $\pi_q$ can be generated from the generators V and S, according to the relative position between the point q and the ith node.
Obviously, different choices of the point q generate different CCCs and different values of λ. To determine the unique CCC corresponding to M, we remark that one just needs to choose the point q which minimizes |λ − 1| in (57). The process of generating the CCC from a periodic tensor chain is illustrated in Fig. 8.
Once the CCC corresponding to a periodic tensor chain is uniquely determined, the classification of CCC in (A6) can be reformulated in terms of the trace of W: the CCC is a circle for |Tr W| < 2, a horocircle for |Tr W| = 2, and a hypercircle for |Tr W| > 2. From Theorem 3 and Definition 7, for a given rational number κ ∈ [1, a/2], one can construct a unique prime tensor chain and its periodic tensor chain M with κ(M) = κ. As a result, one can further determine the corresponding CCC as well as its geodesic curvature λ. So one can define a mapping from κ to λ, as illustrated in Fig. 9.
It may be noticed that not every λ can be inversely mapped to a κ, because λ is a positive real number while κ is a rational number. Nevertheless, the horocircle, whose curvature is λ = 1, is a special kind of CCC in $H^2$ space. It corresponds to a closed tensor chain (closed polyline) in the large-radius limit. Furthermore, given a {b, a} tiling, we can prove that the average reduced interior angle $\kappa_h$ of such a closed tensor chain is given by (59). Since this quantity plays a crucial role in classifying tensor networks, we provide the detailed proof as follows.

Proof. Consider a closed tensor chain M and its outer adjoint closed tensor chain M′. The number of nodes enclosed by M′ is always greater than that enclosed by M. Thus, if we begin with an elementary closed polyline with b nodes and total reduced interior angle b, and continuously pass to the outer adjoint polyline, we approach the polyline corresponding to a horocircle. The total reduced interior angle then evolves under an iteration map f, and $\kappa_h$ is obtained as the limit of $f^n$ applied infinitely many times, where $f^n$ represents f applied n times. To evaluate this limit, we observe that the iteration is governed by a constant $c_2$ with $0 < c_2 < 1$. Since $f^n(1)$ must remain finite, we finally arrive at (59).

Now we discuss some exceptional cases. The first case is $|Q_W| \le 2$: the number of generated points is no more than two, so the CCC cannot be uniquely defined through the above process. The second case is that $M_{\rm period}$ bends in an irregular way, such that the embedding of the periodic tensor chain leads to a self-crossing polyline and no CCC can form; an example is the $M_{\rm period}$ with loop body 1 1 1 5 4 5 in the tensor network with the {3, 7} tiling. These two cases only happen for some κ ∈ [1, $\kappa_h$).

B. CP curves
The CCC corresponding to a CP tensor chain is called the CP curve, whose geodesic curvature is called the CP curvature λ_c. The CP curve is a generalization of the greedy geodesic in [18].
Given a tiling, the CP curvature λ_c and the CP reduced interior angle κ_c are inversely related to each other, as shown in Fig.9. If the CP tensor chain forms a closed polyline, the CP curve is a circle, with κ_c < κ_h and λ_c > 1. If the CP tensor chain forms an open polyline which extends to the boundary, the CP curve is a hypercircle, with κ_c > κ_h and λ_c < 1.
Roughly speaking, a periodic tensor chain is unprotected if its corresponding CCC has geodesic curvature λ > λ_c, while it is protected if the corresponding CCC has λ < λ_c. The structure of a tensor network with {7, 3} tiling is shown in Fig.1(a). According to (59), one has κ_h = 1.28. We impose the central set S_c = {1 2, M_t} for some typical M_t and discuss the entanglement properties of the tensor network. The structure of M_t and M_c, and the values of κ_c and λ_c, are listed in Table I. The corresponding diagrams of tensor constraints and CP tensor chains in the tiling are illustrated in Figs. 11, 12, 13 and 14, respectively.
In parallel, a tensor network with {4, 5} tiling is shown in Fig.1(b). In this case, one has κ_h = 1.63. The entanglement properties of this tensor network with different top tensor chains are collected in Table II. The corresponding diagrams of tensor constraints and the embedded CP tensor chains are plotted in Figs. 15, 16, 17 and 18, respectively.
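As a cross-check of the two values just quoted, κ_h can be evaluated numerically. The closed form below is our own reconstruction (Eq. (59) itself is not reproduced in this excerpt), inferred from the kind of Euler-formula counting used later in the proofs; it reproduces both quoted values, but should be checked against (59).

```python
import math

def kappa_h(b, a):
    """Average reduced interior angle of the tensor chain corresponding to a
    horocircle in a {b, a} tiling.  This closed form is our reconstruction of
    Eq. (59): kappa_h = a/2 - sqrt(t (t - 4)) / (2 (b - 2)), t = (a-2)(b-2)."""
    t = (a - 2) * (b - 2)  # hyperbolic tilings satisfy t > 4
    return a / 2 - math.sqrt(t * (t - 4)) / (2 * (b - 2))

print(round(kappa_h(7, 3), 2))  # {7,3} tiling, quoted value 1.28
print(round(kappa_h(4, 5), 2))  # {4,5} tiling, quoted value 1.63
```

Both quoted values come out, which supports, but does not prove, that this is the intended form of (59).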
In the above figures, we divide the boundary of the tensor network into two intervals A and Ā.
The shaded regions with different colors represent the effect of the greedy algorithm starting from A and from Ā, respectively, which will be discussed in detail in the next subsection.
CP tensor chains are marked in each figure, and their significance for the greedy algorithm will be stressed as well. In the next two sections, we will take these figures as examples to disclose the relation between the greedy algorithm and quantum error correction as well as the entanglement spectrum.

B. Greedy algorithm on tensor chains
For a tensor network Ψ, we generalize the greedy algorithm in [18], based on the set S_D derived from a central set S_c. After choosing an interval A on the boundary, we consider a sequence of cuts {C_n} and a sequence of sub tensor networks {Φ_n}, where each C_n is bounded by ∂A and each Φ_n consists of those tensors enclosed by C_n and A, shaded with strips. So each Φ_n is a mapping from the Hilbert space on C_n to the Hilbert space on A. Let C_1 = A; then Φ_1 is an identity. Next, one figures out a tensor chain M_n in the tiling which belongs to the set S_D and all of whose lower indices can be contracted with Φ_n. Then Φ_{n+1} is constructed by absorbing M_n into Φ_n. The greedy algorithm stops when no such tensor chain can be found. This way of iteration guarantees that each Φ_n is proportional to an isometry.
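The iteration described above can be sketched in code. This is a toy model only: the true absorption criterion, membership of a tensor chain in S_D with all lower indices contracted with Φ_n, is replaced here by a hypothetical leg-counting rule (absorb a tensor once enough of its legs are already contracted), in the spirit of the perfect-tensor greedy algorithm of [18].

```python
def greedy(adjacency, boundary_legs, need):
    """Toy greedy algorithm.
    adjacency:     dict node -> list of neighbouring nodes (bulk legs).
    boundary_legs: dict node -> number of its uncontracted legs lying in A.
    need:          legs that must be contracted before a node is absorbed
                   (a stand-in for the S_D membership criterion).
    Returns the set of nodes absorbed into the region Phi."""
    absorbed = set()
    changed = True
    while changed:                      # iterate until no chain can be absorbed
        changed = False
        for node, nbrs in adjacency.items():
            if node in absorbed:
                continue
            contracted = boundary_legs.get(node, 0) + sum(n in absorbed for n in nbrs)
            if contracted >= need:      # proxy for "tensor chain in S_D"
                absorbed.add(node)      # Phi_{n+1} absorbs this tensor
                changed = True
    return absorbed

# Tiny example: a path of three tensors with some legs on the boundary interval A.
adj = {1: [2], 2: [1, 3], 3: [2]}
legs_in_A = {1: 3, 2: 2, 3: 1}
print(sorted(greedy(adj, legs_in_A, need=3)))  # -> [1, 2]
```

The `while` loop makes the result independent of the order in which tensors are examined, mirroring the fact that the final shaded region does not depend on the absorption sequence.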
As explained in [32], the above greedy algorithm for a tensor network Ψ is equivalent to the procedure of simplifying the contraction of tensor chains in (42), where M is any tensor chain embedded in the tensor network Ψ.
To describe the process of the greedy algorithm precisely, which is essential in the proofs of the properties of the ES, we extend the notion of protection to a directed cut in the greedy algorithm.
One may notice that the process of a greedy algorithm is not unique. Actually, there are many ways to arrange the sequence of absorbing tensor chains into the shaded region Φ_n, so that during the course of the greedy algorithm C_n need not be connected. In the greedy algorithm starting from an interval A, each cut C_n is assigned a direction such that its corresponding Φ_n lies on its right-hand side. A cut may consist of one or more connected components, as illustrated in Fig.19.

Definition 11. We say a directed cut C is unprotected if there exists an unprotected tensor chain which is connected to C. Otherwise, we say C is protected.

So a greedy algorithm progresses (stops) when the cut is unprotected (protected).
A greedy algorithm can start from the interval Ā as well. In Figs. 11, 12, 14, 15, 16 and 18, we show the final results of the greedy algorithm on several tensor networks with specific intervals A and Ā, shaded with different colors respectively. In Figs. 13 and 17, all the tensor chains are protected under the action of the greedy algorithm, so that no shaded region appears in those networks.
In the above plots one may notice that some CP tensor chains are absorbed by the greedy algorithm, which apparently conflicts with the fact that a CP tensor chain should be protected.
We point out that this phenomenon is due to the fact that the endpoints of CP tensor chains belong to the interval A or Ā on the boundary as well. For instance, consider the greedy algorithm starting from Ā, as shown in Fig.20.

[Figure caption: The closed curve is denoted by (N_1, N_2, · · · , N_7). The sub tensor network Φ absorbed by the greedy algorithm is shaded with purple strips. A minimal secant geodesic G_m is marked by a blue dot-dashed line.]

We define Ȧ to be the interval between one endpoint of a CP tensor chain, which is an uncontracted edge on the boundary within Ā, and the nearest endpoint of Ā. Generally, the width of Ȧ is equal to the geodesic distance between the CP curve and its axis, i.e. d_c = arctanh(λ_c). We first consider the greedy algorithm starting from the sub-interval Ā − Ȧ. At this stage the greedy algorithm indeed stops before it touches the CP tensor chain. However, in practice, when we calculate the reduced density matrix ρ_A, the contraction over the endpoints of the CP tensor chain, namely the uncontracted edges within Ȧ, must be taken into account by definition. At this stage the CP tensor chain may fail to be protected under the action of the greedy algorithm, as shown in Fig.12. We refer to this as the boundary effect of the greedy algorithm. Nevertheless, we remark that this boundary effect is weak in the sense that it only absorbs a finite number of layers (most likely, only one layer) of tensors, and we will elaborate on it when we study the ES of tensor networks in Section VII.
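The width d_c used above follows from the standard hyperbolic fact that a hypercircle at geodesic distance d from its axis has geodesic curvature λ = tanh d. A one-line numerical check, with an illustrative value of λ_c (not one taken from Table I):

```python
import math

# A hypercircle at geodesic distance d from its axis has geodesic curvature
# lambda = tanh(d), hence d_c = arctanh(lambda_c).
lambda_c = 0.8                      # illustrative value only
d_c = math.atanh(lambda_c)
print(round(d_c, 4))
assert math.isclose(math.tanh(d_c), lambda_c)
```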

VI. QUANTUM ERROR CORRECTION (QEC)
In this section we concentrate on how to justify the ability of QEC for a tensor network based on the properties of the CP tensor chain.

A. Greedy algorithm and QEC
The whole story of QEC on tensor networks is based on the Hilbert space associated with uncontracted edges, which introduce extra degrees of freedom in the bulk, and the corresponding code subspace in the Hilbert space on the boundary. The correction of the code subspace after erasing an interval A on the boundary is equivalent to pushing a bulk operator in the wedge of the interval Ā to the interval Ā on the boundary [14]. Technically, following [18], the procedure of QEC involves three steps: (1) acting on an uncontracted index in the bulk with an operator; (2) pushing the operator from this index in the bulk to the indices in the network; (3) pushing it further to the boundary.
In the current work we ignore the Hilbert space in the bulk, since our main purpose is to realize the algorithm of QEC on tensor networks. We skip step 1 and begin at step 2, by directly inserting an operator into the contracted edges in the interior. In the language of tensor chains, we can insert an operator O between a tensor chain M and its adjacent chain M′. No matter which way one adopts to insert an operator into the network, the subsequent processes of QEC are the same. Thanks to the tensor constraints, we can push an operator O through a tensor chain M ∈ S_D, and the output operator can be rewritten as O′. Specifically, without uncontracted indices in the bulk, equation (67) is just the reflection of step 2 above and equation (68) depicts step 3. After all, the terminology 'QEC' in this paper refers to the above interpretation.
All the above operations can be demonstrated by diagrams. Take the tensor network with {7, 3} tiling as an example. The insertion of an operator is illustrated diagrammatically, and by employing the tensor constraints (11), the process of pushing an operator through tensor chains can be illustrated in the same way.
One can successively push operators through tensor chains in S_D. Operators may or may not finally be pushed to an interval on the boundary, depending on the structure of the tiling and the tensor constraints. Actually, tensor pushing is the reverse of the greedy algorithm: pushing an operator through a tensor chain M ∈ S_D is the reverse of absorbing a tensor chain M into the shaded region of a tensor network.
Definition 12. We say that a tensor network enjoys QEC if any operator inserted into the bulk of the network can be pushed to an interval on the boundary.
In Figs. 11 and 12, an inserted operator O is successfully pushed to Ā. We remark that if the operator is inserted into the region enclosed by the CP tensor chain and the interval Ā, as illustrated in Fig.11, then it can be pushed to Ā. In other words, after erasing an interval A, operators at most of the points in the wedge of Ā can be recovered by QEC.
If the operator is instead inserted into the region enclosed by the CP tensor chain and the geodesic bounded by ∂Ā, the situation becomes subtle, and it is not guaranteed that the operator can always be pushed to Ā. On one hand, if the inserted operator is close to the CP tensor chain, as illustrated in Fig.12, then it may still be pushed to a subinterval of Ā.
However, the bound of such a subinterval now approaches ∂Ā. In this figure we notice that many arrows, which denote the trajectory of pushing the operator, cross the geodesic and then radiate out over a wide region, in contrast to the process in Fig.11. This phenomenon indicates that the information of an operator can only be recovered over a wide range of the boundary, implying that the function of QEC in Fig.12 is weaker than in the tensor network of Fig.11. It may be related to approximate QEC [14,33]. On the other hand, if the operator is rather close to the geodesic bounded by ∂Ā, it may not be pushed to Ā any more.

B. CP curves and QEC
The geometric description of the CP tensor chain in Section IV provides a way to describe QEC over H² space as well. Given a subsystem Ā on the boundary, one may ask whether an operator acting on a point x located inside the wedge of Ā can be pushed to Ā.
For a simply connected interval Ā, we denote its two endpoints as u and v, respectively, and consider the hypercircle H anchored at u and v that passes through x; let Φ be the sub tensor network enclosed by H and Ā. To check whether Φ is isometric or not, one needs to evaluate the inner product ΦΦ†, which is directly determined by the imposed tensor constraints S_c. During the evaluation, the most difficult step is to simplify the contraction MM†, where M is the boundary tensor chain of Φ on H. So to figure out whether Φ is isometric or not, our final task is to justify whether M is protected or not under tensor contractions subject to S_c.
Fortunately, our discussion in the section on critical protection has provided an answer to this question. One can justify this by comparing the geodesic curvature λ of the hypercircle H with the curvature λ_c of the CP curve. If λ > λ_c, then M is unprotected; if λ < λ_c, then M is protected.
As a result, given a subsystem A on the boundary, we draw the geodesic connecting the two endpoints of A and the CP curve between the geodesic and Ā. Whether an operator at x can be pushed into Ā depends on the geodesic curvature of the hypercircle passing through x. For those points inside the region enclosed by the boundary Ā and the CP curve, an operator can be recovered by QEC, since the geodesic curvature of the corresponding hypercircles is greater than λ_c; while for those points inside the region enclosed by the CP curve and the geodesic, an operator cannot be recovered by QEC, since the geodesic curvature is less than λ_c.

When a tiling of H² is specified, κ_c is inversely related to λ_c. Because κ is more easily calculated than λ, one can alternatively compare the average reduced interior angle κ of a hypercircle with the average reduced interior angle κ_c of the CP tensor chain. For a given tiling, recall that the tensor chain corresponding to a horocircle with λ_h = 1 has average reduced interior angle κ_h in (59). Moreover, once S_c is specified, λ_c and κ_c are determined as well. Whether a tensor network enjoys QEC or not can be justified by comparing the value of λ_c with λ_h, or κ_c with κ_h, as described below.
If λ_c ≥ 1 or κ_c ≤ κ_h, the CP curve is a circle or a horocircle. The geodesic curvature of every hypercircle is then less than λ_c, so no QEC can be implemented by inserting an operator at any point in the bulk, and such a tensor network does not enjoy QEC. For instance, the tensor networks in Figs. 13 and 17 belong to this class. This matches the fact that the greedy algorithm does not iterate in these tensor networks.
Similarly, if λ_c < 1 or κ_c > κ_h, the CP curve is a hypercircle. An operator inserted into the region enclosed by the CP tensor chain and Ā can be recovered by QEC, and such a tensor network enjoys QEC. For instance, all the other tensor networks in this paper, except those of Figs. 13 and 17, belong to this class. Nevertheless, given an interval Ā, the region that can be recovered differs for different constraints.
We summarize the above results about the function of QEC in a network in Fig.25.

VII. ENTANGLEMENT SPECTRUM (ES)
Next we focus on the evaluation of the entanglement spectrum for a given tensor network, and argue that the flatness of the ES can be justified by the power of critical protection in general cases.

A. Reduced density matrix
A tensor network Ψ gives a state |Ψ⟩ in the Hilbert space defined on its uncontracted edges on the boundary. Given an interval A on the boundary, one can obtain the reduced density matrix of A by tracing out the complementary region Ā, namely ρ_A = Tr_Ā |Ψ⟩⟨Ψ|. We are concerned with the issue of whether the reduced density matrix ρ_A has a flat spectrum, which means that all the non-zero eigenvalues of ρ_A are identical. This statement can be rephrased as the following propositions:

• All orders of the Rényi entropy are identical, namely independent of the order n.
• The reduced density matrix satisfies the relation ρ_A² ∝ ρ_A, as in (74).

As indicated at the beginning of this paper, the ρ_A of the ground state of CFT₂ satisfies (1). First we disclose the key role of the CP tensor chain in identifying the protected region in a tensor network. Recalling the boundary effect in the greedy algorithm, we separate the procedure of taking the trace over Ā into the two steps Tr_Ā = Tr_Ȧ Tr_{Ā−Ȧ}, as in (75). In the following, we take the tensor networks with {7, 3} tiling as examples to demonstrate the evaluation of the ES by manipulating tensor networks. The results have been collected in Table I.
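The equivalence between a flat spectrum, n-independent Rényi entropies, and ρ_A² ∝ ρ_A can be illustrated with a small numerical sketch, using a normalized rank-r projector as a stand-in for a flat-spectrum ρ_A (the dimensions below are arbitrary toy values):

```python
import numpy as np

rng = np.random.default_rng(0)

# rho with a flat spectrum: a rank-r projector, normalized to unit trace.
d, r = 8, 3
q, _ = np.linalg.qr(rng.normal(size=(d, r)))   # isometry, q^† q = I_r
rho = (q @ q.conj().T) / r                     # nonzero eigenvalues all equal 1/r

def renyi(rho, n):
    """Renyi entropy S_n = log tr(rho^n) / (1 - n)."""
    return np.log(np.trace(np.linalg.matrix_power(rho, n)).real) / (1 - n)

entropies = [renyi(rho, n) for n in (2, 3, 4)]
assert np.allclose(entropies, np.log(r))       # all orders agree (= log r)
assert np.allclose(rho @ rho, rho / r)         # rho^2 proportional to rho
```

For a flat spectrum, tr(ρⁿ) = r^{1−n}, so every Rényi order gives log r, and conversely any ρ with ρ² ∝ ρ is a rescaled projector.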

Non-flat ES
First of all, we point out that the evaluation of the ES depends on the choice of the interval A on the boundary. We will see that, for the constraints S_c = {1 2, 1 0 1 1 1 1}, the ES of ρ_A for any relatively large interval A is non-flat. We say the tensor network generally has a non-flat ES, where the word "generally" means that the ES is always non-flat unless the tensors T and E are fine-tuned. Throughout this paper, when we say that a tensor network has a non-flat ES, we refer to the above statement.
Firstly, we trace out the degrees of freedom in Ā − Ȧ to obtain the reduced density matrix. During this procedure the structure of the tensor network is simplified due to the tensor constraints generated by S_c; see Fig.21(a-c). Specifically, those tensors in the wedge of Ā are contracted into identity matrices, which is just the process of the greedy algorithm starting from Ā − Ȧ in the previous section. One can repeat this process until it reaches a final stage at which the network cannot be simplified any more, as shown in Fig.21(c).
The terminal boundary forms a polyline, marked in red in Fig.21. As a matter of fact, such a polyline is nothing but a CP tensor chain as defined in the previous section.
From this figure we see that, before the trace over the uncontracted edges in Ȧ is taken into account, the operation induced by the greedy algorithm cannot enter the region enclosed by the CP tensor chain and A, which is exactly why we call it a critically protected tensor chain. The next step is to evaluate Tr_Ȧ, namely tracing out the degrees of freedom associated with the uncontracted edges on the boundary which are closest to A. The process is illustrated in Fig.21(d)(e). We notice that the network structure can be further simplified, such that CP tensor chains are absorbed into the shaded region at this step; this is the boundary effect of the greedy algorithm described in the previous section.
The boundary effect of the greedy algorithm results from the discretization of H² space, and may not appear in a continuous geometry. In the context of tensor networks, however, according to (75) the uncontracted edges in Ȧ should be contracted. Sometimes the contribution of this effect to the reduced density matrix becomes subtle, and we should handle it cautiously. In other words, whether the ES is flat or not can only be justified after the boundary effect is taken into account.

Now, with the reduced density matrix ρ_A at hand, we can compute ρ_A² by further contracting the uncontracted edges in A. One can simplify ρ_A² by virtue of the tensor constraints, in parallel with the above process on Ā. The boundary effect of the greedy algorithm appears as well. Before the boundary effect is taken into account, the greedy algorithm stops at a CP tensor chain, which is the reflection, about the geodesic bounded by ∂A, of the CP tensor chain appearing in the contraction over Ā. Thus, the simplification of ρ_A² is equivalent to applying the greedy algorithm to A and Ā successively, as shown in Fig.12.

The calculation of ρ_A² is demonstrated in Fig.22. From this diagram we find that ρ_A² cannot be simplified to be proportional to ρ_A, so that equation (74) is not satisfied. Equivalently, from Fig.12, we notice that some tensors are not absorbed by the greedy algorithm starting from A and Ā; thus (74) is not satisfied.
Given the above constraints, we point out that as long as A is large enough, ρ_A always gives rise to a non-flat ES, independently of the choice of A. This assertion will be proved in Subsection VII C. For now we simply conclude that such a tensor network has a non-flat ES, in agreement with what was found by explicitly computing the eigenvalues of the reduced density matrix in [25].
We remark that for the above constraints the corresponding CP curve is a hypercircle.
When the CP curve is a horocircle, it approaches the boundary with a single intersection point; when the CP curve is a circle, it does not reach the boundary at all. In both cases one need not consider the boundary effect separately, and the ES is usually non-flat for CP circles, since the region enclosed by the circle is protected.

Flat ES
We have pointed out that one equivalent way to check the relation (74) is to apply the greedy algorithm starting from A and from Ā successively. Let us take Fig.11 as an example, where S_c = {1 2, 1 1 1 1}. We observe that the union of the two shaded regions covers the whole tensor network, implying that all the tensors are absorbed by the greedy algorithm. Therefore, (74) is satisfied and the ES has to be flat. We say the tensor network has a flat ES.

Mixed ES
From Fig.14, we know that, for S_c = {1 2, 1 0 1 0 1 1 1 0 1 1}, the ES of ρ_A can be flat or non-flat, depending on the choice of A. We say the tensor network has a mixed ES.

B. Geometric point of view on ES
In the tensor network realization of AdS/CFT, a tensor network is usually treated as the wavefunction Ψ of the ground state. Alternatively, when an interval A on the boundary is given, Ψ can be understood as a mapping from the Hilbert space on A to the Hilbert space on Ā. So Ψ can be regarded as a matrix Ψ^Ā_A, where the two indices A and Ā represent the degrees of freedom on the two subsystems A and Ā, respectively.
The notion of critical protection provides an efficient way to visualize the simplification of tensor networks under tensor contractions subject to tensor constraints. To make this process more transparent, we first decompose a network into sub networks. As seen in the previous subsections, when the indices on A or Ā are contracted, the greedy algorithm stops at some nodes. Let us first neglect the boundary effect; then the skeletons connecting those nodes form two CP tensor chains, which are neighboring to the geodesic bounded by ∂A.
First of all, we point out that when λ_c ≥ 1 or κ_c ≤ κ_h, all the hypercircles are protected, since their geodesic curvatures are less than λ_c. So a non-flat ES is guaranteed. In the following, we focus on the non-trivial case λ_c < 1, or κ_c > κ_h, where CP curves are hypercircles.
We denote the CP curve close to A or Ā as H_A or H_Ā, respectively. The region enclosed by the two CP curves is called the CP region Ω_c. The tensors in the CP region form a sub tensor network Ψ_c, which is a mapping from H_A to H_Ā and is denoted as (Ψ_c)^{H_Ā}_{H_A}. Similarly, H_A and A enclose a sub tensor network Φ_A, which defines a mapping (Φ_A)^A_{H_A}; H_Ā and Ā enclose a sub tensor network Φ_Ā, which defines a mapping (Φ_Ā)^Ā_{H_Ā}. Since the tensors outside H_A are not protected under the contraction of A, the mapping (Φ_A)^A_{H_A} from H_A to A is proportional to an isometry. Similarly, the mapping (Φ_Ā)^Ā_{H_Ā} from H_Ā to Ā is proportional to an isometry as well. This is expressed in (76), where the indices are abbreviated and I (I′) is the identity matrix on A (Ā).
Finally, the full matrix Ψ^Ā_A can be represented as the product of the matrices Φ_Ā, Ψ_c and Φ_A. It is then easy to see, using (76), that ρ_A is determined by the contraction Ψ_c†Ψ_c sandwiched between Φ_A† and Φ_A. A flat ES in (74) therefore amounts to the condition that Ψ_c itself is proportional to an isometry, which is (80). We present a schematic diagram demonstrating the decomposition of the tensor network state, as well as the calculation of ρ_A and ρ_A², in Fig.23. The condition (80) for a flat ES is illustrated in Fig.24. This figure reveals that whether the ES is flat or not depends on the thickness of the CP region, where the thickness is defined as the distance between the two CP curves.
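The algebra above can be mimicked numerically. The sketch below builds Ψ from three random matrices proportional to isometries, with the CP-region block Ψ_c itself isometric (the flat-ES case); the dimensions and index conventions are toy assumptions, not taken from the text:

```python
import numpy as np

rng = np.random.default_rng(1)

def isometry(m, n):
    """Random m x n matrix V with V^† V = I_n (needs m >= n)."""
    q, _ = np.linalg.qr(rng.normal(size=(m, n)))
    return q

# Toy dimensions for the Hilbert spaces on A, H_A, H_Abar, Abar.
dA, dHA, dHAb, dAb = 3, 4, 5, 6
Phi_A  = isometry(dHA, dA)    # block between A and the CP curve H_A
Phi_Ab = isometry(dAb, dHAb)  # block between H_Abar and Abar
Psi_c  = isometry(dHAb, dHA)  # CP region, taken isometric here (flat-ES case)

Psi = Phi_Ab @ Psi_c @ Phi_A          # full matrix Psi with indices Abar, A
rho_A = Psi.conj().T @ Psi            # = Phi_A^† (Psi_c^† Psi_c) Phi_A
rho_A = rho_A / np.trace(rho_A)

# Psi_c^† Psi_c = I, so rho_A^2 is proportional to rho_A: the ES is flat.
assert np.allclose(rho_A @ rho_A, rho_A / dA)
```

Replacing `Psi_c` by a generic (non-isometric) matrix breaks the final assertion, mirroring the statement that a protected CP region of finite thickness yields a non-flat ES.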
Equivalently, from the above derivation, we notice that the flatness of the ES can be checked by observing the result of the greedy algorithm starting from A and from Ā successively, which identifies the region of isometry between tensor chains in (76). If all the tensors are absorbed by the greedy algorithm, then (80) is valid and the ES is flat, and vice versa.
Once the boundary effect is considered, as shown in the previous section, the CP tensor chains on the boundary of the CP region Ω_c are no longer protected under the greedy algorithm. Nevertheless, only a finite thickness of the CP region will be absorbed. In Fig.22, since the tensors close to the geodesic are not absorbed, the tensor network has a non-flat ES.
The lesson from this picture is that the thickness of the CP region determines whether the ES is flat or not. Without the boundary effect, the boundary of the CP region is composed of two CP curves, so its thickness is 2d_c, where d_c = arctanh(λ_c) is the geodesic distance between the CP curve (hypercircle) and its axis. Due to the boundary effect, the CP tensor chain is no longer protected, and the outer layer of the original CP region is absorbed by the greedy algorithm. The thickness of such a layer is approximately given by P, the length of an edge in (4). So the thickness of the CP region decreases to 2d_c − P. Since Ψ_c is protected, (80) is true only if the thickness of the CP region vanishes.
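A small numerical sketch of the thickness 2d_c − P. We assume the standard edge-length formula for a regular {b, a} tiling of H² of unit curvature, cosh(P/2) = cos(π/a)/sin(π/b), for the quantity P of Eq. (4), and use an illustrative value of λ_c (not one taken from Table I):

```python
import math

def edge_length(b, a):
    """Edge length P of the regular {b, a} tiling of H^2 (unit curvature),
    using the standard formula cosh(P/2) = cos(pi/a) / sin(pi/b); we assume
    this agrees with Eq. (4) of the text."""
    return 2 * math.acosh(math.cos(math.pi / a) / math.sin(math.pi / b))

def cp_region_thickness(lambda_c, b, a):
    """Thickness 2*d_c - P of the CP region once the boundary effect is included."""
    return 2 * math.atanh(lambda_c) - edge_length(b, a)

print(edge_length(7, 3))                    # P for the {7,3} tiling
print(cp_region_thickness(0.8, 7, 3))       # lambda_c = 0.8 is illustrative only
```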
The evaluation of the geodesic curvature λ in a general tensor network is difficult, which prevents us from justifying the flatness of the ES with the CP curvature λ_c. Alternatively, this job can be done by calculating the CP reduced interior angle κ_c, as described in the next subsection.
Because of (3), the relation κ_h < κ_1 < κ_0 always holds. If κ_c ∈ (1, κ_h), it turns out the network is not able to implement QEC but has a non-flat ES, as indicated in Figs. 13 and 17. If κ_c ∈ (κ_h, κ_1), then the network can implement QEC and has a non-flat ES, as shown in Figs. 12 and 16. If κ_c ∈ [κ_1, κ_0), the ability of QEC becomes stronger but the ES becomes "mixed", as shown in Figs. 14 and 18. Finally, if κ_c = κ_0, the quality of QEC becomes better still and the ES has to be flat, which is exactly the property of perfect tensors, as shown in Figs. 11 and 15.
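The classification just described can be condensed into a small helper. The thresholds κ_0 = a/2 and κ_1 = b/(b−2) are read off from the propositions proved later in the text; the boundary conventions at κ_h and κ_1 follow the intervals quoted above.

```python
def classify(kappa_c, kappa_h, b, a):
    """QEC/ES class of a {b, a} tensor network from its CP reduced interior
    angle kappa_c.  Thresholds: kappa_0 = a/2, kappa_1 = b/(b-2); kappa_h is
    the horocircle value (59) for the tiling, passed in as a number."""
    kappa_0, kappa_1 = a / 2, b / (b - 2)
    if kappa_c <= kappa_h:        # CP curve is a circle or horocircle
        return ("no QEC", "non-flat ES")
    if kappa_c < kappa_1:
        return ("QEC", "non-flat ES")
    if kappa_c < kappa_0:
        return ("QEC", "mixed ES")
    return ("QEC", "flat ES")     # kappa_c = a/2: the perfect-tensor case

# {7,3} tiling, with kappa_h = 1.28 as quoted in the text:
print(classify(1.2, 1.28, 7, 3))   # -> ('no QEC', 'non-flat ES')
print(classify(1.5, 1.28, 7, 3))   # -> ('QEC', 'flat ES')
```

For {7, 3} one has κ_1 = 7/5 = 1.4 and κ_0 = 1.5, consistent with κ_h = 1.28 < κ_1 < κ_0.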
Correspondingly, we may propose a geometric quantity in H² space which plays a role similar to that of κ_c in the tensor network. This quantity is the geodesic curvature λ_c of the CP curve. Given a tiling, λ_c can be calculated from κ_c. A schematic relation between λ_c and QEC and ES is also illustrated in Fig.25. However, we do not yet have general expressions for the bounds λ_0 and λ_1, which correspond to κ_0 and κ_1, respectively.⁴

Until now, we have constructed a general framework for tensor networks with tensor constraints, and developed a generalized greedy algorithm to describe the property of critical protection. In the remainder of this paper, we will provide detailed proofs of the quantitative relation between the CP tensor chain and QEC as well as the ES, and finally complete the classification of tensor networks as illustrated in Fig.25.

⁴ The main difficulty probably results from the specification of a unique CP curve corresponding to a CP tensor chain. Tensor chains are discrete, while curves are continuous. To assign a unique curve, we have to impose more conditions, such as requiring that the CP curve have the maximal value of the geodesic curvature, which is difficult to handle in practice for a general tiling.
Those statements can be rephrased into the following propositions:
• If κ_c = a/2, then ρ_A has a flat ES for any choice of A;
• If κ_c < a/2, then ρ_A may have a non-flat ES for some choices of A;
• If κ_c ≥ b/(b−2), then ρ_A may have a flat ES for some choices of A;
• If κ_c < b/(b−2), then ρ_A has a non-flat ES for any choice of large A.
Now we intend to prove these propositions separately.
Based on the discussion in Subsection V B, we will prove the flatness of the ES by showing that any directed cut appearing in the process of the greedy algorithm is unprotected, so that the greedy algorithm will not stop until all the tensors are absorbed.
To prove that a directed cut in the process of the greedy algorithm is unprotected, one needs to find an unprotected tensor chain connected to the cut. Recall that the directed cuts in the greedy algorithm may have many disconnected components. We first prove a lemma for a cut containing the structure of twigs or loops, which will greatly simplify the rest of the proofs.

Lemma 1. Suppose κ_c ≥ b/(b−2). Then any directed cut C containing a twig or a loop is unprotected.
Proof. Denote the sequence of nodes [N_1, · · · , N_k] as L_N. We assume that all the nodes in L_N are distinct; otherwise we simply replace N_1 and N_k by any two nodes which are identical, and the following proof is still valid.
When k = 1, it is only possible that the connected component is a single node N_1; then the tensor chain 0 a on N_1 is connected to C. Since 0 a ∈ S_D, C is unprotected. When k = 2, the shape of L_N is a twig and N_1 is the endpoint of the twig.
The step tensor chain 1 a−1 on N_1 is connected to C, so C is unprotected. For instance, in Fig.19, N_27 = N_29 and the sequence L_N = [N_28, N_29] forms a twig; N_28 is the endpoint, and the M_s at N_28 is connected to the cut.
When k ≥ 3, the edges between N_i and N_{i+1}, together with the edge between N_1 and N_k in L_N, form a closed polyline; e.g., see the sequence [N_10, N_11, · · · , N_16] in Fig.19. We define the region enclosed by the polyline as Y, which consists of F elementary polygons, E edges and V nodes (vertices), satisfying Euler's formula V − E + F = 1. Let the reduced interior angle of Y at N_i be x_i for i ∈ {1, 2, · · · , k}. Combining Euler's formula with the counting relations among F, E and V yields a bound on the total reduced interior angle, and because x_k ≥ 1, the nodes [N_1, N_2, · · · , N_{k−1}] form a tensor chain M = * * · · · * n_1 n_2 · · · n_{k−1} connected to C.

[Caption of FIG. 26: The "directed sum" of two directed cuts C_A (purple) and C_Ā (red) consists of two directed closed curves C_I (green) and C_I′ (yellow). The region I corresponding to C_I is filled in green.]
From Theorem 9, M is unprotected and thus C is unprotected.
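The Euler relation V − E + F = 1, which holds for any simply connected planar region such as Y, can be sanity-checked combinatorially, e.g. on a strip of F b-gons glued successively along single edges (a hypothetical family, used here only as a check):

```python
def euler_strip(b, F):
    """V - E + F for a strip of F b-gons, each glued to the previous one
    along a single edge.  The region is simply connected, so the Euler
    characteristic should always be 1."""
    V = b + (F - 1) * (b - 2)   # each new polygon adds b-2 fresh vertices
    E = b + (F - 1) * (b - 1)   # each new polygon adds b-1 fresh edges
    return V - E + F

assert all(euler_strip(b, F) == 1 for b in (4, 7) for F in (1, 2, 5))
```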
In conclusion, when κ_c ≥ b/(b−2), any cut C containing twigs or loops must be unprotected.
1. κ_c = a/2 ⇒ flat ES

Obviously, κ_c = a/2 > b/(b−2). Lemma 1 is applicable to this case, and the branches forming twigs or loops in a cut will be absorbed by the greedy algorithm. Taking the cut in Fig.19 as an example, we claim that the nodes in {N_8, N_9, · · · , N_17}, {N_27, N_28, N_29} and {N_1, N_2, · · · , N_7} will be absorbed.
As a result, we can now focus on the case in which the cut C consists of a single connected component bounded by ∂A, denoted as C = [N_1, N_2, · · · , N_l]. Furthermore, the nodes in C are distinct.
For any choice of a single interval A on the boundary of a tensor network Ψ, we apply the greedy algorithm starting from A and from Ā simultaneously. So two cuts, C_A and C_Ā, appear in Ψ at the same time. We will prove by mathematical induction that when κ_c = a/2, at least one of these two cuts is unprotected until all the tensors are absorbed. Now we consider the configuration of C_A and C_Ā. Both of them are connected to ∂A.
Besides, they may overlap in some places, where their directions are opposite, as illustrated in Fig.26. We then define the "directed sum" of C_A and C_Ā as their union excluding the overlapping parts. The directed sum consists of one or more closed curves, as shown in Fig.26. Select one of them and denote it as C_I, which is a directed cut as well. Let the sequence of nodes corresponding to C_I be (N_1, N_2, · · · , N_k). We connect the nodes (N_1, N_2, · · · , N_k) with edges in order, enclosing a region I, which is a union of elementary polygons and edges. At node N_i, let the reduced outer angle of I be y_i and let the number of edges cut by C_I be n_i. Obviously, y_i = n_i + 1. The Gauss-Bonnet theorem then yields the relation (90). The edges cut by C_I are divided into two parts: one part is cut by C_A and the other by C_Ā. Without loss of generality, we suppose that C_A runs from N_1 to N_{u+1}.
Moreover, l_1 edges of N_1 are cut by C_A and l̄_{k+1} edges are cut by C_Ā, while for N_{u+1}, l_{u+1} edges are cut by C_A and l̄_{u+1} edges are cut by C_Ā. Obviously, l_1 + l̄_{k+1} = n_1 and l_{u+1} + l̄_{u+1} = n_{u+1}.
Then we know that the tensor chain M_A = * * · · · * l_1 l_2 · · · l_{u+1} is connected to C_A, and the tensor chain M_Ā = * * · · · * l̄_{u+1} l̄_{u+2} · · · l̄_k is connected to C_Ā. From (90) and Theorem 9, at least one of M_A and M_Ā is unprotected, so at least one of C_A and C_Ā is unprotected.
Thus the greedy algorithm will keep going until at least Area(H) = 0, which means the two cuts C_A and C_Ā overlap, so that all tensors are absorbed. Then the ES is flat.
2. κ_c < a/2 ⇒ ∃ non-flat ES

Next we prove that when κ_c < a/2, there exist non-flat ES for some choices of a single interval A on the boundary. Thanks to Theorem 9, when a is odd, we can choose A on the boundary such that the structure shown in Fig.27(a) is protected under the action of the greedy algorithm starting from either side. A tensor network with such a special choice of A is shown in Fig.27(c), where the structures enclosed by dashed red circles are protected and prevent the ES from being flat.
When a is even, one can similarly choose A such that the structure shown in Fig.27(b) is protected. Then the ES is non-flat.
We remark that such protected structures are common in tensor networks, especially when the network is large enough. So we argue that when κ_c < a/2, most choices of the interval A lead to a non-flat ES.
In the previous subsection we have learned that when κ_c < a/2, a non-flat ES is a common phenomenon. Nevertheless, we point out that when κ_c ≥ b/(b−2), it is possible to construct a single interval whose ES is flat.
Next we prove the existence of a flat ES by constructing a specific interval $A$ with a "minimal secant geodesic", obtained by the following steps (Fig.19). Start from the midpoint of an edge between two uncontracted edges on the boundary, then connect this point with the midpoint of the edge in the same polygon that is farthest from it. Next, pass to the neighboring polygon across this new midpoint and connect the midpoint with the farthest midpoint in that polygon. Repeat these steps until the trajectory reaches the boundary of the network. The trajectory forms a geodesic called the minimal secant geodesic, denoted by $G_m$. Note that for a polygon with an odd number of edges, there are two midpoints farthest from the specified midpoint, one to the left and one to the right, as shown in Fig.28. In the above steps we choose these two midpoints alternately, as shown in Fig.19. A minimal secant geodesic $G_m$ divides the boundary of the network into two parts $A$ and $\bar{A}$ of almost the same size. We will show that for such a division the corresponding ES is flat, by proving that the greedy algorithm starting from either $A$ or $\bar{A}$ does not stop until the sequence of cuts reaches $G_m$. The proofs for $A$ and $\bar{A}$ are parallel, so we only prove the case of $A$.
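The alternating farthest-midpoint rule can be sketched as a small toy routine. This is only an illustration of the combinatorial choice made in each polygon: the zero-based indexing of edges from the entry edge and the strict left/right alternation are our assumptions for the sketch, not the paper's formal definition of $G_m$.

```python
def farthest_edges(b):
    """Indices of the edge(s) whose midpoint is farthest from the midpoint
    of the entry edge (index 0) in a b-gon: one edge for even b, two
    candidates (left/right) for odd b."""
    return [b // 2] if b % 2 == 0 else [b // 2, b // 2 + 1]

def secant_path(b, steps):
    """Walk through `steps` polygons of a {b, a} tiling; for odd b,
    alternate between the two farthest midpoints, as the construction
    of the minimal secant geodesic prescribes."""
    path = []
    for i in range(steps):
        choices = farthest_edges(b)
        path.append(choices[i % len(choices)])
    return path

print(secant_path(5, 4))  # odd b: alternates the two farthest edges -> [2, 3, 2, 3]
print(secant_path(6, 3))  # even b: unique farthest edge each step  -> [3, 3, 3]
```

The alternation for odd $b$ is what keeps the discrete trajectory from drifting to one side, so that it approximates a geodesic cutting the network into two nearly equal halves.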
Similarly, thanks to Lemma 1, we focus on the case where the cut $C$ is singly connected and attached to $\partial A$; it is denoted as $C = [N_1, N_2, \cdots, N_l]$ with distinct nodes.
We give $G_m$ a direction such that $\Phi$ is on its right-hand side. Then $G_m$ becomes a directed cut, denoted as $[N_1, N_2, \cdots, N_m]$. By definition, these nodes are distinct.
When $C$ and $G_m$ do not overlap, the edges connecting the nodes of $C$ and $G_m$ enclose at least one polygon. In general, they may enclose one or more polygons, as illustrated in Fig.19.
We pick any one of them and label it as $Y$. Let the set of nodes on the boundary of $Y$ be the union of $[N_{p+1}, N_{p+2}, \cdots, N_{p+u}]$ in $C$ and $[N_{q+1}, N_{q+2}, \cdots, N_{q+v}]$ in $G_m$, where $N_{p+1}$ and $N_{q+1}$ are neighbors, as are $N_{p+u}$ and $N_{q+v}$. We naturally have $u \ge 2$ after excluding the cases in Lemma 1. Denote the reduced interior angle of $Y$ at $N_{p+i}$ by $x_i$ for $i \in \{1, 2, \cdots, u\}$ and the reduced interior angle of $Y$ at $N_{q+j}$ by $\bar{x}_j$ for $j \in \{1, 2, \cdots, v\}$. Similar to relation (86) in the proof of Lemma 1, for $Y$ we have (94). Suppose that the part $[N_{q+1}, N_{q+2}, \cdots, N_{q+v}]$ crosses $w$ elementary polygons. Due to the special construction of $G_m$, we have a relation among the $\bar{x}_j$ and $w$; plugging it into (94), we obtain (97). On $[N_{p+1}, N_{p+2}, \cdots, N_{p+u}]$, the tensor chain $M = \left(\begin{smallmatrix} * & * & \cdots & * \\ n_1 & n_2 & \cdots & n_u \end{smallmatrix}\right)$ is connected to $C$, where $n_i = a - 1 - x_i$ for $i \in \{1, 2, \cdots, u\}$. From (97) and Theorem 9, we know $M$ is unprotected, thus $C$ is unprotected.
In conclusion, whenever $C \neq G_m$, $C$ is unprotected and the greedy algorithm progresses, so the tensors between $A$ and $G_m$ are absorbed. In parallel, the tensors between $\bar{A}$ and $G_m$ are absorbed under the greedy algorithm starting from $\bar{A}$. Finally, the sequence of cuts reaches $G_m$, leading to a flat ES.
Here we prove that when $\kappa_c < \frac{b}{b-2}$, the ES of a single large interval $A$ is non-flat. This argument is perhaps the most important part of this section, because it supplies a quantitative criterion for judging whether a tensor network has a non-flat ES.
Consider a single interval $A$ and its complement $\bar{A}$ on the boundary of a given tensor network. There exists a continuous line, called $G$, connecting the two endpoints of $A$ and cutting a minimal number of edges of the network. The line $G$ divides the whole network into two sub-networks (see Fig.29).
Note that the nearest-neighboring tensors of the line $G$ form two tensor chains, which we call $M_A$ and $M_{\bar{A}}$, respectively. As an example, the skeletons of these two tensor chains are marked in Fig.29. We set all the indices associated with the edges cut by $G$ as upper indices, while the other indices are lower indices.
Assume that $M_A$ has $k_A$ nodes and $M_{\bar{A}}$ has $k_{\bar{A}}$ nodes, and let the number of elementary polygons crossed by the line $G$ be $F$. Then we obtain the two equations (98). Now we provide a proof by contradiction. Assume that the ES were flat; then $M_A, M_{\bar{A}} \in S_D$, and according to Theorem 7 we have (99). Substituting (99) into (98) gives the inequality (100). To simulate real AdS spacetime, the number of layers in a network is expected to be large; then for a large interval $A$, $F \gg 1$. Since $\frac{2\kappa_c}{b-(b-2)\kappa_c}$ is a finite number, we obtain (101). From (100) and (101), we have $\kappa_c \ge \frac{b}{b-2}$, contradicting the initial assumption. Thus, when $\kappa_c < \frac{b}{b-2}$, the ES of a single large interval $A$ must be non-flat in a network with many layers.

VIII. CONCLUSIONS AND OUTLOOKS
In this paper we have presented a general framework for tensor networks with tensor constraints based on the tiling of $H^2$ space. A notion of critical protection based on the tensor chain has been proposed to describe the behavior of tensor networks under the action of the greedy algorithm. In particular, a criterion has been developed with the help of the average reduced interior angle of the CP chain, such that for a given tensor network the ability of QEC and the flatness of the ES can be judged in a quantitative manner. We have also presented a number of examples of tensor networks and discussed their QEC and ES properties. In general, the stronger the QEC ability of a tensor network, the more easily its ES becomes flat, and vice versa. By contrast, it is fascinating to notice that AdS spacetime is endowed with these two holographic features in perfect balance. Currently it is still challenging to construct tensor networks that capture all the holographic features of AdS spacetime; what we have found in this paper may shed light on this issue. Firstly, we have learned that the notion of critical protection describes the limit of information transmission with full fidelity. In the case that the CP curve $H_c$ is a circle, i.e. $\lambda_c L^2 > 1$, the information in the interior of $H_c$ can be transmitted to its surface without loss, where we have restored the AdS radius $L$. For a circle $H$ larger than $H_c$, however, its interior information cannot be transmitted to its surface without loss. So we can say that $H_c$ is the maximal boundary which can holographically store the interior information [34,35]. Thus, a tensor network which captures the QEC feature of AdS space must not contain circular CP curves, which requires $\lambda_c L^2 \le 1$.
Furthermore, if we intend to construct a single tensor network exhibiting both QEC and a non-flat ES, tensor networks with $\kappa_c \in (\kappa_h, \kappa_1)$ seem most likely to achieve this goal.
Next we address some open issues that should be crucial for exploring the role of tensor networks with constraints in the holographic approach. Firstly, because of the chain structure of the tensor constraints, in our present framework we have investigated QEC and ES only for a single interval on the boundary, similar to the setup for hyperinvariant tensor networks in [25].
It is an open question whether QEC can be realized for multi-intervals on the boundary, as investigated in networks with perfect tensors or random tensors [18,19,22]. Actually, our preliminary investigation reveals that if the number of intervals is large enough, it would be very hard to realize QEC with a non-flat ES for multi-intervals, because it involves constructing tensor constraints on scales as large as the entanglement wedge of the multi-intervals, which is rather complicated. We leave this issue for further investigation.
Secondly, in order to simulate AdS space, it is desirable to send the number of layers of the tensor network to infinity, so the area of its boundary goes to infinity as well. In this limit, the treatment of the boundary effect of tensor constraints is subtle. When the CP curve is a hypercircle with $\lambda_c L^2 < 1$, it has a constant distance $d_c = L\,\mathrm{arctanh}(\lambda_c L^2)$ to the corresponding geodesic. The CP curve is unprotected once the boundary effect is considered, and the boundary effect scales as $d_c$, which is independent of the number of layers. When $d_c/L$ is small, the boundary effect becomes negligible in this limit compared with the infinite area of the boundary. However, when $d_c/L$ is very large, such as $\kappa_c \to \kappa_h + 0$, the boundary effect cannot be neglected.
Finally, we are concerned with the issue of how to reproduce the Cardy–Calabrese formula of Rényi entropy (1) in the framework of tensor networks. It is known that the Rényi entropy depends not only on the tiling and tensor constraints, but also on the matrix elements of the tensors, such as the elements of the tensors $U$ and $Q$ in Appendix B. In addition, we are interested in a possible relation between the CP curve and the gravity dual of Rényi entropy. In [36], the $n$th-order holographic Rényi entropy can be calculated from the area of a cosmic brane$_n$ with tension $T_n$, namely
$$n^2 \partial_n \left( \frac{n-1}{n} S_n \right) = \frac{\mathrm{Area}(\mathrm{Cosmic\ Brane}_n)}{4 G_N}.$$
The cosmic brane$_n$ backreacts on the geometry at order $T_n G_N$, where $G_N$ is the Newton constant. However, if we simply set $T_n G_N \to 0$, all the cosmic branes become probe branes. Then, for a given subsystem on the boundary, those cosmic branes would have the same area and a flat entanglement spectrum appears. According to Subsection VII B of our paper, when $d_c/L$ is large the entanglement spectrum becomes non-flat, while when $d_c/L$ is small it becomes flat. It would be interesting to explore the possible relation between $T_n G_N$ and $d_c/L$ in the light of this observation.
We define a new coordinate $\zeta = x + iz$ to rewrite the metric as (A2). The isometry group of $H^2$ geometry is $SL(2,\mathbb{R})$, which means the form of the metric is unchanged under the coordinate transformation $\zeta \to \frac{\alpha \zeta + \beta}{\gamma \zeta + \delta}$, where the real parameters $\alpha, \beta, \gamma, \delta$ satisfy $\alpha\delta - \beta\gamma = 1$.
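As a quick numerical check of this isometry (an illustrative sketch; the sample points and matrix entries are arbitrary), one can verify that a real Möbius map with unit determinant preserves the geodesic distance of the upper half-plane metric $ds^2 = (dx^2 + dz^2)/z^2$:

```python
import math

def hyp_dist(p, q):
    # geodesic distance between two points in the upper half-plane model
    (x1, z1), (x2, z2) = p, q
    return math.acosh(1 + ((x1 - x2)**2 + (z1 - z2)**2) / (2 * z1 * z2))

def mobius(p, a, b, c, d):
    # zeta -> (a*zeta + b)/(c*zeta + d) with real a, b, c, d and ad - bc = 1
    w = (a * complex(*p) + b) / (c * complex(*p) + d)
    return (w.real, w.imag)

p, q = (0.3, 1.2), (-1.1, 0.4)        # two arbitrary points with z > 0
a, b, c, d = 2.0, 1.0, 3.0, 2.0       # ad - bc = 4 - 3 = 1
p2, q2 = mobius(p, a, b, c, d), mobius(q, a, b, c, d)
assert abs(hyp_dist(p, q) - hyp_dist(p2, q2)) < 1e-9  # distance preserved
```

Since the coefficients are real with positive determinant, the map also preserves the upper half-plane itself, i.e. the images keep $z > 0$.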

Curves of constant curvature
One key notion that we have frequently used in this paper is the curve of constant curvature (CCC) in $H^2$ space. The geodesic curvature of a curve with an affine parameter $s$ is defined in the standard way; curves with $\lambda^\mu = 0$ are geodesics in $H^2$ space. The geodesic distance between any two points with coordinates $(x_1, z_1)$ and $(x_2, z_2)$ is
$$d = \operatorname{arccosh}\left( 1 + \frac{(x_1 - x_2)^2 + (z_1 - z_2)^2}{2 z_1 z_2} \right).$$
There are three kinds of CCC in $H^2$ space, namely the circle, the horocircle and the hypercircle, as illustrated in Fig.30.
A circle is a curve whose geodesic distance to a given point (the center of the circle) is a constant $r$. The geodesic curvature of a circle with radius $r$ is $\lambda = \coth(r)$.
A horocircle (or horocycle) is a curve whose normal geodesics all converge asymptotically to its center in the same direction, so it is also called a limit circle. The geodesic curvature of a horocircle is equal to 1.
A hypercircle (or hypercycle) is a curve whose points all have the same orthogonal distance $d$ from a given geodesic, so it is also called an equidistant curve. The corresponding geodesic is called its axis. The geodesic curvature of a hypercircle is $\lambda = \tanh(d)$.
Of course, a geodesic is a hypercircle with $d = 0$.
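These three curvature values can be sanity-checked numerically (an illustrative snippet; the sample radius and distance are arbitrary): the circle always has $\lambda > 1$, the horocircle sits exactly at $\lambda = 1$, and the hypercircle has $\lambda < 1$, with both families approaching the horocircle value in the large-$r$ and large-$d$ limits.

```python
import math

r, d = 0.7, 0.7                            # sample radius / orthogonal distance
lam_circle = math.cosh(r) / math.sinh(r)   # coth(r) > 1
lam_horo = 1.0                             # horocircle
lam_hyper = math.tanh(d)                   # tanh(d) < 1
assert lam_hyper < lam_horo < lam_circle   # strict ordering of the three CCCs

# both coth(r) and tanh(d) -> 1 as r, d -> infinity (horocircle as limit curve)
assert abs(math.cosh(20.0) / math.sinh(20.0) - 1.0) < 1e-12
assert abs(math.tanh(20.0) - 1.0) < 1e-12

# a geodesic is the d = 0 hypercircle: lambda = tanh(0) = 0
assert math.tanh(0.0) == 0.0
```

This ordering is what underlies the classification of CP curves in the body of the paper: which of the three families the CP curve falls into is read off from whether its curvature is above, at, or below the horocircle value.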