Tensor chain and constraints in tensor networks

We develop our recent work on quantum error correction (QEC) and the entanglement spectrum (ES) in tensor networks (arXiv:1806.05007). We propose a general framework for planar tensor network states with tensor constraints as a model for the AdS3/CFT2 correspondence, which can be viewed as a generalization of the hyperinvariant tensor networks recently proposed by Evenbly. We elaborate our proposal on tensor chains in a tensor network obtained by tiling H2 space and provide a diagrammatical description of general multi-tensor constraints in terms of tensor chains, which leads to a generalized greedy algorithm. The behavior of tensor chains under the action of the greedy algorithm is investigated in detail. In particular, for a given set of tensor constraints, a critically protected (CP) tensor chain can be identified and evaluated by its average reduced interior angle. We classify tensor networks according to their capability for QEC and the flatness of their ES. A corresponding geometric description of critical protection over the hyperbolic space is also given.


Introduction
Tensor networks, as a powerful tool for building the ground states of many-body systems, have been intensively investigated in recent years [1]. One remarkable feature of tensor network states is the intuitive description of quantum entanglement among local degrees of freedom. For a subsystem composed of some un-contracted edges in a tensor network, its entanglement entropy is bounded by the minimal cut disconnecting this subsystem from its complement. This scenario can be viewed as a discretized description of the Ryu-Takayanagi (RT) formula in the holographic approach [2]. Inspired by this, it has been found that a holographic space can emerge from the entanglement renormalization of a many-body system [3,4]. It has further been conjectured in [5] and [6] that the classical connectivity of spacetime arises from entangling the degrees of freedom in two components. As a bridge between quantum entanglement and the structure of spacetime, tensor networks have been providing a practical framework for exploring the emergence of spacetime in the context of gauge/gravity duality [7,8].
Another property of entanglement enjoyed by holographic duality is quantum error correction (QEC) [9]. Based on subregion duality, operators in the bulk can be reconstructed from operators supported on a subregion of the boundary [10-13]. In other words, there are subspaces of the bulk Hilbert space which can still be reconstructed even if some amount of information on the boundary is erased [14-17]. Great progress has also been made in realizing QEC by virtue of tensor networks [15,16,18-21]. In this framework, subregion duality is reflected by the isometry between two sub Hilbert spaces associated with sub tensor networks.
Currently it remains a key issue whether tensor networks, or what kind of tensor networks, could reproduce all the aspects of holography in the context of the AdS/CFT correspondence. Taking AdS3/CFT2 as an example, we pick out some important properties that such a tensor network is desired to possess.
• Such a tensor network is a discretization of 2-dimensional hyperbolic space (H2 space), which is a time slice of AdS3 spacetime in the global coordinate system. Correspondingly, the tensor network is endowed with a symmetry described by a discrete subgroup of SL(2, R), the isometry group of H2 space.

JHEP06(2019)032
• Such a tensor network respects the RT formula and its entanglement entropy is characterized by a logarithmic law. Moreover, the entanglement spectrum (ES) of the ground state should be non-flat, such that one can reproduce the Cardy-Calabrese formula for the Renyi entropy of a CFT2 with large central charge c, namely [27,28]
S_n(A) = (c/6) (1 + 1/n) log l_A,
where A is a spatial interval on the boundary and l_A is its length in units of the UV cutoff.
• Such a tensor network has the function of QEC, as AdS spacetime does.
• Such a tensor network can reproduce the behavior of Green's functions in AdS3/CFT2.
Of course, these properties may not all be independent of one another. One candidate for capturing the above holographic features of AdS is the hyperinvariant tensor network, recently proposed by Evenbly in [25]. It is composed of identical polygons uniformly tiling hyperbolic space. The key idea is to impose constraints on products of multiple tensors so that they form isometric mappings. It turns out that this sort of network may combine the advantages of the multiscale entanglement renormalization ansatz (MERA) [1,3,4,29,30], which is characterized by a non-flat ES, and networks composed of perfect tensors [18,22,31], which are usually endowed with the function of QEC.
But one key issue arises in this approach: what kind of multi-tensor constraints could endow a given tensor network with such features? Or, more quantitatively, is there any criterion to judge the capability for QEC and the non-flatness of the ES for a given tensor network with multi-tensor constraints? In [32] we provided affirmative answers to these issues with the proposal of critical protection on tensor chains. In this paper we elaborate our proposal and present a detailed analysis of tensor chains and constraints in tensor networks. More importantly, we will develop the notion of a prime tensor chain and then prove a uniqueness theorem for prime tensor chains, such that any general set of tensor constraints is logically equivalent to a unique central set containing only two elements. With the power of this key ingredient, we will classify tensor networks with multi-tensor constraints based on their features of QEC and ES.
We organize the paper as follows. In section 2 we propose a generalized framework for tensor networks with multi-tensor constraints in the tiling of H2 space. To classify different types of multi-tensor constraints efficiently and to describe the behavior of tensor contractions during the evaluation of the ES, we introduce the notion of a tensor chain to describe the contraction of tensor products. Moreover, we introduce a quantity, called the average reduced interior angle, to characterize the geometric structure of a CP chain. Based on this structure we introduce the concept of critical protection in section 3, which should be viewed as the core concept of this paper, because it plays an essential role in measuring the quality of QEC as well as the non-flatness of the ES in a quantitative manner. As a first consequence, we will immediately see that once the ES becomes non-flat under multi-tensor constraints as proposed in [25], the ability of QEC from bulk to boundary has to be weakened. Among this sort of network, we find that perfect tensor networks, as the limiting case, have the strongest ability of QEC, while they are always accompanied by a flat ES. Therefore, in order to construct tensor networks with a non-flat ES, as in AdS spacetime, one has to pay the price of sacrificing some ability of QEC. All the above investigation is based on tensor networks embedded into H2 space, which can be viewed as a discretization of the hyperbolic geometry. Correspondingly, we may also describe QEC and the ES over the geometry of H2 directly, which involves the notions of geodesics and curves of constant curvature, etc. We present the description based on H2 geometry in section 4; the relevant background is given in appendix A.
Furthermore, to intuitively understand the role of critical protection in the evaluation of QEC and the ES, in section 5 we present some specific examples of tensor networks and demonstrate how the realization of QEC is reflected by the structure of the CP tensor chain, and how the flatness of the ES is reflected by the region of critical protection. Moreover, we develop a generalized description of the greedy algorithm by imposing multi-tensor constraints on tensor chains. After that, we classify tensor networks with constraints by their properties of QEC and ES. We first study the relation between CP and QEC in section 6, presenting a criterion for the existence of QEC, and then focus on the relation between CP and ES in section 7, with detailed proofs of the propositions on various bounds for the flatness of the ES. In section 8, we generalize the framework to a kind of tensor network with a tiling of asymptotic H2 space. Section 9 contains the conclusion and outlook.

Tensor chains in a tensor network
In this section we present a general framework for tensor networks based on the tiling of hyperbolic space. We define the notion of a tensor chain, whose skeleton forms a polyline in a network. Associated with each tensor chain, a reduced interior angle can be defined, which in some sense can be viewed as a discrete description of the curvature of the polyline.

Tiling of H2 space
In the global coordinate system of AdS3 spacetime, the isochronous surface is an H2 space,
ds^2 = L^2 (dρ^2 + cosh^2 ρ dτ^2), (2.1)
where L is the radius of the H2 geometry, the only dimensionful quantity introduced in this paper; we are thus free to set L = 1. First, we intend to discretize H2 space in a uniform manner, which can be realized by tiling H2 space with identical polygons. Consider many identical polygons composed of b edges in a 2-dimensional surface, and put them together by gluing their edges such that a edges share the same node. We call such a discretization the {b, a} tiling of H2 space. In a space with negative curvature, because the sum of the interior angles of a triangle is less than π, one can realize a {b, a} tiling of H2 space only if
1/a + 1/b < 1/2. (2.2)
Obviously, a ≥ 3, b ≥ 3.

When a tiling of H2 is specified by {b, a}, the geometry is determined (up to the radius L). We call the polygon with b edges the elementary polygon, and the union of several elementary polygons a composite polygon. The length of each edge of the elementary polygon is
length_P = 2 arccosh[cos(π/b)/sin(π/a)]. (2.3)
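To make the tiling condition and the edge length concrete, here is a small numerical sketch; the function names are ours, and the edge-length expression is the standard formula for a regular {b, a} tessellation of the hyperbolic plane in units L = 1.

```python
import math

def is_hyperbolic_tiling(b: int, a: int) -> bool:
    """A {b, a} tiling (b-gons, a of them at each node) fits in H2
    iff the vertex angles 2*pi/a are small enough: 1/a + 1/b < 1/2."""
    return a >= 3 and b >= 3 and 1 / a + 1 / b < 0.5

def edge_length(b: int, a: int) -> float:
    """Edge of the elementary b-gon (units L = 1), from the right triangle
    (polygon center, node, edge midpoint): cosh(l/2) = cos(pi/b)/sin(pi/a)."""
    return 2.0 * math.acosh(math.cos(math.pi / b) / math.sin(math.pi / a))

# The two tilings used as examples in the text:
for b, a in [(7, 3), (4, 5)]:
    print(b, a, is_hyperbolic_tiling(b, a), round(edge_length(b, a), 4))
```

Note that in the Euclidean limit 1/a + 1/b → 1/2 the edge length goes to zero, reflecting the fact that the curvature radius L sets the only scale.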

Tensor networks with {b, a} tiling
Now we construct a tensor network based on a {b, a} tiling. Associated with each node, we assign a rank-a tensor T, each index of which is associated with one of the edges joined at the node. The elements of the tensor T are denoted as T_{i_1 i_2 ··· i_a}, where all indexes have the same dimension d with d > 1. Associated with each edge, we also assign a rank-2 tensor E, whose elements are E_{i_1 i_2}. We call the above indexes associated with the tensors T and E basic indexes, which are labelled by lowercase letters. As examples, two tensor networks with {7, 3} tiling and {4, 5} tiling are illustrated in figure 1(a). Because of the rotational invariance of H2 space, we further demand that the indexes of the tensors T and E have cyclic symmetry. In this paper, we adopt the convention that an index of a tensor T can be lowered by contracting it with a tensor E. Correspondingly, the edge connecting two nodes represents the contraction of the indexes of two tensors T through a tensor E. Therefore, given a tensor network with {b, a} tiling, we can define a quantum state Ψ consisting of the two sorts of tensors T and E via tensor products and contractions. For later convenience, we require that for a tensor network state Ψ, all the indexes of the tensors T (with full upper indexes) be contracted, so that un-contracted indexes belong only to tensors E, as shown in figure 1(a). A tensor network Ψ defines a state |Ψ⟩ in the Hilbert space on the un-contracted edges. In this paper, we will investigate the algorithms of QEC and the entanglement of Ψ by manipulating tensor networks.
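As an illustration of the contraction structure described above (not the actual tensors of the paper, which must additionally solve the constraints of section 3), the following sketch builds a cyclically symmetric rank-a tensor T and contracts two copies through an edge tensor E; the values a = 3 and d = 2 are chosen only for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
d, a = 2, 3                        # bond dimension and node rank (illustrative)

# A rank-a tensor T at each node; cyclic symmetry of the indexes is imposed
# by averaging T over all cyclic permutations of its axes.
T = rng.normal(size=(d,) * a)
T = sum(np.transpose(T, np.roll(np.arange(a), k)) for k in range(a)) / a

E = rng.normal(size=(d, d))        # rank-2 tensor on each edge

# The edge connecting two nodes contracts the two node tensors through E:
# (T)_{i j k} E_{k l} (T)_{l m n} -> a rank-4 tensor on the remaining edges.
pair = np.einsum('ijk,kl,lmn->ijmn', T, E, T)
print(pair.shape)
```

Gluing further nodes in this way, edge by edge, produces the full network state Ψ on the un-contracted edges.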

Tensor chain
Given a tensor network from the {b, a} tiling, we introduce the notion of a tensor chain to depict the product structure of multiple tensors with index contractions, which will be convenient for imposing tensor constraints and quantitatively describing their geometric properties. First, in order to define a tensor chain in an efficient way, we adopt a compact form to denote a single tensor T of rank a subject to the rotational symmetry. We divide all its indexes into four groups in cyclic order and label each group with an abstract index, which is called a collected index and labelled by a capital letter. The components of T can then generally be written as T_{ABCD}. For instance, for a tensor T_{i_1 i_2 ··· i_5} with 5 indexes, we may collect them as i_1 i_2 = A, i_3 = B, i_4 = C, i_5 = D, while the collection i_1 i_2 = A, i_3 = C, i_4 = B, i_5 = D is prohibited, since it does not respect the cyclic order. Furthermore, collected indexes may also be lowered by tensors E. We define #(A) as the number of basic indexes in a collected index A; in the above collection, #(A) = 2 and #(B) = 1. Now we can construct a tensor chain M by contracting k tensors T with tensors E, where A = A_1 A_2 ··· A_k and B = B_1 B_2 ··· B_k are un-contracted indexes, while C_1, C_2, ··· , C_k are the indexes which are contracted along the chain. Moreover, since each tensor T occupies a node in the tiling, we also call k the number of nodes in the tensor chain M. The index i in (2.8) runs from 1 to k, counting the nodes in M. Alternatively,

a tensor chain M can be viewed as a mapping from the Hilbert space on un-contracted indexes A to the Hilbert space on un-contracted indexes B.
Obviously, in a {b, a} tiling any two nodes can be connected by at most a single edge; otherwise they are not directly connected. Thus, here we only consider tensor chains with #(C_i) = 1 for i = 2, 3, ··· , k. Furthermore, if #(C_1) = #(C_{k+1}) = 1, we call M a closed tensor chain; if #(C_1) = #(C_k) = 0, we call M an open tensor chain. Two typical samples of tensor chains are illustrated in figure 2.
Since a diagram of a tensor chain can be specified by the numbers of un-contracted edges in the tensor product, we propose the notation ⟨m_1 m_2 ··· m_k | n_1 n_2 ··· n_k⟩ to denote an open tensor chain M, where m_i (n_i) is the number of un-contracted upper (lower) basic indexes at the i-th node. Similarly, we use the analogous notation to denote a closed tensor chain, for which the relation (2.10) between m_i and n_i holds instead of (2.9). Since one can reconstruct the m_i from the n_i or vice versa according to (2.9) and (2.10), sometimes for convenience we abbreviate either the m_i or the n_i to *; for instance, ⟨* * ··· * | n_1 n_2 ··· n_k⟩ ≡ ⟨m_1 m_2 ··· m_k | * * ··· *⟩ ≡ ⟨m_1 m_2 ··· m_k | n_1 n_2 ··· n_k⟩, with the m_i and n_i related by (2.9) for open chains and by (2.10) for closed chains.
The structure of the tensor chain as given in (2.8) is quite general. Given a tensor chain, one may build other tensor chains by splitting it or lowering its indexes. For example, in the {4, 5} tiling, given the tensor chain ⟨2 1 2 | 2 2 2⟩, one can split it into ⟨2 1 | 2 3⟩ and ⟨3 | 2⟩, or lower some of its indexes.

The reduced interior angle
When a tensor chain is embedded into a tensor network in H2 space, its skeleton can be concisely marked by a directed polyline, as shown in figure 3. Unless otherwise specified, we mean "directed polyline" when saying "polyline". Along the direction of the polyline, we require that the sequence number of the nodes increases and that the edges on the left (right) hand side of the polyline are always associated with the upper (lower) indexes of the tensor chain. For a closed tensor chain, the direction of the closed polyline is conventionally specified to be anticlockwise, so the inward or left-handed (outward or right-handed) edges of the polyline are associated with the upper (lower) indexes of the tensor chain. We remark that a closed polyline with clockwise direction can be analyzed in parallel, with the requirement that its inward (outward) edges are associated with the lower (upper) indexes. Note that a tensor chain and a polyline are fundamentally different objects; their relation can be established only through an embedding. A tensor chain with different embeddings picks out different polylines, which have distinct positions and are linked by diffeomorphisms. The invariants of the embedding are length and curvature. The curvature of a polyline at the i-th node can be captured by its interior angle θ_i, which is defined as the angle on the left hand side of the polyline and is a multiple of 2π/a. We further define the reduced interior angle as s_i = θ_i/(2π/a), which is an integer. Obviously, the reduced interior

angle is related to the number of upper edges at each node and we intend to give the following definition.
Definition 2. For a closed tensor chain M = ⟨m_1 m_2 ··· m_k | * * ··· *⟩, the reduced interior angle of the i-th tensor is s_i = m_i + 1 = a − 1 − n_i. For an open tensor chain M = ⟨m_1 m_2 ··· m_k | * * ··· *⟩, the reduced interior angles are defined analogously, with modified values at the two end nodes. For later convenience, we further introduce several quantities based on the reduced interior angles to evaluate the curvature of a tensor chain M. In particular, we define the prime tensor chain, which is the core notion for the construction of the algebra of tensor constraints in the next subsection.
Definition 3. Given a tensor chain M with k nodes, we define its average reduced interior angle κ(M) as
κ(M) = (1/k) Σ_{i=1}^{k} s_i.
The sub reduced interior angle κ_{p,q}(M) from the p-th tensor to the q-th tensor is defined as
κ_{p,q}(M) = (1/(q − p + 1)) Σ_{i=p}^{q} s_i.
The maximal reduced interior angle κ_max(M) is defined as
κ_max(M) = max_{1≤p≤q≤k} κ_{p,q}(M).
Based on the above definitions, we can easily prove the following two lemmas and establish the uniqueness theorem of the prime tensor chain.
Theorem 3. For a given rational number κ, the prime tensor chain M with κ(M) = κ exists and is unique. Furthermore, if κ = u/v where u and v are coprime integers, then M = ⟨m_1 m_2 ··· m_v | n_1 n_2 ··· n_v⟩, with the m_i given by (2.14) and the reduced interior angles of M given by (2.15).
Proof. Set a prime tensor chain M = ⟨m_1 m_2 ··· m_k | n_1 n_2 ··· n_k⟩ satisfying κ(M) = κ. Then, for l = 1, 2, ··· , k − 1, and in particular for l = 1,

In summary, we have proved (2.15). This leads to the conclusion that the conditions in (2.9) can always be satisfied if we require (2.14), so the prime tensor chain exists and is unique.
Proof. There exist p, q such that κ_{p,q}(M) = κ_max(M) and κ_{p′,q′}(M) < κ_max(M) for all p′, q′ satisfying p < p′ ≤ q′ ≤ q or p ≤ p′ ≤ q′ < q. Then M′ = M_{p,q} is prime, with κ(M′) = κ_{p,q}(M) = κ_max(M). By Theorem 3, M′ is unique.
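The averaged quantities of Definition 3 can be computed directly once the reduced interior angles s_1, ..., s_k of a chain are given as a list. The sketch below (with illustrative function names) takes κ as their mean, κ_{p,q} as the mean over a contiguous sub-chain, and κ_max as the maximum of κ_{p,q} over all sub-chains, using exact rational arithmetic.

```python
from fractions import Fraction

def kappa(s):
    """Average reduced interior angle of a chain with angles s_1 .. s_k."""
    return Fraction(sum(s), len(s))

def kappa_pq(s, p, q):
    """Sub reduced interior angle from the p-th to the q-th tensor (1-based)."""
    return Fraction(sum(s[p - 1:q]), q - p + 1)

def kappa_max(s):
    """Maximal reduced interior angle: max of kappa_pq over all sub-chains."""
    k = len(s)
    return max(kappa_pq(s, p, q) for p in range(1, k + 1) for q in range(p, k + 1))

angles = [2, 1, 2, 2]              # an illustrative list of reduced interior angles
print(kappa(angles), kappa_max(angles))
```

Exact fractions matter here because chains are later classified by rational values of κ, where floating-point comparison would be unreliable.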

Tensor constraints and Critically Protected (CP) tensor chains
In this section we propose the notion of critical protection to describe the behavior of tensor networks under contractions of tensor products which are subject to tensor constraints.

Tensor constraint
The notion of a tensor chain provides us with a convenient way to describe a general constraint on the products of the tensors T and E, which plays an essential role in pushing operators through nodes or edges of the network in the context of QEC. Usually we impose the constraint by requiring that some contraction of tensors be proportional to an isometry. Of course, a contraction of tensors may or may not form a tensor chain; here, for simplicity, we only consider tensor constraints imposed on tensor chains, which can be concisely written as in (3.1), where M = ⟨m_1 m_2 ··· m_k | n_1 n_2 ··· n_k⟩ is an open tensor chain with n_i the number of edges that are contracted with its conjugate tensor at the i-th node, as illustrated in figure 4. Notice that the contraction over the two lower indexes B in (3.1) involves #(B) contractions of the form Σ_j E_{ij}(E_{jk})^*. For convenience, in the remainder of this paper, when we speak of a tensor constraint M, we actually refer to the constraint in terms of the tensor chain M subject to (3.1). Obviously, a non-trivial constraint M requires Σ_{i=1}^{k} m_i ≥ 1. Moreover, an isometry can be realized only if the number of degrees of freedom in A is less than or equal to that in B; thus, any non-trivial tensor constraint M should satisfy
Σ_{i=1}^{k} m_i ≤ Σ_{i=1}^{k} n_i. (3.3)
One may immediately find that, for a set of tensor constraints, some of them may not be logically independent. In general, there are four fundamental operations to derive new constraints from given tensor constraints, listed as follows.
Reversal. If ⟨m_1 m_2 ··· m_k | n_1 n_2 ··· n_k⟩ is a tensor constraint, then ⟨m_k m_{k−1} ··· m_1 | n_k n_{k−1} ··· n_1⟩ is a tensor constraint as well.
As an example, we demonstrate the derivation of new constraints in figure 5 and figure 6.
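The content of a tensor constraint, namely proportionality to an isometry as in (3.1), can be tested numerically by reshaping the chain into a matrix W from the A-space into the B-space and checking W†W ∝ 1. The sketch below does this for a random isometry built from a QR decomposition; the helper name and dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
d, m, n = 2, 2, 3      # bond dimension, #(A) and #(B) basic indexes (illustrative)

# A random isometry from the d**m-dimensional A-space into the d**n-dimensional
# B-space: the columns returned by a (reduced) QR decomposition are orthonormal.
W, _ = np.linalg.qr(rng.normal(size=(d**n, d**m)))

def proportional_to_isometry(W, tol=1e-10):
    """Check that W^dagger W = c * identity for some constant c, as in (3.1)."""
    gram = W.conj().T @ W
    c = np.trace(gram) / gram.shape[0]
    return np.allclose(gram, c * np.eye(gram.shape[0]), atol=tol)

print(proportional_to_isometry(W))
```

The same check, applied after reshaping a multi-tensor contraction into a matrix, is how one would verify candidate solutions of a constraint set in practice.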
We remark that the strength of a tensor constraint M can be quantified by its maximal reduced interior angle κ_max(M). By comparing the κ_max of newly derived constraints with those of the original constraints, we have the following theorem, where M = ⟨m_1 m_2 ··· m_k | n_1 n_2 ··· n_k⟩ and M′ = ⟨m′_1 m′_2 ··· m′_l | n′_1 n′_2 ··· n′_l⟩. As an example: at the first step, we contract the right-most index of the constraint ⟨1 1 1 | 3 2 3⟩ with an EE† and derive the constraint ⟨1 1 0 | 3 2 4⟩; at the second step, we reduce the constraint ⟨1 1 0 | 3 2 4⟩ by using the constraint ⟨1 | 4⟩ and derive the constraint ⟨1 1 | 3 3⟩. In the case of q ≤ k or k < p, we similarly have (3.4); in the case of p ≤ k < q, we observe that (3.8) holds. In summary, we have (3.4).
Next we intend to study sets of tensor constraints of the form (3.9), in which M_s = ⟨1 | a − 1⟩ is called the step tensor chain and the other tensor constraints are not specified.
The step tensor chain M_s has the minimal average reduced interior angle, κ(M_s) = 1. Generally, the tensor constraints in S may not be mutually independent, in the sense that they may be related by the four fundamental operations above: reversal, contraction, reduction and combination. Moreover, two apparently different sets of constraints can be equivalent once these fundamental operations are taken into account. Therefore, we need to figure out a way to distinguish sets of constraints. Finally, we find a key feature: any such set is logically equivalent to a unique set of tensor constraints S_c which contains only two elements. As a result, we give the following definition.
Definition 5. We define the set of tensor constraints S_c = {M_s, M_t}, where M_t is called the top tensor chain, which should be prime and satisfy the condition (3.3). We call S_c the central set. We further define its derived set S_D as the set of tensor constraints M with κ_max(M) ≤ κ(M_t). Next we will prove in Lemma 7 that all the tensor constraints in S_D can be derived from S_c, and then prove in Theorem 8 the equivalence between S and S_c, where κ(M_t) = max_{M∈S} κ_max(M); this is an important result of this paper. To better understand the following proofs of the lemmas and the theorem, we sketch the logic and point out the key steps. To prove Lemma 7 and Theorem 8, the first key is to learn from Theorem 5 that the strength of tensor constraints, as evaluated by κ_max, never increases under logical derivation. So we expect that if two sets of tensor constraints are equivalent, the maximal κ_max among their elements must be the same. However, the converse of Theorem 5 may not be true. The second key is the step tensor chain M_s, with which we can reduce the tensor constraint with maximal κ_max step by step and derive all the tensor constraints with smaller κ_max by combination. The third key is the uniqueness of the prime tensor chain with a given κ_max. These three keys give rise to the relation S ⇔ S_c ⇔ S_D, where S_c is unique. In addition, M_t is named the "top" tensor chain in the sense that it is the ascendant tensor chain with maximal κ_max and can be viewed as the "parent" of the other tensor chains, while M_s is named the "step" tensor chain in the sense that it plays the unit role in reducing the top tensor chain to other constraints step by step.
Thanks to Theorem 3, given a {b, a} tiling, we have a one-to-one mapping between all the possible top tensor chains M_t and the rational numbers in [1, a/2]. Thus, we have classified all the general sets of tensor constraints of the form (3.9) by the rational number κ(M_t). Given κ(M_t), with the use of (2.15) we can directly construct the top tensor chain M_t as well as the tensor chains M satisfying M ⪯ M_t.
One may ask whether there always exist tensors T and E satisfying the tensor constraints S_c. We do not have a general proof of existence here. Nevertheless, for some specific tensor constraints, we can actually solve them by constructing explicit tensors; some examples are demonstrated in appendix B, and see more in [32]. Proof. Consider a {b, a} tiling. According to Definition 1, we let M = ⟨m_1 m_2 ··· m_k | n_1 n_2 ··· n_k⟩ and M′ = ⟨m′_p m′_{p+1} ··· m′_{q−1} m′_q | n′_p n′_{p+1} ··· n′_{q−1} n′_q⟩, where 1 ≤ p ≤ q ≤ k, m′_p ≤ m_p and m′_q ≤ m_q.
By reversal and contraction, the tensor constraint ⟨m_1 m_2 ··· m_{k−1} 0 | n_1 n_2 ··· n_{k−1} (a − 1)⟩ is derived from the constraint M. By reduction with M_s = ⟨1 | a − 1⟩, the tensor constraint ⟨m_1 m_2 ··· m_{k−1} | n_1 n_2 ··· (n_{k−1} + 1)⟩ is then derived. This operation reduces the number of nodes in a tensor constraint from its right-hand side; combined with reversal, the same operation can reduce the number of nodes from the left-hand side. By successive operations, we can finally derive the tensor constraint M′ step by step.
We denote the proposition that M can be derived from S_c as P1.
If P1 is true then, thanks to Lemma 2 and Theorem 5, κ_max(M) ≤ κ(M_t). Next we will apply induction on k to prove the converse: if κ_max(M) ≤ κ(M_t), then P1 is true.
First, for the simplest case with k = 1, P1 is true because of Lemma 6. Now assume that P1 is true for 1 ≤ k < l; we are going to prove that P1 is also true for k = l.
At this stage, the number of nodes in the tensor chain M, namely l, could be either more or less than the number of nodes in M_t, namely k. In either case, we will compare the number of upper basic indexes at each node within the parts of the same length min(k, l). To describe the difference between the two tensor chains in this part, it is convenient to define the proposition that ∃ j ∈ {1, 2, ··· , min(k, l)} such that the j-th upper index numbers of M and M_t differ as P2. We split the situation into the following three cases.
1. P2 is false and l ≤ k. This means that the tensor chain M is shorter than or of equal length to M_t, and has the same number of upper basic indexes as M_t at each node. Then obviously M ⪯ M_t, so P1 is true.
2. P2 is false and l > k. This means the tensor chain M has the same number of upper basic indexes as M_t at its first k nodes but is longer. One can pick out the extra part of M by setting M′ = ⟨(m_{k+1} + 1) m_{k+2} ··· m_l | * * ··· *⟩. For p > 1, one can easily derive that κ_{p,q}(M′) ≤ κ(M_t). Because of the induction assumption and l − k < l, M′ can be derived from S_c. Now since M can be derived from M′ and M_t by reversal and combination, P1 must be true.
3. P2 is true. This means that the numbers of upper basic indexes at some nodes differ between the two tensor chains. Set the minimal j satisfying m_j ≠ m′_j as r. If m_r > m′_r, then m_r ≥ m′_r + 1. Because of the induction assumption and l − r < l, M′ can be derived from S_c as well. Since M can be derived from M′ and M_t by combination, P1 is true.

Protection
In this subsection we describe the behavior of tensor chains under the action of tensor constraints. For this purpose we first give the following definitions. Definition 6. The transpose of an open tensor chain M = ⟨m_1 m_2 ··· m_k | n_1 n_2 ··· n_k⟩ is M^T = ⟨n_1 n_2 ··· n_k | m_1 m_2 ··· m_k⟩; the transpose of a closed tensor chain is defined in the same way.
The notion of protection can be intuitively understood as follows. If we find an M′ in S_D satisfying M′^T ⪯ M^T, then the contraction of M with M′^* can be simplified under the constraint S_c = {M_s, M_t}. Diagrammatically, the tensor chain M becomes disconnected under the contraction with the tensor chain M′^*, as illustrated in figure 7, where the unprotected tensor chain ⟨··· 1 0 0 1 0 ··· | ··· 2 3 3 2 3 ···⟩ becomes disconnected under such a contraction. In other words, when we say a tensor chain is protected, it means that one cannot factorize it by contracting its lower indexes with any M′ ∈ S_D derived from S_c. Actually, the condition in Definition 7, namely "∃ M′ ∈ S_D", can be simplified to "∃ M′ ⪯ M_t".

CP tensor chains
In this subsection we point out that, given a tiling and S_c, there exists a tensor chain which is critically protected. We notice that whether a tensor chain M is protected or not is reflected in the values of its interior angles, which, roughly speaking, measure the curvature of the skeleton of the tensor chain. Specifically, the larger κ_max(M^T) is, the easier it is for M to become unprotected. Therefore, there is a critical value of κ at which a tensor chain is critically protected.

Definition 8.
Given an open tensor chain M = ⟨m_1 m_2 ··· m_{k−1} m_k | n_1 n_2 ··· n_{k−1} n_k⟩, we can define a periodic tensor chain M_period by joining infinitely many copies of M, with loop body ⟨(m_1 − 1) m_2 m_3 ··· m_k | n_1 n_2 n_3 ··· (n_k − 1)⟩.2 The number k is called the period of M_period.
Definition 9. Given a tiling and the set S_c, we define the critically protected (CP) tensor chain M_c as the periodic tensor chain generated by M_t. We further define the CP reduced interior angle as κ_c = κ(M_c). We demonstrate the construction of M_c with an example in figure 8. The exact meaning of critical protection is characterized by the following theorem. 2 Here we have exceptionally used the notation of a closed tensor chain to denote a periodic tensor chain and a loop body, because both of them satisfy (2.10).
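Definitions 8 and 9 can be sketched operationally: the loop body is obtained from the generating open chain by lowering m_1 and n_k by one (absorbing the joint between consecutive periods), and the CP chain tiles this loop body. The list representation of chains below is illustrative; the test case is the example of figure 8.

```python
def loop_body(m, n):
    """Loop body of the periodic chain generated by an open chain (Definition 8):
    m_1 and n_k are each lowered by one, absorbing the joints between periods."""
    m, n = list(m), list(n)
    m[0] -= 1
    n[-1] -= 1
    return m, n

def cp_chain(m_top, n_top, periods):
    """A finite window of the CP tensor chain generated by the top tensor chain."""
    bm, bn = loop_body(m_top, n_top)
    return bm * periods, bn * periods

# The example of figure 8: top tensor chain <1 1 1 | 3 2 3>.
print(loop_body([1, 1, 1], [3, 2, 3]))      # the loop body <0 1 1 | 3 2 2>
print(cp_chain([1, 1, 1], [3, 2, 3], 2))
```

Only a finite window of the infinite CP chain is ever needed in practice, e.g. when comparing it against a finite chain under Theorem 9.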

Theorem 9. With a given {b, a} tiling and a given S_c, an open tensor chain M = ⟨* * ··· * | n_1 n_2 ··· n_k⟩ is unprotected if and only if there exist p, q satisfying condition (3.13). A closed tensor chain M = ⟨* * ··· * | n_1 n_2 ··· n_k⟩ is unprotected if and only if there exist p, h satisfying condition (3.14).
Proof. We present the proof for the case of an open tensor chain in detail and claim that it applies to closed tensor chains in parallel; the main difference is mentioned at the end of the proof. We first prove the proposition: if there exist p, q with 1 ≤ p ≤ q ≤ k such that (3.13) is true, then M is unprotected. Without loss of generality, we start with an assumption under which there are two cases to consider. (In figure 8, by using the top tensor chain ⟨1 1 1 | 3 2 3⟩, we define the loop body ⟨0 1 1 | 3 2 2⟩ and construct a CP tensor chain ⟨··· 0 1 1 0 1 1 ··· | ··· 3 2 2 3 2 2 ···⟩.) Now we prove the converse proposition: if M is unprotected, then there exist p, q with 1 ≤ p ≤ q ≤ k such that (3.13) is satisfied. Suppose that the top tensor chain is M_t = ⟨m_1 m_2 ··· m_v | n_1 n_2 ··· n_v⟩, and that M is unprotected when its tensors from the p-th to the q-th are acted on by a tensor constraint M′.
As far as a closed tensor chain is concerned, the only difference is that it has cyclic symmetry modulo k, namely ⟨* * ··· * | n_1 n_2 ··· n_{k−1} n_k⟩ = ⟨* * ··· * | n_2 n_3 ··· n_k n_1⟩; then the above x_i are nothing but the reduced interior angles, namely s_i = a − 1 − n_i = x_i, and the proposition can be proved with the same algebra. Similarly, thanks to Theorem 3, we have a one-to-one mapping between the CP tensor chain M_c and the CP reduced interior angle κ_c.
Critical protection characterizes the limit of mapping the information from one side (with upper indexes) to the other side (with lower indexes) with full fidelity. So the physical correspondence of the CP tensor chain is the maximal boundary of the region from which the interior information can be mapped to the boundary without loss.

Geometric description
In this section we elaborate some geometric properties of the tensor network with {b, a} tiling in H2 space, which will be essential for providing a quantitative description of QEC and the ES in tensor networks. The isometry group of H2 space is SL(2, R). In H2 space, the curves of constant curvature (CCC) include circles, hypercircles and horocircles, depending on the value of their geodesic curvature; a geodesic is a special kind of hypercircle. 3 A brief review of SL(2, R) and CCCs is given in appendix A.

The curve of constant curvature corresponding to a periodic polyline
The {b, a} tiling breaks the isometry group SL(2, R) down to a discrete subgroup G_tiling, the set of all transformations preserving the tiling. We are interested in two specific generators V, S of G_tiling, where V is the anticlockwise rotation around a node by an angle 2π/a and S is the clockwise rotation by π around the midpoint of an edge linked to this node. V and S satisfy a set of group relations, whose solutions, up to SL(2, R) conjugation, are fixed in terms of length_P, given in (2.3).
Recall that a tensor chain can be embedded into a tensor network in H^2 space, as discussed at the beginning of subsection 3.1. Similarly, a periodic tensor chain can also be embedded, and its skeleton forms an endless, periodic polyline. When the scale of the chain is much greater than the period of the polyline, the roughness of the skeleton is smoothed out at large scales, so that it approximates a CCC in H^2 space with constant geodesic curvature λ. In general, given the embedding of a periodic tensor chain, we can define a unique CCC corresponding to this chain by a specific procedure. Next we first present the procedure to locate such a CCC, and then discuss some exceptional cases in which such a CCC cannot be defined.
The periodic tensor chain M can be constructed from an open tensor chain according to Definition 8. We choose a node of M and label it i. The loop body beginning at the (i + 1)th node is [m_{i+1} m_{i+2} ··· m_{i+k} ; n_{i+1} n_{i+2} ··· n_{i+k}], where k is the period of M. Now we choose the center of the rotation generated by V to be the ith node, and the center of the rotation generated by S to be the midpoint of the edge between the ith node and the (i + 1)th node. We then define a transformation W, built from V and S, which preserves the structure of the periodic polyline and maps each period of the polyline to the next period along the direction of the polyline. Starting from a point q in H^2 space, the set of all points generated by W^n, namely Q_W = {W^n q | n ∈ Z}, lies on a CCC. When |Q_W| ≥ 3, the CCC is uniquely determined.
We are interested in the case where the point q is the midpoint of an edge carrying a lower index of M. After some algebra, we find that the geodesic curvature λ of this kind of CCC can be calculated from (4.7), where π_q ∈ SL(2, R) is the matrix of clockwise rotation by the angle π around the point q; π_q can be generated from the generators V and S, according to the relative position between the point q and the ith node. Obviously, different choices of the point q generate different CCCs and different values of λ. To determine the unique CCC corresponding to M, we remark that one just needs to choose the point q which minimizes |λ − 1| in (4.7). The process of generating a CCC from a periodic tensor chain is illustrated in figure 9.
Once the CCC corresponding to a periodic tensor chain is uniquely determined, the classification of CCCs in (A.6) can be reformulated in terms of the trace of W. From Theorem 3 and Definition 8, for a given rational number κ ∈ [1, a/2], one can construct a unique prime tensor chain and its periodic tensor chain M with κ(M) = κ. As a result, one can further figure out the corresponding CCC as well as its geodesic curvature λ. So one can define a mapping from κ to λ, as illustrated in figure 10. Note that not every λ can be inversely mapped to a κ, because λ ranges over positive real numbers while κ is rational. Nevertheless, the horocircle, with curvature λ = 1, is a special kind of CCC in H^2 space; it corresponds to a closed tensor chain (closed polyline) in the large-radius limit. Furthermore, given a {b, a} tiling, we can prove that the average reduced interior angle κ_h of such a closed tensor chain is given by (4.9). Since this quantity plays a crucial role in classifying tensor networks, we provide the detailed proof as follows.
Proof. A horocircle is the limit of a circle with infinitely long radius in H^2 space. To construct the polyline corresponding to a horocircle in a tensor network, we may consider the process of increasing the scale of a closed polyline. In a network with {b, a} tiling, we consider a closed polyline M with k nodes and total reduced interior angle l, so the average reduced interior angle of M is κ = l/k. It is now helpful to draw the Poincaré dual of the network with {b, a} tiling, which is a network with {a, b} tiling whose nodes are located at the centers of the dual polygons, as illustrated in figure 11. Given a polyline M, its outer adjoint polyline M′ in the Poincaré dual network can be constructed step by step: (1) find the elementary polygons in the Poincaré dual network whose centers are located at the nodes of M; (2) pick out the edges of those polygons lying outside of M; (3) link those edges in order to obtain a closed polyline, which is just M′.
The relation between M and M′ is illustrated in figure 11. One finds that M′ has (a − 1)k − l nodes and total reduced interior angle ak − l. Iterating, the outer adjoint polyline M″ of the polyline M′ falls back into the network with {b, a} tiling. Similarly, the polyline M″ has (b − 1)((a − 1)k − l) − ak + l nodes and total reduced interior angle b((a − 1)k − l) − ak + l. So its average reduced interior angle is κ″ = f(κ), where f(κ) = [b((a − 1) − κ) − a + κ] / [(b − 1)((a − 1) − κ) − a + κ]. The number of nodes enclosed by M″ is always greater than that enclosed by M. Thus, if we begin with an elementary closed polyline with b nodes and total reduced interior angle b, and continuously take the outer adjoint polyline, we approach the polyline corresponding to a horocircle. Thus we have κ_h = lim_{n→∞} f^n(1), where f^n represents f applied n times. To evaluate the above limit, we observe the relation (4.14), in which 0 < c_2 < 1. Since f^n(1) must be finite, we finally obtain (4.9).
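The recursion in this proof can be checked numerically. Below is a sketch (my reconstruction of the map f from the node and angle counts above, not code from the paper) that iterates f from the elementary-polyline value κ = 1; for the {4, 5} tiling it reproduces the value κ_h = 1.63 quoted in the text, and for {7, 3} it gives κ_h ≈ 1.276:

```python
def f(kappa, b, a):
    """One step of the outer-adjoint recursion from the proof: the average
    reduced interior angle of M'' in terms of that of M (per-node counts)."""
    angle = b * ((a - 1) - kappa) - a + kappa
    nodes = (b - 1) * ((a - 1) - kappa) - a + kappa
    return angle / nodes

def kappa_h(b, a, steps=200):
    """kappa_h = lim f^n(1), starting from the elementary closed polyline
    (b nodes, total reduced interior angle b, hence average 1)."""
    kappa = 1.0
    for _ in range(steps):
        kappa = f(kappa, b, a)
    return kappa

# {4,5} tiling: matches kappa_h = 1.63 quoted in the text for figure 1(b)
assert abs(kappa_h(4, 5) - 1.634) < 1e-3
```

The iteration converges rapidly because |f′| < 1 near the fixed point; the closed-form value is the appropriate root of the fixed-point equation κ = f(κ).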

Now we discuss some exceptional cases. The first case is |Q_W| ≤ 2, i.e. the number of generated points is no more than two, so the CCC cannot be uniquely defined through the above process. The second case is that the period of M bends in an irregular way, so that the embedding of the periodic tensor chain leads to a self-crossing polyline and no CCC can form; an example is the periodic chain with loop body [1 1 1 ; 5 4 5] in the tensor network with {3, 7} tiling. These two cases only happen for some κ ∈ [1, κ_h).

CP curves
The CCC corresponding to a CP tensor chain is called the CP curve, and its geodesic curvature is called the CP curvature λ_c. The CP curve is a generalization of the greedy geodesic in [18].
Given a tiling, the CP curvature λ_c and the CP reduced interior angle κ_c are inversely related to each other, as shown in figure 10. If the CP tensor chain is embedded as a closed polyline, the CP curve is a circle, with κ_c < κ_h and λ_c > 1. If the CP tensor chain is embedded as an open polyline extending to the boundary, the CP curve is a hypercircle, with κ_c > κ_h and λ_c < 1. As reviewed in appendix A, the reflection of a hypercircle with respect to its axis (a geodesic) is again a hypercircle. Thus the reflection of a CP curve is a CP curve. From (A.5), their geodesic distance is 2d_c = 2 arctanh(λ_c). (4.17) Roughly speaking, a periodic tensor chain is unprotected if its corresponding CCC has geodesic curvature λ > λ_c, while it is protected if the corresponding CCC has λ < λ_c.

Tensor networks and greedy algorithm
So far we have established a framework to describe tensor chains and tensor constraints. We will impose a central set S_c on a tensor network in the sense that the constraints M ∈ S_D derived from S_c are valid, while those M ∉ S_D are not. In parallel, a tensor network with {4, 5} tiling is shown in figure 1(b); in this case one has κ_h ≈ 1.63. The entanglement properties of this tensor network with different top tensor chains are collected in table 2. The corresponding diagrams of tensor constraints and the embedded CP tensor chains are plotted in figures 16, 17, 18 and 19, respectively.
In the above figures, we divide the boundary of the tensor network into two intervals A and Ā. The regions shaded with different colors represent the effect of the greedy algorithm starting from A and from Ā, respectively, which will be discussed in detail in the next subsection. CP tensor chains are marked in each figure, and their significance in the greedy algorithm will be stressed as well. In the next two sections, we will take these figures as examples to disclose the relation between the greedy algorithm and quantum error correction as well as the entanglement spectrum.

Greedy algorithm on tensor chains
For a tensor network Ψ, we generalize the greedy algorithm in [18], based on the set S_D derived from a central set S_c. After choosing an interval A on the boundary, we consider a sequence of cuts {C_n} and a sequence of sub tensor networks {Φ_n}, where each C_n is bounded by ∂A and each Φ_n consists of the tensors enclosed by C_n and A, shaded with strips. So each Φ_n is a mapping from the Hilbert space on C_n to the Hilbert space on A.
Let C_1 = A; then Φ_1 is the identity. Next, one finds a tensor chain M_n in the tiling which belongs to the set S_D and all of whose lower indexes can be contracted with Φ_n. Then Φ_{n+1} is constructed by absorbing this M_n into Φ_n. The greedy algorithm stops when no such tensor chain can be found. This way of iteration guarantees that each Φ_n is proportional to an isometry. As explained in [32], the above greedy algorithm for a tensor network Ψ is equivalent to the procedure of simplifying the contraction of tensor chains in (3.11), where M is any tensor chain embedded in the tensor network Ψ.
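The iteration just described can be sketched in a toy form in which each "tensor chain" is a single tensor and a hypothetical threshold stands in for membership in S_D (a simplification of the chain-based algorithm, not the paper's implementation):

```python
def greedy(tensors, edges, start_cut, threshold):
    """Toy single-tensor greedy absorption.
    tensors: iterable of tensor ids; edges: dict tensor -> set of edge labels;
    start_cut: set of edge labels exposed on the boundary interval A;
    threshold: minimal number of cut edges (lower indexes) needed to absorb.
    Returns the set of absorbed tensors (the analogue of the shaded region)."""
    cut = set(start_cut)
    absorbed = set()
    progress = True
    while progress:
        progress = False
        for t in tensors:
            if t in absorbed:
                continue
            lower = edges[t] & cut
            if len(lower) >= threshold:
                # absorb: its lower edges leave the cut, its other edges join it
                cut = (cut - lower) | (edges[t] - lower)
                absorbed.add(t)
                progress = True
    return absorbed

# a 3-tensor chain hanging off boundary edges a, b, c, d (hypothetical data)
edges = {1: {"a", "b", "x"}, 2: {"x", "c", "y"}, 3: {"y", "d", "z"}}
assert greedy([1, 2, 3], edges, {"a", "b", "c", "d"}, threshold=2) == {1, 2, 3}
```

With a stricter threshold (mimicking a smaller S_D), the same network is fully protected: `greedy([1, 2, 3], edges, {"a", "b", "c", "d"}, threshold=3)` returns the empty set.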
To describe the process of the greedy algorithm precisely, which is essential in the proofs of the properties of the ES, we extend the notion of protection to a directed cut in the greedy algorithm. One may notice that the process of a greedy algorithm is not unique: there are many ways to arrange the sequence of tensor chains absorbed into the shaded region Φ_n, and during the course of the greedy algorithm C_n need not be connected. In the greedy algorithm starting from an interval A, each cut C_n is assigned a direction such that its corresponding Φ_n is on its right-hand side. A cut may consist of one or more connected components, as illustrated in figure 20. The sequence of nodes along an open cut is denoted as [N_1, N_2, ··· , N_l], where N_1 and N_l are located at the boundary. The sequence of nodes given by a closed curve is denoted as (N_1, N_2, ··· , N_l), where N_1 and N_l are neighbors.

Just for convenience, we allow the sequence numbers of nodes to start from any integer, such as [N_{−1}, N_0, N_1, N_2], but the sequence numbers must be monotonically increasing with unit step.
Definition 11. We say a tensor chain M is connected to a directed cut C if M lies on the left-hand side of C, all the edges associated with its lower indexes are cut by C, and none of the edges associated with its upper indexes are cut by C.
Definition 12. We say a directed cut C is unprotected if there exists an unprotected tensor chain which is connected to C. Otherwise, we say C is protected.
So a greedy algorithm progresses (stops) when the cut is unprotected (protected).

Figure 20. The closed curve is denoted by (N_1, N_2, ··· , N_7). The sub tensor network Φ absorbed by the greedy algorithm is shaded with purple strips. A minimal secant geodesic G_m is marked by the blue dot-dashed line.

A greedy algorithm can start from the interval Ā as well. In figures 12, 13, 15, 16, 17 and 19, we show the final results of the greedy algorithm on several tensor networks with specific intervals A and Ā, shaded with different colors respectively. In figures 14 and 18, all the tensor chains are protected under the action of the greedy algorithm, so that no shaded region appears in those networks.
In the above plots one may notice that some CP tensor chains are absorbed by the greedy algorithm, which apparently conflicts with the fact that a CP tensor chain should be protected. We point out that this phenomenon is ascribed to the fact that the endpoints of the CP tensor chains belong to the interval A or Ā on the boundary as well. For instance, consider the greedy algorithm starting from Ā, as shown in figure 21. We define Ȧ to be the interval between one endpoint of a CP tensor chain, which is an un-contracted edge on the boundary within Ā, and the nearest endpoint of Ā. Generally, the width of Ȧ is equal to the geodesic distance d_c between the CP curve and its axis, see (4.17). We first consider the greedy algorithm starting from the sub-interval Ā − Ȧ. At this stage the greedy algorithm indeed stops before it touches the CP tensor chain. However, in practice, when we calculate the reduced density matrix ρ_A, the contraction over the endpoints of the CP tensor chain, namely the un-contracted edges within Ȧ, must be taken into account by definition. At this stage the CP tensor chain may fail to be protected under the action of the greedy algorithm, as shown in figure 13. We refer to this as the boundary effect of the greedy algorithm. Nevertheless, we remark that this boundary effect is weak in the sense that it absorbs only finitely many layers (most likely, only one layer) of tensors; we will elaborate on it when we study the ES of tensor networks in section 7.

Quantum error correction (QEC)
In this section we concentrate on how to justify the ability of QEC for a tensor network based on the properties of the CP tensor chain.

Greedy algorithm and QEC
The whole story of QEC on tensor networks is based on the Hilbert space associated with un-contracted edges, which introduce extra degrees of freedom in the bulk, and the corresponding code subspace in the Hilbert space on the boundary. The correction of the code subspace after erasing an interval A on the boundary is equivalent to pushing a bulk operator in the wedge of the interval Ā to the interval Ā on the boundary [14]. Technically, following [18], the procedure of QEC involves three steps: (1) acting on an un-contracted index in the bulk with an operator; (2) pushing the operator from this index in the bulk to the indexes in the network; (3) pushing it further to the boundary.
In the current work we will ignore the Hilbert space in the bulk, since our main purpose is to realize the algorithm of QEC on tensor networks. We will skip step 1 and begin at step 2, by directly inserting an operator into the contracted edges in the interior. In the language of tensor chains, we can insert an operator O between two tensor chains M and M′. No matter which way one adopts to insert an operator into the network, the subsequent processes of QEC are the same. Thanks to the tensor constraints, we can push an operator O through a tensor chain M ∈ S_D, and the output operator can be rewritten as O′, namely (6.1) and (6.2). Specifically, without un-contracted indexes in the bulk, equation (6.1) is just the reflection of step 2 above and equation (6.2) depicts step 3. In short, the terminology 'QEC' in this paper refers to the above interpretation.
All of the above operations can be demonstrated by diagrams. Take the tensor network with {7, 3} tiling as an example. The insertion of an operator is illustrated diagrammatically; employing the tensor constraints, the process of pushing an operator through tensor chains is illustrated in (6.4) and (6.5). One can successively push operators through tensor chains in S_D. Operators may or may not finally be pushed to an interval on the boundary, depending on the structure of the tiling and the tensor constraints. In fact, tensor pushing is the reverse procedure of the greedy algorithm: pushing an operator through a tensor chain M ∈ S_D is the reverse of absorbing the tensor chain M into the shaded region of the tensor network.
Definition 13. We say that a tensor network enjoys QEC if any operator inserted into the bulk of the network can be pushed to an interval on the boundary.
In figures 12 and 13, an inserted operator O is successfully pushed to Ā. We remark that if the operator is inserted into the region enclosed by the CP tensor chain and the interval A, as illustrated in figure 12, then it can be pushed to Ā. In other words, after erasing an interval A, most of the points in the wedge of Ā can be recovered by QEC.
However, if the operator is inserted into the region enclosed by the CP tensor chain and the geodesic bounded by ∂Ā, the situation becomes subtle, and it is not guaranteed that the operator can always be pushed to Ā. On the one hand, if the inserted operator is close to the CP tensor chain, as illustrated in figure 13, it may still be pushed to a subinterval of Ā; however, the endpoints of this subinterval now approach ∂Ā. In this figure we notice that many arrows, which denote the trajectory along which the operator is pushed, cross the geodesic and then radiate out over a wide region, in contrast to the process in figure 12. This phenomenon indicates that the information of the operator can only be recovered over a wide range of the boundary, implying that the function of QEC in figure 13 is weaker than in the tensor network of figure 12. This may be related to approximate QEC [14,33]. On the other hand, if the operator is rather close to the geodesic bounded by ∂Ā, it may not be pushed to Ā any more.

CP curves and QEC
The geometric description of the CP tensor chain in section 4 provides a way to describe QEC over H^2 space as well. Given a subsystem Ā on the boundary, one may ask whether an operator acting on a point x located inside the wedge of Ā can be pushed to Ā. For a simply connected interval Ā, we denote its two endpoints as u and v. These two points, together with the point x, uniquely determine a hypercircle H in H^2 space. The sub network in the region Ω enclosed by H and Ā defines a mapping Φ from the Hilbert space associated with the edges on H to the Hilbert space associated with the edges on Ā. If and only if Φ is proportional to an isometry can the operator be pushed to the boundary, thus implementing QEC in an operator scenario. Otherwise the operator cannot be pushed into the region specified by Ā, and the recovery of such an operator is prevented by erasing A, so that QEC fails.
To check whether Φ is isometric, one needs to evaluate the inner product ΦΦ†, which is directly determined by the imposed tensor constraints S_c. During the evaluation, the most difficult step is to simplify the contraction MM†, where M is the boundary tensor chain of Φ on H. So, to figure out whether Φ is isometric, our final task is to justify whether M is protected under tensor contractions subject to S_c.
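The isometry criterion itself is elementary linear algebra; a minimal numerical sketch of the condition ΦΦ† ∝ I (with hypothetical matrices, not the network's actual Φ) reads:

```python
import numpy as np

def proportional_to_isometry(Phi, tol=1e-10):
    """Check whether the matrix Phi satisfies Phi Phi^dagger = c * I, c > 0,
    i.e. whether Phi is proportional to an isometry onto its row space."""
    G = Phi @ Phi.conj().T
    c = np.trace(G).real / G.shape[0]
    return c > tol and np.allclose(G, c * np.eye(G.shape[0]), atol=tol)

rng = np.random.default_rng(0)
# a random 4x8 matrix with orthonormal rows, rescaled: passes the test
Q, _ = np.linalg.qr(rng.standard_normal((8, 4)))
assert proportional_to_isometry(2.5 * Q.T)
# a generic random map fails the test
assert not proportional_to_isometry(rng.standard_normal((4, 8)))
```

Here the rows index the smaller boundary Hilbert space and the columns the cut, mirroring the convention that Φ maps the edges on H to the edges on Ā.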
Fortunately, our discussion in the section on critical protection has provided the answer to this question. One can justify this by comparing the geodesic curvature λ of the hypercircle H with the curvature λ_c of the CP curve: if λ > λ_c, then M is unprotected; if λ < λ_c, then M is protected.
As a result, given a subsystem A on the boundary, we find a geodesic connecting the two endpoints of A and a CP curve between the geodesic and Ā. Whether an operator at x can be pushed into Ā depends on the geodesic curvature of the hypercircle passing through x. For points inside the region enclosed by the boundary Ā and the CP curve, an operator can be recovered by QEC, since the geodesic curvature of such hypercircles is greater than λ_c; for points inside the region enclosed by the CP curve and the geodesic, an operator cannot be recovered by QEC, since the geodesic curvature is less than λ_c. When a tiling of H^2 is specified, κ_c is inversely related to λ_c. Because κ is more easily calculated than λ, one can alternatively compare the average reduced interior angle κ of a hypercircle with that of the CP tensor chain, κ_c. For a given tiling, recall that the tensor chain corresponding to a horocircle with λ_h = 1 has average reduced interior angle κ_h given in (4.9). Moreover, once S_c is specified, λ_c and κ_c are determined as well. Whether a tensor network enjoys QEC can thus be justified by comparing λ_c with λ_h or κ_c with κ_h, as described below.
If λ_c ≥ 1 or κ_c ≤ κ_h, the CP curve is a circle or a horocircle. The geodesic curvature of every hypercircle must then be less than λ_c, so no QEC can be implemented by inserting an operator at any point in the bulk, and such a tensor network does not enjoy QEC. For instance, the tensor networks in figures 14 and 18 belong to this class. This matches the fact that the greedy algorithm does not iterate in these tensor networks.
Similarly, if λ_c < 1 or κ_c > κ_h, the CP curve is a hypercircle. An operator inserted into the region enclosed by the CP tensor chain and Ā can be recovered by QEC, and such a tensor network enjoys QEC. For instance, all the other tensor networks in this paper, except those in figures 14 and 18, belong to this class. Nevertheless, given an interval Ā, the region that can be recovered differs for different constraints. We summarize the above results about the function of QEC in a network in table 3.
Furthermore, given a tensor network, we can evaluate its ability of QEC in the following way. We insert an operator O at the center of the tensor network and push it to the boundary within a single interval Ā; that is, the operator can be recovered even if A is erased. The ratio between the maximal area of the erased A and the total area of the boundary can be used to evaluate the ability of QEC of such a tensor network. From the geometric description, to find this maximal area we just need to find the hypercircle with curvature λ_c which passes through the center of H^2 space; this defines the ratio R_QEC. When the CP curve approaches a geodesic, λ_c → 0, κ_c → a/2 and R_QEC → 1/2; the tensor network then has the maximal ability of QEC, just like the "perfect tensor" in [18]. As λ_c increases, R_QEC decreases and the ability of QEC is weakened. When the CP curve approaches a horocircle, λ_c → 1, κ_c → κ_h and R_QEC → 0; the ability of QEC is totally lost.
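The closed form of this ratio is not reproduced above, but the two quoted limits can be checked against a small sketch. Assuming, in the Poincaré disk model, that the marginal hypercircle shares its ideal endpoints with its axis, and using the standard relation cos θ = tanh d between the half-opening angle θ of a geodesic and its distance d from the disk center, λ_c = tanh d_c suggests the guess R_QEC = arccos(λ_c)/π. This is my reconstruction, not a formula quoted from the paper; it at least reproduces both limits and the stated monotonicity:

```python
import numpy as np

def r_qec(lambda_c):
    """Guessed boundary fraction that can be erased while the central operator
    survives: R = arccos(lambda_c) / pi, under the assumptions stated above."""
    return np.arccos(lambda_c) / np.pi

assert abs(r_qec(0.0) - 0.5) < 1e-12   # CP curve -> geodesic: maximal QEC
assert abs(r_qec(1.0) - 0.0) < 1e-12   # CP curve -> horocircle: QEC lost
xs = np.linspace(0.0, 1.0, 101)
assert np.all(np.diff(r_qec(xs)) < 0)  # monotonically decreasing in lambda_c
```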

Entanglement spectrum (ES)
Next we focus on the evaluation of the entanglement spectrum for a given tensor network, and argue that in general the flatness of the ES can be justified by the strength of critical protection.

Reduced density matrix
A tensor network Ψ gives a state |Ψ⟩ in the Hilbert space defined on its un-contracted boundary edges. Given an interval A on the boundary, one obtains the reduced density matrix of A by tracing out the complementary region Ā. We are concerned with whether the reduced density matrix ρ_A has a flat spectrum, meaning that all the non-zero eigenvalues of ρ_A are identical. This statement can be rephrased as the following propositions: • All orders of the Renyi entropy are identical, namely independent of n.

• The reduced density matrix satisfies the relation ρ_A² ∝ ρ_A. (7.3) As indicated at the beginning of this paper, the ρ_A of the ground state of a CFT_2 satisfies (1.1) and exhibits a non-flat ES. The gravitational dual result for the AdS_3 vacuum coincides with this as well. Now we would like to check whether the ES of a tensor network state is flat or not. For this purpose it is convenient to check the relation (7.3) by manipulating tensor networks.
First we disclose the key role of the CP tensor chain in identifying the protected region in a tensor network. Recalling the boundary effect in the greedy algorithm, we separate the procedure of taking the trace over Ā into the two steps of (7.4). In the following, we take tensor networks with {7, 3} tiling as examples to demonstrate the evaluation of the ES by manipulating tensor networks. The results have previously been collected in table 1.

Non-flat ES
First of all, we point out that the evaluation of the ES depends on the choice of the interval A on the boundary. We will see that, for the constraints S_c = {[1 ; 2], [1 0 1 ; 1 1 1]}, the ES of ρ_A for any relatively large interval A is non-flat. We say that the tensor network generically has a non-flat ES; the word "generically" means that the ES is always non-flat unless the tensors T and E are fine-tuned. Throughout this paper, when we say that a tensor network has a non-flat ES, we refer to the above statement.
First, we trace out the degrees of freedom in Ā − Ȧ to obtain the reduced density matrix. During this procedure the structure of the tensor network is simplified by the tensor constraints generated by S_c, see figure 22(a)-(c). Specifically, the tensors in the wedge of Ā are contracted into identity matrices, which is just the greedy algorithm starting from Ā − Ȧ described in the previous section. One can repeat this process until a final stage is reached at which the network cannot be simplified any further, as shown in figure 22(c). The terminal boundary forms the polyline marked in red in figure 22. As a matter of fact, such a polyline is nothing but a CP tensor chain as defined in the previous section. From this figure we see that, before the trace over the un-contracted edges in Ȧ is taken into account, the operation induced by the greedy algorithm cannot enter the region enclosed by the CP tensor chain and A, which is exactly why we call it a critically protected tensor chain. The next step is to evaluate Tr_Ȧ, namely tracing out the degrees of freedom associated with the un-contracted boundary edges closest to A. The process is illustrated in figure 22(d)(e). We notice that the network structure can be further simplified, so that the CP tensor chains are absorbed into the shaded region at this step; this is the boundary effect of the greedy algorithm described in the previous section.

The boundary effect of the greedy algorithm results from the discretization of H^2 space and may not appear in a continuous geometry. In the context of tensor networks, however, according to (7.4) the un-contracted edges in Ȧ should be contracted. Sometimes the contribution of this effect to the reduced density matrix becomes subtle, and we should handle it cautiously. In other words, whether the ES is flat or not can only be justified after the boundary effect is taken into account. Now, with the reduced density matrix ρ_A at hand, we can compute ρ_A² by further contracting the un-contracted edges in A. One can simplify ρ_A² by virtue of the tensor constraints, in parallel to the above process on Ā. The boundary effect of the greedy algorithm appears as well. Before the boundary effect is taken into account, the greedy algorithm stops at a CP tensor chain, which is the reflection, about the geodesic bounded by ∂A, of the CP tensor chain appearing in the contraction over Ā. Thus, the simplification of ρ_A² is equivalent to applying the greedy algorithm to A and Ā successively, as shown in figure 13.
The calculation of ρ_A² is demonstrated in figure 23. From this diagram we find that ρ_A² cannot be simplified to be proportional to ρ_A, so that equation (7.3) is not satisfied. Equivalently, from figure 13, we notice that some tensors are not absorbed by the greedy algorithm starting from A and Ā; thus (7.3) is not satisfied.
Given the above constraints, we point out that as long as A is large enough, ρ_A always gives rise to a non-flat ES, independently of the choice of A. This assertion will be proved in subsection 7.4. For now we just conclude that such a tensor network has a non-flat ES, in agreement with the explicit computation of the eigenvalues of the reduced density matrix in [25,32].
We remark that for the above constraints the corresponding CP curve is a hypercircle. When the CP curve is a horocircle, it approaches the boundary with a single intersecting point; when the CP curve is a circle, it does not reach the boundary at all. In both cases one need not consider the boundary effect separately, and the ES is usually non-flat; for CP circles this is because the region enclosed by the circle is protected.

Flat ES
We have pointed out that one equivalent way to check the relation (7.3) is to consider the greedy algorithm starting from A and from Ā successively. Let us take figure 12 as an example, where S_c = {[1 ; 2], [1 1 ; 1 1]}. We observe that the union of the two shaded regions covers the whole tensor network, implying that all the tensors are absorbed by the greedy algorithm. Therefore, (7.3) is satisfied and the ES has to be flat. We say that the tensor network has a flat ES.

Mixed ES
From figure 15 we know that, for S_c = {[1 ; 2], [1 0 1 0 1 ; 1 1 0 1 1]}, the ES of ρ_A can be flat or non-flat, depending on the choice of A. We say that the tensor network has a mixed ES.

Geometric point of view on ES
In the tensor network realization of AdS/CFT, a tensor network is usually treated as the wavefunction Ψ of the ground state. Alternatively, when an interval A on the boundary is given, we notice that Ψ can be understood as a mapping from the Hilbert space on A to the Hilbert space on Ā. So Ψ can be regarded as a matrix Ψ^Ā_A, where the two indexes A and Ā represent the degrees of freedom on the two subsystems A and Ā, respectively.
The notion of critical protection provides an efficient way to visualize the simplification of tensor networks under tensor contractions subject to tensor constraints. To make this process more transparent, we first decompose the network into sub networks. As seen in the previous subsections, when the indexes on A or Ā are contracted, the greedy algorithm stops at some nodes. Neglecting the boundary effect for the moment, the skeletons connecting those nodes form two CP tensor chains, which are adjacent to the geodesic bounded by ∂A.
First of all, we point out that when λ_c ≥ 1 or κ_c ≤ κ_h, all hypercircles are protected, since their geodesic curvatures are less than λ_c, so a non-flat ES is guaranteed. In the following we focus on the non-trivial case λ_c < 1 (κ_c > κ_h), where the CP curves are hypercircles.
We denote the CP curve close to A or Ā as H_A or H_Ā, respectively. The region enclosed by the two CP curves is called the CP region Ω_c. The tensors in the CP region form a sub tensor network Ψ_c, which is a mapping from H_A to H_Ā and is denoted as (Ψ_c)^{H_A}_{H_Ā}. Similarly, H_A and A enclose a sub tensor network Φ_A, which defines a mapping (Φ_A)^A_{H_A}; H_Ā and Ā enclose a sub tensor network Φ_Ā, which defines a mapping (Φ_Ā)^Ā_{H_Ā}. Since the tensors outside H_A are not protected under the contraction over A, the mapping (Φ_A)^A_{H_A} from H_A to A must be proportional to an isometry. Similarly, the mapping (Φ_Ā)^Ā_{H_Ā} from H_Ā to Ā is proportional to an isometry as well. This is expressed in (7.5), where the indexes are abbreviated and I (I′) is the identity matrix on A (Ā). Finally, the full matrix Ψ^Ā_A can be represented as the product of the matrices Φ_A, Ψ_c and Φ_Ā, from which the condition for a flat ES follows. We present a schematic diagram demonstrating the decomposition of the tensor network state, as well as the calculation of ρ_A and ρ_A², in figure 24. The condition for a flat ES, (7.9), is illustrated in figure 25. This figure reveals that whether the ES is flat or not depends on the thickness of the CP region, where the thickness is defined as the distance 2d_c in (4.17) between the two CP curves. Equivalently, from the above derivation we notice that the flatness of the ES may be checked by observing the result of the greedy algorithm starting from A and from Ā successively, which identifies the regions of isometry between the tensor chains in (7.5). If all the tensors are absorbed by the greedy algorithm, then (7.9) is valid and the ES is flat, and vice versa.

Once the boundary effect is considered, as shown in the previous section, the CP tensor chains on the boundary of the CP region Ω_c are no longer protected under the greedy algorithm. Nevertheless, only a finite thickness of the CP region will be absorbed. In figure 23, since the tensors close to the geodesic are not absorbed, the tensor network has a non-flat ES.
The lesson gained from this picture is that the thickness of the CP region determines whether the ES is flat or not. Without the boundary effect, the boundary of the CP region is composed of two CP curves, so its thickness is 2d_c, where d_c = arctanh(λ_c) is the geodesic distance between the CP curve (a hypercircle) and its axis. Due to the boundary effect, the CP tensor chain is no longer protected, and the outer layer of the original CP region is absorbed by the greedy algorithm. The thickness of such a layer is approximately given by P, the length of an edge (2.3). So the thickness of the CP region decreases to 2d_c − P. Since Ψ_c is protected, (7.9) is true only if the thickness of the CP region vanishes.
The evaluation of the geodesic curvature λ in a general tensor network is difficult, which prevents us from judging the flatness of the ES via the CP curvature λ_c. Alternatively, this job can be done by calculating the CP reduced interior angle κ_c, as described in the next subsection.

The flatness of ES
In this subsection we argue that the thickness of Ψ_c is negatively related to the flatness of the ES when the thickness is small. We first construct the matrix form of Ψ_c, i.e. (Ψ_c)_{H_A}^{H_Ā}, from the tensor network. For κ_c > κ_h, the thickness of Ψ_c is a constant 2d_c. One can then slice Ψ_c into periodic tensor chains bounded by ∂A, as shown in figure 26. We denote these periodic tensor chains as {M_n} (n = 1, 2, ..., N_d), ordered from H_A to H_Ā, such that M_1 lies on H_A while M_{N_d} lies on H_Ā. The number N_d is related to the thickness by N_d ≈ 2d_c/P. Obviously, none of the {M_n} is an isometry. We can express Ψ_c by the matrix product Ψ_c = M_{N_d} ⋯ M_2 M_1. To show the approximate relation between the thickness 2d_c and the flatness of the ES, it is enough to consider the approximation that all the periodic tensor chains are identical, M_n ≈ e^K for all n, so that Ψ_c ≈ e^{N_d K}. Then ρ_A ∝ Φ_A† (e^{N_d K})† e^{N_d K} Φ_A, whose spectrum is approximately {e^{N_d u_1}, ..., e^{N_d u_l}, 0, ..., 0}, where the matrix K + K† is diagonalized at the last step and its eigenvalues {u_1, u_2, ..., u_l} are generically not identical. The zeros appearing at the last step result from (7.5) and the fact that Φ_A and Φ_Ā are not square matrices. We define a quantity evaluating the non-flatness of the ES, which vanishes if and only if ρ_A has a flat ES. Taking the small-N_d limit while keeping the u_i finite supports our assertion at the beginning. We thus find a positive correlation between the non-flatness of the ES and λ_c when d_c is small, since from (4.17) d_c = arctanh(λ_c); and it is negatively correlated with κ_c, according to figure 10 and subsection 4.1.
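Under the diagonal approximation above, the dependence of the ES on the thickness N_d can be illustrated with a small sketch (the sample eigenvalues u_i and the spread measure are our assumptions, not the paper's definition of the non-flatness quantity):

```python
import math

def es_spectrum(u, n_d):
    """Nonzero part of the ES of rho_A when every periodic tensor chain is
    M_n ≈ e^K: eigenvalues proportional to exp(n_d * u_i), with u_i the
    eigenvalues of K + K^dagger; the extra zeros of rho_A are omitted."""
    w = [math.exp(n_d * ui) for ui in u]
    z = sum(w)
    return [wi / z for wi in w]

def spread(p):
    """A simple proxy for the non-flatness of a normalized spectrum:
    zero exactly when the ES is flat."""
    return max(p) - min(p)

u = [0.3, 0.1, -0.2]                # assumed sample eigenvalues of K + K^dagger
flat = spread(es_spectrum(u, 0.0))  # vanishing thickness -> flat ES
bent = spread(es_spectrum(u, 5.0))  # larger thickness -> strongly non-flat ES
```

The spread vanishes at N_d = 0 and grows monotonically with N_d, mirroring the claimed correlation between thickness and non-flatness.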

The reduced interior angle of CP tensor chain and ES
In the previous subsections we have shown the relation between the flatness of the ES and the structure of CP tensor chains under the action of the greedy algorithm. In this subsection we show that the flatness of the ES can be judged from the value of κ_c. Specifically, we find that the larger κ_c is, the stronger the ability of QEC, while the ES becomes flat more easily, as shown in table 3. There we further introduce three quantities κ_h, κ_1 and κ_0, which are determined by the {b, a} tiling. Because of (2.2), the relation κ_h < κ_1 < κ_0 always holds. If κ_c ∈ (1, κ_h), it turns out the network is not able to implement QEC and has a non-flat ES, as indicated in figures 14 and 18. If κ_c ∈ (κ_h, κ_1), then the network can implement QEC and has a non-flat ES, as shown in figures 13 and 17. If κ_c ∈ [κ_1, κ_0), the ability of QEC becomes stronger but the ES becomes "mixed", as shown in figures 15 and 19. Finally, if κ_c = κ_0, the quality of QEC becomes better still but the ES has to be flat, which is exactly the property of perfect tensors, as shown in figures 12 and 16. Correspondingly, we may propose a geometric quantity in H² space which plays a role similar to that of κ_c in the tensor network: the geodesic curvature λ_c of the CP curve. Given a tiling, λ_c can be calculated from κ_c. A schematic relation between λ_c and QEC and ES is also illustrated in table 3. However, we do not yet have general expressions for the bounds λ_0 and λ_1 corresponding to κ_0 and κ_1, respectively.4

4 The main difficulty probably results from the specification of a unique CP curve corresponding to a CP tensor chain: tensor chains are discrete, while curves are continuous. To assign a unique curve, one has to impose further conditions, such as requiring that the CP curve have the maximal value of geodesic curvature, which is difficult to handle in practice for a general tiling.

Until now, we have constructed a general framework for tensor networks with tensor constraints, and developed a generalized greedy algorithm to describe the property of

Table 3. λ_c and κ_c classify the properties of QEC and ES in tensor networks; the columns distinguish non-QEC from QEC, and non-flat (Non-FES), mixed (Mixed-ES) and flat (FES) entanglement spectra.
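The interval logic of table 3 can be summarized in a small decision sketch (the function and its return labels are ours; κ_h < κ_1 < κ_0 are the tiling-dependent thresholds of the text):

```python
def classify(kappa_c, kappa_h, kappa_1, kappa_0):
    """Classify a tensor network by its CP reduced interior angle kappa_c,
    following the ranges of table 3 (assumes 1 < kappa_h < kappa_1 < kappa_0)."""
    if kappa_c < kappa_h:
        return ("no QEC", "non-flat ES")      # kappa_c in (1, kappa_h)
    if kappa_c < kappa_1:
        return ("QEC", "non-flat ES")         # kappa_c in (kappa_h, kappa_1)
    if kappa_c < kappa_0:
        return ("stronger QEC", "mixed ES")   # kappa_c in [kappa_1, kappa_0)
    return ("QEC", "flat ES")                 # kappa_c = kappa_0 (perfect tensors)
```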
critical protection. In the remainder of this paper, we will provide detailed proofs of the quantitative relation between the CP tensor chain and QEC as well as the ES, and finally complete the classification of tensor networks as illustrated in table 3. These statements can be rephrased into the following propositions, which we now prove separately.
7.4.1 κ_c = a/2 ⇒ flat ES

Based on the discussion in subsection 5.2, we will prove the flatness of the ES by showing that any directed cut appearing in the process of the greedy algorithm is unprotected, so that the greedy algorithm will not stop until all the tensors are absorbed.
To prove that a directed cut in the process of the greedy algorithm is unprotected, one needs to find an unprotected tensor chain connected to the cut. Recall that the directed cuts in the greedy algorithm may have many disconnected components. We first prove a proposition for a cut containing the structure of twigs or loops, which will greatly simplify the rest of the proofs.
Proof. Denote the sequence of nodes [N_1, ..., N_k] as L_N. We assume that all the nodes in L_N are distinct; otherwise we just replace N_1 and N_k by any two nodes which are identical, and the following proof is still valid.
When k = 1, it is only possible that the connected component is a single node N_1; then the tensor chain 0_a on N_1 is connected to C. Since 0_a ∈ S_D, C is unprotected. When k = 2, the shape of L_N is a twig and N_1 is the endpoint of the twig.
The tensor chain 1_{a−1} on N_1 is connected to C, so C is unprotected. For instance, in figure 20, N_27 = N_29 and the sequence L_N = [N_28, N_29] forms a twig; N_28 is the endpoint and M_s at N_28 is connected to the cut.
When k ≥ 3, the edges between N_i and N_{i+1}, together with the edge between N_1 and N_k in L_N, form a closed polyline; e.g., see the sequence [N_10, N_11, ..., N_16] in figure 20. We define the region enclosed by the polyline as Y, which consists of F elementary polygons, E edges and V nodes (vertices), satisfying Euler's formula V − E + F = 1. Let the reduced interior angle of Y at N_i be x_i for i ∈ {1, 2, ..., k}. Combining Euler's formula with the angle relations of the tiling, the nodes [N_1, N_2, ..., N_{k−1}] form a tensor chain M = (* * ⋯ * ; n_1 n_2 ⋯ n_{k−1}) connected to C. From Theorem 9, M is unprotected and thus C is unprotected.
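As a consistency check of the Euler relation used here, consider the hypothetical special case of a region Y built as a strip of F elementary b-gons, each glued to the next along a single shared edge:

```python
def strip_counts(b, F):
    """Vertex and edge counts for a strip of F b-gons glued in a row, each
    sharing exactly one edge with the next (a simple instance of region Y).
    The first polygon contributes b vertices and b edges; every further
    polygon adds b - 2 vertices and b - 1 edges."""
    V = b + (F - 1) * (b - 2)
    E = b + (F - 1) * (b - 1)
    return V, E

# Euler's formula for a simply connected planar region: V - E + F = 1
```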
In conclusion, when κ_c ≥ b/(b−2), any cut C containing twigs or loops must be unprotected. Now we can prove the main proposition.

Proof. Obviously, κ_c = a/2 > b/(b−2). Proposition 0 is applicable to this case, and the branches forming twigs or loops in a cut will be absorbed by the greedy algorithm. Taking the cut in figure 20 as an example, we claim that the nodes in {N_8, N_9, ..., N_17}, {N_27, N_28, N_29} and {N_1, N_2, ..., N_7} will be absorbed.

Figure 27. The "directed sum" of two directed cuts C_A (purple) and C_Ā (red) consists of two directed closed curves C_I (green) and C_Ī (yellow). The region I corresponding to C_I is filled in green.

As a result, we can now focus on the case that the cut C consists of a single connected component bounded by ∂A, denoted as C = [N_1, N_2, ..., N_l]. Furthermore, the nodes in C are distinct.
With any choice of a single interval A on the boundary of a tensor network Ψ, we apply the greedy algorithm starting from A and from Ā simultaneously, so two cuts, C_A and C_Ā, appear in Ψ at the same time. We will prove by mathematical induction that when κ_c = a/2, either of these two cuts is unprotected until all the tensors are absorbed. Now we consider the configuration of C_A and C_Ā. Both of them are connected to ∂A. Besides, they may overlap in some places, where their directions are opposite, as illustrated in figure 27. We then define the "directed sum" of C_A and C_Ā as their union excluding the overlapped parts. The directed sum consists of one or more closed curves, as shown in figure 27. Select one of them and denote it as C_I, which is a directed cut as well. Let the sequence of nodes corresponding to C_I be (N_1, N_2, ..., N_k). Connecting these nodes with edges in order encloses a region I, which is a union of elementary polygons and edges. At node N_i, we denote the reduced outer angle of I as y_i and the number of edges cut by C_I as n_i; obviously, y_i = n_i + 1. The Gauss-Bonnet theorem then constrains the sum of the angles y_i.

Obviously, the edges cut by C are divided into two parts: one part is cut by C_A and the other by C_Ā. Without loss of generality, we suppose that C_A runs from N_1 to N_{u+1}. Moreover, l_1 edges of N_1 are cut by C_A and l̄_{k+1} edges by C_Ā, while for N_{u+1}, l_{u+1} edges are cut by C_A and l̄_{u+1} edges by C_Ā. Obviously, l_1 + l̄_{k+1} = n_1 and l_{u+1} + l̄_{u+1} = n_{u+1}. We further define l_i = n_i for i = 2, 3, ..., u and l̄_i = n_i for i = u+2, u+3, ..., k. Then the tensor chain M_A = (* * ⋯ * ; l_1 l_2 ⋯ l_{u+1}) is connected to C_A and the tensor chain M_Ā = (* * ⋯ * ; l̄_{u+1} l̄_{u+2} ⋯ l̄_k) is connected to C_Ā. From (7.25) and Theorem 9, at least one of M_A and M_Ā is unprotected, so at least one of C_A and C_Ā is unprotected. Thus the greedy algorithm keeps going at least until Area(H) = 0, which means the two cuts C_A and C_Ā overlap such that all tensors are absorbed. Then the ES is flat.

Proof. We construct a specific interval A with a "minimal secant geodesic" by the following steps. Start from the midpoint of an edge between two un-contracted edges on the boundary; then connect this point with the midpoint of the edge in the same polygon which has the farthest distance to this point. Next, choose the neighboring polygon of this new midpoint in the bulk and connect the midpoint with the farthest midpoint in that polygon. Repeat these steps until the trajectory reaches the boundary of the network. The trajectory forms a geodesic called the minimal secant geodesic, denoted by G_m. It should be noticed that for a polygon with an odd number of edges there are two midpoints farthest from the specified midpoint, one to the left and the other to the right, as shown in figure 29. We need to choose these two midpoints in turn in the above steps, as shown in figure 20. A minimal secant geodesic G_m divides the boundary of the network into two parts A and Ā, which have almost the same size.
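The alternating choice of farthest midpoints can be sketched as follows (the local edge labeling and the function are our assumptions, for illustration only):

```python
def exit_edges(b, n_steps):
    """Edge choices along the minimal secant geodesic: entering each b-gon
    through its edge 0 (local labeling), exit through the farthest edge.
    For even b there is a unique farthest edge, b/2; for odd b the two
    farthest edges, (b-1)/2 and (b+1)/2, are chosen alternately."""
    choices, left = [], True
    for _ in range(n_steps):
        if b % 2 == 0:
            choices.append(b // 2)
        else:
            choices.append((b - 1) // 2 if left else (b + 1) // 2)
            left = not left
    return choices
```

For a {7, 3} tiling the exit edge alternates between the two farthest positions, while for {8, 3} it is always the opposite edge.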
We will show that for such a division the corresponding ES is flat, by proving that the greedy algorithm starting from either A or Ā does not stop until the sequence of cuts reaches G_m. The proofs for A and Ā are parallel, so we only prove the case for A.

Similarly, thanks to Proposition 0, we focus on the case that the cut C consists of a single connected component attached to ∂A, denoted as C = [N_1, N_2, ..., N_l] with distinct nodes.
We give G_m a direction such that Φ is on its right-hand side. Then G_m becomes a directed cut, denoted as [N_1, N_2, ..., N_m]. By definition, these nodes are distinct.
When C and G_m do not overlap, the edges connecting the nodes in C and G_m form at least one polygon. In general, they may enclose one or more polygons, as illustrated in figure 20.
We pick any one of them and label it Y. Let the set of nodes on the boundary of Y be the union of [N_{p+1}, N_{p+2}, ..., N_{p+u}] in C and [N_{q+1}, N_{q+2}, ..., N_{q+v}] in G_m, where N_{p+1} and N_{q+1} are neighbors, as are N_{p+u} and N_{q+v}. We naturally have u ≥ 2 after excluding the cases in Proposition 0. Let the reduced interior angle of Y at N_{p+i} be x_i for i ∈ {1, 2, ..., u}, and the reduced interior angle of Y at N_{q+j} be x̄_j for j ∈ {1, 2, ..., v}. Similar to the relation (7.21) in the proof of Proposition 0, these angles satisfy a constraint for Y. Suppose that the part [N_{q+1}, N_{q+2}, ..., N_{q+v}] crosses w elementary polygons. Due to the special construction of G_m, the angles x̄_j obey a further relation; plugging it into (7.29), we obtain the desired bound.
In conclusion, once C ≠ G_m, C is unprotected and the greedy algorithm progresses, so the tensors between A and G_m will be absorbed. In parallel, the tensors between Ā and G_m will be absorbed under the greedy algorithm starting from Ā. Finally, the sequence of cuts reaches G_m, leading to a flat ES.

Proof. Consider a single interval A and its complement Ā on the boundary. There exists a continuous line, called G, connecting the two endpoints of A with a minimal number of cuts on the edges of the network. The line G divides the whole network into two sub tensor networks (see figure 30).
Notice that the nearest neighboring tensors of the line G form two tensor chains, which we call M_A and M_Ā, respectively. As an example, the skeletons of these two tensor chains are marked in figure 30. We set all the indices associated with the edges cut by the line G as upper indices, while the others are lower indices.
Assume that M_A has k_A nodes and M_Ā has k_Ā nodes, and let the number of elementary polygons crossed by the line G be F. Then we have two equations:

κ(M_A) k_A + κ(M_Ā) k_Ā = bF,   k_A + k_Ā = (b − 2)F + 2.   (7.33)

Now we provide a proof by contradiction. Assume that the ES were flat; then M_A, M_Ā ∈ S_D. According to Lemma 7, we have the bound (7.34). Substituting (7.34) into (7.33) gives an inequality. To simulate a real AdS spacetime, the number of layers in a network is expected to be large enough; then for a large interval A, F ≫ 1, and the inequality cannot be satisfied, which contradicts the assumption.

8 Tensor networks of asymptotic H² space

As we know, holographic duality can be generalized to asymptotically AdS spacetime, so it is very desirable to consider tensor networks in asymptotically AdS spacetime. Since the time slice of static asymptotically AdS_3 spacetime is asymptotic H², we generalize the above strategy to tensor networks constructed by tiling asymptotic H² space. For simplicity, we only consider asymptotic H² space with rotational symmetry around the center. Due to the C-theorem [34], the curvature in the infrared (IR) region should be less than the curvature in the ultraviolet (UV) region.
As an example, we first construct some tensor networks with {7, 3} tiling in the UV and {8, 3} tiling in the IR, which are discretizations of an AdS-AdS domain wall, as shown in figure 31. In each tensor network, the tensors T and E satisfy a set of tensor constraints S_c. Notice that the lengths of all the edges should be the same, with the value determined by the asymptotic tiling, i.e. the value of P for the {7, 3} tiling in (2.3). The UV part and the IR part of a CP tensor chain have their own independent forms, determined by the local tiling, while they should be connected with each other on the interface between UV and IR. UV-IR matching is a common phenomenon in holography, such as the matching between two RT surfaces of holographic entanglement entropy at the intermediate scale [35]. We find that the action of the greedy algorithm does not change qualitatively in comparison with the case of pure AdS space, as illustrated in figure 31. Therefore, the classification of these tensor networks according to QEC and ES remains unchanged.
Based on the above examples, we now analyze tensor networks with {b, a} tiling in the UV and {b′, a} tiling in the IR, where b′ > b is required by the C-theorem. The tensor constraints S_c are globally specified. It is reasonable to assume that the two regions with different tilings are large enough. First, the CP reduced interior angle κ_c remains uniform in these two regions, since it depends only on S_c and a. Second, from (7.16), κ_0 remains uniform, while both κ_h and κ_1 in the {b, a} tiling are greater than those in the {b′, a} tiling, denoted as κ_h^UV > κ_h^IR and κ_1^UV > κ_1^IR. It means that the ES of the tensor network with {b′, a} tiling becomes flat more easily than that of the one with {b, a} tiling.

• … dense in the tensor network. So there still exists an interval A such that its pair of cuts C_A and C_Ā is protected. Then Proposition 2 is still valid.

• When κ_c ≥ κ_1^UV, one can construct the minimal secant geodesic, which divides the boundary into two intervals of nearly equal size. Such a minimal secant geodesic passes through the region with {b′, a} tiling. Since κ_c ≥ κ_1^IR, Proposition 3 is still valid in this region, so the interval corresponding to such a minimal secant geodesic has a reduced density matrix with a flat ES.
• When κ_c < κ_1^UV, the region that is not absorbed by the greedy algorithm takes the shape illustrated in figure 33, where its thickness decreases in the IR while remaining nonzero in the UV. So the ES is still non-flat.
In conclusion, the property of QEC and ES is determined by the structure of tensor network in the UV part of the domain wall.

Conclusions and outlooks
In this paper we have presented a general framework for tensor networks with tensor constraints based on the tiling of H² space. A notion of critical protection based on the tensor chain has been proposed to describe the behavior of tensor networks under the action of the greedy algorithm. In particular, a criterion has been developed with the help of the average reduced interior angle of the CP chain, such that for a given tensor network the ability of QEC and the flatness of the ES can be judged in a quantitative manner. We have

also demonstrated many examples of tensor networks and discussed their properties of QEC and ES. In general, once the ability of QEC of a tensor network becomes stronger, its ES becomes flat more easily, and vice versa. In contrast, it is fascinating to notice that AdS spacetime is endowed with these two holographic features in perfect balance. Currently it is still challenging to construct tensor networks which capture all the holographic features of AdS spacetime; what we have found in this paper may shed light on this issue. First, we have learned that the notion of critical protection provides a description of the limit of information transmission with full fidelity. In the case that the CP curve H_c is a circle, i.e. λ_c L > 1 (where we have restored the AdS radius L), the information in the interior of H_c can be transmitted to its surface without loss, while for a circle H larger than H_c, its interior information cannot be transmitted to its surface without loss. So we can say that H_c is the maximal boundary which can holographically store the interior information [36, 37]. Thus, a tensor network which captures the feature of QEC as AdS space does must not contain circular CP curves, which requires λ_c L ≤ 1. Furthermore, if we intend to construct a single tensor network which exhibits both QEC and a non-flat ES, the tensor networks with κ_c ∈ (κ_h, κ_1) seem to have the most likelihood of approaching this goal.
Next we address some open issues that should be crucial for exploring the role of tensor networks with constraints in the holographic approach. Firstly, because of the chain structure of the tensor constraints, in our present framework we have investigated QEC and ES only for a single interval on the boundary, similar to the setup for the hyperinvariant tensor network in [25]. It is an open question whether QEC can be realized for multiple intervals on the boundary, as investigated in networks with perfect tensors or random tensors [18, 19, 22]. Actually, our preliminary investigation reveals that if the number of intervals is large enough, it would be very hard to realize QEC with non-flat ES for multiple intervals, because it involves constructing tensor constraints at scales as large as the entanglement wedge of the multiple intervals, which is rather complicated. We would like to leave this issue for further investigation. The generalization to higher-dimensional spaces based on our framework is also complicated. Taking H³ space in AdS₄ spacetime as an example, we expect a generalization from the tensor chain to a tensor surface, i.e. a 2-dimensional tensor network, on which one can similarly impose constraints. If the geometric description still works, we expect that the geodesic curvature λ of a curve of constant curvature would be generalized to the extrinsic curvature of a surface with constant extrinsic curvature. In higher dimensions, a geometric description in continuous space might be more effective than a discrete description in networks.
Secondly, in order to simulate AdS space, it is desirable to send the number of layers of the tensor network to infinity; then the area of its boundary goes to infinity as well. In this limit, the treatment of the boundary effect on tensor constraints is subtle. When the CP curve is a hypercircle with λ_c L < 1, it has a constant distance to the geodesic, d_c = L arctanh(λ_c L). The CP curve is unprotected once the boundary effect is considered, so the boundary effect scales as d_c, which is independent of the number of layers. When d_c/L is small, the boundary effect becomes negligible in this limit compared to the infinite area of the boundary. However, when d_c/L is very large, such as for κ_c → κ_h + 0, the boundary effect cannot be neglected.
Finally, we are concerned with how to reproduce the Cardy-Calabrese formula for the Rényi entropy (1.1) in the framework of tensor networks. It is known that the Rényi entropy depends not only on the tiling and tensor constraints, but also on the matrix elements of the tensors, such as the elements of the tensors U and Q in appendix B. In addition, we are interested in the possible relation between the CP curve and the gravity dual of the Rényi entropy. In [38], the nth-order holographic Rényi entropy can be calculated from the area of a cosmic brane with tension T_n, namely

n² ∂_n [ (n − 1)/n · S_n ] = Area(Cosmic Brane_n) / (4 G_N).   (9.1)

The cosmic brane backreacts on the geometry at order T_n G_N, where G_N is the Newton constant. However, if we simply set T_n G_N → 0, all the cosmic branes become probe branes.5 Then, for a given subsystem on the boundary, those cosmic branes would have the same area and a flat entanglement spectrum appears. According to subsection 7.2 of our paper, the entanglement spectrum becomes non-flat when d_c/L is large and flat when d_c/L is small. It would be interesting to explore the possible relation between T_n G_N and d_c/L in the light of this observation.

We define a new coordinate ζ = x + iz to rewrite the metric as (A.2). The isometry group of the H² geometry is SL(2, R), which means the form of the metric is unchanged under the coordinate transformation ζ → (αζ + β)/(γζ + δ), where the real parameters α, β, γ, δ satisfy αδ − βγ = 1.

A.2 Curves of constant curvature
One key notion frequently used in this paper is the curve of constant curvature (CCC) in H² space. The geodesic curvature λ of a curve is defined with respect to an affine parameter s; the curves with λ = 0 are geodesics in H² space. The geodesic distance between any two points with coordinates (x₁, z₁) and (x₂, z₂) can be derived as

d = arccosh[ ((x₁ − x₂)² + z₁² + z₂²) / (2 z₁ z₂) ].

There are three kinds of CCC in H² space, namely the circle, the horocircle and the hypercircle, as illustrated in figure 34.
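As a quick numerical companion to these formulas (unit curvature radius; the helper names are ours), the geodesic distance and the curvature-based classification of the three CCC types can be sketched as:

```python
import math

def h2_distance(p1, p2):
    """Geodesic distance between (x1, z1) and (x2, z2) in the upper
    half-plane model of H^2 with unit curvature radius."""
    (x1, z1), (x2, z2) = p1, p2
    return math.acosh(((x1 - x2) ** 2 + z1 ** 2 + z2 ** 2) / (2.0 * z1 * z2))

def classify_ccc(lam):
    """Classify a curve of constant geodesic curvature lam >= 0:
    circle (lam > 1, radius r = arccoth(lam)), horocircle (lam = 1),
    hypercircle (lam < 1, distance to axis d = arctanh(lam))."""
    if lam > 1.0:
        return "circle"
    if lam == 1.0:
        return "horocircle"
    return "hypercircle"
```

For instance, the points (0, 1) and (0, e) on the z-axis are at unit geodesic distance, since (1 + e²)/(2e) = cosh(1).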
A circle is a curve whose geodesic distance to a given point (the center of the circle) is a constant r. The geodesic curvature of a circle with radius r is λ = coth(r).
A horocircle (or horocycle) is a curve whose normal geodesics all converge asymptotically to its center in the same direction, so it is also called limit circle. The geodesic curvature of a horocircle is equal to 1.
A hypercircle (or hypercycle) is a curve whose points have the same orthogonal distance d from a given geodesic (called its axis), so it is also called an equidistant curve. The geodesic curvature of a hypercircle is λ = tanh(d). (A.5) Of course, a geodesic is a hypercircle with d = 0. In summary, one can classify all the CCCs in H² space by their geodesic curvature: λ > 1 for circles, λ = 1 for horocircles and λ < 1 for hypercircles.

The tensor R can be written as a matrix in which the two indices μν (ρσ) are grouped together. The elements of R are R_{μνρσ}, which satisfy R_{μνρσ} = R_{ρσμν} = R_{νμσρ}, Σ_{ρσ} R_{μνρσ} R*_{μ′ν′ρσ} ∝ δ_{μμ′} δ_{νν′}, and Σ_{νσ} R_{μνρσ} R*_{μ′νρ′σ} ∝ δ_{μμ′} δ_{ρρ′}.

Open Access. This article is distributed under the terms of the Creative Commons Attribution License (CC-BY 4.0), which permits any use, distribution and reproduction in any medium, provided the original author(s) and source are credited.