Petz reconstruction in random tensor networks

We illustrate the ideas of bulk reconstruction in the context of random tensor network toy models of holography. Specifically, we demonstrate how the Petz reconstruction map works to obtain bulk operators from the boundary data by exploiting the replica trick. We also take the opportunity to comment on the differences between coarse-graining and random projections.


Introduction
The emergence of bulk spacetime geometry from non-gravitational field theoretic degrees of freedom in the AdS/CFT correspondence can be understood by viewing the holographic map from the bulk to the boundary as a quantum error correcting code [1,2]. The essential idea is that while the Hilbert space of the field theory is isomorphic to the full string theoretic quantum gravitational Hilbert space, semiclassical gravitational physics has access to a much smaller subspace of states. These 'code subspace' states, corresponding to excitations of the vacuum (or other geometric states) by a few, O(c_eff), perturbative quanta, are to be viewed as the quantum message one wishes to encode into a bigger Hilbert space. This encoding map can moreover be viewed as a noisy quantum channel.
The question of recovering local bulk geometry can be rephrased in this framework as constructing a recovery map for this channel, one that allows us to reconstruct from field theory data, either the bulk state, or better yet (in the Heisenberg picture) local bulk operators. The latter are especially interesting given that the standard reconstruction of local bulk physics exploits only the bulk causal structure [3,4] through the extrapolate dictionary [5,6]. It however has become quite clear thanks to the holographic entanglement entropy proposals [7,8] that one should be able to reconstruct operators in a larger domain of the bulk, the entanglement wedge [9][10][11].
While it has been argued that reconstructing operators in the entanglement wedge involves modular evolution [12,13], an alternative viewpoint exploiting quantum recovery maps was presented in [14]. The idea, as elaborated further in [15], is that the Petz map [16] and its twirled generalization [17] provide universal, general-purpose recovery maps which suffice to reproduce the bulk quantum state with fidelity F ∼ 1 − O(c_eff^{−1/2}). (We use c_eff to denote the effective central charge of the field theory; in the gravitational setting it can be related to the AdS scale in Planck units.) Explicit reconstructions have been analyzed in [18,19] using modular flow, and in [20] using the Petz map. Some of these discussions have explicitly demonstrated the non-trivial encoding of the bulk in the boundary, especially in the context of black hole evaporation, through the replica wormhole contributions [20,21] (which justify the quantum extremal surface [22] prescription). While there are structural similarities between the modular evolved operators and the Petz reconstruction, the precise connection between the two has yet to be fully fleshed out.
The goal of this short note is to explore the properties of the Petz map in a simple toy model of holography, viz., random tensor networks (RTN) [23]. These tensor networks involve discrete degrees of freedom, with the bond dimension χ a proxy for the central charge, c_eff ∼ log χ. Unlike perfect tensor codes, which work with a fixed set of operators/tensors [24], the random tensor construction involves projecting onto suitably entangled states (analogous to PEPS tensor networks, cf., [25]). As demonstrated in [23], and argued for more generally in [26], these discrete networks share many features of the holographic map, in particular saturating the minimal cut rule for computing von Neumann entropy, analogous to the RT/HRT formulae in gravitational systems [7,8]. One can moreover understand the flat entanglement spectra of these models by invoking a constrained variational problem in gravity, wherein one works with fixed-area states of the bulk gravitational path integral [27,28]. Our motivation is to provide a simple example (analogous to the discussion in [20]) of the Petz reconstruction. In addition, we will comment on some of the features of the RTNs and holographic codes viewed as quantum channels.
The outline of this article is as follows: in §2 we give a quick overview of the RTNs and the use of the replica trick to compute entropies. In §3 we illustrate how the Petz reconstruction works in the case of the RTNs, demonstrating along the way, the use of replicas to show the matching of bulk and boundary observables. Finally, in §4 we comment on various features of holographic encodings of the bulk geometry viewed from the perspective of quantum channels.

Random tensor networks and replicas
We begin with a quick overview of RTNs [23]. Consider an arbitrary graph with a vertex set {x}. At each vertex we have a Hilbert space H_x = ⊗_{k=1}^{n_x} H_{x,k} which admits a tensor product decomposition into factors H_{x,k}. For two vertices x and y that are connected by a link, we pick some sub-factors of the vertex Hilbert spaces, and use them to define a link Hilbert space H_xy = (⊗_{k=1}^{m_x} H_{x,k}) ⊗ (⊗_{l=1}^{m_y} H_{y,l}) with m_x ≤ n_x and m_y ≤ n_y, respectively. In addition we will allow for some boundary vertices which lie at the edge of the graph and have a Hilbert space H_∂. These will likewise have links to some bulk vertices, with a similar construction defining H_x∂.
Given this set-up we can define the tensor network in two equivalent ways. We first lay out maximally entangled states τ_xy in the link Hilbert space H_xy ⊗ H_yx along each link, including the dangling boundary ones. We additionally choose to insert a bulk operator Ô (or state ρ̂) on a set of bulk vertices by acting with an appropriate operator on a collection of vertices. Finally, on the state thus prepared on ⊗_x H_x, we make random measurements at each bulk vertex by projecting onto a Haar random state |V_x⟩⟨V_x|. We can obtain the state |V_x⟩ by acting on a reference state in H_x with a Haar random unitary, e.g., |V_x⟩ = U_x |0⟩. A basic three node network is illustrated in Fig. 1a.

Figure 1: (a) The map from the bulk to the boundary in a random tensor network, illustrated for a three vertex graph. At sites x and y we have three qubits (blue circles), while at site z we have two qubits. In addition we have two boundary qubits shown at the dangling ends. The black lines with arrowheads between qubits denote an EPR state in the tensor product space of the linked qubits. At vertex z we place a state φ_z of the two qubits, while at vertices x and y we include random states |v_x⟩ and |v_y⟩, respectively, of the three qubits located there. These are collectively encoded by the triangles: a blue one stands for an operator insertion and an orange one encodes a random state of the qubits. The red lines indicate contractions between the qubits where we insert operators and those at the other vertices. The network defines a map from the bulk qubits to the boundary qubits, piping the state φ_z onto a boundary state. (b) We can abstract the information of the network into a few essential elements to indicate the bulk to boundary map. We extend (a) to include the conjugate bra states, so as to denote maps from bulk operators/states ρ̂ onto boundary operators ρ_AĀ. In the process we concatenate operator insertion vertices into a square (blue), while the vertices with random projectors Π_V are depicted by orange triangles. The black lines continue to denote the maximally entangled state between subfactors; red lines denote contractions between the subfactors where we insert the operator and those where we perform random projections. Dangling lines at the outer edge lead to boundary degrees of freedom. In this presentation, for a boundary bipartitioning we can truncate to three effective bulk vertices: one where the operator ρ̂ is inserted, and two others (a and ā) which correspond to the two boundary degrees of freedom (A and Ā).
We will refer to the tensor indices associated with the vertex set in the bulk of the graph where we insert operators as "bulk degrees of freedom", while those corresponding to uncontracted, dangling indices from the vertex set are our "boundary degrees of freedom". Operationally, we can take the tensor factors for the bulk Hilbert space to be smaller in dimension compared to those where we insert the random projectors (to allow for a saddle point analysis). The network then defines a map from the bulk to the boundary, mapping bulk quantum states ρ̂ onto boundary states via

  M(ρ̂) = Tr_bulk[ Π_V ( ρ̂ ⊗ (⊗_⟨xy⟩ τ_xy) ) ] ,   Π_V = ⊗_x |V_x⟩⟨V_x| ,   (2.1)

where Π_V is our random measurement. An alternative perspective is to first start with random states prepared at each vertex in H_x, and thence project them onto the maximally entangled states along the links. The construction of the network and the boundary states is illustrated in Fig. 1. We will view the network as preparing a quantum state with no information about temporal evolution. Consequently, our statements should be viewed as being applicable to states on a single Cauchy slice in the geometric set-up.
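As a concrete illustration of the bulk-to-boundary map, the following numpy sketch builds the smallest possible network: a single bulk vertex carrying the code factor, EPR-linked to two boundary legs. Contracting the Haar random state at the vertex against the links reduces the (unnormalized) map, up to overall normalization, to conjugation by a reshaped random tensor. The dimensions and seed are arbitrary illustrative choices, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
d_b, chi = 2, 8      # bulk (code) dimension and bond dimension (toy values)

# Haar random state at the single bulk vertex, H_x = H_b ⊗ H_1 ⊗ H_2
v = rng.normal(size=d_b * chi * chi) + 1j * rng.normal(size=d_b * chi * chi)
v /= np.linalg.norm(v)

# Contracting |V_x><V_x| against rho_hat and the EPR links reduces the
# (unnormalized) bulk-to-boundary map to conjugation by a reshaped tensor:
T = v.conj().reshape(d_b, chi * chi).T    # T : H_b -> H_A ⊗ H_Abar

def M(rho_bulk):                          # unnormalized map of the text
    return T @ rho_bulk @ T.conj().T

def encode(rho_bulk):                     # normalize by hand, as in the text
    out = M(rho_bulk)
    return out / np.trace(out).real

rho_bulk = np.diag([0.75, 0.25]).astype(complex)
rho_bdy = encode(rho_bulk)                # a normalized boundary density matrix
```

The normalization by hand is precisely the source of the non-linearity of the map discussed later in the text.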
We will bipartition the boundary degrees of freedom into A and Ā. Associated with these will be a set of bulk vertices, collectively a and ā, respectively, with dim(H_a) = d_a and dim(H_ā) = d_ā. The bulk Hilbert space is viewed as the code subspace of the boundary and has dimension d_code = d_a × d_ā. In the geometric setting the state space H_a would correspond to bulk states in the homology surface R_A ≡ a, a Cauchy slice of the entanglement wedge E_A of A. The map M(ρ̂) and its restriction to a subregion of the boundary M_A(ρ̂) ≡ M(ρ̂)|_A (obtained by partial tracing) are not normalized, i.e., the maps are not trace-preserving. We will choose to convert the outputs to boundary density operators, normalizing by hand, to obtain ρ and ρ_A, respectively. This will lead to a normalization factor of Tr(M(ρ̂)) in the computations below. We will revisit the nature of the bulk to boundary map later in our discussion.
Let us quickly review the computation of von Neumann and Rényi entropies for ρ_A using the replica method, cf., [23] for details. We compute traces of powers of ρ_A (with the aforementioned normalization) by working in the covering space, unfolding the powers of reduced density operators into the computation of the expectation value of an observable, viz.,

  Tr(ρ_A^n) = Tr[ M_A(ρ̂)^n ] / ( Tr[M(ρ̂)] )^n = Tr[ M(ρ̂)^{⊗n} X_A^{(n)} ] / Tr[ M(ρ̂)^{⊗n} ] .   (2.2)

The first equality is the definition, and in writing the second we used the fact that the trace over a set of powers of an operator can be computed by working in an n-fold tensor product, together with a suitable projection on the symmetric subspace achieved by the insertion of X_A^{(n)}. On the n-fold tensor product we have a natural S_n permutation group action, which for the purposes of computing cyclic trace invariants introduces a cyclic permutation of n elements. This defines the operator X_A^{(n)} = ⊗_{x∈A} z^{(n)} above, with z^{(n)} ∈ Z_n a cyclic permutation. Note that the cyclic permutation is inserted in the boundary region A alone, since Ā has already been traced over to compute ρ_A.
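The replica unfolding in (2.2) is easy to verify numerically for n = 2: the trace of the square of the reduced density matrix equals the expectation of the swap operator inserted on the A factors of two copies. A minimal numpy check (the dimensions are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
dA, dAb = 3, 4
d = dA * dAb

# a generic (normalized) boundary density matrix on H_A ⊗ H_Abar
X = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho = X @ X.conj().T
rho /= np.trace(rho).real

# direct evaluation: reduce to A, then trace the square
rho_A = np.trace(rho.reshape(dA, dAb, dA, dAb), axis1=1, axis2=3)
direct = np.trace(rho_A @ rho_A).real

# replica evaluation: Tr(rho_A^2) = Tr[(rho ⊗ rho) X_A], with X_A the swap
# acting on the two A factors and the identity on both Abar factors
XA = np.zeros((d * d, d * d))
for a1 in range(dA):
    for b1 in range(dAb):
        for a2 in range(dA):
            for b2 in range(dAb):
                row = ((a1 * dAb + b1) * dA + a2) * dAb + b2
                col = ((a2 * dAb + b1) * dA + a1) * dAb + b2
                XA[row, col] = 1.0
replica = np.trace(np.kron(rho, rho) @ XA).real
```

The two evaluations agree up to floating-point roundoff, which is the content of the second equality in (2.2) for n = 2.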
The main advantage of the RTNs is that the replica calculation can be done quite efficiently by first averaging over the random projectors. Denoting this average by ⟨ · ⟩_V, we can use the result

  ⟨ (|V_x⟩⟨V_x|)^{⊗n} ⟩_V = (1/N_{n,x}) Σ_{g_x ∈ S_n} g_x ,   N_{n,x} = (d_x + n − 1)!/(d_x − 1)! ,   (2.3)

where d_x = dim(H_x) and g_x acts on H_x^{⊗n} by permuting the replica copies. This ends up mapping the problem to a spin model with S_n valued spins g_x at each vertex. The numerator of (2.2) maps to the computation of a partition function, with boundary conditions: insertions of the fixed cyclic permutation X_A^{(n)} along boundary spins in A and the identity element e in Ā. Therefore,

  ⟨ Tr[ M(ρ̂)^{⊗n} X_A^{(n)} ] ⟩_V = (1/N_n) Σ_{{g_x}} Tr[ (ρ̂ ⊗ τ)^{⊗n} g X_A^{(n)} ] ≈ (1/N_n) χ^{−(n−1)|γ_A|} Tr(ρ̂_a^n) + · · · ,   (2.4)

where g = ⊗_x g_x, N_n = ∏_x N_{n,x}, χ is the bond dimension, and the ellipses stand for subleading terms. γ_A is the minimal cut of the graph implementing a bulk bipartitioning into a and ā.
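The group averaging result (2.3) can be spot-checked by Monte Carlo for n = 2, where it reduces to the familiar statement that the Haar average of (|V⟩⟨V|)^{⊗2} is the normalized sum of the identity and the swap, (1 + SWAP)/(d(d+1)). A small sketch (dimension and sample count are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
d, samples = 3, 20000

# Monte Carlo estimate of the Haar average of (|V><V|)^{⊗2}
acc = np.zeros((d * d, d * d), dtype=complex)
for _ in range(samples):
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    v /= np.linalg.norm(v)          # normalized Gaussian vector = Haar random state
    P = np.outer(v, v.conj())
    acc += np.kron(P, P)
acc /= samples

# exact answer: sum over S_2 = {identity, swap}, normalized by N_{2,x} = d(d+1)
SWAP = np.zeros((d * d, d * d))
for i in range(d):
    for j in range(d):
        SWAP[i * d + j, j * d + i] = 1.0
exact = (np.eye(d * d) + SWAP) / (d * (d + 1))

err = np.max(np.abs(acc - exact))   # shrinks with the number of samples
```

The sum over the two permutation insertions is exactly what seeds the S_n spin model once the average is performed at every vertex.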
Since we are computing a spin-model partition sum with fixed boundary conditions, we can estimate the result for large bond dimensions by thinking of spin domains. For our boundary conditions, g_x = z^{(n)} ∈ Z_n for x ∈ A and g_x = e for x ∈ Ā, we propagate the boundary spins inwards into the graph and encounter a domain wall separating the two domains in an energy minimizing configuration. Generically, there will be multiple competing configurations with domain walls γ_A^i, serving as the bulk separatrix (the RTN analog of the bulk RT surfaces). Energy minimization picks out a unique preferred configuration in the spin model [23], and thus in the RTN serves to define the homology surfaces a and ā, for A and Ā, respectively. As one changes the relative dimensions of H_A and H_Ā, we will encounter phase transitions, with the minimum energy domain wall configuration switching between competing saddles. The ellipses in (2.4) refer to the contribution of these subleading saddles and are of relative order O(χ^{|γ_A^1| − |γ_A^2|}). The normalization factor in the denominator evaluates similarly, though now all boundary spins have the identity permutation g = e, resulting in ⟨ Tr[M(ρ̂)^{⊗n}] ⟩_V = N_n^{−1} (1 + · · ·), leading to

  S_A^{(n)} = (1/(1−n)) log Tr(ρ_A^n) ≈ |γ_A| log χ + S^{(n)}(ρ̂_a) .

The entanglement spectrum of these networks is flat (i.e., n independent) [23] and can be understood to correspond to fixed-area states in the geometric description [27,28].

Petz reconstruction of bulk states
The goal of bulk reconstruction is to construct an operator O_A supported on A, given a bulk operator Ô_a supported on the homology surface R_A = a of A. We can view the state ρ_A obtained from ρ̂ as the result of operating a noisy quantum channel, ρ_A = E(ρ̂). Our task is to construct a recovery map R, such that

  R ∘ E(ρ̂) = R(ρ_A) ≈ ρ̂_a .   (3.1)

We can then use the adjoint channel R† to find a map between boundary and bulk operators,

  O_A = R†(Ô_a) .

For general quantum channels the Petz map gives an ansatz for this recovery: in the Heisenberg picture for operators it reads

  O_A = E(σ)^{−1/2} E( σ^{1/2} Ô_a σ^{1/2} ) E(σ)^{−1/2} ,

where σ is a fixed fiducial state. Taking it to be the maximally mixed state τ achieves the desired reconstruction with error of O(c_eff^{−1/2}) [14,15]. For starters however, we will employ a simpler ansatz, dubbed "Petz-lite" in [20], which posits instead

  O_A = c_0 E(Ô_a) ,

with the coefficient c_0 fixed by demanding that the identity operator maps to the identity operator. One potential obstruction to using the Petz map in RTNs is the non-linearity in the bulk-boundary map (2.1), arising from the fact that we have to normalize the boundary state ρ_A by hand. Strictly speaking, we are dealing with a general quantum operation, since the map from bulk to boundary is not trace-preserving. We can nevertheless use the map M_A to construct analogs of Petz-like reconstruction maps for operators. We will first explain how to perform the reconstruction using the Petz-lite ansatz and then turn to the more general twirled Petz map.
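The Petz map with a maximally mixed reference is straightforward to implement numerically. The sketch below uses a Haar random isometry as a stand-in for the RTN encoding (an illustrative assumption, not the network construction itself), builds the channel E by tracing out Ā, and applies the Schrödinger-picture recovery R(ρ_A) = τ^{1/2} E†( E(τ)^{−1/2} ρ_A E(τ)^{−1/2} ) τ^{1/2}:

```python
import numpy as np

rng = np.random.default_rng(3)
d_code, dA, dAb = 2, 256, 2     # toy dimensions; recovery needs dA >> d_code*dAb

# Haar random isometry W : H_code -> H_A ⊗ H_Abar (proxy for the encoding)
G = rng.normal(size=(dA * dAb, d_code)) + 1j * rng.normal(size=(dA * dAb, d_code))
W, _ = np.linalg.qr(G)          # columns orthonormal: W† W = 1_code

def ptrace_Abar(M):             # partial trace over the Abar factor
    return np.trace(M.reshape(dA, dAb, dA, dAb), axis1=1, axis2=3)

def E(rho):                     # the channel to the A part of the boundary
    return ptrace_Abar(W @ rho @ W.conj().T)

def E_adj(X):                   # adjoint channel: E†(X) = W† (X ⊗ 1_Abar) W
    return W.conj().T @ np.kron(X, np.eye(dAb)) @ W

def psd_pow(M, p):              # fractional power on the support of a PSD matrix
    w, U = np.linalg.eigh(M)
    wp = np.array([x**p if x > 1e-12 else 0.0 for x in w.real])
    return U @ np.diag(wp) @ U.conj().T

tau = np.eye(d_code) / d_code   # maximally mixed reference state
S = psd_pow(E(tau), -0.5)

def petz(rho_A):                # R(rho_A) = tau^{1/2} E†(S rho_A S) tau^{1/2}
    return E_adj(S @ rho_A @ S) / d_code

# recover a pure code state
psi = rng.normal(size=d_code) + 1j * rng.normal(size=d_code)
psi /= np.linalg.norm(psi)
rho = np.outer(psi, psi.conj())
rec = petz(E(rho))
fidelity = (psi.conj() @ rec @ psi).real
```

With these (arbitrary) dimensions, where dim H_A far exceeds d_code · dim H_Ā, the recovered state is close to the input, in line with the fidelity estimate quoted earlier.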

The simplified Petz reconstruction
For the simple reconstruction, we claim that the map from the bulk to the boundary M_A will itself suffice to perform the reconstruction, viz.,

  O_A = c_0 M_A( Ô_a ⊗ 1_ā )

provides a faithful boundary representative of a bulk operator in a. Unlike density operators, we will not a-priori normalize the pull-back of the bulk operators to the boundary, but will determine the proportionality constant c_0 post-facto, so that the 1-point functions agree.
To verify that we have a good reconstruction, we will focus on evaluating the expectation values of the boundary operators. We have:

  Tr( ρ_A O_A ) = c_0 Tr[ ( M(ρ̂) ⊗ M(Ô) ) X_A ] / Tr[ M(ρ̂) ] ,   (3.4)

where X_A is the Z_2 swap operator inserted at the vertices in A (with the identity on Ā). The calculation pretty much parallels that of the second Rényi entropy, the main modification being that one of the copies of ρ̂ is now replaced by the bulk operator Ô, see Fig. 2.

Figure 2: The representation of the computation of the r.h.s. of (3.4), which captures the non-trivial part of Tr(ρ_A O_A) in the random tensor network. We have two copies of the network with swap boundary conditions on A (dashed blue) and identity boundary conditions on Ā (solid blue). Note that the computation is similar to that involved in evaluating the second Rényi entropy with one copy of the density matrix replaced by the operator.
It is not hard to see that

  ⟨ Tr[ ( M(ρ̂) ⊗ M(Ô) ) X_A ] ⟩_V ≈ (1/N_2) χ^{−|γ_A|} Tr( ρ̂_a Ô_a ) + · · · ,

where Ô_a = Tr_ā(Ô).
In obtaining the answer we have assumed that the dominant spin configuration with the given boundary conditions is g_x = z^{(2)} for x ∈ A and the identity otherwise. A diagrammatic illustration of the averaged computation for a simple toy network is depicted in Fig. 3. The normalization factor from the denominator is computed similarly as before, with the boundary vertices having the identity spin, leading to

  Tr( ρ_A O_A ) ≈ c_0 (N_1/N_2) χ^{−|γ_A|} Tr( ρ̂ ( Ô_a ⊗ 1_ā ) ) + · · · .

The numerical pre-factor on the r.h.s. can be absorbed into the definition of the boundary operator O_A by choosing c_0 appropriately. Thus, as expected, the Petz-lite ansatz does a good job of recovering the boundary operator O_A from the bulk data.
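The Petz-lite recovery of 1-point functions can also be checked numerically. The sketch below uses Gaussian random tensors as a convenient stand-in for the network average (an assumption for illustration: for Gaussian tensors the averaging is a simple Wick contraction, mirroring the permutation pairings above). For a traceless bulk operator, the averaged pairing Tr[M_A(ρ̂) M_A(Ô)] is proportional to Tr(ρ̂ Ô), so the ratio between two different bulk states is recovered:

```python
import numpy as np

rng = np.random.default_rng(5)
d_b, dA, dAb, samples = 2, 8, 8, 4000   # arbitrary toy dimensions

O_bulk = np.diag([1.0, -1.0])           # traceless bulk operator
rho1 = np.diag([1.0, 0.0])              # Tr(rho1 O_bulk) = 1
rho2 = np.diag([0.75, 0.25])            # Tr(rho2 O_bulk) = 0.5

def ptrace_Abar(M):
    return np.trace(M.reshape(dA, dAb, dA, dAb), axis1=1, axis2=3)

num1 = num2 = 0.0
for _ in range(samples):
    # Gaussian random tensor T : H_b -> H_A ⊗ H_Abar
    T = rng.normal(size=(dA * dAb, d_b)) + 1j * rng.normal(size=(dA * dAb, d_b))
    OA = ptrace_Abar(T @ O_bulk @ T.conj().T)   # Petz-lite: O_A ∝ M_A(O_bulk)
    num1 += np.trace(ptrace_Abar(T @ rho1 @ T.conj().T) @ OA).real
    num2 += np.trace(ptrace_Abar(T @ rho2 @ T.conj().T) @ OA).real

ratio = num1 / num2    # ≈ Tr(rho1 O_bulk)/Tr(rho2 O_bulk) = 2, up to sampling noise
```

The common normalization factors drop out of the ratio, which is why a single undetermined constant c_0 suffices, as stated above.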

General recovery: 1-point functions
For the Petz map itself, we have to first make a choice of the fiducial reference state σ and then take fractional powers. One can circumvent this by defining a replica version:

  O_A^{(n)} = E(σ)^n E( σ^{1/2} Ô_a σ^{1/2} ) E(σ)^n ,

and take the limit n → −1/2. While it is easy to see that this construction will work, specifically with the dominant spin configuration being a cyclic permutation for even n (see for instance [20], who use a similar trick in the context of the SYK model), we find it useful to work with the twirled Petz map. As a recovery channel, the twirled Petz map is given by the expression:

  R_{σ,E}(ρ_A) = ∫ dt β_0(t) σ^{(1+it)/2} E†( E(σ)^{−(1+it)/2} ρ_A E(σ)^{−(1−it)/2} ) σ^{(1−it)/2} ,   β_0(t) = (π/2) ( cosh(πt) + 1 )^{−1} ,

for an arbitrary reference state σ. Picking the reference state to be the maximally mixed state τ = (1/d_code) 1_code, one can write an expression for the operator version of the map which allows a more standard replica construction. One has [14,15]

  O_A = (d/dt) log E( σ(t) ) |_{t=0} ,   σ(t) = (1/d_code) ( 1_code + t Ô_a ⊗ 1_ā ) ,   (3.9)

where d_code = d_a × d_ā. In this presentation of the twirled Petz map, we can think of t as quantifying a perturbation of the maximally mixed state; it may be viewed as a source for the operator deformation by Ô_a ⊗ 1_ā. The representation of the twirled Petz map in (3.9) has the advantage that we can use the standard replica trick used to compute relative entropy. Using the standard identity

  Tr( ρ log σ ) = lim_{n→1} (1/(n−1)) log Tr( ρ σ^{n−1} ) ,   (3.10)

we can rewrite an insertion of O_A in terms of insertions of powers of M(σ(t)), which will allow for direct replica manipulations, as we illustrate below.
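The replica identity (3.10) itself is easy to verify numerically by taking n = 1 + ε for small ε (the shift by the identity in the random density matrix below is only there to keep the spectrum away from zero, so the finite-ε error stays small):

```python
import numpy as np

rng = np.random.default_rng(4)
d = 5

def rand_rho():
    X = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    R = X @ X.conj().T + np.eye(d)   # shift keeps eigenvalues away from zero
    return R / np.trace(R).real

rho, sigma = rand_rho(), rand_rho()

# left-hand side: Tr(rho log sigma), via the eigendecomposition of sigma
w, U = np.linalg.eigh(sigma)
log_sigma = U @ np.diag(np.log(w.real)) @ U.conj().T
lhs = np.trace(rho @ log_sigma).real

# right-hand side at n = 1 + eps: (1/eps) log Tr(rho sigma^eps)
eps = 1e-5
sigma_pow = U @ np.diag(w.real**eps) @ U.conj().T
rhs = np.log(np.trace(rho @ sigma_pow).real) / eps

gap = abs(lhs - rhs)                 # O(eps)
```

The residual difference is of order ε, consistent with the limit in (3.10).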
To test the efficacy of the reconstruction we again focus on matching expectation values of operators in the bulk and boundary states respectively. We have

  Tr( ρ_A O_A ) = ∂_t lim_{n→1} (1/(n−1)) log f^{(n)}(t) |_{t=0} ,   f^{(n)}(t) ≡ Tr[ ( M(ρ̂) ⊗ M(σ(t))^{⊗(n−1)} ) X_A^{(n)} ] .   (3.11)

We have chosen to drop a normalization factor Tr[M(ρ̂)] above, as it is independent of our deformation parameter t. We can furthermore factorize the computation of f^{(n)}(t) by unfolding it into the n-fold tensor product with the insertion of X_A^{(n)}, as can be pictorially visualized as before. The computation is again simplified if we first perform the averaging over the random projectors, and proceeds in parallel to our earlier discussion of the computation of the Rényi entropies. We have

  ⟨ f^{(n)}(t) ⟩_V ≈ (1/N_n) χ^{−(n−1)|γ_A|} Tr[ ρ̂_a σ_a(t)^{n−1} ] + · · · ,   σ_a(t) = Tr_ā σ(t) ,

which implies

  lim_{n→1} (1/(n−1)) log f^{(n)}(t) = Tr( ρ̂_a log σ_a(t) ) + (t-independent) .   (3.14)

Differentiating with respect to the deformation parameter t and then taking the limit n → 1 we end up with the desired answer Tr(ρ_A O_A) = Tr(ρ̂ Ô) + subleading.

General recovery: higher point functions
Having understood the Petz reconstruction of bulk operators, and the recovery of the 1-point functions, we now turn to the higher point functions. The general statement of entanglement wedge reconstruction would say that we should be able to recover an arbitrary correlation function of bulk operators in the entanglement wedge in terms of corresponding boundary avatars. It was already argued in [14] that the Petz reconstruction would achieve this outcome.
We will now verify the same explicitly in the RTNs. The recovery of a k-point function using the Petz-lite reconstruction map is a straightforward generalization of the computation in §3.2. One can immediately see that it parallels the computation of the (k+1)-th Rényi entropy, where k copies of M(ρ̂) are replaced by ∏_{j=1}^k M(Ô_j). We will therefore focus on recovering the higher-point functions using the twirled Petz map. For the sake of illustration, consider first the computation of the two-point function of operators inserted in the homology surface a. We want to show that

  Tr( ρ_A O_A^{(1)} O_A^{(2)} ) = Tr( ρ̂ Ô_a^{(1)} Ô_a^{(2)} ) + subleading .   (3.16)

We will use the replica trick to compute the l.h.s. of (3.16), for which we need a suitable generalization of (3.10) when multiple operators are in play. Consider therefore the following:

  ⟨ log σ_1 log σ_2 ⟩^c_ρ ≡ Tr( ρ log σ_1 log σ_2 ) − Tr( ρ log σ_1 ) Tr( ρ log σ_2 )
    = lim_{n_1,n_2 → 1} 1/((n_1−1)(n_2−1)) log [ Tr( ρ σ_1^{n_1−1} σ_2^{n_2−1} ) / ( Tr( ρ σ_1^{n_1−1} ) Tr( ρ σ_2^{n_2−1} ) ) ] .   (3.17)

One can justify this in a manner similar to (3.10). To be explicit, one can use a unitary diagonalization ansatz σ_i = U_i Σ_i U_i† to facilitate the computation of the powers, and then differentiate with respect to n_1 and n_2 to arrive at the limit. The reason for ending up with the connected part of the correlator is the usual fact that the logarithm isolates the connected pieces (sequential derivatives give the lower-point functions).
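The identity (3.17) can be checked numerically in the same spirit, with n_i = 1 + ε_i for small ε_i (again the identity shift below is just to keep the random spectra away from zero; all matrices and dimensions are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(6)
d = 4

def rand_rho():
    X = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    R = X @ X.conj().T + np.eye(d)   # shift keeps eigenvalues away from zero
    return R / np.trace(R).real

def mat_fn(M, f):                    # apply f to the spectrum of a Hermitian M
    w, U = np.linalg.eigh(M)
    return U @ np.diag(f(w.real)) @ U.conj().T

rho, s1, s2 = rand_rho(), rand_rho(), rand_rho()
L1, L2 = mat_fn(s1, np.log), mat_fn(s2, np.log)

# connected correlator of log s1 and log s2 in the state rho (complex in general)
lhs = np.trace(rho @ L1 @ L2) - np.trace(rho @ L1) * np.trace(rho @ L2)

e1 = e2 = 1e-5                       # n_i = 1 + e_i
p1 = mat_fn(s1, lambda w: w**e1)
p2 = mat_fn(s2, lambda w: w**e2)
rhs = np.log(np.trace(rho @ p1 @ p2)
             / (np.trace(rho @ p1) * np.trace(rho @ p2))) / (e1 * e2)

gap = abs(lhs - rhs)                 # O(e_i)
```

The ratio inside the logarithm is what removes the disconnected pieces, exactly as described above.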
With the multi-replica trick (3.17) at hand, one can proceed with the computation of the two-point function of the reconstructed operators. We have:

  f^{(n_1,n_2)}(t_1,t_2) ≡ Tr[ ( M(ρ̂) ⊗ M(σ_1(t_1))^{⊗(n_1−1)} ⊗ M(σ_2(t_2))^{⊗(n_2−1)} ) X_A^{(M)} ] ,   σ_i(t_i) = (1/d_code) ( 1_code + t_i Ô_a^{(i)} ⊗ 1_ā ) .

Once again we can unfold the computation to work in a tensor product replica space with suitable cyclic permutations. Taking the average over the random projectors allows us to evaluate the result. One finds:

  ⟨ f^{(n_1,n_2)}(t_1,t_2) ⟩_V ≈ (1/N_M) χ^{−(M−1)|γ_A|} Tr[ ρ̂_a σ_{1,a}(t_1)^{n_1−1} σ_{2,a}(t_2)^{n_2−1} ] + · · · ,   (3.20)

with M = 1 + Σ_i (n_i − 1). Finally, we can evaluate the derivatives with respect to t_i and thence the limits n_i → 1, recovering for the average:

  ⟨ Tr( ρ_A O_A^{(1)} O_A^{(2)} )_c ⟩_V = Tr( ρ̂ Ô_a^{(1)} Ô_a^{(2)} )_c + · · · .

Effectively, the random projection average allows us to interpret f^{(n_1,n_2)}(t_1,t_2) as the generating functional of the correlation functions, with t_1 and t_2 as sources. So indeed log f^{(n_1,n_2)}(t_1,t_2) does lead to connected 2-point correlators. Since we independently have a computation of the expectation values of one-point functions, we can immediately recover the full two-point function.
The matching of higher-point functions works similarly, once we realize that we can write (schematically, up to operator ordering):

  ⟨ log σ_1 · · · log σ_k ⟩^c_ρ = ∂_{n_1} · · · ∂_{n_k} log Tr( ρ σ_1^{n_1−1} · · · σ_k^{n_k−1} ) |_{n_i = 1} ,

the multi-variable generalization of (3.17). Recovering the connected parts suffices, as the disconnected parts can be reconstructed iteratively from the lower-point functions.

Comments on holographic channels
We have focused on illustrating the efficacy of the Petz reconstruction of bulk operators in RTNs by concentrating on the matching of correlation functions. We see that the general correlation functions of bulk operators in the homology region R_A in the reduced state ρ̂_a agree with those computed using the corresponding reconstructed boundary operators, in accord with [14]. The underlying reason for the matching is simply the fact that we are able to perform state decoding. That is, given ρ_A = E(ρ̂), the recovery map viewed as a decoding channel satisfies Eq. (3.1), viz., R ∘ E(ρ̂) ≈ ρ̂_a. Indeed this suffices, as

  Tr( ρ_A O_A ) = Tr( E(ρ̂) R†(Ô_a) ) = Tr( R ∘ E(ρ̂) Ô_a ) ≈ Tr( ρ̂_a Ô_a ) .   (4.1)

The state decoding itself works as follows. Given a bulk state ρ̂ and its boundary encoding E(ρ̂), the random averaged decoded state is

  ⟨ R ∘ E(ρ̂) ⟩_V ≈ ρ̂_a + · · · .

One can read this off directly from Fig. 2: the construction simply involves erasing the operator Ô and leaving the bulk legs on the second copy dangling to create a state in a, as we expect from the general results proved in [14,15]. However, as noted above, the bulk-boundary map in RTNs is not trace-preserving, and thus not a quantum channel per se, but rather a quantum operation. One can nevertheless proceed with an analog of a Petz reconstruction map which (as demonstrated above) serves to reconstruct the bulk operators.
The crucial fact that Ô_a lies in the homology surface R_A = a was left implicit in our analysis, but follows from the nature of the dominant (saddle-point) spin configuration minimizing the energy in the auxiliary spin model. For instance, if we focus on a simplified toy problem where we have a single bulk degree of freedom in the middle, then we can illustrate the operation R ∘ E as a quantum depolarizing operation. One can write the averaged decoding map as

  ⟨ R ∘ E(ρ̂) ⟩_V ≈ ( χ^{−|γ_A^1|} ρ̂ + χ^{−|γ_A^2|} τ ) / ( χ^{−|γ_A^1|} + χ^{−|γ_A^2|} ) .

We have now written explicitly the result with the two potential energy-minimizing domain wall configurations γ_A^1 and γ_A^2. For χ ≫ 1, we have a sharp phase transition between the two situations; when γ_A^1 is the minimum cut, the recovery succeeds with certainty. On the other hand, when γ_A^2 is the minimum cut, the recovery fails completely, as we end up with the maximally mixed state.
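The sharp distinction between the two phases can be exhibited numerically using a Haar random isometry as a stand-in for the encoding (an illustrative assumption, as before): when the A factor dominates, the Petz-decoded state tracks the input, while in the opposite regime it collapses to the maximally mixed state. All dimensions below are arbitrary toy choices.

```python
import numpy as np

rng = np.random.default_rng(7)

def psd_pow(M, p):
    w, U = np.linalg.eigh(M)
    wp = np.array([x**p if x > 1e-12 else 0.0 for x in w.real])
    return U @ np.diag(wp) @ U.conj().T

def encode_then_decode(d_code, dA, dAb):
    # Haar random isometry encoding into H_A ⊗ H_Abar, erase Abar,
    # then apply the Petz recovery with maximally mixed reference.
    G = rng.normal(size=(dA * dAb, d_code)) + 1j * rng.normal(size=(dA * dAb, d_code))
    W, _ = np.linalg.qr(G)
    ptr = lambda M: np.trace(M.reshape(dA, dAb, dA, dAb), axis1=1, axis2=3)
    E = lambda r: ptr(W @ r @ W.conj().T)
    E_adj = lambda X: W.conj().T @ np.kron(X, np.eye(dAb)) @ W
    S = psd_pow(E(np.eye(d_code) / d_code), -0.5)
    rho = np.zeros((d_code, d_code), dtype=complex); rho[0, 0] = 1.0
    return E_adj(S @ E(rho) @ S) / d_code

rec_good = encode_then_decode(2, 128, 2)   # analog of gamma^1_A minimal: recovery works
rec_bad  = encode_then_decode(2, 2, 128)   # analog of gamma^2_A minimal: ≈ maximally mixed
```

In the first regime the decoded state is close to the input |0⟩⟨0|; in the second it is close to the identity over 2, the numerical analog of the depolarizing behaviour described above.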
One key aspect of the RTNs that is worth emphasizing has to do with the use of random projectors to define the bulk to boundary map. For any graph network, the boundary encoding M_A(ρ̂) can be well approximated by the averaged boundary density operator, as one would expect. However, the computation of moments of the density operators, or the evaluation of correlation functions, as discussed in the preceding sections, involves non-trivial interlinking across the replica copies. Mathematically, this is clear from the fact that such computations in a single theory can be unfolded into an evaluation in the n-fold tensor product system with suitable insertions of cyclic permutation elements. When we carry out the average over the random projectors, we can induce connections between these replica copies, as the group averaging result (2.3) provides further permutation insertions which can combine with the cyclic replica permutation to generate new cycles/links. This is entirely analogous to the manner in which gravitational dynamics engenders connections between different replica copies through Euclidean replica wormholes [20,21], as is to a large extent already clear from the results of [20] on Petz reconstruction. While one might have viewed the random tensors as a means to extract the behaviour of the typical state of the graph, the general lesson is that the fluctuations, moments, and correlations in such random projected states carry information beyond the average, thanks to the aforementioned propensity for forming new inter-connections.
In this vein it is interesting to contemplate the distinction between coarse-graining (via partial tracing) versus random projections onto entangled states for general open quantum systems. In a bipartite system-environment setting, one traditionally considers simply tracing out environmental degrees of freedom, inducing on the system degrees of freedom a nonunitary dynamics. However, the results alluded to above suggest that one may be able to glean further insight from the study of random projection models of system-environment couplings. More specifically, in these settings, replicaesque analysis ought to reveal that the system has more detailed information about the purifying environment than would have been anticipated naively. These issues deserve further attention.