Hayden-Preskill decoding from noisy Hawking radiation

In the Hayden-Preskill thought experiment, the Hawking radiation emitted before a quantum state is thrown into the black hole is used, along with the radiation collected later, to decode the quantum state. A natural question is how the recoverability is affected if the stored early radiation is damaged or subject to decoherence, and/or the decoding protocol is imperfectly performed. We study the recoverability in the thought experiment in the presence of decoherence or noise in the storage of the early radiation.


Introduction
The Hayden-Preskill thought experiment [1] asked how much information can be recovered about a quantum state thrown into a black hole, under the assumption that the black hole is a fast-scrambling quantum system. The surprising result is that, given sufficient time after the message is thrown in, there exists a quantum channel that recovers the quantum information associated with the infalling state (see also [2,3]). This result was critical to the development of black hole quantum information as a subfield, in particular to the formulation of the black hole information paradox [4][5][6][7][8]. In more recent works, explicit quantum circuit implementations of the thought experiment have been proposed [9][10][11], some in the slightly modified context of traversable wormholes in AdS/CFT [12,13].
Notably for readers whose interests lie outside quantum gravity, this protocol does not require that the quantum state be thrown into a black hole specifically; all that is required is that it become associated with a sufficiently rapidly scrambling quantum system. Such insights have, in turn, boosted the study of quantum chaos from the viewpoints of quantum information theory and quantum many-body physics [14][15][16][17][18][19][20][21][22][23][24]. In this context, the protocol of [1] could have applications to NISQ quantum algorithms and devices, perhaps as a subroutine or a kind of quantum memory. Studying the model in this context, however, naturally raises the question of robustness against decoherence.
Recently, the effects of certain types of noise and decoherence on the recoverability were studied in [10], with the aim of experimentally assessing the amount of chaos in strongly correlated quantum systems on a noisy quantum device [25]. While out-of-time-ordered correlators (OTOCs) are powerful diagnostics of information scrambling and chaos, their signals can be hard to extract in realistic experimental setups because of decoherence and noise. Yoshida and Yao [10] proposed a Hayden-Preskill decoding protocol as an alternative tool to circumvent this difficulty. Accordingly, their primary interest lies in the effect of such errors on the unitary U itself, which may be noisy or imperfectly implemented.
In this work, we study the robustness of the Hayden-Preskill thought experiment to decohering effects different from those considered in [10]. The effect of decoherence on black hole quantum information has already been studied in, for example, [26,27], and has been shown to have potentially quite interesting consequences for the information paradox. Here, however, we focus on the purely information-theoretic question of the robustness of the Hayden-Preskill recovery channel against erasure or decoherence, as could occur because of circuit imperfections or inadequate shielding against interactions with the environment. In this context, we assess the severity of such errors in terms of the quality of Hayden-Preskill decoding, assuming that the erasure error or decoherence occurs in the stored radiation collected earlier for the purpose of recovery. After relating the recoverability to the mutual information between the input state and the collected radiation in a general setup, we carry out explicit computations of the decoding fidelity, which serves as a proxy for recoverability, and of the effects of errors, provided the time evolution is described by a Haar-random unitary. We find that erasure errors severely degrade the decoding fidelity, in contrast to decoherence, which has a weaker effect.
The organization of this work is as follows. In Sec. 2, we review the Hayden-Preskill thought experiment in the absence of decohering effects. In Sec. 3, we discuss the effect of erasure errors on the Hayden-Preskill protocol. In Sec. 4, we discuss the effects of decoherence. In Sec. 5, we compare the recoverabilities under erasures and under decoherence, based on the computations of the preceding sections. Finally, we conclude in Sec. 6 with a discussion of these effects, in particular by contrast to the earlier work of [10].

Hayden-Preskill thought experiment
Suppose Alice has a quantum diary represented by a state on A, which she throws into a black hole B. In particular, let the black hole have already emitted half of its qubits as Hawking radiation B̄ when Alice throws her diary in, e.g. a black hole past its Page time [5,14]. The composite system then experiences some unitary evolution U, ending up as Hawking radiation D and the remainder of the black hole C. A natural question to ask at this point is whether Bob can reconstruct Alice's diary by collecting the early radiation B̄ and the late radiation D and acting on them with some quantum recovery channel. If yes, a further question is how much late-time radiation Bob needs in order to perform the reconstruction. According to [1], the answer is affirmative, with Bob needing only to collect a small amount of the late-time radiation to perform the recovery.
For the purpose of decoding, we attach a reference system R that forms an EPR pair with the state on A. The state |Ψ_HP⟩ after the time evolution U is given by

|Ψ_HP⟩ = (U_{AB} ⊗ I_{RB̄}) |EPR⟩_{RA} ⊗ |EPR⟩_{BB̄} .    (2.1)

The EPR states |EPR⟩_{RA} and |EPR⟩_{BB̄} are defined by

|EPR⟩_{RA} = (1/√d_A) Σ_i |i⟩_R |i⟩_A ,   |EPR⟩_{BB̄} = (1/√d_B) Σ_b |b⟩_B |b⟩_{B̄} .

The dot in the graphical representation stands for the normalization factor of an EPR state. Decoding Alice's state is accomplished by applying some operation V that distills the state on R from the collected radiation D and B̄. If the state on B̄D shares a nearly maximal amount of information with R, then Bob is capable of successfully decoding Alice's state by the quantum recovery channel V.

Decoding protocol
We review how to construct a decoding operation V in the absence of error and noise. Much of this is a review of the work first performed in [9]. We assume that Bob has collected all of the early radiation and the late radiation, and has complete knowledge of the unitary U driving the black hole's dynamics (i.e., he is capable of implementing U on his own qubits for the purpose of decoding). Given the state |Ψ_HP⟩ (2.1), Bob's decoding strategy is the following: 1. Prepare a copy of |EPR⟩_{RA}, denoted by |EPR⟩_{R'A'}.
2. Apply U* on A'B̄. We call the resulting state |Ψ_in⟩; it is given by

|Ψ_in⟩ = (U_{AB} ⊗ U*_{A'B̄}) |EPR⟩_{RA} ⊗ |EPR⟩_{BB̄} ⊗ |EPR⟩_{A'R'} .

3. Project the state onto |EPR⟩_{DD'}. In other words, Bob repeatedly performs projective measurements on DD' until his state is successfully projected onto |EPR⟩_{DD'}¹. Letting Π_{DD'} := |EPR⟩⟨EPR|_{DD'} be the projection operator, we find the probability of obtaining |EPR⟩_{DD'} to be

P_EPR = ⟨Ψ_in| Π_{DD'} |Ψ_in⟩ .

The state projected onto |EPR⟩_{DD'} is

ρ_out = Tr_{CC'DD'}[ Π_{DD'} |Ψ_in⟩⟨Ψ_in| Π_{DD'} ] / P_EPR .

For later convenience, we calculate P_EPR assuming that U is a Haar-random unitary operator²:

∫ dU P_EPR ≈ 1/d_A² + 1/d_D² .    (2.7)

Here, ∫ dU stands for the integration of the unitary operator U over the Haar measure (see (A.4) for details of the computation).
4. The fidelity between ρ_out and |EPR⟩_{RR'} quantifies the quality of decoding (see Sec. 2.3). It is computed as

F_EPR = Tr[Π_{RR'} ρ_out] = 1/(d_A² P_EPR) .    (2.8)

Thus, small P_EPR leads to high fidelity. In particular, provided U is a Haar-random unitary and d_A ≪ d_D, (2.7) reduces to P_EPR ≈ 1/d_A², implying nearly maximal decoding quality, F_EPR ≈ 1 (see Fig. 1).
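The ideal protocol can be checked numerically for small systems by contracting the decoder network directly. The following is a minimal sketch (function names and the qubit partition are ours), which also exhibits the exact relation F_EPR = 1/(d_A² P_EPR) behind (2.8):

```python
import numpy as np

def haar_unitary(dim, rng):
    """Sample a Haar-random unitary via QR of a complex Ginibre matrix."""
    z = (rng.standard_normal((dim, dim))
         + 1j * rng.standard_normal((dim, dim))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def decode(U, d_A, d_B, d_C, d_D):
    """Return (P_EPR, F_EPR) for the probabilistic decoding protocol."""
    # U[(c,d),(i,b)] reshaped with output legs (C, D) and input legs (A, B)
    Ur = U.reshape(d_C, d_D, d_A, d_B)
    # |Psi_in> on legs (R, C, D, C', D', R'); U acts on AB, U* on A'Bbar,
    # and the early-radiation leg b is shared between the two copies
    psi = np.einsum('cdrb,efsb->rcdefs', Ur, Ur.conj()) / (d_A * np.sqrt(d_B))
    # Project DD' onto an EPR pair: set d = d' and weight by 1/sqrt(d_D)
    proj = np.einsum('rcdeds->rces', psi) / np.sqrt(d_D)
    P = float(np.sum(np.abs(proj) ** 2))          # success probability P_EPR
    # Overlap of the projected state with EPR on RR' gives the fidelity
    amp = np.einsum('rcer->ce', proj) / np.sqrt(d_A)
    F = float(np.sum(np.abs(amp) ** 2)) / P       # decoding fidelity F_EPR
    return P, F

rng = np.random.default_rng(0)
d_A, d_B, d_C, d_D = 2, 8, 4, 4                   # one message qubit, d_A d_B = d_C d_D
P, F = decode(haar_unitary(d_A * d_B, rng), d_A, d_B, d_C, d_D)
print(P, F, 1 / (d_A**2 * P))                     # F agrees with 1/(d_A^2 P_EPR)
```

For d_A ≪ d_D the sampled P_EPR sits near 1/d_A² + 1/d_D² and F_EPR near 1/(1 + d_A²/d_D²), in line with (2.7) and (2.8).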

Decoding fidelity and mutual information
The decoding quality is quantified by the mutual information between the reference system R and the collected radiation B̄D in the state (2.1). Here, we make a connection between the mutual information and the decoding fidelity (2.8). For computational convenience we consider the Rényi-2 mutual information, built from the Rényi-2 entropy S^(2) := −log Tr[ρ²]. The base of the logarithm is taken to be 2 throughout the article. We find S^(2)(R) = log d_R = log d_A because the state on R is maximally entangled with A. Also, S^(2)(RB̄D) = log d_C because RB̄CD is a pure state and RB̄D is maximally entangled with C. Let us compute Tr[ρ²_{B̄D}] = 2^{−S^(2)(B̄D)}. The state ρ_{B̄D} is obtained by tracing R and C out of |Ψ_HP⟩⟨Ψ_HP|, and one finds

P_EPR = (d_C/d_A) Tr[ρ²_{B̄D}] = 2^{−I^(2)(R, B̄D)} ,    (2.12)

namely, the projection probability P_EPR directly measures the Rényi-2 mutual information. We remark that P_EPR is equal to the "averaged out-of-time-ordered correlator (OTOC)", and thus gives another perspective on information scrambling (see [10] for details). Therefore, the decoding fidelity is expressed as

F_EPR = 2^{I^(2)(R, B̄D)} / d_A² ,

which shows that a large I^(2)(R, B̄D) results in high decoding fidelity [10], as originally argued on the basis of the decoupling principle [1].

Hayden-Preskill decoding with erasure errors
In real-world physical systems, however, the decoding protocol can be subject to various errors. In the present work, we are interested in storage errors that occur while Bob keeps the early radiation in B̄, before he applies the decoding protocol upon gathering the late radiation.    (3.1)

In this graphical representation, V is the decoding operation that Bob performs on the collected radiation B̄D to distill the EPR pair on RR', and the red wire represents the noisy storage.

Decoding protocol with erasure errors
We model the noise in storage by erasure errors, i.e. the situation where some qubits in the storage B̄ become inaccessible because they are lost or damaged. (We will examine the effect of decoherence in Sec. 4.) The erasure is modeled by randomly choosing qubits to be discarded with probability p. The entire state before the application of the recovery protocol takes the form    (3.2)

We graphically double the wire between B and B̄ in order to separate the N_{B̄2} erased qubits on B̄2 from the N_{B̄1} surviving qubits on B̄1. The state on B̄2 is traced out, as Bob is unable to access the lost qubits. Following the protocol in the previous section, Bob applies U* after attaching the EPR state |EPR⟩_{A'R'} and the N_{B̄2}-qubit maximally mixed state I_{B̄2}/d_{B̄2}, which fills in the erased qubits. The associated density operator ρ_in is thus graphically represented by    (3.3)

Bob then projects the state (3.3) onto |EPR⟩_{DD'} to obtain    (3.4)

Upon projecting ρ_out onto |EPR⟩_{RR'}, we find the decoding fidelity F_EPR = Tr[Π_{RR'} ρ_out]; the successfully projected state is the distilled EPR pair on RR'. Compared with the fidelity in the ideal case (2.8), the factor δ quantifies the effect of the error in the product F_EPR P_EPR.
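The erased-storage protocol described above can be contracted exactly for a handful of qubits by keeping the lost qubits, and the purification of the maximally mixed fill, as explicit environment legs. A minimal sketch (function and leg names are ours), comparing the error-free case with the loss of a single stored qubit:

```python
import numpy as np

def haar_unitary(dim, rng):
    z = (rng.standard_normal((dim, dim))
         + 1j * rng.standard_normal((dim, dim))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def decode_with_erasure(U, d_A, d_B1, d_B2, d_C, d_D):
    """(P_EPR, F_EPR) when the B2 part of the stored radiation is erased.

    The lost qubits are kept as environment leg g; the maximally mixed
    fill I/d_B2 is purified by a second environment leg h."""
    d_B = d_B1 * d_B2
    Ur = U.reshape(d_C, d_D, d_A, d_B1, d_B2)
    # legs (R, C, D, C', D', R', g, h); only the surviving leg b is shared
    psi = np.einsum('cdrbg,efsbh->rcdefsgh', Ur, Ur.conj()) \
        / (d_A * np.sqrt(d_B * d_B2))
    proj = np.einsum('rcdedsgh->rcesgh', psi) / np.sqrt(d_D)
    P = float(np.sum(np.abs(proj) ** 2))
    amp = np.einsum('rcergh->cegh', proj) / np.sqrt(d_A)
    return P, float(np.sum(np.abs(amp) ** 2)) / P

d_A, d_C, d_D = 2, 4, 4
Fs = []
for seed in range(5):
    U = haar_unitary(16, np.random.default_rng(seed))
    Fs.append((decode_with_erasure(U, d_A, 8, 1, d_C, d_D)[1],   # no erasure
               decode_with_erasure(U, d_A, 4, 2, d_C, d_D)[1]))  # one qubit lost
print(np.mean(Fs, axis=0))   # erasing a single stored qubit visibly lowers F_EPR
```

Setting d_B2 = 1 makes both environment legs trivial and reproduces the ideal protocol, so the same function covers both cases.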

Mutual information
In order to make a connection to the Rényi-2 mutual information I^(2)(R, B̄1 D), this equation is recast into a form written in terms of the Rényi-2 entropies of the state after the erasure. With the erasure error, P_EPR is given by a combination of Rényi-2 entropies instead of the Rényi-2 mutual information alone, the latter of which encodes the recoverability of Alice's state. The ratio of these quantities is precisely captured by the error factor δ (3.7) [10]. Using (3.5), (3.7), and (3.9) we find    (3.10)

Even though P_EPR takes a different form from the ideal case (2.12), the decoding fidelity rightly reflects the mutual information between the state on R and the collected radiation B̄1 D, to which Bob has access. The effect of erasure is translated into a reduction of the mutual information, and vice versa.

Decoding fidelity
Let us compute δ (3.5), provided that the time evolution is governed by a Haar-random unitary. Executing the Haar integral (A.5), we find the Haar-random average of δ,    (3.12)

which reduces to 1 in the absence of erasure errors, p = 0. For small p, expanding in d_B^{−2p}, we find    (3.13)

so the error factor deviates from unity by an amount approximately proportional to the number of erased qubits N_{B̄2} = p log d_B. P_EPR is similarly computed:    (3.14)

Its Haar average (A.6) results in    (3.15)

Therefore, the decoding fidelity is    (3.16)

Hayden-Preskill decoding with decoherence in storage
We consider the effects of two types of errors: decoherence in the storage B̄ of the collected early radiation, and imperfect implementation of the backward unitary evolution, combined with decoherence.

Decoherence in storage
We model the decoherence by the depolarizing channel,

Q(ρ_{B̄}) = (1 − p) ρ_{B̄} + p (I_{B̄}/d_{B̄}) Tr[ρ_{B̄}] .

Although the decoherence affects the state on B̄, the quantum channel may equivalently be understood to act on |EPR⟩⟨EPR|_{BB̄}.³ The decoherence in storage is responsible for the decay of entanglement between the early radiation B̄ and the black hole B, which are initially maximally entangled, resulting in the state

(I ⊗ Q)(|EPR⟩⟨EPR|_{BB̄}) = (1 − p) |EPR⟩⟨EPR|_{BB̄} + p I_{BB̄}/d_B² .

To start decoding, Bob introduces an EPR pair on R'A', followed by an application of U*, to obtain the state ρ_in. Sequential projections of ρ_in onto |EPR⟩_{DD'} and then onto |EPR⟩_{RR'} lead to the decoding fidelity, where P_EPR is the probability of the first projection. Below, we discuss its relation to the mutual information, and calculate these quantities explicitly.

³ This identity can be checked directly from the definition of the depolarizing channel.
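The statement that the storage decoherence can be pushed onto the EPR pair is easy to check concretely for a single stored qubit. A minimal sketch, assuming the standard Kraus decomposition of the single-qubit depolarizing channel (our convention for p):

```python
import numpy as np

p = 0.3
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Kraus operators realizing Q_p(rho) = (1 - p) rho + p (I/2) Tr[rho]
kraus = [np.sqrt(1 - 3 * p / 4) * I2,
         np.sqrt(p / 4) * X, np.sqrt(p / 4) * Y, np.sqrt(p / 4) * Z]

epr = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)   # |EPR>_{B Bbar}
rho = np.outer(epr, epr.conj())

# Decoherence acts only on the stored half Bbar (the second tensor factor)
out = sum(np.kron(I2, K) @ rho @ np.kron(I2, K).conj().T for K in kraus)

# (id ⊗ Q_p)|EPR><EPR| = (1 - p)|EPR><EPR| + p I/4: the entanglement decays
assert np.allclose(out, (1 - p) * rho + p * np.eye(4) / 4)
```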

Mutual information
Let us relate the decoding fidelity to the Rényi-2 mutual information. The quantity Tr[(ρ_{RB̄D})²] = 2^{−S^(2)(RB̄D)}, evaluated on the state (4.4), is converted into a form in which the decoherence acts through a second channel Q̃. In the last equality we used an identity relating Q to Q̃, where Q̃ describes the depolarizing channel with probability p̃ satisfying (see Appendix B for the derivation)

p = 2p̃ − p̃² .    (4.10)

This is equivalently written in terms of the Rényi-2 mutual information, which in turn implies    (4.14)

Therefore, the larger the Rényi-2 mutual information is, the better the decoding quality becomes. Note, however, that the mutual information is computed using the quantum channel Q̃ with p̃ ≤ p (4.10). The intuition here is that the Q̃ channel involves participation from B̄2 and B̄3 as ancilla qubits; applying a quantum channel can in general change the mutual information even of systems not involving the ancillas, in contrast with the Q channel, which does not involve the ancillas at all. In this sense, (4.10) is a consistency condition that must be satisfied to ensure that the ancilla qubits have no effect on the mutual information when the Q̃ channel is applied.

Decoding fidelity
Let us compute δ. In order to proceed with the computation, we assume that the time evolution is described by a Haar-random unitary. The Haar-random average of the second term is computed in (A.7), and we find the error factor. Similarly, we carry out the Haar integral to find the Haar average of P_EPR:    (4.17)

The decoding fidelity is then computed by plugging these into (4.6). Its p dependence, in comparison with that of the erasure errors (3.16), is shown in Fig. 3 and discussed in Sec. 5.

Imperfect noisy backward evolution
Suppose Bob is only capable of implementing a noisy backward time evolution of U, using a matrix Ũ* that is not precisely the complex conjugate of U, because of imperfect implementation, decoherence, or both. We model the situation by an initial state in which U* is replaced by the quantum channel built from Ũ*. The decoding fidelity F_EPR is then given in terms of the probability of ρ_in being projected onto |EPR⟩_{DD'}. The factor ∆ is computed in terms of a quantity that measures the difference between U and Ũ, and becomes precisely their two-norm overlap when d_C = 1 [10].
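The role of ∆ can be illustrated in a small simulation of the decoder. The sketch below (our construction, with d_C = 4 rather than the d_C = 1 limit) compares Ũ = U with a completely unrelated Haar-random Ũ, an extreme case of small overlap:

```python
import numpy as np

def haar_unitary(dim, rng):
    z = (rng.standard_normal((dim, dim))
         + 1j * rng.standard_normal((dim, dim))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def decode_with(U, Utilde, d_A, d_B, d_C, d_D):
    """(P_EPR, F_EPR) when Bob applies Utilde* instead of U* in the decoder."""
    Ur = U.reshape(d_C, d_D, d_A, d_B)
    Vr = Utilde.reshape(d_C, d_D, d_A, d_B)
    # legs (R, C, D, C', D', R'); the backward branch now carries Utilde*
    psi = np.einsum('cdrb,efsb->rcdefs', Ur, Vr.conj()) / (d_A * np.sqrt(d_B))
    proj = np.einsum('rcdeds->rces', psi) / np.sqrt(d_D)
    P = float(np.sum(np.abs(proj) ** 2))
    amp = np.einsum('rcer->ce', proj) / np.sqrt(d_A)
    return P, float(np.sum(np.abs(amp) ** 2)) / P

d_A, d_B, d_C, d_D = 2, 8, 4, 4
good, bad = [], []
for seed in range(5):
    rng = np.random.default_rng(seed)
    U, V = haar_unitary(16, rng), haar_unitary(16, rng)
    good.append(decode_with(U, U, d_A, d_B, d_C, d_D)[1])
    bad.append(decode_with(U, V, d_A, d_B, d_C, d_D)[1])
print(np.mean(good), np.mean(bad))   # an unrelated Utilde degrades the fidelity
```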

Comparison: Erasure vs Decoherence
As we have discussed, the error factor δ quantifies the deviation of F_EPR P_EPR from its ideal value 1/d_A²; indeed, the error factor is identically unity in the absence of errors. Comparisons between the erasure errors and the decoherence in terms of the error factor are shown in Fig. 2. For the computations of δ, the Haar-random average over unitary evolutions is taken. Both plots clearly show that the erasure errors have a severe impact relative to the decoherence, particularly when the size N_A of Alice's state is small. This is understood from the fact that, for small p, the effect of the erasure error is proportional to the number of erased qubits ∼ pN_B, which becomes relatively larger when N_A is smaller (3.13). For general p, the error factor decays exponentially in p (3.12) given sufficiently large N_D, in contrast to the effect of decoherence (4.17), which induces a decay of δ only proportional to p. The decoding fidelity F_EPR also displays a sharp difference between the erasure and decoherence effects, as shown in Figs. 3 and 4. The decoding fidelity declines rapidly as the erasure probability p increases, which significantly limits the recoverability in the Hayden-Preskill thought experiment in the presence of erasure errors. In contrast, the recoverability is well protected against decoherence as long as the size N_D of the radiation that Bob collects later is large relative to the size N_A of Alice's state.

Conclusion
Both erasure errors and decoherence are likely to occur in Bob's storage of the radiation collected at an earlier time in the Hayden-Preskill thought experiment, and consequently to affect the quality of the decoding algorithm. We have assessed their impact on the quantum recovery channel from the viewpoint of its decoding fidelity. We found that a serious breakdown of the decoding fidelity occurs when erasure errors exist in the storage, while the protocol is relatively resilient against decoherence. This may equivalently be attributed to the reduction of the mutual information (3.10) as a consequence of the decreased entanglement due to the lost qubits.
In the present work, we primarily focused on the noise and decoherence occurring during the storage phase, motivated by the setup of the thought experiment. These effects can, however, also arise in other parts of the circuit, a possibility addressed in [10], which explored their impact on the unitary evolution and proposed an experimental implementation to diagnose information scrambling in quantum systems on a noisy device.
It is also interesting to view our recovery analysis in the presence of erasure in relation to entanglement phase transitions [31][32][33][34][35][36][37][38]. In the latter context, projective measurements and random unitaries are applied repeatedly; depending on the measurement rate, the late-time entanglement structure of a quantum system can change drastically, if the rate at which the measurements disentangle the system overcomes the rate at which the random circuit generates entanglement. A crucial difference here, however, is that in the erasure randomly chosen qubits are simply discarded, whereas measurement outcomes may be used to retain a certain amount of entanglement.

B Relation between Q and Q̃
The quantum channels Q and Q̃ are respectively defined by

Q(ρ) = (1 − p) ρ + p (I/d_{B̄}) Tr[ρ] ,   Q̃(ρ) = (1 − p̃) ρ + p̃ (I/d_{B̄}) Tr[ρ] .

The identities used in (4.8) and (4.12) are proved as follows; in the second-to-last equality, we used the identification p = 2p̃ − p̃².