Space-time generalization of mutual information

The mutual information characterizes correlations between spatially separated regions of a system. Yet, in experiments we often measure dynamical correlations, which involve probing operators that are also separated in time. Here, we introduce a space-time generalization of mutual information which, by construction, satisfies several natural properties of the mutual information and at the same time characterizes correlations across subsystems that are separated in time. In particular, this quantity, which we call the \emph{space-time mutual information}, bounds all dynamical correlations. We construct this quantity based on the idea of quantum hypothesis testing. As a by-product, our definition provides a transparent interpretation in terms of an experimentally accessible setup. We draw connections with other notions in quantum information theory, such as quantum channel discrimination. Finally, we study the behavior of the space-time mutual information in several settings and contrast its long-time behavior in many-body localized and thermalizing systems.


Introduction
A prominent experimental diagnostic for the properties of physical systems is the correlation function. Correlation functions offer a window into the complex dependencies among different subregions in space and time of a given system, and provide an organized and transparent way to characterize its fundamental properties. For example, the electric conductivity is determined by the current-current response function. Angle-resolved photoemission experiments measure the two-point function of the electron as a function of momentum and energy. Phases of matter are also characterized by correlation functions: for example, spontaneously symmetry-broken phases, such as ferromagnetism and crystalline order, correspond to long-range correlations in space. In general, a two-point correlation function is defined for two operators, each residing in a region of spacetime. One question that naturally arises is whether there exists a quantum information measure that quantifies the amount of correlation between two space-time regions. For two spatial regions A and B defined at the same time, the mutual information I(A : B) = S_A + S_B − S_{AB} is a natural measure, where S_A denotes the von Neumann entropy of subsystem A. I(A : B) = 0 if and only if the state on AB factorizes as ρ_{AB} = ρ_A ⊗ ρ_B, which implies that there is no connected correlation between the two regions. A nonzero I(A : B) provides a quantitative measure of correlations that is independent of the specific operators being correlated. For instance, if we consider two ferromagnetic states, one with long-range correlations of the spin-z component ⟨S^z(r) S^z(r′)⟩_c and another with an equal amount of correlation in S^x, then, keeping everything else fixed, both states will yield the same mutual information. Importantly, the mutual information provides an upper bound on the connected correlation functions between the two regions [1].
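As a concrete numerical illustration of these definitions (our own sketch, not part of the original discussion; all function names are ours), the snippet below computes I(A : B) = S_A + S_B − S_{AB} in nats for a Bell pair, where it is maximal, and for the maximally mixed two-qubit state, where it vanishes:

```python
# Illustration (ours): mutual information I(A:B) = S_A + S_B - S_AB, in nats.
import numpy as np

def entropy(rho):
    """von Neumann entropy S(rho) = -tr(rho log rho)."""
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log(p)))

def mutual_info(rho_ab, dA, dB):
    """I(A:B) for a density matrix on a dA x dB bipartite system."""
    r = rho_ab.reshape(dA, dB, dA, dB)
    rho_a = np.einsum('ijkj->ik', r)   # partial trace over B
    rho_b = np.einsum('ijik->jk', r)   # partial trace over A
    return entropy(rho_a) + entropy(rho_b) - entropy(rho_ab)

psi = np.array([1.0, 0, 0, 1.0]) / np.sqrt(2)    # Bell pair (|00> + |11>)/sqrt(2)
print(mutual_info(np.outer(psi, psi), 2, 2))     # 2 log 2 ~ 1.386 (maximal)
print(mutual_info(np.eye(4) / 4, 2, 2))          # 0 (no correlations)
```

The Bell pair saturates the qubit maximum I = 2 log 2, consistent with the bound on connected correlators quoted from Ref. [1].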
In physical experiments, dynamical correlation functions can be measured for regions located at different times, just like equal-time correlators. However, the mutual information cannot be used to measure correlations between different times. In quantum field theory, the mutual information can typically be defined for algebras associated with spacelike separated, non-adjacent regions A and B, but it is generally undefined for time-like separated regions, in particular when region B lies inside the future lightcone of region A. For finite-dimensional quantum systems, the issue arises because the state ρ_{AB} is simply not defined for time-like separated regions. Our study is motivated by the idea that a physical system should be characterized by observables. Thus, we aim to generalize the mutual information to a pair of space-time regions A and B in a way that remains valid even when they are not space-like separated, and hence when ρ_{AB} is not defined. This is the primary objective of this work.
In this paper, we introduce a novel quantity, the space-time mutual information (STMI), denoted J(A : B), which generalizes the mutual information to two arbitrary space-time regions A and B. Our approach is based on the idea of hypothesis testing: in a gedanken experiment, an experimentalist has access to a physical system at regions A and B defined at separate times. For example, in a qubit chain with qubits labelled by x = 1, 2, ..., N, region A could be qubit x = 1 at time t = t_1 and B could be qubits x = 2, 3 at time t = t_2. The experimentalist is allowed to couple a general ancilla to regions A and B of the system. This can include ordinary measurements as well as more sophisticated quantum couplings with the ancilla, such as applying a quantum perturbation at A and measuring its consequences in region B. The goal of the experimentalist is to distinguish between two situations. In the first situation, the experimentalist accesses regions A and B of the same system, and can therefore measure correlations between them. In the second situation, the couplings at A and B occur in two independent copies of the original system. By construction, in the second situation there is no correlation between the two subsystems. The difficulty of distinguishing the two situations measures the amount of correlation, which can be characterized by a relative entropy. The STMI is defined by optimizing this relative entropy over all possible system-ancilla coupling schemes. The advantage of our quantity is that it directly describes an experimentally accessible setup.
The remainder of the paper is organized as follows. In Sec. 2 we introduce the intuition behind and the definition of the STMI, and explore some of its simple properties. We show that the STMI reduces to the ordinary mutual information when regions A and B are spacelike separated. We investigate various properties of the STMI, such as its monotonicity under local operations at A and B separately. In Sec. 3 we prove that the STMI is an upper bound on connected dynamical correlation functions, a direct generalization of the corresponding inequality of Ref. [1] for static correlations. In Sec. 4 we show that, in certain cases, when three subregions are considered, our quantity satisfies a space-time generalization of the Markov property. In Sec. 5 we draw a connection between the STMI and quantum channel discrimination and use this to prove additivity of the STMI in a special setting. Sec. 6 proposes a simplification of the definition of the STMI, which holds in a restricted regime and which we apply to obtain semi-analytic results for the examples of Sec. 7. Finally, we introduce a classical counterpart of the STMI in Sec. 8 and discuss conclusions and outlook in Sec. 9.

General intuition
Before presenting the rigorous definition of the STMI, we would like to provide some intuition through a simplified version of this quantity. Consider a system in an initial state ρ_in defined on a Hilbert space on region AĀ, which evolves by time evolution U into an output state ρ_out ≡ U ρ_in U†, which can be partitioned into regions B and B̄. Here, A and B denote two subregions before and after the evolution, respectively, as in Fig. 1(a). We would like to introduce a generalization of the mutual information that is applicable to general subregions A and B, in particular when these are causally connected. In the latter case the standard definition of mutual information does not apply, as there is no joint state on AB. To overcome this issue we couple subsystem A to an ancilla W, which plays the role of an idler, allowing us to "carry to the future" the information encoded in A, as in Fig. 1(b). We denote the resulting state by ρ_BW. The operator V that couples ancilla and system, as well as the dimension of W, are for now arbitrary. Without loss of generality, as we will see momentarily, we can always assume V to be unitary and the initial state of W to be pure. Now, recall that, when A and B are spatially separated and can be embedded in the same Hilbert space, the standard mutual information is the relative entropy between the connected state reduced on AB, ρ_{AB}, and the disconnected state ρ_A ⊗ ρ_B. By analogy, in the case when A and B are causally connected, we consider the relative entropy between the connected state ρ_BW and an analog of the disconnected state ρ_A ⊗ ρ_B. This disconnected state should be such that subregion B is unaffected by the presence of the perturbation V acting on A. The natural choice for such a state is then given in Fig.
1(c), which we write as ρ_{B,0} ⊗ ρ_W, where ρ_{B,0} denotes the unperturbed evolved state in B, and ρ_W = tr_B(ρ_BW) is the state of W after coupling with A, which is determined by ρ_in and the coupling V (and is therefore independent of the time evolution U). Intuitively, the state ρ_{B,0} ⊗ ρ_W is the state that determines the disconnected term ⟨O_B⟩⟨O_A⟩ in correlation functions, for any operators O_A and O_B. Finally, we define the space-time mutual information J_1(A : B) by maximizing the relative entropy over the ancilla-system coupling V:

J_1(A : B) = sup_V S(ρ_BW | ρ_{B,0} ⊗ ρ_W).    (2.1)

We now see that it is sufficient to consider unitary couplings between system and ancilla. Indeed, if we take V to be a generic quantum channel, this is equivalent to a unitary coupling to a bigger W followed by a partial trace, which can only reduce the relative entropy, so for the purpose of taking the supremum it is sufficient to consider unitaries. Since we allow W to be arbitrarily large, we can also take the initial state of W to be pure: if the initial state is mixed, we can purify it by enlarging W. As we will show below, this definition already satisfies two important requirements: it reduces to the standard mutual information when A and B are spatially separated, and it bounds all space-time correlation functions of operators supported on A and B, with any normal (i.e., defined on the Schwinger-Keldysh time contour) time ordering.
The general definition of the STMI is similar to Eq. (2.1), except that it is defined for N copies of the initial state. In the next subsection we shall introduce the general definition, based on a more rigorous setup of quantum hypothesis testing.

Definition
In this subsection we provide the rigorous reasoning behind our definition of the STMI, and present the general definition. We begin by reviewing the hypothesis-testing interpretation of relative entropy [2,3]. To this aim, consider a black box that may contain either N copies of a quantum state ρ, or N copies of a quantum state σ. We are allowed to perform arbitrary measurements on this N-copy system to tell whether it is ρ^{⊗N} or σ^{⊗N}. Now, if we make the hypothesis that the state is σ^{⊗N} and carry out some measurement, we can compute the probability of a measurement outcome assuming the state is σ^{⊗N}. If our hypothesis is correct, this probability approaches 1 at large N, but if our hypothesis is incorrect, i.e. if the state is actually ρ, then the typical result has a smaller probability P_N(ρ|σ), which is lower bounded in terms of the relative entropy: P_N(ρ|σ) ≥ e^{−N S(ρ|σ)}. Here, S(ρ|σ) = tr(ρ log ρ − ρ log σ). A smaller probability means that one can conclude the hypothesis is wrong with a higher confidence; therefore, the relative entropy sets the fastest rate at which one can identify a wrong hypothesis. If ρ and σ are close to each other, S(ρ|σ) is small and it is difficult to distinguish them. In contrast, if σ is not full rank, there are states that never appear in σ. For example, in a qubit system with basis states |0⟩, |1⟩, if σ = |0⟩⟨0|, then the probability of observing |1⟩ is zero. Thus, if the probability of |1⟩ is nonzero in ρ and we measure in this basis, we are certain that the state is ρ the moment we observe |1⟩, which means P_N = 0 at finite N. This corresponds to a diverging relative entropy.
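The divergence mechanism described above is easy to see numerically. Below is a small sketch (ours; the helper name is an assumption, not from the paper) of S(ρ|σ) = tr(ρ log ρ − ρ log σ), which is finite for full-rank σ and infinite as soon as ρ has weight on the null space of σ:

```python
# Illustration (ours): S(rho|sigma) = tr(rho log rho - rho log sigma), in nats.
import numpy as np

def rel_entropy(rho, sigma, tol=1e-12):
    """Quantum relative entropy; returns inf if supp(rho) leaves supp(sigma)."""
    p = np.linalg.eigvalsh(rho)
    p = p[p > tol]
    t1 = np.sum(p * np.log(p))                # tr(rho log rho)
    q, V = np.linalg.eigh(sigma)
    w = np.real(np.einsum('ij,ik,kj->j', V.conj(), rho, V))  # <v_j|rho|v_j>
    if np.any((q <= tol) & (w > tol)):
        return np.inf                         # rho sees sigma's null space
    mask = q > tol
    return float(t1 - np.sum(w[mask] * np.log(q[mask])))

rho = np.diag([0.5, 0.5])
print(rel_entropy(rho, np.diag([0.9, 0.1])))   # finite, ~0.511
print(rel_entropy(rho, np.diag([1.0, 0.0])))   # inf: sigma = |0><0| is not full rank
```

The second call reproduces the qubit example from the text: once |1⟩ is observed, the hypothesis σ = |0⟩⟨0| is excluded at finite N, and the relative entropy diverges.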
For two spacelike separated regions A, B, the mutual information is a relative entropy, I(A : B) = S(ρ_{AB}|ρ_A ⊗ ρ_B). Thus the mutual information determines the probability that ρ_{AB} is mistaken for the uncorrelated state ρ_A ⊗ ρ_B. Inspired by this interpretation, as a space-time generalization we consider the two situations illustrated in Fig. 2. In both cases, the experimentalist controls the ancilla W and the gates V_A, V_B which couple W to the physical system L. This coupling characterizes the most general experiment that can occur, which includes measurements of correlation functions in the original system, as well as more general quantum processes. For example, we can swap a qubit in A with a qubit in W and later measure the swapped-out qubit together with B in an entangled basis. In situation 1, W is coupled with the same N copies of L at the space-time regions A and B. In situation 0, the coupling at B occurs with a new set of N copies of the system, such that there is no correlation between B and W. The experimentalist does not know whether it is situation 0 or 1. If they conjecture that it is situation 0, while it is actually situation 1, the rate of finding out that the conjecture is wrong is determined by the relative entropy of the output states of W, denoted σ^{(N)}_{W,1} and σ^{(N)}_{W,0} respectively. Thus it is natural to define the space-time mutual information J(A : B) by optimizing this relative entropy over the choice of V_A, V_B:

J(A : B) = sup_{N, V_A, V_B} (1/N) S(σ^{(N)}_{W,1} | σ^{(N)}_{W,0}).    (2.2)

More formally, this setup is an example of a quantum algorithmic measurement (QUALM), defined in Ref. [4]. The two situations in Fig. 2 are denoted as two "lab oracles," where the intrinsic dynamics U is given by nature, while the experimentalist has the option of choosing the coupling V_A, V_B between L and W. A QUALM refers to an algorithm, i.e.
a choice of gates V_A, V_B for the purpose of achieving a particular task, similar to how a quantum algorithm is chosen to achieve a certain classical computation. In our case, the task is to distinguish the two lab oracles, with the optimal QUALM obtained by maximizing the relative entropy between the two output states. The definition (2.2) can be simplified by realizing that the supremum over V_B can be achieved explicitly. Denote the state before applying V_B, on B^N and W, as ρ_{B^N W,a}, a = 0, 1.

One can see that σ (N )
W,a is related to ρ B N W,a by a quantum channel induced by V B followed by a partial trace over B: Due to the monotonicity of relative entropy under quantum channels, we have S σ More explicitly, we can define W = W A W B , and only W A is acted upon by the coupling V A .W B has an initial state that is in direct product with W A , and it has the same size as B. In this situation, If we take V B to be a swap between W B and B, σ W,a is identical to ρ B N W A ,a , so that the relative entropy is the same.Therefore the swap operator achieves the optimization over V B , and we can directly define the STMI using the state ρ B N W,a : where from now on we shall drop the subscript 1 from the connected state ρ B N W for ease of notation, and we shall also drop the subscript A from W and V .In the above equation we have used the fact that ρ B N W,0 is by design in a factorized form As a side remark, one special choice of the ancilla and its coupling to subsystem A is to choose W = W 1 W 2 , in which W 1 and W 2 each has the same Hilbert space dimension as that of A, and they are prepared in a maximally entangled state with each other.If we choose V to be a SWAP gate between A and W 1 , ρ BW is equivalent to the superdensity operator defined in Ref. [5].

Properties of Space-time Mutual Information
We now explore a few basic properties of J(A : B).
Maximal size of W. In our definition of the STMI, there is no restriction on the size of the ancilla W. However, W does not need to be arbitrarily large. Without loss of generality, we can assume the initial states of AĀ and of W to be pure, |Φ_{AĀ}⟩ and |Φ_W⟩; if that is not the case, we can always enlarge the system and introduce a purification. For simplicity of notation let us consider the case with one copy. After applying the unitary V we obtain a pure state |Ψ_{AĀW}⟩ = V |Φ_{AĀ}⟩ ⊗ |Φ_W⟩. Expanding this state in an arbitrary basis |a_A⟩ of A makes it explicit that the rank of the reduced state of W is always bounded by d_A^2, with d_A the Hilbert space dimension of A. Therefore it is always sufficient to take dim W = d_A^2. The discussion easily generalizes to N copies, in which case it is sufficient to take dim W = d_A^{2N}. In other words, the size of W (number of qudits) can be taken to be that of 2N copies of A. Additionally, by arbitrariness of V, we can choose |Φ_W⟩ to be the EPR state (or any other fixed reference state).
An alternative expression. We can decompose J(A : B) into two terms:

S(ρ_{B^N W} | ρ^{⊗N}_{B,0} ⊗ ρ_W) = I^{(N)}(B : W) + S(ρ_{B^N} | ρ^{⊗N}_{B,0}),    (2.8)

where ρ_{B^N} = tr_W(ρ_{B^N W}). The first term is the ordinary mutual information in the state ρ_{B^N W}, while the second term is the relative entropy between the state of B with and without the coupling to W. The second term is a consequence of the causal influence of A on B, which highlights the fact that even if we do not access the ancilla W, the coupling with W still has a nontrivial effect on the state of B. Physically, these two terms represent two ways to distinguish the correlated state ρ_{B^N W} from the uncorrelated state ρ^{⊗N}_{B,0} ⊗ ρ_W. The second term is sensitive to how much the coupling can change the state of B, while the first term implies that even if the state of B does not change, one can still measure the correlation between A and B by measuring that between B and W. For example, consider the simple case where the evolution operator is trivial and A and B are the same spatial region. If initially A and Ā are in a maximally entangled EPR pair state, and W is in a maximally entangled EPR pair state of ancilla subregions W_1 and W_2, each with the same dimension as A, then we can take V to be a SWAP gate between A and W_1, after which ρ_B = ρ_{B,0}, so that the second term vanishes but the first term is nonzero. Alternatively, if we prepare W_1 in a pure state and still apply a SWAP gate, the second term is nonzero while the first term is smaller. In general, the maximization over V is achieved by a compromise between these two terms.
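The decomposition into these two terms is a simple identity of the relative entropy, which can be checked numerically. The sketch below (ours; N = 1, and all helper names are assumptions) draws a random joint state ρ_BW and a random reference ρ_{B,0}, and verifies that S(ρ_BW | ρ_{B,0} ⊗ ρ_W) = I(B : W) + S(ρ_B | ρ_{B,0}):

```python
# Numerical check (ours) of S(rho_BW | rho_B0 x rho_W) = I(B:W) + S(rho_B | rho_B0).
import numpy as np
rng = np.random.default_rng(0)

def entropy(rho):
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log(p)))

def rel_entropy(rho, sigma):
    """S(rho|sigma), assuming sigma full rank."""
    q, V = np.linalg.eigh(sigma)
    w = np.real(np.einsum('ij,ik,kj->j', V.conj(), rho, V))
    return float(-entropy(rho) - np.sum(w * np.log(q)))

def random_state(d):
    X = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = X @ X.conj().T
    return rho / np.trace(rho).real

dB = dW = 2
rho_bw = random_state(dB * dW)            # "connected" joint state on BW
rho_b0 = random_state(dB)                 # unperturbed reference state on B
r = rho_bw.reshape(dB, dW, dB, dW)
rho_b = np.einsum('ijkj->ik', r)          # tr_W
rho_w = np.einsum('ijik->jk', r)          # tr_B

lhs = rel_entropy(rho_bw, np.kron(rho_b0, rho_w))
mi = entropy(rho_b) + entropy(rho_w) - entropy(rho_bw)
rhs = mi + rel_entropy(rho_b, rho_b0)
print(abs(lhs - rhs))                     # ~1e-14: the identity holds exactly
```

The identity follows by expanding log(ρ_{B,0} ⊗ ρ_W) = log ρ_{B,0} ⊗ 1 + 1 ⊗ log ρ_W inside the trace, so it holds for any full-rank reference, not just the optimal one.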
Reduction to ordinary mutual information. If B and A are space-like separated, so that we can define a quantum state ρ_{AB}, gates applied to A never affect the state of B, so that ρ_{B^N} = ρ^{⊗N}_{B,0}. In this case the second term in (2.8) vanishes. In addition, the mutual information I^{(N)}(B : W) satisfies the monotonicity I^{(N)}(B : W) ≤ N I(A : B), where I(A : B) is defined in the original system. Furthermore, equality is achieved by choosing V to be a swap gate, so that we obtain J_N(A : B) = I(A : B).
Monotonicity. The mutual information is monotonically non-increasing when a quantum channel is applied to A or B separately. The same applies to the STMI. For region B, this follows directly from the monotonicity of relative entropy [6]: for a generic quantum channel N_B applied to B, we have

S(N_B(ρ_{B^N W}) | N_B(ρ^{⊗N}_{B,0} ⊗ ρ_W)) ≤ S(ρ_{B^N W} | ρ^{⊗N}_{B,0} ⊗ ρ_W).

For a quantum channel N_A applied to A, the proof is slightly less trivial. One can consider the dilation of N_A, i.e. an isometry K from A to a bigger system A ⊗ W′, such that N_A is obtained by applying this isometry and then tracing over W′, i.e. N_A(ρ_A) = tr_{W′}(K ρ_A K†). Computing J_N(A : B) after applying the channel N_A requires applying V (which couples A and W) after applying K. We can then merge W′ with W and view V ∘ K as a coupling between A and the bigger ancilla W ⊗ W′. Therefore, for arbitrary V, we can apply the monotonicity of relative entropy under the trace over W′. Taking the supremum over V of the right-hand side yields the STMI J_N(A : B) after applying the quantum channel N_A, while the left-hand side, upon optimization over V, yields the STMI without applying N_A. This implies monotonicity of J_N(A : B) with respect to the application of a quantum channel on either A or B.
As a special case of the above, J_N(A : B) is non-increasing upon tracing out part of A or B. In other words, for disjoint regions A, C at the initial time, and disjoint regions B, D at the final time, we have J(A : B) ≤ J(AC : B) and J(A : B) ≤ J(A : BD).
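The data-processing step used repeatedly above can be checked directly: under a partial trace (a particular quantum channel), the relative entropy can only decrease. A minimal numerical sketch (ours, with assumed helper names):

```python
# Check (ours): S(tr_C rho | tr_C sigma) <= S(rho | sigma) (data processing).
import numpy as np
rng = np.random.default_rng(1)

def rel_entropy(rho, sigma):
    """S(rho|sigma) for full-rank sigma."""
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]
    q, V = np.linalg.eigh(sigma)
    w = np.real(np.einsum('ij,ik,kj->j', V.conj(), rho, V))
    return float(np.sum(p * np.log(p)) - np.sum(w * np.log(q)))

def random_state(d):
    X = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = X @ X.conj().T
    return rho / np.trace(rho).real

ptrace = lambda r: np.einsum('ijkj->ik', r.reshape(2, 2, 2, 2))  # trace 2nd qubit
rho, sigma = random_state(4), random_state(4)
full = rel_entropy(rho, sigma)
reduced = rel_entropy(ptrace(rho), ptrace(sigma))
print(reduced <= full)   # True: monotonicity under partial trace
```

Running this over many random draws never produces a violation, as guaranteed by the monotonicity theorem cited in the text.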
For the ordinary mutual information, when the inequality above is saturated, i.e. I(AC : B) = I(A : B), the state ρ_{ABC} satisfies the Markov property and can be reconstructed from its marginal on AB. We obtain similar results for the STMI, which we discuss in Sec. 4.
Absence of upper bound. In Eq. (2.8), the mutual information term is always finite, upper bounded by 2 log d_B; the relative entropy term can diverge. The divergence occurs if ρ_{B,0} is not full rank and ρ_{B^N} has a nonzero weight in the null space of ρ^{⊗N}_{B,0}. For example, if there is a conserved charge and the original state ρ_{B,0} is supported in a charge range [q_1, q_2], then as long as we can tune V to change the charge of B beyond this range, J(A : B) diverges. If ρ_{B,0} is full rank, with a nonzero minimal eigenvalue p_min, one can prove that S(ρ_{B^N} | ρ^{⊗N}_{B,0}) ≤ N log(1/p_min), so that we obtain J(A : B) ≤ 2 log d_B + log(1/p_min).

Relation to Choi state mutual information. A quantity related to the STMI is the mutual information of the Choi state corresponding to a given unitary evolution, which, in terms of Fig. 3, is given by I(B : W_2) [7]. This quantity can be viewed as the mutual information term in (2.8) where we choose V to be the SWAP between A and W_2, with W = W_1 W_2. We then infer that the Choi state mutual information cannot be larger than the STMI: I(B : W_2) ≤ J(A : B). Additionally, note that in Fig. 3 Ā is in the infinite-temperature state. The space-time mutual information J(A : B), on the other hand, is applicable for any initial state, including states where A and Ā are entangled.
Vanishing condition. The condition J_N(A : B) = 0 requires ρ_{B^N W} = ρ^{⊗N}_{B,0} ⊗ ρ_W for all V. This obviously requires all correlation functions between A and B to be disconnected. Conversely, assume that all (Keldysh-ordered) correlation functions between A and B factorize (taking N = 1 for simplicity). Then the BW correlation functions in the system coupled with the ancilla W also factorize: decomposing the isometry V = Σ_i V_i ⊗ |i⟩, with |i⟩ a basis of W and the V_i operators acting on A, and writing O^{ji}_W = ⟨j|O_W|i⟩, every BW correlator becomes a sum of Keldysh-contour-ordered AB correlators. This in turn implies that ρ_BW necessarily factorizes, ρ_BW = ρ_B ⊗ ρ_W with ρ_B = ρ_{B,0}, thus implying J_1(A : B) = 0. Similarly, when we couple N copies of A to W, we can again express the BW correlations as sums of Keldysh-contour-ordered correlators of A and B, which proves that J_N(A : B) = 0 if all AB correlation functions factorize.
Thermodynamics. If U is a chaotic time evolution U = e^{−iHt} and B is smaller than half of the system, for long enough t the subregion B will reach thermal equilibrium, so that the only dependence of ρ_B on V is through thermodynamic quantities. For simplicity let us consider a system with energy conservation and no other conservation law. In this case, the only possible change caused by the coupling V_A with the ancilla is a change of temperature. Thus we have ρ_B = ρ_{β′} and ρ_{B,0} = ρ_β, where ρ_{β′} and ρ_β are thermal states at inverse temperatures β′ and β, respectively. Here we have assumed that V_A does not cause a large energy fluctuation in the system, such that β′ is well-defined. In this case, the mutual information I(B^N : W) is negligible, and the main contribution to the STMI comes from the relative entropy term S(ρ_{β′} | ρ_β), which is the change of thermal free energy. More generically, V_A can cause a large energy fluctuation. For example, consider a single copy of the system coupled with W, where V_A creates a Schrödinger cat state in the system, so that the reduced density matrix of BW is a mixture of thermal states at different energies correlated with the states |0⟩ and |1⟩ of W. If this happens, there will be a nontrivial classical mutual information I(B : W). Since energy is the only variable that W can correlate with, and quantum correlation cannot be preserved in B after a chaotic evolution, the mutual information is limited by the energy uncertainty. If V_A can change the energy of B by at most ∆E, then the mutual information is upper bounded by ∼ log(∆E/δE), with δE = (⟨H²⟩_B − ⟨H⟩²_B)^{1/2} the intrinsic energy uncertainty. We expect this to be a small contribution. If we consider the limit where B is a finite (smaller than half) portion of the entire system, and take the thermodynamic limit, then the mutual information term is at most proportional to log |B|, while the relative entropy term is proportional to |B| as long as β′ ≠ β.
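The statement that the relative entropy between thermal states is a free-energy change can be made precise as S(ρ_{β′} | ρ_β) = β [F_β(ρ_{β′}) − F_β(ρ_β)], with F_β(σ) = tr(σH) − S(σ)/β. The snippet below (our own check, for a random Hamiltonian; all names are assumptions) verifies this identity:

```python
# Check (ours): S(rho_b' | rho_b) = beta * (F_beta(rho_b') - F_beta(rho_b)).
import numpy as np
rng = np.random.default_rng(2)

def entropy(rho):
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log(p)))

def rel_entropy(rho, sigma):
    q, V = np.linalg.eigh(sigma)
    w = np.real(np.einsum('ij,ik,kj->j', V.conj(), rho, V))
    return float(-entropy(rho) - np.sum(w * np.log(q)))

def thermal(H, beta):
    e, U = np.linalg.eigh(H)
    w = np.exp(-beta * e)
    w /= w.sum()
    return (U * w) @ U.conj().T      # e^{-beta H} / Z

X = rng.normal(size=(4, 4))
H = (X + X.T) / 2                    # random Hamiltonian
beta, beta_p = 1.0, 2.0
rho_b, rho_bp = thermal(H, beta), thermal(H, beta_p)
F = lambda s: float(np.real(np.trace(s @ H))) - entropy(s) / beta
lhs = rel_entropy(rho_bp, rho_b)
rhs = beta * (F(rho_bp) - F(rho_b))
print(abs(lhs - rhs))                # ~1e-15: relative entropy = beta * Delta F
```

The identity follows from log ρ_β = −βH − log Z, so the relative entropy directly measures the free-energy cost of sitting at the "wrong" temperature β′, consistent with the extensive (∝ |B|) scaling claimed above.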

Bound on space-time correlation functions
An important property of the STMI is that, as we now show, it bounds any two-point function between two possibly causally connected subregions. This generalizes the bound on spatial correlation functions provided by the standard mutual information [1].

Theorem 1
The STMI bounds all two-point correlation functions between subsystems A and B. Explicitly, for any N ≥ 1, we have bounds (3.1) and (3.2) on the retarded and symmetric correlation functions, respectively. The numerators on the right-hand sides of these inequalities correspond to the retarded and connected symmetric two-point functions; these two correlators are sufficient to linearly generate any other Keldysh time ordering (such as Feynman, i.e. time-ordered, correlators).
To prove the theorem, it is sufficient to consider the single-copy STMI J_1(A : B). We take a specific choice of ancilla-system coupling (3.3), built from operators X_i acting on A, with i = 0, 1, 2, which satisfy Σ_i X†_i X_i = 1, thus guaranteeing that the coupling to the ancilla is an isometry, which can then be extended to a unitary operator. The coupling (3.3) can be thought of as a controlled-O_A gate and is chosen so that, as we will see, it reproduces the two-point function we want to bound. Now, to prove the bound (3.1) on the retarded two-point function, we define a suitable operator Y_W acting on W, such that the retarded two-point function is the expectation value of O_B Y_W in the state ρ_BW. The bound then follows from a sequence of inequalities in which we first apply the quantum Pinsker inequality and then Hölder's inequality. Note that the disconnected state does not contribute to the trace of (ρ_BW − ρ_{B,0} ⊗ ρ_W) O_B Y_W, due to the form of Y_W, which is equivalent to saying that the retarded two-point function does not have a disconnected component. The vanishing of this contribution also implies that only the mutual information term in (2.8) contributes to the bound, i.e. we obtain a tighter bound in which J_1 is replaced by the mutual information term alone. Similar steps lead to the bound (3.2) on the symmetric two-point function, this time with a different choice of the operator Y_W. When A and B are causally disconnected, the symmetric two-point function reduces to the spatial one and we recover the standard bound of [1], with the correct numerical prefactor.
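The two inequalities used in this proof can be tested numerically. The sketch below (ours; helper names are assumptions) checks the quantum Pinsker inequality S(ρ|σ) ≥ ‖ρ − σ‖₁²/2 and the Hölder-type bound |tr((ρ − σ)M)| ≤ ‖M‖_∞ ‖ρ − σ‖₁ for random states and a random observable:

```python
# Check (ours): the Pinsker and Hölder inequalities used in the proof of Theorem 1.
import numpy as np
rng = np.random.default_rng(3)

def rel_entropy(rho, sigma):
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]
    q, V = np.linalg.eigh(sigma)
    w = np.real(np.einsum('ij,ik,kj->j', V.conj(), rho, V))
    return float(np.sum(p * np.log(p)) - np.sum(w * np.log(q)))

def random_state(d):
    X = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = X @ X.conj().T
    return rho / np.trace(rho).real

d = 4
rho, sigma = random_state(d), random_state(d)
delta = rho - sigma
trace_norm = np.sum(np.abs(np.linalg.eigvalsh(delta)))   # ||rho - sigma||_1

# Pinsker: S(rho|sigma) >= ||rho - sigma||_1^2 / 2
print(rel_entropy(rho, sigma) >= 0.5 * trace_norm ** 2)  # True

# Hölder: |tr((rho - sigma) M)| <= ||M||_inf * ||rho - sigma||_1
Y = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
M = (Y + Y.conj().T) / 2                                 # random observable
op_norm = np.max(np.abs(np.linalg.eigvalsh(M)))
print(abs(np.trace(delta @ M)) <= op_norm * trace_norm)  # True
```

Chaining the two inequalities is exactly how a connected correlator gets bounded by (a function of) the relative entropy defining J_1.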
We emphasize that taking the supremum over V in the definition of J_1 is crucial for the above proof of the bounds (3.1), (3.2). Indeed, consider e.g. fixing V = S_{AW_1} ⊗ 1_{W_2}, where S_{AW_1} is the swap between A and W_1. With this choice, ρ_BW constitutes an example of the superdensity operator defined in [5]. In this case, as we show in appendix A, using similar steps as above, one can only prove weaker bounds than (3.1), (3.2), with an overall dimensional suppression factor.

Markov property
In the case of the ordinary mutual information, for three regions ABC one can define the conditional mutual information I(A : C|B), which is non-negative due to the monotonicity of relative entropy under quantum channels: I(A : C|B) is the decrease of the relative entropy S(ρ_{ABC}|ρ_A ⊗ ρ_{BC}) under the partial trace over C. If I(A : C|B) = 0, the state ρ_{ABC} can be recovered from ρ_{AB} and ρ_{BC} using the Petz map [8,9]:

ρ_{ABC} = ρ_{BC}^{1/2} ρ_B^{−1/2} ρ_{AB} ρ_B^{−1/2} ρ_{BC}^{1/2}.

Since ρ_{ABC} is determined by ρ_{AB} and ρ_{BC}, correlation functions between A and BC are determined by those between A and B: for any operator O_A supported on A and O_{BC} supported on BC, the Petz map provides a corresponding operator on AB with the same expectation value. Now we consider the situation with the space-time mutual information. Consider three regions A, B, C, with B and C defined at the same future time and A defined at an earlier time, as shown in Fig. 4(a). Assume that J_N(A : BC) = J_N(A : B). Naively, the unitary V_1 coupling A and W that optimizes J_N(A : B) may be different from the unitary V_2 that optimizes J_N(A : BC). However, they must actually be the same, which can be proven by contradiction. If V_1 ≠ V_2, we can take V = V_1 and compute the relative entropy defining J_N(A : BC). If V_1 does not maximize this quantity, the relative entropy computed with V_1 is strictly smaller than J_N(A : BC) = J_N(A : B). Since both relative entropies are computed for the same system (with the same gate V_1), this contradicts the monotonicity of relative entropy under the partial trace over C. Therefore we have proven that the same V optimizes both quantities. Consequently, we can apply the Petz map to the joint state of the system and ancilla. Interestingly, the map acts trivially on W: for any operators O_{BC} and O_W, the Petz map provides a corresponding operator ÕB supported on B alone. This in turn implies that any correlation function between A and BC that one can measure indirectly, through measuring the correlation between BC and W, can actually be converted into a measurement that only involves A and B. In other words, C does not directly correlate with A.
The correlation between A and C is only generated through the BC correlation and the AB correlation. This is exactly parallel to the Markov condition in the ordinary spatial mutual information case.
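The Petz recovery underlying this argument can be made explicit in a toy example (our own illustration, with assumed helper names): the state ρ_ABC = Σ_b q_b ρ_A^{(b)} ⊗ |b⟩⟨b|_B ⊗ ρ_C^{(b)} satisfies I(A : C|B) = 0, and the formula ρ_{BC}^{1/2} ρ_B^{−1/2} ρ_{AB} ρ_B^{−1/2} ρ_{BC}^{1/2}, with each factor embedded with identities on the missing subsystems, reconstructs it exactly:

```python
# Check (ours): Petz recovery of a quantum Markov state A-B-C.
import numpy as np
rng = np.random.default_rng(4)

def random_state(d):
    X = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = X @ X.conj().T
    return rho / np.trace(rho).real

def mat_pow(rho, alpha, tol=1e-12):
    """Hermitian (pseudo-)power rho^alpha via eigendecomposition."""
    e, U = np.linalg.eigh(rho)
    f = np.where(e > tol, np.abs(e) ** alpha, 0.0)
    return (U * f) @ U.conj().T

I2 = np.eye(2)
P = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]       # |b><b| on B
As = [random_state(2), random_state(2)]
Cs = [random_state(2), random_state(2)]
rho_abc = sum(0.5 * np.kron(np.kron(As[b], P[b]), Cs[b]) for b in range(2))

r = rho_abc.reshape(2, 2, 2, 2, 2, 2)                # (iA,iB,iC,jA,jB,jC)
rho_ab = np.einsum('abcdec->abde', r).reshape(4, 4)  # tr_C
rho_bc = np.einsum('abcaef->bcef', r).reshape(4, 4)  # tr_A
rho_b = np.einsum('abcaec->be', r)                   # tr_A tr_C

# Embed each Petz factor into A x B x C and sandwich rho_AB (x) 1_C
K_bc = np.kron(I2, mat_pow(rho_bc, 0.5))
K_binv = np.kron(np.kron(I2, mat_pow(rho_b, -0.5)), I2)
recovered = K_bc @ K_binv @ np.kron(rho_ab, I2) @ K_binv @ K_bc
print(np.max(np.abs(recovered - rho_abc)))           # ~1e-15: exact recovery
```

For a generic (non-Markov) ρ_ABC the same formula only produces an approximation, with the recovery error controlled by I(A : C|B).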
The other situation is shown in Fig. 4(b), where regions A and B are at equal time and C is at a later time. In that case, if J_N(AB : C) = J_N(B : C), it simply means that the optimal coupling between AB and W that maximizes the relative entropy does not involve A: it is sufficient to couple the ancilla to B in the optimization defining J_N(AB : C), and A remains untouched. In this case, A is not a system that is traced out, and we cannot directly apply the Petz map. A natural question is whether this condition also implies a Markov property of the joint ancilla-system state. We leave the exploration of this problem for future work.

Quantum channel discrimination and additivity
Quantum channel discrimination can be viewed as a natural extension of quantum hypothesis testing, where the aim is to discriminate channels instead of states. The formulation of this problem resembles that of Sec. 2, although, as we now illustrate, it applies to a slightly different context. Consider two quantum channels N_1 and N_2, both with input system A and output system B. To discriminate whether the system has evolved through N_1 or N_2, one implements adaptive strategies consisting of alternating applications of the channel to be discriminated and auxiliary channels. More explicitly, the application of N_1 or N_2 is alternated with channels A^{(i)} that map the output B, jointly with an ancilla W, to the input A of the subsequent application of the channel, together with the ancilla, as shown in Fig. 5(a). After N repetitions of this process, the final state is measured. From the formulation of quantum hypothesis testing reviewed in Sec. 2.2, we know that the probability of incorrectly concluding that the channel is N_2 decays as e^{−N S(ρ^{(A)}_1 | ρ^{(A)}_2)}, where ρ^{(A)}_1 and ρ^{(A)}_2 are the output states obtained using channels N_1 and N_2, respectively, and the superscript (A) indicates that we applied a given sequence of system-ancilla channels A^{(1)}, A^{(2)}, . . ., corresponding to a particular strategy. The best strategy is then characterized by the optimization

sup_{ρ, A^{(i)}} S(ρ^{(A)}_1 | ρ^{(A)}_2),    (5.1)

where ρ is the input state in Fig. 5(a). Due to the joint convexity of the relative entropy and to the fact that ρ^{(A)}_1 and ρ^{(A)}_2 are linear in ρ, it is sufficient to restrict to pure states ρ. Quantum channel discrimination has many applications, e.g. quantum illumination, to enhance the detection of targets in the presence of thermal noise through entangled photon pairs [10]; quantum metrology, to estimate unknown parameters of quantum channels [11]; and quantum reading, which involves the use of nonclassical transmitters to read data from classical digital memories [12].

Additivity of the STMI for initial pure states
Let us now come back to the setup of Sec. 2.2. Assuming the initial state reduced on A is pure, i.e. ρ_in = ρ_Ā ⊗ ρ_A with ρ_A pure, the optimization problem (2.5) reduces to a special case of quantum channel discrimination. To see the connection, note that both the system A^N and the ancilla W in eq. (2.5) are in a pure state, and optimizing over V corresponds to optimizing over the most general pure state of the joint system A^N W. One then writes the connected state ρ_{B^N W} as N tensor copies of a channel N acting on the state ρ_{A^N W} on subregion A^N, where the channel N is obtained by tracing the time evolution in (2.5) over B̄. As shown in Fig. 5(b), this setup is equivalent, for an appropriate choice of A^(i), to the setup of Fig. 5(a), where A^(i) simply swaps in the i-th copy of A. For the disconnected state ρ_{B,0}^⊗N ⊗ ρ_W we apply a similar reasoning, but instead of N we now have the replacer channel, R(ρ) = ρ_{B,0} for any state ρ of A. We have thus reduced (2.5) to an instance of quantum channel discrimination, and can write J_N(A : B) = (1/N) sup_{ρ_{A^N W}} S(N^⊗N ρ_{A^N W} | R^⊗N ρ_{A^N W}) (5.2). In particular, J_1(A : B) identifies with the channel relative entropy between N and R [13].
We will see in Sec. 5.2 how, in the general case, the STMI (2.5) can be formulated as a "constrained" quantum channel discrimination. We now prove that, when the initial state is factorized with ρ_A pure, the STMI J_N(A : B) is independent of N, i.e. it is additive. Additivity is a fundamental question in quantum channel discrimination and, fortunately, this property was recently proven when the alternative hypothesis (i.e., the second argument in the relative entropy) is a replacer channel [13], which is precisely our setup. Theorem 1 of [13] essentially states that, for N → ∞, (5.1) is equal to the channel relative entropy sup_{ρ_AW} S(N_1 ρ_AW | N_2 ρ_AW) when N_2 is a replacer channel. Using the setup of the previous paragraph with N_1 = N and N_2 = R, we then have J_N(A : B) ≤ (5.1) = sup_{ρ_AW} S(N ρ_AW | R ρ_AW) as N → ∞, where the equality comes from the theorem mentioned above, and the inequality from the fact that J_N(A : B) corresponds to a special choice of A^(i), as shown in Fig. 5(b). Moreover, since sup_{ρ_AW} S(N ρ_AW | R ρ_AW) = J_1(A : B) due to (5.2), and J_N ≥ J_1, we conclude additivity: J_N(A : B) = J_1(A : B) for all N. We observe that, in quantum channel discrimination, there exist counterexamples to additivity when the alternative hypothesis is a more general channel [14].
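Additivity under product strategies is automatic at the level of the relative entropy, since S(ρ⊗ρ | σ⊗σ) = 2 S(ρ | σ); the nontrivial content of the theorem is that inputs entangled across copies do not help when the alternative is a replacer channel. The product-input statement can be checked in a few lines (our own numerical sketch; function names are ours):

```python
import numpy as np

def mlog(rho, eps=1e-12):
    # matrix log via eigendecomposition, eigenvalues clipped at eps
    w, v = np.linalg.eigh(rho)
    return (v * np.log(np.clip(w, eps, None))) @ v.conj().T

def rel_entropy(rho, sigma):
    return float(np.real(np.trace(rho @ (mlog(rho) - mlog(sigma)))))

rng = np.random.default_rng(0)

def rand_state(d):
    # random full-rank density matrix
    a = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = a @ a.conj().T
    return rho / np.trace(rho).real

rho, sigma = rand_state(2), rand_state(2)
one_copy = rel_entropy(rho, sigma)
two_copies = rel_entropy(np.kron(rho, rho), np.kron(sigma, sigma))
# product inputs give exactly twice the one-copy value
```

Any failure of additivity must therefore come from probes entangled across the N copies, which is exactly what Theorem 1 of [13] rules out for replacer alternatives.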
We currently do not know whether additivity holds for general initial states. In Appendix B we show that the N-replicated optimization problem (2.5) admits a stationary point (i.e., a V satisfying δ_V S(ρ_BW | ρ_BW,0) = 0) of the form V = V_1^⊗N, where V_1 is a stationary point for J_1. This is a necessary but not sufficient condition for additivity.

STMI as a constrained quantum channel discrimination
In the previous section we showed that when the initial state ρ_A is pure, the STMI can be viewed as a quantum channel discrimination. In this section we discuss the situation where A and Ā are entangled, so that in particular ρ_A is not pure. We shall see that it is most natural to think of this case as a constrained quantum channel discrimination. First, we can assume that the initial state of the system ρ_in is pure by extending Ā and tracing over the extension. We can then rewrite the STMI as an optimization of the relative entropy S(N^⊗N ρ | R^⊗N ρ) over a constrained set S of states, eq. (5.4), where ρ_Ā = Tr_A ρ_in and ρ is a state on A^N Ā^N W. Also, N^⊗N denotes N tensor copies of the channel N : ĀA → B, and R is the replacer channel introduced in Sec. 5.1. Due to the joint convexity of the relative entropy and to the convexity of S, it suffices to optimize over pure states. One might ask whether the constraint S in (5.4) can be implemented as a quantum channel, in such a way that (5.4) can still be viewed as an unconstrained channel discrimination. More explicitly, we ask whether there is a quantum channel Q : DW → Ā^N A^N W, for some subsystem D, whose image Q(DW) is exactly S and which acts trivially on W. The latter condition is necessary to maintain the structure of channel discrimination, i.e. W plays the role of an idler on which no channel is applied. The answer to this question is negative. Indeed, any state of S can be written as ρ = V ρ_in V†, where V : ĀA → ĀAW is an isometry acting trivially on Ā, ρ_in is a pure state, and for simplicity we are assuming N = 1. This means that Tr_Ā ρ has a fixed entanglement spectrum, independent of V. Assume that the quantum channel Q exists. Then, considering two arbitrary initial states σ_1 and σ_2 on DW, the states ρ_1 = Tr_Ā Q(σ_1) and ρ_2 = Tr_Ā Q(σ_2) should have the same entanglement spectrum. If their eigenstates are different, then p ρ_1 + (1 − p) ρ_2 = Tr_Ā Q(p σ_1 + (1 − p) σ_2) will not have the same eigenvalues for 0 < p < 1, and we thus infer that ρ_1 = ρ_2. This implies that Tr_D σ_1 = Tr_A ρ_1 = Tr_A ρ_2 = Tr_D σ_2, which contradicts the assumption that σ_1 and σ_2 are arbitrary.
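The fixed-entanglement-spectrum obstruction used in this argument is easy to verify numerically: for any isometry of the form Id_Ā ⊗ V_A, the spectrum of Tr_Ā V ρ_in V† equals that of ρ_A, independently of V_A. A minimal sketch (dimensions and helper names are our own choices):

```python
import numpy as np

rng = np.random.default_rng(1)
dA = dAbar = dW = 2  # small illustrative dimensions

def rand_isometry(d_in, d_out):
    a = rng.normal(size=(d_out, d_in)) + 1j * rng.normal(size=(d_out, d_in))
    q, _ = np.linalg.qr(a)  # columns of q are orthonormal: V^dag V = Id
    return q

psi = rng.normal(size=dAbar * dA) + 1j * rng.normal(size=dAbar * dA)
psi /= np.linalg.norm(psi)
rho_in = np.outer(psi, psi.conj())  # pure state on Abar x A

def spectrum_after(V_A):
    # V = Id_Abar x V_A acts trivially on Abar; spectrum of Tr_Abar V rho V^dag
    V = np.kron(np.eye(dAbar), V_A)
    rho = (V @ rho_in @ V.conj().T).reshape(dAbar, dA * dW, dAbar, dA * dW)
    return np.sort(np.linalg.eigvalsh(np.einsum('ijik->jk', rho)))

s1 = spectrum_after(rand_isometry(dA, dA * dW))
s2 = spectrum_after(rand_isometry(dA, dA * dW))
# s1 == s2: the entanglement spectrum is fixed, whatever the isometry
```

The two random couplings produce reduced states with identical spectra (the nonzero part being the spectrum of ρ_A, padded with zeros).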

An ansatz for factorized initial states
The optimization problem (2.5) can be very non-trivial, especially for large Hilbert space dimension. In this section we propose an ansatz leading to a simplification that applies when the initial state is factorized, i.e.

ρ_in = ρ_Ā ⊗ ρ_A . (6.1)
We shall restrict to a single replica, N = 1. Taking into account the bound |W| ≤ 2|A| from the discussion around eq. (2.6), it is sufficient to take W = W_1 ⊗ W_2, with both W_1 and W_2 isomorphic to A. The ansatz consists of replacing V with a swap S between A and W_1 and optimizing over a generic initial state |ψ_W⟩ of W,

ρ_AW = S (ρ_A ⊗ |ψ_W⟩⟨ψ_W|) S† , (6.2)

as in Fig. 6.
We shall see below that the optimization reduces to a self-consistent equation for the reduced state of the ancilla W_2. This reduces the number of parameters to optimize over from 2d_A^4 − d_A^2 (an isometry from A to AW_1W_2) to d_A^2 (the number of independent mixed states on W_2). In Sec. 7 we show several numerical examples leveraging this ansatz, including MBL and thermalizing systems.
In support of this ansatz, note that when ρ_A is pure, a reasoning similar to that around (2.6) gives |W| ≤ |A|. After applying the ancilla-system coupling V, we then have a pure state jointly defined on AW, and the optimization (2.5) reduces to optimizing over such a state. This, in turn, is precisely the setup of Fig. 5(b), thus coinciding with our ansatz and proving its validity for pure ρ_A. As additional support for generic ρ_A, consider the relative entropy S(ρ_B | ρ_B,0) as defined in eq. (2.8). One can easily see that, for any ancilla-system coupling V, the corresponding value of S(ρ_B | ρ_B,0) can be reproduced by a suitable ansatz with ancilla state |ψ_W⟩. Of course, to prove that the ansatz leads to a global maximum, one would need to verify this statement for the entire expression in (2.8).
It is easy to see that (6.2) does not recover the optimum in (2.5) when A and Ā are entangled. In Appendix C we show an explicit counterexample.
We now show how the above ansatz leads to a self-consistent equation for the state of the ancilla W_2. We shall restrict to the single-replica case N = 1 for simplicity. Below, N : A → B will denote the quantum channel obtained from the evolution of the total system after tracing out B̄ and choosing ρ_Ā as the initial state for Ā. Using the ansatz, the relative entropy in (2.5) for N = 1 specializes to eq. (6.3), where ρ_AW denotes the joint state of the system and ancilla after the swap has been applied, as in (6.2). The reduced state of W_2 in |ψ_W⟩ is equivalent to ρ_W up to unitary conjugation. Since all quantities we discuss are invariant under applying a unitary operator to W_2, we can assume without loss of generality that the reduced density operators of W_1 and W_2 are the same state ρ_W.
Writing the quantum channel as N(ρ) = Σ_I K_I ρ K_I†, where the K_I are Kraus operators satisfying Σ_I K_I† K_I = Id, it is useful to view the indices I as labeling states |I⟩_E of an auxiliary system E, and to introduce the corresponding unitary dilation of the evolution. Tracing over E leads to the channel N, and tracing over B leads to the complementary channel Ñ, written in terms of a matrix Γ in eq. (6.5). In this notation, the first term in (6.3) reduces to the entropy of the ancilla's output state, so that we can express eq. (6.3) in terms of only the reduced state ρ_W, eq. (6.6). The optimization problem (2.5) then reduces to optimizing over ρ_W. The variation of (6.3) with respect to ρ_W, eq. (6.7), leads to a self-consistent equation for ρ_W, eq. (6.8), with C a normalization constant. One must still verify whether the restricted (V = SWAP) maximization is also a maximum (or at least a saddle point) of the unrestricted problem.
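The complementary channel admits a compact numerical representation: the environment state has matrix elements Tr[K_I ρ K_J†], the standard Stinespring construction we assume here (the Γ of (6.5) is its channel-level analogue; function names are ours). For a unitary channel there is a single Kraus operator, so the environment state is trivial and its entropy vanishes, consistent with S(Ñ(ρ_W)) = 0 in the unitary case discussed below. A sketch with a dephasing example:

```python
import numpy as np

def complementary(kraus, rho):
    # environment state of the Stinespring dilation: (rho_E)_{IJ} = Tr[K_I rho K_J^dag]
    n = len(kraus)
    out = np.empty((n, n), dtype=complex)
    for i, Ki in enumerate(kraus):
        for j, Kj in enumerate(kraus):
            out[i, j] = np.trace(Ki @ rho @ Kj.conj().T)
    return out

def entropy(rho, eps=1e-12):
    w = np.clip(np.linalg.eigvalsh(rho), eps, None)
    return float(-(w * np.log(w)).sum())

Z = np.diag([1.0, -1.0]).astype(complex)
p = 0.6
kraus_deph = [np.sqrt(1 - p / 2) * np.eye(2, dtype=complex),
              np.sqrt(p / 2) * Z]              # dephasing channel
plus = np.full((2, 2), 0.5, dtype=complex)     # |+><+|

S_env = entropy(complementary(kraus_deph, plus))  # > 0: information leaks to E
S_env_unitary = entropy(complementary([np.eye(2, dtype=complex)], plus))  # = 0
```

For the dephasing example the environment state is diag(1 − p/2, p/2), so its entropy quantifies how much the off-diagonal information has leaked out of B.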
We now verify that the above ansatz is a stationary point of the optimization problem. Taking into account the factorization of the initial state (6.1), the variation of the relative entropy S(ρ_BW | ρ_BW,0) inside the sup in eq. (2.5), with respect to an infinitesimal change V → (Id + iT)V with T an infinitesimal Hermitian operator acting on AW, is given by eq. (6.9), where ρ̃_W = Tr_B ρ_BW. We now plug in the ansatz by taking W = W_1 ⊗ W_2, setting V to be the swap operator between A and W_1, and initializing the ancilla in a generic state |ψ⟩. As a consequence, ρ̃_W = ρ_in ⊗ ρ_W, where ρ_W was defined around (6.3), ρ_AW = (|ψ⟩⟨ψ|)_{AW_2} ⊗ (ρ_in)_{W_1}, and the above variation simplifies to eq. (6.11). If N is unitary, the first term in the commutator vanishes; plugging in (6.13), we find that the latter is a stationary point of S with respect to the general variation (6.9). For a generic N, we instead plug in (6.8) and conclude that the commutator in (6.11) vanishes. We have thus shown that, for a generic quantum channel N, (6.8) is a stationary point. It would be interesting to establish whether the ansatz is a local or global maximum in the general case. Finally, we note that if the quantum channel N : A → B is unitary, the ansatz allows us to solve the optimization problem (2.5) analytically. In this case the ancilla's entropy vanishes, S(Ñ(ρ_W)) = 0, and eq. (6.8) reduces to eq. (6.13). Plugging the solution into the relative entropy (6.3) leads to the space-time mutual information (6.14). This indicates that, when the initial state is pure, the state of the ancilla can be chosen so that J_1(A : B) diverges, even when the Hilbert spaces of A and B are finite-dimensional. As we commented around eq. (2.8), J_1 is unbounded, unlike the standard mutual information.

Examples
In this section we study the behavior of the space-time mutual information in various quantum systems. We first consider a single-qubit system subject to two different types of evolution and contrast the behavior of the STMI in the two situations. We then turn to many-body systems in two extreme cases: fully thermalizing and many-body localizing dynamics.

Single-qubit system
As a first example we consider the case where A is the entire system and consists of a single qubit. The time evolution is defined by a quantum channel N which maps it to an output state in B, also a single qubit. We consider two types of quantum channel, the depolarizing channel N_dpl and the dephasing channel N_dph, defined by

N_dpl(ρ) = (1 − p) ρ + (p/2) Tr(ρ) Id , (7.1)

N_dph(ρ) = (1 − p/2) ρ + (p/2) σ_3 ρ σ_3 , (7.2)

with 0 ≤ p ≤ 1. Here σ_i are the Pauli matrices. We focus on the STMI (2.5) with a single replica of the system, N = 1, i.e. J_1(A : B). For the dephasing channel, the STMI asymptotes to a finite value, meaning that some information is preserved.
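The contrasting limits of the two channels can be verified directly: at p = 1 the depolarizing channel erases everything, while the dephasing channel keeps the diagonal. A sketch assuming the standard single-qubit conventions consistent with the limits described in the text (function names are ours):

```python
import numpy as np

Z = np.diag([1.0, -1.0])

def N_dpl(rho, p):
    # depolarizing: shrinks the whole Bloch vector by (1 - p)
    return (1 - p) * rho + p * np.trace(rho) * np.eye(2) / 2

def N_dph(rho, p):
    # dephasing: shrinks only the transverse Bloch components by (1 - p)
    return (1 - p / 2) * rho + (p / 2) * Z @ rho @ Z

rho = np.array([[0.8, 0.4], [0.4, 0.2]])  # a pure state tilted off the z axis

out_dpl = N_dpl(rho, 1.0)  # fully mixed: no memory of the input
out_dph = N_dph(rho, 1.0)  # diagonal survives: classical memory remains
```

At p = 0 both maps reduce to the identity, consistent with recovering (6.14) in that limit.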
To numerically optimize over V, we initialize V to a random isometry and perform iterative gradient updates on ρ_AW, with step size set by a positive number η; the explicit expression of the gradient V δS/δV can be obtained from (6.9). Fig. 7 shows J_1(A : B) for the depolarizing and dephasing channels as a function of p. Close to p = 0 the evolution approaches the identity, and we recover (6.14). As p → 1, N_dpl becomes a fully depolarizing channel: all information about the past is lost and the space-time mutual information approaches zero. On the other hand, as p → 1 the dephasing channel N_dph still preserves classical information about the initial state, and therefore the space-time mutual information approaches a nonzero constant.
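The qualitative p-dependence just described can be reproduced without the full optimization over V by fixing the probe to a maximally entangled system-ancilla state (the β = 0 superdensity-operator choice mentioned in Sec. 2.2), which gives a lower bound on J_1(A : B). The sketch below (our own illustration; Kraus conventions assumed) evaluates S(ρ_BW | ρ_B,0 ⊗ ρ_W) for this fixed probe:

```python
import numpy as np

def entropy(rho, eps=1e-12):
    w = np.clip(np.linalg.eigvalsh(rho), eps, None)
    return float(-(w * np.log(w)).sum())

def choi_bound(kraus):
    # S(rho_BW | rho_B0 x rho_W) for a maximally entangled probe; both
    # channels below are unital, so the disconnected state is Id/4
    phi = np.zeros(4)
    phi[0] = phi[3] = 2 ** -0.5  # |Phi+> on A x W
    P = np.outer(phi, phi)
    choi = sum(np.kron(K, np.eye(2)) @ P @ np.kron(K, np.eye(2)).conj().T
               for K in kraus)
    return 2 * np.log(2) - entropy(choi)

X = np.array([[0.0, 1.0], [1.0, 0.0]])
Y = np.array([[0.0, -1.0j], [1.0j, 0.0]])
Z = np.diag([1.0, -1.0])

def kraus_dph(p):
    return [np.sqrt(1 - p / 2) * np.eye(2), np.sqrt(p / 2) * Z]

def kraus_dpl(p):
    return [np.sqrt(1 - 3 * p / 4) * np.eye(2)] + [np.sqrt(p / 4) * s
                                                   for s in (X, Y, Z)]

bound_dph = choi_bound(kraus_dph(1.0))  # log 2: classical memory survives
bound_dpl = choi_bound(kraus_dpl(1.0))  # 0: everything is erased
```

At p = 1 this bound gives log 2 for dephasing and 0 for depolarizing, matching the asymptotics of Fig. 7; for intermediate p it generally lies below the optimized STMI, as Fig. 8 illustrates.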
We now evaluate the STMI J_1(A : B) analytically in a particular limit, using the ansatz of Sec. 6. We focus first on the dephasing channel N_dph. We write the initial state as

ρ_in = ½ (Id + a⃗ · σ⃗) , (7.3)

where a⃗ ∈ R³ is a Bloch vector with |a⃗| = 1, and we take a_1 = ε and a_3 = √(1 − ε²) with ε small, i.e., the state is almost an eigenstate of the evolution. In this case, the dephasing channel acts almost as the identity on the initial state, and we thus expect the space-time mutual information to diverge as ε → 0. Adopting the ansatz (6.2), we take expression (6.6) as our starting point and optimize it over the reduced state of the ancilla ρ_W, which we parameterize as in (7.3) with a Bloch vector b⃗. Since we are interested only in the divergent part, it suffices to keep the last term in (6.6) (eq. (7.4)). From N_dph(ρ_in) = ½ (Id + (1 − p) a_1 σ_1 + a_3 σ_3), and from the identity ½ (Id + tanh β n⃗ · σ⃗) = e^{β n⃗·σ⃗} / (2 cosh β), with n⃗² = 1, we find

log N_dph(ρ_in) ≃ 2 log ε |1⟩⟨1| , (7.5)

which leads to eq. (7.6). We thus find that the space-time mutual information diverges as the initial state approaches an eigenstate of the evolution, and the divergent contribution is independent of p. We will see in the next subsection that a similar behavior occurs for many-body localized systems.
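The 2 log ε eigenvalue of log N_dph(ρ_in) can be checked directly from the Bloch-vector form of the output state (a numerical sketch of ours; the constant term is our own bookkeeping, not from the text):

```python
import numpy as np

p, eps = 0.5, 1e-4
a1, a3 = eps, np.sqrt(1 - eps ** 2)

# Bloch vector of N_dph(rho_in): transverse component damped by (1 - p)
r = np.sqrt(((1 - p) * a1) ** 2 + a3 ** 2)
lam_minus = (1 - r) / 2  # small eigenvalue of the output state

# predicted leading behavior: log(lam_minus) ~ 2 log(eps) + const(p)
predicted = 2 * np.log(eps) + np.log((2 * p - p ** 2) / 4)
```

The p-dependent offset is finite, so the divergent 2 log ε piece is indeed independent of p.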
For the depolarizing channel N_dpl we consider a pure initial state, ρ_in = |0⟩⟨0|. Using again the ansatz expression (6.6), we can exploit the symmetry of the evolution and choose the ancilla state of the form ρ_W = e^{β σ_3} / (2 cosh β); correspondingly, we expect eq. (6.8) to be solved by ρ_W ∝ c_1 Id + c_2 σ_3. The complementary channel Ñ is given in terms of the matrix Γ defined in (6.5), where n_i = (0, 0, 1) and ε_ijk is the totally antisymmetric tensor with ε_123 = 1. Using these expressions, we evaluate J_1(A : B) by extremizing (6.6) over β. A plot showing the relative entropy (6.3) as a function of β for various values of p is given in Fig. 8. The plot also compares the STMI with the superdensity-operator mutual information described at the end of Sec. 2.2 (corresponding to β = 0) and with the value obtained by choosing the ancilla state (6.13).
We now find the exact value of the STMI for the depolarizing channel in the limit p → 1. Using (6.6), we obtain eq. (7.10), each of whose terms is straightforward to evaluate. As p → 1, we obtain eqs. (7.11)-(7.12).

Figure 8. Plot of (6.3) as a function of β for a depolarizing channel with p = 0.5 (left) and with p = 0.9 (right). The STMI J_1(A : B) is given by the maximum value of the blue curve. The green line represents the relative entropy using the superdensity operator (β = 0), while the orange line represents the value obtained by substituting (6.13) in (6.3). Depending on p, one choice can be better than the other.
Numerically solving the optimization (2.5) for this relative entropy, we find β ≈ −0.72 − 0.68 (1 − p), and we thus see that J_1(A : B) → 0 linearly in 1 − p as p → 1. In the fully depolarizing case p = 1 the optimization problem is trivial, as A and B are disconnected, and any ancilla state ρ_W solves it. What we find, however, shows that the limit p → 1 selects a unique ancilla state.

MBL and Thermalization
We now explore two extreme cases of many-body dynamics. The first example, a many-body localized (MBL) system, preserves an extensive number of local operators, while the second is a thermalizing system, in which all local information is efficiently scrambled across the system. These two examples thus constitute contrasting cases where the STMI should display very different phenomenology.
We start with MBL and study the single-replica STMI J_1(A : B), with A a single qubit at the initial time t = 0 and B the same qubit at time t. As a model we consider a truncation of the MBL fixed-point Hamiltonian [15-17], eq. (7.13). It is easy to see that the states |0⟩, |1⟩ are preserved by the evolution. This means that, in matrix form in the |0⟩, |1⟩ basis, N must act by preserving the diagonal elements and multiplying the off-diagonal ones by a function f(t), with f(0) = 1; as t grows, we generically expect f(t) to vanish, similarly to the fully dephasing channel N_dph discussed in the previous subsection. We parameterize the initial state by an angle α, as in eq. (7.14). If α = 0, the state does not evolve and remains pure, so that J_1(A : B) = ∞ for all times, consistently with the discussion around (6.14). For general α, and because ρ_in is pure, we can decompose the relative entropy S(ρ_BW | ρ_BW,0) as in (6.6), obtaining eq. (7.16). For small α, the first two terms in (7.16) are finite, while the last one diverges. Applying the same manipulations as those around eq. (7.4), we find that the closer the initial state (7.14) is to an eigenstate of the conserved operator σ_3, the larger the STMI will be at late times, consistently with the intuition that the STMI quantifies the information preserved by the system. Fig. 9 shows the time dependence of J_1(A : B) and confirms this behavior. For the thermalizing case, we consider a Floquet spin chain whose evolution is generated by a unitary depending on parameters (g, h, τ). For (g, h, τ) = (0.9045, 0.8090, 0.8), this system is known to thermalize efficiently and to have weak finite-size effects [18]. We consider the same STMI J_1(A : B) as in the MBL case, with coincident locations of the input and output qubits. Since there are no conserved operators in this case (not even energy), we expect J_1(A : B) to quickly drop to zero for any initial state. Fig. 10 shows J_1(A : B) for various initial states, confirming this expectation. The characteristic timescale of the drop is set by the rate of decoherence of the effective channel describing the evolution of a single qubit.
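As a purely illustrative toy model (ours, not the truncated Hamiltonian (7.13) itself): if the probed l-bit couples to its neighbors through exponentially decaying interactions J_j = J e^{−j/ξ}, and the neighbors are initialized transversally to σ_3, the coherence factor takes the product form f(t) = Π_j cos(2 J_j t), which starts at 1 and decays only slowly, consistent with the slow loss of off-diagonal information expected in MBL:

```python
import numpy as np

def f_mbl(t, J=1.0, xi=2.0, n_neighbors=8):
    # toy coherence factor: neighbor j, coupled with strength J*exp(-j/xi)
    # and initialized transversally to sigma_3, contributes cos(2 J_j t)
    j = np.arange(1, n_neighbors + 1)
    return float(np.prod(np.cos(2 * J * np.exp(-j / xi) * t)))

f0 = f_mbl(0.0)  # = 1: no information lost at t = 0
f3 = f_mbl(3.0)  # |f| well below 1, but the envelope decays only slowly
```

In a thermalizing system the analogous coherence factor would instead decay exponentially on a time set by the effective single-qubit decoherence rate.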

Classical space-time mutual information
We now discuss how the STMI can be defined for classical systems, and make contact with information-theoretic quantities discussed in the earlier literature. Consider a classical system whose initial state is characterized by a probability distribution over a state space S, which we denote by P_in(i), where i labels a state in S. This distribution is mapped to an output distribution through a stochastic map M : S → S′ acting as P_in(i) ↦ Σ_i M(j|i) P_in(i), where j labels states of S′ and M(j|i) denotes the transfer matrix associated with M, satisfying normalization and positivity: Σ_j M(j|i) = 1 and M(j|i) ≥ 0. The notion of STMI also applies to this classical setting, with the important difference that the ancilla, as well as its coupling to the system, are themselves restricted to be classical. With A ⊆ S and B ⊆ S′, we introduce a classical ancilla W and a stochastic map K acting jointly on AW. From these we define connected and disconnected ancilla-system probability distributions, similarly to Sec. 2. Taking a single copy of the system, N = 1, these distributions are given in eq. (8.1), where P_B,0(k) = Σ_{ijl} M(kl|ij) P_in(ij) is the unperturbed output distribution and P_W(p) = Σ_k P_BW(kp). Here k, l, p, i and j label states in B, B̄, W, A and Ā, respectively. The state for generic N ≥ 1 is represented in Fig. 11(a). The classical counterpart of the STMI is then

J_1(A : B) = sup_K D(P_BW | P_B,0 P_W) , (8.2)

where D(·|·) denotes the Kullback-Leibler (KL) divergence, and we assume W to be sufficiently large. One can show that, similarly to the quantum STMI, (8.2) bounds correlation functions as well as response functions. In fact, correlation functions can already be bounded by the input-output mutual information, which is obtained from the KL divergence in (8.2) by choosing K to be the copy channel, K(qp|i) = 1 if q = p = i and 0 otherwise; see Fig. 11(b). To see this, we write the correlation function of two observables O_A and O_B as in eq. (8.3). Using similar steps as in Sec.
3, one easily obtains a bound on the connected correlator in terms of the input-output mutual information I(A : B). We thus see that, in the classical case, correlations can be bounded without adaptively optimizing over the system-ancilla coupling. To bound response functions, on the other hand, the input-output mutual information is not enough: we need an adaptive ancilla-system coupling, as in the quantum case. Consider perturbing the evolution before applying the channel M through a small perturbation of the identity, which itself can be viewed as a channel. Its transfer matrix can be written as N_A(k|i) = δ_{ki} + ε Ñ_A(k|i), with Σ_k Ñ_A(k|i) = 0, Ñ_A(k|i) ≥ 0 for k ≠ i, and ε ≥ 0 small. The response function is then the leading-order contribution in ε to the one-point function, eq. (8.4). The input-output mutual information will not, in general, bound such two-point functions.
For example, take P_in(i) = δ_{i0} to be a pure state; then I(A : B) = 0. One can easily check that there are choices of O_B and N_A with a nonzero response function, so that the bound is violated.
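The copy-channel construction makes the classical computation elementary: the input-output mutual information is the KL divergence between the joint distribution M(k|i) P_in(i) and the product of its marginals. A short sketch (function names ours) that also reproduces the pure-state example above, where I(A : B) = 0:

```python
import numpy as np

def mutual_information(M, P_in):
    # I(A : B) for the joint distribution P(k, i) = M(k|i) P_in(i);
    # M has shape (n_out, n_in) with columns summing to 1
    P = M * P_in[None, :]
    PB, PA = P.sum(axis=1), P.sum(axis=0)
    mask = P > 0  # 0 log 0 convention
    return float((P[mask] * np.log(P[mask] / np.outer(PB, PA)[mask])).sum())

M_copy = np.eye(2)                                            # noiseless map
I_uniform = mutual_information(M_copy, np.array([0.5, 0.5]))  # log 2
I_pure = mutual_information(M_copy, np.array([1.0, 0.0]))     # 0: pure input
```

The vanishing of I_pure, together with a nonzero response function, is precisely what forces the adaptive coupling (8.5) below.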
We now prove the bound using J_1(A : B), adopting a similar approach as in Sec. 3. We introduce a 2-bit ancilla with states p = 0, 1 and the ancilla-system coupling given in eq. (8.5), where for simplicity we leave implicit the dependence on the indices in B̄ and Ā. Applying similar steps as in Sec. 3, one then shows that the desired bound holds, with the same factor of 1/8 that appeared in Theorem 1 for the quantum case.
Finally, we note that the expression of the classical STMI can be slightly simplified. Still taking N = 1 for simplicity, we now show that, to find the optimal J_1, one can restrict the distribution P_BW in (8.1) to the form

P_BW(kqi) = Σ_{lj} M(kl|qj) K(q|i) P_in(ij) , (8.7)

as illustrated in Fig. 11(d). The key point is that copying the state before and after applying K to the ancillas W_0 and W_2, as in Fig. 11(c), does not require introducing any additional probing of the system, i.e. W_1 is not necessary. More precisely, the conditional mutual information I(B : W_1 | W_0 W_2) can readily be shown to vanish. Due to this fact, and using the classical counterpart of (2.8), marginalizing over W_1 does not affect D(P_BW | P_BW,0), and the optimal K in (8.2) can be achieved using (8.7). A consequence of this fact is that, if the initial state is factorized, i.e. I(A : Ā) = 0, then from Fig. 11(d) it is clear that the maximization over K can be replaced by a maximization over the state of A, with K fixed to the identity. While we are not aware of discussions of the STMI in the literature, a restricted version of our implementation has been considered in the context of classical channel discrimination, where one optimizes over the input state [20, 21].

Conclusions
In this paper, we introduced the space-time mutual information (STMI), a quantity that generalizes mutual information to spatial subregions that can be separated in time.This was achieved by demanding that the STMI satisfies some of the natural properties possessed by the standard mutual information.The most stringent property leading to our definition (2.5) is that the STMI should bound space-time correlation functions between the two subregions.
We then investigated several properties that follow from our proposal, such as the Markov property and the relationship to quantum channel discrimination. We studied the behavior of the STMI in MBL and thermalizing many-body systems and found very distinct behaviors, thus, in a sense, providing a characterization of these two types of dynamics. Finally, we discussed a classical counterpart of the STMI.
Our framework can be extended in several directions. First, in this work we studied the time dependence of the STMI for two extreme cases of many-body dynamics (MBL and Floquet thermalization). A natural next step is to look at intermediate situations, e.g. thermalizing systems conserving a finite number of quantities, such as energy or charge, or kinetically constrained models [22-24]. For subregions small enough compared to the system size, we expect the STMI to decay polynomially in time for these systems. When the subregions considered become large, we saw around eq. (2.15) that the STMI asymptotes to a finite value; it would be interesting to find how this asymptotic value is approached at late times. Another intriguing avenue is to examine the time dependence of the STMI as a diagnostic to differentiate between integrable and non-integrable systems, as explored in recent studies such as [25, 26]. It will also be interesting to consider restrictions of the optimization over V, which may characterize the type of information that the ancilla is able to extract from the system. For example, one can restrict V to be a one-way LOCC (local operations and classical communication) from A to W, which corresponds to an experimentalist who can only carry out classical measurements.
On the information-theoretic level, it is still an open question whether the STMI is additive in general. A positive answer would imply that it is sufficient to restrict to a single replica, N = 1, in the definition of the STMI (2.5). We proved additivity in a restricted case, where we could map our quantity to the channel relative entropy. While we could not find counterexamples to additivity in more general settings, it is possible that additivity fails in full generality; we leave this question to future work.

Figure 1 .
Figure 1. (a) System with initial state ρ_in undergoing evolution with unitary U. The upper cap stands for tracing over the corresponding region. (b) Coupling between subsystem A and ancilla W giving rise to the connected state ρ_BW. (c) Disconnected state ρ_B,0 ⊗ ρ_W, where ρ_B,0 is the unperturbed evolved state reduced to subsystem B.

Figure 2 .
Figure 2. Definition of J N (A : B) in Eq. (2.2) for the uncorrelated and correlated case.

Figure 3 .
Figure 3. Mutual information of the Choi state corresponding to the unitary U .

Figure 4 .
Figure 4. Illustration of the two situations involving three regions ABC, for the discussion of Markovian condition in Sec. 4.

Figure 9 .
Figure 9. Plot of J_1(A : B) for various values of α, with evolution given by the MBL fixed-point Hamiltonian (7.13). Each of these plots is obtained from a single disorder realization, with w = 10 and ξ = 2.
where ||N_A||_∞ = sup_j Σ_i |N_A(i|j)|. Note that K satisfies positivity and normalization. Further introducing the observable O_W acting on the ancilla W, with O_W(p = 0) = 1 = −O_W(p = 1), one obtains a bound involving Σ_{ipkj} O_B(j) O_W(p) M(jl|k) K(kp|i) P_in. Here O_B(t) is a (Heisenberg) operator supported in subregion B, and similarly for O_A; || · ||_∞ denotes the operator norm, (· · ·)_c the connected component of a correlator, and O_A, O_B are assumed to be Hermitian.