
1 Introduction

The fate of a generic many-body quantum system can be described by quantum statistical mechanics at equilibrium: regardless of its initial state, the system is expected to eventually thermalize through the process of thermal equilibration. Recently, however, it has become clear that there exist exceptional disordered quantum systems that can avoid this fate through localization [1,2,3]. This phenomenon, termed many-body localization (MBL), leads to a plethora of interesting features that cannot be described by quantum statistical mechanics, including the preservation of information, the slow spreading of entanglement, and the emergence of integrability [4].

Concomitant to the development of a new understanding of the MBL phase, there has also been significant progress in the experimental simulation of quantum many-body systems [5,6,7,8,9,10]. With a greater degree of tunability, control, manipulation and isolation from the environment, such systems provide a perfect avenue towards a greater understanding of strongly-correlated many-body quantum systems. Examples of such experimental setups are ultracold atoms in optical lattices, trapped ions, and nuclear and electron spins associated with impurity atoms in diamond nitrogen-vacancy centers [4]. These systems have also been studied through the lens of quantum information science using concepts and tools such as quantum entanglement, quantum coherence and the quantum Fisher information. For example, entanglement growth can be detected through suitable witnesses or with the quantum Fisher information [11]. The latter provides a lower bound on the entanglement in the system with just a measurement of two-body correlators, which can be efficiently accessed with site-resolved imaging [6]. In some cases, for instance in ion traps, partial or full quantum state tomography can be performed [12].

Systems in the MBL phase differ significantly from systems in the ergodic phase in many aspects. From an experimental perspective, the equilibration of physical observables to non-thermal values at long times [13, 14] is interesting because the expectation values of local observables such as the local magnetization can be experimentally probed [5,6,7]. However, local observables reveal only part of the complete picture: to fully investigate MBL, it is fruitful to resort to more abstract quantities that capture finer details of the information-theoretic properties of the MBL phase. This is precisely the approach that we take in this paper.

In particular, we focus on understanding the localization and the propagation of information as the MBL systems evolve. We begin with a brief overview of relevant concepts and methods in the next section. We then proceed to discuss the main problem in detail, and present our numerical results on the temporal mutual information at length in Sect. 3.

2 Background and Numerical Setup for Dynamics

From an information-theoretic perspective, a hallmark of the MBL phase is the logarithmic spreading of the entanglement entropy. The entanglement entropy between two subsystems of a quantum state \(\rho \), or equivalently the von Neumann entropy of the reduced density matrix of either subsystem, is defined by:

$$\begin{aligned} S_{\text {ent}} (\rho _A) = -\text {Tr}(\rho _A \log \rho _A), \end{aligned}$$
(1)

where \(\rho _A = \text {Tr}_B (\rho _{AB})\), with A being a subsystem of the total system AB. It measures the extent to which the subsystems A and B are entangled with one another. Starting with an initial non-thermal product state, one can show [15] that the entanglement entropy of a MBL system grows logarithmically, i.e.:

$$\begin{aligned} S_{\text {ent}}(\rho _{MBL}) \propto \log t, \end{aligned}$$
(2)

in contrast to the ballistic spread of entropy in the ergodic case:

$$\begin{aligned} S_{ent}(\rho _{erg}) \propto t. \end{aligned}$$
(3)

This indicates much slower spreading of entanglement in the MBL phase. Indeed, this slow spreading of correlations is often regarded as one of the distinguishing features of MBL from ergodic systems.
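For pure states, Eq. (1) can be evaluated directly from the Schmidt decomposition: the eigenvalues of \(\rho _A\) are the squared singular values of the state vector reshaped into a \(\dim A \times \dim B\) matrix. A minimal numpy sketch (the function name is ours):

```python
import numpy as np

def entanglement_entropy(psi, dim_A, dim_B):
    """Von Neumann entropy S(rho_A) of a pure state |psi> on A x B,
    computed from the Schmidt coefficients (singular values)."""
    # Reshape the state vector into a dim_A x dim_B matrix and take its SVD;
    # the squared singular values are the eigenvalues of rho_A.
    s = np.linalg.svd(psi.reshape(dim_A, dim_B), compute_uv=False)
    p = s**2
    p = p[p > 1e-12]              # drop numerical zeros before taking the log
    return float(-np.sum(p * np.log(p)))

# A Bell pair is maximally entangled: S = ln 2.
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
print(entanglement_entropy(bell, 2, 2))  # ~0.6931
```

For a product state the same function returns zero, as expected.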

Another relevant quantity is the quantum mutual information (QMI). For a system partitioned into two subsystems A and B, the QMI between A and B is defined as:

$$\begin{aligned} I(A:B) = S(\rho _A) + S(\rho _B) - S(\rho _{AB}), \end{aligned}$$
(4)

where \(S(\rho )\) is the entanglement entropy of the state \(\rho \) as defined in the previous paragraph. It measures the total correlations shared between the two subsystems, or equivalently how much information is gained about one subsystem by measuring the state of the other. Separability of the two subsystems, i.e. \(\rho _{AB} = \rho _A \otimes \rho _B\), implies zero QMI, while non-zero QMI implies non-zero correlations, classical or quantum. The growth and equilibration of the QMI can be used as a diagnostic tool for the MBL phase [16, 17]. Numerical results indicate that the QMI in the MBL phase is exponentially localized in space, consistent with our intuition of a localized phase.
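Eq. (4) can be evaluated numerically for any density matrix on \(A \otimes B\) via partial traces. A small illustrative sketch in numpy (the helper names are ours):

```python
import numpy as np

def vn_entropy(rho):
    """Von Neumann entropy -Tr(rho log rho) via the eigenvalues of rho."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log(w)))

def partial_trace(rho_AB, dim_A, dim_B, keep):
    """Reduced density matrix of a bipartite state; keep = 'A' or 'B'."""
    r = rho_AB.reshape(dim_A, dim_B, dim_A, dim_B)
    return np.trace(r, axis1=1, axis2=3) if keep == 'A' else np.trace(r, axis1=0, axis2=2)

def mutual_information(rho_AB, dim_A, dim_B):
    """Eq. (4): I(A:B) = S(rho_A) + S(rho_B) - S(rho_AB)."""
    rA = partial_trace(rho_AB, dim_A, dim_B, 'A')
    rB = partial_trace(rho_AB, dim_A, dim_B, 'B')
    return vn_entropy(rA) + vn_entropy(rB) - vn_entropy(rho_AB)

# A product state gives I = 0, while a Bell state gives I = 2 ln 2.
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
print(mutual_information(np.outer(bell, bell), 2, 2))  # ~1.3863
```

The Bell-state value \(2\ln 2\) illustrates the general bound \(I(A:B) \le 2\min (S(\rho _A), S(\rho _B))\).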

2.1 Physical Model

An important model that exhibits MBL is the 1D isotropic Heisenberg spin-1/2 chain, subject to random transverse magnetic fields [16, 17]. For an array of L spins obeying open boundary conditions, the Hamiltonian for this system is given by:

$$\begin{aligned} H = J \sum _{i=1}^{L-1} \mathbf {S}_i \cdot \mathbf {S}_{i+1} + \sum _{i=1}^{L} h_i S_i^z. \end{aligned}$$
(5)

Here, \(\mathbf {S}_i = (S_i^x, S_i^y, S_i^z)\) is the vector of local spin operators at site i, with \(i \in [1, L]\), J the interaction strength, and \(h_i\) the strength of the disordered magnetic field at site i, which is a random real number uniformly distributed in the interval \([-W, W]\). This well-studied system is known to exhibit MBL, with an ergodic-MBL transition occurring at \(W \approx 3.5J\) [2, 18, 19]. In the following sections, we will focus exclusively on this model, with the disorder parameter W controlling the system’s localization. Throughout the article and in all numerical simulations, we set \(J=1\).
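For the modest system sizes used here, the Hamiltonian of Eq. (5) can be built explicitly with Kronecker products. The sketch below shows one possible construction (the function names and the use of dense numpy arrays are our choices; QuTiP's tensor products would serve equally well):

```python
import numpy as np

# Single-site spin-1/2 operators (hbar = 1).
sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2

def site_op(op, i, L):
    """Embed a single-site operator at site i into the L-spin Hilbert space."""
    ops = [np.eye(2)] * L
    ops[i] = op
    out = ops[0]
    for o in ops[1:]:
        out = np.kron(out, o)
    return out

def heisenberg_H(L, W, J=1.0, rng=None):
    """Eq. (5): isotropic Heisenberg chain with random fields h_i in [-W, W],
    open boundary conditions."""
    rng = rng or np.random.default_rng()
    h = rng.uniform(-W, W, size=L)
    H = np.zeros((2**L, 2**L), dtype=complex)
    for i in range(L - 1):
        for s in (sx, sy, sz):
            H += J * site_op(s, i, L) @ site_op(s, i + 1, L)
    for i in range(L):
        H += h[i] * site_op(sz, i, L)
    return H

H = heisenberg_H(L=4, W=3.0, rng=np.random.default_rng(0))
print(H.shape, np.allclose(H, H.conj().T))  # (16, 16) True
```

Each disorder realization corresponds to one call of `heisenberg_H` with a fresh sample of the fields \(h_i\). Note that every term of Eq. (5) commutes with the total magnetization \(\sum _i S_i^z\), a useful consistency check.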

2.2 Simulation of Unitary Dynamics

To study the steady-state properties of a closed system, one can perform a quantum quench: an initial nonequilibrium state \(|\psi (0) \rangle \) (the ground state of some Hamiltonian \(H_0\)) is first prepared and then evolved under the unitary dynamics of another Hamiltonian H according to Schrödinger’s equation:

$$\begin{aligned} i \hbar \frac{\partial }{\partial t} |\psi (t) \rangle = H |\psi (t) \rangle . \end{aligned}$$
(6)

\(|\psi (t) \rangle \) can then be studied either experimentally by measuring the values of physical observables in a physical setup, or numerically by simulating less experimentally accessible quantities such as the entanglement entropy and the QMI. Of particular interest to us are signatures that manifest when MBL systems evolve in time.

We simulate the unitary dynamics of a closed quantum system by directly integrating Schrödinger’s equation, Eq. (6), for a time-independent Hamiltonian H. In a particular basis \(\{|\phi _k \rangle \}\), we have:

$$\begin{aligned} |\psi (t) \rangle = \sum _k c_k(t) |\phi _k \rangle . \end{aligned}$$
(7)

The set of coupled first-order differential equations then takes the form:

$$\begin{aligned} i \hbar \frac{\partial c_k(t)}{\partial t} = \sum _i [H]_{ki} c_i(t), \end{aligned}$$
(8)

which can be readily integrated by an ODE solver to yield \(|\psi (t) \rangle \).
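As a minimal illustration of this scheme, the following sketch integrates Eq. (8) with the zvode solver mentioned below and checks the result against the exact propagator \(e^{-iHt}\) on a two-level toy Hamiltonian (the function name and the toy H are ours):

```python
import numpy as np
from scipy.integrate import ode
from scipy.linalg import expm

def evolve(H, psi0, t):
    """Integrate Eq. (8), i dc_k/dt = sum_i H_ki c_i (hbar = 1), with zvode."""
    solver = ode(lambda _, c: -1j * (H @ c))
    solver.set_integrator('zvode', rtol=1e-10, atol=1e-12)
    solver.set_initial_value(psi0.astype(complex), 0.0)
    return solver.integrate(t)

# Two-level sanity check against the exact propagator exp(-iHt).
H = np.array([[0.0, 1.0], [1.0, 0.5]])
psi0 = np.array([1.0, 0.0])
psi_num = evolve(H, psi0, 2.0)
psi_exact = expm(-1j * H * 2.0) @ psi0
print(np.allclose(psi_num, psi_exact, atol=1e-7))  # True
```

For larger systems the matrix exponential becomes expensive, which is why the ODE-solver route is the practical choice here.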

Importantly, to study disordered systems such as Eq. (5), we average quantities over multiple disorder realizations, each realization corresponding to a randomly sampled set of magnetic fields \(\{h_i\}\). The number of realizations ranges from 100 to 1000 in this project.

The numerical simulation of quantum dynamics is implemented in Python with QuTiP [20], an open-source software package for simulating the dynamics of closed and open quantum systems. In particular, the Complex-valued Variable-coefficient Ordinary Differential Equation (zvode) solver [21] is used to integrate Eq. (6). Time-intensive simulations are also performed on the High Performance Computing (HPC) clusters of NUS and NSCC.

3 Information Scrambling and Delocalization in MBL Systems

An important characteristic of MBL is the slow propagation of quantum information. Along these lines, we wish to understand how initially localized information is spatially spread across a many-body quantum system under time evolution, and how the phenomenon of many-body localization changes the answer to this question.

These considerations lead us to the notion of information scrambling: the spreading of local information across a many-body quantum system such that it can only be recovered by non-local measurements. Related to thermalization and chaos, the notion of scrambling has recently been used to study the quantum information of black holes [22, 23], and can be experimentally probed [24]. Naturally, it is interesting to relate this notion to the MBL phase, given its information-localizing nature.

In this section, we relate a proposed measure of scrambling, the temporal mutual information (following [25]), to the MBL phase, and investigate its qualitative differences from the ergodic phase using numerical and some analytical arguments. We begin with a brief review of the channel-state duality and use it to define the temporal mutual information.

3.1 Channel-State Duality

Consider the action of a unitary operatorFootnote 1 U(t) that acts on vectors in \(\mathcal {H}\), written in a basis as:

$$\begin{aligned} U(t) = \sum _{i,j} u_{ij} |i \rangle \langle j |. \end{aligned}$$
(9)

This operator is then isomorphic to a state in \(\mathcal {H} \otimes \mathcal {H}\):

$$\begin{aligned} |U(t) \rangle = \sum _{i,j} u_{ij} |i \rangle |j \rangle , \end{aligned}$$
(10)

an isomorphism known as the channel-state duality (or the Choi-Jamiołkowski isomorphism). More generally, consider an arbitrary input ensemble \(\rho _{in} = \sum _j p_j |\psi _j \rangle \langle \psi _j |\). Each state \(|\psi _j \rangle \) in this statistical ensemble evolves into \(|\phi _j \rangle = U(t) |\psi _j \rangle \), so that the entire ensemble becomes \(\rho _{out} = \sum _j p_j |\phi _j \rangle \langle \phi _j |\). The action of U(t) on the input state can then be summarized by the pure state:

$$\begin{aligned} {|\varPsi \rangle = \sum _j \sqrt{p_j} |\psi _j \rangle _{in} \otimes |\phi _j \rangle _{out} = \mathbbm {1} \otimes U(t) \sum _j \sqrt{p_j} |\psi _j \rangle _{in} \otimes |\psi _j \rangle _{out}.} \end{aligned}$$
(11)

\(|\varPsi \rangle \) contains all information about the action of U(t) on \(\rho _{in}\). In particular, we have:

$$\begin{aligned} \rho _{in}&= \text {Tr}_{out} (|\varPsi \rangle \langle \varPsi |), \end{aligned}$$
(12)
$$\begin{aligned} \rho _{out}&= \text {Tr}_{in} (|\varPsi \rangle \langle \varPsi |). \end{aligned}$$
(13)

Importantly, the state in the form of Eq. (11) treats the input and output states on equal footing. If we take the unitary operator to be the propagator \(U(t) = e^{-iHt}\) of the Hamiltonian H, Eq. (11) then contains information about the state at different times, before and after the evolution due to U(t).
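The construction of Eq. (11) is straightforward to implement: diagonalize \(\rho _{in}\), propagate each eigenvector, and stack the results into a vector on \(\mathcal {H}_{in} \otimes \mathcal {H}_{out}\). A single-qubit sketch (the function name is ours) that also verifies Eqs. (12)-(13):

```python
import numpy as np

def dual_state(rho_in, U):
    """Eq. (11): pure state |Psi> on H_in ⊗ H_out encoding the action
    of U on the ensemble given by the eigendecomposition of rho_in."""
    p, V = np.linalg.eigh(rho_in)        # rho_in = sum_j p_j |psi_j><psi_j|
    Psi = np.zeros(rho_in.shape[0]**2, dtype=complex)
    for j in range(len(p)):
        if p[j] > 1e-12:
            Psi += np.sqrt(p[j]) * np.kron(V[:, j], U @ V[:, j])
    return Psi

# Check Eqs. (12)-(13) for a single qubit evolved by a bit flip.
rho_in = np.diag([0.25, 0.75])
U = np.array([[0, 1], [1, 0]])
Psi = dual_state(rho_in, U)
r = np.outer(Psi, Psi.conj()).reshape(2, 2, 2, 2)
rho_in_rec = np.trace(r, axis1=1, axis2=3)   # Tr_out, Eq. (12)
rho_out = np.trace(r, axis1=0, axis2=2)      # Tr_in, Eq. (13)
print(np.allclose(rho_in_rec, rho_in), np.allclose(rho_out, U @ rho_in @ U.conj().T))  # True True
```

Orthonormality of the eigenvectors guarantees that tracing out the output subsystem recovers \(\rho _{in}\) exactly, as in Eq. (12).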

Fig. 1.

Schematic representation of the spatial partition of input 1D lattice into A and B, and the output state into C and D after a unitary evolution generated by H. In general, there need not be an equal number of partitions of the input and output states, and A and C (B and D) need not correspond to the same spatial partitions. If they do, we denote them \(A=A(0)\) and \(C=A(t)\) (\(B=B(0)\) and \(D=B(t)\)).

3.2 Temporal Mutual Information

For concreteness, consider a 1D lattice of spins in \((\mathcal {H})^{\otimes N}\) that is evolving under a Hamiltonian H. The state \(|\varPsi \rangle \) dual to the channel U(t) then lives in \(\mathcal {H}_{in} \otimes \mathcal {H}_{out}\), where \(\mathcal {H}_{in} = \mathcal {H}_{out} = (\mathcal {H})^{\otimes N}\). We partition \(\mathcal {H}_{in}\) arbitrarily into subsystems A and B, and \(\mathcal {H}_{out}\) into C and D (Fig. 1 represents the situation schematically). Following [24], one can then define the entanglement entropy between subsystems at different times and different spatial sites by tracing out appropriate subsystems from the dual state \(\rho = |\varPsi \rangle \langle \varPsi |\). For example,

$$\begin{aligned} S(\rho _{AC}) \equiv -\text {Tr}(\rho _{AC} \log {\rho _{AC}}), \end{aligned}$$
(14)

where \(\rho _{AC} = \text {Tr}_{BD}(\rho )\) is the reduced density matrix containing subsystem A before the unitary evolution and C after the evolution. We can further define a more useful quantity, which is the mutual information at different times and different spatial sites:

$$\begin{aligned} I(A:C) \equiv S(\rho _A) + S(\rho _C) - S(\rho _{AC}). \end{aligned}$$
(15)

This quantity, which we refer to as the temporal mutual information, intuitively quantifies the amount of information that one can obtain about subsystem A by measuring subsystem C at a later time. Furthermore, if A and C correspond to the same sites spatially (at different times), which we denote as \(A \equiv A(0)\) and \(C \equiv A(t)\), I(A(0) : A(t)) quantifies the information contained in a spatial region (A(t)) about the same region before time evolution (A(0)); in other words, how much information can still be extracted from a region of space about its past configuration.
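The temporal mutual information of Eq. (15) thus reduces to entropies of partial traces of the dual state. The following sketch (helper names ours) illustrates this on a two-spin toy chain evolved by a SWAP gate, where all information about A(0) moves to the other site:

```python
import numpy as np

def entropy(rho):
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log(w)))

def reduced(Psi, dims, keep):
    """Reduced density matrix of |Psi> on the subsystems listed in `keep`;
    `dims` are the dimensions of all subsystems (input ones, then output ones)."""
    n = len(dims)
    r = np.outer(Psi, Psi.conj()).reshape(dims + dims)
    traced = 0
    for k in range(n):
        if k not in keep:
            ax = k - traced
            r = np.trace(r, axis1=ax, axis2=ax + n - traced)
            traced += 1
    d = int(np.prod([dims[k] for k in keep]))
    return r.reshape(d, d)

def temporal_MI(Psi, dims, X, Y):
    """Eq. (15): I(X:Y) = S(rho_X) + S(rho_Y) - S(rho_XY)."""
    return (entropy(reduced(Psi, dims, X)) + entropy(reduced(Psi, dims, Y))
            - entropy(reduced(Psi, dims, sorted(X + Y))))

# Two-spin toy chain: A = spin 0 (maximally mixed), B = spin 1 (pure |0>),
# U = SWAP, so the dual state is built from |00> -> |00> and |10> -> |01>.
Psi = np.sqrt(0.5) * (np.kron([1, 0, 0, 0], [1, 0, 0, 0])
                    + np.kron([0, 0, 1, 0], [0, 1, 0, 0]))
dims = [2, 2, 2, 2]                      # subsystems [A, B, C, D]
print(temporal_MI(Psi, dims, [0], [2]))  # I(A(0):A(t)) ~ 0.0
print(temporal_MI(Psi, dims, [0], [3]))  # I(A(0):D(t)) ~ 2 ln 2 ~ 1.386
```

Under SWAP, nothing about A(0) remains on its original site, while the disjoint region D holds the full \(2S(\rho _A(0))\), a caricature of the "leaking out" we study below.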

3.3 Problem Statement

We can study the delocalization of information using a game played between Alice and Bob. Suppose Alice has a source of classical information \(X=0,...,N\) with probability distribution \(p_0, ... , p_N\), which she chooses to encode in a set of quantum states \(\{|\psi _0 \rangle , ... , |\psi _N \rangle \}\). Alice prepares a state \(|\psi _X \rangle \) from this set and sends it to Bob, so the state that Bob effectively studies is:

$$\begin{aligned} \rho = \sum _{i=0}^N p_i |\psi _i \rangle \langle \psi _i |. \end{aligned}$$
(16)

Bob’s task is then to determine the index X based on his knowledge about the state of some part of the system.

Fig. 2.

Schematic of the game between Alice and Bob, realized on a 1D lattice of spins. \(\rho _A\) is the information-bearing state provided by Alice, and \(\rho _B\) is the state that Bob uses to infer \(\rho _A\). In (a), Bob infers \(\rho _A\) using a state \(\rho _B\) at the same time, so the information that Bob can extract is I(A : B). In (b), Bob infers \(\rho _A = \rho _A(0)\) using a state at a later time, \(\rho _C\), which happens to be the same spins after the evolution, i.e. \(\rho _A(t)\). The information that Bob can extract is then the temporal mutual information I(A(0) : A(t)). In general, \(\rho _C\) need not be the same spins. Our analyses will focus solely on the temporal mutual information.

For concreteness, suppose further that these quantum states are realized on part of a 1D lattice of spins, i.e. \(\rho _A \in \mathcal {H}^A = (\mathcal {H})^{\otimes K} \subset (\mathcal {H})^{\otimes L}\), with \(K<L\), so that \( (\mathcal {H})^{\otimes L}\) constitutes the full system (see Fig. 2a). Calling A the partition that contains the information-bearing state \(\rho _A\), the information that Bob can extract about \(\rho _A\) if he has full knowledge of a state \(\rho _B\) at another partition B is then given by the quantum mutual information:

$$\begin{aligned} I(A:B) = S(\rho _A) + S(\rho _B) - S(\rho _{AB}), \end{aligned}$$
(17)

as discussed in Sect. 2.

Here, \(\rho _A\) and \(\rho _B\) correspond to states at the same time; this is the approach taken by [16, 17] to study MBL. Eq. (15) from the previous section extends this to states at different times. Our question can now be rephrased as follows:

How much information can Bob obtain about an initial state \(\rho _A\) if he has knowledge about a state \(\rho _C\) after the action of a unitary evolution U(t)?

Our following investigations will focus on determining the behaviour of I(A(0) : C(t)) for different partitions C(t). When \(C=A(t)\), I(A(0) : A(t)) measures the amount of information that remains in the original subsystem after time evolution. On the other hand, if C(t) is spatially disjoint from A(0), I(A(0) : C(t)) measures the amount of information that has “leaked out” to another region C outside of A (see Fig. 2b). We expect the behaviour of I(A(0) : C(t)) to differ depending on whether the lattice is ergodic or localized, and in the following we obtain numerical evidence that this intuition is indeed borne out.

3.4 Numerical Results

We consider a closed system of 6 spins, the first two of which are the information-bearing spins. This closed system evolves under the Heisenberg XXX Hamiltonian, Eq. (5), discussed in the previous sections, with the disorder parameter W controlling the strength of localization.

We encode the classical information source \(X=0, 1, 2, 3\) in the orthogonal states \(\{ |\uparrow \uparrow \rangle , |\uparrow \downarrow \rangle , |\downarrow \uparrow \rangle , |\downarrow \downarrow \rangle \}\) with equal probabilities \(p_0=p_1=p_2=p_3=\frac{1}{4}\), so that the information-bearing state \(\rho _A = \sum _i p_i |\psi _i \rangle \langle \psi _i |\) is the maximally mixed state \({\rho _A = \frac{1}{4}\mathbbm {1}}\). It is embedded in an environment (the 4 remaining spins), which we take to be the Néel state \(|\uparrow \downarrow \uparrow \downarrow \rangle \). The combined initial state is thus:

$$\begin{aligned} {\rho = \frac{1}{4}\mathbbm {1} \otimes |\uparrow \downarrow \uparrow \downarrow \rangle \langle \uparrow \downarrow \uparrow \downarrow |.} \end{aligned}$$
(18)

With \(U(t) = e^{-iHt}\), we can then use the channel-state duality to construct the pure state \(|\varPsi \rangle \) from Eq. (11), and monitor the evolution of the temporal mutual information I(A(0) : C(t)) with different partitions C(t) and disorder W.
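For reference, the initial state of Eq. (18) is simple to assemble (the basis convention \(|\uparrow \rangle = (1,0)^T\), \(|\downarrow \rangle = (0,1)^T\) is ours):

```python
import numpy as np

# Eq. (18): maximally mixed information-bearing block on the first two spins,
# Néel environment |↑↓↑↓> on the remaining four.
up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])
neel = np.kron(np.kron(up, down), np.kron(up, down))      # |↑↓↑↓>
rho0 = np.kron(np.eye(4) / 4, np.outer(neel, neel))       # 64 x 64
print(rho0.shape, np.isclose(np.trace(rho0), 1.0))        # (64, 64) True
```

The purity \(\text {Tr}(\rho ^2) = \frac{1}{4}\) reflects the maximally mixed two-spin block tensored with a pure environment.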

Dynamics of Initially Localized Information. Starting from \(\rho \), how does information that is initially localized in \(\rho _A\) leak out to its surroundings under time evolution due to U(t)? This can be monitored by following the evolution of I(A(0) : A(t)): starting from an initially maximal value, it is expected to decay as time progresses, indicating that information initially contained in A(0) has leaked out.

Fig. 3.

Evolution of I(A(0) : C(t)) for different partitions C(t) and disorder strength W (indicated by different colors), averaged over 50 disorder realizations. In (a), C(t) is chosen to correspond to the same region A. I(A(0) : C(t)) is then observed to decay and equilibrate to a lower steady-state value that depends on the strength of localization, controlled by increasing W. In (b), C(t) is disjoint from A, and chosen to be the single site furthest away from A. While initially containing no information about A(0), information leaks out into this region as the system evolves and becomes entangled with the environment. We also graphically indicate the partitions below the plots, with blue marking partition A and red marking C of the temporal mutual information I(A : C).

Figure 3a shows the evolution of I(A(0) : A(t)) for different values of W. The decay can indeed be observed, with the final steady-state value depending on the disorder strength. In addition, we can also choose C(t) to be spatially disjoint from A(t). I(A(0) : C(t)) then monitors information initially contained in A(0) that has leaked into the spatially disjoint region C(t). This is shown in Fig. 3b. Both plots agree with our expectation that the localization strength directly affects the amount of information about the initial state that leaks out spatially to other regions.

Having qualitatively shown the effects of localization strength on I(A(0) : C(t)), we repeat this simulation more fully with all possible future partitions C(t), for the ergodic case with \(W=0.1J\) and the MBL case with \(W=7J\), shown in Fig. 4. There are \(2^5-1=31\) ways to choose a non-empty C(t), corresponding to the 31 curves in Fig. 4. Several noteworthy observations can be extracted from Fig. 4:

  • Similar to the previous plots, the steady-state values of I(A(0) : C(t)) depend on W (this scaling is investigated more thoroughly in the next subsection).

  • I increases as the size of the partition C(t) is increased. This follows from the monotonicity of the quantum mutual information, i.e. \(I(X:YZ) \ge I(X:Y)\).

  • In the MBL phase, the steady-state values of I(A(0) : C(t)) decay with distance from the initial 2 spins (for example, \(I(A(0):[5])> I(A(0):[6])> ... > I(A(0):[9])\)). The ergodic phase, on the other hand, does not exhibit this spatial decayFootnote 2 (\(I(A(0):[6]) = ... = I(A(0):[9])\)). This suggests that in the ergodic phase, initially localized information is evenly distributed across the entire chain, while in the MBL phase the distribution of information depends on the distance from the first 2 spins.

  • At short times, the growth/decay of I occurs more rapidly in the ergodic phase, compared to the MBL phase. This can be attributed to the slow spreading of entanglement in the MBL phase, compared to the ballistic spread in the ergodic phase.

  • I(A(0) : [56789](t)) is constant and equal to \(2S(\rho _A(0))=2\ln 4 \approx 2.77\). This is due to the identity Eq. (20), representing the conservation of information, which we prove in the last subsection.

  • Every curve (except I(A(0) : [56789](t))) has a symmetric counterpart such that the sum of the two curves at any time is \(2S(\rho _A(0))\) (for example, \(I(A(0):[5](t)) + I(A(0):[6789](t)) = 2S(\rho _A(0))\)). This can be explained with the identity Eq. (22) in the final subsection.

Fig. 4.

Evolution of the temporal mutual information for all possible partitions C(t), for the ergodic phase (top plot) and the MBL phase (bottom plot). Colors represent different choices of C(t) and are labelled in the legend, and the indices corresponding to different sites are shown in the schematic below the plots (blue highlighting indicates the information-bearing spins). For example, I(A(0) : [5](t)) is the temporal mutual information between A(0) and A(t), I(A(0) : [56](t)) is the temporal mutual information between A(0) and the first 3 spins at time t, and I(A(0) : [56789](t)) is the temporal mutual information between A(0) and the entire system plus environment at time t.

Steady-State Values of Temporal Mutual Information as a Function of Localization Strength. From the dynamics above, I(A(0) : C(t)) reaches steady-state values after evolving for \(Jt \approx 15\). Choosing the final \(20\%\) of the evolution as the steady-state window \(t_{SS}\), we define the steady-state TMI as:

$$\begin{aligned} \overline{I}(A(0):C) = \frac{1}{\varDelta t_{SS}} \int _{t_{SS}} I(A(0):C(t)) \, dt. \end{aligned}$$
(19)
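On a uniform time grid, Eq. (19) is simply the mean of the final portion of the simulated curve. A toy sketch (the function name is ours, and a uniformly sampled grid is assumed):

```python
import numpy as np

def steady_state_tmi(t, I_t, frac=0.2):
    """Eq. (19): average of I(A(0):C(t)) over the final `frac` of the
    evolution, assuming a uniformly sampled time grid."""
    mask = t >= t[-1] - frac * (t[-1] - t[0])
    return float(I_t[mask].mean())

t = np.linspace(0.0, 40.0, 401)
I_t = 2.0 + np.exp(-t)            # toy curve that has already equilibrated
print(round(steady_state_tmi(t, I_t), 6))  # 2.0
```

In practice the same averaging is applied to each disorder realization before the disorder average is taken.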

In this section, we investigate these steady-state values as a function of disorder/localization strength W. The results are shown in Fig. 5. We note the following observations:

  • Indeed, the value of \(\overline{I}(A(0):C)\) for any partition C that contains \(A=[5]\) increases with W, signalling increasing localization of information in the initial spatial region. Conversely, the value of \(\overline{I}(A(0):C)\) for any partition that does not contain [5] decreases with W, as less information has leaked out to these partitions.

  • As W is increased so that the system transitions into the MBL phase, the decay of \(\overline{I}(A(0):C)\) as C is chosen further away from the first 2 spins (also observed and discussed in the previous section) is again visible from the splitting of initially coinciding lines. Again, this indicates that ergodicity spreads information evenly throughout the system, while MBL tends to localize information with a strength that decays spatially.

Fig. 5.

\(\overline{I}(A(0):C)\) as a function of disorder strength W for all possible partitions C. Colors represent different choices of C and are labelled in the legend, and the indices corresponding to different sites can be found in the schematic below Fig. 4. Each value of \(\overline{I}\) is obtained by averaging over 500 disorder realizations, and the averaging window is chosen to be the final \(20\%\) of a total evolution time of \(Jt = 40\).

Scaling with System Size. To study how the steady state temporal mutual information \(\overline{I}(A(0):C)\) scales with system size L, and whether it is useful for detecting the location of the critical disorder \(W_c\) at which the ergodic-MBL transition occurs, we perform additional simulations with different values of L.

Similar analyses [26, 27] that study the ergodic-MBL transition for finite system sizes provide evidence that the entanglement entropy and the Holevo quantity can help locate the ergodic-MBL transition that occurs in the thermodynamic limit, \(L \rightarrow \infty \), where these quantities vary discontinuously across a critical disorder strength \(W_c\). If the behaviour of \(\overline{I}(A(0):C)\) against W approaches a step function in a similar manner as the system size L tends to infinity, the location of the discontinuity then marks the location of the critical disorder \(W_c\).

The results of some preliminary investigations for the same Heisenberg XXX spin chainFootnote 3 are presented in Fig. 6 for \(L=6, 8, 10, 12\). From the figure, while there are good indications that the curves converge to a sigmoidal shape as L increases, suggesting a possible scaling law for \(\overline{I}(A(0):C)\), the limitations of our numerics prevent more concrete claims. More samples and larger system sizes will be needed to establish a scaling law.

Fig. 6.

Normalized steady-state values of I(A(0) : C) against disorder, for system sizes \(L=6, 8, 10, 12\). The information-bearing state consists of the first L/2 spins, while the environment constitutes the remaining L/2 spins. \(\overline{I}(A(0):C)\) is normalized by \(2S(A(0))=2 \ln (2^{L/2}) = L \ln 2\) so that the same y-scale can be used to compare \(\overline{I}\) across different system sizes. Each point is produced by averaging over 100–200 disorder realizations, with an averaging window chosen to be the final \(20\%\) of a total evolution time of \(Jt=40\).

Mathematical Identities. Finally, we state and prove a few identities for the temporal mutual information that explain some features of the numerical results above. Let L symbolically denote the full system, and \(\{ A, B\}\) a partition of L. Suppose further that \(\rho _A(0)\) is a mixed state and \(\rho _B(0)\) is a pure state, so that \(\rho (0) = \rho _A(0) \otimes \rho _B(0)\) (corresponding to our setup above). Recall that \(\rho = |\varPsi \rangle \langle \varPsi |\) is the (pure) dual state obtained using the channel-state duality, Eq. (11).

Lemma 1

$$\begin{aligned} I(A(0):L(t)) = 2S(\rho _A(0)). \end{aligned}$$
(20)

Information initially contained in a subsystem A can always be fully re-extracted at a later time from the full system L.

Proof

From the definition of the temporal mutual information Eq. (15),

$$\begin{aligned} I(A(0):L(t)) = S(\rho _{A(0)}) + S(\rho _{L(t)}) - S(\rho _{A(0)L(t)}). \end{aligned}$$
(21)

We use the fact that the entropies of the two complementary subsystems of a pure state are equal. Since the dual state \(\rho \) is pure, the second term is \(S(\rho _{L(t)}) = S(\rho _{L(0)}) = S(\rho (0)) = S(\rho _A(0) \otimes \rho _B(0))\). The same fact yields \(S(\rho _{A(0)L(t)}) = S(\rho _{B(0)})\) for the third term. Finally, \(\rho _B(0)\) being pure implies that the third term vanishes, while the second term becomes \(S(\rho _{L(t)}) = S(\rho _A(0))\), leading to the result.
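Lemma 1 is easy to verify numerically for a Haar-random unitary, using a two-spin version of the setup above (mixed A, pure B); the helper names are ours:

```python
import numpy as np

def S(rho):
    """Von Neumann entropy from the eigenvalues of rho."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log(w)))

# Haar-random unitary on two qubits via QR decomposition.
rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
U, _ = np.linalg.qr(M)

# Dual state Eq. (11) for rho(0) = (1/2)1_A ⊗ |0><0|_B: the eigenstates of
# rho(0) with nonzero weight are |00> and |10>, each with probability 1/2.
e = np.eye(4)
Psi = np.sqrt(0.5) * (np.kron(e[0], U @ e[0]) + np.kron(e[2], U @ e[2]))

# Work with three subsystems: A(0) (dim 2), B(0) (dim 2), L(t) (dim 4).
rho = np.outer(Psi, Psi.conj()).reshape(2, 2, 4, 2, 2, 4)
rho_A0 = np.trace(np.trace(rho, axis1=2, axis2=5), axis1=1, axis2=3)
rho_Lt = np.trace(np.trace(rho, axis1=0, axis2=3), axis1=0, axis2=2)
rho_A0Lt = np.trace(rho, axis1=1, axis2=4).reshape(8, 8)

I_A0_Lt = S(rho_A0) + S(rho_Lt) - S(rho_A0Lt)
print(np.isclose(I_A0_Lt, 2 * S(rho_A0)))  # True, independent of U
```

The check succeeds for any unitary U, as the proof requires only the purity of the dual state and of \(\rho _B(0)\).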

Lemma 2

If \(\{ C, D\}\) is an arbitrary partition of L, then:

$$\begin{aligned} I(A(0):L(t)) = I(A(0):C(t)) + I(A(0):D(t)). \end{aligned}$$
(22)

Proof

Firstly, note that the subsystem B(0) can always be traced out without changing the entropy. This can be seen by applying the triangle inequality and the subadditivity of the entropy on any subsystem B(0)X containing B(0):

$$\begin{aligned} |S(\rho _B(0)) - S(\rho _X)| \le S(\rho _{B(0) X}) \le S(\rho _X) + S(\rho _B(0)). \end{aligned}$$
(23)

\(\rho _B(0)\) being pure then implies that \(S(\rho _{B(0) X}) = S(\rho _X)\).

The right-hand side of Eq. (22) is, by definition:

$$\begin{aligned}&I(A(0):C(t)) + I(A(0):D(t)) = S(\rho _{A(0)}) + S(\rho _{C(t)}) \nonumber \\&- S(\rho _{A(0)C(t)}) + S(\rho _{A(0)}) + S(\rho _{D(t)}) - S(\rho _{A(0)D(t)}). \end{aligned}$$
(24)

Some terms can be simplified:

$$\begin{aligned} S(\rho _{A(0)D(t)})&= S(\rho _{B(0)C(t)}) = S(\rho _{C(t)})\end{aligned}$$
(25)
$$\begin{aligned} S(\rho _{A(0)C(t)})&= S(\rho _{B(0)D(t)}) = S(\rho _{D(t)}), \end{aligned}$$
(26)

where the first equality in each line holds because the entropies of the two complementary subsystems of a pure state are equal, and the second follows from our initial observation. Finally, cancelling terms yields:

$$\begin{aligned} I(A(0):C(t)) + I(A(0):D(t)) = 2S(\rho _{A(0)}) = I(A(0):L(t)), \end{aligned}$$
(27)

where the second equality is due to the previous identity Eq. (20).

Lemma 3

We have:

$$\begin{aligned} I(A(0):A(t)) = S(\rho _A(0)) + S(\rho _A(t)) - S(\rho _B(t)). \end{aligned}$$
(28)

Proof

Note that:

$$\begin{aligned} S(\rho _{A(0)A(t)}) = S(\rho _{B(0)B(t)}) = S(\rho _{B(t)}), \end{aligned}$$
(29)

where the first equality holds because the entropies of the two complementary subsystems of a pure state are equal, and the second because tracing out B(0) does not affect the entropy. Applying the definition of I(A(0) : A(t)) then leads to the result.
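Lemmas 2 and 3 can be checked numerically in the same way, again with a Haar-random two-qubit unitary and the partition \(\{C, D\}\) given by the two output spins (helper names ours):

```python
import numpy as np

def S(rho):
    """Von Neumann entropy from the eigenvalues of rho."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log(w)))

rng = np.random.default_rng(7)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
U, _ = np.linalg.qr(M)                    # Haar-random two-qubit unitary

# Dual state for rho(0) = (mixed A) ⊗ (pure B), as in the lemmas' hypotheses.
e = np.eye(4)
Psi = np.sqrt(0.5) * (np.kron(e[0], U @ e[0]) + np.kron(e[2], U @ e[2]))
rho = np.outer(Psi, Psi.conj()).reshape([2] * 8)   # axes A,B,C,D + conjugates

def red(keep):
    """Reduced density matrix on the subsystems listed in `keep`."""
    r, traced = rho, 0
    for k in range(4):
        if k not in keep:
            r = np.trace(r, axis1=k - traced, axis2=k - traced + 4 - traced)
            traced += 1
    d = 2 ** len(keep)
    return r.reshape(d, d)

def I(X, Y):
    return S(red(X)) + S(red(Y)) - S(red(sorted(X + Y)))

# Lemma 2: I(A:CD) = I(A:C) + I(A:D).  Lemma 3: I(A:C) = S_A + S_C - S_D.
print(np.isclose(I([0], [2, 3]), I([0], [2]) + I([0], [3])))             # True
print(np.isclose(I([0], [2]), S(red([0])) + S(red([2])) - S(red([3]))))  # True
```

Both identities hold for every random unitary sampled, consistent with the proofs relying only on the purity of \(\rho _B(0)\) and of the dual state.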

4 Conclusion and Discussion

We have investigated information localization in MBL systems from an information-theoretic perspective by introducing the temporal mutual information. Unlike the quantum mutual information, which relates states at the same time, the temporal mutual information allows us to monitor the spread of information as the system evolves. From our numerical simulations of its dynamics, we observe that its evolution and steady-state behavior agree with our intuition that the MBL phase should localize and slow the spread of information. We also find some preliminary indications, based on its scaling with system size, that it can be useful in identifying the ergodic-MBL transition.

Further work in this direction should constitute a better understanding of the temporal mutual information, and its relation with the entanglement entropy. In light of Eq. (28), there is a simple relation between I and S; does the temporal mutual information then contain more information, or is the entanglement entropy sufficient in characterising the spread of information? Otherwise, the scaling behaviour of I could be investigated more thoroughly with larger system sizes. This would ideally involve better suited computational techniques such as the use of matrix product states and the time-evolving block decimation algorithm.