Complexity from the Reduced Density Matrix: A New Diagnostic for Chaos

We investigate circuit complexity as a characterization of chaos in multiparticle quantum systems. In the process, we take a step toward analyzing open quantum systems using complexity. We propose a new diagnostic of quantum chaos based on the complexity of the reduced density matrix, exploring different types of quantum circuits. Through explicit calculations in a toy model of two coupled harmonic oscillators, where one or both of the oscillators are inverted, we demonstrate that the evolution of complexity is a viable diagnostic of chaos.


Introduction
Chaotic systems are abundant in nature. Although we have a reasonably thorough understanding of classical chaos, our knowledge of chaos in quantum systems is still inadequate [1,2]. We expect quantum chaos to play an essential role in some of the most important open questions in physics, such as thermalization, transport in quantum many-body systems, and black hole information loss, to name a few. Therefore, it is essential to have a comprehensive understanding of quantum chaos [1,2].
Traditionally, chaos in quantum systems has been identified by comparing spectral statistics with the predictions of random matrix theory (RMT) [2,3]. Recently, however, other diagnostics have been proposed to probe chaotic quantum systems. One such diagnostic is the out-of-time-ordered correlator (OTOC), F_T(t) ≡ ⟨W†(t) V†(0) W(t) V(0)⟩, where W(t) and V(t) are operators in the Heisenberg picture and the angle bracket ⟨···⟩ denotes a thermal average. This quantity has been argued to carry information about chaos in quantum-mechanical systems [4-6]: it has been shown that the (classical) Lyapunov exponent and the scrambling time can be extracted from it. However, recent works have revealed some tension between the OTOC and RMT diagnostics [7]. A deeper understanding of probes of quantum chaos is therefore required, and it is worthwhile to investigate other probes; ideas from quantum information theory appear particularly promising in this direction.
Ideas and tools from quantum information theory have come to permeate all of modern (quantum) physics. They have brought new insights into traditional ideas and have had far-reaching consequences, e.g. a new perspective on the structure of space-time itself. Most information-theoretic studies begin by (purposely) partitioning the system into subsystems A and B; one considers the reduced density matrix of subsystem A, obtained by tracing out subsystem B, ρ_A = Tr_B[ρ], where ρ is the density matrix of the entire system, and the von Neumann entropy, S = −Tr[ρ_A ln ρ_A]. Indeed, much of quantum information theory is concerned with the uses and interpretation of the von Neumann entropy.
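As a concrete illustration of these ingredients (not part of the original analysis; the two-qubit state below is an assumed toy example), the reduced density matrix and von Neumann entropy can be computed in a few lines:

```python
import numpy as np

# Pure state of a two-qubit system AB: cos(theta)|00> + sin(theta)|11>
theta = np.pi / 6
psi = np.zeros(4)
psi[0] = np.cos(theta)   # amplitude on |00>
psi[3] = np.sin(theta)   # amplitude on |11>

rho = np.outer(psi, psi.conj())            # full density matrix
rho_A = np.trace(rho.reshape(2, 2, 2, 2),  # partial trace over subsystem B
                 axis1=1, axis2=3)

# von Neumann entropy S = -Tr[rho_A ln rho_A], via the spectrum of rho_A
evals = np.linalg.eigvalsh(rho_A)
S = -sum(p * np.log(p) for p in evals if p > 1e-12)
print(S)
```

For theta = pi/6 the reduced state is diag(3/4, 1/4), so S interpolates between 0 (product state) and ln 2 (maximal entanglement).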
An information-theoretic quantity that has recently come into the limelight is the system's complexity. Complexity, or rather 'circuit complexity' [8-10], is an idea from the theory of quantum computation: it is the shortest distance between a reference state |ψ_R⟩ and a target state |ψ_T⟩. Operationally, it quantifies the minimal number of operations needed to manipulate |ψ_R⟩ into |ψ_T⟩. The flurry of recent work on circuit complexity in quantum field theory has largely been spurred by black-hole physics and, in particular, the conjecture that it resolves certain puzzles related to black holes [38,39]. The aspect of complexity that we are particularly interested in is its potential to characterize quantum chaos. Previous work [40-46] has shown that complexity can detect the scrambling time and the Lyapunov exponent. However, this required a particular type of quantum circuit: a doubly-evolved state, where the target state is obtained by first evolving the system forward in time with a Hamiltonian H and then evolving it backward in time with a slightly different Hamiltonian H + δH.
In this work, we take the first step toward using complexity to characterize chaos in a multiparticle system. We consider a toy model consisting of two coupled oscillators, where one or both of the oscillators are inverted. Throughout the paper, we will refer to one of the oscillators as the 'system' and to the other as the 'bath'. It is well known that, classically, the inverted oscillator has an unstable fixed point in phase space at (x = 0, p = 0) (and is, hence, not chaotic in the strict sense). Nonetheless, in the context of studying quantum chaos in various quantum field theories [47-54], it has served as a powerful toy model, mostly because it is exactly solvable. We begin by revisiting the complexity of the doubly-evolved circuit state, namely a state obtained by first evolving the system forward in time with a Hamiltonian H and then evolving it backward in time with a Hamiltonian H + δH. We highlight some new features that were not appreciated in previous works [40]. We find that, in the inverted regime, the complexity of this doubly-evolved state and that of a singly-evolved state (with respect to the same reference state) saturate at the same value. When the parameters of the system and the bath are different, the linear-growth region of the complexity splits into two separate regions, indicating the Lyapunov spectrum [55].
We then propose a new diagnostic of chaos from complexity that does not require this type of contrived target state and is, in fact, more powerful. The new diagnostic is based on the reduced density matrix, employing the "operator-state mapping" [56-58] to build an effective target state. Our proposal captures all the features mentioned above and more; for example, we find that the scrambling time and the Lyapunov exponents are mainly dictated by the bath parameter. Finally, we also compare this new form of complexity with the complexity of purification [17,28,59-62] for detecting chaos, and discover that the complexity of purification is not as sensitive as our proposed probe.
It should be noted that the model we consider can be thought of as the simplest example of an open quantum system, where one subsystem is treated as the 'system' and the rest as a 'bath/reservoir' [47]. Open quantum systems are extremely important for various reasons [63-65]. Our proposal takes a step toward using complexity to characterize open quantum systems. Previously, in [25,26], a system coupled to a classical source was considered; however, the diagnostic used there was the full-system complexity.
The rest of the paper is organized as follows. In Section 2, we present our model and a brief review of circuit complexity. In Section 3, we compute the evolution of complexity of the entire system for different circuits and demonstrate how quantum chaos can be detected and quantified. In Section 4, we illustrate our new proposal as a diagnostic of chaos and compare it with the complexity of purification. Finally, we summarize and present concluding remarks in Section 5.

Our Model and Complexity
Our model consists of two coupled oscillators, where we treat one of the oscillators as the system and the other as the bath. The Hamiltonian of our model is given in Eq. 1. Here x_i and p_i are the position and momentum operators at site i, with [x_i, p_j] = iδ_ij; ω_0 is a parameter with units of energy, and {ε_1, ε_2, λ} ∈ R are dimensionless parameters. In what follows, we take λ ≥ 0 and set ω_0 = 1, i.e. ω_0 sets the energy scale. We work in units where ħ = 1. The Hamiltonian (Eq. 1) is readily diagonalized by introducing a matrix notation (Eq. 2), where ε_0 = (ε_1 − ε_2)/2. Using this matrix notation, the Hamiltonian takes the form of Eq. 2b. By performing an orthogonal transformation, we can diagonalize this Hamiltonian (2b); under this transformation we define new position and momentum variables Q_i and P_i (with [Q_i, P_j] = iδ_ij) as in Eq. 3. The states we will consider originate from a quench in the above model.

Now we give a brief review of circuit complexity. We work directly with the wavefunction and compute the circuit complexity using Nielsen's method [8-10]; the details can be found in [11]. The goal is the following: given a set of elementary gates and a reference state, find the most efficient quantum circuit that starts at that reference state (at s = 0) and terminates at a target state (at s = 1), |Ψ_{s=1}⟩ = U(s = 1)|Ψ_{s=0}⟩, where U is the unitary operator that takes the reference state to the target state. We will denote the target state |Ψ_{s=1}⟩ as |Ψ_T⟩ and the reference state |Ψ_{s=0}⟩ as |Ψ_R⟩ in the rest of the paper. We construct U from a continuous, parametrized, path-ordered exponential of a control Hamiltonian operator. Here s parametrizes a path in the space of unitaries, and given a set of elementary gates M_I, the control Hamiltonian can be written in terms of them. The coefficients Y^I are the control functions that dictate which gate acts at a given value of the parameter s.
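The diagonalization step described above (Eqs. 2-3) can be sketched numerically. The matrix K below is an assumed stand-in for the quadratic form of Eq. 2b (the paper's conventions for the coupling may differ); the point is that negative eigenvalues of K signal inverted, unstable normal modes:

```python
import numpy as np

# Assumed quadratic form for the coupled pair (stand-in for Eq. 2b):
# H = p^T p / 2 + x^T K x / 2, with K parametrized by (eps1, eps2, lam).
def normal_modes(eps1, eps2, lam, omega0=1.0):
    K = omega0**2 * np.array([[eps1, lam],
                              [lam,  eps2]])
    # Orthogonal transformation (cf. Eq. 3): columns of V define Q = V^T x
    w2, V = np.linalg.eigh(K)
    # Negative eigenvalues correspond to inverted (unstable) normal modes,
    # i.e. imaginary frequencies Omega = sqrt(w2 + 0j).
    return np.sqrt(w2.astype(complex)), V

Omega, V = normal_modes(eps1=-1.0, eps2=-1.0, lam=0.5)
print(Omega)
```

For ε_1 = ε_2 = −1 and λ = 0.5, both normal-mode frequencies come out purely imaginary, which is the doubly-inverted regime studied below.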
The control function is essentially a tangent vector in the space of unitaries and satisfies a Schrödinger-like equation. We then define a cost functional F(U, U̇),⁵ whose minimization gives the optimal circuit. There are different choices for the cost functional [11]; we adopt one of them in this paper. In this work, we take the ground state of H_<, |ψ_0⟩, as our reference state |ψ_R⟩, and we consider different target states |ψ_T⟩ that are evolved from this ground state. The target wavefunction ψ_T(x_1, x_2) = ⟨x_1, x_2|ψ_T⟩ takes a Gaussian form (Eq. 18). Following [66], we take the elementary gates M_I to be the generators of the GL(2, C) group; for details, interested readers are referred to [66]. In all cases, due to the structure of the wavefunctions, the complexity takes the closed form of [66], expressed in terms of the frequencies appearing in the exponent of the wavefunction, where α = 1, 2, and the normal-mode frequencies are given by Eq. 8. In this work, we start with the full-system complexity and study the unstable behaviour using the techniques of [40]; we then propose a new diagnostic for studying chaos from the reduced density matrix.

⁴ From Eq. 14 we have Y^I(s) M_I = (dU(s)/ds) U^{-1}(s). The M_I are taken to be the generators of some group and can be normalized so that Tr[M_I M_J^T] = δ_IJ; then Y^I(s) = Tr[(dU(s)/ds) U^{-1}(s) M_I^T], where T denotes the matrix transpose. For further details, interested readers are referred to [11].
⁵ The dot denotes the derivative with respect to s.
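As a minimal numerical sketch of the closed-form complexity quoted above (assuming an F_2-type geodesic cost on Gaussian states in the spirit of [11,66]; the exact gate conventions of [66] may contribute phase-dependent terms omitted here), the complexity of a Gaussian target specified by its complex frequencies Ω_α, relative to a Gaussian reference of frequency ω_ref, can be written as:

```python
import numpy as np

# Hedged sketch: geodesic (F2-type) complexity of a Gaussian target state
# psi_T ~ exp(-(1/2) sum_a Omega_a x_a^2) relative to a Gaussian reference
# of frequency omega_ref.  This mirrors the structure of the closed form
# quoted in the text; gate/phase conventions may differ in detail.
def gaussian_complexity(Omegas, omega_ref=1.0):
    Omegas = np.asarray(Omegas, dtype=complex)
    logs = np.log(Omegas / omega_ref)   # complex log: modulus and phase
    return 0.5 * np.sqrt(np.sum(np.abs(logs) ** 2))

# An unevolved target (Omega_a = omega_ref) has zero complexity:
print(gaussian_complexity([1.0, 1.0]))
```

The complex logarithm is what lets the same formula handle the inverted regime, where the Ω_α(t) develop large imaginary parts.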

System Complexity
In this section, we discuss the complexity of the (entire) system. We consider two types of target states and study the evolution of complexity for the same reference state.
Target state I: First we consider a target state obtained by evolving the system forward in time, |ψ_T⟩ = exp(−iH_> t)|ψ_0⟩, where |ψ_0⟩ is the ground state of H_<, defined in Eq. 10. Working in the position representation, the evolved wavefunction can be written as ψ_T(x, t) = ∫ dx' K(x, x'|t) ψ_0(x'), where the initial wavefunction ψ_0(x) is a Gaussian with frequency Ω_0 = ω_0 (defined in Eq. 1) and K(x, x'|t) is the propagator. For the Hamiltonian given in Eq. 1, the propagator factorizes in normal-mode coordinates, and the propagator for each normal mode takes the form K_α ∝ exp((i/2)[f_α (x_α² + x_α'²) − 2 g_α x_α x_α']), where f_α = Ω_α cot(Ω_α t) and g_α = Ω_α / sin(Ω_α t), with the Ω_α defined via Eq. 8 and α = a, s. To carry out the calculations, we return to the original coordinates via Eq. 3 and carry out the Gaussian integrals, obtaining the target wavefunction in the form of Eq. 18.

Target state II: To get the other target state, we evolve the system forward in time with a Hamiltonian H_>^F and then backward in time with a slightly different Hamiltonian H_>^B: |ψ_T⟩ = exp(iH_>^B t) exp(−iH_>^F t)|ψ_0⟩. Both H_>^F and H_>^B have the same form but slightly different values of the parameters ε_1, ε_2, λ defined in Eq. 1. For this case, we can write, in the position representation, ψ_T as a double convolution of the two propagators with ψ_0, where we are ultimately interested in t_1 → t and t_2 → −t. Using the parametrization for the propagator in Eq. 23a and carrying out the (Gaussian) integrals, one obtains Eq. 24 with a modified frequency matrix. The wavefunction can then be written in the desired form of Eq. 18, where Ω(t)_ij denotes the components of the Ω(t) defined in Eq. 28. Finally, using Eq. 5 and Eq. 8, these can be written in terms of the parameters of the Hamiltonian.
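The Gaussian integrals above can be checked numerically for a single normal mode. Assuming the standard oscillator propagator parametrization K ∝ exp((i/2)[f(x² + x'²) − 2gxx']) with f = Ω cot(Ωt) and g = Ω/sin(Ωt), convolving it with a Gaussian ground state exp(−ω x'²/2) yields another Gaussian exp(−Ω(t) x²/2); the snippet compares the resulting closed form for Ω(t) against brute-force integration (the specific parameter values are illustrative):

```python
import numpy as np

def evolved_frequency(omega, Omega, t):
    # Closed form for one normal mode: convolving the propagator
    # K ~ exp(i/2 [f (x^2 + x'^2) - 2 g x x']) with exp(-omega x'^2 / 2)
    # gives exp(-Omega_t x^2 / 2) with Omega_t as below.
    f = Omega / np.tan(Omega * t)
    g = Omega / np.sin(Omega * t)
    return g**2 / (omega - 1j * f) - 1j * f

def evolved_frequency_numeric(omega, Omega, t, L=12.0, n=40001):
    # Brute-force check: do the x' integral on a grid and read off the
    # quadratic exponent from the ratio psi_t(x) / psi_t(0).
    f = Omega / np.tan(Omega * t)
    g = Omega / np.sin(Omega * t)
    xp = np.linspace(-L, L, n)
    dx = xp[1] - xp[0]
    def psi_t(x):
        integrand = np.exp(0.5j * f * (x**2 + xp**2) - 1j * g * x * xp
                           - 0.5 * omega * xp**2)
        return np.sum(integrand) * dx
    x = 0.3
    return -2.0 * np.log(psi_t(x) / psi_t(0.0)) / x**2

print(evolved_frequency(1.0, 2.0, 0.7))
```

In the t → 0 limit the closed form reduces to Ω(0) = ω, i.e. the unevolved reference frequency, as it must.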
For a chaotic quantum system, even if the two Hamiltonians are arbitrarily close, i.e. H_>^B = H_>^F + δH_>^F, the resulting state |ψ_T⟩ will be quite different from |ψ_0⟩. This type of evolved state is used to compute the Loschmidt echo [67-69], which is a measure of quantum chaos. It was pointed out in [40] that the complexity of this type of doubly-evolved state, defined in Eq. 29, is essential for capturing the chaotic behaviour of the system. The authors showed that complexity captures information similar to that contained in out-of-time-order correlators and, hence, can be used as an alternative diagnostic of chaotic features. In [40], it was also shown that the time scale at which the complexity starts to grow linearly is equivalent to the scrambling time, and that the slope of the linear portion captures the Lyapunov exponent.
In what follows, we employ the approach of [40] and consider the complexity associated with Eq. 26, comparing it with the complexity associated with Eq. 20; in doing so, we demonstrate a new feature of many-body unstable systems. We also demonstrate that the complexity associated with Eq. 26 is capable of sensing the system's Lyapunov spectrum. As highlighted above, since we are interested in chaotic behaviour, we focus on the case where one or both of the oscillators are inverted (ε_1 < 0 and/or ε_2 < 0). Note that there are different ways to construct the perturbed Hamiltonian: for our simple model we have three parameters, {ε_1, ε_2, λ}, and we can construct the perturbed Hamiltonian H + δH by perturbing any combination of them. Fig. 1 displays the evolution of complexity for the above-mentioned target states, with H + δH = H(λ + δλ). Notice that in both cases the complexity is bounded. This behaviour was not seen in [40]; we believe this is due to the greater sensitivity of the wavefunction method over the correlation-matrix method, as discovered in [66]. The key observations that follow from Fig. 1 are:

• The complexity for the singly-evolved state (Eq. 20) grows rapidly and reaches saturation. On the other hand, the complexity for the doubly-evolved state (Eq. 26) starts with a scrambling regime, grows linearly, and finally reaches its (maximum) saturation value.
• The complexity is bounded by the same value for both the singly-evolved and doubly-evolved states.⁷

⁷ This speculation is motivated by the fact that the full-system complexity is very similar to that of the single-oscillator model used in [40], and the main difference between the two analyses is the method of computing complexity. Note that in [40] the complexity was computed using the covariance matrix. Since the states under consideration are Gaussian, they can be equivalently described by their covariance matrices, and one can compute the complexity from the covariance matrix [66]. Moreover, [66] tells us that the covariance-matrix method is less sensitive than the wavefunction method (of computing circuit complexity), as it uses different types of quantum gates for the evolved states.
Note that, for a regular system, the magnitude of the complexity increases as we evolve the system further. Therefore, the bounded nature of the complexity shown in Fig. 1 appears only when one or both oscillators are inverted with fixed parameters.
Furthermore, a state obtained by evolving a reference state multiple times will have the same complexity after the scrambling time in the inverted regime. Hence, it appears that, after a certain time scale, no target state is any more or less difficult to construct (from the same reference state) than any other. This might have potential practical applications in information processing. Note that we can also perform the backward evolution with the Hamiltonian H + δH = H(ε_1 + δε_1, ε_2 + δε_2, λ), i.e. we can perturb the Hamiltonian by slightly changing the parameters ε_1 and ε_2. Fig. 2 displays how perturbations in different directions yield different complexity growth, though the qualitative features of the growth are similar.
The evolution of complexity (from Eq. 26) for this type of perturbation demonstrates another interesting feature of our unstable system: when the parameters ε_1 and ε_2 are quite different, the linear growth of complexity happens in two distinct segments with two different slopes. The right panel of Fig. 2 shows that the complexity is almost flat at early times (scrambling), then grows linearly; after that it flattens again (a second scrambling) for a period, followed by a second stretch of linear growth, before finally reaching saturation.
To make the connection more concrete, in Fig. 3 we fix either 1 or 2 and sweep through the other. We can draw some interesting conclusions, which we list below.
• The first linear growth and the first scrambling time are dictated by the fixed bath parameter ε_2. Fig. 3 displays the complexity for the same ε_2 < 0 and different choices of ε_1 < 0.
• As we increase |ε_2|, the second scrambling time decreases, and the slope of the second linear portion starts to align with the slope of the first linear portion as ε_1 → ε_2.

Figure 3: Complexity displays two separate linear growths when the parameters (ε_1, ε_2) for the system and bath oscillators are different. In the left panel the bath parameter ε_2 is fixed at −6 and the system parameter ε_1 is scanned from −1 to −6. In the right panel, the system parameter ε_1 is chosen as −6, −7, −10, −12 and −20, from the dashed cyan to the solid cyan curve, respectively.
• When the system parameter |ε_1| is larger than the bath parameter |ε_2|, the slope of the upper linear portion starts to bend in the opposite direction (see Fig. 3).
• We see the same behaviour as in the previous point when we fix the system parameter and scan through the bath parameter. This implies that the complexity for this target state cannot distinguish between the two.
In the next section, we propose a new diagnostic that is able to distinguish between the two. We conclude this section by commenting on the saturation value of the complexity when H + δH = H(ε_1 + δε_1, ε_2 + δε_2, λ): if we increase the magnitude of either ε_1 or ε_2, or both, the saturation value of the complexity increases.

Complexity from the Reduced Density Matrix
In this section, we propose a new diagnostic of chaos from complexity. First, this approach does not require constructing the target state by performing two evolutions (forward followed by backward with a slightly different Hamiltonian) as in the previous section; in that sense, it is a more natural diagnostic for studying chaotic behaviour. Second, it captures all the features displayed in the previous section, as well as some new ones. As noted above, our model can be thought of as the simplest open quantum system, where one of the oscillators is treated as the system and the other as the bath. The reduced density matrix plus the operator-state mapping [56-58] form the basic ingredients for constructing the diagnostic.⁸ In the second part of this section, we compare the results obtained from this approach with those obtained using the complexity of purification, showing that the complexity of purification captures less information.

Complexity from Operator-State Mapping
We will be interested in analyzing the reduced density matrix. To that end, we partition the system into two subsystems, taking oscillator 1 to be our "system" and oscillator 2 to be the "bath"; we form the reduced density matrix of oscillator 1, ρ_1 = Tr_2[ρ], where ρ is the density matrix of the entire system.
Calculations are readily executed in the position representation, with ρ(x_1, x_2; x_1', x_2') = ψ(x_1, x_2) ψ*(x_1', x_2') being the position-space density matrix of the full system; the wavefunction is given by Eq. 29. Forming the full position-space density matrix (Eq. 32c) and tracing out oscillator 2 (as per Eq. 32b), one obtains the reduced density matrix for oscillator 1 (Eq. 33a), with γ_1 = Re[γ]. In what follows, we will be particularly interested in powers of the reduced density matrix.

⁸ It would be interesting to modify the existing methods of computing operator complexity to include non-unitary operators.
We focus on ρ_1^{1/2}, due to the structure of Eq. 32c.⁹ Now we are ready to use the operator-state mapping [56-58]. The idea is that to any operator one can associate a state by working in a doubled Hilbert space: for an operator O whose matrix representation with respect to the orthonormal basis {|m⟩} is O_mn = ⟨m|O|n⟩, the associated state is |O⟩ = Σ_{m,n} O_mn |m⟩ ⊗ |n⟩ (up to normalization). Motivated by the thermofield-double state [70], we work with ρ_1^{1/2}; in the position representation, this gives an effective wavefunction (in the doubled Hilbert space) ψ(x, x') = N^{-1} ⟨x|ρ_1^{1/2}|x'⟩, where N is a normalization constant. We will use this as the target state and compute its complexity.
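The key property of this construction can be verified on a small example (illustrative; a random 4-dimensional mixed state stands in for a discretized ρ_1): vectorizing ρ^{1/2} gives a state whose norm is ⟨ψ|ψ⟩ = Tr[(ρ^{1/2})† ρ^{1/2}] = Tr[ρ] = 1, so no extra normalization is needed in the discrete case:

```python
import numpy as np

# Illustrative: operator-state mapping applied to rho^{1/2} of a random
# mixed state on a small Hilbert space (a discretized stand-in for rho_1).
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
rho = A @ A.conj().T
rho = rho / np.trace(rho)          # valid density matrix: rho >= 0, Tr = 1

# Hermitian square root via the spectral decomposition of rho
w, U = np.linalg.eigh(rho)
sqrt_rho = (U * np.sqrt(w)) @ U.conj().T

# Operator-state mapping: |psi> = sum_mn (rho^{1/2})_mn |m> (x) |n>
psi = sqrt_rho.flatten()
print(np.vdot(psi, psi).real)
```

The same vectorization applied to ρ itself would instead give a state with norm Tr[ρ²], the purity, which is why the square root is the natural (thermofield-double-like) choice.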
The effective wavefunction can be written in a Gaussian form. To proceed, we need to diagonalize the argument of the exponential; doing so yields the effective wavefunction of Eq. 39b. In what follows, we use this effective wavefunction (39b) as the target state and compute the complexity with respect to the ground-state wavefunction.

Fig. 4 displays the complexity for this state (39a). When both the system and the bath are inverted (ε_1 < 0, ε_2 < 0), the complexity for the effective wavefunction displays the expected chaotic-like behaviour, namely a short complexity-scrambling period, followed by linear growth and, finally, saturation. On top of that, we see a downward concavity during the scrambling region. Note that, unlike the full-system complexity, we do not see this behaviour when only one of the oscillators is inverted, as illustrated in the right panel of Fig. 4. Understanding the physical meaning of this early-time behaviour of the complexity from the density matrix requires further investigation, which we leave for future work. Next, we want to investigate whether this new diagnostic can sense the entire Lyapunov spectrum, as the (full) system complexity could. This is detailed in Fig. 5. Below we list our findings:

• When we fix the bath parameter ε_2 (< 0) and gradually increase the inverted system parameter (starting with a smaller absolute value than the bath parameter), we see the same behaviour as for the full-system complexity shown in Fig. 2: the linear portion is composed of two different slopes, just like the full-system complexity.

⁹ It would be interesting to consider other powers of ρ_1 and check whether any additional features emerge. We are currently investigating this and hope to report on it in a future publication.
• Unlike the full-system complexity, we see only one scrambling regime when the bath parameter ε_2 is fixed.¹⁰

• Furthermore, when the system and the bath parameters are close, the slopes (of the linear portions of the graphs displayed in Fig. 5) are aligned; the slope remains the same even when the magnitude of the system parameter is larger than the bath parameter. This gives a bound on the growth rate of complexity (the Lyapunov exponent), controlled by the bath parameter. Curiously, this is consistent with the black-hole case, where the Lyapunov exponent is bounded by the bath temperature [6]. At this point this is only a qualitative comparison; we believe it deserves a more systematic study in future. This feature is not present in the full-system case.

¹⁰ By the scrambling time we mean the time scale after which the complexity starts to grow linearly. It is evident from Fig. 5 that the scrambling times coincide even as we change the system parameter ε_1 at fixed bath parameter ε_2; this is what we call a single scrambling regime.
• We do not see the same scrambling regime if we fix the system parameter ε_1 (< 0) and gradually increase the bath parameter (starting with a smaller absolute value than the system parameter), as in Fig. 6. As we change the bath parameter, both the scrambling time and the Lyapunov exponent change significantly.

• We can explore this further by investigating how the scrambling time (the time scale at which the complexity starts to grow linearly [40]) and the slope of the linear growth (the Lyapunov exponent) change with the bath parameter, keeping the system parameter and the coupling fixed. The left panel of Fig. 7 displays the scrambling time as a function of the bath parameter |ε_2|, and the right panel displays the slope as a function of |ε_2|. These figures indicate that as the bath gets more unstable/chaotic, the system scrambles faster and the slope of the complexity grows larger. Both behaviours are consistent with the parameter dependence found for the single oscillator in [40].
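The extraction of the scrambling time and the Lyapunov exponent used in Figs. 5-7 can be sketched as follows; the synthetic complexity profile and the thresholds below are illustrative assumptions, not the paper's data:

```python
import numpy as np

# Synthetic complexity curve: flat until t_scr, linear with slope lam,
# then saturated -- an assumed stand-in for the curves in Figs. 4-6.
t = np.linspace(0.0, 10.0, 1001)
t_scr, lam, C_sat = 2.0, 1.5, 9.0
C = np.clip(lam * (t - t_scr), 0.0, C_sat)

# Scrambling time: first time the growth rate exceeds a small threshold.
dCdt = np.gradient(C, t)
t_scrambling = t[np.argmax(dCdt > 0.1)]

# Lyapunov-exponent estimate: slope of a linear fit to the growing region,
# excluding the saturated tail of the curve.
mask = (dCdt > 0.1) & (C < 0.9 * C_sat)
slope = np.polyfit(t[mask], C[mask], 1)[0]
print(t_scrambling, slope)
```

On real curves with two linear segments (the split Lyapunov spectrum above), the same fit would simply be applied to each segment separately.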
We conclude this section by highlighting another difference from the full-system case. The complexity curves for this effective wavefunction do not map onto one another when the parameters ε_1 and ε_2 are switched, as displayed in Fig. 8. The underlying reason for this asymmetry is, of course, the tracing out of the bath oscillator (oscillator 2). This makes the diagnostic more appropriate for understanding open systems; since in all realistic scenarios we deal with open systems, this complexity provides a more practical way of investigating the underlying chaotic nature of the system.

Complexity of Purification
To further illustrate the importance and the scope of our new diagnostic, in this section we compute the complexity of purification [17,28,59,60], yet another method of computing the complexity of a mixed state. We will show that this particular approach is not as sensitive as the complexity coming from the operator-state mapping [56-58] discussed in the previous section. We start with the reduced density matrix defined in Eq. 33a. First we purify it: we introduce a pure state |ψ_11'⟩ on an enlarged Hilbert space, where the primed factor is an auxiliary Hilbert space, such that tracing out the auxiliary degrees of freedom reproduces ρ_1. Then the complexity of purification (C_P) is defined as C_P = min C(|ψ_11'⟩), where the minimization is over all possible purifications and C(|ψ_11'⟩) is the complexity of the state |ψ_11'⟩ with respect to the reference state, the ground state of the Hamiltonian in Eq. 1. Next we parametrize our purified state (Eq. 42) as a Gaussian, ψ_11'(x_1, x_2) ∝ exp(−(1/2)[α x_1² + γ x_2² + 2τ x_1 x_2]), where x_2 belongs to the auxiliary Hilbert space; α, γ and τ are in general complex and are yet to be determined, and N is the normalization of the wavefunction. The corresponding reduced density matrix follows by tracing out the auxiliary Hilbert space. Using the condition in Eq. 40 together with Eq. 33a, we get α = Ω_1(t), τ = κ(t), Re(γ) = Re(Ω_2(t)).
We have thus determined all the parameters in Eq. 42 in terms of the given quantities (Eq. 45), except for Im(γ). Hence the minimization in Eq. 41 can be carried out over Im(γ) alone, and the minimum value corresponds to the complexity of purification. We then compute the complexity corresponding to Eq. 42 in closed form; minimizing over Im(γ) finally yields the complexity of purification. In Fig. 9 (left panel) we display the evolution of this complexity for ε_2 = −5, λ = 0.1 (as we did with the operator-state mapping [56-58]). We reach the following conclusions:

• By comparing with Fig. 5, we can easily see that C_P gives a similar early-time behaviour. This early-time behaviour is followed by a linear growth, as expected for this kind of system.
• Unlike the complexity coming from the operator-state map, we get only a single slope; hence C_P does not give us information about the full Lyapunov spectrum. We can only extract one of the Lyapunov exponents.
• We have further explored the time evolution of C_P with fixed ε_1 and varying ε_2 in Fig. 9 (right panel). Unlike Figs. 5 and 6, there is no asymmetry in the time evolution of C_P between fixing the system parameter while varying the bath parameter and vice versa.
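The single-parameter minimization over Im(γ) in Eq. 41 can be sketched generically. The cost function below is a hypothetical stand-in for C(|ψ_11'⟩) (the paper's closed form is built from Ω_1(t), κ(t) and Re(Ω_2(t))); only the optimization step is being illustrated:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical stand-in for the complexity C(|psi_11'>) as a function of
# the single undetermined purification parameter Im(gamma); the constants
# a and b are illustrative, not derived from the model.
def cost(im_gamma, a=1.0, b=0.7):
    return np.sqrt(np.log(1.0 + a * (im_gamma - b) ** 2) ** 2 + 1.0)

# Complexity of purification = minimum of the cost over Im(gamma)
res = minimize_scalar(cost, bounds=(-10.0, 10.0), method="bounded")
C_purification = res.fun
print(res.x, C_purification)
```

For a Gaussian family like Eq. 42 the cost is smooth in Im(γ), so a one-dimensional bounded minimization of this kind is all that Eq. 41 requires in practice.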
In light of the above discussion, we would like to point out that the complexity of purification cannot detect the complete Lyapunov spectrum for this two-oscillator model. Therefore, it is not as sensitive as the mixed-state complexity obtained using the operator-state mapping in capturing chaotic behaviour. Finally, we add that although we have taken an illustrative approach in presenting our findings, the results outlined in this paper are robust, as we have scanned the parameter space quite exhaustively.

Discussion
In this work, we took the first step toward using complexity to characterize chaos in a multiparticle system. In the process, we also took a step toward analyzing open quantum systems using the notion of complexity. Our model consisted of two coupled oscillators, where one or both of the oscillators are inverted. Since the inverted oscillator is known to capture features similar to those of chaotic systems [47], this provides a natural toy model for studying chaos. By exploring different types of quantum circuits, we showed that complexity is a useful diagnostic of chaos.
We first considered the complexity of a doubly-evolved target state (forward evolution followed by backward evolution with slightly different Hamiltonians), |ψ_T⟩ = exp(iH_>^B t) exp(−iH_>^F t)|ψ_0⟩, comparing the results with those for a singly-evolved target state, |ψ_T⟩ = exp(−iH_> t)|ψ_0⟩. We showed that the (full) system complexity exhibits different early-time behaviour for the two target states but is bounded by the same saturation value in both cases. When the parameters of the two oscillators are different, the linear-growth region of the doubly-evolved state splits into two separate regions. Furthermore, the growth of the complexity does not change when the parameters of the two oscillators are switched.
Next, we proposed a more natural diagnostic of chaos based on the reduced density matrix (with one of the particles traced out) and the operator-state mapping [56-58]. We discovered that the scrambling time and the Lyapunov exponents are mainly dictated by the parameter of the particle that was traced out, i.e. by the 'bath'. Qualitatively, this is consistent with the black-hole case, where the Lyapunov exponent is bounded by the bath temperature [6]. We compared our results with those obtained using the complexity of purification and showed that the complexity of purification yields less information, further highlighting the importance of the specific construction in our new proposal.
Besides its potential as a more natural probe of chaotic behaviour, the complexity from the density matrix has another advantage over the full-system complexity. In this work, we considered an effective wavefunction ψ(x, x') ∼ ρ_1^{1/2}(x', x), motivated by the thermofield-double construction [70]. However, one can consider effective wavefunctions built from more general powers of ρ_1, ψ(x, x') ∼ ρ_1^q(x', x); this could provide further information with which to characterize the system. Moreover, this proposal provides a more natural form of complexity to compare with other information-theoretic measures such as the Rényi entropy and the entanglement entropy, as they are all based on the same quantity, namely the reduced density matrix. This might open up possibilities to explore complexity as an extension of entropy [73,74] and of other measures of correlations (e.g. the OTOC) [75-77] for open systems.
To give a proof-of-principle argument for complexity from the reduced density matrix as a new diagnostic for chaos, we have used coupled inverted oscillators as a toy model. This is, however, a rather special example and by no means a realistic chaotic system. We want to explore this diagnostic for realistic chaotic systems in future work. One realistic model to which this analysis could be applied is a chaotic spin chain, e.g. the transverse-field Ising model [70]. For that model, one needs to find geodesics on the space of SU(N) unitaries, which we believe is tractable along the lines of [9,10].