Krylov complexity of density matrix operators

Quantifying complexity in quantum systems has witnessed a surge of interest in recent years, with Krylov-based measures such as Krylov complexity ($C_K$) and Spread complexity ($C_S$) gaining prominence. In this study, we investigate their interplay by considering the complexity of states represented by density matrix operators. After setting up the problem, we analyze a handful of analytical and numerical examples spanning generic two-dimensional Hilbert spaces, qubit states, quantum harmonic oscillators, and random matrix theories, uncovering insightful relationships. For generic pure states, our analysis reveals two key findings: (I) a correspondence between moment-generating functions (of Lanczos coefficients) and survival amplitudes, and (II) an early-time equivalence between $C_K$ and $2C_S$. Furthermore, for maximally entangled pure states, we find that the moment-generating function of $C_K$ becomes the Spectral Form Factor and, at late times, $C_K$ is simply related to $NC_S$ for $N\geq2$ within the $N$-dimensional Hilbert space. Notably, we confirm that $C_K = 2C_S$ holds across all times when $N=2$. Through the lens of random matrix theories, we also discuss deviations between complexities at intermediate times and highlight subtleties in the averaging approach at the level of the survival amplitude.


Introduction
Understanding complexity is inherently intuitive yet challenging to precisely quantify. It involves gauging the difficulty of tasks within the constraints of limited resources. For instance, information theorists [1] often adopt the circuit model, where a task is accomplished using a set of elementary gates, and the minimal number of gates that does the job is the estimate for complexity. In recent years, quantifying complexity in quantum systems has emerged as a focal point, spanning disciplines from quantum field theories [2][3][4][5][6][7] to quantum gravity [8][9][10][11][12]. This exploration has given rise to various physically motivated measures of quantum complexity, which are now being tested as innovative tools for unraveling the mysteries of the quantum realm.
One of the most promising recent definitions in the realm of quantum complexity is Krylov complexity, developed to quantify operator size and growth in quantum many-body systems and quantum field theories [13] (see also [14]). Essentially, the idea is to map the growth of the Heisenberg operator into a one-dimensional chain, representing the minimal orthonormal basis capturing operator evolution. As time progresses, operator growth manifests as motion along this chain, with the average position defining the Krylov complexity (i.e. the operator size). Recently, this idea has been extended to quantum states undergoing unitary evolution and spreading through the Hilbert space [15]. Here, Spread complexity quantifies the average position of the state's spread along a similar one-dimensional chain. These two physical definitions have provided significant insights into quantum systems, such as, e.g., sensitivity to quantum chaos [15][16][17], topological phase transitions [18,19], and also play important roles in random matrix theories [20], quantum gravity and the AdS/CFT correspondence [21,22]. See also the references therein for some of the recent developments in Krylov and Spread complexity.
One of the interesting open questions within the Krylov-based complexity program is the exploration of whether and how the two above definitions differ and how they encode the information about the system's dynamics, such as quantum chaos or thermalization. To make progress in this direction, in this work we investigate the Krylov complexity of the growth of the density matrix operator corresponding to a pure state, denoted as ρ(t) = |ψ(t)⟩⟨ψ(t)|, and compare it with the Spread complexity of |ψ(t)⟩ = exp(−iHt)|ψ(0)⟩. In this context, once we find the Krylov basis and Lanczos coefficients for |ψ(t)⟩ and compute the time evolution of the Spread complexity, it is natural to ask how various features (e.g. the growth, the time evolution and various timescales) get repackaged into the Krylov complexity of ρ(t). As we will see, even though the Krylov complexity may appear more natural for pure states (e.g. regarding the return amplitude), the Spread complexity is often amenable to an elegant, analytical treatment. More generally, in this paper we explore similarities and differences between these two approaches for pure states with the hope of paving the way towards more efficient and sensitive definitions of quantum complexity. One intriguing observation emerges when we examine the time evolution of the thermofield double state |ψ(t)⟩ = exp(−i(H_L + H_R)t)|TFD⟩.
Notably, previous work [15] highlighted that the Lanczos coefficients in this context are encoded in the return amplitude given by the amplitude of the spectral form factor (see also related [68,69]). In our investigation, we find a similar pattern when considering the evolution of ρ(t). Here, the return amplitude aligns with the spectral form factor itself. This observation carries significant implications, particularly in the context of averaging and ensembles of theories, such as Random Matrices or 2D quantum gravity [70][71][72]. Namely, it has been recently realised that averages over such return amplitudes in theories dual to gravity probe contributions from gravitational wormholes in the bulk. Following these ideas in the context of complexity of a pure ρ(t), we can then hope to make a link between the Krylov basis and wormholes. However, we point out a few subtleties that stress the fact that the order of taking an average can sometimes lead to incorrect conclusions.
This paper is structured as follows: Section 2 provides essential background information on Krylov complexity for operators. In Section 3, we analyze several analytical models, computing the Spread and Krylov complexity for pure states. Section 4 expands our examination to examples within random matrix theories. Towards the conclusion of this section, in 4.4, we present a simple toy model to highlight the subtleties surrounding the order of averaging in computing Krylov complexity. Section 5 discusses the repackaging of Lanczos coefficients and Krylov basis data for |ψ(t)⟩ into those of ρ(t) = |ψ(t)⟩⟨ψ(t)|. Finally, Section 6 presents our main conclusions and discusses some interesting open questions. Additional details on Spread complexity are provided in Appendix A.

Preliminaries
In this section, we review the formalism of the operator growth within the context of the Krylov complexity [13]. Specifically, tailored to our objectives, we present the formalism pertaining to the density matrix case.

Operator growth and Lanczos algorithm
Density matrix in quantum mechanics. In quantum mechanics, a system is described by a state vector, typically denoted as |ψ⟩, which belongs to a Hilbert space. Nevertheless, in many practical situations, a quantum system may not be in a pure state but rather in a mixed state. A mixed state occurs when a system is in a statistical ensemble of different pure states with certain probabilities.
The notion of the density matrix, denoted as ρ, proves to be a valuable asset in quantum mechanics, offering a means to represent the quantum state of a system, be it pure or mixed. It incorporates information regarding the probabilities associated with various outcomes during measurements. To illustrate, the density matrix takes the form of the outer product of state vectors,
ρ = |ψ⟩⟨ψ|  or  ρ = Σ_i p_i |ψ_i⟩⟨ψ_i| ,    (2.1)
where the former represents a pure state, and the latter characterizes a mixed state. Note that the latter is constructed as a summation of the outer products of individual pure states, each weighted by its respective probability p_i. Density matrices emerge as indispensable tools in branches of quantum mechanics dealing with mixed states, such as quantum statistical mechanics, open quantum systems, and quantum information.
Liouville-von Neumann equation. The Liouville-von Neumann equation, e.g., for the mixed state (the latter in (2.1)), asserts that
i ∂_t ρ(t) = [H, ρ(t)] =: L ρ(t) ,    (2.2)
which follows from the Schrödinger equation, so that
ρ(t) = e^{−iHt} ρ_0 e^{iHt} = e^{−iLt} ρ_0 ,    (2.3)
where ρ_0 := ρ(0) and L is the Liouvillian superoperator. Note that the expression (2.3) indicates that the time evolution of the density matrix, ρ(t), can be articulated as a linear combination involving the sequence of operators
{ρ_0, Lρ_0, L²ρ_0, ...} .    (2.4)
In other words, the initial density matrix ρ_0 undergoes the time evolution within the operator (sub)space spanned by (2.4), which is called the Krylov subspace. The common lore is that a more "chaotic" H (within L) may result in a more complex ρ(t) compared to a scenario where H is less "chaotic". The Krylov subspace admits an orthonormal basis denoted as |ρ_n), commonly referred to as the Krylov basis. This basis can be systematically generated through the application of the Gram-Schmidt orthogonalization procedure on the subspace; this procedure is known as the Lanczos algorithm. To carry out the Lanczos algorithm, an essential requirement is the determination of the inner product between operators ρ_1 and ρ_2, for which we take the trace inner product
(ρ_1|ρ_2) := Tr[ρ_1† ρ_2] .    (2.5)
(The same trace structure, applied to a reduced density matrix, also underlies the von Neumann entropy, a widely adopted quantity for comprehending and quantifying quantum entanglement.)
Utilizing the inner product (2.5), one can construct the Krylov basis |ρ_n) through the Lanczos algorithm:
|ρ_0) = ρ_0 ,  |A_n) = L|ρ_{n−1}) − b_{n−1}|ρ_{n−2}) ,  b_n = √((A_n|A_n)) ,  |ρ_n) = b_n^{−1} |A_n) ,
with b_0 := 0. In this way, the Lanczos algorithm provides the Krylov basis {|ρ_n)} and the associated normalization coefficients {b_n}, commonly referred to as the Lanczos coefficients.
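As a concrete illustration, the algorithm above can be sketched numerically; the following is a minimal Python implementation (assuming the trace inner product (2.5), with full re-orthogonalization added for numerical stability):

```python
import numpy as np

def lanczos_density_matrix(H, rho0, K):
    """Lanczos algorithm for the Liouvillian L = [H, . ] acting on rho0,
    using the trace inner product (A|B) = Tr[A^dag B], with rho0 normalized
    so that (rho0|rho0) = 1.  Returns the Lanczos coefficients b_1, ..., b_K
    (fewer if the Krylov space is exhausted earlier)."""
    ip = lambda A, B: np.trace(A.conj().T @ B)
    O0 = rho0 / np.sqrt(ip(rho0, rho0).real)
    basis, bs = [O0], []
    for _ in range(K):
        A = H @ basis[-1] - basis[-1] @ H          # action of L = [H, . ]
        for O in basis:                            # full Gram-Schmidt sweep
            A = A - ip(O, A) * O
        b = np.sqrt(ip(A, A).real)
        if b < 1e-12:                              # Krylov space exhausted
            break
        bs.append(b)
        basis.append(A / b)
    return np.array(bs)
```

For example, for a two-level Hamiltonian H = diag(0, 1) and the pure state ρ_0 = |ψ⟩⟨ψ| with |ψ⟩ = (|1⟩ + |2⟩)/√2, this returns b_1 = b_2 = 1/√2 and then terminates, reflecting the three-dimensional Krylov space of ρ(t).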
Krylov complexity of density matrix. Upon obtaining the Krylov basis using the Lanczos algorithm, it becomes feasible to represent the time-dependent density matrix, ρ(t), in terms of the Krylov basis:
ρ(t) = Σ_n (−i)^n φ_n(t) |ρ_n) .    (2.6)
Here, the transition amplitudes, denoted as φ_n(t), adhere to the normalization condition Σ_n |φ_n(t)|² = 1. Then, by substituting (2.6) into the Liouville-von Neumann equation (2.2), one can find the recursion relation for the transition amplitudes,
∂_t φ_n(t) = b_n φ_{n−1}(t) − b_{n+1} φ_{n+1}(t) ,    (2.7)
subject to the initial condition φ_n(0) = δ_{n0} by definition. Note that the solution of the equation (2.7), φ_n(t), can be obtained once b_n is determined through the Lanczos algorithm. The transition amplitude φ_n(t) constitutes a fundamental component in constructing the Krylov complexity (2.8), as defined subsequently. Furthermore, it is insightful to regard the Lanczos coefficients as hopping amplitudes facilitating the traversal of the initial operator along the "Krylov chain". In addition, the functions φ_n(t) can be thought of as wave packets moving along this chain [13]. This conceptualization motivates the definition of the Krylov complexity (C_K) as the average position on the chain,
C_K(t) := Σ_n n |φ_n(t)|² ,    (2.8)
i.e., it quantifies the average position in the Krylov basis.

More on Krylov complexity of density matrix
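Before proceeding, we note that the recursion (2.7) and the average position (2.8) are straightforward to evaluate numerically once the b_n are known; a minimal Python sketch (exponentiating the tridiagonal hopping generator via a Hermitian eigendecomposition) is:

```python
import numpy as np

def krylov_complexity(bs, ts):
    """Solve d/dt phi_n = b_n phi_{n-1} - b_{n+1} phi_{n+1} with
    phi_n(0) = delta_{n0}, and return C_K(t) = sum_n n |phi_n(t)|^2,
    given Lanczos coefficients bs = [b_1, b_2, ...]."""
    K = len(bs) + 1
    M = np.zeros((K, K))
    for n, b in enumerate(bs, start=1):
        M[n, n - 1] = b          # hopping down the Krylov chain
        M[n - 1, n] = -b
    # M is real antisymmetric, so iM is Hermitian: exponentiate via eigh
    lam, V = np.linalg.eigh(1j * M)
    phi0 = np.zeros(K)
    phi0[0] = 1.0
    ns = np.arange(K)
    out = []
    for t in ts:
        phi = V @ (np.exp(-1j * lam * t) * (V.conj().T @ phi0))
        out.append(float(ns @ np.abs(phi) ** 2))
    return np.array(out)
```

As a check, for b = (1/√2, 1/√2), which arises for a maximally entangled two-level pure state with unit level spacing, this reproduces C_K(t) = 2 sin²(t/2).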

Lanczos coefficients by moment method
Since we are primarily concerned with φ_n(t), it is instructive to introduce an alternative method for computing the Lanczos coefficients, known as the moment method [13]. The procedural steps for this calculation are outlined as follows. To begin with, the moments µ_{2n} are defined through the moment-generating function G(t) as
µ_{2n} := (1/i^{2n}) (d^{2n}/dt^{2n}) G(t)|_{t=0} ,    (2.9)
where G(t) is chosen as the autocorrelation function C(t) for the Krylov complexity (the odd moments vanish when G(t) is even). Subsequently, by leveraging the determinants of the Hankel matrix M_{ij} = µ_{i+j} constructed from these moments, D_n := Det[(µ_{i+j})_{0≤i,j≤n}], a relation between the Lanczos coefficients b_n and the moments µ_{2n} is established:
b_1^{2n} b_2^{2(n−1)} ⋯ b_n^2 = D_n .    (2.10)
Alternatively, this relationship can be expressed through a recursion relation:
b_n^2 = D_n D_{n−2} / D_{n−1}^2 ,  D_{−1} := 1 .    (2.11)
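In practice, (2.10)-(2.11) can be implemented directly; the following minimal Python sketch (assuming the convention µ_{2n} = (−1)^n G^{(2n)}(0) and a normalized G with µ_0 = 1) recovers the b_n from a list of even moments:

```python
import numpy as np

def lanczos_from_moments(mu_even):
    """Lanczos coefficients from even moments [mu_0, mu_2, mu_4, ...] of an
    even moment-generating function G(t) (odd moments vanish).  Uses the
    Hankel determinants D_n = det(mu_{i+j})_{0<=i,j<=n} and the relation
    b_n^2 = D_n D_{n-2} / D_{n-1}^2, with D_{-1} = 1 and D_0 = mu_0."""
    K = len(mu_even)
    mu = np.zeros(2 * K - 1)
    mu[::2] = mu_even                      # interleave vanishing odd moments
    dets = {-1: 1.0, 0: mu[0]}
    bs = []
    for n in range(1, K):
        Hk = np.array([[mu[i + j] for j in range(n + 1)]
                       for i in range(n + 1)])
        dets[n] = np.linalg.det(Hk)
        b2 = max(dets[n] * dets[n - 2] / dets[n - 1] ** 2, 0.0)  # round-off guard
        bs.append(np.sqrt(b2))
    return np.array(bs)
```

For instance, for G(t) = cos²(t/2) one has µ_{2n} = 1/2 for n ≥ 1, and the routine returns b_1 = b_2 = 1/√2 with b_3 = 0, signalling a three-dimensional Krylov space.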

Density matrix of the thermofield double state
In the preceding subsection, we presented the formalism for the Lanczos algorithm and outlined the computation of Krylov complexity for the density matrix of a generic (pure or mixed) state |ψ⟩. The primary focus of this manuscript is to investigate the Krylov complexity of the density matrix in the context of the thermofield double (TFD) state |ψ_β⟩, given by
|ψ_β⟩ = (1/√Z_β) Σ_n e^{−βE_n/2} |n⟩ ⊗ |n⟩ ,  Z_β := Σ_n e^{−βE_n} ,    (2.12)
where E_n denotes the eigenvalues, |n⟩ represents the eigenstates, and β = 1/T denotes the inverse temperature. Using the TFD state (2.12), we can construct the density matrix of the TFD, denoted as ρ_β:
ρ_β = |ψ_β⟩⟨ψ_β| ,    (2.13)
and its time evolution is given by (2.3):
ρ_β(t) = e^{−iLt} ρ_β .    (2.14)
The motivation for considering this particular pure state, the TFD state, arises from the exploration of Spread complexity [15]. In that study, the authors used the TFD state to identify new features in the evolution of the Spread complexity that are universal characteristics of chaotic systems, such as the ramp-peak-slope-plateau structure. In the subsequent sections of this study, we test and compare these features within the Krylov complexity of the density matrix using the TFD state. Furthermore, we will explore the connection between the Krylov complexity of the density matrix (2.14) and the Spread complexity of our specific pure state, the TFD state. We will also discuss the case of the mixed state within certain analytic models.
Autocorrelation function of the TFD state. Before concluding this section, it is instructive to discuss the autocorrelation function (2.9) for the TFD state, a key quantity for the analysis of Krylov complexity, especially for evaluating the Lanczos coefficients.
For the TFD case, utilizing (2.5) together with (2.14), the autocorrelation function is constructed as follows:
C(t) = (ρ_β(t)|ρ_β) = (1/Z_β²) Σ_{n,m} e^{−i(E_n−E_m)t} e^{−β(E_n+E_m)} .    (2.15)
The final result implies that the autocorrelation function of the density matrix of the TFD state is the spectral form factor (SFF) of the TFD:
C(t) = |Z(β+it)|² / Z(β)² = SFF(t) ,    (2.16)
where the partition function is defined in (2.12). In other words, the Lanczos coefficients of the density matrix of the TFD state can be exclusively determined by the energy spectrum through the SFF. Furthermore, the relationship (2.15) between the autocorrelation function and the SFF can also be illuminated through the survival amplitude S(t), introduced in the study of the Spread complexity of the TFD state [15]. Expressing (2.15) in terms of S(t), we have:
C(t) = (ρ_β(t)|ρ_β) = |⟨ψ_β(t)|ψ_β⟩|² = |S(t)|² ,    (2.17)
where (2.1) and (2.5) are employed in the second equality. In the last equality, S(t) := ⟨ψ_β(t)|ψ_β⟩ is referred to as the survival amplitude (or return amplitude) [15,73,74], and the authors have established its relationship with the SFF as:
|S(t)|² = |Z(β+it)|² / Z(β)² = SFF(t) .    (2.18)
Consequently, the autocorrelation function (2.17) simplifies to C(t) = SFF, as depicted in (2.15)-(2.16). It is worth noting that the autocorrelation function is not equivalent to the SFF for general operators and inner products. Our observation reveals that when we select the density matrix associated with the TFD state as the initial operator and opt for the inner product as defined in Eq.
(2.5), the resulting autocorrelation function precisely aligns with the SFF. It is also worth emphasizing that the moment-generating function of the Krylov complexity (associated with the density matrix of the generic pure state), and that of the Spread complexity [15], can be succinctly summarized as
G_K(t) = C(t) = |S(t)|² ,  G_S(t) = S(t) .    (2.19)
For the Spread complexity (refer to appendix A for a formalism review), with G(t) = S(t), the first Lanczos coefficient is given by the energy variance, (b_1^S)² = ⟨H²⟩ − ⟨H⟩², where the imaginary part is omitted by definition. Then, as the objective function |S(t)|² is an even function of time by definition, it follows that
(b_1^K)² = 2 (b_1^S)² .    (2.24)
Note that analytical studies [37,59] have shown that both C_K and C_S exhibit quadratic behavior in the early-time regime:
C_K(t) ≃ (b_1^K)² t² ,  C_S(t) ≃ (b_1^S)² t² .    (2.25)
As such, combining (2.25) with (2.24), the conclusion emerges that:
• In the early-time regime, the Krylov complexity associated with the density matrix of a general pure state is always twice the Spread complexity: C_K(t) ≃ 2C_S(t).
In the subsequent sections, we will demonstrate that C_K = 2C_S can hold even beyond the early-time regime, specifically when examining the maximally entangled state within the context of a two-dimensional Hilbert space.
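The origin of the factor of two can be made explicit by expanding both generating functions around t = 0 (a short derivation, assuming the moment conventions of this section):

```latex
% Expanding the survival amplitude S(t) = <psi| e^{iHt} |psi> around t = 0:
S(t) = 1 + i\langle H\rangle\, t - \tfrac{1}{2}\langle H^2\rangle\, t^2
       + \mathcal{O}(t^3)\,,
\qquad
|S(t)|^2 = 1 - \big(\langle H^2\rangle - \langle H\rangle^2\big)\, t^2
           + \mathcal{O}(t^4)\,.
% Hence the first Lanczos coefficients of the two chains obey
(b_1^{S})^2 = \langle H^2\rangle - \langle H\rangle^2\,,
\qquad
(b_1^{K})^2 = 2\big(\langle H^2\rangle - \langle H\rangle^2\big)
            = 2\,(b_1^{S})^2\,,
% so the early-time quadratic growth gives
C_K(t) \simeq (b_1^{K}\, t)^2 = 2\,(b_1^{S}\, t)^2 \simeq 2\,C_S(t)\,.
```

In words: squaring the survival amplitude doubles the variance that sets the first hopping amplitude, and the early-time complexity is controlled entirely by that first coefficient.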

Analytically solvable models
In this section, we employ the methodology introduced in the preceding section to investigate the Krylov complexity associated with the density matrix of pure and mixed states. Our primary focus is not only on developing a further understanding of the Krylov complexity, but also on conducting a comparative analysis with the Spread complexity of the TFD, given in the previous literature.
For this purpose, we focus on cases where analytical examination is viable. Specifically, we consider scenarios involving:
1. a general two-dimensional Hilbert space,
2. a qubit state (with a modular Hamiltonian),
3. quantum harmonic oscillators.
It is noteworthy that the first two models pertain to the case of a two-dimensional Hilbert space, while the third one pertains to an infinite-dimensional Hilbert space. Our investigations of these three models are geared towards achieving comparability with the Spread complexity outlined in [15,76].
The analysis of the first model will be expanded to higher (yet finite-)dimensional Hilbert spaces in the subsequent section, employing, in particular, random matrix theories.

Two-dimensional Hilbert space
Let us first consider an arbitrary two-dimensional system with a Hamiltonian in the energy basis represented as
H = diag(E_1, E_2) ,    (3.1)
accompanied by a generic (pure or mixed) density matrix
ρ_0 = [[A_11, A_12 + iB_12], [A_12 − iB_12, A_22]] ,    (3.2)
with real parameters A_11, A_22, A_12, B_12, so that ρ_0 is Hermitian. Additionally, the normalization condition for probabilities (Tr[ρ_0] = 1) and Det[ρ_0] ≥ 0 result in
A_11 + A_22 = 1 ,  A_11 A_22 ≥ A_12² + B_12² .    (3.3)
Recall that the density matrix is a Hermitian and positive semi-definite operator, implying that all of its eigenvalues are real and non-negative. Consequently, the product of its eigenvalues, i.e., its determinant Det[ρ_0], is non-negative. Also recall that to compute the Krylov complexity given by (2.8), one needs to find the transition amplitudes φ_n(t) by solving the recursion equation (2.7), for which the computation of the Lanczos coefficients b_n is required in advance. In this study, the moment method (2.9)-(2.11) is employed to compute b_n, where the initial step involves the computation of the autocorrelation function C(t).
Krylov complexity of the generic state. For the case of the (general) density matrix (3.2), the autocorrelation function, normalized such that C(0) = 1, can be read as
C(t) = (ρ_0(t)|ρ_0)/(ρ_0|ρ_0) = 1 − q + q cos(ΔE t) ,  q := 2(A_12² + B_12²)/(A_11² + A_22² + 2(A_12² + B_12²)) ,    (3.4)
where ΔE := E_2 − E_1. Here we use (2.5) in the second equality, and (3.1)-(3.2) are applied in the last equality. Subsequently, employing the moment method (2.9)-(2.11) for the given (3.4), the nonvanishing Lanczos coefficients can be determined as
b_1 = √q ΔE ,  b_2 = √(1−q) ΔE .    (3.5)
Finally, by solving the recursion equation (2.7) with (3.5), the Krylov complexity of the generic operator (3.2) can be obtained as
C_K(t) = q sin²(ΔE t) + 2q(1−q)(1 − cos(ΔE t))² .    (3.6)
In what follows, we will investigate (3.6) for both pure and mixed states, respectively.

Krylov complexity of the pure state. A pure state saturates the second condition in (3.3), A_11 A_22 = A_12² + B_12², and can be parametrized as
|ψ⟩ = √A_11 |1⟩ + e^{iϕ} √A_22 |2⟩ ,    (3.7)
for which q = 2A_11 A_22. Then, plugging this into (3.6), the Krylov complexity is obtained as
C_K(t) = 2A_11 A_22 sin²(ΔE t) + 4A_11 A_22 (1 − 2A_11 A_22)(1 − cos(ΔE t))² .    (3.9)
It is worth noting that the phase factor ϕ in (3.7) does not contribute to the Krylov complexity since it cancels out in A_12² + B_12². One particularly interesting pure state arises from the mapping
A_11 = e^{−βE_1}/Z_β ,  A_22 = e^{−βE_2}/Z_β ,  ϕ = 0 ,    (3.10)
where Z_β = e^{−βE_1} + e^{−βE_2}. This mapping yields the form of the TFD (2.12) as
|ψ_β⟩ = (e^{−βE_1/2}|1⟩ + e^{−βE_2/2}|2⟩)/√Z_β .    (3.11)
The corresponding Krylov complexity from (3.9) is expressed as
C_K(t) = (2e^{−β(E_1+E_2)}/Z_β²) [ sin²(ΔE t) + 2(1 − 2e^{−β(E_1+E_2)}/Z_β²)(1 − cos(ΔE t))² ] .    (3.12)
Shortly, we will demonstrate that this Krylov complexity can be directly associated with the Spread complexity of the TFD.
Krylov complexity of the mixed state. To investigate the scenario of a mixed state, we examine the maximum value of the Krylov complexity in (3.6), attained at ΔE t = π:
C_K^Max = 8q(1−q) .
Then, the maximization condition for C_K^Max, i.e., the maximum of C_K^Max over the state parameters, is obtained when q = 1/2, which requires A_12² + B_12² = 1/4 (and A_11 = A_22 = 1/2). However, considering the second condition in (3.3), it is evident that
A_12² + B_12² ≤ A_11 A_22 ≤ 1/4 .
As this upper bound is saturated only by a pure state (the blue line in the figure), we can conclude that the Krylov complexity of the density matrix for a pure state is consistently greater than that of a mixed state (the yellow surface in the figure). One remark is in order. The examination of C_K^Max can be conventionally carried out with a fixed value of A_11 (rather than A_12² + B_12²), as it is directly linked to fixing the energy of the system. To illustrate, the (average) energy of the system can be determined as
⟨E⟩ = Tr[ρ_0 H] = A_11 E_1 + A_22 E_2 .    (3.17)
Here, (3.1)-(3.2) have been employed. Additionally, by utilizing (3.17) along with the first condition in (3.3), A_11 + A_22 = 1, we can derive the following expressions:
A_11 = (E_2 − ⟨E⟩)/(E_2 − E_1) ,  A_22 = (⟨E⟩ − E_1)/(E_2 − E_1) .    (3.18)
Consequently, it is appropriate to discuss C_K^Max as a function of A_12² + B_12² when the energy eigenvalues (E_1, E_2) and ⟨E⟩ of the system are specified, i.e., under the condition of fixed A_11 (or A_22).

Spread complexity of the pure state. We now investigate one of the pivotal objectives of this manuscript, namely, to establish a connection between Krylov complexity and Spread complexity. For this purpose, we compute the Spread complexity of the state |ψ⟩ in (3.7), allowing for a comparison with the Krylov complexity of the pure state (3.9). (It is important to recall that, by definition, Spread complexity can only be evaluated for pure states [15].) Similar to the Krylov complexity, the computation of Lanczos coefficients is performed using the moment method (2.19), where the survival amplitude is given by
S(t) = ⟨ψ(t)|ψ⟩ = A_11 e^{iE_1 t} + A_22 e^{iE_2 t} .
The non-vanishing Lanczos coefficients are determined as follows:
a_0 = A_11 E_1 + A_22 E_2 ,  a_1 = A_22 E_1 + A_11 E_2 ,  b_1 = √(A_11 A_22) ΔE .    (3.20)
Finally, the Spread complexity is obtained as
C_S(t) = 4A_11 A_22 sin²(ΔE t/2) .    (3.21)
A brief overview of the Spread complexity computation [15] is provided in appendix A. Upon inspection, it is observed that both the Krylov complexity of the pure state (3.9), denoted as C_K, and the Spread complexity (3.21), denoted as C_S, exhibit oscillatory behavior over time. However, their quantitative characteristics differ.
Importantly, the Spread complexity (3.21), when mapped into the TFD using (3.10), yields more intriguing results:
C_S(t) = (4e^{−β(E_1+E_2)}/Z_β²) sin²(ΔE t/2) ,    (3.22)
which can be compared with (3.12). Notably, when β = 0, the Krylov complexity associated with the density matrix of the TFD is twice the Spread complexity of the TFD, i.e.,
C_K(t) = 2 sin²(ΔE t/2) = 2C_S(t)  (β = 0) .    (3.23)
From these observations, we propose the main conclusion for the case of the two-dimensional Hilbert space:
• For the maximally entangled pure state, Krylov complexity becomes exactly twice Spread complexity: C_K = 2C_S (for any t ≥ 0).
In addition, beyond the maximally entangled state (such as the TFD at finite β), Krylov complexity can exhibit qualitatively similar behavior to Spread complexity. These assertions are the primary highlights of this manuscript, and we provide additional evidence through various examples in the subsequent sections.
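The two-dimensional claims above lend themselves to a direct numerical cross-check. The following self-contained Python sketch (a minimal illustration, taking E_1 = 0, E_2 = 1 and the β = 0 TFD) builds the operator Krylov basis for ρ(t) and the one-step state Krylov basis for |ψ(t)⟩ from scratch, and confirms C_K(t) = 2C_S(t) at machine precision for all sampled times:

```python
import numpy as np

# Two-level Hamiltonian and the beta = 0 (maximally entangled) state
E1, E2 = 0.0, 1.0
H = np.diag([E1, E2])
psi0 = np.array([1.0, 1.0]) / np.sqrt(2)
rho0 = np.outer(psi0, psi0)

ip = lambda A, B: np.trace(A.conj().T @ B)

# Krylov basis of the operator rho(t): Lanczos with full re-orthogonalization
basis = [rho0]
for _ in range(4):
    A = H @ basis[-1] - basis[-1] @ H            # Liouvillian action
    for O in basis:
        A = A - ip(O, A) * O
    b = np.sqrt(ip(A, A).real)
    if b < 1e-12:                                # Krylov space exhausted
        break
    basis.append(A / b)

# One-step Krylov (spread) basis of the state |psi(t)>
K1 = H @ psi0 - (psi0.conj() @ H @ psi0) * psi0
K1 = K1 / np.linalg.norm(K1)

ts = np.linspace(0.0, 10.0, 201)
CK = np.empty_like(ts)
CS = np.empty_like(ts)
for k, t in enumerate(ts):
    U = np.diag(np.exp(-1j * np.array([E1, E2]) * t))
    rhot = U @ rho0 @ U.conj().T
    psit = U @ psi0
    CK[k] = sum(n * abs(ip(O, rhot)) ** 2 for n, O in enumerate(basis))
    CS[k] = abs(K1.conj() @ psit) ** 2

print(np.max(np.abs(CK - 2 * CS)))               # numerically zero
```

In this example C_S(t) = sin²(ΔE t/2) and C_K(t) = 2 sin²(ΔE t/2), in line with (3.23).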

Qubit state: modular Hamiltonian
Subsequently, to provide additional evidence for the aforementioned proposal, we study another analytical example within the context of qubit states. In particular, we consider two entangled qubits, whose state can be expressed in the Schmidt form as follows:
|ψ⟩ = Σ_j √λ_j |j⟩_A ⊗ |j⟩_{A^c} ,    (3.24)
where |j⟩ are the basis vectors. This pure state resides in the Hilbert space H_A ⊗ H_{A^c}, where A represents a qubit subsystem and A^c its complement. In addition, it is noteworthy that one can define the reduced operator ρ_A for the subsystem as
ρ_A = Tr_{A^c} |ψ⟩⟨ψ| =: e^{−H_A} ,    (3.25)
where H_A is referred to as the modular Hamiltonian. Importantly, the state |ψ⟩ in (3.24) is invariant under the time evolution with the total (modular) Hamiltonian H_A ⊗ 1_{A^c} − 1_A ⊗ H_{A^c} [78]. Thus, to investigate the non-trivial time evolution from |ψ⟩, it is necessary to consider the evolution with H_A ⊗ 1_{A^c}, expressed as:
|ψ(t)⟩ = e^{−i(H_A ⊗ 1_{A^c}) t} |ψ⟩ ,    (3.26)
where |ψ⟩ is given in (3.24).
Qubit state with two energy levels. Let us commence by examining the Krylov complexity for a qubit state with two energy levels, expressed as:
|ψ⟩ = √p |0⟩_A |0⟩_{A^c} + √(1−p) |1⟩_A |1⟩_{A^c} ,  ρ_A = diag(p, 1−p) .    (3.27)
Here, ρ_A is obtained by utilizing (3.25), and the corresponding modular Hamiltonian H_A = diag(−ln p, −ln(1−p)) can be derived. It is pertinent to note that the modular Spread complexity analysis for the same state has been presented in [76].
Next, determining the survival amplitude as
S(t) = ⟨ψ(t)|ψ⟩ = p^{1−it} + (1−p)^{1−it} ,    (3.28)
where (3.26) is used with the given ρ_A in (3.27), one can employ the moment method (2.19) to compute the non-vanishing Lanczos coefficients as
b_1 = √(2p(1−p)) |Δ| ,  b_2 = √(1 − 2p(1−p)) |Δ| ,  Δ := ln(p/(1−p)) .    (3.29)
Then, solving the recursion equation (2.7) with b_n, the Krylov complexity can be expressed as
C_K(t) = 2p(1−p) sin²(Δ t) + 4p(1−p)(1 − 2p(1−p))(1 − cos(Δ t))² ,    (3.30)
which can be compared with (3.9). This can be contrasted with the Spread complexity of the two-energy-level state (3.27) from [76], given by:
C_S(t) = 4p(1−p) sin²(Δ t/2) ,    (3.31)
which can also be compared with (3.21). A qualitative similarity between C_K and C_S can be observed.
In order to verify our relation, we first need to identify the maximally entangled state from the entanglement entropy:
S_EE = −Tr[ρ_A ln ρ_A] = −p ln p − (1−p) ln(1−p) ,    (3.32)
where ρ_A is given by (3.27). Notably, p = 1/2 corresponds to the maximally entangled state. Therefore, expanding (3.30)-(3.31) near p = 1/2 yields C_K ≈ 2C_S, which aligns with the observation below (3.23): C_K = 2C_S for the maximally entangled state in the two-dimensional Hilbert space.
Qudit state with three energy levels. Similarly to the two-energy-level state (3.27), we explore the qudit state with three energy levels:
|ψ⟩ = √p |0⟩_A |0⟩_{A^c} + √q |1⟩_A |1⟩_{A^c} + √(1−p−q) |2⟩_A |2⟩_{A^c} .    (3.34)
The corresponding density matrix is obtained as
ρ_A = diag(p, q, 1−p−q) ,    (3.35)
along with the survival amplitude
S(t) = p^{1−it} + q^{1−it} + (1−p−q)^{1−it} .    (3.36)
We find that, for the case of q = p, the Lanczos coefficients can be computed analytically, cf. (3.37), and the recursion equation (2.7) then yields the resulting Krylov complexity (3.38). Additionally, using the method in the appendix, we compute the Spread complexity (3.39) for (3.34). Again, to identify the maximally entangled state, we calculate the entanglement entropy
S_EE = −p ln p − q ln q − (1−p−q) ln(1−p−q) ,    (3.40)
where ρ_A is given by (3.35), and one finds that p = q = 1/3 gives the maximally entangled state.

Quantum harmonic oscillator
To conclude this section, we explore the analytical relationship between C K and C S for the infinite-dimensional Hilbert space.
For this purpose, we focus on the Krylov complexity associated with the density matrix of the TFD for the case of the quantum harmonic oscillator with frequency ω, which has been examined via circuit complexity [79][80][81], Krylov complexity [45] and Spread complexity [15,59]. The spectrum and its partition function are
E_n = ω(n + 1/2) ,    (3.41)
Z_β = Σ_n e^{−βE_n} = 1/(2 sinh(βω/2)) ,    (3.42)
where (2.12) is employed for the partition function. Here, we use the same notation for the integer n as used in the study of the Spread complexity in [15]. Since we are interested in the Krylov complexity of the TFD, the autocorrelation function C(t), computed through (2.19), relies on the survival amplitude S(t):
C(t) = |S(t)|² = sinh²(βω/2) / ( sinh²(βω/2) + sin²(ωt/2) ) .    (3.43)
This expression is derived from (2.18), utilizing (2.16) and (3.42).
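The resulting Lanczos coefficients can be generated symbolically by combining a Taylor expansion of the autocorrelation function with the Hankel-determinant relation (2.10)-(2.11). The sketch below (Python/SymPy; it assumes the closed form C(t) = sinh²(βω/2)/(sinh²(βω/2) + sin²(ωt/2)), which follows from Z_β = 1/(2 sinh(βω/2)), and the moment convention µ_{2n} = (−1)^n C^{(2n)}(0)) returns the first few b_n, which indeed remain finite:

```python
import sympy as sp

def ho_sff_lanczos(beta, omega, K):
    """First K Lanczos coefficients for the oscillator-TFD autocorrelation
    C(t) = sinh^2(beta*omega/2) / (sinh^2(beta*omega/2) + sin^2(omega*t/2)),
    using moments mu_{2n} = (-1)^n C^{(2n)}(0) and the Hankel-determinant
    relation b_n^2 = D_n D_{n-2} / D_{n-1}^2."""
    t = sp.symbols('t')
    a = sp.sinh(beta * omega / 2) ** 2
    C = a / (a + sp.sin(omega * t / 2) ** 2)
    ser = sp.series(C, t, 0, 2 * K + 2).removeO()
    mu = [(-1) ** n * sp.factorial(2 * n) * ser.coeff(t, 2 * n)
          for n in range(K + 1)]
    # interleave vanishing odd moments so that index i+j addresses mu_{i+j}
    full = [mu[n // 2] if n % 2 == 0 else sp.Integer(0)
            for n in range(2 * K + 1)]
    dets = {-1: sp.Integer(1), 0: full[0]}
    bs = []
    for n in range(1, K + 1):
        M = sp.Matrix(n + 1, n + 1, lambda i, j: full[i + j])
        dets[n] = M.det()
        bs.append(float(sp.sqrt(dets[n] * dets[n - 2] / dets[n - 1] ** 2)))
    return bs
```

Since C(t) is periodic with an infinite tower of harmonics, the Krylov chain does not terminate: all computed b_n come out strictly positive, with b_1² = ω²/(2 sinh²(βω/2)) in the small-t expansion.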
Then, utilizing the moment method, we determine the non-vanishing Lanczos coefficients b_n for the quantum harmonic oscillator (3.44). We numerically find that b_n oscillates in the small-n regime and remains finite even with increasing values of n. As an illustration, we present a representative plot with (β, ω) = (π/2, 3) in the left panel of Fig. 2. Note that non-vanishing b_n at larger n is also observed for the case of the Spread complexity in [15]. Finally, by solving the recursion equation (2.7) using the numerically obtained b_n, we determine the Krylov complexity of the quantum harmonic oscillator. The results are presented as the black line in the right panel of Fig. 2. Notably, the qualitative behavior of C_K is akin to the Spread complexity described in [15], which is depicted as the blue line in the right panel of Fig. 2.

Inverted quantum harmonic oscillator. One might contemplate whether an exact duality between C_K and C_S, as proposed in (3.23), can be identified even for the infinite-dimensional Hilbert space. However, unlike the two-dimensional Hilbert space example, setting β to zero for the harmonic oscillator is not feasible, as indicated in (3.42). Moreover, generic analytic expressions for both b_n and C_K are not readily available in this case.
Nevertheless, some progress can be made in the case of the inverted harmonic oscillator, where ω → −iΩ. Specifically, we have found that the Lanczos coefficients given in (3.44) simplify in this case. In particular, when cos(βΩ) = 0, which establishes the relationship between β and the frequency as
βΩ = (2k+1)π/2 ,  k = 0, 1, 2, ... ,    (3.47)
the Lanczos coefficients can be found analytically, which is verified in Fig. 3. Given these analytic Lanczos coefficients, we solve the equation (2.7) and obtain the analytic expression for φ_n(t). This provides the Krylov complexity of the inverted harmonic oscillator, (3.50). Having the analytic expression for the Krylov complexity, we can finally compare it quantitatively with the Spread complexity for the inverted harmonic oscillator, (3.51), which is obtained from (3.45) after substituting ω → −iΩ.
Comparing (3.50) with (3.51) alongside (3.47), we derive and confirm that C K = 2C S in (2.25) is valid in the early time regime.
Recall that for quantum harmonic oscillators, setting β = 0 for the TFD is not well-defined (the partition function diverges). Hence we cannot claim that C_K = 2C_S for the maximally entangled state in the quantum harmonic oscillator. Nevertheless, we also observe that the derived complexities of the inverted harmonic oscillator, (3.52), in the late-time regime (t ≫ 1) scale as
C_K(t) ∼ e^{2Ωt} ,  C_S(t) ∼ e^{Ωt} ,
leading to the relation C_K = C_S², which will be worth exploring in other examples in the future.

Average Krylov complexity
In this section, we continue exploring the relationship between C_K and C_S within the context of a higher (yet finite-)dimensional Hilbert space. In particular, we consider random matrix theories (RMT) for this purpose. It is worth noting that the analysis of C_S for RMT can be found in [15] (and [17,20,82]), allowing us to draw comparisons between our findings for C_K and the C_S presented in that literature.
RMT serves not only as a suitable model for our purpose, but also holds significance in physics, particularly in identifying potentially universal features within general chaotic systems. A foundational conjecture suggests that the detailed structure of the spectrum of a quantum chaotic Hamiltonian can be effectively approximated by the statistical behaviors of random matrices [83,84]; see also [85][86][87][88] for reviews.
Within the scope of this manuscript, we consider all three universality classes: the Gaussian Unitary Ensemble (GUE), the Gaussian Orthogonal Ensemble (GOE), and the Gaussian Symplectic Ensemble (GSE). These terms denote specific ensembles of random matrices. Each ensemble aligns with a distinct symmetry class of random matrices with a Gaussian measure,
P(H) = (1/Z_{GUE/GOE/GSE}) exp( −(N/(2E²)) Tr H² ) ,    (4.1)
where H is the N × N matrix, E is the scaling factor, which can be chosen as E = 1, and Z_{GUE/GOE/GSE} is a normalization constant. Note that here N denotes the dimension of the Hilbert space.
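For reference, samples from the three Gaussian ensembles can be generated as follows (a minimal Python sketch; normalization conventions, i.e. the overall variance scaling in (4.1), vary across references and are left at unit scale here). The GSE matrix is built in its quaternion-real block form, so its spectrum is doubly degenerate (Kramers pairs):

```python
import numpy as np

rng = np.random.default_rng(0)

def goe(N):
    """Real symmetric sample (GOE, Dyson index D = 1)."""
    A = rng.normal(size=(N, N))
    return (A + A.T) / 2

def gue(N):
    """Complex Hermitian sample (GUE, Dyson index D = 2)."""
    A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
    return (A + A.conj().T) / 2

def gse(N):
    """Quaternion-real Hermitian sample (GSE, Dyson index D = 4),
    returned as a 2N x 2N matrix with doubly degenerate spectrum."""
    A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
    B = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
    X = (A + A.conj().T) / 2          # Hermitian block
    Y = (B - B.T) / 2                 # complex antisymmetric block
    return np.block([[X, Y], [-Y.conj(), X.conj()]])
```

Diagonalizing these samples supplies the energy eigenvalues E_n that, per the discussion below, are the sole input needed for the averaged complexities.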
Average Krylov complexity. It is worth noting that in RMT the Hamiltonian is drawn at random from one of the classes in (4.1). Consequently, the resulting C_K from RMT is subject to this inherent "randomness". Thus, to mitigate this randomness and uncover the essential structure in C_K, this paper introduces the average Krylov complexity, denoted as
C̄_K(t) := ∫ Π_i dE_i ρ(E_i) C_K(t; {E_i}) ,    (4.2)
where the probability distribution of eigenvalues for an N × N Gaussian matrix, ρ(E_i), is given by
ρ(E_i) = (1/Z) Π_{i<j} |E_i − E_j|^D exp( −(N/(2E²)) Σ_i E_i² ) ,    (4.3)
where Z is the normalization constant, and D = 1, 2, 4 is referred to as the Dyson index, corresponding to GOE, GUE, and GSE, respectively. Notably, ρ here does not represent the density matrix.
Three remarks are in order. Firstly, (4.2) is equivalent to the arithmetic mean, i.e.,
C̄_K ≈ (1/N_it) Σ_{i=1}^{N_it} C_K^{(i)} ,    (4.4)
where C_K^{(i)} denotes the Krylov complexity of the i-th realization among the total of N_it iterations. Secondly, for a sufficiently large dimension of the Hilbert space (or size of the matrix), C̄_K ≈ C_K, implying no need for averaging. Thirdly, the average Spread complexity C̄_S can also be defined similarly. The primary objective of this section is then to explore the relationship between C̄_K and C̄_S, specifically in the context of the TFD when N ≥ 2. It is important to note that, for the TFD case, the autocorrelation function (2.15) becomes determinable once the energy eigenvalues E_n are acquired. Consequently, by employing the moment method outlined in (2.9)-(2.11), the coefficients b_n can be determined. Subsequently, one can solve the recursion equation (2.7) to calculate C_K. In simpler terms, the "sole" input required to compute C_K in this section is the energy spectrum E_n of the random matrix (4.1).

Krylov complexity in random matrix theories

Analytic results (N = 2)
In general, numerical methods are typically employed to compute (4.2)-(4.3).However, for the specific case of N = 2, analytical computation becomes feasible.
For RMT with $N = 2$, applicable to our two-dimensional Hilbert space results, both $C_K$ in (3.12) and $C_S$ in (3.22) can be determined analytically in closed form, (4.5), in terms of $\Delta E := E_2 - E_1$. Recall that $C_K = 2C_S$ holds for the maximally entangled state (3.23) over the entire time range, and hence it also holds for RMT when $N = 2$.
Furthermore, when $N = 2$, (4.2) can be further expressed as
$$\overline{C}_K = \int_0^\infty d(\Delta E)\, P(\Delta E)\, C_K(\Delta E;\, t)\,, \qquad (4.6)$$
where $P(\Delta E)$ is the probability distribution function of the level spacing $\Delta E$. This distribution plays a crucial role in the subsequent subsection.
For $N = 2$, $P(\Delta E)$ can be computed analytically for each class; the resulting expressions (4.7) are the Wigner-surmise-type distributions $P(\Delta E) \propto (\Delta E)^{D}\, e^{-c_D (\Delta E)^2}$, with constants $c_D$ fixed by normalization, for GOE, GUE, and GSE. By plugging (4.7) and (4.5) into (4.6), we obtain the analytical expression (4.8) for $\overline{C}_K$ when $\beta = 0$ (i.e., the maximally entangled state), which involves the Dawson function $D(t) = e^{-t^2}\int_0^t e^{x^2}\, dx$. To contrast with a non-chaotic model, we also examined Independent and Identically Distributed random variables (IID) drawn from the standard normal distribution, for which the spacing distribution is $P(\Delta E) = \frac{1}{\sqrt{\pi}}\, e^{-(\Delta E)^2/4}$.

In Fig. 4, we display the average Krylov complexity for $N = 2$. In the left panel, we observe that the Krylov complexity of the RMT aligns with the Spread complexity of the RMT, as presented in [15]. In other words, both complexities exhibit a characteristic peak for the quantum chaotic cases (GUE, GOE, GSE), in contrast to the integrable case (IID). This aligns with our analytical proof establishing that $C_K = 2C_S$. Furthermore, in the right panel, the $\beta$-dependence is also consistently observed: complexities are suppressed at finite $\beta$.
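As a cross-check of the peak/no-peak contrast in Fig. 4, the average (4.6) can be estimated by Monte Carlo. The sketch below assumes the two-level, $\beta = 0$ result $C_K = 2\sin^2(\Delta E\, t/2)$ (equivalently $C_K = 2C_S$ with $C_S = \sin^2(\Delta E\, t/2)$); the ensemble normalization is again illustrative, so the peak location need not match the paper's figures:

```python
import numpy as np

rng = np.random.default_rng(1)
n_it = 20_000

# Level spacings Delta E for N = 2 GUE (illustrative normalization)
A = rng.normal(size=(n_it, 2, 2)) + 1j * rng.normal(size=(n_it, 2, 2))
E = np.linalg.eigvalsh((A + A.conj().transpose(0, 2, 1)) / 2)
s_gue = E[:, 1] - E[:, 0]
# IID spacings: |difference of two standard normals|
s_iid = np.abs(rng.normal(size=n_it) - rng.normal(size=n_it))

# Assumed two-level result at beta = 0: C_K = 2 sin^2(DeltaE * t / 2)
ts = np.linspace(0, 10, 400)
ck_gue = np.array([np.mean(2 * np.sin(s_gue * t / 2) ** 2) for t in ts])
ck_iid = np.array([np.mean(2 * np.sin(s_iid * t / 2) ** 2) for t in ts])

# GUE overshoots the late-time plateau (value 1) before settling: the peak.
# The IID average approaches the plateau monotonically, without a peak.
print(ck_gue.max(), ck_iid.max(), ck_gue[-1])
```

Level repulsion ($P(\Delta E) \propto \Delta E^2$ for GUE) is what pushes the averaged $\langle\cos(\Delta E\, t)\rangle$ negative at intermediate times and produces the peak; the Gaussian IID spacing distribution keeps it non-negative.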

Numerical results (N > 2)
We proceed with the exploration of the average Krylov complexity and Spread complexity for $N > 2$. In this scenario, numerical methods are employed, as outlined below (4.4). Essentially, once the energy eigenvalues are computed, $\overline{C}_K$ can be evaluated.
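The numerical procedure can be sketched end-to-end: draw eigenvalues, build the TFD density matrix as a vector in $\mathcal{H}\otimes\mathcal{H}$, run the Lanczos algorithm on the (diagonal) Liouvillian, and read off $C_K(t) = \sum_n n\,|\varphi_n(t)|^2$. The sketch below uses full reorthogonalization instead of the bare moment recursion (2.9)-(2.11), a substitution made here for numerical stability:

```python
import numpy as np

def lanczos(L, v0, tol=1e-10):
    """Lanczos with full reorthogonalization; returns Krylov basis and b_n."""
    basis = [v0 / np.linalg.norm(v0)]
    bs = []
    w = L @ basis[0]
    while True:
        for q in basis:                      # reorthogonalize against all vectors
            w = w - (q.conj() @ w) * q
        b = np.linalg.norm(w)
        if b < tol:                          # Krylov space exhausted
            break
        basis.append(w / b)
        bs.append(b)
        w = L @ basis[-1]
    return np.array(basis), np.array(bs)

# One GUE Hamiltonian (N = 4); only its eigenvalues are needed
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
E = np.linalg.eigvalsh((A + A.conj().T) / 2)

# TFD density matrix as a state: (rho_0)_{ab} = sqrt(p_a p_b)
beta = 0.0
p = np.exp(-beta * E); p /= p.sum()
rho0 = np.sqrt(np.outer(p, p)).ravel().astype(complex)
rho0 /= np.linalg.norm(rho0)

# The Liouvillian is diagonal in the energy basis: omega_{ab} = E_a - E_b
omega = (E[:, None] - E[None, :]).ravel()
K, bs = lanczos(np.diag(omega).astype(complex), rho0)

# Exact evolution and C_K(t) = sum_n n |phi_n(t)|^2
ts = np.linspace(0, 20, 200)
CK = np.array([
    np.sum(np.arange(len(K)) * np.abs(K.conj() @ (np.exp(-1j * omega * t) * rho0)) ** 2)
    for t in ts
])
```

The number of Krylov vectors produced this way is bounded by the number of distinct frequencies $\omega_{ab}$, i.e., by $D_H^2 - D_H + 1$ for a non-degenerate spectrum.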
In Fig. 5, we present both $\overline{C}_K$ and $\overline{C}_S$ for various values of $N$, with a specific focus on $N = 3, 4, 8$, and $10$ at $\beta = 0$. We observe two main features for higher-dimensional Hilbert spaces.
• The peak in $\overline{C}_K$ diminishes, unlike in $\overline{C}_S$ (e.g., see the left column in Fig. 5).
In the subsequent subsection, we provide a potential origin of this difference (observed in the intermediate time regime) between $\overline{C}_K$ and $\overline{C}_S$ through an analysis of the energy spectrum. Two remarks are in order. Firstly, across all figures it is evident that the initial growth rate of $\overline{C}_K$ (and $\overline{C}_S$) in the chaotic cases (red, green, blue) surpasses that of the integrable case (black). Secondly, the value of the late-time plateau is exactly the same for the integrable and RMT cases.
Additionally, as a by-product, we discover a potential connection between the two complexities at late times ($t \gg 1$): $\overline{C}_K \approx N\, \overline{C}_S$. Combining the results for the two (and infinite)-dimensional Hilbert spaces in the preceding sections with the finite-dimensional outcomes here, we conclude that, for the maximally entangled pure state in a finite $N$-dimensional Hilbert space,
$$C_K \approx N\, C_S \quad (t \gg 1)\,,$$
where $C_K = 2C_S$ holds over the entire time window when $N = 2$. It is also important to recall that $C_K = 2C_S$ at early times is universally valid for a general pure state (see below (2.25)).

Energy spectrum analysis
In this section, we consider one possible scenario for understanding the difference between the Liouvillian and the Hamiltonian dynamics in Krylov space during intermediate times.
Our analysis closely draws on the conjecture concerning the dimension of the Krylov space ($D_K$) introduced in [27], which we elaborate on below.

The dimension of the Krylov space ($D_K$) and quantum chaos. We begin with the eigen-equation of the Liouvillian superoperator $\mathcal{L}$ (2.3),
$$\mathcal{L}\,|E_a\rangle\langle E_b| = \omega_{ab}\,|E_a\rangle\langle E_b|\,, \qquad \omega_{ab} := E_a - E_b\,, \qquad (4.12)$$
where $E_a$ and $|E_a\rangle$ denote the energy eigenvalues and eigenstates of the given Hamiltonian.
Let us consider an arbitrary operator $O$, expressed in the basis (4.12) as
$$|O) = \sum_{a,b=1}^{D_H} O_{ab}\, |E_a\rangle\langle E_b|\,, \qquad (4.13)$$
where $D_H$ is the dimension of the Hilbert space. Acting with $\mathcal{L}^n$ on $|O)$ yields
$$\mathcal{L}^n |O) = \sum_{a,b=1}^{D_H} \omega_{ab}^{\,n}\, O_{ab}\, |E_a\rangle\langle E_b|\,, \qquad (4.14)$$
which generates the Krylov subspace (analogous to (2.4) for the case of the density matrix). The dimension of this subspace is denoted $D_K$, and it satisfies
$$D_K \,\le\, D_H^2 - D_H + 1\,. \qquad (4.15)$$
(See [17] for more discussion of the validity of this conjecture and potential subtleties.)
The $D_H^2 - D_H$ term comes from the off-diagonal part of (4.14): it counts the off-diagonal components of a $D_H \times D_H$ matrix. Additionally, there is a $+1$ from the diagonal term: since $\omega_{aa} = 0$, the diagonal part of $|O)$ spans only a single direction in the Krylov space.
In [27], it was conjectured that quantum chaos implies large values of $D_K$. In particular,
$$(\text{Chaotic system}): \quad D_K = D_H^2 - D_H + 1\,. \qquad (4.16)$$
Thus, if true, it is crucial to determine when and how $D_K$ can attain this upper bound.
Reduced $D_K$ by type-II degeneracies. Following [27], we now suppose that for some set of $m$ pairs of index pairs $I$, the eigenvalues of $\mathcal{L}$ are degenerate, i.e.,
$$\omega_{a_i b_i} = \omega_{c_i d_i}\,, \qquad i = 1, \dots, m\,. \qquad (4.19)$$
In this case, the off-diagonal part of (4.14) can be decomposed so as to isolate the degenerate frequencies, as in (4.18). In terms of the spectrum, (4.19) implies that there are $m$ pairs of energies with the same gap, $E_a - E_b = E_c - E_d$ for some $(a,b) \neq (c,d)$. We refer to this property as type-II degeneracy. Importantly, when (4.19) occurs, [27] showed that the upper bound on $D_K$ is reduced by the number of type-II degeneracies $m$:
$$D_K \,\le\, D_H^2 - D_H + 1 - m\,. \qquad (4.20)$$
The reduction in $D_K$ may play an important role in understanding the emergence of chaotic characteristics within Krylov complexity. Notably, even when the system's Hamiltonian is chaotic, the condition (4.16) may not be fulfilled due to a large number of type-II degeneracies. In such cases, the chaotic attributes stemming from Krylov complexity might not be distinctly apparent.
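The bound and its reduction by type-II degeneracies can be illustrated by counting distinct Liouvillian frequencies: for an operator with support on every $|E_a\rangle\langle E_b|$, $D_K$ equals the number of distinct $\omega_{ab}$. The spectra below are hypothetical examples chosen for illustration:

```python
import numpy as np

def krylov_dim(E, tol=1e-9):
    """Number of distinct Liouvillian frequencies omega_ab = E_a - E_b.
    For an operator supported on all |E_a><E_b|, this equals D_K."""
    omega = (E[:, None] - E[None, :]).ravel()
    # bin frequencies at resolution tol and count distinct bins
    return len(np.unique(np.round(omega / tol).astype(np.int64)))

E_generic = np.array([0.0, 1.0, np.sqrt(2), np.pi])   # incommensurate gaps
E_degen   = np.array([0.0, 1.0, 2.0, 3.0])            # evenly spaced: equal gaps

print(krylov_dim(E_generic))  # D_H^2 - D_H + 1 = 13
print(krylov_dim(E_degen))    # reduced by type-II degeneracies: 7
```

The evenly spaced spectrum realizes many coincident gaps $E_a - E_b = E_c - E_d$, collapsing the 13 generic frequencies down to 7.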
Generalized type-II degeneracy in RMT. Given the preceding discussion, it is reasonable to speculate that, within the framework of random matrix theory, type-II degeneracies might serve as a potential source of the distinction between $C_K$ ($N > 2$) and $C_S$ in the intermediate time regime.
An important subtlety arises: since we are dealing with continuous probability distributions, the probability of any specific value of $\Delta\omega := \omega_{ab} - \omega_{cd}$ is strictly zero. Instead, attention is directed to the probability associated with intervals, for which the probability density function yields non-zero probabilities. In this context, we define the probability distribution function $P(\Delta\omega)$ as
$$P(\Delta\omega) = \int \Big(\prod_i dE_i\Big)\, \rho(E_i)\; \delta\big(\Delta\omega - |\omega_{ab} - \omega_{cd}|\big)\,, \qquad (4.21)$$
where $\rho(E_i)$ is given in (4.3). Note that, by definition, $P(\Delta\omega = 0) = 0$ for $N = 2$, as there are only two eigenvalues. However, when $N > 2$ we find $P(\Delta\omega = 0) \neq 0$; in fact, it is the maximum value of the distribution. For instance, for GUE with $N = 3, 4$ and $\Delta\omega = |\omega_{12} - \omega_{23}|$, the distribution can be computed in closed form (a polynomial in $\Delta\omega$ times Gaussian factors); these distributions are illustrated in Fig. 6. Hence, in RMT the probability of having exactly $\Delta\omega = 0$ is zero, while there exists a non-zero probability of having $\Delta\omega \approx 0$ within a finite interval. This prompts the question: are the complexities of nearly degenerate and exactly degenerate systems nearly identical? The arguments below suggest that the answer is affirmative.
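Before turning to that argument, the claim that the density is non-vanishing near $\Delta\omega = 0$ for $N > 2$ can be checked with a quick Monte Carlo histogram (the ensemble normalization is again illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n_it = 50_000

# Sample N = 3 GUE spectra
A = rng.normal(size=(n_it, 3, 3)) + 1j * rng.normal(size=(n_it, 3, 3))
E = np.linalg.eigvalsh((A + A.conj().transpose(0, 2, 1)) / 2)

# Delta_omega = |omega_12 - omega_23| = |(E_2 - E_1) - (E_3 - E_2)|
dw = np.abs((E[:, 1] - E[:, 0]) - (E[:, 2] - E[:, 1]))

hist, edges = np.histogram(dw, bins=40, range=(0, 4), density=True)
# Non-vanishing density at dw ~ 0: nearby gap pairs are not repelled,
# even though each individual level spacing is repelled away from zero.
print(hist[0])
```

Contrast with $N = 2$: there a single spacing exists, $\Delta\omega$ reduces to differences of repelled levels, and the density at zero vanishes.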
To begin, consider the scenario with exact type-II degeneracy. Suppose that $b_\alpha$ is the final Lanczos coefficient, so $b_{\alpha+1} = 0$; due to type-II degeneracy, $\alpha$ will be smaller than $D_H^2 - D_H + 1$. If we slightly alter the spectrum so that none of the $\Delta\omega$ are strictly zero, the previous Lanczos coefficients $(b_1, b_2, \cdots, b_\alpha)$ experience only a minor shift. However, as $b_{\alpha+1}$ becomes slightly larger than zero, $b_{\alpha+2}$ will take a considerably large value, as can be inferred from (2.11). Consequently, the subsequent Lanczos coefficients are not expected to vanish in most cases. It is crucial to emphasize that the transition amplitude propagates from $\varphi_0$ towards $\varphi_{D_H^2-D_H+1}$, so the early deviations derive primarily from $\varphi_\alpha$. However, given the small value of $b_{\alpha+1}$, and the fact that the initial growth of the transition amplitude $\varphi_{\alpha+1}(t)$ relies entirely on $b_{\alpha+1}\varphi_\alpha(t)$, $\varphi_{\alpha+1}(t)$ can be neglected for some period of time. Even though $b_{\alpha+2}$ assumes a considerable magnitude, the negligible value of $\varphi_{\alpha+1}$ during this period implies that $\varphi_{\alpha+2}$ can also be disregarded over the same period. Consequently, the transition amplitude essentially halts its propagation at $\varphi_\alpha$, and the subsequent transition amplitudes $(\varphi_{\alpha+1}(t), \varphi_{\alpha+2}(t), \ldots)$ can be neglected in (2.8). Given that the changes in the previous Lanczos coefficients $(b_1, \ldots, b_\alpha)$ are minimal, the preceding transition amplitudes $(\varphi_0(t), \ldots, \varphi_{\alpha-1}(t))$ likewise change negligibly. Consequently, the overall change in complexity should remain minimal.
Given the similarity between the complexities of systems exhibiting nearly type-II degeneracy and those with exact type-II degeneracy, it is natural to extend the concept of type-II degeneracy to scenarios involving continuous probability distributions. Specifically, we define the generalized type-II degeneracy so that
$$(\exists\ \text{Generalized type-II degeneracy}): \quad P(\Delta\omega = 0) \neq 0\,. \qquad (4.22)$$
Crucially, even though the complexity of systems satisfying (4.22) resembles that of systems with exact degeneracy, generalized degeneracies do not lower the dimension of the Krylov space.
One important remark about RMT with $N > 2$ is in order: it becomes more likely to encounter generalized type-II degeneracies as $N$ increases. This can be understood as follows. First, consider all $\Delta\omega$ that are non-degenerate, i.e., combinations $\omega_{ab} - \omega_{cd}$ that reduce to a difference of two distinct eigenvalues; here we employed (4.12) and the fact that RMT exhibits level repulsion in the energy eigenvalues, $E_i \neq E_j$. Counting these cases, the number of $\Delta\omega_{\text{non-deg}}$ is
$$2N(N-1)(N-2) + N(N-1) = N(2N^2 - 5N + 3)\,,$$
while the total number $\Delta\omega_{\text{total}}$ is
$$N(N-1)\big(N(N-1) - 1\big) = N^4 - 2N^3 + N\,.$$
Consequently, the number of $\Delta\omega$ that can be (nearly) degenerate is
$$\Delta\omega_{\text{total}} - \Delta\omega_{\text{non-deg}} = N(N-1)^2(N-2)\,,$$
indicating that, as we increase the size of the Hilbert space, a higher number of generalized type-II degeneracies is likely. Our numerical results support this observation: as we increased $N$, the peak observed in $\overline{C}_K$ diminished. This loss of sensitivity to quantum chaos could then be attributed to the growing prevalence of type-II degeneracies.
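The counting above can be verified by brute force, classifying each ordered pair of gaps $(a,b) \neq (c,d)$ by whether $\omega_{ab} - \omega_{cd}$ structurally reduces to a difference of two distinct eigenvalues (and hence cannot vanish under level repulsion):

```python
from itertools import product

def gap_pair_counts(N):
    """Count ordered pairs of distinct gaps (a,b) != (c,d), and how many are
    structurally non-degenerate (nonzero for any repulsive spectrum)."""
    total = nondeg = 0
    for a, b, c, d in product(range(N), repeat=4):
        if a == b or c == d or (a, b) == (c, d):
            continue
        total += 1
        # omega_ab - omega_cd = E_a - E_b - E_c + E_d reduces to a difference
        # of two distinct eigenvalues exactly in these index patterns:
        #   a == c  ->  E_d - E_b;  b == d  ->  E_a - E_c;  (c,d) == (b,a) -> 2(E_a - E_b)
        if a == c or b == d or (a, b) == (d, c):
            nondeg += 1
    return total, nondeg

for N in range(3, 7):
    total, nondeg = gap_pair_counts(N)
    print(N, total, nondeg, total - nondeg)
    # total   = N^4 - 2N^3 + N;  nondeg = N(2N^2 - 5N + 3)
    # degenerate candidates = N(N-1)^2 (N-2), growing like N^4
```

The degenerate-candidate fraction approaches 1 at large $N$, consistent with the growing prevalence of generalized type-II degeneracies.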
Spread complexity and degeneracy. A natural question is whether Spread complexity is also sensitive to (nearly) degeneracies in the spectrum. To analyze this, we can explore the dimension of the subspace relevant for the Spread complexity $C_S$, following the same logic outlined for Krylov complexity. For the Spread complexity [15], the equations analogous to (4.13)-(4.14) are obtained from
$$|\psi\rangle = \sum_{a=1}^{D_H} c_a\, |E_a\rangle\,, \qquad H^n |\psi\rangle = \sum_{a=1}^{D_H} E_a^{\,n}\, c_a\, |E_a\rangle\,, \qquad (4.25)$$
where $\mathcal{L} = H$. The dimension of the Spread subspace, denoted $D_S$, then satisfies $D_S \le D_H$. Furthermore, similar to (4.18), the second equation in (4.25) can be rewritten using the fact that the spectrum may contain standard (or type-I) degeneracies, $E_a = E_b$ for $a \neq b$. Assuming that the number of such degeneracies is $m$, this leads to the reduction $D_S \le D_H - m$.

A couple of comments are in order. First, note that, by definition, $P(\Delta E = 0) \neq 0$ for the IID ensemble, as shown in Fig. 7, so there exists generalized degeneracy in this case. We suspect this is the reason why there is no peak in the Spread complexity of IID, which goes hand in hand with the conjecture relating the peak to quantum chaos. Second, note that if there exists generalized degeneracy ($\Delta E \approx 0$), there exists generalized type-II degeneracy as well ($\Delta\omega \approx 0$), which explains why there is no peak in the Krylov complexity of IID. Notably, random matrix theories (GUE, GOE, and GSE) do not exhibit generalized (type-I) degeneracies due to the level repulsion between energy eigenvalues, which explains the presence of the peak in Spread complexity for chaotic RMT with $N \geq 2$.
Summarising, we argued that the vanishing of the peak in Krylov complexity may be attributed to the existence of approximate type-II degeneracies in RMT. In contrast, the persistence of the peak in Spread complexity may be attributed to the absence of standard degeneracies in RMT. These observations help elucidate the qualitative distinctions between $C_K$ and $C_S$ for $N > 2$ in the intermediate time regime. However, it will be interesting to better understand other scenarios for explaining the decreasing peak in the evolution of Krylov complexity and their compatibility with the analysis of this section. For instance, [17] noticed that the diminishing of the peak is also correlated with differences in variances for different Dyson indices $D$ of random matrix models. Intuitively, this may be related to introducing extra noise in the Lanczos spectrum when focusing on the Liouvillian for $\rho(t) = |\psi(t)\rangle\langle\psi(t)|$ instead of just the Hamiltonian for a single state $|\psi(t)\rangle$.

Comments on averaging
In this section we discuss some subtleties in taking the average at different stages of the computation of the Krylov complexity of a pure density matrix. This is clearly relevant when we are interested in understanding wormhole contributions to the spectral form factor from the perspective of complexity. We start with a simple toy model with two energies $E_1$ and $E_2$ and partition function
$$Z(\beta) = e^{-\beta E_1} + e^{-\beta E_2}\,.$$
As we argued before, when computing the Krylov complexity of $\rho(t)$ for the evolution of the TFD state, the return amplitude is given by the spectral form factor itself. Let us first compute everything for a general choice of $E_1$ and $E_2$. The return amplitude becomes
$$S(t) = \frac{|Z(\beta+it)|^2}{Z(\beta)^2} = \frac{e^{-2\beta E_1} + e^{-2\beta E_2} + 2\,e^{-\beta(E_1+E_2)}\cos(\Delta E\, t)}{\big(e^{-\beta E_1} + e^{-\beta E_2}\big)^2}\,,$$
where $\Delta E = E_1 - E_2$. The return amplitude is clearly periodic with period $2\pi/\Delta E$. From this expression we only get two non-trivial Lanczos coefficients; all the others vanish. Expanding the state, we find three solutions of (2.7), and the resulting Krylov complexity reproduces our results for the $N = 2$ RMT; of course, the averaged relation (4.38) holds. Interestingly, averaging the completely periodic spectral form factor turns it into that of the RMT, with the famous slope-dip-ramp-plateau structure. Of course, the time scales of these features are very short here: e.g., the dip occurs at $t_d = \sqrt{3}$ (i.e., $t_d = \sqrt{K}$ where $K = D_H^2 - D_H + 1$), and the SFF already becomes constant, reaching the plateau (with value $1/2$), around twice this time (see Fig. 8).

Next, we may consider an alternative procedure: first take the average spectral form factor, and then compute from it the moments and Lanczos coefficients to finally evaluate the Krylov complexity. It may be natural to expect that this is a reasonable approach, but we can see that this is not always the case. Indeed, the averaged return amplitude can be written in terms of the even moments $\mu_{2k}$ only,
$$\bar S(t) = \sum_{k=0}^{\infty} \frac{(-1)^k\, t^{2k}}{(2k)!}\, \mu_{2k}\,,$$
and this gives rise to an infinite number of "staggered" Lanczos coefficients in which even and odd $b_n$ grow differently (similarly to the coefficients found in [44,50]). Already at this point we see that taking the naive average at the level of the return amplitude changes the dimension of the Krylov subspace to infinity. Moreover, with these $b_n$ we can in principle find solutions of the Schrödinger equation. In fact, for odd $n = 2k+1$ the wave functions take a simple form, whereas the even wave functions with $n = 2k$ appear much more complicated; nevertheless, using the Schrödinger equation, we can express them via derivatives of the odd ones. Unfortunately, we were not able to evaluate the Krylov complexity analytically from these expressions (i.e., perform the sum to $n = \infty$; we leave this as an interesting future problem). Nevertheless, we can use these wave functions to compute the answer for an arbitrarily large cut-off $\Lambda$ on $n$, which we plot in Fig. 10.

Observe that after the initial quadratic growth, the complexity starts growing linearly. Since the dimension of the Krylov basis is not bounded, we do not expect saturation, and this growth is likely to persist when the cut-off is removed. On the other hand, we can see that the peaks of the complexity are cut-off effects: they take place exactly at the times when the probability sum deviates from 1. The second peak and plateau at later times are more mysterious, and it would be interesting to investigate them, e.g., in random matrix models, after performing the average at the level of the return amplitude. Concluding this section, we stress again that a naive average at the level of the return amplitude, however tempting and interesting from the gravity and wormholes perspective, may be quite subtle and can lead to wrong conclusions. Nevertheless, it will be interesting to explore it further in JT gravity or in a large-$N$ matrix model, with the aim of better understanding how (or whether) wormhole contributions to the return amplitude (spectral form factor) can be deduced from the scaling of the Lanczos coefficients or the growth of Krylov complexity. We leave this as another important future problem. In addition, we have mostly focused on the $\beta = 0$ example; understanding finite-temperature effects on the Lanczos coefficients (even in our toy model) would also be interesting.
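The two averaging orders can be contrasted numerically in the toy model. The sketch below assumes the $\beta = 0$ form $\mathrm{SFF}(t) = \cos^2(\Delta E\, t/2)$ for two levels and averages it over $N = 2$ GUE spacings; the ensemble normalization is illustrative, so the dip time differs from $t_d = \sqrt{3}$ by an overall scale:

```python
import numpy as np

rng = np.random.default_rng(3)
ts = np.linspace(0, 10, 500)

# Fixed two-level system: SFF is periodic with period 2*pi/DeltaE
dE = 2.0
sff_fixed = np.cos(dE * ts / 2) ** 2

# GUE-averaged SFF over N = 2 spacings (illustrative normalization)
A = rng.normal(size=(20_000, 2, 2)) + 1j * rng.normal(size=(20_000, 2, 2))
E = np.linalg.eigvalsh((A + A.conj().transpose(0, 2, 1)) / 2)
s = E[:, 1] - E[:, 0]
sff_avg = np.array([np.mean(np.cos(s * t / 2) ** 2) for t in ts])

# Averaging washes out the periodicity: slope, dip, then a plateau at 1/2
print(sff_avg.min(), sff_avg[-1])
```

The plateau value $1/2$ is convention-independent here: at late times $\langle\cos(s\,t)\rangle \to 0$ for any continuous spacing distribution, leaving $\langle\cos^2(s\,t/2)\rangle \to 1/2$.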

Formal approach
In this last section we finally take a slightly more abstract approach and ask how the Krylov basis for the pure density matrix operator $\rho(t) = |\psi(t)\rangle\langle\psi(t)|$ is related to the one computed for the state $|\psi(t)\rangle$. We will then illustrate our arguments with the harmonic oscillator example. Firstly, recall that we may expand the quantum state in the Krylov basis,
$$|\psi(t)\rangle = \sum_n \psi_n(t)\, |K_n\rangle\,.$$
The Krylov basis $|K_n\rangle$ is constructed via the Lanczos algorithm, and the sets of Lanczos coefficients $\{a_n\}$ and $\{b_n\}$ can be extracted from the moments $\mu_n$ of the return amplitude $S(t) = \langle\psi(t)|\psi(0)\rangle$. Moreover, the coefficients of the expansion satisfy the Schrödinger equation
$$i\,\partial_t \psi_n(t) = a_n\, \psi_n(t) + b_n\, \psi_{n-1}(t) + b_{n+1}\, \psi_{n+1}(t)\,.$$
Let us assume that we have solved this problem and know all the ingredients above. Then, in this work, we are interested in the density matrix operator $\rho(t) = |\psi(t)\rangle\langle\psi(t)|$, which evolves according to the Liouville-von Neumann equation and, after choosing our inner product (2.5), can be turned into a state in the Hilbert space $\mathcal{H} \otimes \mathcal{H}$:
$$\rho(t) = e^{-iHt}\, \rho(0)\, e^{iHt} \iff |\rho(t)) = e^{-i\mathcal{L}t}\, |\rho(0))\,. \qquad (5.4)$$
In general, these hints may not simplify the task of evaluating the Krylov complexity of $\rho(t)$ too significantly (and we may simply have to compute it directly), but we will illustrate these steps below for the Hamiltonian constructed from the Heisenberg-Weyl algebra generators.
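As a warm-up for the oscillator example below, the state-level Lanczos machinery can be checked numerically. The Hamiltonian $H = \omega\, n + \lambda(a + a^\dagger)$ used here is an assumed representative of the Heisenberg-Weyl construction (the precise $H$ of the text is in an elided display); for this choice, the Krylov basis coincides with the number basis, with $a_n = \omega n$ and $b_n = \lambda\sqrt{n}$:

```python
import numpy as np

dim, om, lam = 60, 0.5, 1.0            # truncation and assumed couplings
n_vals = np.arange(dim)
a = np.diag(np.sqrt(n_vals[1:].astype(float)), k=1)        # annihilation operator
H = om * np.diag(n_vals.astype(float)) + lam * (a + a.T)   # H = om*n + lam*(a + a^dag)

# Lanczos from the Fock vacuum |0>; the Krylov basis is the number basis
v = np.zeros(dim); v[0] = 1.0
basis, bs = [v], []
w = H @ v
for _ in range(10):
    for q in basis:                    # full reorthogonalization
        w = w - (q @ w) * q
    b = np.linalg.norm(w)
    basis.append(w / b)
    bs.append(b)
    w = H @ basis[-1]

a_n = np.array([q @ H @ q for q in basis])   # diagonal Lanczos coefficients
```

The truncation `dim` only needs to exceed the number of Lanczos steps comfortably; the extracted coefficients then match the algebraic reading $b_n = \lambda\sqrt{n}$, $a_n = \omega n$ to machine precision.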
We expect an elegant generalisation of these steps to general semi-simple Lie groups and their discrete series representations (and leave this for future work).

Toy model: harmonic oscillator
As an example, we consider a state evolving under a Hamiltonian built from the Heisenberg-Weyl generators where, without loss of generality, we let $\lambda \in \mathbb{R}$; here $n = a^\dagger a$ is the number operator, $a$ the annihilation operator, and $a^\dagger$ the creation operator, satisfying $[a, a^\dagger] = 1$. The Krylov basis in this case is simply the algebra basis (see [33]), and from the action of $H$ we can simply read off the Lanczos coefficients. They are encoded in the return amplitude, and the solutions of the Schrödinger equation read as in (5.41). To derive more explicit results, we express the ladder operators in terms of the creation and annihilation operators of the harmonic oscillator. Starting from the Krylov basis, we can series expand it so that every term contains a product of $k$ $\alpha_-^\dagger$'s and $(l-k)$ $\alpha_+^\dagger \alpha_-$'s acting on $|\rho_0)$. The normalisation factor $\mathcal{N}$ can then be written as in (5.45).

Note that the Thermofield Double (TFD) state is particularly intriguing. The utilization of this state in Ref. [15] led the authors to unveil new features in $C_S$, which were conjectured to represent universal characteristics of chaotic systems. The first finding listed in Result II is noteworthy for two reasons. Firstly, it is analytically verified and holds true for any $N$-dimensional Hilbert space. Secondly, as discussed in the introduction, averaging over the return amplitude in theories dual to gravity may include contributions from gravitational wormholes in the bulk. This connection is relevant to investigations of wormhole contributions to the Spectral Form Factor (SFF) from a complexity perspective.
The second finding listed in Result II, concerning the $N = 2$ case, is also derived from scenarios where analytical examination is feasible: the general two-dimensional Hilbert space and qubit states undergoing modular Hamiltonian evolution. In these models, we also investigated the $C_K$ associated with the density matrix of a general mixed state. Interestingly, we consistently observe that the $C_K$ of a pure state surpasses that of a generic mixed state, aligning with the findings of circuit complexity studies [77].
The analysis for $N = 2$ was also extended to higher-dimensional Hilbert spaces ($N > 2$), leading to our third finding listed in Result II. For this extension we focused on random matrix theories (RMT), where the Spread complexity $C_S$ of the Thermofield Double (TFD) state has emerged as a reliable probe of quantum chaos [15].
Given the inherent randomness in random matrix theories, we introduced the averaged $C_K$ (and $C_S$) using the probability distribution (4.2), which is equivalent to the arithmetic mean (4.4) when the total number of iterations (drawn from specific ensembles) is sufficiently high. Through numerical computations up to $N = 10$, we found that $C_K$ approximately equals $N C_S$ at late times. Additionally, we observed that in the intermediate time regime $C_K$ can deviate from $C_S$, with $C_K$ lacking a discernible peak, unlike $C_S$. We speculated that the absence of this peak in $C_K$ could be related to the manifestation of generalized type-II degeneracy (4.22).
Regarding the averaging process, we also discussed some subtleties associated with averaging at different computational stages for C K .We explicitly demonstrated that a naive averaging approach at the level of the return amplitude could be subtle and potentially lead to erroneous conclusions.
Last but not least, in the context of (inverted) harmonic oscillators, we conducted analytical computations of both $C_K$ and $C_S$ as a toy example of an infinite-dimensional Hilbert space ($N \to \infty$). We observed a curious relation at late times, $C_K = C_S^2$, suggesting the need for additional analysis to understand their correlation.
From a broader perspective, it is worth recalling that the exploration of complexity and chaos has played a pivotal role in recent developments of the AdS/CFT correspondence and the study of black holes. Various conjectures suggest that the complexity of holographic quantum systems may offer insights into the black hole interior [8][9][10][11][12], and potentially shed light on the emergence of spacetime [89][90][91][92][93][94]. In this context, a pressing concern arises from the ambiguity resulting from the various proposals defining complexity. In contrast, Krylov (and Spread) complexity provides a clear and unambiguous definition. This leads to a critical question: can Krylov complexity effectively capture the growth of a black hole interior? Relevant investigations in this direction can be found in [95]. From this standpoint, comprehending the relationship between complexities (such as conventional computational complexity, Krylov complexity, and Spread complexity) emerges as a significant consideration. While proposals exist for the gravity dual interpretation of conventional circuit complexity, recent inquiries have also explored a dual interpretation of Krylov complexity [22]. We anticipate that our examination of the relationship (using the density matrix operator) between Krylov complexity, Spread complexity, and the Spectral Form Factor could not only contribute to refining definitions of quantum chaos and complexity, but also offer insights into the mystery of black hole interiors and the emergence of spacetime.

Figure 1 .
Figure 1. $C_K^{\rm Max}$ in (3.13), where $A_{22} = 1 - A_{11}$. The yellow surface represents the mixed-state case, while the blue line corresponds to $C_K^{\rm Max}$ at $A_{12}^2 + B_{12}^2 = A_{11}A_{22}$, indicating a pure state.

Figure 2 .
Figure 2. Numerical results for quantum harmonic oscillators with $(\beta, \omega) = (\pi/2, 3)$. In the right panel, $C_K$ is depicted by the black line, while $C_S$ is the blue line.

Figure 3 .
Figure 3. Lanczos coefficients for the inverted harmonic oscillator with $\Omega = 3$. The black dots represent the numerical data and the blue solid line is the analytic result (3.48).

Figure 4 .
Figure 4. The average Krylov complexity $\overline{C}_K$ is illustrated in the left panel for the various classes: GUE (red), GOE (green), GSE (blue), and IID (black), at $\beta = 0$, as per (4.8). The right panel displays the $\beta$-dependence of $\overline{C}_K$ for GUE, with $\beta = 0, 1, 2$ (red, orange, yellow); this data is computed numerically with $N_{\rm it} = 10^4$. Note the consistency between the red (analytical) curve on the left and the red (numerical) curve on the right panel.

Figure 8 .
Figure 8. Comparison of the SFF (blue, for $\Delta E = 2$) in our toy model and its GUE average (orange) for $\beta = 0$. The dip occurs at $t_d = \sqrt{3}$ and the value of the plateau is $1/2$.

Figure 9 .
Figure 9. Numerical plot of the even and odd branches of the $b_n$ Lanczos coefficients obtained from the averaged return amplitude.
employed in the second equality. For simplicity, we adopt $\hbar = 1$ hereafter. Several observations are in order. Firstly, the Liouville-von Neumann equation (2.2) is evidently applicable to pure states as well. Secondly, though (2.2) bears resemblance to the Heisenberg equation, a crucial sign distinction exists; caution is warranted to prevent misconstruing this similarity. The Heisenberg equation operates within the Heisenberg picture, where states remain time-independent and operators evolve in time. In contrast, (2.2) adheres to the Schrödinger picture, where the states (within the density matrix) carry the time dependence and the Hamiltonian $H$ remains unaltered in time. Lastly, (2.2) constitutes the quantum-mechanical analogue of the classical Liouville equation, which, in classical mechanics, gives rise to a significant result known as Liouville's theorem. Krylov basis and Lanczos algorithm. The Liouville-von Neumann equation (2.2) can be formally solved by

$C_K^{\rm Max}$ at the upper bound of $A_{12}^2 + B_{12}^2$, which is $A_{11}A_{22}$. This analysis demonstrates that $C_K^{\rm Max}$, at fixed $A_{11}$, attains its highest value when $A_{12}^2 + B_{12}^2 = A_{11}A_{22}$. Since $C_K^{\rm Max}$ in (3.13) is a monotonically increasing function in the region $A_{12}^2 + B_{12}^2 \le A_{11}A_{22}$, it reaches its maximum value when $A_{12}^2 + B_{12}^2 = A_{11}A_{22}$, which is supported by the numerical verification in Fig. 1, detailed below. It is imperative to note that $C_K^{\rm Max}$ in (3.13) is a function of two quantities; however, utilizing the first condition in (3.3), $A_{22} = 1 - A_{11}$, we can express $C_K^{\rm Max}$ in terms of $A_{12}^2 + B_{12}^2$ and $A_{11}$. In Fig. 1, we plot $C_K^{\rm Max}$ in terms of these variables, where the blue line denotes $C_K^{\rm Max}$