The growth of operator entropy in operator growth

We study upper bounds on the growth of the operator entropy S_K in operator growth. Using the uncertainty relation, we first prove a dispersion bound on the growth rate, |∂_t S_K| ≤ 2b_1 ∆S_K, where b_1 is the first Lanczos coefficient and ∆S_K is the variance of S_K. However, for irreversible processes, this bound generally turns out to be too loose at long times. We then find a tighter bound in the long time limit using a universal logarithmic relation between Krylov complexity and operator entropy. The new bound describes the long time behavior of operator entropy very well for physically interesting cases, such as chaotic systems and integrable models.


Introduction
The Krylov complexity (or K-complexity) [1] and the operator entropy (or K-entropy) [2] were introduced to describe the Heisenberg evolution of operators O(t) = e^{iHt} O_0 e^{−iHt}, where O_0 is an initial operator and H is the Hamiltonian. In particular, for complex systems, an initially simple operator O_0 irreversibly grows into a complex one, with its size blowing up exponentially. It is widely believed that a statistical description should emerge in this process and that some universal features may be captured by K-complexity and K-entropy.
The two notions have attracted a lot of attention in the literature [3][4][5][6][7][8][9][10][11][12]. In particular, it was established in [10] that for general irreversible processes, the two quantities enjoy a universal logarithmic relation to leading order at long times: S_K = η ln K + · · ·, where 0 < η ≤ 1 and η = 1 corresponds to chaotic systems.¹ On the other hand, it was proved in [11] that the growth rate of K-complexity in operator growth is upper bounded as |∂_t K| ≤ 2b_1 ∆K, where b_1 is the first Lanczos coefficient and ∆K is the variance of K-complexity. This is referred to as the dispersion bound [11]. It inspires us to search for a similar bound for K-entropy and to examine the influence of the above logarithmic relation on the growth of K-entropy.

¹ By chaotic systems, we mean systems whose Lanczos coefficients grow linearly asymptotically, b_n → αn + γ, except in the 1d case, where there is a logarithmic correction b_n → αn/ln n. However, the authors of [6] showed that for free field theories at finite temperature, which do not probe chaos dynamically, the Lanczos coefficients grow linearly asymptotically as well. This raises the question of whether the asymptotic behavior of the Lanczos coefficients appropriately characterizes the dynamical properties of Hamiltonian systems. Nonetheless, this does not affect our discussions and results.

JHEP08(2022)232
The paper is organized as follows. In section 2, we briefly review the recursion method and the Lanczos algorithm. In section 3, we prove an upper bound on the growth of general A-complexity defined in the Krylov space. We show that saturation of the bound demands that the A-complexity operator be linearly related to the K-complexity operator. In section 4, we study the upper bound on the growth of K-entropy. Using the continuum limit, we establish that the dispersion bound on K-entropy is rarely saturated. Instead, for irreversible processes, the logarithmic relation between K-complexity and K-entropy implies a tighter bound in the long time limit. We test the new bound using a number of numerical examples. We conclude in section 5.

The recursion method and the Lanczos algorithm
The Heisenberg evolution of an operator can be expanded as

O(t) = e^{iHt} O_0 e^{−iHt} = Σ_{n≥0} (it)^n/n! Õ_n ,

where Õ_n stands for the nested commutators Õ_n = [H, [H, · · · , [H, O_0] · · · ]] (n commutators). However, direct evaluation of these commutators is sometimes very difficult. It is helpful to treat the operator as a wave function, which evolves under the Liouvillian L ≡ [H, ·]. One has

|O(t)) = e^{iLt}|O_0) = Σ_{n≥0} (it)^n/n! |Õ_n) ,

where |Õ_n) = L^n|O_0). To proceed, one needs to introduce an inner product (A|B) on the operator Hilbert space, satisfying (A|B) = (B|A)*, (A|aB) = a(A|B) and (A|A) ≥ 0, where a is a complex number. In addition, the Liouvillian should be Hermitian under the inner product, (A|LB) = (LA|B). It is clear that the choice of inner product greatly influences the outcome of the recursion method, such as the behavior of the Lanczos coefficients b_n introduced below. However, for our purposes in this paper, we do not need to specify a particular choice (when we discuss particular Hamiltonian systems, the inner product has generally already been specified in the literature, though we will not mention it explicitly). The interested reader is referred to [13] for more details on the choice of inner product. The physical information about operator growth is essentially encoded in the auto-correlation function

C(t) ≡ (O_0|O(t)) .

In general, the original operator basis {|Õ_n)} is not orthogonal. To study the operator dynamics in an orthonormal basis, one adopts the Gram-Schmidt scheme, starting with a normalized vector |O_0). The first vector is given by

|O_1) = b_1^{−1} L|O_0) , b_1 = (LO_0|LO_0)^{1/2} .

For the n-th vector, one has inductively

|A_n) = L|O_{n−1}) − b_{n−1}|O_{n−2}) , b_n = (A_n|A_n)^{1/2} , |O_n) = b_n^{−1}|A_n) .

The output of this procedure is a set of orthonormal vectors {|O_n)}, referred to as the Krylov basis, and a sequence of positive numbers {b_n}, referred to as the Lanczos coefficients. These coefficients have units of energy and can be used to measure time in the Heisenberg evolution of operators. In the Krylov basis, the evolution of the operator O(t) can be formally written as

|O(t)) = Σ_n i^n ϕ_n(t)|O_n) ,

where {ϕ_n(t)} is a discrete set of (real) wave functions and p_n ≡ ϕ_n² can be interpreted as probabilities.
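As a concrete cross-check of the procedure above, the Lanczos iteration can be run numerically. The following is a minimal sketch (our own, not from the paper): it tridiagonalizes a given Hermitian Liouvillian, supplied as a plain nested list, starting from a normalized initial vector, and returns the Lanczos coefficients b_n. The function name, the full re-orthogonalization step, and the 1e-12 breakdown tolerance are our choices.

```python
import math

def lanczos(Lmat, O0, nmax):
    """Tridiagonalize the Hermitian 'Liouvillian' matrix Lmat starting from
    the vector O0 (normalized internally); returns the Lanczos coefficients b_n."""
    dim = len(O0)
    mv = lambda v: [sum(Lmat[i][j] * v[j] for j in range(dim)) for i in range(dim)]
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    norm = lambda v: math.sqrt(dot(v, v))
    n0 = norm(O0)
    basis = [[x / n0 for x in O0]]
    bs = []
    prev, b_prev = [0.0] * dim, 0.0
    for _ in range(nmax):
        # |A_n) = L|O_{n-1}) - b_{n-1}|O_{n-2})
        A = mv(basis[-1])
        A = [A[i] - b_prev * prev[i] for i in range(dim)]
        # full re-orthogonalization against all previous vectors, for stability
        for q in basis:
            c = dot(q, A)
            A = [A[i] - c * q[i] for i in range(dim)]
        b = norm(A)
        if b < 1e-12:          # Krylov space exhausted
            break
        bs.append(b)
        prev, b_prev = basis[-1], b
        basis.append([x / b for x in A])
    return bs
```

For a matrix that is already tridiagonal in an orthonormal basis, the iteration simply reads off the subdiagonal, which makes for an easy sanity check.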
One has the normalization Σ_n p_n(t) = 1. The Heisenberg evolution of O(t) gives rise to a discrete set of equations

∂_t ϕ_n = b_n ϕ_{n−1} − b_{n+1} ϕ_{n+1} , (2.8)

subject to the boundary condition ϕ_n(0) = δ_{n0}, with b_0 = 0 = ϕ_{−1}(t) by convention. This uniquely determines the wave functions ϕ_n(t) for a given set of Lanczos coefficients. Since the auto-correlation function is simply C(t) = ϕ_0(t), it is immediately seen that the physical information encoded in C(t) can be equivalently extracted from the Lanczos coefficients, though there is no simple transformation between them. The Krylov complexity (K-complexity) and the Krylov entropy (K-entropy) are defined as

K ≡ Σ_n n p_n , S_K ≡ −Σ_n p_n ln p_n = Σ_n p_n S_n ,

where S_n ≡ −ln ϕ_n². Generalisations of K-complexity are also interesting; for example, the K-complexity of degree i is defined as

K_i ≡ Σ_n n^i p_n . (2.11)

These quantities roughly play the same role as K-complexity in characterizing operator growth. However, beyond its simplicity, the definition of K-complexity is special in the sense that it is deeply connected to the complexity algebra, which characterizes the symmetry of the Heisenberg evolution [11]. We will study this in the next section. Here we simply remind the reader that if and only if the complexity algebra is closed, the operator wave functions can be solved exactly and the dispersion bound on the growth rate of K-complexity is saturated [11].
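For a given set of Lanczos coefficients, the discrete equations above are straightforward to integrate numerically. Below is a hedged sketch (our own code, not the paper's): it evolves the truncated Krylov chain with a fixed-step RK4 integrator and evaluates K and S_K from the probabilities p_n. The truncation size, step size, and function names are our choices; for b_n = ωn the exact results K = sinh²(ωt) and the entropy in (4.2) provide a check.

```python
import math

def evolve(bs, T, dt=1e-3):
    """RK4 integration of d(phi_n)/dt = b_n phi_{n-1} - b_{n+1} phi_{n+1}
    on a truncated chain (bs[n] = b_{n+1}); returns probabilities p_n = phi_n^2."""
    N = len(bs)                 # chain sites 0..N
    phi = [0.0] * (N + 1)
    phi[0] = 1.0                # boundary condition phi_n(0) = delta_{n0}
    def rhs(p):
        return [(bs[n - 1] * p[n - 1] if n > 0 else 0.0)
                - (bs[n] * p[n + 1] if n < N else 0.0)
                for n in range(N + 1)]
    for _ in range(int(round(T / dt))):
        k1 = rhs(phi)
        k2 = rhs([phi[i] + 0.5 * dt * k1[i] for i in range(N + 1)])
        k3 = rhs([phi[i] + 0.5 * dt * k2[i] for i in range(N + 1)])
        k4 = rhs([phi[i] + dt * k3[i] for i in range(N + 1)])
        phi = [phi[i] + dt * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) / 6
               for i in range(N + 1)]
    return [x * x for x in phi]

def k_complexity(p):
    return sum(n * pn for n, pn in enumerate(p))

def k_entropy(p):
    return -sum(pn * math.log(pn) for pn in p if pn > 1e-300)
```

The truncation must be large enough that the wave packet never reaches the end of the chain during the integration window.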

Upper bound on the growth of A-complexity
In [11], the authors first proved an upper bound on the growth rate of K-complexity, referred to as the dispersion bound. It was shown that saturation of the bound is equivalent to closure of the complexity algebra, which is spanned by

L_+ ≡ Σ_n b_{n+1}|O_{n+1})(O_n| , L_− ≡ Σ_n b_{n+1}|O_n)(O_{n+1}| , B̃ ≡ [L_+, L_−] .

Notice that the Liouvillian is L = L_+ + L_−. In fact, only when the complexity algebra is closed can the operator wave functions ϕ_n be solved exactly [3], and hence these cases provide interesting examples for the study of operator growth. In this section, we would like to extend the result of [11] to general A-complexity defined in the Krylov basis,

A ≡ Σ_n A_n |O_n)(O_n| , A(t) ≡ (O(t)|A|O(t)) = Σ_n A_n p_n ,

where A is a self-adjoint superoperator. We demand that A_n be either a polynomial in n, independent of time, or A_n = S_n when it depends on time. Clearly K-complexity and K-entropy are included in our notion of A-complexity. In the following, we generalise the dispersion bound of K-complexity to this entire class of complexities. First, by straightforward calculation, we deduce

(O(t)|[A, L]|O(t)) = −i Σ_n A_n ṗ_n ,

where a dot denotes the derivative with respect to the canonical time coordinate. This gives

|(O(t)|[A, L]|O(t))| = |Σ_n A_n ṗ_n| .

On the other hand, according to the definition, the growth rate of A-complexity is given by

∂_t A(t) = Σ_n [A_n ṗ_n + Ȧ_n p_n] .

If A_n is independent of time, the second term in the bracket vanishes, whereas if A_n = S_n, this term still does not contribute because Σ_n Ȧ_n p_n = −Σ_n ṗ_n = 0. In both cases, we arrive at

|∂_t A| = |(O(t)|[A, L]|O(t))| .

In fact, this result can be derived more directly by treating the operator space as an ordinary Hilbert space evolving under the "Hamiltonian" L. Using the density operator ρ ≡ |O(t))(O(t)| and ⟨A⟩ = Tr(ρA), one finds

∂_t⟨A⟩ = Tr(ρ̇ A) + Tr(ρ Ȧ) = i Tr(ρ[A, L]) + ⟨Ȧ⟩ ,

where in the second equality we have adopted the Liouville equation ρ̇ = i[L, ρ].

With this result in hand, adopting the Robertson uncertainty relation 2∆A ∆L ≥ |⟨[A, L]⟩|, we obtain

|∂_t A| ≤ 2∆A ∆L = 2b_1 ∆A , (3.8)

where ∆A = √(⟨A²⟩ − ⟨A⟩²) stands for the dispersion of A with respect to the state |O(t)) and ∆L = b_1. Notice that if the Krylov space is infinite dimensional, the state must lie in the intersection of the domains of AL and LA, otherwise the bound may not hold [14]. To avoid confusion with the uncertainty relation for observables, the bound is referred to as the dispersion bound in [11], and we shall follow this convention.
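To illustrate the bound (3.8), one can evaluate both sides for the exactly solvable model b_n = n (in units ω = 1), whose Krylov probabilities form a geometric distribution p_n = tanh^{2n}(t)/cosh²(t). The sketch below is ours, not the paper's; the finite-difference step and the truncation N are arbitrary choices. It computes |∂_t A| by central differences and compares it with 2b_1 ∆A for a chosen profile A_n:

```python
import math

def probs(t, N=400):
    """Exact Krylov probabilities of the b_n = n model: p_n = tanh^{2n}(t)/cosh^2(t)."""
    q = math.tanh(t) ** 2
    return [(1 - q) * q ** n for n in range(N)]

def expval(p, A):
    return sum(pn * A(n) for n, pn in enumerate(p))

def check_bound(A, t, b1=1.0, h=1e-5):
    """Return (|dA/dt|, 2 b_1 Delta A) for the observable A_n = A(n)."""
    rate = abs(expval(probs(t + h), A) - expval(probs(t - h), A)) / (2 * h)
    p = probs(t)
    var = expval(p, lambda n: A(n) ** 2) - expval(p, A) ** 2
    return rate, 2 * b1 * math.sqrt(var)
```

A linear profile A_n = n saturates the bound, while a nonlinear profile such as A_n = n² obeys it strictly, in line with the saturation condition derived below.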
Several comments are in order: • The bound is valid for any kind of A-complexity, including K-complexity and K-entropy. These two cases will be studied carefully later.
• The initial time t = 0 is an extremal point, at which the bound is always saturated for any A-complexity.
• A geometrical interpretation of the saturation of the bound was given for K-complexity [11]: the projective Krylov space can be identified with the set of rank-one orthogonal projections on the Krylov space, and the bound is saturated if and only if the corresponding curve moves along the gradient of the Krylov complexity, meaning the dynamics is directed in the direction that maximizes the local growth of K-complexity. This interpretation generalises straightforwardly to any kind of A-complexity.
• There are several cases in which the complexity algebra is closed. It was proved in [11] that only in these cases is the dispersion bound for K-complexity saturated. Later, we will show that for general A-complexity, saturation of the dispersion bound is possible if and only if A is linearly related to K.
• One cannot obtain a tighter bound using the generalised uncertainty relation because ⟨{A, L}⟩ = 0, see (3.3). However, this does not exclude the possibility that in certain cases (such as irreversible processes), the growth rate of A-complexity is bounded more tightly in some time regimes. We will study this in detail for K-entropy in the next section.

Conditions for saturation of the dispersion bound
A priori it is not clear whether the dispersion bound on A-complexity can be saturated during the unitary evolution of operators. However, since the uncertainty relation is obtained by applying the Cauchy-Schwarz inequality to the two vectors (A − ⟨A⟩)|O(t)) and (L − ⟨L⟩)|O(t)), saturation of the bound is equivalent to these two vectors being linearly dependent. In other words, since ⟨L⟩ = 0, the vector e^{−iLt}(A − ⟨A⟩)e^{iLt}|O_0) must be linearly related to |O_1). To study this carefully, we adopt the Baker-Campbell-Hausdorff formula and find

e^{−iLt} (A − ⟨A⟩) e^{iLt} = Σ_{n≥0} (−it)^n/n! L_n − A(t) , (3.9)

where L_n stands for the nested commutators L_n = [L, [L, · · · , [L, A] · · · ]] (n commutators), with L_0 = A.

To evaluate these commutators, we introduce the matrix basis e_{m,n} = |O_m)(O_n| and represent a matrix as M = Σ_{m,n} M_{m,n} e_{m,n}. Here e_{n,n} corresponds to the diagonal elements. For later purposes, we refer to the elements associated with the basis e_{n+k,n} (or e_{n,n+k}) as the k-diagonals (or −k-diagonals) of operators. One has

e_{m,n}|O_k) = δ_{nk}|O_m) , e_{m,p} e_{q,n} = δ_{pq} e_{m,n} . (3.10)

We are now ready to compute the nested commutators L_n: • First, L_0 = A and L_0|O_0) = A_0|O_0). Similar terms proportional to |O_0) appear throughout the nested commutators L_n. In fact, if the upper bound is saturated, these terms are cancelled by the A(t) term in (3.9). This provides an alternative way to extract the A-complexity in this case; see [11] for the K-complexity case.
• Second, straightforward calculation gives

L_1 = [L, A] = Σ_n b_{n+1}(A_n − A_{n+1})(e_{n+1,n} − e_{n,n+1}) ,

so that L_1|O_0) = b_1(A_0 − A_1)|O_1), the desired term proportional to |O_1). However, evaluation of L_2 = [L, L_1] yields

L_2 = Σ_n b_{n+1} b_{n+2}(A_n − 2A_{n+1} + A_{n+2})(e_{n+2,n} + e_{n,n+2}) + · · · ,

where we have omitted the diagonal terms, which are irrelevant to our discussion. One finds L_2|O_0) = b_1 b_2 (A_0 − 2A_1 + A_2)|O_2) + · · ·. Obviously this term cannot be cancelled by the A-complexity term and hence must vanish for the upper bound to be saturated.
• As a matter of fact, the highest subdiagonal in L_n is the n-diagonal (or −n-diagonal), so that L_n|O_0) = (L_n)_{n,0}|O_n) + · · ·. Saturation of the upper bound demands that all these terms vanish. The (n + 2)-diagonals of L_{n+2} can then be deduced recursively from the (n + 1)-diagonals of L_{n+1}, where n ≥ 1 and L(n, k) ≡ (L_{n+2})_{n+k+2,k}. Solving this recurrence with the initial condition L(0, k) = g(k) b_{k+1} b_{k+2}, where g(k) ≡ A_k − 2A_{k+1} + A_{k+2}, one finds [11] that g(k) = 0 for k ≥ 1 is required to guarantee that all n-diagonals of L_n vanish. Hence, A_n must be a linear function of n: A_n = αn + β, where α, β are constants independent of n (but possibly time dependent). In other words, saturation of the upper bound for A-complexity is possible only when the A-complexity operator is linearly related to the K-complexity operator, A = αK + β. Since the bound for K-complexity is saturated if and only if the complexity algebra is closed [11], it is somewhat surprising that the algebra influences the unitary evolution of operators more broadly than expected: it constrains the growth rate of every kind of A-complexity.
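The obstruction term can be checked by brute force. The sketch below (ours, not from the paper) builds the tridiagonal L and diagonal A as plain nested lists and evaluates the matrix element (L_2)_{2,0} of L_2 = [L, [L, A]]; it vanishes for a linear profile A_n and equals b_1 b_2 (A_0 − 2A_1 + A_2) otherwise:

```python
def nested_element(A_vals, bs):
    """(L_2)_{2,0} for tridiagonal L (subdiagonal bs, zero diagonal) and
    diagonal A = diag(A_0, A_1, ...); saturation requires this to vanish."""
    N = len(A_vals)
    L = [[0.0] * N for _ in range(N)]
    for n in range(N - 1):
        L[n + 1][n] = L[n][n + 1] = bs[n]     # b_{n+1} couples sites n, n+1
    A = [[A_vals[i] if i == j else 0.0 for j in range(N)] for i in range(N)]
    def mul(X, Y):
        return [[sum(X[i][k] * Y[k][j] for k in range(N)) for j in range(N)]
                for i in range(N)]
    def sub(X, Y):
        return [[X[i][j] - Y[i][j] for j in range(N)] for i in range(N)]
    L1 = sub(mul(L, A), mul(A, L))    # L_1 = [L, A]
    L2 = sub(mul(L, L1), mul(L1, L))  # L_2 = [L, L_1]
    return L2[2][0]
```

Any linear A_n = αn + β makes the second difference, and hence the element, vanish identically.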

Saturation of the dispersion bound
We now turn to K-entropy. According to the previous discussion, saturation of the dispersion bound on K-entropy is possible only when the complexity algebra is closed, and hence the bound on K-complexity is saturated. There are only a few such cases [3,11]: the SL(2, R) algebra (the SYK model), the SU(2) algebra, and the Heisenberg-Weyl algebra. Exploring these cases, we find only two examples that saturate the K-entropy bound. The first is the evolution of the operator O_0 = σ_x under a single-qubit (two-level) Hamiltonian H = ωσ_z, where σ_i stands for the i-th Pauli matrix. This belongs to the j = 1/2 case of the SU(2) algebra. One has b_1 = ω, b_{n>1} = 0, and ϕ_0 = cos(ωt), ϕ_1 = sin(ωt). However, since the Krylov dimension is only D = 2, the dispersion bound for any A-complexity is trivially saturated.² For example, evaluation of the K-entropy yields

S_K = −cos²(ωt) ln cos²(ωt) − sin²(ωt) ln sin²(ωt) , ∆S_K = 2 sin(ωt) cos(ωt) ln cot(ωt) . (4.1)

It is easy to see that |∂_t S_K| = 2b_1 ∆S_K. The second example is a special case of the SYK model. The Lanczos coefficients are proportional to n, b_n = ωn, with the wave functions given by ϕ_n(t) = tanhⁿ(ωt)/cosh(ωt). It is easy to see that in this case the K-entropy operator is linearly related to the K-complexity operator. Explicitly, the K-entropy and its variance are given by

S_K = cosh²(ωt) ln cosh²(ωt) − sinh²(ωt) ln sinh²(ωt) , ∆S_K = 2 sinh(ωt) cosh(ωt) ln coth(ωt) . (4.2)

Again |∂_t S_K| = 2b_1 ∆S_K. It is interesting to observe that, by demanding reality and positivity, the above results for K-entropy (and K-complexity) can be obtained from the single-qubit system by taking ω → iω. This is unexpected. However, the trick does not work for general A-complexity.
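The saturation claimed in (4.1) can be verified numerically. The sketch below is our own; b_1 = ω and the finite-difference step are the only inputs. It compares |∂_t S_K| with 2b_1 ∆S_K for the two-level wave functions ϕ_0 = cos ωt, ϕ_1 = sin ωt:

```python
import math

def qubit_entropy_check(w, t):
    """K-entropy of the two-level example (phi_0 = cos wt, phi_1 = sin wt):
    returns (|dS_K/dt|, 2 b_1 Delta S_K) with b_1 = w; needs 0 < wt < pi/2."""
    def SK(tt):
        p0, p1 = math.cos(w * tt) ** 2, math.sin(w * tt) ** 2
        return -p0 * math.log(p0) - p1 * math.log(p1)
    h = 1e-6
    rate = abs(SK(t + h) - SK(t - h)) / (2 * h)
    p0, p1 = math.cos(w * t) ** 2, math.sin(w * t) ** 2
    # variance of S_n = -ln p_n over the two-outcome distribution {p_0, p_1}
    s0, s1 = -math.log(p0), -math.log(p1)
    mean = p0 * s0 + p1 * s1
    var = p0 * s0 ** 2 + p1 * s1 ** 2 - mean ** 2
    return rate, 2 * w * math.sqrt(var)
```

For two outcomes the variance reduces to p_0 p_1 (S_0 − S_1)², reproducing ∆S_K = 2 sin(ωt) cos(ωt) ln cot(ωt) of (4.1).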

Continuum limit analysis
From the above discussion, one may gain the intuition that saturating the dispersion bound for K-entropy during the unitary evolution of operators is much harder than for K-complexity. In fact, for the latter, the bound is always asymptotically saturated at long times for chaotic systems,³ but this never happens for K-entropy (or any other A-complexity). To gain a deeper understanding, we adopt the continuum limit to study the dispersion bound on the two quantities.
² In addition to the K-entropy, consider A = Σ_n P(n) ϕ_n²(t), where P(n) is a polynomial in n with P(0) = 0, independent of time. It is easy to see that A = P(1)ϕ_1²(t) and ∆A = |P(1)ϕ_0(t)ϕ_1(t)|, so that |∂_t A| = 2b_1∆A.
³ The condition for saturation of the dispersion bound of K-complexity is that b²_{n+1} − b²_n be linear in n, whereas for chaotic systems the Lanczos coefficients always grow asymptotically linearly. Hence, the bound is saturated asymptotically at long times for general chaotic systems.

It is known that for semi-infinite chains, the continuum limit captures well the leading long time behavior of K-complexity and K-entropy using coarse-grained wave functions [2]. Introducing a lattice cutoff ε, we define a coordinate x = εn and a velocity v(x) = 2εb_n. The interpolating wave function is defined as ϕ(x, t) = ϕ_n(t). Expanding the wave equation (2.8) in powers of ε, one finds to leading order

∂_t ϕ(x, t) = −v(x) ∂_x ϕ(x, t) − (1/2) ∂_x v(x) ϕ(x, t) .

This is a chiral wave equation with a position-dependent velocity v(x) and mass (1/2) ∂_x v(x). The equation simplifies greatly in a new frame y defined by v(x)∂_x = ∂_y, with a rescaled wave function

ψ(y, t) = √(v(y)) ϕ(y, t) . (4.5)

One finds

(∂_t + ∂_y) ψ(y, t) = 0 ,

whose general solution is

ψ(y, t) = ψ_i(y − t) ,

where ψ_i(y) = ψ(y, 0) stands for the initial amplitude. This tells us that the leading order wave function simply moves ballistically during the evolution. This approximation captures the growth of K-complexity correctly, but for K-entropy higher order corrections should be included in certain cases, for example when the Lanczos coefficients have bounded support [2]. However, for our purposes, the leading order result is sufficient (we turn to a numerical approach when the leading order analysis breaks down). The normalization condition reads

1 = Σ_n |ϕ_n(t)|² = (1/ε) ∫ dx ϕ²(x, t) = (1/ε) ∫ dy ψ²(y, t) . (4.8)

Evaluation of the K-complexity and the K-entropy then yields

K(t) = (1/ε²) ∫ dy x(y + t) ψ_i²(y) . (4.9)

Similarly, one has

S_K(t) = (1/ε) ∫ dy ψ_i²(y) ln[ v(y + t) / ψ_i²(y) ] . (4.12)

Using these results, once the transformation between the two frames is known, we can immediately extract the leading time dependence of these quantities.
Here we are particularly interested in evaluating the variances of K-complexity and K-entropy. Using a Taylor expansion, we deduce the leading order behavior at long times in terms of the moments Y_n ≡ (1/ε) ∫ dy yⁿ ψ_i²(y), where we have ignored higher order terms (these terms are of the same order as x(t)/ε for chaotic systems, but this does not influence our discussion). The result implies that, while saturation of the dispersion bound happens only in a few cases, the bound still captures the long time behavior of K-complexity well. In other words, even when the dispersion bound is not saturated asymptotically in operator growth, the K-complexity still grows fast, with a rate of change close to the dispersion bound. This is also supported by our numerical results. For example, in figure 1 we show this explicitly for the integrable models b_n = αn^δ.
The variance of K-entropy can be analyzed in the same way. We find that to leading order it is of order unity. This implies that, in contrast to K-complexity, the dispersion bound on K-entropy approaches a constant at long times for general irreversible processes. This is confirmed by our numerical results as well (the interested reader is referred to the next subsection for details). In particular, even in certain cases where the leading order analysis breaks down, we find the result still holds. Hence, we may take ∆S_K ∼ O(1) (4.14) as a definite result for irreversible processes. However, the growth rate of K-entropy generally decreases as a power law at long times, ∂_t S_K ∼ 1/t^γ with 0 < γ ≤ 1 [10], except in the scrambling regime of chaotic systems (where ∂_t S_K is a constant). This illustrates technically why the dispersion bound on K-entropy is too loose at long times.⁴ It is also in accordance with physical expectations. At sufficiently long times, the distribution of the wave functions at an instant t is nearly at maximal entropy (as allowed by the operator dynamics), so that in the next time step t → t + δt the increase of entropy is highly suppressed (δS_K ∼ δK/K according to (4.15)), while the dimension of the operator space, effectively characterized by K-complexity, increases monotonically without any such constraint (this holds in the regime 0 < K < S, for systems with S extensive degrees of freedom).
The above analysis tells us that, unlike for K-complexity, the dispersion bound on K-entropy is too loose to characterize the long time behavior of K-entropy for irreversible processes. This motivates us to search for a tighter bound on the entropy growth. In the next subsection, we show that such a bound indeed exists in the long time limit.

⁴ However, for chaotic systems, ∂_t S_K is a constant of the same order as the entropy variance, ∂_t S_K ∼ 2b_1 ∆S_K. Hence, this logic does not explain why in this case the dispersion bound on K-entropy is not asymptotically saturated, as it is for K-complexity.

Figure 1. The ratio of the growth rate of K-complexity to the dispersion bound in operator growth for the integrable models b_n = αn^δ, where δ = 3/8 (solid), 2/3 (dashed) and 3/4 (dotted). The horizontal line stands for cases in which the bound is saturated during the evolution. It is immediately seen that, even though for these models the bound is not saturated asymptotically, the complexity still grows fast, with a rate of change of the same order as the dispersion bound.

A new bound on K-entropy in the long time limit
It was established in [10] that for irreversible processes there exists a universal logarithmic relation between K-complexity and K-entropy to leading order at long times:

S_K = η ln K + · · · , (4.15)

where the coefficient η is bounded as 0 < η ≤ 1. The upper bound η = 1 is saturated by chaotic systems. In particular, this implies that the long time dynamics of fast scramblers is particularly simple. Given the mean value of the distribution D, the wave functions are described to leading order by a uniform distribution, so that the entropy is maximal. One has S_K ≃ ln D and K ≃ D/2. Up to a factor of 2, the K-complexity exactly measures the effective dimension of the Krylov space at an instant. It turns out that the existence of the logarithmic relation bounds the growth of K-entropy further in the long time limit. Consider K-complexity first. On the one hand, one has the dispersion bound |∂_t K| ≤ 2b_1∆K, whereas the logarithmic relation implies at long times

|∂_t K| = K |∂_t S_K| / η ≤ 2b_1 K ∆S_K / η , (4.16)

where in the first equality we have ignored subleading corrections. Hence, strictly speaking, the result is valid only in the long time limit; all relevant results below should be understood in the same way. We now have two bounds on the complexity growth. However, we have learnt from the continuum limit analysis that the complexity growth rate can be close to the dispersion bound in the long time limit for irreversible processes. Hence, it is hard to imagine that (4.16) provides a tighter bound. Consistency of the two bounds then leads us to propose

∆S_K ≥ η ∆K / K . (4.18)

This relation can also be understood as a bound on the constant η in the long time dynamics:

η ≤ lim_{t→∞} ∆S_K K / ∆K . (4.19)

Returning to the growth of K-entropy, if the above result is correct, it implies a tighter bound on the K-entropy growth in the long time limit:

|∂_t S_K| ≤ 2b_1 η ∆K / K ≤ 2b_1 ∆S_K . (4.20)

This is a main result of this paper. We will show that, though this new bound is valid only in the long time limit, it describes the growth of K-entropy in the long time tails of irreversible processes very well.
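For the exactly solvable case b_n = n (η = 1, ω = 1), the combination ∆S_K K/∆K entering (4.19) can be written in closed form from (4.2), which gives a quick numerical check of (4.18); the sketch below is our own:

```python
import math

def ratio(t):
    """Delta S_K * K / Delta K for the b_n = n model (eta = 1), which
    (4.19) predicts should tend to eta = 1 in the long time limit."""
    s, c = math.sinh(t), math.cosh(t)
    K = s * s                            # K-complexity
    dK = s * c                           # Delta K
    dSK = 2 * s * c * math.log(c / s)    # Delta S_K = sinh(2t) ln coth(t)
    return dSK * K / dK
```

The ratio equals 2 sinh²(t) ln coth(t), which increases monotonically toward 1, so the inequality (4.18) is approached from below and becomes an equality only asymptotically.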
Before moving to explicit examples, let us explain why (4.18) should be correct. First, consider fast scramblers. The K-complexity saturates the dispersion bound in the long time limit, so that

∆S_K ≥ |∂_t S_K| / (2b_1) = η |∂_t K| / (2b_1 K) = η ∆K / K .

This proves (4.18). For general cases, ∆K and ∆S_K at long times can be estimated using the continuum limit analysis. One has ∆K/K ∼ ∂_t K/(b_1 K) ∼ ∂_t S_K/b_1, which generally decreases (as a power law to leading order) at long times, whereas the variance of K-entropy approaches a constant, ∆S_K ∼ O(1). Therefore, (4.18) continues to hold beyond a critical time t_c at which η∆K(t_c)/K(t_c) = ∆S_K(t_c). In fact, we obtain the stronger result that (4.18) holds at all finite times t ≥ t_c. This relation passes a variety of numerical tests.

Numerical results
We would like to test the new bound on K-entropy proposed in (4.20). Consider the SYK-like model first. The Lanczos coefficients are given by b_n = ω√(n(n − 1 + ξ)). The wave functions can be solved exactly as [1]

ϕ_n(t) = √((ξ)_n / n!) tanhⁿ(ωt) / cosh^ξ(ωt) , (4.23)

where (ξ)_n = ξ(ξ + 1) · · · (ξ + n − 1) is the Pochhammer symbol. For ξ = 1, the K-entropy and its variance have already been given in (4.2). It turns out that in this case the bound (4.20) is approached from below in the long time limit, with ∂_t S_K = 2b_1∆K/K = 2b_1∆S_K, as shown in the left panel of figure 2. It is worth emphasizing that this is the only case we find in which the equality in (4.18) is attained. However, extending the relation (4.20) to finite times turns out to be incorrect here. Instead, since the dispersion bound is saturated during the evolution and t_c → ∞, one finds ∂_t S_K = 2b_1∆S_K ≤ 2b_1∆K/K at finite (but long) times.

Figure 2. Left panel: ξ = 1. The bound ∂_t S_K = 2b_1∆K/K is approached from below in the long time limit, where ∂_t S_K = 2b_1∆K/K = 2b_1∆S_K. However, during the evolution, the dispersion bound is always saturated and ∂_t S_K = 2b_1∆S_K ≤ 2b_1∆K/K at finite (but long) times. Right panel: ξ = 2. The bound ∂_t S_K = 2b_1∆K/K is again approached from below, but the relation (4.20) can be extended to finite times t ≥ t_c.

The situation is rather different for other values of ξ. For example, the ξ = 2 case is shown in the right panel of figure 2. The tighter bound proposed in (4.20) is again approached from below, but now ∂_t S_K = 2b_1∆K/K < 2b_1∆S_K in the long time limit. In particular, the relation (4.20) can be extended to finite times t ≥ t_c. As a matter of fact, we have studied a number of chaotic models numerically and find that these features are generic.
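The moments of the distribution (4.23) that enter the bound 2b_1η∆K/K can be computed directly from the exact wave functions. The sketch below is ours (ω = 1 and the truncation N are arbitrary choices); it accumulates p_n = (ξ)_n/n! · tanh^{2n}(t)/cosh^{2ξ}(t) for general ξ and checks the closed forms K = ξ sinh²(t) and ∆K = √ξ sinh(t) cosh(t):

```python
import math

def nb_probs(xi, t, N=3000):
    """Krylov probabilities for b_n = sqrt(n(n-1+xi)) (omega = 1, t > 0):
    p_n = (xi)_n / n! * tanh^{2n}(t) / cosh^{2 xi}(t)."""
    q = math.tanh(t) ** 2
    logq = math.log(q)
    logp = -2 * xi * math.log(math.cosh(t))   # n = 0 term
    p = []
    for n in range(N):
        p.append(math.exp(logp))
        # p_{n+1}/p_n = (xi + n)/(n + 1) * q
        logp += math.log((xi + n) / (n + 1)) + logq
    return p
```

The distribution is negative binomial, so its mean ξq/(1 − q) and variance ξq/(1 − q)² reduce to ξ sinh²(t) and ξ sinh²(t) cosh²(t) respectively.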
We now move to integrable models with b_n = αn^δ, where 0 < δ < 1. In this case, the coefficient η in the logarithmic relation (4.15) is η = δ when δ ≥ 1/2 [10]. In figure 3, we show the entropy growth during the evolution for two examples:⁵ δ = 1/2 (left) and δ = 2/3 (right). It follows that for integrable models the growth rate of K-entropy generally decreases as ∂_t S_K ∼ 1/t, whereas ∆S_K still approaches a constant in the long time limit, consistent with our continuum limit analysis. Furthermore, the entropy growth rate is very close to the tighter bound (4.20) at long times. However, extending the relation (4.20) to finite times is not always correct. For instance, in the Heisenberg-Weyl case the bound ∂_t S_K = 2b_1η∆K/K is approached from above, so that at finite times the entropy growth rate is instead bounded from below by 2b_1η∆K/K. Finally, we study the case in which the Lanczos coefficients have bounded support. In this case, the leading order continuum limit analysis fails to capture the leading time dependence of K-entropy (though the result for K-complexity remains valid) [2]. Hence, our result for ∆S_K in section 4.2 breaks down and the bound proposed in (4.20) looks problematic. However, our numerical results still support that ∆S_K → const in the long time limit and that the bound (4.20) continues to hold. For example, consider the simplest case b_n = b, which has η ≃ 0.729302. The wave functions are solved in terms of Bessel functions. The numerical results for the entropy growth are shown in figure 4. It is immediately seen that, up to subleading oscillations, ∆S_K approaches a constant while the entropy growth rate still decreases as a power law, ∂_t S_K ∼ b_1∆K/K ∼ 1/t.

⁵ For the Heisenberg-Weyl case δ = 1/2, the wave functions can be solved exactly as ϕ_n(t) = e^{−α²t²/2} αⁿ tⁿ/√(n!).
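In the Heisenberg-Weyl case, the exact wave functions quoted above give Poisson probabilities p_n = e^{−λ}λⁿ/n! with λ = α²t², so the late time statements ∂_t S_K ∼ 1/t and ∆S_K → const can be checked directly. A sketch of this check (ours; α = 1 and the cutoff N are arbitrary choices):

```python
import math

def entropy_stats(lam, N=4000):
    """K-entropy S_K and its variance Delta S_K for the Poisson distribution
    p_n = e^{-lam} lam^n / n!  (Heisenberg-Weyl case with lam = alpha^2 t^2)."""
    logp = -lam                      # log p_0
    loglam = math.log(lam)
    S = S2 = 0.0
    for n in range(N):
        p = math.exp(logp)
        if p > 1e-300:               # skip underflowed tail terms
            S += p * (-logp)         # sum p_n S_n with S_n = -ln p_n
            S2 += p * logp * logp    # sum p_n S_n^2
        logp += loglam - math.log(n + 1)
    return S, math.sqrt(S2 - S * S)
```

Since S_K ≈ (1/2) ln(2πeλ) for a Poisson distribution at large λ, doubling t raises S_K by about ln 2 (i.e. ∂_t S_K ≈ 1/t), while ∆S_K settles near a constant of order unity.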
Combining all the above results, we conclude that during irreversible operator growth the K-entropy growth can be roughly divided into two regimes: 1) the initial regime, in which the entropy growth rate nearly saturates the dispersion bound, ∂_t S_K ≃ 2b_1∆S_K; 2) the late time regime, in which the growth rate is well described by the new bound (4.20), ∂_t S_K ∼ 2b_1η∆K/K. For physically interesting cases, such as chaotic systems and integrable models, this describes the growth of K-entropy very well during the Heisenberg evolution.

Conclusion
In this paper, we have examined the upper bound on the growth of general A-complexity defined in the Krylov space, including K-complexity and K-entropy, during the Heisenberg evolution of operators. We first proved a dispersion bound (3.8), generalising the K-complexity result of [11] to general A-complexity. Our new contributions are: 1) We show that saturation of the dispersion bound on A-complexity is possible if and only if the A-complexity operator A is linearly related to the K-complexity operator K: A = αK + β, where α, β are c-number constants (independent of n but possibly time dependent). Since saturation of the bound for K-complexity is equivalent to closure of the complexity algebra [11], this implies that the complexity algebra constrains the growth of general A-complexity in a subtle way. 2) Though there are only a few cases in which the complexity algebra is closed, the K-complexity grows sufficiently fast at long times for general irreversible processes, with a rate of change close to the dispersion bound, ∂_t K ∼ 2b_1∆K.
However, the situation for K-entropy is quite different. Except for chaotic systems, the entropy growth rate generally decreases as a power law to leading order at long times, ∂_t S_K ∼ 1/t^γ with 0 < γ ≤ 1 (γ = 0 for chaotic systems), whereas the variance ∆S_K approaches a constant in the long time limit. Thus the dispersion bound on K-entropy is generally too loose to be saturated. It turns out, however, that the dispersion bound on K-complexity leads to a tighter bound (4.20) on the growth of K-entropy in the long time limit, thanks to a universal logarithmic relation between K-complexity and K-entropy for irreversible processes [10]. In particular, for physically interesting cases such as chaotic systems and integrable models, the K-entropy growth rate at long times is very close to this new bound.