On the evolution of operator complexity beyond scrambling

We study operator complexity on various time scales with emphasis on those much larger than the scrambling period. We use, for systems with a large but finite number of degrees of freedom, the notion of K-complexity employed in [1] for infinite systems. We present evidence that K-complexity of ETH operators has indeed the character associated with the bulk time evolution of extremal volumes and actions. Namely, after a period of exponential growth during the scrambling period, the K-complexity increases only linearly with time, for a time which is exponentially long in the entropy, and it eventually saturates at a constant value which is also exponential in the entropy. This constant value depends on the Hamiltonian and the operator but not on any extrinsic tolerance parameter. Thus K-complexity deserves to be an entry in the AdS/CFT dictionary. Invoking a concept of K-entropy and some numerical examples, we also discuss the extent to which the long period of linear complexity growth entails an efficient randomization of operators.


Introduction
Quantum complexity has been proposed as a new entry in the holographic dictionary (see for instance [2,3] and references therein). The underlying idea is to characterize the entanglement of a state in an 'optimal' way, with respect to some simple building blocks, such as gates in a quantum circuit model or, more generally, a tensor network. Complexity can then be defined as the size of the smallest circuit or tensor network which approximates the state, given some prescribed set of gates or fundamental tensors (see for instance [4,5]). The quantum circuit model leads naturally to a notion of complexity which is extensive in the number of degrees of freedom, S, and furthermore grows linearly in time for a period much longer than any ordinary thermalization time scale,

C(t) ∼ S t/β ,

with β an effective time step for state-vector orthogonality, i.e. ⟨Ψ_t|Ψ_{t+β}⟩ ≈ 0. This linear growth is to be matched to the linear growth of spacelike volumes inside a black hole of entropy S and inverse Hawking temperature β [6,7]. An important question is whether the so-defined complexity has an upper bound. In quantum models with a finite set of qubits, a computation is regarded as finished when the target state is approximated within some a priori tolerance ϵ, with respect to a standard metric on the space of states. Complexities defined with such an implicit dependence on the tolerance parameter are bounded by the number of ϵ-cells in the space of states, which scales exponentially with the number of qubits.

Section 2 reviews the notion of K-complexity. In section 3 we estimate the Lanczos coefficients of ETH operators and follow the K-complexity beyond the scrambling evolution: we establish the linear growth of K-complexity and the saturation time scale. In section 4 we define the notion of K-entropy, as a measure of the degree of randomization of the Heisenberg flow, and argue on the basis of some numerical estimates that such randomization is expected to occur in order of magnitude. Section 5 brings the conclusions and a number of open questions suggested by our work.

Review of K-complexity
We begin with a review of K-complexity and a description of the notational conventions to be used in this paper. The main reference for this section is [1]. Given the Hamiltonian of a lattice system, H, and a particular initial operator O_0, one defines a linearly independent set of operators Õ_n in terms of the n-times nested commutators,

Õ_n = [H, [H, ... , [H, O_0] ... ]]   (n commutators).

The orthonormality can be defined with respect to any non-degenerate inner product in the operator algebra, such as

(A|B) = Tr(A† B)/N ,   (2.1)

where the trace is taken over the complete Hilbert space of dimension N. In what follows, we assume that appropriate cutoffs exist so that N is finite, but many of the expressions should admit a consistent N → ∞ limit. The construction of the Krylov basis runs iteratively as follows. From the initial operator O_0 = A_0, which we assume to be normalized, one defines

A_n = [H, O_{n−1}] − b_{n−1} O_{n−2} ,   O_n = b_n^{−1} A_n ,   b_n = (A_n|A_n)^{1/2} ,

where the non-negative matrix elements b_n are called Lanczos coefficients. It is useful to exploit the notation (2.1) to introduce a vector space of operators, |O), with dimension of order N². The adjoint action of the Hamiltonian in (2.4) introduces a linear operator in this space known as the Liouvillian, defined as

L |O) = |[H, O]) ,

which acts as the generator of the Heisenberg flow O_t = e^{itH} O_0 e^{−itH} in this notation:

|O_t) = e^{iLt} |O_0) .

JHEP10(2019)264
Any Hermitian operator O can be expanded in the Krylov basis according to the expression

O_t = Σ_n i^n ϕ_n(t) O_n ,   (2.7)

for some real coefficients ϕ_n. In terms of these 'amplitudes', we can rewrite the Heisenberg equation as the discrete recursion

∂_t ϕ_n(t) = b_n ϕ_{n−1}(t) − b_{n+1} ϕ_{n+1}(t) ,   (2.8)

with boundary condition ϕ_{−1}(t) = 0. One can directly check that the normalization of the amplitudes is preserved under time evolution, i.e. ∂_t Σ_n |ϕ_n(t)|² = 0. Since O_0 was assumed to be normalized, we have ϕ_n(0) = δ_{n0}, and the quantities |ϕ_n(t)|² define a unit-normalized probability distribution for all times. If we start with a 'small' operator, containing few local degrees of freedom, each nested commutator with the Hamiltonian tends to increase its size. For a k-local Hamiltonian, containing products of less than k local degrees of freedom, we expect the size of O_n to be of order nk for large values of n. To see this, suppose O_n contains 'clusters' of size r in terms of local operators. Since H is a sum of clusters of size k, the commutator [H, O_n] is nonzero when the corresponding clusters have a non-vanishing intersection. The components with largest size are those corresponding to clusters that intersect through O(1) local operators, yielding commutators of size r + k. Hence, each commutator will generically increase the size of the operator by O(k). Proceeding inductively, we find that a nested commutator of order n generates operators of size nk in order of magnitude.
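The iterative construction reviewed above is easy to run numerically. The following is a minimal sketch (not from the paper; all names are illustrative) that builds the Krylov basis and Lanczos coefficients for a random Hermitian H and seed operator on a small Hilbert space, with full reorthogonalization added for numerical stability:

```python
import numpy as np

# Numerical sketch of the Lanczos/Krylov construction for the Liouvillian,
# using the infinite-temperature inner product (A|B) = Tr(A^dag B)/N.
rng = np.random.default_rng(0)
N = 6                                    # Hilbert-space dimension (illustrative)

H = rng.normal(size=(N, N)); H = (H + H.T) / 2     # random Hermitian Hamiltonian
O0 = rng.normal(size=(N, N)); O0 = (O0 + O0.T) / 2 # random Hermitian seed operator

def ip(A, B):
    """Inner product (A|B) = Tr(A^dagger B)/N."""
    return np.trace(A.conj().T @ B) / N

def liouville(O):
    """Liouvillian action L O = [H, O]."""
    return H @ O - O @ H

O0 = O0 / np.sqrt(ip(O0, O0).real)       # normalize the seed
ops, bs = [O0], [0.0]
for n in range(1, 15):
    A = liouville(ops[-1]) - bs[-1] * (ops[-2] if n > 1 else 0)
    for P in ops:                        # full reorthogonalization (stability)
        A = A - ip(P, A) * P
    b = np.sqrt(ip(A, A).real)
    if b < 1e-9:
        break                            # Krylov space exhausted
    bs.append(b); ops.append(A / b)

lanczos = bs[1:]                         # the coefficients b_1, b_2, ...
```

The resulting `ops` are orthonormal with respect to the chosen inner product, and `lanczos` collects the non-negative b_n of the recursion.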
Since 'operator size' is an intuitive measure of complexity, and operator size is roughly related to the ordering in the Krylov basis, it is natural to define the notion of K-complexity as the average value of n in the Krylov basis expansion (2.7), i.e.

C_K = Σ_n n |ϕ_n|² ,   (2.9)
where unit normalization of the ϕ_n amplitudes is assumed. It was explicitly shown in [1] that this definition is close to operator size for the SYK model in the thermodynamic limit. Applying the general definition (2.9) to the time-evolved operator O_t = e^{itH} O_0 e^{−itH} with initial condition O_0, we are led to a natural notion of time-dependent K-complexity,

C_K(t) = Σ_n n |ϕ_n(t)|² ,   (2.10)

which depends implicitly on the seed operator O_0. A given pattern of growth of Lanczos coefficients as a function of n translates into a characteristic growth of complexity. For instance, it is shown in [1] that a system with an asymptotic large-n law

b_n ≈ α n ,   (2.11)

accumulates K-complexity at an exponential rate:

C_K(t) ∼ e^{2αt} .   (2.12)

A benchmark example of this behavior is the SYK model, for which 2α = λ is the Lyapunov exponent revealed in OTOC correlations. It is then natural to propose (2.11) as a criterion for local quantum chaos, since explicit evaluation of Lanczos coefficients in various integrable systems yields softer asymptotic laws of the form

b_n ∼ α n^δ ,   0 < δ < 1 .   (2.13)

In these cases, K-complexity has a milder, power-like growth:

C_K(t) ∼ (αt)^{1/(1−δ)} .

It is useful to find relations between the patterns of growth of Lanczos coefficients and more familiar objects, such as correlation functions. Let us consider the time autocorrelation

G(t) = (O_0 | O_t) ,

which coincides with the standard Wightman correlation function at infinite temperature.
In the thermodynamic limit, N → ∞, the Fourier transform G(ω) develops a non-trivial analytic structure. In particular, the singularities of G(t) closest to the real axis are located at t = ±iπ/(2α), where α is the slope coefficient in (2.11), and G(ω) decays exponentially along the real axis with the law (cf. [14])

G(ω) ∼ exp(−π|ω|/2α) .

More generally, a growth law of the form (2.13) translates into a decay G(ω) ∼ exp(−|ω/ω_0|^{1/δ}), i.e. the sharper the decay of the spectral function, the milder the growth of the Lanczos coefficients. In the case that the b_n have a finite asymptotic limit, lim_{n→∞} b_n = b_∞, it turns out that the spectral function has compact support in the interval |ω| ≤ 2b_∞. There is a direct relation between the Lanczos coefficients and the moments of the Liouvillian,

μ_{2n} = (O_0 | L^{2n} | O_0) ,

which in turn control the Taylor series of the autocorrelation function (only even moments contribute for Hermitian operators):

G(t) = Σ_n (−1)^n μ_{2n} t^{2n}/(2n)! .

The relation between b_n and μ_{2n} can be written explicitly as a combinatorial formula (cf. appendix A of [1] for a review),

μ_{2n} = Σ_{{h_k}} Π_{k=1}^{2n} b_{(h_{k−1}+h_k)/2} ,

where {h_k} is the set of so-called Dyck paths: sequences h_0, ..., h_{2n} satisfying h_0 = h_{2n} = 1/2, h_k ≥ 1/2 and |h_k − h_{k+1}| = 1 for all k. The number of such paths is the Catalan number C_n = (2n)!/(n!(n+1)!). From this expression one can relate the large-n asymptotics of Lanczos coefficients and moments. For instance, a linear growth of b_n translates into a factorial-squared growth of the moments, i.e. μ_{2n} ∼ n^{2n}. More interesting for our purposes is the fact that an asymptotically constant Lanczos sequence, b_n ∼ b_∞, produces power-like moments:

μ_{2n} ≈ C_n (b_∞)^{2n} ∼ (2 b_∞)^{2n} e^{o(n)} ,   (2.20)

where we have applied the large-n asymptotics C_n ≈ 4^n n^{−3/2} and used the notation o(n) in the exponent for any terms whose large-n growth is slower than linear, such as fractional powers or logarithms. Hence, an asymptotic power-law behavior of the moments is associated with a flat distribution of Lanczos coefficients.
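The lowest Dyck-path relations, μ_2 = b_1² and μ_4 = b_1⁴ + b_1² b_2², can be checked by representing the Liouvillian in the Krylov basis, where it is tridiagonal with off-diagonal entries b_n and the moments are matrix elements of its powers. A minimal sketch (b_n values arbitrary, for illustration only):

```python
import numpy as np

# In the Krylov basis the Liouvillian is tridiagonal with entries b_n,
# so the moments are mu_{2n} = (T^{2n})_{00}. Check the first two
# Dyck-path formulas: mu_2 = b_1^2 and mu_4 = b_1^4 + b_1^2 b_2^2.
b = np.array([1.3, 0.7, 2.1, 0.9])            # arbitrary illustrative values
T = np.diag(b, 1) + np.diag(b, -1)

mu2 = np.linalg.matrix_power(T, 2)[0, 0]
mu4 = np.linalg.matrix_power(T, 4)[0, 0]

print(np.isclose(mu2, b[0]**2))                       # one Dyck path
print(np.isclose(mu4, b[0]**4 + b[0]**2 * b[1]**2))   # two Dyck paths
```

The two contributions to μ_4 correspond to the Dyck paths that return to the origin once or bounce out to the second site.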

K-complexity of scramblers: fast and finite
In systems with a finite-dimensional Hilbert space, K-complexity is necessarily bounded by the dimensionality of the operator space, C_K ≤ N². Saturation of this bound is not guaranteed, as the Krylov basis may terminate its iterative construction before it spans the whole operator space. Still, for sufficiently generic choices of initial operator O_0 and Hamiltonian H, we expect that n_max does not lie far below N². To see this, consider the basis of operators

L_(ab) = |E_a⟩⟨E_b| ,   (3.1)

where |E_a⟩ denotes the exact energy eigenstate with eigenvalue E_a. The N² operators L_(ab) define a basis of the operator space which is orthonormal with respect to the inner product (2.1). The components of O_t in this basis are proportional to its matrix elements in the exact energy basis,

(L_(ab) | O_t) ∝ ⟨E_a| O_t |E_b⟩ ,   (3.2)

which at the same time can be written as

⟨E_a| O_t |E_b⟩ = e^{it(E_a − E_b)} ⟨E_a| O_0 |E_b⟩ .

For a sufficiently generic initial operator, there are O(N²) non-vanishing matrix elements, which remain non-vanishing at all times. Thus, the 'supervector' |O_t) has O(N²) non-vanishing projections (L_(ab)|O_t). Although the Krylov basis is rotated with respect to (3.1), it is natural to expect that the number of non-vanishing K-components (O_n|O_t) will also be of O(N²). Furthermore, for generic values of the energies E_a, the N independent phases e^{−itE_a} describe an ergodic motion on a real N-dimensional torus, which is embedded in the operator space by the equation (3.2). Hence, |O_t) lies on an N-dimensional submanifold of the operator space, and we can conclude that

n_max = e^{O(S)} ,   (3.3)

for systems with S degrees of freedom and generic choices of H and O_0.

The computation of K-complexities requires the evaluation of (2.10) once we know the amplitudes ϕ_n(t). These in turn are obtained by solving (2.8). Therefore, it is the structure of the sequence b_n that determines the relevant dynamical regimes in the growth of K-complexity. In a typical fast scrambler, such as the SYK model, small operators grow in size at an exponential rate exp(λt), where λ ≈ 2α is the Lyapunov exponent. In other words, for small operators, operator size is roughly equivalent to K-complexity.
We can regard the operator as 'scrambled' when it has spread, in order of magnitude, over the whole system. For a fast scrambler with S local degrees of freedom, this happens at the familiar time scale t_* ∼ λ^{−1} log S [15]. The value of the K-complexity at the scrambling time is of order

C_K(t_*) ∼ e^{λ t_*} ∼ S .   (3.4)

For systems with O(S) lattice sites and a finite-dimensional Hilbert space one has N ∼ e^{O(S)}.
Since S ≪ e^{O(S)}, it follows that K-complexity has an enormous scope for growth beyond the 'scrambling value'. Should the complexity continue to grow exponentially fast for t > t_*, it would saturate in a time of order S. In the next section we use the ETH hypothesis to argue that this estimate is far from correct.

The ETH estimate
For systems which scramble less efficiently than a 'fast' scrambler, one expects the scrambling time to scale like a power of S rather than a logarithm, but the intuitive relation between K-complexity and operator size suggests that the complexity at the scrambling time continues to satisfy (3.4). Hence, the wide gap between the complexity at scrambling, C_K(t_*) ∼ S, and the maximal complexity, of order e^{O(S)}, should be a general feature of any system with a finite number of degrees of freedom.
The rate of K-complexity growth after scrambling depends on the form of the b_n coefficients for n ≫ S. These can be constrained from the behavior of the moments. From the spectral decomposition of the correlation function, we obtain the expression

μ_{2n} = (1/N) Σ_{a,b} |O_ab|² (E_a − E_b)^{2n} ,   (3.7)

where O_ab = ⟨E_a| O_0 |E_b⟩ denote the matrix elements of the initial operator in the exact energy basis. These matrix elements can be used to characterize the degree of quantum chaos. For operators whose expectation values and correlations approach thermal values at long times, it is expected that the O_ab satisfy the Eigenstate Thermalization Hypothesis (ETH) [16][17][18][19][20], which essentially says that the eigenbases of O_0 and H are uncorrelated,

related by a random unitary on the N-dimensional Hilbert space. From this assumption it follows that the off-diagonal matrix elements contributing to (3.7) have the form

O_ab = N^{−1/2} F(E_a, E_b) R_ab ,

where R_ab is a random matrix whose entries have mean zero and unit variance. The form factor F carries the information about the normalization of the operator and is assumed to depend smoothly on the energies of the states. Plugging this ansatz into the spectral expression (3.7) we thus find

μ_{2n} ≈ (1/N²) Σ_{a,b} |F(E_a, E_b)|² (E_a − E_b)^{2n} .   (3.9)

For n ≫ S the energy sum tends to be dominated by the largest possible energy differences. For a system with S degrees of freedom and extensive energy, the maximum energy difference is of order ΛS, where Λ is the UV cutoff. Hence, we expect the large-n moments in (3.9) to scale as (ΛS)^{2n} for n ≫ S. We can refine the estimate further to check that the operator form factor does not alter this conclusion significantly. The function F(E_a, E_b) is assumed to depend weakly on the average energy Ē = (E_a + E_b)/2 and more sharply on the energy difference ω = E_a − E_b, with a characteristic bandwidth Γ. For local operators the bandwidth is an intensive energy scale, not scaling with S, and set by the local frequency cutoff Λ. For instance, assuming an exponential form factor F(ω) ∼ e^{−|ω|/Γ}, we would estimate the sum in (3.9) as proportional to

∫_0^{ΛS} dω ω^{2n} e^{−2ω/Γ} ∝ (Γ/2)^{2n+1} γ(2n+1, 2ΛS/Γ) ,

where γ stands for the lower incomplete Gamma function. Upon further use of the asymptotic expansion γ(s, x) ≈ x^s e^{−x}/s, valid for n/S ≫ 1, we find the following n ≫ S asymptotics for the moments:

μ_{2n} ∼ (ΛS)^{2n} e^{o(n)} .

The same qualitative estimate is obtained if we use milder form factors, such as a Gaussian. Using now the general relation (2.20), we conclude that ETH suggests a saturation of the Lanczos sequence at a 'plateau' of height b_∞ ∼ ΛS/2, with λ ∼ 2α ∼ Λ within factors of order unity. We are thus led to a very simple picture for the Lanczos sequence: linear growth with slope α for 0 < n < n_* ∼ S, morphing into an approximate plateau as n ≫ S. This conjectured form of the Lanczos band is shown in figure 1.
In the figure, we denote by a dotted line our ignorance about the details of the high-n endpoint, other than our previous estimate that, for sufficiently generic initial operators, we expect n_max = e^{O(S)}, as explained in the discussion leading to (3.3).
More generally, for systems with less efficient scrambling, the initial linear growth might be substituted by (2.13), whereas the 'post-scrambling' plateau for n S is expected to be rather general. It would be very interesting to test the generality of this 'Lanczos plateau' in numerical simulations of various models, such as SYK.
The qualitative description of a fast scrambler, as determined by a Lyapunov exponent λ ∼ Λ and S degrees of freedom, can be adapted to more general situations where only a subset of the degrees of freedom are 'activated' in the scrambling process. This occurs when considering a system at a finite temperature below the UV cutoff, T < Λ. In this case, on states of entropy S, the system can be described as having about one degree of freedom per thermal cell of size β = T^{−1} participating in the scrambling process, with the rest of the degrees of freedom effectively 'frozen' in their ground state, and thus not contributing to the entropy. On such states of entropy S and effective temperature T, the UV cutoff is effectively replaced by T, setting the scale of the Lyapunov exponent to λ ∼ T.

Dynamics of K-complexity
The evolution of K-complexity for a fast scrambler with linear growth (2.11) was studied in [1]. An analytic solution for the amplitudes ϕ_n(t) exists for a formal choice of Lanczos coefficients given by b_n = α √(n(n − 1 + η)). To simplify matters, we look at the exactly linear case, corresponding to η = 1, for which the solution reads

ϕ_n(t) = tanh^n(αt) sech(αt) .   (3.12)

An initially sharp peak at n = 0 moves to higher n exponentially fast: n_peak(t) ∼ e^{2αt}. The overall height of the function at large t is of order e^{−αt}. Hence, the scrambling is very efficient at accessing 'large' operators but at the same time it is also very efficient in randomizing the operator in the Krylov basis, leading to an essentially flat ϕ_n distribution with support on [0, n_peak(t)] and height of order 1/√(n_peak(t)).
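The geometric sums implied by (3.12) can be done in closed form: the distribution stays normalized and the K-complexity comes out as C_K(t) = sinh²(αt) ≈ e^{2αt}/4, consistent with (2.12). A quick numerical check (α = 1 illustrative, not from the paper):

```python
import numpy as np

# Check the exact scrambling solution phi_n(t) = tanh^n(at) sech(at):
# the amplitudes stay normalized and C_K(t) = sinh^2(at) grows as e^{2at}.
alpha, t = 1.0, 3.0
n = np.arange(0, 5000)
phi = np.tanh(alpha * t)**n / np.cosh(alpha * t)
p = phi**2                               # probability distribution |phi_n|^2

print(np.isclose(p.sum(), 1.0))          # unit normalization
CK = (n * p).sum()                       # K-complexity, eq. (2.10)
print(np.isclose(CK, np.sinh(alpha * t)**2))
```

The truncation at n = 5000 is far beyond the support of the distribution for the chosen αt.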
The growth of complexity is largely controlled by the ballistic motion in n-space of the solution's 'wave front'. On the other hand, operator randomization depends on whether a significant tail is left behind the wave front. For a discussion of the ballistic aspect, as well as the detailed matching between the pre-scrambling and post-scrambling regimes, it is useful to start with a continuum approximation.
Taking a coarse-grained look at the discrete function ϕ_n(t), let us introduce a lattice cutoff ε and a coordinate x = ε n, and define the interpolating functions ϕ(x, t) = ϕ_n(t) and v(x) = 2ε b(εn) = 2ε b_n. A continuum form of the recursion relation (2.8) can be written as

∂_t ϕ(x, t) = (1/2ε) [ v(x) ϕ(x − ε, t) − v(x + ε) ϕ(x + ε, t) ] .   (3.13)

Expanding now in powers of ε, we find to leading order

∂_t ϕ(x, t) = −v(x) ∂_x ϕ(x, t) − (1/2) v′(x) ϕ(x, t) ,   (3.14)

a chiral wave equation with an x-dependent velocity and a 'mass term' proportional to v′(x). It can be brought to standard ballistic form by the change of variables and rescaling

y = ∫^x dx′/v(x′) ,   ψ(y, t) = √(v(x(y))) ϕ(x(y), t) ,   (3.15)

in terms of which (3.14) reads (∂_t + ∂_y) ψ(y, t) = 0, with general solution

ψ(y, t) = ψ_i(y − t) ,   (3.17)

where ψ_i(y) = ψ(y, 0) is the initial condition. The rescaling (3.15) is also useful from the point of view of the intuition about probability distributions. From the discrete normalization condition Σ_{n≥0} |ϕ_n|² = 1 we can derive the continuum analogs

(1/ε) ∫ dx |ϕ(x, t)|² = (1/ε) ∫ dy |ψ(y, t)|² = 1 ,   (3.18)

so that ψ(y) is a naive probability amplitude in y space, just as ϕ(x) is a naive probability amplitude in x space. The physics of (3.17) is that of a simple ballistic motion of the initial ψ-distribution towards positive values of y at constant unit velocity. The problem is solved once we know the change of variables between the x-frame and the y-frame. The K-complexity as a function of time is given by

C_K(t) = (1/ε²) ∫ dx x |ϕ(x, t)|² = (1/ε²) ∫ dy x(y) |ψ(y, t)|² .   (3.19)

Using the general solution (3.17) in the last expression and changing variables y → y + t, we find

C_K(t) = (1/ε²) ∫ dy x(y + t) |ψ_i(y)|² .   (3.20)

There are various interesting cases to consider. A fast scrambler with linear Lanczos growth has v(x) = λx, where λ = 2α is the Lyapunov exponent. The corresponding change of variables is

x(y) = ε e^{λy} ,   (3.21)

where we have chosen the additive normalization of y for convenience. Notice that, in this case, the y variable runs over the whole real line, whereas the x variable is restricted to be positive. The scrambling solution for the ϕ amplitude then reads

ϕ(x, t) = (λx)^{−1/2} ψ_i( λ^{−1} log(x/ε) − t ) .   (3.22)

An initial peak at y = 0 for ψ_i(y) will move ballistically as y_p(t) = t, corresponding to the x-frame trajectory x_p(t) = ε e^{λt}, which also controls the exponential growth of K-complexity.
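The statement that ballistic y-drift becomes exponential complexity growth under the frame map (3.21) can be checked by direct quadrature of (3.20); a small sketch with a Gaussian initial profile (width, cutoff and normalization chosen purely for illustration):

```python
import numpy as np

# C_K(t) from eq. (3.20) with x(y) = eps * exp(lam*y): ballistic motion in y
# gives exponential growth in time, C_K(t+1)/C_K(t) = e^{lam}.
lam, eps = 1.0, 0.01
y = np.linspace(-10.0, 10.0, 20001)
dy = y[1] - y[0]
psi2 = np.exp(-y**2)                     # |psi_i(y)|^2, illustrative Gaussian
psi2 /= (psi2 * dy).sum()                # normalize on the grid

def CK(t):
    return (eps * np.exp(lam * (y + t)) * psi2 * dy).sum() / eps**2

print(np.isclose(CK(3.0) / CK(2.0), np.e))
```

The ratio C_K(t+1)/C_K(t) = e^{λ} is exact for this frame map, independently of the shape of ψ_i.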
If the velocity has a logarithmic correction, as proposed in [1] for SYK-like models, v(x) ≈ λx / log(x/ε), the corresponding frame map is x = ε e^{√(2λy)}, and the distribution peak and the K-complexity grow at a rate of order exp(√(2λt)). For systems with a less efficient scrambling, governed by (2.13) with δ < 1, the drift velocity is given by v(x) = 2α ε (x/ε)^δ, leading to a change of variables

y = (1/(2α(1−δ))) (x/ε)^{1−δ} ,

and a power-like complexity growth proportional to (αt)^{1/(1−δ)}. It is interesting to compare estimates of scrambling times based on the growth of K-complexity with other heuristic models of scrambling. If we define the scrambling time by the requirement that complexity reaches the size of the system, C_K(t_*) ∼ S, then we have t_* ∼ λ^{−1} log S for the fast scrambler and t_* ∼ α^{−1} S^{1−δ} for the power-like growth. On the other hand, in d spatial dimensions, ballistic scrambling takes a time of order t_* ∼ L for a system of size L. If we write S ∼ (αL)^d for the effective number of degrees of freedom (entropy) and α^{−1} for the effective dynamical time step, we have t_* ∼ α^{−1} S^{1/d} for ballistic scrambling. If we model the scrambling by a diffusion process, characterized by a random walk of step α^{−1}, we obtain instead t_* ∼ α^{−1} S^{2/d}. Then, we find the interesting correspondences δ = 1 − 1/d for ballistic scrambling and δ = 1 − 2/d for diffusive scrambling.

The post-scrambling regime. In the post-scrambling regime, where v(x) ≈ v_* is constant, the x and y frames are simply proportional,

x(y) ≈ v_* y ,

and the amplitude ϕ(x, t) just moves ballistically towards large x with velocity v_*, the K-complexity also growing linearly. To summarize, in the simplest case of an SYK-like fast scrambler, with Lyapunov exponent λ and S extensive degrees of freedom, we have v(x) ≈ λx in the scrambling band, 0 < x < εS, and v(x) = v_* ∼ λεS in the post-scrambling band. In the scrambling period, x(y) ≈ ε e^{λy}, resulting in the expected exponential growth. If the initial operator is 'small', the initial complexity is also small, C_K(0) = O(1), and C_K(t_*) = O(S). On the other hand, in the post-scrambling regime x(y) ≈ v_* y, with constant v_* = ελ n_* ∼ ελ S.
At long times, (3.20) gives

C_K(t) ≈ (1/ε²) v_* t ∫ dy |ψ_i(y)|² = (v_*/ε) t ∼ λ S t ,

using the normalization condition (3.18). We conclude that the complexity grows exponentially fast during scrambling and only linearly after scrambling, with a rate of order λS. The time scale for the amplitude to reach n_max is of order

t_K ∼ ε n_max / v_* ∼ (λS)^{−1} e^{O(S)} .

At times larger than t_K, the function ψ(y, t) remains stuck near the endpoint, because the drift towards large values of x prevents the distribution from bouncing back. This implies that the complexity eventually levels off and remains constant. Over extremely long time scales, however, we know that the solution of the discrete equation (2.8) will necessarily undergo Poincaré recurrences. The time scale for this to happen is doubly exponential,

t_p ∼ e^{e^{O(S)}} ,

with a coefficient that depends on the precision with which we demand recurrence. We summarize the qualitative behavior of the K-complexity for a fast scrambler in figure 2.
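The two regimes can be seen in a toy integration of the discrete recursion (2.8) with a ramp-plus-plateau Lanczos band (parameters far smaller than the physical n_* ∼ S, purely for illustration). The generator of the recursion is real antisymmetric, so iA is Hermitian and the evolution can be done by exact diagonalization:

```python
import numpy as np

# Integrate d(phi_n)/dt = b_n phi_{n-1} - b_{n+1} phi_{n+1} for a Lanczos
# band b_n = min(alpha*n, alpha*nstar): linear ramp, then a plateau.
alpha, nstar, nmax = 1.0, 10, 400
nn = np.arange(nmax + 1)
b = np.minimum(alpha * nn[1:], alpha * nstar)

A = np.zeros((nmax + 1, nmax + 1))
A[nn[1:], nn[:-1]] = b                   # + b_n phi_{n-1}
A[nn[:-1], nn[1:]] = -b                  # - b_{n+1} phi_{n+1}

w, U = np.linalg.eigh(1j * A)            # iA is Hermitian
phi0 = np.zeros(nmax + 1); phi0[0] = 1.0

def CK(t):
    phi = U @ (np.exp(-1j * w * t) * (U.conj().T @ phi0))
    p = np.abs(phi)**2
    assert np.isclose(p.sum(), 1.0)      # norm is conserved
    return (nn * p).sum()

# exponential growth while C_K < nstar, then steady linear growth
print(CK(1.0) < CK(4.0) < CK(8.0))
```

The cutoff nmax is chosen large enough that the wave front (moving at speed ≈ 2αn_*) never reaches it over the times probed, so there is no spurious reflection.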

Operator randomization and K-entropy
Having established the existence of a very long post-scrambling era of linear K-complexity growth, we now begin a more detailed study of this dynamical regime. In particular, we discuss the degree of randomization of the operator O_t, when expanded in the Krylov basis. For this purpose, we shall introduce the notion of K-entropy. In order to motivate its definition, we momentarily go back to the scrambling period. The exact solution (3.12) describes two a priori independent phenomena: there is an exponentially fast growth of K-complexity and at the same time there is an efficient randomization of the operator over the time-dependent span of the Krylov operator set. This is intuitively clear from the qualitative form of (3.12), which eventually looks like a uniform distribution of size n_peak and amplitude 1/√(n_peak). A more formal characterization of this uniformity is given by the 'operator entropy' or K-entropy, which we define by

S_K = − Σ_n |ϕ_n|² log |ϕ_n|² .   (4.1)

Since the quantities |ϕ_n|² define a probability distribution, the so-defined S_K satisfies the usual properties of an entropy function. If the ϕ_n amplitude is very peaked at a particular value of n, large or small, the K-entropy is small. On the other hand, if the distribution is completely uniform over the interval [0, n_M], then S_K = log(n_M). Applying the definition (4.1) to (3.12), we can determine the growth of K-entropy to be expected from a typical fast scrambler. The result of a numerical evaluation is a linear growth with slope close to 2α = λ. Hence, the scrambling dynamics increases K-complexity at an exponential rate, and also increases K-entropy at a linear rate. It turns out that the linear growth of K-entropy for a fast scrambler is captured by the continuum solution of the leading equation (3.14). The continuum versions of K-entropy in the x-frame and the y-frame are

S_K = −(1/ε) ∫ dx |ϕ|² log |ϕ|² = −(1/ε) ∫ dy |ψ|² log( |ψ|²/v ) .   (4.2)

Extracting the velocity-dependent term in the y-frame expression, we have

S_K = −(1/ε) ∫ dy |ψ|² log |ψ|² + (1/ε) ∫ dy |ψ|² log v(y) .

In the leading continuum approximation, any y-frame solution has the form ψ(y, t) = ψ_i(y − t). Hence, the first term is time-independent, whereas the second term computes the average of log(v(y)) over the operator probability distribution. In periods where the complexity growth is accelerated, such as the scrambling period of a fast scrambler, there is entropy production. Inserting the leading continuum solution (3.22) of the scrambling regime into (4.2), one obtains

S_K(t) ≈ λ t + constant ,

which matches the numerical evaluation for the exact solution (3.12). This means that the simple chiral wave equation with a mass term (3.14) actually gives a very accurate description of the scrambling regime, not only accounting for the growth of K-complexity, but also capturing quantitatively the growth of K-entropy.
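The quoted slope is easy to reproduce: applying (4.1) to the exact scrambling solution (3.12), the K-entropy grows with slope numerically close to 2α = λ. A minimal check (α = 1 illustrative):

```python
import numpy as np

# K-entropy S_K = -sum_n p_n log p_n for p_n = tanh^{2n}(at) sech^2(at).
# At late times S_K(t) ~ 2*alpha*t + const, i.e. slope lambda = 2*alpha.
alpha = 1.0
n = np.arange(0, 20000)

def SK(t):
    p = (np.tanh(alpha * t)**n / np.cosh(alpha * t))**2
    p = p[p > 0]                         # drop underflowed tail entries
    return -(p * np.log(p)).sum()

slope = (SK(3.0) - SK(2.0)) / 1.0
print(abs(slope - 2 * alpha) < 0.01)
```

The asymptotic form S_K(t) ≈ 2αt + (1 − log 4) makes the slope essentially exact already at αt of a few.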
In the post-scrambling period, where v(x) ≈ constant, the mass term in (3.14) is negligible and the amplitude propagates ballistically in both frames. Therefore, the leading-order term in the continuum approximation to the amplitude does not detect any significant growth of the K-entropy. We now turn to analyse what can be seen at higher orders.

The continuum amplitude at post-scrambling
We have seen that, while operator randomization is well accounted for in the continuum approximation for the scrambling regime, it is completely missed at leading order in the post-scrambling regime. It is an important question to determine whether K-entropy can be produced at all during the enormously long post-scrambling era.
In this section we show that the next-to-leading approximation to the evolution equation (3.13) already begins to incorporate the randomization effect, but ultimately falls short of the goal. Carrying the short-distance expansion of (3.13) to higher orders one finds (3.14), with further corrections on the right-hand side. At order ε there is a term

−(ε/2) [ v′(x) ∂_x ϕ + (1/2) v″(x) ϕ ] .

This is a small effect in the scrambling regime and completely negligible in the post-scrambling regime, where v′ ≈ 0. At order ε² we find two terms:

−(ε²/4) v′(x) ∂_x² ϕ − (ε²/6) v(x) ∂_x³ ϕ .

The first term is a diffusion contribution with the wrong sign of the diffusion constant, and it only acts for a small time in the scrambling era. The second term is active throughout the long post-scrambling era and thus corresponds to the leading correction which is in principle capable of incorporating a broadening effect.

Let us then consider the O(ε²)-corrected equation in the post-scrambling regime t > t_* and in the y-frame,

(∂_t + ∂_y) ψ(y, t) = −γ ∂_y³ ψ(y, t) ,   (4.5)

written in terms of the rescaled amplitude ψ(y, t) = √v ϕ(x(y), t), which has standard L² norm in the y-frame. The coefficient controlling the new term is certainly small,

γ = ε²/(6v²) ∼ (λS)^{−2} ,

where we have used that v = ελ n_* ∼ ελS in the post-scrambling regime. In order to solve (4.5) we seek a solution of Fourier form,

ψ(y, t) = ∫ (dk/2π) ψ_k e^{iky − iω_k ∆t} ,

with dispersion ω_k = k − γk³. Let us set an initial condition at t = t_*, specifying the amplitude as ψ(y, t_*) = ψ_i(y), of which ψ_k is just the Fourier transform. The solution reads

ψ(y, t) = ∫ (dk/2π) ψ_k e^{ik(y − ∆t)} e^{iγk³∆t} ,   (4.9)

where ∆t = t − t_*. By the rescaling k → k/(3γ∆t)^{1/3} we can evaluate the momentum integral in terms of the Airy function; for an initial condition localized near y = 0 one obtains

ψ(y, t) ∝ (3γ∆t)^{−1/3} Ai(z) ,   where z = (y − ∆t)/(3γ∆t)^{1/3} .
It is already clear from this expression that the approximation is beginning to capture the randomization effect, due to the oscillatory behavior of the Airy function at negative argument. To see this, let us consider an initial delta-function pulse,

ψ_i(y) = A δ(∆y) ,

where ∆y = y − y_*. The constant A is fixed by requiring the correct normalization of ψ(y, t). Evaluating the asymptotics long after the ballistic front y ∼ t has passed, i.e. ∆t ≫ ∆y, one obtains an oscillatory amplitude with envelope |z|^{−1/4}, and the normalization fixes the order of magnitude of the constant to be A ∼ (3γ)^{1/4} √ε, so that the operator amplitude looks like a rapidly oscillating function over the interval 0 < ∆y < ∆t of the form

ψ(y, t) ∼ √(ε/∆t) · Osc_{[0,∆t]} ,

where Osc_{[0,t]} stands for the oscillation component with unit amplitude (a cosine function) and support on the interval [0, t]. Converting back to the x-frame amplitude ϕ = ψ/√v, we have an oscillating function with amplitude of order √(ε/(vt)) and support on the interval [0, vt]. This result is interesting, since it shows perfectly efficient randomization (cf. figure 3). The very flat and long tail yields a K-entropy of order

S_K ∼ log(vt/ε) = log(2bt)   (4.14)

at long times. However, a delta-function initial condition is not a realistic starting point for the post-scrambling regime. First, such a singular initial configuration is beyond the regime of applicability of the low-derivative approximations to (3.13). Second, it was argued that a period of fast scrambling with S degrees of freedom outputs a distribution with an x-width of order x_* ∼ ε n_* ∼ εS ≫ ε. Hence, in order to check whether the present approximation captures randomization, we must input an initial distribution of width δx ∼ εS. Equivalently, in the y-frame at post-scrambling this amounts to δy ∼ δx/v_* ∼ λ^{−1}.
Picking a gaussian ansatz for the normalized y-frame distribution,

ψ_i(y) = (ε²/πδ²)^{1/4} e^{−∆y²/2δ²} ,

the integral (4.9) may be evaluated exactly in terms of an Airy function of shifted, complex argument. Looking at the long-time tail, we focus on the region of large ∆y with ∆t ≫ ∆y, where the net effect of the finite width is a suppression of the tail amplitude of order exp(−δ²/6γ). Putting all factors together one finally finds

ϕ_tail ∼ δ λ n_* e^{−(δ λ n_*)²} ,

up to O(1) factors. We conclude that, unless we pick a lattice-size distribution, with δ ∼ 1/λS, the randomization is all but washed out when looking at smooth signals. In particular, for the choice of width δ ∼ 1/λ, which corresponds to an initial scrambling period of time t_* = λ^{−1} log S, the tail is exponentially suppressed,

ϕ_tail ∼ S e^{−S²} ,

and the propagation is essentially ballistic. We show the difference between the two choices of initial width in figures 4 and 5.
On the other hand, the fact that randomization arises when the signal is extrapolated to cutoff scales, beyond the domain where we trust the equation (4.5), suggests that perhaps randomization is a true property of the discrete evolution equation.

The discrete amplitude at post-scrambling
In search of K-entropy production in the post-scrambling regime, we return to the discrete problem (2.8), which becomes

∂_t ϕ_n(t) = b ( ϕ_{n−1}(t) − ϕ_{n+1}(t) )   (4.20)

when the Lanczos coefficients are approximated by a constant, b_n ≈ b. In the physical situation of interest, this equation holds for n > n_* ∼ S, and the solution must be matched to a solution of the scrambling regime, such as (3.12). Ignoring boundary conditions for the time being, a particular solution of (4.20) is just a Bessel function:

ϕ_n(t) = J_n(2bt) .   (4.21)

It has the correct normalization at t = 0, with all amplitudes vanishing except ϕ_0(0) = 1. Therefore the Bessel functions describe the spread of a distribution which begins sharply localized at the origin. A glance at the plot in figure 6 reveals that randomization is very efficient, featuring a tail similar to that of the Airy function found in the last section. Using the so-called 'approximation by tangents' (cf. [21]) we can write, for n large at fixed ratio 2bt/n > 1,

J_n(2bt) ≈ √(2/π) (4b²t² − n²)^{−1/4} cos( √(4b²t² − n²) − n a − π/4 ) ,

where a = arctan( √(4b²t² − n²)/n ). As the distribution moves to large n at constant velocity, equal to 2b, there is a rapidly oscillating tail with almost flat envelope and height of order (4b²t² − n²)^{−1/4}. Therefore, the Bessel function restricted to positive n behaves qualitatively as the Airy function, featuring an oscillating tail with amplitude of order 1/√(2bt), supported on the interval [0, 2bt].
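The Bessel solution can be checked against the recursion directly, using the standard identity 2 J_n′(z) = J_{n−1}(z) − J_{n+1}(z) and the unit-norm sum rule on the full n-line. A small sketch (scipy used for the Bessel functions; parameters illustrative):

```python
import numpy as np
from scipy.special import jv

# phi_n(t) = J_n(2bt) solves d(phi_n)/dt = b (phi_{n-1} - phi_{n+1}),
# since 2 J_n'(z) = J_{n-1}(z) - J_{n+1}(z).
b, t, dt = 1.0, 5.0, 1e-6
for n in [0, 3, 10]:
    lhs = (jv(n, 2*b*(t + dt)) - jv(n, 2*b*(t - dt))) / (2 * dt)
    rhs = b * (jv(n - 1, 2*b*t) - jv(n + 1, 2*b*t))
    assert abs(lhs - rhs) < 1e-5

# The full line n in Z carries unit norm: sum_n J_n(z)^2 = 1.
nn = np.arange(-200, 201)
print(np.isclose((jv(nn, 2*b*t)**2).sum(), 1.0))
```

The second check makes the leakage problem discussed next quantitative: unit norm holds only when the unphysical n < 0 sites are included.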
The Bessel function amplitude has, however, unphysical features in this case, because it leaks into the negative-n axis: the ansatz (4.21) fails to satisfy the correct boundary condition ϕ_{−1} = 0. This implies that the probability density |ϕ_n|² is not conserved on the physical configurations with n ≥ 0. The problem can be fixed by a superposition of two Bessel functions,

R_n(2bt) = J_n(2bt) + J_{n+2}(2bt) ,   (4.23)

which vanishes identically at n = −1 for all times, as one can verify using the identity J_{−n}(z) = (−1)^n J_n(z). As a result, R_{−1}(2bt) = 0 is effectively a 'Dirichlet' condition separating the dynamics of the physical region n ≥ 0 from the dynamics of the unphysical region n < −1. Furthermore, R_n(t = 0) = δ_{n,0} + δ_{n,−2} and, since one can now consistently restrict attention to positive values of n, it follows that (4.23) does satisfy the physical conditions of being narrowly localized at t = 0 and permanently confined in the n ≥ 0 region. Using the standard recurrence J_{ν−1}(z) + J_{ν+1}(z) = (2ν/z) J_ν(z), the function (4.23) can be rewritten in a form that makes manifest a linear enveloping behavior at large n,

R_n(2bt) = ((n+1)/bt) J_{n+1}(2bt) ,   (4.24)

as shown in figure 7. Despite this accumulation of probability at the higher end of the n spectrum, one can check using the form (4.24) that the K-entropy does grow at a logarithmic rate, S_K[R_n(2bt)] ∝ log(2bt) at long times, the hallmark of good operator randomization. The function R_n(2bt) locates the initial pulse right next to the Dirichlet condition. In order to better simulate the type of configuration prepared by a previous scrambling period, it is convenient to engineer analogs of the R_n function with initial pulses located at any desired position. These 'displaced' pulses can be manufactured by generalizing (4.23) to

R^(k)_n(2bt) = J_{n−k}(2bt) + (−1)^k J_{n+k+2}(2bt) ,

for any non-negative integer k. These functions meet the goal, since they vanish at n = −1 for all times and R^(k)_n(0) = δ_{n,k} + (−1)^k δ_{n,−k−2}.
Hence, we have a function which starts with a unit pulse at any n = k ≥ 0 while remaining confined to the n ≥ 0 domain at all times; the original pulse function (4.23) corresponds to the particular case k = 0.
For generic values of k and long times, the k-pulse functions R^{(k)}_n(2bt) look like modulated Bessel functions: they display a tail of average height of order 1/√(2bt) and are supported on the ballistic domain bounded by n_t ∼ 2bt (cf. figure 8). Therefore, they also feature logarithmically increasing K-entropies.
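These properties can be checked directly. The following numerical sketch assumes the explicit reconstruction R^{(k)}_n(2bt) = J_{n−k}(2bt) + (−1)^k J_{n+k+2}(2bt) and verifies the Dirichlet condition at n = −1, the initial unit pulse at n = k, and, for k = 0, the closed form R_n(2bt) = ((n+1)/bt) J_{n+1}(2bt) with its manifestly linear envelope:

```python
# Sketch: check the k-pulse functions
#   R^(k)_n(2bt) = J_{n-k}(2bt) + (-1)^k J_{n+k+2}(2bt)
# satisfy (i) R^(k)_{-1} = 0 at all times, (ii) R^(k)_n(0) = delta_{n,k}
# for n >= 0, and (iii) for k = 0 the closed form (n+1)/(bt) J_{n+1}(2bt).
import numpy as np
from scipy.special import jv

def k_pulse(k, n, z):
    return jv(n - k, z) + (-1) ** k * jv(n + k + 2, z)

b = 1.0
for k in range(4):
    # (i) 'Dirichlet' condition at n = -1, sampled at several times
    for t in (0.5, 3.0, 12.0):
        assert abs(k_pulse(k, -1, 2 * b * t)) < 1e-12
    # (ii) initial condition: unit pulse at n = k in the physical region
    n = np.arange(0, 10)
    assert np.allclose(k_pulse(k, n, 0.0), (n == k).astype(float))

# (iii) k = 0: J_n + J_{n+2} = (n+1)/(bt) J_{n+1}, by the Bessel recurrence
t, n = 7.0, np.arange(0, 30)
assert np.allclose(k_pulse(0, n, 2 * b * t),
                   (n + 1) / (b * t) * jv(n + 1, 2 * b * t))
print("all identities verified")
```

Property (iii) follows from the recurrence J_{ν−1}(z) + J_{ν+1}(z) = (2ν/z) J_ν(z) with ν = n + 1 and z = 2bt.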
With these ingredients in place, we are ready to discuss the more realistic case of an initial pulse with arbitrary width K_0. This can be achieved by a superposition of k-pulses,
$$ \varphi_n(t) = \sum_{k=0}^{K_0-1} \alpha_k\, R^{(k)}_n(2bt)\,, \qquad \sum_k |\alpha_k|^2 = 1\,. $$
In particular, choosing K_0 ∼ S simulates the kind of signal that is prepared by a previous period of fast scrambling. To simplify matters, let us consider a square pulse with α_k = 1/√K_0. An example of the long-time evolution of such a pulse is shown in figure 9. We observe a stable peak which propagates ballistically and an approximately uniform tail obtained by averaging over the tails of single-pulse functions. Assuming that the phases of each single-pulse function add up randomly, we estimate that a randomization tail exists with height of order 1/√(2bt) and width of order 2bt, leading to a logarithmic growth of the K-entropy: S_K(t) ∼ log(2bt). This logarithmic growth can be confirmed by direct numerical evaluation (cf. figure 10).

Figure 10. Growth of K-entropy for an initial square pulse of width K_0 = 5. Notice the asymptotic logarithmic growth and the initial finite-size effects due to the details of the square pulse.

Figure 11. Sketch of the K-entropy dynamics in a fast scrambler with S degrees of freedom and Lyapunov exponent λ. A linear growth proportional to λt during scrambling is followed by a logarithmic increase in the post-scrambling era, according to a scaling log(2Sλt), and a final saturation beyond times of order t_K.
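The logarithmic growth can be reproduced in a few lines. The sketch below assumes the explicit k-pulse reconstruction R^{(k)}_n = J_{n−k} + (−1)^k J_{n+k+2}; the cutoff nmax stands in for the finite Krylov dimension, and K_0 = 5 matches the square pulse of figure 10:

```python
# Sketch: K-entropy S_K = -sum_n p_n log p_n for a square pulse of width
# K0 built from k-pulses R^(k)_n(2bt) = J_{n-k} + (-1)^k J_{n+k+2},
# with p_n normalized over the physical region n >= 0.
# Expect S_K to grow roughly like log(2bt) at long times.
import numpy as np
from scipy.special import jv

def k_pulse(k, n, z):
    return jv(n - k, z) + (-1) ** k * jv(n + k + 2, z)

def k_entropy(bt, K0=5, nmax=2000):
    n = np.arange(0, nmax)
    # square pulse: equal-weight superposition, alpha_k = 1/sqrt(K0)
    phi = sum(k_pulse(k, n, 2.0 * bt) for k in range(K0)) / np.sqrt(K0)
    p = np.abs(phi) ** 2
    p /= p.sum()
    p = p[p > 1e-300]  # drop exact zeros before taking the log
    return -np.sum(p * np.log(p))

for bt in (5.0, 50.0, 500.0):
    print(bt, k_entropy(bt))  # entropy increases as the tail spreads
```

The successive values grow by roughly log 10 per decade of bt, consistent with S_K(t) ∼ log(2bt).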
The conclusion is that randomization does occur in order of magnitude. There is a persistent ballistic component which makes up an O(1) fraction of the normalization, but the K-entropy at long times is dominated by the oscillating tail. Eventually, after times of order t_K, the K-entropy becomes of order log(n_max), thereby growing from O(log S) at t_* to O(S) by the exponential time scale t_K. A qualitative picture of the K-entropy dynamics in a fast scrambler is presented in figure 11.

Conclusions
In this paper we have explored the long-time behavior of K-complexity, an algebraic notion of operator complexity which relies on an effective dimensionality of a linear subspace containing the operator's time evolution. This concept was introduced in [1] as a useful characterization of chaotic behavior, in the sense of being governed by the same Lyapunov exponent as OTOC correlators.
Using the Eigenstate Thermalization Hypothesis as a starting point, we have argued that K-complexity grows linearly at late times, after the system has been scrambled, with a rate which is extensive in the size S of the system. Eventually, the K-complexity must saturate at a maximum bounded by e^{O(S)}, on a time scale also of order e^{O(S)}, and it stays approximately constant thereafter, until Poincaré recurrences begin to show up at times scaling as a double exponential of the entropy, exp(e^{O(S)}).
We furthermore notice that, during the exponentially long post-scrambling period when K-complexity grows linearly, the operator is randomized in order of magnitude. This can be characterized by the logarithmic growth of the K-entropy, which measures the degree of uniformity of the amplitudes ϕ_n(t). More precisely, we find numerical evidence for a growth law of the form S_K ∼ log(2bt) for bt ≫ 1, where b denotes the asymptotic value of the Lanczos sequence. At complexity saturation, the K-entropy also saturates at a value of order log(n_max) = O(S). It would be interesting to study the consequences of this randomization on the long-time behavior of correlation functions, along the lines of [9, 22-25].
The outstanding open question regarding these results is the holographic representation of K-complexity. During the scrambling period, there is an approximate correspondence between K-complexity and operator size, and there are proposals for concrete relations between operator size and bulk quantities [26-28]. In these examples, the holographic map relates the process of particle free-fall towards a horizon to a scrambling process in the dual theory. The natural expectation is that a period of linear growth of complexity should be associated with properties of the motion in the interior of the black hole.