On The Evolution Of Operator Complexity Beyond Scrambling

We study operator complexity on various time scales with emphasis on those much larger than the scrambling period. We use, for systems with a large but finite number of degrees of freedom, the notion of K-complexity employed in arXiv:1812.08657 for infinite systems. We present evidence that K-complexity of ETH operators has indeed the character associated with the bulk time evolution of extremal volumes and actions. Namely, after a period of exponential growth during the scrambling period the K-complexity increases only linearly with time for exponentially long times in terms of the entropy, and it eventually saturates at a constant value also exponential in terms of the entropy. This constant value depends on the Hamiltonian and the operator but not on any extrinsic tolerance parameter. Thus K-complexity deserves to be an entry in the AdS/CFT dictionary. Invoking a concept of K-entropy and some numerical examples we also discuss the extent to which the long period of linear complexity growth entails an efficient randomization of operators.


Introduction
Quantum complexity has been proposed as a new entry in the holographic dictionary (see for instance [2,3] and references therein). The underlying idea is to characterize the entanglement of a state in an 'optimal' way, with respect to some simple building blocks, such as gates in a quantum circuit model or more generally a tensor network. Complexity can then be defined as the size of the smallest circuit or tensor network which approximates the state, given some prescribed set of gates or fundamental tensors (see for instance [4]).
The quantum circuit model leads naturally to a notion of complexity which is extensive in the number of degrees of freedom, S, and furthermore grows linearly in time, for a period much longer than any ordinary thermalization time scale:

C(t) ∼ S t/β ,   (1)

with β an effective time step for state-vector orthogonality, i.e. ⟨Ψ_t | Ψ_{t+β}⟩ ≈ 0. This linear growth is to be matched to the linear growth of spacelike volumes inside a black hole of entropy S and inverse Hawking temperature β [5,6]. An important question is whether the so-defined complexity has an upper bound. In quantum models with a finite set of qubits, a computation is regarded as finished when the target state is approximated within some a priori tolerance ε, with respect to a standard metric on the space of states. Complexities defined with such an implicit dependence on the tolerance parameter are bounded by the number of ε-cells in the space of states, which scales exponentially with the number of qubits:

C_max ∼ e^{c S} log(1/ε) ,   (2)

where c is a numerical constant of O(1), and we neglect various polynomial corrections to this expression [7]. For any linearly rising complexity, the bound (2) is then attained over time scales exponential in the entropy S, similar to the Heisenberg time scale t_H ∼ β e^S, which controls the randomization of a quantum state under time evolution, except for the occurrence of the factor log(1/ε). It is not clear how to interpret the complexity bound on the gravitational side, since the tolerance parameter lacks a concrete physical interpretation. The extremal volume through a wormhole grows with no limit in an eternal black hole geometry, only disturbed by nonperturbative effects such as tunneling transitions. If these fluctuations affect the complexity in a similar manner as they affect correlation functions, the relevant time scale for complexity saturation should be the Heisenberg time t_H, which has no dependence on the tolerance parameter [8,9].
The dependence on ε finds its origin in the notion of a metric over the space of states. It would be interesting to have a definition of complexity which does not rely on some effective volume, but rather depends on an effective dimensionality. A suggestion in this direction is to switch from the Schrödinger picture to the Heisenberg picture and characterize complexity in terms of the size of an operator with respect to some local basis of the operator algebra. In a system of qubits, we can think of the Pauli operators at each qubit as generators of the operator algebra, and the size of a given operator may be defined as the average number of non-identity factors in this representation.
Operator growth and the relation to thermalization has been discussed as a criterion for quantum chaos in model systems for fast scramblers, such as the SYK model [10], [11,12]. The authors of ref. [1] have proposed a variant of these notions which uses an operator basis adapted to the time evolution of operators, rather than an a priori basis of the operator algebra. Starting from some initial operator O_0, one can envision the Heisenberg flow on the space of operators, O_t = e^{itH} O_0 e^{−itH}, whose Taylor expansion with respect to the time variable is generated by the set of nested commutators of O_0 with the Hamiltonian. Using these nested commutators as linear generators of the operator algebra one can describe the Heisenberg flow as gradually accessing a growing subspace of the operator space. K-complexity is defined in [1] as an effective dimension of this growing subspace.
Using the SYK model as a benchmark model for a fast scrambler, it has been shown in [1] that K-complexity is similar to other definitions of operator size, when working in the thermodynamic limit. In this paper, we move away from the thermodynamic limit and study the regime of very long times, much larger than the scrambling time, when operator size ceases to be a useful characterization of complexity. We show that K-complexity continues to grow at a linear rate in this post-scrambling period until it saturates just below the total dimensionality of the operator space, of order e^{O(S)}. Because of the linear rate, this saturation occurs on time scales exponential in the number of degrees of freedom, roughly similar to the Heisenberg time. Both the complexity upper bound and the saturation time scale stand without any reference to a coarse-graining parameter.
This paper is organized as follows. In section 2 we review the basic concepts and notational conventions of K-complexity. In section 3 we discuss the K-complexity of scrambling systems with a finite number of degrees of freedom, with special emphasis on the post-scrambling evolution. We establish the linear growth of K-complexity and the saturation time scale. In section 4 we define the notion of K-entropy, as a measure of the degree of randomization of the Heisenberg flow, and argue on the basis of some numerical estimates that such randomization is expected to occur in order of magnitude. Section 5 presents our conclusions and a number of open questions suggested by this work.

Review of K-complexity
We begin with a review of K-complexity and a description of the notational conventions to be used in this paper. The main reference for this section is [1].
Given the Hamiltonian of a lattice system, H, and a particular initial operator O_0, one defines a linearly independent set of operators O_n in terms of the n-times nested commutators [H, [H, ⋯ , [H, O_0] ⋯ ]], conveniently improved into an orthonormal set, known as the Krylov basis. This choice is motivated by the time evolution, since the nested commutators determine the time Taylor expansion of the Heisenberg operator O_t = e^{itH} O_0 e^{−itH}. The orthonormality can be defined with respect to any non-degenerate inner product in the operator algebra, such as the trace inner product

(A|B) = Tr(A† B) / Tr(1) .   (3)

The construction of the Krylov basis runs iteratively as follows. From the initial operator, normalized so that (O_0|O_0) = 1, one obtains

O_1 = b_1^{−1} [H, O_0] ,   b_1 = ([H, O_0] | [H, O_0])^{1/2} .   (4)

From here onwards, given O_{n−1} and O_{n−2}, we define

A_n = [H, O_{n−1}] − b_{n−1} O_{n−2}   (5)

and O_n = b_n^{−1} A_n, with b_n = (A_n|A_n)^{1/2}. The adjoint action of the Hamiltonian is almost diagonal on the orthonormal set O_n:

[H, O_n] = b_{n+1} O_{n+1} + b_n O_{n−1} ,   (6)

where the non-negative matrix elements b_n are called Lanczos coefficients. It is useful to exploit the notation (3) to introduce a vector space of operators, |O), with dimension of order N², where N is the dimension of the Hilbert space. The adjoint action of the Hamiltonian in (6) introduces a linear operator in this space known as the Liouvillian, defined as

L |O) = | [H, O] ) .   (7)

If we start with a 'small' operator, containing few local degrees of freedom, each nested commutator with the Hamiltonian tends to increase its size. For a k-local Hamiltonian, containing products of less than k local degrees of freedom, we expect the size of O_n to be of order nk for large values of n. In [1], the authors build upon this remark and define complexity in terms of the 'average' number of commutators which are required to construct a given operator O, starting with an initial operator O_0. More precisely, if O is in the vector space generated by the set of O_n operators seeded by O_0 and H, we can write

O = Σ_n ϕ_n O_n   (8)

for some complex coefficients ϕ_n. A measure of the typical number of H-commutators required to build O is then

C_K(O) = Σ_n n |ϕ_n|² ,   (9)

referred to as K-complexity, for its implicit dependence on the construction of the Krylov basis.
It was shown in [1] that this definition is close to operator size for the SYK model in the thermodynamic limit.
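As a concrete illustration, the iteration above is just the Lanczos algorithm applied to the Liouvillian. A minimal numerical sketch (in Python, with an arbitrary random Hamiltonian and initial operator chosen purely for illustration, and without the re-orthogonalization refinements one would use in earnest):

```python
import numpy as np

def lanczos_liouvillian(H, O0, n_steps, tol=1e-10):
    """Lanczos coefficients b_n of L = [H, .] acting on O0,
    with the trace inner product (A|B) = Tr(A^dag B)/Tr(1)."""
    dim = H.shape[0]
    inner = lambda A, B: np.trace(A.conj().T @ B).real / dim
    norm = lambda A: np.sqrt(inner(A, A))
    O_prev = np.zeros_like(O0)
    O_curr = O0 / norm(O0)
    bs = []
    for _ in range(n_steps):
        A = H @ O_curr - O_curr @ H             # A_n = [H, O_{n-1}] ...
        A -= (bs[-1] if bs else 0.0) * O_prev   # ... - b_{n-1} O_{n-2}
        b = norm(A)
        if b < tol:                             # Krylov space terminates
            break
        bs.append(b)
        O_prev, O_curr = O_curr, A / b
    return np.array(bs)

# toy example: random Hermitian H and random Hermitian initial operator
rng = np.random.default_rng(0)
N = 20
H = rng.normal(size=(N, N)); H = (H + H.T) / np.sqrt(2 * N)
O0 = rng.normal(size=(N, N)); O0 = (O0 + O0.T) / 2
b = lanczos_liouvillian(H, O0, 30)
print(b[:5])
```

For larger systems the bare three-term recursion loses orthogonality numerically, and a full re-orthogonalization against all previous Krylov elements is advisable.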
Applying the general definition (9) to the time-evolved operator O_t = e^{itH} O_0 e^{−itH} with initial condition O_0, we are led to a natural notion of time-dependent K-complexity:

C_K(t) = Σ_n n |ϕ_n(t)|² ,   (10)

where the amplitudes ϕ_n(t) = (O_n | O_t) satisfy the discrete flow equation

∂_t ϕ_n(t) = b_n ϕ_{n−1}(t) − b_{n+1} ϕ_{n+1}(t) ,   (11)

with boundary condition ϕ_{−1}(t) = 0. A given pattern of growth of Lanczos coefficients as a function of n translates into a characteristic growth of complexity. For instance, it is shown in [1] that a system with an asymptotic large-n law

b_n ≈ α n   (12)

accumulates K-complexity at an exponential rate:

C_K(t) ∼ e^{2αt} .   (13)

A benchmark example of this behavior is the SYK model, for which 2α = λ is the Lyapunov exponent revealed in OTOC correlations. It is then natural to propose (12) as a criterion for local quantum chaos, since explicit evaluation of Lanczos coefficients in various integrable systems yields softer asymptotic laws of the form

b_n ∼ α n^δ ,  0 < δ < 1 .   (14)
In these cases, K-complexity has a milder, power-like growth: C_K(t) ∼ (αt)^{1/(1−δ)}.

It is useful to find relations between the patterns of growth of Lanczos coefficients and more familiar objects, such as correlation functions. Let us consider the time autocorrelation

G(t) = (O_0 | O_t) ,   (15)

which coincides with the standard Wightman correlation function at infinite temperature. In the thermodynamic limit, N → ∞, the Fourier transform G(ω) develops a non-trivial analytic structure. In particular, the singularities closest to the real axis are located at ±iπ/(2α), where α is the slope coefficient in (12), and G(ω) decays exponentially along the real axis with the law (cf. [13])

G(ω) ∼ exp( −π|ω|/2α ) .   (16)

More generally, a growth law of the form (14) translates into a decay G(ω) ∼ exp(−|ω/ω_0|^{1/δ}), i.e. the sharper the decay of the spectral function, the milder the growth of the Lanczos coefficients. In the case that the b_n have a finite asymptotic limit lim_{n→∞} b_n = b_∞, it turns out that the spectral function has compact support in the finite interval

|ω| ≤ 2 b_∞ .   (17)

There is a direct relation between the Lanczos coefficients and the moments of the Liouvillian,

μ_{2n} = (O_0 | L^{2n} | O_0) ,   (18)

which in turn control the Taylor series of the autocorrelation function (only even moments contribute for Hermitian operators):

G(t) = Σ_{n≥0} μ_{2n} (it)^{2n}/(2n)! .   (19)

The relation between b_n and μ_{2n} involves intricate combinatorics, but there is a lower bound

μ_{2n} ≥ (b_1 b_2 ⋯ b_n)² .   (20)

Furthermore, if the sequence of b_n is non-decreasing, there is also an upper bound

μ_{2n} ≤ C_n (b_1 b_2 ⋯ b_n)² ,   (21)

where C_n = (2n)!/(n!(n+1)!) is the n-th Catalan number. In particular, for a b_n sequence which is non-decreasing and asymptotic to b_∞, one has

μ_{2n} ≈ (2 b_∞)^{2n} e^{o(n)} ,   (22)

where we have used the large-n asymptotic form of the Catalan numbers, C_n ≈ 4^n n^{−3/2}. The notation o(n) in the exponent stands for any terms with large-n growth slower than linear, such as fractional powers or logarithms. If the b_n sequence is not strictly increasing, and yet b_∞ exists, then there is a non-decreasing sequence which approximates b_n asymptotically as n → ∞. Hence, we expect the estimate (22) to be qualitatively good provided the Lanczos sequence has a finite limit b_∞.
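As a quick sanity check of the Dyck-path combinatorics behind these bounds, the moments can be computed directly as (L^{2n})_{00} of the tridiagonal matrix built from a given Lanczos sequence. For a constant sequence b_n = b_∞ every Dyck path carries the same weight b_∞^{2n}, so the Catalan upper bound is saturated exactly. A small Python sketch (with the arbitrary normalization b_∞ = 1):

```python
import numpy as np
from math import comb

def moments_from_lanczos(bs, n_max):
    """mu_{2n} = (L^{2n})_{00} for the tridiagonal Liouvillian built from b_n."""
    K = len(bs) + 1
    L = np.zeros((K, K))
    for i, bn in enumerate(bs):
        L[i, i + 1] = L[i + 1, i] = bn
    M = np.eye(K)
    mus = []
    for _ in range(n_max):
        M = M @ L @ L                 # accumulate L^{2n}
        mus.append(M[0, 0])
    return np.array(mus)

catalan = lambda m: comb(2 * m, m) // (m + 1)
bs = np.ones(40)                      # constant plateau b_n = b_inf = 1
mu = moments_from_lanczos(bs, 6)
print(mu)                             # equals C_n b_inf^{2n}: every Dyck path counts once
print([catalan(m) for m in range(1, 7)])
```

With a strictly increasing sequence (e.g. b_n = n) the same routine interpolates between the two bounds, the lower bound being the single maximal-height path.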

K-complexity of scramblers: fast and finite
In systems with a finite-dimensional Hilbert space, K-complexity is necessarily bounded by the dimensionality of the operator space, C_K ≤ N². Saturation of this bound is not guaranteed, as the Krylov basis may terminate its iterative construction before it spans the whole operator space. Still, for sufficiently generic choices of initial operator O_0 and Hamiltonian H, we expect that n_max does not lie far below N². To see this, consider the basis of operators

L_{(ab)} = |E_a⟩⟨E_b| ,   (23)

where |E_a⟩ denotes the exact energy eigenstate with eigenvalue E_a. The N² operators L_{(ab)} define a basis of the operator space which is orthonormal with respect to the inner product (3). The components of O_t in this basis are proportional to its matrix elements in the exact energy basis,

(L_{(ab)} | O_t) ∝ ⟨E_a| O_t |E_b⟩ ,

which at the same time can be written as

⟨E_a| O_t |E_b⟩ = e^{it(E_a − E_b)} ⟨E_a| O_0 |E_b⟩ .   (24)

For a sufficiently generic initial operator, there are O(N²) non-vanishing matrix elements, which remain non-vanishing at all times. Thus, the 'supervector' |O_t) has O(N²) non-vanishing projections (L_{(ab)} | O_t). Although the Krylov basis is rotated with respect to (23), it is natural to expect that the number of non-vanishing K-components (O_n | O_t) will also be of O(N²). Furthermore, for generic values of the energies E_a, the N independent phases e^{−itE_a} describe an ergodic motion on a real N-dimensional torus, which is embedded in the operator space by the equation (24). Hence, |O_t) lies on an N-dimensional submanifold, and we can conclude that the maximal K-complexity attained at late times is still of order e^{O(S)} for systems with S degrees of freedom, and generic choices of H and O_0.

The computation of K-complexities requires the evaluation of (10) once we know the amplitudes ϕ_n(t). These in turn are obtained by solving (11). Therefore, it is the structure of the sequence b_n that determines the relevant dynamical regimes in the growth of K-complexity. In a typical fast scrambler, such as the SYK model, small operators grow in size at an exponential rate exp(λt), where λ ≈ 2α is the Lyapunov exponent.
In other words, for small operators, operator size is roughly equivalent to K-complexity.
We can regard the operator as 'scrambled' when it has spread, in order of magnitude, over the whole system. For a fast scrambler with S local degrees of freedom, this happens at the familiar time scale t_* ∼ λ^{−1} log S [14]. The value of the K-complexity at the scrambling time is of order

C_K(t_*) ∼ e^{λ t_*} ∼ S .   (25)

For systems with O(S) lattice sites and a finite-dimensional Hilbert space one has

N ∼ e^{O(S)} .   (26)
Since S ≪ e^{O(S)}, it follows that K-complexity has an enormous scope for growth beyond the 'scrambling value'. Should the complexity continue to grow exponentially fast for t > t_*, it would saturate in a time of order S. In the next section we use the ETH hypothesis to argue that this estimate is far from correct.

The ETH estimate
For systems which scramble less efficiently than a 'fast' scrambler, one expects the scrambling time to scale like a power of S, rather than a logarithm, but the intuitive relation between K-complexity and operator size suggests that the complexity at the scrambling time continues to satisfy (25). Hence, the wide gap between the complexity at scrambling, C_K(t_*) ∼ S, and the maximal complexity, of order e^{O(S)}, should be a general feature of any system with a finite number of degrees of freedom.
The rate of K-complexity growth after scrambling depends on the form of the b_n coefficients for n ≫ S. These can be constrained from the behavior of the moments

μ_{2n} = (O_0 | L^{2n} | O_0) .   (27)

From the spectral decomposition of the correlation function, we obtain the expression

μ_{2n} = (1/N) Σ_{a,b} |O_{ab}|² (E_a − E_b)^{2n} ,   (28)

where O_{ab} = ⟨E_a| O_0 |E_b⟩ denote the matrix elements of the initial operator in the exact energy basis. These matrix elements can be used to characterize a degree of quantum chaos. For operators whose expectation values and correlations approach thermal values at long times, it is expected that the O_{ab} satisfy the Eigenstate Thermalization Hypothesis (ETH) [15][16][17], which essentially says that the eigenbases of O_0 and H are uncorrelated, related by a random unitary on the N-dimensional Hilbert space. From this assumption it follows that off-diagonal matrix elements contributing to (28) have the form

O_{ab} = e^{−S(Ē)/2} F(ω) R_{ab} ,  Ē = (E_a + E_b)/2 ,  ω = E_a − E_b ,   (29)

where R_{ab} is a random matrix whose entries have mean zero and unit variance. The form factor F carries the information about the normalization of the operator and is assumed to depend smoothly on the energies of the states. Plugging this ansatz into the spectral expression (28) we thus find

μ_{2n} ∼ ∫ dω |F(ω)|² ω^{2n} ,   (30)

where smooth density-of-states factors have been absorbed into the form factor. For n ≫ S the energy sum tends to be dominated by the largest possible energy differences. For a system with S degrees of freedom and extensive energy, the maximum energy difference is of order ΛS, where Λ is the UV cutoff. The sum over energy eigenvalues in (30) appears to be controlled by the form factor's bandwidth Γ, which is an intensive energy scale, not scaling with S, and set by the local frequency cutoff Λ. For instance, assuming an exponential profile |F(ω)|² ∼ e^{−|ω|/Γ}, we would estimate the sum over energy differences as

μ_{2n} ∼ ∫_0^{ΛS} dω  ω^{2n} e^{−ω/Γ} .   (31)

However, a saddle-point analysis shows that, for n ≫ S, this integral would get its main contribution from ω_c ∼ 2nΓ ≫ ΛS. Precisely for n ≫ S, the saddle point sits outside the actual integration range, and the integral must be approximated ignoring the form factor.
Ultimately, this is a consequence of the smoothness of the form factor as a function of ω, and should hold generally for any operator satisfying the ETH ansatz. In particular, quasinormal behavior in time correlations is associated to Lorentzian profiles of the form F(ω) ∼ (ω² + Γ²)^{−1}, with an even milder damping of large frequencies, so that the previous argument applies as well.
We conclude that, for n ≫ S, the moment sum is controlled by the average of (E_a − E_b)^{2n} over the energy band, i.e.

μ_{2n} ∼ (ΛS)^{2n} e^{o(n)} .   (32)

Going back to (22), this results in Lanczos coefficients approaching an asymptotic 'plateau' b_n ≈ b_∞ ∼ ΛS for n ≫ S, up to a factor of order one. For a fast scrambler, b_{n_*} ∼ α n_* ∼ α S. Therefore, if all couplings are of order unity, the Lyapunov exponent must itself be of the order of the local characteristic frequency, λ ∼ 2α ∼ Λ, within factors of order unity, and we are led to a very simple picture for the Lanczos sequence: linear growth with slope λ for 0 < n < n_*, morphing into an approximate plateau extending all the way to n_max ∼ e^{O(S)}. The qualitative description of a fast scrambler, as determined by a Lyapunov exponent λ ∼ Λ and S degrees of freedom, can be adapted to more general situations where only a subset of the degrees of freedom are 'activated' in the scrambling process. This occurs when considering a system at a finite temperature below the UV cutoff, T < Λ. In this case, on states of entropy S, the system can be described as having about one degree of freedom per thermal cell of size β = T^{−1} participating in the scrambling process, with the rest of the degrees of freedom effectively 'frozen' in their ground state, and thus not contributing to the entropy. On such states of entropy S and effective temperature T, the UV cutoff is effectively replaced by T, setting the scale of the Lyapunov exponent, λ ∼ T.
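The plateau estimate can be probed directly from the moment ratios μ_{2n+2}/μ_{2n} → (2b_∞)², which must approach the squared maximal energy difference as n grows. A toy Python check, with a random Hamiltonian standing in for a chaotic system and an ETH-style ansatz for |O_{ab}|² (the ensemble, bandwidth Γ and seed are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 60
H = rng.normal(size=(N, N)); H = (H + H.T) / np.sqrt(2 * N)
E = np.linalg.eigvalsh(H)

# ETH-style weights |O_ab|^2: smooth form factor of intensive width Gamma
# times random O(1) fluctuations, over all energy differences w = E_a - E_b
w = E[:, None] - E[None, :]
Gamma = 0.3
weights = np.exp(-np.abs(w) / Gamma) * rng.normal(size=(N, N)) ** 2

def mu(n):
    """Normalized moment mu_{2n} = sum |O_ab|^2 w^{2n} / sum |O_ab|^2."""
    return np.sum(weights * w ** (2 * n)) / np.sum(weights)

w_max = np.abs(w).max()
for n in [1, 5, 20, 80]:
    ratio = np.sqrt(mu(n + 1) / mu(n))   # tends to 2 b_inf = max |E_a - E_b|
    print(n, ratio / w_max)
```

Despite the strong exponential suppression e^{−|ω|/Γ}, the ratio climbs towards the maximal energy difference: the smooth form factor cannot prevent the large-n moments from being dominated by the edges of the energy band.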
The qualitative band-structure of b n coefficients for a fast scrambler is shown in Fig. 1. More generally, for systems with less efficient scrambling, the initial linear growth might be substituted by (14), whereas the 'post-scrambling' plateau for n S is expected to be rather general. It would be very interesting to test the generality of this 'Lanczos plateau' in numerical simulations of various models, such as SYK.

Dynamics of K-complexity
The evolution of K-complexity for a fast scrambler with linear growth (12) was studied in [1]. An analytic solution for the amplitudes ϕ_n(t) exists for a formal choice of Lanczos coefficients given by b_n = α √(n(n − 1 + η)). To simplify matters, we look at the exactly linear case, corresponding to η = 1, for which the solution reads

ϕ_n(t) = tanh^n(αt) sech(αt) .   (33)

An initially sharp peak at n = 0 moves to higher n exponentially fast: n_peak(t) ∼ e^{2αt}. The overall height of the function at large t is of order e^{−αt}. Hence, the scrambling is very efficient at accessing 'large' operators but at the same time it is also very efficient in randomizing the operator in the Krylov basis, leading to an essentially flat ϕ_n distribution with support on [0, n_peak(t)] and height of order 1/√(n_peak(t)).
The growth of complexity is largely controlled by the ballistic motion in n-space of the solution's 'wave front'. On the other hand, operator randomization depends on whether a significant tail is left behind the wave front. For a discussion of the ballistic aspect, as well as the detailed matching between the pre-scrambling and post-scrambling regimes, it is useful to start with a continuum approximation.
Taking a coarse-grained look at the discrete function ϕ_n(t), let us introduce a lattice cutoff ε and a coordinate x = ε n, and define the interpolating functions ϕ(x, t) = ϕ_n(t) and v(x) = 2ε b(x), with b(εn) = b_n. A continuum form of the recursion relation (11) can then be written as

2ε ∂_t ϕ(x, t) = v(x) ϕ(x − ε, t) − v(x + ε) ϕ(x + ε, t) .   (34)

Expanding now in powers of ε, we find to leading order a chiral wave equation with position-dependent velocity v(x) and mass ∂_x v(x)/2:

∂_t ϕ + v(x) ∂_x ϕ + (1/2)(∂_x v) ϕ = O(ε) .   (35)

We can solve it by introducing a new coordinate y by the relation v(x) ∂_x = ∂_y, i.e. dy = dx/v(x), and a rescaled amplitude

ψ(y, t) = √(v(x(y))) ϕ(x(y), t) ,   (36)

which simplifies the chiral wave equation:

(∂_t + ∂_y) ψ(y, t) = 0 + ⋯ ,   (37)

the dots standing for the neglected terms of higher order in the ε expansion. The general solution of this equation is given by

ψ(y, t) = ψ_i(y − t) ,   (38)

where ψ_i(y) = ψ(y, 0) is the initial condition. The rescaling (36) is also useful from the point of view of the intuition about probability distributions. From the discrete normalization condition Σ_{n≥0} |ϕ_n|² = 1 we can derive the continuum analogs

(1/ε) ∫ dx |ϕ(x, t)|² = (1/ε) ∫ dy |ψ(y, t)|² = 1 ,   (39)

so that ψ(y) is a naive probability amplitude in y space, just as ϕ(x) is a naive probability amplitude in x space. The physics of (38) is that of a simple ballistic motion of the initial ψ-distribution towards positive values of y at a constant velocity. The problem is solved once we know the change of variables between the x-frame and the y-frame. The K-complexity as a function of time is given by

C_K(t) = (1/ε²) ∫ dx  x |ϕ(x, t)|² = (1/ε²) ∫ dy  x(y) |ψ(y, t)|² .   (40)

Using the general solution (38) in the last expression and changing variables y → y + t we find

C_K(t) = (1/ε²) ∫ dy  x(y + t) |ψ_i(y)|² .   (41)

There are various interesting cases to consider. A fast scrambler with linear Lanczos growth has v(x) = λ x, where λ = 2α is the Lyapunov exponent. The corresponding change of variables is

x(y) = ε e^{λ y} ,   (42)

where we have chosen the additive normalization in y for convenience. Notice that, in this case, the y variable runs over the whole real line, whereas the x variable is restricted to be positive.
The scrambling solution for the ϕ amplitude then reads

ϕ(x, t) = ψ_i(y(x) − t)/√(λ x) .   (43)

An initial peak at y = 0 for ψ_i(y) will move ballistically as y_p(t) = t, corresponding to an x-frame trajectory x_p(t) = ε e^{λ t}, which also controls the exponential growth of K-complexity.
If the velocity has a logarithmic correction, as proposed in [1] for (1+1)-dimensional systems, the corresponding frame map is

x = ε e^{√(2λ y)} ,   (44)

and the distribution peak and K-complexity grow at a rate of order exp(√(2λt)). For systems with a less efficient scrambling, governed by (14) with δ < 1, the drift velocity is given by

v(x) = 2α ε^{1−δ} x^δ ,   (45)

leading to a change of variables

y(x) = x^{1−δ} / ( 2α (1−δ) ε^{1−δ} )   (46)

and a power-like complexity growth proportional to (α t)^{1/(1−δ)}. It is interesting to compare estimates of scrambling times based on the growth of K-complexity with other heuristic models of scrambling. If we define the scrambling time by the requirement that complexity reaches the size of the system, C_K(t_*) ∼ S, then we have

t_* ∼ α^{−1} S^{1−δ} .   (47)

On the other hand, in d spatial dimensions, ballistic scrambling takes a time of order t_* ∼ L for a system of size L. If we write S ∼ (α L)^d for the effective number of degrees of freedom (entropy) and α^{−1} for the effective dynamical time step, we have t_* ∼ α^{−1} S^{1/d} for ballistic scrambling. If we model the scrambling by a diffusion process, characterized by a random walk of step α^{−1}, we obtain instead t_* ∼ α^{−1} S^{2/d}. Then, we find the interesting correspondences

δ = 1 − 1/d  (ballistic scrambling) ,   δ = 1 − 2/d  (diffusive scrambling) .   (48)

The post-scrambling regime

In the post-scrambling regime the x and y frames are simply proportional:

x(y) ≈ v_* y ,   (49)

and the amplitude ϕ(x, t) just moves ballistically towards large x with velocity v_*, the K-complexity also growing linearly. To summarize, in the simplest case of an SYK-like fast scrambler, with Lyapunov exponent λ and S extensive degrees of freedom, we have v(x) ≈ λ x in the scrambling band, 0 < x < ε S, and v(x) = v_* ∼ λ ε S in the post-scrambling band. In the scrambling period, x(y) ≈ ε e^{λ y}, resulting in the expected exponential growth. If the initial operator is 'small', the initial complexity is also small, C_K(0) = O(1), and C_K(t_*) = O(S). On the other hand, in the post-scrambling regime x(y) ≈ y v, with constant v = v_* = ε λ n_* ∼ ε λ S. At long times,

C_K(t) ≈ (1/ε²) ∫ dy  v (y + t) |ψ_i(y)|² ≈ (v/ε) t + constant ∼ λ S t ,   (50)

using the normalization condition (39).
We conclude that the complexity grows exponentially fast during scrambling and only linearly after scrambling, with a rate of order λS. The time scale for the amplitude to reach n_max is of order

t_K ∼ n_max/(λ S) ∼ e^{O(S)}/λS .   (51)

At times larger than t_K, the function ψ(y, t) remains stuck near the endpoint, because the drift towards large values of x will prevent the distribution from bouncing back. This implies that the complexity eventually levels off and remains constant. Over extremely long time scales, however, we know that the solution of the discrete equation (11) will necessarily undergo Poincaré recurrences. The time scale for this to happen is of order

t_recurrence ∼ exp( e^{O(S)} log(1/δ_r) ) ,

where δ_r determines the precision with which we demand recurrence. We summarize the qualitative behavior of the K-complexity for a fast scrambler in Fig. 2.
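The exponential-then-linear profile summarized here can be reproduced by integrating the discrete equation (11) with a ramp-plus-plateau Lanczos sequence. A toy Python integration (the parameters α = 1 and n_* = 20, standing in for S, are arbitrary):

```python
import numpy as np
from scipy.integrate import solve_ivp

alpha, n_star, K = 1.0, 20, 2000           # n_* plays the role of S
n = np.arange(K)
b = alpha * np.minimum(n, n_star)          # linear ramp, then plateau b_* = alpha n_*

def flow(t, phi):
    """d phi_n / dt = b_n phi_{n-1} - b_{n+1} phi_{n+1}."""
    d = np.zeros_like(phi)
    d[1:] += b[1:] * phi[:-1]
    d[:-1] -= b[1:] * phi[1:]
    return d

phi0 = np.zeros(K); phi0[0] = 1.0          # start with a 'small' operator
ts = np.linspace(0.0, 40.0, 81)
sol = solve_ivp(flow, (0.0, ts[-1]), phi0, t_eval=ts, rtol=1e-8, atol=1e-10)
C_K = np.sum(n[:, None] * sol.y ** 2, axis=0)

# late-time growth: linear, with a rate of order the plateau velocity 2 b_*
rate = (C_K[-1] - C_K[-21]) / (ts[-1] - ts[-21])
print(rate, 2 * alpha * n_star)
```

The early points show C_K growing like e^{2αt} until C_K ∼ n_*, after which the growth is linear at a rate of order the plateau value; the final saturation would only appear once the wavefront reaches the end of a finite Krylov chain, which the chain length K = 2000 postpones beyond the simulated window.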

Operator randomization and K-entropy
Having established the existence of a very long post-scrambling era of linear K-complexity growth, we now begin a more detailed study of this dynamical regime. In particular, we discuss the degree of randomization of the operator O t , when expanded in the Krylov basis. For this purpose, we shall introduce the notion of K-entropy. In order to motivate its definition, we momentarily go back to the scrambling period.
The exact solution (33) describes two a priori independent phenomena: there is an exponentially fast growth of K-complexity and at the same time there is an efficient randomization of the operator over the time-dependent span of the Krylov operator set. This is intuitively clear from the qualitative form of (33), which eventually looks like a uniform distribution of size n_peak and amplitude 1/√(n_peak). A more formal characterization of this uniformity is given by the 'operator entropy' or K-entropy, which we define by

S_K = − Σ_n |ϕ_n|² log |ϕ_n|² .   (52)

Figure 2: Evolution of K-complexity for a fast scrambler of size S, featuring an exponential law in the pre-scrambling era t < t_* ∼ λ^{−1} log S, followed by a linear law in the post-scrambling era, up to t_K ∼ e^{O(S)}/λS, when the complexity finally saturates.
If the ϕ_n amplitude is very peaked at a particular value of n, large or small, the K-entropy is small. On the other hand, if the distribution is completely uniform over the interval [0, n_M], then S_K = log(n_M). Applying the definition (52) to (33) we can determine the growth of K-entropy to be expected from a typical fast scrambler. The result of a numerical evaluation is a linear growth with slope close to 2α = λ. Hence, the scrambling dynamics increases K-complexity at an exponential rate, and also increases K-entropy at a linear rate. It turns out that the linear growth of K-entropy for a fast scrambler is captured by the continuum solution of the leading equation (35). The continuum versions of K-entropy in both the x-frame and the y-frame are

S_K = −(1/ε) ∫ dx |ϕ|² log |ϕ|² = −(1/ε) ∫ dy |ψ|² log( |ψ|²/v ) .   (53)

Extracting the velocity-dependent term in the y-frame expression, we have

S_K(t) = −(1/ε) ∫ dy |ψ|² log |ψ|² + (1/ε) ∫ dy |ψ(y, t)|² log v(y) .   (54)

In the leading continuum approximation, any y-frame solution has the form ψ(y, t) = ψ_i(y − t). Hence, the first term is time-independent, whereas the second term computes the average of log(v(y)) over the operator probability distribution. In periods where the complexity growth is accelerated, such as the scrambling period of a fast scrambler, there is entropy production. Inserting the leading continuous solution (43) of the scrambling regime into (53) one obtains

S_K(t) ≈ λ t + constant ,   (55)

which matches the numerical evaluation for the exact solution (33). This means that the simple chiral wave equation with a mass term (35) actually gives a very accurate description of the scrambling regime, not only accounting for the growth of K-complexity, but also capturing quantitatively the growth of K-entropy.
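The quoted slope is straightforward to confirm: evaluating the K-entropy of the exact solution (33) at two moderately large times gives dS_K/dt ≈ 2α. A minimal Python check (α = 1, arbitrary units):

```python
import numpy as np

alpha = 1.0
n = np.arange(0, 20000)

def S_K(t):
    """K-entropy of the exact scrambling solution phi_n = tanh^n sech."""
    phi2 = np.tanh(alpha * t) ** (2 * n) / np.cosh(alpha * t) ** 2
    phi2 = phi2[phi2 > 0]
    return -np.sum(phi2 * np.log(phi2))

slope = (S_K(4.0) - S_K(3.0)) / 1.0
print(slope / (2 * alpha))   # close to 1: linear growth with slope ~ 2 alpha = lambda
```

The same sum evaluated in closed form gives S_K(t) = −log(1 − x) − (x/(1−x)) log x with x = tanh²(αt), whose large-t behavior is 2αt plus a constant.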
In the post-scrambling period, where v(x) ≈ constant, the mass term in (35) is negligible and the amplitude propagates ballistically in both frames. Therefore, the leading-order term in the continuum approximation to the amplitude does not detect any significant growth of the K-entropy. We now turn to analyse what can be seen at some higher orders.

The continuum amplitude at post-scrambling
We have seen that, while operator randomization is well accounted for in the continuum approximation for the scrambling regime, it is completely missed at leading order in the post-scrambling regime. It is an important question to determine whether K-entropy can be produced at all during the enormously long post-scrambling era.
In this section we show that the next-to-leading approximation to the evolution equation (34) already begins to incorporate the randomization effect, but ultimately falls short of the goal. Carrying the short-distance expansion of (34) to higher orders one finds (35), with further corrections on the right-hand side. At order ε there is a term

− (ε/2) (∂_x v) ∂_x ϕ − (ε/4) (∂_x² v) ϕ .

This is a small effect in the scrambling regime and completely negligible in the post-scrambling regime, since it is proportional to derivatives of the velocity. At order ε² we find two terms:

− (ε²/4) (∂_x v) ∂_x² ϕ − (ε²/6) v ∂_x³ ϕ ,

plus terms with more derivatives of the velocity. The first term is a diffusion contribution with the wrong sign of the diffusion constant, and it only acts for a small time in the scrambling era. The second term is active throughout the long post-scrambling era and thus corresponds to the leading correction which is in principle capable of incorporating a broadening effect. Let us then consider the O(ε²)-corrected equation in the post-scrambling regime t > t_* and in the y-frame,

(∂_t + ∂_y) ψ(y, t) = − γ ∂_y³ ψ(y, t) ,   (56)

written in terms of the rescaled amplitude ψ(y, t) = √v ϕ(y, t), which has standard L² norm in the y-frame. The coefficient controlling the new term is certainly small:

γ = ε²/(6 v²) ∼ 1/(6 λ² S²) ,   (57)

where we have used that v = ε λ n_* ∼ ε λ S in the post-scrambling regime. In order to solve (56) we seek a solution of Fourier form

ψ(y, t) = ∫ (dk/2π)  ψ_k  e^{i k y − i ω_k (t − t_*)} ,   (58)

with dispersion

ω_k = k − γ k³ .   (59)

Let us set an initial condition at t = t_*, specifying the amplitude as ψ(y, t_*) = ψ_i(y), which is just the Fourier transform of ψ_k. The solution reads

ψ(y, t) = ∫ dy′ ψ_i(y′) ∫ (dk/2π)  e^{i k (y − y′ − ∆t) + i γ ∆t k³} ,   (60)

where ∆t = t − t_*. By the rescaling k → k/(3γ∆t)^{1/3} we can evaluate the momentum integral in terms of the Airy function to obtain

ψ(y, t) = ∫ dy′ ψ_i(y′)  (3γ∆t)^{−1/3}  Ai( (y − y′ − ∆t)/(3γ∆t)^{1/3} ) .   (61)

It is already clear from this expression that this approximation is beginning to capture the randomization effect, due to the properties of the Airy function. To see this, let us consider an initial delta-function pulse at y = y_*, leading to a solution

ψ(y, t) = A  (3γ∆t)^{−1/3}  Ai( (∆y − ∆t)/(3γ∆t)^{1/3} ) ,   (62)

where ∆y = y − y_*. The constant A is fixed by requiring the correct normalization of ψ(y, t).
Evaluating the asymptotics long after the ballistic front y ∼ t has passed, i.e. for ∆t ≫ ∆y, one obtains

|ψ(y, t)| ∼ A  (3γ∆t)^{−1/3}  ( (∆t − ∆y)/(3γ∆t)^{1/3} )^{−1/4}   (63)

for 0 < ∆y < ∆t, and essentially zero otherwise. The normalization condition 1 = (1/ε) ∫ dy |ψ(y)|² fixes the order of magnitude of the constant to be A ∼ (3γ)^{1/4} √ε, so that the operator amplitude looks like a rapidly oscillating function over the interval 0 < ∆y < ∆t of the form

ψ(y, t) ∼ √(ε/∆t)  Osc_{[0,∆t]}(∆y) ,   (64)

where Osc_{[0,t]} stands for the oscillation component with unit amplitude (a cosine function) and support on the interval [0, t]. Converting back to the x-frame amplitude ϕ = ψ/√v, we have an oscillating function with amplitude of order √(ε/vt) and support on the interval [0, vt]. This result is interesting, since it shows perfectly efficient randomization (cf. Fig. 3). The very flat and long tail yields a K-entropy of order

S_K(t) ∼ log( v t/ε )   (65)

at long times. However, a delta-function initial condition is not a realistic starting point for the post-scrambling regime. First, such a singular initial configuration is beyond the regime of applicability of the low-derivative approximations to (34). Second, it was argued that a period of fast scrambling with S degrees of freedom outputs a distribution with an x-width of order x_* ∼ ε n_* ∼ ε S ≫ ε. Hence, in order to check if the present approximation captures randomization, we must input an initial distribution of width δx ∼ ε S. Equivalently, in the y-frame at post-scrambling this amounts to δy ∼ δx/v_* ∼ λ^{−1}.
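The delta-pulse Airy profile and its flat oscillating tail can be visualized directly with scipy; the following sketch (in y-frame units with ε = 1, and with arbitrary toy values of γ and ∆t) checks that the normalized tail height is indeed of order √(ε/∆t):

```python
import numpy as np
from scipy.special import airy

gamma, Dt = 1e-2, 400.0                  # arbitrary toy values of gamma, Delta t
w = (3 * gamma * Dt) ** (1.0 / 3.0)      # Airy width scale (3 gamma Dt)^{1/3}
dy = np.linspace(-50.0, Dt + 50.0, 200001)
psi = airy((dy - Dt) / w)[0]             # Ai profile centered on the front
psi /= np.sqrt(np.sum(psi ** 2) * (dy[1] - dy[0]))   # int |psi|^2 dy = 1 (eps = 1)

# oscillating tail behind the front: nearly flat envelope of height ~ 1/sqrt(Dt)
tail = np.abs(psi[(dy > 0.2 * Dt) & (dy < 0.4 * Dt)])
val = tail.max() * np.sqrt(Dt)
print(val)                               # O(1), confirming amplitude ~ sqrt(eps/Dt)
```

The grid spacing must resolve the Airy oscillations, whose local period behind the front shrinks like (∆t − ∆y)^{−1/2} in units of the width scale.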
Picking a gaussian ansatz for the normalized y-frame distribution, the integral (60) may be evaluated exactly, yielding the closed-form amplitude (67) in terms of a correction term B given in (68).
Figure 4: The amplitude (67) for a very narrow initial pulse, δ = 10⁻² in units of the inverse Lyapunov exponent, is very similar to the amplitude for an initial delta-function pulse.

Looking at the long-time tail, we focus on the region of large ∆y with ∆t ≫ ∆y, so that only the first term in B remains relevant as a correction to the Airy function profile. This term induces a suppression of order exp(−δ²/6γ) on the tail amplitude. Putting all factors together one finally finds ϕ tail ∼ δλ n * e^{−(δλ n * )²}, up to O(1) factors. We conclude that, unless we pick a lattice-size distribution, with δ ∼ 1/λS, the randomization is all but washed out when looking at smooth signals. In particular, for the choice of width δ ∼ 1/λ, which corresponds to an initial scrambling period of time t * = λ⁻¹ log S, the tail is exponentially suppressed. On the other hand, the fact that randomization arises when the signal is extrapolated to cutoff scales, beyond the domain where we trust the equation (56), suggests that perhaps randomization is a true property of the discrete evolution equation.
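The suppression of the tail for smooth initial data can also be seen directly in the Fourier picture: the wake behind the front is carried by modes near the zero of the group velocity, k² ≈ 1/3γ, which a gaussian of width δ suppresses roughly as exp(−δ²/6γ), in line with the estimate above. A minimal numerical comparison (ours, with illustrative parameter values) of a narrow and an O(1)-width pulse:

```python
import numpy as np

# Illustrative check (ours): the dispersive tail behind the front comes from
# Fourier modes with k^2 ~ 1/(3*gamma); a gaussian of width delta suppresses
# them, so a wide pulse leaves almost no wake. Parameter values are arbitrary.
N, L = 8192, 800.0
dy_grid = L/N
y = np.linspace(0.0, L, N, endpoint=False)
k = 2*np.pi*np.fft.fftfreq(N, d=dy_grid)
gamma, y0, dt = 0.05, 100.0, 400.0

def evolve(delta):
    """Propagate an L2-normalized gaussian of width delta for a time dt."""
    psi0 = np.exp(-(y - y0)**2/(2*delta**2))
    psi0 /= np.sqrt(np.sum(np.abs(psi0)**2)*dy_grid)
    return np.fft.ifft(np.fft.fft(psi0)*np.exp(-1j*(k - gamma*k**3)*dt))

window = (y > y0 + 50) & (y < y0 + dt - 50)   # tail region behind the front
tail_narrow = np.abs(evolve(0.4))[window].max()
tail_wide = np.abs(evolve(3.0))[window].max()
```

The narrow pulse develops a visible oscillatory tail, while for the wider pulse the tail amplitude is suppressed by many orders of magnitude.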

The discrete amplitude at post-scrambling
In search of K-entropy production in the post-scrambling regime, we return to the discrete problem (11), which becomes ϕ̇ n = b (ϕ n−1 − ϕ n+1 ) when the Lanczos coefficients are approximated by a constant, b n ≈ b. In the physical situation of interest, this equation holds for n > n * ∼ S, and the solution must be matched to a solution of the scrambling regime, such as (33). Ignoring boundary conditions for the time being, a particular solution of (71) is just a Bessel function: ϕ n (t) = J n (2bt) .
It has the correct normalization at t = 0, with all amplitudes vanishing except ϕ 0 (0) = 1. Therefore the Bessel functions describe the spread of a distribution which begins sharply localized at the origin. A glance at the plot in Fig. 6 reveals that randomization is very efficient, featuring a tail similar to that of the Airy function found in the last section. Using the so-called 'approximation by tangents' (cf. [18]) we can write, for n large at fixed ratio 2bt/n > 1, J n (2bt) ≈ (2/π)^{1/2} (4b²t² − n²)^{−1/4} cos( (4b²t² − n²)^{1/2} − n a − π/4 ), where a = arctan( (4b²t² − n²)^{1/2} /n ). As the distribution moves to large n at constant velocity, equal to 2b, there is a rapidly oscillating tail with almost flat envelope and height of order (4b²t² − n²)^{−1/4}. Therefore, the Bessel function restricted to positive n behaves qualitatively as the Airy function, featuring an oscillating tail with amplitude of order 1/√(2bt), supported on the interval [0, 2bt].
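The statements above are straightforward to verify with standard Bessel-function routines. The following sketch (ours; the values of b and t are illustrative) checks that ϕ n (t) = J n (2bt) solves the constant-b equation, that the distribution carries unit total probability, and that the tail envelope scales as (4b²t² − n²)^{−1/4}:

```python
import numpy as np
from scipy.special import jv

b, t = 1.0, 7.3   # illustrative values
x = 2*b*t
n = np.arange(0, 40)

# (i) phi_n(t) = J_n(2bt) solves d/dt phi_n = b*(phi_{n-1} - phi_{n+1})
h = 1e-6  # central finite difference in t
dphi = (jv(n, 2*b*(t + h)) - jv(n, 2*b*(t - h)))/(2*h)
rec = b*(jv(n - 1, x) - jv(n + 1, x))
max_err = np.max(np.abs(dphi - rec))

# (ii) unit normalization over all integer n: J_0^2 + 2*sum_{n>=1} J_n^2 = 1
total = jv(0, x)**2 + 2*np.sum(jv(np.arange(1, 60), x)**2)

# (iii) tail envelope of order (4 b^2 t^2 - n^2)^(-1/4) inside the front
interior = n[n < x - 3]
envelope = np.abs(jv(interior, x))*(x**2 - interior**2)**0.25
```

The rescaled tail values in `envelope` stay of order one, confirming the flat-envelope behavior of the oscillating tail.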
The Bessel function amplitude has, however, unphysical features in this case, because it leaks into the negative-n axis: the ansatz (72) fails to satisfy the correct boundary condition ϕ −1 = 0. This implies that the probability density |ϕ n |² is not conserved on the physical configurations with n ≥ 0. The problem can be fixed by a superposition of two Bessel functions, R n (2bt) = J n (2bt) + J n+2 (2bt), which vanishes identically at n = −1 for all times, as one can verify using the identity J −n (z) = (−1)ⁿ J n (z). As a result, R −1 (2bt) = 0 is effectively a 'Dirichlet' condition separating the dynamics of the physical region n ≥ 0 from the dynamics of the unphysical region n < −1. Furthermore, R n (t = 0) = δ n,0 + δ n,−2 and, since one can now consistently restrict attention to positive values of n, it follows that (74) does satisfy the physical conditions of being narrowly localized at t = 0 and permanently confined to the n ≥ 0 region. Using the recurrence relation J n (z) + J n+2 (z) = (2(n+1)/z) J n+1 (z), the function (74) can be rewritten as R n (2bt) = ((n+1)/bt) J n+1 (2bt), a form that makes manifest a linear enveloping behavior at large n, as shown in Fig. 7. Despite this accumulation of probability at the higher end of the n spectrum, one can check using the form (75) that the K-entropy does grow at a logarithmic rate, S K [R n (2bt)] ∝ log (2bt) at long times, the hallmark of a good operator randomization. The function R n (2bt) locates the initial pulse right next to the Dirichlet condition. In order to better simulate the type of configuration prepared by a previous scrambling period, it is convenient to engineer analogs of the R n function with initial pulses located at any desired position. These 'displaced' pulses can be manufactured by generalizing (74) into R (k) n (2bt) = J n−k (2bt) + (−1)ᵏ J n+k+2 (2bt), for any non-negative integer k. These functions meet the goal, since they vanish at n = −1 for all times and R (k) n (0) = δ n,k + (−1)ᵏ δ n,−k−2 .
Hence, we have a function which starts with a unit pulse at any n = k ≥ 0, while remaining confined to the n ≥ 0 domain at all times, the original pulse function (74) corresponding to the particular case of k = 0.
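The combination R (k) n (2bt) = J n−k (2bt) + (−1)ᵏ J n+k+2 (2bt) is the one consistent with the stated properties, and it is the form we assume in the numerical sketch below (ours), which checks the Dirichlet zero at n = −1, the unit pulse at t = 0, and the conservation of probability on the half-line n ≥ 0, setting b = 1:

```python
import numpy as np
from scipy.special import jv

# Check (ours) the three advertised properties of the displaced pulses
# R^(k)_n(2bt) = J_{n-k}(2bt) + (-1)^k J_{n+k+2}(2bt), with b = 1.
def Rk(n, k, x):
    return jv(n - k, x) + (-1)**k * jv(n + k + 2, x)

n = np.arange(0, 120)
times = [0.5, 3.0, 10.0]

# (i) Dirichlet condition: R^(k)_{-1}(2bt) = 0 for all times
dirichlet = max(abs(Rk(-1, k, 2*t)) for k in range(6) for t in times)

# (ii) unit pulse at t = 0: R^(k)_n(0) = delta_{n,k} on n >= 0
pulse_ok = all(np.allclose(Rk(n, k, 0.0), (n == k).astype(float))
               for k in range(6))

# (iii) conserved probability on the physical half-line n >= 0
norms = [np.sum(Rk(n, k, 2*t)**2) for k in range(6) for t in times]
```

The half-line norm stays equal to one at all times, as guaranteed by the Dirichlet condition at n = −1.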
For generic values of k and long times, the k-pulse functions R (k) n (2bt) look like modulated Bessel functions: they display a tail of average height of order 1/√(2bt) and are supported on the ballistic domain bounded by n t ∼ 2bt (cf. Fig. 8). Therefore, they also feature logarithmically increasing K-entropies.
With these ingredients in place, we are ready to discuss the more realistic case of an initial pulse of arbitrary width K 0 . This can be achieved by a superposition of k-pulses, ϕ n (t) = Σ k α k R (k) n (2bt). In particular, choosing K 0 ∼ S simulates the kind of signal that is prepared by a previous period of fast scrambling. To simplify matters, let us consider a square pulse with α k = 1/√K 0 for 0 ≤ k < K 0 . An example of the long-time evolution of such a pulse is shown in Fig. 9. We observe a stable peak which propagates ballistically, and an approximately uniform tail obtained by averaging over the tails of the single-pulse functions. Assuming that the phases of the single-pulse functions add up randomly, we estimate that a randomization tail exists with height of order 1/√(2bt) and width of order 2bt, leading to a logarithmic growth of K-entropy: S K (t) ∼ log(2bt). This logarithmic growth of the K-entropy can be confirmed by direct numerical evaluation (cf. Fig. 10).

Figure 10: Growth of K-entropy for an initial square pulse of width K 0 = 5. Notice the asymptotic logarithmic growth and the initial finite-size effects due to the details of the square pulse.
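The logarithmic K-entropy growth can be reproduced with a few lines of code. The sketch below (ours) builds the square pulse from displaced pulses, taking R (k) n = J n−k + (−1)ᵏ J n+k+2 and b = 1, and evaluates S K = −Σ p n log p n at a few well-separated times:

```python
import numpy as np
from scipy.special import jv

# Numerical sketch (ours) of the K-entropy growth for a square pulse of
# width K0, built from displaced pulses R^(k)_n with b = 1.
def Rk(n, k, x):
    return jv(n - k, x) + (-1)**k * jv(n + k + 2, x)

def k_entropy(t, K0=5, N=1200):
    n = np.arange(N)
    phi = sum(Rk(n, k, 2*t) for k in range(K0))/np.sqrt(K0)
    p = np.abs(phi)**2
    p /= p.sum()              # normalize the distribution over n
    p = p[p > 1e-300]         # drop underflowed entries before the log
    return -np.sum(p*np.log(p))

sk = [k_entropy(t) for t in (4.0, 40.0, 400.0)]
```

Across two decades in t the K-entropy increases by a few units, consistent with the S K (t) ∼ log(2bt) scaling quoted above.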
The conclusion is that randomization does occur in order of magnitude. There is a persistent ballistic component which makes up an O(1) fraction of the normalization, but the K-entropy at long times is dominated by the oscillating tail. Eventually, after times of order t K , the K-entropy becomes of order log(n max ), thereby growing from O(log S) at t * to O(S) by the exponential time scale t K . A qualitative picture of the K-entropy dynamics in a fast scrambler is presented in Fig. 11.

Conclusions
In this paper we have explored the long-time behavior of K-complexity, an algebraic notion of operator complexity which relies on an effective dimensionality of a linear subspace containing the operator's time evolution. This concept was introduced in [1] as a useful characterization of chaotic behavior, in the sense of being governed by the same Lyapunov exponent as OTOC correlators.
Using the Eigenstate Thermalization Hypothesis as a starting point, we have argued that K-complexity grows linearly at late times, after the system has been scrambled, with a rate which is extensive in the size S of the system. Eventually, the K-complexity must saturate at a maximum bounded by e O(S) , in a time also proportional to e O(S) , and it stays approximately constant thereafter, until Poincaré recurrences begin to show up at times scaling as a double exponential of the entropy, exp(e O(S) ).

Figure 11: Sketch of the K-entropy dynamics in a fast scrambler with S degrees of freedom and Lyapunov exponent λ. A linear growth proportional to λt during scrambling is followed by a logarithmic increase in the post-scrambling era, according to a scaling log(2Sλt), and a final saturation beyond times of order t K .
We furthermore notice that, during the exponentially long post-scrambling period when K-complexity grows linearly, the operator is randomized in order of magnitude. This can be characterized by the logarithmic growth of the K-entropy, which measures the degree of uniformity of the amplitudes ϕ n (t). More precisely, we find numerical evidence for a growth law of the form S K ∼ log (2bt) for bt ≫ 1, where b denotes the asymptotic value of the Lanczos sequence. At complexity saturation, the K-entropy also saturates at a value of order log(n max ) = O(S). It would be interesting to study the consequences of this randomization on the long-time behavior of correlation functions, along the lines of [9,19,20,21].
The outstanding open question regarding these results is the holographic representation of K-complexity. During the scrambling period, there is an approximate correspondence between K-complexity and operator size, and there are proposals for concrete relations between operator size and bulk quantities [22,23]. In these examples, the holographic map relates the free fall of a particle towards the horizon to a scrambling process in the holographic dual. The natural expectation is that the period of linear growth of complexity should be associated with properties of the motion in the black hole interior.

Acknowledgments
J.L.F. Barbon would like to thank the Hebrew University in Jerusalem and IHES for hospitality during the preparation of this work. R. Shir would like to thank the IFT-Madrid for hospitality during the preparation of this work. R. Sinha would like to thank the Strings 2019 conference in Brussels, Belgium, where this work was first presented.