Black holes, complexity and quantum chaos

We study aspects of black holes and quantum chaos through the behavior of computational costs, which are distance notions in the manifold of unitaries of the theory. To this end, we enlarge Nielsen's geometric approach to quantum computation and provide metrics for finite temperature/energy scenarios and CFT's. From the framework, it is clear that costs can grow in two different ways: operator vs. 'simple' growths. The first type mixes operators associated to different penalties, while the second does not. Important examples of simple growths are those related to symmetry transformations, and we describe the costs of rotations, translations, and boosts. For black holes, this analysis shows how infalling particle costs are controlled by the maximal Lyapunov exponent, and motivates a further bound on the growth of chaos. The analysis also suggests a correspondence between proper energies in the bulk and average 'local' scaling dimensions in the boundary. Finally, we describe these complexity features from a dual perspective. Using recent results on operator growth in SYK, we compute a lower bound to the computational cost growth in SYK at infinite temperature. At intermediate times it is controlled by the Lyapunov exponent, while at long times it saturates to a linear growth, as expected from the gravity description.


Introduction
Although there is a large amount of knowledge about the holographic dictionary in the context of AdS/CFT [1], see [2] for a recent review on bulk reconstruction and references therein, it remains unclear how the CFT describes processes behind or near the horizon of a black hole. One of the main reasons is that most of the well-known entries of the dictionary consider setups anchored at the boundary of AdS, such as the field-operator correspondence [3,4] or the Ryu-Takayanagi formula for computing entanglement entropy [5,6]. On the other hand, an example of quantities that are transparently sensitive to near horizon dynamics are out-of-time-ordered correlation functions (OTOCs), as developed in [7][8][9]. But OTOCs are sensitive to this dynamics only through O(1/N) effects, both from CFT and gravity points of view, while the near horizon geometry and its physics are O(1) effects that should be encoded in the CFT through O(1) effects as well. For example, as we review below, infalling particles have basic properties like energy and momentum that are controlled by the chaos exponent (see for example [10][11][12][13]), and one would like to understand how such properties are encoded in the CFT. Besides, as we will see, there are universal features not directly captured by OTOCs.

JHEP09(2018)043
encodes the near horizon geometry. Interestingly, this statement rests on the equivalence principle. If the particle/system momentum in the freely falling frame is constant, this necessarily implies that the costs described by an outside observer increase exponentially with time. As a byproduct, we show that the present approach suggests a further chaos bound on the coefficient in front of the exponential growth, obtained by letting the infalling particle approach the speed of light. This seems a non-trivial prediction for any dual theory with a locally Minkowskian gravitational description near the black hole horizon. This should be contrasted with the OTOC approach, for which, to the author's knowledge, the coefficient in front of the exponential growth is operator dependent [9]. Finally, in the last section, we study these features of complexity from a microscopic dual perspective. By using recent results on operator growth in SYK [22], we obtain a lower bound on the computational cost in the dual theory. Before the scrambling time, we confirm it is controlled by the Lyapunov exponent. After the scrambling time, the exponential growth saturates to a linear growth. On one hand, this dramatic change in the dynamics is actually mirrored in the gravitational description, since by times of the order of the scrambling time the backreaction of the infalling shock wave has to be included, due to its large proper energy [23]. On the other hand, this dynamical transition is consistent with Lloyd's bound [24], since an indefinite exponential growth would, after the scrambling time, completely invalidate it.
Before we move on, we want to make a couple of general comments. First, notice that well-defined distance notions in the Hilbert space or in the manifold of unitaries have to be respected across dualities, so the present approach is self-consistent. An important example in holography is relative entropy. In [25], it was proven that bulk relative entropy equals CFT relative entropy, as it should, given the assumed equality of Hilbert spaces. The problem is that bulk relative entropy resists a meaningful definition, since it involves the vacuum entanglement of quantum fields in the bulk. Another problem is that it is anchored at the boundary, complicating the exploration of the full geometry and the export of the technique to more general spacetimes. Finally, when considering pure state scenarios, relative entropy, being invariant under unitary transformations, is not fine-grained enough to capture certain details of the evolution. Computational costs seem to avoid all of these problems. They are well defined and computable on both sides of the duality, they are able to explore the full geodesic structure, the method is not attached to any particular geometry, and they are perfectly suited for pure state contexts.
Second, one possible reason why these types of interesting quantities have passed largely unnoticed in the physics community is that the usual statistical ensembles, which can be used to build well-defined distance notions, are totally blind to the details of time evolution. To capture such details, fine-grained notions of length are needed, and suitable notions can be found by generalizing Nielsen's geometric approach to quantum computation [14][15][16]. For recent related approaches to quantum complexity in physics see [26][27][28][29][30][31][32][33][34].
2 A geometric approach to quantum mechanics

As described in the introduction, in the context of dualities between apparently different theories (such as holographic dualities), it is interesting to have fine-grained notions of distance in the unitary manifold, since these have to be preserved across the duality. Typical interesting distances are those associated with Hamiltonian time evolution:

U(t) = e^{-iHt} ,    (2.1)

with any type of symmetry transformation:

U(θ) = e^{-i θ^j T_j} ,    (2.2)

associated to some set of Lie algebra generators T_j of a certain symmetry group G. Finally, we can be interested in the cost of Heisenberg time evolution:

U(t) = e^{iHt} V e^{-iHt} ,    (2.3)

where V is a unitary perturbation of the state. Mathematically, the problem is to assign lengths to trajectories U(s) in the unitary manifold. Generically, lengths are defined by integrating a suitable 'norm' of the tangent vector to the trajectory along the curve. For Riemannian geometry, the famous expression reads:

ℓ = ∫ ds ( g_{µν}(x) ẋ^µ ẋ^ν )^{1/2} ,    (2.4)

so what we need is a chart x^µ and a metric on the tangent space.
In what follows we adopt Nielsen's approach [14][15][16] to define the geometry. The original motivation for defining such geometries was purely related to quantum computation: the objective was to define geometries such that the lengths of minimal geodesics provide lower bounds to quantum complexity. It is perfectly possible that for physics applications there exist other metric definitions that are also sufficiently fine-grained. In this article, we will concentrate and expand on these quantum-complexity-inspired notions of length, which will suffice for our purposes. But at any rate, we first want to remark that such a geometric approach is just the natural mathematical way to define distances in the unitary manifold, and also to recall that for applications to dualities, the only important thing is that we use the same geometry on both sides of the duality and that the geometry is sufficiently fine-grained.
The starting point is that the added computational cost (the added distance in the geometry) that arises when applying a small unitary evolution to any unitary matrix is independent of the input. In other words, for the infinitesimal gate/transformation:

U(s + ds) = e^{-i H̃(s) ds} U(s) ,    (2.5)

the added distance does not depend on U(s); it is just a function of the instantaneous Hamiltonian H̃(s) being applied at time s to move us from U(s) to U(s + ds). This simple condition is just a sort of local flatness in the unitary manifold. It is just the way to impose that the cost of applying a gate is an intrinsic property of the gate itself. We use the tilde notation H̃(s) to distinguish the instantaneous Hamiltonian from the Hamiltonian of the physical theory H, since generically they will be totally different objects. So if we are interested in analyzing a certain unitary history U(s), we are forced to find the H̃(s) such that (2.5) holds at each point of the trajectory. This instantaneous Hamiltonian H̃(s) turns out to be given by the associated Schrödinger equation:

H̃(s) = i (dU(s)/ds) U(s)^† .    (2.6)

From a physical perspective, the instantaneous Hamiltonians H̃(s) provide the 'velocities' used to explore the unitary manifold. They are elements of the tangent space. This situation exactly parallels the analysis of Lie groups, as we will exploit later in the article. Notice that the previous relation holds because at first order in ds we have:

U(s + ds) = U(s) + (dU(s)/ds) ds = (1 - i H̃(s) ds) U(s) .    (2.7)

Once we find H̃(s) from U(s), the computational cost associated to the trajectory U is given by its length:

C(U) = ∫ ds F(H̃(s)) ,    (2.8)

where F(H̃(s)) is a metric functional on the tangent space of the unitary manifold. Before defining F, let us remark that computing H̃(s) from (2.6) can be quite non-trivial, as we explain in detail in the next section. The simplest examples are those in which the unitary evolution can be written as:

U(s) = e^{-iHs} ,    (2.9)

for which H̃(s) = H. In these cases C(U) = F(H)s, and this is how the famous linear growth of complexity looks in the geometric approach. Let us continue and define the metric F. We first need to define a chart, and this will depend on the theory. We will consider explicit examples later, but for the time being, we just assume there is an orthonormal basis of hermitian generators T_{µi} of the tangent space.
This allows us to write all Hamiltonians as:

H̃(s) = Σ_{µ,i} x^{µi}(s) T_{µi} .    (2.10)

We include two indices because one will run over operators with different penalty factors (index µ), and the other over operators associated to the same penalty (index i). Of course, in explicit examples like SYK or CFT's, the sum over i might implicitly depend on the specific µ.
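As a sanity check of this machinery, the instantaneous Hamiltonian of a trajectory of the form (2.9) can be extracted numerically from finite differences of U(s) and compared with H itself. The 4-dimensional random Hamiltonian and the step size below are illustrative choices, not taken from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

# A random Hermitian "physical" Hamiltonian on a 4-dimensional Hilbert space.
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (A + A.conj().T) / 2
evals, V = np.linalg.eigh(H)

def U(s):
    """U(s) = exp(-i H s), built from the spectral decomposition of H."""
    return V @ np.diag(np.exp(-1j * evals * s)) @ V.conj().T

# Finite-difference estimate of the Schrodinger-equation relation
#   H_tilde(s) = i (dU/ds) U(s)^dagger.
s, ds = 0.7, 1e-6
H_tilde = 1j * ((U(s + ds) - U(s)) / ds) @ U(s).conj().T

# For U(s) = exp(-i H s) the instantaneous Hamiltonian is s-independent and
# equal to H itself, so the cost grows linearly: C(U) = F(H) * s.
print(np.max(np.abs(H_tilde - H)))  # small, O(ds)
```

For a generic trajectory U(s) the same finite-difference extraction applies, but H̃(s) will no longer be constant and the cost must be integrated along the path.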

For the present discussion we assume the manifold is finite dimensional; we will later generalize the framework to CFT's. So if we are given H̃(s), and the dimension of the Hilbert space is |H|, the expansion coefficients are given by:

x^{µi}(s) = (1/|H|) Tr( H̃(s) T_{µi} ) .    (2.11)

In this generic context, the (sufficiently fine-grained) class of metrics proposed in [14] is:

F(H̃(s))² = Σ_µ p_µ² Σ_i ( x^{µi}(s) )² ,    (2.12)

where p_µ are some unknown penalty factors that are included to differentiate between the various directions in the manifold. We defined them as p_µ² so that when we are evolving in only one direction, associated to one specific generator T_{µi}, the cost is proportional to p_µ. In the context of computational complexity, the penalties p_µ are included to punish directions associated to operators that are assumed to be more difficult to apply (or create), but no generic principle is given to find them. We will comment on them below.
For later reference, notice that it is natural to define the projector into the space of generators with equal penalty factor p_µ:

P_µ( H̃(s) ) = Σ_i x^{µi}(s) T_{µi} ,    (2.13)

where there is no summation over the index µ. Using such projectors, the metric (2.12) can also be written in two suggestive ways:

F(H̃(s))² = Σ_µ p_µ² Tr( ρ_mixed P_µ(H̃(s))² ) ,    (2.14)

and

F(H̃(s))² = Tr( ρ_mixed ( Σ_µ p_µ P_µ(H̃(s)) )² ) ,    (2.15)

where ρ_mixed = 1/|H| is the usual maximally mixed density matrix. Notice that the reason why the previous two expressions are equal is that correlations between generators associated to different penalties vanish in the maximally mixed state.
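The agreement between the coefficient form of the metric and its projector/trace form can be verified in a toy model. The two-qubit Pauli-string basis, the weight-based penalty p_µ = w, and the random instantaneous Hamiltonian below are illustrative assumptions, not taken from the text:

```python
import numpy as np
from itertools import product

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = {'I': I2, 'X': X, 'Y': Y, 'Z': Z}

dim = 4  # |H| for two qubits
# Orthogonal generator basis: Pauli strings, normalized so Tr(T^2) = |H|.
basis = {a + b: np.kron(paulis[a], paulis[b]) for a, b in product('IXYZ', repeat=2)}
weight = lambda name: sum(c != 'I' for c in name)  # penalty sector label mu
p = lambda w: float(w)                             # assumed penalty p_mu = w

# A random Hermitian instantaneous Hamiltonian.
rng = np.random.default_rng(1)
A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
Ht = (A + A.conj().T) / 2

# Expansion coefficients x^{mu i} = Tr(Ht T_{mu i}) / |H|.
x = {name: (np.trace(Ht @ T) / dim).real for name, T in basis.items()}

# Coefficient form: F^2 = sum_mu p_mu^2 sum_i (x^{mu i})^2.
F2_coeff = sum(p(weight(n)) ** 2 * x[n] ** 2 for n in basis)

# Projector/trace form: F^2 = sum_mu p_mu^2 Tr(rho_mixed P_mu(Ht)^2).
rho_mixed = np.eye(dim) / dim
F2_proj = 0.0
for w in range(3):  # weights 0, 1, 2 on two qubits
    Pw = sum(x[n] * basis[n] for n in basis if weight(n) == w)
    F2_proj += p(w) ** 2 * np.trace(rho_mixed @ Pw @ Pw).real

print(abs(F2_coeff - F2_proj))  # the two forms agree in the maximally mixed state
```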
Having described the framework, let us make several comments. First, from the previous expressions, it is clear that there is no fundamental justification for choosing ρ_mixed rather than any other state. One interpretation is that the cost in a given state is given by the previous relations, but with the new state inserted in the place of ρ_mixed. Then, if we have some average over states, such as ρ_mixed, the associated cost is just the average of the costs. This comment will become clearer when generalizing to QFT's below. Notice that this sort of state dependence does not spoil the fine-grained nature of the complexity metric. This fine-grained nature relies on the fact that we are exploring the unitary manifold by infinitesimal transformations, and that statement does not depend on which infinitesimal cost we choose.

Second, we want to remark that this notion of quantum complexity, based on distances on the unitary manifold, also provides a natural notion of distance on the Hilbert space. In other words, the previous framework provides a possible definition for the problem of finding the minimum complexity protocol taking us from a certain reference state to a certain target state. The reasoning is simple and goes as follows. Every protocol (or unitary circuit) from a reference state |ψ_r⟩ to a target state |ψ_t⟩ reads:

|ψ_t⟩ = U_final |ψ_r⟩ .    (2.16)

The cost of such a journey through the Hilbert space can be defined as the computational cost of the associated journey over the unitary manifold, which starts at the identity operator and ends up at the unitary matrix:

U_final = U(s_f) ,  with  U(0) = 1 .    (2.17)

In this way, every journey over the Hilbert space has a computational cost, which in principle can be found by using the previously defined framework. Complexity minimization of the process that takes us from |ψ_r⟩ to |ψ_t⟩ becomes the problem of finding the minimal geodesic in the unitary manifold that goes from the identity to U_final. Indeed, since any unitary matrix that differs from U_final by an element of the stabilizer group of the target state |ψ_t⟩ is valid as well, we take into account protocols going from the identity to such a set of final unitaries. This shows that the problem of finding the complexity of a target state given a reference state can always be seen as a geodesic problem in the manifold of unitaries. 1 Below this will not play a role for us, since the physical process under consideration will always be clear.
As a last comment, we could also have considered infinitesimal costs such as F² ∝ ⟨H̃(s)²⟩ − ⟨H̃(s)⟩². In the Hilbert space formulation, these types of costs would give rise to the Fubini-Study metrics. In the complexity context they have been considered in [29] and more recently in [34]. Physically the difference between them is clear: eqs. (2.14) and (2.15) consider the average value of the instantaneous Hamiltonian (the energy per gate), while the Fubini-Study choice focuses on the deviations from the mean value (the energy dispersion per gate). This exactly parallels the difference between Lloyd's bound [24] and the Margolus-Levitin bound [35]. Since holography points to linear growth of complexity, both in time and in energy, this seems to exclude the Fubini-Study choice, so we stick to eqs. (2.14) and (2.15).
In the next sections, we enlarge Nielsen's framework so as to include manifolds with infinite dimensions (like CFT's), situations in which we are at finite temperature or energy, and comment on the issue of penalty factors. We will also discuss the technical difficulties that appear in actual computations, and make some simple but important remarks on the possible types of complexity growth we might have.

State dependence
The framework above was fairly generic, and it exactly parallels Nielsen's approach to finite dimensional spin systems [14]. But there are a couple of issues that need to be faced in order to move towards physics applications. The first concerns the extension to infinite dimensional systems like QFT's, or even to finite dimensional systems at finite temperature.
For example, consider the Hamiltonian of a free QFT:

H_QFT = Σ_k ω_k a_k^† a_k .    (2.18)

Blindly using (2.12), the cost of such an operator would be:

F(H_QFT)² = Σ_k p_{a_k^† a_k}² ω_k² ,    (2.19)

where p_{a_k^† a_k} is the penalty associated to the number operator a_k^† a_k. There are two problems with (2.19). The first is that, unless p_{a_k^† a_k} decays sufficiently fast with k, which seems totally unphysical, the answer diverges. The second is that, even in the unphysical case in which the answer is finite, in a real situation we would be counting the cost of operators that are not being used. For example, if we are in a state |ψ_{k_max}⟩ in which there are no particles with momenta higher than k_max, the action of H_QFT on the state is equal to the action of:

H_QFT^{k_max} = Σ_{k ≤ k_max} ω_k a_k^† a_k ,    (2.20)

whose cost, using again formula (2.12), is finite and given by:

F(H_QFT^{k_max})² = Σ_{k ≤ k_max} p_{a_k^† a_k}² ω_k² .    (2.21)

The moral is straightforward. Since the action of the high momentum Hamiltonian tail on the state is equal to zero, and zero has vanishing cost, we would like to say that the cost of H_QFT in the state |ψ_{k_max}⟩ is equal to the cost of H_QFT^{k_max}. At first sight, this might appear like some sort of state dependence, but it is actually not. We are just choosing the operator that minimizes complexity costs, while still moving along the same trajectory of the Hilbert space. Besides, the cost does not depend on a putative previous unitary trajectory, so it is still an intrinsic property of the gate itself.
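The divergence of the naive cost and the finiteness of the truncated one can be seen in a one-line toy model; the lattice dispersion, unit penalties, and cutoffs below are all hypothetical choices:

```python
import numpy as np

# Toy dispersion for discrete momentum modes k = 1, ..., k_max (unit mass).
omega = lambda k: np.sqrt(k ** 2 + 1.0)

def cost(k_max, p=lambda k: 1.0):
    """Cost of the free Hamiltonian truncated to modes k <= k_max,
    treating each number operator as one penalized direction."""
    ks = np.arange(1, k_max + 1)
    return np.sqrt(np.sum((p(ks) * omega(ks)) ** 2))

# With constant penalties the untruncated cost grows without bound with the
# cutoff, while on a state with no quanta above k_max only the (finite)
# truncated cost is physically meaningful.
print(cost(10) < cost(100) < cost(1000))  # True
```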
We need to formalize this intuition so that it is applicable to generic situations. The first option is to consider the expectation value of the Hamiltonian in the present state:

F(H̃(s)) = ⟨ψ| H̃(s) |ψ⟩ .    (2.22)

This gives a finite answer, but it does not capture a possible dependence on the penalties. A simple route that does capture the penalty dependence goes by looking at the previous alternative metric formulation (2.14). The natural generalizations for the cost of H̃(s) in the state |ψ⟩ are:

F_ψ(H̃(s))² = Σ_µ p_µ² ⟨ψ| P_µ(H̃(s))² |ψ⟩   and   F_ψ(H̃(s))² = ⟨ψ| ( Σ_µ p_µ P_µ(H̃(s)) )² |ψ⟩ .    (2.23)

Some important remarks are in order. First, when we consider generic states in the cost definition, 2 the two different choices are not equal. The reason they were equal before is that correlations between generators associated to different penalties vanish in the maximally mixed state, and so their contribution to the cost vanishes as well, but this is not necessarily true in generic states. This is a subtle issue because it does not show up easily. Indeed, for the computations below and the ones in the forthcoming article [20], both definitions are equal. The second remark is that this definition seems to fail when applied to Hamiltonian eigenstates. For such states, we expect the complexity not to increase. On the other hand, blindly applying the previous relation with the true Hamiltonian of the system seems to yield some cost. The error lies in that, in such a scenario, we should not insert the Hamiltonian but the identity operator (H̃(s) = 0), which has zero cost. This is an explicit example showing that one needs to take some care when exporting results from the complexity in the unitary manifold to the complexity in the Hilbert space. As explained in the previous section, the generally correct way is always to quotient out by the subgroup of unitaries that leaves the target state invariant (its stabilizer subgroup), and as the representative of a given class of instantaneous Hamiltonians, choose the one that minimizes the previous relation.
In the case of Hamiltonian evolution of an eigenstate, we are just applying elements of the stabilizer group, and we should then choose the identity as the representative, which has zero cost. Generically, these subtle considerations are utterly cumbersome and difficult to work with. Luckily, for physical applications, and in particular applications to chaotic systems, these subtleties do not really matter, since the action of any non-trivial gate will produce some non-trivial change on the state. Moreover, for AdS/CFT applications, it is convenient to have a formula that tells us that the complexity of an energy eigenstate grows linearly with time, since most probes of the state will not distinguish between an eigenstate and a non-equilibrium unitarily evolving state. Finally, we remark again that the previous distance is a correct notion of distance in the manifold of unitaries, the only subtleties arising when interpreting it as a distance in the Hilbert space.

On penalty functions and CFT's
The second problem concerns the weights p_µ. In Nielsen's approach to quantum spin systems, the proposed penalties are functionals of the so-called 'weight' of the generalized spin operator. A generalized spin operator has the following form:

σ = σ^{a_1} ⊗ σ^{a_2} ⊗ · · · ⊗ σ^{a_N} ,    (2.24)

where each factor is either a Pauli matrix or the identity. If there are N tensor product factors, out of which M factors are equal to 1, then the weight w is equal to N − M. The proposed penalties p_σ in spin systems are functionals of the weight, p_σ = p_σ(w), that increase as the weight increases. This is how we punish directions that are supposed to be more 'complicated' than others. In this context, there are a couple of questions that need to be solved for physics applications. The first asks for a more unique functional p_σ(w) for spin systems, some sort of 'natural' penalty functions. The second asks about the role of the weight w in generic theories, in particular in CFT's. Concerning the first question, we are going to leave it open for the time being. We will come back to it below (2.2.1) and in the last section. In both sections, due to different reasons and by exploring different possibilities, we will argue that a good physical choice is p_σ(w) = w. This choice contrasts with the choice made in [19], in which penalties were chosen to depend exponentially on the weights. We comment more on this issue and on the differences between both choices in section 3.3.
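In code, the weight of a generalized spin operator written as a string of single-site labels is just the count of non-identity factors. The string labels and the linear penalty below are illustrative:

```python
def weight(op_string):
    """Weight w = N - M of a generalized spin operator: N single-site factors,
    M of which are the identity ('I'), so w counts the non-trivial sites."""
    return sum(site != 'I' for site in op_string)

# The linear choice p_sigma(w) = w argued for in the text.
penalty = lambda op_string: weight(op_string)

print(weight('IXIZI'))  # 2: two non-identity factors out of N = 5 sites
```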
For the second question, we want to explore the proposal that in CFT's the role of the weight w is played by the scaling dimension ∆ of the associated operator O_∆. In CFT's, due to the operator product expansion, any gate or any instantaneous Hamiltonian can be expanded in terms of local operators at some fixed time slice:

H̃(t) = Σ_{∆,l} ∫ d^{d−1}x λ_{∆,l}(x, t) O_{∆,l}(x) ,    (2.25)

where x is a generalized coordinate for the d − 1 dimensional spacelike surface at time t, and the sum runs over primaries and descendants as well. Another possibility would be to work in radial quantization and expand in terms of operators at some fixed radial slice. Importantly, notice that we do not need operator products as in the spin system. Due to the OPE and the operator-state correspondence, operator growth in CFT's is equivalent to the usual evolution of a quantum state, where the initial state (operator) gets mixed with other states (operators) as time evolves.
As in the finite dimensional case, once we have characterized the set of infinitesimal gates, we need to define a norm on them. Through the penalties, this norm will tell us which directions in (2.25) are more difficult to explore. Now, given translation invariance, the penalties cannot depend on x. 3 They are therefore intrinsic functions of O_{∆,l}. This suggests that penalties only depend on the scaling dimension of the operator. It is then natural to define projectors into subspaces of equal scaling dimension:

P_∆( H̃(t) ) = Σ_l ∫ d^{d−1}x λ_{∆,l}(x, t) O_{∆,l}(x) ,    (2.26)

where the sum runs over all operators with scaling dimension ∆ (again primaries and descendants as well). The generalization of the previous metrics to CFT's is:

F_ψ(H̃)² = Σ_∆ p(∆)² ⟨ψ| P_∆(H̃)² |ψ⟩ .    (2.27)

In situations in which there is an approximately continuous spectrum of scaling dimensions, one can approximate the sums by integrals, weighted by the degeneracy of the sector of scaling dimension ∆. Notice also that the integrand in the previous expression is finite and positive definite.
There are various reasons suggesting that ∆ plays the role of w in CFT's. First, notice that translational invariance, together with the form of the expansion (2.25), implies that the penalty function must be an intrinsic property of the operator. There are not too many options. The real dimension of the operator/field is not a good choice, since that dimension can be zero. A good example of this situation are the fermions of SYK: any string of such operators, no matter how large, would still have dimension zero. Another possibility is to count the number of operators in a string of operators, completely paralleling the idea of weight in spin systems. The first problem with this is that, in the context of AdS/CFT, we would be punishing an operator with low scaling dimension, which creates a perturbative particle in AdS, as much as an operator with very large scaling dimension, which is dual to a (pure state) black hole. This seems unreasonable. The second problem with this option is that it is not consistent with the OPE, since any such string can be written as a linear combination of operators with no products whatsoever.
On the other hand, the scaling dimension of the operator seems the right intrinsic property that tells us what is more difficult or easy to create in CFT's. The first reason is that in CFT's the scaling dimension in radial quantization in the plane turns out to be the energy in the cylinder formulation. Given that complexity generically evolves as C = E t, where E is the energy of the state, states with higher scaling dimension have correspondingly higher complexity growth rates. If a given state is able to produce more complexity, it should be more difficult to create and should be punished accordingly. In CFT's, this translates into a dependence on the scaling dimensions. Along the same line of thought, the Hilbert space of a conformal family is very much like a harmonic oscillator. To obtain a descendant of level n with scaling dimension ∆_n = ∆ + n, we need to apply the momentum operator n times. If penalties are functionals of such a number n, then they are functionals of the associated ∆_n. More generically, it seems that the scaling dimension would be a convenient choice when studying how complexity behaves under conformal transformations. In particular, it behaves self-consistently under the operator product expansion. It is obvious that the penalty could also depend on the operator spin l, but we have not found a good specific use for this. Including such a dependence is straightforward and carries no conceptual problems.

Another argument goes by looking at SYK (to be defined below), which is both a spin system and a CFT. In SYK, the scaling dimension of a string of operators in the large-N limit is directly proportional to the number of Majorana fermions in the string. In this case, the scaling dimension proposal reduces to the usual weight prescription.
More generically, in general QFT's or theories without conformal invariance, where scaling dimensions do not appear, we expect the penalties to be proportional to the energy associated to the given operator. This natural assumption, saying that the computational weight is proportional to the energy of the operator, seems to lie at the core of the gravity-complexity correspondence, as we argue below.
Finally, probably the best supporting argument for the state-dependent proposals (2.27) comes from the specific example considered in [20] when studying the complexity of the Virasoro group and CFT's in 1+1 dimensions. In such scenarios, they lead to a direct gravitational interpretation.

Average scaling dimension and natural penalty factors
From a physicist's point of view, the previous unknown penalty functions p(∆) are quite disturbing. There is obviously too much freedom. One expectation is that most choices give similar qualitative results, albeit with certain quantitative differences. This is actually what we will find below for a large class of choices. But at any rate, we would like to have some definite option.
Below, when computing complexity in SYK and comparing with chaos, we will conclude that a good prescription for local fields is given by:

p(∆) = ∆ .    (2.28)

Actually, since the complexity (2.27) is defined up to a global choice of units, it is convenient to divide all penalties by that of the real Hamiltonian of the system (the penalty associated to the energy-momentum tensor). This choice of units obviously ensures that the Hamiltonian has an associated penalty equal to 1, and the complexity of unitary evolution is simply C(e^{−iHt}) = E t. Now, does the choice p(∆) = ∆ have some physical explanation? Is there a natural quantity that carries information about the penalty functions? Here we will argue that there is one indeed.
The previous expansion of the instantaneous Hamiltonian (2.25), together with the projectors into spaces of equal scaling dimension, naturally defines the following probability distributions:

P_ψ(∆) = ⟨ψ| P_∆(H̃)² |ψ⟩ / Σ_{∆'} ⟨ψ| P_{∆'}(H̃)² |ψ⟩ ,    (2.29)

for the first definition, and

P_ψ(∆) = Σ_{∆'} ⟨ψ| P_∆(H̃) P_{∆'}(H̃) |ψ⟩ / ⟨ψ| H̃² |ψ⟩ ,    (2.30)

for the second.

They can be interpreted as the probability that the operator has dimension ∆. The intuition is that we look at the expansion (2.25) as a state that is expanded on a certain basis of states, and we are defining the probability of finding a state with scaling dimension ∆.
Having such probability distributions, we naturally look for the average scaling dimension:

⟨∆⟩_ψ = Σ_∆ ∆ P_ψ(∆) .    (2.31)

We see that computing the average scaling dimension (2.31) is conceptually similar to computing the cost (2.27) if the penalty functions are set to:

p(∆) = ∆ .    (2.32)

With this choice, the CFT metrics (2.27) take the following natural form:

F_ψ(H̃)² = Z_ψ Σ_∆ ∆² P_ψ(∆) ,  with  Z_ψ = Σ_∆ ⟨ψ| P_∆(H̃)² |ψ⟩ .    (2.33)

This is just the average of the square of the scaling dimension, a natural quantity as well. Besides, whenever the probability distribution is peaked around some definite scaling dimension, after normalizing the complexity by Z_ψ the cost is just given by the average scaling dimension defined before. We will consider such objects in SYK below to clarify the differences and similarities in an explicit example.
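As a small numerical illustration, with a hypothetical (assumed, already normalized) distribution over scaling sectors, the average dimension and the p(∆) = ∆ cost are just two moments of the same distribution:

```python
import numpy as np

# Hypothetical distribution over scaling sectors of an instantaneous Hamiltonian.
dims = np.array([1.0, 2.0, 3.0, 4.0])
probs = np.array([0.6, 0.25, 0.1, 0.05])  # assumed weights, already normalized

avg_dim = np.sum(dims * probs)       # <Delta>, the average scaling dimension
cost_sq = np.sum(dims ** 2 * probs)  # <Delta^2>, the p(Delta) = Delta metric

# By Jensen's inequality the cost never undershoots the average dimension...
print(cost_sq >= avg_dim ** 2)  # True

# ...and for a distribution sharply peaked at some Delta*, both collapse to Delta*.
peaked = np.array([0.0, 0.0, 1.0, 0.0])
print(np.isclose(np.sum(dims * peaked), np.sqrt(np.sum(dims ** 2 * peaked))))  # True
```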

Technical difficulties with geometric complexity
In the previous sections, we have defined geometries for the manifold of unitaries of generic quantum theories. In principle, such information is enough to compute the lengths of any given trajectory U (s). In practice, as noted in [14], there is a technical obstruction which enormously complicates the problem. In this section, we want to present such technicality, since in the applications below we will have to deal with it.
To have a specific situation in mind, consider that we want to compute the cost of Heisenberg time evolution, which is the length of the following orbit:

U(t) = O(t) ,    (2.34)

where O(t) ≡ e^{iHt} O e^{−iHt} and O is a unitary perturbation. To compute the cost, we first need to extract the instantaneous Hamiltonian H̃(t) that is being applied at each differential amount of time along the evolution. This was derived in the previous section to be:

H̃(t) = i (dO(t)/dt) O(t)^† .    (2.35)

Quite surprisingly, this simple-looking equation is difficult to handle in general, even when the exact O(t) is known. The reason can be seen as follows. At linear order in dt, we can write the following equation:

O(t + dt) = e^{−i H̃(t) dt} O(t) .    (2.36)

Given such a relation, the instantaneous H̃(t) can be found in terms of O(t) and dO(t)/dt by means of a version of the Baker-Campbell-Hausdorff formula. Writing O(t) = e^{−iA(t)}, it reads:

H̃(t) = dA/dt − (i/2!) [A, dA/dt] + ((−i)²/3!) [A, [A, dA/dt]] + · · · .    (2.37)

So to compute the cost for a given O(t), we need to evaluate (2.37) and insert it into the metric (2.23). Given the previous chain of nested commutators, this certainly seems a challenging task. Below we will see how such a task can be accomplished when the unitary trajectory belongs to some symmetry group. In such cases, the group structure allows resummation of the series.
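For the Heisenberg orbit itself, the defining relation H̃(t) = i (dU/dt) U† can actually be evaluated in closed form, H̃(t) = U(t) H U(t)† − H, which gives a convenient check of the machinery in a toy model. The random 4×4 matrices below are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(2)
d = 4

def random_hermitian():
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (A + A.conj().T) / 2

def expm_h(M, t):
    """exp(i M t) for Hermitian M, via its spectral decomposition."""
    ev, V = np.linalg.eigh(M)
    return V @ np.diag(np.exp(1j * ev * t)) @ V.conj().T

H = random_hermitian()                # the physical Hamiltonian
O = expm_h(random_hermitian(), -1.0)  # a unitary perturbation O = e^{-iG}

# Heisenberg orbit U(t) = e^{iHt} O e^{-iHt}.
Ut = lambda t: expm_h(H, t) @ O @ expm_h(H, -t)

# Finite-difference instantaneous Hamiltonian H_tilde(t) = i (dU/dt) U^dagger.
t, dt = 0.5, 1e-6
H_fd = 1j * ((Ut(t + dt) - Ut(t)) / dt) @ Ut(t).conj().T

# Closed form for this orbit: H_tilde(t) = U(t) H U(t)^dagger - H.
H_exact = Ut(t) @ H @ Ut(t).conj().T - H
print(np.max(np.abs(H_fd - H_exact)))  # small, O(dt)
```

Note that H̃(t) here mixes with whatever operators appear in U(t) H U(t)†, so whether the growth is 'simple' or not depends on how this combination spreads over penalty sectors.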

Simple growths vs operator growths
To end with these preliminaries, we want to make an important and simple remark. There are two qualitatively different ways in which computational costs can increase along a unitary trajectory. Consider that the initial Hamiltonian is given by one particular generator, say H̃(0) = T_{νj}. 4 We could have a situation in which the initial generator continues to be the instantaneous Hamiltonian at all points in the trajectory:

H̃(s) = x^{νj}(s) T_{νj} ,    (2.38)

where there is no summation in the last expression. Plugging such a formula into any of the metrics defined in the previous sections, we observe that the cost is given by:

C(U) = p_ν ∫ ds | x^{νj}(s) | .    (2.39)

We will call such cases 'simple growths' since they do not imply a mixing of the initial generator with other generators as we proceed along the unitary trajectory. These cases are simpler because the time dependence is all encoded in the intensity change x^{νj}(s), which is a fairly conventional expectation value. The specific penalty factors are not important for understanding the dynamics; they just factor out. An explicit example is e^{−iHt}, whose cost is given by C(U) = E t. But there are other non-trivial situations of this simple sort, as we show below. A related situation is one in which the initial generator gets mixed with other generators, but only with those with the same penalty factor p_ν. In such a case, the cost expression (2.39) still holds. These simple cases appear naturally when considering unitary paths generated by elements of a symmetry group, as we exploit below and more systematically in [20]. The second situation concerns mixing of the initial generator with generators of different penalties as we proceed along the unitary trajectory. This is obviously the 'complicated' scenario, which we will term operator growth, as in [22,36]. Quite interestingly and counterintuitively, we will see that in holographic dualities both types of growth seem to be dual to each other.
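A minimal numerical version of a simple growth, with a hypothetical penalty and intensity profile, shows how the penalty just multiplies an ordinary integral:

```python
import numpy as np

# Simple growth: H_tilde(s) = x(s) T for a single unit-normalized generator T,
# so the cost reduces to C = p_nu * integral of |x(s)| ds (the penalty factors out).
p_nu = 2.0               # hypothetical penalty of this direction
x = lambda s: np.cos(s)  # assumed intensity profile

# Midpoint-rule integration of |x(s)| over s in [0, pi/2].
n = 100_000
ds = (np.pi / 2) / n
s_mid = (np.arange(n) + 0.5) * ds
C = p_nu * np.sum(np.abs(x(s_mid))) * ds

print(C)  # ~ 2.0, since the integral of cos over [0, pi/2] is 1
```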

JHEP09(2018)043

Quantum complexity and gravity
In this section, we apply the previous ideas to study specific aspects of quantum gravity, such as the behavior of computational costs under certain general coordinate transformations, their connection to quantum chaos, and their dynamics in SYK. We go from the simplest examples towards the more complex ones, so we start by analyzing the cost of symmetry transformations.

The cost of symmetry
In this section, we study the cost of various symmetry transformations. First, notice that the geometric approach to complexity is basically equal to the geometric view of Lie groups in physics. In particular, the cost function of an infinitesimal transformation is a norm on the Lie algebra of the theory, while finite distances are obtained by composing infinitesimal ones. For concreteness, consider a quantum theory in which a certain continuous symmetry group G acts naturally on the Hilbert space and on the operator algebra. Natural 'gates' in this system are symmetry transformations U(g_i), where g_i ∈ G and U(g) is a representation of G in the Hilbert space. To study continuous paths, we can increase the number of gates to infinity, while decreasing the strength of each unitary. In the continuous limit, a gate is given by an infinitesimal symmetry transformation, which can be expanded in the Lie algebra of the group G, where the t_a are the hermitian generators of the symmetry group. In this situation, the instantaneous Hamiltonian H̃(θ) is a particular element of the Lie algebra, and the cost function is a norm on the algebra. If one of the generators is the Hamiltonian of the system, we can explore time evolution, but in general, there will be other directions in the symmetry group to explore. As we are going to see, there are two important advantages of using symmetry groups as the 'gate' set. The first is that for symmetry group transformations (at least the ones we consider), the penalty functions will be fixed up to a global choice of units. In this sense, symmetry transformations belong to the simple class described above, where the cost functions are just given by expectation values in the appropriate state. The second advantage is that the group structure allows us to handle the computation of instantaneous Hamiltonians. In this article, we consider rotations, boosts and some general coordinate transformations important for black hole physics.
In [20] we consider the Virasoro group.
Let's start with the simplest example, which is that of SU(2). The generators are the three components of the angular momentum. If the theory is rotationally invariant, the penalty factors associated to each direction must be the same, p_{J_x} = p_{J_y} = p_{J_z}. We can thus fix the global units so that the penalties are equal to 1. The cost function simplifies to:

F(H̃(θ)) = √(⟨ψ| H̃(θ)† H̃(θ) |ψ⟩) . (3.3)

For example, for a finite rotation of angle θ around the unit vector n, we have U(θ) = e^{−iθ J·n}, and the cost grows linearly with θ, as shown in (3.5). Such a trajectory is moreover a minimal geodesic if the state considered in (3.3) is the maximally mixed state; see [20] for a general proof and references therein.5 For the very same reason, the complexity of a translation is given by C = p x, where p is the state momentum and x is the traversed distance. Here as well, unitaries driven by constant momentum operators define minimal geodesics in the submanifold of the unitary group associated to the subgroup of translations. This is due to the abelian nature of the group, which implies that the complexity manifold is flat in those directions. This is because all nested commutators that appear in the computation of the instantaneous Hamiltonian vanish. Therefore, the metric does not depend on the point chosen, labelled by P_ρ; it only depends on the instantaneous velocities dP_ρ. The manifold is thus diffeomorphic to flat space, and minimal geodesics are given by straight lines, i.e. unitary trajectories driven by constant momentum operators (for example the Hamiltonian). One interesting aspect of these observations is that, looking at the complexity of Hamiltonian time evolution alone, it is very opaque what the consequence of complexity minimization is. On the other hand, already at the level of simple symmetry transformations like rotations or translations, we see that to minimize complexity we need to minimize the path lengths in the space-time manifold on which the symmetries are acting. In other words, particles moving through geodesics in space-time are those that minimize their associated computational costs (at least for geodesics defined by symmetry flows in the manifold).
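As a numerical sanity check of the rotation cost (our toy example, using the spin-1/2 representation; the state and axis are arbitrary choices), one can verify that U(θ) = e^{−iθ J·n} is driven by the constant instantaneous Hamiltonian J·n, so the cost rate is θ-independent and the cost grows linearly:

```python
import numpy as np
from scipy.linalg import expm

# Spin-1/2 representation: J_i = sigma_i / 2 (a toy check of (3.3)-type costs)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
J = [s / 2 for s in (sx, sy, sz)]

n = np.array([1.0, 2.0, 2.0]); n /= np.linalg.norm(n)   # rotation axis (arbitrary)
Jn = sum(ni * Ji for ni, Ji in zip(n, J))

def U(theta):
    return expm(-1j * theta * Jn)

# Instantaneous Hamiltonian H(theta) = i (dU/dtheta) U^dagger -- constant = J.n
eps, th = 1e-6, 0.7
H_inst = 1j * (U(th + eps) - U(th - eps)) / (2 * eps) @ U(th).conj().T
assert np.allclose(H_inst, Jn, atol=1e-5)

# Cost rate F = sqrt(<psi| (J.n)^2 |psi>) is theta-independent: linear cost growth
psi = np.array([0.6, 0.8], dtype=complex)
rate = np.sqrt(np.real(psi.conj() @ (Jn @ Jn) @ psi))
print(rate)   # spin-1/2: (J.n)^2 = 1/4, so the rate is 1/2 for any unit state
```

The same check works in any representation; only the value of ⟨(J·n)²⟩ changes.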

Rotations of the angular momentum
Let us slightly complicate the scenario and ask for the cost of the following rotation:

e^{iJ_x(θ)} = e^{iJ_z θ} e^{iJ_x} e^{−iJ_z θ} = e^{i(J_x cos θ − J_y sin θ)} . (3.7)

This would be the simplest analogue of Heisenberg time evolution. To compute the cost of (3.7), we need to find the instantaneous Hamiltonian H̃(θ). Given the group structure, the nested commutators oscillate between −iJ_z and −J_y, so the previous series can be easily resummed to:

(3.9)
Having this expression, it is straightforward to compute the evolution of the cost for any given state using (3.3). This example shows how group structures allow exact evaluations, and how one actually computes computational costs.
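The group identity (3.7) underlying this resummation can be checked numerically in the spin-1/2 representation (a minimal sketch; any faithful representation would do):

```python
import numpy as np
from scipy.linalg import expm

# Numerical check of e^{i Jz t} e^{i Jx} e^{-i Jz t} = e^{i(Jx cos t - Jy sin t)},
# i.e. eq. (3.7), in the spin-1/2 representation.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
Jx, Jy, Jz = sx / 2, sy / 2, sz / 2

t = 1.3   # arbitrary rotation angle
lhs = expm(1j * Jz * t) @ expm(1j * Jx) @ expm(-1j * Jz * t)
rhs = expm(1j * (Jx * np.cos(t) - Jy * np.sin(t)))
assert np.allclose(lhs, rhs)
print("identity verified")
```

Once the rotated generator J_x cos θ − J_y sin θ is known in closed form, its norm in the chosen state gives the cost rate directly.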

Boosts
This simple analysis becomes more interesting for the Lorentz group. To the already considered angular momentum J, linear momentum P and Hamiltonian H, we need to add the boost vector K. Without loss of generality, consider a boost K_x along the x direction. The cost of the boost itself is trivial, given by the norm of K_x in the state. More interesting is the behavior of the relative cost associated to the boost of the linear momentum. For homogeneous Lorentz transformations, the linear momentum transforms as a vector. Therefore, if the initial unitary is a displacement by x_ρ in the position of the state, we have:

e^{iK_x η} e^{iP_ρ x^ρ} e^{−iK_x η} = e^{iP_ρ(η) x^ρ} = e^{iΛ^ρ_μ(η) P^μ x_ρ} , (3.14)

Now, since the group of translations is abelian, all nested commutators that appear in the computation of the instantaneous Hamiltonian vanish, and we simply get the boosted momentum operator. Besides, since the instantaneous Hamiltonian is just a linear combination of operators with the same penalty, the cost reduces to the standard norm in the considered state. If such a state has momentum p_μ, we conclude that the behavior of the computational cost under Lorentz boosts is simply given by (3.17). Using (3.17), for a massless state with momentum p^μ_1 = (p, −p, 0, 0), the associated costs to initial displacements ∆t and ∆x grow exponentially with the hyperbolic angle; for a massive state with momentum p^μ_2 = (m, 0, 0, 0) the growth has a prefactor set by the mass. Finally, for a massive state with velocity p^μ_v = (p, −v, 0, 0) and large hyperbolic angle we have:

(3.20)
Notice that the relativistic causality bound, stating that nothing can travel faster than the speed of light, has a precise imprint on the possible complexity growths. In particular, it bounds the prefactor of the exponential growth to be less than or equal to the time component of the momentum multiplied by the initial displacement. Indeed, since the relativistic bound is v ≤ p, we have (p + v)/2 ≤ p and:

C(e^{iP_t(η)∆t}) ≤ p ∆t e^η . (3.21)
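The scaling behind (3.14)-(3.21) can be sketched numerically (our sign and index conventions, which may differ from the suppressed equations; only the large-η behavior matters):

```python
import numpy as np

# Boost action on a two-momentum (p^0, p^x).
def boost(eta):
    return np.array([[np.cosh(eta), np.sinh(eta)],
                     [np.sinh(eta), np.cosh(eta)]])

eta = 10.0   # large hyperbolic angle

# Massive state at rest, p = (m, 0): p^0(eta) = m cosh(eta) ~ (m/2) e^eta
m = 1.0
p_rest = boost(eta) @ np.array([m, 0.0])
assert np.isclose(p_rest[0], (m / 2) * np.exp(eta), rtol=1e-8)

# Generic state p = (p, v): the prefactor of e^eta is (p + v)/2,
# bounded by p when the causality bound v <= p holds
p, v = 1.0, 0.8
p_boosted = boost(eta) @ np.array([p, v])
prefactor = p_boosted[0] / np.exp(eta)
assert np.isclose(prefactor, (p + v) / 2, rtol=1e-6)
assert prefactor <= p   # relativistic bound on the cost growth prefactor
print(prefactor)
```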

Chaos and black holes
Building upon previous results, in this section we describe how computational costs are sensitive to the universal behavior of black holes. The first main observation is that, given the equivalence principle (not necessarily at the horizon), complexity has to grow exponentially with a rate controlled by the redshift factor. This implies that it grows with the maximal Lyapunov exponent derived in [9]. The second observation is that the exponential growth is a universal aspect which does not depend on details of the infalling particle, nor even on its infalling velocity. Details of the infalling velocity are encoded in the prefactor (which otherwise is still universal with respect to the nature of the particle). Letting the infalling velocity approach the speed of light suggests a bound on such a prefactor. To derive these aspects from a general standpoint, we consider the following (d + 2)-dimensional geometry, which may admit a dual (d + 1)-dimensional field theory formulation at finite temperature. Here, F(ρ) is the warp factor controlling the asymptotic behavior of the geometry at large ρ, and h(ρ) models thermal effects. It has a simple zero at the horizon, h(ρ_0) = 0, and

approaches unity at large values of ρ. The Hawking temperature can be found by the usual Euclidean formalism. The blackening factor h(ρ) is not universal, since it depends on the black hole considered. But, as is well known, it shows a definite universal structure near the horizon. This can be seen by taking the near horizon limit, in which F(ρ) → F_0 and h(ρ) → h′_0 (ρ − ρ_0), and introducing the proper distance ρ_p to the horizon. Measuring radial distances with such a coordinate, and using the relation for the Hawking temperature, the metric shows its well known universal character:

ds² = ds²_univ + ds²_⊥ ,

where ds²_univ concerns the universal part, and ds²_⊥ stands for the transversal coordinates. There is no real universality coming from the transverse metric, apart from the trivial flat space approximation for sufficiently small horizon patches. For the present purposes, transverse directions play no role, since we will be considering radial geodesics for which dℓ² = 0.
The universal behavior of the time and radial parts of the metric can be made more recognizable by defining the dimensionless time variable ω = 2πT t, which brings it to Rindler form. This neatly shows that the near horizon region is just flat space in general relativity, and facilitates the coordinate transformation that takes us to the usual Minkowski manifold, given in (3.28). Given the previous coordinate transformation, and defining the usual proper time variable as dτ = ρ_p dω, the transformation between the momentum operators associated to each reference frame is given by:
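Collecting these steps in formulas (our rendering of the suppressed equations; normalization conventions may differ):

```latex
% Near-horizon universal metric in Rindler form, with \omega = 2\pi T\, t:
ds^2_{\rm univ} \simeq -\rho_p^2\, d\omega^2 + d\rho_p^2 .

% The map to the Minkowski frame is
T_M = \rho_p \sinh\omega, \qquad X_M = \rho_p \cosh\omega
\;\;\Longrightarrow\;\;
ds^2_{\rm univ} = -dT_M^2 + dX_M^2 ,

% so the static (Rindler) and freely falling (Minkowski) frames are
% related by a boost of rapidity \omega.
```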

These relations just state that the transformation from the Minkowski frame to the Rindler frame is a time dependent Lorentz boost. Since the coordinate transformation is a Lorentz boost, the results of the previous section apply. As long as the equivalence principle holds, freely falling trajectories will have constant momentum p in the Minkowski frame. This implies that the cost of a massless infalling state with momentum p^μ_1 = (p, −p, 0, 0), associated to initial displacements ∆τ and ∆ρ, grows exponentially in ω, while for a massive state with momentum p^μ_2 = (m, 0, 0, 0) the same exponential growth appears with a different prefactor. Since ω = (2π/β) t, the growth is exponential in t (3.34) in the first scenario, and similarly in the second. For a general infalling state with momentum p^μ_v = (p, −v, 0, 0), we would obtain the analogue of (3.20). There are a couple of important observations we can draw from these results. The first is that relative computational costs are sensitive to the universal structure of black holes, as dictated by their near horizon regions. These computational costs are not 1/N effects, but O(1) features that neatly codify the universal structure, as we were seeking in the introduction. The second observation is that this result rests on the equivalence principle: if the momentum operators in a freely falling frame are constant, as they must be if the equivalence principle holds, then the costs associated with an outside observer grow with the maximal Lyapunov exponent. The third observation is that the universal Lyapunov growth applies to all freely falling trajectories. It even applies to particles moving faster than the speed of light. In

this sense, the Lyapunov growth might also apply to bulk theories with causality violations. On the other hand, the specifics of the infalling particle velocity neatly appear in the long-time asymptotics of the prefactor accompanying the exponential growth. This prefactor is still universal. It does not depend on the nature of the particle, just on its four-momentum (its infalling trajectory). This observation suggests a further bound on the growth of chaos for quantum theories having local gravity duals (at least as defined by complexity evolution). From the gravity perspective, the strongest growth is obtained by saturating causality at the local Minkowski level and letting the infalling particle move at the speed of light. Looking at the previous formulas, the results suggest that for theories with causal gravity duals we expect the bound (3.37) for the behavior of the complexity of the momentum operator associated to the infalling particle. We stress that the new part of the bound is in the prefactor, and that ∆τ is the initial displacement, which sets the initial perturbation. It would be nice to have a clear dual of this growth, which is otherwise totally rooted in the growth of the radial momentum and the proper energy of the infalling particle, which are bounded by the previous relation without the initial displacement prefactor. Recently, in [13] it has been proposed that such growths might be related to the size of the dual operator, as defined below when considering the cost growth of SYK. In the SYK scenario, we will see that indeed the cost growth is controlled by the operator size. The problem with the operator size is that it is a quantity specially built for spin systems, and not so clearly defined for QFT's. During the discussion of the penalty functions in CFT's (2.2), we noticed that due to the operator product expansion, we do not need to include operator products. We just need to include local operators of all possible scaling dimensions.
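In formulas, combining the boost results with ω = 2πt/β, the bound discussed here can be sketched as (our rendering of the suppressed expression; p denotes the proper energy of the infalling particle and ∆τ the initial displacement):

```latex
\mathcal{C}\big(e^{iP(t)\Delta\tau}\big) \;\lesssim\; p\, \Delta\tau\, e^{\omega}
\;=\; p\, \Delta\tau\, e^{\lambda_L t},
\qquad
\lambda_L = \frac{2\pi}{\beta}\, ,

% saturated when the infalling particle moves at the speed of light.
```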
From this perspective, what grows under Heisenberg time evolution is the average scaling dimension of the perturbed operator,6 where we remind that the average scaling dimension might be defined as in (2.31). We thus expect a duality between the growth of proper energy and the growth of scaling dimensions in the context of AdS/CFT. We remind that, from this scaling dimension perspective, penalty factors just allow observing the dynamics of such scaling dimensions.
This proposal is interesting for various reasons. First, it is well known that there is a precise relation between energies in AdS and scaling dimensions in the boundary. This is valid for any space-time dimension. In other words, in the context of AdS/CFT, scaling dimensions gravitate. It is thus natural to relate the growth of proper energy and momentum of the infalling particle to the growth of the average scaling dimension of the dual operator. Besides, if the growth of the scaling dimension continues for a sufficiently long time, we will eventually need to account for its backreaction on the geometry. This would explain the expected backreaction of the infalling particle in the gravitational description, a feature that lies at the root of the behavior of out of time-ordered correlation functions [9]. The second interesting aspect is that, if such a duality is correct, from the previously found behavior of proper energies and relation (3.37), we expect an exponential growth for such average scaling dimensions and a universal behavior of the prefactor. More concretely, we expect a bound of the type (3.38), where ∆ is the average scaling dimension of the perturbed operator. In the next section, when analyzing the cost growth in SYK, we will describe these features as well. At infinite temperature, the lower bound we are able to compute does not saturate the previous one, giving hope that it is indeed a non-trivial bound.

The cost of operator growth in SYK
In section 2.4 we explained how computational costs simplify whenever the initial operator does not mix with other operators, or whenever it just mixes with other operators of equal penalties. These were called 'simple growths'. In the context of AdS/CFT [1], the black hole analysis we have performed would apply to the bulk description, in which the theory is weakly interacting and operators do not grow, in the sense of [22,36]. But complexity does grow, and it does so in a very non-trivial exponential manner, as we just described.
To try to understand this exponential complexity growth from a dual perspective, we can seek to compute the cost of Heisenberg time evolution in the thermal state. This seems a challenging task. Since the dual theory is strongly coupled, the evolution of O(t) is not going to be simple at all, and the operator will mix with operators associated with different penalties. We thus need to take care of the penalties by using formula (2.23), or its CFT version (2.27). Now, for generic theories, even knowing the dynamics of operator growth, the computation seems challenging. As explained in section 2.3, this is because once we have O(t) and dO(t)/dt, we need to insert them in the expression for the instantaneous Hamiltonian (2.37), find all nested commutators, and add them up.
For the time being, this computation seems out of reach. We will content ourselves with evaluating a lower bound for the evolution of the computational cost in the case of SYK, using the recent results of ref. [22]. SYK models [37][38][39][40] are models of N Majorana fermions interacting through random q-body interactions (3.40). Each term in the sum contains q Majorana fermions, and the couplings are real random numbers with zero mean and variance ⟨J²_{i_1···i_q}⟩ = J²(q−1)!/N^{q−1}. Although the motivations to study these models seem very well known by this time, let us describe them briefly here for completeness. First, these models have an infrared conformal phase, and were shown by Kitaev to have holographic duals and to saturate the chaos bound [37][38][39]; see [40] for a complete discussion. Second, this is a new class of solvable models in the large-N limit, intimately connected with the previously known tensor models [41,42]. Also, the zero temperature entropy reproduces black hole entropy.
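A small instance of the SYK Hamiltonian (3.40) can be built explicitly (a toy sketch: our Jordan-Wigner construction of the Majoranas, with the normalization χ² = 1 used in the text and the coupling variance quoted above):

```python
import itertools
import math
import numpy as np

# Build a dense SYK Hamiltonian for N Majoranas with q-body couplings.
rng = np.random.default_rng(0)
N, q, J = 8, 4, 1.0
n_qubits = N // 2

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def kron_chain(ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

# Jordan-Wigner Majoranas: chi_{2k} = Z..Z X I..I, chi_{2k+1} = Z..Z Y I..I
chi = []
for k in range(n_qubits):
    for P in (X, Y):
        chi.append(kron_chain([Z] * k + [P] + [I2] * (n_qubits - k - 1)))

for c in chi:
    assert np.allclose(c @ c, np.eye(2 ** n_qubits))   # chi^2 = 1

# Couplings with variance J^2 (q-1)! / N^(q-1), as in the text
sigma = J * np.sqrt(math.factorial(q - 1) / N ** (q - 1))
H = np.zeros((2 ** n_qubits, 2 ** n_qubits), dtype=complex)
for idx in itertools.combinations(range(N), q):
    term = np.eye(2 ** n_qubits, dtype=complex)
    for i in idx:
        term = term @ chi[i]
    H += rng.normal(0.0, sigma) * (1j ** (q // 2)) * term

assert np.allclose(H, H.conj().T)   # the i^{q/2} prefactor makes H hermitian
print(H.shape)
```

With the Majoranas in hand, Heisenberg-evolved operators such as χ_1(t) and their expansions can be computed by brute force for small N.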
For the concerns of this article, SYK is also interesting because it is both a spin system and a CFT, so it is the perfect setup to test possible generalizations of the Nielsen approach to spin systems. In particular, in exact analogy with the case of N spin degrees of freedom, where any instantaneous Hamiltonian can be expanded in the basis of generalized Pauli matrices, in the present scenario we can expand any instantaneous Hamiltonian in products of Majorana fermions. Hermiticity of H̃ implies that the coefficients are either real or pure imaginary, and in this case they can be easily obtained by defining the standard inner product with respect to ρ_mixed = 𝟙/2^{N/2}, the maximally mixed density matrix in the Hilbert space of N Majorana fermions. We have normalized the fermions so that χ² = 1. Now notice that, on average, the SYK model is invariant under a relabelling of the fermions. This implies that all operators of size s, i.e. operators of the form χ_{i_1} ··· χ_{i_s}, have the same average scaling dimension.7 Equivalently, the scaling dimension is a function of the size of the operator, ∆ = f(s). We conclude that in SYK the penalties can be equivalently defined in terms of s or ∆, giving strong support to the idea that in general CFT's it is the scaling dimension that should be 'punished', as put forward in section 2.2. Following the steps described in section 2, it is natural to define a projector into the space of equal penalty factors, defined there as the space of equal scaling dimension ∆. In SYK, projectors P̂_{∆(s)} into the space of equal size operators are naturally organized by their average scaling dimension; the norm (P̂_{∆(s)}(H̃(t)), P̂_{∆(s)}(H̃(t))) collects the squared coefficients of the size-s operators. Using (2.27), the cost of such a Hamiltonian in the infinite temperature or maximally mixed state is:

Now we consider perturbing the thermal state with the unitary V = e^{iχ_1}. This is like setting the first fermion in a certain coherent state. As time evolves, V(t) = e^{iχ_1(t)}, where χ_1(t) = e^{iHt} χ_1 e^{−iHt} is the usual Heisenberg time evolution. Such an operator can be expanded in the basis of Majorana products. This expansion was studied recently in [22], where a closed result was obtained in the limit of large q. To compute complexity, we need to extract the instantaneous Hamiltonian driving the unitary at each differential amount of time. This is generically given by (2.37). Given the random nature of the dynamics, a lower bound on the growth can be found just by taking the first term, since the inclusion of all other terms will only increase the cost of the operator. The first term is the time derivative dχ_1(t)/dt. Using (3.46), the cost of such an operator is controlled by a distribution P̃_s(t), and we need to relate P̃_s(t) to the original P_s(t). Since the phases of the coefficients in the expansion (3.41) are constant in time, the relation follows immediately. To finish the computation we just need to insert the penalties, perform the sum, and integrate over time. We will explore a polynomial family of penalties, p_∆ = ∆^r. Recalling that the scaling dimension of the fermions is 1/q, in the large-N limit the average scaling dimension of χ_{i_1} ··· χ_{i_s} is ∆_χ = s/q. Combining all details, we finally arrive at:

C(e^{iO(t≫1/J)}) ≥ c_r e^{rJt}/√q = c_r e^{rλ_L t/2}/√q , (3.55)

where c_r is a constant that depends on r and can be computed case by case; the first two cases are c_1 = 1/√2 and c_2 = 1/4^{3/2}. Also, we have used the expression for the SYK Lyapunov exponent at infinite temperature, λ_L = 2J.
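The growth of the average size driving this bound can be illustrated with a toy model (our stand-in for the large-q distribution of [22]: a geometric size distribution P_n = (1 − x) x^n with x = tanh²(Jt) and sizes s_n = 1 + n(q − 2); exact prefactors are not reproduced):

```python
import numpy as np

# Toy check: with the geometric size distribution above, the mean size
# grows as e^{lambda_L t} with lambda_L = 2J at late times.
J, q = 1.0, 64
lam = 2 * J

def mean_size(t, nmax=20000):
    n = np.arange(nmax)
    x = np.tanh(J * t) ** 2
    P = (1 - x) * x ** n          # normalized geometric size distribution
    s = 1 + n * (q - 2)
    return np.sum(P * s)

# mean n = x/(1-x) = sinh^2(J t), so mean size ~ (q-2) e^{2 J t} / 4 at late t
for t in (3.0, 4.0):
    ratio = mean_size(t) / ((q - 2) * np.exp(lam * t) / 4)
    assert abs(ratio - 1) < 0.05
print(mean_size(4.0))
```

For penalties p_∆ ∝ ∆, the cost rate then tracks the mean size, reproducing the e^{λ_L t} behavior of the bound.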
To summarize, relation (3.55) is a lower bound on the computational cost growth of Heisenberg time evolution in SYK. Observe that all penalty choices, characterized by r, are sensitive to the chaos exponent. Qualitatively, at least in this case, the penalty choice does not affect the main feature (the growth characterized by the Lyapunov exponent). Indeed, remembering that p(w) is the penalty associated to the weight w, the previous expression can be written more succinctly as (3.56), where the average weight of the operator O(t) grows as w̄_{O(t)} ∼ e^{λ_L t}.
We observe that to match the expected chaos growth we should choose p_∆ = ∆ (equivalently p(w) = w). This result fits quite well with the arguments developed in section 2.2.1. For such a penalty choice, the cost of the operator is a natural physical quantity to consider: it is just the average of the squared scaling dimension.
Notice also that the average scaling dimension itself is just given by (3.57). Given the proposal of the last section, this should be dual to the growth of proper energy (3.37). In this case, our proposal coincides with the proposal of [13], but it is now understood as a very subtle example of the duality between energy and scaling dimensions in AdS/CFT. As commented before, we remark that the penalty choice for which the cost of Heisenberg time evolution exactly matches the chaos growth is not the same as the one chosen in [19]. In ref. [19], an exponential dependence between penalties and weights was chosen so as to ensure that complexity can grow until times of order O(e^S), where S is the entropy of the system. Given eq. (3.55), such a proposal implies that the cost of Heisenberg time evolution is doubly exponential. Although this might seem inconsistent at first sight, it might happen that, although the cost growth is doubly exponential, the complexity growth is actually exponential. This requires a strong bending of the complexity manifold in directions not associated with the ones drawn by Hamiltonian time evolution.8 At the present moment we do not have enough tools to discern which choice is the correct one, but the present results transparently show the physical differences between the two choices.
As a final remark, notice also that the growth (3.57) does not saturate the bound (3.38), given the 1/2 prefactor. Here 1/q would be the initial energy, corresponding to the scaling dimension of the initially perturbed fermionic degree of freedom. Of course, we are computing the growth at infinite temperature. It is possible that saturation occurs at low temperatures, where the Lyapunov exponent also saturates to its maximal value. But at any rate, this suggests that the bound (3.37) is not trivial, since it is not saturated by default. It

would be interesting if it is able to discriminate between theories with maximal Lyapunov growth but non-local gravity duals.

Saturation to linear growth after the scrambling time
The complexity of the operator e^{iχ_1(t)} has been shown to be controlled by the growth of the operator χ_1(t). The consequence is that complexity grows exponentially fast, at a rate controlled by the chaos exponent. But such growth cannot continue forever. Soon after the operator has reached a size of O(N), there is no more room to grow, and the operator growth process must saturate. More concretely, notice that the expansion of χ_1(t) in the Majorana basis can be understood as defining a probability distribution P_i(t). The reason is that if χ_1(0) = χ_1, then we have Σ_i P_i(t) = 1 for all times. Moreover, Heisenberg time evolution drives such a distribution to the uniform one at times greater than the scrambling time [22]. The intuition is that at long times we can approximate the operator by a random operator, in which the probability of each individual basis element is just the inverse of the total number of them. This is in the same spirit as the usual explanation of quantum thermalization by means of random states, see for example [53][54][55], and indeed it can be understood in similar terms, as we explain in the next section. This same intuition holds for dχ_1(t)/dt. Denoting its expansion coefficients analogously, we observe again that the sum of their squares is constant in time; for example, in SYK at large q such a constant is easily found to be 2J²/q. Since the sum of the squares is constant, the expansion coefficients of the derivative also behave as a probability distribution. More interestingly, this argument holds as well for the exact instantaneous Hamiltonian (3.62). Even if this is a complicated expression, we will always be able to write it in the complete basis of Majorana products:

The interesting observation is that, given the exact form (3.62), the sum of the squared coefficients, Tr(H̃†(t)H̃(t)), is constant in time. This is because such a statement is valid term by term in (3.62), since general time evolved operators preserve their trace norm. For the same reasons as for χ_1(t), we expect dχ_1(t)/dt and the instantaneous Hamiltonian to reach stationarity at long times. These time scales are obviously of the same order as the time by which the operator χ_1(t) itself reaches stationarity. For H̃(t), this means that, on average, at long times all coefficients are equal to H̃²/Ω, where Ω is the total number of basis elements, Ω → 2^N at large N. The complexity growth at long times (longer than the scrambling time) is thus linear in time. To compute the proportionality factor, the only thing that needs to be found is H̃. Again, this is a difficult task, but one that might actually be achieved. This is because to compute H̃, and therefore the growth rate at long times, we do not need to go to long times. Since H̃ is constant, we can compute it at any non-zero small time, and we expect that simplifications or approximations can be made. We hope to report on this in the future.
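The exponential-to-linear transition can be illustrated with a toy model (ours, not the exact SYK dynamics): let the operator size follow a logistic growth that saturates at the finite basis size, and let the complexity growth rate track the size, in the spirit of (3.56):

```python
import numpy as np

lam, s_max, dt = 1.0, 1.0e4, 1e-3   # Lyapunov rate, saturation size, time step
n_steps = 20000                      # integrate up to t = 20
s = np.empty(n_steps); s[0] = 1.0
for i in range(1, n_steps):
    s[i] = s[i-1] + dt * lam * s[i-1] * (1 - s[i-1] / s_max)   # logistic size

C = np.cumsum(s) * dt                # complexity = integrated cost rate

early = s[2000] / s[1000]            # growth factor over one early time unit
late_slope = (C[-1] - C[-1001]) / (1000 * dt)   # slope over the last time unit
assert abs(early - np.exp(lam)) / np.exp(lam) < 0.05   # exponential regime
assert abs(late_slope - s_max) / s_max < 0.01          # linear regime
print(early, late_slope)
```

The crossover happens around the time when e^{λt} reaches the saturation size, i.e. at a scrambling-like time scale.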

Long times, Lloyd's bound and bulk duals
The exponential complexity growths derived for black holes and SYK (relations (3.37) and (3.55)) might lead to an inconsistency with Lloyd's bound [24]. In the geometric approach to quantum complexity, Lloyd's bound simply states that the maximal complexity growth is attained by constant Hamiltonian evolution. This is simply the linear growth (3.67), where M is the mass of the black hole or the total energy of the system. For perturbations around equilibrium, the growth found in this article is exponential, where E is the energy of the perturbation and the proportionality factors depend on the initial conditions, see (3.2). Although the exponential growth is certainly fast, for small perturbations and for times smaller than the scrambling time it is actually slower than the linear growth (3.67). This is because there is a hierarchy between M and E, given by M ∼ SE, where S is the entropy of the black hole. But if such exponential growth continued forever, eventually it would bypass Lloyd's bound, leading to a certain tension. From the bulk description in AdS/CFT, it was shown in [23] that such growth does not continue forever. For times larger than the scrambling time, where one would begin to violate Lloyd's bound, we need to include the backreaction of the perturbation on the geometry. This implies a linear growth at long times, see also [21]. It is interesting that this sudden change of dynamics, as determined by general relativity, seems to be anchored in the previous argument, which states that (3.67) is the true maximal complexity growth.
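The crossover time where the exponential growth E e^{λ_L t} would overtake the maximal linear growth M t can be estimated numerically (illustrative numbers, ours; only the logarithmic scaling with S matters):

```python
import numpy as np
from scipy.optimize import brentq

beta = 1.0
lam = 2 * np.pi / beta      # maximal Lyapunov exponent, lambda_L = 2 pi / beta
S = 1.0e10                  # entropy (illustrative value)
E = 1.0                     # energy of the perturbation
M = S * E                   # hierarchy M ~ S E

# Crossover: solve E e^{lam t} = M t for the upper root
t_star = brentq(lambda t: E * np.exp(lam * t) - M * t, 1e-6, 10.0)
t_scr = np.log(S) / lam     # scrambling-time scale ~ (beta / 2 pi) log S
assert abs(t_star - t_scr) / t_scr < 0.2   # same scale, up to log corrections
print(t_star, t_scr)
```

This is the usual statement that the tension with Lloyd's bound only arises at times of order the scrambling time, precisely where backreaction becomes important.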
Quite strikingly, such a dynamical transition was explicitly seen in SYK. It is ultimately due to the saturation of the operator growth process, which takes us from exponential to linear growth in the evolution of complexity. From the dual theory point of view, it is essentially a finite size effect: if the entropy is finite, the operator cannot grow forever, and the transition to linear growth will occur at sufficiently long times. What is interesting is that such a finite size effect is fully captured by the classical dynamics of general relativity, which otherwise is expected to capture only semiclassical aspects. In this section, we want to argue that these ideas generalize to any dual theory.
For a generic theory, one would consider a perturbation at t = 0 of the type e^{iO(0)}, for some given observable O. In QFT's this perturbation could be a smeared operator over some time slice. As time evolves, the operator mixes in a complicated way, but at long times it will reach a simple stationary behavior when projected over some state, for example the thermal one. Since time evolution leaves the canonical density matrix invariant, it follows that:

Tr(ρ_β H̃(t)† H̃(t)) = constant ≡ H̃² , (3.70)

where we remind that the instantaneous Hamiltonian H̃(t) is given by (3.62). Now, for the same reasons that at long times unitary evolution drives quantum states to random states [53][54][55], unitary evolution will drive the operators O(t), dO(t)/dt and H̃(t) to certain random operators, characterized by the fact that the moduli of the expansion coefficients are constant on average. Therefore, for times larger than the scrambling time, complexity always grows linearly with time. Besides, at stationarity we expect such constant coefficients to be proportional to their associated probabilities in the thermal ensemble. In this situation, the proportionality factor of the linear growth is fixed by H̃.

Conclusions
It is still not fully understood how space-time distances in the bulk description of holographic dualities are to be represented on the CFT side. While bulk space-time distances

are still mysterious in such a sense, distances in the Hilbert space, or distances in the manifold of unitaries, have to be respected across dualities. They are just the same if the equivalence of the underlying Hilbert spaces and of the microscopic Hamiltonians holds. In this article, inspired by the geometric approach to quantum complexity developed by Nielsen and collaborators [14][15][16], and by the recent ideas relating gravity and complexity [17,18], we have explored certain distance notions in the manifold of unitaries and in the Hilbert space. These notions have been defined in section 2 for generic CFT's and generic states. The definitions reduce to the ones given in [14][15][16] whenever it makes sense to consider maximally mixed density matrices associated to finite dimensional Hilbert spaces. For CFT's, the definition (2.27) is based on the statement that the role played by the weight in spin systems is played by the scaling dimension in CFT's. This statement by itself suggests a natural interpretation of the penalty functions: they allow studying the average scaling dimensions of the appropriate operator. Also, the state dependence of formula (2.27) turns out to be crucial to connect to gravity in CFT's, as shown in [20].
After defining the framework, one of the most important observations of this work is that the difficulties arising in actual complexity computations (described in (2.2) and (2.4)) disappear when considering unitary trajectories driven by generators of a certain symmetry group. As explicit examples, we computed the costs associated with rotations, translations, and boosts. In particular, we have shown that the cost associated with boosting the momentum operator grows exponentially with the rapidity, with a prefactor that carries information about the detailed space-time trajectory. Since the relation between the freely falling frame in a black hole spacetime and the static outside frame is a time-dependent Lorentz boost, we concluded that the cost associated with the evolution of the momentum of a freely falling particle increases exponentially, at a rate set by the maximal Lyapunov exponent. The prefactor accompanying the exponential is still universal: it does not depend on the nature of the infalling particle, only on its infalling trajectory. This suggests a further bound on the growth of chaos (at least as defined by complexity evolution), obtained when the infalling particle is allowed to saturate bulk causality. Therefore, at least for theories with a local gravitational dual near the horizon, we concluded that the evolution of complexity is bounded by (3.37). This prefactor might potentially discriminate between theories with maximal Lyapunov growth that violate bulk causality and those that do not. Given the provided definition of complexity for CFT's (2.27), this feature should translate into a bound on the growth of the average scaling dimension of the perturbed operator.
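As an illustration of the exponential growth (schematic; we use the standard near-horizon kinematics in which the rapidity of the boost relating the two frames grows linearly in time with the surface gravity, giving the maximal exponent λ_L = 2π/β):

```latex
% Freely falling vs. static frame near the horizon: a boost of rapidity
% eta(t) ~ (2 pi / beta) t, so the momentum of the infalling particle
% (and its associated cost) grows with the maximal Lyapunov exponent:
p_{\text{proper}}(t)\;\sim\;p_0\, e^{\eta(t)}\;\sim\;p_0\, e^{\lambda_L t},
\qquad \lambda_L \;=\; \frac{2\pi}{\beta}\, .
```

The prefactor p_0 here encodes the trajectory data referred to above, independent of the particle species.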
In the last section, we attempted to compute these types of distances in dual formulations. We have partially succeeded in SYK, where we were able to provide a lower bound on the cost growth. This lower bound nicely displays Lyapunov growth, and its dynamics is directly related to the operator growth of the perturbed operator, see eqs. (3.55) and (3.56). Besides, the average scaling dimension does not saturate the new bound alluded to above. From this lack of saturation we cannot conclude that SYK fails to reproduce black hole physics, since the computation is done in the high energy (infinite temperature) limit of SYK. But it gives partial hope that the new bound is non-trivial and might provide a finer way of discriminating between theories with maximal Lyapunov growth, as mentioned above.

Lastly, we have described the late-time asymptotics of the cost growth. After the scrambling time, the perturbed operator stops growing due to finite-size effects in the thermal ensemble, and the process reaches stationarity. At such long times we can approximate the operator by a random operator. This saturation has a definite imprint on the complexity growth: it turns the exponential Lyapunov growth into linear growth in time, and since (3.70) is conserved, the slope of the linear growth can already be computed at small times. This avoids a hypothetical tension between Lloyd's bound and the exponential growth, and also nicely matches the gravitational dynamics derived in [23].
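Schematically, the resolution of the tension can be summarized as follows (the Lloyd-type bound is written here only in its standard heuristic form 2E/πℏ, which is an assumption of this sketch and not a statement of the paper):

```latex
% Exponential growth, if it persisted indefinitely, would outrun any
% linear bound; saturation to a constant rate after t_s keeps the
% growth compatible with a Lloyd-type bound  dC/dt <~ 2E/(pi hbar):
\frac{d\mathcal{C}}{dt}\;\sim\;
\begin{cases}
\lambda_L\,\mathcal{C}(t), & t \lesssim t_s \quad (\text{Lyapunov regime}),\\[4pt]
\tilde{H} \;=\; \text{const}, & t \gtrsim t_s \quad (\text{linear regime}).
\end{cases}
```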