Time Evolution of Complexity: A Critique of Three Methods

In this work, we propose a testing procedure to distinguish between the different approaches for computing complexity. Our test does not require a direct comparison between the approaches and thus avoids the issue of the choice of gates, basis, etc. The proposed testing procedure employs the information-theoretic measures Loschmidt echo and fidelity; the idea is to investigate the sensitivity of the complexity (derived from the different approaches) to the evolution of states. We discover that only the circuit complexity obtained directly from the wave function is sensitive to time evolution, leading us to claim that it surpasses the other approaches. We also demonstrate that circuit complexity displays a universal behaviour: the complexity is proportional to the number of distinct Hamiltonian evolutions that act on a reference state. Consequently, for a given number of Hamiltonians, we can always find the combination of states that yields the maximum complexity; other combinations, involving a smaller number of evolutions, will have less than maximum complexity and hence will have resources to spare. Finally, we explore the evolution of complexity in non-local theories; we demonstrate that the growth of complexity is sustained over a longer period of time compared to a local theory.


Introduction
Recent progress in the fields of quantum information and condensed matter has shed light on the inner workings of holographic duality (the AdS/CFT duality). It is becoming evident that the entanglement entropy (EE) defined in the boundary conformal field theory (CFT) is related to the emergence of the bulk geometry; this relationship becomes even more intriguing in the context of black hole physics [1][2][3][4]. In the black hole setting, an important question is, "What is the appropriate observable that can probe physics behind the horizon?" It was observed that although the EE saturates as the black hole thermalizes [5], the size of the Einstein-Rosen bridge (of an eternal AdS black hole) continues to increase. Based on this observation, Susskind et al. proposed that the quantity in the CFT that continues to increase after thermalization is the complexity [6][7][8]. Two interesting proposals were made in the context of AdS/CFT [6][7][8]. The first is 'complexity equals volume' (the CV conjecture): the volume is that of a maximal co-dimension-one bulk surface extending to the boundary of AdS spacetime, which can be chosen to asymptote to a specific time slice where the boundary state resides. The second is 'complexity equals action' (the CA conjecture): one evaluates the bulk action (with suitable boundary and counter terms to make the variational principle well defined) on the so-called Wheeler-DeWitt (WDW) patch. Both these objects probe physics behind the horizon and grow with time even after thermalization. Both proposals have their shortcomings, and many recent studies have tested them in various settings.
Given its importance in holography, it is crucial to be able to quantify complexity in quantum field theory; recently, some progress has been made in this direction [56][57][58][59][60][61][62][63][64][65][66][67][68][69][70][71]. Computational complexity (or circuit complexity in our case) is an important concept in quantum information theory [72][73][74][75][76][77][78][79][80][81]: given a suitable basis, the complexity is the minimum number of operations needed to perform a desired task. It is of central importance to be able to quantify this, since it helps distinguish the quantum nature of an algorithm from its classical counterpart; this identifies whether a proposed quantum computer is indeed a true quantum computer [82][83][84][85]. The study of circuit complexity in quantum field theory is in its infancy; only a few cases have been studied to date and much remains unexplored. In [86][87][88], it was shown that simulating several field-theoretic observables on a quantum computer has an exponential advantage over classical algorithms that use perturbative Feynman diagrams. For our purpose, we will adhere to the notion of complexity associated with a quantum circuit: the task is to prepare the 'target state' (for us, the time-evolved ground state of some Hamiltonian) by a quantum circuit starting from a suitable 'reference state', and to make this circuit as efficient as possible. In [89][90][91], a geometric approach to circuit complexity was put forward; this was applied in Ref. [56] to free scalar field theory. Several methods have been proposed/employed to quantify/compute complexity in quantum field theory [56,57,62]. A common feature of these computations is that they all geometrize the quantity. Interestingly, the similarities and differences between these approaches are far from being understood. In this paper, we make some progress in this direction.
Note that one of the key motivations of Susskind et al. for exploring complexity was that, unlike the entanglement entropy, it does not saturate quickly with time. Its growth is sustained over a longer period of time compared to the entanglement entropy [7,25], and it continues to grow even after the boundary has reached thermal equilibrium, finally saturating at some later time. In the context of holography, the time evolution of complexity has been studied for various types of eternal AdS black holes in different dimensions using both the CA and CV conjectures. Refs. [31,33] observed that the complexity obtained from the CV conjecture monotonically increases with time, with a growth rate that saturates to a (positive) constant, which is reminiscent of the Lloyd bound [75]. On the other hand, the complexity obtained using the CA conjecture for an uncharged black hole remains constant at early times, decreases briefly, and then exhibits positive growth; at large times, the growth rate saturates, but it does so from above, thus violating the Lloyd bound [31,33]. In Ref. [20], a quantity dubbed the 'complexity of formation' was defined and studied in the context of holography. This quantity is the difference between the complexity associated with the eternal black hole background and the complexity of the vacuum AdS spacetime. In [20], it was observed that this quantity diverges for the extremal black hole; a possible interpretation is that states with finite temperature and chemical potential are infinitely more complex than the vacuum state. Additionally, the time evolution of complexity has been studied in the context of collapsing black holes (eternal AdS-Vaidya black holes) using both the CA and CV conjectures [43,47]. Depending on the energy of the collapsing shell, one observes different types of growth patterns for the complexity at early times [47].
The evolution of complexity has been studied in various other interesting holographic scenarios [16-18, 25, 35-37, 40, 42, 44, 51, 52, 70]. Among these is a proposal to study complexity and its evolution for a subsystem at the boundary [44], which extends the notion of complexity to mixed states; a quantity named the 'complexity of purification' has been proposed as a diagnostic. While these developments are certainly interesting, they are still at an early stage, and a proper field-theoretic interpretation is still lacking.
A natural place to begin investigating the evolution of complexity is in simple QFTs, and some studies have been made in this regard. In [63,66], the time evolution of complexity was studied for free scalar field theory after a quench, and a comparison was made with the evolution of the EE. In [70,71], this computation was extended to fermionic systems. In [66], the authors also computed the 'complexity of purification' for this model. In [61,69], the nature of complexity evolution was discussed from an axiomatic point of view. In [33,59], the complexity evolution of thermofield double states in CFTs was studied. Our goals in this work are twofold. First, we consider simple models (free theories) in the hope that this will pave the way to studying interacting QFTs (appropriate for understanding holography). Second, we would like to differentiate between the various methods for computing complexity in this setup. Although our proposal is general, for explicit illustration we will use a generic bosonic lattice model which corresponds to a plethora of interesting QFTs in the continuum limit. We compare the three most common methods of computing complexity, namely (1) the Fubini-Study approach [57], (2) the covariance matrix method [62], and (3) circuit complexity (working directly with the wave function) [56]. We show that only circuit complexity is sensitive to our diagnostic. We further demonstrate a generic pattern of this sensitivity, which hints at interesting physics that might be useful for quantum computation. In [92][93][94][95][96], an alternative method of computing complexity using a path-integral approach has been proposed; in [97] its implications for holography were further motivated. We will not consider this path-integral approach, but will scrutinize the other three methods discussed above.
To establish our testing method, we will use two common information-theoretic measures: the Loschmidt echo and fidelity. Generally speaking, the Loschmidt echo is defined as the overlap between a reference state and a forward- and then backward-evolved state. One starts with a reference state |ψ_0⟩, which is first forward evolved by some Hamiltonian H_1 and then backward evolved by a slightly different Hamiltonian H̃_1, giving |ψ_2⟩ = exp(i H̃_1 t) exp(−i H_1 t)|ψ_0⟩. The Loschmidt echo is defined as [98]

F_LE = |⟨ψ_0|ψ_2⟩|. (1.1)

This overlap can be thought of as a distance (in state space) between two states. Another way to represent the above overlap is the following:

F̃ = |⟨ψ̃_1|ψ_1⟩|,

where |ψ_1⟩ = exp(−i H_1 t)|ψ_0⟩ and |ψ̃_1⟩ = exp(−i H̃_1 t)|ψ_0⟩. Clearly, these two quantities have the same value, which makes them insensitive to the details of the evolution of the states: they depend only on the Hamiltonians H_1 and H̃_1 and on the reference state |ψ_0⟩ [25]. These overlaps contain important physical information about the underlying system. In this paper we investigate whether there is any difference between F_LE and F̃. More explicitly, we address the question: is there an alternative notion of distance that can differentiate between the states involved in these overlaps? Complexity is a natural candidate. To incorporate complexity into this quest, we develop a test and then check the different methods of computing complexity against it. One of our main results is that circuit complexity computed via Nielsen's method provides the desired quantity. We then generalize our result to states with multiple evolutions and find a generic property that might be useful from the perspective of quantum computation: of two quantities with the same value, one can be computed with less complexity than the other, and it is this quantity that can be simulated more efficiently by a quantum computer.
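The equality of the two overlaps is easy to verify numerically, since it follows purely from unitarity. The sketch below checks F_LE = F̃ for a single qubit with made-up Hamiltonians H_1 and H̃_1 (the specific matrices, state, and time are illustrative choices, not the paper's model):

```python
import numpy as np
from scipy.linalg import expm

# Toy check that the Loschmidt echo F_LE = |<psi0| e^{i Ht1 t} e^{-i H1 t} |psi0>|
# equals the fidelity F~ = |<psit1|psi1>|; a single qubit stands in for the model.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

H1 = sz                     # "forward" Hamiltonian (hypothetical)
Ht1 = sz + 0.1 * sx         # slightly different "backward" Hamiltonian
psi0 = np.array([1, 1], dtype=complex) / np.sqrt(2)   # reference state
t = 1.7

# Loschmidt echo: forward evolve with H1, then backward with Ht1
psi2 = expm(1j * Ht1 * t) @ expm(-1j * H1 * t) @ psi0
F_LE = abs(np.vdot(psi0, psi2))

# Fidelity: overlap of the two separately evolved states
psi1 = expm(-1j * H1 * t) @ psi0
psit1 = expm(-1j * Ht1 * t) @ psi0
F_tilde = abs(np.vdot(psit1, psi1))

print(F_LE, F_tilde)        # the two values agree to machine precision
```

The agreement holds for any pair of Hamiltonians, which is exactly why the overlaps alone cannot distinguish the two protocols; the paper's point is that complexity can.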
To quantify the complexity associated with the states involved in F_LE and F̃, we will use the 'bra' and 'ket' of each overlap as the reference and target states. We then compute the complexities associated with these states, for both F_LE and F̃, by the three methods discussed above. The strength of this method is that we do not need to make a direct comparison between the approaches. Rather, we explore the evolution of the complexities associated with the states in F_LE and F̃ and check whether the evolutions are identical.
Computing complexity using the Fubini-Study approach amounts to first identifying the target state as some kind of coherent state, and then finding the geodesic distance (connecting the target and the reference state) on the manifold induced by this family of states. On the other hand, both the circuit complexity and the covariance matrix methods employ the geometric approach pioneered by Nielsen [89][90][91] and translated into the context of QFT by [56], the only difference being that the former works directly with the wave function while the latter uses the covariance matrix (appropriate only for Gaussian states). We will discuss these methods in detail in later sections. We find that only the circuit complexity gives us the desired difference between the complexities defined for the states coming from the Loschmidt echo and those coming from the fidelity; in that sense, circuit complexity is a better approach. Moreover, the complexity from the Loschmidt echo is always larger. We extend this idea to an arbitrary number of evolutions and show that the number of evolutions performed on one state (the ket) dictates the complexity between a pair of states: the state with the highest number of evolutions has the highest complexity. This implies that if one is interested in an overlap measurement between two states, the fidelity, which corresponds to the smallest number of evolutions acting on one state, will always be the easiest choice, since it has the least complexity. The other two methods are unable to distinguish between the complexities of these two different quantities, demonstrating the advantage of the circuit complexity approach over them.
The organization of the paper is as follows. In Section (2) we discuss our model and set up the quench protocol. In Section (3) we discuss the computation of the complexity for the time evolved ground state of our model by three different methods. In Section (4) we explain our testing procedure and apply this to different methods of complexities. In the following section we then generalize our arguments and discuss the implications in detail. In Section (6) we briefly explore the time evolution of complexity for non-local theories and compare with results from local theories. Lastly, we conclude by summarizing our results and note interesting future work.

The Model and Quench Protocol
We consider a free bosonic field theory (regularized) on a lattice; 8 the Hamiltonian is

H = (1/2) Σ_{l=0}^{N−1} [ p_l² + q² x_l² + q̃ x_l x_{l+1} ]. (2.1)

In Eq. (2.1), x_l (p_l) is the position (momentum) operator at site l, and {q, q̃} parameterize the "restoring forces": q > 0, but we allow q̃ to have either sign. This Hamiltonian is more general than a free scalar field theory discretized on a lattice; depending on the choice of parameters, a variety of interesting behaviors arise. 9 For us, this provides a convenient/natural medium to explore our testing procedure. Eq. (2.1) is readily analyzed by expanding the position and momentum operators in Fourier (normal) modes,

x_l = (1/√N) Σ_k x_k e^{2πi l k/N},  p_l = (1/√N) Σ_k p_k e^{−2πi l k/N},

where 0 ≤ k ≤ (N − 1) with N being the total number of (lattice) sites; one obtains 10

H = (1/2) Σ_k [ p_k p_{−k} + ω_k² x_k x_{−k} ], (2.3)

where

ω_k² = q² + q̃ cos(2πk/N),  ω_k = ω_{−k}. (2.4)

Eq. (2.3) is then diagonalized by introducing creation and annihilation operators. 11 We are interested in studying quenches in the above model; the quench protocol we employ is

H(t) = H(q, q̃), t ≤ 0, (2.6a)
H(t) = H(q₁, q̃₁) ≡ H₁, t > 0, (2.6b)

where (q, q̃) and (q₁, q̃₁) are different. For t ≤ 0, we prepare the system in the ground state of H(q, q̃); for t > 0, we evolve this state with H₁. In what follows, we consider the evolution of the complexity following the quench: the complexity between the initial state and the time-evolved state. In the following section, we compute the complexity for this model using the different approaches.

8 We set the lattice spacing to unity.
9 E.g., writing q² = (a² + b²) and q̃ = 2ab (a, b ∈ R), one has the bosonic analog of the Su-Schrieffer-Heeger model [100,101].
10 We have used the orthogonality condition (1/N) Σ_l e^{2πi l (k−k')/N} = δ_{k,k'}.
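As a quick numerical sanity check of the dispersion relation, one can diagonalize the position-space coupling matrix directly. The sketch below assumes the periodic nearest-neighbour form of the Hamiltonian implied by ω_k² = q² + q̃ cos(2πk/N); the parameter values are arbitrary:

```python
import numpy as np

# Check w_k^2 = q^2 + qt*cos(2*pi*k/N) against direct diagonalization of the
# position-space coupling matrix for H = (1/2) sum_l [p_l^2 + q^2 x_l^2
# + qt x_l x_{l+1}] on a periodic chain (form inferred from the dispersion).
N, q, qt = 8, 1.0, 0.5          # hypothetical parameter values

# Quadratic form K: q^2 on the diagonal, qt/2 on the (periodic) off-diagonals
K = np.diag(np.full(N, q**2)).astype(float)
for l in range(N):
    K[l, (l + 1) % N] += qt / 2
    K[(l + 1) % N, l] += qt / 2

omega2_exact = np.sort(np.linalg.eigvalsh(K))
omega2_formula = np.sort(q**2 + qt * np.cos(2 * np.pi * np.arange(N) / N))
print(np.allclose(omega2_exact, omega2_formula))  # True
```

Since K is circulant, its eigenvalues are exactly q² + q̃ cos(2πk/N), and the symmetry ω_k = ω_{−k} is manifest in the cosine.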

Complexity
In this section, we will explore the different approaches to computing the complexity, and at the end of the section we will comment on the differences between them. We will investigate the following methods:

• Complexity from Fubini-Study
• Circuit complexity from the wave function
• Circuit complexity from the covariance matrix

The basic idea of complexity is the following: one starts with a suitable reference state, 12 on which one acts with a set of unitary operators to reach a target state; the complexity corresponds to the minimum number of operations needed to accomplish this. To carry out this procedure, one first fixes the set of elementary unitary operators and determines the operator space; the complexity is the shortest distance in that (operator) space connecting the reference and the target states [89][90][91]. In general, this is a nontrivial and even ambiguous procedure. To proceed, one defines suitable measures on the operator space, which satisfy the following criteria [56][89][90][91]:

1. They should be continuous.
2. They should be positive definite.
3. They should be homogeneous.
4. They should satisfy the triangle inequality.
5. They can be infinitely differentiable.
Criteria (1)-(4) identify these quantities as legitimate functions for measuring the distance between points in the underlying space; if condition (5) is also satisfied, then they correspond to distances between points on a Finsler manifold. There are various methods for computing complexity. The Fubini-Study approach [57] naturally selects one particular measure. In this method, one typically identifies the state as some kind of coherent state of a particular group and defines a metric on that group manifold; the complexity is computed as the geodesic distance between the reference and target states (for that particular metric). This is illustrated in Fig. (1), while the details are discussed in Section (3.1). On the other hand, the 'circuit complexity' approach allows one to choose various measures [56,62,67] satisfying the properties mentioned above. For the model (and states) considered in this work, it is natural to write the reference and target states in the position representation as Gaussians, ψ(x) ∝ exp(−½ xᵀ A x), where x = (x_0, ..., x_{N−1}) and A is a matrix; the problem is then reduced to finding the optimal unitary which takes the reference state (A_R) to the target state (A_T). This is discussed in detail in Section (3.2). For Gaussian states, an equivalent description is provided by the covariance matrix; the complexity in this case quantifies the minimum number of unitary operations required to generate the covariance matrix of the target state starting from the covariance matrix of the reference state. The details of the covariance matrix approach are presented in Section (3.3).

Complexity from Fubini-Study
In this section, we detail the calculation of the complexity using the Fubini-Study approach. This is executed by writing the eigenoperators of H₁ (the Hamiltonian for t > 0) in terms of the eigenoperators of H (the Hamiltonian for t ≤ 0). As per Eq. (2.5), we have the mode expansion for H(q, q̃); 13 from this, one obtains the Bogoliubov transformation relating (a_{1,k}, a†_{1,−k}) to (a_k, a†_{−k}),

13 Note that ω_{1,k} and ω_k are functions of (q₁, q̃₁) and (q, q̃), respectively.
with |U_k|² − |V_k|² = 1. As discussed above, we take the ground state of H(q, q̃) as our reference state,

|ψ_0⟩ = ⊗_k |k, −k⟩, (3.7)

where |k, −k⟩ denotes the Fock vacuum for the modes k and (−k). We are interested in the complexity of the time-evolved state

|ψ(t)⟩ = exp(−i H₁ t)|ψ_0⟩. (3.8)

To evaluate this, we employ the decomposition 14 [102]. The state Eq. (3.11) can be thought of as an SU(1,1) coherent state; the state manifold can be given a Riemannian structure [103] by considering the class of states

14 This produces just an overall phase and can be absorbed into the normalization (N_k(t)) of the state. Nontrivial effects come from exp(γ⁺_k τ⁺_k).
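The SU(1,1) normalization |U_k|² − |V_k|² = 1 can be checked explicitly. The sketch below uses the standard two-frequency form of the Bogoliubov coefficients for relating oscillators of frequencies ω_k and ω_{1,k} (an assumption: the paper's conventions may differ by phases), with made-up quench parameters:

```python
import numpy as np

# Sanity check of |U_k|^2 - |V_k|^2 = 1 for the textbook two-frequency
# Bogoliubov coefficients relating pre- and post-quench oscillators.
N, q, qt, q1, qt1 = 8, 1.0, 0.5, 1.3, 0.4   # hypothetical quench parameters
k = np.arange(N)
w = np.sqrt(q**2 + qt * np.cos(2 * np.pi * k / N))     # pre-quench frequencies
w1 = np.sqrt(q1**2 + qt1 * np.cos(2 * np.pi * k / N))  # post-quench frequencies

U = (w1 + w) / (2 * np.sqrt(w * w1))
V = (w1 - w) / (2 * np.sqrt(w * w1))
print(np.allclose(U**2 - V**2, 1.0))  # True for every mode
```

The identity holds mode by mode because (ω₁ + ω)² − (ω₁ − ω)² = 4ωω₁, independently of the parameter choices.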
labeled by the parameter τ. Evaluating the Fubini-Study line element, one obtains, for each value of k, the hyperbolic plane H² in the CP¹ representation. For a given k, the distance is naturally defined by the geodesic distance on H². 15 The full state manifold has the form H² × H² × ···, and the distance on it is defined accordingly. In this (Fubini-Study) approach, the complexity is the geodesic distance between the reference and target states (3.18); it follows from Eq. (3.17) that the complexity is built from the C_k, where C_k is the geodesic distance for a particular k.

Circuit Complexity à la Nielsen
We now detail the calculation of the circuit complexity. 17 This approach was pioneered by Nielsen [89]; it was adapted to free scalar field theory in [56] and has recently been generalized to interacting field theories in [68]. We start with the defining expression

|ψ_T⟩ = U(τ = 1)|ψ_R⟩,

where U(τ) is a unitary operator representing the quantum circuit, which takes the reference state |ψ_R⟩, defined at τ = 0, to the target state |ψ_T⟩, defined at τ = 1. As before, τ parametrizes a path in the Hilbert space (and one can re-parametrize τ in various ways). The unitary operator can be written as a path-ordered exponential,

U(τ) = P exp( −i ∫₀^τ dτ' H(τ') ), (3.24)

where H(τ) is a Hermitian operator. Next we fix a basis {M_I} and expand H(τ) in terms of this basis,

H(τ) = Σ_I Y^I(τ) M_I,

where the coefficients {Y^I(τ)} are referred to as 'control functions'. The {M_I} provide the elementary gates that will be used. The algebra satisfied by these gates gives us the structure of the group; the unitary Ũ(τ) can be parametrized as a general element of that group. The goal now is to minimize the depth of this circuit, i.e., to find the optimal control functions {Y^I(τ)}. To this end, we define the circuit complexity C(Ũ) through suitable cost functions F(Ũ, U̇) as [56][89][90][91]

C(Ũ) = ∫₀¹ dτ F(Ũ, U̇).

We minimize this cost function and find the geodesic connecting the two states; evaluating C(Ũ) on this geodesic, we obtain the complexity. There are various possible choices for the functions F(Ũ, U̇), but they should satisfy conditions (1)-(5) discussed in Section (3). Here we mention a few that have been used extensively in the literature [56,62,67]: 18

F₁ = Σ_I p_I |Y^I|,  F₂ = ( Σ_I p_I (Y^I)² )^{1/2},  F_κ = Σ_I p_I |Y^I|^κ. (3.26)

The {p_I}, known as 'penalty factors', are weights which, at the moment, are arbitrary. Among these, F_{κ=1} directly counts the number of gates; most importantly, F₂ with p_I = 1 for all I is basically a distance function on a given manifold.
We note that the complexity computed using F₂ is very similar to C_FS, as both come from evaluating the shortest distance between two points on a given manifold; the difference lies in the fact that circuit complexity, a priori, does not fix the {p_I}: we have to make a choice. The Fubini-Study approach fixes them canonically (in fact, they are all fixed to unity) [57].

17 In the rest of the text, whenever we mention circuit complexity we will refer to this approach.
18 These are formally known as 'Schatten norms'; they were first considered in [62] and explored in detail in [67].
In the subsequent analysis, we compute the complexity using F₂ (with p_I = 1 for all I) to make a direct comparison with the Fubini-Study approach. For our case, the target wave function following from (3.8) is a Gaussian, 19 given in (3.27), where N_{τ=1}(t) is the normalization factor. The frequencies Ω_k are complex; their real and imaginary parts are given in (3.29).
This wave function can be written in the form (3.30). We take the reference state (in the same basis v) to be a Gaussian characterized by a frequency ω_r, which can in general be complex. The unitary (3.24) acts on the reference state with the boundary conditions that Ũ(τ = 0) is the identity and Ũ(τ = 1) produces the target state. A convenient way to parametrize this Ũ(τ) is as a general group element generated by the basis {M_I}, where tr(M_I (M_J)ᵀ) = δ_IJ and I, J = 0, ..., N² − 1. The metric on this space of circuits can then be defined as in (3.36) and (3.37). There is a certain arbitrariness in the choice of G_IJ; for simplicity, we choose G_IJ = δ_IJ [56], i.e., we fix the penalty factors to unity. This will enable a more direct comparison with the Fubini-Study approach.
Since we are working in a basis in which both the reference and the target states are simultaneously diagonalized, the off-diagonal components coming from some of the elements of GL(N, C) would only increase the distance between the states; the shortest distance corresponds to setting them to zero [56]. Hence, Ũ(τ) takes a diagonal form, where the {α_k(τ)} are complex and the {M^diag_k} are the N diagonal generators containing a single identity at the k-th diagonal entry. Then, using (3.37), one obtains the flat metric

ds² = Σ_k [ (dα¹_k)² + (dα²_k)² ], (3.39)

where the superscripts 1 and 2 denote the real and imaginary parts of α_k, respectively. It follows that the geodesic is simply a straight line of the form α^j_k(τ) = α^j_k(1) τ for each value of k (j = 1, 2), with the endpoints α^j_k(1) fixed by the boundary conditions. The complexity is then given by

C(Ũ) = ∫₀¹ dτ √( g_ij ẋ^i ẋ^j ),

where g_ij denotes the components of the metric (3.39) and the x^i are the coordinates associated with this metric. As before (i.e., in Section (3.1)), we choose the reference state as the ground state of H(q, q̃) at t = 0, so ω_r is ω_k as defined in (2.4); we obtain the closed-form result (3.44). This expression is very pleasing: the first part is the logarithm of the ratio of the frequencies of the target and reference states, 20 and is similar to the time-independent case [56]; however, there is an additional contribution from the phase term, namely the second term in (3.44). This is very reasonable, since the time-evolved state has a non-trivial phase. To reproduce those phases starting from a simple reference state, one needs appropriate unitary operators, and they will generate a certain cost; the complexity evaluated by this method aptly captures that.
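Because the penalty factors are set to unity, the metric on the diagonal circuit parameters is flat, and the optimal depth is just the Euclidean distance to the endpoint. The toy sketch below (with a made-up endpoint α_k(1)) checks numerically that randomly wiggled paths between the same endpoints are never shorter than the straight line:

```python
import numpy as np

# With G_IJ = delta_IJ the (alpha^1_k, alpha^2_k) plane is flat, so the
# minimal circuit depth is the straight-line distance |alpha_k(1)|.
rng = np.random.default_rng(0)
target = np.array([0.7, -0.3])          # hypothetical (alpha^1, alpha^2) at tau=1

def path_length(pts):
    """Euclidean length of a piecewise-linear path through pts."""
    return np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))

tau = np.linspace(0, 1, 51)[:, None]
straight = tau * target                  # alpha^j(tau) = alpha^j(1) * tau
L_straight = path_length(straight)

for _ in range(100):
    wiggle = rng.normal(scale=0.05, size=straight.shape)
    wiggle[0] = wiggle[-1] = 0.0         # keep the boundary conditions fixed
    assert path_length(straight + wiggle) >= L_straight

print(abs(L_straight - np.linalg.norm(target)) < 1e-12)  # True
```

This is just the triangle inequality at work: any path between fixed endpoints is at least as long as the chord, which is why the geodesic analysis collapses to a straight line.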
Next we look at the structure of the optimized circuit. We rewrite (3.38) using (3.41); here k runs from 0 to N − 1, with the corresponding α̃_k fixed by the boundary conditions. In terms of these operators, defined in (3.47), the optimal circuit that generates the required target state can be written down explicitly. Note that here we have introduced an infinitesimal parameter ε. For all practical purposes (at least from the point of view of implementation), the target state can only be achieved up to a certain tolerance, i.e., ‖|ψ_T⟩ − U|ψ_R⟩‖ < ε. Basically, ε plays the role of a small error, which we want to make as small as possible [56]. The O_k's scale the coefficients of x̂²_i inside the wave function by a real number, and the other operators Ô_k scale the coefficients of x̂²_i by a complex number. Together they are sufficient to reproduce the target wave function, since the coefficients of x̂²_i inside the wave function are complex numbers. Note that one could instead have used i x̂²_i together with the scaling operators to reach the target state from the reference state. But the geodesic analysis prefers a rather different set of operators besides the scaling operators. The reason is twofold. First, as we have seen, when the target A_{τ=1} and the reference A_{τ=0} can be simultaneously diagonalized, the geodesic is just a straight-line path, which in turn gives an optimal circuit consisting of mutually commuting gates [56]. This immediately rules out the possibility of having i x̂²_i and scaling together, since they do not commute with each other. The second reason is technical. Given the basis v = {x̂_0, ..., x̂_{N−1}}, it is not possible to write down a matrix representation for the operator e^{i x̂²_i}, as the action of i x̂²_i on the basis v takes it out of the basis; in other words, the action of this operator on the reference state is non-linear.
On the other hand, we will see in the next section that one can find a representation of i x̂²_i with respect to the covariance matrix, and the optimal unitary coming from the geodesic analysis will consist of this operator.

20 Again, for a more detailed discussion of the choice of reference state, see Appendix (B).

Circuit Complexity from the covariance matrix
Just like the reference wave function, the target wave function (3.27) is purely Gaussian; it can be completely characterized by its first and second moments. Therefore, we can define a covariance matrix [105], which contains the same information as the matrix A defined in the wave function (Eq. (3.30)); hence, we can reformulate the analysis of Section (3.2) in terms of the covariance matrix [62]. The components of the covariance matrix G can be defined via

G^{ab} = ⟨ψ| ξ^a ξ^b + ξ^b ξ^a |ψ⟩,

where ξ = {x̂_0, p̂_0, ..., x̂_{N−1}, p̂_{N−1}}. For each value of k, the matrix G factorizes into 2 × 2 symmetric blocks; these blocks are in one-to-one correspondence with the canonical pairs {x̂_k, p̂_k}, so there are N such 2 × 2 blocks, and G is a (2N × 2N) symmetric matrix. For our target state (3.27), each of the 2 × 2 blocks takes the form (3.50), and for the reference state (3.32) we have (3.51). Note that the determinants of both of these matrices are unity. To facilitate the computation, we now perform a change of basis for each of the smaller blocks, as follows.
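For a single mode, the covariance block and its unit determinant can be checked explicitly. The sketch below uses the symmetric-moment convention above for a Gaussian ψ(x) ∝ exp(−(a + ib)x²/2) with made-up values of a and b; the matrix entries follow from standard Gaussian moments, and ⟨x²⟩ is cross-checked by direct integration:

```python
import numpy as np

# Covariance block for psi(x) ~ exp(-(a + i b) x^2 / 2), with
# G_ab = <psi| xi_a xi_b + xi_b xi_a |psi>, xi = (x, p).
a, b = 0.8, -0.4                     # hypothetical Re/Im parts of the frequency

G = np.array([[1.0 / a,        -b / a],
              [-b / a, (a**2 + b**2) / a]])

# Cross-check <x^2> = 1/(2a) by numerically integrating x^2 |psi(x)|^2
x = np.linspace(-20, 20, 200001)
dx = x[1] - x[0]
prob = np.exp(-a * x**2)
prob /= prob.sum() * dx              # normalize |psi|^2
x2 = (x**2 * prob).sum() * dx

print(abs(np.linalg.det(G) - 1.0) < 1e-12, abs(2 * x2 - G[0, 0]) < 1e-6)
```

The determinant is identically one for any a > 0 and real b, which is the pure-state condition quoted in the text: the imaginary part b tilts the block through the ⟨x̂p̂ + p̂x̂⟩ entry but never changes the determinant.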
such that G̃_{τ=0,k} = I (an identity matrix). Therefore, G̃_{τ=1,k} and G̃_{τ=0,k} always commute with each other and can be diagonalized simultaneously [66]. This is the same situation as in Section (3.2), where A_{τ=1} and A_{τ=0} commute with each other. In terms of the covariance matrix, the statement (3.33) becomes (3.53). As before, we restrict ourselves to GL(2N, R) unitaries. Given that the G's admit a block structure, it is convenient to parametrize Ũ(τ) through the decomposition GL(2N, R) = GL(2, R) × ··· × GL(2, R) (one factor per block). 22 Further, we can parametrize each GL(2, R) block as R × SL(2, R). We will now conduct all the subsequent analysis block by block; for each block we have (3.55). Next, we set the boundary conditions as before.
This gives the final boundary condition, where G̃^{ij}_{τ=1,k} denote the various components of the matrix G̃_{τ=1,k}. We also need a few additional relations; these yield a solution in which c_k is an arbitrary constant. For simplicity, we choose φ_k(τ = 1) = φ_k(τ = 0) = 0 and fix θ_k accordingly. Given this form of Ũ_k(τ), we can write down the metric as in (3.36) and (3.37). Following [56,62,67], the metric is given in (3.60) (we have set G_IJ = ½ δ_IJ for simplicity). The total complexity (after summing over k) is then defined as the geodesic length, where g^k_ij denote the components of the metric (3.60) for each k, and the x^i are the coordinates associated with this metric for each value of k. The simplest solution for the geodesic is again a straight line on this geometry [56].
Finally, we obtain the complexity (3.63). This involves a special choice, but given the block-diagonal structure of Ũ(τ) it is easily justified. Now let us investigate the structure of the optimal circuit. Given the solution (3.62), we obtain an expression that can be decomposed in terms of SL(2, R) generators. From these representations (induced on the covariance matrix), we can identify the operators as in [66]. Finally, in terms of these operators we obtain the optimal circuit. Note that even though we start with the full GL(2, R) generators, the optimal circuit is composed only of generators belonging to the SL(2, R) subgroup.
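The restriction to SL(2, R) has a simple consistency check: the covariance matrix transforms as G → S G Sᵀ, and any S with det S = 1 preserves det G, so a pure-state block (det G = 1) stays pure. A small numerical sketch (the generators are the standard sl(2, R) basis; the sample block and coefficients are illustrative choices):

```python
import numpy as np
from scipy.linalg import expm

# Any S = exp(traceless real matrix) has det S = exp(tr) = 1, so the
# congruence G -> S G S^T preserves det G; this is why SL(2, R) gates suffice.
H = np.array([[1.0, 0.0], [0.0, -1.0]])     # scaling generator
E = np.array([[0.0, 1.0], [0.0, 0.0]])
F = np.array([[0.0, 0.0], [1.0, 0.0]])

rng = np.random.default_rng(1)
G = np.array([[1.25, -0.5], [-0.5, 1.0]])   # hypothetical block, det G = 1

for _ in range(20):
    c = rng.normal(scale=0.5, size=3)
    S = expm(c[0] * H + c[1] * E + c[2] * F)
    Gp = S @ G @ S.T
    assert abs(np.linalg.det(S) - 1.0) < 1e-8
    assert abs(np.linalg.det(Gp) - np.linalg.det(G)) < 1e-8
print("det preserved")
```

Conversely, a generator with nonzero trace (the R factor in the GL(2, R) = R × SL(2, R) split) would rescale det G, so it cannot appear in a circuit connecting two pure-state covariance matrices.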

A Brief Comparison:
At this point, we pause to make a brief comparison between the three methods and comment on the structure of the optimal circuit. In all three methods, for each value of k we have restricted ourselves to the space of GL(2, R) unitaries, and we have then performed an optimization to find the unitary of minimal depth. Both the Fubini-Study (Section (3.1)) and covariance matrix (Section (3.3)) methods give a set of operators which satisfy the SU(1,1) and SL(2, R) algebras, respectively; the optimal circuit in both cases is made up of the scaling operators i(x̂_k p̂_k + p̂_k x̂_k) and the operators (i x̂²_k, i p̂²_k). These operators are local in the normal-mode basis.
On the other hand, the geodesic analysis done in the context of circuit complexity (Section (3.2)) forced us to use a different set of operators, the Ô_k shown in (3.47), in addition to the scaling operators i(x̂_k p̂_k + p̂_k x̂_k). The Ô_k operators are slightly more non-local than the i x̂²_k and i p̂²_k operators in the normal-mode basis. We should note that when the wave function is a real Gaussian, none of these extra operators is needed; the scaling operator O_k (mentioned in (3.47)) alone is sufficient, and the expressions for the complexity coming from the covariance matrix method (3.63) and the circuit complexity method (3.43) are basically identical, given the same reference state. However, when the target wave function is a complex Gaussian, they differ, as is evident from (3.43) and (3.63). It seems that the advantage of using the Fubini-Study and covariance matrix methods is that the optimal circuit is made from local operators, whereas the circuit complexity method tends to prefer slightly more non-local operators. However, in the next section, we will establish that the circuit complexity (from the wave function) has an advantage over the other two methods, as it can capture the evolution of states.
To end this section, we stress that complexity depends on both the choice of reference state and gates (and also on the measure used). For a fixed reference state and fixed measure, the value of the complexity will depend on the underlying unitary gates. In the next section we compare the complexity obtained from the different methods-this will not be a comparison between their magnitudes, but rather a comparison of their sensitivity to a particular test that we propose.

Loschmidt Echo, Fidelity, and Complexity
In this section, we propose a diagnostic to test the advantages or disadvantages of these three different methods. As discussed in the Introduction, we consider an interesting information-theoretic measure, namely the Loschmidt echo (1.1); here we discuss it in detail.
The Loschmidt echo (LE) is a measure of the sensitivity of quantum evolution to perturbations of the Hamiltonian. As mentioned earlier, the LE is defined as in Eq. (1.1) [98,99]. Since one performs a forward evolution followed by a backward evolution with slightly different Hamiltonians, the Loschmidt echo also quantifies the irreversibility of a quantum system. For our case we take |ψ_0⟩ to be the ground state of the Hamiltonian at t = 0 (Eq. (2.6a)), which is defined in (3.7). The Hamiltonian H_1 (defined in Eq. (2.6b)) is a function of (q_1, q̃_1). H̃_1 is of the same form as Eq. (2.4), but is a function of (q_2, q̃_2); these parameters are slightly different from both (q, q̃) and (q_1, q̃_1). We define (4.2) and rewrite (4.1) accordingly. We can also view (4.1) in a different way, by defining the following. (For a more comprehensive review of applications of the Loschmidt echo, interested readers are referred to [104].)
In terms of these we can rewrite (4.1) in the form (4.5), which we term the fidelity. An illustration of the Loschmidt echo and fidelity is shown in Fig. (2). Essentially, we have defined the overlap of two wave functions evolved from the same initial state but with slightly different Hamiltonians; quantum mechanically, (4.1) and (4.5) are equivalent.
Figure 2: An illustration of the Loschmidt echo and fidelity.
Using these overlaps, we will propose a diagnostic to distinguish between different methods of measuring complexity, which we call the 'LE vs F Test'. We use the 'bra' of the overlap as the reference state and the 'ket' as the target state. Explicitly, for the Loschmidt echo we compute the complexity of ψ_2 with respect to ψ_0, and for the fidelity we compute the complexity of ψ_1 with respect to ψ̃_1. Although the overlaps (4.1) and (4.5) are the same, we find that the circuit complexity (computed from the wave function, as in Section (3.2)) differs between the two. On the other hand, the complexities for the Loschmidt echo and fidelity coming from the Fubini-Study method (Section (3.1)) and from the covariance matrix method (Section (3.3)) are the same.
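The quantum-mechanical equivalence of (4.1) and (4.5) is pure unitarity and can be checked in a toy model. Here is a sketch with random Hermitian matrices standing in for the Hamiltonians (hypothetical sizes and couplings, not the oscillator chain of the main text):

```python
import numpy as np
from scipy.linalg import expm, eigh

rng = np.random.default_rng(0)

def rand_herm(n):
    """Random Hermitian matrix (toy stand-in for a Hamiltonian)."""
    A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (A + A.conj().T) / 2

n, t, eps = 6, 1.3, 0.1           # hypothetical size, time, perturbation
H   = rand_herm(n)
H1  = rand_herm(n)
H1t = H1 + eps * rand_herm(n)     # slightly different "tilde" Hamiltonian

psi0 = eigh(H)[1][:, 0]           # ground state of H (lowest eigenvector)

psi1  = expm(-1j * H1  * t) @ psi0     # forward evolution by H1
psi1t = expm(-1j * H1t * t) @ psi0     # forward evolution by H1~
psi2  = expm( 1j * H1t * t) @ psi1     # forward by H1, then backward by H1~

LE       = abs(np.vdot(psi0,  psi2))   # Loschmidt-echo overlap, as in (4.1)
fidelity = abs(np.vdot(psi1t, psi1))   # fidelity overlap, as in (4.5)
print(LE, fidelity)                    # equal by unitarity
```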

Fidelity and Complexity:
For the fidelity, the states in the overlap are both evolved states; therefore, for this case we compute the complexity of one evolved state with respect to the other and call it the complexity of fidelity, C_F(Ũ). Explicitly, we find the complexity of ψ_1 (evolved from ψ_0 by H_1) with respect to ψ̃_1 (evolved from ψ_0 by H̃_1).
• To compute the complexity for this case using the Fubini-Study method (Section (3.1)), we have to use the general formula given in (3.21). Unlike (3.22), θ_{1,k} will now be non-zero. In fact we will have θ_{1,k} = 2 arctanh |γ_{1,k}| and θ_{2,k} = 2 arctanh |γ_{2,k}|, where γ_{1,k} is defined in (3.10) and (3.12). γ_{2,k} is associated with the evolution of |ψ_0⟩ by H̃_1 and has the same form as γ_{1,k}, but is a function of (q_2, q̃_2) instead; these parameters are slightly different from (q_1, q̃_1).
• To compute the circuit complexity as sketched in Section (3.2), we need to use (3.43). Following (3.27), we obtain ψ̃_1, where N(t) is the normalization. Here ω_{2,k} is associated with H̃_1, with ω_{2,k}^2 = q_2^2 + q̃_2 cos(2πk/N). In (3.27), we then need to replace ω_r by Ω_{1,k}.
• For the covariance matrix method (Section (3.3)), we use the general formula for the complexity given in (3.63) and again replace ω_r by Ω_{1,k}.
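For reference, the evolved frequencies entering these replacements satisfy a simple closed equation: inserting the Gaussian ansatz ψ ∝ exp(−Ω(t) x^2/2) into the Schrödinger equation for a mode of frequency ω̃ gives the Riccati equation i dΩ/dt = Ω^2 − ω̃^2. A numerical sketch (hypothetical frequencies; the function evolve_Omega is ours, not from the text) comparing the integrated equation with the standard closed-form solution:

```python
import numpy as np

def evolve_Omega(Omega0, omega_t, t, steps=20000):
    """Integrate i dOmega/dt = Omega^2 - omega_t^2 with RK4."""
    f = lambda O: -1j * (O**2 - omega_t**2)
    O, dt = complex(Omega0), t / steps
    for _ in range(steps):
        k1 = f(O); k2 = f(O + dt*k1/2); k3 = f(O + dt*k2/2); k4 = f(O + dt*k3)
        O += dt * (k1 + 2*k2 + 2*k3 + k4) / 6
    return O

# Hypothetical initial (ground-state) and evolution frequencies.
omega, omega_t, t = 1.0, 1.5, 0.7
O_num = evolve_Omega(omega, omega_t, t)

# Closed-form solution of the same Riccati equation (standard result).
c, s = np.cos(omega_t * t), np.sin(omega_t * t)
O_exact = omega_t * (omega * c + 1j * omega_t * s) / (omega_t * c + 1j * omega * s)

print(O_num, O_exact)  # agree; Re(Omega) stays positive (normalizable state)
```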

Loschmidt Echo and Complexity:
The overlap in the Loschmidt echo contains a forward-then-backward evolved state and the ground state. Therefore, for this case we compute the complexity C_LE(Ũ) of |ψ_2(x_k, t)⟩, as defined in (4.2), with respect to the ground state |ψ_0(x_k)⟩ of H(q, q̃) at t = 0. Now we have:
• For the Fubini-Study approach, we start with the state defined in (3.11) and act on it with exp(iH̃_1 t). We can decompose exp(iH̃_1 t) as we did in (3.9), using the BCH formula with the definitions given in (3.10). To be more explicit, (4.10). Given this, we need to evaluate the following. We know that τ⁻_k annihilates |k, −k⟩. We then successively use the BCH formula and the decomposition mentioned in (3.9). Finally, after absorbing overall phase factors into the normalization, we arrive at (4.13). We then compute the complexity for this case by simply using the formula (3.22) and replacing γ_{1,k} by γ̃_k as given in (4.13).
• For computing the complexity by the methods outlined in Section (3.2) and Section (3.3), we need the evolved state ψ_2(x, t), where Ñ_k(t) is the normalization factor that keeps the wave function normalized; ω_{2,k} is defined below equation (4.8). To compute the complexity using either of these two methods, we simply use (3.44) or (3.64), respectively, and replace Ω_k by Ω̃_k as defined in (4.15).
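The su(1,1) 'Gaussian decomposition' invoked in these BCH manipulations — writing the exponential of a general linear combination of the generators as a product e^{a K₊} e^{b K₀} e^{c K₋} — can be verified in the 2×2 defining representation. A sketch with hypothetical coefficients (the analytic expressions of (3.9)-(3.10) are not reproduced here):

```python
import numpy as np
from scipy.linalg import expm

# 2x2 defining representation: [K0, K±] = ±K±, [K-, K+] = 2 K0.
K0 = np.array([[0.5, 0.0], [0.0, -0.5]])
Kp = np.array([[0.0, 1.0], [0.0, 0.0]])
Km = np.array([[0.0, 0.0], [-1.0, 0.0]])

alpha, beta, gamma = 0.3 - 0.2j, 0.7 + 0.1j, -0.4 + 0.5j  # hypothetical
M = expm(alpha * Kp + beta * K0 + gamma * Km)

# Read off the decomposition parameters from the matrix entries:
# e^{a K+} e^{b K0} e^{c K-} = [[e^{b/2} - a c e^{-b/2},  a e^{-b/2}],
#                               [-c e^{-b/2},             e^{-b/2}]]
# (valid whenever the lower-right entry is nonzero).
a = M[0, 1] / M[1, 1]
c = -M[1, 0] / M[1, 1]
b = -2.0 * np.log(M[1, 1])

M_dec = expm(a * Kp) @ expm(b * K0) @ expm(c * Km)
print(np.max(np.abs(M - M_dec)))  # ~0: decomposition reproduces the element
```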

LE vs F Test for Different Methods of Complexities
Now we explicitly evaluate C_F(Ũ) and C_LE(Ũ) for the three different methods. This will exhibit the difference between them: we find C_F(Ũ) = C_LE(Ũ) for the Fubini-Study and covariance matrix methods, but C_F(Ũ) ≠ C_LE(Ũ) for the circuit complexity method described in Section (3.2). We evaluate all the expressions and present two representative plots for the following parameter values, {q^2 = 5, q_1^2 = 20, q_2^2 = 29, q̃ = 4, q̃_1 = 16, q̃_2 = −20}, (4.16) with two choices of system size, N = 500 and N = 1000.
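As a quick sanity check on the choice (4.16): dispersion relations of the form ω_k^2 = q^2 + q̃ cos(2πk/N) (the form quoted for ω_{2,k} above) remain positive for every mode with these parameter values, so each of the three Hamiltonians has a normalizable ground state. A sketch:

```python
import numpy as np

# (q^2, q~) pairs for H, H1 and H1~, read off from (4.16).
params = {"H": (5.0, 4.0), "H1": (20.0, 16.0), "H1~": (29.0, -20.0)}

mins = []
for N in (500, 1000):
    k = np.arange(N)
    for name, (q2, qt) in params.items():
        w2 = q2 + qt * np.cos(2 * np.pi * k / N)  # omega_k^2 for this Hamiltonian
        mins.append(w2.min())
        print(N, name, w2.min())

# Every minimum is positive (5-4=1, 20-16=4, 29-20=9), so all omega_k are real.
```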

Figure 3: LE vs F Test for Fubini-Study
From Fig. (3) one can immediately see that the Fubini-Study approach cannot distinguish between the two complexities: they overlap completely. At this point we are unable to prove analytically that the two expressions C_LE(Ũ) and C_F(Ũ) given in (4.17) are equal; we can, however, show that they agree at leading order in a small-t expansion. We sketch the proof in Appendix (C), leaving the complete proof for future study.
Circuit Complexity: For this case we have the following, where Ω_k and Ω_{1,k} are defined in (3.28) and (4.8), respectively.
From Fig. (4) it is evident that the two complexities are quite different in the circuit complexity method. Moreover, the complexity related to the Loschmidt echo (the complexity of the state obtained by a forward followed by a backward evolution) is larger than the complexity obtained for the fidelity (the complexity between two forward-evolved states) for most of the evolution.

Figure 4: LE vs F Test for Circuit Complexity
Therefore, although the closeness between (ψ_0 and ψ_2) is the same as the closeness between (ψ_1 and ψ̃_1), the complexity of ψ_2 with respect to ψ_0 is larger than the complexity of ψ_1 with respect to ψ̃_1. We further plot |C_LE(Ũ) − C_F(Ũ)| as a function of time. From Fig. (5) we observe that this difference becomes constant quite quickly and merely fluctuates around this constant value even at late times. It would be interesting to perform further numerical analysis to explore the late-time behaviour and investigate its physical implications in future work. So far we have only used F_2 as the measure for the complexity. We have also considered another measure, namely F_{κ=1}, as defined in (3.26) with p_I = 1 for all I. Using the arguments of [67], one can easily show that, like F_2, F_{κ=1} is also minimized when evaluated on the same geodesic solution (3.40). We can then show numerically that the complexities associated with the Loschmidt echo and fidelity differ with respect to the F_{κ=1} measure as well.
Complexity from the covariance matrix: For this case we have the following, where Ω_k and Ω_{1,k} are defined in (3.28) and (4.8), respectively. Just like the Fubini-Study approach, from Fig. (6) one can immediately see that the covariance matrix method cannot distinguish between the two complexities: they overlap completely, and this behaviour is independent of the values of the parameters and of N. Again, as in the Fubini-Study case, we are unable to prove analytically that the two expressions C_LE(Ũ) and C_F(Ũ) in (4.20) are equal; we can only show that they agree at leading order in a small-t expansion. We sketch the proof in Appendix (C), leaving the complete proof for future study. We would like to stress the following point. In the literature, for example [98,99,104], the Loschmidt echo is considered a diagnostic for quantum chaos, for which it is imperative that the two Hamiltonians H_1(q_1, q̃_1) and H̃_1(q_2, q̃_2) be only slightly different. In our notation this corresponds to the parameter sets {q_1, q̃_1} and {q_2, q̃_2} being close to each other. However, the results presented in this section and the next require only that the two sets of parameters differ; they do not depend on the magnitude of this difference. In fact, the difference between {q_1, q̃_1} and {q_2, q̃_2} can be large, and the choice made in (4.16) corroborates this statement.

A Consistency Check
We conclude this section with a quick check of the triangle inequality, which also serves as a consistency check of our computation. We consider the states {ψ_0, ψ_1, ψ_2} and compute the circuit complexities (Section (3.2)) of the state ψ_1 (forward evolved by H_1) with respect to ψ_0 (C_1), of the state ψ_2 (forward evolution by H_1 followed by a backward evolution by H̃_1) with respect to ψ_0 (C_2), and of the state ψ_2 with respect to ψ_1 (C_3). Here ψ_0 is the ground state of the Hamiltonian (2.6a). Fig. (7) clearly shows that the triangle inequality (C_1 + C_2 ≥ C_3) is satisfied. For the other two methods (Sections (3.1) and (3.3)) one can check that the triangle inequality is trivially satisfied. This is a consistency check of our numerical computations.
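The same inequality can be illustrated with a distance that is available in closed form: the Fubini-Study distance arccos|⟨ψ_a|ψ_b⟩| is a genuine metric on pure states, so it obeys the triangle inequality exactly. The following toy check (random Hermitian matrices as stand-ins for the Hamiltonians; this distance is a simplified proxy, not the Nielsen complexity computed in the text) mimics the structure of the check in Fig. (7):

```python
import numpy as np
from scipy.linalg import expm, eigh

rng = np.random.default_rng(1)

def rand_herm(n):
    """Random Hermitian matrix (toy stand-in for a Hamiltonian)."""
    A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (A + A.conj().T) / 2

n, t = 6, 0.9                       # hypothetical size and time
H, H1 = rand_herm(n), rand_herm(n)
H1t = H1 + 0.2 * rand_herm(n)       # slightly different backward Hamiltonian

psi0 = eigh(H)[1][:, 0]             # ground state of H
psi1 = expm(-1j * H1 * t) @ psi0    # forward by H1
psi2 = expm(1j * H1t * t) @ psi1    # then backward by H1~

# Fubini-Study distance between normalized pure states.
d = lambda a, b: np.arccos(min(1.0, abs(np.vdot(a, b))))
C1, C2, C3 = d(psi0, psi1), d(psi0, psi2), d(psi1, psi2)
print(C1 + C2 >= C3)                # triangle inequality holds
```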

A Generic Feature of Circuit Complexity
In the previous section, we established that only the circuit complexity is sensitive to the evolution of states; in that sense, one can argue that it is a better measure of complexity. Now we explore whether this is a generic feature of the overlap, namely whether it persists if we perform further forward and backward evolutions with another set of Hamiltonians, H_2 and a slightly different H̃_2.
|ψ_4(x_k, t)⟩ = e^{iH̃_2 t} e^{−iH_2 t} e^{iH̃_1 t} e^{−iH_1 t} |ψ_0⟩ (5.1) The analogue of the Loschmidt echo will be ⟨ψ_0|ψ_4⟩. Note that this overlap (5.2) can be written as follows: where |ψ_3(x_k, t)⟩ = e^{−iH_2 t} e^{iH̃_1 t} e^{−iH_1 t} |ψ_0⟩, |ψ_1(x_k, t)⟩ = e^{−iH̃_2 t} |ψ_0⟩, |ψ_2(x_k, t)⟩ = e^{iH̃_1 t} e^{−iH_1 t} |ψ_0⟩, and |ψ̃_2(x_k, t)⟩ = e^{iH_2 t} e^{−iH̃_2 t} |ψ_0⟩. Therefore, we get two different overlaps with the same magnitude, and hence two different types of fidelity in terms of the states involved in the overlap. We label ⟨ψ_1|ψ_3⟩ as fidelity F_1 and ⟨ψ̃_2|ψ_2⟩ as fidelity F_2. Note that one can also consider the reverse combinations of states when defining the fidelity, such as ⟨ψ_2|ψ̃_2⟩ and ⟨ψ_3|ψ_1⟩; these extra overlaps do not change the qualitative features of our results, so for illustration purposes we ignore them. After performing the appropriate number of evolutions, we compute the corresponding complexities. Our computation is summarized in Fig. (8). Once again we see that the complexity for the LE is always larger than the complexity computed for any combination of intermediate states corresponding to a fidelity. This result can be written as (5.5), where the superscripts denote the pair of wave functions for which we compute the complexity. Several comments are now in order.
• We have made an important assumption here: each evolution (forward and backward) is generated by a different Hamiltonian. Let us elaborate on this point. For the case of the LE (⟨ψ_0|ψ_4⟩) in the above example, |ψ_4⟩ is generated from |ψ_0⟩ by 4 evolutions with 4 different Hamiltonians. One can obviously generalize this argument to any number of evolutions. Once we fix the set of Hamiltonians entering the LE, we can easily compute the various types of fidelity by distributing this same set of Hamiltonians in different ways such that, quantum mechanically, all of the overlaps are the same. For the case stated earlier, there are two distinct types of fidelity, namely ⟨ψ_1|ψ_3⟩ and ⟨ψ̃_2|ψ_2⟩.
• Another point is that, for the purpose of these computations, our starting point is always the ground state of the Hamiltonian (for example |ψ_0⟩ in equation (5.1)), which is of the form (2.1). All the other states are constructed by evolving this ground state.
• Last but not least, the result in (5.5) depends neither on whether we perform a forward or a backward evolution, nor on the degree to which the Hamiltonians differ from each other. Given these facts, we next generalize our results. We can perform the same operations for an arbitrary number of different Hamiltonians, leading to an arbitrary number of evolutions of the state. We have tested this for 8 different evolutions with different Hamiltonians and, interestingly, we find that the complexity corresponding to the Loschmidt echo is always larger than that of any possible fidelity. Moreover, for the different fidelities, the number of evolutions performed on the reference state dictates the magnitude of the complexity. This is shown in Fig. (9). This result implies that although the closeness between two states (the overlap) does not change under unitary evolution, the states are very different from the perspective of a quantum circuit and of the difficulty (in terms of complexity) of reaching one evolved target state from the other. Our analysis also provides a guideline about pairs for which it is easier to move between states in the sense of complexity: it tells us which pair of states has the smallest complexity for a given set of Hamiltonians.
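The starting point of all these comparisons — that every redistribution of the same set of Hamiltonians yields the same overlap — can again be checked in a toy model. A sketch with random Hermitian stand-ins for H_1, H̃_1, H_2, H̃_2 (hypothetical sizes and couplings), verifying that the Loschmidt-echo overlap (5.2) equals both fidelities F_1 and F_2:

```python
import numpy as np
from scipy.linalg import expm, eigh

rng = np.random.default_rng(2)

def rand_herm(n):
    """Random Hermitian matrix (toy stand-in for a Hamiltonian)."""
    A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (A + A.conj().T) / 2

n, t, eps = 5, 1.1, 0.1
H = rand_herm(n)
H1, H2 = rand_herm(n), rand_herm(n)
H1t, H2t = H1 + eps * rand_herm(n), H2 + eps * rand_herm(n)  # "tilde" partners

psi0 = eigh(H)[1][:, 0]
U = lambda A, s: expm(1j * s * A * t)   # s=+1 backward factor, s=-1 forward

psi4  = U(H2t, 1) @ U(H2, -1) @ U(H1t, 1) @ U(H1, -1) @ psi0  # as in (5.1)
psi1  = U(H2t, -1) @ psi0
psi3  = U(H2, -1) @ U(H1t, 1) @ U(H1, -1) @ psi0
psi2  = U(H1t, 1) @ U(H1, -1) @ psi0
psi2t = U(H2, 1) @ U(H2t, -1) @ psi0

LE = abs(np.vdot(psi0, psi4))
F1 = abs(np.vdot(psi1, psi3))
F2 = abs(np.vdot(psi2t, psi2))
print(LE, F1, F2)   # all three overlaps coincide
```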
One interesting feature of these differences in complexity is that they do not die away with time. The differences are small at very early times, but they become fixed as soon as the complexities saturate. Moreover, our analysis indicates an upper bound on the complexity for a given overlap evolved with a fixed number of Hamiltonians. Since, as overlaps, they are all the same, this comparison by complexity can be seen as computing the same quantity in different ways; therefore, by a slight abuse of language, we can say that any pair other than the pair with maximum complexity has uncomplexity, or resources [25]. In our language, the complexity corresponding to the Loschmidt echo is always the highest; therefore, the pairs corresponding to the fidelities have resources.
Given recent experimental advances, one can possibly simulate these overlaps in an experimental setting [98,106,107]. Our complexity analysis provides the most efficient (optimal) quantum circuit needed to simulate the time evolution, and hence a natural selection mechanism. Since quantum mechanically all of these overlaps are the same, this test reduces the difficulty of experimental implementation in the sense that the overlap can be obtained by constructing a quantum circuit with a minimal number of gates.
Before ending this section, we want to clarify further what we mean by having resources. Again, assume that we want to make overlap measurements (say, for two steps of evolution) in the lab. We can measure either the Loschmidt echo or the fidelity, since quantum mechanically the result is the same. Now suppose we are supplied with either ψ_1 or ψ̃_1, apart from ψ_0. Our result then implies that it is easier to measure the fidelity, as the complexity between the states entering the fidelity is smaller than the complexity between the states entering the Loschmidt echo. Note that if we do not have either ψ_1 or ψ̃_1, but only ψ_0, we of course lose this advantage, since we must then also account for the complexity of simulating ψ_1 or ψ̃_1 from ψ_0. We have this advantage only when we are supplied with either ψ̃_1 or ψ_1. This is precisely what we mean by having resources, i.e. having possession of states with some extrinsic complexity (w.r.t. ψ_0).

Evolution of Complexity: Local vs Non-Local Theory
In this section, we explore the evolution of complexity in a different context. To highlight the point, we show results obtained from the circuit complexity method (Section (3.2)), but our discussion applies to the other two methods as well (Sections (3.1) and (3.3)).
As seen in Fig. (4), C_LE(Ũ) and C_F(Ũ) grow almost instantaneously and then fluctuate around a constant value. While the fluctuations are not unexpected [25], the fast growth is not in conformity with some expectations in the existing literature [25]. Furthermore, we have found that the complexity saturates faster than the entanglement entropy [71,109]. Although it is a bit early for a direct comparison with holography, we note that this feature contradicts the holographic expectation that the complexity grows more slowly than the entanglement entropy and saturates much later. We will address the issue of these different time scales in upcoming works [71,109]. To probe this behaviour further, we consider the generalized dispersion relation (6.1). For α = 1, this reduces to the model considered in this work; here we consider (positive and negative) integer values of α with |α| > 1, i.e. a non-local theory. Fig. (10) shows the early-time (t < 10) behaviour of the circuit complexity, both C_LE(Ũ) and C_F(Ũ), for several values of α. Notice that the complexity grows for a substantial amount of time in these non-local theories, i.e. we obtain the desired time dependence of the complexity; the more non-local the theory, the slower the rate of growth.
Also notice that the difference between C_LE(Ũ) and C_F(Ũ) becomes more pronounced as the theory becomes more non-local. In non-local theories, the entanglement entropy exhibits volume-law scaling (compared with the area-law scaling of local theories) [108]; we speculate that the volume-law (or area-law) scaling of the entanglement entropy is related to the growth of the complexity. We investigate this in detail in a forthcoming paper [109].

Discussion
In this work, we proposed a litmus test, titled the LE vs F Test, to distinguish between different methods of computing complexity. The test is predicated on the fact that an overlap between two states is invariant under unitary evolution; we categorized the different overlaps as Loschmidt echo and fidelity and computed the complexity between the states involved in each overlap. The idea was to investigate whether the different measures of complexity are sensitive enough to capture the evolution of states; intriguingly, we found that only the circuit complexity from the Nielsen approach passes the test. This led us to conclude that circuit complexity from the Nielsen approach is a more sensitive method, at least from the perspective of sensing the time evolution of states. Finally, we examined the nature of the growth of complexity for our model: the complexity grows very quickly, saturates, and then fluctuates around the saturated value. We observed that if we make the theory non-local (namely, consider a dispersion relation of the form in Eq. (6.1) with |α| > 1), the complexity grows more slowly.
There are many interesting directions to pursue. It is important to understand the issue of time scales, namely the relation between the equilibration time and the time for the complexity to saturate. Until now, we have worked primarily in a discretized set-up; we would like to take a continuum limit to make better contact with (continuous) QFTs. Moreover, to make contact with holography, it is important to generalize our results to interacting theories; toward that goal, one starting point could be the construction of [68]. It is known that the Loschmidt echo can be used as a diagnostic for chaos, and one can extract information about the Lyapunov exponent from it [98,104]; it would be interesting to evaluate C_LE(t) for chaotic theories and study its relation to chaos. Furthermore, it would be valuable to extend this construction to mixed states, which would create a platform to test some of the holographic results related to sub-region complexity [44]. Last but not least, tensor networks provide useful ways to represent the time evolution of wave functions [110]; it would be interesting to understand whether this kind of computation could shed light on the optimal network required to represent time evolution, thereby improving such constructions. One could also study the causal structure of spacetime [111] and understand the connection between our construction and various path integral approaches [97,[112][113][114].

B Choice of the Reference State
In the main text, we discussed the complexity associated with the two overlaps (1.1) and (1.2). For that we computed the relative complexity between a time-evolved state and the ground state of the Hamiltonian (2.6a). In general, given a target state, the value of the complexity depends on the choice of reference state. A natural choice for the reference state is an unentangled state, namely a state with no entanglement in the original coordinate basis; this unentangled reference state is the ground state of the ultra-local Hamiltonian with ω_0 = constant, i.e. it is dispersionless. Here we compute the complexity w.r.t. such an unentangled state, for the sake of comparison with the results in the main text. With respect to this state, one can readily write down the expression for the complexity as evaluated in Sections (3.2) and (3.3): we replace ω_k by ω_0 in (3.44) and (3.64). The complexity from the Fubini-Study approach, however, is more involved; in what follows, we outline the calculation. As in Section (2), one can diagonalize H_0 by introducing new oscillator modes; one then obtains the diagonal form. The operators {a_k, a†_{−k}} of (2.6a) are related to {c_k, c†_{−k}} via a Bogoliubov transformation with |U⁰_k|² − |V⁰_k|² = 1. The ground state of H_0, which we denote by |ψ_r⟩, is then given in terms of the ground state of (2.6b).
LE vs F test using the Fubini-Study method: From (3.12) and (4.10), the coefficients α_{i,k}, β_{i,k}, μ_{i,k} for i = 1, 2 are of O(t), in fact linear in t. We now expand all the expressions in small t and keep only the leading-order term.