Phase transition and Chaos in charged SYK model

We study the chaotic-integrable transition and the nature of quantum chaos in the SYK model with a chemical potential. We find that the phase transition depends on the temperature steps used in the cooling and heating process. We also find that a mass-like term consisting of a two-fermion random interaction (the q=2 SYK term) does not give rise to a sharp transition. Turning on the chemical potential suppresses the Lyapunov exponent exponentially. We also show that this suppression is due to the chemical potential and not to a mass term in the Hamiltonian.


Introduction and Summary
The Sachdev-Ye-Kitaev (SYK) model is a quantum system of N fermions (with N large) with random all-to-all interactions [1,2]. It has been a subject of great interest in the last few years [3][4][5][6][7][8]. The model has many remarkable properties. It has no quasi-particle excitations. The gaps in its spectrum are exponentially suppressed in N. It flows to a conformal theory in the deep infrared. It also saturates the quantum chaos bound of [9]. All these properties point to the existence of a bulk dual of the theory. There have been many proposals and other related works on the gravity side [10][11][12][13][14][15][16][17][18].
The original SYK model is built from Majorana fermions. If one instead considers the SYK model with complex fermions, one can turn on a mass term in the Hamiltonian or consider a thermal state with a chemical potential. With the chemical potential turned on, a first order phase transition has been observed [19]. The high temperature phase is chaotic while the low temperature phase is integrable (non-chaotic). Henceforth, the two phases will be called the chaotic phase and the integrable phase. The integrable phase is effectively described by a weakly interacting massive theory with a renormalized mass. In this phase, the Lyapunov exponent is also practically zero. This phase transition is analogous to the Hawking-Page transition between the black hole phase and the thermal AdS phase [20].
In this paper we study the phase transition in more detail. The most important result of this work is that the phase transition depends on the temperature steps used in the cooling and heating process. The saddle points of the two phases coexist for all values of temperature and chemical potential. We also study the chaos dynamics in the presence of a mass term in the Hamiltonian and in a state with the chemical potential turned on.
We will mainly work with the 4-fermion (q=4) interaction. Consider the two Hamiltonians, where the couplings j_{4,ij;kl} are random numbers drawn from a Gaussian distribution. For both Hamiltonians, Q = Σ_i Ψ†_i Ψ_i is a conserved charge, so we can consider thermal states with a non-zero chemical potential for this charge. The relation between the charge Q (its expectation value) and the chemical potential is given in [21] in the low temperature limit T → 0 (β → ∞). Mass and chemical potential are equivalent when one calculates the partition function using the imaginary time path integral. First the imaginary time Schwinger-Dyson (SD) equations are solved numerically using an iteration method. The solution is then used to calculate the partition function. To prepare low temperature states, one has to cool down the system gradually in the numerics. One can also heat up the system after the cooling process. The details are given in section 3.
We have also looked for a phase transition in another related system where, instead of the mass term, there is a (q = 2) SYK interaction term, with couplings j_{2,ij} again drawn from a Gaussian distribution. There have been claims that this system also undergoes a chaotic-integrable phase transition. But from an explicit free energy calculation, we find that there is no phase transition. The chaotic dynamics is increasingly suppressed as the q = 2 coupling strength is increased, but the system is never completely integrable. This agrees with a high precision calculation of the Lyapunov exponent of this system [22]: the Lyapunov exponent never goes to zero completely. It also agrees with results from non-equilibrium dynamics. It has been found that the q = 2 coupling slows down the thermalization process, but the system ultimately thermalizes even when the q = 2 coupling is very large [23]. We also study the nature of chaos for the two Hamiltonians (1.1) in thermal states with the chemical potential turned on. In real time dynamics, mass and chemical potential are different: the chemical potential is manifested in the state, while the mass term is part of the Hamiltonian. We work out the differences for simple free theories in Appendix B. So, for the system with Hamiltonian H_SYK we consider mixed states with the following probability densities.
For the system with Hamiltonian H̃_SYK, we consider the states given above. An interesting case is when we take η = −µ in ρ̃_2(β, η). The probability density is then effectively ρ_1(β), but the time evolution operator is still H̃_SYK. There are many conjectured diagnostics of chaos. One of the most popular tests is the comparison of the energy spectrum with the eigenvalue spectrum of a random matrix [24]. Another popular test is to examine the spectral form factor [25,26]. In this paper, we calculate Out-of-Time-Ordered Correlators (OTOC). The OTOC of chaotic systems like the SYK model grows exponentially [9], where N is the number of degrees of freedom in the system. It has been conjectured that the Lyapunov exponent λ_L of a quantum system has an upper bound.
Interestingly, there have been exceptional cases where this bound is violated [27,28]. The most interesting case is the bulk calculation in the BTZ black hole background with non-zero angular momentum [27]. For this particular case, the violation of the bound has been attributed to the fact that the left-moving and right-moving degrees of freedom in the boundary theory have different effective temperatures β_eff = (β ± J) [29]. A general argument which explains this result is provided in [30]. The Lyapunov exponent in the charged SYK model has also been calculated in the large q limit, where the chemical potential is absorbed by redefining an effective coupling strength.
In this case, the Lyapunov exponent is found to be greatly suppressed by the chemical potential, but the upper bound is still given by equation (1.6). This can be shown easily in our case by using the two BCH formulas.
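Since the charge Q = Σ_i Ψ†_i Ψ_i commutes with H_SYK, the two relations take the following simple exact form (a sketch in our notation):

```latex
\begin{aligned}
e^{-\beta(H_{SYK}+\mu Q)} &= e^{-\beta H_{SYK}}\, e^{-\beta\mu Q}
  && \text{(since } [H_{SYK},Q]=0\text{)}, \\
e^{i\mu t Q}\, \Psi_j\, e^{-i\mu t Q} &= e^{-i\mu t}\, \Psi_j
  && \text{(from } [Q,\Psi_j]=-\Psi_j\text{)}.
\end{aligned}
```

The second relation shows that evolution with H_SYK + µQ differs from evolution with H_SYK only by a phase on each fermion, which is why the chaos bound argument of [9] goes through unchanged.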
Using these relations, the OTOC of the system with Hamiltonian H_SYK in the thermal state with chemical potential ρ_2(β, η) follows. Using the same line of argument as in [9], the upper bound is still 2π/β. A similar general argument has also been given in [30]. The suppression of the Lyapunov exponent at finite temperature is not surprising. It can be seen easily in the extreme case of η → ∞ with β finite: the Lyapunov exponent is zero, since the state in this limit is the empty state |0⟩.
Similarly, with η → −∞, the state is the filled state |1⟩ and again the Lyapunov exponent is zero. Note that |1⟩ and |0⟩ are eigenstates of Q, so they are also eigenstates of H_SYK. Another viewpoint on the OTOC is operator scrambling [31]: it measures the rate of growth of an operator under time evolution. From this viewpoint, the suppression implies that the state with chemical potential picks out 'operator words' with slower growth rates. The fastest growing operator words are killed by the state; their expectation values are negligible.
The results of this work are as follows: 1. In a thermal state with non-zero chemical potential, we find that the chaotic and integrable saddle points exist in all the regions of the (temperature, chemical potential) parameter space that we have searched numerically. The phase transition depends on the cooling and heating process of the system: large temperature jumps induce the phase transition. Earlier works have shown that there is a region in the (β, µ) plane where either of the two phases can exist, depending on whether one is cooling down or heating up the system. We find that the size of this region depends on the temperature steps of the cooling and heating process. Earlier works have also pointed out a critical point in the (β, µ) plane beyond which there is no phase transition. We find that this critical point is an artefact of the temperature step dependence of the phase transition. We show this by explicitly exhibiting a phase transition well beyond the supposed critical point.
2. Instead of the chemical potential, we looked for a phase transition by turning on the two-fermion random interaction (the q=2 SYK-like term). Unlike other attempts, we do this by explicitly calculating the free energy. We do not find a phase transition in this case.
3. We also observe the phase transition in the real time solutions of the Schwinger-Dyson equations. In the chaotic phase and at finite temperature, the chemical potential suppresses the Lyapunov exponent exponentially. The suppression is also independent of the sign of the chemical potential.
4. The suppression of the Lyapunov exponent is solely due to chemical potential. In other words, the suppression is due to the state of the system. Mass term in the dynamics (in the evolution operator) does not suppress the Lyapunov exponent in the chaotic phase.
5. In the integrable phase, the Lyapunov exponent is effectively zero. Numerically, we find λ*_L < 10^(-6) in the integrable phase.
For the calculation of Lyapunov exponents in the chaotic phase, we cool down the system slowly so that the system remains in the chaotic phase. One of the most important consequences of these results is that thermalization could be state-dependent. This is the subject of our ongoing work [22]. We also would like to point out that interesting gravity configurations with no black hole formation have been found [32,33]. More details can be found in section 6.
An interesting consequence of the BCH relations (1.7, 1.8) is that a mass quench is trivial in 1-D quantum systems, including the SYK model. For example, consider a mass quench: starting from a thermal state of a massive theory, one turns off the mass term (at any rate). The final state is a thermal state at the same temperature but with a chemical potential turned on, the chemical potential being equal to the initial mass. The Green's functions after the quench are simply given by the relation (B.13). The equilibration process is instantaneous: as soon as both time arguments have passed the quench region, the Green's functions reach their final values. Similar BCH relations exist for bosonic systems as well.
The outline of this paper is as follows. In section 2, we introduce the complex SYK model in the imaginary time formalism and reproduce the well-known Schwinger-Dyson equations of the theory. We elaborate on the numerical recipe used to solve the SD equations. In section 3, we elaborate on the numerical calculation of the partition function. We also reproduce the phase transition and point out the new features we observed. In section 4, we briefly repeat the exercise of deriving the SD equations in the real time formalism. Then we calculate the Green's and Wightman functions. In section 5, we calculate the OTOC and the associated Lyapunov exponents. Section 6 consists of conclusions and discussions. The appendices collect general results used in the main text. Appendix A contains the conventions used in the paper. In Appendix B, we differentiate mass from chemical potential. The Fluctuation-Dissipation theorem in the presence of a chemical potential is derived in Appendix C. Appendix D contains the details and subtleties involved in the numerical calculation of the partition function of a fermionic theory.

Complex SYK model in imaginary time formalism
The (grand) partition function is Z(β, µ) = Tr e^(−β(H_SYK + µQ)) = Tr e^(−β H̃_SYK) (2.1). As pointed out above, mass and chemical potential are equivalent in the imaginary time formalism. The Hamiltonian and the charge are given in (2.2). In path integral language, the partition function in terms of Grassmann fields ψ_i(τ) is given in (2.3). The symmetries of the couplings are j_{4,ij;kl} = j*_{4,kl;ij}, j_{4,ij;kl} = −j_{4,ji;kl}, j_{4,ij;kl} = −j_{4,ij;lk} (2.4). These symmetries ensure that the Hamiltonian is hermitian. The j_{4,ij;kl} are the well-known disordered couplings with a Gaussian distribution. Instead of the range of indices in (2.2), below we will consider i < j and k < l for j_{4,ij;kl}, while j and l run from 1 to N. With this convention, we separate the real and imaginary parts of j_{4,ij;kl}.
The exact distributions are as follows. We work with quenched averaging of the disordered couplings, performing the averaging at the level of the partition function.
The action follows. Performing the Gaussian integrals over j_{4R,ij;kl} and j_{4I,ij;kl}, several terms cancel due to the symmetries of j_{4,ij;kl} and the Gaussian integrations. In the large N limit, only melonic diagrams dominate, so we enforce the definition of the bilocal field using a Lagrange multiplier Σ(τ_1, τ_2), which turns out to be the self energy. Performing the ψ̄_i and ψ_i Grassmann integrals and using the convention defined in (A.5) for the Fourier transforms of the Grassmann variables, the equations of motion follow. These are the well-known Schwinger-Dyson (SD) equations of the complex SYK model. We solve the SD equations numerically. The two equations form a closed iterative loop, which is executed until the desired convergence is achieved. Approximate initial values of G(ω_n) are used to start the iterations; we use the propagators of the exactly solvable q = 2 SYK model. For the q = 2 SYK model, the only difference in the SD equations is in the expression for the self energy, which becomes Σ(τ_1, τ_2) = J_2² G(τ_1, τ_2), so the thermal propagator is known exactly. The test for convergence is done by calculating ∆G, where G_prev(ω_n) is the Green's function from the previous iteration. The iteration is stopped when ∆G is smaller than a preset tolerance limit. One of the crucial numerical techniques used to achieve convergence is the weighted iteration pointed out in [34]: in the first batch of iterations, we take half weight, keeping half of the previous value of G(ω_n) and updating the other half using (2.15). We choose the imaginary time interval {−β/2, β/2} for performing the iterative loops instead of the usual (0, β). Once the desired convergence is achieved, we can perform a Fourier transform and calculate the propagator over any range of imaginary time.
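To illustrate the weighted iteration, here is a minimal sketch for the q = 2 case, where the self energy is simply Σ(ω_n) = J² G(ω_n) and the converged answer can be checked against the known closed-form solution. The function names, grid size, weight and tolerance values are illustrative choices, not the ones used in the paper:

```python
import numpy as np

def solve_q2_sd(beta, J, mu, n_freq=512, weight=0.5, tol=1e-10, max_iter=2000):
    """Weighted fixed-point iteration for the q = 2 SD equations,
    G(i w_n) = 1/(i w_n + mu - Sigma),  Sigma(i w_n) = J^2 G(i w_n)."""
    n = np.arange(-n_freq, n_freq)
    z = 1j * (2 * n + 1) * np.pi / beta + mu   # fermionic Matsubara frequencies plus mu
    G = 1.0 / z                                # free-fermion initial guess
    for _ in range(max_iter):
        G_new = 1.0 / (z - J**2 * G)
        dG = np.max(np.abs(G_new - G))           # convergence measure (Delta G)
        G = (1.0 - weight) * G + weight * G_new  # half-weighted update, as in [34]
        if dG < tol:
            break
    return z, G

def exact_q2(z, J):
    """Closed-form root of J^2 G^2 - z G + 1 = 0 with G -> 1/z at large |z|."""
    s = np.sqrt(z * z - 4.0 * J**2)
    s = np.where(np.real(s * np.conj(z)) < 0, -s, s)  # pick the physical branch
    return (z - s) / (2.0 * J**2)

z, G = solve_q2_sd(beta=20.0, J=1.0, mu=0.05)
```

The damping is what makes the loop robust: near the smallest Matsubara frequency the undamped map is only marginally contracting, while the half-weighted update converges quickly.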
The above imaginary time interval is discretized into 10^4 points (2 × 10^4 points in some cases), with the spacing between adjacent points being β × 10^(-4). We found that taking fewer points introduces large errors in the grand potential calculation. In frequency space, we used the range {−2π(10^5 − 1/2)/β, 2π(10^5 − 1/2)/β} with interval size 2π/β. Taking this higher UV cut-off reproduces more accurate values of G(τ_1, τ_2) as τ_1 → τ_2, which in turn gives a more accurate value of the grand potential.
We will always work with T < 1, i.e. β > 1. Using the initial data (2.16), we calculate the propagator at a relatively high temperature (low β ∼ 1). Then, using that propagator, we iteratively solve the SD equations at a lower temperature. This is similar to cooling down the system gradually. After cooling to a sufficiently low temperature, we can also heat up the system.
One interesting observation is that taking equal inverse temperature steps gives similar first-iteration values of ∆G. Roughly speaking, this suggests that propagators at equal inverse temperature intervals are also numerically at equal "distance" from each other. So, inverse temperature is the natural and more convenient scale compared to temperature.
The cooling and heating process can be roughly classified into four regimes. 1. Cooling with equal temperature steps (∆T constant): in this regime, we cool down the system in equal temperature steps. Since we are only concerned with T < 1, this means we are using harmonically increasing inverse temperature steps. This regime is useful if we want the system to undergo the phase transition while cooling down.
2. Cooling with equal inverse temperature steps (∆β constant): We cool down the system in equal inverse temperature steps. This means harmonically decreasing temperature steps. This regime is useful if we want to cool down the system in the same phase avoiding the phase transition.
3. Heating with equal temperature steps (∆T constant): In this regime, we heat up the system in equal temperature steps. It means we are using harmonically decreasing inverse temperature steps.
4. Heating with equal inverse temperature steps (∆β constant): we heat up the system in equal inverse temperature steps. This means harmonically increasing temperature steps.
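The distinction between the regimes can be summarised by how the step sizes in T and β relate. A small sketch (the start/end values and step sizes here are arbitrary illustrations, not the ones used in the paper):

```python
import numpy as np

def cool_equal_dT(T_start, T_end, dT):
    """Regimes 1/3: equal temperature steps (Delta T constant)."""
    T = np.arange(T_start, T_end, -dT)
    return 1.0 / T          # the corresponding inverse temperatures beta

def cool_equal_dbeta(beta_start, beta_end, dbeta):
    """Regimes 2/4: equal inverse temperature steps (Delta beta constant)."""
    return np.arange(beta_start, beta_end, dbeta)

betas_T = cool_equal_dT(0.10, 0.01, 0.002)    # equal dT: the beta steps grow as we cool
betas_b = cool_equal_dbeta(10.0, 100.0, 0.5)  # equal dbeta: the T steps shrink as we cool
```

Along `betas_T` the inverse temperature steps grow as the system cools, which is the "harmonically increasing" behaviour described above; the heating schedules are simply these sequences reversed.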

Grand potential and phase transition
The grand potential is defined below. Using the solution of the SD equations in the expression (2.14) for the partition function, we obtain the grand potential. It is calculated from these two equations after solving the SD equations. But note that a straightforward use of these equations does not give the correct grand potential or the correct partition function: even for the free theory, Det[∂_τ + µ] does not numerically reproduce the free fermion partition function. We therefore use the numerical recipe in Appendix D.
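The failure of the naive determinant, and the standard fix, can be seen already for a single free fermion: the mode-by-mode regularised product over Matsubara frequencies reproduces ln Z = ln(2 cosh(βµ/2)) (with the symmetric zero-point normalisation), while the raw truncated sum just grows with the cutoff. This is only a toy version of the recipe in Appendix D, with arbitrary parameter values:

```python
import numpy as np

beta, mu = 5.0, 0.3
N = 100000
wn = (2 * np.arange(N) + 1) * np.pi / beta   # positive fermionic Matsubara frequencies

# Naive truncated log-determinant: grows without bound as the cutoff N increases
raw = np.sum(np.log(beta**2 * (wn**2 + mu**2)))

# Regularised: divide mode-by-mode by the mu = 0 determinant, then add ln Z(mu=0) = ln 2
lnZ = np.sum(np.log1p(mu**2 / wn**2)) + np.log(2.0)

lnZ_exact = np.log(2.0 * np.cosh(beta * mu / 2.0))
```

The regularised sum converges because its terms fall off as 1/ω_n², so the truncation error at cutoff N is of order µ²β²/N.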
We observe a chaotic-integrable phase transition when we gradually change the temperature of the system while keeping µ fixed at a relatively small value, µ ∼ 0.1. In the chaotic phase, for µ = 0 and large β, the mid-section of the imaginary time range is well approximated by the conformal limit. In the integrable phase, the Green's function is exponential, as in the case of the free massive theory given in equation (B.7); this signals a gap [35]. In the chaotic phase with µ turned on, the Green's function has intermediate behaviour between the two extremes. Figure 1 shows plots for the three cases. This phase transition has been observed previously for the same model and for other similar models [19,[35][36][37]. But we observed some new and very interesting features of the phase transition.
1. We find that the phase transition happens more readily when we use equal temperature steps, while using small inverse temperature steps tends to keep the system in the same phase even when the system is in an extremely unstable supercooled or superheated state. In fact, we did not observe the phase transition using inverse temperature steps for intermediate values of µ ∈ {0, 0.30}, even when the system had been cooled down or heated up to extreme temperatures. Figure 2 shows examples of the phase transition for temperature steps ∆T = 0.002 and 0.001, and no transition for ∆T = 0.0001.
2. As pointed out in [19,35], for a fixed value of µ, there is a region around the critical temperature where the system can be in either of the two phases, depending on whether the system is gradually cooled down or heated up. A very interesting feature we observed is that the size of this region is not a fixed temperature interval for a given µ. This is a consequence of the first observation above: the size of the region depends on the temperature step size used to cool or heat the system. A smaller temperature step size broadens the region, while a larger step size shrinks it. In one extreme case, with µ =, cooling directly from the initial condition (2.16) to β = takes the system directly to the integrable phase.
3. In [19], it was reported that there is a critical point (T = 0.0687, µ = 0.345) beyond which there is no phase transition. We find that the critical point does not exist. Figure 3 shows a phase transition we observed for µ = 0.4 at T = 0.0147 (β = 68), which is well beyond the supposed critical point. Indeed, the authors of [19] were surprised by the critical point they found: the model has no notion of locality due to the all-to-all interaction, so there is no clear picture of an RG flow to a critical point.
4. We also find that there is no phase transition for a system with the Hamiltonian in (1.2) but without a chemical potential. [38] examines the spectrum and spectral form factor and calculates the Lyapunov exponent of this system, and claims that there is a phase transition. But our calculation of the free energy does not show any sharp transition; the system is always in the chaotic phase. J_2 = 1 and 1.5 are extremely high values of J_2 for J_4 = 1 and inverse temperature β ∼ 100, yet the Lyapunov exponents, while small, are non-zero in this parameter range, λ*_L ∼ 10^(-3) [22]. These new observations have important implications. For example, they allude to state-dependent thermalization; more details can be found in Section 6. As pointed out above, we had to use equal temperature steps to see the phase transition. We have also checked that, for a fixed temperature, increasing µ in equal increments also leads to the phase transition.
As mentioned above, ∆G is the measure of convergence: the smaller it is, the better the convergence. ∆G decreases rapidly and monotonically if the cooling/heating process keeps the system in the same phase. A signal of an impending phase transition is that ∆G hits a bump and crosses it: it first decreases to some extent, then increases for a while, and then decreases and converges rapidly. So, it is important to check that the preset tolerance limit on ∆G is small enough to detect the phase transition. We also note that the phase transition does not depend on the tolerance limit once this quantity is set to a small enough value; we have verified this by running the same cooling and heating processes with different tolerance limits.
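The bump signature can be checked mechanically by looking at the sign pattern of successive changes in ∆G. A small illustrative helper (not from the paper; the sample histories are synthetic):

```python
import numpy as np

def crosses_bump(dG_history):
    """True if the convergence measure falls, then rises, then falls again:
    the 'bump' pattern that signals an impending phase transition."""
    signs = np.sign(np.diff(np.asarray(dG_history, dtype=float)))
    pattern = []
    for s in signs:                      # collapse runs of equal nonzero signs
        if s != 0 and (not pattern or s != pattern[-1]):
            pattern.append(s)
    return len(pattern) >= 3 and pattern[0] == -1 and pattern[1] == 1 and pattern[2] == -1

same_phase = [1.0, 0.5, 0.2, 0.1, 0.05, 0.01]           # monotone decrease
transition = [1.0, 0.5, 0.2, 0.4, 0.6, 0.3, 0.1, 0.01]  # falls, bumps up, falls again
```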

Complex SYK model in real time formalism
One can repeat the derivation of the Schwinger-Dyson equations in real time, or use Wick rotation. The real time SD equations follow, where we use the conventions of Appendix A. One could attempt a numerical Wick rotation, but we could not do it: the approximating function has to be calculated with extremely high precision, which is beyond our computational resources. Instead, we resort to solving the real time SD equations using an iterative method again. The connecting piece which completes the iterative loop is the set of Fluctuation-Dissipation relations. They express the greater and lesser Green's functions in terms of the spectral function A(ω) = −2 Im G^R(ω).
These relations are derived in Appendix C. A possible issue with solving the real time SD equations separately is that the phase transition might happen at different temperatures, even when we use the same temperature steps and the same value of µ. In fact, there is no guarantee that the phase transition happens at all when using the real time SD equations. The steps involved are as follows.
1. Just as in the imaginary time solution, the initial value for the iterations is the spectral function of the solvable q = 2 SYK model, whose real time solution is known in closed form.
2. G^>(ω) and G^<(ω) are calculated using the Fluctuation-Dissipation relations. Fourier transforms give G^>(t_1, t_2) and G^<(t_1, t_2).
3. The next step is to calculate Σ^R(ω). One could directly use convolutions to calculate Σ^R(ω) from G^>(ω) and G^<(ω), but the three integrals involved in the convolution are computationally expensive. So it is better to use the expression for the real time Σ(t_1, t_2) and perform a Fourier transform, using the relation between Σ^R(ω) and Σ(t_1, t_2).
4. Using the calculated value of Σ^R(ω), we calculate the new A(ω).
5. Using the new A(ω), we calculate G^>(ω) and G^<(ω). After this, we go back to step (2) and repeat the iteration until the desired tolerance limit is reached. The convergence is checked by calculating the difference ∆A between the spectral functions of successive iterations.
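Steps (1) and (2) can be illustrated with the q = 2 spectral function, a semicircle of half-width 2J, together with the Fluctuation-Dissipation relations G^>(ω) = −iA(ω)[1 − n_F(ω)] and G^<(ω) = iA(ω) n_F(ω). The band centre at −µ and the form of the Fermi function used here are assumed conventions for this sketch (the precise form with chemical potential is fixed in Appendix C):

```python
import numpy as np

beta, J, mu = 20.0, 1.0, 0.1
w = np.linspace(-4.0, 4.0, 80001)   # frequency grid
dw = w[1] - w[0]

# q = 2 spectral function: semicircle of half-width 2J (band centre at -mu assumed)
x = w + mu
A = np.where(np.abs(x) < 2.0 * J,
             np.sqrt(np.maximum(4.0 * J**2 - x**2, 0.0)) / J**2,
             0.0)

nF = 1.0 / (np.exp(beta * w) + 1.0)      # Fermi function
G_greater = -1j * A * (1.0 - nF)         # fluctuation-dissipation relations
G_lesser = 1j * A * nF

sum_rule = np.sum(A) * dw / (2.0 * np.pi)   # normalisation: should equal 1
Q = np.sum(A * nF) * dw / (2.0 * np.pi)     # occupation number
```

The sum rule ∫ A(ω) dω/2π = 1 is a useful sanity check at every iteration of the real loop, and Q is the occupation number discussed below.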
We also find the phase transition in the real time formalism, and it happens at roughly the same temperature as in the imaginary time calculation for the same µ. Moreover, ∆A crosses a bump in going from one phase to the other, just as in the imaginary time solutions. Most importantly, using small inverse temperature steps, we can cool the system down to very low temperatures within the chaotic phase. The solutions in the chaotic phase are used in the next section to calculate the Lyapunov exponent.
With µ or η turned on, the charge (occupation number) Q = −i lim_{t_1 → t_2} G^<(t_1, t_2) is no longer 1/2. As expected, the occupation number is the same for the same numerical value of µ (in the system with Hamiltonian H̃_SYK) and η (in the system with Hamiltonian H_SYK). But at the finite temperatures at which we work, an analytic relation between Q and µ or η is so far lacking.
We solve the real time SD equations by discretizing time into 10000 intervals of size dt = 0.05. We take the frequency range from −3000 to 3000 with interval size dω = 0.001. We mostly solve the SD equations for the Hamiltonian H̃_SYK in a thermal ensemble with temperature only, so the effective chemical potential is µ. Equation (1.9) shows that the Lyapunov exponents of two systems with the same effective chemical potential are equal: for example, one with Hamiltonian H_SYK in the state with probability density ρ_2(β, η) and the other with Hamiltonian H̃_SYK (µ = η) in the state ρ̃_1(β). We have verified this numerically by calculating the Lyapunov exponent in the two systems. One can also consider systems in which both µ and η are non-zero, so that the effective chemical potential is µ + η.

Calculation of Lyapunov exponent
The Lyapunov exponent is calculated by diagonalizing the retarded kernel []. The recursive Feynman diagram for the OTOC is shown in Figure 4. The upper legs are displaced by iβ/2 in imaginary time from the lower legs. The eigenvalue problem is [], where G_+(t) = G^>(t − iβ/2) and G_−(t) = G^<(t + iβ/2) (= −G_+(t)). We solve this equation in frequency space and look for the eigenvalue k = 1. This is done by tuning λ in the OTOC ansatz F_{1,2}(t_1, t_2) = e^(λ(t_1+t_2)/2) f_{1,2}(t_1, t_2) (5.1). After some algebra, the final equation follows. We solve it numerically using the BLAS and LAPACK libraries in FORTRAN. As mentioned in the previous section, we take the frequency range from −3000 to 3000 with interval size dω = 0.001. The diagonalization of the 12002 × 12002 real matrix takes on the order of 10 seconds on a modern computer (Intel i7-7700 with 8 GB single channel memory). The largest eigenvalue k is the one of interest: we have to set it to 1 by tuning λ = λ_L, the Lyapunov exponent of the system. We find that λ and the largest eigenvalue (indeed, all the eigenvalues) have an inverse relation: increasing λ decreases the eigenvalues and vice versa. So we do not have to search a large range of λ; we use the bisection method after finding two values of λ for which (k − 1) has opposite signs.
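The tuning step can be sketched with a toy kernel whose largest eigenvalue decreases monotonically in λ; the bracketing and bisection logic is the same as for the real 12002 × 12002 problem. The matrix here is a random stand-in, not the actual retarded kernel:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))
M = M @ M.T / 50.0                   # toy symmetric positive-definite "kernel"

def largest_k(lam):
    """Largest eigenvalue of the lambda-dependent kernel; decreases with lambda."""
    return np.linalg.eigvalsh(M / (1.0 + lam))[-1]

# Bracket (k - 1) with opposite signs, then bisect, exploiting the inverse
# relation between lambda and the eigenvalues
lo, hi = 0.0, 10.0
assert largest_k(lo) > 1.0 > largest_k(hi)
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if largest_k(mid) > 1.0:
        lo = mid                     # eigenvalue still too large: increase lambda
    else:
        hi = mid
lam_L = 0.5 * (lo + hi)              # the value where the largest eigenvalue hits 1
```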
We calculate the Lyapunov exponent for the two systems with Hamiltonians H_SYK and H̃_SYK in different states, with and without chemical potentials turned on. The results are as follows. 1. We find that the chemical potential suppresses the Lyapunov exponent exponentially. Figure 5 plots the Lyapunov exponent as a function of the chemical potential for fixed values of β. It fits an exponential function very well for small values of the chemical potential.
2. We find that the upper bound on the Lyapunov exponent with chemical potential is still 2π/β. This can be seen from Figure 6. 3. An interesting case is when we turn on the charge for the system with Hamiltonian H̃_SYK so that the charge and the mass effectively cancel. The state is as given above, where we have taken η = −µ. The real time evolution operator is still e^(−itH̃_SYK). We find that there is no suppression of the Lyapunov exponent. This means that the suppression of the Lyapunov exponent seen above is solely due to the chemical potential: the mass term in the Hamiltonian does not suppress the Lyapunov exponent.
4. In the integrable phase, we find that the normalized Lyapunov exponent λ*_L is below 10^(-6) for µ = 0.28 and β = 90.9. Around β = 30, λ*_L is of the order of 10^(-4). In the integrable phase, the numerical value of the Lyapunov exponent decreases sharply with increasing β, which is the opposite of what happens in the chaotic phase.

Comparison with large q result
Here we compare our results with the large q result of [39]. The scaled Lyapunov exponent λ*_L is given by πλ*_L = βJ̃_q cos(πλ*_L/2), where J̃_q² = qJ_q² / [2(2 + 2 cosh(µβ))^(q/2−1)] (5.8). For βJ̃ < 1, the solution is a relative of the Dottie number [40]. The Dottie number is the solution of x = cos(x); it is a universal attracting fixed point, as one can check by repeatedly applying the cosine function to any real number. Similarly, for the above equation for λ*_L, the solution can be obtained by repeatedly applying the map λ*_L → (βJ̃_4/π) cos(πλ*_L/2). The solutions are transcendental numbers. With β = 100 and J_4 = 1, βJ̃_4 → 1⁻ for µ ∼ 0.0990339. For smaller µ or larger β, βJ̃_4 is greater than 1 and the equation cannot be solved by the above iterative method; the solutions are then found using the bisection method. There is only one solution for a given set of parameters J_4, β, µ and q. Figure 7 compares our numerical results with the large q formula (with q = 4 inserted in the formula). We find that the Lyapunov exponents calculated using formula (5.8) are significantly more suppressed.
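The two solution methods described above, Dottie-style iteration for βJ̃_4 < 1 and bisection in general, can be sketched as follows, writing c = βJ̃_4; for c < 1 the two must agree:

```python
import numpy as np

def lyapunov_bisect(c, n_steps=80):
    """Solve pi*lam = c*cos(pi*lam/2) for lam in (0, 1) by bisection.
    f(0) = -c < 0, f(1) = pi > 0, and f is monotone, so the root is unique."""
    f = lambda lam: np.pi * lam - c * np.cos(np.pi * lam / 2.0)
    lo, hi = 0.0, 1.0
    for _ in range(n_steps):
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def lyapunov_dottie(c, n_iter=500):
    """Dottie-style fixed-point iteration lam -> (c/pi)*cos(pi*lam/2),
    a contraction for c < 1 (the analogue of beta*J < 1 in the text)."""
    lam = 0.5
    for _ in range(n_iter):
        lam = (c / np.pi) * np.cos(np.pi * lam / 2.0)
    return lam
```

As c → ∞, `lyapunov_bisect(c)` approaches 1, consistent with the bound 2π/β (i.e. λ*_L = 1) being reached only in the maximally chaotic limit.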

Conclusions and Discussions
In this work, we have shown that the chaotic and integrable saddle points of the complex SYK model coexist in a large region of the parameter space of temperature and chemical potential. The phase transition depends on the temperature steps used in the cooling and heating process. We also show that a supposed critical point is an artefact of this dependence on the temperature step size. These results point towards the idea that thermalization can be state-dependent. Consider the integrable state at high temperature, where the chaotic state is favoured energetically. The conventional wisdom is that the integrable state is unstable, so that an infinitesimal perturbation will bring the system to the chaotic state. But the cooling and heating processes we performed are strong, finite perturbations. So, in our case, the integrable state is not really unstable but rather a saddle point in a finite potential well, at least with respect to sudden temperature changes of a certain magnitude. It would be interesting to perform a quantum quench in the less-stable integrable state and check for non-thermalizing evolution. A strong enough quench would ultimately take the system to the chaotic state.
Similar rigorous numerical calculations in simple models are available on the gravity side. Consider the hard wall model of [32]: the gravity theory has a hard wall in the bulk, and unless a large enough energy is pumped in to create a big black hole whose horizon swallows up the hard wall, there is no black hole formation. It would also be interesting to see if our results have any bearing on wormhole formation [41].
We have also shown that the q = 2 SYK term does not lead to a sharp transition. This result agrees with the observation in [23] that, even in the presence of a very strong q = 2 coupling, the system always thermalizes. This was shown by performing a quantum quench in a system with the Hamiltonian given by (1.2): one uses a time-dependent q = 2 or q = 6 coupling to perturb the system, taking it out of equilibrium, and then numerically solves the equations of motion of the Green's functions, the Kadanoff-Baym equations. It would still be interesting to check whether the non-thermalizing path is the most favourable one by calculating the transition amplitude. It has also been shown that the Lyapunov exponent is never effectively zero, even when the q = 2 coupling is very large. In contrast, we have found that in the integrable phase due to the chemical potential, the Lyapunov exponent is effectively zero.
Our numerical calculation of the OTOC has shown that the chemical potential suppresses the Lyapunov exponent exponentially when we consider the system at finite temperature. In the limit T → 0, the Lyapunov exponent still tends towards the upper bound of (1.6). We have also shown that a mass term in the Hamiltonian of the SYK model does not suppress the Lyapunov exponent.

Acknowledgement
The author would like to thank Dileep Jatkar, Ritabrata Bhattacharya and Tousik Samui for helpful discussions. The author would also like to thank Sayantani Bhattacharyya, Yogesh Srivastava and Tarun Sharma for helpful comments.

A Conventions
Our definitions of the Green's functions are given below. G is the thermal (imaginary-time) propagator, while the G's with superscripts are real-time Green's functions. ψ+ means the operator is inserted on the upper segment of the Keldysh contour, while ψ− means insertion on the lower segment.
Our conventions for the Fourier transforms are

B Mass versus chemical potential
In many works, mass and chemical potential are mixed up and used interchangeably. Indeed, there is no difference between mass and chemical potential in the calculation of the thermal partition function using the path integral approach in imaginary time. But in real-time calculations, mass and chemical potential are very different quantities: the mass term is part of the Hamiltonian, while the chemical potential is manifested in the state. So, if one uses Wick rotation (analytic continuation) to obtain real-time quantities from imaginary-time quantities, then the Hamiltonian also includes a mass term, and the thermal state also has the chemical potential turned on. The starkest difference between chemical potential and mass term is most easily explained with two free fermionic systems consisting of one fermion each. Consider the two systems to be as follows:

1. A free Hamiltonian H_1 = 0, with the system in a thermal state with chemical potential η, ρ_1 = e^{-β(H_1 + ηQ)}/Z_1. The time evolution operator is U_1(t) = e^{-iH_1 t}.

2. A Hamiltonian consisting of a mass term, H_2 = μΨ†Ψ, with the system in a thermal state with only the temperature turned on, ρ_2 = e^{-βH_2}/Z_2. The time evolution operator is U_2(t) = e^{-iH_2 t}.

The partition functions for the two systems at the same temperature β are

Z_1 = Tr e^{-β(H_1 + ηQ)} = 1 + e^{-βη},   Z_2 = Tr e^{-βH_2} = 1 + e^{-βμ}.   (B.5)

Indeed, the partition functions for the two systems are the same if one takes η = μ numerically (at the same temperature), but this does not mean the two systems are dynamically the same. The Green's function of the first system cannot be obtained by using Wick rotation. The Green's functions of system 1 in imaginary and real time are given below.
G_1(τ_1, τ_2) is the Feynman propagator in imaginary time. G^>_1(t_1, t_2) and G^<_1(t_1, t_2) are the greater and lesser Green's functions respectively; their exact definitions are given in Appendix A. Using G^> and G^<, one can write down the Feynman, retarded and advanced Green's functions. The Green's functions of system 2 are given below. Note that the real-time Green's functions of the two systems are related by the phase factor e^{-iμ(t_1 - t_2)}. This also holds true for other general Hamiltonians; it can be shown easily using the BCH relations (1.7, 1.8) and the fact that Ψ†Ψ is a conserved charge, so it commutes with the Hamiltonian. Another reason why we want to differentiate mass and chemical potential is that the suppression of the Lyapunov exponent is solely due to the chemical potential. Consider the two Hamiltonians H_SYK and H̃_SYK defined in ??. We find that the Lyapunov exponent of the first system with Hamiltonian H_SYK in a thermal state ρ = e^{-βH_SYK} is equal to the Lyapunov exponent of the system with Hamiltonian H̃_SYK but in a thermal state ρ̃ = e^{-β(H̃_SYK - ηQ)} where η = μ (so ρ̃ = ρ). So, the suppression of the Lyapunov exponent is solely due to the chemical potential and not due to the mass term in the dynamics.
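The distinction can be made concrete for the two single-fermion systems above. The minimal sketch below (assuming the convention G^>(t) = -i⟨ψ(t)ψ†(0)⟩, which may differ from Appendix A by overall signs) represents the one-fermion Fock space by 2×2 matrices and checks that the partition functions agree for η = μ while the real-time greater Green's functions differ by the phase e^{-iμt}.

```python
import numpy as np
from scipy.linalg import expm

# Fock space of one fermion: basis {|0>, |1>}; psi annihilates |1>.
psi = np.array([[0, 1], [0, 0]], dtype=complex)
Q = psi.conj().T @ psi                      # charge (number) operator

beta, mu = 1.3, 0.8
eta = mu                                    # take eta = mu, as in the text

H1 = np.zeros((2, 2), dtype=complex)        # system 1: free Hamiltonian
H2 = mu * Q                                 # system 2: mass term mu psi^dag psi

Z1 = np.trace(expm(-beta * (H1 + eta * Q))).real   # = 1 + e^{-beta eta}
Z2 = np.trace(expm(-beta * H2)).real               # = 1 + e^{-beta mu}

rho1 = expm(-beta * (H1 + eta * Q)) / Z1    # chemical potential lives in the state
rho2 = expm(-beta * H2) / Z2                # only temperature in the state

def G_greater(H, rho, t):
    """G^>(t) = -i Tr[rho psi(t) psi^dag], with psi(t) = e^{iHt} psi e^{-iHt}."""
    U = expm(1j * H * t)
    psi_t = U @ psi @ U.conj().T
    return -1j * np.trace(rho @ psi_t @ psi.conj().T)

t = 2.0
g1 = G_greater(H1, rho1, t)   # time-independent: the dynamics is trivial
g2 = G_greater(H2, rho2, t)   # picks up the dynamical phase e^{-i mu t}
```

Numerically Z1 equals Z2, yet g2 = e^{-iμt} g1: the mass term changes the dynamics while the chemical potential only reweights the state, which is exactly the real-time relation between the two systems noted above.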

C Fluctuation-dissipation theorem with chemical potential
Consider the lesser Green's function in a thermal ensemble with the chemical potential turned on. In the fourth line we have used a relation which follows from the commutation relation [Q, ψ†] = ψ†. Taking t → −t and Fourier transforming both sides of (C.1) w.r.t. t, we obtain the relation between G^<(ω) and G^>(ω). The rest of the derivation consists of the standard steps. One takes the Fourier transform of the retarded Green's function. We took ε → 0^+ and also note that G^>(ω) is purely negative imaginary. The real part of G^R(ω) is given by the principal value integral. The spectral function then follows. So, finally, we have the relations that constitute the fluctuation-dissipation theorem with chemical potential.
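As a sanity check of the final relation, the sketch below (again assuming standard conventions for the correlators, which may differ from Appendix A by overall factors) verifies detailed balance with the shifted frequency ω − μ for a single fermionic mode of energy E: the ratio of the lesser to the greater correlator is e^{-β(E-μ)}.

```python
import numpy as np
from scipy.linalg import expm

psi = np.array([[0, 1], [0, 0]], dtype=complex)   # annihilation operator
Q = psi.conj().T @ psi                            # number operator

beta, mu, E = 1.2, 0.5, 0.9
H = E * Q                                  # one fermionic mode of energy E

rho = expm(-beta * (H - mu * Q))           # grand-canonical density matrix
rho /= np.trace(rho)

t = 1.7
U = expm(1j * H * t)
psi_t = U @ psi @ U.conj().T               # Heisenberg-picture psi(t)

greater = np.trace(rho @ psi_t @ psi.conj().T)   # <psi(t) psi^dag(0)>
lesser = np.trace(rho @ psi.conj().T @ psi_t)    # <psi^dag(0) psi(t)>

# Fluctuation-dissipation / KMS with chemical potential: the Boltzmann
# factor appears with the frequency shifted to omega - mu (here omega = E).
ratio = lesser / greater
```

The ratio is independent of t, which is the time-domain statement that the chemical potential enters only through the shift ω → ω − μ in the Boltzmann factor.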

D Fermion partition function
In this section we write down the details involved in the calculation of the fermionic partition function and the free energy. We closely follow [42]. Consider a one-fermion system with the Hamiltonian given below. In the continuum limit one obtains the well-known action of a single fermion, where in the second line we have performed an integration by parts.
One cannot obtain the correct free energy using the continuum approximation. From the discretized formula, we get the expression below. It can be shown numerically (taking large enough L) that this expression gives 1 + e^{-βE}, which is the well-known partition function of a fermion. For a massless free fermion (E = 0), as expected, we get Z = 2. Note that the non-zero element at the lowermost leftmost corner is important: it fixes the antiperiodic boundary condition of the fermionic trace, and without it the determinant is 1. Working in the frequency domain, the continuum formula does not give the correct free energy. For the first (kinetic) term, the continuum approximation amounts to replacing e^{iω_n dτ} − 1 with iω_n dτ, while for the second (potential) term, the approximation replaces e^{iω_n dτ} with 1. But these approximations fail when ω_n is very large. So, only quantities which are not sensitive to the high frequencies will come out correct. For the exact result, one has to use the formula (D.9) again. Using the mode expansions (A.5, A.5), we get the result below. Compared to (3.2), we have performed a coordinate transformation from (τ_1, τ_2) to (τ_1 − τ_2, τ_2) in the exponential part.
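The discretized determinant can be checked numerically. The sketch below is a reconstruction of the standard discretization, not necessarily identical to (D.9): an L×L matrix with 1 on the diagonal, −(1 − E β/L) on the subdiagonal, and the crucial corner element +(1 − E β/L), whose determinant is 1 + (1 − βE/L)^L → 1 + e^{-βE} as L → ∞.

```python
import numpy as np

def fermion_partition_function(beta, E, L):
    """Determinant of the discretized single-fermion operator.

    M = I - a*S with a = 1 - E*(beta/L), where S is the shift matrix whose
    wrap-around entry carries a flipped sign (antiperiodic boundary
    condition of the fermionic trace).  det M = 1 + a^L.
    """
    a = 1.0 - E * beta / L
    M = np.eye(L)
    for i in range(1, L):
        M[i, i - 1] = -a          # kinetic + potential term on the subdiagonal
    M[0, L - 1] = +a              # corner element: without it, det M = 1
    return np.linalg.det(M)

beta, E = 1.0, 0.7
Z = fermion_partition_function(beta, E, L=2000)    # approaches 1 + e^{-beta E}
Z0 = fermion_partition_function(beta, 0.0, L=50)   # massless case: exactly 2
```

Dropping the corner element makes M lower triangular with unit diagonal, so the determinant collapses to 1, in line with the remark above.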