Hessian eigenvalue distribution in a random Gaussian landscape

The energy landscape of multiverse cosmology is often modeled by a multi-dimensional random Gaussian potential. The physical predictions of such models crucially depend on the eigenvalue distribution of the Hessian matrix at potential minima. In particular, the stability of vacua and the dynamics of slow-roll inflation are sensitive to the magnitude of the smallest eigenvalues. The Hessian eigenvalue distribution has been studied earlier, using the saddle point approximation, to leading order in the 1/N expansion, where N is the dimensionality of the landscape. This approximation, however, is insufficient at the small-eigenvalue end of the spectrum, where sub-leading terms play a significant role. We extend the saddle point method to account for the sub-leading contributions. We also develop a new approach, where the eigenvalue distribution is found as an equilibrium distribution at the endpoint of a stochastic process (Dyson Brownian motion). The results of the two approaches are consistent in cases where both methods are applicable. We discuss the implications of our results for vacuum stability and slow-roll inflation in the landscape.


Introduction
One of the striking predictions of string theory is the existence of a vast energy landscape with a multitude of vacuum states [1,2]. The landscape can be described by a multi-dimensional scalar potential U (φ); then the vacua correspond to local minima of this potential. In the cosmological context, positive-energy vacua drive the inflationary expansion of the universe, and transitions between different vacua occur by quantum tunneling through bubble nucleation. The same kind of scenario is suggested by other particle physics models with compact extra dimensions. For a review of this multiverse picture see, e.g., Ref. [3]. The expected number of vacua in the landscape is enormous, so predictions in this kind of theory must necessarily be statistical.
The details of the high-energy vacuum landscape are not well understood, and it is often modeled as a random Gaussian field. The statistics of vacuum energy densities and of slow-roll inflation in such a landscape have been extensively studied in the literature [4][5][6][7][8][9][10][11][12][13][14][15][16][17][18]. Another well studied model is the axionic landscape, which can also be approximated by a random Gaussian field in a certain limit [19][20][21][22][23][24]. One of the key mathematical problems to be addressed in these models is to find the eigenvalue distribution of the Hessian matrix ζ_ij = ∂²U/∂φ_i∂φ_j. The potential minima correspond to the points where ∂U/∂φ_i = 0 and all Hessian eigenvalues are positive, and the stability of the vacuum depends on the small-eigenvalue end of the Hessian spectrum. The dynamics of slow-roll inflation also depends on the smallest eigenvalues, which determine whether or not inflation is multi-field, i.e., whether more than one field is dynamically important.
The Hessian eigenvalue distribution in a random Gaussian field has been found in Ref. [25] using the saddle point approximation in the leading order in the large-N expansion, where N is the dimensionality of the landscape. This approximation, however, becomes inaccurate for very small eigenvalues, where sub-leading terms play a significant role. In the present paper we extend the method of Ref. [25] to account for the sub-leading contributions. We also develop a new method, based on a version of Dyson Brownian motion [28], which can be applied in cases where the other method fails. In this new approach, the eigenvalue distribution is obtained as an equilibrium distribution of the Brownian stochastic process. The results of the two approaches agree in cases where both methods are applicable. We use our results to estimate the typical magnitude of the smallest Hessian eigenvalue at a local minimum of the potential and discuss its implications for vacuum stability and for the dynamics of slow-roll inflation. We also calculate the density of minima in a random Gaussian landscape. The result is consistent with earlier numerical calculations for N ≤ 100 and extends them to larger values of N.
The paper is organized as follows. In the next section we specify the model of a random Gaussian landscape, review the probability distribution of the Hessian in this model, and clarify its relation to Wigner's random matrix model. In Sec. 3, we use the saddle point approximation to calculate the Hessian eigenvalue distribution at a generic point in the landscape, under the condition that all eigenvalues are larger than a given threshold. In Sec. 4, we extend the analysis to stationary points of the landscape. We find the probability for a stationary point to be a minimum and estimate the smallest Hessian eigenvalue at a minimum. Then in Sec. 5 we develop a new method, based on Dyson Brownian motion, and use it to find the eigenvalue distribution at stationary points. Some cosmological implications of our results are discussed in Sec. 6. Our conclusions are summarized in Sec. 7. In Appendix A we discuss axionic landscapes and show that under certain conditions they can be approximated by random Gaussian fields. We use the reduced Planck units (M_pl ≈ 2.4 × 10^18 GeV ≡ 1) throughout the paper.

Correlators
We consider a random Gaussian landscape U(φ), defined in an N-dimensional field space φ, which is characterized by the average value Ū ≡ ⟨U(φ)⟩ and by its correlation function. Here k ≡ |k| and angular brackets indicate ensemble averages. We assume that the correlation function rapidly decays at |φ_1 − φ_2| ≳ Λ and that the potential has a characteristic scale U_0. The moments of the spectral function P(k) are defined in Eq. (2.2). In Appendix A, we show that under certain conditions this type of random field can be used to approximate axionic landscapes.
As an illustration, we may use a Gaussian correlation function, with Λ playing the role of the correlation length in the landscape. In this case the moments can be computed explicitly; in the large-N limit they are of the order given in Eq. (2.5). In the rest of this paper, we do not use this explicit form of the correlation function, but generically assume only the dependence of Eq. (2.5). Let us consider the potential around a given point in the field space and expand it in a Taylor series. Since the values of the potential at nearby points are correlated with one another, the coefficients of the Taylor expansion should also be correlated. In particular, this applies to the correlators of η_i = ∂U/∂φ_i and of the Hessian matrix ζ_ij ≡ ∂²U/∂φ_i∂φ_j. The parameters E, B, A appearing there are related to the moments (2.2), and from Eq. (2.5) we expect that A, B, E are O(N^0) in the large-N limit.
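As a concrete numerical illustration (not part of the analysis above), a random Gaussian landscape with a Gaussian correlator can be sampled as a superposition of plane waves with Gaussian-distributed wave vectors. The construction below is a sketch under our own assumptions: the normalization U(φ) = U_0 √(2/M) Σ_m cos(k_m·φ + θ_m) with k_m ~ N(0, Λ^{-2} I), the mode number M, and the realization count are illustrative choices; this construction reproduces the correlator ⟨U(φ_1)U(φ_2)⟩ = U_0² exp(−|φ_1 − φ_2|²/2Λ²).

```python
import numpy as np

rng = np.random.default_rng(0)

N, Lam, U0 = 5, 1.0, 1.0   # field-space dimension, correlation length, amplitude
M, R = 512, 5000           # Fourier modes per realization, number of realizations

def sample_field_pair(dphi):
    """Return U(0) and U(dphi) for R independent field realizations.

    Each realization is U(phi) = U0*sqrt(2/M) * sum_m cos(k_m.phi + theta_m)
    with k_m ~ N(0, I/Lam^2); the characteristic function of the Gaussian k
    then gives <U(phi1)U(phi2)> = U0^2 * exp(-|phi1-phi2|^2 / (2 Lam^2)).
    """
    k = rng.normal(0.0, 1.0 / Lam, size=(R, M, N))
    th = rng.uniform(0.0, 2 * np.pi, size=(R, M))
    amp = U0 * np.sqrt(2.0 / M)
    U_a = amp * np.cos(th).sum(axis=1)             # U at phi = 0
    U_b = amp * np.cos(k @ dphi + th).sum(axis=1)  # U at phi = dphi
    return U_a, U_b

# correlator at zero separation and at one correlation length
dphi = np.zeros(N); dphi[0] = Lam
Ua, Ub0 = sample_field_pair(np.zeros(N))
Ua2, Ub = sample_field_pair(dphi)
c0 = np.mean(Ua * Ub0)   # expected ~ U0^2
c1 = np.mean(Ua2 * Ub)   # expected ~ U0^2 * exp(-1/2)
print(c0, c1)
```

The ensemble averages recover the assumed Gaussian correlator within Monte Carlo noise.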

Probability distribution
The probability distribution for U and ζ_ij can be found by inverting the correlation matrix. The resulting distribution is given in Refs. [25,14]. Note that the combination AE − B² must be positive (or zero), since otherwise the distribution cannot be normalized. The cross term in Eq. (2.13) can be absorbed by a constant shift of the eigenvalues of the Hessian, using the relations above and setting the shift proportional to the identity matrix I. This implies that the eigenvalues of the Hessian are shifted by an amount of order −U/Λ² for a given U, where we have used B < 0. The Hessian matrix can be diagonalized, and the distribution can be written in terms of its eigenvalues λ_i. Changing variables from ζ_ij to λ_i, the probability distribution for the eigenvalues is given in Ref. [28]; the factor ∏|λ_i − λ_j| comes from the Jacobian, and C is a normalization factor. We denote the average eigenvalue as λ̄ and deviations from the average as δλ_i. Then Q_{U,ζ} can be written in terms of λ̄ and δλ_i for a fixed U, where we have used (2.21) in the large-N limit. Note that the Jacobian that appears in Eq. (2.17) is independent of λ̄.
One may also be interested in the probability distribution for the Hessian eigenvalues without any condition on U. In this case, U can be integrated out and the distribution takes the form given in Ref. [29]. Here we comment on the difference from random matrix theory (RMT), where the probability distribution for the elements of a real symmetric matrix ζ_ij is Gaussian with a certain constant σ_RMT [30]. This distribution is usually referred to as the Gaussian Orthogonal Ensemble (GOE). When we express the eigenvalues of ζ in terms of the average value λ̄ and displacements δλ_i from the average, the exponent Q_RMT can be rewritten accordingly. To compare Q_{U,ζ} (or Q_ζ) and Q_RMT we may take σ²_RMT = 2A (∼ U/Λ²). We then see that the cost of a nonzero λ̄ in the GOE is larger than that for a random Gaussian field by a factor of N. Thus, for the GOE, the average value λ̄ is strongly suppressed away from zero in the large-N limit.
Let us emphasize that the Gaussian correlation function (2.3) is a rather special example of a random Gaussian model: the coefficient of (Tr ζ)² in (2.13) vanishes (AE − B² = 0). As a result, for a fixed U the Hessian distribution is just the GOE with a constant shift of the diagonal terms, where m_ij is a GOE matrix. However, this is not a generic property of random Gaussian models. In what follows, we do not consider this special case, but consider a generic random Gaussian landscape, which is specified by the moments of the correlation function.
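The contrast with the GOE can be checked numerically. The following sketch is our own illustration with an arbitrary normalization σ: it samples GOE matrices, rescales the eigenvalues as µ = λ/(σ√N) so that the spectrum fills a semi-circle of support (−2, 2) in this convention, and verifies that the average eigenvalue λ̄ fluctuates only at the √2/N level (our own estimate for this normalization), consistent with the strong suppression of a nonzero λ̄ in the GOE discussed above.

```python
import numpy as np

rng = np.random.default_rng(1)
N, sigma, reals = 200, 1.0, 200

mu_bars, edges = [], []
for _ in range(reals):
    # GOE: H_ij ~ N(0, sigma^2) off-diagonal, H_ii ~ N(0, 2 sigma^2)
    G = rng.normal(0.0, sigma, size=(N, N))
    H = (G + G.T) / np.sqrt(2.0)
    lam = np.linalg.eigvalsh(H)
    mu = lam / (sigma * np.sqrt(N))   # rescaled eigenvalues; edge near +-2
    mu_bars.append(mu.mean())         # average eigenvalue of this realization
    edges.append(mu.max())            # largest rescaled eigenvalue

print(np.std(mu_bars), np.mean(edges))
```

With these conventions the spectral edge sits just below 2 (Tracy-Widom finite-N shift), while the realization-to-realization scatter of the average eigenvalue is of order √2/N ≈ 0.007.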

Saddle point approximation
In this section we use the saddle point approximation to calculate the probability distribution of Hessian eigenvalues in a random Gaussian landscape under the condition that all eigenvalues are greater than a given threshold. We follow and extend the calculation of Refs. [31,32], where the eigenvalue distribution was found for the case of the GOE. In Sec. 3.1, we calculate the distribution at a generic point in the landscape. The distribution at local minima of the potential cannot be found with this method. However, the result of this calculation can be used to find the probability for a stationary point of the potential to be a minimum and to estimate the smallest Hessian eigenvalue at a minimum, as we will see in Sec. 4.1. The calculation of the Hessian eigenvalue distribution at potential minima should await the introduction of our new method in Sec. 5.
Hereafter, we generically consider the case of the distribution which can be obtained from Eq. (2.22) by rescaling λ_i → √(2A) λ_i, with a = N/(N + 2). It can also represent (2.20), with a = 1 − 2AE/[N(AE − B²)] and the same rescaling, after a shift λ_i → λ_i + λ_*. The latter case will be discussed in detail in Sec. 4.2. In both cases, we expect 1 − a = O(N^{-1}).

Conditional probability distribution
The probability distribution for the Hessian eigenvalues can be written in the form (3.2), where A is a normalization constant and the logarithmic term comes from the Jacobian factor in Eq. (2.17). The conditional probability P(λ_cr) that all eigenvalues are greater than some value λ_cr can then be calculated as a ratio of partition functions. We shall further rescale the eigenvalues as µ = λ/√N and introduce a density function ρ(µ). In terms of this density function we can rewrite H(λ), and the partition function Z(λ_cr) can also be rewritten in terms of ρ(µ). The Jacobian involved in changing variables from µ_i to ρ(µ) was calculated in Refs. [31,32] by saddle point approximation in the large-N limit, with A and A' normalization constants. We thus obtain an expression for Z(λ_cr) in terms of the functionals Σ_0[ρ] and Σ_1[ρ].
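For small N, the conditional probability P(λ_cr) can also be estimated by brute-force Monte Carlo, which provides a simple check of the saddle point treatment. The sketch below (our own illustration; the sample size is an arbitrary choice) does this for the pure GOE case (a = 0) with λ_cr = 0; the rapid, roughly exp(−const × N²), decay of the probability is visible already at N = 2-4.

```python
import numpy as np

rng = np.random.default_rng(2)
samples = 40000

def prob_all_positive(N):
    """Monte Carlo fraction of N x N GOE matrices with all eigenvalues > 0."""
    G = rng.normal(size=(samples, N, N))
    H = (G + G.transpose(0, 2, 1)) / np.sqrt(2.0)   # GOE ensemble
    smallest = np.linalg.eigvalsh(H)[:, 0]          # smallest eigenvalue
    return float(np.mean(smallest > 0.0))

fracs = {N: prob_all_positive(N) for N in (2, 3, 4)}
print(fracs)
```

Direct sampling of course becomes hopeless for large N, which is exactly why the saddle point evaluation of Z(λ_cr) is needed.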

Eigenvalue density function
We shall now use the saddle point approximation to find the most probable density function ρ(µ). We first note that the leading term Σ_0[ρ] in Eq. (3.13) is independent of λ̄. We therefore include the subleading contribution due to the first term in Σ_1[ρ], which breaks this degeneracy. The second term in Σ_1[ρ] is also independent of λ̄, and we shall neglect it here. This term only gives O(N^{-7/4}) corrections to ∆Σ (≡ Σ(µ_cr) − Σ(−∞)), as we will discuss later in this section.
Varying the functional Σ[ρ] with respect to ρ(µ), we determine the critical distribution ρ_c(µ) at the saddle point. Taking a derivative with respect to µ, we obtain an integral equation, where P indicates the Cauchy principal part. Shifting µ as µ = x + µ_cr, this can be rewritten in terms of a function L(x_0) defined below. The integral in the last equation is just the average eigenvalue µ̄; hence this equation can also be written in terms of µ̄. The solution of the integral equation (3.16) has been found in Refs. [31,32], and we quote the result here. In a special case this reduces to the celebrated Wigner semi-circle distribution. It has support at −√2 < µ < √2, and thus the requirement µ > µ_cr with µ_cr = −√2 does not impose any constraint on ρ(µ). For the same reason the Wigner distribution is unperturbed when µ_cr < −√2. The Gaussian orthogonal ensemble (GOE), which was studied in Refs. [31,32], corresponds to a = 0; then Eq. (3.24) gives x_0 = 0 for µ_cr = 0. The eigenvalue distribution for this case is shown by a green curve in Fig. 1.
The integral in Eq. (3.17) can be done by using the explicit forms of ρ_c and L(x_0); the result involves the function F(x). We can numerically solve Eq. (3.24) for x_0 for given values of a and µ_cr. One is often interested in the case µ_cr = 0, so that all eigenvalues are positive. In Fig. 2 we show x_0 as a function of (1 − a) for µ_cr = 0. We see that −x_0 asymptotes to √2 (as indicated by the red dashed line) for 1 − a → 0. To clarify the asymptotic behavior, we Taylor expand the function F(x) about x = −√2. Then, to the leading order in (1 − a), Eq. (3.24) can be solved, where we have used that F(−√2) = 0; for µ_cr = 0 this determines x_0 explicitly. It is interesting to note that x_0 = −√2 for 1 − a = 0 and any value of µ_cr. In this case Eq. (3.18) gives µ̄ = √2 + µ_cr. This means that the distribution ρ_c(µ) is just given by a shifted Wigner semi-circle (3.23) when we neglect the next-to-leading order correction Σ_1[ρ] in the large-N limit [25]. Deviations from the Wigner semi-circle come from the next-to-leading order effect.
In the left panel of Fig. 3, we plot the eigenvalue distribution ρ_c(µ) with µ_cr = −√2, −√2/2, 0, √2/2, √2 for the case of a = N/(N + 2) (which corresponds to the Hessian distribution (2.22)) and N = 100. We also plot the distribution for µ_cr = 0 as an orange curve in Fig. 1, to compare it with the GOE distribution (plotted as a green curve). We see that the GOE distribution is much more concentrated near the origin, reflecting the high cost of a nonzero average eigenvalue µ̄ in that case.
The Hessian distributions in Fig. 3 look like the Wigner semi-circle with an overall shift and a slight modification at the left edge. From Eqs. (3.18) and (3.29), the shift µ̄ can be estimated for µ_cr = 0 in the limit (1 − a) ≪ 1. The form of the distribution near the left edge, where it significantly deviates from the Wigner semi-circle, can be found from Eqs. (3.19) and (3.29), and the number of eigenvalues in this range can be estimated accordingly. The partition function Z(µ_cr) can be approximated by its value at the saddle point. Using Eq. (3.14), Σ(µ_cr) can then be evaluated, with the Lagrange multiplier C determined from Eq. (3.14) by setting µ = µ_cr. These integrals can be done explicitly by using Eq. (3.19); the result follows with the help of Eq. (3.17).
Since we are interested in the case (1 − a) ≪ 1, we can simplify Eq. (3.36) by using Eq. (3.28). The probability for all eigenvalues to be greater than µ_cr is then given by the saddle point expression, with Σ(−∞) = (3 + ln 4)/8. This probability can be found numerically using Eqs. (3.17) and (3.36). In the right panel of Fig. 3, we plot ∆Σ(0) as a function of (1 − a). We also plot the asymptote ∆Σ(0) = (1 − a) in the limit (1 − a) → 0 as a dashed red line; in the case of the Hessian distribution, a = N/(N + 2) and hence ∆Σ(0) ≈ 2/N in this limit. Now we can justify the claim that the second term of Σ_1 in Eq. (3.12) gives a negligible contribution to ∆Σ in the large-N limit. As we already mentioned, this term is independent of the average eigenvalue µ̄. Furthermore, from Eq. (3.32) we see that the change in Σ_1 due to the modified distribution near µ = 0 is of the order N^{1/4}/N. It follows that the contribution of the second term of Σ_1 to ∆Σ is O(N^{-7/4}), which is much smaller than the other terms in the large-N limit. We checked that it is indeed O(N^{-7/4}) by numerically calculating N^{-1}∫dµ ρ ln ρ as a function of N, with ρ given by Eq. (3.19). Therefore, our result for ∆Σ is accurate with an uncertainty of O(N^{-7/4}). Additional support for neglecting the second term of Σ_1 comes from the fact that the distribution (3.19) obtained without this term agrees very well with the result of the dynamical method, which takes all terms into account (see Sec. 5).
Probability of µ > µ_cr at stationary points of the potential

We shall now calculate the probability for all Hessian eigenvalues to be greater than a given value µ_cr at stationary points, where ∂_i U = 0. For µ_cr = 0, this is the same as the probability for a stationary point to be a local minimum. We insert a delta function in Eq. (3.5) for the partition function to enforce the condition ∂_i U = 0 in the landscape. The Jacobian |det ζ| (= ∏_i |λ_i|) gives an additional factor in the probability distribution of the Hessian; hence Eq. (3.3) should be replaced accordingly. As we did in Sec. 3.1, we rescale the eigenvalues as µ = λ/√N and consider a density function of µ, in terms of which we can rewrite H(λ). The partition function Z(λ_cr) can also be expressed in terms of ρ(µ), as in the previous section.
We can absorb the third term in (4.4) into the first term in the following way. To the leading order in N, the distribution ρ_c(µ) at the saddle point is the shifted Wigner semi-circle ρ_W(µ; µ̄). Since Σ_1[ρ] is a next-to-leading order term, we can approximate it as Σ_1[ρ_W(µ; µ̄)]. Then we can calculate the third term in (4.4) explicitly and rewrite Σ_1 accordingly in the large-N limit. Therefore, we can use the results of the previous subsection, (3.24) and (3.36), with a replaced by a + 1/N.

Probability of minima and the smallest eigenvalue
An important characteristic of a landscape is the density of potential minima in the field space. If the correlation length of the landscape is Λ, the density of stationary points is set by the number of such points per correlation volume. The density of minima can be obtained by multiplying this by the probability P_min for a stationary point to be a local minimum, which is the same as the probability for all Hessian eigenvalues at that point to be positive (4.10). The probability P_min has been studied earlier in the literature. Bray and Dean used the saddle point approximation in the large-N limit and found the asymptotic value N∆Σ(0) = 1 [25]. Easther et al. [13] noted that P_min can significantly deviate from this value for moderately large values of N. They calculated the probability using efficient numerical codes for several values of N up to 100. Their results, shown by grey dots in the left panel of Fig. 4, are in very good agreement with ours. However, their fitting formula is not consistent with ours at larger values of N: it is clear from the left panel of Fig. 4 that the asymptotic behavior cannot be correctly obtained by extrapolation from N ≤ 100. For some applications it is important to estimate the smallest eigenvalue of the Hessian at potential minima (see, e.g., Sec. 6.2). The probability that this eigenvalue is greater than a given value µ_min can be found from the same exponential formula, where now ∆Σ = Σ(µ_min) − Σ(0). For small values of µ_min we can approximate this by a linear expansion, and the probability distribution for the smallest eigenvalue can then be estimated. We plot N d∆Σ(µ_cr)/dµ_cr at µ_cr = 0 for the case a = 1 − 1/N in the right panel of Fig. 4. We see that it is ∼ 1 for N ∼ 100 and asymptotes to √2 as N → ∞, as shown by the red dashed line, in agreement with the analytic formula (3.37).
The typical magnitude of the smallest eigenvalue can now be estimated as in Eq. (4.14).

Probability of µ > µ_cr for a fixed U

We shall now use the distribution (2.20) to calculate the probability for all Hessian eigenvalues to be greater than λ_cr at a given value of U. Eq. (2.20) can be rewritten in terms of λ_* = (B/E)(U − Ū), disregarding terms that are independent of λ_i, and we define shifted and rescaled eigenvalues λ̃_i and the parameter a accordingly. The resulting probabilities are shown in Fig. 5; the plots asymptote to the analytic formula (3.37) or (4.19) in the limit (1 − a) → 0, which is plotted as a red dashed line.
To calculate the probability for a stationary point at a given value of U to have all Hessian eigenvalues greater than λ_cr, we need to add a term −Σ_i ln|λ_i| in Eq. (3.3). Neglecting a constant term, the additional term can be written as −Σ_i ln|λ̃_i + λ_*|. So we should replace λ_i → λ̃_i and −Σ_i ln|λ_i| → −Σ_i ln|λ̃_i + λ_*| in Eq. (4.2). By using the argument around Eq. (4.5), we can replace the extra term, to the leading order in the large-N limit, by an integral over the density, where µ_* ≡ λ_*/√(2AN) and ρ̃(µ̃) = ρ(µ). As a result, we can rewrite Σ_1 accordingly.

If we redefine µ̄ by shifting µ̄ → µ̄ + µ_*/[N(1 − a) − 1], this is the same as Eq. (4.8) and we can use the same calculation and result. Therefore, the term −Σ_i ln|λ̃_i + λ_*|, which comes from the Jacobian for the stationary condition, gives two corrections: a → a + 1/N (as we discussed below Eq. (4.8)) and µ̄ → µ̄ + µ_*/[N(1 − a) − 1]. In summary, the probability for a stationary point at a given value of U to have all Hessian eigenvalues greater than λ_cr can be found from Eq. (4.19) with the above replacements. Bray and Dean [25] performed a similar calculation; their λ̄ can be identified with our (λ_cr + 2√(AN)) in the leading order approximation. We note, however, that Bray and Dean calculated ρ(λ, λ̄) only in the leading order in 1/N, which becomes rather inaccurate near the left edge of the distribution. Hence their result is not accurate for small values of α. We do not have this problem in our calculation, so our result can be used for arbitrary values of λ_cr.
We note also that even though the approximations we used here are sufficient for calculating ∆Σ(λ cr ), they are not accurate enough to find the distribution ρ c (µ) at small values of µ, because the distribution strongly deviates from the Wigner semi-circle near µ = 0. In Sec. 5 we shall develop a new method which is sufficiently accurate in that regime.

Dynamical method
As we already noted, the approximations used in the preceding sections are not sufficiently accurate for finding the eigenvalue distribution at small values of µ. In this section we develop a numerical method, using a version of Dyson Brownian motion [28], to derive the distribution of Hessian eigenvalues dynamically. This method accounts for all terms in Σ_0 and Σ_1 without any approximations, apart from the limitations of numerical resolution and computer runtime. We shall first apply the Dyson Brownian motion method to the GOE of random matrix theory.

Dyson Brownian motion and Fokker-Planck equation
The Dyson Brownian motion model was introduced in Ref. [28] to describe the stochastic evolution of random matrices. The eigenvalues λ_i of a random matrix are assumed to undergo a stochastic process described by a Langevin equation, λ̇_i = −∂W/∂λ_i + ξ_i(t): the eigenvalues are subject to a potential force −∂W/∂λ_i and a stochastic force ξ_i. Note that the potential W is equal to the 'Hamiltonian' (3.3). The probability density P(λ, t) satisfies the Fokker-Planck equation, which can be obtained by taking the ensemble average over ξ_i [28], with T = 1 and the potential force E_i = −∂W/∂λ_i. The equilibrium solution of the Fokker-Planck equation is P ∝ e^{−W/T}; thus we can interpret W and T as the potential and the temperature, respectively. We note that the distribution (5.7) is the same as the eigenvalue distribution (3.2), (3.3) with a = 0 for the GOE ensemble.
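The Langevin dynamics can be simulated directly. The sketch below is our own illustration: it evolves rescaled eigenvalues with drift −µ_i + (1/N)Σ_{j≠i} 1/(µ_i − µ_j) and temperature T = 1/N, which corresponds to the GOE case (a = 0) in the scaling used in this paper; the small-gap regularization ε and all numerical parameters are our choices. At equilibrium the spectrum should fill a semi-circle of radius √2, for which the second moment ⟨µ²⟩ = 1/2 (our own estimate for this scaling).

```python
import numpy as np

rng = np.random.default_rng(3)
N = 16
T = 1.0 / N                 # temperature
dt, steps = 2e-4, 50000     # Euler-Maruyama step and number of steps
eps = 1e-3                  # regularizes 1/(mu_i - mu_j) at tiny gaps

mu = np.linspace(-1.0, 1.0, N)   # initial eigenvalue positions
msq_hist = []
for step in range(steps):
    d = mu[:, None] - mu[None, :]         # pairwise separations
    rep = d / (d * d + eps * eps)         # regularized Coulomb repulsion
    np.fill_diagonal(rep, 0.0)
    drift = -mu + rep.sum(axis=1) / N     # -dW/dmu_i in rescaled variables
    mu = mu + drift * dt + np.sqrt(2 * T * dt) * rng.normal(size=N)
    if step >= steps - 10000:             # accumulate late-time statistics
        msq_hist.append(np.mean(mu * mu))

msq = float(np.mean(msq_hist))   # semicircle of radius sqrt(2): <mu^2> = 1/2
print(msq)
```

The time-averaged second moment settles near the semi-circle value 1/2, up to O(1/N) finite-size corrections.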
Since we are interested not in the individual variables λ_i but in the distribution of eigenvalues, we define a time-dependent probability density ρ(λ, t). We can easily check that ∫dλ ρ = 1. We also rescale the variable as µ = λ/√N to compare the results with those in Sec. 3. We set the normalization condition ∫dµ ρ(µ, t) = 1, so we rescale the density as ρ(λ, t) → ρ(µ, t)/√N. It then obeys an equation of the same form, with T = 1/N playing the role of the temperature. The potential force E is obtained by replacing the summation over j with the integral ∫dµ' ρ(µ') in the second term.
In what follows we shall use the Fokker-Planck equation for ρ(µ, t), without referring to the Langevin equation.

Dynamical evolution
We are interested in the equilibrium distribution under the condition that all eigenvalues are positive. This can be realized by evolving ρ(µ, t) via Eq. (5.9) for a sufficiently long time with a reflecting boundary condition at µ = 0, j(µ = 0, t) = 0. (5.12) The equilibrium solution is stationary, ∂ρ/∂t = 0, and it follows from Eq. (5.9) that j(µ) = const. The boundary condition (5.12) then requires that j(µ) = 0 everywhere. This condition is similar to the one that we used to determine the saddle point solution (see Eq. (3.15) with a = 0). The first term in Eq. (5.14) comes from the term ∫dµ ρ ln[ρ] in Σ_1[ρ], which we neglected in Sec. 3. Thus Eq. (5.9) provides a useful check of the results of the saddle point approximation.
We solve the Fokker-Planck equation numerically by discretizing the differential equation. Numerical methods for solving the Fokker-Planck equation with the boundary condition (5.12) have been extensively studied [33][34][35]. The grid size ∆µ, the volume of µ-space L_µ, and the step size ∆t are taken to be 0.02, 5, and 0.005, respectively. We checked that our results are not affected by varying these parameters. The results are presented in Fig. 6, where we take N = 1/T = 100. The initial condition is taken to be a Gaussian function with a peak at µ = 2 and a width of 0.5, as indicated by a blue line. We see that the evolution converges to a stationary distribution, which agrees very well with the semi-analytic solution of Sec. 3, with only a slight deviation at the right edge.
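A minimal version of such a solver can be sketched as follows. This is our own illustrative implementation, not the code used for the figures: it evolves the GOE-case density (drift E(µ) = −µ plus the principal-value integral of ρ/(µ − µ')) with a flux-conservative scheme and a reflecting wall at µ = 0; the grid, box size, and time step are chosen for a quick run and differ from the values quoted above, and the principal-value integral is approximated by simply skipping the diagonal grid point.

```python
import numpy as np

N = 100
T = 1.0 / N                      # temperature
n, L = 300, 3.0                  # grid points, box size in mu
dmu = L / n
mu = (np.arange(n) + 0.5) * dmu  # cell centers on (0, L)
dt, steps = 1e-3, 12000

# initial condition: Gaussian bump at mu = 2 with width 0.5 (as in the text)
rho = np.exp(-((mu - 2.0) ** 2) / (2 * 0.5**2))
rho /= rho.sum() * dmu

# principal-value kernel 1/(mu_i - mu_j), diagonal excluded
K = 1.0 / (mu[:, None] - mu[None, :] + np.eye(n))
np.fill_diagonal(K, 0.0)

for _ in range(steps):
    E = -mu + (K @ rho) * dmu                      # mean-field drift force
    Ef = 0.5 * (E[:-1] + E[1:])                    # force at cell faces
    rf = 0.5 * (rho[:-1] + rho[1:])                # density at cell faces
    j = Ef * rf - T * (rho[1:] - rho[:-1]) / dmu   # probability flux
    j = np.concatenate(([0.0], j, [0.0]))          # reflecting walls: j = 0
    rho = rho - dt * (j[1:] - j[:-1]) / dmu        # conservative update

mass = rho.sum() * dmu   # conserved exactly by the flux form
mean = (mu * rho).sum() * dmu
print(mass, mean)
```

The bump drifts toward the wall and relaxes to a stationary density supported near the origin, with the total probability conserved to machine precision by construction.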
Here we comment on this slight deviation. It comes from the fact that we neglected the second term of Σ_1[ρ] in Eq. (3.12) in calculating the semi-analytic solution, while no such approximation is made in the dynamical method. To check that this deviation is physical and consistent with the results in the literature, we calculated the distribution for the case µ_cr < −√2 (i.e., without the boundary) using the dynamical method. We found that the tails of the distribution at the right and left edges agree very well with the well-known Tracy-Widom distribution [36,37]. Therefore, the smooth tail of the distribution at the right edge in Fig. 6 can be attributed to the spread of the largest eigenvalues à la Tracy-Widom beyond the edge of the semi-analytic distribution.

Hessian eigenvalue distribution in RGF model
The same method can be applied to find the Hessian eigenvalue distribution in a random Gaussian field, except that in this case we should use a = N/(N + 2) in Eq. (3.3); the potential (5.3) is modified accordingly. The equilibrium distribution ρ_c(µ) is again equivalent to the saddle point solution of (3.15) with the term coming from ∫dµ ρ ln[ρ] included. We find this distribution by evolving ρ(µ, t) via the Fokker-Planck equation.
We solve the Fokker-Planck equation numerically and show the result in Fig. 7. We take N = 1/T = 100 and a = N/(N + 2). The initial condition and other parameters are the same as we used in Sec. 5.2 for the case of the GOE. Once again, we see that the endpoint of the evolution is very close to the analytic solution. This justifies the approximation of neglecting the term ∫dµ ρ ln[ρ] in Σ_1[ρ] that we made in Sec. 4.

Hessian eigenvalue distribution at stationary points of the potential
We finally consider the Hessian eigenvalue distribution at stationary points, where ∂_i U = 0, under the condition that all eigenvalues are positive. We found in Sec. 4 that in this case Σ_1[ρ] has an additional term, −∫ρ(µ) ln|µ|. This adds an extra term 1/(Nµ) to the potential force (5.16) in the Fokker-Planck equation. We solve the equation numerically using the same parameter values and initial condition as before. The results are presented in Fig. 8. The equilibrium distribution is shown by the solid shaded red line. For comparison we also show, by a dashed orange line, the semi-analytic distribution calculated in Sec. 4 for Hessian eigenvalues at generic points (not necessarily potential minima). We see that the two distributions are very close to one another, except near µ = 0. The semi-analytic solution diverges as µ^{-1/2}, while our equilibrium distribution drops sharply to zero. This is the effect of the strong repulsive force due to the last term in Eq. (5.17). To illustrate the behavior of the distribution at small values of µ, we plot ρ_c(µ) near µ = 0 for the cases of N = 20 (blue line) and 100 (yellow line) in Fig. 9. For µ ≲ 1/N, we can approximate E(µ) ≈ 1/(Nµ), and Eq. (5.13) then gives ρ_c(µ) ≈ Cµ (µ ≲ 1/N), (5.18) with C = const. This is in agreement with the plots in Fig. 9. We find from our numerical results that C is about 0.5N. It should be noted, however, that our method may not be accurate in the range 0 < µ ≲ 1/N. The average number of eigenvalues in this range is N ∫₀^{1/N} ρ_c(µ)dµ ∼ 1, and thus replacing discrete eigenvalues by a continuous distribution is not justified. One can expect nevertheless that this approximation gives correct order-of-magnitude results near the limit of its applicability, µ ∼ 1/N. We can then use it to estimate the typical magnitude of the smallest eigenvalue of the Hessian, µ_min, from the condition N ∫₀^{µ_min} ρ_c(µ)dµ ∼ 1. (5.19) The plots in Fig. 9 suggest that ρ_c(µ) ≈ 0.3 for µ ≳ 1/N. Hence we find µ_min ∼ 1/(0.3N) ∼ 3/N. (5.20) The same estimate is obtained by numerically integrating the distribution in Eq. (5.19).
It is in agreement with a more accurate estimate (4.14) in Sec. 4.1.

Some applications in cosmology
In this section we consider some applications of our results to landscape models.

Vacuum stability
Vacuum stability in landscape models has been studied numerically in Refs. [12,39]. A simple analytic treatment was given by Dine and Paban in Ref. [40]. They assume (i) that the most probable decay channels are typically in the directions of the smallest Hessian eigenvalues and (ii) that the vacuum decay rate is controlled mainly by the quadratic and cubic terms in the expansion of U(φ) about the potential minimum. Then the tunneling (bounce) action in the direction of the Hessian eigenvalue λ_i can be estimated as B_i ∼ Kλ_i/γ², where γ ∼ U_0/Λ³ is the typical coefficient of a cubic expansion term and K ∼ 50 is a numerical coefficient. The highest rate corresponds to the smallest Hessian eigenvalue λ_min. Dine and Paban assume that λ_min ∼ (1/N)(U_0/Λ²) and find B ∼ (K/N)(Λ⁴/U_0). With U_0/Λ⁴ ∼ 0.1 − 1 and N ∼ 100, this can be rather small, B ∼ 1, suggesting that most of the vacua in the landscape are very unstable. A more accurate estimate of λ_min can be obtained from Eq. (4.14) or Eq. (5.20): λ_min ∼ (1/√N)(U_0/Λ²). This corresponds to the tunneling action B ∼ (K/√N)(Λ⁴/U_0), which is √N times larger than the estimate of Ref. [40], so the vacuum stability is significantly enhanced. (We note that if we used Eq. (5.19) with the GOE eigenvalue distribution (Eq. (3.19) with x_0 = 0), we would have µ_min ∼ 1/N², which would suggest a much lower stability.)
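The comparison of the two estimates can be made explicit with a few lines of arithmetic. The sketch below assumes the bounce action in the form B ∼ Kλ_min/γ² (so that λ_min ∼ (1/N)(U_0/Λ²) reproduces Dine and Paban's B ∼ (K/N)(Λ⁴/U_0)); the specific parameter values are illustrative choices from the ranges quoted above.

```python
import math

K = 50.0   # numerical coefficient in the bounce action
N = 100    # dimensionality of the landscape
for U0_over_Lam4 in (0.1, 1.0):
    # B ~ K*lambda_min/gamma^2 with gamma ~ U0/Lam^3 reduces to the forms below
    B_DP = (K / N) / U0_over_Lam4              # lambda_min ~ (1/N) U0/Lam^2
    B_new = (K / math.sqrt(N)) / U0_over_Lam4  # lambda_min ~ (1/sqrt(N)) U0/Lam^2
    print(U0_over_Lam4, B_DP, B_new)

ratio = B_new / B_DP   # enhancement factor sqrt(N), independent of U0/Lam^4
```

For N = 100 the bounce action grows by a factor √N = 10 relative to the Dine-Paban estimate, for any value of U_0/Λ⁴.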

Multi-field inflation
In this section we assume that the landscape is small-field, which means that the correlation length satisfies Λ ≪ 1 in Planck units. Slow-roll inflation in such a landscape occurs in rare flat regions, where the first and second derivatives of the potential in some direction are much smaller than their typical values. It was argued in Refs. [14][15][16][18] that inflation in such regions tends to be single-field, with the inflaton field rolling in a nearly straight line along the flat direction. Other fields (corresponding to orthogonal directions) can be excited, and significant deviations from a straight trajectory can occur, only if some of the fields have masses smaller than the Hubble parameter during inflation, m ≲ √U_0 in Planck units. This is much smaller than the typical mass m_0 ∼ √U_0/Λ. However, with a large number of fields N, some of the masses may be well below m_0 and may get as small as √U_0. We shall now investigate this possibility.
Flat inflationary tracks are likely to be found in the vicinity of inflection points, where one of the Hessian eigenvalues vanishes (this corresponds to the flat direction), the rest of the eigenvalues are positive, and the potential gradient vanishes in the directions orthogonal to the flat direction. Let us choose the φ₁ axis in the flat direction. Then we have λ₁ = 0 and λ_i > 0, ∂U/∂φ_i = 0 for i = 2, ..., N. The mass spectrum in the directions orthogonal to the flat direction is determined by the Hessian eigenvalues, m²_i = λ_i (i > 1). We now want to estimate the smallest of these eigenvalues.
The probability distribution for the Hessian eigenvalues λ = (λ₂, λ₃, ..., λ_N) at inflection points can be derived along the same lines as Eq. (4.2); the result is Eq. (6.6). It is similar to Eq. (4.2), but with a few differences. First, the coefficient of the last term is not unity but 2, and an additional −ln|λ_i| term comes from the last term in parentheses of Eq. (4.2) with i = 1 or j = 1. Second, the number of eigenvalues is N − 1.
We can now use the method of Sec. 4.1 to find the probability distribution for the second smallest Hessian eigenvalue µ₂ at inflection points (the smallest being µ₁ = 0). We note that the number of eigenvalues is now N − 1 and rewrite the coefficient of the second term in the parentheses of (6.6) as a/N = a′/(N − 1), where a′ = a(N − 1)/N, so this term takes the form (6.7). As we explained in Sec. 4, the last term of Eq. (6.6) can be absorbed into a′ by the replacement a′ → a′ + 2/N, where the factor of 2 comes from the coefficient of the last term. As a result, we should replace a with a(N − 1)/N + 2/N in the calculation of Sec. 3.1.
Since a(N − 1)/N + 2/N ≈ a + 1/N, the result in the large-N limit should be the same as the one obtained in Sec. 4. Therefore the distribution of eigenvalues at an inflection point is given by ρ_c(µ) with 1 − a ∼ 1/N and µ_cr = 0. The probability for all eigenvalues to be positive is given by exp[−N²∆Σ], and the typical value of µ₂ can be estimated as in Eq. (4.14). The asymptotic value of N dΣ(µ_cr)/dµ_cr|_{µ_cr=0} is √2 in the limit N → ∞.
The distribution of eigenvalues at inflection points can also be found using the dynamical method of Sec. 5. The Fokker-Planck equation has the same form as before, but with slightly different parameters and coefficients: the temperature T in Eq. (5.10) is given by 1/(N − 1), and the potential force E(µ, t) is modified. The change in the last term of E(µ, t) modifies the form of the distribution at µ → 0. In this limit, the Fokker-Planck equation reduces to dρ/dµ = 2ρ/µ, with the solution ρ_c(µ) ≈ Cµ² (µ ≲ 1/N), (6.10) where C = const. The distribution obtained by numerically evolving the Fokker-Planck equation is shown in Fig. 10; from our numerical results we find that the constant C in Eq. (6.10) is ∼ 0.1N². As in Sec. 5.3, the average number of eigenvalues in the range 0 < µ ≲ 1/N is O(1), so we cannot expect our distribution to be accurate in this range.
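The µ → 0 limit quoted above can be checked symbolically. This is only a consistency check of the reduced equation dρ/dµ = 2ρ/µ, not of the full Fokker-Planck dynamics:

```python
import sympy as sp

mu = sp.symbols('mu', positive=True)
rho = sp.Function('rho')

# Reduced Fokker-Planck equation near mu -> 0: d(rho)/d(mu) = 2 rho / mu
sol = sp.dsolve(sp.Eq(rho(mu).diff(mu), 2 * rho(mu) / mu), rho(mu))
print(sol)   # rho(mu) = C1 * mu**2, i.e. rho_c ~ C mu^2 as in Eq. (6.10)
```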
As before, the second smallest eigenvalue of the Hessian, µ₂, can also be estimated from Eq. (6.11), which gives µ₂ ∼ 1/√N, (6.12) in agreement with (6.8).
The rescaled eigenvalue (6.12) corresponds to λ₂ ∼ U₀/(√N Λ²). If this is smaller than about H² ∼ U₀, then the associated field φ₂ will undergo significant fluctuations and may play a dynamical role during inflation. This is unlikely if √N Λ² ≲ 1. Thus we conclude that multi-field inflation is not likely for Λ ≲ N^{-1/4} ∼ 0.3 (for N = 100).
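This criterion can be illustrated with a rough numerical check. The function names and parameter values below are ours, and all relations are order-of-magnitude only:

```python
import math

def lightest_orthogonal_mass_sq(U0, Lam, N):
    """lambda_2 ~ U0 / (sqrt(N) * Lam^2): smallest nonzero Hessian
    eigenvalue at a typical inflection point (order of magnitude)."""
    return U0 / (math.sqrt(N) * Lam**2)

def multifield_possible(U0, Lam, N):
    # Orthogonal fields matter only if lambda_2 < H^2 ~ U0,
    # i.e. if sqrt(N) * Lam^2 > 1, i.e. Lam > N**(-1/4).
    return lightest_orthogonal_mass_sq(U0, Lam, N) < U0

N = 100
print(N ** -0.25)                          # ~ 0.316: critical Lambda
print(multifield_possible(1e-12, 0.5, N))  # True  (Lam > 0.316)
print(multifield_possible(1e-12, 0.1, N))  # False (Lam < 0.316)
```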

Conclusions
The main focus of this paper was to investigate the Hessian eigenvalue distribution at local minima of a random Gaussian landscape. Bray and Dean used the saddle point approximation to calculate this distribution and the density of local minima in the leading order of the large N expansion. We found, however, that the next-to-leading order corrections modify the distribution at the lower edge of the domain. This is particularly important for the smallest Hessian eigenvalues, which we need to estimate for assessing the vacuum stability and the multi-field nature of inflation in the landscape.
We extended the saddle point method to account for the sub-leading contributions in 1/N and used it to calculate the density of local minima in the landscape. This method can also be used to determine the Hessian eigenvalue distribution at a generic point in the landscape, but it fails to give the distribution at potential minima with the desired accuracy. For that we had to develop a completely new approach.
In our new approach, the Hessian eigenvalue distribution is calculated as the asymptotic endpoint of a stochastic process, called Dyson Brownian motion. The distribution is evolved via a suitable Fokker-Planck equation, and the equilibrium distribution is obtained after a sufficiently large number of iterations. We have verified that this method agrees with the saddle point method in cases where the latter method is applicable.
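The idea of the stochastic approach can be illustrated with a minimal Langevin simulation of Dyson Brownian motion for the unconstrained (GOE) case, whose equilibrium is the Wigner semicircle. The parameters (N, time step, number of steps, drift clipping) are illustrative choices, not those used in this paper:

```python
import numpy as np

rng = np.random.default_rng(0)
N, dt, steps = 60, 1e-4, 20000
beta = 1.0                          # GOE symmetry class
lam = rng.normal(size=N)            # initial eigenvalue configuration

for _ in range(steps):
    diff = lam[:, None] - lam[None, :]
    np.fill_diagonal(diff, np.inf)            # exclude self-interaction
    repulsion = (1.0 / diff).sum(axis=1) / N  # pairwise eigenvalue repulsion
    # Confining force -lam/2 plus repulsion; clip to tame close encounters
    drift = np.clip(-0.5 * lam + repulsion, -5.0, 5.0)
    lam += drift * dt + np.sqrt(2.0 * dt / (beta * N)) * rng.normal(size=N)

# At equilibrium the empirical density approximates the semicircle on [-2, 2]
print(round(float(lam.min()), 2), round(float(lam.max()), 2))
```

The stationary measure of this process is ∝ Π|λ_i − λ_j|^β exp(−βN Σλ²_i/4), i.e. the GOE joint eigenvalue density for β = 1; conditioning on all eigenvalues staying positive, as needed for minima, amounts to adding a wall (an extra repulsive term) at λ = 0.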
We discussed some implications of our results for vacuum stability and slow-roll inflation in the landscape. We found that metastable vacua in a Gaussian landscape are more stable than a naive estimate would suggest. Slow-roll inflation at inflection points in the landscape is likely to be single-field when the smallest nonzero Hessian eigenvalue at a typical inflection point is greater than the energy scale of the landscape U₀. We found that this condition is satisfied if the correlation length in the landscape is Λ ≲ N^{-1/4}. For N ∼ 100, this means that inflation is essentially single-field in a landscape with Λ ≲ 0.3 in Planck units.
In Appendix A we discussed the relation between a random Gaussian landscape and an axionic landscape. We specified the conditions under which an axionic landscape can be approximated by an isotropic random Gaussian field. We expect that our results should be applicable to such axion models.
We note finally that the problem of Hessian eigenvalue distribution in a random field arises in many areas of condensed matter physics (see, e.g., [41][42][43] and references therein). Our methods and results may be useful in these areas as well.
with f_{a,−n} = f*_{an}. In Ref. [24] the authors assume f_a(X) = 1 − cos X, in which case f_{a0} = 1, f_{a,±1} = 1/2, and f_{an} = 0 for all |n| > 1. In what follows, we consider a generic periodic function f_a(X). For example, the strong dynamics of QCD results in a complicated periodic function for the QCD axion [46] (see also Ref. [47] for recent work).
String theory predicts the existence of a large number of axions, N ≳ 100 (e.g., [45]). In Ref. [24] it was shown that interesting alignment effects can arise in the axionic landscape if the number of terms in the potential (A.1) satisfies N < P < 2N. It was also noted that for P ≫ N the potential approaches that of a random Gaussian field, as a consequence of the central limit theorem. The statistical properties of this field depend on the choice of the distribution P(q).
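The central-limit argument can be illustrated with a small Monte Carlo experiment: sampling a sum of P ≫ N cosine terms with random integer charges and random phases, and checking that the distribution of potential values approaches a Gaussian. The numbers of axions and terms and the charge range below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
N_ax, P, M = 5, 2000, 4000       # axions, cosine terms (P >> N_ax), sample points

q = rng.integers(-10, 11, size=(P, N_ax))      # integer charge vectors q_ai
delta = rng.uniform(0.0, 2.0 * np.pi, size=P)  # random phases delta_a

# Random points theta in the fundamental domain [0, 2*pi)^N_ax
theta = rng.uniform(0.0, 2.0 * np.pi, size=(M, N_ax))
# U(theta) = sum_a [1 - cos(q_a . theta + delta_a)], normalized by sqrt(P)
U = np.sum(1.0 - np.cos(theta @ q.T + delta), axis=1) / np.sqrt(P)

z = (U - U.mean()) / U.std()
print(abs(float((z**3).mean())))   # skewness: -> 0 for P >> N_ax
print(float((z**4).mean()))        # kurtosis: -> 3, the Gaussian value
```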
The integers q_ai define a lattice in q-space with spacing ∆q = 1. We shall assume that the variance of the distribution P(q) satisfies q̄² ≫ 1, which means that the correlation length of U(θ) is small compared to the periodicity length 2π. Then the distribution P(q) can be approximated as continuous.
We will be interested in the two-point correlation function of the potential U(θ). After averaging over the random phases, this function should depend only on the difference θ₁ − θ₂. Then, without loss of generality, we can choose one of the points to be at θ = 0. Thus we consider

⟨U(θ)U(0)⟩ = Σ_{a,a′} Σ_{n,n′} Λ⁴_a Λ⁴_{a′} f_{an} f_{a′n′} ⟨e^{inX_a}⟩_q ⟨e^{inδ_a} e^{in′δ_{a′}}⟩_δ, (A.5)

where ⟨···⟩_α (α = q, δ) represents the ensemble average over the random variables α. We use the Gaussian distribution

P(q_a) ∝ exp(−q²_a/2q̄²). (A.11)

This depends only on q²_a = Σ_i q²_ai and is thus rotationally invariant in q-space. Note that Eq. (A.11) is the only factorized distribution, P(q_a) = Π_i P_i(q_ai), that has this property. We can expect the two-point function to also be rotationally invariant in θ-space. Indeed, starting from Eq. (A.8) and using q̄ ≫ 1, the sum over the lattice of q values can be approximated by an integral, so we obtain

⟨U(θ)U(0)⟩ = Σ_a Λ⁸_a Σ_n |f_an|² exp(−n²q̄²|θ|²/2). (A.14)

Note that if q̄ depends on the index a, q̄ in this equation should simply be replaced by q̄_a. Now let us consider the form of P(q) adopted in Ref. [24]: P_i(q_ai) = const for |q_ai| < q_m and P_i(q_ai) = 0 otherwise. In this case, ⟨e^{inX_a}⟩_q = Π_i sin(nq_m θ_i)/(nq_m θ_i), and hence

⟨U(θ)U(0)⟩ = Σ_a Λ⁸_a Σ_n |f_an|² Π_i [sin(nq_m θ_i)/(nq_m θ_i)].

Unlike Eq. (A.14), this correlation function is not rotationally invariant. The reason is that rotational invariance is violated by the probability distribution for q. It may be instructive to compare the correlators of the potential U(θ) and its derivatives for different choices of P(q). We find, in particular,

⟨ζ_ij(θ)ζ_kl(θ)⟩_{q,δ} = Σ_a Λ⁸_a ⟨q_ai q_aj q_ak q_al⟩_q ⟨(f″_a)²⟩_δ, (A.20)

where primes denote derivatives with respect to X. The gradient η_i = ∂U/∂θ_i is correlated with neither the potential nor the Hessian. Using the ensemble averages over q_ai,

⟨q_ai q_aj⟩_q = δ_ij q̄²_a,
⟨q_ai q_aj q_ak q_al⟩_q = q̄⁴_a (δ_ij δ_kl + δ_ik δ_jl + δ_il δ_jk − r_a δ_ij δ_jk δ_kl), (A.22)

we see that the resulting correlation functions have the same form as Eqs. (2.6)-(2.10), except for the additional term (−r δ_ij δ_jk δ_kl) in (A.22).
This additional term breaks the rotational invariance of the model and vanishes when the random variables q_ai have the rotationally invariant distribution (A.11). In Ref. [24], the authors assumed a flat distribution for q_ai as an example (which breaks rotational invariance) and obtained r = 6/5. The extra term in (A.22) also indicates a deviation from Gaussian statistics. However, it affects only the statistics of the diagonal components of the Hessian. There are only N diagonal components, compared with N(N − 1)/2 off-diagonal ones, so we can expect this term to be unimportant at large N. The same discussion applies to the correlators of higher-order derivatives. Thus we expect that a Gaussian random field gives a good approximation for this type of landscape in the limit N ≫ 1 and P ≫ N. Finally, we note that the cancellation of the coefficient of (Tr ζ)² (AE − B² = 0) occurs when f_a(X) = 1 − cos X, which was adopted in Ref. [24]. In this case, the Hessian distribution is just the GOE with a constant shift of the diagonal terms, as in Eq. (2.27). However, this is not a generic property of an axionic landscape with a generic choice of functions f_a(X).
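The values r = 6/5 for the flat distribution and r = 0 for the Gaussian distribution (A.11) follow from setting i = j = k = l in (A.22), which gives ⟨q⁴_i⟩/⟨q²_i⟩² = 3 − r, and can be checked by a quick Monte Carlo estimate (sample sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
M = 2_000_000

# Flat (per-component) distribution for the charges q_ai, as in Ref. [24]:
q_flat = rng.uniform(-1.0, 1.0, size=M)
r_flat = 3.0 - (q_flat**4).mean() / (q_flat**2).mean() ** 2
print(r_flat)      # -> 6/5: the anisotropic term survives

# Rotationally invariant Gaussian distribution, Eq. (A.11):
q_gauss = rng.normal(0.0, 1.0, size=M)
r_gauss = 3.0 - (q_gauss**4).mean() / (q_gauss**2).mean() ** 2
print(r_gauss)     # -> 0: the extra term vanishes
```

For the flat case the exact moments are ⟨q²⟩ = q²_m/3 and ⟨q⁴⟩ = q⁴_m/5, so r = 3 − 9/5 = 6/5, independently of q_m.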