Abstract
We prove an almost constant lower bound of the isoperimetric coefficient in the KLS conjecture. The lower bound has the dimension dependency \(d^{o_d(1)}\). When the dimension is large enough, our lower bound is tighter than the previous best bound which has the dimension dependency \(d^{1/4}\). Improving the current best lower bound of the isoperimetric coefficient in the KLS conjecture has many implications, including improvements of the current best bounds in Bourgain’s slicing conjecture and in the thin-shell conjecture, better concentration inequalities for Lipschitz functions of logconcave measures and better mixing time bounds for MCMC sampling algorithms on logconcave measures.
Introduction
Given a distribution, the isoperimetric coefficient of a subset is the ratio of the measure of the subset boundary to the minimum of the measures of the subset and its complement. Taking the minimum of such ratios over all subsets defines the isoperimetric coefficient of the distribution, also called the Cheeger isoperimetric coefficient of the distribution.
Kannan, Lovász and Simonovits (KLS) [12] conjecture that for any distribution that is logconcave, the Cheeger isoperimetric coefficient equals that achieved by halfspaces up to a universal constant factor. If the conjecture is true, the Cheeger isoperimetric coefficient can be determined by going through all the halfspaces instead of all subsets. For this reason, the KLS conjecture is also called the KLS hyperplane conjecture. To make this precise, we start by formally defining logconcave distributions and then we state the conjecture.
A probability density function \(p: \mathbb {R}^d\rightarrow \mathbb {R}\) is logconcave if its logarithm is concave, i.e., for any \(x, y \in \mathbb {R}^{d}\) and for any \(\lambda \in [0, 1]\),
$$\begin{aligned} p\left( \lambda x + (1 - \lambda ) y \right) \ge p(x)^{\lambda } p(y)^{1-\lambda }. \end{aligned}$$
Common probability distributions such as Gaussian, exponential and logistic are logconcave. This definition also includes any uniform distribution over a convex set, defined as follows. A subset \(K \subset \mathbb {R}^d\) is convex if \(\forall x, y \in K,\ z \in [x, y] \implies z \in K\). The isoperimetric coefficient \(\psi (p)\) of a density p in \(\mathbb {R}^d\) is defined as
$$\begin{aligned} \psi (p) = \inf _{S \subseteq \mathbb {R}^d} \frac{p(\partial S)}{\min \left\{ p(S), p(S^c) \right\} }, \end{aligned}$$
where \(p(S) = \int _{x \in S} p(x) dx\) and the boundary measure of the subset is
$$\begin{aligned} p(\partial S) = \liminf _{\epsilon \rightarrow 0^+} \frac{p\left( \left\{ x: {\mathbf {d}}(x, S) \le \epsilon \right\} \right) - p(S)}{\epsilon }, \end{aligned}$$
where \({\mathbf {d}}(x, S)\) is the Euclidean distance between x and the subset S.
The KLS conjecture is stated by Kannan, Lovász and Simonovits [12] as follows.
Conjecture 1
There exists a universal constant c, such that for any logconcave density p in \(\mathbb {R}^d\), we have
$$\begin{aligned} \psi (p) \ge \frac{c}{\sqrt{\rho \left( p \right) }}, \end{aligned}$$
where \(\rho \left( p \right) \) is the spectral norm of the covariance matrix of p. In other words, \(\rho \left( p \right) = \left\| A\right\| _{2}\), where \(A = {{\,\mathrm{Cov}\,}}_{X \sim p} (X)\) is the covariance matrix.
An upper bound on \(\psi (p)\) of the same form is relatively easy to obtain, and it was shown to be achieved by halfspaces [12]. Proving the lower bound on \(\psi (p)\) up to some small factors in Conjecture 1 is the main goal of this paper. We say a logconcave density is isotropic if its mean \({\mathbb {E}}_{X\sim p} [X]\) equals 0 and its covariance \({{\,\mathrm{Cov}\,}}_{X\sim p}(X)\) equals \(\mathbb {I}_d\). In the case of isotropic logconcave densities, the KLS conjecture states that any isotropic logconcave density has its isoperimetric coefficient lower bounded by a universal constant.
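As an illustration of why halfspaces achieve the conjectured order in the isotropic case, consider the standard Gaussian; this computation is our example and is not used in the sequel. For \(p = \mathcal {N}(0, \mathbb {I}_d)\), so that \(\rho \left( p \right) = 1\), and the halfspace \(S = \left\{ x: x_1 \le 0 \right\} \), we have
$$\begin{aligned} p(S) = \frac{1}{2}, \quad p(\partial S) = \frac{1}{\sqrt{2\pi }}, \quad \text {hence} \quad \psi (p) \le \frac{p(\partial S)}{\min \left\{ p(S), p(S^c) \right\} } = \sqrt{\frac{2}{\pi }}, \end{aligned}$$
a dimension-free constant, matching the conjectured lower bound \(c / \sqrt{\rho \left( p \right) }\) up to the universal constant.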
There have been many attempts to lower bound the Cheeger isoperimetric coefficient in the KLS conjecture. We refer readers to the survey paper by Lee and Vempala [18] for a detailed exposition of these attempts. In particular, the original KLS paper [12] (Theorem 5.1) shows that for any logconcave density p with covariance matrix A,
$$\begin{aligned} \psi (p) \ge \frac{\log (2)}{{{\,\mathrm{Tr}\,}}\left( A \right) ^{1/2}}. \end{aligned}$$
The original KLS paper [12] only deals with uniform distributions over convex sets, but their proof techniques can be easily extended to show that the same results hold for all logconcave densities. Remark that Equation (3) implies \(\psi (p) \ge \frac{\log (2)}{d^{1/2} \cdot \sqrt{\rho \left( p \right) }}\). The current best bound is shown in Lee and Vempala [17], where they show that there exists a universal constant c such that for any logconcave density p with covariance matrix A,
$$\begin{aligned} \psi (p) \ge \frac{c}{\left( {{\,\mathrm{Tr}\,}}\left( A^2 \right) \right) ^{1/4}}. \end{aligned}$$
It implies that \(\psi (p) \ge \frac{c}{d^{1/4} \cdot \sqrt{\rho \left( p \right) }}\). Note that in Lee and Vempala [17], their notation for \(\psi (p)\) is the reciprocal of ours; it was later switched in Theorem 32 of the survey paper [18] by the same authors. Thus the above bound is not a misstatement of the results in Lee and Vempala [17]; it is simply translated into our notation. In this paper, we improve the dimension dependency \(d^{1/4}\) to \(d^{o_d(1)}\) in the lower bound of the isoperimetric coefficient.
There are many implications of improving the lower bound in the KLS conjecture. The two closely related conjectures are Bourgain’s slicing conjecture [3, 4] and the thin-shell conjecture [2]. It is worth noting that Bourgain [4] stated the slicing conjecture earlier than the introduction of the KLS conjecture. In terms of their connections to the KLS conjecture, Eldan and Klartag [9] proved that the thin-shell conjecture implies Bourgain’s slicing conjecture up to a universal constant factor. Later, Eldan [8] showed that the inverse of a lower bound of the isoperimetric coefficient is equivalent to an upper bound of the thin-shell constant in the thin-shell conjecture. Combining these two results, we have that a lower bound in the KLS conjecture implies upper bounds in the thin-shell conjecture and in Bourgain’s slicing conjecture.
The current best upper bound of the thin-shell constant has the dimension dependency \(d^{1/4}\), due to Lee and Vempala’s [17] improvement in the KLS conjecture. The current best bound of the slicing constant in Bourgain’s slicing conjecture also has the dimension dependency \(d^{1/4}\), proved by Klartag [13] without using the KLS conjecture. Klartag’s slicing constant bound is a slight improvement over Bourgain’s earlier slicing bound [4], which has the dimension dependency \(d^{1/4}\log (d)\). Given the current best bounds in these three conjectures and the relation among them, we conclude that improving the current best lower bound in the KLS conjecture improves the current best bounds for the other two conjectures, as noted in Lee and Vempala [18]. For a detailed exposition of the three conjectures and related results since the introduction of Bourgain’s slicing conjecture, we refer readers to Klartag and Milman [14].
Additionally, improving the lower bound in the KLS conjecture also improves concentration inequalities for Lipschitz functions of logconcave measures. It also leads to faster mixing time bounds of Markov chain Monte Carlo (MCMC) sampling algorithms on logconcave measures. Despite the great importance of these results, deriving these results from our new bound in the KLS conjecture is not the main focus of our paper. We refer readers to Milman [20] and Lee and Vempala [18] for more details about the abundant implications of the KLS conjecture.
Notation For two sequences \(a_n\) and \(b_n\) indexed by an integer n, we say that \(a_n = o_n(b_n)\) if \(\lim _{n \rightarrow \infty } \frac{a_n}{b_n} = 0\). The Euclidean norm of a vector \(x \in \mathbb {R}^d\) is denoted by \(\left\| x\right\| _{2}\). The spectral norm of a square matrix \(A \in \mathbb {R}^{d\times d}\) is denoted by \(\left\| A\right\| _{2}\). The Euclidean ball with center x and radius r is denoted by \(\mathbb {B}(x, r)\). For a real number \(x \in \mathbb {R}\), we denote its ceiling by \(\lceil x \rceil = \min \left\{ m \in \mathbb {Z} \mid m \ge x \right\} \). We say a density p is more logconcave than a Gaussian density \(\varphi \) if p can be written as a product form \(p = \nu \cdot \varphi \) where \(\varphi \) is the Gaussian density and \(\nu \) is a logconcave function (that is, \(\nu \) is proportional to a logconcave density). For a martingale \((M_t,\ t \in \mathbb {R}_+)\), we use \(\left[ M \right] _t\) to denote its quadratic variation, defined as
$$\begin{aligned} \left[ M \right] _t = \lim _{n \rightarrow \infty } \sum _{k=1}^{n} \left( M_{t_k} - M_{t_{k-1}} \right) ^2, \end{aligned}$$
where the limit is in probability over partitions \(0 = t_0 \le t_1 \le \cdots \le t_n = t\) whose mesh size goes to 0.
Main results
We prove the following lower bound on the isoperimetric coefficient of any logconcave density.
Theorem 1
There exists a universal constant c such that for any logconcave density p in \(\mathbb {R}^d\) and any integer \(\ell \ge 1\), we have
where \(\rho \left( p \right) \) is the spectral norm of the covariance matrix of p.
As a corollary, take \(\ell = \left\lceil \left( \frac{\log (d)}{\log \log (d)} \right) ^{1/2} \right\rceil \); then there exists a constant \(c'\) such that
Since \(\lim _{d\rightarrow \infty } \frac{\log \log (d)}{\log (d)} = 0\), for \(d\) large enough, the above lower bound is better than any lower bound of the form \(\frac{1}{d^{c''} \sqrt{\rho \left( p \right) }} \) (for any positive constant \(c''\)) in terms of the dependency on the dimension \(d\).
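To see why such a bound beats every fixed power of d, assume for illustration that the factor in the corollary is of order \(\exp \left( c' \sqrt{\log (d) \log \log (d)} \right) \) for a constant \(c'\) (the precise constants come from the statement above); then
$$\begin{aligned} \exp \left( c' \sqrt{\log (d) \log \log (d)} \right) = d^{\, c' \sqrt{\log \log (d) / \log (d)}} = d^{o_d(1)}, \end{aligned}$$
so for any fixed \(c'' > 0\) this factor is eventually smaller than \(d^{c''}\).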
The proof of the main theorem uses the stochastic localization scheme introduced by Eldan [8]. Eldan uses this stochastic localization scheme to show that the thin-shell conjecture is equivalent to the KLS conjecture up to a logarithmic factor. The construction of the stochastic localization scheme uses elementary properties of semimartingales and stochastic integration. The main idea of Eldan’s proof to derive the KLS conjecture from the thin-shell conjecture is to smoothly multiply the logconcave density by a Gaussian factor, so that the modified density is more logconcave than a Gaussian density. When the Gaussian part is large enough, one can then easily prove the isoperimetric inequality.
The same scheme was refined in Lee and Vempala [17] to obtain the current best lower bound in the KLS conjecture. Lee and Vempala directly attack the KLS conjecture while following the same stochastic localization scheme, smoothly multiplying the logconcave density by a Gaussian factor. Their use of a new potential function leads to the current best lower bound in the KLS conjecture. The proof in this paper builds on the refinements of Eldan’s method by Lee and Vempala [17], while improving the handling of several quantities involved in the stochastic localization scheme. Figure 1 provides a diagram showing the relationship between the main lemmas.
To ensure the existence and the uniqueness of the stochastic localization construction, we first prove a lemma that deals with logconcave densities with compact support. Then we relate back to the main theorem by finding a compact support which contains most of the probability measure for a logconcave density.
Lemma 1
There exists a universal constant c such that for any logconcave density p in \(\mathbb {R}^d\) with compact support and any integer \(\ell \ge 1\), we have
The proof of Lemma 1 is provided in Section 2.5 after we introduce the intermediate lemmas. The use of the integer \(\ell \) in the lemma indicates that we control the Cheeger isoperimetric coefficient in an iterative fashion. In fact, we prove Lemma 1 by induction over \(\ell \), starting from the known bound in Equation (3). For this, we define the infimum of the product of the isoperimetric coefficient and the square root of its spectral norm over all logconcave densities in \(\mathbb {R}^d\) with compact support:
$$\begin{aligned} \psi _d= \inf _{p \text { logconcave in } \mathbb {R}^d\text { with compact support}} \psi (p) \sqrt{\rho \left( p \right) }. \end{aligned}$$
Then we prove the following lemma on the lower bound of \(\psi _d\), which serves as the main induction argument.
Lemma 2
Suppose that \(\psi _k \ge \frac{1}{\alpha k^\beta }\) for all \(k \le d\), for some \(0 < \beta \le \frac{1}{2}\) and \(\alpha \ge 1\). Take \(q = \lceil \frac{1}{\beta } \rceil + 1\). Then there exists a universal constant c such that we have
The proof of Lemma 2 is provided towards the end of this section, in Section 2.4. To get a good understanding of how we get there, we start by presenting the stochastic localization scheme introduced by Eldan [8].
Eldan’s stochastic localization scheme.
Given a logconcave density p in \(\mathbb {R}^d\) with covariance matrix \(A\), we define the following stochastic differential equation (SDE)
$$\begin{aligned} c_0 = 0, \quad dc_t = C_t \, dW_t + C_t C_t^\top \mu _t \, dt, \end{aligned}$$
where \(W_t\) is the Wiener process, the matrix \(C_t\), the density \(p_t\), the mean \(\mu _t\) and the covariance \(A_t\) are defined as follows
$$\begin{aligned} C_t&= A^{-1/2}, \quad B_t = t A^{-1}, \\ p_t(x)&= \frac{e^{c_t^\top x - \frac{1}{2} x^\top B_t x} p(x)}{\int _{\mathbb {R}^d} e^{c_t^\top y - \frac{1}{2} y^\top B_t y} p(y) \, dy}, \\ \mu _t&= \int _{\mathbb {R}^d} x \, p_t(x) \, dx, \quad A_t= \int _{\mathbb {R}^d} \left( x - \mu _t \right) \left( x - \mu _t \right) ^\top p_t(x) \, dx. \end{aligned}$$
The next lemma shows the existence and the uniqueness of the SDE solution.
Lemma 3
Given a density p in \(\mathbb {R}^d\) with compact support whose covariance \(A\) is invertible, the SDE (8) is well defined and it has a unique solution on the time interval [0, T], for any time \(T > 0\). Additionally, for any \(x \in \mathbb {R}^d\), \(p_t(x)\) is a martingale with
$$\begin{aligned} dp_t(x) = \left( x - \mu _t \right) ^\top A^{-1/2} dW_t \, p_t(x). \end{aligned}$$
The proof of Lemma 3 follows from the standard existence and uniqueness theorem of SDE (Theorem 5.2 in Øksendal [21]). The proof is provided in Appendix A.
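Although the scheme is used purely as a proof device, simulating it can help intuition. The following Euler–Maruyama sketch is our illustration (the function name and discretization choices are ours, not from the paper): it assumes \(C_t = A^{-1/2}\) as above, represents p by a weighted point cloud, and evolves the weights according to \(dp_t(x) = p_t(x) \left( x - \mu _t \right) ^\top A^{-1/2} dW_t\).

```python
import numpy as np

def simulate_localization(points, steps=1000, dt=1e-3, seed=0):
    """Euler-Maruyama sketch of the stochastic localization scheme.

    `points` is an (n, d) cloud approximating a logconcave density p,
    carried with weights that play the role of p_t. We assume
    C_t = A^{-1/2}, so that dp_t(x) = p_t(x) (x - mu_t)^T A^{-1/2} dW_t.
    """
    rng = np.random.default_rng(seed)
    n, d = points.shape
    w = np.full(n, 1.0 / n)              # weights representing p_t
    A = np.cov(points.T)                 # covariance of p
    L = np.linalg.cholesky(A)
    M = np.linalg.inv(L).T               # M M^T = A^{-1}, one choice of A^{-1/2}
    for _ in range(steps):
        mu = w @ points                  # current mean mu_t
        dW = np.sqrt(dt) * rng.standard_normal(d)
        # discretized dp_t(x) = p_t(x) (x - mu_t)^T A^{-1/2} dW_t
        w = w * (1.0 + (points - mu) @ (M @ dW))
        w = np.clip(w, 0.0, None)        # guard against discretization error
        w /= w.sum()                     # renormalize to a probability vector
    return w
```

On such a simulation one can check numerically that, for a fixed subset E of the points, the total weight of E behaves as a martingale and that the weights localize as t grows; both properties are exploited in the proofs below.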
Before we dive into the proof of Lemma 2, we discuss how the stochastic localization scheme allows us to control the boundary measure of a subset. First, according to the concavity of the isoperimetric profile (Theorem 2.8 in Sternberg and Zumbrun [25] or Theorem 1.8 in Milman [20]), it is sufficient to consider subsets of measure 1/2 in the definition of the isoperimetric coefficient in Equation (2). Second, the density \(p_t\) is logconcave and it is more logconcave than the Gaussian density proportional to \(e^{-\frac{1}{2}x^\top B_t x}\). It can be shown via the KLS localization lemma [12] that a density which is more logconcave than a Gaussian has an isoperimetric coefficient lower bound that depends on the covariance of the Gaussian (see e.g. Theorem 2.7 in Ledoux [16] or Theorem 4.4 in Cousins and Vempala [7]). Third, given an initial subset E of \(\mathbb {R}^d\) with measure \(p(E) = \frac{1}{2}\), using the martingale property of \(p_t(E)\), we observe that
Inequality (i) uses the isoperimetric inequality for a logconcave density which is more logconcave than a Gaussian density proportional to \(e^{-\frac{1}{2}x^\top B_t x}\) [7, 16]. Inequality (ii) uses the fact that \(p_t(E)\) is nonnegative.
Based on the above observation, the high level idea of the proof requires two main steps:

There exists some time \(t > 0\), such that the Gaussian component \(e^{-\frac{1}{2}x^\top B_t x}\) of the density \(p_t\) is large enough, so that we can apply the known isoperimetric inequality for densities more logconcave than a Gaussian.

We need to control the quantity \(p_t(E)\) so that the obtained isoperimetric inequality at time t can be related back to that at time 0.
The first step is obvious since our construction explicitly enforces the density \(p_t\) to have a Gaussian component \(e^{-\frac{1}{2}x^\top B_t x}\) in Equation (9). Then the remaining question is whether we can run the SDE long enough to make the Gaussian component large enough while still keeping \(p_t(E)\) of the same order as \(p(E) = \frac{1}{2}\) with large probability.
Control the evolution of the measure of a subset.
Lemma 4
Under the same assumptions of Lemma 3, for any measurable subset E of \(\mathbb {R}^d\) with \(p(E) = \frac{1}{2}\) and \(t > 0\), the solution \(p_t\) of the SDE (9) satisfies
This lemma is proved in Lemma 29 of Lee and Vempala [17]. We provide a proof here for completeness.
Proof of Lemma 4. Let \(g_t = p_t(E)\). Using Equation (13), we obtain the following derivative of \(g_t\)
$$\begin{aligned} dg_t = \left( \int _E \left( x - \mu _t \right) p_t(x) dx \right) ^\top A^{-1/2} dW_t. \end{aligned}$$
Its quadratic variation is
$$\begin{aligned} d\left[ g \right] _t = \left\| A^{-1/2} \int _E \left( x - \mu _t \right) p_t(x) dx \right\| _{2}^{2} dt \le \left\| A^{-1/2} A_t A^{-1/2}\right\| _{2} dt, \end{aligned}$$
where the inequality follows from the Cauchy–Schwarz inequality. Applying the Dambis–Dubins–Schwarz theorem (see e.g. Revuz and Yor [23], Section V.1, Theorem 1.7), there exists a Wiener process \({\tilde{W}}_t\) such that \(g_t - g_0\) has the same distribution as \({\tilde{W}}_{[g]_t}\). Since \(g_0 = \frac{1}{2}\), we obtain
where the last inequality follows from the fact that \({\mathbb {P}}\left( \xi > 2 \right) < 0.023\), where \(\xi \) follows the standard Gaussian distribution.\(\square \)
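The numerical constant here is just the standard Gaussian tail at 2; as a quick check,
$$\begin{aligned} {\mathbb {P}}\left( \xi > 2 \right) = 1 - \Phi (2) \approx 1 - 0.97725 = 0.02275 < 0.023, \end{aligned}$$
where \(\Phi \) denotes the cumulative distribution function of the standard Gaussian.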
Control the evolution of the spectral norm.
According to Lemma 4, to control the evolution of the measures of subsets, we need to control the spectral norm of \(A^{-1/2} A_t A^{-1/2}\). The following lemma serves this purpose.
Lemma 5
In addition to the same assumptions of Lemma 3, if \(\psi _k \ge \frac{1}{\alpha k^\beta }\) for all \(k \le d\) for some \(0 < \beta \le \frac{1}{2}\) and \(\alpha \ge 1\), then there exists a universal constant c such that for \(q = \lceil \frac{1}{\beta } \rceil + 1\), \(d\ge 3\) and \(T_2 = \frac{1}{ c \cdot q \alpha ^2\log (d) d^{2\beta - \beta /(4q)}}\), we have
Direct control of the largest eigenvalue of \(A^{-1/2} A_t A^{-1/2}\) is not trivial; instead we use the potential function \(\Gamma _t\) to upper bound the largest eigenvalue. Define
$$\begin{aligned} \Gamma _t = {{\,\mathrm{Tr}\,}}\left( Q_t^{q} \right) , \quad \text {where } Q_t = A^{-1/2} A_t A^{-1/2}. \end{aligned}$$
It is clear that \(\Gamma _t^{1/q} \ge \left\| A^{-1/2} A_t A^{-1/2}\right\| _{2}\). So in order to upper bound \(\left\| A^{-1/2} A_t A^{-1/2}\right\| _{2}\), it is sufficient to upper bound \(\Gamma _t^{1/q}\). The advantage of using \(\Gamma _t\) is that it is differentiable. We have the following differentials for \(A_t\) and \(\Gamma _t\):
Obtaining these differentials uses Itô’s formula and the proofs are provided in Appendix A.
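As a quick verification of the claim \(\Gamma _t^{1/q} \ge \left\| A^{-1/2} A_t A^{-1/2}\right\| _{2}\) above (this display is our elaboration), note that \(Q_t\) is positive semidefinite, so
$$\begin{aligned} \Gamma _t = {{\,\mathrm{Tr}\,}}\left( Q_t^{q} \right) = \sum _{i=1}^{d} \lambda _i\left( Q_t \right) ^{q} \ge \max _{i} \lambda _i\left( Q_t \right) ^{q} = \left\| Q_t\right\| _{2}^{q}, \end{aligned}$$
and taking q-th roots gives the claim. At time 0 we have \(Q_0 = \mathbb {I}_d\), so \(\Gamma _0 = d\), the value used in the proof of Lemma 7 below.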
The next lemma upper bounds the terms in the potential \(\Gamma _t\).
Lemma 6
Under the same assumptions of Lemma 5, the potential \(\Gamma _t\) defined in Equation (14) can be written as follows
$$\begin{aligned} d\Gamma _t = v_t^\top dW_t + \delta _t dt, \end{aligned}$$
where \(v_t \in \mathbb {R}^d\) and \(\delta _t \in \mathbb {R}\) satisfy
The proof of Lemma 6 is provided in Section 3.1. Remark that bounds similar to the first bound of \(\delta _t\) in Lemma 6 have appeared in Lee and Vempala [17], whereas the second bound of \(\delta _t\) in Lemma 6 is novel. The second bound of \(\delta _t\) also leads to the following Lemma 8 which gives better control of the potential than the previous proof by Lee and Vempala [17] when t is large.
Using the bounds in Lemma 6, we state the two lemmas which control the potential \(\Gamma _t\) in two ways.
Lemma 7
Under the same assumptions of Lemma 6, using the following transformation
$$\begin{aligned} h(y) = \left( y + 1 \right) ^{1/q}, \end{aligned}$$
we have
where \(T_1 = \frac{1}{32768 q \alpha ^2 \log (d) d^{2\beta }}\).
Lemma 8
Under the same assumptions of Lemma 6, using the following transformation
$$\begin{aligned} f(y) = y^{1/q}, \end{aligned}$$
we have
The proofs of Lemmas 7 and 8 are provided in Section 3.2.
Now we are ready to prove Lemma 5.
Proof of Lemma 5. We take
We bound the spectral norm of \(A^{-1/2}A_t A^{-1/2}\) in two time intervals via Lemma 7 and Lemma 8. In the first time interval \([0, T_1]\), we have
Inequality (i) follows from the condition \(\beta q \ge 1\). (ii) follows from the fact that \({{\,\mathrm{Tr}\,}}\left( A^q \right) ^{1/q} \ge \left\| A\right\| _{2}\). (iii) is because \(3^q d\ge 2^q (d+ 1)\) when \(q \ge 2\) and \(d\ge 1\). h is defined in Lemma 7. (iv) follows from Lemma 7.
In the first time interval, we can also bound the expectation of \(\Gamma _{T_1}^{1/q}\). Since the density \(p_{T_1}\) is more logconcave than a Gaussian density with covariance matrix \(\frac{A}{T_1}\), the covariance matrix of \(p_{T_1}\) is upper bounded as follows (see Theorem 4.1 in Brascamp and Lieb [5] or Lemma 5 in Eldan and Lehec [10])
Consequently, all the eigenvalues of \(Q_{T_1}\) are less than \(\frac{1}{T_1}\) and \(\Gamma _{T_1}\) is upper bounded by \(\frac{d}{T_1^{q}}\). Using the above bound, we can bound the expectation of \(\Gamma _{T_1}^{1/q}\) as follows
Inequality (i) follows from Lemma 7, the inequality \(3^qd\ge 2^q(d+1)\) (similar to what we did in the last four steps of Equation (17)) and Equation (18). (ii) follows from \(q \ge 2\), \(\beta \le {1/2}\) and \(d^{1/2} \ge \log (d)\) for \(d\ge 3\).
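Spelled out, the comparison used in the last two steps above reads (our restatement of the Brascamp–Lieb consequence):
$$\begin{aligned} A_{T_1} \preceq \frac{A}{T_1} \implies Q_{T_1} = A^{-1/2} A_{T_1} A^{-1/2} \preceq \frac{1}{T_1} \mathbb {I}_d\implies \Gamma _{T_1} = {{\,\mathrm{Tr}\,}}\left( Q_{T_1}^{q} \right) \le \frac{d}{T_1^{q}}. \end{aligned}$$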
In the second time interval, for \(t \in [T_1, T_2]\), we have
Inequality (i) follows from Lemma 8. (ii) is because \(t \le T_2\). (iii) follows from \(T_2 = \frac{d^{\beta /(4q)}}{40} T_1\). Using the above bound, we control the spectral norm in the second time interval via Markov’s inequality
where inequality (i) follows from Markov’s inequality and (ii) follows from Equation (20). (iii) follows from the definition of \(T_2\) and \(\frac{\beta }{2}+\frac{1}{q} \le 2\beta - \beta /(4q)\) when \(\beta q \ge 1\) and \(q \ge 2\).
Combining the bounds in the first and second time intervals in Equation (17) and (21), we obtain
\(\square \)
Proof of Lemma 2.
The proof of Lemma 2 follows the strategy described after Lemma 3. We make the arguments rigorous here. We consider a logconcave density p in \(\mathbb {R}^d\) with compact support. Without loss of generality, we can assume that the covariance matrix A of the density p is invertible. Otherwise, the density p is degenerate and we can instead prove the results in a lower dimension.
According to the concavity of the isoperimetric profile (Theorem 2.8 in Sternberg and Zumbrun [25] or Theorem 1.8 in Milman [20]), it is sufficient to consider subsets of measure 1/2 in the definition of the isoperimetric coefficient (2). Given an initial subset E of \(\mathbb {R}^d\) with \(p(E) = \frac{1}{2}\), using the martingale property of \(p_{T_2}(E)\), we have
Inequality (i) uses the isoperimetric inequality for a logconcave density which is more logconcave than a Gaussian density proportional to \(e^{-\frac{1}{2}x^\top B_t x}\) (see e.g. Theorem 2.7 in Ledoux [16] or Theorem 4.4 in Cousins and Vempala [7]). Inequality (ii) follows from the fact that \(p_t(E)\) is nonnegative. (iii) follows from Lemma 4 and Lemma 5 (for \(d\ge 3\)). (iv) follows from the construction that \(B_t = t A^{-1}\). We conclude the proof since \(T_2\) is taken as \(\frac{1}{ c \cdot q \alpha ^2\log (d) d^{2\beta - \beta /(4q)}}\) with c a constant. The above proof only works for \(d\ge 3\). It is easy to verify that Lemma 2 still holds for \(d= 1, 2\) from the original KLS bound in Equation (3).\(\square \)
Proof of Lemma 1.
The proof of Lemma 1 consists of applying Lemma 2 recursively. We define
For \(\ell \ge 1\), we define \(\alpha _\ell \) and \(\beta _\ell \) recursively as follows:
where c is the constant in Lemma 2. It is not difficult to show by induction that \(\alpha _\ell \) and \(\beta _\ell \) satisfy
We start with a known bound from the original KLS paper [12]
$$\begin{aligned} \psi _d\ge \frac{\log (2)}{d^{1/2}}. \end{aligned}$$
In the induction, suppose that we have
From the above inequality, we obtain for any \(1 \le k \le d\),
with \(\alpha _\ell ' = \alpha _\ell \left( \log (d) +1 \right) ^{\ell /2}\). Using the above lower bounds for \(\psi _k\), we can apply Lemma 2. For integer \(\ell +1\), we have
where inequality (i) follows from Lemma 2, inequality (ii) follows from \(q \le \frac{2}{\beta }\) and the last equality follows from the definition of \(\alpha _\ell \) and \(\beta _\ell \). We conclude Lemma 1 using the \(\alpha _\ell \) and \(\beta _\ell \) bounds in Equation (24).\(\square \)
Proof of Theorem 1.
To derive Theorem 1 from Lemma 1, it is sufficient to show that for any logconcave density p in \(\mathbb {R}^d\), most of its probability measure is on a compact support. Let \(\mu \) be the mean of the density p. Since \(r \mapsto p(\mathbb {B}\left( \mu , r \right) ^c)\) is a nonincreasing function of r with limit 0 at \(\infty \), there exists a radius \(R > 0\) such that \(p(\mathbb {B}\left( \mu , R \right) ^c) \le 0.2\). Note that it is possible to get a better bound via e.g. logconcave concentration bounds from Paouris [22], but knowing the existence of such a radius R is sufficient for the proof here.
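For instance, a crude Chebyshev-type bound already gives an explicit admissible radius (this explicit choice is our illustration; the proof only needs existence):
$$\begin{aligned} p\left( \mathbb {B}\left( \mu , R \right) ^c \right) \le \frac{{\mathbb {E}}_{X \sim p} \left\| X - \mu \right\| _{2}^{2}}{R^2} = \frac{{{\,\mathrm{Tr}\,}}\left( A \right) }{R^2} = 0.2 \quad \text {for } R = \sqrt{5 \, {{\,\mathrm{Tr}\,}}\left( A \right) }, \end{aligned}$$
where A denotes the covariance matrix of p.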
Denote \(B = \mathbb {B}\left( \mu , R \right) \). Then \(p(B^c)\le 0.2\). Let \(\varrho \) be the density obtained by truncating p on the ball B. Then \(\varrho \) is logconcave and it has compact support. For a subset \(E \subset \mathbb {R}^d\) such that \(p(E) = \frac{1}{2}\), we have
The last inequality follows because \(p(E^c) - p(B^c) \ge 0.5 - 0.2 \ge \frac{1}{4}\). Since it is sufficient to consider subsets of measure 1/2 in the definition of the isoperimetric coefficient [20, 25], we conclude that the isoperimetric coefficient of p is lower bounded by half of that of \(\varrho \). Applying Lemma 1 for the isoperimetric coefficient of \(\varrho \), we obtain Theorem 1.\(\square \)
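To make the last comparison explicit, here is a sketch of the chain of inequalities for the truncated density \(\varrho = p \cdot \mathbb {1}_{B} / p(B)\) (our reconstruction of the standard argument): for any subset E with \(p(E) = \frac{1}{2}\),
$$\begin{aligned} p(\partial E) \ge p(B) \, \varrho (\partial E) \ge p(B) \, \psi (\varrho ) \min \left\{ \varrho (E), \varrho (E^c) \right\} \ge \psi (\varrho ) \left( p(E^c) - p(B^c) \right) \ge \frac{\psi (\varrho )}{4}, \end{aligned}$$
so that \(\frac{p(\partial E)}{\min \left\{ p(E), p(E^c) \right\} } = 2 p(\partial E) \ge \frac{\psi (\varrho )}{2}\).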
Proof of auxiliary lemmas
In this section, we prove auxiliary Lemmas 6, 7 and 8.
Tensor bounds and proof of Lemma 6.
In this subsection, we prove Lemma 6. Since Lemma 6 involves the third-order moment tensor of a logconcave density, we define the following 3-tensor for any probability density p in \(\mathbb {R}^d\) with mean \(\mu \) to simplify notations.
For A, B, C three matrices in \(\mathbb {R}^{d\times d}\), we can write \(\mathcal {T}_p(A, B, C)\) equivalently as
Before we prove Lemma 6, we prove the following properties related to the 3-tensor.
Lemma 9
Suppose p is a logconcave density with mean \(\mu \) and covariance A. Then for any positive semidefinite matrices B and C, we have
Lemma 10
Suppose that \(\psi _k \ge \frac{1}{\alpha k^\beta }\) for all \(k \le d\) for some \(0 < \beta \le \frac{1}{2}\) and \(\alpha \ge 1\). Suppose p is a logconcave density in \(\mathbb {R}^d\) with covariance A and A is invertible. Then for \(q \ge \frac{1}{2\beta }\), we have
Lemma 11
Given \(\tau > 0\), suppose p is a logconcave density which is more logconcave than \(\mathcal {N}(0, \frac{1}{\tau } \mathbb {I}_d)\). Let A be its covariance matrix and suppose A is invertible. Then for \(q \ge 3\), we have
Lemma 12
Suppose p is a logconcave density in \(\mathbb {R}^d\). For any \(\delta \in [0, 1]\) and A, B, C positive semidefinite matrices, we have
The proofs of the above lemmas are provided in Section 3.3.
Now we are ready to prove Lemma 6.
Proof of Lemma 6. We first prove the bound on \(\left\| v_t\right\| _{2}\), where
Applying Lemma 9 and knowing the covariance of \(p_t\) is \(A_t\), we obtain
Equality (i) uses the definition of \(Q_t = A^{-1/2} A_t A^{-1/2}\). Equality (ii) uses the fact that \(\left\| MM^\top \right\| _{2} = \left\| M^\top M\right\| _{2}\) for any square matrix \(M \in \mathbb {R}^{d\times d}\). Inequality (iii) uses that \(\left\| M\right\| _{2} \le {{\,\mathrm{Tr}\,}}\left( M^q \right) ^{1/q}\) for any positive semidefinite matrix M.
Next, we bound \(\delta _t\) in two ways. We can ignore the negative term in \(\delta _t\) to obtain the following:
where \(\varrho _t\) is the density of the linear-transformed random variable \(A^{-1/2}\left( X - \mu _t \right) \) for X drawn from \(p_t\), and \(\mu _t\) is the mean of \(p_t\). \(\varrho _t\) is still logconcave since any linear transformation of a logconcave density is logconcave (see e.g. Saumard and Wellner [24]). \(\varrho _t\) has covariance \(A^{-1/2} A_t A^{-1/2}\), which is exactly \(Q_t\). For \(a \in \left\{ 0, \cdots , q-2 \right\} \), we have
Inequality (i) follows from Lemma 12. Inequality (ii) follows from Lemma 10. Since there are \(q-1\) terms in the sum, we conclude the first part of the bound for \(\delta _t\).
On the other hand, since \(p_t\) is more logconcave than the Gaussian density proportional to \(e^{-\frac{t}{2} (x-\mu _t)^\top A^{-1} (x-\mu _t)}\), \(\varrho _t\) is more logconcave than the Gaussian density proportional to \(e^{-\frac{t}{2} x^\top x}\). Applying Lemma 12 and Lemma 11 to each term in Equation (27), we obtain
This concludes the second part of the bound for \(\delta _t\).\(\square \)
Control of the potential in two time intervals.
In this subsection, we prove Lemma 7 and Lemma 8.
Proof of Lemma 7. The function h has the following derivatives
$$\begin{aligned} h(y) = \left( y + 1 \right) ^{1/q}, \quad h'(y) = \frac{1}{q} \left( y + 1 \right) ^{1/q - 1}, \quad h''(y) = \frac{1}{q} \left( \frac{1}{q} - 1 \right) \left( y + 1 \right) ^{1/q - 2}. \end{aligned}$$
Using Itô’s formula, we obtain
where inequality (i) plugs in the bounds in Lemma 6.
Define a martingale \(Y_t\) such that
$$\begin{aligned} dY_t = h'(\Gamma _t) v_t^\top dW_t, \end{aligned}$$
with \(Y_0 = 0\). According to the \(\left\| v_t\right\| _{2}\) upper bound in Lemma 6, we have
Hence the martingale \(Y_t\) is well-defined. According to the Dambis–Dubins–Schwarz theorem (see e.g. Revuz and Yor [23], Section V.1, Theorem 1.7), there exists a Wiener process \({\tilde{W}}_t\) such that \(Y_t\) has the same distribution as \({\tilde{W}}_{[Y]_t}\). Then we have, for any \(\gamma > 0\),
Set \(T = \frac{1}{32768 q \alpha ^2 \log (d) d^{2\beta }}\) and \(\Psi = \frac{1}{2} \left( d+1 \right) ^{1/q}\). Observe that \(\Gamma _0 = d\) and as a result \(h(\Gamma _0) = \left( d+1 \right) ^{1/q}\). Then we have
Inequality (i) follows from the choice of T. (ii) uses Equation (28). (iii) follows by plugging in \(\Psi = \frac{1}{2}\left( d+1 \right) ^{1/q}\) and \(3^q d^2 \ge 2^q (d+ 1)^2\). (iv) follows from \(\beta q \ge 1\), \(d\ge 3\), \(q\ge 2\) and \(3^{-4/3} < 0.3\).\(\square \)
Proof of Lemma 8. The function f has the following derivatives
$$\begin{aligned} f(y) = y^{1/q}, \quad f'(y) = \frac{1}{q} y^{1/q - 1}, \quad f''(y) = \frac{1}{q} \left( \frac{1}{q} - 1 \right) y^{1/q - 2}. \end{aligned}$$
Using Itô’s formula, we obtain
Using the bounds in Lemma 6 and the martingale property of the term \(\frac{1}{q} \Gamma _t^{1/q - 1} v_t^\top dW_t\), we obtain
Solving the above differential equation, we obtain
\(\square \)
Proof of tensor bounds.
In this subsection, we prove Lemmas 9, 10, 11 and 12.
Proof of Lemma 9. Since C is positive semidefinite, we can write its eigenvalue decomposition as follows \(C = \sum _{i=1}^d\lambda _i v_i v_i^\top \), with \(\lambda _i \ge 0\). Then,
Inequality (i) follows from the triangle inequality. (ii) follows from the Cauchy–Schwarz inequality. (iii) follows from the statement below, which upper bounds the fourth moment of a logconcave density via its second moment.\(\square \)
For any logconcave density \(\nu \) and any vector \(\theta \in \mathbb {R}^{d}\), we have
for \(a \ge b > 0\), where \(\mu _\nu \) is the mean of \(\nu \). Equation (29) is proved e.g. in Corollary 5.7 of Guédon et al. [11] and the exact constant is provided in Proposition 3.8 of Latała and Wojtaszczyk [15].
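The instance needed for inequality (iii) in the proof of Lemma 9 is \(a = 4\) and \(b = 2\); assuming the constant in Equation (29) is \(\frac{a}{b}\) (our reading of the cited references), this gives
$$\begin{aligned} {\mathbb {E}}_{X \sim \nu } \left( \theta ^\top \left( X - \mu _\nu \right) \right) ^4 \le \left( \frac{4}{2} \right) ^{4} \left( {\mathbb {E}}_{X \sim \nu } \left( \theta ^\top \left( X - \mu _\nu \right) \right) ^2 \right) ^{2} = 16 \left( {\mathbb {E}}_{X \sim \nu } \left( \theta ^\top \left( X - \mu _\nu \right) \right) ^2 \right) ^{2}, \end{aligned}$$
which matches the constant 16 appearing in the first part of Lemma 13 below.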
In order to prove Lemma 10, we need to introduce one additional lemma as follows.
Lemma 13
Suppose that \(\psi _k \ge \frac{1}{\alpha k^\beta }\) for all \(k \le d\) for some \(0 < \beta \le \frac{1}{2}\) and \(\alpha \ge 1\). For an isotropic logconcave density p in \(\mathbb {R}^d\) and a unit vector \(v \in \mathbb {R}^d\), define \(\Delta = {\mathbb {E}}_{X \sim p} \left( X^\top v \right) \cdot XX^\top \), then we have

1.
For any orthogonal projection matrix \(P \in \mathbb {R}^{d\times d}\) with rank r, we have
$$\begin{aligned} {{\,\mathrm{Tr}\,}}\left( \Delta P \Delta \right) \le 16 \psi ^{-2}_{\min (2r, d)}. \end{aligned}$$ 
2.
For any positive semidefinite matrix A, we have
$$\begin{aligned} {{\,\mathrm{Tr}\,}}\left( \Delta A \Delta \right) \le 128 \alpha ^2 \log (d) \left( {{\,\mathrm{Tr}\,}}\left( A^{1/(2\beta )} \right) \right) ^{2\beta }. \end{aligned}$$
This lemma was proved in Lemma 41 in an older version (arXiv version 2) of Lee and Vempala [17]. The main proof idea for the first part of Lemma 13 appeared in Eldan [8] (Lemma 6). We provide a proof here for completeness.
Proof of Lemma 13. For the first part, we have
Since \({\mathbb {E}}_{X\sim p} X^\top v = 0\), we can subtract the mean of the first term \(X^\top \Delta P X\) without changing the value of \({{\,\mathrm{Tr}\,}}\left( \Delta P \Delta \right) \). Then
Inequality (i) follows from the Cauchy–Schwarz inequality. Inequality (ii) follows from the fact that \({\mathbb {E}}_{X\sim p}(X^\top v)^2 = 1\) as p is isotropic, and that the inverse Poincaré constant is upper bounded by twice the inverse of the squared isoperimetric coefficient (also known as Cheeger’s inequality [6, 19], or Theorem 1.1 in Milman [20]). The matrix \(\Delta P + P^\top \Delta \) has rank at most \(\min (2r, d)\). Rearranging the terms in the above equation, we conclude the first part of Lemma 13.
For the second part, we write the matrix A in its eigenvalue decomposition and group the terms by eigenvalues. We have
where \(A_i\) has eigenvalues in the interval \((\left\| A\right\| _{2} e^{i-1} /d, \left\| A\right\| _{2} e^{i} /d]\) and B has eigenvalues smaller than or equal to \(\left\| A\right\| _{2}/d\). Because the intervals have right endpoints increasing exponentially, we have \(J = \lceil \log (d) \rceil \). Let \(P_i\) be the orthogonal projection matrix formed by the eigenvectors in \(A_i\). Then we have
where inequality (i) follows from the first part of Lemma 13 and inequality (ii) follows from the hypothesis of Lemma 13. Similarly for matrix B, we have
where inequality (i) follows from the hypothesis of Lemma 13 and inequality (ii) follows from the fact that \(\left\| B\right\| _{2} \le \left\| A\right\| _{2}/d\) and \(2\beta \le 1\). Putting the bounds (30) and (31) together, we have
Inequality (i) follows from Hölder’s inequality and inequality (ii) follows from the fact that \(\left\| A_j\right\| _{2}^{1/(2\beta )} \text {rank}(A_j) \le e \, {{\,\mathrm{Tr}\,}}\left( A_{j}^{1/(2\beta )} \right) \) due to the construction of \(A_j\). This concludes the second part of Lemma 13.\(\square \)
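The eigenvalue grouping used in the second part above is mechanical; the following sketch (our illustration, with a hypothetical helper name) computes the bucket projections \(P_i\) for a positive semidefinite matrix:

```python
import numpy as np

def bucket_spectrum(A):
    """Group the spectrum of a PSD matrix A as in the proof of Lemma 13:
    bucket i collects eigenvalues in (||A|| e^{i-1}/d, ||A|| e^{i}/d],
    and the remainder B collects eigenvalues <= ||A||/d.
    Returns the orthogonal projections P_i onto the bucket eigenspaces."""
    d = A.shape[0]
    lam, V = np.linalg.eigh(A)           # eigenvalues and orthonormal eigenvectors
    top = lam.max()                      # spectral norm of A in the PSD case
    J = int(np.ceil(np.log(d)))          # number of buckets, J = ceil(log d)
    projections = []
    for i in range(1, J + 1):
        lo, hi = top * np.exp(i - 1) / d, top * np.exp(i) / d
        mask = (lam > lo) & (lam <= hi)  # eigenvalues landing in bucket i
        P = V[:, mask] @ V[:, mask].T    # orthogonal projection of rank mask.sum()
        projections.append(P)
    return projections                   # eigenvalues <= top/d form the remainder B
```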
Proof of Lemma 10. Let \(\mu \) be the mean of p. First, for X a random vector in \(\mathbb {R}^d\) drawn from p, we define the standardized random variable \(A^{-1/2} (X - \mu )\) and its density \(\varrho \). \(\varrho \) is an isotropic logconcave density. Then through a change of variables, we have
where the last inequality follows from Lemma 12. \(A^q\) is positive semidefinite and we write down its eigenvalue decomposition \(A^q = \sum _{i=1}^d\lambda _i v_i v_i ^\top \) with \(\lambda _i \ge 0\). Since \(\varrho \) is isotropic, we can rewrite the 3-tensor into a summation form and apply Lemma 13.
where we define \(\Delta _i = \int (x^\top v_i) x x^\top \varrho (x) dx\), inequality (i) follows from Lemma 13 and that \(\varrho \) is isotropic, inequality (ii) follows from Cauchy–Schwarz inequality and the assumption that \(q \ge \frac{1}{2\beta }\).\(\square \)
Proof of Lemma 11. Without loss of generality, we can assume that the density p has mean 0. Its covariance matrix A is positive semidefinite and invertible. We can write down its eigenvalue decomposition as follows \(A = \sum _{i=1}^d\lambda _i v_i v_i^\top \) with \(\lambda _i > 0\) and \(v_i\) are eigenvectors with norm 1. Then \(A^{q}\) has an eigenvalue decomposition with the same eigenvectors \(A^q = \sum _{i=1}^d\lambda _i^q v_i v_i^\top \). Define \(\Delta _i = {\mathbb {E}}_{X \sim p} (X^\top A^{1/2}v_i) X X ^\top \), then
Next we bound the terms \({{\,\mathrm{Tr}\,}}\left( \Delta _i \Delta _i \right) \). We have
Equality (i) is because \({\mathbb {E}}_{X \sim p} X = 0\). Inequality (ii) follows from the Cauchy–Schwarz inequality. Equality (iii) follows from the definition of the covariance matrix \({\mathbb {E}}_{X\sim p} XX^\top = A\). Inequality (iv) follows from the Brascamp–Lieb inequality (or Hessian Poincaré, see Theorem 4.1 in Brascamp and Lieb [5]) together with the assumption that p is more logconcave than \(\mathcal {N}(0, \frac{1}{\tau }\mathbb {I}_d)\).
Plugging the bounds of the terms \({{\,\mathrm{Tr}\,}}\left( \Delta _i \Delta _i \right) \) into Equation (32), we obtain
Inequality (i) follows from Cauchy–Schwarz inequality. For \(q \ge 3\), inequality (ii) follows from Lemma 12. From the above equation, after rearranging the terms, we obtain
\(\square \)
Proof of Lemma 12. This lemma was proved in Lemma 43 in an older version (arXiv version 2) of Lee and Vempala [17]; we provide a proof here for completeness.
Without loss of generality, we can assume that the density p has mean 0. For \(i \in \left\{ 1, \cdots , d \right\} \), we define \(\Delta _i = {\mathbb {E}}_{X\sim p} B^{1/2} X X ^\top B^{1/2} X^\top C^{1/2} e_i\), where \(e_i \in \mathbb {R}^d\) is the vector with ith coordinate 1 and 0 elsewhere. We have \(\sum _{i=1}^de_i e_i ^\top = \mathbb {I}_d\). We can rewrite the tensor on the left-hand side as a sum of traces.
For any symmetric matrix F, any positive semidefinite matrix G and \(\delta \in [0, 1]\), we have
$$\begin{aligned} {{\,\mathrm{Tr}\,}}\left( G^{\delta } F G^{1-\delta } F \right) \le {{\,\mathrm{Tr}\,}}\left( G F^2 \right) . \end{aligned}$$
Applying the above trace inequality (34), which we prove later for completeness (see also Lemma 2.1 in Allen-Zhu et al. [1]), we obtain
Writing the sum of traces in Equation (33) back into the 3-tensor form, we conclude Lemma 12.
It remains to prove the trace inequality in Equation (34). Without loss of generality, we can assume G is diagonal, \(G = \mathrm {diag}(g_1, \cdots , g_d)\) with \(g_i \ge 0\). Hence, we have
$$\begin{aligned} {{\,\mathrm{Tr}\,}}\left( G^{\delta } F G^{1-\delta } F \right) = \sum _{i,j} g_i^{\delta } g_j^{1-\delta } F_{ij}^2 \le \sum _{i,j} \left( \delta g_i + (1-\delta ) g_j \right) F_{ij}^2 = {{\,\mathrm{Tr}\,}}\left( G F^2 \right) , \end{aligned}$$
where the inequality follows from Jensen’s inequality and the fact that the logarithm function is concave (or the inequality of arithmetic and geometric means).\(\square \)
References
 1.
Z. Allen-Zhu, Y.T. Lee, and L. Orecchia. Using optimization to obtain a width-independent, parallel, simpler, and faster positive SDP solver. In: Proceedings of the Twenty-Seventh Annual ACM-SIAM Symposium on Discrete Algorithms. SIAM (2016), pp. 1824–1831.
 2.
M. Anttila, K. Ball, and I. Perissinaki. The central limit problem for convex bodies. Transactions of the American Mathematical Society, (12)355 (2003), 4723–4735
 3.
K. Ball. Logarithmically concave functions and sections of convex sets in \(\mathbb {R}^n\). Studia Math, (1)88 (1988), 69–84
 4.
J. Bourgain. On high dimensional maximal functions associated to convex bodies. American Journal of Mathematics, (6)108 (1986), 1467–1476
 5.
H.J. Brascamp and E.H. Lieb. On extensions of the Brunn–Minkowski and Prékopa–Leindler theorems, including inequalities for log concave functions, and with an application to the diffusion equation. In: Inequalities. Springer (2002), pp. 441–464.
 6.
J. Cheeger. A lower bound for the smallest eigenvalue of the Laplacian. In: Proceedings of the Princeton Conference in Honor of Professor S. Bochner (1969), pp. 195–199.
 7.
B. Cousins and S. Vempala. A cubic algorithm for computing Gaussian volume. In: Proceedings of the Twenty-Fifth Annual ACM-SIAM Symposium on Discrete Algorithms. Society for Industrial and Applied Mathematics (2014), pp. 1215–1228.
 8.
R. Eldan. Thin shell implies spectral gap up to polylog via a stochastic localization scheme. Geometric and Functional Analysis, (2)23 (2013), 532–569
 9.
R. Eldan and B. Klartag. Approximately Gaussian marginals and the hyperplane conjecture. Concentration, Functional Inequalities and Isoperimetry, 545 (2011), 55–68
 10.
R. Eldan and J. Lehec. Bounding the norm of a logconcave vector via thinshell estimates. In: Geometric Aspects of Functional Analysis. Springer (2014), pp. 107–122.
 11.
O. Guédon, P. Nayar, and T. Tkocz. Concentration inequalities and geometry of convex bodies. Analytical and Probabilistic Methods in the Geometry of Convex Bodies, 2 (2014), 9–86
 12.
R. Kannan, L. Lovász, and M. Simonovits. Isoperimetric problems for convex bodies and a localization lemma. Discrete & Computational Geometry, (3–4)13 (1995), 541–559
 13.
B. Klartag. On convex perturbations with a bounded isotropic constant. Geometric & Functional Analysis GAFA, (6)16 (2006), 1274–1290
 14.
B. Klartag and V. Milman. The slicing problem by Bourgain. In: (To Appear) Analysis at Large, A Collection of Articles in Memory of Jean Bourgain. Springer (2021).
 15.
R. Latała and J. Wojtaszczyk. On the infimum convolution inequality. Studia Mathematica, (2)189 (2008), 147–187
 16.
M. Ledoux. The Concentration of Measure Phenomenon, Number 89. American Mathematical Society (2001).
 17.
Y.T. Lee and S.S. Vempala. Eldan’s stochastic localization and the KLS hyperplane conjecture: an improved lower bound for expansion. In: 2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS). IEEE (2017), pp. 998–1007.
 18.
Y.T. Lee and S.S. Vempala. The Kannan–Lovász–Simonovits conjecture. arXiv preprint arXiv:1807.03465 (2018).
 19.
V.G. Maz’ya. Classes of domains and imbedding theorems for function spaces. In: Doklady Akademii Nauk, Vol. 133. Russian Academy of Sciences (1960), pp. 527–530.
 20.
E. Milman. On the role of convexity in isoperimetry, spectral gap and concentration. Inventiones Mathematicae, (1)177 (2009), 1–43
 21.
B. Øksendal. Stochastic Differential Equations. Springer, Berlin (2003).
 22.
G. Paouris. Concentration of mass on convex bodies. Geometric & Functional Analysis GAFA, (5)16 (2006), 1021–1049
 23.
D. Revuz and M. Yor. Continuous Martingales and Brownian Motion, Vol. 293. Springer, Berlin (2013).
 24.
A. Saumard and J.A. Wellner. Logconcavity and strong logconcavity: a review. Statistics Surveys, 8 (2014), 45–114
 25.
P. Sternberg and K. Zumbrun. On the connectivity of boundaries of sets minimizing perimeter subject to a volume constraint. Communications in Analysis and Geometry, (1)7 (1999), 199–220
Acknowledgements
Yuansi Chen has received funding from the European Research Council under the Grant Agreement No 786461 (CausalStats - ERC-2017-ADG). We acknowledge scientific interaction and exchange at “ETH Foundations of Data Science”. We thank Peter Bühlmann and Bin Yu for their continuous support and encouragement. We thank Afonso Bandeira, Raaz Dwivedi, Ronen Eldan, Yin Tat Lee and Martin Wainwright for helpful discussions. We thank Bo’az Klartag and Joseph Lehec for pointing out a mistake in the previous revision. We also thank anonymous reviewers for their careful reading of our manuscript and their suggestions on presentation and writing.
Funding
Open access funding provided by Swiss Federal Institute of Technology Zurich.
Proof of Lemma 3 and derivatives
In this section, we first prove the existence and uniqueness of the SDE solution in Lemma 3 and then derive the derivatives of \(p_t\), \(A_t\) and \(\Gamma _t\) in Equation (13), Equation (15) and (16) using Itô’s calculus. Similar results are also proved in Eldan [8] and Lee and Vempala [17] since a similar stochastic localization is used. We provide a proof here for completeness.
Proof of Lemma 3. We can rewrite the stochastic differential equation (8) as follows to make the dependency clear:
where
Since p has compact support, given \(x \in \mathbb {R}^d\), \(\varrho (\cdot , \cdot , x)\) as a function of (c, B) is Lipschitz in c and B. Similarly, \(\mu \) is also Lipschitz in c and B. Consequently, \(A^{-1/2}\), \(A^{-1}\mu (c_t, B_t)\) and \(A^{-1}\) are all bounded and Lipschitz in \(c_t\) and \(B_t\) on the compact support. Applying the existence and uniqueness theorem for SDE solutions (Theorem 5.2 in Øksendal [21]), we conclude that the SDE solution exists and is unique on the time interval [0, T] for any \(T > 0\).
Next, we derive the derivative of \(p_t\). Define
Then \(p_t(x)\) can be written as \(\frac{G_t(x)}{V_t}\). Let \(S_t(x)\) denote the quadratic variation of the process \(c_t^\top x\). We have
Using Itô’s formula, we have
Using Itô’s formula on the inverse of \(V_t\), we have
Using Itô’s formula on \(p_t\), with the above derivatives, we obtain
Then we derive the derivative of \(A_t\). By the definition of \(A_t\), we have
where \(\mu _t = \int _{\mathbb {R}^d} x p_t(x) dx\). Using Itô’s formula on \(\mu _t\), we obtain
Using Itô’s formula on \(A_t\) and viewing it as a function of \(\mu _t\) and \(p_t\), we obtain
We observe that \(\int d\mu _t\left( x - \mu _t \right) ^\top p_t(x) dx = 0\) and \(\int \left( x - \mu _t \right) \left( d\mu _t \right) ^\top p_t(x) dx = 0\). Then,
Combining all the terms together, we have
Finally, we derive the derivative of \(\Gamma _t\). Define the function \(\Gamma : \mathbb {R}^{d\times d} \mapsto \mathbb {R}\) as \(\Gamma (X) = {{\,\mathrm{Tr}\,}}\left( X^q \right) \). The first-order and second-order derivatives of \(\Gamma \) are given by
Using the above derivatives and Itô’s formula, we obtain
where \(E_{ij}\) is the matrix that takes 1 at the entry (i, j) and 0 otherwise and \(Q_{ij, t}\) is the stochastic process defined by the (i, j) entry of \(Q_t\). Using the derivative of \(A_t\) in Equation (15), we have
where \(z(x)_i\) is the ith coordinate of \(A^{-1/2}(x-\mu _t)\). Plugging the expressions of \(dA_t\) and \(d\left[ A_{ij}, A_{kl} \right] _t \) into Equation (35), we obtain
\(\square \)