A two-sided estimate for the Gaussian noise stability deficit

  • Published in: Inventiones mathematicae

Abstract

The Gaussian noise-stability of a set \(A \subset {\mathbb R}^n\) is defined by

$$ \begin{aligned} {\mathcal {S}}_\rho (A) = {\mathbb P}\left( X \in A ~ \& ~ Y \in A \right) \end{aligned}$$

where \(X,Y\) are standard jointly Gaussian vectors satisfying \({\mathbb E}[X_i Y_j] = \delta _{ij} \rho \). Borell’s inequality states that for all \(0 < \rho < 1\), among all sets \(A \subset {\mathbb R}^n\) with a given Gaussian measure, the quantity \({\mathcal {S}}_\rho (A)\) is maximized when \(A\) is a half-space. We give a novel short proof of this fact, based on stochastic calculus. Moreover, we prove an almost tight, two-sided, dimension-free robustness estimate for this inequality: by introducing a new metric to measure the distance between the set \(A\) and its corresponding half-space \(H\) (namely the distance between the two centroids), we show that the deficit \({\mathcal {S}}_\rho (H) - {\mathcal {S}}_\rho (A)\) can be controlled from both below and above by essentially the same function of the distance, up to logarithmic factors. As a consequence, we also establish the conjectured exponent in the robustness estimate proven by Mossel-Neeman, which uses the total-variation distance as a metric. In the limit \(\rho \rightarrow 1\), we obtain an improved dimension-free robustness bound for the Gaussian isoperimetric inequality. Our estimates are also valid for a generalized version of stability where more than two correlated vectors are considered.
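As an illustration of the definition (not part of the paper), the noise stability of a set can be estimated by Monte Carlo. The sketch below does this in dimension one for the half-space \(H = \{x \le 0\}\) of Gaussian measure \(1/2\), for which the classical Gaussian quadrant-probability formula gives the closed form \({\mathcal {S}}_\rho (H) = \frac{1}{4} + \frac{\arcsin \rho }{2\pi }\); the function names are my own.

```python
import math
import random

def noise_stability_halfspace_mc(rho, n_samples=200_000, seed=0):
    """Monte Carlo estimate of S_rho(H) for H = {x <= 0} in dimension 1.
    Samples X ~ N(0,1) and Y = rho*X + sqrt(1-rho^2)*Z with Z ~ N(0,1),
    so that (X, Y) is jointly Gaussian with correlation rho."""
    rng = random.Random(seed)
    sigma = math.sqrt(1.0 - rho * rho)
    hits = 0
    for _ in range(n_samples):
        x = rng.gauss(0.0, 1.0)
        y = rho * x + sigma * rng.gauss(0.0, 1.0)
        if x <= 0.0 and y <= 0.0:
            hits += 1
    return hits / n_samples

def noise_stability_halfspace_exact(rho):
    """Closed form for this half-space: 1/4 + arcsin(rho)/(2*pi)."""
    return 0.25 + math.asin(rho) / (2.0 * math.pi)

est = noise_stability_halfspace_mc(0.5)
exact = noise_stability_halfspace_exact(0.5)  # = 1/4 + 1/12
```

With \(2 \times 10^5\) samples the Monte Carlo error is of order \(10^{-3}\), so the estimate matches the closed form to about two decimal places.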


References

  1. Bakry, D., Ledoux, M.: Lévy-Gromov isoperimetric inequality for an infinite dimensional diffusion generator. Invent. Math. 123, 259–281 (1995)

  2. Barthe, F., Maurey, B.: Some remarks on isoperimetry of Gaussian type. Ann. Inst. H. Poincaré Probab. Statist. 36(4), 419–434 (2000)

  3. Bobkov, S.G., Gozlan, N., Roberto, C., Samson, P.-M.: Bounds on the deficit in the logarithmic Sobolev inequality. Preprint (2013)

  4. Borell, C.: The Brunn-Minkowski inequality in Gauss space. Invent. Math. 30(2), 207–216 (1975)

  5. Borell, C.: Geometric bounds on the Ornstein-Uhlenbeck velocity process. Z. Wahrsch. Verw. Gebiete 70(1), 1–13 (1985)

  6. Carlen, E.A., Kerce, C.: On the cases of equality in Bobkov's inequality and Gaussian rearrangement. Calc. Var. Partial Differ. Equ. 13(1), 1–18 (2001)

  7. Cianchi, A., Fusco, N., Maggi, F., Pratelli, A.: On the isoperimetric deficit in Gauss space. Am. J. Math. 133(1), 131–186 (2011). doi:10.1353/ajm.2011.0005

  8. Bobkov, S.G.: An isoperimetric inequality on the discrete cube, and an elementary proof of the isoperimetric inequality in Gauss space. Ann. Probab. 25(1), 206–214 (1997)

  9. Ehrhard, A.: Symétrisation dans l'espace de Gauss. Math. Scand. 53(2), 281–301 (1983)

  10. Eldan, R.: Thin shell implies spectral gap up to polylog via a stochastic localization scheme. Geom. Funct. Anal. 23(2), 532–569 (2013)

  11. Eldan, R.: Skorokhod embeddings via stochastic flows on the space of measures. arXiv:1303.3315

  12. Eldan, R., Lehec, J.: Bounding the norm of a log-concave vector via thin-shell estimates. arXiv:1306.3696

  13. Furstenberg, H.: Recurrence in Ergodic Theory and Combinatorial Number Theory. Princeton Univ. Press, Princeton, NJ (1981)

  14. Isaksson, M., Mossel, E.: Maximally stable Gaussian partitions with discrete applications. Israel J. Math. 189, 347–396 (2012)

  15. Kindler, G., O'Donnell, R.: Gaussian noise sensitivity and Fourier tails. In: IEEE Conference on Computational Complexity, pp. 137–147 (2012)

  16. Ledoux, M.: A short proof of the Gaussian isoperimetric inequality. In: High Dimensional Probability (Oberwolfach, 1996), Progr. Probab. 43, pp. 229–232. Birkhäuser, Basel (1998)

  17. Ledoux, M.: Isoperimetry and Gaussian analysis. In: Lectures on Probability Theory and Statistics, pp. 165–294 (1996)

  18. Mossel, E., Neeman, J.: Robust optimality of Gaussian noise stability. arXiv:1210.4126 (2012)

  19. Neeman, J.: Isoperimetry and noise sensitivity in Gaussian space. Ph.D. Thesis, University of California, Berkeley (2013)

  20. Sudakov, V.N., Cirel'son, B.S.: Extremal properties of half-spaces for spherically invariant measures. J. Math. Sci. 9(1), 9–18 (1978)

  21. Talagrand, M.: Transportation cost for Gaussian and other product measures. Geom. Funct. Anal. 6, 587–600 (1996)


Acknowledgments

I am grateful to Elchanan Mossel for inspiring me to work on this problem and for several fruitful discussions in which, in particular, he suggested that the method should work for \(q\)-stability and for the isoperimetric problem. I am deeply thankful to Bo’az Klartag for a very useful discussion in which he gave me the idea of using Talagrand’s theorem in the proof of Lemma 24. Finally, I thank Gil Kalai for introducing me to this topic and Yuval Peres, Joe Neeman, Joseph Lehec and James Lee for useful comments on a preliminary version of this note.

Author information


Correspondence to Ronen Eldan.

Appendix

In this appendix we fill in the proofs of a few technical lemmas that were omitted from the note.

Proof of Lemma 8

Define

$$\begin{aligned} g_{x,t} (y) := \gamma _{y,\sqrt{1-t}} (x) = \frac{1}{(2 \pi (1-t))^{n/2} } \exp \left( - \frac{|x-y|^2}{2 (1-t)} \right) . \end{aligned}$$

A simple calculation gives

$$\begin{aligned} \nabla g_{x,t}(y) = \frac{(x-y)}{(1-t)} g_{x,t}(y), \end{aligned}$$

and therefore

$$\begin{aligned} \Delta g_{x,t} (y) = \left( \frac{ |x-y|^2}{(1-t)^2 } - \frac{n}{(1-t)} \right) g_{x,t} (y). \end{aligned}$$

Moreover,

$$\begin{aligned} \frac{\partial }{\partial t} g_{x,t} (y) = - \left( \frac{ |x-y|^2}{2 (1-t)^2 } - \frac{n}{2 (1-t)} \right) g_{x,t} (y). \end{aligned}$$

We can therefore calculate, using Itô’s formula,

$$\begin{aligned} d F_t(x)&= d g_{x,t} (W_t) = \frac{\partial }{\partial t} g_{x,t} (W_t) \, dt + \nabla g_{x,t} (W_t) \cdot d W_t + \frac{1}{2} \Delta g_{x,t} (W_t) \, dt \\&= \nabla g_{x,t} (W_t) \cdot d W_t = (1-t)^{-1} \langle x - W_t, d W_t \rangle F_t(x), \end{aligned}$$

which proves that \(F_t(x)\) is a local martingale and establishes equation (14).

Next, let \(\phi : {\mathbb R}^n \rightarrow {\mathbb R}\) satisfy \(|\phi (x)| < C_1 + C_2 |x|^p\) for some constants \(C_1,C_2,p>0\). Noting that the integral \(\int _{{\mathbb R}^n} \phi (x) \exp (-\alpha |x-x_0| ^2) dx\) is absolutely convergent for all \(\alpha > 0\) and \(x_0 \in {\mathbb R}^n\), we deduce that for all \(0<t<1\) and all \(y \in {\mathbb R}^n\), we have

$$\begin{aligned} \nabla \int _{{\mathbb R}^n} \phi (x) g_{x,t} (y) dx = \int _{{\mathbb R}^n} \phi (x) \nabla g_{x,t}(y) dx \end{aligned}$$

and

$$\begin{aligned} \left( \frac{\partial }{\partial t} - \frac{1}{2} \Delta \right) \int _{{\mathbb R}^n} \phi (x) g_{x,t}(y) dx = 0. \end{aligned}$$

Formula (15) follows. Finally, the fact that the process \(t \rightarrow \int _{{\mathbb R}^n} \phi (x) F_t(x) dx\) is a martingale follows immediately from the fact that

$$\begin{aligned} \int _{{\mathbb R}^n} \phi (x) F_t(x) dx = {\mathbb E}[\phi (W_1) | {\mathcal {F}}_t]. \end{aligned}$$

\(\square \)
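The two differentiation formulas above combine to the backward heat equation \(\frac{\partial }{\partial t} g_{x,t} + \frac{1}{2} \Delta g_{x,t} = 0\) in the variables \((t,y)\), which is what makes \(F_t(x)\) a local martingale. As a sanity check (my own verification aid, not from the paper), the following finite-difference sketch confirms the identity in dimension \(n=1\) at an arbitrarily chosen point:

```python
import math

def g(x, t, y):
    """Gaussian kernel gamma_{y, sqrt(1-t)}(x) in dimension n = 1."""
    v = 1.0 - t
    return math.exp(-(x - y) ** 2 / (2.0 * v)) / math.sqrt(2.0 * math.pi * v)

def dt(f, x, t, y, h=1e-5):
    """Central difference in t."""
    return (f(x, t + h, y) - f(x, t - h, y)) / (2.0 * h)

def dyy(f, x, t, y, h=1e-4):
    """Second central difference in y (the Laplacian when n = 1)."""
    return (f(x, t, y + h) - 2.0 * f(x, t, y) + f(x, t, y - h)) / (h * h)

x0, t0, y0 = 0.7, 0.3, -0.2  # arbitrary test point with 0 < t0 < 1

# Backward heat equation: d/dt g + (1/2) * Laplacian_y g = 0.
residual = dt(g, x0, t0, y0) + 0.5 * dyy(g, x0, t0, y0)

# Gradient identity from the proof: d/dy g = ((x - y)/(1 - t)) * g.
grad_numeric = (g(x0, t0, y0 + 1e-6) - g(x0, t0, y0 - 1e-6)) / 2e-6
grad_formula = (x0 - y0) / (1.0 - t0) * g(x0, t0, y0)
```

Both residuals vanish up to finite-difference error, consistent with the computation above.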

Proof of Lemma 21

We begin with formula (63). By equation (88) we have

$$\begin{aligned} \frac{q(s)}{s} = \frac{e^{-\Psi (s)^2 / 2}}{\int _{- \Psi (s)}^\infty e^{-x^2/2}dx }. \end{aligned}$$

Denote \(y = - \Psi (s)\). Since, by (85), \(q'(s)\) is a decreasing function, we may assume that \(s < \frac{1}{2}\) and thus \(y > 0\). The elementary inequality \(\left( y + \frac{1}{y+1} \right) ^2 \le y^2 + 3\), valid for \(y > 0\), implies that

$$\begin{aligned} \int _{y}^\infty e^{-x^2 / 2} dx \ge \int _{y}^{y + 1/(y+1)} e^{-(y + 1/(y+1))^2 / 2} dx \ge e^{-3} \frac{1}{y+1} e^{-y^2/2} \end{aligned}$$

so

$$\begin{aligned} \frac{q(s)}{s} = \frac{e^{-y^2 / 2}}{\int _{y}^\infty e^{-x^2/2}dx } \le e^{3} (y + 1) = e^{3} (1 - \Psi (s)) \end{aligned}$$

for all \(s < 1/2\). But a well-known fact about the Gaussian distribution is that for \(s < 1/2\)

$$\begin{aligned} - \Psi (s) \le C \sqrt{|\log s|} \end{aligned}$$

for some universal constant \(C>0\). Formula (63) follows.

The upper bound of formula (62) now follows immediately from the symmetry of the function \(q(s)\) around \(s=1/2\), and we are left with proving the lower bound. Consider the function

$$\begin{aligned} h(s) = 4 s(1-s) q(1/2) = \frac{4}{\sqrt{2 \pi }} s(1-s). \end{aligned}$$

We know that \(h(s) = q(s)\) for \(s \in \{0,1/2,1\}\). Moreover, \(h(s)\) is tangent to \(q(s)\) at \(s = 1/2\), and lastly, according to formula (86), \(q'(s)\) is a convex function on \(s \in [0,1/2]\). Consequently, the convex function \(g(s) = q'(s) - h'(s)\) intersects the x-axis exactly once in the interval \((0,1/2)\), say at the point \(s_0\) (since it is equal to zero at \(s=1/2\) and since its integral over that interval is equal to zero). Now, we have

$$\begin{aligned} q''(1/2) = - \sqrt{2 \pi } > - \frac{8}{\sqrt{2 \pi }} = h''(1/2), \end{aligned}$$

which implies that \(g'(1/2) > 0\). We conclude that \(g(s)(s-s_0) < 0\) for \(0 < s < 1/2\). By the fact that \(q(0) = h(0)\) and \(q(1/2) = h(1/2)\) we know that

$$\begin{aligned} \int _0^{1/2} g(s) ds = 0 \end{aligned}$$

and therefore

$$\begin{aligned} q(s) - h(s) = - \int _{s}^{1/2} g(x) dx \ge 0, ~~ \forall 0 < s < 1/2 \end{aligned}$$

so \(q(s) \ge h(s)\) in \(0 < s < 1/2\). Since both functions are symmetric around \(s=1/2\), we have established that

$$\begin{aligned} q(s) \ge h(s) = \frac{4}{\sqrt{2 \pi }} s(1-s) \end{aligned}$$

and the lower bound is proven. \(\square \)
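Both bounds of Lemma 21 can be checked numerically. The following sketch is my own verification aid (not from the paper); it assumes, consistently with equation (88) quoted above, that \(\Psi \) denotes the inverse of the standard Gaussian CDF and that \(q(s) = \frac{1}{\sqrt{2\pi }} e^{-\Psi (s)^2/2}\) is the Gaussian isoperimetric profile:

```python
import math

def Phi(x):
    """Standard Gaussian CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def Psi(s):
    """Inverse of Phi on (0, 1), computed by bisection."""
    lo, hi = -10.0, 10.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if Phi(mid) < s:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def q(s):
    """Gaussian isoperimetric profile q(s) = phi(Psi(s)) (assumed notation)."""
    y = Psi(s)
    return math.exp(-0.5 * y * y) / math.sqrt(2.0 * math.pi)

grid = [i / 200.0 for i in range(1, 200)]

# Lower bound of the lemma: q(s) >= (4 / sqrt(2*pi)) * s * (1 - s),
# with equality at s in {0, 1/2, 1}.
lower_ok = all(q(s) >= 4.0 / math.sqrt(2.0 * math.pi) * s * (1.0 - s) - 1e-9
               for s in grid)

# Upper bound established above: q(s)/s <= e^3 * (1 - Psi(s)) for s < 1/2.
upper_ok = all(q(s) / s <= math.exp(3.0) * (1.0 - Psi(s)) + 1e-9
               for s in grid if s < 0.5)
```

Both checks pass on the grid, with equality of the lower bound at \(s = 1/2\).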

Proof of Fact 26

The upper bound follows immediately from the fact that, according to formula (86), one has \(q''(s) \le q''(1/2) = -\sqrt{2 \pi } < - 2\) for all \(0 < s < 1\). Let us prove the lower bound. By the symmetry of \(q(s)\) around \(s=1/2\), we may assume without loss of generality that \(h < 1/2\). Define

$$\begin{aligned} f(s) = q(s) - q(h) - q'(h) (s-h) \end{aligned}$$

and

$$\begin{aligned} g(s) = h^{-2} f(0) (s-h)^2. \end{aligned}$$

Note that, by definition, \(f(0) = g(0)\), \(f(h) = g(h) = 0\) and \(f'(h) = g'(h) = 0\). Now, according to formula (86), the function \(q'(s)\) is convex on \([0,1/2]\) (here we use the assumption that \(h < 1/2\)). Therefore, the function \(w(s) = f'(s) - g'(s)\) is also convex on this interval. Now, we know that \(w(h) = 0\) and that \(\int _0^h w(s) ds = 0\), so from the convexity of \(w(s)\) we conclude that there exists \(s_0 \in (0,h)\) such that

$$\begin{aligned} w(h) = 0 \text{ and } w(s) (s-s_0) \le 0, ~~ \forall 0 < s < h \end{aligned}$$
(132)

and therefore

$$\begin{aligned} \int _s^h w(x) dx \le 0, ~~ \forall 0<s<h. \end{aligned}$$

It follows that \(g(s) \le f(s)\) for all \(0 < s < h\). Moreover, since \(w(s)\) is convex up to \(s=1/2\), necessarily we have \(w(s) \ge 0\) for \(h<s<1/2\) and it follows that

$$\begin{aligned} g(s) \le f(s), ~~ \forall 0 \le s \le 1/2. \end{aligned}$$

Next, we show that \(g(s) \le f(s)\) also for \(1/2 < s < 1\), or in other words we will show that

$$\begin{aligned} p(s) \le q(s), ~~ \forall 0 < s < 1 \end{aligned}$$

where

$$\begin{aligned} p(s) = g(s) + q(h) + q'(h) (s-h). \end{aligned}$$

Indeed, since \(w(s)\) is convex up to \(s=1/2\), by (132) we know that \(w(1/2) > 0\), which means that \(p'(1/2) < q'(1/2) = 0\), and therefore the parabola \(p(s)\) attains its maximum at some point \(b \le 1/2\), which means that \(p(1-s) \le p(s)\) for all \(s < 1/2\). So by the symmetry of \(q(s)\) around \(s=1/2\) we get

$$\begin{aligned} q(1-s) = q(s) \ge p(s) \ge p(1-s) , ~~ \forall 0 < s < 1/2. \end{aligned}$$

We finally have \(f(s) \ge g(s)\) for all \(0 \le s \le 1\). In order to prove the lower bound, it therefore suffices to show that

$$\begin{aligned} - \frac{4}{h^2(1-h)^2} (s-h)^2 \le g(s)&= h^{-2} f(0) (s-h)^2\\&= h^{-2} (s-h)^2 (-q(h) + h q'(h)) \end{aligned}$$

or in other words, using the assumption \(h<1/2\), it suffices to show that

$$\begin{aligned} q(h) - h q'(h) \le \frac{4}{(1-h)^2}, \end{aligned}$$

and since \(\frac{4}{(1-h)^2} \ge 4 > 1\), it is enough to show that \(q(h) - h q'(h) \le 1\). A combination of (85) with the fact that \(q(h) \le q(1/2) < 1\) finishes the proof. \(\square \)
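Fact 26's two-sided bound on the Taylor remainder of \(q\) around \(h\) can also be checked numerically. The sketch below is my own verification aid (not from the paper); it assumes the notational conventions \(q'(s) = -\Psi (s)\) and \(q''(s) = -1/q(s)\) implicit in formulas (85)-(86), with \(\Psi \) the inverse Gaussian CDF:

```python
import math

def Phi(x):
    """Standard Gaussian CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def Psi(s):
    """Inverse of Phi on (0, 1), computed by bisection."""
    lo, hi = -10.0, 10.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if Phi(mid) < s:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def q(s):
    """Gaussian isoperimetric profile q(s) = phi(Psi(s)) (assumed notation)."""
    y = Psi(s)
    return math.exp(-0.5 * y * y) / math.sqrt(2.0 * math.pi)

def f(s, h):
    """Taylor remainder q(s) - q(h) - q'(h)(s - h), using q'(h) = -Psi(h)."""
    return q(s) - q(h) + Psi(h) * (s - h)

# Check -4/(h^2 (1-h)^2) * (s-h)^2 <= f(s, h) <= -(s-h)^2 on a grid.
grid = [i / 100.0 for i in range(1, 100)]
fact26_ok = all(
    -4.0 / (h * h * (1.0 - h) ** 2) * (s - h) ** 2 - 1e-9
    <= f(s, h)
    <= -((s - h) ** 2) + 1e-9
    for h in (0.1, 0.25, 0.5)
    for s in grid
)
```

The upper bound here is the one derived from \(q''(s) \le -2\); both inequalities hold on the grid, with equality at \(s = h\).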

About this article

Cite this article

Eldan, R. A two-sided estimate for the Gaussian noise stability deficit. Invent. math. 201, 561–624 (2015). https://doi.org/10.1007/s00222-014-0556-6
