## Abstract

The Gaussian noise-stability of a set \(A \subset {\mathbb R}^n\) is defined by

\[ {\mathcal {S}}_\rho (A) := {\mathbb P}\left( X \in A, \ Y \in A \right), \]
where \(X,Y\) are standard jointly Gaussian vectors satisfying \({\mathbb E}[X_i Y_j] = \delta _{ij} \rho \). Borell’s inequality states that for all \(0 < \rho < 1\), among all sets \(A \subset {\mathbb R}^n\) with a given Gaussian measure, the quantity \({\mathcal {S}}_\rho (A)\) is maximized when \(A\) is a half-space. We give a novel short proof of this fact, based on stochastic calculus. Moreover, we prove an almost tight, two-sided, dimension-free robustness estimate for this inequality: by introducing a new metric to measure the distance between the set \(A\) and its corresponding half-space \(H\) (namely the distance between the two centroids), we show that the deficit \({\mathcal {S}}_\rho (H) - {\mathcal {S}}_\rho (A)\) can be controlled from both below and above by essentially the same function of the distance, up to logarithmic factors. As a consequence, we also establish the conjectured exponent in the robustness estimate proven by Mossel-Neeman, which uses the total-variation distance as a metric. In the limit \(\rho \rightarrow 1\), we obtain an improved dimension-free robustness bound for the Gaussian isoperimetric inequality. Our estimates are also valid for a generalized version of stability where more than two correlated vectors are considered.
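As a quick numerical illustration of Borell's statement (a sketch that is not part of the paper and uses only the definition above), one can estimate \({\mathcal {S}}_\rho \) by Monte Carlo in \({\mathbb R}^2\) for two sets of Gaussian measure \(1/2\): the half-space \(\{x_1 \le 0\}\) and the centered ball of the same measure. The half-space value has the classical closed form \(1/4 + \arcsin (\rho )/2\pi \) (Sheppard's formula), and by Borell's inequality the ball's stability cannot exceed it.

```python
import math
import random

random.seed(0)
rho = 0.5          # correlation parameter
n = 200_000        # Monte Carlo samples
# Radius of the centered ball in R^2 with Gaussian measure 1/2:
# P(|X|^2 <= r^2) = 1 - exp(-r^2/2) = 1/2  =>  r^2 = 2 ln 2.
r2 = 2 * math.log(2)
c = math.sqrt(1 - rho * rho)

hits_half = 0
hits_ball = 0
for _ in range(n):
    # X standard Gaussian in R^2; Y = rho*X + sqrt(1-rho^2)*Z gives E[X_i Y_j] = rho * delta_ij
    x1, x2 = random.gauss(0, 1), random.gauss(0, 1)
    y1 = rho * x1 + c * random.gauss(0, 1)
    y2 = rho * x2 + c * random.gauss(0, 1)
    hits_half += (x1 <= 0) and (y1 <= 0)
    hits_ball += (x1 * x1 + x2 * x2 <= r2) and (y1 * y1 + y2 * y2 <= r2)

stab_half = hits_half / n   # estimates S_rho of the half-space {x_1 <= 0}
stab_ball = hits_ball / n   # estimates S_rho of the centered ball of measure 1/2
exact_half = 0.25 + math.asin(rho) / (2 * math.pi)   # Sheppard's formula
```

With these parameters the half-space estimate agrees with Sheppard's formula to within Monte Carlo error, and the ball's estimate does not exceed it (again up to Monte Carlo error), consistent with Borell's inequality.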


## References

Bakry, D., Ledoux, M.: Lévy-Gromov isoperimetric inequality for an infinite dimensional diffusion generator. Invent. Math. **123**, 259–281 (1995)

Barthe, F., Maurey, B.: Some remarks on isoperimetry of Gaussian type. Ann. Inst. H. Poincaré Probab. Statist. **36**(4), 419–434 (2000)

Bobkov, S.G.: An isoperimetric inequality on the discrete cube, and an elementary proof of the isoperimetric inequality in Gauss space. Ann. Probab. **25**(1), 206–214 (1997)

Bobkov, S.G., Gozlan, N., Roberto, C., Samson, P.-M.: Bounds on the deficit in the logarithmic Sobolev inequality. Preprint (2013)

Borell, C.: The Brunn-Minkowski inequality in Gauss space. Invent. Math. **30**(2), 207–216 (1975)

Borell, C.: Geometric bounds on the Ornstein-Uhlenbeck velocity process. Z. Wahrsch. Verw. Gebiete **70**(1), 1–13 (1985)

Carlen, E.A., Kerce, C.: On the cases of equality in Bobkov's inequality and Gaussian rearrangement. Calc. Var. Part. Diff. Equ. **13**(1), 1–18 (2001)

Cianchi, A., Fusco, N., Maggi, F., Pratelli, A.: On the isoperimetric deficit in Gauss space. Am. J. Math. **133**(1), 131–186 (2011). doi:10.1353/ajm.2011.0005

Ehrhard, A.: Symétrisation dans l'espace de Gauss (French). Math. Scand. **53**(2), 281–301 (1983)

Eldan, R.: Thin shell implies spectral gap up to polylog via a stochastic localization scheme. Geom. Funct. Anal. **23**(2), 532–569 (2013)

Eldan, R.: Skorokhod embeddings via stochastic flows on the space of measures. arXiv:1303.3315

Eldan, R., Lehec, J.: Bounding the norm of a log-concave vector via thin-shell estimates. arXiv:1306.3696

Furstenberg, H.: Recurrence in Ergodic Theory and Combinatorial Number Theory. Princeton Univ. Press, Princeton, NJ (1981)

Isaksson, M., Mossel, E.: Maximally stable Gaussian partitions with discrete applications. Israel J. Math. **189**, 347–396 (2012)

Kindler, G., O'Donnell, R.: Gaussian noise sensitivity and Fourier tails. In: IEEE Conference on Computational Complexity, pp. 137–147 (2012)

Ledoux, M.: A short proof of the Gaussian isoperimetric inequality. In: High Dimensional Probability (Oberwolfach, 1996), Progr. Probab. 43, pp. 229–232. Birkhäuser, Basel (1998)

Ledoux, M.: Isoperimetry and Gaussian analysis. In: Lectures on Probability Theory and Statistics, pp. 165–294 (1996)

Mossel, E., Neeman, J.: Robust optimality of Gaussian noise stability. arXiv:1210.4126 (2012)

Neeman, J.: Isoperimetry and noise sensitivity in Gaussian space. Ph.D. Thesis, University of California, Berkeley (2013)

Sudakov, V.N., Cirelson, B.S.: Extremal properties of half-spaces for spherically invariant measures. J. Math. Sci. **9**(1), 9–18 (1978)

Talagrand, M.: Transportation cost for Gaussian and other product measures. Geom. Funct. Anal. **6**, 587–600 (1996)

## Acknowledgments

I am grateful to Elchanan Mossel for inspiring me to work on this problem and for several fruitful discussions in which, in particular, he suggested that the method should work for \(q\)-stability and for the isoperimetric problem. I am deeply thankful to Bo’az Klartag for a very useful discussion in which he gave me the idea of using Talagrand’s theorem in the proof of Lemma 24. Finally, I thank Gil Kalai for introducing me to this topic and Yuval Peres, Joe Neeman, Joseph Lehec and James Lee for useful comments on a preliminary version of this note.


## Appendix


In this appendix we provide proofs of a few technical lemmas that were omitted from the note.

### Proof of Lemma 8

Define

A simple calculation gives

and therefore

Moreover,

We can therefore calculate, using Itô’s formula,

which proves that \(F_t(x)\) is a local martingale and establishes equation (14).

Next, let \(\phi : {\mathbb R}^n \rightarrow {\mathbb R}\) satisfy \(|\phi (x)| < C_1 + C_2 |x|^p\) for some constants \(C_1,C_2,p>0\). Noting that the integral \(\int _{{\mathbb R}^n} \phi (x) \exp (-\alpha |x-x_0| ^2)\, dx\) is absolutely convergent for all \(\alpha > 0\) and \(x_0 \in {\mathbb R}^n\), we deduce that for all \(0<t<1\) and all \(y \in {\mathbb R}^n\), we have

and

Formula (15) follows. Finally, the fact that the process \(t \mapsto \int _{{\mathbb R}^n} \phi (x) F_t(x)\, dx\) is a martingale follows immediately from the fact that

\(\square \)

### Proof of Lemma 21

We begin with formula (63). By equation (88) we have

Denote \(y = - \Psi (s)\). Since, by (85), \(q'(s)\) is a decreasing function, we may assume that \(s < \frac{1}{2}\) and thus \(y > 0\). The inequality \(\left( y + \frac{1}{y+1} \right) ^2 \le y^2 + 3\) gives

so

for all \(s < 1/2\). But a well-known fact about the Gaussian distribution is that for \(s < 1/2\)

for some universal constant \(C>0\). Formula (63) follows.
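The elementary inequality \(\left( y + \frac{1}{y+1} \right)^2 \le y^2 + 3\) used above holds because \(\frac{2y}{y+1} \le 2\) and \(\frac{1}{(y+1)^2} \le 1\) for \(y > 0\); a small grid check (an illustration only, not part of the proof) confirms that the slack in fact never drops below \(1\):

```python
# Grid check of (y + 1/(y+1))^2 <= y^2 + 3 for y > 0.
# The slack, 3 - 2y/(y+1) - 1/(y+1)^2, decreases from 2 (at y = 0)
# towards 1 (as y -> infinity), so the inequality holds with room to spare.
ys = [k / 100 for k in range(1, 100_001)]            # y in (0, 1000]
gaps = [y * y + 3 - (y + 1 / (y + 1)) ** 2 for y in ys]
min_gap = min(gaps)
```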

The upper bound of formula (62) now follows immediately from the symmetry of the function \(q(s)\) around \(s=1/2\), and we are left with proving the lower bound. Consider the function

We know that \(h(s) = q(s)\) for \(s \in \{0,1/2,1\}\). Moreover, \(h(s)\) is tangent to \(q(s)\) at \(s = 1/2\), and lastly, according to formula (86), \(q'(s)\) is a convex function of \(s \in [0,1/2]\). Consequently, the convex function \(g(s) = q'(s) - h'(s)\) intersects the \(x\)-axis exactly once in the interval \((0,1/2)\), say at the point \(s_0\) (since it equals zero at \(s=1/2\) and since its integral over that interval equals zero). Now, we have

which implies that \(g'(1/2) > 0\). We conclude that \(g(s)(s-s_0) < 0\) for \(0 < s < 1/2\). By the fact that \(q(0) = h(0)\) and \(q(1/2) = h(1/2)\) we know that

and therefore

so \(q(s) \ge h(s)\) in \(0 < s < 1/2\). Since both functions are symmetric around \(s=1/2\), we have established that

and the lower bound is proven. \(\square \)

### Proof of Fact 26

The upper bound follows immediately from the fact that, according to formula (86), one has \(q''(s) < q''(1/2) < - 2\) for all \(0 < s < 1\). Let us prove the lower bound. By the symmetry of \(q(s)\) around \(s=1/2\), we may assume without loss of generality that \(h < 1/2\). Define

and

Note that, by definition, \(f(0) = g(0)\), \(f(h) = g(h) = 0\) and \(f'(h) = g'(h) = 0\). Now, according to formula (86), the function \(q'(s)\) is convex in \([0,1/2]\) (here, we use the assumption that \(h < 1/2\)). Therefore, the function \(w(s) = f'(s) - g'(s)\) is also convex in this interval. Now, we know that \(w(h) = 0\) and that \(\int _0^h w(s)\, ds = 0\), so from the convexity of \(w(s)\) we conclude that there exists \(s_0 \in (0,h)\) such that

and therefore

It follows that \(g(s) < f(s)\) for all \(0 < s < h\). Moreover, since \(w(s)\) is convex up to \(s=1/2\), necessarily we have \(w(s) > 0\) for \(h<s<1/2\) and it follows that

Next, we show that \(g(s) \le f(s)\) also for \(1/2 < s < 1\); in other words, we will show that

where

Indeed, since \(w(s)\) is convex up to \(s=1/2\) and by (132), we know that \(w(1/2) > 0\), which means that \(p'(1/2) < q'(1/2) = 0\). Therefore the parabola \(p(s)\) attains its maximum at some point \(b \le 1/2\), so that \(p(1-s) \le p(s)\) for all \(s < 1/2\). So by the symmetry of \(q(s)\) around \(s=1/2\) we get

We finally have \(f(s) \ge g(s)\) for all \(0 \le s \le 1\). In order to prove the lower bound, it therefore suffices to show that

or in other words, using the assumption \(h<1/2\),

A combination of (85) with the fact that \(q(h) \le q(1/2) < 1\) finishes the proof. \(\square \)
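Both of the preceding proofs rely on the same crossing argument: a convex function that vanishes at \(h\) and has zero integral over \([0,h]\) changes sign exactly once in \((0,h)\), which forces its antiderivative to be nonnegative on \([0,h]\). A toy quadratic (a hypothetical example, not the actual \(w(s)\) of the proofs) illustrates this:

```python
# Toy convex w with w(h) = 0 and integral of w over [0, h] equal to 0, for h = 0.4:
# w(s) = s^2 - (8/15) s + 4/75, with roots s0 = 2/15 and s = 2/5 = h.
h = 0.4

def w(s):
    return s * s - (8 / 15) * s + 4 / 75

# w should change sign exactly once in the open interval (0, h)...
N = 100_000
grid = [h * k / N for k in range(1, N)]
sign_changes = sum(1 for a, b in zip(grid, grid[1:]) if w(a) * w(b) < 0)

# ...hence the antiderivative W(s) = s^3/3 - (4/15) s^2 + (4/75) s satisfies
# W(0) = W(h) = 0 and W >= 0 on [0, h] (the step giving f >= g in the proofs).
def W(s):
    return s ** 3 / 3 - (4 / 15) * s * s + (4 / 75) * s

min_W = min(W(s) for s in grid)
```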


## About this article

### Cite this article

Eldan, R. A two-sided estimate for the Gaussian noise stability deficit.
*Invent. math.* **201**, 561–624 (2015). https://doi.org/10.1007/s00222-014-0556-6
