1 Introduction

The convex hull \([X_1,\ldots ,X_n]\) of n independent standard Gaussian samples \(X_1,\ldots ,X_n\) from \({\mathbb {R}}^d\) is the Gaussian polytope \(P^{(d)}_n\). For fixed dimension d, the face numbers and intrinsic volumes of \(P_n^{(d)}\) as n tends to infinity are well understood by now. For \(i=0,\ldots ,d\) and a polytope Q, let \(f_i(Q)\) denote the number of i-faces of Q and let \(V_i(Q)\) denote the ith intrinsic volume of Q. The asymptotic behavior of the expected value of the number of facets \(f_{d-1}(P^{(d)}_n)\) as \(n \rightarrow \infty \) was provided by Rényi and Sulanke [22] if \(d=2\), and by Raynaud [21] if \(d\ge 3\). Namely, they proved that, for any fixed d,

$$\begin{aligned} {\mathbb {E}}f_{d-1}(P^{(d)}_n) = 2^d \pi ^{\frac{d-1}{2}} d^{- \frac{1}{2}} ( \ln n)^{ \frac{d-1}{2} } (1+o(1)) \end{aligned}$$
(1)

as \(n \rightarrow \infty \). For \(i=0,\ldots ,d\), the expected value of \(V_i(P^{(d)}_n)\) as \(n \rightarrow \infty \) was computed by Affentranger [1], and that of \(f_i(P^{(d)}_n)\) was determined by Affentranger and Schneider [2] and Baryshnikov and Vitale [3]; see Hug et al. [15] and Fleury [12] for a different approach. More recently, Kabluchko and Zaporozhets [18, 19] proved explicit expressions for the expected value of \(V_d(P^{(d)}_n)\) and the number of k-faces \(f_k(P^{(d)}_n)\). Yet these formulas are complicated, and it is not immediate how to deduce from them asymptotic results for large n and high dimensions d.

After various partial results, including the variance estimates of Calka and Yukich [6] and Hug and Reitzner [16], central limit theorems were proved for \(f_i(P^{(d)}_n)\) and \(V_d(P^{(d)}_n)\) by Bárány and Vu [5], and for \(V_i(P^{(d)}_n)\) by Bárány and Thäle [4]. These results have been strengthened considerably by Grote and Thäle [14]. The interesting question whether \({\mathbb {E}}f_{d-1}(P^{(d)}_n)\) is an increasing function of n was answered in the affirmative by Kabluchko and Thäle [17]. It would be interesting to investigate the monotonicity behavior of the facet number if n and d increase simultaneously.

The “high-dimensional” regime, that is, when d is allowed to grow with n, is of interest in numerous applications in statistics, signal processing, and information theory. The combinatorial structure of \(P^{(d)}_n\), when d tends to infinity and n grows proportionally with d, was first investigated by Vershik and Sporyshev [23], and later Donoho and Tanner [11] provided a satisfactory description. For any \(t>1\), Donoho and Tanner [11] determined the optimal \(\varrho (t)\in (0,1)\) such that if n/d tends to t, then \(P^{(d)}_n\) is essentially \(\varrho (t)d\)-neighbourly (if \(0<\eta <\varrho (t)\) and \(0\le k\le \eta d\), then \(f_k(P^{(d)}_n)\) is asymptotically \({n \atopwithdelims ()k+1}\)). See Donoho [10], Candès et al. [7], Candès and Tao [8, 9], Mendoza-Smith et al. [20].

In this note, we consider \(f_{d-1}(P^{(d)}_n)\), the number of facets, when both d and n tend to infinity. Our main result is the following estimate for the expected number of facets of the Gaussian polytope. The implied constant in \(O(\cdot )\) is always some absolute constant. We write \({{\,\mathrm{{lln}}\,}}x\) for \(\ln (\ln x)\).

Theorem 1.1

Assume \(P^{(d)}_n\) is a Gaussian polytope. Then for \(d \ge 78\) and \(n \ge e^e d\), we have

$$\begin{aligned} {\mathbb {E}}f_{d-1}(P^{(d)}_n) = 2^d \pi ^{\frac{d-1}{2}} d^{- \frac{1}{2}} e^{ \frac{d-1}{2} {{\,\mathrm{{lln}}\,}}\frac{n}{d} - \frac{d-1}{4} \frac{{{\,\mathrm{{lln}}\,}}\frac{n}{d} }{\ln \frac{n}{d} } +(d-1) \frac{\theta }{\ln \frac{n}{d}} \ +O\left( \sqrt{d} e^{ - \frac{1}{10} d}\right) } \end{aligned}$$

with \(\theta =\theta (n,d) \in [-34,2]\).

When n/d tends to infinity as \(d \rightarrow \infty \), Theorem 1.1 provides the asymptotic formula

$$\begin{aligned} {\mathbb {E}}f_{d-1}(P^{(d)}_n) = \left( (4\pi +o(1)) \ln \frac{n}{d}\right) ^{\frac{d-1}{2}}~. \end{aligned}$$

If \(n/(d e^d)\rightarrow \infty \), then we have \(\frac{d}{\ln \frac{n}{d}} \rightarrow 0 \) and hence

$$\begin{aligned} {\mathbb {E}}f_{d-1}(P^{(d)}_n) = 2^d \pi ^{\frac{d-1}{2}} d^{- \frac{1}{2}} e^{ \frac{d-1}{2} {{\,\mathrm{{lln}}\,}}\frac{n}{d} - \frac{d-1}{4} \frac{{{\,\mathrm{{lln}}\,}}\frac{n}{d} }{\ln \frac{n}{d} } +o(1)} \end{aligned}$$

as \(d \rightarrow \infty \). In the case when n grows even faster such that \((\ln n)/(d\ln d) \rightarrow \infty \), the asymptotic formula simplifies to the result (1) of Rényi and Sulanke [22] and Raynaud [21] for fixed dimension.

Corollary 1.2

Assume \(P^{(d)}_n\) is a Gaussian polytope. If \((\ln n)/(d\ln d) \rightarrow \infty \), we have

$$\begin{aligned} {\mathbb {E}}f_{d-1}(P^{(d)}_n) = 2^d \pi ^{\frac{d-1}{2}} d^{- \frac{1}{2}} ( \ln n)^{ \frac{d-1}{2} } (1+o(1))~. \end{aligned}$$

There is a (simpler) counterpart of our main results describing the asymptotic behavior of the expected number of facets of \(P_n^{(d)}\) if \(n-d\) is small compared to d, that is, if n/d tends to one.

Theorem 1.3

Assume \(P^{(d)}_n\) is a Gaussian polytope. Then for \(n - d = o( d)\), we have

$$\begin{aligned} {\mathbb {E}}f_{d-1}(P^{(d)}_n) = {n \atopwithdelims ()d} 2^{-(n-d)+1} e^{\frac{1}{\pi }\frac{(n-d)^2}{d} + O\left( \frac{(n-d)^3}{d^2}\right) +o(1) } \end{aligned}$$

as \(d \rightarrow \infty \).

This complements a result of Affentranger and Schneider [2] giving the expected number of k-dimensional faces for \(k \le n-d\) and \(n-d\) fixed,

$$\begin{aligned} {\mathbb {E}}f_{k}(P^{(d)}_n) = {n \atopwithdelims ()k+1} (1 +o(1))~, \end{aligned}$$

as \(d \rightarrow \infty \).

In the next section we sketch the basic idea of our approach, leaving the technical details to later sections. In Sect. 3 we provide asymptotic approximations for the tail of the normal distribution. In Sect. 4 concentration inequalities are derived for the \(\beta \)-distribution. Finally, in Sects. 5 and 6, Theorems 1.1 and 1.3 are proven.

2 Outline of the Argument

For \(y\in {\mathbb {R}}\), let

$$\begin{aligned} \Phi (y)=\frac{1}{\sqrt{\pi }}\int \limits _{-\infty }^y e^{- s^2}\,ds, \ \text{ and } {\phi }(y)=\Phi '(y)=\frac{1}{\sqrt{\pi }}e^{- y^2}~. \end{aligned}$$

Our proof is based on the approach of Hug, Munsonius, and Reitzner [15]. In particular, [15, Theorem 3.2] states that if \(n\ge d+1\) and \(X_1,\ldots ,X_n\) are independent standard Gaussian points in \({\mathbb {R}}^d\), then

$$\begin{aligned} {\mathbb {E}}f_{d-1}([X_1,\ldots ,X_n]) = {n \atopwithdelims ()d} {\mathbb {P}}(Y \notin [Y_1, \dots , Y_{n-d}])~, \end{aligned}$$

where \(Y,Y_{1},\dots ,Y_{n-d}\) are independent real-valued random variables with \(Y {\mathop {=}\limits ^{d}} N\left( 0,\frac{1}{2d}\right) \) and \(Y_{i} {\mathop {=}\limits ^{d}} N\left( 0,\frac{1}{2}\right) \) for \(i=1,\ldots ,n-d\). This gives

$$\begin{aligned} {\mathbb {E}}f_{d-1}([X_1,\ldots ,X_n])= & {} 2 {n \atopwithdelims ()d} \frac{\sqrt{d}}{\sqrt{\pi }} \int \limits _{- \infty }^\infty \Phi (y)^{n-d} e^{-dy^2}\,dy\end{aligned}$$
(2)
$$\begin{aligned}= & {} 2 {n \atopwithdelims ()d} \sqrt{d} \, \pi ^{\frac{d-1}{2}} \int \limits _{- \infty }^\infty \Phi (y)^{n-d} {\phi }(y)^d \,dy~. \end{aligned}$$
(3)
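
As an aside, formula (2) can be checked numerically for small parameters. The following Python sketch is ours and not part of the proofs; it uses the fact that the paper's \(\Phi \) satisfies \(\Phi (y)=(1+{\text {erf}}(y))/2\), and it compares a Simpson-rule evaluation of (2) with a Monte Carlo estimate of \({n \atopwithdelims ()d}\, {\mathbb {P}}(Y \notin [Y_1, \dots , Y_{n-d}])\), where on the real line the hull \([Y_1,\dots ,Y_{n-d}]\) is the interval from the minimum to the maximum.

```python
import math
import random

def Phi(y):
    # The paper's Phi: (1/sqrt(pi)) * integral_{-inf}^{y} exp(-s^2) ds
    return 0.5 * (1.0 + math.erf(y))

def facets_by_quadrature(n, d, lo=-8.0, hi=8.0, m=4000):
    """Simpson's rule for (2): 2*C(n,d)*sqrt(d/pi) * int Phi(y)^(n-d) e^(-d y^2) dy."""
    h = (hi - lo) / m
    s = 0.0
    for k in range(m + 1):
        y = lo + k * h
        w = 1 if k in (0, m) else (4 if k % 2 else 2)
        s += w * Phi(y) ** (n - d) * math.exp(-d * y * y)
    return 2 * math.comb(n, d) * math.sqrt(d / math.pi) * s * h / 3.0

def facets_by_monte_carlo(n, d, trials=200_000, seed=1):
    """C(n,d) * P(Y outside the interval spanned by Y_1, ..., Y_{n-d})."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        y = rng.gauss(0.0, math.sqrt(1.0 / (2 * d)))                 # Y ~ N(0, 1/(2d))
        ys = [rng.gauss(0.0, math.sqrt(0.5)) for _ in range(n - d)]  # Y_i ~ N(0, 1/2)
        if y < min(ys) or y > max(ys):
            hits += 1
    return math.comb(n, d) * hits / trials

q = facets_by_quadrature(8, 3)
mc = facets_by_monte_carlo(8, 3)
```

For \(d=3\), \(n=8\) the two estimates agree up to Monte Carlo noise.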

Note that similar integrals appear in the analysis of the expected number of k-faces for values of k in the entire range \(k=0,\ldots ,d-1\). In our case, the analysis boils down to understanding the integral of \(\Phi (y)^{n-d} {\phi }(y)^d\) over the real line. By substituting \( (1-u) = \Phi (y)\), we obtain

$$\begin{aligned} \int \limits _{- \infty }^\infty \Phi (y)^{n-d} {\phi }(y)^d \,dy = \int \limits _{0}^1 (1-u)^{n-d} {\phi }(\Phi ^{-1}(1-u))^{d-1}\,du~. \end{aligned}$$

Clearly, \(n \ge d+2\) is the nontrivial range. When \(n/d \rightarrow \infty \), the factor \((1-u)^{n-d}\) dominates, and we need to investigate the asymptotic behavior of \({\phi }(\Phi ^{-1}(1-u))\) as \(u \rightarrow 0\). We show that the essential term is precisely 2u. Hence, it makes sense to rewrite the integral as

$$\begin{aligned} 2^{d-1} \int \limits _{0}^1 (1-u)^{n-d} u^{d-1} \underbrace{\left( (2u)^{-1} {\phi }(\Phi ^{-1}(1-u))\right) ^{d-1}} _{=: g_d(u)} \,du~. \end{aligned}$$

For \(x,y>0\), the Beta-function is given by \({\varvec{B}}(x,y)= \int _0^1 (1-u)^{x-1} u^{y-1} du \). It is well known that for \(k,l \in {\mathbb {N}}\) we have \({\varvec{B}}(k,l)= \frac{(k-1)!(l-1)!}{(k+l-1)!}\). A random variable U is \({\varvec{B}}(x,y)\)-distributed if its density is given by \({\varvec{B}}(x,y)^{-1} (1-u)^{x-1} u^{y-1}\). With this, we have established the following identity:

Proposition 2.1

$$\begin{aligned} {\mathbb {E}}f_{d-1}([X_1,\ldots ,X_n])= & {} 2^d \pi ^{\frac{d-1}{2}} d^{- \frac{1}{2}} {\mathbb {E}}g_d(U) \end{aligned}$$
(4)

where

$$\begin{aligned} g_d(u) = \left( (2u)^{-1} {\phi }(\Phi ^{-1}(1-u))\right) ^{d-1} \end{aligned}$$

and U is a \({\varvec{B}}(n-d+1,d)\) random variable.
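
Proposition 2.1 can likewise be verified numerically for small parameters. In the following sketch (ours, not from the paper) both sides are evaluated for \(d=3\), \(n=8\): the left side by Simpson quadrature of (2), the right side by computing \({\mathbb {E}}g_d(U)\) with the midpoint rule against the Beta density, where \(\Phi ^{-1}\) is obtained by bisection from \({\text {erfc}}\).

```python
import math

def inv_phi_tail(u, lo=-9.0, hi=9.0):
    """z with Phi(z) = 1 - u in the paper's convention, i.e. erfc(z) = 2u."""
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if math.erfc(mid) > 2 * u:   # erfc is decreasing, so the root lies right of mid
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def g(u, d):
    z = inv_phi_tail(u)
    small_phi = math.exp(-z * z) / math.sqrt(math.pi)
    return (small_phi / (2 * u)) ** (d - 1)

n, d = 8, 3

# Left-hand side: formula (2) by Simpson's rule.
m, lo, hi = 2400, -6.0, 6.0
h = (hi - lo) / m
s = 0.0
for k in range(m + 1):
    y = lo + k * h
    w = 1 if k in (0, m) else (4 if k % 2 else 2)
    s += w * (0.5 * (1 + math.erf(y))) ** (n - d) * math.exp(-d * y * y)
lhs = 2 * math.comb(n, d) * math.sqrt(d / math.pi) * s * h / 3.0

# Right-hand side: 2^d pi^((d-1)/2) d^(-1/2) E g_d(U) with U ~ B(n-d+1, d),
# whose density is (1-u)^(n-d) u^(d-1) / B(n-d+1, d); midpoint rule in u.
log_beta = math.lgamma(n - d + 1) + math.lgamma(d) - math.lgamma(n + 1)
mm = 4000
eg = 0.0
for k in range(mm):
    u = (k + 0.5) / mm
    dens = math.exp((n - d) * math.log(1 - u) + (d - 1) * math.log(u) - log_beta)
    eg += g(u, d) * dens / mm
rhs = 2 ** d * math.pi ** ((d - 1) / 2) / math.sqrt(d) * eg
```

Both sides should agree up to quadrature error.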

In Lemma 3.3 below we show that

$$\begin{aligned} g_d(u) = (\ln u^{-1})^{ \frac{d-1}{2} } e^{- \frac{d-1}{4} \frac{{{\,\mathrm{{lln}}\,}}u^{-1} }{ \ln u^{-1}} - (d-1) \frac{O(1)}{\ln u^{-1}} } \end{aligned}$$

as \(u \rightarrow 0\). Because the Beta distribution is concentrated around \(\frac{d}{n}\), see Lemma 4.1 and Lemma 4.2, this yields

$$\begin{aligned} {\mathbb {E}}g_d(U) \approx \left( \ln \frac{n}{d}\right) ^{\frac{d-1}{2}} e^{ - \frac{d-1}{4} \frac{{{\,\mathrm{{lln}}\,}}\frac{n}{d} }{ \ln \frac{n}{d}} - (d-1) \frac{O(1)}{ \ln \frac{n}{d}} } \end{aligned}$$

which implies our main result.

3 Asymptotics of the \(\Phi \)-Function

To estimate \(\Phi (z)\), we need a version of Gordon’s inequality [13] for the Mills ratio:

Lemma 3.1

For any \(z>1\) there exists \(\theta \in (0,1)\) such that

$$\begin{aligned} \Phi (z)= 1-\frac{e^{-z^2}}{2\sqrt{\pi }z} \left( 1 - \frac{\theta }{2z^2}\right) \end{aligned}$$

Proof

It follows by partial integration that

$$\begin{aligned} \int \limits _z^{\infty }e^{-t^2}\,dt= \int \limits _z^{\infty }2t e^{-t^2}\, \frac{1}{2t} \,dt= \frac{e^{-z^2}}{2z} - \int \limits _z^{\infty } \frac{e^{-t^2}}{2t^2} \,dt= \frac{e^{-z^2}}{2z} - \frac{\theta e^{-z^2}}{4z^3} \end{aligned}$$

which yields the lemma. \(\square \)

Lemma 3.2

For any \(u \in (0,e^{-1}]\) there is a \({\delta }={\delta }(u) \in (0,16)\) such that

$$\begin{aligned} \Phi ^{-1}(1-u) = \sqrt{ \ln u^{-1} - \frac{1}{2} {{\,\mathrm{{lln}}\,}}u^{-1} - \ln (2\sqrt{\pi }) + \frac{1}{4} \frac{{{\,\mathrm{{lln}}\,}}u^{-1} }{ \ln u^{-1}} + \frac{\delta }{\ln u^{-1}} }. \end{aligned}$$
(5)

Proof

It is useful to prove (5) for the transformed variable \(u=e^{-t}\). We define

$$\begin{aligned} z (t) = \sqrt{t - \frac{1}{2} \ln t - \ln (2\sqrt{\pi }) + \frac{1}{4} \frac{\ln t}{t} +\frac{\delta (t)}{t} } \end{aligned}$$
(6)

which exists for \(t > 0\). In a first step we prove that this is the asymptotic expansion of \(z=\Phi ^{-1}(1-e^{-t})\) as \(z,t \rightarrow \infty \) with a suitable function \({\delta }={\delta }(t)=O(1)\). In a second step we show the bound on \(\delta \). Observe that \(z \ge 1\) implies \(t \ge -\ln \Phi (-1)=2.54\dots \). By Lemma 3.1, for \( z \ge 1\)

$$\begin{aligned} e^{-t}= 1- \Phi (z) = \frac{1}{2\sqrt{\pi }\, z} e^{- z^2} \left( 1 - \frac{\theta (z)}{2 z^2}\right) \end{aligned}$$
(7)

as \(z \rightarrow \infty \) with some \(\theta (z) \in (0,1)\), which immediately implies that \( z=z(t) \rightarrow \infty \) as \( t \rightarrow \infty \). Equation (7) shows that \(e^t \ge 2 \sqrt{\pi }z e^{z^2}\) and thus

$$\begin{aligned} t \ge \ln (2 \sqrt{\pi }) + \ln z(t) + z(t)^2 \ge z(t)^2 \end{aligned}$$

for \(z \ge 1\). The function \(z=z(t)\) is the inverse function we are looking for, if it satisfies

$$\begin{aligned} 4 \pi z(t)^2 e^{-2t} = e^{- 2z(t)^2} \left( 1 - \frac{\theta (z)}{2 z^2}\right) ^2. \end{aligned}$$
(8)

We plug (6) into this equation. This leads to

$$\begin{aligned} t - \frac{1}{2} \ln t - \ln (2\sqrt{\pi }) + \frac{1}{4} \frac{\ln t }{t} + \frac{\delta (t)}{ t}&= t e^{ - \frac{1}{2} \frac{\ln t}{t} - 2\frac{\delta (t)}{t} } \left( 1 - O(t^{-1})\right) \\&= t - \frac{1}{2} \ln t - 2 \delta (t) - O(1) \end{aligned}$$

and shows \( - \ln (2\sqrt{\pi }) + o(1)=- 2 \delta (t) - O(1) \). Thus the function z(t) given by (6) in fact satisfies (8), and therefore it is the asymptotic expansion of the inverse function.

The desired estimate for \({\delta }\) follows from some more elaborate but elementary calculations. First we prove that \({\delta }\ge 0\). By (8) and because \(e^x \ge 1+x\),

$$\begin{aligned} t - \frac{1}{2} \ln t - \ln (2\sqrt{\pi }) + \frac{1}{4} \frac{\ln t }{t} + \frac{\delta (t)}{ t}&\ge t \left( 1 - \frac{1}{2} \frac{\ln t}{t} - 2\frac{\delta (t)}{t} \right) \left( 1 - \frac{\theta }{2t} \right) ^2\\&\ge (t - \frac{1}{2} \ln t - 2 \delta (t) ) \left( 1 - \frac{\theta }{t} \right) \end{aligned}$$

which is equivalent to

$$\begin{aligned} \delta (t)&\ge \frac{ \ln (2\sqrt{\pi }) - \theta - \frac{1-2\theta \ln t }{4t} }{\left( 2 + \frac{1-2\theta }{t}\right) } >0 \end{aligned}$$

for \(t \ge 1\). On the other hand, again by (8),

$$\begin{aligned} t&\ge \left( t - \frac{1}{2} \ln t - \ln (2\sqrt{\pi }) + \frac{1}{4} \frac{\ln t }{t} + \frac{\delta (t)}{ t} \right) e^{\frac{1}{2} \frac{\ln t}{t} + 2\frac{\delta (t)}{t} } \end{aligned}$$

and using \(e^x \ge 1+x\) implies

$$\begin{aligned} \delta (t) \le \frac{ \ln (2\sqrt{\pi }) + \frac{2\ln (2\sqrt{\pi }) -1}{4} \frac{\ln t}{t} + \frac{1}{4} \frac{(\ln t)^2}{t} + \frac{1}{8} \frac{(\ln t )^2}{t^2} }{2 - (2\ln (2\sqrt{\pi })-1) \frac{1}{t} - \frac{\ln t }{t} } \le 16. \end{aligned}$$

\(\square \)
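
The bound \({\delta }\in (0,16)\) can be probed numerically. In the sketch below (ours; the bisection-based inverse of \(\Phi \) and the sample values of t are our choices) \(\delta (t)\) is recovered from (6) for several t and checked against the lemma.

```python
import math

def inv_phi_tail(u, lo=-9.0, hi=9.0):
    """z with Phi(z) = 1 - u in the paper's convention, i.e. erfc(z) = 2u."""
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if math.erfc(mid) > 2 * u:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def delta(t):
    """Solve (6) for delta(t), given z(t) = Phi^{-1}(1 - e^{-t})."""
    z = inv_phi_tail(math.exp(-t))
    main = (t - 0.5 * math.log(t) - math.log(2 * math.sqrt(math.pi))
            + 0.25 * math.log(t) / t)
    return (z * z - main) * t

deltas = [delta(t) for t in (1, 2, 5, 10, 30)]
```

For these values \(\delta (t)\) stays well inside the interval \((0,16)\).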

An asymptotic expansion for \({\phi }(\Phi ^{-1} (1-u))\) follows immediately:

Lemma 3.3

For any \(u \in (0,e^{-1}]\) there is a \({\delta }={\delta }(u) \in (0,16)\) such that

$$\begin{aligned} g_d(u)= \left( (2u)^{-1}{\phi }( \Phi ^{-1}(1-u) ) \right) ^{d-1}= (\ln u^{-1})^{ \frac{d-1}{2} } e^{- \frac{d-1}{4} \frac{{{\,\mathrm{{lln}}\,}}u^{-1} }{ \ln u^{-1}} - (d-1)\frac{\delta }{\ln u^{-1}} }~. \end{aligned}$$

4 Concentration of the \(\beta \)-Distribution

A basic integral for us is the Beta-integral

$$\begin{aligned} {\varvec{B}}({\alpha },{\beta }) = \int \limits _{0}^{1}(1- x)^{{\alpha }-1} x^{{\beta }-1}\,dx =\frac{({\alpha }-1)!({\beta }-1)!}{({\alpha }+{\beta }-1)!}. \end{aligned}$$
(9)

Let \(U \sim {\varvec{B}}({\alpha }, {\beta })\). Then \({\mathbb {E}}U=\frac{{\beta }}{{\alpha }+{\beta }}\) and \(\textrm{var}(U)= \frac{{\alpha }{\beta }}{({\alpha }+{\beta })^2 ({\alpha }+{\beta }+1)}\). Next we establish concentration inequalities for a Beta-distributed random variable around its mean. Observe that if \(U \sim \textbf{B}({\alpha },{\beta })\), then \(1-U \sim \textbf{B}({\beta }, {\alpha })\). Hence we may concentrate on the case \({\alpha }\ge {\beta }\).
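
These moment formulas are easy to confirm by simulation; the following sketch (ours) does so, noting that the paper's \({\varvec{B}}({\alpha },{\beta })\), with density proportional to \((1-u)^{{\alpha }-1}u^{{\beta }-1}\), is the standard Beta distribution with its two parameters swapped.

```python
import random

rng = random.Random(0)
alpha, beta = 10, 4

# random.betavariate(p, q) has density proportional to x^(p-1) (1-x)^(q-1),
# so the paper's B(alpha, beta) corresponds to betavariate(beta, alpha).
samples = [rng.betavariate(beta, alpha) for _ in range(100_000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)

expected_mean = beta / (alpha + beta)
expected_var = alpha * beta / ((alpha + beta) ** 2 * (alpha + beta + 1))
```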

Lemma 4.1

Let \(U \sim {\varvec{B}}(a+1, b+1)\) with \(a \ge b\) and set \(n=a+b\). Then

$$\begin{aligned} {\mathbb {P}}\left( U \le \frac{b}{n} - s \frac{a^{\frac{1}{2}} b^{\frac{1}{2}}}{n^{\frac{3}{2}}}\right) \le \frac{3e^3}{ \pi } \frac{1}{s} \left( e^{- \frac{1}{6} s^2 } - e^{- \frac{1}{6} \frac{nb}{a} } \right) _+. \end{aligned}$$

Proof

We have to estimate the integral

$$\begin{aligned} \frac{1}{{\varvec{B}}(a+1, b+1)} \int \limits _{0}^{\frac{b-s \sqrt{\frac{ab}{n}}}{n}} (1- x)^a x^b \,dx \end{aligned}$$

For an estimate from above we substitute \(x= \frac{b}{n} - \frac{y}{n} \sqrt{\frac{ab}{n}}\), obtaining

$$\begin{aligned} J_-= & {} \int \limits _{0}^{\frac{b-s \sqrt{\frac{ab}{n}}}{n}}(1- x)^a x^b\,dx\\= & {} \frac{ a^{a+ \frac{1}{2}} b^{b+ \frac{1}{2}} }{n^{n+\frac{3}{2}}} \ \int \limits _{s }^{\sqrt{\frac{nb}{a}}} \left( 1+ y \sqrt{\frac{b}{an}}\right) ^a \left( 1 - y \sqrt{\frac{a}{bn}}\right) ^b\,dy \end{aligned}$$

It is well known that

$$\begin{aligned} \ln (1+x) = \sum _{k=1}^\infty (-1)^{k-1} \frac{x^k}{k} \le x - \frac{x^2}{6}, \end{aligned}$$
(10)

for \(x \in (-1,1]\). Since \(a \ge b\), we have

$$\begin{aligned} \left( 1+ y \sqrt{\frac{b}{an}}\right) ^a \left( 1 - y \sqrt{\frac{a}{bn}}\right) ^b \le e^{- \frac{1}{6} y^2 }~, \end{aligned}$$

which implies

$$\begin{aligned} J_-\le & {} \frac{ a^{a+ \frac{1}{2}} b^{b+ \frac{1}{2}} }{n^{n+\frac{3}{2}}} \ \int \limits _{s }^{\sqrt{\frac{nb}{a}}} e^{- \frac{1}{6} y^2 } \,dy\\\le & {} \frac{3 a^{a+ \frac{1}{2}} b^{b+ \frac{1}{2}} }{n^{n+\frac{3}{2}}} \ \frac{1}{s} \left( e^{- \frac{1}{6} s^2 } - e^{- \frac{1}{6} \frac{nb}{a} } \right) ~. \end{aligned}$$

In the last step we use Stirling’s formula,

$$\begin{aligned} \sqrt{2\pi }\, n^{n+\frac{1}{2}} e^{-n} \le n! \le e \, n^{n+\frac{1}{2}} e^{-n}, \end{aligned}$$

to see that

$$\begin{aligned} \frac{ a^{a+ \frac{1}{2}} b^{b+ \frac{1}{2}} }{n^{n+\frac{3}{2}}} \ \le \frac{e^3}{\pi } {\varvec{B}}(a+1,b+1). \end{aligned}$$
(11)

\(\square \)
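
As an illustration of Lemma 4.1 (the parameter values below are our own), the right-hand side can be compared with an empirical lower-tail probability; for \(a=60\), \(b=30\), \(s=3.5\) the bound is nontrivial (about 0.71), while the simulated tail is far smaller.

```python
import math
import random

a, b, s = 60, 30, 3.5
n = a + b
threshold = b / n - s * math.sqrt(a * b) / n ** 1.5
bound = (3 * math.e ** 3 / math.pi) / s * max(
    math.exp(-s * s / 6) - math.exp(-n * b / (6 * a)), 0.0)

# The paper's B(a+1, b+1) has density proportional to (1-x)^a x^b,
# i.e. it is the standard Beta(b+1, a+1).
rng = random.Random(2)
trials = 200_000
hits = sum(1 for _ in range(trials) if rng.betavariate(b + 1, a + 1) <= threshold)
empirical = hits / trials
```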

Lemma 4.2

Let \(U \sim {\varvec{B}}(a+1, b+1)\) with \(a \ge b\) and set \(n=a+b\). Then for \({\lambda }\ge 2\),

$$\begin{aligned} {\mathbb {P}}\left( U \ge {\lambda }\frac{b}{n}\right) \le \frac{e^3}{\pi } {\lambda }^b b^{\frac{1}{2} } e^{b + \frac{3}{2}} e^{- {\lambda }\frac{ab}{n}}. \end{aligned}$$

Proof

We assume that \(a \ge b\) and thus \(a \ge \frac{n}{2}\). We have to estimate the probability

$$\begin{aligned} {\mathbb {P}}\left( U \ge {\lambda }\frac{b}{n}\right)\le & {} \frac{1}{{\varvec{B}}(a+1, b+1)} \int \limits _{{\lambda }\frac{b}{n}}^{1}(1- x)^a x^b \,dx \end{aligned}$$

We substitute \(x \rightarrow \frac{1}{a} x + {\lambda }\frac{b}{n}\) and obtain

$$\begin{aligned} \int \limits _{{\lambda }\frac{b}{n}}^{1}(1- x)^a x^b \,dx\le & {} \int \limits _{0}^{\infty }e^{-x - {\lambda }\frac{ab}{n}} \left( \frac{1}{a} x + {\lambda }\frac{b}{n}\right) ^b \ \frac{1}{a} \,dx\\\le & {} a^{-(b+1)} e^{- {\lambda }\frac{ab}{n}} \int \limits _{0}^{\infty }e^{-x} \left( x + {\lambda }\frac{ab}{n}\right) ^b \,dx. \end{aligned}$$

The use of the binomial formula and the Gamma function yields

$$\begin{aligned} \int \limits _{0}^{\infty }e^{-x }\left( x + {\lambda }\frac{ab}{n}\right) ^b \,dx= & {} \sum _{k=0}^b {b \atopwithdelims ()k} \int \limits _{0}^{\infty }e^{-x } x^{b-k} \left( {\lambda }\frac{ab}{n}\right) ^k \,dx\\= & {} \sum _{k=0}^b {b \atopwithdelims ()k} (b-k)! \left( {\lambda }\frac{ab}{n}\right) ^k\\\le & {} b \left( {\lambda }\frac{ab}{n}\right) ^b \end{aligned}$$

because \( b \le {\lambda }\frac{ab}{n}\) for \(a \ge \frac{n}{2} \ge b\) and \({\lambda }\ge 2\), and \(\frac{1}{k!} \left( {\lambda }\frac{ab}{n}\right) ^k \) is increasing in k for \( k\le {\lambda }\frac{ab}{n}\). Using (11) this gives

$$\begin{aligned} {\mathbb {P}}\left( U \ge {\lambda }\frac{b}{n}\right)\le & {} \frac{e^3}{\pi } \left( 1+ \frac{b}{a} \right) ^{a+ \frac{3}{2}} b^{\frac{1}{2} } {\lambda }^b e^{- {\lambda }\frac{ab}{n}} \end{aligned}$$

and with \((1+x) \le e^x\) the lemma follows. \(\square \)
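
A matching numerical illustration for Lemma 4.2 (again with our own parameter choice): for \(a=960\), \(b=40\), \({\lambda }=2\) the bound is about 0.02, while the exact tail probability, computed by quadrature of the Beta density in log space, is several orders of magnitude smaller.

```python
import math

a, b, lam = 960, 40, 2.0
n = a + b

# log of the paper's B(a+1, b+1) = Gamma(a+1)Gamma(b+1)/Gamma(a+b+2)
log_beta = math.lgamma(a + 1) + math.lgamma(b + 1) - math.lgamma(n + 2)

def density(x):
    """Density of the paper's B(a+1, b+1): (1-x)^a x^b / B(a+1, b+1)."""
    return math.exp(a * math.log(1 - x) + b * math.log(x) - log_beta)

# Simpson's rule for P(U >= lam*b/n); the integrand is negligible beyond 0.4.
lo, hi, m = lam * b / n, 0.4, 4000
h = (hi - lo) / m
tail = sum((1 if k in (0, m) else (4 if k % 2 else 2)) * density(lo + k * h)
           for k in range(m + 1)) * h / 3.0

bound = (math.e ** 3 / math.pi) * lam ** b * math.sqrt(b) \
    * math.exp(b + 1.5) * math.exp(-lam * a * b / n)
```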

5 The Case \(n-d \) Large

In this section we combine Lemma 3.3, which gives the asymptotic behavior of \( g_d(u) \) as \(u \rightarrow 0\), with the concentration properties of the Beta distribution just obtained. We split our proof into two lemmata.

Lemma 5.1

For \(d \ge d_0=78\) and \(n \ge e^e d\) we have

$$\begin{aligned} {\mathbb {E}}g_d(U) \le e^{\frac{d-1}{2} {{\,\mathrm{{lln}}\,}}(\frac{n}{d} ) - \frac{d-1}{4} \frac{{{\,\mathrm{{lln}}\,}}(\frac{n}{d}) }{ \ln (\frac{n}{d})} + (d-1) \frac{2}{ \ln (\frac{n}{d})}} e^{\frac{e^6}{\pi } \sqrt{d} e^{-\frac{1}{10} d} }. \end{aligned}$$

Lemma 5.2

For \(d \ge d_0=78\) and \(n \ge e^e d\) we have

$$\begin{aligned} {\mathbb {E}}g_d(U) \ge e^{ \frac{d-1}{2} {{\,\mathrm{{lln}}\,}}(\frac{n}{d}) - \frac{d-1}{4} \frac{{{\,\mathrm{{lln}}\,}}\frac{n}{d} }{\ln \frac{n}{d} } -(d-1) \frac{34}{\ln \frac{n}{d}} } \ e^{- \frac{2e^6}{\pi } \sqrt{d} e^{ - \frac{1}{10} d}}. \end{aligned}$$

These two bounds prove Theorem 1.1. The idea is to split the expectation into the main term close to \(\frac{d}{n}\) and two error terms,

$$\begin{aligned} {\mathbb {E}}g_d(U)= & {} {\mathbb {E}}g_d(U) \, {\mathbbm {1}}\left( U \le e^{-2} \frac{d}{n} \right) \\{} & {} +\,{\mathbb {E}}g_d(U) \, {\mathbbm {1}} \left( U \in \left[ e^{-2} \frac{d}{n},2 \frac{d}{n}\right] \right) \\{} & {} +\,{\mathbb {E}}g_d(U) \, {\mathbbm {1}} \left( U \ge 2 \frac{d}{n} \right) ~. \end{aligned}$$

Proof of Lemma 5.2

Recall that U is \({\varvec{B}}(n-d+1,d)\)-distributed. Lemma 4.2 with \(a=n -d \) and \(b= d-1 \) shows that

$$\begin{aligned} {\mathbb {P}}\left( U \ge {\lambda }\frac{d}{n}\right) \le {\mathbb {P}}\left( U \ge {\lambda }\frac{d-1}{n-1}\right) \le \frac{e^3}{\pi } {\lambda }^{d-1} (d-1)^{\frac{1}{2} } e^{(d-1) + \frac{3}{2}} e^{- {\lambda }\frac{(n-d)(d-1)}{n-1}} \end{aligned}$$

because \( \frac{d-1}{n-1}< \frac{d}{n}\). For \({\lambda }=2\) this gives

$$\begin{aligned} {\mathbb {P}}\left( U \ge 2 \frac{d}{n}\right) \le \frac{e^6}{2\pi } \sqrt{d} e^{ ( \ln 2- 1 + 2 \frac{d}{n}) d} \le \frac{e^6}{2\pi } \sqrt{d} e^{ - \frac{1}{10} d} \end{aligned}$$
(12)

for \(n \ge 10 d\). The probability that U is small is estimated by Lemma 4.1 with \(s= (1- e^{-2})\sqrt{\frac{(d-1)(n-1)}{n-d}}\),

$$\begin{aligned} {\mathbb {P}}\left( U \le e^{-2} \frac{d-1}{n-1} \right)\le & {} \frac{3e^3}{ \pi } (1- e^{-2})^{-1} \sqrt{\frac{n-d}{(d-1)(n-1)}} e^{- \frac{1}{6} (1- e^{-2})^2 \frac{(d-1)(n-1)}{n-d} }\\\le & {} \frac{e^6}{ 2 \pi } e^{- \frac{1}{10}d } \end{aligned}$$

for \(d \ge 6\). Combining both estimates and using

$$\begin{aligned} \ln (1-x) \ge - 2x \end{aligned}$$
(13)

for \(x \in [0, \frac{1}{2}]\), we have

$$\begin{aligned} {\mathbb {P}}\left( U \in \left[ e^{-2} \frac{d}{n},2 \frac{d}{n} \right] \right) \ge 1 - \frac{e^6}{2\pi } \sqrt{d} e^{ - \frac{1}{10} d} - \frac{e^6}{ 2 \pi } e^{- \frac{1}{10}d } \ge e^{ - \frac{2e^6}{\pi } \sqrt{d} e^{ - \frac{1}{10} d}} \end{aligned}$$
(14)

for \(d \ge d_0=78\). (Observe that \( \frac{e^6}{2\pi } (\sqrt{d_0}+1)\, e^{ - \frac{1}{10} d_0} \le \frac{1}{2} \), so that (13) applies.) In the last step we compute

$$\begin{aligned} \min _{u \in \left[ e^{-2} \frac{d}{n},2 \frac{d}{n}\right] } g_d(u)= & {} \min _{u \in \left[ e^{-2} \frac{d}{n},2 \frac{d}{n}\right] } e^{\frac{d-1}{2} {{\,\mathrm{{lln}}\,}}u^{-1} - \frac{d-1}{4} \frac{\ln \ln u^{-1} }{ \ln u^{-1}} - (d-1) \frac{ \delta }{ \ln u^{-1}} }\\\ge & {} e^{\frac{d-1}{2} {{\,\mathrm{{lln}}\,}}\left( \frac{1}{2} \frac{n}{d}\right) - \frac{d-1}{4} \frac{{{\,\mathrm{{lln}}\,}}\left( \frac{1}{2} \frac{n}{d}\right) }{ \ln \left( \frac{1}{2} \frac{n}{d}\right) } - (d-1) \frac{\max \delta }{ \ln \left( \frac{1}{2} \frac{n}{d}\right) } } \end{aligned}$$

for \(n \ge e^e d\). Here, note that \(\frac{{{\,\mathrm{{lln}}\,}}x}{\ln x}\) is decreasing for \(x \ge e^e\). Now using

$$\begin{aligned} {{\,\mathrm{{lln}}\,}}\left( \frac{n}{d} \right) \ge {{\,\mathrm{{lln}}\,}}\left( \frac{1}{2} \frac{n}{d} \right) = {{\,\mathrm{{lln}}\,}}\left( \frac{n}{d} \right) +\ln \left( 1- \frac{\ln 2 }{ \ln \left( \frac{n}{d}\right) } \right) \ge {{\,\mathrm{{lln}}\,}}\left( \frac{n}{d} \right) - \frac{2\ln 2 }{ \ln \left( \frac{n}{d}\right) }, \end{aligned}$$

and

$$\begin{aligned} \frac{1}{\ln \left( \frac{1}{2} \frac{n}{d}\right) } = \frac{1}{\ln \left( \frac{n}{d}\right) - \ln 2 } \le \frac{1}{\ln \left( \frac{n}{d}\right) } \left( 1 + 2 \frac{\ln 2}{\ln \left( \frac{n}{d}\right) }\right) \le 2 \frac{1}{\ln \left( \frac{n}{d}\right) } \end{aligned}$$

for \(n \ge e^{e} d \), we have

$$\begin{aligned} \min _{u \in \left[ e^{-2} \frac{d}{n},2 \frac{d}{n}\right] } g_d(u)\ge & {} e^{ \frac{d-1}{2} {{\,\mathrm{{lln}}\,}}\frac{n}{d} - \frac{d-1}{4} \frac{{{\,\mathrm{{lln}}\,}}\frac{n}{d} }{\ln \frac{n}{d}} -(d-1) \frac{\delta '}{\ln \frac{n}{d}}} \end{aligned}$$

with \(\delta '=\frac{3 \ln 2}{2}+2 \max \delta \in [0, 34]\). Combining this estimate with (14) we obtain

$$\begin{aligned} {\mathbb {E}}g_d(U)\ge & {} \min _{u \in \left[ e^{-2} \frac{d}{n},2 \frac{d}{n}\right] } g_d(u)\ {\mathbb {E}}{\mathbbm {1}}\left( U \in \left[ e^{-2} \frac{d}{n},2 \frac{d}{n}\right] \right) \\\ge & {} e^{ \frac{d-1}{2} {{\,\mathrm{{lln}}\,}}\frac{n}{d} - \frac{d-1}{4} \frac{{{\,\mathrm{{lln}}\,}}\frac{n}{d} }{\ln \frac{n}{d} } -(d-1) \frac{\delta '}{\ln \frac{n}{d}} } \ e^{- \frac{2e^6}{\pi } \sqrt{d} e^{ - \frac{1}{10} d}} \end{aligned}$$

for \(d \ge d_0\) and \(n \ge e^e d\). \(\square \)

Proof of Lemma 5.1

As an upper bound we have

$$\begin{aligned} {\mathbb {E}}g_d(U)\le & {} {\mathbb {E}}g_d(U){\mathbbm {1}}\left( U \le e^{-2} \frac{d}{n} \right) \\{} & {} + \max _{u \in [e^{-2} \frac{d}{n},2 \frac{d}{n}]} g_d(u) \ {\mathbb {P}}\left( U \in \left[ e^{-2} \frac{d}{n},2 \frac{d}{n} \right] \right) \\{} & {} + \underbrace{ \max _{u \in \left[ 2 \frac{d}{n},1\right] } g_d(u)}_{\le \max _{u \in \left[ \frac{d}{n},1\right] } g_d(u)} {\mathbb {P}}\left( U \ge 2 \frac{d}{n} \right) \\\le & {} {\mathbb {E}}g_d(U){\mathbbm {1}}\left( U \le e^{-2} \frac{d}{n} \right) \\{} & {} + e^{ \frac{d-1}{2} {{\,\mathrm{{lln}}\,}}\left( e^2 \frac{n}{d}\right) - \frac{d-1}{4} \frac{{{\,\mathrm{{lln}}\,}}\left( e^2 \frac{n}{d}\right) }{ \ln \left( e^2 \frac{n}{d}\right) } }\\{} & {} + e^{ \frac{d-1}{2} {{\,\mathrm{{lln}}\,}}\left( \frac{n}{d}\right) - \frac{d-1}{4} \frac{{{\,\mathrm{{lln}}\,}}\left( \frac{n}{d}\right) }{ \ln \left( \frac{n}{d}\right) } } \frac{e^6}{2\pi } \sqrt{d} e^{ - \frac{1}{10} d} \end{aligned}$$

since \(\delta \ge 0\), and where the last term follows from (12). For the first term we use that \({\phi }(\Phi ^{-1}(\cdot ))\) is a symmetric and concave function, and thus increasing on \([0, e^{-2}\frac{d}{ n}]\), and that \({\delta }\ge 0\):

$$\begin{aligned}{} & {} {\mathbb {E}}g_d(U){\mathbbm {1}} \left( U \le e^{-2}\frac{d}{n} \right) \\{} & {} \quad \le \frac{1}{{\varvec{B}}(n-d+1, d)} \int \limits _0^{e^{-2}\frac{d}{n}} e^{\frac{d-1}{2} {{\,\mathrm{{lln}}\,}}x^{-1} - \frac{d-1}{4} \frac{{{\,\mathrm{{lln}}\,}}x^{-1} }{ \ln x^{-1}} } (1-x)^{n-d} x^{d-1} dx\\{} & {} \quad \le \frac{1}{{\varvec{B}}(n-d+1, d)} e^{ \frac{d-1}{2} {{\,\mathrm{{lln}}\,}}\left( e^2 \frac{n}{d}\right) - \frac{d-1}{4} \frac{{{\,\mathrm{{lln}}\,}}\left( e^2 \frac{n}{d}\right) }{ \ln \left( e^2 \frac{n}{d}\right) } } \left( e^{-2} \frac{d}{n}\right) ^{d-1} \int \limits _0^{\infty } e^{-(n-d)x} dx \end{aligned}$$

Now the remaining integration is trivial. We use Stirling’s formula (11) to estimate the Beta-function and obtain

$$\begin{aligned}{} & {} {\mathbb {E}}g_d(U){\mathbbm {1}} \left( U \le e^{-2}\frac{d}{n} \right) \\{} & {} \quad \le \frac{e^3}{\pi } \frac{(n-1)^{n +\frac{1}{2} }}{(n-d)^{n-d+\frac{3}{2}} (d-1)^{d- \frac{1}{2}}} e^{ \frac{d-1}{2} {{\,\mathrm{{lln}}\,}}\left( e^2 \frac{n}{d}\right) - \frac{d-1}{4} \frac{{{\,\mathrm{{lln}}\,}}\left( e^2 \frac{n}{d}\right) }{\ln \left( e^2 \frac{n}{d}\right) } } \left( e^{-2} \frac{d}{n}\right) ^{d-1}\\{} & {} \quad \le e^{ \frac{d-1}{2} {{\,\mathrm{{lln}}\,}}\left( e^2 \frac{n}{d}\right) - \frac{d-1}{4} \frac{{{\,\mathrm{{lln}}\,}}\left( e^2 \frac{n}{d}\right) }{ \ln \left( e^2 \frac{n}{d}\right) } } \frac{e^5}{\pi } e^{(d-1) + \frac{(d-1)}{(n-d) }\left( \frac{3}{2}\right) + 1+ \frac{1}{(d-1)}\frac{1}{2} -2d}\\{} & {} \quad \le e^{ \frac{d-1}{2} {{\,\mathrm{{lln}}\,}}\left( e^2 \frac{n}{d}\right) - \frac{d-1}{4} \frac{{{\,\mathrm{{lln}}\,}}\left( e^2 \frac{n}{d}\right) }{ \ln \left( e^2 \frac{n}{d}\right) } } \frac{e^5}{\pi } e^{-\frac{1}{10} d} \end{aligned}$$

for \(n \ge e^e d\) and \(d \ge 78\). Combining our results gives

$$\begin{aligned} {\mathbb {E}}g_d(U)\le & {} e^{ \frac{d-1}{2} {{\,\mathrm{{lln}}\,}}\left( e^2 \frac{n}{d}\right) - \frac{d-1}{4} \frac{{{\,\mathrm{{lln}}\,}}\left( e^2 \frac{n}{d}\right) }{ \ln \left( e^2 \frac{n}{d}\right) } } \frac{e^5}{\pi } e^{-\frac{1}{10} d}\\{} & {} + e^{ \frac{d-1}{2} {{\,\mathrm{{lln}}\,}}\left( e^2 \frac{n}{d}\right) - \frac{d-1}{4} \frac{{{\,\mathrm{{lln}}\,}}\left( e^2 \frac{n}{d}\right) }{ \ln \left( e^2 \frac{n}{d}\right) } }\\{} & {} + e^{ \frac{d-1}{2} {{\,\mathrm{{lln}}\,}}\left( \frac{n}{d}\right) - \frac{d-1}{4} \frac{{{\,\mathrm{{lln}}\,}}\left( \frac{n}{d}\right) }{ \ln \left( \frac{n}{d}\right) } } \frac{e^6}{2\pi } \sqrt{d} e^{ - \frac{1}{10} d} \end{aligned}$$

In a similar way as above, we get rid of the constant \(e^2\) by using

$$\begin{aligned} {{\,\mathrm{{lln}}\,}}\left( \frac{n}{d} \right) \le {{\,\mathrm{{lln}}\,}}\left( e^2 \frac{n}{d} \right) = {{\,\mathrm{{lln}}\,}}\left( \frac{n}{d} \right) +\ln \left( 1 + \frac{ 2 }{ \ln (\frac{n}{d})} \right) \le {{\,\mathrm{{lln}}\,}}\left( \frac{n}{d} \right) + \frac{2}{ \ln (\frac{n}{d})}, \end{aligned}$$

and

$$\begin{aligned} \frac{1}{\ln \left( e^2 \frac{n}{d}\right) } = \frac{1}{\ln (\frac{n}{d})} \left( 1+ \frac{2}{\ln (\frac{n}{d})} \right) ^{-1} \ge \frac{1}{\ln (\frac{n}{d} ) } \left( 1 - \frac{2}{\ln (\frac{n}{d}) } \right) . \end{aligned}$$

This yields

$$\begin{aligned} {\mathbb {E}}g_d(U) \le e^{ \frac{d-1}{2} {{\,\mathrm{{lln}}\,}}(\frac{n}{d} ) - \frac{d-1}{4} \frac{{{\,\mathrm{{lln}}\,}}(\frac{n}{d}) }{ \ln (\frac{n}{d})} + (d-1) \frac{\frac{3}{2}}{ \ln (\frac{n}{d})} } \left( 1+ \frac{e^6}{\pi } \sqrt{d} e^{-\frac{1}{10} d} \right) \end{aligned}$$
(15)

\(\square \)
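
Numerically, the two lemmata can be checked together against an exact evaluation of (2) in log space. The sketch below (ours; the grid and integration window are our choices) computes \(\ln {\mathbb {E}}f_{d-1}(P^{(d)}_n)\) for \(d=78\), \(n=10^6\) and compares it with the main term of Theorem 1.1; by the theorem, the difference must lie in the window \((d-1)\theta /\ln \frac{n}{d}\) with \(\theta \in [-34,2]\), up to the exponentially small error term.

```python
import math

def log_expected_facets(n, d, lo=1.8, hi=4.2, m=4800):
    """log of (2): log( 2*C(n,d)*sqrt(d/pi) * int Phi(y)^(n-d) e^(-d y^2) dy )."""
    h = (hi - lo) / m
    logs, weights = [], []
    for k in range(m + 1):
        y = lo + k * h
        log_phi = math.log1p(-0.5 * math.erfc(y))     # log Phi(y)
        logs.append((n - d) * log_phi - d * y * y)
        weights.append(1 if k in (0, m) else (4 if k % 2 else 2))
    peak = max(logs)                                   # log-sum-exp for stability
    total = sum(w * math.exp(l - peak) for w, l in zip(weights, logs))
    log_integral = peak + math.log(total * h / 3.0)
    log_binom = math.lgamma(n + 1) - math.lgamma(d + 1) - math.lgamma(n - d + 1)
    return math.log(2.0) + log_binom + 0.5 * math.log(d / math.pi) + log_integral

n, d = 10 ** 6, 78
L = math.log(n / d)
log_main = (d * math.log(2.0) + (d - 1) / 2 * math.log(math.pi)
            - 0.5 * math.log(d) + (d - 1) / 2 * math.log(L)
            - (d - 1) / 4 * math.log(L) / L)
diff = log_expected_facets(n, d) - log_main   # should equal (d-1)*theta/L, theta in [-34, 2]
```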

6 The Case \(n-d\) Small

Finally, it remains to prove Theorem 1.3. The starting point here is again formula (2), together with the substitution \(y \rightarrow \frac{y}{\sqrt{d}}\):

$$\begin{aligned} {\mathbb {E}}f_{d-1}([X_1,\ldots ,X_n])= & {} 2 {n \atopwithdelims ()d} \frac{\sqrt{d}}{\sqrt{\pi }} \int \limits _{- \infty }^\infty \Phi (y)^{n-d} e^{-dy^2}\,dy\nonumber \\= & {} 2 {n \atopwithdelims ()d} \frac{1}{\sqrt{\pi }} \int \limits _{- \infty }^\infty \Phi \left( \frac{y}{\sqrt{d}}\right) ^{n-d} e^{-y^2}\,dy \end{aligned}$$
(16)

The Taylor expansion of \( \Phi \) at \(y=0\) is given by

$$\begin{aligned} \Phi (y)= & {} \frac{1}{2} + \frac{1}{\sqrt{\pi }} y + \frac{1}{\sqrt{\pi }} (-\theta _1) e^{-\theta _1^2} \, y^2 = \frac{1}{2} + \frac{1}{\sqrt{\pi }} y (1-\theta _2 y) \end{aligned}$$

with some \( \theta _1, \theta _2 \in {\mathbb {R}}\) depending on y. Since \(\Phi (y)\) lies below its tangent at 0 for \(y>0\) and above it for \(y<0\), we have \(0 \le 1- \theta _2 y \le 1\). Further,

$$\begin{aligned} |\theta _2| \le \max _{\theta _1} \theta _1 e^{-\theta _1^2} = \frac{1}{\sqrt{2e}}. \end{aligned}$$

Hence an expression for \(\ln \Phi \) at \(y=0\) is given by

$$\begin{aligned} \ln \Phi (y)= & {} - \ln 2 + \ln \left( 1 + \frac{2}{\sqrt{\pi }} y (1- \theta _2 y) \right) . \end{aligned}$$

We need again estimates for the logarithm, namely \(\ln (1+x) = x - \theta _3 x^2 < x\) with some \(\theta _3 = \theta _3 (x) \ge 0\). In addition, there exists \(c_3 \in {\mathbb {R}}\) such that \(\theta _3 < c_3 \) if x is bounded away from \(-1\), for example, for \(x \ge 2 \Phi (-1)-1\). This gives

$$\begin{aligned} \ln \Phi (y)\le & {} - \ln 2 + \frac{2}{\sqrt{\pi }} y - \frac{2}{\sqrt{\pi }} \theta _2 y^2 \end{aligned}$$

and

$$\begin{aligned} \ln \Phi (y)= & {} - \ln 2 + \frac{2}{\sqrt{\pi }} y (1- \theta _2 y) - \theta _3 \frac{4}{\pi } y^2 \underbrace{(1- \theta _2 y)^2}_{\le 1}\\\ge & {} - \ln 2 + \frac{2}{\sqrt{\pi }} y - \frac{2}{\sqrt{\pi }} \theta _2 y^2 - \theta _3 \frac{4}{\pi } y^2 \end{aligned}$$

with \(\theta _3 < c_3 \) for \(y \ge -1\). Thus the Taylor expansion of \(\ln \Phi \) at \(y=0\) is given by

$$\begin{aligned} \ln \Phi (y)= & {} - \ln 2 + \frac{2}{\sqrt{\pi }} y - \theta _4 y^2 \end{aligned}$$

with some \( \theta _4= \theta _4(y) > - \frac{1}{2}\), and there exists a \(c_4 \in {\mathbb {R}}\) with \(\theta _4 \le c_4\) for \(y \ge -1\). We plug this into (16) and obtain

$$\begin{aligned} \int \limits _{- \infty }^\infty \Phi \left( \frac{y}{\sqrt{d}}\right) ^{n-d} e^{-y^2}\,dy= & {} e^{- (n-d)\ln 2} \int \limits _{- \infty }^\infty e^{ \frac{2}{\sqrt{\pi }} \frac{n-d}{\sqrt{d}} y - \theta _4 \frac{n-d}{d} y^2 -y^2 } \,dy~. \end{aligned}$$

Since \(\frac{n-d}{d} \rightarrow 0\), we may assume that \( 1+\theta _4 \frac{n-d}{d} \ge 1- \frac{1}{2} \frac{n-d}{d} > 0\). As an estimate from above we have

$$\begin{aligned} \int \limits _{- \infty }^\infty e^{ \frac{2}{\sqrt{\pi }} \frac{n-d}{\sqrt{d}} y - (1+\theta _4 \frac{n-d}{d}) y^2 } \,dy\le & {} \int \limits _{- \infty }^\infty e^{ \frac{2}{\sqrt{\pi }} \frac{n-d}{\sqrt{d}} y - (1- \frac{1}{2} \frac{n-d}{d}) y^2 } \,dy \nonumber \\= & {} e^{\frac{\frac{4}{\pi }\frac{(n-d)^2}{d}}{4 (1- \frac{1}{2} \frac{n-d}{d})}} \int \limits _{- \infty }^\infty e^{ -\left( \frac{\frac{2}{\sqrt{\pi }} \frac{n-d}{\sqrt{d}}}{2 \sqrt{(1- \frac{1}{2} \frac{n-d}{d})}} - \sqrt{(1- \frac{1}{2} \frac{n-d}{d})}y\right) ^2 } \,dy \nonumber \\= & {} e^{ \frac{1}{\pi }\frac{(n-d)^2}{d} \left( 1+ O\left( \frac{n-d}{d}\right) \right) } \frac{\sqrt{\pi }}{\sqrt{\left( 1- \frac{1}{2} \frac{n-d}{d}\right) } }\nonumber \\= & {} \sqrt{\pi }e^{ \frac{1}{\pi }\frac{(n-d)^2}{d} + O\left( \frac{(n-d)^3}{d^2}\right) + O\left( \frac{n-d}{d}\right) }. \end{aligned}$$
(17)

The estimate from below is slightly more complicated. For \(y \ge -\sqrt{d}\) there is an upper bound \(c_4\) for \(\theta _4\). Using this we have

$$\begin{aligned} \int \limits _{- \infty }^\infty e^{ \frac{2}{\sqrt{\pi }} \frac{n-d}{\sqrt{d}} y - \theta _4 \frac{n-d}{d} y^2 -y^2 } \,dy\ge & {} e^{\frac{1}{\pi }\frac{(n-d)^2}{d}} \int \limits _{\frac{1}{\sqrt{\pi }} \frac{n-d}{\sqrt{d}}- \sqrt{d}}^\infty e^{-\left( \frac{1}{\sqrt{\pi }} \frac{n-d}{\sqrt{d}} - y \right) ^2 - c_4 \frac{n-d}{d} y^2 } \,dy\\\ge & {} e^{\frac{1}{\pi }\frac{(n-d)^2}{d}} \int \limits _{- \infty }^{\sqrt{d} } e^{-y^2 - c_4 \frac{n-d}{d} \left( \frac{1}{\sqrt{\pi }} \frac{n-d}{\sqrt{d}} - y \right) ^2 } \,dy~. \end{aligned}$$

Now we use \((a-b)^2 \le 2a^2 + 2b^2\) which shows that

$$\begin{aligned} \int \limits _{- \infty }^\infty e^{ \frac{2}{\sqrt{\pi }} \frac{n-d}{\sqrt{d}} y - \theta _4 \frac{n-d}{d} y^2 -y^2 } \,dy\ge & {} e^{\frac{1}{\pi }\frac{(n-d)^2}{d} + O(\frac{(n-d)^3}{d^2}) } \int \limits _{- \infty }^{\sqrt{d} } e^{- (1+ 2 c_4 \frac{n-d}{d}) y^2 } \,dy\nonumber \\= & {} e^{\frac{1}{\pi }\frac{(n-d)^2}{d} + O\left( \frac{(n-d)^3}{d^2}\right) } \frac{1}{\sqrt{(1+ 2 c_4 \frac{n-d}{d})}} \int \limits _{- \infty }^{\sqrt{d (1+ 2 c_4 \frac{n-d}{d})}} e^{- y^2 } \,dy \nonumber \\\ge & {} e^{\frac{1}{\pi }\frac{(n-d)^2}{d} + O\left( \frac{(n-d)^3}{d^2}\right) +O\left( \frac{n-d}{d}\right) } \int \limits _{- \infty }^{\sqrt{d} } e^{- y^2 } \,dy. \end{aligned}$$
(18)

Recall the estimate for \(\Phi (z) \) from Lemma 3.1,

$$\begin{aligned} \int \limits _{- \infty }^{\sqrt{d} } e^{- y^2 } \,dy = \sqrt{\pi }\, \Phi (\sqrt{d}) \ge \sqrt{\pi }(1- e^{-d}) = \sqrt{\pi }e^{O(e^{-d})}. \end{aligned}$$
(19)

We combine Eqs. (17), (18) and (19) and obtain

$$\begin{aligned} \int \limits _{- \infty }^\infty e^{ \frac{2}{\sqrt{\pi }} \frac{n-d}{\sqrt{d}} y - \theta _4 \frac{n-d}{d} y^2 -y^2 } \,dy = \sqrt{\pi }e^{\frac{1}{\pi }\frac{(n-d)^2}{d} + O\big (\frac{(n-d)^3}{d^2}\big ) +O\big (\frac{n-d}{d}\big ) + O(e^{-d}) } \end{aligned}$$

which yields Theorem 1.3. \(\square \)
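
To close, Theorem 1.3 can also be illustrated numerically. The sketch below (ours; the parameters \(d=2000\), \(n=2010\) are our choice) evaluates (16) by Simpson's rule and compares it with the main term \({n \atopwithdelims ()d} 2^{-(n-d)+1} e^{\frac{1}{\pi }\frac{(n-d)^2}{d}}\); for these values the correction terms are tiny, so the logarithms should agree closely.

```python
import math

def Phi(y):
    return 0.5 * (1.0 + math.erf(y))

def expected_facets(n, d, lo=-8.0, hi=8.0, m=4000):
    """Formula (16): 2*C(n,d)/sqrt(pi) * int Phi(y/sqrt(d))^(n-d) e^(-y^2) dy."""
    h = (hi - lo) / m
    s = 0.0
    root_d = math.sqrt(d)
    for k in range(m + 1):
        y = lo + k * h
        w = 1 if k in (0, m) else (4 if k % 2 else 2)
        s += w * Phi(y / root_d) ** (n - d) * math.exp(-y * y)
    return 2 * math.comb(n, d) / math.sqrt(math.pi) * s * h / 3.0

n, d = 2010, 2000
exact = expected_facets(n, d)
main = math.comb(n, d) * 2.0 ** (-(n - d) + 1) * math.exp((n - d) ** 2 / (math.pi * d))
log_ratio = math.log(exact / main)   # predicted to be o(1) when n - d = o(d)
```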