1 Introduction

1.1 Random Real Enumerative Geometry

In this paper we continue the study of real enumerative problems initiated in [9]. Our goal is to answer questions such as

On average, how many lines intersect four random lines in \({\mathbb {R}}\mathrm {P}^3\)?

To be more precise, let \({\mathbb {G}}(1,3)\) be the Grassmannian of lines in \({\mathbb {R}}\mathrm {P}^3\). This is a homogeneous space: the orthogonal group O(4) acts transitively on it, and there is a unique probability measure on \(\mathbb {G}(1,3)\) invariant under this action. We fix \(L\in {\mathbb {G}}(1,3)\) and define the Schubert variety:

$$\begin{aligned} \Omega (L):=\left\{ \ell \in {\mathbb {G}}(1,3) \ | \ \ell \cap L \ne \emptyset \right\} . \end{aligned}$$

Then the answer to the previous question is given by the number:

$$\begin{aligned} \delta _{1,3}:={\mathbb {E}}{ \# \left( g_1\cdot \Omega \left( L\right) \cap \cdots \cap g_4\cdot \Omega \left( L\right) \right) } \end{aligned}$$

where \(g_1,\ldots ,g_4\) are independent, taken uniformly at random from O(4) (with the normalized Haar measure).

One can generalize to higher dimensions. Let \({\mathbb {G}}(k,n)\) be the Grassmannian of linear projective subspaces of dimension k in \({\mathbb {R}}\mathrm {P}^n\). It is a homogeneous space on which \(O(n+1)\) acts transitively, and it carries a unique \(O(n+1)\)-invariant probability measure. We fix \(L\in {\mathbb {G}}(n-k-1,n)\) and introduce the corresponding Schubert variety:

$$\begin{aligned} \Omega (L):=\left\{ \ell \in {\mathbb {G}}(k,n) \ | \ \ell \cap L \ne \emptyset \right\} . \end{aligned}$$
(1)

We define

$$\begin{aligned} \delta _{k,n}:={\mathbb {E}}{ \# \left( g_1\cdot \Omega \left( L\right) \cap \cdots \cap g_{(k+1)(n-k)}\cdot \Omega \left( L\right) \right) } \end{aligned}$$
(2)

where \(g_1,\ldots ,g_{(k+1)(n-k)}\) are independent, taken uniformly at random from \(O(n+1)\) (with the normalized Haar measure). This number equals the average number of k-dimensional subspaces of \({\mathbb {R}}\mathrm {P}^n\) meeting \((k+1)(n-k)\) random subspaces of dimension \((n-k-1)\).
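For \(k=1\), \(n=3\) the expectation in (2) can be estimated by direct simulation: a line meets each of four given lines exactly when its Plücker vector satisfies four linear conditions, and it is a line exactly when it lies on the Klein quadric, so the solutions are the real roots of a binary quadric. The following Monte Carlo sketch is our own illustration (all function names are ours), not the method of [9]:

```python
import numpy as np

def plucker(a, b):
    # Plücker coordinates (p01, p02, p03, p12, p13, p23) of the line through [a], [b]
    return np.array([a[0]*b[1] - a[1]*b[0], a[0]*b[2] - a[2]*b[0],
                     a[0]*b[3] - a[3]*b[0], a[1]*b[2] - a[2]*b[1],
                     a[1]*b[3] - a[3]*b[1], a[2]*b[3] - a[3]*b[2]])

def klein(p, q):
    # polarized Klein quadric; klein(p, q) = 0 iff the lines p and q meet
    return (p[0]*q[5] + p[5]*q[0] - p[1]*q[4] - p[4]*q[1]
            + p[2]*q[3] + p[3]*q[2])

def count_real_lines(lines):
    # lines meeting all four given ones: intersect the pencil ker(M)
    # with the Klein quadric, i.e. count real roots of a quadratic
    M = np.array([[q[5], -q[4], q[3], q[2], -q[1], q[0]] for q in lines])
    u, v = np.linalg.svd(M)[2][-2:]     # basis of the (generically 2-dim) kernel
    A, B, C = klein(u, u), klein(u, v), klein(v, v)
    return 2 if B * B - A * C > 0 else 0   # discriminant of A s^2 + 2B st + C t^2

rng = np.random.default_rng(0)
trials = 2000
counts = [count_real_lines([plucker(*rng.standard_normal((2, 4))) for _ in range(4)])
          for _ in range(trials)]
print(sum(counts) / trials)   # Monte Carlo estimate of delta_{1,3}
```

Generically there are 0 or 2 real solutions, and over a few thousand trials the empirical mean stabilizes around 1.7.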

1.2 Previously on Probabilistic Schubert Calculus

In the recent work [9], the first named author of the present paper together with Peter Bürgisser established a formula for the number \(\delta _{k,n}\) in (2), see [9, Corollary 5.2]:

$$\begin{aligned} \delta _{k,n}=\frac{d_{k,n}!}{2^{d_{k,n}}} \cdot |{\mathbb {G}}(k,n)| \cdot |C(k,n)| \end{aligned}$$

where \(|{\mathbb {G}}(k,n)|\) is the volume of the Grassmannian and |C(k, n)| the volume of a certain convex body in \({\mathbb {R}}^{(k+1)\times (n-k)}\), which the authors called the Segre zonoid. This convex body is defined as follows: take random points \(p_1,\ldots ,p_m\) independently and uniformly on \(S^k\times S^{n-k-1} \subset {\mathbb {R}}^{(k+1)(n-k)}\) and consider the Minkowski sum \(K_m:=\frac{1}{m}([0,p_1]+\cdots +[0,p_m])\). Then C(k, n) is the limit, with respect to the Hausdorff metric, of \(K_m\) as \(m\rightarrow \infty \) (being a limit of zonotopes, C(k, n) is a zonoid), see Definition 9.

If we see elements of \( {\mathbb {R}}^{(k+1)\times (n-k)}\) as matrices, it turns out that this convex body, in some sense, depends only on their singular values. Using this, and assuming \(k+1<n-k\), one can construct a convex body in the space \({\mathbb {R}}^{k+1}\) of singular values such that, denoting by r its radial function, we have [9, Theorem 5.13]:

$$\begin{aligned} \delta _{k,n}=\beta _{k,n}\int _{S^{k}_+}{\left( p_k \cdot (r)^{k+1}\right) ^{(n-k)}q_k \ \mathrm{d}S^{k}} \end{aligned}$$
(3)

where \(p_k\) and \(q_k\) are simple combinatorial functions of the coordinates on \({\mathbb {R}}^{k+1}\), \(\beta _{k,n}\) is a known coefficient (whose explicit expression is given in (17)) and the domain of integration is

$$\begin{aligned} S^{k}_+=\left\{ x\in {\mathbb {R}}^{k+1}|\ \Vert x\Vert =1,\, x_1\ge \cdots \ge x_{k+1} \ge 0\right\} . \end{aligned}$$

Equation (3) will be our starting point for computing both the asymptotic of \(\delta _{k,n}\) and the “exact” formula for \(\delta _{1,n}\).

1.3 Main Results

Our first main result is the asymptotic of \(\delta _{k,n}\) for any fixed k, as n goes to infinity, generalizing [9, Theorem 6.8], which deals with the case \(k=1\).

Theorem 1

For every integer \(k>0\) and as n goes to infinity, we have

$$\begin{aligned} \delta _{k,n}=a_k \cdot \left( b_k\right) ^n\cdot n^{-\frac{k(k+1)}{4}}\left( 1+{\mathcal {O}}(n^{-1})\right) \end{aligned}$$

where

$$\begin{aligned} a_k&=\Lambda _k \ \frac{2^{\frac{(k+1)(k-2)}{4}}}{\pi ^{\frac{k(k+2)}{2}}}\ \frac{\Gamma \left( \frac{k(k+3)}{4} \right) }{\Gamma \left( \frac{k(k+1)+2}{4} \right) }\left( \frac{k+1}{k+2}\right) ^{\frac{k(k+3)}{4}}\left( \frac{\Gamma \left( \frac{k+1}{2} \right) }{\Gamma \left( \frac{k+2}{2} \right) }\right) ^{k(k+1)} \\ b_k&=\left( \frac{\Gamma \left( \frac{k+2}{2} \right) }{\Gamma \left( \frac{k+1}{2} \right) }\sqrt{\pi }\right) ^{(k+1)}. \end{aligned}$$

(The number \(\Lambda _k\) that appears in the expression of \(a_k\) can be expressed as an integral over the polynomials that have all roots in \({\mathbb {R}}\), see Definition 10.)

For instance \(\Lambda _1\) and \(\Lambda _2\) can be easily computed and the previous formula gives:

$$\begin{aligned} \delta _{1,n}&= \frac{8}{3\pi ^{5/2}} \cdot \left( \frac{\pi ^2}{4}\right) ^n \cdot n^{-1/2} \left( 1+{\mathcal {O}}\left( n^{-1}\right) \right) \\ \delta _{2,n}&= \frac{9\sqrt{3}}{2048\sqrt{2\pi }} \cdot 8^n \cdot n^{-3/2} \left( 1+{\mathcal {O}}\left( n^{-1}\right) \right) . \end{aligned}$$
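As a quick consistency check, the bases of the exponential terms above can be recomputed directly from the expression for \(b_k\) in Theorem 1 (a small numerical sketch; the function name is ours):

```python
from math import gamma, pi

def b(k):
    # base of the exponential growth in Theorem 1
    return (gamma((k + 2) / 2) / gamma((k + 1) / 2) * pi ** 0.5) ** (k + 1)

print(b(1), b(2))   # pi^2/4 = 2.467..., and 8.0
```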

Similarly one can consider the same problem over the complex Grassmannian \(\mathbb {G}_\mathbb {C}(k,n)\) of k-dimensional complex subspaces of \({\mathbb {C}}\mathrm {P}^n\). The Schubert cycles \(\Omega _\mathbb {C}(L_\mathbb {C})\) are defined just as in (1). The compact Lie group with transitive action is now the unitary group \(U(n+1)\). We define

$$\begin{aligned} \delta _{k,n}^\mathbb {C}:={\mathbb {E}}{ \# \left( g_1\cdot \Omega _\mathbb {C}(L_\mathbb {C}) \cap \cdots \cap g_{(k+1)(n-k)}\cdot \Omega _\mathbb {C}(L_\mathbb {C})\right) } \end{aligned}$$
(4)

where \(g_1,\ldots ,g_{(k+1)(n-k)}\) are independent, taken uniformly at random from \(U(n+1)\) (with the normalized Haar measure).

Remark 1

The expected value in (4) is an integer. Indeed the random variable is almost surely constant, equal to the degree of the Grassmannian in the Plücker embedding (see [9, Corollary 4.15]).

For this number we derive the following asymptotic (to be compared with Theorem 1).

Proposition 2

(The asymptotic for the complex case)

$$\begin{aligned} \delta _{k,n}^\mathbb {C}= a_k^\mathbb {C}\cdot \left( b_k^\mathbb {C}\right) ^n\cdot n^{-\frac{k(k+2)}{2}}\left( 1+{\mathcal {O}}(n^{-1})\right) \end{aligned}$$

where

$$\begin{aligned} a_k^\mathbb {C}&=\frac{\Gamma (1)\Gamma (2)\cdots \Gamma (k+1)}{(2\pi )^{k/ 2}(k+1)^{k(k+1)-1/2}}\\ b_k^\mathbb {C}&=\left( k+1\right) ^{(k+1)}. \end{aligned}$$

Remark 2

To derive the asymptotic formula of Theorem 1, we first notice that \(\mu =\frac{1}{\sqrt{k+1}}(1,\ldots ,1)\) is the only critical point of r (and of \(p_k\)) in the domain of integration of (3); it is a nondegenerate maximum. Thus, once we compute the Hessian \(H_k\) at this point, we can compute the asymptotic of (3) using Laplace’s method.

The difficulty lies in the fact that \(H_k\) is a symmetric bilinear form on \(T_\mu S^{k} \cong {\mathbb {R}}^{k}\), so for large k we would need to compute \(\sim k^2\) entries. However, here we are saved by the symmetries of the convex body whose radial function is r. Indeed it is invariant under permutations of the coordinates of \({\mathbb {R}}^{k+1}\). This implies that \(H_k\) commutes with this action of the symmetric group \({\mathfrak {S}}_{k+1}\). Moreover \(T_\mu S^{k}=\mu ^\perp \) is an irreducible subspace for this action. Thus by Schur’s Lemma \(H_k=\lambda _k \cdot {\mathbf {1}}\) for some \(\lambda _k\in {\mathbb {R}}\): in this way, for each \(k>0\), we only need to compute one number! Still, this computation is nontrivial (see Proposition 16).

It is not difficult to prove (Corollary 1 below) that \(\delta _{k,n}\) belongs to the ring of periods introduced by Kontsevich and Zagier [5]. Other than this, the nature of these numbers remains mysterious. In fact we do not even have an “exact” formula for the simplest nontrivial case \(\delta _{1,3}\). Nevertheless we can present it as a one-dimensional integral (see Proposition 24).

Theorem 3

$$\begin{aligned} \delta _{1,n}=-2 \pi ^{2n-2}c(n)\int _{0}^1 L(u)^{n-1}\mathrm {sinh}(w(u))w'(u)\mathrm{d}u \end{aligned}$$

where

$$\begin{aligned} c(n)=\frac{\Gamma \left( 2n-2\right) }{\Gamma \left( n\right) \Gamma \left( n-2\right) } \end{aligned}$$

\(L=F\cdot G\) and \(w=\log \left( F/G\right) \) with

$$\begin{aligned} F(u)&:=\int _{0}^{\pi /2} \frac{u \ \sin ^2(\theta )}{\sqrt{\cos ^2\theta +u^2 \sin ^2\theta }}\mathrm {d}\theta \\ G (u)&:=\int _{0}^{\pi /2} \frac{u \ \sin ^2(\theta )}{\sqrt{\sin ^2\theta +u^2 \cos ^2\theta }}\mathrm {d}\theta \end{aligned}$$
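The functions F and G are ordinary one-dimensional integrals and easy to evaluate numerically; for instance, at \(u=1\) both integrands reduce to \(\sin ^2\theta \), so \(F(1)=G(1)=\pi /4\) and \(w(1)=0\). A small sketch using a midpoint rule (our choice of quadrature):

```python
import numpy as np

def F(u, m=2000):
    # F(u) = ∫_0^{π/2} u sin²θ / sqrt(cos²θ + u² sin²θ) dθ, midpoint rule
    t = (np.arange(m) + 0.5) * (np.pi / 2) / m
    f = u * np.sin(t)**2 / np.sqrt(np.cos(t)**2 + u**2 * np.sin(t)**2)
    return f.sum() * (np.pi / 2) / m

def G(u, m=2000):
    # G(u) = ∫_0^{π/2} u sin²θ / sqrt(sin²θ + u² cos²θ) dθ, midpoint rule
    t = (np.arange(m) + 0.5) * (np.pi / 2) / m
    f = u * np.sin(t)**2 / np.sqrt(np.sin(t)**2 + u**2 * np.cos(t)**2)
    return f.sum() * (np.pi / 2) / m

# at u = 1 both values are π/4, so w(1) = log(F/G) = 0
print(F(1.0), G(1.0), np.log(F(0.5) / G(0.5)))
```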

Remark 3

One may want to numerically evaluate \(\delta _{k,n}\). For this purpose Eq. (3) is not quite suitable, because we do not know the radial function r explicitly.

On the contrary, in Theorem 3 everything is explicit. The functions F and G are elliptic integrals and satisfy a rather simple linear differential equation with rational coefficients. One could then use techniques such as the D-modules machinery (see for example [10]) to obtain numerical evaluations.

1.4 Structure of the Paper

The study of the numbers \(\delta _{k,n}\) was initiated in [9]. We first recall what is achieved there, as well as some preliminary background, in Sect. 2. In Sect. 3.1 we compute the asymptotic of \(\delta _{k,n}\) as n goes to infinity; this is to be compared with the asymptotic in the complex case, computed in Sect. 3.2. In Sect. 4 we prove that \(\delta _{k,n}\) is a period in the sense of Kontsevich and Zagier. Finally, in Sect. 5 we provide a formula for \(\delta _{1,n}\) for every \(n\ge 3\), as a one-dimensional integral of elliptic functions.

2 Preliminaries

2.1 The Gamma Function

Definition 1

The Gamma Function is defined for all \(x>0\) by

$$\begin{aligned} \Gamma (x)=\int _0^{+\infty } t^{x-1}\mathrm{e}^{-t} \mathrm {d}t \end{aligned}$$

We will use the following two classical results. For proofs and more details see for example [7].

Proposition 4

For all real numbers a and b, as \(x\rightarrow +\infty \),

$$\begin{aligned} \frac{ \Gamma (x+a)}{ \Gamma (x+b)}=x^{a-b}\left( 1+{\mathcal {O}}\left( {x}^{-1}\right) \right) . \end{aligned}$$
(5)

Proposition 5

(Multiplication Theorem) For all \(x> 0\) and every positive integer m we have

$$\begin{aligned} \prod _{k=0}^{m-1}\Gamma \left( x+{\frac{k}{m}}\right) =(2\pi )^{\frac{m-1}{2}}\;m^{{\frac{1}{2}}-mx}\;\Gamma (mx). \end{aligned}$$
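Both statements are easy to check numerically (a small sketch; we use \(\mathrm {lgamma}\) to avoid overflow for large arguments):

```python
from math import lgamma, gamma, pi, exp

# Proposition 4: Gamma(x+a)/Gamma(x+b) ~ x^(a-b) as x grows
a, b = 1.3, -0.7
for x in (1e2, 1e4, 1e6):
    ratio = exp(lgamma(x + a) - lgamma(x + b))
    print(x, ratio / x ** (a - b))          # tends to 1 at rate O(1/x)

# Proposition 5 (multiplication theorem) with m = 3, x = 2.3
m, x = 3, 2.3
lhs = 1.0
for j in range(m):
    lhs *= gamma(x + j / m)
rhs = (2 * pi) ** ((m - 1) / 2) * m ** (0.5 - m * x) * gamma(m * x)
print(lhs, rhs)                             # equal up to rounding
```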

2.2 The Grassmannian

Definition 2

The real Grassmannian manifold is the homogeneous space

$$\begin{aligned} {\mathbb {G}}(k,n):=\frac{O(n+1)}{O(n-k)\times O(k+1)} \end{aligned}$$

where O(m) is the orthogonal group of \(m\times m\) orthogonal matrices. It is a smooth manifold of dimension

$$\begin{aligned} \dim ({\mathbb {G}}(k,n))=d_{k,n}:=(k+1)(n-k). \end{aligned}$$

Definition 3

The Plücker embedding is the embedding

$$\begin{aligned} \begin{aligned} {\mathbb {G}}(k,n)&\rightarrow \mathrm {P}(\Lambda ^{k+1} {\mathbb {R}}^{n+1}) \\ W&\mapsto \left[ a_1 \wedge \ldots \wedge a_{k+1} \right] \end{aligned} \end{aligned}$$
(6)

where \(\left\{ a_1,\ldots ,a_{k+1}\right\} \) is any basis for W.

We provide \({\mathbb {G}}(k,n)\) with the Riemannian structure induced by (6), recalling that the scalar product on \((k+1)\)-vectors is given by:

$$\begin{aligned} \langle u_1\wedge \cdots \wedge u_{k+1},\ v_1\wedge \cdots \wedge v_{k+1}\rangle =\det \left( \langle u_i, v_j\rangle \right) _{1\le i,j\le k+1} . \end{aligned}$$

The volume of the Grassmannian with respect to the volume density associated to the restriction of the Plücker metric is [9, Equation (2.11) and (2.14)]:

$$\begin{aligned} \left| \mathbb {G}(k,n) \right| =\frac{\left| O(n+1)\right| }{\left| O(k+1)\right| \left| O(n-k)\right| }=\pi ^{\frac{(k+1)(n-k)}{2}} \prod _{i=1}^{k+1}\frac{\Gamma \left( \frac{i}{2}\right) }{\Gamma \left( \frac{n-k+i}{2}\right) } \end{aligned}$$
(7)
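Formula (7) is straightforward to evaluate; for instance \(|{\mathbb {G}}(1,3)|=2\pi ^2\), and the symmetry \(|{\mathbb {G}}(k,n)|=|{\mathbb {G}}(n-k-1,n)|\) is visible numerically (a small sketch; the function name is ours):

```python
from math import gamma, pi

def vol_grassmannian(k, n):
    # volume of G(k, n) in the Plücker metric, Eq. (7)
    v = pi ** ((k + 1) * (n - k) / 2)
    for i in range(1, k + 2):
        v *= gamma(i / 2) / gamma((n - k + i) / 2)
    return v

print(vol_grassmannian(1, 3))                          # 2*pi^2 = 19.739...
print(vol_grassmannian(1, 4), vol_grassmannian(2, 4))  # equal values
```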

Remark 4

From Definition 2 we see that \({\mathbb {G}}(k,n)={\mathbb {G}}(n-k-1,n)\). From now on we can and will assume \(k+1\le n-k \).

2.3 Convex Bodies

We will need a few elementary results from convex geometry.

Definition 4

A convex body is a nonempty compact convex subset of \({\mathbb {R}}^n\). We denote by \({\mathscr {K}}_n\) the set of convex bodies of \({\mathbb {R}}^n\) containing the origin.

Definition 5

The support function of \(K\in {\mathscr {K}}_n\) is the function

$$\begin{aligned} h_K : {\mathbb {R}}^n&\rightarrow {\mathbb {R}}\\ x&\mapsto \max \{\langle x,y \rangle \ |\ y\in K\}. \end{aligned}$$

Definition 6

The support hyperplane H(K, u) of \(K\in {\mathscr {K}}_n\) in the direction \(u\in S^{n-1}\) is

$$\begin{aligned} H(K,u):=\left\{ x\in {\mathbb {R}}^n \ | \ \langle x,u \rangle =h_K(u)\right\} . \end{aligned}$$
Fig. 1: The support function

Intuitively, the support function, or more precisely its restriction to the sphere \(S^{n-1}\), associates to each direction \(u\in S^{n-1}\) the distance from the origin to the hyperplane H(K, u), see Fig. 1. It characterizes the body, in the sense that \(h_{K_1}=h_{K_2}\iff K_1=K_2\). Moreover it has some nice properties making it very useful; see [4, Section 1.7.1] for proofs and more details:

$$\begin{aligned} K_1=K_2&\iff h_{K_1}=h_{K_2}&K_1\subset K_2&\iff h_{K_1}\le h_{K_2} \\ h_{K_1+K_2}&=h_{K_1}+h_{K_2}&h_{\lambda K}&=\lambda h_{K}\\&x\in K \iff \langle x,y \rangle \le h_K(y) \ \forall y \in {\mathbb {R}}^n \end{aligned}$$

The following result will be useful for us, see [4, Corollary 1.7.3].

Proposition 6

If \(h_K\) is differentiable in \(x_0 \in {\mathbb {R}}^n\), then

$$\begin{aligned} \{\nabla h_K(x_0)\}= \partial K \cap H\left( K,\frac{x_0}{\Vert x_0\Vert }\right) . \end{aligned}$$
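For a concrete instance, the support function of the ellipsoid \(K=\{y \ | \ \sum _i y_i^2/a_i^2\le 1\}\) is \(h_K(x)=(\sum _i a_i^2x_i^2)^{1/2}\), and one can check numerically that \(\nabla h_K(x_0)\) is a boundary point of K lying on the support hyperplane (a small sketch):

```python
import numpy as np

# ellipsoid semi-axes and an arbitrary test direction
a = np.array([3.0, 2.0, 1.0])
x0 = np.array([0.4, -1.2, 0.7])

h = np.sqrt(np.sum(a**2 * x0**2))   # h_K(x0)
grad = a**2 * x0 / h                # gradient of h_K at x0

print(np.sum(grad**2 / a**2))   # = 1: grad lies on the boundary of K
print(np.dot(grad, x0) - h)     # = 0: grad lies on the support hyperplane
```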

We will also need another function representing convex bodies.

Definition 7

The radial function of \(K\in {\mathscr {K}}_n\) is

$$\begin{aligned} r:S^{n-1}&\rightarrow {\mathbb {R}}_+ \\ u&\mapsto \sup \left\{ t\ge 0 \ | \ t u \in K\right\} . \end{aligned}$$

In this paper we will be interested in a special class of convex bodies: these are zonoids associated to a probability distribution in \({\mathbb {R}}^d\). The correspondence between zonoids and probability measures is studied in [3]. See for example [3, Theorem 3.1]. We introduce the following definition.

Definition 8

(Vitale zonoid) Let \(v\in {\mathbb {R}}^d\) be a random vector such that \({\mathbb {E}}\Vert v\Vert <\infty .\) We define the Vitale zonoid associated to v to be the convex body with the support function \(h(u)={\mathbb {E}}h_{[0, v]}(u).\)

There is a special case of this construction that will be relevant for us. Let \(Z\subset {\mathbb {R}}^d\) be a compact semialgebraic set, and sample v at random from the uniform distribution on Z. Then we will denote by \(C_Z\) the Vitale zonoid associated to Z.

2.4 Laplace’s Method

The main step in the computation of formula (27) is the application, in a multidimensional setting, of an asymptotic method for computing integrals: the so-called Laplace’s method. For a proof and more details on this result, one can see [6, Section II, Theorem 1].

Theorem 7

(Laplace’s method) We consider the integral depending on one parameter \(\lambda >0\):

$$\begin{aligned} I(\lambda ):=\int _{t_1}^{t_2} \mathrm{e}^{-\lambda a(t)} b(t) \mathrm{d}t, \end{aligned}$$

where a, b are functions \([t_1,t_2]\rightarrow {\mathbb {R}}\) satisfying the conditions:

  (i)

    a is smooth in a neighborhood of \(t_1\) and there exists \(\mu >0\) and \(a_0\ne 0\) such that for \(t\rightarrow t_1\):

    $$\begin{aligned} a(t)=a(t_1)+a_0(t-t_1)^\mu + {\mathcal {O}}(|t-t_1|^{\mu +1}). \end{aligned}$$
  (ii)

    b is smooth in a neighborhood of \(t_1\) and there exists \(\nu \ge 1\) and \(b_0\ne 0\) such that for \(t\rightarrow t_1\):

    $$\begin{aligned} b(t)=b_0(t-t_1)^{\nu -1} + {\mathcal {O}}(|t-t_1|^{\nu }). \end{aligned}$$
  (iii)

    \(t_1\) is a global minimum for a on \([t_1,t_2]\), i.e. \(a(t)>a(t_1)\) \(\forall t \in ]t_1,t_2[\), moreover for all \(\epsilon >0\),

    $$\begin{aligned} \inf _{t\in [t_1+\epsilon ,t_2[}\{a(t)-a(t_1)\}>0 \end{aligned}$$
  (iv)

    The integral \(I(\lambda )\) converges absolutely for sufficiently large \(\lambda \).

Then, as \(\lambda \rightarrow \infty \), we have:

$$\begin{aligned} I(\lambda ) = \mathrm{e}^{-\lambda a(t_1)} \cdot \frac{\Gamma \left( \frac{\nu }{\mu }\right) }{\lambda ^{\nu /\mu }} \cdot \frac{b_0}{\mu \cdot a_0^{\nu /\mu }} \left( 1+{\mathcal {O}}(\lambda ^{-(1+\nu )/\mu })\right) . \end{aligned}$$
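A toy instance: \(a(t)=t^2\), \(b(t)=1\) on [0, 1] gives \(t_1=0\), \(\mu =2\), \(\nu =1\), \(a_0=b_0=1\), so the theorem predicts \(I(\lambda )\sim \Gamma (1/2)/(2\sqrt{\lambda })=\sqrt{\pi }/(2\sqrt{\lambda })\). Numerically (our quadrature choice):

```python
import numpy as np
from math import sqrt, pi

def I(lam, m=200000):
    # I(lambda) = ∫_0^1 exp(-lambda t²) dt via midpoint rule
    t = (np.arange(m) + 0.5) / m
    return np.mean(np.exp(-lam * t**2))

for lam in (10.0, 100.0, 1000.0):
    print(lam, I(lam) / (sqrt(pi) / (2 * sqrt(lam))))   # ratio tends to 1
```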

2.5 Main Characters

Definition 9

For \(k\le n\) positive integers, the Segre zonoid is the convex body C(k, n) defined as follows. Take \(p_1,\ldots , \ p_m\) uniformly and independently at random on \(S^{k}\times S^{n-k-1}\subset {\mathbb {R}}^{d_{k,n}}\) and construct the Minkowski sum \(K_m:= \frac{1}{m} \sum _{i=1}^m \left[ 0,\ p_i \right] \). Then, as m goes to infinity, \(K_m\) converges almost surely (with respect to the Hausdorff metric), and C(k, n) is defined to be its limit.

The fact that this sequence of random compact sets converges almost surely follows from a strong law of large numbers that one can find in [1]. In the language of the previous section, C(k, n) is the Vitale zonoid associated to \(S^k\times S^{n-k-1}\).

Remark 5

There is an appropriate notion of tensor product for zonoids, see [8, Section 3]. In this sense the Segre zonoid is a tensor of balls.

If we think of \({\mathbb {R}}^{d_{k,n}}\) as the space of \((k+1)\times (n-k)\) matrices, it turns out that the convex body C(k, n) depends only on the singular values of these matrices. We then have [9, Theorem 5.13]:

Proposition 8

The volume of the Segre zonoid is given by

$$\begin{aligned} |C(k, n)| =\frac{2^{d_{k,n}}\cdot \pi ^{(k+1)(2n+4-k)}}{d_{k,n}\, \Gamma \left( \frac{k+1}{2}\right) \Gamma \left( \frac{k}{2}\right) \cdots \Gamma \left( \frac{1}{2}\right) \, \Gamma \left( \frac{n-k}{2}\right) \Gamma \left( \frac{n-k-1}{2}\right) \cdots \Gamma \left( \frac{n-2k}{2}\right) } \ I_k(n) \end{aligned}$$

where

$$\begin{aligned} I_k(n):=\int _{S^{k}_+}{\left( p_k \cdot (r)^{(k+1)}\right) ^{(n-k)}q_k \ \mathrm{d}S^{k}}. \end{aligned}$$
(8)

With the functions of the coordinates \(x=(x_1,\ldots ,x_{k+1})\in {\mathbb {R}}^{k+1}\)

$$\begin{aligned} p_k(x):=\prod _{i=1}^{k+1}x_i&,&q_k(x):=p_k(x)^{-(k+1)}\prod _{i<j} \left| x_i^2-x_j^2 \right| , \end{aligned}$$
(9)

and where r is the radial function of the convex body in \({\mathbb {R}}^{(k+1)}\) whose support function is given by [9, Proposition 5.8]:

$$\begin{aligned} h(x)=\frac{1}{\left( 2\pi \right) ^{(k+2)/2}}\int _{{\mathbb {R}}^{k+1}}\sqrt{ x_1^2 \xi _1^2+\cdots +x_{k+1}^2 \xi _{k+1}^2}\ \mathrm{e}^{-\frac{\Vert \xi \Vert ^2}{2}} {{\,\mathrm{d}\,}}\xi \end{aligned}$$
(10)

and the domain of integration is

$$\begin{aligned} S^{k}_+:=\left\{ x\in {\mathbb {R}}^{k+1} \ | \ \Vert x\Vert =1,\ x_1\ge \cdots \ge x_{k+1}\ge 0 \right\} . \end{aligned}$$

Let us recall the following [9, Lemma 5.10].

Proposition 9

The maximum of the radial function r is given by

$$\begin{aligned} R:=r(\mu )=\max _{u\in S^k}r(u)=\frac{1}{\sqrt{\pi }\sqrt{k+1}} \frac{\Gamma \left( \frac{k+2}{2}\right) }{\Gamma \left( \frac{k+1}{2}\right) }. \end{aligned}$$

Moreover \(\mu \) is the global maximum on \(S^k_+\) and the same is true for the function \(p_k\) defined in (9).

Proof

For the first part we refer to [9]. Consider \(p_k\) as a function on the whole space \({\mathbb {R}}^{k+1}\). The ith component of the gradient \(\nabla p_k\) at the point x is \(x_1\ldots \hat{x_i} \ldots x_{k+1}\) (the product of all coordinates except \(x_i\)). This is normal to the sphere if and only if there is \(\lambda \in {\mathbb {R}}\backslash \{0\}\) such that

$$\begin{aligned} \forall i \ x_1\ldots \hat{x_i} \ldots x_{k+1}=\lambda x_i. \end{aligned}$$
(11)

We see that if one of the \(x_i\) is zero then they all must vanish. Thus if \(x\in S^k\) we can assume \(x_i\ne 0\) for all i and multiply both sides of (11) by \(x_i\). We obtain that x is a critical point of \(p_k\) restricted to \(S^k\) (i.e. \(\nabla p_k (x)\) is normal to \(S^k\)) if and only if \(x_i=\pm x_j\) for all \(1\le i, j \le k+1\), and \(\mu \) is the only point with this property in \(S^k_+\). Moreover \(\mu \) is a maximum because \(\nabla p_k (\mu )\) points outward from the sphere. \(\square \)
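Unwinding the Gaussian normalization in (10) gives \(h(x)=(2\pi )^{-1/2}\,{\mathbb {E}}\,(\sum _i x_i^2\xi _i^2)^{1/2}\) for a standard Gaussian vector \(\xi \in {\mathbb {R}}^{k+1}\); at \(x=\mu \) this reduces to \((2\pi (k+1))^{-1/2}\,{\mathbb {E}}\Vert \xi \Vert \), which must equal the maximum R of Proposition 9 (since \(\mu \) maximizes r, the support function in the direction \(\mu \) equals R). A Monte Carlo sketch for \(k=2\):

```python
import numpy as np
from math import gamma, pi, sqrt

k = 2
rng = np.random.default_rng(1)
xi = rng.standard_normal((200000, k + 1))

# h(mu) from Eq. (10): h(mu) = E ||xi|| / sqrt(2*pi*(k+1))
h_mu = np.mean(np.linalg.norm(xi, axis=1)) / sqrt(2 * pi * (k + 1))

# R from Proposition 9
R = gamma((k + 2) / 2) / (sqrt(pi * (k + 1)) * gamma((k + 1) / 2))

print(h_mu, R)   # both close to 0.3676 for k = 2
```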

We will also need the following number.

Definition 10

For each \(a=(a_1,\ldots ,\ a_k)\in {\mathbb {R}}^{k}\), consider the polynomial of degree \(k+1\) given by \( p_a(X):=X^{k+1}+a_1 X^{k-1}-a_2 X^{k-2}+\cdots \pm a_k \) (note the absence of the term of degree k). Let \({\mathcal {R}}_k:=\left\{ a\in {\mathbb {R}}^k \ | \ \text {all the roots of } p_a \text { are real} \right\} .\) Then

$$\begin{aligned} \Lambda _k := \int _{{\mathcal {R}}_k}\mathrm{e}^{a_1}\ \mathrm{d}a. \end{aligned}$$
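For \(k=1\) the region is simply \({\mathcal {R}}_1=\{a\le 0\}\) (the roots of \(X^2+a\) are real iff \(a\le 0\)), so \(\Lambda _1=\int _{-\infty }^0 \mathrm{e}^{a}\,\mathrm{d}a=1\). For \(k=2\), membership in \({\mathcal {R}}_2\) is governed by the classical cubic discriminant; the following sketch spot-checks this description against numerically computed roots (the thresholds are our choices):

```python
import numpy as np

rng = np.random.default_rng(2)

# k = 2: p_a(X) = X^3 + a1 X - a2.  The cubic discriminant criterion says
# that all roots are real iff -4 a1^3 - 27 a2^2 >= 0.
def all_roots_real(a1, a2):
    return np.abs(np.roots([1.0, 0.0, a1, -a2]).imag).max() < 1e-8

agree = samples = 0
while samples < 300:
    a1, a2 = rng.uniform(-3, 3, size=2)
    disc = -4 * a1**3 - 27 * a2**2
    if abs(disc) < 1e-2:       # skip a neighborhood of the boundary
        continue
    samples += 1
    agree += all_roots_real(a1, a2) == (disc > 0)
print(agree, samples)          # all sampled points agree
```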

The number \(\Lambda _k\) has another expression if we consider the point of view of roots. For that purpose we introduce the square root of the discriminant:

$$\begin{aligned} \sqrt{\Delta }:=\prod _{i<j} (x_i-x_j) \ \ \forall x=(x_1,\ldots , x_{k+1})\in {\mathbb {R}}^{k+1}. \end{aligned}$$

We also set \(\mu :=\frac{1}{\sqrt{k+1}}(1,\ldots ,1)\in S^k \subset {\mathbb {R}}^{k+1}\) and \(F_k:=\left\{ x_1\ge x_2 \ge \cdots \ge x_{k+1} \right\} \). Note that on \(F_k\), \(\sqrt{\Delta }\) is non-negative so the notation makes sense.

Proposition 10

For every positive integer k,

$$\begin{aligned} \Lambda _k = \Gamma \left( \frac{K+k}{2}\right) \frac{2^{\frac{K+k-2}{2}}}{\sqrt{k+1}} \int _{F_k\cap S^k\cap \mu ^\perp } \sqrt{\Delta } \ \mathrm{d}S^{k-1}, \end{aligned}$$

where \(K=\left( {\begin{array}{c}k+1\\ 2\end{array}}\right) \) and \(\mathrm{d}S^{k-1}\) is the standard spherical measure of the unit sphere of \(\mu ^\perp \).

Proof

First by a spherical change of coordinates and by homogeneity of \(\sqrt{\Delta }\) of degree K, we have

$$\begin{aligned} \int _{F_k\cap \mu ^\perp }\mathrm{e}^{-\frac{\Vert v\Vert ^2}{2}} \sqrt{\Delta } \ \mathrm{d}v= \Gamma \left( \frac{K+k}{2}\right) 2^{\frac{K+k-2}{2}} \int _{F_k\cap S^k\cap \mu ^\perp } \sqrt{\Delta } \ \mathrm{d}S^{k-1} \end{aligned}$$
(12)

where \(\mathrm{d}v\) is the flat Lebesgue measure on \(\mu ^\perp \) induced by its embedding in \({\mathbb {R}}^{k+1}\).

On the other hand, let us introduce the elementary symmetric polynomials \(\sigma _1= x_1+\ \cdots \ + x_{k+1}\), \(\sigma _2=\sum _{i<j} x_i x_j\), \(\ldots \), \(\sigma _{k+1}= x_1 \cdots x_{k+1}.\) The map \(x\mapsto \sigma \) is a diffeomorphism on \(F_k\) whose Jacobian is precisely \(\det \left( \frac{\partial \sigma }{\partial x}\right) = \sqrt{\Delta }\). In fact, \(\det \left( \frac{\partial \sigma }{\partial x}\right) \) is a monic polynomial of the same degree as \(\sqrt{\Delta }\); moreover, it is easy to see that for every \(i\ne j\) the polynomial \((x_i-x_j)\) divides \(\det \left( \frac{\partial \sigma }{\partial x}\right) \), therefore they are equal.

Now consider a new orthonormal basis in \({\mathbb {R}}^{k+1}\) with first unit vector given by \(\mu \). Let \({\tilde{x}}\) be the coordinates in this new basis and let \(v=({\tilde{x}}_2,\ldots , {\tilde{x}}_{k+1})\). Observe that \({\tilde{x}}_1= \langle x, \mu \rangle =\sigma _1/\sqrt{k+1}\). Thus we obtain the Jacobian matrix

$$\begin{aligned} \frac{\partial {\tilde{x}}}{\partial \sigma }=\left( \begin{array}{cc} \frac{1}{\sqrt{k+1}} &{}\quad 0 \ \cdots \ 0 \\ \frac{\partial v}{\partial \sigma _1} &{}\quad \frac{\partial v}{\partial \sigma _{\ge 2}} \end{array} \right) . \end{aligned}$$

This implies that \(\det \left( \frac{\partial {\tilde{x}}}{\partial \sigma } \right) = \frac{1}{\sqrt{k+1}}\det \left( \frac{\partial v}{\partial \sigma _{\ge 2}} \right) \). On the other hand \({\tilde{x}}\) is an orthogonal transformation of x so \(\det \left( \frac{\partial {\tilde{x}}}{\partial \sigma } \right) =\det \left( \frac{\partial {x}}{\partial \sigma } \right) =1/\sqrt{\Delta }\). Altogether this gives

$$\begin{aligned} \det \left( \frac{\partial v}{\partial \sigma _{\ge 2}} \right) =\sqrt{k+1}/\sqrt{\Delta } \end{aligned}$$
(13)

Moreover we see that \((\sigma _1)^2=\Vert x\Vert ^2+2\sigma _2\). Restricted to \(v\in \mu ^\perp =\{\sigma _1=0\}\) this gives \(-\frac{\Vert v\Vert ^2}{2}=\sigma _2\) .

Next we let \(a_i:=\sigma _{i+1}\) for \(1\le i \le k\) and apply the change of variables \(v\mapsto a\) to the left-hand side of (12). By (13) we have \(\sqrt{\Delta } \ \mathrm {d}v=\sqrt{k+1}\, \mathrm {d}a\). This gives

$$\begin{aligned} \int _{F_k\cap \mu ^\perp }\mathrm{e}^{-\frac{\Vert v\Vert ^2}{2}} \sqrt{\Delta } \ \mathrm{d}v=\sqrt{k+1}\, \Lambda _k \end{aligned}$$

\(\square \)

3 Asymptotics

Fix an integer \(k>0\). Given \(L\in {\mathbb {G}}(n-k-1,n)\), consider the corresponding Schubert variety in the Grassmannian \({\mathbb {G}}(k,n)\):

$$\begin{aligned} \Omega (L):=\left\{ \ell \in {\mathbb {G}}(k,n) \ | \ \ell \cap L \ne \emptyset \right\} . \end{aligned}$$
(14)

It is a singular subvariety of the Grassmannian of codimension 1 and its volume is computed in [9, Theorem 4.2].

Recall that we are interested in the computation of the numbers

$$\begin{aligned} \delta _{k,n}:={\mathbb {E}}{ \# \left\{ g_1\cdot \Omega \left( L\right) \cap \cdots \cap g_{(k+1)(n-k)}\cdot \Omega \left( L\right) \right\} } \end{aligned}$$

for which the following formula is established in [9, Corollary 5.2]:

$$\begin{aligned} \delta _{k,n}=\frac{d_{k,n}!}{2^{d_{k,n}}} \cdot |{\mathbb {G}}(k,n)| \cdot |C(k,n)|. \end{aligned}$$
(15)

Here C(k, n) is the convex body defined in Definition 9, and \(d_{k,n}=(k+1)(n-k)\) is the dimension of the Grassmannian \({\mathbb {G}}(k,n)\).

Using Proposition 8 and [9, Equation (2.11)] we get

$$\begin{aligned} \delta _{k,n}=\beta _{k,n}\ I_k(n) \end{aligned}$$
(16)

with \(I_k(n)\) defined in (8) and

$$\begin{aligned} \beta _{k,n}:=(2\pi )^{k+1}\left( \frac{\pi }{2}\right) ^{d_{k,n}} \frac{\Gamma \left( d_{k,n}\right) }{\Gamma (\frac{n+1}{2}) \Gamma (\frac{n}{2})\cdots \Gamma (\frac{n-2k}{2})}. \end{aligned}$$
(17)

3.1 Asymptotic of \(\delta _{k,n}\) as \(n\rightarrow \infty \)

In this section we compute the asymptotics of \(\delta _{k,n}\) as n goes to \(\infty \).

In order to compute these asymptotics we will apply Laplace’s Method (Theorem 7) to Eq. (8), using the fact that the global maximum of \(p_k \cdot (r)^{(k+1)}\) is reached at \(\mu \) (Proposition 9). Two major obstacles arise. First: we do not know the radial function r explicitly. Second: one needs to compute the Hessian of \(p_k \cdot (r)^{(k+1)}\) at the point \(\mu \).

To solve the first problem, the key is Proposition 6 that will allow us to express r in terms of the support function, see Eq. (18) below.

To deal with the second difficulty, we will prove that the Hessian of \(p_k \cdot (r)^{(k+1)}\) is a multiple of the identity. To do so we use the fact that the convex body defined by r is invariant under the action of the symmetric group acting by permutation of coordinates. This will imply that the Hessian at \(\mu \) is a morphism of representations on an irreducible subspace and we can use Schur’s Lemma (see Proposition 12 below).

Let us denote by D(k) the convex body defined by r and by \(\partial D(k)\) its boundary. Using Proposition 6, we have the following commutative diagram:

[Commutative diagram: \(S^k\xrightarrow {\nabla h}\partial D(k)\xrightarrow {\pi }S^k\)]

where \(\pi (x)=\frac{x}{\Vert x\Vert }\) and \(\psi =\pi \circ \nabla h\). Thus assuming that \(\psi \) is a local diffeomorphism near \(\mu \), we can write

$$\begin{aligned} r(x)^2=\Vert (\nabla h) \left( \psi ^{-1} (x) \right) \Vert ^2. \end{aligned}$$
(18)

Here \(\nabla h\) is the gradient of the function on the whole space \({\mathbb {R}}^{k+1}\) which is restricted to the sphere only afterward, but for the sake of simplicity we omit the restriction in the notation of the function.

Thus if we can compute the Taylor polynomial of \(\nabla h\) at \(\mu \), we obtain at the same time the Taylor polynomial of r, using the following lemma.

Lemma 11

Let \(f_1:{\mathbb {R}}^p\rightarrow {\mathbb {R}}^q\), \(f_2:{\mathbb {R}}^n\rightarrow {\mathbb {R}}^p\) and \(f_3:{\mathbb {R}}^m\rightarrow {\mathbb {R}}^n\) be \({C}^2\)-functions. The second derivative of their composition at \(0\in {\mathbb {R}}^m\) is given by:

$$\begin{aligned} {{\,\mathrm{D}\,}}^2_0(f_1\circ f_2 \circ f_3) (x,x)={{\,\mathrm{D}\,}}_{f_2(f_3(0))}f_1\cdot \left[ {{\,\mathrm{D}\,}}_{f_3(0)}f_2\cdot {{\,\mathrm{D}\,}}^2_0 f_3(x,x)+{{\,\mathrm{D}\,}}^2_{f_3(0)}f_2\left( {{\,\mathrm{D}\,}}_0 f_3\cdot x, {{\,\mathrm{D}\,}}_0 f_3\cdot x\right) \right] \\ +{{\,\mathrm{D}\,}}^2_{f_2(f_3(0))}f_1\left( {{\,\mathrm{D}\,}}_{f_3(0)}f_2\cdot {{\,\mathrm{D}\,}}_0 f_3\cdot x, {{\,\mathrm{D}\,}}_{f_3(0)}f_2\cdot {{\,\mathrm{D}\,}}_0 f_3\cdot x \right) \ \forall x \in {\mathbb {R}}^m. \end{aligned}$$

Proof

Let f and g be \({C}^2\)-functions between real vector spaces that can be composed. We get the Taylor series:

$$\begin{aligned} g(x)=g(0)+{{\,\mathrm{D}\,}}_{0}g\cdot x +\frac{1}{2}{{\,\mathrm{D}\,}}^2_{0}g(x,x)+{\mathcal {O}}(\Vert x\Vert ^3). \end{aligned}$$

Writing the Taylor series of \(f(g(x))=f\left( g(0)+{{\,\mathrm{D}\,}}_{0}g\cdot x +\frac{1}{2}{{\,\mathrm{D}\,}}^2_{0}g(x,x)+{\mathcal {O}}(\Vert x\Vert ^3)\right) \) at g(0) and putting together the terms of second order we get

$$\begin{aligned} {{\,\mathrm{D}\,}}_{0}^2(f\circ g)(x,x)={{\,\mathrm{D}\,}}_{g(0)}f\cdot {{\,\mathrm{D}\,}}^2_{0} g (x,x) +{{\,\mathrm{D}\,}}^2_{g(0)}f({{\,\mathrm{D}\,}}_{0} g\cdot x, {{\,\mathrm{D}\,}}_{0} g\cdot x). \end{aligned}$$

Replacing f by \(f_1\) and g by \(f_2\circ f_3\), we get the result. \(\square \)

In particular if \(f:{\mathbb {R}}^m \rightarrow {\mathbb {R}}\) and \(g:{\mathbb {R}}^n\rightarrow {\mathbb {R}}^m\), then

$$\begin{aligned} {{\,\mathrm{D}\,}}^2_a\left( (f \circ g)^2\right) (x,x)= & {} 2f(g(a))\cdot \left[ {{\,\mathrm{D}\,}}_{g(a)}f\cdot {{\,\mathrm{D}\,}}^2_ag(x,x)+{{\,\mathrm{D}\,}}^2_{g(a)}f\left( {{\,\mathrm{D}\,}}_a g\cdot x, {{\,\mathrm{D}\,}}_a g\cdot x\right) \right] \nonumber \\&+2\left( {{\,\mathrm{D}\,}}_{g(a)}f\cdot {{\,\mathrm{D}\,}}_a g\cdot x \right) ^2 \ \forall x \in {\mathbb {R}}^n. \end{aligned}$$
(19)
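In one variable, (19) reduces to \(((f\circ g)^2)''(a)=2f(g(a))\,[f'(g(a))g''(a)+f''(g(a))g'(a)^2]+2(f'(g(a))g'(a))^2\), which can be verified against a finite difference (a sketch with arbitrarily chosen f and g):

```python
import numpy as np

# one-variable instance of (19) with f = sin, g = exp at a = 0.3
f, df, d2f = np.sin, np.cos, lambda t: -np.sin(t)
g, dg, d2g = np.exp, np.exp, np.exp
a, h = 0.3, 1e-4

F = lambda t: f(g(t)) ** 2
lhs = (F(a + h) - 2 * F(a) + F(a - h)) / h**2      # central second difference
rhs = (2 * f(g(a)) * (df(g(a)) * d2g(a) + d2f(g(a)) * dg(a) ** 2)
       + 2 * (df(g(a)) * dg(a)) ** 2)
print(lhs, rhs)   # the two values agree to high accuracy
```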

From now on we will work in exponential coordinates at \(\mu \). That is, to a point \(x\in T_{\mu }S^{k}=\mu ^{\perp }\) corresponds the point \(\cos \Vert x\Vert \ \mu + \sin \Vert x\Vert \ \frac{x}{\Vert x\Vert } \ \in S^k\); in particular, in these coordinates \(\mu =0\). Thus \(\psi \) and \(\frac{\partial h}{\partial x_i}|_{S^k}\) can be considered as functions on \({\mathbb {R}}^k\cong \mu ^\perp \).

We would like to replace g by \(\psi ^{-1}\) and f by \(\frac{\partial h}{\partial x_i}|_{S^k}\) in (19) and sum over \(i\in \left\{ 1,\ldots ,k+1\right\} \) to get \({{\,\mathrm{D}\,}}^2_\mu (r^2)\). The difficulty is that we would need to compute all the entries of the Hessian matrix, of which there are approximately \(k^2\). This increasing complexity could have made the computation impossible, but, as we pointed out before, it turns out that this matrix is a multiple of the identity.

Proposition 12

If \(f:\mu ^{\perp }\rightarrow {\mathbb {R}}\) is \(C^2\) in a neighborhood of 0 and invariant under the standard action of \({\mathfrak {S}}_{k+1}\) on \({\mathbb {R}}^{k+1}\), then its Hessian at 0 is a multiple of the identity, i.e. there exists \(C_f \in {\mathbb {R}}\) such that

$$\begin{aligned} {{\,\mathrm{D}\,}}^2_0f(x,x)=C_f \Vert x\Vert ^2 \ \forall x \in \mu ^\perp . \end{aligned}$$

Proof

First note that \(\mu \in {\mathbb {R}}^{k+1}\) is fixed under the standard action of \({\mathfrak {S}}_{k+1}\) (that is, by permutations of coordinates). Thus the representation decomposes as \({\mathbb {R}}^{k+1}={\mathbb {R}}\mu \oplus \mu ^{\perp }\), and the induced action on \(\mu ^{\perp }\) is well-defined. Moreover the invariant subspace \(\mu ^{\perp }\) is irreducible.

Now, let \(P\in {\mathfrak {S}}_{k+1}\) be a permutation. Since f is \({C}^2\) we have

$$\begin{aligned} f(x)&=f(0)+{{\,\mathrm{D}\,}}_0f\cdot x + \frac{1}{2} x^T H x +{\mathcal {O}}\left( \Vert x\Vert ^3\right) \\ f(P x)&=f(0)+{{\,\mathrm{D}\,}}_0f\cdot P x + \frac{1}{2} (P x)^T H (P x) +{\mathcal {O}}\left( \Vert x\Vert ^3\right) \end{aligned}$$

where we wrote the quadratic form \({{\,\mathrm{D}\,}}_0^2f\) in matrix form: \({{\,\mathrm{D}\,}}_0^2f(x,x)=x^T H x\) for some symmetric matrix H.

By comparing the terms of order 2 we get \(P^T H P = H\). Moreover \({\mathfrak {S}}_{k+1}\) acts by orthogonal matrices, so H commutes with this action. In other words, H is a morphism of representations from \(\mu ^\perp \) to itself. Since \(\mu ^\perp \) is irreducible, it follows from Schur’s Lemma [2, Section 1.2] that H is a (possibly complex) multiple of the identity. Since H is symmetric, all its eigenvalues are real; thus it is a real multiple of the identity. \(\square \)

Remark 6

By (10), h is \({\mathfrak {S}}_{k+1}\)-invariant, thus \(r^2\) is \({\mathfrak {S}}_{k+1}\)-invariant as well and the computation of its Hessian is reduced to the computation of the coefficient \(C_{r^2}\) of Proposition 12.

We let \(h_i:=\frac{\partial h}{\partial x_i}|_{S^k}\). Recall that, by Proposition 9, the point \(\mu \in S^k\) is a maximum point of r. In particular the tangent plane to \(\partial D(k)\) at \(\mu \) equals \(\{\mu \}^{\perp }.\) Consequently the support hyperplane \(H(D(k), \mu )\) is \(\mu +\{\mu \}^{\perp }\) and, by Proposition 6, \(\nabla h (\mu ) =\mu \). Thus \(\psi (\mu )=\mu \).

Remark 7

Furthermore \(\psi =\pi \circ \nabla h\) is also \({\mathfrak {S}}_{k+1}\)-invariant and, in the exponential coordinates at \(\mu \), it is a map from \(\mu ^{\perp }\) to itself. Mimicking the proof of Proposition 12, we get that if the differential of \(\psi \) at \(\mu \) has at least one real eigenvalue \(C_\psi \), then for all \(x \in \mu ^{\perp }\) we have \({{\,\mathrm{D}\,}}_\mu \psi \cdot x= C_\psi x\).

Before stating our next results, we rewrite h in a more convenient form by a change of variables (spherical coordinates) in (10):

$$\begin{aligned} h(x)=\frac{2^k}{\pi ^{(k+2)/2}} \Gamma \left( \frac{k+2}{2}\right) \int _{S^k_\mathrm{pos}}\sqrt{\sum _{j=1}^{k+1}x_j^2\xi _j^2}\ \mathrm{d}S^k(\xi ) \end{aligned}$$
(20)

with \(S^k_{pos}=S^k\cap \left\{ x_i\ge 0 \ \forall i=1,\ldots ,k+1\right\} \) (note that \(S^k_{pos}\ne S^k_+\)).

Definition 11

For \(m\in {\mathbb {N}}\), we denote by G(m) the number:

$$\begin{aligned} G(m):=\int _{S^k_{pos}}\xi _1^m \mathrm{d}S^k(\xi ). \end{aligned}$$

These numbers satisfy the following simple identities.

Proposition 13

  1. (i)

    \(G(m)= \frac{\pi ^{k/2}}{2^k} \frac{\Gamma \left( \frac{m+1}{2}\right) }{\Gamma \left( \frac{k+m+1}{2}\right) }\);

  2. (ii)

    \(\frac{G(4)}{G(2)}=\frac{3}{k+3} \);

  3. (iii)

    \(\frac{G(6)}{G(2)}=\frac{15}{(k+3)(k+5)} \);

  4. (iv)

    \(\frac{\int _{S^k_{pos}}\xi _1^2 \xi _2^2 \mathrm{d}S^k(\xi )}{G(2)}=\frac{1}{k+3}\);

  5. (v)

    \(\frac{\int _{S^k_{pos}}\xi _1^4 \xi _2^2 \mathrm{d}S^k(\xi )}{G(2)}=\frac{3}{(k+3)(k+5)}\).

Proof

Observe first that, for any \(p>0\),

$$\begin{aligned} \int _0^{+\infty } t^p \mathrm{e}^{-\frac{t^2}{2}} \mathrm{d}t =2^{\frac{p-1}{2}} \Gamma \left( \frac{p+1}{2} \right) . \end{aligned}$$
(21)

This identity is obtained by the change of variable \(u=\frac{t^2}{2}\) and the definition of the Gamma function. We prove the first two items; the other items are proven in a similar way.
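Identity (21) can be confirmed numerically, e.g. with a midpoint-rule quadrature truncated at t = 30 (a sketch; the truncation point and grid size are arbitrary choices, and the tail beyond the cutoff is negligible):

```python
import math

def moment(p, N=100_000, T=30.0):
    # midpoint rule for the integral of t^p e^{-t^2/2} over [0, T]
    h = T / N
    return sum((((j + 0.5) * h) ** p) * math.exp(-(((j + 0.5) * h) ** 2) / 2)
               for j in range(N)) * h

for p in (1, 2, 3, 4.5):
    rhs = 2 ** ((p - 1) / 2) * math.gamma((p + 1) / 2)
    print(p, moment(p), rhs)
```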

  1. (i)

    Using a polar change of variables and Eq. (21) we get

    $$\begin{aligned} \int _{{\mathbb {R}}^{k+1}_\mathrm{pos}}\xi _1^m \mathrm{e}^{-\frac{||\xi ||^2}{2}} \mathrm{d}\xi = \Gamma \left( \frac{k+m+1}{2}\right) 2^{\frac{k+m-1}{2}} G(m). \end{aligned}$$

    where \({\mathbb {R}}^{k+1}_\mathrm{pos}=\left\{ x_i\ge 0 \ \forall i=1,\ldots ,k+1\right\} \) is the positive orthant. On the other hand using Fubini, we get

$$\begin{aligned} \int _{{\mathbb {R}}^{k+1}_{pos}}\xi _1^m \mathrm{e}^{-\frac{||\xi ||^2}{2}} \mathrm{d}\xi = \left( \int _0^{+\infty }\mathrm{e}^{-\frac{t^2}{2}}\,\mathrm {d}t\right) ^k \left( \int _0^{+\infty }t^m \mathrm{e}^{-\frac{t^2}{2}}\,\mathrm {d}t\right) = \left( \frac{\pi }{2}\right) ^{\frac{k}{2}} \ 2^{\frac{m-1}{2}} \Gamma \left( \frac{m+1}{2} \right) . \end{aligned}$$

Equating the right-hand sides of these two equalities gives the result.

  2. (ii)

Using (i), we have \(\frac{G(4)}{G(2)}= \frac{\Gamma \left( \frac{5}{2}\right) }{\Gamma \left( \frac{3}{2}\right) } \frac{\Gamma \left( \frac{k+3}{2}\right) }{\Gamma \left( \frac{k+5}{2}\right) }\). But \(\Gamma \left( \frac{k+5}{2}\right) =\frac{k+3}{2}\Gamma \left( \frac{k+3}{2}\right) \) and \(\frac{\Gamma \left( \frac{5}{2}\right) }{\Gamma \left( \frac{3}{2}\right) }=3/2\).

\(\square \)
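Since the integrands involve only even powers, the ratios in items (ii)–(v) over \(S^k_{pos}\) coincide with the corresponding ratios over the whole sphere, so they can be estimated by sampling uniformly on \(S^k\) (normalized Gaussian vectors). A Monte Carlo sketch (k = 5 and the sample size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
k = 5
g = rng.standard_normal((500_000, k + 1))
xi = g / np.linalg.norm(g, axis=1, keepdims=True)   # uniform samples on S^k

m2 = np.mean(xi[:, 0]**2)
r42 = np.mean(xi[:, 0]**4) / m2                     # (ii): 3/(k+3)
r62 = np.mean(xi[:, 0]**6) / m2                     # (iii): 15/((k+3)(k+5))
r2222 = np.mean(xi[:, 0]**2 * xi[:, 1]**2) / m2     # (iv): 1/(k+3)
print(r42, 3 / (k + 3))
print(r62, 15 / ((k + 3) * (k + 5)))
print(r2222, 1 / (k + 3))
```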

Remark 8

Observe that the coefficient in front of the integral in (20) is \(\frac{R}{\sqrt{k+1}}\frac{1}{G(2)}\).

Proposition 14

The operator \({{\,\mathrm{D}\,}}_\mu \psi \) has a real eigenvalue \(C_\psi =\frac{k+1}{k+3}\). Thus (by Remark 7) \({{\,\mathrm{D}\,}}_\mu \psi \) is a nonzero multiple of the identity. In particular, \(\psi \) is a local diffeomorphism near \(\mu \).

Proof

First of all, in (20) we integrate a bounded function over a compact domain; thus in computing \(h_i\) we can interchange integration and differentiation and get

$$\begin{aligned} h_i(x)=\frac{R}{\sqrt{k+1}}\frac{1}{G(2)}\int _{S^k_{pos}}\frac{x_i\ \xi _i^2}{\sqrt{\sum _{j=1}^{k+1}x_j^2\xi _j^2}}\ \mathrm{d}S^k(\xi ). \end{aligned}$$

Let \(\gamma \) be the geodesic on the sphere starting at \(\mu \) with initial velocity \({\dot{\gamma }}_0=\frac{\sqrt{k}}{\sqrt{k+1}}(1,-\frac{1}{k},\ldots ,-\frac{1}{k})\); explicitly, \(\gamma : {\mathbb {R}}\rightarrow S^k\) is given by

$$\begin{aligned} \gamma (t)=\frac{1}{\sqrt{k+1}}\left( \cos t +\sqrt{k}\sin t, \cos t -\frac{\sin t}{\sqrt{k}},\ldots , \cos t -\frac{\sin t}{\sqrt{k}}\right) . \end{aligned}$$
(22)

Along this particular geodesic,

$$\begin{aligned} h_1(\gamma (t))=\frac{R}{G(2)\sqrt{k+1}}\int _{S^k_\mathrm{pos}}\frac{(\cos t +\sqrt{k}\sin t)\ \xi _1^2}{\sqrt{\xi _1^2(\cos t +\sqrt{k}\sin t)^2+(1-\xi _1^2)(\cos t -\frac{\sin t}{\sqrt{k}})^2}}\ \mathrm{d}S^k(\xi ), \end{aligned}$$

which we can expand as:

$$\begin{aligned} h_1(\gamma (t))&=\frac{R}{G(2)\sqrt{k+1}}\int _{S^k_\mathrm{pos}}\left[ \xi _1^2+t \ \frac{k+1}{\sqrt{k}}(\xi _1^2-\xi _1^4)\right] \mathrm{d}S^k(\xi )+{\mathcal {O}}(t^2) \nonumber \\&=\frac{R}{G(2)\sqrt{k+1}}\left[ G(2)+t \ \frac{k+1}{\sqrt{k}}\left( G(2)-G(4)\right) \right] +{\mathcal {O}}(t^2). \end{aligned}$$
(23)

Using Proposition 13, we get

$$\begin{aligned} h_1(\gamma (t))=\frac{R}{\sqrt{k+1}}\left[ 1+t \ \sqrt{k} \ \frac{k+1}{k+3}\right] +{\mathcal {O}}(t^2). \end{aligned}$$
(24)

Similarly for \(j\ge 2\), we get

$$\begin{aligned} h_j(\gamma (t))=\frac{R}{\sqrt{k+1}}\left[ 1-t \ \frac{1}{\sqrt{k}} \ \frac{k+1}{k+3}\right] +{\mathcal {O}}(t^2). \end{aligned}$$
(25)

Moreover

$$\begin{aligned} \psi _i (\gamma (t))=\frac{h_i\left( \gamma (t)\right) }{\sqrt{\left( h_1(\gamma (t)) \right) ^2+k\left( h_2(\gamma (t)) \right) ^2}}. \end{aligned}$$

Taking once again the linear Taylor polynomial in t we get

$$\begin{aligned} \psi _1(\gamma (t))&=\frac{1}{\sqrt{k+1}}+t\ \frac{\sqrt{k}}{\sqrt{k+1}} \cdot \frac{k+1}{k+3}+{\mathcal {O}}(t^2) \\ \psi _2(\gamma (t))&=\frac{1}{\sqrt{k+1}}-t\ \frac{1}{\sqrt{k}\sqrt{k+1}} \cdot \frac{k+1}{k+3}+{\mathcal {O}}(t^2). \end{aligned}$$

Recalling that \({{\,\mathrm{D}\,}}_\mu \psi \cdot {\dot{\gamma }}_0=\frac{{{\,\mathrm{d}\,}}}{{{\,\mathrm{d}\,}}t}|_{t=0}\psi (\gamma (t))\) we find that \({\dot{\gamma }}_0\) is an eigenvector with the eigenvalue \(\frac{k+1}{k+3}\). \(\square \)
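For k = 1 this eigenvalue can be checked numerically: with h in its reduced form \(\frac{1}{\pi }\int _0^{\pi /2}\sqrt{x^2\cos ^2\theta +y^2\sin ^2\theta }\,\mathrm {d}\theta \) (the k = 1 case of (20)), one differentiates \(\psi =\nabla h/\Vert \nabla h\Vert \) along the geodesic (22) by finite differences. A sketch (grid size and step are arbitrary choices):

```python
import numpy as np

Nth = 20_000
dth = (np.pi / 2) / Nth
th = (np.arange(Nth) + 0.5) * dth   # midpoint nodes on [0, pi/2]

def grad_h(x, y):
    # gradient of h(x, y) = (1/pi) * integral of sqrt(x^2 cos^2 t + y^2 sin^2 t), k = 1
    den = np.sqrt(x**2 * np.cos(th)**2 + y**2 * np.sin(th)**2)
    hx = (x * np.cos(th)**2 / den).sum() * dth / np.pi
    hy = (y * np.sin(th)**2 / den).sum() * dth / np.pi
    return np.array([hx, hy])

def psi(x, y):
    g = grad_h(x, y)
    return g / np.linalg.norm(g)

def gamma(t):
    # the geodesic (22) for k = 1, through mu = (1, 1)/sqrt(2)
    return (np.cos(t) + np.sin(t)) / np.sqrt(2), (np.cos(t) - np.sin(t)) / np.sqrt(2)

eps = 1e-4
dpsi = (psi(*gamma(eps)) - psi(*gamma(-eps))) / (2 * eps)   # D_mu psi applied to gamma'(0)
v0 = np.array([1.0, -1.0]) / np.sqrt(2)                     # the velocity gamma'(0)
C_psi = float(dpsi @ v0)
print(C_psi)   # ≈ (k+1)/(k+3) = 0.5
```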

Remark 9

By (24) and (25) we note that \(\sum _{i=1}^{k+1}{{\,\mathrm{D}\,}}_\mu h_i\cdot {\dot{\gamma }}_0=0.\) Thus \({\dot{\gamma }}_0\) lies in the kernel of the linear form \(\sum _{i=1}^{k+1}{{\,\mathrm{D}\,}}_\mu h_i\) and, by the same argument as in Remark 7, this form vanishes identically: \(\sum _{i=1}^{k+1}{{\,\mathrm{D}\,}}_\mu h_i=0\).

Proposition 15

In the above notation,

$$\begin{aligned} {{\,\mathrm{D}\,}}^2_\mu (r^2) (x,x)= \frac{2}{C_\psi ^2}\left( \sum _{i=1}^{k+1}\left( {{\,\mathrm{D}\,}}_\mu h_i\cdot x\right) ^2 + \frac{R}{\sqrt{k+1}}\sum _{i=1}^{k+1}{{\,\mathrm{D}\,}}^2_\mu h_i (x,x) \right) \ \forall x \in \mu ^{\perp }. \end{aligned}$$

Proof

Take \(f=h_i\) and \(g=\psi ^{-1}\) in Eq. (19). Sum over \(i\in \{1,\ldots ,k+1\}\) and use Remark 9. \(\square \)

We are finally ready to compute the Hessian of \(r^2\).

Proposition 16

For all \(x\in \mu ^{\perp }\), we have

$$\begin{aligned} {{\,\mathrm{D}\,}}^2_\mu (r^2) (x,x)= -4 \frac{R^2}{k+1} \ ||x||^2. \end{aligned}$$

Proof

Once again we use the geodesic \(\gamma \) defined by (22) and compute \({{\,\mathrm{D}\,}}^2_\mu (r^2) ({\dot{\gamma }}_0,{\dot{\gamma }}_0)\) using Proposition 15. With the help of (24) and (25), we get

$$\begin{aligned} \sum _{i=1}^{k+1}\left( {{\,\mathrm{D}\,}}_\mu h_i\cdot {\dot{\gamma }}_0\right) ^2=\left( {{\,\mathrm{D}\,}}_\mu h_1\cdot {\dot{\gamma }}_0\right) ^2+k\left( {{\,\mathrm{D}\,}}_\mu h_2\cdot {\dot{\gamma }}_0\right) ^2=R^2 \frac{(k+1)^2}{(k+3)^2}. \end{aligned}$$

To compute the second derivative of \(h_i\) we need to take the Taylor series of equation (23) up to degree 2. The second order term for \(h_1\) is \(\frac{t^2}{2}\frac{k+1}{k}\left[ 2\xi _1^2-(3k+5)\xi _1^4+3(k+1)\xi _1^6\right] \) which, once integrated and using Proposition 13, gives \({{\,\mathrm{D}\,}}^2_\mu h_1({\dot{\gamma }}_0,{\dot{\gamma }}_0)=-\frac{R}{\sqrt{k+1}} \frac{(k+1)(7k-1)}{(k+3)(k+5)}\).

Similarly, one gets \({{\,\mathrm{D}\,}}^2_\mu h_2({\dot{\gamma }}_0,{\dot{\gamma }}_0)=\frac{R}{\sqrt{k+1}} \frac{(k+1)}{k}\left( -1+9\frac{(k+1)}{(k+3)(k+5)}\right) \). Combining those, we obtain

$$\begin{aligned} \sum _{i=1}^{k+1}{{\,\mathrm{D}\,}}^2_\mu h_i ({\dot{\gamma }}_0,{\dot{\gamma }}_0)={{\,\mathrm{D}\,}}^2_\mu h_1 ({\dot{\gamma }}_0,{\dot{\gamma }}_0)+k{{\,\mathrm{D}\,}}^2_\mu h_2 ({\dot{\gamma }}_0,{\dot{\gamma }}_0)=-\frac{R}{\sqrt{k+1}} \frac{(k+1)^2}{(k+3)}. \end{aligned}$$

The result follows from Proposition 15, Remark 6 and the fact that \(\Vert {\dot{\gamma }}_0\Vert =1\). \(\square \)
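For k = 1 the whole chain (Propositions 13–16) can be cross-checked numerically. In exponential coordinates the geodesic (22) has coordinates \(t{\dot{\gamma }}_0\), so finite differences of \(h_i\) along it are exactly the exponential-coordinate derivatives. The sketch below (using the reduced k = 1 form of h, with arbitrary grid sizes) plugs them into Proposition 15 and compares with \(-4R^2/(k+1)\):

```python
import numpy as np

Nth = 20_000
dth = (np.pi / 2) / Nth
th = (np.arange(Nth) + 0.5) * dth   # midpoint nodes

def grad_h(x, y):
    # gradient of h(x, y) = (1/pi) * integral of sqrt(x^2 cos^2 t + y^2 sin^2 t), k = 1
    den = np.sqrt(x**2 * np.cos(th)**2 + y**2 * np.sin(th)**2)
    return np.array([(x * np.cos(th)**2 / den).sum(),
                     (y * np.sin(th)**2 / den).sum()]) * dth / np.pi

def h_along_geodesic(t):
    # (h_1, h_2) along the geodesic (22) for k = 1
    return grad_h((np.cos(t) + np.sin(t)) / np.sqrt(2),
                  (np.cos(t) - np.sin(t)) / np.sqrt(2))

k, eps = 1, 1e-3
h0 = h_along_geodesic(0.0)
hp = h_along_geodesic(eps)
hm = h_along_geodesic(-eps)
d1 = (hp - hm) / (2 * eps)        # first derivatives of h_i along the geodesic
d2 = (hp - 2 * h0 + hm) / eps**2  # second derivatives of h_i along the geodesic

R = np.linalg.norm(h0)            # R = |grad h(mu)|, here ≈ sqrt(2)/4
C_psi = (k + 1) / (k + 3)
hess_r2 = (2 / C_psi**2) * ((d1**2).sum() + R / np.sqrt(k + 1) * d2.sum())
print(hess_r2, -4 * R**2 / (k + 1))   # both ≈ -0.25
```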

Remark 10

The Hessians of the various intermediate functions (such as the \(h_i\)’s) depend on the choice of local coordinates and make sense only if we consider them as functions on \(\mu ^{\perp }\). However since \(\mu \) is a critical point of \(r^2\), its Hessian at this point is well-defined and does not depend on the choice of local coordinates.

Finally, we write (8) in Riemannian polar coordinates:

$$\begin{aligned} I_k(n)=\int _{{\tilde{S}}^{k-1}_+}\int _0^{l(v)} \mathrm{e}^{-(n-k)\left( -\frac{k+1}{2}\log (r^2)-\log (p)\right) }q \sqrt{\det g} \ \rho ^{k-1}\ \mathrm{d}\rho \ \mathrm{d}S^{k-1}(v) \end{aligned}$$

where g is the spherical metric of \(S^k\) on \(\mu ^\perp \), the angular domain is given by \({\tilde{S}}^{k-1}_+:=\pi \circ \exp _\mu ^{-1}(S^k_+)=\mu ^\perp \cap S^k \cap F_k\) and l(v) is the time needed to reach the boundary of the domain \(S^k_+\) starting at \(\mu \) with velocity v.

In order to apply Theorem 7 we need to use the Taylor series of the various functions appearing in the integrand. A simple (but rather tedious) computation leads to:

$$\begin{aligned}&I_k(n)=\frac{2^{K}}{(k+1)^{\frac{K-(k+1)^2}{2}}} \int _{{\tilde{S}}^{k-1}_+}\prod _{i<j}|x_i-x_j|\\&\quad \times \int _0^{l(v)} \mathrm{e}^{-(n-k)\left( -\frac{k+1}{2} \log \left( \frac{R^2}{k+1}\right) +\rho ^2(k+2) +{\mathcal {O}}(\rho ^3)\right) }(\rho ^{K+k-1}+{\mathcal {O}}(\rho ^{K+k}))\ \mathrm{d}\rho \ \mathrm{d}S^{k-1} \end{aligned}$$

where \(K=\left( {\begin{array}{c}k+1\\ 2\end{array}}\right) =\frac{k(k+1)}{2}\).

Lemma 17

For all \(v\in {\tilde{S}}^{k-1}_+\), \(l(v)\ge \tan ^{-1}(1/\sqrt{k})\).

Proof

\(S^k_+\) is the (geodesically) convex hull on \(S^k\subset {\mathbb {R}}^{k+1}\) of the points \(\alpha _1:=(1,0,\ldots ,0)\), \(\alpha _2:=\frac{1}{\sqrt{2}}(1,1,0,\ldots ,0)\), \(\ldots \) , \(\alpha _{k+1}=\mu \). The closest of these points to \(\mu \) (except \(\mu \) itself) is \(\alpha _k\). The cosine of the angle between \(\alpha _k\) and \(\mu \) is given by their scalar product \(\langle \alpha _k, \mu \rangle =\frac{\sqrt{k}}{\sqrt{k+1}} \). The result follows from the formula \(\tan (\cos ^{-1}(x))=\frac{\sqrt{1-x^2}}{x}\). \(\square \)

Thus the upper bound l(v) does not really matter for the asymptotics. Moreover the outermost integral is the integral of a bounded function over a compact domain and we can interchange it with the limit. We apply Theorem 7 with \(\lambda =(n-k)\), \(\mu =2\) and \(\nu =K+k\). Using Proposition 10, we find

$$\begin{aligned} I_k(n)= & {} \frac{\Gamma \left( \frac{K+k}{2}\right) }{\Gamma \left( \frac{K+1}{2}\right) } \frac{2^{\frac{K-1}{2}}\Lambda _k}{(k+1)^{\frac{K-(k+1)^2}{2}}(k+2)^{\frac{K+k}{2}}}\nonumber \\&\times \left( \frac{R}{\sqrt{k+1}}\right) ^{(n-k)(k+1)} \frac{1}{n^{\frac{K+k}{2}}}\left( 1+{\mathcal {O}}((n-k)^{-\frac{K+k+1}{2}})\right) . \end{aligned}$$
(26)

We are now (finally) ready to state the main theorem of this section.

Theorem 18

For every fixed integer \(k>0\) and as n goes to infinity, we have

$$\begin{aligned} \delta _{k,n}=a_k \cdot \left( b_k\right) ^n\cdot n^{-\frac{k(k+1)}{4}}\left( 1+{\mathcal {O}}(n^{-1})\right) \end{aligned}$$
(27)

where

$$\begin{aligned} a_k&=\Lambda _k \ \frac{2^{\frac{k(k-3)}{4}}}{\pi ^{\frac{k(k+2)}{2}}}\sqrt{k+1}\ \left( \frac{k+1}{k+2}\right) ^{\frac{k(k+3)}{4}}\left( \frac{\Gamma \left( \frac{k+1}{2} \right) }{\Gamma \left( \frac{k+2}{2} \right) }\right) ^{k(k+1)} \nonumber \\ b_k&=\left( \frac{\Gamma \left( \frac{k+2}{2} \right) }{\Gamma \left( \frac{k+1}{2} \right) }\sqrt{\pi }\right) ^{(k+1)}. \end{aligned}$$
(28)

Proof

We use (16) and (17). We need to compute the asymptotics of

$$\begin{aligned} \beta _{k,n}:=\frac{\pi ^{(k+1)(n-k)+(k+1)}}{2^{(k+1)(n-k-1)}} \frac{\Gamma \left( (k+1)(n-k)\right) }{\Gamma (\frac{n+1}{2})\Gamma (\frac{n}{2})\cdots \Gamma (\frac{n-2k}{2})}. \end{aligned}$$

Using Proposition 5, we get

$$\begin{aligned} \Gamma \left( (k+1)(n-k)\right) =\frac{(k+1)^{(k+1)(n-k)-1/2}}{(2\pi )^{k/2}}\prod _{l=0}^k \Gamma \left( n-k+\frac{l}{k+1} \right) \end{aligned}$$

and the denominator

$$\begin{aligned} \prod _{l=0}^k\Gamma \left( \frac{n+1-2l}{2} \right) \Gamma \left( \frac{n-2l}{2} \right) =\frac{\pi ^\frac{k+1}{2}}{2^{(k+1)(n-k-1)}}\prod _{l=0}^k\Gamma \left( n-2l \right) . \end{aligned}$$

Moreover using (5) we have:

$$\begin{aligned} \prod _{l=0}^k \frac{\Gamma \left( n-k+\frac{l}{k+1} \right) }{\Gamma \left( n-2l \right) }=\prod _{l=0}^k n^{-k+\frac{l}{k+1}+2l}\ \left( 1+{\mathcal {O}}(n^{-1}) \right) = n^{k/2} \ \left( 1+{\mathcal {O}}(n^{-1}) \right) . \end{aligned}$$
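This Gamma-quotient asymptotic is easy to confirm numerically with log-Gamma (a sketch for k = 2; the value of n is an arbitrary choice):

```python
import math

k, n = 2, 100_000
# log of the product of Gamma(n - k + l/(k+1)) / Gamma(n - 2l) over l = 0..k
log_prod = sum(math.lgamma(n - k + l / (k + 1)) - math.lgamma(n - 2 * l)
               for l in range(k + 1))
ratio = math.exp(log_prod - (k / 2) * math.log(n))
print(ratio)   # tends to 1 as n grows
```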

Thus

$$\begin{aligned} \beta _{k,n}= \frac{ \left( \pi (k+1) \right) ^{ (k+1)(n-k) }}{(2\pi )^{k/2} \sqrt{k+1}}\ \ n^{k/2}\ \left( 1+{\mathcal {O}}(n^{-1})\right) . \end{aligned}$$

Reintroducing it carefully into \(\delta _{k,n}=\beta _{k,n}\cdot I_k(n)\) and using (26) and Proposition 9 we get the result. \(\square \)

We notice the structure of formula (27): it consists of a factor \(a_k\) that does not depend on n, a factor \((b_k)^n\) that grows exponentially fast and a power factor \(n^{-k(k+1)/4}\). The last two are easily computable for any \(k>0\). Unfortunately the expression for \(a_k\) in (28) still depends on the constant \(\Lambda _k\), for which we were not able to find a closed formula for arbitrary k. However some particular values can be computed explicitly.

Proposition 19

We have \(\Lambda _1 = 1\) and \(\Lambda _2 = \sqrt{\frac{\pi }{3}}\).

Proof

We use Definition 10 directly.

For \(k=1\), the polynomial \(X^2+a\) has real roots if and only if \(a\le 0\). Thus \(\Lambda _1 = \int _{0}^{+\infty }\mathrm{e}^{-t} \ \mathrm{d}t = 1\).

For \(k=2\), the polynomial \(X^3+aX-b\) has all real roots if and only if the discriminant \(\Delta = -4 a^3-27 b^2\) is positive. For fixed \(a=-t\), this means \(b^2\le \frac{4}{27}\ t^3\) i.e. \(b\in \left[ -\frac{2}{3\sqrt{3}} t^{3/2}, +\frac{2}{3\sqrt{3}} t^{3/2}\right] \). Thus

$$\begin{aligned} \Lambda _2=\frac{4}{3\sqrt{3}}\int _{0}^{+\infty }t^{3/2} \mathrm{e}^{-t} \ \mathrm{d}t = \frac{4}{3\sqrt{3}} \Gamma \left( \frac{5}{2} \right) = \frac{4}{3\sqrt{3}}\ \frac{3\sqrt{\pi }}{4}=\sqrt{\frac{\pi }{3}}. \end{aligned}$$

\(\square \)
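Both the discriminant criterion and the value of \(\Lambda _2\) can be checked numerically (a sketch; the sampling range, sample count and the near-degeneracy cutoff are arbitrary choices):

```python
import math
import numpy as np

# Lambda_2 = (4 / (3 sqrt 3)) * Gamma(5/2) indeed simplifies to sqrt(pi/3)
lam2 = 4 / (3 * math.sqrt(3)) * math.gamma(2.5)
print(lam2, math.sqrt(math.pi / 3))

# X^3 + aX - b has three real roots iff the discriminant -4a^3 - 27b^2 is positive
rng = np.random.default_rng(1)
for _ in range(500):
    a, b = rng.uniform(-3, 3, size=2)
    disc = -4 * a**3 - 27 * b**2
    if abs(disc) < 0.1:           # skip near-degenerate samples
        continue
    roots = np.roots([1.0, 0.0, a, -b])
    assert bool((np.abs(roots.imag) < 1e-6).all()) == (disc > 0)
```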

This allows us to write the asymptotics of \(\delta _{k,n}\) for the first three values of k:

$$\begin{aligned} \delta _{0,n}&=1 \ ; \\ \delta _{1,n}&= \frac{8}{3\pi ^{5/2}} \cdot \left( \frac{\pi ^2}{4}\right) ^n \cdot n^{-1/2} \left( 1+{\mathcal {O}}\left( n^{-1}\right) \right) \ ;\\ \delta _{2,n}&= \frac{9\sqrt{3}}{2048\sqrt{2\pi }} \cdot 8^n \cdot n^{-3/2} \left( 1+{\mathcal {O}}\left( n^{-1}\right) \right) . \end{aligned}$$
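These three displays follow from (28) by direct substitution, with \(\Lambda _1=1\) and \(\Lambda _2=\sqrt{\pi /3}\) from Proposition 19; a quick numerical check:

```python
import math

def a(k, Lam):
    # the constant a_k from (28), with Lambda_k supplied as an argument
    G = math.gamma
    return (Lam * 2 ** (k * (k - 3) / 4) / math.pi ** (k * (k + 2) / 2)
            * math.sqrt(k + 1) * ((k + 1) / (k + 2)) ** (k * (k + 3) / 4)
            * (G((k + 1) / 2) / G((k + 2) / 2)) ** (k * (k + 1)))

def b(k):
    # the constant b_k from (28)
    return (math.gamma((k + 2) / 2) / math.gamma((k + 1) / 2)
            * math.sqrt(math.pi)) ** (k + 1)

print(a(1, 1.0), 8 / (3 * math.pi ** 2.5))           # a_1
print(b(1), math.pi ** 2 / 4)                        # b_1
print(a(2, math.sqrt(math.pi / 3)),
      9 * math.sqrt(3) / (2048 * math.sqrt(2 * math.pi)))  # a_2
print(b(2), 8.0)                                     # b_2
```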

3.2 Asymptotics in the Complex Case

One can state the same problem for the complex Grassmannian of subspaces in \({\mathbb {C}}\mathrm {P}^n\).

Recall from the introduction (Eq. (4)) that we denote by \(\delta _{k,n}^\mathbb {C}\) the number of complex k-subspaces of \({\mathbb {C}}\mathrm {P}^n\) meeting \((k+1)(n-k)\) generic subspaces of dimension \(n-k-1\). A closed formula for \(\delta _{k,n}^\mathbb {C}\) is known for every k and n (see [9, Corollary 4.15]):

$$\begin{aligned} \delta _{k,n}^\mathbb {C}= \frac{\Gamma (1)\Gamma (2)\cdots \Gamma (k+1)}{\Gamma (n-k+1)\Gamma (n-k+2)\cdots \Gamma (n+1)}(k+1)(n-k) \Gamma \left( (k+1)(n-k) \right) \nonumber \\ \end{aligned}$$
(29)

We can compute its asymptotics.

Proposition 20

For every fixed k, as \(n\rightarrow \infty \) we have

$$\begin{aligned} \delta _{k,n}^\mathbb {C}= a_k^\mathbb {C}\cdot \left( b_k^\mathbb {C}\right) ^n\cdot n^{-\frac{k(k+2)}{2}}\left( 1+{\mathcal {O}}(n^{-1})\right) \end{aligned}$$

where

$$\begin{aligned} a_k^\mathbb {C}&=\frac{\Gamma (1)\Gamma (2)\cdots \Gamma (k+1)}{(2\pi )^{k/2}(k+1)^{k(k+1)-1/2}} \\ b_k^\mathbb {C}&=\left( k+1\right) ^{(k+1)}. \end{aligned}$$

Proof

Using the Multiplication Theorem (Proposition 5) we get

$$\begin{aligned} \Gamma \left( (k+1)(n-k) \right) =\frac{(k+1)^{(k+1)(n-k)-1/2}}{(2\pi )^{k/2}}\prod _{i=0}^k \Gamma \left( n-k+\frac{i}{k+1} \right) \end{aligned}$$

When reintroduced in Eq. (29) this gives

$$\begin{aligned} \delta _{k,n}^\mathbb {C}=a_k^\mathbb {C}(b_k^\mathbb {C})^n (n-k) \prod _{i=0}^k\frac{\Gamma \left( n-k+ \frac{i}{k+1} \right) }{\Gamma \left( n-k+i+1 \right) } \end{aligned}$$
(30)

Using Proposition 4 we deduce that

$$\begin{aligned} \prod _{i=0}^k\frac{\Gamma \left( n-k+ \frac{i}{k+1} \right) }{\Gamma \left( n-k+i+1 \right) }&= \prod _{i=0}^k (n-k)^{-i k/(k+1)-1} \left( 1+{\mathcal {O}}(n^{-1})\right) \\&= n^{-k(k+2)/2-1} \left( 1+{\mathcal {O}}(n^{-1})\right) , \end{aligned}$$

which, once reintroduced in (30), gives the result. \(\square \)
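The exact formula (29) and the asymptotic of Proposition 20 can be compared directly via log-Gamma (a sketch for k = 2; the value of n is an arbitrary choice):

```python
import math

def log_delta_c(k, n):
    # log of the exact formula (29)
    s = sum(math.lgamma(i) for i in range(1, k + 2))
    s -= sum(math.lgamma(n - k + i) for i in range(1, k + 2))
    m = (k + 1) * (n - k)
    return s + math.log(m) + math.lgamma(m)

def log_asym(k, n):
    # log of a_k^C * (b_k^C)^n * n^{-k(k+2)/2}
    log_a = (sum(math.lgamma(i) for i in range(1, k + 2))
             - (k / 2) * math.log(2 * math.pi)
             - (k * (k + 1) - 0.5) * math.log(k + 1))
    return log_a + n * (k + 1) * math.log(k + 1) - (k * (k + 2) / 2) * math.log(n)

k, n = 2, 2000
ratio = math.exp(log_delta_c(k, n) - log_asym(k, n))
print(ratio)   # tends to 1 as n grows
```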

4 Periods

We start with the following elementary fact.

Proposition 21

For any positive integer n, \(\delta _{0,n}=1\).

Proof

Let us look back at the definition of the Schubert variety in (14). In the case \(k=0\), we fix \(L\in {\mathbb {G}}(n-1,n)\), i.e. a hyperplane in \({\mathbb {R}}\mathrm {P}^n\). Then \(\Omega (L)=\left\{ p\in {\mathbb {R}}\mathrm {P}^n \ | \ p \in L\right\} =L\). Thus \(\delta _{0,n}\) is the average number of points in the intersection of n random hyperplanes of \({\mathbb {R}}\mathrm {P}^n\), which is 1. \(\square \)

Recall that a real number is called a period if it is the volume of a semialgebraic subset of Euclidean space given by polynomial inequalities with rational coefficients. Let us show that \(\delta _{k,n}\) is a period. In order to prove it, we first need the following Lemma.

Lemma 22

Let \(S\subset {\mathbb {R}}^d\) be a compact semialgebraic set with defining polynomials over \({\mathbb {Q}}\) and let \(\alpha :S\rightarrow {\mathbb {R}}\) be an algebraic function with coefficients in \({\mathbb {Q}}\). Denote by \(\mathrm {vol}_S\) the volume density on the set \(\mathrm {sm}(S)\) of smooth points of S associated with the Riemannian metric induced by the ambient space \({\mathbb {R}}^d\). Then

$$\begin{aligned} \int _{S}\alpha \cdot \mathrm {vol}_S\text { belongs to the ring of periods}. \end{aligned}$$

where by definition \(\int _{S}\alpha \cdot \mathrm {vol}_S=\int _{\text {sm}(S)}\alpha \cdot \mathrm {vol}_S\).

Proof

We first decompose S into smaller pieces, each with defining polynomials with rational coefficients. To this end, observe that:

$$\begin{aligned} S=\bigcup _{i=1}^a\bigcap _{j=1}^b\left( \{f_{i,j}=0\}\cap \{g_{i,j}<0\}\right) , \end{aligned}$$

with \(f_{i,j}, g_{i,j}\in {\mathbb {Q}}[x_1, \ldots , x_d].\) Removing all the inequalities from the previous description, assuming that each \(f_{i,j}\) is irreducible, keeping only the \(f_{i,j}\)’s whose zero set is s-dimensional, and relabeling these as \(f_k\), \(k\in \{1, \ldots , \gamma \}\), we can set \(Y_{k}=\{f_k=0\}\). Next we see that there exists a semialgebraic set \(\Sigma _1\) of dimension \(\dim (\Sigma _1)<s\) such that:

$$\begin{aligned} S\backslash \Sigma _1\subseteq \bigcup _{k=1}^\gamma Y_{k}. \end{aligned}$$

By construction each \(Y_k\) has dimension s; for every \(k=1, \ldots , \gamma \), denote by \(X_k\) the set of singular points of \(Y_k\) and consider \(\Sigma _2=\Sigma _1\cup \left( \bigcup _{k=1}^\gamma X_k\right) \). Then \(S\backslash \Sigma _2\) (which coincides with S up to a set of dimension strictly less than s) is contained in

$$\begin{aligned} S\backslash \Sigma _2\subset \bigcup _{k=1}^\gamma \mathrm {sm}(Y_k). \end{aligned}$$

Because, for every \(k=1, \ldots , \gamma \), the set \(\mathrm {sm}(Y_k)\) is smooth, there exist \(1\le i_1<\cdots <i_s\le d\) such that the critical points of the projection \(\mathrm {proj}_{\mathrm {span}\{e_{i_1}, \ldots , e_{i_s}\}}\) restricted to \(Y_k\) form a set \(C_k\) of codimension one in \(Y_k\). Define

$$\begin{aligned} \Sigma _3:=\Sigma _1\cup \Sigma _2\cup \left( \bigcup _{k=1}^\gamma C_k\right) . \end{aligned}$$

\(\Sigma _3\) is a set of dimension at most \(s-1\). Set \(L_k=\mathrm {span}\{e_{i_1}, \ldots , e_{i_s}\}\) and \(p_k=\mathrm {proj}_{L_k}|_{Y_k}\). We now decompose \(S\backslash \Sigma _3\) into disjoint pieces \(S_k:=Y_k\cap S\cap \Sigma _3^c\):

$$\begin{aligned} S\backslash \Sigma _3=\coprod _{k=1}^\gamma S_k. \end{aligned}$$

Since \(\dim (\Sigma _3)<s\), we have:

$$\begin{aligned} \int _{S}\alpha \cdot \mathrm {vol}_S=\sum _{k=1}^\gamma \int _{S_k}\alpha \cdot \mathrm {vol}_S. \end{aligned}$$

Summing up: the desired integral can be written as a sum of integrals over semialgebraic sets \(S_1, \ldots , S_\gamma \), each of dimension s, each defined by polynomial equalities and inequalities with rational coefficients and with the property that there exists a map \(p_k:S_k\rightarrow L_k\simeq {\mathbb {R}}^s\) (defined over \({\mathbb {Q}}\)) which is a diffeomorphism onto its image.

For each \(k=1, \ldots , \gamma \), consider the inverse of the projection \(\tau _k:p_k(S_k)\rightarrow S_k\), which is also a diffeomorphism. \(\tau _k\) is semialgebraic and defined over \({\mathbb {Q}}\). In particular,

$$\begin{aligned} \int _{S}\alpha \cdot \mathrm {vol}_S= & {} \sum _{k=1}^\gamma \int _{S_k} \alpha \cdot \mathrm {vol}_S\\= & {} \sum _{k=1}^\gamma \int _{p_k(S_k)} \alpha (\tau _k(y))\cdot \sqrt{\det \left( J\tau _k(y)J\tau _k(y)^T\right) }\, \mathrm {d}y. \end{aligned}$$

Each summand in the latter formula is an integral of an algebraic function defined over \({\mathbb {Q}}\) and the domain of integration is a full-dimensional semialgebraic set in \(L_k\simeq {\mathbb {R}}^s\) defined over \({\mathbb {Q}}\). In particular, each summand is a period, and therefore the whole integral is a period. \(\square \)

Lemma 23

Let \(Z\subset {\mathbb {R}}^d\) be a compact semialgebraic set with defining polynomials over \({\mathbb {Q}}\) and let \(C_Z\) be the Vitale zonoid associated to Z. Then \(\mathrm {vol}(C_Z)\) belongs to the ring of periods.

Proof

Apply the previous Lemma with the choice of \(S=Z\times \cdots \times Z\subset {\mathbb {R}}^{d\times d}\) and

$$\begin{aligned} \alpha (z_1, \ldots , z_d)=\sqrt{\det \left( [z_1, \ldots , z_d][z_1, \ldots , z_d]^T\right) }. \end{aligned}$$

\(\square \)

Corollary 1

Each \(\delta _{k,n} \) belongs to the ring of periods.

Proof

We use Eqs. (15) and (7). Since periods form a ring and values of the Gamma function at rational points are periods, this proves the statement. \(\square \)

5 A Line Integral for \(\delta _{1,n}\)

In the case of \(\delta _{1,n}\) we can prove the following formula (the idea of the proof is due to Erik Lundberg).

Proposition 24

In the above notation,

$$\begin{aligned} \delta _{1,n}=-2 \pi ^{2n-2}c(n)\int _{0}^1{ L(u)^{n-1}\frac{\mathrm {d}}{\mathrm {d}u}\left( \cosh \left( w(u)\right) \right) \,\mathrm {d}u} \end{aligned}$$

where

$$\begin{aligned} c(n)=\frac{\Gamma \left( 2n-2\right) }{\Gamma \left( n\right) \Gamma \left( n-2\right) }=\frac{n(n-2)}{2}\delta _{1,n}^\mathbb {C}, \end{aligned}$$

\(L=F\cdot G\) and \(w=\log (F/G)\) with

$$\begin{aligned} F(u)&:=\frac{1}{\pi }\int _{0}^{\pi /2} \frac{u \ \sin ^2(\theta )}{\sqrt{\cos ^2\theta +u^2 \sin ^2\theta }}\,\mathrm {d}\theta , \\ G(u)&:=\frac{1}{\pi }\int _{0}^{\pi /2} \frac{ \sin ^2(\theta )}{\sqrt{\sin ^2\theta +u^2 \cos ^2\theta }}\,\mathrm {d}\theta . \end{aligned}$$

Proof

From [9, Equation (6.12)], we have

$$\begin{aligned} \delta _{1,n}= \pi ^{2n-2}c(n)\int _0^{\pi /4}\left( r(\theta )^2\, \cos \theta \sin \theta \right) ^{n-1} \frac{(\cos \theta )^2-(\sin \theta )^2}{(\cos \theta \, \sin \theta )^2}\mathrm {d}\theta \end{aligned}$$
(31)

Here \(c(n)=\frac{\Gamma \left( 2n-2\right) }{ \Gamma \left( n\right) \Gamma \left( n-2\right) }\).

Moreover from (18) we have \(r(\theta )^2 = | \nabla h (\cos t, \sin t)|^2\), where

$$\begin{aligned} \tan \theta = \frac{h_y(\cos t, \sin t)}{h_x (\cos t, \sin t)} \end{aligned}$$
(32)

and h is given by (10) and can be reduced to:

$$\begin{aligned} h(x,y) =\frac{1}{\pi } \int _0^{\pi /2}\sqrt{x^2 \cos ^2(\theta )+y^2\sin ^2(\theta )}\, \mathrm {d}\theta . \end{aligned}$$
(33)

Let \( p(t) = h_y(\cos t, \sin t) \) and \( q(t) = h_x(\cos t, \sin t)\). Then we get \(\cos ^2 \theta =\frac{q(t)^2}{q(t)^2 + p(t)^2}\), \(\sin ^2 \theta =\frac{p(t)^2}{q(t)^2 + p(t)^2}\) and \(r(\theta )^2 = p(t)^2+q(t)^2\). If we change the variable of integration in (31) to t, then the integrand becomes

$$\begin{aligned} \left( (q^2 + p^2)\frac{pq}{p^2+q^2}\right) ^{n-1}\frac{q^2-p^2}{p^2+q^2}\frac{(q^2+p^2)^2}{q^2p^2} \frac{\mathrm{d}}{\mathrm{d}t} \left( \frac{p(t)}{q(t)} \right) \frac{q(t)^2}{q(t)^2 + p(t)^2} \mathrm {d}t \end{aligned}$$

where we have used (32) to determine \(\mathrm{d} \theta = \frac{\mathrm{d}}{\mathrm{d}t} \left( \frac{p(t)}{q(t)} \right) \frac{q(t)^2}{q(t)^2 + p(t)^2} \mathrm {d}t\). So (31) becomes

$$\begin{aligned} \delta _{1,n}= \pi ^{2n-2} c(n) \int _0^{\pi /4} \left( p(t)q(t)\right) ^{n-3}(q(t)^2 - p(t)^2 ) q(t)^2 \frac{\mathrm{d}}{\mathrm{d}t} \left( \frac{p(t)}{q(t)} \right) \mathrm {d}t. \end{aligned}$$

Next we make the change of variables \(u= \tan t\). It is not difficult to see that \(p(t)=F(u(t))\) and \(q(t)=G(u(t))\) using (33) and the definition of F and G in the proposition. The integral becomes:

$$\begin{aligned} \delta _{1,n}= \pi ^{2n-2}c(n)\int _0^{1} \left( F(u)G(u)\right) ^{n-3}(G(u)^2 - F(u)^2) G(u)^2 \frac{\mathrm{d}}{\mathrm{d}u} \left( \frac{F(u)}{G(u)} \right) \mathrm{d}u. \end{aligned}$$

Now we let \(L=F\cdot G\) and \(H=F/G\). The integrand becomes \(L^{n-1} (1/H-H) H'/H\). The last factor suggests using the notation \(w:=\log (H)\). We obtain

$$\begin{aligned} \delta _{1,n}=-2 \pi ^{2n-2}c(n)\int _0^{1} L(u)^{n-1} \sinh (w(u)) \frac{\mathrm {d}}{\mathrm {d}u} w(u) \mathrm {d}u \end{aligned}$$

Since \(\sinh \left( w(u)\right) \frac{\mathrm {d}}{\mathrm {d}u}w(u)=\frac{\mathrm {d}}{\mathrm {d}u}\cosh \left( w(u)\right) \), this is precisely the claimed formula. \(\square \)
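As a numerical sanity check, the line integral of Proposition 24 can be evaluated by quadrature and compared with the k = 1 asymptotic of Theorem 18. In the sketch below F and G carry the \(1/\pi \) prefactor of h in (33), so that \(p=F\circ u\) and \(q=G\circ u\) hold exactly; the grid sizes and the value of n are arbitrary choices.

```python
import math
import numpy as np

Nth = 2000
dth = (np.pi / 2) / Nth
th = (np.arange(Nth) + 0.5) * dth     # midpoint nodes in theta

def F(u):
    # (1/pi) * integral of u sin^2 / sqrt(cos^2 + u^2 sin^2)
    return (u * np.sin(th)**2 / np.sqrt(np.cos(th)**2 + u**2 * np.sin(th)**2)).sum() * dth / np.pi

def G(u):
    # (1/pi) * integral of sin^2 / sqrt(sin^2 + u^2 cos^2)
    return (np.sin(th)**2 / np.sqrt(np.sin(th)**2 + u**2 * np.cos(th)**2)).sum() * dth / np.pi

Nu = 5000
du = 1.0 / Nu
u = (np.arange(Nu) + 0.5) * du        # midpoint nodes in u
Fv = np.array([F(x) for x in u])
Gv = np.array([G(x) for x in u])
L = Fv * Gv
coshw = np.cosh(np.log(Fv / Gv))

def delta(n):
    # the line integral of Proposition 24
    c = math.exp(math.lgamma(2 * n - 2) - math.lgamma(n) - math.lgamma(n - 2))
    integral = (L**(n - 1) * np.gradient(coshw, du)).sum() * du
    return -2 * c * integral * math.pi**(2 * n - 2)

n = 200
asym = 8 / (3 * math.pi**2.5) * (math.pi**2 / 4)**n / math.sqrt(n)  # Theorem 18, k = 1
ratio = delta(n) / asym
print(ratio)   # ≈ 1, up to O(1/n)
```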