1 Statement of the Problem

1.1 Introduction

It is well known that the sum of angles in any plane triangle is constant, whereas the sum of solid d-dimensional angles at the vertices of a d-dimensional simplex is not, starting with dimension \(d=3\). It is therefore natural to ask what the “average” angle sum of a d-dimensional simplex is. To define the notion of average, we put a probability measure on the set of simplices as follows. Let \(X_1,\ldots ,X_n\) be independent, identically distributed (i.i.d.) random points in \(\mathbb {R}^{n-1}\) with probability distribution \(\mu \). Consider a random simplex defined as their convex hull:

$$\begin{aligned}{}[X_1,\ldots ,X_n] := \{\lambda _1X_1+\cdots +\lambda _n X_n:\lambda _1+\cdots +\lambda _{n}=1, \,\lambda _1\ge 0,\ldots , \lambda _{n}\ge 0\}. \end{aligned}$$

For the class of distributions studied here, this simplex is non-degenerate (i.e., has a non-empty interior) a.s. Let \(\beta ([X_1,\ldots ,X_{k}], [X_1,\ldots ,X_{n}])\) denote the internal angle of the simplex \([X_1,\ldots ,X_n]\) at its \((k-1)\)-dimensional face \([X_1,\ldots ,X_k]\). Similarly, we denote by \(\gamma ([X_1,\ldots ,X_{k}], [X_1,\ldots ,X_{n}])\) the external (or normal) angle of \([X_1,\ldots ,X_n]\) at \([X_1,\ldots ,X_k]\). The exact definitions of internal and external angles will be recalled in Sect. 4.1; see also the book [35] for an extensive account of stochastic geometry. We agree to choose the units of measurement for angles in such a way that the full-space angle equals 1. We shall be interested in the expected values of the above-defined angles. The special case when \(\mu \) is a multivariate normal distribution has been studied in [12, 13, 21], where the following theorem has been demonstrated.

Theorem 1.1

If \(X_1,\ldots ,X_n\) are i.i.d. random points in \(\mathbb {R}^{n-1}\) having a non-degenerate multivariate Gaussian distribution, then the expected internal angle of \([X_1,\ldots ,X_n]\) at the k-vertex face \([X_1,\ldots ,X_k]\) coincides with the internal angle of the regular \((n-1)\)-dimensional simplex \([e_1,\ldots ,e_n]\) at its face \([e_1,\ldots ,e_k]\), for all \(k\in \{1,\ldots ,n\}\). Here, \(e_1,\ldots ,e_n\) denote the standard orthonormal basis of \(\mathbb {R}^n\). The statement remains true if internal angles are replaced by the external ones.

1.2 Beta and Beta\('\) Distributions

In the present paper we shall be interested in the case when \(\mu \) belongs to one of the following two remarkable families of probability distributions introduced by Miles [27] and studied by Ruben and Miles [33]. A random vector in \(\mathbb {R}^d\) has a d-dimensional beta distribution with parameter \(\beta >-1\) if its Lebesgue density is

$$\begin{aligned} f_{d,\beta }(x)=c_{d,\beta } ( 1-\Vert x \Vert ^2)^{\beta },\qquad \Vert x\Vert <1,\quad c_{d,\beta }= \frac{ \Gamma ( {d}/{2}+\beta +1 ) }{\pi ^{ {d}/{2} } \Gamma ( \beta +1)}. \end{aligned}$$
(1)

Here, \(\Vert x\Vert = (x_1^2+\cdots +x_d^2)^{1/2}\) denotes the Euclidean norm of the vector \(x= (x_1,\ldots ,x_d)\in \mathbb {R}^d\). Similarly, a random vector in \(\mathbb {R}^d\) has beta\('\) distribution with parameter \(\beta >d/2\) if its Lebesgue density is given by

$$\begin{aligned} \tilde{f}_{d,\beta }(x)=\tilde{c}_{d,\beta } ( 1+\Vert x \Vert ^2)^{-\beta },\qquad x\in \mathbb {R}^d,\quad \tilde{c}_{d,\beta }= \frac{ \Gamma ( \beta ) }{\pi ^{ {d}/{2} } \Gamma ( \beta - {d}/{2})}. \end{aligned}$$
(2)

The following particular cases are of special interest:

  1. (a)

    The beta distribution with \(\beta =0\) is the uniform distribution in the unit ball \(\mathbb {B}^{d}:=\{x\in \mathbb {R}^d: \Vert x\Vert \le 1\}\).

  2. (b)

    The weak limit of the beta distribution as \(\beta \downarrow -1\) is the uniform distribution on the unit sphere \(\mathbb {S}^{d-1} := \{x\in \mathbb {R}^d: \Vert x\Vert = 1\}\); see [19]. In the following, we write \(f_{d,-1}\) for the uniform distribution on \(\mathbb {S}^{d-1}\), and the results of the present paper apply to the case \(\beta =-1\).

  3. (c)

    The standard normal distribution on \(\mathbb {R}^d\) is the weak limit of both beta and beta\('\) distributions (after suitable rescaling) as \(\beta \rightarrow +\infty \); see [20, Lem. 1.1].

  4. (d)

    The beta\('\) distribution \({\tilde{f}}_{n-1,n/2}\) on \(\mathbb {R}^{n-1}\) with \(\beta = n/2\) is the image of the uniform distribution on the upper half-sphere \(\mathbb {S}^{n-1}_+\) under the so-called gnomonic projection [18, Prop. 2.2]; see also [17] for further applications of this observation.
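For illustration, case (d) translates directly into a sampling recipe: a uniform random point on the upper half-sphere, pushed forward by the gnomonic projection, has the beta\('\) density \({\tilde{f}}_{n-1,n/2}\). The following Python sketch (with our own, hypothetical function name) implements this; it is only an illustration of the cited fact from [18], not code taken from there.

```python
import numpy as np

def sample_beta_prime_gnomonic(n, size, rng=None):
    """Sample from the beta' density f~_{n-1, n/2} on R^{n-1} via the gnomonic
    projection of uniform random points on the upper half-sphere S^{n-1}_+."""
    rng = np.random.default_rng() if rng is None else rng
    z = rng.standard_normal((size, n))
    z /= np.linalg.norm(z, axis=1, keepdims=True)   # uniform points on the sphere S^{n-1}
    z[:, -1] = np.abs(z[:, -1])                     # fold onto the upper half-sphere
    return z[:, :-1] / z[:, -1:]                    # gnomonic projection to R^{n-1}
```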

1.3 Expected Internal Angles

Let \(X_1,\ldots ,X_{n}\) be independent random points in \(\mathbb {R}^{n-1}\) distributed according to the beta distribution \(f_{n-1,\beta }\), where \(\beta \ge -1\). Their convex hull \([X_1,\ldots ,X_n]\) is called the \((n-1)\)-dimensional beta simplex. We shall be interested in the expected internal angles of these random simplices, denoted by

$$\begin{aligned} J_{n,k}(\beta ) := {\mathbb {E}}\,\beta ([X_1,\ldots ,X_{k}], [X_1,\ldots ,X_{n}]), \end{aligned}$$

for all \(n\in \mathbb {N}\) and \(k\in \{1,\ldots ,n\}\). By definition, \(J_{n,n}(\beta ) = 1\) for all \(n\in \mathbb {N}\). Similarly, let \({\tilde{X}}_1,\ldots ,{\tilde{X}}_{n}\) be independent random points in \(\mathbb {R}^{n-1}\) distributed according to the beta\('\) distribution \({\tilde{f}}_{n-1,\beta }\), where \(\beta > (n-1)/2\). Their convex hull \([{\tilde{X}}_1,\ldots ,{\tilde{X}}_n]\) is called the \((n-1)\)-dimensional beta\('\) simplex and its expected internal angles are denoted by

$$\begin{aligned} {\tilde{J}}_{n,k}(\beta ) := {\mathbb {E}}\,\beta ([{\tilde{X}}_1,\ldots ,{\tilde{X}}_{k}], [{\tilde{X}}_1,\ldots ,{\tilde{X}}_{n}]) \end{aligned}$$

for all \(n\in \mathbb {N}\) and \(k\in \{1,\ldots ,n\}\). Again, we define \({\tilde{J}}_{n,n}(\beta ) = 1\) for all \(n\in \mathbb {N}\). Note that the subscripts n, respectively k, refer to the number of vertices of the simplex, respectively, of the face of interest, rather than to the corresponding dimensions. By exchangeability, for both beta and beta\('\) simplices, it does not matter which k-vertex face is taken. Hence, the expected sum of internal angles at all k-vertex faces of the corresponding simplex is

$$\begin{aligned} \mathbb {J}_{n,k}(\beta ) := \left( {\begin{array}{c}n\\ k\end{array}}\right) J_{n,k}(\beta ), \qquad {\tilde{\mathbb {J}}}_{n,k}(\beta ) := \left( {\begin{array}{c}n\\ k\end{array}}\right) {\tilde{J}}_{n,k}(\beta ). \end{aligned}$$

The triangular arrays \(J_{n,k}(\beta )\) and \({\tilde{J}}_{n,k}(\beta )\) appeared in [20] together with the closely related arrays \(I_{n,k}(\alpha )\) and \({\tilde{I}}_{n,k}(\alpha )\) that are essentially the expected external angles of beta and beta\('\) simplices; see Theorems 1.2 and 1.3, below. It has been shown in [20] that many quantities appearing in stochastic geometry can be expressed in terms of \(I_{n,k}(\beta )\), \({\tilde{I}}_{n,k}(\beta )\) and \(J_{n,k}(\beta )\), \({\tilde{J}}_{n,k}(\beta )\). An incomplete list of such quantities is as follows:

  1. (a)

    The expected f-vectors of beta and beta\('\) polytopes. The beta polytopes are defined as random polytopes of the form \(P_{n,d}^{\beta }:=[Z_1,\ldots ,Z_n]\), where \(Z_1,\ldots ,Z_n\) are i.i.d. random points in \(\mathbb {R}^d\) with distribution of the form \(f_{d,\beta }\). The beta\('\) polytope \({\tilde{P}}_{n,d}^\beta \) is defined similarly.

  2. (b)

    Expected internal and external angles of beta and beta\('\) polytopes, and, more generally, expected intrinsic conic volumes of their tangent cones.

  3. (c)

    Expected f-vector of the zero cell of the Poisson hyperplane tessellation and expected f-vectors of the random polytopes in the half-sphere; see [17] for a detailed study of these models.

  4. (d)

    Expected f-vector of the typical Poisson–Voronoi cell.

  5. (e)

    Constants appearing in the work of Reitzner [30] on the asymptotics of the expected f-vectors of random polytopes approximating smooth convex bodies.

  6. (f)

    External and internal angles of the regular simplex with n vertices at its k-vertex faces. These coincide with the corresponding expected angles of the random Gaussian simplex [13, 21], and are given by \(I_{n,k}(+\infty ):= \lim _{\beta \uparrow +\infty } I_{n,k}(\beta )\) and \(J_{n,k}(+\infty ) := \lim _{\beta \uparrow +\infty } J_{n,k}(\beta )\), respectively.

While there exist explicit formulae for \(I_{n,k}(\alpha )\) and \({\tilde{I}}_{n,k}(\alpha )\) (see Sect. 1.4), no general formulae are known for \(\mathbb {J}_{n,k}(\beta )\) and \({\tilde{\mathbb {J}}}_{n,k}(\beta )\) except in some special cases. For example, we have \(\mathbb {J}_{3,1} (\beta ) = 1/2\) because the sum of angles in any plane triangle equals half the full angle. For general \(n\in \mathbb {N}\), it always holds that \(\mathbb {J}_{n,n}(\beta ) = 1\) and \(\mathbb {J}_{n,n-1}(\beta ) = n/2\), and all these formulae are valid in the beta\('\) case, too. A general combinatorial formula for \({\tilde{\mathbb {J}}}_{n, k}(n/2)\) was derived in [17], where it was used to compute the expected f-vector of the Poisson zero polytope. For \(n=4\) and \(n=5\), explicit formulae for \(\mathbb {J}_{n,k}(\beta )\) were derived in [16] by a method not allowing for an extension to higher dimensions. The main results of the present paper can be summarised as follows. In Sect. 2, we derive a formula which enables us to compute \(\mathbb {J}_{n,k}(\beta )\) and \({\tilde{\mathbb {J}}}_{n,k}(\beta )\) symbolically for half-integer \(\beta \), and numerically for all admissible \(\beta \). The main work for this formula has been done in [17, 20], while the main contribution of the present paper is its explicit statement and demonstration of some consequences. The latter will be done in Sect. 3, where we apply the formula to compute (among other examples) the expected f-vectors of typical Poisson–Voronoi cells and the constants that appeared in the work of Reitzner [30] on random polytopes approximating convex bodies, in dimensions up to 10.

1.4 Expected External Angles

The following two theorems define the quantities \(I_{n,k}(\alpha )\) and \({\tilde{I}}_{n,k}(\alpha )\) and relate them to the expected external angles of beta and beta\('\) simplices. They are special cases of Theorems 1.6 and 1.16 in [20], respectively.

Theorem 1.2

Let \(X_1,\ldots ,X_n\) be i.i.d. random points in \(\mathbb {R}^{n-1}\) with beta density \(f_{n-1,\beta }\), \(\beta \ge -1\) (which is interpreted as the uniform distribution on the sphere \(\mathbb {S}^{n-2}\) if \(\beta =-1)\). Then, for all \(k\in \{1,\ldots ,n\}\), the expected external angle of the beta simplex \([X_1,\ldots ,X_n]\) at its face \([X_1,\ldots ,X_k]\) is given by

$$\begin{aligned} {\mathbb {E}}\,\gamma ([X_1,\ldots ,X_k], [X_1,\ldots ,X_n]) = I_{n,k}(2\beta + n-1), \end{aligned}$$

where for \(\alpha >-1/k\) we define

$$\begin{aligned} I_{n,k}(\alpha )\,=\int _{-\pi /2}^{+\pi /2} c_{1,({\alpha k - 1})/{2}} (\cos \varphi )^{\alpha k} \left( \int _{-\pi /2}^\varphi c_{1,({\alpha -1})/{2}}(\cos \theta )^{\alpha } \,\mathrm{d}\theta \right) ^{\!n-k} \mathrm{d}\varphi . \end{aligned}$$
(3)

Theorem 1.3

Let \({\tilde{X}}_1,\ldots ,{\tilde{X}}_n\) be i.i.d. random points in \(\mathbb {R}^{n-1}\) with beta\('\) density \({\tilde{f}}_{n-1,\beta }\), where \(\beta > ( {n-1})/{2}\). Then, for all \(k\in \{1,\ldots ,n\}\), the expected external angle of the beta\('\) simplex \([{\tilde{X}}_1,\ldots ,{\tilde{X}}_n]\) at its face \([{\tilde{X}}_1,\ldots ,{\tilde{X}}_k]\) is given by

$$\begin{aligned} {\mathbb {E}}\,\gamma ([{\tilde{X}}_1,\ldots ,{\tilde{X}}_k], [{\tilde{X}}_1,\ldots ,{\tilde{X}}_n]) = {\tilde{I}}_{n,k}(2\beta - n + 1), \end{aligned}$$

where for \(\alpha >0\) we define

$$\begin{aligned} {\tilde{I}}_{n,k}(\alpha )\,=\int _{-\pi /2}^{+\pi /2} {\tilde{c}}_{1,({\alpha k + 1})/{2}} (\cos \varphi )^{\alpha k-1} \left( \int _{-\pi /2}^\varphi {\tilde{c}}_{1,({\alpha +1})/{2}}(\cos \theta )^{\alpha -1} \,\mathrm{d}\theta \right) ^{\!n-k} \mathrm{d}\varphi . \end{aligned}$$
(4)

Usually, it will be more convenient to work with angle sums rather than with individual angles, which is why we introduce the quantities

$$\begin{aligned} \mathbb {I}_{n,k}(\alpha ) := \left( {\begin{array}{c}n\\ k\end{array}}\right) I_{n,k}(\alpha ), \qquad {\tilde{\mathbb {I}}}_{n,k}(\alpha ) := \left( {\begin{array}{c}n\\ k\end{array}}\right) {\tilde{I}}_{n,k}(\alpha ). \end{aligned}$$
(5)

Note that \(\mathbb {I}_{n,n}(\alpha ) = {\tilde{\mathbb {I}}}_{n,n}(\alpha ) = 1\).
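The integrals (3) and (4) are one-dimensional and straightforward to evaluate numerically. The following Python sketch (the function names I_num and I_tilde_num are our own, not from the paper) computes \(I_{n,k}(\alpha )\) and \({\tilde{I}}_{n,k}(\alpha )\) by direct quadrature, using the one-dimensional normalising constants \(c_{1,\beta }\) and \({\tilde{c}}_{1,\beta }\) from (1) and (2).

```python
from math import cos, gamma, pi, sqrt
from scipy import integrate

def c1(beta):
    # one-dimensional beta normalising constant c_{1,beta} from (1)
    return gamma(beta + 1.5) / (sqrt(pi) * gamma(beta + 1.0))

def c1_tilde(beta):
    # one-dimensional beta' normalising constant from (2)
    return gamma(beta) / (sqrt(pi) * gamma(beta - 0.5))

def I_num(n, k, alpha):
    """I_{n,k}(alpha) of equation (3), evaluated by nested quadrature."""
    ci, co = c1((alpha - 1) / 2), c1((alpha * k - 1) / 2)
    inner = lambda phi: integrate.quad(lambda t: ci * cos(t) ** alpha, -pi / 2, phi)[0]
    outer = lambda phi: co * cos(phi) ** (alpha * k) * inner(phi) ** (n - k)
    return integrate.quad(outer, -pi / 2, pi / 2)[0]

def I_tilde_num(n, k, alpha):
    """I~_{n,k}(alpha) of equation (4); for small alpha the integrand has
    integrable endpoint singularities, which quad handles adequately here."""
    ci, co = c1_tilde((alpha + 1) / 2), c1_tilde((alpha * k + 1) / 2)
    inner = lambda phi: integrate.quad(lambda t: ci * cos(t) ** (alpha - 1), -pi / 2, phi)[0]
    outer = lambda phi: co * cos(phi) ** (alpha * k - 1) * inner(phi) ** (n - k)
    return integrate.quad(outer, -pi / 2, pi / 2)[0]
```

As a quick sanity check, I_num(3, 2, alpha) and I_num(4, 3, alpha) both return 0.5 for any admissible alpha, in line with the identities \(\mathbb {I}_{3,2}(\alpha ) = 3/2\) and \(\mathbb {I}_{4,3}(\alpha ) = 2\) noted in Sect. 2.1 (recall (5)).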

2 Main Results

2.1 Algorithm for Computing Expected Internal-Angle Sums

In the next proposition we state relations which enable us to express the quantities \(\mathbb {J}_{n,k}(\beta )\) in terms of the quantities \(\mathbb {I}_{n,k}(\alpha )\). The proof will be given in Sect. 4.2, where we shall also discuss the similarity between these relations and McMullen’s angle-sum relations [23, 24] for deterministic polytopes.

Proposition 2.1

For every \(n\in \{2,3,\ldots \}\), \(k\in \{1,\ldots ,n-1\}\), and every \(\beta \ge -1\) the following relations between the quantities \(\mathbb {I}_{n,m}(\alpha )\) and \(\mathbb {J}_{m,k}(\beta )\) hold:

$$\begin{aligned} \sum _{\begin{array}{c} s=0,1,\ldots \\ n-s\ge k \end{array}} \!\mathbb {I}_{n,n-s}(2\beta +n-1) \mathbb {J}_{n-s,k} \biggl (\beta +\frac{s}{2}\biggr )&= \left( {\begin{array}{c}n\\ k\end{array}}\right) , \end{aligned}$$
(6)
$$\begin{aligned} \sum _{\begin{array}{c} s=0,1,\ldots \\ n-s\ge k \end{array}} \!(-1)^s\mathbb {I}_{n,n-s}(2\beta +n-1) \mathbb {J}_{n-s,k} \biggl (\beta +\frac{s}{2}\biggr )&= 0. \end{aligned}$$
(7)

Similarly, for every \(n\in \{2,3,\ldots \}\), \(k\in \{1,\ldots ,n-1\}\), and for every \(\beta >(n-1)/2\), the quantities \({\tilde{\mathbb {I}}}_{n,m}(\alpha )\) and \({\tilde{\mathbb {J}}}_{m,k}(\beta )\) satisfy the following relations:

$$\begin{aligned} \sum _{\begin{array}{c} s=0,1,\ldots \\ n-s\ge k \end{array}} \! {\tilde{\mathbb {I}}}_{n,n-s}(2\beta -n+1) {\tilde{\mathbb {J}}}_{n-s,k} \biggl (\beta -\frac{s}{2}\biggr )&= \left( {\begin{array}{c}n\\ k\end{array}}\right) , \end{aligned}$$
(8)
$$\begin{aligned} \sum _{\begin{array}{c} s=0,1,\ldots \\ n-s\ge k \end{array}}\! (-1)^s{\tilde{\mathbb {I}}}_{n,n-s}(2\beta -n+1) {\tilde{\mathbb {J}}}_{n-s,k} \biggl (\beta -\frac{s}{2}\biggr )&= 0. \end{aligned}$$
(9)

We now explain how these relations can be used to compute the quantities \(\mathbb {J}_{n,k}(\beta )\) and \({\tilde{\mathbb {J}}}_{n,k}(\beta )\). Since the results in these two cases are similar to each other, we restrict ourselves to \(\mathbb {J}_{n,k}(\beta )\) and state the results for \({\tilde{\mathbb {J}}}_{n,k}(\beta )\) at the end of the section. First of all, we have \(\mathbb {J}_{1,1}(\beta ) = 1\). Assume that for some \(n\in \{2,3,\ldots \}\) we are able to compute (symbolically or numerically) the quantities \(\mathbb {J}_{m,k}(\gamma )\) with arbitrary \(m\in \{1,\ldots ,n-1\}\), \(k\in \{1,\ldots ,m\}\), and \(\gamma \ge -1/2\). We are going to compute the quantities \(\mathbb {J}_{n,k}(\beta )\) with \(k\in \{1,\ldots ,n\}\) and \(\beta \ge -1\). If \(k=n\), we trivially have \(\mathbb {J}_{n,n}(\beta ) = 1\). For \(k\in \{1,\ldots ,n-1\}\) we use the formula

$$\begin{aligned} \mathbb {J}_{n,k}(\beta )=\left( {\begin{array}{c}n\\ k\end{array}}\right) - \sum _{s=1}^{n-k} \mathbb {I}_{n,n-s}(2\beta +n-1) \mathbb {J}_{n-s,k}\biggl (\beta +\frac{s}{2}\biggr ), \end{aligned}$$
(10)

which follows from (6) by separating the term with \(s=0\). Note that on the right-hand side we have the quantities of the type \(\mathbb {I}_{n,n-s}(\gamma )\) (which are just trigonometric integrals; see Sect. 1.4) and the quantities \(\mathbb {J}_{n-s,k}(\beta + s/ 2)\) which are already assumed to be known by the induction assumption since \(n-s < n\).
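To make the procedure concrete, here is a minimal Python sketch of the recursion (10). It relies on the quadrature helper I_num from Sect. 1.4 and is meant as an illustration of the scheme, not as the symbolic Mathematica computation used in the paper.

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def J_sum(n, k, beta):
    """Expected internal angle sum (blackboard J)_{n,k}(beta) via the recursion (10)."""
    if k == n:
        return 1.0
    alpha = 2 * beta + n - 1
    total = float(comb(n, k))
    for s in range(1, n - k + 1):
        # (blackboard I)_{n,n-s}(alpha) = binom(n, n-s) * I_{n,n-s}(alpha)
        total -= comb(n, n - s) * I_num(n, n - s, alpha) * J_sum(n - s, k, beta + s / 2)
    return total
```

For instance, J_sum(3, 1, beta) evaluates to 0.5 for any beta >= -1, reflecting \(\mathbb {J}_{3,1}(\beta ) = 1/2\), and J_sum(n, n - 1, beta) evaluates to n/2.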

The above recursive procedure allows us to express \(\mathbb {J}_{n,k}(\beta )\) as a polynomial in the variables \(\mathbb {I}_{m,\ell }(2\beta +n-1)\) with \(1\le \ell < m \le n\). Note that all factors carry the same argument \(2\beta +n-1\). For example, for \(n=4\) we obtain

$$\begin{aligned} \mathbb {J}_{4,1}(\beta )&=3 -2 \mathbb {I}_{4,3}(3 + 2 \beta ) - \mathbb {I}_{4,2}(3 + 2 \beta ) + \mathbb {I}_{4,3}(3 + 2 \beta ) \mathbb {I}_{3,2}(3 + 2 \beta ),\\ \mathbb {J}_{4,2}(\beta )&=6 -3 \mathbb {I}_{4,3}(3 + 2 \beta ) - \mathbb {I}_{4,2}(3 + 2 \beta ) + \mathbb {I}_{4,3}(3 + 2 \beta ) \mathbb {I}_{3,2}(3 + 2 \beta ),\\ \mathbb {J}_{4,3}(\beta )&= 4-\mathbb {I}_{4,3}(3 + 2 \beta ),\qquad \qquad \mathbb {J}_{4,4}(\beta )=1. \end{aligned}$$

We simplified the first line by using that \(\mathbb {I}_{n,1}(\alpha )=1\). Also, note that, in fact, \(\mathbb {I}_{3,2}(\alpha ) = 3/2\) and \(\mathbb {I}_{4,3}(\alpha ) = 2\). More generally, we shall prove the following:

Theorem 2.2

For every \(\beta \ge -1\), \(n\in \mathbb {N}\), and \(k\in \{1,\ldots ,n\}\), \(\mathbb J_{n,k}(\beta )\) equals

$$\begin{aligned} \sum _{\ell =0}^{n-k} (-1)^\ell \sum \mathbb {I}_{n, n_1}(2\beta +n-1) \mathbb {I}_{n_1,n_2} (2\beta + n-1) \ldots \mathbb {I}_{n_{\ell -1}, n_\ell }(2\beta + n-1) \left( {\begin{array}{c}n_\ell \\ k\end{array}}\right) , \end{aligned}$$

where the second sum is taken over all integer tuples \((n_0, n_1,\ldots ,n_{\ell })\) such that \(n=n_0>n_1>\cdots >n_\ell \ge k\).

The following equation, which follows from (6) and (7) by taking their arithmetic mean, is more efficient for computational purposes since it contains fewer terms than (10):

$$\begin{aligned} \mathbb {J}_{n,k}(\beta )\,=\,\frac{1}{2} \left( {\begin{array}{c}n\\ k\end{array}}\right) \,-\! \sum _{s=1}^{\lfloor ({n-k})/{2}\rfloor } \!\! \mathbb {I}_{n,n-2s}(2\beta +n-1) \mathbb {J}_{n-2s,k}(\beta +s). \end{aligned}$$
(11)
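In code, the shorter relation (11) reads as follows (the same caveats as for the sketch of (10) apply; only even jumps from n to n - 2s occur, which roughly halves the number of I-evaluations).

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def J_sum_fast(n, k, beta):
    """Expected internal angle sum via the shorter recursion (11); uses I_num from Sect. 1.4."""
    if k == n:
        return 1.0
    alpha = 2 * beta + n - 1
    total = comb(n, k) / 2
    for s in range(1, (n - k) // 2 + 1):
        total -= comb(n, n - 2 * s) * I_num(n, n - 2 * s, alpha) * J_sum_fast(n - 2 * s, k, beta + s)
    return total
```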

For example, the first few non-trivial values of the internal-angles vector

$$\begin{aligned} \mathbb {J}_{n,\bullet }(\beta ):= (\mathbb {J}_{n,1}(\beta ), \ldots , \mathbb {J}_{n,n}(\beta )) \end{aligned}$$

are given by

$$\begin{aligned} \mathbb {J}_{4,\bullet }(\beta )= & {} (2-\mathbb {I}_{4,2}(2 \beta +3), 3-\mathbb {I}_{4,2}(2 \beta +3),2,1),\\ \mathbb {J}_{5,\bullet }(\beta )= & {} \biggl (\frac{3}{2}-\frac{\mathbb {I}_{5,3}(2 \beta +4)}{2} ,5-\frac{3\mathbb {I}_{5,3}(2 \beta +4)}{2} ,5-\mathbb {I}_{5,3}(2 \beta +4),\frac{5}{2},1\biggr ),\\ \mathbb {J}_{6,\bullet }(\beta )= & {} \biggl (3-\mathbb {I}_{6,2}(2 \beta +5)+\mathbb {I}_{4,2}(2 \beta +5) \mathbb {I}_{6,4}(2 \beta +5)-2 \mathbb {I}_{6,4}(2 \beta +5),\\&\frac{15}{2} -\mathbb {I}_{6,2}(2 \beta +5)+\mathbb {I}_{4,2}(2 \beta +5) \mathbb {I}_{6,4}(2 \beta +5)-3 \mathbb {I}_{6,4}(2 \beta +5),\\&10-2 \mathbb {I}_{6,4}(2 \beta +5),\frac{15}{2}-\mathbb {I}_{6,4}(2 \beta +5),3,1\biggr ). \end{aligned}$$

Generalising these formulae, we can prove the following

Theorem 2.3

For every \(\beta \ge -1\), \(n\in \mathbb {N}\), and \(k\in \{1,\ldots ,n\}\) we have that \(2 \mathbb {J}_{n,k}(\beta ) -\delta _{n,k}\) is equal to

$$\begin{aligned} \sum _{\ell =0}^{\lfloor ({n-k})/{2}\rfloor }\!\! (-1)^\ell \sum \mathbb {I}_{n, n_1}(2\beta +n-1) \mathbb {I}_{n_1,n_2} (2\beta + n-1) \ldots \mathbb {I}_{n_{\ell -1}, n_\ell }(2\beta + n-1) \left( {\begin{array}{c}n_\ell \\ k\end{array}}\right) , \end{aligned}$$

where \(\delta _{n,k}\) is Kronecker’s delta, and the sum is taken over all integer tuples \((n_0,n_1,\ldots ,n_\ell )\) such that \(n=n_0>n_1>\cdots >n_\ell \ge k\) and such that \(n-n_i\) is even for all \(i\in \{1,\ldots ,\ell \}\).

The quantities \({\tilde{\mathbb {J}}}_{n,k}(\beta )\) can be computed in a similar manner. We put \({\tilde{\mathbb {J}}}_{1,1}(\beta ) = 1\) and then use the recursive formula

$$\begin{aligned} {\tilde{\mathbb {J}}}_{n,k}(\beta )=\left( {\begin{array}{c}n\\ k\end{array}}\right) - \sum _{s=1}^{n-k}{\tilde{\mathbb {I}}}_{n,n-s}(2\beta -n+1){\tilde{\mathbb {J}}}_{n-s,k}\biggl (\beta -\frac{s}{2}\biggr ) \end{aligned}$$

which follows from (8). Alternatively, one can use the more efficient formula

$$\begin{aligned} {\tilde{\mathbb {J}}}_{n,k}(\beta )\,=\,\frac{1}{2} \left( {\begin{array}{c}n\\ k\end{array}}\right) \,-\! \sum _{s=1}^{\lfloor ({n-k})/{2}\rfloor }\!\! {\tilde{\mathbb {I}}}_{n,n-2s}(2\beta -n+1){\tilde{\mathbb {J}}}_{n-2s,k}(\beta -s), \end{aligned}$$

which follows from (8) and (9) by taking their arithmetic mean. The next two theorems are similar to Theorems 2.2 and 2.3. We omit their straightforward proofs.
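For completeness, here is the analogous Python sketch for the beta\('\) case, using the quadrature helper I_tilde_num from Sect. 1.4; again an illustration under the same assumptions as before.

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def J_tilde_sum(n, k, beta):
    """Expected internal angle sum of the beta' simplex via the recursion above;
    admissible only for beta > (n-1)/2."""
    if k == n:
        return 1.0
    alpha = 2 * beta - n + 1
    total = float(comb(n, k))
    for s in range(1, n - k + 1):
        total -= comb(n, n - s) * I_tilde_num(n, n - s, alpha) * J_tilde_sum(n - s, k, beta - s / 2)
    return total
```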

Theorem 2.4

For every \(\beta > (n-1)/2\), \(n\in \mathbb {N}\), and \(k\in \{1,\ldots ,n\}\), \( {\tilde{\mathbb {J}}}_{n,k}(\beta )\) equals

$$\begin{aligned} \sum _{\ell =0}^{n-k} (-1)^\ell \sum {\tilde{\mathbb {I}}}_{n, n_1}(2\beta - n + 1) {\tilde{\mathbb {I}}}_{n_1,n_2} (2\beta - n + 1) \ldots {\tilde{\mathbb {I}}}_{n_{\ell -1}, n_\ell }(2\beta - n + 1) \left( {\begin{array}{c}n_\ell \\ k\end{array}}\right) , \end{aligned}$$

where the second sum is taken over all integer tuples \((n_0, n_1,\ldots ,n_{\ell })\) such that \(n=n_0>n_1>\cdots >n_\ell \ge k\).

Theorem 2.5

For every \(\beta > (n-1)/2\), \(n\in \mathbb {N}\), and \(k\in \{1,\ldots ,n\}\), \(2{\tilde{\mathbb {J}}}_{n,k}(\beta ) - \delta _{n,k}\) equals

$$\begin{aligned} \sum _{\ell =0}^{\lfloor ({n-k})/{2}\rfloor }\! (-1)^\ell \sum {\tilde{\mathbb {I}}}_{n, n_1}(2\beta - n + 1){\tilde{\mathbb {I}}}_{n_1,n_2} (2\beta - n + 1) \ldots {\tilde{\mathbb {I}}}_{n_{\ell -1}, n_\ell }(2\beta - n + 1) \left( {\begin{array}{c}n_\ell \\ k\end{array}}\right) , \end{aligned}$$

where \(\delta _{n,k}\) is Kronecker’s delta, and the sum is taken over all integer tuples \((n_0,n_1,\ldots ,n_\ell )\) such that \(n=n_0>n_1>\cdots >n_\ell \ge k\) and such that \(n-n_i\) is even for all \(i\in \{1,\ldots ,\ell \}\).

2.2 Relations in Matrix Form

Let us write the relation (7) in the form

$$\begin{aligned} \sum _{m=k}^n (-1)^{n-m} \mathbb {I}_{n,m}(2\beta +n-1) \mathbb {J}_{m,k} \biggl (\beta +\frac{n-m}{2}\biggr ) = \delta _{nk}, \end{aligned}$$

where \(\delta _{nk}\) denotes Kronecker’s delta. Introducing the new variable \(\gamma :=\beta + (n-1)/2\) that ranges in the interval \([({n-3})/{2},+\infty )\), we can write

$$\begin{aligned} \sum _{m=k}^n (-1)^{n-m} \mathbb {I}_{n,m}(2\gamma ) \mathbb {J}_{m,k} \biggl (\gamma -\frac{m-1}{2}\biggr ) = \delta _{nk}. \end{aligned}$$
(12)

This relation has the advantage that the \(\mathbb {J}\)-term no longer contains n, which allows us to state it in matrix form. Take some \(N\in \mathbb {N}\), \(\gamma \ge ({N-3})/{2}\), and introduce the \(N\times N\) matrices \(\mathbb {A}\) and \(\mathbb {B}\) with the following entries:

$$\begin{aligned} \mathbb {A}_{n,m}&:= {\left\{ \begin{array}{ll} (-1)^{n} \mathbb {I}_{n,m}(2\gamma ) &{}\text { if } 1\le m \le n \le N,\\ 0 &{}\text { otherwise}, \end{array}\right. }\\ \mathbb {B}_{m,k}&:= {\left\{ \begin{array}{ll} (-1)^{m} \mathbb {J}_{m,k}(\gamma - ({m-1})/{2}) &{}\text { if } 1\le k \le m \le N,\\ 0 &{}\text { otherwise}. \end{array}\right. } \end{aligned}$$

Note that both \(\mathbb {A}\) and \(\mathbb {B}\) are lower-triangular matrices with 1’s on the diagonal. Then, (12) states that \(\mathbb {A}\mathbb {B}= E\), where E is the \(N\times N\) identity matrix. Since this implies that \(\mathbb {B}\mathbb {A}= E\), we arrive at the following relation which is dual to (12):

$$\begin{aligned} \sum _{m=k}^n (-1)^{n-m} \mathbb {J}_{n,m}\biggl (\gamma -\frac{n-1}{2}\biggr ) \mathbb {I}_{m,k} (2\gamma ) = \delta _{nk} \end{aligned}$$
(13)

for all \(\gamma \ge ({N-3})/{2}\). Similar arguments apply in the beta\('\) case. Switching back to the original variable \(\beta \), we arrive at the following result which is the dual of Proposition 2.1.

Proposition 2.6

For every \(n\in \{2,3,\ldots \}\), \(k\in \{1,\ldots ,n-1\}\), and every \(\beta \ge -1\) we have

$$\begin{aligned} \sum _{\begin{array}{c} s=0,1,\ldots \\ n-s\ge k \end{array}} (-1)^{s} \mathbb {J}_{n,n-s}(\beta ) \mathbb {I}_{n-s,k} (2\beta +n-1) = 0. \end{aligned}$$
(14)

Similarly, for every \(n\in \{2,3,\ldots \}\), \(k\in \{1,\ldots ,n-1\}\), and for every \(\beta >(n-1)/2\), we have

$$\begin{aligned} \sum _{\begin{array}{c} s=0,1,\ldots \\ n-s\ge k \end{array}} (-1)^{s} {\tilde{ \mathbb {J}}}_{n,n-s}(\beta ) {\tilde{\mathbb {I}}}_{n-s,k} (2\beta -n+1) = 0. \end{aligned}$$
(15)
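A small numerical experiment, assuming the helpers I_num and J_sum sketched above, can be used to confirm that the matrices \(\mathbb {A}\) and \(\mathbb {B}\) of this section are indeed mutually inverse, and hence the relations (12)–(15):

```python
import numpy as np
from math import comb

def duality_check(N=4, gam=1.0):
    """Build A and B of Sect. 2.2 numerically and return max |AB - E|; needs gam >= (N-3)/2."""
    A = np.zeros((N, N))
    B = np.zeros((N, N))
    for n in range(1, N + 1):
        for m in range(1, n + 1):
            A[n - 1, m - 1] = (-1) ** n * comb(n, m) * I_num(n, m, 2 * gam)
            B[n - 1, m - 1] = (-1) ** n * J_sum(n, m, gam - (n - 1) / 2)
    return np.abs(A @ B - np.eye(N)).max()
```

For N = 4 and gam = 1 the returned value should be of the order of the quadrature error (roughly 1e-10).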

2.3 Arithmetic Properties of Expected Internal-Angle Sums

At the moment, we do not have a general formula for \(\mathbb {J}_{n,k}(\beta )\) and \({\tilde{\mathbb {J}}}_{n,k}(\beta )\) which is “nicer” than what is given in Theorems 2.2, 2.3, 2.4, and 2.5. Still, we can say something about the arithmetic properties of these quantities. First we state what we know about \(\mathbb {I}_{n,k}(\alpha )\).

Theorem 2.7

Let \(\alpha \ge 0\) be an integer, \(n\in \mathbb {N}\), and \(k\in \{1,\ldots ,n\}\).

  1. (a)

    If \(\alpha \) is odd, then \(\mathbb {I}_{n,k}(\alpha )\) is rational.

  2. (b)

    If \(\alpha \) is even, then \(\mathbb {I}_{n,k}(\alpha )\) can be expressed in the form \(r_0+r_2\pi ^{-2} + r_4\pi ^{-4} +\cdots + r_{n-k} \pi ^{-(n-k)}\) (if \(n-k\) is even) or \(r_0+r_2\pi ^{-2} + r_4\pi ^{-4} +\cdots + r_{n-k-1} \pi ^{-(n-k-1)}\) (if \(n-k\) is odd), where the \(r_i\)’s are rational numbers.

Using the above theorem together with the results of Sect. 2.1, we shall prove the following result on the \(\mathbb {J}_{n,k}(\beta )\)’s.

Theorem 2.8

Let \(\beta \ge -1\) be an integer or a half-integer. Let also \(n\in \mathbb {N}\) and \(k\in \{1,\ldots ,n\}\).

  1. (a)

    If \(2\beta + n\) is even, then \(\mathbb {J}_{n,k}(\beta )\) is a rational number.

  2. (b)

If \(2\beta + n\) is odd, then \(\mathbb {J}_{n,k}(\beta )\) can be expressed as \(q_0 + q_2 \pi ^{-2} + q_4 \pi ^{-4} + \cdots + q_{n-k} \pi ^{-(n-k)}\) (if \(n-k\) is even) or \(q_0 + q_2 \pi ^{-2} + q_4 \pi ^{-4} + \cdots + q_{n-k-1} \pi ^{-(n-k-1)}\) (if \(n-k\) is odd), where the \(q_{i}\)’s are rational numbers.

Symbolic computations we performed with the help of Mathematica 11 strongly suggest that in the case when \(n-k\) is odd, part (b) can be strengthened as follows:

Conjecture 2.9

If both \(2\beta + n\) and \(n-k\) are odd, then \(\mathbb {J}_{n,k}(\beta )\) is a number of the form \(q\pi ^{-(n-k-1)}\) with some rational q.

Conjecture 2.9 states that \(\mathbb {J}_{n,k}(\beta )\) sometimes has a much simpler form than the one suggested by Theorems 2.2 and 2.3. For example, when computing \(\mathbb {J}_{7,2}(-1)\), we can use the formula

$$\begin{aligned} \mathbb {J}_{7,2}(-1) = \frac{1}{2} \left( {\begin{array}{c}7\\ 2\end{array}}\right) - \mathbb {I}_{7,5}(4) \mathbb {J}_{5,2}(0) - \mathbb {I}_{7,3}(4) \mathbb {J}_{3,2}(1), \end{aligned}$$

which follows from (11). The involved values are given by

$$\begin{aligned} \mathbb {J}_{5,2}(0)&= \frac{1692197}{282240 \pi ^2},&\mathbb {J}_{3,2}(1)&= \frac{3}{2},\\ \mathbb {I}_{7,5}(4)&= 7-\frac{2144238917}{190270080 \pi ^2},&\mathbb {I}_{7,3}(4)&= 7 + \frac{1250163908136617}{30981823488000 \pi ^4}-\frac{1692197}{60480 \pi ^2}, \end{aligned}$$

so that, a priori, we expect \(\mathbb {J}_{7,2}(-1)\) to be a linear combination of \(1, \pi ^{-2}, \pi ^{-4}\) over \(\mathbb {Q}\). A posteriori, it turns out that \(\mathbb {J}_{7,2}(-1)={113537407}/({16128000 \pi ^4})\) is a rational multiple of \(\pi ^{-4}\), while the remaining terms cancel. We were not able to explain this strange cancellation using Theorems 2.2 and 2.3. It is therefore natural to conjecture that there is a “nicer” formula for \(\mathbb {J}_{n,k}(\beta )\) than the ones given in these theorems. The results for the quantities \({\tilde{\mathbb {I}}}_{n,k}(\alpha )\) and \({\tilde{ \mathbb {J}}}_{n,k}(\beta )\) are analogous. We state them without proofs.
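As a quick numerical cross-check of this cancellation, the hypothetical helpers sketched in Sect. 2.1 reproduce the stated value:

```python
from math import pi

# J_{7,2}(-1) (blackboard J) via the recursion, compared with the closed form above
numeric = J_sum_fast(7, 2, -1)                # J_sum(7, 2, -1) gives the same value
closed = 113537407 / (16128000 * pi ** 4)     # approximately 0.07227
print(numeric, closed)                        # should agree up to quadrature error
```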

Theorem 2.10

Let \(\alpha >0\) be an integer, \(n\in \mathbb {N}\), and \(k\in \{1,\ldots ,n\}\).

  1. (a)

    If \(\alpha \) is even, then \({\tilde{\mathbb {I}}}_{n,k}(\alpha )\) is rational.

  2. (b)

    If \(\alpha \) is odd, then \({\tilde{\mathbb {I}}}_{n,k}(\alpha )\) can be expressed in the form \(r_0+r_2\pi ^{-2} + r_4\pi ^{-4} +\cdots + r_{n-k} \pi ^{-(n-k)}\) (if \(n-k\) is even) or \(r_0+r_2\pi ^{-2} + r_4\pi ^{-4} +\cdots + r_{n-k-1} \pi ^{-(n-k-1)}\) (if \(n-k\) is odd),  where the \(r_i\)’s are rational numbers.

Theorem 2.11

Let \(n\in \mathbb {N}\) and \(k\in \{1,\ldots ,n\}\). Let also \(\beta > (n-1)/2\) be an integer or a half-integer.

  1. (a)

    If \(2\beta - n\) is odd, then \({\tilde{\mathbb {J}}}_{n,k}(\beta )\) is a rational number.

  2. (b)

    If \(2\beta - n\) is even, then \({\tilde{\mathbb {J}}}_{n,k}(\beta )\) can be expressed as \(q_0 + q_2 \pi ^{-2} + q_4 \pi ^{-4} + \cdots + q_{n-k-1} \pi ^{-(n-k-1)}\) (if \(n-k\) is odd) or \(q_0 + q_2 \pi ^{-2} + q_4 \pi ^{-4} + \cdots + q_{n-k} \pi ^{-(n-k)}\) (if \(n-k\) is even), where the \(q_i\)’s are rational numbers.

In the case when k is even, our symbolic computations suggest the following stronger version of (b):

Conjecture 2.12

If both \(2\beta - n\) and k are even, then \({\tilde{ \mathbb {J}}}_{n,k}(\beta )\) is a number of the form \(q\pi ^{-(n-k)}\) (if \(n-k\) is even) or \(q\pi ^{-(n-k-1)}\) (if \(n-k\) is odd) with some rational q.

3 Special Cases and Applications

In this section we present several special cases of the above results and their applications to some problems of stochastic geometry. The symbolic computations were performed using Mathematica 11. For the vector of the expected internal angles we use the notation

$$\begin{aligned} \mathbb {J}_{n,\bullet } (\beta ) = (\mathbb {J}_{n,1}(\beta ), \ldots , \mathbb {J}_{n,n}(\beta )). \end{aligned}$$

3.1 Internal Angles of Random Simplices: Uniform Distribution on the Sphere

Let \(X_1,\ldots ,X_n\) be i.i.d. random points sampled uniformly from the unit sphere \(\mathbb {S}^{n-2} \subset \mathbb {R}^{n-1}\). Recall that the expected sum of internal angles of the simplex \([X_1,\ldots ,X_n]\) at its k-vertex faces is denoted by \(\mathbb {J}_{n,k}(-1)\). Clearly,

$$\begin{aligned} \mathbb {J}_{1,\bullet }(-1) = (1),\qquad \mathbb {J}_{2,\bullet }(-1) = (1,1),\qquad \mathbb {J}_{3,\bullet }(-1) = \biggl (\frac{1}{2},\frac{3}{2}, 1\biggr ). \end{aligned}$$
(16)

The first two non-trivial cases, \(n=4\) and \(n=5\) (corresponding to simplices in dimensions 3 and 4), were treated in [16]:

$$\begin{aligned} \mathbb {J}_{4,\bullet }(-1)= & {} \biggl (\frac{1}{8},\frac{9}{8},2,1\biggr ),\\ \mathbb {J}_{5,\bullet }(-1)= & {} \biggl (-\frac{1}{6} + \frac{539}{288 \pi ^2},\frac{539}{96 \pi ^2},\frac{5}{3} + \frac{539}{144 \pi ^2},\frac{5}{2},1\biggr ). \end{aligned}$$

The method used there did not allow for an extension to higher dimensions. Using Mathematica 11 and the algorithm described in Sect. 2.1 we recovered these results and, moreover, obtained the following

Theorem 3.1

We have

$$\begin{aligned} \mathbb {J}_{6,\bullet }(-1)= & {} \biggl (\frac{25411}{7340032},\frac{233445}{1048576},\frac{5155}{3584},\frac{23075}{7168},3,1\biggr ),\\ \mathbb {J}_{7,\bullet }(-1)= & {} \biggl (\frac{1}{6}+\frac{113537407}{48384000 \pi ^4}-\frac{2144238917}{1141620480 \pi ^2},\frac{113537407}{16128000 \pi ^4},\\&\ {-}\frac{7}{6}+\frac{113537407}{24192000 \pi ^4}+\frac{2144238917}{114162048 \pi ^2},\frac{2144238917}{76108032 \pi ^2},\\&\frac{7}{2}+\frac{2144238917}{190270080 \pi ^2},\frac{7}{2},1\biggr ),\\ \mathbb {J}_{8,\bullet }(-1)= & {} \biggl (\frac{76136856565967}{1454662679640670208},\frac{29503701837953231}{1454662679640670208},\frac{5899486844923}{16647293239296},\\&\frac{1146031403475}{584115552256},\frac{418431615}{84672512},\frac{1603846783}{254017536},4, 1\biggr ),\\ \mathbb {J}_{9,\bullet }(-1)= & {} \biggl (-\frac{3}{10}-\frac{1581133359667623075371927}{218521780048552780800000 \pi ^4}\\&\quad +\frac{2819369438967901759}{1739761680384000000 \pi ^6} +\frac{3585828150520517221}{975094112225376000 \pi ^2},\\&\frac{2819369438967901759}{579920560128000000 \pi ^6},\\&\frac{1581133359667623075371927}{21852178004855278080000 \pi ^4}+\frac{2819369438967901759}{869880840192000000 \pi ^6}+2\\&\quad -\frac{25100797053643620547}{975094112225376000 \pi ^2},\frac{1581133359667623075371927}{14568118669903518720000 \pi ^4},\\&{-}\frac{21}{5}+\frac{1581133359667623075371927}{36420296674758796800000 \pi ^4}+\frac{25100797053643620547}{325031370741792000 \pi ^2},\\&\frac{25100797053643620547}{325031370741792000 \pi ^2}, 6+\frac{3585828150520517221}{162515685370896000 \pi ^2},\frac{9}{2},1\biggr ),\\ \mathbb {J}_{10,\bullet }(-1)= & {} \biggl (\frac{7142769685117513413611137831}{13319284084760520585863454122835968},\\&\frac{15207860904181118336356297648935}{13319284084760520585863454122835968},\\&\frac{9440668036340000013447895}{198472799133666166452518912},\frac{240195630998707566620445}{441541266148311827480576},\\&\frac{65392213852270069737}{23659801379879256064},\frac{177147685252097540771}{23659801379879256064},\\&\frac{8199101438535}{705117028352},\frac{29352612289095}{2820468113408},5,1\biggr ). \end{aligned}$$

3.2 Internal Angles of Random Simplices: Uniform Distribution in the Ball

Let \(X_1,\ldots ,X_n\) be i.i.d. random points sampled uniformly from the unit ball \(\mathbb {B}^{n-1}\). The expected sum of internal angles of the simplex \([X_1,\ldots ,X_n]\) at its k-vertex faces is \(\mathbb {J}_{n,k}(0)\). The values of \(\mathbb {J}_{n,k}(0)\) for \(n=1,2,3\) are the same as in (16). For simplices with \(n=4\) and \(n=5\) vertices (corresponding to dimensions \(d=3\) and 4), the following results were obtained in [16] by a method not extending to higher dimensions:

$$\begin{aligned} \mathbb {J}_{4,\bullet }(0)= & {} \biggl (\frac{401}{2560},\frac{2961}{2560},2,1\biggr ),\\ \mathbb {J}_{5,\bullet }(0)= & {} \biggl (-\frac{1}{6} + \frac{1692197}{846720 \pi ^2},\frac{1692197}{282240 \pi ^2},\frac{5}{3} + \frac{1692197}{423360 \pi ^2},\frac{5}{2},1\biggr ). \end{aligned}$$

Using Mathematica 11 and the above algorithm we recovered these results and, moreover, obtained the following

Theorem 3.2

We have

$$\begin{aligned} \mathbb {J}_{6,\bullet }(0)= & {} \biggl (\frac{112433094897}{17197049053184},\frac{29573170815}{120259084288},\frac{6929155}{4685824},\frac{30358275}{9371648},3,1\biggr ),\\ \mathbb {J}_{7,\bullet }(0)= & {} \biggl (\frac{1}{6}+\frac{36051577693123}{13519341158400 \pi ^4}-\frac{621038966291119}{325969178895360 \pi ^2},\\&\frac{36051577693123}{4506447052800 \pi ^4},-\frac{7}{6}+\frac{36051577693123}{6759670579200 \pi ^4}+\frac{621038966291119}{32596917889536 \pi ^2},\\&\frac{621038966291119}{21731278593024 \pi ^2}, \ \frac{7}{2}+\frac{621038966291119}{54328196482560 \pi ^2}, \frac{7}{2}, 1\biggr ),\\ \mathbb {J}_{8,\bullet }(0)= & {} \biggl (\frac{54854407266470750437}{407304109147506899681280}, \frac{1922620195704749849441}{81460821829501379936256},\\&\ \frac{1818739186251799}{4855443348258816}, \frac{6494630010305885}{3236962232172544},\\&\frac{2403490929}{482344960}, \frac{9156320369}{1447034880}, 4, 1\biggr ),\\ \mathbb {J}_{9,\bullet }(0)= & {} \biggl (-\frac{3}{10}-\frac{3825746278401786849105853842941927}{513083615323402301904101376000000 \pi ^4}\\&\quad \qquad +\frac{834997968128824111294853689}{434049888937072472064000000 \pi ^6}\\&\quad \qquad +\frac{25695566187355249503645020401}{6950795362764910977640320000 \pi ^2},\\&\ \frac{834997968128824111294853689}{144683296312357490688000000 \pi ^6},\\&\frac{3825746278401786849105853842941927}{51308361532340230190410137600000 \pi ^4}\\&\quad \qquad +\frac{834997968128824111294853689}{217024944468536236032000000 \pi ^6}+2\\&\quad \qquad -\frac{25695566187355249503645020401}{992970766109272996805760000 \pi ^2},\\&\ \frac{3825746278401786849105853842941927}{34205574354893486793606758400000 \pi ^4},\\&\ {-}\frac{21}{5}+\frac{3825746278401786849105853842941927}{85513935887233716984016896000000 \pi ^4}\\&\quad \qquad +\frac{25695566187355249503645020401}{330990255369757665601920000 \pi ^2},\\&\frac{25695566187355249503645020401}{330990255369757665601920000 \pi ^2},\\&\ 6+\frac{25695566187355249503645020401}{1158465893794151829606720000 \pi ^2},\frac{9}{2},1\biggr ),\\ \mathbb {J}_{10,\bullet }(0)= & {} \biggl (\frac{16173937433865922950599394579005791588389155}{9204102262874833628227344732391414668379518140416},\\&\ \frac{12688011280876667528205329700413092651546251555}{9204102262874833628227344732391414668379518140416},\\&\ \frac{32929953220484140728052018125551175}{640848401352029148689993712621584384},\\&\ \frac{210765193340397846616524118474155}{373323101767213558740779093983232},\\&\frac{371193086109705273947602629}{131859245100259540744536064},\frac{2253773101928857034270262735}{298418291542692644842897408},\\&\ \frac{15529150935155595}{1330783805505536},\frac{55452665100321675}{5323135222022144},5,1\biggr ). \end{aligned}$$

3.3 Typical Poisson–Voronoi Cells

Let \(P_1,P_2,\ldots \) be the points of a Poisson point process on \(\mathbb {R}^d\) with constant intensity 1. The typical Poisson–Voronoi cell is a random polytope which, for our purposes, can be defined as follows:

$$\begin{aligned} {\mathcal {V}}_d := \{x\in \mathbb {R}^d:\Vert x\Vert \le \Vert x-P_j\Vert \text { for all } j\in \mathbb {N}\}. \end{aligned}$$

The typical Poisson–Voronoi cell is one of the classical objects of stochastic geometry; see [8, 9, 15, 28, 29, 35] for reviews and the works of Meijering [25], Gilbert [11], and Miles [26] for important early contributions. We shall be interested in the expected f-vector of \({\mathcal {V}}_d\) denoted by

$$\begin{aligned} {\mathbb {E}}{\mathbf {f}}({\mathcal {V}}_{d}) = ({\mathbb {E}}f_0({\mathcal {V}}_{d}),{\mathbb {E}}f_1({\mathcal {V}}_{d}),\ldots ,{\mathbb {E}}f_{d-1}({\mathcal {V}}_{d})), \end{aligned}$$

where \(f_k({\mathcal {V}}_d)\) is the number of k-dimensional faces of \({\mathcal {V}}_d\). To the best of our knowledge, explicit formulae for the complete vector \({\mathbb {E}}{\mathbf {f}}({\mathcal {V}}_{d})\) have been known only in dimensions \(d=2\) and 3:

$$\begin{aligned} {\mathbb {E}}{\mathbf {f}} ({\mathcal {V}}_{2}) = (6, 6),\qquad {\mathbb {E}}{\mathbf {f}} ({\mathcal {V}}_{3}) = \biggl (\frac{96 \pi ^2}{35},\frac{144 \pi ^2}{35},2+\frac{48 \pi ^2}{35}\biggr ), \end{aligned}$$
(17)

see [35, Thm. 10.2.5] or [28, Eq. (7.13)]. The following formula can be found in the works of Miles [26, Eq. (75)] and Møller [28, Thm. 7.2]:

$$\begin{aligned} {\mathbb {E}}f_0({\mathcal {V}}_d) = \frac{2^{d+1}\pi ^{(d-1)/2}}{d^2} \cdot \frac{\Gamma (({d^2+1})/{2})}{\Gamma ( {d^2}/{2})} \biggl (\frac{\Gamma (( {d+2})/2)}{\Gamma (({d+1})/{2})}\biggr )^{\!d}. \end{aligned}$$
(18)

In fact, there is a more general formula [28, Thm. 7.2] for the expected s-content of all s-faces of a typical t-face in a d-dimensional tessellation, but it is only the case \(s=0\), \(t=d\) for which this result yields a formula for some entry of the expected f-vector of \({\mathcal {V}}_d\).

For arbitrary \(d\in \mathbb {N}\) and for all \(k\in \{0,\ldots ,d-1\}\), it has been shown in [20] (see Theorem 1.21 and its proof there, with \(\alpha = d\)) that

(19)

where

(20)

Taking \(k=0\), we recover (18). Formula (19), together with the algorithm for computation of \({\tilde{\mathbb {J}}}_{n,k}(\beta )\), allows us to compute \({\mathbb {E}}f_k({\mathcal {V}}_{d})\) in finitely many steps. Using Mathematica 11, we have done this in dimensions \(d\in \{2,\ldots ,10\}\). As a result, we recovered (17) and, moreover, obtained the following

Theorem 3.3

The expected f-vector of the typical Poisson–Voronoi cell is given by

$$\begin{aligned} {\mathbb {E}}{\mathbf {f}} ({\mathcal {V}}_{4})= & {} \biggl (\frac{1430}{9},\frac{2860}{9},\frac{590}{3},\frac{340}{9}\biggr ),\\ {\mathbb {E}}{\mathbf {f}} ({\mathcal {V}}_{5})= & {} \biggl (\frac{7776000 \pi ^4}{676039},\frac{19440000 \pi ^4}{676039},\frac{2716500 \pi ^2}{49049}+\frac{12960000 \pi ^4}{676039},\frac{4074750 \pi ^2}{49049},\\&\ 2+\frac{1358250 \pi ^2}{49049}-\frac{1296000 \pi ^4}{676039}\biggr ),\\ {\mathbb {E}}{\mathbf {f}} ({\mathcal {V}}_{6})= & {} \biggl (\frac{90751353}{10000},\frac{272254059}{10000},\frac{120613311}{4000},\frac{14930979}{1000},\frac{62611437}{20000},\frac{4053}{20}\biggr ),\\ {\mathbb {E}}{\mathbf {f}} ({\mathcal {V}}_{7})= & {} \biggl (\frac{27536588800000 \pi ^6}{322476036831},\frac{96378060800000 \pi ^6}{322476036831},\frac{145800103122713984000 \pi ^4}{139352342399730603}\\&\quad \qquad +\frac{96378060800000 \pi ^6}{322476036831},\frac{364500257806784960000 \pi ^4}{139352342399730603},\\&\frac{1088840823954800 \pi ^2}{1430074210851}+\frac{729000515613569920000 \pi ^4}{418057027199191809}\\&\qquad \quad -\frac{96378060800000 \pi ^6}{967428110493}, \frac{544420411977400 \pi ^2}{476691403617},\frac{544420411977400 \pi ^2}{1430074210851}\\&\qquad \quad +2-\frac{72900051561356992000 \pi ^4}{418057027199191809}+\frac{13768294400000 \pi ^6}{967428110493}\biggr ) ,\\ {\mathbb {E}}{\mathbf {f}} ({\mathcal {V}}_{8})= & {} \biggl (\frac{37400492672297766}{45956640625},\frac{149601970689191064}{45956640625},\frac{6850391092580412}{1313046875},\\&\ \frac{27954881044110648}{6565234375},\frac{17044839181035378}{9191328125},\\&\ \frac{18843745433119128}{45956640625},\frac{5212716470964}{133984375},\frac{4422456}{4375}\Bigg ),\\ {\mathbb {E}}{\mathbf {f}} ({\mathcal {V}}_{9})= & {} \biggl (\frac{100837904362675200000000 \pi ^8}{109701233401363445369},\frac{453770569632038400000000 \pi ^8}{109701233401363445369},\\&\frac{2852955835216853216138612837266320000 \pi ^6}{134952926502386519274273464063983}\\&\quad +\frac{605027426176051200000000 \pi ^8}{109701233401363445369},\\&\ \frac{9985345423258986256485144930432120000 \pi ^6}{134952926502386519274273464063983},\\&\ \frac{16352535012213243758810504565072375 \pi ^4}{326981148443273530305985029716}\\&\quad +\frac{9985345423258986256485144930432120000 \pi ^6}{134952926502386519274273464063983}\\&\quad -\frac{423519198323235840000000 \pi ^8}{109701233401363445369},\\&\ \frac{81762675061066218794052522825361875 \pi ^4}{653962296886547060611970059432},\\&\ \frac{19758536784497995373925 \pi ^2}{2249321131934361056}\\&\quad +\frac{27254225020355406264684174275120625 \pi ^4}{326981148443273530305985029716}\\&\quad -\frac{3328448474419662085495048310144040000 \pi ^6}{134952926502386519274273464063983}\\&\quad +\frac{201675808725350400000000 \pi ^8}{109701233401363445369},\frac{59275610353493986121775 \pi ^2}{4498642263868722112},\\&2+\frac{19758536784497995373925 \pi ^2}{4498642263868722112}\\&\quad -\frac{5450845004071081252936834855024125 \pi ^4}{653962296886547060611970059432}\\&\quad +\frac{475492639202808869356435472877720000 \pi ^6}{134952926502386519274273464063983}\\&\quad -\frac{30251371308802560000000 \pi ^8}{109701233401363445369}\biggr ) ,\\ {\mathbb {E}}{\mathbf {f}} ({\mathcal {V}}_{10})= & {} \biggl (\frac{155696519360438569961130397}{1556433053837891712},\frac{778482596802192849805651985}{1556433053837891712}, \\&\ \frac{363290492786125188681583835}{345874011963975936},\frac{4865451274315354941235930}{4053211077702843},\\&\ 
\frac{89845553163656455297282315}{111173789559849408},\frac{23998744131568764316595507}{74115859706566272},\\&\ \frac{32972345885500895805463345}{444695158239397632},\frac{377982052291467600549815}{43234251495496992},\\&\ \frac{5889025850448565}{13894111602},\frac{402700265}{83349}\biggr ). \end{aligned}$$

Combining (19) with Theorem 2.11, we can say something about the arithmetic structure of \({\mathbb {E}}f_k({\mathcal {V}}_d)\) for arbitrary dimension d.

Theorem 3.4

Let \(d\in \mathbb {N}\) and \(k\in \{0,\ldots ,d-1\}\).

  1. (a)

    If d is even, then \({\mathbb {E}}f_k({\mathcal {V}}_{d})\) is a rational number.

  2. (b)

    If d is odd, then \({\mathbb {E}}f_k({\mathcal {V}}_{d})\) can be expressed as \(q_{d-1} \pi ^{d-1} + q_{d-3} \pi ^{d-3} + \cdots +q_{d-k-1}\pi ^{d-k-1}\) (if k is even) or \(q_{d-1} \pi ^{d-1} + q_{d-3} \pi ^{d-3} + \cdots +q_{d-k}\pi ^{d-k}\) (if k is odd), where the coefficients \(q_{i}\) are rational.

Proof of (a)

Let d be even. Recall that \(\Gamma (x)\) is an integer if \(x>0\) is an integer, and is a rational multiple of \(\sqrt{\pi }\) if \(x>0\) is a half-integer. It follows from (20) that \({\tilde{ \mathbb {I}}}_{\infty ,m}(d)\) is rational. Also, by Theorem 2.11 (a), \({\tilde{\mathbb {J}}}_{m,d-k}((m-1+d)/2)\) is rational. It follows from (19) that \({\mathbb {E}}f_k({\mathcal {V}}_{d})\) is rational. \(\square \)

Proof of (b)

Let now d be odd. The summation in (19) is over odd values of m. For any such value, \({\tilde{ \mathbb {I}}}_{\infty ,m}(d)\) is a rational multiple of \(\pi ^{m-1}\). On the other hand, by Theorem 2.11 (b), \({\tilde{\mathbb {J}}}_{m,d-k}((m-1+d)/2)\) can be written as a \(\mathbb {Q}\)-linear combination of \(\pi ^{-j}\), where j is even and satisfies \(0\le j\le m-d+k\). It follows that \({\tilde{ \mathbb {I}}}_{\infty ,m}(d){\tilde{\mathbb {J}}}_{m,d-k}((m-1+d)/2)\) is a \(\mathbb {Q}\)-linear combination of \(\pi ^{\ell }\), \(d-k-1\le \ell \le m-1\), with \(\ell \equiv m-1 \equiv d-1\) (mod 2). The claim follows. \(\square \)

In fact, a closer look at the values collected in Theorem 3.3 suggests the following conjecture which is a consequence of Conjecture 2.12.

Conjecture 3.5

If both d and k are odd, then \({\mathbb {E}}f_k({\mathcal {V}}_{d})\) is a number of the form \(q\pi ^{d-k}\) with some rational q.

3.4 Random Polytopes Approximating Smooth Convex Bodies

Let \(U_1,U_2,\ldots \) be independent random points distributed uniformly in the d-dimensional convex body K. Denote the convex hull of n such points by \(K_{n,d}= [U_1,\ldots ,U_n]\). Asymptotic properties of \(K_{n,d}\), as \(n\rightarrow \infty \), have been studied intensively starting with the work of Rényi and Sulanke [31, 32] (see, for example, [15, 34]), and we shall not attempt to review the vast literature on this topic. In particular, regarding the f-vector of \(K_{n,d}\), this development culminated in the work of Reitzner, who proved the following result [30, p. 181]. If the boundary of K is of differentiability class \({\mathcal {C}}^2\) and the Gaussian curvature \(\kappa (x)\) is positive at every boundary point \(x\in \partial K\), then

$$\begin{aligned} \lim _{n\rightarrow \infty }\frac{{\mathbb {E}}f_k(K_{n,d})}{{n^{{(d-1)/(d+1)}}}}=\frac{c_{d,k}\Omega (K)}{(\mathop {\mathrm {Vol}}\nolimits _dK)^{({d-1})/({d+1})}} \end{aligned}$$
(21)

for every \(k\in \{0,1,\ldots ,d-1\}\), where \(\Omega (K):=\int _{\partial K}\kappa (x)^{1/(d+1)}\, d x\) is the so-called affine surface area of K, and \(c_{d,0}, \ldots , c_{d,d-1}\) are certain strictly positive constants not depending on K. In [30], (21) is stated without the term involving \(\mathop {\mathrm {Vol}}\nolimits _dK\), under the additional assumption that K has unit volume. The general case follows from the following scaling property of the affine surface area:

$$\begin{aligned} \Omega (r K) = r^{{d(d-1)}/({d+1})}\Omega (K), \qquad r>0, \end{aligned}$$

see, e.g., [14, Thm. 3.6] and take \(p=1\) there.

As Reitzner [30, p. 181] writes, “It would be of interest to determine the vector \({\mathbf {c}}_d= (c_{d,0},\ldots , c_{d,d-1})\); but we have not succeeded in getting an explicit expression”. Our aim is to provide explicit expressions for \({\mathbf {c}}_d\) for all \(d\le 10\). In the following, it will be convenient to take \(K := \mathbb {B}^d\) (which is possible since \({\mathbf {c}}_d\) does not depend on K) and use the notation

$$\begin{aligned} C_{d,k} := \lim _{n\rightarrow \infty }\frac{{\mathbb {E}}f_k(P_{n,d}^{0})}{n^{(d-1)/(d+1)}} = \frac{c_{d,k}\,\Omega (\mathbb {B}^d)}{(\mathop {\mathrm {Vol}}\nolimits _d\mathbb {B}^d)^{({d-1})/({d+1})}} \end{aligned}$$
(22)

for all \(k\in \{0,1,\ldots ,d-1\}\). Here, we recall that \(P_{n,d}^0 = [X_1,\ldots ,X_n]\) is the convex hull of n i.i.d. random points \(X_1,\ldots ,X_n\) distributed uniformly in the ball \(\mathbb {B}^d\). Note also that the affine surface area of the unit ball coincides with its usual surface area: \(\Omega (\mathbb {B}^d)=2\pi ^{d/2}/\Gamma ({d/2})\). For \(d=2\), the value of \(C_{2,0}=C_{2,1}\) has been identified by Rényi and Sulanke [31, Satz 3] who proved that

$$\begin{aligned} C_{2,0} = C_{2,1} = \lim _{n\rightarrow \infty } \frac{ {\mathbb {E}}f_1(P_{n,2}^0)}{n^{1/3}} = \lim _{n\rightarrow \infty } \frac{{\mathbb {E}}f_0(P_{n,2}^0)}{n^{1/3} } = 2\Gamma (5/3) \pi ^{2/3} \root 3 \of {2/3} . \end{aligned}$$

If \(d\in \mathbb {N}\) is arbitrary and \(k=d-1\), Affentranger [2] (see his Corollary 1 on p. 366, the formula for \(c_3\) on p. 378, and take \(q=0\)) proved the following formula for \(C_{d,d-1}\):

$$\begin{aligned} \begin{aligned}&{2\pi ^{d(d-1)/(2(d+1))}\over (d+1)!}\cdot {\Gamma (1+{d^2/2})\Gamma ({(d^2+1)/( d+1)})\over \Gamma ({(d^2+1)/2})}\\&\times \biggl ({(d+1)\Gamma ({(d+1)/2})\over \Gamma (1+{d/2})}\biggr )^{\!(d^2+1)/( d+1)}. \end{aligned} \end{aligned}$$
(23)

Note also that an exact formula for the expected number of facets of the convex hull of N i.i.d. points sampled uniformly from the ball \(\mathbb {B}^d\) has been obtained by Buchta and Müller [6] (see their Theorem 3 on page 760), but it requires some work to analyse its asymptotic behaviour as \(N\rightarrow \infty \). In [20, Rem. 1.9], it has been shown that for all \(d\in \mathbb {N}\) and \(k\in \{0,\ldots ,d-1\}\),

$$\begin{aligned} C_{d,k} = {2\pi ^{d(d-1)/(2(d+1))}\over (d+1)!}\cdot {\Gamma (1+{d^2/2})\,\Gamma ({(d^2+1)/( d+1)})\over \Gamma ({(d^2+1)/2})} \biggl ({(d+1)\Gamma ({(d+1)/2})\over \Gamma (1+{d/2})}\biggr )^{\!(d^2+1)/( d+1)}\, \mathbb {J}_{d,k+1}(1/2). \end{aligned}$$
(24)

In the special case \(k=d-1\), (24) reduces to (23) since \(\mathbb {J}_{d,d}(1/2) = 1\). Hug [15, Corr. 7.1 and p. 209] gave a formula for \(c_{d,0}\) (and, hence, for \(C_{d,0}\)) which is equivalent to the formula for \(\mathbb {J}_{d,1}(1/2)\), which will be stated in Theorem 3.8 below. Combining (24) with the above algorithm for computing \(\mathbb {J}_{d,k+1}(1/2)\), we obtain the following explicit formulae for Reitzner’s constants in dimensions \(d\le 10\).

Theorem 3.6

The vectors \({\mathbf {C}}_d := (C_{d,0},\ldots , C_{d,d-1})\) are explicitly given by

$$\begin{aligned} {\mathbf {C}}_1= & {} 2\times (1),\qquad {\mathbf {C}}_2 = 2 \root 3 \of {\frac{2}{3}} \pi ^{2/3} \Gamma \biggl (\frac{5}{3}\biggr ) \times (1,1),\\ {\mathbf {C}}_3= & {} \frac{35 \sqrt{\pi /3}}{4} \times \biggl (\frac{1}{2},\frac{3}{2},1\biggr ),\\ {\mathbf {C}}_4= & {} \frac{20\cdot 2^{4/5} 15^{2/5} \pi ^{12/5} \Gamma ({17}/{5})}{143} \times \biggl (\frac{26741}{16800 \pi ^2},1+\frac{26741}{16800 \pi ^2},2,1\biggr ),\\ {\mathbf {C}}_5= & {} \frac{676039 \cdot \Gamma ({13}/{3})}{18000 \root 3 \of {10}} \times \biggl (\frac{2000}{52003},\frac{64003}{104006},\frac{108006}{52003},\frac{5}{2},1\biggr ) ,\\ {\mathbf {C}}_6= & {} \frac{4390400\cdot 2^{6/7} 35^{2/7} \pi ^{30/7} \Gamma ({37}/{7})}{116680311}\\&\times \biggl (\frac{1758847651}{2458624000 \pi ^4},-\frac{1}{2}+\frac{1758847651}{2458624000 \pi ^4}+\frac{108130927981}{14717390688 \pi ^2},\\&\quad \frac{108130927981}{7358695344 \pi ^2},\frac{5}{2}+\frac{108130927981}{14717390688 \pi ^2},3,1\biggr ),\\ {\mathbf {C}}_7= & {} \frac{35830670759 \cdot \Gamma ({25}/{4})}{420175000 \root 4 \of {35}} \times \biggl (\frac{52521875}{44479453356},\frac{1260026621}{14826484452},\\&\frac{708362065}{855374103},\frac{115870255}{39856141},\frac{371689191}{79712282},\frac{7}{2},1\biggl ),\\ {\mathbf {C}}_8= & {} \frac{15752961000000\cdot 6^{4/9} 35^{2/9} \pi ^{56/9} \Gamma ({65}/{9})}{2077805148460987} \times \biggl (\frac{90856752400884977}{571643448768000000 \pi ^6},\\&\frac{2}{3}+\frac{3883880966311229933975003293}{209349006975455882895360000 \pi ^4}+\frac{90856752400884977}{571643448768000000 \pi ^6}\\&\quad -\frac{486245776939428578826199}{59171148465116379120000 \pi ^2}, \frac{3883880966311229933975003293}{104674503487727941447680000 \pi ^4},\\&-\frac{7}{3}+\frac{3883880966311229933975003293}{209349006975455882895360000 \pi ^4}\\&\quad +\frac{486245776939428578826199}{11834229693023275824000 \pi ^2},\\&\frac{486245776939428578826199}{9861858077519396520000 \pi ^2},\frac{14}{3} +\frac{486245776939428578826199}{29585574232558189560000 \pi ^2},4,1\Bigg ),\\ {\mathbf {C}}_9= & {} \frac{109701233401363445369 \cdot \Gamma ({41}/{5})}{726032911411261440\cdot 3^{2/5}\root 5 \of {14}}\times \biggl (\frac{12004512424128}{581660834577748915},\\&\frac{3683565096070608}{581660834577748915},\frac{17538430231527552}{116332166915549783},\frac{570366050377039}{491890769198942},\\&\frac{1019018617306221}{245945384599471},\frac{1080810073}{137168095},\frac{1131811448}{137168095},\frac{9}{2},1\biggr ),\\ {\mathbf {C}}_{10}= & {} \frac{434735988912345551929344\cdot 2^{6/11} 3^{4/11} 77^{2/11} \pi ^{90/11} \Gamma ({101}/{11})}{353855725819178568093478175} \\&\times \biggl (\frac{549837358580569775037558395}{24790385031737592753218912256 \pi ^8},\\&-\frac{3}{2}-\frac{301974317327871030169614455148390753674792595873047}{6565687677840932855885309667960898754371584000000 \pi ^4}\\&\quad +\frac{296364869518522313138595119776890880847603113}{11688440195468832553173084766502230425600000 \pi ^6}\\&\quad +\frac{549837358580569775037558395}{24790385031737592753218912256 \pi ^8}\\&\quad +\frac{37401610118391599618484796905719320020269}{1946114861053154102938714818796281216000 \pi ^2},\\&\frac{296364869518522313138595119776890880847603113}{5844220097734416276586542383251115212800000 \pi ^6},\\&\frac{301974317327871030169614455148390753674792595873047}{1313137535568186571177061933592179750874316800000 \pi ^4}\\&+\frac{296364869518522313138595119776890880847603113}{11688440195468832553173084766502230425600000 \pi ^6}\\&\quad 
+5-\frac{37401610118391599618484796905719320020269}{556032817443758315125347091084651776000 \pi ^2},\\&\frac{301974317327871030169614455148390753674792595873047}{1094281279640155475980884944660149792395264000000 \pi ^4},\\&\frac{301974317327871030169614455148390753674792595873047}{3282843838920466427942654833980449377185792000000 \pi ^4}\\&\quad -7+\frac{37401610118391599618484796905719320020269}{278016408721879157562673545542325888000 \pi ^2},\\&\frac{37401610118391599618484796905719320020269}{324352476842192350489785803132713536000 \pi ^2},\\&\frac{15}{2}+\frac{37401610118391599618484796905719320020269}{1297409907368769401959143212530854144000 \pi ^2},5,1\biggr ). \end{aligned}$$

3.5 Random Polytopes with Vertices on the Sphere

Similarly, one can consider random polytopes approximating a convex body K and having vertices on the boundary of K. Here, we restrict ourselves to the case \(K=\mathbb {B}^d\), so that we are interested in the random polytope \(P_{n,d}^{-1}\) defined as the convex hull of n points \(X_1,\ldots ,X_n\) chosen uniformly at random on the unit sphere \(\mathbb {S}^{d-1}\), \(d\ge 2\). In [20, Rem. 1.9], it has been shown that

$$\begin{aligned} C_{d,k}^* := \lim _{n\rightarrow \infty }\frac{{\mathbb {E}}f_k(P_{n,d}^{-1})}{n^{(d-1)/(d+1)}} ={2^d\pi ^{{d/ 2}-1}\over d(d-1)^2}\cdot {\Gamma (1+{d(d-2)/2})\over \Gamma ({(d-1)^2/2})}\biggl ({\Gamma ({(d+1)/2})\over \Gamma ({d/2})}\biggr )^{\!d-1}\, \mathbb {J}_{d,k+1}(-1/2) \end{aligned}$$
(25)

for all \(k\in \{0,\ldots ,d-1\}\). In the special case \(k=d-1\), it was previously shown by Affentranger [2] (see his Corollary 1 on p. 366 and the formula for \(c_3\) on p. 378, this time with \(q=-1\)) and Buchta et al. [7] (see their formula for \({\bar{F}}_n^{(d)}\) on p. 231) that

$$\begin{aligned} \begin{aligned} C_{d,d-1}^*&= \frac{2^{d-1}}{d} \left( {\begin{array}{c}d-1\\ (d-1)/2\end{array}}\right) ^{\!1-d} \left( {\begin{array}{c}(d-1)^2\\ (d-1)^2/2\end{array}}\right) \\&={2^d\pi ^{{d/ 2}-1}\over d(d-1)^2}\cdot {\Gamma (1+{d(d-2)/2})\over \Gamma ({(d-1)^2/2})}\biggl ({\Gamma ({(d+1)/2})\over \Gamma ({d/2})}\biggr )^{\!d-1}, \end{aligned} \end{aligned}$$
(26)

where the second equality follows from the duplication formula for the Gamma function. This formula for \(C_{d,d-1}^*\) is a special case of (25) since \(\mathbb {J}_{d,d}(-1/2) = 1\). Using (25) together with the algorithm for computing \(\mathbb {J}_{d,k+1}(-1/2)\), we obtain the following

Theorem 3.7

The vectors \({\mathbf {C}}_d^* := (C_{d,0}^*,\ldots , C_{d,d-1}^*)\) are explicitly given by

$$\begin{aligned} {\mathbf {C}}_2^*= & {} (1,1),\quad {\mathbf {C}}_3^* = (1,3,2),\quad {\mathbf {C}}_4^*= \biggl (1,1+\frac{24 \pi ^2}{35},\frac{48 \pi ^2}{35},\frac{24 \pi ^2}{35}\biggr ),\\ {\mathbf {C}}_5^*= & {} \biggl (1,\frac{170}{9},\frac{590}{9},\frac{715}{9},\frac{286}{9}\biggr ) ,\\ {\mathbf {C}}_6^*= & {} \biggl (1,1+\frac{679125 \pi ^2}{49049}-\frac{648000 \pi ^4}{676039},\frac{1358250 \pi ^2}{49049},\\&\ \frac{679125 \pi ^2}{49049}+\frac{3240000 \pi ^4}{676039},\frac{3888000 \pi ^4}{676039},\frac{1296000 \pi ^4}{676039}\biggr ) ,\\ {\mathbf {C}}_7^*= & {} \biggl (1,\frac{4053}{40},\frac{20870479}{20000},\frac{14930979}{4000},\frac{120613311}{20000},\frac{90751353}{20000},\frac{12964479}{10000}\biggr ) ,\\ {\mathbf {C}}_8^*= & {} \biggl (1,1+\frac{272210205988700 \pi ^2}{1430074210851}-\frac{36450025780678496000 \pi ^4}{418057027199191809}\\&\quad +\frac{6884147200000 \pi ^6}{967428110493},\frac{544420411977400 \pi ^2}{1430074210851},\frac{272210205988700 \pi ^2}{1430074210851}\\&\quad +\frac{182250128903392480000 \pi ^4}{418057027199191809}-\frac{24094515200000 \pi ^6}{967428110493},\\&\frac{72900051561356992000 \pi ^4}{139352342399730603},\frac{72900051561356992000 \pi ^4}{418057027199191809}\\&\quad +\frac{48189030400000 \pi ^6}{967428110493},\frac{13768294400000 \pi ^6}{322476036831},\frac{3442073600000 \pi ^6}{322476036831}\biggr ) ,\\ {\mathbf {C}}_9^*= & {} \biggl (1,\frac{2211228}{4375},\frac{1737572156988}{133984375},\frac{4710936358279782}{45956640625},\\&\ \frac{17044839181035378}{45956640625},\frac{4659146840685108}{6565234375},\frac{6850391092580412}{9191328125},\\&\ \frac{18700246336148883}{45956640625},\frac{4155610296921974}{45956640625}\biggr ) ,\\ {\mathbf {C}}_{10}^*= & {} \biggl (1,1+\frac{19758536784497995373925 \pi ^2}{8997284527737444224}\\&\quad -\frac{5450845004071081252936834855024125 \pi ^4}{1307924593773094121223940118864}\\&\quad +\frac{237746319601404434678217736438860000 \pi ^6}{134952926502386519274273464063983}\\&\quad -\frac{15125685654401280000000 \pi ^8}{109701233401363445369},\frac{19758536784497995373925 \pi ^2}{4498642263868722112},\\&\frac{19758536784497995373925 \pi ^2}{8997284527737444224}\\&\quad +\frac{27254225020355406264684174275120625 \pi ^4}{1307924593773094121223940118864}\\&\quad -\frac{832112118604915521373762077536010000 \pi ^6}{134952926502386519274273464063983}\\&\quad +\frac{50418952181337600000000 \pi ^8}{109701233401363445369},\\&\ \frac{16352535012213243758810504565072375 \pi ^4}{653962296886547060611970059432},\\&\frac{5450845004071081252936834855024125 \pi ^4}{653962296886547060611970059432}\\&\quad +\frac{1664224237209831042747524155072020000 \pi ^6}{134952926502386519274273464063983}\\&\quad -\frac{70586533053872640000000 \pi ^8}{109701233401363445369},\\&\ \frac{1426477917608426608069306418633160000 \pi ^6}{134952926502386519274273464063983},\\&\ \frac{356619479402106652017326604658290000 \pi ^6}{134952926502386519274273464063983}\\&\quad +\frac{75628428272006400000000 \pi ^8}{109701233401363445369},\\&\ \frac{50418952181337600000000 \pi ^8}{109701233401363445369},\frac{10083790436267520000000 \pi ^8}{109701233401363445369}\biggr ). \end{aligned}$$

Observe that the first entry of each vector is \(C_{d,0}^* = 1\) for all \(d\in \mathbb {N}\). This is trivial because all points \(X_1,\ldots ,X_n\) are vertices of \(P_{n,d}^{-1}\). Yet, in the above table, the constant 1 appeared as a result of a non-trivial computation of \(\mathbb {J}_{d,1}(-1/2)\). On the one hand, this gives evidence for the correctness of the algorithm. On the other hand, it can be used to give an explicit formula for \(\mathbb {J}_{d,1}(-1/2)\), as we shall show in the next section.

3.6 Special Cases: \(\mathbb {J}_{n,1}(1/2)\) and \(\mathbb {J}_{n,1}(-1/2)\)

There are only a few special cases in which we are able to obtain a “nice” formula for \(\mathbb {J}_{n,k}(\beta )\) or \({\tilde{ \mathbb {J}}}_{n,k}(\beta )\). Most notably, in [17] we obtained an explicit formula for \({\tilde{\mathbb {J}}}_{n,k}(n/2)\), which has applications to the expected f-vector of the zero cell of the Poisson hyperplane tessellation. By a similar method, it is also possible to derive a combinatorial formula for \({\tilde{ \mathbb {J}}}_{n,k}(({n+1})/{2})\), which will be treated elsewhere. In this section, we shall prove simple formulae for \(\mathbb {J}_{n,1}(1/2)\) and \(\mathbb {J}_{n,1}(-1/2)\). Note that the beta distributions with \(\beta =1/2\) and \(\beta =-1/2\) are natural multidimensional generalizations of the Wigner semicircle and the arcsine distributions, respectively.
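In dimension \(d=1\), for instance, the beta density (1) with these two parameters reduces to the classical semicircle and arcsine densities on \((-1,1)\):

$$\begin{aligned} f_{1,1/2}(x)=\frac{2}{\pi }\sqrt{1-x^2},\qquad f_{1,-1/2}(x)=\frac{1}{\pi \sqrt{1-x^2}},\qquad x\in (-1,1). \end{aligned}$$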

Theorem 3.8

For every \(n\in \mathbb {N}\) we have

$$\begin{aligned} \mathbb {J}_{n,1}(1/2)&=\frac{n(n^2+1)(n^2+n+2)\pi }{(n+3)2^{n(2n+1)}} \left( {\begin{array}{c}n+1\\ (n+1)/2\end{array}}\right) ^{n-1} \left( {\begin{array}{c}n^2\\ n^2/2\end{array}}\right) \\&=\frac{n(n^2+1)(n^2+n+2)}{2^{n+1}(n+3)\pi ^{({n-2})/{2}}}\biggl (\frac{\Gamma (({n+2})/{2})}{\Gamma (({n+3})/{2})}\biggr )^{\!n-1}\frac{\Gamma (({n^2+1})/{2})}{\Gamma (({n^2+2})/{2})}. \end{aligned}$$

Proof

The argument follows essentially the approach sketched by Hug [15, pp. 209–210]. Consider N i.i.d. points uniformly distributed in the unit ball \(\mathbb {B}^d\). Denote their convex hull by \(P_{N,d}^0\). As \(N\rightarrow \infty \), the random polytope \(P_{N,d}^0\) approaches the unit ball. In particular, \({\mathbb {E}}\mathop {\mathrm {Vol}}\nolimits _d P_{N,d}^0\) converges to \(\kappa _d\), the volume of \(\mathbb {B}^d\). The speed of convergence has been identified by Wieacker [36]; see also [2] for similar results on general beta polytopes and [1, 19] for exact formulae for the expected volume. In particular, it is known that

$$\begin{aligned} \begin{aligned} \kappa _d - {\mathbb {E}}\mathop {\mathrm {Vol}}\nolimits _d P_{N,d}^0\sim \frac{d\kappa _d}{2d!}&\cdot \frac{d+1}{d+3} \Gamma \biggl (\frac{d^2+1}{d+1}+2\biggr )\\&\quad \times \biggl (\frac{2\sqrt{\pi } \Gamma (({d+3})/{2})}{\Gamma (({d+2})/{2})}\biggr )^{\!{2}/({d+1})} N^{-{2}/({d+1})}, \end{aligned} \end{aligned}$$
(27)

as \(N\rightarrow \infty \); see, for example, Corollary 1 on p. 366 of [2] and the formula for \(c_5\) on p. 378, with \(q=0\). The left-hand side is closely related to the expected number of vertices of \(P_{N,d}^0\) via Efron’s identity, which states that

$$\begin{aligned} {\mathbb {E}}f_0 (P_{N,d}^0) = N\cdot \frac{\kappa _d-{\mathbb {E}}\mathop {\mathrm {Vol}}\nolimits _dP_{N-1,d}^0}{\kappa _d}. \end{aligned}$$
(28)

Indeed, the N-th point is a vertex of \(P_{N,d}^0\) if and only if it is outside the convex hull of the remaining \(N-1\) points. If we condition on the first \(N-1\) points, then the probability that the last point is a vertex is \((\kappa _d- \mathop {\mathrm {Vol}}\nolimits _d P_{N-1,d}^0)/\kappa _d\). Taking expectations proves Efron’s identity. From (27) and (28) we deduce that

$$\begin{aligned} \begin{aligned} {\mathbb {E}}f_0 (P_{N,d}^0)\sim \frac{d}{2d!}&\cdot \frac{d+1}{d+3} \Gamma \biggl (\frac{d^2+1}{d+1}+2\biggr ) \\&\quad \times \biggl (\frac{2\sqrt{\pi } \Gamma (({d+3})/{2})}{\Gamma (({d+2})/{2})}\biggr )^{\!{2}/({d+1})}\! N^{({d-1})/({d+1})}, \end{aligned} \end{aligned}$$
(29)

as \(N\rightarrow \infty \). On the other hand, we know from (22) and (24) (where we take \(k=0\)) that

$$\begin{aligned} \begin{aligned} {\mathbb {E}}f_0(P_{N,d}^0)&\sim C_{d,0} N^{(d-1)/(d+1)}\\&={2\pi ^{d(d-1)/(2(d+1))}\over (d+1)!}\cdot {\Gamma (1+{d^2/2})\Gamma ({(d^2+1)/(d+1)})\over \Gamma ({(d^2+1)/2})}\\&\quad \times \biggl ({(d+1)\Gamma ({(d+1)/2})\over \Gamma (1+{d/2})}\biggr )^{\!(d^2+1)/(d+1)} \!\mathbb {J}_{d,1}(1/2)\cdot N^{{(d-1)/(d+1)}} \end{aligned} \end{aligned}$$
(30)

as \(N\rightarrow \infty \). Equating the constants on the right-hand sides of (29) and (30), solving for \(\mathbb {J}_{d,1}(1/2)\), and simplifying, we arrive at the second formula stated in Theorem 3.8 (with \(d\) replaced by \(n\)). The equivalence of both formulae is easily shown using the identity

$$\begin{aligned} \left( {\begin{array}{c}z\\ z/2\end{array}}\right) = \frac{2^z\Gamma (({z+1})/{2})}{\sqrt{\pi }\Gamma (({z+2})/{2})}, \end{aligned}$$
(31)

which is equivalent to the Legendre duplication formula for the Gamma function. \(\square \)
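Identity (28) is also easy to probe by simulation. The following minimal sketch (assuming NumPy and SciPy are available; the dimension, the number of points, the seed and the number of repetitions are arbitrary illustrative choices) compares both sides of (28) for uniform points in the unit disk:

import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(0)

def uniform_ball(num, dim):
    # num i.i.d. points uniformly distributed in the unit ball of R^dim
    x = rng.standard_normal((num, dim))
    x /= np.linalg.norm(x, axis=1, keepdims=True)        # uniform directions
    return x * rng.random(num)[:, None] ** (1.0 / dim)   # radial part

d, N, reps = 2, 30, 20000
kappa_d = np.pi                                          # volume of the unit disk

lhs = np.mean([len(ConvexHull(uniform_ball(N, d)).vertices) for _ in range(reps)])
rhs = N * (kappa_d - np.mean([ConvexHull(uniform_ball(N - 1, d)).volume
                              for _ in range(reps)])) / kappa_d
print(lhs, rhs)   # the two estimates should agree up to Monte Carlo error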

Theorem 3.9

For every \(n\in \{2,3,\ldots \}\) we have

$$\begin{aligned} \mathbb {J}_{n,1}(-1/2)&= 2^{1-n} n\left( {\begin{array}{c}n-1\\ (n-1)/2\end{array}}\right) ^{\!n-1} \left( {\begin{array}{c}(n-1)^2\\ (n-1)^2/2\end{array}}\right) ^{\!-1}\\&=\frac{n(n-1)^2}{2^{n}\pi ^{(n-2)/2}}\biggl (\frac{\Gamma ({n}/{2})}{\Gamma (({n+1})/{2})}\biggr )^{\!n-1} \frac{\Gamma ({(n-1)^2}/{2})}{\Gamma (({(n-1)^2+1})/{2})}. \end{aligned}$$

We shall give two independent proofs. The first one is based on (25) (which, as was explained above, generalises (26) obtained independently in [2] and [7]). The second proof relies, among other ingredients, on a formula due to Kingman [22]. The fact that all these formulae lead to the same result can be viewed as additional evidence for their correctness.

First proof of Theorem 3.9

Recall that \(P_{N,d}^{-1}\) is the convex hull of N i.i.d. points having the uniform distribution on \(\mathbb {S}^{d-1}\). By a formula derived in [20], we have

$$\begin{aligned}&\lim _{N\rightarrow \infty } \frac{ {\mathbb {E}}f_k(P_{N,d}^{-1}) }{N}\\&\quad = {2^d\pi ^{{d/2}-1}\mathbb {J}_{d,k+1}(-{1/ 2})\over d(d-1)^2}\cdot {\Gamma (1+{d(d-2)/2})\over \Gamma ({(d-1)^2/2})}\biggl ({\Gamma ({(d+1)/2})\over \Gamma ({d/2})}\biggr )^{\!d-1}. \end{aligned}$$

On the other hand, in the special case when \(k=0\) we trivially have \(f_0(P_{N,d}^{-1})=N\) a.s. since every point is a vertex. Hence, the right-hand side equals 1 if \(k=0\), which yields

$$\begin{aligned} \mathbb {J}_{d,1}(-1/2) = {d(d-1)^2 \over 2^d\pi ^{{d/2}-1}}\cdot {\Gamma ({(d-1)^2/2})\over \Gamma (1+{d(d-2)/2})} \biggl ({\Gamma ({d/2})\over \Gamma ({(d+1)/2})}\biggr )^{\!d-1}. \end{aligned}$$

Replacing d by n completes the proof of the second formula stated in Theorem 3.9. The equivalence to the first formula follows from Legendre’s duplication formula (31). \(\square \)
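The two expressions in Theorem 3.9 can also be compared numerically. The following minimal sketch (assuming the mpmath library; the helper names and the range of \(n\) are ad hoc) prints both of them for small \(n\); the columns coincide, and the value 1 for \(n=2\) is the angle sum of a segment:

import mpmath as mp

def binom(a, b):
    # generalized binomial coefficient via the Gamma function
    return mp.gamma(a + 1) / (mp.gamma(b + 1) * mp.gamma(a - b + 1))

def first_form(n):
    return (mp.mpf(2)**(1 - n) * n * binom(n - 1, (n - 1) / 2)**(n - 1)
            / binom((n - 1)**2, (n - 1)**2 / 2))

def second_form(n):
    return (n * (n - 1)**2 / (mp.mpf(2)**n * mp.pi**((n - 2) / 2))
            * (mp.gamma(n / 2) / mp.gamma((n + 1) / 2))**(n - 1)
            * mp.gamma((n - 1)**2 / 2) / mp.gamma(((n - 1)**2 + 1) / 2))

for n in range(2, 8):
    print(n, mp.nstr(first_form(n), 12), mp.nstr(second_form(n), 12))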

The second proof of Theorem 3.9 uses the following observation of Feldman and Klain [10]. It can be viewed as a special case of a more general result that has been obtained earlier by Affentranger and Schneider [3].

Theorem 3.10

Let \(S=[x_0,\ldots ,x_d]\subset \mathbb {R}^d\) be a d-dimensional simplex. Let U be a random vector uniformly distributed on the unit sphere \(\mathbb {S}^{d-1}\) and denote by \(\Pi =\Pi _{U^\bot }\) the orthogonal projection onto the orthogonal complement of U. Then, the sum of solid angles at all vertices of S is given by

$$\begin{aligned} s_0(S) = \frac{\mathbb {P}[\Pi S \text { is a (d-1)-dimensional simplex}]}{2} . \end{aligned}$$
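Theorem 3.10 is straightforward to illustrate by simulation. The following minimal sketch (assuming NumPy; the simplex \(S\), the seed and the sample sizes are arbitrary illustrative choices) estimates the vertex angle sum of a fixed 3-dimensional simplex once directly, by testing random directions against the tangent cones, and once via random projections as in Theorem 3.10:

import numpy as np

rng = np.random.default_rng(1)

# an arbitrary non-degenerate 3-dimensional simplex, chosen only for illustration
S = np.array([[0.0, 0.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.3, 0.2, 0.8]])
d = S.shape[1]

def unit_vector():
    u = rng.standard_normal(d)
    return u / np.linalg.norm(u)

def angle_sum_direct(trials=20000):
    # estimate sum_i alpha(T(x_i, S)) = sum_i P[U in T(x_i, S)]
    hits = 0
    for _ in range(trials):
        u = unit_vector()
        for i in range(d + 1):
            edges = np.delete(S, i, axis=0) - S[i]   # generators of the tangent cone at x_i
            mu = np.linalg.solve(edges.T, u)         # u = sum_j mu_j (x_j - x_i)
            hits += np.all(mu >= 0)
    return hits / trials

def angle_sum_by_projection(trials=20000):
    # estimate (1/2) * P[projection of S onto U^perp is a 2-dimensional simplex]
    hits = 0
    for _ in range(trials):
        u = unit_vector()
        for i in range(d + 1):
            others = np.delete(S, i, axis=0)
            # Pi x_i is not a vertex iff x_i + t*u = sum_j lam_j x_j with lam_j >= 0, sum_j lam_j = 1
            A = np.vstack([np.column_stack([others.T, -u]),
                           np.append(np.ones(d), 0.0)])
            sol = np.linalg.solve(A, np.append(S[i], 1.0))
            if np.all(sol[:d] >= 0):
                hits += 1
                break
    return 0.5 * hits / trials

print(angle_sum_direct(), angle_sum_by_projection())   # the two estimates should roughly agree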

Second proof of Theorem 3.9

Let \(X_0,\ldots ,X_d\), where \(d = n-1\), be i.i.d. random points in \(\mathbb {R}^d\) with probability density \(f_{d,-1/2}\). Independently of these points, let U be a uniform random point on the sphere \(\mathbb {S}^{d-1}\). Consider an orthogonal projection \(\Pi \) of the simplex \([X_0,\ldots ,X_d]\) onto a random, uniformly distributed, hyperplane \(L:= U^\bot \). Then, it follows from Theorem 3.10 and Fubini’s formula that

$$\begin{aligned} \mathbb {J}_{n,1}(-1/2) = \frac{d+1}{2}\cdot \mathbb {P}[\Pi X_0 \text { is not a vertex of } [\Pi X_0,\ldots ,\Pi X_d]]. \end{aligned}$$

Let us compute the probability on the right-hand side. Let \(I_L:L\rightarrow \mathbb {R}^{d-1}\) be an isometry with \(I_L(0)=0\). By the projection property of the beta densities (see [19, Lem. 4.4]) the points

$$\begin{aligned} Y_0:=I_L(\Pi X_0),\ldots , Y_d := I_L(\Pi X_d), \end{aligned}$$

are i.i.d. with the density \(f_{d-1,0}\); that is, they are uniformly distributed in the unit ball \(\mathbb {B}^{d-1}\). We have

$$\begin{aligned}&\mathbb {P}[\Pi X_0 \text { is not a vertex of } [\Pi X_0,\ldots ,\Pi X_d]]=\mathbb {P}[Y_0 \text { is not a vertex of } [Y_0,\ldots ,Y_d]]\\&\qquad =\mathbb {P}[Y_0 \in [Y_1,\ldots ,Y_d]]=\frac{{\mathbb {E}}\mathop {\mathrm {Vol}}\nolimits _{d-1} [Y_1,\ldots ,Y_d]}{\kappa _{d-1}}, \end{aligned}$$

where the last equality follows, as in the proof of Efron’s identity, by conditioning on \(Y_1,\ldots ,Y_d\) and recalling that \(Y_0\) is uniformly distributed in \(\mathbb {B}^{d-1}\). A formula for the expected volume on the right-hand side is well known from the work of Kingman [22, Thm. 7]:

$$\begin{aligned} {\mathbb {E}}\mathop {\mathrm {Vol}}\nolimits _{d-1} [Y_1,\ldots ,Y_d]&=\kappa _{d-1} \left( {\begin{array}{c}d\\ d/2\end{array}}\right) ^{\!d} \left( {\begin{array}{c}d^2\\ d^2/2\end{array}}\right) ^{\!-1} 2^{1-d}\\&=\kappa _{d-1} \frac{ d^2(d+1)}{2^{d}\pi ^{(d-1)/2}}\biggl (\frac{\Gamma (({d+1})/{2})}{\Gamma (({d+2})/{2})}\biggr )^{\!d} \frac{\Gamma ({d^2}/{2})}{\Gamma (({d^2+1})/{2})}, \end{aligned}$$

where the second equality can be verified using the duplication formula for the Gamma function. Taking everything together and recalling that \(d=n-1\) completes the proof. \(\square \)

4 Proofs: Formulae for Internal Angles

4.1 Notation and Facts from Stochastic Geometry

Let us first introduce the necessary notation, referring to the book by Schneider and Weil [35] for an extensive account of stochastic geometry. A polyhedral cone (or just a cone) \(C\subset \mathbb {R}^d\) is an intersection of finitely many closed half-spaces whose boundaries pass through the origin. The solid angle of C is defined as

$$\begin{aligned} \alpha (C) = \mathbb {P}[U \in C], \end{aligned}$$

where U is a random vector having the uniform distribution on the unit sphere of the smallest linear subspace containing C. For example, the angle of \(\mathbb {R}^d\) is 1, whereas the angle of any half-space is 1/2. Let \(P\subset \mathbb {R}^d\) be a d-dimensional convex polytope. Denote by \(\mathcal {F}_k(P)\) the set of its k-dimensional faces, where \(k\in \{0,1,\ldots , d\}\). The set of all faces of P is denoted by \(\mathcal {F}_{\bullet }(P)=\bigcup _{k=0}^d\mathcal {F}_k(P)\). The tangent cone of P at its face \(F\in \mathcal {F}_k(P)\) is defined as

$$\begin{aligned} T(F,P) := \{y\in \mathbb {R}^d: \exists \varepsilon >0 \text { such that } f_0 + \varepsilon y \in P\}, \end{aligned}$$

where \(f_0\) is any point in the relative interior of F, defined as the interior of F taken with respect to its affine hull. The internal angle of P at its face \(F\in \mathcal {F}_k(P)\) is defined by

$$\begin{aligned} \beta (F,P) := \alpha (T(F,P)). \end{aligned}$$

The normal or external cone of \(F\) is defined as the polar cone of \(T(F,P)\), that is,

$$\begin{aligned} N(F,P) = \{z\in \mathbb {R}^d:\langle z,y \rangle \le 0 \text { for all } y\in T(F,P)\}. \end{aligned}$$

The normal or external angle of \(P\) at its face \(F\in \mathcal {F}_k(P)\) is defined by

$$\begin{aligned} \gamma (F,P) := \alpha (N(F,P)). \end{aligned}$$

By convention, \(\beta (P,P) = \gamma (P,P) = 1\). For a polyhedral cone \(C\subset \mathbb {R}^d\) we denote by \(\upsilon _{0}(C),\ldots ,\upsilon _d(C)\) its conic intrinsic volumes. There are various equivalent definitions of these quantities, see [4, 5] and [35, Sect. 6.5]. For example, we have

$$\begin{aligned} \upsilon _j(C) = \sum _{F\in \mathcal {F}_j(C)} \alpha (F) \gamma (F, C),\qquad j\in \{0,\ldots ,d\}. \end{aligned}$$

It is known, see [35, Thm. 6.5.5] or [5, Eq. (5.1)], that for every cone \(C\subset \mathbb {R}^d\),

$$\begin{aligned} \sum _{j=0}^d \upsilon _j(C) =1. \end{aligned}$$
(32)

Also, the Gauss–Bonnet relation, see [35, Thm. 6.5.5] or [5, Eq. (5.3)], states that

$$\begin{aligned} \sum _{j=0}^d (-1)^j \upsilon _j(C) = 0 \end{aligned}$$
(33)

for every d-dimensional polyhedral cone C that is not a linear subspace.
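For orientation, for the non-negative orthant \(C=[0,\infty )^d\) one has \(\upsilon _j(C)=\left( {\begin{array}{c}d\\ j\end{array}}\right) 2^{-d}\), and the relations (32) and (33) reduce to the elementary binomial identities

$$\begin{aligned} \sum _{j=0}^d \left( {\begin{array}{c}d\\ j\end{array}}\right) 2^{-d} =1, \qquad \sum _{j=0}^d (-1)^j \left( {\begin{array}{c}d\\ j\end{array}}\right) 2^{-d} =0, \qquad d\ge 1. \end{aligned}$$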

4.2 Proof of Proposition 2.1

Consider the \((n-1)\)-dimensional random simplices

$$\begin{aligned} P_{n,n-1}^\beta := [X_1,\ldots ,X_{n}]\quad \text { and }\quad {\tilde{P}}_{n,n-1}^\beta := [{\tilde{X}}_1,\ldots ,{\tilde{X}}_{n}], \end{aligned}$$

where \(X_1,\ldots ,X_n\) (respectively, \({\tilde{X}}_1,\ldots ,{\tilde{X}}_n\)) are independent random points in \(\mathbb {R}^{n-1}\) with probability density \(f_{n-1,\beta }\) (respectively, \({\tilde{f}}_{n-1,\beta }\)). Let G (respectively, \({\tilde{G}}\)) be a k-vertex face of \(P_{n,n-1}^\beta \) (respectively, \({\tilde{P}}_{n,n-1}^\beta \)). Without loss of generality, we can take \(G=[X_1,\ldots ,X_k]\) and \({\tilde{G}}= [{\tilde{X}}_1,\ldots ,{\tilde{X}}_k]\). The tangent cones of these simplices at this face are defined as

$$\begin{aligned} T_{n,k}^\beta&:=\{v\in \mathbb {R}^{n-1}: \text {there exists } \varepsilon>0 \text { such that } g_0 + \varepsilon v\in P_{n,n-1}^\beta \},\\ {\tilde{T}}_{n,k}^\beta&:=\{v\in \mathbb {R}^{n-1}: \text {there exists } \varepsilon >0 \text { such that } {\tilde{g}}_0 + \varepsilon v\in {\tilde{P}}_{n,n-1}^\beta \}, \end{aligned}$$

where \(g_0\) (respectively, \({\tilde{g}}_0\)) is any point in the relative interior of G (respectively, \({\tilde{G}}\)). The expected conic intrinsic volumes of the tangent cones \(T_{n,k}^\beta \) and \({\tilde{T}}_{n,k}^\beta \) were computed in [20, Thms. 1.12 and 1.18]. Namely, it was shown there that for all \(k\in \{1,\ldots ,n-1\}\) and \(j\in \{k-1,\ldots ,n-1\}\) we have

$$\begin{aligned} {\mathbb {E}}\upsilon _j(T_{n,k}^\beta )&={\left( {\begin{array}{c}n\\ k\end{array}}\right) }^{\!-1}\! \mathbb {I}_{n,j+1}(2\beta +n-1) {\tilde{\mathbb {J}}}_{j+1,k}\biggl (\beta + \frac{n-1-j}{2}\biggr ), \end{aligned}$$
(34)
$$\begin{aligned} {\mathbb {E}}\upsilon _j({\tilde{T}}_{n,k}^\beta )&={\left( {\begin{array}{c}n\\ k\end{array}}\right) }^{\!-1}\!{\tilde{\mathbb {I}}}_{n,j+1}(2\beta -n+1) {\tilde{ \mathbb {J}}}_{j+1,k}\biggl (\beta - \frac{n-1-j}{2}\biggr ). \end{aligned}$$
(35)

For \(j\notin \{k-1,\ldots ,n-1\}\) we have \(\upsilon _j(T_{n,k}^\beta ) = \upsilon _j({\tilde{T}}_{n,k}^\beta )=0\), which is due to the fact that the tangent cones contain the \((k-1)\)-dimensional linear subspace spanned by \(X_1-g_0,\ldots ,X_k-g_0\) (respectively, \({\tilde{X}}_1-{\tilde{g}}_0,\ldots ,{\tilde{X}}_k-{\tilde{g}}_0\)). Applied to the tangent cones \(T_{n,k}^\beta \) and \({\tilde{T}}_{n,k}^\beta \), relations (32) and (33) read as

$$\begin{aligned} \sum _{j=k-1}^{n-1}\! \upsilon _j(T_{n,k}^\beta )= & {} \sum _{j=k-1}^{n-1}\! \upsilon _j({\tilde{T}}_{n,k}^\beta )=1,\\ \sum _{j=k-1}^{n-1}\!(-1)^j\upsilon _j(T_{n,k}^\beta )= & {} \sum _{j=k-1}^{n-1}\! (-1)^j\upsilon _j({\tilde{T}}_{n,k}^\beta )=0. \end{aligned}$$

Taking the expectation and applying (34) and (35), we arrive at the required relations (6)–(9). \(\square \)

Remark 4.1

It is possible to obtain another proof of Proposition 2.1 using McMullen’s non-linear angle-sum relations [23, 24]. These state that for every face \(F\in \mathcal {F}_\bullet (P)\) of an arbitrary polytope P,

$$\begin{aligned} \sum _{H\in \mathcal {F}_\bullet (P): F\subset H \subset P}\!\! \beta (F,H) \gamma (H,P)&= 1,\\ \sum _{H\in \mathcal {F}_\bullet (P): F\subset H \subset P} \!\!(-1)^{\dim H - \dim P} \beta (F,H) \gamma (H,P)&= \delta _{F,P}, \end{aligned}$$

where \(\delta _{F,P} = 1\) if \(F=P\), and \(\delta _{F,P}=0\) otherwise. Applied to \(P = P_{n,n-1}^\beta = [X_1,\ldots ,X_{n}]\) and \(F=[X_1,\ldots ,X_k]\), the first relation reads

$$\begin{aligned} \sum _{m=k}^n \left( {\begin{array}{c}n-k\\ m-k\end{array}}\right) \beta ([X_1,\ldots ,X_k],[X_1,\ldots ,X_m]) \gamma ([X_1,\ldots ,X_m],[X_1,\ldots ,X_n]) = 1. \end{aligned}$$
(36)

To prove (6) of Proposition 2.1, one is tempted to take the expectation of this relation. This has to be done with care because the relation is non-linear. First of all, by Theorem 1.2 we have

$$\begin{aligned} {\mathbb {E}}\,\gamma ([X_1,\ldots ,X_m],[X_1,\ldots ,X_n]) = I_{n,m} (2\beta + n-1). \end{aligned}$$

The so-called canonical decomposition of beta distributions, see [33] or [20, Thm. 3.3], implies that the random variables \(\gamma ([X_1,\ldots ,X_m],[X_1,\ldots ,X_n])\) and \(\beta ([X_1,\ldots ,X_k],[X_1,\ldots ,X_m])\) are stochastically independent; see [20, Thm. 1.6] for the statement and [20, Sect. 4.1] for the proof. Finally, [20, Thm. 4.1] with \(d=m-1\), \(\ell = n-m\) implies that

$$\begin{aligned} {\mathbb {E}}\,\beta ([X_1,\ldots ,X_k],[X_1,\ldots ,X_m])=J_{m,k} \biggl (\beta + \frac{n-m}{2}\biggr ). \end{aligned}$$

Observe that on the right-hand side we have a quantity different from \(J_{m,k}(\beta )\) since the points \(X_1,\ldots ,X_m\) are in \(\mathbb {R}^{n-1}\) and do not form a full-dimensional simplex, so that we cannot directly apply the definition of \(J_{m,k}(\beta )\). Taking the expectation of (36) and using the above facts, we obtain

$$\begin{aligned} \sum _{m=k}^n \left( {\begin{array}{c}n-k\\ m-k\end{array}}\right) I_{n,m} (2\beta + n-1) J_{m,k} \biggl (\beta + \frac{n-m}{2}\biggr ) = 1. \end{aligned}$$

Recalling that \(\mathbb {I}_{n,k}(\alpha ) = \left( {\begin{array}{c}n\\ k\end{array}}\right) I_{n,k}(\alpha )\) and \(\mathbb {J}_{n,k}(\beta ) = \left( {\begin{array}{c}n\\ k\end{array}}\right) J_{n,k}(\beta )\), we arrive at (6). The proofs of (7)–(9) are similar.
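For illustration, in the smallest non-trivial case \(n=3\), \(k=1\), relation (36) reduces to

$$\begin{aligned} \gamma ([X_1],[X_1,X_2,X_3]) + 2\cdot \frac{1}{2}\cdot \frac{1}{2} + \beta ([X_1],[X_1,X_2,X_3]) = 1, \end{aligned}$$

since the internal angle of an edge at one of its vertices and the external angle of a triangle at one of its edges both equal 1/2. This is the familiar relation \(\beta + \gamma = 1/2\) between the internal and external angles of a triangle at a vertex. Taking expectations and using \({\mathbb {E}}\,\gamma ([X_1],[X_1,X_2,X_3]) = 1/3\) (the external angles at the three vertices sum to 1) yields \({\mathbb {E}}\,\beta ([X_1],[X_1,X_2,X_3]) = 1/6\), i.e., an expected internal angle sum of 1/2, as it must be for any plane triangle.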

4.3 Proof of Theorem 2.2

We use induction on n. The claim is true for \(n=k=1\) since \(\mathbb {J}_{1,1}(\beta ) = 1\) and \(\mathbb {I}_{1,1}(2\beta ) = 1\). Assume that, for some \(n\ge 2\), the claim is true for all quantities \(\mathbb {J}_{m,k}(\gamma )\) with \(m\in \{1,\ldots ,n-1\}\), \(k\in \{1,\ldots ,m\}\), \(\gamma \ge -1\). In particular, \(\mathbb J_{m,k}(\beta +(n-m)/2)\) equals

$$\begin{aligned} \sum _{\ell =0}^{m-k}(-1)^\ell \; \sum _{m=m_0>\cdots >m_\ell \ge k} \mathbb {I}_{m, m_1}(2\beta +n-1) \ldots \mathbb {I}_{m_{\ell -1}, m_\ell }(2\beta + n-1) \left( {\begin{array}{c}m_\ell \\ k\end{array}}\right) . \end{aligned}$$

By (10), we have

$$\begin{aligned} \mathbb {J}_{n,k}(\beta )=\left( {\begin{array}{c}n\\ k\end{array}}\right) - \sum _{m=k}^{n-1} \mathbb {I}_{n,m}(2\beta +n-1) \mathbb {J}_{m,k}\biggl (\beta +\frac{n-m}{2}\biggr ). \end{aligned}$$

Using the induction assumption, we obtain

$$\begin{aligned} \mathbb {J}_{n,k}(\beta )&=\left( {\begin{array}{c}n\\ k\end{array}}\right) - \sum _{m=k}^{n-1} \sum _{\ell =0}^{m-k} (-1)^\ell \sum _{m=m_0>\cdots>m_\ell \ge k} \mathbb {I}_{n,m}(2\beta +n-1) \\&\quad \quad \times \mathbb {I}_{m, m_1}(2\beta +n-1) \ldots \times \mathbb {I}_{m_{\ell -1}, m_\ell }(2\beta + n-1) \left( {\begin{array}{c}m_\ell \\ k\end{array}}\right) \\&=\left( {\begin{array}{c}n\\ k\end{array}}\right) - \sum _{\ell '=1}^{n-k} (-1)^{\ell '-1} \sum _{n=n_0>\cdots >n_{\ell '}\ge k}\mathbb {I}_{n,n_1}(2\beta +n-1) \\&\quad \quad \times \mathbb {I}_{n_1, n_2}(2\beta +n-1) \ldots \times \mathbb {I}_{n_{\ell '-1}, n_{\ell '}}(2\beta + n-1) \left( {\begin{array}{c}n_{\ell '}\\ k\end{array}}\right) , \end{aligned}$$

where we used the index shift \(\ell ' = \ell +1\), \((n_1,\ldots ,n_{\ell '}) = (m_0,\ldots , m_\ell )\). Note that \(\left( {\begin{array}{c}n\\ k\end{array}}\right) \) can be interpreted as the term corresponding to \(\ell '=0\). This completes the induction.

\(\square \)

Theorem 2.3 can be established analogously by using (11) instead of (10).
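As a minimal consistency check of Theorem 2.2, take \(n=2\) and \(k=1\). A one-dimensional simplex is a segment, each of whose two vertices carries internal and external angle 1/2, so that \(\mathbb {J}_{2,1}(\beta ) = \mathbb {I}_{2,1}(2\beta +1) = 1\) for every \(\beta \). Indeed, formula (10) gives

$$\begin{aligned} \mathbb {J}_{2,1}(\beta ) = \left( {\begin{array}{c}2\\ 1\end{array}}\right) - \mathbb {I}_{2,1}(2\beta +1)\, \mathbb {J}_{1,1}\biggl (\beta +\frac{1}{2}\biggr ) = 2 - 1\cdot 1 = 1. \end{aligned}$$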

5 Proofs: Arithmetic Properties

In this section we prove Theorems 2.7 and 2.8. The proofs of Theorems 2.10 and 2.11, being analogous to the proofs of Theorems 2.7 and 2.8, are omitted.

5.1 Proof of Theorems 2.7 and 2.8

Recall from Sect. 2.1 that we can express \(\mathbb {J}_{n,k}(\beta )\) through the quantities of the form

$$\begin{aligned} \mathbb {I}_{n,k}(\alpha ) = \left( {\begin{array}{c}n\\ k\end{array}}\right) \int _{-\pi /2}^{+\pi /2}\! c_{1,({\alpha k - 1})/{2}} (\cos \varphi )^{\alpha k} \biggl (\int _{-\pi /2}^\varphi \! c_{1,({\alpha -1})/{2}}(\cos \theta )^{\alpha } \,\mathrm{d}\theta \biggr )^{\!n-k} \mathrm{d}\varphi , \end{aligned}$$

\(\alpha \ge 0\), where

$$\begin{aligned} c_{1,\beta }= \frac{ \Gamma ( {3}/{2} + \beta ) }{\sqrt{\pi }\Gamma (\beta +1)}, \qquad \beta >-1. \end{aligned}$$

In Propositions 5.4 and 5.6 we shall establish the arithmetic properties of \(\mathbb {I}_{n,k}(\alpha )\) for integer \(\alpha \ge 0\). Taken together, these propositions yield Theorem 2.7.
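To see the integral formula at work in the simplest case, take \(n=3\), \(k=1\), \(\alpha =1\). Then \(c_{1,0}=1/2\), the inner integral equals \((\sin \varphi +1)/2\), and

$$\begin{aligned} \mathbb {I}_{3,1}(1) = 3\int _{-\pi /2}^{+\pi /2} \frac{\cos \varphi }{2} \biggl (\frac{\sin \varphi +1}{2}\biggr )^{\!2} \mathrm{d}\varphi = \frac{3}{8}\biggl [\frac{(\sin \varphi +1)^3}{3}\biggr ]_{\varphi =-\pi /2}^{\varphi =+\pi /2} = 1, \end{aligned}$$

in accordance with the fact that the external angles at the vertices of any plane triangle sum to 1.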

Lemma 5.1

Let \(\beta > -1\).

  1. (a)

    If \(\beta \) is integer, then \(c_{1,\beta }\) is rational.

  2. (b)

    If \(\beta \) is half-integer, then \(c_{1,\beta }\) is a rational multiple of \(\pi ^{-1}\).

Proof

Just recall the following two facts: (i) \(\Gamma (x)\) is integer if \(x>0\) is integer; (ii) \(\Gamma (x)\) is a rational multiple of \(\Gamma (1/2) = \sqrt{\pi }\) if \(x>0\) is half-integer. \(\square \)

Lemma 5.2

If \(k\ge 1\) is an odd integer, then \(\int _{-\pi /2}^\varphi (\cos \theta )^k \,d \theta \) can be represented as a linear combination of the functions \(1, \sin \varphi , \sin 3\varphi , \ldots , \sin k\varphi \) with rational coefficients.

Proof

We have

$$\begin{aligned} (\cos \theta )^k = \biggl (\frac{\mathrm{e}^{{\mathrm{i}}\theta } + \mathrm{e}^{-{\mathrm{i}}\theta }}{2}\biggr )^{\!k}\,=\!\sum _{m = \pm 1, \pm 3,\ldots } \!\!q_m \mathrm{e}^{{\mathrm{i}}m \theta }\,=\!\sum _{m = 1,3,\ldots } \!\!2q_m \cos m\theta \end{aligned}$$

for some rational numbers \(q_m\) satisfying \(q_m=q_{-m}\) and vanishing for \(m>k\). By integration it follows that

$$\begin{aligned} \int _{-\pi /2}^\varphi (\cos \theta )^k\, d \theta= & {} \sum _{m = 1,3,\ldots } \!\!2q_m \int _{-\pi /2}^\varphi \!\cos m\theta \,d \theta \\= & {} \sum _{m = 1,3,\ldots }\!\! \frac{2q_m(\sin m\varphi -\sin (-m\pi /2))}{m}, \end{aligned}$$

which proves the claim. \(\square \)
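For instance, for \(k=3\) one has \((\cos \theta )^3 = \frac{3}{4}\cos \theta + \frac{1}{4}\cos 3\theta \), so that

$$\begin{aligned} \int _{-\pi /2}^\varphi (\cos \theta )^3 \,d \theta = \frac{3}{4}\sin \varphi + \frac{1}{12}\sin 3\varphi + \frac{2}{3}, \end{aligned}$$

a \(\mathbb {Q}\)-linear combination of \(1\), \(\sin \varphi \) and \(\sin 3\varphi \), as claimed.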

Lemma 5.3

If \(k\ge 0\) is an even integer, then \(\int _{-\pi /2}^\varphi (\cos \theta )^k \,d \theta \) can be represented as a linear combination of the functions \(\pi , \varphi , \sin 2\varphi , \sin 4\varphi , \ldots , \sin k\varphi \) with rational coefficients.

Proof

We have

$$\begin{aligned} (\cos \theta )^k = \biggl (\frac{\mathrm{e}^{{\mathrm{i}}\theta } + \mathrm{e}^{-{\mathrm{i}}\theta }}{2}\biggr )^{\!k}\,=\!\sum _{m = 0, \pm 2, \pm 4,\ldots } \!\!q_m \mathrm{e}^{{\mathrm{i}}m \theta }=q_0 \,+\! \sum _{m = 2,4,\ldots } \!\!2q_m \cos m\theta \end{aligned}$$

for some rational numbers \(q_m\) satisfying \(q_m=q_{-m}\) and vanishing for \(m>k\). By integration it follows that

$$\begin{aligned} \int _{-\pi /2}^\varphi (\cos \theta )^k\, d \theta&\,=\,q_0 \cdot \biggl (\varphi + \frac{\pi }{2}\biggr ) \,+\! \sum _{m = 2,4,\ldots }\!\! 2q_m \int _{-\pi /2}^\varphi \!\cos m\theta \, d \theta \\&\,=\,q_0 \cdot \biggl (\varphi + \frac{\pi }{2}\biggr )\, +\! \sum _{m = 2,4,\ldots }\!\! \frac{2q_m(\sin m\varphi -\sin (-m\pi /2))}{m}, \end{aligned}$$

which proves the claim since \(\sin (-m\pi /2) = 0\) for even m. \(\square \)
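Similarly, for \(k=2\) one has \((\cos \theta )^2 = \frac{1}{2} + \frac{1}{2}\cos 2\theta \), so that

$$\begin{aligned} \int _{-\pi /2}^\varphi (\cos \theta )^2 \,d \theta = \frac{\pi }{4} + \frac{\varphi }{2} + \frac{1}{4}\sin 2\varphi , \end{aligned}$$

a \(\mathbb {Q}\)-linear combination of \(\pi \), \(\varphi \) and \(\sin 2\varphi \), as claimed.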

Proposition 5.4

If \(\alpha \ge 1\) is an odd integer, then \(\mathbb {I}_{n,k}(\alpha )\) is rational for all \(n\in \mathbb {N}\), \(k\in \{1,\ldots ,n\}\).

Proof

Note that \(c_{1,({\alpha -1})/{2}}\) is rational by Lemma 5.1. Using Lemma 5.2 and the formula \(\sin t = (\mathrm{e}^{{\mathrm{i}}t} - \mathrm{e}^{-{\mathrm{i}}t})/(2{\mathrm{i}})\) we can write

$$\begin{aligned} \int _{-\pi /2}^\varphi c_{1,({\alpha -1})/{2}}(\cos \theta )^{\alpha } \,\mathrm{d}\theta \,=\, a +\! \sum _{m=1,3,\ldots }\!\! a_m \sin m\varphi \,=\, a +\! \sum _{m=\pm 1, \pm 3,\ldots }\!\! a_m' {\mathrm{i}}\mathrm{e}^{{\mathrm{i}}m \varphi }, \end{aligned}$$

for some \(a,a_m,a_m'\in \mathbb {Q}\). The sums in the above equality, as well as all sums in this proof, have only finitely many non-zero terms.

Case 1: Let \(k\in \{1,\ldots ,n\}\) be odd. Then, \(c_{1,({\alpha k - 1})/{2}}\) is rational by Lemma 5.1, and we can write

$$\begin{aligned} c_{1,({\alpha k - 1})/{2}} (\cos \varphi )^{\alpha k}=c_{1,({\alpha k - 1})/{2}} \biggl (\frac{\mathrm{e}^{{\mathrm{i}}\varphi } + \mathrm{e}^{-{\mathrm{i}}\varphi } }{2}\biggr )^{\!\alpha k}\,=\!\sum _{\ell =\pm 1, \pm 3,\ldots }\!\! b_\ell \mathrm{e}^{{\mathrm{i}}\ell \varphi } \end{aligned}$$

with some rational numbers \(b_\ell \). Taking everything together, we arrive at

$$\begin{aligned} \mathbb {I}_{n,k}(\alpha )\,=\, \left( {\begin{array}{c}n\\ k\end{array}}\right) \int _{-\pi /2}^{+\pi /2} \left( \sum _{\ell =\pm 1, \pm 3,\ldots }\!\! b_\ell \mathrm{e}^{{\mathrm{i}}\ell \varphi }\right) \left( a +\!\sum _{m=\pm 1, \pm 3,\ldots }\!\! a_m' {\mathrm{i}}\mathrm{e}^{{\mathrm{i}}m \varphi }\right) ^{\!n-k} \mathrm{d}\varphi . \end{aligned}$$

When multiplying out the terms under the integral sign, we obtain a finite \(\mathbb {Q}\)-linear combination of the terms of the form \(\mathrm{e}^{i s \varphi }\) (with odd s) and \({\mathrm{i}}\mathrm{e}^{is \varphi }\) (with even s). The integral of a term of the former type is a rational number since

$$\begin{aligned} \int _{-\pi /2}^{+\pi /2} \mathrm{e}^{i s \varphi }\,\mathrm{d}\varphi = \frac{\mathrm{e}^{i s \pi /2} - \mathrm{e}^{-i s \pi /2}}{{\mathrm{i}}s}\in \mathbb {Q}, \qquad s\in \{\pm 1, \pm 3, \ldots \}. \end{aligned}$$

The integrals of the terms of the latter type, with \(s\ne 0\), are also rational since

$$\begin{aligned} \int _{-\pi /2}^{+\pi /2} {\mathrm{i}}\mathrm{e}^{i s \varphi }\,\mathrm{d}\varphi = \frac{\mathrm{e}^{i s \pi /2} - \mathrm{e}^{-i s \pi /2}}{s} \in \mathbb {Q},\qquad s\in \{\pm 2, \pm 4, \ldots \}. \end{aligned}$$

Finally, the term \({\mathrm{i}}\mathrm{e}^{{\mathrm{i}}0 \varphi }\) must have coefficient 0 since its integral is purely imaginary and we know a priori that \(\mathbb {I}_{n,k}(\alpha )\) is real. Hence, \(\mathbb {I}_{n,k}(\alpha )\) is rational.

Case 2: Let \(k\in \{1,\ldots ,n\}\) be even. Then, \(\alpha k\) is also even and \(c_{1,{(\alpha k - 1)}/{2}}\) is a rational multiple of \(1/\pi \) by Lemma 5.1. We can write

$$\begin{aligned} c_{1,({\alpha k - 1})/{2}} (\cos \varphi )^{\alpha k}=c_{1,({\alpha k - 1})/{2}} \biggl (\frac{\mathrm{e}^{{\mathrm{i}}\varphi } + \mathrm{e}^{-{\mathrm{i}}\varphi }}{2}\biggr )^{\!\alpha k}=\frac{1}{\pi }\sum _{\ell = 0, \pm 2, \pm 4,\ldots } \!\!b_\ell \mathrm{e}^{{\mathrm{i}}\ell \varphi } \end{aligned}$$

with some rational numbers \(b_\ell \), where the sum contains only finitely many non-zero terms. Taking everything together, we arrive at

$$\begin{aligned} \mathbb {I}_{n,k}(\alpha )= \left( {\begin{array}{c}n\\ k\end{array}}\right) \int _{-\pi /2}^{+\pi /2} \left( \frac{1}{\pi }\sum _{\ell = 0, \pm 2, \pm 4,\ldots } \!\!b_\ell \mathrm{e}^{{\mathrm{i}}\ell \varphi } \right) \left( a +\! \sum _{m=\pm 1, \pm 3,\ldots }\!\! a_m' {\mathrm{i}}\mathrm{e}^{{\mathrm{i}}m \varphi }\right) ^{\!n-k}\mathrm{d}\varphi . \end{aligned}$$

When multiplying out the terms under the sign of the integral, we obtain a finite \(\mathbb {Q}\)-linear combination of terms of the form \(\pi ^{-1} \mathrm{e}^{i s \varphi }\) (with even s) and \({\mathrm{i}}\pi ^{-1} \mathrm{e}^{i s \varphi }\) (with odd s). The integral of the term \(\pi ^{-1}\mathrm{e}^{i 0 \varphi }\) is 1. By the same analysis as in Case 1, the integrals of the terms \(\pi ^{-1}\mathrm{e}^{i s \varphi }\) with even \(s\ne 0\) vanish, whereas the integrals of the terms \({\mathrm{i}}\pi ^{-1}\mathrm{e}^{i s \varphi }\) with odd \(s\) are purely imaginary and hence must cancel since we know a priori that \(\mathbb {I}_{n,k}(\alpha )\) is real. Hence, \(\mathbb {I}_{n,k}(\alpha )\) is rational. \(\square \)
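Proposition 5.4 can also be checked symbolically for small parameters. The following minimal sketch (assuming the SymPy library; the helper names and the parameter choices are ad hoc) evaluates the defining double integral exactly and returns rational numbers for odd \(\alpha \):

import sympy as sp

phi, theta = sp.symbols('phi theta', real=True)

def c1(beta):
    # c_{1,beta} = Gamma(3/2 + beta) / (sqrt(pi) * Gamma(beta + 1))
    return sp.gamma(sp.Rational(3, 2) + beta) / (sp.sqrt(sp.pi) * sp.gamma(beta + 1))

def I_nk(n, k, alpha):
    inner = sp.integrate(c1(sp.Rational(alpha - 1, 2)) * sp.cos(theta)**alpha,
                         (theta, -sp.pi/2, phi))
    outer = sp.integrate(c1(sp.Rational(alpha*k - 1, 2)) * sp.cos(phi)**(alpha*k)
                         * inner**(n - k), (phi, -sp.pi/2, sp.pi/2))
    return sp.simplify(sp.binomial(n, k) * outer)

print(I_nk(3, 1, 1))   # 1: the external angles at the vertices of a triangle sum to 1
print(I_nk(4, 2, 1))   # 15/8, a rational number, in line with Proposition 5.4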

Proof of Theorem 2.8 (a)

Let \(n\in \mathbb {N}\), \(k\in \{1,\ldots ,n\}\), and let \(\beta \ge -1\) be such that \(2\beta + n\) is even. Our aim is to prove that \(\mathbb {J}_{n,k}(\beta )\) is rational. This is done by induction. The claim is trivial for \(n=1,2,3\). Assuming that, for some \(n\ge 4\), the statement has been established for all \(\mathbb {J}_{m,k}(\gamma )\) with \(m\in \{1,\ldots ,n-1\}\) and \(2\gamma +m\) even, we recall that by (10),

$$\begin{aligned} \mathbb {J}_{n,k}(\beta ) = \left( {\begin{array}{c}n\\ k\end{array}}\right) - \sum _{s=1}^{n-k} \mathbb {I}_{n,n-s}(2\beta +n-1) \mathbb {J}_{n-s,k}\biggl (\beta +\frac{s}{2}\biggr ). \end{aligned}$$

The numbers \(\mathbb {I}_{n,n-s}(2\beta +n-1)\) are rational by Proposition 5.4, whereas the terms \(\mathbb {J}_{n-s,k}(\beta +s/2)\) are rational by induction assumption, for all \(s\in \{1,\ldots ,n-k\}\). \(\square \)

Next we are going to analyse \(\mathbb {I}_{n,k}(\alpha )\) for even \(\alpha \ge 0\). To this end, we need the following

Lemma 5.5

Consider the integral \(T(s,p) = ( 1/\pi ) \int _{-\pi /2}^{+\pi /2} \mathrm{e}^{{\mathrm{i}}s \varphi } (\varphi /\pi )^p\, d \varphi \), where s is an even integer and \(p\ge 0\) is an integer.

  1. (a)

    If p is even, then T(sp) can be represented as \(q_0+ q_2\pi ^{-2}+q_4 \pi ^{-4}+\cdots +q_{p}\pi ^{-p}\) with rational \(q_i\)’s.

  2. (b)

    If p is odd, then T(sp) can be represented as \( {\mathrm{i}}(q_0+ q_2\pi ^{-2}+q_4 \pi ^{-4}+\cdots +q_{p-1}\pi ^{-(p-1)})/\pi \) with rational \(q_i\)’s.

Proof

For \(s=0\) the statement is trivial since \(T(0,p) = 0\) for odd p and \(T(0,p) = 2^{-p}/(p+1)\) for even p. Let \(s\ne 0\) be even. For \(p=0\) we have \(T(s,p)=0\). For integer \(p\ge 1\) the statement follows by induction using the formula

$$\begin{aligned} T(s,p) = \frac{1}{\pi {\mathrm{i}}s} \int _{-\pi /2}^{+\pi /2}(\varphi /\pi )^p \,d \mathrm{e}^{{\mathrm{i}}s \varphi } =\biggl (\frac{(\varphi /\pi )^p}{\pi {\mathrm{i}}s}\mathrm{e}^{{\mathrm{i}}s \varphi }\biggr ) \bigg |_{\varphi =-\pi /2}^{\varphi =+\pi /2} +\frac{{\mathrm{i}}p}{\pi s} T(s,p-1), \end{aligned}$$

which is obtained by partial integration. \(\square \)

Proposition 5.6

If \(\alpha \ge 0\) is even, \(n\in \mathbb {N}\), and \(k\in \{1,\ldots ,n\}\), then \(\mathbb {I}_{n,k}(\alpha )\) can be expressed in the form \(r_0+r_2\pi ^{-2} + r_4\pi ^{-4} +\cdots + r_{n-k} \pi ^{-(n-k)}\) (if \(n-k\) is even) or \(r_0+r_2\pi ^{-2} + r_4\pi ^{-4} +\cdots + r_{n-k-1}\pi ^{-(n-k-1)}\) (if \(n-k\) is odd), where the \(r_i\)’s are rational numbers.

Proof

Note that \(c_{1,({\alpha -1})/{2}}\) is a rational multiple of \(1/\pi \) by Lemma 5.1. Using Lemma 5.3 and the formula \(\sin t = (\mathrm{e}^{{\mathrm{i}}t} - \mathrm{e}^{-{\mathrm{i}}t})/(2{\mathrm{i}})\) we can write

$$\begin{aligned} \int _{-\pi /2}^\varphi \! c_{1,({\alpha -1})/{2}}(\cos \theta )^{\alpha } \,\mathrm{d}\theta= & {} a' + \frac{\varphi a''}{\pi } + \sum _{m=2,4,\ldots } \!\frac{a_m'''}{\pi } \sin m\varphi \\= & {} a' + \frac{\varphi a''}{\pi } + \!\sum _{m=\pm 2, \pm 4,\ldots } \!\frac{a_m}{\pi } {\mathrm{i}}\mathrm{e}^{{\mathrm{i}}m \varphi }, \end{aligned}$$

for some \(a',a'',a_m''',a_m\in \mathbb {Q}\). Recall that \(\alpha k\) is even and hence \(c_{1,({\alpha k - 1})/{2}}\) is a rational multiple of \(1/\pi \) by Lemma 5.1. Thus, we can write

$$\begin{aligned} c_{1,({\alpha k - 1})/{2}} (\cos \varphi )^{\alpha k}=c_{1,({\alpha k - 1})/{2}} \biggl (\frac{\mathrm{e}^{{\mathrm{i}}\varphi } + \mathrm{e}^{-{\mathrm{i}}\varphi } }{2}\biggr )^{\!\alpha k}\!=\frac{1}{\pi }\sum _{\ell = 0, \pm 2, \pm 4,\ldots }\!\! b_\ell \mathrm{e}^{{\mathrm{i}}\ell \varphi } \end{aligned}$$

with some rational numbers \(b_\ell \), where we recall the convention that the sums contain only finitely many non-zero terms. Taking everything together, we arrive at

$$\begin{aligned} \mathbb {I}_{n,k}(\alpha )=\left( {\begin{array}{c}n\\ k\end{array}}\right) \int _{-\pi /2}^{+\pi /2}&\left( \frac{1}{\pi }\sum _{\ell = 0, \pm 2, \pm 4,\ldots }\!\! b_\ell \mathrm{e}^{{\mathrm{i}}\ell \varphi } \right) \\&\times \left( a' + \frac{\varphi a''}{\pi } +\! \sum _{m=\pm 2, \pm 4,\ldots } \frac{a_m}{\pi } {\mathrm{i}}\mathrm{e}^{{\mathrm{i}}m \varphi }\right) ^{\!n-k} \mathrm{d}\varphi . \end{aligned}$$

When multiplying everything out, we obtain a representation of \(\mathbb {I}_{n,k}(\alpha )\) as a finite \(\mathbb {Q}\)-linear combination of terms of the form

$$\begin{aligned} \frac{1}{\pi }\biggl (\frac{{\mathrm{i}}}{\pi }\biggr )^{\!b} \int _{-\pi /2}^{+\pi /2} \mathrm{e}^{{\mathrm{i}}s \varphi } \biggl (\frac{\varphi }{\pi }\biggr )^{\!p} d \varphi = \biggl (\frac{{\mathrm{i}}}{\pi }\biggr )^{\!b} T(s,p), \end{aligned}$$

where s is even, \(p\ge 0\) and \(b\ge 0\) are integers with \(p+b\in \{0,\ldots , n-k\}\). If both p and b are even, then by Lemma 5.5 (a) the term is a \(\mathbb {Q}\)-linear combination of \(1,\pi ^{-2},\pi ^{-4}, \ldots , \pi ^{-(p+b)}\). If both p and b are odd, then by Lemma 5.5 (b) the term is a \(\mathbb {Q}\)-linear combination of \(1,\pi ^{-2},\pi ^{-4}, \ldots , \pi ^{-(p+b)}\). If the parities of p and b differ, then the term is purely imaginary and can be ignored since we a priori know that \(\mathbb {I}_{n,k}(\alpha )\) is real, which implies that all such terms must cancel. \(\square \)

Proof of Theorem 2.8 (b)

Let \(n\in \mathbb {N}\), \(k\in \{1,\ldots ,n\}\), and \(\beta \ge -1\) be such that \(2\beta + n\) is odd. We prove by induction that \(\mathbb {J}_{n,k}(\beta )\) can be expressed as \(q_0 + q_2 \pi ^{-2} + q_4 \pi ^{-4} + \cdots + q_{n-k} \pi ^{-(n-k)}\) (if \(n-k\) is even) or \(q_0 + q_2 \pi ^{-2} + q_4 \pi ^{-4} + \cdots + q_{n-k-1} \pi ^{-(n-k-1)}\) (if \(n-k\) is odd), where the \(q_{i}\)’s are rational. The statement is trivial for \(n=1,2,3\). Assume that, for some \(n\ge 4\), the statement has been established for \(\mathbb {J}_{m,k}(\gamma )\) with \(m\in \{1,\ldots ,n-1\}\) and \(2\gamma +m\) odd. Recall from (11) that

$$\begin{aligned} \mathbb {J}_{n,k}(\beta )=\frac{1}{2} \left( {\begin{array}{c}n\\ k\end{array}}\right) -\! \sum _{s=1}^{\lfloor ({n-k})/{2}\rfloor }\!\! \mathbb {I}_{n,n-2s}(2\beta +n-1) \mathbb {J}_{n-2s,k}(\beta +s). \end{aligned}$$

By Proposition 5.6, \(\mathbb {I}_{n,n-2s}(2\beta +n-1)\) can be expressed in the form \(r_0+r_2\pi ^{-2} + r_4\pi ^{-4} +\cdots + r_{2s} \pi ^{-2s}\) with rational \(r_i\)’s. On the other hand, by the induction assumption, we can write \(\mathbb {J}_{n-2s,k}(\beta +s)\) in the form \(q_0' + q_2' \pi ^{-2} + q_4' \pi ^{-4} + \cdots + q_{n-2s-k}' \pi ^{-(n-2s-k)}\) (if \(n-k\) is even) or \(q_0' + q_2' \pi ^{-2} + q_4' \pi ^{-4} + \cdots + q_{n-2s-k-1}' \pi ^{-(n-2s-k-1)}\) (if \(n-k\) is odd) with rational \(q_i'\)’s. Multiplying everything out, we obtain the required statement. \(\square \)