1 Introduction

In the 1840s, Steiner developed a striking decomposition for the volume of a Euclidean expansion of a polytope in \(\mathbb {R}^3\). The modern statement of Steiner’s formula describes an expansion of a compact convex set \(K\) in \(\mathbb {R}^d\):

$$\begin{aligned} \mathrm{Vol }(K + \lambda \mathsf {B}_d) = \textstyle \sum \limits _{j=0}^d \lambda ^{d-j} \cdot \mathrm{Vol }(\mathsf {B}_{d-j}) \cdot \mathcal {V}_{j}(K) \quad \text {for} \quad \lambda \ge 0. \end{aligned}$$
(1.1)

The symbol \(\mathsf {B}_j\) refers to the Euclidean unit ball in \(\mathbb {R}^j\), and \(+\) denotes the Minkowski sum. In other words, the volume of the expansion is just a polynomial whose coefficients depend on the set \(K\). The geometric functionals \(\mathcal {V}_j\) that appear in (1.1) are called Euclidean intrinsic volumes [28]. Some of these are familiar, such as the usual volume \(\mathcal {V}_d\), the surface area \(2 \, \mathcal {V}_{d-1}\), and the Euler characteristic \(\mathcal {V}_0\). They can all be interpreted as measures of content that are invariant under rigid motions and isometric embedding [35].
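To make (1.1) concrete, take \(K = [0,1]^2\), the unit square in \(\mathbb {R}^2\): then \(\mathcal {V}_2(K) = 1\) (the area), \(\mathcal {V}_1(K) = 2\) (half the perimeter), and \(\mathcal {V}_0(K) = 1\), so the Steiner polynomial is \(1 + 4\lambda + \pi \lambda ^2\). The following Python sketch, which is our illustration rather than part of the original text, checks this polynomial against a Monte Carlo estimate of the area of the expansion.

```python
import math
import random

def dist2_to_unit_square(x, y):
    """Squared Euclidean distance from (x, y) to the unit square [0,1]^2."""
    dx = max(0.0 - x, x - 1.0, 0.0)
    dy = max(0.0 - y, y - 1.0, 0.0)
    return dx * dx + dy * dy

def expanded_area_mc(lam, n=200_000, seed=1):
    """Monte Carlo estimate of Vol(K + lam * B_2) for K = [0,1]^2.

    The expansion lies inside the bounding box [-lam, 1+lam]^2, so
    rejection sampling over that box is valid.
    """
    rng = random.Random(seed)
    box = 1.0 + 2.0 * lam
    hits = 0
    for _ in range(n):
        x = rng.uniform(-lam, 1.0 + lam)
        y = rng.uniform(-lam, 1.0 + lam)
        if dist2_to_unit_square(x, y) <= lam * lam:
            hits += 1
    return box * box * hits / n

lam = 0.5
# Steiner polynomial with Vol(B_0) = 1, Vol(B_1) = 2, Vol(B_2) = pi:
steiner = 1.0 + 2.0 * 2.0 * lam + math.pi * lam ** 2
estimate = expanded_area_mc(lam)
```

The two numbers agree to Monte Carlo accuracy, as the polynomial identity (1.1) predicts.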

Beginning around 1940, researchers began to develop analogues of the Steiner formula in spherical geometry [4, 22, 23, 33, 41]. In their modern form, these results express the size of an angular expansion of a closed convex cone \(C\) in \(\mathbb {R}^d\):

$$\begin{aligned} \mathrm{Vol }\big \{ \varvec{x} \in \mathsf {S}^{d-1} : {{\mathrm{dist}}}^2(\varvec{x}, C) \le \lambda \big \} = \textstyle \sum \limits _{j=0}^d \beta _{j,d}(\lambda ) \cdot v_j(C) \quad \text {for} \quad \lambda \in [0,1]. \end{aligned}$$
(1.2)

We have written \(\mathsf {S}^{d-1}\) for the Euclidean unit sphere in \(\mathbb {R}^d\), and the functions \(\beta _{j,d}: [0, 1] \rightarrow \mathbb {R}_+\) do not depend on the cone \(C\). The geometric functionals \(v_j\) that appear in (1.2) are called conic intrinsic volumes. These quantities capture fundamental structural information about a convex cone. They are invariant under rotation; they do not depend on the embedding dimension; and they arise in many other geometric problems [37].

The intrinsic volumes of a closed convex cone \(C\) in \(\mathbb {R}^d\) satisfy several important identities [37, Thm. 6.5.5]. In particular, the numbers \(v_0(C), \dots , v_d(C)\) are nonnegative and sum to one, so they describe a probability distribution on the set \(\{ 0, 1, 2, \dots , d \}\). Thus, we can define a random variable \(V_C\) by the relations

$$\begin{aligned} \mathbb {P}\big \{ V_C = k \big \} = v_k(C) \quad \text {for each}\quad k = 0, 1, 2, \dots , d. \end{aligned}$$

This construction invites us to use probabilistic methods to study the cone \(C\).

Recent research [5, Thm. 6.1] has determined that the random variable \(V_C\) concentrates sharply about its mean value for every closed convex cone \(C\). In other words, most of the intrinsic volumes of a cone have negligible size; see Fig. 1 for a typical example. As a consequence of this phenomenon, a small number of statistics of \(V_C\) capture the salient information about the cone. For many purposes, we only need to know the mean, the variance, and the type of tail decay. This paper develops a systematic technique for collecting this kind of information.

Fig. 1

The intrinsic volumes of a circular cone. These two diagrams depict the intrinsic volumes \(v_k(C)\) of a circular cone \(C\) with angle \(\pi /6\) in \(\mathbb {R}^{64}\), computed using the formulas from [6, Ex. 4.4.8]. The intrinsic volume random variable \(V_C\) has mean \(\mathrm{\mathbb {E} }[V_C] \approx 16.5\) and variance \({{\mathrm{Var}}}[V_C] \approx 23.25\). The tails of \(V_C\) exhibit Gaussian decay near the mean and Poisson decay farther away. Left: the intrinsic volumes \(v_k(C)\) coincide with the probability mass function of \(V_C\). Right: the logarithms \(\log _{10}(v_k(C))\) of the intrinsic volumes have a quadratic profile near \(k = \mathrm{\mathbb {E} }[V_C]\), which is indicative of the Gaussian decay.

Our method depends on a generalization of the spherical Steiner formula (1.2). This result, Theorem 3.1, allows us to compute statistics of the intrinsic volume random variable \(V_C\) by passing to a geometric random variable that we can study directly. The prospect for making this type of argument is dimly visible in the spherical Steiner formula (1.2): the right-hand side can be interpreted as a moment of \(V_C\), while the left-hand side reflects the probability that a certain geometric event occurs. Our master Steiner formula provides the flexibility we need to study a wider class of statistics.

This species of argument was developed in collaboration with our colleagues Dennis Amelunxen and Martin Lotz. In our earlier joint paper [5], we used a laborious version of the technique to show that the intrinsic volumes concentrate. Here, we simplify and strengthen the method to obtain new relations for the variance of \(V_C\) and to improve the concentration inequalities.

Our work fits into a larger program that studies convex optimization with sophisticated tools from geometry. The link between conic geometry and convex optimization arises because convex cones play the same role in convex analysis that subspaces play in linear algebra [24, pp. 89–90]. Indeed, when we study the behavior of convex optimization problems with random data, the conic intrinsic volumes arise naturally; see, for example, [5, 9, 10, 13, 18, 19, 26, 27, 39, 40]. This program of research has been hindered, however, by the apparent difficulty of producing explicit bounds for conic intrinsic volumes. This obstacle motivates us to investigate the concentration properties of the intrinsic volumes of a general cone.

1.1 Roadmap

Section 2 contains the definition and basic properties of the conic intrinsic volumes. Section 3 states our master Steiner formula for convex cones, and it explains how to derive formulas for the size of an expansion of a convex cone. We begin our probabilistic analysis of the intrinsic volumes in Sect. 4. This material includes formulas and bounds for the variance and exponential moments of the intrinsic volume random variable. Section 5 continues with a probabilistic treatment of the intrinsic volumes of a product cone. We provide several detailed examples of these methods in Sect. 6. Afterward, Sect. 7 summarizes some background material about convex cones in preparation for the proof of the master Steiner formula in Sect. 8.

1.2 Notation and Basic Concepts

Before commencing with the main development, let us set notation and recall some basic facts from convex analysis. Section 7 contains a more complete discussion; we provide cross-references as needed.

We work in the Euclidean space \(\mathbb {R}^d\), equipped with the standard inner product \(\langle { \cdot },{ \cdot }\rangle \), the associated norm \(\Vert { \cdot }\Vert \), and the norm topology. The symbols \(\varvec{0}\) and \(\varvec{0}_d\) refer to the origin of \(\mathbb {R}^d\). For a point \(\varvec{x} \in \mathbb {R}^d\) and a set \(K \subset \mathbb {R}^d\), we define the distance \(\mathrm{dist }(\varvec{x}, K) := \inf \big \{ \Vert \varvec{x} - \varvec{y} \Vert : \varvec{y} \in K \big \}\).

A convex cone \(C\) is a nonempty subset of \(\mathbb {R}^d\) that satisfies

$$\begin{aligned} \tau \cdot (\varvec{x} + \varvec{y}) \in C \quad \text {for all}\quad \tau > 0\quad \text {and}\quad \varvec{x}, \varvec{y} \in C. \end{aligned}$$

We write \(\fancyscript{C}_d\) for the family of all closed convex cones in \(\mathbb {R}^d\). A cone \(C\) is polyhedral if it can be expressed as the intersection of a finite number of halfspaces:

$$\begin{aligned} C = \bigcap _{i=1}^N \big \{ \varvec{x} \in \mathbb {R}^d : \langle { \varvec{u}_i },\,{ \varvec{x} }\rangle \ge 0 \big \} \quad \text {for some}\quad \varvec{u}_i \in \mathbb {R}^d. \end{aligned}$$

For each cone \(C \in \fancyscript{C}_d\), we define the polar cone \(C^\circ \in \fancyscript{C}_d\) via the formula

$$\begin{aligned} C^\circ := \big \{ \varvec{u} \in \mathbb {R}^d : \langle { \varvec{u} },\,{ \varvec{x} }\rangle \le 0\;\text { for all} \;\;\varvec{x} \in C \big \}. \end{aligned}$$

The polar of a polyhedral cone is always polyhedral.

We introduce the metric projector \(\varvec{\Pi }_C\) onto a cone \(C \in \fancyscript{C}_d\) by the formula

$$\begin{aligned} \varvec{\Pi }_C : \mathbb {R}^d \rightarrow C \quad \text {where}\quad \varvec{\Pi }_C(\varvec{x}) := \hbox {arg min}\big \{ \Vert \varvec{x} - \varvec{y} \Vert ^2 : \varvec{y} \in C \big \}. \end{aligned}$$
(1.3)

The metric projector onto a closed convex cone is a nonnegatively homogeneous function:

$$\begin{aligned} \varvec{\Pi }_C(\tau \varvec{x}) = \tau \cdot \varvec{\Pi }_C(\varvec{x}) \quad \text {for all} \quad \tau \ge 0 \quad \text {and} \quad \varvec{x} \in \mathbb {R}^d. \end{aligned}$$

The squared norm of the metric projection is a differentiable function:

$$\begin{aligned} \nabla \Vert \varvec{\Pi }_C(\varvec{x}) \Vert ^2 = 2\, \varvec{\Pi }_C(\varvec{x}) \quad \text {for all} \quad \varvec{x} \in \mathbb {R}^d. \end{aligned}$$
(1.4)

This result follows from [32, Thm. 2.26].
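As a concrete illustration, consider the nonnegative orthant \(C = \mathbb {R}_+^d\), whose metric projector simply clips negative coordinates to zero, and whose polar cone is the nonpositive orthant. The Python sketch below is our addition rather than the paper's; it checks the decomposition \(\varvec{x} = \varvec{\Pi }_C(\varvec{x}) + \varvec{\Pi }_{C^\circ }(\varvec{x})\) with orthogonal summands, the nonnegative homogeneity, and the gradient identity (1.4) via finite differences.

```python
def proj_orthant(x):
    """Metric projection of x onto the nonnegative orthant: clip negatives."""
    return [max(t, 0.0) for t in x]

def proj_polar(x):
    """Projection onto the polar cone (the nonpositive orthant): clip positives."""
    return [min(t, 0.0) for t in x]

def sqnorm(x):
    return sum(t * t for t in x)

x = [1.5, -0.3, 0.0, -2.0, 0.7]
p, q = proj_orthant(x), proj_polar(x)

# Decomposition x = Pi_C(x) + Pi_{C^o}(x), with orthogonal pieces.
moreau_ok = all(abs(a + b - t) < 1e-12 for a, b, t in zip(p, q, x))
pythagoras_ok = abs(sqnorm(x) - sqnorm(p) - sqnorm(q)) < 1e-12

# Nonnegative homogeneity: Pi_C(tau * x) = tau * Pi_C(x) for tau >= 0.
tau = 2.5
homog_ok = proj_orthant([tau * t for t in x]) == [tau * t for t in p]

# Finite-difference check of (1.4): grad ||Pi_C(x)||^2 = 2 Pi_C(x).
h = 1e-6
grad = []
for i in range(len(x)):
    xp = list(x); xp[i] += h
    xm = list(x); xm[i] -= h
    grad.append((sqnorm(proj_orthant(xp)) - sqnorm(proj_orthant(xm))) / (2 * h))
grad_ok = all(abs(g - 2 * pi) < 1e-4 for g, pi in zip(grad, p))
```

Note that the finite-difference check passes even at the coordinate where \(x_i = 0\), reflecting the differentiability claim in (1.4).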

We conclude with the basic notation concerning probability. We write \(\mathbb {P}\) for the probability of an event and \(\mathrm{\mathbb {E} }\) for the expectation operator. The symbol \(\sim \) denotes equality of distribution. We reserve the letter \(\varvec{g}\) for a standard Gaussian vector, and \(\varvec{\theta }\) denotes a vector uniformly distributed on the sphere. The dimensions are determined by context.

2 The Intrinsic Volumes of a Convex Cone

We begin with an introduction to the conic intrinsic volumes that is motivated by the treatment in [6]. To each closed convex cone, we can assign a sequence of intrinsic volumes. For polyhedral cones, these functionals have a clear geometric meaning, so we start with the definition for this special case.

Definition 2.1

(Intrinsic volumes of a polyhedral cone) Let \(C \in \fancyscript{C}_d\) be a polyhedral cone. For \(k = 0, 1, 2, \dots , d\), the conic intrinsic volume \(v_k(C)\) is the quantity

$$\begin{aligned} v_k(C) := \mathbb {P}\big \{ \varvec{\Pi }_C(\varvec{g})\,\,\text {lies in the relative interior of a}\,k\text {-dimensional face of}\,C \big \}. \end{aligned}$$

The metric projector \(\varvec{\Pi }_C\) onto the cone is defined in (1.3), and the random vector \(\varvec{g}\) is drawn from the standard Gaussian distribution on \(\mathbb {R}^d\).
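For the nonnegative orthant \(C = \mathbb {R}_+^d\), Definition 2.1 can be evaluated by hand: \(\varvec{\Pi }_C(\varvec{g})\) lies in the relative interior of the face spanned by the coordinates where \(g_i > 0\), so the face dimension equals the number of positive entries of \(\varvec{g}\), and \(v_k(C) = \binom{d}{k} 2^{-d}\). The short Monte Carlo sketch below (ours, not from the original text) confirms this.

```python
import math
import random

def orthant_intrinsic_volumes_mc(d, n=100_000, seed=2):
    """Monte Carlo estimate of v_k for the nonnegative orthant in R^d.

    The projection of g clips negative coordinates to zero, and the face
    of the orthant containing the projection is spanned by the coordinates
    where g_i > 0, so the face dimension is the number of positive entries.
    """
    rng = random.Random(seed)
    counts = [0] * (d + 1)
    for _ in range(n):
        k = sum(1 for _ in range(d) if rng.gauss(0.0, 1.0) > 0.0)
        counts[k] += 1
    return [c / n for c in counts]

d = 6
v_mc = orthant_intrinsic_volumes_mc(d)
# Each coordinate is positive independently with probability 1/2,
# so v_k = binom(d, k) / 2^d exactly.
v_exact = [math.comb(d, k) / 2 ** d for k in range(d + 1)]
```

The empirical frequencies match the binomial probabilities, and they sum to one, in accordance with the distributional property of the intrinsic volumes.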

As explained in Sect. 7.3, we can equip the set \(\fancyscript{C}_d\) with the conic Hausdorff metric to form a compact metric space. The polyhedral cones form a dense subset of \(\fancyscript{C}_d\), so it is natural to use approximation to extend the definition of the intrinsic volumes to nonpolyhedral cones.

Definition 2.2

(Intrinsic volumes of a closed convex cone) Let \(C \in \fancyscript{C}_d\) be a closed convex cone. Consider any sequence \(( C_i )_{i \in \mathbb {N}}\) of polyhedral cones in \(\fancyscript{C}_d\) where \(C_i \rightarrow C\) in the conic Hausdorff metric. Define

$$\begin{aligned} v_k(C) := \lim _{i \rightarrow \infty } v_k(C_i) \quad \text {for} \quad k = 0, 1, 2, \dots , d. \end{aligned}$$
(2.1)

The geometric functionals \(v_k : \fancyscript{C}_d \rightarrow [0, 1]\) are called conic intrinsic volumes.

See Sect. 8.2 for a proof that the limit in (2.1) is well defined. The reader should be aware that the geometric interpretation of intrinsic volumes from Definition 2.1 breaks down for general cones because the limiting process does not preserve facial structure.

The conic intrinsic volumes have some remarkable properties. Fix the ambient dimension \(d\), and let \(C \in \fancyscript{C}_d\) be a closed convex cone in \(\mathbb {R}^d\). The intrinsic volumes are:

  (1)

    Intrinsic. The intrinsic volumes do not depend on the dimension of the space \(\mathbb {R}^d\) in which the cone \(C\) is embedded. That is, for each natural number \(r\),

    $$\begin{aligned} v_k( C \times \{\varvec{0}_r\} ) = \begin{cases} v_k(C), &{} 0 \le k \le d, \\ 0, &{} d < k \le d + r. \end{cases} \end{aligned}$$
  (2)

    Volumes. Let \(\gamma _d\) denote the standard Gaussian measure on \(\mathbb {R}^d\). Then \(v_d(C) = \gamma _d(C)\) and \(v_0(C) = \gamma _d(C^\circ )\) where \(C^\circ \) denotes the polar cone. The other intrinsic volumes, however, do not admit such a clear interpretation.

  (3)

    Rotation invariant. For each \(d \times d\) orthogonal matrix \(\varvec{Q}\), we have \(v_k(\varvec{Q} C) = v_k(C)\).

  (4)

    Continuous. If \(C_i \rightarrow C \) in the conic Hausdorff metric, then \(v_k(C_i) \rightarrow v_k(C)\).

  (5)

    A distribution. The intrinsic volumes form a probability distribution on \(\{ 0, 1, 2, \dots , d \}\). That is,

    $$\begin{aligned} v_k(C) \ge 0 \quad \text {and}\quad \textstyle \sum \limits _{j=0}^d v_j(C) = 1. \end{aligned}$$
  (6)

    Indicators of dimension for a subspace. For any \(j\)-dimensional subspace \(L_j \subset \mathbb {R}^d\), we have

    $$\begin{aligned} v_k(L_j) = \begin{cases} 1, &{} k = j, \\ 0, &{} k \ne j. \end{cases} \end{aligned}$$
  (7)

    Reversed under polarity. The intrinsic volumes of the polar cone \(C^\circ \) satisfy

    $$\begin{aligned} v_k(C^\circ ) = v_{d-k}(C). \end{aligned}$$

These claims follow from Definition 2.2 using facts from Sects. 1.2 and 7 about the geometry of convex cones.

Remark 2.3

(Notation for intrinsic volumes) The notation \(v_k\) for the \(k\)th intrinsic volume does not specify the ambient dimension. This convention is justified because the intrinsic volumes of a cone do not depend on the embedding dimension.

3 A Generalized Steiner Formula for Cones

As we have seen, the intrinsic volumes of a cone form a probability distribution. Probabilistic methods offer a powerful technique for studying the intrinsic volumes. To pursue this idea, we want access to moments and other statistics of the sequence of intrinsic volumes. We acquire this information using a general Steiner formula for cones.

3.1 The Master Steiner Formula

Let us introduce a class of Gaussian integrals. Fix a Borel measurable bivariate function \(f : \mathbb {R}_+^2 \rightarrow \mathbb {R}\). Consider the geometric functional

$$\begin{aligned} \varphi _f : \fancyscript{C}_d \rightarrow \mathbb {R}\quad \text {where}\quad \varphi _f(C) := \mathrm{\mathbb {E} }\big [ f\big ( \Vert \varvec{\Pi }_C(\varvec{g}) \Vert ^2, \ \Vert \varvec{\Pi }_{C^\circ }(\varvec{g}) \Vert ^2 \big ) \big ]. \end{aligned}$$
(3.1)

As usual, \(\varvec{g}\in \mathbb {R}^d\) is a standard Gaussian vector, and the expectation is interpreted as a Lebesgue integral. We can develop an elegant expansion of \(\varphi _f\) in terms of the conic intrinsic volumes.

Theorem 3.1

(Master Steiner formula for cones) Let \(f : \mathbb {R}^2_+ \rightarrow \mathbb {R}\) be a Borel measurable function. Then the geometric functional \(\varphi _f\) defined in (3.1) admits the expression

$$\begin{aligned} \varphi _f(C) = \textstyle \sum \limits _{k=0}^d \varphi _f( L_k ) \cdot v_k(C) \quad {\textit{for}} \quad C \in \fancyscript{C}_d \end{aligned}$$
(3.2)

provided that all the expectations in (3.2) are finite. Here, \(L_k\) denotes a \(k\)-dimensional subspace of \(\mathbb {R}^d\) and the conic intrinsic volumes \(v_k\) are introduced in Definition 2.2.

The coefficients \(\varphi _f(L_k)\) in the expression (3.2) have an alternative form that is convenient for computations. Let \(L_k\) be an arbitrary \(k\)-dimensional subspace of \(\mathbb {R}^d\). According to the definition (3.1) of the functional \(\varphi _f\),

$$\begin{aligned} \varphi _f(L_k) = \mathrm{\mathbb {E} }\big [ f\big ( \Vert \varvec{\Pi }_{L_k}(\varvec{g})\Vert ^2, \ \Vert \varvec{\Pi }_{{L_k}^\circ }(\varvec{g})\Vert ^2 \big ) \big ]. \end{aligned}$$

The marginal property of the standard Gaussian vector \(\varvec{g}\) ensures that \(\varvec{\Pi }_{L_k}(\varvec{g})\) and \(\varvec{\Pi }_{{L_k}^\circ }(\varvec{g})\) are independent standard Gaussian vectors supported on \(L_k\) and \({L_k}^\circ \). Thus, \(\Vert \varvec{\Pi }_{L_k}(\varvec{g})\Vert ^2\) and \(\Vert \varvec{\Pi }_{{L_k}^\circ }(\varvec{g})\Vert ^2\) are independent chi-square random variables with \(k\) and \(d-k\) degrees of freedom, respectively. (By convention, a chi-square variable with zero degrees of freedom is identically zero.) In view of this fact, we have the following equivalent formulation of Theorem 3.1.

Corollary 3.2

Instate the hypotheses and notation of Theorem 3.1. Let \(\big \{ X_0, \dots , X_d \big \}\) be an independent sequence of random variables where \(X_k\) has the chi-square distribution with \(k\) degrees of freedom, and let \(\big \{ X'_{0}, \dots , X'_{d}\big \}\) be an independent copy of this sequence. Then

$$\begin{aligned} \varphi _f(C) = \textstyle \sum \limits _{k=0}^d \mathrm{\mathbb {E} }\big [ f \big (X_k^{}, \ X'_{d-k} \big ) \big ] \cdot v_k(C) \quad {\textit{for}} \quad C \in \fancyscript{C}_d. \end{aligned}$$

Corollary 3.2 has an appealing probabilistic consequence. For every cone \(C \in \fancyscript{C}_d\), the random variable \(\Vert \varvec{\Pi }_C(\varvec{g})\Vert ^2\) is a mixture of chi-square random variables \(X_0, \dots , X_d\) where the mixture coefficients \(v_k(C)\) are determined solely by the cone. This fact corresponds with a classical observation from the field of constrained statistical inference, where the random variate \(\Vert \varvec{\Pi }_C(\varvec{g})\Vert ^2\) is known as the chi-bar-squared statistic [36, Sec. 3.4].
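A simulation makes the mixture interpretation tangible for the orthant \(C = \mathbb {R}_+^d\), where \(v_k(C) = \binom{d}{k} 2^{-d}\). The sketch below, which is our illustration rather than part of the text, samples \(\Vert \varvec{\Pi }_C(\varvec{g})\Vert ^2\) directly and also samples the chi-bar-squared mixture (draw \(K \sim V_C\), then a chi-square variable with \(K\) degrees of freedom), and compares the first two moments of the two samples.

```python
import random

def sample_proj_sqnorm(d, rng):
    """||Pi_C(g)||^2 for the nonnegative orthant: sum of squared positive parts."""
    return sum(g * g for g in (rng.gauss(0.0, 1.0) for _ in range(d)) if g > 0)

def sample_chibar(d, rng):
    """Sample the mixture directly: K ~ Binomial(d, 1/2) (the law of V_C for
    the orthant), then a chi-square variable with K degrees of freedom."""
    k = sum(1 for _ in range(d) if rng.random() < 0.5)
    return sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(k))

rng = random.Random(3)
d, n = 8, 200_000
a = [sample_proj_sqnorm(d, rng) for _ in range(n)]
b = [sample_chibar(d, rng) for _ in range(n)]

mean_a = sum(a) / n
mean_b = sum(b) / n
var_a = sum((t - mean_a) ** 2 for t in a) / n
var_b = sum((t - mean_b) ** 2 for t in b) / n
```

Both samples have mean \(d/2\) and variance \(5d/4\): the mixture computation gives \(\mathrm{\mathbb {E} }[2K] + {{\mathrm{Var}}}[K] = d + d/4\), matching a direct coordinatewise calculation for the projection.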

We outline the proof of Theorem 3.1 in Sects. 7 and 8. The argument involves techniques that are already familiar to experts. Many of the core ideas appear in McMullen’s influential paper [28]. A similar approach has been used in hyperbolic integral geometry [34, p. 242]; see also the proof of [37, Thm. 6.5.1]. The main technical novelty is our method for showing that the conic intrinsic volumes are continuous with respect to the conic Hausdorff metric.

3.2 How Big is the Expansion of a Cone?

The Euclidean Steiner formula (1.1) describes the volume of a Euclidean expansion of a compact convex set. Although it may not be obvious from the identity (3.2), the master Steiner formula contains information about the volume of an expansion of a convex cone. This section explains the connection, which justifies our decision to call Theorem 3.1 a Steiner formula.

First, we argue that there is a simple expression for the Gaussian measure of a Euclidean expansion of a convex cone.

Proposition 3.3

(Gaussian Steiner formula) For each cone \(C \in \fancyscript{C}_d\) and each number \(\lambda \ge 0\),

$$\begin{aligned} \gamma _d(C + \sqrt{\lambda }\, \mathsf {B}_d) = \mathbb {P}\big \{ {{\mathrm{dist}}}^2( \varvec{g}, C ) \le \lambda \big \} = \textstyle \sum \limits _{k=0}^d \mathbb {P}\big \{X_{d-k} \le \lambda \big \} \cdot v_k(C) \end{aligned}$$
(3.3)

where \(\gamma _d\) is the standard Gaussian measure on \(\mathbb {R}^d\) and \(\mathsf {B}_d\) is the unit ball in \(\mathbb {R}^d\). The random variable \(X_j\) follows the chi-square distribution with \(j\) degrees of freedom.

Proof

The first identity in (3.3) is immediate. For the second, we appeal to Corollary 3.2 with the function

$$\begin{aligned} f(a,b) = \begin{cases} 1, &{} b \le \lambda , \\ 0, &{} \text {otherwise.} \end{cases} \end{aligned}$$

This step yields the relation

$$\begin{aligned} \mathbb {P}\big \{ {{\mathrm{dist}}}^2(\varvec{g}, C) \le \lambda \big \}&= \mathbb {P}\big \{ \Vert \varvec{\Pi }_{C^\circ }(\varvec{g})\Vert ^2 \le \lambda \big \}\\&= \textstyle \sum \limits _{k=0}^d \mathrm{\mathbb {E} }\big [ f \big ( X_k, X'_{d-k} \big ) \big ] \cdot v_k(C)\\&= \textstyle \sum \limits _{k=0}^d \mathbb {P}\big \{ X'_{d-k} \le \lambda \big \} \cdot v_k(C). \end{aligned}$$

The first equality depends on the representation (7.2) of the distance to a cone in terms of the metric projector onto the polar cone. The result (3.3) follows because \(X'_{d-k}\) has the same distribution as \(X_{d-k}\). \(\square \)

We can also establish the spherical Steiner formula (1.2) as a consequence of Theorem 3.1 by replacing the Gaussian vector \(\varvec{g}\) in Proposition 3.3 with a random vector \(\varvec{\theta }\) that is uniformly distributed on the Euclidean unit sphere. This strategy leads to an expression for the proportion of the sphere subtended by an angular expansion of the cone.

Proposition 3.4

(Spherical Steiner formula) For each cone \(C \in \fancyscript{C}_d\) and each number \(\lambda \in [0, 1]\), it holds that

$$\begin{aligned} \mathbb {P}\big \{ {{\mathrm{dist}}}^2(\varvec{\theta }, C) \le \lambda \big \} = \textstyle \sum \limits _{k=0}^d \mathbb {P}\big \{ B_{d-k,d} \le \lambda \big \} \cdot v_k(C). \end{aligned}$$
(3.4)

The random vector \(\varvec{\theta }\) is drawn from the uniform distribution on the unit sphere \(\mathsf {S}^{d-1}\) in \(\mathbb {R}^d\), and the random variable \(B_{j, d}\) follows the \(\textsc {beta}\big (\tfrac{1}{2}j, \ \tfrac{1}{2}(d-j) \big )\) distribution.

Proof

When \(\lambda = 1\), both sides of (3.4) equal one. For \(\lambda < 1\), we convert the spherical variable to a Gaussian using the relation \(\varvec{\theta } \sim \varvec{g} / \Vert \varvec{g}\Vert \). It follows that

$$\begin{aligned} \mathbb {P}\big \{ {{\mathrm{dist}}}^2(\varvec{\theta }, C) \le \lambda \big \} = \mathbb {P}\big \{ \Vert \varvec{\Pi }_{C^\circ }(\varvec{g})\Vert ^2 \le \lambda (1 - \lambda )^{-1} \cdot \Vert \varvec{\Pi }_{C}(\varvec{g})\Vert ^2 \big \}. \end{aligned}$$

This identity depends on the representation (7.2) of the distance, the nonnegative homogeneity of the metric projector \(\varvec{\Pi }_{C^\circ }\), and the Pythagorean relation (7.3). Apply Corollary 3.2 with the function

$$\begin{aligned} f(a,b) = \begin{cases} 1, &{} b \le \lambda (1-\lambda )^{-1} a, \\ 0, &{} \text {otherwise}. \end{cases} \end{aligned}$$

To finish, we recall the geometric interpretation [7] of the beta random variable: \(B_{j,d} \sim \Vert \varvec{\Pi }_{L_j}(\varvec{\theta })\Vert ^2\) where \(L_j\) is a \(j\)-dimensional subspace of \(\mathbb {R}^d\). \(\square \)

Neither the Gaussian Steiner formula (3.3) nor the spherical Steiner formula (3.4) is new. The spherical formula is classical [4], while the Gaussian formula has antecedents in the statistics literature [36, 38]. There is novelty, however, in our method of condensing both results from the master Steiner formula (3.2).
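The spherical Steiner formula (3.4) can be checked numerically for the orthant \(C = \mathbb {R}_+^d\). In the sketch below (our addition, not from the original text), the left-hand side is estimated by sampling \(\varvec{\theta } \sim \varvec{g}/\Vert \varvec{g}\Vert \), and the right-hand side uses \(v_k(C) = \binom{d}{k} 2^{-d}\) together with sampled beta variables \(B_{j,d} \sim \textsc{beta}\big (\tfrac{1}{2}j, \tfrac{1}{2}(d-j)\big )\); the degenerate cases with zero degrees of freedom are handled separately.

```python
import math
import random

rng = random.Random(4)
d, lam, n = 6, 0.4, 200_000

def dist2_sphere_to_orthant(rng, d):
    """dist^2(theta, C) for theta uniform on the sphere and C the orthant:
    theta ~ g / ||g||, and the squared distance is the squared norm of the
    negative part of theta (the projection onto the polar cone)."""
    g = [rng.gauss(0.0, 1.0) for _ in range(d)]
    s = math.sqrt(sum(t * t for t in g))
    return sum((t / s) ** 2 for t in g if t < 0)

lhs = sum(1 for _ in range(n) if dist2_sphere_to_orthant(rng, d) <= lam) / n

def beta_cdf_mc(j, d, lam, rng, m=20_000):
    """Monte Carlo estimate of P{B_{j,d} <= lam}, B_{j,d} ~ Beta(j/2, (d-j)/2)."""
    if j == 0:
        return 1.0                           # B_{0,d} is identically zero
    if j == d:
        return 1.0 if lam >= 1 else 0.0      # B_{d,d} is identically one
    return sum(1 for _ in range(m)
               if rng.betavariate(j / 2, (d - j) / 2) <= lam) / m

# Right-hand side of (3.4) with v_k = binom(d,k)/2^d for the orthant.
rhs = sum(math.comb(d, k) / 2 ** d * beta_cdf_mc(d - k, d, lam, rng)
          for k in range(d + 1))
```

The two estimates agree to sampling accuracy, as (3.4) requires.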

4 Probabilistic Analysis of Intrinsic Volumes

In this section, we apply probabilistic methods to the conic intrinsic volumes. Our main tool is the master Steiner formula, Theorem 3.1, which we use repeatedly to convert statements about the intrinsic volumes of a cone into statements about the projection of a Gaussian vector onto the cone. We may then apply methods from Gaussian analysis to study this random variable.

4.1 The Intrinsic Volume Random Variable

Definitions 2.1 and 2.2 make it clear that the intrinsic volumes form a probability distribution. This observation suggests that it would be fruitful to analyze the intrinsic volumes using techniques from probability. We begin with the key definition.

Definition 4.1

(Intrinsic volume random variable) Let \(C \in \fancyscript{C}_d\) be a closed convex cone. The intrinsic volume random variable \(V_C\) has the distribution

$$\begin{aligned} \mathbb {P}\big \{ V_C = k \big \} = v_k(C) \quad \text {for} \quad k = 0,1,2, \dots , d. \end{aligned}$$

Notice that the intrinsic volume random variable of a cone \(C \in \fancyscript{C}_d\) and its polar have a tight relationship:

$$\begin{aligned} \mathbb {P}\big \{ V_{C^\circ } = k \big \} = v_k(C^\circ ) = v_{d-k}(C)= \mathbb {P}\big \{ V_C = d - k \big \} \end{aligned}$$

because polarity reverses the sequence of intrinsic volumes. In other words, \(V_{C^\circ } \sim d - V_C\).

4.2 The Statistical Dimension of a Cone

The expected value of the intrinsic volume random variable \(V_C\) has a distinguished place in the theory because \(V_C\) concentrates sharply about this point. In anticipation of this result, we glorify the expectation of \(V_C\) with its own name and notation.

Definition 4.2

(Statistical dimension [5, Sec. 5.3]) The statistical dimension \(\delta (C)\) of a cone \(C \in \fancyscript{C}_d\) is the quantity

$$\begin{aligned} \delta (C) := \mathrm{\mathbb {E} }[ V_C ] = \textstyle \sum \limits _{k=0}^d k \, v_k(C). \end{aligned}$$

The statistical dimension of a cone really is a measure of its dimension. In particular,

$$\begin{aligned} \delta ( L ) = \dim ( L ) \quad \text {for each subspace}\; L \subset \mathbb {R}^d. \end{aligned}$$
(4.1)

In fact, the statistical dimension is the canonical extension of the dimension of a subspace to the class of convex cones [5, Sec. 5.3]. By this, we mean that the statistical dimension is the only rotation invariant, continuous, localizable valuation on \(\fancyscript{C}_d\) that satisfies (4.1). See [37, p. 254 and Thm. 6.5.4] for further information about the unexplained technical terms.

The statistical dimension interacts beautifully with the polarity operation. In particular,

$$\begin{aligned} \delta (C) + \delta (C^\circ ) = \mathrm{\mathbb {E} }[ V_C ] + \mathrm{\mathbb {E} }[ V_{C^\circ } ] = \mathrm{\mathbb {E} }[ V_C ] + \mathrm{\mathbb {E} }[ d - V_C ] = d. \end{aligned}$$

This formula allows us to evaluate the statistical dimension for an important class of cones. A closed convex cone \(C\) is self-dual when it satisfies the identity \(C = - C^\circ \). Examples include the nonnegative orthant, the second-order cone, and the cone of positive-semidefinite matrices. We have the identity

$$\begin{aligned} \delta (C) = \tfrac{1}{2}d \quad \text {for a self-dual cone}\;\; C \in \fancyscript{C}_d. \end{aligned}$$
(4.2)

The statistical dimension of a cone can be expressed in terms of the projection of a standard Gaussian vector onto the cone [5, Prop. 5.11]. The master Steiner formula gives an easy proof of this result.

Proposition 4.3

(Statistical dimension) Let \(C \in \fancyscript{C}_d\) be a closed convex cone. Then

$$\begin{aligned} \delta (C) = \mathrm{\mathbb {E} }\big [ \Vert \varvec{\Pi }_C(\varvec{g})\Vert ^2 \big ]. \end{aligned}$$

The identity in Proposition 4.3 can be used to evaluate the statistical dimension for many cones of interest. See [5, Sec. 4] for details and examples.

Proof

The master Steiner formula, Corollary 3.2, with function \(f(a,b) = a\) states that

$$\begin{aligned} \mathrm{\mathbb {E} }\big [ \Vert \varvec{\Pi }_C(\varvec{g})\Vert ^2 \big ] = \textstyle \sum \limits _{k=0}^d \mathrm{\mathbb {E} }[ X_k ] \cdot v_k(C) = \textstyle \sum \limits _{k=0}^d k \, v_k(C) = \delta (C). \end{aligned}$$

Indeed, a chi-square variable \(X_k\) with \(k\) degrees of freedom has expectation \(k\). \(\square \)
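Proposition 4.3 turns the statistical dimension into a quantity that is easy to estimate by simulation. The Python sketch below, our illustration under the assumption of a coordinate subspace and the nonnegative orthant as test cones, recovers \(\delta (L) = \dim (L)\) from (4.1) and \(\delta (\mathbb {R}_+^d) = d/2\) from the self-dual identity (4.2).

```python
import random

def stat_dim_mc(proj_sqnorm, d, n=200_000, seed=5):
    """Estimate delta(C) = E ||Pi_C(g)||^2 (Proposition 4.3) by Monte Carlo."""
    rng = random.Random(seed)
    return sum(proj_sqnorm([rng.gauss(0.0, 1.0) for _ in range(d)])
               for _ in range(n)) / n

def orthant_sqnorm(g):
    """||Pi_C(g)||^2 for the nonnegative orthant."""
    return sum(t * t for t in g if t > 0)

def subspace_sqnorm_k(k):
    """||Pi_L(g)||^2 for a k-dimensional coordinate subspace."""
    return lambda g: sum(t * t for t in g[:k])

d = 10
delta_orthant = stat_dim_mc(orthant_sqnorm, d)       # expect d/2 = 5 (self-dual)
delta_plane = stat_dim_mc(subspace_sqnorm_k(3), d)   # expect dim = 3
```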

4.3 The Variance of the Intrinsic Volumes

The variance of the intrinsic volume random variable tells us how tightly the intrinsic volumes cluster around their mean value. We can find an explicit expression for the variance in terms of the projection of a Gaussian vector onto the cone.

Proposition 4.4

(Variance of the intrinsic volumes) Let \(C \in \fancyscript{C}_d\) be a closed convex cone. Then

$$\begin{aligned} {{\mathrm{Var}}}[ V_C ]&= {{\mathrm{Var}}}\big [ \Vert \varvec{\Pi }_C(\varvec{g})\Vert ^2 \big ] - 2 \delta (C) \end{aligned}$$
(4.3)
$$\begin{aligned}&= {{\mathrm{Var}}}\big [ \Vert \varvec{\Pi }_{C^\circ }(\varvec{g})\Vert ^2 \big ] - 2 \delta (C^\circ ) = {{\mathrm{Var}}}[V_{C^\circ }]. \end{aligned}$$
(4.4)

Proposition 4.4 leads to exact formulas for the variance of the intrinsic volume sequence in several interesting cases; Sect. 6 contains some worked examples.

Proof

By definition, the variance satisfies

$$\begin{aligned} {{\mathrm{Var}}}[ V_C ] = \mathrm{\mathbb {E} }\big [ V_C^2 \big ] - \big (\mathrm{\mathbb {E} }[ V_C ] \big )^2 = \mathrm{\mathbb {E} }\big [ V_C^2 \big ] - \delta (C)^2. \end{aligned}$$

To obtain the expectation of \(V_C^2\), we invoke the master Steiner formula, Corollary 3.2, with the function \(f(a,b) = a^2\) to obtain

$$\begin{aligned} \mathrm{\mathbb {E} }\big [ \Vert \varvec{\Pi }_C(\varvec{g})\Vert ^4 \big ]&= \textstyle \sum \limits _{k=0}^d \mathrm{\mathbb {E} }\big [ X_k^2 \big ] \cdot v_k(C)\\&= \textstyle \sum \limits _{k=0}^d k^2 v_k(C) + 2 \textstyle \sum \limits _{k=0}^d k \, v_k(C)\\&= \mathrm{\mathbb {E} }\big [ V_C^2 \big ] + 2\delta (C). \end{aligned}$$

Indeed, the raw second moment of a chi-square random variable \(X_k\) with \(k\) degrees of freedom equals \(k^2 + 2k\). Combine these two displays to reach

$$\begin{aligned} {{\mathrm{Var}}}[ V_C ]&= \mathrm{\mathbb {E} }\big [ \Vert \varvec{\Pi }_C(\varvec{g})\Vert ^4 \big ] - \delta (C)^2 - 2 \delta (C)\\&= \mathrm{\mathbb {E} }\big [ \Vert \varvec{\Pi }_C(\varvec{g})\Vert ^4 \big ] - \big ( \mathrm{\mathbb {E} }\big [ \Vert \varvec{\Pi }_C(\varvec{g})\Vert ^2 \big ] \big )^2 - 2 \delta (C), \end{aligned}$$

where the second identity follows from Proposition 4.3. Identify the variance of \(\Vert \varvec{\Pi }_C(\varvec{g})\Vert ^2\) to complete the proof of (4.3). To establish (4.4), note that \({{\mathrm{Var}}}[ V_C ] = {{\mathrm{Var}}}[ d - V_C ] = {{\mathrm{Var}}}[ V_{C^\circ } ]\), and then apply (4.3) to the random variable \(V_{C^\circ }\). \(\square \)
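For the orthant \(C = \mathbb {R}_+^d\), the intrinsic volume random variable is binomial with parameters \(d\) and \(\tfrac{1}{2}\), so \({{\mathrm{Var}}}[V_C] = d/4\) exactly. The sketch below, which is our addition, confirms that the right-hand side of (4.3) reproduces this value: a coordinatewise calculation gives \({{\mathrm{Var}}}\big [\Vert \varvec{\Pi }_C(\varvec{g})\Vert ^2\big ] = 5d/4\) for the orthant, and \(5d/4 - 2 \cdot d/2 = d/4\).

```python
import random

rng = random.Random(6)
d, n = 8, 300_000

# For the nonnegative orthant, V_C ~ Binomial(d, 1/2), so Var[V_C] = d/4.
var_vc_exact = d / 4

# Estimate the right-hand side of (4.3): Var ||Pi_C(g)||^2 - 2 delta(C).
samples = []
for _ in range(n):
    samples.append(sum(g * g for g in (rng.gauss(0.0, 1.0) for _ in range(d))
                       if g > 0))
mean = sum(samples) / n                      # estimates delta(C) = d/2
var = sum((s - mean) ** 2 for s in samples) / n
var_vc_from_formula = var - 2 * mean
```

Note that \(d/4 \le 2 \cdot (d/2)\), so this example is consistent with the variance bound of Theorem 4.5 as well.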

4.4 A Bound for the Variance of the Intrinsic Volumes

Proposition 4.4 also allows us to produce a general bound on the variance of the intrinsic volumes of a cone.

Theorem 4.5

(Variance bound for intrinsic volumes) Let \(C \in \fancyscript{C}_d\) be a closed convex cone. Then

$$\begin{aligned} {{\mathrm{Var}}}[ V_C ] \le 2 \, \big ( \delta (C) \wedge \delta (C^\circ ) \big ). \end{aligned}$$

The operator \(\wedge \) returns the minimum of two numbers.

The example in Sect. 6.3 demonstrates that the constant two in Theorem 4.5 cannot be reduced in general.

Proof

To bound the variance of \(V_C\), we plan to invoke the Gaussian Poincaré inequality [15, Thm. 1.6.4] to control the variance of \(\Vert \varvec{\Pi }_C(\varvec{g})\Vert ^2\). This inequality states that

$$\begin{aligned} {{\mathrm{Var}}}[ H(\varvec{g}) ] \le \mathrm{\mathbb {E} }\big [ \Vert \nabla H(\varvec{g}) \Vert ^2 \big ] \end{aligned}$$

for any function \(H : \mathbb {R}^d \rightarrow \mathbb {R}\) whose gradient is square-integrable with respect to the standard Gaussian measure. We apply this result to the function

$$\begin{aligned} H(\varvec{x}) = \Vert \varvec{\Pi }_C(\varvec{x})\Vert ^2 \quad \text {with}\quad \Vert \nabla H(\varvec{x})\Vert ^2 = 4 \Vert \varvec{\Pi }_C(\varvec{x}) \Vert ^2. \end{aligned}$$

The gradient calculation is justified by (1.4). We determine that

$$\begin{aligned} {{\mathrm{Var}}}\big [ \Vert \varvec{\Pi }_C(\varvec{g})\Vert ^2 \big ] \le 4 \mathrm{\mathbb {E} }\big [ \Vert \varvec{\Pi }_C(\varvec{g})\Vert ^2 \big ] = 4 \delta (C), \end{aligned}$$

where the second identity follows from Proposition 4.3. Introduce this inequality into (4.3) to see that \({{\mathrm{Var}}}[V_C] \le 2\delta (C)\). We can apply the same argument to see that

$$\begin{aligned} {{\mathrm{Var}}}\big [ \Vert \varvec{\Pi }_{C^\circ }(\varvec{g})\Vert ^2 \big ] \le 4 \delta (C^\circ ). \end{aligned}$$

Substitute this bound into (4.4) to conclude that \({{\mathrm{Var}}}[V_C] \le 2 \delta (C^\circ )\). \(\square \)

In principle, a random variable taking values in \(\{0,1,2, \dots , d\}\) can have variance as large as \(d^2/4\): consider a random variable that places half of its mass at \(0\) and half at \(d\). In contrast, Theorem 4.5 tells us that the variance of the intrinsic volume random variable \(V_C\) cannot exceed \(d\) for any cone \(C\). This observation has consequences for the tail behavior of \(V_C\). Indeed, Chebyshev’s inequality implies that

$$\begin{aligned} \mathbb {P}\big \{ \vert { V_C - \delta (C) }\vert > \lambda \sqrt{\delta (C)} \big \} \le \frac{{{\mathrm{Var}}}[V_C]}{\lambda ^2 \delta (C)} \le \frac{2}{\lambda ^2}. \end{aligned}$$

That is, most of the mass of \(V_C\) is located near the statistical dimension.

4.5 Exponential Moments of the Intrinsic Volumes

In the previous section, we discovered that the intrinsic volume random variable \(V_C\) is often close to its mean value. This observation suggests that \(V_C\) might exhibit stronger concentration. A standard method for proving concentration inequalities for a random variable is to calculate its exponential moments. The master Steiner formula allows us to accomplish this task.

Proposition 4.6

(Exponential moments of the intrinsic volumes) Let \(C \in \fancyscript{C}_d\) be a closed convex cone. For each parameter \(\eta \in \mathbb {R}\),

$$\begin{aligned} \mathrm{\mathbb {E} }{} \mathrm {e}^{\eta V_C} = \mathrm{\mathbb {E} }{} \mathrm {e}^{\xi \, \Vert \varvec{\Pi }_C(\varvec{g})\Vert ^2} \quad \mathrm{where} \quad \xi = \tfrac{1}{2}\big (1 - \mathrm {e}^{-2\eta }\big ). \end{aligned}$$

Proof

Fix a number \(\xi < \tfrac{1}{2}\). With the choice \(f(a,b) = \mathrm {e}^{\xi a}\), Corollary 3.2 shows that

$$\begin{aligned} \mathrm{\mathbb {E} }{} \mathrm {e}^{\xi \, \Vert \varvec{\Pi }_C(\varvec{g})\Vert ^2}&= \textstyle \sum \limits _{k=0}^d \mathrm{\mathbb {E} }{}\big [ \mathrm {e}^{\xi X_k} \big ] \cdot v_k(C) = \textstyle \sum \limits _{k=0}^d (1 - 2\xi )^{-k/2} v_k(C)\\&= \textstyle \sum \limits _{k=0}^d \mathrm {e}^{\eta k} v_k(C) = \mathrm{\mathbb {E} }{} \mathrm {e}^{\eta V_C}. \end{aligned}$$

We have used the familiar formula for the exponential moments of a chi-square random variable \(X_k\) with \(k\) degrees of freedom. The penultimate identity follows from the change of variables \(\eta = - \tfrac{1}{2}\log (1 - 2\xi )\), which establishes a bijection between \(\xi < \tfrac{1}{2}\) and \(\eta \in \mathbb {R}\). \(\square \)
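The chi-square moment formula and the change of variables are easy to confirm numerically. The following check is ours, not part of the original argument; the tolerance reflects Monte Carlo error.

```python
import math
import random

# Monte Carlo check of the chi-square exponential moment used in the proof:
# E exp(xi * X_k) = (1 - 2*xi)^(-k/2) for xi < 1/2, and the change of
# variables eta = -(1/2)*log(1 - 2*xi) rewrites this quantity as exp(eta * k).
random.seed(0)
k, xi, n = 3, 0.2, 400_000
mc = sum(math.exp(xi * sum(random.gauss(0.0, 1.0) ** 2 for _ in range(k)))
         for _ in range(n)) / n
closed_form = (1.0 - 2.0 * xi) ** (-k / 2)
eta = -0.5 * math.log(1.0 - 2.0 * xi)
assert abs(mc - closed_form) / closed_form < 0.02
assert abs(math.exp(eta * k) - closed_form) < 1e-12
```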

Remark 4.7

(Conic Wills functional) Proposition 4.6 leads to a geometric description of the generating function of the intrinsic volumes:

$$\begin{aligned} W_C(\lambda ) := \lambda ^d \mathrm{\mathbb {E} }\exp \Big ( \frac{1-\lambda ^2}{2} \cdot {{\mathrm{dist}}}^2(\varvec{g}, C) \Big ) = \textstyle \sum \limits _{k=0}^d \lambda ^k v_k(C) \quad \text {for} \quad \lambda > 0. \end{aligned}$$
(4.5)

To see why this is true, use the representation (7.2) of the distance, and apply Proposition 4.6 with \(\eta = -\log \lambda \) to confirm that

$$\begin{aligned} W_C(\lambda ) = \lambda ^d \mathrm{\mathbb {E} }\exp \Big ( \frac{1 - \lambda ^2}{2} \cdot \Vert \varvec{\Pi }_{C^\circ }(\varvec{g})\Vert ^2 \Big ) = \lambda ^d \textstyle \sum \limits _{k=0}^d \lambda ^{-k} v_k(C^\circ ) = \textstyle \sum \limits _{k=0}^d \lambda ^{k} v_k(C). \end{aligned}$$

We have applied the fact that polarity reverses intrinsic volumes to reindex the sum. The function \(W_C\) can be viewed as a conic analog of the Wills functional [42] from Euclidean geometry.
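As an illustration (ours, not part of the original argument), we can verify (4.5) by Monte Carlo for the nonnegative orthant \(C = \mathbb {R}_+^d\): its intrinsic volumes are binomial (see Sect. 6.1), so the right-hand side of (4.5) equals \(((1+\lambda )/2)^d\), while \({{\mathrm{dist}}}^2(\varvec{g}, C) = \sum _i \min (g_i, 0)^2\).

```python
import math
import random

# Monte Carlo check of the conic Wills functional (4.5) for C = R_+^d.
# dist^2(g, C) = sum_i min(g_i, 0)^2, and the binomial intrinsic volumes give
# sum_k lambda^k v_k(C) = ((1 + lambda) / 2)^d.
random.seed(1)
d, lam, n = 5, 0.8, 200_000
total = 0.0
for _ in range(n):
    dist2 = sum(min(random.gauss(0.0, 1.0), 0.0) ** 2 for _ in range(d))
    total += math.exp(0.5 * (1.0 - lam ** 2) * dist2)
wills_mc = lam ** d * total / n
wills_exact = ((1.0 + lam) / 2.0) ** d
assert abs(wills_mc - wills_exact) / wills_exact < 0.02
```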

4.6 A Bound for the Exponential Moments of the Intrinsic Volumes

Proposition 4.6 allows us to obtain an excellent bound for the exponential moments of \(V_C\). In the next section, we use this result to develop concentration inequalities for the intrinsic volumes.

Theorem 4.8

(Exponential moment bound for intrinsic volumes) Let \(C \in \fancyscript{C}_d\) be a closed convex cone. For each parameter \(\eta \in \mathbb {R}\),

$$\begin{aligned} \mathrm{\mathbb {E} }\mathrm {e}^{\eta (V_C - \delta (C))}&\le \exp \Big ( \frac{\mathrm {e}^{2\eta } - 2 \eta - 1}{2} \cdot \delta (C) \Big ) , \end{aligned}$$
(4.6)

and

$$\begin{aligned} \mathrm{\mathbb {E} }\mathrm {e}^{\eta (V_C - \delta (C))}&\le \exp \Big ( \frac{\mathrm {e}^{-2\eta } + 2\eta - 1}{2} \cdot \delta (C^\circ ) \Big ). \end{aligned}$$
(4.7)

The major technical challenge is to bound the exponential moments of the random variable \(\Vert \varvec{\Pi }_C(\varvec{g})\Vert ^2\). The following lemma provides a sharp estimate for the exponential moments. It improves on an earlier result [5, Sublem. D.3].

Lemma 4.9

Let \(C \in \fancyscript{C}_d\) be a closed convex cone. For each parameter \(\xi < \tfrac{1}{2}\),

$$\begin{aligned} \mathrm{\mathbb {E} }\mathrm {e}^{\xi \, ( \Vert \varvec{\Pi }_C(\varvec{g})\Vert ^2 - \delta (C) )} \le \exp \Big ( \frac{2\xi ^2 \delta (C)}{1 - 2\xi } \Big ). \end{aligned}$$

Proof

Define the zero-mean random variable

$$\begin{aligned} Z := \Vert \varvec{\Pi }_C(\varvec{g})\Vert ^2 - \delta (C). \end{aligned}$$

Introduce the moment generating function \(m(\xi ) := \mathrm{\mathbb {E} }{} \mathrm {e}^{\xi Z}\). Our aim is to bound \(m(\xi )\). Before we begin, it is helpful to note a few properties of the moment generating function. First, the derivative satisfies \(m'(\xi ) = \mathbb {E} \big [ Z \mathrm {e}^{\xi Z} \big ]\) whenever \(\xi < \tfrac{1}{2}\). By direct calculation, \(\log m(0) = 0\). Furthermore, l’Hôpital’s rule shows that \(\lim _{\xi \rightarrow 0} \xi ^{-1} \log m(\xi ) = 0\) because the random variable \(Z\) has zero mean.

The argument is based on the Gaussian logarithmic Sobolev inequality [15, Thm. 1.6.1]. One version of this result states that

$$\begin{aligned} \mathrm{\mathbb {E} }\big [ H(\varvec{g} ) \cdot \mathrm {e}^{H(\varvec{g})} \big ] - \mathrm{\mathbb {E} }\big [ \mathrm {e}^{H(\varvec{g})} \big ] \log {} \mathrm{\mathbb {E} }\big [ \mathrm {e}^{H(\varvec{g})} \big ] \le \frac{1}{2} \mathrm{\mathbb {E} }\big [ \Vert \nabla H(\varvec{g})\Vert ^2 \cdot \mathrm {e}^{H(\varvec{g})} \big ] \end{aligned}$$
(4.8)

for any differentiable function \(H : \mathbb {R}^d \rightarrow \mathbb {R}\) such that the expectations in (4.8) are finite. We apply this result to the function

$$\begin{aligned} H(\varvec{x}) = \xi \, \big [ \Vert \varvec{\Pi }_C(\varvec{x})\Vert ^2 - \delta (C) \big ] \quad \text {with}\quad \Vert \nabla H(\varvec{x})\Vert ^2 = 4 \xi ^2 \Vert \varvec{\Pi }_C(\varvec{x})\Vert ^2. \end{aligned}$$

The gradient calculation is justified by (1.4). Notice that

$$\begin{aligned} H(\varvec{g}) = \xi Z \quad \text {and}\quad \Vert \nabla H(\varvec{g})\Vert ^2 = 4\xi ^2(Z + \delta (C)). \end{aligned}$$

Therefore, the logarithmic Sobolev inequality (4.8) delivers the relation

$$\begin{aligned} \xi \cdot \mathrm{\mathbb {E} }\big [ Z \mathrm {e}^{\xi Z} \big ] - \mathrm{\mathbb {E} }\big [ \mathrm {e}^{\xi Z} \big ] \log {} \mathrm{\mathbb {E} }\big [ \mathrm {e}^{\xi Z} \big ] \le 2 \xi ^2 \cdot \mathrm{\mathbb {E} }\big [ Z\mathrm {e}^{\xi Z} \big ] + 2\xi ^2 \delta (C) \cdot \mathrm{\mathbb {E} }\big [ \mathrm {e}^{\xi Z} \big ] \quad \text {for} \quad \xi < \tfrac{1}{2}. \end{aligned}$$

We can rewrite the last display as a differential inequality for the moment generating function:

$$\begin{aligned} \xi m'(\xi ) - m(\xi ) \log m(\xi ) \le 2\xi ^2 m'(\xi ) + 2\delta (C) \cdot \xi ^2 m(\xi ) \quad \text {for}\quad \xi < \tfrac{1}{2}. \end{aligned}$$
(4.9)

The requirement on \(\xi \) is necessary and sufficient to ensure that \(m(\xi )\) and \(m'(\xi )\) are finite. To complete the proof, we just need to solve this differential inequality.

We follow the argument from [13, Thm. 5]. Divide the inequality (4.9) by the positive number \(\xi ^2 m(\xi )\) to reach

$$\begin{aligned} \frac{1}{\xi } \cdot \frac{m'(\xi )}{m(\xi )} - \frac{1}{\xi ^2} \log m(\xi ) \le 2 \cdot \frac{m'(\xi )}{m(\xi )} + 2\delta (C) \quad \text {for}\quad \xi \in (-\infty , 0) \cup \big (0, \tfrac{1}{2}\big ). \end{aligned}$$

The left- and right-hand sides of this relation are exactly integrable:

$$\begin{aligned} \frac{\mathrm {d}{}}{\mathrm {d}{s}} \Big [ \frac{1}{s} \log m(s) \Big ] \le 2 \cdot \frac{\mathrm {d}{}}{\mathrm {d}{s}} \Big [ \log m(s) + 2\delta (C) \cdot s\Big ] \quad \text {for}\quad s \in (-\infty , 0) \cup \big (0, \tfrac{1}{2}\big ). \end{aligned}$$
(4.10)

To continue, we first consider the case \(0 < \xi < \tfrac{1}{2}\). Integrate the inequality (4.10) over the interval \(s \in [0, \xi ]\) using the boundary conditions \(\log m(0) = 0\) and \(\lim _{\xi \rightarrow 0} \xi ^{-1} \log m(\xi ) = 0\). This step yields

$$\begin{aligned} \frac{1}{\xi } \log m(\xi ) \le 2 \log m(\xi ) + 2\delta (C) \cdot \xi \quad \text {for} \quad 0 < \xi < \tfrac{1}{2}. \end{aligned}$$

Solve this relation for the moment generating function \(m\) to obtain the bound

$$\begin{aligned} m(\xi ) \le \exp \Big ( \frac{2\xi ^2 \delta (C)}{1 - 2\xi } \Big ) \quad \text {for}\quad 0 \le \xi < \tfrac{1}{2}. \end{aligned}$$
(4.11)

The boundary case \(\xi = 0\) follows from a direct calculation. Next, we address the situation where \(\xi < 0\). Integrating (4.10) over the interval \([\xi , 0]\), we find that

$$\begin{aligned} - \frac{1}{\xi } \log m(\xi ) \le - 2 \log m(\xi ) - 2 \delta (C) \cdot \xi \quad \text {for}\quad \xi < 0. \end{aligned}$$

Solve this inequality for \(m(\xi )\) to see that the bound (4.11) also holds in the range \(\xi < 0\). This observation completes the proof. \(\square \)
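When \(C\) is a \(j\)-dimensional subspace, \(\Vert \varvec{\Pi }_C(\varvec{g})\Vert ^2\) follows a chi-square distribution with \(j\) degrees of freedom and \(\delta (C) = j\), so the lemma asserts the scalar inequality \(-\tfrac{1}{2} \log (1-2\xi ) - \xi \le 2\xi ^2/(1-2\xi )\). A quick grid check (ours, in Python) confirms it:

```python
import math

# Lemma 4.9 for a j-dimensional subspace: log E exp(xi*(chi2_j - j)) equals
# -(j/2)*log(1 - 2*xi) - j*xi. Dividing the asserted bound by j leaves the
# scalar inequality -(1/2)*log(1 - 2*xi) - xi <= 2*xi^2 / (1 - 2*xi).
grid = [-5.0 + i * (5.45 / 5000) for i in range(5000)]   # grid inside (-inf, 1/2)
worst = max((-0.5 * math.log(1.0 - 2.0 * xi) - xi) - 2.0 * xi ** 2 / (1.0 - 2.0 * xi)
            for xi in grid)
assert worst <= 1e-12
```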

With Lemma 4.9 at hand, we quickly reach Theorem 4.8.

Proof of Theorem 4.8

We begin with the statement from Proposition 4.6. Adding and subtracting multiples of \(\delta (C)\) in the exponent, we obtain the relation

$$\begin{aligned} \mathrm{\mathbb {E} }{} \mathrm {e}^{\eta (V_C - \delta (C))} = \mathrm {e}^{(\xi - \eta ) \, \delta (C)} \cdot \mathrm{\mathbb {E} }{} \mathrm {e}^{{\xi \, (\Vert \varvec{\Pi }_C(\varvec{g})\Vert ^2 - \delta (C))}}, \end{aligned}$$

where \(\xi = \tfrac{1}{2}\big (1 - \mathrm {e}^{-2\eta } \big ) < \tfrac{1}{2}\). Lemma 4.9 controls the moment generating function on the right-hand side:

$$\begin{aligned} \mathrm{\mathbb {E} }{} \mathrm {e}^{\eta (V_C - \delta (C))} \le \mathrm {e}^{(\xi - \eta ) \, \delta (C)} \cdot \exp \Big ( \frac{2\xi ^2 \delta (C)}{1 - 2\xi } \Big ). \end{aligned}$$

By a marvelous coincidence, the terms in the exponent collapse into a compact form:

$$\begin{aligned} \xi - \eta + \frac{2\xi ^2}{1 - 2\xi } = \frac{\mathrm {e}^{2 \eta } - 2 \eta -1 }{2}. \end{aligned}$$

Combine the last two displays to finish the proof of (4.6). To obtain the second formula (4.7), note that

$$\begin{aligned} \mathrm{\mathbb {E} }{} \mathrm {e}^{\eta (V_C - \delta (C))} = \mathrm{\mathbb {E} }{} \mathrm {e}^{(-\eta ) (V_{C^\circ } - \delta (C^\circ ))} \end{aligned}$$

because \(V_{C} \sim d - V_{C^\circ }\) and \(\delta (C) = d - \delta (C^\circ )\). Now apply (4.6) to the right-hand side. \(\square \)
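The “marvelous coincidence” is an elementary identity: with \(\xi = \tfrac{1}{2}\big (1 - \mathrm {e}^{-2\eta }\big )\) we have \(1 - 2\xi = \mathrm {e}^{-2\eta }\), and the left-hand side collapses. A numerical confirmation (ours, in Python):

```python
import math

# With xi = (1 - exp(-2*eta))/2 one has 1 - 2*xi = exp(-2*eta), and then
# xi - eta + 2*xi^2/(1 - 2*xi) collapses to (exp(2*eta) - 2*eta - 1)/2.
def gap(eta):
    xi = 0.5 * (1.0 - math.exp(-2.0 * eta))
    lhs = xi - eta + 2.0 * xi ** 2 / (1.0 - 2.0 * xi)
    rhs = (math.exp(2.0 * eta) - 2.0 * eta - 1.0) / 2.0
    return abs(lhs - rhs) / max(1.0, abs(rhs))

worst = max(gap(eta) for eta in [-3.0, -1.0, -0.25, 0.25, 1.0, 3.0])
assert worst < 1e-9
```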

4.7 Concentration of Intrinsic Volumes

The exponential moment bound from Theorem 4.8 allows us to obtain concentration results for the sequence of intrinsic volumes of a convex cone. The following corollary provides Bennett-type inequalities for the intrinsic volume random variable.

Corollary 4.10

(Concentration of the intrinsic volume random variable) Let \(C \in \fancyscript{C}_d\) be a closed convex cone. For each \(\lambda \ge 0\), the intrinsic volume random variable \(V_C\) satisfies the upper tail bound

$$\begin{aligned} \mathbb {P}\big \{ V_C - \delta (C) \ge \lambda \big \} \le \exp \Big ( - \frac{1}{2} \max \Big \{ \delta (C) \cdot \psi \Big (\frac{\lambda }{\delta (C)} \Big ),\ \delta (C^\circ ) \cdot \psi \Big (\frac{-\lambda }{\delta (C^\circ )}\Big ) \Big \} \Big ) \end{aligned}$$
(4.12)

and the lower tail bound

$$\begin{aligned} \mathbb {P}\big \{ V_C - \delta (C) \le - \lambda \big \} \le \exp \Big ( - \frac{1}{2} \max \Big \{ \delta (C) \cdot \psi \Big (\frac{-\lambda }{\delta (C)} \Big ),\ \delta (C^\circ ) \cdot \psi \Big (\frac{\lambda }{\delta (C^\circ )}\Big ) \Big \} \Big ). \end{aligned}$$
(4.13)

Here, \(\psi (u) := (1+u)\log (1+u) - u\) for \(u \ge - 1\), with the convention \(0 \log 0 = 0\), while \(\psi (u) := \infty \) for \(u < - 1\).

Proof

The argument, based on the Laplace transform method, is standard. For any \(\eta > 0\),

$$\begin{aligned} \mathbb {P}\big \{ V_C - \delta (C) \ge \lambda \big \} \le \mathrm {e}^{-\eta \lambda } \cdot \mathrm{\mathbb {E} }{} \mathrm {e}^{\eta \, (V_C - \delta (C))} \le \mathrm {e}^{-\eta \lambda } \cdot \exp \Big ( \frac{\mathrm {e}^{2\eta } - 2\eta - 1}{2} \cdot \delta (C) \Big ), \end{aligned}$$

where we have applied the exponential moment bound (4.6) from Theorem 4.8. Minimize the right-hand side over \(\eta > 0\) to obtain the first branch of the maximum in (4.12). The second exponential moment bound (4.7) leads to the second branch of the maximum in (4.12). The lower tail bound (4.13) follows from the same considerations. For more details about this type of proof, see [14, Sec. 2.7]. \(\square \)
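For completeness, the minimization in the first branch can be carried out explicitly (our elaboration of the standard computation): the exponent \(-\eta \lambda + \tfrac{1}{2}\big (\mathrm {e}^{2\eta } - 2\eta - 1\big ) \, \delta (C)\) is convex in \(\eta \) and is minimized at \(\eta = \tfrac{1}{2} \log \big ( 1 + \lambda /\delta (C) \big )\), and substitution gives

$$\begin{aligned} \inf _{\eta > 0} \Big \{ -\eta \lambda + \frac{\mathrm {e}^{2\eta } - 2\eta - 1}{2} \cdot \delta (C) \Big \} = -\frac{1}{2} \Big [ \big (\delta (C) + \lambda \big ) \log \Big ( 1 + \frac{\lambda }{\delta (C)} \Big ) - \lambda \Big ] = -\frac{\delta (C)}{2} \, \psi \Big ( \frac{\lambda }{\delta (C)} \Big ). \end{aligned}$$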

To understand the content of Corollary 4.10, it helps to make some further estimates. Comparing Taylor series, we find that \(\psi (u) \ge u^2/(2 + 2u/3)\). This observation leads to a weaker form of the tail bounds (4.12) and (4.13). For \(\lambda \ge 0\),

$$\begin{aligned} \mathbb {P}\big \{ V_C - \delta (C) \ge \lambda \big \}&\le \exp \Big ( \frac{-\lambda ^2/4}{(\delta (C) + \lambda / 3) \wedge (\delta (C^\circ ) - \lambda / 3)} \Big ), \\ \mathbb {P}\big \{ V_C - \delta (C) \le -\lambda \big \}&\le \exp \Big ( \frac{-\lambda ^2/4}{(\delta (C) - \lambda / 3) \wedge (\delta (C^\circ ) + \lambda / 3)} \Big ). \end{aligned}$$

This pair of inequalities reflects the fact that the left tail of \(V_C\) exhibits faster decay than the right tail when the statistical dimension \(\delta (C)\) is small; the tail behavior is reversed when \(\delta (C)\) is close to the ambient dimension. For practical purposes, it seems better to combine these estimates into a single bound:

$$\begin{aligned} \mathbb {P}\big \{ \vert {V_C - \delta (C)}\vert \ge \lambda \big \} \le 2 \exp \Big ( \frac{-\lambda ^2/4}{(\delta (C) \wedge \delta (C^\circ )) + \lambda / 3} \Big ) \quad \text {for} \quad \lambda \ge 0. \end{aligned}$$
(4.14)

This tail bound indicates that \(V_C\) looks somewhat like a Gaussian variable with mean \(\delta (C)\) and variance \(2 \, (\delta (C) \wedge \delta (C^\circ ))\) or less. This claim is consistent with Theorem 4.5. The result (4.14) improves over [5, Thm. 6.1], and we will see an example in Sect. 6.3 that saturates the bound.
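The series comparison \(\psi (u) \ge u^2/(2 + 2u/3)\) invoked above is easy to confirm numerically (our check, in Python):

```python
import math

# Check psi(u) = (1+u)*log(1+u) - u >= u^2 / (2 + 2*u/3) on a grid in (-1, 20].
grid = [-1.0 + i * (21.0 / 4000) for i in range(1, 4001)]
worst = min(((1.0 + u) * math.log(1.0 + u) - u) - u ** 2 / (2.0 + 2.0 * u / 3.0)
            for u in grid)
assert worst >= -1e-12
```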

Our analysis suggests that the intrinsic volume sequence of a convex cone \(C\) cannot exhibit very complicated behavior. Indeed, the statistical dimension \(\delta (C)\) already tells us almost everything there is to know. The only large intrinsic volumes \(v_k(C)\) are those where the index \(k\) is in the range \(\delta (C) \pm \mathrm{const} \cdot \sqrt{\delta (C) \wedge \delta (C^\circ )}\). The consequence of this result for conic integral geometry is that a cone with statistical dimension \(\delta (C)\) behaves essentially like a subspace with approximate dimension \(\delta (C)\). See [5] for more support of this point and its consequences for convex optimization.

5 Intrinsic Volumes of Product Cones

Suppose that \(C_1 \in \fancyscript{C}_{d_1}\) and \(C_2 \in \fancyscript{C}_{d_2}\) are closed convex cones. We can form another closed convex cone by taking their direct product:

$$\begin{aligned} C_1 \times C_2 := \big \{ (\varvec{x}_1, \varvec{x}_2) \in \mathbb {R}^{d_1+d_2} : \varvec{x}_1 \in C_1 \text { and }\,\varvec{x}_2 \in C_2 \big \} \in \fancyscript{C}_{d_1+d_2}. \end{aligned}$$

The probabilistic methods of the last section are well suited to the analysis of a product cone. In this section, we compute the intrinsic volumes of a product cone using these techniques. Then we identify the mean, variance, and concentration behavior of the intrinsic volume random variable of a product cone.

5.1 The Product Rule for Intrinsic Volumes

The intrinsic volumes of the product cone can be derived from the intrinsic volumes of the two factors.

Corollary 5.1

(Product rule for intrinsic volumes) Let \(C_1 \in \fancyscript{C}_{d_1}\) and \(C_2 \in \fancyscript{C}_{d_2}\) be closed convex cones. The intrinsic volumes of the product cone \(C_1 \times C_2\) satisfy

$$\begin{aligned} v_k(C_1 \times C_2) = \textstyle \sum \limits _{i+j = k} v_i(C_1) \cdot v_j(C_2) \quad \text {for}\quad k = 0, 1, 2, \dots , d_1 + d_2. \end{aligned}$$
(5.1)

We present a short proof of Corollary 5.1 based on the conic Wills functional (4.5). This approach echoes Hadwiger’s method [21] for computing the Euclidean intrinsic volumes of a product of convex sets.

Proof

Let \(\varvec{g}_1 \in \mathbb {R}^{d_1}\) and \(\varvec{g}_2 \in \mathbb {R}^{d_2}\) be independent standard Gaussian vectors. The direct product \((\varvec{g}_1, \varvec{g}_2)\) is a standard Gaussian vector on \(\mathbb {R}^{d_1+d_2}\). For each \(\lambda > 0\), the definition (4.5) of the Wills functional gives

$$\begin{aligned} W_{C_1 \times C_2}(\lambda )&= \lambda ^{d_1+d_2} \mathrm{\mathbb {E} }{} \exp \Big ( \frac{1 - \lambda ^2}{2} \cdot {{\mathrm{dist}}}^2\big ( (\varvec{g}_1,\varvec{g}_2), C_1 \times C_2 \big ) \Big ) \\&= \lambda ^{d_1} \mathrm{\mathbb {E} }{} \exp \Big ( \frac{1 - \lambda ^2}{2} \cdot {{\mathrm{dist}}}^2( \varvec{g}_1, C_1) \Big )\\&\quad \times \lambda ^{d_2} \mathrm{\mathbb {E} }{} \exp \Big ( \frac{1 - \lambda ^2}{2} \cdot {{\mathrm{dist}}}^2( \varvec{g}_2, C_2) \Big )\\&= W_{C_1}(\lambda ) \cdot W_{C_2}(\lambda ). \end{aligned}$$

The second identity follows from the fact that the squared distance to a product cone equals the sum of the squared distances to the factors; we have also invoked the independence of the two standard Gaussian vectors to split the expectation. Applying the relation (4.5) twice, we find that

$$\begin{aligned} W_{C_1 \times C_2}(\lambda )&= W_{C_1}(\lambda ) \cdot W_{C_2}(\lambda ) = \Big ( \textstyle \sum \limits _{i=0}^{d_1} \lambda ^i v_i(C_1) \Big ) \Big ( \textstyle \sum \limits _{j=0}^{d_2} \lambda ^j v_j(C_2) \Big )\\&= \textstyle \sum \limits _{k=0}^{d_1+d_2} \lambda ^k \textstyle \sum \limits _{i + j = k} v_i(C_1) \cdot v_j(C_2). \end{aligned}$$

But (4.5) also shows that

$$\begin{aligned} W_{C_1 \times C_2}(\lambda ) = \textstyle \sum \limits _{k=0}^{d_1+d_2} \lambda ^k v_k(C_1 \times C_2). \end{aligned}$$

Comparing coefficients in these two polynomials, we arrive at the relation (5.1). \(\square \)
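The product rule can be sanity-checked on the nonnegative orthant (our illustration): \(\mathbb {R}_+^{d_1} \times \mathbb {R}_+^{d_2} = \mathbb {R}_+^{d_1+d_2}\), whose intrinsic volumes are binomial (see Sect. 6.1), so (5.1) reduces to the Vandermonde identity for binomial coefficients.

```python
from math import comb

# Product rule (5.1) for C1 = R_+^{d1}, C2 = R_+^{d2}: v_k(R_+^d) = C(d,k)/2^d,
# and C1 x C2 = R_+^{d1+d2}, so convolving the two binomial sequences must
# reproduce the binomial pmf on {0, ..., d1+d2}.
d1, d2 = 4, 7
v1 = [comb(d1, i) / 2 ** d1 for i in range(d1 + 1)]
v2 = [comb(d2, j) / 2 ** d2 for j in range(d2 + 1)]
conv = [sum(v1[i] * v2[k - i] for i in range(max(0, k - d2), min(d1, k) + 1))
        for k in range(d1 + d2 + 1)]
direct = [comb(d1 + d2, k) / 2 ** (d1 + d2) for k in range(d1 + d2 + 1)]
assert max(abs(a - b) for a, b in zip(conv, direct)) < 1e-12
```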

5.2 Concentration of the Intrinsic Volumes of a Product Cone

We can employ the probabilistic techniques from Sect. 4 to collect information about the intrinsic volumes of a product cone. Let \(C_1 \in \fancyscript{C}_{d_1}\) and \(C_2 \in \fancyscript{C}_{d_2}\) be two cones, and consider independent random variables \(V_{C_1}\) and \(V_{C_2}\) whose distributions are given by the intrinsic volumes of \(C_1\) and \(C_2\). In view of Corollary 5.1,

$$\begin{aligned} v_k(C_1 \times C_2)&= \textstyle \sum \limits _{i+j=k} v_i(C_1) \cdot v_j(C_2)\\&= \mathbb {P}\big \{ V_{C_1} + V_{C_2} = k \big \} \quad \text {for}\quad k = 0, 1, 2, \dots , d_1 + d_2. \end{aligned}$$

In other words, the intrinsic volume random variable \(V_{C_1 \times C_2}\) of the product cone has the distribution

$$\begin{aligned} V_{C_1 \times C_2} \sim V_{C_1} + V_{C_2}. \end{aligned}$$
(5.2)

This observation allows us to compute the statistical dimension of the product cone:

$$\begin{aligned} \delta ( C_1 \times C_2 ) = \mathrm{\mathbb {E} }\big [ V_{C_1 \times C_2} \big ] = \delta (C_1) + \delta (C_2). \end{aligned}$$
(5.3)

Of course, we can also derive (5.3) directly from Proposition 4.3. A more interesting consequence is the following expression for the variance of the intrinsic volumes:

$$\begin{aligned} {{\mathrm{Var}}}\big [ V_{C_1 \times C_2} \big ] = {{\mathrm{Var}}}[V_{C_1}] + {{\mathrm{Var}}}[V_{C_2}] \le 2 \, \big [ \big (\delta (C_1) \wedge \delta (C_1^\circ ) \big ) + \big (\delta (C_2) \wedge \delta (C_2^\circ )\big ) \big ]. \end{aligned}$$
(5.4)

The inequality follows from Theorem 4.5. With some additional effort, we can develop a concentration result for the intrinsic volumes of a product cone that matches the variance bound (5.4).

Corollary 5.2

(Concentration of intrinsic volumes for a product cone) Let \(C_1 \in \fancyscript{C}_{d_1}\) and \(C_2 \in \fancyscript{C}_{d_2}\) be closed convex cones. For each \(\lambda \ge 0\),

$$\begin{aligned} \mathbb {P}\big \{ \vert { V_{C_1 \times C_2} - \delta (C_1 \times C_2) }\vert \ge \lambda \big \} \le 2 \, \exp \Big ( \frac{-\lambda ^2/4}{\sigma ^2 + \lambda /3} \Big ) \end{aligned}$$

where

$$\begin{aligned} \sigma ^2 := \big (\delta (C_1) \wedge \delta (C_1^\circ ) \big ) + \big (\delta (C_2) \wedge \delta (C_2^\circ )\big ). \end{aligned}$$

This represents a significant improvement over the simple tail bound from [5, Lem. 7.2]. A similar result holds for any finite product \(C_1 \times \cdots \times C_r\) of closed convex cones.

Proof

First, recall the numerical inequality

$$\begin{aligned} \frac{\mathrm {e}^{2\eta } - 2 \eta - 1}{2} \le \frac{\eta ^2}{1 - 2\vert {\eta }\vert /3} \quad \text {for}\quad \vert {\eta }\vert < \tfrac{3}{2}. \end{aligned}$$

This estimate allows us to package the two exponential moment bounds from Theorem 4.8 as

$$\begin{aligned} \mathrm{\mathbb {E} }\mathrm {e}^{\eta (V_C - \delta (C))} \le \exp \Big ( \frac{\eta ^2 (\delta (C) \wedge \delta (C^\circ ))}{1-2\vert {\eta }\vert /3} \Big ) \quad \text {for} \quad \vert {\eta }\vert < \tfrac{3}{2}. \end{aligned}$$

Applying this bound twice, we learn that the exponential moments of the random variable \(V_{C_1 \times C_2}\) satisfy

$$\begin{aligned} \mathrm{\mathbb {E} }\mathrm {e}^{\eta \, (V_{C_1 \times C_2} - \delta (C_1 \times C_2))} = \mathrm{\mathbb {E} }{} \mathrm {e}^{ \eta \, (V_{C_1} - \delta (C_1))} \cdot \mathrm{\mathbb {E} }\mathrm {e}^{ \eta \, (V_{C_2} - \delta (C_2))} \le \exp \Big ( \frac{\eta ^2 \sigma ^2}{1-2\vert {\eta }\vert /3} \Big ). \end{aligned}$$

The first relation follows from the distributional identity (5.2) and the statistical dimension calculation (5.3). The Laplace transform method delivers

$$\begin{aligned} \mathbb {P}\big \{ V_{C_1 \times C_2} - \delta (C_1 \times C_2) \ge \lambda \big \}&\le \inf _{\eta > 0} \Big \{ \mathrm {e}^{- \eta \lambda } \cdot \exp \Big ( \frac{\eta ^2 \sigma ^2}{1-2\vert {\eta }\vert /3} \Big ) \Big \}\\&\le \exp \Big ( \frac{-\lambda ^2/4}{\sigma ^2 + \lambda /3} \Big ). \end{aligned}$$

We have chosen \(\eta = \lambda /(2\sigma ^2 + 2\lambda /3)\) to reach the second inequality. We obtain the lower tail bound from the same argument. \(\square \)
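To verify the final step (our elaboration): with \(\eta = \lambda /(2\sigma ^2 + 2\lambda /3)\), one has \(1 - 2\eta /3 = 3\sigma ^2/(3\sigma ^2 + \lambda )\), and hence

$$\begin{aligned} -\eta \lambda + \frac{\eta ^2 \sigma ^2}{1 - 2\eta /3} = \frac{-\lambda ^2}{2(\sigma ^2 + \lambda /3)} + \frac{\lambda ^2}{4(\sigma ^2 + \lambda /3)} = \frac{-\lambda ^2/4}{\sigma ^2 + \lambda /3}. \end{aligned}$$

Note also that this choice automatically satisfies \(\eta < \lambda /(2\lambda /3) = \tfrac{3}{2}\), so the exponential moment bound applies.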

6 Examples

In this section, we demonstrate the vigor of the ideas from Sect. 4 by applying them to some concrete examples. The probabilistic viewpoint provides new insights, and it enables us to complete some difficult calculations with minimal effort.

6.1 The Nonnegative Orthant

As a warmup, we begin with an example where it is easy to compute the intrinsic volumes directly. The nonnegative orthant \(\mathbb {R}_+^d\) is the polyhedral cone

$$\begin{aligned} \mathbb {R}_+^d := \big \{ \varvec{x} \in \mathbb {R}^d : x_i \ge 0 \text { for}\, i=1, \dots , d \big \}. \end{aligned}$$

The nonnegative orthant is self-dual, which immediately delivers several results. For typographical felicity, we abbreviate \(C = \mathbb {R}_+^d\). Applying the identity (4.2) and Theorem 4.5, we find that

$$\begin{aligned} \delta (C) = \mathrm{\mathbb {E} }[ V_C ] = \tfrac{1}{2}d \quad \text {and}\quad {{\mathrm{Var}}}[V_C] \le 2 \delta (C) = d. \end{aligned}$$

The tail bound (4.14) specializes to

$$\begin{aligned} \mathbb {P}\big \{ \vert { V_C - \tfrac{1}{2}d}\vert \ge \lambda \big \} \le 2 \, \exp \Big ( \frac{-\lambda ^2}{2d + 4\lambda /3} \Big ). \end{aligned}$$

These estimates already provide a significant amount of information about the intrinsic volumes of the orthant.

How well do these bounds describe the actual behavior of the intrinsic volumes? Appealing directly to Definition 2.1, we can check that \(V_C \sim \textsc {binomial}\big (d, \tfrac{1}{2}\big )\). See, for example, [6, Ex. 4.4.7]. Therefore,

$$\begin{aligned} \mathrm{\mathbb {E} }[ V_C ] = \tfrac{1}{2}d \quad \text {and}\quad {{\mathrm{Var}}}[ V_C ] = \tfrac{1}{4} d. \end{aligned}$$

Furthermore, the binomial random variable satisfies a sharp tail bound of the form

$$\begin{aligned} \mathbb {P}\big \{ \vert { V_C - \tfrac{1}{2}d }\vert \ge \lambda \big \} \le 2 \, \exp \Big ( \frac{-2\lambda ^2}{d} \Big ). \end{aligned}$$

We discover that our general results overestimate the variance of \(V_C\) by a factor of four, but they do capture the subgaussian decay of the intrinsic volumes.
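For intuition, the binomial law is easy to see empirically (our sketch, in Python): the projection \(\varvec{\Pi }_C(\varvec{g})\) retains exactly the nonnegative coordinates of \(\varvec{g}\), and the number of such coordinates follows the \(\textsc {binomial}\big (d, \tfrac{1}{2}\big )\) distribution.

```python
import random

# Empirical check that the number of nonnegative coordinates of a standard
# Gaussian vector (the dimension of the face of R_+^d hit by the projection)
# has mean d/2 and variance d/4, matching V_C ~ binomial(d, 1/2).
random.seed(2)
d, n = 10, 100_000
counts = [sum(random.gauss(0.0, 1.0) >= 0.0 for _ in range(d)) for _ in range(n)]
mean = sum(counts) / n
var = sum((c - mean) ** 2 for c in counts) / n
assert abs(mean - d / 2) < 0.05
assert abs(var - d / 4) < 0.1
```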

6.2 The Cone of Positive-Semidefinite Matrices

Our approach to intrinsic volume calculations is most valuable when there is no explicit formula for the intrinsic volumes or the expressions are too complicated to evaluate easily. For a challenge of this type, let us consider the cone of real positive-semidefinite matrices. We can compute the mean and variance of the intrinsic volume sequence of this cone by combining our methods with established results from random matrix theory.

The cone \(\mathbb {S}_+^n\) consists of all \(n \times n\) positive-semidefinite (psd) matrices:

$$\begin{aligned} \mathbb {S}_{+}^n := \big \{ \varvec{X} \in \mathbb {R}_\mathrm{sym}^{n \times n} : \varvec{u}^T\varvec{X} \varvec{u} \ge 0\,\text {for all}\,\varvec{u} \in \mathbb {R}^n \big \}, \end{aligned}$$

where \(\mathbb {R}_\mathrm{sym}^{n \times n}\) consists of \(n \times n\) symmetric matrices. This vector space has dimension \(d = n(n+1)/2\). The psd cone is self-dual with respect to \(\mathbb {R}_\mathrm{sym}^{n \times n}\), so the expression (4.2) shows that the statistical dimension

$$\begin{aligned} \delta ( \mathbb {S}_+^n ) = \frac{n(n+1)}{4}. \end{aligned}$$

As with the nonnegative orthant, we immediately obtain bounds on the variance and concentration inequalities for the intrinsic volumes.

We will use Proposition 4.4 to compute the variance of the sequence of intrinsic volumes when \(n\) is large. Let us abbreviate \(C = \mathbb {S}_+^n\). The intrinsic volumes do not depend on the embedding dimension of the cone, so there is no harm in treating the cone as a subset of the linear space \(\mathbb {R}^{n \times n}\) of square matrices. To compute the metric projection of a matrix \(\varvec{X} \in \mathbb {R}^{n \times n}\) onto the cone \(C\), we first extract the symmetric part of the matrix and then compute the positive part [11, p. 99] of the Jordan decomposition:

$$\begin{aligned} \varvec{\Pi }_C(\varvec{X}) = \varvec{\Pi }_C\big ( \tfrac{1}{2}(\varvec{X} + \varvec{X}^T) \big ) = \tfrac{1}{2}(\varvec{X} + \varvec{X}^T)_+. \end{aligned}$$

It follows that

$$\begin{aligned} \big \Vert {\varvec{\Pi }_C(\varvec{X})} \big \Vert _{\mathrm {F}}^2 = \tfrac{1}{4} \big \Vert { \big (\varvec{X} + \varvec{X}^T\big )_+ } \big \Vert _{\mathrm {F}}^2 = \tfrac{1}{4} \mathrm{tr }\big [ \big ( \varvec{X} + \varvec{X}^T\big )_+^2 \big ]. \end{aligned}$$

Let \(\varvec{G}_n \in \mathbb {R}^{n \times n}\) be a matrix with independent standard Gaussian entries. Then the matrix \(\varvec{W}_n = 2^{-1/2} \big (\varvec{G}_n + \varvec{G}_n^T\big ) \in \mathbb {R}_\mathrm{sym}^{n\times n}\) is a member of the Gaussian orthogonal ensemble (GOE). We have

$$\begin{aligned} \big \Vert { \varvec{\Pi }_C(\varvec{G}_n) } \big \Vert _{\mathrm {F}}^2 = \tfrac{1}{2} \mathrm{tr }\big [ (\varvec{W}_n)_+^2 \big ]. \end{aligned}$$

To invoke Proposition 4.4, we must compute the variance of this quantity.

Our method is to renormalize the matrix and invoke asymptotic results for the GOE. From the formula above,

$$\begin{aligned} {{\mathrm{Var}}}\big [ \big \Vert { \varvec{\Pi }_C(\varvec{G}_n) } \big \Vert _{\mathrm {F}}^2 \big ] = {{\mathrm{Var}}}\Big [ \frac{n}{2} \cdot \mathrm{tr }\big [ \big (n^{-1/2} \varvec{W}_n \big )_+^2 \big ] \Big ] = \frac{n^2}{4}{{\mathrm{Var}}}\Bigl [ \frac{1}{n} \mathrm{tr }\big [ \big ( \varvec{W}_n \big )_+^2 \big ] \Bigr ]. \end{aligned}$$

The final term is the variance of a linear eigenvalue statistic: the function \(h(s) := (s)_+^2 = \max \{s, 0\}^2\) summed over the eigenvalues of the rescaled matrix \(n^{-1/2} \varvec{W}_n\). In the limit as \(n \rightarrow \infty \), this variance can be expressed in terms of an integral against a kernel associated with the GOE [16, Thm. 9.2].

$$\begin{aligned} {{\mathrm{Var}}}\Bigl [\frac{1}{n} \mathrm{tr }\big [\big ( \varvec{W}_n \big )^2_+\big ] \Bigr ] \rightarrow \int \limits _{-2}^2 \int \limits _{-2}^2 h'(s) h'(t) \cdot \rho _\mathrm{GOE}(s,t) \, \mathrm {d}{s}\, \mathrm {d}{t}, \end{aligned}$$

where the kernel takes the form

$$\begin{aligned} \rho _\mathrm{GOE}(s,t) = \frac{1}{2 \pi ^2} \log \Big ( \frac{4 - st + \sqrt{(4-s^2)(4-t^2)}}{4 - st - \sqrt{(4-s^2)(4-t^2)}} \Big ). \end{aligned}$$

With the assistance of the Mathematica computer algebra system, we determine that the double integral equals \(1 + 16/\pi ^2\). Therefore,

$$\begin{aligned} {{\mathrm{Var}}}\big [ \big \Vert { \varvec{\Pi }_C(\varvec{G}_n) } \big \Vert _{\mathrm {F}}^2 \big ] = \frac{n^2}{4} \Big (1 + \frac{16}{\pi ^2}\Big ) + o(n^2) \quad \text {as} \quad n \rightarrow \infty . \end{aligned}$$

Proposition 4.4 yields

$$\begin{aligned} {{\mathrm{Var}}}[V_C] = {{\mathrm{Var}}}\big [ \big \Vert { \varvec{\Pi }_C(\varvec{G}_n) } \big \Vert _{\mathrm {F}}^2 \big ] - 2\delta (C) = \frac{n^2}{4} \Big (\frac{16}{\pi ^2} - 1\Big ) + o(n^2) \quad \text {as} \quad n \rightarrow \infty . \end{aligned}$$

In particular,

$$\begin{aligned} \frac{{{\mathrm{Var}}}[V_C]}{\delta (C)} \rightarrow \frac{16}{\pi ^2} - 1 \quad \text {as} \quad n \rightarrow \infty . \end{aligned}$$

This ratio measures how much the intrinsic volumes are spread out relative to the size of the cone.
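The value of the double integral can be reproduced numerically (our sketch, in Python with numpy; the two offset grids sidestep the logarithmic singularity of the kernel on the diagonal \(s = t\)):

```python
import numpy as np

# Rectangle-rule evaluation of the GOE variance integral with h'(s) = 2*max(s,0).
# The s- and t-grids are offset by half a step so that s never equals t, where
# rho_GOE has an integrable logarithmic singularity.
N = 1500
hstep = 4.0 / N
s = -2.0 + (np.arange(N) + 0.25) * hstep     # uniform grid strictly inside (-2, 2)
t = -2.0 + (np.arange(N) + 0.75) * hstep     # offset grid, |s - t| >= hstep/2
S, T = np.meshgrid(s, t, indexing="ij")
r = np.sqrt((4.0 - S ** 2) * (4.0 - T ** 2))
rho = np.log((4.0 - S * T + r) / (4.0 - S * T - r)) / (2.0 * np.pi ** 2)
integrand = (2.0 * np.maximum(S, 0.0)) * (2.0 * np.maximum(T, 0.0)) * rho
integral = float(np.sum(integrand) * hstep ** 2)
assert abs(integral - (1.0 + 16.0 / np.pi ** 2)) < 0.08
```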

As a point of comparison, Amelunxen and Bürgisser have computed the intrinsic volumes of the psd cone exactly using methods from differential geometry [3, Thm. 4.1]. The expressions, involving Mehta integrals, can be evaluated for low-dimensional cones, but they have resisted asymptotic analysis.

6.3 Circular Cones

A circular cone \(C\) in \(\mathbb {R}^d\) with angle \(0 \le \alpha \le \pi /2\) takes the form

$$\begin{aligned} C = {{\mathrm{Circ}}}_d(\alpha ) := \big \{ \varvec{x} \in \mathbb {R}^d : x_1 \ge \Vert {\varvec{x}}\Vert \cos (\alpha ) \big \}. \end{aligned}$$

In particular, this family includes the second-order cone \(\mathbb {L}^{d} := {{\mathrm{Circ}}}_{d}(\pi /4)\). Second-order cones are also known as Lorentz cones, and they are self-dual.

With some effort, it is possible to work out the intrinsic volumes of a circular cone [6, Ex. 4.4.8]. Instead, we apply our techniques to compute the mean and variance of the intrinsic volume random variable. This calculation demonstrates that circular cones with a small angle saturate the variance bound from Theorem 4.5. Afterward, we sketch an argument that small circular cones also saturate the upper tail bound from Corollary 4.10.

6.3.1 Mean and Variance Calculations

Fix an angle \(\alpha \in (0, \pi /2)\). We consider the circular cone \(C = {{\mathrm{Circ}}}_d(\alpha )\) where the dimension \(d\) is large. For each unit vector \(\varvec{u} \in \mathbb {R}^d\), elementary trigonometry shows that

$$\begin{aligned} \Vert \varvec{\Pi }_C(\varvec{u})\Vert ^2 = H(\arccos (u_1)) \quad \text {where}\quad H(\beta ) := \left\{ \begin{array}{ll} 1, &{} \beta \in [0, \alpha ), \\ \cos ^2(\beta - \alpha ), &{} \beta \in [\alpha , \alpha + \pi /2], \\ 0, &{} \beta \in (\alpha + \pi /2, \pi ]. \end{array} \right. \end{aligned}$$

Recall the polar decomposition \(\varvec{g} = R \cdot \varvec{\theta }\) where \(R\) and \(\varvec{\theta }\) are independent, \(R^2\) is a chi-square random variable with \(d\) degrees of freedom, and \(\varvec{\theta }\) is uniformly distributed on the sphere. With this notation,

$$\begin{aligned} \Vert \varvec{\Pi }_C(\varvec{g})\Vert ^2 = R^2 \cdot H\big (\arccos (\theta _1)\big ). \end{aligned}$$
(6.1)

This expression allows us to evaluate the moments of \(\Vert \varvec{\Pi }_C(\varvec{g})\Vert ^2\) quickly.

We begin with the statistical dimension. Combine Proposition 4.3 with the expression (6.1) and integrate in polar coordinates to reach

$$\begin{aligned} \delta (C) = \frac{d}{\kappa _d} \int \limits _{0}^\pi H(\beta ) \sin ^{d-2}(\beta ) \, \mathrm {d}{\beta } \quad \text {where}\quad \kappa _d := \int \limits _{0}^\pi \sin ^{d-2}(\beta ) \, \mathrm {d}{\beta }. \end{aligned}$$

A more detailed version of this calculation appears in [26, Prop. 6.8]. The function \(\beta \mapsto \sin ^{d-2}(\beta )\) peaks sharply around \(\pi /2\), so it does little harm to replace \(H\) by the function \(\widetilde{H}(\beta ) = \cos ^2(\beta - \alpha )\) in the integrand. Computing this simpler integral, we obtain a closed-form expression plus a remainder term:

$$\begin{aligned} \delta (C) = d \sin ^2(\alpha ) + \cos (2\alpha ) + \varepsilon _1(\alpha , d). \end{aligned}$$
(6.2)

We assert that the remainder term is exponentially small as a function of the dimension:

$$\begin{aligned} \vert {\varepsilon _1(\alpha , d)}\vert < \sqrt{\frac{\pi }{8}} \cdot d^{3/2} \cdot \exp \Big ( - \frac{1}{2} (d-1) \cdot \big (\alpha \wedge (\pi /2-\alpha )\big )^2 \Big ). \end{aligned}$$

The precise form of the error is not particularly important here, so we omit the details.
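The integral representation for \(\delta (C)\) is easy to evaluate numerically. The following Python sketch (ours, purely illustrative) applies a midpoint rule and compares the result with the closed form (6.2) for \(d = 50\) and \(\alpha = 0.6\).

```python
import math

def H(beta, alpha):
    # The squared projection norm as a function of the angle beta.
    if beta < alpha:
        return 1.0
    if beta <= alpha + math.pi / 2:
        return math.cos(beta - alpha) ** 2
    return 0.0

def statistical_dimension(alpha, d, n=200_000):
    # Midpoint rule for delta(C) = (d / kappa_d) int_0^pi H(beta) sin^{d-2}(beta) dbeta.
    h = math.pi / n
    num = den = 0.0
    for i in range(n):
        beta = (i + 0.5) * h
        w = math.sin(beta) ** (d - 2)
        num += H(beta, alpha) * w
        den += w
    return d * num / den

alpha, d = 0.6, 50
delta_num = statistical_dimension(alpha, d)
delta_closed = d * math.sin(alpha) ** 2 + math.cos(2 * alpha)   # cf. (6.2)
```

The agreement is far better than the stated exponential error bound requires.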

The same approach delivers the variance of \(V_C\). Repeating the argument above, we get

$$\begin{aligned} \mathrm{\mathbb {E} }\big [ \Vert \varvec{\Pi }_C(\varvec{g})\Vert ^4 \big ] = \frac{d(d+2)}{\kappa _d} \int \limits _0^\pi H(\beta )^2 \sin ^{d-2}(\beta ) \, \mathrm {d}{\beta }. \end{aligned}$$

Replace \(H\) with \(\widetilde{H}\) and integrate to obtain

$$\begin{aligned} \mathrm{\mathbb {E} }\big [ \Vert \varvec{\Pi }_C(\varvec{g})\Vert ^4 \big ]&= \tfrac{3}{8} d(d+2) - \tfrac{1}{2}(d-2)(d+2) \cos (2\alpha )\nonumber \\&\quad +\, \tfrac{1}{8}(d-4)(d-2)\cos (4\alpha ) + \varepsilon _2(\alpha ,d). \end{aligned}$$
(6.3)

The error term satisfies the bound

$$\begin{aligned} \vert \varepsilon _2(\alpha , d) \vert \le \sqrt{\frac{\pi }{8}} \cdot d^{3/2} (d+2) \cdot \exp \Big ( - \frac{1}{2} (d-1) \cdot \big (\alpha \wedge (\pi /2-\alpha )\big )^2 \Big ). \end{aligned}$$

In view of Propositions 4.3 and 4.4, we determine that

$$\begin{aligned} {{\mathrm{Var}}}[V_C] = {{\mathrm{Var}}}\big [ \Vert \varvec{\Pi }_C(\varvec{g})\Vert ^2 \big ] - 2\delta (C) = \mathrm{\mathbb {E} }\big [ \Vert \varvec{\Pi }_C(\varvec{g})\Vert ^4 \big ] - \delta (C)^2 - 2\delta (C). \end{aligned}$$
(6.4)

Combine (6.2), (6.3), and (6.4) and simplify to reach

$$\begin{aligned} {{\mathrm{Var}}}[V_C] = \tfrac{1}{2}(d - 2) \sin ^2(2\alpha ) + \varepsilon _3(\alpha ,d). \end{aligned}$$

Once again, the remainder term is exponentially small in the dimension \(d\) for each fixed \(\alpha \in (0, \pi /2)\).
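Continuing the sketch (ours, purely illustrative), the closed form for the variance can be confirmed numerically: evaluate the second and fourth moment integrals by quadrature and combine them through the identity (6.4).

```python
import math

def H(beta, alpha):
    # The squared projection norm as a function of the angle beta.
    if beta < alpha:
        return 1.0
    if beta <= alpha + math.pi / 2:
        return math.cos(beta - alpha) ** 2
    return 0.0

def moments(alpha, d, n=200_000):
    # Second and fourth moments of ||Pi_C(g)|| via the integral formulas above.
    h = math.pi / n
    m2 = m4 = den = 0.0
    for i in range(n):
        beta = (i + 0.5) * h
        w = math.sin(beta) ** (d - 2)
        m2 += H(beta, alpha) * w
        m4 += H(beta, alpha) ** 2 * w
        den += w
    return d * m2 / den, d * (d + 2) * m4 / den

alpha, d = 0.6, 50
delta, fourth = moments(alpha, d)
var_vc = fourth - delta ** 2 - 2 * delta                  # identity (6.4)
var_closed = 0.5 * (d - 2) * math.sin(2 * alpha) ** 2
```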

These calculations allow us to compare the variance of \(V_C\) with the statistical dimension \(\delta (C)\):

$$\begin{aligned} \frac{{{\mathrm{Var}}}[V_C]}{\delta (C)}&= \frac{(d-2) \sin ^2(2\alpha )}{2 d \sin ^2(\alpha )} + \varepsilon _4(\alpha , d) \rightarrow 2 \cos ^2(\alpha )\\&\quad \text {as}\quad d \rightarrow \infty \quad \text {with fixed}\quad \alpha > 0. \end{aligned}$$

By considering a sufficiently small angle \(\alpha \), we can find a circular cone \(C = {{\mathrm{Circ}}}_d(\alpha )\) for which \({{\mathrm{Var}}}[V_C]\) is arbitrarily close to \(2\delta (C)\). In conclusion, we cannot improve the constant two in the variance bound from Theorem 4.5.

6.3.2 Tail Behavior

Circular cones also exhibit tail behavior that matches the predictions of Corollary 4.10 exactly. It takes some technical effort to establish this claim in detail, so we limit ourselves to a sketch of the argument. These ideas are drawn from [5, Sec. 6.2].

Fix an angle \(0 < \alpha \ll \tfrac{\pi }{2}\), and abbreviate \(q = \sin ^2(\alpha )\). Consider the circular cone \(C = {{\mathrm{Circ}}}_d(\alpha )\) whose dimension takes the form \(d = 2(n+1)\) for a large integer \(n\). In particular, the formula (6.2) shows that the statistical dimension \(\delta (C) \approx d\sin ^2(\alpha ) \approx 2nq\). It can be established that the odd intrinsic volumes of \(C\) follow a binomial distribution [6, Ex. 4.4.8]:

$$\begin{aligned} 2 \, v_{2j+1}(C) = \mathbb {P}\big \{ Y = j \big \} \quad \text {for}\quad j = 0, 1, \dots , n, \end{aligned}$$

where \(Y\) has the binomial distribution with \(n\) trials and success probability \(q\). As a consequence of the Gauss–Bonnet Theorem [37, Thm. 6.5.5], there is an interlacing result [5, Prop. 5.6] for the upper tail of \(V_C\):

$$\begin{aligned} \mathbb {P}\big \{ Y \ge k \big \} \le \mathbb {P}\big \{ V_C \ge 2k \big \} \le \mathbb {P}\big \{ Y \ge k - 1 \big \}. \end{aligned}$$

Thus, accurate probability bounds for \(V_C\) follow from bounds for the binomial random variable \(Y\).

Our tail inequality, Corollary 4.10, predicts that \(V_C\) has subgaussian behavior for moderate deviations. To see that circular cones actually display this behavior, we turn to the classical limits for the binomial random variable \(Y_n\). The Laplace–de Moivre central limit theorem states that

$$\begin{aligned} \mathbb {P}\big \{ Y_n - nq \ge t \sqrt{nq(1-q)} \big \} \rightarrow 1 - \Phi (t) \quad \text {as}\quad n \rightarrow \infty \quad \text {with}\,q\,\text {fixed}. \end{aligned}$$

Here, \(\Phi \) denotes the distribution function of a standard Gaussian variate. When \(q \approx 0\) and \(n\) is large, we can invoke the approximation \(\delta (C) \approx 2nq\) and a tail bound for the Gaussian distribution to obtain

$$\begin{aligned} \mathbb {P}\big \{ V_C - \delta (C) \ge \lambda \sqrt{\delta (C)} \big \}&\approx \mathbb {P}\big \{ V_C - 2nq \ge \lambda \sqrt{2nq(1-q)} \big \}\\&\approx \mathbb {P}\big \{ Y_n - nq \ge (2^{-1/2} \lambda ) \sqrt{nq(1-q)} \big \} \approx \mathrm {e}^{-\lambda ^2/4}. \end{aligned}$$

This expression matches the behavior expressed in the weaker tail bound (4.14). In other words, we see that the intrinsic volumes of a small circular cone have subgaussian concentration for moderate deviations, with variance approximately \(2 \delta (C)\).

Corollary 4.10 also predicts that \(V_C\) has Poisson tails for very large deviations. Vanishingly small circular cones display this behavior. Suppose that \(q = q_n = b / n\) for a large constant \(b\). The approximation \(\delta (C) \approx 2b\) and Chernoff’s bound for the tail of a binomial random variable together give

$$\begin{aligned} \mathbb {P}\big \{ V_C - \delta (C) \ge \lambda \delta (C) \big \}&\approx \mathbb {P}\big \{ V_C - 2b \ge 2\lambda b \big \}\\&\approx \mathbb {P}\big \{ Y_n - b \ge \lambda b \big \}\\&\approx \mathrm {e}^{ b \, (\lambda - (1+\lambda )\log (1+\lambda )) }. \end{aligned}$$

After a change of variables, this formula coincides with the tail bound (4.6). The Chernoff bound is quite accurate in this regime, so we see that (4.6) is saturated by vanishingly small circular cones in high dimensions.
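The Chernoff bound invoked here is the standard one for a binomial variable with mean \(b\), namely \(\mathbb {P}\{ Y_n - b \ge \lambda b \} \le \mathrm {e}^{b(\lambda - (1+\lambda )\log (1+\lambda ))}\). The following Python snippet (ours, for illustration) verifies this inequality against the exact binomial tail for a few values of \(\lambda \).

```python
import math

def binom_tail(n, p, k):
    # P{Y >= k} for Y ~ Binomial(n, p), computed exactly from the p.m.f.
    return sum(math.comb(n, j) * p ** j * (1 - p) ** (n - j)
               for j in range(k, n + 1))

n, b = 500, 10.0
p = b / n                                  # q_n = b / n, so E[Y_n] = b
pairs = []
for lam in (1.0, 2.0, 4.0):
    k = math.ceil((1 + lam) * b)
    tail = binom_tail(n, p, k)
    bound = math.exp(b * (lam - (1 + lam) * math.log(1 + lam)))
    pairs.append((tail, bound))
```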

6.4 Summary of Calculations

We conclude this section with an overview of the statistical dimension and variance calculations. See Table 1 for this material. Observe that the ratio of the variance \({{\mathrm{Var}}}[V_C]\) to the statistical dimension \(\delta (C)\) can range from zero to two. A subspace \(L_k\) with dimension \(k\) shows that the lower bound is achievable across the entire range of statistical dimensions. The circular cones \({{\mathrm{Circ}}}_d(\alpha )\) show that the upper bound is saturated by cones whose statistical dimension is small. Amelunxen (personal communication) has conjectured that somewhat tighter bounds are possible when \(\delta (C) \approx \tfrac{1}{2}d\), but this remains to be established.

Table 1 Statistical dimension and variance calculations

7 Background on Conic Geometry

This section summarizes the foundational material that we require to establish the master Steiner formula. We provide sketches or references in lieu of proofs to keep the presentation lean. Most of the material here is drawn from the books [8, 29, 32, 37].

7.1 The Tiling Induced by a Polyhedral Cone

This section describes a fundamental decomposition of \(\mathbb {R}^d\) induced by a closed convex cone and its polar. First, we recall a few basic facts that we will use liberally in our development. Given a cone \(C \in \fancyscript{C}_d\), every point \(\varvec{x} \in \mathbb {R}^d\) can be expressed as an orthogonal sum of the type

$$\begin{aligned} \varvec{x} = \varvec{\Pi }_C(\varvec{x}) + \varvec{\Pi }_{C^\circ }(\varvec{x}) \quad \text {where}\quad \varvec{\Pi }_C(\varvec{x}) \perp \varvec{\Pi }_{C^\circ }(\varvec{x}). \end{aligned}$$
(7.1)

We often use the consequence

$$\begin{aligned} \Vert \varvec{\Pi }_{C^\circ }(\varvec{x})\Vert ^2 = {{\mathrm{dist}}}^2( \varvec{x}, C ). \end{aligned}$$
(7.2)

Another outcome is the Pythagorean relation

$$\begin{aligned} \Vert \varvec{x} \Vert ^2 = \Vert \varvec{\Pi }_C(\varvec{x}) \Vert ^2 + \Vert \varvec{\Pi }_{C^\circ }(\varvec{x}) \Vert ^2. \end{aligned}$$
(7.3)

See Rockafellar [29, pp. 338–341] for more information about this construction.
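For a concrete instance of the identities (7.1)–(7.3), consider the nonnegative orthant \(C = \mathbb {R}_+^d\), whose polar cone is \(-\mathbb {R}_+^d\) and whose projections act coordinatewise. The following Python sketch (ours) verifies the three identities on a random point.

```python
import random

random.seed(1)
d = 6
x = [random.gauss(0.0, 1.0) for _ in range(d)]

# For the orthant C = R_+^d, the polar cone is C^o = -R_+^d, and both
# metric projections act coordinatewise.
p = [max(t, 0.0) for t in x]      # Pi_C(x)
q = [min(t, 0.0) for t in x]      # Pi_{C^o}(x)

inner = sum(a * b for a, b in zip(p, q))                 # should vanish, cf. (7.1)
pythag_gap = (sum(a * a for a in p) + sum(b * b for b in q)
              - sum(t * t for t in x))                   # should vanish, cf. (7.3)
dist2 = sum((a - t) ** 2 for a, t in zip(p, x))          # dist^2(x, C)
polar_norm2 = sum(b * b for b in q)                      # ||Pi_{C^o}(x)||^2, cf. (7.2)
```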

Let \(C\) be a polyhedral cone. A face \(F(\varvec{u})\) of \(C\) with outward normal \(\varvec{u}\) takes the form

$$\begin{aligned} F(\varvec{u}) := \big \{ \varvec{x} \in C : \langle { \varvec{u} },\ { \varvec{x} } \rangle = \sup \nolimits _{\varvec{y} \in C} \langle { \varvec{u} },\ {\varvec{y}} \rangle \big \}. \end{aligned}$$

The face \(F(\varvec{u})\) is nonempty if and only if \(\varvec{u} \in C^\circ \); otherwise, the supremum is infinite. The linear hull \(\mathrm{lin }(K)\) of a convex set \(K \subset \mathbb {R}^d\) is the intersection of all subspaces that contain \(K\). The dimension of a face \(F\) is the dimension of its linear hull \(\mathrm{lin }(F)\).

Recall that the polar of a polyhedral cone is always a polyhedral cone. The outward normals of a face \(F\) of the cone \(C\) comprise a face \(N_F\) of the polar cone \(C^\circ \) called the normal face:

$$\begin{aligned} N_F := \mathrm{lin }(F)^\circ \cap C^\circ . \end{aligned}$$

Each polyhedral cone \(C\) induces a tiling of \(\mathbb {R}^d\) in which each tile is the orthogonal sum of the relative interior of a face of \(C\) and the corresponding normal face in \(C^\circ \). The following statement of this claim amplifies an observation of McMullen [28, Lem. 3]. Below, the relative interior \(\mathrm{relint }(K)\) refers to the interior with respect to the relative topology induced by \(\mathbb {R}^d\) on the linear hull \(\mathrm{lin }(K)\).

Fact 7.1

(The tiling induced by a polyhedral cone) Let \(C \in \fancyscript{C}_d\) be a polyhedral cone. Then the inverse image of the relative interior of a face \(F\) has the orthogonal decomposition

$$\begin{aligned} \varvec{\Pi }_C^{-1}\big (\mathrm{relint }(F) \big ) = \mathrm{relint }(F) + N_F. \end{aligned}$$
(7.4)

Moreover, the space \(\mathbb {R}^d\) is a disjoint union of the inverse images of the faces of the cone \(C\):

$$\begin{aligned} \mathbb {R}^d = \bigsqcup _{F\ \text {a face of}\ C} \big ( \mathrm{relint }(F) + N_F \big ). \end{aligned}$$
(7.5)

Fact 7.1 is almost obvious from the orthogonal decomposition (7.1). See [26, Prop. A.8] for a detailed proof.

7.2 The Solid Angle of a Cone

Let \(C\) be a convex cone whose linear hull is \(j\)-dimensional. The solid angle of the cone is defined as

$$\begin{aligned} \angle (C) := \frac{1}{(2\pi )^{j/2}} \int \limits _C \mathrm {e}^{- \Vert \varvec{x} \Vert ^2 / 2} \, \mathrm {d}{\varvec{x}} = \mathbb {P}\big \{ \varvec{g}_C \in C \big \} = \mathbb {P}\big \{ \varvec{\theta }_C \in C \big \}. \end{aligned}$$
(7.6)

The volume element \(\mathrm {d}{\varvec{x}}\) derives from the Lebesgue measure on the linear hull \(\mathrm{lin }(C)\). The random vector \(\varvec{g}_C\) has the standard Gaussian distribution on \(\mathrm{lin }(C)\), and \(\varvec{\theta }_C\) is uniformly distributed on the unit sphere in \(\mathrm{lin }(C)\). We use the convention that the unit sphere in the zero-dimensional Euclidean space \(\mathbb {R}^0\) is the set \(\mathsf {S}^{-1} := \{0\}\).
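The probabilistic interpretation (7.6) suggests a simple Monte Carlo estimate. As an illustration (ours), the solid angle of the orthant \(\mathbb {R}_+^3\), which equals \(1/8\) by symmetry, can be estimated by sampling Gaussian vectors.

```python
import random

random.seed(2)
d, n = 3, 200_000
hits = 0
for _ in range(n):
    g = [random.gauss(0.0, 1.0) for _ in range(d)]
    if min(g) >= 0.0:           # g lands in the orthant R_+^3
        hits += 1
angle_mc = hits / n             # Monte Carlo estimate of P{g in C} = 1/8
```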

Let \(C\) be a polyhedral cone, and let \(F\) be a face of \(C\) with normal face \(N_F\). The internal angle of \(F\) is the solid angle \(\angle (F)\), while the external angle of \(F\) is the solid angle \(\angle (N_F)\). The intrinsic volumes of a polyhedral cone can be written in terms of the internal and external angles of the faces.

Fact 7.2

(Intrinsic volumes and polyhedral angles) Let \(C \in \fancyscript{C}_d\) be a polyhedral cone, and let \(\fancyscript{F}_k(C)\) be the family of \(k\)-dimensional faces of \(C\). Then

$$\begin{aligned} v_k(C) = \textstyle \sum \limits _{F \in \fancyscript{F}_k(C)} \angle (F) \angle (N_F). \end{aligned}$$

Fact 7.2 is a direct consequence of the Definition 2.1 of the intrinsic volumes of a polyhedral cone, the orthogonal decomposition (7.4) of the inverse image of a face, and the geometric interpretation (7.6) of the solid angles. This result can be traced at least as far back as  [28]; see also [37, Eqn. (6.47)]. A complete proof appears in [26, Prop. A.8].
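As a worked instance of Fact 7.2 (ours), take the orthant \(C = \mathbb {R}_+^d\): each \(k\)-dimensional face is a copy of \(\mathbb {R}_+^k\) with internal angle \(2^{-k}\), its normal face has external angle \(2^{-(d-k)}\), and there are \(\binom{d}{k}\) such faces, so \(v_k(C) = \binom{d}{k} 2^{-d}\).

```python
import math

d = 7
# Faces of the orthant C = R_+^d: each k-dimensional face is a copy of R_+^k
# inside a coordinate subspace (internal angle 2^{-k}); its normal face is a
# copy of -R_+^{d-k} (external angle 2^{-(d-k)}); there are C(d, k) of them.
v = [math.comb(d, k) * 2.0 ** (-k) * 2.0 ** (-(d - k)) for k in range(d + 1)]
total = sum(v)   # the conic intrinsic volumes sum to one
```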

Remark 7.3

(Alternative notation) In the literature, the internal angle of a face \(F\) of a cone \(C\) is often denoted by \(\beta (\varvec{0}, F)\), and the external angle is often denoted by \(\gamma (F,C)\).

7.3 The Hausdorff Topology on Convex Cones

In this section, we develop a metric topology on the class \(\fancyscript{C}_d\) of closed convex cones. This topology leads to notions of approximation and convergence, and it provides a way to extend results for polyhedral cones to general closed convex cones. See [6, Sec. 3.2] for a more comprehensive treatment.

To construct an appropriate metric, we begin by defining the angular distance between two nonzero vectors:

$$\begin{aligned} {{\mathrm{dist}}}_s(\varvec{x}, \varvec{y}) := \arccos \Big ( \frac{\langle \varvec{x},\ \varvec{y} \rangle }{\Vert \varvec{x} \Vert \, \Vert \varvec{y} \Vert } \Big ) \quad \text {for}\quad \varvec{x}, \varvec{y} \in \mathbb {R}^d \setminus \{\varvec{0} \}. \end{aligned}$$

We instate the conventions that \({{\mathrm{dist}}}_s(\varvec{0}, \varvec{0}) = 0\) and \({{\mathrm{dist}}}_s(\varvec{x}, \varvec{0}) = {{\mathrm{dist}}}_s(\varvec{0}, \varvec{x}) = \pi /2\) for \(\varvec{x} \ne \varvec{0}\). This definition extends to closed convex cones \(C, C' \in \fancyscript{C}_d\) via the rule

$$\begin{aligned} {{\mathrm{dist}}}_s(C, C') := \mathrm{inf}_{\begin{array}{c} \varvec{x} \in C \\ \varvec{y}\in C' \end{array}} {{\mathrm{dist}}}_s(\varvec{x}, \varvec{y}) \quad \text {when} \quad C, C' \ne \{ \varvec{0} \}. \end{aligned}$$

The trivial cone \(\{\varvec{0}\}\) demands special attention. We set \({{\mathrm{dist}}}_s(\{\varvec{0}\}, \{\varvec{0}\}) = 0\), while \({{\mathrm{dist}}}_s(\{\varvec{0}\}, C) = {{\mathrm{dist}}}_s(C, \{\varvec{0}\}) = \pi /2\) when \(C \ne \{\varvec{0}\}\).

The angular expansion \({{\mathrm{\fancyscript{T}_{s}}}}(C, \alpha )\) of a cone \(C \in \fancyscript{C}_d\) by an angle \(0 \le \alpha \le 2\pi \) is the union of all rays that lie within an angle \(\alpha \) of the cone. Equivalently,

$$\begin{aligned} {{\mathrm{\fancyscript{T}_{s}}}}(C, \alpha ) := \big \{ \varvec{x} \in \mathbb {R}^d : {{\mathrm{dist}}}_s(\varvec{x}, \varvec{y}) \le \alpha \text { for some}\, \varvec{y} \in C \big \}. \end{aligned}$$

Note that the expansion \({{\mathrm{\fancyscript{T}_{s}}}}(C,\alpha )\) of a convex cone need not be convex for any \(\alpha > 0\). For instance, the angular expansion of a proper subspace is never convex.
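A small computation (ours) makes the nonconvexity concrete in \(\mathbb {R}^2\): the expansion of the line \(L = \mathbb {R} \varvec{e}_1\) contains points in each of its two wedges whose midpoint falls outside the expansion.

```python
import math

def dist_s(x, y):
    # Angular distance between nonzero vectors in R^2.
    c = (x[0] * y[0] + x[1] * y[1]) / (math.hypot(*x) * math.hypot(*y))
    return math.acos(max(-1.0, min(1.0, c)))

def in_expansion_of_line(x, alpha):
    # Membership in T_s(L, alpha) for the line L = R e_1: the angular distance
    # from x to the line is the smaller of the angles to its two rays.
    return min(dist_s(x, (1.0, 0.0)), dist_s(x, (-1.0, 0.0))) <= alpha

alpha = 0.2
a = (1.0, math.tan(0.1))      # angle 0.1 from the ray R_+ e_1, so inside
b = (-1.0, math.tan(0.1))     # angle 0.1 from the ray R_- e_1, so inside
m = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)   # midpoint (0, tan 0.1)
```

The midpoint lies on the vertical axis, at angular distance \(\pi /2\) from the line, so it escapes the expansion.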

Define the conic Hausdorff metric \({{\mathrm{{{\mathrm{dist}}}_{\fancyscript{H}}}}}(C_1,C_2)\) between two cones \(C_1, C_2 \in \fancyscript{C}_d\) by

$$\begin{aligned} {{\mathrm{{{\mathrm{dist}}}_{\fancyscript{H}}}}}(C_1,C_2) := \mathrm{inf}\big \{\alpha \ge 0 : {{\mathrm{\fancyscript{T}_{s}}}}(C_1, \alpha ) \supset C_2 \text { and }\, {{\mathrm{\fancyscript{T}_{s}}}}(C_2, \alpha ) \supset C_1 \big \}\\ \quad \text {for}\quad C_1,C_2 \in \fancyscript{C}_d. \end{aligned}$$

We equip \(\fancyscript{C}_d\) with the conic Hausdorff metric and the associated metric topology to form a compact metric space. It is not hard to check [6, Prop. 3.2.4] that polarity is a local isometry on \(\fancyscript{C}_d\):

$$\begin{aligned} \text {For}\quad \alpha < \pi /2, \quad {{\mathrm{{{\mathrm{dist}}}_{\fancyscript{H}}}}}(C_1,C_2) = \alpha \quad \text {implies}\quad {{\mathrm{{{\mathrm{dist}}}_{\fancyscript{H}}}}}(C_1^\circ , C_2^\circ ) = \alpha . \end{aligned}$$
(7.7)

When we write expressions like \(C_i \rightarrow C\) for closed convex cones, we are always referring to convergence in the conic Hausdorff metric. The property (7.7) ensures that \(C_i \rightarrow C\) if and only if \(C_i^\circ \rightarrow C^\circ \).

A basic principle in the analysis of metric spaces is to identify a dense subset that consists of points with additional regularity. We can make arguments that exploit this regularity and apply a limiting procedure to extend the claim to the rest of the space. To that end, let us demonstrate that the polyhedral cones form a dense subset of \(\fancyscript{C}_d\). The approach mirrors [35, Thm. 1.8.13].

Fact 7.4

(Polyhedral cones are dense) Let \(C \in \fancyscript{C}_d\) be a closed convex cone. For each \(\varepsilon > 0\), there is a polyhedral cone \(C_{\varepsilon } \in \fancyscript{C}_d\) that satisfies \({{\mathrm{{{\mathrm{dist}}}_{\fancyscript{H}}}}}(C, C_{\varepsilon }) < \varepsilon \).

Proof

(sketch) We may assume \(C \ne \{\varvec{0}\}\). Let \(\fancyscript{X}\) be a finite \(\varepsilon \)-cover of the set \(C \cap \mathsf {S}^{d-1}\) with respect to the angular distance. That is,

$$\begin{aligned} \fancyscript{X} = \big \{ \varvec{x}_i : i = 1, \dots , N_{\varepsilon } \big \} \subset C \cap \mathsf {S}^{d-1} \end{aligned}$$

and

$$\begin{aligned} \min \nolimits _i {{\mathrm{dist}}}_s(\varvec{x}, \varvec{x}_i) < \varepsilon \quad \text {for all}\quad \varvec{x} \in C \cap \mathsf {S}^{d-1}. \end{aligned}$$

Consider the convex cone \(C_\varepsilon \) generated by \(\fancyscript{X}\):

$$\begin{aligned} C_{\varepsilon } := \mathrm{cone }(\fancyscript{X}) := \big \{ \textstyle \sum \nolimits _{i=1}^{N_{\varepsilon }} \tau _i \varvec{x}_i : \tau _i \ge 0 \big \}. \end{aligned}$$

The cone \(C_{\varepsilon }\) is polyhedral, and it satisfies \({{\mathrm{{{\mathrm{dist}}}_{\fancyscript{H}}}}}(C, C_{\varepsilon }) < \varepsilon \). \(\square \)

In order to perform limiting arguments, it helps to work with continuous functions. The next result ensures that projection onto a cone is continuous with respect to the conic Hausdorff metric.

Fact 7.5

(Continuity of the projection) Consider a sequence \(( C_i )_{i \in \mathbb {N}}\) of closed convex cones in \(\fancyscript{C}_d\) where \(C_i \rightarrow C\) in the conic Hausdorff metric. For each \(\varvec{x} \in \mathbb {R}^d\), the projection \(\varvec{\Pi }_{C_i}(\varvec{x}) \rightarrow \varvec{\Pi }_C(\varvec{x})\) as \(i \rightarrow \infty \).

The proof is a straightforward exercise in elementary analysis, so we refer the reader to [26, Prop. 3.8] for details. This result has a Euclidean analog [35, Lem. 1.8.9].

8 Proof of the Master Steiner Formula

This section contains the proof of Theorem 3.1. In Sect. 8.1, we establish a restricted version of the master Steiner formula for polyhedral cones. In Sect. 8.2, we apply this basic result to prove that the intrinsic volumes of a closed convex cone are well defined, and we verify that the intrinsic volumes are continuous with respect to the conic Hausdorff metric. Afterward, we use an approximation argument to extend the master Steiner formula to closed convex cones in Sect. 8.3, and we remove the restrictions on the function \(f\) in Sect. 8.4.

8.1 Polyhedral Cones and Bounded Continuous Functions

We begin with a specialized version of the master Steiner formula that restricts the cone \(C\) to be polyhedral and the function \(f\) to be bounded and continuous. This argument contains all the essential geometric ideas.

Lemma 8.1

(Master Steiner formula for polyhedral cones) Let \(f : \mathbb {R}_+^2 \rightarrow \mathbb {R}\) be a bounded continuous function, and let \(C \in \fancyscript{C}_d\) be a polyhedral cone. Then the geometric functional \(\varphi _f\) defined in (3.1) admits the expression

$$\begin{aligned} \varphi _f(C) = \textstyle \sum \limits _{k=0}^d \varphi _f(L_k) \cdot v_k(C), \end{aligned}$$
(8.1)

where \(L_k\) is a \(k\)-dimensional subspace of \(\mathbb {R}^d\) and the conic intrinsic volumes \(v_k\) are introduced in Definition 2.1.

Proof

Define the random variables \(\varvec{u} = \varvec{\Pi }_C(\varvec{g})\) and \(\varvec{w} = \varvec{\Pi }_{C^\circ }(\varvec{g})\). The tiling (7.5) induced by a polyhedral cone allows us to decompose the functional \(\varphi _f\) in terms of the faces of the cone \(C\).

$$\begin{aligned} \varphi _f(C) = \mathrm{\mathbb {E} }\big [ f\big ( \Vert \varvec{u}\Vert ^2, \Vert \varvec{w}\Vert ^2 \big ) \big ] = \textstyle \sum \limits _{k=0}^d \ \textstyle \sum \limits _{F \in \fancyscript{F}_k(C)} \mathrm{\mathbb {E} }\big [ f\big (\Vert \varvec{u}\Vert ^2, \Vert \varvec{w}\Vert ^2 \big ) \cdot 1\!\!1_{\mathrm{relint }(F)}(\varvec{u}) \big ] \end{aligned}$$
(8.2)

where \(\fancyscript{F}_k(C)\) is the set of \(k\)-dimensional faces of \(C\) and \(1\!\!1_{A}\) is the 0–1 indicator function of a Borel set \(A\).

We need to find an alternative expression for the expectation remaining in (8.2). Fix a \(k\)-dimensional face \(F\) of \(C\) with normal face \(N_F\). The orthogonal decomposition (7.4) of the inverse image \(\varvec{\Pi }_C^{-1}\big (\mathrm{relint }(F)\big )\) implies that we can integrate over \(F\) and \(N_F\) independently.

$$\begin{aligned}&\mathrm{\mathbb {E} }\big [ f\big ( \Vert \varvec{u}\Vert ^2, \Vert \varvec{w}\Vert ^2 \big ) \cdot 1\!\!1_{\mathrm{relint }(F)}(\varvec{u}) \big ] \quad \\&\quad = \frac{1}{(2\pi )^{d/2}} \int \limits _{\mathrm{relint }(F)} \mathrm {d}{\varvec{x}} \int \limits _{N_F} \mathrm {d}{\varvec{y}} \cdot f\big ( \Vert \varvec{x}\Vert ^2, \Vert \varvec{y}\Vert ^2 \big ) \cdot \mathrm {e}^{-(\Vert \varvec{x}\Vert ^2 + \Vert \varvec{y}\Vert ^2)/2}. \end{aligned}$$

This identity relies on the Pythagorean relation (7.3). The volume elements \(\mathrm {d}{\varvec{x}}\) and \(\mathrm {d}{\varvec{y}}\) derive from the Lebesgue measures on \(\mathrm{lin }(F)\) and \(\mathrm{lin }(N_F)\). Some care is required for the face \(F = \{\varvec{0}\}\), in which case \(\mathrm {d}{\varvec{x}}\) is the Dirac measure at the origin; a similar issue arises when \(N_F = \{ \varvec{0} \}\).

To continue, we convert each of the integrals to polar coordinates [20, Thm. 2.49]. This step gives

$$\begin{aligned}&\mathrm{\mathbb {E} }\big [ f\big ( \Vert \varvec{u}\Vert ^2, \Vert \varvec{w}\Vert ^2 \big )\nonumber \cdot 1\!\!1_{\mathrm{relint }(F)}(\varvec{u}) \big ]\\&\quad = \int \limits _{\mathrm{relint }(F) \cap \mathsf {S}^{k-1}} \mathrm {d}{\bar{\sigma }_{k-1}} \int \limits _{N_F \cap \mathsf {S}^{d-k-1}} \mathrm {d}{\bar{\sigma }_{d-k-1}} \cdot I_f(k,d) \end{aligned}$$
(8.3)

where \(\bar{\sigma }_{j-1}\) denotes the uniform measure on the sphere \(\mathsf {S}^{j-1}\). The quantity \(I_f(k,d)\) depends only on the function \(f\) and the two indices \(k\) and \(d\):

$$\begin{aligned} I_f(k,d)&:= \frac{\sigma _{k-1}\big (\mathsf {S}^{k-1}\big ) \cdot \sigma _{d-k-1}\big (\mathsf {S}^{d-k-1}\big )}{(2\pi )^{d/2}}\\&\quad \times \int \limits _0^\infty \int \limits _0^\infty f\big (s^2,t^2 \big ) \cdot s^{k-1} t^{d-k-1} \mathrm {e}^{-(s^2+t^2)/2} \, \mathrm {d}{s} \, \mathrm {d}{t} \quad \text {when}\quad 1 \le k \le d - 1 \end{aligned}$$

and

$$\begin{aligned} I_f(0,d)&:= \frac{\sigma _{d-1}(\mathsf {S}^{d-1})}{(2\pi )^{d/2}} \int \limits _0^\infty f\big (0, t^2\big ) \cdot t^{d-1} \mathrm {e}^{-t^2/2} \, \mathrm {d}{t} \end{aligned}$$

and

$$\begin{aligned} I_f(d,d)&:= \frac{\sigma _{d-1}(\mathsf {S}^{d-1})}{(2\pi )^{d/2}} \int \limits _0^\infty f\big (s^2, 0\big ) \cdot s^{d-1} \mathrm {e}^{-s^2/2} \, \mathrm {d}{s}. \end{aligned}$$

Above, \(\sigma _{k-1}(\mathsf {S}^{k-1}):=2\pi ^{k/2}/\Gamma \bigl (\frac{k}{2}\bigr )\) denotes the unnormalized measure of the sphere \(\mathsf {S}^{k-1}\). We do not need these formulas for \(I_f\), but we have included them for reference.

In view of the identity (7.6) for the solid angle of a cone, the expression (8.3) implies that

$$\begin{aligned} \mathrm{\mathbb {E} }\big [ f\big ( \Vert \varvec{u}\Vert ^2, \Vert \varvec{w}\Vert ^2 \big ) \cdot 1\!\!1_{\mathrm{relint }(F)}(\varvec{u}) \big ] = \angle (F) \angle (N_F) \cdot I_f(k,d). \end{aligned}$$
(8.4)

We have employed the fact that the solid angle of a cone coincides with the solid angle of its relative interior. The geometry of the face \(F\) only enters this expression through the presence of the solid angles.

We are almost done now. Combine the decomposition (8.2) and the identity (8.4) to reach

$$\begin{aligned} \varphi _f(C) = \textstyle \sum \limits _{k=0}^d I_f(k,d) \cdot \Big ( \sum \nolimits _{F \in \fancyscript{F}_k(C)} \angle (F) \angle (N_F) \Big ) = \textstyle \sum \limits _{k=0}^d I_f(k,d) \cdot v_k(C). \end{aligned}$$
(8.5)

The second relation follows from Fact 7.2, which expresses the intrinsic volumes in terms of the internal and external angles of the cone \(C\). Finally, we must identify an alternative representation for the coefficients \(I_f(k,d)\). Recall that a \(j\)-dimensional subspace \(L_j\) of \(\mathbb {R}^d\) is a polyhedral cone with \(v_j(L_j) = 1\) and \(v_k(L_j) = 0\) for \(k \ne j\). Applying the formula (8.5) to the subspace \(L_j\), we learn that

$$\begin{aligned} \varphi _f(L_j) = I_f(j,d) \quad \text {for}\quad j = 0, 1, 2, \dots , d. \end{aligned}$$

Substitute these identities into (8.5) to complete the proof of (8.1). \(\square \)
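For a concrete check of (8.1) (ours, not from the paper), take the orthant \(C = \mathbb {R}_+^d\) and the bounded continuous function \(f(a,b) = \mathrm {e}^{-a}\). Then \(\varphi _f(L_k) = \mathrm{\mathbb {E} }\, \mathrm {e}^{-X_k} = 3^{-k/2}\) by the chi-square moment generating function, while \(v_k(C) = \binom{d}{k} 2^{-d}\), and the left-hand side can be estimated by Monte Carlo.

```python
import math, random

random.seed(3)
d, n = 4, 200_000

# Right-hand side of (8.1) for the orthant C = R_+^d with f(a, b) = exp(-a):
# phi_f(L_k) = E exp(-X_k) = 3^{-k/2}, and v_k(C) = C(d, k) / 2^d.
rhs = sum(3.0 ** (-k / 2) * math.comb(d, k) / 2 ** d for k in range(d + 1))

# Left-hand side by Monte Carlo: for the orthant, ||Pi_C(g)||^2 = sum_i max(g_i, 0)^2.
acc = 0.0
for _ in range(n):
    g = [random.gauss(0.0, 1.0) for _ in range(d)]
    acc += math.exp(-sum(max(t, 0.0) ** 2 for t in g))
lhs = acc / n
```

By the binomial theorem, the right-hand side collapses to \(\big ( (1 + 3^{-1/2})/2 \big )^d\), which matches the coordinatewise factorization of the left-hand side.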

8.2 Continuity of Intrinsic Volumes

To carry out our plan, we need to verify that the conic intrinsic volumes of \(C\) are well defined and continuous with respect to the conic Hausdorff metric.

Proposition 8.2

(Intrinsic volumes of convex cones) Consider a closed convex cone \(C \in \fancyscript{C}_d\).

  1. (1)

    Well-definition. There is a sequence \((C_i)_{i \in \mathbb {N}}\) of polyhedral cones in \(\fancyscript{C}_d\) that converges to \(C\) in the conic Hausdorff metric. For each index \(k\), the limit \(\lim _{i \rightarrow \infty } v_k(C_i)\) exists, and it is independent of the sequence of polyhedral cones. Therefore, we may define

    $$\begin{aligned} v_k(C) := \lim _{i \rightarrow \infty } v_k(C_i) \quad \text {for}\quad k = 0, 1, 2, \dots , d. \end{aligned}$$
    (8.6)
  2. (2)

    Continuity. Let \((C_i)_{i\in \mathbb {N}}\) be any sequence of cones in \(\fancyscript{C}_d\) that converges to \(C\) in the conic Hausdorff metric. Then

    $$\begin{aligned} \lim _{i \rightarrow \infty } v_k(C_i) = v_k(C) \quad \text {for}\quad k = 0, 1, 2, \dots , d. \end{aligned}$$

Proposition 8.2 is not new. For instance, it is an immediate consequence of the corresponding fact [37, Thm. 6.5.2(b)] about spherical intrinsic volumes. Here, we develop the result as a consequence of Lemma 8.1 and the continuity of the projection map, Fact 7.5. We believe that this argument provides an attractive alternative to the standard methods. Our approach rests on the following lemma.

Lemma 8.3

Let \(X_k\) denote a chi-square random variable with \(k\) degrees of freedom. For each \(d \in \mathbb {N}\), there is a family \(\big \{f_1, f_2, f_3, \dots , f_d \big \}\) of bounded continuous functions on \(\mathbb {R}_+\) with the property that

$$\begin{aligned} \mathrm{\mathbb {E} }\big [ f_j(X_k) \big ] = \left\{ \begin{array}{ll} 1, &{} j = k, \\ 0, &{} j \ne k. \end{array} \right. \end{aligned}$$

Proof

For each \(k = 1, 2, 3, \dots , d\), consider the function \(\rho _k\) obtained by rescaling the density of the chi-square random variable \(X_k\):

$$\begin{aligned} \rho _k(s) = \frac{1}{2^{k/2} \Gamma (k/2)} \cdot s^{k/2} \mathrm {e}^{-s/2} \quad \text {for}\, s \ge 0. \end{aligned}$$

These functions are bounded and continuous, and they compose a linearly independent family [17, Chap. 5]. Introduce the \(d\)-dimensional linear space \(P := \mathrm{lin }\big \{ \rho _1, \dots , \rho _d \big \}\) equipped with the inner product

$$\begin{aligned} \langle {f},\ {\rho } \rangle := \int \limits _{0}^\infty f(s) \rho (s) \, s^{-1} \, \mathrm {d}{s} \quad \text {for}\,f, \rho \in P. \end{aligned}$$

Standard arguments [25, Lem. 8.6-2] show that \(\big \{ \rho _1, \dots , \rho _d \big \}\) induces a biorthogonal system \(\big \{ f_1, \dots , f_d \big \} \subset P\). By construction, the functions \(f_j\) identify the number of degrees of freedom in a chi-square random variable. Indeed, let \(X_k\) follow the chi-square distribution with \(k\) degrees of freedom. Then

$$\begin{aligned} \mathrm{\mathbb {E} }\big [ f_j(X_k) \big ] = \int \limits _0^\infty f_j(s) \cdot \frac{1}{2^{k/2} \Gamma (k/2)} s^{k/2-1} \mathrm {e}^{-s/2}\, \mathrm {d}{s} = \langle {f_j},\ {\rho _k} \rangle = \left\{ \begin{array}{ll} 1, &{} j = k, \\ 0, &{} j \ne k. \end{array} \right. \end{aligned}$$

This is the advertised result. \(\square \)
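To make the construction concrete, here is a small numerical illustration (ours, under the setup above): we compute coefficients of a biorthogonal family over \(\mathrm{lin }\{ s^{1/2}\mathrm {e}^{-s/2}, \dots , s^{d/2}\mathrm {e}^{-s/2} \}\) by solving the linear system \(\mathrm{\mathbb {E} }[ f_j(X_k) ] = \delta _{jk}\) directly, using the closed form \(\mathrm{\mathbb {E} }\big [ X_k^{i/2} \mathrm {e}^{-X_k/2} \big ] = \Gamma ((i+k)/2) / \big (2^{k/2}\Gamma (k/2)\big )\).

```python
import math

d = 3
# M[k-1][i-1] = E[rho_i(X_k)] for rho_i(s) = s^{i/2} exp(-s/2); in closed form
# this expectation equals Gamma((i+k)/2) / (2^{k/2} Gamma(k/2)).
M = [[math.gamma((i + k) / 2) / (2 ** (k / 2) * math.gamma(k / 2))
      for i in range(1, d + 1)]
     for k in range(1, d + 1)]

def solve(A, b):
    # Gaussian elimination with partial pivoting on a small dense system.
    n = len(A)
    A = [row[:] + [b[r]] for r, row in enumerate(A)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[piv] = A[piv], A[c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            for j in range(c, n + 1):
                A[r][j] -= f * A[c][j]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (A[r][n] - sum(A[r][j] * x[j] for j in range(r + 1, n))) / A[r][r]
    return x

# f_j = sum_i c_i rho_i, where the coefficients solve E[f_j(X_k)] = delta_{jk}.
coeffs = [solve(M, [1.0 if k == j else 0.0 for k in range(d)]) for j in range(d)]

def expect_f(c, k, n=60_000, hi=60.0):
    # Midpoint-rule check of E[f(X_k)] against the chi-square density.
    h = hi / n
    total = 0.0
    const = 1.0 / (2 ** (k / 2) * math.gamma(k / 2))
    for t in range(n):
        s = (t + 0.5) * h
        f = sum(ci * s ** ((i + 1) / 2) for i, ci in enumerate(c)) * math.exp(-s / 2)
        total += f * const * s ** (k / 2 - 1) * math.exp(-s / 2) * h
    return total
```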

We can use the functions from Lemma 8.3 in combination with Lemma 8.1 to isolate the properties of individual intrinsic volumes.

Proof of Proposition 8.2

Let \(C \in \fancyscript{C}_d\) be a closed convex cone. Fact 7.4 implies that there is a sequence \((C_i)_{i \in \mathbb {N}}\) of polyhedral cones in \(\fancyscript{C}_d\) for which \(C_i \rightarrow C\). As a consequence, \(C_i^\circ \rightarrow C^\circ \) as well.

Consider the family \(\big \{ f_1, \dots , f_d \big \}\) of functions promised by Lemma 8.3. For each index \(j \ge 1\), Lemma 8.1 shows that

$$\begin{aligned} \mathrm{\mathbb {E} }\big [ f_j\big ( \Vert \varvec{\Pi }_{C_i}(\varvec{g})\Vert ^2 \big ) \big ] = \textstyle \sum \limits _{k=0}^d \mathrm{\mathbb {E} }\big [ f_j(X_k) \big ] \cdot v_k(C_i) = v_j(C_i) \quad \text {for} \quad i \in \mathbb {N}. \end{aligned}$$

We claim that

$$\begin{aligned} v_j(C_i) = \mathrm{\mathbb {E} }\big [ f_j\big ( \Vert \varvec{\Pi }_{C_i}(\varvec{g})\Vert ^2 \big ) \big ] \rightarrow \mathrm{\mathbb {E} }\big [ f_j\big (\Vert \varvec{\Pi }_C(\varvec{g})\Vert ^2 \big ) \big ] \quad \text {as}\quad i \rightarrow \infty . \end{aligned}$$
(8.7)

The limit in (8.7) does not depend on the choice of sequence, so the definition (8.6) of the intrinsic volumes of \(C\) is valid for each index \(k \ge 1\). For the intrinsic volume \(v_0\), we simply note that

$$\begin{aligned} v_0(C_i) = v_d( C_i^\circ ) \rightarrow v_d( C^\circ ) \quad \text {as}\quad i \rightarrow \infty \end{aligned}$$

as a consequence of the definition of \(v_d\). This limit is unambiguous, so \(v_0\) is also well-defined.

To justify the calculation in (8.7), we apply the dominated convergence theorem to pass the limit through the expectation. Fact 7.5 shows that the metric projection is continuous; the Euclidean norm and the functions \(f_j\) are also continuous. Thus, we have the pointwise limit

$$\begin{aligned} f_j\big (\Vert \varvec{\Pi }_{C_i}(\varvec{x})\Vert ^2 \big ) \rightarrow f_j\big (\Vert \varvec{\Pi }_C(\varvec{x})\Vert ^2 \big ) \quad \text {as}\quad i \rightarrow \infty \quad \text {for}\, \varvec{x} \in \mathbb {R}^d. \end{aligned}$$

The integrands are controlled by an integrable function because \(f_j\) is bounded:

$$\begin{aligned} \big \vert f_j\big ( \Vert \varvec{\Pi }_{C_i}(\varvec{g})\Vert ^2 \big ) \big \vert \le \sup _{s \ge 0 } \vert f_j(s) \vert \quad \text {for}\quad i \in \mathbb {N}. \end{aligned}$$

Dominated convergence applies, which ensures that (8.7) is correct.

From here, it is easy to verify the continuity of intrinsic volumes. Suppose that \((C_i)_{ i \in \mathbb {N} }\) is a sequence of closed convex cones in \(\fancyscript{C}_d\) for which \(C_i \rightarrow C\). For each index \(k \ge 0\), we can find a sequence \((C_i')_{ i \in \mathbb {N} }\) of polyhedral cones in \(\fancyscript{C}_d\) for which

$$\begin{aligned} {{\mathrm{{{\mathrm{dist}}}_{\fancyscript{H}}}}}(C_i', C_i) < i^{-1} \quad \text {and}\quad \vert v_k(C_i') - v_k(C_i) \vert < i^{-1} \quad \text {for}\quad i \in \mathbb {N}. \end{aligned}$$

This point follows from the density of polyhedral cones in \(\fancyscript{C}_d\) stated in Fact 7.4 and the definition (8.6) of the intrinsic volumes. Since \(C_i \rightarrow C\), this construction ensures that \(C_i' \rightarrow C\). By definition of the intrinsic volumes, \(v_k(C_i') \rightarrow v_k(C)\). But then we must conclude that \(v_k(C_i) \rightarrow v_k(C)\). \(\square \)

8.3 Extension to General Convex Cones

Next, let us extend the master Steiner formula, Lemma 8.1, from polyhedral cones to all closed convex cones. Our strategy is to approximate a convex cone by a sequence of polyhedral cones, apply Lemma 8.1 to each member of the sequence, and use continuity to pass to the limit.

Lemma 8.4

(Extension to closed convex cones) Let \(f : \mathbb {R}_+^2 \rightarrow \mathbb {R}\) be a bounded continuous function, and let \(C \in \fancyscript{C}_d\) be a closed convex cone. Then the master Steiner formula (8.1) still holds.

Proof

Fact 7.4 ensures that polyhedral cones form a dense subset of \(\fancyscript{C}_d\), so there is a sequence \(( C_i )_{ i \in \mathbb {N} }\) of polyhedral cones in \(\fancyscript{C}_d\) for which \(C_i \rightarrow C\) as \(i \rightarrow \infty \). We also have the limit \(C_i^\circ \rightarrow C^\circ \) because polarity is continuous on \(\fancyscript{C}_d\). Lemma 8.1 implies that

$$\begin{aligned} \mathrm{\mathbb {E} }\big [ f\big ( \Vert \varvec{\Pi }_{C_i}(\varvec{g})\Vert ^2, \ \Vert \varvec{\Pi }_{C_i^\circ }(\varvec{g})\Vert ^2 \big ) \big ] = \textstyle \sum \limits _{k=0}^d \varphi _f(L_k) \cdot v_k(C_i) \quad \text {for}\quad i \in \mathbb {N}. \end{aligned}$$

Taking the limit as \(i \rightarrow \infty \), we reach

$$\begin{aligned} \mathrm{\mathbb {E} }\big [ f\big ( \Vert \varvec{\Pi }_{C}(\varvec{g})\Vert ^2, \ \Vert \varvec{\Pi }_{{C}^\circ }(\varvec{g})\Vert ^2 \big ) \big ] = \textstyle \sum \limits _{k=0}^d \varphi _f(L_k) \cdot v_k(C). \end{aligned}$$

To justify the limit on the left-hand side, we invoke the dominated convergence theorem. This step is legitimate because \(f\) is bounded and continuous, the squared Euclidean norm is continuous, and the metric projection is continuous. The limit on the right-hand side follows from the continuity of the intrinsic volumes expressed in Proposition 8.2. \(\square \)
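As an illustrative check (outside the proof), the identity (8.1) can be verified numerically for a simple cone. For the nonnegative orthant in \(\mathbb {R}^d\), take \(f(s,t) = \mathrm{e}^{-s}\): the weights are \(\varphi _f(L_k) = \mathrm{\mathbb {E} }\big [ \mathrm{e}^{-\chi _k^2} \big ] = 3^{-k/2}\) by the chi-square moment generating function, and \(v_k = \binom{d}{k} 2^{-d}\). A hedged Python sketch; the cone, the function \(f\), and the sample size are choices made purely for illustration:

```python
import numpy as np
from math import comb

rng = np.random.default_rng(1)
d, n = 5, 400_000

# Cone: the nonnegative orthant in R^d, whose projection is max(g, 0).
g = rng.standard_normal((n, d))
proj_sq = (np.clip(g, 0.0, None) ** 2).sum(axis=1)  # ||Pi_C(g)||^2

# Left-hand side of (8.1) with f(s, t) = exp(-s), by Monte Carlo.
lhs = np.exp(-proj_sq).mean()

# Right-hand side: v_k = binom(d, k) / 2^d for the orthant, and
# phi_f(L_k) = E[exp(-chi2_k)] = 3**(-k/2), the chi-square MGF at -1.
rhs = sum(comb(d, k) / 2**d * 3 ** (-k / 2) for k in range(d + 1))

assert abs(lhs - rhs) < 0.005
```

Here both sides also agree with the closed form \(\big ( (1 + 3^{-1/2})/2 \big )^d\), since the orthant factors coordinatewise.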

8.4 Extension to Integrable Functions

We are now prepared to complete the proof of the master Steiner formula that we announced in Sect. 3.1. All that remains is to expand the class of functions that we can consider. The following lemma contains the outstanding claims of Theorem 3.1.

Lemma 8.5

(Extension to integrable functions) Let \(f : \mathbb {R}_+^2 \rightarrow \mathbb {R}\) be a Borel measurable function, and let \(C \in \fancyscript{C}_d\) be a closed convex cone. Then the master Steiner formula (8.1) still holds, provided that each expectation is finite.

Proof

Let us reinterpret Lemma 8.4 as a statement about measures. The Banach space \(C_0(\mathbb {R}_+^2)\) consists of bounded and continuous real-valued functions on \(\mathbb {R}_+^2\) that tend to zero at infinity. Consider a function \(h \in C_0(\mathbb {R}_+^2)\), and observe that the left-hand side of (8.1) can be written as

$$\begin{aligned} \varphi _h(C) = \mathrm{\mathbb {E} }\big [ h\big ( \Vert \varvec{\Pi }_C(\varvec{g})\Vert ^2, \ \Vert \varvec{\Pi }_{C^\circ }(\varvec{g})\Vert ^2 \big ) \big ] = \int h(s,t) \, \mathrm {d}{\mu }(s,t) \end{aligned}$$

where the measure \(\mu \) is defined for each Borel set \(A \subset \mathbb {R}_+^2\) by the rule

$$\begin{aligned} \mu (A) := \mathbb {P}\big \{ \big (\Vert \varvec{\Pi }_C(\varvec{g})\Vert ^2, \ \Vert \varvec{\Pi }_{C^\circ }(\varvec{g})\Vert ^2 \big ) \in A \big \}. \end{aligned}$$

Similarly, the right-hand side of (8.1) can be written as

$$\begin{aligned} \textstyle \sum \limits _{k=0}^d \varphi _h(L_k) \cdot v_k(C) = \textstyle \sum \limits _{k=0}^d \Big ( \displaystyle \int h(s,t) \, \mathrm {d}{\mu }_k(s,t) \Big ) \cdot v_k(C) \end{aligned}$$

where the measure \(\mu _k\) is defined via

$$\begin{aligned} \mu _k(A) := \mathbb {P}\big \{ \big (\Vert \varvec{\Pi }_{L_k}(\varvec{g})\Vert ^2, \ \Vert \varvec{\Pi }_{{L_k}^\circ }(\varvec{g})\Vert ^2 \big ) \in A \big \}. \end{aligned}$$

As a consequence, Lemma 8.4, applied to the function \(h\), demonstrates that

$$\begin{aligned} \int h \, \mathrm {d}{\mu } = \int h \, \mathrm {d}{\Big (\textstyle \sum \limits _{k=0}^d v_k(C) \mu _k\Big )} \quad \text {for}\quad h \in C_0(\mathbb {R}_+^2). \end{aligned}$$
(8.8)

We claim that (8.8) guarantees the equality of measures

$$\begin{aligned} \mu = \textstyle \sum \limits _{k=0}^d v_k(C) \cdot \mu _k. \end{aligned}$$
(8.9)

Because the measures are equal, it holds for each nonnegative Borel measurable function \(f_+ : \mathbb {R}_+^2 \rightarrow \mathbb {R}_+\) that

$$\begin{aligned} \int f_+ \, \mathrm {d}{\mu } = \textstyle \sum \limits _{k=0}^d \Big ( \displaystyle \int f_+ \, \mathrm {d}{\mu _k} \Big ) \cdot v_k(C). \end{aligned}$$

We can replace \(f_+\) with any Borel measurable function \(f: \mathbb {R}_+^2 \rightarrow \mathbb {R}\) by splitting \(f\) into its positive and negative parts, provided that all the integrals remain finite. Reinterpreted as a statement about expectations, this observation yields the conclusion.
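Spelled out, the passage from nonnegative to general integrable functions is the usual decomposition argument:

```latex
% Decompose f into nonnegative Borel measurable parts and apply the
% identity for nonnegative functions to each part separately.
\begin{aligned}
  f &= f^+ - f^-, \qquad f^+ := \max\{f, 0\}, \quad f^- := \max\{-f, 0\}, \\
  \int f \, \mathrm{d}\mu
    &= \int f^+ \, \mathrm{d}\mu - \int f^- \, \mathrm{d}\mu
     = \sum_{k=0}^d \Big( \int f^+ \, \mathrm{d}\mu_k
         - \int f^- \, \mathrm{d}\mu_k \Big) \cdot v_k(C)
     = \sum_{k=0}^d \Big( \int f \, \mathrm{d}\mu_k \Big) \cdot v_k(C).
\end{aligned}
```

The finiteness of each integral ensures that the differences are well defined, so no \(\infty - \infty \) ambiguity arises.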

Finally, we justify the claim (8.9). The dual of \(C_0(\mathbb {R}_+^2)\) can be identified with the Banach space \(\mathbb {M}(\mathbb {R}_+^2)\) of regular Borel measures, acting on functions by integration [30, Thm. 6.19]. Therefore, \(C_0(\mathbb {R}_+^2)\) separates points in \(\mathbb {M}(\mathbb {R}_+^2)\) [31, Sec. 3.14]. Each of the measures \(\mu \) and \(\mu _k\) is the push-forward of the standard Gaussian measure \(\gamma _d\) under a continuous map, so each one is a regular Borel probability measure [12, pp. 174, 185]. Consequently, the family of identities (8.8) guarantees the equality of measures in (8.9). \(\square \)