Intrinsic Volumes of Polyhedral Cones: A Combinatorial Perspective

The theory of intrinsic volumes of convex cones has recently found striking applications in areas such as convex optimization and compressive sensing. This article provides a self-contained account of the combinatorial theory of intrinsic volumes for polyhedral cones. Direct derivations of the General Steiner formula, the conic analogues of the Brianchon-Gram-Euler and the Gauss-Bonnet relations, and the Principal Kinematic Formula are given. In addition, a connection between the characteristic polynomial of a hyperplane arrangement and the intrinsic volumes of the regions of the arrangement, due to Klivans and Swartz, is generalized and some applications are presented.


Introduction
The theory of conic intrinsic volumes (or solid/internal/external/Grassmann angles) has a rich and varied history, with origins dating back at least to the work of Sommerville [32]. This theory has recently found renewed interest, owing to newly found connections with measure concentration and resulting applications in compressive sensing, optimization, and related fields [3,5,11,14,24]. Despite this recent surge in interest, the theory remains somewhat inaccessible to a general public in applied areas; this is, in part, due to the fact that many of the results are found using varying terminology (cf. Sect. 2.3), or are available as special cases of a more sophisticated theory of spherical integral geometry [13,30,34] that treats the subject in a level of generality (involving curvature/support measures or relying on differential geometry) that is usually more than what is needed from the point of view of the above-mentioned applications. In addition, some results, such as the relation to the theory of hyperplane arrangements, have so far not been connected to the existing body of research.
One aim of this article is therefore to provide the practitioner with a self-contained account of the basic theory of intrinsic volumes of polyhedral cones that requires little more background than some elementary polyhedral geometry and properties of the Gaussian distribution. While some of the material is classic (see, for example, [25]), we blend into the presentation a generalization of a formula of Klivans and Swartz [22], with a streamlined proof and some applications.
The focus of this text is on simplicity rather than generality, on finding the most natural relations between different results that may be derived in different orders from each other, and on highlighting parallels between different results. Despite this, the text does contain some generalizations of known results, provided these can be derived with little additional effort. In the interest of brevity, this article does not discuss the probabilistic properties of intrinsic volumes, such as their moments and concentration properties, nor does it go into related geometric problems such as random projections of polytopes [1,35]. Section 2 is devoted to some preliminaries from the theory of polyhedral cones including a discussion of conic intrinsic volumes, a section devoted to clarifying the connections between different notation and terminology used in the literature, and a section introducing some concepts and techniques from the theory of partially ordered sets. In Sect. 3 we present a modern interpretation of the conic Steiner formula that underlies the recent developments in [5,14,24], and in Sect. 4, which is based on the influential work of McMullen [25], we derive and discuss the Gauss-Bonnet relation for intrinsic volumes. Section 5 contains a crisp proof of the Principal Kinematic Formula for polyhedral cones, and Sect. 6 is devoted to a generalization of a result by Klivans and Swartz [22] and some applications thereof.

Notation and Conventions
Throughout, we use boldface letters for vectors and linear transformations. To lighten the notation we denote the set consisting solely of the zero vector by 0. We use calligraphic letters for families of sets. We use the notation ⊆ for set inclusion and ⊂ for strict inclusion.

Preliminaries
General references for basic facts about convex cones that are stated here are, for example, [9,28,38]. More precise references will be given when necessary. A convex cone C ⊆ R^d is a convex set such that λC = C for all λ > 0. A convex cone is polyhedral if it is a finite intersection of closed half-spaces. In particular, linear subspaces are polyhedral, and polyhedral cones are closed. In what follows, unless otherwise stated, all cones are assumed to be polyhedral and non-empty. A supporting hyperplane of a convex cone C is a linear hyperplane H such that C lies entirely in one of the closed half-spaces induced by H (unless explicitly stated otherwise, all hyperplanes will be linear, i.e., linear subspaces of codimension one). A proper face of C is a set of the form F = C ∩ H, where H is a supporting hyperplane. A set F is called a face of C if it is either a proper face or C itself. The linear span lin(C) of a cone C is the smallest linear subspace containing C and is given by lin(C) = C + (−C), where A + B = {x + y : x ∈ A, y ∈ B} denotes the Minkowski sum of two sets A and B. The dimension of a face F is dim F := dim lin(F), and the relative interior relint(F) is the interior of F in lin(F). A cone is pointed if the origin 0 is a zero-dimensional face, or equivalently, if it does not contain a linear subspace of dimension greater than zero. If C is not pointed, then it contains a nontrivial linear subspace of maximal dimension k > 0, given by L = C ∩ (−C), and L is contained in every supporting hyperplane (and thus, in every face) of C. Denoting by C/L the orthogonal projection of C onto the orthogonal complement of L, the projection C/L is pointed, and C = L + C/L is an orthogonal decomposition of C; we call this the canonical decomposition of C.
We denote by F(C) the set of faces of C and by F_k(C) the set of k-dimensional faces. The faces of a polyhedral cone satisfy the Euler relation ∑_{F ∈ F(C)} (−1)^{dim F} = 0 if C is not a linear subspace, while the sum equals (−1)^{dim C} if C is a linear subspace (2.1). This relation is usually stated and proved in terms of polytopes [38, Chap. 8], but intersecting a pointed cone with a suitable affine hyperplane yields a polytope with a face structure equivalent to that of the cone; the general case can be reduced to the pointed case through the canonical decomposition. A short proof of the Euler relation along with remarks on the history of this result can be found in [23].

Duality
The polar cone of a cone C ⊆ R^d is defined as C• := {y ∈ R^d : ⟨x, y⟩ ≤ 0 for all x ∈ C}. If C = L is a linear subspace, then C• = L^⊥ is just the orthogonal complement, and the polar cone of the polar cone is again the original cone, as will be shown below. To any face F ∈ F_k(C) we can associate the normal face N_F C ∈ F_{d−k}(C•) defined as N_F C := C• ∩ lin(F)^⊥. To ease notation we will sometimes write F̂ = N_F C when the cone is clear. The resulting map F_k(C) → F_{d−k}(C•) is a bijection, which satisfies N_{F̂}(C•) = F. This relation is easily deduced from the mentioned involution property of the polarity map, cf. Proposition 2.3 below. The polar operation is order reversing, C ⊆ D implies C• ⊇ D•, as follows directly from the definition; more properties will be presented below.
Central to convex geometry and optimization are a variety of theorems of the alternative, the most prominent of which is known as Farkas' Lemma (among the countless references, see for example [38, Chap. 2]). A second fundamental tool is the separating hyperplane theorem (Theorem 2.2): if the relative interiors of two convex cones are disjoint, then the cones can be separated by a linear hyperplane H. This theorem is usually stated for closed convex sets and affine hyperplanes H (see, e.g., [28, Thm. 11.3]). Theorem 2.2 then follows from this more general version by noting that the relative interior of any non-empty, closed convex cone contains points arbitrarily close to 0, which implies 0 ∈ H.
The separating hyperplane theorem can be used to derive some interesting results involving the polar cone. The first such result states that polarity is an involution on the set of closed convex cones. We write C•• := (C•)• for the polar of the polar. Proof Let x ∈ C. Then, by definition of the polar, ⟨x, y⟩ ≤ 0 for all y ∈ C•. This, in turn, implies that x ∈ C••. Now let x ∈ C•• and assume that x ∉ C. In particular, x ≠ 0, and by closedness of C there exists ε > 0 such that the ε-cone around x, B_ε := {y : ⟨x, y⟩ ≥ (1 − ε)‖x‖‖y‖}, satisfies relint(C) ∩ relint(B_ε) = ∅. By Theorem 2.2, there exists a hyperplane separating C and B_ε, and thus a non-zero h ∈ R^d such that ⟨x, h⟩ > 0 and ⟨h, y⟩ ≤ 0 for all y ∈ C.
The first condition, together with x ∈ C••, implies h ∉ C•, while the second one implies h ∈ C•, a contradiction. It follows that x ∈ C.
The following variation of Farkas' Lemma for convex cones, which is slightly more general than the usual one, is taken from [4].
The situation in which D = L is a hyperplane is best visualised as in Fig. 1.
In view of some of the later developments, it is important to understand the behaviour of duality under intersections. The following is a conic variant of [28,Cor. 23.8.1] (see also [38,Chap. 7] for a similar theme).

Proposition 2.5 The polar of an intersection is the Minkowski sum of the polars, (C ∩ D)• = C• + D•.
Moreover, every face of C ∩ D is of the form F ∩ G for some F ∈ F(C), G ∈ F(D), and the polar face satisfies N_{F∩G}(C ∩ D) ⊇ N_F C + N_G D. (2.3) If additionally relint(F) ∩ relint(G) ≠ ∅, then (2.3) holds with equality.
Proof For the first claim, note that (C• + D•)• = C•• ∩ D•• = C ∩ D, where in the second equality we used Proposition 2.3; the first equality is easily verified by noting that ⟨z, x + 0⟩ = ⟨z, x⟩ and ⟨z, 0 + y⟩ = ⟨z, y⟩, so that ⟨z, x + y⟩ ≤ 0 for all x ∈ C•, y ∈ D• if and only if z ∈ C•• ∩ D••. The first claim then follows by taking polars and another application of Proposition 2.3.

For the second claim, note that a face F̂ ∈ F(C ∩ D) can be written as F̂ = {x ∈ C ∩ D : ⟨x, h⟩ = 0} for some h ∈ (C ∩ D)•. By the first claim, we can write the normal vector in the form h = h_C + h_D with h_C ∈ C• and h_D ∈ D•. Since ⟨x, h_C⟩ ≤ 0 and ⟨x, h_D⟩ ≤ 0 for every x ∈ C ∩ D, the condition ⟨x, h⟩ = 0 forces ⟨x, h_C⟩ = ⟨x, h_D⟩ = 0, so that F̂ = F ∩ G with F = {x ∈ C : ⟨x, h_C⟩ = 0} ∈ F(C) and G = {x ∈ D : ⟨x, h_D⟩ = 0} ∈ F(D). Finally, for the claim about the polar face, note that, by what we have just shown and using double polarity, N_F C + N_G D ⊆ (C ∩ D)• ∩ lin(F ∩ G)^⊥ = N_{F∩G}(C ∩ D). The claim (2.3) follows by invoking polarity again.
Two faces F ∈ F(C) and G ∈ F(D) are said to intersect transversely, written F ⋔ G, if their relative interiors have a non-empty intersection, relint(F) ∩ relint(G) ≠ ∅.

Corollary 2.6 Let C, D be cones and F ∈ F(C), G ∈ F(D) with F ⋔ G. Then N_{F∩G}(C ∩ D) = N_F C + N_G D.
For a polyhedral cone C ⊆ R^d, denote by Π_C the Euclidean projection, Π_C(x) := argmin{‖x − y‖ : y ∈ C}. (2.5) The Moreau decomposition of a point x ∈ R^d is the sum representation x = Π_C(x) + Π_{C•}(x), where Π_C(x) and Π_{C•}(x) are orthogonal. A direct consequence is the disjoint decomposition R^d = ⋃_{F ∈ F(C)} (relint(F) + N_F C), (2.6) see also [25, Lem. 3].
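The Moreau decomposition is easy to verify numerically in the simplest case. The following sketch (an illustration added here, not taken from the text) uses the non-negative orthant C = R_+^d, whose polar is C• = −R_+^d, so that the two projections are the coordinatewise positive and negative parts.

```python
import numpy as np

# Sketch: Moreau decomposition x = Pi_C(x) + Pi_{C.}(x) for the orthant
# C = R_+^d, where Pi_C keeps positive coordinates and Pi_{C.} keeps the
# negative ones (the polar cone is -R_+^d).

rng = np.random.default_rng(0)
d = 5
x = rng.standard_normal(d)

proj_C = np.maximum(x, 0.0)      # projection onto the orthant
proj_polar = np.minimum(x, 0.0)  # projection onto the polar cone -R_+^d

# the two parts recover x and are orthogonal
assert np.allclose(proj_C + proj_polar, x)
assert abs(float(proj_C @ proj_polar)) < 1e-12
```

For general polyhedral cones the projection is a quadratic program rather than a coordinatewise operation, but the decomposition identity is the same.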

Intrinsic Volumes
For C ⊆ R^d a polyhedral cone and for two faces F, G ∈ F(C) with F ⊆ G, define v_F(G) := P{Π_G(g) ∈ relint(F)}, where g denotes a standard Gaussian vector in R^d. On the other hand, since the relative interiors of faces of C are disjoint, we have ∑_{F ∈ F(C)} v_F(C) = 1. For the most part we will consider the case G = C. Define the k-th intrinsic volume of C as v_k(C) := ∑_{F ∈ F_k(C)} v_F(C). For a fixed cone, the intrinsic volumes form a probability distribution on {0, 1, . . . , d}. Note that if F ∈ F_k(C) then, by the decomposition (2.6), v_F(C) = P{g ∈ relint(F) + N_F C}. For later reference, we note that in combination with Corollary 2.6, we get for cones C, D and transverse faces F ∈ F(C), G ∈ F(D) that v_{F∩G}(C ∩ D) = P{g ∈ relint(F ∩ G) + N_F C + N_G D}. As a running example, let C = R_+^d be the non-negative orthant, i.e., the cone consisting of points with non-negative coordinates. A vector x projects orthogonally onto a k-dimensional face of C if and only if exactly k of its coordinates are positive. By symmetry considerations and the invariance of the Gaussian distribution under permutations of the coordinates, it follows that v_k(R_+^d) = (d choose k) 2^{−d}.
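The probabilistic definition lends itself directly to Monte Carlo estimation. The following sketch (added for illustration, not from the text) samples Gaussian vectors, counts positive coordinates, and compares the empirical face-dimension frequencies with v_k(R_+^d) = (d choose k) 2^{−d}.

```python
import numpy as np
from math import comb

# Sketch: Monte Carlo estimate of the intrinsic volumes of the orthant
# R_+^d.  A Gaussian vector projects onto a k-dimensional face exactly
# when k of its coordinates are positive.

rng = np.random.default_rng(1)
d, n = 4, 200_000
g = rng.standard_normal((n, d))

# empirical distribution of the number of positive coordinates
k_counts = np.bincount((g > 0).sum(axis=1), minlength=d + 1)
v_hat = k_counts / n

v_exact = np.array([comb(d, k) / 2**d for k in range(d + 1)])
assert float(np.max(np.abs(v_hat - v_exact))) < 0.01
```

The same sampling scheme works for any polyhedral cone once a projection routine is available; only the face-identification step changes.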

The following important properties of the intrinsic volumes, which are easily verified in the setting of polyhedral cones, will be used frequently: orthogonal invariance, v_k(QC) = v_k(C) for Q ∈ O(d); the product rule, v_k(C × D) = ∑_{i+j=k} v_i(C) v_j(D); and the polarity rule, v_k(C•) = v_{d−k}(C). Note that the product rule implies v_{k+j}(C × L) = v_j(C) if L is a subspace of dimension k. We will sometimes be working with the intrinsic volume generating polynomial, P_C(t) := ∑_{k=0}^d v_k(C) t^k. The product rule then states that the generating polynomial is multiplicative with respect to direct products, P_{C×D}(t) = P_C(t) P_D(t). A direct consequence of the orthogonal invariance and the polarity rule is that the intrinsic volume sequence is symmetric for self-dual cones (i.e., cones such that C = −C•).
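Since multiplication of generating polynomials is coefficient convolution, the product rule can be checked mechanically. The sketch below (an added illustration) builds the orthant R_+^d as the d-fold product of the ray R_+^1, whose intrinsic volumes are (1/2, 1/2), and recovers the binomial formula.

```python
import numpy as np
from math import comb

# Sketch: the product rule as convolution.  R_+^d is the d-fold direct
# product of the ray R_+^1, so its intrinsic volume vector is the d-fold
# convolution of (1/2, 1/2), i.e. binom(d, k) / 2^d.

v_ray = np.array([0.5, 0.5])   # intrinsic volumes of R_+^1
d = 6
v = np.array([1.0])            # intrinsic volumes of the trivial cone {0}
for _ in range(d):
    v = np.convolve(v, v_ray)  # product rule = convolution

v_exact = np.array([comb(d, k) / 2**d for k in range(d + 1)])
assert np.allclose(v, v_exact)
```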
An important summary parameter is the expected value of the distribution associated to the intrinsic volumes, the statistical dimension, δ(C) := ∑_{k=0}^d k v_k(C), which coincides with the expected squared norm of the projection of a Gaussian vector onto the cone, δ(C) = E[‖Π_C(g)‖²]. The statistical dimension reduces to the usual dimension for linear subspaces. The coincidence of the two expected values is a special case of the general Steiner formula (Theorem 3.1), and is crucial in applications of the statistical dimension. More on the statistical dimension and its properties and applications can be found in [5,14,24].
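The coincidence of the two expectations can also be tested numerically. The sketch below (added here for illustration) estimates E[‖Π_C(g)‖²] for the orthant, where ∑_k k v_k(R_+^d) = d/2 by symmetry of the binomial distribution.

```python
import numpy as np

# Sketch: statistical dimension of the orthant R_+^d.  Both
# sum_k k * v_k(C) and E||Pi_C(g)||^2 equal d/2 for C = R_+^d.

rng = np.random.default_rng(2)
d, n = 6, 200_000
g = rng.standard_normal((n, d))

# squared norm of the projection onto the orthant, averaged over samples
delta_hat = float((np.maximum(g, 0.0) ** 2).sum(axis=1).mean())
assert abs(delta_hat - d / 2) < 0.05
```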

Angles
In the classical works on polyhedral cones, intrinsic volumes were viewed as polytope angles, see [12] for some perspective. Polyhedral cones arise as tangent or normal cones of polyhedra K ⊆ R^d. Given such a polyhedron K and a face F ⊆ K, with x_0 ∈ relint(F), the tangent cone T_F K is defined as T_F K := {y ∈ R^d : x_0 + εy ∈ K for some ε > 0}; this definition does not depend on the choice of x_0 ∈ relint(F). The normal cone to K at F is the polar of the tangent cone. To clarify the relations to the terminology used in this paper and to facilitate a translation of the results of some of the referenced papers, we provide the following list.

Solid Angle
When speaking about the solid angle of a cone C ⊆ R^d, sometimes denoted α(C), one usually assumes that C has non-empty interior, and one defines α(C) as the Gaussian volume of C (or equivalently, the relative spherical volume of C ∩ S^{d−1}, where S^{d−1} is the (d − 1)-dimensional unit sphere); we extend this definition to also cover lower-dimensional cones, and define for dim C = k the solid angle α(C) as the Gaussian volume of C taken relative to its k-dimensional linear span lin(C).

Internal/External Angle
The internal and external angle of a polyhedral set K ⊆ R^d at a face F are defined as the solid angles of the tangent and the normal cone of K at F, respectively. Furthermore, conic polarity swaps internal and external angles: the internal angle of a cone C at a face F equals the external angle of C• at the polar face F̂ := N_F C, and vice versa. This shows that any formula involving the internal and external angles of a cone C has a polar version in terms of the internal and external angles of C• where the roles of internal and external have been exchanged. (Some of the formulas in [25] are stated in this polar version.)

Remark 2.9
The Brianchon-Gram-Euler relation [27, Thm. (1)] of a convex polytope K translates in the above notation into an alternating sum of the internal angles at the faces of K. Replacing the bounded polytope by an unbounded cone renders this relation invalid. However, there exists a closely related conic version, which is called Sommerville's Theorem [27, Thm. (37)]. This in turn can be used to derive a Gauss-Bonnet relation, cf. Sect. 4.

Grassmann Angle
The Grassmann angles of a cone C, which have been introduced and analyzed by Grünbaum [15], are defined through the probability that a uniformly random linear subspace of a specific (co)dimension intersects the cone nontrivially. The kinematic/Crofton formulae express this probability in terms of the intrinsic volumes, cf. Sect. 5. More precisely, the kth Grassmann angle is given by γ_k(C) := P{C ∩ L_k ≠ {0}}, where L_k ⊆ R^d denotes a uniformly random linear subspace of codimension k. Notice that when considering the intrinsic volumes and the Grassmann angles as vectors, (v_0(C), . . . , v_d(C)) and (γ_0(C), . . . , γ_{d−1}(C)), these are related through a nonsingular linear transformation. Hence, any formula in the intrinsic volumes of a cone has an equivalent form in terms of Grassmann angles and vice versa; in this paper we prefer the intrinsic volume versions.

Remark 2.10
The preference of intrinsic volumes over Grassmann angles has an odd effect on the logic behind Corollary 4.3 below, which is attributed to Grünbaum. This result is originally stated and proved in [15,Thm. 2.8] in terms of the Grassmann angles. So in order to rewrite Corollary 4.3 in its original form, one needs to apply Crofton's formula (2.10) whose proof, given in Sect. 5, uses Gauss-Bonnet (4.4), which in turn is a direct consequence of Corollary 4.3. The resulting proof of the original result [15, Thm. 2.8] (in terms of Grassmann angles) is thus much less direct than the original one given by Grünbaum.

Some Poset Techniques
In this section we recall some notions from the theory of partially ordered sets (posets) that we will need in Sect. 6. We only recall those properties that we will directly use, see [33, Chap. 3] for more details and context. A lattice is a poset with the property that any two elements have both a least upper bound and a greatest lower bound. We will only consider finite lattices; in particular, for these lattices the greatest and the least elements, denoted 1̂ and 0̂, both exist. More precisely, we will consider the following two (types of) finite lattices.
Example 2.11 (Face lattice) Let C ⊆ R^d be a polyhedral cone. Then the set of faces F(C) with partial order given by inclusion is a finite lattice. The extremal elements are given by 1̂ = C and 0̂ = C ∩ (−C).

Example 2.12 (Intersection lattice of a hyperplane arrangement) Let A = {H_1, . . . , H_n} be a finite set of linear hyperplanes in R^d.
The set of all intersections, L(A) := {∩_{i∈I} H_i : I ⊆ {1, . . . , n}}, endowed with the partial order given by reverse inclusion, is called the intersection lattice of the hyperplane arrangement A. This lattice has a disjoint decomposition into L_0(A), . . . , L_d(A), where L_j(A) denotes the set of j-dimensional elements. The minimal and maximal elements are given by 0̂ = R^d and 1̂ = ∩_{i=1}^n H_i. One can define the (real) incidence algebra of a (locally) finite poset (P, ⪯) as the set of all functions ξ : P × P → R, which besides having the usual vector space structure also possesses the multiplication (ξ ∗ ν)(x, y) := ∑_{x ⪯ z ⪯ y} ξ(x, z) ν(z, y), defined for two functions ξ, ν : P × P → R. The identity element in this algebra is the Kronecker delta, δ(x, y) = 1 if x = y and δ(x, y) = 0 else. Another important element is the characteristic function of the partial order, ζ(x, y) = 1 if x ⪯ y and ζ(x, y) = 0 else. This function is invertible, and its inverse μ, called the Möbius function on P, can be recursively defined by μ(x, y) = 0 if x ⋠ y, μ(x, x) = 1, and μ(x, y) = −∑_{x ⪯ z ≺ y} μ(x, z) for x ≺ y. The incidence algebra acts on the set of functions f : P → R on the right by (f ξ)(y) := ∑_{x ⪯ y} f(x) ξ(x, y). Möbius inversion is the simple fact that for two functions f, g : P → R one has f ζ = g if and only if f = g μ. Explicitly, this equivalence can be written out as follows: g(y) = ∑_{x ⪯ y} f(x) for all y ∈ P if and only if f(y) = ∑_{x ⪯ y} g(x) μ(x, y) for all y ∈ P. The Möbius function of the face lattice from Example 2.11 is given by μ(F, G) = (−1)^{dim G − dim F}. For a whole range of techniques for computing Möbius functions we refer to [6,33].
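The recursive definition of the Möbius function translates directly into code. The sketch below (an added illustration) computes μ on the face lattice of R_+^3, whose faces correspond to subsets of the coordinates (a Boolean lattice ordered by inclusion, with dim F = |F|), and checks the formula μ(F, G) = (−1)^{dim G − dim F}.

```python
from itertools import combinations

# Sketch: recursive Möbius function on the face lattice of R_+^3,
# modelled as the Boolean lattice of subsets of {0, 1, 2}.

def subsets(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1)
            for c in combinations(s, r)]

faces = subsets(range(3))

def mobius(x, y, cache={}):
    if not x <= y:        # mu(x, y) = 0 unless x is below y
        return 0
    if x == y:            # mu(x, x) = 1
        return 1
    if (x, y) not in cache:
        # mu(x, y) = - sum over x <= z < y of mu(x, z)
        cache[(x, y)] = -sum(mobius(x, z) for z in faces if x <= z < y)
    return cache[(x, y)]

# check mu(F, G) = (-1)^(dim G - dim F) on the whole lattice
assert all(mobius(F, G) == (-1) ** (len(G) - len(F))
           for F in faces for G in faces if F <= G)
```

The recursion works verbatim on any finite poset; only the list `faces` and the comparison operator need to be replaced.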

Some Elementary Facts About Hyperplane Arrangements
The last concept we need to introduce is that of a characteristic polynomial, which can be defined for any finite graded lattice; we only introduce the characteristic polynomial for hyperplane arrangements, as we will only use it in this context. We use the notation from Example 2.12. The characteristic polynomial of a hyperplane arrangement A is defined as χ_A(t) := ∑_{x ∈ L(A)} μ(0̂, x) t^{dim x}. More generally, we introduce the jth-level characteristic polynomial of A as χ_{A,j}(t) := ∑_{x ∈ L_j(A)} ∑_{y ⪰ x} μ(x, y) t^{dim y}, so that χ_A = χ_{A,d}, and we also introduce the bivariate polynomial X_A(s, t) := ∑_{j=0}^d χ_{A,j}(t) s^j. (2.14) The jth-level characteristic polynomial can be written in terms of characteristic polynomials by considering restrictions of A: If L ⊆ R^d is a linear subspace, then the arrangement A^L = {H ∩ L : H ∈ A, L ⊈ H} is a hyperplane arrangement relative to L. It is easily seen that we obtain χ_{A,j}(t) = ∑_{L ∈ L_j(A)} χ_{A^L}(t). (2.15) The Möbius function of the intersection lattice alternates in sign [33, Prop. 3.10.1], and so do the coefficients of the (jth-level) characteristic polynomial. Note that χ_{A,j}(t) (is either zero or) has degree j and its leading coefficient is given by |L_j(A)|. For future reference we also note that the cases j = 0, 1 can be read off directly from the definition. The complement of the hyperplanes of an arrangement A, R^d \ ∪_{H∈A} H, decomposes into open convex cones. We denote by R(A) the set of polyhedral cones given by the closures of these regions, and we denote r(A) := |R(A)|. More generally, we define r_j(A) := ∑_{L ∈ L_j(A)} r(A^L), the number of j-dimensional regions of the restrictions of A. The following theorem by Zaslavsky [37] lies at the heart of the result by Klivans and Swartz [22] that we will present in Sect. 6.

Theorem 2.13 (Zaslavsky) Let A be an arrangement of linear hyperplanes in R^d. Then r(A) = (−1)^d χ_A(−1), and more generally r_j(A) = (−1)^j χ_{A,j}(−1) for 0 ≤ j ≤ d.
Note that since the coefficients of the characteristic polynomial alternate in sign, the number of j-dimensional regions, r j (A), is given by the sum of the absolute values of the coefficients of the jth-level characteristic polynomial.
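Zaslavsky's theorem is easy to verify for the coordinate arrangement A = {x_1 = 0, . . . , x_d = 0}, whose intersection lattice is Boolean, whose characteristic polynomial is (t − 1)^d, and whose regions are the 2^d open orthants. The following sketch (an added illustration, not an example from the text) checks r(A) = (−1)^d χ_A(−1).

```python
import numpy as np

# Sketch: Zaslavsky's theorem for the coordinate arrangement in R^d.
# chi_A(t) = (t - 1)^d, and the number of regions is 2^d (the orthants).

d = 4
chi = np.array([1.0])                       # coefficients, highest degree first
for _ in range(d):
    chi = np.convolve(chi, [1.0, -1.0])     # multiply by (t - 1)

r = (-1) ** d * np.polyval(chi, -1)         # Zaslavsky: r(A) = (-1)^d chi_A(-1)
assert r == 2 ** d
```

The alternating signs of `chi` also illustrate the remark above: summing the absolute values of the coefficients gives the same count.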

The Conic Steiner Formula
A classic result in integral geometry is the Steiner formula: the d-dimensional volume of the ε-neighbourhood of a convex body K ⊂ R^d (compact, convex) can be expressed as a polynomial in ε of degree at most d, with the intrinsic volumes V_j(K) as coefficients: vol_d({x : dist(x, K) ≤ ε}) = ∑_{j=0}^d ε^{d−j} κ_{d−j} V_j(K), (3.1) where κ_m denotes the volume of the m-dimensional unit ball. In order to state an analogous result for convex cones or spherically convex sets (intersections of convex cones with the unit sphere), we have to agree on a notion of distance. A natural choice here is the capped angle ∠(C, x) := arccos(‖Π_C(x)‖/‖x‖). Note that with this definition, ∠(C, x) ≤ π/2, and is equal to π/2 if and only if x ∈ C•. Note also that for x with ‖x‖ = 1 and α ≤ π/2, we have ∠(C, x) ≤ α if and only if ‖Π_C(x)‖² ≥ cos²α. Using this notion of distance, one obtains a formula similar to the Euclidean Steiner formula (3.1), which is usually called the spherical/conic Steiner formula, see for example [34, Chap. 6.5] and the references given there, or the discussion below.
It turns out that, when working with cones rather than spherically convex sets, it is convenient to work with the squared length of the projection onto C rather than with the angle. Moreover, it turns out to be quite useful to also consider the squared length of the projection onto the polar cone C•. The following general Steiner formula in the conic setting is due to McCoy and Tropp [24, Thm. 3.1]; its formulation in probabilistic terms, as suggested by Goldstein, Nourdin and Peccati [14], is remarkably elegant. We sketch their proof (in the polyhedral case) below. Theorem 3.1 Let C ⊆ R^d be a convex polyhedral cone, let g ∈ R^d be a standard Gaussian vector, and let the discrete random variable V on {0, 1, . . . , d} be given by P{V = k} = v_k(C). Then (‖Π_C(g)‖², ‖Π_{C•}(g)‖²) =_d (X_V, Y_{d−V}), (3.2) where =_d denotes equality in distribution, and X_k, Y_k are independent χ²-distributed random variables with k degrees of freedom, independent of V.
A geometric interpretation of this form of the conic Steiner formula is readily obtained by considering moments of the two sides in (3.2). Indeed, for a suitable indicator function f, the expectation of f(‖Π_C(g)‖², ‖Π_{C•}(g)‖²) equals the Gaussian volume of the angular neighbourhood T_α(C) around C of radius α ≤ π/2, while the corresponding expectation on the right-hand side involves the Gaussian volumes of the sets T_α(L_k), where T_α(L_k) denotes the angular neighbourhood of radius α around a k-dimensional linear subspace L_k. These Gaussian volumes of angular neighbourhoods of linear subspaces replace the monomials in the Euclidean Steiner formula (3.1). By taking a suitable moment of (3.2) we obtain the usual conic Steiner formula.
Proof sketch of Theorem 3.1 In order to show the claimed equality in distribution (3.2) it suffices to show that the moments coincide. Let f : R²_+ → R be a Borel measurable function. In view of the decomposition (2.6) we can express the expectation of f(‖Π_C(g)‖², ‖Π_{C•}(g)‖²) as a sum over the faces of C. Using spherical coordinates and the orthogonal invariance of Gaussian vectors, one can deduce that the above expectation equals ∑_{k=0}^d v_k(C) E[f(‖Π_{L_k}(g)‖², ‖Π_{L_k^⊥}(g)‖²)], where L_k denotes an arbitrary k-dimensional linear subspace. Since ‖Π_{L_k}(g)‖² and ‖Π_{L_k^⊥}(g)‖² are independent χ²-distributed random variables with k and d − k degrees of freedom, summing up the terms gives rise to the claimed coincidence of moments, which shows equality of the distributions.
A useful consequence of the general Steiner formula is that the moment generating functions of the discrete random variable V from Theorem 3.1 and the continuous random variable ‖Π_C(g)‖² coincide up to reparametrization: E[exp(s‖Π_C(g)‖²)] = E[(1 − 2s)^{−V/2}] for s < 1/2, which directly follows from (3.2) by the well-known formula for the moment generating function of χ²-distributed random variables, E[e^{sX_k}] = (1 − 2s)^{−k/2}. This result is from [24], where it is used to derive a concentration result for the random variable V; it also underlies the argumentation in [14], where a central limit theorem for V is derived.
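The moment generating function identity can be tested numerically for the orthant, where the distribution of V is known exactly. The sketch below (an added illustration) compares a Monte Carlo estimate of E[exp(s‖Π_C(g)‖²)] with ∑_k v_k (1 − 2s)^{−k/2}.

```python
import numpy as np
from math import comb

# Sketch: MGF identity E[exp(s ||Pi_C(g)||^2)] = E[(1 - 2s)^(-V/2)]
# for C = R_+^d, where P{V = k} = binom(d, k) / 2^d and s < 1/2.

rng = np.random.default_rng(3)
d, n, s = 3, 400_000, 0.1
g = rng.standard_normal((n, d))

# left-hand side: Monte Carlo over Gaussian vectors
lhs = float(np.exp(s * (np.maximum(g, 0.0) ** 2).sum(axis=1)).mean())

# right-hand side: exact sum over the intrinsic volumes
rhs = sum(comb(d, k) / 2**d * (1 - 2 * s) ** (-k / 2)
          for k in range(d + 1))

assert abs(lhs - rhs) < 0.01
```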

Gauss-Bonnet and the Face Lattice
The Gauss-Bonnet Theorem is a celebrated result in differential geometry connecting curvature with the Euler characteristic. In the setting of polyhedral cones, this theorem asserts that the alternating sum of the intrinsic volumes equals the alternating sum of the f-vector. The main goal of this section is to show the connections between the Gauss-Bonnet relation and a result by Sommerville [32], which can be seen as a conic version of the Brianchon-Euler-Gram relation for polytopes [16, 14.1] (4.1). Proof Both sides in (4.1) are zero if C contains a nonzero linear subspace. So we assume in the following that C is pointed, C ∩ (−C) = 0. Let g be a random Gaussian vector and H = g^⊥ its orthogonal complement, which is almost surely a hyperplane. By Farkas' Lemma 2.4, the event C ∩ H = 0 can be characterized in terms of g and the polar cone C•. Note that with probability 1, the intersection C ∩ H is either 0 or has dimension dim C − 1. On the other hand, for 0 < i < d and using (4.2),

where in the first step we used the fact that almost surely every i-dimensional face of C ∩ H is of the form F ∩ H, with F ∈ F_{i+1}(C), and for every F ∈ F_{i+1}(C) the intersection F ∩ H is either an i-dimensional face of C ∩ H or 0. Alternating the sum and using linearity of expectation, where in the first step we used Sommerville's Theorem, and in the second step we used that v_G(F) = 0 if G is not a face of F, and dim F/G = dim F − dim G. This shows the claim.
Proof Follows by summing in (4.4) over all k-dimensional faces and noting that for Proof Summing the terms in (4.5) over k and using d The rest follows from the Euler relation (2.1).
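As a minimal numerical sanity check (added here, not from the text), the Gauss-Bonnet relation ∑_k (−1)^k v_k(C) = 0 for a cone that is not a linear subspace can be verified on the orthant, where the alternating binomial sum vanishes.

```python
from math import comb

# Sketch: Gauss-Bonnet for C = R_+^d, which is not a linear subspace.
# With v_k = binom(d, k) / 2^d, the alternating sum is (1 - 1)^d / 2^d = 0.

d = 5
alt = sum((-1) ** k * comb(d, k) / 2**d for k in range(d + 1))
assert alt == 0
```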
If C is not a linear subspace, then the Gauss-Bonnet relation can be interpreted as saying that the random variable V on {0, 1, . . . , d} given by P{V = k} = v_k(C) decomposes into two random variables V_0, V_1 supported on {0, 2, 4, . . . , 2⌊d/2⌋} and {1, 3, 5, . . . , 2⌊(d − 1)/2⌋ + 1}, respectively, each chosen with probability 1/2. In fact, the same argument that gives the general Steiner formula (3.2) also shows a conditional version of (3.2), where g_0 and g_1 denote Gaussian vectors conditioned on their projection onto C falling in an even- or odd-dimensional face, respectively, and X_k, Y_k are independent χ²-distributed random variables with k degrees of freedom. We can paraphrase (4.5) in terms of the moments of these random variables.
Corollary 4.5 Let f : R²_+ → R be a Borel measurable function, and for C ⊆ R^d a polyhedral cone, which is not a linear subspace, let ϕ_f(C), ϕ⁰_f(C), ϕ¹_f(C) denote the corresponding moments. Proof The first equation is obtained by invoking the general Steiner formula and applying (4.5). The second equation is obtained by using Möbius inversion (2.12) and noting that the Möbius function of the face lattice is μ(F, G) = (−1)^{dim G − dim F}. We list a few more corollaries, the usefulness of which may yet need to be established. The proofs are variations of the proof of Corollary 4.4.

Corollary 4.6 For the statistical dimension δ(C) we obtain
In particular, different formulas result according to whether dim C is odd or even. Corollary 4.7 Let V_C be the random variable on {0, 1, . . . , d} defined by P{V_C = k} = v_k(C). The alternating sum of the exponential generating function satisfies

Remark 4.8 The Gauss-Bonnet relation can also be written out as
Rewriting this formula in terms of internal/external angles, and extending this to include also the case G = C, one obtains where ≤ denotes the order relation in the face lattice, i.e., the inclusion relation.
In [25] McMullen observed that this relation means that the internal and external angle functions (one of them multiplied by the Möbius function) are mutually inverse in the incidence algebra of the face lattice, cf. Sect. 2.4. More precisely, the Gauss-Bonnet relation only shows that one of them is a left-inverse of the other (and, of course, the other is a right-inverse of the first), but since being a left-inverse, a right-inverse, or a two-sided inverse are equivalent properties in the incidence algebra [33, Prop. 3.6.3], one obtains the following additional relation "for free"; this is [25, Thm. 3].
The relation (4.2) used in the proof of Sommerville's Theorem 4.1 is a special case of the principal kinematic formula, to be derived in more detail next.

Elementary Kinematics for Polyhedral Cones
The principal kinematic formulae of integral geometry relate the intrinsic volumes, or certain measures that localize these quantities, of the intersection of two or more randomly moved geometric objects to those of the individual objects. This section presents a direct derivation of the principal kinematic formula in the setting of two polyhedral cones. The results of this section are special cases of Glasauer's Kinematic Formula for spherically convex sets [13,34], though in the spirit of the rest of this article, our proof is combinatorial, based on the facial decomposition of the cone, and uses probabilistic terminology.
In what follows, when we say that Q is drawn uniformly at random from the orthogonal group O(d), we mean that it is drawn from the Haar probability measure ν on O(d). This is the unique regular Borel measure on O(d) that is left and right invariant, ν(QA) = ν(AQ) = ν(A) for Q ∈ O(d) and Borel measurable A ⊆ O(d). We write E_{Q∈O(d)} for the integral with respect to the Haar probability measure, and we will occasionally omit the subscript Q ∈ O(d), or just write Q in the subscript, when there is no ambiguity. More information on invariant measures in the context of integral geometry can be found in [34, Chap. 13].
Implicit in the statement of the theorem is the integrability of v_k(C ∩ QD) as a function of Q; this will be established in the proof. Recall that the intrinsic volumes of C × D are obtained by convolving the intrinsic volumes of C and D, cf. Sect. 2.2. The second equation in (5.1) follows from the first and from ∑_k v_k(C) = 1, and statement (5.2) follows from (5.1) by applying the product rule (2.9). Note also that using polarity (Proposition 2.5) on both sides of (5.1) we obtain the polar kinematic formulas. In particular, this applies if D = L is a linear subspace of dimension d − m. For the derivation of this corollary, and for later use, we need the following genericity lemma. For the second claim, assume that C is not a linear subspace. The lineality space of C, C ∩ (−C), is contained in every supporting hyperplane of C, and therefore does not intersect relint(C). If C ⋔ QD, then there exists a nonzero x ∈ relint(C) ∩ QD. In particular, x does not lie in the lineality space of C. Since the lineality space of the intersection C ∩ QD is the intersection of the lineality spaces of C and of QD, it follows that x lies in the complement of the lineality space of C ∩ QD within C ∩ QD, which shows that C ∩ QD is not a linear subspace.

Proof of Corollary 5.2 Denoting χ(C) := ∑_{k=0}^d (−1)^k v_k(C), the Gauss-Bonnet relation (4.6) says that χ(C) = 0 if C is not a linear subspace, and χ(0) = 1. By Lemma 5.3 we see that χ(C ∩ QD) is almost surely the indicator function for the event that C and QD only intersect at the origin. We can therefore conclude E_Q[χ(C ∩ QD)] = P{C ∩ QD = 0}. The second claim follows by replacing D with L.
Our proof of Theorem 5.1 is based on a classic "double counting" argument; to illustrate this, we first consider an analogous situation with finite sets. We note that Proposition 5.4 generalizes without difficulty to the setting of compact groups acting on topological spaces, as in [34, Thm. 13.1.4].

Proposition 5.4 Let Ω be a finite set and G a finite group acting transitively on Ω. Let M, N ⊆ Ω be subsets. Then, for uniformly random γ ∈ G, E_γ[|M ∩ γN|] = |M| |N| / |Ω|.
Proof Taking ξ ∈ Ω uniformly at random, we obtain the cardinality of M as |M| = |Ω| · P{ξ ∈ M}. Introduce the indicator function 1_M(ξ) for the event ξ ∈ M and note that, by the transitivity of the action, E_γ[1_{γN}(ξ)] = |N|/|Ω| for every fixed ξ ∈ Ω. It follows that the random variables 1_M(ξ) and 1_{γN}(ξ) are uncorrelated, so that E_γ[|M ∩ γN|] = |Ω| · E_{γ,ξ}[1_M(ξ) 1_{γN}(ξ)] = |Ω| · (|M|/|Ω|)(|N|/|Ω|) = |M| |N| / |Ω|. Lemma 5.5 uses the same idea to establish the kinematic formula for the Gaussian measure of cones of different dimensions, and Theorem 5.1 then follows by applying this to the pairwise intersection of faces.
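The finite double-counting identity of Proposition 5.4 can be checked in a few lines. The sketch below (an added illustration) takes Ω = Z_n with the cyclic group acting by shifts, which is transitive, and averages |M ∩ γN| over all shifts γ.

```python
# Sketch: Proposition 5.4 with Omega = Z_n and G the group of cyclic
# shifts.  Averaging |M ∩ (gamma + N)| over all shifts gives |M||N|/n.

n = 12
M = {0, 1, 2, 5}
N = {0, 3, 7}

avg = sum(len(M & {(x + gamma) % n for x in N})
          for gamma in range(n)) / n

assert avg == len(M) * len(N) / n
```

Here the average is exact, with no randomness needed, since we can enumerate the whole group.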
The proof of Lemma 5.5 relies crucially on the left and right invariance of the Haar measure, which implies that for any measurable f : O(d) → R_+ and fixed Q_0, Q_1 ∈ O(d), E_Q[f(Q_0 Q Q_1)] = E_Q[f(Q)]. (5.7) For a linear subspace L ⊆ R^d, we can (and will) naturally identify the group O(L) of orthogonal transformations of L with the subgroup of O(d) consisting of those Q ∈ O(d) for which Qx = x for all x ∈ L^⊥. The group O(L) carries its own Haar probability measure. We also use the following characterization of the Gaussian volume of a convex cone C: E_Q[1_C(Qx)] = P{g ∈ C}, (5.8) where x ≠ 0 is arbitrary. This characterization follows from the fact that for Q ∈ O(d) uniformly at random, the point Qx is uniformly distributed on the sphere of radius ‖x‖.
Proof of Lemma 5.5 For illustration purposes we first prove (5.5) in the full-dimensional case, using the measurability of (x, Q) ↦ 1_C(x) 1_D(Q^T x), and the fact that the integral over x is then measurable in Q, see for example [29, Thm. 8.5]. Fubini's Theorem and (5.8) then yield the claim in this case. We proceed with the general case of (5.5). By Lemma 5.3, almost surely dim(C ∩ QD) = k. We thus need to show the corresponding identity (5.9). To see that the map Q ↦ v_k(C ∩ QD) is measurable, note that, using the fact that the orthogonal projection of a Gaussian vector to a subspace is again Gaussian, it is enough to verify that the projection Π_{L_C ∩ QL_D}(x) is continuous in x and Q outside a set of measure zero; the measurability of v_k(C ∩ QD) then follows from the same considerations as in the case k = d. If C ⋔ QD, then L_C ∩ QL_D is the kernel of a matrix of rank d − k whose rows depend continuously on Q. The projection Π_{L_C ∩ QL_D}(x) depends continuously on x and on this matrix, and therefore also on Q.
We now proceed to show identity (5.9). Let Q_0 ∈ O(L_D). By the orthogonal invariance (5.7), the expectation is unchanged under the substitution Q ↦ Q Q_0. Since this holds for any Q_0 ∈ O(L_D), we can choose Q_0 ∈ O(L_D) uniformly at random, where in (1) we used Q_0 L_D = L_D, in (2) we used Fubini's Theorem, and in (3) we used (5.8). For the remaining part, we replace Q with Q_1 Q for Q_1 ∈ O(L_C) uniformly at random and apply (5.7) again, where the last equality follows again from (5.8). We now derive (5.6). By Lemma 5.3, for generic Q, L_C ∩ Q L_D = {0} and dim(L_C + Q L_D) = d − k. Using the fact that an orthogonal projection of a Gaussian vector is Gaussian, we obtain an expression for the integrand. The integrability of this expression in Q follows, as above, from the fact that the projection map onto L_C + Q L_D is continuous for almost all Q and g. For generic Q, any g ∈ R^d has a unique decomposition g = g_C + g_D + g_⊥, with g_C ∈ L_C, g_D ∈ Q L_D, g_⊥ ∈ (L_C + Q L_D)^⊥. Note that g_C and g_D are not orthogonal projections, and that the decomposition (even g_C) depends on Q.
From the uniqueness of this decomposition we get the equivalence of the corresponding events, and therefore the resulting identity. Now let Q_0 ∈ O(L_C) be fixed. By orthogonal invariance of the Haar measure and of the Gaussian distribution we can replace Q with Q_0 Q and g with g′ := Q_0 g. We next determine the decomposition of g′; note how the components transform under this substitution. By uniqueness of the decomposition, the transformed components are identified, and we therefore obtain the stated chain of equalities, where we used Fubini in the second and (5.8) in the last equality. Note that Q^T g_D ∈ L_D. Repeating the argument above by replacing Q with Q Q_1^T for Q_1 ∈ O(L_D), we get the remaining identity, where again we used (5.8). This finishes the proof.

Proof of Theorem 5.1 We first note that it suffices to prove the first equality in (5.1), as we can deduce the second from the fact that the intrinsic volumes sum up to one. The equations in (5.2) follow directly from (5.1) as a special case. The genericity Lemma 5.3 implies that the k-dimensional faces of C ∩ QD are generically of the form F ∩ QG with F a face of C and G a face of D; the kinematic formula then follows by summing over these pairs. It remains to show (5.11). By (2.8) and Lemma 5.3, the corresponding identity holds almost surely. The integrability of these terms has been shown in the proof of Lemma 5.5, which gives the integrability in (5.1). In order to prove (5.11) we proceed as in the proof of Lemma 5.5. Let Q_0 ∈ O(L_F) be chosen uniformly at random. Note that the normal cone N_F C lies in the orthogonal complement of L_F, so that Q_0 leaves the normal cone invariant. Using the invariance of the Haar measure as in the proof of Lemma 5.5, we obtain a chain of equalities, where in (1) we used the orthogonal invariance of the intrinsic volumes and in (2) we applied Lemma 5.5 to the inner expectation (note that the dimensions match). Comparing the first line with the last line we see that the term v_j(F) can be extracted by replacing F with L_F. Repeating the same trick by replacing Q with Q Q_1 for Q_1 ∈ O(L_G) uniformly at random completes the argument, where in the second equation we used that v_k(L_F ∩ Q L_G) = 1, and the last equality follows from (5.6).

Remark 5.6
In the literature there are roughly two different strategies used to derive kinematic formulas: (1) Use a characterisation theorem for the intrinsic volumes (or a suitable localisation thereof) showing that certain types of functions of a cone must be linear combinations of the intrinsic volumes. This approach is common in integral geometry [21,34]; see [2,13] for the spherical/conic setting. (2) Assume that the boundary of the cone intersected with a sphere is a smooth hypersurface; then argue via the curvature of the intersection of the boundaries. For a general version of this approach, with references to related work, see [17].
The second approach is usually also based on a double-counting argument that involves the co-area formula. Our proof can be interpreted as a piecewise-linear version of this approach.

The Klivans-Swartz Relation for Hyperplane Arrangements
While the most natural lattice structure associated to a polyhedral cone is arguably its face lattice, there is also the intersection lattice generated by the hyperplanes that are spanned by the facets of the cone (assuming that the cone has non-empty interior; otherwise one can argue within the linear span of the cone). In this section we present a deep and useful relation between this intersection lattice and the intrinsic volumes of the regions of the hyperplane arrangement, which is due to Klivans and Swartz [22], and which we will generalize to also include the faces of the regions. We finish this section with some applications of this result.
Let A be a hyperplane arrangement in R^d. Recall from (2.17) the notation R_j(A) and r_j(A) for the set of j-dimensional regions of the arrangement and for their cardinality, respectively. Also recall Zaslavsky's Theorem 2.13, that is, the identity r_j(A) = (−1)^j χ_{A,j}(−1), where χ_{A,j} denotes the jth-level characteristic polynomial of the arrangement. Expressing this polynomial in terms of its coefficients and using the identity Σ_k v_k(C) = 1, we can rewrite Zaslavsky's result as an equality of sums. Klivans and Swartz [22] have proved that in the case j = d this equality of sums is in fact an equality of the summands. We will extend this and show that the summands are equal for all j. In particular, the sum of the intrinsic volumes of all regions of a fixed dimension j in a hyperplane arrangement is a quantity expressible solely in terms of the lattice structure of the arrangement. So while the intrinsic volumes of a single region are certainly not invariant under nonsingular linear transformations, the sum of the intrinsic volumes over all regions of a fixed dimension is.
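Both identities can be checked by hand for a small example of our own choosing (not from the text): the coordinate arrangement {x_i = 0 : 1 ≤ i ≤ d}, whose characteristic polynomial is (t − 1)^d, whose regions are the 2^d orthants, and where each orthant has intrinsic volumes v_k = C(d, k)/2^d. A minimal sketch:

```python
from math import comb

for d in range(1, 7):
    # Zaslavsky: r_d(A) = (-1)^d * chi_{A,d}(-1) with chi(t) = (t - 1)^d,
    # which indeed equals the number 2^d of orthants.
    assert (-1) ** d * (-1 - 1) ** d == 2 ** d

    # Klivans-Swartz (the case j = d): the sum of v_k over all 2^d orthants
    # equals the absolute value of the coefficient of t^k in (t - 1)^d.
    for k in range(d + 1):
        region_sum = 2 ** d * comb(d, k) / 2 ** d   # each orthant: C(d,k)/2^d
        coeff = comb(d, k) * (-1) ** (d - k)        # coeff of t^k in (t-1)^d
        assert region_sum == abs(coeff)
```

The same computation also illustrates the invariance statement: the individual v_k change under linear maps, but their sum over all orthants is the combinatorial quantity C(d, k).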
where P_F(t) = Σ_k v_k(F) t^k. In terms of the intrinsic volumes, for 0 ≤ k ≤ j, (6.1) Σ_{F∈R_j(A)} v_k(F) = (−1)^{j−k} a_{jk}, where a_{jk} is the coefficient of t^k in χ_{A,j}(t).
Note that in the special case j = k we obtain Σ_{F∈R_j(A)} v_j(F) = ℓ_j(A), which is easily verified directly. We derive a concise proof of Theorem 6.1 by combining Zaslavsky's Theorem with the kinematic formula. A similar, though slightly different, proof strategy using the kinematic formula was recently employed in [20] to derive Klivans and Swartz's result.
The cases j = 0, 1 will be shown directly; in the case j ≥ 2 we prove (6.1) by induction on k. This proof by induction naturally consists of two steps. (1) For the case k = 0 we need to establish the claim for the constant coefficient. Let H be a hyperplane in general position relative to A, that is, H intersects all subspaces in L(A) transversely. In H consider the restriction A^H = {H ∩ H′ : H′ ∈ A}. The number of (j − 1)-dimensional regions in A^H is given by the number of j-dimensional regions in A that are hit by the hyperplane H. With the simplest case of the Crofton formula (4.2), we obtain a corresponding identity for a uniformly random hyperplane H. We will see below that r_{j−1}(A^H) is almost surely constant (which eliminates the expectation on the left-hand side) and is in fact expressible in terms of χ_{A,j}. This will give the basis step in a proof by induction on k of (6.1). (2) For the induction step we use the kinematic formula (5.2) with m = 1, which gives, for a uniformly random hyperplane H, an identity involving v_k(C ∩ H). Notice that if the summation were over the regions of A^H, then we could (and in fact can if k ≥ 2) apply the induction hypothesis and express v_k(C ∩ H) in terms of the characteristic polynomials of A^H, which, as we will see below, are constant for generic H and expressible in terms of the characteristic polynomial of A. Since the summation is over the regions of A, we need to be a bit careful in the case k = 1.

To implement this idea we need to understand how the characteristic polynomial of a hyperplane arrangement changes when adding a hyperplane in general position.

Lemma 6.2 Let A be a hyperplane arrangement in R^d, and let j ≥ 2. If H ⊂ R^d is a linear hyperplane in general position relative to A, then the (j − 1)th-level characteristic polynomial of the reduced arrangement A^H and the number of (j − 1)-dimensional regions of A^H are given in terms of χ_{A,j}. In terms of coefficients, if χ_{A,j}(t) = Σ_k a_{jk} t^k, then the coefficients ā_{j−1,k} of χ_{A^H, j−1}(t) are determined accordingly. Proof The constant and linear coefficients of χ_{A,j} can be read off directly, which shows that indeed ā_{j−1,0} = a_{j0} + a_{j1}. As for the claimed formula for r_{j−1}(A^H), we use Zaslavsky's Theorem 2.13, which finishes the proof.
Proof of Theorem 6.1 We first verify the cases j = 0, 1 directly. Recall from (2.16) that χ_{A,0}(t) = ℓ_0(A), along with the corresponding expression for χ_{A,1}(t). In a linear hyperplane arrangement we have at most one 0-dimensional region, and R_0(A) = L_0(A) (possibly both empty). Therefore, the case j = 0 follows. As for the case j = 1, note first that if r_0(A) = 0, then R_1(A) = L_1(A) and the claim follows as in the case j = 0. If on the other hand r_0(A) = 1, then every line L ∈ L_1(A) corresponds to two rays F_+, F_− ∈ R_1(A), that is, r_1(A) = 2 ℓ_1(A).
Since v_1(F_±) = v_0(F_±) = 1/2 and ℓ_0(A) = 1, we obtain the claim for j = 1. We now assume j ≥ 2 and proceed by induction on k, starting with k = 0. In (6.2) we have seen how the left-hand side relates to r_{j−1}(A^H). From Lemma 6.2 we obtain that r_{j−1}(A^H) is almost surely constant and expressible in terms of χ_{A,j}. This settles the case k = 0. For k > 0 we need to distinguish between k = 1 and k ≥ 2. From (6.3) we obtain, using the case k = 0 and Lemma 6.2, the claim for k = 1. Finally, in the case k ≥ 2 we argue similarly, using the induction hypothesis.

Remark 6.3 It was pointed out to us by Rolf Schneider that for k > 0, j > 0 and a subspace L of dimension dim L = d − m in general position relative to A, one can (as we did in the case k = 0) use a Crofton-type identity to express the sum of the Grassmann angles in terms of the number of regions of the reduced arrangement. One can then derive an expression for the number of regions of the reduced arrangement in terms of the characteristic polynomial of A (for example, by applying Lemma 6.2 iteratively). Via the Crofton formulas (5.2), we can use this to recover the expressions for the intrinsic volumes.

Applications
In this section we work out some examples and present some applications of Theorem 6.1.

Product Arrangements
Let A, B be two hyperplane arrangements in R^d and R^e, respectively. The product arrangement in R^{d+e} is defined as A × B = {H ⊕ R^e : H ∈ A} ∪ {R^d ⊕ H′ : H′ ∈ B}. The characteristic polynomial is multiplicative, χ_{A×B}(t) = χ_A(t) χ_B(t), and so is the bivariate polynomial (2.14), X_{A×B}(s, t) = X_A(s, t) X_B(s, t). This can either be shown directly [26, Chap. 2], or deduced from Theorem 6.1, as the intrinsic volumes polynomial satisfies P_{C×D}(t) = P_C(t) P_D(t).
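The multiplicativity P_{C×D}(t) = P_C(t) P_D(t) can be illustrated with the simplest cone: the half-line R_+ has P(t) = (1 + t)/2, so the orthant R_+^d = R_+ × · · · × R_+ should have P(t) = ((1 + t)/2)^d, that is, v_k = C(d, k)/2^d. A sketch (our own example) checking this by explicit polynomial multiplication:

```python
from math import comb

def poly_mul(p, q):
    """Multiply polynomials given as coefficient lists (lowest degree first)."""
    r = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

half_line = [0.5, 0.5]   # P_{R_+}(t) = 1/2 + t/2, i.e. v_0 = v_1 = 1/2

d = 5
P = [1.0]                # P of the trivial cone {0} ⊂ R^0, with v_0 = 1
for _ in range(d):
    P = poly_mul(P, half_line)
# P now holds the intrinsic volumes of the orthant R_+^5: P[k] = C(5,k)/32.
```

The coefficients also sum to one, reflecting the identity Σ_k v_k(C) = 1 used above.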

Generic Arrangements
A hyperplane arrangement A is said to be in general position if every subset of at most d of the corresponding normal vectors is linearly independent. Combinatorial properties of such arrangements have been studied by Cover and Efron [10], who generalize results of Schläfli [31] and Wendel [36] to obtain expressions for, among other things, the average number of j-dimensional faces of a region in the arrangement. We set out to compute the characteristic polynomial of an arrangement of hyperplanes in general position, and in the process recover the formulas of Cover and Efron and a formula of Hug and Schneider [18] for the expected intrinsic volumes of the regions. The resulting formula for r_j(A) allows us to recover the formula of Cover and Efron [10, Thm. 1] for the sum of f_j(C) over all regions. If one takes one of these j-dimensional regions uniformly at random, then one also recovers the expression for the average number of j-dimensional faces from [10, Thm. 3']. Moreover, (6.6) and Theorem 6.1 together yield a closed formula for the expected intrinsic volumes of the regions; in particular, for the d-dimensional regions this recovers [18, Thm. 4.1].
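Schläfli's count, that n linear hyperplanes in general position cut R^d into 2 Σ_{i=0}^{d−1} C(n−1, i) regions, can be probed empirically by collecting the sign vectors realized by random points. The sketch below uses a small example of our own (normals e_1, e_2, e_3 and (1, 1, 1), any three of which are linearly independent, so the arrangement is in general position):

```python
import numpy as np
from math import comb

def schlafli_regions(n, d):
    """Number of regions cut out of R^d by n linear hyperplanes in
    general position: 2 * sum_{i=0}^{d-1} C(n-1, i)."""
    return 2 * sum(comb(n - 1, i) for i in range(d))

# Four hyperplanes in R^3 in general position (our illustrative choice).
normals = np.array([[1.0, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]])

rng = np.random.default_rng(0)
x = rng.standard_normal((200_000, 3))
signs = np.sign(x @ normals.T).astype(int)
# Each distinct (nowhere-zero) sign vector corresponds to one region.
regions_seen = {tuple(s) for s in signs if 0 not in s}
```

With this many samples every region is found with overwhelming probability, so `len(regions_seen)` matches the Schläfli count 14 = 2(1 + 3 + 3); only 14 of the 16 possible sign vectors are realizable, since the signs of x_1, x_2, x_3 constrain the sign of x_1 + x_2 + x_3.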

Lemma 6.5
The jth-level characteristic polynomials for the above defined hyperplane arrangements are given in terms of the Stirling numbers of the second kind, here denoted S(d, j). Proof We first discuss the case A = A_A. From the formula for the chambers of A it is seen that an element in L(A) is of the form L = {x ∈ R^d : x_{π(k_1)} = · · · = x_{π(ℓ_1)}, x_{π(k_2)} = · · · = x_{π(ℓ_2)}, . . .}, where k_1 ≤ ℓ_1 < k_2 ≤ ℓ_2 < · · ·. More precisely, for L ∈ L_j(A) there exists a unique partition I_1, . . . , I_j of {1, . . . , d} into non-empty sets such that L = {x ∈ R^d : ∀i = 1, . . . , j, ∀a, b ∈ I_i, x_a = x_b}. The corresponding reduction A^L is easily seen to be a nonsingular linear transformation of the j-dimensional braid arrangement, so that χ_{A^L}(t) = ∏_{i=0}^{j−1} (t − i). Since the number of partitions of {1, . . . , d} into j non-empty sets is given by S(d, j), cf. [33], and by the characterisation (2.15) of χ_{A,j}(t), we obtain the claim in the case A = A_A.
In the case A = A_BC we can argue similarly, but we need to keep in mind the extra role of the origin. For every element L ∈ L(A) there exists a subset I of {1, . . . , d} of cardinality |I| ≥ j, and a partition I_1, . . . , I_j of I, such that L = {x ∈ R^d : ∀a ∉ I, x_a = 0 and ∀i = 1, . . . , j, ∀a, b ∈ I_i, x_a = x_b}. The same argument as in the case A = A_A, along with the identity Σ_{i=j}^{d} (d choose i) S(i, j) = S(d + 1, j + 1), then settles the case A = A_BC.
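The identity invoked here is a standard Stirling-number fact and is easy to verify computationally. A minimal sketch, with our own helper `stirling2` built from the recurrence S(n, k) = k·S(n−1, k) + S(n−1, k−1):

```python
from math import comb
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling2(n, k):
    """Stirling number of the second kind: partitions of an n-set into
    k non-empty blocks, via S(n,k) = k*S(n-1,k) + S(n-1,k-1)."""
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

# sum_{i=j}^{d} C(d,i) * S(i,j) = S(d+1, j+1): classify partitions of
# {1, ..., d+1} into j+1 blocks by the i elements lying outside the block
# that contains the element d+1.
for d in range(8):
    for j in range(d + 1):
        lhs = sum(comb(d, i) * stirling2(i, j) for i in range(j, d + 1))
        assert lhs == stirling2(d + 1, j + 1)
```

The comment records the bijective reason the identity holds, which mirrors the role of the origin in the BC case: the coordinates forced to zero play the part of the distinguished block.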
For the first type of linear subspace we obtain a reduction A^{L_1} that is isomorphic to the arrangement A_D, while for the second type we obtain a reduction A^{L_2} that is isomorphic to the arrangement A_BC (each, of course, of the corresponding dimension).
The number of subspaces of type L_1 is given by S(d, j) (as in the case A = A_A), while the number of subspaces of type L_2 is given by S(d + 1, j + 1) − S(d, j) (as in the case A = A_BC, but noting that |I| = d does not give a BC-type reduction). The same argument as before now yields the formula.