Average degree of the essential variety

The essential variety is an algebraic subvariety of dimension $5$ in real projective space $\mathbb R\mathrm P^{8}$ which encodes the relative pose of two calibrated pinhole cameras. The $5$-point algorithm in computer vision computes the real points in the intersection of the essential variety with a linear space of codimension $5$. The degree of the essential variety is $10$, so this intersection consists of $10$ complex points in general. We compute the expected number of real intersection points when the linear space is random. We focus on two probability distributions for linear spaces. The first distribution is invariant under the action of the orthogonal group $\mathrm{O}(9)$ acting on linear spaces in $\mathbb R\mathrm P^{8}$. In this case, the expected number of real intersection points is equal to $4$. The second distribution is motivated by computer vision and is defined by choosing 5 point correspondences in the image planes $\mathbb R\mathrm P^2\times \mathbb R\mathrm P^2$ uniformly at random. A Monte Carlo computation suggests that with high probability the expected value lies in the interval $(3.95 - 0.05,\ 3.95 + 0.05)$.


Introduction
The mathematical abstraction of a pinhole camera is a projective linear map RP^3 ⇢ RP^2, x ↦ Cx, where C ∈ R^{3×4} is a matrix of rank 3. The camera is called calibrated when C = [R, t], where R ∈ SO(3) is a rotation matrix and t ∈ R^3 is a translation vector. The relative pose problem is the problem of computing the relative position of two cameras in 3-space; see [8, Section 9]. Suppose that we have two calibrated cameras given by two matrices C_1 and C_2 of rank 3. Since we are only interested in relative positions, we can assume C_1 = [1_3, 0] and C_2 = [R, t]. If x ∈ RP^3 is a point in 3-space, then u = C_1 x ∈ RP^2 and v = C_2 x ∈ RP^2 are called a point correspondence. Any point correspondence (u, v) satisfies the algebraic equation

$$u^T E v = 0, \tag{1.1}$$
where E = E(R, t) := R^T [t]_× and [t]_× is the matrix acting by [t]_× x = t × x, the cross product in R^3. The set of all such matrices is denoted Ê := {E(R, t) | R ∈ SO(3), t ∈ R^3}. This is an algebraic variety defined by the 10 cubic and homogeneous polynomial equations det(E) = 0 and 2EE^T E − Tr(EE^T)E = 0; see [7, Section 4]. Therefore, if π : R^{3×3} → P(R^{3×3}) ≅ RP^8 denotes the projectivization map, Ê is the cone over the projective variety
$$E := \pi(\widehat E \setminus \{0\}), \tag{1.2}$$
which is called the essential variety.
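These defining equations are easy to check numerically. The following sketch (Python with numpy; the helper names are ours, and we use the convention E(R, t) = R^T[t]_× consistent with (1.1)) draws a random rotation R and translation t and verifies that E(R, t) satisfies both conditions.

```python
import numpy as np

rng = np.random.default_rng(1)

def skew(t):
    """Cross-product matrix [t]_x, so that skew(t) @ x == np.cross(t, x)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def random_rotation(rng):
    """A rotation matrix from the QR decomposition of a Gaussian matrix."""
    Q, R = np.linalg.qr(rng.standard_normal((3, 3)))
    Q = Q * np.sign(np.diag(R))   # fix column signs
    if np.linalg.det(Q) < 0:      # force det = +1
        Q[:, 2] *= -1
    return Q

R, t = random_rotation(rng), rng.standard_normal(3)
E = R.T @ skew(t)                 # an essential matrix

residual = 2 * E @ E.T @ E - np.trace(E @ E.T) * E
print(abs(np.linalg.det(E)))      # ~ 0
print(np.linalg.norm(residual))   # ~ 0
```

Both quantities vanish up to floating-point error, reflecting that E(R, t) has singular values (∥t∥, ∥t∥, 0).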
In the following we view elements of RP^8 as real 3 × 3 matrices up to scaling. The essential variety E has dimension 5 = dim SO(3) + dim R^3 − 1. Demazure showed that its complexification has degree 10; see [6, Theorem 6.4]. Denote by G := G(3, RP^8) the Grassmannian of 3-dimensional linear spaces in RP^8. By (1.1), every point correspondence induces a linear equation on E. For 5 general point correspondences (u_1, v_1), …, (u_5, v_5) ∈ RP^2 × RP^2, the linear space L := {A ∈ RP^8 | u_i^T A v_i = 0 for i = 1, …, 5} lies in G and intersects E in finitely many points. That is, the relative pose problem can be solved by computing the real zeros of a system of polynomial equations that has 10 complex zeros in general. Once we have computed E = E(R, t) we can recover the relative position of the two cameras from E. The process of recovering the relative pose of two calibrated cameras from five point correspondences is known as the 5-point algorithm; see [12].
The system of polynomial equations that we need to solve as part of the 5-point algorithm has 10 complex zeros in general, but the number of real zeros depends on L. Often, one computes all complex zeros and sorts out the real ones.Whether or not this is an efficient approach depends on how likely it is to have many real zeros out of 10 complex ones.Motivated by this observation, in this paper we study the average degree E #(E ∩ L) for random L.
Consider L = U · L_0, where L_0 ∈ G is fixed and U ∼ Unif(O(9)). Then, with respect to the Haar measure on G, we in fact have L ∼ Unif(G); see [10, 13]. Our first result shows that with this uniform distribution, we expect 4 of the 10 complex intersection points to be real.

Theorem 1.1. Let L ∼ Unif(G). Then E #(E ∩ L) = 4.
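The distribution Unif(G) can be sampled exactly as described. A minimal sketch (numpy; the QR-based sampler follows the standard recipe for Haar measure on the orthogonal group, and the particular choice of L_0 is an arbitrary assumption for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_orthogonal(n, rng):
    """Haar-distributed U in O(n) via QR of a Gaussian matrix.

    The sign correction makes the QR factorization unique, which is
    what makes the resulting Q Haar-distributed."""
    Z = rng.standard_normal((n, n))
    Q, R = np.linalg.qr(Z)
    return Q * np.sign(np.diag(R))

# A fixed 3-dimensional linear space L_0 in RP^8 = P(R^9), stored as an
# orthonormal 9 x 4 basis of the corresponding 4-dimensional cone in R^9.
L0 = np.eye(9)[:, :4]
U = haar_orthogonal(9, rng)
L = U @ L0                        # a sample from Unif(G)

print(np.allclose(U.T @ U, np.eye(9)))   # U is orthogonal
```

Since U is orthogonal, the columns of L stay orthonormal, so L again represents a point of G.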
This result is in fact quite surprising, because the answer is an integer, although there is no a priori reason why it should even be a rational number (see also [3, Remark 2]).
To work within the computer vision framework, we need a different distribution from the one used in Theorem 1.1. That probability distribution is O(9)-invariant, yet linear equations of the type u^T E v = 0 are not preserved by O(9). These special linear equations are invariant under O(3) × O(3) acting by (U, V).(u, v) := (Uu, Vv). The corresponding invariant probability distribution is given by the random point a = U · a_0 ∈ RP^2, where U ∼ Unif(O(3)) and a_0 ∈ RP^2 is fixed. We denote this by a ∼ Unif(RP^2).
Remark 1.2.The definition of Unif(G) does not depend on the choice of L 0 , and the definition of Unif(RP 2 ) does not depend on the choice of a 0 .
We were not able to determine the exact value of the integral in this theorem. Yet, we can independently sample N random matrices with rows z_1, …, z_5 and compute their absolute determinants. This gives an empirical average value μ_N. An experiment with sample size N = 5 · 10^9 yields the estimate E_{L∼ψ} #(E ∩ L) ≈ 3.95 reported in the abstract. In fact, μ_N is itself a random variable, and by Chebyshev's inequality we have P(|μ_N − E μ_N| ≥ ε) ≤ σ²/(N ε²), where σ² is the variance of the absolute determinant. We show in Proposition 4.4 below that σ² ≤ 360. Using this in Chebyshev's inequality with ε = 0.05 gives a failure probability of at most 2.88 · 10^{−5} (in fact, since 360 is an extremely coarse upper bound, the true probability should be much smaller). Therefore, it is likely that E_{L∼ψ} #(E ∩ L) is strictly smaller than 4; i.e., it is likely that the expected value in Theorem 1.3 is less than the one in Theorem 1.1. See Figure 1.
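The resulting Chebyshev bound is elementary arithmetic; for concreteness (with the values σ² ≤ 360, N = 5 · 10^9 and ε = 0.05 from above):

```python
# Chebyshev's inequality: P(|mu_N - E mu_N| >= eps) <= sigma^2 / (N * eps^2)
sigma_sq, N, eps = 360.0, 5e9, 0.05
bound = sigma_sq / (N * eps ** 2)
print(bound)   # 2.88e-05
```

So even the coarse variance bound makes the ±0.05 interval hold except with probability below 3 · 10^{−5}.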
The distribution of zeros shown in Figure 1 gives rise to further questions of interest in computer vision. When applying the 5-point algorithm it is important to know when there are no real solutions. In Figure 1, out of 1000 sampled linear spaces, the distribution with respect to Unif(G) had 10 instances with no real solutions, and the distribution with respect to ψ had only 1 such instance. The experiments indicate that having no real solutions is a relatively rare occurrence, but further work is needed to quantify and geometrically characterize these occurrences with respect to different distributions.
We remark that the distributions Unif(G) and ψ differ in the following sense. For L ∼ Unif(G), every linear space L ∈ G is equally likely. But when L ∼ ψ, the space L must be defined by 5 linear equations that are given by rank-one matrices of size 3. The Segre variety of rank-one 3 × 3 matrices in RP^8 has dimension 4 (see^1 [11, Section 4.3.5]), so that a general linear space of codimension 4 = 9 − 5 in RP^8, spanned by 5 general 3 × 3 matrices, intersects the Segre variety in finitely many points. There is a Euclidean open subset of G on which this intersection has strictly fewer than 5 points; such linear spaces cannot arise from 5 point correspondences. Hence, there is a measurable subset of G that has positive probability under Unif(G) but probability zero under ψ.

In Section 5 we use a result by Vitale [16] to express the expected value in Theorem 1.3 through the volume of a certain convex body K ⊂ R^5. Namely, E_{L∼ψ} #(E ∩ L) is proportional to vol(K), where K is defined by its support function h_K(x) = ½ E_z |x^T z|, and z ∈ R^5 is as above; K is a zonoid and we call it the essential zonoid. We use this to prove a lower bound for the expected number of real points E_{L∼ψ} #(E ∩ L) in Proposition 5.1.
The two probability distributions in Theorem 1.1 and Theorem 1.3 are geometric, meaning that they are not biased towards preferred points in G or RP^2, respectively. In applications, however, one might be interested in other distributions, such as taking the u_i and v_i uniformly in a box (see Examples 4.1 and 4.3 below). For such cases, we do not get concrete results like Theorem 1.1 or Theorem 1.3. Nevertheless, in Theorem 4.2 below we give a general integral formula.
Outline. In Section 2 we give preliminaries: we recall the integral geometry formula in projective space and study the geometry of the essential variety. In Section 3 we prove Theorem 1.1 by computing the volume of the essential variety. In Section 4 we prove Theorem 1.3 and Theorem 4.2. In the last section, Section 5, we study the essential zonoid.

Preliminaries
Let us start by setting up our notation as well as recording key volume computations used throughout the paper. We consider the Euclidean space R^n with the standard metric ⟨x, y⟩ = x^T y. The norm of a vector x ∈ R^n will be denoted by ∥x∥ := √⟨x, x⟩ and the unit sphere by S^{n−1} := {x ∈ R^n | ∥x∥ = 1}. The Euclidean volume of the sphere is
$$\mathrm{vol}(S^{n-1}) = \frac{2\pi^{n/2}}{\Gamma(n/2)}. \tag{2.1}$$
In particular, vol(S^1) = 2π and vol(S^2) = 4π. The standard basis vectors in R^n are denoted e_i for 1 ≤ i ≤ n. The space of real n × n matrices R^{n×n} is also endowed with a Euclidean structure. We denote the identity matrix by 1_n ∈ R^{n×n} and the zero matrix by 0_n. The orthogonal group will be denoted by O(n), while the special orthogonal group is SO(n). Both the orthogonal and the special orthogonal group are Riemannian submanifolds of R^{n×n}, and their volumes are computed in [9]. For instance, vol(SO(2)) = 2π and vol(SO(3)) = 8π².

^1 In [11] one can find a formula for the dimension of the complex Segre variety. The real Segre variety is Zariski dense in the complex Segre variety, so their real and complex dimensions coincide.
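Formula (2.1) can be checked against the low-dimensional values above, as well as against vol(S^5) = π³ and vol(S^7) = π⁴/3, which enter the volume computations later; a short sketch (Python, function name ours):

```python
import math

def vol_sphere(m):
    """Volume of the unit sphere S^m in R^(m+1): 2*pi^(n/2) / Gamma(n/2), n = m+1."""
    n = m + 1
    return 2 * math.pi ** (n / 2) / math.gamma(n / 2)

print(vol_sphere(1), 2 * math.pi)        # circle
print(vol_sphere(2), 4 * math.pi)        # 2-sphere
print(vol_sphere(5), math.pi ** 3)       # gives vol(RP^5) = pi^3 / 2
print(vol_sphere(7), math.pi ** 4 / 3)   # gives vol(RP^7) = pi^4 / 6
```

The last two values are the ones used for the average degrees of the essential variety and of the fundamental matrices in Section 3.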
2.1. Integral geometry. The real projective space of dimension n − 1 is defined to be RP^{n−1} := (R^n \ {0})/∼, where the equivalence relation is x ∼ y ⇔ ∃λ ∈ R : x = λy. The projection π : S^{n−1} → RP^{n−1} that maps x to its class is a 2 : 1 cover. It induces a Riemannian structure on RP^{n−1} by declaring π to be a local isometry; in particular, vol(RP^{n−1}) = ½ vol(S^{n−1}). Let now X ⊆ RP^{n−1} be a submanifold of dimension m and let L ⊆ RP^{n−1} be a linear space of codimension m. Howard [9] proved that for almost all U ∈ O(n) the intersection X ∩ U · L is finite and
$$\mathop{\mathbb E}_{U \sim \mathrm{Unif}(\mathrm O(n))} \#(X \cap U\cdot L) = \frac{\mathrm{vol}(X)}{\mathrm{vol}(\mathbb R\mathrm P^m)}. \tag{2.2}$$
This formula will be used for proving Theorem 1.1.

2.2. The coarea formula. The proof of (2.2) is based on the coarea formula, which we will also need. In order to state the formula we need to introduce the normal Jacobian. Let M, N be Riemannian manifolds with dim(M) ≥ dim(N) and let F : M → N be a smooth map whose derivative D_x F is surjective for almost all x ∈ M. The normal Jacobian of F at x is
$$\mathrm{NJ}(F, x) := \sqrt{\det(J J^T)},$$
where J is the matrix representation of the derivative D_x F relative to orthonormal bases in T_x M and T_{F(x)} N. Then for any integrable function h : M → R we have
$$\int_M h(x)\,\mathrm{NJ}(F, x)\,\mathrm dx = \int_N \Big(\int_{F^{-1}(y)} h(x)\,\mathrm dx\Big)\,\mathrm dy; \tag{2.3}$$
when the fibers F^{−1}(y) are finite, the inner integral is the sum over the fiber. See, e.g., [9, Section A-2].
2.3. The geometry of the essential variety. In this subsection, we study in more detail the geometry of the essential variety E. Recall from (1.2) that E is the projection of the cone Ê to projective space RP^8. We can also project Ê to the sphere. This defines the spherical essential variety E_S := {E(R, t) | R ∈ SO(3), t ∈ S^2}. In particular, Tr(E(R, t)^T E(R, t)) = 2∥t∥², and restricting to t ∈ S^2 we have im(E) = E_S for the map E : SO(3) × S^2 → E_S, (R, t) ↦ E(R, t).

Lemma 2.1. The map E : SO(3) × S^2 → E_S is surjective and 2 : 1.

Proof. Let M ∈ SO(3) be the matrix such that M t = t and M x = −x for all x orthogonal to t; note that M is symmetric. Then we have M[−t]_× = [t]_× and we can write
E(MR, −t) = (MR)^T [−t]_× = R^T M [−t]_× = R^T [t]_× = E(R, t).
This means that E is at least 2 : 1. To show it is at most 2 : 1, we consider the equation E(MR, λt) = E(R, t) for some rotation M and λ ∈ {±1}. We want to check how many different rotation matrices M satisfy this equation. We have the chain of implications
E(MR, λt) = E(R, t) ⟹ λ M^T [t]_× = [t]_× ⟹ (1_3 − λM^T)[t]_× = 0.
We see that the columns of 1_3 − λM are multiples of t, therefore we can write 1_3 − λM = c tt^T for some c ∈ R. We make use of the fact that det(M) = 1. Firstly we compute the determinant:
λ = λ³ det(M) = det(λM) = det(1_3 − c tt^T) = 1 − c,
where we have used that t^T t = 1. This implies that either c = 0, λ = 1 and M = 1_3, or c = 2, λ = −1 and M = 2tt^T − 1_3. This is Rodrigues' formula for the 180-degree rotation about the axis spanned by t. Additionally, it is worth mentioning that this symmetry of the essential variety is exactly the twisted pair described in [8]. □

Next, we show the invariance properties of the map E. For U, V ∈ SO(3) we denote (U, V).E := U E V^T. In particular, the next lemma shows that this defines a group action on E_S.
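The twisted-pair symmetry from the proof of Lemma 2.1 can be confirmed numerically. The sketch below (numpy; helper names ours, using the convention E(R, t) = R^T[t]_× from (1.1)) checks that M = 2tt^T − 1_3 is a rotation and that (R, t) and (MR, −t) map to the same essential matrix.

```python
import numpy as np

rng = np.random.default_rng(2)

def skew(t):
    """Cross-product matrix [t]_x."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def random_rotation(rng):
    Q, Rm = np.linalg.qr(rng.standard_normal((3, 3)))
    Q = Q * np.sign(np.diag(Rm))
    if np.linalg.det(Q) < 0:
        Q[:, 2] *= -1
    return Q

R = random_rotation(rng)
t = rng.standard_normal(3)
t /= np.linalg.norm(t)               # t in S^2
M = 2 * np.outer(t, t) - np.eye(3)   # 180-degree rotation about t

E1 = R.T @ skew(t)
E2 = (M @ R).T @ skew(-t)            # the twisted pair (MR, -t)

print(np.allclose(M @ M.T, np.eye(3)), np.isclose(np.linalg.det(M), 1.0))
print(np.allclose(E1, E2))           # same essential matrix
```

This is exactly the 2-element fiber of the map E over a point of E_S.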
Lemma 2.2. For orthogonal matrices U, V ∈ SO(3) and (R, t) ∈ SO(3) × S^2 we have
$$U E(R, t)\, V^T = E(V R U^T, V t). \tag{2.4}$$
With the above lemma, we deduce the following result on E_S.
Corollary 2.3. E_S is a homogeneous space for SO(3) × SO(3) acting by left and right multiplication. In particular, E_S, and hence also E, is smooth.
We now denote the following special matrix in E_S:
$$E_0 := E(1_3, e_1) = [e_1]_\times = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & -1 \\ 0 & 1 & 0 \end{pmatrix} \tag{2.5}$$
(recall that e_1 denotes the first standard basis vector (1, 0, 0)^T). In the next step we determine the volume of the stabilizer group stab(E_0) := {(U, V) ∈ SO(3) × SO(3) | (U, V).E_0 = E_0}.

Proof. The stabilizer groups of all E ∈ E_S have the same volume, so we compute the stabilizer group of E_0. By Lemma 2.1, the map E is 2 : 1, and by (2.4) we have (U, V).E_0 = E(V U^T, V e_1). Therefore, (U, V).E_0 = E_0 if and only if U e_1 = e_1 and U V^T = 1_3, or U e_1 = −e_1 and U V^T = M, i.e., M U = V, where M = 2e_1e_1^T − 1_3. That is, stab(E_0) is realized as the image of an injective map F : SO(2) × {−1, 1} → SO(3) × SO(3). The normal Jacobian of F at every point is √2. Indeed, for fixed ε, SO(2) × {ε} is a homogeneous space under the action of SO(2) on itself; this action is transitive and preserves the inner product, so the normal Jacobian is constant, and it suffices to compute it at (1_2, ε). The tangent space to SO(3) at the identity is spanned by the matrices F_{i,j} := e_i e_j^T − e_j e_i^T, 1 ≤ i < j ≤ 3. Thus an orthogonal basis for the tangent space of SO(3) × SO(3) at (1_3, 1_3) is given by the pairs
(F_{i,j}, 0_3) and (0_3, F_{i,j}), 1 ≤ i < j ≤ 3. (2.6)
With respect to this basis, and identifying the tangent space of SO(2) × {−1, 1} with R, we have D_{(1_2, ε)}F = (0, 0, ε, 0, 0, 1)^T and thus NJ(F, (1_2, ε)) = √2. We conclude by using the coarea formula (2.3) for M = SO(2) × {−1, 1}, N = stab(E_0), h ≡ √2, and F^{−1}(y) a single point by injectivity, to obtain vol(stab(E_0)). □

Next, we compute an orthonormal basis of the tangent space T_{E_0}E at E_0.
Lemma 2.5. An orthonormal basis of T_{E_0}E is given by five explicit matrices B_1, …, B_5.

Proof. First, one checks that the five matrices are pairwise orthogonal and all of norm one. Since dim E = 5, it therefore suffices to show that B_1, …, B_5 ∈ T_{E_0}E. We have T_{e_1}S^2 = span{e_2, e_3} and T_{1_3}SO(3) = span{F_{1,2}, F_{1,3}, F_{2,3}}, where F_{i,j} = e_i e_j^T − e_j e_i^T as above. Therefore, the images of these five basis vectors under the derivative of E at (1_3, e_1) lie in T_{E_0}E. Each of the B_i can be expressed as a linear combination of these five images, which shows B_i ∈ T_{E_0}E. □

Alternatively, to prove Lemma 2.5 we can consider the derivative of the smooth surjective map γ : SO(3) × SO(3) → E_S, (U, V) ↦ (U, V).E_0. Since the basis for the tangent space of SO(3) × SO(3) at (1_3, 1_3) is given as in (2.6), the tangent space T_{E_0}E is also spanned by the six matrices (2.7) obtained by applying D_{(1_3, 1_3)}γ to this basis.

The volume of the essential variety
In this section, we prove Theorem 1.1. The strategy is as follows. By Corollary 2.3, E is a smooth submanifold of RP^8. We can apply the integral geometry formula (2.2) to get
$$\mathop{\mathbb E}_{L \sim \mathrm{Unif}(G)} \#(E \cap L) = \frac{\mathrm{vol}(E)}{\mathrm{vol}(\mathbb R\mathrm P^5)}. \tag{3.1}$$
Thus, to prove Theorem 1.1 it remains to compute the volume of E. We do this in the next theorem. Notice that the result of the theorem, when plugged into (3.1), immediately proves Theorem 1.1.
Theorem 3.1. The volume of the essential variety is vol(E) = 4 · vol(RP^5) = 2π³.

We give two different proofs of this theorem. Since vol(E) = ½ vol(E_S), it is enough to compute the latter volume.
Proof 1. By Lemma 2.1, we realize E_S as the image of the smooth 2 : 1 map E : SO(3) × S^2 → E_S, (R, t) ↦ E(R, t). By Lemma 2.2, NJ(E, (R, t)) is invariant under the action of SO(3) × SO(3). Applying the coarea formula (2.3) over the 2-element fibers of E, we get
$$\mathrm{vol}(E_S) = \tfrac{1}{2} \int_{\mathrm{SO}(3) \times S^2} \mathrm{NJ}(E, (R, t))\,\mathrm d(R, t) = \tfrac{1}{2}\,\mathrm{vol}(\mathrm{SO}(3))\,\mathrm{vol}(S^2)\,\mathrm{NJ}(E, (1_3, e_1)).$$
Recall that F_{i,j} = e_i e_j^T − e_j e_i^T. With respect to the orthonormal basis {B_i} computed in Lemma 2.5 and the orthonormal basis {(0_3, e_2), (0_3, e_3), (F_{1,2}, 0), (F_{1,3}, 0), (F_{2,3}, 0)} for T_{1_3}SO(3) × T_{e_1}S^2, the columns of the matrix J associated to the derivative of E at (1_3, e_1) are the images of these basis elements written in the basis given by Lemma 2.5. A direct computation gives NJ(E, (1_3, e_1)) = √(det(JJ^T)) = 1/4, and consequently vol(E_S) = ½ · 8π² · 4π · ¼ = 4π³. Therefore, we have vol(E) = ½ vol(E_S) = 2π³. By (2.1), vol(RP^5) = ½ vol(S^5) = π³/2, so that vol(E) = 4 · vol(RP^5). □

Proof 2. By Corollary 2.3, E_S is a homogeneous space under the action of SO(3) × SO(3). We therefore have the surjective smooth map γ : SO(3) × SO(3) → E_S, (U, V) ↦ (U, V).E_0. By Lemma 2.2, the map γ is equivariant with respect to the SO(3) × SO(3) action. This implies that the value of the normal Jacobian does not depend on (U, V). Therefore, by the coarea formula,
$$\mathrm{vol}(E_S) = \frac{\mathrm{vol}(\mathrm{SO}(3))^2}{\mathrm{vol}(\mathrm{stab}(E_0))}\,\mathrm{NJ}(\gamma, (1_3, 1_3)).$$
We compute the normal Jacobian. Recall the notation F_{i,j} = e_i e_j^T − e_j e_i^T. With respect to the orthonormal basis computed in Lemma 2.5 and the basis (2.6) for the tangent space of SO(3) × SO(3) at (1_3, 1_3), the columns of the matrix J associated to the derivative of γ at (1_3, 1_3) are given by writing the matrices in (2.7) with respect to the basis in Lemma 2.5.
Taking the determinant we obtain NJ(γ, (1_3, 1_3)), and with it vol(E_S) = 4π³. As above, this implies vol(E) = 4 · vol(RP^5). □

Another important notion in the context of relative pose problems in computer vision is the so-called fundamental matrix; see, e.g., [8, Section 9]. While essential matrices encode the relative pose of calibrated cameras, fundamental matrices encode the relative position of uncalibrated cameras. Fundamental matrices are precisely the matrices of rank two. So, similar to Theorem 3.1, the average degree of fundamental matrices is given by the normalized volume of the manifold F ⊂ RP^8 of rank-two matrices; notice that dim F = 7. The volume was computed by Beltrán in [1]: vol(F) = π⁴/3 = 2 · vol(RP^7). We get
$$\mathop{\mathbb E}\, \#(F \cap L) = \frac{\mathrm{vol}(F)}{\mathrm{vol}(\mathbb R\mathrm P^7)} = 2$$
(here, L = U · L_0, U ∼ Unif(O(9)), is a uniformly random line in RP^8). Thus, the average degree of the manifold of fundamental matrices is 2, while the degree of its complexification is 3.
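Both average degrees follow from the volumes by the same normalization; the sketch below (Python; the volume values vol(E) = 2π³ and vol(F) = 2 · vol(RP^7) are the ones stated above, the function name is ours) rechecks vol(E)/vol(RP^5) = 4 and vol(F)/vol(RP^7) = 2.

```python
import math

def vol_rp(m):
    """vol(RP^m) = vol(S^m) / 2, with vol(S^m) from formula (2.1)."""
    n = m + 1
    return math.pi ** (n / 2) / math.gamma(n / 2)

vol_E = 2 * math.pi ** 3       # Theorem 3.1
vol_F = math.pi ** 4 / 3       # Beltran [1]: vol(F) = 2 * vol(RP^7)

print(vol_E / vol_rp(5))       # average degree of the essential variety: 4.0
print(vol_F / vol_rp(7))       # average degree of fundamental matrices: 2.0
```

Here vol(RP^5) = π³/2 and vol(RP^7) = π⁴/6, so the two ratios are exactly 4 and 2.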

Average number of relative poses
In this section we prove Theorem 1.3. Let Ψ : (RP^2)^{×10} → R be a measurable function and denote p := (u_1, v_1, …, u_5, v_5) ∈ (RP^2)^{×10}, where (RP^2)^{×10} denotes the tenfold cartesian product of RP^2 with itself. We consider the expected value
$$\mu := \int_{(\mathbb R\mathrm P^2)^{\times 10}} \Psi(p)\, \#(E \cap L_p)\,\mathrm dp$$
for the number of real solutions of the relative pose problem, where L_p is the linear space defined by the point correspondences in p and the integral is with respect to the product of uniform probability measures. For Ψ(p) = 1, the constant one function, μ = E_{L∼ψ} #(E ∩ L). In the general case, μ is the expected value of #(E ∩ L) for a probability distribution with probability density Ψ(p).

The following completes the computation from Example 4.1 of the derivative of the map y ↦ s := (y_1, y_2, 1)/∥(y_1, y_2, 1)∥ ∈ S^2. The derivative of y ↦ (y_1, y_2, 1), relative to the standard bases in R^2 and R^3, is expressed by the 3 × 2 matrix whose upper 2 × 2 block is 1_2 and whose last row is zero. The tangent space of the sphere is T_s S^2 = s^⊥. Let P_s = 1_3 − ss^T be the projection onto s^⊥.
To get the derivative relative to an orthonormal basis of s^⊥, we have to multiply the above matrix from the left with P_s and divide by w := ∥(y_1, y_2, 1)∥ = √(1 + ∥y∥²). For the resulting Jacobian we obtain
$$\mathrm{NJ}(\phi, y) = \sqrt{\det(J^T J)} = (1 + \|y\|^2)^{-3/2} = \cos^3 \alpha,$$
where α is the angle between the lines through u and e_3. This implies that the probability density of u is given by ((b − a)(d − c) cos³ α)^{−1} for u ∈ ϕ(B).
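The normal Jacobian NJ(ϕ, y) = (1 + ∥y∥²)^{−3/2} can be confirmed by finite differences; a sketch (numpy, central differences; the test point y is an arbitrary choice of ours):

```python
import numpy as np

def phi(y):
    """The map y -> (y1, y2, 1) / ||(y1, y2, 1)|| into S^2."""
    v = np.array([y[0], y[1], 1.0])
    return v / np.linalg.norm(v)

y = np.array([0.7, -1.3])
h = 1e-6
J = np.zeros((3, 2))
for j in range(2):                 # numerical 3x2 Jacobian of phi at y
    e = np.zeros(2)
    e[j] = h
    J[:, j] = (phi(y + e) - phi(y - e)) / (2 * h)

nj_numeric = np.sqrt(np.linalg.det(J.T @ J))
nj_closed = (1 + y @ y) ** -1.5
print(nj_numeric, nj_closed)       # agree to ~6 digits
```

The agreement of the two values is a useful sanity check when adapting the density computation to other boxes or embeddings.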
Let us write g(u) for the density of u computed above. By independence, we obtain the density Ψ(p) = ∏_{i=1}^5 g(u_i) g(v_i) when p is in the product of boxes, and Ψ(p) = 0 otherwise. △

We will also denote by Ψ : (R^3\{0})^{×10} → R the function defined by Ψ(u_1, …, v_5) := Ψ(π(u_1), …, π(v_5)), where π : R^3\{0} → RP^2 is the projection. We denote the Gaussian density by Φ(p) := (2π)^{−15} exp(−½ ∑_{i=1}^5 (∥u_i∥² + ∥v_i∥²)). It will be convenient to replace the uniform random variables on RP^2 by Gaussian random variables in R^3 (see [5, Remark 2.24]):
$$\mu = \int_{(\mathbb R^3)^{\times 10}} \Psi(p)\, \#(E \cap L_p)\, \Phi(p)\,\mathrm dp. \tag{4.1}$$
Again, E_{L∼ψ} #(E ∩ L) is recovered by setting Ψ(p) = 1 in (4.1). The proof of Theorem 1.3 consists of three steps, separated into three subsections. In the first two subsections, our objective is to compute the normal Jacobian and apply the coarea formula. This, however, does not yet yield an explicit or practical formula. In the final subsection, we therefore adopt an alternative approach based on a new parametrization, which yields the closed-form expression of Theorem 1.3.

4.1. The incidence variety. The incidence variety is
$$I := \{(p, E) \in (\mathbb R^3 \setminus \{0\})^{\times 10} \times E \mid u_i^T E v_i = 0 \text{ for } i = 1, \ldots, 5\}.$$
This is a real algebraic subvariety of (R^3 \ {0})^{×10} × E. Recall from Lemma 2.2 that SO(3) × SO(3) acts transitively on E by left and right multiplication. This extends to a group action on I via (U, V).(p, E) := (Uu_1, Vv_1, …, Uu_5, Vv_5, UEV^T). Let E_0 := E(1_3, e_1) be as in (2.5) and let us denote the quadric
$$q(u, v) := u^T E_0 v = u_3 v_2 - u_2 v_3,$$
where u = (u_1, u_2, u_3)^T and v = (v_1, v_2, v_3)^T. We denote its zero set by Q := {(u, v) | q(u, v) = 0}. We prove that I is smooth by showing that the Jacobian matrix of the system of equations u_i^T E v_i = 0, for i = 1, …, 5, has full rank at every point in I; see, e.g., [5, Theorem A.9]. The Jacobian matrix of q is the 1 × 6 matrix J(u, v) = (0, −v_3, v_2, 0, u_3, −u_2), and we let
A ∈ R^{5×30} denote the block-diagonal matrix with blocks J(u_i, v_i), i = 1, …, 5. (4.2)
For (p, E_0) ∈ I the matrix A has full rank. Since the image of A is contained in the image of the Jacobian matrix of the system u_i^T E v_i = 0, i = 1, …, 5, we see that the latter has full rank. By the group action this extends to every point of I. Therefore, I is smooth.

4.2. Computing the normal Jacobian. On I we have the two coordinate projections π_1 : I → (R^3 \ {0})^{×10} and π_2 : I → E. The projection π_2 is surjective, but π_1 is not, since out of the 10 complex solutions of the system of equations u_i^T E v_i = 0, i = 1, …, 5, there can be 0 real solutions. Notice that im(π_1) is a full-dimensional semi-algebraic set. Let U be the interior of im(π_1). Then U is an open set, hence measurable, and integrating over im(π_1) is the same as integrating over U. We therefore have, using (4.1),
$$\mu = \int_{\mathcal U} \Psi(p)\, \#(E \cap L_p)\, \Phi(p)\,\mathrm dp. \tag{4.3}$$
We have shown in the previous subsection that I is a smooth manifold. We may therefore apply the coarea formula (2.3) twice, first to π_1 and then to π_2, to get
$$\mu = \int_{E' \in E} \int_{\pi_2^{-1}(E')} \Psi(p)\, \Phi(p)\, \frac{\mathrm{NJ}(\pi_1, (p, E'))}{\mathrm{NJ}(\pi_2, (p, E'))}\,\mathrm dp\,\mathrm dE'. \tag{4.4}$$
Furthermore, the Gaussian density Φ(p) is invariant under the SO(3) × SO(3) action, and the fiber of π_2 over (U, V).E_0 is the image under (U, V) of the fiber over E_0. The ratio of normal Jacobians is computed next. Recall from (4.2) the definition of the matrix A ∈ R^{5×30}. For B_1, …, B_5 the basis from Lemma 2.5 we denote by B ∈ R^{5×5} the matrix with entries B_{ik} := u_i^T B_k v_i. Then, the tangent space of I at (p, E_0) is defined by the linear equation A ṗ + B Ė = 0 with respect to orthonormal bases. So,
$$\frac{\mathrm{NJ}(\pi_1, (p, E_0))}{\mathrm{NJ}(\pi_2, (p, E_0))} = \frac{|\det B|}{\sqrt{\det(A A^T)}}. \tag{4.5}$$
When B is not invertible, NJ(π_1, (p, E_0)) = 0 and the formula in (4.5) also holds.

4.3. Integration on the quadric. We plug (4.5) into (4.4) and obtain a new expression (4.6) for μ as an integral over the fibers of π_2. We have (u, v) ∈ Q if and only if (u_2, u_3) is a multiple of (v_2, v_3). Therefore, we have the following 2 : 1 parametrization:
$$\phi(a, b, r, s, \theta) := \big((a,\ r\cos\theta,\ r\sin\theta),\ (b,\ s\cos\theta,\ s\sin\theta)\big).$$
Its normal Jacobian is NJ(ϕ, (a, b, r, s, θ)) = √(det(J^T J)), where J is the Jacobian matrix of ϕ. This yields a probability density for which a_i, b_i, r_i, s_i are all standard normal and θ_i is uniform in [0, 2π) for every i = 1, …, 5, with all variables independent. We can therefore rephrase (4.6) as an expected value over the parameters a = (a_i, b_i, r_i, s_i, θ_i)_{i=1}^5. Under this substitution the rows of B become the random vectors z_1, …, z_5 appearing in Theorem 4.2.
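One can check directly that this parametrization lands on the quadric and identifies the two stated preimages; a sketch (numpy; with q(u, v) = u_3v_2 − u_2v_3 as above, helper names ours):

```python
import numpy as np

def q(u, v):
    """The quadric q(u, v) = u3*v2 - u2*v3 (0-based indices below)."""
    return u[2] * v[1] - u[1] * v[2]

def param(a, b, r, s, theta):
    u = np.array([a, r * np.cos(theta), r * np.sin(theta)])
    v = np.array([b, s * np.cos(theta), s * np.sin(theta)])
    return u, v

rng = np.random.default_rng(3)
a, b, r, s, theta = rng.standard_normal(5)

u, v = param(a, b, r, s, theta)
u2, v2 = param(a, b, -r, -s, theta + np.pi)    # the second preimage

print(abs(q(u, v)))                            # ~ 0: the image lies in Q
print(np.allclose(u, u2), np.allclose(v, v2))  # the 2:1 identification
```

The parameters (r, s, θ) and (−r, −s, θ + π) give the same pair (u, v), which is the 2 : 1 behavior used in the integration.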
We state a general integral formula.
Theorem 4.2. With the notation above, the expected value μ = E #(E ∩ L), where the distribution of L is defined by a nonnegative measurable function Ψ : (RP^2)^{×10} → R, is given by an expectation of Ψ((U, V).ϕ(a)) weighted by the absolute determinant |det(z_1, …, z_5)|, where (U, V) ∈ SO(3) × SO(3) is such that E = (U, V).E_0 and the first expected value is over the uniform distribution in E. The second expected value is for a = (a_i, b_i, r_i, s_i, θ_i)_{i=1}^5 with a_i, b_i, r_i, s_i ∼ N(0, 1), θ_i ∼ Unif([0, 2π)), and all variables independent.
We continue Example 4.1 by computing the distribution and approximating the mean value.
Example 4.3. As in Example 4.1 we consider the case where the x_i and y_i are sampled i.i.d. from the box [−5, 5] × [−5, 5] ⊂ R^2. Figure 2 shows the empirical distribution of the number of real zeros and an empirical mean of ≈ 3.788. We can also approximate the average number of real zeros using Theorem 4.2.
We sample from the probability density Ψ((U, V).ϕ(a)) in Theorem 4.2 using the basic version of the Metropolis–Hastings algorithm (see, e.g., [14]). For this, we use the proposal density for (E, a) such that a is as above and E ∈ E is uniform. We computed a corresponding Markov chain with 10^6 states. The Metropolis–Hastings algorithm rejected all but 796 of those states. The empirical mean computed from the 796 states is ≈ 3.5563.
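For readers unfamiliar with the method, here is a minimal random-walk Metropolis–Hastings sketch. It targets a toy density (a standard Gaussian on R^2, our stand-in; evaluating the actual density Ψ((U, V).ϕ(a)) requires the machinery above), but the accept-reject step is the same.

```python
import numpy as np

rng = np.random.default_rng(4)

def target(x):
    """Unnormalized toy density; stand-in for Psi((U, V).phi(a))."""
    return np.exp(-0.5 * x @ x)

x = np.zeros(2)
chain = []
for _ in range(50_000):
    proposal = x + 0.8 * rng.standard_normal(2)     # random-walk proposal
    if rng.uniform() < target(proposal) / target(x):
        x = proposal                                # accept
    chain.append(x)                                 # otherwise keep old state

samples = np.array(chain[5_000:])                   # discard burn-in
print(samples.mean(axis=0))   # ~ (0, 0)
print(samples.var(axis=0))    # ~ (1, 1)
```

Note that only ratios of the target density enter, so the normalization constant of Ψ is never needed, which is what makes the method usable in our setting.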
Let us now work towards proving Theorem 1.3. In the setting of Theorem 1.3 we have Ψ(p) = 1, and thus Theorem 4.2 reduces the expected value to an expected absolute determinant. We have shown in Theorem 3.1 that vol(E) = 4 · vol(RP^5) = 2π³. Consequently, we obtain the formula stated in Theorem 1.3.

We close this section by giving an (extremely coarse) upper bound on the variance of the random determinant. This bound was used for applying Chebyshev's inequality in the introduction.

Proposition 4.4. The variance σ² of |det(z_1, …, z_5)| satisfies σ² ≤ 360.

Proof. We bound the second moment by expanding the determinant and estimating each term, where we use that E_θ cos²θ = E_θ sin²θ = ½. □

The essential zonoid
Vitale [16] showed that the expected absolute determinant of a random matrix can be expressed as the volume of a convex body, more specifically, of a zonoid. Zonoids are limits of zonotopes in the Hausdorff topology on the space of all convex bodies, and zonotopes are Minkowski sums of line segments; see [15] for more details.
Notice that the probability distribution of z from (4.7) is invariant under multiplication by −1; i.e., z ∼ −z. In this case, based on Vitale's result, it was shown in [2, Theorem 5] that
$$\mathop{\mathbb E}_z |\det(z_1, \ldots, z_5)| = 5!\,\mathrm{vol}(K), \tag{5.1}$$
where K ⊂ R^5 is the convex body with support function h_K(x) = ½ E_z |x^T z|. We call K the essential zonoid.
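Vitale's identity can be illustrated in a case where everything is explicit. For a standard Gaussian z in R^2, the zonoid is the disk of radius 1/√(2π), since h_K(x) = ½ E|x^T z| = ∥x∥/√(2π); hence 2! · vol(K) = 1, which matches E|det(z_1, z_2)|. A Monte Carlo sketch (numpy; the Gaussian example is our illustration, not the essential zonoid itself):

```python
import numpy as np

rng = np.random.default_rng(5)

# Monte Carlo estimate of E|det(z1, z2)| for iid standard Gaussian rows.
Z = rng.standard_normal((200_000, 2, 2))
emp = np.abs(np.linalg.det(Z)).mean()

# Vitale: E|det| = 2! * vol(K), with K the disk of radius 1/sqrt(2*pi).
vitale = 2 * np.pi * (1 / np.sqrt(2 * np.pi)) ** 2
print(emp, vitale)   # both ~ 1.0
```

The same identity with n = 5 and the distribution of z from (4.7) is what relates Theorem 1.3 to the volume of the essential zonoid.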
In the remainder of this section, we bound h_K(x) from below to find a convex body whose volume gives a lower bound for vol(K). Using (5.1), this gives the following result; it is important to note that the stated lower bound involves numerical computations.

Proposition 5.1. We have E_{L∼ψ} #(E ∩ L) ≥ 0.93.
Remark 5.2. The value 0.93 is not close to the experimental value 3.95 from the introduction. To get a lower bound closer to 3.95 one would need to understand the support function of K at points x = (x_1, …, x_5) ∈ R^5 where all entries are nonzero. In the computation below we always have either x_1 = x_2 = 0 or x_3 = x_4 = 0. For such points we can work with the function that maps x to the vector of norms ρ = (ρ_1, ρ_2, ρ_3), where ρ_1 = √(x_1² + x_2²), ρ_2 = √(x_3² + x_4²) and ρ_3 = |x_5|. However, if all entries of x are nonzero, the angle between the two points (x_1, x_2), (x_3, x_4) ∈ R^2 also plays a role, not just their norms. We were not able to find a lower bound for h_K(x) in this case. We nevertheless prove Proposition 5.1 for completeness.
We will need the following lemma.
Lemma 5.3. We have the following two formulas for the support function, whose proofs are direct computations. Let us have a closer look at the support function.

Figure 1. The two pie charts show the outcome of the following two experiments. We sampled N = 1000 random linear spaces, once with distribution Unif(G) (the left chart) and once with distribution ψ (the right chart). Then, we computed E ∩ L by solving the system of polynomial equations with the software HomotopyContinuation.jl [4]. The charts show the empirical distribution of real zeros and the corresponding empirical means in these experiments.

Example 4.1. We regard R^2 as a subset of RP^2 by using the embedding ϕ : R^2 → RP^2 such that u := ϕ(y) = [y : 1]. Consider the case where y ∈ R^2 is chosen uniformly in the box B := [a, b] × [c, d] ⊂ R^2. We compute the probability density of u relative to the uniform measure on RP^2. The probability density of y relative to the Lebesgue measure on R^2 is ((b − a)(d − c))^{−1} δ_B(y), where δ_B(y) is the indicator function of the box B. Let W ⊂ ϕ(B) be a measurable subset; then P(u ∈ W) = P(y ∈ ϕ^{−1}(W)) = ∫_{ϕ^{−1}(W)} ((b − a)(d − c))^{−1} dy. Using the coarea formula (2.3) we express this probability as P(u ∈ W) = ∫_W ((b − a)(d − c) · NJ(ϕ, y))^{−1} du. Therefore, the probability density of u is ((b − a)(d − c) · NJ(ϕ, y))^{−1}. Let us compute the normal Jacobian of the map ϕ. Since we can work locally, we compute the derivative of the map y ↦ s := (y_1, y_2, 1)/∥(y_1, y_2, 1)∥ ∈ S^2.

Figure 2. The pie chart shows the outcome of the following experiment, similar to the experiments in Figure 1. We sampled N = 1000 tuples (x_i, y_i)_{i=1}^5, where the x_i and y_i are sampled i.i.d. from the box [−5, 5] × [−5, 5] ⊂ R^2. Then, we computed E ∩ L by solving the system of polynomial equations with the software HomotopyContinuation.jl [4]. The chart shows the empirical distribution of real zeros and the corresponding empirical mean in this experiment.