1 Introduction

A classical result from complex algebraic geometry tells us that on a generic cubic surface in complex projective space there are exactly 27 lines. This is still true for a generic real cubic surface, i.e., for the zero set in complex projective space of a real cubic polynomial; however, these lines might not be real. In fact, the number of real lines on the real zero locus \(Z(P)\subset {\mathbb {R}}{\mathrm {P}}^3\), for a generic \(P\in {\mathbb {R}}[x_0, \ldots , x_3]_{(3)}\) in the space of real homogeneous polynomials of degree 3, can be either 27, 15, 7 or 3, depending on the coefficients of the chosen polynomial [33].

This is a typical phenomenon in real algebraic geometry, where in general there is no “generic” answer to such counting problems. There has, however, been recent interest in looking at these questions from the probabilistic point of view, replacing the word generic with “random”, which in the case of the current paper means asking for the expectation of the number of real lines on a random real cubic surface. This approach has its origin in classical works of Kac [17], Edelman and Kostlan [10], and Shub and Smale [34, 35], and it has recently seen new progress [7, 9, 12,13,14,15, 22,23,24,25,26,27,28, 31, 32], leading to the emergence of the field of random real algebraic geometry.

Of course, when talking about expected quantities, one should specify what is meant by “random”. In this paper we will endow the space \({\mathbb {R}}[x_0, \ldots , x_3]_{(3)}\) with a centered, nondegenerate gaussian distribution, which we require to be invariant under the action of the orthogonal group O(4) by change of variables, so that there are no preferred points or directions in the projective space \({\mathbb {R}}{\mathrm {P}}^3\). Notice that the Darmois–Skitovich theorem, together with a straightforward generalization of Theorem 4.5 of [19], guarantees that gaussianity is a consequence of the independence of the coefficients of the monomials and of the orthogonal invariance. Such a probability distribution will be called an invariant distribution and a polynomial sampled from it will be called an invariant polynomial. Invariant distributions on \({\mathbb {R}}[x_0, \ldots , x_3]_{(3)}\) can be explicitly described: they correspond to scalar products on \({\mathbb {R}}[x_0, \ldots , x_3]_{(3)}\) which are invariant under the action of the orthogonal group O(4) by change of variables, and they are parametrized by a point \((\lambda _1, \lambda _2)\in (0, \infty )\times (0, \infty )\) in the positive quadrant, see [19]. This comes from the fact that there is a decomposition

$$\begin{aligned} {\mathbb {R}}[x_0, \ldots x_3]_{(3)}=\mathcal {H}_3\oplus \Vert x\Vert ^2\cdot \mathcal {H}_1, \end{aligned}$$
(1.1)

where \(\mathcal {H}_3\) and \(\mathcal {H}_1\) denote respectively the space of harmonic cubic polynomials and the space of harmonic linear polynomials (i.e., just linear polynomials). The remarkable fact here is that the decomposition (1.1) is orthogonal with respect to any invariant scalar product; moreover, the action of the orthogonal group by change of variables preserves the two spaces of harmonics, and the induced representation on each of these spaces is irreducible. In particular, on each space of harmonics there is a unique (up to multiples) scalar product which is O(4)-invariant; this explains the two positive parameters needed to describe an invariant distribution.

In practice, to construct a random invariant polynomial, we proceed as follows. First observe that the quantity we are interested in (the number of lines on the zero set, and in fact the zero set itself) does not depend on which multiple of the defining polynomial we take, so we can normalize our parameters to satisfy \(\lambda _1+\lambda _2=1\). In particular we can work with a single parameter \(\lambda \in (0,1)\) such that \((\lambda _1, \lambda _2)=(\lambda , 1-\lambda )\). Consider the \(L^2\)-scalar product, which is defined for \(f, g\in {\mathbb {R}}[x_0, \ldots , x_3]_{(3)}\) by

$$\begin{aligned} ( f, g )_{L^2}=\frac{1}{(2\pi )^2} \int _{{\mathbb {R}}^4}f(x)\,g(x)\, e^{-\frac{\left\Vert x\right\Vert ^2}{2}} \,{\mathrm{d}}x. \end{aligned}$$

Then we fix bases \(\{H_{3, j}\}_{j\in J_3=\{1,\dots ,16\}}\) for \(\mathcal {H}_3\) and \(\{H_{1, j}\}_{j\in J_1=\{1,\dots ,4\}}\) for \(\mathcal {H}_1\) which are orthonormal with respect to the \(L^2\)-scalar product. With these choices we define a random polynomial \(P_\lambda \) as a linear combination of random harmonics, weighted by the parameters:

$$\begin{aligned} P_{\lambda }(x)=\lambda \left( \sum _{j\in J_3}\xi _{3, j}\cdot H_{3, j}(x)\right) +(1-\lambda )\left( \sum _{j\in J_1}\xi _{1, j}\cdot \Vert x\Vert ^2H_{1, j}(x)\right) , \end{aligned}$$
(1.2)

where \(\{\xi _{3, j}\}_{j\in J_3}\) and \(\{\xi _{1, j}\}_{j\in J_1}\) are two independent families of independent standard gaussians. We also include in our study the choices \(\lambda =0\) and \(\lambda =1\), which correspond to purely harmonic polynomials (but not to scalar products). The case \(\lambda = 0\) is special also for another reason: the associated hypersurface is a degenerate cubic, namely a hyperplane.

Example 1

(The Kostlan distribution) A Kostlan random polynomial is defined by

$$\begin{aligned} P(x)=\sum _{|\alpha |=3}\xi _\alpha \cdot \left( \frac{3!}{\alpha _0!\cdots \alpha _3!}\right) ^{1/2}x_0^{\alpha _0}\cdots x_3^{\alpha _3} \end{aligned}$$
(1.3)

where \(\{\xi _{\alpha }\}_{|\alpha |=3}\) is a family of independent standard gaussians. The resulting probability distribution on \({\mathbb {R}}[x_0, \ldots x_3]_{(3)}\) is invariant and corresponds to the choice of \(\lambda =\frac{1}{3}\) in (1.2) (see Corollary 2). The authors of [4] have proved that the expectation of the number of real lines on the zero set of a random Kostlan cubic equals:

$$\begin{aligned} E_{\frac{1}{3}}=6\sqrt{2}-3. \end{aligned}$$

Generalizing the work of [4], in this paper we give an explicit formula for the expectation of the number of real lines on a random invariant cubic, as a function of the parameter \(\lambda \in [0,1]\).

Theorem 1

The expected number of real lines on the zero set of the random cubic polynomial \(P_\lambda \) equals:

$$\begin{aligned} E_\lambda =\frac{9(8\lambda ^2+(1-\lambda )^2)}{2\lambda ^2+(1-\lambda )^2}\left( \frac{2\lambda ^2}{8\lambda ^2+(1-\lambda )^2}-\frac{1}{3}+\frac{2}{3}\sqrt{\frac{8\lambda ^2+(1-\lambda )^2}{20\lambda ^2+(1-\lambda )^2}}\right) . \end{aligned}$$
(1.4)
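For concreteness, formula (1.4) can be evaluated numerically; the following sketch (our code, with our own naming) recovers the special values from Example 1, Corollary 1 and Remark 2 below.

```python
import math

def expected_lines(lam):
    # Formula (1.4): expected number of real lines on Z(P_lambda)
    A = 8*lam**2 + (1 - lam)**2
    B = 2*lam**2 + (1 - lam)**2
    C = 20*lam**2 + (1 - lam)**2
    return 9*A/B * (2*lam**2/A - 1/3 + (2/3)*math.sqrt(A/C))

print(expected_lines(0))                          # 3.0 (Remark 2)
print(expected_lines(1/3), 6*math.sqrt(2) - 3)    # both ~5.4853 (Example 1)
print(expected_lines(1), 24*math.sqrt(2/5) - 3)   # both ~12.1789 (Corollary 1)
```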

An interesting corollary of the previous theorem is that one can prove analytically that the expectation is maximized at \(\lambda =1\), i.e., for random purely harmonic cubics.

Corollary 1

The function \(E_{\lambda }\) is monotone increasing and attains its maximum at \(\lambda =1\):

$$\begin{aligned} E_1=24\sqrt{\frac{2}{5}}-3. \end{aligned}$$

Remark 1

The previous corollary is particularly interesting because it confirms the intuition that purely harmonic polynomials of maximum degree exhibit complicated topological configurations, see [20].

Remark 2

On the other hand, the minimum of the function \(E_{\lambda }\) is 3, and this number has a precise meaning. In fact we prove in Proposition 2 that there exists a neighborhood of the set of purely harmonic polynomials of degree one (i.e., a linear form times \(\Vert x\Vert ^2\)) such that the smooth cubic surfaces in this neighborhood contain exactly three lines. The proof of this fact does not involve the expression of the function \(E_{\lambda }\); therefore we could deduce that \(E_{0} = 3\) without knowing (1.4).

Remark 3

Another possible model of random cubics can be introduced following the work of Allcock et al. [3]. They studied the moduli space of real cubic surfaces from the point of view of hyperbolic geometry and computed the orbifold Euler characteristic (which is proportional to the hyperbolic volume) of each component of the moduli space. One can then define an expectation by averaging the number of real lines over the components, weighted by the volume of the corresponding component. In this way one gets an expected number of \(\frac{239}{37}\) real lines, see [3, Table 1.2].

Remark 4

Yet another model of randomness can be obtained by looking at random determinantal cubics. To be more specific, consider random \(3\times 3\) matrices \(A_0, A_1, A_2, A_3\) filled with independent standard gaussians, and define the random polynomial:

$$\begin{aligned} F(x_0, x_1, x_2, x_3)=\det (x_0A_0+x_1A_1+x_2A_2+x_3A_3). \end{aligned}$$
(1.5)

The distribution of random determinantal cubics is O(4)-invariant. Smooth cubics admit a determinantal representation (i.e., they can be written as the zero set of some F as in (1.5)), see [5, 6]. It is therefore natural to ask for the expectation of the number of real lines on a random determinantal cubic surface \(Z(F)\subset {\mathbb {R}}{\mathrm {P}}^3\); however, this problem seems to be considerably more complicated than the gaussian one considered here (the coefficients of F in (1.5) are cubic in gaussian variables and are also highly dependent), and we leave it as an open question.
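For readers who wish to experiment with this model, here is a minimal sketch (ours, not from [5, 6]) of how one can sample a random determinantal cubic and extract the polynomial F symbolically:

```python
import numpy as np
import sympy as sp

x0, x1, x2, x3 = sp.symbols('x0 x1 x2 x3')

def random_determinantal_cubic(rng):
    # F(x) = det(x0*A0 + x1*A1 + x2*A2 + x3*A3), Ai with i.i.d. N(0,1) entries
    A = rng.standard_normal((4, 3, 3))
    M = sp.Matrix(3, 3, lambda i, j: x0*A[0, i, j] + x1*A[1, i, j]
                                     + x2*A[2, i, j] + x3*A[3, i, j])
    return sp.expand(M.det())

rng = np.random.default_rng(0)
F = random_determinantal_cubic(rng)
print(sp.Poly(F, x0, x1, x2, x3).total_degree())  # 3: a random cubic in 4 variables
```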

Remark 5

As said in the beginning, our approach is probabilistic and our answer depends on the probability distribution we have chosen. It is important to mention that there exists also a certain signed count of lines on a (generic) real cubic surface that is independent of the surface itself. For this type of count, following classical work of Segre [33] (later rediscovered and extended by Okonek and Teleman [29] and by Kharlamov and Finashin [11]), one can classify the lines lying on the cubic into elliptic and hyperbolic. This corresponds to giving a sign to each line. The number e of elliptic lines and the number h of hyperbolic lines each depend on the cubic, but the difference \(h-e\) is always 3. Following [18], one can further extend this type of signed count to a different field \({\mathbb {K}}\) (for instance the p-adic numbers \({\mathbb {K}}={\mathbb {Q}}_p\)). In this case a line is a closed point in the Grassmannian of lines in \({\mathbb {P}}_{\mathbb {K}}^3\). The sign, now called the type, takes values in the Grothendieck–Witt group GW(\({\mathbb {K}}\)) of non-degenerate bilinear forms and depends on the field of definition of the line. With these specifications one again gets an invariant count, see [18, Theorem 2]. An interesting question is: what happens over the p-adic numbers? In this direction [18, Theorem 2] gives a well-defined enriched count but, in the spirit of the current paper, it also makes sense to ask for the expected number of \({\mathbb {Q}}_p\)-lines on a random p-adic cubic. This question has been studied by the first named author of this paper together with Lerario in [2], and the answer is \(\tfrac{(p^3-1)(p^2+1)}{p^5-1}\).

Remark 6

The study of cubic surfaces has recently been enriched by the famous 27 questions posed by Sturmfels, which are collected in [30]. In the spirit of this paper, some of those questions can be restated from a probabilistic point of view. For example, looking at question 23 and putting a probability distribution on \({\mathbb {R}}{\mathrm {P}}^{19}\), instead of asking for a semialgebraic description of the set of smooth hyperbolic cubics in \({\mathbb {R}}{\mathrm {P}}^{19}\), one could ask for the probability that a smooth cubic is hyperbolic.

2 Preliminaries

2.1 The Decomposition into Harmonic Polynomials and Invariant Scalar Products

Let us consider the space of real d-homogeneous polynomials \(W_{n,d}={\mathbb {R}}[x_0, \ldots x_n]_{(d)}\). The orthogonal group \(O(n+1)\) acts on it by change of variables, so that we can view \(W_{n,d}\) as a representation of \(O(n+1)\). We want to find the decomposition of \(W_{n,d}\) into its irreducible subrepresentations.

Denote the space of real homogeneous harmonic polynomials of degree d in \(n + 1\) variables by

$$\begin{aligned} \mathcal {H}^{n}_{d}:=\{H \in W_{n,d} : \Delta H = 0\}. \end{aligned}$$

This space is invariant with respect to \(O(n+1)\) and the following algebraic decomposition holds (see [4]):

$$\begin{aligned} W_{n,d}=\bigoplus _{d-j\in 2{\mathbb {N}}}\left\Vert x\right\Vert ^{d-j}\mathcal {H}_{j}^{n}. \end{aligned}$$
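For example, recall the classical dimension count \(\dim \mathcal {H}^{n}_{d}=\binom{n+d}{d}-\binom{n+d-2}{d-2}\). In the case \(n=d=3\) relevant for us this gives \(\dim W_{3,3}=\binom{6}{3}=20=16+4=\dim \mathcal {H}^3_3+\dim \mathcal {H}^3_1\), in agreement with (1.1).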

Moreover the spaces \(\left\Vert x\right\Vert ^{d-j} \mathcal {H}^{n}_{j}\) form irreducible representations of \(O(n+1)\) and are orthogonal with respect to any \(O(n+1)\)-invariant scalar product. Let us denote by \((\cdot ,\cdot )\) a generic real scalar product on \(W_{n,d}\) which is invariant under the action of the orthogonal group \(O(n+1)\); we will use the notation \((\cdot ,\cdot )_2\) for the \(L^2\) scalar product, which is by definition

$$\begin{aligned} (f,g)_2 = \frac{1}{(2\pi )^{\frac{n+1}{2}}}\int _{{\mathbb {R}}^{n+1}} f(x)g(x) e^{-\frac{\left\Vert x\right\Vert ^2}{2}} {\mathrm{d}}x, \quad f,g\in W_{n,d}. \end{aligned}$$

As a consequence of Schur's lemma (see [36, Lemma 18.1.1]), the restriction of \((\cdot ,\cdot )\) to the space \(\left\Vert x\right\Vert ^{d-j}\mathcal {H}_{j}^{n}\) is a multiple of the \(L^2\) scalar product. So given \(f,g\in W_{n,d}\) we can write \(f=\sum _{j}\left\Vert x\right\Vert ^{d-j}f_j\) and \(g=\sum _{j}\left\Vert x\right\Vert ^{d-j}g_j\), with \(f_j,g_j\in \mathcal {H}_j^n\) and j ranging over the indices with \(d-j\in 2{\mathbb {N}}\), and we have that

$$\begin{aligned} (f,g)=\sum _{d-j\in 2{\mathbb {N}}}\mu _j(\left\Vert x\right\Vert ^{d-j}f_j,\left\Vert x\right\Vert ^{d-j}g_j)_2 \end{aligned}$$

for some \(\mu _d\), \(\mu _{d-2}\), \(\ldots > 0\).

Given an invariant scalar product \((\cdot ,\cdot )\) we can construct a gaussian probability distribution which is invariant under rotations. First we fix a basis \(\{\left\Vert x\right\Vert ^{d-j} H_{j,i}\}_{i \in J_j}\) of \(\left\Vert x\right\Vert ^{d-j}{\mathcal {H}}^n_j\) which is orthonormal with respect to \((\cdot ,\cdot )_2\), where \(J_j = \{1, \ldots , \dim (\mathcal {H}^n_j)\}\). Then \(\{\lambda _j \left\Vert x\right\Vert ^{d-j} H_{j,i}\}\) is an orthonormal basis with respect to \((\cdot ,\cdot )\), where \(\lambda _j = \mu _j^{-\frac{1}{2}}\). Using such a basis we construct a random polynomial whose coefficients are given by independent standard gaussian random variables \(\xi _{j,i}\sim N(0,1)\):

$$\begin{aligned} P(x) = \sum _{d-j \in 2{\mathbb {N}}} \lambda _j \sum _{i\in J_j} \xi _{j,i} \left\Vert x\right\Vert ^{d-j} H_{j,i}(x). \end{aligned}$$

In our case we have that

$$\begin{aligned} W_{3,3} =\mathcal {H}_3^3\oplus \left\Vert x\right\Vert ^2 \mathcal {H}_1^3 \end{aligned}$$

and therefore we only need two parameters to classify all the invariant scalar products: writing \(f=f_3+\Vert x\Vert ^2 f_1\) and \(g=g_3+\Vert x\Vert ^2 g_1\) with \(f_3, g_3\in \mathcal {H}_3^3\) and \(f_1, g_1\in \mathcal {H}_1^3\), we have

$$\begin{aligned} (f,g)=\mu _1( f_3,g_3)_2+\mu _2\big ( \Vert x\Vert ^2 f_1,\Vert x\Vert ^2 g_1\big )_2. \end{aligned}$$
Table 1 Orthogonal basis for Theorem 1

Let us fix bases \(\{H_{3, i}\}_{i\in J_3=\{1,\dots ,16\}}\) for \(\mathcal {H}_3^3\) and \(\{\Vert x\Vert ^2 H_{1, i}\}_{i\in J_1=\{1,\dots ,4\}}\) for \(\Vert x\Vert ^2 \mathcal {H}_1^3\) which are orthonormal with respect to the \(L^2\)-scalar product; then \(\{\frac{1}{\sqrt{\mu _1}}H_{3, i}\}_{i\in J_3}\cup \{\frac{\Vert x\Vert ^2}{\sqrt{\mu _2}}H_{1, i}\}_{i\in J_1}\) is an orthonormal basis with respect to our scalar product. Notice that, since for our purposes we just need to classify scalar products up to constants, we can rescale our parameters so that they sum to 1 and obtain the following random polynomial

$$\begin{aligned} P_{\lambda }(x)=\lambda \left( \sum _{i\in J_3}\xi _{3, i}\cdot H_{3, i}(x)\right) +(1-\lambda )\left( \sum _{i\in J_1}\xi _{1, i}\cdot \Vert x\Vert ^2H_{1, i}(x)\right) \end{aligned}$$
(2.1)

where \(\xi _{j,i}\sim N(0,1)\) are independent standard gaussians. In the proof of Theorem 1 we will use the explicit orthogonal basis for \(W_{3,3}\) shown in Table 1.

Remark 7

Notice that we will also take into account the limit cases \(\lambda = 0\) and \(\lambda = 1\) of pure harmonics of degree 1 and 3, respectively.

Corollary 2

The Kostlan distribution (1.3) corresponds to the choice \(\lambda =\frac{1}{3}\) in (1.2).

Proof

Take the basis element \(x_0x_1x_2\). Its \(L^2\) norm is 1, while its Kostlan norm is \(\frac{1}{\sqrt{6}}\); therefore we get \(\mu _1=\frac{1}{6}\). Consider now the element \(x_0(x_0^2+x_1^2+x_2^2+x_3^2)\). Its \(L^2\) norm is \(4\sqrt{3}\), while its Kostlan norm is \(\sqrt{2}\); therefore \(\mu _2=\frac{1}{24}\). We look for \(\alpha \in {\mathbb {R}}\) such that \(\alpha \frac{1}{\sqrt{\mu _1}}+\alpha \frac{1}{\sqrt{\mu _2}}=1\), i.e., \(\alpha =\frac{1}{3\sqrt{6}}\). So in the end \(\lambda =\alpha \frac{1}{\sqrt{\mu _1}}=\frac{1}{3}\).
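These norms are easy to check numerically: with the gaussian normalization above, the squared \(L^2\) norm of f is \({\mathbb {E}}[f(\xi )^2]\) for a standard gaussian vector \(\xi \) in \({\mathbb {R}}^4\), while the Kostlan norm can be read off the monomial coefficients. A Monte Carlo sketch (ours, not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((2 * 10**6, 4))        # samples of xi ~ N(0, Id_4)

f1 = X[:, 0] * X[:, 1] * X[:, 2]               # x0*x1*x2
f2 = X[:, 0] * (X**2).sum(axis=1)              # x0*(x0^2+x1^2+x2^2+x3^2)

print(np.sqrt(np.mean(f1**2)))                 # ~1: L2 norm of x0*x1*x2
print(np.sqrt(np.mean(f2**2)), 4*np.sqrt(3))   # both ~6.93 = 4*sqrt(3)

# Kostlan squared norm of a monomial x^a is a!/3!, so:
print(1/6)            # x0*x1*x2 -> Kostlan norm 1/sqrt(6)
print(1 + 3 * (2/6))  # x0^3 plus three x0*xi^2 terms -> 2, i.e. norm sqrt(2)
```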

2.2 Vector Bundles and the Kac–Rice Formula

In this section we recall the construction from [4, Theorem 1]. Let \(\text {Gr}^+(2,4)\) denote the Grassmannian of oriented 2-planes in \({\mathbb {R}}^4\), which we identify with its image in \({\mathbb {S}}^5\) under the spherical Plücker embedding. It can be seen as the set of simple, norm-one vectors in the second exterior power of \({\mathbb {R}}^4\). Denote by g the Riemannian metric induced by this embedding. Let \(\hbox {Sym}^3(\tau ^*_{2,4})\) be the 3rd symmetric power of the dual of the tautological bundle on \(\text {Gr}^+(2,4)\). For every \(f\in {\mathbb {R}}[x_0,\ldots ,x_3]_{(3)}\), we define a section \(\sigma _f\) of the bundle \(\hbox {Sym}^3(\tau ^*_{2,4})\) by setting \(\sigma _f(W)=f|_W\), the restriction of f to \(W\in \text {Gr}^+(2,4)\). In this way our main problem of finding the expected number of lines in the surface \(Z(P_{\lambda })\subseteq {\mathbb {R}}{\mathrm {P}}^3\) becomes that of computing

$$\begin{aligned} E_{\lambda } ={\mathbb {E}}\#\lbrace W\in \text {Gr}(2,4) \mid \sigma _{P_{\lambda }}(W)=0\rbrace \end{aligned}$$

where \(\text {Gr}(2,4)\) denotes the Grassmannian of 2-planes in \({\mathbb {R}}^4\), whose double cover is given by \(\text {Gr}^+(2,4)\). We recall now the following theorem which is an essential tool for this computation.

Theorem 2

(Kac–Rice formula [1]) Let (M, g) be a Riemannian manifold of dimension m and \(X:M\rightarrow {\mathbb {R}}^m\) be a smooth random map such that

  (i) for every \(t\in M\), the random vector X(t) has a gaussian nondegenerate distribution;

  (ii) the probability that X has degenerate zeroes in M is zero.

Then, denoting by \(p_{X(t)}\) the density function of X(t), for every measurable set \(U\subset M\) the expected number of zeroes of X in U is given by the formula:

$$\begin{aligned} {\mathbb {E}}\,\#(\{X=0\}\cap U)=\int _{U} {\mathbb {E}}\lbrace |{\det (\hat{J}X(t))}| \mid X(t)=0 \rbrace \, p_{X(t)}(0)\cdot w_U(t) \end{aligned}$$

where \(w_U\) is the volume form induced by the Riemannian metric g and \(\hat{J}X(t)\) denotes the matrix of the derivatives of the components of X with respect to an orthonormal frame field.
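To illustrate the kind of computation Theorem 2 enables, here is the classical one-dimensional instance (due to Edelman and Kostlan [10], not part of this paper's argument): a random univariate Kostlan polynomial \(p(t)=\sum _{k}\xi _k\binom{d}{k}^{1/2}t^k\) has \(\sqrt{d}\) expected real roots, which can be confirmed by direct sampling:

```python
import math
import numpy as np

rng = np.random.default_rng(0)
d = 9
weights = np.array([math.comb(d, k) for k in range(d + 1)], dtype=float) ** 0.5

def count_real_roots(coeffs, tol=1e-7):
    # np.roots expects coefficients ordered from the highest to the lowest degree
    roots = np.roots(coeffs[::-1])
    return int(np.sum(np.abs(roots.imag) < tol))

counts = [count_real_roots(rng.standard_normal(d + 1) * weights)
          for _ in range(20000)]
print(np.mean(counts), np.sqrt(d))   # both close to 3.0
```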

For \(i=1,2\) and \(j=3,4\), let \(E_{i,j}\) be the matrix that has 1 in position (i, j), \(-1\) in position (j, i) and 0 otherwise; then \(e^{tE_{i,j}}\in O(4)\). Let \(e_0,e_1,e_2,e_3\) be the standard basis vectors of \({\mathbb {R}}^4\), let \(t = (t_{1,3},t_{1,4},t_{2,3},t_{2,4})\in {\mathbb {R}}^{2 \times 2}\), and consider the function

$$\begin{aligned} R(t,v)=\exp \left( t_{1,3}E_{1,3}+t_{1,4}E_{1,4}+t_{2,3}E_{2,3}+t_{2,4}E_{2,4}\right) v, \qquad v\in {\mathbb {R}}^4. \end{aligned}$$

Then \(\phi :{\mathbb {R}}^{2\times 2} \rightarrow \text {Gr}^+(2,4)\) defined by \(\phi (t)=R(t,e_0)\wedge R(t,e_1)\) is a local parametrization of \(\text {Gr}^+(2,4)\) around \(e_0\wedge e_1\). In fact this is the Riemannian exponential map centered at \(e_0\wedge e_1\) (see [21]).

Hence \(\phi ^{-1} :U \rightarrow {\mathbb {R}}^{2\times 2}\) is a coordinate chart on a neighborhood U of \(e_0\wedge e_1\), and we get a trivialization of the bundle \(\hbox {Sym}^3(\tau ^*_{2,4})\) over U as follows:

$$\begin{aligned} h :\hbox {Sym}^3(\tau ^*_{2,4})\big |_U \longrightarrow U\times {\mathbb {R}}[y_0,y_1]_{(3)}, \end{aligned}$$

where \(h(f)=(W,f(R(\phi ^{-1}(W),\cdot )))\) for every \(f\in \hbox {Sym}^3(\tau ^*_{2,4})|_W\).

Remark 8

Since \(\text {Gr}^+(2,4)\) is compact and connected, and the map \(\phi \) is a Riemannian exponential map, \(\phi \) is surjective. We can take U to be the largest domain on which \(\phi \) is a diffeomorphism: then \(\text {Gr}^+(2,4){\setminus } U\) is the cut locus at \(e_0\wedge e_1\) (see [8, Theorem III.2.2]) and it has measure 0. So integrating over U is equivalent to integrating over \(\text {Gr}^+(2,4)\).
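Assuming the reconstruction of R above, one can verify numerically that \(\phi (t)\) indeed lands in \(\text {Gr}^+(2,4)\subset {\mathbb {S}}^5\), i.e., that the resulting 2-vector has norm one and satisfies the Plücker relation; a sketch (ours):

```python
import numpy as np
from scipy.linalg import expm

def E_mat(i, j):
    # the skew-symmetric matrix E_{i,j} (here with 0-based indices)
    M = np.zeros((4, 4))
    M[i, j], M[j, i] = 1.0, -1.0
    return M

def phi(t13, t14, t23, t24):
    # R(t, .) applied to e0 and e1, then the wedge product in Plücker coordinates
    R = expm(t13*E_mat(0, 2) + t14*E_mat(0, 3) + t23*E_mat(1, 2) + t24*E_mat(1, 3))
    u, v = R[:, 0], R[:, 1]
    return np.array([u[i]*v[j] - u[j]*v[i]
                     for i in range(4) for j in range(i + 1, 4)])

p = phi(0.3, -0.7, 0.2, 1.1)
print(np.linalg.norm(p))                  # 1.0: phi(t) lies on S^5
print(p[0]*p[5] - p[1]*p[4] + p[2]*p[3])  # ~0: the Plücker relation holds
```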

Take the polynomial \(P_{\lambda }\) in (2.1) and define

$$\begin{aligned} \tilde{\sigma }_{P_{\lambda }} :U \rightarrow {\mathbb {R}}[y_0,y_1]_{(3)} \simeq {\mathbb {R}}^4 \end{aligned}$$

in such a way that \(h(\sigma _{P_{\lambda }}(W))=(W,\tilde{ \sigma }_{P_{\lambda }}(W))\). So we can apply the Kac–Rice formula to \(X=\tilde{\sigma }_{P_{\lambda }}\circ \phi :\phi ^{-1}(U) \rightarrow {\mathbb {R}}^4\):

$$\begin{aligned} {\mathbb {E}}\#\{\tilde{\sigma }_{P_{\lambda }}=0\}={\mathbb {E}}\#\{X=0\}&=\int _{\phi ^{-1}(U)} {\mathbb {E}}\{|\det (\hat{J}X(t))| \mid X(t)=0\}p_{X(t)}(0)\cdot \phi ^*w_{\text {Gr}^+(2,4)}(t)\\&= \int _U {\mathbb {E}}\{|\det (J(W))| \mid \tilde{\sigma }_{P_{\lambda }}(W)=0\}p(0,W)\cdot w_{\text {Gr}^+(2,4)}(W)\\&= \int _{\text {Gr}^+(2,4)} {\mathbb {E}}\{|\det (J(W))| \mid \tilde{\sigma }_{P_{\lambda }}(W)=0\}p(0,W)\cdot w_{\text {Gr}^+(2,4)}(W), \end{aligned}$$

where here \(\phi ^{-1}(U)\) is endowed with the pull-back metric \(\phi ^*g\), p(0, W) denotes the density at zero of \(\tilde{\sigma }_{P_{\lambda }}(W)\), and J(W) is the matrix of the derivatives at W of the components of \(\tilde{\sigma }_{P_{\lambda }}\) with respect to an orthonormal frame field, which we will simply call the Jacobian matrix.

The fact that the distribution of \(P_{\lambda }\) is O(4)-invariant implies that the function

$$\begin{aligned} C(W):={\mathbb {E}}\{|\det (J(W))| \mid \tilde{\sigma }_{P_{\lambda }}(W)=0\}p(0,W) \end{aligned}$$

is a constant C which does not depend on W. Indeed, let \(W_1\) and \(W_2\) be two elements of \(\text {Gr}^+(2,4)\), and let \(k\in O(4)\) be such that \(k(W_1)=W_2\). Then, by the Kac–Rice formula, we have

$$\begin{aligned} C(W_1)&=\lim _{\epsilon \rightarrow 0^+}\frac{1}{\mathrm {vol}(B(W_1,\epsilon ))}\int _{B(W_1,\epsilon )}{\mathbb {E}}\{|\det (J(W))| \mid \tilde{\sigma }_{P_{\lambda }}(W)=0\}\,p(0,W)\cdot w_{\text {Gr}^+(2,4)}(W)\\&=\lim _{\epsilon \rightarrow 0^+}\frac{1}{\mathrm {vol}(B(W_1,\epsilon ))}{\mathbb {E}}\#(\{\tilde{\sigma }_{P_{\lambda }}(W)=0\}\cap B(W_1,\epsilon ))\\&=\lim _{\epsilon \rightarrow 0^+}\frac{1}{\mathrm {vol}(B(W_1,\epsilon ))}{\mathbb {E}}\#(\{\tilde{\sigma }_{P_{\lambda }}\circ k^{-1}(W)=0\}\cap k(B(W_1,\epsilon )))\\&=\lim _{\epsilon \rightarrow 0^+}\frac{1}{\mathrm {vol}(B(W_2,\epsilon ))}{\mathbb {E}}\#(\{\tilde{\sigma }_{P_{\lambda }}(W)=0\}\cap B(W_2,\epsilon ))\\&=C(W_2), \end{aligned}$$

denoting by \(B(W_i,\epsilon )\) the ball around \(W_i\) of radius \(\epsilon \). Therefore the expected number of zeros of the section is

$$\begin{aligned} {\mathbb {E}}\#\{\tilde{\sigma }_{P_{\lambda }}=0\}=C\cdot \mathrm {vol}(\text {Gr}^+(2,4)) \end{aligned}$$

where \(\mathrm {vol}(\text {Gr}^+(2,4))\) is the volume of \(\text {Gr}^+(2,4)\). Moreover we will show in the proof of Theorem 1 that \(\tilde{\sigma }_{P_{\lambda }}(W)\) and J(W) are independent random variables (for a certain W), and in that case

$$\begin{aligned} {\mathbb {E}}\{|\det (J(W))| \mid \tilde{\sigma }_{P_{\lambda }}(W)=0\}= {\mathbb {E}}\{|\det (J(W))|\}. \end{aligned}$$

Because \(\text {Gr}^+(2,4)\) is a double covering of \(\text {Gr}(2,4)\), in the end we get that

$$\begin{aligned} E_{\lambda }&={\mathbb {E}}\#\{ W\in Gr(2,4) \mid \sigma _{P_{\lambda }}(W)=0\} \nonumber \\&=\frac{1}{2}\,{\mathbb {E}}\#\{ W\in Gr^+(2,4) \mid \sigma _{P_{\lambda }}(W)=0\} \nonumber \\&=\frac{1}{2}\,C\cdot \mathrm {vol}(Gr^+(2,4)) \nonumber \\&={\mathbb {E}}\{|\det (J(W_0))|\}\cdot \mathrm {vol}(Gr(2,4))\cdot p(0,W_0) \end{aligned}$$
(2.2)

for a fixed \(W_0\in Gr^+(2,4)\).

Let us now focus on the Jacobian matrix: write the polynomial \(P_{\lambda }\) in the monomial basis as

$$\begin{aligned} P_{\lambda }= \sum _{|i|=3} \beta _{i_0,i_1,i_2,i_3}y_0^{i_0}y_1^{i_1}y_2^{i_2}y_3^{i_3} \end{aligned}$$

and choose \(W_0=e_0\wedge e_1\); since \(W_0=\phi (0)\), we have

$$\begin{aligned} \tilde{\sigma }_{P_{\lambda }}(W_0)=\sigma _{P_{\lambda }}(W_0)=\sum _{|i|=3} \beta _{i_0,i_1,0,0}y_0^{i_0}y_1^{i_1}. \end{aligned}$$

As in the proof of [4, Theorem 2] we can compute the matrix \(J(W_0)\) that turns out to be:

$$\begin{aligned}J(W_0)=\begin{bmatrix} \beta _{2,0,1,0} &{} 0 &{} \beta _{2,0,0,1} &{} 0 \\ \beta _{1,1,1,0} &{} \beta _{2,0,1,0} &{} \beta _{1,1,0,1} &{} \beta _{2,0,0,1}\\ \beta _{0,2,1,0} &{} \beta _{1,1,1,0} &{} \beta _{0,2,0,1} &{} \beta _{1,1,0,1} \\ 0 &{} \beta _{0,2,1,0} &{} 0 &{} \beta _{0,2,0,1} \end{bmatrix}. \end{aligned}$$

This matrix will be used in the proof of the main theorem.

Proof of Theorem 1

Fix the orthogonal basis \(\{\tilde{H}_{3,j}\}_{j\in \{1,\dots ,16\}} \cup \{\tilde{H}_{1,j}\}_{j\in \{1,\dots ,4\}}\) introduced in Table 1 for the space \(W_{3,3}\). Then our random polynomial is

$$\begin{aligned} P_{\lambda }(x)= & {} \lambda \left( \sum _{j=1}^4\xi _{3, j}\cdot \tilde{H}_{3, j}(x) + \sum _{j=5}^8\xi _{3, j}\cdot \frac{\tilde{H}_{3, j}(x)}{2\sqrt{3}} + \sum _{j=9}^{12}\xi _{3, j}\cdot \frac{\tilde{H}_{3, j}(x)}{2} + \sum _{j=13}^{16}\xi _{3, j}\cdot \frac{\tilde{H}_{3, j}(x)}{\sqrt{3}} \right) \\&+(1-\lambda )\left( \sum _{j=1}^4\xi _{1, j}\cdot \Vert x\Vert ^2 \frac{\tilde{H}_{1, j}(x)}{4\sqrt{3}}\right) . \end{aligned}$$

Expanding this harmonic basis in the monomial basis, we can compute the Jacobian directly as above, obtaining:

$$\begin{aligned} J(W_0)=\begin{bmatrix} \bar{x}- \bar{y} &{} 0 &{} \bar{x'} - \bar{y'} &{} 0 \\ \bar{z} &{} \bar{x}- \bar{y} &{} \bar{z'} &{} \bar{x'} - \bar{y'}\\ \bar{x} + \bar{y} &{} \bar{z} &{} \bar{x'} + \bar{y'} &{} \bar{z'} \\ 0 &{} \bar{x} + \bar{y} &{} 0 &{} \bar{x'} + \bar{y'} \end{bmatrix} \end{aligned}$$

where these new gaussians

$$\begin{aligned}&\bar{x} = -\frac{\lambda }{2\sqrt{3}}\xi _{3,7} + \frac{\lambda }{2\sqrt{3}}\xi _{3,15} + \frac{(1-\lambda )}{4\sqrt{3}}\xi _{1,3} \sim N\left( 0,\sqrt{\frac{\lambda ^2}{6} + \frac{(1-\lambda )^2}{48}}\right) \\&\bar{x'} = -\frac{\lambda }{2\sqrt{3}}\xi _{3,8} + \frac{\lambda }{2\sqrt{3}}\xi _{3,16} + \frac{(1-\lambda )}{4\sqrt{3}}\xi _{1,4} \sim N\left( 0,\sqrt{\frac{\lambda ^2}{6} + \frac{(1-\lambda )^2}{48}}\right) \\&\bar{y} = \frac{\lambda }{2}\xi _{3,11} \sim N\left( 0, \frac{\lambda }{2} \right) \\&\bar{y'} =- \frac{\lambda }{2}\xi _{3,12} \sim N\left( 0, \frac{\lambda }{2} \right) \\&\bar{z} = \lambda \, \xi _{3,1} \sim N\left( 0, \lambda \right) \\&\bar{z'} = \lambda \,\xi _{3,2} \sim N\left( 0, \lambda \right) \end{aligned}$$

are again independent. On the other hand, when we compute \(\tilde{\sigma }_{P_{\lambda }}(W_0)\), the only basis elements whose restriction to \(W_0\) does not vanish are \(H_{3,5}, H_{3,6}, H_{3,9}, H_{3,13}, H_{3,14}, H_{1,1}, H_{1,2}\), so this section and \(J(W_0)\) are independent. Therefore, thanks to Eq. (2.2), we are left with

$$\begin{aligned} E_\lambda = {\mathbb {E}}\{|\det (J(W_0))|\}\cdot \mathrm {vol}(Gr(2,4))\cdot p(0,W_0). \end{aligned}$$

Let us compute \({\mathbb {E}}\{|\det (J(W_0))|\}\). We will use the following notation: if \(\bar{t} \sim N(0,\eta )\) we will write \(t= \frac{1}{\eta } \bar{t} \sim N(0,1)\). It turns out after some computations that

$$\begin{aligned} \det (J(W_0)) = \left( \frac{\lambda ^2}{6} + \frac{(1-\lambda )^2}{48} \right) \lambda ^2 \alpha ^2 - \frac{\lambda ^4}{4} \beta ^2 + \left( \frac{\lambda ^2}{6} + \frac{(1-\lambda )^2}{48} \right) \lambda ^2 \gamma ^2 \end{aligned}$$

where \(\alpha =xy'-x'y\), \(\beta =y'z - yz'\), \(\gamma =x'z-xz'\) are quadratic forms in gaussians.
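This formula can be double-checked symbolically: in the “barred” variables the determinant equals \(4\bar{\alpha }^2-\bar{\beta }^2+\bar{\gamma }^2\), and rescaling by the standard deviations above gives the displayed expression. A sketch of the verification (ours):

```python
import sympy as sp

xb, yb, zb, xbp, ybp, zbp = sp.symbols('xb yb zb xbp ybp zbp')
J = sp.Matrix([
    [xb - yb, 0,       xbp - ybp, 0        ],
    [zb,      xb - yb, zbp,       xbp - ybp],
    [xb + yb, zb,      xbp + ybp, zbp      ],
    [0,       xb + yb, 0,         xbp + ybp],
])
alpha = xb*ybp - xbp*yb
beta  = ybp*zb - yb*zbp
gamma = xbp*zb - xb*zbp
print(sp.expand(J.det() - (4*alpha**2 - beta**2 + gamma**2)))  # 0
```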

Instead of parametrizing the scalar products with \((\lambda ,1-\lambda )\) we can use other rescaled parameters (M, N) such that \(\frac{M^2}{6}+\frac{N^2}{48}=1\). Fix \((\lambda ,1-\lambda )\) and (M, N) parametrizing the same distribution: there exists \(\mu :[0,1]\rightarrow {(0,\infty )}\) such that \(\mu (\lambda ) P_{\lambda }(x) = \check{P}_{M,N}(x)\), as explained in Sect. 2.1, where

$$\begin{aligned} \check{P}_{M,N} = M \left( \sum _{j\in J_3}\xi _{3, j}\cdot H_{3, j}(x)\right) +N \left( \sum _{j\in J_1}\xi _{1, j}\cdot \Vert x\Vert ^2H_{1, j}(x)\right) \end{aligned}$$

and \(\mu (\lambda ) = \frac{4\sqrt{3}}{\sqrt{(1-\lambda )^2 + 8\lambda ^2}}\). Hence we can repeat the same reasoning with the (M, N) parameters and compute for this polynomial the function

$$\begin{aligned} \check{E}_{M,N} = {\mathbb {E}}\{|\det (J(W_0))|\}\cdot \mathrm {vol}(\text {Gr}(2,4))\cdot p(0,W_0). \end{aligned}$$

To write the expectation as a function of the parameter \(\lambda \) we only need to remember that \(E_{\lambda } = \check{E}_{\lambda \,\mu (\lambda ), (1-\lambda )\mu (\lambda )}\), since the zero set does not change when the polynomial is multiplied by a constant.

With these new parameters the determinant becomes much simpler:

$$\begin{aligned} \det (J(W_0)) = M^2\left( \alpha ^2 - \frac{M^2}{4}\beta ^2 + \gamma ^2\right) . \end{aligned}$$

To compute the expectation of \(|\det (J(W_0))|\) we need the joint density \(\rho (\alpha ,\beta ,\gamma )\). Surprisingly, it can be recovered by the method of characteristic functions, using Theorem 2.1 of [16] as explained in [4]; denoting by \(| \cdot |\) the Euclidean norm, it is:

$$\begin{aligned} \rho (\alpha ,\beta ,\gamma ) = \frac{1}{4\pi } \frac{e^{-|(\alpha ,\beta ,\gamma )|}}{|(\alpha ,\beta ,\gamma )|}. \end{aligned}$$

Therefore we can compute the expectation of \(|\det (J(W_0))|\) as:

$$\begin{aligned} {\mathbb {E}}\{|\det (J(W_0))|\}&= \frac{M^2}{4\pi } \int _{{\mathbb {R}}^3} \left| \alpha ^2 - \frac{M^2}{4}\beta ^2 + \gamma ^2 \right| \frac{e^{-\sqrt{\alpha ^2 + \beta ^2 + \gamma ^2}}}{\sqrt{\alpha ^2 + \beta ^2 + \gamma ^2}} {\mathrm{d}}\alpha {\mathrm{d}}\beta {\mathrm{d}}\gamma \\&= \frac{M^2}{4\pi } \int _{{\mathbb {R}}}\int _0^{2\pi } \int _0^{\infty } \rho \left| \rho ^2 - \frac{M^2}{4}\beta ^2 \right| \frac{e^{-\sqrt{\rho ^2 + \beta ^2}}}{\sqrt{\rho ^2 + \beta ^2}} {\mathrm{d}}\rho {\mathrm{d}}\phi {\mathrm{d}}\beta \\&= \frac{M^2}{2} \int _{{\mathbb {R}}} \int _0^{\infty } \rho \left| \rho ^2 - \frac{M^2}{4}\beta ^2 \right| \frac{e^{-\sqrt{\rho ^2 + \beta ^2}}}{\sqrt{\rho ^2 + \beta ^2}} {\mathrm{d}}\rho {\mathrm{d}}\beta \\&= \frac{M^2}{2} \int _{-\frac{\pi }{2}}^{\frac{\pi }{2}} \int _0^{\infty } r^3 \cos {\theta } \left| \cos ^2{\theta } - \frac{M^2}{4}\sin ^2{\theta } \right| e^{-r} {\mathrm{d}}r {\mathrm{d}}\theta \\&= 3M^2 \int _{-\frac{\pi }{2}}^{\frac{\pi }{2}} \cos {\theta } \left| \cos ^2{\theta } - \frac{M^2}{4}\sin ^2{\theta } \right| {\mathrm{d}}\theta \\&= 6M^2 \left( \frac{M^2}{12} - \frac{2}{3} + \frac{4}{3}\sqrt{\frac{4}{4+M^2}} \right) \end{aligned}$$

where we used two changes of variables:

$$\begin{aligned} {\left\{ \begin{array}{ll} \alpha = \rho \cos \phi \\ \beta = \beta \\ \gamma = \rho \sin \phi \end{array}\right. } \qquad {\left\{ \begin{array}{ll} \rho = r \cos \theta \\ \beta = r \sin \theta \end{array}\right. } \end{aligned}$$

and then solved the integral in the \(\theta \) variable finding explicitly the intervals of positivity and negativity of the function in the absolute value.
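The final identity can be cross-checked numerically (our sketch):

```python
import numpy as np
from scipy.integrate import quad

def lhs(M):
    # 3 M^2 * Integral_{-pi/2}^{pi/2} cos(t) |cos(t)^2 - (M^2/4) sin(t)^2| dt
    f = lambda t: np.cos(t) * abs(np.cos(t)**2 - (M**2 / 4) * np.sin(t)**2)
    val, _ = quad(f, -np.pi/2, np.pi/2)
    return 3 * M**2 * val

def rhs(M):
    return 6 * M**2 * (M**2/12 - 2/3 + (4/3) * np.sqrt(4 / (4 + M**2)))

for M in (0.5, 1.0, np.sqrt(6)):
    print(lhs(M), rhs(M))   # the two columns agree to quadrature accuracy
```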

We now have to compute the density at 0 of \(\tilde{\sigma }_{\check{P}_{M,N}}(W_0)\). It is a gaussian random vector with zero mean and covariance matrix

$$\begin{aligned} \Sigma = \begin{bmatrix} \frac{M^2}{12}+\frac{N^2}{48} &{} 0 &{} -\frac{M^2}{12}+\frac{N^2}{48} &{} 0 \\ 0 &{} \frac{5M^2}{12}+\frac{N^2}{48} &{} 0 &{} -\frac{M^2}{12}+\frac{N^2}{48}\\ -\frac{M^2}{12}+\frac{N^2}{48} &{} 0 &{} \frac{5M^2}{12}+\frac{N^2}{48} &{} 0 \\ 0 &{} -\frac{M^2}{12}+\frac{N^2}{48} &{} 0 &{} \frac{M^2}{12}+\frac{N^2}{48} \end{bmatrix} \end{aligned}$$

which can be computed by looking at the coefficients of \(\check{P}_{M,N}\) in the monomials \(x_0^3, x_0^2x_1, x_0x_1^2, x_1^3\). This implies that

$$\begin{aligned} p(0,W_0) = \frac{1}{4\pi ^2 \sqrt{\det \Sigma }} = \frac{3}{2\pi ^2 \left( 4M^2 - \frac{M^4}{2}\right) } \end{aligned}$$

where we simplified the expression of the determinant using the relation \(\frac{M^2}{6}+\frac{N^2}{48}=1\). Finally the volume of the Grassmannian [4, Remark 2] is \(\mathrm {vol}(\text {Gr}(2,4))=2\pi ^2\), therefore we have that

$$\begin{aligned} \check{E}_{M,N} = \frac{12}{8-M^2}\left( \frac{M^2}{4} - 2 + 4\sqrt{\frac{4}{4+M^2}} \right) . \end{aligned}$$
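The simplification of \(\det \Sigma \) and the assembly of \(\check{E}_{M,N}\) can be double-checked with a computer algebra system; a sketch (ours):

```python
import sympy as sp

M = sp.symbols('M', positive=True)
N2 = 48 * (1 - M**2 / 6)               # from the constraint M^2/6 + N^2/48 = 1
a = M**2/12 + N2/48
b = 5*M**2/12 + N2/48
c = -M**2/12 + N2/48
Sigma = sp.Matrix([[a, 0, c, 0], [0, b, 0, c], [c, 0, b, 0], [0, c, 0, a]])
print(sp.factor(Sigma.det()))          # M**4*(M**2 - 8)**2/144

# check E|det J| * vol(Gr(2,4)) * p(0,W0) against the closed form at a few M
for Mv in (sp.Rational(1, 2), sp.Integer(1), sp.Integer(2)):
    EdetJ = 6*Mv**2 * (Mv**2/12 - sp.Rational(2, 3)
                       + sp.Rational(4, 3)*sp.sqrt(4 / (4 + Mv**2)))
    p0 = 3 / (2*sp.pi**2 * (4*Mv**2 - Mv**4/2))
    closed = 12/(8 - Mv**2) * (Mv**2/4 - 2 + 4*sp.sqrt(4 / (4 + Mv**2)))
    print(sp.simplify(EdetJ * 2*sp.pi**2 * p0 - closed))   # 0
```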

Observe that \(M=\mu (\lambda )\lambda =\frac{4\sqrt{3}}{\sqrt{(1-\lambda )^2 + 8\lambda ^2} }\lambda \), so we can come back to the original parameter \(\lambda \) and obtain that

$$\begin{aligned} E_\lambda =\frac{9(8\lambda ^2+(1-\lambda )^2)}{2\lambda ^2+(1-\lambda )^2}\left( \frac{2\lambda ^2}{8\lambda ^2+(1-\lambda )^2}-\frac{1}{3}+\frac{2}{3}\sqrt{\frac{8\lambda ^2+(1-\lambda )^2}{20\lambda ^2+(1-\lambda )^2}}\right) . \end{aligned}$$

2.3 Properties of the Function \(E_{\lambda }\)

Proposition 1

The function \(E_{\lambda }\) is monotone increasing.

Fig. 1 A plot of the function \(E_{\lambda }\)

Proof

To simplify the computation, and because M is an increasing function of \(\lambda \), we will prove the monotonicity of the expectation as a function of M instead of \(\lambda \). Recall that we can rewrite

$$\begin{aligned} \check{E}_{M,N}=-3+\frac{96}{(8-M^2)(\sqrt{4+M^2})}. \end{aligned}$$

Then it is enough to prove that the denominator \(g(M)=(8-M^2)\sqrt{4+M^2}\) is decreasing. In fact,

$$\begin{aligned} g'(M)&=-2M(\sqrt{4+M^2})+\frac{(8-M^2)M}{\sqrt{4+M^2}}\\&=\frac{-2M(4+M^2)+M(8-M^2)}{\sqrt{4+M^2}}\\&=\frac{-3M^3}{\sqrt{4+M^2}}.\\ \end{aligned}$$

So for positive values of M, which are the ones we are interested in, \(g'(M)\le 0\); hence g is decreasing and therefore \(E_{\lambda }\) is increasing.

The plot of the function \(E_{\lambda }\) is shown in Fig. 1. Its minimum is \(E_{0}=3\), whereas the maximum is attained in the other limit case \(\lambda =1\) and equals \(E_{1}=24\sqrt{\frac{2}{5}}-3\simeq 12.179\), as stated in Corollary 1. This value of \(\lambda \) corresponds to purely harmonic polynomials of degree 3.
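Fig. 1 can be reproduced with a few lines (a sketch with our naming, reusing formula (1.4)):

```python
import numpy as np
import matplotlib.pyplot as plt

lam = np.linspace(0, 1, 400)
A = 8*lam**2 + (1 - lam)**2
B = 2*lam**2 + (1 - lam)**2
C = 20*lam**2 + (1 - lam)**2
E = 9*A/B * (2*lam**2/A - 1/3 + (2/3)*np.sqrt(A/C))

plt.plot(lam, E)
plt.xlabel(r'$\lambda$')
plt.ylabel(r'$E_\lambda$')
plt.show()
```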

Let us now focus on the minimum. The fact that it equals 3 can also be proved with another approach, which gives some information about the deterministic situation. We now define different discriminants and explain the relations between them.

Table 2 We present here in a schematic way the connected components of \({\mathbb {R}}{\mathrm {P}}^{19}{\setminus }\Delta ^{{\mathbb {C}}}\) with the topology of the zero sets of the cubics in them and the number of lines on those cubics

We call complex discriminant the subset of those cubics in \({\mathbb {R}}{\mathrm {P}}^{19}\) which have a complex singularity (i.e., their partial derivatives have a common complex zero) and we denote it by \(\Delta ^{{\mathbb {C}}}\). It is a known fact (see [3]) that \({\mathbb {R}}{\mathrm {P}}^{19}{\setminus }\Delta ^{{\mathbb {C}}}\) has five connected components. If we fix one of those five connected components, the zero sets of all cubics in it contain the same number of lines and are all homotopy equivalent (see Table 2). We call real discriminant the subset of cubics in \(\Delta ^{{\mathbb {C}}}\) such that at least one singularity is real, and we denote it by \(\Delta ^{{\mathbb {R}}}\). Notice that every smooth cubic contains exactly 27 complex lines, and therefore the zero set of every element in \({\mathbb {R}}{\mathrm {P}}^{19}{\setminus }\Delta ^{{\mathbb {C}}}\) contains a finite number of real lines.

In the next proposition we will work in the space \(W_{3,3}\) endowed with the \(L^2\) norm. With some abuse of notation we denote by \(\Delta ^{{\mathbb {C}}} \subset W_{3,3}\) the set of those \(g\in W_{3,3}\) whose projectivization lies in \(\Delta ^{{\mathbb {C}}}\), together with the zero cubic; similarly for \(\Delta ^{{\mathbb {R}}}\).

Proposition 2

There exists \(\epsilon >0\) such that for all

$$\begin{aligned} h_1 (x) = \Vert x \Vert ^2 ( a_0 x_0 + a_1 x_1 + a_2 x_2 + a_3 x_3 ) \in \Vert x\Vert ^2{\mathcal {H}}_1^3 \end{aligned}$$

with \(\left\Vert h_1\right\Vert ^2_{L^2} = 48 (a_0^2 + a_1^2 + a_2^2 + a_3^2) = 1\) and for all \(g\in W_{3,3} {\setminus } \Delta ^{{\mathbb {C}}}\) such that \(\Vert g- h_1 \Vert _{L^2} \le \epsilon \) the zero set of g contains exactly 3 lines.

Proof

First of all, notice that if we find the \(\epsilon \) of the claim for one fixed \(\bar{h}_1\), then the same \(\epsilon \) works for any other polynomial \(h_1\) as in the statement. In fact there exists \(R\in O(4)\) such that \(h_1 (x) = \bar{h}_1 (Rx)\), and the \(L^2\) norm of \(h_1\) is again 1. Moreover, due to the O(4)-invariance of the \(L^2\)-norm, the rotation R takes the ball of radius \(\epsilon \) around \(\bar{h}_1\) into the ball of radius \(\epsilon \) around \(h_1\), without changing the geometry of the zero sets of the cubics in it. So we are left to prove the claim for one fixed \(h_1\). The cubic \(h_1\) belongs to \(\Delta ^{{\mathbb {C}}}{\setminus }\Delta ^{{\mathbb {R}}}\), and its zero set is topologically \({\mathbb {R}}{\mathrm {P}}^{2}\). Thanks to Thom's isotopy lemma [37] there exists \(\epsilon >0\) such that for every \(g\in W_{3,3}\) with \(\left\Vert g-h_1\right\Vert _{L^2}\le \epsilon \) the zero sets Z(g) and \(Z(h_1)\) are ambient isotopic, and hence homeomorphic. This means that if g is such a cubic and \(g\not \in \Delta ^{{\mathbb {C}}}\), then the projectivization of g belongs to the connected component of \({\mathbb {R}}{\mathrm {P}}^{19}{\setminus }\Delta ^{{\mathbb {C}}}\) whose cubics have zero set homeomorphic to \({\mathbb {R}}{\mathrm {P}}^2\) and contain exactly 3 lines.

Remark 9

If \(\Vert h_1\Vert _{L^2}\ne 1\), the claim above remains true but in a slightly different neighborhood: for all \(g\in W_{3,3} {\setminus } \Delta ^{{\mathbb {C}}}\) such that \(\Vert g- h_1 \Vert _{L^2} \le \epsilon \Vert h_1\Vert _{L^2}\), the zero set of g contains exactly 3 lines.

In view of Proposition 2, we can deduce that

$$\begin{aligned} \lim _{\lambda \rightarrow 0} E_{\lambda } = 3 \end{aligned}$$

without knowing the explicit formula for \(E_{\lambda }\). In fact given any polynomial \(f\in W_{3,3}\), thanks to the harmonic decomposition, we can always write \(f=h_3+h_1\) where \(h_3\in {\mathcal {H}}_3^3\) and \(h_1\in \left\Vert x\right\Vert ^2{\mathcal {H}}_1^3\). Taking the \(\epsilon \) of the proposition above, we have

$$\begin{aligned} {\begin{matrix} E_{\lambda } &{}= {\mathbb {E}}\#\lbrace \text {lines on } Z(f) \rbrace \\ &{}= \frac{1}{K} \int _{\Vert x\Vert ^2{\mathcal {H}}_1^3} \int _{{\mathcal {H}}_3^3} \#\lbrace \text {lines on } Z(f) \rbrace \, e^{-\left( \frac{\Vert h_3\Vert _{L^2}^2}{2\lambda ^2} + \frac{\Vert h_1\Vert _{L^2}^2}{2(1-\lambda )^2} \right) } \frac{1}{\lambda ^{16}(1-\lambda )^{4}} {\mathrm{d}}h_3 \, {\mathrm{d}}h_1 \\ &{}= \frac{1}{K} \int _{S^3}\Theta (\theta )\,\int _0^{\infty } \left[ \int _{ \left\{ \Vert h_3\Vert _{L^2} \le \rho \, \epsilon \right\} } \cdots \; \, {\mathrm{d}}h_3 + \int _{ \left\{ \Vert h_3\Vert _{L^2} > \rho \, \epsilon \right\} } \cdots \; \, dh_3 \right] \cdot \rho ^3 \, {\mathrm{d}}\rho \, {\mathrm{d}}\theta \end{matrix}} \end{aligned}$$

where K is a normalization constant, \(\Theta (\theta )\) is a function arising from the spherical change of coordinates, \(S^3 = \{ h_1 \in \Vert x\Vert ^2{\mathcal {H}}_1^3 : \left\Vert h_1\right\Vert _{L^2}=1 \}\), and \(\rho =\left\Vert h_1\right\Vert _{L^2}\). The number of lines on Z(f) in the first summand is exactly 3 for generic f, because there we are in the nice neighborhood of Proposition 2 (in the form of Remark 9). Therefore \(E_{\lambda } = 3\cdot {\mathbb {P}}(h_3+h_1\in B(h_1,\epsilon \left\Vert h_1\right\Vert _{L^2}))+ I(\lambda )\), where \(I(\lambda )\) is the following non-negative integral

$$\begin{aligned}&\frac{1}{K} \int _{S^3}\Theta (\theta )\int _0^{\infty } \int _{ \left\{ \Vert h_3\Vert _{L^2}> \rho \, \epsilon \right\} } \#\lbrace \text {lines on } Z(f) \rbrace \, e^{-\left( \frac{\Vert h_3\Vert _{L^2}^2}{2\lambda ^2} + \frac{\rho ^2}{2(1-\lambda )^2} \right) }\\&\qquad \frac{ \rho ^3}{\lambda ^{16}(1-\lambda )^{4}} {\mathrm{d}}h_3\,{\mathrm{d}}\rho \, {\mathrm{d}}\theta \\&\quad \le \frac{27}{K} \int _{S^3} \Theta (\theta ) \, {\mathrm{d}}\theta \int _0^{\infty } \int _{ \left\{ \Vert h_3\Vert _{L^2}> \rho \, \epsilon \right\} } e^{-\left( \frac{\Vert h_3\Vert _{L^2}^2}{2\lambda ^2} + \frac{\rho ^2}{2(1-\lambda )^2} \right) }\frac{ \rho ^3}{\lambda ^{16}(1-\lambda )^{4}} {\mathrm{d}}h_3\,{\mathrm{d}}\rho \\&\quad \le K' \int _0^{\infty } \int _{ \left\{ \lambda \Vert \hat{h}_3\Vert _{L^2} > \rho \, \epsilon \right\} } e^{-\left( \frac{\Vert \hat{h}_3\Vert _{L^2}^2}{2} + \frac{\rho ^2}{2(1-\lambda )^2} \right) }\frac{\rho ^3}{(1-\lambda )^{4}} {\mathrm{d}}\hat{h}_3\,{\mathrm{d}}\rho \end{aligned}$$

for \(K'\) a new constant that absorbs all the others, and \(\hat{h}_3=\frac{h_3}{\lambda }\). Using dominated convergence it is now easy to see that \(\lim _{\lambda \rightarrow 0} I(\lambda ) =0\). Indeed, the last inequality of the explicit computation also proves that \({\mathbb {P}}(h_3+h_1\not \in B(h_1,\epsilon \left\Vert h_1\right\Vert ))\rightarrow 0\), which implies \({\mathbb {P}}(h_3+h_1\in B(h_1,\epsilon \left\Vert h_1\right\Vert ))\rightarrow 1\) and therefore

$$\begin{aligned} \lim _{\lambda \rightarrow 0} E_{\lambda } = 3. \end{aligned}$$

2.4 Generalization

More generally, let us consider \(f\in {\mathbb {R}}[x_0, \dots , x_n]_{(d)}\) and the associated zero locus \(Z(f)\subset {\mathbb {R}}{\mathrm {P}}^n\). The same polynomial f defines a section \(\sigma _f\) of the vector bundle

$$\begin{aligned} \hbox {Sym}^{d}(\tau ^*_{2,n+1})\longrightarrow \text {Gr}(2,n+1) \end{aligned}$$

such that \(\sigma _f(W)=f\vert _W\). The set \(\{\sigma _f =0\}\) corresponds to the lines contained in Z(f), and it is generically 0-dimensional if and only if \(d=2n-3\): indeed the dimension \(2(n-1)\) of the Grassmannian equals the rank \(d+1\) of the bundle exactly when \(d=2n-3\). So it makes sense to ask for the number of lines inside a hypersurface of degree \(2n-3\) in \({\mathbb {R}}{\mathrm {P}}^n\). While in the case of cubics in \({\mathbb {R}}{\mathrm {P}}^3\) it was at least known that the maximum number of complex lines, 27, can also be attained by real lines, in this more general setting it is not clear whether the generic number of complex lines can be realized by real lines. Following the same procedure explained in Sect. 2.1 we may wonder:

What is the expected number of real lines inside a random invariant hypersurface of degree \(2n-3\) in \({\mathbb {R}}{\mathrm {P}}^n\)?

The idea is that the expectation might again be maximized by purely harmonic polynomials of top degree, and so a possible way of constructing hypersurfaces with many lines could be to sample random pure harmonics of degree \(2n-3\).

In [4] the authors have proved that

$$\begin{aligned} \lim _{n\rightarrow \infty } \frac{ \log E_n^{\text {Kostlan}} }{\log C_n}=\frac{1}{2} \end{aligned}$$

where \(E_n^{\text {Kostlan}}\) is the expected number of real lines inside a random hypersurface \(Z(f)\subset {\mathbb {R}}{\mathrm {P}}^n\) of degree \(2n-3\) sampled from the Kostlan distribution, and \(C_n\) is the number of complex lines on a generic hypersurface of degree \(2n-3\) in \({\mathbb {C}}{\mathrm {P}}^n\). This led A. Lerario to a conjecture: sampling random pure harmonics of degree \(2n-3\) instead of Kostlan polynomials, the intuition is that

$$\begin{aligned} \lim _{n\rightarrow \infty } \frac{ \log E_n^{\text {Harmonic}} }{\log C_n}>\frac{1}{2} \end{aligned}$$

(or maybe in a wonderful universe the limit could also be equal to 1).

Remark 10

Such results would also be relevant because they may give some information about the deterministic case. In dimension \(n>3\), it is not even clear whether there exist real hypersurfaces containing \(C_n\) real lines, but there must certainly exist hypersurfaces with at least \(\lceil E_n \rceil \) real lines. Probabilistic results thus give lower bounds that may not be known otherwise.