1 Introduction

It has been known since the work of Blaschke (1923, p. 18) that conics are the only planar curves with constant equiaffine curvature. In the special case of a graph of a function of class \(C^5\), this condition is equivalent to a certain \(5{\hbox {th}}\) order ordinary differential equation, which reads

$$\begin{aligned} 9f''(x)^2f^{(5)}(x)-45f''(x)f^{(3)}(x)f^{(4)}(x)+40f^{(3)}(x)^3=0. \end{aligned}$$
(1)
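As a quick sanity check, equation (1) can be verified symbolically for a concrete conic; a short sketch in SymPy (used here in place of Mathematica), with the upper unit semicircle as our assumed example:

```python
import sympy as sp

x = sp.symbols('x')
# Upper unit semicircle y = sqrt(1 - x^2), an arc of a conic (our choice).
f = sp.sqrt(1 - x**2)

# d[k] holds the k-th derivative of f.
d = [sp.diff(f, x, k) for k in range(6)]

# Left-hand side of the 5th-order equation characterizing conics.
lhs = 9*d[2]**2*d[5] - 45*d[2]*d[3]*d[4] + 40*d[3]**3

assert sp.simplify(lhs) == 0
```

Any other conic arc written as a graph of a \(C^5\) function would serve equally well.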

In higher dimensions, hyperquadrics are characterized by the Maschke–Pick–Berwald theorem (Nomizu et al. 1994, Theorem 4.5) as the only hypersurfaces with vanishing cubic form C, defined in Nomizu et al. (1994, Chpt. II, Sect. 4). However, the definition implicitly uses the intrinsic Blaschke structure, and thus the cubic form C can hardly be expressed in an extrinsic coordinate system. It is also unclear what minimal smoothness needs to be assumed. Nevertheless, such a result for 2-dimensional surfaces turns out to be a consequence of two relatively simple partial differential equations. The aim of this paper is to prove the following main theorem:

Theorem 1.1

Let \(f\in W^{3,1}_{\textrm{loc}}(\Omega )\) be a function from the local Sobolev space,Footnote 1 defined on a connected open subset \(\Omega \subseteq {\mathbb {R}}^2\). Suppose that the Hessian determinant of f is somewhere positive. Then f is a weak solution to the system of partial differential equations

$$\begin{aligned} \begin{aligned} f^{(3,0)} {f^{(0,2)}}^2-3 f^{(1,2)} f^{(2,0)} f^{(0,2)}+2 f^{(0,3)} f^{(1,1)} f^{(2,0)}&= 0, \\ f^{(0,3)} {f^{(2,0)}}^2-3 f^{(2,1)} f^{(0,2)} f^{(2,0)}+2 f^{(3,0)} f^{(1,1)} f^{(0,2)}&= 0 \end{aligned} \end{aligned}$$
(2)

if and only if its graph is contained in a quadratic surface.
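The system (2) can be checked symbolically on a concrete quadric; a SymPy sketch, with the hyperboloid parametrization \(f=\sqrt{1+x^2+y^2}\) as our assumed example:

```python
import sympy as sp

x, y = sp.symbols('x y')
# Upper sheet of the hyperboloid z^2 - x^2 - y^2 = 1 (our choice of quadric).
f = sp.sqrt(1 + x**2 + y**2)

def d(i, j):
    # f^(i,j): differentiate i times in x and j times in y.
    return sp.diff(f, *([x]*i + [y]*j))

# f has positive Hessian determinant, equal to 1/f^4.
hess = d(2, 0)*d(0, 2) - d(1, 1)**2
assert sp.simplify(hess - 1/f**4) == 0

# Left-hand sides of the system (2).
p1 = d(3,0)*d(0,2)**2 - 3*d(1,2)*d(2,0)*d(0,2) + 2*d(0,3)*d(1,1)*d(2,0)
p2 = d(0,3)*d(2,0)**2 - 3*d(2,1)*d(0,2)*d(2,0) + 2*d(3,0)*d(1,1)*d(0,2)

assert sp.simplify(p1) == 0 and sp.simplify(p2) == 0
```

Both left-hand sides vanish identically, in accordance with the theorem.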

Therefore Theorem 1.1 can be considered a 2-dimensional analog of the aforementioned result of Blaschke. In contrast to the Maschke–Pick–Berwald theorem, it is formulated in terms of simple, explicit partial differential equations, under a weaker smoothness assumption. Moreover, we will show that the system (2) is minimal in the sense that the left-hand sides form a minimal generating set (viz. a reduced Gröbner basis) of a certain differential ideal.

Such a characterization of quadratic surfaces of positive Gaussian curvature as the only solutions to some partial differential equations without boundary condition may be useful when one wants to prove that some specific convex body is an ellipsoid using e.g. the tools of differential geometry. Such problems arise naturally in convex geometry, especially in various characterizations of Hilbert spaces among all finite-dimensional Banach spaces.

As superfluous as it may seem, the assumption on the Hessian determinant is not purely technical, as the following holds:

Theorem 1.2

Let \(f\in W^{3,1}_{\textrm{loc}}(\Omega )\) be a function from the local Sobolev space, defined on a connected open subset \(\Omega \subseteq {\mathbb {R}}^2\). Suppose that the Hessian determinant of f is non-positive. Then f is a weak solution to the system of partial differential equations (2) if and only if \(\Omega \) contains a countable union of disjoint open connected subsets \(\Omega _i\) such that:

(1) on each \(\Omega _i\) the graph of f is contained in either:

    (a) a doubly-ruled surface,Footnote 2 or

    (b) a developable surface,Footnote 3 or

    (c) a Catalan surfaceFootnote 4 with directrix plane XZ, or

    (d) a Catalan surface with directrix plane YZ,

(2) the union \(\bigcup \Omega _i\) is dense in \(\Omega \).

Note that all of the above are particular examples of ruled surfaces.Footnote 5 Regrettably, the exact classification of solutions to (2) seems to be a tedious, technical task and therefore will not be given here, so as not to overshadow the main idea.

To perform lengthy computations, we will employ the widely used technical computing system Wolfram Mathematica (Wolfram Research 2016). Nevertheless, they could still have been done with pen and paper (albeit with some difficulty) and hence the proof remains human-surveyable. A thorough discussion of this aspect can be found in Appendix A. For transparency, all the results obtained with the help of a computer are tagged with “Spikey” (), followed by a reference to the relevant section of Appendix A.

2 Notation and basic concepts

To prove Theorem 1.1 we will need some very general facts concerning quadratic surfaces that are quite interesting in themselves. We begin by rephrasing the problem in the language of commutative algebra.

Definition 1

Let

$$\begin{aligned} R\, {:=}\, {\mathbb {R}}\left[ x,y,\partial ^{(0,0)},\partial ^{(0,1)},\partial ^{(1,0)},\ldots \right] \end{aligned}$$

be a ring of polynomials in variables x, y and formal partial derivatives \(\partial ^{(i,j)}\) and let

$$\begin{aligned} S\, {:=}\, \left\langle \partial ^{(0,2)}, \partial ^{(2,0)}, \partial ^{(0,2)} \partial ^{(2,0)}-{\partial ^{(1,1)}}^2\right\rangle \end{aligned}$$

be a submonoid of the multiplicative monoid of R, with the listed generators. By \(S^{-1}R\) we denote a localisation of R at S (Eisenbud 1995, Sect. 2.1).

The ring \(S^{-1}R\) can be viewed as an algebra of a certain type of differential operators T defined for those smooth functions \(f:{\mathbb {R}}^2\supseteq \Omega \rightarrow {\mathbb {R}}\) for which all the expressions

$$\begin{aligned} f^{(0,2)}(x,y),\quad f^{(2,0)}(x,y),\quad f^{(0,2)}(x,y) f^{(2,0)}(x,y)- f^{(1,1)}(x,y)^2\end{aligned}$$
(3)

do not vanish anywhere on \(\Omega \) and thus have reciprocals. We will call such functions generic. Examples include, but are not limited to, functions with positive Hessian determinant, i.e. those whose graphs have positive Gaussian curvature.

Notation

Let \(\Omega \subseteq {\mathbb {R}}^2\) be a connected open subset of \({\mathbb {R}}^2\). Denote by \({\mathcal {G}}(\Omega )\) the set of generic functions \(f:\Omega \rightarrow {\mathbb {R}}\) and by \({\mathcal {Q}}(\Omega )\) its subset consisting of parametrizations of quadratic surfaces.

Definition 2

Let \(D_x,D_y:S^{-1}R\rightarrow S^{-1}R\) be derivations (Eisenbud 1995, Chpt. 16), i.e. \({\mathbb {R}}\)-linear endomorphisms of additive group of \(S^{-1}R\) satisfying the Leibniz product rule

$$\begin{aligned} D(r_1r_2)=D(r_1)r_2+r_1D(r_2),\quad r_1,r_2\in S^{-1}R, \end{aligned}$$

and thus uniquely determined by their values on indeterminates:

$$\begin{aligned} D_x(x)\, {:=}\, 1,\quad D_x(y)\, {:=}\, 0,\quad D_x\left( \partial ^{(i,j)}\right) \, {:=}\, \partial ^{(i+1,j)},\\ D_y(x)\, {:=}\, 0,\quad D_y(y)\, {:=}\, 1,\quad D_y\left( \partial ^{(i,j)}\right) \, {:=}\, \partial ^{(i,j+1)}. \end{aligned}$$

In particular, the well-known formula for differentiating fractions

$$\begin{aligned} D\left( \frac{r}{s}\right) =\frac{D(r)s-rD(s)}{s^2} \end{aligned}$$

follows from the Leibniz product rule. A ring \(S^{-1}R\) equipped with derivations \(D_x,D_y\) forms a differential ring.

Definition 3

A differential ideal \({\mathfrak {a}}\) in a differential ring R is an ideal that is mapped to itself by each derivation.

Definition 4

Let X be a subset of \({\mathcal {G}}(\Omega )\). The annihilator of X in \(S^{-1}R\), denoted by \(X^\dagger \), is the collection of differential operators \(T\in S^{-1}R\) such that \(Tf=0\) for all \(f\in X\). The annihilator of any subset is clearly a differential ideal. The annihilator of the empty set is the whole \(S^{-1}R\), while the annihilator of the whole \({\mathcal {G}}(\Omega )\) consists of the zero operator alone.

3 Polynomial PDEs satisfied by generic quadratic surfaces

Observe that a graph of a function f is contained in a quadratic surface if and only if each of its points satisfies a quadratic equation

$$\begin{aligned} a_{11} x^2+a_{12} x y+a_{13} x f+a_{22} y^2+a_{23} y f+a_{33} f^2+b_1 x+b_2 y+b_3 f+c = 0\nonumber \\\end{aligned}$$
(4)

with constant coefficients \(a_{ij},b_k,c\). This is equivalent to the fact that the set of functions

$$\begin{aligned} \begin{Bmatrix} x^2,&x y,&x f(x,y),&y^2,&y f(x,y),&f(x,y)^2,&x,&y,&f(x,y),&1 \end{Bmatrix}\end{aligned}$$
(5)

is linearly dependent. That is how the concept of the generalized Wronskian for functions of several variables comes into play. For clarity, we adopt the notation from Wolsson (1989).

Definition 5

(Wolsson 1989, Definition 1) A generalised Wronskian of \(\varvec{\phi }=(\phi _1({\varvec{t}}),\ldots ,\phi _n({\varvec{t}}))\), where \({\varvec{t}}=(t_1,\ldots ,t_m)\), is any determinant of the type

$$\begin{aligned} \begin{vmatrix} \varvec{\phi }\\ \partial ^1\varvec{\phi }\\ \vdots \\ \partial ^{n-1}\varvec{\phi }\end{vmatrix}, \end{aligned}$$

where \(\varvec{\phi }\), \(\partial ^i\varvec{\phi }\) are row vectors, \(\partial ^i\) is any partial derivative of order not greater than i and all \(\partial ^i\) are distinct.

Remark

Note that in the realm of functions in \(m\ge 2\) variables a generalized Wronskian of \(\varvec{\phi }\) is no longer unique, since there are many possible ways of choosing the row vectors \(\partial ^i\varvec{\phi }\) satisfying all the imposed conditions. More precisely, there are

$$\begin{aligned} \left( {\begin{array}{c}m+i\\ m\end{array}}\right) \end{aligned}$$

partial derivatives of order not greater than i and hence there are exactly

$$\begin{aligned} \prod _{i=0}^{n-1}\left( \left( {\begin{array}{c}m+i\\ m\end{array}}\right) -i\right) \end{aligned}$$

generalised Wronskians of n functions in m variables. From now on, however, we will identify all generalized Wronskians that differ only in the order of rows, as this does not affect the rank of the matrix.
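The count above is elementary to evaluate; a minimal Python sketch (the function name is ours):

```python
from math import comb, prod

def n_generalized_wronskians(n, m):
    # Product over i = 0,...,n-1 of (C(m+i, m) - i), as in the formula above.
    return prod(comb(m + i, m) - i for i in range(n))

# In one variable every factor equals 1: the classical Wronskian is unique.
assert n_generalized_wronskians(10, 1) == 1

# For n = 10 functions in m = 2 variables (the case studied below):
assert n_generalized_wronskians(10, 2) == 10702393856
```

So already for the ten functions (5) there are over \(10^{10}\) generalized Wronskians, which explains why identifying the few non-trivial ones is essential.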

Notation

Denote by \(\varvec{\phi }\) the tuple of functions (5).

Assertion 3.1

Each generalized Wronskian of \(\varvec{\phi }\) can be viewed as an element of \(S^{-1}R\). Moreover, by the very definition, it belongs to \({\mathcal {Q}}(\Omega )^\dagger \). Indeed, if the set of functions (5) is linearly dependent, then all its generalized Wronskians vanish identically since their columns are themselves linearly dependent.

The following key proposition characterizes the set of polynomial differential equations satisfied by the parametrization of any generic quadratic surface.

Proposition 3.2

Let \(\Omega \subseteq {\mathbb {R}}^2\) be a connected open subset of \({\mathbb {R}}^2\). Then the annihilator \({\mathcal {Q}}(\Omega )^\dagger \subseteq S^{-1}R\) is a differential ideal generated by

$$\begin{aligned} \begin{aligned}&\partial ^{(3,0)}{\partial ^{(0,2)}}^2-3 \partial ^{(1,2)} \partial ^{(2,0)} \partial ^{(0,2)}+2 \partial ^{(0,3)} \partial ^{(1,1)} \partial ^{(2,0)}, \\&\partial ^{(0,3)}{\partial ^{(2,0)}}^2-3 \partial ^{(2,1)} \partial ^{(0,2)} \partial ^{(2,0)}+2 \partial ^{(3,0)} \partial ^{(1,1)} \partial ^{(0,2)}. \end{aligned} \end{aligned}$$
(6)

Proof

Clearly \({\mathcal {Q}}(\Omega )^\dagger \) is a differential ideal in \(S^{-1}R\). Denote by \({\mathfrak {a}}\) the differential ideal generated by (6). We have to show that \({\mathcal {Q}}(\Omega )^\dagger ={\mathfrak {a}}\). We will do this by proving both inclusions.

First, we will show a simpler inclusion \({\mathcal {Q}}(\Omega )^\dagger \supseteq {\mathfrak {a}}\). Since both \({\mathcal {Q}}(\Omega )^\dagger \) and \({\mathfrak {a}}\) are differential ideals, it is enough to prove that the generators of \({\mathfrak {a}}\) are contained in \({\mathcal {Q}}(\Omega )^\dagger \). Let \(f\in {\mathcal {Q}}(\Omega )\) be a parametrization of some generic quadratic surface. By Assertion 3.1, all the generalized Wronskians of \(\varvec{\phi }\) vanish identically on \(\Omega \). Denote by \(W_{i,j}\) the generalised Wronskian of \(\varvec{\phi }\) formed by deleting the row \(\varvec{\phi }^{(i,j)}\) from

$$\begin{aligned} \begin{pmatrix} \varvec{\phi }\\ \varvec{\phi }^{(0,1)} \\ \varvec{\phi }^{(1,0)} \\ \varvec{\phi }^{(0,2)} \\ \varvec{\phi }^{(1,1)} \\ \varvec{\phi }^{(2,0)} \\ \varvec{\phi }^{(0,3)} \\ \varvec{\phi }^{(1,2)} \\ \varvec{\phi }^{(2,1)} \\ \varvec{\phi }^{(3,0)} \\ \varvec{\phi }^{(0,4)} \end{pmatrix}. \end{aligned}$$

The only non-trivial (i.e. not vanishing algebraically) ones are the following:

$$\begin{aligned} W_{3,0}&= 24 {f^{(0,2)}}^2 \Big (3 f^{(2,1)} {f^{(0,2)}}^2-6 f^{(1,1)} f^{(1,2)} f^{(0,2)}- f^{(0,3)} f^{(2,0)} f^{(0,2)}+4 f^{(0,3)} {f^{(1,1)}}^2\Big ) \\ W_{2,1}&= 72 {f^{(0,2)}}^2 \Big (f^{(3,0)} {f^{(0,2)}}^2-3 f^{(1,2)} f^{(2,0)} f^{(0,2)}+2 f^{(0,3)} f^{(1,1)} f^{(2,0)}\Big ) \\ W_{1,2}&= 72 {f^{(0,2)}}^2 \Big (f^{(0,3)} {f^{(2,0)}}^2-3 f^{(0,2)} f^{(2,1)} f^{(2,0)}+2 f^{(0,2)} f^{(1,1)} f^{(3,0)}\Big ) \\ W_{0,3}&= 24 {f^{(0,2)}}^2 \Big (4 f^{(3,0)} {f^{(1,1)}}^2-6 f^{(2,0)} f^{(2,1)} f^{(1,1)}+3 f^{(1,2)} {f^{(2,0)}}^2-f^{(0,2)} f^{(2,0)} f^{(3,0)}\Big ) \end{aligned}$$

Note that although the underlying matrices depend on \(4{\hbox {th}}\) order partial derivatives, their determinants do not, which is somewhat intriguing.

Remark

Since \(\varvec{\phi }\) consists of \(n=10\) functions and in \(m=2\) variables there are exactly 10 partial derivatives of at most \(3{\hbox {rd}}\) order, there is a unique generalized Wronskian of \(\varvec{\phi }\) using partial derivatives of at most \(3{\hbox {rd}}\) order, namely \(W_{0,4}\). However, it turns out that \(\varvec{\phi }^{(3,0)},\varvec{\phi }^{(2,1)},\varvec{\phi }^{(1,2)},\varvec{\phi }^{(0,3)}\) are always linearly dependent. Indeed, observe that the \(4\times 10\) matrix

$$\begin{aligned} \begin{pmatrix} \varvec{\phi }^{(3,0)} \\ \varvec{\phi }^{(2,1)} \\ \varvec{\phi }^{(1,2)} \\ \varvec{\phi }^{(0,3)} \end{pmatrix} \end{aligned}$$

has only 4 non-zero columns, corresponding to \(x f(x,y)\), \(y f(x,y)\), \(f(x,y)^2\) and \(f(x,y)\), and a direct computation shows that the determinant of this only non-trivial \(4\times 4\) minor is zero anyway. Thus every generalized Wronskian of \(\varvec{\phi }\) vanishes identically unless it is missing some \(3{\hbox {rd}}\) order partial derivative. In particular, there is no non-trivial generalized Wronskian of \(\varvec{\phi }\) using partial derivatives of order at most 3. Moreover, there are (a priori at most) only 4 non-trivial generalized Wronskians of \(\varvec{\phi }\) using a single partial derivative of order greater than 3, since it must replace one of the 4 partial derivatives of order 3.
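The claimed linear dependence of the four \(3{\hbox {rd}}\) order rows can be confirmed symbolically for a completely generic f; a SymPy sketch:

```python
import sympy as sp

x, y = sp.symbols('x y')
f = sp.Function('f')(x, y)  # a completely generic smooth function

# The tuple phi of the ten functions (5).
phi = [x**2, x*y, x*f, y**2, y*f, f**2, x, y, f, sp.Integer(1)]

def row(i, j):
    # The row vector phi^(i,j).
    return [sp.diff(p, *([x]*i + [y]*j)) for p in phi]

M = sp.Matrix([row(3, 0), row(2, 1), row(1, 2), row(0, 3)])

# Only the columns coming from x*f, y*f, f^2 and f survive differentiation...
nonzero_cols = [k for k in range(10) if any(e != 0 for e in M.col(k))]
assert nonzero_cols == [2, 4, 5, 8]

# ...and the remaining 4x4 minor vanishes identically, so the rows
# phi^(3,0), phi^(2,1), phi^(1,2), phi^(0,3) are always linearly dependent.
assert sp.simplify(M[:, nonzero_cols].det()) == 0
```

The vanishing of the minor holds for an arbitrary symbolic f, not just for a particular example.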

Now, since f is assumed to be generic, its \(2{\hbox {nd}}\) order pure derivative \(f^{(0,2)}\) is non-zero. Hence from the vanishing of \(W_{2,1}\) and \(W_{1,2}\) we obtain that a parametrization of any generic quadratic surface satisfies (2). This concludes the first part of the proof.

Remark

Note that for any generic function f, if \(W_{2,1}\) and \(W_{1,2}\) vanish, then the remaining two generalized Wronskians also vanish. Indeed, we have

$$\begin{aligned} \begin{gathered} 3 f^{(2,0)} W_{3,0} = 2 f^{(1,1)} W_{2,1}-f^{(0,2)} W_{1,2}, \\ 3 f^{(0,2)} W_{0,3} = 2 f^{(1,1)} W_{1,2}-f^{(2,0)} W_{2,1}, \end{gathered} \end{aligned}$$
(7)

while both \(f^{(2,0)}\) and \(f^{(0,2)}\) are non-zero. Furthermore, the same holds for any pair of the featured generalized Wronskians except for \(W_{3,0}\) and \(W_{0,3}\), in which case the above equations (7), viewed as linear equations in the variables \(W_{2,1}\) and \(W_{1,2}\), may turn out to be linearly dependent. This is the case exactly when

$$\begin{aligned} f^{(0,2)}f^{(2,0)}-4{f^{(1,1)}}^2=0, \end{aligned}$$

which together with \(W_{3,0}=0\) and \(W_{0,3}=0\) forms a system of partial differential equations. This time, however, apart from parametrizations of certain quadratic surfaces (including degenerate ones), it admits a single family of exotic solutions of the form

$$\begin{aligned} f(x,y)=\frac{a}{(x+x_0)(y+y_0)}+b_1x+b_2y+c, \end{aligned}$$

which arise as parametrizations of certain cubic surfaces. Moreover, note that all these functions are generic, unless \(a=0\). Therefore the choice of equations (2) was arbitrary only to some extent.
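The relations (7), as well as the degeneracy condition satisfied by the exotic solutions, can be verified by treating the partial derivatives of f as independent symbols; a SymPy sketch (the instance \(f=1/(xy)\), i.e. \(a=1\) with all remaining parameters zero, is our choice):

```python
import sympy as sp

# Treat the partial derivatives of f as independent symbols.
f02, f11, f20, f30, f21, f12, f03 = sp.symbols('f02 f11 f20 f30 f21 f12 f03')

# The four non-trivial generalized Wronskians displayed above
# (common factor 24*f02**2 resp. 72*f02**2 included).
W30 = 24*f02**2*(3*f21*f02**2 - 6*f11*f12*f02 - f03*f20*f02 + 4*f03*f11**2)
W21 = 72*f02**2*(f30*f02**2 - 3*f12*f20*f02 + 2*f03*f11*f20)
W12 = 72*f02**2*(f03*f20**2 - 3*f02*f21*f20 + 2*f02*f11*f30)
W03 = 24*f02**2*(4*f30*f11**2 - 6*f20*f21*f11 + 3*f12*f20**2 - f02*f20*f30)

# The two linear relations (7) hold identically.
assert sp.expand(3*f20*W30 - 2*f11*W21 + f02*W12) == 0
assert sp.expand(3*f02*W03 - 2*f11*W12 + f20*W21) == 0

# The exotic solutions satisfy f02*f20 - 4*f11^2 = 0; check our instance.
x, y = sp.symbols('x y')
g = 1/(x*y)
degeneracy = sp.diff(g, x, 2)*sp.diff(g, y, 2) - 4*sp.diff(g, x, y)**2
assert sp.simplify(degeneracy) == 0
```

Both identities in (7) are polynomial identities, so expanding the differences to zero is a complete proof of them.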

Remark

Observe that the last factors of \(W_{3,0}\) and \(W_{0,3}\), as well as those of \(W_{2,1}\) and \(W_{1,2}\), coincide up to interchanging the variables. However, the overall symmetry is broken by the common factor \({f^{(0,2)}}^2\), which is the result of choosing \(\varvec{\phi }^{(0,4)}\) as a supplementary row. Just as we should expect, had we chosen \(\varvec{\phi }^{(4,0)}\) instead, we would have obtained the same set of generalized Wronskians, but with the common factor \({f^{(2,0)}}^2\) in place of \({f^{(0,2)}}^2\).

Let

$$\begin{aligned} Q\, {:=}\, S^{-1}{\mathbb {R}}\left[ x,y,\partial ^{(0,0)},\partial ^{(0,1)},\partial ^{(1,0)},\partial ^{(0,2)},\partial ^{(1,1)},\partial ^{(2,0)},\partial ^{(0,3)},\partial ^{(1,2)},\partial ^{(0,4)}\right] \end{aligned}$$

be the localization of a real polynomial ring in the selected 11 variables at S. Since localization commutes with adjoining new variables, there is a ring isomorphism

$$\begin{aligned} S^{-1}R\simeq Q\left[ \partial ^{(2,1)},\partial ^{(3,0)},\partial ^{(1,3)},\ldots \right] , \end{aligned}$$

where the latter is already a polynomial ring over Q in the remaining infinitely many variables. Let us choose a graded lexicographic order on the variables \(\partial ^{(i,j)}\) and then the graded reverse lexicographic order on monomials. We will find a Gröbner basis of \({\mathfrak {a}}\) with respect to this monomial ordering. For more details on Gröbner bases, including definitions and examples, we refer the reader to Cox et al. (2016, Chpt. 2).
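For readers less familiar with this machinery, here is a toy illustration of a reduced Gröbner basis under the grevlex order, computed with SymPy; the ideal below is a standard textbook example and is unrelated to \({\mathfrak {a}}\):

```python
import sympy as sp

x, y = sp.symbols('x y')

# A classical textbook ideal (unrelated to the differential ideal above).
F = [x**3 - 2*x*y, x**2*y - 2*y**2 + x]

# sympy returns the *reduced* Groebner basis for the chosen order.
G = sp.groebner(F, x, y, order='grevlex')

# Ideal membership: each original generator reduces to zero modulo G,
# while e.g. x does not belong to the ideal.
assert all(G.contains(p) for p in F)
assert G.contains(x**2)
assert not G.contains(x)

print(list(G.exprs))
```

The leading terms of the basis elements generate the ideal of leading terms, which is exactly the role played by the set G constructed below.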

Denote the polynomials (6) respectively by \(p_1\), \(p_2\). Observe that for every \(i,j\ge 0\) and \(k=1,2\), \({D_x}^i{D_y}^jp_k\) is linear in the highest-order partial derivatives and thus we can write e.g.

$$\begin{aligned} \begin{pmatrix}D_xD_xp_1\\ D_xD_yp_1\\ D_yD_yp_1\\ D_xD_xp_2\\ D_xD_yp_2\\ D_yD_yp_2 \end{pmatrix}={\varvec{A}}_5\begin{pmatrix}\partial ^{(0,5)}\\ \partial ^{(1,4)}\\ \partial ^{(2,3)}\\ \partial ^{(3,2)}\\ \partial ^{(4,1)}\\ \partial ^{(5,0)}\end{pmatrix}+{\varvec{b}}_5,\end{aligned}$$
(8)

where

$$\begin{aligned} \small {\varvec{A}}_5{:=}\begin{pmatrix} 0 &{} 0 &{} 2 \partial ^{(1,1)} \partial ^{(2,0)} &{} -3 \partial ^{(0,2)} \partial ^{(2,0)} &{} 0 &{} {\partial ^{(0,2)}}^2 \\ 0 &{} 2 \partial ^{(1,1)} \partial ^{(2,0)} &{} -3 \partial ^{(0,2)} \partial ^{(2,0)} &{} 0 &{} {\partial ^{(0,2)}}^2 &{} 0 \\ 2 \partial ^{(1,1)} \partial ^{(2,0)} &{} -3 \partial ^{(0,2)} \partial ^{(2,0)} &{} 0 &{} {\partial ^{(0,2)}}^2 &{} 0 &{} 0 \\ 0 &{} 0 &{} {\partial ^{(2,0)}}^2 &{} 0 &{} -3 \partial ^{(0,2)} \partial ^{(2,0)} &{} 2 \partial ^{(0,2)} \partial ^{(1,1)} \\ 0 &{} {\partial ^{(2,0)}}^2 &{} 0 &{} -3 \partial ^{(0,2)} \partial ^{(2,0)} &{} 2 \partial ^{(0,2)} \partial ^{(1,1)} &{} 0 \\ {\partial ^{(2,0)}}^2 &{} 0 &{} -3 \partial ^{(0,2)} \partial ^{(2,0)} &{} 2 \partial ^{(0,2)} \partial ^{(1,1)} &{} 0 &{} 0 \\ \end{pmatrix} \end{aligned}$$

is a \(6\times 6\) matrix and \({\varvec{b}}_5\) (the definition of which is irrelevant and therefore has been omitted for brevity) is a \(6\times 1\) vector over \(S^{-1}R\). Moreover, \({\varvec{A}}_5\) and \({\varvec{b}}_5\) contain only partial derivatives of order at most 4. One can easily verify that the determinant of \({\varvec{A}}_5\) is equal to

and thus is a unit in \(S^{-1}R\). It follows that \({\varvec{A}}_5\) is invertible over \(S^{-1}R\) and we can rewrite (8) as

$$\begin{aligned} {{\varvec{A}}_5}^{-1}\begin{pmatrix}D_xD_xp_1\\ D_xD_yp_1\\ D_yD_yp_1\\ D_xD_xp_2\\ D_xD_yp_2\\ D_yD_yp_2 \end{pmatrix}=\begin{pmatrix}\partial ^{(0,5)}\\ \partial ^{(1,4)}\\ \partial ^{(2,3)}\\ \partial ^{(3,2)}\\ \partial ^{(4,1)}\\ \partial ^{(5,0)}\end{pmatrix}+{{\varvec{A}}_5}^{-1}{\varvec{b}}_5.\end{aligned}$$
(9)

Now, the left-hand side is a vector of elements from \({\mathfrak {a}}\) and hence so is the right-hand side. Moreover, since \({{\varvec{A}}_5}^{-1}{\varvec{b}}_5\) contains only partial derivatives of order at most 4, the leading term of each polynomial on the right-hand side is the corresponding \(5{\hbox {th}}\) order partial derivative. By definition, the ideal \({\mathfrak {a}}\) is closed under derivations, and thus by differentiating these polynomials we can obtain an element of \({\mathfrak {a}}\) whose leading term is any prescribed partial derivative of higher order. Using the very same argument, we can likewise write

$$\begin{aligned} {{\varvec{A}}_4}^{-1}\begin{pmatrix}D_xp_1\\ D_yp_1\\ D_xp_2\\ D_yp_2\end{pmatrix}=\begin{pmatrix}\partial ^{(1,3)}\\ \partial ^{(2,2)}\\ \partial ^{(3,1)}\\ \partial ^{(4,0)}\end{pmatrix}+{{\varvec{A}}_4}^{-1}{\varvec{b}}_4,\quad {{\varvec{A}}_3}^{-1}\begin{pmatrix}p_1\\ p_2\end{pmatrix}=\begin{pmatrix}\partial ^{(2,1)}\\ \partial ^{(3,0)}\end{pmatrix}+{{\varvec{A}}_3}^{-1}{\varvec{b}}_3, \end{aligned}$$

since the determinant of

$$\begin{aligned} \small {\varvec{A}}_4{:=}\begin{pmatrix} 2 \partial ^{(1,1)} \partial ^{(2,0)} &{} -3 \partial ^{(0,2)} \partial ^{(2,0)} &{} 0 &{} {\partial ^{(0,2)}}^2 \\ -3 \partial ^{(0,2)} \partial ^{(2,0)} &{} 0 &{} {\partial ^{(0,2)}}^2 &{} 0 \\ {\partial ^{(2,0)}}^2 &{} 0 &{} -3 \partial ^{(0,2)} \partial ^{(2,0)} &{} 2 \partial ^{(0,2)} \partial ^{(1,1)} \\ 0 &{} -3 \partial ^{(0,2)} \partial ^{(2,0)} &{} 2 \partial ^{(0,2)} \partial ^{(1,1)} &{} 0 \\ \end{pmatrix} \end{aligned}$$

is equal to

$$\begin{aligned} -24\, {\partial ^{(0,2)}}^4 {\partial ^{(2,0)}}^2 \left( \partial ^{(0,2)} \partial ^{(2,0)}-{\partial ^{(1,1)}}^2\right) \end{aligned}$$
and the determinant of

$$\begin{aligned} \small {\varvec{A}}_3{:=}\begin{pmatrix} 0 &{} {\partial ^{(0,2)}}^2 \\ -3 \partial ^{(0,2)} \partial ^{(2,0)} &{} 2 \partial ^{(0,2)} \partial ^{(1,1)} \\ \end{pmatrix} \end{aligned}$$

is equal to

$$\begin{aligned} 3\, {\partial ^{(0,2)}}^3 \partial ^{(2,0)}. \end{aligned}$$
Hence all the partial derivatives \(\partial ^{(2,1)},\partial ^{(3,0)},\partial ^{(1,3)},\ldots \), which are exactly those not included in the definition of Q, are contained in the ideal of leading terms \(\langle \textrm{LT}({\mathfrak {a}})\rangle \) (Cox et al. 2016, Definition 2.5.1).
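The matrices \({\varvec{A}}_4\) and \({\varvec{A}}_3\) are small enough for their determinants to be recomputed directly; a SymPy sketch (with \(\partial ^{(0,2)},\partial ^{(1,1)},\partial ^{(2,0)}\) replaced by plain symbols):

```python
import sympy as sp

# Plain symbols standing for the formal derivatives in the matrices above.
a, b, c = sp.symbols('d02 d11 d20')

A4 = sp.Matrix([
    [2*b*c,  -3*a*c, 0,      a**2 ],
    [-3*a*c, 0,      a**2,   0    ],
    [c**2,   0,      -3*a*c, 2*a*b],
    [0,      -3*a*c, 2*a*b,  0    ],
])
A3 = sp.Matrix([
    [0,      a**2 ],
    [-3*a*c, 2*a*b],
])

det4 = sp.expand(A4.det())
det3 = sp.expand(A3.det())

# Both determinants factor entirely through the generators of the monoid S,
# hence they are units in the localization S^{-1}R.
assert sp.expand(det4 + 24*a**4*c**2*(a*c - b**2)) == 0
assert det3 == 3*a**3*c
```

In particular, both determinants are products of powers of the generators of S, up to a constant, which is exactly why the corresponding linear systems can be solved over \(S^{-1}R\).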

Remark

It is a mere coincidence that after computing the \(2{\hbox {nd}}\) order partial derivatives of \(p_1\) and \(p_2\) the number of independent equations is equal to the number of \(5{\hbox {th}}\) order partial derivatives of f, and thus the matrix \({\varvec{A}}_5\) is uniquely determined. The multiplicative monoid \(S\subseteq R\) was devised to contain all the prime factors of \(\det {\varvec{A}}_5\). However, to obtain \({\varvec{A}}_4\) and \({\varvec{A}}_3\) we had to arbitrarily choose some subset of variables, and this time it was not a coincidence that both \(\det {\varvec{A}}_4\) and \(\det {\varvec{A}}_3\) share the same prime factors as \(\det {\varvec{A}}_5\). Indeed, there are other choices for which this is no longer the case. Thus the set of variables of the polynomial ring Q was carefully selected so that both \({\varvec{A}}_3\) and \({\varvec{A}}_4\) are already invertible in \(S^{-1}R\).

Denote by G the set of polynomials constructed above, such that every monomial \(\partial ^{(2,1)},\partial ^{(3,0)},\partial ^{(1,3)},\ldots \) is a leading term \(\textrm{LT}(g)\) of some polynomial \(g\in G\). Suppose that \({\mathcal {Q}}(\Omega )^\dagger \supsetneq \langle G\rangle \) and let \(p\in {\mathcal {Q}}(\Omega )^\dagger \setminus \langle G\rangle \). After a complete reduction of p by G we obtain a remainder \(r\in {\mathcal {Q}}(\Omega )^\dagger \setminus \langle G\rangle \), which is irreducible by G, i.e. its leading term \(\textrm{LT}(r)\) is not a multiple of any \(\textrm{LT}(g)\), \(g\in G\) (Cox et al. 2016, Theorem 2.3.3). Thus r is an element of the coefficient ring Q and corresponds to some rational function in the selected 11 variables. We will prove that \(r=0\), which will eventually give us the desired contradiction. By definition, r vanishes at any tuple consisting of x, y, and the relevant partial derivatives of some function parametrizing a generic quadratic surface at \((x,y)\). Since r is rational, it is enough to show that the set of such arguments has a non-empty interior as a subset of \({\mathbb {R}}^{11}\).

For this, we define an implicit function \(\psi :{\mathbb {R}}^{11}\rightarrow {\mathbb {R}}^{11}\) in the following way. Let f be a parametrization of some quadratic surface satisfying (4) with \(a_{33}=1\). Then \(\psi \) maps the tuple of parameters

$$\begin{aligned} \left( x, \quad y, \quad a_{11}, \quad a_{12}, \quad a_{13}, \quad a_{22}, \quad a_{23}, \quad b_1, \quad b_2, \quad b_3, \quad c\right) \end{aligned}$$
(10)

to the tuple

$$\begin{aligned} \left( x,\,\,y,\,\, f,\,\, f^{(0, 1)},\,\, f^{(1, 0)},\,\, f^{(0, 2)},\,\,f^{(1, 1)},\,\, f^{(2, 0)},\,\, f^{(0, 3)},\,\, f^{(1, 2)},\,\, f^{(0, 4)} \right) \end{aligned}$$

consisting of x, y and the relevant partial derivatives of f at \((x,y)\). We can obtain an explicit formula for \(\psi \) by symbolically solving the quadratic equation (4) first and then symbolically differentiating the result. Since in general there are two possible solutions for f, we have to locally select an arbitrary branch of the square root function, so that \(\psi \) is smooth.

Now, consider a generic function

$$\begin{aligned} f(x,y)=\sqrt{1+x^2+y^2} \end{aligned}$$

parametrizing a quadratic surface represented by the tuple of parameters

$$\begin{aligned} \left( x, \, y, \, -1, \, 0, \, 0, \, -1, \, 0, \, 0, \, 0, \, 0, \, -1 \right) .\end{aligned}$$
(11)

Since (3) depend continuously on (10), any point in some open neighborhood U of (11) also corresponds to a parametrization of some generic quadratic surface and thus \(r\circ \psi \) vanishes on U. Hence it is enough to show that \(\psi (U)\) has a non-empty interior. Computing the Jacobian determinant of \(\psi \) at (11) yields

which is non-zero. Hence \(\psi \) is a local diffeomorphism and so there exists an open subset \(V\subseteq U\) such that \(\psi \vert _V:V\rightarrow \psi (V)\) is a diffeomorphism. In particular, \(\psi (V)\) is open. However, recall that it is contained in the zero set of r, which must therefore be the zero function, a contradiction. It follows that \({\mathfrak {a}}\subseteq {\mathcal {Q}}(\Omega )^\dagger =\langle G\rangle \subseteq {\mathfrak {a}}\), which means that \({\mathcal {Q}}(\Omega )^\dagger ={\mathfrak {a}}\); moreover, G is, in fact, a reduced Gröbner basis of \({\mathfrak {a}}\), which concludes the proof. \(\square \)

Remark

Now we are able to clarify in what sense the equations (2) are minimal. Namely, the polynomials (6) form a reduced Gröbner basis of \({\mathcal {Q}}(\Omega )^\dagger \), while, as will turn out, we would obtain the same results as in Theorems 1.1 and 1.2 for any generating set of \({\mathcal {Q}}(\Omega )^\dagger \). Although the elements (6) seem to be the best choice, the reduced Gröbner basis is by no means the only possible generating set. Besides, with Proposition 3.2 at hand, finding other generating sets becomes a purely algorithmic task.

4 Smoothing properties and their connection with holomorphicity

Later on, we will want to deduce the linear dependence of a set of functions from the vanishing of their generalized Wronskians. For this, we will use the main result from Wolsson (1989), where the necessary and sufficient conditions are established. Although the author nominally requires that all the generalized Wronskians vanish, in the course of the inherently constructive proof he considers only finitely many generalized Wronskians of bounded order. Nevertheless, our initial assumption that the function f is merely an element of \(W^{3,1}_{\textrm{loc}}(\Omega )\) is too weak for any non-trivial generalized Wronskian to be well-defined. Therefore we need to somehow improve the smoothness of f. As it turns out, differentiability of class \(C^5\) will be sufficient, and thus we will not use the following fact in its full generality.

Lemma 4.1

Let \(f\in W^{3,1}_{\textrm{loc}}(\Omega )\) be a function defined on a connected open subset \(\Omega \subseteq {\mathbb {R}}^2\). Suppose that f is generic and is a weak solution to the system of partial differential equations (2). Then f is infinitely differentiable.

Proof

Let \(u,v:\Omega \rightarrow {\mathbb {R}}\) be the functions defined as follows:

$$\begin{aligned} \begin{aligned} u(x,y)&\, {:=}\, \frac{f^{(2,0)}(x,y)-f^{(0,2)}(x,y)}{\left| f^{(0,2)}(x,y) f^{(2,0)}(x,y)-f^{(1,1)}(x,y)^2\right| ^{3/4}} \\ v(x,y)&\, {:=}\, \frac{2 f^{(1,1)}(x,y)}{\left| f^{(0,2)}(x,y) f^{(2,0)}(x,y)-f^{(1,1)}(x,y)^2\right| ^{3/4}}. \end{aligned} \end{aligned}$$
(12)

Note that they are well-defined by the assumption that f is generic. Since they depend only on the \(2{\hbox {nd}}\) order partial derivatives of f, which are assumed to be elements of \(W^{1,1}_{\textrm{loc}}(\Omega )\), and moreover the Hessian determinant of f is locally bounded away from 0, both functions u, v are elements of \(W^{1,1}_{\textrm{loc}}(\Omega )\). Computing their weak partial derivatives and applying (2), one finds that they satisfy the Cauchy–Riemann equations:

where \(p_1,p_2\) denote respectively the left-hand sides of (2) and \(\pm \) is the sign of the Hessian determinant. Again, the above formulas are well-defined by the assumption that f is generic. Thus u, v are analytic on \(\Omega \) (Gray and Morris 1978, Theorem 9). This is actually a special case of a more general result on the regularity of solutions of hypoelliptic partial differential equations (Hörmander 1961).

Now, observe that the \(1{\hbox {st}}\) order partial derivatives of u, v, as well as the left-hand sides of (2), are linear in the \(3{\hbox {rd}}\) order partial derivatives of f and thus we can write e.g.

$$\begin{aligned} \begin{pmatrix}u^{(1,0)}\\ u^{(0,1)}\\ p_1\\ p_2\end{pmatrix}={\varvec{A}}\begin{pmatrix}f^{(0,3)}\\ f^{(1,2)}\\ f^{(2,1)}\\ f^{(3,0)}\end{pmatrix}, \end{aligned}$$

which allows us to express all the \(3{\hbox {rd}}\) order partial derivatives of f in terms of the \(1{\hbox {st}}\) order partial derivatives of u and the \(2{\hbox {nd}}\) order partial derivatives of f. To do this, we only need to verify that the matrix \({\varvec{A}}\) is invertible. Indeed, its determinant is equal to

where ± is the sign of the Hessian determinant. Therefore we have

$$\begin{aligned} \begin{pmatrix}f^{(0,3)}\\ f^{(1,2)}\\ f^{(2,1)}\\ f^{(3,0)}\end{pmatrix}={\varvec{A}}^{-1}\begin{pmatrix}u^{(1,0)}\\ u^{(0,1)}\\ p_1\\ p_2 \end{pmatrix}={\varvec{A}}^{-1}\begin{pmatrix}u^{(1,0)}\\ u^{(0,1)}\\ 0\\ 0 \end{pmatrix},\end{aligned}$$
(13)

where the right-hand side is linear in the \(1{\hbox {st}}\) order partial derivatives of u and algebraic in the \(2{\hbox {nd}}\) order partial derivatives of f. Since \({\varvec{A}}^{-1}=(\det {\varvec{A}})^{-1}({{\,\textrm{adj}\,}}{\varvec{A}})\), where \(\det {\varvec{A}}\) is locally bounded away from 0 and \({{\,\textrm{adj}\,}}{\varvec{A}}\) is a polynomial in the \(2{\hbox {nd}}\) order partial derivatives of f, we have \({\varvec{A}}^{-1}\in W^{1,1}_{\textrm{loc}}(\Omega )\). It follows that the right-hand side (and hence also the left-hand side) is an element of \(W^{1,1}_{\textrm{loc}}(\Omega )\), which means by the very definition that \(f\in W^{4,1}_{\textrm{loc}}(\Omega )\). Now we are able to weakly differentiate the equalities (13) and iterate the same argument to see that f is indeed infinitely differentiable on \(\Omega \). This ends the proof. \(\square \)

5 Proofs of the main theorems

Before we move on to the essential part of this section, we will prove the following lemma, which will play a key role in the proofs of both main theorems:

Lemma 5.1

Let \(f\in W^{3,1}_{\textrm{loc}}(\Omega )\) be a function defined on a connected open subset \(\Omega \subseteq {\mathbb {R}}^2\). Suppose that f is generic. Then f satisfies the system of partial differential equations (2) if and only if its graph is contained in a quadratic surface.

Proof

The left implication \((\impliedby )\) follows immediately from Proposition 3.2. A proof of the right implication \((\implies )\) is not so straightforward, since we want to deduce the linear dependence of a set of functions from the vanishing of their generalized Wronskians, which fails to be true in general (Peano 1889) and therefore requires specific arguments.
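Peano's classical counterexample illustrates the failure: \(x^2\) and \(x|x|\) are linearly independent on \({\mathbb {R}}\), yet their Wronskian vanishes identically; a SymPy sketch:

```python
import sympy as sp

x = sp.symbols('x', real=True)
f1 = x**2
f2 = x*sp.Abs(x)

# The ordinary Wronskian of f1 and f2 vanishes at every real point...
W = f1*sp.diff(f2, x) - sp.diff(f1, x)*f2
assert all(W.subs(x, t) == 0 for t in [-2, -1, sp.Rational(1, 2), 3])

# ...yet f1 and f2 are linearly independent: evaluating a putative
# relation c1*f1 + c2*f2 = 0 at x = 1 and x = -1 forces c1 = c2 = 0.
M = sp.Matrix([[f1.subs(x, 1),  f2.subs(x, 1)],
               [f1.subs(x, -1), f2.subs(x, -1)]])
assert M.det() != 0
```

This is exactly why Wolsson's additional conditions on critical points are needed below.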

Again we adopt the notation from Wolsson (1989).

Definition 6

(Wolsson 1989, Definition 2) A critical point of \(\varvec{\phi }\) is a point of the domain at which all generalized Wronskians of \(\varvec{\phi }\) vanish.

Definition 7

(Wolsson 1989, Definition 3) An \(r\times r\) generalized sub-Wronskian of \(\varvec{\phi }\), \(1\le r\le n\), is a generalized Wronskian of any subsequence of \(\varvec{\phi }\).

Note that not every minor of a generalized Wronskian is a generalized sub-Wronskian. Indeed, the above definition requires that the minor also satisfies the additional condition on the orders of the partial derivatives.

Definition 8

(Wolsson 1989, Definition 4) The order of a critical point \({\varvec{t}}\) of \(\varvec{\phi }\) is the largest positive integer r for which some \(r\times r\) generalized sub-Wronskian of \(\varvec{\phi }\) is not zero at \({\varvec{t}}\). Should all sub-Wronskians vanish at \({\varvec{t}}\), the order is defined to be zero.

We will show that every \({\varvec{t}}\in \Omega \) is a critical point of \(\varvec{\phi }\) of order 9. First, observe that all \(10\times 10\) generalized Wronskians of \(\varvec{\phi }\) vanish identically on \(\Omega \). Indeed, from Lemma 4.1 we infer that f is smooth, and thus all generalized Wronskians of \(\varvec{\phi }\) are well-defined. Moreover, by Assertion 3.1 they belong to \({\mathcal {Q}}(\Omega )^\dagger \) and hence vanish identically on \(\Omega \), since both generators of \({\mathcal {Q}}(\Omega )^\dagger \) do.

We are left to prove that for every \({\varvec{t}}\in \Omega \) there exists a \(9\times 9\) generalized sub-Wronskian of \(\varvec{\phi }\) that is non-zero at \({\varvec{t}}\). Observe that, in particular, every \(9\times 9\) minor of the following \(9\times 10\) matrix

$$\begin{aligned} {\varvec{W}}{:=}\begin{pmatrix} \varvec{\phi }\\ \varvec{\phi }^{(0,1)} \\ \varvec{\phi }^{(1,0)} \\ \varvec{\phi }^{(0,2)} \\ \varvec{\phi }^{(1,1)} \\ \varvec{\phi }^{(2,0)} \\ \varvec{\phi }^{(0,3)} \\ \varvec{\phi }^{(1,2)} \\ \varvec{\phi }^{(0,4)} \end{pmatrix} \end{aligned}$$

that contains the first row is a valid generalized sub-Wronskian of \(\varvec{\phi }\), and thus it is enough to show that \({\varvec{W}}={\varvec{W}}({\varvec{t}})\) has full rank at every \({\varvec{t}}\in \Omega \). Denote by \(W_i\) the minor of \({\varvec{W}}\) obtained by deleting the \(i{\hbox {th}}\) column and suppose, towards a contradiction, that all \(W_i\) vanish at \({\varvec{t}}\). A direct computation shows that

which implies

$$\begin{aligned} f^{(0,4)}=\frac{4 {f^{(0,3)}}^2}{3 f^{(0,2)}}. \end{aligned}$$

Applying the above result to the definition of \(W_5\) yields

and consequently

$$\begin{aligned} f^{(0,3)}=0. \end{aligned}$$

It follows that

which finally gives us the desired contradiction.

We are now at a point where we can apply the following fundamental theorem:

Lemma 5.2

(Wolsson 1989, Theorem 2) If G is an open connected set consisting of critical points of the same order \(r>0\), then \(\varvec{\phi }\) has a linearly independent subset \(S_r=\{\phi _1,\ldots ,\phi _r\}\), say, which is a basis of \(\textrm{span}(\varvec{\phi })\), and consequently \(\varvec{\phi }\) is linearly dependent on G.

By Lemma 5.2 we know that \(\varvec{\phi }\) is linearly dependent on \(\Omega \). This concludes the proof. \(\square \)

Remark

Observe that the system of partial differential equations (2) is satisfied if and only if the pair of functions (12) satisfies the Cauchy-Riemann equations. Thus the graph of f is contained in a quadratic surface if and only if \(u+iv\) is holomorphic. Moreover, if f satisfies (4), then a direct computation shows that \(u+iv\) is simply a quadratic polynomial:

(14)

where

$$\begin{aligned} {\varvec{Q}}{:=}\begin{pmatrix}a_{11} &{} \frac{1}{2}a_{12} &{} \frac{1}{2}a_{13} &{} \frac{1}{2}b_1 \\ \frac{1}{2}a_{12} &{} a_{22} &{} \frac{1}{2}a_{23} &{} \frac{1}{2}b_2 \\ \frac{1}{2}a_{13} &{} \frac{1}{2}a_{23} &{} a_{33} &{} \frac{1}{2}b_{3} \\ \frac{1}{2}b_1 &{} \frac{1}{2}b_2 &{} \frac{1}{2}b_3 &{} c \end{pmatrix}\end{aligned}$$
(15)

is a symmetric matrix defining the affine quadratic form (4) and \(Q_{i,j}\) is the (i, j) minor of \({\varvec{Q}}\), i.e. the determinant of the submatrix formed by deleting the \(i{\hbox {th}}\) row and the \(j{\hbox {th}}\) column.

Remark

Since a quadratic surface is uniquely determined by 9 parameters and the quadratic polynomial (14) has only 5 parameters, a natural question arises: which functions correspond to the same quadratic polynomial? Note that u, v depend only on \(2{\hbox {nd}}\) order partial derivatives of f, which means that adding linear terms does not change (14). For completeness, we still need one more parameter. A careful inspection of (14) shows, e.g., that every function of the form

$$\begin{aligned} f(x,y){:=}a\sqrt{1-ax^2-ay^2}+bx+cy+d \end{aligned}$$

gives rise to the same quadratic polynomial \(u+iv=z^2\). Unfortunately, the general answer is far more complicated and will not be given here.
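One can at least confirm symbolically that members of this family satisfy (2), as they must, since their graphs lie on quadrics. A sketch with sympy for the parameter value \(a=1\) (a choice of ours, made only to keep the computation light; b, c, d remain symbolic):

```python
import sympy as sp

x, y, b, c, dd = sp.symbols('x y b c d')
# The family above with a = 1; its graph lies on the quadric
# (z - bx - cy - d)^2 + x^2 + y^2 = 1.
f = sp.sqrt(1 - x**2 - y**2) + b*x + c*y + dd

def d(i, j):
    """Partial derivative f^{(i,j)}."""
    return f.diff(*([x] * i + [y] * j))

# Left-hand sides of the system (2); both vanish identically.
p1 = d(3,0)*d(0,2)**2 - 3*d(1,2)*d(2,0)*d(0,2) + 2*d(0,3)*d(1,1)*d(2,0)
p2 = d(0,3)*d(2,0)**2 - 3*d(2,1)*d(0,2)*d(2,0) + 2*d(3,0)*d(1,1)*d(0,2)
assert sp.simplify(p1) == 0 and sp.simplify(p2) == 0
```

Note that the linear terms bx + cy + d drop out of all derivatives of order at least 2, consistent with the remark that adding linear terms does not change (14).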

Finally, we will need one more simple fact, which can be verified by a direct computation:

Assertion 5.3

(Zawalski 2022, Theorem 1)

Let \(f:{\mathbb {R}}^2\supset \Omega \rightarrow {\mathbb {R}}\) be a function of class \(C^2\) defined on an open subset of \({\mathbb {R}}^2\) and satisfying a quadratic equation (4). Then the following formula holds:

$$\begin{aligned} \det {\varvec{H}}_f=-16\det {\varvec{Q}}\cdot \Delta _f^{-2}, \end{aligned}$$

where \({\varvec{H}}_f\) is the Hessian matrix of f, \(\Delta _f\) is the discriminant of (4) with respect to the variable f and \({\varvec{Q}}\) is as defined in (15).\(\square \)

Now we are ready to prove Theorem 1.1.

Proof of Theorem 1.1

Define \(U\subseteq \Omega \) to be the subset consisting of those points where the Hessian determinant of f is positive. Note that U is open, which immediately follows from the continuity of the partial derivatives. Moreover, the assumption on f asserts that U is also non-empty.

First, we will show the right implication \((\implies )\). Since \(f\vert _U\) is generic, by Lemma 5.1 its graph is contained in a quadratic surface. Let \({\varvec{t}}\in \Omega \) be a limit point of U, i.e. such that there exists a sequence \({\varvec{t}}_\bullet \) of points in U whose limit is \({\varvec{t}}\). Now, if \(\Delta _f\) vanishes identically on U then f is affine, a contradiction. Thus there exists \({\varvec{u}}\in U\) such that \(\Delta _f({\varvec{u}})>0\), which implies \(-16\det {\varvec{Q}}>0\). Moreover, since \(\Delta _f\) is a quadratic polynomial and the sequence \({\varvec{t}}_\bullet \) is convergent, the sequence \(\Delta _f({\varvec{t}}_\bullet )^2\) is bounded from above. It follows that \(\det {\varvec{H}}_f(\varvec{t}_\bullet )=-16\det {\varvec{Q}}\cdot \Delta _f(\varvec{t}_\bullet )^{-2}\) is bounded from below by some positive constant \(\varepsilon >0\). In particular, we have \(\det \varvec{H}_f({\varvec{t}})\ge \varepsilon >0\) and hence \({\varvec{t}}\in U\), by the very definition. Thus we have shown that U contains all its limit points, which makes it closed in \(\Omega \). However, U is also open and \(\Omega \) is connected, hence \(U=\Omega \). This concludes the first part of the proof.

The remaining left implication follows in much the same way. Since the graph of \(f\vert _U\) is by assumption contained in a quadratic surface, we repeat the above limit-point argument to see that likewise \(U=\Omega \). With this result at hand, we once again apply Lemma 5.1 to conclude the proof. \(\square \)

Finally, we are in a position to dispense with the assumption on the Hessian determinant. It turns out to be essential, however, since (2) is also satisfied by parametrizations of some ruled surfaces, whose Hessian determinant is non-positive.

Proof of Theorem 1.2

Define the following open sets:

$$\begin{aligned} \Omega _{\mathrm a}&{:=}\left\{ f^{(0,2)}\ne 0, f^{(2,0)}\ne 0, f^{(0,2)} f^{(2,0)}-{f^{(1,1)}}^2<0\right\} ,\\ \Omega _{\mathrm b}&{:=}{{\,\textrm{Int}\,}}\left\{ f^{(0,2)} f^{(2,0)}-{f^{(1,1)}}^2=0\right\} ,\\ \Omega _{\mathrm c}&{:=}{{\,\textrm{Int}\,}}\left\{ f^{(2,0)}=0\right\} ,\\ \Omega _{\mathrm d}&{:=}{{\,\textrm{Int}\,}}\left\{ f^{(0,2)}=0\right\} . \end{aligned}$$

Clearly their union \(\Omega _{\mathrm a}\cup \Omega _{\mathrm b}\cup \Omega _{\mathrm c}\cup \Omega _{\mathrm d}\) is dense in \(\Omega \). By definition, on each connected component of \(\Omega _{\mathrm b}\) the graph of f is contained in a developable surface. Moreover, on each connected component of \(\Omega _{\mathrm c}\) (respectively: \(\Omega _{\mathrm d}\)) f is linear along every straight line parallel to the OX (respectively: OY) axis and thus, again by definition, its graph is contained in a Catalan surface with directrix plane XZ (respectively: YZ). Hence to prove the right implication \((\implies )\) it remains to show that on every connected component of \(\Omega _{\mathrm a}\) the graph of f satisfying (2) is contained in a doubly-ruled surface, which readily follows from Lemma 5.1. Indeed, we immediately obtain that the graph of f is contained in a quadratic surface of negative Gaussian curvature. The only two are the hyperbolic paraboloid and the single-sheeted hyperboloid, both of which are doubly-ruled (Hilbert and Cohn-Vossen 1999, p. 15). This concludes the first part of the proof.
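As a sanity check of the last step, one can verify symbolically that a graph patch of the single-sheeted hyperboloid \(z^2=x^2+y^2-1\) solves the system (2). A sketch with sympy (the helper `d` is ours, introduced only for this check):

```python
import sympy as sp

x, y = sp.symbols('x y')
# Graph patch of the single-sheeted hyperboloid z^2 = x^2 + y^2 - 1,
# defined where x^2 + y^2 > 1.
f = sp.sqrt(x**2 + y**2 - 1)

def d(i, j):
    """Partial derivative f^{(i,j)}."""
    return f.diff(*([x] * i + [y] * j))

# Left-hand sides of the system (2); both simplify to zero identically.
p1 = d(3,0)*d(0,2)**2 - 3*d(1,2)*d(2,0)*d(0,2) + 2*d(0,3)*d(1,1)*d(2,0)
p2 = d(0,3)*d(2,0)**2 - 3*d(2,1)*d(0,2)*d(2,0) + 2*d(3,0)*d(1,1)*d(0,2)
assert sp.simplify(p1) == 0 and sp.simplify(p2) == 0
```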

On the other hand, observe that \(f\vert _{\Omega _{\mathrm b}}\) automatically satisfies (2). Indeed, denote

$$\begin{aligned} H_f(x,y){:=}f^{(0,2)}(x,y) f^{(2,0)}(x,y)-f^{(1,1)}(x,y)^2 \end{aligned}$$

and observe that

$$\begin{aligned} p_1&= -4 f^{(1,2)} H_f+f^{(0,2)} \frac{\partial H_f}{\partial x}+2 f^{(1,1)} \frac{\partial H_f}{\partial y} = 0, \\ p_2&= -4 f^{(2,1)} H_f+f^{(2,0)} \frac{\partial H_f}{\partial y}+2 f^{(1,1)} \frac{\partial H_f}{\partial x} = 0, \end{aligned}$$

where \(p_1,p_2\) again stand for the left-hand sides of (2), respectively. Moreover, \(f\vert _{\Omega _{\mathrm c}}\) (respectively: \(f\vert _{\Omega _{\mathrm d}}\)) satisfies \(f^{(2,0)}\equiv 0\) and consequently \(f^{(3,0)}\equiv 0\) (respectively: \(f^{(0,2)}\equiv 0\) and consequently \(f^{(0,3)}\equiv 0\)), in which case a simple check shows that it satisfies (2) as well. Hence to prove the left implication it remains to show that for each \(\Omega _i\), \(f\vert _{\Omega _i\cap \Omega _{\mathrm a}}\) satisfies (2). However, \(\Omega _i\cap \Omega _{\mathrm a}\) is non-empty if and only if the graph of \(f\vert _{\Omega _i}\) is contained in a doubly-ruled surface of negative Gaussian curvature. The only two are the hyperbolic paraboloid and the single-sheeted hyperboloid (Hilbert and Cohn-Vossen 1999, p. 15), both of which are quadratic. The assertion follows from Lemma 5.1, which concludes the proof. \(\square \)
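The two displayed identities for \(p_1\) and \(p_2\) can be confirmed symbolically by treating the partial derivatives of f as formal expressions. A minimal sketch in sympy (the helper `d` is ours, introduced only for this check):

```python
import sympy as sp

x, y = sp.symbols('x y')
f = sp.Function('f')(x, y)

def d(i, j):
    """Partial derivative f^{(i,j)} as a formal expression."""
    return f.diff(*([x] * i + [y] * j))

# Hessian determinant H_f and the left-hand sides p1, p2 of the system (2).
H = d(2, 0)*d(0, 2) - d(1, 1)**2
p1 = d(3,0)*d(0,2)**2 - 3*d(1,2)*d(2,0)*d(0,2) + 2*d(0,3)*d(1,1)*d(2,0)
p2 = d(0,3)*d(2,0)**2 - 3*d(2,1)*d(0,2)*d(2,0) + 2*d(3,0)*d(1,1)*d(0,2)

# The displayed identities: both right-hand sides expand exactly to p1, p2.
id1 = -4*d(1, 2)*H + d(0, 2)*H.diff(x) + 2*d(1, 1)*H.diff(y)
id2 = -4*d(2, 1)*H + d(2, 0)*H.diff(y) + 2*d(1, 1)*H.diff(x)
assert sp.expand(id1 - p1) == 0
assert sp.expand(id2 - p2) == 0
```

In particular, on \(\Omega _{\mathrm b}\) both \(H_f\) and its first derivatives vanish, so \(p_1=p_2=0\) there, exactly as claimed above.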

Remark

Denote by

$$\begin{aligned} f({\varvec{t}}+{\varvec{h}}){=:}\sum _{k=0}^3\frac{1}{k!}f_k({\varvec{t}}) [{\varvec{h}}]+o(\Vert {\varvec{h}}\Vert ^3) \end{aligned}$$

the series expansion of f at \({\varvec{t}}\in \Omega \), where \(f_k({\varvec{t}})\) stands for its \({k}{\hbox {th}}\) order homogeneous Taylor polynomial in \({\varvec{h}}\). Generally, any non-zero \(2{\hbox {nd}}\) order homogeneous polynomial vanishes at at most two points of \(\mathbb{R}\mathbb{P}^1\), i.e. on at most two lines through the origin. Therefore if the graph of f is contained in a doubly-ruled surface, \(f_2({\varvec{t}})\) vanishes exactly on the two rulings that pass through \({\varvec{t}}\). Moreover, \(f_3({\varvec{t}})\) likewise must vanish on the same two rulings. In particular, it follows that \(f_2({\varvec{t}})\) divides \(f_3({\varvec{t}})\) as a polynomial. So it should come as no surprise that, for a generic function f, equations (2) are satisfied if and only if \(f_2({\varvec{t}})\) divides \(f_3({\varvec{t}})\). Indeed,

$$\begin{aligned} f_3({\varvec{t}})[h_1,h_2]=\begin{aligned}&\left( \frac{f^{(3,0)}({\varvec{t}})}{f^{(2,0)} ({\varvec{t}})}h_1+\frac{f^{(0,3)}({\varvec{t}})}{f^{(0,2)} ({\varvec{t}})}h_2\right) f_2({\varvec{t}})[h_1,h_2]\\&\quad -h_1h_2\left( \frac{p_2({\varvec{t}})}{f^{(2,0)}(\varvec{t})f^{(0,2)}({\varvec{t}})}h_1+\frac{p_1(\varvec{t})}{f^{(2,0)}({\varvec{t}})f^{(0,2)}(\varvec{t})}h_2\right) ,\end{aligned} \end{aligned}$$

where \(p_1,p_2\) again stand for the left-hand sides of (2), respectively. Observe that the remainder is a product of \(h_1\), \(h_2\) and some linear homogeneous polynomial, whereas \(f_2({\varvec{t}})[h_1,h_2]\) is divisible by neither \(h_1\) nor \(h_2\), which means it cannot divide the remainder unless the latter is zero. Thus we have found yet another way of looking at (2): in the generic case, it arises as the generalized Wronskians of a certain set of functions, as the Cauchy-Riemann equations for a certain pair of functions, and now as the coefficients of the remainder from dividing \(f_3(\varvec{t})\) by \(f_2({\varvec{t}})\).
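The displayed decomposition of \(f_3({\varvec{t}})\) can likewise be verified symbolically. A sketch in sympy, where the symbols f20, …, f03 stand in for the partial derivatives \(f^{(i,j)}({\varvec{t}})\) (names ours, introduced only for this check):

```python
import sympy as sp

h1, h2 = sp.symbols('h1 h2')
# Independent symbols standing for the partial derivatives f^{(i,j)} at t.
f20, f11, f02 = sp.symbols('f20 f11 f02')
f30, f21, f12, f03 = sp.symbols('f30 f21 f12 f03')

# Homogeneous Taylor polynomials of orders 2 and 3.
f2 = f20*h1**2 + 2*f11*h1*h2 + f02*h2**2
f3 = f30*h1**3 + 3*f21*h1**2*h2 + 3*f12*h1*h2**2 + f03*h2**3

# Left-hand sides of the system (2).
p1 = f30*f02**2 - 3*f12*f20*f02 + 2*f03*f11*f20
p2 = f03*f20**2 - 3*f21*f02*f20 + 2*f30*f11*f02

# The displayed decomposition of f_3 in terms of f_2 and the remainder.
rhs = (f30/f20*h1 + f03/f02*h2)*f2 \
      - h1*h2*(p2/(f20*f02)*h1 + p1/(f20*f02)*h2)
assert sp.simplify(rhs - f3) == 0
```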

Remark

Since the proofs of both theorems were mainly algebraic, the same results hold also in the complex setting, if we assume \(f:\mathbb C^2\supseteq \Omega \rightarrow {\mathbb {C}}\) to be holomorphic. Although the author did not point it out, the same applies to the cited work of Wolsson (1989) concerning generalized Wronskians, which allows us to apply his results in an analogous manner. Only the smoothing Lemma 4.1 ceases to make sense, but it is no longer needed.

Let us conclude our considerations with an alternative proof of a well-known corollary from the aforementioned theorem of Maschke–Pick–Berwald (Nomizu et al. 1994, Theorem 4.5):

Corollary 5.4

Let \(S\subset {\mathbb {R}}^3\) be a convex surface of class \(C^3\) such that for every \({\varvec{x}}\in S\) there is a quadratic surface having \(3{\hbox {rd}}\) order contact with S at \({\varvec{x}}\). Then S is itself a quadratic surface.

Proof

Define \(f:{\mathbb {R}}^2\supseteq U\rightarrow {\mathbb {R}}\) to be a function whose graph contains an open subset of S. Fix \({\varvec{x}}\in U\) and define \(g:{\mathbb {R}}^2\supseteq V\rightarrow {\mathbb {R}}\) to be a parametrization of a quadratic surface having \(3{\hbox {rd}}\) order contact with S at \({\varvec{x}}\). It follows from the ‘if’ part of Theorem 1.1 that g satisfies (2) at \(\varvec{x}\). Moreover, by assumption we have the equality of jets \(J_{{\varvec{x}}}^3g=J_{{\varvec{x}}}^3f\), and hence f likewise satisfies (2) at \({\varvec{x}}\). Since \(\varvec{x}\) was arbitrary, f satisfies (2) on the whole domain U, and finally from the ‘only if’ part of Theorem 1.1 we obtain that its graph is contained in a quadratic surface. This concludes the proof. \(\square \)

The above differential characterization of quadratic surfaces is expressed in the language of differential geometry rather than differential equations. However, unlike that of Theorem 1.1, the assumption of Corollary 5.4 is manifestly invariant under affine changes of the coordinate system, which is a highly desirable property.