Abstract
Let \(f\in W^{3,1}_{\textrm{loc}}(\Omega )\) be a function defined on a connected open subset \(\Omega \subseteq {\mathbb {R}}^2\). We will show that its graph is contained in a quadratic surface if and only if f is a weak solution to a certain system of \({3}{\hbox {rd}}\) order partial differential equations, unless the Hessian determinant of f is non-positive on the whole of \(\Omega \). Moreover, we will prove that the system is in some sense the simplest possible within a wide class of differential equations, which will lead to a classification of all polynomial partial differential equations satisfied by parametrizations of generic quadratic surfaces. Although we will mainly use the tools of linear and commutative algebra, the theorem itself is also somehow related to holomorphic functions.
1 Introduction
It has been known since the work of Blaschke (1923, p. 18) that conics are the only planar curves with constant equiaffine curvature. In the special case of the graph of a function of class \(C^5\), this condition is equivalent to a certain \(5{\hbox {th}}\) order ordinary differential equation, which reads
In higher dimensions, hyperquadrics are characterized by the Maschke–Pick–Berwald theorem (Nomizu et al. 1994, Theorem 4.5) as the only hypersurfaces with vanishing cubic form C defined in Nomizu et al. (1994, Chpt. II, Sect. 4). However, the definition implicitly uses the intrinsic Blaschke structure and thus the cubic form C can hardly be expressed in an extrinsic coordinate system. It is also unclear what minimal smoothness we need to assume. Nevertheless, such a result for 2-dimensional surfaces turns out to be a consequence of two relatively simple partial differential equations. The aim of this paper is to prove the following main theorem:
Theorem 1.1
Let \(f\in W^{3,1}_{\textrm{loc}}(\Omega )\) be a function from the local Sobolev space,Footnote 1 defined on a connected open subset \(\Omega \subseteq {\mathbb {R}}^2\). Suppose that the Hessian determinant of f is somewhere positive. Then f is a weak solution to the system of partial differential equations
if and only if its graph is contained in a quadratic surface.
Therefore Theorem 1.1 can be considered a 2-dimensional analog of the aforementioned result of Blaschke. In contrast to the Maschke–Pick–Berwald theorem, it is formulated in terms of simple, explicit partial differential equations, under a weaker smoothness assumption. Moreover, we will show that the system (2) is minimal in the sense that the left-hand sides form a minimal generating set (viz. a reduced Gröbner basis) of a certain differential ideal.
Such a characterization of quadratic surfaces of positive Gaussian curvature as the only solutions to some partial differential equations without boundary condition may be useful when one wants to prove that some specific convex body is an ellipsoid using e.g. the tools of differential geometry. Such problems arise naturally in convex geometry, especially in various characterizations of Hilbert spaces among all finite-dimensional Banach spaces.
As superfluous as it may seem, the assumption on the Hessian determinant is not purely technical, as the following holds:
Theorem 1.2
Let \(f\in W^{3,1}_{\textrm{loc}}(\Omega )\) be a function from the local Sobolev space, defined on a connected open subset \(\Omega \subseteq {\mathbb {R}}^2\). Suppose that the Hessian determinant of f is non-positive. Then f is a weak solution to the system of partial differential equations (2) if and only if \(\Omega \) contains a countable union of disjoint open connected subsets \(\Omega _i\) such that:
(1) on each \(\Omega _i\) the graph of f is contained in either:

(a) a doubly-ruled surface,Footnote 2 or

(b) a developable surface,Footnote 3 or

(c) a Catalan surfaceFootnote 4 with directrix plane XZ, or

(d) a Catalan surface with directrix plane YZ,

(2) the union \(\bigcup \Omega _i\) is dense in \(\Omega \).
Note that all of the above are particular examples of ruled surfaces.Footnote 5 Regrettably, the exact classification of solutions to (2) seems to be a tedious, technical task and therefore will not be given here, so as not to overshadow the main idea.
To perform lengthy computations, we will employ the widely used technical computing system Wolfram Mathematica (Wolfram Research 2016). Nevertheless, they still could have been done with pen and paper (albeit with some difficulty) and hence the proof remains human-surveyable. A thorough discussion of this aspect can be found in Appendix A. For transparency, all the results obtained with the help of a computer are tagged with “Spikey”, followed by a reference to the relevant section of Appendix A.
2 Notation and basic concepts
To prove Theorem 1.1 we will need some very general facts concerning quadratic surfaces that are in themselves quite interesting. We begin by rephrasing the problem in the language of commutative algebra.
Definition 1
Let
be a ring of polynomials in variables x, y and formal partial derivatives \(\partial ^{(i,j)}\) and let
be a submonoid of the multiplicative monoid of R, with the listed generators. By \(S^{-1}R\) we denote a localisation of R at S (Eisenbud 1995, Sect. 2.1).
The ring \(S^{-1}R\) can be viewed as an algebra of a certain type of differential operators T defined for those smooth functions \(f:{\mathbb {R}}^2\supseteq \Omega \rightarrow {\mathbb {R}}\) for which all the expressions
are nowhere zero on \(\Omega \) and thus have reciprocals. We will call such functions generic. Examples include but are not limited to functions with positive Hessian determinant, i.e. those whose graphs have positive Gaussian curvature.
Notation
Let \(\Omega \subseteq {\mathbb {R}}^2\) be a connected open subset of \({\mathbb {R}}^2\). Denote by \({\mathcal {G}}(\Omega )\) the set of generic functions \(f:\Omega \rightarrow {\mathbb {R}}\) and by \({\mathcal {Q}}(\Omega )\) its subset consisting of parametrizations of quadratic surfaces.
Definition 2
Let \(D_x,D_y:S^{-1}R\rightarrow S^{-1}R\) be derivations (Eisenbud 1995, Chpt. 16), i.e. \({\mathbb {R}}\)-linear endomorphisms of additive group of \(S^{-1}R\) satisfying the Leibniz product rule
and thus uniquely determined by their values on indeterminates:
In particular, the well-known formula for differentiating fractions
follows from the Leibniz product rule. A ring \(S^{-1}R\) equipped with derivations \(D_x,D_y\) forms a differential ring.
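As a sanity check, the quotient rule above can be verified symbolically. The following SymPy sketch is our own illustration: it acts on concrete rational functions of x and y rather than on formal elements of \(S^{-1}R\), but the algebraic identity is the same.

```python
import sympy as sp

x, y = sp.symbols('x y')

# Concrete rational functions standing in for elements p, q of R.
p = x**2*y + 1
q = x*y - 3

# Quotient rule D(p/q) = (D(p)*q - p*D(q))/q^2, with D = d/dx.
lhs = sp.diff(p/q, x)
rhs = (sp.diff(p, x)*q - p*sp.diff(q, x))/q**2

assert sp.simplify(lhs - rhs) == 0
```

The same check with D = d/dy goes through verbatim, reflecting that both derivations \(D_x,D_y\) obey the Leibniz rule.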
Definition 3
A differential ideal \({\mathfrak {a}}\) in a differential ring R is an ideal that is mapped to itself by each derivation.
Definition 4
Let X be a subset of \({\mathcal {G}}(\Omega )\). The annihilator of X in \(S^{-1}R\), denoted by \(X^\dagger \), is the collection of differential operators \(T\in S^{-1}R\) such that \(Tf=0\) for all \(f\in X\). The annihilator of any subset is clearly a differential ideal. The annihilator of the empty set is the whole \(S^{-1}R\), and the annihilator of the whole \({\mathcal {G}}(\Omega )\) is the zero ideal.
3 Polynomial PDEs satisfied by generic quadratic surfaces
Observe that the graph of a function f is contained in a quadratic surface if and only if each of its points satisfies a quadratic equation
with constant coefficients \(a_{ij},b_k,c\). This is equivalent to the fact that the set of functions
is linearly dependent. That is how the concept of a generalized Wronskian for functions of several variables enters the picture. For clarity, we adopt the notation from Wolsson (1989).
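To make the linear-dependence criterion concrete, here is a short SymPy sketch (our own illustration, not part of the argument) for the paraboloid \(f=x^2+y^2\), whose graph lies in the quadric \(z-x^2-y^2=0\). The ten functions are the monomials of the quadratic equation with f substituted for z; sampling them at a grid of points exposes the linear relation.

```python
import sympy as sp

x, y = sp.symbols('x y')
f = x**2 + y**2   # paraboloid: its graph lies in the quadric z - x^2 - y^2 = 0

# The ten functions whose linear dependence encodes "graph lies in a quadric".
phi = [x**2, x*y, y**2, x*f, y*f, f**2, x, y, f, sp.Integer(1)]

# Evaluate phi at a dozen sample points; a nontrivial nullspace of this
# matrix certifies a linear relation among the ten functions.
pts = [(i, j) for i in range(3) for j in range(4)]
M = sp.Matrix([[expr.subs({x: a, y: b}) for expr in phi] for (a, b) in pts])

null = M.nullspace()
assert len(null) >= 1

# The expected relation: 1*x^2 + 1*y^2 - 1*f = 0.
v = sp.Matrix([1, 0, 1, 0, 0, 0, 0, 0, -1, 0])
assert M * v == sp.zeros(len(pts), 1)
```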
Definition 5
(Wolsson 1989, Definition 1) A generalised Wronskian of \(\varvec{\phi }=(\phi _1({\varvec{t}}),\ldots ,\phi _n({\varvec{t}}))\), where \({\varvec{t}}=(t_1,\ldots ,t_m)\), is any determinant of the type
where \(\varvec{\phi }\), \(\partial ^i\varvec{\phi }\) are row vectors, \(\partial ^i\) is any partial derivative of order not greater than i and all \(\partial ^i\) are distinct.
Remark
Note that in the realm of functions in \(m\ge 2\) variables a generalized Wronskian of \(\varvec{\phi }\) is no longer unique, since there are many possible ways of choosing the row vectors \(\partial ^i\varvec{\phi }\) satisfying all the imposed conditions. More precisely, there are
partial derivatives of order not greater than i and hence there are exactly
generalised Wronskians of n functions in m variables. From now on, however, we will identify all generalized Wronskians that differ only in the order of rows, as this does not affect the rank of the matrix.
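The first of the two counts above (its displayed formula is omitted here) is the standard stars-and-bars count: there are \(\left( {\begin{array}{c}m+i\\ m\end{array}}\right) \) partial derivatives of order not greater than i in m variables. A brute-force check (our own illustration):

```python
from itertools import product
from math import comb

def num_derivatives(m: int, i: int) -> int:
    """Count multi-indices (a_1, ..., a_m) with a_1 + ... + a_m <= i,
    i.e. partial derivatives of order not greater than i in m variables
    (the order-0 derivative, the identity, is included)."""
    return sum(1 for alpha in product(range(i + 1), repeat=m)
               if sum(alpha) <= i)

# The count agrees with the closed form C(m+i, m).
for m in range(1, 4):
    for i in range(5):
        assert num_derivatives(m, i) == comb(m + i, m)

# In 2 variables there are C(5, 2) = 10 derivatives of order <= 3,
# matching the ten functions in the tuple phi.
assert num_derivatives(2, 3) == 10
```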
Notation
Denote by \(\varvec{\phi }\) the tuple of functions (5).
Assertion 3.1
Each generalized Wronskian of \(\varvec{\phi }\) can be viewed as an element of \(S^{-1}R\). Moreover, by the very definition, it belongs to \({\mathcal {Q}}(\Omega )^\dagger \). Indeed, if the set of functions (5) is linearly dependent, then all its generalized Wronskians vanish identically since their columns are themselves linearly dependent.
The following key proposition characterizes the set of polynomial differential equations satisfied by the parametrization of any generic quadratic surface.
Proposition 3.2
Let \(\Omega \subseteq {\mathbb {R}}^2\) be a connected open subset of \({\mathbb {R}}^2\). Then the annihilator \({\mathcal {Q}}(\Omega )^\dagger \subseteq S^{-1}R\) is a differential ideal generated by
Proof
Clearly \({\mathcal {Q}}(\Omega )^\dagger \) is a differential ideal in \(S^{-1}R\). Denote by \({\mathfrak {a}}\) the differential ideal generated by (6). We have to show that \({\mathcal {Q}}(\Omega )^\dagger ={\mathfrak {a}}\). We will do this by proving both inclusions.
First, we will show a simpler inclusion \({\mathcal {Q}}(\Omega )^\dagger \supseteq {\mathfrak {a}}\). Since both \({\mathcal {Q}}(\Omega )^\dagger \) and \({\mathfrak {a}}\) are differential ideals, it is enough to prove that the generators of \({\mathfrak {a}}\) are contained in \({\mathcal {Q}}(\Omega )^\dagger \). Let \(f\in {\mathcal {Q}}(\Omega )\) be a parametrization of some generic quadratic surface. By Assertion 3.1, all the generalized Wronskians of \(\varvec{\phi }\) vanish identically on \(\Omega \). Denote by \(W_{i,j}\) the generalised Wronskian of \(\varvec{\phi }\) formed by deleting the row \(\varvec{\phi }^{(i,j)}\) from
The only non-trivial (i.e. not vanishing algebraically) ones are the following:
Note that although the underlying matrices depend on \(4{\hbox {th}}\) order partial derivatives, their determinants do not, which is somewhat intriguing.
Remark
Since \(\varvec{\phi }\) consists of \(n=10\) functions and in \(m=2\) variables there are exactly 10 partial derivatives of at most \(3{\hbox {rd}}\) order, there is a unique generalized Wronskian of \(\varvec{\phi }\) using partial derivatives of at most \(3{\hbox {rd}}\) order, namely \(W_{0,4}\). However, it turns out that \(\varvec{\phi }^{(3,0)},\varvec{\phi }^{(2,1)},\varvec{\phi }^{(1,2)},\varvec{\phi }^{(0,3)}\) are always linearly dependent. Indeed, observe that the \(4\times 10\) matrix
has only 4 non-zero columns corresponding to xf(x, y), yf(x, y), \(f(x,y)^2\), f(x, y) and a direct computation shows that the determinant of this only non-trivial \(4\times 4\) minor is zero anyway. Thus every generalized Wronskian of \(\varvec{\phi }\) vanishes identically unless it is missing some \(3{\hbox {rd}}\) order partial derivative. In particular, there is no non-trivial generalized Wronskian of \(\varvec{\phi }\) using partial derivatives of order at most 3. Moreover, there are (a priori at most) only 4 non-trivial generalized Wronskians of \(\varvec{\phi }\) using a single partial derivative of order greater than 3, since it must replace one of the 4 partial derivatives of order 3.
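The “direct computation” mentioned above can be reproduced symbolically. The following SymPy sketch (our own check) builds the \(4\times 4\) minor of third-order derivatives of the four non-trivial columns, with f left as an arbitrary smooth function, and confirms that its determinant is identically zero.

```python
import sympy as sp

x, y = sp.symbols('x y')
f = sp.Function('f')(x, y)

# The only members of phi whose 3rd-order derivatives do not vanish
# identically: x*f, y*f, f^2 and f itself.
cols = [x*f, y*f, f**2, f]

# The four 3rd-order partial derivatives (3,0), (2,1), (1,2), (0,3).
orders = [(3, 0), (2, 1), (1, 2), (0, 3)]
M = sp.Matrix([[sp.diff(g, *([x]*i + [y]*j)) for g in cols]
               for (i, j) in orders])

# The determinant is the zero polynomial in f and its derivatives, so the
# four rows of 3rd-order derivatives of phi are always linearly dependent.
assert sp.expand(M.det()) == 0
```

In fact, after subtracting suitable multiples of the last column, the third column becomes a combination of the first two with coefficients \(2f^{(1,0)}\) and \(2f^{(0,1)}\), which is why the determinant vanishes for every f.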
Now, since f is assumed to be generic, its \(2{\hbox {nd}}\) order pure derivative \(f^{(0,2)}\) is non-zero. Hence from the vanishing of \(W_{2,1}\) and \(W_{1,2}\) we obtain that a parametrization of any generic quadratic surface satisfies (2). This concludes the first part of the proof.
Remark
Note that for any generic function f, if \(W_{2,1}\) and \(W_{1,2}\) vanish, then the remaining two generalized Wronskians also vanish. Indeed, we have
while both \(f^{(2,0)}\) and \(f^{(0,2)}\) are non-zero. Furthermore, the same holds for any pair of featured generalized Wronskians except for \(W_{3,0}\) and \(W_{0,3}\), when the above equations (7) in variables \(W_{2,1}\) and \(W_{1,2}\) may turn out to be linearly dependent. This is the case exactly when
which together with \(W_{3,0}=0\) and \(W_{0,3}=0\) forms a system of partial differential equations. This time, however, apart from parametrizations of certain quadratic surfaces (including degenerate), it admits a single family of exotic solutions of the form
which arise as parametrizations of certain cubic surfaces. Moreover, note that all these functions are generic, unless \(a=0\). Therefore the choice of equations (2) was arbitrary only to some extent.
Remark
Observe that the last factors of \(W_{3,0}\) and \(W_{0,3}\) as well as \(W_{2,1}\) and \(W_{1,2}\) are equivalent up to the order of variables. However, the overall symmetry is broken by the common factor \({f^{(0,2)}}^2\), which is the result of choosing \(\varvec{\phi }^{(0,4)}\) as a supplementary row. Exactly as we should expect, if we had chosen \(\varvec{\phi }^{(4,0)}\), we would have obtained the same set of generalized Wronskians, but this time with common factor \({f^{(2,0)}}^2\) instead of \({f^{(0,2)}}^2\).
Let
be the localization of a real polynomial ring in the selected 11 variables at S. Since localization commutes with adjoining new variables, there is a ring isomorphism
where the latter is already a polynomial ring over Q in the remaining infinitely many variables. Let us choose a graded lexicographic order on the variables \(\partial ^{(i,j)}\) and then the graded reverse lexicographic order on monomials. We will find a Gröbner basis of \({\mathfrak {a}}\) with respect to this monomial ordering. For more details on Gröbner bases, including definitions and examples, we refer the reader to (Cox et al. 2016, Chpt. 2).
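For readers less familiar with these notions, the following toy SymPy computation (our own illustration, entirely unrelated to the ideal \({\mathfrak {a}}\), which lives in a localized differential ring) shows the two operations used below: computing a reduced Gröbner basis under grevlex and reducing a polynomial to a remainder.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

# A toy ideal, just to illustrate reduced Groebner bases and
# leading terms under the graded reverse lexicographic order.
F = [x**2 + y + z - 1, x + y**2 + z - 1]
G = sp.groebner(F, x, y, z, order='grevlex')

# Reduction by G: the remainder is zero iff the polynomial lies in
# the ideal generated by F.
p = (x**2 + y + z - 1)*(y**5 + 3) + (x + y**2 + z - 1)*x
assert G.reduce(p)[1] == 0

# A polynomial outside the ideal leaves a non-zero remainder, which no
# leading term of G divides.
assert G.reduce(x)[1] != 0
```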
Denote polynomials (6) respectively by \(p_1\), \(p_2\). Observe that for every \(i,j\ge 0\) and \(k=1,2\), \({D_x}^i{D_y}^jp_k\) is linear in highest order partial derivatives and thus we can write e.g.
where
is a \(6\times 6\) matrix and \({\varvec{b}}_5\) (the definition of which is irrelevant and therefore has been omitted for brevity) is a \(6\times 1\) vector over \(S^{-1}R\). Moreover, \({\varvec{A}}_5\) and \({\varvec{b}}_5\) contain only partial derivatives of order at most 4. One can easily verify that the determinant of \({\varvec{A}}_5\) is equal to
and thus is a unit in \(S^{-1}R\). It follows that \({\varvec{A}}_5\) is invertible over \(S^{-1}R\) and we can rewrite (8) as
Now, the left-hand side is a vector of elements from \({\mathfrak {a}}\) and hence so is the right-hand side. Moreover, since \({{\varvec{A}}_5}^{-1}{\varvec{b}}_5\) contains only partial derivatives of order at most 4, the leading term of each polynomial on the right-hand side is a corresponding \(5{\hbox {th}}\) order partial derivative. By definition, the ideal \({\mathfrak {a}}\) is closed under derivations and thus by differentiating these polynomials we can obtain an element of \({\mathfrak {a}}\) with the leading term being any partial derivative of higher order. Using exactly the same argument we can likewise write
since the determinant of
is equal to
and the determinant of
is equal to
Hence all the partial derivatives \(\partial ^{(2,1)},\partial ^{(3,0)},\partial ^{(1,3)},\ldots \), which are exactly those not included in the definition of Q, are contained in the ideal of leading terms \(\langle \textrm{LT}({\mathfrak {a}})\rangle \) (Cox et al. 2016, Definition 2.5.1).
Remark
It is a mere coincidence that after computing \(2{\hbox {nd}}\) order partial derivatives of \(p_1\) and \(p_2\) the number of independent equations is equal to the number of \(5{\hbox {th}}\) order partial derivatives of f and thus the matrix \({\varvec{A}}_5\) is uniquely determined. The multiplicative monoid \(S\subseteq R\) was devised to contain all the prime factors of \(\det {\varvec{A}}_5\). However, to obtain \({\varvec{A}}_4\) and \({\varvec{A}}_3\) we had to arbitrarily choose some subset of variables, and this time it was not a coincidence that both \(\det {\varvec{A}}_4\) and \(\det {\varvec{A}}_3\) share the same prime factors as \(\det {\varvec{A}}_5\). Indeed, there are other choices for which this is no longer the case. Thus the set of variables of the polynomial ring Q was carefully selected so that both \({\varvec{A}}_3\) and \({\varvec{A}}_4\) are already invertible in \(S^{-1}R\).
Denote by G the set of polynomials constructed above, such that every monomial \(\partial ^{(2,1)},\partial ^{(3,0)},\partial ^{(1,3)},\ldots \) is a leading term \(\textrm{LT}(g)\) of some polynomial \(g\in G\). Suppose that \({\mathcal {Q}}(\Omega )^\dagger \supsetneq \langle G\rangle \) and let \(p\in {\mathcal {Q}}(\Omega )^\dagger \setminus \langle G\rangle \). After a complete reduction of p by G we obtain a remainder \(r\in {\mathcal {Q}}(\Omega )^\dagger \setminus \langle G\rangle \), which is irreducible by G, i.e. its leading term \(\textrm{LT}(r)\) is not a multiple of any \(\textrm{LT}(g)\), \(g\in G\) (Cox et al. 2016, Theorem 2.3.3). Thus r is an element of the coefficient ring Q and corresponds to some rational function in selected 11 variables. We will prove that \(r=0\), which will eventually give us the desired contradiction. By definition, it vanishes for any tuple consisting of x, y, and relevant partial derivatives of some function parametrizing a generic quadratic surface at (x, y). Since r is rational, it is enough to show that the set of such arguments has a non-empty interior as a subset of \({\mathbb {R}}^{11}\).
For this, we define an implicit function \(\psi :{\mathbb {R}}^{11}\rightarrow {\mathbb {R}}^{11}\) in the following way. Let f be a parametrization of some quadratic surface satisfying (4) with \(a_{33}=1\). Then \(\psi \) maps the tuple of parameters
to the tuple
consisting of x, y and relevant partial derivatives of f at (x, y). We can obtain an explicit formula for \(\psi \) by symbolically solving the quadratic equation (4) first and then symbolically differentiating the result. Since in general there are two possible solutions for f, we have to locally select an arbitrary branch of the square root function, so that \(\psi \) is smooth.
Now, consider a generic function
parametrizing a quadratic surface represented by the tuple of parameters
Since (3) depend continuously on (10), any point in some open neighborhood U of (11) also corresponds to a parametrization of some generic quadratic surface and thus \(r\circ \psi \) vanishes on U. Hence it is enough to show that \(\psi (U)\) has a non-empty interior. Computing the Jacobian determinant of \(\psi \) at (11) yields
which is non-zero. Hence \(\psi \) is a local diffeomorphism and so there exists an open subset \(V\subseteq U\) such that \(\psi \vert _V:V\rightarrow \psi (V)\) is a diffeomorphism. In particular, \(\psi (V)\) is open. However, recall that it is contained in the zero set of r, which must therefore be the zero function, a contradiction. It follows that \({\mathfrak {a}}\subseteq {\mathcal {Q}}(\Omega )^\dagger =\langle G\rangle \subseteq {\mathfrak {a}}\), which means that \({\mathcal {Q}}(\Omega )^\dagger ={\mathfrak {a}}\) and moreover G is, in fact, a reduced Gröbner basis of \({\mathfrak {a}}\), which concludes the proof. \(\square \)
Remark
Now we are able to clarify in what sense equations (2) are minimal. Namely, (6) form a reduced Gröbner basis of \({\mathcal {Q}}(\Omega )^\dagger \), while, as it will turn out, we would obtain the same results as in Theorem 1.1 and Theorem 1.2 for any generating set of \({\mathcal {Q}}(\Omega )^\dagger \). Although the elements (6) seem to be the best choice, the reduced Gröbner basis is by no means unique, as it depends on the chosen monomial order. Besides, with Proposition 3.2 at hand, finding other generating sets becomes a purely algorithmic task.
4 Smoothing properties and their connection with holomorphicity
Later on, we will want to deduce the linear dependence of a set of functions from the vanishing of their generalized Wronskians. For this, we will use the main result from Wolsson (1989), where the necessary and sufficient conditions are established. Although the author roughly requires that all the generalized Wronskians vanish, in the course of the inherently constructive proof he considers only finitely many generalized Wronskians of bounded order. Nevertheless, our initial assumption that the function f is merely an element of \(W^{3,1}_{\textrm{loc}}(\Omega )\) is too weak for any non-trivial generalized Wronskian to be well-defined. Therefore we need to somehow improve the smoothness of f. As it turns out, differentiability of class \(C^5\) will be sufficient and thus we will not use the following fact in its full generality.
Lemma 4.1
Let \(f\in W^{3,1}_{\textrm{loc}}(\Omega )\) be a function defined on a connected open subset \(\Omega \subseteq {\mathbb {R}}^2\). Suppose that f is generic and is a weak solution to the system of partial differential equations (2). Then f is infinitely differentiable.
Proof
Let \(u,v:\Omega \rightarrow {\mathbb {R}}\) be the functions defined as follows:
Note that they are well-defined, by the assumption that f is generic. Since they depend only on the \(2{\hbox {nd}}\) order partial derivatives of f, which are assumed to be elements of \(W^{1,1}_{\textrm{loc}}(\Omega )\), and moreover the Hessian determinant of f is locally bounded away from 0, both functions u, v are elements of \(W^{1,1}_{\textrm{loc}}(\Omega )\). Computing their weak partial derivatives and applying (2) one can find out that they satisfy the Cauchy–Riemann equations:
where \(p_1,p_2\) denote respectively the left-hand sides of (2) and ± is the sign of the Hessian determinant. Again, the above formulas are well-defined, by the assumption that f is generic. Thus u, v are analytic on \(\Omega \) (Gray and Morris 1978, Theorem 9). This is actually a special case of a more general result on the regularity of solutions of hypo-elliptic partial differential equations (Hörmander 1961).
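As a toy illustration of the Cauchy–Riemann system invoked here (with a standard pair of harmonic conjugates, not the specific u, v above, whose explicit formulas we omit), one can verify the equations symbolically:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)

# Real and imaginary parts of the holomorphic function (x + i*y)^2,
# a standard example of a pair satisfying the Cauchy-Riemann equations.
u = x**2 - y**2
v = 2*x*y

# Cauchy-Riemann: u_x = v_y and u_y = -v_x.
assert sp.simplify(sp.diff(u, x) - sp.diff(v, y)) == 0
assert sp.simplify(sp.diff(u, y) + sp.diff(v, x)) == 0
```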
Now, observe that the \(1{\hbox {st}}\) order partial derivatives of u, v as well as the left-hand sides of (2) are linear in \(3{\hbox {rd}}\) order partial derivatives of f and thus we can write e.g.
which allows us to express all the \(3{\hbox {rd}}\) order partial derivatives of f in terms of the \(1{\hbox {st}}\) order partial derivatives of u and the \(2{\hbox {nd}}\) order partial derivatives of f. To do this, we only need to verify that the matrix \({\varvec{A}}\) is invertible. Indeed, its determinant is equal to
where ± is the sign of the Hessian determinant. Therefore we have
where the right-hand side is linear in \(1{\hbox {st}}\) order partial derivatives of u and algebraic in \(2{\hbox {nd}}\) order partial derivatives of f. Since \({\varvec{A}}^{-1}=(\det {\varvec{A}})^{-1}({{\,\textrm{adj}\,}}{\varvec{A}})\), where \(\det {\varvec{A}}\) is locally bounded away from 0 and \({{\,\textrm{adj}\,}}{\varvec{A}}\) is a polynomial in \(2{\hbox {nd}}\) order partial derivatives of f, we have \({\varvec{A}}^{-1}\in W^{1,1}_{\textrm{loc}}(\Omega )\). It follows that the left-hand side (and hence also the right-hand side) is an element of \(W^{1,1}_{\textrm{loc}}(\Omega )\), which means by the very definition that \(f\in W^{4,1}_{\textrm{loc}}(\Omega )\). Now we are able to weakly differentiate the equalities (13) and iterate the same argument to see that f is indeed infinitely differentiable on \(\Omega \). This ends the proof. \(\square \)
5 Proofs of the main theorems
Before we move on to the essential part of this section, we will prove the following lemma, which will play a key role in the proofs of both main theorems:
Lemma 5.1
Let \(f\in W^{3,1}_{\textrm{loc}}(\Omega )\) be a function defined on a connected open subset \(\Omega \subseteq {\mathbb {R}}^2\). Suppose that f is generic. Then f satisfies the system of partial differential equations (2) if and only if its graph is contained in a quadratic surface.
Proof
The left implication \((\impliedby )\) follows immediately from Proposition 3.2. The proof of the right implication \((\implies )\) is not so straightforward, since we want to deduce the linear dependence of a set of functions from the vanishing of their generalized Wronskians, which fails to be true in general (Peano 1889) and therefore requires specific arguments.
Again we adopt the notation from Wolsson (1989).
Definition 6
(Wolsson 1989, Definition 2) A critical point of \(\varvec{\phi }\) is a point of the domain at which all generalized Wronskians of \(\varvec{\phi }\) vanish.
Definition 7
(Wolsson 1989, Definition 3) An \(r\times r\) generalised sub-Wronskian of \(\varvec{\phi }\), \(1\le r\le n\), is a generalised Wronskian of any subsequence of \(\varvec{\phi }\).
Note that not every minor of a generalized Wronskian is a generalized sub-Wronskian. Indeed, the above definition requires that it also satisfy the additional condition on the orders of partial derivatives.
Definition 8
(Wolsson 1989, Definition 4) The order of a critical point \({\varvec{t}}\) of \(\varvec{\phi }\) is the largest positive integer r for which some \(r\times r\) generalized sub-Wronskian of \(\varvec{\phi }\) is not zero at \({\varvec{t}}\). Should all sub-Wronskians vanish at \({\varvec{t}}\), the order is defined to be zero.
We will show that every \({\varvec{t}}\in \Omega \) is a critical point of \(\varvec{\phi }\) of order 9. First, observe that all \(10\times 10\) generalized Wronskians of \(\varvec{\phi }\) vanish identically on \(\Omega \). Indeed, from Lemma 4.1 we infer that f is smooth and thus all its generalized Wronskians are well-defined. Moreover, by Assertion 3.1 they belong to \({\mathcal {Q}}(\Omega )^\dagger \) and hence they vanish identically on \(\Omega \) since both generators of \({\mathcal {Q}}(\Omega )^\dagger \) do.
We are left to prove that for every \({\varvec{t}}\in \Omega \) there exists a \(9\times 9\) generalized sub-Wronskian of \(\varvec{\phi }\) that is non-zero at \({\varvec{t}}\). Observe that, among others, every \(9\times 9\) minor of the following \(9\times 10\) matrix
that contains the first row is a valid sub-Wronskian of \(\varvec{\phi }\), and thus it is enough to show that \({\varvec{W}}={\varvec{W}}({\varvec{t}})\) has full rank at every \({\varvec{t}}\in \Omega \). Denote by \(W_i\) the minor of \({\varvec{W}}\) obtained by deleting the \(i{\hbox {th}}\) column and suppose that all the \(W_i\) are zero. A direct computation shows that
which implies
Applying the above result to the definition of \(W_5\) yields
and consequently
It follows that
which finally gives us the desired contradiction.
We are now at a point where we can apply the following fundamental theorem:
Lemma 5.2
(Wolsson 1989, Theorem 2) If G is an open connected set consisting of critical points of the same order \(r>0\), then \(\varvec{\phi }\) has a linearly independent subset \(S_r=\{\phi _1,\ldots ,\phi _r\}\), say, which is a basis of \(\textrm{span}(\varvec{\phi })\), and consequently \(\varvec{\phi }\) is linearly dependent on G.
By Lemma 5.2 we know that \(\varvec{\phi }\) is linearly dependent on \(\Omega \). This concludes the proof. \(\square \)
Remark
Observe that the system of partial differential equations (2) is satisfied if and only if a pair of functions (12) satisfies Cauchy-Riemann equations. Thus the graph of f is contained in a quadratic surface if and only if \(u+iv\) is holomorphic. Moreover, if f satisfies (4), then a direct computation shows that \(u+iv\) is simply a quadratic polynomial:
where
is a symmetric matrix defining an affine quadratic form (4) and \(Q_{i,j}\) is the (i, j) minor of \({\varvec{Q}}\), i.e. the determinant of the submatrix formed by deleting the \(i{\hbox {th}}\) row and \(j{\hbox {th}}\) column.
Remark
Since a quadratic surface is uniquely determined by 9 parameters and the quadratic polynomial (14) has only 5 parameters, a natural question arises: which functions correspond to the same quadratic polynomial? Note that u, v depend only on the \(2{\hbox {nd}}\) order partial derivatives of f, which means that adding linear terms does not change (14). For completeness, we still need one more parameter. Careful inspection of (14) shows e.g. that every function of the form
gives rise to the same quadratic polynomial \(u+iv=z^2\). Unfortunately, the general answer is far more complicated and will not be given here.
Finally, we will need one more simple fact, which can be verified by a direct computation:
Assertion 5.3
(Zawalski 2022, Theorem 1)
Let \(f:{\mathbb {R}}^2\supset \Omega \rightarrow {\mathbb {R}}\) be a function of class \(C^2\) defined on an open subset of \({\mathbb {R}}^2\) and satisfying a quadratic equation (4). Then the following formula holds:
where \({\varvec{H}}_f\) is the Hessian matrix of f, \(\Delta _f\) is the discriminant of (4) with respect to the variable f and \({\varvec{Q}}\) is just as defined in (15).\(\square \)
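The identity of Assertion 5.3, used in the proof of Theorem 1.1 in the form \(\det {\varvec{H}}_f=-16\det {\varvec{Q}}\cdot \Delta _f^{-2}\), can be checked symbolically for a concrete quadric. The following SymPy sketch (our own check) uses the unit sphere; we assume the standard homogeneous \(4\times 4\) matrix convention for \({\varvec{Q}}\), which for the sphere is \(\textrm{diag}(1,1,1,-1)\) regardless of how mixed terms are halved.

```python
import sympy as sp

x, y, f = sp.symbols('x y f')

# Unit sphere x^2 + y^2 + f^2 - 1 = 0; its 4x4 symmetric matrix
# (standard homogeneous convention) is Q = diag(1, 1, 1, -1).
Q = sp.diag(1, 1, 1, -1)

# Discriminant of the quadratic equation with respect to the variable f.
eq = x**2 + y**2 + f**2 - 1
Delta = sp.discriminant(eq, f)          # = 4*(1 - x^2 - y^2)

# Upper-hemisphere parametrization and its Hessian determinant.
g = sp.sqrt(1 - x**2 - y**2)
detH = sp.simplify(sp.hessian(g, (x, y)).det())

# The identity det H_f = -16 det Q / Delta_f^2.
assert sp.simplify(detH + 16*Q.det()/Delta**2) == 0
```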
Now we are ready to prove Theorem 1.1.
Proof of Theorem 1.1
Define \(U\subseteq \Omega \) to be the subset consisting of those points, where the Hessian determinant of f is positive. Note that U is open, which immediately follows from the continuity of partial derivatives. Moreover, the assumption on f asserts that it is also non-empty.
First, we will show the right implication \((\implies )\). Since \(f\vert _U\) is generic, by Lemma 5.1 its graph is contained in a quadratic surface. Let \({\varvec{t}}\in \Omega \) be a limit point of U, i.e. such that there exists a sequence \({\varvec{t}}_\bullet \) of points in U whose limit is \({\varvec{t}}\). Now, if \(\Delta _f\) vanishes identically on U then f is affine, a contradiction. Thus there exists \({\varvec{u}}\in U\) such that \(\Delta _f({\varvec{u}})>0\), which implies \(-16\det {\varvec{Q}}>0\). Moreover, since \(\Delta _f\) is a quadratic polynomial, the sequence \(\Delta _f({\varvec{t}}_\bullet )^2\) is bounded from above. It follows that \(\det {\varvec{H}}_f(\varvec{t}_\bullet )=-16\det {\varvec{Q}}\cdot \Delta _f(\varvec{t}_\bullet )^{-2}\) is bounded from below by some positive constant \(\varepsilon >0\). In particular, we have \(\det \varvec{H}_f({\varvec{t}})\ge \varepsilon >0\) and hence \({\varvec{t}}\in U\), by the very definition. Thus we have shown that U contains all its limit points, which makes it closed in \(\Omega \). However, recall that U is also open, in which case we have simply \(U=\Omega \). This concludes the first part of the proof.
The remaining left implication \((\impliedby )\) follows in exactly the same way. Since the graph of \(f\vert _U\) is by assumption contained in a quadratic surface, we repeat the above limit point argument to see that likewise \(U=\Omega \). With this result at hand, we once again apply Lemma 5.1 to conclude the proof. \(\square \)
Finally, we are in a position to deal with the assumption on the Hessian determinant. It turns out to be essential, since (2) is also satisfied by parametrizations of some ruled surfaces, whose Hessian determinant is non-positive.
Proof of Theorem 1.2
Define the following open sets:
Clearly their union \(\Omega _{\mathrm a}\cup \Omega _{\mathrm b}\cup \Omega _{\mathrm c}\cup \Omega _{\mathrm d}\) is dense in \(\Omega \). By definition, on each connected component of \(\Omega _{\mathrm b}\) the graph of f is contained in a developable surface. Moreover, on each connected component of \(\Omega _{\mathrm c}\) (respectively: \(\Omega _{\mathrm d}\)) f is linear along every straight line parallel to the OX (respectively: OY) axis and thus, again by definition, its graph is contained in a Catalan surface with directrix plane XZ (respectively: YZ). Hence to prove the right implication \((\implies )\) it remains to show that on every connected component of \(\Omega _{\mathrm a}\) the graph of f satisfying (2) is contained in a doubly-ruled surface, which readily follows from Lemma 5.1. Indeed, we immediately obtain that the graph of f is contained in a quadratic surface of negative Gaussian curvature. The only two such surfaces are the hyperbolic paraboloid and the single-sheeted hyperboloid, both of which are doubly-ruled (Hilbert and Cohn-Vossen 1999, p. 15). This concludes the first part of the proof.
On the other hand, observe that \(f\vert _{\Omega _{\mathrm b}}\) automatically satisfies (2). Indeed, denote
and observe that
where \(p_1,p_2\) again stand for the left-hand sides of (2), respectively. Moreover, \(f\vert _{\Omega _{\mathrm c}}\) (respectively: \(f\vert _{\Omega _{\mathrm d}}\)) satisfies \(f^{(2,0)}\equiv 0\) and consequently \(f^{(3,0)}\equiv 0\) (respectively: \(f^{(0,2)}\equiv 0\) and consequently \(f^{(0,3)}\equiv 0\)), in which case a simple check shows that it satisfies (2) as well. Hence to prove the left implication it remains to show that for each \(\Omega _i\), \(f\vert _{\Omega _i\cap \Omega _{\mathrm a}}\) satisfies (2). However, \(\Omega _i\cap \Omega _{\mathrm a}\) is non-empty if and only if the graph of \(f\vert _{\Omega _i}\) is contained in a doubly-ruled surface of negative Gaussian curvature. The only two such surfaces are the hyperbolic paraboloid and the single-sheeted hyperboloid (Hilbert and Cohn-Vossen 1999, p. 15), both of which are quadratic. The assertion follows from Lemma 5.1, which concludes the proof. \(\square \)
Remark
Denote by
the series expansion of f at \({\varvec{t}}\in \Omega \), where \(f_k({\varvec{t}})\) stands for its \({k}{\hbox {th}}\) order homogeneous Taylor polynomial in \({\varvec{h}}\). In general, a non-zero \(2{\hbox {nd}}\) order homogeneous polynomial vanishes on at most two lines through the origin, i.e. at most two points of \(\mathbb{R}\mathbb{P}^1\). Therefore if the graph of f is contained in a doubly-ruled surface, \(f_2({\varvec{t}})\) vanishes exactly on the two rulings that pass through \({\varvec{t}}\). Moreover, \(f_3({\varvec{t}})\) likewise must vanish on the same two rulings. In particular, it follows that \(f_2({\varvec{t}})\) divides \(f_3({\varvec{t}})\) as a polynomial. So it should come as no surprise that, for a generic function f, equations (2) are satisfied if and only if \(f_2({\varvec{t}})\) divides \(f_3({\varvec{t}})\). Indeed,
where \(p_1,p_2\) again stand for the left-hand sides of (2), respectively. Observe that the remainder is a product of \(h_1\), \(h_2\) and some linear homogeneous polynomial, whereas \(f_2({\varvec{t}})[h_1,h_2]\) is divisible by neither \(h_1\) nor \(h_2\), which means it cannot divide the remainder unless the latter is zero. Thus we have found yet another way of looking at (2): in the generic case, it arises as generalized Wronskians of a certain set of functions, as the Cauchy–Riemann equations for a certain pair of functions, and now as the coefficients of a certain remainder from dividing \(f_3(\varvec{t})\) by \(f_2({\varvec{t}})\).
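This divisibility is easy to check on a concrete example. The sketch below is our own illustration, not part of the paper's notebooks: it expands \(f(x,y)=\sqrt{1+x^2-y^2}\), which parametrizes a piece of a single-sheeted hyperboloid, around \({\varvec{t}}=(1,1)\) using exact rational arithmetic, and verifies that \(f_2({\varvec{t}})\) divides \(f_3({\varvec{t}})\).

```python
from fractions import Fraction as F

def pmul(a, b, maxdeg=3):
    """Multiply polynomials in (h1, h2), truncating above total degree maxdeg.
    A polynomial is a dict mapping exponent pairs (i, j) to coefficients."""
    out = {}
    for (i, j), ca in a.items():
        for (k, l), cb in b.items():
            if i + j + k + l <= maxdeg:
                key = (i + k, j + l)
                out[key] = out.get(key, F(0)) + ca * cb
    return {k: v for k, v in out.items() if v}

# f(x, y) = sqrt(1 + x^2 - y^2) parametrizes a single-sheeted hyperboloid.
# Around t = (1, 1) we have f = sqrt(1 + u) with u = 2*h1 - 2*h2 + h1^2 - h2^2:
u = {(1, 0): F(2), (0, 1): F(-2), (2, 0): F(1), (0, 2): F(-1)}

# sqrt(1 + u) = 1 + u/2 - u^2/8 + u^3/16 + O(u^4), truncated at degree 3
taylor = {(0, 0): F(1)}
for poly, c in ((u, F(1, 2)),
                (pmul(u, u), F(-1, 8)),
                (pmul(pmul(u, u), u), F(1, 16))):
    for k, v in poly.items():
        taylor[k] = taylor.get(k, F(0)) + c * v

f2 = {k: v for k, v in taylor.items() if sum(k) == 2 and v}  # f2 = h1*h2 - h2^2
f3 = {k: v for k, v in taylor.items() if sum(k) == 3 and v}

# divisibility claim: f3 = f2 * (h2 - h1), a linear quotient
assert f3 == pmul(f2, {(0, 1): F(1), (1, 0): F(-1)})
print("f2 =", sorted(f2.items()))
print("f3 =", sorted(f3.items()))
```

Here \(f_2 = h_1h_2 - h_2^2\) vanishes precisely on \(h_2=0\) and \(h_1=h_2\), the directions of the two rulings of the hyperboloid through \((1,1)\), and \(f_3 = f_2\cdot (h_2-h_1)\), as predicted.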
Remark
Since the proofs of both theorems were mainly algebraic, the same results hold also in a complex setting, if we assume \(f:\mathbb C^2\supseteq \Omega \rightarrow {\mathbb {C}}\) to be holomorphic. Although the author did not point it out, the same applies to the cited work of Wolsson (1989) concerning generalized Wronskians, which allows us to apply his results in an analogous manner. Only the smoothing Lemma 4.1 ceases to make sense, but it is no longer needed.
Let us conclude our considerations with an alternative proof of a well-known corollary from the aforementioned theorem of Maschke–Pick–Berwald (Nomizu et al. 1994, Theorem 4.5):
Corollary 5.4
Let \(S\subset {\mathbb {R}}^3\) be a convex surface of class \(C^3\) such that for every \({\varvec{x}}\in S\) there is a quadratic surface having \(3{\hbox {rd}}\) order contact with S at \({\varvec{x}}\). Then S is itself a quadratic surface.
Proof
Define \(f:{\mathbb {R}}^2\supseteq U\rightarrow {\mathbb {R}}\) to be a function such that its graph contains an open subset of S. Fix \({\varvec{x}}\in U\) and define \(g:{\mathbb {R}}^2\supseteq V\rightarrow {\mathbb {R}}\) to be a parametrization of a quadratic surface having \(3{\hbox {rd}}\) order contact with S at \({\varvec{x}}\). It follows from the ‘if’ part of Theorem 1.1 that g satisfies (2) at \(\varvec{x}\). Moreover, by assumption, we have the equality of jets \(J_{{\varvec{x}}}^3g=J_{{\varvec{x}}}^3f\) and hence f likewise satisfies (2) at \({\varvec{x}}\). Now, since \(\varvec{x}\) was arbitrary, it means that f satisfies (2) on the whole domain U and finally from the ‘only if’ part of Theorem 1.1 we obtain that its graph is contained in a quadratic surface. This concludes the proof. \(\square \)
The above differential characterization of quadratic surfaces is expressed in the language of differential geometry rather than differential equations. However, unlike Theorem 1.1, the assumption of Corollary 5.4 is clearly invariant under an affine change of coordinates, which is a highly desirable property.
Data Availability
The data that support the findings of this study are available from a public institutional repository, https://drive.google.com/drive/folders/1-pCxC69RI0g10bmWQdHiZ26d3faswKYX?usp=sharing.
Notes
Fix \(1\le p\le \infty \) and let \(k\in {\mathbb {N}}\). The local Sobolev space \(W^{k,p}_{\textrm{loc}}(\Omega )\) consists of all locally integrable functions \(f:\Omega \rightarrow {\mathbb {R}}\) such that for every multi-index \(\varvec{\alpha }\) with \(|\varvec{\alpha }|\le k\), the mixed partial derivative \(f^{(\varvec{\alpha })}\) exists in the weak sense and belongs to \(L^p_{\textrm{loc}}(\Omega )\) (cf. Evans 2010, Sect. 5.2.2).
A ruled surface that contains two families of rulings.
A ruled surface having Gaussian curvature \(K=0\) everywhere.
A ruled surface all of whose rulings are parallel to a fixed plane, called the directrix plane of the surface.
A surface that can be swept out by moving a line in space. The straight lines themselves are called rulings. The Gaussian curvature on a ruled regular surface is everywhere non-positive.
References
Blaschke, W.: Vorlesungen über Differentialgeometrie und Geometrische Grundlagen von Einsteins Relativitätstheorie II. Die Grundlehren der Mathematischen Wissenschaften VII. Affine Differentialgeometrie. Springer, New York (1923)
Cox, D.A., Little, J., O’Shea, D.: Ideals, varieties, and algorithms: An introduction to computational algebraic geometry and commutative algebra. In: Undergraduate Texts in Mathematics. Springer, New York (2016)
Eisenbud, D.: Commutative algebra with a view toward algebraic geometry. In: Graduate Texts in Mathematics. Springer, New York (1995)
Evans, L.C.: Partial differential equations. In: Graduate Studies in Mathematics. American Mathematical Society, Providence (2010)
Gray, J.D., Morris, S.A.: When is a function that satisfies the Cauchy–Riemann equations analytic? Am. Math. Mon. 85(4), 246–256 (1978)
Hilbert, D., Cohn-Vossen, S.: Geometry and the imagination. AMS Chelsea Publishing, Providence (1999)
Hörmander, L.: Hypoelliptic differential operators. Ann. l’Instit. Fourier 11, 477–492 (1961)
Nomizu, K., Sasaki, T.: Affine differential geometry: geometry of affine immersions. In: Cambridge Tracts in Mathematics. Cambridge University Press, Cambridge (1994)
Peano, G.: Sur le déterminant Wronskien. Mathesis 9, 75–76 (1889)
Wolfram Research, Inc.: Mathematica, Version 11.0.0, Wolfram Research, Inc., Champaign (2016)
Wolsson, K.: A condition equivalent to linear dependence for functions with vanishing Wronskian. Linear Algebra Appl. 116, 1–8 (1989)
Zawalski, B.: A concise formula for the Hessian determinant of a function parameterising a quadratic hypersurface. arXiv:2203.04082 (2022)
Acknowledgements
I would like to thank my doctoral advisor Prof. Michał Wojciechowski for his breakthrough ideas, his insistence on giving a profound background to all the intermediate results, his help in completing all the details, and especially the great amount of effort he spent on editorial work.
This work was supported by the Polish National Science Centre grant nos. 2015/18/A/ST1/00553 and 2020/02/Y/ST1/00072. Numerical computations were performed within the Interdisciplinary Centre for Mathematical and Computational Modelling UW grant no. G76-28.
Computer assistance in symbolic computations
All functions were implemented in Wolfram Mathematica (Wolfram Research 2016). The code itself is available online at https://drive.google.com/drive/folders/1-pCxC69RI0g10bmWQdHiZ26d3faswKYX?usp=sharing. Computations were performed on a Linux x86 (64-bit) machine with a single Intel Xeon CPU E5-2697 v3 processor and 64GB memory. The total execution time was negligible.
1.1 Notebook-1.nb
In the beginning, we use symbolic differentiation D to obtain the Wronskian matrix of (5). Afterward, we use Minors to compute 210 symbolic determinants of order 4 and thus find out that the four rows corresponding to \(3{\hbox {rd}}\) order partial derivatives are indeed linearly dependent. Then we again use Minors to compute 11 symbolic determinants of order 10, among which the only non-trivial ones are \(W_{3,0}\), \(W_{2,1}\), \(W_{1,2}\) and \(W_{0,3}\). Finally, we use Minors to compute 10 symbolic determinants of order 9 and select the simplest-looking ones. Based on them, we solve some simple linear equations to find that the featured minors cannot all vanish simultaneously. Performing the same calculations with pen and paper would be tedious, though possible. Although we sometimes applied Factor to factorize the results, in all cases the factorization turned out to be trivial.
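For readers without a Mathematica license, the effect of Minors[m, k], computing all \(k\times k\) subdeterminants of a matrix, can be mimicked in a few lines. The sketch below is a hypothetical pure-Python analogue over exact rationals; it ignores Mathematica's ordering and complement conventions and uses a toy \(3\times 3\) matrix, not the actual Wronskian of (5).

```python
from itertools import combinations
from fractions import Fraction as F

def det(m):
    # cofactor expansion along the first row; adequate for small matrices
    n = len(m)
    if n == 1:
        return m[0][0]
    total = F(0)
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total

def minors(m, k):
    # analogue of Mathematica's Minors[m, k]: all k-by-k subdeterminants,
    # keyed by the chosen row and column index tuples
    rows, cols = range(len(m)), range(len(m[0]))
    return {
        (r, c): det([[m[i][j] for j in c] for i in r])
        for r in combinations(rows, k)
        for c in combinations(cols, k)
    }

m = [[F(1), F(2), F(3)],
     [F(4), F(5), F(6)],
     [F(7), F(8), F(10)]]
ms = minors(m, 2)
print(len(ms))                 # C(3,2)^2 = 9 minors of order 2
print(ms[((0, 1), (0, 1))])    # det [[1, 2], [4, 5]] = -3
```

Running the same routine on a symbolic Wronskian would of course require a computer-algebra layer on top; the point here is only the combinatorial structure of the computation.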
1.2 Notebook-2.nb
In the beginning, we use symbolic differentiation D to compute the left-hand side of (8). Afterward, we use CoefficientArrays to extract the explicit form of the matrix \({\varvec{A}}_5\) of order 6. Then we apply Det and Factor to obtain its determinant in a form from which we can readily see that it is an element of the multiplicative submonoid S. Finally, we repeat the same steps for \(\varvec{A}_4\) of order 5 and \({\varvec{A}}_3\) of order 4. The same calculations could well be done with pen and paper, though there would be little point in doing so.
1.3 Notebook-3.nb
In the beginning, we solve the quadratic equation (4) for f, after first assuming that \(a_{33}=1\). Afterward, we use symbolic differentiation to obtain the explicit formula for \(\psi \). Then we use Grad to compute the Jacobian matrix of \(\psi \) with respect to the 11-dimensional vector (10). Now, since its symbolic determinant is difficult to compute even for a supercomputer, we instantiate the matrix at (11) and only then apply Det and Factor to obtain its determinant of order 11 in the simplest form. The content of this notebook is by far the most demanding computational task because, in addition to the heavy workload, it also requires manipulating algebraic expressions containing square roots.
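The instantiate-first strategy, substituting a concrete point into a symbolic matrix before taking the determinant, is what makes the computation feasible, since a fully symbolic expansion blows up combinatorially. The following sketch is a made-up miniature of that idea: a hypothetical \(3\times 3\) matrix of entry functions, not the actual \(11\times 11\) Jacobian of \(\psi \).

```python
import math

def det(m):
    # cofactor expansion along the first row; fine for small matrices
    n = len(m)
    if n == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j]
               * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(n))

# hypothetical matrix of "symbolic" entries, each given as a function of (x, y);
# instead of expanding det symbolically, evaluate every entry at a point first
entries = [
    [lambda x, y: x + y,       lambda x, y: x * y, lambda x, y: 1.0],
    [lambda x, y: math.sin(x), lambda x, y: y,     lambda x, y: x],
    [lambda x, y: 1.0,         lambda x, y: 0.0,   lambda x, y: y * y],
]
point = (0.5, 2.0)
numeric = [[entry(*point) for entry in row] for row in entries]
value = det(numeric)
print(value)
```

Evaluating the entries first turns an intractable symbolic expansion into a cheap numeric determinant, at the cost of certifying the result only at the chosen point, which is exactly the trade-off made in the notebook.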
1.4 Notebook-4.nb
In the beginning, we define \(p_1\), \(p_2\), u, v and verify that u, v satisfy (12), which requires symbolic differentiation and manipulating algebraic expressions containing square roots. Afterward, we use CoefficientArrays to extract the explicit form of the matrix \({\varvec{A}}\) of order 4. Then we apply Det and Together to obtain its determinant in the simplest form. Further, we solve the quadratic equation (4) for f and substitute the result into the formula for \(u+iv\). We apply Together and PowerExpand to bring the result to a simpler form. Finally, we define the matrix \({\varvec{Q}}\) of order 4 and verify the formula (14), using Minors to compute 16 symbolic determinants of order 3 along the way. Then we apply Together to force the expansion of the underlying expression. At the very end, we verify Assertion 5.3, using symbolic differentiation D composed with Det to obtain the Hessian determinant of f, and Discriminant to compute the discriminant of (4) with respect to the variable f. Again, we apply Together to force the expansion of the underlying expression. The same calculations could well be done with pen and paper, though there would be little point in doing so.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Zawalski, B. Differential characterization of quadratic surfaces. Beitr Algebra Geom (2024). https://doi.org/10.1007/s13366-024-00735-0
Keywords
- Quadratic surfaces
- Ruled surfaces
- Generalized Wronskian
- Differential algebra
- Gröbner basis
- Symbolic computation