1 Introduction

We introduce a viscosity approach to a broad class of constant rank theorems. Such theorems say that under suitable conditions a positive semi-definite bilinear form on a manifold, which satisfies a uniformly elliptic PDE, must have constant rank throughout the manifold. In this sense, constant rank theorems can be viewed as a strong maximum principle for tensors. The aim of this paper is two-fold. Firstly, we present a new approach to constant rank theorems. It is based on the idea that the subtraces of a linear map satisfy a linear differential inequality in a viscosity sense, which in turn allows the use of the strong maximum principle. This avoids the use of nonlinear test functions, as in [5], as well as the need for approximation by simple eigenvalues, as in [24]. Secondly, we show that the simplicity of this method allows us to obtain previously undiscovered constant rank theorems, in particular for non-homogeneous curvature type equations. To illustrate the idea, we give a new proof of the following full rank theorem for the Christoffel-Minkowski problem, a.k.a. the \(\sigma _{k}\)-equation.

Theorem 1.1

[14, Theorem 1.2]

Let \(({\mathbb {S}}^n,g,\nabla )\) be the unit sphere with standard round metric and connection. Suppose \(n\ge 2\), \(1\le k\le n-1\) and \(0<s,\phi \in C^{\infty }({\mathbb {S}}^n)\) satisfy

$$\begin{aligned} \begin{aligned}\nabla ^{2}\phi ^{-\frac{1}{k}}+\phi ^{-\frac{1}{k}}g\ge 0,\; 0\le r:=\nabla ^{2}s+sg\in \Gamma _{k},\; \sigma _{k}(r)= \phi , \end{aligned} \end{aligned}$$

where \(\sigma _{k}\) is the k-th elementary symmetric polynomial of the eigenvalues of r with respect to g and \(\Gamma _{k}\) is the k-th Gårding cone. Then r is positive definite.

Proof

For convenience, we define

$$\begin{aligned} \begin{aligned} F=\sigma _{k}^{1/k},\; f=\phi ^{1/k}. \end{aligned} \end{aligned}$$

Then \(F=f.\) We differentiate F and use the Codazzi property, where a semicolon stands for covariant differentiation and we use the summation convention:

$$\begin{aligned} \begin{aligned} f_{;ab}=F_{;ab}&=F^{ij,kl}r_{ij;a}r_{kl;b}+F^{ij}r_{ij;ab}\\&=F^{ij,kl}r_{ij;a}r_{kl;b}+F^{ij}r_{ab;ij}-r_{ab}F^{ij}g_{ij}+Fg_{ab}. \end{aligned} \end{aligned}$$

Hence the tensor r satisfies the elliptic equation

$$\begin{aligned} \begin{aligned} F^{ij}r_{ab;ij}=F^{ij}g_{ij}r_{ab}-fg_{ab}-F^{ij,kl}r_{ij;a}r_{kl;b}+f_{;ab}. \end{aligned} \end{aligned}$$

Now we deduce an inequality for the lowest eigenvalue \(\lambda _{1}\) of r in a viscosity sense. Let \(\xi \) be a smooth lower support at \(x_0\in \mathbb {S}^{n}\) for \(\lambda _{1}\) and let \(D_1\ge 1\) denote the multiplicity of \(\lambda _{1}(x_0)\). Denote by \(\Lambda \) the complement in \(\{1,\dots ,n\}^{4}\) of the set of indices with \(i,j,k,l>D_1\). We use a relation between the derivatives of \(\xi \) and r, as well as the inverse concavity of F (cf. [3, Lemma 5], [2]), to estimate in normal coordinates at \(x_{0}\):

$$\begin{aligned} \begin{aligned} F^{ij}\xi _{;ij}&\le F^{ij} r_{11;ij}-2\sum _{j>D_1}\frac{F^{ii}}{\lambda _{j}}(r_{ij;1})^{2}\\&=-F^{ij,kl}r_{ij;1}r_{kl;1}-2\sum _{j>D_1}\frac{F^{ii}}{\lambda _{j}}(r_{ij;1})^{2}+F^{ij}g_{ij}r_{11}-(f-f_{;11})\\&=-\sum _{i,j,k,l>D_1}F^{ij,kl}r_{ij;1}r_{kl;1}-2\sum _{j>D_1}\frac{F^{ii}}{\lambda _{j}}(r_{ij;1})^{2}-(f-f_{;11})\\&\quad -\sum _{(i,j,k,l)\in \Lambda }F^{ij,kl}r_{ij;1}r_{kl;1}+F^{ij}g_{ij}r_{11}\\&\le -\left( f+2f^{-1}f_{;1}^2-f_{;11}\right) + c|\nabla \xi |+F^{ij}g_{ij}\xi \\&\le F^{ij}g_{ij}\xi + c|\nabla \xi |. \end{aligned} \end{aligned}$$

Then the strong maximum principle for viscosity solutions (cf. [4]) implies that the set \(\{\lambda _{1}=0\}\) is open. Hence, if \(\lambda _{1}\) were zero somewhere, it would be zero everywhere. However, we know it is positive somewhere, since at a minimum of s we have \(r>0\). \(\square \)

The proof may be summarized as follows: apply the viscosity differential inequality from [3, Lemma 5] to the minimum eigenvalue \(\lambda _1\) of the spherical Hessian \(r = \nabla ^2 s + sg\). Then the strong maximum principle shows that, since there is a point at which \(\lambda _1 > 0\), we must have \(\lambda _1 > 0\) everywhere, and hence r has constant, full rank. A similar argument was employed in [19] to obtain curvature estimates along a curvature flow.

Our main approach here is to generalize the viscosity inequality to the subtrace \(G_m = \lambda _1 + \dots + \lambda _m\), the sum of the first m eigenvalues; see Lemma 3.2 below. Then, arguing by induction, if \(\lambda _1 = \dots = \lambda _{m-1} \equiv 0\), the strong maximum principle shows that either \(G_m > 0\) everywhere or \(G_m \equiv 0\), which yields constant rank theorems (in short, CRT).

We say a symmetric 2-tensor \(\alpha \) is Codazzi, provided \(\nabla \alpha \) is totally symmetric. Here is a prototypical CRT:

Theorem 1.2

(Homogeneous CRT) [10, Theorem 1.4] Suppose \(\alpha \) is a Codazzi, non-negative, symmetric 2-tensor on a connected Riemannian manifold \((M,g,\nabla )\) satisfying \(\Psi (\alpha ,g) = f> 0\), where \(\Psi \) is one-homogeneous, inverse concave and strictly elliptic (see Definition 1.3 and Assumption 2.1), and we have \(\nabla ^2 f^{-1} + \tau f^{-1} g \ge 0\) with \(\tau (x)\) the minimum sectional curvature at x. Then \(\alpha \) is of constant rank.

We state a more general version of CRT that allows the curvature function to be non-homogeneous and to explicitly depend on \(x\in M\) as well. To state the result, we need a few definitions.

Definition 1.3

Let \(\Gamma \subset \mathbb {R}^{n}\) be an open, convex cone such that

$$\begin{aligned} \begin{aligned} \Gamma _{+}:= \{\lambda \in \mathbb {R}^{n}:\lambda _{i}>0\quad \forall 1\le i\le n\} \subset \Gamma . \end{aligned} \end{aligned}$$

Suppose \((M^{n},g)\) is a smooth Riemannian manifold. A \(C^{\infty }\)-function

$$\begin{aligned} \begin{aligned} F:\Gamma \times M\rightarrow \mathbb {R}\end{aligned} \end{aligned}$$

is said to be a pointwise curvature function, if for any \(x\in M\), the map \(F(\cdot ,x)\) is symmetric under permutation of the \(\lambda _{i}\). Such a map generates another map (denoted by F again) given by

$$\begin{aligned} \begin{aligned} F:\mathcal {U}\subset \mathbb {R}^{n\times n}_{\textrm{sym}}\times \mathbb {R}^{n\times n}_{\textrm{sym}}\times M&\rightarrow \mathbb {R}\\ (\alpha ,g,x)&\mapsto F(\alpha ,g,x) = F(\lambda ,x), \end{aligned} \end{aligned}$$

where \(\mathcal {U}\) is a suitable open set and \(\lambda = (\lambda _{i})_{1\le i\le n}\) are the eigenvalues of \(\alpha \) with respect to g, or equivalently, the eigenvalues of the linear map \(\alpha ^{\sharp }\) defined by \(g(\alpha ^{\sharp }(v),w) = \alpha (v,w).\) Note that F can be considered as a map on an open set of \(\mathbb {R}^{n\times n}\) via \(F(\alpha ^{\sharp },x) = F(\alpha ,g,x)\); see [23].
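The passage from \(F(\alpha ,g,x)\) to \(F(\lambda ,x)\) can be illustrated numerically: the eigenvalues of \(\alpha \) with respect to g are those of the symmetric matrix \(g^{-1/2}\alpha g^{-1/2}\). The following sketch is ours and not part of the paper; all variable names are illustrative. It also checks the identities \(\sigma _1(\lambda ) = {{\,\textrm{tr}\,}}(g^{-1}\alpha )\) and \(\sigma _n(\lambda ) = \det \alpha /\det g\).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# Random metric g (symmetric positive definite) and symmetric 2-tensor alpha.
B = rng.standard_normal((n, n))
g = B @ B.T + n * np.eye(n)
C = rng.standard_normal((n, n))
alpha = C + C.T

# Eigenvalues of alpha with respect to g: eigenvalues of g^{-1} alpha,
# or equivalently of the symmetric matrix g^{-1/2} alpha g^{-1/2}.
w, V = np.linalg.eigh(g)
g_inv_sqrt = V @ np.diag(w ** -0.5) @ V.T
lam = np.linalg.eigvalsh(g_inv_sqrt @ alpha @ g_inv_sqrt)

# Symmetric functions of lam can be computed from (alpha, g) directly:
sigma_1 = np.trace(np.linalg.solve(g, alpha))      # tr(g^{-1} alpha)
sigma_n = np.linalg.det(alpha) / np.linalg.det(g)  # det of alpha w.r.t. g
assert np.isclose(lam.sum(), sigma_1)
assert np.isclose(lam.prod(), sigma_n)
```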

With the convention \(\alpha ^{i}_{j} = g^{ik}\alpha _{kj}\), where \((g^{kl})\) is the inverse of \((g_{kl})\):

$$\begin{aligned} \begin{aligned} F^{i}_{j}:= \frac{\partial F}{\partial \alpha ^{j}_{i}},\quad F^{ij}:= \frac{\partial F}{\partial \alpha _{ij}},\quad F^{ij,kl}:= \frac{\partial ^{2} F}{\partial \alpha _{ij}\partial \alpha _{kl}}. \end{aligned} \end{aligned}$$

Note that \(F^{ij} = F^{i}_{k}g^{kj}\). Moreover, F is said to be

  1. (i)

    Strictly elliptic, if \(F^{ij}\eta _{i}\eta _{j} >0\quad \forall 0\ne \eta \in \mathbb {R}^{n},\)

  2. (ii)

    One-homogeneous, if for all \(x\in M\), \(F(\cdot ,x)\) is homogeneous of degree one, and

  3. (iii)

    Inverse concave, if the map \(\tilde{F} \in C^{\infty }(\Gamma _{+}\times M)\) defined by

    $$\begin{aligned} \begin{aligned} \tilde{F}(\lambda _{i},x)= - F(\lambda _{i}^{-1},x)\quad \text{ is } \text{ concave. } \end{aligned} \end{aligned}$$
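Inverse concavity can be tested via midpoint concavity of \(\tilde{F}\) on \(\Gamma _{+}\). The following numerical sketch is ours and purely illustrative; it checks \(F=\sigma _1\) and \(F=\sigma _2^{1/2}\), both of which are inverse concave (cf. the discussion of the \(\sigma _k^{1/k}\) in Sect. 2).

```python
import numpy as np
from itertools import combinations

def sigma(k, lam):
    """k-th elementary symmetric polynomial of the entries of lam."""
    return sum(np.prod(c) for c in combinations(lam, k))

def F_tilde(lam, k):
    """The map lam -> -F(1/lam) for F = sigma_k^{1/k}."""
    return -sigma(k, 1.0 / lam) ** (1.0 / k)

# Midpoint-concavity check on the positive cone Gamma_+.
rng = np.random.default_rng(1)
violations = 0
for _ in range(200):
    x = rng.uniform(0.1, 5.0, size=5)
    y = rng.uniform(0.1, 5.0, size=5)
    for k in (1, 2):
        mid = F_tilde((x + y) / 2.0, k)
        avg = 0.5 * (F_tilde(x, k) + F_tilde(y, k))
        if mid < avg - 1e-10:  # concavity would fail here
            violations += 1
assert violations == 0
```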

We use the convention for the Riemann tensor from [11]. For a Riemannian or Lorentzian manifold \((M,g,\nabla )\),

$$\begin{aligned} \begin{aligned} {{\,\textrm{Rm}\,}}(X, Y) Z: = \nabla _Y \nabla _X Z - \nabla _X \nabla _Y Z - \nabla _{[Y,X]} Z \end{aligned} \end{aligned}$$

and we lower the upper index to the first slot:

$$\begin{aligned} \begin{aligned} {{\,\textrm{Rm}\,}}(W, X, Y, Z) = g(W, {{\,\textrm{Rm}\,}}(X, Y) Z). \end{aligned} \end{aligned}$$

The respective local coordinate expressions are \((R^{m}_{jkl})\) and \((R_{ijkl})\).

Definition 1.4

 

  1. (i)

    A pointwise curvature function \(F\in C^{\infty }(\Gamma \times M)\) is \(\Phi \)-inverse concave for some

    $$\begin{aligned} \begin{aligned} \Phi \in C^{\infty }(\Gamma \times M, T^{4,0}(M)), \end{aligned} \end{aligned}$$

provided for all \(\beta >0\) we have

    $$\begin{aligned} \begin{aligned} F^{ij,kl}\eta _{ij}\eta _{kl}+2F^{ik}\tilde{\beta }^{jl}\eta _{ij}\eta _{kl}\ge \Phi ^{ij,kl}\eta _{ij}\eta _{kl}, \end{aligned} \end{aligned}$$

    where \(\tilde{\beta }^{ik}\beta _{kj}=\delta ^{i}_{j}\).

  2. (ii)

    For \(\alpha \in \Gamma \) we define a curvature-adjusted modulus of \(\Phi \)-inverse concavity,

    $$\begin{aligned} \begin{aligned} \omega _{F}(\alpha )(\eta ,v)&=\Phi ^{ij,kl}\eta _{ij}\eta _{kl}+D_{xx}^{2}F(v,v)+2D_{x^{k}}F^{ij}\eta _{ij}v^{k}\\&+ {{\,\textrm{tr}\,}}_{g}{{\,\textrm{Rm}\,}}(\alpha ^{\sharp },v,D_{\alpha ^{\sharp }}F,v), \end{aligned} \end{aligned}$$

    where D denotes the product connection on \(\mathbb {R}^{n\times n}\times M\). Here the curvature term denotes contracting the vector parts of the (1, 1) tensors \(\alpha ^{\sharp } = \alpha ^i_j, D_{\alpha ^{\sharp }} F = F^k_l\) with the Riemann tensor and tracing the resulting bilinear form with respect to the metric so that

    $$\begin{aligned} \begin{aligned} {{\,\textrm{tr}\,}}_{g}{{\,\textrm{Rm}\,}}(\alpha ^{\sharp },e_m,D_{\alpha ^{\sharp }}F,e_m) = g^{jl} \alpha ^i_j F^k_l R_{imkm}. \end{aligned} \end{aligned}$$

Remark 1.5

If \((A, x) \mapsto -F(A^{-1}, x)\) is concave (i.e., F is inverse concave), then we may take \(\Phi =0\), and for all \((\eta ,v)\) we have

$$\begin{aligned} \begin{aligned} \omega _F(\alpha )(\eta ,v)\ge {{\,\textrm{tr}\,}}_g {{\,\textrm{Rm}\,}}(\alpha ^{\sharp },v,D_{\alpha ^{\sharp }}F,v). \end{aligned} \end{aligned}$$

On several occasions, where there is a homogeneity condition on F, we will be able to choose a good positive \(\Phi \) that allows us to relax assumptions on the other variables of the operator F; see Sect. 2.
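In a space form of sectional curvature \(\tau \), where \(R_{ijkl}=\tau (g_{ik}g_{jl}-g_{il}g_{jk})\) in an orthonormal frame, the curvature term in \(\omega _F\) with \(v=e_i\) reduces to \(\tau \bigl ({{\,\textrm{tr}\,}}(D_{\alpha ^{\sharp }}F\,\alpha ^{\sharp })-(D_{\alpha ^{\sharp }}F\,\alpha ^{\sharp })_{ii}\bigr )\), a reduction used repeatedly in Sect. 2. A numerical spot-check of this contraction (ours; the matrices are random symmetric stand-ins):

```python
import numpy as np

n, tau = 4, -1.0
g = np.eye(n)             # orthonormal frame
rng = np.random.default_rng(2)
A = rng.standard_normal((n, n)); alpha = A + A.T   # stand-in for alpha^sharp
B = rng.standard_normal((n, n)); dF = B + B.T      # stand-in for (F^k_l)

# Space-form Riemann tensor R_{ijkl} = tau (g_{ik} g_{jl} - g_{il} g_{jk}).
R = tau * (np.einsum('ik,jl->ijkl', g, g) - np.einsum('il,jk->ijkl', g, g))

results = []
for i in range(n):
    # Curvature term with v = e_i: F^{kr} alpha_k^l R_{liri}.
    lhs = np.einsum('kr,kl,lr->', dF, alpha, R[:, i, :, i])
    rhs = tau * (np.trace(dF @ alpha) - (dF @ alpha)[i, i])
    results.append(np.isclose(lhs, rhs))
assert all(results)
```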

We state the main result of the paper which contains Theorem 1.2 as a special case.

Theorem 1.6

(Non-homogeneous CRT) Let \((M,g,\nabla )\) be a connected Riemannian manifold and \(\Gamma \) an open, convex cone containing \(\Gamma _{+}\). Suppose \(F\in C^{\infty }(\Gamma \times M)\) is a \(\Phi \)-inverse concave, strictly elliptic pointwise curvature function. Let \(\alpha \) be a Codazzi, non-negative, symmetric 2-tensor with eigenvalues in \(\Gamma \) and

$$\begin{aligned} \begin{aligned} F(\alpha ^{\sharp },\cdot ) = 0\quad \text{ on }~M. \end{aligned} \end{aligned}$$

Suppose for all \(\Omega \Subset M\) there exists a positive constant \(c=c(\Omega )\), such that for all eigenvectors v of \(\alpha ^{\sharp }\) there holds

$$\begin{aligned} \begin{aligned} \omega _{F}(\alpha )(\nabla _{v}\alpha ,v)\ge -c(\alpha (v,v)+|\nabla \alpha (v,v)|). \end{aligned} \end{aligned}$$

Then \(\alpha \) is of constant rank.

Remark 1.7

It might seem more natural to replace the condition on \(\omega _F\) with the condition

$$\begin{aligned} \begin{aligned} \omega _{F}(\alpha )(\eta , v) \ge -c(\alpha (v,v)+|\nabla \alpha (v,v)|) \end{aligned} \end{aligned}$$

for every \(\eta \) and all v. Indeed, such a condition certainly leads to constant rank theorems, since taking in particular \(\eta = \nabla _v \alpha \) and v an eigenvector, we may apply Theorem 1.6. However, the requirement that the inequality hold for all \(\eta , v\) is too restrictive for applications such as Theorem 1.2. See the proof in Sect. 2 below, where the required inequality is only proved to hold for \(\eta = \nabla _v \alpha \) and v an eigenvector.

An application of Theorem 1.6 to a non-homogeneous curvature problem is given in Theorem 2.4; a result of this type was asked for in [16]. The full results are listed in Sect. 2.

CRT (also known as the microscopic convexity principle) was initially developed in [9] in two dimensions for convex solutions of semi-linear equations, \(\Delta u = f(u)\), using the maximum principle and the homotopy deformation lemma. The result was extended to higher dimensions in [20]. The continuity method combined with a CRT yields existence of strictly convex solutions to important curvature problems. For example, a CRT was an important ingredient in the study of prescribed curvature problems such as the Christoffel-Minkowski problem and the prescribed Weingarten curvature problem [12, 14, 15]. Later, general theorems for fully nonlinear equations were obtained in [5, 10] under the assumption that \(A \mapsto F(A^{-1})\) is locally convex. These approaches are based on the observation that a non-negative definite matrix valued function A has constant rank if and only if there is an \(\ell \) such that the elementary symmetric functions satisfy \(\sigma _{\ell } \equiv 0\) and \(\sigma _{\ell -1} > 0\). Applying this observation requires rather delicate, lengthy computations and the introduction of clever auxiliary functions. The difficulties are at least in part due to the non-linearity of \(\sigma _{\ell }\). An alternative approach was taken in [24, 25], using a linear combination of the lowest m eigenvalues, which provides a linearity advantage at the expense of losing regularity compared with \(\sigma _{\ell }\). The authors get around this difficulty by perturbing A so that the eigenvalues are distinct (thus restoring regularity) and then using an approximation argument. Our approach based on the viscosity inequality shows that \(G_m\) enjoys sufficient regularity to apply the strong maximum principle, and this suffices to obtain a self-contained proof of the CRT.

We remark here that our method is capable of reproving the results in [5, 10]: namely, with the help of Theorem 3.4 it is possible to prove that any convex solution u to

$$\begin{aligned} \begin{aligned} H(\nabla ^{2}u,\nabla u,u,\cdot ) = 0 \end{aligned} \end{aligned}$$

has constant rank under the assumption that

$$\begin{aligned} \begin{aligned} (A,u,x)\mapsto -H(A^{-1},p,u,x) \end{aligned} \end{aligned}$$

is concave for fixed p. This result does not follow from Theorem 1.6, but by using a suitably redefined \(\omega _{F}\) in Theorem 3.4, this result follows in the same way as Theorem 1.6. Here we rather want to focus on geometric problems.

We proceed as follows: In Sect. 2 we collect and prove direct applications of Theorem 1.6. In Sect. 3 we prove the viscosity inequality satisfied by the subtrace, a result that is of interest by itself. After some further corollaries, we conclude with the proof of Theorem 1.6.

2 Applications

In this section, we collect a few applications of Theorem 1.6. We fix an assumption that we need on several occasions.

Assumption 2.1

Let \(\Gamma \) be as in Definition 1.3.

  1. (i)

    \(\Psi \in C^{\infty }(\Gamma )\) is a positive, strictly elliptic, homogeneous function of degree one and normalized to \(\Psi (1,\dots ,1)=n,\)

  2. (ii)

    \(\Psi \) is inverse concave.

Recall that such a function \(\Psi \) at invertible arguments \(\beta \) satisfies

$$\begin{aligned} \begin{aligned} \Psi ^{ij,kl}\eta _{ij}\eta _{kl}+2\Psi ^{ik}\tilde{\beta }^{jl}\eta _{ij}\eta _{kl}\ge \frac{2}{\Psi }(\Psi ^{ij}\eta _{ij})^{2} \end{aligned} \end{aligned}$$
(2.1)

for all symmetric \((\eta _{ij})\); see for example [2].

In order to facilitate notation, for covariant derivatives we use semi-colons, e.g., the components of the second derivative \(\nabla ^{2}T\) of a tensor are denoted by

$$\begin{aligned} \begin{aligned} T_{;ij}=\nabla _{\partial _{j}}\nabla _{\partial _{i}}T-\nabla _{\nabla _{\partial _{j}}\partial _{i}}T. \end{aligned} \end{aligned}$$

First, we illustrate how Theorem 1.2 follows from Theorem 1.6.

Proof of Theorem 1.2

We define \(F = \Psi - f.\) In view of (2.1) and Definition 1.4, we have

$$\begin{aligned} \Phi ^{ij,kl}\eta _{ij}\eta _{kl}=2\Psi ^{-1}(\Psi ^{ij}\eta _{ij})^{2}. \end{aligned}$$

Let \(x_0\in M\) and \((e_i)_{1\le i\le n}\) be an orthonormal basis of eigenvectors for \(\alpha ^{\sharp }(x_0)\). In the associated coordinates, we calculate

$$\begin{aligned} \begin{aligned} \omega _F(\alpha )( \nabla _{e_{m}}\alpha ,e_m)&\ge 2f^{-1}f_{;m}^2 -f_{;mm}+\tau \Psi ^{kr}\alpha ^l_k(g_{lr} - g_{lm}g_{rm})\\&\ge 2f^{-1}f_{;m}^2 - f_{;mm}+\tau f - c\alpha _{mm}\\&=f^{2}\left( (f^{-1})_{;mm}+\tau f^{-1} \right) - c\alpha _{mm}, \end{aligned} \end{aligned}$$

for some constant c. Hence the claim follows from Theorem 1.6. \(\square \)

For a \(C^2\) function \(\zeta \) on a space \((M,g)\) of constant curvature \(\tau _M\), we set

$$\begin{aligned} \begin{aligned} r_M[\zeta ]:=\tau _M\nabla ^2\zeta + g\zeta . \end{aligned} \end{aligned}$$

The next theorem contains the full rank theorems from [14, 15, 17] as special cases.

Theorem 2.2

(\(L_p\)-Christoffel-Minkowski Type Equations) Suppose \((M,g,\nabla )\) is either the hyperbolic space \(\mathbb {H}^{n}\) or the sphere \(\mathbb {S}^{n}\) equipped with their standard metrics and connections. Let \(\Psi \) satisfy Assumption 2.1, \(k\ge 1,\) \(p\ne 0\) and \(0<\phi ,s\in C^{\infty }(M)\) satisfy

$$\begin{aligned} \begin{aligned} r_{M}[s]&\ge 0,\; s^{1-p}\Psi ^k(r_M[s])= \phi . \end{aligned} \end{aligned}$$

If either

$$\begin{aligned} \begin{aligned} \left\{ \begin{array}{ll} r_{\mathbb {H}^n}[\phi ^{-\frac{1}{p+k-1}}]\ge 0, &{} p+k-1<0, \\ \text{ or }\\ r_{\mathbb {S}^n}[\phi ^{-\frac{1}{p+k-1}}]\ge 0, &{} p\ge 1, \end{array} \right. \end{aligned} \end{aligned}$$

then \(r_M[s]\) is of constant rank. In particular, if \(M=\mathbb {S}^n,\) then we have

$$\begin{aligned} \begin{aligned} r_{\mathbb {S}^n}[s]>0. \end{aligned} \end{aligned}$$

Proof

Note that \(\alpha =r_M[s]\) is a Codazzi tensor. We define

$$\begin{aligned} F=\Psi -\left( \phi s^{p-1}\right) ^{\frac{1}{k}}=\Psi -f. \end{aligned}$$

For simplicity, we rewrite \(f=us^{q-1},\) where \(u=\phi ^{\frac{1}{k}}\) and \(q=\frac{p+k-1}{k}.\)

As in the proof of Theorem 1.2, we have

$$\begin{aligned} \begin{aligned} \omega _F(\alpha )( \nabla _{e_{m}}\alpha ,e_m)\ge 2f^{-1}f_{;m}^2 - f_{;mm}+\tau _M f - c\alpha _{mm}. \end{aligned} \end{aligned}$$

Now we calculate

$$\begin{aligned} \begin{aligned} f_{;mm}-2f^{-1}f_{;m}^2-\tau _Mf&=-\left( \tau _Mqu+\frac{q+1}{q}\frac{(u_{;m})^2}{u}-u_{;mm}\right) s^{q-1}\\&\quad -\frac{q-1}{q}\left( \frac{u_{;m}}{u}+q\frac{s_{;m}}{s}\right) ^2f\\&\quad +\tau _M(q-1)fs^{-1}r_M[s]_{mm}. \end{aligned} \end{aligned}$$

Therefore, if either \(r_{\mathbb {H}^n}[u^{-\frac{1}{q}}]\ge 0,\; q<0\) or \(r_{\mathbb {S}^n}[u^{-\frac{1}{q}}]\ge 0,\; q\ge 1,\) then

$$\begin{aligned} \begin{aligned} f_{;mm}-2f^{-1}f_{;m}^2-\tau _Mf\le c\alpha _{mm}, \end{aligned} \end{aligned}$$

for some \(c\ge 0.\) The result follows from Theorem 1.6. Since \(\mathbb {S}^n\) is compact, at some point y we must have \(r_{{\mathbb {S}}^n}[s](y)>0.\) Hence \(r_{{\mathbb {S}}^n}[s]>0\) on M. \(\square \)
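The key identity in the preceding proof is purely algebraic in the pointwise values \((u,u_{;m},u_{;mm},s,s_{;m},s_{;mm})\), once one uses \(r_M[s]_{mm}=\tau _M s_{;mm}+s\) and \(\tau _M^{2}=1\). A numerical spot-check of this identity (ours; the six values are treated as independent random numbers, which is legitimate precisely because the identity is algebraic):

```python
import numpy as np

def lhs(u, du, ddu, s, ds, dds, q, tau):
    """f_{;mm} - 2 f^{-1} f_{;m}^2 - tau f for f = u s^{q-1}, by product rule."""
    f = u * s ** (q - 1)
    f_m = du * s ** (q - 1) + (q - 1) * u * s ** (q - 2) * ds
    f_mm = (ddu * s ** (q - 1) + 2 * (q - 1) * du * s ** (q - 2) * ds
            + (q - 1) * (q - 2) * u * s ** (q - 3) * ds ** 2
            + (q - 1) * u * s ** (q - 2) * dds)
    return f_mm - 2 * f_m ** 2 / f - tau * f

def rhs(u, du, ddu, s, ds, dds, q, tau):
    """Right-hand side of the identity, with r_M[s]_{mm} = tau s_{;mm} + s."""
    f = u * s ** (q - 1)
    r_mm = tau * dds + s
    return (-(tau * q * u + (q + 1) / q * du ** 2 / u - ddu) * s ** (q - 1)
            - (q - 1) / q * (du / u + q * ds / s) ** 2 * f
            + tau * (q - 1) * f / s * r_mm)

rng = np.random.default_rng(3)
checks = []
for tau in (1.0, -1.0):           # tau_M^2 = 1 in both model spaces
    for _ in range(100):
        u, s = rng.uniform(0.5, 3.0, size=2)
        du, ddu, ds, dds = rng.uniform(-2.0, 2.0, size=4)
        q = rng.uniform(0.5, 3.0)
        checks.append(np.isclose(lhs(u, du, ddu, s, ds, dds, q, tau),
                                 rhs(u, du, ddu, s, ds, dds, q, tau)))
assert all(checks)
```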

Remark 2.3

Let \(M=x(\Omega )\), \(x:\Omega \hookrightarrow \mathbb {R}^{n,1}\) be a co-compact, convex, spacelike hypersurface. The support function of M, \(s:\mathbb {H}^n\rightarrow \mathbb {R}\), is defined by \(s(z)=\inf \{-\langle z,p\rangle ;\, p\in M\},\) and \(r_{\mathbb {H}^n}[s]\) is non-negative definite. Moreover, if \(r>0\), then the eigenvalues of r with respect to g are the principal radii of curvature; e.g., [1]. Therefore, the curvature problem stated in the previous theorem can be considered as an \(L_p\)-Christoffel-Minkowski type problem in the Minkowski space.

In [16] the authors asked about the validity of CRT for non-homogeneous curvature problems. In this respect we have the following theorem. First we recall the definition of the Gårding cones:

$$\begin{aligned} \begin{aligned} \Gamma _{\ell } = \{\lambda \in \mathbb {R}^{n}:\sigma _{1}(\lambda )>0,\dots ,\sigma _{\ell }(\lambda )>0\}, \end{aligned} \end{aligned}$$

where \(\sigma _{k}\) is the k-th elementary symmetric polynomial of the \(\lambda _{i}\). In \(\Gamma _{\ell }\), all \(\sigma _{k}\), \(1\le k\le \ell \), are strictly elliptic and the \(\sigma _{k}^{1/k}\) are inverse concave; see [18]. For a cone \(\Gamma \subset \mathbb {R}^{n}\) and a Riemannian manifold \((M,g)\), a bilinear form \(\alpha \) is called \(\Gamma \)-admissible if its eigenvalues with respect to g are in \(\Gamma \).
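Membership in the Gårding cones, and the Euler identity \(\sum _i \lambda _i\,\partial \sigma _k/\partial \lambda _i = k\sigma _k\) used below to handle the homogeneous parts, can be illustrated numerically. The following sketch is ours; the sample vectors are arbitrary.

```python
import numpy as np
from itertools import combinations

def sigma(k, lam):
    """k-th elementary symmetric polynomial; sigma(0, .) = 1."""
    return sum(np.prod(c) for c in combinations(lam, k))

def in_garding_cone(lam, ell):
    """Membership in Gamma_ell: sigma_1(lam) > 0, ..., sigma_ell(lam) > 0."""
    return all(sigma(k, lam) > 0 for k in range(1, ell + 1))

lam = np.array([3.0, 1.0, -0.5])
assert in_garding_cone(lam, 2) and not in_garding_cone(lam, 3)

# Euler's identity for the degree-k homogeneous sigma_k:
# sum_i lam_i d(sigma_k)/d(lam_i) = k sigma_k, where the partial derivative
# is sigma_{k-1} of the remaining entries.
mu = np.array([2.0, 1.5, 1.0, 0.5])
euler_ok = []
for k in (1, 2, 3):
    euler = sum(mu[i] * sigma(k - 1, np.delete(mu, i)) for i in range(len(mu)))
    euler_ok.append(np.isclose(euler, k * sigma(k, mu)))
assert all(euler_ok)
```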

Theorem 2.4

(A non-homogeneous curvature problem) Let \(\phi >0\) be a smooth function on \((\mathbb {S}^{n},g,\nabla )\) with

$$\begin{aligned} \begin{aligned} \phi g-\nabla ^{2}\phi \ge 0, \end{aligned} \end{aligned}$$

\(\psi _{\ell }\equiv 1\) and \(0< \psi _{k}\in C^{\infty }(\mathbb {S}^{n})\) for \(1\le k\le \ell -1\) satisfy

$$\begin{aligned} \begin{aligned} \nabla ^{2}\psi _{k}-\frac{k}{k+1}\frac{\nabla \psi _{k}\otimes \nabla \psi _{k}}{\psi _{k}}+(k-1)\psi _{k}g\ge 0. \end{aligned} \end{aligned}$$

Let \(\alpha \) be a \(\Gamma _{\ell }\)-admissible, Codazzi, non-negative, symmetric 2-tensor, such that

$$\begin{aligned} \begin{aligned} \sum _{k=1}^{\ell }\psi _{k}(x)\sigma _{k}(\alpha ,g)=\phi (x). \end{aligned} \end{aligned}$$

Then \(\alpha \) is of constant rank. In particular, when \(\alpha =r_{\mathbb {S}^n}[s]\ge 0\) for some positive function \(s\in C^{\infty }(\mathbb {S}^n)\), then in fact we have \(\alpha >0.\)

Proof

The result follows quickly from Theorem 1.6. We define

$$\begin{aligned} \begin{aligned} F(\alpha ,g,x) = \sum _{k=1}^{\ell }\psi _{k}(x)\sigma _{k}(\alpha ,g) - \phi (x). \end{aligned} \end{aligned}$$

Since \(\sigma _{k}^{1/k}\) is inverse concave and 1-homogeneous, F is \(\Phi \)-inverse concave with

$$\begin{aligned} \begin{aligned} \Phi ^{pq,rs}\eta _{pq}\eta _{rs}:= \sum _{k=1}^{\ell }\frac{k+1}{k}\psi _{k}\frac{\sigma _{k}^{pq}\sigma _{k}^{rs}}{\sigma _{k}}\eta _{pq}\eta _{rs}. \end{aligned} \end{aligned}$$

Let \(x_0\in M\) and \((e_i)_{1\le i\le n}\) be an orthonormal basis of eigenvectors for \(\alpha ^{\sharp }(x_0).\) Now using

$$\begin{aligned} \begin{aligned} F^{kr}\alpha _k^{l}R_{liri}=F^{kr}\alpha _k^l(g_{lr}g_{ii}-g_{li}g_{ri})=\sum _{k=1}^{\ell } k\psi _k\sigma _k-F^{ii}\alpha ^{ii}, \end{aligned} \end{aligned}$$

we deduce

$$\begin{aligned} \begin{aligned}&\omega _F(\alpha )(\nabla _{e_i}\alpha ,e_i)+\phi _{;ii}\\&\quad \ge \sum _{k=1}^{\ell }\left( \sigma _{k}\psi _{k;ii}+2\psi _{k;i}\sigma _{k;i}+\frac{k+1}{k}\frac{\psi _k}{\sigma _k}(\sigma _{k;i})^2+k\psi _{k}\sigma _{k} \right) - c\alpha _{ii}\\&\quad \ge \sum _{k=1}^{\ell }\left( \psi _{k;ii}-\frac{k}{k+1}\frac{(\psi _{k;i})^{2}}{\psi _{k}}+(k-1)\psi _{k} + \psi _{k} \right) \sigma _{k}-c\alpha _{ii}\\&\quad =\sum _{k=1}^{\ell -1}\left( \psi _{k;ii}-\frac{k}{k+1}\frac{(\psi _{k;i})^{2}}{\psi _{k}}+(k-1)\psi _{k} \right) \sigma _{k}+ \phi + (\ell -1)\sigma _{\ell }- c\alpha _{ii}. \end{aligned} \end{aligned}$$

Therefore, \(\omega _F(\alpha )(\nabla _{e_i}\alpha ,e_i)+c\alpha _{ii}\) is non-negative for some constant c. \(\square \)

Let \((N,{\bar{g}},\bar{D})\) be a simply connected Riemannian or Lorentzian spaceform of constant sectional curvature \(\tau _N\). That is, N is either the Euclidean space \(\mathbb {R}^{n+1}\), the sphere \(\mathbb {S}^{n+1}\), the hyperbolic space \(\mathbb {H}^{n+1}\) with respective sectional curvature \(0,1,-1\) or the \((n+1)\)-dimensional Lorentzian de Sitter space \(\mathbb {S}^{n,1}\) with sectional curvature 1.

Assume \(M=x(\Omega )\) given by \(x:\Omega \hookrightarrow N\) is a connected, spacelike, locally convex hypersurface of N and

$$\begin{aligned} \begin{aligned} f\in C^{\infty }(M\times \mathbb {R}_{+}\times \tilde{N}), \end{aligned} \end{aligned}$$

where \(\tilde{N}\) denotes the dual manifold of N, i.e.,

$$\begin{aligned} \begin{aligned} {\tilde{\mathbb {R}}}^{n+1}=\mathbb {S}^{n},\quad {\tilde{\mathbb {S}}}^{n+1}=\mathbb {S}^{n+1},\quad {\tilde{\mathbb {H}}}^{n+1}={\mathbb {S}}^{n,1},\quad {\tilde{\mathbb {S}}}^{n,1}={\mathbb {H}}^{n+1}. \end{aligned} \end{aligned}$$

Here f is extended as a zero homogeneous function to the ambient space. We write \(\nu , h, s\) for the future directed (timelike) normal, the second fundamental form and the support function of M, respectively (cf. [7, 8]). The eigenvalues of h with respect to the induced metric on M are ordered as \(\kappa _1\le \dots \le \kappa _n\) and we write in short

$$\begin{aligned} \kappa =(\kappa _1,\ldots ,\kappa _n). \end{aligned}$$

The Gauss equation (cf. [11, (1.1.37)]) relates extrinsic and intrinsic curvatures,

$$\begin{aligned} \begin{aligned} R_{ijkl}&=\sigma (h_{ik}h_{jl}-h_{il}h_{jk})+{\overline{{{\,\textrm{Rm}\,}}}}(x_{;i},x_{;j},x_{;k},x_{;l}) \\&= \sigma (h_{ik}h_{jl}-h_{il}h_{jk})+ \tau _N(\bar{g}_{ik} \bar{g}_{jl} - \bar{g}_{il} \bar{g}_{jk}), \end{aligned} \end{aligned}$$
(2.2)

where \(\sigma = {\bar{g}}(\nu ,\nu )\) and the second fundamental form is defined by

$$\begin{aligned} \begin{aligned} {\bar{D}}_{X}Y = \nabla _{X}Y - \sigma h(X,Y)\nu . \end{aligned} \end{aligned}$$

Theorem 2.5

Let \((N,{\bar{g}},{\bar{D}})\) be one of the spaces above and let \(\Psi \) satisfy Assumption 2.1. Let M be a connected, spacelike, locally convex and \(\Gamma \)-admissible hypersurface such that

$$\begin{aligned} \begin{aligned} \Psi (\kappa )=f(x,s,\nu ), \end{aligned} \end{aligned}$$

where \(0<f\in C^{\infty }(M\times \mathbb {R}_{+}\times \tilde{N})\) and

$$\begin{aligned} \begin{aligned} \bar{D}^2_{xx}f^{-1}+\tau _Nf^{-1} {\bar{g}}\ge 0. \end{aligned} \end{aligned}$$

Then the second fundamental form of M is of constant rank.

Proof

Define \(F(h,g,x) = \Psi (h^{\sharp }) - f(x,s(x),\nu (x)).\) Let \(x_0\in M\) and \((e_i)_{1\le i\le n}\) be an orthonormal basis of eigenvectors for \(h^{\sharp }(x_0).\) Now in view of Theorem 1.6, the claim follows from [8, p. 15] and a computation using the Gauss equation (2.2):

$$\begin{aligned} \begin{aligned}&\omega _F(h)(\nabla _{e_{m}}h,e_{m}) \\&\quad \ge ~ 2\Psi ^{-1}(\Psi _{;m})^2 - {\bar{D}}^2_{xx}f(e_m,e_m)+ F^{ik}h^{l}_{i} R_{kmlm} - c(h_{mm} +|\nabla h_{mm}|)\\&\quad \ge ~ 2\Psi ^{-1}(\Psi _{;m})^2 - {\bar{D}}^2_{xx}f(e_{m},e_{m}) + \Psi ^{ik}h^{l}_{i} \bar{R}_{kmlm}- c(h_{mm} + |\nabla h_{mm}|)\\&\quad \ge ~ 2f^{-1}({\bar{D}}_x f(e_{m}))^2 - {\bar{D}}^2_{xx}f(e_{m},e_{m}) +\tau _N \Psi ^{ik}h^l_i(g_{kl} - g_{km} g_{lm})\\&\qquad - c(h_{mm}+ |\nabla h_{mm}|)\\&\quad \ge ~ 2f^{-1}({\bar{D}}_x f(e_{m}))^2 - {\bar{D}}^2_{xx}f(e_{m},e_{m}) + \tau _N f - c(h_{mm} + |\nabla h_{mm}|)\\&\quad \ge ~ - c(h_{mm} + |\nabla h_{mm}|). \end{aligned} \end{aligned}$$

\(\square \)

The following corollary contains the CRT from [12, 13] as special cases.

Corollary 2.6

(Curvature Measures Type Equations) Suppose the curvature function \(\Psi \) satisfies Assumption 2.1, \(1\le k\le n-1\), \(p\in \mathbb {R}\) and \(0<\phi \in C^{\infty }({\mathbb {S}}^n)\). Let M be a \(\Gamma \)-admissible convex hypersurface of \(\mathbb {R}^{n+1}\) which encloses the origin in its interior and suppose

$$\begin{aligned} \begin{aligned} \Psi (\kappa )=\langle x,\nu \rangle ^{p} |x|^{-\frac{n+1}{k}}\phi \left( \frac{x}{|x|}\right) ^{\frac{1}{k}}. \end{aligned} \end{aligned}$$

If

$$\begin{aligned} \begin{aligned} |x|^{\frac{n+1}{k}}\phi \left( \frac{x}{|x|}\right) ^{-\frac{1}{k}}~\text{ is } \text{ convex } \text{ on }~\mathbb {R}^{n+1}\setminus \{0\}, \end{aligned} \end{aligned}$$

then M is strictly convex.

3 A viscosity approach

The following lemma served as the main motivation for us to study constant rank theorems via a viscosity approach. It shows that the smallest eigenvalue of a bilinear form satisfies a viscosity inequality. In the context of extrinsic curvature flows a similar approach was taken to prove the preservation of convex cones; see [21, 22]. There it was shown that the distance of the vector of eigenvalues to the boundary of a convex cone satisfies a viscosity inequality.

Lemma 3.1

[3, Lemma 5] Let the eigenvalues of a symmetric 2-tensor \(\alpha \) with respect to a metric \((g,\nabla )\) at \(x_0\) be ordered via

$$\begin{aligned} \begin{aligned} \lambda _1=\cdots =\lambda _{D_1}<\lambda _{D_1+1}\le \cdots \le \lambda _n, \end{aligned} \end{aligned}$$

for some \(D_1\ge 1.\) Let \(\xi \) be a lower support for \(\lambda _1\) at \(x_0.\) That is, \(\xi \) is a smooth function such that in an open neighborhood of \(x_0\),

$$\begin{aligned} \begin{aligned} \xi \le \lambda _1 \end{aligned} \end{aligned}$$

and \(\xi (x_0)=\lambda _1(x_0).\) Choose an orthonormal frame for \(T_{x_0}M\) such that

$$\begin{aligned} \begin{aligned} \alpha _{ij}=\delta _{ij}\lambda _i,\quad g_{ij}=\delta _{ij}. \end{aligned} \end{aligned}$$

Then at \(x_0\) we have for \(1\le k\le n\),

  1. (1)
    $$\begin{aligned} \begin{aligned} \alpha _{ij;k}=\delta _{ij}\xi _{;k}\quad 1\le i,j\le D_1, \end{aligned} \end{aligned}$$
  2. (2)
    $$\begin{aligned} \begin{aligned} \xi _{;kk}\le \alpha _{11;kk}-2\sum _{j>D_1}\frac{(\alpha _{1j;k})^2}{\lambda _j-\lambda _1}. \end{aligned} \end{aligned}$$

While the previous lemma is sufficient for full rank theorems (i.e., when the respective linear map is non-negative, and positive definite at least at one point), we need to generalize [3, Lemma 5] from the smallest eigenvalue to an arbitrary subtrace of a matrix to treat constant rank theorems.

To formulate the following lemma, we introduce some notation. For a symmetric 2-tensor \(\alpha \) on a vector space V with inner product g, let \(\alpha ^{\sharp }\) be the metric raised endomorphism defined by \(g(\alpha ^{\sharp }(X), Y) = \alpha (X, Y)\). Then \(\alpha ^{\sharp }\) is diagonalizable and we write

$$\begin{aligned} \lambda _1 \le \dots \le \lambda _n \end{aligned}$$

for the eigenvalues, where the distinct eigenvalues have eigenspaces \(E_k\) of dimension \(d_k = \dim E_k\), \(1 \le k \le N\). For convenience, let \(E_0 = \{0\}\) and \(d_0 = 0\). Define

$$\begin{aligned} \begin{aligned} \bar{E}_j = \bigoplus _{k=0}^j E_k, \quad \bar{d}_j = {\text {dim}} \bar{E}_j \end{aligned} \end{aligned}$$

for \(0\le j \le N\) so that

$$\begin{aligned} \begin{aligned} \{0\} = \bar{E}_0 \subsetneq \bar{E}_1 \subsetneq \cdots \subsetneq \bar{E}_N = V, \quad \bar{E}_k = \bar{E}_{k-1} \oplus E_k. \end{aligned} \end{aligned}$$

Let \((e_j)_{1 \le j \le n}\) be an orthonormal basis of eigenvectors corresponding to the eigenvalues \((\lambda _j)_{1\le j \le n}\) giving \(E_k = {{\,\textrm{span}\,}}\{e_{\bar{d}_{k-1}+1}, \dots , e_{\bar{d}_k}\}\) and \(\bar{E}_k = {{\,\textrm{span}\,}}\{e_1, \dots , e_{\bar{d}_k}\}\). For each \(1 \le m \le n\), there is a unique j(m) such that

$$\begin{aligned} \begin{aligned} \bar{E}_{j(m)-1} \subsetneq V_m:= {{\,\textrm{span}\,}}\{e_1, \dots , e_m\} \subseteq \bar{E}_{j(m)}. \end{aligned} \end{aligned}$$

Then \(\bar{d}_{j(m)-1} < m \le \bar{d}_{j(m)}\). For convenience, we write

$$\begin{aligned} \begin{aligned} D_m:= \bar{d}_{j(m)}. \end{aligned} \end{aligned}$$

Note that \(D_m\) is the largest number such that

$$\begin{aligned} \begin{aligned} \lambda _1\le \cdots \le \lambda _m=\cdots =\lambda _{D_m}<\lambda _{D_m+1}\le \cdots \le \lambda _n, \end{aligned} \end{aligned}$$

and hence

$$\begin{aligned} \bar{E}_{j(m)}={{\,\textrm{span}\,}}\{e_1,\ldots ,e_{D_m}\}. \end{aligned}$$

The subspace \(V_m\) is invariant under \(\alpha ^{\sharp }\) and the trace of \(\alpha ^{\sharp }\) restricted to \(V_m\) is the subtrace,

$$\begin{aligned} \begin{aligned} G_{m}:=\sum _{k=1}^{m}\lambda _{k}. \end{aligned} \end{aligned}$$

This subtrace is characterized by Ky Fan’s maximum principle (cf. [6, Theorem 6.5]): it is the infimum of the traces of \(\pi _P \circ \alpha ^{\sharp }|_P\) over all m-planes P of the tangent space, where \(\pi _P\) is the orthogonal projection onto the m-plane P:

$$\begin{aligned} \begin{aligned} G_{m}&= \inf _P \{{{\,\textrm{tr}\,}}\pi _P \circ \alpha ^{\sharp }|_P:P = m\text {-plane}\} \\&= \inf _{(w_{k})_{1\le k\le m}}\left\{ \sum _{k,l=1}^{m}g^{kl}\alpha (w_{k},w_{l}):(g(w_{k},w_{l}))_{1\le k,l\le m}>0\right\} , \end{aligned} \end{aligned}$$

where \((g^{kl})\) is the inverse of \(g_{kl}=g(w_{k},w_{l})\). Now suppose \(\alpha \) is a bilinear form on a Riemannian manifold \((M,g)\), \(x_0\in M\) and \((e_{i})_{1\le i\le n}\) is an orthonormal basis of eigenvectors at \(x_{0}\) with eigenvalues

$$\begin{aligned} \begin{aligned} \lambda _{1}(x_{0})\le \dots \le \lambda _{n}(x_{0}). \end{aligned} \end{aligned}$$

Let \(w_i(x)\), \(1 \le i \le m\), be any linearly independent local vector fields around \(x_0\) with \(w_i(x_0) = e_i\). Then we obtain a smooth upper support function for \(G_m\) at \(x_0\):

$$\begin{aligned} \begin{aligned} \Theta (x):= \sum _{k,l=1}^{m} g^{kl} \alpha _{kl} \ge G_m(x), \quad \Theta (x_0) = G_m(x_0), \end{aligned} \end{aligned}$$

where \(\alpha _{kl} = \alpha (w_k(x), w_l(x))\). We make use of \(\Theta \) to prove the next lemma generalizing Lemma 3.1.
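Ky Fan’s characterization can be checked numerically. The following sketch (our illustration, assuming numpy; a random symmetric matrix stands in for \(\alpha ^{\sharp }\) at a point) verifies that every m-plane yields a trace at least \(G_m\), attained on the span of the first m eigenvectors:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 6, 3

# Random symmetric matrix standing in for alpha^sharp at a point.
A = rng.standard_normal((n, n))
A = (A + A.T) / 2

lam = np.linalg.eigvalsh(A)          # eigenvalues in ascending order
G_m = lam[:m].sum()                  # subtrace: sum of the m smallest

# For a frame W (n x m) with orthonormal columns, the trace of the
# projected restriction pi_P o A|_P over P = range(W) is tr(W^T A W).
vals = []
for _ in range(2000):
    W, _ = np.linalg.qr(rng.standard_normal((n, m)))
    vals.append(np.trace(W.T @ A @ W))

assert min(vals) >= G_m - 1e-9       # every m-plane gives at least G_m
# The eigenvector frame attains the infimum.
_, V = np.linalg.eigh(A)
assert abs(np.trace(V[:, :m].T @ A @ V[:, :m]) - G_m) < 1e-9
```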

Lemma 3.2

Let \((M,g)\) be a Riemannian manifold and let \(\alpha \) be a symmetric 2-tensor on TM. Suppose \(1\le m\le n\) and \(\xi \) is a (local) lower support at \(x_0\) for the subtrace \(G_m(\alpha ^{\sharp })\). Then at \(x_0\) we have

  1. (1)
    $$\begin{aligned} \begin{aligned} \xi _{;i} = {{\,\textrm{tr}\,}}_{V_m} \alpha _{;i} = \sum _{k=1}^m \alpha _{kk;i}, \end{aligned} \end{aligned}$$
  2. (2)
    $$\begin{aligned} \begin{aligned} \xi _{;ii}\le&\sum _{k=1}^{m}\alpha _{kk;ii}-2\sum _{k=1}^{m}\sum _{r>D_m}\frac{(\alpha _{kr;i})^{2}}{\lambda _{r}-\lambda _{k}}, \end{aligned} \end{aligned}$$

where \(V_m = {{\,\textrm{span}\,}}\{e_1(x_0), \dots , e_m(x_0)\}\) for any choice of m orthonormal eigenvectors \(e_k\) with corresponding eigenvalues \(\lambda _1, \dots , \lambda _m\) satisfying

$$\begin{aligned} \begin{aligned} \lambda _1\le \cdots \le \lambda _m=\cdots =\lambda _{D_m}<\lambda _{D_m+1}\le \cdots \le \lambda _n. \end{aligned} \end{aligned}$$

Proof

For this proof we use the summation convention for indices ranging between 1 and m. Let \(\xi \) be a lower support for \(G_m\) at \(x_{0}\). Fix an index \(1\le i\le n\) and let \(\gamma (s)\) be a geodesic with \(\gamma (0)=x_{0}\) and \({\dot{\gamma }}(0)=e_{i}(x_0)\). Let \((v_k)_{1\le k \le m}\) be any basis (not necessarily orthonormal) for \(V_m\) as in the statement of the lemma. As mentioned above, for any m linearly independent vector fields \((w_k(s))_{1\le k \le m}\) along \(\gamma \) with \(w_k(0) = v_k(x_0)\), \(\alpha _{kl} = \alpha (w_{k},w_{l})\) and \((g^{kl}) = (g(w_{k},w_{l}))^{-1}\), the function

$$\begin{aligned} \begin{aligned} \Theta (s):= g^{kl} \alpha _{kl} - \xi (\gamma (s)) \end{aligned} \end{aligned}$$

satisfies

$$\begin{aligned} \begin{aligned} \Theta (s) \ge 0, \; \Theta (0) = 0 \end{aligned} \end{aligned}$$

and hence

$$\begin{aligned} \begin{aligned} {\dot{\Theta }}(0) = 0, \; \ddot{\Theta }(0) \ge 0. \end{aligned} \end{aligned}$$

Since \(V_m \subseteq \bar{E}_{j(m)}\), choosing \(w_k\) such that \({\dot{w}}_k(0) \perp \bar{E}_{j(m)}(x_0)\) gives

$$\begin{aligned} \begin{aligned} {\dot{g}}_{kl}(0) = g({\dot{w}}_k(0), v_l) + g(v_k, {\dot{w}}_l(0)) = 0 \end{aligned} \end{aligned}$$

and hence also

$$\begin{aligned} \begin{aligned} {\dot{g}}^{kl}(0) = -g^{ka}(0) {\dot{g}}_{ab}(0)g^{bl}(0) = 0. \end{aligned} \end{aligned}$$
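The identity \({\dot{g}}^{kl} = -g^{ka}{\dot{g}}_{ab}g^{bl}\) is just the derivative of the matrix inverse; a quick finite-difference check (our numpy sketch, with an arbitrary smooth curve of positive definite Gram matrices standing in for \(g_{kl}(s)\)):

```python
import numpy as np

rng = np.random.default_rng(2)
n, h = 4, 1e-6

# A smooth curve of Gram matrices g(s), positive definite near s = 0.
B0 = rng.standard_normal((n, n))
G0 = B0 @ B0.T + n * np.eye(n)       # g(0), positive definite
G1 = rng.standard_normal((n, n))
G1 = (G1 + G1.T) / 2                 # g'(0), symmetric

g = lambda s: G0 + s * G1
inv = np.linalg.inv

# d/ds g^{-1}(0) = -g^{-1} g'(0) g^{-1}, checked by central differences.
fd = (inv(g(h)) - inv(g(-h))) / (2 * h)
assert np.allclose(fd, -inv(G0) @ G1 @ inv(G0), atol=1e-4)
```

In particular \({\dot{g}}_{kl}(0)=0\) forces \({\dot{g}}^{kl}(0)=0\), as used in the proof.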

Then we compute

$$\begin{aligned} \begin{aligned} 0 = {\dot{\Theta }}(0) = \left( g^{kl} \alpha _{kl;i} - \xi _{;i}\right) |_{x_0} \end{aligned} \end{aligned}$$

giving the first part.

Now we move on to the second derivatives. For this we make the additional assumptions \(v_k = e_k\) and \(\ddot{w}_k(0) = 0\). We first calculate

$$\begin{aligned} \begin{aligned} \ddot{g}^{kl}(0)&= g^{km}{\dot{g}}_{mr}g^{ra}{\dot{g}}_{ab}g^{bl}-g^{ka}\ddot{g}_{ab}g^{bl}+g^{ka}{\dot{g}}_{ab}g^{bm}{\dot{g}}_{mr}g^{rl} \\&= -\delta ^{ka}\ddot{g}_{ab}(0)\delta ^{bl}, \end{aligned} \end{aligned}$$

since \({\dot{g}}_{kl}(0) = 0\) and \(g^{kl}(0) = \delta ^{kl}\). Then from \(\ddot{w}_k(0) = 0\) we obtain

$$\begin{aligned} \begin{aligned} \ddot{g}^{kl}(0)&= -\left[ g(\ddot{w}_{k},w_{l})+g(w_{k},\ddot{w}_{l})+2g({\dot{w}}_{k},{\dot{w}}_{l})\right] (0) \\&= -2 \delta ^{ka}g({\dot{w}}_a(0), {\dot{w}}_b(0))\delta ^{bl}. \end{aligned} \end{aligned}$$

From the local minimum property,

$$\begin{aligned} \begin{aligned} 0&\le \ddot{\Theta }(0)\\&=\ddot{g}^{kl}(0)\alpha _{kl}+\delta ^{kl}\frac{d^{2}}{ds^{2}}_{|s=0}\alpha _{kl}(s)-\xi _{;ii}(x_{0})\\&=-2g({\dot{w}}_{k}(0),{\dot{w}}_{l}(0))\alpha ^{kl} + \delta ^{kl}\alpha _{kl;ii}\\&\quad +4\delta ^{kl}\nabla _{i}\alpha ({\dot{w}}_{k}(0),w_{l}(0)) + 2\delta ^{kl}\alpha ({\dot{w}}_{k}(0),{\dot{w}}_{l}(0))-\xi _{;ii}(x_{0})\\&= \sum _{k=1}^m \alpha _{kk;ii} -\xi _{;ii}(x_{0}) \\&\quad +2\sum _{k=1}^m \left( 2\nabla _{i}\alpha ({\dot{w}}_{k}(0),e_{k}) + \alpha ({\dot{w}}_{k}(0),{\dot{w}}_{k}(0)) - g({\dot{w}}_{k}(0),{\dot{w}}_{k}(0))\lambda _k\right) . \end{aligned} \end{aligned}$$

From \({\dot{w}}_k(0) \perp \bar{E}_{j(m)}\), we may write \({\dot{w}}_k (0) = \sum \limits _{r>D_m} c_k^r e_r\) giving

$$\begin{aligned} \begin{aligned} \xi _{;ii}(x_{0})-\sum _{k=1}^{m}\alpha _{kk;ii}&\le 2\sum _{k=1}^{m} \sum _{r>D_m} \left( 2 c_k^r \alpha _{kr;i} + (c_k^r)^2 \lambda _r - (c_k^r)^2\lambda _k \right) \\&= 2\sum _{k=1}^{m} \sum _{r>D_m} c_k^r \left( 2\alpha _{kr;i} + c_k^r(\lambda _r - \lambda _k)\right) . \end{aligned} \end{aligned}$$

Optimizing yields the specific choice

$$\begin{aligned} \begin{aligned} {\dot{w}}_{k}(0) = -\sum _{r> D_m}\frac{\alpha _{kr;i}}{\lambda _{r}-\lambda _{k}}e_{r}. \end{aligned} \end{aligned}$$

From this we obtain

$$\begin{aligned} \begin{aligned} \xi _{;ii}(x_{0})-\sum _{k=1}^{m}\alpha _{kk;ii} \le&-2\sum _{k=1}^{m} \sum _{r>D_m} \frac{\alpha _{kr;i}}{\lambda _{r}-\lambda _{k}} \left( 2\alpha _{kr;i} - \alpha _{kr;i}\right) \\ =&-2\sum _{k=1}^{m}\sum _{r>D_m}\frac{(\alpha _{kr;i})^{2}}{\lambda _{r}-\lambda _{k}}. \end{aligned} \end{aligned}$$

\(\square \)
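As a sanity check on the optimization step above, one can verify numerically that \(c=-a/\Delta \) minimizes the per-pair contribution \(2c(2a+c\Delta )\) with minimum value \(-2a^{2}/\Delta \). In this sketch (our illustration) `a` and `delta` stand in for \(\alpha _{kr;i}\) and \(\lambda _{r}-\lambda _{k}>0\):

```python
import numpy as np

rng = np.random.default_rng(1)

for _ in range(100):
    a = rng.standard_normal()        # stands in for alpha_{kr;i}
    delta = rng.uniform(0.1, 5.0)    # stands in for lambda_r - lambda_k > 0
    q = lambda c: 2 * c * (2 * a + c * delta)
    c_star = -a / delta              # the choice made in the proof
    # Claimed minimum value: -2 a^2 / delta.
    assert abs(q(c_star) - (-2 * a**2 / delta)) < 1e-12
    # Random competitors never do better (q is a convex parabola in c).
    for c in rng.standard_normal(50):
        assert q(c) >= q(c_star) - 1e-12
```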

Corollary 3.3

Let \(\alpha \) be a non-negative, symmetric 2-tensor on TM. Suppose for some \(1 \le m \le n\) that \(\dim \ker \alpha ^{\sharp } \ge m-1\), or equivalently that the eigenvalues of \(\alpha ^{\sharp }\) satisfy \(\lambda _{1} \equiv \dots \equiv \lambda _{m-1} \equiv 0.\) Then for every \(x_{0}\), every lower support \(\xi \) for \(G_{m}\) at \(x_{0}\) and every \(1\le i\le n\) we have

  1. (1)

    \((\nabla _i\alpha (x_{0}))_{|\ker \alpha ^{\sharp } \times \ker \alpha ^{\sharp }} = 0,\)

  2. (2)

    \((\nabla _i\alpha (x_{0}))_{|E_{j(m)} \times E_{j(m)}} = g\nabla _i \xi (x_{0}),\quad \text{ if }~\lambda _{m}(x_{0})>0.\)

Proof

We use a basis \((e_{i})\) as in Lemma 3.2. To prove (1) we may assume \(\lambda _1(x_0) = 0\), and hence the zero function is a lower support for \(\lambda _1\). By Lemma 3.1, we have \(\nabla \alpha _{kl}=0\) for all \(1\le k,l\le d_1\) proving the first equation.

Now we prove (2). For \(m=1\) the claim follows from Lemma 3.2-(1). Suppose \(m > 1\). If \(d_1 \ge m\) at \(x_0\) then \(\lambda _m(x_0) = 0\), which violates our assumption. Hence \(d_1=m-1\) and \(E_1(x_0) = {{\,\textrm{span}\,}}\{e_1, \dots , e_{m-1}\}\). Taking any unit vector \(v \in E_2(x_0)={{\,\textrm{span}\,}}\{e_m,\ldots ,e_{D_m}\}\) and applying Lemma 3.2-(1) with \(V_m={{\,\textrm{span}\,}}\{e_1,\ldots ,e_{m-1},v\}\) gives

$$\begin{aligned} \begin{aligned} \nabla _i \alpha (v,v) = {{\,\textrm{tr}\,}}_{V_m} \nabla _i\alpha = \nabla _i \xi \quad \forall 1\le i\le n. \end{aligned} \end{aligned}$$

Polarizing the quadratic form \(v \mapsto \nabla _i \alpha (v, v)\) over \(E_2(x_0)\) then shows

$$\begin{aligned} \nabla _i \alpha _{kl} =\delta _{kl} \nabla _i\xi \quad \forall m \le k,l \le D_m. \end{aligned}$$

\(\square \)

Now we state the key outcome of the results in this section. We want to acknowledge that the following proof is inspired by the beautiful paper [24] and its sophisticated test function

$$\begin{aligned} \begin{aligned} Q = \sum _{q=1}^{m}G_{q}. \end{aligned} \end{aligned}$$

Theorem 3.4

Under the assumptions of Theorem 1.6, if \(\dim \ker \alpha ^{\sharp } \ge m-1\), for all \(\Omega \Subset M\) there exists a constant \(c=c(\Omega )\), such that for all \(x_{0}\in \Omega \) and any lower support function \(\xi \) for \(G_m(\alpha ^{\sharp })\) at \(x_0\) we have

$$\begin{aligned} \begin{aligned} F^{ij} \xi _{;ij} \le c(\xi + |\nabla \xi |). \end{aligned} \end{aligned}$$

Proof

In view of our assumption, \(\lambda _{1}\equiv \cdots \equiv \lambda _{m-1}\equiv 0\). Hence the zero function is a smooth lower support at \(x_{0}\) for every subtrace \(G_{q}\) with \(1\le q\le m-1\). Therefore by Lemma 3.2, for every \(1\le q\le m-1\) and every \(1\le i\le n\) we obtain

$$\begin{aligned} \begin{aligned} 0\le \sum _{k=1}^{q}\alpha _{kk;ii}-2\sum _{k=1}^{q}\sum _{j>D_{q}}\frac{(\alpha _{kj;i})^{2}}{\lambda _{j}-\lambda _{k}}. \end{aligned} \end{aligned}$$
(3.1)

Due to the Ricci identity, we have the commutation formula

$$\begin{aligned} \begin{aligned} \alpha _{ij;kl} = \alpha _{ki;jl}&= \alpha _{ki;lj} + R^{p}_{kjl}\alpha _{pi} + R^{p}_{ijl}\alpha _{pk} \\&= \alpha _{kl;ij} + R^{p}_{kjl}\alpha _{pi} + R^{p}_{ijl}\alpha _{pk}. \end{aligned} \end{aligned}$$

Taking into account Lemma 3.2 and adding the inequalities (3.1) for \(1\le q\le m-1\), we have at \(x_{0}\),

$$\begin{aligned} \begin{aligned} F^{ij}\xi _{;ij}&\le \sum _{q=1}^{m}\sum _{k=1}^{q}F^{ij}\alpha _{kk;ij} -2\sum _{q=1}^{m}\sum _{k=1}^{q}\sum _{j>D_{q}}\frac{F^{ii}(\alpha _{kj;i})^{2}}{\lambda _{j}-\lambda _{k}} \\&\le \sum _{q=1}^{m}\sum _{k=1}^{q}F^{ij}\left( \alpha _{ij;kk} - R^{p}_{kjk}\alpha _{pi} - R^{p}_{ijk}\alpha _{pk}\right) \\&\quad -2\sum _{q=1}^{m}\sum _{k=1}^{q}\sum _{j>D_{q}}\frac{F^{ii}(\alpha _{kj;i})^{2}}{\lambda _{j}-\lambda _{k}}. \end{aligned} \end{aligned}$$

Now differentiating the equation \(F(\alpha ^{\sharp }, x)=0\) yields

$$\begin{aligned} \begin{aligned} 0&= F^{ij}\alpha _{ij;k}+D_{x^{k}}F,\\ 0&=F^{ij,rs}\alpha _{ij;k}\alpha _{rs;l}+D_{x^{l}}F^{ij}\alpha _{ij;k}+F^{ij}\alpha _{ij;kl}+D_{x^{k}}F^{rs}\alpha _{rs;l}+D_{x^{k}x^{l}}^{2}F. \end{aligned} \end{aligned}$$

Then substituting above gives

$$\begin{aligned} \begin{aligned} F^{ij}\xi _{;ij}&\le -2\sum _{q=1}^{m}\sum _{k=1}^{q}\sum _{j>D_{q}}\frac{F^{ii}(\alpha _{kj;i})^{2}}{\lambda _{j}-\lambda _{k}} - \sum _{q=1}^{m}\sum _{k=1}^q F^{ij,rs}\alpha _{ij;k}\alpha _{rs;k} \\&\quad - \sum _{q=1}^{m}\sum _{k=1}^{q}\left( D^{2}_{x^{k}x^{k}}F + 2D_{x^{k}}F^{ij}\alpha _{ij;k} + F^{ij}\left( R^{p}_{kjk}\alpha _{pi} + R^{p}_{ijk}\alpha _{pk}\right) \right) \\&\le -2\sum _{q=1}^{m}\sum _{k=1}^{q}\sum _{j>D_{m}}\frac{F^{ii}(\alpha _{ij;k})^{2}}{\lambda _{j}} - \sum _{q=1}^{m}\sum _{k=1}^q \sum _{i,j,r,s>D_{m}}F^{ij,rs}\alpha _{ij;k}\alpha _{rs;k}\\&\quad - \sum _{q=1}^{m}\sum _{k=1}^{q}\left( D^{2}_{x^{k}x^{k}}F + 2D_{x^{k}}F^{ij}\alpha _{ij;k} + F^{ij}R^{p}_{kjk}\alpha _{pi}\right) +c\xi \\&\quad + C\sum _{i=1}^{n}\sum _{j,k\le D_{m}}|\alpha _{jk;i}|-2\sum _{q=1}^{m}\sum _{k=1}^{q}\sum _{j=D_{q}+1}^{D_{m}}\frac{F^{ii}(\alpha _{ij;k})^{2}}{\lambda _{j}}, \end{aligned} \end{aligned}$$

where we have used that \(\alpha \) is Codazzi and the fact that \(1 \le k \le m \le D_m\) in splitting the sum involving \(F^{ij,rs}\) into terms where at least two indices are at most \(D_{m}\) and terms where all indices \(i,j,r,s>D_m\). We have also used \(\lambda _j - \lambda _k \le \lambda _j\) (recall \(\lambda _k \ge 0\)), and that for some constant c

$$\begin{aligned} \begin{aligned} F^{ij}R^{p}_{ijk}\alpha _{pk}\ge -c\xi . \end{aligned} \end{aligned}$$

Now for every \(1\le k\le m\) define

$$\begin{aligned} \begin{aligned} \eta _{k}=(\eta _{ijk}) = {\left\{ \begin{array}{ll}\alpha _{ij;k},&{} i,j>D_{m}\\ 0, &{} i\le D_{m}~\text{ or }~j\le D_{m}.\end{array}\right. } \end{aligned} \end{aligned}$$

Then

$$\begin{aligned} \begin{aligned} F^{ij}\xi _{;ij}&\le -2\sum _{q=1}^{m}\sum _{k=1}^{q}\sum _{j>D_{m}}\frac{F^{ii}(\eta _{ijk})^{2}}{\lambda _{j}} - \sum _{q=1}^{m}\sum _{k=1}^q F^{ij,rs}\eta _{ijk}\eta _{rsk}\\&\quad - \sum _{q=1}^{m}\sum _{k=1}^{q}D^{2}_{x^{k}x^{k}}F - 2\sum _{q=1}^{m}\sum _{k=1}^{q}D_{x^{k}}F^{ij}\eta _{ijk} - \sum _{q=1}^{m}\sum _{k=1}^{q}F^{ij}R^{p}_{kjk}\alpha _{pi}\\&\quad +C\sum _{i=1}^{n}\sum _{j,k\le D_m}|\alpha _{jk;i}|-2\sum _{q=1}^m\sum _{k=1}^{q}\sum _{j=D_{q}+1}^{D_{m}}\frac{F^{ii}(\alpha _{ij;k})^{2}}{\lambda _{j}}+ c\xi . \end{aligned} \end{aligned}$$

In addition we define \(\alpha ^{\sharp }_{\varepsilon } = \alpha ^{\sharp }+\varepsilon {{\,\textrm{id}\,}}\), which has positive eigenvalues for \(\varepsilon >0\). In the sequel, a subscript \(\varepsilon \) denotes evaluation of a quantity at \(\alpha ^{\sharp }_{\varepsilon }\), e.g., we put \(F^{ij}_{\varepsilon } = F^{ij}(\alpha ^{\sharp }_{\varepsilon }).\) We have

$$\begin{aligned} \begin{aligned} F^{ij}\xi _{;ij}&\le \sum _{q=1}^{m}\lim _{\varepsilon \rightarrow 0}\left( - 2\sum _{k=1}^{q}\sum _{j=1}^{n}\frac{F^{ii}_{\varepsilon }(\eta _{ijk})^{2}}{\lambda _{j}+\varepsilon } - \sum _{k=1}^q F^{ij,rs}_{\varepsilon }\eta _{ijk}\eta _{rsk}\right. \\&\quad \left. - \sum _{k=1}^{q}(D^{2}_{x^{k}x^{k}}F)_{\varepsilon } - 2\sum _{k=1}^{q}(D_{x^{k}}F^{ij})_{\varepsilon }\eta _{ijk} - \sum _{k=1}^{q}F^{ij}_{\varepsilon }R^{p}_{kjk}(\alpha _{\varepsilon })_{pi}\right) \\&\quad +C\sum _{i=1}^{n}\sum _{j,k\le D_m}|\alpha _{jk;i}|-2\sum _{q=1}^m\sum _{k=1}^{q}\sum _{j=D_{q}+1}^{D_{m}}\frac{F^{ii}(\alpha _{ij;k})^{2}}{\lambda _{j}}+c\xi . \end{aligned} \end{aligned}$$

In view of Definition 1.4 and the definition of \(\omega _F\),

$$\begin{aligned} \begin{aligned} F^{ij}\xi _{;ij}&\le \sum _{q=1}^{m}\lim _{\varepsilon \rightarrow 0}\left( - \sum _{k=1}^{q}\Phi ^{ij,rs}_{\varepsilon }\eta _{ijk}\eta _{rsk}- \sum _{k=1}^{q}(D^{2}_{x^{k}x^{k}}F)_{\varepsilon }\right. \\&\quad \left. - 2\sum _{k=1}^{q}(D_{x^{k}}F^{ij})_{\varepsilon }\eta _{ijk} - \sum _{k=1}^{q}F^{ij}_{\varepsilon }R^{p}_{kjk}(\alpha _{\varepsilon })_{pi}\right) \\&\quad +C\sum _{i=1}^{n}\sum _{j,k\le D_m}|\alpha _{jk;i}|-2\sum _{q=1}^m\sum _{k=1}^{q}\sum _{j=D_{q}+1}^{D_{m}}\frac{F^{ii}(\alpha _{ij;k})^{2}}{\lambda _{j}}+c\xi \\&\le - \sum _{q=1}^{m}\sum _{k=1}^{q}\omega _{F}(\alpha )(\eta _{k},e_{k})+C\sum _{i=1}^{n}\sum _{j,k\le D_m}|\alpha _{jk;i}|\\&\quad -2\sum _{q=1}^m\sum _{k=1}^{q}\sum _{j=D_{q}+1}^{D_{m}}\frac{F^{ii}(\alpha _{ij;k})^{2}}{\lambda _{j}}+c\xi . \end{aligned} \end{aligned}$$

Adding and subtracting some terms gives

$$\begin{aligned} \begin{aligned} F^{ij}\xi _{;ij}&\le -\sum _{q=1}^{m}\sum _{k=1}^{q}\omega _{F}(\alpha )(\nabla _{e_{k}}\alpha ,e_{k}) + c\xi \\&\quad +\sum _{q=1}^{m}\sum _{k=1}^{q}\omega _{F}(\alpha )(\nabla _{e_{k}}\alpha ,e_{k}) - \sum _{q=1}^{m}\sum _{k=1}^{q}\omega _{F}(\alpha )(\eta _{k},e_{k}) \\&\quad + C\sum _{i=1}^{n}\sum _{k,j\le D_m}|\alpha _{jk;i}|-2\sum _{q=1}^m\sum _{k=1}^{q}\sum _{j=D_{q}+1}^{D_{m}}\frac{F^{ii}(\alpha _{ij;k})^{2}}{\lambda _{j}}. \end{aligned} \end{aligned}$$
(3.2)

Next we estimate the last two lines of (3.2). We have

$$\begin{aligned} \begin{aligned} \sum _{q=1}^{m}\sum _{k=1}^{q}\omega _{F}(\alpha )(\nabla _{e_{k}}\alpha ,e_{k})&- \sum _{q=1}^{m}\sum _{k=1}^{q}\omega _{F}(\alpha )(\eta _{k},e_{k})\le C\sum _{i=1}^{n}\sum _{j,k\le D_m}|\alpha _{jk;i}|, \\ C\sum _{i=1}^{n}\sum _{j,k\le D_m}|\alpha _{jk;i}|&\le C\sum _{i=1}^{n}\sum _{k=1}^{D_1}\sum _{j=D_{1}+1}^{D_{m}}|\alpha _{jk;i}|+c|\nabla \xi |, \end{aligned} \end{aligned}$$

where for the last inequality we used Corollary 3.3. Let us define

$$\begin{aligned} \begin{aligned} \mathcal {R}&=C\sum _{i=1}^{n}\sum _{k=1}^{D_1}\sum _{j=D_{1}+1}^{D_{m}}|\alpha _{jk;i}|-2\sum _{q=1}^m\sum _{k=1}^{q}\sum _{j=D_{q}+1}^{D_{m}}\frac{F^{ii}(\alpha _{ij;k})^{2}}{\lambda _{j}-\lambda _{k}}\\&=C\sum _{i=1}^{n}\sum _{k=1}^{D_1}\sum _{j=D_{1}+1}^{D_{m}}|\alpha _{jk;i}|-2\sum _{q=1}^{m-1}\sum _{k=1}^{q}\sum _{j=D_{q}+1}^{D_{m}}\frac{F^{ii}(\alpha _{ij;k})^{2}}{\lambda _{j}-\lambda _{k}}. \end{aligned} \end{aligned}$$

Note that if \(\lambda _m(x_0)=0\), then \(D_q=D_m\) for all \(q\le m\) and hence \({\mathcal {R}}=0\). If \(\lambda _{m}(x_{0})>0,\) then we have \(D_q=m-1\) for all \(q\le m-1\) and

$$\begin{aligned} \begin{aligned} {\mathcal {R}}=C\sum _{i=1}^{n}\sum _{k=1}^{m-1}\sum _{j=m}^{D_{m}}|\alpha _{jk;i}|-2\sum _{k=1}^{m-1}(m-k)\sum _{j=m}^{D_{m}}\frac{F^{ii}(\alpha _{ij;k})^{2}}{\lambda _{j}-\lambda _{k}}. \end{aligned} \end{aligned}$$

Therefore, due to uniform ellipticity, we can use

$$\begin{aligned} \begin{aligned} C\sum _{i=1}^n|\alpha _{jk;i}|\le 2(m-k)\frac{F^{ii}(\alpha _{jk;i})^{2}}{\lambda _{j}-\lambda _k}+c\xi \end{aligned} \end{aligned}$$

to show that \({\mathcal {R}}\le c'\xi \). Then, by the assumptions on \(\omega _F\), the right-hand side of (3.2) is bounded by \(c(\xi + |\nabla \xi |)\), completing the proof. \(\square \)
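The absorption in the last step is an instance of the elementary Young-type inequality \(C|x|\le bx^{2} + C^{2}/(4b)\) for \(b>0\): here \(b\) plays the role of \(2(m-k)F^{ii}/(\lambda _j-\lambda _k)\), which is bounded below by uniform ellipticity, while the constant term is of order \(\lambda _j\le \xi \). A minimal numerical sketch (our illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

# Young-type absorption: for any b > 0, C*|x| <= b*x**2 + C**2/(4*b).
# This is the mechanism behind absorbing C*sum|alpha_{jk;i}| into the
# good negative gradient terms plus c*xi.
for _ in range(1000):
    C = rng.uniform(0.0, 10.0)
    b = rng.uniform(0.1, 10.0)
    x = 5 * rng.standard_normal()
    assert C * abs(x) <= b * x**2 + C**2 / (4 * b) + 1e-12
```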

Remark 3.5

Here we crucially used that F is \(\Phi \)-inverse concave before taking the limit \(\varepsilon \rightarrow 0\), and only then swapped \(\eta _k\) with \(\nabla _{e_k} \alpha \), absorbing the extra terms. If on the other hand we tried to swap first, without using \(\Phi \)-inverse concavity, the extra terms would involve \(\sum _{r=1}^{n}\frac{F^{ii}_{\varepsilon }(\nabla _{e_k} (\alpha _{\varepsilon })_{ir})^{2}}{\lambda _{r}+\varepsilon }\). Since \(\lambda _r = 0\) for \(1\le r \le m-1\), this blows up in the limit \(\varepsilon \rightarrow 0\) and cannot be absorbed.

Proof of Theorem 1.6

Let \(k:= \max _{x\in M}\dim \ker \alpha ^{\sharp }(x).\) If \(k=0\), we are done. By induction we show that for all \(1\le m\le k\) we have \(\lambda _{m}\equiv 0\). For \(m=1\), clearly we have \(\dim \ker \alpha ^{\sharp } \ge m-1\) and hence by Theorem 3.4 a lower support \(\xi \) for \(G_1 = \lambda _1\) locally satisfies

$$\begin{aligned} \begin{aligned} F^{ij} \xi _{;ij} \le c(\xi + |\nabla \xi |). \end{aligned} \end{aligned}$$

By the strong maximum principle [4], \(\lambda _1 \equiv 0\).

Now suppose the claim holds true for \(m-1\), i.e.,

$$\begin{aligned} \begin{aligned} \lambda _1 \equiv \dots \equiv \lambda _{m-1} \equiv 0. \end{aligned} \end{aligned}$$

Then \(\dim \ker \alpha ^{\sharp }\ge m-1\) everywhere, so by Theorem 3.4 a lower support \(\xi \) for \(G_m\) satisfies

$$\begin{aligned} \begin{aligned} F^{ij}\xi _{;ij} \le c(\xi + |\nabla \xi |). \end{aligned} \end{aligned}$$

Hence, again by the strong maximum principle, \(G_m \equiv 0\) for all \(m\le k.\) Since k is the maximal dimension of the kernel, we must have \(\lambda _{k+1}>0\) everywhere, and hence the rank of \(\alpha ^{\sharp }\) is constant and equal to \(n-k\). \(\square \)