1 Introduction

Let \((\overline{M}, g)\) be a compact Riemannian manifold with smooth boundary ∂M. In this paper, we are concerned with the obstacle problem

$$\begin{aligned} \max \bigl\{ u - \phi, - \bigl(f \bigl(\lambda \bigl(\nabla ^{2} u + \chi \bigr)\bigr) - \psi (x, u, \nabla u)\bigr)\bigr\} = 0\quad \text{in } M \end{aligned}$$
(1.1)

with the boundary condition

$$\begin{aligned} u = \varphi \quad\text{on } \partial M, \end{aligned}$$
(1.2)

where f is a smooth, symmetric function defined in an open convex cone \(\Gamma \subset \mathbb{R}^{n}\) with a vertex at the origin and

$$\begin{aligned} \Gamma _{n} = \bigl\{ \lambda = (\lambda _{1}, \ldots, \lambda _{n}) \in \mathbb{R}^{n}: \text{ each } \lambda _{i} > 0\bigr\} \subseteq \Gamma \neq \mathbb{R}^{n}, \end{aligned}$$

\(\nabla ^{2} u\) denotes the Hessian of u, χ is a \((0, 2)\)-tensor field, \(\lambda (h)\) denotes the eigenvalues of a \((0, 2)\)-tensor field h with respect to the metric g and \(\varphi \in C^{4} (\partial M)\). In this work, we assume the obstacle function \(\phi \in C^{3} (\overline{M})\) satisfies \(\phi = \varphi \) on ∂M.

We shall use a penalization technique to establish the a priori \(C^{2}\) estimates for a singular perturbation problem (see (2.1)). A similar problem was studied in [14] and [1], where the obstacle function ϕ is assumed to satisfy \(\phi > \varphi \) on ∂M so that near the boundary ∂M, the solution of (2.1) satisfies the Hessian-type equation

$$\begin{aligned} f \bigl(\lambda \bigl(\nabla ^{2} u + \chi \bigr)\bigr) = \psi (x, u, \nabla u) \end{aligned}$$
(1.3)

and the second-order boundary estimates follow from studies on Hessian-type equations (see [6], [9], and [10] for examples). In the current paper the obstacle function ϕ is allowed to equal φ on the boundary, so the main difficulty lies in the boundary estimates for second-order derivatives.

As in [3], we suppose the function \(f \in C^{2} (\Gamma ) \cap C^{0} (\overline{\Gamma})\) satisfies the structure conditions:

$$\begin{aligned} &f_{i} = f_{\lambda _{i}} \equiv \frac{\partial f}{\partial \lambda _{i}} > 0 \quad\text{in $\Gamma $}, 1 \leq i \leq n, \end{aligned}$$
(1.4)
$$\begin{aligned} &\text{$f$ is concave in $\Gamma $}, \end{aligned}$$
(1.5)

and

$$\begin{aligned} \textstyle\begin{cases} f>0& \text{in $\Gamma $}, \\ f=0& \text{on $\partial \Gamma $}. \end{cases}\displaystyle \end{aligned}$$
(1.6)

In addition, f is also assumed to satisfy that for any positive constants \(\mu _{1}, \mu _{2}\) with \(0 < \mu _{1} < \mu _{2} < \sup_{\Gamma }f\) there exists a positive constant \(c_{0}\) depending on \(\mu _{1}\) and \(\mu _{2}\) such that

$$\begin{aligned} \bigl(f_{1} (\lambda ) \cdot \cdots \cdot f_{n} (\lambda )\bigr)^{1/n} \geq c_{0} \end{aligned}$$
(1.7)

for any \(\lambda \in \Gamma _{\mu _{1}, \mu _{2}}:= \{\lambda \in \Gamma: \mu _{1} \leq f (\lambda ) \leq \mu _{2}\}\), and that there exists a positive constant \(c_{1}\) such that

$$\begin{aligned} f_{i} (\lambda ) \geq c_{1} \biggl(1 + \sum _{j} f_{j} \biggr) \quad\text{for any } \lambda \in \Gamma \text{ with } \lambda _{i} < 0. \end{aligned}$$
(1.8)

Furthermore, f is supposed to satisfy that for any \(A > 0\) and any compact set \(K \subset \Gamma \), there exists \(R = R (A, K) > 0\) such that

$$\begin{aligned} f (\lambda _{1}, \ldots, \lambda _{n-1}, \lambda _{n} + R) \geq A, \quad\text{for all } \lambda \in K \end{aligned}$$
(1.9)

and

$$\begin{aligned} f (R \lambda ) \geq A, \quad\text{for all } \lambda \in K. \end{aligned}$$
(1.10)
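
To illustrate conditions (1.7)–(1.10) in the simplest case (this quick check is only for orientation and is not used later), take \(f (\lambda ) = \sigma _{1} (\lambda ) = \lambda _{1} + \cdots + \lambda _{n}\) on the half-space \(\Gamma = \{\lambda \in \mathbb{R}^{n}: \sigma _{1} (\lambda ) > 0\}\): then \(f_{i} \equiv 1\), so (1.7) holds with \(c_{0} = 1\) and (1.8) holds with \(c_{1} = 1/(n+1)\), while for any \(A > 0\) and any compact set \(K \subset \Gamma \) with \(m:= \min_{K} \sigma _{1} > 0\),

$$\begin{aligned} &f (\lambda _{1}, \ldots, \lambda _{n-1}, \lambda _{n} + R) = \sigma _{1} (\lambda ) + R \geq A \quad\text{whenever } R \geq A, \\ &f (R \lambda ) = R \sigma _{1} (\lambda ) \geq R m \geq A \quad\text{whenever } R \geq A/m, \end{aligned}$$

so (1.9) and (1.10) hold with \(R (A, K) = \max \{A, A/m\}\).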

Following [3], we assume that there exists a large number \(R > 0\) such that at each \(x \in \partial M\),

$$\begin{aligned} \bigl(\kappa _{1} (x), \ldots, \kappa _{n-1} (x), R\bigr) \in \Gamma, \end{aligned}$$
(1.11)

where \((\kappa _{1} (x), \ldots, \kappa _{n-1} (x))\) are the principal curvatures of ∂M at x (relative to the interior normal). Since the function ψ may depend on ∇u, we assume there exists an admissible subsolution \(\underline{u} \in C^{2} (\overline{M})\) satisfying

$$\begin{aligned} \textstyle\begin{cases} f (\lambda (\nabla ^{2} \underline{u} + \chi )) \geq \psi (x, \underline{u}, \nabla \underline{u}) &\text{in } M, \\ \underline{u} = \varphi &\text{on } \partial M , \\ \underline{u} \leq \phi &\text{in } M. \end{cases}\displaystyle \end{aligned}$$
(1.12)

As in [6], the function \(\psi (x, z, p) \in C^{2} (T^{*}\overline{M} \times \mathbb{R})\) is positive and satisfies

$$\begin{aligned} & \psi (x, z, p) \text{ is convex in } p, \end{aligned}$$
(1.13)
$$\begin{aligned} &\sup_{(x, z, p) \in T^{*} \overline{M} \times \mathbb{R}} \frac{- \psi _{z} (x, z, p)}{\psi (x, z, p)} < \infty \end{aligned}$$
(1.14)

and the growth condition

$$\begin{aligned} \begin{aligned}& p \cdot \nabla _{p} \psi (x, z, p) \leq \bar{\psi} (x, z) \bigl(1 + \vert p \vert ^{ \gamma _{1}} \bigr), \\ &p \cdot \nabla _{x} \psi (x, z, p) + \vert p \vert ^{2} \psi _{z} (x, z, p) \geq \bar{\psi} (x, z) \bigl(1 + \vert p \vert ^{\gamma _{2}}\bigr), \end{aligned} \end{aligned}$$
(1.15)

when \(|p|\) is sufficiently large, where \(\gamma _{1} < 2\), \(\gamma _{2} < 4\) are positive constants and ψ̄ is a positive continuous function of \((x, z) \in \overline{M} \times \mathbb{R}\).

Definition 1.1

A function \(u \in C^{2} (M)\) is called admissible if \(\lambda (\nabla ^{2} u + \chi ) \in \Gamma \) in M.

Our main results are stated as follows.

Theorem 1.2

Suppose f satisfies (1.4)–(1.11) and there exists an admissible subsolution \(\underline{u} \in C^{2} (\overline{M})\) satisfying (1.12). Assume that \(\psi > 0\) satisfies (1.13)–(1.15), \(\varphi \in C^{4} (\partial M)\), ϕ is admissible in M and \(\phi = \varphi \) on ∂M. Then, there exists an admissible solution \(u \in C^{1,1} (\overline{M})\) of (1.1) and (1.2).

Furthermore, \(u \in C^{3,\alpha} (E)\) for any \(\alpha \in (0, 1)\) and the Hessian equation (1.3) holds in E, where \(E:= \{x \in M: u(x) < \phi (x)\}\).

Note that in Theorem 1.2, the function ϕ is assumed to be admissible. Under the homogeneous boundary condition \(\varphi \equiv 0\) and the additional assumption \(\chi \equiv 0\), this admissibility assumption can be removed.

Theorem 1.3

Assume that \(\chi \equiv 0\) in (1.1). Suppose f satisfies (1.4)–(1.11) and there exists an admissible subsolution \(\underline{u} \in C^{2} (\overline{M})\) satisfying (1.12) with \(\varphi \equiv 0\). Assume that \(\psi > 0\) satisfies (1.13)–(1.15), \(\varphi \equiv 0\) and \(\phi \equiv 0\) on ∂M. Then, there exists an admissible solution \(u \in C^{1,1} (\overline{M})\) of (1.1) and (1.2); moreover, \(u \in C^{3,\alpha} (E)\) for any \(\alpha \in (0, 1)\) and u satisfies (1.3) in E.

Typical examples are given by \(f = \sigma ^{1/k}_{k}\), \(1 \leq k \leq n\), defined on the cone \(\Gamma _{k} = \{\lambda \in \mathbb{R}^{n}: \sigma _{j} (\lambda ) > 0, j = 1, \ldots, k\}\), where \(\sigma _{k} (\lambda )\) are the elementary symmetric functions

$$\begin{aligned} \sigma _{k} (\lambda ) = \sum _{i_{1} < \cdots < i_{k}} \lambda _{i_{1}} \ldots \lambda _{i_{k}},\quad k = 1, \ldots, n. \end{aligned}$$
(1.16)
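
As a computational aside (purely illustrative and not used in the analysis; the helper names are ours), the elementary symmetric functions and the cone condition \(\lambda \in \Gamma _{k}\) are easy to evaluate numerically:

```python
from itertools import combinations
from math import prod

def sigma(j, lam):
    """Elementary symmetric function sigma_j(lam) as in (1.16)."""
    return sum(prod(c) for c in combinations(lam, j))

def in_Gamma_k(lam, k):
    """Cone condition: sigma_j(lam) > 0 for j = 1, ..., k."""
    return all(sigma(j, lam) > 0 for j in range(1, k + 1))

# Example with n = 3: one negative entry is allowed in Gamma_2 but not in Gamma_3.
lam = (3.0, 2.0, -0.5)
print([sigma(j, lam) for j in range(1, 4)])    # [4.5, 3.5, -3.0]
print(in_Gamma_k(lam, 2), in_Gamma_k(lam, 3))  # True False
```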

Other interesting examples satisfying (1.4)–(1.11) (see [13]) are

$$\begin{aligned} f (\lambda ) = \sigma _{k}^{1/k} (\mu _{1}, \ldots, \mu _{n}), \end{aligned}$$
(1.17)

defined on the cone \(\Gamma = \{\lambda \in \mathbb{R}^{n}: (\mu _{1}, \ldots, \mu _{n}) \in \Gamma _{k}\}\), where \(\mu _{i}\) are defined by

$$\begin{aligned} \mu _{i} = \sum_{j \neq i} \lambda _{j},\quad i = 1, \ldots, n. \end{aligned}$$

It is an interesting question whether we can establish the a priori second-order estimates without the condition (1.13). We note that such a condition is necessary in general (see [11]). It is a longstanding problem to establish global \(C^{2}\) estimates for the k-Hessian equation

$$\begin{aligned} \sigma _{k} \bigl(\lambda \bigl(D^{2} u\bigr)\bigr) = \psi (x, u, Du) \end{aligned}$$

without the condition (1.13). The cases \(k=2\), \(k=n-1\), and \(k=n-2\) were resolved by Guan–Ren–Wang [11], Ren–Wang [20], and Ren–Wang [21], respectively, while the problem remains open for general k. Chu–Jiao [5] considered the case (1.17) and established the curvature estimates without the condition (1.13). Jiao–Liu [13] studied the corresponding Dirichlet problem. It is of interest to ask whether the above methods can be applied to the related obstacle problem (1.1).

Given a function \(v: \Omega \rightarrow \mathbb{R}\) on a domain \(\Omega \subset \mathbb{R}^{n}\), let \(M_{v}:= \{(x, v(x)): x \in \Omega \}\) denote the graphic hypersurface defined by v. Then, the Gauss curvature of \(M_{v}\) is

$$\begin{aligned} K (M_{v}) = \frac{\det D^{2} v}{(1+ \vert Dv \vert ^{2})^{(n+2)/2}}. \end{aligned}$$
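
As a quick illustration of this formula (recorded only as a sanity check), take the paraboloid \(v (x) = \frac{1}{2} \vert x \vert ^{2}\): then \(D^{2} v = I\) and \(Dv = x\), so

$$\begin{aligned} K (M_{v}) = \bigl(1+ \vert x \vert ^{2}\bigr)^{-(n+2)/2}, \end{aligned}$$

which equals 1 at the origin and decays to zero as \(\vert x \vert \rightarrow \infty \).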

A classical problem in differential geometry is to find a convex graphic hypersurface with prescribed Gauss curvature K, which is equivalent to solving the Monge–Ampère equation

$$\begin{aligned} \det D^{2} u = K (x, u) \bigl(1+ \vert Du \vert ^{2}\bigr)^{(n+2)/2}. \end{aligned}$$
(1.18)

It is also of interest to find hypersurfaces with prescribed Gauss curvature under an obstacle. Such a problem is equivalent to an obstacle problem for Monge–Ampère equations. Xiong–Bao [25] proved the \(C^{1,1}\) regularity under the condition that the obstacle function is strictly larger than the boundary data. A similar question can be asked if the Gauss curvature is replaced with other kinds of curvatures, such as the mean curvature [4]. The following two theorems can be regarded as applications of Theorem 1.2 and Theorem 1.3.

Theorem 1.4

Let Ω be a uniformly convex bounded domain in \(\mathbb{R}^{n}\). Given a positive function \(K (x, z) \in C^{2} (\overline{\Omega}\times \mathbb{R})\) for which there exists a positive constant A such that

$$\begin{aligned} K_{z} (x, z) \geq - A K (x, z), \quad\textit{for all } (x, z) \in \overline{\Omega}\times \mathbb{R} \end{aligned}$$
(1.19)

and a piece of a uniformly convex graphic hypersurface \(M_{\phi}\), suppose there exists a uniformly convex graphic hypersurface \(M_{\underline{u}}\) under \(M_{\phi}\) such that the Gauss curvature of \(M_{\underline{u}}\) satisfies

$$\begin{aligned} K (M_{\underline{u}}) \bigl(x, \underline{u} (x)\bigr) \geq K \bigl(x, \underline{u} (x)\bigr)\quad \textit{for } x \in \overline{\Omega} \end{aligned}$$
(1.20)

and \(\underline{u} = \phi \) on ∂Ω. Then, there exists a \(C^{1,1}\) graphic hypersurface \(M_{u}\) under \(M_{\phi}\) with the same boundary such that \(K (M_{u}) \geq K (x, u)\) in Ω and \(K (M_{u}) = K (x, u)\) in \(E:= \{x \in \Omega: u(x) < \phi (x)\}\).

Theorem 1.5

Suppose \(K (x, z) \in C^{2} (\overline{\Omega}\times \mathbb{R})\) is positive and satisfies (1.19), and the graphic hypersurface \(M_{\phi}\) has constant boundary value. Suppose further that there exists a uniformly convex graphic hypersurface \(M_{\underline{u}}\) under \(M_{\phi}\) satisfying (1.20) and \(\underline{u} = \phi \) on ∂Ω. Then, there exists a \(C^{1,1}\) graphic hypersurface \(M_{u}\) under \(M_{\phi}\) with the same constant boundary value such that \(K (M_{u}) \geq K (x, u)\) in Ω and \(K (M_{u}) = K (x, u)\) in E.

Other applications of the obstacle problem for Hessian equations can be found in [2], [4], [15], [19], [22], and so on. The reader is referred to [1] for more applications and background of (1.1).

Similar problems were studied in [14], [1], and [12] under various conditions. In this work, we are mainly concerned with the boundary estimates for second-order derivatives. The main difficulty arises from the presence of the disturbance term \(\beta _{\epsilon}\) in (2.1), which is also why the conditions (1.9)–(1.11) are needed.

The obstacle problem for Monge–Ampère equations (when \(f = \sigma _{n}^{1/n}\)) was studied extensively, see [2], [16], [17], [22], and [25] for examples. For the obstacle problem of Hessian equations on Riemannian manifolds, the reader is referred to [1], [12], and [14]. We refer the reader to [6], [8], [10], [18], and [24] for the study of Hessian-type equations on Riemannian manifolds.

In Sect. 2, we provide the general idea of the proofs of Theorems 1.2 and 1.3, for which we introduce an approximating problem using a penalization technique. Section 3 is devoted to the boundary estimates for second-order derivatives of the solution of the approximating problem.

2 Preliminaries

As in [14] and [25], we consider the singular perturbation problem

$$\begin{aligned} \textstyle\begin{cases} f (\lambda (\nabla ^{2} u + \chi )) = \psi (x, u, \nabla u) + \beta _{\epsilon }(u - \phi ) & \text{in } M, \\ u = \varphi & \text{on } \partial M, \end{cases}\displaystyle \end{aligned}$$
(2.1)

where the penalty function \(\beta _{\epsilon}\) is defined by

$$\begin{aligned} \beta _{\epsilon }(z) = \textstyle\begin{cases} 0, & z \leq 0, \\ z^{3} / \epsilon, & z > 0, \end{cases}\displaystyle \end{aligned}$$

for \(\epsilon \in (0, 1)\). Obviously, \(\beta _{\epsilon }\in C^{2} (\mathbb{R})\) satisfies

$$\begin{aligned} \begin{aligned} & \beta _{\epsilon}, \beta '_{\epsilon}, \beta ''_{\epsilon } \geq 0; \\ & \beta _{\epsilon }(z) \rightarrow \infty \quad\text{as } \epsilon \rightarrow 0^{+}, \text{ whenever } z > 0; \\ & \beta _{\epsilon }(z) = 0, \quad\text{whenever } z \leq 0. \end{aligned} \end{aligned}$$
(2.2)
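
To make the penalization concrete, the following short Python sketch (purely illustrative; the test values are ours) evaluates \(\beta _{\epsilon}\) and its first two derivatives and checks the behavior recorded in (2.2).

```python
def beta(z, eps):
    """Penalty function beta_eps(z) = z^3/eps for z > 0 and 0 otherwise."""
    return z**3 / eps if z > 0 else 0.0

def beta_prime(z, eps):
    """First derivative: 3 z^2/eps for z > 0, 0 otherwise (continuous at z = 0)."""
    return 3 * z**2 / eps if z > 0 else 0.0

def beta_second(z, eps):
    """Second derivative: 6 z/eps for z > 0, 0 otherwise (continuous at z = 0)."""
    return 6 * z / eps if z > 0 else 0.0

# beta_eps and its derivatives are nonnegative and vanish for z <= 0 ...
print(beta(-1.0, 0.1), beta_prime(0.0, 0.1), beta_second(0.0, 0.1))  # 0.0 0.0 0.0
# ... and beta_eps(z) blows up as eps -> 0+ for any fixed z > 0, as in (2.2).
print([beta(0.5, eps) for eps in (1e-1, 1e-2, 1e-3)])  # [1.25, 12.5, 125.0]
```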

Since \(\underline{u} \leq \phi \), \(\underline{u}\) is also a subsolution to (2.1). Let \(u_{\epsilon }\in C^{3} (\overline{M}) \cap C^{4} (M)\) be an admissible solution of (2.1) with \(u_{\epsilon }\geq \underline{u}\). We shall show that there exists a constant C independent of ϵ such that

$$\begin{aligned} \Vert u_{\epsilon} \Vert _{C^{2} (\overline{M})} \leq C \end{aligned}$$
(2.3)

for small ϵ.

The \(C^{0}\) estimates can be easily derived from the fact that \(\Gamma \subset \Gamma _{1}\) and \(u \geq \underline{u}\). The following lemma is crucial for our estimates, and its proof can be found in [1] (see [25] for the case of the Monge–Ampère equation). For completeness, we provide a proof here.

Lemma 2.1

There exists a positive constant \(c_{2}\) independent of ϵ such that

$$\begin{aligned} 0 \leq \beta _{\epsilon }(u_{\epsilon }- \phi ) \leq c_{2} \quad\textit{on } \overline{M}. \end{aligned}$$
(2.4)

Proof

We consider the maximum of \(u_{\epsilon }- \phi \) on \(\overline{M}\). We may assume it is achieved at an interior point \(x_{0} \in M\) since \(u_{\epsilon }- \phi = \varphi - \phi = 0\) on ∂M. We have, at \(x_{0}\),

$$\begin{aligned} \nabla (u_{\epsilon }- \phi ) = 0 \end{aligned}$$
(2.5)

and

$$\begin{aligned} \begin{aligned} \nabla ^{2} u_{\epsilon } \leq \nabla ^{2} \phi . \end{aligned} \end{aligned}$$
(2.6)

It follows that, at \(x_{0}\),

$$\begin{aligned} \begin{aligned} 0 \leq {}& \beta _{\epsilon }(u_{\epsilon }- \phi ) = f \bigl(\lambda \bigl( \nabla ^{2} u_{\epsilon }+ \chi \bigr) \bigr) - \psi (x, u_{\epsilon}, \nabla \phi ) \\ \leq {}& f \bigl(\lambda \bigl(\nabla ^{2} \phi + \chi \bigr)\bigr) - \psi (x, u_{\epsilon}, \nabla \phi ) \leq c_{2} \end{aligned} \end{aligned}$$

for some positive constant \(c_{2}\) depending only on \(\|\phi \|_{C^{2} (\overline{M})}\) and (2.4) holds. □

After establishing the estimate (2.3), we can find a subsequence \(u_{\epsilon _{k}}\) and a function \(u \in C^{1,1} (\overline{M})\) such that

$$\begin{aligned} u_{\epsilon _{k}} \rightarrow u \quad\text{in } C^{1,\alpha} (\bar{M}), \forall \alpha \in (0, 1), \text{ as } \epsilon _{k} \rightarrow 0. \end{aligned}$$

Then, as in [25], we see that u is an admissible solution of (1.1) and (1.2). The fact that \(u \in C^{3, \alpha} (E)\) and satisfies (1.3) in E follows from the Evans–Krylov theory.

The \(C^{1}\) bound under conditions (1.8) and (1.15) was derived in [14], where it was also shown how to establish the estimates for second-order derivatives from their bound on the boundary. This paper will therefore focus on the estimates for second-order derivatives on the boundary.

Let \(u \in C^{4} (\overline{M})\) be an admissible function. For simplicity we shall use the notation \(U = \chi + \nabla ^{2} u\) and, under an orthonormal local frame \(e_{1}, \ldots, e_{n}\),

$$\begin{aligned} &U_{ij} \equiv U (e_{i}, e_{j}) = \chi _{ij} + \nabla _{ij} u, \\ & \nabla _{k} U_{ij} \equiv \nabla U (e_{i}, e_{j}, e_{k}) = \nabla _{k} \chi _{ij} + \nabla _{kij} u. \end{aligned}$$
(2.7)

Let F be the function defined by

$$\begin{aligned} F (h) = f \bigl(\lambda (h)\bigr) \end{aligned}$$

for a \((0,2)\)-tensor h on M. Equation (2.1) is therefore written in the form

$$\begin{aligned} F (U) = \psi (x, u, \nabla u) + \beta _{\epsilon }(u-\phi ). \end{aligned}$$
(2.8)

Following the literature we denote throughout this paper

$$\begin{aligned} F^{ij} = \frac{\partial F}{\partial h_{ij}} (U), \qquad F^{ij, kl} = \frac{\partial ^{2} F}{\partial h_{ij} \partial h_{kl}} (U) \end{aligned}$$

under an orthonormal local frame \(e_{1}, \ldots, e_{n}\). The matrix \(\{F^{ij}\}\) has eigenvalues \(f_{1}, \ldots, f_{n}\) and is positive definite by assumption (1.4), while (1.5) implies that F is a concave function of \(U_{ij}\) (see [3]). Moreover, when \(\{U_{ij}\}\) is diagonal, so is \(\{F^{ij}\}\). We can derive from (1.4)–(1.6) that

$$\begin{aligned} \sum_{i} f_{i} (\lambda ) \lambda _{i} \geq 0\quad \text{for any } \lambda \in \Gamma. \end{aligned}$$
(2.9)
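
For the reader's convenience, here is a short argument for (2.9) (a sketch of a standard fact, using only (1.5) and (1.6)): fix \(\lambda \in \Gamma \) and set \(g (t):= f (t \lambda )\) for \(t > 0\), which is well defined since Γ is a cone. Then g is concave by (1.5) and positive on \((0, \infty )\) by (1.6). If \(g' (t_{0}) < 0\) for some \(t_{0} > 0\), concavity would give

$$\begin{aligned} g (t) \leq g (t_{0}) + g' (t_{0}) (t - t_{0}) \rightarrow - \infty \quad\text{as } t \rightarrow + \infty, \end{aligned}$$

contradicting \(g > 0\). Hence \(g' (t) \geq 0\) for all \(t > 0\), and taking \(t = 1\) gives \(\sum_{i} f_{i} (\lambda ) \lambda _{i} = g' (1) \geq 0\).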

We need the following lemmas, which were proved in [7].

Lemma 2.2

Let \(A = \{A_{ij}\} \in \mathcal{S}^{n\times n}\) with \(\lambda (A) = (\lambda _{1}, \ldots, \lambda _{n}) \in \Gamma \) and \(F^{ij} = \frac{\partial F (A)}{\partial A_{ij}}\) with eigenvalues \(f_{1}, \ldots, f_{n}\), where \(\mathcal{S}^{n\times n}\) is the space of all symmetric matrices. There exists an index r such that

$$\begin{aligned} \sum_{\beta \leq n-1} F^{ij} A_{i \beta} A_{\beta j} \geq \frac{1}{2} \sum _{i \neq r} f_{i} \lambda _{i}^{2}. \end{aligned}$$
(2.10)

Lemma 2.3

For any index r and \(\epsilon > 0\), there exists a positive constant C depending only on n such that

$$\begin{aligned} \sum f_{i} \vert \lambda _{i} \vert \leq \epsilon \sum_{i \neq r} f_{i} \lambda _{i}^{2} + \frac{C}{\epsilon} \sum f_{i} + Q (r), \end{aligned}$$
(2.11)

where \(Q (r) = f (\lambda ) - f (1, \ldots, 1)\) if \(\lambda _{r} \geq 0\) and \(Q (r) = 0\) if \(\lambda _{r} < 0\).

In the following section, we will drop the subscript ϵ for convenience.

3 Estimates for second-order derivatives on the boundary

In this section, we establish the boundary estimates for second-order derivatives of the solution of (2.1). Fix an arbitrary point \(x_{0} \in \partial M\). We choose smooth orthonormal local frames \(e_{1}, \ldots, e_{n}\) around \(x_{0}\) such that when restricted to ∂M, \(e_{n}\) is normal to ∂M.

Let \(\rho (x)\) denote the distance from x to \(x_{0}\),

$$\begin{aligned} \rho (x) \equiv \operatorname{dist}_{M^{n}} (x, x_{0}), \end{aligned}$$

and \(M_{\delta} = \{x \in M: \rho (x) < \delta \}\). Since ∂M is smooth we may assume the distance function to ∂M

$$\begin{aligned} d (x) \equiv \operatorname{dist} (x, \partial M) \end{aligned}$$

is smooth in \(M_{\delta _{0}}\) for fixed \(\delta _{0} > 0\) sufficiently small (depending only on the curvature of M and the principal curvatures of ∂M). Since \(\nabla _{ij} \rho ^{2} (x_{0}) = 2 \delta _{ij}\), we may assume \(\rho ^{2}\) is smooth in \(M_{\delta _{0}}\) and

$$\begin{aligned} \{\delta _{ij}\} \leq \bigl\{ \nabla _{ij} \rho ^{2}\bigr\} \leq 3 \{\delta _{ij} \} \quad\text{in } M_{\delta _{0}}. \end{aligned}$$
(3.1)
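
To fix ideas (a purely illustrative remark): in the Euclidean case \(\rho ^{2} (x) = \vert x - x_{0} \vert ^{2}\), so

$$\begin{aligned} \nabla _{ij} \rho ^{2} = 2 \delta _{ij} \quad\text{everywhere}, \end{aligned}$$

and (3.1) holds with room to spare; on a general Riemannian manifold the two-sided bound (3.1) is obtained from the continuity of \(\nabla ^{2} \rho ^{2}\), uniformly in \(x_{0} \in \partial M\) by compactness.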

Since \(u - \underline{u} = 0\) on ∂M we have

$$\begin{aligned} \nabla _{\alpha \beta} (u - \underline{u}) = - \nabla _{n} (u - \underline{u}) \varPi (e_{\alpha}, e_{\beta}),\quad \forall 1 \leq \alpha, \beta < n \text{ on $\partial M$} , \end{aligned}$$
(3.2)

where Π denotes the second fundamental form of ∂M. Therefore,

$$\begin{aligned} \bigl\vert \nabla _{\alpha \beta} u (x_{0}) \bigr\vert \leq C, \quad\text{for } 1 \leq \alpha, \beta \leq n-1. \end{aligned}$$
(3.3)

Next, we establish the estimate

$$\begin{aligned} \bigl\vert \nabla _{\alpha n} u (x_{0}) \bigr\vert \leq C\quad \text{for } \alpha \leq n - 1. \end{aligned}$$
(3.4)

Define the linear operator L by

$$\begin{aligned} L w:= F^{ij} \nabla _{ij} w - \psi _{p_{k}} \nabla _{k} w - \beta _{ \epsilon}' (u - \phi ) w, \quad\text{for } w \in C^{2} (M). \end{aligned}$$
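
For orientation (this special case is not used in what follows), when \(f = \sigma _{1}\) we have \(F^{ij} = \delta ^{ij}\) under an orthonormal frame, so L reduces to

$$\begin{aligned} L w = \Delta w - \psi _{p_{k}} \nabla _{k} w - \beta '_{\epsilon }(u - \phi ) w; \end{aligned}$$

in particular, the zeroth-order coefficient \(- \beta '_{\epsilon }(u - \phi )\) is nonpositive, which is what allows the maximum principle to be applied to L below.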

We first need to construct a barrier as in Lemma 6.2 of [6].

Lemma 3.1

Let

$$\begin{aligned} v:= u - \underline{u} + t d - N d^{2}. \end{aligned}$$

Then, there exist positive constants t, δ sufficiently small and N sufficiently large such that

$$\begin{aligned} L v \leq - \epsilon _{0} \biggl(1 + \sum _{i} F^{ii} \biggr) \end{aligned}$$
(3.5)

and

$$\begin{aligned} v \geq 0 \end{aligned}$$
(3.6)

in \(M_{\delta}\) for some uniform constant \(\epsilon _{0} > 0\).

Proof

First, there exists a positive constant \(\theta _{0}\) such that \(\underline{u} - \theta _{0} \rho ^{2}\) is also admissible. By (2.4) and the concavity of F, we have

$$\begin{aligned} F^{ij} \nabla _{ij} (u - \underline{u}) &\leq - \theta _{0} \sum_{i} F^{ii} - F \bigl(\nabla ^{2} \underline{u} - \theta _{0} g + \chi \bigr) + F \bigl( \nabla ^{2} u + \chi \bigr) \\ &= - \theta _{0} \sum_{i} F^{ii} - F \bigl(\nabla ^{2} \underline{u} - \theta _{0} g + \chi \bigr) + \psi + \beta _{ \epsilon} \\ &\leq - \theta _{0} \sum_{i} F^{ii} + C, \end{aligned}$$

where the constant C depends on \(\|u\|_{C^{1} (\overline{M})}\) and the constant \(c_{2}\) in (2.4). Recall that \(f_{i} = \frac{\partial f}{\partial \lambda _{i}}\), where \(\lambda = \lambda (\nabla ^{2} u + \chi )\) for \(i = 1, \ldots, n\). Without loss of generality, we may assume \(f_{n} = \min_{i} \{f_{i}\}\). Next, since \(|\nabla d| \equiv 1\) near ∂M, we have

$$\begin{aligned} F^{ij} \nabla _{ij} \bigl(d^{2}\bigr) \geq f_{n} + 2 d F^{ij} \nabla _{ij} d \geq f_{n} - C \delta \sum_{i} F^{ii} \quad\text{in } M_{\delta}, \end{aligned}$$

for δ sufficiently small. It follows that

$$\begin{aligned} L v + \beta '_{\epsilon }(u - \phi ) v \leq - \theta _{0} \sum _{i} F^{ii} - N f_{n} + C (\delta N + t) \biggl(1 + \sum_{i} F^{ii} \biggr) \end{aligned}$$

in \(M_{\delta}\). By (1.7), we have

$$\begin{aligned} \frac{\theta _{0}}{4} \sum_{i} F^{ii} + N f_{n} \geq \frac{n \theta _{0}}{4} (N f_{1} \cdot \cdots \cdot f_{n})^{1/n} \geq \frac{n c_{0} \theta _{0} N^{1/n}}{4}. \end{aligned}$$

Thus, we can choose N sufficiently large and t, δ sufficiently small such that

$$\begin{aligned} L v + \beta '_{\epsilon }(u - \phi ) v \leq - \frac{\theta _{0}}{2} \sum _{i} F^{ii} - c_{3} N^{1/n}. \end{aligned}$$

We may further make δ sufficiently small such that \(v \geq 0\) in \(M_{\delta}\). Since \(\beta '_{\epsilon }\geq 0\) and \(v \geq 0\), we obtain (3.5). □

From formula \((4.7)\) in [7] and by differentiating equation (2.1), we have

$$\begin{aligned} \bigl\vert L \nabla _{k} (u - \phi ) \bigr\vert \leq C \biggl(1 + \sum_{i} F^{ii} + \sum _{i} f_{i} \vert \lambda _{i} \vert \biggr), \quad\text{for } 1 \leq k \leq n, \end{aligned}$$
(3.7)

where C is a positive constant depending only on \(\|u\|_{C^{1} (\overline{M})}\), \(\|\phi \|_{C^{3} (\overline{M})}\) and \(\|\psi \|_{C^{1}}\). Similar to formula \((4.9)\) in [7], by Lemma 2.2, we find that

$$\begin{aligned} \begin{aligned} L \biggl(\sum _{\beta \leq n-1} \bigl(\nabla _{\beta }(u - \phi ) \bigr)^{2} \biggr) \geq {}& \sum_{\beta \leq n-1} F^{ij} U_{\beta i} U_{\beta j} - C \biggl(1 + \sum _{i} F^{ii} + \sum_{i} f_{i} \vert \lambda _{i} \vert \biggr) \\ & {}+ \beta '_{\epsilon }\sum_{\beta \leq n-1} \bigl(\nabla _{\beta }(u - \phi )\bigr)^{2} \\ \geq {}& \frac{1}{2} \sum_{i \neq r} f_{i} \lambda _{i}^{2} - C \biggl(1 + \sum _{i} F^{ii} + \sum _{i} f_{i} \vert \lambda _{i} \vert \biggr) \end{aligned} \end{aligned}$$
(3.8)

for some index \(1 \leq r \leq n\). Let

$$\begin{aligned} \varPsi = A_{1} v + A_{2} \rho ^{2} - A_{3} \sum_{\beta \leq n-1} \bigl\vert \nabla _{\beta }(u - \phi ) \bigr\vert ^{2} \end{aligned}$$
(3.9)

as in [7]. Combining (2.11), (3.7), and (3.8), we can choose \(A_{1} \gg A_{2} \gg A_{3} \gg 1\) such that

$$\begin{aligned} L \bigl(\varPsi \pm \nabla _{\alpha }(u - \phi )\bigr) \leq 0\quad \text{in } M_{ \delta } \end{aligned}$$

and

$$\begin{aligned} \varPsi \pm \nabla _{\alpha }(u - \phi ) \geq 0 \quad\text{on } \partial M_{ \delta } \end{aligned}$$

for any index \(1 \leq \alpha \leq n-1\). Then, by the maximum principle, we have

$$\begin{aligned} \varPsi \pm \nabla _{\alpha }(u - \phi ) \geq 0 \quad\text{on } \overline{M}_{\delta}. \end{aligned}$$

Since

$$\begin{aligned} \varPsi \pm \nabla _{\alpha }(u - \phi ) = 0 \quad\text{at } x_{0} \end{aligned}$$

we obtain (3.4).

Since \(\Gamma \subset \Gamma _{1}\), we have \(\Delta u + \mathrm{tr} (\chi ) > 0\) in M, so it suffices to establish the upper bound

$$\begin{aligned} \nabla _{nn} u (x_{0}) \leq C. \end{aligned}$$
(3.10)

We first suppose ϕ is admissible in M. As in [7], following an idea of Trudinger [23] we prove that there are uniform constants \(c_{0}, R_{0}\) such that for all \(R > R_{0}\), \((\lambda ' [\{U_{\alpha \beta} (x_{0})\}], R) \in \Gamma \) and

$$\begin{aligned} f \bigl(\lambda ' \bigl[\bigl\{ U_{\alpha \beta} (x_{0})\bigr\} \bigr], R\bigr) \geq \psi [u] (x_{0}) + \beta _{\epsilon }(x_{0}) + c_{0}, \end{aligned}$$
(3.11)

which implies (3.10) by Lemma 1.2 in [3], where \(\lambda ' [\{U_{\alpha \beta}\}] = (\lambda '_{1}, \ldots, \lambda '_{n-1})\) denote the eigenvalues of the \((n-1) \times (n-1)\) matrix \(\{U_{\alpha \beta}\}\) (\(1 \leq \alpha, \beta \leq n-1\)). Denote

$$\begin{aligned} \tilde{m}_{R}:= \min_{x \in \partial M} f \bigl(\lambda ' \bigl[\bigl\{ U_{\alpha \beta} (x)\bigr\} \bigr], R\bigr). \end{aligned}$$

Suppose \(\tilde{m}_{R}\) is achieved at a point \(x_{0} \in \partial M\). Choose local orthonormal frames \(e_{1}, e_{2}, \ldots, e_{n}\) around \(x_{0}\) as before and assume \(\nabla _{n n} u (x_{0}) \geq \nabla _{n n} \phi (x_{0})\). Let \(\Phi _{ij}:= \nabla _{ij} \phi + \chi _{ij}\) and

$$\begin{aligned} \tilde{c}_{R}:= \min_{x \in \overline{M}_{\delta _{0}}} f \bigl(\lambda ' \bigl[ \bigl\{ \Phi _{\alpha \beta} (x) \bigr\} \bigr], R\bigr) \end{aligned}$$

for \(\delta _{0}\) sufficiently small such that \(e_{1}, \ldots, e_{n}\) are well defined in \(\overline{M}_{\delta _{0}}\). By (1.9) and the fact that ϕ is admissible, we see that

$$\begin{aligned} \lim_{R \rightarrow + \infty} \tilde{c}_{R} = + \infty. \end{aligned}$$
(3.12)

We wish to show \(\tilde{m}_{R} \rightarrow + \infty \) as \(R \rightarrow + \infty \). Without loss of generality we assume \(\tilde{m}_{R} < \tilde{c}_{R}/2\) (otherwise we are done by (3.12)).

For a symmetric \((n - 1) \times (n - 1)\) matrix \(\{r_{\alpha {\beta}}\}\) such that \((\lambda ' [\{r_{\alpha \beta}\}], R) \in \Gamma \), define

$$\begin{aligned} \tilde{F}[r_{\alpha \beta}]:= f \bigl(\lambda ' \bigl[ \{r_{\alpha \beta}\}\bigr], R\bigr) . \end{aligned}$$

Note that \(\tilde{F}\) is concave by (1.5). Let

$$\begin{aligned} \tilde{F}^{\alpha {\beta}}_{0} = \frac{\partial \tilde{F}}{\partial r_{\alpha \beta}} \bigl[U_{\alpha \beta} (x_{0})\bigr]. \end{aligned}$$

We find

$$\begin{aligned} \tilde{F}^{\alpha {\beta}}_{0} U_{\alpha {\beta}} - \tilde{F}^{ \alpha {\beta}}_{0} U_{\alpha {\beta}} (x_{0}) \geq \tilde{F}[U_{ \alpha {\beta}}] - \tilde{m}_{R} \geq 0\quad \text{on $\partial M$ near } x_{0}. \end{aligned}$$
(3.13)

By (3.2) we have on ∂M near \(x_{0}\),

$$\begin{aligned} U_{\alpha {\beta}} = \Phi _{\alpha {\beta}} - \nabla _{n} (u - \phi ) \sigma _{\alpha {\beta}}, \end{aligned}$$
(3.14)

where \(\sigma _{\alpha {\beta}} = \langle \nabla _{\alpha} e_{\beta}, e_{n} \rangle \); note that \(\sigma _{\alpha \beta} = \varPi (e_{\alpha}, e_{\beta})\) on ∂M. Define

$$\begin{aligned} Q = - \eta \nabla _{n} (u - \phi ) + \tilde{F}^{\alpha {\beta}}_{0} \Phi _{\alpha {\beta}} - \tilde{F}^{\alpha {\beta}}_{0} U_{\alpha { \beta}} (x_{0}), \end{aligned}$$

where \(\eta = \tilde{F}^{\alpha {\beta}}_{0} \sigma _{\alpha {\beta}}\). From (3.13) and (3.14) we see that \(Q (x_{0}) = 0\) and \(Q \geq 0\) on ∂M near \(x_{0}\). Furthermore, we have

$$\begin{aligned} \begin{aligned} Q &\geq - \eta \nabla _{n} (u - \phi ) + \tilde{F}[\Phi _{\alpha \beta}] - \tilde{F}\bigl[U_{\alpha \beta} (x_{0})\bigr] \\ &\geq - \eta \nabla _{n} (u - \phi ) + \tilde{c}_{R} - \tilde{m}_{R} \\ &\geq - \eta \nabla _{n} (u - \phi ) + \frac{\tilde{c}_{R}}{2} \quad\text{in } \overline{M}_{\delta _{0}}. \end{aligned} \end{aligned}$$
(3.15)

By (3.7) and (3.15), we have

$$\begin{aligned} \begin{aligned} L Q &\leq C \mathcal{F} \Bigl(1 + \sum F^{ii} + \sum f_{i} \vert \lambda _{i} \vert \Bigr) - \frac{\tilde{c}_{R}}{2} \beta '_{\epsilon} \\ &\leq C \mathcal{F} \Bigl(1 + \sum F^{ii} + \sum f_{i} \vert \lambda _{i} \vert \Bigr), \end{aligned} \end{aligned}$$
(3.16)

where

$$\begin{aligned} \mathcal{F}:= \sum_{\alpha \leq n-1} \tilde{F}^{\alpha \alpha}_{0}. \end{aligned}$$

Recall that Ψ is defined in (3.9). Choosing \(A_{1} \gg A_{2} \gg A_{3} \gg 1\) as before, we derive

$$\begin{aligned} \textstyle\begin{cases} L (\mathcal{F} \varPsi + Q) \leq 0 &\text{in $M_{\delta}$}, \\ \mathcal{F} \varPsi + Q \geq 0 &\text{on $\partial M_{\delta}$}. \end{cases}\displaystyle \end{aligned}$$
(3.17)

By the maximum principle, \(\mathcal{F} \varPsi + Q \geq 0\) in \(M_{\delta}\). Thus,

$$\begin{aligned} \nabla _{n} Q (x_{0}) \geq - \mathcal{F} \nabla _{n} \varPsi (x_{0}) \geq -C \mathcal{F}. \end{aligned}$$
(3.18)

By (1.11), we see that, at \(x_{0}\), \((\lambda ' (\sigma _{\alpha \beta}), \sqrt{R_{0}}) \in \Gamma \) for some \(R_{0}\) sufficiently large. Thus, there exists a uniform constant \(\epsilon _{0} > 0\) such that \((\lambda '({\sigma _{\alpha \beta }} - {\epsilon _{0}}{\delta _{\alpha \beta }}),\sqrt {R} ) \in \Gamma \) for all \(R \geq R_{0}\). From the concavity of \(\tilde{F}\) and (1.10), we find, at \(x_{0}\),

$$\begin{aligned} \begin{aligned} \sqrt{R} \tilde{F}_{0}^{\alpha \beta} \sigma _{\alpha \beta}& = \sqrt{R} \tilde{F}_{0}^{\alpha \beta} (\sigma _{\alpha \beta} - \epsilon _{0} \delta _{\alpha \beta}) - \tilde{F}_{0}^{\alpha \beta} U_{ \alpha \beta} (x_{0}) + \tilde{F}_{0}^{\alpha \beta} U_{\alpha \beta} (x_{0}) + \sqrt{R} \epsilon _{0} \mathcal{F} \\ &\geq \tilde{F}\bigl[\sqrt{R} (\sigma _{\alpha \beta} - \epsilon _{0} \delta _{\alpha \beta})\bigr] - \tilde{F}\bigl[U_{\alpha \beta} (x_{0}) \bigr] + \tilde{F}_{0}^{\alpha \beta} U_{\alpha \beta} (x_{0}) + \sqrt{R} \epsilon _{0} \mathcal{F} \\ &\geq f \bigl(\sqrt{R} \bigl(\lambda ' (\sigma _{\alpha \beta} - \epsilon _{0} \delta _{\alpha \beta}), \sqrt{R}\bigr)\bigr) - \tilde{F} \bigl[U_{\alpha \beta} (x_{0})\bigr] + \sqrt{R} \epsilon _{0} \mathcal{F} - C \mathcal{F} \\ &\geq f \bigl(\sqrt{R} \bigl(\lambda ' (\sigma _{\alpha \beta} - \epsilon _{0} \delta _{\alpha \beta}), \sqrt{R_{0}}\bigr) \bigr) - \tilde{m}_{R} + \frac{\sqrt{R}}{2} \epsilon _{0} \mathcal{F} \\ &\geq C (R) - \tilde{m}_{R} + \frac{\sqrt{R}}{2} \epsilon _{0} \mathcal{F}, \end{aligned} \end{aligned}$$

provided R is sufficiently large, where \(\lim_{R \rightarrow + \infty} C (R) = + \infty \). We may assume \(\tilde{m}_{R} \leq C(R)\) for otherwise we are done. It follows that, at \(x_{0}\),

$$\begin{aligned} \eta = \tilde{F}_{0}^{\alpha \beta} \sigma _{\alpha \beta} \geq \frac{\epsilon _{0}}{2} \mathcal{F}. \end{aligned}$$
(3.19)

Combining (3.18) and (3.19) we obtain

$$\begin{aligned} \nabla _{nn} u (x_{0}) \leq C. \end{aligned}$$

We have established an a priori upper bound for all eigenvalues of \(\{U_{ij} (x_{0})\}\). Consequently, \(\lambda [\{U_{ij} (x_{0})\}]\) is contained in a compact subset of Γ by (1.6), and therefore

$$\begin{aligned} \lim_{R \rightarrow + \infty} \tilde{m}_{R} = + \infty \end{aligned}$$

by (1.9). This proves (3.11) and the proof of (3.10) is complete.

We now consider the case \(\chi \equiv 0\) and \(\varphi \equiv 0\) on ∂M to prove Theorem 1.3. By [3] we have

$$\begin{aligned} \Delta u \geq \delta _{0} > 0 \end{aligned}$$
(3.20)

for some positive constant \(\delta _{0}\) depending only on \(\psi _{0} = \inf \psi > 0\). Let \(u_{0}\) be defined by the equation

$$\begin{aligned} \Delta u_{0} = \delta _{0} \quad\text{in } M \end{aligned}$$

with \(u_{0} = 0\) on ∂M. By the maximum principle and Hopf’s lemma, we see \(u_{0} < 0\) in M and \((u_{0})_{\nu }< 0\) on ∂M, where ν is the unit interior normal to ∂M. Since ∂M is compact, there exists a uniform constant \(\gamma _{1} > 0\) such that \((u_{0})_{\nu }\leq - \gamma _{1}\) on ∂M. By (3.20) and the maximum principle, we find that

$$\begin{aligned} u \leq u_{0} \quad\text{in } M \text{ and } u = u_{0} = 0 \text{ on } \partial M. \end{aligned}$$

It follows that

$$\begin{aligned} \nabla _{n} u (x_{0}) \leq \nabla _{n} (u_{0}) (x_{0}) \leq - \gamma _{1}. \end{aligned}$$
(3.21)
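
As a concrete illustration of the barrier \(u_{0}\) (only for orientation, under the simplifying assumption that M is the Euclidean unit ball \(B_{1} \subset \mathbb{R}^{n}\)): the solution of \(\Delta u_{0} = \delta _{0}\) in \(B_{1}\) with \(u_{0} = 0\) on \(\partial B_{1}\) is

$$\begin{aligned} u_{0} (x) = \frac{\delta _{0}}{2n} \bigl( \vert x \vert ^{2} - 1\bigr), \end{aligned}$$

whose interior normal derivative on \(\partial B_{1}\) equals \(- \delta _{0}/n\), so in this case one may take \(\gamma _{1} = \delta _{0}/n\) in (3.21).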

We find, at \(x_{0} \in \partial M\),

$$\begin{aligned} \nabla _{\alpha \beta} u = - \nabla _{n} u \varPi (e_{\alpha}, e_{ \beta}), \quad\text{for } 1 \leq \alpha, \beta \leq n-1. \end{aligned}$$

Since \(\underline{u} = 0\) on ∂M, we have, at \(x_{0}\),

$$\begin{aligned} \nabla _{\alpha \beta} \underline{u} = - \nabla _{n} \underline{u} \varPi (e_{\alpha}, e_{\beta}), \quad\text{for } 1 \leq \alpha, \beta \leq n-1. \end{aligned}$$

Therefore,

$$\begin{aligned} \nabla _{\alpha \beta} u = \frac{\nabla _{n} u}{\nabla _{n} \underline{u}} \nabla _{\alpha \beta} \underline{u}. \end{aligned}$$

By (3.21), we then find that the eigenvalues \(\lambda ' [\{\nabla _{\alpha \beta} u (x_{0})\}]\) of the \((n-1)\times (n-1)\) matrix \(\{\nabla _{\alpha \beta} u (x_{0})\}_{\alpha, \beta \leq n-1}\) belong to a compact subset of \(\Gamma '\), where \(\Gamma '\) denotes the projection of Γ onto \(\lambda ' = (\lambda _{1}, \ldots, \lambda _{n-1})\). By (1.9) and Lemma 1.2 of [3], we can prove (3.10).