1 Introduction and main results

In this work, we are interested in the local regularity theory for nonlinear complex degenerate elliptic equations of the form

$$\begin{aligned} F(D^2_{\mathbb {C}}u)=f, \end{aligned}$$
(1)

where \(D^2_{\mathbb {C}}u\) denotes the complex Hessian of u.

Such equations have been studied extensively in the literature, going back to the celebrated work [6] on the Dirichlet problem in the real case and its counterpart [17] for the complex setting. We recall that an important feature of degenerate equations is that, unlike for uniformly elliptic equations, \(C^{2,\alpha }\)-regularity may fail. Perhaps one of the most important equations of the form (1) is the complex Monge–Ampère equation:

$$\begin{aligned} \det (D^2_{\mathbb {C}}u)=f>0. \end{aligned}$$
(2)

The regularity of the solution of this equation has been studied by many authors using a variety of tools. It was first proved in the pioneering work [4] that the solution u to the Dirichlet problem in a ball B belongs to \(C^{1,1}(B) \cap C^0(\overline{B})\) provided that the right-hand side satisfies \(f \in C^2(\overline{B})\) and the boundary data is \(C^2(\partial B)\). Another important regularity result concerning the Dirichlet problem associated with (2) in a strictly pseudoconvex domain, established in [5], is the smoothness up to the boundary of its solution when the right-hand side, the boundary data and the domain are all smooth (the strict positivity of the right-hand side is also essential).

The local regularity of (2) (without boundary data) has also been studied. A sharp result was obtained in [2] by developing the methods of [24]: solutions \(u \in W^{2,p}_{{\mathrm {loc}}}\) of (2) with \(f \in C^{\infty }\) are necessarily \(C^{\infty }\) whenever \(p>n(n-1)\), and no smaller exponent p can be expected in general.

The local regularity of other equations of the form (1) has also been studied. Notably, in [9], a counterpart of [2] was proved for the so-called complex k-Hessian equation under the assumption that u belongs to \(W^{2,p}_{{\mathrm {loc}}}\) with \(p> n(k-1)\).

The goal of the present paper is to extend the approach of [2] and [9] to more general nonlinear complex degenerate elliptic equations. We will introduce simple conditions on the nonlinearity F to obtain general local regularity results, thus considerably broadening the field of application of this method.

In particular, we shall see that our results apply to the so-called complex k-Monge–Ampère equation or \(\mathrm {MA}_k\)-equation, \(k \in \left\{ 1,\ldots ,n\right\}\), that has recently received much attention [7, 8, 13, 23]:

$$\begin{aligned} \prod _{1 \le i_1<\ldots <i_k \le n} \left( \lambda _{i_1}(D^2_{\mathbb {C}}u)+\cdots +\lambda _{i_k}(D^2_{\mathbb {C}}u)\right) =f, \end{aligned}$$

where \(\lambda _1(D^2_{\mathbb {C}}u), \ldots , \lambda _n(D^2_{\mathbb {C}}u)\) denote the eigenvalues of \(D^2_{\mathbb {C}}u\). For \(k=1\), we have the Monge–Ampère equation \(\det (D^2_{\mathbb {C}}u)=f\) and for \(k=n\) this is the Poisson equation \(\Delta u=f\).
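As a quick illustration (a minimal numerical sketch, not taken from the references above; the helper names are ours), one can evaluate \(\mathrm {MA}_k\) directly from the eigenvalues of the complex Hessian and check the two extreme cases \(k=1\) (determinant) and \(k=n\) (trace):

```python
import itertools
import numpy as np

def ma_k(eigenvalues, k):
    """Product of all sums of k distinct eigenvalues (the MA_k operator)."""
    lam = np.asarray(eigenvalues, dtype=float)
    n = lam.size
    return np.prod([lam[list(S)].sum() for S in itertools.combinations(range(n), k)])

# Random Hermitian matrix standing in for the complex Hessian at a point.
rng = np.random.default_rng(0)
Z = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = (Z + Z.conj().T) / 2
lam = np.linalg.eigvalsh(A)

# k = 1 recovers the complex Monge-Ampere operator (det), k = n the Laplacian (trace).
assert np.isclose(ma_k(lam, 1), np.linalg.det(A).real)
assert np.isclose(ma_k(lam, 4), np.trace(A).real)
```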

Interior estimates for this equation in the real setting have been studied recently in [7]. In the complex setting this operator was discussed in [23] and, in the special case \(k=n-1\), also in [25]. It has been shown in [8] that this operator does not satisfy an integral comparison principle, which makes the associated potential theory much harder to develop. Finally, let us also mention that the Dirichlet problem associated with this operator was studied in [28] using a probabilistic approach.

The outline of the paper is as follows. After recalling some basic notations, we make precise in Sect. 1.2 what kind of nonlinear operators F are considered in this article and we present our main results. Their proofs are carried out in Sect. 2. Some examples of Hessian equations covered by such a framework are then given in Sect. 3. Finally, in Sect. 4 we detail the case study of the \(\mathrm {MA}_k\)-equation.

1.1 General notations

  • All along this work, \(\Omega \subset \mathbb {C} ^n\) (\(n \ge 1\)) is a nonempty open bounded connected subset.

  • Let \(\mathbb {H} ^n\) be the set of \(n \times n\) Hermitian matrices. The (ij)th entry of a matrix \(A \in \mathbb {H} ^n\) will be denoted by \(a_{i{\bar{j}}}\). We recall that \(\mathbb {H} ^n\) is a vector space of dimension \(n^2\) over the field \(\mathbb {R}\). The inner product on \(\mathbb {H} ^n\) is the standard Frobenius inner product:

    $$\begin{aligned} {\left\langle A , B \right\rangle } _{\mathbb {H} ^n}=\mathrm {Trace} \,(AB) \in \mathbb {R}, \quad A,B \in \mathbb {H} ^n. \end{aligned}$$

    The \(n \times n\) identity matrix will be denoted by \(\mathrm {Id}\).

    We will use the classical cones

    $$\begin{aligned} {\mathcal {C}}_n= \left\{ A \in \mathbb {H} ^n \quad \big | \quad A>0\right\} , \quad {\mathcal {C}}_1= \left\{ A \in \mathbb {H} ^n \quad \big | \quad \mathrm {Trace} \,(A)>0\right\} . \end{aligned}$$
  • The Fréchet derivative at \(A \in {\mathcal {C}}\) of a function \(F \in C^1({\mathcal {C}},\mathbb {R})\), defined on a nonempty open subset \({\mathcal {C}}\subset \mathbb {H} ^n\), will be denoted by \(DF(A) \in {\mathcal {L}}(\mathbb {H} ^n,\mathbb {R})\). It can be identified with the matrix \(\left( \frac{\partial F}{\partial a_{i{\bar{j}}}}(A)\right) _{1 \le i,j \le n} \in \mathbb {H} ^n\) through the formula

    $$\begin{aligned} DF(A) B=\sum _{1 \le i,j \le n} \frac{\partial F}{\partial a_{i{\bar{j}}}}(A) b_{i{\bar{j}}}, \quad B \in \mathbb {H} ^n. \end{aligned}$$
  • In this work, we use the following convention for ellipticity. We say that a function \(F:{\mathcal {C}}\longrightarrow \mathbb {R}\) is:

    • (degenerate) elliptic in \({\mathcal {C}}\) if

      $$\begin{aligned} A \le B \quad \Longrightarrow \quad F(A) \le F(B), \end{aligned}$$

      for every \(A,B \in {\mathcal {C}}\), where we recall that \(A \le B\) (resp. \(A<B\)) means that the Hermitian matrix \(B-A\) is nonnegative-definite (resp. positive-definite).

    • uniformly elliptic in \({\mathcal {C}}\) if there exist \(0<m \le M\) such that

      $$\begin{aligned} A \le B \quad \Longrightarrow \quad m\left\| B-A\right\| _{\mathbb {H} ^n} \le F(B)-F(A) \le M\left\| B-A\right\| _{\mathbb {H} ^n}, \end{aligned}$$

      for every \(A,B \in {\mathcal {C}}\).

    For instance, with these conventions the equation \(\Delta u=f\) is (uniformly) elliptic (it corresponds to the equation \(F(D^2_{\mathbb {C}}u)=f\) with \(F(A)=\mathrm {Trace} \,(A)\) and \({\mathcal {C}}=\mathbb {H} ^n\)).

  • The complex Hessian is denoted by \(D^2_{\mathbb {C}}u=\left( u_{i {\bar{j}}}\right) _{1 \le i,j \le n}\), where we use the standard notations \(u_j\) and \(u_{{\bar{j}}}\) to denote, respectively, \(\frac{\partial {u}}{\partial z_j}=\frac{1}{2}\left( \frac{\partial {u}}{\partial x_j}-i\frac{\partial {u}}{\partial y_j}\right)\) and \(\frac{\partial {u}}{\partial {\bar{z}}_j}=\frac{1}{2}\left( \frac{\partial {u}}{\partial x_j}+i\frac{\partial {u}}{\partial y_j}\right)\). The Laplacian is denoted by \(\Delta u=\sum _{j=1}^n u_{j {\bar{j}}}=\mathrm {Trace} \,(D^2_{\mathbb {C}}u) \in \mathbb {R}\). Note that it is the complex Laplacian, so it is one-fourth the real Laplacian.

  • Finally, we shall denote by \(C(n), C(n,\Omega )\), etc. a positive number that may change from line to line but that depends only on the quantities indicated between the brackets.

1.2 Main results

Throughout this work, we will always make the following assumptions:

  1. (a)

    Domain: \({\mathcal {C}}\subset \mathbb {H} ^n\) is a nonempty open convex cone such that \({\mathcal {C}}_n\subset {\mathcal {C}}\subset {\mathcal {C}}_1\).

  2. (b)

    Regularity: \(F \in C^1(\overline{{\mathcal {C}}},\mathbb {R})\).

  3. (c)

    Positivity: \(F>0\) in \({\mathcal {C}}\).

  4. (d)

    Homogeneity: F is positively homogeneous of degree \(d\in (0,+\infty )\) (i.e. \(F(\alpha A)=\alpha ^dF(A)\) for every \(\alpha >0\) and \(A \in {\mathcal {C}}\)).

  5. (e)

    Concavity: \(F^{1/d}\) is concave in \({\mathcal {C}}\).

  6. (f)

    Domination of the determinant: there exists \(C>0\) such that

    $$\begin{aligned} F(P)^{1/d} \ge C (\det (P))^{1/n}, \quad \forall P \in {\mathcal {C}}_n. \end{aligned}$$
    (3)

Some examples will be presented in Sect. 3.

The key assumption here is assumption (f), which will allow us to compare our general nonlinear equation \(F(D^2_{\mathbb {C}}u)=f\) with the complex Monge–Ampère equation and thus to use the recent uniform estimates obtained in [1].

The assumption (a) is quite standard (see e.g. [6, pp. 261–262]). Note that this excludes the case of the whole space \({\mathcal {C}}=\mathbb {H} ^n\).

It is important to point out that we cannot reduce to homogeneous functions of degree 1 by replacing F with \(F^{1/d}\), since this normalization is in general not regular up to the boundary (e.g. \(F(A)=\det (A)^{1/n}\) in \({\mathcal {C}}={\mathcal {C}}_n\)).

Finally, by concavity and homogeneity, we can check that F satisfies (3) for every \(A \in {\mathcal {C}}\) if and only if

$$\begin{aligned} F(A+P)^{1/d} \ge F(A)^{1/d}+C\left( \det P\right) ^{1/n}, \end{aligned}$$

for every \(A \in {\mathcal {C}}\) and \(P \in {\mathcal {C}}_n\). In particular, F is necessarily elliptic in \({\mathcal {C}}\) (but, in general, not uniformly elliptic). We refer, for instance, to Abja et al. [1, Remark 1.3] for more details.
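For the reader's convenience, one direction of this equivalence is a one-line superadditivity argument (a sketch; see [1, Remark 1.3] for the complete statement): writing \(G=F^{1/d}\), which is concave and positively homogeneous of degree 1, we have, for every \(A \in {\mathcal {C}}\), \(P \in {\mathcal {C}}_n\) and \(t \in (0,1)\),

$$\begin{aligned} G(A+P)=G\left( t\,\frac{A}{t}+(1-t)\,\frac{P}{1-t}\right) \ge t\,G\left( \frac{A}{t}\right) +(1-t)\,G\left( \frac{P}{1-t}\right) =G(A)+G(P), \end{aligned}$$

so that (3) yields \(G(A+P) \ge G(A)+C(\det P)^{1/n}\). Conversely, taking \(A=\varepsilon P\) with \(\varepsilon >0\) and simplifying recovers (3).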

The first result of the present paper is the following.

Theorem 1.1

Let \({\mathcal {C}}\) and F satisfy the above standing assumptions (a)–(f). Let \(f>0\) in \(\Omega\) with \(f^{1/d} \in C^{1,1}(\Omega )\). Let \(u \in W^{2,p}_{{\mathrm {loc}}}(\Omega )\) with \((D^2_{\mathbb {C}}u)(z) \in {\mathcal {C}}\) for a.e. \(z \in \Omega\) satisfy (almost everywhere)

$$\begin{aligned} F(D^2_{\mathbb {C}}u)=f \quad \text{ in } \Omega , \end{aligned}$$

and assume that

$$\begin{aligned} p>n\max \left\{ d-1,1\right\} . \end{aligned}$$
(4)

Then, for every nonempty open subset \(\omega \subset \subset \Omega\), we have \(\Delta u \in L^{\infty }(\omega )\) with

$$\begin{aligned} \sup _{\omega } \Delta u \le R, \end{aligned}$$

for some \(R>0\) depending only on \(n,p,d,\mathrm {dist} \,(\omega ,\partial \Omega ),\left\| D^2_{\mathbb {C}}u\right\| _{L^p(\Omega _\varepsilon )}\), \(\left\| \Delta (f^{1/d})\right\| _{L^{\infty }(\Omega _\varepsilon )}\) and \(\inf _{\Omega _\varepsilon } f\) if \(d>1\) (resp. \(\sup _{\Omega _\varepsilon } f\) if \(d<1\)), where \(\Omega _\varepsilon = \left\{ z \in \Omega \quad \big | \quad \mathrm {dist} \,(z,\partial \Omega )>\varepsilon \right\}\) and \(\varepsilon >0\) is some number depending only on \(\mathrm {dist} \,(\omega ,\partial \Omega )\).

Theorem 1.1 generalizes [2, Theorem] and [9, Theorem 4.1], where the particular cases of the complex Monge–Ampère equation and the complex k-Hessian equation were considered.

An important consequence of Theorem 1.1 and of the classical theory for uniformly elliptic real equations from [20] will be the following local regularity result:

Corollary 1.2

Under the framework of Theorem 1.1, assume in addition that F satisfies the following property:

  1. (g)

    Sets of uniform ellipticity: there exists \(R_0>0\) such that, for every \(R>R_0\), F is uniformly elliptic in \({\mathcal {C}}(R)\), where

    $$\begin{aligned} {\mathcal {C}}(R)= \left\{ A \in {\mathcal {C}}\quad \big | \quad \mathrm {Trace} \,(A)<R, \quad \frac{1}{R}<F(A)\right\} . \end{aligned}$$
    (5)

Then, we have the property

$$\begin{aligned} F \in C^{\infty }({\mathcal {C}}), \quad f \in C^{\infty }(\Omega ) \quad \Longrightarrow \quad u \in C^{\infty }(\Omega ). \end{aligned}$$
(6)

Remark 1.3

Note that \({\mathcal {C}}(R)\) is convex because \(F^{1/d}\) is concave. However, it is never a cone and it does not contain \({\mathcal {C}}_n\). Note as well that \(\lim _{R \rightarrow +\infty } {\mathcal {C}}(R)={\mathcal {C}}\).

We also point out that we do not need to assume that \(\overline{{\mathcal {C}}(R)}\) is compact. For instance, \({\mathcal {C}}(R)\) is not bounded for the simplest example \(F(A)=\mathrm {Trace} \,(A)\) with \({\mathcal {C}}={\mathcal {C}}_1\) (\(n \ge 2\)). However, this is a rather pathological case and we will see in Proposition 3.1 that \({\mathcal {C}}(R)\) is always bounded whenever

$$\begin{aligned} \partial {\mathcal {C}}\cap \partial {\mathcal {C}}_1= \left\{ 0\right\} . \end{aligned}$$

Remark 1.4

Whether the condition (4) yields the smallest value of p to induce the \(C^{\infty }\)-regularity (6) may depend on the equation itself. This condition is, for instance, sharp in the case of the Monge–Ampère equation, see [2, p. 412], but it is still an open problem for other equations such as the complex k-Hessian equation and the k-Monge–Ampère equation (see Sect. 4.3).

Remark 1.5

It is unclear whether similar local regularity results hold on Hermitian manifolds as well (see [26] for results related to Corollary 1.2 in this context when \(u \in C^2\)). It would be interesting to investigate this problem.

The rest of the article is organized as follows. In the next section, we prove our main results. In Sect. 3, we present some examples of Hessian equations that are covered by our framework. Finally, in Sect. 4, we discuss a bit more in detail the case of the k-Monge–Ampère equation.

2 Proofs of the main results

The proof of Theorem 1.1 is inspired by the proof of [24, Theorem 1] and by the ideas of [2] (see also [9, Section 4]).

We introduce the normalization

$$\begin{aligned} G(A)=(F(A))^{1/d}, \quad A \in {\mathcal {C}}. \end{aligned}$$

Clearly, G is elliptic in \({\mathcal {C}}\). The equation \(F(D^2_{\mathbb {C}}u)=f\) then becomes

$$\begin{aligned} G(D^2_{\mathbb {C}}u)=f^{1/d}. \end{aligned}$$

Throughout Sect. 2, the notation \(L_u\) is reserved exclusively for the linearization of G about \(A=(D^2_{\mathbb {C}}u)(z)\):

$$\begin{aligned} (L_u w)(z)=\sum _{1 \le i,j \le n} \frac{\partial G}{\partial a_{i{\bar{j}}}}((D^2_{\mathbb {C}}u)(z)) w_{i{\bar{j}}}(z). \end{aligned}$$
(7)

Note that \(L_u\) is (degenerate) elliptic and that its coefficients are Lebesgue measurable (as composition of a continuous function with a Lebesgue measurable function).

2.1 Preliminaries

2.1.1 Geometric configuration

Let \(\omega \subset \subset \Omega\) and set \(\delta =\mathrm {dist} \,(\omega ,\partial \Omega )>0\). For \(z_0 \in \omega\), let

$$\begin{aligned} {\tilde{\omega }}=\omega \cap B_{\delta /4}(z_0), \end{aligned}$$

where \(B_R(z_0) \subset \mathbb {C} ^n\) denotes the open ball of center \(z_0\) and radius \(R>0\). Up to the transformation \(z \mapsto (z-z_0)/(\delta /2)\), we can always assume that \(B_{\delta /2}(z_0)\) is the open unit ball, that will simply be denoted by \(B_1\) in the sequel. Thus, \(B_\delta (z_0)\) becomes \(B_2(0)\), that will simply be denoted by \(B_2\). Therefore, from now on, we are in the following geometric configuration:

$$\begin{aligned} {\tilde{\omega }} \subset \subset B_1 \subset \subset B_2 \subset \subset \Omega , \quad \mathrm {dist} \,({\tilde{\omega }},\partial B_1) \ge \frac{1}{4}\mathrm {dist} \,(\omega ,\partial \Omega ). \end{aligned}$$
(8)

2.1.2 Approximation of the Laplacian

In order not to consume too much regularity, we will need a suitable approximation of the Laplacian. This is done as in [2, p. 415] and [9, Lemma 4.2] (see also [4, Proposition 6.3]).

Lemma 2.1

For every \(0<\varepsilon <1\), let \(T_\varepsilon :L^p(B_2) \longrightarrow L^p(B_1)\) (\(1 \le p \le \infty\)) be the linear operator defined by

$$\begin{aligned} (T_\varepsilon u)(z)= \frac{n+1}{\varepsilon ^2} \left( u_{\varepsilon }(z)-u(z)\right) , \end{aligned}$$

with

$$\begin{aligned} u_{\varepsilon }(z)=\frac{1}{\mu _n(B_\varepsilon (z))} \int _{B_\varepsilon (z)} u \, d\mu _n, \end{aligned}$$

where here and in what follows \(\mu _n\) denotes the Lebesgue measure in \(\mathbb {C} ^n\). Then, we have the following properties:

(i):

Positivity: If u is subharmonic in \(B_2\), then \((T_\varepsilon u)(z) \ge 0\) for every \(z \in B_1\).

(ii):

Regularity: If \(u \in C^0(B_2)\), then \(T_\varepsilon u \in C^0(\overline{B_1})\). If \(u \in W^{2,p}(B_2)\) and \(p<\infty\), then \(u_\varepsilon \in W^{2,p}(B_1)\) with \(D^2_{\mathbb {C}}u_\varepsilon =(D^2_{\mathbb {C}}u)_\varepsilon\) (with a slight but obvious abuse of notation).

(iii):

Convergence: If \(u \in W^{2,p}(B_2)\) and \(p<\infty\), then \(T_\varepsilon u \rightharpoonup \Delta u\) weakly in \(L^p(B_1)\) as \(\varepsilon \rightarrow 0\).

(iv):

Uniform bound: If \(u \in W^{2,p}(B_2)\), then, for every \(0<\varepsilon <1\), we have

$$\begin{aligned} \left\| T_\varepsilon u\right\| _{L^p(B_1)} \le C(n,p) \left\| \Delta u\right\| _{L^p(B_2)}. \end{aligned}$$

The first point is a direct consequence of the mean value inequality (see e.g. [16, Theorem 2.4.1]). The second point is not difficult to check. For the weak convergence, one can, for instance, adapt the proof of [16, Proposition 4.2.6]. Finally, for the uniform bound, see e.g. [11, p. 219].
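Let us note that the normalization \((n+1)/\varepsilon ^2\) is precisely the one for which \(T_\varepsilon u\) approximates the complex Laplacian: the mean-value expansion over a ball of radius \(\varepsilon\) in \(\mathbb {R} ^{2n}\) gives \(u_\varepsilon -u \approx \frac{\varepsilon ^2}{2(2n+2)}\Delta _{\mathbb {R}}u=\frac{\varepsilon ^2}{n+1}\Delta u\). The following minimal numerical sketch (ours, with a Monte Carlo average standing in for the exact ball average) illustrates this on the test function \(u(z)=\left| z\right| ^4\), for which \(\Delta u=2(n+1)\left| z\right| ^2\).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3                     # complex dimension
eps = 0.05                # radius of the averaging ball
z0 = np.array([0.3, -0.2, 0.1, 0.4, 0.0, -0.5])   # a point of R^{2n} ~ C^n

def u(x):
    # u(z) = |z|^4 written in the 2n real coordinates
    return (x * x).sum(axis=-1) ** 2

def laplacian(x):
    # complex Laplacian of |z|^4: 2(n+1)|z|^2
    return 2 * (n + 1) * (x * x).sum(axis=-1)

# Uniform samples in the real 2n-dimensional ball of radius eps around z0.
N = 1_000_000
g = rng.standard_normal((N, 2 * n))
dirs = g / np.linalg.norm(g, axis=1, keepdims=True)
radii = eps * rng.random(N) ** (1.0 / (2 * n))
samples = z0 + radii[:, None] * dirs

u_eps = u(samples).mean()                     # Monte Carlo ball average
T_eps = (n + 1) / eps**2 * (u_eps - u(z0))    # the operator T_eps at z0

# The two values agree up to an O(eps^2) error and the Monte Carlo noise.
print(T_eps, laplacian(z0))
```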

2.1.3 A uniform estimate for strong supersolutions

Recently, a new type of Alexandrov–Bakelman–Pucci estimate for various nonlinear complex degenerate equations was obtained in [1, Theorem 1.4]. Thanks to the assumptions of the present article, it applies in particular to our linearized operator \(L_u\) defined in (7), for which it yields the following:

Theorem 2.2

Let \(r,q>n\). For every \(g \in L^q(B_1)\) and every \(w \in W^{2,r}_{{\mathrm {loc}}}(B_1) \cap C^0(\overline{B_1})\) such that

$$\begin{aligned} \left\{ \begin{array}{ll}L_u(-w) \le g &{}\quad \text { in } B_1, \\ w \le 0 &{}\quad \text { on } \partial B_1, \end{array}\right. \end{aligned}$$

we have

$$\begin{aligned} \sup _{B_1} w \le C(n,r,q)\left\| g_+\right\| _{L^q(B_1)}, \end{aligned}$$

where \(g_+=\max (g,0)\) denotes the positive part of g.

2.1.4 Estimate of the operator norm

We will need the following estimate:

Lemma 2.3

There exists \(C>0\) such that, for every \(A \in {\mathcal {C}}\),

$$\begin{aligned} \left\| DG(A)\right\| _{{\mathcal {L}}(\mathbb {H} ^n,\mathbb {R})} \le C (G(A))^{1-d} \left\| A\right\| ^{d-1}_{\mathbb {H} ^n}. \end{aligned}$$
(9)

Proof

By continuity of DF on the compact set \(\left\{ A \in \overline{{\mathcal {C}}} \quad \big | \quad \left\| A\right\| _{\mathbb {H} ^n}=1\right\}\) and the fact that DF is homogeneous of degree \(d-1\), we have, for some \(C>0\),

$$\begin{aligned} \left\| DF(A)\right\| _{{\mathcal {L}}(\mathbb {H} ^n,\mathbb {R})} \le C \left\| A\right\| ^{d-1}_{\mathbb {H} ^n}, \quad \forall A \in {\mathcal {C}}. \end{aligned}$$

The desired estimate (9) then follows from the computation

$$\begin{aligned} DG(A)B=\frac{1}{d} (F(A))^{(1/d)-1} DF(A)B, \quad A \in {\mathcal {C}}, \, B \in \mathbb {H} ^n. \end{aligned}$$

\(\square\)

We point out that it is only to show (9) that we use that F is \(C^1\) up to the boundary.

Although it is not needed in what follows, we mention that the inequality (9) also shows that the coefficients of the operator \(L_u\) belong to \(L^{\infty }\) if \(d=1\), and to \(L^{p/(d-1)}\) if \(d>1\); in both cases they belong to \(L^n\) by the assumption (4).

2.1.5 Jensen’s inequality in convex subsets of \(\mathbb {H} ^n\)

The following version of Jensen’s inequality will be needed (see e.g. [10, Lemma 2.8.1, p.76]):

Lemma 2.4

Let \({\mathcal {C}}\subset E\) be a nonempty open convex subset of a finite-dimensional real vector space E and let \(G:{\mathcal {C}}\longrightarrow \mathbb {R}\) be a concave function. Let \(\Omega \subset \mathbb {C} ^n\) (\(n \ge 1\)) be a nonempty open bounded subset and \(H \in L^1(\Omega ,E)\) be such that \(H(z) \in {\mathcal {C}}\) for a.e. \(z \in \Omega\). Then, we have

$$\begin{aligned} \frac{1}{\mu _n(\Omega )}\int _\Omega H(z) \, d\mu _n \in {\mathcal {C}}, \end{aligned}$$

and

$$\begin{aligned} G\left( \frac{1}{\mu _n(\Omega )} \int _\Omega H(z) \, d\mu _n \right) \ge \frac{1}{\mu _n(\Omega )}\int _\Omega G\left( H(z)\right) \, d\mu _n, \end{aligned}$$

(the right-hand side possibly being equal to \(-\infty\)).

2.2 Proof of Theorem 1.1

We are now ready to prove our first main result.

  1. (1)

    Strategy of the proof.

    Let \(0<\varepsilon <1\). For \(\alpha ,\beta \in [2,+\infty )\) (to be determined later) let us introduce an auxiliary function w given by

    $$\begin{aligned} w(z)=\eta (z) (T_\varepsilon u(z))^{\alpha }, \quad z \in B_1, \end{aligned}$$

    where

    $$\begin{aligned} \eta (z)=\mathrm {dist} \,(z,\partial B_1)^{2\beta } =\left( 1-\left| z\right| ^2\right) ^\beta . \end{aligned}$$

    Note that:

    • w is unambiguously defined since \(T_\varepsilon u \ge 0\) by item (i) of Lemma 2.1 and subharmonicity of u, which follows from the assumption that \(D^2_{\mathbb {C}}u \in {\mathcal {C}}\subset {\mathcal {C}}_1\) (see e.g. [16, Theorem 2.5.8]).

    • \(w \in C^0(\overline{B_1})\) by item (ii) of Lemma 2.1 since \(u \in C^0(\Omega )\) by the Sobolev embedding \(W^{2,p}_{{\mathrm {loc}}}(\Omega ) \subset C^0(\Omega )\) as \(p>n\) by assumption (4).

    • \(w \in W^{2,p}_{{\mathrm {loc}}}(B_1)\) thanks to the regularity of u and Sobolev embeddings (using that \(p \ge n\)).

    The goal will be to bound \(\sup _{B_1} w\) from above by a positive number depending only on the quantities indicated in the statement of Theorem 1.1. This will also provide an upper bound for \(\sup _{{\tilde{\omega }}} T_\varepsilon u\) since \(\eta (z) \ge \left( \mathrm {dist} \,(\omega ,\partial \Omega )/4\right) ^{2\beta }>0\) for \(z \in {\tilde{\omega }}\) (recall (8)). This will in turn imply that \(\Delta u \in L^{\infty }({\tilde{\omega }})\) with the same bound as for \(T_\varepsilon u\). Indeed, denoting by C a bound for \(T_\varepsilon u\) we will have shown that \(T_\varepsilon u \in S\), where

    $$\begin{aligned} S= \left\{ g \in L^{\infty }({\tilde{\omega }}), \quad \left\| g\right\| _{L^{\infty }({\tilde{\omega }})} \le C\right\} . \end{aligned}$$

    We can check that this set is closed in \(L^p({\tilde{\omega }})\) (using, for instance, the partial converse of the Lebesgue dominated convergence theorem [21, Theorem 3.12]). Since it is also clearly convex, it is then weakly closed in \(L^p({\tilde{\omega }})\) (see e.g. [22, Theorem 3.12]). As \(T_\varepsilon u \rightharpoonup \Delta u\) weakly in \(L^p({\tilde{\omega }})\) (item (iii) of Lemma 2.1) and \(T_\varepsilon u \in S\), it follows that \(\Delta u \in S\) as well, which is exactly what we want. Now, in order to bound \(\sup _{B_1} w\) we are going to use the condition (4) on p to show that \((L_u (-w))_+ \in L^q(B_1)\) for some \(q>n\), with estimate

    $$\begin{aligned} \left\| (L_u (-w))_+\right\| _{L^q(B_1)} \le C\left( \left( \sup _{B_1} w\right) ^{1-2/\beta }+1\right) , \end{aligned}$$
    (10)

    for some \(C>0\) depending only on the quantities indicated in the statement of Theorem 1.1. The conclusion will then follow from the uniform estimate of Theorem 2.2 with \(g=(L_u (-w))_+\). Note that \(w=0\) on \(\partial B_1\) (in fact, this is the only obstruction to simply taking \(\eta =1\)).

  2. (2)

    Computation of \(L_u(-w)\).

    We have

    $$\begin{aligned} w_i=\eta _i (T_\varepsilon u)^{\alpha }+\eta \alpha (T_\varepsilon u)^{\alpha -1} (T_\varepsilon u)_i, \end{aligned}$$

    and

    $$\begin{aligned} w_{i {\bar{j}}}= & \eta _{i {\bar{j}}} (T_\varepsilon u)^{\alpha } +\eta _i \alpha (T_\varepsilon u)^{\alpha -1}(T_\varepsilon u)_{{\bar{j}}} +\eta _{{\bar{j}}} \alpha (T_\varepsilon u)^{\alpha -1} (T_\varepsilon u)_i\\&+\,\eta \alpha (\alpha -1) (T_\varepsilon u)^{\alpha -2} (T_\varepsilon u)_{{\bar{j}}} (T_\varepsilon u)_i +\eta \alpha (T_\varepsilon u)^{\alpha -1} (T_\varepsilon u)_{i {\bar{j}}}. \end{aligned}$$

    Consequently,

    $$\begin{aligned} L_u (-w)= \sum _{1\le i, j \le n} -\frac{\partial G}{\partial a_{i {\bar{j}}}}(D^2_{\mathbb {C}}u) w_{i {\bar{j}}}= I+II, \end{aligned}$$

    where (using also the identity \(\overline{v_i}=v_{{\bar{i}}}\) for any real-valued function v)

    $$\begin{aligned} I= & {} \sum _{1\le i, j \le n} -\frac{\partial G}{\partial a_{i {\bar{j}}}}(D^2_{\mathbb {C}}u) \eta _{i {\bar{j}}} (T_\varepsilon u)^{\alpha } +2 \alpha (T_\varepsilon u)^{\alpha -1} \mathfrak {Re} \,\left( \sum _{1\le i, j \le n} -\frac{\partial G}{\partial a_{i {\bar{j}}}}(D^2_{\mathbb {C}}u) \eta _i (T_\varepsilon u)_{{\bar{j}}}\right) \\&+\,\sum _{1\le i, j \le n} -\frac{\partial G}{\partial a_{i {\bar{j}}}}(D^2_{\mathbb {C}}u) \eta \alpha (\alpha -1) (T_\varepsilon u)^{\alpha -2} (T_\varepsilon u)_{{\bar{j}}} (T_\varepsilon u)_i, \end{aligned}$$

    and

    $$\begin{aligned} II=\sum _{1\le i, j \le n} -\frac{\partial G}{\partial a_{i {\bar{j}}}}(D^2_{\mathbb {C}}u) \eta \alpha (T_\varepsilon u)^{\alpha -1} (T_\varepsilon u)_{i {\bar{j}}}. \end{aligned}$$
  3. (3)

    Estimate of the term I.

    To estimate this term, we will make use of the ellipticity and of the estimate (9). Let us introduce the following sesquilinear form on \(\mathbb {C} ^n\):

    $$\begin{aligned} \varphi (a,b)=\sum _{1\le i, j \le n} -\frac{\partial G}{\partial a_{i {\bar{j}}}}(D^2_{\mathbb {C}}u) a_i \overline{b_j}, \quad a,b \in \mathbb {C} ^n. \end{aligned}$$

    This form is nonpositive by ellipticity of G. Therefore, we can use the basic inequality \(2 \mathfrak {Re} \,\varphi (a,b) \le -\varphi (a,a) -\varphi (b,b)\) with \(a_i=\frac{1}{\sqrt{t}}\eta _i\) and \(b_j=\sqrt{t}(T_\varepsilon u)_j\) (\(t>0\) to be chosen below), to obtain

    $$\begin{aligned}&2\mathfrak {Re} \,\left( \sum _{1\le i, j \le n} -\frac{\partial G}{\partial a_{i {\bar{j}}}}(D^2_{\mathbb {C}}u) \eta _i (T_\varepsilon u)_{{\bar{j}}}\right) \\&\quad \le -\frac{1}{t}\sum _{1\le i, j \le n} -\frac{\partial G}{\partial a_{i {\bar{j}}}}(D^2_{\mathbb {C}}u) \eta _i \eta _{{\bar{j}}}\\&\qquad - \,t\sum _{1\le i, j \le n} -\frac{\partial G}{\partial a_{i {\bar{j}}}}(D^2_{\mathbb {C}}u) (T_\varepsilon u)_i (T_\varepsilon u)_{{\bar{j}}}. \end{aligned}$$

    Taking now \(t=(\alpha -1)\eta /T_\varepsilon u>0\) cancels the third term in I and gives

    $$\begin{aligned} I \le (T_\varepsilon u)^{\alpha } \sum _{1\le i, j \le n} -\frac{\partial G}{\partial a_{i {\bar{j}}}}(D^2_{\mathbb {C}}u) \left( \eta _{i {\bar{j}}} -\frac{\alpha }{(\alpha -1) \eta } \eta _i \eta _{{\bar{j}}} \right) . \end{aligned}$$

    Let

    $$\begin{aligned} B=(b_{i{\bar{j}}})_{1 \le i,j \le n}, \quad b_{i {\bar{j}}}= -\eta _{i {\bar{j}}} +\frac{\alpha }{(\alpha -1) \eta } \eta _i \eta _{{\bar{j}}}. \end{aligned}$$

    Direct computations show that

    $$\begin{aligned} \eta _i=-\beta {\bar{z}}_i \eta ^{1-1/\beta }, \quad \eta _{i {\bar{j}}}=-\beta \delta _{i j} \eta ^{1-1/\beta } +\beta (\beta -1){\bar{z}}_i z_j \eta ^{1-2/\beta }, \end{aligned}$$

    where \(\delta _{ij}\) denotes the Kronecker delta. Since \(\eta \le 1\), we obtain

    $$\begin{aligned} \left\| B\right\| _{\mathbb {H} ^n} \le C(\alpha ,\beta ,n)\eta ^{1-2/\beta } =C(\alpha ,\beta ,n) w^{1-2/\beta } \frac{1}{(T_\varepsilon u)^{\alpha (1-2/\beta )}}. \end{aligned}$$

    Consequently, using (9), we have

    $$\begin{aligned} I \le C(\alpha ,\beta ,n) C'(f,d) \left\| D^2_{\mathbb {C}}u\right\| ^{d-1}_{\mathbb {H} ^n} (T_\varepsilon u)^{2\alpha /\beta } w^{1-2/\beta }, \end{aligned}$$

    where \(C'(f,d)=(\inf _{B_2} f)^{(1/d)-1}\) if \(d\ge 1\) and \(C'(f,d)=(\sup _{B_2} f)^{(1/d)-1}\) if \(d<1\). Note that we cannot use the better estimate (3) since \(B \not \in \overline{{\mathcal {C}}_n}\).

  4. (4)

    Estimate of the term II.

    For this term, we will use the concavity of F to show that

    $$\begin{aligned} II \le \eta \alpha \left| T_{\varepsilon } (f^{1/d})\right| (T_\varepsilon u)^{\alpha -1}. \end{aligned}$$

    To prove such an estimate, we would like to use the concavity inequality

    $$\begin{aligned} G(B) \le G(A)+DG(A)(B-A), \quad A,B \in {\mathcal {C}}, \end{aligned}$$
    (11)

    with

    $$\begin{aligned} A=(D^2_{\mathbb {C}}u)(z), \quad B=(D^2_{\mathbb {C}}u_{\varepsilon })(z). \end{aligned}$$

    But first observe that \(B=(D^2_{\mathbb {C}}u)_\varepsilon (z)\) (see Lemma 2.1) and apply Jensen’s inequality (Lemma 2.4) to see that indeed \(B \in {\mathcal {C}}\) for a.e. \(z \in B_1\) with, in addition,

    $$\begin{aligned} G((D^2_{\mathbb {C}}u)_{\varepsilon }) \ge \left( G(D^2_{\mathbb {C}}u)\right) _{\varepsilon }. \end{aligned}$$

    We can now use (11) to obtain the desired estimate:

    $$\begin{aligned} \sum_{1\le i, j \le n} -\frac{\partial G}{\partial a_{i {\bar{j}}}}(D^2_{\mathbb {C}}u) (T_\varepsilon u)_{i {\bar{j}}} \le \frac{n+1}{\varepsilon ^2}\left( G(D^2_{\mathbb {C}}u)-\left( G(D^2_{\mathbb {C}}u)\right) _{\varepsilon }\right) =-T_\varepsilon (f^{1/d}). \end{aligned}$$
  5. (5)

    Choices of \(\alpha\) and \(\beta\).

    In summary, we have obtained the inequality

    $$\begin{aligned} (L_u (-w))_+ \le C(\alpha ,\beta ,n)C'(f,d) \left\| D^2_{\mathbb {C}}u\right\| ^{d-1}_{\mathbb {H} ^n} (T_\varepsilon u)^{2\alpha /\beta } w^{1-2/\beta } +\eta \alpha \left| T_\varepsilon (f^{1/d})\right| (T_\varepsilon u)^{\alpha -1}. \end{aligned}$$

    Thanks to the condition (4) on p, we can find \(q>n\) such that \(p/q \ge 1\) and \((p/q)-(d-1)>0\). Consequently, taking

    $$\begin{aligned} \alpha =1+\frac{p}{q}, \quad \beta =\frac{2 \alpha }{\frac{p}{q}-(d-1)}, \end{aligned}$$

    we have \(\alpha ,\beta \ge 2\) and \((L_u (-w))_+ \in L^{q}(B_1)\), with the following estimate (by Hölder’s inequality, the bound \(\eta \le 1\) and (iv) of Lemma 2.1):

    $$\begin{aligned} \left\| (L_u (-w))_+\right\| _{L^{q}(B_1)}\le & {} C(\alpha ,\beta ,n)C'(f,d)C(n,p) \left\| D^2_{\mathbb {C}}u\right\| _{L^p(B_1)}^{d-1} \left\| \Delta u\right\| _{L^p(B_2)}^{2\alpha /\beta } \left( \sup _{B_1} w\right) ^{1-2/\beta } \\&+\,\alpha C(n,p) \left\| \Delta (f^{1/d})\right\| _{L^{\infty }(B_2)} \left\| \Delta u\right\| _{L^p(B_2)}^{p/q}. \end{aligned}$$

    This yields the desired estimate (10). \(\square\)
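For the reader's convenience, let us record the exponent bookkeeping behind the last display of the proof (a short check, in the main case \(d \ge 1\)): the choices of \(\alpha\) and \(\beta\) give

$$\begin{aligned} q(\alpha -1)=p, \qquad (d-1)+\frac{2\alpha }{\beta }=\frac{p}{q}, \qquad \text{ so that } \qquad \frac{q(d-1)}{p}+\frac{q}{p}\,\frac{2\alpha }{\beta }=1. \end{aligned}$$

The first identity gives \(\left\| (T_\varepsilon u)^{\alpha -1}\right\| _{L^{q}(B_1)}=\left\| T_\varepsilon u\right\| _{L^{p}(B_1)}^{p/q}\), and the second one provides the conjugate exponents \(p/(q(d-1))\) and \(p/(q \cdot 2\alpha /\beta )\) for Hölder's inequality applied to the product \(\left\| D^2_{\mathbb {C}}u\right\| ^{d-1}_{\mathbb {H} ^n} (T_\varepsilon u)^{2\alpha /\beta }\) (when \(d=1\) the first factor plays no role). Note also that \(\beta \ge 2\) amounts to \(\alpha \ge (p/q)-(d-1)\), which holds since \(d>0\).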

2.3 \(C^\infty\) regularity

In this section, we show how to deduce Corollary 1.2 from Theorem 1.1 and the classical theory for uniformly elliptic real equations from [20].

We follow the presentations of [26] and [27]. Let \(\mathbb {S} ^{2n}\) be the space of \(2n \times 2n\) real symmetric matrices. The inner product on \(\mathbb {S} ^{2n}\) is the standard Frobenius inner product.

In this section, a function u of n complex variables \((z_1,\ldots ,z_n)\) will also be considered as a function of the 2n real variables \((x_1,\ldots , x_n, y_1,\ldots ,y_n)\), where \(z_j=x_j+iy_j\). The real Hessian of u is then denoted by

$$\begin{aligned} D^2_{\mathbb {R}}u = \left( \begin{array}{c|c} u_{x_i x_j} &{} u_{x_i y_j} \\ \hline u_{y_i x_j} &{}u_{y_i y_j} \end{array}\right) \in \mathbb {S} ^{2n}. \end{aligned}$$

The open ball in \(\mathbb {R} ^{2n}\) of center 0 and radius \(r>0\) will be denoted by \(B_r\).

Corollary 1.2 is essentially a consequence of the following fundamental general result and Schauder’s estimates:

Theorem 2.5

Let \({\bar{F}}:\mathbb {S} ^{2n} \longrightarrow \mathbb {R}\) be concave and uniformly elliptic, and let \(f \in C^{0,\beta }(\overline{B_1})\) (\(0<\beta \le 1\)). If \(u \in W^{2,p}_{{\mathrm {loc}}}(B_1)\) is a strong solution to

$$\begin{aligned} {\bar{F}}(D^2_{\mathbb {R}}u)=f \quad \text{ in } B_1, \end{aligned}$$

and \(p \ge 2n\), then there exist \(\alpha \in (0,1)\) and \(\varepsilon \in (0,1)\) such that

$$\begin{aligned} u \in C^{2,\alpha }(B_\varepsilon ). \end{aligned}$$

We recall that “strong solution” simply means that the equation holds almost everywhere. This theorem is a consequence of [20, Theorem 8.1] (see e.g. [27, Section 2] for details). In this reference, the result deals in fact with C-viscosity solutions, but we recall that a strong solution \(u \in W^{2,p}_{{\mathrm {loc}}}\) with \(p \ge 2n\) is a C-viscosity solution, see [18, Section III].

In order to apply this result, we need two preliminary observations: firstly, we have to associate with our equation \(F(D^2_{\mathbb {C}}u)=f\) an equation for the real Hessian \({\bar{F}}(D^2_{\mathbb {R}}u)=f\) and, secondly, this \({\bar{F}}\) has to be defined over the whole space \(\mathbb {S} ^{2n}\). This is done in a standard way (see, e.g., [3, 26, 27], etc.):

  • We identify \(n\times n\) Hermitian matrices with the subspace of \(\mathbb {S} ^{2n}\) given by matrices invariant by the canonical complex structure:

    $$\begin{aligned} \iota (\mathbb {H} ^n) = \left\{ A \in \mathbb {S} ^{2n} \quad \big | \quad AJ-JA=0\right\} , \quad J= \begin{pmatrix} 0 &{} -\mathrm {Id}\\ \mathrm {Id}&{} 0 \end{pmatrix}, \end{aligned}$$

    where the map \(\iota :\mathbb {H} ^n \longrightarrow \mathbb {S} ^{2n}\) is given by

    $$\begin{aligned} \iota (A+iB)= \begin{pmatrix} A &{} -B \\ B &{} A \end{pmatrix} . \end{aligned}$$

    Let us also introduce the projection \(\pi : \mathbb {S} ^{2n} \longrightarrow \iota (\mathbb {H} ^n)\) given by

    $$\begin{aligned} \pi (S)=\frac{S+J^{\mathrm {Tr}} SJ}{2}. \end{aligned}$$

    The complex and real Hessians are then related by the identity

    $$\begin{aligned} \iota (2D_{\mathbb {C}}^2 u)=\pi (D_{\mathbb {R}}^2u). \end{aligned}$$

    As a result, if u solves \(F(D^2_{\mathbb {C}}u)=f\), then it solves as well

    $$\begin{aligned} {\tilde{F}}(D^2_{\mathbb {R}}u)=f, \end{aligned}$$

    where \({\tilde{F}}:{\mathcal {E}}(R) \longrightarrow \mathbb {R}\) is given by

    $$\begin{aligned} {\tilde{F}}(A)=F\left( \frac{1}{2}\iota ^{-1}\left( \pi (A)\right) \right) , \end{aligned}$$

    and \({\mathcal {E}}(R) \subset \mathbb {S} ^{2n}\) is defined by

    $$\begin{aligned} {\mathcal {E}}(R)=\pi ^{-1}\left( \iota \left( 2{\mathcal {C}}(R)\right) \right) . \end{aligned}$$

    We recall that \({\mathcal {C}}(R)\) is defined in (5) and that it is not a cone, so that the factor 2 cannot be removed. It is clear that \({\mathcal {E}}(R)\) is an open convex set and that \({\tilde{F}}\) is concave in \({\mathcal {E}}(R)\). Since F is uniformly elliptic in \({\mathcal {C}}(R)\) by assumption of Corollary 1.2, it is also clear that \({\tilde{F}}\) is uniformly elliptic in \({\mathcal {E}}(R)\). (A small numerical illustration of the maps \(\iota\) and \(\pi\) is sketched after this list.)

  • Let us now extend \({\tilde{F}}\) to \(\mathbb {S} ^{2n}\). This is done by considering \({\bar{F}}:\mathbb {S} ^{2n} \longrightarrow \mathbb {R}\) defined by

    $$\begin{aligned} {\bar{F}}(A)=\inf \left\{ L(A) \quad \big | \quad L:\mathbb {S} ^{2n} \longrightarrow \mathbb {R} \text { affine linear, } \quad m\mathrm {Id}\le DL \le M \mathrm {Id}, \quad L \ge {\tilde{F}} \text { in } {\mathcal {E}}(R)\right\} , \end{aligned}$$

    where \(0<m \le M\) denote the ellipticity constants of \({\tilde{F}}\). We can check that \({\bar{F}}\) is concave and uniformly elliptic in \(\mathbb {S} ^{2n}\), and that we have \({\bar{F}}={\tilde{F}}\) in \({\mathcal {E}}(R)\) (see e.g. [26, Lemma 4.1]).
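The following minimal numerical sketch (ours; the function names are not taken from the references) illustrates the identifications used in the first item: it checks that \(\iota (M)\) commutes with J (i.e. lies in \(\iota (\mathbb {H} ^n)\)), with the spectrum of M doubled, and that \(\pi\) is the projection onto this subspace (namely \(\pi \circ \pi =\pi\) and \(\pi (S)\) commutes with J).

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
Id = np.eye(n)
J = np.block([[np.zeros((n, n)), -Id], [Id, np.zeros((n, n))]])

def iota(M):
    """Embedding of a Hermitian matrix M = A + iB into S^{2n}."""
    A, B = M.real, M.imag
    return np.block([[A, -B], [B, A]])

def pi(S):
    """Projection of S^{2n} onto iota(H^n): (S + J^T S J) / 2."""
    return (S + J.T @ S @ J) / 2

# Random Hermitian matrix and random real symmetric matrix.
Z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
M = (Z + Z.conj().T) / 2
S = rng.standard_normal((2 * n, 2 * n))
S = (S + S.T) / 2

# iota(M) commutes with J (i.e. lies in iota(H^n)) and its spectrum is that of M doubled.
assert np.allclose(iota(M) @ J, J @ iota(M))
assert np.allclose(np.linalg.eigvalsh(iota(M)), np.repeat(np.linalg.eigvalsh(M), 2))

# pi is the projection onto that subspace: pi o pi = pi and pi(S) commutes with J.
assert np.allclose(pi(pi(S)), pi(S))
assert np.allclose(pi(S) @ J, J @ pi(S))
```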

Corollary 1.2 is now an easy consequence of the previous results.

Proof of Corollary 1.2

After translation and dilation of the coordinates if necessary, it is sufficient to show that \(u \in C^{\infty }(B_\varepsilon )\) for some \(\varepsilon \in (0,1)\) when \(B_1 \subset \subset B_2 \subset \subset \Omega\).

From Theorem 1.1, we know that, for some \(R_1>0\),

$$\begin{aligned} \Delta u <R_1 \quad \text { in } B_2. \end{aligned}$$

On the other hand, by our assumptions on f, there exists \(R_2>0\) such that

$$\begin{aligned} F(D^2_{\mathbb {C}}u)=f>\frac{1}{R_2}. \end{aligned}$$

Consequently, for any \(R>\max \left\{ R_0,R_1,R_2\right\}\) we have

$$\begin{aligned} D^2_{\mathbb {C}}u \in {\mathcal {C}}(R). \end{aligned}$$

It now follows from the previous discussion that u is a strong solution to the concave and uniformly elliptic real equation

$$\begin{aligned} {\bar{F}}(D^2_{\mathbb {R}}u)=f \quad \text{ in } B_1. \end{aligned}$$

Besides, \(u \in W^{2,p}(B_1)\) for every \(1 \le p<\infty\) by Calderón–Zygmund estimates since \(u, \Delta u \in L^{\infty }(B_2)\). We can then apply Theorem 2.5 and obtain that \(u \in C^{2,\alpha }(B_\varepsilon )\) for some \(\alpha \in (0,1)\) and \(\varepsilon \in (0,1)\).

As a result, \(u \in C^{2,\alpha }(B_\varepsilon )\) solves \({\tilde{F}}(D^2_{\mathbb {R}}u)=f\) in \(B_\varepsilon\). Since

$$\begin{aligned} F \in C^{\infty }({\mathcal {C}}) \quad \Longrightarrow \quad {\tilde{F}} \in C^{\infty }({\mathcal {E}}(R)), \end{aligned}$$

it follows that \(u \in C^{\infty }(B_\varepsilon )\) from the classical Schauder’s estimates, see e.g. [20, Proposition 9.1] (the proof in this reference is carried out for \({\mathcal {E}}(R)=\mathbb {S} ^{2n}\) but all the arguments go through after noticing that \(\left\{ tD^2 u(x+he_k)+(1-t) D^2 u(x) \quad \big | \quad t \in [0,1], \, x \in \overline{H}, \, 0\le h \le \mathrm {dist} \,(H,\partial \Omega )/2\right\}\) is a compact set of \(\mathbb {S} ^{2n}\) which is included in \({\mathcal {E}}(R)\)). \(\square\)

3 Application to Hessian equations

Let us now present some examples covered by our framework. We emphasize that our results, Theorem 1.1 and Corollary 1.2, do not require the equation to be a Hessian equation, but all the examples of application presented here will be Hessian equations.

Let us first recall some notations.

  • For \(A \in \mathbb {H} ^n\), its eigenvalues (which are real) will always be sorted as follows:

    $$\begin{aligned} \lambda _1(A) \le \lambda _2(A) \le \cdots \le \lambda _n(A). \end{aligned}$$

    We then introduce \(\lambda :\mathbb {H} ^n \longrightarrow \mathbb {R} ^n\) defined by

    $$\begin{aligned} \lambda (A)=(\lambda _1(A),\ldots ,\lambda _n(A)). \end{aligned}$$
  • A function \(F:{\mathcal {C}}\longrightarrow \mathbb {R}\) is said to be a Hessian operator if there exist a set \(\Gamma \subset \mathbb {R} ^n\) and a function \({\hat{F}}:\Gamma \longrightarrow \mathbb {R}\) such that

    $$\begin{aligned} {\mathcal {C}}=\lambda ^{-1}(\Gamma ), \quad F(A)={\hat{F}}(\lambda (A)), \quad \forall A \in {\mathcal {C}}. \end{aligned}$$

    The notation \(\Gamma\) will always be reserved for subsets of \(\mathbb {R} ^n\), whereas the notation \({\mathcal {C}}\) will always be reserved for subsets of \(\mathbb {H} ^n\).

  • For \(k \in \left\{ 1,\ldots ,n\right\}\), we introduce the classical cones

    $$\begin{aligned} \Gamma _k= \left\{ \lambda \in \mathbb {R} ^n \quad \big | \quad \sigma _\ell (\lambda _1,\ldots ,\lambda _n)>0, \quad \forall \ell \in \left\{ 1,\ldots ,k\right\} \right\} , \end{aligned}$$

    where \(\sigma _k\) is the kth elementary symmetric polynomial:

    $$\begin{aligned} \sigma _k(\lambda )= \sum _{(i_1,\ldots ,i_k) \in E_n^k} \lambda _{i_1} \cdots \lambda _{i_k}, \end{aligned}$$

    where, here and in the rest of this article, we use the notation

    $$\begin{aligned} E_n^k= \left\{ (i_1,\ldots ,i_k) \in \left\{ 1,\ldots ,n\right\} ^k \quad \big | \quad i_1<\cdots <i_k\right\} . \end{aligned}$$

    We recall that \(\mathrm {card} \,E_n^k=C_n^k=n!/(k!(n-k)!)\).

  • We recall that a set \(S \subset \mathbb {R} ^n\) is symmetric if \((\lambda _{\tau (1)},\ldots ,\lambda _{\tau (n)}) \in S\) for every \((\lambda _1,\ldots ,\lambda _n) \in S\) and every permutation \(\tau\). A function \({\hat{F}}:S \longrightarrow \mathbb {R}\) is symmetric if \({\hat{F}}(\lambda _{\tau (1)},\ldots ,\lambda _{\tau (n)})={\hat{F}}(\lambda _1,\ldots ,\lambda _n)\) for every \((\lambda _1,\ldots ,\lambda _n) \in S\) and every permutation \(\tau\).

Let us now provide practical conditions on \(\Gamma\) and \({\hat{F}}\) to guarantee that \({\mathcal {C}}=\lambda ^{-1}(\Gamma )\) and \(F={\hat{F}} \circ \lambda\) satisfy the assumptions of our main results.

Proposition 3.1

Let \({\hat{F}}:\mathbb {R} ^n \longrightarrow \mathbb {R}\) be a symmetric polynomial, homogeneous of degree \(d\in \mathbb {N} ^*\), satisfying, for some \(C>0\),

$$\begin{aligned} {\hat{F}}(\lambda )^{1/d} \ge C \left( \prod _{i=1}^n \lambda _i\right) ^{1/n}, \quad \forall \lambda \in \Gamma _n, \end{aligned}$$
(12)

and let \(\Gamma \subset \mathbb {R} ^n\) be a nonempty open symmetric convex cone such that

$$\begin{aligned}&\Gamma _n \subset \Gamma \subset \Gamma _1, \quad \partial \Gamma \cap \partial \Gamma _1= \left\{ 0\right\} , \\&{\hat{F}}>0 \text { in } \Gamma , \quad {\hat{F}}=0 \text { on } \partial \Gamma , \\&{\hat{F}}^{1/d} \text { is concave in } \Gamma . \end{aligned}$$

Then, \({\mathcal {C}}=\lambda ^{-1}(\Gamma )\) satisfies (a) and \(F={\hat{F}} \circ \lambda\) satisfies (b)–(g).

We emphasize that the map \(\lambda\) has few nice properties, so this result is far from being trivial. The proof of Proposition 3.1 is postponed to the end of this section for the sake of the presentation.

Remark 3.2

The concavity assumption is satisfied by a large class of polynomials, namely, the hyperbolic polynomials. We recall that a polynomial \({\hat{F}}\) homogeneous of degree \(d\in \mathbb {N} ^*\) is called hyperbolic with respect to \(e \in \mathbb {R} ^n\) if \({\hat{F}}(e) \ne 0\) and if, for every \(\lambda \in \mathbb {R} ^n\), the one-variable polynomial \(t \in \mathbb {C} \longmapsto {\hat{F}}(\lambda -t e)\) has only real roots. In this case, \({\hat{F}}^{1/d}\) is concave in the open convex cone (called hyperbolicity cone)

$$\begin{aligned} \Gamma = \left\{ \lambda \in \mathbb {R} ^n \quad \big | \quad {\hat{F}}(\lambda -te)\ne 0, \quad \forall t \le 0\right\} . \end{aligned}$$

We refer to [6, Section 1] and [12] for more details.
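For instance, for \({\hat{F}}(\lambda )=\prod _{i=1}^n \lambda _i\) and \(e=(1,\ldots ,1)\), the polynomial \(t \longmapsto {\hat{F}}(\lambda -te)=\prod _{i=1}^n (\lambda _i-t)\) has the real roots \(\lambda _1,\ldots ,\lambda _n\), and the condition \({\hat{F}}(\lambda -te)\ne 0\) for every \(t \le 0\) amounts to \(\lambda _i>0\) for every i, so that the hyperbolicity cone is \(\Gamma _n\); this is the first example below.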

Let us now finally present some concrete examples.

  • The complex Monge–Ampère equation:

    $$\begin{aligned} {\hat{F}}(\lambda )=\prod _{i=1}^n \lambda _i, \quad \Gamma =\Gamma _n. \end{aligned}$$

    The degree of homogeneity is obviously \(d=n\) and the domination property (12) is trivial. This polynomial is hyperbolic with respect to \(e=(1,1,\ldots ,1)\). We can check that the associated hyperbolicity cone is indeed \(\Gamma _n\). It clearly satisfies all the desired assumptions.

  • The complex k-Hessian equation: for \(k \in \left\{ 1,\ldots ,n\right\}\),

    $$\begin{aligned} {\hat{F}}=\sigma _k, \quad \Gamma =\Gamma _k. \end{aligned}$$

    The degree of homogeneity is obviously \(d=k\) and the domination property (12) follows from Maclaurin’s inequality: for any \(k \ge 2\),

    $$\begin{aligned} \Gamma _{k} \subset \Gamma _{k-1}, \quad \left( \frac{\sigma _k}{C_n^k}\right) ^{1/k} \le \left( \frac{\sigma _{k-1}}{C_n^{k-1}}\right) ^{1/(k-1)} \text{ in } \Gamma _{k}. \end{aligned}$$

    This polynomial is hyperbolic with respect to \(e=(1,1,\ldots ,1)\). We can check that the associated hyperbolicity cone is indeed \(\Gamma _k\). It clearly satisfies all the desired assumptions (\(k>1\)).

  • For \(s \in [0,1)\), let

    $$\begin{aligned} {\hat{F}}(\lambda )= & {} (1-s)^2\lambda _1 \lambda _2+s(\lambda _1+\lambda _2)^2 \\= & {} (\lambda _1+s\lambda _2)(s\lambda _1+\lambda _2), \\ \Gamma= & {} \Gamma _{2-s}= \left\{ \lambda \in \mathbb {R} ^2 \quad \big | \quad \lambda _1+s\lambda _2>0, \quad s\lambda _1+\lambda _2>0\right\} . \end{aligned}$$

    The cones \(\Gamma _{2-s}\) interpolate between \(\Gamma _2\) and \(\Gamma _1\). The degree of homogeneity is obviously \(d=2\) and the domination property (12) is trivial since \({\hat{F}}(\lambda ) \ge (1-s)^2\lambda _1 \lambda _2\). The cone clearly satisfies all the desired assumptions (\({\hat{F}}\) is also a hyperbolic polynomial with respect to \(e=(1,1)\), but the concavity of \({\hat{F}}^{1/2}\) is easily checked by a direct computation here).

  • The complex k-Monge–Ampère equation: for \(k \in \left\{ 1,\ldots ,n\right\}\),

    $$\begin{aligned} {\hat{F}}=\mathrm {MA}_k, \quad \Gamma =\Gamma _k'. \end{aligned}$$

    where

    $$\begin{aligned} \mathrm {MA}_k(\lambda )= & {} \prod _{(i_1,\ldots ,i_k) \in E_n^k} \left( \lambda _{i_1}+\cdots +\lambda _{i_k}\right) , \\ \Gamma _k'= & {} \left\{ \lambda \in \mathbb {R} ^n \quad \big | \quad \lambda _{i_1}+\cdots +\lambda _{i_k}>0, \quad \forall (i_1,\ldots ,i_k) \in E_n^k\right\} . \end{aligned}$$

    The degree of homogeneity is \(d=C_n^k\). The domination property (12) is a consequence of the following inequality (illustrated numerically in the sketch after this list): for any \(k \ge 2\),

    $$\begin{aligned} \Gamma _{k-1}' \subset \Gamma _k', \quad \frac{1}{k}\left( \mathrm {MA}_k\right) ^{1/C_n^k} \ge \frac{1}{k-1}\left( \mathrm {MA}_{k-1}\right) ^{1/C_n^{k-1}} \text{ in } \Gamma _{k-1}'. \end{aligned}$$
    (13)

    This property will be detailed in Sect. 4.2. This polynomial is again hyperbolic with respect to \(e=(1,1,\ldots ,1)\). We can check that the associated hyperbolicity cone is indeed \(\Gamma _k'\). It clearly satisfies all the desired assumptions (\(k<n\)).
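As a quick sanity check (ours; it only samples points of \(\Gamma _n\), which is contained in all the cones involved), the following sketch verifies Maclaurin's inequality and the comparison (13) numerically.

```python
import itertools
from math import comb

import numpy as np

def sigma(lam, k):
    """k-th elementary symmetric polynomial."""
    return sum(np.prod([lam[i] for i in S]) for S in itertools.combinations(range(len(lam)), k))

def ma(lam, k):
    """k-Monge-Ampere polynomial MA_k."""
    return np.prod([sum(lam[i] for i in S) for S in itertools.combinations(range(len(lam)), k)])

rng = np.random.default_rng(3)
n = 5
for _ in range(500):
    lam = rng.random(n)    # a point of Gamma_n, contained in all the cones used here
    for k in range(2, n + 1):
        # Maclaurin's inequality
        assert (sigma(lam, k) / comb(n, k)) ** (1 / k) <= (sigma(lam, k - 1) / comb(n, k - 1)) ** (1 / (k - 1)) + 1e-12
        # Inequality (13) comparing MA_k and MA_{k-1}
        assert ma(lam, k) ** (1 / comb(n, k)) / k >= ma(lam, k - 1) ** (1 / comb(n, k - 1)) / (k - 1) - 1e-12
```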

We conclude this section with the proof of Proposition 3.1.

Proof of Proposition 3.1

We only establish the nonobvious properties.

  1. (1)

    Regularity of F. Since \({\hat{F}}\) is a symmetric polynomial, by Newton’s fundamental theorem of symmetric polynomials we can write

    $$\begin{aligned} {\hat{F}}(\lambda )=p(\sigma _1(\lambda ),\ldots ,\sigma _n(\lambda )), \end{aligned}$$

    for some polynomial p. Since all the functions \(A \in \mathbb {H} ^n \longmapsto \sigma _k(\lambda (A))\) are \(C^{\infty }(\mathbb {H} ^n,\mathbb {R})\) (recall that they are equal to \(\frac{(-1)^k}{(n-k)!} (\partial ^{n-k} h/\partial t^{n-k})(0,A)\) where \(h \in C^{\infty }(\mathbb {R} \times \mathbb {H} ^n,\mathbb {R})\) is the function \(h(t,A)=\det (t\mathrm {Id}-A)\)), we deduce that \(F \in C^{\infty }(\mathbb {H} ^n,\mathbb {R})\) by composition.

  2. (2)

    Convexity of \({\mathcal {C}}\) and concavity of \(F^{1/d}\). We point out that the concavity of \(F^{1/d}\) is also shown in [6, Section 3] but we would like to present a different proof here. Let \(A,B \in {\mathcal {C}}\) and \(t \in [0,1]\) be fixed. We have to show that

    $$\begin{aligned}&\lambda (tA+(1-t)B) \in \Gamma , \\&\quad F(\lambda (tA+(1-t)B))^{1/d} \ge tF(\lambda (A))^{1/d}+(1-t)F(\lambda (B))^{1/d}. \end{aligned}$$

    To this end, it is convenient to make use of the theory of majorization, for which we refer to [19]. The starting point is that, for every \(k \in \left\{ 1,\ldots ,n\right\}\), the function

    $$\begin{aligned} A \in \mathbb {H} ^n \longmapsto \lambda _1(A)+\cdots +\lambda _k(A) \text { is concave.} \end{aligned}$$

    From this property, we have that \(t\lambda (A)+(1-t)\lambda (B)\) “majorizes” \(\lambda (tA+(1-t)B)\), that is

    $$\begin{aligned} \left\{ \begin{array}{ll} \sum \limits _{j=1}^k \lambda _j(tA+(1-t)B) \ge \sum \limits _{j=1}^k \left( t\lambda _j(A)+(1-t)\lambda _j(B)\right) , \quad \forall k \in \left\{ 1,\ldots ,n-1\right\} ,\\ \sum \limits _{j=1}^n \lambda _j(tA+(1-t)B) =\mathrm {Trace} \,(tA+(1-t)B) =\sum \limits _{j=1}^n \left( t\lambda _j(A)+(1-t)\lambda _j(B)\right) . \end{array}\right. \end{aligned}$$
    (14)

    It follows that (see e.g. [19, Chapter 1, B.1. Lemma])

    $$\begin{aligned} \lambda (tA+(1-t)B)^\mathrm {Tr}=T_N T_{N-1}\cdots T_1 (t\lambda (A)+(1-t)\lambda (B))^\mathrm {Tr}, \end{aligned}$$

    for some matrices \(T_1, \ldots , T_N \in \mathbb {R} ^{n \times n}\) of the form

    $$\begin{aligned} T_i=t_i \mathrm {Id}+ (1-t_i)P_i, \end{aligned}$$

    for some \(t_i \in [0,1]\) and some elementary permutation matrix \(P_i \in \mathbb {R} ^{n \times n}\) (i.e. switching only two components). Since \(\Gamma\) is convex and symmetric, it is clear that such transformations map \(\Gamma\) into itself. As a result, we have \(\lambda (tA+(1-t)B) \in \Gamma\). On the other hand, since \({\hat{F}}^{1/d}\) is concave and symmetric, it is Schur concave (see e.g. [19, Chapter 3, C.2. Proposition]), meaning that the majorization (14) implies that

    $$\begin{aligned} {\hat{F}}(t\lambda (A)+(1-t)\lambda (B))^{1/d} \le {\hat{F}}(\lambda (tA+(1-t)B))^{1/d}. \end{aligned}$$

    Using again the concavity of \({\hat{F}}^{1/d}\), we obtain the concavity of \(F^{1/d}\).

  3. (3)

    Uniform ellipticity of F in \({\mathcal {C}}(R)\).

    Let \(R>0\) be fixed. Let

    $$\begin{aligned} \Gamma (R)= \left\{ \lambda \in \Gamma \quad \big | \quad \sum _{i=1}^n \lambda _i<R, \quad \frac{1}{R}<{\hat{F}}(\lambda )\right\} . \end{aligned}$$

    We first show that the assumption \(\partial \Gamma \cap \partial \Gamma _1= \left\{ 0\right\}\) implies that \(\Gamma (R)\) is bounded. To this end, it is sufficient to prove that there exists \(\varepsilon >0\) such that

    $$\begin{aligned} \Gamma \subset \left\{ \lambda \in \Gamma \quad \big | \quad \varepsilon \left| \lambda \right| <\sum _{i=1}^n \lambda _i\right\} . \end{aligned}$$

    To show this property, we argue by contradiction and assume that there exists a sequence \((\lambda ^\varepsilon )_{\varepsilon >0} \subset \Gamma\) such that

    $$\begin{aligned} \varepsilon \left| \lambda ^\varepsilon \right| \ge \sum _{i=1}^n \lambda ^\varepsilon _i. \end{aligned}$$
    (15)

    Since \(\Gamma \subset \Gamma _1\), we have in particular that \(\lambda ^\varepsilon \ne 0\) and we can normalize the sequence by considering \({\tilde{\lambda }}^\varepsilon =\lambda ^\varepsilon /\left| \lambda ^\varepsilon \right|\). We have \({\tilde{\lambda }}^\varepsilon \in \Gamma\) since \(\Gamma\) is a cone. We can extract a subsequence, still denoted by \(({\tilde{\lambda }}^\varepsilon )_{\varepsilon >0}\), that converges to some \({\tilde{\lambda }} \in \overline{\Gamma }\). Besides, \({\tilde{\lambda }} \ne 0\) since \(\left| {\tilde{\lambda }}\right| =1\). Passing to the limit \(\varepsilon \rightarrow 0\) in (15), we obtain that

    $$\begin{aligned} 0 \ge \sum _{i=1}^n {\tilde{\lambda }}_i. \end{aligned}$$

    Since \(\Gamma \subset \Gamma _1\), it follows that \({\tilde{\lambda }} \in \partial \Gamma \cap \partial \Gamma _1\) and thus \({\tilde{\lambda }}=0\) by assumption, a contradiction. As a result, \(\overline{\Gamma (R)}\) is compact and there exist real numbers \(m \le M\) such that, for every \(i \in \left\{ 1,\ldots ,n\right\}\),

    $$\begin{aligned} m \le \frac{\partial {\hat{F}}}{\partial \lambda _i}(\lambda ) \le M, \quad \forall \lambda \in \overline{\Gamma (R)}. \end{aligned}$$
    (16)

    Let us now prove that \(m>0\). To this end, it is sufficient to show that

    $$\begin{aligned}&0<\frac{\partial {\hat{F}}}{\partial \lambda _i}(\lambda ), \quad \forall \lambda \in \Gamma , \quad \forall i \in \left\{ 1,\ldots ,n\right\} , \\&\overline{\Gamma (R)} \subset \Gamma . \end{aligned}$$

    The inclusion \(\overline{\Gamma (R)} \subset \Gamma\) immediately follows from the assumption that \({\hat{F}}=0\) on \(\partial \Gamma\). Let us now prove the inequality. We use some ideas of the proof of [6, Corollary, pp. 269-270]. Let \(i \in \left\{ 1,\ldots ,n\right\}\) be fixed. Let \(e_i\) denote the ith canonical vector of \(\mathbb {R} ^n\). We have \(e_i \in \overline{\Gamma _n} \subset \overline{\Gamma }\). From the concavity and homogeneity of \({\hat{G}}={\hat{F}}^{1/d}\), which belongs to \(C^1(\Gamma ) \cap C^0(\overline{\Gamma })\) since \({\hat{F}}>0\) in \(\Gamma\), we have

    $$\begin{aligned} 0 \le {\hat{G}}(e_i) \le {\hat{G}}(\lambda )+D{\hat{G}}(\lambda )(e_i-\lambda ) =D{\hat{G}}(\lambda )e_i =\frac{\partial {\hat{G}}}{\partial \lambda _i}(\lambda ). \end{aligned}$$

    Let us now prove that this inequality is strict. Assume not, and let then \(\lambda ^* \in \Gamma\) be such that \((\partial {\hat{G}}/ \partial \lambda _i)(\lambda ^*)=0\). Since \(\Gamma\) is open, there exists \(\varepsilon >0\) such that \(\lambda ^*+t e_i \in \Gamma\) for every \(t \in [0,\varepsilon ]\). By concavity, we then have

    $$\begin{aligned} \left( D{\hat{G}}(\lambda ^*+te_i)-D{\hat{G}}(\lambda ^*)\right) (te_i) \le 0, \quad \forall t \in [0,\varepsilon ]. \end{aligned}$$

    As a result,

    $$\begin{aligned} \frac{\partial {\hat{G}}}{\partial \lambda _i}(\lambda ^*+te_i)=0, \quad \forall t \in [0,\varepsilon ]. \end{aligned}$$

    Since \({\hat{F}}>0\) in \(\Gamma\), this is equivalent to

    $$\begin{aligned} \frac{\partial {\hat{F}}}{\partial \lambda _i}(\lambda ^*+te_i)=0, \quad \forall t \in [0,\varepsilon ]. \end{aligned}$$

    By analyticity, we deduce that this identity holds for any \(t \in \mathbb {R}\), in particular for negative ones. Let then

    $$\begin{aligned} T(\lambda ^*)=\inf \left\{ t \in \mathbb {R} \quad \big | \quad \lambda ^*+te_i \in \Gamma \right\} . \end{aligned}$$

    Since \(\Gamma \subset \Gamma _1\), we necessarily have \(T(\lambda ^*)>-\infty\). Integrating the identity \((\partial {\hat{F}}/\partial \lambda _i)(\lambda ^*+te_i)=0\) over \([T(\lambda ^*),0]\) then yields

    $$\begin{aligned} {\hat{F}}(\lambda ^*)={\hat{F}}(\lambda ^*+T(\lambda ^*)e_i). \end{aligned}$$

    This is a contradiction since \({\hat{F}}>0\) in \(\Gamma \ni \lambda ^*\), whereas \({\hat{F}}=0\) on \(\partial \Gamma \ni \lambda ^*+T(\lambda ^*)e_i\). Therefore, (16) holds with \(m>0\). To conclude, it remains to show that this implies the uniform ellipticity of \(F={\hat{F}} \circ \lambda\) in \({\mathcal {C}}(R)\). Let then \(A,B \in {\mathcal {C}}(R)\) with \(A \le B\). We denote the eigenvalues of A (resp. B) by \((\lambda _1,\ldots ,\lambda _n)\) (resp. \((\mu _1,\ldots ,\mu _n)\)). By definition, we have to show that

    $$\begin{aligned} m'\left\| B-A\right\| _{\mathbb {H} ^n} \le F(B)-F(A) \le M'\left\| B-A\right\| _{\mathbb {H} ^n}, \end{aligned}$$

    for some \(0<m' \le M'\) (that do not depend on AB). We only prove the first inequality, the other one being proved similarly. Since \(B-A \ge 0\), the first inequality is equivalent to

    $$\begin{aligned} m''\mathrm {Trace} \,(B-A) \le F(B)-F(A), \end{aligned}$$

    for some \(0<m''\). Since \(\mathrm {Trace} \,(B-A)=\mathrm {Trace} \,(B)-\mathrm {Trace} \,(A)\), this is equivalent to the following property for \({\hat{F}}\):

    $$\begin{aligned} m''\sum _{i=1}^n (\mu _i-\lambda _i) \le {\hat{F}}(\mu )-{\hat{F}}(\lambda ). \end{aligned}$$
    (17)

    Let us then prove this inequality. By definition, we have \(\lambda , \mu \in \Gamma (R)\). Assume first that

    $$\begin{aligned}&(\mu _1,\ldots ,\mu _{n-1},\lambda _n) \in \Gamma (R), \quad (\mu _1,\ldots ,\mu _{n-2},\lambda _{n-1},\lambda _n) \in \Gamma (R),\nonumber \\&\quad \ldots \quad (\mu _1,\lambda _2,\ldots ,\lambda _n) \in \Gamma (R). \end{aligned}$$
    (18)

    Then, taking \(t(\mu _1,\ldots ,\mu _{n-1},\mu _n)+(1-t)(\mu _1,\ldots ,\mu _{n-1},\lambda _n) \in \Gamma (R)\) in (16) and integrating over \(t \in [0,1]\), we have

    $$\begin{aligned} m(\mu _n-\lambda _n) \le {\hat{F}}(\mu _1,\ldots ,\mu _{n-1},\mu _n)-{\hat{F}}(\mu _1,\ldots ,\mu _{n-1},\lambda _n). \end{aligned}$$

    Taking then \(t(\mu _1,\ldots ,\mu _{n-1},\lambda _n)+(1-t)(\mu _1,\ldots ,\mu _{n-2},\lambda _{n-1},\lambda _n) \in \Gamma (R)\) in (16), iterating this process and summing all the obtained inequalities, we eventually obtain (17) with \(m''=m\). The general case can be deduced from the case (18) by an approximation argument (using the compactness of \(\overline{\Gamma (R)}\)).

\(\square\)
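As a small numerical sanity check of the majorization step (14) used in part (2) of the proof above (a sketch, ours), one can verify on random Hermitian matrices that the partial sums of the smallest eigenvalues behave as stated:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 5
for _ in range(500):
    Za = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    Zb = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    A, B = (Za + Za.conj().T) / 2, (Zb + Zb.conj().T) / 2
    t = rng.random()
    lam_mix = np.linalg.eigvalsh(t * A + (1 - t) * B)              # ascending order
    lam_avg = t * np.linalg.eigvalsh(A) + (1 - t) * np.linalg.eigvalsh(B)
    # (14): partial sums of the smallest eigenvalues dominate, total sums coincide.
    for k in range(1, n):
        assert lam_mix[:k].sum() >= lam_avg[:k].sum() - 1e-10
    assert np.isclose(lam_mix.sum(), lam_avg.sum())
```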

4 Additional properties for the \(\mathrm {MA}_k\)-equation

In this section, we detail the case of the k-Monge–Ampère equation, which is important for applications.

Let us first state explicitly the local regularity result that we have obtained for this equation.

Theorem 4.1

Let \(k \in \left\{ 1,\ldots ,n-1\right\}\) (\(n \ge 2\)). Let \(f \in C^{\infty }(\Omega )\) with \(f>0\) in \(\Omega\). Let \(u \in W^{2,p}_{{\mathrm {loc}}}(\Omega )\) with \((D^2_{\mathbb {C}}u)(z) \in {\mathcal {C}}_k'=\lambda ^{-1}(\Gamma _k')\) for a.e. \(z \in \Omega\) satisfy (almost everywhere)

$$\begin{aligned} \mathrm {MA}_k(\lambda (D^2_{\mathbb {C}}u))=f \quad \text{ in } \Omega . \end{aligned}$$

If \(p>n(C_n^k-1)\), then \(u \in C^{\infty }(\Omega )\).

We recall once again that, unless \(k=1\) (the Monge–Ampère equation), it is not known whether this condition on p is sharp. However, we will provide in Sect. 4.3 an example that shows that a threshold does exist for such a result to be valid.

4.1 k-Plurisubharmonic functions

The importance of the operator \(\mathrm {MA}_k\) lies in its connection with the notion of k-plurisubharmonic function.

Definition 4.2

A function \(u:\Omega \longrightarrow \mathbb {R}\) is said to be k-plurisubharmonic in \(\Omega\) if it is upper semi-continuous and if it is subharmonic whenever it is restricted to any affine complex plane of dimension k. We denote the set of such functions by \(k\text{-PSH}(\Omega )\).

For instance, 1-plurisubharmonic functions are the plurisubharmonic functions, and n-plurisubharmonic functions are the subharmonic functions.

To make the link between k-plurisubharmonic functions and the operator \(\mathrm {MA}_k\) clearer, we first introduce some notations.

Definition 4.3

Let \(A \in \mathbb {C} ^{n \times n}\) and \(k \in \left\{ 1,\ldots ,n\right\}\). The operator \(D_A: \Lambda ^k \mathbb {C} ^n \longrightarrow \Lambda ^k \mathbb {C} ^n\) is the linear map obtained by letting A act as a derivation on the space \(\Lambda ^k \mathbb {C} ^n\) of k-vectors. On simple k-vectors, this means that

$$\begin{aligned}D_A(v_1\wedge v_2 \wedge \cdots \wedge v_k)&= (Av_1)\wedge v_2 \wedge \cdots \wedge v_k +v_1\wedge (Av_2) \wedge \cdots \wedge v_k +\cdots \\ & \quad +\,v_1\wedge v_2 \wedge \cdots \wedge (Av_k), \end{aligned}$$

for every \(v_1,\ldots , v_k \in \mathbb {C} ^n\).

By choosing the canonical basis of the complex space \(\Lambda ^k \mathbb {C} ^n\) (recall that \(\dim \Lambda ^k \mathbb {C} ^n=C_n^k\)), \(D_A\) can be identified with a matrix of size \(C_n^k \times C_n^k\), which will still be denoted by \(D_A\).

Example 4.4

Let \(A=\left( u_{i {\bar{j}}}\right) _{1 \le i,j \le n}\).

  • For \(n=3\) and \(k=2\), we have

    $$\begin{aligned} D_A= \begin{pmatrix} u_{1{\bar{1}}}+u_{2{\bar{2}}}&{} u_{2{\bar{3}}} &{} -u_{1{\bar{3}}}\\ u_{3{\bar{2}}} &{} u_{1{\bar{1}}}+u_{3{\bar{3}}} &{} u_{1{\bar{2}}}\\ -u_{3{\bar{1}}} &{} u_{2{\bar{1}}} &{} u_{2{\bar{2}}}+u_{3{\bar{3}}} \end{pmatrix}. \end{aligned}$$
  • For \(n=4\) and \(k=3\), we have

    $$\begin{aligned} D_A= \begin{pmatrix} u_{1{\bar{1}}}+u_{2{\bar{2}}}+u_{3{\bar{3}}}&{} u_{3{\bar{4}}} &{} u_{1{\bar{4}}}&{} -u_{2{\bar{4}}}\\ u_{4{\bar{3}}} &{} u_{1{\bar{1}}}+u_{2{\bar{2}}}+u_{4{\bar{4}}} &{}- u_{1{\bar{3}}}&{}u_{2{\bar{3}}} \\ u_{4{\bar{1}}} &{} -u_{3{\bar{1}}} &{} u_{2{\bar{2}}}+u_{3{\bar{3}}}+u_{4{\bar{4}}}&{} u_{2{\bar{1}}}\\ -u_{4{\bar{2}}} &{} u_{3{\bar{2}}} &{} u_{1{\bar{2}}}&{} u_{3{\bar{3}}}+u_{4{\bar{4}}}+u_{1{\bar{1}}} \end{pmatrix}. \end{aligned}$$
  • For \(n=4\) and \(k=2\), we have

    $$\begin{aligned} D_A= \begin{pmatrix} u_{1{\bar{1}}}+u_{2{\bar{2}}}&{} u_{2{\bar{3}}} &{} u_{2{\bar{4}}}&{} -u_{1{\bar{3}}}&{} -u_{1{\bar{4}}}&{} 0\\ u_{3{\bar{2}}} &{} u_{1{\bar{1}}}+u_{3{\bar{3}}} &{} u_{3{\bar{4}}}&{} u_{1{\bar{2}}}&{}0 &{} -u_{1{\bar{4}}}\\ u_{4{\bar{2}}} &{} u_{4{\bar{3}}} &{} u_{1{\bar{1}}}+u_{4{\bar{4}}}&{} 0 &{} u_{1{\bar{2}}} &{} u_{1{\bar{3}}} \\ -u_{3{\bar{1}}} &{} u_{2{\bar{1}}} &{} 0&{} u_{2{\bar{2}}}+u_{3{\bar{3}}}&{}u_{3{\bar{4}}} &{} -u_{2{\bar{4}}}\\ -u_{4{\bar{1}}} &{} 0 &{} u_{2{\bar{1}}}&{} u_{4{\bar{3}}}&{}u_{2{\bar{2}}}+u_{4{\bar{4}}} &{} u_{2{\bar{3}}} \\ 0 &{} -u_{4{\bar{1}}} &{} u_{3{\bar{1}}}&{} -u_{4{\bar{2}}}&{} u_{3{\bar{2}}}&{} u_{3{\bar{3}}}+u_{4{\bar{4}}} \end{pmatrix}. \end{aligned}$$

We can check that \(D_A\) is Hermitian whenever A is, and that the spectrum of \(D_A\) is exactly

$$\begin{aligned} \left\{ \lambda _{i_1}(A)+\cdots +\lambda _{i_k}(A) \quad \big | \quad (i_1,\ldots ,i_k) \in E^k_n\right\} , \end{aligned}$$

so that it becomes clear that

$$\begin{aligned} \mathrm {MA}_k(\lambda (A))=\det (D_A), \end{aligned}$$

and

$$\begin{aligned} {\mathcal {C}}_k'= \left\{ A \in \mathbb {H} ^n \quad \big | \quad D_A>0\right\} . \end{aligned}$$
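These facts are easy to confirm numerically. The following minimal sketch (the helper names build_DA and ma_k are ours, introduced only for this illustration) builds the matrix of \(D_A\) in the lexicographically ordered canonical basis of \(\Lambda ^k \mathbb {C} ^n\) and checks that its spectrum, determinant and positivity match the statements above; the basis ordering may differ from the one used in Example 4.4, but this does not affect the spectrum or the determinant.

    import itertools
    import numpy as np

    def build_DA(A, k):
        # Matrix of the derivation D_A on Lambda^k C^n, in the canonical basis
        # e_{i_1} ^ ... ^ e_{i_k} with i_1 < ... < i_k (lexicographic order).
        n = A.shape[0]
        basis = list(itertools.combinations(range(n), k))
        index = {I: a for a, I in enumerate(basis)}
        DA = np.zeros((len(basis), len(basis)), dtype=complex)
        for a, I in enumerate(basis):
            for pos, i in enumerate(I):            # replace the slot `pos` by A e_i
                for j in range(n):
                    J = list(I)
                    J[pos] = j
                    if len(set(J)) < k:
                        continue                   # a wedge with a repeated factor vanishes
                    inv = sum(1 for p in range(k) for q in range(p + 1, k) if J[p] > J[q])
                    sign = -1 if inv % 2 else 1    # sign of the permutation sorting J
                    DA[index[tuple(sorted(J))], a] += sign * A[j, i]
        return DA

    def ma_k(eigs, k):
        # MA_k(lambda): product of all k-fold sums of the entries of lambda
        return np.prod([sum(c) for c in itertools.combinations(eigs, k)])

    n, k = 4, 2
    rng = np.random.default_rng(0)
    B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    A = B + B.conj().T                                   # a random Hermitian matrix
    lam = np.linalg.eigvalsh(A)
    DA = build_DA(A, k)
    k_sums = sorted(sum(c) for c in itertools.combinations(lam, k))
    assert np.allclose(DA, DA.conj().T)                           # D_A is Hermitian
    assert np.allclose(sorted(np.linalg.eigvalsh(DA)), k_sums)    # spectrum of D_A is the k-fold sums
    assert np.isclose(np.linalg.det(DA).real, ma_k(lam, k))       # MA_k(lambda(A)) = det(D_A)
    assert (min(k_sums) > 0) == (np.min(np.linalg.eigvalsh(DA)) > 0)   # A in C'_k iff D_A > 0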

This cone is linked to the notion of k-plurisubharmonic function as follows: when \(u \in C^2(\Omega )\), we have the characterization

$$\begin{aligned} u \text { is { k}-plurisubharmonic in } \Omega \quad \Longleftrightarrow \quad (D^2_{\mathbb {C}}u)(z) \in \overline{{\mathcal {C}}_k'}, \quad \forall z \in \Omega . \end{aligned}$$
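For instance, the function \(u(z)=-\left| z_1\right| ^2+\left| z_2\right| ^2+\cdots +\left| z_n\right| ^2\), whose complex Hessian is the constant matrix \(\mathrm {diag}(-1,1,\ldots ,1)\), is k-plurisubharmonic for every \(k \ge 2\) but not plurisubharmonic. This can be read off the eigenvalue criterion; a minimal check (here for \(n=4\)):

    import itertools

    lam = [-1, 1, 1, 1]   # eigenvalues of the complex Hessian of -|z_1|^2+|z_2|^2+|z_3|^2+|z_4|^2
    for k in range(1, 5):
        # D_A >= 0  iff  every sum of k eigenvalues is >= 0
        print(k, all(sum(c) >= 0 for c in itertools.combinations(lam, k)))
        # prints: 1 False, 2 True, 3 True, 4 True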

For more information about k-plurisubharmonic functions, we refer the reader, for instance, to [14, 15].

4.2 Comparison between \(\mathrm {MA}_k\) and \(\mathrm {MA}_{k-1}\)

In this section, we show that the crucial domination property (12) holds for \(\mathrm {MA}_k\). In fact, we prove the more precise property (13), which allows us to compare \(\mathrm {MA}_k\) and \(\mathrm {MA}_{k-1}\).

Let \(k \in \left\{ 2,\ldots ,n\right\}\) be fixed. Let us first show the inclusion

$$\begin{aligned} \Gamma _{k-1}' \subset \Gamma _k'. \end{aligned}$$

Let \(\lambda \in \Gamma _{k-1}'\); we must show that \(\lambda _{i_1}+\cdots +\lambda _{i_k}>0\) for every \((i_1,\ldots ,i_k) \in E^k_n\). We set \(I_k= \left\{ i_1,\ldots ,i_k\right\}\) and

$$\begin{aligned} E^{k-1}_{I_k}= \left\{ (j_1,\ldots ,j_{k-1}) \quad \big | \quad j_1,\ldots ,j_{k-1} \in I_k, \quad j_1<\ldots <j_{k-1}\right\} . \end{aligned}$$

Then, the claim follows from the identity

$$\begin{aligned} 0<\sum _{(j_1,\ldots ,j_{k-1}) \in E^{k-1}_{I_k}} \left( \lambda _{j_1}+\cdots +\lambda _{j_{k-1}}\right) =(k-1)\left( \lambda _{i_1}+\cdots +\lambda _{i_k}\right) . \end{aligned}$$
(19)

To show this identity, we count how many times each of \(\lambda _{i_1}, \lambda _{i_2},\ldots\) appears in the sum on the left-hand side. For \(\lambda _{i_1}\), the only possibility is to have \(j_1=i_1\). Therefore, there are \(C_{k-1}^{k-2}=k-1\) such terms. For \(\lambda _{i_2}\), there are two and only two possibilities, namely \(j_1=i_2\) or \(j_2=i_2\) (in this second case, necessarily \(j_1=i_1\)). This gives \(C_{k-2}^{k-2}+C_{k-2}^{k-3}\) terms, which is equal to \(C_{k-1}^{k-2}=k-1\) by Pascal’s rule. Repeating this reasoning leads to (19).
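The identity (19) is also easy to check on random data; a minimal sketch:

    import itertools
    import numpy as np

    rng = np.random.default_rng(1)
    n, k = 6, 4
    lam = rng.standard_normal(n)
    for I in itertools.combinations(range(n), k):
        lhs = sum(sum(lam[j] for j in J) for J in itertools.combinations(I, k - 1))
        rhs = (k - 1) * sum(lam[i] for i in I)
        assert np.isclose(lhs, rhs)    # identity (19), for every k-tuple I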

Let us now show the inequality

$$\begin{aligned} \mathrm {MA}_{k-1}(\lambda )^{1/C_n^{k-1}}\le \frac{k-1}{k}\, \mathrm {MA}_k(\lambda )^{1/C_n^k}, \end{aligned}$$

for \(\lambda \in \Gamma _{k-1}'\). This is equivalent to showing that

$$\begin{aligned} \prod _{(i_1,\ldots ,i_k) \in E_n^k} \left( \lambda _{i_1}+\cdots +\lambda _{i_k}\right) \ge \left( \frac{k}{k-1}\right) ^{C_n^k} \left( \prod _{(j_1,\ldots ,j_{k-1}) \in E^{k-1}_n} \left( \lambda _{j_1}+\cdots +\lambda _{j_{k-1}}\right) \right) ^{C_n^k /C_n^{k-1}}. \end{aligned}$$

Using (19) and the inequality of arithmetic and geometric means (note that \(\mathrm {card} \,E^{k-1}_{I_k}=k\)), we have

$$\begin{aligned} \begin{array}{rl} \displaystyle \lambda _{i_1}+\cdots +\lambda _{i_k} &{}\displaystyle =\frac{k}{k-1} \frac{\sum _{(j_1,\ldots ,j_{k-1}) \in E^{k-1}_{I_k}} \left( \lambda _{j_1}+\cdots +\lambda _{j_{k-1}}\right) }{k}\\ &{}\displaystyle \ge \frac{k}{k-1} \left( \prod _{(j_1,\ldots ,j_{k-1}) \in E^{k-1}_{I_k}} \left( \lambda _{j_1}+\cdots +\lambda _{j_{k-1}}\right) \right) ^{1/k}. \end{array} \end{aligned}$$

To conclude, it remains to observe the identity

$$\begin{aligned}&\prod _{(i_1,\ldots ,i_k) \in E^k_n} \left( \prod _{(j_1,\ldots ,j_{k-1}) \in E^{k-1}_{I_k}} \left( \lambda _{j_1}+\cdots +\lambda _{j_{k-1}}\right) \right) \\&\quad = \left( \prod _{(j_1,\ldots ,j_{k-1}) \in E^{k-1}_n} \left( \lambda _{j_1}+\cdots +\lambda _{j_{k-1}}\right) \right) ^{kC_n^k /C_n^{k-1}}, \end{aligned}$$

which can be established as before by counting: each factor \(\lambda _{j_1}+\cdots +\lambda _{j_{k-1}}\) with \((j_1,\ldots ,j_{k-1}) \in E^{k-1}_n\) appears in the left-hand side once for each of the \(n-k+1\) ways of completing \(\left\{ j_1,\ldots ,j_{k-1}\right\}\) into a k-element subset of \(\left\{ 1,\ldots ,n\right\}\) (note that \(kC_n^k/C_n^{k-1}=n-k+1\)). \(\square\)
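As a sanity check, the resulting comparison between the normalized operators can be tested numerically on random points of \(\Gamma _{k-1}'\); a short sketch:

    import itertools
    from math import comb
    import numpy as np

    def ma(lam, k):
        # MA_k(lambda): product of all k-fold sums of the entries of lambda
        return np.prod([sum(c) for c in itertools.combinations(lam, k)])

    rng = np.random.default_rng(2)
    n, k = 5, 3
    for _ in range(1000):
        lam = rng.standard_normal(n)
        if all(sum(c) > 0 for c in itertools.combinations(lam, k - 1)):   # lam in Gamma'_{k-1}
            lhs = ma(lam, k - 1) ** (1 / comb(n, k - 1))
            rhs = (k - 1) / k * ma(lam, k) ** (1 / comb(n, k))
            assert lhs <= rhs + 1e-12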

4.3 A lower bound for p

Let us conclude Sect. 4 by mentioning that, even if the condition \(p>n(C_n^k-1)\) is not known to be sharp, some lower bound on p is nevertheless necessary for the conclusion of Theorem 4.1 to hold. More precisely, we will show that the threshold cannot be smaller than

$$\begin{aligned} p^*=(n-k)C_n^k. \end{aligned}$$

Note that \(p^*\) equals \(n(C_n^k-1)\) only for \(k=1\) (and for \(k=n\), but we have excluded this value). The lower bound follows from the following Pogorelov-type example, which builds on the one of Błocki [3].

Proposition 4.5

Let \(1 \le m<n\) and \(\beta >0\). The function

$$\begin{aligned} u(z',z'')=\left( 1+\left| z'\right| ^2\right) \left| z''\right| ^{2\beta }, \end{aligned}$$

where \((z',z'') \in \mathbb {C} ^m \times \mathbb {C} ^{n-m}\), satisfies:

  (i) \(u \in C^{\infty }(\mathbb {C} ^n \backslash N)\), where \(N= \left\{ z \in \mathbb {C} ^n \quad \big | \quad z''=0\right\}\), and \((D^2_{\mathbb {C}}u)(z)>0\) for every \(z \in \mathbb {C} ^n \backslash N\).

  (ii) \(u \in W^{2,p}_{{\mathrm {loc}}}(\mathbb {C} ^n)\) (\(p \ge 1\)) if and only if

    $$\begin{aligned} \beta \ge 1 \quad \text{ or } \quad \left( \beta<1 \quad \text{ and } \quad p<\frac{n-m}{1-\beta }\right) . \end{aligned}$$
    (20)

  (iii) \(u \in C_{{\mathrm {loc}}}^{1,\alpha }(\mathbb {C} ^n)\) (\(\alpha \in [0,1]\)) if and only if \(\alpha \le 2\beta -1\).

  (iv) For \(\beta =1-C_m^k/C_n^k\) (\(C_m^k=0\) if \(m<k\)), we have \(\mathrm {MA}_k(\lambda (D^2_{\mathbb {C}}u)) \in C^{\infty }(\mathbb {C} ^n)\) with \(\mathrm {MA}_k(\lambda (D^2_{\mathbb {C}}u))>0\) in \(\mathbb {C} ^n\).

Taking \(m=k\) in Proposition 4.5, item (iv) applies with \(\beta =1-1/C_n^k\), and the condition (20) then gives \(p<(n-k)C_n^k\); this is the largest exponent obtainable from (20) among the admissible choices of m. Consequently, we have constructed an example that satisfies

$$\begin{aligned} \left. \begin{array}{c} u \in W^{2,p}_{{\mathrm {loc}}}(\mathbb {C} ^n), \quad \forall p \in \left[ 1,(n-k)C_n^k\right) , \\ D^2_{\mathbb {C}}u>0 \text{ in } \mathbb {C} ^n \backslash \left\{ z \in \mathbb {C} ^n \quad \big | \quad z''=0\right\} , \\ \mathrm {MA}_k(\lambda (D^2_{\mathbb {C}}u)) \in C^{\infty }(\mathbb {C} ^n), \\ \mathrm {MA}_k(\lambda (D^2_{\mathbb {C}}u))>0 \text{ in } \mathbb {C} ^n, \end{array}\right\} \quad \text{ but } \quad u \not \in C_{{\mathrm {loc}}}^{1,\alpha }(\mathbb {C} ^n), \quad \forall \alpha >1-\frac{2}{C_n^k}. \end{aligned}$$
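For small n and k, the gap between the counterexample threshold \(p^*=(n-k)C_n^k\) and the exponent \(n(C_n^k-1)\) of Theorem 4.1 can be tabulated in a few lines (an illustrative sketch):

    from math import comb

    for n in range(2, 6):
        for k in range(1, n):
            p_star = (n - k) * comb(n, k)          # threshold given by the counterexample
            p_thm = n * (comb(n, k) - 1)           # threshold assumed in Theorem 4.1
            print(n, k, p_star, p_thm, p_star == p_thm)   # equality only for k = 1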

Remark 4.6

The same function can be used to obtain lower bounds for various degenerate elliptic equations (Monge–Ampère, k-Hessian equation, etc.), although the resulting bounds are not always optimal.

Proof of (i)

It is clear that \(D^2_{\mathbb {C}}u\) is smooth off N. Let \(z \in \mathbb {C} ^n \backslash N\). A direct computation shows that we have the block decomposition

$$\begin{aligned} (D^2_{\mathbb {C}}u)(z) = \left( \begin{array}{c|c} \delta _{ij}\left| z''\right| ^{2\beta } &{} \beta {\bar{z}}_i z_j\left| z''\right| ^{2\beta -2} \\ \hline \beta {\bar{z}}_i z_j \left| z''\right| ^{2\beta -2} &{} \beta \left( 1+\left| z'\right| ^2\right) \left( (\beta -1){\bar{z}}_i z_j \left| z''\right| ^{2\beta -4}+\delta _{ij} \left| z''\right| ^{2\beta -2}\right) \end{array}\right) , \end{aligned}$$
(21)

where \(\delta _{ij}\) denotes the Kronecker delta and \(1 \le i,j \le m\) for the left upper block, \(1 \le i \le m\) and \(m+1 \le j \le n\) for the right upper block, etc.

Let us now find all the eigenvalues of \((D^2_{\mathbb {C}}u)(z)\). From this expression, we see that:

  • \(\left| z''\right| ^{2\beta }\) is an eigenvalue of multiplicity at least \(m-1\). Indeed, after subtracting \(\left| z''\right| ^{2\beta } \mathrm {Id}\), the first m rows reduce to the right upper block, which has rank at most 1, while the last \(n-m\) rows have rank at most \(n-m\); the rank of \(D^2_{\mathbb {C}}u-\left| z''\right| ^{2\beta } \mathrm {Id}\) is thus at most \(1+n-m\). By the rank-nullity theorem, \(\left| z''\right| ^{2\beta }\) is an eigenvalue of multiplicity at least \(n-(1+n-m)=m-1\).

  • Similarly, \(\beta \left( 1+\left| z'\right| ^2\right) \left| z''\right| ^{2\beta -2}\) is an eigenvalue of multiplicity at least \(n-m-1\). Moreover, if it coincides with the first eigenvalue, then this common value has multiplicity at least \(n-2\).

Therefore, in every case we know \(n-2\) eigenvalues, and to find the two remaining ones we compute their sum S and product P. The computation of S is easy. Indeed, the sum of all the eigenvalues is

$$\begin{aligned} \mathrm {Trace} \,(D^2_{\mathbb {C}}u)=S+(m-1)\left| z''\right| ^{2\beta }+(n-m-1)\beta \left( 1+\left| z'\right| ^2\right) \left| z''\right| ^{2\beta -2}. \end{aligned}$$

On the other hand, from (21), we have

$$\begin{aligned} \mathrm {Trace} \,(D^2_{\mathbb {C}}u) =m\left| z''\right| ^{2\beta }+\beta \left( 1+\left| z'\right| ^2\right) (n-m+\beta -1)\left| z''\right| ^{2\beta -2}. \end{aligned}$$

Therefore,

$$\begin{aligned} S=\left| z''\right| ^{2\beta }+\beta ^2 \left( 1+\left| z'\right| ^2\right) \left| z''\right| ^{2\beta -2}. \end{aligned}$$

In order to find the product P, we use its relation with the determinant of \(D^2_{\mathbb {C}}u\). Since the determinant is the product of the eigenvalues, we have

$$\begin{aligned} \det (D^2_{\mathbb {C}}u)= & {} P\left( \left| z''\right| ^{2\beta }\right) ^{m-1} \left( \beta \left( 1+\left| z'\right| ^2\right) \left| z''\right| ^{2\beta -2}\right) ^{n-m-1}\nonumber \\= & {} P \beta ^{n-m-1} \left( 1+\left| z'\right| ^2\right) ^{n-m-1} \left| z''\right| ^{2\beta (n-2)-2(n-m-1)}. \end{aligned}$$
(22)

Let us now compute \(\det (D^2_{\mathbb {C}}u)\) directly from (21). Performing the row operations

$$\begin{aligned} L_i \leftarrow L_i -\sum _{r=1}^m \beta {\bar{z}}_i z_r \left| z''\right| ^{-2} L_r, \end{aligned}$$

for each \(i \in \left\{ m+1,\ldots ,n\right\}\), we obtain

$$\begin{aligned} \det (D^2_{\mathbb {C}}u) = \det \left( \begin{array}{c|c} \delta _{ij}\left| z''\right| ^{2\beta } &{} \beta {\bar{z}}_i z_j\left| z''\right| ^{2\beta -2} \\ \hline 0 &{} a_{ij} \end{array}\right) , \end{aligned}$$

where we introduced

$$\begin{aligned} a_{ij}=\beta \left( \beta -\left( 1+\left| z'\right| ^2\right) \right) {\bar{z}}_i z_j \left| z''\right| ^{2\beta -4} +\beta \delta _{ij}\left( 1+\left| z'\right| ^2\right) \left| z''\right| ^{2\beta -2}. \end{aligned}$$

Denoting \(A=(a_{ij})_{m+1 \le i,j \le n}\), we have

$$\begin{aligned} \det (D^2_{\mathbb {C}}u)=\left( \left| z''\right| ^{2\beta }\right) ^m \det (A). \end{aligned}$$

To compute \(\det (A)\), we note that A is the sum of an invertible matrix and a rank-one matrix, for which the matrix determinant lemma gives

$$\begin{aligned} \det (J+v_1 v_2^{\mathrm {Tr}})=\det (J)(1+v_2^{\mathrm {Tr}} J^{-1}v_1), \end{aligned}$$

for any invertible J and vectors \(v_1,v_2\). Using this formula, we obtain

$$\begin{aligned} \det (A)= \left( \beta \left( 1+\left| z'\right| ^2\right) \left| z''\right| ^{2\beta -2}\right) ^{n-m} \frac{\beta }{1+\left| z'\right| ^2}. \end{aligned}$$

In summary,

$$\begin{aligned} \det (D^2_{\mathbb {C}}u)=\beta ^{n-m+1} \left( 1+\left| z'\right| ^2\right) ^{n-m-1} \left| z''\right| ^{2\beta n-2(n-m)}. \end{aligned}$$

Comparing with (22), we finally obtain that

$$\begin{aligned} P=\beta ^2 \left| z''\right| ^{4\beta -2}. \end{aligned}$$

From the knowledge of the sum S and product P, the two remaining eigenvalues are given by

$$\begin{aligned} \frac{S+\sqrt{S^2-4P}}{2} =\phi (z) \left| z''\right| ^{2\beta -2} \quad \text{ and } \quad \frac{2P}{S+\sqrt{S^2-4P}} =\frac{\beta ^2}{\phi (z)} \left| z''\right| ^{2\beta }, \end{aligned}$$

where \(\phi\) is the smooth positive function defined by

$$\begin{aligned} \phi (z)=\frac{\left| z''\right| ^2+\beta ^2 \left( 1+\left| z'\right| ^2\right) +\sqrt{ \left( \left| z''\right| ^2+\beta ^2 \left( 1+\left| z'\right| ^2\right) \right) ^2 -4\beta ^2 \left| z''\right| ^2 } }{2}. \end{aligned}$$

Finally, note that all the eigenvalues are positive outside N. \(\square\)
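Taking the block formula (21) as given, the eigenvalue computation above can be checked numerically at a sample point off N; a minimal sketch (the helper name complex_hessian is ours):

    import numpy as np

    def complex_hessian(zp, zpp, beta):
        # Block formula (21) for the complex Hessian of u = (1+|z'|^2) |z''|^{2 beta}
        m, nm = len(zp), len(zpp)
        r2 = np.sum(np.abs(zpp) ** 2)          # |z''|^2
        s2 = np.sum(np.abs(zp) ** 2)           # |z'|^2
        H = np.zeros((m + nm, m + nm), dtype=complex)
        H[:m, :m] = r2 ** beta * np.eye(m)
        H[:m, m:] = beta * r2 ** (beta - 1) * np.outer(np.conj(zp), zpp)
        H[m:, :m] = H[:m, m:].conj().T
        H[m:, m:] = beta * (1 + s2) * ((beta - 1) * r2 ** (beta - 2) * np.outer(np.conj(zpp), zpp)
                                       + r2 ** (beta - 1) * np.eye(nm))
        return H

    m, nm, beta = 2, 3, 0.7                    # n = m + nm = 5
    rng = np.random.default_rng(3)
    zp = rng.standard_normal(m) + 1j * rng.standard_normal(m)
    zpp = rng.standard_normal(nm) + 1j * rng.standard_normal(nm)
    H = complex_hessian(zp, zpp, beta)
    r2, s2 = np.sum(np.abs(zpp) ** 2), np.sum(np.abs(zp) ** 2)
    S = r2 ** beta + beta ** 2 * (1 + s2) * r2 ** (beta - 1)     # sum of the two remaining eigenvalues
    P = beta ** 2 * r2 ** (2 * beta - 1)                         # their product
    expected = sorted([r2 ** beta] * (m - 1)
                      + [beta * (1 + s2) * r2 ** (beta - 1)] * (nm - 1)
                      + [(S + np.sqrt(S ** 2 - 4 * P)) / 2, (S - np.sqrt(S ** 2 - 4 * P)) / 2])
    assert np.allclose(sorted(np.linalg.eigvalsh(H)), expected)  # the eigenvalues found above
    assert np.min(np.linalg.eigvalsh(H)) > 0                     # D^2_C u > 0 off N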

Proof of (ii)

We only investigate the integrability of the second-order derivatives, since one can check afterwards that the first-order derivatives yield a less restrictive condition on the integrability exponent p. We recall that the only obstruction to the integrability of \(u_{i{\bar{j}}}\) is near N; see the expression (21). From this expression, we also see that the “worst term” is then

$$\begin{aligned} \beta \left( 1+\left| z'\right| ^2\right) (\beta -1){\bar{z}}_i z_j\left| z''\right| ^{2\beta -4}, \end{aligned}$$

(unless \(\beta =1\), but in that case we clearly have \(u_{i{\bar{j}}} \in C^0(\mathbb {C} ^n)\) for every i, j).

Let us then consider the product \(B_{R}' \times B_{\varepsilon }''\), with \(R,\varepsilon >0\), where \(B_r'\) (resp. \(B_r''\)) denotes the open ball of \(\mathbb {C} ^m\) (resp. \(\mathbb {C} ^{n-m}\)) with center 0 and radius \(r>0\). We recall that \(\mu _d\) denotes the Lebesgue measure in \(\mathbb {C} ^d\). We have

$$\begin{aligned}&\left\| \left( 1+\left| z'\right| ^2\right) {\bar{z}}_i z_j\left| z''\right| ^{2\beta -4}\right\| ^p_{L^p(B_R' \times B_{\varepsilon }'')} =\left( \int _{B_R'}\left( 1+\left| z'\right| ^2\right) ^p d\mu _m\right) \\&\quad \times \,\left( \int _{B_{\varepsilon }''}\left(\left| z_i\right| \left| z_j\right|\right )^p \left| z''\right| ^{(2\beta -4)p}d\mu _{n-m}\right) . \end{aligned}$$

The first integral on the right-hand side is clearly finite. For the second one, using a standard formula (see e.g. [16, p. 23]), we have

$$\begin{aligned} \int _{B_{\varepsilon }''}\left(\left| z_i\right| \left| z_j\right|\right)^p \left| z''\right| ^{(2\beta -4)p}d\mu _{n-m} =C(n-m,\varepsilon ,p)\int _0^{\varepsilon } r^{2(n-m)-1+2p+(2\beta -4)p} dr. \end{aligned}$$

It is well known that this integral is finite if and only if the power of r is larger than \(-1\), that is, if and only if \(2(n-m)-2p(1-\beta )>0\), which gives the desired condition (20). \(\square\)

Proof of (iv)

Let us now compute \(\mathrm {MA}_k(\lambda (D^2_{\mathbb {C}}u))\). We recall that

$$\begin{aligned} \mathrm {MA}_k(\lambda )= \prod _{(i_1,\ldots ,i_k) \in E_n^k} \left( \lambda _{i_1}+\cdots +\lambda _{i_k}\right) . \end{aligned}$$

From what precedes, we know that there are m eigenvalues of the form \(\left| z''\right| ^{2\beta }\) and \(n-m\) eigenvalues of the form \(\left| z''\right| ^{2\beta -2}\) (by “of the form” we mean up to the multiplication by a smooth positive function). Now, observe that:

  • The sum \(\lambda _{i_1}+\cdots +\lambda _{i_k}\) is of the form \(\left| z''\right| ^{2\beta }\) if so is each element of the sum. There are \(C_m^k\) possibilities for this situation to happen (\(C_m^k=0\) if \(m<k\)).

  • In all the other cases, the sum \(\lambda _{i_1}+\cdots +\lambda _{i_k}\) is of the form \(\left| z''\right| ^{2\beta -2}\). Since \(\mathrm {MA}_k\) is a product of \(C_n^k\) factors overall, such sums appear \(C_n^k-C_m^k\) times.

In summary, we have

$$\begin{aligned} \mathrm {MA}_k(\lambda (D^2_{\mathbb {C}}u)) =\psi (z) \left| z''\right| ^{C_m^k(2\beta )+(C_n^k-C_m^k)(2\beta -2)}, \end{aligned}$$

for some smooth positive function \(\psi\). Consequently, \(\mathrm {MA}_k(\lambda (D^2_{\mathbb {C}}u)) \in C^{\infty }(\mathbb {C} ^n)\) when the power of \(\left| z''\right|\) is exactly equal to zero, which gives the desired condition on \(\beta\), namely

$$\begin{aligned} \beta =1-\frac{C_m^k}{C_n^k}. \end{aligned}$$

For this value of \(\beta\), we also have \(\mathrm {MA}_k(\lambda (D^2_{\mathbb {C}}u))=\psi >0\) in \(\mathbb {C} ^n\). \(\square\)
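As a final arithmetic check, one can verify in a few lines that the exponent of \(\left| z''\right|\) above vanishes precisely for this value of \(\beta\):

    from math import comb

    for n in range(2, 7):
        for k in range(1, n):
            for m in range(1, n):
                cmk = comb(m, k) if m >= k else 0          # C_m^k, with C_m^k = 0 if m < k
                beta = 1 - cmk / comb(n, k)
                power = cmk * 2 * beta + (comb(n, k) - cmk) * (2 * beta - 2)
                assert abs(power) < 1e-12                  # the power of |z''| vanishes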