1 Introduction

This paper is concerned with global and interior second derivative estimates for solutions of partial differential equations of Monge–Ampère type (MATEs) that arise in optimal transportation and geometric optics and are embraced by the notion of the "generated prescribed Jacobian equation" (GPJE) introduced in [16]. Such equations can be written in the general form

\det\{D^2u - A(\cdot,u,Du)\} = B(\cdot,u,Du),
(1.1)

where $A$ and $B$ are respectively $n\times n$ symmetric matrix valued and scalar functions on a domain $U \subset \mathbb{R}^n\times\mathbb{R}\times\mathbb{R}^n$, and $Du$ and $D^2u$ denote respectively the gradient vector and Hessian matrix of the scalar function $u$, which is defined on a bounded domain $\Omega\subset\mathbb{R}^n$ for which the sets

U_x = \{(u,p)\in\mathbb{R}\times\mathbb{R}^n \mid (x,u,p)\in U\}

are non-empty for all $x\in\Omega$. A solution $u\in C^2(\Omega)$ of (1.1) is called elliptic (degenerate elliptic) whenever $D^2u - A(\cdot,u,Du) > 0$ $(\ge 0)$, which implies $B > 0$ $(\ge 0)$.

We shall establish second derivative bounds for generated prescribed Jacobian equations under the hypotheses G1, G2, G1*, G3w and G4w introduced in [16], which extend the conditions A1, A2 and A3w for optimal transportation equations introduced in [20]. These are degenerate versions of the conditions A1, A2 and A3 for regularity originating in [11]. In fact we will relabel them here, replacing G in [16] by A, to make this extension clearer. First we need to suppose that there exists a smooth generating function $g$ defined on a domain $\Gamma\subset\mathbb{R}^n\times\mathbb{R}^n\times\mathbb{R}$ for which the sets

\Gamma_{x,y} = \{z\in\mathbb{R} \mid (x,y,z)\in\Gamma\}

are convex (and hence open intervals). Setting

U = \{(x,\, g(x,y,z),\, g_x(x,y,z)) \mid (x,y,z)\in\Gamma\},

we then assume

  • A1: For each $(x,u,p)\in U$, there exists a unique point $(x,y,z)\in\Gamma$ satisfying

    g(x,y,z) = u, \quad g_x(x,y,z) = p.
  • A2: $g_z < 0$, $\det E \ne 0$ in $\Gamma$, where $E$ is the $n\times n$ matrix given by

    E = [E_{ij}] = \big[g_{x_iy_j} - (g_z)^{-1}g_{x_iz}\,g_{y_j}\big].

Defining $Y(x,u,p) = y$, $Z(x,u,p) = z$ in A1, we thus obtain mappings $Y: U\to\mathbb{R}^n$, $Z: U\to\mathbb{R}$ with $Y_p = E^{-1}$. The matrix function $A$ in the corresponding generated prescribed Jacobian equation is then given by

A(x,u,p) = -Y_p^{-1}(Y_x + Y_u\otimes p) = g_{xx}[x, Y, Z],
(1.2)

where $Y = Y(x,u,p)$ and $Z = Z(x,u,p)$. Note that the Jacobian determinant of the mapping $(y,z)\mapsto(g_x, g)(x,y,z)$ is $g_z\det E \ne 0$ by A2, so that $Y$ and $Z$ are accordingly smooth. In particular, it suffices to have $g\in C^2(\Gamma)$ to define $Y, Z\in C^1(U)$, $A\in C^0(U)$. The reader is referred to [16] for more details. We also mention though that the special case of optimal transportation is given by taking

g(x,y,z) = c(x,y) - z,
(1.3)

where $c$ is a cost function defined on a domain $D\subset\mathbb{R}^n\times\mathbb{R}^n$, so that $\Gamma = D\times\mathbb{R}$, $g_x = c_x$, $g_z = -1$, $E = [c_{x_i,y_j}]$, and conditions A1 and A2 are equivalent to those in [11, 19]. Note that here we follow the same sign convention as in [15, 20], so that our cost functions are the negatives of those usually considered. We remark also that the case when $Y$ is independent of $u$ is equivalent to the optimal transportation case.
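As a consistency check (not part of the original argument), substituting (1.3) into condition A1 and formula (1.2) recovers the familiar optimal transportation structure:

```latex
% With g(x,y,z) = c(x,y) - z, condition A1 reads
c(x,y) - z = u, \qquad c_x(x,y) = p,
% so Y inverts c_x(x,\cdot), and Z is read off from the first relation:
Y(x,u,p) = (c_x(x,\cdot))^{-1}(p), \qquad Z(x,u,p) = c(x,Y) - u.
% Differentiating c_x(x,Y) = p in x and p gives
% Y_p = c_{x,y}^{-1} = E^{-1} and Y_x = -c_{x,y}^{-1}c_{xx}, while Y_u = 0,
% so formula (1.2) yields
A(x,u,p) = -Y_p^{-1}Y_x = c_{xx}(x, Y(x,p)),
% which is independent of u, in accordance with the remark above.
```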

Crucial to us in this paper is the dual condition to A1, namely

  • A1*: The mapping $Q := -g_y/g_z$ is one-to-one in $x$, for all $(x,y,z)\in\Gamma$.

We will treat the implications of condition A1* as needed in Sect. 2. Meanwhile we note that the Jacobian matrix of the mapping $x\mapsto Q(x,y,z)$ is $-E^t/g_z$, where $E^t$ is the transpose of $E$, so its determinant is automatically non-zero when condition A2 holds. Note that in the optimal transportation case (1.3) we simply have $Q = c_y$, but the duality is more complicated in the general situation and will be amplified further in Sect. 2.
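The Jacobian assertion can be verified directly; differentiating $Q_i = -g_{y_i}/g_z$ in $x_j$, with $y$ and $z$ held fixed,

```latex
D_{x_j}Q_i = -\frac{g_{y_ix_j}}{g_z} + \frac{g_{y_i}g_{zx_j}}{g_z^2}
           = -\frac{1}{g_z}\Big[g_{x_jy_i} - (g_z)^{-1}g_{x_jz}\,g_{y_i}\Big]
           = -\frac{E_{ji}}{g_z},
% so the Jacobian matrix of x \mapsto Q(x,y,z) is -E^t/g_z, with determinant
\det\big(-E^t/g_z\big) = (-g_z)^{-n}\det E \ne 0
```

by condition A2; thus A2 guarantees local invertibility of $x\mapsto Q(x,y,z)$, while A1* upgrades this to global injectivity.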

The next conditions are expressed in terms of the matrix function A and extend condition A3w in the optimal transportation case [20]. For these we assume that A is twice differentiable in p and once in u.

  • A3w: The matrix function $A$ is regular in $U$, that is, $A$ is codimension-one convex with respect to $p$ in the sense that

    A_{ij,kl}\,\xi_i\xi_j\eta_k\eta_l := (D_{p_kp_l}A_{ij})\,\xi_i\xi_j\eta_k\eta_l \ge 0,

    in $U$, for all $\xi,\eta\in\mathbb{R}^n$ such that $\xi\cdot\eta = 0$.

  • A4w: The matrix $A$ is monotone increasing with respect to $u$ in $U$, that is

    D_uA_{ij}\,\xi_i\xi_j \ge 0,

    in $U$, for all $\xi\in\mathbb{R}^n$.

An explicit formula for $D_p^2A$ in terms of the generating function $g$ is given in [16]. In order to use conditions A3w and A4w for our barrier constructions in Sect. 2, we also assume that $U$ is sufficiently large, in that the sets $U_x$ are convex in $p$, for each $(x,u)\in\mathbb{R}^n\times\mathbb{R}$ (for A3w), and $U_x = J_x\times P_x$, for each $x\in\mathbb{R}^n$, where $J_x$ is an open interval in $\mathbb{R}$ (possibly empty) and $P_x\subset\mathbb{R}^n$ (for A4w). When we use both A3w and A4w, we also assume $P_x$ to be convex. We note that in [4], $U = \mathbb{R}^n\times\mathbb{R}\times\mathbb{R}^n$, so that these conditions are automatically satisfied, and in the optimal transportation case $J_x = \mathbb{R}$. Similar conditions are also needed for the convexity theory in [16].

To formulate our second derivative estimates, we first denote the one-jet of a function $u\in C^1(\Omega)$ by

J_1[u](\Omega) = \{(x,\, u(x),\, Du(x)) \mid x\in\Omega\},

and recall from [16] that a g-affine function in Ω is a function of the form

g_0 = g(\cdot, y_0, z_0),

where $y_0$ and $z_0$ are fixed so that $(x,y_0,z_0)\in\Gamma$ for all $x\in\Omega$.

Theorem 1.1

Let $u\in C^4(\Omega)\cap C^2(\bar\Omega)$ be an elliptic solution of Eq. (1.1) in $\Omega$, where $B\in C^2(U)$, $\inf B > 0$, and $A\in C^2(U)$ is given by (1.2) with generating function $g$ satisfying conditions A1, A2, A1*, A3w and A4w. Suppose also $u\le g_0$ in $\Omega$ for some $g$-affine function $g_0$ with the one-jets $J_1[u](\Omega), J_1[g_0](\Omega)\subset U$. Then we have the estimate

\sup_\Omega|D^2u| \le C\big(1 + \sup_{\partial\Omega}|D^2u|\big),
(1.4)

where the constant $C$ depends on $n, U, g, B, \Omega, g_0$ and $J_1[u]$.

Note that the inequality $u\le g_0$ is trivially satisfied if there exists a $g$-affine function on $\bar\Omega$ which is greater than $\sup u$, and moreover it can be dispensed with in the optimal transportation case, where Theorem 1.1 improves Theorems 3.1 and 3.2 in [20] and Theorem 1.1 in [4]. Also we may assume in place of A4w the reversed monotonicity condition:

  • A4*w: The matrix $A$ is monotone decreasing with respect to $u$ in $U$, that is

    D_uA_{ij}\,\xi_i\xi_j \le 0, \quad \text{for all } \xi\in\mathbb{R}^n,

in which case Theorem 1.1 continues to hold provided the inequality $u\le g_0$ is replaced by $u\ge g_0$.

Next we formulate the corresponding extension of the Pogorelov interior estimate [2] for generated prescribed Jacobian equations.

Theorem 1.2

Let $u\in C^4(\Omega)\cap C^{0,1}(\bar\Omega)$ be an elliptic solution of (1.1) satisfying $u = g_0$ on $\partial\Omega$, for some $g$-affine function $g_0$, where $B\in C^2(U)$, $\inf B > 0$, and $A\in C^2(U)$ is given by (1.2) with generating function $g$ satisfying conditions A1, A2, A1*, A3w, A4w, and $J_1[u](\Omega), J_1[g_0](\Omega)\subset U$. Then we have, for any $\Omega'\Subset\Omega$,

\sup_{\Omega'}|D^2u| \le C,
(1.5)

where the constant $C$ depends on $n, U, g, B, \Omega', \Omega, g_0$ and $J_1[u]$.

In the optimal transportation case, Theorem 1.2 improves Theorem 1.1 in [9] and Theorem 2.1 in [4] by removing the barrier and subsolution hypotheses assumed there. As in [9], we may also replace $g_0$ by any strict supersolution. We also remark that the dependence on $J_1[u]$ in Theorems 1.1 and 1.2 is more specifically determined by $\sup_\Omega(|u| + |Du|)$ and $\mathrm{dist}(J_1[u](\Omega), \partial U)$.

This paper is organised as follows. In the next section we construct, in Lemma 2.1, a function for which Eq. (1.1) is uniformly elliptic, and use it to prove an appropriate version, Lemma 2.2, of the key Lemma 2.1 in [4]. From here we obtain Theorems 1.1 and 1.2 by following the same arguments as in [4]. In Sect. 3, we also discuss the application of Theorem 1.1 to global second derivative estimates for the Dirichlet and second boundary value problems for optimal transportation and generated prescribed Jacobian equations, yielding improvements of the corresponding results in [4, 13, 14, 20]. In Sect. 4 we focus on applications to optimal transportation and near field geometric optics. In the optimal transportation case, we obtain a different and more direct proof of the improved global regularity result in [15]. We then conclude by treating some examples of generating functions in geometric optics satisfying our hypotheses, which arise from the reflection and refraction of parallel beams.

To conclude this introduction, we should also emphasise that while we have considered the more general case of generated prescribed Jacobian equations, our results are also new for optimal transportation. Further, we note that when we strengthen condition A3w to the strict inequality A3, the corresponding second derivative estimates, as treated in [11] (see also [20] and [18]), are much simpler to prove, although even here the constructions in Sect. 2 are still new.

2 Barrier constructions

Our first lemma shows the existence of a uniformly elliptic function under conditions A1, A2 and A1*.

Lemma 2.1

Let $g\in C^4(\Gamma)$ be a generating function satisfying conditions A1, A2 and A1*, and suppose $g_0$ is a $g$-affine function on $\bar\Omega$, where $\Omega$ is a bounded domain in $\mathbb{R}^n$. Then there exists a function $\bar u\in C^2(\bar\Omega)$ such that $J_1[\bar u]\subset U$, $\bar u > g_0$ in $\Omega$ and

D^2\bar u - A(\cdot,\bar u, D\bar u) \ge a_0I, \quad \text{in } \Omega,
(2.1)

for some positive constant $a_0$ depending on $g_0$ and $\Gamma$.

Proof

Suppose $g_0 = g(\cdot,y_0,z_0)$ for some $(y_0,z_0)\in\mathbb{R}^n\times\mathbb{R}$, let $B_\rho = B_\rho(y_0)$ be a ball of radius $\rho$ and centre $y_0$ in $\mathbb{R}^n$, and consider the function $v = v_\rho$ in $B_\rho$, given by

v_\rho(y) = z_0 - \sqrt{\rho^2 - |y - y_0|^2}.
(2.2)

Our desired function $\bar u$ is now defined as the $g^*$-transform of $v$, in accordance with the notion of duality introduced in [16], namely

\bar u(x) = v^*_\rho(x) = \sup_{y\in B_\rho} g(x, y, v_\rho(y)).
(2.3)

Note that $\bar u$ is well defined for $x\in\bar\Omega$, for $\rho$ sufficiently small to ensure the set

\Gamma_0 = \bar\Omega\times\bar B_\rho(y_0)\times[z_0 - \rho,\, z_0] \subset \Gamma.

Furthermore, the supremum in (2.3) will be attained at a unique point $y = Tx\in B_\rho$. To prove this assertion, we fix a point $y\in\partial B_\rho$ and set $y_\lambda = (1-\lambda)y + \lambda y_0$ for some $\lambda\in(0,1)$. Since $g_z < 0$ in $\Gamma$, we then have, by the mean value theorem,

g(x,y,v_\rho(y)) - g(x,y_\lambda,v_\rho(y_\lambda)) \le \sup_{\Gamma_0}|g_y|\,|y - y_\lambda| + \sup_{\Gamma_0}g_z\,\big(v_\rho(y) - v_\rho(y_\lambda)\big) \le C\lambda\rho - \delta\rho\sqrt{2\lambda - \lambda^2},

for positive constants $C$ and $\delta$. Consequently, choosing $0 < \lambda < 2/\big(1 + (C/\delta)^2\big)$, so that $C\lambda < \delta\sqrt{2\lambda - \lambda^2}$, we have

g(x,y,v_\rho(y)) < g(x,y_\lambda,v_\rho(y_\lambda)),

and hence the supremum in (2.3) cannot occur on $\partial B_\rho$. By differentiation, we then have, at an interior maximum point $y$,

g_y + g_zDv_\rho(y) = 0,
(2.4)

and hence, by condition A1*, there exists a unique $x = T^*y$ corresponding to $y$. From our construction (2.3), the function $\bar u$ is $g$-convex in $\Omega$ and its $g$-normal mapping $\chi[\bar u](x)$ consists of those $y$ maximising (2.3). Here we recall from [16] that a function $u\in C^0(\Omega)$ is $g$-convex in $\Omega$ if for each point $x_0\in\Omega$ there exist $y_0\in\mathbb{R}^n$ and $z_0\in I(\Omega,y_0) := \bigcap_{x\in\Omega}\Gamma_{x,y_0}$, such that

u(x) \ge g(x,y_0,z_0), \quad u(x_0) = g(x_0,y_0,z_0),

for all $x\in\Omega$, and the $g$-normal mapping $\chi[u](x_0)$ of $u$ at $x_0$ is given by

\chi[u](x_0) = \{y_0\in\mathbb{R}^n \mid u(x) \ge g(x,y_0,z_0)\ \text{for all}\ x\in\Omega\},

and moreover, $\chi[u](\Omega) = \bigcup_{x\in\Omega}\chi[u](x)$. From (2.4), we also have

|Dv_\rho(y)| \le \sup_{\Gamma_0}\frac{|g_y|}{-g_z} \le C,
(2.5)

whence $\chi[\bar u](\Omega)\subset B_{(1-\lambda)\rho}(y_0)$ for some constant $\lambda > 0$. To show that $y = Tx$ is uniquely determined by $x$, we need to prove that $\chi[\bar u](x)$ is a single point. For this we invoke the duality considerations in [16]. Letting

\Gamma^* = \{(x,y,u)\in\mathbb{R}^n\times\mathbb{R}^n\times\mathbb{R} \mid u = g(x,y,z)\ \text{for}\ (x,y,z)\in\Gamma\},

we let $g^*\in C^4(\Gamma^*)$ denote the dual generating function of $g$, given by

g(x,y,g^*(x,y,u)) = u.

Also denoting

U^* = \{(y,z,q)\in\mathbb{R}^n\times\mathbb{R}\times\mathbb{R}^n \mid q = Q(x,y,z)\ \text{for}\ (x,y,z)\in\Gamma\},

the dual matrix, corresponding to $g^*$, is given by

A^*(y,z,q) = g^*_{yy}[X, y, g(X,y,z)] = -\left[\Big(\frac{g_y}{g_z}\Big)_y(X,y,z) + \Big(\frac{g_y}{g_z}\Big)_z(X,y,z)\,q\right],
(2.6)

where $X = X(y,z,q)$ is the unique mapping, given by A1*, satisfying

Q(X,y,z) = g^*_y(X,y,g(X,y,z)) = q.
(2.7)

Note that the mapping $T^*$ constructed above is given by

x = T^*y = X(y,\, v_\rho(y),\, Dv_\rho(y)).
(2.8)

Since the graph of $v_\rho$ is a hemisphere of radius $\rho$, and hence has constant curvature $1/\rho$, we have, on $\chi[\bar u](\Omega)$,

D^2v_\rho - A^*(\cdot, v_\rho, Dv_\rho) \ge a_0I,
(2.9)

for some fixed constant $a_0 > 0$, provided $\rho$ is taken sufficiently small, which implies in particular that $v_\rho$ is locally strictly $g^*$-convex, that is, for any $y_1\in\chi[\bar u](x_0)$, $x_0\in\Omega$, we have

v_\rho(y) > g^*(x_0, y, \bar u(x_0)),
(2.10)

for all $y\ne y_1$ near $y_1$. Clearly, by again taking $\rho$ sufficiently small, we infer that the function $v_\rho$ is strictly globally $g^*$-convex, that is, inequality (2.10) holds for all $y\ne y_1$ in $B_{(1-\lambda)\rho}(y_1)$. Consequently $y_1 = Tx_0$ is uniquely determined by $x_0\in\Omega$, whence the mapping $T^*$ is invertible with inverse $T$ given by

Tx = Y(x,\, \bar u(x),\, D\bar u(x)),

and $\bar u$ is differentiable with $J_1[\bar u]\subset U$. Since $T^*$ is $C^1$ smooth, so also is $T$, with Jacobian matrix

DT(x) = [D_yT^*(y)]^{-1}, \quad y = Tx.

Using the relation

D\bar u(x) = g_x(x,\, Tx,\, g^*(x, Tx, \bar u(x))),

we then have $\bar u\in C^2(\bar\Omega)$. Since

DT(x) = Y_p\big[D^2\bar u - A(\cdot,\bar u,D\bar u)\big], \qquad D_yT^*(y) = X_q\big[D^2v_\rho - A^*(\cdot,v_\rho,Dv_\rho)\big],

and $Y_p = E^{-1}$, $X_q = -g_z(E^t)^{-1}$, we thus obtain (2.1) from condition A2. Finally we observe that

\bar u(x) = \sup_{y\in B_\rho(y_0)} g(x,y,v_\rho(y)) \ge g(x,y_0,v_\rho(y_0)) = g(x,y_0,z_0-\rho) > g(x,y_0,z_0),

using $g_z < 0$ in the last inequality. We then obtain $\bar u > g_0$ as asserted, and the proof is complete.
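As a concrete illustration, not taken from the paper, the construction in the proof can be checked numerically in one dimension for the model generating function $g(x,y,z) = xy - z$ (quadratic-cost optimal transportation, for which $A = 0$, so that (2.1) reduces to uniform convexity). The closed form $\bar u(x) = xy_0 - z_0 + \rho\sqrt{1+x^2}$ used below is an assumption of this example, obtained by evaluating the supremum (2.3) explicitly:

```python
import numpy as np

# Hypothetical 1-d sanity check of Lemma 2.1 for the model generating
# function g(x, y, z) = x*y - z: here g_z = -1 < 0, E = 1 and A = g_xx = 0,
# so uniform ellipticity (2.1) is plain uniform convexity of u_bar.
rho, y0, z0 = 0.5, 0.0, 0.0

def v_rho(y):
    # the hemisphere (2.2): v_rho(y) = z0 - sqrt(rho^2 - |y - y0|^2)
    return z0 - np.sqrt(rho**2 - (y - y0)**2)

ys = np.linspace(y0 - rho, y0 + rho, 4001)
xs = np.linspace(-1.0, 1.0, 401)

# the g*-transform (2.3): u_bar(x) = sup_y g(x, y, v_rho(y))
u_bar = np.max(xs[:, None] * ys[None, :] - v_rho(ys)[None, :], axis=1)

# closed form for this g: u_bar(x) = x*y0 - z0 + rho*sqrt(1 + x^2)
u_exact = xs * y0 - z0 + rho * np.sqrt(1.0 + xs**2)
assert np.max(np.abs(u_bar - u_exact)) < 1e-4

# uniform ellipticity (2.1) with A = 0: u_bar'' >= a0 > 0,
# and the strict inequality u_bar > g0 = g(x, y0, z0)
h = xs[1] - xs[0]
second = (u_bar[2:] - 2.0 * u_bar[1:-1] + u_bar[:-2]) / h**2
g0 = xs * y0 - z0
assert second.min() > 0.1 and np.all(u_bar > g0)
print("ok")
```

Here $\bar u - g_0 = \rho\sqrt{1+x^2} > 0$ and $\bar u'' = \rho(1+x^2)^{-3/2}$, so on $|x|\le R$ the constant $a_0$ of (2.1) can be taken as $\rho(1+R^2)^{-3/2}$, consistent with the lemma.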

Remark 2.1

For a matrix $A$ not arising from optimal transportation or generated Jacobian mappings, Lemma 2.1 may not be true, even if $A$ satisfies the regularity condition A3w in a strong way. To see this, suppose $A = A(p)\ge a_1|p|^2I$ for some constant $a_1 > 0$. Then the uniform ellipticity condition, $D^2u - A(Du)\ge a_0I$, $a_0 > 0$, implies $u$ is convex in a convex domain $\Omega$, with

\det D^2u \ge (a_0 + a_1|Du|^2)^n.

By integration, we obtain

|\Omega| \le \int_{\mathbb{R}^n}(a_0 + a_1|p|^2)^{-n}\,dp,

which can only be satisfied for $|\Omega|$ sufficiently small.
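The integration step in the remark can be carried out explicitly; the following is a sketch, assuming in addition $u\in C^2(\Omega)$ with $D^2u > 0$:

```latex
% Since u is convex, the gradient map Du: \Omega \to Du(\Omega) is injective,
% and the change of variables p = Du(x), dp = \det D^2u(x)\,dx gives
|\Omega| = \int_\Omega dx
         \le \int_\Omega \frac{\det D^2u(x)}{(a_0 + a_1|Du(x)|^2)^n}\,dx
         = \int_{Du(\Omega)} \frac{dp}{(a_0 + a_1|p|^2)^n}
         \le \int_{\mathbb{R}^n} \frac{dp}{(a_0 + a_1|p|^2)^n},
% where the first inequality uses \det D^2u \ge (a_0 + a_1|Du|^2)^n.
% The last integral is finite, since the integrand decays like |p|^{-2n}.
```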

We now use Lemma 2.1 to construct a barrier for the linearized operator of Eq. (1.1), in accordance with Remark 2.4 in [5]. Letting $u\in C^2(\Omega)$ be elliptic for Eq. (1.1), with $J_1[u]\subset U$, we set $F[u] = \log\det(D^2u - A(x,u,Du))$ and define an associated linear operator $L$ by

L = L[u] = F^{ij}\big(D_{ij} - D_{p_k}A_{ij}(x,u,Du)D_k\big),
(2.11)

where $F^{ij} = \partial F/\partial w_{ij}$ and $\{F^{ij}\} = \{w^{ij}\}$ denotes the inverse matrix of $\{w_{ij}\} := \{u_{ij} - A_{ij}(x,u,Du)\}$. It is known that $F$ is a concave operator with respect to $w_{ij}$ for elliptic $u$. We then have the following improvement and extension of the fundamental lemma, Lemma 2.1 in [4], in the optimal transportation case.
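The identities behind $F^{ij} = w^{ij}$ and the concavity claim are the standard facts about $\log\det$ on positive symmetric matrices; for completeness:

```latex
% For F(w) = \log\det w on positive matrices w = \{w_{ij}\}, one has
\frac{\partial F}{\partial w_{ij}} = w^{ij}, \qquad
\frac{\partial^2F}{\partial w_{ij}\,\partial w_{kl}} = -w^{ik}w^{jl},
% so that for any symmetric matrix \sigma,
\frac{\partial^2F}{\partial w_{ij}\,\partial w_{kl}}\,\sigma_{ij}\sigma_{kl}
  = -\,\mathrm{tr}\big(w^{-1}\sigma\,w^{-1}\sigma\big) \le 0,
% which is the concavity used in (2.14) below.
```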

Lemma 2.2

Let $g\in C^4(\Gamma)$ be a generating function satisfying conditions A1, A2, A1*, A3w, A4w, and let $u\in C^2(\bar\Omega)$ be elliptic with respect to $F$, with $J_1[u]\subset U$ and $u\le g_0$ in $\Omega$ for some $g$-affine function $g_0$ on $\bar\Omega$. Then

Le^{K(\bar u - u)} \ge \epsilon_1\sum_iF^{ii} - C,
(2.12)

holds in $\Omega$, for a sufficiently large positive constant $K$ and uniform positive constants $\epsilon_1, C$, depending on $n, U, g, \Omega, g_0, J_1[u]$ and $\sup_\Omega F[u]$.

Proof

By Lemma 2.1, $\bar u\in C^2(\bar\Omega)$ is a uniformly elliptic function satisfying $J_1[\bar u]\subset U$, $\bar u > g_0$ in $\Omega$ and

F[\bar u] = \log\det(D^2\bar u - A(x,\bar u,D\bar u)) \ge n\log a_0,

where $a_0$ is the positive constant in (2.1). For any $x_0\in\Omega$, the perturbation $\bar u_\epsilon = \bar u - \frac{\epsilon}{2}|x - x_0|^2$ is still a uniformly elliptic function with $J_1[\bar u_\epsilon]\subset U$, for sufficiently small $\epsilon > 0$, satisfying

F[\bar u_\epsilon] = \log\det(D^2\bar u_\epsilon - A(x,\bar u_\epsilon,D\bar u_\epsilon)) \ge a_1,

for the constant $a_1 = n\log\frac{a_0}{2}$. From our construction of $\bar u$, we can also choose $\epsilon$ sufficiently small such that $\bar u_\epsilon\ge g_0\ge u$ in $\Omega$.

Setting $v = \bar u - u$, $v_\epsilon = \bar u_\epsilon - u$ and $\eta = Kv = K(\bar u - u)$, we have $v = v_\epsilon + \frac{\epsilon}{2}|x - x_0|^2$. By detailed calculation, we have

\begin{aligned}
Le^\eta &= e^\eta\big(L\eta + F^{ij}D_i\eta D_j\eta\big) = Ke^\eta\big(Lv + KF^{ij}D_ivD_jv\big)\\
&= Ke^\eta\Big(Lv_\epsilon + L\tfrac{\epsilon}{2}|x-x_0|^2 + KF^{ij}D_ivD_jv\Big)\\
&= Ke^\eta\Big[\epsilon\sum_iF^{ii} - \epsilon F^{ij}D_{p_k}A_{ij}(x,u,Du)(x-x_0)_k + F^{ij}\big(D_{ij}v_\epsilon - D_{p_k}A_{ij}(x,u,Du)D_kv_\epsilon\big) + KF^{ij}D_ivD_jv\Big]\\
&= Ke^\eta\Big[\epsilon\sum_iF^{ii} - \epsilon F^{ij}D_{p_k}A_{ij}(x,u,Du)(x-x_0)_k\\
&\qquad + F^{ij}\big[D_{ij}(\bar u_\epsilon - u) - \big(A_{ij}(x,\bar u_\epsilon,D\bar u_\epsilon) - A_{ij}(x,u,Du)\big)\big]\\
&\qquad + F^{ij}\big[A_{ij}(x,u,D\bar u_\epsilon) - A_{ij}(x,u,Du) - D_{p_k}A_{ij}(x,u,Du)D_kv_\epsilon\big]\\
&\qquad + F^{ij}\big[A_{ij}(x,\bar u_\epsilon,D\bar u_\epsilon) - A_{ij}(x,u,D\bar u_\epsilon)\big] + KF^{ij}D_ivD_jv\Big].
\end{aligned}
(2.13)

By the concavity of F with respect to w i j , we have

F^{ij}\big[D_{ij}(\bar u_\epsilon - u) - \big(A_{ij}(x,\bar u_\epsilon,D\bar u_\epsilon) - A_{ij}(x,u,Du)\big)\big] \ge F[\bar u_\epsilon] - F[u] \ge a_1 - F[u].
(2.14)

Since $\bar u_\epsilon\ge u$ in $\Omega$, we then have, by the mean value theorem and condition A4w,

F^{ij}\big[A_{ij}(x,\bar u_\epsilon,D\bar u_\epsilon) - A_{ij}(x,u,D\bar u_\epsilon)\big] = F^{ij}D_uA_{ij}(x,\hat u,D\bar u_\epsilon)(\bar u_\epsilon - u) \ge 0,
(2.15)

where $\hat u = \lambda\bar u_\epsilon + (1-\lambda)u$ for some $\lambda\in(0,1)$. Note that from our conditions on $U$, we have $u(x), \hat u(x)\in J_x$ for all $x\in\Omega$, so that $A_{ij}(x,u,D\bar u_\epsilon)$ is well defined in (2.13) and $A_{ij}(x,\hat u,D\bar u_\epsilon)$ is well defined in (2.15). Using the Taylor expansion, we have

\begin{aligned}
A_{ij}(x,u,D\bar u_\epsilon) - A_{ij}(x,u,Du) - D_{p_k}A_{ij}(x,u,Du)D_kv_\epsilon &= \tfrac12A_{ij,kl}(x,u,\hat p)D_kv_\epsilon D_lv_\epsilon\\
&= \tfrac12A_{ij,kl}(x,u,\hat p)D_kvD_lv - \epsilon A_{ij,kl}(x,u,\hat p)(x-x_0)_k\big(D_lv - \tfrac{\epsilon}{2}(x-x_0)_l\big),
\end{aligned}
(2.16)

where $\hat p = (1-\theta)Du + \theta D\bar u_\epsilon$ for some $\theta\in(0,1)$. Here we are also using the convexity of the set $P_x$, for each $x\in\Omega$, which ensures that the point $(x,u,\hat p)\in U$, so that the term $A_{ij,kl}(x,u,\hat p)$ in (2.16) is also well defined.

By (2.14), (2.15), (2.16), we have from (2.13)

Le^\eta \ge Ke^\eta\Big\{\epsilon\sum_iF^{ii} - \epsilon F^{ij}\big[D_{p_k}A_{ij}(x,u,Du) + A_{ij,kl}(x,u,\hat p)\big(D_lv - \tfrac{\epsilon}{2}(x-x_0)_l\big)\big](x-x_0)_k + \tfrac12F^{ij}A_{ij,kl}(x,u,\hat p)D_kvD_lv + KF^{ij}D_ivD_jv + a_1 - F[u]\Big\}.
(2.17)

Next we choose a finite family of balls $B_\rho(x_i)$, with centres $x_i$, $i = 1,\dots,N$, covering $\bar\Omega$, with fixed radius $\rho < \min\{1/(2n\Theta), 1\}$, where $\Theta = \max_{\bar\Omega}\big[|D_{p_k}A_{ij}(x,u,Du)| + |A_{ij,kl}(x,u,\hat p)|\big(|D_lv| + \tfrac12\big)\big]$. Then we select $x_0 = x_i$ for some $i$ such that $x\in B_\rho(x_i)$. Accordingly, we have, for a fixed small positive $\epsilon$,

Le^\eta \ge Ke^\eta\Big\{\epsilon\sum_iF^{ii} - \frac{\epsilon}{2n}\sum_{i,j}|F^{ij}| + \tfrac12F^{ij}A_{ij,kl}(x,u,\hat p)D_kvD_lv + KF^{ij}D_ivD_jv + a_1 - F[u]\Big\},
(2.18)

for $x\in B_\rho(x_i)$. We see that (2.18) holds in all balls $B_\rho(x_i)$, $i = 1,\dots,N$, with a fixed positive constant $\epsilon$; then, by the finite covering, (2.18) holds in $\Omega$ with a uniform positive constant $\epsilon$.
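For later use we note how the mixed-term sum in (2.18) is absorbed: since $\{F^{ij}\}$ is positive definite,

```latex
|F^{ij}| \le \sqrt{F^{ii}F^{jj}} \le \tfrac12\big(F^{ii} + F^{jj}\big),
\qquad\text{so}\qquad
\frac{\epsilon}{2n}\sum_{i,j}|F^{ij}| \le \frac{\epsilon}{2}\sum_iF^{ii},
```

and the first two terms in the braces in (2.18) are together bounded below by $(\epsilon/2)\sum_iF^{ii}$.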

Without loss of generality, assume that $Dv = (D_1v, 0,\dots,0)$ at a given point in $\Omega$. We then get

\begin{aligned}
Le^\eta &\ge Ke^\eta\Big\{\epsilon\sum_iF^{ii} - \frac{\epsilon}{2n}\sum_{i,j}|F^{ij}| + \tfrac12F^{ij}A_{ij,11}(x,u,\hat p)(D_1v)^2 + KF^{11}(D_1v)^2 + a_1 - F[u]\Big\}\\
&\ge Ke^\eta\Big\{\epsilon\sum_iF^{ii} - \frac{\epsilon}{2n}\sum_{i,j}|F^{ij}| + \tfrac12\sum_{i\,\text{or}\,j=1}F^{ij}A_{ij,11}(x,u,\hat p)(D_1v)^2 + KF^{11}(D_1v)^2 + a_1 - F[u]\Big\}\\
&= Ke^\eta\Big\{\epsilon\sum_iF^{ii} - \frac{\epsilon}{2n}\sum_{i,j}|F^{ij}| + \sum_iF^{1i}A_{1i,11}(x,u,\hat p)(D_1v)^2 - \tfrac12F^{11}A_{11,11}(x,u,\hat p)(D_1v)^2 + KF^{11}(D_1v)^2 + a_1 - F[u]\Big\},
\end{aligned}

where condition A3w is used in the second inequality.

Since the matrix $\{F^{ij}\}$ is positive definite, any $2\times2$ principal minor has positive determinant, so that $(F^{1i})^2\le F^{11}F^{ii}$. By Cauchy's inequality, we then have

|F^{1i}| \le \sqrt{F^{11}F^{ii}} \le \alpha F^{ii} + \frac{1}{4\alpha}F^{11},

for any positive constant $\alpha$.

Thus, we have

Le^\eta \ge Ke^\eta\Big\{\sum_iF^{ii}\Big(\frac{\epsilon}{2} - \alpha|A_{1i,11}(x,u,\hat p)|(D_1v)^2\Big) + F^{11}\Big(K - \frac{1}{4\alpha}\sum_i|A_{1i,11}(x,u,\hat p)| - \frac12|A_{11,11}(x,u,\hat p)|\Big)(D_1v)^2 + a_1 - F[u]\Big\}.

By first choosing $\alpha$ small, so that $\alpha\max_i\max_{\bar\Omega}\{|A_{1i,11}(x,u,\hat p)|(D_1v)^2\}\le\frac{\epsilon}{4}$, and then choosing $K$ large, so that $K\ge\max_{\bar\Omega}\big[\frac{1}{4\alpha}\sum_i|A_{1i,11}(x,u,\hat p)| + \frac12|A_{11,11}(x,u,\hat p)|\big]$, we obtain

Le^\eta \ge Ke^\eta\Big(\frac{\epsilon}{4}\sum_iF^{ii} + a_1 - F[u]\Big).

By choosing $\epsilon_1 = \min_{\bar\Omega}\{\frac{\epsilon}{4}Ke^\eta\}$ and $C = \max_{\bar\Omega}\{Ke^\eta|F[u] - a_1|\}$, the conclusion of the lemma follows.

Remark 2.2

When we apply the operator $L$ to the function $e^{K(\bar u - u)}$, the second derivatives of $u$ appear in the coefficients $F^{ij}$, which are controlled by using the concavity property of $F$. Accordingly $\epsilon_1$ in (2.12) depends on the one-jet $J_1[u]$ but not on $D^2u$. We note that the constant $C$ in (2.12) has the additional dependence on $\sup_\Omega F[u]$, which does involve second derivatives of $u$, but when we apply Lemma 2.2 to elliptic solutions $u$ and the uniformly elliptic function $\bar u$, such a dependence is reduced. For example, when we consider Eq. (1.1), namely $F[u] = \log B(x,u,Du)$, the additional dependence of $C$ in (2.12) is just $\sup_\Omega(\log B(x,u,Du))$.

Remark 2.3

When we assume the reverse monotonicity condition A4*w and $u\ge g_0$, then by modifying $\bar u$ appropriately and using (2.15), we still obtain the key barrier inequality (2.12).

3 Applications to second derivative estimates

Theorems 1.1 and 1.2 follow from Lemma 2.2 and the proofs of Theorem 3.1 in [20] and Theorem 1.1 in [9]. These latter results are proved under the additional hypothesis that $\Omega$ is $A$-bounded on $J_1[u](\Omega)$, that is, there exists a function $\varphi\in C^2(\Omega)\cap C^{0,1}(\bar\Omega)$ satisfying

D^2\varphi - D_pA(\cdot,u,Du)\cdot D\varphi \ge I,
(3.1)

in Ω. However, as pointed out in [4], we need only assume the weaker inequality

L\varphi \ge \sum_iw^{ii} - C_0,
(3.2)

for some constant C 0 to carry out the same proofs. By virtue of Lemma 2.2, the barrier inequality (3.2) is satisfied by the function

\varphi = \frac{1}{\epsilon_1}e^{K(\bar u - u)}.
(3.3)

We will first treat the global estimates for boundary value problems, and subsequently the interior estimate, Theorem 1.2, as we will need to modify slightly the proof in [9] to accommodate the dependence of $A$ on $u$.

Taking account of the above remarks we may extend the statement of Theorem 3.1 in [20] as follows.

Lemma 3.1

Let $u\in C^4(\Omega)\cap C^{1,1}(\bar\Omega)$ be an elliptic solution of Eq. (1.1) in $\Omega$, with $A, B\in C^2(U)$. Suppose that $A$ is regular on $J_1 = J_1[u](\Omega)$, that is,

D_{p_kp_l}A_{ij}(\cdot,u,Du)\,\xi_i\xi_j\eta_k\eta_l \ge 0,
(3.4)

in $\Omega$, for all $\xi,\eta\in\mathbb{R}^n$ such that $\xi\cdot\eta = 0$, and that $A$ satisfies (3.2) for some $\varphi\in C^2(\Omega)\cap C^{0,1}(\bar\Omega)$, with $\inf_{J_1}B > 0$. Then we have the estimate

\sup_\Omega|D^2u| \le C\big(1 + \sup_{\partial\Omega}|D^2u|\big),
(3.5)

where the constant $C$ depends on $A, B, \Omega, |u|_{0,1;\Omega}, |\varphi|_{0,1;\Omega}$ and $C_0$.

Clearly, Theorem 1.1 follows immediately from Lemmas 2.2 and 3.1. As an immediate consequence of Theorem 1.1, the hypothesis of the existence of a subsolution $\underline u$ in Theorem 1.1 of [4] can be removed altogether in the optimal transportation case, while for the Dirichlet problem in Theorem 1.2 of [4] we need only assume the existence of a subsolution $\underline u$ in a neighbourhood of the boundary, whose boundary trace is the prescribed boundary function. However, the natural boundary condition for optimal transportation, and the more general prescribed Jacobian equations, is the prescription of the image of the mapping $Tu := Y(\cdot,u,Du)$, that is

Tu(\Omega) = \Omega^*,
(3.6)

for some given target domain $\Omega^*\subset Y(U)$. Global estimates for solutions of Eq. (1.1) subject to (3.6) now follow from Theorem 1.1 under appropriate convexity hypotheses on $\Omega$ and $\Omega^*$. As with (3.1) and (3.4), we will express these initially in terms of $J_1[u]$. Accordingly, we assume the domain $\Omega$ is uniformly $A$-convex with respect to $J_1[u]$, in the sense that there exists a defining function $\varphi\in C^2(\bar\Omega)$ satisfying $\varphi = 0$, $D\varphi\ne 0$ on $\partial\Omega$, together with the "uniform convexity" condition (3.1) in a neighbourhood $\mathcal N$ of $\partial\Omega$, while for the domain $\Omega^*$ we assume the dual condition, that $\Omega^*$ is uniformly $Y^*$-convex with respect to $J_1[u]$, namely that there exists a defining function $\varphi^*\in C^2(\bar\Omega^*)$ satisfying $\varphi^* = 0$, $D\varphi^*\ne 0$ on $\partial\Omega^*$, together with the "dual uniform convexity" condition:

\big[D_{ij}\varphi^* - A^*_{ij,k}(\cdot,u,Du)\,D_k\varphi^*\big] \ge I,
(3.7)

in a neighbourhood $\mathcal N^*$ of $\partial\Omega^*$, where $A^*_{ij,k}$ is given by

[A^*_{ij,k}] = Y_p^{-1}\,D_{pp}Y_k\,(Y_p^{-1})^t,
(3.8)

for each $k = 1,\dots,n$. Taking account of (1.2), we see that these conditions can be formulated for general prescribed Jacobian equations, where $Y\in C^2(U)$ satisfies $\det Y_p\ne 0$, and are not necessarily restricted to those determined through generating functions. Furthermore, writing

G(x,u,p) = \varphi^*\circ Y(x,u,p),
(3.9)

condition (3.7) may be expressed in the form

G_{pp}(x,u,Du) \ge \delta_0I,
(3.10)

for a positive constant $\delta_0$, whenever $Y(x,u,Du)\in\mathcal N^*$, and moreover the boundary condition (3.6) implies a nonlinear oblique boundary condition,

G(\cdot,u,Du) = 0,
(3.11)

on $\partial\Omega$, for an elliptic solution $u\in C^2(\bar\Omega)$ of Eq. (1.1) [14]. Building on the optimal transportation case in [20], an obliqueness estimate,

G_p(\cdot,u,Du)\cdot\gamma \ge \kappa_0,
(3.12)

on $\partial\Omega$, for a positive constant $\kappa_0$, where $\gamma$ denotes the unit outer normal to $\partial\Omega$, is derived in [10, 14] for elliptic solutions $u\in C^3(\bar\Omega)$ of (1.1), (3.6), under the above hypotheses that $\Omega$ and $\Omega^*$ are respectively uniformly $A$-convex and uniformly $Y^*$-convex with respect to $J_1[u]$.

To proceed from (3.12) to global second derivative estimates for the second boundary value problem, we need a boundary estimate for second derivatives, which we can extract from [20], Sect. 4, as a refinement of the Monge–Ampère case in [21].

Lemma 3.2

Let $u\in C^4(\bar\Omega)$ be an elliptic solution of Eq. (1.1) in $\Omega$, subject to a nonlinear oblique boundary condition (3.11), (3.12) on $\partial\Omega\in C^4$, with $A, B, G\in C^2(U)$. Suppose that $A$ is regular on $J_1 = J_1[u](\Omega)\subset U$, $\Omega$ is uniformly $A$-convex with respect to $J_1$, $\inf_{J_1}B > 0$ and $G$ satisfies the uniform convexity condition (3.10). Then we have the estimate

\sup_{\partial\Omega}|D^2u| \le C\Big(1 + \sup_\Omega|D^2u|^{\frac{2n-3}{2(n-1)}}\Big),
(3.13)

where the constant $C$ depends on $A, B, G, \Omega$ and $|u|_{0,1;\Omega}$.

Using the estimates (3.12) and (3.13), we can thus conclude from Theorem 1.1 the following global second derivative estimate for generated prescribed Jacobian equations.

Theorem 3.1

Let $u\in C^4(\bar\Omega)$ be an elliptic solution of the second boundary value problem (1.1), (3.6) in $\Omega$, where $A\in C^2(U)$ is given by (1.2), with generating function $g\in C^4(\Gamma)$ satisfying conditions A1, A2, A1*, A3w and A4w (or A4*w), $B\in C^2(U)$, $B > 0$, and the domains $\Omega, \Omega^*\in C^4$ are respectively uniformly $A$-convex and uniformly $Y^*$-convex with respect to $J_1 = J_1[u](\Omega)\subset U$. Suppose also $u\le g_0$ (or $u\ge g_0$) in $\Omega$ for some $g$-affine function $g_0$ on $\bar\Omega$. Then we have the estimate

\sup_\Omega|D^2u| \le C,
(3.14)

where the constant $C$ depends on $n, U, g, B, \Omega, \Omega^*, g_0$ and $J_1[u]$.

We may express the domain convexity hypotheses in Theorem 3.1 in terms of boundary data depending on the solution $u$ or, more specifically, as in [10], intervals containing the range of $u$. For this purpose we will say that $\Omega$ is uniformly $Y$-convex with respect to $(\Omega^*, u)$ if

\big[D_i\gamma_j - D_{p_k}A_{ij}(x,u(x),p)\,\gamma_k\big]\tau_i\tau_j \ge \delta_0,
(3.15)

for all $x\in\partial\Omega$ and $p$ such that $Y(x,u(x),p)\in\Omega^*$, unit outer normal $\gamma$ and unit tangent vector $\tau$, for some constant $\delta_0 > 0$. Similarly, the target domain $\Omega^*$ is uniformly $Y^*$-convex with respect to $(\Omega, u)$ if

\big[D_i\gamma^*_j(y) - A^*_{ij,k}(x,u(x),p)\,\gamma^*_k(y)\big]\tau_i\tau_j \ge \delta_0,
(3.16)

for all $x\in\Omega$, $y = Y(x,u(x),p)\in\partial\Omega^*$, unit outer normal $\gamma^*$ and unit tangent vector $\tau$, for some constant $\delta_0 > 0$. It then follows that the uniform convexity assumptions in Theorem 3.1 may be replaced by $\Omega, \Omega^*$ being respectively uniformly $Y$-convex, $Y^*$-convex with respect to $(\Omega^*, u)$, $(\Omega, u)$. Moreover these assumptions are equivalent to the convexity notions defined in terms of the generating function $g$ in [16]. Assuming $\Omega$ and $\Omega^*$ are connected and $Tu$ is a diffeomorphism, then $\Omega$ is uniformly $Y$-convex with respect to $(\Omega^*, u)$ if and only if $\Omega$ is uniformly $g$-convex with respect to $v$, that is, the images $Q(\cdot,y,v(y))(\Omega)$ are uniformly convex for all $y\in\bar\Omega^*$, where $v$ is the $g^*$-transform of $u$ on $\Omega^*$, given by

v(y) = g^*(x,y,u(x)) = Z(x,u(x),Du(x)),
(3.17)

for $y = Tu(x)$, while $\Omega^*$ is uniformly $Y^*$-convex with respect to $(\Omega, u)$ if and only if $\Omega^*$ is uniformly $g^*$-convex with respect to $u$, that is, the images $P(x,\cdot,u(x))(\Omega^*)$ are uniformly convex for all $x\in\bar\Omega$, where $P(x,y,u) = g_x(x,y,g^*(x,y,u))$ for $(x,y,u)\in\Gamma^*$; see [16] for more details.

Returning to the Dirichlet problem, we state the following analogue of Theorem 3.1, which follows from Theorem 1.1 and our boundary estimates in [4].

Theorem 3.2

Let $u\in C^4(\bar\Omega)$ be an elliptic solution of (1.1) in $\Omega$, satisfying $u = u_0$ on $\partial\Omega$, where $A\in C^2(U)$ is given by (1.2), with generating function $g\in C^4(\Gamma)$ satisfying conditions A1, A2, A1*, A3w and A4w (or A4*w), $B\in C^2(U)$, $B > 0$, $u_0\in C^4(\bar\Omega)$, and the domain $\Omega\in C^4$ is uniformly $A$-convex with respect to $J_1 = J_1[u](\Omega)\subset U$. Suppose also $u\le g_0$ (or $u\ge g_0$) in $\Omega$ for some $g$-affine function $g_0$ on $\bar\Omega$. Then we have the estimate

\sup_\Omega|D^2u| \le C,
(3.18)

where the constant $C$ depends on $n, U, g, B, \Omega, \varphi, g_0$ and $J_1[u]$.

To reduce the proof of Theorem 3.2 to our boundary estimates in Section 3 of [4], we replace $\underline u$ by $u_0$ in inequality (3.5) there, and use the barrier $v = -C\varphi$ in place of $v = 1 - \phi$, for a sufficiently large constant $C$, in the rest of the proof, where $\varphi$ is a defining function in the definition of $A$-convexity. Note that under the assumption of uniform $A$-convexity we do not need Lemma 2.2 to obtain the second derivative boundary estimate, which is already asserted in [13]. Note also that in these arguments the dependence of the coefficients on $u$ can be removed by replacing $u$ by $u(x)$.

To conclude this section, we indicate how Theorem 1.2 follows from Lemma 2.2 and [9]. In the optimal transportation case (1.3), when $A$ is independent of $u$, we simply use (3.3) in the proof of Theorem 1.1 in [9] instead of (3.1), with $u_0$ replaced by $g_0$. In the general case we proceed in the same way, first noting that the monotonicity condition A4w ensures $\eta = g_0 - u > 0$ in $\Omega$, by virtue of the strong maximum principle [2] and the fact that $g_0$ is a degenerate elliptic solution of the homogeneous equation ($B = 0$). We then consider, as in [9], the corresponding auxiliary function

v = \tau\log\eta + \log(w_{ij}\xi_i\xi_j) + \frac{\beta}{2}|Du|^2 + e^{\kappa\varphi},
(3.19)

where $\beta, \kappa, \tau$ are positive constants to be determined, and estimate $\tilde Lv$ from below at an interior maximum point $x = \bar x$, where

\tilde Lv = Lv - D_p(\log B)\cdot Dv.

As in Section 3 of [20], the additional terms arising from the $u$ dependence in the differentiations of (1.1) are absorbed in the lower bounds (18) and (19) in [9], so it remains to extend the lower bound for $\tilde L\eta$ in inequalities (20) and (21) in [9], where now $\eta = g_0 - u$. This is done similarly to the argument in the proof of Lemma 2.2, as follows. By calculation, we have

\begin{aligned}
\tilde L\eta &= w^{ij}\big[D_{ij}\eta - D_{p_k}A_{ij}(x,u,Du)D_k\eta\big] - D_{p_k}(\log B)D_k\eta\\
&= w^{ij}\big[-w_{ij} + A_{ij}(x,g_0,Dg_0) - A_{ij}(x,u,Du) - D_{p_k}A_{ij}(x,u,Du)D_k\eta\big] - D_{p_k}(\log B)D_k\eta\\
&\ge -C + w^{ij}\big[A_{ij}(x,g_0,Dg_0) - A_{ij}(x,u,Du) - D_{p_k}A_{ij}(x,u,Du)D_k\eta\big]\\
&\ge -C + w^{ij}\big[A_{ij}(x,u,Dg_0) - A_{ij}(x,u,Du) - D_{p_k}A_{ij}(x,u,Du)D_k\eta\big]\\
&= -C + \tfrac12w^{ij}A_{ij,kl}(x,u,\bar p)D_k\eta D_l\eta,
\end{aligned}
(3.20)

where $\bar p = (1-\theta)Dg_0 + \theta Du$ for some $\theta\in(0,1)$; condition A4w is used in the second inequality and Taylor's formula in the last equality. By assuming $\{w_{ij}\}$ is diagonal, we have from (3.20)

L̃η ≥ -C + (1/2) w^{ii} A_{ii,kl}(x,u,p̄) D_kη D_lη
    ≥ -C + Σ_{k≠i} w^{ii} A_{ii,ik}(x,u,p̄) D_iη D_kη + (1/2) w^{ii} A_{ii,ii}(x,u,p̄) (D_iη)²
    ≥ -C ( 1 + w^{ii} |D_iη| ),
(3.21)

where condition A3w is used in the second inequality and the constant C depends on Ω , A , B , g 0 and J 1 [ u ] . We remark that the particular form of the estimate (3.21) is actually crucial in the ensuing estimations in [9].

Accordingly we obtain from [9], an estimate,

(g_0 - u)^τ |D²u| ≤ C
(3.22)

for positive constants τ and C, corresponding to the estimate (9) in [9]. Our desired estimate (1.5) then follows, noting also that the estimate (28) in [9], bounding u_0 - u from below, extends automatically when we substitute u(x) for u in A and B.

4 Optimal transportation and geometric optics

4.1 Optimal transportation

For our applications to optimal transportation, when g reduces to a cost function c through (1.3), it will suffice to assume c ∈ C⁴(Ω̄ × Ω̄*), where the domains Ω and Ω* are respectively uniformly c and c*-convex with respect to each other, that is, the images c_y(·,y)(Ω) and c_x(x,·)(Ω*) are uniformly convex in R^n, for each y ∈ Ω̄*, x ∈ Ω̄.

Conditions A1, A1*, A2 and A3w can then be written as:

  • A1: The mappings c_x(x,·), c_y(·,y) are one-to-one for each x ∈ Ω, y ∈ Ω*;

  • A2: det c_{x,y} ≠ 0 in Ω̄ × Ω̄*;

  • A3w: ( D_{p_k p_l} A_{ij}(x,p) ) ξ_i ξ_j η_k η_l ≥ 0, for all (x,p) ∈ Ω × R^n such that Y(x,p) ∈ Ω* and ξ, η ∈ R^n such that ξ · η = 0.

Here we also have the simpler formulae,

Y(x,u,p) = Y(x,p) = c_x^{-1}(x,·)(p), A(x,u,p) = A(x,p) = c_{xx}(x,Y).
(4.1)

We then conclude from Theorem 3.2 the following global second derivative estimate which improves Theorem 1.1 in [20] and Theorem 2.1 in [15].

Corollary 4.1

Let u ∈ C⁴(Ω̄) be an elliptic solution of the second boundary value problem (1.1), (3.6) in Ω, where A is given by (4.1) with cost function c satisfying conditions A1, A2 and A3w, B > 0, B ∈ C²(J̄_1[u]), and the domains Ω, Ω* ∈ C⁴ are respectively uniformly c-convex and uniformly c*-convex with respect to each other. Then we have the estimate,

sup_Ω |D²u| ≤ C,
(4.2)

where the constant C depends on n, c, B, Ω, Ω* and J_1[u].

For application to optimal transportation, the function B has the form

B(·, u, p) = |det c_{x,y}| ρ / ( ρ* ∘ Y(·, p) )
(4.3)

where ρ and ρ* are positive densities defined on Ω and Ω* respectively, satisfying the mass balance condition

∫_Ω ρ = ∫_{Ω*} ρ*,
(4.4)

which is a necessary condition for the existence of a solution u for which the mapping Tu = Y(·,Du) is a diffeomorphism. From Corollary 4.1 we obtain, following [20], the existence of an elliptic solution u ∈ C³(Ω̄), unique up to additive constants, as formulated in Theorem 1.1 in [15]. However our proof of Corollary 4.1 in this case is more direct and more general than the approach in [15], which depends on a strict convexity regularity result of Figalli et al. [1]. From this existence result we obtain a smooth optimal mapping solving the transportation problem, as formulated in Corollary 1.2 in [15].
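The relationship between (4.3) and (4.4) can be illustrated numerically. The following Python sketch (not from the paper; the map T and the uniform source density are arbitrary test choices) checks, in one dimension, that a pushforward density defined by the prescribed Jacobian relation ρ(x) = ρ*(T(x)) T′(x) automatically carries the same total mass as ρ, which is the content of the mass balance condition (4.4).

```python
# One-dimensional illustration (test choices, not from the paper): T plays the
# role of the optimal map T_u; the pushforward density rho* is defined by the
# prescribed Jacobian relation rho(x) = rho*(T(x)) T'(x).
import math

def T(x):        # monotone test map of [0,1] onto [0,1]
    return 0.5 * (x**3 + x)

def dT(x):       # T'(x) >= 1/2 on [0,1]
    return 0.5 * (3 * x**2 + 1)

def T_inv(y, lo=0.0, hi=1.0):
    # invert T by bisection (T is strictly increasing)
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if T(mid) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def rho(x):      # source density: uniform on [0,1]
    return 1.0

def rho_star(y): # pushforward density rho* = rho(T^{-1}) / T'(T^{-1})
    x = T_inv(y)
    return rho(x) / dT(x)

def simpson(f, a, b, n=2000):   # composite Simpson quadrature, n even
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3

mass_source = simpson(rho, 0.0, 1.0)        # integral of rho over Omega
mass_target = simpson(rho_star, 0.0, 1.0)   # integral of rho* over Omega*

# pointwise prescribed Jacobian relation rho(x) = rho*(T(x)) T'(x)
jac_err = max(abs(rho(x) - rho_star(T(x)) * dT(x))
              for x in [0.1, 0.3, 0.5, 0.7, 0.9])
```

Both integrals come out equal to 1, as (4.4) requires once the pointwise Jacobian relation holds.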

Examples of cost functions which satisfy condition A3w and not A3 are given in [20] and [9]. However for these examples the global second derivative bounds in Theorems 1.1 and are proved in [20] from other structures such as c-boundedness or duality so that Corollary 4.1 is not new in these cases. This is not the case though for the interior estimate, Theorem 1.2, which for example is new for the cost function c ( x , y ) = | x - y | - 2 . For explicit calculations in this case the reader is referred to [8], as well as [20].

4.2 Geometric optics

Keeping to domains in Euclidean space, we will just consider here parallel beams in R^{n+1}, directed in the e_{n+1} direction, through a domain Ω ⊂ R^n × {0}, illuminating targets which are graphs over domains Ω* in R^n × {0}. The formulae are easily adjusted to cover more general situations and we may also consider point sources of light as in [6]. The special cases where the targets are themselves domains in R^n × {0} yield strictly regular matrix functions A, and global estimates for the second boundary value problem are already proved in [10, 14]. Consequently our main interest here is with situations where the resultant matrices A satisfy A3w and not necessarily A3, although as we indicated in Sect. 1 our lemmas in Sect. 2 are still new in this case.

  1. (i)

    Reflection. We modify slightly the example in [16] to allow for non-flat targets; see also [7] for a thorough description of the geometric picture. Let D be a domain in R^n × R^n, containing Ω × Ω*, and consider the generating function:

    g(x,y,z) = Φ(y) + 1/(2z) - (z/2)|x-y|²,
    (4.5)

    defined for (x,y) ∈ D and z > 0, where Φ is a smooth function on R^n. By differentiation we have

    g x ( x , y , z ) = z ( y - x )
    (4.6)

    so that if there are mappings y = Y ( x , u , p ) , z = Z ( x , u , p ) satisfying A1, we must have y = x + p / z and from (4.5),

    u = h(z) = Φ( x + p/z ) + (1/(2z)) ( 1 - |p|² ).
    (4.7)

    By differentiation we then obtain,

    h′(z) = -(1/(2z²)) ( 2p · DΦ + 1 - |p|² ) < 0,
    (4.8)

    provided

    2 p · D Φ + 1 - | p | 2 > 0 ,

    that is,

    z 2 | y - x | 2 + 2 z ( x - y ) · D Φ - 1 < 0 .
    (4.9)

    Accordingly conditions A1 and A2 will be satisfied for 0 < z < z 0 ( x , y ) , where

    z_0 = 1 / ( √( |x-y|² + |(x-y) · DΦ|² ) + (x-y) · DΦ ),
    (4.10)

    Y ( x , u , p ) = x + p / Z ( x , u , p ) and Z, given implicitly by (4.7), satisfies Z u < 0 . Now from (4.6), we see that the matrix A is given by

    A ( x , u , p ) = - Z ( x , u , p ) I
    (4.11)

    and hence conditions A3w and A4w are both satisfied whenever the dual function Z is concave in the gradient variables. It follows further that condition A1* is satisfied when (4.9) holds, that is, the mapping Q given by

    Q(x,y,z) = 2z² [ DΦ + z(x-y) ] / ( 1 + z²|x-y|² )

    is one-to-one in x, for 0 < z < z_0(x,y), which can be proved for example from the quadratic structure of the equation Q(x,y,z) = q. Furthermore, by taking z sufficiently small, we see that there exist arbitrarily large g-affine functions and hence all the hypotheses of Theorems 1.1 and 1.2 are satisfied by the generating function g if Z is concave in p. Note that when Φ is constant, then Z = (1 - |p|²)/(2(u - Φ)), u > Φ, |p| < 1, and the strict condition A3 holds, [16]. By direct calculation, we have an explicit formula for the matrix E,

    E = z I - 2z² (x-y) ⊗ [ z(x-y) + DΦ ] / ( 1 + z²|x-y|² ),

    and its determinant

    det E = -z^n ( z²|x-y|² + 2z(x-y) · DΦ - 1 ) / ( 1 + z²|x-y|² ) > 0,

    when (4.9) holds. By the Sherman–Morrison formula, we also have a formula for the inverse of E,

    E⁻¹ = (1/z) I - 2 (x-y) ⊗ [ z(x-y) + DΦ ] / ( z²|x-y|² + 2z(x-y) · DΦ - 1 ).
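These closed formulas can be tested numerically. The following Python sketch (not part of the paper; the quadratic Φ and the sample point are arbitrary test choices) computes E = g_{x,y} - (g_z)⁻¹ g_{x,z} ⊗ g_y for the generating function (4.5) by central finite differences in n = 2 and compares det E with the stated closed form, also checking (4.6).

```python
# Numerical sanity check (test choices, not from the paper) of the reflection
# example: E is built from g by nested central finite differences.
import math

def Phi(y):  # arbitrary smooth test function
    return 0.3 * y[0]**2 + 0.1 * y[0] * y[1] - 0.2 * y[1]

def G(v):  # generating function (4.5); v = (x1, x2, y1, y2, z)
    x, y, z = v[0:2], v[2:4], v[4]
    r2 = (x[0] - y[0])**2 + (x[1] - y[1])**2
    return Phi(y) + 1.0 / (2.0 * z) - 0.5 * z * r2

def D1(i, v, h=1e-5):  # central first difference of G
    vp, vm = list(v), list(v)
    vp[i] += h; vm[i] -= h
    return (G(vp) - G(vm)) / (2 * h)

def D2(i, j, v, h=1e-4):  # nested central second difference
    vp, vm = list(v), list(v)
    vp[i] += h; vm[i] -= h
    return (D1(j, vp) - D1(j, vm)) / (2 * h)

v = [0.2, -0.1, 0.9, 0.5, 0.4]  # sample point with 0 < z < z_0(x, y)
x, y, z = v[0:2], v[2:4], v[4]

gx_err = max(abs(D1(i, v) - z * (y[i] - x[i])) for i in range(2))  # checks (4.6)

gz = D1(4, v)
E = [[D2(i, 2 + j, v) - D2(i, 4, v) * D1(2 + j, v) / gz for j in range(2)]
     for i in range(2)]
detE_num = E[0][0] * E[1][1] - E[0][1] * E[1][0]

# stated closed form with n = 2:
# det E = -z^n (z^2|x-y|^2 + 2z (x-y).DPhi - 1) / (1 + z^2|x-y|^2)
dPhi = [0.6 * y[0] + 0.1 * y[1], 0.1 * y[0] - 0.2]  # gradient of Phi at y
r2 = (x[0] - y[0])**2 + (x[1] - y[1])**2
dot = (x[0] - y[0]) * dPhi[0] + (x[1] - y[1]) * dPhi[1]
detE_closed = -z**2 * (z * z * r2 + 2 * z * dot - 1) / (1 + z * z * r2)
```

At the sample point the condition (4.9) holds, so the closed form is positive and agrees with the finite-difference determinant to the expected accuracy.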
  2. (ii)

    Refraction. We consider refraction from medium I to medium II with respective refractive indices n_1, n_2 > 0 and set κ = n_1/n_2. For κ ≠ 1, we consider now generating functions,

    g(x,y,z) = Φ(y) - (1/|κ² - 1|) [ κz + √( z² + (κ² - 1)|x-y|² ) ],
    (4.12)

    where again (x,y) ∈ D, z > √(1 - κ²)|x-y| for 0 < κ < 1, z > 0 for κ > 1, and Φ is a smooth function on R^n. For more details about the geometric and physical aspects of this model, see for example [12]. We will now restrict attention to the case 0 < κ < 1, in which case, setting κ̄ = √(1 - κ²) and rescaling z → z/κ̄, Φ → κ̄Φ, g → κ̄g, we can write

    g(x,y,z) = Φ(y) - κz - √( z² - |x-y|² ).
    (4.13)

    By differentiation we now have

    g_x(x,y,z) = (x - y) / √( z² - |x-y|² )
    (4.14)

    so that if there are mappings y = Y ( x , u , p ) , z = Z ( x , u , p ) satisfying A1, we must have

    y = x - zp / √( 1 + |p|² ),

    so from (4.13),

    u = h(z) = Φ( x - zp/√(1 + |p|²) ) - κz - z/√(1 + |p|²).
    (4.15)

    By differentiation we then obtain,

    h′(z) = -(1/√(1 + |p|²)) [ p · DΦ + 1 + κ√(1 + |p|²) ] < 0,
    (4.16)

    provided

    p · DΦ + 1 + κ√(1 + |p|²) > 0,

    that is,

    √( z² - |x-y|² ) + κz + (x - y) · DΦ > 0.
    (4.17)

    Accordingly conditions A1 and A2 will be satisfied for z > z_0 = z_0(x,y), where z_0 = |x-y| if κ|x-y| ≥ (y-x) · DΦ, and z_0 is given by

    z_0 = (1/(1 - κ²)) [ κ(x-y) · DΦ + √( (1 - κ²)|x-y|² + |(x-y) · DΦ(y)|² ) ],
    (4.18)

    if κ|x-y| < (y-x) · DΦ. Furthermore the mapping Y is given by Y(x,u,p) = x - Z(x,u,p) p/√(1 + |p|²) and Z, given implicitly by (4.15), satisfies Z_u < 0. From (4.14), we now have

    A(x,u,p) = ( √(1 + |p|²) / Z(x,u,p) ) ( I + p ⊗ p ).
    (4.19)

    It follows then that conditions A3w and A4w are both satisfied whenever the function p ↦ √(1 + |p|²)/Z(x,u,p) is convex. Again condition A1* is satisfied when (4.17) holds, so that the hypotheses of Theorems 1.1 and 1.2 are fulfilled, provided in the case of Theorem 1.1 the solution u has a g-affine majorant; this is satisfied automatically from the boundary condition u = g_0 on ∂Ω in the case of Theorem 1.2. Note that when Φ is constant, we obtain

    Z = (Φ - u) √(1 + |p|²) / ( 1 + κ√(1 + |p|²) ), u < Φ,

    so again the strict condition A3 holds. By direct calculation, we now have an explicit formula for the matrix E,

    E = -(1/√(z² - |x-y|²)) { I + (x-y) ⊗ [ κ(x-y) + zDΦ ] / [ √(z² - |x-y|²) ( z + κ√(z² - |x-y|²) ) ] },

    and its determinant

    det E = (-1)^n z [ √(z² - |x-y|²) + κz + (x-y) · DΦ ] / [ (z² - |x-y|²)^{(n+1)/2} ( z + κ√(z² - |x-y|²) ) ] ≠ 0,

    when (4.17) holds. We also obtain the following formula for the inverse of E,

    E⁻¹ = -√(z² - |x-y|²) { I - (x-y) ⊗ [ κ(x-y) + zDΦ ] / ( z [ √(z² - |x-y|²) + κz + (x-y) · DΦ ] ) }.
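As for the reflection case, these formulas can be tested numerically. The following Python sketch (not from the paper; Φ, κ and the sample point are test choices) builds E for the rescaled generating function (4.13) by finite differences in n = 2, compares det E with the stated closed form, and checks the explicit Z for constant Φ.

```python
# Numerical sanity check (test choices, not from the paper) of the refraction
# example with 0 < kappa < 1.
import math

kappa = 0.5  # test value, 0 < kappa < 1

def Phi(y):
    return 0.3 * y[0]**2 + 0.1 * y[0] * y[1] - 0.2 * y[1]

def G(v):  # generating function (4.13); v = (x1, x2, y1, y2, z), z > |x - y|
    x, y, z = v[0:2], v[2:4], v[4]
    r2 = (x[0] - y[0])**2 + (x[1] - y[1])**2
    return Phi(y) - kappa * z - math.sqrt(z * z - r2)

def D1(i, v, h=1e-5):
    vp, vm = list(v), list(v)
    vp[i] += h; vm[i] -= h
    return (G(vp) - G(vm)) / (2 * h)

def D2(i, j, v, h=1e-4):
    vp, vm = list(v), list(v)
    vp[i] += h; vm[i] -= h
    return (D1(j, vp) - D1(j, vm)) / (2 * h)

v = [0.2, -0.1, 0.6, 0.3, 1.2]  # sample point satisfying (4.17)
x, y, z = v[0:2], v[2:4], v[4]
gz = D1(4, v)
E = [[D2(i, 2 + j, v) - D2(i, 4, v) * D1(2 + j, v) / gz for j in range(2)]
     for i in range(2)]
detE_num = E[0][0] * E[1][1] - E[0][1] * E[1][0]

# stated closed form with n = 2 (so (-1)^n = 1)
r2 = (x[0] - y[0])**2 + (x[1] - y[1])**2
s = math.sqrt(z * z - r2)
dPhi = [0.6 * y[0] + 0.1 * y[1], 0.1 * y[0] - 0.2]
dot = (x[0] - y[0]) * dPhi[0] + (x[1] - y[1]) * dPhi[1]
detE_closed = z * (s + kappa * z + dot) / (s**3 * (z + kappa * s))

# constant-Phi case: Z = (Phi - u) sqrt(1+|p|^2) / (1 + kappa sqrt(1+|p|^2))
Phi0, p2, z_test = 1.5, 0.7, 0.8  # arbitrary test values
q = math.sqrt(1 + p2)
u = Phi0 - kappa * z_test - z_test / q   # u = h(z) at constant Phi
Z = (Phi0 - u) * q / (1 + kappa * q)     # should recover z_test
z_roundtrip_err = abs(Z - z_test)
```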

    In the case κ > 1, the monotonicity is reversed when Φ is constant. For general Φ, taking now κ̄ = √(κ² - 1), we have after rescaling z → z/κ̄, Φ → κ̄Φ, g → κ̄g,

    g(x,y,z) = Φ(y) - κz - √( z² + |x-y|² ),
    (4.20)

    and hence we obtain, in place of (4.14) to (4.16),

    g_x(x,y,z) = -(x - y)/√( z² + |x-y|² ), y = x + zp/√( 1 - |p|² ),

    u = h(z) = Φ( x + zp/√(1 - |p|²) ) - κz - z/√(1 - |p|²),

    h′(z) = (1/√(1 - |p|²)) [ p · DΦ - 1 - κ√(1 - |p|²) ] < 0,
    (4.21)

    provided

    (y - x) · DΦ < κz + √( z² + |x-y|² ).

    However now we have in place of (4.19),

    A(x,u,p) = -( √(1 - |p|²) / Z(x,u,p) ) ( I - p ⊗ p )
    (4.22)

    so that we obtain A4*w instead of A4w. Hence for A4w we need to assume the reverse inequality, h′(z) > 0, that is 0 < z < z_0, where now z_0 is given by

    z_0 = (1/(κ² - 1)) [ κ(y-x) · DΦ - √( (κ² - 1)|x-y|² + |(x-y) · DΦ(y)|² ) ],
    (4.23)

    which implies at least that Ω and Ω* are disjoint and excludes the case Φ = constant. However with z > max{z_0, 0}, Theorems 1.1 and 1.2 are still applicable provided the function p ↦ -√(1 - |p|²)/Z(x,u,p) is convex, as there exist arbitrarily small g-affine functions. Also then the case of constant Φ is embraced, with

    Z = (Φ - u) √(1 - |p|²) / ( 1 + κ√(1 - |p|²) ), u < Φ, |p| < 1.

    By direct calculation, we again have an explicit formula for the matrix E,

    E = (1/√(z² + |x-y|²)) { I + (x-y) ⊗ [ -κ(x-y) + zDΦ ] / [ √(z² + |x-y|²) ( z + κ√(z² + |x-y|²) ) ] },

    and its determinant

    det E = z [ √(z² + |x-y|²) + κz + (x-y) · DΦ ] / [ (z² + |x-y|²)^{(n+1)/2} ( z + κ√(z² + |x-y|²) ) ] ≠ 0,

    when h′(z) ≠ 0. By the Sherman–Morrison formula, we also obtain the inverse of E,

    E⁻¹ = √(z² + |x-y|²) { I - (x-y) ⊗ [ -κ(x-y) + zDΦ ] / ( z [ √(z² + |x-y|²) + κz + (x-y) · DΦ ] ) }.

    As in the previous cases, the condition A1* will hold if either h′(z) < 0 (that is, det E > 0), or h′(z) > 0 (that is, det E < 0), which again follows from the equivalent quadratic structure of the equation Q(x,y,z) = q.
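For the case κ > 1 the formula for E⁻¹ can likewise be tested. The following Python sketch (not from the paper; Φ, κ and the sample point are test choices) builds E for the generating function (4.20) by finite differences in n = 2 and multiplies it against the closed form for E⁻¹ given above, expecting the identity matrix.

```python
# Numerical sanity check (test choices, not from the paper) of the refraction
# example with kappa > 1: E from finite differences times the closed-form
# inverse should be the identity.
import math

kappa = 1.5  # test value for the rescaled constant, kappa > 1

def Phi(y):
    return 0.3 * y[0]**2 + 0.1 * y[0] * y[1] - 0.2 * y[1]

def G(v):  # generating function (4.20); v = (x1, x2, y1, y2, z)
    x, y, z = v[0:2], v[2:4], v[4]
    r2 = (x[0] - y[0])**2 + (x[1] - y[1])**2
    return Phi(y) - kappa * z - math.sqrt(z * z + r2)

def D1(i, v, h=1e-5):
    vp, vm = list(v), list(v)
    vp[i] += h; vm[i] -= h
    return (G(vp) - G(vm)) / (2 * h)

def D2(i, j, v, h=1e-4):
    vp, vm = list(v), list(v)
    vp[i] += h; vm[i] -= h
    return (D1(j, vp) - D1(j, vm)) / (2 * h)

v = [0.2, -0.1, 0.9, 0.5, 0.7]  # arbitrary sample point
x, y, z = v[0:2], v[2:4], v[4]
gz = D1(4, v)
E = [[D2(i, 2 + j, v) - D2(i, 4, v) * D1(2 + j, v) / gz for j in range(2)]
     for i in range(2)]

# closed form for E^{-1} (Sherman-Morrison), as stated in the text
r2 = (x[0] - y[0])**2 + (x[1] - y[1])**2
t = math.sqrt(z * z + r2)
dPhi = [0.6 * y[0] + 0.1 * y[1], 0.1 * y[0] - 0.2]  # gradient of Phi at y
dot = (x[0] - y[0]) * dPhi[0] + (x[1] - y[1]) * dPhi[1]
uv = [x[0] - y[0], x[1] - y[1]]
vv = [-kappa * uv[0] + z * dPhi[0], -kappa * uv[1] + z * dPhi[1]]
den = z * (t + kappa * z + dot)
Einv = [[t * ((1.0 if i == j else 0.0) - uv[i] * vv[j] / den)
         for j in range(2)] for i in range(2)]

prod = [[sum(E[i][k] * Einv[k][j] for k in range(2)) for j in range(2)]
        for i in range(2)]
id_err = max(abs(prod[i][j] - (1.0 if i == j else 0.0))
             for i in range(2) for j in range(2))
```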

  3. (iii)

    Further remarks. It is interesting to note that when we pass to the dual functions in the above examples the monotonicity of A is reversed but, as shown in general in [16], the condition A3w is preserved. In particular for the dual in the reflection case (4.5) we have from [16],

    g*(x,y,u) = 1 / ( u - Φ(y) + √( (u - Φ(y))² + |x-y|² ) ),
    (4.24)

    while in the refraction cases (4.12), a simple calculation gives

    g*(x,y,u) = +, (-) [ κ(u - Φ(y)) + √( (u - Φ(y))² + |x-y|² ) ],
    (4.25)

    for κ < 1, (κ > 1) respectively. Note that by replacing z by -1/z in (4.5), the dual function in (4.24) is the limit as κ → 1 of the case κ > 1 in (4.25).
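The dual functions can be verified by a round-trip computation: g* inverts g in the z variable, so g(x, y, g*(x, y, u)) = u. The following Python sketch (not from the paper; the values of Φ(y), |x-y|² and the sample values of u are arbitrary test choices) checks this for the reflection dual (4.24) and the refraction dual (4.25) with κ < 1.

```python
# Round-trip check (test choices, not from the paper): the dual function g*
# satisfies g(x, y, g*(x, y, u)) = u; only the values Phi(y) and |x-y|^2 enter.
import math

Phi0 = 1.2   # value of Phi at the sample target point
r2 = 0.85    # |x - y|^2 at the sample point

# reflection: (4.5) as a function of z at the sample point
def g_reflect(z):
    return Phi0 + 1.0 / (2.0 * z) - 0.5 * z * r2

def g_star_reflect(u):  # dual function (4.24)
    return 1.0 / (u - Phi0 + math.sqrt((u - Phi0)**2 + r2))

u1 = Phi0 + 0.4   # admissible value u > Phi(y)
err_reflect = abs(g_reflect(g_star_reflect(u1)) - u1)

# refraction: (4.12) with 0 < kappa < 1, again as a function of z
kappa = 0.5
def g_refract(z):
    return Phi0 - (kappa * z
                   + math.sqrt(z * z - (1 - kappa**2) * r2)) / (1 - kappa**2)

def g_star_refract(u):  # dual function (4.25), "+" branch for kappa < 1
    return kappa * (u - Phi0) + math.sqrt((u - Phi0)**2 + r2)

u2 = Phi0 - 1.8   # admissible value u < Phi(y)
err_refract = abs(g_refract(g_star_refract(u2)) - u2)
```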

Also we note that when we consider instead the complementary ellipticity condition, D 2 u - A ( · , u , D u ) < 0 , in Eq. (1.1), condition A3w is replaced by substituting - A for A but conditions A4w and A4*w are maintained. Thus in the above examples we need to replace Z by - Z in the convexity conditions, (and interchange the inequalities u < g 0 , u > g 0 ), to fulfil our hypotheses.

As well as the monotonicity properties, another interesting ingredient of the above examples is that we cannot infer, as in the optimal transportation examples mentioned in the previous section, that bounded domains are A-bounded so second derivative estimates do not follow from the relevant arguments in [14, 20].

4.3 Final remarks

The existence of locally smooth solutions to the second boundary value problem for generated prescribed Jacobian equations is treated in [16] under conditions A1, A2, A1*, A3 and A4w. We remark that there the monotonicity condition A4w may also be replaced by condition A4* for local regularity. In [17] and ensuing work the monotonicity condition A4w is relaxed and local boundary regularity is also considered, yielding an alternative approach to that in [10], which deals with estimates and existence of globally smooth solutions. We remark that the strict monotonicity of Z in our examples in Sect. 4.2 also follows from a general formula in [17],

Z_u = det g_{x,y} / ( g_z det E ).
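This formula can also be tested numerically. The following Python sketch (not from the paper; Φ and the sample point are test choices) checks it for the reflection example (4.5) in n = 2: since u = h(z) at fixed (x, p) by (4.7), the implicit function theorem gives Z_u = 1/h′(z), which is compared with the right-hand side computed by finite differences.

```python
# Numerical check (test choices, not from the paper) of
# Z_u = det g_{x,y} / (g_z det E) for the reflection example.
import math

def Phi(y):
    return 0.3 * y[0]**2 + 0.1 * y[0] * y[1] - 0.2 * y[1]

def G(v):  # generating function (4.5); v = (x1, x2, y1, y2, z)
    x, y, z = v[0:2], v[2:4], v[4]
    r2 = (x[0] - y[0])**2 + (x[1] - y[1])**2
    return Phi(y) + 1.0 / (2.0 * z) - 0.5 * z * r2

def D1(i, v, h=1e-5):
    vp, vm = list(v), list(v)
    vp[i] += h; vm[i] -= h
    return (G(vp) - G(vm)) / (2 * h)

def D2(i, j, v, h=1e-4):
    vp, vm = list(v), list(v)
    vp[i] += h; vm[i] -= h
    return (D1(j, vp) - D1(j, vm)) / (2 * h)

v = [0.2, -0.1, 0.9, 0.5, 0.4]  # sample point with 0 < z < z_0
x, y, z = v[0:2], v[2:4], v[4]
p = [z * (y[0] - x[0]), z * (y[1] - x[1])]  # p = g_x = z(y - x), by (4.6)

def h_of_z(zz):  # u = h(z) at fixed (x, p), from (4.7); here y = x + p/z
    yy = [x[0] + p[0] / zz, x[1] + p[1] / zz]
    p2 = p[0]**2 + p[1]**2
    return Phi(yy) + (1.0 - p2) / (2.0 * zz)

hp = (h_of_z(z + 1e-6) - h_of_z(z - 1e-6)) / 2e-6  # h'(z); Z_u = 1/h'(z)

gz = D1(4, v)
E = [[D2(i, 2 + j, v) - D2(i, 4, v) * D1(2 + j, v) / gz for j in range(2)]
     for i in range(2)]
detE = E[0][0] * E[1][1] - E[0][1] * E[1][0]
det_gxy = D2(0, 2, v) * D2(1, 3, v) - D2(0, 3, v) * D2(1, 2, v)

Zu_formula = det_gxy / (gz * detE)
Zu_direct = 1.0 / hp
rel_err = abs(Zu_formula - Zu_direct) / abs(Zu_direct)
```

Both sides come out negative, consistent with the strict monotonicity Z_u < 0 noted in the examples.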

In a follow-up work [3] we will use our estimates here to extend the global theory when A3 is weakened to A3w, without A-boundedness conditions, as well as further develop the optics examples. We also develop an alternative duality approach to second derivative estimates when the matrix A depends only on u and p. As already noted in [20], this situation is much simpler in two dimensions.

Finally we wish to thank the anonymous referee of this paper for their careful checking and useful comments.