On Pogorelov estimates in optimal transportation and geometric optics

In this paper, we prove global and interior second derivative estimates of Pogorelov type for certain Monge–Ampère type equations, arising in optimal transportation and geometric optics, under sharp hypotheses. Specifically, for the case of generated prescribed Jacobian equations, as developed recently by the second author, we remove the barrier or subsolution hypotheses assumed in previous work by Trudinger and Wang (Arch Ration Mech Anal 192:403–418, 2009), Liu and Trudinger (Discrete Contin Dyn Syst Ser A 28:1121–1363, 2010) and Jiang et al. (Calc Var Partial Differ Equ 49:1223–1236, 2014), resulting in new applications to optimal transportation and near field geometric optics.


Introduction
This paper is concerned with global and interior second derivative estimates of solutions to partial differential equations of Monge–Ampère type (MATEs) that arise in optimal transportation and geometric optics and are embraced by the notion of "generated prescribed Jacobian equation" (GPJE) introduced in [16]. Such equations can be written in the general form

det{D^2 u − A(·, u, Du)} = B(·, u, Du), (1.1)

where A and B are respectively n × n symmetric matrix valued and scalar functions on a domain U ⊂ R^n × R × R^n, and Du and D^2 u denote respectively the gradient vector and Hessian matrix of the scalar function u, which is defined on a bounded domain Ω ⊂ R^n for which the sets U_x = {(u, p) | (x, u, p) ∈ U} are non-empty for all x ∈ Ω. A solution u ∈ C^2(Ω) of (1.1) is called elliptic (degenerate elliptic) whenever D^2 u − A(·, u, Du) > 0 (≥ 0), which implies B > 0 (≥ 0). We shall establish the second derivative bounds for generated prescribed Jacobian equations under the hypotheses G1, G2, G1*, G3w and G4w introduced in [16], which extend the conditions A1, A2 and A3w for optimal transportation equations introduced in [20]; the latter are degenerate versions of the conditions A1, A2 and A3 for regularity originating in [11]. In fact we will relabel them here, replacing G in [16] by A, to make this extension clearer. First we need to suppose that there exists a smooth generating function g defined on a domain Γ ⊂ R^n × R^n × R for which the sets Γ_{x,y} = {z ∈ R | (x, y, z) ∈ Γ} are convex (and hence open intervals). Setting U = {(x, g(x, y, z), g_x(x, y, z)) | (x, y, z) ∈ Γ}, we then assume:

A1: For each (x, u, p) ∈ U, there exists a unique point (x, y, z) ∈ Γ satisfying g(x, y, z) = u, g_x(x, y, z) = p.

A2: g_z < 0, det E ≠ 0, in Γ, where E is the n × n matrix given by

E = [E_{i,j}], E_{i,j} = g_{x_i, y_j} − (g_z)^{−1} g_{x_i, z} g_{y_j}.

Defining Y(x, u, p) = y, Z(x, u, p) = z in A1, we thus obtain mappings Y : U → R^n, Z : U → R with Y_p = E^{−1}.
The matrix function A in the corresponding generated prescribed Jacobian equation is then given by

A(x, u, p) = g_{xx}(x, Y, Z), (1.2)

where Y = Y(x, u, p) and Z = Z(x, u, p). Note that the Jacobian determinant of the mapping (y, z) → (g_x, g)(x, y, z) is g_z det E ≠ 0 by A2, so that Y and Z are accordingly smooth. In particular, it suffices to have g ∈ C^2(Γ) to define Y, Z ∈ C^1(U), A ∈ C^0(U). The reader is referred to [16] for more details. We also mention though that the special case of optimal transportation is given by taking

g(x, y, z) = c(x, y) − z, (1.3)

where c is a cost function defined on a domain D in R^n × R^n, so that Γ = D × R, g_x = c_x, g_z = −1, E = c_{x,y} and conditions A1 and A2 are equivalent to those in [11,19]. Note that here we follow the same sign convention as in [15,20], so that our cost functions are the negatives of those usually considered. We remark also that the case when Y is independent of u is equivalent to the optimal transportation case. Crucial to us in this paper is the dual condition to A1, namely

A1*: The mapping Q := −g_y/g_z is one-to-one in x, for all (x, y, z) ∈ Γ.
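To make the optimal transportation specialization (1.3) concrete, the following symbolic sketch (with scalar symbols standing in for the n-dimensional variables, and using the rank-one form E_{i,j} = g_{x_i,y_j} − (g_z)^{−1} g_{x_i,z} g_{y_j} from [16]) checks that E, A and Q reduce to the familiar quantities c_{x,y}, c_{xx} and c_y:

```python
import sympy as sp

# One-dimensional symbolic sketch of the optimal transportation case
# g(x, y, z) = c(x, y) - z, with E = g_{x,y} - (g_z)^{-1} g_{x,z} g_y.
x, y, z = sp.symbols('x y z')
c = sp.Function('c')(x, y)
g = c - z

gz = sp.diff(g, z)
assert gz == -1  # g_z = -1 < 0, as required in condition A2

# E reduces to c_{x,y}, since g_{x,z} = 0
E = sp.diff(g, x, y) - sp.diff(g, x, z) * sp.diff(g, y) / gz
print(sp.simplify(E - sp.diff(c, x, y)))  # 0

# A = g_{xx} reduces to c_{xx}, and Q = -g_y/g_z reduces to c_y
A = sp.diff(g, x, 2)
Q = -sp.diff(g, y) / gz
print(sp.simplify(A - sp.diff(c, x, 2)))  # 0
print(sp.simplify(Q - sp.diff(c, y)))     # 0
```

In particular the last line recovers the remark following A1* that, in the optimal transportation case, Q is simply c_y.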
We will treat the implications of condition A1* as needed in Sect. 2. Meanwhile we note that the Jacobian matrix of the mapping x → Q(x, y, z) is −E t /g z , where E t is the transpose of E, so its determinant is automatically non-zero when condition A2 holds. Note that in the optimal transportation case (1.3), we simply have Q = c y but the duality is more complicated in the general situation and will be amplified further in Sect. 2. The next conditions are expressed in terms of the matrix function A and extend condition A3w in the optimal transportation case [20]. For these we assume that A is twice differentiable in p and once in u.

A3w:
The matrix function A is regular in U, that is, A is codimension one convex with respect to p in the sense that

A^{kl}_{ij}(x, u, p) ξ_i ξ_j η_k η_l := D_{p_k p_l} A_{ij}(x, u, p) ξ_i ξ_j η_k η_l ≥ 0,

for all (x, u, p) ∈ U and ξ, η ∈ R^n with ξ · η = 0. An explicit formula for D^2_p A in terms of the generating function g is given in [16].

A4w: The matrix function A is monotone increasing with respect to u in U, that is, D_u A_{ij}(x, u, p) ξ_i ξ_j ≥ 0, for all (x, u, p) ∈ U and ξ ∈ R^n.

In order to use conditions A3w and A4w for our barrier constructions in Sect. 2, we also assume that U is sufficiently large, in the sense that the sets U_{x,u} = {p | (x, u, p) ∈ U} are convex, for each (x, u) ∈ R^n × R, (for A3w), and U_x = J_x × P_x, for each x ∈ R^n, where J_x is an open interval in R, (possibly empty), and P_x ⊂ R^n, (for A4w). When we use both A3w and A4w, we would also assume P_x to be convex. We note that in [4], U = R^n × R × R^n, so that these conditions are automatically satisfied, and in the optimal transportation case J_x = R. Similar conditions are also needed for the convexity theory in [16].
To formulate our second derivative estimates, we first denote the one-jet of a function u ∈ C^1(Ω) by J_1[u] = J_1[u](Ω) := {(x, u(x), Du(x)) | x ∈ Ω}, and recall from [16] that a g-affine function in Ω is a function of the form g_0 = g(·, y_0, z_0), where y_0 and z_0 are fixed so that (x, y_0, z_0) ∈ Γ for all x ∈ Ω.

Theorem 1.1 Let u ∈ C^4(Ω) ∩ C^{1,1}(Ω̄) be an elliptic solution of Eq. (1.1) in Ω, where A is given by (1.2) with generating function g satisfying conditions A1, A2, A1*, A3w and A4w, and B > 0, B ∈ C^2(U). Suppose also u ≤ g_0 in Ω for some g-affine function g_0, with the one-jet J_1[u](Ω) ⋐ U. Then we have the estimate

sup_Ω |D^2 u| ≤ C(1 + sup_{∂Ω} |D^2 u|), (1.4)

where the constant C depends on n, U, g, B, Ω, g_0 and J_1[u].
Note that the inequality u ≤ g_0 is trivially satisfied if there exists a g-affine function on Ω̄ which is greater than sup u, and moreover it can be dispensed with in the optimal transportation case, where Theorem 1.1 improves Theorems 3.1 and 3.2 in [20] and Theorem 1.1 in [4]. Also we may assume in place of A4w the reversed monotonicity condition:

A4*w: The matrix A is monotone decreasing with respect to u in U, that is, D_u A_{ij}(x, u, p) ξ_i ξ_j ≤ 0, for all (x, u, p) ∈ U and ξ ∈ R^n,

in which case Theorem 1.1 continues to hold provided the inequality u ≤ g_0 is replaced by u ≥ g_0.

We also have the corresponding interior estimate.

Theorem 1.2 Let u ∈ C^4(Ω) be an elliptic solution of Eq. (1.1) in Ω, under the hypotheses of Theorem 1.1. Then, for any subdomain Ω′ ⋐ Ω, we have the estimate

sup_{Ω′} |D^2 u| ≤ C, (1.5)

where the constant C depends on n, U, g, B, Ω, Ω′, g_0 and J_1[u].
In the optimal transportation case, Theorem 1.2 improves Theorem 1.1 in [9] and Theorem 2.1 in [4] by removing the barrier and subsolution hypotheses assumed there. As in [9], we may also replace g_0 by any strict supersolution. We also remark that the dependence on J_1[u] in Theorems 1.1 and 1.2 is more specifically determined by sup_Ω(|u| + |Du|) and dist(J_1[u](Ω), ∂U).
This paper is organised as follows. In the next section we construct, in Lemma 2.1, a function for which Eq. (1.1) is uniformly elliptic, and use it to prove an appropriate version, Lemma 2.2, of the key Lemma 2.1 in [4]. From here we obtain Theorems 1.1 and 1.2 by following the same arguments as in [4]. In Sect. 3, we also discuss the application of Theorem 1.1 to global second derivative estimates for the Dirichlet and second boundary value problems for optimal transportation and generated prescribed Jacobian equations, yielding improvements of the corresponding results in [4,13,14,20]. In Sect. 4 we focus on applications to optimal transportation and near field geometric optics. In the optimal transportation case, we obtain a different and more direct proof of the improved global regularity result in [15]. We then conclude by treating some examples of generating functions in geometric optics satisfying our hypotheses, which arise from the reflection and refraction of parallel beams.
To conclude this introduction we should also emphasise that while we have considered the more general case of generated prescribed Jacobian equations our results are also new for optimal transportation. Further we note that when we strengthen condition A3w to the strict inequality A3, the corresponding second derivative estimates as treated in [11], (see also [20] and [18]), are much simpler to prove although even here the constructions in Sect. 2 are still new.

Barrier constructions
Our first lemma shows the existence of a uniformly elliptic function under conditions A1, A2 and A1*.

Lemma 2.1 Let g ∈ C^4(Γ) be a generating function satisfying conditions A1, A2 and A1*, and suppose g_0 is a g-affine function on Ω̄, where Ω is a bounded domain in R^n. Then there exists a function ū ∈ C^2(Ω̄) such that J_1[ū](Ω) ⋐ U, ū > g_0 in Ω and

D^2 ū − A(·, ū, Dū) ≥ a_0 I in Ω, (2.1)

for some positive constant a_0 depending on g_0 and Ω.
Proof Suppose g_0 = g(·, y_0, z_0) for some (y_0, z_0) ∈ R^n × R, let B_ρ = B_ρ(y_0) be a ball of radius ρ and centre y_0 in R^n, and consider the function v = v_ρ in B_ρ given by

v_ρ(y) = z_0 − √(ρ^2 − |y − y_0|^2), (2.2)

whose graph is the lower hemisphere of radius ρ centred at (y_0, z_0). Our desired function ū is now defined as the g*-transform of v, in accordance with the notion of duality introduced in [16], namely

ū(x) = sup_{y ∈ B̄_ρ} g(x, y, v(y)). (2.3)

Note that ū is well defined for x ∈ Ω̄, for ρ sufficiently small to ensure that (x, y, v(y)) ∈ Γ for all x ∈ Ω̄ and y ∈ B̄_ρ. Furthermore, the supremum in (2.3) will be attained at a unique point y = Tx ∈ B_ρ. To prove this assertion, we fix a point y ∈ ∂B_ρ and set y_λ = (1 − λ)y + λy_0 for some λ ∈ (0, 1). Since g_z < 0 in Γ, we then have, by the mean value theorem,

g(x, y_λ, v(y_λ)) − g(x, y, v(y)) ≥ δ(z_0 − v(y_λ)) − C|y_λ − y| = δρ√(2λ − λ^2) − Cλρ,

for positive constants C and δ, depending on bounds for g_y and g_z. Consequently, choosing 0 < λ < 2/(1 + (C/δ)^2), we have g(x, y_λ, v(y_λ)) > g(x, y, v(y)), and hence the supremum in (2.3) cannot occur on ∂B_ρ. By differentiation, we then have Dv(y) = Q(x, y, v(y)) at an interior maximum point y, and hence, by condition A1*, there exists a unique x = T*y corresponding to y. From our construction (2.3), the function ū is g-convex in Ω and its g-normal mapping χ[ū](x) consists of those y maximizing (2.3). Here we recall from [16] that a function u ∈ C^0(Ω) is g-convex in Ω if for each point x_0 ∈ Ω, there exist y_0 ∈ R^n and z_0 ∈ I(Ω, y_0) := ∩_{x∈Ω} Γ_{x,y_0}, such that

u(x_0) = g(x_0, y_0, z_0), u(x) ≥ g(x, y_0, z_0) for all x ∈ Ω,

and the g-normal mapping χ[u](x_0) of u at x_0 is given by

χ[u](x_0) = {y_0 ∈ R^n | u(x) ≥ g(x, y_0, z_0) for all x ∈ Ω, with equality at x_0}.

To obtain ū ∈ C^2(Ω̄), we need moreover that χ[ū](x) is a single point. For this we invoke the duality considerations in [16]. Letting Γ* denote the dual domain, we let g* ∈ C^4(Γ*) denote the dual generating function of g, given by g(x, y, g*(x, y, u)) = u.
Also, the dual matrix corresponding to g* is given in terms of the mapping X = X(y, z, q), the unique mapping, given by A1*, satisfying Q(X(y, z, q), y, z) = q. Note that the mapping T* constructed above is then given by T*y = X(y, v(y), Dv(y)). Since the graph of v_ρ is a hemisphere of radius ρ and hence has constant curvature 1/ρ, we have, on χ[ū](Ω),

D^2 v − A*(·, v, Dv) ≥ a*_0 I,

for some fixed constant a*_0 > 0, provided ρ is taken sufficiently small, which implies in particular that v_ρ is locally strictly g*-convex, that is, for any y_1 ∈ B_ρ,

v(y) > g*(x_1, y, u_1) (2.10)

for all y ≠ y_1 near y_1, where x_1 = T*y_1 and u_1 = ū(x_1). Clearly, by again taking ρ sufficiently small, we infer that the function v_ρ is strictly globally g*-convex, that is, inequality (2.10) holds for all y ≠ y_1 in B̄_ρ.

Using the relation between ū and its g-transform v, we then have ū ∈ C^2(Ω̄). Finally, since v < z_0 in B_ρ, we have ū(x) ≥ g(x, y_0, v(y_0)) > g(x, y_0, z_0) = g_0(x), using g_z < 0 in the last inequality. We then obtain ū > g_0 as asserted, and the proof is complete.
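The construction in Lemma 2.1 can be followed explicitly in the simplest one-dimensional optimal transportation setting. In the sketch below, the cost c(x, y) = xy is an illustrative choice, and the lower hemisphere v(y) = z_0 − √(ρ^2 − y^2) on B_ρ(0) is an assumed form consistent with the hemisphere description in the proof; with g = c − z and g_0 = g(·, 0, z_0) = −z_0, the transform (2.3) becomes ū(x) = sup_y [xy − v(y)]:

```python
import sympy as sp

# 1-d optimal transportation instance of the Lemma 2.1 construction,
# with illustrative cost c(x, y) = x*y and assumed lower hemisphere
# v(y) = z0 - sqrt(rho^2 - y^2), so that ubar(x) = sup_y [x*y - v(y)].
x, y, rho, z0 = sp.symbols('x y rho z0', positive=True)
v = z0 - sp.sqrt(rho**2 - y**2)

# the supremum is attained at the interior point y = rho*x/sqrt(1 + x^2):
ycrit = rho*x/sp.sqrt(1 + x**2)
assert sp.simplify(x - sp.diff(v, y).subs(y, ycrit)) == 0

ubar = sp.simplify((x*y - v).subs(y, ycrit))
print(sp.simplify(ubar - (rho*sp.sqrt(1 + x**2) - z0)))  # 0

# uniform ellipticity (2.1): here A = c_xx = 0, while
# D^2 ubar = rho/(1 + x^2)^(3/2) > 0 on any bounded interval,
# and ubar - g0 = rho*sqrt(1 + x^2) > 0, so ubar > g0 as in the lemma
print(sp.simplify(sp.diff(ubar, x, 2) - rho/(1 + x**2)**sp.Rational(3, 2)))  # 0
```

Here ū(x) = ρ√(1 + x^2) − z_0, so both conclusions of the lemma, the uniform ellipticity and ū > g_0, are visible directly.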

Remark 2.1
For a matrix A not arising from optimal transportation or generated Jacobian mappings, Lemma 2.1 may not be true, even if A satisfies the regularity condition A3w in a strong way. To see this, suppose A = A(p) ≥ a_1|p|^2 I for some constant a_1 > 0. Then the uniform ellipticity condition D^2 ū − A(Dū) ≥ a_0 I implies, by taking traces, Δū ≥ n(a_0 + a_1|Dū|^2). By integration, we obtain a bound on the size of Ω, which can only be satisfied for |Ω| sufficiently small.
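One way to carry out the integration step is in a single direction; the following is a sketch under the stated assumptions A(p) ≥ a_1|p|^2 I and D^2 ū − A(Dū) ≥ a_0 I, with the diameter bound an illustrative consequence rather than a statement from the text:

```latex
% Along a unit-speed line segment of length L in \Omega, write
% \varphi(t) for the restriction of \bar u and w = \varphi'.
% The assumptions give the Riccati inequality  w' \ge a_0 + a_1 w^2 , whence
\[
  \frac{d}{dt}\,\frac{1}{\sqrt{a_0 a_1}}
    \arctan\!\Big(\sqrt{\tfrac{a_1}{a_0}}\, w\Big)
  \;=\; \frac{w'}{a_0 + a_1 w^2} \;\ge\; 1 .
\]
% Integrating over [0, L], and noting that the arctangent term can vary
% by less than \pi/\sqrt{a_0 a_1}, we conclude
\[
  L \;\le\; \frac{\pi}{\sqrt{a_0 a_1}},
  \qquad\text{so}\qquad
  \operatorname{diam}\Omega \;\le\; \frac{\pi}{\sqrt{a_0 a_1}} .
\]
```

Thus no uniformly elliptic function ū can exist once the domain is too large, which is the obstruction asserted in the remark.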
We now use Lemma 2.1 to construct a barrier for the linearized operator of Eq. (1.1), in accordance with Remark 2.4 in [5]. Letting u ∈ C^2(Ω) be elliptic for Eq. (1.1), we set w = D^2 u − A(·, u, Du), F[u] = log det w, and define an associated linear operator L by

Lv := F^{ij}[D_{ij}v − D_{p_k}A_{ij}(·, u, Du) D_k v],

where F^{ij} = ∂F/∂w_{ij}. It is known that F is a concave operator with respect to w_{ij} for elliptic u. We then have the following improvement and extension of the fundamental lemma, Lemma 2.1 in [4], in the optimal transportation case.

Lemma 2.2
Let g ∈ C^4(Γ) be a generating function satisfying conditions A1, A2, A1*, A3w, A4w, and let u ∈ C^2(Ω̄) be elliptic with respect to F, with J_1[u](Ω) ⋐ U and u ≤ g_0 in Ω for some g-affine function g_0 on Ω̄. Then the barrier inequality (2.12) holds in Ω, for a sufficiently large positive constant K and uniform positive constants ε_1, C, depending on n, U, g and Ω, where a_0 is the positive constant in (2.1).

Proof From Lemma 2.1 and (2.1), we have F[ū] ≥ a_1 for some constant a_1 = n log(a_0/2). From our construction of ū, we can also choose ρ sufficiently small such that ū ≥ g_0 ≥ u in Ω. Setting v = ū − u, by the concavity of F with respect to w_{ij}, we have (2.14). Since ū ≥ u in Ω, we then have, by the mean value theorem and condition A4w, inequality (2.15). Using the Taylor expansion, we have (2.16). Here we are also using the convexity of the set P_x for each x ∈ Ω, which ensures the point (x, u, p̄) ∈ U, so that the term A_{ij,kl}(x, u, p̄) in (2.16) is also well defined. By (2.14), (2.15) and (2.16), we obtain the corresponding estimate from (2.13). Next we choose a finite family of balls B_ρ(x_i), with centres at x_i, i = 1, …, N, covering Ω̄ and with fixed radii ρ. Accordingly, for a fixed small positive ε, (2.18) holds for x ∈ B_ρ(x_i). We see that (2.18) holds in all balls B_ρ(x_i), i = 1, …, N, with a fixed positive constant ε, and then, by the finite covering, (2.18) holds in Ω with a uniform positive constant ε. Without loss of generality assuming that Dv = (D_1 v, 0, …, 0) at a given point in Ω, we get the next estimate, where condition A3w is used in the second inequality.
Since the matrix {F^{ij}} is positive definite, any 2 × 2 principal minor has positive determinant. By Cauchy's inequality, we then have, for any positive constant α, the corresponding estimates, and thus we obtain the barrier inequality (2.12), completing the proof.
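The Cauchy inequality step invoked above is the standard one for the positive definite form {F^{ij}}, stated here for completeness with generic vectors ξ, η:

```latex
% Cauchy-Schwarz for the positive definite bilinear form F^{ij},
% followed by Young's inequality with parameter \alpha > 0:
\[
  |F^{ij}\xi_i\eta_j|
  \;\le\; \big(F^{ij}\xi_i\xi_j\big)^{1/2}\big(F^{ij}\eta_i\eta_j\big)^{1/2}
  \;\le\; \frac{\alpha}{2}\,F^{ij}\xi_i\xi_j
        + \frac{1}{2\alpha}\,F^{ij}\eta_i\eta_j .
\]
% In particular, taking \xi and \eta to be coordinate vectors recovers
% the positivity of the 2x2 principal minors, F^{ii}F^{jj} \ge (F^{ij})^2.
```

The free parameter α is then chosen to absorb the mixed terms into the good terms of the preceding estimate.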

Remark 2.3
When we assume the reverse monotonicity condition A4*w and u ≥ g_0, then by modifying ū appropriately and using (2.15), we still obtain the key barrier inequality (2.12).

Applications to second derivative estimates
Theorems 1.1 and 1.2 follow from Lemma 2.2 and the proofs of Theorem 3.1 in [20] and Theorem 1.1 in [9]. These latter results are proved under the additional hypothesis that Ω is A-bounded on J_1[u](Ω), that is, there exists a function ϕ ∈ C^2(Ω) ∩ C^{0,1}(Ω̄) satisfying

D^2 ϕ − D_p A(·, u, Du) · Dϕ ≥ I, (3.1)

in Ω. However, as pointed out in [4], we need only assume the weaker inequality

D^2 ϕ − D_p A(·, u, Du) · Dϕ ≥ −C_0 I, (3.2)

for some constant C_0, to carry out the same proofs. By virtue of Lemma 2.2, the barrier inequality (3.2) is satisfied by the function in (3.3). We will first treat the global estimates for boundary value problems, and subsequently the interior estimate, Theorem 1.2, as we will need to modify slightly the proof in [9] to accommodate the dependence of A on u.
Taking account of the above remarks we may extend the statement of Theorem 3.1 in [20] as follows.

Lemma 3.1 Let u ∈ C^4(Ω) ∩ C^{1,1}(Ω̄) be an elliptic solution of Eq. (1.1) in Ω, with A, B ∈ C^2(U). Suppose that A is regular on J_1 = J_1[u](Ω), that is

A^{kl}_{ij} ξ_i ξ_j η_k η_l ≥ 0

in Ω, for all ξ, η ∈ R^n such that ξ · η = 0, and that A satisfies (3.2) for some ϕ ∈ C^2(Ω) ∩ C^{0,1}(Ω̄) and constant C_0. Then the estimate (1.4) holds, with the constant C depending also on C_0 and ϕ.
Clearly, Theorem 1.1 follows immediately from Lemmas 2.2 and 3.1. As an immediate consequence of Theorem 1.1, the hypothesis of the existence of a subsolution u in Theorem 1.1 of [4] can be removed altogether in the optimal transportation case, while for the Dirichlet problem in Theorem 1.2 of [4], we need only assume the existence of a subsolution u in a neighbourhood of the boundary, whose boundary trace is the prescribed boundary function. However, the natural boundary condition for optimal transportation and the more general prescribed Jacobian equations is the prescription of the image of the mapping T_u := Y(·, u, Du), that is

T_u(Ω) = Ω*, (3.6)

for some given target domain Ω* ⊂ Y(U). Global estimates for solutions of Eq. (1.1) subject to (3.6) now follow from Theorem 1.1 under appropriate convexity hypotheses on Ω and Ω*. As with (3.1) and (3.4), we will express these initially in terms of J_1[u]. Accordingly, we assume the domain Ω is uniformly A-convex with respect to J_1[u], in the sense that there exists a defining function ϕ ∈ C^2(Ω̄) satisfying ϕ = 0, Dϕ ≠ 0 on ∂Ω, together with the "uniform convexity" condition (3.1) in a neighbourhood N of ∂Ω, while for the domain Ω* we assume the dual condition, that Ω* is uniformly Y*-convex with respect to J_1[u], namely that there exists a defining function ϕ* ∈ C^2(Ω̄*) satisfying ϕ* = 0, Dϕ* ≠ 0 on ∂Ω*, together with the "dual uniform convexity" condition (3.7) in a neighbourhood N* of ∂Ω*, expressed in terms of the coefficients A*_{ij,k}, for each k = 1, …, n. Taking account of (1.2), we see that these conditions can be formulated for general prescribed Jacobian equations where Y ∈ C^2(U) satisfies det Y_p ≠ 0, and are not necessarily restricted to those determined through generating functions.
Furthermore, condition (3.7) may be expressed in the form of an inequality with a positive constant δ*_0, holding whenever Y(x, u, Du) ∈ N*; moreover, the boundary condition (3.6) implies a nonlinear oblique boundary condition,

G(·, u, Du) = 0, (3.11)

on ∂Ω, for an elliptic solution u ∈ C^2(Ω) of Eq. (1.1) [14]. Building on the optimal transportation case in [20], an obliqueness estimate,

G_p(·, u, Du) · γ ≥ κ_0 on ∂Ω, (3.12)

for a positive constant κ_0, where γ denotes the unit outer normal to ∂Ω, is derived in [10,14] for elliptic solutions u ∈ C^3(Ω̄) of (1.1), (3.6), under the above hypotheses that Ω and Ω* are respectively uniformly A-convex and uniformly Y*-convex with respect to J_1[u].
To proceed from (3.12) to global second derivative estimates for the second boundary value problem, we need a boundary estimate for second derivatives, which we can extract from [20], Sect. 4, as a refinement of the Monge-Ampère case in [21].

Lemma 3.2 Let u ∈ C^4(Ω̄) be an elliptic solution of Eq. (1.1) in Ω, subject to a nonlinear oblique boundary condition (3.11), (3.12) on ∂Ω ∈ C^4, with A, B, G ∈ C^2(U). Suppose that A is regular on J_1 = J_1[u](Ω) ⋐ U, Ω is uniformly A-convex with respect to J_1, inf_{J_1} B > 0 and G satisfies the uniform convexity condition (3.10). Then we have the estimate (3.13) for the second derivatives of u on ∂Ω, where the constant C depends on A, B, G, Ω and |u|_{0,1;Ω}.

Theorem 3.1 Let u ∈ C^4(Ω̄) be an elliptic solution of the second boundary value problem (1.1), (3.6), under the hypotheses of Lemma 3.2, with A given by (1.2) and g satisfying conditions A1, A2, A1*, A3w and A4w (or A4*w), together with u ≤ g_0 (or u ≥ g_0) in Ω for some g-affine function g_0 on Ω̄. Then we have the estimate sup_Ω |D^2 u| ≤ C, where the constant C depends on n, U, g, B, Ω, Ω*, ϕ, ϕ*, g_0 and J_1[u].
We may express the domain convexity hypotheses in Theorem 3.1 in terms of boundary data depending on the solution u or, more specifically, as in [10], intervals containing the range of u. For this purpose we will say that Ω is uniformly Y-convex with respect to (Ω*, u) if, for all x ∈ ∂Ω, Y(x, u(x), p) ∈ Ω*, unit outer normal γ and unit tangent vector τ, the corresponding uniform convexity inequality holds for some constant δ_0 > 0. Similarly the target domain Ω* is uniformly Y*-convex with respect to (Ω, u) if, for all x ∈ Ω, y = Y(x, u(x), p) ∈ ∂Ω*, unit outer normal γ* and unit tangent vector τ, the dual inequality holds for some constant δ*_0 > 0. It then follows that the uniform convexity assumptions in Theorem 3.1 may be replaced by Ω, Ω* being respectively uniformly Y-convex, Y*-convex with respect to (Ω*, u), (Ω, u). Moreover these assumptions are equivalent to the convexity notions defined in terms of the generating function g in [16]. Assuming Ω and Ω* are connected and T_u is a diffeomorphism, then Ω is uniformly Y-convex with respect to (Ω*, u) if and only if Ω is uniformly g-convex with respect to v, that is, the images Q(·, y, v(y))(Ω) are uniformly convex for all y ∈ Ω̄*, where v is the g-transform of u on Ω*, given by

v(y) = g*(x, y, u(x)) = Z(x, u(x), Du(x)), (3.17)

for y = T_u(x), while Ω* is uniformly Y*-convex with respect to (Ω, u) if and only if Ω* is uniformly g*-convex with respect to u, that is, the images P(x, ·, u(x))(Ω*) are uniformly convex for all x ∈ Ω̄, where P(x, y, u) = g_x(x, y, g*(x, y, u)); see [16] for more details.
Returning to the Dirichlet problem, we state the following analogue of Theorem 3.1, which follows from Theorem 1.1 and our boundary estimates in [4].

Theorem 3.2 Let u ∈ C^4(Ω̄) be an elliptic solution of (1.1) in Ω, satisfying u = u_0 on ∂Ω, where A is given by (1.2) with generating function g ∈ C^4(Γ), satisfying conditions A1, A2, A1*, A3w and A4w (or A4*w), B > 0, B ∈ C^2(U), u_0 ∈ C^4(Ω̄) and the domain Ω ∈ C^4 is uniformly A-convex with respect to J_1 = J_1[u](Ω) ⋐ U. Suppose also u ≤ g_0 (or u ≥ g_0) in Ω for some g-affine function g_0 on Ω̄. Then we have the estimate

sup_Ω |D^2 u| ≤ C, (3.18)

where the constant C depends on n, U, g, B, Ω, ϕ, g_0 and J_1[u].
To reduce the proof of Theorem 3.2 to our boundary estimates in Section 3 of [4], we replace u by u_0 in inequality (3.5) there and use the barrier v = −Cϕ in place of v = 1 − φ, for a sufficiently large constant C, in the rest of the proof, where ϕ is a defining function in the definition of A-convexity. Note that under the assumption of uniform A-convexity we do not need Lemma 2.2 to obtain the second derivative boundary estimate, which is already asserted in [13]. Note also that in these arguments the dependence of the coefficients on u can be removed by replacing u by u(x).
To conclude this section we indicate how Theorem 1.2 follows from Lemma 2.2 and [9]. In the optimal transportation case (1.3), when A is independent of u, we simply use (3.3) in the proof of Theorem 1.1 in [9] instead of (3.1), with u_0 replaced by g_0. In the general case we proceed the same way, first noting that the monotonicity condition A4w ensures η = g_0 − u > 0 in Ω, by virtue of the strong maximum principle [2] and the fact that g_0 is a degenerate elliptic solution of the homogeneous equation (B = 0). We then consider, as in [9], the corresponding auxiliary function, where β, κ, τ are positive constants to be determined, and estimate L̃v from below at an interior maximum point x = x̄, where L̃ denotes the correspondingly modified linearized operator. As in Section 3 of [20], the additional terms arising from the u dependence in the differentiations of (1.1) are absorbed in the lower bounds (18) and (19) in [9], so it remains to extend the lower bound for L̃η in inequalities (20) and (21) in [9], where now η = g_0 − u. This is done similarly to the argument in the proof of Lemma 2.2, as follows. By calculation, we have (3.20), where p̄ = (1 − θ)Dg_0 + θDu for some θ ∈ (0, 1), condition A4w is used in the second inequality and Taylor's formula is used in the last equality. By assuming {w_{ij}} is diagonal, we have from (3.20) the estimate (3.21), where condition A3w is used in the second inequality and the constant C depends on Ω, A, B, g_0 and J_1[u]. We remark that the particular form of the estimate (3.21) is actually crucial in the ensuing estimations in [9].
Accordingly we obtain from [9] an estimate, for positive constants τ and C, corresponding to the estimate (9) in [9]. Our desired estimate (1.5) follows, noting also that the estimate (28) in [9] for u_0 − u from below extends automatically when we substitute u(x) for u in A and B.

Optimal transportation
For our applications to optimal transportation, when g reduces to a cost function c through (1.3), it will suffice to assume c ∈ C^4(Ω̄ × Ω̄*), where the domains Ω and Ω* are respectively uniformly c and c*-convex with respect to each other, that is, the images c_y(·, y)(Ω) and c_x(x, ·)(Ω*) are uniformly convex in R^n, for each y ∈ Ω̄*, x ∈ Ω̄.
Here we also have the simpler formulae Y = Y(x, p), determined by c_x(x, Y) = p, Z = c(·, Y) − u, and A(x, p) = c_{xx}(x, Y(x, p)). We then conclude from Theorem 3.1 the following global second derivative estimate, Corollary 4.1, which improves Theorem 1.1 in [20] and Theorem 2.1 in [15], namely sup_Ω |D^2 u| ≤ C, where the constant C depends on n, c, B, Ω, Ω* and J_1[u].
For the application to optimal transportation, the function B has the form

B(x, p) = |det c_{x,y}(x, Y(x, p))| ρ(x)/ρ*(Y(x, p)),

where ρ and ρ* are positive densities defined on Ω and Ω* respectively, satisfying the mass balance condition

∫_Ω ρ = ∫_{Ω*} ρ*, (4.4)

which is a necessary condition for a solution u for which the mapping T_u = Y(·, Du) is a diffeomorphism. From Corollary 4.1, we obtain, following [20], the existence of an elliptic solution u ∈ C^3(Ω̄), which is unique up to additive constants, as formulated in Theorem 1.1 in [15]. However our proof of Corollary 4.1 in this case is more direct and more general than the approach in [15], which depends on a strict convexity regularity result of Figalli et al. [1]. From the existence we then obtain a smooth optimal mapping solving the transportation problem, as formulated in Corollary 1.2 in [15]. Examples of cost functions which satisfy condition A3w and not A3 are given in [20] and [9]. However, for these examples the global second derivative bounds in Theorems 1.1 and 4.1 are proved in [20] from other structures such as c-boundedness or duality, so that Corollary 4.1 is not new in these cases. This is not the case though for the interior estimate, Theorem 1.2, which for example is new for the cost function c(x, y) = |x − y|^{−2}. For explicit calculations in this case the reader is referred to [8], as well as [20].
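The form of B can be read off from the prescribed Jacobian relation. The following sketch, in the sign convention of (1.3) with T_u = Y(·, Du) determined by c_x(x, T_u) = Du, is a standard computation rather than a quotation from the text:

```latex
% Differentiating c_x(x, T_u(x)) = Du(x) in x gives
%   c_{xx} + c_{x,y}\, DT_u = D^2 u ,
\[
  DT_u = (c_{x,y})^{-1}\big(D^2u - c_{xx}\big),
\]
% so the measure-preserving (pushforward) condition
% (T_u)_\#(\rho\,dx) = \rho^*\,dy, i.e. |\det DT_u| = \rho/(\rho^*\circ T_u),
% becomes
\[
  \det\big(D^2u - c_{xx}\big)
  \;=\; |\det c_{x,y}|\,\frac{\rho}{\rho^*\circ T_u},
\]
% which is Eq. (1.1) with A = c_{xx}, as in (1.2), (1.3).
```

For elliptic solutions the left hand side is positive, consistent with B > 0, and integrating the pushforward condition over Ω yields the mass balance (4.4).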

Geometric optics
Keeping to domains in Euclidean space, we will just consider here parallel beams in R^{n+1}, directed in the e_{n+1} direction, through a domain Ω ⊂ R^n × {0}, illuminating targets which are graphs over domains Ω* in R^n × {0}. The formulae are easily adjusted to cover more general situations, and we may also consider point sources of light as in [6]. The special cases where the targets are themselves domains in R^n × {0} yield strictly regular matrix functions A, and global estimates for the second boundary value problem are already proved in [10,14]. Consequently our main interest here is with situations where the resultant matrices A satisfy A3w and not necessarily A3, although as we indicated in Sect. 1 our lemmas in Sect. 2 are still new in this case.
(i) Reflection. We modify slightly the example in [16] to allow for non-flat targets; see also [7] for a thorough description of the geometric picture. Let D be a domain in R^n × R^n, containing Ω × Ω*, and consider the generating function

g(x, y, z) = ψ(y) − (z/2)|x − y|^2 + 1/(2z), (4.5)

defined for (x, y) ∈ D and z > 0, where ψ is a smooth function on R^n. By differentiation we have

g_x(x, y, z) = z(y − x), (4.6)

so that if there are mappings y = Y(x, u, p), z = Z(x, u, p) satisfying A1, we must have y = x + p/z and, from (4.5),

u = ψ(x + p/z) + (1 − |p|^2)/(2z). (4.7)

By differentiation we then obtain that conditions A1 and A2 will be satisfied for 0 < z < z_0(x, y), that is when

z^2|x − y|^2 + 2z(x − y)·Dψ(y) < 1, (4.9)

where z_0(x, y) denotes the positive root of the corresponding quadratic equation, Y(x, u, p) = x + p/Z(x, u, p) and Z, given implicitly by (4.7), satisfies Z_u < 0. Now from (4.6), we see that the matrix A is given by

A(x, u, p) = −Z(x, u, p) I, (4.8)

and hence conditions A3w and A4w are both satisfied whenever the dual function Z is concave in the gradient variables. It also follows that condition A1* is also satisfied when (4.9) holds, that is, the mapping Q given by

Q(x, y, z) = −g_y/g_z = 2z^2 (Dψ(y) + z(x − y)) / (1 + z^2|x − y|^2)

is one-to-one in x, for 0 < z < z_0(x, y), which can be proved for example from the quadratic structure of the equation Q(x, y, z) = q. Furthermore, by taking z sufficiently small, we see that there exist arbitrarily large g-affine functions, and hence all the hypotheses of Theorems 1.1 and 1.2 are satisfied by the generating function g if Z is concave in p. Note that when ψ is constant, then Z = (1 − |p|^2)/(2(u − ψ)), u > ψ, |p| < 1, and the strict condition A3 holds [16]. By direct calculation, we have an explicit formula for the matrix E,

E = zI + (2z^2/(1 + z^2|x − y|^2)) (y − x) ⊗ (Dψ(y) + z(x − y)),

and its determinant

det E = −z^n (z^2|x − y|^2 + 2z(x − y)·Dψ(y) − 1) / (1 + z^2|x − y|^2) > 0,

when (4.9) holds. By the Sherman–Morrison formula, we also have a formula for the inverse of E.

(ii) Refraction. We consider refraction from media I to media II with respective refraction indices n_1, n_2 > 0, and set κ = n_1/n_2. For κ ≠ 1, we consider now generating functions defined for (x, y) ∈ D and z > √(1 − κ^2)|x − y| for 0 < κ < 1, z > 0 for κ > 1, where ψ is a smooth function on R^n.
For more details about the geometric and physical aspects of this model, see for example [12]. We will now restrict attention to the case 0 < κ < 1, in which case, by setting κ′ = √(1 − κ^2) and rescaling z → z/κ′, ψ → κ′ψ, g → κ′g, we can write the generating function in the form (4.13). By differentiation we now obtain a formula for det E with denominator

(z^2 + |x − y|^2)^{(n+1)/2} (z + κ√(z^2 + |x − y|^2)),

so that det E ≠ 0 when h′(z) ≠ 0. By the Sherman–Morrison formula, we also obtain the inverse of E. As in the previous cases, the condition A1* will hold if either h′(z) < 0 (that is, det E > 0) or h′(z) > 0 (that is, det E < 0), which again follows from the equivalent quadratic structure of the equation Q(x, y, z) = q.

(iii) Further remarks. It is interesting to note that when we pass to the dual functions in the above examples, the monotonicity of A is reversed but, as shown in general in [16], the condition A3w is preserved. In particular, for the dual in the reflection case (4.5) we have the formula (4.24) from [16], while in the refraction cases (4.12), a simple calculation gives

g*(x, y, u) = +(−) κ(u − ψ(y)) + √((u − ψ(y))^2 + |x − y|^2), (4.25)

for κ < 1 (κ > 1) respectively. Note that by replacing z by −1/z in (4.5), the dual function in (4.24) is the limit as κ → 1 of the case κ > 1 in (4.25).
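Returning to the reflection example (i), the implicit relation between u, p and z can be checked symbolically in one dimension. Here the generating function g(x, y, z) = ψ − (z/2)(x − y)^2 + 1/(2z), with constant ψ, is an assumed form consistent with (4.6) and with the displayed formula for Z in the constant-ψ case:

```python
import sympy as sp

# 1-d reflection example with constant psi: check that solving
# g = u, g_x = p reproduces Z = (1 - p^2)/(2(u - psi)) and y = x + p/z.
x, y, z, u, p, psi = sp.symbols('x y z u p psi', real=True)
g = psi - z*(x - y)**2/2 + 1/(2*z)

# g_x = z(y - x), as in (4.6)
assert sp.simplify(sp.diff(g, x) - z*(y - x)) == 0
# g_z = -(x - y)^2/2 - 1/(2 z^2) < 0 for z > 0, as required in A2
assert sp.simplify(sp.diff(g, z) + (x - y)**2/2 + 1/(2*z**2)) == 0

# substitute y = x + p/z (from g_x = p) into g = u and solve for z
Z = sp.solve(sp.Eq(g.subs(y, x + p/z), u), z)
print(sp.simplify(Z[0] - (1 - p**2)/(2*(u - psi))))  # 0
```

In particular Z_u < 0 for u > ψ, |p| < 1, in agreement with the monotonicity used for condition A4w.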
Also we note that when we consider instead the complementary ellipticity condition, D 2 u − A(·, u, Du) < 0, in Eq. (1.1), condition A3w is replaced by substituting −A for A but conditions A4w and A4*w are maintained. Thus in the above examples we need to replace Z by −Z in the convexity conditions, (and interchange the inequalities u < g 0 , u > g 0 ), to fulfil our hypotheses.
As well as the monotonicity properties, another interesting feature of the above examples is that we cannot infer, as in the optimal transportation examples mentioned in the previous section, that bounded domains are A-bounded, so that the second derivative estimates do not follow from the relevant arguments in [14,20].
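Both optics examples invoke the Sherman–Morrison formula to invert the rank-one perturbation E. As a quick numerical sanity check of that identity (with arbitrary illustrative matrices, not the E of the text):

```python
import numpy as np

# Sherman-Morrison:
#   (M + a b^T)^{-1} = M^{-1} - (M^{-1} a)(b^T M^{-1}) / (1 + b^T M^{-1} a),
# valid when M is invertible and 1 + b^T M^{-1} a != 0.
M = np.diag([2.0, 3.0, 1.5, 4.0])
a = np.array([1.0, 2.0, 0.5, -1.0])
b = np.array([0.5, -1.0, 2.0, 1.0])

Minv = np.linalg.inv(M)
denom = 1.0 + b @ Minv @ a
update = np.outer(Minv @ a, b @ Minv) / denom
print(np.allclose(np.linalg.inv(M + np.outer(a, b)), Minv - update))  # True
```

The same rank-one structure underlies the determinant identity det(M + a b^T) = det M (1 + b^T M^{-1} a), which is how the displayed formula for det E in the reflection example arises.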

Final remarks
The existence of locally smooth solutions to the second boundary value problem for generated prescribed Jacobian equations is treated in [16] under conditions A1, A2, A1*, A3 and A4w. We remark that there the monotonicity condition A4w may also be replaced by condition A4*w for local regularity. In [17] and ensuing work, the monotonicity condition A4w is relaxed and local boundary regularity is also considered, yielding an alternative approach to that in [10], which deals with estimates and existence of globally smooth solutions. We remark that the strict monotonicity of Z in our examples in Sect. 4.2 also follows from a general formula in [17]. In a follow-up work [3] we will use our estimates here to extend the global theory when A3 is weakened to A3w, without A-boundedness conditions, as well as further develop the optics examples. We also develop an alternative duality approach to second derivative estimates when the matrix A depends only on u and p. As already noted in [20], this situation is much simpler in two dimensions.
Finally we wish to thank the anonymous referee of this paper for their careful checking and useful comments.