1 Introduction

The evolution by mean curvature flow (MCF) has been studied extensively and has many applications in image processing and neurogeometry (see e.g. [5, 6] for further details). We say that a hypersurface evolves by MCF if it contracts in the normal direction with normal velocity proportional to its mean curvature, see e.g. [9]. It is well-known that this evolution may develop singularities in finite time in the Euclidean and Riemannian settings (as in the case of the dumbbell, see [9] for further details). To deal with such singularities, several notions of generalized solutions of this evolution have been developed. In particular, in 1991 Chen et al. [4] and, independently, Evans and Spruck [10] introduced the so called level set approach, which consists in studying the evolving hypersurfaces as level sets of (viscosity) solutions of suitable associated nonlinear PDEs. We are interested in a degenerate version of such an evolution, namely the evolution by horizontal mean curvature flow (HMCF), and in its approximation, the approximated Riemannian mean curvature flow. The HMCF is, informally, the MCF defined in a suitable way in a sub-Riemannian geometry. A sub-Riemannian geometry is a degenerate manifold where the metric is defined only along the fibers of a subbundle of the tangent bundle. More specifically, we take \(X_{1}, \dots ,X_{m}\) smooth vector fields on the manifold \(\mathbb {R}^{N}\) and a metric g defined along the fibers of the distribution \(\mathcal {H}\) generated by such vector fields. Then it is possible to define intrinsic derivatives of any order by taking derivatives along the vector fields \(X_{1}, \dots ,X_{m}\) and, as a direct consequence, operators such as the horizontal Laplacian or the horizontal divergence may be defined. This sub-Riemannian geometry can be approximated by a Riemannian one by completing the basis of vector fields \(\{X_{1}, \dots , X_{m}\}\) with \(N-m\) vector fields \(X_{m+1}^{\varepsilon }, \dots , X_{N}^{\varepsilon }\) which depend on a parameter \(\varepsilon >0\). This basis is orthonormal w.r.t. a suitable metric \(g_{\varepsilon }\). This approximation is known as the Riemannian approximation.

In this paper we study a stochastic representation of the viscosity solution (see [7, 12] for further details) of the approximated Riemannian mean curvature flow, i.e. we use a suitable stochastic optimal control problem in order to obtain the viscosity solution of the approximated mean curvature flow. A connection between some geometric evolution equations and some stochastic control problems was found independently by Buckdahn, Cardaliaguet and Quincampoix in [2] and by Soner and Touzi in [16, 17] in 2001 (see also [18] for further remarks on this topic). Roughly speaking, the increments of the stochastic process are constrained by the control to a lower dimensional subspace of \(\mathbb {R}^{N}\), while the cost functional depends only on the terminal cost. However, one has to consider an essential supremum and not, as in the standard control problem, an expectation over the probability space. It is possible to show that the value function of this stochastic optimal control problem solves (in the viscosity sense) the level set equation associated with the geometric evolution. Furthermore, it is possible to prove that the set of points from which the initial hypersurface can be reached almost surely in a given time, by choosing an appropriate control, coincides with the set evolving by mean curvature flow. This stochastic approach can be generalized to a class of sub-Riemannian geometries satisfying a weak regularity condition (the so called Hörmander condition) by using an intrinsic Brownian motion associated with the sub-Riemannian geometry, see Dirr, Dragoni and von Renesse in [8]. In the Euclidean setting the stochastic dynamics can be expressed using the Itô integral, while in the sub-Riemannian case one has to use the Stratonovich integral. In the latter case the dynamics is far more complex because, informally, it has a deterministic part (related to first order derivatives induced by the chosen geometry) and a stochastic one (related to some second order derivatives induced by the chosen geometry). Our aim is to extend the result obtained in [8] to the approximated Riemannian mean curvature flow, with \(\varepsilon >0\) fixed.

The paper is organised as follows: in Sect. 2 we define some preliminary concepts about sub-Riemannian geometries, in Sect. 3 we introduce the horizontal mean curvature flow, in Sect. 4 we approximate it using the Riemannian approximation and, finally, in Sect. 5 we find a stochastic representation of the solution of the approximated mean curvature flow.

2 Preliminaries

We recall some geometrical definitions which will be crucial for defining the evolution by HMCF. For more definitions and properties about sub-Riemannian geometries we refer to [15] and also [1] for the particular case of Carnot groups.

Definition 2.1

Let M be an N-dimensional smooth manifold; for every point p we consider a subspace \(\mathcal {H}_{p}\) of \(T_{p} M\). We define the associated distribution as \(\mathcal {H} = \{ (p,v) | \ p \in M, \ v \in \mathcal {H}_{p} \}\).

Definition 2.2

Let M be a manifold, X, Y two vector fields defined on M and \(f:M \rightarrow \mathbb {R}\) a smooth function; then we define the (Lie) bracket between X and Y as \([X,Y](f)= XY(f) - YX(f)\).

Let us consider \(\mathcal {X} = \{ X_{1}, \dots , X_{m} \}\) spanning some distribution \(\mathcal {H} \subset TM\). We define inductively \(\mathcal {L}^{(1)} = \mathcal {X}\) and the set of k-length brackets \(\mathcal {L}^{(k)} = \{ [X,Y] | \ X \in \mathcal {L}^{(k-1)}, \ Y \in \mathcal {L}^{(1)} \}\) for \(k \ge 2\). The associated Lie algebra is the set of all brackets between the vector fields of the family

$$\begin{aligned} \mathcal {L}(\mathcal {X}) := \big \{ [X_{i} , X_{j}^{(k)} ] \ \big | \ X^{(k)}_{j} \ \text{ a } k\text{-length } \text{ bracket } \text{ of } X_{1}, \dots, X_{m}, \ k \in \mathbb {N} \big \}. \end{aligned}$$
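
For later computations it is convenient to recall that, viewing a vector field on \(\mathbb {R}^{N}\) as the column vector of its coefficients, the bracket can be computed explicitly as

$$\begin{aligned} [X,Y](x) = DY(x)\, X(x) - DX(x)\, Y(x), \end{aligned}$$

where DX and DY denote the Jacobian matrices of the coefficients of X and Y; this standard identity is the one used in the examples below.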

The definition of the Hörmander condition below is crucial in order to work with PDEs in a sub-Riemannian setting, because it allows us to recover the whole tangent space at every point.

Definition 2.3

(Hörmander condition) Let M be a smooth manifold and \(\mathcal {H}\) a distribution defined on M. We say that the distribution is bracket generating if and only if, at any point, the Lie algebra \(\mathcal {L}(\mathcal {X})\) spans the whole tangent space. We say that a sub-Riemannian geometry satisfies the Hörmander condition if and only if the associated distribution is bracket generating.

Definition 2.4

Let M be a smooth manifold, \(\mathcal {H}=span \{ X_{1}, \dots , X_{m} \} \subset TM\) a distribution and g a Riemannian metric defined on the subbundle \(\mathcal {H}\). A sub-Riemannian geometry is the triple \((M, \mathcal {H} , g)\).

Definition 2.5

Let \((M, \mathcal {H}, g)\) be a sub-Riemannian geometry and \(\gamma :[0,T] \rightarrow M\) an absolutely continuous curve; we say that \(\gamma \) is a horizontal curve if and only if

$$\begin{aligned} \dot{\gamma }(t) \in \mathcal {H}_{\gamma (t)}, \ \ \text{ for } \text{ a.e. } \ t \in [0,T], \end{aligned}$$

or, equivalently, if there exists a measurable function \(h : [0,T] \rightarrow \mathbb {R}^{N}\) such that

$$\begin{aligned} \dot{\gamma }(t) = \sum _{i=1}^{m} h_{i}(t) X_{i}(\gamma (t)), \ \ \text{ for } \text{ a.e. } \ t \in [0,T], \end{aligned}$$

where \(h(t)=(h_{1}(t), \dots , h_{m}(t))\) and \(X_{1}, \dots, X_{m}\) are some vector fields spanning the distribution \(\mathcal {H}\).
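
For instance, in the Heisenberg group introduced in Sect. 2.1 below, the condition \(\dot{\gamma }(t) = h_{1}(t) X_{1}(\gamma (t)) + h_{2}(t) X_{2}(\gamma (t))\) forces \(h_{i} = \dot{\gamma }_{i}\) for \(i=1,2\), so an absolutely continuous curve \(\gamma = (\gamma _1, \gamma _2, \gamma _3)\) is horizontal if and only if

$$\begin{aligned} \dot{\gamma }_{3}(t) = \frac{1}{2}\big ( \gamma _{1}(t) \dot{\gamma }_{2}(t) - \gamma _{2}(t) \dot{\gamma }_{1}(t) \big ), \ \ \text{ for } \text{ a.e. } \ t \in [0,T]. \end{aligned}$$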

Under the Hörmander condition the following theorem holds true.

Theorem 2.6

(Chow, see [15]) Let M be a smooth manifold and \(\mathcal {H}\) a bracket generating distribution defined on M. If M is connected, then there exists a horizontal curve joining any two given points of M.

2.1 Carnot type geometries

From this point on we will consider only the case where the manifold M is the Euclidean space \(\mathbb {R}^N\). Moreover, in this paper we will concentrate on sub-Riemannian geometries with a particular structure: the so called Carnot-type geometries.

Definition 2.7

Let us consider a sub-Riemannian geometry \((M, \mathcal {H}, g)\). We say that \(X_{1}, \dots , X_{m}\), with \(m<N\), are Carnot-type vector fields if, for every \(i \in \{1, \dots , m\}\), the j-th component of \(X_{i}\) is 0 for \(j \in \{ 1, \dots , m \} \setminus \{ i \}\), the i-th component is equal to 1 and the remaining \(N-m\) components are polynomials in x.

For later use we also introduce the matrix associated to the vector fields \(X_{1} , \dots , X_{m}\), which is the \(m \times N\) matrix defined as

$$\begin{aligned} \sigma (x)=[X_{1}(x), \dots , X_{m}(x)]^{T}. \end{aligned}$$

In general, for Carnot-type geometries, the matrix \(\sigma \) assumes the following structure:

$$\begin{aligned} \sigma (x) = \begin{bmatrix} I_{m \times m}&A(x_{1}, \dots x_{m}) \end{bmatrix} \end{aligned}$$
(2.1)

where \(A(x_{1}, \dots , x_{m})\) is an \(m \times (N-m)\) matrix depending only on the first m components of x.

Example

(The Heisenberg group) The most significant example of a sub-Riemannian geometry is the so called Heisenberg group. For a formal definition of the Heisenberg group and the connection between its structure as a non commutative Lie group and its manifold structure we refer to [1]. Here we simply introduce the first Heisenberg group \(\mathbb {H}^{1}\) as the sub-Riemannian structure induced on \(\mathbb {R}^{3}\) by the vector fields

$$\begin{aligned} X_1(x)= \begin{pmatrix} 1\\ 0 \\ - \frac{x_{2}}{2} \end{pmatrix} \quad \text {and} \quad X_2(x)= \begin{pmatrix} 0\\ 1 \\ \frac{x_{1}}{2} \end{pmatrix}, \quad \forall \; x=(x_1,x_2,x_3)\in \mathbb {R}^3. \end{aligned}$$

In the case of the Heisenberg group, the matrix \(\sigma \) is given by

$$\begin{aligned} \sigma (x)= \begin{bmatrix} 1 & 0 & - \frac{x_{2}}{2} \\ 0 & 1 & \frac{x_{1}}{2} \end{bmatrix}, \quad \forall \, x=(x_{1},x_{2},x_{3}) \in \mathbb {R}^{3}. \end{aligned}$$

The introduced vector fields satisfy the Hörmander condition: in fact \([X_1,X_2](x)=\begin{pmatrix}0\\ 0\\ 1\end{pmatrix}\) for any \(x\in \mathbb {R}^3\).
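
This can be checked directly with the coordinate formula for the bracket recalled above:

$$\begin{aligned} [X_1,X_2](x) = DX_2(x)\,X_1(x) - DX_1(x)\,X_2(x) = \begin{pmatrix} 0\\ 0 \\ \frac{1}{2} \end{pmatrix} - \begin{pmatrix} 0\\ 0 \\ -\frac{1}{2} \end{pmatrix} = \begin{pmatrix} 0\\ 0 \\ 1 \end{pmatrix}, \end{aligned}$$

so that \(X_1(x), X_2(x), [X_1,X_2](x)\) span \(T_x\mathbb {R}^3\) at every point.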

The previous structure, which applies to a large class of geometries, allows us to consider an easy and explicit Riemannian approximation.

Let us consider a distribution \(\mathcal {H}\) spanned by the Carnot-type vector fields \(\{ X_{1}, \dots , X_{m} \}\) defined on \(\mathbb {R}^{N}\) with \(m<N\) and satisfying the Hörmander condition. It is possible to complete the distribution \(\mathcal {H}\) by adding \(N-m\) vector fields \( X_{m+1} , \dots , X_{N}\) in order to construct an orthogonal basis for all \(x \in \mathbb {R}^{N}\), i.e.

$$\begin{aligned} \text {span} \big ( X_{1}(x) , \dots , X_{m}(x),X_{m+1}(x) , \dots , X_{N}(x)\big )=T_x\mathbb {R}^N\equiv \mathbb {R}^N, \;\forall \, x\in \mathbb {R}^N. \end{aligned}$$

The geometry induced, for all \(\varepsilon > 0\), by the distribution

$$\begin{aligned} \mathcal {H}_\varepsilon (x)= span \{ X_{1}(x) , \dots , X_{m}(x), \varepsilon X_{m+1}(x) , \dots , \varepsilon X_{N}(x) \}, \ \ \forall x \in \mathbb {R}^{N} \end{aligned}$$

is called the Riemannian approximation of our starting sub-Riemannian geometry. We remark that the associated basis is composed of vector fields which are orthonormal w.r.t. the approximated Riemannian metric \(g_{\varepsilon }\). The associated matrix is

$$\begin{aligned} \sigma _{\varepsilon }(x) = [ X_{1}(x), \dots, X_{m}(x) , \varepsilon X_{m+1}(x), \dots , \varepsilon X_{N}(x)]^{T}. \end{aligned}$$
(2.2)

We remark that \(\det ( \sigma _{\varepsilon }(x)) \ne 0\) for all \(x \in \mathbb {R}^{N}\).

We note that, in the case of Carnot-type geometries, we can always choose

$$\begin{aligned} X_i(x)=e_i, \quad \forall i=m+1,\dots ,N \quad \forall x\in \mathbb {R}^N, \end{aligned}$$

where by \(e_i\) we indicate the standard Euclidean unit vector with 1 at the i-th component.

Example

(Riemannian approximation of \(\mathbb {H}^{1}\)) In the case of the Heisenberg group introduced in the previous example, the matrix associated to the Riemannian approximation is, for every point \(x=(x_1,x_2,x_3)\), given by

$$\begin{aligned} \sigma _{\varepsilon }(x)= \begin{bmatrix} 1 & 0 & - \frac{x_{2}}{2} \\ 0 & 1 & \frac{x_{1}}{2} \\ 0 & 0 & \varepsilon \end{bmatrix}. \end{aligned}$$
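
As a quick consistency check, expanding the determinant along the last row gives \(\det (\sigma _{\varepsilon }(x)) = \varepsilon \ne 0\) for all \(x \in \mathbb {R}^{3}\), in agreement with the general remark above.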

Remark 2.8

This technique is called Riemannian approximation since, as \(\varepsilon \rightarrow 0^+\), the geometry induced by the Riemannian approximation converges, in the Gromov-Hausdorff sense (see [13] for further details), to the original sub-Riemannian geometry (as shown, for example, in [5]).

3 Horizontal mean curvature evolution

Given a smooth hypersurface \(\Gamma \), we indicate by \(n_E(x)\) the standard (Euclidean) normal to the hypersurface \(\Gamma \) at the point x. The following definitions will be key for this paper (see [8] for further details).

Definition 3.1

Given a smooth hypersurface \(\Gamma \), the horizontal normal at \(x \in \Gamma \) is the renormalized projection of the Euclidean normal on the horizontal space \(\mathcal {H}_x\), i.e., writing \(pr_{\mathcal {H}} n_{E}(x) = \alpha _1(x)X_1(x)+\dots +\alpha _m(x)X_m(x)\) for suitable coefficients \(\alpha _i(x)\),

$$\begin{aligned} n_{0}(x):= \frac{pr_{\mathcal {H}} n_{E}(x)}{|pr_{\mathcal {H}} n_{E}(x)|_{g}} = \frac{ \alpha _1(x)X_1(x)+\dots +\alpha _m(x)X_m(x)}{\sqrt{\alpha ^2_1(x)+\dots +\alpha ^2_m(x)}}\in \mathcal {H}_x \subset \mathbb {R}^N. \end{aligned}$$

With an abuse of notation we will often indicate by \(n_{0}(x)\) the associated \(\mathbb {R}^m\)-valued vector

$$\begin{aligned} n_{0}(x)= \frac{ (\alpha _1(x), \dots , \alpha _m(x))^{T}}{\sqrt{\alpha ^2_1(x)+\dots +\alpha ^2_m(x)}}\in \mathbb {R}^m. \end{aligned}$$
(3.1)

The main difference between the horizontal normal and the Euclidean normal is that the former may fail to exist even for smooth hypersurfaces: at some points the horizontal normal is not defined even though the Euclidean one exists. These points are called characteristic points.

Definition 3.2

Given a smooth hypersurface \(\Gamma \), a point \(x \in \Gamma \) is called characteristic whenever \(n_E(x)\) is orthogonal to the horizontal plane \(\mathcal {H}_x\), so that its projection on such a subspace vanishes, i.e.

$$\begin{aligned} \alpha ^2_1(x)+\dots +\alpha ^2_m(x)=0. \end{aligned}$$
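
For instance, anticipating the level set formulas (3.3) below, consider in the Heisenberg group the plane \(\Gamma = \{x_3 = 0\}\), i.e. \(u(x)=x_3\): the coefficients of the projection are proportional to

$$\begin{aligned} X_1 u(x) = -\frac{x_2}{2}, \qquad X_2 u(x) = \frac{x_1}{2}, \end{aligned}$$

which vanish simultaneously only at the origin, the unique characteristic point of \(\Gamma \).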

We recall that, for every smooth hypersurface, the mean curvature at the point \(x \in \Gamma \) is defined as the Euclidean divergence of the Euclidean normal at that point. Similarly, for every smooth hypersurface, we introduce the horizontal mean curvature.

Definition 3.3

Given a smooth hypersurface \(\Gamma \) and a non characteristic point \(x\in \Gamma \), the horizontal mean curvature is defined as the horizontal divergence of the horizontal normal, i.e. \( k_{0}(x) = div_{\mathcal {H}} n_{0}(x), \) where \(n_0(x) \) is the \(\mathbb {R}^m\)-valued vector associated to the horizontal normal (see (3.1)) while \(div_{\mathcal {H}} \) is the divergence w.r.t. the vector fields \(X_1,\dots , X_m\), i.e.

$$\begin{aligned} k_{0}(x) = X_1\left( \frac{\alpha _1(x)}{\sqrt{\sum _{i=1}^m\alpha _i^2(x)}}\right) + \cdots + X_m\left( \frac{\alpha _m(x)}{\sqrt{\sum _{i=1}^m\alpha _i^2(x)}}\right) . \end{aligned}$$

Obviously the horizontal mean curvature is never defined at characteristic points, since there the horizontal normal does not exist.

Definition 3.4

Let \(\Gamma _t\) be a family of smooth hypersurfaces in \(\mathbb {R}^N\). We say that \(\Gamma _{t}\) is an evolution by horizontal mean curvature flow of \(\Gamma \) if and only if \(\Gamma _0=\Gamma \) and for any smooth horizontal curve \(\gamma : [0,T] \rightarrow \mathbb {R}^{N}\) such that \(\gamma (t) \in \Gamma _{t}\) for all \(t \in [0,T]\), the horizontal normal velocity \(v_{0}\) is equal to minus the horizontal mean curvature, i.e.

$$\begin{aligned} v_{0}(\gamma (t)):= g_{\gamma (t)}(\dot{\gamma }(t), n_{0}(\gamma (t))) = - k_{0}(\gamma (t)), \end{aligned}$$
(3.2)

where \(n_{0}(\gamma (t))\) and \(k_{0}(\gamma (t))\) are respectively the horizontal normal and the horizontal mean curvature, as in Definitions 3.1 and 3.3, at the point \(\gamma (t)\).

We now compute the horizontal normal and the horizontal mean curvature for a smooth hypersurface expressed as a zero level set, i.e.

$$\begin{aligned} \Gamma = \big \{x \in \mathbb {R}^{N}| u(x)= 0 \big \}, \end{aligned}$$

for some smooth function \(u:\mathbb {R}^N\rightarrow \mathbb {R}\).

As in [8], since in the level set formulation the coefficients \(\alpha _i\) are proportional to \(X_{i} u\), the horizontal normal may be expressed as

$$\begin{aligned} n_{0}(x)= \left( \frac{X_{1} u (x)}{\sqrt{\sum _{i=1}^{m} (X_{i} u(x))^2}} , \dots , \frac{X_{m} u (x)}{\sqrt{\sum _{i=1}^{m} (X_{i} u(x))^2}} \right) . \end{aligned}$$
(3.3)

Similarly, we write the horizontal mean curvature as

$$\begin{aligned} k_{0}(x)= \sum _{i=1}^{m} X_{i} \left( \frac{X_{i}u (x)}{\sqrt{\sum _{j=1}^{m} (X_{j} u(x))^2}} \right) . \end{aligned}$$
(3.4)

Let \(\Gamma _{t} = \{ x \in \mathbb {R}^{N} \,|\, u(x,t) = 0 \}\), where u is \(C^{2}\). Applying (3.3) and (3.4) to Definition 3.4 we obtain that u solves the following PDE

$$\begin{aligned} u_{t} = Tr( (\mathcal {X}^{2}u)^{*}) - \bigg < (\mathcal {X}^{2}u)^{*} \frac{\mathcal {X} u}{|\mathcal {X} u|} , \frac{\mathcal {X} u}{|\mathcal {X} u|} \bigg > \end{aligned}$$
(3.5)

where \(\mathcal {X}u\) is the horizontal gradient, that is

$$\begin{aligned} \mathcal {X}u:= (X_{1}u , \dots , X_{m}u)^{T} \end{aligned}$$

and \((\mathcal {X}^{2}u)^{*}\) is the symmetrized horizontal Hessian, that is

$$\begin{aligned} ((\mathcal {X}^{2} u)^{*})_{ij}:= \frac{X_{i}(X_{j}u) + X_{j}(X_{i}u)}{2}. \end{aligned}$$

As remarked in [8], it is possible to write (3.5) in the form

$$\begin{aligned} u_{t} + F(x,Du,D^{2}u) = 0 \end{aligned}$$

with

$$\begin{aligned} F(x,q,S) =&- Tr( \sigma (x) S \sigma ^{T} (x) + A(x,q)) \nonumber \\&+ \left\langle \left( \sigma (x) S \sigma ^{T}(x) + A(x,q) \right) \frac{\sigma (x) q}{| \sigma (x) q|} , \frac{\sigma (x) q}{|\sigma (x) q|} \right\rangle \end{aligned}$$
(3.6)

where

$$\begin{aligned} A_{ij}(x,q) = \frac{1}{2} \big < \nabla _{X_{i}}X_{j}(x) + \nabla _{X_{j}}X_{i}(x), q \big >. \end{aligned}$$
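
The term A collects the first order corrections which appear when the intrinsic derivatives are rewritten in terms of the Euclidean ones: for smooth u a direct computation gives

$$\begin{aligned} X_{i}(X_{j}u) = \big < X_{j}, D^{2}u \, X_{i} \big > + \big < \nabla _{X_{i}}X_{j}, Du \big >, \qquad \text{ hence } \qquad ((\mathcal {X}^{2}u)^{*})_{ij} = \big ( \sigma (x) D^{2}u \, \sigma ^{T}(x) \big )_{ij} + A_{ij}(x,Du), \end{aligned}$$

which explains the form of (3.6).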

We observe that the function F(x, q, S) is well defined and continuous whenever \(|\sigma (x)q|>0\), so we define the set \(\mathcal {V}= \{(x,q) \in \mathbb {R}^{N} \times \mathbb {R}^{N} \,|\ \sigma (x)q=0 \}\). In this way we observe that the function

$$\begin{aligned} F: (\mathbb {R}^{2N} \setminus \mathcal {V}) \times Sym(N) \rightarrow \mathbb {R} \end{aligned}$$

is well defined.

We remark that if we consider \((x,q) \in \mathcal {V}\) then F is not defined and cannot be extended continuously. Hence, in order to extend it to the whole space, we have to compute the upper and lower envelopes of F.

Definition 3.5

Let us consider a locally bounded function \(u: [0,T] \times \mathbb {R}^{N} \rightarrow \mathbb {R}\).

  • The upper semicontinuous envelope is defined as

    $$\begin{aligned} u^{*}(t,x) := \inf \{ v(t, x)|\ v \ \text{ cont. } \text{ and } v \ge u \} = \limsup _{r \rightarrow 0^{+}} \sup \{ u(s, y)| \ |y - x| \le r, \ |t - s| \le r \}. \end{aligned}$$
  • The lower semicontinuous envelope is defined as

    $$\begin{aligned} u_{*}(t,x) := \sup \{ v(t, x)|\ v \ \text{ cont. } \text{ and } v \le u \} = \liminf _{r \rightarrow 0^{+}} \inf \{ u(s, y)| \ |y - x| \le r, \ |t - s| \le r \}. \end{aligned}$$

Remark 3.6

If the function \(u: [0,T] \times \mathbb {R}^{N} \rightarrow \mathbb {R}\) is continuous then it holds true

$$\begin{aligned} u_{*}(t,x) = u(t,x) = u^{*}(t,x), \ \ \text{ for } \text{ all } \ \ \ (t,x) \in [0,T] \times \mathbb {R}^{N}. \end{aligned}$$

Remark 3.7

Applying Definition 3.5 to the function F defined in (3.6) we obtain

$$\begin{aligned} F^{*}(x,q,S)= {\left\{ \begin{array}{ll} -Tr(\overline{S}) + \left\langle \overline{S} \frac{\sigma (x) q}{|\sigma (x) q|} , \frac{\sigma (x) q}{| \sigma (x) q|} \right\rangle , \ \ | \sigma (x) q| \ne 0, \\ -Tr(\overline{S}) + \lambda _{max}(\overline{S}), \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ | \sigma (x) q| = 0 \end{array}\right. } \end{aligned}$$

and

$$\begin{aligned} F_{*}(x,q,S)= {\left\{ \begin{array}{ll} -Tr(\overline{S}) + \left\langle \overline{S} \frac{\sigma (x) q}{|\sigma (x) q|} , \frac{\sigma (x) q}{| \sigma (x) q|} \right\rangle , \ \ |\sigma (x) q| \ne 0, \\ -Tr(\overline{S}) + \lambda _{min}(\overline{S}), \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ | \sigma (x) q| = 0 \end{array}\right. } \end{aligned}$$

where \(\overline{S} = \sigma (x) S \sigma ^{T}(x) + A(x,q)\) with \(\lambda _{max}\) and \(\lambda _{min}\) the maximum and the minimum eigenvalues of the matrix \(\overline{S}\).

4 Approximated Riemannian mean curvature flow

Equation (3.5) can be approximated by a Riemannian mean curvature flow using the Riemannian approximation. This leads to the following generalizations of the definitions of horizontal normal and horizontal mean curvature.

Definition 4.1

Given a smooth hypersurface \(\Gamma \), the approximated Riemannian normal at the point \(x \in \Gamma \) is the renormalized projection of the Euclidean normal on the approximated space \(\mathcal {H}^{\varepsilon }_x\), i.e.

$$\begin{aligned} n_{\varepsilon }(x) :&= \frac{pr_{\mathcal {H}_{\varepsilon }} n_{E}(x)}{|pr_{\mathcal {H}_{\varepsilon }} n_{E}(x)|_{g_{\varepsilon }}} \\&= \frac{ \sum _{i=1}^{m}\alpha _i(x)X_i(x)+ \varepsilon \sum _{i=m+1}^{N} \alpha _{i}(x) X_{i}(x)}{\sqrt{\alpha ^2_1(x)+\dots +\alpha ^2_m(x) + \varepsilon ^{2} \alpha ^2_{m+1}(x) + \dots + \varepsilon ^{2} \alpha ^{2}_{N}(x)}}\in \mathcal {H}^{\varepsilon }_x\subset \mathbb {R}^N. \end{aligned}$$

With an abuse of notation, we will often indicate by \(n_{\varepsilon }(x)\) the associated \(\mathbb {R}^N\)-valued vector

$$\begin{aligned} n_{\varepsilon }(x)= \frac{( \alpha _{1}(x), \dots , \alpha _{m}(x), \varepsilon \alpha _{m+1}(x), \dots , \varepsilon \alpha _{N}(x))^{T}}{\sqrt{\alpha ^2_1(x)+\dots +\alpha ^2_m(x) + \varepsilon ^{2} \alpha ^{2}_{m+1}(x) + \dots + \varepsilon ^{2} \alpha ^{2}_{N}(x)}}\in \mathbb {R}^N. \end{aligned}$$
(4.1)

Definition 4.2

Given a smooth hypersurface \(\Gamma \) and a point \(x\in \Gamma \), the approximated Riemannian mean curvature is defined as the approximated Riemannian divergence of the approximated Riemannian normal, i.e. \( k_{\varepsilon }(x) = div_{\mathcal {H}^{\varepsilon }} n_{\varepsilon }(x), \) where \(n_{\varepsilon }(x) \) is the \(\mathbb {R}^N\)-valued vector associated to the approximated Riemannian normal (see (4.1)) while \(div_{\mathcal {H}^{\varepsilon }} \) is the divergence w.r.t. the vector fields \(X_1,\dots , X_m, \varepsilon X_{m+1}, \dots ,\varepsilon X_{N}\), i.e.

$$\begin{aligned} k_{\varepsilon }(x) \!&= \! \sum _{i=1}^{m} X_i\left( \frac{\alpha _i(x)}{\sqrt{\sum _{j=1}^m\alpha _j^2(x) + \varepsilon ^{2} \sum _{k=m+1}^{N} \alpha _{k}^{2}(x)}}\right) \nonumber \\&\quad + \varepsilon \! \sum _{i=m+1}^{N} \! X_i\left( \frac{\varepsilon \alpha _i(x)}{\sqrt{\sum _{j=1}^m\alpha _j^2(x) + \varepsilon ^{2} \sum _{k=m+1}^{N} \alpha _{k}^{2}(x)}}\right) \! . \end{aligned}$$
(4.2)

Remark 4.3

In this setting there are no characteristic points on the hypersurface \(\Gamma \): whenever the Euclidean normal is non-zero, at least one coefficient \(\alpha _{i}(x)\) in (4.1) is non-zero.
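
For instance, for the plane \(\{x_3=0\}\) in the approximated Heisenberg geometry, i.e. \(u(x)=x_3\), the coefficients appearing in (4.1) are proportional to

$$\begin{aligned} \big ( X_1 u, \ X_2 u, \ \varepsilon X_3 u \big ) = \Big ( -\frac{x_2}{2}, \ \frac{x_1}{2}, \ \varepsilon \Big ) \ne 0 \ \ \ \forall x \in \mathbb {R}^{3}, \end{aligned}$$

while the horizontal part of this vector vanishes on the whole \(x_3\)-axis: the Riemannian approximation removes the characteristic points.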

We now define the approximated Riemannian mean curvature flow, adapting the definition of the horizontal mean curvature flow (as stated in Definition 3.4) to the approximated Riemannian case.

Definition 4.4

Let \(\Gamma _t\) be a family of smooth hypersurfaces in \(\mathbb {R}^N\). We say that \(\Gamma _{t}\) is an evolution by approximated Riemannian mean curvature flow of \(\Gamma \) if and only if \(\Gamma _0=\Gamma \) and for any smooth horizontal curve \(\gamma _{\varepsilon }: [0,T] \rightarrow \mathbb {R}^{N}\) such that \(\gamma _{\varepsilon }(t) \in \Gamma _{t}\) for all \(t \in [0,T]\), the approximated Riemannian normal velocity \(v_{\varepsilon }\) is equal to minus the approximated Riemannian mean curvature, i.e.

$$\begin{aligned} v_{\varepsilon }(\gamma _{\varepsilon }(t)):= (g_{\varepsilon })_{\gamma _{\varepsilon }(t)}(\dot{\gamma }_{\varepsilon }(t), n_{\varepsilon }(\gamma _{\varepsilon }(t))) = - k_{\varepsilon }(\gamma _{\varepsilon }(t)), \end{aligned}$$

where \(n_{\varepsilon }(\gamma _{\varepsilon }(t))\) and \(k_{\varepsilon }(\gamma _{\varepsilon }(t))\) are respectively the approximated Riemannian normal and the approximated Riemannian mean curvature from Definitions 4.1 and 4.2, and \(g_{\varepsilon }\) is the approximated Riemannian metric.

As in Sect. 3, let us consider \(\Gamma _{t} = \{ x \,|\, u(x,t) = 0 \}\) where u is \(C^{2}\). Developing all the computations following [8] and recalling Definitions 4.1 and 4.2, adapted to the level set formulation as in Sect. 3 of this paper, we obtain that u solves the following PDE

$$\begin{aligned} u_{t} = Tr(( \mathcal {X}_{\varepsilon }^{2} u)^{*}) - \left\langle (\mathcal {X}^{2}_{\varepsilon } u)^{*} \frac{\mathcal {X}_{\varepsilon } u}{|\mathcal {X}_{\varepsilon } u|} , \frac{\mathcal {X}_{\varepsilon } u}{|\mathcal {X}_{\varepsilon } u|} \right\rangle = \Delta _{\varepsilon } u - \Delta _{\varepsilon , \infty } u, \end{aligned}$$
(4.3)

where \(\mathcal {X}_{\varepsilon } u\) is the approximated Riemannian gradient, i.e.

$$\begin{aligned} \mathcal {X}_{\varepsilon } u = (X_{1}u, \dots , X_{m}u, \varepsilon X_{m+1}u, \dots , \varepsilon X_{N}u)^{T} \end{aligned}$$

and \((\mathcal {X}^{2}_{\varepsilon } u )^{*}\) is the approximated Riemannian symmetrized Hessian, i.e.

$$\begin{aligned} (\mathcal {X}^{2}_{\varepsilon } u )^{*}_{ij} = \frac{X_{i}^{\varepsilon }(X_{j}^{\varepsilon } u) + X_{j}^{\varepsilon }(X_{i}^{\varepsilon }u)}{2}. \end{aligned}$$
(4.4)

We observe that we may write Equation (4.3) as

$$\begin{aligned} u_{t} + F_{\varepsilon }(x,Du, D^2 u)=0, \end{aligned}$$
(4.5)

with

$$\begin{aligned} F_{\varepsilon }(x,q,S) =&- Tr( \sigma _{\varepsilon }(x) S \sigma ^{T}_{\varepsilon } (x) + A_{\varepsilon }(x,q)) \nonumber \\&+ \left\langle \left( \sigma _{\varepsilon }(x) S \sigma _{\varepsilon }^{T}(x) + A_{\varepsilon }(x,q) \right) \frac{\sigma _{\varepsilon }(x) q}{| \sigma _{\varepsilon }(x) q|} , \frac{\sigma _{\varepsilon } (x) q}{|\sigma _{\varepsilon }(x) q|} \right\rangle \end{aligned}$$
(4.6)

where

$$\begin{aligned} (A_{\varepsilon })_{ij}(x,q) = \frac{1}{2} < \nabla _{X_{i}^{\varepsilon }} X_{j}^{\varepsilon } + \nabla _{X_{j}^{\varepsilon }} X_{i}^{\varepsilon } , q >. \end{aligned}$$

Let us remark that the function \(F_{\varepsilon }\) is well defined everywhere except at \(q=0\), due to the fact that \(det(\sigma _{\varepsilon }(x)) \ne 0\) for all \(x \in \mathbb {R}^{N}\). This change is crucial in the computation of the upper and lower envelopes of \(F_{\varepsilon }\).

Remark 4.5

Applying Definition 3.5 to the function \(F_{\varepsilon }\) as defined in (4.6) we obtain that the upper and lower envelopes are given by

$$\begin{aligned} F^{*}_{\varepsilon }(x,q,S)= {\left\{ \begin{array}{ll} -Tr(\overline{S}_{\varepsilon }) + \left\langle \overline{S}_{\varepsilon } \frac{\sigma _{\varepsilon }(x) q}{|\sigma _{\varepsilon }(x) q|} , \frac{\sigma _{\varepsilon }(x) q}{| \sigma _{\varepsilon }(x) q|} \right\rangle , \ \ |q| \ne 0, \\ -Tr(\overline{S}_{\varepsilon }) + \lambda _{max}(\overline{S}_{\varepsilon }), \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ | q| = 0 \end{array}\right. } \end{aligned}$$

and

$$\begin{aligned} (F_{\varepsilon })_{*}(x,q,S)= {\left\{ \begin{array}{ll} -Tr(\overline{S}_{\varepsilon }) + \left\langle \overline{S}_{\varepsilon } \frac{\sigma _{\varepsilon }(x) q}{|\sigma _{\varepsilon }(x) q|} , \frac{\sigma _{\varepsilon }(x) q}{| \sigma _{\varepsilon }(x) q|} \right\rangle , \ \ |q| \ne 0, \\ -Tr(\overline{S}_{\varepsilon }) + \lambda _{min}(\overline{S}_{\varepsilon }), \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ |q| = 0 \end{array}\right. } \end{aligned}$$

where \(\overline{S}_{\varepsilon } = \sigma _{\varepsilon }(x) S \sigma _{\varepsilon }^{T}(x) + A_{\varepsilon }(x,q)\) with \(\lambda _{max}\) and \(\lambda _{min}\) the maximum and the minimum eigenvalues of the matrix \(\overline{S}_{\varepsilon }\).

Remark 4.6

Let us remark that, while in the horizontal case the upper (resp. lower) envelope depends also on the sub-Riemannian geometry (through the condition \(|\sigma (x)q|>0\)), in the approximated Riemannian case it depends only on the variable q (since \(det(\sigma _{\varepsilon }(x)) \ne 0\) for all \(x \in \mathbb {R}^{N}\)).

4.1 The approximated Riemannian stochastic control problem

Let us consider a family of smooth vector fields \(\mathcal {X}=\{ X_{1}, \dots, X_{m} \}\) and its Riemannian approximation \(\mathcal {X}_{\varepsilon }=\{ X_{1}, \dots , X_{m} , \varepsilon X_{m+1}, \dots , \varepsilon X_{N} \}\).

Definition 4.7

We define the horizontal Brownian motion as the process

$$\begin{aligned} d \xi = \sum _{i=1}^{m} X_{i}( \xi ) \circ dB^{i}_{m}, \end{aligned}$$

where \(B_{m}\) is an m-dimensional Brownian motion, \(\circ \) the Stratonovich differential and \(X_{i}\) the vector fields of \(\mathcal {X}\) which span the distribution \(\mathcal {H}\). We define the approximated Riemannian horizontal Brownian motion as

$$\begin{aligned} d \xi _{\varepsilon } = \sum _{i=1}^{N} X_{i}^{\varepsilon }( \xi _{\varepsilon }) \circ dB^{i}_{N} \end{aligned}$$

where \(B_{N}\) is an N-dimensional Brownian motion and \(X_{i}^{\varepsilon }\) the vector fields of \(\mathcal {X}_{\varepsilon }\) which span the distribution \(\mathcal {H}_{\varepsilon }\).
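
The following minimal numerical sketch (in Python, with purely illustrative names) shows how a sample path of the approximated Riemannian Brownian motion can be simulated in the Heisenberg group. Since for these Carnot-type fields the Stratonovich-Itô drift correction \(\frac{1}{2}\sum _{i} (DX_{i}^{\varepsilon }) X_{i}^{\varepsilon }\) vanishes identically, a plain Euler-Maruyama scheme suffices:

    import numpy as np

    def sigma_eps(x, eps):
        """Columns X_1(x), X_2(x), eps*X_3(x) of the approximated Heisenberg frame."""
        x1, x2, _ = x
        return np.array([[1.0,   0.0,  0.0],
                         [0.0,   1.0,  0.0],
                         [-x2/2, x1/2, eps]])

    def sample_path(x0, T, n_steps, eps, rng):
        """Euler-Maruyama for d(xi) = sum_i X_i^eps(xi) o dB^i; for these
        fields the Stratonovich and Ito formulations coincide (zero drift)."""
        dt = T / n_steps
        path = np.empty((n_steps + 1, 3))
        path[0] = x0
        for k in range(n_steps):
            dB = rng.normal(0.0, np.sqrt(dt), size=3)  # increments of B_3
            path[k + 1] = path[k] + sigma_eps(path[k], eps) @ dB
        return path

    rng = np.random.default_rng(0)
    print(sample_path(np.zeros(3), T=1.0, n_steps=10_000, eps=0.1, rng=rng)[-1])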

Let \((\Omega , \mathcal {F}, \{ \mathcal {F}_{t} \}_{t \ge 0}, \mathbb {P})\) be a filtered probability space and let \(B_{j}\) be a j-dimensional Brownian motion adapted to the filtration \(\{ \mathcal {F}_{t} \}_{t \ge 0}\), with \(j=m,N\). We recall that a predictable process is a time-continuous stochastic process \(\{\xi (t)\}_{t \ge 0}\) defined on the filtered probability space \((\Omega , \mathcal {F}, \{\mathcal {F}_{t}\}_{t \ge 0}, \mathbb {P})\) which is measurable with respect to the \(\sigma \)-algebra generated by all left-continuous adapted processes (see [3] and [11] for further details). Given a smooth function \(g: \mathbb {R}^{N} \rightarrow \mathbb {R}\) (which parametrizes the starting hypersurface at time \(t=0\)) we introduce \(\xi ^{t,x, \nu }\), the solution of the controlled SDE

$$\begin{aligned} {\left\{ \begin{array}{ll} d \xi ^{t,x, \nu }(s) = \sqrt{2} \sigma ^{T} ( \xi ^{t,x, \nu } (s)) \circ dB_{m}^{\nu }(s), \ \ \ \ \ s \in (t,T], \\ dB_{m}^{\nu }(s)= \nu (s)dB_{m}(s), \\ \xi ^{t,x, \nu }(t) = x, \end{array}\right. } \end{aligned}$$
(4.7)

where the matrix \(\sigma \) is defined in (2.1), \(\circ \) represents the differential in the sense of Stratonovich, the set of admissible controls is

$$\begin{aligned} \mathcal {A} = \big \{ \nu : [t,T] \rightarrow Sym(m) \ \text{ predictable } \ | \nu \ge 0 , \ I_{m} - \nu ^{2} \ge 0 , \ Tr(I_{m} - \nu ^{2})=1\big \} \end{aligned}$$
(4.8)

and the value function \(V:[0,T] \times \mathbb {R}^{N} \rightarrow \mathbb {R}\) is defined as

$$\begin{aligned} V(t,x):= \inf _{\nu \in \mathcal {A}} ess \sup _{\omega \in \Omega } g(\xi ^{t,x, \nu }(T)(\omega )). \end{aligned}$$
(4.9)

Similarly, for \(\varepsilon >0\) fixed, we define \(\xi ^{t,x, \nu _{1}}_{\varepsilon }\) as the solution of the SDE

$$\begin{aligned} {\left\{ \begin{array}{ll} d \xi ^{t,x, \nu _{1}}_{\varepsilon }(s) = \sqrt{2} \sigma ^{T}_{\varepsilon } ( \xi ^{t,x, \nu _{1}}_{\varepsilon } (s)) \circ dB_{N}^{\nu _{1}}(s), \ \ \ \ \ s \in (t,T], \\ dB_{N}^{\nu _{1}}(s) = \nu _{1}(s) dB_{N}(s), \\ \xi ^{t,x, \nu _{1}}_{ \varepsilon }(t) = x, \end{array}\right. } \end{aligned}$$
(4.10)

where \(\sigma _{\varepsilon }\) is the matrix defined in (2.2), the set of admissible controls is

$$\begin{aligned} \mathcal {A}_{1} = \big \{ \nu _{1}: [t,T] \rightarrow Sym(N) \ \text{ predictable } \ | \ \nu _{1} \ge 0 , \ I_{N} - \nu ^{2}_{1} \ge 0 , \ Tr(I_{N} - \nu ^{2}_{1})=1 \big \} \end{aligned}$$
(4.11)

and the value function \(V^{\varepsilon }: [0,T] \times \mathbb {R}^{N} \rightarrow \mathbb {R}\) is defined by

$$\begin{aligned} V^{\varepsilon }(t,x) := \inf _{\nu _{1} \in \mathcal {A}_{1}}ess \sup _{\omega \in \Omega } g( \xi ^{t,x, \nu _{1}}_{\varepsilon }(T)(\omega )). \end{aligned}$$
(4.12)

It is possible to show that the function V as in (4.9) solves in the viscosity sense the level-set equation for the evolution by HMCF (see [8]).

Note also that the sets of controls (4.8) and (4.11) satisfy respectively

$$\begin{aligned} \{ \nu ^{2}| \ \nu \in \mathcal {A} \} = Co\{ I_{m} - a \otimes a | \ a \in \mathbb {R}^{m}, \ \ |a| =1 \}, \end{aligned}$$

and

$$\begin{aligned} \{ \nu _{1}^{2}| \ \nu _{1} \in \mathcal {A}_{1} \} = Co\{ I_{N} - \overline{a} \otimes \overline{a} | \ \overline{a} \in \mathbb {R}^{N}, \ \ |\overline{a}| =1 \}. \end{aligned}$$
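
As a sanity check that the extreme points on the right hand sides are indeed admissible, take \(\nu = I_{m} - a \otimes a\) with \(|a|=1\): then \(\nu \) is the orthogonal projection onto the hyperplane \(a^{\perp }\), so \(\nu \ge 0\), \(\nu ^{2} = \nu \), \(I_{m} - \nu ^{2} = a \otimes a \ge 0\) and

$$\begin{aligned} Tr(I_{m} - \nu ^{2}) = Tr(a \otimes a) = |a|^{2} = 1, \end{aligned}$$

so that the constraints in (4.8) are satisfied; this is also the geometric content of Remark 4.9 below.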

Remark 4.8

Let us remark that the first equations of the systems (4.7) and (4.10) have a differential in the Stratonovich sense, while the second ones have a differential in the Itô sense.

Remark 4.9

Roughly speaking, it is possible to see (4.8) and (4.11) as sets of controls which locally constrain the horizontal Brownian motion and the approximated Riemannian Brownian motion to a tangent subspace of codimension one (see [2, 8] for further details).

Next we introduce the p-regularising approximations of the functions V and \(V^{\varepsilon }\). These functions are the \(L^{p}\)-approximations of the \(L^{\infty }\) norms appearing in (4.9) and (4.12).

Definition 4.10

For \(p>1\), the p-approximation of the value function (4.9) is defined as

$$\begin{aligned} V_{p}(t,x) := \inf _{\nu \in \mathcal {A}} \mathbb {E}\big [ |g(\xi ^{t,x, \nu }(T))|^{p} \big ]^{\frac{1}{p}}, \end{aligned}$$
(4.13)

where \(\xi ^{t,x, \nu }\) is as in (4.7) and \(\mathcal {A}\) is as in (4.8).

Similarly, we introduce the following \(\varepsilon \)-p-regularising function, that is the p-approximation of the value function (4.12),

$$\begin{aligned} V^{\varepsilon }_{p}(t,x) := \inf _{\nu _{1} \in \mathcal {A}_{1}} \mathbb {E}\big [ |g(\xi ^{t,x, \nu _{1}}_{\varepsilon }(T))|^{p} \big ]^{\frac{1}{p}}, \end{aligned}$$
(4.14)

where \(\xi ^{t,x, \nu _{1}}_{\varepsilon }\) is as in (4.10) and \(\mathcal {A}_{1}\) is as in (4.11).

Definition 4.11

The Hamiltonian associated to the horizontal stochastic optimal control problem (4.7) is

$$\begin{aligned} H(x,q,S) = \sup _{\nu \in \mathcal {A}} \bigg [ -Tr( \sigma (x) S \sigma ^{T}(x) \nu ^{2}) + \sum _{i,j=1}^{m}( \nu ^{2})_{ij} \big < \nabla _{X_{i}} X_{j}(x) , q \big > \bigg ], \end{aligned}$$

where \(\sigma \) is defined as in (2.1), \(q \in \mathbb {R}^{N}\), \(S \in Sym(N)\) and \(\mathcal {A}\) is as in (4.8).

Definition 4.12

The Hamiltonian associated to the approximated Riemannian stochastic optimal control problem (4.10) is

$$\begin{aligned} H_{\varepsilon }(x,q,S) = \sup _{\nu _{1} \in \mathcal {A}_{1}} \bigg [ -Tr( \sigma _{\varepsilon }(x) S \sigma _{\varepsilon }^{T}(x) \nu _{1}^{2}) + \sum _{i,j=1}^{N}( \nu ^{2}_{1})_{ij} \big < \nabla _{X^{\varepsilon }_{i}} X^{\varepsilon }_{j}(x) , q \big > \bigg ], \end{aligned}$$

where \(\sigma _{\varepsilon }\) is defined as in (2.2), \(q \in \mathbb {R}^{N}\), \(S \in Sym(N)\) and \(\mathcal {A}_{1}\) is as in (4.11).

Remark 4.13

For \(p>1\) fixed, the function \(V_{p}\) solves in the viscosity sense the PDE:

$$\begin{aligned} {\left\{ \begin{array}{ll} -(V_{p})_{t} + H_{p}(x, V_{p}, DV_{p}, D^{2}V_{p}) =0, & t \in [0,T) , \ x \in \mathbb {R}^{N}, \\ V_{p}(T,x)=g(x), & x \in \mathbb {R}^{N}, \end{array}\right. } \end{aligned}$$
(4.15)

where

$$\begin{aligned} H_{p}(x, r, q, M) := \sup _{\nu \in \mathcal {A}} \bigg [-(p-1) r^{-1} Tr[\nu \nu ^{T} q q^{T}] + Tr[\nu \nu ^{T} M]\bigg ], \end{aligned}$$
(4.16)

(see [2] for further details).

Remark 4.14

Similarly to Remark 4.13, for \(\varepsilon >0\) and \(p>1\) fixed, the function \(V_{p}^{\varepsilon }\) solves in the viscosity sense the PDE

$$\begin{aligned} {\left\{ \begin{array}{ll} -(V^{\varepsilon }_{p})_{t} + H^{\varepsilon }_{p}(x, V^{\varepsilon }_{p}, DV^{\varepsilon }_{p}, D^{2}V^{\varepsilon }_{p}) =0, & t \in [0,T) , \ x \in \mathbb {R}^{N}, \\ V^{\varepsilon }_{p}(T,x)=g(x), & x \in \mathbb {R}^{N}, \end{array}\right. } \end{aligned}$$
(4.17)

where

$$\begin{aligned} H_{p}^{\varepsilon }(x, r, q, M) := H_{p}(x, r, q_{\varepsilon }, M_{\varepsilon }) = \sup _{\nu \in \mathcal {A}_{1}} \bigg [-(p-1) r^{-1} Tr[\nu \nu ^{T} q_{\varepsilon } q^{T}_{\varepsilon }] + Tr[\nu \nu ^{T} M_{\varepsilon }]\bigg ], \end{aligned}$$
(4.18)

where \(\mathcal {A}_{1}\) is given in (4.11) and, for all \(q \in \mathbb {R}^{N}\) and \(M=(M_{ij})_{i,j=1}^{N} \in Sym(N)\),

$$\begin{aligned} q_{\varepsilon }:= \begin{bmatrix} q_{1} \\ \vdots \\ q_{m} \\ \varepsilon q_{m+1} \\ \vdots \\ \varepsilon q_{N} \end{bmatrix} \end{aligned}$$

and

$$\begin{aligned} M_{\varepsilon }:= \begin{bmatrix} M_{11} & \dots & M_{1m} & \varepsilon M_{1 (m+1)} & \dots & \varepsilon M_{1 N}\\ \vdots & & \vdots & \vdots & & \vdots \\ M_{m1} & \dots & M_{mm} & \varepsilon M_{m (m+1)} & \dots & \varepsilon M_{m N}\\ \varepsilon M_{(m+1)1} & \dots & \varepsilon M_{(m+1)m} & \varepsilon ^{2} M_{(m+1)(m+1)} & \dots & \varepsilon ^{2} M_{(m+1)N} \\ \vdots & & \vdots & \vdots & & \vdots \\ \varepsilon M_{N1} & \dots & \varepsilon M_{Nm} & \varepsilon ^{2} M_{N(m+1)} & \dots & \varepsilon ^{2} M_{NN} \end{bmatrix}. \end{aligned}$$
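
Equivalently, both substitutions amount to a conjugation with a diagonal scaling matrix \(\Lambda _{\varepsilon }\) (a notation introduced here only for compactness):

$$\begin{aligned} q_{\varepsilon } = \Lambda _{\varepsilon } q, \qquad M_{\varepsilon } = \Lambda _{\varepsilon } M \Lambda _{\varepsilon }, \qquad \Lambda _{\varepsilon } := \begin{bmatrix} I_{m} & 0 \\ 0 & \varepsilon I_{N-m} \end{bmatrix}, \end{aligned}$$

where \(I_{N-m}\) is the \((N-m) \times (N-m)\) identity matrix.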

5 \(V^{\varepsilon }\) as viscosity solution

In this last section we prove the main result of this paper. Before doing so, we have to introduce some technical lemmas.

Lemma 5.1

(Comparison Principle) Let us consider \(0< \varepsilon <1\) fixed. Let \(g_{1}\), \(g_{2}\) be uniformly continuous functions on \(\mathbb {R}^{N}\) with \(g_{1}\le g_{2}\), and let \(V_{i}^{\varepsilon }(t,x)\), for \(i=1,2\), be defined as in (4.12) with terminal costs \(g_{i}\). Then it holds true

$$\begin{aligned} V_{1}^{\varepsilon }(t,x) \le V^{\varepsilon }_{2}(t,x) \ \text{ on } [0,T] \times \mathbb {R}^{N}. \end{aligned}$$

Proof

It follows from the assumption \(g_{1} \le g_{2}\) and from the properties of infimum and essential supremum. \(\square \)

Lemma 5.2

Let us consider \(0< \varepsilon <1\) fixed. Let g be a bounded and uniformly continuous function on \(\mathbb {R}^{N}\) and let \(V^{\varepsilon }_{g}(t,x)\) be defined as in (4.12) with g as terminal cost (the subscript stresses the dependence on g). Let us consider \(\phi : \mathbb {R} \rightarrow \mathbb {R}\) continuous and strictly increasing. Then

$$\begin{aligned} \phi (V^{\varepsilon }_{g}(t,x)) = V^{\varepsilon }_{\phi (g)}(t,x). \end{aligned}$$

Proof

Since \(\phi \) is an increasing and continuous function, we remark that \(\phi (\inf A)= \inf \phi (A)\) for any \(A \subset \mathbb {R}\). Then, for every measurable function \(f: \Omega \rightarrow \mathbb {R}\) it is easy to see that

$$\begin{aligned} \phi (ess \sup f) = ess \sup (\phi (f)) \end{aligned}$$

and so we can conclude the proof. \(\square \)

Remark 5.3

Lemmas 5.1 and 5.2 allow us to conclude that the set \(\{V^{\varepsilon }(t, x)\le 0 \}\) depends only on the set \(\{ g(x)\le 0\}\) and not on the specific form of g. Furthermore we will show that \(V^{\varepsilon }(t, x)\) solves (in the viscosity sense) the level set equation for the evolution by approximated Riemannian mean curvature flow, for a fixed \(0<\varepsilon <1\).

We state now the main theorem of the paper.

Theorem 5.4

Let us consider \(0<\varepsilon <1\) fixed. Let \(g:\mathbb {R}^{N} \rightarrow \mathbb {R}\) be a globally bounded and Lipschitz function, \(T > 0\) and

$$\begin{aligned} \sigma _{\varepsilon }(x) = [X_{1}(x), \ldots , X_{m}(x), \varepsilon E_{m+1}(x) , \dots , \varepsilon E_{N}(x)]^{T} \end{aligned}$$

an \(N \times N\) matrix obtained from the Riemannian approximation of the \(m \times N\) Hörmander matrix \(\sigma (x) = [X_{1}(x), \ldots , X_{m}(x)]^{T}\), with \(m\le N\) and smooth coefficients, where \(E_{i}=(0, \dots , 1 , \dots , 0)^{T}\) denotes the constant vector field whose 1 is at the position i. Assuming that \(\sigma _{\varepsilon }\) and \(\nu _{\varepsilon }(x)=\sum _{i=1}^{N} \nabla _{ X^{\varepsilon }_{i}}X^{\varepsilon }_{i} (x)\) are Lipschitz (in order to have non-explosion for the solution of the SDE), the value function \(V^{\varepsilon } (t, x)\) defined by (4.12) is a bounded lower semicontinuous viscosity solution of the level set equation for the evolution by approximated Riemannian mean curvature flow, with terminal condition \(V^{\varepsilon } (T, x) = g(x)\).

In order to prove Theorem 5.4 we have to introduce the half-relaxed upper limit, prove some preliminary lemmas and theorems and, at the end, verify that the terminal condition is satisfied.

Definition 5.5

We define the half-relaxed upper limit of \(V_{p}^{\varepsilon }(t,x)\) as

$$\begin{aligned} V^{\sharp , \varepsilon }(t,x):=\limsup _{(s,y) \rightarrow (t,x), \ p \rightarrow \infty } V^{\varepsilon }_{p}(s,y). \end{aligned}$$

The following lemma allows us to use the definition of the half-relaxed upper limit instead of the definition of the upper envelope.

Lemma 5.6

Let us consider \(0<\varepsilon <1\) fixed. It holds true

$$\begin{aligned} V^{\sharp , \varepsilon }(t,x) = V^{* , \varepsilon }(t,x) \ \ \ \text{ for } \text{ all } \ \ \ (t,x) \in [0,T] \times \mathbb {R}^{N} \end{aligned}$$

where the upper envelope and the half-relaxed upper limit are defined as in Definitions 3.5 and 5.5.

Proof

We observe that \(V^{\sharp , \varepsilon } \ge V^{\varepsilon }\) and that \(V^{\sharp , \varepsilon }\) is an upper semicontinuous function. Then, since \(V^{*, \varepsilon }\) is the smallest upper semicontinuous function above \(V^{\varepsilon }\), it holds \(V^{\sharp , \varepsilon } \ge V^{*, \varepsilon }\). On the other hand, recalling that \(V^{\varepsilon }_{p}(t,x) \le V^{\varepsilon }(t,x)\) for any \((t,x)\) and any \(p>1\), with \(\varepsilon >0\) fixed, taking the \(\limsup \) in \((t,x)\) and p we obtain \(V^{\sharp , \varepsilon } \le V^{*, \varepsilon }\), and the result follows. \(\square \)

Another important observation concerns the \(L^{p}\)-approximation \(V^{\varepsilon }_{p}(t,x)\) of \(V^{\varepsilon }(t,x)\), as in Definition 4.10. We obtain the following result for \(0<\varepsilon <1\) fixed.

Lemma 5.7

Let us consider \(0<\varepsilon < 1\) fixed. Under the assumptions of Theorem 5.4, we have

$$\begin{aligned} V^{\varepsilon }(t,x) = \lim _{p \rightarrow \infty } V_{p}^{\varepsilon }(t,x) \ \ \text{ for } \text{ all } \ \ (t,x) \in [0,T] \times \mathbb {R}^{N}. \end{aligned}$$

The convergence is pointwise.

Proof

Since the \(L^{p}\) norms are increasing in p and bounded by the essential supremum, we obtain immediately, for each fixed control and \(\varepsilon >0\) fixed,

$$\begin{aligned} V^{\varepsilon }(t,x) \ge V^{\varepsilon }_{p}(t,x). \end{aligned}$$

The other inequality is proved as in [8]. Let us consider \(q\ge 1\); by the property of the infimum we can find a control \(\nu _{1,q}\) such that

$$\begin{aligned} \bigg ( \mathbb {E}[ g^{q}( \xi ^{t,x, \nu _{1, q}}_{ \varepsilon } (T))] \bigg )^{\frac{1}{q}} \le V^{\varepsilon }_{q}(t,x) + \frac{1}{q}. \end{aligned}$$

The controlled SDE (4.10) has a drift part which depends on the control only through \(\nu _{1}^{2}\) (recall that \(\varepsilon >0\) is fixed) and our control set is convex in \(\nu ^{2}_{1}\). Proceeding as in [8], we obtain that there exists a probability space \((\Omega , \mathcal {F} , \{ \mathcal {F}_{t} \}_{t \ge 0}, \mathbb {P} , B_{N}, \nu _{1} )\) such that, along a subsequence \(q_{k}\), the process \(\xi ^{t,x, \nu _{1, q_{k}}}_{ \varepsilon }\) converges weakly to \(\xi ^{t,x, \nu _{1}}_{\varepsilon }\); hence for any fixed \(\overline{q} \ge 1\)

$$\begin{aligned} \lim _{k \rightarrow \infty } \bigg ( \mathbb {E}[ g^{\overline{q}}(\xi ^{t,x, \nu _{1, q_{k}}}_{ \varepsilon }(T))] \bigg )^{\frac{1}{\overline{q}}} = \bigg ( \mathbb {E} [g^{\overline{q}}(\xi ^{t,x, \nu _{1}}_{\varepsilon }(T)) ] \bigg )^{\frac{1}{\overline{q}}}. \end{aligned}$$

Since the \(L^{q}\) norm is non decreasing in q, it follows that

$$\begin{aligned} \bigg ( \mathbb {E}[g^{\overline{q}}(\xi ^{t, x , \nu _{1}}_{\varepsilon }(T))] \bigg )^{\frac{1}{\overline{q}}} \le \lim _{q \rightarrow \infty } V^{\varepsilon }_{q}(t,x). \end{aligned}$$

Finally, letting \(\overline{q} \rightarrow \infty \) and using the convergence of the \(L^{\overline{q}}\) norm to the \(L^{\infty }\) norm we obtain

$$\begin{aligned} V^{\varepsilon }(t,x) \le \lim _{q \rightarrow \infty } V_{q}^{\varepsilon }(t,x). \end{aligned}$$

\(\square \)

In order to prove that \(V^{\varepsilon }\) is a viscosity solution of approximated Riemannian mean curvature flow we have to recall a further lemma.

Lemma 5.8

( [2]) Let \(S \in Sym(N)\) be such that the eigenspace associated to the maximum eigenvalue is one-dimensional. Then the map \(S \mapsto \lambda _{max}(S)\) is \(C^{1}\) in a neighbourhood of S. Moreover,

$$\begin{aligned} D \lambda _{max}(S)(H)= <Ha,a>, \end{aligned}$$

for any \(H \in Sym(N)\), where \(a \in \mathbb {R}^{N}\), with \(|a|=1\), is the eigenvector associated to \(\lambda _{max}(S)\).
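
A two-dimensional example makes the statement concrete: for \(S = \mathrm{diag}(2,1)\) the maximum eigenvalue is simple, with unit eigenvector \(a = e_{1}\), and first order perturbation gives, for any \(H \in Sym(2)\),

$$\begin{aligned} \lambda _{max}(S + tH) = 2 + t H_{11} + O(t^{2}) = 2 + t <He_{1}, e_{1}> + O(t^{2}), \end{aligned}$$

in agreement with the formula above.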

Theorem 5.4 is a consequence of the following theorem.

Theorem 5.9

Let us consider \(0<\varepsilon <1\) fixed. Let \(g: \mathbb {R}^{N} \rightarrow \mathbb {R}\) be a globally bounded and Lipschitz function, \(T>0\) and \(\sigma _{\varepsilon }(x)\) a Riemannian approximation of the \(m \times N\) Hörmander matrix \(\sigma (x)\). Since the comparison principle holds (see [14]), the value function \(V^{\varepsilon }(t,x)\) is the unique continuous viscosity solution of the level set equation for the approximated Riemannian mean curvature flow, satisfying \(V^{\varepsilon }(T,x)=g(x)\).

Proof

We divide this proof into two steps: we prove that \(V^{\varepsilon }(t,x)\) is a viscosity supersolution and that \(V^{\sharp , \varepsilon }(t,x)\) is a viscosity subsolution.

  • \(V^{\varepsilon }\) is a viscosity supersolution: Let us consider \(\phi \in C^{1}([0,T]; C^{2}(\mathbb {R}^{N}))\) such that \(V^{\varepsilon } - \phi \) has a local minimum at \((t,x)\). Two cases are possible: if \(\mathcal {X}_{\varepsilon } \phi (t,x) \ne (0, \dots , 0)\) we have to verify that

    $$\begin{aligned} - \phi _{t}(t,x) - \Delta _{\varepsilon } \phi (t,x) + \Delta _{\varepsilon , \infty } \phi (t,x) \ge 0 \end{aligned}$$

    where the equation is given as in (4.3). If \(\mathcal {X}_{\varepsilon } \phi (t,x) = (0, \dots , 0)\) we have to verify that

    $$\begin{aligned} -\phi _{t}(t,x) - \Delta _{\varepsilon } \phi (t,x) + \lambda _{max}( (\mathcal {X}^{2}_{\varepsilon } \phi )^{*} (t,x)) \ge 0 \end{aligned}$$

    where \((\mathcal {X}^{2}_{\varepsilon } \phi )^{*}\) is defined as in (4.4).

    For any \(p>1\) there exists a sequence \((t_{p}, x_{p})\) such that \(V^{\varepsilon }_{p} - \phi \) has a local minimum at \((t_{p}, x_{p})\) and \((t_{p}, x_{p}) \rightarrow (t,x)\) as \(p \rightarrow \infty \). In fact, we can always assume that \((t,x)\) is a strict minimum in some ball \(B_{R}(t,x)\) (to obtain this it is sufficient to substitute a generic test function \(\phi (s,y)\) with the test function \(\phi (s,y) - |y-x|^{4} - |s-t|^{4}\)). Set \(K= \overline{B_{\frac{R}{2}}(t,x)}\); the sequence of minimum points \((t_{p}, x_{p})\) converges to some \((\overline{t} , \overline{x}) \in K\). As \(V^{\varepsilon }\) is the pointwise limit of \(V^{\varepsilon }_{p}\) as \(p \rightarrow \infty \) (see Lemma 5.7) and lower semicontinuous, a standard argument yields that \((\overline{t}, \overline{x})\) is a minimum point, hence it equals \((t,x)\). Then it holds true

    $$\begin{aligned} - \phi _{t}(t_{p}, x_{p}) + H_{\varepsilon }\big (x_{p}, D \phi (t_{p}, x_{p}), (p-1) (V_{p}^{\varepsilon })^{-1} D \phi (D \phi )^{T}(t_{p}, x_{p}) + D^{2} \phi (t_{p}, x_{p})\big ) \ge 0. \end{aligned}$$

    If \(\sigma _{\varepsilon }(x) D \phi (t,x) \ne 0\), we write the Hamiltonian in a more explicit way. Set

    $$\begin{aligned} S_{1}= (p-1) (V_{p}^{\varepsilon })^{-1}(t_{p}, x_{p}) (\mathcal {X}_{\varepsilon }\phi (t_{p}, x_{p}))( \mathcal {X}_{\varepsilon } \phi (t_{p}, x_{p}))^{T} \end{aligned}$$

    and

    $$\begin{aligned} S_{2}= (\mathcal {X}_{\varepsilon }^{2} \phi )^{*} (t_{p}, x_{p}) \end{aligned}$$

    then

    $$\begin{aligned} H_{\varepsilon }(x_{p}, S_{1} ,S_{2}) =&- Tr(S_{1} + S_{2}) + \lambda _{max}(S_{1} + S_{2}) \nonumber \\ =&-Tr(S_{1}) - Tr(S_{2}) + \lambda _{max}(S_{1} + S_{2}) \nonumber \\ =&-(p-1) (V_{p}^{\varepsilon })^{-1}(t_{p}, x_{p}) | \mathcal {X}_{\varepsilon } \phi (t_{p}, x_{p})|^{2}\nonumber \\&- \Delta _{\varepsilon } \phi (t_{p}, x_{p}) + \lambda _{max}(S_{1} + S_{2}) \end{aligned}$$
    (5.1)

    since the trace operator is linear and \(Tr\big ((\mathcal {X}_{\varepsilon } \phi (x_{p}))( \mathcal {X}_{\varepsilon } \phi (x_{p}))^{T}\big ) = | \mathcal {X}_{\varepsilon } \phi (x_{p})|^{2}\). Now we use Lemma 5.8 in order to expand \(\lambda _{max}\). We consider the matrix

    $$\begin{aligned} S= \frac{\mathcal {X}_{\varepsilon } \phi (t,x) (\mathcal {X}_{\varepsilon } \phi (t,x))^{T}}{V^{\varepsilon }(t,x)} \end{aligned}$$

    for which \(\lambda _{max}(S)= \frac{|\mathcal {X}_{\varepsilon } \phi (t,x)|^{2}}{V^{\varepsilon }(t,x)}\) and where \(a= \frac{\mathcal {X}_{\varepsilon } \phi (t,x)}{|\mathcal {X}_{\varepsilon } \phi (t,x)|}\) since \(\mathcal {X}_{\varepsilon } \phi (t,x) \ne 0\) (see [2] for further remarks). Let us consider

    $$\begin{aligned} S_{p} = \frac{(\mathcal {X}_{\varepsilon } \phi (t_{p}, x_{p}))(\mathcal {X}_{\varepsilon } \phi (t_{p}, x_{p}))^{T}}{V^{\varepsilon }_{p}(t_{p},x_{p})}, \end{aligned}$$

    it is immediate to observe that \(S_{p}\) converges to S as \(p \rightarrow \infty \). By Taylor’s formula we know that there exists a \(\theta _{p} \in (0,1)\) such that

    $$\begin{aligned} \lambda _{max}&\bigg ( S_{p} + \frac{(\mathcal {X}_{\varepsilon }^{2} \phi )^{*} (t_{p} , x_{p})}{p-1} \bigg ) = \lambda _{max}(S_{p})\\&+ \frac{1}{p-1} D \lambda _{max} \bigg ( S_{p} + \frac{\theta _{p}}{p-1} (\mathcal {X}^{2}_{\varepsilon } \phi )^{*}(t_{p}, x_{p}) \bigg ) (\mathcal {X}^{2}_{\varepsilon } \phi )^{*}(t_{p}, x_{p}). \end{aligned}$$

    We use the fact that \(\lambda _{max}\) is \(C^{1}\) in a neighbourhood of S and that \(S_{p} \rightarrow S\) to get

    $$\begin{aligned} \lambda _{max}&\bigg ( S_{p} + \frac{(\mathcal {X}_{\varepsilon }^{2}\phi )^{*} (t_{p}, x_{p})}{p-1} \bigg ) = \lambda _{max}(S_{p}) \\&+ \frac{1}{p-1}D \lambda _{max}(S)(\mathcal {X}^{2}_{\varepsilon } \phi )^{*} (t_{p}, x_{p}) + o\left( \frac{1}{p} \right) \end{aligned}$$

    where \(p o(1/p) \rightarrow 0\) when \(p \rightarrow \infty \). Hence we obtain

    $$\begin{aligned} \lambda _{max}&\bigg ( S_{p} + \frac{(\mathcal {X}^{2}_{\varepsilon } \phi )^{*} (t_{p}, x_{p})}{p-1} \bigg ) \\&= \lambda _{max}(S_{p}) + \frac{< (\mathcal {X}^{2}_{\varepsilon } \phi )^{*} (t_{p},x_{p}) \mathcal {X}_{\varepsilon } \phi (t,x) , \mathcal {X}_{\varepsilon } \phi (t,x)>}{(p-1) | (\mathcal {X}_{\varepsilon } \phi )(t,x)|^{2}}. \end{aligned}$$

    Then, expanding the p-Hamiltonian (5.1) we obtain immediately the inequality. If \(\mathcal {X}_{\varepsilon } \phi (t,x)=0\) then we use the subadditivity of \(S \mapsto \lambda _{max}(S)\) and remark that, since \(V_{p}^{\varepsilon }\) is a supersolution,

    $$\begin{aligned} 0 \le&- \phi _{t} + H_{\varepsilon }(x_{p}, D\phi , (p-1) (V_{p}^{\varepsilon })^{-1} D \phi (D \phi )^{T} + D^{2} \phi ) \\ \le&- \phi _{t} - (p-1)(V_{p}^{\varepsilon })^{-1}| \mathcal {X}_{\varepsilon } \phi |^{2} - Tr( (\mathcal {X}^{2}_{\varepsilon } \phi )^{*}) \\&+ \lambda _{max}\big ((p-1) (V^{\varepsilon }_{p})^{-1} \mathcal {X}_{\varepsilon } \phi ( \mathcal {X}_{\varepsilon } \phi )^{T} + (\mathcal {X}_{\varepsilon }^{2} \phi )^{*}\big ) \\ \le&- \phi _{t} - (p-1)(V_{p}^{\varepsilon })^{-1}| \mathcal {X}_{\varepsilon } \phi |^{2} - Tr( (\mathcal {X}^{2}_{\varepsilon } \phi )^{*}) \\&+ (p-1) (V^{\varepsilon }_{p})^{-1} |\mathcal {X}_{\varepsilon } \phi |^{2} + \lambda _{max}\big ((\mathcal {X}_{\varepsilon }^{2} \phi )^{*}\big ) \\ =&- \phi _{t} - Tr((\mathcal {X}^{2}_{\varepsilon } \phi )^{*}) + \lambda _{max}\big ((\mathcal {X}^{2}_{\varepsilon } \phi )^{*}\big ). \end{aligned}$$

    We can now conclude that \(V^{\varepsilon }\) is a supersolution.

  • \(V^{*, \varepsilon }\) is a viscosity subsolution: As a consequence of Lemma 5.6 it is possible to write \(V^{*, \varepsilon } = V^{\sharp , \varepsilon }\). Let \(\phi \in C^{1}([0,T]; C^{2}(\mathbb {R}^{N}))\) be such that \(V^{\sharp , \varepsilon } - \phi \) has a strict maximum at \((t_{0}, x_{0})\). Let us consider a sequence of maximum points \((t_{p}, x_{p})\) of \(V_{p}^{\varepsilon } - \phi \); then it is possible to find a subsequence converging to \((t_{0}, x_{0})\). Since \(V_{p}^{\varepsilon }\) is the solution of

    $$\begin{aligned} {\left\{ \begin{array}{ll} -(V_{p}^{\varepsilon })_{t} + H_{\varepsilon }(x , DV_{p}^{\varepsilon }, (p-1) (V_{p}^{\varepsilon })^{-1} DV_{p}^{\varepsilon } (DV_{p}^{\varepsilon })^{T} + D^{2}V_{p}^{\varepsilon }) = 0, & x \in \mathbb {R}^{N}, \ t \in [0,T), \\ V^{\varepsilon }_{p}(T,x) = g(x), & x \in \mathbb {R}^{N}, \end{array}\right. } \end{aligned}$$
    (5.2)

    then we have that

    $$\begin{aligned} 0 \ge - \phi _{t} + H_{\varepsilon }(x, D \phi , (p-1) (V_{p}^{\varepsilon })^{-1} D \phi (D \phi )^{T} + D^{2} \phi ) \end{aligned}$$
    (5.3)

    at the point \((t_{p}, x_{p})\). We define, for any \(z>0\), \(x, d \in \mathbb {R}^{N}\) and any \(N \times N\) symmetric matrix S,

    $$\begin{aligned} H^{\varepsilon }_{p}(x,z,d,S) =&- \frac{(p-1)}{z} | \sigma _{\varepsilon }(x) d |^{2} - Tr( \sigma ^{T}_{\varepsilon }(x) S \sigma _{\varepsilon }(x) + A_{\varepsilon }(x,d)) \\&+ \lambda _{max} \bigg ( \frac{(p-1)}{z} (\sigma _{\varepsilon }(x) d) (\sigma _{\varepsilon }(x) d)^{T} + \sigma _{\varepsilon }^{T}(x)S \sigma _{\varepsilon }(x) + A_{\varepsilon }(x,d) \bigg ) \end{aligned}$$

    and

    $$\begin{aligned} (H_{\varepsilon })^{*}(x,d,S) = {\left\{ \begin{array}{ll} - Tr( \sigma _{\varepsilon }^{T}(x)S \sigma _{\varepsilon }(x) + A_{\varepsilon }(x,d)) + \Big < (\sigma ^{T}_{\varepsilon }(x) S \sigma _{\varepsilon }(x) + A_{\varepsilon }(x,d)) \frac{\sigma _{\varepsilon }(x) d}{|\sigma _{\varepsilon }(x) d|}, \frac{\sigma _{\varepsilon }(x) d}{|\sigma _{\varepsilon }(x) d|} \Big >, & |d| \ne 0, \\ - Tr( \sigma _{\varepsilon }^{T}(x)S \sigma _{\varepsilon }(x) + A_{\varepsilon }(x,d)) + \lambda _{max} (\sigma ^{T}_{\varepsilon }(x) S \sigma _{\varepsilon }(x) + A_{\varepsilon }(x,d)), & |d|=0 \end{array}\right. } \end{aligned}$$

    and, as stated in [8], we can observe that

    $$\begin{aligned} H^{\varepsilon }_{p}(x,z,d,S) \ge (H^{\varepsilon })^{*}(x,d,S). \end{aligned}$$

    We note that for \(|d|=0\) the inequality is immediate. For \(|d| \ne 0\), set \(\overline{S}_{\varepsilon }= \sigma _{\varepsilon }^{T}(x)S \sigma _{\varepsilon }(x) + A_{\varepsilon }(x,d)\) and recall that

    $$\begin{aligned} \lambda _{max}(M) = \max _{|a|=1} <M a , a> \ \ \text{ for } \text{ any } \ M \in Sym(N); \end{aligned}$$

    choosing \(a = \frac{\sigma _{\varepsilon }(x)d}{|\sigma _{\varepsilon }(x)d|}\) we observe that

    $$\begin{aligned} \lambda _{max} \bigg ( \frac{(p-1)}{z} (\sigma _{\varepsilon }(x) d) (\sigma _{\varepsilon }(x) d)^{T} + \overline{S}_{\varepsilon } \bigg ) \ge \frac{(p-1)}{z} |\sigma _{\varepsilon }(x) d|^{2} + \bigg < \overline{S}_{\varepsilon } \frac{\sigma _{\varepsilon }(x) d}{|\sigma _{\varepsilon }(x) d|} , \frac{\sigma _{\varepsilon }(x) d}{|\sigma _{\varepsilon }(x) d|} \bigg > \end{aligned}$$

    and we obtain immediately the inequality. Set \(z= V_{p}^{\varepsilon }(t_{p}, x_{p})>0\), \(d=D \phi (t_{p} , x_{p})\), \(S= D^{2} \phi (t_{p}, x_{p})\); then, taking the \(\limsup \) of (5.3) as \(p \rightarrow \infty \) and recalling that, by definition, \((H_{\varepsilon })^{*} \ge (H_{\varepsilon })_{*}\), we obtain

    $$\begin{aligned} 0 \ge - \phi _{t} + (H_{\varepsilon })_{*}(x, D\phi ,D^{2}\phi ) \end{aligned}$$

    at \((t_{0}, x_{0})\). The result follows immediately. \(\square \)

Now, in order to conclude the proof of the main theorem of this section, we need a further lemma.

Lemma 5.10

Let us consider \(0<\varepsilon <1\) fixed. For any \(x \in \mathbb {R}^{N}\), \(V^{\sharp , \varepsilon }(T,x) \le g(x)\).

Proof

By contradiction, assume that there exists a point \(x_{0}\) such that \(V^{\sharp , \varepsilon }(T,x_{0}) \ge g(x_{0}) + \delta \) for some \(\delta >0\). We use as test function

$$\begin{aligned} \phi (t,x) = \alpha (T-t) + \beta |x - x_{0}|^{2} + g(x_{0}) + \frac{\delta }{2} \end{aligned}$$

with \(\alpha > -C \beta \), where C is a constant depending just on the data of the problem and on the point \(x_{0}\), and \(\beta >1\) is sufficiently large. We remark that

$$\begin{aligned} \phi _{t}(t,x) = \alpha , \ \ \ \ D\phi (t,x)= 2 \beta (x-x_{0}) , \ \ \ \ D^{2} \phi (t,x) = 2 \beta Id. \end{aligned}$$

We can find a sequence \((t_{k} , x_{k}) \rightarrow (T, x_{0})\) and \(p_{k} \rightarrow \infty \) as \(k \rightarrow \infty \) such that \(V^{\varepsilon }_{p_{k}} - \phi \) has a positive local maximum at some point \((s_{k} , y_{k})\), for any \(k>1\). To obtain the contradiction we use the fact that \(V^{\varepsilon }_{p_{k}}\) solves Eq. (5.2), which will yield \(\alpha + C \beta \le 0\). We observe that the functions \(V^{\varepsilon }_{p}\) are bounded uniformly in p (with \(\varepsilon \) fixed), so, by the growth of the term \(\beta |x-x_{0}|^{2}\), the maximum points satisfy \(y_{k} \in \overline{B_{R}(x_{0})}=: K\) with R independent of k. At the point \((s_{k}, y_{k})\) it holds true

$$\begin{aligned} 0 \ge&\alpha - H_{\varepsilon }(y_{k}, D \phi , (p-1) \phi ^{-1} D \phi (D \phi )^{T} + D^{2} \phi ) \\ \ge&\alpha - 2 \beta Tr (\sigma _{\varepsilon }(y_{k}) \sigma ^{T}_{\varepsilon }(y_{k}) + A_{\varepsilon }(y_{k}, y_{k} - x_{0})) \\&+ 2 \beta \lambda _{min} (\sigma _{\varepsilon }(y_{k}) \sigma ^{T}_{\varepsilon }(y_{k}) + A_{\varepsilon }(y_{k}, y_{k} - x_{0})). \end{aligned}$$

Then, recalling that \(y_{k} \in K\) for all k with K compact, by continuity we get \(0 \ge \alpha + C \beta \), with

$$\begin{aligned} C=&- \max _{x \in K} Tr(\sigma _{\varepsilon }(x) \sigma _{\varepsilon }^{T}(x)) - \max _{x \in K} Tr(A_{\varepsilon }(x, x-x_{0})) \\&+ \min _{x \in K}\lambda _{min}(\sigma _{\varepsilon }(x) \sigma _{\varepsilon }^{T}(x)) + \min _{x \in K} \lambda _{min}(A_{\varepsilon }(x, x-x_{0})); \end{aligned}$$

with this estimate we obtain the contradiction, since \(\alpha > - C \beta \) by assumption, i.e. the thesis. \(\square \)

We conclude this paper by remarking that the solution is, in particular, continuous, since the comparison principle holds true.

Corollary 5.11

Let us consider \(0<\varepsilon <1\) fixed. Let \(g: \mathbb {R}^{N} \rightarrow \mathbb {R}\) be a bounded and Hölder continuous function, \(T>0\) and \(\sigma _{\varepsilon }(x)\) an \(N \times N\) matrix as in Theorem 5.4. Since the comparison principle holds (see [14]), the value function \(V^{\varepsilon }(t,x)\) is the unique continuous viscosity solution of the level set Eq. (4.3), satisfying \(V^{\varepsilon }(T,x)=g(x)\).

Proof

We have already shown that \(V^{*, \varepsilon }(t,x)=V^{\sharp , \varepsilon }(t,x)\) is a viscosity subsolution while \(V_{*}^{\varepsilon }(t,x)=V^{\varepsilon }(t,x)\) is a viscosity supersolution of (4.3) with terminal condition g. By Lemma 5.10 we know that \(V^{\sharp , \varepsilon }(T,x)\le g(x)\) and \(V^{\varepsilon }(T,x)=g(x)\), so, by the comparison principle, it holds \(V^{\sharp , \varepsilon }(t,x)\le V^{\varepsilon }(t,x)\). By definition of \(\limsup \) we have \(V^{\sharp , \varepsilon }(t,x)\ge V^{\varepsilon }(t,x)\), i.e. \(V^{\varepsilon }(t,x)\) is upper semicontinuous. Since \(V^{\varepsilon }(t,x)\) is also lower semicontinuous we can conclude immediately that \(V^{\varepsilon }(t,x)\) is continuous. \(\square \)