A stochastic representation for the solution of approximated mean curvature flow

The evolution by horizontal mean curvature flow (HMCF) is a partial differential equation in a sub-Riemannian setting with applications in IT and neurogeometry [see Citti et al. (SIAM J Imag Sci 9(1):212–237, 2016)]. Unfortunately this equation is difficult to study, since the horizontal normal is not always well defined. To overcome this problem the Riemannian approximation was introduced. In this article we obtain a stochastic representation of the solution of the approximated Riemannian mean curvature flow and we prove that it is a viscosity solution of the approximated mean curvature flow equation, generalizing the result of Dirr et al. (Commun Pure Appl Anal 9(2):307–326, 2010).


Introduction
The evolution by mean curvature flow (MCF) has been studied extensively and has many applications in image processing and neurogeometry (see e.g. [5,6] for further details). We say that a hypersurface evolves by MCF if it contracts in the normal direction with normal velocity proportional to its mean curvature, see e.g. [9]. It is well known that this evolution may develop singularities in finite time in the Euclidean and Riemannian settings (as in the case of the dumbbell, see [9] for further details). To deal with such singularities, several notions of generalized solution have been developed. In particular, in 1991, Chen et al. [4] and, independently, Evans and Spruck [10] introduced the so-called level set approach, which consists in studying the evolving hypersurfaces as level sets of (viscosity) solutions of suitable associated nonlinear PDEs. We are interested in a degenerate version of this evolution, namely the evolution by horizontal mean curvature flow (HMCF) and its approximation, the approximated Riemannian mean curvature flow. The HMCF is, informally, the MCF defined in a suitable way in a sub-Riemannian geometry. A sub-Riemannian geometry is a degenerate manifold where the metric is defined only along the fibers of a subbundle of the tangent bundle. More specifically, we take smooth vector fields X_1, ..., X_m on the manifold R^N and a metric g defined along the fibers of the distribution H generated by these vector fields. Then it is possible to define intrinsic derivatives of any order by taking derivatives along the vector fields X_1, ..., X_m and, as a direct consequence, to define operators such as the horizontal Laplacian or the horizontal divergence. This sub-Riemannian geometry can be approximated by a Riemannian one by completing the basis of vector fields {X_1, ..., X_m} with N − m vector fields X^ε_{m+1}, ..., X^ε_N which depend on a parameter ε > 0. This basis is orthonormal w.r.t. a suitable metric g_ε. This approximation is known as the Riemannian approximation.
In this paper we will study a stochastic representation of the viscosity solution (see [7,12] for further details) of the approximated mean curvature flow, i.e. we will use a suitable stochastic optimal control problem in order to obtain the viscosity solution of the approximated mean curvature flow. A connection between some geometric evolution equations and some stochastic control problems was found independently by Buckdahn, Cardaliaguet and Quincampoix in [2] and by Soner and Touzi in [16,17] in 2001 (see also [18] for further remarks on this topic). Roughly speaking, the increments of the stochastic process are constrained by the control to a lower dimensional subspace of R^N, while the cost functional depends only on the terminal cost. However, we have to consider an essential supremum and not, as in the standard control problem, an expectation over the probability space. It is possible to show that the value function of this stochastic optimal control problem solves (in the viscosity sense) the level set equation associated with the geometric evolution. Furthermore, it is possible to prove that the set of points from which the initial hypersurface can be reached almost surely in a given time, by choosing an appropriate control, coincides with the set evolving by mean curvature flow. This stochastic approach can be generalized to a class of sub-Riemannian geometries satisfying a weak regularity condition (the so-called Hörmander condition) by using an intrinsic Brownian motion associated with the sub-Riemannian geometry, see Dirr, Dragoni and von Renesse in [8]. In the Euclidean setting the stochastic dynamics can be expressed using the Itô integral, while in the sub-Riemannian case we have to use the Stratonovich integral.
In the latter case the dynamics is far more complex because, informally, it has a deterministic part (related to first order derivatives induced by the chosen geometry) and a stochastic one (related to some second order derivatives induced by the chosen geometry). Our aim is to extend the result obtained in [8] to the approximated Riemannian mean curvature flow, for ε > 0 fixed. The paper is organised as follows: in Sect. 2 we define some preliminary concepts about sub-Riemannian geometries, in Sect. 3 we introduce the horizontal mean curvature flow, in Sect. 4 we approximate it using a Riemannian approximation and, finally, in Sect. 5 we find a stochastic representation of the solution of the approximated mean curvature flow.

Preliminaries
We recall some geometrical definitions which will be crucial for defining the evolution by HMCF. For more definitions and properties about sub-Riemannian geometries we refer to [15] and also [1] for the particular case of Carnot groups.
Given the family X = {X_1, ..., X_m}, set L^(1) = X; the associated Lie algebra L(X) is the set of all iterated brackets between the vector fields of the family. The definition of the Hörmander condition below is crucial in order to work with PDEs in a sub-Riemannian setting, because it allows us to recover the whole tangent space at every point.

Definition 2.3. (Hörmander condition)
Let M be a smooth manifold and H a distribution defined on M. We say that the distribution is bracket generating if and only if, at any point, the Lie algebra L(X) spans the whole tangent space. We say that a sub-Riemannian geometry satisfies the Hörmander condition if and only if the associated distribution is bracket generating.

For later use we also introduce the matrix associated to the vector fields X_1, ..., X_m, which is the N × m matrix σ(x) whose columns are the coefficients of the vector fields, i.e. σ(x) = [X_1(x), ..., X_m(x)]. In general, for Carnot-type geometries, the matrix σ assumes a block structure whose first m rows form the m × m identity matrix.

Example. (The Heisenberg group) The most significant sub-Riemannian geometry is the so-called Heisenberg group. For a formal definition of the Heisenberg group and the connection between its structure as a non-commutative Lie group and its manifold structure we refer to [1]. Here we simply introduce the 1-dimensional Heisenberg group as the sub-Riemannian structure induced on R^3 by two vector fields X_1, X_2, whose coefficients form the columns of the associated matrix σ. These vector fields satisfy the Hörmander condition: indeed X_1, X_2 and the bracket [X_1, X_2] span R^3 at every point. The previous structure, which applies to a large class of geometries, allows us to consider an easy and explicit Riemannian approximation.
Let us consider a distribution H spanned by the Carnot-type vector fields {X_1, ..., X_m} defined on R^N with m < N and satisfying the Hörmander condition. It is possible to complete the distribution H by adding N − m vector fields X_{m+1}, ..., X_N in order to construct an orthogonal basis of R^N at every point. The geometry induced, for all ε > 0, by the distribution spanned by {X_1, ..., X_m, εX_{m+1}, ..., εX_N} is called the Riemannian approximation of our starting sub-Riemannian geometry. We remark that the associated basis is composed of orthonormal vector fields w.r.t. the approximated Riemannian metric g_ε. The associated matrix is the N × N matrix σ^ε(x) whose columns are X_1(x), ..., X_m(x), εX_{m+1}(x), ..., εX_N(x). We remark that det(σ^ε(x)) ≠ 0. We note that, in the case of Carnot-type geometries, we can always choose X_i = e_i for i = m+1, ..., N, where by e_i we indicate the standard Euclidean unit vector with 1 at the i-th component.
Example. (Riemannian approximation of H^1) In the case of the Heisenberg group introduced in the previous example, the matrix σ^ε associated to the Riemannian approximation at a point x = (x_1, x_2, x_3) is obtained by appending the column εe_3 to σ(x). Remark 2.8. This technique is called Riemannian approximation since, as ε → 0^+, the geometry induced by the Riemannian approximation converges, in the sense of Gromov–Hausdorff (see [13] for further details), to the original sub-Riemannian geometry (as shown, for example, in [5]).
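To make the construction concrete, the following self-contained Python sketch encodes the Heisenberg vector fields under one common symmetric convention, X_1 = (1, 0, −x_2/2) and X_2 = (0, 1, x_1/2) (other sign conventions appear in the literature), and checks numerically that the bracket [X_1, X_2] recovers the missing direction e_3 (Hörmander condition) and that det(σ^ε(x)) = ε ≠ 0 for every ε > 0.

```python
# Heisenberg vector fields on R^3 (assumed symmetric convention; other
# sign conventions differ only by signs and scaling).
def X1(x): return [1.0, 0.0, -x[1] / 2.0]
def X2(x): return [0.0, 1.0,  x[0] / 2.0]

def lie_bracket(F, G, x, h=1e-6):
    """[F, G](x) = DG(x) F(x) - DF(x) G(x), via central differences
    (exact up to rounding here, since the fields are affine in x)."""
    def jac_vec(H, x, v):
        xp = [x[k] + h * v[k] for k in range(3)]
        xm = [x[k] - h * v[k] for k in range(3)]
        return [(H(xp)[i] - H(xm)[i]) / (2 * h) for i in range(3)]
    DG_F = jac_vec(G, x, F(x))
    DF_G = jac_vec(F, x, G(x))
    return [DG_F[i] - DF_G[i] for i in range(3)]

def sigma_eps(x, eps):
    """N x N matrix of the Riemannian approximation: columns X1, X2, eps*e3."""
    c1, c2 = X1(x), X2(x)
    return [[c1[0], c2[0], 0.0],
            [c1[1], c2[1], 0.0],
            [c1[2], c2[2], eps]]

def det3(M):
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

x = [0.3, 0.7, 0.2]
bracket = lie_bracket(X1, X2, x)   # ~ (0, 0, 1): spans the missing direction
d = det3(sigma_eps(x, 0.1))        # = eps, nonzero for every eps > 0
```

The determinant equals ε at every point, which is the quantitative form of the remark that σ^ε is invertible while the sub-Riemannian matrix σ is not even square.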

Horizontal mean curvature evolution
Given a smooth hypersurface Γ, we indicate by n E (x) the standard (Euclidean) normal to the hypersurface Γ at the point x. The following definitions will be key for this paper (see [8] for further details).
Definition 3.1. Given a smooth hypersurface Γ, the horizontal normal at x ∈ Γ is the renormalized projection of the Euclidean normal on the horizontal space H_x. With an abuse of notation we will often indicate by n_0(x) the associated m-valued vector of its coordinates w.r.t. the vector fields X_1, ..., X_m. The main difference between the horizontal normal and the Euclidean normal is that the former may fail to exist even for smooth hypersurfaces: at some points the horizontal normal is not defined while the Euclidean one exists. These points are called characteristic points. Definition 3.2. Given a smooth hypersurface Γ, a characteristic point occurs whenever n_E(x) is orthogonal to the horizontal plane H_x, so that its projection on that subspace vanishes, i.e. proj_{H_x} n_E(x) = 0.
We recall that, for every smooth hypersurface, the mean curvature at the point x ∈ Γ is defined as the Euclidean divergence of the Euclidean normal at that point. Similarly, for every smooth hypersurface, we introduce the horizontal mean curvature.

Definition 3.3. Given a smooth hypersurface Γ and a non-characteristic point
x ∈ Γ, the horizontal mean curvature is defined as the horizontal divergence of the horizontal normal, i.e. k_0(x) = div_H n_0(x), where n_0(x) is the m-valued vector associated to the horizontal normal (see (3.1)) while div_H is the divergence w.r.t. the vector fields X_1, ..., X_m, i.e. div_H n_0(x) = Σ_{i=1}^m X_i(n_0^i)(x).
Obviously the horizontal mean curvature is never defined at characteristic points, since there the horizontal normal does not exist. Definition 3.4. Let Γ_t be a family of smooth hypersurfaces in R^N. We say that Γ_t is an evolution by horizontal mean curvature flow of Γ if and only if Γ_0 = Γ and, for any smooth horizontal curve γ with γ(t) ∈ Γ_t, the horizontal normal velocity v_0 is equal to minus the horizontal mean curvature, i.e. v_0(γ(t)) := g_{γ(t)}(γ̇(t), n_0(γ(t))) = −k_0(γ(t)), (3.2) where n_0(γ(t)) and k_0(γ(t)) are respectively the horizontal normal and the horizontal mean curvature defined by Definitions 3.1 and 3.3 at the point γ(t).
We now compute the horizontal normal and the horizontal mean curvature for a smooth hypersurface expressed as a zero level set, i.e. Γ = {x ∈ R^N : u(x) = 0}, for some smooth function u : R^N → R. As in [8], the horizontal normal in the level set formulation may be expressed as n_0(x) = X u(x)/|X u(x)|, and similarly the horizontal mean curvature can be written in terms of X u and (X^2 u)^*, where X u is the horizontal gradient, that is X u = (X_1 u, ..., X_m u), and (X^2 u)^* is the symmetrized horizontal Hessian, that is ((X^2 u)^*)_{ij} = (X_i(X_j u) + X_j(X_i u))/2. As remarked in [8], it is possible to write (3.5) in the form u_t + F(x, Du, D^2 u) = 0 for a suitable function F. We observe that the function F(x, q, S) is well defined and continuous as long as σ^T(x) q ≠ 0, i.e. outside the set V = {(x, q) : σ^T(x) q = 0}. If (x, q) ∈ V, then F is not defined and it cannot be extended continuously. Hence, in order to extend it to the whole space, we have to compute the upper and lower envelopes of F.
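The horizontal gradient X u = σ^T(x) Du is exactly the object that vanishes at characteristic points. A minimal Python illustration, assuming the Heisenberg convention X_1 = (1, 0, −x_2/2), X_2 = (0, 1, x_1/2): for the plane Γ = {x_3 = 0}, i.e. u(x) = x_3, the origin is a characteristic point while away from the x_3-axis the horizontal normal is well defined.

```python
# u(x) = x3, so the Euclidean gradient is Du = (0, 0, 1) everywhere.
def Du(x): return [0.0, 0.0, 1.0]

def horizontal_gradient(x, q):
    """X u = sigma(x)^T q for the Heisenberg fields
    X1 = (1, 0, -x2/2), X2 = (0, 1, x1/2) (assumed convention)."""
    return [q[0] - (x[1] / 2.0) * q[2],   # X1 u
            q[1] + (x[0] / 2.0) * q[2]]   # X2 u

origin = [0.0, 0.0, 0.0]
away   = [1.0, 2.0, 0.0]

Xu_origin = horizontal_gradient(origin, Du(origin))  # [0, 0]: characteristic point
Xu_away   = horizontal_gradient(away, Du(away))      # [-1, 0.5]: normal exists
```

At the origin the Euclidean normal e_3 is orthogonal to the horizontal plane, so the renormalized projection n_0 = X u/|X u| is undefined there, matching Definition 3.2.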
Remark 3.7. Applying Definition 3.5 to the function F as defined in (3.6), we obtain the upper and lower envelopes, with λ_max and λ_min the maximum and minimum eigenvalues of the matrix S.

Approximated Riemannian mean curvature flow
Equation (3.5) can be approximated by a Riemannian mean curvature flow using the Riemannian approximation. This leads to the following generalizations of the definitions of horizontal normal and horizontal divergence.
With an abuse of notation, we will often indicate by n^ε(x) the N-valued vector associated to the approximated Riemannian normal (see (4.1)). The approximated Riemannian mean curvature is then the divergence div_{H^ε} of this vector, where div_{H^ε} is the divergence w.r.t. the vector fields X_1, ..., X_m, εX_{m+1}, ..., εX_N.

Remark 4.3.
In this setting there are no characteristic points on the hypersurface Γ: whenever the Euclidean normal is nonzero, at least one coefficient α_i(x) in (4.1) is nonzero.
We define the approximated Riemannian mean curvature flow by adapting the definition of horizontal mean curvature flow (as stated in Definition 3.4) to the approximated Riemannian case.
As in Sect. 3, let us consider Γ_t = {(x, t) | u(x, t) = 0} where u is C^2. Carrying out the computations following the example of [8] and recalling Definitions 4.1 and 4.2 adapted to the level set formulation as in Sect. 3 of this paper, we obtain that u solves the corresponding level set PDE, where X^ε u is the approximated Riemannian gradient, i.e. X^ε u = (X_1 u, ..., X_m u, εX_{m+1} u, ..., εX_N u).
We observe that we may write the equation in the form u_t + F^ε(x, Du, D^2 u) = 0. Let us remark that the function F^ε, due to the fact that det(σ^ε(x)) ≠ 0 for all x ∈ R^N, is well defined everywhere except for q = 0. This change is crucial to compute the upper and lower envelopes of F^ε.

Remark 4.5.
Applying Definition 3.5 to the function F^ε defined above, we obtain that the upper and lower envelopes are given in terms of λ_max and λ_min, the maximum and minimum eigenvalues of the matrix S^ε. Remark 4.6. Let us remark that, while in the horizontal case the upper (resp. lower) envelope depends also on the sub-Riemannian geometry (i.e. on the set where |σ^T(x) q| > 0), in the approximated Riemannian geometry it depends only on the variable q (since det(σ^ε(x)) ≠ 0 for all x ∈ R^N).
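Whatever the precise sign convention in the envelope formulas, the upper and lower envelopes differ only through λ_max(S) and λ_min(S), so their gap is λ_max(S) − λ_min(S) ≥ 0. A quick sanity check for a symmetric 2 × 2 matrix, using the closed-form eigenvalues (an illustrative sketch, not the paper's notation):

```python
import math

def eig_sym2(a, b, c):
    """Eigenvalues of the symmetric matrix [[a, b], [b, c]] in closed form."""
    mean = (a + c) / 2.0
    r = math.hypot((a - c) / 2.0, b)   # radius of the eigenvalue pair
    return mean - r, mean + r          # (lambda_min, lambda_max)

lam_min, lam_max = eig_sym2(2.0, 1.0, 2.0)  # eigenvalues of [[2,1],[1,2]]: 1 and 3
gap = lam_max - lam_min                     # gap between upper and lower envelopes
```

The two envelopes coincide exactly when S is a multiple of the identity, i.e. when λ_max = λ_min.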

The approximated Riemannian stochastic control problem
Let us consider a family of smooth vector fields X = {X_1, ..., X_m} and its Riemannian approximation X^ε = {X_1, ..., X_m, εX_{m+1}, ..., εX_N}. Definition 4.7. We define the horizontal Brownian motion as the process solving dξ(t) = Σ_{i=1}^m X_i(ξ(t)) ∘ dB_m^i(t), where B_m is an m-dimensional Brownian motion, ∘ denotes the Stratonovich differential and X_i are the vector fields of X which span the distribution H. We define the approximated Riemannian horizontal Brownian motion as the analogous process driven by B_N, where B_N is an N-dimensional Brownian motion and X^ε_i are the vector fields of X^ε which span the distribution H^ε. Let (Ω, F, {F_t}_{t≥0}, P) be a filtered probability space and B^j a j-dimensional Brownian motion adapted to the filtration {F_t}_{t≥0}, with j = m, N. We recall that a predictable process is a time-continuous stochastic process {ξ(t)}_{t≥0} defined on the filtered probability space (Ω, F, {F_t}_{t≥0}, P), measurable with respect to the σ-algebra generated by all left-continuous adapted processes (see [3] and [11] for further details). Given a smooth function g : R^N → R (which parametrizes the starting hypersurface at time t = 0) we introduce ξ^{t,x,ν}, the solution of the controlled SDE (4.7), where the matrix σ is defined in (2.1) and ∘ represents the differential in the sense of Stratonovich. Similarly, for ε > 0 fixed, we define ξ^{t,x,ν_1}_ε as the solution of the SDE (4.10). It is possible to show that the function V as in (4.9) solves in the viscosity sense the level-set equation for the evolution by HMCF (see [8]).
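The horizontal Brownian motion can be simulated directly. Below is a minimal Euler scheme for dξ = Σ_i X_i(ξ) ∘ dB^i on the Heisenberg group, assuming the convention X_1 = (1, 0, −x_2/2), X_2 = (0, 1, x_1/2); for these fields the Itô–Stratonovich correction (1/2)Σ_i (DX_i)X_i vanishes, so the explicit Euler scheme is consistent with the Stratonovich dynamics. Note how the increments are constrained to the 2-dimensional horizontal plane: the third coordinate moves only through the geometry, via the Lévy-area increments.

```python
import random

def X1(x): return [1.0, 0.0, -x[1] / 2.0]
def X2(x): return [0.0, 1.0,  x[0] / 2.0]

def horizontal_bm(T=1.0, n=1000, seed=0):
    """Euler scheme for d xi = X1(xi) dB1 + X2(xi) dB2.
    (Ito = Stratonovich here, since (DXi)Xi = 0 for the Heisenberg fields.)"""
    rng = random.Random(seed)
    dt = T / n
    xi = [0.0, 0.0, 0.0]
    levy_area = 0.0
    for _ in range(n):
        dB1 = rng.gauss(0.0, dt ** 0.5)
        dB2 = rng.gauss(0.0, dt ** 0.5)
        v1, v2 = X1(xi), X2(xi)
        step = [v1[k] * dB1 + v2[k] * dB2 for k in range(3)]
        levy_area += step[2]          # accumulated (x1 dB2 - x2 dB1)/2
        xi = [xi[k] + step[k] for k in range(3)]
    return xi, levy_area

xi, area = horizontal_bm()
# By construction xi[2] equals the accumulated Levy-area increments:
# the vertical coordinate is driven entirely by the horizontal noise.
```

This is the intrinsic Brownian motion underlying the control formulation: the controls in (4.8) and (4.11) then further restrict these increments to a codimension-one subspace.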
Note also that the sets of controls (4.8) and (4.11) may be rewritten in an equivalent form. Remark 4.8. Let us remark that the first equations of the systems (4.7) and (4.10) have a differential in the Stratonovich sense, while the second ones have a differential in the Itô sense. Remark 4.9. Roughly speaking, it is possible to see (4.8) and (4.11) as sets of controls which locally constrain the horizontal Brownian motion and the approximated Riemannian Brownian motion to a tangent space of codimension one (see [2,8] for further details).
Next we introduce the p-regularising approximations of the functions V and V^ε. These functions are the p-approximations of the L^∞ norms appearing in (4.9) and (4.12). Definition 4.10. For p > 1, the p-approximation of the value function (4.9) is defined as V_p, where ξ^{t,x,ν} is as in (4.7) and A is as in (4.8).
Similarly, we introduce the following ε-p-regularising function, that is the p-value function associated to the value function (4.12), where ξ^{t,x,ν_1}_ε is as in (4.10) and A_1 is as in (4.11).
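The point of the p-regularisation is that L^p norms increase to the essential supremum as p → ∞, which is what makes these functions approximations of V and V^ε. A toy numerical illustration with a uniform probability on three hypothetical terminal-cost outcomes (illustrative values only):

```python
def p_norm(values, p):
    """(E[|g|^p])^(1/p) under the uniform distribution on the given outcomes."""
    return (sum(abs(v) ** p for v in values) / len(values)) ** (1.0 / p)

outcomes = [0.2, 0.5, 0.9]               # hypothetical terminal costs g(xi(T))
sup_norm = max(abs(v) for v in outcomes)  # the L-infinity value

approx = [p_norm(outcomes, p) for p in (1, 10, 200)]
# approx increases monotonically toward sup_norm = 0.9 as p grows
```

For p = 200 the p-norm already agrees with the supremum to within about half a percent, which is the pointwise convergence exploited later in the proof of Lemma 5.7.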
Definition 4.11. The Hamiltonian H(x, q, S) associated to the horizontal stochastic optimal control problem (4.7) is defined by a supremum over the controls, where σ is defined as in (2.1), q ∈ R^N, S ∈ Sym(N) and A is as in (4.8).
Definition 4.12. The Hamiltonian H^ε(x, q, S) associated to the approximated Riemannian stochastic optimal control problem (4.10) is defined by the analogous supremum over the controls, where σ^ε is defined as in (2.2), q ∈ R^N, S ∈ Sym(N) and A_1 is as in (4.11).
Remark 4.13. The function V_p solves in the viscosity sense the PDE (4.16) (see [2] for further details). Remark 4.14. Similarly to Remark 4.13, for ε > 0 and p > 1 fixed, the function V^ε_p solves in the viscosity sense the PDE (4.18), where A_1 is given in (4.11), for all q ∈ R^N and M = (M_{ij})_{i,j=1}^N ∈ Sym(N).

V ε as viscosity solution
In this last section we prove the main result of this paper. Before doing so, we have to introduce some technical lemmas.

Lemma 5.1. (Comparison Principle)
Let us consider 0 < ε < 1 fixed. Let g_1, g_2 be uniformly continuous functions on [0, T] × R^N with g_1 ≤ g_2 and let V^ε_i(t, x), for i = 1, 2, be defined as in (4.12) with terminal costs g_i. Then it holds V^ε_1(t, x) ≤ V^ε_2(t, x). Proof. It follows from the assumption g_1 ≤ g_2 and from the monotonicity of the infimum and the essential supremum. Furthermore, we will show that V^ε(t, x) solves (in the viscosity sense) the level set equation for the evolution by approximated Riemannian mean curvature flow for a fixed 0 < ε < 1.
We state now the main theorem of the paper. In order to prove Theorem 5.4 we have to introduce the half-relaxed upper limit, prove some preliminary lemmas and theorems and, at the end, verify that the terminal condition is satisfied.
This lemma allows us to use the definition of the upper half-relaxed limit instead of the definition of the upper envelope. Proof. We observe that V̄^ε ≥ V^ε and V̄^ε is an upper semicontinuous function. Then, since V^{*,ε} is the smallest upper semicontinuous function above V^ε, it holds V̄^ε ≥ V^{*,ε}. On the other hand, recalling that V^ε_p(t, x) ≤ V^ε(t, x) for any t, x and any p > 1, with ε > 0 fixed, taking the lim sup in t, x and p we obtain V̄^ε ≤ V^{*,ε}, and as a consequence the result follows. Another important observation concerns the L^p-norm related to V^ε(t, x), i.e. V^ε_p(t, x) as in Definition 4.10. We obtain the following result for 0 < ε < 1 fixed. Lemma 5.7. Let us consider 0 < ε < 1 fixed. Under the assumptions of Theorem 5.4, we have V^ε_p(t, x) → V^ε(t, x) as p → ∞. The convergence is pointwise.
Proof. As the L^p norms are bounded by the essential supremum and increasing in p, we immediately obtain, for each fixed control and ε > 0, lim sup_{p→∞} V^ε_p(t, x) ≤ V^ε(t, x). The opposite inequality is proved as in [8]. Let us consider q ≥ 1; then by the property of the infimum we can find an almost optimal control ν_q. The controlled SDE (4.10) has a drift part which depends on the control only through ν_1^2 (we recall that by assumption ε > 0 is fixed) and our control set is convex in ν_1^2. Proceeding as in [8], we obtain that there exists a probability space (Ω, F, {F_t}_{t≥0}, P, B_N, ν_1) such that, for a subsequence q_k, the process ξ^{t,x,ν_{1,q_k}}_ε converges weakly to ξ^{t,x,ν_1}_ε, so that for any fixed q ≥ 1 the limit lim_{k→∞} E[g(ξ^{t,x,ν_{1,q_k}}_ε(T))^q] can be passed to. Finally, using the convergence of the L^q norm to the L^∞ norm, we obtain the opposite inequality. In order to prove that V^ε is a viscosity solution of the approximated Riemannian mean curvature flow we have to recall a further lemma, involving, for any a ∈ R^m with |a| = 1, the eigenvector associated to λ_max(S).
Theorem 5.4 is a consequence of the following theorem.