1 Introduction

The aim of this paper is to study the critical points of an area functional for submanifolds of given degree immersed in an equiregular graded manifold. This can be defined as the structure \((N,{\mathcal {H}}^1,\ldots ,{\mathcal {H}}^s)\), where N is a smooth manifold and \({\mathcal {H}}^1\subset {\mathcal {H}}^2\subset \cdots \subset {\mathcal {H}}^s=TN\) is a flag of sub-bundles of the tangent bundle satisfying \([{\mathcal {H}}^i,{\mathcal {H}}^j]\subset {\mathcal {H}}^{i+j}\) when \(i,j\geqslant 1\) and \(i+j\leqslant s\), and \([{\mathcal {H}}^i,{\mathcal {H}}^j]\subset {\mathcal {H}}^s\) when \(i,j\geqslant 1\) and \(i+j>s\). The considered area depends on the degree of the submanifold. The concept of pointwise degree for a submanifold M immersed in a graded manifold was first introduced by Gromov [28] as the homogeneous dimension of the tangent flag given by

$$\begin{aligned} T_p M \cap {\mathcal {H}}_p^1 \subset \cdots \subset T_p M \cap {\mathcal {H}}_p^s. \end{aligned}$$

The degree \(\deg (M)\) of a submanifold is the maximum of the pointwise degree among all points of M. An alternative way of defining the degree is the following: on an open neighborhood of a point \(p \in N\) we can always consider a local basis \((X_1,\ldots ,X_n)\) adapted to the filtration \(({\mathcal {H}}^i)_{i=1,\ldots ,s}\), so that each \(X_j\) has a well-defined degree. Following [36], the degree of a simple m-vector \(X_{j_1}\wedge \ldots \wedge X_{j_m}\) is the sum of the degrees of the vector fields of the adapted basis appearing in the wedge product. Since an m-vector tangent to M can be written in terms of the simple m-vectors of the adapted basis, the pointwise degree is the maximum of the degrees of the simple m-vectors appearing with a non-vanishing coefficient.
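In computational terms, the pointwise degree depends only on which coefficients with respect to the adapted basis are non-zero. The following sketch illustrates this; the helper functions and data are ours, not part of the paper:

```python
def degree_of_simple_mvector(multi_index, basis_degrees):
    """Degree of X_{j_1} ^ ... ^ X_{j_m}: sum of the degrees of its factors."""
    return sum(basis_degrees[j] for j in multi_index)

def pointwise_degree(coefficients, basis_degrees):
    """Pointwise degree: maximal degree of a simple m-vector whose
    coefficient lambda_J is non-zero.

    coefficients: dict mapping ordered multi-indices (tuples) to scalars.
    basis_degrees: basis_degrees[j] is the degree of the adapted field X_j.
    """
    return max(degree_of_simple_mvector(J, basis_degrees)
               for J, c in coefficients.items() if c != 0)

# Toy example in the first Heisenberg group: adapted basis (X, Y, Z)
# with degrees (1, 1, 2).  The 2-vector X ^ Y + 3 X ^ Z mixes a simple
# 2-vector of degree 2 with one of degree 3, so its degree is 3.
print(pointwise_degree({(0, 1): 1.0, (0, 2): 3.0}, [1, 1, 2]))
```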

We consider a Riemannian metric \(g=\langle \cdot ,\cdot \rangle \) on N. For any \(p\in N\), we get an orthogonal decomposition \(T_pN={\mathcal {K}}_p^1\oplus \ldots \oplus {\mathcal {K}}_p^s\). Then we apply to g a dilation induced by the grading, which means that, for any \(r>0\), we take the Riemannian metric \(g_r\) making the subspaces \({\mathcal {K}}_p^i\) orthogonal and such that

$$\begin{aligned} g_r|_{{\mathcal {K}}^i}=\frac{1}{r^{i-1}}\,g|_{{\mathcal {K}}^i} \; . \end{aligned}$$

Whenever \({\mathcal {H}}^1\) is a bracket generating distribution, the structure \((N, g_r)\) converges in the Gromov-Hausdorff sense to the sub-Riemannian structure \((N,{\mathcal {H}}^1, g_{|{\mathcal {H}}^1})\) as \(r \rightarrow 0\). An immersed submanifold \(M\subset N\) of degree d has Riemannian area \(A(M,g_r)\) with respect to each metric \(g_r\). We define the area measure \(A_d\) of degree d by

$$\begin{aligned} A_d(M):=\lim _{r\downarrow 0 }\ r^{(\deg (M)- \dim (M))/2} A(M,g_r) \end{aligned}$$
(1.1)

when this limit exists and is finite. In (3.7) we stress that the area measure \(A_d\) of degree d is given by the integral of the norm of the g-orthogonal projection of the unit tangent m-vector of M onto the subspace of m-vectors of degree d. This area formula was provided in [35, 36] for \(C^1\) submanifolds immersed in Carnot groups and in [19] for intrinsic regular submanifolds in the Heisenberg groups.

Given a submanifold \(M\subset N\) of degree d immersed into a graded manifold \((N,({\mathcal {H}}^i)_{i})\), we wish to compute the Euler–Lagrange equations for the area functional \(A_d\). The problem has been intensively studied for hypersurfaces, and results appeared in [2, 8, 9, 12, 15, 16, 22, 30, 31, 33, 37, 46, 48]. For submanifolds of codimension greater than one in a sub-Riemannian structure, only the case of curves has been studied. In particular, it is well known that there exist minimizers of the length functional which are not solutions of the geodesic equation: these curves, discovered by Montgomery in [38, 39], are called abnormal geodesics. In this paper we recognize that a similar phenomenon can arise when studying the first variation of the area for surfaces immersed in a graded structure: there are isolated surfaces which do not admit degree-preserving variations. Consequently we focus on smooth submanifolds of fixed degree and on admissible variations, which preserve the degree. The admissible vector field \(V= \frac{\partial \Gamma _t}{\partial t }\big |_{t=0}\) associated to an admissible variation \(\Gamma _t\) satisfies the first-order system of partial differential equations (5.3) on M. We are thus led to the central question of characterizing the admissible vector fields that are associated to admissible variations.

The analogous integrability problem for geodesics in sub-Riemannian manifolds and, more generally, for functionals whose domain of definition consists of integral curves of an exterior differential system, was posed by Cartan [7] and studied by Griffiths [26], Bryant [3] and Hsu [32]. These one-dimensional problems have been treated by considering a holonomy map [32] whose surjectivity defines a regularity condition implying that any vector field satisfying the system (5.3) is integrable. In higher dimensions there does not seem to be an acceptable generalization of such a holonomy map. However, an analysis of Hsu's regularity condition led the authors to introduce a weaker condition, named strong regularity, in [11]. This condition generalizes to higher dimensions and provides a sufficient condition for the local integrability of any admissible vector field on M, see Theorem 7.2. In this setting the admissibility system (5.3) is given in coordinates by

$$\begin{aligned} \sum _{j=1}^m C_j({\bar{p}}) E_j(F)({\bar{p}})+ B({\bar{p}})F({\bar{p}})+ A({\bar{p}})G({\bar{p}})=0, \end{aligned}$$
(1.2)

where \(C_j, B, A\) are matrices, F collects the vertical components of the admissible vector field, G the horizontal control components, and \({\bar{p}} \in M\). Since strong regularity guarantees that the matrix \(A({\bar{p}})\) has full rank, we can locally express part of the controls in terms of the vertical components and the remaining controls; applying the Implicit Function Theorem we then produce admissible variations.
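Pointwise, once the matrices and the vertical data in (1.2) are fixed, solving for the controls is plain linear algebra. The following numerical sketch uses made-up matrices of toy sizes, purely to illustrate how full rank of \(A({\bar{p}})\) yields the controls:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes: m tangent directions, k equations, g control components.
m, k, g = 2, 3, 3

# Hypothetical data at a point: these stand in for C_j, E_j(F), B, F, A in
# the admissibility system  sum_j C_j E_j(F) + B F + A G = 0.
C = [rng.standard_normal((k, k)) for _ in range(m)]
EjF = [rng.standard_normal(k) for _ in range(m)]
B = rng.standard_normal((k, k))
F = rng.standard_normal(k)
A = rng.standard_normal((k, g))  # full rank with probability one

# Part of the system independent of the controls.
b = sum(Cj @ EjFj for Cj, EjFj in zip(C, EjF)) + B @ F

# Full rank of A guarantees a solution G of  A G = -b;
# lstsq returns the minimal-norm one when there are more controls than equations.
G, *_ = np.linalg.lstsq(A, -b, rcond=None)

residual = b + A @ G
print(np.allclose(residual, 0.0))
```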

In Remark 7.6 we observe that our definition of strongly regular immersion generalizes the notion of regular horizontal immersion introduced in [28]: an immersion into the horizontal distribution whose degree coincides with the topological dimension m. In [27], see also [43], the author proves a deformability theorem for regular horizontal immersions by means of Nash's Implicit Function Theorem [41]. Our result is in the same spirit, but for immersions of general degree.

For strongly regular submanifolds it is possible to compute the Euler–Lagrange equations and obtain a sufficient condition for stationary points of the area \(A_d\) of degree d. This naturally leads to a notion of mean curvature, which in general is not a second-order differential operator, but can be of order three. This behavior does not show up in the one-dimensional case, where the geodesic equations for regular curves have order at most two, see [11, Theorem 7.2] or [32, Theorem 10].

These tools can be applied to mathematical models of perception in the visual cortex: Citti and Sarti [12] showed that two-dimensional minimal surfaces in the three-dimensional sub-Riemannian manifold SE(2) play an important role in the completion process of images, taking orientation into account. Adding curvature to the model, a four-dimensional Engel structure arises, see § 1.5.1.4 in [17, 45] and § 4.3 here. The previous 2D surfaces, lifted to this structure, are codimension-two, degree-four strongly regular surfaces in the sense of our definition. On the other hand, we are able to show that there are isolated surfaces which do not admit degree-preserving variations. Indeed, in Example 7.8 we exhibit an isolated plane, immersed in the Engel group, whose only admissible normal vector field is the trivial one. Moreover, in analogy with the one-dimensional result of [4], Proposition 7.9 shows that this isolated plane is rigid in the \(C^1\) topology, so that it is a local minimum of the area functional. We thus recognize that a phenomenon similar to the existence of abnormal curves can arise in higher dimensions. Finally, we conjecture that a bounded open set \(\Omega \) contained in this isolated plane is a global minimum among all immersed surfaces sharing the same boundary \(\partial \Omega \).

We have organized this paper into several sections. In the next one, notation and basic concepts, such as graded manifolds, Carnot manifolds and the degree of a submanifold, are introduced. In Sect. 3 we define the area of degree d for submanifolds of degree d immersed in a graded manifold \((N,{\mathcal {H}}^i)\) endowed with a Riemannian metric. This is done as a limit of Riemannian areas. In addition, an integral formula for this area in terms of a density is given in formula (3.6). Section 4 is devoted to providing examples of submanifolds of certain degrees and the associated area functionals. In Sects. 5 and 6 we introduce the notions of admissible variation, admissible vector field and integrable vector field, and we study the system of first-order partial differential equations defining the admissibility of a vector field. In particular, we show in § 6.2 that the admissibility condition for vector fields is independent of the Riemannian metric. In Sect. 7 we give the notion of a strongly regular submanifold of degree d, see Definition 7.1. Then we prove in Theorem 7.2 that the strong regularity condition implies that any admissible vector field is integrable. In addition, we exhibit in Example 7.8 an isolated plane whose only admissible normal vector field is the trivial one. Finally, in Sect. 8 we compute the Euler–Lagrange equations of a strongly regular submanifold and give some examples.

2 Preliminaries

Let N be an n-dimensional smooth manifold. Given two smooth vector fields X, Y on N, their commutator or Lie bracket is defined by \([X,Y]:=XY-YX\). An increasing filtration \(({\mathcal {H}}^i)_{i\in {{\mathbb {N}}}}\) of the tangent bundle TN is a flag of sub-bundles

$$\begin{aligned} {\mathcal {H}}^1\subset {\mathcal {H}}^2\subset \cdots \subset {\mathcal {H}}^i\subset \cdots \subseteq TN, \end{aligned}$$
(2.1)

such that

  1. (i)

    \( \cup _{i \in {{\mathbb {N}}}} {\mathcal {H}}^i= TN\)

  2. (ii)

    \( [{\mathcal {H}}^{i},{\mathcal {H}}^{j}] \subseteq {\mathcal {H}}^{i+j},\) for \( i,j \geqslant 1\),

where \( [{\mathcal {H}}^i,{\mathcal {H}}^j]:=\{[X,Y] : X \in {\mathcal {H}}^i,Y \in {\mathcal {H}}^j\}\). Moreover, we say that an increasing filtration is locally finite when

  1. (iii)

    for each \(p \in N\) there exists an integer \(s=s(p)\), the step at p, satisfying \({\mathcal {H}}^s_p=T_p N\). Then we have the following flag of subspaces

    $$\begin{aligned} {\mathcal {H}}^1_p\subset {\mathcal {H}}^2_p\subset \cdots \subset {\mathcal {H}}^s_p=T_p N. \end{aligned}$$
    (2.2)

A graded manifold \((N,({\mathcal {H}}^i))\) is a smooth manifold N endowed with a locally finite increasing filtration, namely a flag of sub-bundles (2.1) satisfying (i),(ii) and (iii). For the sake of brevity a locally finite increasing filtration will be simply called a filtration. Setting \(n_i(p):=\dim {{\mathcal {H}}}^i_p \), the integer list \((n_1(p),\cdots ,n_s(p))\) is called the growth vector of the filtration (2.1) at p. When the growth vector is constant in a neighborhood of a point \(p \in N\) we say that p is a regular point for the filtration. We say that a filtration \(({\mathcal {H}}^i)\) on a manifold N is equiregular if the growth vector is constant in N. From now on we suppose that N is an equiregular graded manifold.

Given a vector v in \(T_p N\) we say that the degree of v is equal to \(\ell \) if \(v\in {\mathcal {H}}_p^\ell \) and \(v \notin {\mathcal {H}}_p^{\ell -1}\). In this case we write \(\text {deg}(v)=\ell \). The degree of a vector field is defined pointwise and can take different values at different points.

Let \((N,({\mathcal {H}}^1,\ldots , {\mathcal {H}}^s))\) be an equiregular graded manifold. Take \(p\in N\) and consider an open neighborhood U of p where a local frame \(\{X_{1},\cdots ,X_{n_1}\}\) generating \({\mathcal {H}}^1\) is defined. Clearly the degree of \(X_j\), for \(j=1,\ldots ,n_1\), is equal to one, since the vector fields \(X_1,\ldots ,X_{n_1}\) belong to \({\mathcal {H}}^1\). As the vector fields \(X_1, \ldots ,X_{n_1}\) also lie in \({\mathcal {H}}^2\), we add vector fields \(X_{n_{1}+1},\cdots ,X_{n_2} \in {\mathcal {H}}^2\setminus {\mathcal {H}}^{1} \) so that \((X_1)_p,\ldots ,(X_{n_2})_p\) generate \({\mathcal {H}}^2_p\). Reducing U if necessary, we have that \(X_1,\ldots ,X_{n_2}\) generate \({\mathcal {H}}^2\) in U. Iterating this procedure we obtain a basis of TN in a neighborhood of p

$$\begin{aligned} (X_1,\ldots ,X_{n_1},X_{n_1+1},\ldots ,X_{n_2},\ldots ,X_{n_{s-1}+1}, \ldots ,X_n), \end{aligned}$$
(2.3)

such that the vector fields \(X_{n_{i-1}+1},\ldots ,X_{n_i}\) have degree equal to i, where \(n_0:=0\). The basis obtained in (2.3) is called an adapted basis to the filtration \(({\mathcal {H}}^1,\ldots ,{\mathcal {H}}^s)\).

Given an adapted basis \((X_i)_{1\leqslant i\leqslant n}\), the degree of the simple m-vector field \(X_{j_1}\wedge \ldots \wedge X_{j_m}\) is defined by

$$\begin{aligned} \deg (X_{j_1}\wedge \ldots \wedge X_{j_m}):=\sum _{i=1}^m\deg (X_{j_i}). \end{aligned}$$

Any m-vector X can be expressed as a sum

$$\begin{aligned} X_p=\sum _{J}\lambda _J(p)(X_J)_p, \end{aligned}$$

where \(J=(j_1,\ldots ,j_m)\), \(1\leqslant j_1<\cdots <j_m\leqslant n\), is an ordered multi-index, and \(X_J:=X_{j_1}\wedge \ldots \wedge X_{j_m}\). The degree of X at p with respect to the adapted basis \((X_i)_{1\leqslant i\leqslant n}\) is defined by

$$\begin{aligned} \max \{\deg ((X_J)_p):\lambda _J(p)\ne 0\}. \end{aligned}$$

It can be easily checked that the degree of X is independent of the choice of the adapted basis and it is denoted by \(\deg (X)\).

If \(X=\sum _J\lambda _J X_J\) is an m-vector expressed as a linear combination of simple m-vectors \(X_J\), its projection onto the subset of m-vectors of degree d is given by

$$\begin{aligned} (X)_d=\sum _{\deg (X_J)=d} \lambda _JX_J, \end{aligned}$$
(2.4)

and its projection onto the subset of m-vectors of degree larger than d by

$$\begin{aligned} \pi _d(X)=\sum _{\deg (X_J)\geqslant d+1} \lambda _JX_J. \end{aligned}$$

In an equiregular graded manifold with a local adapted basis \((X_1, \ldots ,X_n)\), defined as in (2.3), the maximal degree that can be achieved by an m-vector, \(m\leqslant n\), is the integer \(d_{\max }^m\) defined by

$$\begin{aligned} d_{\max }^m:=\deg (X_{n-m+1})+\cdots +\deg (X_{n}). \end{aligned}$$
(2.5)
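Since the adapted field \(X_j\) has degree i exactly when \(n_{i-1}<j\leqslant n_i\), the integer \(d_{\max }^m\) can be read off from the growth vector. A small sketch (the helper names are ours):

```python
def basis_degrees(growth_vector):
    """Degrees of an adapted basis (X_1, ..., X_n) from the growth vector
    (n_1, ..., n_s): X_j has degree i when n_{i-1} < j <= n_i."""
    degs, prev = [], 0
    for i, n_i in enumerate(growth_vector, start=1):
        degs += [i] * (n_i - prev)
        prev = n_i
    return degs

def d_max(m, growth_vector):
    """Maximal degree of an m-vector: sum of the m largest basis degrees."""
    return sum(basis_degrees(growth_vector)[-m:])

# First Heisenberg group: growth vector (2, 3), basis degrees (1, 1, 2),
# so a 2-vector has degree at most 1 + 2 = 3.
print(d_max(2, (2, 3)))
```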

2.1 Degree of a submanifold

Let M be a submanifold of class \(C^1\) immersed in an equiregular graded manifold \((N,({\mathcal {H}}^1,\ldots , {\mathcal {H}}^s))\) such that \(\dim (M)=m<n=\dim (N)\). Then, following [34, 36], we define the degree of M at a point \(p\in M\) by

$$\begin{aligned} \deg _M(p):=\deg (v_1\wedge \ldots \wedge v_m), \end{aligned}$$

where \(v_1,\ldots ,v_m\) is a basis of \(T_pM\). Obviously, the degree is independent of the choice of the basis of \(T_pM\). Indeed, if we consider another basis \(\mathcal {B'}=(v_1', \cdots , v_m')\) of \(T_p M\), we get

$$\begin{aligned} v_1 \wedge \cdots \wedge v_m= \det (M_{{\mathcal {B}},\mathcal {B'}}) \ v_1' \wedge \cdots \wedge v_m', \end{aligned}$$

where \(M_{{\mathcal {B}},\mathcal {B'}}\) denotes the change of basis matrix. Since \(\det (M_{{\mathcal {B}},\mathcal {B'}})\ne 0\), we conclude that \(\deg _M(p)\) is well-defined. The degree \(\deg (M)\) of a submanifold M is the integer

$$\begin{aligned} \deg (M):=\max _{p\in M} \deg _{M}(p). \end{aligned}$$

We define the singular set of a submanifold M by

$$\begin{aligned} M_0=\{p \in M : \deg _M(p)<\deg (M) \}. \end{aligned}$$
(2.6)

Singular points can have different degrees between m and \(\deg (M)-1\).

In [28, 0.6.B] Gromov considers the flag

$$\begin{aligned} \tilde{{\mathcal {H}}}_p^1 \subset \tilde{{\mathcal {H}}}_p^2 \subset \cdots \subset \tilde{{\mathcal {H}}}_p^s =T_pM, \end{aligned}$$
(2.7)

where \(\tilde{{\mathcal {H}}}_p^j=T_pM \cap {\mathcal {H}}_p^j \) and \({\tilde{m}}_j=\text {dim}(\tilde{{\mathcal {H}}}_p^j)\). Then he defines the degree at p by

$$\begin{aligned} {\tilde{D}}_H(p)= \sum _{j=1}^s j ({\tilde{m}}_j- {\tilde{m}}_{j-1}), \end{aligned}$$

setting \({\tilde{m}}_{0}=0\). It is easy to check that our definition of degree is equivalent to Gromov's, see [23, Chapter 2.2]. As we already pointed out, \((M,({\tilde{{\mathcal {H}}}}^j)_{j \in {{\mathbb {N}}}})\) is a graded manifold.
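Both descriptions of the degree can be evaluated mechanically from the dimensions \({\tilde{m}}_j\) of the flag (2.7). The helpers below are hypothetical (not from the paper) and illustrate why the two counts agree: a basis of \(T_pM\) adapted to the flag contains exactly \({\tilde{m}}_j-{\tilde{m}}_{j-1}\) vectors of degree j:

```python
def gromov_degree(flag_dims):
    """Gromov's degree: sum_j j * (m_j - m_{j-1}), with m_0 = 0."""
    total, prev = 0, 0
    for j, mj in enumerate(flag_dims, start=1):
        total += j * (mj - prev)
        prev = mj
    return total

def wedge_degree(flag_dims):
    """Degree of v_1 ^ ... ^ v_m for a basis adapted to the flag:
    m_j - m_{j-1} of the v_i have degree j, so the sums coincide."""
    degs, prev = [], 0
    for j, mj in enumerate(flag_dims, start=1):
        degs += [j] * (mj - prev)
        prev = mj
    return sum(degs)

# A surface whose tangent flag has dimensions (1, 2): one direction of
# degree 1 and one of degree 2, so the degree is 1 + 2 = 3.
print(gromov_degree((1, 2)), wedge_degree((1, 2)))
```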

Let us check now that the degree of a vector field and the degree of points in a submanifold are lower semicontinuous functions.

Lemma 2.1

Let \((N,({\mathcal {H}}^1,\ldots , {\mathcal {H}}^s))\) be a graded manifold regular at \(p\in N\). Let V be a vector field defined on an open neighborhood \(U_1\) of p. Then we have

$$\begin{aligned} \liminf \limits _{q\rightarrow p}\deg (V_q)\geqslant \deg (V_p). \end{aligned}$$

Proof

As \(p\in N\) is regular, there exists a local adapted basis \((X_1,\ldots ,X_n)\) in an open neighborhood \(U_2\subset U_1\) of p. We express the smooth vector field V in \(U_2\) as

$$\begin{aligned} V_q=\sum _{i=1}^{s} \sum _{j=n_{i-1}+1}^{n_{i}} c_{ij}(q) (X_j)_q \end{aligned}$$
(2.8)

where \(c_{ij}\in C^\infty (U_2)\). Suppose that the degree \(\deg (V_p)\) of V at p is equal to \(d \in {{\mathbb {N}}}\). Then there exists an integer \(k \in \{ n_{d-1}+1,\cdots ,n_{d}\}\) such that \(c_{dk}(p)\ne 0\), while \(c_{ij}(p)=0\) for all \(i=d+1,\cdots ,s\) and \(j=n_{i-1}+1,\cdots ,n_{i}\). By continuity, there exists an open neighborhood \(U'\subset U_2\) of p such that \(c_{dk}(q)\ne 0\) for each q in \(U'\). Therefore for each q in \(U'\) the degree of \(V_q\) is greater than or equal to the degree of \(V_p\),

$$\begin{aligned} \deg (V_q)\geqslant \deg (V_p)=d. \end{aligned}$$

Taking limits we get

$$\begin{aligned} \liminf \limits _{q\rightarrow p}\deg (V_q)\geqslant \deg (V_p). \end{aligned}$$

\(\square \)

Remark 2.2

In the proof of Lemma 2.1, \(\deg (V_q)\) could be strictly greater than d in case there were a coefficient \(c_{ij}\) with \(i\geqslant d+1\) satisfying \(c_{ij}(q)\ne 0\).

Proposition 2.3

Let M be a \(C^1\) immersed submanifold in a graded manifold \((N,({\mathcal {H}}^1,\ldots , {\mathcal {H}}^s))\). Assume that N is regular at \(p\in M\). Then we have

$$\begin{aligned} \liminf \limits _{q\rightarrow p, q \in M}\deg _M(q)\geqslant \deg _M(p). \end{aligned}$$

Proof

The proof imitates that of Lemma 2.1 and is based on the fact that the degree is defined by an open condition. Let \(\tau _M=\sum _J\tau _JX_J\) be the tangent m-vector to M in an open neighborhood U of p, where a local adapted basis is defined. The functions \(\tau _J\) are continuous on U. Suppose that the degree \(\deg _M(p)\) of M at p is equal to d. This means that there exists a multi-index \({\bar{J}}\) such that \(\tau _{{\bar{J}}}(p)\ne 0\) and \(\deg ((X_{{\bar{J}}})_p)=d\). Since the function \(\tau _{{\bar{J}}}\) is continuous, there exists a neighborhood \(U'\subset U\) such that \(\tau _{{\bar{J}}}(q)\ne 0\) in \(U'\). Therefore \(\deg _M(q)\geqslant d\) and, taking limits, we have

$$\begin{aligned} \liminf \limits _{q\rightarrow p}\deg _M(q)\geqslant \deg _M(p). \end{aligned}$$

\(\square \)

Corollary 2.4

Let M be a \(C^1\) submanifold immersed in an equiregular graded manifold. Then

  1. 1.

    \(\deg _M\) is a lower semicontinuous function on M.

  2. 2.

    The singular set \(M_0\) defined in (2.6) is closed in M.

Proof

The first assertion follows from Proposition 2.3 since every point in an equiregular graded manifold is regular. To prove 2, we take \(p\in M\smallsetminus M_0\). By 1, there exists an open neighborhood U of p in M such that each point q in U has degree \(\deg _M(q)\) equal to \(\deg (M)\). Therefore we have \(U\subset M\smallsetminus M_0\) and hence \(M\smallsetminus M_0\) is an open set. \(\square \)

2.2 Carnot manifolds

Let N be an n-dimensional smooth manifold. An l-dimensional distribution \({\mathcal {H}}\) on N assigns smoothly to every \(p\in N\) an l-dimensional vector subspace \({\mathcal {H}}_p\) of \(T_pN\). We say that a distribution \({\mathcal {H}}\) satisfies Hörmander's condition if any local frame \(\{X_1, \ldots , X_l\}\) spanning \({\mathcal {H}}\) satisfies

$$\begin{aligned} \dim ({\mathcal {L}}(X_1,\ldots ,X_{l}))(p)=n, \quad \text {for all} \ p\in N, \end{aligned}$$

where \({\mathcal {L}}(X_{1},\ldots ,X_{l})\) is the linear span of the vector fields \(X_{1},\ldots ,X_{l}\) and their commutators of any order.

A Carnot manifold \((N,{\mathcal {H}})\) is a smooth manifold N endowed with an l-dimensional distribution \({\mathcal {H}}\) satisfying Hörmander's condition. We refer to \({\mathcal {H}}\) as the horizontal distribution. We say that a vector field on N is horizontal if it is tangent to the horizontal distribution at every point. A \(C^1\) path is horizontal if its tangent vector is everywhere tangent to the horizontal distribution. A sub-Riemannian manifold \((N,{\mathcal {H}},h)\) is a Carnot manifold \((N,{\mathcal {H}})\) endowed with a positive-definite inner product h on \({\mathcal {H}}\). Such an inner product can always be extended to a Riemannian metric on N. Alternatively, any Riemannian metric on N restricted to \({\mathcal {H}}\) provides a structure of sub-Riemannian manifold. Chow's Theorem ensures that in a Carnot manifold \((N,{\mathcal {H}})\) the set of points that can be connected to a given point \(p\in N\) by a horizontal path is the connected component of N containing p, see [40]. Given a Carnot manifold \((N,{\mathcal {H}})\), we have a flag of subbundles

$$\begin{aligned} {\mathcal {H}}^1:={\mathcal {H}}\subset {\mathcal {H}}^2\subset \cdots \subset {\mathcal {H}}^i\subset \cdots \subset TN, \end{aligned}$$
(2.9)

defined by

$$\begin{aligned} {\mathcal {H}}^{i+1} :={\mathcal {H}}^i + {[}{\mathcal {H}},{\mathcal {H}}^i], \qquad i\geqslant 1, \end{aligned}$$

where

$$\begin{aligned} {[}{\mathcal {H}},{\mathcal {H}}^i]:=\{[X,Y] : X \in {\mathcal {H}},Y \in {\mathcal {H}}^i\}. \end{aligned}$$

The smallest integer s satisfying \({\mathcal {H}}^s_p=T_pN\) is called the step of the distribution \({\mathcal {H}}\) at the point p. Therefore, we have

$$\begin{aligned} {\mathcal {H}}_p\subset {\mathcal {H}}^2_p\subset \cdots \subset {\mathcal {H}}^s_p=T_p N. \end{aligned}$$

The integer list \((n_1(p),\cdots ,n_s(p))\) is called the growth vector of \({\mathcal {H}}\) at p. When the growth vector is constant in a neighborhood of a point \(p \in N\) we say that p is a regular point for the distribution. This flag of sub-bundles (2.9) associated to a Carnot manifold \((N,{\mathcal {H}})\) gives rise to the graded structure \((N,({\mathcal {H}}^i))\). Clearly an equiregular Carnot manifold \((N,{\mathcal {H}})\) of step s is an equiregular graded manifold \((N,{\mathcal {H}}^1, \ldots , {\mathcal {H}}^s)\). In particular a Carnot group turns out to be an equiregular graded manifold.
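For a concrete instance of the flag (2.9), brackets can be computed symbolically. The following sketch is our own verification, using the standard frame of the first Heisenberg group; it checks that one bracket already generates the missing direction, so the growth vector is (2, 3):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
coords = sp.Matrix([x, y, z])

def bracket(X, Y):
    """Lie bracket of vector fields given as coefficient column vectors:
    [X, Y] = (DY) X - (DX) Y, with D the Jacobian in the coordinates."""
    return Y.jacobian(coords) * X - X.jacobian(coords) * Y

# Frame of the horizontal distribution of the first Heisenberg group:
# X = d/dx - (y/2) d/dz,  Y = d/dy + (x/2) d/dz.
X = sp.Matrix([1, 0, -y / 2])
Y = sp.Matrix([0, 1, x / 2])

Z = sp.simplify(bracket(X, Y))
# [X, Y] = d/dz, so H^2 = span{X, Y, Z} = TN and the growth vector is (2, 3).
print(Z.T)
```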

Given a connected sub-Riemannian manifold \((N,{\mathcal {H}},h)\), and a \(C^1\) horizontal path \(\gamma :[a,b]\rightarrow N\), we define the length of \(\gamma \) by

$$\begin{aligned} L(\gamma )=\int _a^b \ \sqrt{h({\dot{\gamma }}(t),{\dot{\gamma }}(t))} \ dt. \end{aligned}$$
(2.10)

By means of the equality

$$\begin{aligned} d_c(p,q):=\inf \{L(\gamma ) : \gamma \ \text {is a } C^1\ \text { horizontal path joining } p,q \in N \}, \end{aligned}$$
(2.11)

this length defines a distance function (see [5, § 2.1.1,§ 2.1.2]) usually called the Carnot-Carathéodory distance, or CC-distance for short. See [40, Chapter 1.4] for further details.

3 Area for submanifolds of given degree

In this section we shall consider a graded manifold \((N,{\mathcal {H}}^1,\ldots ,{\mathcal {H}}^s)\) endowed with a Riemannian metric g, and an immersed submanifold M of dimension m.

We recall the following construction from [28, 1.4.D]: given \(p\in N\), we recursively define the subspaces \({\mathcal {K}}^1_p:={\mathcal {H}}^1_p\), \({\mathcal {K}}^{i+1}_p:=({\mathcal {H}}_p^i)^\perp \cap {\mathcal {H}}^{i+1}_p\), for \(1\leqslant i\leqslant s-1\). Here \(\perp \) means perpendicular with respect to the Riemannian metric g. Therefore we have the decomposition of \(T_pN\) into orthogonal subspaces

$$\begin{aligned} T_pN={\mathcal {K}}_p^1\oplus {\mathcal {K}}_p^2\oplus \cdots \oplus {\mathcal {K}}_p^s. \end{aligned}$$
(3.1)

Given \(r>0\), a unique Riemannian metric \(g_r\) is defined by the conditions: (i) the subspaces \({\mathcal {K}}^i\) are orthogonal, and (ii)

$$\begin{aligned} g_r|_{{\mathcal {K}}^i}=\frac{1}{r^{i-1}}g|_{{\mathcal {K}}^i}, \qquad i=1,\ldots ,s. \end{aligned}$$
(3.2)

When we consider Carnot manifolds, it is well-known that the Riemannian distances of \((N,g_r)\) uniformly converge to the Carnot-Carathéodory distance of \((N,{\mathcal {H}},h)\), [28, p. 144].

Working on a neighborhood U of p where a local frame \((X_1,\ldots ,X_{n_1})\) generating \({\mathcal {H}}^1\) is defined, we construct an orthonormal adapted basis \((X_1,\ldots ,X_n)\) for the Riemannian metric g by choosing orthonormal bases of the orthogonal subspaces \({\mathcal {K}}^i\), \(1\leqslant i\leqslant s\). Thus, the m-vector fields

$$\begin{aligned} {\tilde{X}}_J^r=\left( r^{\frac{1}{2}(\deg (X_{j_1})-1)}X_{j_1}\right) \wedge \ldots \wedge \left( r^{\frac{1}{2}(\deg (X_{j_m})-1)}X_{j_m}\right) , \end{aligned}$$
(3.3)

where \(J = (j_1 , j_2 , \ldots , j_m )\) for \(1 \leqslant j_1< \cdots < j_m \leqslant n\), are orthonormal with respect to the extension of the metric \(g_r\) to the space of m-vectors. We recall that the metric \(g_{r}\) is extended to the space of m-vectors simply by defining

$$\begin{aligned} g_r(v_1\wedge \ldots \wedge v_m , v_1'\wedge \ldots \wedge v_m')=\det \big (g_r(v_i,v_j')\big )_{1\leqslant i,j\leqslant m}, \end{aligned}$$
(3.4)

for \(v_1,\ldots ,v_m\) and \(v_1', \ldots ,v_m'\) in \(T_{p} N\). Observe that the extension is denoted the same way.

3.1 Area for submanifolds of given degree

Assume now that M is an immersed submanifold of dimension m in an equiregular graded manifold \((N,{\mathcal {H}}^1,\ldots , {\mathcal {H}}^s)\) equipped with the Riemannian metric g. We take a Riemannian metric \(\mu \) on M. For any \(p\in M\) we pick a \(\mu \)-orthonormal basis \(e_1,\ldots ,e_m\) of \(T_pM\). By the area formula we get

$$\begin{aligned} A(M',g_r) =\int _{{M'}} \left| e_1 \wedge \ldots \wedge e_m\right| _{g_r} d\mu (p), \end{aligned}$$
(3.5)

where \(M'\) is a bounded measurable subset of M and \(A(M',g_r)\) is the m-dimensional area of \(M'\) with respect to the Riemannian metric \(g_r\).

Now we express

$$\begin{aligned} e_1\wedge \ldots \wedge e_m=\sum _{J} \tau _{J}(p) {(X_{J})_p}=\sum _{J} {\tilde{\tau }}^r_{J}(p) ({\tilde{X}}_{J}^r)_p,\quad r>0. \end{aligned}$$

From (3.3) we get \({\tilde{X}}_{J}^r=r^{\frac{1}{2}(\deg (X_J)-m)}{X_{J}}\), and so \({\tilde{\tau }}^r_{J}=r^{-\frac{1}{2}(\deg (X_J)-m)}{\tau _{J}}\). Moreover, as \(\{{\tilde{X}}_{J}^r\}\) is an orthonormal basis for \(g_r\), we have

$$\begin{aligned} \left| e_1 \wedge \ldots \wedge e_m\right| _{g_r}^2=\sum _J ({\tilde{\tau }}_{J}^r(p))^2= \sum _J r^{-(\deg (X_J)-m)}{\tau _{J}^2(p)}. \end{aligned}$$

Therefore, we have

$$\begin{aligned} \lim _{r\downarrow 0} r^{\frac{1}{2}(\deg (M)-m)} \left| e_1 \wedge \ldots \wedge e_m\right| _{g_r}&= \lim _{r\downarrow 0} \left( \sum _J r^{(\deg (M)-\deg (X_J))}{\tau _{J}^2(p)}\right) ^{1/2} \\&=\left( \sum _{\deg (X_J)=\deg (M)} \tau _{J}^2(p)\right) ^{1/2}. \end{aligned}$$

By Lebesgue’s dominated convergence theorem we obtain

$$\begin{aligned} \lim _{r\downarrow 0} \left( r^{\tfrac{1}{2}(\deg (M)-m)}A(M',g_r)\right) =\int _{{M'}} \left( \sum _{\deg (X_J)=\deg (M)} \tau _{J}^2(p)\right) ^{\frac{1}{2}} \ d\mu (p). \end{aligned}$$
(3.6)
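The pointwise limit leading to (3.6) can be verified symbolically on toy data. In the sketch below (our own check; the coefficients and degrees are made up, with \(m=2\) and \(\deg (M)=4\)) only the top-degree coefficient survives as \(r\downarrow 0\):

```python
import sympy as sp

r = sp.symbols('r', positive=True)
d, m = 4, 2  # toy degree of M and its dimension

# Coefficients tau_J of e_1 ^ e_2 with the degrees of the corresponding X_J.
taus_and_degs = [(sp.Rational(1, 2), 2), (3, 3), (2, 4)]

# |e_1 ^ e_2|_{g_r}^2 = sum_J r^{-(deg(X_J) - m)} tau_J^2
norm_gr = sp.sqrt(sum(r**(-(deg - m)) * tau**2 for tau, deg in taus_and_degs))

# r^{(d - m)/2} |e_1 ^ e_2|_{g_r}  ->  sqrt of the sum of top-degree tau_J^2.
limit = sp.limit(r**sp.Rational(d - m, 2) * norm_gr, r, 0, '+')
print(limit)  # the degree-4 coefficient is 2, so the limit is 2
```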

Definition 3.1

If M is an immersed submanifold of degree d in an equiregular graded manifold \((N,{\mathcal {H}}^1,\ldots , {\mathcal {H}}^s)\) endowed with a Riemannian metric g, the degree d area \(A_d\) is defined by

$$\begin{aligned} A_d(M'):=\lim _{r\downarrow 0} \Big (r^{\tfrac{1}{2}(d-m)}A(M',g_r)\Big ), \end{aligned}$$

for any bounded measurable set \(M'\subset M\).

Equation (3.6) provides an integral formula for the area \(A_d\). An immediate consequence of the definition is the following

Remark 3.2

Setting \(d:=\deg (M)\) we have by Eq. (3.6) and the notation introduced in (2.4) that the degree d area \(A_d\) is given by

$$\begin{aligned} A_d(M')=\int _{{M'}} |\left( e_1\wedge \ldots \wedge e_m \right) _d|_{g} \ d\mu (p), \end{aligned}$$
(3.7)

for any bounded measurable set \(M'\subset M\). When the ambient manifold is a Carnot group this area formula was obtained in [36]. Notice that the degree d area \(A_d\) is given by the integral of the m-form

$$\begin{aligned} \omega _d(v_1,\ldots ,v_m)(p)=\langle v_1\wedge \ldots \wedge v_m, \dfrac{(e_1\wedge \ldots \wedge e_m )_d}{|(e_1\wedge \ldots \wedge e_m)_d|_g} \rangle , \end{aligned}$$
(3.8)

where \(v_1,\ldots ,v_m\) is a basis of \(T_p M\).

In a more general setting, an m-dimensional submanifold of a Riemannian manifold is an m-current (i.e., an element of the dual of the space of m-forms), and its area is the mass of this current (for more details see [18]). Similarly, a natural generalization of an m-dimensional submanifold of degree d immersed in a graded manifold is an m-current of degree d, whose mass should be given by \(A_d\). In [19] the authors studied the theory of \({\mathbb {H}}\)-currents in the Heisenberg group. Their mass coincides with our area (3.7) on intrinsic \(C^1\) submanifolds. However, in (3.8) we consider all possible m-forms and not only the intrinsic m-forms of Rumin's complex [1, 42, 49].

Corollary 3.3

Let M be an m-dimensional immersed submanifold of degree d in a graded manifold \((N,{\mathcal {H}}^1,\ldots ,{\mathcal {H}}^s)\) endowed with a Riemannian metric g. Let \(M_0\subset M\) be the closed set of singular points of M. Then \(A_d(M_0)=0\).

Proof

Take an orthonormal basis \(v_1,\ldots ,v_m\) of \(T_pM\) and express \(v_1\wedge \ldots \wedge v_m=\sum _{J} \tau _J(p)(X_J)_p\). When p is a singular point, \(\deg (v_1\wedge \ldots \wedge v_m)<\deg (M)=d\) and so \(\tau _J(p)=0\) whenever \(\deg (X_J)\geqslant d\).

Since \(M_0\) is measurable, from (3.6) we obtain

$$\begin{aligned} A_d(M_0)=\int _{M_0} \left( \sum _{\deg (X_J)=d} \tau _{J}^2(p)\right) ^{\frac{1}{2}} \ d\mu (p) \end{aligned}$$

and so \(A_d(M_0)=0\). \(\square \)

Remark 3.4

Another easy consequence of the definition is the following: if M is an immersed submanifold of degree d in a graded manifold \((N,{\mathcal {H}}^1,\ldots ,{\mathcal {H}}^s)\) with a Riemannian metric, then \(A_{d'}(M')=\infty \) for any open set \(M'\subset M\) when \(d'<d\). This follows easily since in the expression

$$\begin{aligned} r^{\frac{1}{2} (d'-m)} \left| e_1 \wedge \ldots \wedge e_m\right| _{g_r} \end{aligned}$$

we would have summands with a negative power of r.

In the following example, we exhibit a Carnot manifold with two different Riemannian metrics that coincide when restricted to the horizontal distribution, but yield different area functionals of a given degree.

Example 3.5

We consider the Carnot group \({\mathbb {H}}^1 \otimes {\mathbb {H}}^1\), which is the direct product of two Heisenberg groups. Namely, let \({{\mathbb {R}}}^3 \times {{\mathbb {R}}}^3\) be the 6-dimensional Euclidean space with coordinates \((x,y,z,x',y',z')\). We consider the 4-dimensional distribution \({\mathcal {H}}\) generated by

$$\begin{aligned} X&=\partial _x-\dfrac{y}{2}\partial _z,&Y&=\partial _y+\dfrac{x}{2}\partial _z,\\ X'&=\partial _{x'}-\dfrac{y'}{2}\partial _{z'}&Y'&=\partial _{y'}+\dfrac{x'}{2} \partial _{z'}. \end{aligned}$$

The vector fields \(Z=[X,Y]=\partial _z\) and \(Z'=[X',Y']=\partial _{z'}\) are the only non trivial commutators that generate, together with \(X,Y,X',Y'\), the subspace \({\mathcal {H}}^2=T({{\mathbb {H}}}^1\otimes {{\mathbb {H}}}^1)\). Let \(\Omega \) be a bounded open set of \({{\mathbb {R}}}^2\) and u a smooth function on \(\Omega \) such that \(u_t(s,t)\equiv 0\). We consider the immersed surface

$$\begin{aligned} \Phi :\Omega&\longrightarrow {\mathbb {H}}^1 \otimes {\mathbb {H}}^1, \\ (s,t)&\longmapsto (s,0,u(s,t),0,t,u(s,t)), \end{aligned}$$

whose tangent vectors are

$$\begin{aligned} \Phi _s&= (1,0,u_s,0,0,u_s)=X+u_s \ Z+u_s \ Z',\\ \Phi _t&= (0,0,0,0,1,0)=Y'. \end{aligned}$$

Thus, the 2-vector tangent to M is given by

$$\begin{aligned} \Phi _s\wedge \Phi _t=X\wedge Y'+ u_s(Z\wedge Y'+Z'\wedge Y'). \end{aligned}$$

When \(u_s(s,t)\) is different from zero the degree is equal to 3, since both \(Z\wedge Y'\) and \(Z'\wedge Y'\) have degree equal to 3. Points of degree 2 correspond to the zeroes of \(u_s\). We define a 2-parameter family \(g_{\lambda ,\mu }\) of Riemannian metrics on \({{\mathbb {H}}}^1\otimes {{\mathbb {H}}}^1\), for \((\lambda ,\mu )\in {{\mathbb {R}}}^2\) with \(\lambda ,\mu >0\), by the conditions (i) \((X,Y,X',Y')\) is an orthonormal basis of \({\mathcal {H}}\), (ii) Z, \(Z'\) are orthogonal to \({\mathcal {H}}\), and (iii) \(g(Z,Z)= \lambda \), \(g(Z',Z')= \mu \) and \(g(Z',Z)=0\). Therefore, the degree 3 area of \(\Omega \) with respect to the metric \(g_{\lambda ,\mu }\) is given by

$$\begin{aligned} A_3(\Omega )=\int _{\Omega } |u_s|\,( \lambda + \mu )^{\frac{1}{2}} \ ds\, dt. \end{aligned}$$

As we shall see later, these different functionals do not have the same critical points, which depend on the choice of the Riemannian metric.
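The expansion of \(\Phi _s\wedge \Phi _t\) above can be double-checked mechanically. The following sketch (plain Python, added by the editors and not part of the original argument; \(u_s\) is replaced by a sample numeric value) expands the wedge product in the adapted frame \((X,Y,Z,X',Y',Z')\) and confirms that the only degree-3 components are \(Z\wedge Y'\) and \(Z'\wedge Y'\), both with coefficient \(u_s\).

```python
from itertools import combinations

# Adapted frame of H^1 x H^1 at a point: (X, Y, Z, X', Y', Z'),
# with degrees 1, 1, 2, 1, 1, 2.
names = ["X", "Y", "Z", "X'", "Y'", "Z'"]
deg = [1, 1, 2, 1, 1, 2]

us = 0.7  # sample value of u_s(s, t); any nonzero value illustrates the point

# Tangent vectors in the adapted frame: Phi_s = X + u_s Z + u_s Z', Phi_t = Y'.
a = [1, 0, us, 0, 0, us]  # Phi_s
b = [0, 0, 0, 0, 1, 0]    # Phi_t

# Nonzero coefficients of Phi_s ^ Phi_t on the simple 2-vectors X_i ^ X_j (i < j),
# stored together with their degrees.
wedge = {}
for i, j in combinations(range(6), 2):
    c = a[i] * b[j] - a[j] * b[i]
    if c != 0:
        wedge[(names[i], names[j])] = (c, deg[i] + deg[j])

print(wedge)
```

The output lists \(X\wedge Y'\) with coefficient 1 (degree 2) and \(Z\wedge Y'\), \(Y'\wedge Z'\) with coefficients \(\pm u_s\) (degree 3), in agreement with the expansion above; the sign on \(Y'\wedge Z'\) only reflects the ordering \(Z'\wedge Y'=-Y'\wedge Z'\).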

4 Examples

4.1 Degree of a hypersurface in a Carnot manifold

Let M be a \(C^1\) hypersurface immersed in an equiregular Carnot manifold \((N,{\mathcal {H}})\), where \({\mathcal {H}}\) is a bracket generating l-dimensional distribution. Let Q be the homogeneous dimension of N and \(p\in M\).

Let us check that \(\deg (M)=Q-1\). The pointwise degree of M is given by

$$\begin{aligned} \deg _{M}(p)=\sum _{j=1}^s j ({\tilde{m}}_j-{\tilde{m}}_{j-1}) , \end{aligned}$$

where \({\tilde{m}}_j=\text {dim}({\tilde{{\mathcal {H}}}}_p^j)\) with \({\tilde{{\mathcal {H}}}}_p^j=T_p M \cap {\mathcal {H}}_p^j\). Recall that \(n_i=\dim ({\mathcal {H}}_p^i)\). As \(T_pM\) is a hyperplane of \(T_pN\), we have that either \({\tilde{{\mathcal {H}}}}_p^i={\mathcal {H}}^i_p\) and \({\tilde{m}}_i=n_i\), or \({\tilde{{\mathcal {H}}}}_p^i\) is a hyperplane of \({\mathcal {H}}_p^i\) and \({\tilde{m}}_i=n_i-1\). On the other hand,

$$\begin{aligned} {\tilde{m}}_i-{\tilde{m}}_{i-1}\leqslant n_i-n_{i-1}. \end{aligned}$$

Writing

$$\begin{aligned} n_i-n_{i-1}={\tilde{m}}_i-{\tilde{m}}_{i-1}+z_i, \end{aligned}$$

for non-negative integers \(z_i\) and adding up on i from 1 to s we get

$$\begin{aligned} \sum _{i=1}^s z_i=1, \end{aligned}$$

since \({\tilde{m}}_s=n-1\) and \(n_s=n\). We conclude that there exists \(i_0\in \{1,\ldots ,s\}\) such that \(z_{i_0}=1\) and \(z_j=0\) for all \(j\ne i_0\). This implies

$$\begin{aligned} {\tilde{m}}_i&=n_i ,&\qquad i<i_0, \\ {\tilde{m}}_i&=n_i -1,&\qquad i\geqslant i_0. \end{aligned}$$

If \(i_0>1\) for all \(p\in M\), then \({\mathcal {H}}\subset TM\), a contradiction since \({\mathcal {H}}\) is a bracket-generating distribution. We conclude that \(i_0=1\) and so

$$\begin{aligned} \deg (M)&=\sum _{i=1}^s i\,({\tilde{m}}_i-{\tilde{m}}_{i-1})=1\cdot {\tilde{m}}_1+\sum _{i=2}^s i\,({\tilde{m}}_i-{\tilde{m}}_{i-1}) \\&=1\cdot (n_1-1)+\sum _{i=2}^s i\,(n_i-n_{i-1})=Q-1. \end{aligned}$$
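The counting argument above can be illustrated with a short script. The following sketch (plain Python, added by the editors; the helper names are ours) computes the homogeneous dimension Q and the pointwise degree from a growth vector, using \({\tilde{m}}_i=n_i-1\) for every i as in the case \(i_0=1\), and confirms \(\deg (M)=Q-1\) on a few examples.

```python
def homogeneous_dim(n):
    """Q = sum_i i * (n_i - n_{i-1}) for a growth vector n = (n_1, ..., n_s), n_0 = 0."""
    return sum(i * (n[i - 1] - (n[i - 2] if i > 1 else 0))
               for i in range(1, len(n) + 1))

def hypersurface_degree(n):
    """Degree from the tangent flag with m_i = n_i - 1 for every i (the case i_0 = 1)."""
    m = [ni - 1 for ni in n]
    return sum(i * (m[i - 1] - (m[i - 2] if i > 1 else 0))
               for i in range(1, len(n) + 1))

# Heisenberg H^1 (2,3), the Engel structure (2,3,4), and H^1 x H^1 (4,6):
for n in [(2, 3), (2, 3, 4), (4, 6)]:
    print(n, homogeneous_dim(n), hypersurface_degree(n))  # degree is Q - 1 in every case
```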

4.2 \(A_{2n+1}\)-area of a hypersurface in a \((2n+1)\)-dimensional contact manifold

A contact manifold is a smooth manifold \(M^{2n+1}\) of odd dimension endowed with a one form \(\omega \) such that \(d\omega \) is non-degenerate when restricted to \({\mathcal {H}}=\text {ker}(\omega )\). Since it holds

$$\begin{aligned} d\omega (X,Y)=X(\omega (Y))-Y(\omega (X))-\omega ([X,Y]), \end{aligned}$$

for \(X,Y\in {\mathcal {H}}\), the distribution \({\mathcal {H}}\) is non-integrable and satisfies the Hörmander rank condition by the Frobenius theorem. When we fix a horizontal metric h on the distribution \({\mathcal {H}}\), the triple \((M,{\mathcal {H}},h)\) is a sub-Riemannian structure. It is easy to prove that there exists a unique vector field T on M such that

$$\begin{aligned} \omega (T)=1, \quad {\mathcal {L}}_T\,\omega =0, \end{aligned}$$

where \({\mathcal {L}}\) is the Lie derivative. This vector field T is called the Reeb vector field. We can always extend the horizontal metric h to a Riemannian metric g making T a unit vector orthogonal to \({\mathcal {H}}\).

Let \(\Sigma \) be a \(C^1\) hypersurface immersed in M. In this setting the singular set of \(\Sigma \) is given by

$$\begin{aligned} \Sigma _0=\{p \in \Sigma : T_p \Sigma ={\mathcal {H}}_p \}, \end{aligned}$$

and corresponds to the points in \(\Sigma \) of degree 2n. Observe that the non-integrability of \({\mathcal {H}}\) implies that the regular set \(\Sigma \smallsetminus \Sigma _0\) is non-empty for any hypersurface \(\Sigma \).

Let N be the unit vector field normal to \(\Sigma \) at each point; then on the regular set \(\Sigma \smallsetminus \Sigma _0\) the g-orthogonal projection \(N_h\) of N onto the distribution \({\mathcal {H}}\) is different from zero. Therefore, outside the singular set \(\Sigma _0\), we define the horizontal unit normal by

$$\begin{aligned} \nu _h=\dfrac{N_h}{|N_h|}, \end{aligned}$$

and the vector field

$$\begin{aligned} S=\langle N,T \rangle \nu _h-|N_h|T, \end{aligned}$$

which is tangent to \(\Sigma \) and belongs to \({\mathcal {H}}^2\). Moreover, \(T_p\Sigma \cap ({\mathcal {H}}^2_p\smallsetminus {\mathcal {H}}^1_p)\) has dimension equal to one and \(T_p\Sigma \cap {\mathcal {H}}_p^1\) has dimension \(2n-1\); thus the degree of the hypersurface \(\Sigma \) outside the singular set is equal to \(2n+1\). Let \(e_1,\ldots ,e_{2n-1}\) be an orthonormal basis of \(T_p\Sigma \cap {\mathcal {H}}^1_p\). Then \(e_1,\ldots ,e_{2n-1},S_p\) is an orthonormal basis of \(T_p\Sigma \) and we have

$$\begin{aligned} e_1\wedge \ldots \wedge e_{2n-1}\wedge S=\langle N,T\rangle e_1\wedge \ldots \wedge e_{2n-1}\wedge \nu _h- |N_h|e_1\wedge \ldots \wedge e_{2n-1}\wedge T. \end{aligned}$$

Hence we obtain

$$\begin{aligned} A_{2n+1}(\Sigma )=\int _{\Sigma } |N_h| d\Sigma . \end{aligned}$$
(4.1)

In [20] Galli obtained this formula as the perimeter of a set with \(C^1\) boundary \(\Sigma \), and in [50] Shcherbakova obtained it as the limit of the Riemannian volume of an \(\varepsilon \)-cylinder around \(\Sigma \) divided by its height \(\varepsilon \). This formula was obtained for surfaces in a 3-dimensional pseudo-hermitian manifold in [9] and by S. Pauls in [44]. It is exactly the area formula independently established in recent years in the Heisenberg group \({{\mathbb {H}}}^n\), which is the prototype of contact manifolds (see for instance [9, 10, 15, 30, 47]).

Example 4.1

(The roto-translational group) Take coordinates \((x,y,\theta )\) in the 3-dimensional manifold \({{\mathbb {R}}}^2\times {\mathbb {S}}^1\). We consider the contact form

$$\begin{aligned} \omega =\sin (\theta )dx- \cos (\theta )dy, \end{aligned}$$

the horizontal distribution \({\mathcal {H}}=\text {ker}(\omega )\), spanned by the vector fields

$$\begin{aligned} X=\cos (\theta ) \partial _x+ \sin (\theta ) \partial _y, \quad Y=\partial _{\theta }, \end{aligned}$$

and the horizontal metric h that makes X and Y orthonormal.

Therefore \({{\mathbb {R}}}^2\times {\mathbb {S}}^1\) endowed with this one form \(\omega \) is a contact manifold. Moreover \(({{\mathbb {R}}}^2\times {\mathbb {S}}^1, {\mathcal {H}},h)\) has a sub-Riemannian structure which is also a Lie group known as the roto-translational group. A mathematical model of simple cells of the visual cortex V1 using the sub-Riemannian geometry of the roto-translational Lie group was proposed by Citti and Sarti (see [13, 14]). Here the Reeb vector field is given by

$$\begin{aligned} T=[X,Y]=\sin (\theta ) \partial _x-\cos (\theta ) \partial _y. \end{aligned}$$

Let \(\Omega \) be an open set of \({{\mathbb {R}}}^{2}\) and \(u:\Omega \rightarrow {{\mathbb {R}}}\) a function of class \(C^1\). When we consider the graph \(\Sigma =\text {Graph}(u)\), given as the zero level set of the \(C^1\) function

$$\begin{aligned} f(x,y,\theta )=u(x,y)-\theta , \end{aligned}$$

the projection of the unit normal N onto the horizontal distribution is given by

$$\begin{aligned} N_h=\dfrac{ X(u)X-Y}{\sqrt{1+X(u)^2+T(u)^2}}. \end{aligned}$$

Hence the 3-area functional is given by

$$\begin{aligned} A_3(\Sigma )=\int _{\Omega }\left( 1+X(u)^2\right) ^{\frac{1}{2}} \, dx dy. \end{aligned}$$
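The integrand \(\left( 1+X(u)^2\right) ^{1/2}\) can be cross-checked numerically: writing the tangent 2-vector of the graph in the adapted frame (X, Y, T) and taking the norm of its degree-3 part recovers it. The sketch below (plain Python, added by the editors; the sample function \(u(x,y)=xy\) and the base point are arbitrary choices of ours) performs this check.

```python
import math
from itertools import combinations

# Adapted frame of the roto-translational group at a point with angle t:
# X = cos(t) dx + sin(t) dy, Y = dtheta (both of degree 1),
# T = sin(t) dx - cos(t) dy (degree 2).
def to_frame(v, t):
    """Coordinate components (v_x, v_y, v_theta) -> coefficients on (X, Y, T)."""
    vx, vy, vth = v
    return [math.cos(t) * vx + math.sin(t) * vy,  # X
            vth,                                  # Y
            math.sin(t) * vx - math.cos(t) * vy]  # T

deg = [1, 1, 2]

# Sample graph theta = u(x, y) with u(x, y) = x*y, at the point (x, y) = (0.3, -0.5).
x, y = 0.3, -0.5
u, ux, uy = x * y, y, x
t = u  # the angle coordinate on the graph

Phi_x = to_frame([1, 0, ux], t)
Phi_y = to_frame([0, 1, uy], t)

# Squared norm of the degree-3 part of Phi_x ^ Phi_y.
deg3_sq = sum((Phi_x[i] * Phi_y[j] - Phi_x[j] * Phi_y[i]) ** 2
              for i, j in combinations(range(3), 2) if deg[i] + deg[j] == 3)

Xu = math.cos(t) * ux + math.sin(t) * uy  # X(u)
print(math.sqrt(deg3_sq), math.sqrt(1 + Xu ** 2))  # the two numbers coincide
```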

4.3 \(A_4\)-area of a ruled surface immersed in an Engel structure

Let \(E={{\mathbb {R}}}^2 \times {\mathbb {S}}^1 \times {{\mathbb {R}}}\) be a smooth manifold with coordinates \(p=(x,y,\theta ,k)\). We set \({\mathcal {H}}=\text {span}\{X_1,X_2\}\), where

$$\begin{aligned} X_{1}= \cos ( \theta ) \partial _{x} + \sin ( \theta ) \partial _{y}+ k\partial _{\theta }, \qquad X_{2}=\partial _{k}. \end{aligned}$$
(4.2)

Therefore \((E,{\mathcal {H}})\) is a Carnot manifold; indeed \({\mathcal {H}}\) satisfies the Hörmander rank condition, since \(X_1\) and \(X_2\), together with the commutators

$$\begin{aligned} \begin{aligned} X_{3}&=[X_{1},X_{2}]=-\partial _{\theta }\\ X_{4}&=[X_{1},[X_{1},X_{2}]]=-\sin (\theta )\partial _{x}+ \cos (\theta ) \partial _{y} \end{aligned} \end{aligned}$$
(4.3)

generate the whole tangent bundle. Here we follow a computation developed by Le Donne and Magnani [34] in the Engel group. Let \(\Omega \) be an open set of \({{\mathbb {R}}}^2\) endowed with the Lebesgue measure. Since we are particularly interested in applications to the visual cortex (see [23, 45, 1.5.1.4] to understand the reasons), we consider the immersion \(\Phi :\Omega \rightarrow E\) given by \(\Phi =(x,y,\theta (x,y),\kappa (x,y))\), and we set \(\Sigma =\Phi (\Omega )\). The tangent vectors to \(\Sigma \) are

$$\begin{aligned} \Phi _{x}=(1,0, \theta _{x}, \kappa _{x}), \qquad \Phi _{y}=(0,1,\theta _{y},\kappa _{y}). \end{aligned}$$
(4.4)

In order to know the dimension of \(T_p \Sigma \cap {\mathcal {H}}_p\) we need to take into account the rank of the matrix

$$\begin{aligned} B=\left( \begin{array}{cccc} 1 &{} \quad 0 &{} \quad \theta _{x}&{} \quad \kappa _{x}\\ 0 &{} \quad 1 &{} \quad \theta _{y}&{} \quad \kappa _{y}\\ \cos (\theta ) &{} \quad \sin (\theta )&{} \quad \kappa &{} \quad 0\\ 0 &{} \quad 0 &{} \quad 0 &{} \quad 1\\ \end{array}\right) . \end{aligned}$$
(4.5)

Obviously \(\text {rank}(B)\geqslant 3\); indeed we have

$$\begin{aligned} \det \left( \begin{array}{ccc} 1 &{} \quad 0 &{} \quad \kappa _{x}\\ 0 &{} \quad 1 &{} \quad \kappa _{y}\\ 0 &{} \quad 0 &{} \quad 1\\ \end{array}\right) \ne 0. \end{aligned}$$

Moreover, it holds

$$\begin{aligned} \begin{aligned} \text {rank}(B)=3&\quad \Leftrightarrow \quad \det \left( \begin{array}{ccc} \cos (\theta ) &{}\sin (\theta )&{}\kappa \\ 1 &{} \quad 0 &{} \quad \theta _{x}\\ 0 &{} \quad 1 &{} \quad \theta _{y}\\ \end{array}\right) =0\\&\quad \Leftrightarrow \quad \kappa -\theta _{x}\cos (\theta )-\theta _{y}\sin (\theta )=0\\&\quad \Leftrightarrow \quad \kappa =X_1(\theta (x,y)). \end{aligned} \end{aligned}$$
(4.6)
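The chain of equivalences in (4.6) reduces to a single determinant identity, which can be sanity-checked numerically. The sketch below (plain Python, added by the editors; all sample values are arbitrary) evaluates the \(3\times 3\) determinant and compares it with \(\kappa -X_1(\theta )\).

```python
import math

# Arbitrary sample values of theta(x, y), kappa, theta_x, theta_y at a point.
theta, kappa, tx, ty = 0.4, 1.3, -0.8, 0.2

# The 3x3 determinant appearing in (4.6).
M = [[math.cos(theta), math.sin(theta), kappa],
     [1, 0, tx],
     [0, 1, ty]]
det = (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
       - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
       + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

X1_theta = tx * math.cos(theta) + ty * math.sin(theta)  # X_1(theta) when theta = theta(x, y)
print(det, kappa - X1_theta)  # equal: the determinant vanishes exactly when kappa = X_1(theta)
```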

Inspired by the foliation property of hypersurfaces in the Heisenberg group and in the roto-translational group, in the present work we consider only surfaces \(\Sigma =\{(x,y,\theta (x,y),\kappa (x,y))\}\) verifying the foliation condition \(\kappa =X_1(\theta (x,y))\). Thus, we have

$$\begin{aligned} \begin{aligned} \Phi _{x}\wedge \Phi _{y}=&(\cos (\theta )\kappa _{y}-\sin (\theta )\kappa _{x})X_{1}\wedge X_{2}-(\cos (\theta )\theta _{y} -\sin (\theta ) \theta _{x})X_{1}\wedge X_{3}\\&+X_{1}\wedge X_{4}+(\theta _{x}\kappa _{y}-\theta _{y}\kappa _{x}-\kappa (\cos (\theta )\kappa _{y}-\sin (\theta )\kappa _{x}))X_{2}\wedge X_{3}\\&+(\sin (\theta )\kappa _{y}+\cos (\theta )\kappa _{x})X_{2}\wedge X_{4} \\ {}&+(\kappa -\sin (\theta )\theta _{y}-\cos (\theta )\theta _{x})X_{3}\wedge X_{4}. \end{aligned} \end{aligned}$$
(4.7)
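The coefficients in (4.7) can be verified numerically. The following sketch (plain Python, added by the editors; the helper `to_frame`, obtained by inverting the frame (4.2)-(4.3) at a point, and all sample values are ours) expresses \(\Phi _x\), \(\Phi _y\) in the adapted frame \((X_1,\ldots ,X_4)\) and compares the wedge coefficients with the stated formulas.

```python
import math
from itertools import combinations

# Sample values of theta(x, y), kappa = k, and their partial derivatives at a point.
theta, kappa = 0.6, 1.1
tx, ty, kx, ky = 0.2, -0.7, 0.9, 0.4

def to_frame(v):
    """Coordinate components (v_x, v_y, v_theta, v_k) -> coefficients on (X1, X2, X3, X4),
    obtained by inverting the frame (4.2)-(4.3) at the sample point."""
    vx, vy, vth, vk = v
    c1 = math.cos(theta) * vx + math.sin(theta) * vy
    return [c1, vk, kappa * c1 - vth,
            -math.sin(theta) * vx + math.cos(theta) * vy]

a = to_frame([1, 0, tx, kx])  # Phi_x
b = to_frame([0, 1, ty, ky])  # Phi_y

# Wedge coefficients of Phi_x ^ Phi_y on X_i ^ X_j, i < j.
w = {(i + 1, j + 1): a[i] * b[j] - a[j] * b[i] for i, j in combinations(range(4), 2)}

# Coefficients predicted by (4.7).
expected = {
    (1, 2): math.cos(theta) * ky - math.sin(theta) * kx,
    (1, 3): -(math.cos(theta) * ty - math.sin(theta) * tx),
    (1, 4): 1.0,
    (2, 3): tx * ky - ty * kx - kappa * (math.cos(theta) * ky - math.sin(theta) * kx),
    (2, 4): math.sin(theta) * ky + math.cos(theta) * kx,
    (3, 4): kappa - math.sin(theta) * ty - math.cos(theta) * tx,
}
print(all(abs(w[ij] - expected[ij]) < 1e-12 for ij in expected))  # True
```

Note that the sample values do not satisfy the foliation condition, so the \(X_3\wedge X_4\) coefficient is nonzero here; imposing \(\kappa =X_1(\theta )\) makes it vanish, as stated below (4.7).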

By the foliation condition (4.6) the coefficient of \(X_{3}\wedge X_{4}\) vanishes identically, so we deduce that \(\deg (\Sigma )\leqslant 4\). Moreover, the coefficient of \(X_{1}\wedge X_{4}\) never vanishes, therefore \(\deg (\Sigma )=4\) and there are no singular points in \(\Sigma \). When \(\kappa =X_1(\theta )\), a tangent basis of \(T_p \Sigma \) adapted to the flag (2.7) is given by

$$\begin{aligned} \begin{aligned} e_1&=\cos (\theta )\Phi _x+ \sin (\theta )\Phi _y= X_1+X_1(\kappa )X_2,\\ e_2&=-\sin (\theta )\Phi _x+\cos (\theta )\Phi _y=X_4-X_4(\theta )X_3+X_4(\kappa )X_2. \end{aligned} \end{aligned}$$
(4.8)

When we fix the Riemannian metric \(g_1\) that makes \((X_1,\ldots ,X_4)\) orthonormal, the \(A_4\)-area of \(\Sigma \) is given by

$$\begin{aligned} A_4(\Sigma ,g_1)=\int _{\Omega }\left( 1+X_1(\kappa )^2\right) ^{\frac{1}{2}} \ dx dy =\int _{\Omega }\left( 1+(X_1^2(\theta ))^2\right) ^{\frac{1}{2}} \ dx dy. \end{aligned}$$
(4.9)

When we fix the Euclidean metric \(g_0\) that makes \((\partial _x,\partial _y, \partial _{\theta },\partial _k)\) orthonormal, the \(A_4\)-area of \(\Sigma \) is given by

$$\begin{aligned} A_4(\Sigma ,g_0)=\int _{\Omega }\left( 1+\kappa ^2+X_1(\kappa )^2\right) ^{\frac{1}{2}} \ dx dy. \end{aligned}$$
(4.10)

5 Admissible variations for submanifolds

Let us consider an m-dimensional manifold \({\bar{M}}\) and an immersion \(\Phi :{\bar{M}}\rightarrow N\) into an equiregular graded manifold endowed with a Riemannian metric \(g=\langle \cdot ,\cdot \rangle \). We shall denote the image \(\Phi ({\bar{M}})\) by M and set \(d:=\deg (M)\). In this setting we have the following definition.

Definition 5.1

A smooth map \(\Gamma :{\bar{M}}\times (-\varepsilon ,\varepsilon )\rightarrow N\) is said to be an admissible variation of \(\Phi \) if \(\Gamma _t:{\bar{M}}\rightarrow N\), defined by \(\Gamma _t({\bar{p}}):=\Gamma ({\bar{p}},t)\), satisfies the following properties:

  (i) \(\Gamma _0=\Phi \),

  (ii) \(\Gamma _t({\bar{M}})\) is an immersion of the same degree as \(\Phi ({\bar{M}})\) for small enough t, and

  (iii) \(\Gamma _t({\bar{p}})=\Phi ({\bar{p}})\) for \({\bar{p}}\) outside a given compact subset of \({\bar{M}}\).

Definition 5.2

Given an admissible variation \(\Gamma \), the associated variational vector field is defined by

$$\begin{aligned} V({\bar{p}}):=\frac{\partial \Gamma }{\partial t}({\bar{p}},0). \end{aligned}$$
(5.1)

The vector field V is an element of \({\mathfrak {X}}_0({\bar{M}},N)\): i.e., a smooth map \(V:{\bar{M}}\rightarrow TN\) such that \(V({\bar{p}})\in T_{\Phi ({\bar{p}})}N\) for all \({\bar{p}}\in {\bar{M}}\). It is equal to 0 outside a compact subset of \({\bar{M}}\).

Let us now see that the variational vector field V associated to an admissible variation \(\Gamma \) satisfies a first-order differential equation. Let \(p=\Phi ({\bar{p}})\) for some \({\bar{p}}\in {\bar{M}}\), and let \((X_1, \ldots , X_n)\) be an adapted frame in a neighborhood U of p. Take a basis \(({\bar{e}}_1,\ldots ,{\bar{e}}_m)\) of \(T_{{\bar{p}}}{{\bar{M}}}\) and let \(e_j=d\Phi _{{\bar{p}}}({\bar{e}}_j)\) for \(1\leqslant j\leqslant m\). As \(\Gamma _t({\bar{M}})\) is a submanifold of the same degree as \(\Phi ({\bar{M}})\) for small t, it follows that

$$\begin{aligned} \left\langle (d \Gamma _t)_{{\bar{p}}} (e_1) \wedge \ldots \wedge (d \Gamma _t)_{{\bar{p}}}(e_m) , {(X_J)_{\Gamma _t({\bar{p}})} } \right\rangle =0, \end{aligned}$$
(5.2)

for all \(X_J=X_{j_1} \wedge \ldots \wedge X_{j_m}\), with \(1\leqslant j_1<\cdots < j_m\leqslant n\), such that \(\deg (X_J)>\deg (M)\). Taking the derivative with respect to t in equality (5.2) and evaluating at \(t=0\) we obtain the condition

$$\begin{aligned} 0=\langle e_1\wedge \ldots \wedge e_m,\nabla _{V(p)}X_J\rangle +\sum _{k=1}^m \langle e_1 \wedge \ldots \wedge \nabla _{e_k} V \wedge \ldots \wedge e_m ,X_J\rangle \end{aligned}$$

for all \(X_J\) such that \(\deg (X_J)>\deg (M)\). In the above formula, \(\langle \cdot ,\cdot \rangle \) indicates the scalar product in the space of m-vectors induced by the Riemannian metric g. The symbol \(\nabla \) denotes, in the left summand, the Levi–Civita connection associated to g and, in the right summand, the covariant derivative of vectors in \({\mathfrak {X}}({\bar{M}},N)\) induced by g. Thus, if a variation preserves the degree then the associated variational vector field satisfies the above condition and we are led to the following definition.

Definition 5.3

Given an immersion \(\Phi :{\bar{M}}\rightarrow N\), a vector field \(V\in {\mathfrak {X}}_0({\bar{M}},N)\) is said to be admissible if it satisfies the system of first order PDEs

$$\begin{aligned} 0=\langle e_1\wedge \ldots \wedge e_m,\nabla _{V(p)}X_J\rangle +\sum _{k=1}^m \langle e_1 \wedge \ldots \wedge \nabla _{e_k} V \wedge \ldots \wedge e_m ,X_J\rangle \end{aligned}$$
(5.3)

where \(X_J=X_{j_1} \wedge \ldots \wedge X_{j_m}\), \(\deg (X_J)>d\) and \(p\in M\). We denote by \({\mathcal {A}}_{\Phi }({\bar{M}},N)\) the set of admissible vector fields.

It is not difficult to check that the conditions given by (5.3) are independent of the choice of the adapted basis.

Thus we are led naturally to a problem of integrability: given \(V\in {\mathfrak {X}}_0({\bar{M}},N)\) such that the first order condition (5.3) holds, we ask whether an admissible variation whose associated variational vector field is V exists.

Definition 5.4

We say that an admissible vector field \(V\in {\mathfrak {X}}_0({\bar{M}},N)\) is integrable if there exists an admissible variation such that the associated variational vector field is V.

Proposition 5.5

Let \(\Phi : {\bar{M}} \rightarrow N\) be an immersion into a graded manifold. Then a vector field \(V \in {\mathfrak {X}}_0({\bar{M}},N)\) is admissible if and only if its normal component \(V^{\perp }\) is admissible.

Proof

Since the Levi–Civita connection and the covariant derivative are additive we deduce that the admissibility condition (5.3) is additive in V. We decompose \(V=V^\top +V^{\perp }\) in its tangent \(V^\top \) and normal \(V^{\perp }\) components and observe that \(V^\top \) is always admissible since the flow of \(V^\top \) is an admissible variation leaving \(\Phi ({\bar{M}})\) invariant with variational vector field \(V^\top \). Hence, \(V^{\perp }\) satisfies (5.3) if and only if V verifies (5.3). \(\square \)

6 The structure of the admissibility system of first order PDEs

Let us consider an open set \(U\subset N\) where a local adapted basis \((X_1,\ldots ,X_n)\) is defined. We know that the simple m-vectors \(X_J:=X_{j_1}\wedge \ldots \wedge X_{j_m}\) generate the space \(\Lambda _m(U)\) of m-vectors. At a given point \(p\in U\), its dimension is given by the formula

$$\begin{aligned} \dim (\Lambda _m(U)_p)=\left( {\begin{array}{c}n\\ m\end{array}}\right) . \end{aligned}$$

Given two m-vectors \(v,w\in \Lambda _m(U)_p\), it is easy to check that \(\deg (v+w) \leqslant \max \{\deg (v),\deg (w)\}\), and that \(\deg (\lambda v)=\deg (v)\) when \(\lambda \ne 0\), while \(\deg (\lambda v)=0\) otherwise. This implies that the set

$$\begin{aligned} \Lambda _m^d(U)_p:=\{v\in \Lambda _m(U)_p: \deg {v}\leqslant d\} \end{aligned}$$

is a vector subspace of \(\Lambda _m(U)_p\). To compute its dimension we let \(v_i:=(X_i)_p\) and we check that a basis of \(\Lambda _m^d(U)_p\) is composed of the vectors

$$\begin{aligned} v_{i_1}\wedge \ldots \wedge v_{i_m}\ \text {such that } \sum _{k=1}^{m}\deg (v_{i_k})\leqslant d. \end{aligned}$$

To get an m-vector in such a basis we pick \(k_1\) of the vectors in \({\mathcal {H}}^1_p\cap \{v_1,\ldots ,v_n\}\) and, for \(j=2,\ldots ,s\), we pick \(k_j\) of the vectors in \(({\mathcal {H}}^j_p\smallsetminus {\mathcal {H}}^{j-1}_p)\cap \{v_1,\ldots ,v_n\}\), so that

  • \(k_1+\cdots +k_s=m\), and

  • \(1\cdot k_1+\cdots +s\cdot k_s\leqslant d\).

So we conclude, taking \(n_0=0\), that

$$\begin{aligned} \dim (\Lambda _m^d(U)_p)=\sum _{ \begin{array}{c} k_1+ \cdots + k_s=m,\\ 1 \cdot k_1+ \cdots + s \cdot k_s\leqslant d \end{array}} \bigg (\prod _{i=1}^s \left( {\begin{array}{c}n_i-n_{i-1}\\ k_i\end{array}}\right) \bigg ). \end{aligned}$$

When we consider two simple m-vectors \(v_{i_1}\wedge \ldots \wedge v_{i_m}\) and \(v_{j_1}\wedge \ldots \wedge v_{j_m}\), their scalar product is 0 or \(\pm 1\), the latter case when, after reordering if necessary, we have \(v_{i_k}=v_{j_k}\) for \(k=1,\ldots ,m\). This implies that the orthogonal subspace \(\Lambda _m^d(U)_p^\perp \) of \(\Lambda _m^d(U)_p\) in \(\Lambda _m(U)_p\) is generated by the m-vectors

$$\begin{aligned} v_{i_1}\wedge \ldots \wedge v_{i_m}\ \text {such that } \sum _{k=1}^{m}\deg (v_{i_k})> d. \end{aligned}$$

Hence we have

$$\begin{aligned} \dim (\Lambda _m^d(U)_p^\perp )=\sum _{ \begin{array}{c} k_1+ \cdots + k_s=m,\\ 1 \cdot k_1+ \cdots + s \cdot k_s> d \end{array}} \bigg (\prod _{i=1}^s \left( {\begin{array}{c}n_i-n_{i-1}\\ k_i\end{array}}\right) \bigg ), \end{aligned}$$
(6.1)

with \(n_0=0\). Since N is equiregular, \(\ell =\dim (\Lambda _m^d(U)_p^{\perp })\) is constant on N. Then we can choose an orthonormal basis \((X_{J_1},\ldots ,X_{J_{\ell }})\) of \(\Lambda _m^d(U)_p^{\perp }\) at each point \(p \in U\).
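The two counts above can be compared on a concrete example. The sketch below (plain Python, added by the editors; the function names are ours) computes \(\dim (\Lambda _m^d(U)_p)\) and \(\ell =\dim (\Lambda _m^d(U)_p^{\perp })\) both by enumerating the simple m-vectors of an adapted basis and via the combinatorial formula (6.1), for the Engel growth vector (2, 3, 4) with \(m=2\), \(d=4\).

```python
from itertools import combinations
from math import comb

def dims_by_enumeration(degs, m, d):
    """Count simple m-vectors of the adapted basis with degree <= d and degree > d."""
    low = high = 0
    for J in combinations(range(len(degs)), m):
        if sum(degs[j] for j in J) <= d:
            low += 1
        else:
            high += 1
    return low, high

def dim_perp_formula(n, m, d):
    """Formula (6.1): sum over k_1 + ... + k_s = m with sum_i i*k_i > d of
    prod_i C(n_i - n_{i-1}, k_i)."""
    s = len(n)
    layer = [n[i] - (n[i - 1] if i > 0 else 0) for i in range(s)]  # n_i - n_{i-1}
    total = 0
    def rec(i, left, degree, prod):
        nonlocal total
        if i == s:
            if left == 0 and degree > d:
                total += prod
            return
        for k in range(0, min(left, layer[i]) + 1):
            rec(i + 1, left - k, degree + (i + 1) * k, prod * comb(layer[i], k))
    rec(0, m, 0, 1)
    return total

# Engel structure: growth vector n = (2, 3, 4), adapted-basis degrees (1, 1, 2, 3);
# surfaces (m = 2) of degree d = 4.
degs = [1, 1, 2, 3]
low, ell = dims_by_enumeration(degs, 2, 4)
print(low, ell, dim_perp_formula((2, 3, 4), 2, 4))  # 5 1 1
```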

6.1 The admissibility system with respect to an adapted local basis

In the same conditions as in the previous subsection, let \(\ell =\dim (\Lambda _m^d(U)_p^{\perp })\) and \((X_{J_1},\ldots ,X_{J_{\ell }})\) an orthonormal basis of \(\Lambda _m^d(U)_p^{\perp }\). Any vector field \(V\in {\mathfrak {X}}({\bar{M}},N)\) can be expressed in the form

$$\begin{aligned} V=\sum _{h=1}^n f_h X_h , \end{aligned}$$

where \(f_1,\ldots ,f_n \in C^{\infty }(\Phi ^{-1}(U), {{\mathbb {R}}})\). We take \({\bar{p}}_0\in \Phi ^{-1}(U)\) and, reducing U if necessary, a local adapted basis \((E_i)_i\) of \(T{\bar{M}}\) in \(\Phi ^{-1}(U)\). Hence the admissibility system (5.3) is equivalent to

$$\begin{aligned} \sum _{j=1}^m \sum _{h=1}^n c_{i j h} \, E_j (f_h) + \sum _{h=1}^n \beta _{i h} \, f_h =0, \quad i=1,\ldots ,\ell , \end{aligned}$$
(6.2)

where

$$\begin{aligned} c_{i j h}({\bar{p}})= \langle e_1 \wedge \ldots \wedge \overset{(j)}{(X_h)_p} \wedge \ldots \wedge e_m, (X_{J_i})_p \rangle , \end{aligned}$$
(6.3)

and

$$\begin{aligned} \begin{aligned} \beta _{i h}({\bar{p}})&=\langle e_1\wedge \ldots \wedge e_m,\nabla _{(X_h)_p} X_{J_i}\rangle + \\&\qquad + \sum _{j=1}^m \langle e_1 \wedge \ldots \wedge \nabla _{e_j} X_h \wedge \ldots \wedge e_m, (X_{J_i})_p\rangle \\&\quad =\sum _{j=1}^m \langle e_1 \wedge \ldots \wedge [E_j,X_h] (p) \wedge \ldots \wedge e_m, (X_{J_i})_p\rangle . \end{aligned} \end{aligned}$$
(6.4)

In the above equation we have extended the vector fields \(E_i\) to a neighborhood of \(p_0=\Phi ({\bar{p}}_0)\) in N, denoting the extensions in the same way.

Definition 6.1

Let \({\tilde{m}}_{\alpha }(p)\) be the dimension of \({\tilde{{\mathcal {H}}}}_p^{\alpha }=T_p M \cap {\mathcal {H}}^{\alpha }_p\), \(\alpha \in \{1,\ldots ,s\}\), where we consider the flag defined in (2.7). Then we set

$$\begin{aligned} \iota _0(U)=\max _{p \in U}\, \min \{ \alpha \in \{1,\ldots ,s\} : {\tilde{m}}_{\alpha } (p) \ne 0 \} \end{aligned}$$

and

$$\begin{aligned} \rho :=n_{\iota _0}=\dim ({\mathcal {H}}^{\iota _0})\geqslant \dim ({\mathcal {H}}^1)=n_1. \end{aligned}$$
(6.5)

Remark 6.2

In the differential system (6.2), derivatives of the function \(f_h\) appear only when some coefficient \(c_{ijh}({\bar{p}})\) is different from 0. For fixed h, notice that \(c_{i j h}({\bar{p}})=0\), for all \(i=1,\ldots ,\ell \), \(j=1,\ldots ,m\) and \({\bar{p}}\) in \(\Phi ^{-1}(U)\) if and only if

$$\begin{aligned} \deg (e_1 \wedge \cdots \wedge \overset{(j)}{(X_h)_p} \wedge \cdots \wedge e_m)\leqslant d,\quad \text {for all } 1\leqslant j\leqslant m, p \in \Phi ^{-1}(U). \end{aligned}$$

This property is equivalent to

$$\begin{aligned} \deg ((X_h)_p)\leqslant \deg (e_j), \text {for all } 1\leqslant j\leqslant m, p \in \Phi ^{-1}(U). \end{aligned}$$

So we have \(c_{ijh}=0\) in \(\Phi ^{-1}(U)\) for all ij if and only if \(\deg (X_h)\leqslant \iota _0(U)\).

We write

$$\begin{aligned} V=\sum _{h=1}^{\rho } g_h X_h+ \sum _{r=\rho +1}^n f_r X_r , \end{aligned}$$

so that the local system (6.2) can be written as

$$\begin{aligned} \sum _{j=1}^m \sum _{r=\rho +1}^n c_{i j r} E_j (f_r) + \sum _{r=\rho +1}^n b_{i r} f_r + \sum _{h=1}^\rho a_{i h} g_h =0, \end{aligned}$$
(6.6)

where \(c_{i j r}\) is defined in (6.3) and, for \(1\leqslant i\leqslant \ell \),

$$\begin{aligned} a_{ih}=\beta _{ih},\quad b_{ir}=\beta _ {ir},\ 1\leqslant h\leqslant \rho ,\ \rho +1\leqslant r\leqslant n, \end{aligned}$$
(6.7)

where \(\beta _{ij}\) is defined in (6.4). We denote by B the \(\ell \times (n-\rho )\) matrix whose entries are \(b_{i r}\), by A the \(\ell \times \rho \) matrix whose entries are \(a_{i h}\), and, for \(j=1,\ldots ,m\), by \(C_j\) the \(\ell \times (n- \rho )\) matrix \(C_j=(c_{ijr})_{r=\rho +1,\ldots ,n}^{i=1,\ldots ,\ell }\). Setting

$$\begin{aligned} F=\begin{pmatrix} f_{\rho +1} \\ \vdots \\ f_n \end{pmatrix}, \quad G=\begin{pmatrix} g_1 \\ \vdots \\ g_{\rho } \end{pmatrix} \end{aligned}$$
(6.8)

the admissibility system (6.2) is given by

$$\begin{aligned} \sum _{j=1}^m C_j E_j(F)+ BF+ AG=0. \end{aligned}$$
(6.9)

6.2 Independence on the metric

Let g and \({\tilde{g}}\) be two Riemannian metrics on N, and let \((X_i)\) and \((Y_i)\) be orthonormal adapted bases with respect to g and \({\tilde{g}}\), respectively. Clearly we have

$$\begin{aligned} Y_i=\sum _{j=1}^n d_{ji} X_j, \end{aligned}$$

for some invertible matrix \(D=(d_{ji})_{j=1,\ldots ,n}^{i=1,\ldots ,n}\) of order n. Since \((X_i)\) and \((Y_i)\) are adapted bases, D is a block matrix

$$\begin{aligned} D=\begin{pmatrix} D_{1 1} &{} \quad D_{1 2}&{} \quad D_{1 3} &{} \quad \ldots &{} D_{1 s} \\ 0 &{} \quad D_{2 2}&{} \quad D_{2 3} &{} \quad \ldots &{} \quad D_{2 s}\\ 0&{} \quad 0 &{} \quad D_{3 3}&{} \quad \ldots &{} \quad D_{3 s}\\ 0&{} \quad 0 &{} \quad 0&{} \quad \ddots &{} \quad \vdots \\ 0 &{} \quad 0 &{} \quad 0 &{} \quad 0&{} \quad D_{s s}\\ \end{pmatrix}, \end{aligned}$$

where \(D_{i i}\), for \(i=1,\ldots ,s\), are square matrices of order \(n_i-n_{i-1}\) (with \(n_0=0\)). Let \(\rho \) be the integer defined in (6.5); then we define \(D_h=(d_{ji})_{i,j=1,\ldots ,\rho }\), \(D_v=(d_{ji})_{i,j=\rho +1,\ldots ,n}\) and \(D_{hv}=(d_{ji})_{j=1,\ldots ,\rho }^{i=\rho +1,\ldots ,n}\). Let us express V as a linear combination of \((Y_i)\)

$$\begin{aligned} V=\sum _{h=1}^{\rho } {\tilde{g}}_h Y_h+\sum _{r=\rho +1}^n {\tilde{f}}_r Y_r, \end{aligned}$$

then we set

$$\begin{aligned} {\tilde{F}}=\begin{pmatrix} {\tilde{f}}_{\rho +1} \\ \vdots \\ {\tilde{f}}_n \end{pmatrix}, \quad {\tilde{G}}=\begin{pmatrix} {\tilde{g}}_1 \\ \vdots \\ {\tilde{g}}_{\rho } \end{pmatrix} \end{aligned}$$

and F and G as in (6.8).

Given \(I=(i_1,\ldots , i_m)\) with \(i_1< \ldots < i_m\), we have

$$\begin{aligned} Y_I&=Y_{i_1} \wedge \cdots \wedge Y_{i_m}=\sum _{j_1=1}^n \cdots \sum _{j_m=1}^n d_{j_1 i_1} \cdots d_{j_m i_m} X_{j_1} \wedge \cdots \wedge X_{j_m}\\&= \sum _{j_1< \ldots < j_m} \lambda ^{j_1 \ldots j_m}_{i_1 \ldots i_m} X_{j_1} \wedge \cdots \wedge X_{j_m}= \sum _{J} \lambda _{J I} X_J. \end{aligned}$$

Since the adapted change of basis preserves the degree of the m-vectors, the square matrix \(\varvec{\Lambda }=(\lambda _{J I} )\) of order \(\left( {\begin{array}{c}n\\ m\end{array}}\right) \) acting on the m-vectors is given by

$$\begin{aligned} {\varvec{\Lambda }}=\begin{pmatrix} \varvec{\Lambda }_h &{} \quad \varvec{\Lambda }_{hv}\\ 0 &{} \quad \varvec{\Lambda }_v \end{pmatrix}, \end{aligned}$$
(6.10)

where \(\varvec{\Lambda }_h\) and \(\varvec{\Lambda }_v\) are square matrices of order \(\left( {\begin{array}{c}n\\ m\end{array}}\right) -\ell \) and \(\ell \), respectively, and \(\varvec{\Lambda }_{hv}\) is a matrix of order \(\left( \left( {\begin{array}{c}n\\ m\end{array}}\right) -\ell \right) \times \ell \). Moreover, the matrix \(\varvec{\Lambda }\) is invertible, since both \(\{X_J\}\) and \(\{Y_I\}\) are bases of the vector space of m-vectors.

Remark 6.3

One can easily check that the inverse of \(\varvec{\Lambda }\) is given by the block matrix

$$\begin{aligned} \varvec{\Lambda }^{-1}=\begin{pmatrix} \varvec{\Lambda }_h^{-1} &{} \quad - \varvec{\Lambda }_h^{-1} \, \varvec{\Lambda }_{hv} \varvec{\Lambda }_{v}^{-1}\\ 0 &{} \quad \varvec{\Lambda }_v^{-1} \end{pmatrix}. \end{aligned}$$

Setting \({\tilde{\mathbf {G}}}=({\tilde{g}}(X_I,X_J))\) we have

$$\begin{aligned} {\tilde{\mathbf {G}}}=\begin{pmatrix} {\tilde{\mathbf {G}}}_h &{} {\tilde{\mathbf {G}}}_{hv}\\ ({\tilde{\mathbf {G}}}_{hv})^t &{} {\tilde{\mathbf {G}}}_v \end{pmatrix}=(\varvec{\Lambda }^{-1})^t (\varvec{\Lambda }^{-1}). \end{aligned}$$

Thus it follows

$$\begin{aligned} {\tilde{\mathbf {G}}}_v&= (\varvec{\Lambda }_v^{-1})^t \varvec{\Lambda }_v^{-1} + (\varvec{\Lambda }_v^{-1})^t \varvec{\Lambda }_{hv}^t ( \varvec{\Lambda }_h^{-1})^t \, \varvec{\Lambda }_h^{-1} \varvec{\Lambda }_{hv} \varvec{\Lambda }_v^{-1},\\ {\tilde{\mathbf {G}}}_{hv}&=- (\varvec{\Lambda }_h^{-1})^t\varvec{\Lambda }_h^{-1} \varvec{\Lambda }_{hv} \varvec{\Lambda }_v^{-1},\\ {\tilde{\mathbf {G}}}_{h}&= (\varvec{\Lambda }_h^{-1})^t \varvec{\Lambda }_h^{-1} . \end{aligned}$$
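Remark 6.3 is the standard inverse formula for a \(2\times 2\) block upper-triangular matrix. As a sanity check, the following sketch (plain Python, added by the editors, with small arbitrary sample blocks) assembles \(\varvec{\Lambda }\) and the claimed inverse and verifies that their product is the identity.

```python
def matmul(A, B):
    """Product of two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def inv2(M):
    """Inverse of a 2x2 matrix."""
    a, b = M[0]
    c, d = M[1]
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

# Arbitrary sample blocks: Lambda_h and Lambda_v invertible, Lambda_hv arbitrary.
Lh = [[2.0, 1.0], [0.0, 3.0]]
Lv = [[1.0, -1.0], [2.0, 0.0]]
Lhv = [[0.5, -2.0], [1.0, 4.0]]

Lh_i, Lv_i = inv2(Lh), inv2(Lv)
# Upper-right block of the claimed inverse: -Lambda_h^{-1} Lambda_hv Lambda_v^{-1}.
UR = [[-x for x in row] for row in matmul(matmul(Lh_i, Lhv), Lv_i)]

# Assemble Lambda and the claimed inverse as 4x4 matrices.
L = [Lh[0] + Lhv[0], Lh[1] + Lhv[1], [0.0, 0.0] + Lv[0], [0.0, 0.0] + Lv[1]]
L_inv = [Lh_i[0] + UR[0], Lh_i[1] + UR[1], [0.0, 0.0] + Lv_i[0], [0.0, 0.0] + Lv_i[1]]

P = matmul(L, L_inv)
print(all(abs(P[i][j] - (1.0 if i == j else 0.0)) < 1e-12
          for i in range(4) for j in range(4)))  # True
```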

Let \({\tilde{A}}\) be the associated matrix

$$\begin{aligned} {\tilde{A}}=\Big ( {\tilde{g}} \Big ( Y_{J_i} , \sum _{j=1}^m E_1 \wedge \ldots \wedge [E_j, Y_h](p) \wedge \ldots \wedge E_m \Big ) \Big )^{h=1,\ldots ,\rho }_{i=1,\ldots ,\ell }. \end{aligned}$$

Setting

$$\begin{aligned} \omega _{J r}=\sum _{j=1}^mg( X_J, E_1 \wedge \cdots \wedge [E_j, X_r] \wedge \cdots \wedge E_m), \end{aligned}$$

and \(\Omega =\begin{pmatrix}\Omega _h&\quad \Omega _v\end{pmatrix}=(\omega _{J r})_{\deg (J)\leqslant d}^{r=1,\ldots ,n}\), a straightforward computation shows

$$\begin{aligned} \begin{aligned} {\tilde{A}}&= (\varvec{\Lambda }_{hv})^t \left( {\tilde{\mathbf {G}}}_{h} \, \Omega _h \, D_h+ {\tilde{\mathbf {G}}}_{hv} \, A \, D_h + {\tilde{\mathbf {G}}}_{h} \, \sum _{j=1}^m C_j E_j( D_h ) \right) \\&\quad + (\varvec{\Lambda }_v)^t \left( ({\tilde{\mathbf {G}}}_{hv})^t \, \Omega _h D_h+ {\tilde{\mathbf {G}}}_v \, A \, D_h+({\tilde{\mathbf {G}}}_{hv})^t \sum _{j=1}^m C_j E_j( D_h ) \right) \end{aligned} \end{aligned}$$

By Remark 6.3 we obtain

$$\begin{aligned} \begin{aligned} {\tilde{A}}&= (\varvec{\Lambda }_{hv})^t \Big ((\varvec{\Lambda }_h^{-1})^t \varvec{\Lambda }_h^{-1} \,( \Omega _h \, D_h + \sum _{j=1}^m C_j E_j( D_h ) )\\&\quad -(\varvec{\Lambda }_h^{-1})^t \varvec{\Lambda }_h^{-1} \varvec{\Lambda }_{hv} \varvec{\Lambda }_v^{-1} \, A \, D_h \Big )\\&\quad - \Big ( \varvec{\Lambda }_{hv}^t (\varvec{\Lambda }_h^{-1})^t \varvec{\Lambda }_h^{-1} \, (\Omega _h D_h + \sum _{j=1}^m C_j E_j( D_h ))\Big )\\&\quad +\left( \varvec{\Lambda }_v^{-1} + \varvec{\Lambda }_{hv}^t ( \varvec{\Lambda }_h^{-1})^t \, \varvec{\Lambda }_h^{-1} \varvec{\Lambda }_{hv} \varvec{\Lambda }_v^{-1} \right) \, A \, D_h\\&= \varvec{\Lambda }_v^{-1} \, A \, D_h. \end{aligned} \end{aligned}$$
(6.11)

As a preliminary step, we notice that for \(h=1,\ldots ,\rho \) we have

$$\begin{aligned} \begin{aligned} {\tilde{c}}_{i j h}&= {\tilde{g}} (Y_{J_i} , E_1 \wedge \ldots \wedge \overset{(j)}{Y_h} \wedge \ldots \wedge E_m)\\&=\sum _I \sum _{\deg (J)\leqslant d} \sum _{k=1}^{\rho } \lambda _{I J_i} \, {\tilde{g}}(X_I, X_J) c_{J j k}\, d_{kh}\\&=\sum _{\deg (I)\leqslant d} \sum _{\deg (J)\leqslant d} \sum _{k=1}^{\rho } \lambda _{I J_i} \, {\tilde{g}}(X_I, X_J) c_{J j k}\, d_{kh}+\\&\quad + \sum _{\deg (I)> d} \sum _{\deg (J)\leqslant d} \sum _{k=1}^{\rho } \lambda _{I J_i} \, {\tilde{g}}(X_I, X_J) c_{J j k}\, d_{kh}. \end{aligned} \end{aligned}$$
(6.12)

Therefore, setting

$$\begin{aligned} {\tilde{C}}^{H}_j=\Big ( {\tilde{g}} (Y_{J} , E_1 \wedge \ldots \wedge \overset{(j)}{Y_h} \wedge \ldots \wedge E_m) \Big )_{\deg (J)\leqslant d}^{h=1,\ldots ,\rho } \end{aligned}$$

and

$$\begin{aligned} {\tilde{C}}^{0}_j=\Big ( {\tilde{g}} (Y_{J_i} , E_1 \wedge \ldots \wedge \overset{(j)}{Y_h} \wedge \ldots \wedge E_m) \Big )_{i=1,\ldots ,\ell }^{h=1,\ldots ,\rho }, \end{aligned}$$

from (6.12) we obtain

$$\begin{aligned} {\tilde{C}}_j^{0}= (\varvec{\Lambda }_{hv}^t {\tilde{\mathbf {G}}}_h+ \varvec{\Lambda }_v^t ({\tilde{\mathbf {G}}}_{hv})^t )( C_j^{H} D_h)=0. \end{aligned}$$

Let \({\tilde{C}}_j\) be the associated matrix

$$\begin{aligned} {\tilde{C}}_j=\Big ( {\tilde{g}} (Y_{J_i} , E_1 \wedge \ldots \wedge \overset{(j)}{Y_h} \wedge \ldots \wedge E_m) \Big )_{i=1,\ldots ,\ell }^{h=\rho +1,\ldots ,n}. \end{aligned}$$

Setting

$$\begin{aligned} {\tilde{C}}^{HV}_j=\Big ( {\tilde{g}} (Y_{J} , E_1 \wedge \ldots \wedge \overset{(j)}{Y_h} \wedge \ldots \wedge E_m) \Big )_{\deg (J)\leqslant d}^{h=\rho +1,\ldots ,n}, \end{aligned}$$

we immediately obtain the following equality

$$\begin{aligned} \begin{aligned} {\tilde{C}}_j&= (\varvec{\Lambda }_{hv})^t \left( {\tilde{\mathbf {G}}}_{h} (C_j^H D_{hv}+C_j^{HV} D_v)+ {\tilde{\mathbf {G}}}_{hv} C_j D_v\right) \\&\quad + (\varvec{\Lambda }_v)^t \left( ({\tilde{\mathbf {G}}}_{hv})^t (C_j^H D_{hv}+C_j^{HV} D_v)+ {\tilde{\mathbf {G}}}_v C_j D_v \right) \\&=\varvec{\Lambda }_v^{-1} C_j D_v . \end{aligned} \end{aligned}$$
(6.13)

Let \({\tilde{B}}\) be the associated matrix

$$\begin{aligned} {\tilde{B}}=\Big ( {\tilde{g}} \Big ( Y_{J_i} , \sum _{j=1}^m E_1 \wedge \ldots \wedge [E_j, Y_h] \wedge \ldots \wedge E_m \Big ) \Big )^{h=\rho +1,\ldots ,n}_{i=1,\ldots ,\ell }. \end{aligned}$$

A straightforward computation shows

$$\begin{aligned} \begin{aligned} {\tilde{B}}&= (\varvec{\Lambda }_{hv})^t \Big ( {\tilde{\mathbf {G}}}_h ( \Omega _h \, D_{hv}+\Omega _v D_v + \sum _{j=1}^m C_j^H E_j(D_{hv})+ C_j^{HV} E_j( D_h ) )\\&\quad + {\tilde{\mathbf {G}}}_{hv}( A D_{hv}+ B D_v+ \sum _{j=1}^m C_j E_j(D_v) ) \Big )\\&\quad + (\varvec{\Lambda }_v)^t \Big ( {\tilde{\mathbf {G}}}_{hv}^t ( \Omega _h \, D_{hv}+\Omega _v D_v + \sum _{j=1}^m C_j^H E_j(D_{hv})+ C_j^{HV} E_j( D_h ) )\\&\quad + {\tilde{\mathbf {G}}}_{v}( A D_{hv}+ B D_v+ \sum _{j=1}^m C_j E_j(D_v) ) \Big )\\ \end{aligned} \end{aligned}$$

By Remark 6.3 we obtain

$$\begin{aligned} {\tilde{B}}= \varvec{\Lambda }_v^{-1} \, A \, D_{hv} + \varvec{\Lambda }_v^{-1} B D_v +\sum _{j=1}^m \varvec{\Lambda }_v^{-1} C_j E_j (D_v) . \end{aligned}$$
(6.14)

Finally, we have \( G=D_h {\tilde{G}}+ D_{hv} {\tilde{F}} \) and \(F=D_v {\tilde{F}}\).

Proposition 6.4

Let g and \({\tilde{g}}\) be two different metrics. Then a vector field V is admissible w.r.t. g if and only if V is admissible w.r.t. \({\tilde{g}}\).

Proof

We recall that an admissible vector field

$$\begin{aligned} V=\sum _{i=1}^{\rho } g_i X_i + \sum _{i=\rho +1}^n f_i X_i \end{aligned}$$

w.r.t. g satisfies

$$\begin{aligned} \sum _{j=1}^m C_j E_j(F) + BF +AG=0. \end{aligned}$$
(6.15)

By (6.11), (6.14) and (6.13) we have

$$\begin{aligned} \begin{aligned}&\sum _{j=1}^m {\tilde{C}}_j E_j({\tilde{F}}) + {\tilde{B}}{\tilde{F}} +{\tilde{A}}{\tilde{G}}=\varvec{\Lambda }_v^{-1} \left( \sum _{j=1}^m C_j (D_v E_j({\tilde{F}} )+ E_j(D_v) {\tilde{F}}) \right. \\&\left. \quad +A \, D_{hv} {\tilde{F}} + A \, D_h {\tilde{G}}+ B D_v {\tilde{F}} \right) =\varvec{\Lambda }_v^{-1} \left( \sum _{j=1}^m C_j E_j(F) + BF +AG\right) \end{aligned} \end{aligned}$$
(6.16)

In the previous equation we used that \(G=D_h {\tilde{G}}+ D_{hv} {\tilde{F}}\), \(F=D_v {\tilde{F}}\) and

$$\begin{aligned} E_j(D_v) D_v^{-1} + D_v E_j(D_v^{-1})=0, \end{aligned}$$

for all \(j=1,\ldots ,m\), which follows from \(D_v D_v^{-1}=I_{n-\rho }\). Hence the admissibility system (6.15) w.r.t. g vanishes if and only if the admissibility system (6.16) w.r.t. \({\tilde{g}}\) does. \(\square \)
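The matrix identity \(E_j(D_v)\,D_v^{-1}+D_v\,E_j(D_v^{-1})=0\) used above is nothing but the derivative of \(D_v D_v^{-1}=I_{n-\rho }\). A minimal symbolic check of this differentiation rule, with a generic parameter-dependent invertible matrix standing in for \(D_v\) (the specific entries are illustrative assumptions, not taken from the paper), can be sketched as:

```python
import sympy as sp

t = sp.symbols('t')

# An illustrative invertible parameter-dependent matrix standing in for D_v;
# the entries below are assumptions made only for this check.
D = sp.Matrix([[sp.exp(t), t],
               [0, 1 + t**2]])
Dinv = D.inv()

# Differentiating D * D^{-1} = I in the parameter t gives
# D'(t) D^{-1}(t) + D(t) (D^{-1})'(t) = 0.
expr = D.diff(t) * Dinv + D * Dinv.diff(t)
assert expr.applyfunc(sp.simplify) == sp.zeros(2, 2)
```

The same cancellation holds with any derivation \(E_j\) in place of \(d/dt\), which is exactly how it is applied in the proof.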

Remark 6.5

When the metric g is fixed and \((X_i)\) and \((Y_i)\) are orthonormal adapted bases w.r.t. g, the matrix D is a block diagonal matrix given by

$$\begin{aligned} D=\begin{pmatrix} D_h &{} 0 \\ 0 &{} D_v \end{pmatrix}, \end{aligned}$$

where \(D_h\) and \(D_v\) are square orthogonal matrices of orders \(\rho \) and \((n-\rho )\), respectively. From equations (6.11), (6.13) and (6.14) we immediately obtain the following equalities

$$\begin{aligned} \begin{aligned} {\tilde{F}}&=D_v^{-1} F, \\ {\tilde{G}}&=D_h^{-1} G, \\ {\tilde{A}}&=\varvec{\Lambda }_v^{-1} \ A \ D_h, \\ {\tilde{B}}&=\varvec{\Lambda }_v^{-1} B D_v+ \sum _{j=1}^m \varvec{\Lambda }_{v}^{-1} C_j E_j (D_v), \\ {\tilde{C}}_j&=\varvec{\Lambda }_{v}^{-1} C_j D_v. \end{aligned} \end{aligned}$$
(6.17)

6.3 The admissibility system with respect to the intrinsic basis of the normal space

Let \(\ell \) be the dimension of \(\Lambda _m^d(U)_p^{\perp }\) and \((X_{J_1},\ldots ,X_{J_{\ell }})\) an orthonormal basis of simple m-vector fields. Let \({\bar{p}}_0\) be a point in \({\bar{M}}\) and \(\Phi ({\bar{p}}_0)=p_0 \). Let \(e_1,\ldots ,e_m\) be an adapted basis of \(T_{p_0} M\) that we extend to adapted vector fields \(E_1,\ldots ,E_m\) tangent to M on U. Let \(v_{m+1},\ldots ,v_n\) be a basis of \((T_{p_0} M)^{\perp }\) that we extend to vector fields \(V_{m+1}, \ldots ,V_n\) normal to M on U, possibly after shrinking the neighborhood U of \(p_0\) in N. Then any vector field in \({\mathfrak {X}}(\Phi ^{-1}(U) ,N)\) is given by

$$\begin{aligned} V=\sum _{j=1}^m \psi _j E_j+\sum _{h=m+1}^n \psi _h V_h , \end{aligned}$$

where \(\psi _1,\ldots , \psi _n \in C^{r}( \Phi ^{-1}(U) , {{\mathbb {R}}})\). By Proposition 5.5 we deduce that V is admissible if and only if \(V^{\perp }=\sum _{h=m+1}^n \psi _h V_h\) is admissible. Hence we obtain that the system (5.3) is equivalent to

$$\begin{aligned} \sum _{j=1}^m \sum _{h=m+1}^n \xi _{i j h} E_j (\psi _h) + \sum _{h=m+1}^n {\hat{\beta }}_{i h} \psi _h =0, \quad i=1,\ldots ,\ell , \end{aligned}$$
(6.18)

where

$$\begin{aligned} \xi _{i j h}({\bar{p}})= \langle e_1 \wedge \ldots \wedge \overset{(j)}{v_h} \wedge \ldots \wedge e_m, (X_{J_i})_p \rangle \end{aligned}$$
(6.19)

and

$$\begin{aligned} \begin{aligned} {\hat{\beta }}_{i h}({\bar{p}})&=\langle e_1\wedge \ldots \wedge e_m,\nabla _{v_h} X_{J_i}\rangle + \\&\quad + \sum _{j=1}^m \langle e_1 \wedge \ldots \wedge \nabla _{e_j} V_h \wedge \ldots \wedge e_m, (X_{J_i})_p\rangle \\&=\sum _{j=1}^m \langle e_1 \wedge \ldots \wedge [E_j, V_h] (p) \wedge \ldots \wedge e_m, (X_{J_i})_p\rangle . \end{aligned} \end{aligned}$$
(6.20)

Definition 6.6

Let \(\iota _0(U)\) be the integer defined in (6.1). Then we set \(k:=n_{\iota _0}-{\tilde{m}}_{\iota _0}\).

Assume that \(k\geqslant 1\), and write

$$\begin{aligned} V^{\perp }=\sum _{h=m+1}^{m+k} \phi _h \, V_h+ \sum _{r=m+k+1}^n \psi _r \, V_r , \end{aligned}$$

Then the local system (6.18) is equivalent to

$$\begin{aligned} \sum _{j=1}^m \sum _{r=\rho +1}^n \xi _{i j r} \, E_j (\psi _r) + \sum _{r=\rho +1}^n \beta _{i r} \, \psi _r + \sum _{h=m+1}^{m+k} \alpha _{i h} \, \phi _h =0, \end{aligned}$$
(6.21)

where \(\xi _{i j r}\) is defined in (6.19) and, for \(1\leqslant i\leqslant \ell \),

$$\begin{aligned} \alpha _{ih}={\hat{\beta }}_{ih},\quad \beta _{ir}={\hat{\beta }}_ {ir},\ m+1\leqslant h\leqslant m+k,\ m+k+1\leqslant r\leqslant n. \end{aligned}$$
(6.22)

We denote by \(B^{\perp }\) the \(\ell \times (n-m-k)\) matrix whose entries are \(\beta _{i r}\), by \(A^{\perp }\) the \(\ell \times k\) matrix whose entries are \(\alpha _{i h}\) and, for every \(j=1,\ldots ,m\), by \(C_j^{\perp }\) the \(\ell \times (n-m-k)\) matrix with entries \((\xi _{ijh})_{h=m+k+1,\ldots ,n}^{i=1,\ldots ,\ell }\). Setting

$$\begin{aligned} F^{\perp }=\begin{pmatrix} \psi _{m+k+1} \\ \vdots \\ \psi _n \end{pmatrix}, \quad G^{\perp }=\begin{pmatrix} \phi _{m+1} \\ \vdots \\ \phi _{m+k} \end{pmatrix} \end{aligned}$$
(6.23)

the admissibility system (6.2) is given by

$$\begin{aligned} \sum _{j=1}^m C^{\perp }_j E_j(F^{\perp })+ B^{\perp }F^{\perp }+ A^{\perp } G^{\perp }=0. \end{aligned}$$
(6.24)

Remark 6.7

We can define the matrices \(A^\top \), \(B^\top \), \(C^\top \) with respect to the tangent projection \(V^\top \) in a similar way to the matrices \(A^\bot \), \(B^\bot \), \(C^\bot \). First of all we notice that the entries

$$\begin{aligned} \xi ^{\top }_{i j \nu }({\bar{p}})= \langle e_1 \wedge \ldots \wedge \overset{(j)}{e_{\nu }} \wedge \ldots \wedge e_m, (X_{J_i})_p \rangle \end{aligned}$$

for \(i=1,\ldots ,\ell \) and \(j,\nu =1,\ldots ,m\) are all equal to zero. Therefore the matrices \(C^{\top }\) and \(B^{\top }\) are equal to zero. On the other hand, \(A^{\top }\) is the \((\ell \times m)\)-matrix whose entries are given by

$$\begin{aligned} \alpha ^{\top }_{i \nu }({\bar{p}})=\sum _{j=1}^m \langle e_1 \wedge \ldots \wedge [E_j, E_{\nu }] (p) \wedge \ldots \wedge e_m, (X_{J_i})_p\rangle \end{aligned}$$

for \(i=1, \ldots ,\ell \) and \(\nu =1,\ldots ,m\). Frobenius Theorem implies that the Lie brackets \([E_j,E_\nu ]\) are all tangent to M for \(j,\nu =1,\ldots ,m\), and so all the entries of \(A^{\top }\) are equal to zero.

7 Integrability of admissible vector fields

In general, given an admissible vector field V, the existence of an admissible variation with associated variational vector field V is not guaranteed. The next definition is a sufficient condition to ensure the integrability of admissible vector fields.

Definition 7.1

Let \(\Phi :{\bar{M}}\rightarrow N\) be an immersion of degree d of an m-dimensional manifold into a graded manifold endowed with a Riemannian metric g. Let \(\ell =\dim (\Lambda _m^d(U)_q^\perp )\) for all \(q\in N\) and let \(\rho =n_{\iota _0}\) be as in (6.1). When \(\rho \geqslant \ell \) we say that \(\Phi \) is strongly regular at \({\bar{p}} \in {\bar{M}}\) if

$$\begin{aligned} \text {rank} (A({\bar{p}}))=\ell , \end{aligned}$$

where A is the matrix appearing in the admissibility system (6.9).

The rank of A is independent of the local adapted basis chosen to compute the admissibility system (6.9) because of Eq. (6.17). Next we prove that strong regularity is a sufficient condition to ensure local integrability of admissible vector fields.
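The rank invariance can also be seen directly from the transformation rule \({\tilde{A}}=\varvec{\Lambda }_v^{-1} \, A \, D_h\) in (6.17): multiplying by invertible factors on either side does not change the rank. A minimal numerical sketch, with illustrative sizes and entries (all of them assumptions made for the example only), is:

```python
import numpy as np

# Illustrative sizes: ell = 2, rho = 4 (assumptions for this sketch).
A = np.array([[1.0, 2.0, 0.0, 0.0],
              [0.0, 1.0, 1.0, 0.0]])

# Invertible factors standing in for Lambda_v^{-1} and D_h.
L = np.array([[2.0, 1.0],
              [0.0, 3.0]])                    # triangular, nonzero diagonal
D = np.eye(4) + np.diag([0.5, 0.5, 0.5], k=1)  # upper bidiagonal, invertible

A_tilde = L @ A @ D   # the rule A~ = Lambda_v^{-1} A D_h from (6.17)

# The rank is unchanged by the invertible change of bases.
assert np.linalg.matrix_rank(A_tilde) == np.linalg.matrix_rank(A) == 2
```

This is why the strong regularity condition \(\text {rank}(A({\bar{p}}))=\ell \) is well posed independently of the chosen adapted basis.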

Theorem 7.2

Let \(\Phi :{\bar{M}}\rightarrow N\) be a smooth immersion of an m-dimensional manifold into an equiregular graded manifold N endowed with a Riemannian metric g. Assume that the immersion \(\Phi \) of degree d is strongly regular at \({\bar{p}}\). Then there exists an open neighborhood \(W_{{\bar{p}}}\) of \({\bar{p}}\) such that every admissible vector field V with compact support on \(W_{{\bar{p}}}\) is integrable.

Proof

Let \(p=\Phi ({\bar{p}})\). First of all we consider an open neighborhood \(U_{p} \subset N\) of p on which an adapted orthonormal frame \((X_1,\ldots , X_n)\) is well defined. Since \(\Phi \) is strongly regular at \({\bar{p}}\) there exist indices \(h_1,\ldots , h_{\ell }\) in \(\{1,\ldots ,\rho \}\) such that the submatrix

$$\begin{aligned} {\hat{A}} ({\bar{p}} )=\begin{pmatrix} a_{1h_1} ({\bar{p}} )&{} \cdots &{} a_{1 h_{\ell }}({\bar{p}} )\\ \vdots &{} \ddots &{} \vdots \\ a_{\ell h_1}({\bar{p}} )&{} \cdots &{} a_{\ell h_{\ell }}({\bar{p}} ) \end{pmatrix} \end{aligned}$$

is invertible. By a continuity argument there exists an open neighborhood \(W_{{\bar{p}} } \subset \Phi ^{-1}( U_p )\) such that \(\det ({\hat{A}} ({\bar{q}}))\ne 0\) for each \({\bar{q}} \in W_{{\bar{p}} }\).

We can rewrite the system (6.9) in the form

$$\begin{aligned} \begin{pmatrix} g_{h_1}\\ \vdots \\ g_{h_{\ell }} \end{pmatrix}= -{\hat{A}}^{-1} \left( \sum _{j=1}^m C_j E_j (F) + B F +{\tilde{A}} \begin{pmatrix} g_{i_1}\\ \vdots \\ g_{i_{\rho -\ell }} \end{pmatrix} \right) , \end{aligned}$$
(7.1)

where \(i_1, \ldots , i_{\rho -\ell }\) are the indices of the columns of A that do not appear in \({\hat{A}}\) and \({\tilde{A}}\) is the \(\ell \times (\rho -\ell )\) matrix given by the columns \(i_1, \ldots , i_{\rho -\ell }\) of A. The vectors \((E_i)_i\) form an orthonormal basis of \(T{\bar{M}}\) near \({\bar{p}}\).

On the neighborhood \(W_{{\bar{p}}}\) we define the following spaces

1. \({\mathfrak {X}}_0^r(W_{{\bar{p}}},N)\), \(r\geqslant 0\), is the set of \(C^r\) vector fields compactly supported on \(W_{{\bar{p}}}\) taking values in TN.

2. \({\mathcal {A}}_0^r(W_{{\bar{p}}},N)=\{Y \in {\mathfrak {X}}_0^r(W_{{\bar{p}}},N) : Y=\sum _{s=1}^{\rho } g_{s} X_{s}\}\).

3. \({\mathcal {A}}_{1, 0}^r(W_{{\bar{p}}},N)=\{Y\in {\mathcal {A}}_0^r(W_{{\bar{p}}},N) : Y=\sum _{i=1}^{\ell } g_{h_i} X_{h_i} \}.\)

4. \({\mathcal {A}}_{2,0}^r(W_{{\bar{p}}},N)=\{Y \in {\mathcal {A}}_0^r(W_{{\bar{p}}},N) : \langle Y,X\rangle =0 \ \forall \ X \in {\mathcal {A}}_{1,0}^r(W_{{\bar{p}}},N)\}\).

5. \({\mathcal {V}}_0^r(W_{{\bar{p}}} ,N)=\{Y\in {\mathfrak {X}}^r(W_{{\bar{p}}},N) : \langle Y,X\rangle =0 \ \forall X \in {\mathcal {A}}_0^r(W_{{\bar{p}}},N) \} ={\mathcal {A}}_0^r(W_{{\bar{p}}},N)^{\perp }. \)

6. \(\Lambda _0^r(W_{{\bar{p}}},N)=\{ \sum _{i=1}^{\ell } f_i X_{J_i} : f_i \in C_0^r(W_{{\bar{p}}}) \}.\)

Given \(r\geqslant 1\), we set

$$\begin{aligned} E:={\mathcal {A}}_{2,0}^{r-1}(W_{{\bar{p}}},N) \times {\mathcal {V}}_0^r(W_{{\bar{p}}} ,N) , \end{aligned}$$

and consider the map

$$\begin{aligned} {\mathcal {G}}: E \times {\mathcal {A}}_{1,0}^{r-1}(W_{{\bar{p}}},N) \rightarrow E \times \Lambda _0^{r-1}(W_{{\bar{p}}},N) , \end{aligned}$$
(7.2)

defined by

$$\begin{aligned} {\mathcal {G}}(Y_1,Y_2,Y_3)=(Y_1,Y_2,{\mathcal {F}}(Y_1+Y_2+Y_3)), \end{aligned}$$

where \(\Pi _v\) is the projection of the space of m-vector fields with compact support on \(W_{{\bar{p}}}\) onto \(\Lambda _0^{r-1}(W_{{\bar{p}}},N)\), and

$$\begin{aligned} {\mathcal {F}}(Y)=\Pi _v\left( d\Gamma (Y)(e_1) \wedge \ldots \wedge d\Gamma (Y)(e_m)\right) , \end{aligned}$$

where \(\Gamma (Y)(p)=\exp _{\Phi (p)}(Y_p)\). Observe that \({\mathcal {F}}(Y)=0\) if and only if the submanifold \(\Gamma (Y)\) has degree less than or equal to d. We consider on each space the corresponding \(||\cdot ||_r\) or \(||\cdot ||_{r-1}\) norm, and a product norm.

Then

$$\begin{aligned} D{\mathcal {G}}(0,0,0)(Y_1,Y_2,Y_3)=(Y_1,Y_2,D{\mathcal {F}}(0)(Y_1+Y_2+Y_3)), \end{aligned}$$

where we write in coordinates

$$\begin{aligned} Y_1=\sum _{t=1}^{\rho -\ell } g_{i_t} \, X_{i_t}, \quad Y_2=\sum _{i=1}^{\ell } g_{h_i} \, X_{h_i}, \quad \text {and} \quad Y_3=\sum _{r=\rho +1}^{n} f_{r} \, X_{r}. \end{aligned}$$

Following the same argument we used in Sect. 5, taking the derivative at \(t=0\) of (5.2), we deduce that the differential \(D{\mathcal {F}}(0)Y\) is given by

$$\begin{aligned} D{\mathcal {F}}(0)Y= \sum _{i=1}^{\ell }\left( \sum _{j=1}^m \sum _{r=\rho +1}^n c_{i j r} E_j (f_r) + \sum _{r=\rho +1}^n b_{i r} f_r + \sum _{h=1}^\rho a_{i h} g_h \right) X_{J_i}. \end{aligned}$$

Observe that \(D{\mathcal {F}}(0)Y=0\) if and only if Y is an admissible vector field, namely Y solves (7.1).

Our objective now is to prove that the map \(D{\mathcal {G}}(0, 0,0)\) is an isomorphism of Banach spaces.

Indeed, suppose that \(D{\mathcal {G}}(0, 0,0)(Y_1,Y_2,Y_3)=(0,0,0)\). This implies that \(Y_1\) and \(Y_2\) are equal to zero. By the admissibility equation (7.1), also \(Y_3\) is equal to zero, hence \(D{\mathcal {G}}(0, 0,0)\) is injective. Next, fix \((Z_1, Z_2, Z_3)\), where \(Z_1 \in {\mathcal {A}}_{2,0}^{r-1}(W_{{\bar{p}}},N)\), \( Z_2 \in {\mathcal {V}}_0^r(W_{{\bar{p}}},N) \) and \( Z_3 \in \Lambda _0^{r-1}(W_{{\bar{p}}},N) \); we seek \(Y_1,Y_2,Y_3\) such that \(D{\mathcal {G}}(0, 0,0)(Y_1,Y_2,Y_3)=(Z_1, Z_2, Z_3)\). We notice that \(D{\mathcal {F}}(0)(Y_1+Y_2+Y_3)=Z_3\) is equivalent to

$$\begin{aligned} \left( \begin{array}{c} z_{1}\\ \vdots \\ z_{\ell } \end{array}\right) = \left( \sum _{j=1}^m C_j \, E_j (F) + B F + {\tilde{A}} \left( \begin{array}{c} g_{i_1}\\ \vdots \\ g_{i_{\rho -\ell }} \end{array}\right) + {\hat{A}} \left( \begin{array}{c} g_{h_1}\\ \vdots \\ g_{h_{\ell }} \end{array}\right) \right) , \end{aligned}$$

where, with an abuse of notation, we identify \(Z_3=\sum _{i=1}^{\ell } z_i \, X_{J_i}\) with \(\sum _{i=1}^{\ell } z_i \, X_{h_i}\). Since \({\hat{A}}\) is invertible we obtain the system

$$\begin{aligned} \left( \begin{array}{c} g_{h_1}\\ \vdots \\ g_{h_{\ell }} \end{array}\right) = -{\hat{A}}^{-1} \left( \sum _{j=1}^m C_j \, E_j (F) + B F + {\tilde{A}} \left( \begin{array}{c} g_{i_1}\\ \vdots \\ g_{i_{\rho -\ell }} \end{array}\right) + \left( \begin{array}{c} z_{1}\\ \vdots \\ z_{\ell } \end{array}\right) \right) . \end{aligned}$$
(7.3)

Clearly \(Y_1=Z_1\) fixes \(g_{i_1},\ldots , g_{i_{\rho -\ell }}\) in (7.3), and \(Y_2=Z_2\) fixes the first and second terms of the right hand side of (7.3). Since the right hand side is then completely determined, (7.3) determines \(g_{h_1},\ldots ,g_{h_{\ell }}\), i.e. \(Y_3\) solving (7.3). Therefore \(D{\mathcal {G}}(0, 0,0)\) is surjective. Thus we have proved that \(D{\mathcal {G}}(0,0,0)\) is a bijection.

Let us now prove that \(D{\mathcal {G}}(0,0,0)\) is a continuous and open map. Letting \(D{\mathcal {G}}(0,0,0)(Y_1,Y_2,Y_3)=(Z_1,Z_2,Z_3)\), we first notice that \(D{\mathcal {G}}(0, 0,0)\) is continuous, since identity maps are continuous and, by (7.3), there exists a constant K such that

$$\begin{aligned} \Vert Z_3 \Vert _{r-1}&\leqslant K \left( \sum _{j=1}^m \Vert \nabla _j Y_2 \Vert _{r-1}+ \Vert Y_2\Vert _{r-1}+ \Vert Y_1\Vert _{r-1} + \Vert Y_3\Vert _{r-1}\right) \\&\leqslant K( \Vert Y_2\Vert _{r} + \Vert Y_1\Vert _{r-1} + \Vert Y_3\Vert _{r-1}). \end{aligned}$$

Moreover, \(D{\mathcal {G}}(0, 0,0)\) is an open map since we have

$$\begin{aligned} \Vert Y_3 \Vert _{r-1}&\leqslant K \left( \sum _{j=1}^m \Vert \nabla _j Z_2 \Vert _{r-1}+ \Vert Z_2\Vert _{r-1}+ \Vert Z_1\Vert _{r-1} + \Vert Z_3\Vert _{r-1}\right) \\&\leqslant K ( \Vert Z_2\Vert _{r} + \Vert Z_1\Vert _{r-1} + \Vert Z_3\Vert _{r-1}). \end{aligned}$$

This implies that \(D{\mathcal {G}}(0,0,0)\) is an isomorphism of Banach spaces.

Let us now consider an admissible vector field V with compact support on \(W_{{\bar{p}}}\). We consider the map

$$\begin{aligned} \tilde{{\mathcal {G}}}:(-\varepsilon ,\varepsilon ) \times E \times {\mathcal {A}}_{1,0}^{r-1}(W_{{\bar{p}}},N) \rightarrow E \times \Lambda _0^{r-1}(W_{{\bar{p}}},N), \end{aligned}$$

defined by

$$\begin{aligned} \tilde{{\mathcal {G}}}(s,Y_1,Y_2,Y_3)=(Y_1,Y_2,{\mathcal {F}}(sV+Y_1+Y_2+Y_3)). \end{aligned}$$

The map \(\tilde{{\mathcal {G}}}\) is continuous with respect to the product norms (on each factor we take the natural norm: the Euclidean one on the interval, and \(||\cdot ||_r\) or \(||\cdot ||_{r-1}\) on the spaces of vector fields along \(\Phi ({\bar{M}})\)). Moreover

$$\begin{aligned} \tilde{{\mathcal {G}}}(0,0,0,0)=(0,0), \end{aligned}$$

since \(\Phi \) has degree d. Denoting by \(D_Y\) the differential of \(\tilde{{\mathcal {G}}}\) with respect to the last three variables, we have that

$$\begin{aligned} D_{Y}\tilde{{\mathcal {G}}}(0,0,0,0)(Y_1,Y_2,Y_3)=D{\mathcal {G}}(0,0,0)(Y_1,Y_2,Y_3) \end{aligned}$$

is a linear isomorphism. We can apply the Implicit Function Theorem to obtain unique maps

$$\begin{aligned} \begin{aligned}&Y_1:(-\varepsilon ,\varepsilon ) \rightarrow {\mathcal {A}}_{2,0}^{r-1}(W_{{\bar{p}}},N), \\&Y_2:(-\varepsilon ,\varepsilon ) \rightarrow {\mathcal {V}}_0^{r}(W_{{\bar{p}}},N), \\&Y_3:(-\varepsilon ,\varepsilon ) \rightarrow {\mathcal {A}}_{1,0}^{r-1}(W_{{\bar{p}}},N), \end{aligned} \end{aligned}$$
(7.4)

such that \(\tilde{{\mathcal {G}}}(s,Y_1(s),Y_2(s),Y_3(s))=(0,0)\). This implies that \(Y_1(s)=0\), \(Y_2(s)=0\), \(Y_3(0)=0\) and that

$$\begin{aligned} {\mathcal {F}}(sV+Y_3(s))=0. \end{aligned}$$

Differentiating this formula at \(s=0\) we obtain

$$\begin{aligned} D{\mathcal {F}}(0)\left( V+\dfrac{\partial Y_3}{\partial s} (0)\right) =0. \end{aligned}$$

Since V is admissible we deduce

$$\begin{aligned} D{\mathcal {F}}(0) \dfrac{\partial Y_3}{\partial s} (0) =0. \end{aligned}$$

Since \(\tfrac{\partial Y_3}{\partial s} (0)=\sum _{i=1}^{\ell } g_{h_i}X_{h_i}\), where \(g_{h_i} \in C_0^{r-1}(W_{{\bar{p}}})\), Eq. (7.1) implies \(g_{h_i} \equiv 0\) for each \(i=1,\ldots ,\ell \). Therefore \(\tfrac{\partial Y_3}{\partial s} (0)=0\).

Hence the variation \(\Gamma _s({\bar{q}})=\Gamma (s V+ Y_3(s))({\bar{q}})\) coincides with \(\Phi ({\bar{q}})\) for \(s=0\) and \({\bar{q}}\in W_{{\bar{p}}}\), has degree d, and its variational vector field is given by

$$\begin{aligned} \dfrac{\partial \Gamma _s}{\partial s} \bigg |_{s=0}= V+ \dfrac{\partial Y_3}{\partial s} (0)=V. \end{aligned}$$

Moreover, \(\text {supp}(Y_3) \subseteq \text {supp}(V) \). Indeed, if \({\bar{q}} \notin \text {supp}(V)\), the unique vector field \(Y_3(s)\) such that \({\mathcal {F}}(Y_3(s))=0\) is equal to 0 at \({\bar{q}}\). \(\square \)

Remark 7.3

In Proposition 5.5 we stressed the fact that a vector field \(V= V^{\top }+ V^{\perp }\) is admissible if and only if \(V^{\perp }\) is admissible. This follows from the additivity in V of the admissibility system (5.3) and the admissibility of \(V^\top \). Instead of writing V with respect to the adapted basis \((X_i)_i\) we consider the basis \(E_1,\ldots ,E_m,V_{m+1}, \ldots ,V_n\) described in Sect. 6.3.

Let \(A^{\perp }, B^{\perp }, C^{\perp }\) be the matrices defined in (6.22), \(A^{\top }\) the one described in Remark 6.7 and A the matrix with respect to the basis \((X_i)_i\) defined in (6.7). When we change only the basis for the vector field V, by (6.11) we obtain \({\tilde{A}}=A D_h\). Since \(A^{\top }\) is the null matrix and \({\tilde{A}}= (A^{\top } |\, A^{\perp })\), we conclude that \(\text {rank}(A({\bar{p}}))=\text {rank}(A^{\perp }({\bar{p}}))\). Furthermore, \(\Phi \) is strongly regular at \({\bar{p}}\) if and only if \(\text {rank}(A^{\perp }({\bar{p}}))= \ell \leqslant k\), where k is the integer defined in Definition 6.6.

7.1 Some examples of regular submanifolds

Example 7.4

Consider a hypersurface \(\Sigma \) immersed in an equiregular Carnot manifold N. Then \(\Sigma \) always has degree d equal to \(d_{\max }^{n-1}=Q-1\), see Sect. 4.1. Therefore the dimension \(\ell \), defined in Sect. 6, of \(\Lambda _m^d(U)_p^{\perp }\) is equal to zero. Thus any compactly supported vector field V is admissible and integrable. When the Carnot manifold N is a contact structure \((M^{2n+1}, {\mathcal {H}}=\text {ker}(\omega ))\), see Sect. 4.2, the hypersurface \(\Sigma \) always has degree equal to \(d_{\text {max}}^{2n}=2n+1\).

Example 7.5

Let \((E,{\mathcal {H}})\) be the Carnot manifold described in Sect. 4.3 where \((x,y,\theta ,k) \in {{\mathbb {R}}}^2 \times {\mathbb {S}}^1 \times {{\mathbb {R}}}=E\) and the distribution \({\mathcal {H}}\) is generated by

$$\begin{aligned} X_1=\cos (\theta )\partial _x+\sin (\theta )\partial _y+ k \partial _{\theta }, \quad X_2= \partial _k. \end{aligned}$$

Clearly \((X_1,\ldots ,X_4)\) is a basis adapted to the flag generated by \({\mathcal {H}}\). Moreover, the other non-trivial commutators are given by

$$\begin{aligned} {[}X_1,X_4]&=-k X_1-k^2 X_3\\ {[}X_3,X_4]&=X_1+k X_3. \end{aligned}$$

Let \(\Omega \subset {{\mathbb {R}}}^2\) be an open set. We consider the surface \(\Sigma =\Phi (\Omega )\) where

$$\begin{aligned} \Phi (x,y)=(x,y,\theta (x,y),\kappa (x,y)) \end{aligned}$$

and such that \(X_1(\theta (x,y))=\kappa (x,y)\). Therefore \(\deg (\Sigma )=4\) and its tangent vectors are given by

$$\begin{aligned} {\tilde{e}}_1=&X_1+ X_1(\kappa ) X_2,\\ {\tilde{e}}_2=&X_4-X_4(\theta )X_3+X_4(\kappa )X_2. \end{aligned}$$

Let \(g=\langle \cdot ,\cdot \rangle \) be the metric making the adapted basis \((X_1,\ldots , X_4)\) orthonormal. Since \((\Lambda _2^{4}(N))^{\perp }=\text {span}\{X_3 \wedge X_4\}\), the only non-trivial coefficients \(c_{1 1 r}\), for \(r=3,4\), are given by

$$\begin{aligned} \langle X_3\wedge {\tilde{e}}_2, X_3 \wedge X_4\rangle =1, \quad \text {and} \quad \langle X_4\wedge {\tilde{e}}_2, X_3 \wedge X_4\rangle =X_4(\theta ). \end{aligned}$$

On the other hand, \(c_{1 2 h}=\langle {\tilde{e}}_1 \wedge X_h, X_3 \wedge X_4\rangle =0 \) for each \(h=1,\ldots ,4\), since we cannot reach degree 5 when one of the two vector fields in the wedge product has degree one. Therefore the only equation in (6.2) is given by

$$\begin{aligned} {\tilde{e}}_1(f_3)+ X_4(\theta ) {\tilde{e}}_1(f_4)+ \sum _{h=1}^4 \left( \langle X_3 \wedge X_4, {\tilde{e}}_1\wedge [{\tilde{e}}_2,X_h]+ [{\tilde{e}}_1,X_h] \wedge {\tilde{e}}_2\rangle \right) f_h=0. \end{aligned}$$
(7.5)

Since \(\deg ({\tilde{e}}_1\wedge [{\tilde{e}}_2,X_h]) \leqslant 4\) we have \(\langle X_3 \wedge X_4, {\tilde{e}}_1\wedge [{\tilde{e}}_2,X_h]\rangle =0\) for each \(h=1,\ldots ,4\). Since \([u X, Y]=u[X,Y]-Y(u) X\) for all \(X,Y \in {\mathfrak {X}}(N) \) and \(u \in C^{\infty }(N)\), we have

$$\begin{aligned} {[}{\tilde{e}}_1,X_h]&=[X_1,X_h]+X_1(\kappa )[X_2,X_h]-X_h(X_1(\kappa ))X_2\\&={\left\{ \begin{array}{ll} -X_1(\kappa )X_3 -X_1(X_1(\kappa ))X_2 &{} h=1\\ X_3- X_2(X_1(\kappa ))X_2 &{} h=2\\ X_4 -X_3(X_1(\kappa ))X_2 &{} h=3\\ -\kappa X_1- \kappa ^2 X_3 -X_4(X_1(\kappa ))X_2 &{} h=4. \end{array}\right. } \end{aligned}$$

Thus, we deduce

$$\begin{aligned} \langle X_3\wedge X_4,[{\tilde{e}}_1,X_h] \wedge {\tilde{e}}_2\rangle ={\left\{ \begin{array}{ll} -X_1(\kappa ) &{} h=1\\ 1 &{} h=2\\ X_4(\theta ) &{} h=3\\ -\kappa ^2 &{} h=4. \end{array}\right. } \end{aligned}$$

Hence the Eq. (7.5) is equivalent to

$$\begin{aligned} {\tilde{e}}_1(f_3)+ X_4(\theta ) {\tilde{e}}_1(f_4)-X_1(\kappa ) f_1+ f_2-X_4(\theta ) f_3- \kappa ^2 f_4=0 \end{aligned}$$
(7.6)

Since \(\iota _0(\Omega )=1\), we have \(\rho =n_{1}=2\), where \(\rho \) is the natural number defined in (6.1). In this setting the matrix C is given by

$$\begin{aligned} C=\left( \begin{array}{cccc} 1&0&X_4(\theta )&0 \end{array} \right) . \end{aligned}$$

Then the matrices A and B are given by

$$\begin{aligned} A=\left( \begin{array}{cc}-X_1(\kappa )&1 \end{array} \right) , \\ B=\left( \begin{array}{cc}-X_4(\theta )&-\kappa ^2 \end{array} \right) . \end{aligned}$$

Since \(\text {rank}(A(x,y))=1\) and the matrix \({\hat{A}}(x,y)\), defined in the proof of Theorem 7.2, equals 1 for each \((x,y) \in \Omega \), the immersion \(\Phi \) is strongly regular at each point \((x,y)\) in \(\Omega \) and we may take \(W_{(x,y)}=\Omega \). Hence by Theorem 7.2 each admissible vector field on \(\Omega \) is integrable.

On the other hand we notice that \(k= n_1-{\tilde{m}}_1=1\). By the Gram-Schmidt process an orthonormal basis with respect to the metric g is given by

$$\begin{aligned} e_1&= \dfrac{1}{\alpha _1} ( X_1+X_1(\kappa )X_2),\\ e_2&=\frac{1}{\alpha _2}\left( X_4- X_4(\theta )X_3+ \frac{X_4(\kappa )}{\alpha _1^2} (X_2- X_1(\kappa )X_1)\right) ,\\ v_3&=\dfrac{1}{\alpha _3}(X_3+X_4(\theta )X_4),\\ v_4&= \frac{\alpha _3}{\alpha _2 \alpha _1} \left( ( -X_1(\kappa )X_1+X_2)+ \frac{X_4(\kappa )}{\alpha _3^2} (X_4(\theta ) X_3- X_4) \right) , \end{aligned}$$

where we set

$$\begin{aligned} \alpha _1&=\sqrt{1+X_1(\kappa )^2}, \quad \alpha _3=\sqrt{1+X_4(\theta )^2} \\ \alpha _2&=\sqrt{1+X_4(\theta )^2+\frac{X_4(\kappa )^2}{(1+X_1(\kappa )^2)}}=\frac{\sqrt{\alpha _1^2 \alpha _3^2+ X_4(\kappa )^2}}{\alpha _1}. \end{aligned}$$

Since we have

$$\begin{aligned}&\langle v_3\wedge e_2, X_3 \wedge X_4\rangle =\frac{\alpha _3}{\alpha _2},\\&\langle v_4\wedge e_2, X_3 \wedge X_4\rangle =0,\\&\langle [e_1,v_3] \wedge e_2, X_3 \wedge X_4\rangle =\frac{X_4(\theta ) (1-\kappa ^2)}{\alpha _1 \alpha _2 \alpha _3},\\&\langle [e_1,v_4]\wedge e_2, X_3 \wedge X_4\rangle =\frac{ \alpha _3 }{\alpha _2}\left( 1 + \frac{X_4(\kappa )^2}{\alpha _1^2 \alpha _3^2}\right) =\frac{\alpha _2}{\alpha _3}, \end{aligned}$$

a vector field \(V^{\perp }= \psi _3(x,y) \, v_3 + \psi _4(x,y) \, v_4\) normal to \(\Sigma \) is admissible if and only if \(\psi _3, \psi _4 \in C_0^{r} (\Omega ) \) satisfy

$$\begin{aligned} \frac{\alpha _3}{\alpha _2} e_1(\psi _3)+ \frac{X_4(\theta ) (1-\kappa ^2)}{\alpha _1\alpha _2 \alpha _3} \, \psi _3+ \frac{\alpha _2}{\alpha _3} \, \psi _4=0. \end{aligned}$$

This is equivalent to

$$\begin{aligned} {\bar{X}}_1 (\psi _3) + b^{\perp } \, \psi _3 + a^{\perp } \, \psi _4=0, \end{aligned}$$
(7.7)

where \({\bar{X}}_1= \cos (\theta (x,y))\partial _x + \sin (\theta (x,y)) \partial _y \) and

$$\begin{aligned} b^{\perp }&=\frac{X_4(\theta )(1-X_1(\theta )^2)}{1+X_4(\theta )^2}, \\ a^{\perp }&=\alpha _1\left( 1+ \frac{X_4(\kappa )^2}{\alpha _1^2 \alpha _3^2}\right) . \end{aligned}$$

In particular, since \(a^{\perp }(x,y)>0\) we have \(\text {rank}(a^{\perp }(x,y))=1\) for all \((x,y) \in \Omega \). Along an integral curve \(\gamma \) of \({\bar{X}}_1\) in \(\Omega \), i.e. \(\gamma '(t)={\bar{X}}_1(\gamma (t))\), Eq. (7.7) reads

$$\begin{aligned} \psi '_3(t)+ b^{\perp }(t) \psi _3(t)+a^{\perp }(t) \psi _4(t)=0, \end{aligned}$$

where we set \(f(t)=f(\gamma (t))\) for each function \(f:\Omega \rightarrow {{\mathbb {R}}}\).
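Since \(a^{\perp }>0\), the transport equation above determines \(\psi _4\) pointwise from any compactly supported choice of \(\psi _3\). The following numerical sketch illustrates this along one integral curve; the coefficient functions and the bump profile are illustrative assumptions, not the actual expressions \(b^{\perp }\), \(a^{\perp }\) of the example:

```python
import numpy as np

# Illustrative coefficients along the curve (assumptions for this sketch;
# the actual b_perp, a_perp depend on theta and kappa).
def b_perp(t):
    return 0.3 * np.sin(t)

def a_perp(t):
    return 1.0 + 0.5 * t**2          # strictly positive, as in the example

def psi3(t):
    # A smooth bump compactly supported in (-1, 1).
    out = np.zeros_like(t)
    inside = np.abs(t) < 1.0
    out[inside] = np.exp(-1.0 / (1.0 - t[inside]**2))
    return out

t = np.linspace(-2.0, 2.0, 4001)
p3 = psi3(t)
dp3 = np.gradient(p3, t)             # numerical derivative psi3'

# Since a_perp > 0, the admissibility equation
#   psi3' + b_perp * psi3 + a_perp * psi4 = 0
# determines psi4 pointwise from psi3.
p4 = -(dp3 + b_perp(t) * p3) / a_perp(t)

# The residual of the admissibility equation vanishes (up to round-off).
residual = dp3 + b_perp(t) * p3 + a_perp(t) * p4
assert np.max(np.abs(residual)) < 1e-12
```

Note that \(\psi _4\) obtained this way is again compactly supported, consistently with the requirement \(\psi _3,\psi _4\in C_0^r(\Omega )\).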

Remark 7.6

Let \((N,{\mathcal {H}})\) be a Carnot manifold such that \({\mathcal {H}}=\text {ker}(\theta )\), where \(\theta \) is an \({{\mathbb {R}}}^{n-\ell }\)-valued one-form. Following [28, 43] we say that an immersion \(\Phi :{\bar{M}} \rightarrow N\) is horizontal when the pull-back \(\Phi ^{*} \theta =0\) and, given a point \(p \in \Phi ({\bar{M}})\), the subspace \(T_p M \subset {\mathcal {H}}_p\) is regular if the map

$$\begin{aligned} V \rightarrow (\iota _V d \theta )_{|T_p M} \end{aligned}$$
(7.8)

is onto for each horizontal vector V on \({\bar{M}}\). Let X be a horizontal extension of V to N and let Y be another horizontal vector field on N; then

$$\begin{aligned} d \theta (X,Y)=X(\theta (Y))-Y(\theta (X))-\theta ([X,Y])=-\theta ([X,Y]) \end{aligned}$$

Assume that the local frame \(E_1,\ldots ,E_m\) generates \(T_p M\) at p; then the map (7.8) is given by \( \theta ([X,E_j] (p)) \) for each \(j=1,\ldots ,m\). In [24, Section 3] the author notices that there exist special coordinates adjusted to the admissibility system such that the entries of the control matrix A are \(a_{ijh}=\langle V_i, [E_j,V_h]\rangle \), where \(V_{m+1},\ldots ,V_n\) are vector fields in the normal bundle. In this notation the surjectivity of the map (7.8) coincides with the pointwise condition of maximal rank of the matrix \((a_{ijh})\). Since by Eq. (6.17) the rank of A is independent of the metric g, we deduce that the regularity notion introduced in [27, 28] is equivalent to strong regularity at \({\bar{p}}\) (Definition 7.1) for the class of horizontal immersions.

7.2 An isolated plane in the Engel group

Definition 7.7

We say that an immersion \(\Phi : {\bar{M}} \rightarrow N\) in an equiregular graded manifold \((N,{\mathcal {H}}^1 \subset \ldots \subset {\mathcal {H}}^s)\) is isolated if the only admissible variation normal to \(M=\Phi ({\bar{M}})\) is the trivial one.

Here we provide an example of an isolated surface immersed in the Engel group.

Example 7.8

Let \(N={{\mathbb {R}}}^4\) and \({\mathcal {H}}=\text {span}\{X_1,X_2\}\), where

$$\begin{aligned} X_1=\partial _{x_1}, \quad X_2=\partial _{x_2}+ x_1\partial _{x_3}+x_3 \partial _{x_4} \end{aligned}$$

and \(X_3=\partial _{x_3}\) and \(X_4=\partial _{x_4}\). We denote by \({\mathbb {E}}^4\) the Engel group given by \(({{\mathbb {R}}}^4,{\mathcal {H}})\). Let \(\Upsilon : \Omega \subset {{\mathbb {R}}}^2 \rightarrow {\mathbb {E}}^4\) be the immersion given by

$$\begin{aligned} \Upsilon (v,w)=(v,0,w,0). \end{aligned}$$

Since \(\Upsilon _v \wedge \Upsilon _w=X_1 \wedge X_3\), the degree \(\deg (\Sigma )=3\), where \(\Sigma =\Upsilon (\Omega )\) is a plane. An admissible vector field \(V=\sum _{k=1}^4 f_k X_k\) satisfies the system (6.2), which is given by

$$\begin{aligned} \begin{aligned}&\sum _{h=1}^4 \ \dfrac{\partial f_h}{\partial x_1} \langle X_h \wedge X_3, X_{J_i}\rangle + \dfrac{\partial f_h}{\partial x_3}\langle X_1 \wedge X_h, X_{J_i}\rangle +\\&\quad +f_h \left( \langle [X_1,X_h] \wedge X_3,X_{J_i}\rangle +\langle X_1 \wedge [X_3,X_h],X_{J_i}\rangle \right) =0, \end{aligned} \end{aligned}$$
(7.9)

for \(X_{J_1}=X_1 \wedge X_4\), \(X_{J_2}=X_2 \wedge X_4\) and \(X_{J_3}=X_3 \wedge X_4\). Therefore (7.9) is equivalent to

$$\begin{aligned} {\left\{ \begin{array}{ll} &{}\dfrac{\partial f_4}{\partial x_3}+ f_2=0\\ &{} 0=0\\ &{}-\dfrac{\partial f_4}{\partial x_1}=0. \end{array}\right. } \end{aligned}$$
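The reduction of (7.9) to this system can be double-checked symbolically. The sketch below is a verification of ours (the helpers `ip2` and `bra` and the unknowns `f1`, ..., `f4` are not the paper's notation); it encodes the bracket table \([X_1,X_2]=X_3\), \([X_3,X_2]=X_4\) of the Engel frame together with the inner product of simple 2-vectors for an orthonormal frame:

```python
import sympy as sp

x1, x3 = sp.symbols('x1 x3')
f = [sp.Function(f'f{h}')(x1, x3) for h in range(1, 5)]

E = sp.eye(4)
def X(h):
    return E[:, h - 1]          # X_h as a column in the orthonormal frame

def bra(a, b):
    # bracket table of the frame: [X1,X2] = X3, [X3,X2] = X4, all others zero
    table = {(1, 2): X(3), (3, 2): X(4)}
    if (a, b) in table:
        return table[(a, b)]
    if (b, a) in table:
        return -table[(b, a)]
    return sp.zeros(4, 1)

def ip2(a, b, c, d):
    # <a ^ b, c ^ d> for an orthonormal frame
    return a.dot(c) * b.dot(d) - a.dot(d) * b.dot(c)

eqs = []
for (j1, j2) in [(1, 4), (2, 4), (3, 4)]:       # X_{J_i} = X_{j1} ^ X_{j2}
    expr = 0
    for h in range(1, 5):
        expr += sp.diff(f[h - 1], x1) * ip2(X(h), X(3), X(j1), X(j2))
        expr += sp.diff(f[h - 1], x3) * ip2(X(1), X(h), X(j1), X(j2))
        expr += f[h - 1] * (ip2(bra(1, h), X(3), X(j1), X(j2))
                            + ip2(X(1), bra(3, h), X(j1), X(j2)))
    eqs.append(sp.simplify(expr))

assert eqs[0] == sp.diff(f[3], x3) + f[1]       # df4/dx3 + f2 = 0
assert eqs[1] == 0                              # 0 = 0
assert eqs[2] == -sp.diff(f[3], x1)             # -df4/dx1 = 0
```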

Let \(K=\text {supp} (V)\). First of all we have \(\frac{\partial f_4}{\partial x_1}=0\). Since \(f_4 \in C^{\infty }(\Omega )\) it follows that

$$\begin{aligned} \dfrac{\partial f_2}{\partial x_1}=-\dfrac{\partial ^2 f_4}{\partial x_3 \partial x_1}=0. \end{aligned}$$

Then, given \((x_1,x_3) \in K\), we consider the curve

$$\begin{aligned} \gamma :s \mapsto (x_1+s,x_3) \end{aligned}$$

along which \(f_4\) and \(f_2\) are constant. Since \(f_4\) and \(f_2\) are compactly supported, at the end point \((x_1+s_0,x_3) \in \partial K\) we have \(f_4(x_1+s_0,x_3)=f_2(x_1+s_0,x_3)=0\). Therefore \(f_4=f_2\equiv 0\), and the only admissible vector fields \(f_1X_1+ f_3 X_3\) are tangent to \(\Sigma \). Assume now that there exists an admissible variation \(\Gamma _s\) for \(\Upsilon \); then its associated variational vector field is admissible. Since the only admissible vector fields are tangent to \(\Sigma \), the admissible variation \(\Gamma _s\) has to be tangent to \(\Sigma \) and the only normal one is the trivial variation. Hence we conclude that the plane \(\Sigma \) is isolated.

Moreover, we have that \(k=1\) and the matrix \(A^{\perp }\) defined in 7.1 is given by

$$\begin{aligned} A^{\perp }(v,w)=\left( \begin{array}{c} -1 \\ 0 \\ 0 \\ \end{array} \right) . \end{aligned}$$

Since \(\text {rank}(A^{\perp })=1< 3\) we deduce that \(\Upsilon \) is not strongly regular at any point in \(\Omega \).

In analogy with the rigidity result of [4], here we prove that \(\Sigma \) is isolated without using the admissibility system. This also implies that the plane \(\Sigma \) is rigid in the \(C^1\) topology.

Proposition 7.9

Let \({\mathbb {E}}^4\) be the Engel group given by \(({{\mathbb {R}}}^4,{\mathcal {H}})\), where the distribution \({\mathcal {H}}\) is generated by

$$\begin{aligned} X_1=\partial _{x_1}, \quad X_2=\partial _{x_2}+ x_1\partial _{x_3}+x_3 \partial _{x_4}. \end{aligned}$$

Let \(\Omega \subset {{\mathbb {R}}}^2\) be a bounded open set. Then the immersion \(\Upsilon : \Omega \rightarrow {\mathbb {E}}^4\) of degree 3 given by

$$\begin{aligned} \Upsilon (v,w)=(v,0,w,0) \end{aligned}$$

is isolated.

Proof

An admissible normal variation \(\Gamma _s\) of \(\Upsilon \) has to have the same degree as \(\Upsilon \) and has to share the same boundary \(\Upsilon (\partial \Omega )= \partial \Sigma \), where \(\Sigma =\Upsilon (\Omega )\). For fixed s, we can parametrize \(\Gamma _s\) by

$$\begin{aligned} \Phi : \Omega \rightarrow {\mathbb {E}}^4, \quad \Phi (v,w)=(v,\phi (v,w),w, \psi (v,w)), \end{aligned}$$

where \(\phi ,\psi \in C_0^1(\Omega ,{{\mathbb {R}}})\). Since \(\deg (\Phi (\Omega ))=3\) we obtain

$$\begin{aligned} {\left\{ \begin{array}{ll} \langle \Phi _v \wedge \Phi _w, X_1 \wedge X_4\rangle =0\\ \langle \Phi _v \wedge \Phi _w, X_2 \wedge X_4\rangle =0\\ \langle \Phi _v \wedge \Phi _w, X_3 \wedge X_4\rangle =0,\\ \end{array}\right. } \end{aligned}$$
(7.10)

where

$$\begin{aligned} \Phi _v=\partial _{x_1}+ \phi _v \partial _{x_2} + \psi _v \partial _{x_4}= X_1 + \phi _v (X_2-v X_3-wX_4)+ \psi _v X_4 \end{aligned}$$

and

$$\begin{aligned} \Phi _w= \phi _w \partial _{x_2} +\partial _{x_3} +\psi _w \partial _{x_4}=\phi _w(X_2-vX_3-wX_4)+X_3+\psi _w X_4, \end{aligned}$$

since \(x_1=v\) and \(x_3=w\) along \(\Phi \). Denoting by \(\pi _{4}\) the projection onto the 2-vectors of degree larger than 3, we have

$$\begin{aligned} \pi _4(\Phi _v \wedge \Phi _w)&=(\psi _w-w \phi _w) X_1 \wedge X_4+ \phi _v(\psi _w-w \phi _w) X_2 \wedge X_4\\&\quad -v \phi _v(\psi _w-w \phi _w) X_3 \wedge X_4 + \phi _w (\psi _v-w \phi _v) X_4\wedge X_2 \\&\quad + (1-v\phi _w)(\psi _v-w \phi _v) X_4\wedge X_3. \end{aligned}$$

Therefore (7.10) is equivalent to

$$\begin{aligned} {\left\{ \begin{array}{ll} \psi _w-w \phi _w=0\\ \phi _v \psi _w - \psi _v \phi _w=0\\ v(\phi _v \psi _w - \psi _v \phi _w)+ (\psi _v-w\phi _v)=0.\\ \end{array}\right. } \end{aligned}$$
(7.11)

The second equation implies that (7.11) is equivalent to

$$\begin{aligned} {\left\{ \begin{array}{ll} \psi _w-w \phi _w=0\\ \phi _v \psi _w - \psi _v \phi _w=0\\ \psi _v-w\phi _v=0.\\ \end{array}\right. } \end{aligned}$$
(7.12)

Then we notice that the first and the third equations imply the second one, since

$$\begin{aligned} \phi _v \psi _w - \psi _v \phi _w= \phi _v w\phi _w-w \phi _v \phi _w=0. \end{aligned}$$

Therefore the immersion \(\Phi \) has degree three if and only if

$$\begin{aligned} {\left\{ \begin{array}{ll} \psi _w=w \phi _w\\ \psi _v=w\phi _v.\\ \end{array}\right. } \end{aligned}$$
(7.13)

This linear first-order system admits a solution only if the compatibility condition ([29, Eq. (1.4), Chapter VI]) for linear systems of first order holds. Here the compatibility condition reads

$$\begin{aligned} 0=\psi _{w v}-\psi _{vw}= -\phi _v. \end{aligned}$$

Since \(\phi \in C_0^1(\Omega )\) we obtain \(\phi \equiv 0\). Then (7.13) gives \(\psi _v=\psi _w=0\), and since \(\psi \in C_0^1(\Omega )\) we conclude \(\psi \equiv 0\). Hence \(\Phi =\Upsilon \). \(\square \)
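The cross-derivative computation above can be double-checked symbolically. The sketch below uses sympy and a sign parameter \(s=\pm 1\) (our device, covering either sign convention for the system \(\psi _w=s\,w\phi _w\), \(\psi _v=s\,w\phi _v\)): in both cases the mixed-partial defect \(\psi _{wv}-\psi _{vw}\) equals \(-s\,\phi _v\), so the compatibility condition forces \(\phi _v=0\).

```python
import sympy as sp

v, w = sp.symbols('v w')
phi = sp.Function('phi')(v, w)

for s in (1, -1):
    # psi_w and psi_v as prescribed by (7.13), with sign s covering both conventions
    psi_w = s * w * sp.diff(phi, w)
    psi_v = s * w * sp.diff(phi, v)
    defect = sp.diff(psi_w, v) - sp.diff(psi_v, w)   # psi_{wv} - psi_{vw}
    assert sp.simplify(defect + s * sp.diff(phi, v)) == 0
```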

8 First variation formula for submanifolds

In this section we shall compute a first variation formula for the area \(A_d\) of a submanifold of degree d. We shall give some definitions first. Assume that \(\Phi :{\bar{M}}\rightarrow N\) is an immersion of a smooth m-dimensional manifold into an n-dimensional equiregular graded manifold endowed with a Riemannian metric g. Let \(\mu =\Phi ^*g\). Fix \({\bar{p}}\in {\bar{M}}\) and let \(p=\Phi ({\bar{p}})\). Take a \(\mu \)-orthonormal basis \(({\bar{e}}_1,\ldots ,{\bar{e}}_m)\) in \(T_{{\bar{p}}}{\bar{M}}\) and define \(e_i:=d \Phi _{{\bar{p}}}({\bar{e}}_i)\) for \(i=1,\ldots ,m\). Then the degree d area density \(\Theta \) is defined by

$$\begin{aligned} \Theta ({\bar{p}}):=|(e_1\wedge \ldots \wedge e_m)_d|=\left( \sum _{\deg (X_J)=d} \langle e_1\wedge \ldots \wedge e_m,(X_J)_p\rangle ^2\right) ^{1/2}, \end{aligned}$$
(8.1)

where \((X_1, \ldots , X_n)\) is an orthonormal adapted basis of TN. Then we have

$$\begin{aligned} A_d(M)=\int _{{\bar{M}}}\Theta ({\bar{p}})d\mu ({\bar{p}}). \end{aligned}$$

Assume now that \(V\in {\mathfrak {X}}({\bar{M}},N)\), then we set

$$\begin{aligned} ({{\,\mathrm{div}\,}}_{{\bar{M}}}^d V)({\bar{p}}):=\sum _{i=1}^m \langle e_1\wedge \ldots \wedge \nabla _{e_i}V\wedge \ldots \wedge e_m, (e_1\wedge \ldots \wedge e_m)_d\rangle . \end{aligned}$$
(8.2)

Finally, define the linear function f by

$$\begin{aligned} f(V_{{\bar{p}}}):=\sum _{\deg (X_J)=d}\langle e_1\wedge \ldots \wedge e_m,\nabla _{V_{{\bar{p}}}} X_J\rangle \langle e_1\wedge \ldots \wedge e_m,(X_J)_{{\bar{p}}}\rangle . \end{aligned}$$
(8.3)

Then we have the following result.

Theorem 8.1

Let \(\Phi :{\bar{M}}\rightarrow N\) be an immersion of degree d of a smooth m-dimensional manifold into an equiregular graded manifold equipped with a Riemannian metric g. Assume that there exists an admissible variation \(\Gamma :{\bar{M}}\times (-\varepsilon ,\varepsilon )\rightarrow N\) with associated variational field V with compact support. Then

$$\begin{aligned} \frac{d}{dt}\bigg |_{t=0} A_d(\Gamma _t({\bar{M}}))=\int _{{\bar{M}}} \frac{1}{\Theta ({\bar{p}})}\,\big (({{\,\mathrm{div}\,}}_{{\bar{M}}}^d V)({\bar{p}})+f(V_{{\bar{p}}})\big ) d\mu ({\bar{p}}). \end{aligned}$$
(8.4)

Proof

Fix a point \({\bar{p}}\in {\bar{M}}\). Clearly, \({\mathcal {E}}_i(t,{\bar{p}})=d \Gamma _{({\bar{p}},t)}({\bar{e}}_i)\), \(i=1,\ldots ,m\), are vector fields along the curve \(t\mapsto \Gamma ({\bar{p}},t)\). Therefore, the first variation is given by

$$\begin{aligned} \frac{d}{dt}\bigg |_{t=0} A_d(\Gamma _t({\bar{M}}))&=\int _{{\bar{M}}} \frac{d}{dt}\bigg |_{t=0} |\left( {\mathcal {E}}_1(t)\wedge \ldots \wedge {\mathcal {E}}_m(t) \right) _d| d\mu ({\bar{p}}) \\&=\int _{{\bar{M}}} \frac{d}{dt}\bigg |_{t=0} \left( \sum _{\text {deg}(X_J)=d} \langle {\mathcal {E}}_1(t)\wedge \ldots \wedge {\mathcal {E}}_m(t), X_J\rangle ^2 \right) ^{\frac{1}{2}} d\mu ({\bar{p}}). \end{aligned}$$

The derivative of the last integrand is given by

$$\begin{aligned}&\dfrac{1}{| (e_1\wedge \ldots \wedge e_m)_d|} \sum _{\text {deg}(X_J)=d} \langle e_1\wedge \ldots \wedge e_m, (X_J)_p\rangle \ \times \\&\quad \times \left( \langle e_1\wedge \ldots \wedge e_m, \nabla _{V_{{\bar{p}}}} X_J\rangle + \sum _{i=1}^m \langle e_1\wedge \ldots \wedge \nabla _{e_i} V \wedge \ldots \wedge e_m , (X_J)_p\rangle \right) . \end{aligned}$$

Using (8.2) and (8.3) we obtain (8.4). \(\square \)

Definition 8.2

Let \(\Phi :{\bar{M}}\rightarrow N\) be an immersion of degree d of a smooth m-dimensional manifold into an equiregular graded manifold equipped with a Riemannian metric g. We say that \(\Phi \) is \(A_d\)-stationary, or simply stationary, if it is a critical point of the area \(A_d\) for any admissible variation.

Proposition 8.3

Let \(\Phi :{\bar{M}}\rightarrow N\) be an immersion of degree d of a smooth m-dimensional manifold into an equiregular graded manifold equipped with a Riemannian metric g. Let \(\Gamma _t\) be an admissible variation whose variational field \(V=V^{\top }\) is compactly supported and tangent to \(M=\Phi ({\bar{M}})\). Then we have

$$\begin{aligned} \frac{d}{dt}\bigg |_{t=0} A_d(\Gamma _t({\bar{M}}))=0. \end{aligned}$$

Proof

Since \(\Gamma _t({\bar{M}})\subset \Phi ({\bar{M}})\) for all t, the vector field \({\bar{V}}_{{\bar{p}}}=d\Phi _{{\bar{p}}}^{-1}(V_{{\bar{p}}})\) is tangent to \({\bar{M}}\) and we have

$$\begin{aligned} \frac{d}{dt}\bigg |_{t=0}A_d(\Gamma _t({\bar{M}}))=\int _{{\bar{M}}} ({\bar{V}}(\Theta )+\Theta {{\,\mathrm{div}\,}}_{{\bar{M}}}{\bar{V}})\,d\mu =\int _{{\bar{M}}}{{\,\mathrm{div}\,}}_{{\bar{M}}}(\Theta {\bar{V}})\,d\mu =0. \end{aligned}$$

\(\square \)

Lemma 8.4

Let \(f,g\in C^\infty (M)\) and let X be a tangential vector field in \(C^{\infty }(M,TM)\). Then the following identities hold:

(i) \(f{{\,\mathrm{div}\,}}_M(X)+ X(f)={{\,\mathrm{div}\,}}_M(fX) \);

(ii) \(g X(f)={{\,\mathrm{div}\,}}_M(f g X)-g f{{\,\mathrm{div}\,}}_M(X)-f X(g)\).

Proof

By the definition of divergence we obtain (i) as follows

$$\begin{aligned} {{\,\mathrm{div}\,}}_M(f X)= \sum _{i=1}^m \langle \nabla _{e_i}(f X),e_i\rangle = \sum _{i=1}^m e_i(f) \langle X,e_i\rangle + f \langle \nabla _{e_i}(X),e_i\rangle . \end{aligned}$$

To deduce (ii) we apply (i) twice as follows

$$\begin{aligned} {{\,\mathrm{div}\,}}_M( g f X)-f X(g) =g{{\,\mathrm{div}\,}}_M(f X)= g X(f)+ g f {{\,\mathrm{div}\,}}_M(X). \end{aligned}$$

\(\square \)
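Both identities can be sanity-checked in the simplest situation, \(M={{\mathbb {R}}}^2\) with the flat metric, where \({{\,\mathrm{div}\,}}_M\) is the ordinary divergence and \(X(f)\) is the directional derivative. A sketch with sympy; the concrete fields f, g, X below are arbitrary test choices of ours:

```python
import sympy as sp

u, w = sp.symbols('u w')
f = u * sp.sin(w)
g = w + u**2
X = sp.Matrix([u**2, u * w])        # a tangent vector field on R^2

def div(V):
    return sp.diff(V[0], u) + sp.diff(V[1], w)

def D(V, h):                        # V(h), the derivative of h along V
    return V[0] * sp.diff(h, u) + V[1] * sp.diff(h, w)

# (i)  f div(X) + X(f) = div(f X)
assert sp.simplify(f * div(X) + D(X, f) - div(f * X)) == 0
# (ii) g X(f) = div(f g X) - g f div(X) - f X(g)
assert sp.simplify(g * D(X, f) - (div(f * g * X) - g * f * div(X) - f * D(X, g))) == 0
```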

Theorem 8.5

Let \(\Phi :{\bar{M}}\rightarrow N\) be an immersion of degree d of a smooth m-dimensional manifold into an equiregular graded manifold equipped with a Riemannian metric g. Assume that there exists an admissible variation \(\Gamma :{\bar{M}}\times (-\varepsilon ,\varepsilon )\rightarrow N\) with associated variational field V with compact support. Then

$$\begin{aligned} \frac{d}{dt}\bigg |_{t=0} A_d(\Gamma _t({\bar{M}}))=\int _{{\bar{M}}} \langle V, {\mathbf {H}}_d\rangle d\mu , \end{aligned}$$
(8.5)

where \({\mathbf {H}}_d\) is the vector field

$$\begin{aligned} \begin{aligned}&-\sum _{j=m+1}^n \sum _{i=1}^m{{\,\mathrm{div}\,}}_M\big (\xi _{ij}E_i\big )N_j \\&\quad +\sum _{j=m+1}^n \sum _{i=1}^m \langle E_1\wedge \ldots \wedge \nabla _{E_i}N_j\wedge \ldots \wedge E_m,\frac{(E_1\wedge \ldots \wedge E_m)_d}{|(E_1\wedge \ldots \wedge E_m)_d|}\rangle \,N_j \\&\quad +\sum _{j=m+1}^n \frac{f(N_j)}{\Theta }N_j. \end{aligned} \end{aligned}$$
(8.6)

In this formula, \((E_i)_i\) is a local orthonormal basis of TM and \((N_j)_j\) a local orthonormal basis of \(TM^\perp \). The functions \(\xi _{ij}\) are given by

$$\begin{aligned} \xi _{ij}=\langle E_1\wedge \ldots \wedge {\mathop {N_j}\limits ^{(i)}}\wedge \ldots \wedge E_m,\frac{(E_1\wedge \ldots \wedge E_m)_d}{|(E_1\wedge \ldots \wedge E_m)_d|}\rangle . \end{aligned}$$
(8.7)

Proof

Since our computations are local and immersions are local embeddings, we shall identify locally \({\bar{M}}\) and M to simplify the notation.

We decompose \(V= V^\top +V^{\perp }\) in its tangential \( V^\top \) and perpendicular \(V^{\perp }\) parts. Since \({{\,\mathrm{div}\,}}_{{\bar{M}}}^d\) and the functional f defined in (8.3) are additive, we use the first variation formula (8.4) and Proposition 8.3 to obtain

$$\begin{aligned} \frac{d}{dt}\bigg |_{t=0} A_d(\Gamma _t({\bar{M}}))=\int _{{\bar{M}}} \frac{1}{\Theta ({\bar{p}})}\,\big (({{\,\mathrm{div}\,}}_{{\bar{M}}}^d V^{\perp })({\bar{p}})+f(V^{\perp }_{{\bar{p}}})\big ) d\mu ({\bar{p}}). \end{aligned}$$

To compute this integrand we consider a local orthonormal basis \((E_i)_i\) of TM around p and a local orthonormal basis \((N_j)_j\) of \(TM^\perp \). We have

$$\begin{aligned} V^\perp =\sum _{j=m+1}^n\langle V,N_j\rangle N_j. \end{aligned}$$

We compute first

$$\begin{aligned} \frac{{{\,\mathrm{div}\,}}_{{\bar{M}}}^d V^\perp }{\Theta }=\sum _{i=1}^m\langle E_1\wedge \ldots \wedge \nabla _{E_i}V^\perp \wedge \ldots \wedge E_m,\frac{(E_1\wedge \ldots \wedge E_m)_d}{|(E_1\wedge \ldots \wedge E_m)_d|}\rangle \end{aligned}$$

as

$$\begin{aligned} \sum _{i=1}^m\sum _{j=m+1}^n \langle E_1\wedge \ldots \wedge \big (\nabla _{E_i}\langle V,N_j\rangle N_j\big )\wedge \ldots \wedge E_m,\frac{(E_1\wedge \ldots \wedge E_m)_d}{|(E_1\wedge \ldots \wedge E_m)_d|}\rangle , \end{aligned}$$

which is equal to

$$\begin{aligned} \begin{aligned}&\sum _{i=1}^m\sum _{j=m+1}^n \bigg (E_i\big (\langle V,N_j\rangle \big ) \langle E_1\wedge \ldots \wedge {\mathop {N_j}\limits ^{(i)}}\wedge \ldots \wedge E_m,\frac{(E_1\wedge \ldots \wedge E_m)_d}{|(E_1\wedge \ldots \wedge E_m)_d|}\rangle \\&\quad +\langle V,N_j\rangle \langle E_1\wedge \ldots \wedge {\mathop {\nabla _{E_i}N_j}\limits ^{(i)}}\wedge \ldots \wedge E_m,\frac{(E_1\wedge \ldots \wedge E_m)_d}{|(E_1\wedge \ldots \wedge E_m)_d|}\rangle \bigg ). \end{aligned} \end{aligned}$$
(8.8)

The group of summands in the second line of (8.8) is equal to \(\langle V,{\mathbf {H}}_2\rangle \), where

$$\begin{aligned} {\mathbf {H}}_2=\sum _{i=1}^m\sum _{j=m+1}^n \langle E_1\wedge \ldots \wedge {\mathop {\nabla _{E_i}N_j}\limits ^{(i)}}\wedge \ldots \wedge E_m,\frac{(E_1\wedge \ldots \wedge E_m)_d}{|(E_1\wedge \ldots \wedge E_m)_d|}\rangle \,N_j. \end{aligned}$$

To treat the group of summands in the first line of (8.8) we use (ii) in Lemma 8.4. Recalling (8.7) we have

$$\begin{aligned} E_i\big (\langle V,N_j\rangle \big )\xi _{ij}={{\,\mathrm{div}\,}}_M\big (\langle V,N_j\rangle \xi _{ij}E_i\big )-\langle V,{{\,\mathrm{div}\,}}_M\big (\xi _{ij}E_i\big )N_j\rangle , \end{aligned}$$

so that applying the Divergence Theorem we have that the integral in M of the first group of summands in (8.8) is equal to

$$\begin{aligned} \int _M \langle V,{\mathbf {H}}_1\rangle d\mu , \end{aligned}$$

where

$$\begin{aligned} {\mathbf {H}}_1=-\sum _{i=1}^m\sum _{j=m+1}^n {{\,\mathrm{div}\,}}_M\big (\xi _{ij}E_i\big )N_j. \end{aligned}$$

Finally we treat the summand

$$\begin{aligned} \frac{f(V^\bot )}{\Theta }=\sum _{j=m+1}^n\langle V,N_j\rangle \frac{f(N_j)}{\Theta }=\langle V,{\mathbf {H}}_3\rangle , \end{aligned}$$

where

$$\begin{aligned} {\mathbf {H}}_3=\sum _{j=m+1}^n \frac{f(N_j)}{\Theta }N_j. \end{aligned}$$

This implies the result since \({\mathbf {H}}_d={\mathbf {H}}_1+{\mathbf {H}}_2+{\mathbf {H}}_3\). \(\square \)

In the following result we obtain a slightly different expression for the mean curvature \({\mathbf {H}}_d\) in terms of Lie brackets. This expression is sometimes more suitable for computations.

Corollary 8.6

Let \(\Phi :{\bar{M}}\rightarrow N\) be an immersion of degree d of a smooth m-dimensional manifold into an equiregular graded manifold equipped with a Riemannian metric g, and let \(M=\Phi ({\bar{M}})\). We consider an extension \((E_i)_i\) of a local orthonormal basis of TM and, respectively, an extension \((N_j)_j\) of a local orthonormal basis of \(TM^\perp \) to an open neighborhood in N. Then the vector field \({\mathbf {H}}_d\) defined in (8.6) is equal to

$$\begin{aligned} \begin{aligned} {\mathbf {H}}_d&=\sum _{j=m+1}^n \Big ( {{\,\mathrm{div}\,}}_M \Big ( \Theta N_j-\sum _{i=1}^m \xi _{ij} E_i \Big )+ \\&\quad + N_j(\Theta ) + \sum _{i=1}^m \sum _{k=m+1}^n \xi _{ik} \langle [ E_i, N_j], N_k\rangle \Big ) N_j, \end{aligned} \end{aligned}$$
(8.9)

where \(\xi _{ij}\) is defined in (8.7).

Proof

Keeping the notation used in the proof of Theorem 8.5 we consider

$$\begin{aligned} {\mathbf {H}}_2= \sum _{i=1}^m\sum _{j=m+1}^n \langle E_1\wedge \ldots \wedge {\mathop {\nabla _{E_i}N_j}\limits ^{(i)}}\wedge \ldots \wedge E_m,\frac{(E_1\wedge \ldots \wedge E_m)_d}{|(E_1\wedge \ldots \wedge E_m)_d|}\rangle \,N_j. \end{aligned}$$

Writing

$$\begin{aligned} \nabla _{E_i} N_j= \sum _{\nu =1}^m \langle \nabla _{E_i} N_j, E_{\nu }\rangle E_{\nu } + \sum _{k=m+1}^{n} \langle \nabla _{E_i} N_j, N_k\rangle N_k, \end{aligned}$$
(8.10)

we obtain

$$\begin{aligned} {\mathbf {H}}_2= \sum _{j=m+1}^n \Big ( {{\,\mathrm{div}\,}}_M ( N_j ) \, |(E_1\wedge \ldots \wedge E_m)_d| + \sum _{i=1}^m\sum _{k=m+1}^n \xi _{ik}\langle \nabla _{E_i} N_j, N_k\rangle \Big ) N_j. \end{aligned}$$

Let us consider

$$\begin{aligned} {\mathbf {H}}_3=\sum _{j=m+1}^n \sum _{\deg (X_J)=d}\bigg (\langle E_1\wedge \ldots \wedge E_m,\nabla _{N_j}X_J\rangle \frac{\langle E_1\wedge \ldots \wedge E_m,X_J\rangle }{|(E_1\wedge \ldots \wedge E_m)_d|} \bigg )\,N_j. \end{aligned}$$
(8.11)

Since the Levi–Civita connection preserves the metric, we have

$$\begin{aligned} \langle E_1\wedge \ldots \wedge E_m,\nabla _{N_j}X_J\rangle = N_j(\langle E_1\wedge \ldots \wedge E_m,X_J\rangle )-\langle \nabla _{N_j} (E_1 \wedge \cdots \wedge E_m), X_J\rangle . \end{aligned}$$
(8.12)

Putting the first term of the right hand side of (8.12) in (8.11) we obtain

$$\begin{aligned} \sum _{\deg (X_J)=d} N_j(\langle E_1\wedge \ldots \wedge E_m,X_J\rangle ) \frac{\langle E_1\wedge \ldots \wedge E_m,X_J\rangle }{|(E_1\wedge \ldots \wedge E_m)_d|}= N_j(\Theta ). \end{aligned}$$

On the other hand writing

$$\begin{aligned} \nabla _{N_j} E_i= \sum _{\nu =1}^m \langle \nabla _{N_j} E_i, E_{\nu }\rangle E_{\nu } + \sum _{k=m+1}^{n} \langle \nabla _{N_j} E_i, N_k\rangle N_k \end{aligned}$$

we deduce

$$\begin{aligned}&\sum _{i=1}^m \sum _{\deg (X_J)=d} \langle E_1\wedge \ldots \wedge {\mathop {\nabla _{N_j} E_i}\limits ^{(i)}}\wedge \ldots \wedge E_m, X_J\rangle \frac{\langle E_1\wedge \ldots \wedge E_m,X_J\rangle }{|(E_1\wedge \ldots \wedge E_m)_d|}=\\&\quad = \sum _{i=1}^m \sum _{k=m+1}^n \langle \nabla _{N_j} E_i, N_k\rangle \xi _{ik}. \end{aligned}$$

Therefore we obtain

$$\begin{aligned} {\mathbf {H}}_3=\sum _{j=m+1}^n \Big ( N_j (\Theta )- \sum _{i=1}^m \sum _{k=m+1}^n \langle \nabla _{N_j} E_i, N_k\rangle \xi _{ik} \Big ) N_j. \end{aligned}$$

Since the Levi–Civita connection is torsion-free we have

$$\begin{aligned} {\mathbf {H}}_2+{\mathbf {H}}_3=\sum _{j=m+1}^n \Big ( {{\,\mathrm{div}\,}}_M ( N_j ) \, \Theta + N_j(\Theta ) + \sum _{i=1}^m \sum _{k=m+1}^n \xi _{ik} \langle [E_i,N_j],N_k\rangle \Big ) N_j. \end{aligned}$$

Since \({{\,\mathrm{div}\,}}_M ( N_j ) \, \Theta = {{\,\mathrm{div}\,}}_M ( \Theta \, N_j ) \) we conclude that \({\mathbf {H}}_d={\mathbf {H}}_1+ {\mathbf {H}}_2+ {\mathbf {H}}_3\) is equal to (8.9). \(\square \)

8.1 First variation formula for strongly regular submanifolds

Definition 8.7

Let \(\Phi : {\bar{M}} \rightarrow N\) be a strongly regular immersion (see § 7) at \({\bar{p}}\), let \(v_{m+1},\ldots , v_n\) be an orthonormal adapted basis of the normal bundle, and let k be the integer defined in 6.6. Let \(N_{m+1},\ldots ,N_n\) be a local adapted frame of the normal bundle so that \((N_j)_p=v_j\). By Remark 7.3 the immersion \(\Phi \) is strongly regular at \({\bar{p}}\) if and only if \(\text {rank}(A^{\perp })=\ell \). Then there exists a partition of \(\{m+1,\ldots , m+k\}\) into sub-indices \(h_1<\ldots <h_\ell \) and \(i_1<\ldots <i_{m+k-\ell }\) such that the matrix

$$\begin{aligned} {\hat{A}}^{\perp } ({\bar{p}} )=\left( \begin{array}{ccc} \alpha _{1 h_1} ({\bar{p}} )&{} \cdots &{} \alpha _{1 h_{\ell }}({\bar{p}} )\\ \vdots &{} \ddots &{} \vdots \\ \alpha _{\ell h_1}({\bar{p}} )&{} \cdots &{} \alpha _{\ell h_{\ell }}({\bar{p}} ) \end{array} \right) \end{aligned}$$
(8.13)

is invertible. The mean curvature vector of degree d defined in Theorem 8.5 is given by

$$\begin{aligned} {\mathbf {H}}_d= \sum _{j=m+1}^{n} H_d^j N_j. \end{aligned}$$

Then we decompose \({\mathbf {H}}_d\) into the following three components

$$\begin{aligned} {\mathbf {H}}_d^{v}=\begin{pmatrix} H_d^{m+k+1} \\ \vdots \\ H_d^n \end{pmatrix}^t , \quad {\mathbf {H}}_d^h=\begin{pmatrix} H_d^{h_1} \\ \vdots \\ H_d^{h_{\ell }} \end{pmatrix}^t, \quad \text {and} \quad {\mathbf {H}}_d^\iota =\begin{pmatrix} H_d^{i_1} \\ \vdots \\ H_d^{i_{m+k-\ell }} \end{pmatrix}^t \end{aligned}$$
(8.14)

with respect to \(N_{m+1},\ldots , N_n\).

Theorem 8.8

Let \(\Phi : {\bar{M}} \rightarrow N\) be a strongly regular immersion at \({\bar{p}}\) in an equiregular graded manifold. Then \(\Phi ({\bar{M}})\) is a critical point of \(A_d\) if and only if the immersion \(\Phi \) satisfies

$$\begin{aligned} {\mathbf {H}}_d^{\iota }- {\mathbf {H}}_d^{h} ({\hat{A}}^{\perp })^{-1} {\tilde{A}}^{\perp } =0, \end{aligned}$$
(8.15)

and

$$\begin{aligned} {\mathbf {H}}_d^v- {\mathbf {H}}_d^{h}({\hat{A}}^{\perp })^{-1} B^{\perp } -\sum _{j=1}^m E_j^{*}\left( {\mathbf {H}}_d^{h} \,({\hat{A}}^{\perp })^{-1} C_j^{\perp } \right) =0, \end{aligned}$$
(8.16)

where \(E_j^{*}\) is the adjoint operator of \(E_j\) for \(j=1,\ldots ,m\) and \({\mathbf {H}}_d^v\), \({\mathbf {H}}_d^h\) and \({\mathbf {H}}_d^{\iota }\) are defined in (8.14), \(B^{\perp }\), \(C_j^{\perp }\) in 6.3, \({\hat{A}}^{\perp }\) in (8.13) and \({\tilde{A}}^{\perp }\) is the \(\ell \times (m+k-\ell )\) matrix given by the columns \(i_1, \ldots , i_{m+k-\ell }\) of \(A^{\perp }\).

Proof

Since \(\Phi : {\bar{M}} \rightarrow N\) is a strongly regular immersion, by Theorem 7.2 each normal admissible vector field

$$\begin{aligned} V^{\perp }=\sum _{i=m+1}^{m+k} \phi _i \, N_i + \sum _{r=m+k+1}^n \psi _r \, N_r \end{aligned}$$

is integrable. Keeping in mind the sub-indices in Definition 8.7, we set

$$\begin{aligned} \Psi =\begin{pmatrix} \psi _{m+k+1} \\ \vdots \\ \psi _n \end{pmatrix} , \quad \Gamma =\begin{pmatrix} \phi _{h_1} \\ \vdots \\ \phi _{h_{\ell }} \end{pmatrix} \quad \text {and} \quad \Upsilon =\begin{pmatrix} \phi _{i_1} \\ \vdots \\ \phi _{i_{m+k-\ell }} \end{pmatrix}. \end{aligned}$$
(8.17)

Since the immersion \(\Phi : {\bar{M}}\rightarrow N\) is strongly regular, the admissibility condition (6.24) for \(V^{\perp }\) is equivalent to

$$\begin{aligned} \Gamma = -({\hat{A}}^{\perp })^{-1} \bigg ( \sum _{j=1}^m C^{\perp }_j \, E_j (\Psi ) + B^{\perp } \Psi +{\tilde{A}}^{\perp }\Upsilon \bigg ). \end{aligned}$$
(8.18)

By Theorem 8.5 the first variation formula is given by

$$\begin{aligned} \begin{aligned} \frac{d}{dt}\bigg |_{t=0} A_d(\Gamma _t({\bar{M}}))&=\int _{{\bar{M}}} \langle V^{\perp }, {\mathbf {H}}_d\rangle \\&=\int _{{\bar{M}}} {\mathbf {H}}_d^{v} \, \Psi + {\mathbf {H}}_d^{\iota } \, \Upsilon + {\mathbf {H}}_d^{h} \Gamma \\&=\int _{{\bar{M}}} {\mathbf {H}}_d^{v} \, \Psi + {\mathbf {H}}_d^{\iota } \, \Upsilon - {\mathbf {H}}_d^{h} \,({\hat{A}}^{\perp })^{-1} \bigg ( \sum _{j=1}^m C^{\perp }_j \, E_j (\Psi ) + B^{\perp } \Psi +{\tilde{A}}^{\perp }\Upsilon \bigg ) \\&=\int _{{\bar{M}}} \bigg ( {\mathbf {H}}_d^{\iota }- {\mathbf {H}}_d^{h} ({\hat{A}}^{\perp })^{-1} {\tilde{A}}^{\perp } \bigg ) \Upsilon +\\&\quad + \int _{{\bar{M}}} \bigg ({\mathbf {H}}_d^v- {\mathbf {H}}_d^{h}({\hat{A}}^{\perp })^{-1} B^{\perp } -\sum _{j=1}^m E_j^{*}\bigg ( {\mathbf {H}}_d^{h} \,({\hat{A}}^{\perp })^{-1} C_j^{\perp } \bigg ) \bigg ) \Psi , \end{aligned} \end{aligned}$$

for every \(\Psi \in C_0^{\infty }(W_{{\bar{p}}}, {{\mathbb {R}}}^{n-m-k}) , \Upsilon \in C_0^{\infty }(W_{{\bar{p}}}, {{\mathbb {R}}}^{k-\ell })\). By the arbitrariness of \(\Psi \) and \(\Upsilon \), the immersion \(\Phi \) is a critical point of the area \(A_d\) if and only if it satisfies Eqs. (8.15) and (8.16) on \(W_{{\bar{p}}}\). \(\square \)

Example 8.9

(First variation for a hypersurface in a contact manifold) Let \((M^{2n+1}, \omega )\) be a contact manifold such that \({\mathcal {H}}= \ker (\omega )\), see § 4.2. Let T be the Reeb vector field associated to this contact structure and g the Riemannian metric on M that extends a given metric on \({\mathcal {H}}\) and makes T a unit vector orthogonal to \({\mathcal {H}}\). Let \(\nabla \) be the Riemannian connection associated to g.

Let us consider a hypersurface \(\Sigma \) immersed in M. As we showed in § 4.2, the degree of \(\Sigma \) is maximal and equal to \(2n+1\), thus each compactly supported vector field V on \(\Sigma \) is admissible. Following § 4.2, we consider the unit normal N to \(\Sigma \), its horizontal projection \(N_h\), the vector field \( \nu _h=\frac{N_h}{|N_h|}\), and an orthonormal basis \(e_1,\ldots ,e_{2n-1}\) of \(T_p \Sigma \cap {\mathcal {H}}_p \). A straightforward computation, contained in [25], shows that the mean curvature \({\mathbf {H}}_d\) deduced in (8.9) coincides with

$$\begin{aligned} {\mathbf {H}}_d =-{{\,\mathrm{div}\,}}_{\Sigma }^h (\nu _h) +\langle [\nu _h,T],T\rangle . \end{aligned}$$
(8.19)

When \(\langle [\nu _h,T],T\rangle =0\) we obtain the well-known horizontal divergence of the horizontal normal. This definition of mean curvature for an immersed hypersurface was first given by S. Pauls [44] for graphs over the xy-plane in \({\mathbb {H}}^1\), and later extended by Cheng et al. [9] to 3-dimensional pseudo-hermitian manifolds. In a more general setting this formula was deduced in [15, 30]. For more details see also [6, 20, 21, 47, 48, 50].

Example 8.10

(First variation for ruled surfaces in an Engel structure) Here we compute the mean curvature equation for the surface \(\Sigma \subset E\) of degree 4 introduced in Sect. 4.3. In (4.8) we determined the tangent adapted basis

$$\begin{aligned} {\tilde{E}}_1&=\cos (\theta )\Phi _x+ \sin (\theta )\Phi _y= X_1+X_1(\kappa )X_2,\\ {\tilde{E}}_2&=-\sin (\theta )\Phi _x+\cos (\theta )\Phi _y=X_4-X_4(\theta )X_3+X_4(\kappa )X_2 \end{aligned}$$

A basis for the space \((TM)^{\perp }\) is given by

$$\begin{aligned} {\tilde{N}}_3&=X_4(\theta )X_4+ X_3 \\ {\tilde{N}}_4&=X_1(\kappa )X_1-X_2+X_4(\kappa )X_4 \end{aligned}$$

By the Gram-Schmidt process we obtain an orthonormal basis with respect to the metric g as follows

$$\begin{aligned} E_1&=\dfrac{{\tilde{E}}_1}{|{\tilde{E}}_1|}= \dfrac{1}{\alpha _1} ( X_1+X_1(\kappa )X_2),\\ E_2&=\frac{1}{\alpha _2}\left( X_4- X_4(\theta )X_3+ \frac{X_4(\kappa )}{\alpha _1^2} (X_2- X_1(\kappa )X_1)\right) \\ N_3&=\dfrac{1}{\alpha _3}(X_3+X_4(\theta )X_4)\\ N_4&= \frac{\alpha _3}{\alpha _2 \alpha _1} \left( ( -X_1(\kappa )X_1+X_2)+ \frac{X_4(\kappa )}{\alpha _3^2} (X_4(\theta ) X_3- X_4) \right) \end{aligned}$$

where we set

$$\begin{aligned} \alpha _1&=\sqrt{1+X_1(\kappa )^2}, \quad \alpha _3=\sqrt{1+X_4(\theta )^2} \\ \alpha _2&=\sqrt{1+X_4(\theta )^2+\frac{X_4(\kappa )^2}{(1+X_1(\kappa )^2)}}=\frac{\sqrt{\alpha _1^2 \alpha _3^2+ X_4(\kappa )^2}}{\alpha _1} \end{aligned}$$

and

$$\begin{aligned} N_h= -X_1(\kappa )X_1+X_2, \quad \nu _h=\frac{1}{\alpha _1}(-X_1(\kappa )X_1+X_2) \end{aligned}$$
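The second expression for \(\alpha _2\) above can be double-checked symbolically; in the sketch below the abbreviations \(a=X_1(\kappa )\), \(b=X_4(\theta )\), \(c=X_4(\kappa )\) are ours:

```python
import sympy as sp

a, b, c = sp.symbols('a b c', real=True)   # a = X1(kappa), b = X4(theta), c = X4(kappa)
alpha1 = sp.sqrt(1 + a**2)
alpha3 = sp.sqrt(1 + b**2)

alpha2 = sp.sqrt(1 + b**2 + c**2 / (1 + a**2))               # definition of alpha_2
alpha2_alt = sp.sqrt(alpha1**2 * alpha3**2 + c**2) / alpha1  # claimed simplification

# both expressions are positive, so comparing squares suffices
assert sp.simplify(alpha2**2 - alpha2_alt**2) == 0
```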

Since the degree of \(\Sigma \) is equal to 4 we deduce that

$$\begin{aligned} (E_1\wedge E_2)_4= \frac{1}{\alpha _1 \alpha _2} (X_1\wedge X_4 + X_1(\kappa ) X_2 \wedge X_4), \end{aligned}$$

then it follows \(|(E_1\wedge E_2)_4|= \alpha _2^{-1}\) and

$$\begin{aligned} \frac{(E_1\wedge E_2)_4}{|(E_1\wedge E_2)_4|}= \frac{1}{\alpha _1}(X_1\wedge X_4 + X_1(\kappa ) X_2 \wedge X_4). \end{aligned}$$

A straightforward computation shows that the coefficients \(\xi _{ij}\) for \(i=1,2\) and \(j=3,4\), defined in (8.7), are given by

$$\begin{aligned} \xi _{13}&=\left\langle N_3 \wedge E_2, \frac{(E_1\wedge E_2)_4}{|(E_1\wedge E_2)_4|}\right\rangle =0,\\ \xi _{23}&=\left\langle E_1 \wedge N_3, \frac{(E_1\wedge E_2)_4}{|(E_1\wedge E_2)_4|}\right\rangle =\frac{X_4(\theta )}{\alpha _3},\\ \xi _{14}&=\left\langle N_4 \wedge E_2, \frac{(E_1\wedge E_2)_4}{|(E_1\wedge E_2)_4|}\right\rangle =0,\\ \xi _{24}&=\left\langle E_1 \wedge N_4, \frac{(E_1\wedge E_2)_4}{|(E_1\wedge E_2)_4|}\right\rangle =-\frac{X_4(\kappa )}{\alpha _1\alpha _2 \alpha _3 } \end{aligned}$$

Since we have

$$\begin{aligned} \frac{1}{\alpha _2} N_3 - \frac{X_4(\theta )}{\alpha _3} E_2= \frac{\alpha _3}{\alpha _2} X_3- \frac{X_4(\theta )X_4(\kappa )}{\alpha _1 \alpha _2 \alpha _3} \nu _h \end{aligned}$$

and

$$\begin{aligned} \frac{1}{\alpha _2} N_4+ \frac{X_4(\kappa )}{ \alpha _1 \alpha _2 \alpha _3} E_2&= \frac{1}{\alpha _2^2}\bigg (\frac{\alpha _3}{ \alpha _1} \bigg (N_h+ \frac{X_4(\kappa )}{\alpha _3^2}(X_4(\theta )X_3-X_4) \bigg )\\&\quad + \frac{X_4(\kappa )}{\alpha _1 \alpha _3} \bigg ( -X_4(\theta )X_3+X_4 -\frac{X_4(\kappa )}{\alpha _1^2} N_h \bigg ) \bigg ) \\&= \frac{1}{\alpha _2^2 \alpha _1} ( {\alpha _3}N_h + \frac{X_4(\kappa )^2}{\alpha _3 \alpha _1^2} N_h) = \frac{1}{\alpha _1 \alpha _3} N_h \\&=\frac{1}{\alpha _3} \nu _h \end{aligned}$$

it follows that the third component of \({\mathbf {H}}_d\) is equal to

$$\begin{aligned} H_d^3&=-{{\,\mathrm{div}\,}}_M \left( \frac{\alpha _3}{\alpha _2} X_3- \frac{X_4(\theta )X_4(\kappa )}{\alpha _1 \alpha _2 \alpha _3} \nu _h\right) -N_3(\alpha _2^{-1}) \\&\quad + \frac{X_4(\theta )}{\alpha _3} \langle [N_3,E_2],N_3\rangle - \frac{ X_4(\kappa )}{\alpha _3 \alpha _2 \alpha _1 } \langle [N_3,E_2],N_4\rangle \end{aligned}$$

and the fourth component of \({\mathbf {H}}_d\) is equal to

$$\begin{aligned} H_d^4=-{{\,\mathrm{div}\,}}_M \left( \frac{\nu _h}{\alpha _3}\right) -N_4(\alpha _2^{-1}) + \frac{X_4(\theta )}{\alpha _3} \langle [N_4,E_2],N_3\rangle - \frac{ X_4(\kappa )}{\alpha _3 \alpha _2 \alpha _1 } \langle [N_4,E_2],N_4\rangle . \end{aligned}$$

Then the first variation formula is given by

$$\begin{aligned} \frac{d}{dt}\bigg |_{t=0} A_{d}(\Gamma _t(\Omega ))=\int _{\Omega } \langle V^{\perp }, {\mathbf {H}}_d\rangle = \int _{\Omega } H^3_d \, \psi _3 + H^4_d \, \psi _4 \end{aligned}$$
(8.20)

for each \(\psi _3, \psi _4 \in C_0^{\infty }(\Omega )\) satisfying (7.7). Following Theorem 7.2, for each \(\psi _3 \in C_0^{\infty }(\Omega ) \) we deduce

$$\begin{aligned} \psi _4=- \frac{{\bar{X}}_1 (\psi _3) + b^{\perp } \psi _3}{a^{\perp }}, \end{aligned}$$
(8.21)

since \(a^{\perp }>0\).

Lemma 8.11

We keep the previous notation. Let \(f,g:\Omega \rightarrow {{\mathbb {R}}}\) be functions in \(C_0^1(\Omega )\) and let

$$\begin{aligned} {\bar{X}}_1&=\cos ( \theta (x,y)) \partial _x +\sin (\theta (x,y)) \partial _y, \\ {\bar{X}}_4&=-\sin (\theta (x,y)) \partial _x+\cos ( \theta (x,y)) \partial _y. \end{aligned}$$

Then there holds

$$\begin{aligned} \int _{\Omega } g {\bar{X}}_1 (f) + \int _{\Omega }f g {\bar{X}}_4 (\theta )=-\int _{\Omega } f {\bar{X}}_1 (g) . \end{aligned}$$
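This identity follows by integrating the pointwise divergence identity \({{\,\mathrm{div}\,}}(fg\,(\cos \theta ,\sin \theta ))=g{\bar{X}}_1(f)+f{\bar{X}}_1(g)+fg{\bar{X}}_4(\theta )\) over \(\Omega \) and using that f and g are compactly supported. The pointwise identity can be checked symbolically; a sketch with sympy using generic functions (the helper names are ours):

```python
import sympy as sp

x, y = sp.symbols('x y')
theta = sp.Function('theta')(x, y)
f = sp.Function('f')(x, y)
g = sp.Function('g')(x, y)

def X1b(h):   # X1bar(h) = cos(theta) h_x + sin(theta) h_y
    return sp.cos(theta) * sp.diff(h, x) + sp.sin(theta) * sp.diff(h, y)

def X4b(h):   # X4bar(h) = -sin(theta) h_x + cos(theta) h_y
    return -sp.sin(theta) * sp.diff(h, x) + sp.cos(theta) * sp.diff(h, y)

# divergence of the planar vector field f g (cos(theta), sin(theta))
divergence = sp.diff(f * g * sp.cos(theta), x) + sp.diff(f * g * sp.sin(theta), y)
identity = g * X1b(f) + f * X1b(g) + f * g * X4b(theta)
assert sp.simplify(divergence - identity) == 0
```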

By Lemma 8.11 and the admissibility Eq. (8.21) we deduce that (8.20) is equivalent to

$$\begin{aligned} \int _{\Omega } \bigg (H_d^3 -\frac{b^{\perp }}{a^{\perp }}H_d^4 + {\bar{X}}_1\left( \frac{H_d^4}{a^{\perp }}\right) + X_4(\theta ) \frac{H_d^4}{a^{\perp }} \bigg ) \psi _3, \end{aligned}$$

for each \(\psi _3 \in C_0^{\infty } (\Omega )\). Therefore a straightforward computation shows that minimal \((\theta ,\kappa )\)-graphs for the area functional \(A_4\) satisfy the following third-order PDE

$$\begin{aligned} {\bar{X}}_1(H_d^4)+ a^{\perp } H_d^3 + \bigg ( \frac{X_4(\theta ) }{\alpha _3^2} [X_1, X_4](\theta ) - \frac{1}{a^{\perp }}{\bar{X}}_1\left( a^{\perp } \right) \bigg ) H_d^4 =0. \end{aligned}$$
(8.22)