1 Introduction

In this paper, we consider a nonlinear control system, \(F:{\mathbb {R}}^n\times A\rightarrow {\mathbb {R}}^n\),

$$\begin{aligned} \left\{ \begin{array}{ll} \dot{x}_t=F(x_t,a_t),\\ x_0=x\in {\mathbb {R}}^n. \end{array}\right. \end{aligned}$$
(1.1)

Controllability of (1.1) is a classical subject in optimal control. Given a closed set \({\mathcal {T}}\subset {\mathbb {R}}^n\), our target, we are interested in the behaviour of trajectories, the solutions of (1.1), in the neighborhood of a given point \(x_o\in {{\mathcal {T}}}\backslash \hbox {int }{\mathcal {T}}\). We want to find trajectories reaching \({\mathcal {T}}\) in small time starting at any point x in the neighborhood of \(x_o\), in order to make the target small time locally attainable (STLA) at \(x_o\), or equivalently the minimum time function T continuous (and vanishing) at \(x_o\). Our target is locally the solution set of a system of \(h(\le n)\) smooth and independent equations, possibly together with one inequality (if \(h<n\)), i.e. a manifold with boundary; some nonsmooth examples are also covered by our approach. The two extreme cases in our study are \({\mathcal {T}}=\{x_o\}\), the point, and \({\mathcal {T}}\) the closure of an open set, the fat target. The former in particular is a classical subject of geometric control theory and has been studied in depth in the literature. The latter has become interesting in more recent years because it presents difficulties of a different nature and has implications for the regularity of solutions to Hamilton–Jacobi–Bellman equations defined in general open sets. We will also give answers for all intermediate dimensions of the target, where the literature is, as far as we know, much less developed. We make no assumptions on the algebraic structure of the controlled vector field F, while the literature mostly contains results for symmetric or control affine systems. We point out that all our arguments are local, in the neighborhood of the given point \(x_o\in {\mathcal {T}}\), but for notational convenience we prefer to think of the system as globally defined. 
The system will always satisfy existence, uniqueness and uniform boundedness of trajectories, see (2.10) below, and control functions \(a_t,t>0\) will always be piecewise constant. We will often say that \(f:{\mathbb {R}}^n\rightarrow {\mathbb {R}}^n\) is an available vector field of the system if there is \(a\in A\) such that \(f(x)\equiv F(x,a)\).

Our main focus is on finding simple and efficient sufficient conditions for regularity of the minimum time function T, rather than the perspective of geometric control, namely discussing the structure of the Lie algebra leading to local controllability properties. We recall that when T is continuous, it is the unique solution of the Hamilton–Jacobi partial differential equation, solved by the classical Bellman approach, and Hölder regularity of T also determines the regularity of solutions of more general HJ equations based on the same control system, see [8]. We will obtain the continuity of T at \(x_o\) by proving local estimates of the form

$$\begin{aligned} T(x)\le C|x-x_o|^{1/k},\quad x_o\in {\mathcal {T}},\;x\in B_\delta (x_o) \end{aligned}$$
(1.2)

for an appropriate positive integer k. We say that the system satisfies a k-th order attainability condition at \(x_o\) when (1.2) holds. It is well known, see e.g. our paper with Bardi [9] and the book [6], that if this holds at every point of the target then the minimum time function is continuous in its open domain, while in general, under appropriate convexity assumptions on F, it is only lower semicontinuous. When the previous estimate can be improved to

$$\begin{aligned} T(x)\le Cd(x,{\mathcal {T}})^{1/k},\quad x_o\in {\mathcal {T}},\;x\in B_\delta (x_o) \end{aligned}$$
(1.3)

at every point of the target, then it is also known that T becomes locally \(1/k\)-Hölder continuous in its domain. Here \(d(x,{\mathcal {T}})\) denotes the distance of x from the target. We think it preferable to split these two steps of the quest for regularity of T, since proving (1.2) allows us to extend some of the arguments of the case of the point target, while passing from (1.2) to (1.3) is not always obvious and has not yet been clarified for general systems. This second step is clear if we can choose the constants \(C,k,\delta \) in (1.2) uniformly in \(x_o\), as is usually done in the literature, although this is far from necessary in general. We will not discuss it in full detail in the present paper, see our other paper [35] for some positive results. Sometimes (1.2) may hold while (1.3) does not, as we show in an example below, marking the difference with previous literature. Of course when \({\mathcal {T}}\) is a point, the two estimates (1.2), (1.3) coincide.

For the point target, one usually seeks answers in the Lie algebra of the available vector fields, for instance through the well known full rank condition. This is sufficient for symmetric systems, as a consequence of the classical Chow–Rashevskii [13] result. For affine systems, classical results such as Sussmann [37, 38] require properties of the Lie algebra of the available vector fields, in addition to the rank condition, in order to obtain controllability at equilibrium points; see in particular the work by Frankowska [15] and Kawski [18], which extends the classical Kalman condition for linear systems, see [17]. Our perspective, which also applies to general targets, is different. In order to explain it, suppose for now that locally in the neighborhood of \(x_o\), \({\mathcal {T}}=\{x:u(x)=u(x_o)\}\) where \(u:{\mathbb {R}}^n\rightarrow {\mathbb {R}}^h\). Our general idea is to start by finding directions \(v\in {\mathbb {R}}^h\backslash \{0\}\) of \(k\)-th order variation of u at \(x_o\), namely directions for which there is a trajectory of (1.1) with \(x_0=x_o\) such that

$$\begin{aligned} u(x_t)=u(x_o)+vt^k+t^ko(1),\quad \hbox {as }t\rightarrow 0+. \end{aligned}$$

We will construct such directions quite explicitly by using higher order Hamiltonian expressions that involve derivatives of the vector fields and of the function u, obtained by Taylor expansions of the composed function \(u(x_t)\). They are easily computable both analytically and numerically, and can therefore be implemented in practical problems. We stress the fact that in general the expression of k-th order variations may contain high order derivatives of u, therefore even in the case of a fat target we are not limited to the normal direction or even the curvature, contrary to previous literature. Next we will require the family of k-th order variations to be a positive basis of \({\mathbb {R}}^h\), in order to prove that the target is STLA in the neighborhood of \(x_o\) and (1.2) holds. Roughly speaking, we look at \(u(x_t)\) as a trajectory in the space \({\mathbb {R}}^h\), where \(h\le n\) is the codimension of the target and where the target projects to a single point. We remark that we only study the family of variations at one point \(x_o\in {\mathcal {T}}\). Contrary to previous literature, in any neighborhood of \(x_o\) there might be points where the target is not controllable, and (1.3) is unachievable in general without further assumptions. Moreover there might be points outside the target where no element of the Lie algebra points toward it. Indeed higher derivatives of u are important and may induce controllability. One could also reach the target at higher order with just one vector field, with no need of exploiting the Lie algebra. Both these facts are incorporated in our sufficient conditions.

Notice that if the target is the single point \(x_o\) then we can choose \(u(x)=x-x_o\). Only in this case are our k-th order variations high order variations of the trajectories of the control system, sometimes called high order variations of the attainable set in the literature. Thus we feel that our approach is the correct counterpart of the Lie algebra for targets of positive dimension, and the higher order Hamiltonians we consider are the counterpart of the Lie bracket. We build reference trajectories starting at \(x_o\), one for each element of the positive basis, and we estimate their distance from the trajectory starting at a generic point x in the neighborhood of \(x_o\) and using the same controls one after the other. Eventually, with a fixed point argument and by extending an implicit function result originally shown in [29], we get controllability of the target. Our results also apply to certain classes of nonsmooth fat targets, and our second order sufficient conditions are also necessary for fat targets, as shown in [35]. The concept of positive basis of a vector space was used by Petrov [29] to determine that a point is first order STLA if the available vector fields of a nonlinear control system form at the point a positive basis of \({\mathbb {R}}^n\). Our result extends that classical statement to sufficient conditions of any order and targets of any dimension. Often in the literature Taylor estimates of the trajectories are combined with the first (or second) order Taylor estimate of the distance from the target. Indeed the positive basis condition can be equivalently rewritten using the proximal normal set at the point of the target; this set can have high dimension, since the distance function is necessarily nonsmooth at the boundary of the target for codimension higher than one. 
However the approach with the distance function has the serious drawback that it is not clear how to encode properties of higher derivatives of the distance function (e.g. curvature and higher), since nonsmooth analysis is as yet of no help. Moreover, in that approach a point x in the neighborhood of \(x_o\) is projected on the target, and conditions on the system are needed at the foot of the projection (not necessarily \(x_o\)), contrary to our sufficient conditions. We cope with these problems by writing instead Taylor estimates of the composed function \(u(x_t)\) directly, where u is vector valued. In the case of the point target, we recover in our proof both the classical Chow–Rashevskii [13] controllability of symmetric systems and that of Frankowska [15] and Kawski [18] for affine systems, i.e. our k-th order variations seem to detect only the good Lie brackets. We also notice that in the literature the proof that a target is STLA usually follows different paths for the point and for fat targets. Here we give a unified presentation for the two cases, as well as for all intermediate dimensions of the target.

Our approach was initiated in [33], where we studied symmetric systems, and continued in [35], where we considered general nonlinear systems. In both papers we consider a fat target and find simple algebraic second order sufficient conditions showing that the target is STLA with trajectories with at most one switch, together with a degenerate elliptic differential inequality expressing them. We expect to be able to use a similar approach to prove small time local capturability in pursuit evasion differential games, where higher order sufficient conditions are completely missing from the literature, as far as we know. Control problems with state constraints can also be naturally approached with our methods. We will undertake such problems in the future.

Most attainability results in the literature concern controllability to a point. Besides what we already mentioned, Liverowskii [24] extended the approach by Petrov, see also [30], to prove second order sufficient and necessary conditions for the point. For other results on higher order necessary and sufficient conditions for affine systems, we refer to Bianchini and Stefani [10,11,12, 36] and to Krastanov [20] for dynamics on manifolds. We also mention the chapter on controllability of control systems in the book by Coron [14], where many additional references can be found. For fat targets, Bardi-Falcone [7] found necessary and sufficient first order conditions, while more recently our paper with Bardi and Feleqi [8] derives necessary second order conditions and drops one level of regularity in the sufficient conditions, by using the generalized Lie brackets of Rampazzo–Sussmann [31]. For affine systems with drift vanishing on the target, Krastanov and Quincampoix [20,21,22] proved higher order sufficient STLA conditions of a different nature for nonsmooth fat targets, only involving the normal vectors and using the idea of variations of the attainable set. The work by Marigonda and Rigo [26, 27] pointed out the importance of the curvature of the target in implying controllability, and studied higher order attainability of certain nonsmooth targets for affine systems with nontrivial drift, together with local Hölder continuity of the minimum time function. Later with Le [23] they studied higher order sufficient conditions focusing on the presence of state constraints. For attainability of a smooth target of intermediate dimension far fewer results are available in the classical literature. We know of the papers by Bacciotti [5] for first order conditions and by the author [32] for second order conditions for symmetric systems and smooth targets of any dimension, possibly with boundary. 
We finally mention Motta and Rampazzo [28] who construct higher order hamiltonians adding iterated Lie brackets as additional vector fields to prove global asymptotic controllability. Their Hamiltonian is still a first order operator in contrast to ours. Recently Albano et al. [4] show for some symmetric systems that the set where local Lipschitz continuity of the minimum time function fails is the union of singular trajectories, and that it is analytic except on a null set. For results in this direction see also the author in [34].

We outline the contents of the paper. In Sect. 2 we develop the basic asymptotic formulas for trajectories and some calculus, in order to understand relationships with previous literature and build the tools to check the examples. Sections 3 and 4 are the core of the paper and present the sufficient conditions for STLA. We work out three cases separately: the case of the fat target, because it is easier and does not use the Lemma in the Appendix; the case of the point, because it does not need an extra assumption and develops some preliminary tools needed in the general proof; and finally the general case of intermediate dimensions of the target. Section 5 shows some examples where we apply our results, in particular extending the current literature. The Appendix contains a revisited Petrov's Lemma, which is a key ingredient of the arguments.

2 Preliminaries and Notations: Hamiltonian Asymptotic Formulas

In this section we derive explicit high order variations of the composition of a vector valued function u with a trajectory of a control system, in particular of trajectories themselves when u is the identity. For smooth u and vector field, such variations will contain derivatives of both u and the vector field, possibly of any order. If in particular \(u\equiv d\) is the scalar distance function from a set, they might contain the exterior normal direction, the curvature and higher derivatives of the distance. Using the distance function for targets of codimension higher than one is however not always a good idea, since it will be nonsmooth at points of the target. We start with the case when u is scalar and we are dealing with a dynamical system. This will introduce the Lie derivatives and illustrate what we mean by a Hamiltonian approach. Next we will proceed with trajectories of control systems, introducing new Hamilton–Taylor operators, and later with vector valued functions. We emphasise that the expressions that we will derive are explicit, easily computable both analytically and numerically. What we do has connections with classical formalisms to compute the asymptotics of flows of dynamical systems, such as for instance the chronological calculus of Agrachev and Gamkrelidze, see e.g. [1,2,3], or the products of exponentials of Sussmann, see e.g. [19, 38], but in those works we did not find what we specifically need here.

We start with a function \(u:{\mathbb {R}}^n\rightarrow {\mathbb {R}}\) and consider the trajectories of a dynamical system

$$\begin{aligned} \left\{ \begin{array}{ll}\dot{x}_t=f(x_t),\\ x_0\in {{\mathbb {R}}}^n, \end{array}\right. \end{aligned}$$
(2.1)

where \(f\in C({\mathbb {R}}^n;{\mathbb {R}}^n)\) is a vector field. For integer \(k\ge 1\), \(f\in C^{k-1}({\mathbb {R}}^n;{\mathbb {R}}^n)\), we introduce the Hamiltonian operators \(H^{(h)}_f:C^h({\mathbb {R}}^n)\rightarrow C({\mathbb {R}}^n)\), \(h=0,\dots ,k\),

$$\begin{aligned} H^{(0)}_fu\equiv u,\quad H_fu=f\cdot \nabla u,\quad H^{(h+1)}_fu=H_f\circ H^{(h)}_fu. \end{aligned}$$

Note that in the literature \(H_fu\) also appears as the Lie derivative of u along f (or the pre-Hamiltonian). Observe that, in any interval where the trajectory is defined,

$$\begin{aligned} \frac{d}{dt}u(x_t)=H_fu(x_t),\quad \frac{d^k}{dt^k}u(x_t)=H^{(k)}_fu(x_t) \end{aligned}$$

and therefore the following property easily follows.

Lemma 2.1

If \(f\in C^{k-1}({\mathbb {R}}^n;{\mathbb {R}}^n)\) is Lipschitz continuous and \(u\in C^k({\mathbb {R}}^n)\), then the following Taylor formula holds

$$\begin{aligned} u(x_t)=\sum _{i=0}^{k}\frac{t^i}{i!}H^{(i)}_fu(x_0)+t^ko(1),\quad \hbox {as }t\rightarrow 0. \end{aligned}$$
(2.2)

Remark 2.2

The remainder term o(1) in (2.2) can be expressed as

$$\begin{aligned} \frac{1}{k!}\left( H^{(k)}_fu(x_s)-H^{(k)}_fu(x_0)\right) , \end{aligned}$$

for a suitable \(s\in (0,t)\) and then it goes to 0 as \(t\rightarrow 0\) locally uniformly for \(x_0\in {\mathbb {R}}^n\).

We also notice that the operator \(H^{(2)}_f:C^2({\mathbb {R}}^n)\rightarrow C({\mathbb {R}}^n)\) can be explicitly written as

$$\begin{aligned} H^{(2)}_fu=H_f\circ H_fu=f\cdot \nabla (f\cdot \nabla u)=\hbox {Tr}(D^2u\;f\otimes f)+Df\;f\cdot \nabla u \end{aligned}$$

and it is degenerate elliptic on u. Likewise \({\mathcal H}_{f,k}:=H^{(k)}_f\) is a partial differential operator of order k.
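The operators \(H^{(h)}_f\) are straightforward to implement symbolically. As an illustration only, not part of the formal development (sympy and all helper names are our own choices), the following sketch builds the iterated Lie derivative in two dimensions and checks both the displayed expression of \(H^{(2)}_fu\) and the Taylor formula (2.2) against an explicitly solvable linear flow.

```python
import sympy as sp

x1, x2, t = sp.symbols('x1 x2 t')
X = (x1, x2)

def H(f, u, order=1):
    # iterated Lie derivative: H_f u = f . grad u, applied `order` times
    for _ in range(order):
        u = sum(fi * sp.diff(u, xi) for fi, xi in zip(f, X))
    return sp.expand(u)

# Euler field f(x) = x, whose flow is explicit: x_t = e^t x_0
f = (x1, x2)
u = x1**2 + x1*x2

# check the identity H^{(2)}_f u = Tr(D^2u f(x)f) + (Df f) . grad u
fvec = sp.Matrix(f)
rhs = (fvec.T * sp.hessian(u, X) * fvec)[0, 0] \
    + (fvec.jacobian(X) * fvec).dot(sp.Matrix([sp.diff(u, xi) for xi in X]))
assert sp.expand(H(f, u, 2) - rhs) == 0

# check the Taylor formula (2.2): here u(x_t) = u(e^t x_0) = e^{2t} u(x_0)
k = 4
taylor = sum(t**i / sp.factorial(i) * H(f, u, i) for i in range(k + 1))
exact = u.subs({x1: sp.exp(t)*x1, x2: sp.exp(t)*x2}, simultaneous=True)
assert sp.simplify(sp.series(exact - taylor, t, 0, k + 1).removeO()) == 0
```

Here u is homogeneous of degree two, so \(H^{(i)}_fu=2^iu\) and the truncated series agrees with \(e^{2t}u(x_0)\) up to order \(t^k\), as (2.2) predicts.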

When \(f,u\) are \(C^\infty \) and \(u(x_t)\) is analytic, we obtain, for small t,

$$\begin{aligned} u(x_t)=\sum _{k=0}^{+\infty }\frac{t^k}{k!}H^{(k)}_fu(x_0)=:e^{tH_f}u(x_0), \end{aligned}$$

introducing an exponential notation for the Hamiltonian. We now want to apply the same approach to families of vector fields and consider what we name one-switch (balanced) trajectories. Let \(t>0\) and let \(f,g\in C({\mathbb {R}}^n;{\mathbb {R}}^n)\) be two vector fields. Consider a Carathéodory solution of

$$\begin{aligned} \dot{x}_s=\left\{ \begin{array}{ll}f(x_s),\quad &{} \text{ if } s\in [0,t),\\ g(x_s),&{} \text{ if } s\in [t,2t],\end{array}\right. \quad x_0\in {\mathbb {R}}^n. \end{aligned}$$
(2.3)

Note that \(x_s[t]\) is indeed a family of trajectories indexed with the parameter \(t>0\), although we will usually hide the parameter t. We want to describe the variation at the end point \(u(x_{2t})-u(x_0)\).

Lemma 2.3

Let \(f,g\in C^{k-1}({\mathbb {R}}^n;{\mathbb {R}}^n)\), with \(f,g\) Lipschitz, and \(u\in C^k({\mathbb {R}}^n)\). Then the one switch trajectory (2.3) satisfies the following asymptotic formula

$$\begin{aligned} u(x_{2t})=u(x_0)+\sum _{i=1}^{k}\frac{t^i}{i!}(H_f\boxplus H_g)^iu(x_0)+t^ko(1),\quad \hbox {as }t\rightarrow 0, \end{aligned}$$
(2.4)

where the remainder tends to 0 locally uniformly with respect to \(x_0\). Here we define

$$\begin{aligned} (H_f\boxplus H_g)^mu:= \sum _{i=0}^{m}\left( \begin{array}{c}m\\ i\end{array}\right) H^{(m-i)}_f\circ H^{(i)}_gu. \end{aligned}$$

Proof

Under our assumptions, by (2.2) we know that, on the interval \([t,2t]\),

$$\begin{aligned} u(x_{2t})=\sum _{i=0}^{k}\frac{t^i}{i!}H^{(i)}_gu(x_t)+t^ko(1), \end{aligned}$$

as \(t\rightarrow 0\), where the remainder goes to 0 uniformly with respect to the initial point, which is \(x_t\) in this case. For \(i=0,\dots ,k\), let \(v=H^{(i)}_gu\); likewise

$$\begin{aligned} H^{(i)}_gu(x_t)&=v(x_t)=\sum \limits _{j=0}^{k-i}\frac{t^j}{j!}H^{(j)}_fv(x_0)+t^{k-i}o(1)\\&=\sum _{j=0}^{k-i}\frac{t^j}{j!}H^{(j)}_f\circ H^{(i)}_gu(x_0)+t^{k-i}o(1), \end{aligned}$$

thus finally

$$\begin{aligned} \begin{array}{ll} u(x_{2t})&{}=\sum \limits _{i=0}^{k}\frac{t^i}{i!}\left( \sum \limits _{j=0}^{k-i}\frac{t^j}{j!}H^{(j)}_f\circ H^{(i)}_gu(x_0)+t^{k-i}o(1)\right) +t^ko(1)\\ &{}=\sum \limits _{i=0}^{k}\left( \sum \limits _{j=0}^{k-i}\frac{t^{i+j}}{i!j!}H^{(j)}_f\circ H^{(i)}_gu(x_0)\right) +t^ko(1) \\ &{}=\sum \limits _{m=0}^{k}\frac{t^{m}}{m!}\sum \limits _{i=0}^{m}\left( \begin{array}{c}m\\ i\end{array}\right) H^{(m-i)}_f\circ H^{(i)}_gu(x_0)+t^ko(1). \end{array} \end{aligned}$$

\(\square \)
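As a concrete sanity check of (2.4), outside the formal development, one can take in one dimension \(f\equiv 1\) and \(g(x)=x\), for which the one-switch endpoint is explicit, \(x_{2t}=(x_0+t)e^t\), and compare the Taylor coefficients in t with the operators \((H_f\boxplus H_g)^i\); the sympy helper names below are ours.

```python
import sympy as sp

x, t = sp.symbols('x t')

def H(f, u, order=1):
    # 1-D Lie derivative H_f u = f * u', iterated `order` times
    for _ in range(order):
        u = f * sp.diff(u, x)
    return u

def box(f, g, u, m):
    # (H_f ⊞ H_g)^m u = sum_i binom(m,i) H_f^{(m-i)} ∘ H_g^{(i)} u  (H_g acts first)
    return sum(sp.binomial(m, i) * H(f, H(g, u, i), m - i) for i in range(m + 1))

f, g = sp.Integer(1), x               # dx/ds = 1 on [0,t), dx/ds = x on [t,2t]
u = x**2
exact = (x + t)**2 * sp.exp(2*t)      # u(x_{2t}) along the explicit one-switch flow

# (2.4): the truncated Hamiltonian series matches u(x_{2t}) up to order t^k
k = 4
approx = u + sum(t**i / sp.factorial(i) * box(f, g, u, i) for i in range(1, k + 1))
assert sp.simplify(sp.series(exact - approx, t, 0, k + 1).removeO()) == 0
```

For instance, at second order \((H_f\boxplus H_g)^2u=H^{(2)}_fu+2H_f\circ H_gu+H^{(2)}_gu=2+8x+4x^2\), which is exactly \(\frac{d^2}{dt^2}u(x_{2t})\big |_{t=0}\).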

Notice that \(H_f\boxplus H_g=H_f+H_g=H_{f+g}\) is itself a Hamiltonian operator. However we caution the reader that, if \(m\ge 2\),

$$\begin{aligned} \left( H_f\boxplus H_g\right) ^m\ne H_{f+g}^{(m)}, \end{aligned}$$

in general. Indeed if \([f,g]=Dgf-Dfg\) represents the Lie bracket of two vector fields, then we immediately compute

$$\begin{aligned} H_{[f,g]}u=[f,g]\cdot \nabla u=H_f\circ H_gu-H_g\circ H_fu=\frac{1}{2}((H_f\boxplus H_g)^2u-(H_g\boxplus H_f)^2u) \end{aligned}$$

and therefore, as an example,

$$\begin{aligned} (H_f\boxplus H_g)^2u=H^{(2)}_fu+2H_f\circ H_gu+H^{(2)}_gu=H^{(2)}_{f+g}u+[f,g]\cdot \nabla u. \end{aligned}$$
(2.5)

Therefore Lie brackets are part of the game, as they should be. Also observe that for \(\lambda >0\) by definition we have that

$$\begin{aligned} (H_{\lambda f}\boxplus H_{\lambda g})^mu=\lambda ^m(H_f\boxplus H_g)^mu, \end{aligned}$$

so we have a homogeneity property. We also notice the following, sometimes useful, algebraic properties

$$\begin{aligned} H_{-f}u=-H_fu;\;(H_{f}\boxplus H_{f})^ku=2^kH^{(k)}_fu,\;(H_{-f}\boxplus H_{-g})^ku=(-1)^k(H_{f}\boxplus H_{g})^ku. \end{aligned}$$
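Identities such as (2.5) and the sign properties above are easy to machine-check on noncommuting fields. The following sympy sketch (an illustration only; the helper names are ours) verifies both in two dimensions.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
X = (x1, x2)

def H(f, u, order=1):
    # Lie derivative H_f u = f . grad u, iterated `order` times
    for _ in range(order):
        u = sum(fi * sp.diff(u, xi) for fi, xi in zip(f, X))
    return sp.expand(u)

def box2(f, g, u):
    # (H_f ⊞ H_g)^2 u = H^{(2)}_f u + 2 H_f∘H_g u + H^{(2)}_g u
    return H(f, u, 2) + 2 * H(f, H(g, u)) + H(g, u, 2)

f = (sp.Integer(1), sp.Integer(0))
g = (sp.Integer(0), x1)               # [f,g] = Dg f - Df g = (0,1): a noncommuting pair
u = x2 + x1**2 * x2

bracket = sp.Matrix(g).jacobian(X) * sp.Matrix(f) - sp.Matrix(f).jacobian(X) * sp.Matrix(g)
Hbr = sum(bi * sp.diff(u, xi) for bi, xi in zip(bracket, X))

# identity (2.5): (H_f ⊞ H_g)^2 u = H^{(2)}_{f+g} u + [f,g] . grad u
fpg = tuple(a + b for a, b in zip(f, g))
assert sp.expand(box2(f, g, u) - (H(fpg, u, 2) + Hbr)) == 0

# even powers are insensitive to the sign change (f,g) -> (-f,-g)
mf, mg = tuple(-a for a in f), tuple(-b for b in g)
assert sp.expand(box2(mf, mg, u) - box2(f, g, u)) == 0
```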

Remark 2.4

It will be important to discuss the sign of the quantities \((H_f\boxplus H_g)^iu(x_0)\), choosing \(f,g\) appropriately. In the special case \(k=2\), from (2.5) we get that if \((H_f\boxplus H_g)^2u(x_0)<0\) then either \(H^{(2)}_{f+g}u(x_0)<0\) or \([f,g]\cdot \nabla u(x_0)\ne 0\). We can check that the implication can be somewhat reversed. Let

$$\begin{aligned} S(x)=\left( \begin{array}{cc} H_f^{(2)}u(x)&{}\quad H_f\circ H_gu(x)\\ H_g\circ H_fu(x)&{}\quad H^{(2)}_gu(x) \end{array}\right) \end{aligned}$$

and observe that S(x) is symmetric if and only if \([f,g]\cdot \nabla u(x)=0\). In [33] we proved the following.

Proposition

If \([f,g]\cdot \nabla u(x_0)\ne 0\) then there is an eigenvector \(a_1=\;^t(a_{1,1},a_{2,1})\) of \(\;^tS(x_0)S(x_0)\) with strictly positive eigenvalue \(\lambda ^2\) (\(\lambda >0\)), such that if \(a_2=\;^t(a_{1,2},a_{2,2})=-S(x_0)a_1/\lambda \), then \(a_1\ne -a_2\) and

$$\begin{aligned} (H_{a_{1,2}f+a_{2,2}g}\boxplus H_{a_{1,1}f+a_{2,1}g})^2u(x_0)<0. \end{aligned}$$

The contents of Lemma 2.3 can clearly be extended to (balanced) switch trajectories with any finite number of switches. For instance, given three vector fields \(f,g,h\in C^{m-1}({\mathbb {R}}^n;{\mathbb {R}}^n)\) and \(u\in C^m({\mathbb {R}}^n)\), we can also introduce the operators \({{\mathcal {H}}}_{f,g,h,m}:C^m({\mathbb {R}}^n)\rightarrow C({\mathbb {R}}^n)\), where

$$\begin{aligned} {{\mathcal {H}}}_{f,g,h,m}u(x_0)\equiv (H_f\boxplus H_g\boxplus H_h)^mu:= \sum _{i,j,\;i+j\le m} \left( \begin{array}{c}m\\ i,j\end{array}\right) H^{(m-i-j)}_f\circ H^{(j)}_g\circ H^{(i)}_hu. \end{aligned}$$
(2.6)

We recall that the trinomial coefficients are defined as

$$\begin{aligned} \left( \begin{array}{c}m\\ i,j\end{array}\right) =\frac{m!}{(m-i-j)!i!j!}= \left( \begin{array}{c}m\\ i\end{array}\right) \left( \begin{array}{c}m-i\\ j\end{array}\right) . \end{aligned}$$

To clarify the above definition, we notice the following property. First we define the notation

$$\begin{aligned} ((H_{f}\boxplus H_{g})\boxplus H_{{h}})^ku:= \sum _{i=0}^k\left( \begin{array}{c}k\\ i\end{array}\right) \left( H_f\boxplus H_g\right) ^{k-i}\circ H^{(i)}_hu. \end{aligned}$$

Proposition 2.5

Let \(f,g,h\in C^{k-1}({\mathbb {R}}^n;{\mathbb {R}}^n)\) and \(u\in C^k({\mathbb {R}}^n)\). Then

$$\begin{aligned} (H_f\boxplus H_g\boxplus H_h)^ku=((H_f\boxplus H_g)\boxplus H_h)^ku. \end{aligned}$$

More in general, if \(f_1,\dots ,f_{m+1}\in C^{k-1}({\mathbb {R}}^n;{\mathbb {R}}^n)\), then

$$\begin{aligned} (H_{f_1}\boxplus \dots \boxplus H_{f_{m+1}})^ku=((H_{f_1}\boxplus \dots \boxplus H_{f_m})\boxplus H_{f_{m+1}})^ku, \end{aligned}$$
(2.7)

where notations are extended in a straightforward way.

Proof

It is a matter of direct computation. For three vector fields,

$$\begin{aligned} \begin{array}{ll} (H_f\boxplus H_g\boxplus H_h)^ku&{}=\sum \limits _{i,j,\;i+j\le k} \left( \begin{array}{c}k\\ i,j\end{array}\right) H^{(k-i-j)}_f\circ H^{(j)}_g\circ H^{(i)}_hu\\ &{}=\sum \limits _{i=0}^k\left( \begin{array}{c}k\\ i\end{array}\right) \left( \sum \limits _{j=0}^{k-i} \left( \begin{array}{c}k-i\\ j\end{array}\right) H^{(k-i-j)}_f\circ H^{(j)}_g\right) \circ H^{(i)}_hu \\ &{}=\sum \limits _{i=0}^k\left( \begin{array}{c}k\\ i\end{array}\right) \left( H_f\boxplus H_g\right) ^{k-i}\circ H^{(i)}_hu. \end{array} \end{aligned}$$

The general case can be proved similarly by induction. \(\square \)
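Proposition 2.5 can also be checked by machine. The sketch below (an illustration only; sympy and the helper names are ours) compares the explicit trinomial form (2.6) with the recursive right hand side of (2.7) on noncommuting fields.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
X = (x1, x2)

def H(f, u, order=1):
    # iterated Lie derivative H_f^{(order)} u
    for _ in range(order):
        u = sum(fi * sp.diff(u, xi) for fi, xi in zip(f, X))
    return sp.expand(u)

def box2k(f, g, u, k):
    # (H_f ⊞ H_g)^k u, binomial form (g innermost)
    return sum(sp.binomial(k, i) * H(f, H(g, u, i), k - i) for i in range(k + 1))

def nested(f, g, h, u, k):
    # ((H_f ⊞ H_g) ⊞ H_h)^k u, the recursive right hand side of (2.7)
    return sum(sp.binomial(k, i) * box2k(f, g, H(h, u, i), k - i) for i in range(k + 1))

def box3(f, g, h, u, k):
    # (H_f ⊞ H_g ⊞ H_h)^k u, explicit trinomial form (2.6), f outermost
    tot = sp.Integer(0)
    for i in range(k + 1):
        for j in range(k + 1 - i):
            coef = sp.factorial(k) / (sp.factorial(k - i - j) * sp.factorial(i) * sp.factorial(j))
            tot += coef * H(f, H(g, H(h, u, i), j), k - i - j)
    return sp.expand(tot)

# three pairwise noncommuting fields and a polynomial test function
f, g, h = (sp.Integer(1), sp.Integer(0)), (sp.Integer(0), x1), (x2, sp.Integer(0))
u = x1**2 * x2 + x2**3
for k in range(1, 4):
    assert sp.expand(box3(f, g, h, u, k) - nested(f, g, h, u, k)) == 0
```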

By the previous statement, the \(k\)-th power of the sum of \(m+1\) Hamiltonians can either be defined recursively by the right hand side of (2.7), or explicitly, in the way corresponding to (2.6), by using the multinomial coefficients. The operators that we have defined appear in the Taylor estimates of trajectories in the following way. Given vector fields \(f_1,\dots ,f_m\in C({\mathbb {R}}^n;{\mathbb {R}}^n)\), a (balanced) trajectory with \((m-1)\)-switches is a Carathéodory solution of

$$\begin{aligned} \dot{x}_s=\left\{ \begin{array}{ll}f_1(x_s),\quad &{} \text{ if } s\in [0,t),\\ f_2(x_s),&{} \text{ if } s\in [t,2t),\\ \dots \;,\\ f_m(x_s),&{}s\in [(m-1)t,mt], \end{array}\right. \quad x_0\in {\mathbb {R}}^n. \end{aligned}$$
(2.8)

It is now clear by construction that, by an elementary induction argument on the parameters, we obtain the following result for the families of trajectories parametrised by t.

Proposition 2.6

Let \(f_1,\dots ,f_m\in C^{k-1}({\mathbb {R}}^n;{\mathbb {R}}^n)\) and \(u\in C^k({\mathbb {R}}^n)\). Then the (balanced) trajectory with \((m-1)\)-switches (2.8) satisfies the following asymptotic formula at the end point

$$\begin{aligned} u(x_{mt})=u(x_0)+\sum _{i=1}^{k}\frac{t^i}{i!}(H_{f_1}\boxplus \dots \boxplus H_{f_m})^iu(x_0)+t^ko(1),\quad \hbox {as }t\rightarrow 0, \end{aligned}$$
(2.9)

where the remainder tends to 0 locally uniformly in \(x_0\).
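For constant vector fields all the operators \(H_{f_i}\) commute, the endpoint of the \((m-1)\)-switch trajectory is exactly \(x_{mt}=x_0+t(f_1+\dots +f_m)\), and (2.9) reduces to the Taylor expansion of u along a straight line. The sympy sketch below (an illustration only; helper names ours) confirms this in a three field example.

```python
import sympy as sp

x1, x2, t = sp.symbols('x1 x2 t')
X = (x1, x2)

def H(f, u, order=1):
    # iterated Lie derivative H_f u = f . grad u
    for _ in range(order):
        u = sum(fi * sp.diff(u, xi) for fi, xi in zip(f, X))
    return sp.expand(u)

def box(fields, u, k):
    # (H_{f1} ⊞ ... ⊞ H_{fm})^k u via the recursion (2.7), last field innermost
    if len(fields) == 1:
        return H(fields[0], u, k)
    *head, last = fields
    return sum(sp.binomial(k, i) * box(head, H(last, u, i), k - i) for i in range(k + 1))

# three constant fields: the 2-switch endpoint is exactly x_{3t} = x_0 + t (f1+f2+f3)
fields = [(sp.Integer(1), sp.Integer(0)),
          (sp.Integer(0), sp.Integer(2)),
          (sp.Integer(1), sp.Integer(-1))]
s = tuple(sum(c) for c in zip(*fields))          # f1+f2+f3 = (2, 1)
u = x1**3 * x2 + x2**2

# (2.9): the Hamiltonian series matches u(x_0 + t s) up to order t^k
k = 3
approx = sum(t**i / sp.factorial(i) * box(fields, u, i) for i in range(k + 1))
exact = u.subs({x1: x1 + s[0]*t, x2: x2 + s[1]*t}, simultaneous=True)
assert sp.simplify(sp.series(sp.expand(exact - approx), t, 0, k + 1).removeO()) == 0
```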

2.1 Nonlinear Control Systems

We now consider the nonlinear control system (1.1), where \(x_.\) is the state and \(a_.\) is the control, determined by a controlled vector field F. The general assumptions we make here will stand for the rest of the paper. We assume for convenience that A is a compact subset of a metric space, \(F:{\mathbb {R}}^n\times A\rightarrow {\mathbb {R}}^n\) is continuous and locally Lipschitz continuous in the variable x in a neighborhood of \(x_o\in {\mathbb {R}}^n\), that is we can find \(R, L>0\) such that

$$\begin{aligned} |F(x,a)-F(y,a)|\le L|x-y|, \end{aligned}$$
(2.10)

for all \(x,y\in B_R(x_o)\), \(a\in A\). In particular there is \(M>0\) such that

$$\begin{aligned} |F(x,a)|\le |F(x,a)-F(x_o,a)|+|F(x_o,a)|\le L|x-x_o|+|F(x_o,a)|\le M, \end{aligned}$$

for all \(x\in B_R(x_o)\), \(a\in A\). Therefore if \((x_s)_{s\in [0,t]}\) is a trajectory solution of (1.1) then

$$\begin{aligned} |x_s-x|\le Ms,\quad s\in [0,t], \end{aligned}$$

and \(|x_s-x_o|\le R\) if \(|x-x_o|\le R/2\) and \(t\le R/(2\,M)=:\sigma \).

To the control system we associate a target \({\mathcal {T}}\), a closed subset of \({\mathbb {R}}^n\). The target will be assumed smooth, in the sense that, given \(x_o\in {\mathcal {T}}\backslash \hbox {int}{{\mathcal {T}}}\), it is locally defined in \(B_R(x_o)\) by a family of \(h(\le n)\) equations (a manifold of codimension \(h\))

$$\begin{aligned} \left\{ \begin{array}{l} u_1(x)=0,\\ \dots ,\\ u_h(x)=0, \end{array}\right. \end{aligned}$$

possibly, when \(h\le n-1\), with an additional inequality (a manifold with boundary, of codimension \(h\))

$$\begin{aligned} u_{h+1}(x)\le 0, \end{aligned}$$

where \(u_1,\dots ,u_{h+1}\in C^1({\mathbb {R}}^n)\), at least, are given functions. We always assume that the Jacobian of the map involved in the definition of \({\mathcal {T}}\), either \(u=(u_1,\dots ,u_h)\) or \({{\hat{u}}}=(u_1,\dots ,u_{h+1})\), has full rank at \(x_o\).

We are interested in the property that \({\mathcal {T}}\) is small time locally attainable (STLA for short) in the neighborhood of the given point \(x_o\), namely in the continuity at \(x_o\) of the minimum time function

$$\begin{aligned} T(x)=\inf _{a_.\in L^\infty ((0,+\infty );A)}t_x(a)(\le +\infty ), \end{aligned}$$

where \(t_x(a)=\min \{t\ge 0:x_t\in {\mathcal {T}}\}(\le +\infty )\) and \(x_t\) is the trajectory of the control system corresponding to the control \(a_\cdot \) and initial point x. Note that \(T(x_o)=0\). It is well known that the continuity of T at all boundary points of the target propagates to the whole domain of T (the reachable set), which is then an open set, see e.g. Bardi and the author [8, 9]. We will seek continuity of T at \(x_o\) by proving estimates of the form (1.2) in the neighborhood of \(x_o\), for a suitable integer k.

We give some definitions to classify the structure of system (1.1).

Definition 2.7

We say that the system is convex if for any pair of available vector fields \(f,g\in C({\mathbb {R}}^n;{\mathbb {R}}^n)\) every convex combination \(\lambda f+(1-\lambda )g\), \(\lambda \in [0,1]\), is also available.

We say that the system is symmetric if it is convex and, for any available vector field f, \(-f\) is also available.

We say that a system is affine (in the control) if it has the structure

$$\begin{aligned} F(x,a)=f_o(x)+G(x,a), \end{aligned}$$

where \(G(x,a)\) is symmetric. Usually \(f_o\) is called the drift.

We say that a function \(u:{\mathbb {R}}^n\rightarrow {\mathbb {R}}\) has a \(k\)-th order decrease rate for the control system at \(x_o\in {\mathbb {R}}^n\) if there are available vector fields \(f_1,\dots ,f_m\) such that

$$\begin{aligned} (H_{f_1}\boxplus \dots \boxplus H_{f_m})^ku(x_o)<0,\;(H_{f_1}\boxplus \dots \boxplus H_{f_m})^iu(x_o)=0\;\hbox {for all }i=1,\dots ,k-1. \end{aligned}$$

Notice that if \(A\subset {\mathbb {R}}^m\) is convex and \(F(x,a)=f_o(x)+\sum _{i=1}^ma_if_i(x)\), then the system is convex. If A is moreover symmetric with respect to the origin, then the system is affine in the control, and it is symmetric if \(f_o\equiv 0\).

Observe that if we have as available vector fields \(f,-f,g,-g\), then we can compute \((H_f\boxplus H_g\boxplus H_{-f}\boxplus H_{-g})u(x)=0\) and

$$\begin{aligned} \begin{array}{l} (H_f\boxplus H_g\boxplus H_{-f}\boxplus H_{-g})^2u(x)=((H_f\boxplus H_g)\boxplus (H_{-f}\boxplus H_{-g}))^2u(x)\\ =(H_f\boxplus H_g)^2u(x)+(H_{-f}\boxplus H_{-g})^2u(x)+2(H_f\boxplus H_g)\circ (H_{-f}\boxplus H_{-g})u(x) \\ =2(H_f\boxplus H_g)^2u(x)-2H^{(2)}_{f+g}u(x)= 2[f,g]\cdot \nabla u(x). \end{array} \end{aligned}$$
(2.11)

Therefore if \([f,g]\cdot \nabla u(x_o)<0\) then u has a second order decrease rate at \(x_o\). For general systems however we cannot produce a trajectory having the second order Taylor coefficient proportional to the Lie bracket of any two given available vector fields.
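The computation (2.11) can also be machine-checked. The sketch below (an illustration only; sympy and the helper names are ours) verifies on a noncommuting pair that the first order coefficient of the four-field trajectory vanishes, while the second order coefficient equals \(2[f,g]\cdot \nabla u\).

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
X = (x1, x2)

def H(f, u, order=1):
    # iterated Lie derivative H_f u = f . grad u
    for _ in range(order):
        u = sum(fi * sp.diff(u, xi) for fi, xi in zip(f, X))
    return sp.expand(u)

def box(fields, u, k):
    # (H_{f1} ⊞ ... ⊞ H_{fm})^k u via the recursion (2.7), last field innermost
    if len(fields) == 1:
        return H(fields[0], u, k)
    *head, last = fields
    return sum(sp.binomial(k, i) * box(head, H(last, u, i), k - i) for i in range(k + 1))

f = (sp.Integer(1), sp.Integer(0))
g = (sp.Integer(0), x1)                       # [f,g] = Dg f - Df g = (0, 1)
mf, mg = tuple(-c for c in f), tuple(-c for c in g)
u = x2 + x1**2 * x2**2

seq = [f, g, mf, mg]
# first order coefficient vanishes: (H_f ⊞ H_g ⊞ H_{-f} ⊞ H_{-g}) u = 0
assert sp.expand(box(seq, u, 1)) == 0

# identity (2.11): second order coefficient equals 2 [f,g] . grad u
bracket = sp.Matrix(g).jacobian(X) * sp.Matrix(f) - sp.Matrix(f).jacobian(X) * sp.Matrix(g)
Hbr = sum(bi * sp.diff(u, xi) for bi, xi in zip(bracket, X))
assert sp.expand(box(seq, u, 2) - 2 * Hbr) == 0
```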

Remark 2.8

The equivalence (2.5) gives rise to the following fully degenerate second order Hamilton–Jacobi operator taking into account pairs of available vector fields

$$\begin{aligned} \max _{(a_1,a_2)\in A\times A}\Big \{&-\hbox {Tr}\left( D^2u\;(F(x,a_1)+F(x,a_2))\otimes (F(x,a_1)+F(x,a_2))\right) \\&-\left( D(F(x,a_1)+F(x,a_2))\;(F(x,a_1)+F(x,a_2))\right. \\&\quad \left. +[F(\cdot ,a_1),F(\cdot ,a_2)](x)\right) \cdot \nabla u\Big \} \end{aligned}$$

that we introduced in [33, 35] as a counterpart of the classical Bellman operator, in order to study the second order attainability of fat targets. If \(F(x_o,a)\cdot \nabla u(x_o)=0\) for all \(a\in A\), then such an operator applied to u is strictly positive at \(x_o\) if and only if u has a 2nd order decrease rate at \(x_o\). If moreover \(u\equiv d\) is the signed distance function from \(\partial {\mathcal {T}}\), which is negative in the interior of the target, then \(n(x_o)=\nabla d(x_o)\) is the exterior normal vector and d has a second order decrease rate if and only if there are \(a_1,a_2\in A\) such that

$$\begin{aligned}{} & {} \hbox {Tr}(D^2d\;(F(x_o,a_1)+F(x_o,a_2))\otimes (F(x_o,a_1)+F(x_o,a_2)))\\{} & {} \quad +\left( D(F(x_o,a_1)+F(x_o,a_2))\;(F(x_o,a_1)+F(x_o,a_2))\right. \\{} & {} \quad \left. +[F(\cdot ,a_1),F(\cdot ,a_2)](x_o)\right) \cdot n(x_o)<0. \end{aligned}$$

The first line above is proportional to the normal curvature in the direction of the average of the vector fields, while the second compares the direction of an appropriate vector field with the exterior normal. The latter has two contributions: the first again relative to the average and the second to their Lie bracket. If in particular \(a_1=a_2\), we find that d has a 2nd order decrease rate with a single vector field provided

$$\begin{aligned}{} & {} \hbox {Tr}(D^2d\;F(x_o,a_1)\otimes F(x_o,a_1))\\{} & {} \quad +\left( DF(x_o,a_1)\;F(x_o,a_1)\right) \cdot n(x_o)<0. \end{aligned}$$

We now develop some calculus for the Hamiltonian–Taylor coefficients to show that they can be manipulated as easily as Lie brackets.

Proposition 2.9

Let \(f,g\in C^{k}({\mathbb {R}}^n;{\mathbb {R}}^n)\) and \(u\in C^{k+1}({\mathbb {R}}^n)\). Then for \(k=1\), \((H_f\boxplus H_g)u=H_{f+g}u\), and for all \(k\ge 1\)

$$\begin{aligned} (H_f\boxplus H_g)^{k+1}u=H_f\circ (H_f\boxplus H_g)^{k}u+(H_f\boxplus H_g)^{k}\circ H_gu. \end{aligned}$$
(2.12)

Proof

We compute from the definition

$$\begin{aligned} \begin{array}{c}\sum \limits _{i=0}^{k+1} \left( \begin{array}{c}k+1\\ i\end{array}\right) H^{(k+1-i)}_f\circ H^{(i)}_gu=H^{(k+1)}_fu+H^{(k+1)}_gu+ \sum \limits _{i=1}^{k}\left( \begin{array}{c}k+1\\ i\end{array}\right) H^{(k+1-i)}_f\circ H^{(i)}_gu\\ =H^{(k+1)}_fu+H^{(k+1)}_gu+ \sum \limits _{i=1}^{k}\left( \begin{array}{c}k\\ i\end{array}\right) H^{(k+1-i)}_f\circ H^{(i)}_gu +\sum \limits _{i=1}^{k}\left( \begin{array}{c}k\\ i-1\end{array}\right) H^{(k+1-i)}_f\circ H^{(i)}_gu\\ =H_f\circ \sum \limits _{i=0}^{k}\left( \begin{array}{c}k\\ i\end{array}\right) H^{(k-i)}_f\circ H^{(i)}_gu+ \sum \limits _{i=1}^{k+1}\left( \begin{array}{c}k\\ i-1\end{array}\right) H^{(k+1-i)}_f\circ H^{(i)}_gu\\ =H_f\circ (H_f\boxplus H_g)^{k}u+\left( \sum \limits _{i=0}^{k}\left( \begin{array}{c}k\\ i\end{array}\right) H^{(k-i)}_f\circ H^{(i)}_g\right) \circ H_gu. \end{array} \end{aligned}$$

\(\square \)

Sometimes our Taylor operators simplify to first order. This is important, because it allows us to weaken the regularity requirements on the target in our statements. The first example is the following.

Remark 2.10

Suppose that we have two vector fields balanced at \(x_o\), i.e. \(f(x_o)+g(x_o)=0\). Then \(H_{f+g}u(x_o)=(f+g)\cdot \nabla u(x_o)=0\) and by Proposition 2.9 we also have

$$\begin{aligned} (H_f\boxplus H_g)^2u(x_o){} & {} =H_{[f,g]}u(x_o),\\ (H_f\boxplus H_g)^3u(x_o){} & {} =H_f\circ H_{[f,g]}u(x_o)+H_{[f,g]}\circ H_gu(x_o) \\{} & {} \quad +D(D(f+g)(f+g))f\cdot \nabla u(x_o) \\{} & {} =\hbox {ad}^2_gf(x_o)\cdot \nabla u(x_o)+D(D(f+g)(f+g))f\cdot \nabla u(x_o), \end{aligned}$$

where the second equality only holds for \(u(x)=x_i\), and where we defined the iterated Lie brackets \(\hbox {ad}_gf:=[g,f]\), \(\hbox {ad}^{k+1}_gf:=[g,\hbox {ad}^k_gf]\). Thus in this case the higher order decrease rate condition reduces to discussing the sign of a first order operator provided by an iterated Lie bracket.

A higher order Hamiltonian–Taylor operator always reduces to first order when the function u is linear. Let, for instance, \(I(x)=x\) and suppose that \(u(x)=I_i(x)\) for some \(i=1,\dots ,n\). Notice in fact that if \(f=(f_1,\dots ,f_n)\) is an available vector field, then

$$\begin{aligned} H_fI_i(x)=f\cdot \nabla I_i(x)=f_i(x). \end{aligned}$$

If we have two vector fields \(f,g\), we can then prove the following.

Proposition 2.11

Let \(f,g\) be two available vector fields with the appropriate regularity required in the operations of the following formulas. Then for all \(k\ge 2\) and \(i=1,\dots ,n\) we have that

$$\begin{aligned} \begin{array}{c} (H_f\boxplus H_g)^2I_i(x)=F_2\cdot \nabla I_i(x)=(F_2)_i(x),\\ (H_f\boxplus H_g)^{k+1}I_i(x)=F_{k+1}\cdot \nabla I_i(x)=(F_{k+1})_i(x), \end{array} \end{aligned}$$

where we define recursively \(F_2(x)=D(f+g)(f+g)(x)+[f,g](x)\) and \(F_{k+1}(x):=DF_k(f+g)(x)+[F_k,g](x)\). Therefore \((H_f\boxplus H_g)^kI_i\) is a first order operator on \(I_i\).

Proof

For the two vector fields \(f,g\) we obtain, if \(e_i\) is the i-th element of the standard unit basis of \({\mathbb {R}}^n\), i.e. \(x_i=x\cdot e_i\), since \(\nabla I_i=e_i\),

$$\begin{aligned} H_f\circ H_gI_i(x)=f\cdot \nabla g_i(x)=Dgf\cdot e_i=Dgf\cdot \nabla I_i(x). \end{aligned}$$

Thus

$$\begin{aligned} (H_f\boxplus H_g)^2I_i(x)=(D(f+g)(f+g)+[f,g])\cdot \nabla I_i(x)=F_2\cdot \nabla I_i(x). \end{aligned}$$

Next

$$\begin{aligned} \begin{array}{ll} (H_f\boxplus H_g)^3I_i(x)&{}=H_f\circ (H_f\boxplus H_g)^2I_i(x)+(H_f\boxplus H_g)^2\circ H_gI_i(x)\\ &{}= f\cdot \nabla (F_2)_i(x)+F_2\cdot \nabla g_i(x)\\ &{}=\{DF_2(f+g)+DgF_2-DF_2g\}\cdot \nabla I_i(x)\\ &{}=\left( DF_2(f+g)+[F_2,g]\right) \cdot \nabla I_i(x)=F_3\cdot \nabla I_i(x). \end{array} \end{aligned}$$

At this point we find similarly by induction that

$$\begin{aligned} (H_f\boxplus H_g)^{k+1}I_i(x)=\left( DF_k(f+g)+[F_k,g]\right) \cdot \nabla I_i(x)=F_{k+1}\cdot \nabla I_i(x). \end{aligned}$$

\(\square \)

2.2 Vector Valued H-operators for Functions

When \(f_1,\dots ,f_m\in C^{j-1}({\mathbb {R}}^n;{\mathbb {R}}^n)\) are vector fields and \(u:{\mathbb {R}}^n\rightarrow {\mathbb {R}}^h\), \(u=(u_1,\dots ,u_h)\in C^j({\mathbb {R}}^n;{\mathbb {R}}^h)\), we will use the following notation for a vector valued operator

$$\begin{aligned} (H_{f_1}\boxplus \dots \boxplus H_{f_m})^ju:=\;^t((H_{f_1}\boxplus \dots \boxplus H_{f_m})^ju_r)_{r=1,\dots ,h}. \end{aligned}$$

Our expansion formulas therefore become statements also for vector valued functions and in particular trajectories themselves.

Proposition 2.12

Let \(f_1,\dots ,f_m\in C^{k-1}({\mathbb {R}}^n;{\mathbb {R}}^n)\) and \(u\in C^k({\mathbb {R}}^n;{\mathbb {R}}^h)\). Then the (balanced) trajectory with \((m-1)\)-switches (2.8) satisfies the following asymptotic formula in \({\mathbb {R}}^h\)

$$\begin{aligned} u(x_{mt})=u(x_0)+\sum _{i=1}^{k}\frac{t^i}{i!}(H_{f_1}\boxplus \dots \boxplus H_{f_m})^iu(x_0)+t^ko(1),\quad \hbox {as }t\rightarrow 0, \end{aligned}$$
(2.13)

where the remainder tends to 0 locally uniformly in \(x_0\). If in particular \(h=n\) and \(u(x)=I(x)=x\) is the identity function on \({\mathbb {R}}^n\), then we obtain the following expansion formula for balanced trajectories

$$\begin{aligned} x_{mt}=x_0+\sum _{i=1}^{k}\frac{t^i}{i!}(H_{f_1}\boxplus \dots \boxplus H_{f_m})^iI(x_0)+t^ko(1),\quad \hbox {as }t\rightarrow 0. \end{aligned}$$
(2.14)
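Formula (2.14) can be illustrated numerically. The sketch below is a toy example of ours, not one from the paper: we follow the balanced family \(f=(1,0)\), \(g=(0,x_1)\), \(-f\), \(-g\) on four consecutive intervals of length t. The first order coefficient vanishes and the second order one is \(2[f,g]=(0,2)\), so (2.14) predicts \(x_{4t}=x_0+t^2(0,1)+t^2o(1)\); for this particular family the expansion is even exact.

```python
# Numeric sketch of the expansion (2.14) for a balanced trajectory.
# Toy fields (ours): f=(1,0), g=(0,x1), then -f, -g.  Here
# (H_f ⊞ H_g ⊞ H_{-f} ⊞ H_{-g})^2 I = 2[f,g] with [f,g]=(0,1), so the
# endpoint should satisfy x_{4t} = x0 + t^2 (0,1) + o(t^2).
import numpy as np
from scipy.integrate import solve_ivp

def follow(field, x, t):
    """Integrate dx/ds = field(x) for time t from x."""
    sol = solve_ivp(lambda s, y: field(y), (0.0, t), x,
                    rtol=1e-10, atol=1e-12)
    return sol.y[:, -1]

f  = lambda y: np.array([1.0, 0.0])
g  = lambda y: np.array([0.0, y[0]])
mf = lambda y: -f(y)
mg = lambda y: -g(y)

x0 = np.array([0.3, -0.2])
t = 0.05
x = x0
for field in (f, g, mf, mg):      # switch every t, as in (2.8)
    x = follow(field, x, t)

predicted = x0 + t**2 * np.array([0.0, 1.0])
print(np.abs(x - predicted).max())   # ~0 (here the expansion is exact)
```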

Remark 2.13

Consider two points \(x_1,x_2\in {\mathbb {R}}^n\) and a family of available vector fields \(f_1,\dots ,f_m\). For \(0<s\le t\le T\) consider the trajectory \(x^1_r\) defined in [0, ms], starting at \(x_1\) and using the vector fields \(f_i\) on subsequent intervals of length s as in (2.8), and the trajectory \(x^2_r\) defined in [0, mt], starting at \(x_2\) and using the vector fields \(f_i\) on subsequent intervals of length t. Notice that by the Gronwall inequality

$$\begin{aligned} \begin{array}{c} |x^1_s-x^2_t|\le M|t-s|+|x_1-x_2|e^{Ls},\quad |x^1_{2s}-x^2_{2t}|\le M|t-s|+|x^1_s-x^2_t|e^{Ls}, \end{array} \end{aligned}$$

and then by induction \(|x^1_{ms}-x^2_{mt}|\le C(|t-s|+|x_1-x_2|)\), with C depending on m, T. The end point of a balanced trajectory of a given family of vector fields is then continuous with respect to the initial point and time. Therefore the remainder o(1) in (2.13) can be expressed through a continuous function \(\gamma =\gamma (x_0,t)\) given by \(\gamma (x_0,0)=0\) and

$$\begin{aligned} \gamma (x_0,t)=\frac{k!}{t^k}\left( u(x_{mt})-u(x_0)-\sum _{i=1}^{k}\frac{t^i}{i!}(H_{f_1}\boxplus \dots \boxplus H_{f_m})^iu(x_0)\right) ,\quad \hbox {if }t>0. \end{aligned}$$

Remark 2.14

To better understand our expansion formula in the vector case, note that if \(f,g\) are available vector fields then

$$\begin{aligned} \begin{array}{ll} (H_f\boxplus H_g)^3I=H_f\circ (H^{(2)}_{f+g}I+H_{[f,g]}I)+(H^{(2)}_{f+g}+H_{[f,g]})\circ H_gI= H^{(3)}_{f+g}I\\ +(H^{(2)}_{f+g}\circ H_gI-H_g\circ H^{(2)}_{f+g}I) +(H_f\circ H_{[f,g]}I+H_{[f,g]}\circ H_gI) \end{array} \end{aligned}$$

and now since

$$\begin{aligned} \begin{array}{c} H^{(2)}_{f+g}\circ H_gI-H_g\circ H^{(2)}_{f+g}I=H_{f+g}\circ H_{[f,g]}I+H_{[f,g]}\circ H_{f+g}I;\\ H_f\circ H_{[f,g]}I+H_{[f,g]}\circ H_gI=\hbox {ad}^2_fg+H_{[f,g]}\circ H_{f+g}I=\hbox {ad}^2_gf+H_{f+g}\circ H_{[f,g]}I \end{array} \end{aligned}$$

we finally obtain

$$\begin{aligned}{} & {} \frac{1}{3!}(H_f\boxplus H_g)^3I=\frac{1}{3!}H^{(3)}_{f+g}I+\frac{1}{2}\left( H_{f+g}\circ (\frac{1}{2}H_{[f,g]}I)\right. \\{} & {} \quad \left. +(\frac{1}{2}H_{[f,g]})\circ H_{f+g}I\right) +\frac{1}{12}(\hbox {ad}^2_fg+\hbox {ad}^2_gf). \end{aligned}$$

This confirms the third term in the expansion of the Baker–Campbell–Hausdorff formula.

The following statement shows how to express a second order operator differently.

Proposition 2.15

Let \(f_1,\dots ,f_m\in C^1({\mathbb {R}}^n;{\mathbb {R}}^n)\) be available vector fields and \(u\in C^2({\mathbb {R}}^n)\). Then

$$\begin{aligned} \begin{array}{c} (H_{f_1}\boxplus \dots \boxplus H_{f_m})^2I=D{(f_1+\dots +f_m)}(f_1+\dots +f_m)+\sum _{1\le i<j\le m}[f_i,f_j],\\ (H_{f_1}\boxplus \dots \boxplus H_{f_m})^2u=H^{(2)}_{f_1+\dots +f_m}u+\sum _{1\le i<j\le m}[f_i,f_j]\cdot \nabla u. \end{array} \end{aligned}$$

Proof

We only prove the second formula. We already know that

$$\begin{aligned} (H_{f_1}\boxplus H_{f_2})^2u=H^{(2)}_{f_1+f_2}u+[f_1,f_2]\cdot \nabla u. \end{aligned}$$

Now notice that by definition

$$\begin{aligned}{} & {} (H_{f_1}\boxplus H_{f_2}\boxplus H_{f_3})^2u=(H_{f_1}\boxplus H_{f_2})^2u+H^{(2)}_{f_3}u+2H_{f_1+f_2}\circ H_{f_3}u\\{} & {} \quad =H^{(2)}_{f_1+f_2}u+H_{f_1+f_2}\circ H_{f_3}u+H_{f_3}\circ H_{f_1+f_2}u+H^{(2)}_{f_3}u\\{} & {} \qquad +[f_1,f_2]\cdot \nabla u+[f_1+f_2,f_3]\cdot \nabla u\\{} & {} \quad =H^{(2)}_{f_1+f_2+f_3}u+([f_1,f_2]+[f_1,f_3]+[f_2,f_3])\cdot \nabla u. \end{aligned}$$

We complete similarly the statement by induction. We leave the easy details to the reader. \(\square \)
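The case \(m=3\) of Proposition 2.15 can be verified symbolically. The sketch below is our own, with arbitrary polynomial example fields; it checks the identity for a generic smooth u.

```python
# Symbolic check of Proposition 2.15 for m = 3 (toy fields of ours):
# (H_{f1} ⊞ H_{f2} ⊞ H_{f3})^2 u = H^(2)_{f1+f2+f3} u + Σ_{i<j} [f_i,f_j]·grad u.
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
X = (x1, x2)
u = sp.Function('u')(x1, x2)
f1 = sp.Matrix([x2, 0])
f2 = sp.Matrix([x1*x2, -x1])
f3 = sp.Matrix([1, x1 + x2])

def H(h, w):                      # H_h w = h · grad w
    return sum(h[i]*sp.diff(w, X[i]) for i in range(2))

def D(h):                         # Jacobian matrix
    return h.jacobian(sp.Matrix(X))

def bracket(a, b):                # [a,b] = Db a - Da b
    return D(b)*a - D(a)*b

fs = [f1, f2, f3]
# multinomial second power: squares plus twice the ordered mixed products
lhs = (sum(H(h, H(h, u)) for h in fs)
       + 2*sum(H(fs[i], H(fs[j], u))
               for i in range(3) for j in range(i + 1, 3)))
s = f1 + f2 + f3
rhs = H(s, H(s, u)) + H(bracket(f1, f2) + bracket(f1, f3) + bracket(f2, f3), u)
print(sp.simplify(lhs - rhs))     # 0
```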

We add some definitions to the vector case.

Definition 2.16

We say that a function \(u:{\mathbb {R}}^n\rightarrow {\mathbb {R}}^h\) has a \(k\)-th order rate of change in the direction of \(v\in {\mathbb {R}}^h\), \(k\ge 1\), for the control system at \(x_o\in {\mathbb {R}}^n\) if there are available vector fields \(f_1,\dots ,f_m\) such that \(H_{f_1+\dots +f_m}u(x_o)\ne 0\) if \(k=1\), or for \(k\ge 2\)

$$\begin{aligned} v=\frac{1}{k!}(H_{f_1}\boxplus \dots \boxplus H_{f_m})^ku(x_0)\ne 0,\;(H_{f_1}\boxplus \dots \boxplus H_{f_m})^iu(x_0)=0\;\hbox {for all }i=1,\dots ,k-1. \end{aligned}$$
(2.15)

Let \({\mathcal {L}}_u:=\{v:v\hbox { satisfies }(2.15)\}\subset {\mathbb {R}}^h\).

In particular we say that a balanced trajectory \(x_s\) of (1.1) as in (2.8) moves at \(k\)-th order rate in the direction of \(v\in {\mathbb {R}}^n\) for the control system at \(x_o\in {\mathbb {R}}^n\) if the vector fields \(f_1,\dots ,f_m\) are such that \(({f_1+\dots +f_m})(x_o)\ne 0\) if \(k=1\), or for \(k\ge 2\)

$$\begin{aligned} v=\frac{1}{k!}(H_{f_1}\boxplus \dots \boxplus H_{f_m})^kI(x_0)\ne 0,\;(H_{f_1}\boxplus \dots \boxplus H_{f_m})^iI(x_0)=0\;\hbox {for all }i=1,\dots ,k-1. \end{aligned}$$
(2.16)

Let \({\mathcal {L}}\equiv {\mathcal {L}}_I:=\{v:v\hbox { satisfies }(2.16)\}\subset {\mathbb {R}}^n\).

Remark 2.17

(Brackets and iterated Lie brackets.) Similarly to what we observed in (2.11), if \(f,g,h\) and \(-f,-g,-h\) are available vector fields, in particular in the case of a symmetric system, then with a little algebraic effort one sees that, as expected,

$$\begin{aligned} (H_f\boxplus H_g\boxplus H_{-f}\boxplus H_{-g})I \equiv 0,\quad (H_f\boxplus H_g\boxplus H_{-f}\boxplus H_{-g})^2I =2{[f,g]}, \end{aligned}$$

thus \([f,g]\in {\mathcal {L}}\) if it is nonvanishing at \(x_o\), while on the other hand

$$\begin{aligned} \begin{array}{c} (H_f\boxplus H_g\boxplus H_{-f}\boxplus H_{-g}\boxplus H_h\boxplus H_g\boxplus H_f\boxplus H_{-g}\boxplus H_{-f}\boxplus H_{-h})I\equiv 0,\\ (H_f\boxplus H_g\boxplus H_{-f}\boxplus H_{-g}\boxplus H_h\boxplus H_g\boxplus H_f\boxplus H_{-g}\boxplus H_{-f}\boxplus H_{-h})^2I\equiv 0,\\ (H_f\boxplus H_g\boxplus H_{-f}\boxplus H_{-g}\boxplus H_h\boxplus H_g\boxplus H_f\boxplus H_{-g}\boxplus H_{-f}\boxplus H_{-h})^3I=6{[[f,g],h]}, \end{array} \end{aligned}$$

so that \([[f,g],h]\in {\mathcal {L}}\), if it is nonvanishing at \(x_o\). Therefore, when using the identity function in \({\mathbb {R}}^n\), and if the system is symmetric, continuing in this fashion one can check that the complete Lie algebra generated by the available vector fields is contained in \({\mathcal {L}}\).

Remark 2.18

(The ad operator for vector fields.) Using the argument of Remark 2.10 we can see the following. Assume that \(f_o(x)+\varepsilon f_1(x)\) are available vector fields for all \(\varepsilon \in [-1,1]\) and that \(f_o(x_o)=0\). This happens for instance in the case of an affine system with drift \(f_o\) and \(x_o\) as an equilibrium point, in particular for a linear system. Let \(f(x)=f_o(x)+\varepsilon f_1(x)\) and \(g(x)=f_o(x)-\varepsilon f_1(x)\). Then observe that by Remark 2.10

$$\begin{aligned} (H_f\boxplus H_g)^2I(x_o)=-2\varepsilon \,\hbox {ad}_{f_o}f_1(x_o)+o(\varepsilon ), \end{aligned}$$

as \(\varepsilon \rightarrow 0\). This indicates that we can find iterated Lie brackets given by the ad operator close to directions in \({\mathcal {L}}\). Indeed we find for instance

$$\begin{aligned} \begin{array}{c} (H_f\boxplus H_g\boxplus H_g\boxplus H_f)I(x_o)=4f_o(x_o)=0,\quad (H_f\boxplus H_g\boxplus H_g\boxplus H_f)^2I(x_o)=0,\\ (H_f\boxplus H_g\boxplus H_g\boxplus H_f)^3I(x_o)=12\varepsilon \hbox {ad}^2_{f_o}f_1(x_o)+4\varepsilon ^2\hbox {ad}^2_{f_1}f_o(x_o). \end{array} \end{aligned}$$

Therefore, given a unit vector v, if \(v\cdot \hbox {ad}^2_{f_o}f_1(x_o)<0\) then for \(\varepsilon >0\) sufficiently small also \(v\cdot (H_f\boxplus H_g\boxplus H_g\boxplus H_f)^3I(x_o)<0\) and \((1/3!)(H_f\boxplus H_g\boxplus H_g\boxplus H_f)^3I(x_o)\in {\mathcal {L}}\). We can proceed further as

$$\begin{aligned} (H_f\boxplus H_g\boxplus H_g\boxplus H_f\boxplus H_g\boxplus H_f\boxplus H_f\boxplus H_g)^jI(x_o)=0 \end{aligned}$$

for \(j=1,2,3\) while \((H_f\boxplus H_g\boxplus H_g\boxplus H_f\boxplus H_g\boxplus H_f\boxplus H_f\boxplus H_g)^4I(x_o)=-192\varepsilon \hbox {ad}^3_{f_o}f_1(x_o)+o(\varepsilon )\) and so forth.

3 Small Time Local Attainability of Fat Targets

In this section we obtain a first attainability result for the control system (1.1) with the general assumptions of the previous Sect. 2.1, in the case of targets locally described by an inequality. Namely in the neighborhood \(B_{R}(x_o)\) of \(x_o\) the target is described as

$$\begin{aligned} {{\mathcal {T}}}=\{x:u(x)\le u(x_o)\}, \end{aligned}$$

where \(u\in C^1({\mathbb {R}}^n)\), \(\nabla u(x_o)\ne 0\). This case is easier because u is scalar, possibly the distance from the target, so there is only one constraint to cope with and we do not really need the Lemma in the Appendix. Note that this also accommodates the case of a target given by a single equation \(\{x:u(x)=u(x_o)\}\), because the two components of the complement of the target, \(\{x:u(x)>u(x_o)\}\) and \(\{x:u(x)<u(x_o)\}\), are locally disconnected. At the end of the section we extend the result and show how our approach can lead to sufficient conditions for controllability also for some classes of nonsmooth targets.

Theorem 3.1

For some integer \(k>0\) let \(u\in C^{k}({\mathbb {R}}^n)\) and \(x_o\in {\mathcal {T}}\backslash \hbox {int}{\mathcal {T}}\) be such that \(\nabla u(x_o)\ne 0\). Suppose that u has \(k\)-th order decrease rate, i.e. there are available vector fields \(f_i\in C^{k-1}({\mathbb {R}}^n;{\mathbb {R}}^n)\), \(i=1,\dots ,m\), such that

$$\begin{aligned} (H_{f_1}\boxplus \dots \boxplus H_{f_m})^ku(x_o)<0\hbox { and }(H_{f_1}\boxplus \dots \boxplus H_{f_m})^ju(x_o)=0 \hbox { for }j=1,\dots ,k-1. \end{aligned}$$
(3.1)

Then there are \(\delta ,\delta '>0\) and a constant \(K>0\) such that for any \(x\in {\mathbb {R}}^n\), \(|x-x_o|\le \delta '\) we can find \(t\in [0,\delta ]\) such that if \(x_s\) is the trajectory obtained by using the vector fields \(f_i\) on subsequent intervals of length t as in (2.8) and starting out at x, then \(x_{mt}\in B_R(x_o)\), \(u(x_{mt})\le u(x_o)\) and moreover

$$\begin{aligned} t\le K|x-x_o|^{1/k}. \end{aligned}$$

In particular the minimum time function T to reach the target \({\mathcal {T}}\) satisfies

$$\begin{aligned} T(x)\le Km|x-x_o|^{1/k},\quad x\in B_{\delta '}(x_o) \end{aligned}$$
(3.2)

and thus it is continuous at \(x_o\) and the target is STLA at \(x_o\).

Proof

The proof is local, in the neighborhood of the point \(x_o\). By the assumptions, we can choose \(c_o>0\) such that \(A:=(H_{f_1}\boxplus \dots \boxplus H_{f_m})^ku(x_o)\le -2c_o<0\). We pick a radius \(R>0\) so that \(\nabla u(x)\ne 0\) in \(B_R(x_o)\) and \({\mathcal {T}}\cap B_R(x_o)=\{x\in B_R(x_o):u(x)\le u(x_o)\}\). We consider \(\sigma >0\) so that for any \(|x-x_o|\le R/2\), \(0\le t\le \sigma \), any trajectory \(\{x_s:s\ge 0\}\) of the control system using only the vector fields \(f_i\) and starting at x satisfies \(|x_s-x_o|\le R\), for \(s\le mt\).

For any x, t such that \(|x-x_o|\le R/2\), \(0\le t\le \sigma \), and \(u(x)> u(x_o)\), we construct the trajectory \(x_s\) using the vector fields \(f_1,\dots ,f_m\) in subsequent intervals of length t and starting out at x, and the corresponding reference trajectory \(x_s^o\) starting at \(x_o\) and using the same control. Notice that changing t modifies the whole trajectory, not just its final point. We define the continuous function (see Remark 2.13)

$$\begin{aligned} \rho (x,t)=u(x^o_{mt})-u(x_{mt}) \end{aligned}$$

and we observe that by local Lipschitz continuity of u and Gronwall estimates on the trajectories, there are constants \(C,L>0\) such that

$$\begin{aligned} |\rho (x,t)|\le {{\hat{C}}}|x_{mt}-x^o_{mt}|\le C|x-x_o|e^{Lm\sigma }, \end{aligned}$$
(3.3)

for all \(|x-x_o|\le R/2\) and \(t\in [0,\sigma ]\). In particular

$$\begin{aligned} \lim _{t\rightarrow 0}\rho (x,t)=u(x_o)-u(x)<0,\quad \lim _{x\rightarrow x_o}\sup _{t\in [0,\sigma ]}|\rho (x,t)|=0. \end{aligned}$$
(3.4)

By the asymptotic formula of the trajectory \(x^o_s\) proven in Proposition 2.6 and the assumptions, we have that

$$\begin{aligned} u(x^o_{mt})-u(x_o){} & {} =\sum _{j=1}^{k}\frac{t^j}{j!}(H_{f_1}\boxplus \dots \boxplus H_{f_m})^ju(x_o)+\frac{t^k}{k!}\gamma (t)\\{} & {} =\frac{t^k}{k!}\left( (H_{f_1}\boxplus \dots \boxplus H_{f_m})^ku(x_o)+\gamma (t)\right) , \end{aligned}$$

where \(\lim _{t\rightarrow 0}\gamma (t)=0\). Therefore we can find \(0<\delta \le \sigma \), independent of x, such that \(|\gamma (t)|\le c_o\) for all \(t\in [0,\delta ]\). For a given x, \(|x-x_o|\le R/2\), we thus obtain by (3.3)

$$\begin{aligned} \begin{array}{ll} u(x_{mt})-u(x_o)&{}=u(x_{mt})-u(x^o_{mt})+\frac{t^k}{k!}\left( (H_{f_1}\boxplus \dots \boxplus H_{f_m})^ku(x_o)+\gamma (t)\right) \\ {} &{} \le \frac{t^k}{k!}\left( A+\gamma (t)\right) -\rho (x,t)\le -c_ot^k/k!+C|x-x_o|e^{Lm\sigma }. \end{array}\end{aligned}$$
(3.5)

We now notice that the right hand side in (3.5) is zero for

$$\begin{aligned} t=t^*=\left( \frac{Ck!}{c_o}e^{Lm\sigma }\right) ^{1/k}|x-x_o|^{1/k} \end{aligned}$$

and that this choice is admissible since \(t^*\le \delta \) provided we choose

$$\begin{aligned} |x-x_o|\le \delta '=\frac{c_o}{Ck!}e^{-Lm\sigma }\delta ^k. \end{aligned}$$

Thus the target is reached at most at time \(mt^*\) and the estimate on the minimum time function holds. \(\square \)
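As a toy illustration of Theorem 3.1 with \(k=2\), consider the Heisenberg-type symmetric system with fields \(f=(1,0,-x_2)\), \(g=(0,1,x_1)\), the fat target \({\mathcal {T}}=\{x_3\le 0\}\), \(u(x)=x_3\) and \(x_o=0\); this example is ours, not one from the paper. Both fields are tangent to \(\partial {\mathcal {T}}\) at \(x_o\), but \([g,f]\cdot \nabla u=-2<0\), so the switched trajectory \(g,f,-g,-f\) on intervals of length t satisfies \(u(x_{4t})=u(x)-2t^2\) exactly, and choosing \(t\sim |x-x_o|^{1/2}\) reaches the target, in agreement with the proof. The flows below are the exact closed-form solutions of each piece.

```python
# Toy illustration of Theorem 3.1 (k = 2) on a Heisenberg-type symmetric
# system (our example): f = (1, 0, -x2), g = (0, 1, x1), target {x3 <= 0},
# u(x) = x3, x_o = 0.  Each piece of the switched trajectory integrates
# in closed form, and the composition drops x3 by exactly 2 t^2.
import math

def flow_g(p, t):   # dx/ds = (0, 1, x1): x1 constant along the piece
    a, b, c = p; return (a, b + t, c + a*t)

def flow_f(p, t):   # dx/ds = (1, 0, -x2): x2 constant along the piece
    a, b, c = p; return (a + t, b, c - b*t)

def flow_mg(p, t):  # dx/ds = -(0, 1, x1)
    a, b, c = p; return (a, b - t, c - a*t)

def flow_mf(p, t):  # dx/ds = -(1, 0, -x2)
    a, b, c = p; return (a - t, b, c + b*t)

x = (0.01, -0.02, 0.003)          # u(x) = 0.003 > 0: outside the target
t = math.sqrt(x[2] / 2)           # choose t ~ |x - x_o|^{1/2}
for flow in (flow_g, flow_f, flow_mg, flow_mf):
    x = flow(x, t)
print(x[2])                       # ~0.0: the target {x3 <= 0} is reached
```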

Remark 3.2

In our previous papers [33, 35] we proved that if \(k=2\) and the system is symmetric or affine, then the decrease rate condition of the previous Theorem with \(m\le 2\), i.e. a trajectory with at most one switch, is necessary and sufficient for the target to be STLA at \(x_o\) with (1.2) holding, the necessary part being proven in [8].

Notice that in (3.1) derivatives of u up to order k may appear.

The next statement shows how to improve in a simple way estimate (3.2) and obtain Hölder regularity of the minimum time function. We need some uniformity of the pointwise estimate satisfied by T. Below we indicate the distance function from the target as \(d(x,{\mathcal {T}}):=\min \{|x-y|:y\in {\mathcal {T}}\}.\)

Proposition 3.3

We consider the control system (1.1) and the target \({\mathcal {T}}\) at the beginning of the section with \(u\in C^1({\mathbb {R}}^n)\). Assume that there are \(R,C>0\) and an integer k such that for all \(x\in {\mathcal {T}}\cap B_R(x_o)\) the minimum time function satisfies

$$\begin{aligned} T(y)\le C|y-x|^{1/k}\quad \hbox {for all }y\in B_{R/2}(x). \end{aligned}$$

Then T also satisfies

$$\begin{aligned} T(x)\le Cd(x,{\mathcal {T}})^{1/k},\quad \hbox {for all }x\in B_{R/2}(x_o). \end{aligned}$$

In particular if the assumption holds for all \(x_o\in {\mathcal {T}}\), then T is locally \(1/k\)-Hölder continuous in its domain, provided F satisfies (2.10) globally.

Proof

Let \(x\in B_{R/2}(x_o)\), and \(p_x\in {\mathcal {T}}\), \(|x-p_x|=d(x,{\mathcal {T}})\), be a projection of x. Then \(p_x\in B_R(x_o)\) since \(|p_x-x_o|\le |p_x-x|+|x-x_o|\le 2|x-x_o|\) and we can apply the assumption at \(p_x\) and find

$$\begin{aligned} T(x)\le C|x-p_x|^{1/k}=Cd(x,{\mathcal {T}})^{1/k}. \end{aligned}$$

The last part of the statement comes from a standard argument as in the book [6] or in [8]. \(\square \)

If the decrease rate sufficient condition can be satisfied in the viscosity solutions sense or for some other classes of nonsmooth sets, we can drop regularity of the target.

Corollary 3.4

Let \(x_o\in {\mathcal {T}}\backslash \hbox {int}{\mathcal {T}}\). The following two statements hold. (i) Let \(u\in C({\mathbb {R}}^n)\), suppose that \({\mathcal {T}}\cap B_R(x_o)=\{x:u(x)\le u(x_o)\}\) and that there is \(\Phi \in C^k({\mathbb {R}}^n)\) such that \(u-\Phi \) attains a local maximum point at \(x_o\), \(\nabla \Phi (x_o)\ne 0\) and \(\Phi \) has \(k\)-th order decrease rate at \(x_o\). Then all conclusions of Theorem 3.1 hold true. (ii) For some integer \(k>0\) let \(u_i\in C^{k}({\mathbb {R}}^n)\), \(i=1,\dots ,l\), be such that \(\nabla u_i(x_o)\ne 0\), and \({\mathcal {T}}\cap B_R(x_o)=\{x\in B_R(x_o):u_i(x)\le u_i(x_o),\;i=1,\dots ,l\}\). Suppose that \(u_i\) has \(k_i\)-th order decrease rate for all i, i.e. there are available vector fields \(f_h\in C^{k-1}({\mathbb {R}}^n;{\mathbb {R}}^n)\), \(h=1,\dots ,m\), such that for all \(i=1,\dots ,l\)

$$\begin{aligned} (H_{f_1}\boxplus \dots \boxplus H_{f_m})^{k_i}u_i(x_o)<0\hbox { and }(H_{f_1}\boxplus \dots \boxplus H_{f_m})^ju_i(x_o)=0 \hbox { for }j=1,\dots ,k_i-1. \end{aligned}$$
(3.6)

Then there are \(\delta ,\delta '>0\) and a constant \(K>0\) such that for any \(x\in {\mathbb {R}}^n\), \(|x-x_o|\le \delta '\) we can find \(t\in [0,\delta ]\) such that if \(x_s\) is the trajectory obtained by using the vector fields \(f_h\) as in (2.8) and starting out at x, then \(u_i(x_{mt})\le u_i(x_o)\) for all \(i=1,\dots ,l\) and moreover \(t\le K|x-x_o|^{1/k}\) if \(k=\max _ik_i\). All other conclusions of Theorem 3.1 remain unchanged.

Proof

(i) Since \(u-\Phi \) attains a local maximum point at \(x_o\), possibly shrinking R we have \(u(x)-\Phi (x)\le u(x_o)-\Phi (x_o)\) for \(x\in B_R(x_o)\), and then

$$\begin{aligned} \hat{{\mathcal {T}}}=\{x\in B_R(x_o):\Phi (x)\le \Phi (x_o)\}\subset \{x\in B_R(x_o):u(x)\le u(x_o)\}. \end{aligned}$$

We thus apply the assumption on \(\Phi \) and determine that \(\hat{{\mathcal {T}}}\), and therefore \({\mathcal {T}}\), is STLA at \(x_o\).

(ii) We modify the proof of Theorem 3.1. By the assumptions, we can choose \(c_o>0\) such that \(A_i:=(H_{f_1}\boxplus \dots \boxplus H_{f_m})^{k_i}u_i(x_o)\le -2c_o<0\), for all \(i=1,\dots ,l\). Following the proof of Theorem 3.1, we reach (3.5) for each constraint, and if \(\delta \le 1\),

$$\begin{aligned} u_i(x_{mt})-u_i(x_o)\le -c_o\frac{t^{k_i}}{k_i!} -\rho _i(x,t)\le -c_o\frac{t^k}{k!} -\rho (x,t), \end{aligned}$$

where \(i=1,\dots ,l\) and \(\rho (x,t)=\min _i\rho _i(x,t)\). We conclude as in Theorem 3.1. \(\square \)

Of course one can extend Proposition 3.3 as in Corollary 3.4 as well.

4 Small Time Local Attainability of Manifolds

4.1 The Case of a Point Target

In this section the target for system (1.1) is a point \({\mathcal {T}}=\{x_o\}\subset {\mathbb {R}}^n\), which is identified by the system of equations \(u(x)=x-x_o=0\). This case simplifies with respect to the general one because the target is determined by flat constraints. The constants \(R,\sigma \) below are as in Sect. 2.1.

We start with a useful Lemma comparing a trajectory at a point and the translation of the trajectory starting at a different point but following the same control.

Lemma 4.1

Let \((x^o_s)_{s\in [0,t]}\) be a solution of (1.1) starting at \(x_o\) and \((a_s)_{s\in [0,t]}\) be the corresponding control. Let \(x\ne x_o\) and \(y_s=x^o_s+x-x_o\), \(s\in [0,t]\) be the translation of \(x^o_s\) starting at x. Then if \(t,|x-x_o|\) are sufficiently small, the trajectory \((x_s)_{s\in [0,t]}\) of (1.1) starting at x with control \(a_s\) satisfies

$$\begin{aligned} | x_t-y_t|\le Le^{Lt}|x-x_o|t. \end{aligned}$$

Proof

The translated trajectory \(y_s\) is itself a trajectory of a translated control system

$$\begin{aligned} \dot{y}_s=F(y_s+x_o-x,a_s),\quad y_0=x. \end{aligned}$$

Therefore

$$\begin{aligned} | x_s-y_s|\le \int _0^s|F( x_r,a_r)-F(y_r+x_o-x,a_r)|\;dr\le L|x-x_o|s+L\int _0^s| x_r-y_r|\;dr \end{aligned}$$

for all \(s\in [0,t]\). By usual Gronwall estimates we then get

$$\begin{aligned} | x_t-y_t|\le Le^{Lt}|x-x_o|t. \end{aligned}$$

\(\square \)
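The estimate of Lemma 4.1 can be tested numerically. The sketch below uses a toy scalar system of ours, \(F(x,a)=\sin x+a\), for which \(L=1\) is a Lipschitz constant in x, and checks the bound for one choice of data.

```python
# Numeric sketch of Lemma 4.1 with a scalar toy system (ours):
# F(x,a) = sin(x) + a, Lipschitz constant L = 1 in x.  Compare the
# trajectory from x with the translate of the trajectory from x_o and
# check |x_t - y_t| <= L e^{Lt} |x - x_o| t.
import numpy as np
from scipy.integrate import solve_ivp

a = 0.5                            # constant control, for simplicity
F = lambda s, y: np.sin(y) + a
x_o, x, t, L = 0.0, 0.1, 0.5, 1.0

def endpoint(y0):
    sol = solve_ivp(F, (0.0, t), [y0], rtol=1e-10, atol=1e-12)
    return sol.y[0, -1]

x_t = endpoint(x)                  # trajectory from x
y_t = endpoint(x_o) + (x - x_o)    # translate of the trajectory from x_o

gap = abs(x_t - y_t)
bound = L * np.exp(L * t) * abs(x - x_o) * t
print(gap <= bound)                # True
```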

In this section we are going to assume the following condition at \(x_o\):

(A1):

We have m groups of available vector fields of the control system: \(f^{(i)}_1,\dots ,f^{(i)}_{j(i)}\), \(i=1,\dots ,m\) and integers \(k_i>0\) such that \(f^{(i)}_r\in C^{k_i-1}({\mathbb {R}}^n;{\mathbb {R}}^n)\) and we denote \({\bar{k}}=\max _ik_i\). We suppose for convenience that \(j(i)=1\) if \(k_i=1\). We assume that

$$\begin{aligned} {\mathbb {R}}^n\ni A^o_i:=\frac{1}{k_i!}\left( H_{f^{(i)}_1}\boxplus \dots \boxplus H_{f^{(i)}_{j(i)}}\right) ^{k_i}I(x_o)\ne 0. \end{aligned}$$

and if \(k_i\ge 2\)

$$\begin{aligned} \left( H_{f^{(i)}_1}\boxplus \dots \boxplus H_{f^{(i)}_{j(i)}}\right) ^rI(x_o)=0,\quad 1\le r<k_i,\;i=1,\dots ,m. \end{aligned}$$

In particular the i-th group has j(i) vector fields and the trajectory moves at \(k_i-\)th order rate at \(x_o\) in the direction of \(A^o_i\in {\mathcal {L}}\). We construct the \(n\times m\) matrix, written in columns as \(A_o=\left( A^o_i \right) _{i=1,\dots ,m}\), so that \(A_o\) has a column for each group of vector fields. The main assumption on matrix \(A_o\) is the following.

(A2):

As an \(n\times m\) matrix, the m columns of the matrix \(A_o\) are a positive basis of \({\mathbb {R}}^n\). In particular \(A_o\) has rank n (and \(m\ge n+1\)).

Remark 4.2

The notion of positive basis of a vector space is classical and recalled in the Appendix together with a technical lemma that we need in the main results of this section, where we use the assumption (A2). If \(k_i=1\) for each i, then in (A1) we have m vector fields satisfying \(A^o_i=f_i(x_o)\ne 0\). In this case assumption (A2) is the positive basis condition of the classical work by Petrov [29, 30]. If \({\bar{k}}=2\), then (A2) has been used by Liverovskii [24, 25] as necessary and sufficient second order condition when the columns of \(A_o\) are either available vector fields or their first Lie brackets.
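The positive basis condition (A2) can be checked computationally. The sketch below is our own; it relies on the standard fact that a family of columns positively spans \({\mathbb {R}}^n\) exactly when \(A\tau =v\) admits a solution \(\tau \ge 0\) for every v (and it suffices to test \(v=\pm e_i\)). We test the classical minimal positive basis \(e_1,\dots ,e_n,-(e_1+\dots +e_n)\) of \({\mathbb {R}}^2\) by linear programming.

```python
# Checking the positive-basis condition (A2) by linear programming:
# columns of A positively span R^n iff A·tau = v is solvable with
# tau >= 0 for v = +-e_i, i = 1,...,n.  Example: the minimal positive
# basis e1, e2, -(e1+e2) of R^2 (our illustration).
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 0.0, -1.0],
              [0.0, 1.0, -1.0]])   # columns: e1, e2, -(e1+e2)

def positively_solvable(M, v):
    """Feasibility of M·tau = v, tau >= 0 (zero objective LP)."""
    res = linprog(c=np.zeros(M.shape[1]), A_eq=M, b_eq=v,
                  bounds=[(0, None)] * M.shape[1], method='highs')
    return res.success

checks = [positively_solvable(A, v) for v in
          ([1, 0], [-1, 0], [0, 1], [0, -1])]
print(all(checks))                             # True: positive basis

# by contrast, e1 and e2 alone do not positively span R^2
print(positively_solvable(A[:, :2], [-1, 0]))  # False
```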

Below for any vector \(\tau \in {\mathbb {R}}^m\) we indicate \(\tau \ge 0\) if \(\tau =(\tau _1,\dots ,\tau _m)\) and \(\tau _i\ge 0\) for \(i=1,\dots ,m\). Let \(\tau =(\tau _1,\dots ,\tau _m)\ge 0\). We are going to define several reference trajectories. For \(i=1,\dots ,m\) and \(\tau _i\ge 0\), trajectory \(x^{o,i}_t\) starts at \(x_o\) and it is a balanced trajectory of the j(i) fields \(f^{(i)}_1,\dots ,f^{(i)}_{j(i)}\) followed in consecutive time intervals of length \(\tau _i^{1/k_i}\) each, and therefore by Proposition 2.12 and (A1) we know that at the end point

$$\begin{aligned} x^{o,i}_{j(i)\tau _i^{1/k_i}}=x_o+A^o_i\tau _i+\tau _io(1), \end{aligned}$$
(4.1)

as \(\tau _i\rightarrow 0\). For the given \(\tau \ge 0\), we build recursively a further reference trajectory as follows. We put \(T_0=0\) for convenience. For the first group: we consider the trajectory \(x^o_s\equiv x^{o,1}_s\), \(s\in [0,T_1]\), \(T_1=j(1)\tau _1^{1/k_1}\). We proceed recursively with the following groups of vector fields. If we have defined \(x^o_s\) up to the \(i-\)th group and time \(T_i=\sum _{l=1}^i j(l)\tau _l^{1/k_l}\), we proceed with the \((i+1)\)-th group of vector fields starting at \(x^o_{T_i}\) and prolonging the trajectory by following the vector fields \(f^{(i+1)}_1,\dots ,f^{(i+1)}_{j(i+1)}\) in \(j(i+1)\) successive intervals of respective length \(\tau _{i+1}^{1/k_{i+1}}\) until we reach the point \(x^o_{T_{i+1}}\), \(T_{i+1}=T_i+j({i+1})\tau _{i+1}^{1/k_{i+1}}\). Recursively, for all \(\tau \ge 0\) we have a trajectory \(x^o_s\) well defined for \(s\in [0,T_m]\).

For a generic initial point \(x\in B_{R/2}(x_o)\), and \(s\in [0,T_m]\), we also consider the corresponding trajectory \(x_s\) starting at x with the same control as \(x^o_s\). We finally construct the translated trajectories \(y^{i+1}_s=x^{o,i+1}_s+x_{T_{i}}^o-x_o\) for \(s\in [0,j(i+1)\tau _{i+1}^{1/k_{i+1}}]\). Notice that \(y^{i+1}_0=x_{T_{i}}^o\). Under (A1), we have now defined \(x^{o,i}_s,y^i_s\), \(s\in [0,T_i-T_{i-1}]\), \(i=1,\dots ,m\), and \(x^o_s,x_s\), \(s\in [0,T_m]\).

We start with the following Lemma, which we will also use in the next section.

Lemma 4.3

Consider a nonlinear control system in the form (1.1) and suppose that (A1) is satisfied at the point \(x_o\in {\mathcal {T}}\). Then with the notations above

$$\begin{aligned} x_{T_i}^o=x_o+\sum _{j=1}^iA^o_j\tau _j+(\tau _1+\dots +\tau _i)o(1), \end{aligned}$$
(4.2)

for all \(i=1,\dots ,m\), as \(\tau \rightarrow 0\). In particular there is \(C>0\) such that

$$\begin{aligned} |x_{T_i}^o-x_o|\le C (\tau _1+\dots +\tau _i), \end{aligned}$$

for \(i=1,\dots ,m\) and all \(\tau \) sufficiently small.

Proof

We prove the statement by an induction argument. For the first group of vector fields and (4.1) we have

$$\begin{aligned} x_{T_1}^o=x_{T_1}^{o,1}=x_o+A^o_1\tau _1+\tau _1o(1), \end{aligned}$$

as \(\tau \rightarrow 0\). Suppose now by induction that after the i-th group of vector fields, \(1\le i<m\), we have that (4.2) is satisfied. Observe that

$$\begin{aligned} x_{T_{i+1}}^o-x_o{} & {} =(x_{T_{i+1}}^o-y_{T_{i+1}-T_i}^{i+1})+(x_{T_{i+1}-T_i}^{o,i+1}-x_o)+(x_{T_{i}}^o-x_o)\\{} & {} =(x_{T_{i+1}}^o-y_{T_{i+1}-T_i}^{i+1})\\{} & {} \quad +A^o_{i+1}\tau _{i+1}+\tau _{i+1}o(1)+\sum _{j=1}^iA^o_j\tau _j+(\tau _1+\dots +\tau _i)o(1) \end{aligned}$$

where we rewrote the second term in the first line by (4.1) and the third by the induction assumption (4.2) at i. Notice that by Lemma 4.1, if \(T_m\le \sigma \), comparing the trajectories in \([T_i,T_{i+1}]\),

$$\begin{aligned} |x_{T_{i+1}}^o-y_{T_{i+1}-T_i}^{i+1}|\le C|x_{T_{i}}^o-x_o|(T_{i+1}-T_i)\le {{\hat{C}}}(\tau _1+\dots +\tau _{i})o(1), \end{aligned}$$

again by the induction assumption (4.2). This proves that (4.2) holds for \(i+1\) and the conclusion is thus reached. \(\square \)

We now prove the main result of this part of the paper. The construction of the reference trajectories made above remains in force.

Theorem 4.4

Consider a nonlinear control system in the form (1.1) and suppose that (A1) and (A2) are satisfied at the point \(x_o\in {\mathbb {R}}^n\). Then the target \({{\mathcal {T}}}=\{x_o\}\) is STLA and the minimum time function T(x) at a point x in the neighborhood of \(x_o\) satisfies

$$\begin{aligned} T(x)\le C|x-x_o|^{1/{\bar{k}}}. \end{aligned}$$

In particular T is locally \(1/{\bar{k}}\)-Hölder continuous in its domain, if F satisfies (2.10) globally.

Proof

Given \(x\ne x_o\), \(x\in B_{R/2}(x_o)\), we want to find \({\mathbb {R}}^m\ni \tau \ge 0\) so that \(x_{T_m}=x_o\), where \((x_s)_s\) has been defined above. Notice that, by Gronwall estimates on trajectories, if \(T_m\le \sigma \) (hence for \(\tau \ge 0\) sufficiently small), the continuous function \(\rho (x,\tau )=x_{T_m}^o-x_{T_m}\) satisfies

$$\begin{aligned} |\rho (x,\tau )|\le C|x-x_o|. \end{aligned}$$

In particular \(\lim _{x\rightarrow x_o}\sup _{|\tau |\le \sigma }|\rho (x,\tau )|=0\). We use Lemma 4.3 for \(i=m\) and conclude that

$$\begin{aligned} x_{T_{m}}-x_o&=x_{T_{m}}-x_{T_{m}}^o+x_{T_{m}}^o-x_o=A_o\tau +\hat{\gamma }(\tau )\sum _{i=1}^m\tau _i-\rho (x,\tau )\nonumber \\&=(A_o+\gamma (\tau ))\tau -\rho (x,\tau ), \end{aligned}$$
(4.3)

where \({\hat{\gamma }}\) is a continuous function with values in \({\mathbb {R}}^n\) such that \({\hat{\gamma }}(0)=0\), and \(\gamma (\tau )\) is an \(n\times m\) matrix valued function, vanishing as \(\tau \rightarrow 0\), whose m columns are all equal to \({\hat{\gamma }}(\tau )\). Finding \(\tau \ge 0\) so that the right hand side of (4.3) vanishes is the content of Lemma 6.2(i) in the Appendix; it uses a fixed point argument, and it is where assumption (A2) is finally needed. Thus there are \(\delta ,\delta '>0\) such that for all x with \(|x-x_o|<\delta '\) we can find \(\tau \ge 0\), \(|\tau |<\delta \), such that \(x_{T_m}=x_o\). The corresponding trajectory \(x_s\) then reaches the point \(x_o\) at time \(T_m\). Moreover Lemma 6.2 also shows that for this specific \(\tau \) we have

$$\begin{aligned} \sum _{i=1}^m\tau _i\le K\sup _{|\tau |\le \delta }|\rho (x,\tau )|\le C|x-x_o| \end{aligned}$$

and therefore (we may assume that \(\tau _i\le 1\) for all i)

$$\begin{aligned} T_m=\sum _{i=1}^mj(i)\tau _i^{1/k_i}\le {{\hat{C}}}\left( \sum _{i=1}^m\tau _i^{1/{\bar{k}}}\right) \le {{\tilde{C}}}|x-x_o|^{1/{\bar{k}}}. \end{aligned}$$

The final statement of the Theorem concerning Hölder regularity of the minimum time function now follows by well known arguments, see e.g. [6]. \(\square \)

Remark 4.5

In the case of a point as target, Remark 2.17 applied to symmetric systems shows that \({\mathcal {L}}\supset \{h(x_o):h\in \hbox {Lie}(F)\}=L(F)(x_o)\), where Lie(F) is the Lie algebra generated by all the available vector fields of the control system. If \(L(F)(x_o)={\mathbb {R}}^n\), then \({\mathcal {L}}\) contains a positive basis of \({\mathbb {R}}^n\) and in Theorem 4.4 we recover the classical sufficient condition for small time local controllability of Rashevskii and Chow [13].

Similarly Remark 2.18 applied to the case of an affine system where \(F(x,a)=f_o(x)+G(x,a)\) with an equilibrium point at \(x_o\) shows that \(L^a(F)(x_o)=\cup _{\lambda \ge 0}\lambda \;\hbox {co}\{ad^k_{f_o}f(x_o):k\in \mathbb N,\,F(x,a)=f_o(x)+f(x),a\in A\}\subset \cup _{\lambda \ge 0}\lambda \;\hbox {co}\,{\mathcal {L}}\). Here \(f_o\) is the drift, \(f_o(x_o)=0\), \(f(x)=G(x,a)\) is a generic available vector field in the symmetric part of F and \(\hbox {co }C\) is the convex hull of the set C. If \(L^a(F)(x_o)={\mathbb {R}}^n\) then the set \({\mathcal {L}}\) contains a positive basis of \({\mathbb {R}}^n\). Therefore in Theorem 4.4 we also recover the sufficient condition of Frankowska [16] and Kawski [18].

4.2 A General Manifold with a Boundary

In this section we will discuss the attainability of general smooth targets, namely manifolds possibly with a boundary. We therefore assume that in the neighborhood of the point \(x_o\) the target is described by a set of equations and at most one inequality. Let \(u=\;^t(u_1,\dots ,u_h):{\mathbb {R}}^n\rightarrow {\mathbb {R}}^h\) and possibly also \(u_{h+1}:{\mathbb {R}}^n\rightarrow {\mathbb {R}}\) be at least of class \(C^1\) and such that \(\nabla u_i(x_o)\), \(i=1,\dots ,h,(h+1)\), are linearly independent. The previous assumption stands throughout the section. Thus the target is locally defined in the neighborhood of \(x_o\) in one of the two following ways

$$\begin{aligned} \begin{array}{c} {{\mathcal {T}}}=\{x:u(x)=u(x_o)\},\\ {{\mathcal {T}}}=\{x:u(x)=u(x_o),\;u_{h+1}(x)\ge u_{h+1}(x_o)\}. \end{array}\end{aligned}$$
(4.4)

In the second case \({\mathcal {T}}\) has a boundary (as a manifold)

$$\begin{aligned} \partial {{\mathcal {T}}}=\{x:u(x)=u(x_o),\;u_{h+1}(x)= u_{h+1}(x_o)\}. \end{aligned}$$

We will suppose for convenience that \(1\le h\le n-1\), as the two cases \(h=0\), \(h=n\) have been previously treated. In particular the target has no interior points. We will keep all notations as in the previous Sect. 4.1. The constants \(R,\sigma ,L,M\) are as in Sect. 2.1. The next key Lemma compares the variation of a given vector-valued smooth function along two trajectories with the same control.

Lemma 4.6

(i) Let \((x^o_s)_{s\in [0,t]}\) be a solution of (1.1) starting at \(x_o\) and \((a_s)_{s\in [0,t]}\) be the corresponding control. Let \((x_s)_{s\in [0,t]}\) be the trajectory of (1.1) starting at \(x\in B_{R/2}(x_o)\) with the same control \(a_s\) and let \(u:{\mathbb {R}}^n\rightarrow {\mathbb {R}}^h\), \(u=(u_1,\dots ,u_h)\), be a function of class \(C^2\). Then, if \(t,|x-x_o|>0\) are sufficiently small,

$$\begin{aligned} u(x_t)-u(x)=u(x^o_t)-u(x_o)+\alpha (x,t), \end{aligned}$$
(4.5)

where \(\alpha :B_{R/2}(x_o)\times [0,\sigma ]\rightarrow {\mathbb {R}}^h\) is a continuous function and \(|\alpha (x,t)|\le C|x-x_o|t\).

(ii) Suppose moreover that \(B_R(x_o)\times A\ni (x,a)\mapsto Du(x)F(x,a)\) only depends on a restricted group of variables \(x_{l_1},\dots ,x_{l_p}\) and on \(a\in A\), and that the corresponding components \(F_{l_1},\dots ,F_{l_p}\) of the vector field depend only on the same group of variables. Suppose in addition

$$\begin{aligned} |Du(x)F(x,a)-Du(y)F(y,a)|\le {{\hat{L}}}\sqrt{\sum _{j=1}^p(x_{l_j}-y_{l_j})^2}, \end{aligned}$$
(4.6)

for all \(x,y\in B_R(x_o)\) and \(a\in A\). Then (4.5) holds with \(\alpha \) satisfying the stronger estimate

$$\begin{aligned} |\alpha (x,t)|\le Ct\sqrt{\sum _{j=1}^p(x_{l_j}-(x_o)_{l_j})^2}. \end{aligned}$$

Proof

Note that (4.5) holds with

$$\begin{aligned} \alpha (x,t)= & {} u(x_t)-u(x)-(u(x^o_t)-u(x_o))\\ {}= & {} \int _0^t\left( Du(x_s) F(x_s,a_s)-Du(x^o_s) F(x^o_s,a_s)\right) \;ds \end{aligned}$$

and then by Gronwall inequality

$$\begin{aligned} \begin{array}{ll} |\alpha (x,t)|&{}\le \int _0^t(\Vert D^2u\Vert _\infty M+\Vert Du\Vert _\infty L)|x_s-x_s^o|\;ds\\ &{}\le (\Vert D^2u\Vert _\infty M+\Vert Du\Vert _\infty L)|x-x_o|\int _0^te^{Ls}\;ds\le C|x-x_o|t, \end{array} \end{aligned}$$

for \(|x-x_o|\) sufficiently small, \(t\in [0,\sigma ]\) and C depending only on the data and \(\sigma \).

The proof of (ii) is similar by considering the subsystem of (1.1) of the group of components \(F_{l_1},\dots ,F_{l_p}\) and using (4.6). \(\square \)

We are now going to assume the following conditions:

(B1):

There are m groups of available vector fields: \(f^{(i)}_1,\dots ,f^{(i)}_{j(i)}\), \(i=1,\dots ,m\) and integers \(k_i>0\) such that \(f^{(i)}_r\in C^{k_i-1}({\mathbb {R}}^n;{\mathbb {R}}^n)\) and \(u\in C^{\bar{k}}({\mathbb {R}}^n;{\mathbb {R}}^h)\), if \(x_o\in {\mathcal {T}}\backslash \partial {\mathcal {T}}\) (respectively \({{\hat{u}}}=(u,u_{h+1})\in C^{{\bar{k}}}({\mathbb {R}}^n;{\mathbb {R}}^{h+1})\), if \(x_o\in \partial {\mathcal {T}}\)), where \({\bar{k}}=\max _ik_i\). We suppose for convenience that \(j(i)=1\) if \(k_i=1\). We assume that

$$\begin{aligned} \begin{array}{c} {\mathbb {R}}^h\ni k_i!A^o_i=(H_{f^{(i)}_1}\boxplus \dots \boxplus H_{f^{(i)}_{j(i)}})^{k_i}u(x_o)\ne 0,\\ (\hbox {resp. } {\mathbb {R}}^{h+1}\ni k_i!\;^t(A^o_i,s_i)=(H_{f^{(i)}_1}\boxplus \dots \boxplus H_{f^{(i)}_{j(i)}})^{k_i}{{\hat{u}}}(x_o)\ne 0), \end{array} \end{aligned}$$

and if \(k_i\ge 2\)

$$\begin{aligned}{} & {} (H_{f^{(i)}_1}\boxplus \dots \boxplus H_{f^{(i)}_{j(i)}})^ru_l(x_o)=0,\\ {}{} & {} 1\le r<k_i,\;i=1,\dots ,m,\;l=1,\dots ,h\hbox { (resp. }h+1), \end{aligned}$$

i.e. u has a \(k_i\)-th order rate of change in the direction of \(A^o_i\in {\mathcal {L}}_u\) (resp. \((A^o_i,s_i)\in {\mathcal {L}}_{{{\hat{u}}}}\)).

If \(x_o\in {\mathcal {T}}\backslash \partial {\mathcal {T}}\), we construct an \(h\times m\) matrix, written in columns as \(A_o=\left( A_i^o\right) _{i=1,\dots ,m}\); otherwise, if \(x_o\in \partial {\mathcal {T}}\), we add to \(A_o\) an extra row

$$\begin{aligned} {{\hat{A}}}_o=\left( \begin{array}{c}A_o\\ s\end{array}\right) , \quad s=\left( \frac{1}{k_i!}(H_{f^{(i)}_1}\boxplus \dots \boxplus H_{f^{(i)}_{j(i)}})^{k_i}u_{h+1}(x_o)\right) _{i=1,\dots ,m}. \end{aligned}$$

In particular in \(A_o\) (respectively \({{\hat{A}}}_o\)) there is exactly one column for each group of vector fields, and it is nonzero. We suppose that:

(B2):

If \(x_o\in {\mathcal {T}}\backslash \partial {\mathcal {T}}\), the m columns of the matrix \(A_o\) form a positive basis of \({\mathbb {R}}^h\). In particular \(A_o\) has rank h. If instead \(x_o\in \partial {\mathcal {T}}\) then the following holds: the matrix \({{\hat{A}}}_o\) has rank \(h+1\) and for all \(p\in {\mathbb {R}}^h\) and \(r\ge 0\) there exists \(\lambda =\;^t(\lambda _1,\dots ,\lambda _m)\ge 0\) such that \(p=A_o\lambda \), \(r\le s\cdot \lambda \).

The second part of assumption (B2) modifies the positive basis condition so that it remains useful at boundary points of the manifold.

For any given family of nonnegative times \(\tau =(\tau _1,\dots ,\tau _m)\ge 0\) sufficiently small, and an initial point \(x\in B_{R/2}(x_o)\), we build, as in the previous Sect. 4.1, the trajectories \(x^o_s,\;x_s\) for \(s\in [0,T_m]\) and \(x^{o,i}_s\) for \(s\in [0,j(i)\tau _i^{1/k_i}]\). The next statement uses an assumption in addition to (B1)–(B2) in order to prove that the target is STLA at \(x_o\).

Theorem 4.7

Consider a nonlinear control system in the form (1.1) and \(x_o\in {\mathcal {T}}\), where the target \({\mathcal {T}}\) is locally described as above in the section. Suppose that (B1) and (B2) are satisfied at \(x_o\). In addition we require that the vector fields satisfying (B1) (below \(k_i\) and m come from assumption (B1)) also satisfy: either

$$\begin{aligned} (H_{f_1^{(i)}}\boxplus \dots \boxplus H_{f_{j(i)}^{(i)}})^rI(x_o)=0, \end{aligned}$$
(4.7)

for all \(i=1,\dots ,m\) and \(1\le r<k_i\) (if \(k_i\ge 2\)); or there is \({\bar{i}}\in \{1,\dots ,m-1\}\) such that, if \(k_i\ge 2\), (4.7) holds for all \(i=1,\dots ,{\bar{i}}-1\) and \(1\le r<k_i\), and \(k_i=1\) for \(i={\bar{i}}+1,\dots ,m\).

Then the target is STLA at \(x_o\) and the minimum time function T(x) at a point x in the neighborhood of \(x_o\) satisfies (for appropriate \(\delta '\le \sigma \))

$$\begin{aligned} T(x)\le C|x-x_o|^{1/{\bar{k}}}\quad x\in B_{\delta '}(x_o). \end{aligned}$$
(4.8)

If moreover (4.8) holds near all points \({{\hat{x}}}\in B_{\delta '}(x_o)\cap {\mathcal {T}}\) and \(C,\delta ',{\bar{k}}\) are independent of \({{\hat{x}}}\), then (4.8) improves to

$$\begin{aligned} T(x)\le Cd(x,{\mathcal {T}})^{1/{\bar{k}}}. \end{aligned}$$

If in particular the assumptions hold at all \(x_o\in {\mathcal {T}}\) and F satisfies (2.10) globally, then T is locally \(1/{\bar{k}}\)-Hölder continuous in its domain.

Proof

We proceed similarly to the proofs of Theorem 4.4 and Lemma 4.3, so we only point out the main differences. With the notations of those results, for any \(x\in B_{R/2}(x_o)\), we want to find a nonnegative vector \(\tau \in {\mathbb {R}}^m\) so that \(u(x_{T_m})=u(x_o)\) and in addition \(u_{h+1}(x_{T_m})\ge u_{h+1}(x_o)\) when \(x_o\in \partial {\mathcal {T}}\). Now, similarly to (4.3),

$$\begin{aligned} u(x_{T_m})-u(x_o)=-\rho (x,\tau )+u(x_{T_m}^o)-u(x_o), \end{aligned}$$

and again \(\rho \) is continuous and \(|\rho (x,\tau )|\le C|x-x_o|\), by the local Lipschitz continuity of u, if \(\tau \) is sufficiently small. By the assumption (B1) we know that

$$\begin{aligned} u(x^{o,i}_{j(i)\tau _i^{1/k_i}})-u(x_o)=A^o_i\tau _i+\tau _io(1), \end{aligned}$$
(4.9)

as \(\tau \rightarrow 0\), for \(i=1,\dots ,m\), since \(A^o_i\in {\mathcal {L}}_u\). Moreover we also have

$$\begin{aligned} u_{h+1}(x^{o,i}_{j(i)\tau _i^{1/k_i}})-u_{h+1}(x_o)=s_i\tau _i+\tau _io(1), \end{aligned}$$

if \(x_o\in \partial {\mathcal {T}}\). We will proceed in the case \(x_o\in {\mathcal {T}}\backslash \partial {\mathcal {T}}\) and modify accordingly at the end if \(x_o\in \partial {\mathcal {T}}\). We want to proceed recursively as in Lemma 4.3 and assume that after the i-th step

$$\begin{aligned} u(x_{T_i}^o)-u(x_o)=\sum _{j=1}^iA^o_j\tau _j+\left( \tau _1+\dots +\tau _i\right) o(1). \end{aligned}$$
(4.10)

Observe that (4.10) holds for \(i=1\) by (4.9). Given (4.10) and using (B1)

$$\begin{aligned} u\left( x_{T_{i+1}}^o\right) -u(x_o){} & {} =u\left( x_{T_{i+1}}^o\right) -u\left( x_{T_{i+1}-T_i}^{o,i+1}\right) +u\left( x_{T_{i+1}-T_i}^{o,i+1}\right) \nonumber \\{} & {} \quad -u(x_o)-\left( u(x_{T_{i}}^o)-u(x_o)\right) +\left( u(x_{T_{i}}^o)-u(x_o)\right) \nonumber \\{} & {} =\left[ \left( u(x_{T_{i+1}}^o)-u(x_{T_{i}}^o)\right) -\left( u(x_{T_{i+1}-T_i}^{o,i+1})-u(x_o)\right) \right] \nonumber \\ {}{} & {} \quad +A^o_{i+1}\tau _{i+1}+\tau _{i+1}o(1)\nonumber \\{} & {} \quad +\sum _{j=1}^iA^o_j\tau _j+(\tau _1+\dots +\tau _i)o(1). \end{aligned}$$
(4.11)

What remains to be done is the estimate of the term in square brackets. By using Lemma 4.6(i) we get

$$\begin{aligned} \left| \left( u(x_{T_{i+1}}^o)-u(x_{T_{i}}^o)\right) -\left( u(x_{T_{i+1}-T_i}^{o,i+1})-u(x_o)\right) \right| \le C|x_{T_{i}}^o-x_o|\left( T_{i+1}-T_i\right) . \end{aligned}$$
(4.12)

We now need \(|x_{T_{i}}^o-x_o|(T_{i+1}-T_i)\le {{\hat{C}}}(\tau _1+\dots +\tau _i)o(1)\) to conclude, and this does not seem to hold in general without further assumptions. We obtain it either because \(T_{i+1}-T_i=\tau _{i+1}\) for \(i\ge {\bar{i}}\), as \(k_{i+1}=1\), or by Lemma 4.3 for \(i=1,\dots ,{\bar{i}}-1\), using the extra assumption, so that by induction we finally get

$$\begin{aligned} u(x_{T_{m}})-u(x_o)=A_o\tau +o(1)\sum _{i=1}^m\tau _i-\rho (x,\tau )=(A_o+\gamma (\tau ))\tau -\rho (x,\tau ), \end{aligned}$$
(4.13)

with an appropriate continuous, vector valued function \(\gamma (\tau )\) vanishing as \(\tau \rightarrow 0\). If \(x_o\in \partial {\mathcal {T}}\), then with easy changes, setting \({\hat{\rho }}(x,\tau )={{\hat{u}}}(x^o_{T_m})-{{\hat{u}}}(x_{T_m})\), we similarly obtain

$$\begin{aligned} {{\hat{u}}}(x_{T_{m}})-{{\hat{u}}}(x_o)=({{\hat{A}}}_o+{\hat{\gamma }}(\tau ))\tau -\hat{\rho }(x,\tau ). \end{aligned}$$

We then complete the proof in both cases by using Lemma 6.2 in the Appendix, similarly to the proof of Theorem 4.4, since in the case \(x_o\in \partial {\mathcal {T}}\) we need \(u(x_{T_{m}})= u(x_o)\) and \(u_{h+1}(x_{T_{m}})\ge u_{h+1}(x_o)\). \(\square \)

Remark 4.8

Ideally we would need the right hand side of the estimate (4.12) to be \(C|u(x_{T_{i}}^o)-u(x_o)|(T_{i+1}-T_i)=(\tau _1+\dots +\tau _i)o(1)\), by the induction assumption, in order to conclude as in Lemma 4.3 without further assumptions. This does not seem to be achievable in general.

In Theorem 4.7 notice that no extra condition is assumed on the \({\bar{i}}\)-th group of vector fields. Notice also that if \(k_i=2\), the extra condition for the i-th group requires

$$\begin{aligned} 0=(H_{f_1^{(i)}}\boxplus \dots \boxplus H_{f_{j(i)}^{(i)}})I(x_o)=H_{f_1^{(i)}+\dots + f_{j(i)}^{(i)}}I(x_o)= (f_1^{(i)}+\dots +f_{j(i)}^{(i)})(x_o), \end{aligned}$$

i.e. the vector fields of the \(i-\)th group are balanced at \(x_o\). This kind of assumption was made in the paper by the author [32] to prove the corresponding result for second order sufficient conditions in the case of symmetric systems.

With a slight modification in the proof, we can in fact relax the extra assumption in the statement of Theorem 4.7, which is too restrictive in some examples, by using Lemma 4.6(ii) instead.

Corollary 4.9

Consider the nonlinear control system (1.1) and \(x_o\in {\mathcal {T}}\), where the target \({\mathcal {T}}\) is locally described in one of the two ways in (4.4). Suppose that (B1) and (B2) are satisfied. In addition we require the following: \(B_R(x_o)\times A\ni (x,a)\mapsto Du(x)F(x,a)\) only depends on a restricted group of variables \(x_{l_1},\dots ,x_{l_p}\) and on \(a\in A\), and (4.6) holds true for all \(x,y\in B_R(x_o)\) and \(a\in A\). Suppose also that the corresponding components \(F_{l_1},\dots ,F_{l_p}\) of the vector field depend only on the same group of variables and the control, and: either

$$\begin{aligned} (H_{f_1^{(i)}}\boxplus \dots \boxplus H_{f_{j(i)}^{(i)}})^rI_l(x_o)=0, \end{aligned}$$
(4.14)

for all \(i=1,\dots ,m\), \(l\in \{l_1,\dots ,l_p\}\) and \(1\le r<k_i\), where \(k_i\) comes from assumption (B1); or there is \({\bar{i}}\in \{1,\dots ,m-1\}\) such that, if \(k_i\ge 2\), (4.14) holds for all \(i=1,\dots ,{\bar{i}}-1\) and \(1\le r<k_i\), and \(k_i=1\) for \(i={\bar{i}}+1,\dots ,m\).

Then the target is STLA at \(x_o\) and all other conclusions of Theorem 4.7 hold true.

Proof

If \(x_o\in {\mathcal {T}}\backslash \partial {\mathcal {T}}\), we proceed as in the proof of Theorem 4.7 until we get to (4.11). By using Lemma 4.6 (ii) instead, we modify (4.12) according to the current additional assumption and obtain in the right hand side

$$\begin{aligned} C|(x_{T_{i}}^o-x_o)_{l_1,\dots ,l_p}|(T_{i+1}-T_i) \end{aligned}$$

instead, where the vector \(x_{l_1,\dots ,l_p}\) only contains the indicated p coordinates. We now apply Lemma 4.1 to the subsystem of the space coordinates \(l_1,\dots ,l_p\) and use it as in Lemma 4.3 to finally achieve that, for \(1\le i\le {\bar{i}}-1\),

$$\begin{aligned} \left( T_{i+1}-T_i\right) \left( x_{T_i}^o-x_o-\sum _{j=1}^iv^o_j\tau _j\right) _{l_1,\dots ,l_p} =\left( \tau _1+\dots +\tau _i\right) o(1), \end{aligned}$$

where \(v^o_j=(1/k_i!)(H_{f_1^{(i)}}\boxplus \dots \boxplus H_{f_{j(i)}^{(i)}})^{k_i}I_j(x_o)\) for \(j\in \{l_1,\dots ,l_p\}\). We conclude the induction step

$$\begin{aligned} u\left( x_{T_{i+1}}^o\right) -u(x_o)=\sum _{j=1}^{i+1}A^o_j\tau _j+(\tau _1+\dots +\tau _i)o(1) \end{aligned}$$

and again (4.13) is satisfied. Similarly we complete the argument if \(x_o\in \partial {\mathcal {T}}\), concluding the proof by applying Lemma 6.2(ii) in the Appendix, where assumption (B2) is required. \(\square \)

Another variation of the result is the following.

Corollary 4.10

As in the previous Corollary assume (B1) and (B2) for the control system (1.1) and \(x_o\in {\mathcal {T}}\). Suppose in addition, with \(j=0\) or \(j=1\) according to whether \(x_o\in \mathcal T\backslash \partial {\mathcal {T}}\) or \(x_o\in \partial {\mathcal {T}}\), that: the square matrix formed by the first \(h+j\) columns of the Jacobian \(Du(x_o)\) (resp. \(D(u,u_{h+1})(x_o)\)) is nonsingular, and the last \(n-h-j\) components \((F_i)_{i=h+j+1,\dots ,n}\) of the system only depend on the corresponding coordinates \((x_i)_{i=h+j+1,\dots ,n}\) and the control \(a\in A\). Moreover: either

$$\begin{aligned} \left( H_{f_1^{(i)}}\boxplus \dots \boxplus H_{f_{j(i)}^{(i)}}\right) ^rI_l(x_o)=0, \end{aligned}$$
(4.15)

for all \(i=1,\dots ,m\), \(l\in \{h+j+1,\dots ,n\}\) and \(1\le r<k_i\), where \(k_i\) comes from assumption (B1); or there is \({\bar{i}}\in \{1,\dots ,m-1\}\) such that, if \(k_i\ge 2\), (4.15) holds for all \(i=1,\dots ,{\bar{i}}-1\) and \(1\le r<k_i\), and \(k_i=1\) for \(i={\bar{i}}+1,\dots ,m\).

Then the target is STLA at \(x_o\) and all other conclusions of Theorem 4.7 hold true.

Proof

We proceed as in the proof of Theorem 4.7 until we get to (4.11) and now estimate (4.12) as follows. Add \(n-h-j\) components to \(u=(u_1,\dots ,u_{h+j})\) as \((\tilde{u})_i(x)=x_i\), for \(i=h+j+1,\dots ,n\) in order to make \((u,\tilde{u}):{\mathbb {R}}^n\rightarrow {\mathbb {R}}^n\) locally invertible around \(x_o\). If L is a Lipschitz constant for the inverse function \((u,{{\tilde{u}}})^{-1}\) then

$$\begin{aligned} |x_{T_{i}}^o-x_o|(T_{i+1}-T_i)\le & {} L(|u(x_{T_{i}}^o)-u(x_o)|+|(x_{T_{i}}^o-x_o)_{h+j+1,\dots ,n}|)(T_{i+1}-T_i)\\\le & {} C(\tau _1+\dots +\tau _i)o(1), \end{aligned}$$

by the induction assumption, by Lemma 4.1 applied to the subsystem of the space coordinates \(h+j+1,\dots ,n\) and by Lemma 4.3. Thus the left hand side in (4.12) is estimated by \(C(\tau _1+\dots +\tau _i)o(1)\) and we conclude the proof as before. \(\square \)

5 Examples

In this section we show some examples illustrating our method.

Example 5.1

In this example in the plane, the system is symmetric with only one vector field \(F(x,y,a)=af_o(x,y)\), \(f_o(x,y)=\;^t(0,1)\), and control \(a\in [-1,1]\). In particular Lie brackets and high order variations of trajectories play no role for controllability. The target is \({\mathcal {T}}=\{u(x,y)\le 0\}\) where \(u(x,y)=x-y^4\). We compute the first Lie derivative

$$\begin{aligned} F\cdot \nabla u(x,y)=-4ay^3 \end{aligned}$$

which, for a suitable \(a=\pm 1\), is negative at every point of the boundary of the target except the origin; those points therefore enjoy first order controllability. Notice moreover that \(F\cdot \nabla u(x,0)=0\) for all x, so no vector field in the Lie algebra points toward the target at these points and standard literature does not apply. We proceed with higher order Lie derivatives and compute

$$\begin{aligned} H^{(2)}_Fu(x,y)=-12a^2y^2,\quad H^{(3)}_Fu(x,y)=-24a^3y \end{aligned}$$

both vanishing at the origin and

$$\begin{aligned} H^{(4)}_Fu(x,y)=a^4(f_o)^4_2\;\partial ^4_yu(x,y)=-24a^4<0 \end{aligned}$$

for either \(a=\pm 1\). Therefore the target is STLA also at the origin and the minimum time function satisfies

$$\begin{aligned} T(x,y)\le C (x^2+y^2)^{1/8} \end{aligned}$$

in a neighborhood of the origin. Indeed one can easily compute the minimum time function: \(T(x,y)=x^{1/4}-|y|\) for \(x\ge y^4\), so that \(T(x,y)\le L\,d((x,y),{\mathcal {T}})^{1/4}\) for a suitable constant L. Notice that the origin is never the end point of a trajectory reaching the target and that the reason for the controllability at the origin is \(\partial ^4_yu(0,0)\), another reason why known sufficient conditions in the literature do not apply.
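The iterated Lie derivatives computed in this example can be double checked symbolically. The following is a small sketch using the sympy library; the helper `lie` is ours, not from the paper, and implements the standard formula \(H_Fu=F\cdot \nabla u\).

```python
# Symbolic check of the computations in Example 5.1 (illustrative sketch).
import sympy as sp

x, y, a = sp.symbols('x y a')
u = x - y**4                 # target function, T = {u <= 0}
F = sp.Matrix([0, a])        # F(x, y, a) = a * f_o(x, y), f_o = (0, 1)

def lie(h, X, v):
    """Lie derivative X . grad(h) of the scalar h along the vector field X."""
    return sp.expand(sum(X[i] * sp.diff(h, v[i]) for i in range(len(v))))

h, derivs = u, []
for _ in range(4):
    h = lie(h, F, (x, y))
    derivs.append(h)

print(derivs)   # [-4*a*y**3, -12*a**2*y**2, -24*a**3*y, -24*a**4]
```

In particular the fourth iterated derivative is the constant \(-24a^4<0\) for \(a=\pm 1\), as claimed above.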

Example 5.2

In \({\mathbb {R}}^3\) the control system is affine with vector field \(F=f_o+af_1\), \(f_o(x,y,z)= \;^t(y,0,0)\), \(f_1(x,y,z)= \;^t(0,1,0)\), \(a\in [-1,1]\), and it has an equilibrium at the point \(P=(0,0,1/2)\). Notice that the point target \(\{P\}\) is not controllable. We take as target \({\mathcal {T}}=\{u=x+zy^2\le 0\}\), in the neighborhood of P. Since \(F\cdot \nabla u=y(1+2az)\), in any neighborhood of P there are points of the boundary of the target \({\mathcal {T}}\) where the system is not controllable e.g. \(P_\varepsilon =(-\varepsilon ^2(1-\varepsilon ),\sqrt{2}\varepsilon ,(1-\varepsilon )/2)\), \(\varepsilon \rightarrow 0+\), since \(F\cdot \nabla u(P_\varepsilon )=\sqrt{2}\varepsilon (1+a(1-\varepsilon ))\ge \sqrt{2}\varepsilon (1-|1-\varepsilon |)>0\) for any a and \(\varepsilon \) small. Therefore (1.3) cannot hold and standard literature will not apply at P. Nevertheless \(F\cdot \nabla u(0,0,1/2)=0\) and

$$\begin{aligned} (H_{f_o-f_1}\boxplus H_{f_o+f_1})^2u(P)=[f_o-f_1,f_o+f_1]\cdot \nabla u(P)=-2<0 \end{aligned}$$

and therefore \({\mathcal {T}}\) at P is STLA and the minimum time function satisfies in a neighborhood of P (for some \(C>0\))

$$\begin{aligned} T(x,y,z)\le C(x^2+y^2+(z-1/2)^2)^{1/4}. \end{aligned}$$
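The bracket computation in this example can likewise be verified symbolically. Below is a sketch with sympy; the helper `bracket` is an assumption of ours, implementing the coordinate formula \([X,Y]=DY\,X-DX\,Y\).

```python
# Symbolic check of the second order condition at P = (0, 0, 1/2) in Example 5.2.
import sympy as sp

x, y, z = sp.symbols('x y z')
v = sp.Matrix([x, y, z])
fo = sp.Matrix([y, 0, 0])     # drift f_o
f1 = sp.Matrix([0, 1, 0])     # controlled field f_1
u = x + z * y**2              # target function near P

def bracket(X, Y):
    """Lie bracket [X, Y] = DY X - DX Y in coordinates."""
    return Y.jacobian(v) * X - X.jacobian(v) * Y

g = bracket(fo - f1, fo + f1)                # equals 2*[f_o, f_1]
grad_u = sp.Matrix([sp.diff(u, w) for w in (x, y, z)])
val = (g.T * grad_u)[0].subs({x: 0, y: 0, z: sp.Rational(1, 2)})
print(val)    # -2, so the bracket points toward the target at P
```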

Example 5.3

(Reaching a curve in \({\mathbb {R}}^3\).) Consider the affine system \(F(x,y,z,a)=f_o(x,y,z)+af_1(x,y,z)+bf_2(x,y,z)\), \(a,b\in [-1,1]\), where \(f_o(x,y,z)=\;^t(0,1,0)\), \(f_1(x,y,z)=\;^t(1,-1,0)\) and \(f_2(x,y,z)=\;^t(0,0,1)\), and assume that we want to reach the curve \(u(x,y,z)=(x^3-y,z)=(0,0)\). The vector fields are constant, therefore there is no high order variation of the trajectories to play with. To check the first order condition,

$$\begin{aligned} Du\;F(x,y,z,a,b)=\;^t(a(3x^2+1)-1,b), \end{aligned}$$

and then \(Du\;F(x,y,z,0,\pm 1)=\;^t(-1,\pm 1)\), while \(Du\;F(x,y,z,1,0)=\;^t(3x^2,0)\). Therefore all points of the target with \(x\ne 0\) are first order controllable, as \(\{(-1,1),(-1,-1),(3x^2,0)\}\) is a positive basis of \({\mathbb {R}}^2\). Notice that, as a single point, the origin is not STLA for the system because the second coordinate of the system is nondecreasing. We easily compute at the origin

$$\begin{aligned}{} & {} H^{(2)}_{f_o+f_1}u(0,0,0)=\;^t(6x,0)|_{x=0}=\;^t(0,0),\nonumber \\{} & {} H^{(3)}_{f_o+f_1}u(0,0,0)=\;^t(\partial ^3_xu\;((f_1)_1)^3,0)=\;^t(6,0), \end{aligned}$$

Thus

$$\begin{aligned}\begin{array}{ll} A_o=\left( H^{(3)}_{f_o+f_1}u(0)\;|\;H_{f_o-f_2}u(0)\;|\,H_{f_o+f_2}u(0)\right) =\left( \begin{array}{cccc} 6&{}\quad -1&{}\quad -1\\ 0&{}\quad -1&{}\quad 1 \end{array}\right) \end{array} \end{aligned}$$

and the columns of \(A_o\) form a positive basis of \({\mathbb {R}}^2\). Therefore the curve is third order STLA at the origin by Theorem 4.7. No extra condition applies as \({\bar{i}}=1\). It is \(\partial ^3_x u_1\,((f_1)_1)^3(0,0,0)\) that makes the origin controllable. Previous literature does not cover this case.
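The positive basis property of the columns of \(A_o\) can also be checked directly. In \({\mathbb {R}}^2\), a finite set of nonzero vectors positively spans the plane if and only if, once the vectors are sorted by angle, no angular gap between consecutive directions reaches \(\pi \). A minimal pure-Python sketch of this criterion (the function name `is_positive_basis_R2` is ours):

```python
# Check that vectors positively span R^2: sort their directions by angle and
# verify that every gap between consecutive directions is strictly below pi.
import math

def is_positive_basis_R2(vectors):
    angles = sorted(math.atan2(b, a) for a, b in vectors)
    gaps = [t - s for s, t in zip(angles, angles[1:])]
    gaps.append(2 * math.pi - (angles[-1] - angles[0]))  # wrap-around gap
    return max(gaps) < math.pi

# Columns of A_o from Example 5.3:
print(is_positive_basis_R2([(6, 0), (-1, -1), (-1, 1)]))   # True
# Dropping one column destroys the property:
print(is_positive_basis_R2([(6, 0), (-1, -1)]))            # False
```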

Example 5.4

Consider the affine system \(F(x,y,z,a)=f_o(x,y,z)+af_1(x,y,z)+bf_2(x,y,z)\), \(a,b\in [-1,1]\), where \(f_o(x,y,z)=\;^t(y,0,0)\), \(f_1(x,y,z)=\;^t(0,1,0)\) and \(f_2(x,y,z)=\;^t(0,0,1)\) and assume that we want to reach the curve \(u(x,y,z)=(2x-y^2,z)=(0,0)\). To check the first order condition at the origin,

$$\begin{aligned} \begin{array}{c} Du\;F(0,0,0,0,\pm 1)=\;^t(0,\pm 1),\\ (H_{f_o\mp f_1}\boxplus H_{f_o\pm f_1})I(0,0,0)=\;^t(0,0),\quad (H_{f_o\mp f_1}\boxplus H_{f_o\pm f_1})^2u(0,0,0)=\;^t(\mp 6,0). \end{array} \end{aligned}$$

Therefore the target is second order controllable at the origin by Theorem 4.7. The curve is not controllable at other points, therefore the approach with the distance function will not apply.

Example 5.5

(From the book by Coron [14].) In \({\mathbb {R}}^2\) with target \({\mathcal {T}}=\{(x,y):x^2+y^2\le r^2\}\) consider the affine vector field \(F(x,y,a)=f_0(x,y)+af_1(x,y)\), \(f_0(x,y)=\;^t(y^3,0),\;f_1(x,y)=\;^t(0,1)\) and \(u(x,y)=x^2+y^2-r^2\). We compute

$$\begin{aligned} f(x,y,a)\cdot \nabla u(x,y)=2y(xy^2+a), \end{aligned}$$

which is negative on \(\partial {\mathcal {T}}\) for some \(|a|\le 1\) unless \(y=0\) so that the points where a first order decrease rate condition fails are \((\pm r,0)\). Then we obtain

$$\begin{aligned}{} & {} (H_{f_0+f_1}\boxplus H_{f_0-f_1})^2u(\pm r,0)=0,\;(H_{f_0+f_1}\boxplus H_{f_0-f_1})^3u(\pm r,0)=0,\\{} & {} (H_{f_0+f_1}\boxplus H_{f_0-f_1})^4u(x,y)=12x+204y^4,\\{} & {} (H_{f_0-f_1}\boxplus H_{f_0+f_1})^4u(x,y)=-12x+204y^4, \end{aligned}$$

therefore a fourth order condition holds at \((\pm r,0)\).

If instead we change the target to the origin, then we get \((f_0\pm f_1)(0,0)=\;^t(0,\pm 1)\) and also

$$\begin{aligned} \begin{array}{cc} (H_{f_0+f_1}\boxplus H_{f_0-f_1})I(0,0)=(0,0),\quad (H_{f_0+f_1}\boxplus H_{f_0-f_1})^2I(0,0)=(0,0),\\ (H_{f_0+f_1}\boxplus H_{f_0-f_1})^3I(0,0)=(0,0), (H_{f_0+f_1}\boxplus H_{f_0-f_1})^4I(0,0)=(12,0),\\ (H_{f_0-f_1}\boxplus H_{f_0+f_1})^4I(0,0)=(-12,0). \end{array} \end{aligned}$$

Therefore \(\{(f_0\pm f_1)(0,0),(H_{f_0-f_1}\boxplus H_{f_0+f_1})^4I(0,0),(H_{f_0+f_1}\boxplus H_{f_0-f_1})^4I(0,0)\}\) is a positive basis of \({\mathbb {R}}^2\) and we have a fourth order condition at the origin.