1 Introduction

The stabilizability to a target \(\mathcal {T}\subset \mathbb {R}^n\) of a control system \(\dot{y} = f(y,\alpha )\) on \(\mathbb {R}^n\) consists in the existence of a feedback law \(\alpha :\mathbb {R}^n\setminus \mathcal {T}\rightarrow A\), where A is the set of control values, whose (suitably defined) implementation drives the state trajectory towards the target \(\mathcal {T}\), in some uniform way.

Though stabilizability is a dynamical issue, it may obviously be considered together with a concomitant optimization problem. Specifically, starting from any state \(x\in \mathbb {R}^n\backslash \mathcal {T}\), one might wish to minimize a cost functional of the form

$$\begin{aligned} \int _0^{S_y} l(y(s),\alpha (s)) \, ds, \qquad {(l\ge 0)}, \end{aligned}$$
(1)

over all control-trajectory pairs \((\alpha , y):[0,S_y[\rightarrow A\times \mathbb {R}^n\) of the control system

$$\begin{aligned} \dot{y}= f(y,\alpha ),\,\,\,\, y(0)=x, \end{aligned}$$
(2)

where \(0<S_y\le +\infty \) denotes the infimum among the times S that verify \(\displaystyle \lim _{s\rightarrow S^-} \textrm{dist}(y(s),\mathcal {T})=0 \). A natural question then arises:

(Q) With a suitably extended notion of stabilization, what are sufficient conditions for the existence of a modified Lyapunov function that simultaneously stabilizes the system and bounds the cost?

As for global asymptotic controllability, which is the open-loop notion corresponding to stabilizability, early answers to (Q) have been provided in [24, 27] by means of the notion of Minimum Restraint Function (MRF). The latter is a particular Control Lyapunov Function—i.e. a solution of a suitable Hamilton–Jacobi dissipative inequality—which, besides implying global asymptotic controllability, regulates the cost: namely, it provides a criterion for selecting control-trajectory pairs \(s\mapsto (\alpha (s),y(s))\) that satisfy a uniform bound on the corresponding cost. Question (Q) has received some answers for both bounded and unbounded controls in [21,22,23], where it is shown how a stabilizing feedback law, which also ensures an upper bound on the cost, can be built starting from an MRF through a ‘sample and hold’ approach. (Among the different notions of stabilizability, see e.g. [1, 4, 7, 10, 35, 36, 38], we consider here the so-called sample stabilizability, see e.g. [9, 11, 34].)

In relation to the driftless control-affine system

$$\begin{aligned} \dot{y}(s) = \sum _{i=1}^m f_i(y(s))\, \alpha ^i(s), \qquad \end{aligned}$$
(3)

where the controls \(\alpha \) take values in \(A:=\{\pm e_1,\dots ,\pm e_m\}\) (\(e_i\) denoting the ith element of the canonical basis of \(\mathbb {R}^m\)), in this paper we aim to improve these results by constructing a Lie-bracket-determined stabilizing feedback which, at the same time, induces a bound for the cost (1).

This means, in particular, that the utilized ‘sample and hold’ technique involves not only the vector fields \(f_1,\ldots ,f_m\) but also their iterated Lie brackets.

More precisely, we will consider the degree-k Hamiltonian

$$\begin{aligned} H[p_0]^{(k)}(x,p,u):= \min _{B} \Big \{\langle p, B (x) \rangle + p_0(u) \displaystyle \,\max _{a\in A_ B}l(x,a)\Big \}, \end{aligned}$$
(4)

where \(p_0:\mathbb {R}_{\ge 0}\rightarrow [0,1]\) is a continuous increasing function, here called the cost multiplier. The minimum is taken among (signed) iterated Lie brackets B of the vector fields \(f_1,\dots , f_m\) of length \(\le k\), while, for any bracket B, the inner maximization is performed over the subset \(A_ B\subset A\) of control values utilized to approximate the B-flow. The novelty with respect to the standard Hamiltonian consists in the fact that, on the one hand, for \(k\ge 2\), \(H[p_0]^{(k)}\) is obtained by minimization over vectors (the Lie brackets) which may not belong to the dynamics of the system, and, on the other hand, \(H[p_0]^{(k)}\) depends also on the functions l and \(p_0\). Under some mild hypotheses specified in Sect. 2, the degree-k Hamiltonian \(H[p_0]^{(k)}\) is well defined and continuous. Finally, in order to choose the right iterated Lie bracket (in the construction of the feedback) we exploit the following differential inequality:

$$\begin{aligned} \begin{array}{l} \displaystyle H[p_0]^{(k)}(x,p, U(x)) \le -\gamma (U(x)) \qquad \forall x \in \mathbb {R}^n\setminus \mathcal {T}, \ \ \forall p\in \partial _{{\textbf{P}}}U(x). \end{array} \end{aligned}$$
(5)

Here, the dissipative rate \(\gamma \) is an increasing function taking values in \(]0,+\infty [\) and \(\partial _{{\textbf{P}}} U(x)\) denotes the proximal subdifferential of U at x (see (6)). We call relation (5) the degree-k HJ dissipative inequality (where HJ stands for Hamilton–Jacobi). A proper, positive definite, and continuous function \(U: \overline{\mathbb {R}^n \setminus \mathcal {T}} \rightarrow \mathbb {R}\) satisfying (5) for some \(p_0\) and \(\gamma \) is called a degree-k Minimum Restraint Function (in short, degree-k MRF). Let us observe that, as a trivial consequence of the monotonicity relation

$$\begin{aligned} H[p_0]^{(k)} \le H[p_0]^{(k-1)} \le \dots \le H[p_0]^{(1)}, \end{aligned}$$

the higher the Hamiltonian’s degree, the larger the corresponding set of MRFs.

Furthermore, as shown in an example in [29], for k sufficiently large it can well happen that a smooth degree-k MRF does exist while a standard (i.e. degree-1) \(C^1\) MRF does not (see also [28, Ex. 2.1–2.4] for the case with no cost). An increased regularity of a degree-k MRF—possibly obtained by choosing a sufficiently large degree k—is of obvious interest, both from a numerical point of view and in relation to feedback insensitivity to data errors.

Our main result, which will be rigorously stated in Theorem 1, can be roughly summarized as follows:

Main result. Under some regularity and integrability assumptions, the existence of a degree-k Minimum Restraint Function \(U:\overline{\mathbb {R}^n\setminus \mathcal {T}} \rightarrow \mathbb {R}\) implies that control system (3) is degree-k sample stabilizable to \(\mathcal {T}\) with regulated cost.

Let us observe that the use of Lie brackets is a well-established, basic tool in the investigation of necessary conditions for optimality as well as of sufficient conditions for controllability (see e.g. [2, 3, 12, 17, 37]). Furthermore, Lie algebraic assumptions play a crucial role in the study of regularity and uniqueness for boundary value problems of Hamilton–Jacobi equations (see e.g. [5] and references therein). However, in the mentioned literature Lie brackets are not involved explicitly in the associated HJ equations or inequalities, as they are in our degree-k dissipative inequality (5).

As for the assumptions in the above statement, they include some integrability properties of the cost multiplier \(p_0\) and of the dissipative rate \(\gamma \). Furthermore, whenever \(k>1\), they also involve a certain interplay between the curvature parameters of \(\partial \mathcal {T}\) and the semiconcavity coefficient of the MRF U (see Sect. 3.2). As a matter of fact, the need for such a condition was somehow predictable, since it turns out to be unavoidable in the high-order controllability conditions for the minimum time problem with a general target (see e.g. [18, 19, 25, 26]), where U is the distance \(\textbf{d}\) from the target and \(l\equiv 1\) (see Remark 10). For instance, when \(\textbf{d}\) is a solution of (5) for \(p_0\) and \(\gamma \) positive constants, the ‘regularity and integrability assumptions’ of the Main Result are always satisfied as soon as either \(\mathcal {T}\) is a singleton or \(\mathcal {T}\) satisfies the internal sphere condition (see Remark 6).

Let us summarize the previous considerations by saying that, on the one hand, our results extend previous results on sample stabilizability by considering both higher-order conditions and a regulated cost. Also, our main result extends some achievements of [15, 28], which concern the case without a cost. On the other hand, it can be regarded as a generalization to problems with a nonnegative, possibly vanishing Lagrangian l (widely investigated from a PDE point of view, see e.g. [20, 30, 31]) of both classical achievements on Hölder continuity and more recent regularity results for the minimum time problem.

The paper is organized as follows. Section 2 is devoted to stating the definitions of degree-k feedback generator and degree-k sample stabilizability with regulated cost. In Sect. 3 we introduce the precise assumptions and state the main result, whose proof is given in Sect. 4. In Sect. 5 we briefly mention possible generalizations, concerning both the form of the dynamics (no longer driftless control-affine) and the regularity assumptions. In particular, the latter can be weakened up to the point of considering set-valued Lie brackets of Lipschitz continuous vector fields.

1.1 Notation and Preliminaries

For any \(a,b\in \mathbb {R}\), let us set \(a\vee b:= \max \{a,b\}\), \(a\wedge b:= \min \{a,b\}\). For any integer \(N\ge 1\), \((\mathbb {R}^N)^*\) denotes the dual space of \(\mathbb {R}^N\) (and is often identified with \(\mathbb {R}^N\) itself), while we set \(\mathbb {R}_{\ge 0}^N:=[0,+\infty [^N\) and \(\mathbb {R}_{>0}^N:=]0,+\infty [^N\). For \(N=1\) we simply write \(\mathbb {R}_{\ge 0}\) and \(\mathbb {R}_{>0}\), respectively. We denote the closed unit ball in \(\mathbb {R}^N\) by \(\mathcal {B}_N\) and, given \(r>0\), \(r\mathcal {B}_{N}\) will stand for the ball of radius r (we do not specify the dimension when it is clear from the context). Given two nonempty sets X, \(Y \subseteq \mathbb {R}^N\), we call distance between X and Y the number \(\textrm{dist}(X, Y):= \inf \{ |x-y| \text {: } x\in X \text {, } y \in Y\}\). We observe that \(\textrm{dist}(\cdot ,\cdot )\) is not a distance in general, while the map \(\mathbb {R}^N\ni x\mapsto \textrm{dist}(\{x\},X)\) coincides with the distance function from X. We set \(\mathcal {B}(X,r):= \{ x \in \mathbb {R}^N \text {: } \textrm{dist}(\{x\},X) \le r \}\) and we write \(\partial X\), int(X), and \({\overline{X}}\) for the boundary, the interior, and the closure of X, respectively. For any two points x, \(y \in \mathbb {R}^N\) we denote by sgm\((x,y)\) the segment joining them, i.e. sgm\((x,y):= \{ \lambda x+(1-\lambda )y \text {: } \lambda \in [0,1] \}\). Moreover, for any two vectors \(v,w \in \mathbb {R}^N\), \(\langle v,w\rangle \) denotes their scalar product. Let \(\Omega \subseteq \mathbb {R}^N\) be an open, nonempty subset. Given an integer \(k\ge 1\), we write \(C^k(\Omega )\) for the set of vector fields of class \(C^k\) on \(\Omega \), namely \(C^k(\Omega ):=C^k(\Omega ; \mathbb {R}^N)\), while \(C^{k}_b(\Omega ) \subset C^k(\Omega )\) denotes the subset of vector fields with bounded derivatives up to order k. 
We use \(C^{k-1,1}(\Omega )\subset C^{k-1}(\Omega )\) to denote the subset of vector fields whose \((k-1)\)-th derivative is Lipschitz continuous on \(\Omega \), and we set \(C^{k-1,1}_b (\Omega ):=C^{k-1}_b(\Omega )\cap C^{k-1,1}(\Omega ) \).

We say that a continuous function \(G:{\overline{\Omega }} \rightarrow \mathbb {R}\) is positive definite if \(G(x)>0\)  \(\forall x\in \Omega \) and \(G(x)=0\)  \(\forall x\in \partial \Omega \). The function G is called proper if the pre-image \(G^{-1}({{\mathcal {K}}})\) of any compact set \({{\mathcal {K}}}\subset \mathbb {R}\) is compact. We use \(\partial _{{\textbf{P}}}G(x)\) to denote the (possibly empty) proximal subdifferential of G at x, namely the set of vectors \(p\in (\mathbb {R}^N)^*\) such that, for some positive constants \(\rho \), c, one has

$$\begin{aligned} p\in \partial _{{\textbf{P}}}G(x)\iff G({\bar{x}})-G(x)+\rho |\bar{x}-x|^2\ge \langle p,\, {\bar{x}}-x\rangle \qquad \forall {\bar{x}}\in \mathcal {B}(\{x\}, c). \end{aligned}$$
(6)

The limiting subdifferential \(\partial G(x)\) of G at \(x\in \Omega \), is defined as

$$\begin{aligned} \partial G(x):= \Big \{p\in \mathbb {R}^N: \ \ p=\lim _{i\rightarrow +\infty } p_i, \ p_i\in \partial _{{\textbf{P}}}G(x_i), \ \lim _{i\rightarrow +\infty } x_i=x\Big \}. \end{aligned}$$

When the function G is locally Lipschitz continuous on \(\Omega \), the limiting subdifferential \(\partial G(x)\) is nonempty at every point.

We say that a function \(G: \Omega \rightarrow \mathbb {R}\) is semiconcave (with linear modulus) on \(\Omega \) if it is continuous and for any closed subset \(\mathcal {M}\subset \Omega \) there exists \(\eta _{_\mathcal {M}} >0\) such that

$$\begin{aligned} G(x_1) + G(x_2) - 2G\left( \frac{x_1+x_2}{2}\right) \le \eta _{_\mathcal {M}} |x_1-x_2|^2 \end{aligned}$$

for all \(x_1\), \(x_2 \in \mathcal {M}\) such that the segment sgm\((x_1,x_2)\) is contained in \(\mathcal {M}\). If this property holds only for every compact subset \(\mathcal {M}\subset \Omega \), G is said to be locally semiconcave (with linear modulus) on \(\Omega \). In both cases, G is locally Lipschitz continuous and, for \(\mathcal {M}\), \(\eta _{_\mathcal {M}}\), \(x_1\), \(x_2\) as above, for any \(p \in \partial G(x_1)\) the following inequality holds true as well (see [8, Prop. 3.3.1, 3.6.2]):

$$\begin{aligned} G(x_2) - G(x_1) \le \langle p, x_2-x_1 \rangle + \eta _{_\mathcal {M}} |x_2-x_1|^2. \end{aligned}$$
(7)
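As a quick, self-contained numerical illustration (ours, not part of the paper), both the midpoint inequality above and inequality (7) can be checked for the smooth concave function \(G(x)=-x^2\) on \([-2,2]\), for which any coefficient \(\eta \ge 0\) works; we take \(\eta =1\):

```python
import random

def G(x):        # smooth concave function, hence semiconcave on compact sets
    return -x * x

def dG(x):       # derivative; for smooth G the limiting subdifferential is {G'(x)}
    return -2.0 * x

eta = 1.0        # a valid semiconcavity coefficient for G on [-2, 2]
random.seed(0)
for _ in range(1000):
    x1, x2 = random.uniform(-2, 2), random.uniform(-2, 2)
    # midpoint inequality defining semiconcavity with linear modulus
    assert G(x1) + G(x2) - 2 * G((x1 + x2) / 2) <= eta * (x1 - x2) ** 2 + 1e-12
    # inequality (7) with p = G'(x1)
    assert G(x2) - G(x1) <= dG(x1) * (x2 - x1) + eta * (x2 - x1) ** 2 + 1e-12
print("both inequalities hold on 1000 random pairs")
```

Here both inequalities actually hold with any \(\eta \ge 0\), since for this G the left-hand sides are bounded above by \(-|x_1-x_2|^2/2\) and \(-|x_2-x_1|^2\), respectively.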

Finally, let us collect some basic definitions on iterated Lie brackets. If \(g_1\), \(g_2\) are \(C^1\) vector fields on \(\mathbb {R}^N\) the Lie bracket of \(g_1\) and \(g_2\) is defined as

$$\begin{aligned}{}[g_1,g_2](x):= Dg_2(x)\cdot g_1(x) - D g_1(x)\cdot g_2(x)\,\,\, \big (= - [g_2,g_1](x)\big ). \end{aligned}$$

As is well-known, the map \([g_1,g_2]\) is a true vector field, i.e. it can be defined intrinsically. If the vector fields are sufficiently regular, one can iterate the bracketing process: for instance, given a 4-tuple \(\textbf{g}:=(g_1,g_2,g_3,g_4)\) of vector fields one can construct the brackets \([[g_1,g_2],g_3]\), \([[g_1,g_2],[g_3,g_4]]\), \([[[g_1,g_2],g_3],g_4]\), \([[g_2,g_3],g_4]\). Accordingly, one can consider the (iterated) formal brackets \(B_1:=[[X_1,X_2],X_3]\), \(B_2:=[[X_1,X_2],[X_3,X_4]]\), \(B_3:=[[[X_1,X_2],X_3],X_4]\), \(B_4:=[[X_2,X_3],X_4 ]\) (regarded as sequences of letters \(X_1,\ldots ,X_4\), commas, and left and right square parentheses), so that, with obvious meaning of the notation, \(B_1(\textbf{g}) = [[g_1,g_2],g_3]\), \(B_2(\textbf{g}) = [[g_1,g_2],[g_3,g_4]]\), \(B_3(\textbf{g}) =[[[g_1,g_2],g_3],g_4]\), \(B_4(\textbf{g}) =[[g_2,g_3],g_4]\).
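For readers who wish to experiment, the bracket can be approximated numerically straight from the defining formula. The following minimal Python sketch (ours; the fields \(g_1=(1,0)\) and \(g_2=(0,x_1)\) are illustrative choices, not from the paper, with \([g_1,g_2]\equiv (0,1)\)) uses central differences for the directional derivatives:

```python
def dir_deriv(g, x, v, h=1e-5):
    # directional derivative Dg(x)·v via a central difference along v
    xp = [xi + h * vi for xi, vi in zip(x, v)]
    xm = [xi - h * vi for xi, vi in zip(x, v)]
    return [(a - b) / (2 * h) for a, b in zip(g(xp), g(xm))]

def lie_bracket(g1, g2, x):
    # [g1, g2](x) = Dg2(x)·g1(x) - Dg1(x)·g2(x)
    t1 = dir_deriv(g2, x, g1(x))
    t2 = dir_deriv(g1, x, g2(x))
    return [a - b for a, b in zip(t1, t2)]

def f1(x):       # illustrative field: constant (1, 0)
    return [1.0, 0.0]

def f2(x):       # illustrative field: (0, x_1)
    return [0.0, x[0]]

print(lie_bracket(f1, f2, [0.7, -1.3]))   # ≈ [0.0, 1.0] at every point
```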

The degree (or length) of a formal bracket is the number \(\ell _{_B}\) of letters that are involved in it. For instance, the brackets \(B_1, B_2, B_3, B_4\) have degrees equal to 3, 4, 4, and 3, respectively. By convention, a single letter \(X_i\) is a formal bracket of degree 1. Given a formal bracket B of degree \(\ge 2\), there exist formal brackets \(B_1\) and \(B_2\) such that \(B=[B_1,B_2]\). The pair \((B_1,B_2)\) is uniquely determined and is called the factorization of B.

The switch-number of a formal bracket B is the number \({\mathfrak {s}}_{_B}\) defined recursively as:

$$\begin{aligned} {\mathfrak {s}}_{_B}:= 1 \ \text { if } \ell _{_B}=1, \qquad {\mathfrak {s}}_{_{B}}:= 2\big ({\mathfrak {s}}_{_{B_1}}+{\mathfrak {s}}_{_{B_2}}\big )\ \text { if } \ \ell _{_B}\ge 2 \ \text { and } \ {B}=[B_1,B_2]. \end{aligned}$$

For instance, the switch-numbers of \([[X_3,X_4],[[X_5,X_6],X_7]]\) and \([[X_5,X_6],X_7]\) are 28 and 10, respectively. When no confusion may arise, we also speak of ‘degree and switch-number of Lie brackets of vector fields’. It may happen that brackets with the same degree have different switch numbers. However, if we set, for any integer \(k\ge 1\),

$$\begin{aligned} {\left\{ \begin{array}{ll} \beta (k) = 2[\beta (k-1)+1] \quad \text {if }k\ge 2, \\ \beta (1) = 1, \end{array}\right. } \end{aligned}$$
(8)

and if B is a formal bracket with degree \(\ell _{_B}\le k\), then \({\mathfrak {s}}_{_B} \le \beta (k)\).
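The two recursions above are easy to implement. In the following sketch (our own encoding, with a letter represented as a string and a bracket as a pair) we recompute the switch-numbers 10 and 28 quoted above, tabulate \(\beta \), and check the bound \({\mathfrak {s}}_{_B} \le \beta (k)\):

```python
def degree(B):
    # number of letters occurring in the formal bracket
    return 1 if isinstance(B, str) else degree(B[0]) + degree(B[1])

def switch_number(B):
    # s_B = 1 for a single letter; s_B = 2*(s_B1 + s_B2) for B = [B1, B2]
    return 1 if isinstance(B, str) else 2 * (switch_number(B[0]) + switch_number(B[1]))

def beta(k):
    # recursion (8): beta(1) = 1, beta(k) = 2*(beta(k-1) + 1)
    return 1 if k == 1 else 2 * (beta(k - 1) + 1)

Ba = (("X5", "X6"), "X7")        # [[X5,X6],X7], degree 3
Bb = (("X3", "X4"), Ba)          # [[X3,X4],[[X5,X6],X7]], degree 5
print(switch_number(Ba), switch_number(Bb))   # 10 28
print([beta(k) for k in range(1, 6)])         # [1, 4, 10, 22, 46]
assert switch_number(Bb) <= beta(degree(Bb))  # the bound s_B <= beta(k)
```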

We will use the following notion of admissible bracket pair:

Definition 1

Let \(c\ge 0\), \(\ell \ge 1\), \(q\ge c+\ell \) be integers, let \(B = B(X_{c+1},\ldots ,X_{c+\ell })\) be an iterated formal bracket and let \(\textbf{g}=(g_1,\ldots ,g_q )\) be a string of continuous vector fields. We say that \(\textbf{g}\) is of class \(C^B\) if there exist non-negative integers \(k_1,\dots ,k_q\) such that, from the sole information that \(g_i\) is of class \(C^{k_i}\) for every \(i=1,\dots ,q\), one can deduce that \(B(\textbf{g})\) is a \(C^0\) vector field (see [14, Definition 2.6]). In this case, we call \((B,\textbf{g})\) an admissible bracket pair (of degree \(\ell \) and switch number \({\mathfrak {s}}:={\mathfrak {s}}_{_B}\)).

For instance, if \(B= [ [[X_3,X_4],[X_5,X_6]],X_7 ]\), \(\textbf{g}= (g_1,g_2,g_3,g_4,g_5,g_6,g_7,g_8)\), then \(\textbf{g}\) is of class \(C^B\) provided \(g_3,g_4,g_5,g_6\) are of class \(C^{3}\) and \(g_7\in C^{1}\).

2 Degree-k Sample Stabilizability with Regulated Cost

Let us recall from [16] the definitions of degree-k feedback generator, sampling process, and degree-k sample stabilizability with regulated cost. Throughout the whole paper \(k\ge 1\) will be a given integer and we will consider the following sets of hypotheses:

(H1):

The control set \(A\subset \mathbb {R}^m\) is defined as \(A:=\{\pm e_1, \dots , \pm e_m\}\) and the target \(\mathcal {T}\subset \mathbb {R}^n\) is a closed subset with compact boundary.

(H2):

The Lagrangian \(l:\mathbb {R}^n\times A\rightarrow \mathbb {R}_{\ge 0}\) is such that, for any \(a\in A\), the function \(\mathbb {R}^n\ni x\mapsto l(x,a)\) is locally Lipschitz continuous. Furthermore, the vector fields \(f_1,\dots , f_m:\mathbb {R}^n\rightarrow \mathbb {R}^n\) belong to \(C^{k-1,1}_{b}(\Omega )\) for any bounded, nonempty subset \(\Omega \subset \mathbb {R}^n\).

We set \(\textbf{d}(X):=\textrm{dist}(X,\mathcal {T})\) for any \(X\subset \mathbb {R}^n\). If \(X=\{x\}\) for some \(x\in \mathbb {R}^n\), we will simply write \(\textbf{d}(x)\) in place of \(\textbf{d}(\{x\})\).

2.1 Admissible Trajectories and Global Asymptotic Controllability with Regulated Cost

Definition 2

(Admissible controls, trajectories, and costs) We say that \((\alpha ,y)\) is an admissible control-trajectory pair if there is some \(0<S_y\le +\infty \) such that:

  1. (i)

    \(\alpha :[0,S_y[\rightarrow A\) is Lebesgue measurable;

  2. (ii)

    \(y:[0,S_y[\rightarrow \mathbb {R}^n\setminus \mathcal {T}\) is a (Carathéodory) solution to the control system

    $$\begin{aligned} \dot{y}(s)= \sum _{i=1}^m f_i(y(s))\, \alpha ^i(s), \end{aligned}$$
    (9)

    satisfying, if \(S_y<+\infty \), \(\lim _{s\rightarrow S_y^-}\textbf{d}(y(s))=0\).

Given an admissible pair \((\alpha ,y)\), we refer to \({\mathfrak {I}}\) given by

$$\begin{aligned} {\mathfrak {I}}(s):= \int _ 0^{s } l(y(\sigma ),\alpha (\sigma ))\, d\sigma , \quad \forall s\in [0,S_y[ \end{aligned}$$
(10)

as the integral cost, and to \((\alpha ,y,{\mathfrak {I}})\) as an admissible control-trajectory-cost triple. For every \(x\in \mathbb {R}^n \setminus \mathcal {T}\), we call \((\alpha ,y)\) and \((\alpha ,y,{\mathfrak {I}})\) as above with \(y(0)=x\) an admissible pair from x and an admissible triple from x, respectively. For any admissible pair or triple such that \(S_y<+\infty \), we extend \(\alpha \), y, and \({\mathfrak {I}}\) to \(\mathbb {R}_{\ge 0}\) by setting \(\alpha (s):={\bar{a}}\), with \({\bar{a}}\in A\) arbitrary, and \( (y, {\mathfrak {I}})(s):= \lim _{\sigma \rightarrow S_y^-} (y(\sigma ), {\mathfrak {I}}(\sigma ))\) for any \(s\ge S_y\).

Let us recall the notion of global asymptotic controllability with regulated cost, as formulated in [16].

Definition 3

(Global asymptotic controllability with regulated cost) We say that system (9) is globally asymptotically controllable (in short, GAC) to \(\mathcal {T}\) if, for any \(0<r<R\), for every \(x\in \mathbb {R}^n\) with \(\textbf{d}(x)\le R\) there exists an admissible control-trajectory pair \((\alpha ,y)\) from x that satisfies the following conditions (i)–(iii):

$$\begin{aligned} \begin{array}{lllll} &{}\mathrm{(i)} &{}\textbf{d}(y(s)) \le {\varvec{\Gamma }}(R)\qquad &{}\forall s \ge 0; \qquad \qquad \qquad &{}(\mathrm{Overshoot\ boundedness})\\ &{}\mathrm{(ii)} &{}\textbf{d}(y(s)) \le r \qquad &{}\forall s \ge \textbf{S}(R,r); &{}{(\mathrm{Uniform\ attractiveness})}\\ &{}\mathrm{(iii)} &{}\displaystyle \lim _{s \rightarrow +\infty } \textbf{d}(y(s)) = 0, &{}\, &{}(\mathrm{Total\ attractiveness}) \end{array} \end{aligned}$$

where \(\Gamma :\mathbb {R}_{\ge 0}\rightarrow \mathbb {R}_{\ge 0}\) is a function with \(\Gamma (0)=0\) and \({\textbf{S}}:\mathbb {R}_{>0}^2\rightarrow \mathbb {R}_{>0}\). If, moreover, there exists a function \(\textbf{W}:\overline{\mathbb {R}^n{\setminus }\mathcal {T}}\rightarrow \mathbb {R}_{\ge 0}\) continuous, proper, and positive definite, such that the admissible control-trajectory-cost triple \((\alpha ,y,{\mathfrak {I}})\) associated with \((\alpha ,y)\) above, satisfies

$$\begin{aligned} \begin{array}{l} \qquad \mathrm{(iv)} \,\,\, \displaystyle \int _0^{S_y} l(y(s),\alpha (s)) \, ds \le \textbf{W}(x) \,\,\,\qquad \qquad \,\, {(\mathrm{Uniform\ cost\ boundedness})} \end{array} \end{aligned}$$

(\(S_y\le +\infty \) as in Definition 2), we say that system (9) is globally asymptotically controllable to \(\mathcal {T}\) with \(\textbf{W}\)-regulated cost (or simply, with regulated cost).

2.2 Degree-k Feedback Generator

Let us introduce the sets of admissible bracket pairs associated with the vector fields \(\pm f_1,\dots , \pm f_m\) in the dynamics.

Definition 4

(Control label) For any integer h such that \(1\le h\le k\), let us define the set \(\mathcal {F}^{(h)}\) of control labels of degree \(\le h\) as

$$\begin{aligned} \mathcal {F}^{(h)}:= \left\{ (B, {\textbf{g}},{\text {sgn}}) \ \left| \begin{array}{l} \ {\text {sgn}}\in \{+,-\} \text { and } (B, {\textbf{g}}) \text { is an admissible bracket} \\ \text { pair of degree } \ell _{_B}\le h \text { such that } {\textbf{g}}:=(g_1,\ldots ,g_q) \\ \text { satisfies } g_j\in \{f_1,\dots ,f_m\} \text { for any } j=1,\dots ,q \end{array} \right. \right\} . \end{aligned}$$

We will call degree and switch number of a control label \((B, {\textbf{g}},{\text {sgn}})\in \mathcal {F}^{(h)}\), the degree and the switch number of B, respectively.

With any control label in \(\mathcal {F}^{(k)}\) we associate an oriented control:

Definition 5

(Oriented control) Consider a time \(t>0\) and \((B, {\textbf{g}},+), (B, {\textbf{g}},-) \in \mathcal {F}^{(k)}\).

We define the corresponding oriented controls \({\alpha }_{(B,\textbf{g},+),t}\), \({\alpha }_{(B,\textbf{g},-),t}\), respectively, by means of the following recursive procedure:

  1. (i)

    if \(\ell _B=1\), i.e. \(B=X_j\) for some integer \(j\ge 1\), we set

$$\begin{aligned} {\alpha }_{(B,\textbf{g},+),t}(s):= e_i \qquad \text { for any } s\in [0,t], \end{aligned}$$

    where \(i\in \{1,\dots ,m\}\) is such that \(B({\textbf{g}}) = f_i\) (i.e. \(g_j=f_i\));

  2. (ii)

    if \(\ell _B\ge 1 \), we set \( {\alpha }_{(B,\textbf{g},-),t}(s):= -{\alpha }_{(B,\textbf{g},+),t}(t-s)\) for any \(s\in [0,t]\);

  3. (iii)

if \(\ell _B\ge 2\) and \(B=[B_1,B_2]\) is the factorization of B, we set \({\mathfrak {s}}_1:={\mathfrak {s}}_{B_1}\), \({\mathfrak {s}}_2:={\mathfrak {s}}_{B_2}\), and \({\mathfrak {s}}:={\mathfrak {s}}_B(=2{\mathfrak {s}}_1+2{\mathfrak {s}}_2)\) and, for any \(s\in [0,t]\), we set

    $$\begin{aligned} {\alpha }_{(B,\textbf{g},+),t}(s):= {\left\{ \begin{array}{ll} {\alpha }_{(B_1,\textbf{g},+),\frac{{\mathfrak {s}}_1}{{\mathfrak {s}}}t}(s) \quad &{}\text {if }s\in [0, \frac{{\mathfrak {s}}_1}{{\mathfrak {s}}}t [ \\ {\alpha }_{(B_2,\textbf{g},+),\frac{{\mathfrak {s}}_2}{{\mathfrak {s}}}t} \left( s - \frac{{\mathfrak {s}}_1}{{\mathfrak {s}}}t \right) \quad &{}\text {if }s\in [\frac{{\mathfrak {s}}_1}{{\mathfrak {s}}}t, \frac{{\mathfrak {s}}_1 + {\mathfrak {s}}_2}{{\mathfrak {s}}}t [ \\ {\alpha }_{(B_1,\textbf{g},-),\frac{{\mathfrak {s}}_1}{{\mathfrak {s}}}t} \left( s - \frac{{\mathfrak {s}}_1 + {\mathfrak {s}}_2}{{\mathfrak {s}}}t \right) \,\,\, &{}\text {if }s\in [ \frac{{\mathfrak {s}}_1 + {\mathfrak {s}}_2}{{\mathfrak {s}}}t, \frac{2{\mathfrak {s}}_1 + {\mathfrak {s}}_2}{{\mathfrak {s}}}t [ \\ {\alpha }_{ (B_2,\textbf{g},-),\frac{{\mathfrak {s}}_2}{{\mathfrak {s}}}t} \left( s - \frac{ 2{\mathfrak {s}}_1 + {\mathfrak {s}}_2}{{\mathfrak {s}}}t \right) \, &{}\text {if }s\in \left[ \frac{ 2{\mathfrak {s}}_1 + {\mathfrak {s}}_2}{{\mathfrak {s}}}t, t \right] . \end{array}\right. } \end{aligned}$$
    (11)

Example 1

If \(B=[[X_3,X_4],[X_5,X_6]]\), \({\textbf{g}}=(f_3,f_2,f_1,f_2,f_3,f_4,f_2,f_3)\) and \(t>0\) one has

$$\begin{aligned} {\alpha }_{(B,\textbf{g},+),t}(s)\,\,= {\left\{ \begin{array}{ll} \begin{aligned} e_1 \qquad &{}\text {if } s\in [ 0, t/16 [ \cup [ 9t/16, 10t/16 [ \\ e_2 \qquad &{}\text {if } s\in [ t/16, 2t/16 [ \cup [ 8t/16, 9t/16 [ \\ e_3 \qquad &{}\text {if } s\in [4t/16,5t/16 [ \cup [ 13t/16, 14t/16 [ \\ e_4 \qquad &{}\text {if } s\in [5t/16,6t/16 [ \cup [ 12t/16, 13t/16 [ \\ -e_1 \qquad &{}\text {if } s\in [2t/16,3t/16[ \cup [11t/16, 12t/16 [\\ -e_2 \qquad &{}\text {if } s\in [ 3t/16,4t/16[ \cup [10t/16,11t/16 [ \\ -e_3 \qquad &{}\text {if } s\in [6t/16,7t/16[ \cup [15t/16,t] \\ -e_4 \qquad &{}\text {if } s\in [7t/16,8t/16 [ \cup [ 14t/16, 15t/16 [ \end{aligned} \end{array}\right. } \end{aligned}$$

(and \({\alpha }_{(B,\textbf{g},-),t}\) can be obtained according to Definition 5, (ii)).
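Since the recursive procedure of Definition 5 produces a control that is piecewise constant on \({\mathfrak {s}}_{_B}\) equal slots of length \(t/{\mathfrak {s}}_{_B}\), it can be encoded compactly at the slot level. The sketch below (our own; letters are integers, g maps letters to field indices, and a signed integer \(\pm i\) stands for \(\pm e_i\)) reproduces the schedule of Example 1:

```python
def neg(seq):
    # rule (ii): time-reverse the slot sequence and flip all signs
    return [-a for a in reversed(seq)]

def slots(B, g, sign=+1):
    """Signed control indices of the oriented control, one per slot of length t/s_B.
    B: an int letter (leaf) or a pair (B1, B2); g: dict letter -> field index i."""
    if isinstance(B, int):                    # rule (i): follow sign * e_{g[B]}
        return [sign * g[B]]
    pos = (slots(B[0], g) + slots(B[1], g)    # rule (iii): B1+, B2+, B1-, B2-
           + neg(slots(B[0], g)) + neg(slots(B[1], g)))
    return pos if sign > 0 else neg(pos)

B = ((3, 4), (5, 6))                          # [[X3,X4],[X5,X6]]
g = {3: 1, 4: 2, 5: 3, 6: 4}                  # g3=f1, g4=f2, g5=f3, g6=f4
print(slots(B, g))
# [1, 2, -1, -2, 3, 4, -3, -4, 2, 1, -2, -1, 4, 3, -4, -3]
```

Reading the k-th entry as the value on \([kt/16,(k+1)t/16[\) recovers exactly the table of Example 1.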

Let us recall a crucial formula of Lie bracket approximation in [14, Theorem 3.7], in a form proved in [15, Lemma 3.1].

Lemma 1

Assume (H1)-(H2) and fix \({\tilde{R}}>0\). Then, there exist \({\bar{\delta }}>0\) and \(\omega >0\) such that for any \(x \in \overline{\mathcal {B}(\mathcal {T},{\tilde{R}}){\setminus }\mathcal {T}} \), any control label \((B, {\textbf{g}},{\text {sgn}})\in \mathcal {F}^{(k)}\) of degree \(\ell \) and switch number \({\mathfrak {s}}\), and any \(t\in [0,{\bar{\delta }}]\), there exists a (unique) solution \(y(\cdot )\) to the Cauchy problem

$$\begin{aligned} \dot{y}(s) =\displaystyle \sum _{i=1}^m f_i(y(s)){\alpha }_{(B,\textbf{g},{\text {sgn}}),t}^i(s), \qquad y(0) = x, \end{aligned}$$

defined on the whole interval [0, t] and satisfying

$$\begin{aligned} y(s)\in \mathcal {B}(\mathcal {T},2{\tilde{R}}) \ \ \forall s\in [0,t], \quad \left| y(t) - x -{\text {sgn}}\, B( {\textbf{g}})(x) \left( \frac{t}{{\mathfrak {s}}} \right) ^\ell \right| \le \omega \, t \left( \frac{t}{{\mathfrak {s}}}\right) ^\ell . \end{aligned}$$
(12)
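To see the asymptotic estimate (12) at work in the simplest case \(B=[X_1,X_2]\) (degree 2, switch number 4), one can integrate the four slots of the oriented control exactly for the illustrative fields \(f_1=(1,0)\) and \(f_2=(0,y_1)\) (our choice, not from the paper, with \([f_1,f_2]\equiv (0,1)\)); for these fields the error term in (12) even vanishes:

```python
def flow_f1(x, tau):      # exact flow of the illustrative field f1 = (1, 0)
    return [x[0] + tau, x[1]]

def flow_f2(x, tau):      # exact flow of the illustrative field f2 = (0, y_1)
    return [x[0], x[1] + tau * x[0]]

def bracket_flow(x, t):
    # oriented control for B = [X1, X2]: follow f1, f2, -f1, -f2, each for t/4
    tau = t / 4
    y = flow_f1(x, tau)
    y = flow_f2(y, tau)
    y = flow_f1(y, -tau)   # flow of -f1 for time tau
    y = flow_f2(y, -tau)   # flow of -f2 for time tau
    return y

x, t = [0.3, -0.5], 0.2
y = bracket_flow(x, t)
# (12) with sgn = +, l = 2, s = 4: y(t) - x ≈ [f1,f2](x)*(t/4)^2 = (0, 0.0025)
print(y[0] - x[0], y[1] - x[1])
```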

Remark 1

In general, \({\bar{\delta }}\) and \(\omega \) in the above lemma do depend on \({\tilde{R}}\), while they are independent of \({\tilde{R}}\) as soon as the hypothesis \(f_1,\dots ,f_m\in C^{k-1,1}_b(\mathbb {R}^n)\) replaces the assumption on \(f_1,\dots ,f_m\) in (H2).

Definition 6

(Degree-k feedback generator) We call degree-k feedback generator any map \(\mathcal {V}:\mathbb {R}^n\setminus \mathcal {T}\rightarrow \mathcal {F}^{(k)}\) and write

$$\begin{aligned} \begin{array}{l} \quad \qquad x\mapsto \mathcal {V}(x):=(B_x,{\textbf{g}}_x,{\text {sgn}}_x), \\ \ell (x):= \ell _{_{B_x}}, \qquad {\mathfrak {s}}(x):={\mathfrak {s}}_{_{B_x}}\quad \forall x\in \mathbb {R}^n\setminus \mathcal {T}. \end{array} \end{aligned}$$
(13)

Definition 7

(Multiflow) Given a degree-k feedback generator \(\mathcal {V}\), for every \(x\in \mathbb {R}^n\setminus \mathcal {T}\) and \(t>0\), we define the control \( \alpha _{x,t}:\mathbb {R}_{\ge 0}\rightarrow A\) as

$$\begin{aligned} \alpha _{x,t}(s):={\alpha }_{\mathcal {V}(x),t}(s\wedge t), \qquad \text { for any } s\in \mathbb {R}_{\ge 0}. \end{aligned}$$

We will refer to a maximal solution to the Cauchy problem

$$\begin{aligned} \dot{y}(s)= \sum _{i=1}^m f_i\big (y(s)\big )\alpha ^i_{x,t}(s), \qquad y(0)=x, \end{aligned}$$
(14)

as the \(\mathcal {V}\)-multiflow starting from x up to the time t (or simply \(\mathcal {V}\)-multiflow). It will be denoted by \(y_{x,t}\).

2.3 Sampling Processes

We call \(\pi :=\{ s_j \}_{j}\) a partition of \(\mathbb {R}_{\ge 0}\) if \(s_0=0\), \(s_j < s_{j+1}\) for any \(j\in \mathbb {N}\), and \(\displaystyle \lim _{j\rightarrow +\infty }s_j=+\infty \). The sampling time, or diameter, of \(\pi \) is defined as the supremum of the set \(\{s_{j+1}-s_j: \ \ j\in \mathbb {N}\}\).

Definition 8

(\(\mathcal {V}\)-sampling process). Given a degree-k feedback generator \(\mathcal {V}\), we refer to \((x, \pi , \alpha _x^\pi , y_x^\pi )\) as a \(\mathcal {V}\)-sampling process if \(x\in \mathbb {R}^n{\setminus }\mathcal {T}\), \(\pi :=\{s_j\}_j\) is a partition of \(\mathbb {R}_{\ge 0}\), \(y_x^{\pi }\) is a continuous function taking values in \(\mathbb {R}^n\setminus \mathcal {T}\) defined recursively by

$$\begin{aligned} {\left\{ \begin{array}{ll} y_x^{\pi }(s):= y_{x_j,t_j}(s-s_{j-1}) \qquad \text { for all } s\in [s_{j-1}, \sigma _j[ \ \text { and } \ 1\le j \le {\textbf{j}}\\ y_x^{\pi }(0) = x, \end{array}\right. } \end{aligned}$$
(15)

where, for all \(1\le j \le {\textbf{j}}\), \(y_{x_j,t_j}\) is a \(\mathcal {V}\)-multiflow with \(t_j:= s_j-s_{j-1}\) and \(x_j:= y_x^{\pi }(s_{j-1})\), and

$$\begin{aligned} \begin{array}{l} \sigma _j:= \sup \left\{ \sigma \ge s_{j-1} \text {: } y_{x_j, t_j} \text { defined on } [s_{j-1},\sigma [, \ y_{x_j, t_j}([s_{j-1},\sigma [)\subset \mathbb {R}^n\setminus \mathcal {T}\right\} ,\\ {\textbf{j}}:=\inf \{j: \ \sigma _j\le s_j\}. \end{array} \end{aligned}$$

We will refer to the map \(y_x^\pi :[0, \sigma _{{\textbf{j}}}[\rightarrow \mathbb {R}^n{\setminus }\mathcal {T}\) as a \(\mathcal {V}\)-sampling trajectory. According to Definition 7, the corresponding \(\mathcal {V}\)-sampling control \(\alpha _{x}^{\pi }\) is defined as

$$\begin{aligned} \alpha _x^{\pi }(s):= \alpha _{x_j, t_j}(s - s_{j-1}) \qquad \text {for all }s\in [s_{j-1}, s_j[\,\cap \,[0, \sigma _{{\textbf{j}}}[, \quad 1\le j\le {\textbf{j}}. \end{aligned}$$

Furthermore, we define the \(\mathcal {V}\)-sampling cost \({\mathfrak {I}}_x^{\pi }\) as

$$\begin{aligned} {\mathfrak {I}}_x^{\pi }(s):=\int _0^s l(y_x^{\pi }(\sigma ),\alpha _x^{\pi }(\sigma ))\, d\sigma , \quad \forall s\in [0, \sigma _{{\textbf{j}}}[, \end{aligned}$$
(16)

and we call \((x, \pi , \alpha _x^\pi , y_x^\pi , {\mathfrak {I}}_x^\pi )\) a \(\mathcal {V}\)-sampling process-cost.

If \((\alpha _x^{\pi },y_x^{\pi })\) [resp. \((\alpha _x^{\pi },y_x^{\pi }, {\mathfrak {I}}_x^{\pi })\)] is an admissible pair [resp. triple] from x, we say that the \(\mathcal {V}\)-sampling process \((x, \pi , \alpha _x^\pi , y_x^\pi )\) [resp. \(\mathcal {V}\)-sampling process-cost \((x, \pi , \alpha _x^\pi , y_x^\pi ,{\mathfrak {I}}_x^{\pi })\)] is admissible. In this case, when \( \sigma _{{\textbf{j}}}=S_{y_x^\pi }<+\infty \), we extend \(\alpha _x^{\pi }\), \(y_x^{\pi }\), and \({\mathfrak {I}}_x^{\pi }\) to \(\mathbb {R}_{\ge 0}\), according to Definition 2.

In the definition of degree-k sample stabilizability with regulated cost below, we will consider only \(\mathcal {V}\)-sampling processes belonging to the subclass of \({\mathfrak {d}}\)-scaled \(\mathcal {V}\)-sampling processes, defined as follows.

Definition 9

(\({\mathfrak {d}}\)-scaled \(\mathcal {V}\)-sampling process-cost) Given \({\mathfrak {d}}:=(\delta _1,\ldots ,\delta _k)\) in \(\mathbb {R}^k_{>0}\), referred to as a multirank, and a degree-k feedback generator \(\mathcal {V}\), we say that \((x, \pi , \alpha _x^\pi , y_x^\pi , {\mathfrak {I}}_x^\pi )\) [resp. \((x, \pi , \alpha _x^\pi , y_x^\pi )\)] is a \({\mathfrak {d}}\)-scaled \(\mathcal {V}\)-sampling process-cost [resp. \({\mathfrak {d}}\)-scaled \(\mathcal {V}\)-sampling process] provided it is a \(\mathcal {V}\)-sampling process-cost [resp. \(\mathcal {V}\)-sampling process] such that the partition \(\pi =\{s_j\}_j\) satisfies

$$\begin{aligned} \Delta (k) \, \delta _{\ell _j} \le s_j - s_{j-1} \le \delta _{\ell _j} \qquad \forall j\in \mathbb {N}, \ j\ge 1, \end{aligned}$$
(17)

where \(\ell _j:= \ell (y_x^\pi (s_{j-1}))\) denotes the degree of \(\mathcal {V}(y_x^\pi (s_{j-1}))\) (see (13)) and \(\Delta (k):= \frac{k-1}{k}\).

Remark 2

When \(k=1\), the degree-1 feedback generator \(\mathcal {V}\) takes values in \(\mathcal {F}^{(1)}\), so that, in view of Definition 5, a \(\mathcal {V}\)-sampling process \((x, \pi , \alpha _x^\pi , y_x^\pi )\) is nothing but a standard \(\pi \)-sampling process associated with the feedback law \(\alpha (x):={\text {sgn}}_x\,e_i\), as soon as \(\mathcal {V}(x)=(B_x,{\textbf{g}}_x,{\text {sgn}}_x)\) and \(B_x({\textbf{g}}_x) = f_i\). In particular, in this case \(\Delta (1)=0\), \({\mathfrak {d}}=\delta >0\), and a \({\mathfrak {d}}\)-scaled \(\mathcal {V}\)-sampling process \((x, \pi , \alpha _x^\pi , y_x^\pi )\) coincides with a standard \(\pi \)-sampling process such that diam\((\pi )<\delta \), exactly as required in the classical notion of sample stabilizability (see, for instance, [10, 11]). When \(k>1\), the notion of \({\mathfrak {d}}\)-scaled \(\mathcal {V}\)-sampling process is more restrictive. In particular, (17) prescribes for each such process \((x, \pi , \alpha _x^\pi , y_x^\pi )\) both an upper and a lower bound on the amplitude \(s_j - s_{j-1}\) of every sampling interval, depending on the degree \(\ell _j\in \{1,\dots ,k\}\) of \(\mathcal {V}(y_x^\pi (s_{j-1}))\). Considering this subclass of processes will actually be crucial in the proof of Theorem 1. Indeed, when a trajectory \(y_x^\pi \) defined on an interval \([s_{j-1},s_{j}]\) approximates the direction of a Lie bracket of degree \(\ell _j>1\) for the time \(t_j:=s_j - s_{j-1}\,(<1)\), the displacement of \(y_x^\pi \) is proportional to \(t_j^{\ell _j}<t_j\)—see the asymptotic formula (12). Hence, the lower bound in condition (17) ensures that for each sampling trajectory the sum of the displacements is divergent. This is necessary in order to build stabilizing sampling trajectories that uniformly approach any fixed neighborhood of the target while providing a uniform upper bound for the cost (see Definition 11 below). 
Incidentally, notice that \(\Delta (k)\) might be replaced by any function of k that vanishes for \(k=1\) and is positive and smaller than 1 otherwise.
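When \(k>1\), the role of the lower bound in (17) can be sketched as follows (our illustration, under the assumption, suggested by (12), that the displacement over the j-th sampling interval is comparable to \(t_j^{\ell _j}\), where \(t_j:=s_j-s_{j-1}\)). Condition (17) yields

$$\begin{aligned} t_j \ \ge \ \Delta (k)\,\delta _{\ell _j} \quad \Longrightarrow \quad \sum _{j\ge 1} t_j^{\,\ell _j} \ \ge \ \sum _{j\ge 1}\big (\Delta (k)\,\delta _{\ell _j}\big )^{\ell _j} \ \ge \ \sum _{j\ge 1}\,\min _{1\le \ell \le k}\big (\Delta (k)\,\delta _{\ell }\big )^{\ell } \ =\ +\infty , \end{aligned}$$

whereas an unconstrained partition with, say, \(t_j=2^{-j}\) and \(\ell _j\equiv 2\) would give \(\sum _{j\ge 1}t_j^{2}<+\infty \), i.e. a total displacement that may stay bounded.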

2.4 Degree-k Sample Stabilizability with Regulated Cost

To state the notion of degree-k sample stabilizability of system (9) to \(\mathcal {T}\) with regulated cost, we first need to provide the following definition.

Definition 10

(Integral-cost-bound function) We say that a function \({\varvec{\Psi }}: \mathbb {R}_{>0}^3 \rightarrow \mathbb {R}_{\ge 0}\) is an integral-cost-bound function if

$$\begin{aligned} {\varvec{\Psi }}(R,v_1,v_2):=\Lambda (R)\cdot \Psi (v_1,v_2), \end{aligned}$$

where

  1. (i)

    \(\Lambda :\mathbb {R}_{>0}\rightarrow \mathbb {R}_{>0}\) is a continuous, increasing function and \(\Lambda \equiv 1\) if \(k=1\);

  2. (ii)

    \(\Psi : \mathbb {R}_{>0}^2 \rightarrow \mathbb {R}_{\ge 0}\) is a continuous map, which is increasing and unbounded in the first variable and decreasing in the second variable;

  3. (iii)

    there exists a strictly decreasing bilateral sequence \((u_i)_{i\in \mathbb {Z}}\subset \mathbb {R}_{>0}\), such that, for some (hence, for any) \(j\in \mathbb {Z}\), one has

    $$\begin{aligned} \sum _{i=j}^{+\infty }\Psi (u_{i},u_{i+1})<+\infty ,\quad \text {and}\quad \lim _{i\rightarrow -\infty } u_i = +\infty , \quad \lim _{i\rightarrow +\infty } u_i = 0. \end{aligned}$$
    (18)
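A simple instance of Definition 10 (our illustration, not taken from the text): take \(\Lambda \equiv 1\), \(\Psi (v_1,v_2):= 0\vee (v_1-v_2)\), and the bilateral sequence \(u_i:=2^{-i}\), \(i\in \mathbb {Z}\). Then \(\Psi \) is continuous, increasing and unbounded in \(v_1\), decreasing in \(v_2\), and the series in (18) telescopes:

$$\begin{aligned} \sum _{i=j}^{+\infty }\Psi (u_i,u_{i+1})=\sum _{i=j}^{+\infty }\big (2^{-i}-2^{-i-1}\big )=2^{-j}<+\infty , \qquad \lim _{i\rightarrow -\infty }u_i=+\infty ,\quad \lim _{i\rightarrow +\infty }u_i=0. \end{aligned}$$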

Definition 11

(Degree-k sample stabilizability with regulated cost) Let \(\mathcal {V}\) be a degree-k feedback generator and let \(U:\overline{\mathbb {R}^n\setminus \mathcal {T}}\rightarrow \mathbb {R}_{\ge 0}\) be a continuous, proper, and positive definite function. We say that \(\mathcal {V}\) degree-k U-sample stabilizes system (9) to \(\mathcal {T}\) if there exists a multirank map \({\mathfrak {d}}:\mathbb {R}_{>0}^2\rightarrow \mathbb {R}_{>0}^k\) such that, for any \(0<r<R\), every \({\mathfrak {d}}(R,r)\)-scaled \(\mathcal {V}\)-sampling process \((x, \pi , \alpha _x^\pi , y_x^\pi )\) with \(\textbf{d}(x)\le R\) is admissible and satisfies

$$\begin{aligned} \begin{array}{l} \text {(i)} \ \textbf{d}(y_x^{\pi }(s)) \le \Gamma (R)\quad \forall s \ge 0, \\ \text {(ii)} \ {\textbf{t}}(y_x^\pi ,r):=\inf \big \{s\ge 0 \text {: } U(y_x^\pi (s))\le {\varphi }(r) \big \}\le \textbf{T}(R,r), \\ \text {(iii)} \ \text {if } \exists \, \tau >0 \ \text {such that} \ U(y_x^\pi (\tau ))\le {\varphi (r)}, \ \text {then} \ \textbf{d}(y_x^\pi (s))\le r \ \ \forall s\ge \tau , \end{array} \end{aligned}$$

where \(\Gamma :\mathbb {R}_{\ge 0}\rightarrow \mathbb {R}_{\ge 0}\), \(\varphi :\mathbb {R}_{\ge 0}\rightarrow \mathbb {R}_{\ge 0}\) are continuous, strictly increasing and unbounded functions with \(\Gamma (0)=0\), \(\varphi (0)=0\), and \({\textbf{T}}:\mathbb {R}_{>0}^2\rightarrow \mathbb {R}_{>0}\) is a function increasing in the first variable and decreasing in the second one. We will refer to property (i) as Overshoot boundedness and to (ii)-(iii) as U-Uniform attractiveness.

If, in addition, there exists an integral-cost-bound function \(\mathbf{\Psi }: \mathbb {R}_{>0}^3\rightarrow \mathbb {R}_{ \ge 0} \) such that the \(\mathcal {V}\)-sampling process-cost \((x, \pi , \alpha _x^\pi , y_x^\pi ,{\mathfrak {I}}_x^\pi )\) associated with the \(\mathcal {V}\)-sampling process \((x, \pi , \alpha _x^\pi , y_x^\pi )\) above satisfies the inequality

$$\begin{aligned} \text {(iv)}\,\, \displaystyle {\mathfrak {I}}_x^\pi ({\textbf{t}}(y_x^\pi ,r)) = \int _0^{{\textbf{t}}(y_x^\pi ,r)} l(y_x^{\pi }(s),\alpha _x^{\pi }(s))ds \le {\varvec{\Psi }}\Big (R,\, U(x), \,U(y_x^\pi ({\textbf{t}}(y_x^\pi ,r)))\Big ) \end{aligned}$$

we say that \(\mathcal {V}\) degree-k U-sample stabilizes system (9) to \(\mathcal {T}\) with \({\varvec{\Psi }}\)-regulated cost. We will refer to (iv) as Uniform cost boundedness.

When there exist some function U, some degree-k feedback generator \(\mathcal {V}\) [and an integral-cost-bound function \({\varvec{\Psi }}\)] such that \(\mathcal {V}\) degree-k U-sample stabilizes system (9) to \(\mathcal {T}\) [with \({\varvec{\Psi }}\)-regulated cost], we say that system (9) is degree-k U-sample stabilizable to \(\mathcal {T}\) [with \({\varvec{\Psi }}\)-regulated cost]. Sometimes, we will simply say that system (9) is degree-k sample stabilizable to \(\mathcal {T}\) [with regulated cost].

Remark 3

By Theorem 1 below, the existence of a degree-k MRF U leads to the notion of degree-k U-sample stabilizability with regulated cost in Definition 11. This notion might look quite involved compared to classical stabilizability concepts (without cost); however, the following facts suggest otherwise:

  1. (i)

    degree-k sample stabilizability with regulated cost implies global asymptotic controllability with regulated cost as in Definition 3 (exactly as classical sample stabilizability implies global asymptotic controllability);

  2. (ii)

    in the absence of a cost, system (9) is degree-k sample stabilizable to \(\mathcal {T}\) for some \(k\ge 1\) in the sense of Definition 11 if and only if it is sample stabilizable to \(\mathcal {T}\) in the classical sense of [10, Definition I.3], which is in turn equivalent to being degree-k sample stabilizable to \(\mathcal {T}\) according to [15, Definition 2.18].

  3. (iii)

    degree-1 sample stabilizability with regulated cost in the sense of Definition 11 implies sample stabilizability with regulated cost as defined in [21].

The proofs of statements (i) and (ii) can be found in [16]. Moreover, in view of Remark 2 and using the notations of Definition 11, statement (iii) follows from the following two considerations: first, U-uniform attractiveness immediately implies the standard uniform attractiveness condition

$$\begin{aligned} \exists \, \textbf{S}(R,r)>0 \quad \text {such that} \quad \textbf{d}(y_x^\pi (s)) \le r \qquad \text{ for } \text{ all }\ s\ge \textbf{S}(R,r) \end{aligned}$$

(with \(\textbf{S}(R,r)\le \textbf{T}(R,r)\)), which in turn characterizes classical sample stabilizability (see e.g. [10]); secondly, for \(k=1\) the uniform cost boundedness condition (iv) in Definition 11 implies the cost bound condition considered in [21], namely

$$\begin{aligned} \begin{array}{c} \displaystyle \int _0^{\textbf{s}(y_x^\pi ,r)} l(y_x^{\pi }(s),\alpha _x^{\pi }(s))\, ds \le \int _0^{\textbf{t}(y_x^\pi ,r)} l(y_x^{\pi }(s),\alpha _x^{\pi }(s))\, ds \\ \ \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \le \Psi (U(x),\varphi ( r))\le \Psi (U(x),0)=:W(x), \end{array} \end{aligned}$$

where \(\textbf{s}(y_x^\pi ,r):= \inf \{s\ge 0 \text {: } \textbf{d}(y_x^\pi (\sigma ))\le r \ \text { for all }\sigma \ge s\}\).

3 The Main Result

Together with the notion of degree-k Hamiltonian, in this section we provide our most important result: it states that the existence of a suitably defined solution U to a degree-k Hamilton–Jacobi inequality is a sufficient condition for the system to be degree-k sample stabilizable with regulated cost.

3.1 Degree-k Hamilton–Jacobi Dissipative Inequality

For any integer h, \(1\le h\le k\), and any \((B, {\textbf{g}},{\text {sgn}})\in \mathcal {F}^{(h)} \) let us define the subset \(A(B, {\textbf{g}},{\text {sgn}})\subseteq A\) of control values

$$\begin{aligned} A(B, \textbf{g},{\text {sgn}}):= {\left\{ \begin{array}{ll} \{ e_i\} &{}\text {if } B(\textbf{g}) = f_i \ \text {and } {\text {sgn}}=+, \\ \{-e_i\} &{}\text {if } B(\textbf{g}) = f_i \ \text {and } {\text {sgn}}=-, \\ \big \{\pm e_{j_1},\ldots , \pm e_{j_\ell }\big \} &{}\left\{ {\begin{array}{l} \text {if } B=B(X_{c+1},\dots , X_{c+\ell }), \\ 2\le \ell \le h, \ \text {and } \textbf{g} \text { is such that} \\ g_{c+r} = f_{j_r} \ \text {for } r=1,\ldots ,\ell . \end{array}}\right. \end{array}\right. } \end{aligned}$$
(19)

Definition 12

(Pseudo-Hamiltonian) We define the pseudo-Hamiltonian \({{\mathcal {H}}}: (\mathbb {R}^n{\setminus } \mathcal {T}) \times \mathbb {R}^*\times (\mathbb {R}^n)^* \times \mathcal {F}^{(k)}\rightarrow \mathbb {R}\), as

$$\begin{aligned} {{\mathcal {H}}}\Big (x,p_0,p,(B, {\textbf{g}},{\text {sgn}})\Big ):= \langle p,{\text {sgn}}\, B({\textbf{g}})(x) \rangle + p_0 \max _{a \in A(B, {\textbf{g}},{\text {sgn}})} \, l(x,a) \end{aligned}$$

for any \( \big (x,p_0,p,(B, {\textbf{g}},{{\text {sgn}}})\big ) \in (\mathbb {R}^n{\setminus } \mathcal {T}) \times \mathbb {R}^* \times (\mathbb {R}^n)^* \times \mathcal {F}^{(k)}\).

Definition 13

(Degree-h Hamiltonian) Given a continuous increasing function \(p_0:\mathbb {R}_{\ge 0}\rightarrow [0,1]\) and an integer h, \(1\le h\le k\), we define the degree-h Hamiltonian

$$\begin{aligned} H[p_0]^{(h)}:(\mathbb {R}^n\setminus \mathcal {T}) \times (\mathbb {R}^n)^*\times \mathbb {R}\rightarrow \mathbb {R}, \end{aligned}$$

by setting, for every \((x,p,u) \in (\mathbb {R}^n{\setminus } \mathcal {T}) \times (\mathbb {R}^n)^*\times \mathbb {R}\),

$$\begin{aligned} H[p_0]^{(h)}(x,p,u):= \min _{(B, {\textbf{g}},{\text {sgn}}) \in \mathcal {F}^{(h)}}{\mathcal {H}} \Big (x,p_0(u),p,(B, {\textbf{g}},{\text {sgn}})\Big ). \end{aligned}$$

Notice that the minimum exists because the set of control labels of degree \(\le h\) is finite. Furthermore, under the standing hypotheses, the degree-h Hamiltonians \(H[p_0]^{(h)}\) are well defined and continuous for every \(h\in \{1,\dots ,k\}\). Observe also that

$$\begin{aligned} H[p_0]^{(k)} \le H[p_0]^{(k-1)} \le \dots \le H[p_0]^{(1)}, \end{aligned}$$
(20)

where the degree-1 Hamiltonian \(H[p_0]^{(1)}\) reduces to

$$\begin{aligned} H[p_0]^{(1)}(x,p,u)=\min _{a\in A} \left\{ \left\langle p, \sum _{i=1}^m f_i(x)a^i\right\rangle + p_0(u)\, l(x,a)\right\} . \end{aligned}$$

As an example, let us consider the degree-2 Hamiltonian \(H[p_0]^{(2)}\):

$$\begin{aligned} \begin{array}{l} H[p_0]^{(2)}(x,p,u) = H[p_0]^{(1)}(x,p,u)\ \ \wedge \\ \displaystyle \qquad \qquad \qquad \min _{i,j \in \{1,\dots ,m\}} \Big \{ \langle p, [f_i, f_j](x) \rangle + \max _{a\in \{\pm e_i, \pm e_j \} }p_0(u)\, l(x,a) \Big \}. \end{array} \end{aligned}$$
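For a concrete computation (our example; the normalization of the vector fields is an assumption, not taken from the text), consider the Heisenberg system on \(\mathbb {R}^3\), \(f_1=(1,0,-x_2)\), \(f_2=(0,1,x_1)\), so that \([f_1,f_2]=(0,0,2)\), with \(l\equiv 1\). Since the terms with \(i=j\) have zero bracket and contribute \(p_0(u)\), the bracket part of \(H[p_0]^{(2)}\) reduces to

$$\begin{aligned} \min _{i,j \in \{1,2\}} \Big \{ \langle p, [f_i, f_j](x) \rangle + p_0(u) \Big \} = p_0(u) - 2|p_3|, \end{aligned}$$

so the second-order term yields dissipation along \(e_3\), a direction in which the first-order terms \(\langle p, \pm f_i(x)\rangle \) vanish when \(p=(0,0,p_3)\) and \(x_1=x_2=0\).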

Definition 14

(Degree-h MRF) For any integer h, \(1\le h\le k\), a continuous map \(U: \overline{\mathbb {R}^n {\setminus } \mathcal {T}} \rightarrow \mathbb {R}\) is said to be a degree-h Minimum Restraint Function (in short, degree-h MRF) if it is proper, positive definite, and satisfies the HJ dissipative inequality

$$\begin{aligned} H[p_0]^{(h)}(x,p, U(x)) \le -\gamma (U(x)) \qquad \forall x \in \mathbb {R}^n\setminus \mathcal {T}, \ \ \forall p\in \partial _{{\textbf{P}}}U(x), \end{aligned}$$
(21)

for some continuous and increasing functions \(p_0:\mathbb {R}_{\ge 0}\rightarrow [0,1]\) and \(\gamma : \mathbb {R}_{\ge 0}\rightarrow \mathbb {R}_{>0}\), to which we will refer as the cost multiplier and the dissipative rate, respectively. Furthermore, we say that U is a degree-h Control Lyapunov Function (in short, degree-h CLF) if it is a degree-h MRF with \(p_0\equiv 0\).
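As a toy check (our illustration, allowing constant, hence only weakly increasing, \(p_0\) and \(\gamma \)): on \(\mathbb {R}^2\) with \(\mathcal {T}=\{0\}\), \(f_1=e_1\), \(f_2=e_2\), and \(l\equiv 1\), the distance \(U(x)=|x|\) is a degree-1 MRF. Indeed, for \(p=x/|x|\in \partial _{{\textbf{P}}}U(x)\) one has \(\max \{|p_1|,|p_2|\}\ge |p|/\sqrt{2}=1/\sqrt{2}\), whence

$$\begin{aligned} H[p_0]^{(1)}\big (x,p,U(x)\big ) = -\max \{|p_1|,|p_2|\} + p_0 \le -\frac{1}{\sqrt{2}} + p_0 \le -\gamma \end{aligned}$$

with, e.g., \(p_0\equiv \frac{1}{4}\) and \(\gamma \equiv \frac{1}{\sqrt{2}}-\frac{1}{4}>0\).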

Remark 4

From (20) it follows that for all \(q_1\), \(q_2\in \mathbb {N}\), \(1\le q_1<q_2\le k\), a degree-\(q_1\) MRF (for some \(p_0\) and \(\gamma \)) is also a degree-\(q_2\) MRF (for the same \(p_0\) and \(\gamma \)), while the converse is false, in general. In particular, a smooth degree-k MRF for some \(k>1\) may exist in situations where there are no smooth degree-1 MRFs (see the examples in [28, 29]). Actually, this is one of the main reasons for considering degree-k MRFs with \(k>1\).

Remark 5

If U is a degree-k MRF for some \(p_0\) and \(\gamma \), it is also a degree-k CLF. Indeed, since \(p_0\) and the lagrangian l are nonnegative, from the dissipative inequality (21) it follows that, for every \(x \in \mathbb {R}^n{\setminus } \mathcal {T}\) and \(p\in \partial _{{\textbf{P}}}U(x)\),

$$\begin{aligned} H[0]^{(k)}(x,p,U(x))=\min _{(B, {\textbf{g}},{\text {sgn}}) \in \mathcal {F}^{(k)}} \, \langle p,B({\textbf{g}})(x) \rangle \le -\gamma (U(x)). \end{aligned}$$
(22)

A notion of (locally semiconcave) degree-k CLF was first introduced in [28]. The definition of degree-k MRF was anticipated in [29], in the special case of constant \(p_0\) and lagrangian l independent of the control \(a\in A\).

3.2 Main Result

To prove that the existence of a degree-k MRF U implies degree-k sample stabilizability with regulated cost, we need additional assumptions. These conditions include some integrability requirements on the cost multiplier \(p_0\) and on the dissipative rate \(\gamma \) and, in case \(k>1\), also the following \(\nu \)-semiconcavity property for U, in a neighborhood of the target.

Definition 15

(\(\nu \)-semiconcavity) Let \(\mathcal {M}\subset \mathbb {R}^n\) be a nonempty subset and let \(\nu \in [0,1]\). We say that a continuous function \(U:\overline{\mathbb {R}^n\setminus \mathcal {T}} \rightarrow \mathbb {R}\) is \(\nu \)-semiconcave on \(\mathcal {M}\setminus \mathcal {T}\) if there exist positive constants L, \(\theta \), and C such that for every x, \({\hat{x}}\in \mathcal {M}{\setminus }\mathcal {T}\) with \(\textrm{sgm}({\hat{x}},x)\subset \mathcal {M}{\setminus }\mathcal {T}\) and \(|{\hat{x}}-x|\le \theta \), one has

$$\begin{aligned} \begin{array}{l} \displaystyle U({\hat{x}})-U(x)\le \langle p, {\hat{x}} -x\rangle + \frac{C}{\textbf{d}(\textrm{sgm}({\hat{x}},x))^\nu } \, |{\hat{x}}-x|^2 \quad \forall p\in \partial U(x), \\ |p|\le L \qquad \forall p\in \partial U(x). \end{array} \end{aligned}$$
(23)

Remark 6

If U is a locally semiconcave function on \(\mathbb {R}^n\setminus \mathcal {T}\), then, in view of property (7) of its subdifferential, U is \(\nu \)-semiconcave on \(\mathcal {M}\setminus \mathcal {T}\) with \(\nu =0\) for every compact set \(\mathcal {M}\subset \mathbb {R}^n\setminus \mathcal {T}\). More generally, the notion of \(\nu \)-semiconcavity extends a condition concerning the distance function \(\textbf{d}\) from a closed set \(\mathcal {T}\) (see e.g. [25, 26]). In particular (see e.g. [8]), note that

  1. 1.

    if \(\mathcal {T}\) has boundary of class \(C^{1,1}\), then the distance \(\textbf{d}\) is semiconcave in \(\overline{\mathbb {R}^n\setminus \mathcal {T}}\) and turns out to be \(\nu \)-semiconcave on \(\mathbb {R}^n\setminus \mathcal {T}\) with \(\nu =0\), \(L=1\), for any \(\theta >0\) and for some \(C>0\).

  2. 2.

    If \(\mathcal {T}\) satisfies the internal sphere condition of radius \(r>0\), namely, for all \(x\in \mathcal {T}\) there exists \({\bar{x}}\in \mathcal {T}\) such that \(x\in \mathcal {B}({\bar{x}},r)\subset \mathcal {T}\), then the distance \(\textbf{d}\) is \(\nu \)-semiconcave on \(\mathbb {R}^n\setminus \mathcal {T}\) with \(\nu =0\) and satisfies (23) for \(L=1\) and \(C=1/r\), for every \(\theta >0\).

  3. 3.

    If \(\mathcal {T}\) is a singleton, then the distance \(\textbf{d}\) is \(\nu \)-semiconcave in \(\mathbb {R}^n\setminus \mathcal {T}\) with \(\nu =1\) and (23) holds for \(L=C=1\), for every \(\theta >0\).

We will also use the following hypothesis.

(H3):

Let U be a degree-k MRF and let \(p_0\), \(\gamma \) be the associated cost multiplier and dissipative rate, respectively.

(i):

If \(k>1\), assume that, for some \(\nu \in [0,1]\) and \(c>0\), U is \(\nu \)-semiconcave on \(\mathcal {B}(\mathcal {T},c)\setminus \mathcal {T}\), and that the map \(\Theta : \mathbb {R}_{>0}\rightarrow \mathbb {R}_{>0}\), defined by

$$\begin{aligned} \Theta (w):= \frac{1}{p_0(w)} \, \vee \, \frac{1}{p_0(w)\, w^{1- k^{-1}}} \, \vee \, \frac{1}{p_0(w)\, \gamma (w)^{k-1}} \, \vee \, \frac{1}{p_0(w)\, [w^\nu \gamma (w)]^{1-k^{-1}}}, \end{aligned}$$
(24)

is integrable on [0, u] for any \(u>0\).

(ii):

If \(k=1\), assume that the map \(\Theta \) above, which reduces to \(\displaystyle \Theta (w)= \frac{1}{p_0(w)}\), is integrable on [0, u] for any \(u>0\).
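For instance (an illustrative choice of ours, with \(k=2\) and \(\nu =0\)), suppose \(p_0(w)=w^{1/4}\) and \(\gamma (w)=w^{1/8}\) for \(w\in \,]0,1]\). Then, for such w, the four branches of (24) become

$$\begin{aligned} \Theta (w) = w^{-\frac{1}{4}} \,\vee \, w^{-\frac{3}{4}} \,\vee \, w^{-\frac{3}{8}} \,\vee \, w^{-\frac{5}{16}} = w^{-\frac{3}{4}}, \qquad \int _0^{u} w^{-\frac{3}{4}}\,dw = 4\,u^{\frac{1}{4}}<+\infty \quad (0<u\le 1), \end{aligned}$$

so \(\Theta \) is integrable on [0, u] for every \(u>0\), being bounded on \([1,u]\) once \(p_0\) and \(\gamma \) are continued monotonically for \(w>1\).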

Remark 7

In some situations, the integrability condition (24) above can be weakened. For instance, assume that, for some \(M>0\), \(0<l(x,a)\le M\) for all \((x,a)\in (\mathbb {R}^n\setminus \mathcal {T})\times A\). Then, given a degree-k MRF U with cost multiplier \(p_0\) and dissipative rate \(\gamma \), set

$$\begin{aligned} \lambda (u):= \inf _{\{(x,a)\in \mathbb {R}^n\times A: \ U(x)\ge u\}}\,l(x,a) \quad \forall u>0 \end{aligned}$$

and consider the strictly increasing functions

$$\begin{aligned} {\tilde{\gamma }}(u):=\frac{1}{2}\left( p_0(u)\,\lambda (u) +\gamma (u)\right) , \qquad \tilde{p}_0(u):=\frac{1}{2}\left( p_0(u)+\frac{\gamma (u)}{M}\right) . \end{aligned}$$
(25)

From (21) it follows that U also satisfies the HJ dissipative inequality:

$$\begin{aligned} H[{\tilde{p}}_0]^{(k)}(x,p, U(x)) \le -{\tilde{\gamma }}(U(x)) \qquad \forall x \in \mathbb {R}^n\setminus \mathcal {T}, \ \ \forall p\in \partial _{{\textbf{P}}}U(x). \end{aligned}$$

Hence, U can be regarded as a degree-k MRF with cost multiplier \({\tilde{p}}_0\) and dissipative rate \({\tilde{\gamma }}\). Notice that condition (24) referred to \({\tilde{p}}_0\) and \({\tilde{\gamma }}\) is weaker than the one corresponding to \(p_0\) and \(\gamma \).

Before stating our main result, let us finally introduce a stronger, global version of hypothesis (H2).

(H2)\(^*\):

For any \(a\in A\), the map \(x\mapsto l(x,a)\) is Lipschitz continuous on \(\mathbb {R}^n\). Furthermore, the vector fields \(f_1,\dots , f_m\) belong to \(C^{k-1,1}_{b}(\mathbb {R}^n)\).

Theorem 1

(Main result) Assume hypotheses (H1), (H2) and let U be a degree-k MRF, which we suppose to be locally semiconcave on \(\mathbb {R}^n\setminus \mathcal {T}\). Then, there exists a degree-k feedback generator \(\mathcal {V}\) such that the following statements hold:

(i):

\(\mathcal {V}\) degree-k U-sample stabilizes system (9) to \(\mathcal {T}\).

(ii):

If in addition U satisfies hypothesis (H3), then \(\mathcal {V}\) degree-k U-sample stabilizes system (9) to \(\mathcal {T}\) with \({\varvec{\Psi }}\)-regulated cost, the map

$$\begin{aligned} (R,v_1,v_2)\mapsto {\varvec{\Psi }}(R,v_1,v_2)=\Lambda (R)\,\Psi (v_1,v_2) \end{aligned}$$

being an integral-cost-bound function, where \(\Psi :\mathbb {R}_{>0}^2\rightarrow \mathbb {R}_{\ge 0}\) is defined as

$$\begin{aligned} \displaystyle { \Psi }(v_1,v_2) = \left\{ \begin{array}{l} \displaystyle 0 \,\, \vee \,\, \int _{\frac{v_2}{2}}^{v_1}\Theta (w) \,dw \quad \qquad \qquad \qquad \text {if } k=1, \\ \displaystyle 0 \,\, \vee \,\, \int _{\frac{v_2}{2}}^{v_1}\Theta \Big (\frac{v_2}{v_1}\,{w}\Big )\,dw \ \ \quad \qquad \quad \text {if } k>1 \end{array}\right. \end{aligned}$$
(26)

(\(\Theta \) as in (24)). In particular, in case \(k>1\), if U (satisfies (H3) and) is semiconcave and Lipschitz continuous on \(\mathbb {R}^n\setminus \mathcal {T}\) and hypothesis (H2)\(^*\) is satisfied, then \(\Lambda \) is constant.

The proof of the theorem will be given in the next section. From Theorem 1 and [16, Theorem 3.1], the existence of a degree-k MRF U as above implies GAC with \(\textbf{W}\)-regulated cost. More precisely, we have:

Corollary 1

Assume hypotheses (H1), (H2). Then, given a degree-k MRF U satisfying the hypotheses of Theorem 1, system (9) is globally asymptotically controllable to \(\mathcal {T}\) with \(\textbf{W}\)-regulated cost, the map \(\textbf{W}\) being defined as

$$\begin{aligned} \displaystyle \textbf{W}(x) = \left\{ \begin{array}{l} \displaystyle \int _{0}^{U(x)}\Theta (w) \,dw \ \ \ \ \qquad \qquad \qquad \qquad \text {if}\, k=1, \\ \displaystyle \Lambda (\varphi ^{-1}(U(x)))\,\int _{0}^{\frac{U(x)}{2}}\Theta (w)\,dw \ \quad \quad \quad \text {if}\, k>1, \end{array}\right. \end{aligned}$$
(27)

where \(\Theta \), \(\Lambda \) are as in Theorem 1, and \(\varphi \) is as in Definition 11.

Theorem 1 and Corollary 1 include and extend several previous results on sufficient conditions for sample stabilizability and GAC with (or without) a regulated cost, as we illustrate in the following remarks.

Remark 8

(Case \(k=1\)) If the cost multiplier \(p_0\) is a positive constant and \(k=1\), then the function \(\Theta \) as in hypothesis (H3) is trivially integrable and the integral-cost-bound function \(\mathbf{\Psi }\) takes the form

$$\begin{aligned} \mathbf{\Psi }(R, v_1,v_2)=\Psi (v_1,v_2)=0 \,\, \vee \,\, \frac{1}{p_0}\Big (v_1-\frac{v_2}{2}\Big )\qquad \text {for all} \ (R,v_1,v_2)\in \mathbb {R}_{>0}^3. \end{aligned}$$

Actually, for \(k=1\) the proof of Theorem 1 below can be easily adapted (see Remark 2 and [15, 16]) to a general control system

$$\begin{aligned} \dot{y}=F(y,a), \qquad a\in A\subset \mathbb {R}^m, \end{aligned}$$

with A nonempty and compact and F continuous in both variables and locally Lipschitz continuous in x, uniformly w.r.t. the control. Hence, in view of Remark 3, (iii), from Theorem 1 and Corollary 1 we regain the results on sample stabilizability with regulated cost and on GAC with regulated cost obtained in [21, 27], respectively (actually, with a slightly sharper bound on the cost).

Remark 9

(Case \(k>1\)) Point (i) of Theorem 1 (which does not concern a cost), coincides with the result on (an apparently different notion of) degree-k sample stabilizability in [15], as the two definitions are equivalent (see Remark 3, (ii)). Furthermore, Corollary 1 implies the result in [28], where the existence of a locally semiconcave degree-k CLF was shown to guarantee GAC. Finally, in the general case with a cost, Corollary 1 also implies the result in [29], where a sketch is given of the fact that (under slightly stronger assumptions than those assumed here) the existence of a degree-k MRF yields GAC with regulated cost.

Remark 10

(Degree-k MRFs and STLC) Allowing \(p_0\) to be an increasing function of u, on the one hand, significantly improves the estimate on the cost bound function \(\textbf{W}\). On the other hand, it allows us to reformulate well-known Lie algebraic conditions for the small time local controllability (STLC) of system (9) to \(\mathcal {T}\) by requiring the distance function to be a degree-k MRF. More specifically, let us consider the case \(l\equiv 1\), that is, the minimum time problem, and suppose that the distance function \(\textbf{d}\) is a degree-k MRF for some \(p_0\), \(\gamma \), and \(k\ge 1\), for which (H3) is valid. Then, for all \(x\in \mathbb {R}^n\setminus \mathcal {T}\), \(\textbf{d}\) satisfies the HJ dissipative inequality (21), which now takes the form

$$\begin{aligned} \min _{(B, {\textbf{g}},{\text {sgn}}) \in \mathcal {F}^{(k)}} \, \langle p,{\text {sgn}}\, B({\textbf{g}})(x) \rangle \le -{\tilde{\gamma }}(\textbf{d}(x))\qquad \forall p\in \partial _{{\textbf{P}}} \textbf{d}(x), \end{aligned}$$
(28)

where \({\tilde{\gamma }}(r):=p_0(r)+\gamma (r)\). It is easy to recognize that this condition, combined with the integrability assumption in hypothesis (H3), leads back to well-known higher order weak Petrov (i.e. Lie algebraic) conditions, sufficient for the STLC of the driftless control-affine system (9) to the closed target \(\mathcal {T}\). By the expression “weak”, we mean dissipative inequalities as (28), in which the dissipative rate \({\tilde{\gamma }}\) can be 0 at 0, to distinguish them from the classical higher order Petrov conditions, in which \({\tilde{\gamma }}\) can be replaced by a positive constant. In particular, for any \(R>0\), by (27) we get the following estimate for the minimum time function T:

$$\begin{aligned} T(x)\le \left\{ \begin{array}{l} \displaystyle \int _{0}^{\textbf{d}(x)}\Theta (w) \,dw \ \ \ \ \qquad \qquad \text {if}\, k=1, \\ \displaystyle {\bar{\Lambda }} \,\int _{0}^{\frac{\textbf{d}(x)}{2}}\Theta (w)\,dw \ \qquad \quad \quad \text {if}\, k>1 \end{array}\right. \end{aligned}$$

for every \(x\in B(\mathcal {T},R)\), for a suitable constant \({\bar{\Lambda }}>0\). In view of the definition (24) of \(\Theta \), this result is entirely in line with well-known ones (see e.g. [5, 8, 18, 19, 25, 26], and references therein). In conclusion, our degree-k sample stabilizability sufficient conditions include as a special case most of the sufficient conditions in the literature for the STLC of system (9) to an arbitrary closed set \(\mathcal {T}\). We point out that, considering only \(p_0\equiv {\bar{p}}_0\) a positive constant, we would have \({\tilde{\gamma }}\ge {\bar{p}}_0>0\), so our conditions would include just ordinary, i.e. non-weak, higher order Petrov conditions.
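To make the estimate explicit in the simplest case (a direct substitution, our illustration): for \(k=1\) and constant \(p_0\equiv {\bar{p}}_0\in \,]0,1]\), one has \(\Theta (w)=1/{\bar{p}}_0\), and the bound above reads

$$\begin{aligned} T(x) \ \le \ \int _0^{\textbf{d}(x)}\frac{dw}{{\bar{p}}_0} \ =\ \frac{\textbf{d}(x)}{{\bar{p}}_0}, \end{aligned}$$

the classical linear minimum-time bound associated with the non-weak first-order Petrov condition; a genuinely weak multiplier such as \(p_0(w)=1\wedge \sqrt{w}\) gives instead \(T(x)\le 2\sqrt{\textbf{d}(x)}\) for \(\textbf{d}(x)\le 1\).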

4 Proof of Theorem 1

Let us begin by proving statement (ii) of the theorem, in the case \(k>1\). Let \(U: \overline{\mathbb {R}^n \setminus \mathcal {T}} \rightarrow \mathbb {R}\) be a degree-k MRF for some cost multiplier function \(p_0\) and some dissipative rate \(\gamma \). Furthermore, assume that U is locally semiconcave on \(\mathbb {R}^n\setminus \mathcal {T}\) and satisfies hypothesis (H3), the latter meaning that U is \(\nu \)-semiconcave on \(B(\mathcal {T},c)\setminus \mathcal {T}\) for some \(\nu \in [0,1]\) and \(c>0\), and that \(\Theta \) defined as in (24) is integrable on [0, u] for every \(u>0\).

4.1 A Degree-k Feedback Generator and Some Preliminary Estimates

Let us first establish how the function U is used to build a degree-k feedback generator. Note that in the HJ dissipative inequality (21) we can replace the proximal subdifferential with the limiting subdifferential, namely we can assume that U satisfies

$$\begin{aligned} H[p_0]^{(k)}(x,p, U(x)) \le -\gamma (U(x))\qquad \forall x\in \mathbb {R}^n\setminus \mathcal {T}, \ \ \forall p\in \partial U(x). \end{aligned}$$
(29)

Indeed, U is locally semiconcave, thus locally Lipschitz continuous, and \(H[p_0]^{(k)}(\cdot )\) is continuous.

Definition 16

(Degree-k U-feedback generator) Given U as specified above, choose an arbitrary selection \(p(x)\in \partial U(x)\) for any \(x\in \mathbb {R}^n{\setminus }\mathcal {T}\). Then, a degree-k feedback generator \(\mathcal {V}:\mathbb {R}^n\setminus \mathcal {T}\rightarrow \mathcal {F}^{(k)}\) is said to be a degree-k U-feedback generator if (see Definition 12)

$$\begin{aligned} {{\mathcal {H}}} \Big (x,p_0(U(x)), p(x),\mathcal {V}(x)\Big )\le -\gamma (U(x)) \quad \text {for all } x\in \mathbb {R}^n\setminus \mathcal {T}. \end{aligned}$$
(30)

Clearly, given a selection \(p(x)\in \partial U(x)\), a degree-k U-feedback generator \(\mathcal {V}\) always exists and can be defined as a selection

$$\begin{aligned} \mathcal {V}(x)\in \underset{(B, {\textbf{g}},{\text {sgn}}) \in \mathcal {F}^{(k)}}{{\text {argmin}}} \,\,{{\mathcal {H}}} \Big (x,p_0(U(x)), p(x),(B, {\textbf{g}},{\text {sgn}})\Big ) \qquad \forall x\in \mathbb {R}^n\setminus \mathcal {T}. \end{aligned}$$

Remark 11

Let us point out that, in order to define a degree-k U-feedback generator \(\mathcal {V}\), it might be enough to assume the existence of a function U which satisfies, besides the other properties, the HJ dissipative inequality in (29) for just one selection \(p(x)\in \partial U(x)\).

From now on, let a selection \(p(x)\in \partial U(x)\) and an associated degree-k U-feedback generator \(\mathcal {V}(x)=(B_x,{\textbf{g}}_x,{\text {sgn}}_x)\) be given.

Let us define two U-dependent distance-like functions \(d_{U_-}\), \(d_{U_+}:\mathbb {R}_{\ge 0} \rightarrow \mathbb {R}_{\ge 0}\) as, respectively, the smallest distance from the target attained on the superlevel set \(\{U\ge u\}\) and the largest distance from the target attained on the sublevel set \(\{U\le u\}\). Namely, for every \( u\ge 0 \) we set

$$\begin{aligned} \begin{aligned}&{d_{U_-}}(u):= \inf \left\{ \textbf{d}(x) \text {: } x\in \overline{\mathbb {R}^n\setminus \mathcal {T}} \ \ \textrm{with} \ \ \textrm{U}(x) \ge u \right\} \\&{d_{U_+}}(u):= \sup \left\{ \textbf{d}(x) \text {: } x\in \overline{\mathbb {R}^n\setminus \mathcal {T}} \ \ \textrm{with} \ \ \textrm{U}(x) \le u \right\} . \end{aligned} \end{aligned}$$
(31)

It is easy to see that \({d_{U_-}} \), \({d_{U_+}} \) are strictly increasing and satisfy

$$\begin{aligned} \begin{aligned}&{d_{U_-}}(0)=\lim _{u\rightarrow 0^+}{d_{U_-}}(u) =\lim _{u\rightarrow 0^+}{d_{U_+}}(u)=0={d_{U_+}}(0),\\&{d_{U_-}}(\textrm{U}(x))\le \textbf{d}(x)\le {d_{U_+}}(\textrm{U}(x)) \qquad \forall x\in \overline{\mathbb {R}^n\setminus \mathcal {T}}. \end{aligned} \end{aligned}$$
(32)
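For instance (an immediate special case, with \(\mathcal {T}\) compact so that \(\textbf{d}\) is continuous, unbounded, and attains every value in \([0,+\infty [\)), if U is the distance \(\textbf{d}\) itself, then both maps reduce to the identity and (32) holds with equalities:

$$\begin{aligned} d_{U_-}(u)=\inf \{\textbf{d}(x) \text {: } \textbf{d}(x)\ge u\}=u, \qquad d_{U_+}(u)=\sup \{\textbf{d}(x) \text {: } \textbf{d}(x)\le u\}=u \qquad \forall u\ge 0. \end{aligned}$$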

Furthermore, \(d_{U_-}\), \(d_{U_+}\) can be approximated from below and from above, respectively, by continuous, strictly increasing functions that satisfy (32). Let us redefine \(d_{U_-}\) and \(d_{U_+}\) as these two approximations. Fix r, \(R>0\) such that \(r<R\) and set

$$\begin{aligned} {\hat{\textbf{U}}}_R:= {d_{U_-}^{\,\,\,-1}}(R), \qquad \qquad \tilde{R}:=d_{U_+}({\hat{\textbf{U}}}_R) = d_{U_+}\circ {d_{U_-}^{\,\,\,-1}}(R), \end{aligned}$$
(33)

so that \(B(\mathcal {T}, R )\subseteq U^{-1}([0, {\hat{\textbf{U}}}_R])\subseteq B(\mathcal {T}, {\tilde{R}} ).\) Now, by applying Lemma 1 for this \({\tilde{R}}\), we obtain that there exist some \({\bar{\delta }}={\bar{\delta }}({\tilde{R}})\) and \(\omega =\omega ({\tilde{R}})>0\) such that, for any \(x\in U^{-1}(]0,{\hat{\textbf{U}}}_R])\) and \(t\in [0,{\bar{\delta }}]\), each \(\mathcal {V}\)-multiflow \(y_{x,t}\) is defined on [0, t] and satisfies (12), i.e.

$$\begin{aligned} y_{x,t}([0,t])\subset \mathcal {B}(\mathcal {T},2{\tilde{R}}), \quad \left| y_{x,t}(t) - x - {\text {sgn}}_x\, B_x( {\textbf{g}}_x)(x) \left( \frac{t}{{\mathfrak {s}}} \right) ^\ell \right| \le \omega \, t \left( \frac{t}{{\mathfrak {s}}}\right) ^\ell , \end{aligned}$$
(34)

where \(\ell =\ell (x)=\ell (B_x)\), \({\mathfrak {s}}={\mathfrak {s}}(x) = {\mathfrak {s}}(B_x)\), as in (13).

From the \(\nu \)-semiconcavity of U on \(B(\mathcal {T},c)\setminus \mathcal {T}\) and the local semiconcavity of U on \(\mathbb {R}^n\setminus \mathcal {T}\) (which implies local Lipschitz continuity), it follows that, for the same \({\tilde{R}}\) as above, there exist \({\bar{C}}={\bar{C}}({\tilde{R}})>0\) and \(L_U=L_U(\tilde{R})>0\) such that, for every x, \({\hat{x}}\in \mathcal {B}(\mathcal {T},2\tilde{R}){\setminus }\mathcal {T}\) with \(\textrm{sgm}({\hat{x}},x)\subset \mathcal {B}(\mathcal {T},2\tilde{R}){\setminus }\mathcal {T}\) and \(|{\hat{x}}-x|\le \theta \) (\(\theta \) as in Definition 15), one has, for every \( p\in \partial U(x)\) (see (7) and (23))

$$\begin{aligned} \begin{array}{l} U({\hat{x}})-U(x)\le \langle p, {\hat{x}} -x\rangle + {\bar{C}}\left( 1\vee \frac{1}{\textbf{d}(\textrm{sgm}({\hat{x}},x))^\nu }\right) \, |{\hat{x}}-x|^2, \\ |p|\le L_U. \end{array} \end{aligned}$$
(35)

Finally, under hypothesis (H2) there are some \(M=M(\tilde{R})>0\) and \(L_l=L_l({\tilde{R}})>0\), such that, for all \({\hat{x}}\), \(x\in \mathcal {B}(\mathcal {T},2{\tilde{R}})\), one has

$$\begin{aligned} |B(\textbf{g})(x)| \le M \quad \forall (B,\textbf{g}, {\text {sgn}})\in \mathcal {F}^{(k)}, \quad |l({\hat{x}},a)-l(x,a)|\le L_l|{\hat{x}}-x| \quad \forall a\in A.\nonumber \\ \end{aligned}$$
(36)

For brevity, we often omit to explicitly write the dependence of the constants \({\bar{\delta }}\), \(\omega \), \(L_U\), M, and \(L_l\) (and of the constants derived from them) on \({\tilde{R}}\).

4.2 Estimating U Increments When \(\mathcal {V}(x)\) has Degree \(\ell \le k\)

For every \(r\in [0,R]\), define

$$\begin{aligned} {\hat{\textbf{u}}}_r:=\chi ^{-1}\big ({d_{U_+}^{\,\,\,-1}}(r)\big ), \end{aligned}$$
(37)

where the map \(\chi \) is defined by setting, for every \( u\ge 0\),

$$\begin{aligned} \lambda (u):= u \,\, \vee \,\, u^{\frac{1}{k}}, \qquad \quad \chi (u):= u + 2 \lambda (u). \end{aligned}$$
(38)

Notice that both \(\lambda \) and \(\chi :\mathbb {R}_{\ge 0}\rightarrow \mathbb {R}_{\ge 0}\) are continuous, strictly increasing, surjective, \(\chi (0)=\lambda (0)= 0\), and \(\chi (u) > u \) for all \(u>0\). By construction, we immediately get

$$\begin{aligned} \mathcal {T}\subset U^{-1}([0, {\hat{\textbf{u}}}_r]) \subseteq B(\mathcal {T}, r )\subset B(\mathcal {T}, R )\subseteq U^{-1}([0, {\hat{\textbf{U}}}_R])\subseteq B(\mathcal {T}, \tilde{R} )\subset B(\mathcal {T}, 2{\tilde{R}}). \end{aligned}$$
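To make the definitions (37)–(38) concrete, here is a minimal numerical sketch in Python; the profile chosen for \(d_{U_+}\) (namely \(d_{U_+}(u)=\sqrt{u}\), so that \(d_{U_+}^{\,\,\,-1}(r)=r^2\)) and all numerical values are purely illustrative assumptions, not data from the present setting.

```python
# Sketch of (37)-(38): lambda, chi, and u_hat_r = chi^{-1}(d_{U+}^{-1}(r)).
# The profile d_{U+}(u) = sqrt(u) below is a hypothetical placeholder.

def lam(u, k):
    """lambda(u) = u v u^(1/k), as in (38)."""
    return max(u, u ** (1.0 / k))

def chi(u, k):
    """chi(u) = u + 2*lambda(u), as in (38); strictly increasing, chi(0) = 0."""
    return u + 2.0 * lam(u, k)

def chi_inv(v, k, tol=1e-12):
    """Invert chi by bisection (chi is continuous, strictly increasing, onto)."""
    lo, hi = 0.0, 1.0
    while chi(hi, k) < v:          # bracket the solution
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if chi(mid, k) < v:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def u_hat(r, k, d_Uplus_inv):
    """u_hat_r := chi^{-1}(d_{U+}^{-1}(r)), as in (37)."""
    return chi_inv(d_Uplus_inv(r), k)

# Illustrative profile: d_{U+}(u) = sqrt(u), hence d_{U+}^{-1}(r) = r**2.
k = 2
r = 0.5
u = u_hat(r, k, lambda rr: rr ** 2)
# By construction chi(u_hat_r) = d_{U+}^{-1}(r), which is what drives the
# inclusion chain U^{-1}([0, u_hat_r]) <= B(T, r) displayed above.
```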

As a consequence of Lemma 2 below, the degree-k MRF U is decreasing when evaluated along any multiflow \(y_{x,t}\) of the degree-k U-feedback generator \(\mathcal {V}\), with \(x\in U^{-1}([{\hat{\textbf{u}}}_r, \hat{\textbf{U}}_R])\) and t in an interval which depends on the degree \(\ell =\ell (x)\) defined as above.

Lemma 2

Fix an integer \(\ell \in \{1,\dots ,k\}\). Then, with the above notations, there exists some positive \(\delta _\ell (r)\) (\(=\delta _\ell ({\tilde{R}},r)\), i.e. depending on \({\tilde{R}}\) as well) such that, for any \(x\in U^{-1}([\hat{\textbf{u}}_r, \hat{\textbf{U}}_R])\) verifying \(\ell (x)=\ell \), and any \(t\in [0,\delta _\ell (r)]\), the \(\mathcal {V}\)-multiflow \(y_{x,t}\) verifies

$$\begin{aligned} U(y_{x,t}(t)) - U(x) +\frac{t^{\ell -1}}{{\mathfrak {s}}^{\ell }} p_0(U(x)) \int _0^t l (y_{x,t}(s), \alpha _{x,t}(s))\,ds \, \le -\frac{\gamma (U(x))}{2} \frac{t^{\ell }}{{\mathfrak {s}}^{\ell }},\nonumber \\ \end{aligned}$$
(39)
$$\begin{aligned} \textbf{d}(y_{x,t}(s))\le 2{\tilde{R}} \qquad \forall s\in [0,t], \end{aligned}$$

and

$$\begin{aligned} \displaystyle \frac{\hat{\textbf{u}}_r}{2}\le U(y_{x,t}(t))< U(x). \end{aligned}$$
(40)

Proof

We begin by proving property (40). Let us set

$$\begin{aligned} \delta _0:= 1\,\, \wedge \,\, {\bar{\delta }} \,\, \wedge \,\, \frac{\theta }{M}, \qquad {\hat{\delta }}_\ell (r):= \frac{\hat{\textbf{u}}_r^{\frac{1}{\ell }}}{(2 (M+\omega ) L_{_U})^{\frac{1}{\ell }} \vee L_{_U}M}. \end{aligned}$$
(41)

Fix \(x\in U^{-1}([\hat{\textbf{u}}_r, \hat{\textbf{U}}_R])\) with degree \(\ell (x)=\ell \). By the first relation in (34), for every \(t\in [0,\delta _0]\) one has \(y_{x,t}(s)\in \mathcal {B}(\mathcal {T},2{\tilde{R}})\) for all \(s\in [0,t]\). Hence, in view of (36) and (41), one has \(|y_{x,t}(s)-x|\le \theta \) for all \(s\in [0,t]\). Furthermore, the definition of \(L_U\) and the fact that \(U\equiv 0\) on \(\mathcal {T}\) imply that \(U(x)\le L_U \textbf{d}(x)\). Therefore, since \({\mathfrak {s}}\ge 1\), for all times \(t\in [0,\delta _0\wedge {\hat{\delta }}_\ell (r)]\) the second relation in (34) yields

$$\begin{aligned} |y_{x,t}(t) -x| \le (M+\omega t) \left( \frac{t}{{\mathfrak {s}}} \right) ^\ell \le (M+\omega ) t^{\ell } \le \frac{\hat{\textbf{u}}_r}{2L_{_U}} \le \frac{U(x)}{2L_{_U}} \le \frac{\textbf{d}(x)}{2}. \end{aligned}$$
(42)

As a first consequence, one has \(|U(y_{x,t}(t)) - U(x)| \le \displaystyle \frac{\hat{\textbf{u}}_r}{2}\). Since \(U(x) \ge \hat{\textbf{u}}_r\), this proves the left-hand side of (40).

Now let us prove (39). Since for any \(z\in \mathcal {B}(x,\textbf{d}(x)/2)\) one has \(\textbf{d}(z)\ge \textbf{d}(x)/2\), and (42) implies that \(\textrm{sgm}(x, y_{x,t}(t))\subset \mathcal {B}( x,\textbf{d}(x)/2)\), one has

$$\begin{aligned} \textbf{d}(\textrm{sgm}(x, y_{x,t}(t)))\ge \frac{\textbf{d}(x)}{2} \ge \frac{\hat{\textbf{u}}_r}{2 L_{_U}}. \end{aligned}$$
(43)

Let now \({\check{\delta }}_\ell (r)\) be the unique solution (the left-hand side below being continuous, strictly increasing in \(\delta \), vanishing at \(\delta =0\), and unbounded) of the equation

$$\begin{aligned} \left( L_{_U}\omega + \frac{L_l M}{2} \right) \delta + \frac{\bar{C}(2L_{_U})^\nu (M+\omega )^2}{[(2L_U)\wedge \hat{\textbf{u}}_r]^\nu } \delta ^\ell =\frac{\gamma (\hat{\textbf{u}}_r)}{2}, \end{aligned}$$
(44)

where (M and) \(L_l\) are as in (36), and set

$$\begin{aligned} \delta _\ell (r):= \delta _0 \,\,\wedge \,\,{\hat{\delta }}_\ell (r)\,\, \wedge \,\,{\check{\delta }}_\ell (r). \end{aligned}$$
(45)

Using (30), (35), (43), and (34), for any \(t\in [0, \delta _\ell (r)]\), we get

$$\begin{aligned}&\displaystyle U(y_{x,t}(t)) - U(x) + \left( \frac{t^{{\ell }-1}}{{\mathfrak {s}}^{\ell }}\right) p_0(U(x)) \int _0^t l(y_{x,t}(s), \alpha _{x,t}(s))\,ds \, \\&\quad \le \langle p(x), y_{x,t}(t) -x\rangle +{\bar{C}}\vee \frac{ {\bar{C}}}{\textbf{d}(\textrm{sgm}(x, y_{x,t}(t)))^\nu }\,|y_{x,t}(t) -x|^2 \\&\displaystyle \qquad \qquad \qquad \qquad +\left( \frac{t^{{\ell }-1}}{{\mathfrak {s}}^{\ell }}\right) p_0(U(x)) \int _0^t \max _{a\in A(\mathcal {V}(x))} l(y_{x,t}(s),a)ds \, \\&\qquad \displaystyle \le \left( \frac{t}{{\mathfrak {s}}} \right) ^{\ell } \left[ \langle p(x), {\text {sgn}}_x B_x({\textbf{g}}_x)(x)\rangle + L_{_U} \omega t+ p_0(U(x))\max _{a\in A(\mathcal {V}(x)) } l(x,a) \right. \\&\displaystyle \quad \qquad \qquad \qquad \qquad \qquad \left. +\frac{p_0(U(x)) L_l M}{2} t + \frac{{\bar{C}}(2L_{_U})^\nu (M+\omega t)^2}{[(2L_U)\wedge \hat{\textbf{u}}_r]^\nu } \left( \frac{t}{{\mathfrak {s}}} \right) ^{\ell } \right] \\&\displaystyle \qquad \le \left( \frac{t}{{\mathfrak {s}}} \right) ^{\ell } \left[ -\gamma (U(x)) + \left( L_{_U} \omega + \frac{L_l M}{2} \right) t + \frac{{\bar{C}}(2L_{_U})^\nu (M+\omega )^2 }{[(2L_U)\wedge \hat{\textbf{u}}_r]^\nu } t^{\ell } \right] \\&\displaystyle \qquad \le \left( \frac{t}{{\mathfrak {s}}} \right) ^{\ell }\left[ -\gamma (U(x)) + \frac{\gamma (\hat{\textbf{u}}_r)}{2} \right] \le -\frac{\gamma (U(x))}{2} \left( \frac{t}{{\mathfrak {s}}}\right) ^{\ell }. \end{aligned}$$

\(\square \)

In the next lemma we determine two U-sublevel sets such that the \(\mathcal {V}\)-multiflows issuing from them remain in \(\mathcal {B}(\mathcal {T},r)\) until a time that depends on the utilized iterated Lie bracket.

Lemma 3

Fix an integer \(\ell \in \{1,\dots ,k\}\). Then, for \(\lambda \) and \(\chi \) as in (38) and using the above notations, for any \(x\in U^{-1}(]0, \hat{\textbf{U}}_R])\) with the degree \(\ell (x)\) of \(\mathcal {V}(x)\) equal to \(\ell \) and any \(t\in [0,\delta _\ell (r)]\), the \(\mathcal {V}\)-multiflow \(y_{x,t}\) satisfies (i) and (ii) below:

(i):

if \(y_{x,t}(\textbf{t})\in U^{-1}(]0,\hat{\textbf{u}}_r])\) for some \(\textbf{t}\in [0,t]\), then \(U(y_{x,t}(s))\le \hat{\textbf{u}}_r + \lambda (\hat{\textbf{u}}_r)\) for any \(s\in [\textbf{t},t]\), so that, in particular, \(\textbf{d}(y_{x,t}(s))\le r\) for any \(s\in [\textbf{t},t]\);

(ii):

if \(x\in U^{-1}(]\hat{\textbf{u}}_r, \hat{\textbf{u}}_r + \lambda (\hat{\textbf{u}}_r)])\), then \(U(y_{x,t}(s))\le \chi (\hat{\textbf{u}}_r)\) for any \(s\in [0,t]\), so that, in particular, \(\textbf{d}(y_{x,t}(s))\le r\) for any \(s\in [0,t]\).

Proof

The proofs of (i) and (ii) follow the same lines, so we prove (ii) only. Let \(x\in U^{-1}(]\hat{\textbf{u}}_r,\hat{\textbf{u}}_r+\lambda (\hat{\textbf{u}}_r)])\). Since \(y_{x,t}(s)\in \mathcal {B}(\mathcal {T},2{\tilde{R}})\) for all \(s\in [0,t]\) and \(t\le {\hat{\delta }}_\ell (r)\), with \({\hat{\delta }}_\ell (r)\) as defined in (41), recalling the definition of \(\lambda \) one has

$$\begin{aligned} |U(y_{x,t}(s))-U(x)|\le L_{_U} |y_{x,t}(s)-x| \le L_{_U} M s \le \lambda (\hat{\textbf{u}}_r) \qquad \forall s\in [0,t]. \end{aligned}$$

Hence, \(U(y_{x,t}(s))\le \chi (\hat{\textbf{u}}_r)\) for any \(s\in [0,t]\), by the definition of \(\chi \). In view of (32) and (37), this implies that \(\textbf{d}(y_{x,t}(s))\le r\) for any \(s\in [0,t]\). \(\square \)

4.3 Stabilizing \({\mathfrak {d}}\)-Scaled \(\mathcal {V}\)-Sampling Processes

Given \(0<r<R\), set

$$\begin{aligned} {\mathfrak {d}}={\mathfrak {d}}(R,r):=(\delta _1(r), \dots ,\delta _k(r)), \end{aligned}$$

where \(\delta _\ell (r)\) is as in (45) for any \(\ell =1,\dots ,k\). Let \((x, \pi , \alpha _x^\pi , y_x^\pi )\) be an arbitrary \({\mathfrak {d}}(R,r)\)-scaled \(\mathcal {V}\)-sampling process such that \(\textbf{d}(x)\le R\). Since the partition \(\pi =(s_j)_{j\in \mathbb {N}}\) of \(\mathbb {R}_{\ge 0}\) satisfies (17) (and \(\mathcal {B}(\mathcal {T},R)\subseteq U^{-1}(]0,\hat{\textbf{U}}_R])\)), thanks to the definition of \({\mathfrak {d}}\), Lemma 2 implies that \((\alpha _x^\pi , y_x^\pi )\) is an admissible control-trajectory pair from x, and

$$\begin{aligned} y_x^\pi (s)\in \mathcal {B}(\mathcal {T},2{\tilde{R}}) \qquad \text {for all } s\ge 0. \end{aligned}$$
(46)

In particular, using the notations of Definitions 7 and 8, for every \(j\in \mathbb {N}\), \(1\le j\le {\textbf{j}}\), the \(\mathcal {V}\)-multiflow \(y_{x_j,t_j}\) is defined on the whole interval \([0,t_j]\), where

$$\begin{aligned} t_j:= s_j-s_{j-1}, \qquad x_1:=x, \qquad \qquad x_{j+1}:= y_{x_{j},t_{j}}(s_{j}-s_{j-1}). \end{aligned}$$

Hence, one has \(y_x^\pi (s_j)=x_{j+1}\) for all \(0\le j < {\textbf{j}}\). (However, whenever \({\textbf{j}}<+\infty \), the trajectory \(y_x^\pi \) reaches the target \(\mathcal {T}\) for the first time at some \(\sigma _{\textbf{j}}\in ]s_{{\textbf{j}}-1},s_{\textbf{j}}]\). In this case, after \(\sigma _{\textbf{j}}\), the pair \((\alpha _x^\pi , y_x^\pi )\) is extended constantly, while the \({\textbf{j}}\)-th \(\mathcal {V}\)-multiflow remains defined as before, over the entire interval \([s_{{\textbf{j}}-1},s_{\textbf{j}}]\), so that we may well have \(y_x^\pi (s_{\textbf{j}})=y_x^\pi (\sigma _{\textbf{j}})\ne x_{{\textbf{j}}+1}\).)
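The recursion above is a sample-and-hold scheme: the feedback value selected at \(s_{j-1}\) is held on \([s_{j-1},s_j]\). As a toy illustration of this mechanism (a deliberately simple stand-in, not the degree-k construction of this section), consider the scalar system \(\dot{y}=u\) with \(U(y)=|y|\), target \(\mathcal {T}=\{0\}\), and feedback \(u=-{\text {sgn}}(y)\):

```python
# Toy sample-and-hold trajectory x_1, x_2, ... mimicking the recursion
# t_j := s_j - s_{j-1}, x_{j+1} := y_{x_j,t_j}(t_j), for the simple system
# dy/ds = u with U(y) = |y| and feedback u = -sgn(y).  This scalar model is
# an illustrative stand-in, not the degree-k construction of the paper.

def sample_and_hold(x, step, n_steps):
    """Hold the feedback value computed at s_{j-1} over the whole step."""
    traj = [x]
    y = x
    for _ in range(n_steps):
        u = -1.0 if y > 0 else (1.0 if y < 0 else 0.0)  # frozen on [s_{j-1}, s_j]
        y = y + u * step                                 # exact flow of dy/ds = u
        traj.append(y)
    return traj

traj = sample_and_hold(x=1.0, step=0.25, n_steps=6)
U_vals = [abs(y) for y in traj]
# U decreases by `step` at every sampling time until the state enters the
# sublevel set {U <= step}, and then remains there (cf. Lemmas 2-3).
```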

In the following lemma we provide an upper bound for the time taken by \(y_x^\pi \) to reach the sublevel set \(U^{-1}(]0,\hat{\textbf{u}}_r[)\).

Lemma 4

Consider \((x, \pi , \alpha _x^\pi , y_x^\pi )\) as above. Then, one has

$$\begin{aligned} \iota _r:=\inf \{ j \in \mathbb {N}\text {: } U(y_x^{\pi }(s_j))< \hat{\textbf{u}}_r \} < +\infty , \end{aligned}$$
(47)

and \(\iota _r\le {\textbf{j}}\) when \({\textbf{j}}<+\infty \). Moreover,

$$\begin{aligned} s_{\iota _r} \le \textbf{T}(R,r):= \beta (k) \root k \of {\frac{2(\hat{\textbf{U}}_R-\hat{\textbf{u}}_r) J(R,r)^{k-1}}{\gamma (\hat{\textbf{u}}_r)}} +1, \end{aligned}$$
(48)

where \( \beta (k)\) is as in (8) and, for

$$\begin{aligned} \mu (R,r):=\min \{\delta _\ell (r) \text {: } \ell =1,\dots ,k \}, \end{aligned}$$
(49)

\(J(R,r)\) is defined as

$$\begin{aligned} J(R,r):= {\left\{ \begin{array}{ll} \left[ \!\!\left[ \frac{2 (\hat{\textbf{U}}_R-\hat{\textbf{u}}_r) \beta (k)^k }{\gamma (\hat{\textbf{u}}_r) \mu (R,r)^k \Delta (k)^k} \right] \!\!\right] +1 \qquad &{}\text {if}\, k\ge 2, \\ \qquad \quad 1 &{}\text {if}\, k=1. \end{array}\right. } \end{aligned}$$
(50)

Proof

If \({\textbf{j}}<+\infty \), one has by definition that \(U(y_x^{\pi }(s_j))=0\) for every \(j\ge {\textbf{j}}\). Therefore, \(\iota _r \le {\textbf{j}}<+\infty \). If \({\textbf{j}}=+\infty \) and we assume by contradiction that \(\iota _r=+\infty \), then

\(U(y_x^{\pi }(s_j)) \ge \hat{\textbf{u}}_r\) for any \(j\in \mathbb {N}\). Hence, if we set \(\ell _j:= \ell (y_x^\pi (s_{j-1}))\), and \({\mathfrak {s}}_j:= {\mathfrak {s}}(y_x^\pi (s_{j-1}))\), by (39) and the monotonicity of \(\gamma \), for any integer \(j\ge 1\), we have

$$\begin{aligned} \begin{array}{l} \hat{\textbf{u}}_r - \hat{\textbf{U}}_R \le U(y_x^{\pi }(s_j)) - U(x) = \left[ U(y_x^\pi (s_j)) - U(y_x^\pi (s_{j-1}))\right] + \dots \\ \displaystyle \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad + \left[ U(y_x^\pi (s_1))- U(x) \right] \displaystyle \\ \displaystyle \ \qquad \qquad \le -\frac{\gamma (U(y_x^\pi (s_{j-1})))}{2} \left( \frac{t_j}{{\mathfrak {s}}_j} \right) ^{\ell _j} - \cdots - \frac{\gamma (U(x))}{2} \left( \frac{t_1}{{\mathfrak {s}}_1} \right) ^{\ell _1} \\ \displaystyle \ \qquad \qquad \le - \frac{\gamma (\hat{\textbf{u}}_r)}{2 \beta (k)^k}[t_1^{k} + \dots + t_j^{k}], \end{array} \end{aligned}$$
(51)

where we have used that \(1\ge (t_j/{\mathfrak {s}}_j)^{\ell _j} \ge (t_j/\beta (k))^k\) for any integer \(j\ge 1\). If \(k=1\), then \(t_1+\dots +t_j=s_j\rightarrow +\infty \) when \(j\rightarrow +\infty \), by the very definition of partition of \(\mathbb {R}_{\ge 0}\). Otherwise, if \(k\ge 2\), (17) implies that \(t_1^k+\dots +t_j^k \ge \mu (R,r)^k \Delta (k)^k \, j\), which tends to \(+\infty \) as \(j\rightarrow +\infty \). Hence, in both cases we reach a contradiction, so that \(\iota _r<+\infty \).

Let us now prove (48). Using the Jensen inequality in (51), we deduce that

$$\begin{aligned} \hat{\textbf{U}}_R-\hat{\textbf{u}}_r \ge \frac{\gamma (\hat{\textbf{u}}_r)}{2 \beta (k)^k} \frac{s_j^k}{j^{k-1}} \end{aligned}$$
(52)

for any \(j\le \iota _r-1\). Now, taking \(j=\iota _r-1\) and \(k=1\) in (52), we get (48) in the particular case \(k=1\). Indeed, one has

$$\begin{aligned} s_{\iota _r} \le s_{\iota _r -1} + \delta _1(R,r) \le \frac{2(\hat{\textbf{U}}_R - \hat{\textbf{u}}_r)}{\gamma (\hat{\textbf{u}}_r)} +1. \end{aligned}$$

Let now \(k\ge 2\). Again by (51), as soon as \(j\le \iota _r-1\) we obtain

$$\begin{aligned} \hat{\textbf{U}}_R-\hat{\textbf{u}}_r \ge \frac{\gamma (\hat{\textbf{u}}_r)}{2 \beta (k)^k} \mu (R, r)^k\Delta (k)^k\, j. \end{aligned}$$

In particular, taking \(j=\iota _r-1\) in the previous relation, we deduce that

$$\begin{aligned} \iota _r\le J(R,r), \end{aligned}$$
(53)

with \(J(R,r)\) as in (50). By (53) and (52) we finally obtain (48) also for \(k\ge 2\). \(\square \)
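For orientation, the bounds (48) and (50) are straightforward to evaluate numerically. The sketch below does so; every input value is an illustrative placeholder, not a quantity computed from the present assumptions.

```python
# Numerical sketch of the bounds (48)-(50): given placeholder values of
# U_hat_R, u_hat_r, gamma(u_hat_r), beta(k), Delta(k), mu(R,r), compute
# J(R,r) and the uniform time bound T(R,r).  No number below comes from
# the paper; they only make the formulas concrete.
import math

def J_bound(k, U_hat_R, u_hat_r, gamma_u, beta_k, Delta_k, mu):
    """J(R,r) as in (50): an upper bound for the index iota_r."""
    if k == 1:
        return 1
    return math.floor(2 * (U_hat_R - u_hat_r) * beta_k ** k
                      / (gamma_u * mu ** k * Delta_k ** k)) + 1

def T_bound(k, U_hat_R, u_hat_r, gamma_u, beta_k, Delta_k, mu):
    """T(R,r) as in (48): an upper bound for the sampling time s_{iota_r}."""
    J = J_bound(k, U_hat_R, u_hat_r, gamma_u, beta_k, Delta_k, mu)
    return beta_k * (2 * (U_hat_R - u_hat_r) * J ** (k - 1) / gamma_u) ** (1.0 / k) + 1

# Illustrative values:
T = T_bound(k=2, U_hat_R=4.0, u_hat_r=0.5, gamma_u=0.25, beta_k=2.0,
            Delta_k=0.5, mu=0.1)
```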

We are now ready to show that the degree-k U-feedback generator \(\mathcal {V}\) degree-k U-sample stabilizes system (9) to \(\mathcal {T}\). Fix an arbitrary \({\mathfrak {d}}(R,r)\)-scaled \(\mathcal {V}\)-sampling process \((x, \pi , \alpha _x^\pi , y_x^\pi )\) such that \(\textbf{d}(x)\le R\), as above.

By (46) and (37), setting

$$\begin{aligned} \Gamma (R):= 2{\tilde{R}} = 2\, d_{U_+}({d_{U_-}^{\,\,\,-1}}(R)), \end{aligned}$$

we directly get the overshoot boundedness property (i) in Definition 11, namely \(\textbf{d}(y_x^{\pi }(s)) \le \Gamma (R)\) for any \(s\ge 0.\) Furthermore, the properties of the functions \(d_{U_+}\) and \(d_{U_-}\) (see (32)) imply that \(\displaystyle \lim _{R\rightarrow 0} \Gamma (R)=0\).

In order to prove the U-uniform attractiveness property (ii)-(iii), observe that, by the very definition of \(\iota _r<+\infty \) in Lemma 4, one has \(U(y_x^\pi (s_{\iota _r}))\le \hat{\textbf{u}}_r\). Accordingly, the time

$$\begin{aligned} {\textbf{t}}= \textbf{t}(y_x^\pi ,r):=\inf \{s\ge 0 \text {: } U(y_x^\pi (s))\le \hat{\textbf{u}}_r \}, \end{aligned}$$
(54)

(is finite and) satisfies

$$\begin{aligned} {\textbf{t}} \le s_{\iota _r} \le \textbf{T}(R,r), \end{aligned}$$
(55)

for \(\textbf{T}(R,r)\) as in (48). Set

$$\begin{aligned} {\varphi }(r):=\hat{\textbf{u}}_r \qquad \text {for any } r>0. \end{aligned}$$
(56)

To conclude this step, it only remains to show condition (iii) in Definition 11, which amounts to proving that

$$\begin{aligned} \textbf{d}(y_x^\pi (s)) \le r \qquad \text {for any } s\ge {\textbf{t}}. \end{aligned}$$
(57)

Let \({\bar{\textbf{j}}}\ge 1\) be the integer such that \({\textbf{t}}\in [s_{{\bar{\textbf{j}}}-1}, s_{{\bar{\textbf{j}}}}[\), so that, by Lemma 3, (i), one has that \(U(y_x^{\pi }(s))\le \hat{\textbf{u}}_r + \lambda (\hat{\textbf{u}}_r)\) and \(\textbf{d}(y_x^\pi (s))\le r\) for any \(s\in [{\textbf{t}}, s_{{\bar{\textbf{j}}}}]\). Hence, either case (a) or case (b) below occurs:

  1. (a)

    \(U\left( y_x^{\pi }(s_{{\bar{\textbf{j}}}})\right) \le \hat{\textbf{u}}_r\).

  2. (b)

    \(U(y_x^{\pi }(s_{{\bar{\textbf{j}}}})) \in ]\hat{\textbf{u}}_r, \hat{\textbf{u}}_r + \lambda (\hat{\textbf{u}}_r)]\).

Case (a). By Lemma 3, (i), it follows that \(\textbf{d}(y_x^{\pi }(s))\le r\) for all \(s \in [s_{{\bar{\textbf{j}}}}, s_{{\bar{\textbf{j}}}+1}]\). Then either case (a) or case (b) above holds with \({\bar{\textbf{j}}} +1\) replacing \({\bar{\textbf{j}}}\).

Case (b). Arguing as in Lemma 4, we deduce that there exists an integer \(\iota ({\bar{\textbf{j}}})\) satisfying

$$\begin{aligned} \iota ({\bar{\textbf{j}}}):= \inf \{ j \in \mathbb {N}\text {: } j\ge {\bar{\textbf{j}}}+1 \ \ \text { and } \ \ U\left( y_x^{\pi }(s_j)\right)< \hat{\textbf{u}}_r \}<+\infty . \end{aligned}$$

In particular, by Lemma 2, the sequence \(\left( U(y_x^{\pi }(s_j))\right) _j\) is decreasing for \({\bar{\textbf{j}}} \le j \le \iota ({\bar{\textbf{j}}})\) and by Lemma 3, (ii), we get \(\textbf{d}(y_x^{\pi }(s)) \le r\) for all \(s\in [s_{{\bar{\textbf{j}}}}, s_{\iota ({\bar{\textbf{j}}})}]\). Moreover, in view of Lemma 3, (i), we also have \(\textbf{d}(y_x^{\pi }(s)) \le r\) for all \(s\in [s_{\iota ({\bar{\textbf{j}}})}, s_{\iota ({\bar{\textbf{j}}})+1}]\). Then, either case (a) or case (b) above holds with \(\iota ({\bar{\textbf{j}}})+1\) replacing \({\bar{\textbf{j}}}\).

The proof of (57) is thus concluded. In particular, let us point out that the continuity and monotonicity properties of the functions \(\Gamma \), \(\varphi \), and \(\textbf{T}\) are straightforward consequences of their very definitions and of the properties of the functions \(d_{U_-}\) and \(d_{U_+}\).

4.4 The Cost Boundedness Property

Let \(0<r<R\) and let \((x, \pi , \alpha _x^\pi , y_x^\pi , {\mathfrak {I}}_x^\pi )\) be an arbitrary \({\mathfrak {d}}(R,r)\)-scaled \(\mathcal {V}\)-sampling process-cost such that \(\textbf{d}(x)\le R\), associated with a \({\mathfrak {d}}(R,r)\)-scaled \(\mathcal {V}\)-sampling process \((x, \pi , \alpha _x^\pi , y_x^\pi )\) as in the previous step. We use the same notations as above and, in addition, set

$$\begin{aligned} u_j:= U(x_{j })\qquad \text {for any } j\in \mathbb {N}, \ 1\le j\le {\textbf{j}}+1. \end{aligned}$$

Observe that from the previous lemmas it follows that

$$\begin{aligned} \hat{\textbf{u}}_r \le u_{\iota _r}<u_{\iota _r-1}<\dots<u_1=U(x), \quad \frac{\hat{\textbf{u}}_r}{2}\le u_{\iota _r+1}<u_{\iota _r}. \end{aligned}$$
(58)

To prove the uniform cost boundedness property (iv) in Definition 11, we need to construct an integral-cost-bound function \({\varvec{\Psi }}=\Lambda \,\Psi \) such that

$$\begin{aligned} {\mathfrak {I}}_x^\pi ({\textbf{t}}) = \int _0^{{\textbf{t}}} l(y_x^{\pi }(s),\alpha _x^{\pi }(s))\, ds \le \Lambda (R)\, \Psi (U(x),\varphi (r)), \end{aligned}$$

where \({\textbf{t}}\) is as in (54). When \(U(x)\le \varphi (r)\), the integral is zero (because \({\textbf{t}} =0\)) and the estimate is trivial for every nonnegative \({\varvec{\Psi }}\). Hence, let us suppose \(U(x)> \varphi (r)=\hat{\textbf{u}}_r\). Since \(l\ge 0\), from (55), (58) and Lemma 2, we get

$$\begin{aligned} \begin{aligned} \int _0^{{\textbf{t}}}&l( y_x^\pi (s), \alpha _x^\pi (s)) ds \le \int _0^{s_{\iota _r} \wedge S_{y_x^\pi }} l( y_x^\pi (s), \alpha _x^\pi (s)) ds \\&= \sum _{j=1}^{\iota _r-1} \int _{s_{j-1}}^{s_{j}} l( y_x^\pi (s), \alpha _x^\pi (s)) ds + \int _{s_{\iota _r-1}}^{s_{\iota _r} \wedge \sigma _{\textbf{j}}} l( y_x^\pi (s), \alpha _x^\pi (s)) ds \\&\le \sum _{j=1}^{\iota _r } \frac{{\mathfrak {s}}_j^{\ell _j}}{t_j^{\ell _j-1}} \frac{u_j-u_{j+1}}{p_0(u_j)}, \end{aligned} \end{aligned}$$
(59)

where \(S_{y_x^\pi }\) is as in Definition 2, so that \(S_{y_x^\pi }=\sigma _{\textbf{j}}\), and, in particular,

$$\begin{aligned} \begin{aligned} \int _{s_{\iota _r-1}}^{s_{\iota _r} \wedge S_{y_x^\pi }} l( y_x^\pi (s), \alpha _x^\pi (s)) ds&\le \int _{s_{\iota _r-1}}^{s_{\iota _r}} l( y_{x_{\iota _r}, t_{\iota _r}}(s), \alpha _{x_{\iota _r}, t_{\iota _r}}(s)) ds \\&\le \frac{({\mathfrak {s}}_{\iota _r})^{\ell _{\iota _r}} (u_{\iota _r}-u_{\iota _r+1})}{(t_{\iota _r})^{\ell _{\iota _r}-1}{p_0(u_{\iota _r})} }. \end{aligned} \end{aligned}$$

Define the function \({\hat{\Lambda }}:\{(v_1,v_2)\in \mathbb {R}_{>0}^2: \ v_2\le v_1\} \rightarrow \mathbb {R}_{>0}\) by

$$\begin{aligned} {\hat{\Lambda }}(v_1,v_2):=\frac{v_1}{v_2} \qquad (\ge 1). \end{aligned}$$

For every \(j=1,\dots ,\iota _r+1\), recalling that \(\frac{\varphi (r)}{2}\le u_j\le U(x)\), we have

$$\begin{aligned} \frac{\varphi (r)}{2{\hat{\Lambda }}(U(x),\varphi (r))}\le \frac{u_j}{\hat{\Lambda }(U(x),\varphi (r))}\le \frac{U(x)}{\hat{\Lambda }(U(x),\varphi (r))}=\varphi (r). \end{aligned}$$
(60)

Now, let us fix \(j\in \{1,\dots ,\iota _r+1\}\) and let us estimate the quantity \(\frac{1}{t_j^{\ell _j-1}}\). The definition (44) implies that either

$$\begin{aligned} \left( L_{_U}\omega + \frac{ L_l M}{2} \right) {\check{\delta }}_\ell (r) \ge \frac{\gamma (\varphi (r))}{4} \end{aligned}$$

or

$$\begin{aligned} \frac{{\bar{C}}(2L_{_U})^\nu (M+\omega )^2}{[(2L_{_U})\wedge \varphi (r)]^\nu } ({\check{\delta }}_\ell (r))^\ell \ge \frac{\gamma (\varphi (r))}{4}. \end{aligned}$$

Thus, (17) and (60) yield that

$$\begin{aligned} \begin{aligned} \frac{1}{t_j^{\ell _j-1}}&\le \frac{1}{\Delta (k)^{\ell _j-1}} \Bigg ( \frac{1}{\delta _0^{\ell _j-1}} \,\, \vee \,\, \frac{1}{({\check{\delta }}_{\ell _j})^{\ell _j-1}} \,\, \vee \,\, \frac{1}{({\hat{\delta }}_{\ell _j})^{\ell _j-1}} \Bigg ) \\&\le \frac{1}{\Delta (k)^{k-1}} \Bigg [ \frac{1}{\delta _0^{k-1}} \,\, \vee \,\, \frac{4{\bar{C}} (2L_{_U})^\nu (M+\omega )^2}{\gamma ^{1-\frac{1}{k}}\big (\frac{u_j}{{\hat{\Lambda }}}\big ) \big [(2L_U)\wedge \frac{u_j}{{\hat{\Lambda }}}\big ]^{\nu -\frac{\nu }{k}}} \,\, \vee \,\, \frac{4L_{_U}\omega + 2 L_l M}{\gamma ^{k-1}\big (\frac{u_j}{{\hat{\Lambda }}}\big )} \\&\qquad \quad \quad \vee \,\, \left. \frac{(2L_{_U}(M+\omega ))^{1-\frac{1}{k}} \, \vee \, (L_{_U}M)^{k-1}}{\Big (\frac{u_j}{{\hat{\Lambda }}}\Big )^{1-\frac{1}{k}}} \right] \le {\tilde{\Lambda }}({\tilde{R}}) \frac{p_0\big (\frac{u_j}{\hat{\Lambda }}\big )}{\beta (k)^k} \Theta \Big (\frac{u_j}{{\hat{\Lambda }}}\Big ), \end{aligned} \end{aligned}$$
(61)

where (\(\Theta \) is as in (24) and) we have written \(\hat{\Lambda }\) in place of \({\hat{\Lambda }}(U(x),\varphi (r))\) and we have set

$$\begin{aligned} \begin{array}{l} {\tilde{\Lambda }}({\tilde{R}}):= \frac{\beta (k)^k}{\Delta (k)^{k-1}}\Big (\frac{1}{\delta _0^{k-1}}\,\,\vee \,\, 4{\bar{C}} [(2L_{_U})^{\frac{\nu }{k}}\vee (2L_{_U})^\nu ] (M+\omega )^2 \\ \qquad \qquad \qquad \qquad \vee \,\, (4L_{_U}\omega + 2 L_l M) \,\,\vee \,\, \left( 2L_{_U}(M+\omega )\right) ^{1-\frac{1}{k}} \,\,\vee \,\, (L_{_U}M)^{k-1} \Big ). \end{array} \end{aligned}$$
(62)

Note that \({\tilde{\Lambda }}({\tilde{R}})\) depends on \({\tilde{R}}\), defined as in (33), since all the constants \(\delta _0\), \({\bar{C}}\), \(L_U\), M, \(\omega \), and \(L_l\) actually depend on \(\tilde{R}\). Hence, from (59), (60), and the monotonicity of \(p_0\), we get

$$\begin{aligned} \begin{array}{l} \displaystyle \int _0^{{\textbf{t}}} l( y_x^\pi (s), \alpha _x^\pi (s)) ds \le \sum _{j=1}^{\iota _r } \frac{{\mathfrak {s}}_j^{\ell _j}}{t_j^{\ell _j-1}} \frac{u_j-u_{j+1}}{p_0(u_j)}\le {\tilde{\Lambda }}({\tilde{R}}) \sum _{j=1}^{\iota _r }\Theta \Big (\frac{u_j}{\hat{\Lambda }}\Big )\,(u_j-u_{j+1}) \\ \quad \qquad \qquad \qquad \qquad \le \displaystyle {\tilde{\Lambda }}({\tilde{R}}) \int _{\frac{ \varphi (r)}{2 }}^{U(x)}\Theta \Big (\frac{w}{\hat{\Lambda }}\Big )\,dw= \Lambda (R)\Psi (U(x),\varphi (r)), \end{array} \end{aligned}$$
(63)

as soon as we set \(\Lambda (R):={\tilde{\Lambda }}({\tilde{R}})\) (\({\tilde{R}}\) is in turn a function of R by (33)) and define \(\Psi : \mathbb {R}_{>0}^2 \rightarrow \mathbb {R}_{\ge 0}\) as

$$\begin{aligned} \displaystyle \Psi (v_1,v_2):= 0 \,\, \vee \,\, \int _{\frac{v_2}{2}}^{v_1}\Theta \Big (\frac{v_2}{v_1}\cdot {w}\Big )\,dw, \qquad \text{ for any } (v_1,v_2)\in \mathbb {R}_{>0}^2. \end{aligned}$$
(64)

To complete the proof of statement (ii) in case \(k>1\), let us show that \(\Psi \) is an integral-cost-bound function. Given an arbitrary \({\bar{v}}>0\), let us consider the bilateral sequence \((v_i)_{i\in \mathbb {Z}}\), given by

$$\begin{aligned} v_1={\bar{v}}, \qquad v_{i+1}= \frac{v_{i}}{2} \quad \forall i\in \mathbb {Z}, \end{aligned}$$

so that \({\hat{\Lambda }}(v_i,v_{i+1})=2\) for all i. Thanks to hypothesis (H3), we have

$$\begin{aligned} \begin{array}{l} \displaystyle \sum _{i=1}^{+\infty } \Psi (v_i,v_{i+1}) = \sum _{i=1}^{+\infty } \int _{v_{i+2}}^{v_i}\Theta \Big (\frac{w}{2}\Big )\,dw = \sum _{i=1}^{+\infty } \int _{\frac{{\bar{v}}}{2^{i+1}}}^{\frac{\bar{v}}{2^{i-1}}}\Theta \Big (\frac{w}{2}\Big )\,dw \\ \displaystyle \ \qquad \qquad \le 2\int _0^{{\bar{v}}} \Theta \Big (\frac{w}{2}\Big )\,dw =4\int _0^{\frac{\bar{v}}{2}}\Theta (w)\,dw<+\infty . \end{array} \end{aligned}$$
(65)

Clearly, \(\Psi \) is continuous, and the required monotonicity and unboundedness properties are immediate consequences of the definition of \(\Theta \). Similarly, \(\Lambda \) is increasing, hence it can be approximated from above by a continuous increasing function, still denoted by \(\Lambda \) with a small abuse of notation. Finally, when, together with (H3), hypothesis (H2)\(^*\) is satisfied and U is also Lipschitz continuous and semiconcave on \(\mathbb {R}^n\setminus \mathcal {T}\), it is easy to see that \({\bar{C}}\), \(L_{_U}\), M, \(\omega \), and \(L_{_l}\) in (62) (as well as \(\delta _0\) in (41)) no longer depend on the compact set \(B(\mathcal {T},2{\tilde{R}})\), so that \(\Lambda \) turns out to be constant.
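The convergence argument in (65) can also be checked numerically for a concrete integrable profile of \(\Theta \); the choice \(\Theta (w)=w^{-1/2}\) below is an illustrative assumption compatible with the integrability required in (H3), not the \(\Theta \) of (24).

```python
# Numerical check of the telescoping estimate (65) for the halving sequence
# v_i = v_bar / 2^(i-1), with the illustrative choice Theta(w) = 1/sqrt(w)
# (any Theta integrable at 0, as required in (H3), would do).
import math

def F(w):
    """Antiderivative of Theta(w/2) = sqrt(2/w): F(w) = 2*sqrt(2*w)."""
    return 2.0 * math.sqrt(2.0 * w)

def psi_sum(v_bar, n_terms):
    """Partial sum of Psi(v_i, v_{i+1}) = int_{v_{i+2}}^{v_i} Theta(w/2) dw."""
    total = 0.0
    for i in range(1, n_terms + 1):
        hi = v_bar / 2.0 ** (i - 1)   # v_i
        lo = v_bar / 2.0 ** (i + 1)   # v_{i+2}
        total += F(hi) - F(lo)
    return total

v_bar = 1.0
s = psi_sum(v_bar, 60)
bound = 4.0 * 2.0 * math.sqrt(v_bar / 2.0)   # 4 * int_0^{v_bar/2} w^(-1/2) dw
# The partial sums increase and stay below the bound
# 4 * int_0^{v_bar/2} Theta(w) dw, as asserted in (65).
```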

4.5 Sketch of the Proof of (i) for \(k\ge 1\) and of (ii) for \(k=1\)

Let now U be a degree-k MRF for some \(p_0\) and \(\gamma \), and assume U locally semiconcave on \(\mathbb {R}^n\setminus \mathcal {T}\).

Consider a selection \(p(x)\in \partial U(x)\) and let \(\mathcal {V}\) be a degree-k U-feedback generator. Fix \(0<r<R\) and, using the above notations, define \(\hat{\textbf{U}}_R\) and \({\tilde{R}}\) as in (33), so that for any \(x\in U^{-1}(]0,{\hat{\textbf{U}}}_R])\) and \(t\in [0,{\bar{\delta }}]\), each \(\mathcal {V}\)-multiflow \(y_{x,t}\) is defined on [0, t] and satisfies (34). In particular, all constants \({\bar{\delta }}\), \(\omega \), M, and \(L_l\) as in (34), (36) are fixed, as above, on the compact set \(\mathcal {B}(\mathcal {T},2{\tilde{R}})\). The fundamental difference from the previous proof is that the Lipschitz continuity and semiconcavity constants of U can no longer be defined up to the target, but on compact sets \(\mathcal {M}\subset \mathbb {R}^n\setminus \mathcal {T}\) only.

For \(d_{U_-}\) and \(d_{U_+}\) as in (31), set

$$\begin{aligned} \varphi (r)=\check{{\textbf{u}}}_r:= \frac{1}{2}\, d_{U_+}^{-1}(r), \qquad \check{\textbf{r}}_r:=\frac{1}{2}\, d_{U_-}\Big (\frac{\check{\textbf{u}}_r}{2}\Big ). \end{aligned}$$
(66)

Accordingly, one has

$$\begin{aligned} \begin{aligned} \mathcal {T}&\subset \mathcal {B}(\mathcal {T}, \check{\textbf{r}}_r) \subset \mathcal {B}(\mathcal {T}, 2\check{\textbf{r}}_r) \subseteq U^{-1}\Big (\,\Big ]0,\frac{\check{\textbf{u}}_r}{2}\Big ]\Big )\subset U^{-1}(]0,\check{\textbf{u}}_r])\subset U^{-1}(]0,2\check{\textbf{u}}_r]) \\&\subseteq \mathcal {B}(\mathcal {T},r)\subset \mathcal {B}(\mathcal {T},R)\subseteq U^{-1}(]0,\hat{\textbf{U}}_R])\subseteq \mathcal {B}(\mathcal {T}, {\tilde{R}} )\subset \mathcal {B}(\mathcal {T}, 2{\tilde{R}}). \end{aligned} \end{aligned}$$

Let us define

$$\begin{aligned} \mathcal {M}:=\overline{\mathcal {B}(\mathcal {T},2{\tilde{R}}) \setminus \mathcal {B}(\mathcal {T}, \check{\textbf{r}}_r)}, \end{aligned}$$

and let \(\eta _{_\mathcal {M}}\), \(L_{_\mathcal {M}}>0\) be the semiconcavity and the Lipschitz constant of U on \(\mathcal {M}\), respectively. Then, any \(\mathcal {V}\)-multiflow \(y_{x,t}\) starting from \(x\in U^{-1}([\check{{\textbf{u}}}_r, {\hat{\textbf{U}}}_R])\) up to \(t\le {\bar{\delta }}\wedge \frac{\check{\textbf{r}}_r}{M} \wedge \frac{\check{\textbf{u}}_r}{2L_{_\mathcal {M}} M}\) is such that

$$\begin{aligned} U(y_{x,t}(t))\ge \frac{\check{\textbf{u}}_r}{2}, \qquad y_{x,t}(s)\in \mathcal {M}\quad \forall s\in [0,t], \qquad \textrm{sgm}(x, y_{x,t}(t)) \subset \mathcal {M}. \end{aligned}$$
(67)

Indeed, \(|U(y_{x,t}(s))-U(x)|\le L_{_\mathcal {M}} |y_{x,t}(s)-x|\le L_{_\mathcal {M}} M s \le \frac{\check{{\textbf{u}}}_r}{2}\) for any \(s\in [0,t]\), so that \(U(y_{x,t}(s))\ge \frac{\check{\textbf{u}}_r}{2}\) for any \(s\in [0,t]\). In view of (32), the first two relations in (67) follow. The last property in (67) can be deduced from the facts that \(\textbf{d}(x) \ge 2\check{\textbf{r}}_r\) and \(|y_{x,t}(t)-x|\le M t\le \check{\textbf{r}}_r\). Set

$$\begin{aligned} \delta (R,r):= 1 \,\,\, \wedge \,\,\, {\bar{\delta }}\,\,\, \wedge \,\,\, \frac{\check{{\textbf{u}}}_r}{2L_{_\mathcal {M}} M} \,\,\, \wedge \,\,\, \frac{\check{\textbf{r}}_r}{M} \,\,\, \wedge \,\,\, \frac{\gamma (\check{{\textbf{u}}}_r)}{2(L_{_\mathcal {M}}\omega + L_{_l} M + \eta _{_\mathcal {M}} (M+\omega )^2 )},\nonumber \\ \end{aligned}$$
(68)

Note that now \(\delta (R,r)\) does not depend on \(\ell \). Suitably modifying the constants in the proof of Lemma 2, it is possible to prove that any \(\mathcal {V}\)-multiflow \(y_{x,t}\) starting from \(x\in U^{-1}([\check{\textbf{u}}_r, \hat{\textbf{U}}_R])\) up to time \(t\le \delta (R,r)\) satisfies inequality (39). Indeed, thanks to (67) we can apply (7) in place of (23) and derive that

$$\begin{aligned} \begin{aligned}&U(y_{x,t}(t)) - U(x) + p_0(U(x)) \int _0^t l(y_{x,t}(s), \alpha _{x,t}(s))\,ds \, \left( \frac{t^{\ell -1}}{{\mathfrak {s}}^\ell }\right) \\&\ \le \Big ( \frac{t}{{\mathfrak {s}}} \Big )^\ell \Big [ -\gamma (U(x)) + t \Big ( L_{_\mathcal {M}} \omega + L_{_l} M + \eta _{_\mathcal {M}}(M+\omega )^2\Big ) \Big ] \le -\frac{\gamma (U(x))}{2} \left( \frac{t}{{\mathfrak {s}}}\right) ^\ell . \end{aligned} \end{aligned}$$

From now on, the proof of (i) is a simple adaptation of the previous one, so we omit it.

As for the cost estimate in the case \(k=1\), arguing as above we simply get

$$\begin{aligned} \displaystyle \int _0^{{\textbf{t}}} l( y_x^\pi (s), \alpha _x^\pi (s)) ds \le \sum _{j=1}^{\iota _r } \frac{u_j-u_{j+1}}{p_0(u_j)} \le \int _{\frac{\varphi (r)}{2}}^{U(x)}\Theta (w)\,dw \end{aligned}$$
(69)

where now \(\Theta (w)=\frac{1}{p_0(w)}\) for all \(w\ge 0\).

5 A General Dynamics in the Case \(k>1\)

In Remark 8 we observed that, for the case \(k=1\), the extension of Theorem 1 to a general system

$$\begin{aligned} \dot{y}=F(y,a), \qquad a\in A, \end{aligned}$$
(70)

is straightforward. The case \(k>1\) is much more involved. However, a first easy extension can be achieved by still considering a driftless control-affine system but with a control set of the following form

$$\begin{aligned} {\tilde{A}}:=\Big \{\beta _1e_1,\dots , \beta _m e_m, -\gamma _1e_1,\dots ,-\gamma _m e_m \ \ \ \ \text {s.t.} \ \ \ \ \beta _i,\gamma _i>0,\,\,\,\forall i=1,\ldots ,m \Big \}. \end{aligned}$$

For instance, the definitions of oriented controls for the case \(k=1,2\) should be replaced by the following ones:

  1. (i)

    if \(\ell _B=1\), i.e. \(B=X_j\) for some integer \(j\ge 1\), we set

    $$\begin{aligned} {\alpha }_{(X_j,\textbf{g},+),t}(s):= \beta _i e^i \qquad \text {for any } s\in [0,t], \end{aligned}$$

    and

    $$\begin{aligned} {\alpha }_{(X_j,\textbf{g},-),t}(s):= -\gamma _i e^i \qquad \text {for any } s\in [0,t], \end{aligned}$$

    where \(i\in \{1,\dots ,m\}\) is such that \(B({\textbf{g}}) = f_i\) (i.e. \(g_j=f_i\));

  (ii)

    if \(\ell _B=2\), \(B=[X_j,X_{j+1}]\) and \(B({\textbf{g}}) = [f_{i_1},f_{i_2}]\) (namely, \(g_{j}=f_{i_1}\) and \(g_{j+1}=f_{i_2}\)), we set

    $$\begin{aligned} \begin{aligned}&{\alpha }_{(B,\textbf{g},+),t}(s):= \\&\quad {\left\{ \begin{array}{ll} {\alpha }_{(X_j,\textbf{g},+),\frac{\tau _1}{\tau }t}(s) \Big (= \beta _{i_1} e^{i_1} \Big ) &{}s\in [0, \frac{\tau _1}{\tau }t [ \\ {\alpha }_{(X_{j+1},\textbf{g},+),\frac{\tau _2}{\tau }t} \left( s - \frac{\tau _1}{\tau }t\right) \Big (=\beta _{i_2} e^{i_2}\Big ) &{}s\in [\frac{\tau _1}{\tau }t, \frac{\tau _1+\tau _2}{\tau }t [ \\ {\alpha }_{(X_j,\textbf{g},-),\frac{\tau _3}{\tau }t} \left( s - \frac{\tau _1+\tau _2}{\tau }t\right) \Big (= -\gamma _{i_1} e^{i_1}\Big ) &{}s\in [ \frac{\tau _1+\tau _2}{\tau }t, \frac{\tau _1+\tau _2+\tau _3}{\tau }t [ \\ {\alpha }_{ (X_{j+1},\textbf{g},-),\frac{\tau _4}{\tau }t } \left( s - \frac{\tau _1+\tau _2+\tau _3}{\tau }t \right) \Big ( =-\gamma _{i_2} e^{i_2}\Big ) &{}s\in \left[ \frac{\tau _1+\tau _2+\tau _3}{\tau }t, t \right] \end{array}\right. } \end{aligned} \end{aligned}$$

    and

    $$\begin{aligned} {\alpha }_{(B,\textbf{g},-),t}(s):= - {\alpha }_{(B,\textbf{g},+),t}(t-s) \qquad \text {for any } s\in [0,t], \end{aligned}$$

    where

    $$\begin{aligned} \tau _1=1/\beta _{i_1}, \ \ \tau _2=1/\beta _{i_2}, \ \ \tau _3=1/\gamma _{i_1},\ \ \tau _4 =1/\gamma _{i_2}, \ \ \tau =\tau _1+\tau _2+\tau _3+\tau _4. \end{aligned}$$

It is easy to verify that the asymptotic formula (12) should be replaced by the \((i_1,i_2)\)-dependent inequality

$$\begin{aligned} \left| y(t) - x -{\text {sgn}}\, B( {\textbf{g}})(x) \left( \frac{t}{\tau } \right) ^2 \right| \le \omega \, t \left( \frac{t}{\tau }\right) ^2. \end{aligned}$$
(71)
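The asymptotic behaviour in (71) can be checked numerically on a simple example. The sketch below, with an illustrative choice of fields and weights \(\beta _i,\gamma _i\) of our own (not taken from the paper), integrates the four-phase oriented control for the Heisenberg-type system \(f_1=(1,0,0)\), \(f_2=(0,1,x_1)\) on \(\mathbb {R}^3\), for which \([f_1,f_2]=(0,0,1)\):

```python
import numpy as np

# Driftless control-affine system on R^3 (illustrative choice, not from the paper):
#   f1(x) = (1, 0, 0),  f2(x) = (0, 1, x1),  so [f1, f2](x) = (0, 0, 1).
def f1(x): return np.array([1.0, 0.0, 0.0])
def f2(x): return np.array([0.0, 1.0, x[0]])

def simulate_bracket_control(x0, t, beta=(2.0, 0.5), gamma=(1.0, 4.0), n=4000):
    """Integrate the four-phase oriented control alpha_{(B,g,+),t}:
    +beta1*f1, +beta2*f2, -gamma1*f1, -gamma2*f2, with phase lengths
    (tau_i/tau)*t, tau_1=1/beta1, tau_2=1/beta2, tau_3=1/gamma1, tau_4=1/gamma2."""
    b1, b2 = beta
    g1, g2 = gamma
    taus = [1 / b1, 1 / b2, 1 / g1, 1 / g2]
    tau = sum(taus)
    phases = [(b1, f1), (b2, f2), (-g1, f1), (-g2, f2)]
    x = np.array(x0, dtype=float)
    for (u, f), tk in zip(phases, taus):
        h = (t * tk / tau) / n
        for _ in range(n):           # RK4 steps on dx/ds = u * f(x)
            k1 = u * f(x)
            k2 = u * f(x + 0.5 * h * k1)
            k3 = u * f(x + 0.5 * h * k2)
            k4 = u * f(x + h * k3)
            x += (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
    return x, tau

t = 0.8
xT, tau = simulate_bracket_control(np.zeros(3), t)
s = t / tau
print(xT, s**2)   # net displacement ~ (t/tau)^2 * [f1,f2](0) = (0, 0, s^2)
```

Starting from the origin, the final state is \((0,0,(t/\tau )^2)\) up to integration error: the net displacement lies along the bracket direction and scales as \((t/\tau )^2\), consistently with (71) (for this particular system the second-order term is in fact exact).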

For \(k>2\) one would proceed in an analogous way, so that, by means of suitable notions of the degree-k Hamiltonians, an extension of Theorem 1 would follow without further difficulty. Extending the case \(k>1\) to a general dynamics F and a general control set A in the control system (70) instead requires strong selection properties for the set-valued map \(x\rightsquigarrow F(x,A)\). As a natural example, one could assume the existence of a selection

$$\begin{aligned} {\check{F}} (x, C) \subseteq F(x,A) \end{aligned}$$
(72)

where \({\check{F}}\) is a driftless control-affine dynamics and C has the same structure as \({\tilde{A}}\), namely, for some integer \(r>1\),

$$\begin{aligned} {\check{F}}(y,c):= \sum _{i=1}^r {\check{f}}_i(y)\, c^i, \qquad (c^1,\dots ,c^r) \in C, \end{aligned}$$

(\({\check{f}}_1,\ldots ,{\check{f}}_r\) are vector fields) and

$$\begin{aligned} C:=\Big \{\beta _1e_1,\dots ,\beta _re_r, -\gamma _1e_1,\dots ,-\gamma _r e_r \ \ \text {s.t.} \ \ \beta _i,\gamma _i>0,\,\,\,\forall i=1,\ldots ,r \Big \}\subset \mathbb {R}^r. \end{aligned}$$

Thanks to selection (72), a degree-k feedback generator for the control system \(\dot{y} = \sum _{i=1}^r \check{f}_i(y )\, c^i\),   \(c\in C\), defines a feedback law also for the original system \(\dot{y}=F(y,a)\), \(a\in A\). Hence, by defining Hamiltonians \({\check{H}}[p_0]^{(h)}\), \(h\le k\), corresponding to the dynamics \({\check{F}}\) in a way similar to (13), we obtain a result like Theorem 1, valid for the system \(\dot{y}= {\check{F}}(y,c)\),   \(c\in C\), which in turn provides degree-k sample stabilizability with regulated cost for the original system (70).

A non-trivial further generalization might concern control-affine systems with a non-zero drift. Obviously, the corresponding degree-k Hamiltonian should also be based on minimizations over sets that include suitable brackets containing the drift among their factors (see [28, Theorem 5.1]).

Finally, another interesting generalization might consist, still for a driftless control-affine system \(\dot{y} = \sum _{i=1}^m f_i(y )\, a^i\), in considering less regular vector fields \(f_1,\ldots ,f_m\): if B is a formal iterated bracket, in view of [14, Theorem 3.7] (see also [13]), an asymptotic formula similar to (12) still holds provided \( B(f_1,\ldots ,f_m)\) is an \(L^\infty \) map (defined almost everywhere). In such a case one can make use of a set-valued iterated Lie bracket \( B_{set}(f_1,\ldots ,f_m)\), which turns out to be upper semicontinuous. For instance, if \(f_5,f_7\) are locally Lipschitz continuous, then the bracket \([f_5,f_7]\) is an \(L^\infty \) map defined almost everywhere, and the corresponding set-valued bracket is defined, for every x, as

$$\begin{aligned}{}[f_5,f_7]_{set}(x):= \textrm{co}\,\Big \{\lim _{n\rightarrow \infty }[f_5,f_7](x_n) \ : \ x_n\rightarrow x\Big \}, \end{aligned}$$

where ‘co’ denotes the convex hull and the limits are taken along all sequences \((x_n)\) converging to x that consist of differentiability points of both \(f_5\) and \(f_7\). This bracket has proved suitable for extending various basic results on families of vector fields (see [32, 33]) and might also be useful for an extension of the results of the present paper (see [28, Theorem 4.1], and also [6] for the STLC issue).
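For illustration, the sketch below (an example of ours, not from the paper) approximates a set-valued bracket at a nondifferentiability point by sampling the classical bracket at nearby differentiability points. With \(f_5(x)=(0,|x_1|)\) and \(f_7(x)=(1,0)\) on \(\mathbb {R}^2\), one has \([f_5,f_7](x)=(0,-{\text {sgn}}\,x_1)\) for \(x_1\ne 0\), so that \([f_5,f_7]_{set}(0)=\{0\}\times [-1,1]\):

```python
import numpy as np

# Illustrative locally Lipschitz fields on R^2 (our own example):
#   f5(x) = (0, |x1|),  f7(x) = (1, 0).
# Classically [f5,f7](x) = (0, -sgn(x1)) for x1 != 0; at x1 = 0 the
# set-valued bracket is co{(0,-1),(0,1)} = {0} x [-1,1].
def f5(x): return np.array([0.0, abs(x[0])])
def f7(x): return np.array([1.0, 0.0])

def jac(f, x, h=1e-6):
    """Central finite-difference Jacobian (valid at differentiability points)."""
    n = len(x)
    J = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n); e[j] = h
        J[:, j] = (f(x + e) - f(x - e)) / (2 * h)
    return J

def bracket(x):
    # [f5,f7](x) = Df7(x) f5(x) - Df5(x) f7(x)
    return jac(f7, x) @ f5(x) - jac(f5, x) @ f7(x)

def set_valued_bracket_samples(x, radius=1e-3, n=200, seed=0):
    """Approximate [f5,f7]_set(x): evaluate the classical bracket at random
    nearby differentiability points (here: points with x1 != 0)."""
    rng = np.random.default_rng(seed)
    pts = x + radius * rng.uniform(-1, 1, size=(n, len(x)))
    pts = pts[np.abs(pts[:, 0]) > 1e-5]   # stay away from the kink x1 = 0
    return np.array([bracket(p) for p in pts])

vals = set_valued_bracket_samples(np.zeros(2))
print(vals[:, 1].min(), vals[:, 1].max())   # second components cluster at -1 and +1
```

The sampled second components cluster at \(\pm 1\); their convex hull recovers the segment \([-1,1]\) predicted by the definition.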