1 Introduction

In this paper we investigate in an optimal control perspective the stabilizability to a set \({\mathbf {C}}\subset {{\mathbb {R}}}^n\) of the nonlinear control system

$$\begin{aligned} \dot{x}=f(x,u), \ \ u\in U, \end{aligned}$$
(1.1)

to which we associate a cost of the form

$$\begin{aligned} \int _ 0^{{T}_{x} } l(x(\tau ),u(\tau ))\, d\tau , \end{aligned}$$
(1.2)

where \(l\ge 0\), and \(T_x\le +\infty \)—the exit-time of x—verifies

$$\begin{aligned} x(t) \in {{\mathbb {R}}}^n\setminus \mathbf{C}\ \ \text {for all }t\in [0,T_x), \quad \lim _{t\rightarrow {T}_{x}^-} {\mathbf {d}}(x(t))=0 \end{aligned}$$
(1.3)

(for any \(y\in {{\mathbb {R}}}^n\), \({\mathbf {d}}(y)\) denotes the distance of y from \({\mathbf {C}}\)). In particular, for any \(r>0\), we set \(B_r(\mathbf{C}):=\{y\in {{\mathbb {R}}}^n: \ {\mathbf {d}}(y)\le r\}\) and assume that:

(H0) the target set \({\mathbf {C}}\subset {{\mathbb {R}}}^n\) is closed, with compact boundary; the control set \(U\subseteq {{\mathbb {R}}}^m\) is closed, not necessarily bounded; the functions \(f:({{\mathbb {R}}}^n\setminus \mathbf{C})\times U\rightarrow {{\mathbb {R}}}^n\), \(l:({{\mathbb {R}}}^n\setminus \mathbf{C})\times U\rightarrow [0,+\infty )\) are uniformly continuous on \({{\mathcal {K}}}\times U\) for any compact set \({{\mathcal {K}}}\subset {{\mathbb {R}}}^n\setminus \mathbf{C}\); for any \(R>0\) there is some \(M(R)>0\) such that

$$\begin{aligned} |f(x,u)|\le M(R), \quad l(x,u)\le M(R) \qquad \forall (x,u)\in (B_R(\mathbf{C})\setminus \mathbf{C})\times U. \end{aligned}$$
(1.4)

Hence, for any admissible control, given by a function \(u\in L^\infty _{loc}([0,T_x), U)\),Footnote 1 every Cauchy problem associated to (1.1) has in general multiple solutions and the cost may be finite even if \(T_x=+\infty \).

In order to obtain sufficient conditions for the stabilizability of the system with regulated cost, we consider the Hamiltonian

$$\begin{aligned} \displaystyle H(x,{p_{0}},p):=\inf _{u\in U}\big \{\langle p\,,\,f(x,u)\rangle +{p_{0}}\,l(x,u)\big \}, \end{aligned}$$
(1.5)

and the following notion, first introduced in [23] (in a slightly weaker form).

Definition 1.1

(\({p_{0}}\)-Minimum Restraint Function) Let \(W:\overline{{{\mathbb {R}}}^n\setminus {{\mathbf {C}}}}\rightarrow [0,+\infty )\) be a continuous function, which is locally semiconcave, positive definite, and proper on \({{\mathbb {R}}}^n\setminus {\mathbf {C}}\). We say that W is a \({p_{0}}\)-Minimum Restraint Function—in short, \({p_{0}}\)-MRF—for some \({p_{0}}\ge 0\), if it verifies the decrease condition:

$$\begin{aligned} H (x,{p_{0}}, D^*W(x) )\le -\gamma (W(x)) \quad \forall x\in {{{{\mathbb {R}}}^n}\setminus {\mathbf {C}}}, \end{aligned}$$
(1.6)

Footnote 2 for some continuous, strictly increasing function \(\gamma :(0,+\infty )\rightarrow (0,+\infty )\).
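
To fix ideas, the following sketch (not part of the original development) approximates the Hamiltonian (1.5) by brute-force minimization over a finite sample of a bounded portion of U, and tests the decrease condition (1.6) at a single point; the data f, l, W, the gradient selection, the rate \(\gamma \), and the control sample are placeholders chosen for illustration only.

```python
import numpy as np

def hamiltonian(x, p0, p, f, l, U_sample):
    """Approximate H(x, p0, p) = inf_{u in U} { <p, f(x,u)> + p0*l(x,u) }
    by enumerating a finite sample of (a bounded portion of) the control set U."""
    return min(np.dot(p, f(x, u)) + p0 * l(x, u) for u in U_sample)

# placeholder data: C = {0}, U = closed unit ball of R^2, f(x,u) = u, l = 1,
# W(x) = |x| (so D*W(x) = {x/|x|} away from 0), gamma(r) = r/(2(1+r)), p0 = 1/2
f = lambda x, u: u
l = lambda x, u: 1.0
W = lambda x: np.linalg.norm(x)
DW = lambda x: x / np.linalg.norm(x)       # limiting gradient selection
gamma = lambda r: r / (2.0 * (1.0 + r))
p0 = 0.5

angles = np.linspace(0.0, 2 * np.pi, 64, endpoint=False)
U_sample = [np.array([np.cos(a), np.sin(a)]) for a in angles]   # the boundary suffices here

x = np.array([1.5, -0.7])
H_val = hamiltonian(x, p0, DW(x), f, l, U_sample)
print(H_val, -gamma(W(x)), H_val <= -gamma(W(x)))   # decrease condition (1.6) at x
```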

A \({p_{0}}\)-Minimum Restraint Function is at once a Control Lyapunov Function for (1.1) and, by [23, Prop. 5.1], also a strict viscosity supersolution to the Hamilton-Jacobi-Bellman equation

$$\begin{aligned} \sup _{u\in U}\left\{ -\langle DW(x)\,,\,f(x,u)\rangle -{p_{0}}\,l(x,u)\right\} -\gamma (W(x)) =0, \quad x\in {{\mathbb {R}}}^n\setminus \mathbf{C}. \end{aligned}$$

Hence, when a \({p_{0}}\)-Minimum Restraint Function exists, on the one hand, the (open-loop) global asymptotic controllability of the control system (1.1) to \({\mathbf {C}}\)—namely, that for any initial condition \(z\in {{\mathbb {R}}}^n\setminus \mathbf{C}\) there is an admissible trajectory-control pair \((x,u)\) to (1.1) with \(x(0)=z\), such that \(\lim _{t\rightarrow {T}_{x}^-} {\mathbf {d}}(x(t))=0\) in a certain uniform and stable manner that we will not dwell upon here—is expected (see e.g. [3, 30, 32]). On the other hand, if \({p_{0}}>0\), the existence of an admissible trajectory-control pair \((x,u)\) with \(x(0)=z\) satisfying the cost estimate

$$\begin{aligned} \int _0^{T_x}l(x(t),u(t))\,dt\le \frac{W(z)}{{p_{0}}} \end{aligned}$$
(1.7)

follows by known optimality principles (see e.g. [22, 25, 33]). The main contribution of [19, 23] was to prove that the existence of a \({p_{0}}\)-Minimum Restraint Function W for some \({p_{0}}>0\) makes it possible to produce a pair \((x,u)\) that meets both of these properties.
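
The placeholder data used in the sketch above provide a simple analytic illustration (not taken from the works cited): let \(\mathbf{C}=\{0\}\subset {{\mathbb {R}}}^n\), \(U:=\{u\in {{\mathbb {R}}}^n: \ |u|\le 1\}\), \(f(x,u):=u\), \(l\equiv 1\), and \(W(x):=|x|\), which is smooth, positive definite, and proper on \({{\mathbb {R}}}^n\setminus \{0\}\). Choosing \({p_{0}}=1/2\) and \(\gamma (r):=\frac{r}{2(1+r)}\), one has

$$\begin{aligned} H(x,{p_{0}},D^*W(x))=\inf _{|u|\le 1}\Big \{\Big \langle \frac{x}{|x|}\,,\,u\Big \rangle +\frac{1}{2}\Big \}=-\frac{1}{2}\le -\gamma (|x|) \quad \forall x\ne 0, \end{aligned}$$

so W is a \({p_{0}}\)-MRF. The continuous feedback \(K(x):=-x/|x|\) steers any initial point z to the origin in time \(T_x=|z|\), with cost \(\int _0^{T_x}1\,dt=|z|\le 2|z|=W(z)/{p_{0}}\), consistently with (1.7).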

A related and important goal is, given a \({p_{0}}\)-Minimum Restraint Function W for some \({p_{0}}>0\), to provide a state feedback \(K:{{\mathbb {R}}}^n\setminus \mathbf{C}\rightarrow U\) such that the system \(\dot{x}(t) =f(x(t),K(x(t)))\) is globally asymptotically stable to \(\mathbf{C}\) and has \(({p_{0}},W)\)-regulated cost, that is, such that for any stable trajectory x with \(x(0)=z\), the cost \(\int _0^{T_x}l(x(t),K(x(t)))\,dt\) is not greater than \(W(z)/{p_{0}}\). In this paper we address the question, left as an open problem in [19, 23], of how to define such a feedback law through the use of W. In the ideal case in which W is differentiable and there exists a continuous feedback K(x) such that \( \langle DW(x)\,,\,f(x,K(x))\rangle +{p_{0}}\,l(x,K(x)) \le -\gamma (W(x))\) for all \(x\in {{\mathbb {R}}}^n\setminus \mathbf{C}, \) one easily derives global asymptotic stabilizability with \(({p_{0}},W)\)-regulated cost. However, it is a classical fact in nonlinear control theory that a differentiable Control Lyapunov Function W may not exist and that, even if a smooth W exists, a continuous feedback K does not generally exist (see e.g. [1, 7, 15, 30,31,32]). Hence the need to consider the nonsmooth version (1.6) of the decrease condition and to define a discontinuous feedback \(K:{{\mathbb {R}}}^n\setminus \mathbf{C}\rightarrow U\), which, because of the unboundedness of U, we can only assume locally bounded (see Prop. 3.2 below). In particular, we might have \(\limsup _{x\rightarrow {\bar{x}}\in \partial \mathbf{C}}|K(x)|=+\infty \).

Once a feedback K as above has been identified, the main issue is how to interpret the discontinuous differential equation and the associated exit-time cost, so that the control system (1.1) can be stabilized to the target \(\mathbf{C}\) with \(({p_{0}},W)\)-regulated cost by K. For the trajectories of \(\dot{x}(t) =f(x(t),K(x(t)))\), we simply adapt to our setting the nowadays classical notions of sampling and Euler solutions in [13, 14], inspired by differential games theory [17]. However, our primary objective is to introduce a suitable concept of cost associated to a stable sampling or Euler solution starting from z, so that this cost is bounded above by \(W(z)/{p_{0}}\). Postponing the precise definitions to Sect. 2, given some constants r, R such that \(0<r<R\), we call a sampling pair \((x,u)\) \((r,R)\)-stable when, starting from some z with \(r<{\mathbf {d}}(z)\le R\), x reaches in a uniform manner the r-neighborhood \(B_r(\mathbf{C})\) of the target and, after a time \({\bar{T}}_x^r\), remains definitively in \(B_r(\mathbf{C})\). In this case, for all \(t\ge 0\) we define the corresponding sampling cost as

$$\begin{aligned} x^0(t):=\int _0^{\min \{t,{\bar{T}}_x^r\}}l(x(s),u(s))\,ds \end{aligned}$$

and show that \(x^0(t)\le W(z)/{p_{0}}\). The difficulty in proving the latter inequality lies in the fact that \({\bar{T}}_x^r\) may not be the first instant in which x enters \(B_r(\mathbf{C})\). Consequently, we must estimate the cost in a time interval where we basically have no information on x, except that it is in \(B_r(\mathbf{C})\) (see Sect. 3.1 below). Nevertheless, this is the correct notion of sampling cost. Indeed, let us now define an Euler cost-solution pair \(({\mathscr {X}}^0,{\mathscr {X}})\) as the locally uniform limit on \([0,+\infty )\) of a sequence of sampling cost-trajectory pairs as above, when the sampling times tend to zero. We obtain that \({\mathscr {X}}\) approaches \(\mathbf{C}\) uniformly asymptotically, while \(\lim _{t\rightarrow T_{\mathscr {X}}^-}{\mathscr {X}}^0(t)\le W(z)/{p_{0}}\), where \(T_{\mathscr {X}}\le +\infty \) is the exit-time of \({\mathscr {X}}\), as in (1.3) (see Sect. 3.2).

Furthermore, inspired by [28], we prove that, when f and l are locally Lipschitz continuous in x uniformly w.r.t. \(u\in U\) up to the boundary of \(\mathbf{C}\), the existence of a \({p_{0}}\)-Minimum Restraint Function W with \({p_{0}}>0\), possibly not locally semiconcave but merely locally Lipschitz continuous on \(\overline{{{\mathbb {R}}}^n\setminus \mathbf{C}}\), still guarantees stabilizability of (1.1) to \(\mathbf{C}\) with \((p_0,W)\)-regulated cost, in the sample and Euler sense (see Theorem 4.3 and Corollary 4.4). This result can be useful for the effective implementation of our feedback construction, as shown by the example in Sect. 5.

We are motivated to consider U unbounded and the minimization in the decrease condition on the whole set U (and not on bounded subsets, as usual) mainly because these assumptions are met, for instance, in the stabilization of mechanical systems with vibrating controls. These are nonlinear control systems, affine or quadratic in the derivative of the control, which is considered as an impulsive control (see [6]). In particular, the reparameterized systems usually introduced in the study of control-polynomial systems satisfy (H0). As a consequence, as shown in [18], the results of the present paper are the starting point for the stabilizability (with and without regulated cost) of impulsive control systems. In fact, our assumptions include also the case where U is bounded and f and l are continuous on \({{\mathbb {R}}}^n\times U\). For U unbounded, they are satisfied, for instance, by the class of control problems in which the input appears inside a saturation nonlinearity, such as \( f(x,u)=f_0(x)+\sum _{i=1}^m f_i(x)\sigma _i(u)\), \(l(x,u)=l_0(x)+l_1(x)|\sigma _0(u)|\), where \(l_0\), \(l_1\), \(f_0\), \(f_1,\dots , f_m\in C({{\mathbb {R}}}^n)\) and \(\sigma _0,\dots ,\sigma _m\) are bounded, uniformly continuous maps on U. The stabilizability of such control systems plays a relevant role both in the literature and in the applications (see e.g. [4, 10, 11, 34]).

Finally, the value function

$$\begin{aligned} V(z):=\inf _{(x,u), \ x(0)=z}\, \int _ 0^{{T}_{x} } l(x(\tau ),u(\tau ))\, d\tau , \end{aligned}$$

is clearly bounded above by any \({p_{0}}\)-Minimum Restraint Function divided by \({p_{0}}\). Hence our approach could be useful to design approximate optimal closed-loop strategies, when there exists a sequence of \({p_{0}}\)-Minimum Restraint Functions approaching V, as in [25], or at least to obtain “safe” performances, keeping the cost under the value W. Moreover, when \(V\le W/{p_{0}}\), then V is continuous on the target’s boundary and this is crucial to establish comparison, uniqueness, and robustness properties for the associated Hamilton–Jacobi–Bellman equation [22, 24, 25] and to study associated asymptotic and ergodic problems [26]. From this PDE point of view, problem (1.1)–(1.2) has been widely investigated; a likely incomplete bibliography, also containing applications (for instance, the Füller and shape-from-shading problems), includes [8, 16, 20, 33] and the references therein.

The paper is organized as follows. In the remaining part of the Introduction we provide some preliminary definitions and notation. In Sect. 2 we define precisely the sample and Euler stabilizability of (1.1) to \(\mathbf{C}\) with \(({p_{0}},W)\)-regulated cost, which is shown to be guaranteed by the existence of a \({p_{0}}\)-Minimum Restraint Function in Theorem 3.1, our main result. In Sect. 4 we consider the case of Lipschitz continuous data, postponing the proofs to the “Appendix”. An example on the stabilizability with regulated cost of the non-holonomic integrator control system concludes the paper (see Sect. 5).

1.1 Notation and preliminaries

For every \(r\ge 0\) and \(\Omega \subset {{\mathbb {R}}}^N\), we set \(B_r(\Omega ):=\{x\in {{\mathbb {R}}}^N\mid d(x,\Omega )\le r\}\), where d is the usual Euclidean distance. When \(\Omega =\{z\}\) for some \(z\in {{\mathbb {R}}}^N\), we also use the notation \(B(z,r):=B_r(\{z\})\). For any map \(F:\Omega \rightarrow {{\mathbb {R}}}^M\) we call modulus (of continuity) of F any increasing, continuous function \(\omega :[0,+\infty )\rightarrow [0,+\infty )\) such that \(\omega (0)=0\), \(\omega (r)>0\) for every \(r>0\) and \(|F(x_1)-F(x_2)|\le \omega (|x_1-x_2|)\) for all \(x_1\), \(x_2\in \Omega \). We use \(\overline{\Omega }\), \(\overset{\circ }{\Omega }\) to denote the closure and the interior of the set \(\Omega \), respectively.

Let us summarize some basic notions in nonsmooth analysis (see e.g. [9, 12, 35] for a thorough treatment).

Definition 1.2

(Positive definite and proper functions). Let \(\Omega \subset {{\mathbb {R}}}^N\) be an open set with compact boundary. A continuous function \(F:\overline{\Omega } \rightarrow {{\mathbb {R}}}\) is said to be positive definite on \(\Omega \) if \(F(x)>0\)  \(\forall x\in \Omega \) and \(F(x)=0\)  \(\forall x\in \partial \Omega \). The function F is called proper on \(\Omega \) if the pre-image \(F^{-1}(K)\) of any compact set \(K\subset [0,+\infty )\) is compact.

Definition 1.3

(Semiconcavity). Let \(\Omega \subseteq {{\mathbb {R}}}^N\). A continuous function \(F:\Omega \rightarrow {{\mathbb {R}}}\) is said to be semiconcave on \(\Omega \) if there exists \(\rho >0\) such that

$$\begin{aligned} F(x)+F({\hat{x}})-2F\left( \frac{x+{\hat{x}}}{2}\right) \le \rho |x-{\hat{x}}|^2, \end{aligned}$$

for all x, \({\hat{x}}\in \Omega \) such that \([x,{\hat{x}}]\subset \Omega \). The constant \(\rho \) above is called a semiconcavity constant for F in \(\Omega \). F is said to be locally semiconcave on \(\Omega \) if it is semiconcave on every compact subset of \(\Omega \).

We recall that locally semiconcave functions are locally Lipschitz continuous. Actually, they are twice differentiable almost everywhere.

Definition 1.4

(Limiting gradient). Let \(\Omega \subseteq {{\mathbb {R}}}^N\) be an open set, and let \(F:\Omega \rightarrow {{\mathbb {R}}}\) be a locally Lipschitz continuous function. For every \(x\in \Omega \) we call the set of limiting gradients of F at x the set:

$$\begin{aligned} D^*{F}(x) := \Big \{ w\in {{\mathbb {R}}}^N: \ \ w=\lim _{k}\nabla {F}(x_k), \ \ x_k\in DIFF(F)\setminus \{x\}, \ \ \lim _k x_k=x\Big \}, \end{aligned}$$

where \(\nabla \) denotes the classical gradient operator and DIFF(F) is the set of differentiability points of F.

The set-valued map \(x\leadsto D^*F(x)\) is upper semicontinuous on \(\Omega \), with nonempty, compact, and not necessarily convex values. When F is a locally semiconcave function, \(D^*{F}\) coincides with the limiting subdifferential \(\partial _LF\), namely,

$$\begin{aligned} D^*F(x)=\partial _LF(x) := \Big \{\lim \, p_i: \ p_i\in \partial _PF(x_i), \ \lim \, x_i=x\Big \} \quad \forall x\in \Omega . \end{aligned}$$

As usual, for every \(x\in \Omega \), the proximal subdifferential of F at x is given by

$$\begin{aligned} \begin{array}{l} \partial _PF(x):= \Big \{p \in {{\mathbb {R}}}^N\,:\, \exists \, \sigma ,\eta >0\, \text { s.t.,} \ \forall y\in B(x,\eta ), \\ \quad \qquad \qquad \qquad F(y)-F(x)+\sigma |y-x|^2\ge \langle p,y-x\rangle \Big \}. \end{array} \end{aligned}$$

For locally Lipschitz continuous functions, the Clarke subdifferential \(\partial _CF(x)\) of F at x can be defined as \(\partial _CF(x):=\mathrm {co}\,\partial _LF(x)\). Finally, locally semiconcave functions enjoy the following properties (see [9, Propositions 3.3.1, 3.6.2]).

Lemma 1.5

Let \(\Omega \subseteq {{\mathbb {R}}}^N\) be an open set and let \(F:\Omega \rightarrow {{\mathbb {R}}}\) be a locally semiconcave function. Then for any compact set \(\mathcal {K}\subset \Omega \) there exist some positive constants L and \(\rho \) such that, for any \(x\in \mathcal {K}\),Footnote 3

$$\begin{aligned} \begin{array}{l} F({\hat{x}})-F(x)\le \langle p,{\hat{x}}-x \rangle +\rho |{\hat{x}}-x|^2, \\ |p|\le L \quad \forall p\in D^*F(x), \end{array} \end{aligned}$$
(1.8)

for any point \({\hat{x}}\in \mathcal {K}\) such that \([x,{\hat{x}}]\subset \mathcal {K}\).

2 Sample and Euler stabilizability with regulated cost

Let us introduce the notions of sampling and Euler solutions with regulated cost. Hypothesis (H0) is assumed throughout the whole section.

Definition 2.1

(Admissible trajectory-control pairs and costs) For every point \(z\in {{\mathbb {R}}}^n\setminus {\mathbf {C}}\), we will say that \((x,u)\) is an admissible trajectory-control pair from z for the control system

$$\begin{aligned} \dot{x}=f(x,u), \end{aligned}$$
(2.1)

if there exists \(T_x\le +\infty \) such that \(u\in L^\infty _{loc}([0,T_x),U)\) and x is a Carathéodory solution of (2.1) in \([0,T_x)\) corresponding to u, verifying \(x(0)=z\) and

$$\begin{aligned} x([0,T_x))\subset {{\mathbb {R}}}^n\backslash {\mathbf {C}}\ \ \displaystyle \text {and, if } T_{ x}<+\infty , \ \ \lim _{t\rightarrow T^-_{ x}} \mathbf{d}(x(t))=0. \end{aligned}$$

We shall use \({\mathcal { A}}_f({ z})\) to denote the family of admissible trajectory-control pairs \((x,u)\) from z for the control system (2.1). Moreover, we will call cost associated to \((x,u)\in {\mathcal { A}}_f({ z})\) the function

$$\begin{aligned} x^0(t):=\int _0^{t } l(x(\tau ),u(\tau ))\,d\tau \ \ \forall t\in [0,T_x). \end{aligned}$$

If \(T_x<+\infty \), we extend continuously \((x^0,x)\) to \([0,+\infty )\), by setting

$$\begin{aligned} (x^0,x)(t)=\lim _{\tau \rightarrow T_x^-}(x^0,x)(\tau ) \qquad \forall t\ge T_x. \end{aligned}$$

From now on, we will always consider admissible trajectories and associated costs defined on \([0,+\infty )\).

Observe that, for any admissible trajectory-control pair defined on \([0, T_x)\) with \(T_x<+\infty \), the above limit exists by (H0). In particular, this follows by the compactness of \(\partial \mathbf{C}\) and the boundedness of f and l in any bounded neighborhood of the target.

A partition of \([0,+\infty )\) is a sequence \(\pi =(t^j) \) such that \(t^0=0, \quad t^{j-1}<t^j\)   \(\forall j\ge 1\), and \(\lim _{j\rightarrow +\infty }t^j=+\infty \). The value \(\text {diam}(\pi ):=\sup _{ j\ge 1}(t^{j }-t^{j-1})\) is called the diameter or the sampling time of the sequence \(\pi \). A feedback for (2.1) is defined to be any locally bounded function \(K:{{\mathbb {R}}}^n\setminus \mathbf{C}\rightarrow U\).

Definition 2.2

(Sampling trajectory and sampling cost) Given a feedback \(K:{{\mathbb {R}}}^n\setminus \mathbf{C}\rightarrow U\), a partition \(\pi =(t^j)\) of \([0,+\infty )\), and a point \(z\in {{\mathbb {R}}}^n\setminus \mathbf{C}\), a \(\pi \)-sampling trajectory for (2.1) from z associated to K is a continuous function x defined by recursively solving

$$\begin{aligned} \dot{x}(t)=f(x(t),K(x(t^{j-1}))) \qquad t\in [t^{j-1},t^j],~ (x(t)\in {{\mathbb {R}}}^n\setminus \mathbf{C}) \end{aligned}$$

from the initial time \(t^{j-1}\) up to time

$$\begin{aligned} \tau ^j:=t^{j-1}\vee \sup \{\tau \in [t^{j-1},t^j ]: \ x \ \text {is defined on } [t^{j-1},\tau )\}, \end{aligned}$$

where \(x(t^0)=x(0)=z\). In this case, the trajectory x is defined on the right-open interval from time zero up to time \(t^-:=\inf \{\tau ^j: \ \tau ^j<t^j\}\). Accordingly, for every \(j\ge 1\), we set

$$\begin{aligned} u(t):= K(x(t^{j-1})) \quad \forall t\in [t^{j-1},t^j)\cap [0,t^-). \end{aligned}$$
(2.2)

The pair \((x,u)\) will be called a \(\pi \)-sampling trajectory-control pair of (2.1) from z (corresponding to the feedback K). The sampling cost associated to \((x,u)\) is given by

$$\begin{aligned} x^0(t):=\int _0^t l(x(\tau ),u(\tau ))\,d\tau \quad t\in [0,t^-). \end{aligned}$$
(2.3)
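
The following sketch (not part of the paper) illustrates Definition 2.2 numerically: on each sampling interval the control is frozen at the value \(K(x(t^{j-1}))\) and the frozen-control dynamics is integrated by explicit Euler sub-steps, while the sampling cost (2.3) is accumulated; the data f, l, K, the partition, and the toy target \(\mathbf{C}=\{0\}\) are those of the illustrative example in the Introduction.

```python
import numpy as np

def sampling_pair(f, l, K, z, partition, substeps=20):
    """Build a pi-sampling trajectory (Definition 2.2) and its sampling cost (2.3)
    on a time grid: u = K(x(t^{j-1})) is frozen on [t^{j-1}, t^j] and the resulting
    ODE is integrated with explicit Euler sub-steps (a crude placeholder solver)."""
    ts, xs, cost = [0.0], [np.asarray(z, dtype=float)], [0.0]
    for t_prev, t_next in zip(partition[:-1], partition[1:]):
        u = K(xs[-1])                              # control frozen at the sampling node
        h = (t_next - t_prev) / substeps
        for k in range(substeps):
            x = xs[-1]
            cost.append(cost[-1] + h * l(x, u))    # accumulate the running cost
            xs.append(x + h * f(x, u))             # one explicit Euler sub-step
            ts.append(t_prev + (k + 1) * h)
    return np.array(ts), np.array(xs), np.array(cost)

# toy data from the Introduction: f(x,u) = u, l = 1, K(x) = -x/|x|, C = {0}
f = lambda x, u: u
l = lambda x, u: 1.0
K = lambda x: -x / np.linalg.norm(x)
dist_C = lambda x: np.linalg.norm(x)
partition = np.arange(0.0, 2.0 + 1e-9, 0.1)        # diam(pi) = 0.1

ts, xs, cost = sampling_pair(f, l, K, np.array([1.0, 1.0]), partition)
print(dist_C(xs[-1]), cost[-1])                    # final distance to C and sampling cost
```

With these data the printed sampling cost over \([0,2]\) equals 2, within the bound \(W(z)/{p_{0}}=2\sqrt{2}\) from the example in the Introduction.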

Definition 2.3

(Sample stabilizability with \((p_0,W)\)-regulated cost) A feedback \(K:{{\mathbb {R}}}^n\setminus \mathbf{C}\rightarrow U\) is said to sample-stabilize (2.1) to \(\mathbf{C}\) if there is a function \(\beta \in {\mathcal {KL}}\) satisfying the following: for each pair \(0<r<R\) there exists \(\delta =\delta (r,R)>0\), such that, for every partition \(\pi \) of \([0,+\infty )\) with \(\text {diam}(\pi )\le \delta \) and for any initial state \(z\in {{\mathbb {R}}}^n\setminus \mathbf{C}\) such that \(\mathbf{d}(z)\le R\), any \(\pi \)-sampling trajectory-control pair \((x,u)\) of (2.1) from z associated to K belongs to \({{\mathcal {A}}}_f(z)\) and verifies:

$$\begin{aligned} \mathbf{d}(x(t))\le \max \{\beta (R,t), r\} \qquad \forall t\in [0,+\infty ). \end{aligned}$$
(2.4)

Such pairs \((x,u)\) are called \((r,R)\)-stable (to \(\mathbf{C}\)) sampling trajectory-control pairs. If the system (2.1) admits a sample-stabilizing feedback to \(\mathbf{C}\), then it is called sample stabilizable (to \(\mathbf{C}\)).

When there exist \({p_{0}}>0\) and a continuous map \(W:\overline{{{\mathbb {R}}}^n\setminus {\mathbf {C}}}\rightarrow [0,+\infty )\) which is positive definite and proper on \({{\mathbb {R}}}^n\setminus \mathbf{C}\), such that the sampling cost \(x^0\) associated to any \((r,R)\)-stable sampling pair \((x,u)\) verifies

$$\begin{aligned} x^0({\bar{T}}_x^r)= \int _0^{{\bar{T}}_x^r} l(x(\tau ),u(\tau ))\,d\tau \le \frac{W(z)}{{p_{0}}} \end{aligned}$$
(2.5)

where

$$\begin{aligned} {\bar{T}}_x^r:=\inf \{t>0: \ {\mathbf {d}}(x(\tau ))\le r \ \ \forall \tau \ge t\}, \end{aligned}$$
(2.6)

we say that (2.1) is sample stabilizable (to \(\mathbf{C}\)) with \((p_0,W)\)-regulated cost.

Observe that, when \({\mathbf {d}}(z)\le r\), the time \({\bar{T}}_x^r\) may be zero. In this case (2.5) imposes no conditions on the cost.
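
Note also that \({\bar{T}}_x^r\) is not the first entry time into \(B_r(\mathbf{C})\) but the first time after which the trajectory never leaves it. On a discrete time grid this can be computed by a backward scan, as in the following sketch (an illustration on sampled data, not part of the paper):

```python
import numpy as np

def T_bar(ts, dists, r):
    """Discrete analogue of (2.6): the first grid time after which d(x(t)) <= r holds
    at every later grid point (+inf if the tail of the trajectory never settles in B_r(C))."""
    settled = np.flip(np.cumprod(np.flip(dists <= r)))   # 1 where all later samples lie in B_r(C)
    idx = int(np.argmax(settled))                        # first index of the settled tail
    return ts[idx] if settled[idx] else np.inf

ts = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
dists = np.array([2.0, 0.4, 0.8, 0.3, 0.2, 0.1])  # enters B_r(C) at t=1, leaves, then settles
print(T_bar(ts, dists, r=0.5))                    # 3.0, not the first entry time 1.0
```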

Let us now introduce Euler solutions and the associated costs and a notion of Euler stabilizability to \(\mathbf{C}\) with \((p_0,W)\)-regulated cost.

Definition 2.4

(Euler trajectory and Euler cost) Let \((\pi _i)\) be a sequence of partitions of \([0,+\infty )\) such that \(\delta _i:=\text {diam}(\pi _i)\rightarrow 0\) as \(i\rightarrow \infty \). Given a feedback \(K:{{\mathbb {R}}}^n\setminus \mathbf{C}\rightarrow U\) and \(z\in {{\mathbb {R}}}^n\setminus \mathbf{C}\), for every i, let \((x_i,u_i)\in {\mathcal { A}}_{f}({ z})\) be a \(\pi _i\)-sampling trajectory-control pair of (2.1) from z associated to K and let \( x_i^0\) be the corresponding cost. If there exists a map \({\mathscr {X}}:[0,+\infty )\rightarrow {{\mathbb {R}}}^n\), verifying

$$\begin{aligned} x_i\rightarrow {\mathscr {X}}\qquad \text {locally uniformly in }[0,+\infty ) \end{aligned}$$
(2.7)

we call \({\mathscr {X}}\) an Euler trajectory of (2.1) from z (corresponding to the feedback K).

If moreover there is a map \({\mathscr {X}}^0:[0,+\infty )\rightarrow [0,+\infty )\) verifying

$$\begin{aligned} x_i^0\rightarrow {\mathscr {X}}^0\qquad \text {locally uniformly in }[0,+\infty ), \end{aligned}$$
(2.8)

we call \({\mathscr {X}}^0\) the Euler cost associated to \({\mathscr {X}}\).

Remark 2.5

As Euler trajectories are not, in general, classical solutions to the control system (2.1), an Euler cost may not coincide with the integral of the Lagrangian along the corresponding Euler trajectory, for some control. Nevertheless, this is true in special situations, as, for instance, when the function l is continuous, bounded and does not depend on the control, that is, \(l(x,u)={\tilde{l}}(x)\) for all \((x,u)\). Indeed, in this case if there exists a sequence of sampling trajectories \(x_i\rightarrow {\mathscr {X}}\) locally uniformly in \([0,+\infty )\), the dominated convergence theorem implies that the associated costs \(x^0_i\) converge locally uniformly to the function \({\mathscr {X}}^0\) verifying

$$\begin{aligned} {\mathscr {X}}^0(t)= \int _0^t {\tilde{l}}({\mathscr {X}}(\tau ))d\tau \quad \text {for any } t\ge 0. \end{aligned}$$

Indeed, for fixed \(t>0\) and all \(s\in [0,t]\) one has

$$\begin{aligned} |x_i^0(s)-{\mathscr {X}}^0(s)|\le \int _0^s|{\tilde{l}}(x_i(\tau ))-{\tilde{l}}({\mathscr {X}}(\tau ))|\,d\tau \le t\omega (\sup _{\tau \in [0,t]}|x_i(\tau )-{\mathscr {X}}(\tau )|), \end{aligned}$$

where \(\omega \) denotes a modulus of \({\tilde{l}}\) on a suitable compact neighborhood of \({\mathscr {X}}([0,t])\). Therefore, \(\sup _{s\in [0,t]} |x_i^0(s)-{\mathscr {X}}^0(s)|\rightarrow 0\) as \(i\rightarrow +\infty \), for every \(t>0\).

Definition 2.6

(Euler stabilizability with \((p_0,W)\)-regulated cost) The system (2.1) is Euler stabilizable to \(\mathbf{C}\) with Euler stabilizing feedback K, if there exists a function \(\beta \in {\mathcal {KL}}\) such that for each \(z\in {{\mathbb {R}}}^n\setminus \mathbf{C}\), every Euler solution \({\mathscr {X}}\) of (2.1) from z associated to K verifies

$$\begin{aligned} \mathbf{d}({\mathscr {X}}(t))\le \beta ({\mathbf {d}}(z),t) \qquad \forall t\in [0,+\infty ). \end{aligned}$$
(2.9)

When there exist some \({p_{0}}>0\) and a continuous map \(W:\overline{{{\mathbb {R}}}^n\setminus {\mathbf {C}}}\rightarrow [0,+\infty )\) which is positive definite and proper on \({{\mathbb {R}}}^n\setminus {\mathbf {C}}\), such that every Euler cost \({\mathscr {X}}^0\) associated to \({\mathscr {X}}\) verifies

$$\begin{aligned} \lim _{t\rightarrow T_{{\mathscr {X}}}^-}{\mathscr {X}}^0(t)\le \frac{W(z)}{{p_{0}}} \quad \forall z\in {{\mathbb {R}}}^n\setminus \mathbf{C}, \end{aligned}$$
(2.10)

where

$$\begin{aligned} \displaystyle T_{\mathscr {X}}:=\inf \{\tau \in (0,+\infty ]: \ {\mathscr {X}}([0,\tau ))\subset {{\mathbb {R}}}^n\setminus \mathbf{C}, \ \ \lim _{t\rightarrow \tau ^-} \mathbf{d}({\mathscr {X}}(t))=0\}, \end{aligned}$$

then (2.1) is said to have a \((p_0,W)\)-regulated cost (w.r.t. the feedback K).

3 Main result

Theorem 3.1

Assume hypothesis (H0) and let W be a \({p_{0}}\)-MRF with \({p_{0}}>0\). Then there exists a locally bounded feedback \(K:{{\mathbb {R}}}^n\setminus \mathbf{C}\rightarrow U\) that sample and Euler stabilizes system (2.1) to \(\mathbf{C}\) with \((p_0,W)\)-regulated cost.

We split the proof of Theorem 3.1 into two subsections, concerning the sample stabilizability and the Euler stabilizability, respectively.

As a preliminary remark, let us observe that for any \((x, p)\in ({{\mathbb {R}}}^n\setminus \mathbf{C})\times {{\mathbb {R}}}^n\) the infimum in the definition of the Hamiltonian H can be taken over a compact subset of U, in view of the following result.

Proposition 3.2

Assume (H0) and let W be a \({p_{0}}\)-MRF with \({p_{0}}\ge 0\). Then there exists a continuous function \(N: (0,+\infty )\rightarrow (0,+\infty )\) such that, setting for any \((x, p)\in ({{\mathbb {R}}}^n\setminus \mathbf{C})\times {{\mathbb {R}}}^n\),

$$\begin{aligned} H_{N(r)}(x,{p_{0}},p):=\min _{u\in U\cap B(0,N(r))}\Big \{\langle p, f(x,u)\rangle +{p_{0}}\, l(x,u) \Big \} \quad \forall r>0, \end{aligned}$$

one has

$$\begin{aligned} H_{ N(W(x))} (x,p_0,D^*W(x))< -\gamma (W(x)) \qquad \forall x\in {{\mathbb {R}}}^n\setminus \mathbf{C}. \end{aligned}$$
(3.1)

Proof

Fix \(\sigma >0\). By [19, Prop. 3.3] we derive that there exists a decreasing, continuous function \(N: (0,\sigma ] \rightarrow (0,+\infty )\) such that, setting

$$\begin{aligned} H_{ N(r)}(x,p_0,p):=\min _{u\in U\cap B(0,N(r))}\Big \{\langle p, f(x,u)\rangle +{p_{0}}\,l(x,u) \Big \} \end{aligned}$$
(3.2)

for all \(r\in (0,\sigma ]\), it follows that

$$\begin{aligned} H_{ N(W(x))} (x,{p_{0}},D^*W(x))< -\gamma (W(x)) \end{aligned}$$
(3.3)

for every \(x\in W^{-1}((0,\sigma ])\). It only remains to show that there exists a continuous map \(N:[\sigma ,+\infty )\rightarrow (0,+\infty )\) such that, extending (3.2) to \(r\in [\sigma ,+\infty )\), one gets (3.1) for every \(x\in W^{-1}([\sigma ,+\infty ))\). Arguing as in the proof of [19, Prop. 3.3], one can obtain that for any \(r>\sigma \) there is some \(N(r)\ge N(\sigma )\) such that

$$\begin{aligned} H_{ N(r)} (x,p_0,p)< -\gamma (W(x)) \qquad \forall x\in W^{-1}([\sigma ,r]) \ \text { and } \ p\in D^*W(x). \end{aligned}$$

Moreover, for any \(r_2>r_1\ge \sigma \), one clearly has \(N(r_2)\ge N(r_1)\) and, enlarging N if necessary, one can assume that \(r\mapsto N(r)\) is increasing and continuous on \([\sigma ,+\infty )\). Therefore for any \(x\in W^{-1}([\sigma ,+\infty ))\) the thesis (3.1) follows from (3.2) as soon as \(r=W(x)\). \(\square \)

As an immediate consequence of Proposition 3.2, the existence of a \({p_{0}}\)-MRF W guarantees the existence of a feedback K with the following properties.

Proposition 3.3

Let W be a \({p_{0}}\)-MRF with \({p_{0}}\ge 0\) and fix a selection \(p(x)\in D^*W(x)\) for any \(x\in {{\mathbb {R}}}^n\setminus \mathbf{C}\). Given a function N as in Proposition 3.2, any map \(K:{{\mathbb {R}}}^n\setminus \mathbf{C}\rightarrow U\) such that

$$\begin{aligned} K(x)\in \underset{u\in U\cap B(0,N(W(x)))}{\arg \min }\Big \{\langle p(x), f(x,u)\rangle +{p_{0}}\,l(x,u) \Big \}, \end{aligned}$$

verifies

$$\begin{aligned} \langle p(x), f(x,K(x))\rangle +{p_{0}}\, l(x,K(x)) <-\gamma (W(x)) \quad \forall x\in {{\mathbb {R}}}^n\setminus \mathbf{C}. \end{aligned}$$
(3.4)

We call any map K as above a W-feedback for the control system (2.1). When the dependence of K on W is clear, we simply call K a feedback.
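
In the same spirit as the sketches above, a W-feedback can be approximated numerically by enumerating a finite sample of \(U\cap B(0,N(W(x)))\); in the following illustration the radius map N is a placeholder (for the toy data of the Introduction any constant \(N\ge 1\) works), as are the gradient selection p and the data f, l.

```python
import numpy as np

def W_feedback(x, p, W, N, f, l, p0, U_sampler):
    """Approximate K(x) in argmin over U cap B(0, N(W(x))) of <p(x), f(x,u)> + p0*l(x,u),
    replacing the argmin by enumeration of a finite control sample."""
    radius = N(W(x))
    candidates = [u for u in U_sampler(radius) if np.linalg.norm(u) <= radius + 1e-12]
    values = [np.dot(p(x), f(x, u)) + p0 * l(x, u) for u in candidates]
    return candidates[int(np.argmin(values))]

# toy data of the Introduction; N is a placeholder radius map
f = lambda x, u: u
l = lambda x, u: 1.0
W = lambda x: np.linalg.norm(x)
p = lambda x: x / np.linalg.norm(x)          # selection p(x) in D*W(x)
N = lambda r: 1.0                            # any constant radius >= 1 suffices for these data
p0 = 0.5

def U_sampler(radius):
    """Sample of the control set U (the unit disc), scaled to the allowed radius."""
    angles = np.linspace(0.0, 2 * np.pi, 64, endpoint=False)
    return [min(radius, 1.0) * np.array([np.cos(a), np.sin(a)]) for a in angles]

x = np.array([2.0, -1.0])
print(W_feedback(x, p, W, N, f, l, p0, U_sampler))   # approximately -x/|x|
```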

3.1 Proof of the sample stabilizability with \((p_0,W)\)-regulated cost

The proof relies on Propositions 3.4, 3.5 and on Lemma 3.6 below.

Proposition 3.4

[19, Prop. 3.5] Assume (H0). Let W be a \({p_{0}}\)-MRF with \({p_{0}}\ge 0\), define N according to Proposition 3.2, and let K be a W-feedback. Moreover, let \(\varepsilon \), \({\hat{\mu }}\), \(\sigma \) verify \(\varepsilon >0\) and \(0<{\hat{\mu }}< \sigma \). Then there exists some \({\hat{\delta }}={\hat{\delta }}({\hat{\mu }},\sigma )>0\) such that, for every partition \(\pi =(t^j)\) of \([0,+\infty )\) with diam\((\pi )\le {\hat{\delta }}\) and for each \(z\in {{\mathbb {R}}}^n\setminus {\mathbf {C}}\) satisfying \(W(z)\in ({\hat{\mu }},\sigma ]\), any \(\pi \)-sampling trajectory-control pair \((x,u)\) of

$$\begin{aligned} \dot{x}= f(x,u), \qquad x(0)=z, \end{aligned}$$
(3.5)

associated to the feedback K is defined on \([0,{\hat{t}})\) and enjoys the following properties:

  1. (i)

    \({\hat{t}}:=T^{{\hat{\mu }}}_x<+\infty \), where

    $$\begin{aligned} T^{{\hat{\mu }}}_x:=\inf \{t\ge 0: \ \ W(x(t))\le {\hat{\mu }}\}; \end{aligned}$$
    (3.6)
  2. (ii)

    for every \(t\in [0,{\hat{t}})\) and \(j\ge 1\) such that \(t\in [t^{j-1}, t^j)\),

    $$\begin{aligned} W(x(t))-W(x(t^{j-1}))+{p_{0}}\int _{t^{j-1}}^t l(x(\tau ),u(\tau ))\,d\tau \le -\frac{\gamma (W(x(t^{j-1}))) }{\varepsilon +1}(t -t^{j-1}).\nonumber \\ \end{aligned}$$
    (3.7)

Proposition 3.4 describes the behavior of any sampling trajectory-control pair \((x,u)\) with sampling time not greater than \({\hat{\delta }}\) just until its first exit-time \({\hat{t}}\) from the set \(\{x\in \overline{{{\mathbb {R}}}^n\setminus {\mathbf {C}}}: \ \ W(x)>{\hat{\mu }}\}\). In [19] this was enough to derive global asymptotic controllability. Global asymptotic stabilizability, instead, requires also that, loosely speaking, any x is defined in \([0,+\infty )\) and stays in the sublevel set \(\{x\in \overline{{{\mathbb {R}}}^n\setminus {\mathbf {C}}}: \ \ W(x)\le {\hat{\mu }}\}\) for every \(t\ge {\bar{t}}\), for some \({\bar{t}}={\bar{t}}({\hat{\mu }},\sigma )\). This is the content of the next proposition, which can be seen as an extension of [13, Lemma IV.2] to the setting considered here.

Proposition 3.5

Assume (H0) and let W be a \({p_{0}}\)-MRF with \({p_{0}}\ge 0\). Using the same notation as in Proposition 3.4, set

$$\begin{aligned} {\bar{\delta }}= {\bar{\delta }}({\hat{\mu }},\sigma ):=\min \left\{ {\hat{\delta }}\left( \frac{{\hat{\mu }}}{4},2\sigma \right) , \frac{{\hat{\mu }}}{4L\,m}\right\} , \end{aligned}$$
(3.8)

where L is the Lipschitz constant of W in \(W^{-1}([{\hat{\mu }}/4,2\sigma ])\) and

$$\begin{aligned} m:=\sup _{W^{-1}((0,2\sigma ])\times U} |(f, l)|. \end{aligned}$$
(3.9)

Then for every partition \(\pi =(t^j)\) of \([0,+\infty )\) with diam\((\pi )\le {\bar{\delta }}\) and for each \(z\in {{\mathbb {R}}}^n\setminus {\mathbf {C}}\) satisfying \(W(z)\in ({\hat{\mu }},\sigma ]\), any \(\pi \)-sampling trajectory x of (3.5) is defined in \([0,+\infty )\)Footnote 4 and verifies

$$\begin{aligned} x(t)\in W^{-1}([0,{\hat{\mu }}]) \quad \forall t\ge {\bar{t}}, \end{aligned}$$
(3.10)

where \({\bar{t}}:=T_x^{{\hat{\mu }}/4}<+\infty \).

Proof

Fix a partition \(\pi =(t^j)\) of \([0,+\infty )\) of diameter not greater than \({\bar{\delta }}\) and an initial datum \(z\in W^{-1}(({\hat{\mu }}, \sigma ])\). By Proposition 3.4 with \({\hat{\mu }}/4\) in place of \({\hat{\mu }}\), any \(\pi \)-sampling solution x is defined at least up to \({\bar{t}}:=T_x^{{\hat{\mu }}/4}<+\infty \) and \(W(x([0,{\bar{t}}]))\subset [{\hat{\mu }}/4, W(z)]\), \(W(x({\bar{t}}))={\hat{\mu }}/4\). Moreover, if \(\bar{n}:=\max \{j\in {{\mathbb {N}}}: \ t^j\le {\bar{t}}\}\), then we have

$$\begin{aligned} x(t)\in W^{-1}([{\hat{\mu }}/4,W(x(t^{{\bar{n}}-1}))])\subseteq W^{-1}([{\hat{\mu }}/4, 3{\hat{\mu }}/4]) \quad \forall t\in [t^{{\bar{n}}-1},t^{{\bar{n}}}]. \end{aligned}$$
(3.11)

where, to deal with the case \({\bar{n}}=0\), we set \(t^{-1}:=t^0=0\). The last inclusion follows by the definition of \({\bar{\delta }}\), which implies

$$\begin{aligned} W(x(t^{{\bar{n}}}))-W(x({\bar{t}}))\le L|x(t^{{\bar{n}}})-x({\bar{t}})|\le Lm{\bar{\delta }}\le \frac{{\hat{\mu }}}{4}, \end{aligned}$$

so that \(W(x(t^{{\bar{n}}}))\le {\hat{\mu }}/2\) and, arguing similarly, \(W(x(t^{{\bar{n}}-1}))\le 3{\hat{\mu }}/4\).

We use (3.11) as the base case to prove inductively that any \(\pi \)-sampling solution x of (3.5) either is defined on \([0,+\infty )\) and verifies (3.10) in the stronger form

$$\begin{aligned} x(t)\in W^{-1}((0,{\hat{\mu }}]) \quad \forall t\ge {\bar{t}}, \end{aligned}$$

or x has finite blow-up time coinciding with the first time \(T_x\) such that \(\lim _{t\rightarrow T_x^-}{\mathbf {d}}(x(t))=0\): in this case, since \(|\dot{x}|\) is bounded by m, x can be continuously extended to \([0,+\infty )\) and this extension verifies (3.10).

Fix \(j\ge {\bar{n}}\) and assume by induction that an arbitrary \(\pi \)-sampling trajectory x, possibly extended according to Definition 2.1, is defined up to time \(t^{j-1}\) and verifies \(x([0,t^{j-1}])\subseteq W^{-1}([0,{\hat{\mu }}])\). We have to show that x is defined on \([t^{j-1},t^{j}]\) and verifies

$$\begin{aligned} x(t)\in W^{-1}([0,{\hat{\mu }}]) \quad \forall t\in [t^{j-1},t^{j}]. \end{aligned}$$
(3.12)

If \(W(x(t^{j-1}))=0\), x is constant on \([t^{j-1},t^j]\) and (3.12) is obviously satisfied. When \(0<W(x(t^{j-1}))\le {\hat{\mu }}\), we distinguish the following situations:

Case 1. \(W(x(t^{j-1}))\ge {\hat{\mu }}/2\). Then by Proposition 3.4 (choosing in particular \(z=x(t^{j-1})\) and the partition \(\pi _j:=(t^{k+j-1}-t^{j-1})_k\)) we deduce that any \(\pi \)-sampling trajectory with value \(W(x(t^{j-1}))\ge {\hat{\mu }}/2\) is defined on the whole interval \([t^{j-1},t^j]\) and verifies \(0\le W(x(t))-W(x(t^j))\le Lm{\bar{\delta }}\le {\hat{\mu }}/4\) for all \(t\in [t^{j-1},t^j]\), so that \(x([t^{j-1},t^{j}])\subset W^{-1}([{\hat{\mu }}/4,{\hat{\mu }}])\) and this implies (3.12).

Case 2. \(W(x(t^{j-1}))<{\hat{\mu }}/2\). Any \(\pi \)-sampling solution x of (3.5) with this property can be defined on a maximal interval \([t^{j-1}, {\tilde{t}})\). Assume first that \({\tilde{t}}> t^j\), so that x is defined for all \(t\in [t^{j-1},t^j]\), and suppose by contradiction

$$\begin{aligned} x([t^{j-1},t^j])\nsubseteq W^{-1}([0,{\hat{\mu }}]). \end{aligned}$$

Then there exist \(t^{j-1}<{\underline{t}}^j< {\bar{t}}^j\le t^j\) such that

$$\begin{aligned} W(x({\underline{t}}^j))= {\hat{\mu }}/2,\quad W(x({\bar{t}}^j))={\hat{\mu }}, \quad x([{\underline{t}}^j,{\bar{t}}^j])\subseteq W^{-1}([{\hat{\mu }}/2,{\hat{\mu }}]). \end{aligned}$$

This yields the required contradiction, since we have

$$\begin{aligned} {\hat{\mu }}/2=W(x({\bar{t}}^j))-W(x({\underline{t}}^j))\le Lm{\bar{\delta }} \le {\hat{\mu }}/4. \end{aligned}$$

Therefore x verifies (3.12).

Let us now assume \({\tilde{t}}\le t^j\). By standard properties of ODEs, the blow-up time \({\tilde{t}}\) verifies either \(\lim _{t\rightarrow {\tilde{t}}^-}|x(t)|=+\infty \) or \({\tilde{t}}=T_x\). Notice that if we had

$$\begin{aligned} x([t^{j-1},{\tilde{t}}))\nsubseteq W^{-1}([0,{\hat{\mu }}]), \end{aligned}$$
(3.13)

we could find \(t^{j-1}<{\underline{t}}^j< {\bar{t}}^j<{\tilde{t}}\) and obtain a contradiction arguing as above. Hence \(x([t^{j-1},{\tilde{t}}))\subseteq W^{-1}([0,{\hat{\mu }}])\) and \({\tilde{t}}= T_x\) necessarily, since the set \(W^{-1}([0,{\hat{\mu }}])\) is compact. By the boundedness of f on \(W^{-1}((0,{\hat{\mu }}])\times U\) this implies that the limit \(\lim _{t\rightarrow {\tilde{t}}}x(t)={\bar{z}}\in \partial \mathbf{C}\) exists, and the extension of x to \([t^{j-1},t^j]\) given by \(x(t)={\bar{z}}\) for all \(t\in [{\tilde{t}},t^j]\) verifies (3.12). The proof is thus concluded. \(\square \)

Finally, let us relate the level sets of a \({p_{0}}\)-MRF W with those of the distance function \({\mathbf {d}}\) using the following general result.

Lemma 3.6

Let W, \(W_1 :\overline{{{\mathbb {R}}}^n\setminus {\mathbf {C}}}\rightarrow [0,+\infty )\) be continuous functions, and let us assume that W and \(W_1\) are positive definite and proper on \({{\mathbb {R}}}^n\setminus {\mathbf {C}}\). Then the functions \({\bar{g}}\), \(\underline{g}:(0,+\infty )\rightarrow (0,+\infty )\) given by

$$\begin{aligned} \begin{array}{l} \underline{g}(r)=\underline{g}_{\,W,W_1}(r):=\sup \left\{ \alpha>0: \ \ \{{\tilde{z}}: \ W({\tilde{z}})\le \alpha \}\subseteq \{{\tilde{z}}: \ W_1({\tilde{z}}) < r\}\right\} , \\ {\bar{g}}(r)={\bar{g}}_{\,W,W_1}(r):=\inf \left\{ \alpha >0: \ \ \{{\tilde{z}}: \ W({\tilde{z}})\le \alpha \}\supseteq \{{\tilde{z}}: \ W_1( {\tilde{z}})\le r\}\right\} , \end{array} \end{aligned}$$

are well defined and increasing, and the following limits hold:

$$\begin{aligned} \lim _{r\rightarrow 0^+}{\bar{g}}(r)=\lim _{r\rightarrow 0^+}\underline{g}(r)=0, \qquad \lim _{r\rightarrow +\infty }{\bar{g}}(r)=\lim _{r\rightarrow +\infty }\underline{g}(r)=+\infty . \end{aligned}$$
(3.14)

Moreover, one has

$$\begin{aligned} \underline{g}(W_1(x))\le W (x)\le {\bar{g}}(W_1(x)) \quad \forall x\in {{\mathbb {R}}}^n\setminus \mathbf{C}. \end{aligned}$$
(3.15)

Proof

For every \(\alpha >0\), let us introduce the sets \({\mathcal {S}}_\alpha := \{{\tilde{z}}: \ W({\tilde{z}})\le \alpha \}\), \({\mathcal {S}}^1_\alpha := \{{\tilde{z}}: \ W_1( {\tilde{z}})\le \alpha \}\), and \({\mathcal {S}}^{1_<}_\alpha := \{{\tilde{z}}: \ W_1( {\tilde{z}})<\alpha \}\). By the hypotheses on W and \(W_1\) it follows that \(({\mathcal {S}}_\alpha )_{\alpha >0}\), \(({\mathcal {S}}^1_\alpha )_{\alpha >0}\), and \(({\mathcal {S}}^{1_<}_\alpha )_{\alpha >0}\) are strictly increasing families of nonempty, bounded sets verifying

$$\begin{aligned} \begin{array}{l} \displaystyle \lim _{\alpha \rightarrow 0^+}{\mathcal {S}}_\alpha = \lim _{\alpha \rightarrow 0^+}{\mathcal {S}}^1_\alpha =\lim _{\alpha \rightarrow 0^+}{\mathcal {S}}^{1_<}_\alpha =\mathbf{C}, \\ \displaystyle \lim _{\alpha \rightarrow +\infty }{\mathcal {S}}_\alpha = \lim _{\alpha \rightarrow +\infty }{\mathcal {S}}^1_\alpha = \lim _{\alpha \rightarrow +\infty }{\mathcal {S}}^{1_<}_\alpha ={{\mathbb {R}}}^n. \end{array} \end{aligned}$$

Then for any \(r>0\) there exist \({\bar{\alpha }}\), \({\bar{\alpha }}_1>0\) such that \({\mathcal {S}}_\alpha \subset {\mathcal {S}}^{1_<}_r\) for all \(\alpha \le {\bar{\alpha }}\) and \({\mathcal {S}}_\alpha \supset {\mathcal {S}}^1_r\) for all \(\alpha \ge {\bar{\alpha }}_1\), so that \({\bar{g}}(r)\) and \(\underline{g}(r)\) turn out to be well-defined. Moreover, \(r\mapsto {\bar{g}}(r)\), \(\underline{g}(r)\) are clearly increasing and verify the limits (3.14).

In order to prove the inequalities in (3.15), given \(x\in {{\mathbb {R}}}^n\setminus \mathbf{C}\), let us set \(r:=W_1(x)\), \(\alpha :=W(x)\), \(\underline{\alpha }:=\underline{g}(W_1(x))\), and \({\bar{\alpha }}:={\bar{g}}(W_1(x))\). Arguing by contradiction, let us first assume that \(\underline{g}(W_1(x))> W (x)\), namely

$$\begin{aligned} \underline{\alpha } > \alpha . \end{aligned}$$
(3.16)

By the definition of \(\underline{\alpha }\), (3.16) would imply that \({\mathcal {S}}_\alpha \subseteq {\mathcal {S}}^{1_<}_r\). This is impossible, since \(x\in {\mathcal {S}}_\alpha \) but \(x\notin {\mathcal {S}}^{1_<}_r\), because \(\alpha =W(x)\le \alpha \) while \(r=W_1(x)\nless r\). Similarly, if we suppose that \(W (x)> {\bar{g}}(W_1(x))\), namely

$$\begin{aligned} \bar{\alpha }< \alpha , \end{aligned}$$
(3.17)

by the definition of \(\bar{\alpha }\) we get that, for every \(\alpha '\in ({\bar{\alpha }},\alpha )\), one would have \({\mathcal {S}}^{1}_r\subseteq {\mathcal {S}}_{\alpha '}\). But this contradicts the fact that \(x\in {\mathcal {S}}^{1}_r\), since \(r=W_1(x)\le r\), while \(x\notin {\mathcal {S}}_{\alpha '}\), since \(\alpha =W(x)\nleq \alpha '\). \(\square \)
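
For instance (a simple case not discussed in the paper), take \(\mathbf{C}=\{0\}\), \(W(x)=|x|^2\) and \(W_1(x)={\mathbf {d}}(x)=|x|\). Then \(\{{\tilde{z}}: \ W({\tilde{z}})\le \alpha \}=\{{\tilde{z}}: \ |{\tilde{z}}|\le \sqrt{\alpha }\}\), so that

$$\begin{aligned} \underline{g}(r)=\sup \{\alpha>0: \ \sqrt{\alpha }<r\}=r^2, \qquad {\bar{g}}(r)=\inf \{\alpha >0: \ \sqrt{\alpha }\ge r\}=r^2, \end{aligned}$$

and (3.15) holds with equalities: \(\underline{g}(W_1(x))=|x|^2=W(x)={\bar{g}}(W_1(x))\).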

We are now ready to show that, given a \({p_{0}}\)-MRF W with \({p_{0}}>0\), the control system (2.1) is sample stabilizable to \(\mathbf{C}\) with \((p_0,W)\)-regulated cost. For any pair r, \(R>0\) with \(r< R\), let us set

$$\begin{aligned} \begin{array}{l} {\hat{\mu }}(r):=\underline{g}_{W,{\mathbf {d}}}(r)=\sup \left\{ \mu>0: \ \ \{{\tilde{z}}: \ W({\tilde{z}})\le \mu \}\subseteq \overset{\circ }{B}_r(\mathbf{C})\right\} , \\ \, \\ \sigma (R):=\bar{g}_{W,{\mathbf {d}}}(R)=\inf \left\{ \sigma >0: \ \ \{{\tilde{z}}: \ W({\tilde{z}})\le \sigma \}\supseteq B_R(\mathbf{C})\right\} . \end{array} \end{aligned}$$
(3.18)

By Lemma 3.6, if \(r<{{\mathbf {d}}({\tilde{z}})}\le R\), then \({\tilde{z}}\in W^{-1}(({\hat{\mu }}(r), \sigma (R)])\) and the values \({\hat{\mu }}(r)\), \( \sigma (R)\) are finite and verify \(0<{\hat{\mu }}(r)< \sigma (R)\). Let us choose

$$\begin{aligned} \delta =\delta (r,R):={\bar{\delta }}({\hat{\mu }}(r),\sigma (R)), \end{aligned}$$
(3.19)

where \({\bar{\delta }}({\hat{\mu }},\sigma )\) is defined by (3.8). Fix \(\varepsilon >0\), for instance \(\varepsilon =1\). By Propositions 3.4 and 3.5 it follows that for every partition \(\pi =(t^j)\) of \([0,+\infty )\) with \(\text {diam}(\pi )\le \delta \) and for every initial state \(z\in {{\mathbb {R}}}^n\setminus \mathbf{C}\) such that \(\mathbf{d}(z)\le R\), any \(\pi \)-sampling trajectory-control pair \((x,u)\) of (2.1) with \(x(0)=z\) has x defined in \([0,+\infty )\) and verifies:

  1. (i)

    \({\bar{t}}:=T_x^{{\hat{\mu }}(r)/4}<+\infty \);

  2. (ii)

    for every \(t\in [0,{\bar{t}})\) and \(j\ge 1\) such that \(t\in [t^{j-1}, t^j)\),

    $$\begin{aligned} W(x(t))-W(x(t^{j-1}))+{p_{0}}\int _{t^{j-1}}^t l(x(\tau ),u(\tau ))\,d\tau \le -\frac{\gamma (W(x(t^{j-1}))) }{2}(t -t^{j-1});\nonumber \\ \end{aligned}$$
    (3.20)
  3. (iii)

    for every \(t\ge {\bar{t}}\), \(W(x(t))\le {\hat{\mu }}(r)\), which implies that \({\mathbf {d}}(x(t))\le r\).

The time \({\bar{t}}\) might be zero when \({\mathbf {d}}(z)\le r\). Of course, condition (ii) is significant only if \({\bar{t}}>0\).
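
To fix ideas, in the toy example of the Introduction (\(f(x,u)=u\), \(l\equiv 1\), \(W(x)=|x|\), \({p_{0}}=1/2\), \(\gamma (r)=r/(2(1+r))\), \(K(x)=-x/|x|\)) condition (3.20) can be checked by hand: on any sampling interval on which the trajectory does not cross the origin one has \(W(x(t))-W(x(t^{j-1}))=-(t-t^{j-1})\) and \({p_{0}}\int _{t^{j-1}}^t l\,d\tau =(t-t^{j-1})/2\), so that (3.20) reads

$$\begin{aligned} -\frac{t-t^{j-1}}{2}\le -\frac{\gamma (W(x(t^{j-1})))}{2}(t -t^{j-1}), \end{aligned}$$

which holds since \(\gamma <1/2\).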

Observing that (3.20) implies

$$\begin{aligned} W(x(t))-W(z)\le -\frac{\gamma (W(x(t^{j-1}))) }{2}t \qquad \forall t\in [0,{\bar{t}}), \end{aligned}$$
(3.21)

the construction of a \({\mathcal {KL}}\) function \(\beta \) such that

$$\begin{aligned} {\mathbf {d}}(x(t))\le \beta ({\mathbf {d}}(z),t) \quad \forall t\in [0,{\bar{t}}) \end{aligned}$$
(3.22)

can be obtained arguing as in [19, p.600], hence we omit it. Together with (iii), this yields that

$$\begin{aligned} {\mathbf {d}}(x(t))\le \max \{\beta ({\mathbf {d}}(z),t),r\} \quad \forall t \ge 0. \end{aligned}$$

Moreover, when \({\bar{t}}>0\), by summing (ii) over j up to the last index \({\tilde{n}}\) such that \(t^{{\tilde{n}}}< {\bar{t}}\), it follows that

$$\begin{aligned} W(x({\bar{t}}))-W(z)+{p_{0}}\int _0^{{\bar{t}}} l(x(\tau ),u(\tau ))\,d\tau \le -\frac{\gamma (W(x(t^{{\tilde{n}}}))) }{2}{\bar{t}}. \end{aligned}$$
(3.23)

Hence

$$\begin{aligned} \int _0^{{\bar{t}}} l(x(\tau ),u(\tau ))\,d\tau \le \frac{W(z)}{{p_{0}}} \end{aligned}$$

and this concludes the proof since

$$\begin{aligned} {\bar{t}}\ge {\bar{T}}_x^r=\inf \{t>0: \ {\mathbf {d}}(x(\tau ))\le r \ \ \forall \tau \ge t\}. \end{aligned}$$

\(\square \)

Remark 3.7

When \({\mathbf {d}}(z)>r\), the time \( {\bar{T}}_x^r\), after which any \((r,R)\)-stable \(\pi \)-sampling trajectory x starting from z remains definitively in \(B_r(\mathbf{C})\), is uniformly bounded by a positive constant. Precisely, using the above notation, from the previous proof one can easily deduce the following upper bound

$$\begin{aligned} {\bar{T}}_x^r\le {\bar{t}}\le \frac{2(W(z)-W(x({\bar{t}})))}{\gamma (W(x({\bar{t}})))}= \frac{2\left( W(z)-\frac{{\hat{\mu }}(r)}{4}\right) }{\gamma ({\hat{\mu }}(r)/4)}. \end{aligned}$$
(3.24)

3.2 Proof of the Euler stabilizability with \((p_0,W)\)-regulated cost

Let us start with some preliminary results. In the sequel we make use of all the notations introduced in the previous subsection.

The following lemma establishes a uniform lower bound on the time needed by admissible trajectories that start from the same point z and approach the target to reach an \(\varepsilon \)-neighborhood of the target.

Lemma 3.8

Assume (H0). Given \(R>0\), let us set

$$\begin{aligned} \displaystyle {\tilde{M}}(R):=\sup \{|f(x,u)|: \ \ x\in B_R(\mathbf{C})\setminus \mathbf{C}, \ \ u\in U\}. \end{aligned}$$

Then for any \(z\in {{\mathbb {R}}}^n\setminus \mathbf{C}\) such that \({\mathbf {d}}(z)\le R\) and \(\varepsilon \in (0,{\mathbf {d}}(z))\), setting

$$\begin{aligned} T_\varepsilon := \frac{{\mathbf {d}}(z)-\varepsilon }{{\tilde{M}}(R)}>0, \end{aligned}$$
(3.25)

every admissible trajectory-control pair \((x,u)\in {\mathcal { A}}_f({ z})\) with \(T_x\le +\infty \) and such that \(\lim _{t\rightarrow T^-_x}{\mathbf {d}}(x(t))=0\), verifies

$$\begin{aligned} {\mathbf {d}}(x(t))\ge \varepsilon \quad \forall t\in [0,T_\varepsilon ]. \end{aligned}$$
(3.26)

As a consequence, \(T_x\ge \frac{{\mathbf {d}}(z)}{{\tilde{M}}(R)}\).

Proof

Given \((x,u)\in {\mathcal { A}}_f({ z})\) as above, let us set \({\bar{\tau }}:=\sup \{t\ge 0: \ {\mathbf {d}}(x(t))\ge {\mathbf {d}}(z)\}\). The time \({\bar{\tau }}\) is clearly finite and defining \({\tilde{T}}_x^\varepsilon :=\inf \{t>{\bar{\tau }}: \ {\mathbf {d}}(x(t))\le \varepsilon \}\), one trivially has \(0\le {\bar{\tau }}< {\tilde{T}}_x^\varepsilon \) and \(x([{\bar{\tau }}, {\tilde{T}}_x^\varepsilon ])\subseteq \overline{B_R(\mathbf{C})}\setminus \mathbf{C}\). If \({\bar{z}}^\varepsilon \in \partial \mathbf{C}\) verifies

$$\begin{aligned} \varepsilon ={\mathbf {d}}(x({\tilde{T}}_x^\varepsilon ))=|x({\tilde{T}}_x^\varepsilon )-{\bar{z}}^\varepsilon |, \end{aligned}$$

then the uniform bound (3.26) is a consequence of the following inequalities

$$\begin{aligned} {\mathbf {d}}(z)={\mathbf {d}}(x({\bar{\tau }}))\le |x({\bar{\tau }})-{\bar{z}}^\varepsilon |\le |x({\bar{\tau }})-x({\tilde{T}}_x^\varepsilon )|+|x({\tilde{T}}_x^\varepsilon )-{\bar{z}}^\varepsilon |\le {\tilde{M}}(R )\,{\tilde{T}}_x^\varepsilon +\varepsilon , \end{aligned}$$

implying that \({\tilde{T}}_x^\varepsilon \ge \frac{{\mathbf {d}}(z)-\varepsilon }{{\tilde{M}}(R)}= T_\varepsilon \). Indeed, \({\mathbf {d}}(x(t))\ge {\mathbf {d}}(z)>\varepsilon \) for all \(t\in [0,{\bar{\tau }}]\) and the definition of \({\tilde{T}}_x^\varepsilon \) implies that \( {\mathbf {d}}(x(t))>\varepsilon \) for all \(t\in ]{\bar{\tau }}, {\tilde{T}}_x^\varepsilon ]\), so that \({\mathbf {d}}(x(t))\ge \varepsilon \) for all \(t\in [0,T_\varepsilon ]\). By the arbitrariness of \(\varepsilon >0\), this implies that \(T_x\ge {\mathbf {d}}(z)/{\tilde{M}}(R )\). \(\square \)

The next result allows us to determine, given a \({p_{0}}\)-MRF W, a positive constant R and a sampling time \(\delta >0\) small enough, a radius \(r<R\) such that any \(\pi \)-sampling trajectory-control pair for (3.5) with initial point z verifying \({\mathbf {d}}(z)\le R\) and with diam\((\pi )=\delta \) is \((r,R)\)-stable.

Lemma 3.9

Assume (H0). Let W be a \({p_{0}}\)-MRF with \({p_{0}}\ge 0\) and for any pair r, \(R>0\) with \(r< R\), let \(\delta =\delta (r,R)\) be defined according to (3.19). Then, for every fixed \(R>0\), \(\delta (\cdot ,R)\) is positive and increasing and

$$\begin{aligned} \lim _{r\rightarrow 0^+}\delta (r,R)=0, \qquad \delta (R):=\lim _{r\rightarrow R^-}\delta (r,R)<+\infty . \end{aligned}$$

Proof

By Sect. 3.1, we have that

$$\begin{aligned} \delta (r,R)={\bar{\delta }}({\hat{\mu }}(r),\sigma (R)), \end{aligned}$$

where \({\bar{\delta }}\) is defined as in (3.8), in Proposition 3.5. Since the map \(r\mapsto {\hat{\mu }}(r)\) is increasing, \({\hat{\mu }}(r)\) vanishes as \(r\rightarrow 0^+\) and \({\hat{\mu }}(r)\) is bounded by \(\sigma (R)\) as \(r\rightarrow R^-\), to conclude it suffices to show that for every \(\sigma >0\) the map \({\hat{\mu }}\mapsto {\bar{\delta }}({\hat{\mu }}, \sigma )\) (a) is increasing in \((0,\sigma )\), (b) vanishes at 0 and (c) is bounded as \({\hat{\mu }}\) tends to \({\hat{\mu }}(R)\). Let \(L({\hat{\mu }},\sigma )\) be the Lipschitz constant of W on \(W^{-1}([{\hat{\mu }},2\sigma ])\), let \(m=m(\sigma )\) be as in (3.9) and recall from (3.8) the following definition

$$\begin{aligned} {\bar{\delta }}({\hat{\mu }},\sigma )=\min \left\{ {\hat{\delta }}\left( \frac{{\hat{\mu }}}{4},2\sigma \right) , \frac{{\hat{\mu }}}{4L({\hat{\mu }},\sigma )\,m}\right\} . \end{aligned}$$

We note that \({\hat{\mu }}\mapsto L({\hat{\mu }},\sigma )\) is decreasing in \((0,\sigma )\): this implies at once conditions (b) and (c) and the fact that, for every \(\sigma >0\), the map \({\hat{\mu }}\mapsto {\hat{\mu }}/4L({\hat{\mu }},\sigma )m\) is increasing. To conclude, it remains to show that, for every \(\sigma >0\), the map \({\hat{\mu }}\mapsto {\hat{\delta }}({\hat{\mu }},\sigma )\) is increasing in \((0,\sigma )\). This monotonicity can be easily derived, arguing as above, from the definitions of the constants in the proof of [19, Proposition 3.5]; hence we omit the proof. \(\square \)

Owing to Lemma 3.9, given a \({p_{0}}\)-MRF W and a positive constant R, we can assume without loss of generality that \(\delta (\cdot ,R)\) defined as above is strictly increasing and continuous. Therefore, for any \(R>0\) we can define the inverse of the map \(r\mapsto \delta (r):=\delta (r ,R)\), given by

$$\begin{aligned} \delta \mapsto r(\delta ) \qquad \forall \delta \in [0,\delta (R)], \end{aligned}$$
(3.27)

which is continuous, strictly increasing and such that \(r(0)=0\) and \(r(\delta (R))=R\). As an immediate consequence, by the sample stabilizability of (2.1) with \((p_0,W)\)-regulated cost we get the following result.

Lemma 3.10

Assume (H0) and let W be a \({p_{0}}\)-MRF with \({p_{0}}\ge 0\). Then there exists a function \(\beta \in {\mathcal {KL}}\) such that, for each pair \(R>0\) and \(\delta \in (0,\delta (R))\), for every partition \(\pi \) of \([0,+\infty )\) with \(\text {diam}(\pi )=\delta \) and for any initial state \(z\in {{\mathbb {R}}}^n\setminus \mathbf{C}\) such that \(\mathbf{d}(z)\le R\), any \(\pi \)-sampling trajectory-control pair \((x,u)\) of (2.1) from z is defined in \([0,+\infty )\) and verifies:

$$\begin{aligned} \mathbf{d}(x(t))\le \max \{\beta (R,t), r(\delta )\} \qquad \forall t\ge 0. \end{aligned}$$
(3.28)

Moreover, if \({p_{0}}>0\),

$$\begin{aligned} x^0({\bar{T}}_x^{r(\delta )})=\int _0^{{\bar{T}}_x^{r(\delta )}}l(x(\tau ),u(\tau ))\,d\tau \le \frac{W(z)}{{p_{0}}}, \end{aligned}$$
(3.29)

where \( {\bar{T}}_x^{r(\delta )}=\inf \{t>0: \ {\mathbf {d}}(x(\tau ))\le r(\delta ) \ \ \forall \tau \ge t\} \), as in (2.6).

Remark 3.11

When f and l verify hypothesis (H0) and system (2.1) is sample stabilizable to \(\mathbf{C}\) with \((p_0,W)\)-regulated cost, there always exist continuous Euler solutions to (2.1). Indeed, for any z with \(0<{\mathbf {d}}(z)\le R\) and any sequence \((x_i,u_i)\) of \(\pi _i\)-sampling trajectory-control pairs of (2.1) with \(x_i(0)=z\), \(\delta _i:=\)diam\((\pi _i)\rightarrow 0\) as \(i\rightarrow +\infty \), and associated costs \(x^0_i\), it turns out that \((x_i^0, x_i)\) is equi-Lipschitz continuous on \([0,+\infty )\) with Lipschitz constant \(m>0\). Hence the existence of continuous, actually m-Lipschitz continuous, Euler solutions to (2.1) from z with m-Lipschitz continuous Euler costs follows straightforwardly by the Ascoli–Arzelà Theorem.

We are now in a position to prove that, if we assume (H0) and W is a \({p_{0}}\)-MRF with \({p_{0}}>0\), the feedback K Euler-stabilizes the system (2.1) to \(\mathbf{C}\) with \((p_0,W)\)-regulated cost. Given \(z\in {{\mathbb {R}}}^n\setminus \mathbf{C}\), let \(({\mathscr {X}}^0,{\mathscr {X}})\) be an Euler solution of (2.1) with initial condition \({\mathscr {X}}(0)=z\). By definition there exist a sequence of partitions \((\pi _i)\) of \([0,+\infty )\) such that \(\delta _i:=\text {diam}(\pi _i)\rightarrow 0\) as \(i\rightarrow \infty \), a sequence of \(\pi _i\)-sampling trajectory-control pairs \((x_i,u_i)\) for (2.1) with \(x_i(0)=z\) for each i, and associated costs \(x^0_i\), satisfying

$$\begin{aligned} (x_i^0,x_i)\rightarrow ({\mathscr {X}}^0,{\mathscr {X}}) \ \text {locally uniformly on } [0,+\infty ). \end{aligned}$$
(3.30)

Set \(R:={\mathbf {d}}(z)\) and let \(\beta \), \(\delta (R)\) and \(r:[0,\delta (R)]\rightarrow [0,R]\) be as in Lemma 3.10. Since \(\delta _i\rightarrow 0\), we can assume without loss of generality that \(\delta _i<\delta (R)\) for all i. Hence Lemma 3.10 implies that, for every i,

$$\begin{aligned} {\mathbf {d}}(x_i(t))\le \max \{\beta ({\mathbf {d}}(z),t),r(\delta _i)\} \qquad \forall t\ge 0 \end{aligned}$$
(3.31)

and

$$\begin{aligned} x_i^0(t)\le \frac{W(z)}{{p_{0}}} \quad \forall t\in [0,{\bar{T}}_{x_i}^{r(\delta _i)}]. \end{aligned}$$
(3.32)

As \(i\rightarrow \infty \), we have that \(\delta _i\rightarrow 0\) and consequently \(r(\delta _i)\rightarrow 0\). Then by (3.30) and (3.31) we obtain that

$$\begin{aligned} {\mathbf {d}}({\mathscr {X}}(t))\le \beta ({\mathbf {d}}(z),t) \qquad \forall t\ge 0. \end{aligned}$$
(3.33)

Hence \(\displaystyle \lim _{t\rightarrow +\infty }{\mathbf {d}}({\mathscr {X}}(t))=0\) and there exists

$$\begin{aligned} T_{\mathscr {X}}:=\inf \{\tau \ge 0: \ \ \lim _{t\rightarrow \tau ^-}{\mathbf {d}}({\mathscr {X}}(t))=0\}\le +\infty . \end{aligned}$$

To conclude the proof it remains only to show that

$$\begin{aligned} \lim _{t\rightarrow T_{\mathscr {X}}^-} {\mathscr {X}}^0(t) \le \frac{W(z)}{{p_{0}}}, \end{aligned}$$
(3.34)

where the limit is well defined, since \({\mathscr {X}}^0\), being the pointwise limit of monotone nondecreasing functions, is monotone nondecreasing. Passing, if necessary, to a subsequence, we set \({\bar{T}}:=\lim _i{\bar{T}}_{x_i}^{r(\delta _i)}\). In view of Lemma 3.8, \({\bar{T}}\) satisfies

$$\begin{aligned} {\bar{T}}\ge \frac{{\mathbf {d}}(z)}{m}>0. \end{aligned}$$
(3.35)

Then for any \(t\in [0,{\bar{T}})\) one has \({\bar{T}}_{x_i}^{r(\delta _i)}>t\) for all i sufficiently large and, taking the limit as \(i\rightarrow \infty \) in (3.32), by (3.30) it follows that

$$\begin{aligned} {\mathscr {X}}^0(t)\le \frac{W(z)}{{p_{0}}} \qquad \forall t\in [0,{\bar{T}}). \end{aligned}$$
(3.36)

If \({\bar{T}}=+\infty \), this implies directly the thesis (3.34). If instead \({\bar{T}}<+\infty \), the definition of \({\bar{T}}_{x_i}^{r(\delta _i)}\) yields that

$$\begin{aligned} {\mathbf {d}}(x_i({\bar{T}}_{x_i}^{r(\delta _i)}))= r(\delta _i). \end{aligned}$$

Moreover, by the locally uniform convergence of \(x_i\) to \({\mathscr {X}}\) and the m-Lipschitz continuity of \({\mathscr {X}}\), we get the following estimate

$$\begin{aligned} \begin{array}{l} {\mathbf {d}}({\mathscr {X}}({\bar{T}}))\le |{\mathscr {X}}({\bar{T}})-{\mathscr {X}}({\bar{T}}_{x_i}^{r(\delta _i)})| +|{\mathscr {X}}({\bar{T}}_{x_i}^{r(\delta _i)})-x_i({\bar{T}}_{x_i}^{r(\delta _i)})|+{\mathbf {d}}(x_i({\bar{T}}_{x_i}^{r(\delta _i)})), \end{array} \end{aligned}$$

where the r.h.s. tends to zero as \(i\rightarrow +\infty \), so that \({\mathbf {d}}({\mathscr {X}}({\bar{T}}))=0\). Thus, in any case, we have

$$\begin{aligned} 0< \frac{{\mathbf {d}}(z)}{m}\le T_{\mathscr {X}}\le {\bar{T}}, \end{aligned}$$
(3.37)

where the first inequality is again a consequence of the m-Lipschitz continuity of \({\mathscr {X}}\). This concludes the proof, since (3.36) now implies the thesis (3.34). \(\square \)

4 On the notion of \({p_{0}}\)-minimum restraint function

In Sect. 4.1 we obtain an equivalent formulation of the definition of \({p_{0}}\)-MRF. Using this condition, in Sect. 4.2 we prove that, when the data f and l are locally Lipschitz continuous, the existence of a locally Lipschitz, not necessarily semiconcave, \({p_{0}}\)-MRF W still guarantees sample and Euler stabilizability of the control system (2.1) to \(\mathbf{C}\) with \((p_0,W)\)-regulated cost. Section 4.3 is devoted to extending these results to the original notion of \({p_{0}}\)-MRF introduced in [23].

4.1 An equivalent notion of \({p_{0}}\)-MRF

Proposition 4.1

Assume (H0). Let \(W :\overline{{{\mathbb {R}}}^n\setminus {\mathbf {C}}}\rightarrow [0,+\infty )\) be a continuous function, which is positive definite, proper, and semiconcave on \({{\mathbb {R}}}^n\setminus {\mathbf {C}}\). Then W is a \({p_{0}}\)-MRF for some \({p_{0}}\ge 0\) if and only if, for any continuous function \(W_1 :\overline{{{\mathbb {R}}}^n\setminus {\mathbf {C}}}\rightarrow [0,+\infty )\), positive definite and proper on \({{\mathbb {R}}}^n\setminus {\mathbf {C}}\), there is some continuous, strictly increasing function \({\tilde{\gamma }}:(0,+\infty )\rightarrow (0,+\infty )\) such that

$$\begin{aligned} H (x,{p_{0}}, D^*W(x) )\le -{\tilde{\gamma }}(W_1(x)) \quad \forall x\in {{{{\mathbb {R}}}^n}\setminus {\mathbf {C}}}. \end{aligned}$$

For instance, one can choose \(W_1={\mathbf {d}}\), the distance function from \(\mathbf{C}\).

Proposition 4.1 is a consequence of the following more general result, involving a locally Lipschitz, not necessarily semiconcave, function W.

Proposition 4.2

Assume (H0) and let \(W:\overline{{{\mathbb {R}}}^n\setminus {{\mathbf {C}}}}\rightarrow [0,+\infty )\) be a continuous map, which is locally Lipschitz, positive definite, and proper on \({{\mathbb {R}}}^n\setminus {\mathbf {C}}\). Given \({p_{0}}\ge 0\) and a set \(\Omega \subseteq {{{{\mathbb {R}}}^n}\setminus {\mathbf {C}}}\), the following statements are equivalent:

(i)

    W verifies the decrease condition

    $$\begin{aligned} H (x,{p_{0}}, \partial _PW(x))\le -\gamma (W(x)) \quad \forall x\in \Omega , \end{aligned}$$
    (4.1)

    for some continuous, strictly increasing function \(\gamma :(0,+\infty )\rightarrow (0,+\infty )\);

(ii)

    for any continuous function \(W_1 :\overline{{{\mathbb {R}}}^n\setminus {\mathbf {C}}}\rightarrow [0,+\infty )\) which is positive definite and proper on \({{\mathbb {R}}}^n\setminus {\mathbf {C}}\), there exists some continuous, strictly increasing function \({\tilde{\gamma }}:(0,+\infty )\rightarrow (0,+\infty )\) such that W verifies

    $$\begin{aligned} H (x,{p_{0}}, \partial _PW(x) )\le -{\tilde{\gamma }}(W_1(x)) \quad \forall x\in \Omega . \end{aligned}$$
    (4.2)

Proof

Let us first prove that (i) \(\Rightarrow \) (ii). Assume that W verifies the decrease condition (4.1). Given an arbitrary continuous map \(W_1\), positive definite and proper on \({{\mathbb {R}}}^n\setminus \mathbf{C}\), let \({\tilde{\gamma }}:(0,+\infty )\rightarrow (0,+\infty )\) be a continuous, strictly increasing approximation from below of the increasing map \(r\mapsto \gamma \circ \underline{g}_{\, W,W_1}(r)\), where \(\underline{g}_{\, W,W_1}\) is defined according to Lemma 3.6. Then, by (3.15), for any \(x\in {{\mathbb {R}}}^n\setminus \mathbf{C}\), one has

$$\begin{aligned} W(x)\ge \underline{g}_{\, W,W_1}(W_1(x)) \ \Longrightarrow \ \gamma (W(x))\ge \gamma \circ \underline{g}_{\, W,W_1}(W_1(x))\ge {\tilde{\gamma }}(W_1(x)), \end{aligned}$$

so that (4.1) implies (4.2) for such \({\tilde{\gamma }}\).

To prove that (ii) \(\Rightarrow \) (i), it is enough to exchange the roles of W and \(W_1\). Precisely, if (4.2) is verified for some W, \(W_1\) and \({\tilde{\gamma }}\) as in the statement of the proposition, then, arguing as above, one obtains that W verifies (4.1) by choosing as \(\gamma :(0,+\infty )\rightarrow (0,+\infty )\) any continuous, strictly increasing approximation from below of the increasing map \(r\mapsto {\tilde{\gamma }}\circ \underline{g}_{\, W_1,W}(r)\). \(\square \)
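For instance, one admissible choice of such an approximation from below (a sketch of ours, not the construction used in the cited references) is the following: if \(h:(0,+\infty )\rightarrow (0,+\infty )\) denotes a nondecreasing, positive map (here \(h=\gamma \circ \underline{g}_{\, W,W_1}\)), one can set

$$\begin{aligned} {\tilde{\gamma }}(r):=\frac{2}{1+r}\int _{r/2}^{r} h(s)\, ds \qquad \forall r>0. \end{aligned}$$

Indeed, the substitution \(s=rt\) gives \({\tilde{\gamma }}(r)=\frac{2r}{1+r}\int _{1/2}^{1}h(rt)\,dt\), the product of a strictly increasing positive function of r and a nondecreasing positive function of r, so \({\tilde{\gamma }}\) is strictly increasing; it is continuous because h, being nondecreasing, is locally bounded, so that \(r\mapsto \int _{r/2}^{r}h(s)\,ds\) is locally Lipschitz; finally, since h is nondecreasing, \({\tilde{\gamma }}(r)\le \frac{r}{1+r}\,h(r)<h(r)\) for every \(r>0\).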

Proof of Proposition 4.1

The only nontrivial point in deriving Proposition 4.1 from Proposition 4.2 is that (4.1) involves the proximal subdifferential \(\partial _PW(x)\) at x instead of the set of limiting gradients \(D^*W(x)\) at x, which appears in the decrease condition for a \({p_{0}}\)-MRF. However, when W is locally Lipschitz continuous, condition (4.1) with \(\Omega ={{\mathbb {R}}}^n\setminus \mathbf{C}\) readily implies the following inequality:

$$\begin{aligned} H (x,{p_{0}}, \partial _LW(x))\le -\gamma (W(x)) \quad \forall x\in {{{{\mathbb {R}}}^n}\setminus {\mathbf {C}}}, \end{aligned}$$

where \(\partial _LW(x)\) denotes the limiting subdifferential at x. This concludes the proof, since a \({p_{0}}\)-MRF W is locally semiconcave and therefore \(\partial _LW(x)=D^*W(x)\) at any x. \(\square \)

4.2 Lipschitz continuous \({p_{0}}\)-MRF

Consider the following hypothesis:

(H1) The sets \(U\subset {{\mathbb {R}}}^m\), \({\mathbf {C}}\subset {{\mathbb {R}}}^n\) are closed and the boundary \(\partial {\mathbf {C}}\) is compact. \(f:\overline{({{\mathbb {R}}}^n\setminus \mathbf{C})}\times U\rightarrow {{\mathbb {R}}}^n\), \(l: \overline{({{\mathbb {R}}}^n\setminus \mathbf{C})}\times U\rightarrow [0,+\infty )\) are continuous functions with the property that, for every compact subset \({{\mathcal {K}}}\subset \overline{{{\mathbb {R}}}^n\setminus \mathbf{C}}\), there exist \(M_f\), \(M_l\), \(L_f\), \(L_l>0\) such that

$$\begin{aligned} \left\{ \begin{array}{l} |f(x,u)|\le M_f, \quad l(x,u)\le M_l \qquad \forall (x,u)\in {{\mathcal {K}}}\times U, \\ |f(x_1,u)-f(x_2,u)|\le L_f|x_1-x_2|, \\ |l(x_1,u)-l(x_2,u)|\le L_l |x_1-x_2|\qquad \forall (x_1,u), \, (x_2,u)\in {{\mathcal {K}}}\times U. \end{array}\right. \end{aligned}$$

Under (H1), we obtain the main result of this section:

Theorem 4.3

Assume (H1) and let \({p_{0}}\ge 0\). Let \(W:\overline{ {{\mathbb {R}}}^n\setminus \mathbf{C}}\rightarrow [0,+\infty )\) be a locally Lipschitz continuous map on \(\overline{{{\mathbb {R}}}^n\setminus \mathbf{C}}\), which is positive definite and proper on \({{\mathbb {R}}}^n\setminus {\mathbf {C}}\) and verifies the decrease condition

$$\begin{aligned} H (x,{p_{0}}, \partial _PW(x) )\le -{\tilde{\gamma }}(W_1(x)) \quad \forall x\in {{{{\mathbb {R}}}^n}\setminus {\mathbf {C}}}, \end{aligned}$$
(4.3)

for some continuous, strictly increasing function \({\tilde{\gamma }}:(0,+\infty )\rightarrow (0,+\infty )\) and some continuous function \(W_1 :\overline{{{\mathbb {R}}}^n\setminus {\mathbf {C}}}\rightarrow [0,+\infty )\), positive definite, and proper on \({{\mathbb {R}}}^n\setminus {\mathbf {C}}\). Then there exists a \(\displaystyle \frac{{p_{0}}}{2}\)-MRF \({\bar{W}}\), which also satisfies \({\bar{W}}(x)\le W(x)\) for all \(x\in {{\mathbb {R}}}^n\setminus {\mathbf {C}}\).

Theorem 4.3, whose proof is postponed to “Appendix A.1”, generalizes the result on the existence of a semiconcave Control Lyapunov Function obtained in [28, sect. 5] to the present case, where the decrease condition also involves the Lagrangian l and the target is not the origin but an arbitrary closed set \(\mathbf{C}\) with compact boundary.

Let us call a map W as in Theorem 4.3 a Lipschitz continuous \({p_{0}}\)-MRF. As an immediate consequence of Theorems 4.3 and 3.1, we have the following:

Corollary 4.4

Assume (H1) and let \({p_{0}}> 0\). Let \(W:\overline{ {{\mathbb {R}}}^n\setminus \mathbf{C}}\rightarrow [0,+\infty )\) be a Lipschitz continuous \({p_{0}}\)-MRF. Then there exists a locally bounded feedback \(K:{{\mathbb {R}}}^n\setminus \mathbf{C}\rightarrow U\) that sample and Euler stabilizes system (1.1) to \(\mathbf{C}\) with \((p_0/2,W)\)-regulated cost.

Note that, using the notation of Theorem 4.3, the feedback K in Corollary 4.4 is actually a \({\bar{W}}\)-feedback and the claim above relies on the inequality \({\bar{W}}\le W\).
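Indeed, roughly speaking, a \(({p_{0}},W)\)-regulated cost amounts to the bound \(\int _0^{T_x}l(x(t),u(t))\,dt\le W(z)/{p_{0}}\) for the initial state z (cf. also Sect. 5). Hence, applying Theorem 3.1 to the \({p_{0}}/2\)-MRF \({\bar{W}}\) provided by Theorem 4.3 and using \({\bar{W}}\le W\), the sample and Euler trajectories generated by K satisfy

$$\begin{aligned} \int _0^{T_x} l(x(t),u(t))\, dt \le \frac{{\bar{W}}(z)}{{p_{0}}/2}\le \frac{W(z)}{{p_{0}}/2}=\frac{2\,W(z)}{{p_{0}}}, \end{aligned}$$

which is precisely the \(({p_{0}}/2,W)\)-regulated cost estimate of Corollary 4.4.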

4.3 Comparison with the original notion of \({p_{0}}\)-MRF

Let us call \({p_{0}}\)-OMRF the notion of \({p_{0}}\)-MRF originally introduced in [23].

Definition 4.5

(\({p_{0}}\)-OMRF) Let \(W:\overline{{{\mathbb {R}}}^n\setminus {{\mathbf {C}}}}\rightarrow [0,+\infty )\) be a continuous function, and let us assume that W is locally semiconcave, positive definite, and proper on \({{\mathbb {R}}}^n\setminus {\mathbf {C}}\). We say that W is a \({p_{0}}\)-OMRF for some \({p_{0}}\ge 0\) if it verifies

$$\begin{aligned} H (x,{p_{0}}, D^*W(x) )<0 \quad \forall x\in {{{{\mathbb {R}}}^n}\setminus {\mathbf {C}}}. \end{aligned}$$
(4.4)

A \({p_{0}}\)-MRF is obviously a \({p_{0}}\)-OMRF, but the converse might be false. By [23] we have the following result.

Proposition 4.6

[23, Prop. 3.1] Assume that W is a \({p_{0}}\)-OMRF with \({p_{0}}\ge 0\). Then for every \(\sigma >0\) there exists a continuous, increasing map \(\gamma _\sigma :(0,\sigma ]\rightarrow (0,+\infty )\) such that

$$\begin{aligned} H (x,{p_{0}},D^*W(x)) <-\gamma _\sigma (W(x)) \qquad \forall x\in W^{-1}((0,\sigma ]). \end{aligned}$$
(4.5)

Proposition 4.6 clarifies the difference between the two notions: the existence of a \({p_{0}}\)-OMRF implies the existence of a rate function \(\gamma _\sigma \) which is, in general, not global. In particular, \(\gamma _\sigma \) may become smaller and smaller as \(\sigma \) tends to \(+\infty \). Consequently, the feedback K, too, can be defined only for a given \(\sigma >0\), on the sublevel set \(W^{-1}((0,\sigma ])\).
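As a toy illustration of this phenomenon (ours, not taken from [23]), let \(n=m=1\), \(\mathbf{C}:=(-\infty ,0]\), \(U:=[0,1]\), \(l\equiv 0\) and \(f(x,u):=-xu/(1+x^2)\). The function \(W(x):=x\) is smooth, positive definite, and proper on \((0,+\infty )\), and

$$\begin{aligned} H(x,{p_{0}},D^*W(x))=\inf _{u\in [0,1]}\Big \{-\frac{xu}{1+x^2}\Big \}=-\frac{x}{1+x^2}<0 \qquad \forall x>0, \end{aligned}$$

so W is a \({p_{0}}\)-OMRF for every \({p_{0}}\ge 0\); on each sublevel set one may take \(\gamma _\sigma (r):=r/(2(1+\sigma ^2))\) in (4.5). However, no global rate exists: any continuous, strictly increasing \(\gamma \) with \(\gamma (x)\le x/(1+x^2)\) for all \(x>0\) would satisfy \(\gamma (x)\le 1/x\rightarrow 0\) as \(x\rightarrow +\infty \), contradicting \(\gamma (x)\ge \gamma (1)>0\) for \(x\ge 1\). Hence W is not a \({p_{0}}\)-MRF.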

Remark 4.7

When a \({p_{0}}\)-OMRF W verifies condition (4.4) in the following stronger form

$$\begin{aligned} \forall M>0: \quad \sup _{ p\in D^*W(x)} H(x,{p_{0}}, p)<0 \quad \forall x\in {{{{\mathbb {R}}}^n}\setminus {\mathbf {C}}} \text { s.t. } {\mathbf {d}}(x)\ge M \,, \end{aligned}$$
(4.6)

it is not difficult to prove that, under the assumptions of Proposition 4.6, there exists a continuous, strictly increasing function \(\gamma :(0,+\infty )\rightarrow (0,+\infty )\) independent of \(\sigma \), such that (4.5) holds for all \(x\in {{\mathbb {R}}}^n\setminus \mathbf{C}\) (see [23, Remark 3.1]). In other words,

a \({p_{0}}\)-OMRF W verifying (4.6) is actually a \({p_{0}}\)-MRF.

The proof of Theorem 3.1 can be easily adapted to derive the following result.

Theorem 4.8

Assume that f, l verify hypothesis (H0) and let W be a \({p_{0}}\)-OMRF with \({p_{0}}>0\). Then for any \(\sigma >0\) there exists a locally bounded feedback \(K:W^{-1}((0,\sigma ])\rightarrow U\) that sample and Euler stabilizes system (2.1) to \(\mathbf{C}\) with \((p_0,W)\)-regulated cost for any initial point \(z\in W^{-1}((0,\sigma ])\).

As in the case of \({p_{0}}\)-MRFs, when f and l are locally Lipschitz continuous in x, we can replace the semiconcavity assumption in the definition of a \({p_{0}}\)-OMRF with local Lipschitz continuity. Precisely, we establish the following.

Theorem 4.9

Assume (H1) and let \({p_{0}}\ge 0\). Let \(W:\overline{{{\mathbb {R}}}^n\setminus \mathbf{C}}\rightarrow [0,+\infty )\) be a locally Lipschitz continuous map on \(\overline{{{\mathbb {R}}}^n\setminus \mathbf{C}}\), which is positive definite and proper on \({{\mathbb {R}}}^n\setminus {\mathbf {C}}\) and verifies the decrease condition

$$\begin{aligned} H (x,{p_{0}}, \partial _LW(x))<0 \quad \forall x\in {{{{\mathbb {R}}}^n}\setminus {\mathbf {C}}}. \end{aligned}$$
(4.7)

Then there exists a \(\displaystyle \frac{{p_{0}}}{2}\)-OMRF \({\bar{W}}\) which also satisfies \({\bar{W}}(x)\le W(x)\) for all \(x\in {{\mathbb {R}}}^n\setminus {\mathbf {C}}\).

The proof of this theorem is sketched in “Appendix A.2”.

Let us call a map W as in Theorem 4.9 a Lipschitz continuous \({p_{0}}\)-OMRF. Theorems 4.9 and 4.8 imply the following:

Corollary 4.10

Assume (H1) and let \({p_{0}}> 0\). Let \(W:\overline{ {{\mathbb {R}}}^n\setminus \mathbf{C}}\rightarrow [0,+\infty )\) be a Lipschitz continuous \({p_{0}}\)-OMRF. Then for any \(\sigma >0\) there exists a locally bounded feedback \(K: W^{-1}((0,\sigma ])\rightarrow U\) that sample and Euler stabilizes system (2.1) to \(\mathbf{C}\) with \((p_0/2,W)\)-regulated cost for any initial point \(z\in W^{-1}((0,\sigma ])\).

5 An example: stabilization of the non-holonomic integrator control system with regulated cost

Let us illustrate the preceding theory through a classical example. Precisely, in the first part of this section we provide a \({p_{0}}\)-MRF \(W_1\) for the non-holonomic integrator control system associated with a Lagrangian l that verifies a suitable growth condition (see (5.3) below). In view of Theorem 3.1, this implies the existence of a possibly discontinuous feedback K that sample and Euler stabilizes the non-holonomic integrator to the origin with cost bounded above by \(W_1/{p_{0}}\). Furthermore, we show how weakening the regularity requirements on the \({p_{0}}\)-MRF (by replacing semiconcavity with Lipschitz continuity) may be crucial for its effective construction. In particular, for any bounded Lagrangian l which cannot satisfy assumption (5.3) below for any \(C>0\) (as, for instance, in the case of the minimum time problem, where \(l\equiv 1\)), we provide a less regular \({p_{0}}\)-MRF \(W_2\), which is Lipschitz continuous but not semiconcave. In this case, the sample and Euler stabilizability of the control system with \(({{p_{0}}/2},W_2)\)-regulated cost is guaranteed by Corollary 4.4.

Set \(U:=\{u=(u_1,u_2)\in {{\mathbb {R}}}^2: \ \ u_1^2+u_2^2\le 1\}\), \({\mathbf {C}}:=\{0\}\subset {{\mathbb {R}}}^3\) and consider the non-holonomic integrator control system:

$$\begin{aligned} {\left\{ \begin{array}{ll} \dot{x}_1= u_1\\ \dot{x}_2= u_2\\ \displaystyle \dot{x}_3= x_1u_2-x_2u_1, \qquad u(t)=(u_1,u_2)(t)\in U. \end{array}\right. } \end{aligned}$$
(5.1)

Given a continuous Lagrangian \(l(x,u)\ge 0\), let us associate to (5.1) the cost

$$\begin{aligned} \int _0^{T_x}l(x(t),u(t))\,dt \end{aligned}$$
(5.2)

(as in the rest of the paper, \(T_x\) denotes the exit-time of x from \({{\mathbb {R}}}^3\setminus \{0\}\)). Set

$$\begin{aligned} f(x,u):=(u_1,u_2,x_1u_2-x_2u_1) \qquad \forall (x,u)\in {{\mathbb {R}}}^3\times U. \end{aligned}$$

The map \(W_1\) introduced in [21], given by

$$\begin{aligned} W_1(x):=\left( \sqrt{x_1^2+x_2^2}-|x_3|\right) ^2+x_3^2 \qquad \forall x\in {{\mathbb {R}}}^3, \end{aligned}$$

is proper, positive definite, locally semiconcave in \({{\mathbb {R}}}^3\setminus \{0\}\), and verifies

$$\begin{aligned} \min _{u\in U} \langle p, f(x,u)\rangle =-\sqrt{V(x)}\qquad \forall x\in {{\mathbb {R}}}^3\setminus \{0\}, \ \forall p\in D^* W_1 (x), \end{aligned}$$

where

$$\begin{aligned} V(x):=\left( \sqrt{x_1^2+x_2^2}-|x_3|\right) ^2+\left( \sqrt{x_1^2+x_2^2}-2|x_3|\right) ^2(x_1^2+x_2^2) \quad \forall x\in {{\mathbb {R}}}^3. \end{aligned}$$

Therefore, \(W_1\) is a Control Lyapunov Function for the control system (5.1) and, consequently, any \(W_1\)-feedback sample and Euler stabilizes (5.1) to the origin [29]. When the Lagrangian l satisfies, for some positive constant C,

$$\begin{aligned} 0\le l(x,u)\le C \sqrt{V(x)}\quad \forall (x,u)\in ({{\mathbb {R}}}^3\setminus \{0\})\times U, \end{aligned}$$
(5.3)

then \(W_1\) is also a \({p_{0}}\)-MRF for every \({p_{0}}\in (0,1/C)\). Indeed, for all \(x\in {{\mathbb {R}}}^3\setminus \{0\}\) and for all \(p\in D^*W_1(x)\), one has

$$\begin{aligned} \displaystyle H(x,p_0,p)&=\min _{u\in U}\{ \langle p, f(x,u)\rangle + {p_{0}}\,l(x,u)\} \\&\displaystyle \le \min _{u\in U}\{ \langle p, f(x,u)\rangle \}+ p_0 C \sqrt{V(x)}=-(1-p_0 C)\sqrt{V(x)}. \end{aligned}$$
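As a purely illustrative numerical sanity check (a sketch of ours, not part of the argument above), the previous inequality can be tested at randomly sampled points where \(W_1\) is differentiable. Since U is the closed unit disc and \(\langle p, f(x,u)\rangle =u_1(p_1-p_3x_2)+u_2(p_2+p_3x_1)\) is linear in u, its infimum over U equals \(-|q|\), with \(q:=(p_1-p_3x_2,\,p_2+p_3x_1)\), attained at \(u=-q/|q|\) whenever \(q\ne 0\). In the sketch below, the constant C, the Lagrangian \(l=C\sqrt{V}\), the value of \({p_{0}}\), the initial state and the sample-and-hold feedback \(u=-q/|q|\) are illustrative choices, not prescribed by the paper.

import numpy as np

rng = np.random.default_rng(0)

def W1(x):
    rho = np.hypot(x[0], x[1])
    return (rho - abs(x[2]))**2 + x[2]**2

def V(x):
    rho = np.hypot(x[0], x[1])
    return (rho - abs(x[2]))**2 + (rho - 2.0*abs(x[2]))**2 * rho**2

def grad_W1(x):
    # gradient of W1 at points of differentiability (rho > 0, x3 != 0);
    # the max() below is only a numerical safeguard against division by zero
    x1, x2, x3 = x
    rho = max(np.hypot(x1, x2), 1e-12)
    a = 2.0*(rho - abs(x3))
    return np.array([a*x1/rho, a*x2/rho, -np.sign(x3)*a + 2.0*x3])

def f(x, u):
    return np.array([u[0], u[1], x[0]*u[1] - x[1]*u[0]])

def q_vec(x, p):
    # <p, f(x,u)> = <u, q>, hence its minimum over the unit disc is -|q|, at u = -q/|q|
    return np.array([p[0] - p[2]*x[1], p[1] + p[2]*x[0]])

C, p0 = 1.0, 0.4                     # illustrative choices with p0 < 1/C
lagr = lambda x: C*np.sqrt(V(x))     # a Lagrangian satisfying (5.3)

# 1) pointwise check of the decrease inequality at random smooth points
for _ in range(10_000):
    x = rng.uniform(-2.0, 2.0, size=3)
    if np.hypot(x[0], x[1]) < 1e-6 or abs(x[2]) < 1e-6:
        continue                     # skip points where W1 may fail to be differentiable
    H = -np.linalg.norm(q_vec(x, grad_W1(x))) + p0*lagr(x)
    assert H <= -(1.0 - p0*C)*np.sqrt(V(x)) + 1e-9

# 2) sample-and-hold simulation of (5.1) with the feedback u = -q/|q|
x = np.array([1.0, 0.5, 0.8])
bound = W1(x)/p0                     # the regulated-cost bound W1(z)/p0
h, T, cost = 1e-3, 15.0, 0.0
for _ in range(int(T/h)):
    q = q_vec(x, grad_W1(x))
    u = -q/np.linalg.norm(q) if np.linalg.norm(q) > 1e-12 else np.zeros(2)
    cost += h*lagr(x)                # Riemann sum approximating the integral cost
    x = x + h*f(x, u)
print(f"accumulated cost = {cost:.4f}, bound W1(z)/p0 = {bound:.4f}")
print(f"final distance from the origin = {np.linalg.norm(x):.4f}")

The accumulated cost should remain below the bound \(W_1(z)/{p_{0}}\), while the distance from the origin should become small; the sampling step limits the final accuracy, in accordance with the sample-and-hold notion of stabilizability.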

However, the Control Lyapunov Function \(W_1\) cannot be a \({p_{0}}\)-MRF when

$$\begin{aligned} \lim _{x\rightarrow 0} \frac{\inf _{u\in U} l(x,u)}{\sqrt{V(x)}}=+\infty . \end{aligned}$$
(5.4)

Since V(x) tends to \(0^+\) as \(x\rightarrow 0\), condition (5.4) holds, for instance, for the minimum time problem, where \(l\equiv 1\) (equivalently, in this case (5.3) fails for every \(C>0\)).

For any bounded Lagrangian l, let \(M_l>0\) verify \(l(x,u)\le M_l\) for all \((x,u)\in ({{\mathbb {R}}}^3\setminus \{0\})\times U\). Then a discontinuous feedback that sample and Euler stabilizes (5.1) and, at the same time, provides strategies for which the target is reached with regulated cost can be obtained by considering the following Control Lyapunov Function \(W_2\), introduced in [27]:

$$\begin{aligned} W_2(x):=\max \left\{ \sqrt{x_1^2+x_2^2}, |x_3|-\sqrt{x_1^2+x_2^2}\right\} \qquad \forall x\in {{\mathbb {R}}}^3. \end{aligned}$$

The map \(W_2\) is locally semiconcave only outside the set \(S:=\{(x_1,x_2,x_3)\in {{\mathbb {R}}}^3\mid x_3^2=4(x_1^2+x_2^2)\}\); therefore, it is not a \({p_{0}}\)-MRF. However, \(W_2\) matches the weaker definition of Lipschitz continuous \({p_{0}}\)-MRF for \({p_{0}}<1/M_l\): it is indeed a locally Lipschitz continuous map on \({{\mathbb {R}}}^3\), which is positive definite and proper on \({{\mathbb {R}}}^3\setminus \{0\}\), and a direct computation shows that

$$\begin{aligned} H(x,p_0,p)\le \min _{u\in U} \langle f (x,u), p \rangle + {p_{0}}M_l<0 \ \ \forall x\in {{\mathbb {R}}}^3\setminus \{0\}, \ \forall p\in \partial _P W_2(x) \end{aligned}$$

(see also [29] and [21]). Since the data f and l verify assumption (H1), it follows by Corollary 4.4 that (5.1)–(5.2) is sample and Euler stabilizable with \(({p_{0}}/2,W_2)\)-regulated cost as soon as the Lagrangian l is bounded.
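The computation above can likewise be tested numerically at points where \(W_2\) is differentiable, i.e. away from the singular set S and from the \(x_3\)-axis, where only the active branch of the maximum contributes to the gradient. The sketch below (ours, purely illustrative) takes the minimum time Lagrangian \(l\equiv 1\), so that \(M_l=1\), and \({p_{0}}=0.9<1/M_l\); as before, the infimum of \(\langle f(x,u),p\rangle \) over the unit disc equals \(-|q|\) with \(q=(p_1-p_3x_2,\,p_2+p_3x_1)\).

import numpy as np

rng = np.random.default_rng(1)
p0, M_l = 0.9, 1.0                   # minimum time: l == 1, hence M_l = 1 and p0 < 1/M_l

def grad_W2(x):
    # gradient of W2 = max{rho, |x3| - rho} where the maximum is strictly attained
    x1, x2, x3 = x
    rho = np.hypot(x1, x2)
    if rho > abs(x3) - rho:          # first branch active: W2 = rho
        return np.array([x1/rho, x2/rho, 0.0])
    return np.array([-x1/rho, -x2/rho, np.sign(x3)])   # second branch: W2 = |x3| - rho

checked = 0
for _ in range(20_000):
    x = rng.uniform(-2.0, 2.0, size=3)
    rho = np.hypot(x[0], x[1])
    if rho < 1e-3 or abs(abs(x[2]) - 2.0*rho) < 1e-3:
        continue                     # skip the x3-axis and a neighbourhood of the cone S
    p = grad_W2(x)
    q = np.array([p[0] - p[2]*x[1], p[1] + p[2]*x[0]])
    # min_u <f(x,u), p> = -|q|, with |q| = 1 on the first branch, sqrt(1+rho^2) on the second
    assert -np.linalg.norm(q) + p0*M_l < 0.0
    checked += 1
print(f"strict decrease verified at {checked} sampled points of differentiability")

The sketch deliberately avoids the singular set S, where \(W_2\) is not differentiable and its subdifferential is larger.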

6 Conclusions

In this paper we addressed the sample and Euler stabilizability of nonlinear control systems in an optimal control theoretic framework. We introduced the notion of sample and Euler trajectories with regulated cost, which combines stabilizability with an upper bound on the payoff depending on the initial state. Under mild regularity hypotheses on the vector field f and on the Lagrangian l, and for a closed, possibly unbounded control set, we proved that the existence of a special Control Lyapunov Function W, called a \({p_{0}}\)-Minimum Restraint Function (\({p_{0}}\)-MRF), implies the existence of a feedback whose sample and Euler stabilizing trajectories all have \((p_0,W)\)-regulated cost. The proof is constructive: indeed, it is based on the synthesis of appropriate feedbacks derived from W. As in the case of classical Control Lyapunov Functions, this construction requires W to be locally semiconcave. However, by generalizing an earlier result by Rifford [28], we established that it is possible to trade stronger regularity assumptions on f and l for milder regularity assumptions on W. In particular, we showed that if the vector field f and the Lagrangian l are locally Lipschitz up to the boundary of the target, then the existence of a merely locally Lipschitz \({p_{0}}\)-MRF W provides sample and Euler stabilizability with \((p_0/2,W)\)-regulated cost.

The present work is part of an ongoing, wider investigation of global asymptotic controllability and stabilizability in an optimal control perspective. A slightly weaker notion of \(p_0\)-MRF—called here \(p_0\)-OMRF—was introduced in [23] and further extended in [19] to more general optimization problems, in order to yield global asymptotic controllability with regulated cost. This paper represents the stability-oriented counterpart of [23]. In a forthcoming paper we will address the question of stabilizability with regulated cost for possibly non-coercive optimization problems with unbounded controls. Other interesting research directions include the relation between \(p_0\)-MRFs and input-to-state stability (in the fashion of [21]) and the study of a possible inverse Lyapunov theorem for \(p_0\)-MRFs—i.e., whether the results in [30] can be extended to \(p_0\)-MRFs by showing that their existence is also a necessary condition for the global asymptotic controllability of the control system with regulated cost.