1 Introduction

Consider a nonlinear control system

$$\begin{aligned} \dot{x} = \sum _{k=1}^m f_k (x,t)u_k, \end{aligned}$$
(1)

where \(x=(x_1,\ldots ,x_n)^\top \in D\subseteq {\mathbb {R}}^n\) is the state, \(u=(u_1,\ldots ,u_m)^\top \in {\mathbb {R}}^m\) is the control, D is a domain, and the time-dependent vector fields \(f_k:D\times {\mathbb R}^+\rightarrow {{\mathbb {R}}}^n\) are regular enough to guarantee the existence and uniqueness of solutions to the Cauchy problem for system (1) with any initial data \(x(t_0)=x^0\in D\), \(t_0\ge 0\), and any admissible control \(u:[t_0,+\infty )\rightarrow {\mathbb R}^m\). We will formulate the required regularity assumptions precisely below.

The driftless control-affine system (1) is a fundamental mathematical model in nonholonomic mechanics: in the case \(m<n\), it describes the kinematics of systems subject to nonintegrable constraints (we refer to the book [3] for a general reference). In this area, the class of systems with time-independent vector fields is of special interest:

$$\begin{aligned} \dot{x}=\sum _{k=1}^m {\tilde{f}}_k (x)u_k,\quad x\in D\subseteq {\mathbb {R}}^n,\; u\in {\mathbb {R}}^m,\quad {\tilde{f}}_k\in C^1(D;{{\mathbb {R}}}^n). \end{aligned}$$
(2)

In contrast to linear control theory, the controllability of system (2) does not imply its stabilizability by a regular feedback law of the form \(u=h(x)\). A famous example of a completely controllable system (2) with \(n=3\) and \(m=2\) that is not stabilizable in the classical sense was presented in [5]. Since then, the stabilization and motion planning problems for nonholonomic systems have been extensively studied by experts in nonlinear control theory, mechanics, and robotics. A survey of essential contributions in this area is given in Sect. 2, where the advantages and limitations of known approaches are discussed.

To the best of our knowledge, the present paper contains the first description of a unified control design method for solving a variety of different control problems such as stabilization of an equilibrium point \(x=x^*\), tracking an arbitrary curve in the state space, and motion planning with obstacles for rather general nonautonomous systems (1). The main idea behind our construction is to design time-dependent feedback controllers in such a way that the trajectories of the corresponding closed-loop system approximate the trajectories of a gradient-like system of the form

$$\begin{aligned} {\dot{x}}=-\gamma \frac{\partial }{\partial x} P( x,t),\quad x\in {\mathbb {R}}^n, \end{aligned}$$
(3)

where the potential function P(x,t) and gain \(\gamma >0\) are to be defined according to the specific problem statement. In particular, the use of Lyapunov-like functions P allows us to solve the stabilization and trajectory tracking problems, while so-called navigation functions or artificial potential fields can be exploited for generating collision-free motion of system (1) in domains with obstacles. We discuss these problems in more detail in Sect. 3. The key contribution of our work is twofold:

  • a unified approach for solving the stabilization and motion planning problems for driftless control-affine systems of the form (1) under the bracket-generating condition;

  • convergence results under relaxed regularity assumptions on the vector fields and their directional derivatives. In particular, the vector fields of the considered class of systems are not required to be smooth.

The subsequent presentation is organized as follows. The outcomes of the literature study are reported in Sect. 2. A family of \(\varepsilon \)-periodic feedback controllers is introduced in Sect. 3 in the form of trigonometric polynomials with respect to time with coefficients depending on the system state. It is shown in Sect. 3.2 that, under a suitable choice of the small parameter \(\varepsilon \), the proposed controllers steer the solutions of system (1) into an arbitrarily small neighborhood of the set of critical points of P for large time t. These approximation schemes are then adapted to derive stabilizing controllers for the equilibrium stabilization problem (Theorem 3 and its corollary in Sect. 3.3), the tracking problem (Theorem 4 and its corollary in Sect. 3.4), and obstacle avoidance (Sect. 3.5). We illustrate the proposed control design methodology with examples in Sect. 4. Finally, concluding comments are given in Sect. 5 to summarize the key results of the present paper and underline its contribution with respect to previous work. The proofs of the main results are given in Appendices A–D.

Notations

Throughout the text, we will use the following notations:

\({\mathbb {R}}^+\)—the set of nonnegative real numbers;

\({\mathbb {R}}_{>0}\)—the set of positive real numbers;

\(\delta _{ij}\)—the Kronecker delta: \(\delta _{ii}{=}1\) and \(\delta _{ij}{=}0\) whenever \(i\ne j\);

\(\textrm{dist}(x,S)\)—the Euclidean distance between a point \(x\in {\mathbb {R}}^{n}\) and a set \(S\subset {\mathbb R}^n\);

\(B_\delta (x^*)\)—the \(\delta \)-neighborhood of a point \(x^*\in {\mathbb {R}}^n\) with \(\delta >0\);

\(\displaystyle B_\delta (S)=\bigcup _{x\in S} B_\delta (x)\)—the \(\delta \)-neighborhood of a set \(S\subset {\mathbb {R}}^{n}\) with \(\delta >0\);

\(\partial M\), \({\overline{M}}\)—the boundary and the closure of a set \(M\subset {\mathbb {R}}^n\), respectively; \({\overline{M}}= M\cup \partial M\);

|S|—the cardinality of a set S;

\({\mathcal {K}}\)—the class of continuous strictly increasing functions \(\varphi :{\mathbb {R}}^+\rightarrow {\mathbb {R}}^+\) such that \(\varphi (0)=0\);

[fg](x)—the Lie bracket of vector fields \(f,g:\mathbb R^n\rightarrow {\mathbb {R}}^n \) at a point \(x\in {\mathbb {R}}^n\), \([f,g](x)=L_fg(x)- L_gf(x)\), where \( L_gf(x)=\lim \limits _{s\rightarrow 0}\tfrac{f(x+sg(x))-f(x)}{s}\), \( L_fg(x)=\lim \limits _{s\rightarrow 0}\tfrac{g(x+sf(x))-g(x)}{s}\); if f and g are differentiable, then \(L_gf(x)=\frac{\partial f(x)}{\partial x}g(x)\) and \(L_fg(x)=\frac{\partial g(x)}{\partial x}f(x)\);

for a differentiable function \(P:{\mathbb {R}}^n\rightarrow {\mathbb {R}}\), the gradient of P(x) evaluated at a point \(x^0\in {\mathbb {R}}^n\) is denoted by \(\nabla P(x^0)=\left. \frac{\partial P(x)}{\partial x}\right| _{x=x^0}\);

if \(P:{\mathbb {R}}^n\times {\mathbb {R}}\rightarrow {\mathbb {R}}\) is differentiable with respect to its first argument, we denote \(\nabla _x P(x^0,t_0)=\left. \frac{\partial P(x,t)}{\partial x}\right| _{x=x^0,t=t_0}\) for given \(x^0\in {\mathbb {R}}^n\) and \(t_0\in {\mathbb {R}}\).

2 Related work

In this section, we briefly summarize some known results on the stabilization and motion planning of control-affine systems of the form (1). Note that, to our best knowledge, most of the related publications focus on autonomous systems with time-independent vector fields. A number of efficient control design methods have been developed in the literature with the emphasis on special classes of systems, such as flat systems [8], chained-form systems [26, 40], unicycle- and car-like systems [7, 29, 30, 33, 36], manipulator models [9, 28], and Chaplygin systems [34].

For planning the motion of general nonholonomic systems, a broad class of approaches is based on the application of Lie algebraic techniques. In this context, an essential assumption is that the vector fields of system (2) together with their iterated Lie brackets span the whole tangent space at each point of the state manifold (Hörmander’s condition). Several authors used this assumption to produce time-periodic control laws such that the trajectories of a nonholonomic system approximate the trajectories of an extended system. The papers [27, 39] exploited sequences of oscillating controls with unbounded frequencies for such an approximation in the case of driftless systems. The paper [23] addresses the limit behavior of solutions of a control-affine system with input signals of magnitude \(\varepsilon ^{-\alpha }\) and frequency scaling \(1/\varepsilon \) as \(\varepsilon \rightarrow 0\). It is assumed that the primitives of the input signals and their iterated primitives up to a certain order are bounded. Then it is shown that the limit behavior of the considered oscillating system is defined either by its drift term or by a linear combination of certain iterated Lie brackets, depending on the value of \(\alpha \). In the paper [4], the averaged system is constructed as a differential inclusion for driftless control-affine systems with fast oscillating inputs. It is proved that an arbitrary solution of such a differential inclusion can be approximated by a family of solutions of the original system when the oscillation frequency tends to infinity. This approximation result is also extended to the class of systems with drift under a time reparametrization and the assumption that the drift generates periodic dynamics.

An overview of motion planning methods for nonholonomic systems is presented in the book [18]. For nilpotent systems, exact solutions to the motion planning problem are proposed with the use of sinusoidal inputs. In the general case, the local steering problem can be solved by constructing a nilpotent approximation under a suitable choice of privileged coordinates. The global steering algorithm is then summarized in [18] as a finite sequence of steps which steers the given nonholonomic system to an arbitrarily small neighborhood of the target point. The nilpotentization of a wheeled mobile robot model with a trailer is proposed in the paper [1] for planning local maneuvers of this kinematic system. On the basis of solving the related sub-Riemannian problem, an algorithm for suboptimal parking has been implemented and tested for several robot configurations.

An algorithm for motion planning of kinematic models of nonholonomic systems in task space is developed in [31] with the use of the Campbell–Baker–Hausdorff–Dynkin formula. Motion planning in task space is treated there in the sense of steering the system output to a neighborhood of the desired point. The proposed algorithm is illustrated with unicycle and kinematic car examples. A nonholonomic snakelike robot model with m (\(m \ge 3\)) rigid links is considered in [17]. The motion planning problem is treated there in the sense of generating a gait such that the origin of the snake’s body moves along a given planar curve. This problem is solved by expressing the body velocity from the compatibility equation and the reconstruction equation.

An interesting example of a nonholonomic system with the growth vector (4,7) is studied in [16]. This example is a modification of the trident snake robot with three 1-link branches of variable length. A nilpotent approximation of this system is constructed, and the local optimal steering problem is analyzed by the Pontryagin maximum principle. Controls for generating the motion in the direction of higher-order Lie brackets were proposed in [10, 11] for systems with two inputs.

A hybrid path planning method based on the combination of a high-level planner with a low-level controller running on an autonomous vehicle is described in [37]. The high-level planner (\(D^*\) Lite planner) works on the discretized 2D workspace to produce a reference path such that at each step the robot model moves from a given cell to one of the eight neighboring cells which does not contain an obstacle. The output of the high-level planner is collected as a set of waypoints ending at the goal, and the cost is the total length of the path. Then the low-level controller, running on the autonomous vehicle, provides control inputs to generate motion from the current state to the next waypoint. This path planning method is experimentally validated on a differential drive robot in rough terrain environments.

Stabilizing time-varying controls were proposed in [45] for second degree nonholonomic systems (following the terminology used in [25]). Unlike other publications in this area, the exponential convergence to the equilibrium was proved without the assumption that the frequencies of the controls tend to infinity. Besides, the paper [45] presented a rigorous solvability analysis of the stabilization problem in the proposed class of controls. For detailed reviews of other motion planning and stabilization strategies, we refer to [3, 15, 22]. It has to be emphasized that, in spite of a large number of publications on nonholonomic motion planning, only particular results are available for stabilization combined with obstacle avoidance. Even for static obstacles, this problem was studied only for specific systems (see, e.g., [21, 35, 38]). A general class of nonholonomic systems was considered in the paper [41], where a time-independent controller was constructed based on the gradient of a potential function. Note that such a result ensures only the stability (but not the asymptotic stability) property. An algorithm computing time-periodic feedback controls for approximating collision-free paths was presented in [14]; however, no solvability issues concerning the general collision avoidance problem have been addressed in that paper.

For a class of driftless control-affine systems, the trajectory tracking problem was addressed in [43] under the assumption that the target trajectory is feasible, i.e., satisfies the dynamical equations with some control inputs. However, to the best of our knowledge, there are no results available for the stabilization of general classes of nonlinear control systems in a neighborhood of nonfeasible curves or in domains with obstacles.

3 Unified control framework for second degree nonholonomic systems

In this section, we present the main idea of our control design scheme by considering the nonholonomic systems of degree 2, according to the classification of [25]. The proposed control design provides a generic approach for stabilization and motion planning of underactuated driftless control-affine systems.

3.1 Definitions and assumptions

To generate stabilizing control strategies, we will exploit sampling, similarly to the approaches of [6, 45]. In this respect, we introduce the following definition, which extends the notion of \(\pi _\varepsilon \)-solutions to nonautonomous systems.

Definition 1

(\(\pi _\varepsilon \)-solution) Consider a control system

$$\begin{aligned} \dot{x}=f(x,u,t),\quad x\in D\subseteq {\mathbb {R}}^n,\; u\in {\mathbb {R}}^m,\; t\in {\mathbb {R}},\quad f:D\times {\mathbb {R}}^m\times {\mathbb {R}}\rightarrow {\mathbb {R}}^n, \end{aligned}$$

and assume that a feedback control is given in the form \(u=h(a(x,t),t)\), \(a:D\times {\mathbb {R}}\rightarrow {\mathbb {R}}^l\), \(h: \mathbb R^l\times {\mathbb {R}}\rightarrow {\mathbb {R}}^m\). For given \(t_0\in {\mathbb {R}}\) and \(\varepsilon >0\), define a partition \(\pi _\varepsilon \) of \([t_0,+\infty )\) into the intervals

$$\begin{aligned} I_j=[t_j,t_{j+1}),\;t_j=t_0+\varepsilon j, \quad j=0,1,2,\dots \,. \end{aligned}$$

A \(\pi _\varepsilon \)-solution of the considered closed-loop system corresponding to the initial value \(x^0\in {\mathbb {R}}^n\) is an absolutely continuous function \(x_\pi (t)\in D\), defined for \(t\in [t_0,+\infty )\), which satisfies the initial condition \(x_\pi (t_0)=x^0\) and the differential equations

$$\begin{aligned} \dot{x}_\pi (t)=f\big (x_\pi (t),\, h\big (a(x_\pi (t_j),t_j),t\big ), t\big ),\quad t\in I_j,\ \text {for each } j=0,1,2,\ldots . \end{aligned}$$

We will illustrate the relation between \(\pi _\varepsilon \)-solutions and classical solutions with examples in Sect. 4.
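To make Definition 1 concrete, the following minimal Python sketch integrates a \(\pi _\varepsilon \)-solution numerically: the argument \(a(x,t)\) of the feedback is frozen at the sampling instants \(t_j=t_0+\varepsilon j\), while the explicit time dependence of h remains continuous. The function names, the fixed-step RK4 integrator, and the number of substeps are our illustrative choices and are not part of the definition.

```python
import numpy as np

def pi_eps_solution(f, h, a, x0, t0, eps, t_final, substeps=100):
    """Sketch of a pi_eps-solution of dx/dt = f(x, u, t) with the sample-and-hold
    feedback u = h(a(x(t_j), t_j), t) on each interval [t_j, t_{j+1}), t_j = t0 + eps*j."""
    x = np.asarray(x0, dtype=float)
    t = t0
    ts, xs = [t0], [x.copy()]
    while t < t_final:
        a_j = a(x, t)                       # coefficients frozen at the sampling instant t_j
        dt = eps / substeps
        for i in range(substeps):           # RK4 steps inside the sampling interval
            tau = t + i * dt
            rhs = lambda y, s: f(y, h(a_j, s), s)
            k1 = rhs(x, tau)
            k2 = rhs(x + 0.5 * dt * k1, tau + 0.5 * dt)
            k3 = rhs(x + 0.5 * dt * k2, tau + 0.5 * dt)
            k4 = rhs(x + dt * k3, tau + dt)
            x = x + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
            ts.append(tau + dt)
            xs.append(x.copy())
        t += eps
    return np.array(ts), np.array(xs)
```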

Before formulating basic results of this paper, we introduce the main assumptions on the state space D, vector fields \(f_k\), and the potential function P used in the gradient flow dynamics (3).

Assumption 1

The vector fields \(f_k(x,t):D\times {\mathbb {R}}^+\rightarrow {\mathbb {R}}^n\) are twice continuously differentiable w.r.t. x, and \(f_k\), \(L_{f_{j}}f_k\) are continuously differentiable w.r.t. t, for all \(j,k=\overline{1,m}\).

Moreover, for any family of compact subsets \(\widetilde{{\mathcal {D}}}_t\subset D\), \(t\ge 0\), there exist constants \(M_f,L_{fx},L_{2f}>0\), \(L_{ft},H_{fx},H_{ft}\ge 0\) such that

  1. (1.1)

    \(\Vert f_k(x,t)\Vert \le M_f\),

  2. (1.2)

    \(\Vert f_k(x,t)-f_k(y,t)\Vert \le L_{fx}\Vert x-y\Vert ,\,\Big \Vert \frac{\partial f_k(x,t)}{\partial t}\Big \Vert \le L_{ft},\,\Vert L_{f_{j}}f_k(x,t)\Vert \le L_{2f}\),

  3. (1.3)

    \(\Vert L_{f_l}L_{f_{j}}f_k(x,t)\Vert \le H_{fx},\,\Big \Vert \frac{\partial (L_{f_{j}}f_k(x,t))}{\partial t}\Big \Vert \le H_{ft}\),

for all \(t\ge 0\), \(x,y\in \widetilde{{\mathcal {D}}}_t\), \(j,k,l=\overline{1,m}\).

Another important assumption is related to the controllability property of system (1). As it has already been mentioned, in this section we focus on systems with the degree of nonholonomy 2, i.e., those whose vector fields together with their Lie brackets span the whole n-dimensional space.

Assumption 2

  1. (2.1)

    System (1) satisfies the bracket-generating condition of degree 2 in D, i.e., there exist sets of indices \(S_1\subseteq \{1,2,\ldots ,m\}\), \(S_2\subseteq \{1,2,\ldots ,m\}^2\) such that \(|S_1|+|S_2|=n\) and

    $$\begin{aligned} \textrm{span}\big \{f_{i}(x,t),\, [f_{j_1},f_{j_2}](x,t)\,\big |\,i\in S_1,\ (j_1,j_2)\in S_2\big \}={\mathbb {R}}^n \quad \text {for all }t\ge 0,\ x\in D. \end{aligned}$$
    (4)
  2. (2.2)

    For any family of compact subsets \(\widetilde{{\mathcal {D}}}_t \subset D\), \({t\ge 0}\), there exists an \({M_F}>0\) such that

    $$\begin{aligned} \begin{aligned} \Vert {\mathcal {F}}^{-1}(x,t)\Vert \le {M_F}\text { for all }t\ge 0,\,x\in \widetilde{{\mathcal {D}}}_t, \end{aligned} \end{aligned}$$

    where \({\mathcal {F}}^{-1}(x,t)\) is the inverse matrix for

    $$\begin{aligned} {\mathcal {F}}(x,t)=\Big (\big (f_{j_1}(x,t)\big )_{j_1\in S_1}\ \ \big ([f_{j_1},f_{j_2}](x,t)\big )_{(j_1,j_2)\in S_2}\Big ). \end{aligned}$$
    (5)

It is important to note that the rank condition (4) implies nonsingularity of the \(n\times n\) matrix \({\mathcal {F}}(x,t)\) for all \(t\ge 0\), \(x\in D\).

The next two assumptions describe properties of the potential function P for the gradient-like system (3).

Assumption 3

The function \(P: D\times {\mathbb {R}}^+ \rightarrow {\mathbb {R}}\) is twice continuously differentiable w.r.t. x. Moreover, for any family of compact subsets \(\widetilde{{\mathcal {D}}}_t\subset D\), \(t\ge 0\), there exist constants \(m_P\in {\mathbb {R}}\), \(L_{Px}>0\), \(L_{2Px},L_{2Pt},L_{Pt},H_{Px}\ge 0\) such that

  1. (3.1)

    \(m_P\le P(x,t)\),

  2. (3.2)

    \(\left\| \frac{\partial P(x,t)}{\partial x}\right\| \le L_{Px},\,\Vert P(x,t)-P(y,\tau )\Vert \le L_{Px}\Vert x-y\Vert +L_{Pt}\Vert t-\tau \Vert \),

  3. (3.3)

    \(\Vert \nabla _xP(x,t)-\nabla _xP(y,\tau )\Vert \le L_{2Px}\Vert x-y\Vert +L_{2Pt}\Vert t-\tau \Vert \),

  4. (3.4)

    \(\sum \limits _{i,j=1}^n\left\| \dfrac{\partial ^2 P(x,t)}{\partial x_i\partial x_j}\right\| \le H_{Px}\),

for all \(t,\tau \ge 0\), \(x\in \widetilde{{\mathcal {D}}}_t\), \(y\in \widetilde{{\mathcal {D}}}_\tau \).

To formulate the last assumption of this section, we introduce families of level sets of a function P(x,t). Namely, given a constant \(c\in {\mathbb {R}}\), we denote

$$\begin{aligned} {\mathcal {L}}_t^{P,c}= \{x\in D:P(x,t)\le c\}, \qquad {\mathcal {L}}_t^{\nabla P,c} = \{x\in D:\Vert \nabla _xP(x,t)\Vert \le c\}\quad \text {for } t\ge 0. \end{aligned}$$

Assumption 4

For every \(x^0\in D\), there exist \(\lambda >0\) and \(\rho >0\) such that, for all \(t\ge t_0\ge 0\), the set \( {\mathcal {L}}_t^{P,P(x^0,t_0)+\lambda }\) is nonempty, compact, and convex, and

$$\begin{aligned} {\mathcal {L}}_t^{\nabla P,\rho }\subseteq {\mathcal {L}}_t^{P,P(x^0,t_0)+\lambda }\subset D. \end{aligned}$$

3.2 Convergence results

Below we propose a universal control strategy which ensures the convergence of the trajectories of system (1) to the set of extremum points of a given function P. With the index sets \(S_1\), \(S_2\) and the matrix \({{\mathcal {F}}}(x,t)\) described in Assumption 2, we parameterize the controls as

$$\begin{aligned} u_k = u^\varepsilon _k(a(x,t),t)=\sum _{i\in S_1} a_{i}(x,t)\delta _{ki} +\varepsilon ^{-\tfrac{1}{2}}\sum _{(j_1,j_2)\in S_2} \sqrt{|a_{j_1j_2}(x,t)|}\,\phi ^{(k,\varepsilon )}_{j_1j_2}(t),\quad k = \overline{1,m}. \end{aligned}$$
(6)

Here the column vector \(a(x,t)=\big (a_{i_1}(x,t)\big |_{i_1\in S_1},\ a_{j_1j_2}(x,t)\big |_{(j_1,j_2)\in S_2}\big )^\top \in {\mathbb {R}}^n\) is obtained from

$$\begin{aligned} a(x,t)&=- \gamma {\mathcal {F}}^{-1}(x,t) \nabla _x P(x,t), \end{aligned}$$
(7)

and the oscillating components are

$$\begin{aligned} \phi ^{(k,\varepsilon )}_{j_1j_2}(t)= 2\sqrt{\pi \kappa _{j_1j_2}}\Big (\delta _{kj_1}\textrm{sign}\big (a_{j_1j_2}(x,t)\big )\cos \frac{2\pi \kappa _{j_1j_2}t}{\varepsilon } +\delta _{kj_2}\sin \frac{2\pi \kappa _{j_1j_2}t}{\varepsilon }\Big ), \end{aligned}$$
(8)

where \(\kappa _{j_1j_2}\in {\mathbb {N}}\) are pairwise distinct numbers, \(\gamma >0\) is a control gain, and \(\varepsilon >0\) is a small parameter.
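As a computational companion to formulas (6)–(8), the following Python sketch evaluates the control vector \(u=(u_1,\ldots ,u_m)\) at a given state and time. Here grad_P and F_matrix stand for \(\nabla _x P(x,t)\) and the matrix \({\mathcal {F}}(x,t)\) from (5); all identifiers are ours, and the routine is a direct transcription of (6)–(8) rather than an optimized implementation.

```python
import numpy as np

def oscillating_controls(x, t, grad_P, F_matrix, S1, S2, kappa, gamma, eps, m):
    """Evaluate u_k = u_k^eps(a(x,t), t), k = 1..m, according to (6)-(8).
    S1: list of indices i in {1,...,m}; S2: list of pairs (j1, j2);
    kappa: dict mapping each pair in S2 to a pairwise distinct positive integer.
    The ordering of S1 and S2 must match the column ordering of F(x,t) in (5)."""
    # a(x,t) = -gamma * F^{-1}(x,t) * grad_x P(x,t), cf. (7)
    a = -gamma * np.linalg.solve(F_matrix(x, t), grad_P(x, t))
    a_single = {i: a[idx] for idx, i in enumerate(S1)}
    a_bracket = {pair: a[len(S1) + idx] for idx, pair in enumerate(S2)}
    u = np.zeros(m)
    for k in range(1, m + 1):
        u[k - 1] = a_single.get(k, 0.0)               # non-oscillating term of (6)
        for (j1, j2) in S2:
            freq = 2.0 * np.pi * kappa[(j1, j2)] / eps
            amp = 2.0 * np.sqrt(np.pi * kappa[(j1, j2)] * abs(a_bracket[(j1, j2)]) / eps)
            if k == j1:                               # delta_{k j1} term of (8)
                u[k - 1] += amp * np.sign(a_bracket[(j1, j2)]) * np.cos(freq * t)
            if k == j2:                               # delta_{k j2} term of (8)
                u[k - 1] += amp * np.sin(freq * t)
    return u
```

For the unicycle example of Sect. 4, one would take S1 = [1, 2], S2 = [(1, 2)], kappa = {(1, 2): 1}, and m = 2, which reproduces the controls (18).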

Remark 1

In what follows, sufficient conditions for the convergence of our control scheme will be proposed for large values of \(\gamma \) and small values of \(\varepsilon \). In this framework, the gain \(\gamma \), corresponding to the amplitude of the control signals (6), has the same meaning as \(\gamma \) in (3). Thus, the larger \(\gamma \) is, the faster the transient behavior that can be achieved. From the practical viewpoint, there is a trade-off between the convergence rate and control constraints in possible applications: the amplitude parameter \(\gamma \) should not exceed the actuator bounds, and the frequency parameters \(\omega _{j_1j_2}=\frac{2\pi \kappa _{j_1j_2}}{\varepsilon }\), \((j_1,j_2)\in S_2\), should be within the actuator bandwidth. The requirement that the \(\kappa _{j_1j_2}>0\) be pairwise distinct integers in (8) means that there are no resonances up to order 2 between the frequencies \(\omega _{j_1j_2}\) (see, e.g., [45] and [2, Chap. 6] for the resonance conditions). If the value \(\varepsilon >0\) is fixed, then an optimal choice (with respect to minimizing the frequencies) is to define the \(\kappa _{j_1j_2}\) as the smallest natural numbers, i.e., \(\{\kappa _{j_1j_2}: (j_1,j_2)\in S_2\} =\{1,2,\ldots , \vert S_2\vert \}\).

The first result of this section is as follows.

Lemma 1

Let Assumptions 1–4 be satisfied for system (1) with a function P(x,t). Then there exist a \({\bar{\gamma }}>0\) and \({\bar{\varepsilon }}:[{\bar{\gamma }},+\infty )\rightarrow {{\mathbb {R}}}_{>0}\) such that, for any \(\gamma \ge {\bar{\gamma }}\) and any \(\varepsilon \in (0,{\bar{\varepsilon }}(\gamma )]\), the \(\pi _\varepsilon \)-solution \(x_\pi (t)\) of system (1) with the controls \(u_k=u_k^\varepsilon (a(x,t),t)\) given by (6)–(8) and the initial data \(x_\pi (t_0)=x^0\in D\), \(t_0\ge 0\), is well defined and \(x_\pi (t)\in {\mathcal {L}}_t^{P,P(x^0,t_0)+\lambda }\) for all \(t\ge t_0\), and there exists a \(T\ge 0\) such that

$$\begin{aligned} P(x_\pi (t),t) \le \sup _{s\ge t_0+ T}\ \sup \limits _{\xi \in {\mathcal {L}}^{\nabla P,\rho }_{s}}P(\xi ,s)\quad \text {for all } t\ge t_0+T, \end{aligned}$$

where \(\lambda \), \(\rho \) are positive numbers from Assumption 4.

The proof is given in Appendix B.

In the case of a time-independent function P(x) and time-independent vector fields \(f_k(x)\), it is possible to prove a stronger result under milder assumptions. Let us denote the set of local minima of the function P by

$$\begin{aligned} S^*_{{\min }}=\big \{x^*\in D:\ \text {there exists }r > 0\text { such that } P(x) \ge P(x^*)\text { for all }x\in B_r(x^*)\big \}. \end{aligned}$$

The following theorem holds for the system

$$\begin{aligned} \dot{x} = \sum _{k=1}^m f_k (x)u_k,\quad x\in D\subseteq {\mathbb {R}}^n, \,u\in {\mathbb {R}}^m. \end{aligned}$$
(9)

Theorem 1

Given system (9), let \(f_k\in C^2(D;{\mathbb {R}}^n)\) satisfy Assumption 2 in a domain \(D\subseteq {\mathbb {R}}^n\), and let a function \(P\in C^2(D;{\mathbb {R}})\) be such that its level sets \({\mathcal {L}}^{P,P(x^0)}=\{x\in D: P(x)\le P(x^0)\}\) are compact for all \(x^0\in D\).

Then for any \(\gamma >0\) there exists an \({\bar{\varepsilon }}>0\) such that, for any \(\varepsilon \in (0,{\bar{\varepsilon }}]\), the \(\pi _\varepsilon \)-solution \(x_\pi (t)\) of system (9) with the controls \(u_k=u_k^\varepsilon (a(x),t)\) given by (6)–(8) and the initial data \(t_0\ge 0\), \(x_\pi (t_0)=x^0\in D\) is well defined and satisfies the following property:

$$\begin{aligned} P(x_\pi (t))\rightarrow \alpha ^*\in {S^*_{P_{\min }}} \text { as }t\rightarrow +\infty , \end{aligned}$$
(10)

provided that \(x^0\notin \{x\in D:\nabla P(x)=0\}{\setminus } S^*_{{\min }}\). Here

$$\begin{aligned} S^*_{P_{\min }}=\big \{P^*\in [m_P,P(x^0)]:\ \text {there exists }x^*\in S^*_{{\min }}\text { such that }P^*=P(x^*)\big \}. \end{aligned}$$

The proof of the asymptotic convergence of \(P(x_\pi (t))\) to the set of critical values of P can be found in [47]. The stronger property (10) follows from the fact that, for small enough \(\varepsilon \),

$$P(x^0)\ge P(x_\pi (t_0+\varepsilon ))\ge P(x_\pi (t_0+2\varepsilon ))\ge \dots $$

and the uniqueness of the solutions of system (9) with the controls \(u_k=u_k^\varepsilon (a(x),t)\) and the initial data \(t_0\ge 0\), \(x(t_0)=x^0\in D\).

The approximate convergence of a time-varying function P to its minimum value can be proved under an additional requirement, which also allows us to estimate the convergence rate:

Theorem 2

  Let Assumptions 1–3 be satisfied for system (1) with a function P(x,t), and let \(\rho >0\) be such that \(\emptyset \ne {\mathcal {L}}_t^{P,m_P+\rho }\subset D\) for all \(t\ge 0\). Assume moreover that, for any family of compact subsets \(\widetilde{{\mathcal {D}}}_t\subset D\), \({t\ge 0}\), there exist a \(\mu >0\) and a \(\nu \ge 0\) such that

$$\begin{aligned} \Vert \nabla _xP(x,t)\Vert ^2 \ge \mu \big (P(x,t)-m_P\big )^\nu \quad \text {for all }x\in \widetilde{{\mathcal {D}}}_t,\ t\ge 0. \end{aligned}$$
(11)

Then for any \(\gamma ^*>0\) there is a \({\bar{\gamma }}>\gamma ^*\) such that, for any \(\gamma >{\bar{\gamma }}\) and \(\varepsilon \in (0,{\bar{\varepsilon }})\) (\({\bar{\varepsilon }}> 0\) depends on \(\gamma \)), the \(\pi _\varepsilon \)-solution \(x_\pi (t)\) of system (1) with the controls \(u_k=u_k^\varepsilon (a(x,t),t)\) given by (6)–(8) and the initial data \(t_0\ge 0\), \(x_\pi (t_0)=x^0\in {\mathcal {D}}_{t_0}\) is well defined and satisfies one of the following properties:

  1. (I)

    If \(\nu =1\), then

    $$\begin{aligned} P(x_\pi (t),t)-m_P \le \big (P(x^0,t_0)-m_P\big )e^{-\mu \gamma ^*(t-t_0-\varepsilon )}+\rho \quad \text {for all }t\ge t_0. \end{aligned}$$
  2. (II)

    If \(\nu >1\), then

    $$\begin{aligned} P(x_\pi (t),t)-m_P\le \big ((P(x^0,t_0)-m_P)^{1-\nu } +\mu \gamma ^*(\nu -1)(t-t_0-\varepsilon )\big )^{\frac{1}{1-\nu }}+\rho \quad \text {for all } t\ge t_0. \end{aligned}$$

The proof is given in Appendix C.

Remark 2

As it follows from the proof of Theorem 2, it suffices to take

$$\begin{aligned} {\bar{\gamma }}=\gamma ^*+\frac{2^{2\nu }}{\rho ^\nu \mu }\Big ( L_{Pt}+L_{Px}c_{R1}\big (\sqrt{{M_F} L_{Px}}+{H_{Px}}c_{R1}{M_F}{\bar{\varepsilon }}\big )\Big ), \end{aligned}$$

where \(c_{R1}=\frac{L_{ft}}{2}+\frac{H_{ft}}{6}\sqrt{{M_F} L_{Px}}\). Obviously, one may put \({\bar{\gamma }}=\gamma ^*+\dfrac{2^{2\nu }}{\rho ^\nu \mu }L_{Pt}\) if the vector fields of system (1) are time-independent, and \({\bar{\gamma }}=\gamma ^*\) if, additionally, the function P does not depend on t.

Corollary 1

  Assume that the constants required in Assumptions 1–3 (and in (11)) exist for all \(x\in {\mathcal {L}}_t^{P,P(x^0,t_0)}\), \(x^0\in D\), \(t_0\ge 0\). Then the assertions of Lemma 1 (Theorem 2) remain valid even if the level sets of the function P(x,t) are not compact.

Similarly, if the functions \(f_k(x)\) are globally Lipschitz in \({\mathcal {L}}^{P,P(x^0)} \) for any \(x^0\in D\), the functions \(f_k(x)\), \(L_{f_{j}}f_k(x)\), \(L_{f_l}L_{f_{j}}f_k(x)\), \(\Vert {\mathcal {F}}^{-1}(x)\Vert \), \(\frac{\partial P(x)}{\partial x}\), \(\frac{\partial ^2 P(x)}{\partial x^2}\) are bounded, and the function P(x) is bounded from below for all \(x\in {\mathcal {L}}^{P,P(x^0)} \), \(x^0\in D\), then the assertion of Theorem 1 remains valid even if the level sets of the function P(x) are not compact.

Corollary 2

Let the conditions of Theorem 1 be satisfied. Furthermore, assume that for any compact subset \(\widetilde{{\mathcal {D}}}\subset D\) there exist a \(\mu >0\) and \(\nu \ge 1\) such that \( \Vert \nabla P(x)\Vert ^2\ge \mu (P(x)-m_P)^\nu \text { for all }x\in \widetilde{{\mathcal {D}}}, \) where \(m_P\) is defined in Assumption 3.1.

Then for any \(\gamma>\gamma ^*>0\) there exists an \({\bar{\varepsilon }}>0\) such that, for any \(\varepsilon \in (0,{\bar{\varepsilon }})\), the \(\pi _\varepsilon \)-solution \(x_\pi (t)\) of system (9) with the controls \(u_k=u_k^\varepsilon (a(x),t)\) given by (6)–(8) and the initial data \(t_0\ge 0\), \(x_\pi (t_0)=x^0\in D\) is well defined and satisfies one of the following properties:

  1. (I)

    If \(\nu =1\), then

    $$\begin{aligned} P(x_\pi (t))-m_P \le \big (P(x^0)-m_P\big )e^{-\mu \gamma ^*(t-t_0-\varepsilon )}\quad \text {for all }t\ge t_0. \end{aligned}$$
  2. (II)

    If \(\nu >1\), then

    $$\begin{aligned} P(x_\pi (t))-m_P\le \big ((P(x^0)-m_P)^{1-\nu } +\mu \gamma ^*(\nu -1)(t-t_0-\varepsilon )\big )^{\frac{1}{1-\nu }}\quad \text {for all }t\ge t_0. \end{aligned}$$

These results follow from the proofs of Lemma 1 and Theorem 2.

Lemma 1 and Theorem 1 give rise to several important results applicable to more specific control problems. Namely, one can choose a function P so that the corresponding gradient system (3) possesses some desired properties, such as asymptotic stability of a given point or set and collision-free motion. In the next section, we will consider different classes of functions P in order to solve the stabilization, trajectory tracking, and obstacle avoidance problems.

3.3 Stabilization problem

In this section, we consider a classical control problem of finding control laws which ensure the asymptotic stability of a point \(x=x^*\in D\) for system (9).

Problem 1

(Stabilization problem) Given system (9) and a point \(x^*\in D\), the goal is to construct a feedback control of the form (6)–(8) ensuring the asymptotic stability of \(x^*\) for the corresponding closed-loop system.

To solve Problem 1, we apply the results of Sect. 3.2 with a Lyapunov-like function P(x), which ensures the asymptotic stability of \(x^*\) for the gradient system (3).

Theorem 3

Given system (9) with \(f_k\in C^2(D;{\mathbb {R}}^n)\) satisfying Assumption 2 in a domain \(D\subseteq {\mathbb {R}}^n\) and a point \(x^*\in D\), let a function \(P\in C^2(D;{\mathbb {R}})\) satisfy the following conditions:

  • 3.1) there exist functions \(w_{11},w_{12} \in {\mathcal {K}}\) such that \(\{x\in {\mathbb {R}}^n: \Vert x-x^*\Vert \le w_{11}^{-1}\big (P(x^0)-m_P\big )\}\subset D\) for all \(x^0\in D\), and

    $$\begin{aligned} w_{11}(\Vert x-x^*\Vert )\le P(x)-m_P\le w_{12}(\Vert x-x^*\Vert )\quad \text {for all }x\in D; \end{aligned}$$
  • 3.2) \(\Vert \nabla P(x)\Vert =0\) if and only if \(x=x^*\), and there exists a function \(w_2\in {\mathcal {K}}\) such that

    $$\begin{aligned} \Vert \nabla P(x)\Vert \le w_2(\Vert x-x^*\Vert )\text { for all }x\in D. \end{aligned}$$

Then for any \(\gamma >0\), there exists an \({\bar{\varepsilon }}>0\) such that the point \(x^*\) is asymptotically stable for system (9) with the controls \(u_k=u_k^\varepsilon (a(x),t)\) given by (6)–(8) and any \(\varepsilon \in (0,{\bar{\varepsilon }})\), provided that the solutions of the closed-loop system (9), (6)–(8) are defined in the sense of Definition 1.

The proof of this theorem is based on the proofs of Lemma 1 and Theorem 1 (see Appendix D). The following result directly follows from Theorem 3 and Corollary 2:

Corollary 3

Given system (9) with \(f_k\in C^2(D;{\mathbb {R}}^n)\) satisfying Assumption 2 in a domain \(D\subseteq {\mathbb {R}}^n\) and a point \(x^*\in D\), let a function \(P\in C^2(D;{\mathbb {R}})\) satisfy the following conditions:

  • C3.1) there exist constants \(\omega _{11},\omega _{12},v_1,v_2>0\) such that

    $$\begin{aligned} \omega _{11}\Vert x-x^*\Vert ^{v_1} \le P(x)-m_P \le \omega _{12}\Vert x-x^*\Vert ^{v_2}\quad \text {for all }x\in D; \end{aligned}$$
  • C3.2) there exist constants \(\mu _1,\mu _2>0\) and \(\nu _1,\nu _2\ge 1\) such that

    $$\begin{aligned} \mu _1 (P(x)-m_P)^{\nu _1} \le \Vert \nabla P(x)\Vert ^2 \le \mu _2 (P(x)-m_P)^{\nu _2}\quad \text {for all }x\in D. \end{aligned}$$

Then for any \(\gamma >0\) there exists an \({\bar{\varepsilon }}>0\) such that the point \(x^*\) is asymptotically stable for the closed-loop system (9) with the controls \(u_k=u_k^\varepsilon (a(x),t)\) given by (6)–(8) and any \(\varepsilon \in (0,{\bar{\varepsilon }})\), provided that the solutions of the closed-loop system are defined in the sense of Definition 1. Moreover,

  1. (I)

    If \(\nu _1=1\), then \(x^*\) is exponentially stable; namely, for any \(\gamma>\gamma ^*>0\), there exists an \(\varepsilon >0\) such that

    $$\begin{aligned} \Vert x_\pi (t)-x^*\Vert \le \beta \Vert x^0-x^*\Vert ^{\frac{v_2}{v_1}}e^{-\frac{\mu _1\gamma ^*}{v_1}(t-t_0-\varepsilon )}\quad \text {for all }t\ge t_0, \end{aligned}$$

    where \(\beta =\left( \frac{\omega _{12}}{\omega _{11}}\right) ^{\frac{1}{v_1}}\).

  2. (II)

    If \(\nu _1>1\), then \(x^*\) is polynomially stable; namely, for any \(\gamma ^*>0\) and \(\gamma >\gamma ^*\) there exists an \(\varepsilon >0\) such that

    $$\begin{aligned} \Vert x_\pi (t)-x^*\Vert \le \left( \beta _1\Vert x^0-x^*\Vert ^{v_2(1-\nu _1)}+\beta _2 (t-t_0-\varepsilon )\right) ^{\frac{1}{v_1(1-\nu _1)}}\quad \text {for all }t\ge t_0, \end{aligned}$$

    where \(\beta _1=\left( \dfrac{\omega _{12}}{\omega _{11}}\right) ^{1-\nu _1}\), \(\beta _2=\dfrac{\mu _1\gamma ^*(\nu _1-1)}{\omega _{11}^{1-\nu _1}}\).

In particular, to exponentially stabilize system (9) at \(x^*\), one can simply put

$$\begin{aligned} P(x)=\Vert x-x^*\Vert ^2. \end{aligned}$$
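Indeed, for this choice the constants in Corollary 3 can be read off directly: with \(m_P=0\) one may take \(\omega _{11}=\omega _{12}=1\), \(v_1=v_2=2\), and, since \(\Vert \nabla P(x)\Vert ^2=4\Vert x-x^*\Vert ^2=4(P(x)-m_P)\), also \(\mu _1=\mu _2=4\), \(\nu _1=\nu _2=1\). Assertion (I) then yields

$$\begin{aligned} \Vert x_\pi (t)-x^*\Vert \le \Vert x^0-x^*\Vert \, e^{-2\gamma ^*(t-t_0-\varepsilon )}\quad \text {for all }t\ge t_0, \end{aligned}$$

which is the exponential decay estimate used in Sect. 4.1 (with \(t_0=0\) and \(\gamma ^*\) arbitrarily close to \(\gamma \)).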

The above-stated decay rate estimates are illustrated with numerical examples in Sect. 4.1.

Remark 3

It is interesting to note that, for degree 1 nonholonomic systems, i.e., for the case \(m=n\), \(S_1=\{1,\dots ,n\}\), the proposed stabilizing controls are the time-invariant functions

$$\begin{aligned} u_i^\varepsilon (x,t)=u_i(x)=- \big (f_1(x)\ f_2(x)\ \dots \ f_n(x)\big )^{-1} \nabla P(x), \end{aligned}$$

which is the classical control design for stabilization of fully actuated driftless control-affine systems.

Remark 4

The proposed control algorithm (6)–(8) significantly simplifies the stabilizing control design procedure introduced in [45] and makes it possible to express control coefficients explicitly without solving a cumbersome system of algebraic equations.

3.4 Trajectory tracking problem

The proposed control design procedure with a time-varying function P(x,t) can be used to ensure the motion of system (1) along desired curves. Note that we consider arbitrary continuous curves \(x^*(t)\) which may not be feasible for system (1). Consequently, we consider a relaxed problem statement for approximate trajectory tracking as follows:

Problem 2

(Trajectory tracking problem) Given system (1), a continuous curve \(x^*:{\mathbb {R}}^+\rightarrow D\), and a constant \(\rho >0\), the goal is to construct a feedback law ensuring the attractivity of the family of sets

$$\begin{aligned} {\mathcal {L}}^\rho _t=\{x\in D: \Vert x-x^*(t)\Vert \le \rho \}_{t\ge 0}. \end{aligned}$$
(12)

for the corresponding closed-loop system.

Note that attracting (locally/globally pullback attracting) families of time-varying sets have been studied in the paper [24] for nonautonomous systems of ordinary differential equations. Here we treat this notion in the sense of \(\pi _\varepsilon \)-solutions (Definition 1) for system (1) with control inputs. To be precise, we introduce the following definition.

Definition 2

(Attracting family of sets in the sense of \(\pi _\varepsilon \)-solutions) Let a feedback control of the form (6)–(8) be given, and let \(\rho >0\). We call the family of sets (12) attracting for the closed-loop system (1), (6)–(8), if there exist \(\Delta >0\), \({\bar{\gamma }}>0\), and \({\bar{\varepsilon }}:[\bar{\gamma },+\infty )\rightarrow {\mathbb {R}}_{>0}\) such that, for any \(t_0\ge 0\), \(x^0\in B_{\Delta }({\mathcal {L}}^{\rho }_{t_0})\cap D\), \(\gamma \ge \bar{\gamma }\), \(\varepsilon \in (0,{\bar{\varepsilon }}(\gamma )]\), the corresponding \(\pi _\varepsilon \)-solution \(x_\pi (t)\) satisfying the initial condition \(x_\pi (t_0)=x^0\) is well defined and

$$\begin{aligned} \textrm{dist}(x_\pi (t),{\mathcal {L}}^{\rho }_{t}) \rightarrow 0\quad \text {as}\;\;t\rightarrow + \infty . \end{aligned}$$

Based on Theorem 2, we are in a position to state sufficient conditions for the solvability of Problem 2.

Theorem 4

Given system (1), a continuous curve \(x^*: {\mathbb {R}}^+\rightarrow D\), and a function \(P:D\times {\mathbb {R}}^+\rightarrow {\mathbb {R}}\), let Assumptions 1–4 be satisfied, and assume that the following conditions hold:

  • 4.1) there exist constants \(\omega _{11},\omega _{12},v_1,v_2>0\) such that

    $$\begin{aligned} \omega _{11}\Vert x-x^*(t)\Vert ^{v_1}\le P(x,t)-m_P \le \omega _{12}\Vert x-x^*(t)\Vert ^{v_2}\quad \text {for all }t\ge 0,\ x\in D; \end{aligned}$$
  • 4.2) there exist constants \(\mu _1,\mu _2>0\) and \(\nu _1,\nu _2\ge 1\) such that

    $$\begin{aligned} \mu _1 (P(x,t)-m_P)^{\nu _1}\le \Vert \nabla _x P(x,t)\Vert ^2 \le \mu _2 (P(x,t)-m_P)^{\nu _2}\quad \text {for all }t\ge 0,\ x\in D. \end{aligned}$$

Then, for any \(\rho >0\), the family of sets \({\mathcal {L}}^{\rho }_t=\{x\in D: \Vert x-x^*(t)\Vert \le \rho \}_{t\ge 0}\) is attracting for the closed-loop system (1) with the controls \(u_k=u_k^\varepsilon (a(x,t),t)\) given by (6)–(8) in the sense of Definition 2. Moreover, one of the following assertions holds for any \(\gamma >\gamma ^*\ge {\bar{\gamma }}\), \(\varepsilon \in (0,{\bar{\varepsilon }}(\gamma )]\), and \(x^0\in B_{\Delta }(\mathcal L^{\rho }_{t_0})\cap D\):

  1. (I)

    if \(\nu _1=1\), then \(\{{\mathcal {L}}^{\rho }_t\}_{t\ge 0}\) is exponentially attractive, i.e.,

    $$\begin{aligned} \Vert x_\pi (t)-x^*\Vert \le \beta \Vert x^0-x^*\Vert ^{\frac{v_2}{v_1}}e^{-\frac{\mu _1\gamma ^*}{v_1}(t-t_0-\varepsilon )} +\rho \quad \text {for all }t\ge t_0, \end{aligned}$$

    where \(\beta =\left( \frac{\omega _{12}}{\omega _{11}}\right) ^{\frac{1}{v_1}}\);

  2. (II)

    if \(\nu _1>1\), then \(\{{\mathcal {L}}^{\rho }_t\}_{t\ge 0}\) is polynomially attractive, i.e.,

    $$\begin{aligned} \Vert x_\pi (t)-x^*\Vert \le \left( \beta _1\Vert x^0-x^*\Vert ^{v_2(1-\nu _1)}+ \beta _2(t-t_0-\varepsilon )\right) ^{\frac{1}{v_1(1-\nu _1)}}+\rho \quad \text {for all }t\ge t_0, \end{aligned}$$

    where \(\beta _1=\left( \dfrac{\omega _{12}}{\omega _{11}}\right) ^{1-\nu _1}\) and \(\beta _2=\dfrac{\mu _1\gamma ^*(\nu _1-1)}{\omega _{11}^{1-\nu _1}}\).

The proof is similar to the proof of Theorem 3.

Corollary 4

Given system (9) with \(f_k\in C^2(D;{\mathbb {R}}^n)\) satisfying Assumption 2 in a domain \(D\subseteq {\mathbb {R}}^n\), let a curve \(x^*:{\mathbb {R}}^+ \rightarrow D\) be Lipschitz continuous such that \(B_\delta (x^*(t))\subset D\) for all \(t\ge 0\) with some \(\delta >0\).

Then, for any \(\rho >0\), the family of sets \({\mathcal {L}}^{\rho }_t=\{x\in D: \Vert x-x^*(t)\Vert \le \rho \}_{t\ge 0}\) is (exponentially) attracting for the closed-loop system (9) with the controls \(u_k=u_k^\varepsilon (a(x,t),t)\) given by (6)–(8) in the sense of Definition 2.

The above result has been proved in [13] for continuously differentiable \(x^*(t)\) with bounded first derivative.

3.5 Obstacle avoidance problem

Another important problem which can be solved by the proposed approach is generating collision-free motion of system (9) in environments with obstacles. To formulate this problem, assume that the set D is represented as a closed bounded domain with “holes,” i.e.,

$$\begin{aligned} D={\mathcal {W}}\setminus \bigcup _{j=1}^N{\mathcal {O}}_j, \end{aligned}$$

where \({\mathcal {W}}\subset {\mathbb {R}}^n\) is a closed bounded domain (workspace), and \({\mathcal {O}}_1,{\mathcal {O}}_2,\ldots , {\mathcal {O}}_N\subset {\mathcal {W}}\) are open domains (obstacles). The resulting set D is supposed to be valid [21], i.e., \(\displaystyle \overline{{\mathcal {O}}_i} \subset \textrm{int}\, {\mathcal {W}}\) and \( \displaystyle \overline{{\mathcal {O}}_i} \cap \overline{{\mathcal {O}}_j} = \emptyset \;\;\text {if}\;i \ne j\), for all \(i,j\in \{1,\dots ,N\}\).

Problem 3

(Obstacle avoidance problem) Given system (9), an initial point \(x^0\in \textrm{int}\, D\), and a destination point \(x^*\in \textrm{int}\, D\), the goal is to construct a feedback control such that the corresponding solution x(t) of the closed-loop system (9) with the initial data \(x(0)=x^0\) satisfies the conditions:

  • collision-free motion: \( x(t)\in \textrm{int}\, D\text { for all }t\ge 0;\)

  • convergence to the target: \(x(t)\rightarrow x^* \text { as } t\rightarrow +\infty \).

As it is implied by Theorem 1, the above problem can be solved by the controls \(u_k=u_k^\varepsilon (a(x,t),t)\) from (6)–(8) with a proper function \(P\in C^2(D;{\mathbb {R}})\) being such that its level sets \({\mathcal {L}}^{P,P(x^0)}=\{x\in {\mathbb {R}}^n: P(x)\le P(x^0)\}\) are compact and \({\mathcal {L}}^{P,P(x^0)}\subset D\) for all \(x^0\in D\) (see also [12]). There is a broad range of potential functions ensuring collision-free motion for specific classes of systems, see, e.g., [32]. Some of those functions can be used under our control design framework for general classes of nonholonomic systems. As possible candidates for the function P, one can consider, e.g., the following:

  • Navigation functions. According to [32], a map \(P\in C^2(D;[0,1])\) defined on a compact connected analytic manifold D with boundary is a navigation function, if it is: 1) polar at \(x^*\in \textrm{int} D\), i.e., has a unique minimum at \(x^*\); 2) Morse, i.e., its critical points on D are nondegenerate; 3) admissible, i.e., all boundary components have the same maximal value, namely \(\partial D= P^{-1}(1)\).

    In particular, if \({\mathcal {W}}=\{x\in {\mathbb {R}}^n:\varphi _0(x)\ge 0\}\) and \({\mathcal {O}}_i=\{x\in {\mathbb {R}}^n:\varphi _i(x)<0\}\), \(i=\overline{1,N}\), with convex functions \(\varphi _0,\varphi _i\in C^2({\mathbb {R}}^n;{\mathbb {R}})\), then the navigation function can be taken in the form

    $$\begin{aligned} P(x)=\frac{\Vert x-x^*\Vert ^2}{\big (\Vert x-x^*\Vert ^{2\Lambda }+ \varphi (x)\big )^{\frac{1}{\Lambda }}},\qquad \varphi (x) = \prod _{i=0}^N\varphi _i(x), \end{aligned}$$
    (13)

    provided that \(\Lambda \) is large enough and, for all \(x\in \partial O_i\), \(i=\overline{1,N}\),

    $$\begin{aligned} \dfrac{\nabla \varphi _i(x)^\top (x-x^*)}{\Vert x-x^*\Vert ^2}<c_{\varphi }^i, \end{aligned}$$

    where \(c_{\varphi }^i\) is the minimal eigenvalue of the Hessian of \(\varphi _i(x)\) (see [32] for more details).

  • Artificial potential fields, which represent a combination of attractive and repulsive potential fields. In particular, one can take [19]:

    $$\begin{aligned} P(x)=\left\{ \begin{array}{ll} \Vert x-x^*\Vert ^2+K\Big (\dfrac{1}{\varphi (x)}- \dfrac{1}{\varphi (\xi )}\Big )^2 & \text {if }\varphi (x)\le \varphi (\xi ),\\ K\Vert x-x^*\Vert ^2 & \text {if }\varphi (x)> \varphi (\xi ), \end{array} \right. \end{aligned}$$
    (14)

    where K is a positive constant gain, \(\xi \) belongs to a neighborhood of obstacles (see [19] for more details). Another function of such type was proposed in [42]:

    $$\begin{aligned} P(x)= \Vert x-x^*\Vert ^2\left( 1+\frac{K}{\varphi (x)}\right) ,\quad K>0. \end{aligned}$$
    (15)

4 Examples

In this section, we demonstrate the proposed control design approach on the mathematical model of a unicycle, which is a well-known example with the degree of nonholonomy 2. The equations of motion have the form (9) with \(n=3\), \(m=2\), \(f_1(x)=\big (\cos x_3,\sin x_3, 0\big )^\top \), \(f_2(x)=\big (0,0,1\big )^\top \):

$$\begin{aligned} \begin{aligned}&\dot{x}_1=u_1\cos x_3,\\&\dot{x}_2=u_1\sin x_3,\\&\dot{x}_3=u_2. \end{aligned} \end{aligned}$$
(16)

Here \((x_1,x_2)\) denote the coordinates of the contact point of the unicycle wheel, \(x_3\) is the angle between the wheel and the \(x_1\)-axis, and \(u_1\) and \(u_2\) control the forward and the angular velocity, respectively. Note that the above control system can also be represented on the Lie group \(SE(2)\subset GL(3)\). We refer the reader to [3, Chap. 4.3] for more details on the Lie group representation.

Fig. 1

Exponential stabilization. a Time plots of \(\Vert x(t)-x^*\Vert \): for the classical solution x(t) of the closed-loop system (blue); for the \(\pi _\varepsilon \)-solution (green); for the gradient system (red); exponential decay envelope \(\Vert x^0-x^*\Vert e^{-2\gamma (t-\varepsilon )}\) (dark blue). b Time plots of \(d(t)=\Vert x(t) - {\bar{x}}(t)\Vert \): for the classical solution x(t) of the closed-loop system (blue) and for the \(\pi _\varepsilon \)-solution (green). Here \({\bar{x}}(t)\) is the solution of \(\dot{{\bar{x}}}=-\gamma \nabla P({\bar{x}})\)

It is easy to see that the vector fields of system (16) satisfy Assumptions 1–2 in \(D = {\mathbb {R}}^3\). In particular, Assumption 2 holds with the sets of indices \(S_1=\{1,2\}\), \(S_2=\{(1,2)\}\):

$$\begin{aligned} \textrm{span}\big \{f_{1}(x),\,f_2(x),\, [f_{1},f_{2}](x)\big \}={\mathbb {R}}^3 \quad \text {for all } x\in D={\mathbb {R}}^3, \end{aligned}$$

so that the matrix

$$\begin{aligned} {\mathcal {F}}(x)=\left( f_{1}(x)\ \ f_2(x)\ \ [f_{1},f_{2}](x)\right) =\left( \begin{array}{ccc} \cos x_3 & 0 & \sin x_3 \\ \sin x_3 & 0 & -\cos x_3 \\ 0 & 1 & 0 \end{array} \right) \end{aligned}$$

is nonsingular in D, and the corresponding inverse matrix

$$\begin{aligned} {\mathcal {F}}^{-1}(x)= \left( \begin{array}{ccc} \cos x_3 & \sin x_3 & 0 \\ 0 & 0 & 1 \\ \sin x_3 & -\cos x_3 & 0 \end{array} \right) \end{aligned}$$
(17)

has bounded norm for all \(x\in D\).
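For completeness, the Lie bracket appearing in the last column of \({\mathcal {F}}(x)\) can be computed directly from the definition given in the Notations; since \(f_2\) is constant, the term \(L_{f_1}f_2\) vanishes, and

$$\begin{aligned} [f_1,f_2](x) = L_{f_1}f_2(x)-L_{f_2}f_1(x) = -\frac{\partial f_1(x)}{\partial x}f_2(x) = -\left( \begin{array}{ccc} 0 & 0 & -\sin x_3 \\ 0 & 0 & \cos x_3 \\ 0 & 0 & 0 \end{array} \right) \left( \begin{array}{c} 0 \\ 0 \\ 1 \end{array} \right) = \left( \begin{array}{c} \sin x_3 \\ -\cos x_3 \\ 0 \end{array} \right) . \end{aligned}$$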

According to the proposed control laws (6), we take

$$\begin{aligned} u_1&= u^\varepsilon _1(a(x,t),t)=a_{1}(x,t) +2\sqrt{\frac{\pi |a_{12} (x,t)|}{\varepsilon }}\ \textrm{sign}\big (a_{12}(x,t)\big )\cos {\frac{2\pi t}{\varepsilon }},\\ u_2&= u^\varepsilon _2(a(x,t),t)=a_{2}(x,t) +2\sqrt{\frac{\pi |a_{12}(x,t)|}{\varepsilon }}\sin {\frac{2\pi t}{\varepsilon }}. \end{aligned}$$
(18)

In the above formulas, \(\kappa _{12}\) is taken to be equal to 1 (as suggested in Remark 1 with \(\vert S_2\vert = 1\)), and the vector of state-dependent coefficients a(x,t) is defined by (7):

$$\begin{aligned} a(x,t)=\left( a_1(x,t)\ \ a_2(x,t)\ \ a_{12}(x,t)\right) ^\top =-\gamma {\mathcal {F}}^{-1}(x,t)\nabla _x P(x,t), \end{aligned}$$

where \(\gamma >0\) and \(\varepsilon >0\) are control parameters, the matrix \({\mathcal {F}}^{-1}(x,t)\) is given by (17), and \(P\in C^2(D\times {\mathbb {R}};{\mathbb {R}})\). Thus,

$$\begin{aligned} \begin{aligned}&a_1(x,t)=-\gamma \left( \frac{\partial P(x,t)}{\partial x_1} \cos x_3+\frac{\partial P(x,t)}{\partial x_2}\sin x_3\right) ,\\&a_2(x,t)=-\gamma \frac{\partial P(x,t)}{\partial x_3},\\&a_{12}(x,t)=-\gamma \left( \frac{\partial P(x,t)}{\partial x_1}\sin x_3-\frac{\partial P(x,t)}{\partial x_2}\cos x_3\right) . \end{aligned} \end{aligned}$$
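To make the above formulas concrete, the following self-contained Python sketch integrates the \(\pi _\varepsilon \)-solution of the closed-loop system (16), (18) for the quadratic potential \(P(x)=\Vert x-x^*\Vert ^2\) used in Sect. 4.1 below; the parameter values are those of Fig. 1, while the fixed-step RK4 integrator, the number of substeps, and all identifiers are our illustrative choices.

```python
import numpy as np

gamma, eps = 1.0, 0.1
x_star = np.array([1.0, -1.0, np.pi])           # target point, as in Fig. 1

def grad_P(x):                                   # P(x) = ||x - x_star||^2
    return 2.0 * (x - x_star)

def coeffs(x):
    """a(x) = (a_1, a_2, a_12)^T = -gamma * F^{-1}(x) grad P(x), with F^{-1} from (17)."""
    c3, s3 = np.cos(x[2]), np.sin(x[2])
    F_inv = np.array([[c3, s3, 0.0],
                      [0.0, 0.0, 1.0],
                      [s3, -c3, 0.0]])
    return -gamma * F_inv @ grad_P(x)

def control(a, t):
    """Controls (18) with kappa_12 = 1."""
    a1, a2, a12 = a
    osc = 2.0 * np.sqrt(np.pi * abs(a12) / eps)
    return (a1 + osc * np.sign(a12) * np.cos(2.0 * np.pi * t / eps),
            a2 + osc * np.sin(2.0 * np.pi * t / eps))

def rhs(x, u):                                   # unicycle kinematics (16)
    return np.array([u[0] * np.cos(x[2]), u[0] * np.sin(x[2]), u[1]])

# pi_eps-solution: a is re-evaluated only at the sampling instants t_j = j*eps
x, t, substeps = np.zeros(3), 0.0, 200
dt = eps / substeps
for j in range(int(20.0 / eps)):                 # simulate on the interval [0, 20]
    a_j = coeffs(x)
    for i in range(substeps):
        tau = t + i * dt
        k1 = rhs(x, control(a_j, tau))
        k2 = rhs(x + 0.5 * dt * k1, control(a_j, tau + 0.5 * dt))
        k3 = rhs(x + 0.5 * dt * k2, control(a_j, tau + 0.5 * dt))
        k4 = rhs(x + dt * k3, control(a_j, tau + dt))
        x = x + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    t += eps
print("||x(20) - x_star|| =", np.linalg.norm(x - x_star))
```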

Next, we will illustrate the behavior of solutions to system (16), (18) with different functions P(x,t), depending on the control goal. As mentioned in Sect. 3.1, the obtained control scheme can be used within the framework of sampling in the sense of Definition 1, as well as with classical solutions. In the simulations below, we depict the trajectories of system (16) with both types of solutions of the closed-loop system.

Fig. 2

Polynomial stabilization. a Time plots of \(\Vert x(t)-x^*\Vert \): for the classical solution x(t) of the closed-loop system (blue); for the \(\pi _\varepsilon \)-solution (green); for the gradient system (red); polynomial decay envelope \(\left( \Vert x^0-x^*\Vert ^{-2}+8\gamma (t-\varepsilon )\right) ^{-1/2}\). b Time plots of \(d(t)=\Vert x(t) - {\bar{x}}(t)\Vert \): for the classical solution x(t) of the closed-loop system (blue) and for the \(\pi _\varepsilon \)-solution (green). Here \({\bar{x}}(t)\) is the solution of \(\dot{{\bar{x}}}=-\gamma \nabla P({\bar{x}})\)

4.1 Stabilization problem

We start with Problem 1 considered in Sect. 3.3. To exponentially stabilize system (16) at an arbitrary \(x^*\in {\mathbb {R}}^3\), one can take the simple quadratic function

$$\begin{aligned} P(x)=\Vert x-x^*\Vert ^2. \end{aligned}$$
(19)

According to Corollary 3.I, the following decay rate estimate holds:

$$\begin{aligned} \Vert x_\pi (t)-x^*\Vert \le \Vert x^0-x^*\Vert e^{-2\gamma (t-\varepsilon )}\text { for all }t\ge 0. \end{aligned}$$

Figure 1 shows the trajectories of system (16) for \(x^*=(1,-1,\pi )^\top \), \(\gamma =1\), \(\varepsilon =0.1\), \(x(0)=(0,0,0)^\top \).

To illustrate the polynomial decay rate estimate stated in Corollary 3.II, consider the function

$$\begin{aligned} P(x)=\Vert x-x^*\Vert ^4. \end{aligned}$$
(20)

In this case, \(\Vert x_\pi (t)-x^*\Vert \le \big (\Vert x^0-x^*\Vert ^{-2}+8\gamma (t-\varepsilon )\big )^{-1/2}\) for all \(t\ge 0\). Figure 2 illustrates the behavior of trajectories of system (16) for \(x^*=(\frac{1}{2},-\frac{1}{2},\frac{\pi }{2})^\top \), \(\gamma =1\), \(\varepsilon =0.1\), \(x(0)=(0,0,0)^\top \).

4.2 Trajectory tracking

For a given curve \(x^*(t)\in {\mathbb {R}}^3\) on a finite time horizon \(t\in [0,T]\), we will illustrate solutions to the trajectory tracking problem (Problem 2) for system (16) with controls of the form (18) generated by the following potential function:

$$\begin{aligned} P(x,t) = \Vert x-x^*(t)\Vert ^2,\quad x\in {{\mathbb {R}}}^3,\; t\in [0,T]. \end{aligned}$$

Nonfeasible curve. Consider the curve \(x^*\in C^1([0,20\pi ];{\mathbb {R}}^3)\):

$$\begin{aligned} x^*(t)=(0.01x^*_{c,1}(0.1t),\,0.01x^*_{c,2}(0.1t),\,0)^\top ,\quad t\in [0,20\pi ], \end{aligned}$$

where the equations for \(x^*_{c,1}(t)\) and \(x^*_{c,2}(t)\) are given in [44]. The classical and \(\pi _\varepsilon \)-solutions of system (16) with the feedback control (18) are shown in Fig. 3. For these simulations, we take

$$\begin{aligned} \varepsilon =0.25,\; \gamma =1, \; x(0)=(-4,0,0)^\top . \end{aligned}$$
(21)

Figure 3 shows considerable oscillations of the \(x_1\) and \(x_2\) solution components around their reference values \(x_1^*(t)\) and \(x_2^*(t)\). Note that in this case the curve \(x^*(t)\) is not feasible, i.e., \(x=x^*(t)\in {\mathbb {R}}^3\), \(t\in [0,20\pi ]\), is not a solution of system (16) under any choice of admissible controls \(u_1\) and \(u_2\). Indeed, the only possibility to satisfy system (16) with \(x^*_3(t)\equiv 0\) is to have \(x^*_2(t)\equiv \textrm{const}\), which does not hold in the considered case. We will show in the next simulation that the oscillations due to the nonfeasible character of the reference curve can be significantly reduced if \(x^*(t)\) is a solution of the kinematic equations (16).

Fig. 3

Tracking nonfeasible curve \(x^*(t)\): classical solution (blue); \(\pi _\varepsilon \)-solution (green), projection of \(x^*(t)\) (dark blue)

Feasible curve. Consider now the feasible curve \(x^*\in C^1([0,20\pi ];{\mathbb {R}}^3)\) such that

$$\begin{aligned} x^*(t)=(x^*_1(t),x^*_2(t),x^*_{3}(t))^\top ,\quad x^*_i(t) = 0.01x^*_{c,i}(0.01t),\ i=1,2,\quad \tan x^*_3(t) = \frac{\dot{x}^*_2(t)}{\dot{x}^*_1(t)}. \end{aligned}$$

In this case, \(x^*(t)\) satisfies system (16) with continuous controls \(u_1={\tilde{u}}_1(t)\) and \(u_2={\tilde{u}}_2(t)\), where \({\tilde{u}}_1(t)=\dot{x}^*_1(t)\cos x^*_3(t) + \dot{x}^*_2(t)\sin x^*_3(t)\) and \({{\tilde{u}}_2(t)}=\dot{x}^*_3(t)\). To illustrate solutions of the trajectory tracking problem, we apply slightly modified controls of the form

$$\begin{aligned} u_1&= u^\varepsilon _1(a(x,t),t)=a_{1}(x,t) +2\sqrt{\frac{\pi |a_{12}(x,t)|}{\varepsilon }}\ \textrm{sign}\big (a_{12}(x,t)\big )\cos {\frac{2\pi t}{\varepsilon }}+{\tilde{u}}_1(t),\\ u_2&= u^\varepsilon _2(a(x,t),t)=a_{2}(x,t) +2\sqrt{\frac{\pi |a_{12}(x,t)|}{\varepsilon }}\sin {\frac{2\pi t}{\varepsilon }}+{\tilde{u}}_2(t). \end{aligned}$$
(22)
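The feedforward terms \({\tilde{u}}_1(t)\), \({\tilde{u}}_2(t)\) in (22) can be generated directly from a differentiable reference curve; a minimal Python sketch is given below, where the central finite difference replacing the analytic derivative and the identifier names are our implementation choices.

```python
import numpy as np

def feedforward(x_ref, t, h=1e-4):
    """Feedforward of (22): u1_ff = dx1*/dt cos x3* + dx2*/dt sin x3*, u2_ff = dx3*/dt.
    x_ref: callable returning the reference point x*(t) as a length-3 array."""
    xr = np.asarray(x_ref(t), dtype=float)
    dxr = (np.asarray(x_ref(t + h), dtype=float)
           - np.asarray(x_ref(t - h), dtype=float)) / (2.0 * h)
    u1_ff = dxr[0] * np.cos(xr[2]) + dxr[1] * np.sin(xr[2])
    u2_ff = dxr[2]
    return u1_ff, u2_ff
```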

Figure 4 shows the behavior of the closed-loop system (16), (22) with the same initial value and control parameters as in (21).

Fig. 4

Tracking feasible curve \(x^*(t)\): classical solution (blue); \(\pi _\varepsilon \)-solution (green), projection of \(x^*(t)\) (dark blue)

Unbounded and non-Lipschitz curves. Note that the approach of Sect. 3.4 is also applicable to unbounded curves which are not continuously differentiable, e.g., \(x^*(t)=(t,0.5|t-10|,0)^\top \). The results of numerical simulations are shown in Fig. 5 with the control parameters (21) and \(x(0)=(0,0,0)^\top \). However, the Lipschitz property required in Corollary 4 is important, see Fig. 6 with \(x^*(t)=(t,0.1t^2,0)^\top \). As in Fig. 3, some zigzags are present in Fig. 5 due to the nonfeasible character of the reference curve.

Although our theoretical estimates make it possible to track even nonfeasible curves with any prescribed accuracy, possible practical implementations of this approach should take into account the trade-off between the tracking accuracy and the switching frequency allowed by the actuators.

4.3 Obstacle avoidance

We consider the obstacle avoidance problem (Problem 3) for system (16) in the domain \(D\subset {\mathbb {R}}^3\) represented as

$$\begin{aligned} D={\mathcal {W}}\setminus \bigcup _{j=1}^7{\mathcal {O}}_j,\quad {\mathcal {W}}=\{x\in {\mathbb {R}}^n:\varphi _0(x)\ge 0\},\quad {\mathcal {O}}_i= \{x\in {\mathbb {R}}^n:\varphi _i(x)<0\}, \end{aligned}$$

where the cylindrical workspace \({\mathcal {W}}\) and obstacles \({\mathcal O}_i\) are defined by the functions \(\varphi _i(x)=(x_1-x_{oi})^2+(x_2-y_{oi})^2-r_i^2\), \(i=\overline{0,7}\), whose parameters are

$$\begin{aligned} \begin{aligned}&x_{o0}=0,\ y_{o0}=0,\ r_{0}=3.5;\quad x_{o1}=2,\ y_{o1}=1,\ r_{1}=1;\quad x_{o2}=0,\ y_{o2}=-0.25,\ r_{2}=0.5;\\&x_{o3}=-1.5,\ y_{o3}=2,\ r_{3}=0.75;\quad x_{o4}=-2,\ y_{o4}=0,\ r_{4}=0.75;\quad x_{o5}=1.5,\ y_{o5}=-2,\ r_{5}=0.75;\\&x_{o6}=0.5,\ y_{o6}=2.5,\ r_{6}=0.5;\quad x_{o7}=-1,\ y_{o7}=-2,\ r_{7}=1. \end{aligned} \end{aligned}$$
Fig. 5

Tracking the reference curve \(x^*(t)=(t,0.5|t-10|,0)^\top \): classical solution (blue); \(\pi _\varepsilon \)-solution (green); \(x^*(t)\) (dark blue)

Fig. 6

Tracking the reference curve \(x^*(t)=(t,0.1t^2,0)^\top \): classical solution (blue); \(\pi _\varepsilon \)-solution (green); \(x^*(t)\) (dark blue)

Fig. 7

Obstacle avoidance problem with the potential function (23): classical solution (blue); \(\pi _\varepsilon \)-solution (green); gradient flow (red)

The potential function P(x) is constructed in the form (13),

$$\begin{aligned} P(x)=\frac{\Vert x-x^*\Vert ^2}{\big (\Vert x-x^*\Vert ^{2\Lambda }+ \prod _{i=0}^7\varphi _i(x)\big )^{\frac{1}{\Lambda }}}, \end{aligned}$$
(23)

with the target point \(x^*=(-2,1,0)^\top \). For this example, we take \(\Lambda =5\). In Fig. 7, we present the classical and \(\pi _\varepsilon \)-solutions of the corresponding closed-loop system (16) with \(x^0=(1,-1,0)^\top \) and the control (18) with \(\varepsilon =0.5\), \(\gamma =5\). For comparison, we illustrate the solution of the above obstacle avoidance problem with the potential function of the form (15),

$$\begin{aligned} P(x)= \Vert x-x^*\Vert ^2\left( 1+\frac{K}{\varphi (x)}\right) ,\quad K>0. \end{aligned}$$
(24)

Figure 8 shows the closed-loop response with the same initial and target points, \(K=300\), \(\varepsilon =0.1\), and \(\gamma =0.1\). In both cases, the numerical simulations illustrate that the proposed controllers solve the obstacle avoidance problem with acceptable accuracy.
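For reference, a minimal Python sketch of the potential (23) with the circular workspace and obstacle functions \(\varphi _i\) of this section is given below; the numerical gradient is an implementation convenience (an analytic gradient can be used instead), and the resulting \(\nabla P(x)\) is what enters the coefficients a(x,t) in (18).

```python
import numpy as np

Lambda = 5
x_star = np.array([-2.0, 1.0, 0.0])
# (x_o, y_o, r) for i = 0..7; phi_i(x) = (x1 - x_o)^2 + (x2 - y_o)^2 - r^2
circles = [(0.0, 0.0, 3.5), (2.0, 1.0, 1.0), (0.0, -0.25, 0.5), (-1.5, 2.0, 0.75),
           (-2.0, 0.0, 0.75), (1.5, -2.0, 0.75), (0.5, 2.5, 0.5), (-1.0, -2.0, 1.0)]

def phi(x):
    """Product of the functions phi_i defining the workspace and the obstacles."""
    return np.prod([(x[0] - cx) ** 2 + (x[1] - cy) ** 2 - r ** 2 for cx, cy, r in circles])

def P(x):
    """Potential function of the form (23)."""
    d2 = float(np.sum((x - x_star) ** 2))
    return d2 / (d2 ** Lambda + phi(x)) ** (1.0 / Lambda)

def grad_P(x, h=1e-6):
    """Central-difference approximation of the gradient of P."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (P(x + e) - P(x - e)) / (2.0 * h)
    return g

print(P(np.array([1.0, -1.0, 0.0])))    # value of P at the initial point used in Fig. 7
```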

It should be noted that solutions of the designed closed-loop system inherit properties of the gradient system such as possible convergence to an undesirable local minimum. In particular, consider the same problem setting, but with different initial and target points:

$$\begin{aligned} x^0=(2.3,-2.5,0)^\top ,\,x^*=(-3,0.6,0)^\top . \end{aligned}$$
(25)

Then the solution of the gradient system \(\dot{x}=-\gamma \nabla P(x)\) with P given by (23) “falls into a trap,” i.e., tends to a local minimum of the function P. According to [20], this minimum can be avoided by increasing \(\Lambda \), which, however, results in a larger convergence time. As shown in Fig. 9, the trajectories of system (16) exhibit the same behavior. A possible way to tackle this problem is to use another potential function, e.g., (24). Figure 10 illustrates the behavior of system (16) and the gradient system with function (24) and the parameters \(K=300\), \(\varepsilon =0.01\), \(\gamma =0.1\).

Fig. 8

Obstacle avoidance problem with the potential function (24): classical solution (blue); \(\pi _\varepsilon \)-solution (green); gradient flow (red)

Fig. 9

Obstacle avoidance problem for the points (25) and potential function (23): classical solution (blue); \(\pi _\varepsilon \)-solution (green); gradient flow (red)

Fig. 10

Obstacle avoidance problem for the points (25) and potential function (24): classical solution (blue); \(\pi _\varepsilon \)-solution (green); gradient flow (red)

5 Conclusion

The proposed design methodology can be considered as a multilayered hierarchical scheme, where the reference dynamics (upper level) is governed by the gradient flow system (3) with some potential function P(x,t), and the physical level is governed by the nonholonomic control system (1) with oscillating inputs (6). In this framework, the coordination between the physical and reference dynamics is performed via discrete-time sampling at the time instants \(t_j = t_0+\varepsilon j\), \(j=1,2,\ldots \). The proposed scheme generalizes and significantly extends the approaches previously developed for particular control problems with time-invariant vector fields, such as stabilization [45], motion planning on a finite time horizon [46], and obstacle avoidance [47]. It should be emphasized that the contribution of this paper allows the treatment of nonlinear control systems with time-varying vector fields and offers a relatively simple structure of the control functions (6), whose amplitude factors a(x,t) are defined explicitly by the matrix inversion in (7). The latter feature is an important advantage with respect to the method of [45, 46], where solutions to a system of nonlinear algebraic equations are required for the design procedure.

Although the formal proof of our results for small \(\varepsilon \) is established for \(\pi _\varepsilon \)-solutions only, numerical simulations illustrate similar behavior of the classical solutions of the corresponding closed-loop system. Hence, the analysis of the asymptotic behavior of classical solutions remains a subject for future study.