Abstract
Nonlinear control-affine systems described by ordinary differential equations with time-varying vector fields are considered in the paper. We propose a unified control design scheme with oscillating inputs for solving the trajectory tracking and stabilization problems under the bracket-generating condition. This methodology is based on the approximation of a gradient-like dynamics by trajectories of the designed closed-loop system. As an intermediate outcome, we characterize the asymptotic behavior of solutions of the considered class of nonlinear control systems with oscillating inputs under rather general assumptions on the generating potential function. These results are applied to examples of nonholonomic trajectory tracking and obstacle avoidance.
1 Introduction
Consider a nonlinear control system
where \(x=(x_1,\ldots ,x_n)^\top \in D\subseteq {\mathbb {R}}^n\) is the state, \(u=(u_1,\ldots ,u_m)^\top \in {\mathbb {R}}^m\) is the control, D is a domain, and the time-dependent vector fields \(f_k:D\times {\mathbb R}^+\rightarrow {{\mathbb {R}}}^n\) are regular enough to guarantee the existence and uniqueness of solutions to the Cauchy problem for system (1) with any initial data \(x(t_0)=x^0\in D\), \(t_0\ge 0\), and any admissible control \(u:[t_0,+\infty )\rightarrow {\mathbb R}^m\). We will formulate the required regularity assumptions precisely below.
The driftless control-affine system (1) is an extremely important mathematical model in nonholonomic mechanics, which represents the kinematics with nonintegrable constraints in the case \(m<n\) (we refer to the book [3] for general reference). In this area, the class of systems with time-independent vector fields is of special interest:
In contrast to linear control theory, the controllability of system (2) does not imply its stabilizability by a regular feedback law of the form \(u=h(x)\). A famous example of a completely controllable system (2) with \(n=3\) and \(m=2\), which is not stabilizable in the classical sense, was presented in [5]. Since then, the stabilization and motion planning problems of nonholonomic systems have been extensively studied by many experts in nonlinear control theory, mechanics, and robotics. A survey of essential contributions in this area is given in Sect. 2, where the advantages and limitations of known approaches are discussed.
To the best of our knowledge, the present paper contains the first description of a unified control design method for solving a variety of different control problems such as stabilization of an equilibrium point \(x=x^*\), tracking an arbitrary curve in the state space, and motion planning with obstacles for rather general nonautonomous systems (1). The main idea behind our construction is to design time-dependent feedback controllers in such a way that the trajectories of the corresponding closed-loop system approximate the trajectories of a gradient-like system of the form
where the potential function P(x, t) and gain \(\gamma >0\) are to be defined according to the specific problem statement. In particular, the use of Lyapunov-like functions P allows us to solve the stabilization and trajectory tracking problems, while so-called navigation functions or artificial potential fields can be exploited for generating collision-free motion of system (1) in domains with obstacles. We discuss these problems in more detail in Sect. 3. The key contribution of our work is twofold:
-
a unified approach for solving the stabilization and motion planning problems for driftless control-affine systems of the form (1) under the bracket-generating condition;
-
convergence results under relaxed regularity assumptions on the vector fields and their directional derivatives. In particular, the vector fields of the considered class of systems are not required to be smooth.
The subsequent presentation is organized as follows. The outcomes of the literature study are reported in Sect. 2. A family of \(\varepsilon \)-periodic feedback controllers is introduced in Sect. 3 in the form of trigonometric polynomials with respect to time with coefficients depending on the system state. It is shown in Sect. 3.2 that, under a suitable choice of the small parameter \(\varepsilon \), the proposed controllers steer the solutions of system (1) into an arbitrary neighborhood of the set of critical points of P for large time t. These approximation schemes are then adapted to derive stabilizing controllers for the equilibrium stabilization problem (Theorem 3 and its corollary in Sect. 3.3), the trajectory tracking problem (Theorem 4 and its corollary in Sect. 3.4), and obstacle avoidance (Sect. 3.5). We illustrate the proposed control design methodology with examples in Sect. 4. Finally, concluding comments are given in Sect. 5 to summarize the key results of the present paper and underline its contribution with respect to the previous work. The proofs of the main results are given in Appendices A–D.
Notations
Throughout the text, we will use the following notations:
\({\mathbb {R}}^+\)—the set of nonnegative real numbers;
\({\mathbb {R}}_{>0}\)—the set of positive real numbers;
\(\delta _{ij}\)—the Kronecker delta: \(\delta _{ii}{=}1\) and \(\delta _{ij}{=}0\) whenever \(i\ne j\);
\(\textrm{dist}(x,S)\)—the Euclidean distance between a point \(x\in {\mathbb {R}}^{n}\) and a set \(S\subset {\mathbb R}^n\);
\(B_\delta (x^*)\)—the \(\delta \)-neighborhood of a point \(x^*\in {\mathbb {R}}^n\) with \(\delta >0\);
\(\displaystyle B_\delta (S)=\bigcup _{x\in S} B_\delta (x)\)—the \(\delta \)-neighborhood of a set \(S\subset {\mathbb {R}}^{n}\) with \(\delta >0\);
\(\partial M\), \({\overline{M}}\)—the boundary and the closure of a set \(M\subset {\mathbb {R}}^n\), respectively; \({\overline{M}}= M\cup \partial M\);
|S|—the cardinality of a set S;
\({\mathcal {K}}\)—the class of continuous strictly increasing functions \(\varphi :{\mathbb {R}}^+\rightarrow {\mathbb {R}}^+\) such that \(\varphi (0)=0\);
[f, g](x)—the Lie bracket of vector fields \(f,g:\mathbb R^n\rightarrow {\mathbb {R}}^n \) at a point \(x\in {\mathbb {R}}^n\), \([f,g](x)=L_fg(x)- L_gf(x)\), where \( L_gf(x)=\lim \limits _{s\rightarrow 0}\tfrac{f(x+sg(x))-f(x)}{s}\), \( L_fg(x)=\lim \limits _{s\rightarrow 0}\tfrac{g(x+sf(x))-g(x)}{s}\); if f and g are differentiable, then \(L_gf(x)=\frac{\partial f(x)}{\partial x}g(x)\) and \(L_fg(x)=\frac{\partial g(x)}{\partial x}f(x)\);
for a differentiable function \(P:{\mathbb {R}}^n\rightarrow {\mathbb {R}}\), the gradient of P(x) evaluated at a point \(x^0\in {\mathbb {R}}^n\) is denoted by \(\nabla P(x^0)=\left. \frac{\partial P(x)}{\partial x}\right| _{x=x^0}\);
if \(P:{\mathbb {R}}^n\times {\mathbb {R}}\rightarrow {\mathbb {R}}\) is differentiable with respect to its first argument, we denote \(\nabla _x P(x^0,t_0)=\left. \frac{\partial P(x,t)}{\partial x}\right| _{x=x^0,t=t_0}\) for given \(x^0\in {\mathbb {R}}^n\) and \(t_0\in {\mathbb {R}}\).
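The Lie bracket notation above is directly computable. The following sketch (ours, assuming NumPy; the unicycle-type fields are chosen purely for illustration and are not taken from the paper) approximates the Jacobians by central differences and evaluates \([f_1,f_2]=L_{f_1}f_2-L_{f_2}f_1\):

```python
import numpy as np

def jacobian(f, x, h=1e-6):
    """Central finite-difference approximation of the Jacobian of f at x."""
    n = len(x)
    J = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = h
        J[:, j] = (f(x + e) - f(x - e)) / (2 * h)
    return J

def lie_bracket(f, g, x):
    """[f, g](x) = L_f g(x) - L_g f(x) = (dg/dx) f(x) - (df/dx) g(x)."""
    return jacobian(g, x) @ f(x) - jacobian(f, x) @ g(x)

# Unicycle-type fields (an illustrative choice):
f1 = lambda x: np.array([np.cos(x[2]), np.sin(x[2]), 0.0])
f2 = lambda x: np.array([0.0, 0.0, 1.0])
b = lie_bracket(f1, f2, np.zeros(3))  # equals (sin x3, -cos x3, 0) at x3 = 0
```

Here \([f_1,f_2](x)=(\sin x_3,-\cos x_3,0)^\top \), which together with \(f_1\), \(f_2\) spans \({\mathbb {R}}^3\) at every point.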
2 Related work
In this section, we briefly summarize some known results on the stabilization and motion planning of control-affine systems of the form (1). Note that, to the best of our knowledge, most of the related publications focus on autonomous systems with time-independent vector fields. A number of efficient control design methods have been developed in the literature with the emphasis on special classes of systems, such as flat systems [8], chained-form systems [26, 40], unicycle- and car-like systems [7, 29, 30, 33, 36], manipulator models [9, 28], and Chaplygin systems [34].
For planning the motion of general nonholonomic systems, a broad class of approaches is based on the application of Lie algebraic techniques. In this respect, an essential assumption is that the vector fields of system (2) together with their iterated Lie brackets span the whole tangent space at each point of the state manifold (Hörmander’s condition). Several authors used this assumption to produce time-periodic control laws such that the trajectories of a nonholonomic system approximate the trajectories of an extended system. The papers [27, 39] exploited an unbounded sequence of oscillating controls with unbounded frequencies for such an approximation in the case of driftless systems. The paper [23] addresses the limit behavior of solutions of a control-affine system with input signals of magnitude \(\varepsilon ^{-\alpha }\) and frequency scaling \(1/\varepsilon \) as \(\varepsilon \rightarrow 0\). It is assumed that the primitives of the input signals and their iterated primitives up to a certain order are bounded. Then it is shown that the limit behavior of the considered oscillating system is defined either by its drift term or by a linear combination of certain iterated Lie brackets, depending on the value of \(\alpha \). In the paper [4], an averaged system in the form of a differential inclusion is constructed for driftless control-affine systems with fast oscillating inputs. It is proved that an arbitrary solution of such a differential inclusion can be approximated by a family of solutions of the original system when the oscillation frequency tends to infinity. This approximation result is also extended to the class of systems with drift under a time reparametrization and the assumption that the drift generates periodic dynamics.
An overview of motion planning methods for nonholonomic systems is presented in the book [18]. For nilpotent systems, exact solutions to the motion planning problem are proposed with the use of sinusoidal inputs. In the general case, the local steering problem can be solved by constructing a nilpotent approximation under a suitable choice of privileged coordinates. The global steering algorithm is then summarized in [18] as a finite sequence of steps which steers the given nonholonomic system to an arbitrarily small neighborhood of the target point. The nilpotentization of a wheeled mobile robot model with a trailer is proposed in the paper [1] for planning local maneuvers of this kinematic system. On the basis of solving the related sub-Riemannian problem, an algorithm for suboptimal parking has been implemented and tested for several robot configurations.
An algorithm for motion planning of kinematic models of nonholonomic systems in task space is developed in [31] with the use of the Campbell–Baker–Hausdorff–Dynkin formula. The motion planning in task space is treated in the sense of steering the system output to a neighborhood of the desired point. The proposed algorithm is illustrated with unicycle and kinematic car examples. A nonholonomic snakelike robot model with m (\(m \ge 3\)) rigid links is considered in [17]. The motion planning problem is treated there in the sense of generating a gait such that the origin of the snake’s body moves along a given planar curve. This problem is solved by expressing the body velocity from the compatibility equation and the reconstruction equation.
An interesting example of a nonholonomic system with the growth vector (4,7) is studied in [16]. This example is a modification of the trident snake robot with three 1-link branches of variable length. A nilpotent approximation of this system is constructed, and the local optimal steering problem is analyzed by the Pontryagin maximum principle. Controls for generating the motion in the direction of higher-order Lie brackets were proposed in [10, 11] for systems with two inputs.
A hybrid path planning method based on the combination of a high-level planner with a low-level controller running on an autonomous vehicle is described in [37]. The high-level planner (\(D^*\) Lite planner) works on the discretized 2D workspace to produce a reference path such that at each step the robot model moves from a given cell to one of the eight neighboring cells that does not contain an obstacle. The output of the high-level planner is collected as a set of waypoints ending at the goal, and the cost is the total length of the path. Then the low-level controller, running on the autonomous vehicle, provides control inputs to generate motion from the current state to the next waypoint. This path planning method is experimentally validated on a differential drive robot in rough terrain environments.
Stabilizing time-varying controls were proposed in [45] for second degree nonholonomic systems (following the terminology used in [25]). Unlike other publications in this area, the exponential convergence to the equilibrium was proved without assuming that the frequencies of the controls tend to infinity. Moreover, the paper [45] presented a rigorous solvability analysis of the stabilization problem in the proposed class of controls. For detailed reviews of other motion planning and stabilizing strategies we refer to [3, 15, 22]. It has to be emphasized that, in spite of a large number of publications on nonholonomic motion planning, only particular results are available for stabilization together with obstacle avoidance. Even for static obstacles, this problem was studied only for specific systems (see, e.g., [21, 35, 38]). A general class of nonholonomic systems was considered in the paper [41], where a time-independent controller was constructed based on the gradient of a potential function. Note that such a result ensures only stability (but not asymptotic stability). An algorithm computing time-periodic feedback controls for approximating collision-free paths was presented in [14]; however, no solvability issues concerning the general collision avoidance problem have been addressed in that paper.
For a class of driftless control-affine systems, the trajectory tracking problem was addressed in [43] under the assumption that the target trajectory is feasible, i.e., satisfies the dynamical equations with some control inputs. However, to the best of our knowledge, there are no results available for the stabilization of general classes of nonlinear control systems in a neighborhood of nonfeasible curves or in domains with obstacles.
3 Unified control framework for second degree nonholonomic systems
In this section, we present the main idea of our control design scheme by considering nonholonomic systems of degree 2, according to the classification of [25]. The proposed control design provides a generic approach for the stabilization and motion planning of underactuated driftless control-affine systems.
3.1 Definitions and assumptions
To generate stabilizing control strategies, we will exploit sampling, similar to the approaches of [6, 45]. In this respect, we introduce the following definition, which extends the notion of \(\pi _\varepsilon \)-solutions to nonautonomous systems.
Definition 1
(\(\pi _\varepsilon \)-solution) Consider a control system
and assume that a feedback control is given in the form \(u=h(a(x,t),t)\), \(a:D\times {\mathbb {R}}\rightarrow {\mathbb {R}}^l\), \(h: \mathbb R^l\times {\mathbb {R}}\rightarrow {\mathbb {R}}^m\). For given \(t_0\in {\mathbb {R}}\) and \(\varepsilon >0\), define a partition \(\pi _\varepsilon \) of \([t_0,+\infty )\) into the intervals
A \(\pi _\varepsilon \)-solution of the considered closed-loop system corresponding to the initial value \(x^0\in {\mathbb {R}}^n\) is an absolutely continuous function \(x_\pi (t)\in D\), defined for \(t\in [t_0,+\infty )\), which satisfies the initial condition \(x_\pi (t_0)=x^0\) and the differential equations
We will illustrate the relation between \(\pi _\varepsilon \)-solutions and classical solutions with examples in Sect. 4.
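A rough numerical sketch of Definition 1 may clarify the sample-and-hold mechanism (this is our own illustration with an explicit Euler integrator and hypothetical function names): on each partition interval \([t_j,t_j+\varepsilon )\), the state argument of the coefficient map a is frozen at \(x(t_j)\), while the time argument of the control keeps evolving.

```python
import numpy as np

def pi_eps_solution(rhs, a, x0, t0, t_end, eps, substeps=50):
    """Sample-and-hold integration: on [t_j, t_j + eps) the state argument of
    the coefficient map a is frozen at x(t_j), while the closed-loop field
    rhs(x, a_frozen, t) is still integrated in t."""
    x = np.asarray(x0, dtype=float)
    ts, xs = [t0], [x.copy()]
    n_int = int(round((t_end - t0) / eps))  # number of partition intervals
    dt = eps / substeps
    for j in range(n_int):
        t_j = t0 + j * eps
        a_frozen = a(x, t_j)                # sampled at the left endpoint t_j
        for i in range(substeps):
            x = x + dt * rhs(x, a_frozen, t_j + i * dt)  # explicit Euler substep
        ts.append(t_j + eps)
        xs.append(x.copy())
    return np.array(ts), np.array(xs)

# Toy closed loop dx/dt = a(x(t_j)) with a(x) = -x:
# each partition interval contracts the state exactly by the factor (1 - eps).
ts, xs = pi_eps_solution(lambda x, a, t: a, lambda x, t: -x,
                         x0=[1.0], t0=0.0, t_end=1.0, eps=0.1)
```

In the toy example, the sampled dynamics gives \(x(t_{j+1})=(1-\varepsilon )x(t_j)\), whereas the classical solution would decay like \(e^{-t}\); the two coincide as \(\varepsilon \rightarrow 0\).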
Before formulating basic results of this paper, we introduce the main assumptions on the state space D, vector fields \(f_k\), and the potential function P used in the gradient flow dynamics (3).
Assumption 1
The vector fields \(f_k(x,t):D\times {\mathbb {R}}^+\rightarrow {\mathbb {R}}^n\) are twice continuously differentiable w.r.t. x, and \(f_k\), \(L_{f_{j}}f_k\) are continuously differentiable w.r.t. t, for all \(j,k=\overline{1,m}\).
Moreover, for any family of compact subsets \(\widetilde{{\mathcal {D}}}_t\subset D\), \(t\ge 0\), there exist constants \(M_f,L_{fx},L_{2f}>0\), \(L_{ft},H_{fx},H_{ft}\ge 0\) such that
-
(1.1)
\(\Vert f_k(x,t)\Vert \le M_f\),
-
(1.2)
\(\Vert f_k(x,t)-f_k(y,t)\Vert \le L_{fx}\Vert x-y\Vert ,\,\Big \Vert \frac{\partial f_k(x,t)}{\partial t}\Big \Vert \le L_{ft},\,\Vert L_{f_{j}}f_k(x,t)\Vert \le L_{2f}\),
-
(1.3)
\(\Vert L_{f_l}L_{f_{j}}f_k(x,t)\Vert \le H_{fx},\,\Big \Vert \frac{\partial (L_{f_{j}}f_k(x,t))}{\partial t}\Big \Vert \le H_{ft}\),
for all \(t\ge 0\), \(x,y\in \widetilde{{\mathcal {D}}}_t\), \(j,k,l=\overline{1,m}\).
Another important assumption is related to the controllability property of system (1). As already mentioned, in this section we focus on systems with the degree of nonholonomy 2, i.e., those whose vector fields together with their Lie brackets span the whole n-dimensional space.
Assumption 2
-
(2.1)
System (1) satisfies the bracket-generating condition of degree 2 in D, i.e., there exist sets of indices \(S_1\subseteq \{1,2,\ldots ,m\}\), \(S_2\subseteq \{1,2,\ldots ,m\}^2\) such that \(|S_1|+|S_2|=n\) and
$$\begin{aligned} \begin{aligned}&\textrm{span}\big \{f_{i}(x,t), [f_{j_1},f_{j_2}](x,t)\,|\,i\in S_1,\\&\qquad (j_1,j_2)\in S_2\big \}=\mathbb {R}^n\,\\&\quad \text { for all }t\ge 0,x\in D. \end{aligned}\nonumber \\ \end{aligned}$$(4) -
(2.2)
For any family of compact subsets \(\widetilde{{\mathcal {D}}}_t \subset D\), \({t\ge 0}\), there exists an \({M_F}>0\) such that
$$\begin{aligned} \begin{aligned} \Vert {\mathcal {F}}^{-1}(x,t)\Vert \le {M_F}\text { for all }t\ge 0,\,x\in \widetilde{{\mathcal {D}}}_t, \end{aligned} \end{aligned}$$where \({\mathcal {F}}^{-1}(x,t)\) is the inverse matrix for
$$\begin{aligned} {\mathcal {F}}(x,t)= & {} \Big (\big (f_{j_1}(x,t)\big )_{j_1\in S_1}\ \ \big ([f_{j_1},f_{j_2}]\nonumber \\{} & {} (x,t)\big )_{(j_1,j_2)\in S_2}\Big ). \end{aligned}$$(5)
It is important to note that the rank condition (4) implies nonsingularity of the \(n\times n\) matrix \({\mathcal {F}}(x,t)\) for all \(t\ge 0\), \(x\in D\).
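For a concrete instance of the matrix (5), consider the Brockett integrator \(\dot{x}_1=u_1\), \(\dot{x}_2=u_2\), \(\dot{x}_3=x_1u_2-x_2u_1\) (a standard example, used here only for illustration; the paper's own examples appear in Sect. 4). With \(S_1=\{1,2\}\) and \(S_2=\{(1,2)\}\), a short sketch assuming NumPy:

```python
import numpy as np

# Brockett integrator: n = 3, m = 2, time-independent vector fields.
f1 = lambda x: np.array([1.0, 0.0, -x[1]])
f2 = lambda x: np.array([0.0, 1.0,  x[0]])
bracket_12 = lambda x: np.array([0.0, 0.0, 2.0])  # [f1, f2](x), computed by hand

def F(x):
    """Matrix (5) with S1 = {1, 2}, S2 = {(1, 2)}: columns f1, f2, [f1, f2]."""
    return np.column_stack([f1(x), f2(x), bracket_12(x)])

Fx = F(np.array([0.3, -0.7, 1.2]))  # det F(x) = 2 for every x, so (4) holds on R^3
```

Since \(\det {\mathcal {F}}(x)\equiv 2\), the inverse \({\mathcal {F}}^{-1}(x)\) exists everywhere, and its norm is bounded on every compact set, as required in (2.2).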
The next two assumptions describe properties of the potential function P for the gradient-like system (3).
Assumption 3
The function \(P: D\times {\mathbb {R}}^+ \rightarrow {\mathbb {R}}\) is twice continuously differentiable w.r.t. x. Moreover, for any family of compact subsets \(\widetilde{{\mathcal {D}}}_t\subset D\), \(t\ge 0\), there exist constants \(m_P\in {\mathbb {R}}\), \(L_{Px}>0\), \(L_{2Px},L_{2Pt},L_{Pt},H_{Px}\ge 0\) such that
-
(3.1)
\(m_P\le P(x,t)\),
-
(3.2)
\(\left\| \frac{\partial P(x,t)}{\partial x}\right\| \le L_{Px},\,\Vert P(x,t)-P(y,\tau )\Vert \le L_{Px}\Vert x-y\Vert +L_{Pt}\Vert t-\tau \Vert \),
-
(3.3)
\(\Vert \nabla _xP(x,t)-\nabla _xP(y,\tau )\Vert \le L_{2Px}\Vert x-y\Vert +L_{2Pt}\Vert t-\tau \Vert \),
-
(3.4)
\(\sum \limits _{i,j=1}^n\left\| \dfrac{\partial ^2 P(x,t)}{\partial x_i\partial x_j}\right\| \le H_{Px}\),
for all \(t,\tau \ge 0\), \(x\in \widetilde{{\mathcal {D}}}_t\), \(y\in \widetilde{{\mathcal {D}}}_\tau \).
To formulate the last assumption of this section, we introduce families of level sets for a function P(x, t). Namely, given a constant \(c\in {\mathbb {R}}\), we denote
Assumption 4
For every \(x^0\in D\), there exist \(\lambda >0\) and \(\rho >0\) such that, for all \(t\ge t_0\ge 0\), \( {\mathcal {L}}_t^{P,P(x^0,t_0)+\lambda }\) is a nonempty, compact, and convex set, and
3.2 Convergence results
Below we propose a universal control strategy which ensures the convergence of the trajectories of system (1) to the set of extremum points of a given function P. With the index sets \(S_1\), \(S_2\) and the matrix \({{\mathcal {F}}}(x,t)\) as in Assumption 2, we parameterize the controls as
Here the column vector \(a(x,t)=\big (a_{i_1}(x,t)\big |_{i_1\in S_1}\), \( a_{j_1j_2}(x,t)\big |_{(j_1,j_2)\in S_2}\big )^\top \in {\mathbb {R}}^n\) is obtained from
and the oscillating components are
where \(\kappa _{j_1j_2}\in {\mathbb {N}}\) are pairwise distinct numbers, \(\gamma >0\) is a control gain, and \(\varepsilon >0\) is a small parameter.
Remark 1
In what follows, sufficient conditions for the convergence of our control scheme will be proposed for large values of \(\gamma \) and small values of \(\varepsilon \). In this framework, the gain \(\gamma \), corresponding to the amplitude of control signals (6), has the same meaning as \(\gamma \) in (3). Thus, the larger the \(\gamma \), the faster the transient behavior that can be achieved. From the practical viewpoint, there is a trade-off between the convergence rate and control constraints in possible applications: the amplitude parameter \(\gamma \) should not exceed the actuator bounds, and the frequency parameters \(\omega _{j_1j_2}=\frac{2\pi \kappa _{j_1j_2}}{\varepsilon }\), \((j_1,j_2)\in S_2\), should be within the actuator bandwidth. The requirement for \(\kappa _{j_1j_2}>0\) to be pairwise distinct integers in (8) means that there are no resonances up to order 2 between the frequencies \(\omega _{j_1j_2}\) (see, e.g., [45] and [2, Chap. 6] for the resonance conditions). If the value \(\varepsilon >0\) is fixed, then an optimal choice (with respect to minimizing the frequencies) is to define \(\kappa _{j_1j_2}\) as the smallest natural numbers from 1 to \(\vert S_2\vert \), i.e., \(\{\kappa _{j_1j_2}: (j_1,j_2)\in S_2\} =\{1,2,\ldots , \vert S_2\vert \}\).
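The frequency choice described at the end of Remark 1 is straightforward to implement. In the following sketch (the helper name is ours), the pairs in \(S_2\) simply receive the integers \(1,\ldots ,\vert S_2\vert \):

```python
from math import pi

def assign_frequencies(S2, eps):
    """Minimal pairwise-distinct kappa's, as suggested in Remark 1, and the
    resulting frequencies omega = 2*pi*kappa/eps of the oscillating inputs."""
    kappa = {pair: k for k, pair in enumerate(S2, start=1)}
    omega = {pair: 2 * pi * k / eps for pair, k in kappa.items()}
    return kappa, omega

# Example: three Lie-bracket directions, eps = 0.05
kappa, omega = assign_frequencies([(1, 2), (1, 3), (2, 3)], eps=0.05)
```

Any other set of pairwise distinct positive integers would also avoid the order-2 resonances, but this choice keeps the largest frequency as small as possible for a given \(\varepsilon \).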
The first result of this section is as follows.
Lemma 1
Let Assumptions 1–4 be satisfied for system (1) with a function P(x, t). Then there exist a \({\bar{\gamma }}>0\) and \({\bar{\varepsilon }}:[{\bar{\gamma }},+\infty )\rightarrow {{\mathbb {R}}}_{>0}\) such that, for any \(\gamma \ge {\bar{\gamma }}\) and any \(\varepsilon \in (0,{\bar{\varepsilon }}(\gamma )]\), the \(\pi _\varepsilon \)-solution \(x_\pi (t)\) of system (1) with the controls \(u_k=u_k^\varepsilon (a(x,t),t)\) given by (6)–(8) and the initial data \(x_\pi (t_0)=x^0\in D\), \(t_0\ge 0\) is well defined and \(x_\pi (t)\in {\mathcal {L}}_t^{P,P(x^0,t_0)+\lambda }\) for all \(t\ge t_0\), and there exists a \(T\ge 0\) such that
where \(\lambda \), \(\rho \) are positive numbers from Assumption 4.
The proof is given in Appendix B.
In the case of a time-independent function P(x) and time-independent vector fields \(f_k(x)\), it is possible to prove a stronger result under milder assumptions. Let us denote the set of local minima of the function P by
The following theorem holds for the system
Theorem 1
Given system (9), let \(f_k\in C^2(D;{\mathbb {R}}^n)\) satisfy Assumption 2 in a domain \(D\subseteq {\mathbb {R}}^n\), and let a function \(P\in C^2(D;{\mathbb {R}})\) be such that its level sets \({\mathcal {L}}^{P,P(x^0)}=\{x\in D: P(x)\le P(x^0)\}\) are compact for all \(x^0\in D\).
Then for any \(\gamma >0\) there exists an \({\bar{\varepsilon }}>0\) such that, for any \(\varepsilon \in (0,{\bar{\varepsilon }}]\), the \(\pi _\varepsilon \)-solution \(x_\pi (t)\) of system (9) with the controls \(u_k=u_k^\varepsilon (a(x),t)\) given by (6)–(8) and the initial data \(t_0\ge 0\), \(x_\pi (t_0)=x^0\in D\) is well defined and satisfies the following property:
provided that \(x^0\notin \{x\in D:\nabla P(x)=0\}{\setminus } S^*_{{\min }}\). Here
The proof of the asymptotic convergence of \(P(x_\pi (t))\) to the set of critical values of P can be found in [47]. The stronger property (10) follows from the fact that, for small enough \(\varepsilon \),
and the uniqueness of the solutions of system (9) with the controls \(u_k=u_k^\varepsilon (a(x),t)\) and the initial data \(t_0\ge 0\), \(x(t_0)=x^0\in D\).
The approximate convergence of a time-varying function P to its minimum value can be proved under an additional requirement, which also allows us to estimate the convergence rate:
Theorem 2
Let Assumptions 1–3 be satisfied for system (1) with a function P(x, t), and let \(\rho >0\) be such that \(\emptyset \ne {\mathcal {L}}_t^{P,m_P+\rho }\subset D\) for all \(t\ge 0\). Assume moreover that, for any family of compact subsets \(\widetilde{{\mathcal {D}}}_t\subset D\), \({t\ge 0}\), there exist a \(\mu >0\) and a \(\nu \ge 1\) such that
Then for any \(\gamma ^*>0\) there is a \({\bar{\gamma }}>\gamma ^*\) such that, for any \(\gamma >{\bar{\gamma }}\) and \(\varepsilon \in (0,{\bar{\varepsilon }})\) (\({\bar{\varepsilon }}> 0\) depends on \(\gamma \)), the \(\pi _\varepsilon \)-solution \(x_\pi (t)\) of system (1) with the controls \(u_k=u_k^\varepsilon (a(x,t),t)\) given by (6)–(8) and the initial data \(t_0\ge 0\), \(x_\pi (t_0)=x^0\in {\mathcal {D}}_{t_0}\) is well defined and satisfies one of the following properties:
-
(I)
If \(\nu =1\), then
$$\begin{aligned}{} & {} P(x_\pi (t),t)-m_P\\{} & {} \quad \le (P(x^0,t_0)-m_P)e^{-\mu \gamma ^*(t-t_0-\varepsilon )}+\rho \\{} & {} \qquad \text { for all }t\ge t_0. \end{aligned}$$ -
(II)
If \(\nu >1\), then
$$\begin{aligned}{} & {} P(x_\pi (t),t)-m_P\le \big ((P(x^0,t_0)-m_P)^{1-\nu }\\{} & {} \quad +\mu \gamma ^*(\nu -1)(t-t_0-\varepsilon )\big )^{\frac{1}{1-\nu }}+\rho ,\, t\ge t_0. \end{aligned}$$
The proof is given in Appendix C.
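As a sketch of where the two decay estimates come from (the complete argument, including the \(\varepsilon \)- and \(\rho \)-corrections, is in Appendix C), condition (11) reduces the evolution of \(p(t)=P(x_\pi (t),t)-m_P\) to a comparison inequality:

```latex
\dot p \le -\mu \gamma^{*}\, p^{\nu}, \qquad p(t_0) = P(x^0,t_0) - m_P .
```

For \(\nu =1\), Grönwall's lemma yields \(p(t)\le p(t_0)e^{-\mu \gamma ^*(t-t_0)}\). For \(\nu >1\), since \(\frac{d}{dt}p^{1-\nu }=(1-\nu )p^{-\nu }\dot{p}\ge (\nu -1)\mu \gamma ^*\), integration gives \(p(t)\le \big (p(t_0)^{1-\nu }+\mu \gamma ^*(\nu -1)(t-t_0)\big )^{\frac{1}{1-\nu }}\), matching the structure of (I) and (II).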
Remark 2
As it follows from the proof of Theorem 2, it suffices to take
where \(c_{R1}=\frac{L_{ft}}{2}+\frac{H_{ft}}{6}\sqrt{{M_F} L_{Px}}\). Obviously, one may put \({\bar{\gamma }}=\gamma ^*+\dfrac{2^{2\nu }}{\rho ^\nu \mu }L_{Pt}\) if the vector fields of system (1) are time-independent, and \({\bar{\gamma }}=\gamma ^*\) if, additionally, the function P does not depend on t.
Corollary 1
Assume that the constants required in Assumptions 1–3 (and in (11)) exist for all \(x\in {\mathcal {L}}_t^{P,P(x^0,t_0)}\), \(x^0\in D\), \(t_0\ge 0\). Then the assertions of Lemma 1 (Theorem 2) remain valid even if the level sets of the function P(x, t) are not compact.
Similarly, if, for any \(x^0\in D\), the functions \(f_k(x)\) are globally Lipschitz in \({\mathcal {L}}^{P,P(x^0)} \), the functions \(f_k(x)\), \(L_{f_{j}}f_k(x)\), \(L_{f_l}L_{f_{j}}f_k(x)\), \(\Vert {\mathcal {F}}^{-1}(x)\Vert \), \(\frac{\partial P(x)}{\partial x}\), \(\frac{\partial ^2 P(x)}{\partial x^2}\) are bounded on \({\mathcal {L}}^{P,P(x^0)} \), and the function P(x) is bounded from below on \({\mathcal {L}}^{P,P(x^0)} \), then the assertion of Theorem 1 remains valid even if the level sets of the function P(x) are not compact.
Corollary 2
Let the conditions of Theorem 1 be satisfied. Furthermore, assume that for any compact subset \(\widetilde{{\mathcal {D}}}\subset D\) there exist a \(\mu >0\) and \(\nu \ge 1\) such that \( \Vert \nabla P(x)\Vert ^2\ge \mu (P(x)-m_P)^\nu \text { for all }x\in \widetilde{{\mathcal {D}}}, \) where \(m_P\) is defined in Assumption 3.1.
Then for any \(\gamma>\gamma ^*>0\) there exists an \({\bar{\varepsilon }}>0\) such that, for any \(\varepsilon \in (0,{\bar{\varepsilon }})\), the \(\pi _\varepsilon \)-solution \(x_\pi (t)\) of system (9) with the controls \(u_k=u_k^\varepsilon (a(x),t)\) given by (6)–(8) and the initial data \(t_0\ge 0\), \(x_\pi (t_0)=x^0\in D\) is well defined and satisfies one of the following properties:
-
(I)
If \(\nu =1\), then
$$\begin{aligned}{} & {} P(x_\pi (t))-m_P\\{} & {} \le (P(x^0)-m_P)e^{-\mu \gamma ^*(t-t_0-\varepsilon )}\text { for all }t\ge t_0. \end{aligned}$$ -
(II)
If \(\nu >1\), then
$$\begin{aligned}{} & {} P(x_\pi (t))-m_P\le \big ((P(x^0)-m_P)^{1-\nu }\\{} & {} \quad +\mu \gamma ^*(\nu -1)(t-t_0-\varepsilon )\big )^{\frac{1}{1-\nu }}\text { for all }t\ge t_0. \end{aligned}$$
These results follow from the proofs of Lemma 1 and Theorem 2.
Lemma 1 and Theorem 1 give rise to several important results applicable to more specific control problems. Namely, one can choose a function P so that the corresponding gradient system (3) possesses some desired properties, such as asymptotic stability of a given point or set and collision-free motion. In the next section, we will consider different classes of functions P in order to solve the stabilization, trajectory tracking, and obstacle avoidance problems.
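As a simple illustration of this choice (a sketch of ours, not the paper's construction), taking the quadratic potential \(P(x)=\tfrac{1}{2}\Vert x-x^*\Vert ^2\) makes \(x^*\) globally exponentially stable for the gradient flow (3), which can be checked by direct integration:

```python
import numpy as np

gamma = 2.0
x_star = np.array([1.0, -1.0])
grad_P = lambda x: x - x_star  # gradient of P(x) = 0.5*||x - x_star||^2

# Explicit Euler integration of the gradient flow dx/dt = -gamma * grad_P(x):
x, dt = np.array([3.0, 2.0]), 1e-3
for _ in range(5000):  # integrate up to t = 5
    x = x - dt * gamma * grad_P(x)
# the tracking error contracts like e^{-gamma t}, so it is tiny by t = 5
```

With a time-varying target \(x^*(t)\) or a navigation function with obstacle terms in place of the quadratic potential, the same flow produces the tracking and collision-free behaviors discussed in Sects. 3.4 and 3.5.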
3.3 Stabilization problem
In this section, we consider a classical control problem of finding control laws which ensure the asymptotic stability of a point \(x=x^*\in D\) for system (9).
Problem 1
(Stabilization problem) Given system (9) and a point \(x^*\in D\), the goal is to construct a feedback control of the form (6)–(8) ensuring the asymptotic stability of \(x^*\) for the corresponding closed-loop system.
To solve Problem 1, we apply the results of Sect. 3.2 with a Lyapunov-like function P(x), which ensures the asymptotic stability of \(x^*\) for the gradient system (3).
Theorem 3
Given system (9) with \(f_k\in C^2(D;{\mathbb {R}}^n)\) satisfying Assumption 2 in a domain \(D\subseteq {\mathbb {R}}^n\) and a point \(x^*\in D\), let a function \(P\in C^2(D;{\mathbb {R}})\) satisfy the following conditions:
-
3.1) there exist functions \(w_{11},w_{12} \in {\mathcal {K}}\) such that \(\{x\in {\mathbb {R}}^n: \Vert x-x^*\Vert \le w_{11}^{-1}\big (P(x^0)-m_P\big )\}\subset D\) for all \(x^0\in D\), and
$$\begin{aligned} w_{11}(\Vert x-x^*\Vert )\le & {} P(x)-m_P\\\le & {} w_{12}(\Vert x-x^*\Vert )\text { for all }x\in D; \end{aligned}$$ -
3.2) \(\Vert \nabla P(x)\Vert =0\) if and only if \(x=x^*\), and there exists a function \(w_2\in {\mathcal {K}}\) such that
$$\begin{aligned} \Vert \nabla P(x)\Vert \le w_2(\Vert x-x^*\Vert )\text { for all }x\in D. \end{aligned}$$
Then for any \(\gamma >0\), there exists an \({\bar{\varepsilon }}>0\) such that the point \(x^*\) is asymptotically stable for system (9) with the controls \(u_k=u_k^\varepsilon (a(x),t)\) given by (6)–(8) and any \(\varepsilon \in (0,{\bar{\varepsilon }})\), provided that the solutions of the closed-loop system (9), (6)–(8) are defined in the sense of Definition 1.
The proof of this theorem is based on the proofs of Lemma 1 and Theorem 1 (see Appendix D). The following result directly follows from Theorem 3 and Corollary 2:
Corollary 3
Given system (9) with \(f_k\in C^2(D;{\mathbb {R}}^n)\) satisfying Assumption 2 in a domain \(D\subseteq {\mathbb {R}}^n\) and a point \(x^*\in D\), let a function \(P\in C^2(D;{\mathbb {R}})\) satisfy the following conditions:
-
C3.1) there exist constants \(\omega _{11},\omega _{12},v_1,v_2>0\) such that
$$\begin{aligned}{} & {} \omega _{11}\Vert x-x^*\Vert ^{v_1}\\{} & {} \quad \le P(x)-m_P \le \omega _{12}\Vert x-x^*\Vert ^{v_2}\text { for all }x\in D; \end{aligned}$$ -
C3.2) there exist constants \(\mu _1,\mu _2>0\) and \(\nu _1,\nu _2\ge 1\) such that
$$\begin{aligned}{} & {} \mu _1 (P(x)-m_P)^{\nu _1}\\{} & {} \quad \le \Vert \nabla P(x)\Vert ^2 \le \mu _2 (P(x)-m_P)^{\nu _2}\text { for all }x\in D. \end{aligned}$$
Then for any \(\gamma >0\) there exists an \({\bar{\varepsilon }}>0\) such that the point \(x^*\) is asymptotically stable for the closed-loop system (9) with the controls \(u_k=u_k^\varepsilon (a(x),t)\) given by (6)–(8) and any \(\varepsilon \in (0,{\bar{\varepsilon }})\), provided that the solutions of the closed-loop system are defined in the sense of Definition 1. Moreover,
-
(I)
If \(\nu _1=1\), then \(x^*\) is exponentially stable; namely, for any \(\gamma>\gamma ^*>0\), there exists an \(\varepsilon >0\) such that
$$\begin{aligned}{} & {} \Vert x_\pi (t)-x^*\Vert \\{} & {} \quad \le \beta \Vert x^0-x^*\Vert ^{\frac{v_2}{v_1}}e^{-\frac{\mu _1\gamma ^*}{v_1}(t-t_0-\varepsilon )}\text { for all }t\ge t_0, \end{aligned}$$where \(\beta =\left( \frac{\omega _{12}}{\omega _{11}}\right) ^{\frac{1}{v_1}}\).
-
(II)
If \(\nu _1>1\), then \(x^*\) is polynomially stable; namely, for any \(\gamma ^*>0\) and \(\gamma >\gamma ^*\) there exists an \(\varepsilon >0\) such that
$$\begin{aligned}{} & {} \Vert x_\pi (t)-x^*\Vert \le \left( \beta _1\Vert x^0-x^*\Vert ^{v_2(1-\nu _1)}\right. \\{} & {} \quad \left. +\beta _2 (t-t_0-\varepsilon )\right) ^{\frac{1}{v_1(1-\nu _1)}}\text { for all }t\ge t_0, \end{aligned}$$where \(\beta _1=\left( \dfrac{\omega _{12}}{\omega _{11}}\right) ^{1-\nu _1}\), \(\beta _2=\dfrac{\mu _1\gamma ^*(\nu _1-1)}{\omega _{11}^{1-\nu _1}}\).
In particular, to exponentially stabilize system (9) at \(x^*\), one can simply put
The above-stated decay rate estimates are illustrated with numerical examples in Sect. 4.1.
Remark 3
It is interesting to note that for degree 1 nonholonomic systems, i.e., for the case \(m=n\), \(S_1=\{1,\dots ,n\}\), the proposed stabilizing controls are time-invariant functions
which is the classical control design for stabilization of fully actuated driftless control-affine systems.
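To see this classical design in action, consider the simplest fully actuated case \(f_k=e_k\), for which the feedback reduces to \(u=-\gamma \nabla P(x)\). The following sketch (our own illustration with a quadratic potential, not code from the paper) integrates the resulting closed loop:

```python
import numpy as np

# Fully actuated case m = n with f_k = e_k: the system is x' = u, so the
# time-invariant feedback u = a(x) = -gamma * grad P(x) is the classical
# gradient stabilizer.  Here P(x) = ||x - x_star||^2 (our choice).
gamma, dt, T = 1.0, 1e-3, 10.0
x_star = np.array([1.0, -1.0, 3.0])
x = np.zeros(3)
for _ in range(int(T / dt)):
    u = -gamma * 2.0 * (x - x_star)   # u = -gamma * grad P(x)
    x = x + dt * u                    # explicit Euler step
# grad P is linear here, so the error decays like exp(-2*gamma*t)
err = np.linalg.norm(x - x_star)
```

Since \(\nabla P\) is linear, the error contracts exponentially, in agreement with case (I) above.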
Remark 4
The proposed control algorithm (6)–(8) significantly simplifies the stabilizing control design procedure introduced in [45] and makes it possible to express control coefficients explicitly without solving a cumbersome system of algebraic equations.
3.4 Trajectory tracking problem
The proposed control design procedure with a time-varying function P(x, t) can be used to steer system (1) along desired curves. Note that we consider arbitrary continuous curves \(x^*(t)\), which may not be feasible for system (1). Consequently, we consider a relaxed problem statement for approximate trajectory tracking:
Problem 2
(Trajectory tracking problem) Given system (1), a continuous curve \(x^*:{\mathbb {R}}^+\rightarrow D\), and a constant \(\rho >0\), the goal is to construct a feedback law ensuring the attractivity of the family of sets
for the corresponding closed-loop system.
Note that attracting (locally/globally pullback attracting) families of time-varying sets have been studied in the paper [24] for nonautonomous systems of ordinary differential equations. Here we treat this notion in the sense of \(\pi _\varepsilon \)-solutions (Definition 1) for system (1) with control inputs. To be precise, we introduce the following definition.
Definition 2
(Attracting family of sets in the sense of \(\pi _\varepsilon \)-solutions) Let a feedback control of the form (6)–(8) be given, and let \(\rho >0\). We call the family of sets (12) attracting for the closed-loop system (1), (6)–(8), if there exist \(\Delta >0\), \({\bar{\gamma }}>0\), and \({\bar{\varepsilon }}:[\bar{\gamma },+\infty )\rightarrow {\mathbb {R}}_{>0}\) such that, for any \(t_0\ge 0\), \(x^0\in B_{\Delta }({\mathcal {L}}^{\rho }_{t_0})\cap D\), \(\gamma \ge \bar{\gamma }\), \(\varepsilon \in (0,{\bar{\varepsilon }}(\gamma )]\), the corresponding \(\pi _\varepsilon \)-solution \(x_\pi (t)\) satisfying the initial condition \(x_\pi (t_0)=x^0\) is well defined and
Based on Theorem 2, we are in a position to state sufficient conditions for the solvability of Problem 2.
Theorem 4
Given system (1), a continuous curve \(x^*: {\mathbb {R}}^+\rightarrow D\), and a function \(P:D\times {\mathbb {R}}^+\rightarrow {\mathbb {R}}\), let Assumptions 1–4 be satisfied, and assume that the following conditions hold:
- 4.1) there exist constants \(\omega _{11},\omega _{12},v_1,v_2>0\) such that
$$\begin{aligned} \omega _{11}\Vert x-x^*(t)\Vert ^{v_1}\le P(x,t)-m_P \le \omega _{12}\Vert x-x^*(t)\Vert ^{v_2}\quad \text { for all }t\ge 0,\,x\in D; \end{aligned}$$
- 4.2) there exist constants \(\mu _1,\mu _2>0\) and \(\nu _1,\nu _2\ge 1\) such that
$$\begin{aligned} \mu _1 (P(x,t)-m_P)^{\nu _1}\le \Vert \nabla _x P(x,t)\Vert ^2 \le \mu _2 (P(x,t)-m_P)^{\nu _2}\quad \text { for all }t\ge 0,\,x\in D. \end{aligned}$$
Then, for any \(\rho >0\), the family of sets \({\mathcal {L}}^{\rho }_t=\{x\in D: \Vert x-x^*(t)\Vert \le \rho \}_{t\ge 0}\) is attracting for the closed-loop system (1) with the controls \(u_k=u_k^\varepsilon (a(x,t),t)\) given by (6)–(8) in the sense of Definition 2. Moreover, one of the following assertions holds for any \(\gamma >\gamma ^*\ge {\bar{\gamma }}\), \(\varepsilon \in (0,{\bar{\varepsilon }}(\gamma )]\), and \(x^0\in B_{\Delta }(\mathcal L^{\rho }_{t_0})\cap D\):
- (I) if \(\nu _1=1\), then \(\{{\mathcal {L}}^{\rho }_t\}_{t\ge 0}\) is exponentially attractive, i.e.,
$$\begin{aligned} \Vert x_\pi (t)-x^*\Vert \le \beta \Vert x^0-x^*\Vert ^{\frac{v_2}{v_1}}e^{-\frac{\mu _1\gamma ^*}{v_1}(t-t_0-\varepsilon )} +\rho \quad \text { for all }t\ge t_0, \end{aligned}$$
where \(\beta =\left( \frac{\omega _{12}}{\omega _{11}}\right) ^{\frac{1}{v_1}}\);
- (II) if \(\nu _1>1\), then \(\{{\mathcal {L}}^{\rho }_t\}_{t\ge 0}\) is polynomially attractive, i.e.,
$$\begin{aligned} \Vert x_\pi (t)-x^*\Vert \le \left( \beta _1\Vert x^0-x^*\Vert ^{v_2(1-\nu _1)} + \beta _2(t-t_0-\varepsilon )\right) ^{\frac{1}{v_1(1-\nu _1)}}+\rho \quad \text { for }t\ge t_0, \end{aligned}$$
where \(\beta _1=\left( \dfrac{\omega _{12}}{\omega _{11}}\right) ^{1-\nu _1}\) and \(\beta _2=\dfrac{\mu _1\gamma ^*(\nu _1-1)}{\omega _{11}^{1-\nu _1}}\).
The proof is similar to the proof of Theorem 3.
Corollary 4
Given system (9) with \(f_k\in C^2(D;{\mathbb {R}}^n)\) satisfying Assumption 2 in a domain \(D\subseteq {\mathbb {R}}^n\), let a curve \(x^*:{\mathbb {R}}^+ \rightarrow D\) be Lipschitz continuous such that \(B_\delta (x^*(t))\subset D\) for all \(t\ge 0\) with some \(\delta >0\).
Then, for any \(\rho >0\), the family of sets \({\mathcal {L}}^{\rho }_t=\{x\in D: \Vert x-x^*(t)\Vert \le \rho \}_{t\ge 0}\) is (exponentially) attracting for the closed-loop system (9) with the controls \(u_k=u_k^\varepsilon (a(x,t),t)\) given by (6)–(8) in the sense of Definition 2.
The above result has been proved in [13] for continuously differentiable \(x^*(t)\) with bounded first derivative.
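The role of the Lipschitz condition in Corollary 4 can already be seen at the level of the reference gradient flow: for \(P(x,t)=\Vert x-x^*(t)\Vert ^2\), the averaged dynamics \(\dot{x}=-2\gamma (x-x^*(t))\) keeps the tracking error of order \(L/(2\gamma )\), where L is the Lipschitz constant of the curve. A minimal numerical sketch (our own illustration, using the non-smooth curve appearing later in Sect. 4.2):

```python
import numpy as np

# Averaged (gradient-flow) picture behind Corollary 4: for
# P(x,t) = ||x - x_ref(t)||^2 the reference dynamics is
# x' = -2*gamma*(x - x_ref(t)).  If x_ref is Lipschitz with constant L,
# the tracking error settles at roughly ||x_ref'|| / (2*gamma).
gamma, dt, T = 5.0, 1e-3, 20.0
x_ref = lambda t: np.array([t, 0.5 * abs(t - 10.0), 0.0])  # Lipschitz, non-smooth
x = np.array([0.0, 0.0, 0.0])
errs = []
t = 0.0
for _ in range(int(T / dt)):
    x = x + dt * (-2.0 * gamma * (x - x_ref(t)))   # explicit Euler step
    t += dt
    errs.append(np.linalg.norm(x - x_ref(t)))
tail_err = max(errs[len(errs) // 2:])              # error after transients
```

Here \(\Vert \dot{x}^*\Vert \approx 1.12\) and \(2\gamma =10\), so the error settles near 0.11; increasing \(\gamma \) shrinks the attracting tube, which is the mechanism behind the attractivity of \({\mathcal {L}}^{\rho }_t\) for any prescribed \(\rho >0\).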
3.5 Obstacle avoidance problem
Another important problem that can be solved by the proposed approach is generating collision-free motion of system (9) in environments with obstacles. To formulate this problem, assume that the set D is a closed bounded domain with “holes,” i.e.,
where \({\mathcal {W}}\subset {\mathbb {R}}^n\) is a closed bounded domain (workspace), and \({\mathcal {O}}_1,{\mathcal {O}}_2,\ldots , {\mathcal {O}}_N\subset {\mathcal {W}}\) are open domains (obstacles). The resulting set D is supposed to be valid [21], i.e., \(\overline{{\mathcal {O}}_i} \subset \textrm{int}\, {\mathcal {W}}\) and \(\overline{{\mathcal {O}}_i} \cap \overline{{\mathcal {O}}_j} = \emptyset \) if \(i\ne j\), for all \(i,j\in \{1,\dots ,N\}\).
Problem 3
(Obstacle avoidance problem) Given system (9), an initial point \(x^0\in \textrm{int}\, D\), and a destination point \(x^*\in \textrm{int}\, D\), the goal is to construct a feedback control such that the corresponding solution x(t) of the closed-loop system (9) with the initial data \(x(0)=x^0\) satisfies the conditions:
- collision-free motion: \( x(t)\in \textrm{int}\, D\text { for all }t\ge 0;\)
- convergence to the target: \(x(t)\rightarrow x^* \text { as } t\rightarrow +\infty \).
As implied by Theorem 1, the above problem can be solved by the controls \(u_k=u_k^\varepsilon (a(x,t),t)\) from (6)–(8) with a proper function \(P\in C^2(D;{\mathbb {R}})\) such that its sublevel sets \({\mathcal {L}}^{P,P(x^0)}=\{x\in {\mathbb {R}}^n: P(x)\le P(x^0)\}\) are compact and satisfy \({\mathcal {L}}^{P,P(x^0)}\subset D\) for all \(x^0\in D\) (see also [12]). There is a broad range of potential functions ensuring collision-free motion for specific classes of systems, see, e.g., [32]. Some of these functions can be used within our control design framework for general classes of nonholonomic systems. As possible candidates for the function P, one can consider, e.g., the following:
- Navigation functions. According to [32], a map \(P\in C^2(D;[0,1])\) defined on a compact connected analytic manifold D with boundary is a navigation function, if it is: 1) polar at \(x^*\in \textrm{int}\, D\), i.e., has a unique minimum at \(x^*\); 2) Morse, i.e., its critical points on D are nondegenerate; 3) admissible, i.e., all boundary components have the same maximal value, namely \(\partial D= P^{-1}(1)\).
In particular, if \({\mathcal {W}}=\{x\in {\mathbb {R}}^n:\varphi _0(x)\ge 0\}\) and \({\mathcal {O}}_i=\{x\in {\mathbb {R}}^n:\varphi _i(x)<0\}\), \(i=\overline{1,N}\), with convex functions \(\varphi _0,\varphi _i\in C^2({\mathbb {R}}^n;{\mathbb {R}})\), then the navigation function can be taken in the form
$$\begin{aligned} P(x) = \frac{\Vert x-x^*\Vert ^2}{\big (\Vert x-x^*\Vert ^{2\Lambda }+ \varphi (x)\big )^{\frac{1}{\Lambda }}},\qquad \varphi (x) = \prod _{i=0}^N\varphi _i(x), \end{aligned}$$(13)
provided that \(\Lambda \) is large enough and, for all \(x\in \partial {\mathcal {O}}_i\), \(i=\overline{1,N}\),
$$\begin{aligned} \dfrac{\nabla \varphi _i(x)^\top (x-x^*)}{\Vert x-x^*\Vert ^2}<c_{\varphi }^i, \end{aligned}$$
where \(c_{\varphi }^i\) is the minimal eigenvalue of the Hessian of \(\varphi _i(x)\) (see [32] for more details).
- Artificial potential fields, which represent a combination of attractive and repulsive potential fields. In particular, one can take [19]:
$$\begin{aligned} P(x)=\left\{ \begin{aligned}&\Vert x-x^*\Vert ^2+K\Big (\frac{1}{\varphi (x)}- \frac{1}{\varphi (\xi )}\Big )^2&\text { if }\, \varphi (x)\le \varphi (\xi ),\\&K\Vert x-x^*\Vert ^2&\text { if }\, \varphi (x)> \varphi (\xi ), \end{aligned} \right. \end{aligned}$$(14)
where K is a positive constant gain and \(\xi \) belongs to a neighborhood of the obstacles (see [19] for more details). Another function of this type was proposed in [42]:
$$\begin{aligned} P(x)= \Vert x-x^*\Vert ^2\left( 1+\frac{K}{\varphi (x)}\right) ,\quad K>0. \end{aligned}$$(15)
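Both families of potential functions are straightforward to implement. The sketch below uses our own test geometry (a single circular obstacle, with the workspace factor \(\varphi _0\) omitted for simplicity; K, \(\Lambda \), and all coordinates are illustrative choices) and checks that a potential of form (13) and one of form (15) both vanish at the target and grow near the obstacle boundary:

```python
import numpy as np

# Candidate potentials for obstacle avoidance with circular obstacles,
# phi_i(x) = ||x - o_i||^2 - r_i^2 (our test geometry, not from the paper).
x_star = np.array([3.0, 0.0])
obstacles = [(np.array([0.0, 0.0]), 1.0)]          # (center, radius)

def phi(x):
    # product of obstacle functions; the workspace factor phi_0 of (13)
    # is omitted here for simplicity (unbounded workspace)
    return np.prod([np.dot(x - c, x - c) - r ** 2 for c, r in obstacles])

def P_nav(x, Lam=5.0):
    # navigation-function-type potential, cf. (13)
    d2 = np.dot(x - x_star, x - x_star)
    return d2 / (d2 ** Lam + phi(x)) ** (1.0 / Lam)

def P_apf(x, K=10.0):
    # artificial-potential-type function, cf. (15)
    return np.dot(x - x_star, x - x_star) * (1.0 + K / phi(x))

# sanity checks: both vanish at the target and grow near the obstacle
p_free = P_apf(np.array([2.0, 0.0]))
p_close = P_apf(np.array([1.05, 0.0]))   # just outside the unit disk
```

Note that a potential of form (13) stays bounded (it equals 1 on the boundary), while (15) blows up as \(\varphi (x)\rightarrow 0^+\); both shapes are compatible with the sublevel-set condition above when the parameters are chosen appropriately.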
4 Examples
In this section, we demonstrate the proposed control design approach on the mathematical model of a unicycle, which is a well-known example with the degree of nonholonomy 2. The equations of motion have the form (9) with \(n=3\), \(m=2\), \(f_1(x)=\big (\cos x_3,\sin x_3, 0\big )^\top \), \(f_2(x)=\big (0,0,1\big )^\top \):
Here \((x_1,x_2)\) denote the coordinates of the contact point of the unicycle wheel, \(x_3\) is the angle between the wheel and the \(x_1\)-axis, and \(u_1\) and \(u_2\) control the forward and the angular velocity, respectively. Note that the above control system can also be represented on the Lie group \(SE(2)\subset GL(3)\). We refer the reader to [3, Chap. 4.3] for more details on the Lie group representation.
It is easy to see that the vector fields of system (16) satisfy Assumptions 1–2 in \(D = {\mathbb {R}}^3\). In particular, Assumption 2 holds with the set of indices \(S_1=\{1,2\}\), \(S_2=\{(1,2)\}\):
so that the matrix
is nonsingular in D, and the corresponding inverse matrix
has bounded norm for all \(x\in D\).
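The bracket computation behind Assumption 2 for the unicycle is easy to cross-check numerically: \([f_1,f_2](x)=(\sin x_3,-\cos x_3,0)^\top \), and the matrix with columns \(f_1, f_2, [f_1,f_2]\) has determinant 1 for all x. A finite-difference verification (our own sketch, not the paper's code):

```python
import numpy as np

# Unicycle vector fields from (16)
f1 = lambda x: np.array([np.cos(x[2]), np.sin(x[2]), 0.0])
f2 = lambda x: np.array([0.0, 0.0, 1.0])

def lie_bracket(f, g, x, h=1e-6):
    # [f, g](x) = Dg(x) f(x) - Df(x) g(x), Jacobians by central differences
    def jac(F):
        J = np.zeros((3, 3))
        for j in range(3):
            e = np.zeros(3); e[j] = h
            J[:, j] = (F(x + e) - F(x - e)) / (2 * h)
        return J
    return jac(g) @ f(x) - jac(f) @ g(x)

x = np.array([0.3, -0.7, 1.1])              # arbitrary test point
b_num = lie_bracket(f1, f2, x)
b_ana = np.array([np.sin(x[2]), -np.cos(x[2]), 0.0])
F = np.column_stack([f1(x), f2(x), b_ana])  # columns f1, f2, [f1, f2]
detF = np.linalg.det(F)                     # equals 1 for every x
```

Since the determinant is identically 1, the matrix is nonsingular uniformly in x, which is the property exploited in (17).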
According to the proposed control laws (6), we take
In the above formulas, \(\kappa _{12}\) is taken equal to 1 (as suggested in Remark 1 with \(\vert S_2\vert = 1\)), and the vector of state-dependent coefficients a(x, t) is defined by (7):
where \(\gamma >0\) and \(\varepsilon >0\) are control parameters, the matrix \({\mathcal {F}}^{-1}(x,t)\) is given by (17), and \(P\in C^2(D\times {\mathbb {R}};{\mathbb {R}})\). Thus,
Next, we will illustrate the behavior of solutions to system (16), (18) with different functions P(x, t), depending on the control goal. As mentioned in Sect. 3.1, the obtained control scheme can be used within the framework of sampling in the sense of Definition 1, as well as with classical solutions. In the simulations below, we depict the trajectories of system (16) for both types of solutions of the closed-loop system.
4.1 Stabilization problem
We start with Problem 1 considered in Sect. 3.3. To exponentially stabilize system (16) at an arbitrary \(x^*\in {\mathbb {R}}^3\), one can take the simple quadratic function
According to Corollary 3.I, the following decay rate estimate holds:
Figure 1 shows the trajectories of system (16) for \(x^*=(1,-1,\pi )^\top \), \(\gamma =1\), \(\varepsilon =0.1\), \(x(0)=(0,0,0)^\top \).
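A simulation of this kind can be sketched as follows. Since formulas (6)–(8) are not restated here, the oscillating terms below use the amplitude \(2\sqrt{\pi \kappa _{12}|a_{12}|/\varepsilon }\) appearing in the estimates of Appendix B (\(\kappa _{12}=1\)); the particular cos/sin phase pattern with \(\textrm{sign}(a_{12})\) is our assumption, chosen so that the averaged dynamics is the gradient flow \(\dot{x}=-\gamma \nabla P(x)\):

```python
import numpy as np

# pi_eps sampling scheme for the unicycle (16) with P(x) = ||x - x_star||^2.
# The oscillation amplitude 2*sqrt(pi*|a12|/eps) matches the bound used in
# Appendix B (kappa_12 = 1); the cos/sin phase pattern with sign(a12) is our
# assumption, chosen so that the averaged system is x' = -gamma*grad P(x).
gamma, eps = 1.0, 0.1
x_star = np.array([1.0, -1.0, np.pi])

def coeffs(x):
    # a = (a1, a2, a12) = -gamma * Finv(x) * grad P(x), with Finv from (17)
    g = 2.0 * (x - x_star)
    c, s = np.cos(x[2]), np.sin(x[2])
    return -gamma * np.array([c * g[0] + s * g[1], g[2], s * g[0] - c * g[1]])

def rhs(x, tau, a1, a2, a12):
    A = 2.0 * np.sqrt(np.pi * abs(a12) / eps)
    u1 = a1 + A * np.cos(2.0 * np.pi * tau / eps)
    u2 = a2 + np.sign(a12) * A * np.sin(2.0 * np.pi * tau / eps)
    return np.array([u1 * np.cos(x[2]), u1 * np.sin(x[2]), u2])

x = np.zeros(3)
n_sub = 200                          # RK4 substeps per sampling interval
dt = eps / n_sub
for j in range(200):                 # simulate t in [0, 20]
    a1, a2, a12 = coeffs(x)          # coefficients frozen at t_j = j*eps
    for i in range(n_sub):
        tau = i * dt
        k1 = rhs(x, tau, a1, a2, a12)
        k2 = rhs(x + dt / 2 * k1, tau + dt / 2, a1, a2, a12)
        k3 = rhs(x + dt / 2 * k2, tau + dt / 2, a1, a2, a12)
        k4 = rhs(x + dt * k3, tau + dt, a1, a2, a12)
        x = x + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
err = np.linalg.norm(x - x_star)     # residual after 200 sampling intervals
```

With the parameters of Fig. 1 (\(\gamma =1\), \(\varepsilon =0.1\)), the sampled error settles near zero up to a small residual caused by the oscillating terms.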
To illustrate the polynomial decay rate estimate stated in Corollary 3.II, consider the function
In this case, \( \Vert x_\pi (t)-x^*\Vert \le \big (\Vert x^0-x^*\Vert ^{-2}+8\gamma (t-\varepsilon )\big )^{-1/2}\text { for all }t\ge 0. \) Figure 2 illustrates the behavior of trajectories of system (16) for \(x^*=(\frac{1}{2},-\frac{1}{2},\frac{\pi }{2})^\top \), \(\gamma =1\), \(\varepsilon =0.1\), \(x(0)=(0,0,0)^\top \).
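The stated polynomial estimate is sharp at the level of the reference gradient flow: for \(P(x)=\Vert x-x^*\Vert ^4\), writing \(y=\Vert x-x^*\Vert ^2\) gives \(\dot{y}=-8\gamma y^2\), so the solution of \(\dot{x}=-\gamma \nabla P(x)\) satisfies \(\Vert x(t)-x^*\Vert =(\Vert x^0-x^*\Vert ^{-2}+8\gamma t)^{-1/2}\) exactly. A quick numerical confirmation (our own sketch):

```python
import numpy as np

# Gradient flow x' = -gamma * grad P with P(x) = ||x - x_star||^4.
# Writing y = ||x - x_star||^2 gives y' = -8*gamma*y^2, hence
# ||x(t) - x_star|| = (||x0 - x_star||^(-2) + 8*gamma*t)^(-1/2).
gamma, dt, T = 1.0, 1e-3, 5.0
x_star = np.array([0.5, -0.5, np.pi / 2])

def f(y):
    d = y - x_star
    return -4.0 * gamma * np.dot(d, d) * d   # -gamma * grad ||d||^4

x = np.zeros(3)
e0 = np.linalg.norm(x - x_star)
for _ in range(int(T / dt)):                 # classical RK4 integration
    k1 = f(x); k2 = f(x + dt / 2 * k1)
    k3 = f(x + dt / 2 * k2); k4 = f(x + dt * k3)
    x = x + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
err = np.linalg.norm(x - x_star)
bound = (e0 ** -2 + 8.0 * gamma * T) ** -0.5
```

The numerically integrated error coincides with the closed-form bound, which matches the \((\cdot )^{-1/2}\) decay observed for the \(\pi _\varepsilon \)-solutions in Fig. 2.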
4.2 Trajectory tracking
For a given curve \(x^*(t)\in {\mathbb {R}}^3\) on a finite time horizon \(t\in [0,T]\), we will illustrate solutions to the trajectory tracking problem (Problem 2) for system (16) with controls of the form (18) generated by the following potential function:
Nonfeasible curve. Consider the curve \(x^*\in C^1([0,20\pi ];{\mathbb {R}}^3)\):
where the equations for \(x^*_{c,1}(t)\) and \(x^*_{c,2}(t)\) are given in [44]. The classical and \(\pi _\varepsilon \)-solutions of system (16) with the feedback control (18) are shown in Fig. 3. For these simulations, we take
Figure 3 shows considerable oscillations of the \(x_1\) and \(x_2\) solution components around their reference values \(x_1^*(t)\) and \(x_2^*(t)\). Note that in this case the curve \(x^*(t)\) is not feasible, i.e., \(x=x^*(t)\in {\mathbb {R}}^3\), \(t\in [0,20\pi ]\), is not a solution of system (16) under any choice of admissible controls \(u_1\) and \(u_2\). Indeed, the only way to satisfy system (16) with \(x^*_3(t)\equiv 0\) is to have \(x^*_2(t)\equiv \textrm{const}\), which does not hold in the considered case. We will show in the next simulation that the oscillations due to the nonfeasible character of the reference curve can be significantly reduced if \(x^*(t)\) is a solution of the kinematic equations (16).
Feasible curve. Consider now the feasible curve \(x^*\in C^1([0,20\pi ];{\mathbb {R}}^3)\) such that
In this case, \(x^*(t)\) satisfies system (16) with continuous controls \(u_1={\tilde{u}}_1(t)\) and \(u_2={\tilde{u}}_2(t)\), where \({\tilde{u}}_1(t)=\dot{x}^*_1(t)\cos x^*_3(t) + \dot{x}^*_2(t)\sin x^*_3(t)\) and \({{\tilde{u}}_2(t)}=\dot{x}^*_3(t)\). To illustrate solutions of the trajectory tracking problem, we apply slightly modified controls of the form
Figure 4 shows the behavior of the closed-loop system (16), (22) with the same initial value and control parameters as in (21).
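The feedforward controls \({\tilde{u}}_1,{\tilde{u}}_2\) are easy to validate numerically for any candidate curve: substituting them into (16) must reproduce \(\dot{x}^*(t)\). A sketch with a sample feasible curve of our own choice, \(x^*(t)=(\sin t,\,1-\cos t,\,t)^\top \), generated by the constant controls \(u_1=u_2=1\):

```python
import numpy as np

# Feasibility check for a reference curve of the unicycle (16): the
# feedforward controls from the text are
#   u1 = x1*' cos x3* + x2*' sin x3*,   u2 = x3*',
# and a feasible curve must satisfy (16) with these controls.
t = np.linspace(0.0, 2 * np.pi, 400)
x1, x2, x3 = np.sin(t), 1.0 - np.cos(t), t          # sample feasible curve
dx1, dx2, dx3 = np.cos(t), np.sin(t), np.ones_like(t)
u1 = dx1 * np.cos(x3) + dx2 * np.sin(x3)            # here u1 = 1 identically
u2 = dx3                                            # here u2 = 1 identically
# residuals of the kinematic equations along the curve
r1 = dx1 - u1 * np.cos(x3)
r2 = dx2 - u1 * np.sin(x3)
r3 = dx3 - u2
res = max(np.abs(r1).max(), np.abs(r2).max(), np.abs(r3).max())
```

For a nonfeasible curve (such as the one of Fig. 3), the residual \(\dot{x}_1^*\sin x_3^*-\dot{x}_2^*\cos x_3^*\) cannot be cancelled by any choice of \(u_1\), which is the source of the zigzag behavior.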
Unbounded and non-Lipschitz curves. Note that the approach of Sect. 3.4 is also applicable to unbounded curves that are not continuously differentiable, e.g., \(x^*(t)=(t,0.5|t-10|,0)^\top \). The results of numerical simulations are shown in Fig. 5 with the control parameters (21) and \(x(0)=(0,0,0)^\top \). However, the Lipschitz property required in Corollary 4 is important, see Fig. 6 with \(x^*(t)=(t,0.1t^2,0)^\top \). As in Fig. 3, some zigzags are present in Fig. 5 due to the nonfeasible character of the reference curve.
Although our theoretical estimates allow tracking even nonfeasible curves with any prescribed accuracy, practical implementations of this approach should take into account the trade-off between the tracking accuracy and the switching frequency allowed by the actuators.
4.3 Obstacle avoidance
We consider the obstacle avoidance problem (Problem 3) for system (16) in the domain \(D\subset {\mathbb {R}}^3\) represented as
where the cylindrical workspace \({\mathcal {W}}\) and the obstacles \({\mathcal {O}}_i\) are defined by the functions \(\varphi _i(x)=(x_1-x_{oi})^2+(x_2-y_{oi})^2-r_i^2\), \(i=\overline{0,7}\), whose parameters are
The potential function P(x) is constructed in the form (13),
with the target point \(x^*=(-2,1,0)^\top \). For this example, we take \(\Lambda =5\). In Fig. 7, we present the classical and \(\pi _\varepsilon \)-solutions of the corresponding closed-loop system (16) with \(x^0=(1,-1,0)^\top \) and the control (18) with \(\varepsilon =0.5\), \(\gamma =5\). For comparison, we illustrate the solution of the above obstacle avoidance problem with the potential function of form (15),
Figure 8 shows the closed-loop response with the same initial and target points, \(K=300\), \(\varepsilon =0.1\), and \(\gamma =0.1\). In both cases, the numerical simulations illustrate that the proposed controllers solve the obstacle avoidance problem with acceptable accuracy.
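The avoidance mechanism can be reproduced at the level of the reference gradient flow: descending a potential of form (13) yields a collision-free path to the target. In the sketch below, the geometry (one circular obstacle inside a disk workspace), \(\Lambda \), the step size, and the start/target points are all our own test choices, not the configuration used for Figs. 7–8:

```python
import numpy as np

# Gradient descent on a navigation-function-type potential of form (13).
# Geometry, Lambda, step size, and start/target are our own test choices.
x_star = np.array([3.0, 0.0])
Lam = 5.0
obst_c, obst_r = np.array([0.0, -0.4]), 0.6

def phi(y):
    phi0 = 16.0 - np.dot(y, y)                       # workspace: disk of radius 4
    phi1 = np.dot(y - obst_c, y - obst_c) - obst_r ** 2
    return phi0 * phi1

def P(y):
    d2 = np.dot(y - x_star, y - x_star)
    return d2 / (d2 ** Lam + phi(y)) ** (1.0 / Lam)

def grad(y, h=1e-5):
    # gradient of P by central finite differences
    g = np.zeros(2)
    for j in range(2):
        e = np.zeros(2); e[j] = h
        g[j] = (P(y + e) - P(y - e)) / (2.0 * h)
    return g

y = np.array([-3.0, 0.3])
min_phi = phi(y)                                     # stays > 0 iff collision-free
for _ in range(2000):
    if np.linalg.norm(y - x_star) < 0.2:             # close enough to the target
        break
    g = grad(y)
    y = y - 0.02 * g / np.linalg.norm(g)             # normalized descent step
    min_phi = min(min_phi, phi(y))
final_dist = np.linalg.norm(y - x_star)
```

The descent path bends around the obstacle because P grows toward the boundary \(\varphi =0\), while the closed-loop system (16), (18) approximates this flow up to the oscillation residual.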
It should be noted that solutions of the designed closed-loop system inherit such properties of the gradient system as convergence to an undesired local minimum. In particular, consider the same problem setting, but with different initial and target points:
Then the solution of the gradient system \(\dot{x}=-\gamma \nabla P(x)\) with P given by (23) “falls into a trap,” i.e., tends to a local minimum of the function P. According to [20], this minimum can be avoided by increasing \(\Lambda \), which, however, results in a larger convergence time. As shown in Fig. 9, the trajectories of system (16) exhibit the same behavior. A possible way to tackle this problem is to use another potential function, e.g., (24). Figure 10 illustrates the behavior of system (16) and the gradient system with function (24) and the parameters \(K=300\), \(\varepsilon =0.01\), \(\gamma =0.1\).
5 Conclusion
The proposed design methodology can be considered as a multilayered hierarchical scheme, where the reference dynamics (upper level) is governed by the gradient flow system (3) with some potential function P(x, t), and the physical level is governed by the nonholonomic control system (1) with oscillating inputs (6). In this framework, the coordination between the physical and reference dynamics is performed via discrete-time sampling at time instants \(t_j = t_0+\varepsilon j\), \(j=1,2,\dots \). The proposed scheme generalizes and significantly extends the approaches previously developed for particular control problems with time-invariant vector fields, such as stabilization [45], motion planning on a finite time horizon [46], and obstacle avoidance [47]. It should be emphasized that the contribution of this paper allows the treatment of nonlinear control systems with time-varying vector fields and a relatively simple structure of the control functions (6), whose amplitude factors a(x, t) are effectively defined by the matrix inversion in (7). The latter feature is an important advantage over the method of [45, 46], where solutions to a system of nonlinear algebraic equations are required for the design procedure.
Although the formal proof of our results for small \(\varepsilon \) is established for \(\pi _\varepsilon \)-solutions only, numerical simulations illustrate similar behavior of classical solutions of the corresponding closed-loop system. Hence, the analysis of the asymptotic behavior of classical solutions remains a subject of future study.
Data availability
The datasets generated during the current study are available from the corresponding author on reasonable request.
References
Ardentov, A.A., Mashtakov, A.P.: Control of a mobile robot with a trailer based on nilpotent approximation. Autom. Remote Control 82(1), 73–92 (2021)
Argyris, J.H., Faust, G., Haase, M., Friedrich, R.: An Exploration of Dynamical Systems and Chaos, 2nd edn. Springer, Berlin (2015)
Bloch, A.M.: Nonholonomic Mechanics and Control, 2nd edn. Springer, Berlin (2015)
Bombrun, A., Pomet, J.B.: The averaged control system of fast-oscillating control systems. SIAM J. Control. Optim. 51(3), 2280–2305 (2013)
Brockett, R.W.: Asymptotic stability and feedback stabilization. Differ. Geom. Control Theory 27, 181–191 (1983)
Clarke, F.H., Ledyaev, Y.S., Sontag, E.D., Subbotin, A.I.: Asymptotic controllability implies feedback stabilization. IEEE Trans. Autom. Control 42(10), 1394–1407 (1997)
Deng, M., Inoue, A., Sekiguchi, K., Jiang, L.: Two-wheeled mobile robot motion control in dynamic environments. Robot. Comput. Integr. Manuf. 26(3), 268–272 (2010)
Fliess, M., Lévine, J., Martin, P., Rouchon, P.: Design of trajectory stabilizing feedback for driftless flat systems. In: Proceedings of the 3rd ECC, pp. 1882–1887 (1995)
Galicki, M.: Path following by the end-effector of a redundant manipulator operating in a dynamic environment. IEEE Trans. Robot. 20(6), 1018–1025 (2004)
Gauthier, J.P., Kawski, M.: Minimal complexity sinusoidal controls for path planning. In: Proceedings of the 53rd IEEE Conference on Decision and Control, pp. 3731–3736 (2014)
Gauthier, J.P., Monroy-Perez, F.: On certain hyperelliptic signals that are natural controls for nonholonomic motion planning. Math. Control Signals Syst. 27(3), 415–437 (2015)
Grushkovskaya, V., Zuyev, A.: Obstacle avoidance problem for second degree nonholonomic systems. In: Proceedings of the 57th IEEE Conference on Decision and Control, pp. 1500–1505 (2018)
Grushkovskaya, V., Zuyev, A.: Stabilization of non-admissible curves for a class of nonholonomic systems. In: 2019 18th European Control Conference (ECC), pp. 656–661 (2019). https://doi.org/10.23919/ECC.2019.8795948
Gurvits, L., Li, Z.X.: Smooth time-periodic feedback solutions for nonholonomic motion planning. In: Li, Z., Canny, J.F. (eds.) Nonholonomic Motion Planning, pp. 53–108. Springer, Berlin (1993)
Hoy, M., Matveev, A.S., Savkin, A.V.: Algorithms for collision-free navigation of mobile robots in complex cluttered environments: a survey. Robotica 33(03), 463–497 (2015)
Hrdina, J., Zalabová, L.: Local geometric control of a certain mechanism with the growth vector (4, 7). J. Dyn. Control Syst. 26(2), 199–216 (2020)
Itani, O., Shammas, E.: Motion planning for redundant multi-bodied planar kinematic snake robots. Nonlinear Dyn. 104(4), 3845–3860 (2021)
Jean, F.: Control of Nonholonomic Systems: From Sub-Riemannian Geometry to Motion Planning. Springer, Berlin (2014)
Khatib, O.: Real-time obstacle avoidance for manipulators and mobile robots. Int. J. Robot. Res. 5(1), 90–98 (1986)
Ko, N.Y., Lee, B.H.: Avoidability measure in moving obstacle avoidance problem and its use for robot motion planning. In: Proceedings of the 1996 IEEE/RSJ International Conference on Intelligent Robots and Systems vol. 3, pp. 1296–1303 (1996)
Koditschek, D.E., Rimon, E.: Robot navigation functions on manifolds with boundary. Adv. Appl. Math. 11(4), 412–442 (1990)
Kolmanovsky, I., McClamroch, N.H.: Developments in nonholonomic control problems. IEEE Control. Syst. 15(6), 20–36 (1995)
Kurzweil, J., Jarník, J.: Iterated Lie brackets in limit processes in ordinary differential equations. Res. Math. 14(1), 125–137 (1988)
Langa, J.A., Robinson, J.C., Suárez, A.: Stability, instability, and bifurcation phenomena in non-autonomous differential equations. Nonlinearity 15(3), 887–903 (2002)
Laumond, J.P., Risler, J.J.: Nonholonomic systems: controllability and complexity. Theor. Comput. Sci. 157(1), 101–114 (1996)
Li, L.: Nonholonomic motion planning using trigonometric switch inputs. Int. J. Simul. Model. 16(1), 176–186 (2017)
Liu, W.: An approximation algorithm for nonholonomic systems. SIAM J. Control. Optim. 35(4), 1328–1365 (1997)
Maciejewski, A.A., Klein, C.A.: Obstacle avoidance for kinematically redundant manipulators in dynamically varying environments. Int. J. Robot. Res. 4(3), 109–117 (1985)
Kapitanyuk, Y.A., Garcia de Marina, H., Proskurnikov, A.V., Cao, M.: Guiding vector field algorithm for a moving path following problem. In: Preprints of the 20th IFAC World Congress, pp. 7177–7182 (2017)
Masehian, E., Katebi, Y.: Robot motion planning in dynamic environments with moving obstacles and target. Int. Sci. Index Comput. Inf. Eng. 1(5), 1249–1254 (2007)
Mielczarek, A., Duleba, I.: Development of task-space nonholonomic motion planning algorithm based on Lie-algebraic method. Appl. Sci. 11(21), 10245 (2021)
Paternain, S., Koditschek, D.E., Ribeiro, A.: Navigation functions for convex potentials in a space with convex obstacles. IEEE Trans. Autom. Control 63(9), 2944–2959 (2018). https://doi.org/10.1109/TAC.2017.2775046
Qu, Z., Wang, J., Plaisted, C.E.: A new analytical solution to mobile robot trajectory generation in the presence of moving obstacles. IEEE Trans. Robot. 20(6), 978–993 (2004)
Reyhanoglu, M., McClamroch, N., Bloch, A.: Motion planning for nonholonomic dynamic systems. In: Li, Z., Canny, J. (eds.) Nonholonomic Motion Planning, pp. 201–234. Springer, Berlin (1993)
Rimon, E., Koditschek, D.E.: Exact robot navigation using artificial potential functions. IEEE Trans. Robot. Autom. 8(5), 501–518 (1992)
Savkin, A.V., Matveev, A.S., Hoy, M., Wang, C.: Safe Robot Navigation Among Moving and Steady Obstacles. Butterworth-Heinemann, Amsterdam (2015)
Sebastian, B., Ben-Tzvi, P.: Physics based path planning for autonomous tracked vehicle in challenging terrain. J. Intell. Rob. Syst. 95(2), 511–526 (2019)
Sharma, B., Vanualailai, J., Singh, S.: Lyapunov-based nonlinear controllers for obstacle avoidance with a planar n-link doubly nonholonomic manipulator. Robot. Auton. Syst. 60(12), 1484–1497 (2012)
Sussmann, H.J., Liu, W.: Limits of highly oscillatory controls and the approximation of general paths by admissible trajectories. In: Proceedings of the 30th IEEE Conference on Decision and Control, pp. 437–442 (1991)
Teel, A.R., Murray, R.M., Walsh, G.C.: Non-holonomic control systems: from steering to stabilization with sinusoids. Int. J. Control 62(4), 849–870 (1995)
Urakubo, T.: Stability analysis and control of nonholonomic systems with potential fields. J. Intell. Robot. Syst. (2017). https://doi.org/10.1007/s10846-017-0473-1
Vanualailai, J., Sharma, B., Nakagiri, S.: An asymptotically stable collision-avoidance system. Int. J. Non-Linear Mech. 43(9), 925–932 (2008)
Walsh, G., Tilbury, D., Sastry, S., Murray, R., Laumond, J.P.: Stabilization of trajectories for systems with nonholonomic constraints. IEEE Trans. Autom. Control 39(1), 216–222 (1994)
Zuyev, A.: Exponential stabilization of nonholonomic systems by means of oscillating controls. SIAM J. Control. Optim. 54(3), 1678–1696 (2016)
Zuyev, A., Grushkovskaya, V.: Motion planning for control-affine systems satisfying low-order controllability conditions. Int. J. Control 90, 2517–2537 (2017)
Zuyev, A., Grushkovskaya, V.: Obstacle avoidance problem for driftless nonlinear systems with oscillating controls. IFAC Pap. Online 50, 10476–10481 (2017)
Funding
Open Access funding enabled and organized by Projekt DEAL. This work was partially supported by the DFG (German Research Foundation) under grants GR 5293/1-1 and ZU 359/2-1.
Author information
Authors and Affiliations
Contributions
The authors contributed equally to this work. The authors have read and approved the final manuscript.
Corresponding author
Ethics declarations
Conflict of interest
The authors have no relevant financial or nonfinancial interests to disclose.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendices
Appendix A: Auxiliary results
In this appendix, we summarize some auxiliary lemmas which are needed for the proof of the main results.
Lemma 2
Let \(D\subseteq {\mathbb {R}}^n\), \(t_0\ge 0\), and \(x(t)\in D\), \(t_0\le t\le \tau \), be a solution of system (1). Assume that there exist \(M,L\ge 0\) such that
for all \(x,y\in D\), \(t\ge 0\), \(k=\overline{1,m}\). Then
with \(U=\max \limits _{t\in [t_0,\tau ]}\sum \limits _{k=1}^{m}|u_{k}(t)|\).
Proof
Follows from the Grönwall–Bellman inequality. \(\square \)
Lemma 3
Let \(D\subseteq {\mathbb {R}}^n\), \(t_0\ge 0\), and \(x(t)\in D\), \(t_0\le t\le \tau \), be a solution of system (1) with \(u\in C[t_0,\tau ]\) and \(x(t_0)=x^0\in D\). Assume that the vector fields \(f_k\in C^2(D\times {\mathbb {R}}^+;{\mathbb {R}}^n)\) are such that \(f_k(\cdot ,t)\in C^3(D;{\mathbb {R}}^n)\) for each fixed \(t\ge 0\), \(k=\overline{1,m}\). Then \(x_\pi (t)\) can be represented in the following way:
where
Proof
The assertion is obtained by a modification of the Chen–Fliess series expansion (see, e.g., [46]). \(\square \)
Appendix B: Proof of Lemma 1
The proof consists of several steps. Throughout the proof, we assume that
and \(\gamma >0\) will be chosen in Step 3.
Step 1. The goal of the first step is to find \(\varepsilon _1(\gamma )>0\) such that, for all \(\gamma >0\) and \(\varepsilon \in (0,\min \{\varepsilon _0(\gamma ),\varepsilon _1(\gamma )\}]\), the \(\pi _\varepsilon \)-solution \(x_\pi (t)\) of system (1) with the initial data \(x_\pi (t_0)=x^0\) and the controls \(u_k=u_k^\varepsilon (a(x,t),t)\) is well defined on \([t_0,t_0+\varepsilon ]\), i.e., \(x_\pi (t)\in D\) for all \(t\in [t_0,t_0+\varepsilon ]\).
Let \(t_0\ge 0\), \(x^0\in D\), and let P(x, t) satisfy Assumptions 3–4. Given any positive numbers \(\lambda >0\) and \(\rho >0\) satisfying Assumption 4, consider the level sets
and
for \(t\ge 0\). By Assumption 4, \(\widetilde{\mathcal D}_t\) are compact subsets and
Note that according to Definition 1, \(u_k=u_k^\varepsilon (a(x^0,t_0),t)\) for \(t\in [t_0,t_0+\varepsilon ]\). Let us estimate the value of \(U(x^0,t_0)=\max \limits _{t_0\le t\le t_0+\varepsilon }\sum \limits _{k=1}^m|u_k^\varepsilon (a(x^0,t_0),t)|\):
From (8) and properties of the Kronecker delta \(\delta _{ij}\),
therefore, \(\displaystyle U(x^0,t_0) \le \sum _{k\in S_1} \left| a_{k}(x^0,t_0)\right| +2\sqrt{\frac{\pi }{\varepsilon }}\sum \limits _{k=1}^m\sum _{(j_1,j_2)\in S_2} \sqrt{|a_{j_1j_2}(x^0,t_0)| \kappa _{j_1j_2}}. \) Using Hölder’s inequalities with indices \(p=q=2\) and \(p=4\), \(q=\frac{4}{3}\), we further estimate the value of \(U(x^0,t_0)\) as
From Formula (7) and Assumptions (2.2) and (3.2) we conclude that, since \(x^0\in D\),
so that
Here the constant \(L_{Px}\) is defined from Assumption 3.2) with \(\{\widetilde{\mathcal D}_t\}_{t\ge 0}\) given by (31). Thus, for all \(\gamma >0\) and \(\varepsilon \in (0,\varepsilon _0(\gamma )]\),
with
Let us also take constants \(M_f>0\) and \(L_{fx}\ge 0\) from Assumption (1.1)–(1.2) with \(\{\widetilde{\mathcal D}_t\}_{t\ge 0}\) given by (31):
Then Lemma 2 together with (38) yields the following estimate:
Let us underline that the latter estimate holds not only for the chosen \(x^0\in D\), but also for any \(x^0\in \widetilde{\mathcal D}_t\), \({t\ge 0}\). Using the obtained inequality and Assumption (3.2), we estimate \(P(x_\pi (t),t)\) in the following way:
for all \(t\in [t_0,t_0+\varepsilon ]\). Let us define \(\varepsilon _1(\gamma )\) as the smallest positive root of the equation
Then for any \(\gamma >0\) and \(\varepsilon \in \big (0,\min \{\varepsilon _0(\gamma ),\varepsilon _1(\gamma )\}\big ]\),
that is \(x_\pi (t)\in \widetilde{{\mathcal {D}}}_t\subset D\) for all \(t\in [t_0,t_0+\varepsilon ]\).
Step 2. The goal of this step is to show that the \(\pi _\varepsilon \)-solution \(x_\pi (t)\) of system (1) with the initial data \(x_\pi (t_0)=x^0\in D\) and the controls \(u_k=u_k^\varepsilon (a(x,t),t)\) can be represented in the form
where \(\Vert R(\varepsilon )\Vert =O\big ((\varepsilon \Vert \nabla _xP(x^0,t_0)\Vert )^{1/2}\big )+O((\varepsilon \Vert \nabla _x P(x^0,t_0)\Vert )^{3/2})\) as \(\varepsilon \rightarrow 0\).
Applying Lemma 3 to the \(\pi _\varepsilon \)-solution \(x_\pi (t)\) of system (1) with the initial data \(x_\pi (t_0)=x^0\in D\) and the controls \(u_k=u_k^\varepsilon (a(x,t),t)\) given by (6), we represent \(x_\pi (\varepsilon )\) as
where \(r_1,r_2\) are given by (29) and
Using Assumption (1.2)–(1.3), we estimate \(r_1(\varepsilon ),r_2(\varepsilon ),r_3(\varepsilon )\) as follows:
Here the constant \(c_u\) is given by (39), and \(M_f\), \(L_{2f}\), \(L_{ft}\), \(H_{fx}\), \(H_{ft}\) are defined from Assumptions (1.2)–(1.3) and (2.2) with \(\{\widetilde{{\mathcal {D}}}_t\}_{t\ge 0}\) given by (31). Thus, for any \(t_0\ge 0\), \(x^0\in D\), \(\gamma >0\), and \(\varepsilon \in \big (0,\min \{\varepsilon _0(\gamma ),\varepsilon _1(\gamma )\}\big ]\),
where
Finally, inserting (7) into (45), we obtain the representation
and thus reach the goal of Step 2.
Step 3. In this step we will estimate the value \(P(x_\pi (t_0+\varepsilon ),t_0+\varepsilon )\). Given a \(\rho >0\), we will show that there exist an \(\varepsilon _2(\gamma )>0\) and a \({\bar{\gamma }}(\rho )>0\) such that, for any \(\gamma \ge {\bar{\gamma }}(\rho )\) and \(\varepsilon \in \big (0,\min \{\varepsilon _0(\gamma ),\varepsilon _1(\gamma ),\varepsilon _2(\gamma )\}\big ]\), the \(\pi _\varepsilon \)-solution \(x_\pi (t)\) of system (1) with the initial data \(x_\pi (t_0)=x^0\in D\) and the controls \(u_k=u_k^\varepsilon (a(x,t),t)\) satisfies the property
To analyze the value \(P(x_\pi (t_0+\varepsilon ),t_0+\varepsilon )\), we use the Taylor formula with Lagrange’s form of the remainder for \(P(x_\pi (t_0+\varepsilon ),t_0)\):
Inserting (49) into the obtained representation and using Assumption (3.2), (3.4) with \(\{\widetilde{{\mathcal {D}}}_t\}_{t\ge 0}\) given by (31), we obtain
With the use of estimate (47) we conclude that, for all \(\varepsilon \in \big (0,\min \{\varepsilon _0(\gamma ),\varepsilon _1(\gamma )\}\big ]\),
where \(c_{p1}=c_{R2}\sqrt{{M_F}^3L_{Px}} +H_{Px}\big (1+ c_{R2}^2{M_F}^3L_{Px} +2c_{R1}c_{R2}\varepsilon {M_F}^2\big )\), \(c_{p2}=c_{R1}\sqrt{{M_F} L_{Px}}+{H_{Px}}c_{R1}^2{M_F}\varepsilon \). Therefore,
Assume that \(\Vert \nabla _x P(x^0,t_0)\Vert \ge \dfrac{\rho }{2}>0\). Then the above inequality can be rewritten as
Let us fix any \(\sigma \in (0,1)\), \({\tilde{\gamma }}>0\), and put
We obtain that, for any \(\gamma \ge {\bar{\gamma }}(\rho )\), \(\varepsilon \in \big (0,\min \{\varepsilon _0(\gamma ),\varepsilon _1(\gamma ), \varepsilon _2(\gamma )\}\big ]\),
that is,
whenever \(\Vert \nabla _x P(x^0,t_0)\Vert \ge \dfrac{\rho }{2}\). Moreover, the obtained inequality is strict if \(\Vert \nabla _x P(x^0,t_0)\Vert >\dfrac{\rho }{2}\). Similarly to Step 2, we emphasize that the results of the current step hold for any \(x^0\in \{\widetilde{{\mathcal {D}}}_t\}_{t\ge 0}\) provided that the corresponding \(\pi _\varepsilon \)-solution \(x_\pi (t)\) is well defined in \(\{\widetilde{{\mathcal {D}}}_t\}_{t\ge 0}\) for all \(t\in [t_0,t_0+\varepsilon ]\).
Step 4. The goal of this step is to ensure the following property: after some finite time \(t=T\ge 0\), the \(\pi _\varepsilon \)-solution \(x_\pi (t)\) of system (1) enters the set \( {{\mathcal {L}}}_{t_0+T}^{\nabla P,{\rho /2}}\) and remains in \( {{\mathcal {L}}}_t^{\nabla P,{\rho }}\), \({t\in [t_0+T,t_0+T+\varepsilon ]}\). More precisely, we will show that there exists an \(N\in {\mathbb {N}}\cup \{0\}\) such that \(\Vert \nabla _x P(x_\pi (t_0+N\varepsilon ),t_0+N\varepsilon )\Vert \le \dfrac{\rho }{2}\) and, moreover, there exists an \(\varepsilon _{3}(\gamma )>0\) such that, for any \(\gamma \ge {\bar{\gamma }}(\rho )\) and \(\varepsilon \in \big (0,\min \{\varepsilon _0(\gamma ),\dots ,\varepsilon _{3}(\gamma )\}\big ]\), the \(\pi _\varepsilon \)-solution \(x_\pi (t)\) of system (1) with the initial data \(x_\pi (t_0)=x^0\) and the controls \(u_k=u_k^\varepsilon (a(x,t),t)\) satisfies the property
We have obtained in Step 3 that \(x_\pi (t_0+\varepsilon )\in {\mathcal {L}}_{t_0+\varepsilon }^{P,P(x^0,t_0)}\). Applying the results of Step 1 with the same choice of parameters \(\varepsilon \) and \(\gamma \) and the initial data \(x_\pi (t_0+\varepsilon )\in {\mathcal {L}}_t^{P,P(x^0,t_0)}\), \({t\ge 0}\), we get \(x_\pi (t)\in \widetilde{{\mathcal {D}}}_t\) for all \(t\in [t_0,t_0+2\varepsilon ]\). Furthermore, we may repeat Steps 2–3 and conclude that
Let us show that there exists an \(N\in {\mathbb {N}}\cup \{0\}\) such that \(\Vert \nabla _x P(x_\pi (t_0+N\varepsilon ),t_0+N\varepsilon )\Vert \le \dfrac{\rho }{2}\). Indeed, assume \(\Vert \nabla _x P(x_\pi (t_0+N\varepsilon ),t_0+N\varepsilon )\Vert > \dfrac{\rho }{2}\) for all \(N\in {\mathbb {N}}\cup \{0\}\). Then iterating Step 3 and inequality (56), we conclude that, for any \(N\in {\mathbb {N}}\),
and
Obviously, the right-hand side of the latter inequality becomes strictly negative for \(N>\Big [\dfrac{4(P(x^0,t_0)-m_P)}{\varepsilon {\tilde{\gamma }}\rho ^2}\Big ]\), while the left-hand side remains nonnegative. The obtained contradiction proves that, after the time \(T=N\varepsilon \), \(N\in {\mathbb {N}}\cup \{0\}\), the \(\pi _\varepsilon \)-solution \(x_\pi (t)\) of system (1) enters the set \( {\mathcal L}_{t_0+T}^{\nabla P,{\rho /2}}\).
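The counting argument above can be sanity-checked numerically: while \(\Vert \nabla _x P\Vert >\rho /2\), each sampling interval decreases \(P\) by at least an amount of order \(\varepsilon {\tilde{\gamma }}\rho ^2/4\), so \(P-m_P\) would turn negative after finitely many steps. The sketch below uses hypothetical constants (\(\varepsilon \), \({\tilde{\gamma }}\), \(\rho \), \(P(x^0,t_0)\), and \(m_P\) are all made up for illustration):

```python
import math

# Hypothetical constants for illustration only.
eps, gamma_t, rho = 0.01, 2.0, 0.5   # epsilon, tilde-gamma, rho
P0, m_P = 3.0, 0.0                   # initial value of P and its infimum

# Step bound from the proof: beyond N_max steps the accumulated decrease
# N * eps * gamma_t * rho**2 / 4 exceeds the total available gap P0 - m_P.
N_max = math.floor(4 * (P0 - m_P) / (eps * gamma_t * rho**2)) + 1

# If the gradient norm stayed above rho/2 forever, P would drop below m_P:
P = P0
for _ in range(N_max + 1):
    P -= eps * gamma_t * rho**2 / 4
assert P - m_P < 0  # impossible, since P >= m_P; hence the gradient must shrink
```

This mirrors the contradiction in the proof: the left-hand side of the iterated inequality stays nonnegative while the right-hand side becomes negative.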
The next goal is to ensure that the \(\pi _\varepsilon \)-solution \(x_\pi (t)\) of system (1) remains in the family of sets \( {{\mathcal {L}}}_t^{\nabla P,{\rho }}\) for \({t\in [t_0+T,t_0+T+\varepsilon ]}\). Because of Assumption 4, \(x_\pi (t)\in \widetilde{{\mathcal {D}}}_t\) for \(t\in [t_0+T,t_0+T+\varepsilon ]\). Applying Assumption (3.3) with \(\{\widetilde{\mathcal D}_t\}_{t\ge 0}\) given by (31), we get
Since the obtained estimate holds for all \(t\in [t_0+T,t_0+T+\varepsilon ]\), we apply estimate (41):
Let us take \(\varepsilon _{3}(\gamma )\) as the smallest positive root of the equation
Then for any \(\gamma \ge {\bar{\gamma }}(\rho )\) and \(\varepsilon \in \big (0,\min \{\varepsilon _0(\gamma ),\dots ,\varepsilon _{3}(\gamma )\}\big ]\),
Step 5. This step summarizes all the obtained results and completes the proof of this lemma.
From Steps 3 and 4, there exists an \(N\in {\mathbb {N}}\cup \{0\}\) such that \(\Vert \nabla _x P(x_\pi (t_0+j\varepsilon ),t_0+j\varepsilon )\Vert \ge \dfrac{\rho }{2}\) for \(j=0,1,\dots ,N-1\), and \(\Vert \nabla _x P(x_\pi (t_0+T),t_0+T)\Vert \le \dfrac{\rho }{2}\). Thus,
and \(\Vert \nabla _x P(x_\pi (t),t)\Vert \le \rho \) for all \(t\in [t_0+T,t_0+T+\varepsilon ]\). Consequently, \(x_\pi (t)\) is well defined in \(\widetilde{{\mathcal {D}}}_t\) for all \(t\in [t_0,t_0+T+\varepsilon ]\), and
Next, consider two possible scenarios:
S1) \(\Vert \nabla _x P(x_\pi (t_0+T+\varepsilon ),t_0+T+\varepsilon )\Vert \le \dfrac{\rho }{2}\).
Then similarly to Step 4 we have that \(\Vert \nabla _x P(x_\pi (t),t)\Vert \le \rho \) for \(t\in [t_0+T+\varepsilon ,t_0+T+2\varepsilon ]\), which implies that \(x_\pi (t)\) is well defined in \(\widetilde{\mathcal D}_t\) for all \(t\in [t_0,t_0+T+2\varepsilon ]\) and
S2) \(\dfrac{\rho }{2}<\Vert \nabla _x P(x_\pi (t_0+T+\varepsilon ),t_0+T+\varepsilon )\Vert \le \rho \).
Repeating Steps 3–4, we conclude that there exists an integer \(N_2\ge N+2\) such that \(\Vert \nabla _x P(x_\pi (t_0+j\varepsilon ),t_0+j\varepsilon )\Vert > \dfrac{\rho }{2}\) for \(j=N+1,\dots ,N_2-1\), and \(\Vert \nabla _x P(x_\pi (t_0+N_2\varepsilon ),t_0+N_2\varepsilon )\Vert \le \dfrac{\rho }{2}\). Besides,
Obviously,
To estimate the values of \(P(x_\pi (t),t)\) for \(t\in [t_0+T+\varepsilon ,t_0+N_2\varepsilon ]\), denote the integer part of \(\dfrac{t-t_0}{\varepsilon }\) as \(\Big [\dfrac{t-t_0}{\varepsilon }\Big ]\) and observe that \(0\le t-t_0-\Big [\dfrac{t-t_0}{\varepsilon }\Big ]\varepsilon \le \varepsilon \). Then by Assumption (3.1)–(3.2) and estimate (41),
From (72), for any \(\gamma \ge {\bar{\gamma }}(\rho )\) and \(\varepsilon \in \big (0,\min \{\varepsilon _0(\gamma ), \varepsilon _1(\gamma ),\varepsilon _2(\gamma )\}\big ]\),
Iterating S1)–S2), we obtain that \(x_\pi (t)\) is well defined in \(\widetilde{{\mathcal {D}}}_t\) for all \(t\ge t_0\) and \(x_\pi (t)\in \{x:P(x,t)\le P^*(\rho ,\lambda )+\lambda \}\) for \(t\ge T+\varepsilon \). As \(\lambda \) and \(\rho \) are assumed arbitrary, the proof of Lemma 1 is completed. \(\square \)
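The qualitative conclusion of Lemma 1 — the sampled trajectory enters a prescribed sublevel set of \(P\) and never leaves it — can be illustrated with a drastically simplified sampled gradient scheme. This is only a caricature: the actual \(\pi _\varepsilon \)-solution is driven by the oscillating controls (6) rather than by the gradient itself, and the potential below is made up:

```python
import numpy as np

# Toy time-independent potential P(x) = ||x||^2 / 2 with m_P = 0 at the origin.
def grad_P(x):
    return x

eps, gamma = 0.05, 1.0       # hypothetical sampling period and gain
x = np.array([2.0, -1.5])    # initial state x^0

# Freeze the gradient over each sampling interval of length eps,
# mimicking the piecewise-defined pi_eps construction.
for _ in range(200):
    x = x - eps * gamma * grad_P(x)

P_final = 0.5 * float(x @ x)
assert P_final < 1e-3  # the iterates settle into a small sublevel set of P
```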
Appendix C: Proof of Theorem 2
The first two steps and the beginning of the third step of the proof are similar to the proof of Lemma 1. We summarize the main differences and results as follows:
-
For any \(\rho >0\) such that \({\mathcal {L}}_t^{P,m_P+\rho }\subset D\), \(t\ge 0\), we define the sets (31) as
$$\begin{aligned} \widetilde{{\mathcal {D}}}_t={\mathcal {L}}_t^{P,P(x^0,t_0)+m_P+\rho }\subset D. \end{aligned}$$ (71)
-
\(\varepsilon _1(\gamma )\) is the smallest positive root of the equation
$$\begin{aligned} \sqrt{\varepsilon \gamma {M_F}}\, e^{L_{fx}\varepsilon }c_u M_f L_{Px}^{3/2}+\varepsilon L_{Pt}=\frac{\rho }{4}. \end{aligned}$$ (72)
Similar to the outcome of Step 1, for any \(\gamma >0\) and \(\varepsilon \in \big (0,\min \{\varepsilon _0(\gamma ),\varepsilon _1(\gamma )\}\big ]\), we have
$$\begin{aligned} P(x_\pi (t),t)\le P(x^0,t_0)+\frac{\rho }{4}\quad \text {for all }t \in [t_0,t_0+\varepsilon ]. \end{aligned}$$ (73)
-
For all \(\gamma >0\) and \(\varepsilon \in \big (0,{\tilde{\varepsilon }}(\gamma )=\min \{{\bar{\varepsilon }}_0(\gamma ),\varepsilon _1(\gamma )\}\big ]\), the \(\pi _\varepsilon \)-solution \(x_\pi (t)\) of system (1) with the initial data \(x_\pi (t_0)=x^0\) and the controls \(u_k=u_k^\varepsilon (a(x,t),t)\) defined by (6) satisfies the property
$$\begin{aligned} P(x_\pi (t_0+\varepsilon ),t_0+\varepsilon )\le P(x^0,t_0)-\varepsilon \Big (\gamma \Vert \nabla _xP(x^0,t_0)\Vert ^2\big (1- \sqrt{\varepsilon \gamma }\, c_{p1}\big )-\Vert \nabla _xP(x^0,t_0)\Vert c_{p2}-L_{Pt}\Big ). \end{aligned}$$
Now we come to the main part of the proof. Using the above estimate, Assumption (3.2) and property (11), we obtain
where \(c_{p3}=L_{Px} c_{p2}+L_{Pt}\), \(\nu \ge 1\). For an arbitrary \(\rho >0\), \(\gamma ^*>0\), let
Then, for any \(\gamma >{\bar{\gamma }}(\rho )\), \(\varepsilon \in \big (0,\min \{{\tilde{\varepsilon }}(\gamma ),\varepsilon _2(\gamma )\}\big )\), the following properties hold:
-
i)
$$\begin{aligned} P(x_\pi (t_0+\varepsilon ),t_0+\varepsilon )-m_P \le (P(x^0,t_0)-m_P)\Big (1-\varepsilon {\bar{\gamma }}\mu ( P(x^0,t_0)-m_P)^{\nu -1}\Big )+\varepsilon c_{p3}. \end{aligned}$$
-
ii)
If \(P(x^0,t_0)-m_P\ge \tfrac{\rho }{4}>0\), then
$$\begin{aligned} P(x_\pi (t_0+\varepsilon ),t_0+\varepsilon )-m_P < (P(x^0,t_0)-m_P)\big (1-\varepsilon \gamma ^*\mu (P(x^0,t_0)-m_P)^{\nu -1}\big )< P(x^0,t_0)-m_P. \end{aligned}$$
This means that \(x_\pi (t_0+\varepsilon )\in {\mathcal {L}}_t^{P,P(x^0,t_0)}\), \({t\ge 0}\), and by (73), \(x_\pi (t)\in \widetilde{{\mathcal {D}}}_t\) for all \(t\in [t_0,t_0+\varepsilon ]\).
-
iii)
If \(P(x^0,t_0)-m_P<\tfrac{\rho }{4} \), then (73) immediately implies \(x_\pi (t)\in \widetilde{{\mathcal {D}}}_t\) for all \(t\in [t_0,t_0+\varepsilon ]\), and \(P(x_\pi (t_0+\varepsilon ),t_0+\varepsilon )-m_P\le \tfrac{\rho }{2}\). Considering again the two cases \( P(x_\pi (t_0+\varepsilon ),t_0+\varepsilon )-m_P\ge \tfrac{\rho }{4}\) and \( P(x_\pi (t_0+\varepsilon ),t_0+\varepsilon )-m_P<\tfrac{\rho }{4}\), we see that \(x_\pi (t)\in \widetilde{{\mathcal {D}}}_t\) for all \(t\in [t_0,t_0+2\varepsilon ]\).
Repeating ii) and iii), we conclude that \(x_\pi (t)\in \widetilde{{\mathcal {D}}}_t\subset D\) for all \(t\ge 0\).
It remains to estimate the decay rate of the function \(P(x_\pi (t),t)\) as \(t\rightarrow +\infty \).
I) If \(\nu =1\), then, for all \(j\in {\mathbb {N}}\),
Using the property
and calculating
we obtain
Under the above choice of \({\bar{\gamma }}\), for any \(\gamma \ge {\bar{\gamma }}(\rho )\) and \(\varepsilon \in \big (0,\min \{{\tilde{\varepsilon }}(\gamma ),\varepsilon _2(\gamma )\}\big ]\),
Hence,
For an arbitrary \(t\ge t_0\), estimate (73) yields
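The exponential decay in case I can be reproduced on the scalar recursion behind property i) with \(\nu =1\), namely \(V_{j+1}\le V_j(1-\varepsilon {\bar{\gamma }}\mu )+\varepsilon c_{p3}\), whose iterates contract geometrically toward the offset \(c_{p3}/({\bar{\gamma }}\mu )\). All constants below are hypothetical:

```python
# Scalar model of property i) with nu = 1:
#   V_{j+1} = V_j * (1 - eps * gm) + eps * c_p3,  where V_j ~ P(x_pi(.)) - m_P.
eps, gm, c_p3 = 0.01, 2.0, 0.1   # hypothetical epsilon, gamma*mu, c_{p3}
V = 5.0                          # initial gap P(x^0, t_0) - m_P
for _ in range(5000):
    V = V * (1 - eps * gm) + eps * c_p3

# Geometric contraction drives V to the fixed point c_p3 / gm.
limit = c_p3 / gm
assert abs(V - limit) < 1e-6
```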
II) If \(\nu >1\), then, for all \(j\in {\mathbb {N}}\),
Let us show that there exists an \(N\in {\mathbb {N}}\cup \{0\}\) such that \( P(x_\pi (t_0+N\varepsilon ),t_0+N\varepsilon )-m_P\le \dfrac{\rho }{2}. \)
Assume the contrary: \(P(x_\pi (t_0+j\varepsilon ),t_0+j\varepsilon )-m_P>\frac{\rho }{2}\) for all \(j\in {\mathbb {N}}\cup \{0\}\). Then
To obtain decay rate estimates, we exploit the property of a strictly convex function and its tangent line: for any \(k>0\) and \(\theta \in {\mathbb {R}}\) with \(1+k\theta >0\), we have \(1- \theta \le (1+k\theta )^{-\frac{1}{k}}\). Thus,
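The tangent-line inequality just stated can be verified numerically over a grid of admissible parameters (a quick sanity check, not part of the proof; note the requirement \(1+k\theta >0\), under which the right-hand side is defined):

```python
# Check 1 - theta <= (1 + k*theta)**(-1/k) for k > 0 and 1 + k*theta > 0:
# the convex function theta -> (1 + k*theta)**(-1/k) lies above its
# tangent line 1 - theta at theta = 0.
for k in (0.5, 1.0, 2.0, 5.0):
    for theta in (-0.15, 0.0, 0.05, 0.3, 0.9):
        if 1 + k * theta > 0:
            assert 1 - theta <= (1 + k * theta) ** (-1.0 / k) + 1e-12
```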
and
Then, for \(j\ge T=\dfrac{1}{\varepsilon \mu \gamma ^*(\nu -1)}\Big (\Big (\dfrac{\rho }{2} \Big )^{1-\nu }-(P(x^0,t_0)-m_P)^{1-\nu }\Big )\), we get
which gives a contradiction. Thus, there exists an \(N\in \mathbb N\cup \{0\}\) such that
and
For an arbitrary \(t\in [t_0,t_0+N\varepsilon ]\), we again exploit the property
Similarly to the derivation of (73), we can show that, for any \(\varepsilon \in \big (0,\min \{\varepsilon _0(\gamma ),\varepsilon _1(\gamma )\}\big ]\),
Then two cases are possible:
-
if \(P(x_\pi (t_0+(N+1)\varepsilon ),t_0+(N+1)\varepsilon )-m_P\le \dfrac{\rho }{2}\), then \(P(x_\pi (t),t)-m_P\le \rho \) for all \(t\in [t_0+(N+1)\varepsilon ,t_0+(N+2)\varepsilon ]\);
-
if \(\dfrac{\rho }{2}<P(x_\pi (t_0+(N+1)\varepsilon ),t_0+(N+1)\varepsilon )-m_P\le \rho \), then
$$\begin{aligned} \begin{aligned} P&(x_\pi (t),t)-m_P \\&\le \big ((P(x_\pi (t_0+(N+1)\varepsilon ),t_0+(N+1)\varepsilon )\\&\quad -m_P)^{1-\nu } +\mu \gamma ^*(\nu -1)(t-t_0-\varepsilon )\big )^{\frac{1}{1-\nu }}\\&\le \big (\rho ^{1-\nu }+\mu \gamma ^*(\nu -1)(t-t_0-\varepsilon )\big )^{\frac{1}{1-\nu }}\\&\le \rho ,\;\text {for all}\;t\in [t_0+(N+1)\varepsilon ,t_0+(N+2)\varepsilon ]. \end{aligned}\nonumber \\ \end{aligned}$$(92)
The iteration of the above two cases yields
which completes the proof of Theorem 2. \(\square \)
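The polynomial decay rate obtained in case II of the proof above can be checked on the scalar recursion that drives the argument, \(V_{j+1}=V_j\big (1-\varepsilon \mu \gamma ^*V_j^{\nu -1}\big )\): by the tangent-line inequality, its iterates stay below the envelope \(\big (V_0^{1-\nu }+\mu \gamma ^*(\nu -1)j\varepsilon \big )^{\frac{1}{1-\nu }}\). Constants below are hypothetical:

```python
# Scalar recursion behind case II (nu > 1), with hypothetical constants.
eps, mg, nu = 0.01, 1.0, 2.0   # epsilon, mu*gamma_star, nu
V0 = 1.0                       # initial gap P(x^0, t_0) - m_P
V = V0
for j in range(1, 2001):
    V = V * (1 - eps * mg * V ** (nu - 1))
    # Polynomial decay envelope derived via the tangent-line inequality:
    envelope = (V0 ** (1 - nu) + mg * (nu - 1) * j * eps) ** (1 / (1 - nu))
    assert V <= envelope + 1e-12
```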
Appendix D: Proof of Theorem 3
For an arbitrary \(x^0\in D\), define \(\mathcal L^{P,P(x^0)}=\{x\in {\mathbb {R}}^n:P(x)\le P(x^0)\}\). From condition 3.1),
Let \({\widetilde{D}}\) be an arbitrary convex compact set such that
All the assumptions of Theorem 1 are satisfied, so we immediately have the following property: there exists an \({\bar{\varepsilon }}:{\mathbb {R}}_{>0}\rightarrow \mathbb R_{>0}\) such that, for any \(\gamma >0\), \(t_0\ge 0\), \(x^0\in {\widetilde{D}}\), and \(\varepsilon \in (0,{\bar{\varepsilon }}(\gamma )]\), the \(\pi _\varepsilon \)-solution \(x_\pi (t)\) of system (9) with the initial data \(x_\pi (t_0)=x^0\) and the controls \(u_k=u_k^\varepsilon (a(x,t),t)\) given by (6)–(8) is well defined in \({\widetilde{D}}\) for all \(t\ge t_0\), and
As \(\Vert x_\pi (t)-x^*\Vert \le w_{11}^{-1}\big (P(x_\pi (t))-m_P\big )\), this also implies
Thus, the point \(x^*\) is attractive for system (9).
Let us prove that \(x^*\) is stable. Assume that \(\gamma \) and \({\bar{\varepsilon }}(\gamma )\) are fixed, \(\gamma {\bar{\varepsilon }}(\gamma )\le 1\). For an arbitrary \(t\ge t_0\), denote the integer part of \(\frac{t-t_0}{\varepsilon }\) as \(N=\Big [\frac{t-t_0}{\varepsilon }\Big ]\). From (41),
Using the triangle inequality and condition 3.2), we get
Furthermore, from the proofs of Lemma 1 and Theorem 2 it follows that
i.e.,
Combining (99) and (101) we conclude that, given an arbitrary \(\epsilon >0\), one can choose a \(\delta >0\) satisfying
so that
\(\square \)
Grushkovskaya, V., Zuyev, A. Motion planning and stabilization of nonholonomic systems using gradient flow approximations. Nonlinear Dyn 111, 21647–21671 (2023). https://doi.org/10.1007/s11071-023-08908-7