Motion planning and stabilization of nonholonomic systems using gradient flow approximations

Nonlinear control-affine systems described by ordinary differential equations with time-varying vector fields are considered in the paper. We propose a unified control design scheme with oscillating inputs for solving the trajectory tracking and stabilization problems under the bracket-generating condition. This methodology is based on the approximation of a gradient-like dynamics by trajectories of the designed closed-loop system. As an intermediate outcome, we characterize the asymptotic behavior of solutions of the considered class of nonlinear control systems with oscillating inputs under rather general assumptions on the generating potential function. These results are applied to examples of nonholonomic trajectory tracking and obstacle avoidance.


Introduction
Consider a nonlinear control system

ẋ = Σ_{k=1}^{m} f_k(x, t) u_k,   (1)

where x = (x_1, ..., x_n) ∈ D ⊆ R^n is the state, u = (u_1, ..., u_m) ∈ R^m is the control, D is a domain, and the time-dependent vector fields f_k : D × R_+ → R^n are regular enough to guarantee the existence and uniqueness of solutions to the Cauchy problem for system (1) with any initial data x(t_0) = x_0 ∈ D, t_0 ≥ 0, and any admissible control u : [t_0, +∞) → R^m. We will formulate the required regularity assumptions precisely below. The driftless control-affine system (1) is an extremely important mathematical model in nonholonomic mechanics, as it represents kinematics with non-integrable constraints in the case m < n (we refer to the book [2] for general background). Of special interest is the class of systems with time-independent vector fields:

ẋ = Σ_{k=1}^{m} f_k(x) u_k.   (2)

In contrast to linear control theory, the controllability of system (2) does not imply its stabilizability by a regular feedback law of the form u = h(x). A famous example of a completely controllable system (2) with n = 3 and m = 2 which is not stabilizable in the classical sense was presented in [4]. Since then, the stabilization and motion planning problems for nonholonomic systems have been extensively studied by many experts in nonlinear control theory, mechanics, and robotics. A survey of essential contributions in this area is given in Section 2.
To the best of our knowledge, the present paper contains the first description of a unified control design method for solving a variety of different control problems: stabilization of an equilibrium point x = x*, tracking an arbitrary curve in the state space, and motion planning with obstacles for rather general non-autonomous systems (1). The main idea behind our construction is to design time-dependent feedback controllers in such a way that the trajectories of the corresponding closed-loop system approximate the trajectories of a gradient-like system of the form (3), where the potential function P(x, t) and the gain γ > 0 are to be defined according to the specific problem statement. The key contribution of our work is twofold: a unified approach for solving the stabilization and motion planning problems under the bracket-generating condition, and relaxed regularity assumptions on the vector fields and their directional derivatives. In particular, the vector fields of the considered class of systems are not required to be smooth.
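For reference, the gradient-like reference dynamics, denoted by (3) throughout the paper, can be sketched as follows (our rendering, consistent with how (3) is invoked later, where its potential P(x, t) and gain γ are named explicitly):

```latex
\dot{x}(t) = -\gamma\, \nabla_x P\bigl(x(t), t\bigr), \qquad \gamma > 0 .
```

The minimizers of P encode the control goal, so steering the closed-loop trajectories close to this flow drives the state toward them.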
The subsequent presentation is organized as follows. The outcomes of the literature study are reported in Section 2. A family of ε-periodic feedback controllers is introduced in Section 3 in the form of trigonometric polynomials in time with state-dependent coefficients. It is shown in Subsection 3.2 that the proposed controllers allow approximating the reference gradient flow dynamics by the trajectories of system (1) with arbitrary accuracy under a suitable choice of the small parameter ε. These approximation schemes are then adapted to derive stabilizing controllers for the equilibrium stabilization problem (Theorem 3 and its corollary in Subsection 3.3), the tracking problem (Theorem 4 and its corollary in Subsection 3.4), and obstacle avoidance (Subsection 3.5). We illustrate the proposed control design methodology with examples in Section 4. Finally, concluding comments are given in Section 5 to summarize the key results of the present paper and underline its contribution with respect to previous work. The proofs of the main results are contained in Appendices A-D.

Notations
Throughout the text, we will use the following notations:
- R_+ - the set of nonnegative real numbers; R_{>0} - the set of positive real numbers;
- δ_{ij} - the Kronecker delta: δ_{ii} = 1 and δ_{ij} = 0 whenever i ≠ j;
- dist(x, S) - the Euclidean distance between a point x ∈ R^n and a set S ⊂ R^n;
- B_δ(x*) - the δ-neighborhood of a point x* ∈ R^n with δ > 0; B_δ(S) = ∪_{x∈S} B_δ(x) - the δ-neighborhood of a set S ⊂ R^n;
- ∂M, M̄ - the boundary and the closure of a set M ⊂ R^n, respectively; M̄ = M ∪ ∂M;
- |S| - the cardinality of a set S;
- K - the class of continuous strictly increasing functions ϕ : R_+ → R_+ such that ϕ(0) = 0;
- ∇_x P(x_0, y_0) = ∂P(x, y)/∂x evaluated at x = x_0, y = y_0;
- [f, g](x) - the Lie bracket of vector fields f, g : R^n → R^n at a point x ∈ R^n: [f, g](x) = L_f g(x) − L_g f(x), where L_g f(x) = lim_{s→0} (f(x + s g(x)) − f(x))/s. In the case of differentiable f, L_g f(x) = (∂f(x)/∂x) g(x).
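To make the Lie-bracket notation concrete, the following sketch evaluates the limit definition above numerically for the unicycle fields f_1 = (cos x_3, sin x_3, 0), f_2 = (0, 0, 1) used in Section 4 (the helper names are ours, and the finite-difference step is an ad hoc choice):

```python
import math

# Unicycle vector fields from Section 4 (the degree-of-nonholonomy-2 example):
# f1(x) = (cos x3, sin x3, 0), f2(x) = (0, 0, 1).
def f1(x):
    return [math.cos(x[2]), math.sin(x[2]), 0.0]

def f2(x):
    return [0.0, 0.0, 1.0]

def directional_derivative(f, g, x, s=1e-6):
    """L_g f(x) ~ (f(x + s g(x)) - f(x)) / s, the limit used in the Notations."""
    gx = g(x)
    xs = [xi + s * gi for xi, gi in zip(x, gx)]
    return [(a - b) / s for a, b in zip(f(xs), f(x))]

def lie_bracket(f, g, x):
    """[f, g](x) = L_f g(x) - L_g f(x), matching the sign convention above."""
    lfg = directional_derivative(g, f, x)   # derivative of g along f
    lgf = directional_derivative(f, g, x)   # derivative of f along g
    return [a - b for a, b in zip(lfg, lgf)]

x = [0.3, -0.7, 0.9]
b = lie_bracket(f1, f2, x)
# Analytically, [f1, f2](x) = (sin x3, -cos x3, 0): a sideways displacement
# that neither input field can produce directly.
print(b)
```

The numerical bracket agrees with the hand computation up to the finite-difference error, which is how the bracket-generating condition is checked in the examples later.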

Related work
A number of efficient control design methods have been developed in the literature with an emphasis on special classes of nonlinear systems, such as flat systems [7], chained-form systems [38,24], unicycle- and car-like systems [31,28,6,34,27], manipulator models [26,8], Chaplygin systems [32], etc. For planning the motion of general nonholonomic systems, a broad class of approaches is based on the application of Lie-algebraic techniques. In this respect, an essential assumption is that the vector fields of system (2), together with their iterated Lie brackets, span the whole tangent space at each point of the state manifold (Hörmander's condition). Several authors used this assumption to produce time-periodic control laws such that the trajectories of a nonholonomic system approximate the trajectories of an extended system. The papers [37,25] exploited an unbounded sequence of oscillating controls with unbounded frequencies for such an approximation in the case of driftless systems. The paper [21] addresses the limit behavior of solutions of a control-affine system with input signals of magnitude ε^{−α} and frequency scaling 1/ε as ε → 0.
It is assumed that the primitives of the input signals and their iterated primitives up to a certain order are bounded. It is then shown that the limit behavior of the considered oscillating system is defined either by its drift term or by a linear combination of certain iterated Lie brackets, depending on the value of α. In the paper [3], the averaged system is constructed as a differential inclusion for driftless control-affine systems with fast-oscillating inputs. It is proved that an arbitrary solution of such a differential inclusion can be approximated by a family of solutions of the original system as the oscillation frequency tends to infinity. This approximation result is also extended to the class of systems with drift under a time reparametrization and the assumption that the drift generates periodic dynamics. An overview of motion planning methods for nonholonomic systems is presented in the book [17]. For nilpotent systems, exact solutions to the motion planning problem are proposed with the use of sinusoidal inputs. In the general case, the local steering problem can be solved by constructing a nilpotent approximation under a suitable choice of privileged coordinates. The global steering algorithm is then summarized in [17] as a finite sequence of steps which steers the given nonholonomic system to an arbitrarily small neighborhood of the target point. The nilpotentization of a wheeled mobile robot model with a trailer is proposed in the paper [1] for planning local maneuvers of this kinematic system. On the basis of solving the related sub-Riemannian problem, an algorithm for suboptimal parking has been implemented and tested for several robot configurations.
An algorithm for motion planning of kinematic models of nonholonomic systems in task space is developed in [29] with the use of the Campbell-Baker-Hausdorff-Dynkin formula. Motion planning in task space is treated there in the sense of steering the system output to a neighborhood of the desired point. The proposed algorithm is illustrated with unicycle and kinematic car examples. A nonholonomic snake-like robot model with m (m ≥ 3) rigid links is considered in [16]. The motion planning problem is treated there in the sense of generating a gait such that the origin of the snake's body moves along a given planar curve. This problem is solved by expressing the body velocity from the compatibility equation and the reconstruction equation.
An interesting example of a nonholonomic system with the growth vector (4, 7) is studied in [15]. This example is a modification of the trident snake robot with three 1-link branches of variable length. A nilpotent approximation of this system is constructed, and the local optimal steering problem is analyzed via the Pontryagin maximum principle. Controls for generating motion in the direction of higher-order Lie brackets were proposed in [9,10] for systems with two inputs.
A hybrid path planning method based on the combination of a high-level planner with a low-level controller running on an autonomous vehicle is described in [35]. The high-level planner (D* Lite) works on the discretized 2D workspace to produce a reference path such that at each step the robot model moves from a given cell to one of the eight neighboring cells which does not contain an obstacle. The output of the high-level planner is collected as a set of waypoints ending at the goal, and the cost is the total length of the path. Then the low-level controller, running on the autonomous vehicle, provides control inputs to generate motion from the current state to the next waypoint. This path planning method is experimentally validated on a differential drive robot in rough terrain environments.
Stabilizing time-varying controls were proposed in [43] for second degree nonholonomic systems (following the terminology of [23]). Unlike other publications in this area, the exponential convergence to the equilibrium was proved without the assumption that the frequencies of the controls tend to infinity. Besides, the paper [43] presented a rigorous solvability analysis of the stabilization problem in the proposed class of controls. For detailed reviews of other motion planning and stabilization strategies we refer to [20,2,14]. It has to be emphasized that, in spite of a large number of publications on nonholonomic motion planning, only particular results are available for stabilization combined with obstacle avoidance. Even for static obstacles, this problem has been studied only for specific systems (see, e.g., [19,33,36]). A general class of nonholonomic systems was considered in the paper [39], where a time-independent controller was constructed based on the gradient of a potential function. Note that such a result ensures only the stability (but not the asymptotic stability) property. An algorithm computing time-periodic feedback controls for approximating collision-free paths was presented in [13]; however, no solvability issues concerning the general collision avoidance problem were addressed in that paper.
For a class of driftless control-affine systems, the trajectory tracking problem was addressed in [41] under the assumption that the target trajectory is feasible, i.e. satisfies the dynamical equations with some control inputs. However, to the best of our knowledge, there are no results available for the stabilization of general classes of nonlinear control systems in a neighborhood of non-feasible curves or in domains with obstacles.

Unified control framework for second degree nonholonomic systems
In this section, we present the main idea of our control design scheme by considering nonholonomic systems of degree 2, according to the classification of [23]. The proposed control design provides a generic approach to the stabilization and motion planning of underactuated driftless control-affine systems.

Definitions and assumptions
To generate stabilizing control strategies, we will exploit sampling, similarly to the approaches of [5,43]. In this respect, we introduce the following definition.
Definition 1 (π_ε-solution) Assume that a feedback control is given in the form u = h(a(x, t), t), a : D × R → R^l, h : R^l × R → R^m. For given t_0 ∈ R and ε > 0, define the partition π_ε of [t_0, +∞) into the intervals I_j = [t_j, t_{j+1}), t_j = t_0 + εj, j = 0, 1, 2, .... A π_ε-solution of the considered closed-loop system corresponding to the initial value x_0 ∈ R^n is an absolutely continuous function x_π(t) ∈ D, defined for t ∈ [t_0, +∞), which satisfies the initial condition x_π(t_0) = x_0 and the differential equations

ẋ_π(t) = f(x_π(t), h(a(x_π(t_j), t_j), t), t),   t ∈ I_j,

for each j = 0, 1, 2, .... We will illustrate the relation between π_ε-solutions and classical solutions with examples in Section 4.
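A π_ε-solution can be sketched numerically as an ordinary integration in which the state-dependent argument a(·) is refreshed only at the sampling instants t_j = t_0 + εj; the following toy scalar example (our illustration, not the paper's system) shows the construction:

```python
def pi_eps_solution(f, h, a, x0, t0, eps, t_final, dt=1e-3):
    """Integrate xdot = f(x, h(a(x_j), t), t), where the state-dependent
    argument a(.) is frozen at the sampling instants t_j = t0 + eps*j
    (Definition 1), using explicit Euler between samples."""
    x, t = x0, t0
    a_frozen = a(x, t)
    next_sample = t0 + eps
    while t < t_final:
        if t >= next_sample:          # refresh the frozen coefficients
            a_frozen = a(x, t)
            next_sample += eps
        x = x + dt * f(x, h(a_frozen, t), t)
        t += dt
    return x

# Toy scalar illustration: xdot = u with u = a, a(x) frozen between samples.
x_end = pi_eps_solution(
    f=lambda x, u, t: u,
    h=lambda a, t: a,
    a=lambda x, t: -x,            # negative gradient of P(x) = x**2 / 2
    x0=1.0, t0=0.0, eps=0.1, t_final=5.0,
)
print(x_end)  # tends to exp(-5) * x0 as eps, dt -> 0
```

Between samples the right-hand side is an ordinary time-dependent ODE, so the π_ε-solution is absolutely continuous, while the feedback coefficients form a piecewise-constant sequence.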
Before formulating basic results of this paper, we introduce the main assumptions on the state space D, vector fields f k , and the potential function P used in the gradient flow dynamics (3).

Assumption 1
The vector fields f_k(x, t) : D × R_+ → R^n are twice continuously differentiable w.r.t. x, and f_k, L_{f_j} f_k are continuously differentiable w.r.t. t, for all j, k = 1, . . . , m.
Moreover, for any family of compact subsets D_t ⊂ D, t ≥ 0, there exist constants for which the corresponding boundedness and Lipschitz estimates hold.

Another important assumption is related to the controllability property of system (1). As has already been mentioned, in this section we focus on systems with degree of nonholonomy 2, i.e. those whose vector fields together with their Lie brackets span the whole n-dimensional space.
Assumption 2

2) For any family of compact subsets D_t ⊂ D, t ≥ 0, there exists an M_F > 0 such that ‖F^{-1}(x, t)‖ ≤ M_F for all x ∈ D_t, t ≥ 0, where F^{-1}(x, t) is the inverse of the matrix F(x, t) composed of the vector fields and their Lie brackets.

It is important to note that the rank condition (4) implies the nonsingularity of the n × n matrix F(x, t) for all t ≥ 0, x ∈ D.
The next two assumptions describe properties of the potential function P for the gradient-like system (3).

Assumption 3
The function P : D × R_+ → R is twice continuously differentiable w.r.t. x. Moreover, for any family of compact subsets D_t ⊂ D, t ≥ 0, there exist constants m_P ∈ R, L_{Px} > 0, L_{2Px}, L_{2Pt}, L_{Pt}, H_{Px} ≥ 0 such that

3.1) m_P ≤ P(x, t), for all t, τ ≥ 0, x ∈ D_t, y ∈ D_τ.
To formulate the last assumption of this section, we introduce families of level sets for a function P (x, t).Namely, given a constant c ∈ R, we denote

Convergence results
Below we propose a universal control strategy which ensures the convergence of the trajectories of system (1) to the set of extremum points of a given function P. Suppose that the index sets S_1, S_2 and the matrix F(x, t) are as described in Assumption 2; then we parameterize the controls in the form (6). Here the column vector a(x, t) is defined by (7), and the oscillating components are given by (8), where κ_{j_1 j_2} ∈ N are pairwise distinct numbers, γ > 0 is a control gain, and ε > 0 is a small parameter.
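To convey the mechanism behind the oscillating components, the sketch below (our own illustration with ad hoc amplitudes, not the exact formulas (6)-(8)) drives the unicycle of Section 4 with a sine/cosine pair of amplitude proportional to ε^{-1/2} and frequency 2π/ε, and checks that the net displacement over one period approximates ε a [f_1, f_2], i.e. motion in the bracket direction weighted by the coefficient a:

```python
import math

# Unicycle kinematics (16): x1' = u1 cos x3, x2' = u1 sin x3, x3' = u2.
def step(x, u1, u2, dt):
    return [x[0] + dt * u1 * math.cos(x[2]),
            x[1] + dt * u1 * math.sin(x[2]),
            x[2] + dt * u2]

eps = 0.05          # oscillation period
a = 0.5             # desired weight on the bracket direction [f1, f2]
A = math.sqrt(4 * math.pi * a / eps)   # ad hoc amplitude, our illustration
omega = 2 * math.pi / eps

x = [0.0, 0.0, 0.0]
n_steps = 20000
dt = eps / n_steps
for i in range(n_steps):
    t = i * dt
    x = step(x, A * math.cos(omega * t), A * math.sin(omega * t), dt)

# Second-order averaging predicts x(eps) - x(0) ~ eps * a * [f1, f2](0),
# which equals (0, -eps*a, 0) at heading x3 = 0.
print(x)  # x[1] near -0.025, while x[0] and x[2] stay near 0
```

The inputs themselves average to zero over a period; the sideways drift comes entirely from the second-order (Lie-bracket) interaction, which is the effect the oscillating components of (6)-(8) exploit.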
The first result of this section is as follows.
The proof is in Appendix B. In the case of a time-independent function P(x) and vector fields f_k(x), it is possible to prove a stronger result under milder assumptions. Let us also introduce notation for the set of local minima of the function P. The following theorem holds for the time-independent system (9).

Theorem 1

Let a function P ∈ C^2(D; R) be such that its level sets L_{P, P(x_0)} = {x ∈ D : P(x) ≤ P(x_0)} are compact for all x_0 ∈ D.
Then for any γ > 0 there exists an ε̄ > 0 such that, for any ε ∈ (0, ε̄], the π_ε-solution x_π(t) of system (9) with the controls u_k = u_k^ε(a(x), t) given by (6)-(8) and the initial data t_0 ≥ 0, x_π(t_0) = x_0 ∈ D is well-defined and satisfies property (10).

The proof of the asymptotic convergence of P(x_π(t)) to the set of critical values of P can be found in [45]. The stronger property (10) follows from the fact that, for small enough ε, P(x_0) ≥ P(x_π(t_0 + ε)) ≥ P(x_π(t_0 + 2ε)) ≥ . . ., and from the uniqueness of solutions of system (9) with the indicated controls. The approximate convergence of a time-varying function P to its minimal value can be proved under an additional requirement, which also allows us to estimate the convergence rate:

Theorem 2
Let Assumptions 1-3 be satisfied for system (1) with a function P(x, t), and let ρ > 0 be such that ∅ ≠ L_{P, m_P+ρ, t} ⊂ D for all t ≥ 0. Assume moreover that, for any family of compact subsets D_t ⊂ D, t ≥ 0, there exist µ > 0 and ν ≥ 0 such that the bound (11) holds. Then for any γ* > 0 there is a γ̄ > γ* such that, for any γ > γ̄ and ε ∈ (0, ε̄) (where ε̄ > 0 depends on γ), the π_ε-solution x_π(t) of system (1) with the controls u_k = u_k^ε(a(x, t), t) given by (6)-(8) and the initial data t_0 ≥ 0, x_π(t_0) = x_0 ∈ D_{t_0} is well-defined and satisfies one of the following properties:

The proof is in Appendix C.
Remark 1 As follows from the proof of Theorem 2, it suffices to take γ̄ = γ* if the vector fields of system (1) are time-independent, and γ = γ* if, additionally, the function P does not depend on t.
Corollary 1 Assume that the constants required in Assumptions 1-3 (and in (11)) exist for all x ∈ L_{P, P(x_0, t_0), t}, x_0 ∈ D, t_0 ≥ 0. Then the assertions of Lemma 1 (respectively, Theorem 2) remain valid even if the level sets of the function P(x, t) are not compact.
Similarly, if the functions f_k(x) are globally Lipschitz in L_{P, P(x_0)}, their derivatives ∂f_k/∂x are bounded, and the function P(x) is bounded from below for all x ∈ L_{P, P(x_0)}, x_0 ∈ D, then the assertion of Theorem 1 remains valid even if the level sets of the function P(x) are not compact.

Corollary 2
Let the conditions of Theorem 1 be satisfied. Furthermore, assume that for any compact subset D̃ ⊂ D there exist µ > 0 and ν ≥ 1 such that ‖∇P(x)‖^2 ≥ µ(P(x) − m_P)^ν for all x ∈ D̃, where m_P is defined in Assumption 3.1.
Then for any γ > γ* > 0 there exists an ε̄ > 0 such that, for any ε ∈ (0, ε̄), the π_ε-solution x_π(t) of system (9) with the controls u_k = u_k^ε(a(x), t) given by (6)-(8) and the initial data t_0 ≥ 0, x_π(t_0) = x_0 ∈ D is well-defined and satisfies one of the following properties:

These results follow from the proofs of Lemma 1 and Theorem 2.
Lemma 1 and Theorem 1 give rise to several important results applicable to more specific control problems.

Stabilization problem
In this section, we consider a classical control problem of finding control laws which ensure the asymptotic stability of a point x = x * ∈ D for system (9).
Problem 1 (Stabilization problem) Given system (9) and a point x* ∈ D, the goal is to construct a feedback control of the form (6)-(8) ensuring the asymptotic stability of x* for the corresponding closed-loop system.
To solve Problem 1, we apply the results of Section 3.2 with a Lyapunov-like function P(x). Then for any γ > 0 there exists an ε̄ > 0 such that the point x* is asymptotically stable for system (9) with the controls u_k = u_k^ε(a(x), t) given by (6)-(8) and any ε ∈ (0, ε̄), provided that the solutions of the closed-loop system (9), (6)-(8) are defined in the sense of Definition 1.
The proof of this theorem is based on the proofs of Lemma 1 and Theorem 1 (see Appendix D). The following result directly follows from Theorem 3 and Corollary 2: for any γ > 0 there exists an ε̄ > 0 such that the point x* is asymptotically stable for the closed-loop system (9) with the controls u_k = u_k^ε(a(x), t) given by (6)-(8) and any ε ∈ (0, ε̄), provided that the solutions of the closed-loop system are defined in the sense of Definition 1. Moreover, there exists an ε̄ > 0 such that:

I) if ν_1 = 1, then x* is exponentially stable;

II) if ν_1 > 1, then x* is polynomially stable; namely, for any γ* > 0 and γ > γ* there exists an ε̄ > 0 such that the corresponding decay rate estimate holds.

In particular, to exponentially stabilize system (9) at x*, one can simply choose a potential P for which ν_1 = 1. The above-stated decay rate estimates are illustrated with numerical examples in Section 4.1.
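As a concrete illustration (our choice of potential, since the corresponding displays are omitted above), a quadratic P satisfies the hypotheses of Corollary 2 with the exponential exponent ν = 1:

```latex
P(x) = \|x - x^*\|^2, \qquad \nabla P(x) = 2\,(x - x^*), \qquad m_P = 0,
\qquad \|\nabla P(x)\|^2 = 4\bigl(P(x) - m_P\bigr),
```

so the gradient-domination inequality of Corollary 2 holds globally with µ = 4 and ν = 1, which corresponds to the exponential decay case.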
Remark 2 It is interesting to note that for degree 1 nonholonomic systems, i.e. for the case m = n, S_1 = {1, . . . , n}, the proposed stabilizing controls are time-invariant functions, which corresponds to the classical control design for the stabilization of fully-actuated driftless control-affine systems.

Remark 3
The proposed control algorithm (6)-(8) significantly simplifies the stabilizing control design procedure introduced in [43] and makes it possible to express the control coefficients explicitly, without solving a cumbersome system of algebraic equations.

Trajectory tracking problem
The proposed control design procedure with a time-varying function P(x, t) can be used to ensure the motion of system (1) along desirable curves. Note that we consider arbitrary continuous curves x*(t) which may not be feasible for system (1). Consequently, we consider a relaxed problem statement for approximate trajectory tracking:

Problem 2 (Trajectory tracking problem) Given system (1), a continuous curve x* : R_+ → D, and a constant ρ > 0, the goal is to construct a feedback law ensuring the attractivity of the corresponding family of sets for the closed-loop system.
Note that attracting (locally/globally pullback attracting) families of time-varying sets have been studied in the paper [22] for non-autonomous systems of ordinary differential equations. Here we treat this notion in the sense of π_ε-solutions (Definition 1) for system (1) with control inputs. To be precise, we introduce the following definition.

Corollary 4
Given system (9) with f_k ∈ C^2(D; R^n) satisfying Assumption 2 in a domain D ⊆ R^n, let a curve x* : R_+ → D be Lipschitz continuous and such that B_δ(x*(t)) ⊂ D for all t ≥ 0 with some δ > 0.
Then, for any ρ > 0, the family of sets L^ρ_t = {x ∈ D : ‖x − x*(t)‖ ≤ ρ}, t ≥ 0, is (exponentially) attracting for the closed-loop system (9) with the controls u_k = u_k^ε(a(x, t), t) given by (6)-(8) in the sense of Definition 2.

The above result has been proved in [12] for continuously differentiable x*(t) with bounded first derivative.

Obstacle avoidance problem
Another important problem which can be solved by the proposed approach is generating collision-free motion of system (9) in environments with obstacles. To formulate such a problem, assume that the set D is represented as a closed bounded domain with "holes".

Problem 3 (Obstacle avoidance problem) Given system (9), an initial point x_0 ∈ int D, and a destination point x* ∈ int D, the goal is to construct a feedback control such that the corresponding solution x(t) of the closed-loop system (9) with the initial data x(0) = x_0 satisfies the following conditions: collision-free motion, x(t) ∈ int D for all t ≥ 0; and convergence to the target, x(t) → x* as t → +∞.

As implied by Theorem 1, the above problem can be solved by the controls u_k = u_k^ε(a(x, t), t) from (6)-(8) with a proper function P ∈ C^2(D; R) such that its level sets L_{P, P(x_0)} = {x ∈ R^n : P(x) ≤ P(x_0)} are compact and L_{P, P(x_0)} ⊂ D for all x_0 ∈ D (see also [11]). There is a broad range of potential functions ensuring collision-free motion for specific classes of systems; see, e.g., [30]. Some of these functions can be used within our control-design framework for general classes of nonholonomic systems. As possible candidates for the function P, one can consider, e.g., the following:

- Navigation functions. According to [30], a map P ∈ C^2(D; [0, 1]) defined on a compact connected analytic manifold D with boundary is a navigation function if it is: 1) polar at x* ∈ int D, i.e. has a unique minimum at x*; 2) Morse, i.e. its critical points on D are nondegenerate; 3) admissible, i.e. all boundary components have the same maximal value, namely ∂D = P^{-1}(1).
In particular, the navigation function can be taken in the form proposed in [30], provided that K is large enough and the corresponding curvature condition holds for all x ∈ ∂O_i, i = 1, . . . , N, where c^i_ϕ is the minimal eigenvalue of the Hessian of ϕ_i(x) (see [30] for more details).
- Artificial potential fields, which combine attractive and repulsive potential fields. In particular, one can take the function from [18], where K is a positive constant gain and ξ belongs to a neighborhood of the obstacles (see [18] for more details). Another function of this type was proposed in [40].

We expect that a similar approach can be applied to time-varying navigation functions (or time-varying artificial potential fields) with the use of Lemma 1, i.e. in cases where either the obstacles or the destination point are "moving", that is, given by time-dependent functions. We will illustrate this claim via numerical examples in Section 4. However, the analysis of the properties of such functions requires a separate study, which we leave for future work.
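As a minimal sketch of the attractive-repulsive construction in the spirit of [18] (the gains, radii, and function names below are our assumptions, not the paper's exact P):

```python
import math

def attractive(x, goal, K=1.0):
    """Quadratic attractive potential, 0.5 * K * ||x - goal||^2."""
    return 0.5 * K * sum((xi - gi) ** 2 for xi, gi in zip(x, goal))

def repulsive(x, obs_center, obs_radius, rho0=1.0, eta=0.5):
    """Repulsive term active only within distance rho0 of the obstacle
    boundary (a common textbook choice; not the paper's exact P)."""
    d = math.dist(x, obs_center) - obs_radius
    if d >= rho0:
        return 0.0
    d = max(d, 1e-9)                 # avoid blow-up on the boundary itself
    return 0.5 * eta * (1.0 / d - 1.0 / rho0) ** 2

def potential(x, goal, obstacles):
    """Total field: one attractive well plus one barrier per obstacle."""
    return attractive(x, goal) + sum(repulsive(x, c, r) for c, r in obstacles)

goal = (3.0, 0.0)
obstacles = [((1.5, 0.0), 0.3)]      # (center, radius) pairs
print(potential(goal, goal, obstacles))        # ~0 at the target
print(potential((1.9, 0.0), goal, obstacles))  # inflated near the obstacle
```

The level sets of such a P grow steeply near obstacle boundaries, which is what keeps the gradient-following trajectories inside int D; the Morse and admissibility requirements of a true navigation function additionally rule out spurious local minima.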

Examples
In this section, we demonstrate the proposed control design approach on the mathematical model of a unicycle, which is a well-known example with degree of nonholonomy 2. The equations of motion have the form (9) with n = 3, m = 2, f_1(x) = (cos x_3, sin x_3, 0)^T, f_2(x) = (0, 0, 1)^T:

ẋ_1 = u_1 cos x_3,   ẋ_2 = u_1 sin x_3,   ẋ_3 = u_2.   (16)

Here (x_1, x_2) denote the coordinates of the contact point of the unicycle wheel, x_3 is the angle between the wheel and the x_1-axis, and u_1 and u_2 control the forward and angular velocities, respectively. It is easy to see that the vector fields of system (16) satisfy Assumptions 1-2 in D = R^3. In particular, Assumption 2 holds with the corresponding index sets: the matrix F(x) = (f_1(x), f_2(x), [f_1, f_2](x)) is nonsingular in D, and its inverse has bounded norm for all x ∈ D.
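The nonsingularity claim can be verified directly: for the unicycle, det F(x) = 1 for every heading angle, so F^{-1} exists and is uniformly bounded on D = R^3. A small self-contained check (helper names are ours):

```python
import math

def F_matrix(x3):
    """Columns f1, f2, [f1, f2] for the unicycle (16); the bracket
    [f1, f2] = (sin x3, -cos x3, 0) is computed by hand."""
    return [
        [math.cos(x3), 0.0, math.sin(x3)],
        [math.sin(x3), 0.0, -math.cos(x3)],
        [0.0,          1.0, 0.0],
    ]

def det3(M):
    """Determinant of a 3x3 matrix by cofactor expansion along the first row."""
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
            - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
            + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

# det F = cos^2 x3 + sin^2 x3 = 1 for every heading angle, so F is
# nonsingular on all of D = R^3 and its inverse is uniformly bounded.
for x3 in [0.0, 0.5, 2.0, -3.0]:
    print(det3(F_matrix(x3)))
```

Since the determinant is identically 1, the bound M_F in Assumption 2 can be taken uniformly over the whole state space, not just over compact subsets.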
According to the proposed control laws (6), we take the controls in the form (18). In the above formulas, κ_12 is taken equal to 1, and the vector of state-dependent coefficients a(x, t) is defined by (7), where γ > 0 and ε > 0 are control parameters, the matrix F^{-1}(x, t) is given by (17), and P ∈ C^2(D × R; R).

Next, we illustrate the behavior of solutions to system (16), (18) with different functions P(x, t), depending on the control goal. As mentioned in Subsection 3.1, the obtained control scheme can be used within the framework of sampling in the sense of Definition 1, and for classical solutions as well. In the simulations below, we depict the trajectories of system (16) with both types of solutions of the closed-loop system.

Trajectory tracking
For a given curve x*(t) ∈ R^3 on a finite time horizon t ∈ [0, T], we illustrate solutions of the trajectory tracking problem (Problem 2) for system (16) with controls of the form (18) generated by the following potential function:

Non-feasible curve. Consider the curve x* ∈ C^1([0, 20π]; R^3), where the equations for x*_{c,1}(t) and x*_{c,2}(t) are given in [42]. The classical and π_ε-solutions of system (16) with the feedback control (18) are shown in Fig. 4. For these simulations, we take the initial value and control parameters specified in (21). Fig. 4 reveals considerable oscillations of the x_1 and x_2 solution components around their reference values x*_1(t) and x*_2(t). Note that in this case the curve x*(t) is not feasible, i.e. x = x*(t) ∈ R^3, t ∈ [0, 20π], is not a solution of system (16) under any choice of admissible controls u_1 and u_2. Indeed, the only possibility to satisfy system (16) with x*_3(t) ≡ 0 is to have x*_2(t) ≡ const, which does not hold in the considered case. We will show in the next simulation that the oscillations due to the non-feasible character of the reference curve can be significantly reduced if x*(t) is a solution of the kinematic equations (16).
Feasible curve. Consider now a feasible curve; in this case x*_3(t) satisfies system (16) with appropriate admissible controls. To illustrate solutions of the trajectory tracking problem, we apply slightly modified controls of the form (22). Fig. 5 shows the behavior of the closed-loop system (16), (22) with the same initial value and control parameters as in (21).
Although our theoretical estimates allow tracking even non-feasible curves with any prescribed accuracy, practical implementations of this approach should take into account the trade-off between the tracking accuracy and the switching frequency allowed by the actuators.

Conclusion
The proposed design methodology can be considered as a multi-layered hierarchical scheme, where the reference dynamics (upper level) is governed by the gradient flow system (3) with some potential function P(x, t), and the physical level is ruled by the nonholonomic control system (1) with oscillating inputs (6). In this framework, the coordination between the physical and reference dynamics is performed via discrete-time sampling at the time instants t_j = t_0 + εj, j = 1, 2, .... The proposed scheme generalizes and significantly extends the approaches previously developed for particular control problems with time-invariant vector fields, such as stabilization [43], motion planning on a finite time horizon [44], and obstacle avoidance [45]. It should be emphasized that the contribution of this paper allows the treatment of nonlinear control systems with time-varying vector fields and a relatively simple structure of the control functions (6), whose amplitude factors a(x, t) are effectively defined by the matrix inversion in (7). The latter feature is an important advantage with respect to the method of [43,44], where solutions of a system of nonlinear algebraic equations are required for the design procedure.
Although the formal proof of our results for small ε is established for π_ε-solutions only, numerical simulations illustrate similar behavior of the classical solutions of the corresponding closed-loop system. Hence, the analysis of the asymptotic behavior of classical solutions remains a subject of future study.

A Auxiliary results
In this appendix, we summarize some auxiliary lemmas which are needed for the proof of the main results.
Lemma 2 Let D ⊆ R^n, t_0 ≥ 0, and x(t) ∈ D, t_0 ≤ t ≤ τ, be a solution of system (1). Assume that there exist M, L ≥ 0 bounding the vector fields and their Lipschitz constants, and let U denote the maximal magnitude of the control on [t_0, τ].

Proof Follows from the Grönwall-Bellman inequality.
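For completeness, the classical inequality invoked in the proof reads: if v is continuous and nonnegative, C, L ≥ 0, and

```latex
v(t) \le C + L \int_{t_0}^{t} v(s)\, ds \quad \text{for } t_0 \le t \le \tau,
\qquad \text{then} \qquad
v(t) \le C\, e^{L (t - t_0)}, \quad t_0 \le t \le \tau .
```

Applied to v(t) = ‖x(t) − y(t)‖ for two trajectories, this yields the exponential-in-time estimates used throughout the appendices.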
Lemma 3 Let D ⊆ R^n, t_0 ≥ 0, and x(t) ∈ D, t_0 ≤ t ≤ τ, be a solution of system (1) with u ∈ C[t_0, τ] and x(t_0) = x_0 ∈ D, and assume that the vector fields satisfy the regularity assumptions above. Then x_π(t) can be represented in the following way:

Proof This result provides a modification of the Chen-Fliess series expansion (see, e.g., [44]).

B Proof of Lemma 1
The proof consists of several steps. Throughout the proof, we assume that the conditions of Lemma 1 hold, and γ > 0 will be chosen in Step 3.
Step 5.This step summarizes all the obtained results and completes the proof of this lemma.
All the assumptions of Theorem 1 are satisfied, so that we immediately have the following properties: for any γ > 0 there exists an ε̄ : R_{>0} → R_{>0} such that, for all t_0 ≥ 0, x_0 ∈ D, and ε ∈ (0, ε̄(γ)], the π_ε-solution x_π(t) of system (9) with the initial data x_π(t_0) = x_0 and the controls u_k = u_k^ε(a(x, t), t) given by (6)-(8) is well-defined in D for all t ≥ t_0, and (28) holds. Thus, the point x* is attractive for system (9). Let us prove that x* is stable. Assume that γ and ε̄(γ) are fixed, with γ ε̄(γ) ≤ 1. For an arbitrary t ≥ t_0, denote by N the integer part of (t − t_0)/ε, N = ⌊(t − t_0)/ε⌋. From (28), using the triangle inequality and condition 3.1, the required stability estimate follows.

The conditions on P used in Theorem 3 are as follows: given system (9) with f_k satisfying Assumption 2 in a domain D ⊆ R^n and a point x* ∈ D, the function P ∈ C^2(D; R) is assumed to satisfy:

3.1) there exist functions w_11, w_12 ∈ K such that {x ∈ R^n : ‖x − x*‖ ≤ w_11^{-1}(P(x_0) − m_P)} ⊂ D for all x_0 ∈ D, and w_11(‖x − x*‖) ≤ P(x) − m_P ≤ w_12(‖x − x*‖) for all x ∈ D;

3.2) ∇P(x) = 0 if and only if x = x*, and there exists a function w_2 ∈ K such that ‖∇P(x)‖ ≤ w_2(‖x − x*‖) for all x ∈ D.