Lagrangian Jacobian inverse for nonholonomic robotic systems

The motion planning problem for nonholonomic robotic systems is studied using the continuation method and the optimization paradigm. A new Jacobian motion planning algorithm is derived, based on the solution of a Lagrange-type optimization problem addressed in the linear approximation of the system. The performance of the new algorithm is illustrated by numerical computations performed for the unicycle robot kinematics.

The motion planning problem then becomes equivalent to inverting the end point map of this system. An application of the continuation method provides a systematic way of deriving Jacobian motion planning algorithms. In the context of robotics, the continuation method was introduced by Sussmann [16] and then used successfully in the motion planning of mobile robots [6,7], mobile manipulators [17] and rolling bodies [1,3]. It has been extended to systems with dynamics [10,13,19], developed theoretically towards proving completeness (convergence for any initial controls) of Jacobian motion planning algorithms [4,5,15,18], and enhanced by incorporating state and control bounds [9], adapting the algorithms to multiple-task motion planning [11,13], and improving their computational properties [12]. The Jacobian of a nonholonomic system is defined as the end point map of the linear approximation to the original system. An inverse Jacobian is computed as a solution of a constrained optimization problem formulated in this linear approximation. Then, an essential ingredient of the motion planning algorithm is a functional differential equation involving an inverse of the Jacobian. In the cited works, the Jacobian inverse has usually come from the minimization of the squared control norm (energy), resulting in either the Jacobian pseudoinverse (a Moore–Penrose generalized inverse in a Hilbert space [2]) or a weighted Jacobian pseudoinverse.
In this note, using the continuation method, we introduce a new Jacobian inverse for the nonholonomic kinematics. The new inverse is referred to as the Lagrangian Jacobian inverse, as it relies on a constrained minimization of a Lagrange-type objective function. Our main result consists in deriving this new inverse and providing the corresponding motion planning algorithm. Unlike the Jacobian pseudoinverse, the Lagrangian Jacobian inverse incorporates at every step not only the minimization of the control variation, but a joint minimization of both the control and the trajectory variations of the system. The computations underlying the algorithm are equivalent to solving a set of differential-algebraic equations. The computability and performance of the Lagrangian Jacobian motion planning algorithm are demonstrated by solving a motion planning problem for the unicycle-type mobile robot. The presented case study suggests that the motion planning algorithm can be designed in such a way that, besides solving the motion planning problem itself, it allows for shaping the system trajectories in order to foster a desired way of motion, e.g. avoiding obstacles.
The remaining part of this note is composed in the following way. Section 2 presents the basic concepts, including basics of the continuation method and the concept of the Jacobian pseudoinverse for nonholonomic systems. Section 3 contains our main result, i.e. the Lagrangian Jacobian inverse. The corresponding motion planning algorithm is described in Sect. 4. Computational aspects of the new algorithm are discussed in Sect. 5 in the context of the unicycle robot. Section 6 contains conclusions. Proofs of the main results are given in "Appendix".

Basic concepts
We shall study the kinematics of a nonholonomic robotic system, represented as a driftless control system with outputs

q̇ = G(q)u, y = k(q), (1)

where q = (q_1, q_2, ..., q_n)^T ∈ R^n is the state variable, u = (u_1, u_2, ..., u_m)^T ∈ R^m denotes the control, and y = (y_1, y_2, ..., y_r)^T ∈ R^r stands for the output variable. All the vector fields and functions appearing in (1) will be assumed smooth (of class C^∞). The admissible control functions u(·) ∈ U ⊂ L^2_m[0, T], where T is a fixed control horizon, belong to a subset of the space of Lebesgue square integrable functions on [0, T], with the inner product inherited from the ambient space. It is assumed that for admissible control functions the state trajectory q(t) = ϕ_{q_0,t}(u(·)) exists for every t ∈ [0, T] and every initial state q_0 ∈ R^n. Given a state trajectory, the end point map K_{q_0,T} : U → R^r of the system (1) is defined as

K_{q_0,T}(u(·)) = k(ϕ_{q_0,T}(u(·))). (2)

It is well known that the end point map of the system (1) is continuously differentiable [14]. Its derivative will be referred to as the system's Jacobian,

J_{q_0,T}(u(·))v(·) = (d/dα)|_{α=0} K_{q_0,T}(u(·) + αv(·)), (3)

for u(·), v(·) ∈ U. Let ξ(t) and η(t) denote the state and the output trajectory of the linear approximation to (1) along (u(t), q(t)), so that

J_{q_0,T}(u(·))v(·) = η(T). (4)

This linear approximation takes the form [17]

ξ̇(t) = A(t)ξ(t) + B(t)v(t), η(t) = C(t)ξ(t), (5)

where A(t) = ∂(G(q(t))u(t))/∂q, B(t) = G(q(t)), and C(t) = ∂k(q(t))/∂q. Then, after solving (5) for ξ(0) = 0, the Jacobian (3) can be expressed as

J_{q_0,T}(u(·))v(·) = C(T) ∫_0^T Φ(T, s)B(s)v(s) ds, (6)

with the transition matrix Φ(t, s) satisfying the evolution equation

∂Φ(t, s)/∂t = A(t)Φ(t, s), (7)

along with Φ(s, s) = I_n, the identity matrix.

Using the kinematics representation (1), we shall study the following motion planning problem of a nonholonomic robotic system: given an initial state q_0 ∈ R^n and a terminal point y_d ∈ R^r, find a control function u(·) ∈ U such that

K_{q_0,T}(u(·)) = y_d. (8)

This motion planning problem can be solved by means of a Jacobian algorithm whose derivation goes along the following lines. We begin with an arbitrary control function u_0(·) ∈ U. If it solves the problem, we are done.
Otherwise, we introduce a deformation of u_0(·) into a smooth curve u_θ(·) ∈ U, θ ∈ R, such that u_{θ=0}(·) = u_0(·), and compute the motion planning error e(θ) = K_{q_0,T}(u_θ(·)) − y_d along this curve.
Requesting that the error decrease exponentially, de(θ)/dθ = −γ e(θ) with a decay rate γ > 0, we arrive at the Ważewski–Davidenko equation

J_{q_0,T}(u_θ(·)) (du_θ(·)/dθ) = −γ e(θ), (9)

involving the Jacobian (3). This equation can be made explicit after employing a right inverse J^#_{q_0,T}(u(·)) : R^r → U of the Jacobian, so that

du_θ(·)/dθ = −γ J^#_{q_0,T}(u_θ(·)) e(θ). (10)

By design, the control function driving the system (1) from the initial state q_0 to y(T) = y_d ∈ R^r is computed as the limit

u_d(·) = lim_{θ→+∞} u_θ(·). (11)

The right Jacobian inverse is usually obtained by solving for v(·) ∈ U a Jacobian equation

J_{q_0,T}(u(·))v(·) = η, η ∈ R^r. (12)

Specifically, the Jacobian equation may be attached as an equality constraint to the energy minimization problem in the linearization (5),

min_{v(·)} ∫_0^T v^T(t)v(t) dt, (13)

on condition that (12) holds; this results in the Jacobian pseudoinverse [17]

(J^#_{q_0,T}(u(·))η)(t) = B^T(t)Φ^T(T, t)C^T(T) G^{-1}_{q_0,T}(u(·)) η, (14)

where

G_{q_0,T}(u(·)) = C(T) (∫_0^T Φ(T, t)B(t)B^T(t)Φ^T(T, t) dt) C^T(T) (15)

is the output controllability Gramian of the system (5). In the mobile robotics context, this Gramian is referred to as the mobility matrix [19]. It follows that the Gramian matrix can be computed by solving the Lyapunov differential equation

Ṁ(t) = A(t)M(t) + M(t)A^T(t) + B(t)B^T(t) (16)

for the zero initial condition M(0) = 0 and setting G_{q_0,T}(u(·)) = C(T)M(T)C^T(T). Provided that the Gramian (15) is full rank, a straightforward computation shows that the minimal value of the objective function (13) equals η^T G^{-1}_{q_0,T} η.
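To make these constructions concrete, the sketch below approximates them numerically for the unicycle kinematics used later in Sect. 5, taken here as a hypothetical test instance with the identity output (so C = I_3 and r = n = 3): the end point map is obtained by forward integration of (1), the Jacobian (6) by a backward pass accumulating Φ(T, t)B(t), and the Gramian both directly from (15) and through the Lyapunov equation (16). The uniform control grid and the Euler scheme are arbitrary implementation choices, not part of the method.

```python
import numpy as np

# Unicycle instance of system (1): q = (x, y, phi), dq/dt = G(q) u,
# identity output k(q) = q (so C = I_3); controls sampled on a uniform grid.

def G(q):
    return np.array([[np.cos(q[2]), 0.0],
                     [np.sin(q[2]), 0.0],
                     [0.0,          1.0]])

def A_matrix(q, u):
    # A(t) = d(G(q)u)/dq along the trajectory, cf. (5)
    return np.array([[0.0, 0.0, -np.sin(q[2]) * u[0]],
                     [0.0, 0.0,  np.cos(q[2]) * u[0]],
                     [0.0, 0.0,  0.0]])

def trajectory(q0, u, T):
    """Euler integration of (1); u has shape (N, m). For the identity
    output, q(T) realizes the end point map K_{q0,T}(u)."""
    N = u.shape[0]
    dt = T / N
    qs = np.zeros((N + 1, 3)); qs[0] = q0
    for k in range(N):
        qs[k + 1] = qs[k] + dt * G(qs[k]) @ u[k]
    return qs

def jacobian_data(q0, u, T):
    """Backward pass: Bt[k] = Phi(T, t_{k+1}) B(t_k), so that the Jacobian
    (6) acts as J v = sum_k dt * Bt[k] v_k. Also returns the Gramian (15)
    and its Lyapunov-equation approximation (16)."""
    N = u.shape[0]
    dt = T / N
    qs = trajectory(q0, u, T)
    Bt, P = [None] * N, np.eye(3)
    for k in range(N - 1, -1, -1):
        Bt[k] = P @ G(qs[k])
        P = P @ (np.eye(3) + dt * A_matrix(qs[k], u[k]))
    Gram = dt * sum(b @ b.T for b in Bt)      # discrete counterpart of (15)
    M = np.zeros((3, 3))                      # Euler scheme for (16)
    for k in range(N):
        A, B = A_matrix(qs[k], u[k]), G(qs[k])
        M = M + dt * (A @ M + M @ A.T + B @ B.T)
    return qs, Bt, Gram, M

def pseudoinverse_control(q0, u, T, eta):
    """Pseudoinverse (14): v(t) = B^T(t) Phi^T(T, t) Gram^{-1} eta."""
    _, Bt, Gram, _ = jacobian_data(q0, u, T)
    return np.array([b.T @ np.linalg.solve(Gram, eta) for b in Bt])
```

A finite-difference test confirms the Jacobian columns, and the pseudoinverse control can be checked to solve the Jacobian equation (12) in the linearization.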

Lagrangian Jacobian inverse
As has been said above, a right inverse of the Jacobian (6) can be obtained by solving a constrained optimization problem for the linearization (5) of the original system (1). A natural generalization of the objective function considered in (13) is an objective function in the Lagrange form, which leads to the Lagrange-type optimization problem in (5):

min_{v(·)} ∫_0^T ( ξ^T(t)Q(t)ξ(t) + v^T(t)R(t)v(t) ) dt, (17)

subject to (5) with ξ(0) = 0, along with the Jacobian equation (12), where Q(t) and R(t) are, respectively, a positive semidefinite and a positive definite symmetric weight matrix. The resulting Jacobian inverse will be referred to as the Lagrangian Jacobian inverse J^{L#}_{q_0,T}(u(·)). The following theorem establishes the explicit form of the Lagrangian Jacobian inverse. Its proof can be found in Proof of Theorem 1 in "Appendix".

Theorem 1 The Lagrangian Jacobian inverse J^{L#}_{q_0,T}(u(·)) is given by formula (18), where the block matrix function Ψ(t) = [ψ_{ij}(t)], i, j = 1, 2, 3, solves the linear differential equation (20) with the initial condition ψ_{ij}(0) = δ_{ij} I_n, δ_{ij} denoting the Kronecker delta. The matrix D(t) is a solution of a Lyapunov equation with the zero initial condition D(0) = 0, and defines the mobility matrix.

The mobility matrix will be assumed to have full rank r. It is easily shown that, after setting Q(t) = 0 and R(t) = I_m, the Lagrangian Jacobian inverse and the Jacobian pseudoinverse coincide. This is stated as Corollary 1, which will be demonstrated in Proof of Corollary 1 in "Appendix". By inspection of the differential equation (20), we obtain the next Corollary 2: in the block matrix Ψ(t), the terms ψ_{13}(t) and ψ_{23}(t) are zero.
Suppose that the output function of (1) is the identity function k(q) = q. Then we have Corollary 3, whose proof can be found in Proof of Corollary 3 in "Appendix".
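Alternatively to the closed-form expression of Theorem 1, the Lagrange-type problem (17), (5), (12) can be solved numerically by discretizing the linearization and assembling the KKT system of the resulting equality-constrained quadratic program. The sketch below (an illustrative reconstruction for the unicycle instance with identity output, not the authors' DAE-based computation) also makes Corollary 1 checkable: for Q = 0, R = I_m the minimizer coincides with the pseudoinverse control.

```python
import numpy as np

def G(q):
    return np.array([[np.cos(q[2]), 0.0],
                     [np.sin(q[2]), 0.0],
                     [0.0,          1.0]])

def A_matrix(q, u):
    return np.array([[0.0, 0.0, -np.sin(q[2]) * u[0]],
                     [0.0, 0.0,  np.cos(q[2]) * u[0]],
                     [0.0, 0.0,  0.0]])

def lagrangian_inverse_control(q0, u, T, eta, Q, R):
    """Minimize sum_j dt (xi_j^T Q xi_j + v_j^T R v_j) subject to the Euler
    discretization of (5) with xi(0) = 0 and the end condition xi_N = eta
    (identity output), via the KKT system of the constrained QP."""
    N, m = u.shape
    dt = T / N
    qs = np.zeros((N + 1, 3)); qs[0] = q0
    for k in range(N):
        qs[k + 1] = qs[k] + dt * G(qs[k]) @ u[k]
    # sensitivity X with xi_j = X @ v, where v = (v_0, ..., v_{N-1}) stacked
    X = np.zeros((3, N * m))
    H = dt * np.kron(np.eye(N), R)            # control part of the Hessian
    for j in range(N):
        if j > 0:
            H += dt * X.T @ Q @ X             # state penalty term, j = 1..N-1
        X = (np.eye(3) + dt * A_matrix(qs[j], u[j])) @ X
        X[:, j * m:(j + 1) * m] += dt * G(qs[j])
    H += dt * X.T @ Q @ X                     # terminal term xi_N^T Q xi_N
    W = X                                     # constraint: W v = xi_N = eta
    # KKT system: [2H  W^T; W  0] [v; lam] = [0; eta]
    KKT = np.block([[2.0 * H, W.T],
                    [W, np.zeros((3, 3))]])
    rhs = np.concatenate([np.zeros(N * m), eta])
    sol = np.linalg.solve(KKT, rhs)
    return sol[:N * m].reshape(N, m)

def pseudoinverse_control(q0, u, T, eta):
    """Closed-form pseudoinverse control (14) for comparison."""
    N, m = u.shape
    dt = T / N
    qs = np.zeros((N + 1, 3)); qs[0] = q0
    for k in range(N):
        qs[k + 1] = qs[k] + dt * G(qs[k]) @ u[k]
    Bt, P = [None] * N, np.eye(3)
    for k in range(N - 1, -1, -1):
        Bt[k] = P @ G(qs[k])
        P = P @ (np.eye(3) + dt * A_matrix(qs[k], u[k]))
    Gram = dt * sum(b @ b.T for b in Bt)
    return np.array([b.T @ np.linalg.solve(Gram, eta) for b in Bt])
```

With Q = 0 and R = I_2 the KKT solution reproduces the pseudoinverse exactly, while a nonzero Q redistributes the effort between control and trajectory variations without violating the constraint (12).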

Motion planning
The Lagrangian Jacobian inverse gives rise to a Lagrangian Jacobian motion planning algorithm based on the scheme (11), with J^#_{q_0,T}(u(·)) replaced by J^{L#}_{q_0,T}(u(·)), see (18). The computations involved in this algorithm amount to solving for u_θ(t) the set of differential-algebraic equations (†).
As explained in Sect. 2, the solution of the motion planning problem is obtained as the limit u_d(·) = lim_{θ→+∞} u_θ(·) of (11).
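The complete scheme (10)-(11) can be sketched end to end. The toy planner below (an illustrative reconstruction, not the authors' MATLAB implementation) discretizes both t and θ by Euler steps and, for brevity, employs the Jacobian pseudoinverse (14) in place of J^{L#}; the continuation loop itself is identical for either inverse. The step size dtheta, the grid size N and the tolerances are arbitrary choices.

```python
import numpy as np

def G(q):
    return np.array([[np.cos(q[2]), 0.0],
                     [np.sin(q[2]), 0.0],
                     [0.0,          1.0]])

def A_matrix(q, u):
    return np.array([[0.0, 0.0, -np.sin(q[2]) * u[0]],
                     [0.0, 0.0,  np.cos(q[2]) * u[0]],
                     [0.0, 0.0,  0.0]])

def plan_motion(q0, yd, u, T, gamma=1.0, dtheta=0.2, tol=1e-4, max_iter=300):
    """Continuation loop: u <- u - dtheta * gamma * J^#(u) e(theta), an Euler
    step of (10), with the pseudoinverse control standing in for the
    Jacobian inverse (identity output assumed)."""
    N = u.shape[0]
    dt = T / N
    for _ in range(max_iter):
        # forward simulation of (1)
        qs = np.zeros((N + 1, 3)); qs[0] = q0
        for k in range(N):
            qs[k + 1] = qs[k] + dt * G(qs[k]) @ u[k]
        e = qs[-1] - yd                       # motion planning error e(theta)
        if np.linalg.norm(e) < tol:
            return u, np.linalg.norm(e)
        # pseudoinverse control for eta = e: backward pass for Phi(T, t) B(t)
        Bt, P = [None] * N, np.eye(3)
        for k in range(N - 1, -1, -1):
            Bt[k] = P @ G(qs[k])
            P = P @ (np.eye(3) + dt * A_matrix(qs[k], u[k]))
        Gram = dt * sum(b @ b.T for b in Bt)
        v = np.array([b.T @ np.linalg.solve(Gram, e) for b in Bt])
        u = u - dtheta * gamma * v            # Euler step of (10)
    return u, np.linalg.norm(e)
```

Starting, for instance, from the initial guess u_0 = (0.5, sin(2πt/T)) used in Sect. 5, the loop drives the unicycle endpoint to y_d = (1, 1, 0) with the error decreasing roughly geometrically in θ.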

Computer simulations
An advantage of the Lagrangian Jacobian motion planning algorithm is that, by a suitable choice of the matrices Q(t) and R(t) defining the Lagrange objective function (17), besides solving the motion planning problem itself, it also enables shaping the state trajectory of the system (1).
For a given θ, suppose that v_θ(·) is the solution of the Jacobian equation obtained by means of the Lagrangian Jacobian inverse. By (11) and (18) with the substitution η = e(θ), this means that

v_θ(·) = J^{L#}_{q_0,T}(u_θ(·)) e(θ). (25)

Furthermore, let q_θ(t) = ϕ_{q_0,t}(u_θ(·)) be the state trajectory of the control system (1) steered by u_θ(t), and let ξ_θ(·) denote the corresponding state trajectory of the linearization (5). Then, the identity (4) combined with (25) yields

J_{q_0,T}(u_θ(·)) v_θ(·) = C(T)ξ_θ(T) = e(θ). (26)

The functions v_θ(·) and ξ_θ(·) can be interpreted as directions of motion in the control and trajectory spaces. For a curve c_θ(t) in the state space of system (1), let V_θ(t) denote a vector field along c_θ(t). The matrix Q(t) can be made θ-dependent, so we set Q_θ(t) = V_θ(t)V^T_θ(t). Then, for a fixed θ, the Lagrange objective function (17) becomes equal to

∫_0^T ( (V^T_θ(t)ξ(t))^2 + v^T(t)R_θ(t)v(t) ) dt, (27)

where we have also allowed for the matrix R(t) to depend on θ. With a suitable choice of the vector field V_θ(t) and the matrix R_θ(t), the minimization of this objective function will make the direction of motion ξ_θ(·) orthogonal to V_θ(·) at each t. It might be expected that, by an appropriate choice of the matrices Q(t) and R(t), it could be possible to shape the solution of the motion planning problem, e.g. in order to enable avoiding obstacles in the state space of the system (1).
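A minimal sketch of this shaping mechanism, under assumptions of ours (planar point-obstacles, a unit direction field rotated by π/2 about the Z axis, and an arbitrary weight; the field construction follows the description given for the unicycle in Sect. 5 but its details are hypothetical): the matrix Q_θ(t) = V_θ(t)V^T_θ(t) penalizes exactly the component of ξ along V_θ.

```python
import numpy as np

C = np.diag([1.0, 1.0, 0.0])        # selects the position part of q = (x, y, phi)
RZ = np.array([[0.0, -1.0, 0.0],    # rotation by pi/2 about the Z axis
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])

def obstacle_field(q, obstacles, weight=10.0):
    """V(q): sum of unit directions from the planar trajectory point towards
    each obstacle, rotated by pi/2 (hypothetical variant; the weight is an
    assumption)."""
    V = np.zeros(3)
    for o in obstacles:
        d = o - C @ q                 # planar offset towards the obstacle
        n = np.linalg.norm(d)
        if n > 1e-9:                  # skip if the point sits on the obstacle
            V += RZ @ (d / n)
    return weight * V

def shaping_matrix(q, obstacles, weight=10.0):
    """Q_theta = V V^T: rank-one penalty on the component of xi along V."""
    V = obstacle_field(q, obstacles, weight)
    return np.outer(V, V)
```

Any ξ orthogonal to V incurs no penalty, so the minimization of (27) favours directions of motion transversal to the field, which is the repelling effect described above.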
In the remaining part of this section, we shall verify this expectation by numerically solving a motion planning problem for the unicycle-type mobile robot. The robot is shown schematically in Fig. 1.
Under the assumption that the wheel is not permitted to slip laterally, its kinematics are represented by the driftless control system

q̇_1 = u_1 cos q_3, q̇_2 = u_1 sin q_3, q̇_3 = u_2,

with the state variable q = (q_1, q_2, q_3)^T ∈ R^3 denoting the wheel's position and orientation (see the figure), a pair of controls u_1, u_2 denoting, respectively, the linear and the angular velocity of the wheel, and the identity output function. Suppose that the unicycle placed at a given initial state q_0 should move to a desired point y_d, simultaneously avoiding a collection of state space point-obstacles o_1, o_2, ..., o_p. Let, for a certain θ, the trajectory of the unicycle be q_θ(t). Along this trajectory, we define a vector field V_θ(t) whose i-th fractional term is a unit vector pointing from the trajectory towards the obstacle number i, rotated by R(Z, π/2), the rotation matrix around the Z axis by the angle π/2. In order to deal only with position obstacles, the matrix C is set to the block matrix C = diag(I_2, 0). For a vector field so defined, the minimization of the Lagrange objective function (27) will result in making ξ_θ(t) orthogonal to V_θ(t), i.e. repelling the trajectory q_θ(t) from the obstacles. This idea will be illustrated by numerically solving a motion planning problem for the unicycle, characterized by the initial state q_0 = (0, 0, 0), the desired point y_d = q_d = (1, 1, 0), and a set of three obstacles (p = 3) to be specified later on. The control horizon is set to T = 2, and the decay rate in Algorithm 1 is chosen as γ = 3. The computations run until the error norm ||e(θ)|| drops below 10^{-4}. The initial control function is chosen as u_0 = (0.5, sin(2πt/T)). In order to effectively visualize the algorithm's performance, the obstacles will be added progressively. Firstly, we solve the motion planning problem without any obstacles, setting Q_θ(t) = 10^2 I_3. Next, we place the first obstacle o_1 directly on the trajectory obtained from the previous simulation, and set Q_θ(t) = 10^2 V_{1θ}(t)V^T_{1θ}(t).
After that, we will add a second obstacle o_2, placed in an analogous way. In Figs. 2, 3 and 4, the dotted line corresponds to the computation without the obstacles, the thick solid line is the final solution for all obstacles, and the thin solid lines represent the intermediate solutions. Figure 2 reveals how the consecutive addition of the obstacles affects the solutions. Figures 3 and 4 show the control function for each solution. The algorithm's convergence is depicted in Fig. 5; it can be seen that the preassumed decay rate γ = 3 has been maintained.
Algorithm 1 can be run either in a parametric or in a nonparametric mode [11]. In the former case, a finite-dimensional representation of the control function is used. The computations accomplished in this note follow the nonparametric mode and employ a built-in MATLAB variable step, higher-order ODE solver. Such an approach provides a very high accuracy of computations at the expense of the computation time. In the presented example, in every step of the motion planning algorithm, solving the differential equation (†) for all t and a single value of θ takes about 0.5 s on a 3.2 GHz PC. In order to solve the whole motion planning problem, it is necessary to perform about 100 steps, which requires a total computation time of around 1 min. As long as we are solving a motion planning problem, this is tolerable because the computations can be done offline. These computations might be accelerated by relaxing accordingly the accuracy requirement and passing to the parametric mode. For real-time control, e.g. for the tracking of a trajectory planned offline, a predictive control algorithm may be recommended, supported with the computational tools of ACADO [8].

Conclusion
A contribution of this note is the Lagrangian Jacobian inverse, and the corresponding motion planning algorithm for nonholonomic robotic systems, based on the continuation method and the optimization paradigm. This inverse has been designed by a joint minimization of the variations of the system control and the system trajectory. Computability of the new algorithm has been shown on a simple example of the unicycle. The outcomes of the computations suggest that employing the Lagrange objective function in the definition of the Jacobian inverse offers new possibilities of shaping the resulting plan of motion, e.g. in order to avoid obstacles. A systematic examination of these possibilities will be a subject of our future work. Another open area for future research is the problem of existence (singularities) and completeness (global convergence) of the Lagrangian Jacobian motion planning algorithm, both in the form presented in this note and in a singularity-robust version. As in the case of the classical inverse (Q(t) = 0), a version of Algorithm 1 respecting control and configuration constraints can be derived. Last but not least, an important issue will be the organization of computations supporting Algorithm 1 that would balance accuracy and efficiency. Although designed basically for driftless control systems representing nonholonomic robot kinematics, the Lagrangian Jacobian motion planning algorithm can be extended in a natural way to nonholonomic robotic systems with dynamics, represented by control affine systems.
Acknowledgments The authors are indebted to anonymous reviewers for their remarks concerned with the contents of this note as well as for their suggestions aimed at improving its readability.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Appendix

Proof of Theorem 1 The Lagrange-type optimization problem (17), (5), (12) is handled by means of the Lagrange multiplier method, the augmented objective function containing terms of the form

∫ ξ^T(t)Q(t)Φ(t, s)B(s)w(s) ds,

where λ ∈ R^r denotes a vector of Lagrange multipliers. Using an auxiliary identity, we transform (29) into a condition built of the terms

∫ ξ^T(t)Q(t)Φ(t, s)B(s) dt + v^T(s)R(s).

The necessary optimality condition requests that this condition hold for every admissible variation w(·). Now, a substitution of (31) into the Jacobian equation (12) allows one to compute λ and, after the elimination of λ from (31), we obtain a preliminary form of the Lagrangian Jacobian inverse. Let us denote the term in brackets on the right-hand side of the above expression as F(t, s), so that (18) holds. On account of (7), a time differentiation of (35) yields an identity which, by (32), leads to the desired conclusion. Eventually, taking into account the formula (34) and the fact that F(t, t) = 0, we compute the derivative appearing in the statement of the theorem.