Encyclopedia of Systems and Control

Living Edition
Editors: John Baillieul, Tariq Samad

Singular Trajectories in Optimal Control

  • Bernard Bonnard
  • Monique Chyba
Living reference work entry
DOI: https://doi.org/10.1007/978-1-4471-5102-9_49-1

Abstract

Singular trajectories arise in optimal control as singularities of the end-point mapping. Their importance has long been recognized, at first in the Lagrange problem in the calculus of variations where they are lifted into abnormal extremals. Singular trajectories are candidates as minimizers for the time-optimal control problem, and they are parameterized by the maximum principle via a pseudo-Hamiltonian function. Moreover, besides their importance in optimal control theory, these trajectories play an important role in the classification of systems for the action of the feedback group.

Keywords

End-point mapping; Abnormal extremals; Pseudo-Hamiltonian; Saturation problem in NMR; Martinet flat case in SR geometry

Introduction

The concept of singular trajectories in optimal control corresponds to abnormal extrema in optimization. Suppose that a point \({x}^{{\ast}}\in X \simeq {\mathbb{R}}^{n}\) is an extremum of a smooth function \(\mathcal{L} : {\mathbb{R}}^{n} \rightarrow \mathbb{R}\) under the equality constraint F(x) = 0, where \(F : X \rightarrow Y\) is a smooth mapping into \(Y \simeq {\mathbb{R}}^{p}\), p < n. The Lagrange multiplier rule (Agrachev et al. 1997) asserts the existence of a nonzero pair \((\lambda _{0}{,\lambda }^{{\ast}})\) of Lagrange multipliers such that \(\lambda _{0}\mathcal{L}'({x}^{{\ast}}) {+\lambda }^{{\ast}}F'({x}^{{\ast}}) = 0\). The normality condition is \(\lambda _{0}\neq 0\), and the abnormal case corresponds to the situation where the rank of \(F'({x}^{{\ast}})\) is strictly less than p.
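
As a minimal illustration (not part of the original entry), take p = 1 and the constraint \(F(x) = x_{1}^{2} + x_{2}^{2}\): the only feasible point is the origin, where F′(0) = (0, 0) has rank 0 < 1, so any pair of multipliers is necessarily abnormal. The following sympy sketch checks this rank condition; the cost and the symbol names are chosen for illustration only.

    # Minimal sketch (illustrative): an abnormal point for the Lagrange multiplier rule.
    # Constraint F(x) = x1^2 + x2^2 = 0 forces x* = (0, 0); there F'(x*) has rank 0 < p = 1,
    # so lambda_0 * L'(x*) + lambda * F'(x*) = 0 can only hold with lambda_0 = 0 (abnormal case).
    import sympy as sp

    x1, x2, lam0, lam = sp.symbols('x1 x2 lambda0 lambda_')
    L = x1 + x2                       # any smooth cost works for this illustration
    F = x1**2 + x2**2                 # single equality constraint, p = 1

    grad_L = sp.Matrix([L]).jacobian([x1, x2])
    grad_F = sp.Matrix([F]).jacobian([x1, x2])

    x_star = {x1: 0, x2: 0}
    print(grad_F.subs(x_star))        # Matrix([[0, 0]]): rank 0 < 1, abnormal point
    print((lam0*grad_L + lam*grad_F).subs(x_star))   # Matrix([[lambda0, lambda0]]) -> lambda_0 = 0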

Abnormal extremals have played an important role in the standard calculus of variations (Bliss 1946). Indeed, consider a classical Lagrange problem:
$$\displaystyle\begin{array}{rcl} \frac{dx} {dt} (t) = F(x(t),u(t)),\;\;\min _{u(.)}\displaystyle\int _{0}^{T}L(x(t),u(t))dt& & \\ x(0) = x_{0},x(T) = x_{1},& & \\ \end{array}$$
where \(x(t) \in X \simeq{\mathbb{R}}^{n}\), \(u(t) \in{\mathbb{R}}^{m}\), and F and L are smooth. In an infinite-dimensional framework, the Lagrange multiplier rule still holds, and an abnormal extremum corresponds to a singularity of the set of constraints.

Definition

Consider a system on \({\mathbb{R}}^{n}\): \(\frac{dx} {dt} (t) = F(x(t),u(t))\) where F is a smooth mapping from \({\mathbb{R}}^{n} \times{\mathbb{R}}^{m}\) into \({\mathbb{R}}^{n}\). Fix \(x_{0} \in{\mathbb{R}}^{n}\) and T > 0. The end-point mapping is the mapping \({E}^{x_{0},T} : u(.) \in \mathcal{U}\rightarrow x(T,x_{0},u)\), where \(\mathcal{U}\subset{L}^{\infty }[0,T]\) is the set of admissible controls such that the corresponding trajectory \(x(.,x_{0},u)\) is defined on [0, T]. A control u(. ) and its corresponding trajectory are called singular on \([0,T]\) if \(u(.) \in \mathcal{U}\) is such that the Fréchet derivative \({E'}^{x_{0},T}\) of the end-point mapping is not of full rank n at u(. ).
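
A minimal numerical sketch of the end-point mapping (an illustration with assumed dynamics and names, not from the entry): for a chosen system \(\frac{dx}{dt} = F(x,u)\) and a control sampled on a uniform grid, \({E}^{x_{0},T}(u)\) is obtained by integrating the dynamics on [0, T].

    # Sketch of the end-point mapping E^{x0,T}(u) for an assumed planar system
    # dx/dt = F(x, u) = (x2, u - x1); the control is applied with a zero-order hold.
    import numpy as np
    from scipy.integrate import solve_ivp

    def F(x, u):
        return np.array([x[1], u - x[0]])

    def endpoint(u_samples, x0, T):
        """Integrate the dynamics with a piecewise-constant control and return x(T)."""
        t_grid = np.linspace(0.0, T, len(u_samples) + 1)
        def rhs(t, x):
            k = min(np.searchsorted(t_grid, t, side='right') - 1, len(u_samples) - 1)
            return F(x, u_samples[k])
        sol = solve_ivp(rhs, (0.0, T), x0, rtol=1e-9, atol=1e-12)
        return sol.y[:, -1]

    print(endpoint(np.zeros(50), x0=np.array([1.0, 0.0]), T=2.0))   # end point reached with u = 0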

Fréchet Derivative and Linearized System

Given a reference trajectory x(. ), t ∈ [0, T], associated with the control u(. ), with \(x(0) = x_{0}\) and solution of \(\frac{dx} {dt} (t) = F(x(t),u(t))\), the system
$$\dot{\delta x}(t) = A(t)\delta x(t) + B(t)\delta u(t)$$
with
$$A(t) = \frac{\partial F} {\partial x} (x(t),u(t)),\;B(t) = \frac{\partial F} {\partial u} (x(t),u(t))$$
is called the linearized system along the control-trajectory pair (u(. ), x(. )).
Let M(t), t ∈ [0, T], be the fundamental matrix, solution of
$$\dot{M}(t) = A(t)M(t),\qquad M(0) =\mathrm{ I}_{n}.$$
Integrating the linearized system with δx(0) = 0, one gets the following proposition.

Proposition 1.

The Fréchet derivative of \({E}^{x_{0},T}\) at u(.) is given by
$$E'{}_{u}^{x_{0},T}(v) = M(T)\displaystyle\int _{ 0}^{T}{M}^{-1}(t)B(t)v(t)dt.$$
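
The formula of Proposition 1 can be checked numerically on a simple example (a sketch under assumed dynamics; the names are illustrative, not from the entry): integrate x(t), the fundamental matrix M(t), and the integral \(\int_{0}^{T}{M}^{-1}(t)B(t)v(t)dt\) along the reference trajectory, and compare \(M(T)\int_{0}^{T}{M}^{-1}(t)B(t)v(t)dt\) with the finite-difference quotient \(({E}^{x_{0},T}(u +\varepsilon v) - {E}^{x_{0},T}(u))/\varepsilon\).

    # Numerical check of Proposition 1 on an assumed example:
    #   dx/dt = (x2, u - sin x1), reference control u = 0, x0 = (0.5, 0).
    import numpy as np
    from scipy.integrate import solve_ivp

    T, x0, v = 2.0, np.array([0.5, 0.0]), np.cos     # v(.) is the perturbation direction

    def flow(eps):
        """End point of dx/dt = F(x, eps*v(t)) starting from x0."""
        rhs = lambda t, x: np.array([x[1], eps * v(t) - np.sin(x[0])])
        return solve_ivp(rhs, (0, T), x0, rtol=1e-10, atol=1e-12).y[:, -1]

    def augmented(t, z):
        # z packs x (2 entries), the fundamental matrix M (4) and I(t) = int_0^t M^{-1} B v ds (2)
        x, M = z[:2], z[2:6].reshape(2, 2)
        A = np.array([[0.0, 1.0], [-np.cos(x[0]), 0.0]])   # A(t) = dF/dx along x(t)
        B = np.array([0.0, 1.0])                           # B(t) = dF/du
        dx = np.array([x[1], -np.sin(x[0])])               # reference dynamics, u = 0
        return np.concatenate([dx, (A @ M).ravel(), np.linalg.solve(M, B) * v(t)])

    z0 = np.concatenate([x0, np.eye(2).ravel(), np.zeros(2)])
    z = solve_ivp(augmented, (0, T), z0, rtol=1e-10, atol=1e-12).y[:, -1]
    M_T, I_T = z[2:6].reshape(2, 2), z[6:]

    print(M_T @ I_T)                        # E'_u(v) from Proposition 1
    print((flow(1e-6) - flow(0.0)) / 1e-6)  # finite-difference approximation, should agree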

Computation of the Singular Trajectories and Pontryagin Maximum Principle

According to the previous computations, a control u(. ) with corresponding trajectory x(. ) is singular on [0, T] if the Fréchet derivative \({E'}^{x_{0},T}\) is not of full rank at u(. ). This is equivalent to the condition that the linearized system is not controllable (Lee and Markus 1967).

Such a condition is difficult to verify directly since the linearized system is time-dependent; in practice the computation is carried out via the maximum principle (Pontryagin et al. 1962).

Let \({p}^{{\ast}}\) be a nonzero (row) vector orthogonal to \(\mathrm{Im}({E'}^{x_{0},T})\) and let \(p(t) = {p}^{{\ast}}M(T){M}^{-1}(t)\); then p(. ) is a solution of the adjoint system
$$\dot{p}(t) = -p(t)\frac{\partial F} {\partial x} (x(t),u(t))$$
and satisfies almost everywhere the equality
$$p(t)\frac{\partial F} {\partial u} (x(t),u(t)) = 0.$$
Introducing the pseudo-Hamiltonian \(H(x,p,u) =\langle p,F(x,u)\rangle\), where \(\langle.,.\rangle\) is the Euclidean inner product, one gets the following characterization.

Proposition 2.

If (x,u) is a singular control-trajectory pair on [0,T], then there exists a nonzero adjoint vector p(.) defined on [0,T] such that (x,p,u) is a solution a.e. of the following equations:
$$\displaystyle\begin{array}{rcl} \frac{dx} {dt} = \frac{\partial H} {\partial p} (x,p,u),\; \frac{dp} {dt} = -\frac{\partial H} {\partial x} (x,p,u)& & \\ \frac{\partial H} {\partial u} (x,p,u) = 0.& & \\ \end{array}$$
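
As a simple illustration (not from the entry), the double integrator \(\dot{x}_{1} = x_{2}\), \(\dot{x}_{2} = u\) admits no singular trajectories: \(\frac{\partial H}{\partial u} = p_{2} = 0\), and the adjoint equations give \(\dot{p}_{2} = -p_{1}\), hence \(p_{1} = 0\) as well, contradicting the nonvanishing of the adjoint vector. The short symbolic sketch below (sympy, illustrative names) reproduces this computation.

    # Proposition 2 applied to the double integrator (illustration): the condition
    # dH/du = 0 together with the adjoint system forces p = 0, so no singular pair exists.
    import sympy as sp

    x1, x2, p1, p2, u = sp.symbols('x1 x2 p1 p2 u')
    F = sp.Matrix([x2, u])                       # dynamics of the double integrator
    p = sp.Matrix([[p1, p2]])                    # adjoint (row) vector
    H = (p * F)[0]                               # pseudo-Hamiltonian <p, F(x,u)>

    print(sp.diff(H, u))                                     # p2: vanishes on a singular arc
    print(-sp.Matrix([[sp.diff(H, x1), sp.diff(H, x2)]]))    # dp/dt = (0, -p1): p1 must vanish too
    # p1 = p2 = 0 contradicts p != 0, hence the double integrator has no singular trajectories.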

Application to the Lagrange Problem

Consider the problem
$$\frac{dx} {dt} (t) = F(x(t),u(t)),\;\;\min \displaystyle\int _{0}^{T}L(x(t),u(t))dt$$
with x(0) = x 0, x(T) = x 1.
Introduce the cost-extended pseudo-Hamiltonian: \(\tilde{H}(x,p,u) =\langle p,F(x,u)\rangle + p_{0}L(x,u)\); it follows that the maximum principle is equivalent to the Lagrange multiplier rule presented in the introduction:
$$\displaystyle\begin{array}{rcl} \frac{d\tilde{x}} {dt} = \frac{\partial \tilde{H}} {\partial \tilde{p}} (\tilde{x},\tilde{p},u), \frac{d\tilde{p}} {dt} = -\frac{\partial \tilde{H}} {\partial \tilde{x}} (\tilde{x},\tilde{p},u)& & \\ \frac{\partial \tilde{H}} {\partial u} (\tilde{x},\tilde{p},u) = 0& & \\ \end{array}$$
where \(\tilde{x} = (x,{x}^{0})\) is the extended state variable solution of \(\frac{dx} {dt} = F(x,u), \frac{d{x}^{0}} {dt} = L(x,u)\) and \(\tilde{p} = (p,p_{0})\) is the extended adjoint vector. One has the condition \(\langle \tilde{p},\tilde{E}'{}_{u}^{x_{0},T}(v)\rangle = 0\) for every perturbation v, where \(\tilde{{E}}^{x_{0},T}\) is the cost-extended end-point mapping.
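
A minimal worked example (illustrative, not from the entry): for \(\frac{dx}{dt} = u\) with cost \(\int_{0}^{T}{u}^{2}dt\), the cost-extended pseudo-Hamiltonian is \(\tilde{H} = pu + p_{0}{u}^{2}\). In the normal case \(p_{0}\neq 0\) the stationarity condition gives \(u = -p/(2p_{0})\) with \(\dot{p} = 0\), so extremals are straight lines; in the abnormal case \(p_{0} = 0\) the same condition forces p = 0, contradicting \(\tilde{p}\neq 0\), so this problem has no abnormal extremals. The sympy sketch below carries out both cases.

    # Cost-extended pseudo-Hamiltonian for dx/dt = u, min int u^2 dt (illustration).
    import sympy as sp

    x, p, p0, u = sp.symbols('x p p0 u')
    H_tilde = p * u + p0 * u**2            # <p, F> + p0 * L with F = u and L = u^2

    stationarity = sp.diff(H_tilde, u)     # p + 2*p0*u
    print(sp.solve(stationarity, u))       # normal case p0 != 0: u = -p/(2*p0)
    print(stationarity.subs(p0, 0))        # abnormal case: p = 0, contradicting (p, p0) != 0
    print(-sp.diff(H_tilde, x))            # adjoint equation dp/dt = 0: p, hence u, is constant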

The Role of Singular Extremals in Optimal Control

While singular extremals are traditionally treated in optimization as a pathology, in modern optimal control they play an important role, illustrated here by two examples from geometric optimal control.

Singular Trajectories in Quantum Control

Up to a normalization (Lapert et al. 2010), the time minimization saturation problem is to steer in minimum time the magnetization vector M = (x, y, z) from the north pole of the Bloch Ball N = (0, 0, 1) to its center O = (0, 0, 0). The evolution of the system is described by the Bloch equation in nuclear magnetic resonance (Levitt 2008)
$$\displaystyle\begin{array}{rcl} \frac{dx} {dt} = -\Gamma x + u_{2}z& & \\ \frac{dy} {dt} = -\Gamma y - u_{1}z& & \\ \frac{dz} {dt} =\gamma (1 - z) + u_{1}y - u_{2}x& & \\ \end{array}$$
where (Γ, γ) are proportional to the inverses of the relaxation times and u = (u_1, u_2) is the radio-frequency magnetic field control, bounded according to | u | ≤ M. Thanks to the symmetry of revolution about the z-axis, the problem can be restricted to the 2D single-input case
$$\frac{dy} {dt} = -\Gamma y - uz, \frac{dz} {dt} =\gamma (1 - z) + uy$$
that can be written as \(\frac{dq} {dt} = F(q) + uG(q)\).
According to the maximum principle, the time-optimal solutions are concatenations of regular (bang) extremals, for which \(u(t) = M\mathrm{sign}\langle p(t),G(q(t))\rangle\), and singular arcs, where \(\langle p(t),G(q(t))\rangle = 0\) for all t and p(t) is a solution of the adjoint system. Differentiating with respect to time and using the Lie bracket notation \([X,Y ](q) = \frac{\partial X} {\partial q} (q)Y (q) -\frac{\partial Y } {\partial q} (q)X(q)\), we get
$$\displaystyle\begin{array}{rcl} \langle p,[G,F](q)\rangle = 0,& & \\ \langle p,[[G,F],F](q)\rangle + u\langle p,[[G,F],G](q)\rangle = 0.& & \\ \end{array}$$
This leads to two singular arcs (recovered symbolically in the sketch after the list):
  • The vertical line y = 0, corresponding to the z-axis of revolution

  • The horizontal line \(z = \frac{\gamma } {2(\gamma -\Gamma )}\)

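The two singular lines can be recovered by a short symbolic computation (a sympy sketch, with names chosen for illustration): in the plane, the conditions \(\langle p,G(q)\rangle =\langle p,[G,F](q)\rangle = 0\) with p ≠ 0 force G(q) and [G, F](q) to be collinear, i.e., det(G(q), [G,F](q)) = 0, whose zero set is exactly the union of the two lines above.

    # Singular lines of the reduced Bloch system dq/dt = F(q) + u G(q), q = (y, z)
    # (sympy sketch, illustrative).
    import sympy as sp

    y, z, Gamma, gamma = sp.symbols('y z Gamma gamma', real=True)
    q = sp.Matrix([y, z])
    F = sp.Matrix([-Gamma*y, gamma*(1 - z)])
    G = sp.Matrix([-z, y])

    def bracket(X, Y):
        """Lie bracket [X, Y] = (dX/dq) Y - (dY/dq) X, same convention as in the text."""
        return X.jacobian(q) * Y - Y.jacobian(q) * X

    GF = bracket(G, F)
    # <p, G> = <p, [G, F]> = 0 with p != 0 in the plane forces det(G, [G, F]) = 0.
    det = sp.factor(sp.Matrix.hstack(G, GF).det())
    print(det)                          # y*(gamma + 2*z*(Gamma - gamma))
    print(sp.solve(sp.Eq(det, 0), z))   # z = gamma/(2*(gamma - Gamma)) on the set y != 0
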
The physically interesting case occurs when 2Γ > 3γ, for which the horizontal singular line satisfies \(-1 < \frac{\gamma } {2(\gamma -\Gamma )} < 0\), i.e., lies strictly inside the Bloch ball. In this case, the time-minimal solution is shown in Fig. 1. Figure 2 shows the experimental solution in the deoxygenated blood case, compared with the standard inversion recovery sequence.

Fig. 1

The computed optimal solution is the concatenation of a bang arc \(\sigma _{M}\) with the horizontal singular arc \(\sigma _{sh}\), followed by a bang arc and finally the vertical singular arc \(\sigma _{sv}\)

Fig. 2

Experimental result. Usual inversion sequence in green, optimal computed sequence in blue

Abnormal Extremals in SR Geometry

Sub-Riemannian geometry was introduced by R.W. Brockett as a generalization of Riemannian geometry (Brockett 1982; Montgomery 2002) with many applications in control (for instance, in motion planning (Bellaiche et al. 1998; Gauthier and Zakalyukin 2006) and quantum control). Its formulation in the framework of control theory is
$$\dot{q}(t) =\displaystyle\sum _{ i=1}^{m}u_{ i}(t)F_{i}(q(t)),\;\;\min _{u(.)}\displaystyle\int _{0}^{T}\displaystyle\sum _{ i=1}^{m}u_{ i}^{2}(t)\,dt$$
where q belongs to an open set U of \({\mathbb{R}}^{n}\), m < n, and \(F_{1},\cdots \,,F_{m}\) are smooth vector fields which form an orthonormal basis of the distribution they generate.
According to the maximum principle, normal extremals are solutions of the Hamiltonian vector field \(\vec{H}_{n}\), with \(H_{n} = \frac{1} {2}(\sum _{i=1}^{m}H_{ i}{(q,p)}^{2})\) and \(H_{i} =\langle p,F_{i}(q)\rangle\) for i = 1, ⋯, m. Again, abnormal extremals can be computed by differentiating the constraints H_i = 0 along the extremals. Their first occurrence takes place in the so-called Martinet flat case: \(n = 3,m = 2\), where F_1, F_2 are given by
$$F_{1} = \frac{\partial } {\partial x} + \frac{{y}^{2}} {2} \frac{\partial } {\partial z},F_{2} = \frac{\partial } {\partial y}$$
where q = (x, y, z) belongs to a neighborhood U of the origin and the metric is given by \(d{s}^{2} = d{x}^{2} + d{y}^{2}\). The singular trajectories are contained in the Martinet plane y = 0 and are the lines z = z_0. An easy computation shows that they are optimal for the problem. Figure 3 illustrates the role of the singular trajectories in the computation of the sphere of small radius centered at the origin, intersected with the Martinet plane.
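
The Martinet structure can be checked by a direct bracket computation (a sympy sketch, illustrative, not from the entry): with the convention used above, \([F_{1},F_{2}] = y\,\partial /\partial z\) vanishes on the Martinet plane y = 0, and the second-order bracket \([[F_{1},F_{2}],F_{2}] = \partial /\partial z\) is needed there to span \({\mathbb{R}}^{3}\); the singular trajectories are the integral curves of \(\pm F_{1}\) contained in {y = 0}, i.e., the lines z = z_0.

    # Bracket computation for the Martinet flat case (sympy sketch, illustrative):
    #   F1 = d/dx + (y^2/2) d/dz,  F2 = d/dy,  q = (x, y, z).
    import sympy as sp

    x, y, z = sp.symbols('x y z', real=True)
    q = sp.Matrix([x, y, z])
    F1 = sp.Matrix([1, 0, y**2/2])
    F2 = sp.Matrix([0, 1, 0])

    def bracket(X, Y):
        """[X, Y] = (dX/dq) Y - (dY/dq) X, same convention as in the text."""
        return X.jacobian(q) * Y - Y.jacobian(q) * X

    F12 = bracket(F1, F2)
    print(F12.T)                   # (0, 0, y): the bracket vanishes on the Martinet plane y = 0
    print(bracket(F12, F2).T)      # (0, 0, 1): the second-order bracket spans the missing direction
    # The singular trajectories are the curves x(t) = x0 +/- t (u1 = +/-1, u2 = 0), y = 0, z = z0,
    # i.e., the lines z = z0 of the Martinet plane.
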
Fig. 3

Projection of the SR sphere onto the xz-plane. The singular line is x = t, y = z = 0, and the picture shows the pinching of the SR sphere in the singular direction

Summary and Future Directions

Singular trajectories play an important role in many optimal control problems, such as quantum control and cancer therapy (Schättler and Ledzewicz 2012). They have to be carefully analyzed in any application; in particular, Boscain and Piccoli (2004) provide a classification of optimal syntheses with singular arcs for single-input systems in two dimensions.

Additionally, from a theoretical point of view, singular trajectories can be used to compute feedback invariants for nonlinear systems (Bonnard and Chyba 2003). A related, purely mathematical problem is the classification of the distributions describing the nonholonomic constraints in sub-Riemannian geometry (Montgomery 2002).

Bibliography

  1. Agrachev A, Sarychev AV (1998) On abnormal extremals for Lagrange variational problems. J Math Syst Estim Control 8(1):87–118
  2. Agrachev A, Bonnard B, Chyba M, Kupka I (1997) Sub-Riemannian sphere in Martinet flat case. ESAIM Control Optim Calc Var 2:377–448
  3. Bellaiche A, Jean F, Risler JJ (1998) Geometry of nonholonomic systems. In: Laumond JP (ed) Robot motion planning and control. Lecture notes in control and information sciences, vol 229. Springer, London, pp 55–91
  4. Bliss G (1946) Lectures on the calculus of variations. University of Chicago Press, Chicago, ix+296pp
  5. Bloch A (2003) Nonholonomic mechanics and control. Interdisciplinary applied mathematics, vol 24. Springer, New York, xix+484pp
  6. Bonnard B, Chyba M (2003) Singular trajectories and their role in control theory. Mathématiques & applications, vol 40. Springer, Berlin, xvi+357pp
  7. Bonnard B, Cots O, Glaser S, Lapert M, Sugny D, Zhang Y (2012) Geometric optimal control of the contrast imaging problem in nuclear magnetic resonance. IEEE Trans Autom Control 57(8):1957–1969
  8. Boscain U, Piccoli B (2004) Optimal syntheses for control systems on 2-D manifolds. Mathématiques & applications, vol 43. Springer, Berlin, xiv+261pp
  9. Brockett RW (1982) Control theory and singular Riemannian geometry. In: New directions in applied mathematics. Springer, New York/Berlin, pp 11–27
  10. Gauthier JP, Zakalyukin V (2006) On the motion planning problem, complexity, entropy, and nonholonomic interpolation. J Dyn Control Syst 12(3):371–404
  11. Lapert M, Zhang Y, Braun M, Glaser SJ, Sugny D (2010) Singular extremals for the time-optimal control of dissipative spin 1/2 particles. Phys Rev Lett 104:083001
  12. Lapert M, Zhang Y, Janich M, Glaser SJ, Sugny D (2012) Exploring the physical limits of saturation contrast in magnetic resonance imaging. Sci Rep 2:589
  13. Lee EB, Markus L (1967) Foundations of optimal control theory. Wiley, New York/London/Sydney, x+576pp
  14. Levitt MH (2008) Spin dynamics: basics of nuclear magnetic resonance, 2nd edn. Wiley, Chichester/Hoboken, 744pp
  15. Montgomery R (2002) A tour of subriemannian geometries, their geodesics and applications. Mathematical surveys and monographs, vol 91. American Mathematical Society, Providence, xx+259pp
  16. Pontryagin LS, Boltyanskii VG, Gamkrelidze RV, Mishchenko EF (1962) The mathematical theory of optimal processes (translated from the Russian by Trirogoff KN; edited by Neustadt LW). Wiley Interscience, New York/London, viii+360pp
  17. Schättler H, Ledzewicz U (2012) Geometric optimal control: theory, methods and examples. Interdisciplinary applied mathematics, vol 38. Springer, New York, xx+640pp

Copyright information

© Springer-Verlag London 2014

Authors and Affiliations

  1. Institute of Mathematics, University of Burgundy, Dijon, France