The Geometry of the Solution Set of Nonlinear Optimal Control Problems
In an optimal control problem one seeks a time-varying input to a dynamical system that stabilizes a given target trajectory while minimizing a particular cost function. That is, for any initial condition, one tries to find a control that drives the state to this target trajectory in the cheapest way. We consider the inverted pendulum on a moving cart as an ideal example for investigating the solution structure of a nonlinear optimal control problem. Since the dimension of the pendulum system is small, it is possible to use illustrations that enhance the understanding of the geometry of the solution set. We are interested in the value function, that is, the optimal cost associated with each initial condition, as well as the control input that achieves this optimum. We consider different representations of the value function by including both globally and locally optimal solutions. Via Pontryagin's maximum principle, we relate the optimal control inputs to trajectories on the smooth stable manifold of a Hamiltonian system. By combining these results we can make firm statements regarding the existence and smoothness of the solution set.
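Near the upright equilibrium, the local behavior described above can be sketched with a linear-quadratic approximation: linearize the cart-pendulum dynamics, solve the continuous-time algebraic Riccati equation, and obtain a quadratic approximation of the value function together with the optimal linear feedback. The parameter values and the quadratic cost below are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch: local quadratic approximation of the value function for the
# inverted pendulum on a cart via LQR. Masses, length, and cost weights are
# assumed for illustration only.
import numpy as np
from scipy.linalg import solve_continuous_are

M, m, l, g = 1.0, 0.1, 0.5, 9.81  # cart mass, pendulum mass, length, gravity (assumed)

# Linearization about the upright equilibrium;
# state x = (cart position, pendulum angle, cart velocity, angular velocity).
A = np.array([[0.0, 0.0,             1.0, 0.0],
              [0.0, 0.0,             0.0, 1.0],
              [0.0, -m*g/M,          0.0, 0.0],
              [0.0, (M + m)*g/(M*l), 0.0, 0.0]])
B = np.array([[0.0], [0.0], [1.0/M], [-1.0/(M*l)]])

Q, R = np.eye(4), np.array([[1.0]])   # quadratic running cost x'Qx + u'Ru (assumed)
P = solve_continuous_are(A, B, Q, R)  # V(x) ~ x'Px near the equilibrium
K = np.linalg.solve(R, B.T @ P)       # optimal linear feedback u = -Kx

assert np.allclose(P, P.T)                            # P is symmetric
assert np.all(np.linalg.eigvalsh(P) > 0)              # ...and positive definite
assert np.all(np.linalg.eigvals(A - B @ K).real < 0)  # closed loop is stable
```

The Riccati solution gives only the local quadratic piece of the value function; the global picture studied in the paper comes from following the stable manifold of the associated Hamiltonian system away from the equilibrium.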
Keywords: Hamilton–Jacobi–Bellman equation · Hamiltonian system · optimal control · Pontryagin's maximum principle · global stable manifolds
AMS Subject Classifications: 49J40, 49K20, 49L20, 53D12, 65P10