Journal of Optimization Theory and Applications, Volume 128, Issue 3, pp. 499–521

Optimization Techniques for State-Constrained Control and Obstacle Problems

  • A. B. Kurzhanski
  • I. M. Mitchell
  • P. Varaiya

Abstract

The design of control laws for systems subject to complex state constraints still presents a significant challenge. This paper explores a dynamic programming approach to a specific class of such problems, namely reachability under state constraints. The problems are formulated in terms of nonstandard minmax and maxmin cost functionals, and the corresponding value functions are characterized by Hamilton-Jacobi-Bellman (HJB) equations or variational inequalities. Solving these relations is complicated in general; for linear systems, however, the value functions may also be described through duality relations of convex analysis and minmax theory. Consequently, solution techniques specific to systems with a linear structure may be designed independently of HJB theory. These techniques are illustrated through two examples.
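For orientation, the following LaTeX sketch shows the typical shape such relations take in the state-constrained reachability literature: a minmax cost functional and the associated HJB variational inequality. The notation (terminal cost l, constraint function g, dynamics f) is introduced here purely for illustration and is not taken verbatim from the paper.

% Illustrative sketch of a minmax reachability cost under a state
% constraint and its HJB variational inequality (assumed standard
% notation, not the paper's exact formulation).
\documentclass{article}
\usepackage{amsmath}
\begin{document}
Value function for steering $x(\cdot)$, $\dot{x}=f(x,u)$, toward a target
$\{x : l(x)\le 0\}$ while respecting a state constraint $\{x : g(x)\le 0\}$:
\begin{equation*}
  V(t,x) \;=\; \min_{u(\cdot)}\,
  \max\Bigl\{\, l\bigl(x(t_1)\bigr),\;
  \max_{s\in[t,t_1]} g\bigl(x(s)\bigr) \Bigr\}.
\end{equation*}
The corresponding HJB relation is a variational inequality rather than a
single PDE:
\begin{equation*}
  \max\Bigl\{ \tfrac{\partial V}{\partial t}
  + \min_{u}\bigl\langle \nabla_x V,\, f(x,u)\bigr\rangle,\;
  g(x) - V \Bigr\} = 0,
  \qquad V(t_1,x) = \max\bigl\{ l(x),\, g(x) \bigr\}.
\end{equation*}
\end{document}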

Keywords

Nonlinear systems; control synthesis; state constraints; obstacle problems; dynamic programming; variational inequalities; convex analysis

Copyright information

© Springer Science+Business Media, Inc. 2006

Authors and Affiliations

  • A. B. Kurzhanski (1)
  • I. M. Mitchell (2)
  • P. Varaiya (3)
  1. Department of Computational Mathematics and Cybernetics, Moscow State (Lomonosov) University, Moscow, Russia
  2. Department of Computer Science, University of British Columbia, Vancouver, Canada
  3. Department of Electrical Engineering and Computer Science, University of California at Berkeley, Berkeley, USA
