Journal of Optimization Theory and Applications, Volume 128, Issue 3, pp 499–521

Optimization Techniques for State-Constrained Control and Obstacle Problems


  • A. B. Kurzhanski
    • Department of Computational Mathematics and Cybernetics, Moscow State (Lomonosov) University
  • I. M. Mitchell
    • Department of Computer Science, University of British Columbia
  • P. Varaiya
    • Department of Electrical Engineering and Computer Science, University of California at Berkeley

DOI: 10.1007/s10957-006-9029-4

Cite this article as:
Kurzhanski, A.B., Mitchell, I.M. & Varaiya, P. J Optim Theory Appl (2006) 128: 499. doi:10.1007/s10957-006-9029-4


The design of control laws for systems subject to complex state constraints still presents a significant challenge. This paper explores a dynamic programming approach to a specific class of such problems, namely reachability under state constraints. The problems are formulated in terms of nonstandard minmax and maxmin cost functionals, and the corresponding value functions are characterized by Hamilton-Jacobi-Bellman (HJB) equations or variational inequalities. Solving these relations is complicated in general; however, for linear systems, the value functions may also be described in terms of duality relations of convex analysis and minmax theory. Consequently, solution techniques specific to systems with a linear structure may be designed independently of HJB theory. These techniques are illustrated through two examples.
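As a hedged illustration of the convex-duality viewpoint mentioned above (not the paper's own algorithm, which additionally handles state constraints via minmax functionals), the sketch below propagates the support function of the unconstrained reach set of a linear system x' = Ax + Bu with a unit-ball input bound, using a simple Euler discretization. The function name `reach_support` and all numerical choices are this sketch's assumptions.

```python
import numpy as np

def reach_support(A, B, x0, ell, T=1.0, N=100):
    """Support function rho(ell | X(T)) of the reach set at time T for
    x' = A x + B u, ||u||_2 <= 1, x(0) = x0, with no state constraints.

    Euler step h gives X_{k+1} = (I + hA) X_k (+) hB U (Minkowski sum), so
    rho(ell | X_N) = <l_0, x0> + h * sum_k ||B^T l_k||, where the direction
    is propagated backward: l_N = ell, l_k = (I + hA)^T l_{k+1}.
    """
    h = T / N
    M = np.eye(len(x0)) + h * A          # one Euler step matrix
    l = ell.astype(float)                # direction, propagated backward in time
    val = 0.0
    for _ in range(N):
        val += h * np.linalg.norm(B.T @ l)   # input ball's contribution
        l = M.T @ l                          # l_k = M^T l_{k+1}
    return float(l @ x0) + val

# Double integrator x1' = x2, x2' = u, started at the origin: in direction
# (0, 1) the support of X(1) is 1 (apply u = 1 throughout).
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
rho = reach_support(A, B, np.zeros(2), np.array([0.0, 1.0]))
```

Evaluating the support function over a grid of directions ell yields an outer polyhedral approximation of the reach set, which is the kind of linear-structure technique that can bypass a direct HJB solve.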


Keywords: Nonlinear systems · Control synthesis · State constraints · Obstacle problems · Dynamic programming · Variational inequalities · Convex analysis


Copyright information

© Springer Science+Business Media, Inc. 2006