Applied Mathematics & Optimization, Volume 65, Issue 3, pp 403–439

First and Second Order Necessary Conditions for Stochastic Optimal Control Problems


Abstract

In this work we consider a stochastic optimal control problem with either convex control constraints or finitely many equality and inequality constraints on the final state. Using the variational approach, we obtain first and second order expansions of the state and cost function around a local minimum. This allows us to prove general first order necessary conditions and, under a geometrical assumption on the constraint set, to establish second order necessary conditions as well. We conclude by giving second order optimality conditions for problems with constraints on expectations of the final state.
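To fix ideas, the class of problems described in the abstract can be sketched as follows; the notation below is illustrative (the excerpt does not define the paper's symbols), using a standard Brownian-driven controlled diffusion:

```latex
\min_{u \in \mathcal{U}} \; J(u) := \mathbb{E}\!\left[ \int_0^T \ell\bigl(t, x(t), u(t)\bigr)\,\mathrm{d}t + \phi\bigl(x(T)\bigr) \right]
\quad \text{subject to} \quad
\mathrm{d}x(t) = f\bigl(t, x(t), u(t)\bigr)\,\mathrm{d}t + \sigma\bigl(t, x(t), u(t)\bigr)\,\mathrm{d}W(t), \qquad x(0) = x_0,
```

together with either a convex control constraint $u(t) \in U$ for a convex set $U$, or finitely many final-state constraints of the form $\mathbb{E}\bigl[g_i(x(T))\bigr] = 0$ ($i = 1, \dots, p$) and $\mathbb{E}\bigl[g_j(x(T))\bigr] \le 0$ ($j = p+1, \dots, q$), the latter corresponding to the constraints on expectations of the final state mentioned in the abstract.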

Keywords

Stochastic optimal control · Variational approach · First and second order optimality conditions · Polyhedric constraints · Final state constraints


Copyright information

© Springer Science+Business Media, LLC 2012

Authors and Affiliations

  1. INRIA-Saclay and CMAP, École Polytechnique, Palaiseau, France
  2. Laboratoire de Finance des Marchés d’Énergie, Paris, France
  3. Dipartimento di Matematica Guido Castelnuovo, Rome, Italy
