Differential Equations and Dynamical Systems

Volume 25, Issue 1, pp 83–100

Abel-type Results for Controlled Piecewise Deterministic Markov Processes

  • Dan Goreac
  • Oana-Silvia Serea
Original Research

Abstract

In this short paper we prove that, in the framework of continuous control problems for piecewise deterministic Markov processes, the existence of a uniform limit of the discounted value functions as the discount factor vanishes implies, without any further assumption, the uniform convergence of the value functions with long-run average cost as the time horizon increases to infinity. The two limit values coincide. We also provide a converse Tauberian result for a particular class of systems with a Poisson-triggered jump mechanism. Finally, we exhibit a very simple example in which the dynamics are not dissipative and yet the discounted values converge uniformly to a non-constant limit function.
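To fix the ideas behind the abstract, here is a minimal sketch, in standard vanishing-discount notation, of the two value functions being compared; the running cost \(\ell\), the controlled trajectory \(X^{x,u}\) and the \(\lambda\)-normalisation of the discounted value are illustrative conventions and are not quoted from the paper itself.

\[
  v_{\lambda}(x) \;=\; \inf_{u(\cdot)} \, \mathbb{E}\!\left[ \int_{0}^{\infty} \lambda e^{-\lambda t}\, \ell\big(X^{x,u}_{t}, u_{t}\big)\, dt \right],
  \qquad
  V_{T}(x) \;=\; \inf_{u(\cdot)} \, \mathbb{E}\!\left[ \frac{1}{T} \int_{0}^{T} \ell\big(X^{x,u}_{t}, u_{t}\big)\, dt \right].
\]

In this notation the Abelian direction stated in the abstract reads
\[
  v_{\lambda} \xrightarrow[\lambda \to 0^{+}]{} v \ \text{uniformly}
  \quad \Longrightarrow \quad
  V_{T} \xrightarrow[T \to \infty]{} v \ \text{uniformly},
\]
with the same limit function \(v\) on both sides, while the Tauberian converse, obtained for the Poisson-triggered class, passes from the second limit back to the first.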

Keywords

Piecewise deterministic Markov processes · Long run average · Abelian/Tauberian theorem · Optimal control · Viscosity solutions

Mathematics Subject Classification

49L25 · 60J25 · 93E20

Acknowledgments

The work of the first author has been partially supported by the French National Research Agency project PIECE (ANR-12-JS01-0006).


Copyright information

© Foundation for Scientific Research and Technological Innovation 2015

Authors and Affiliations

  1. Université Paris-Est, LAMA (UMR 8050), UPEMLV, UPEC, CNRS, Marne-la-Vallée, France
  2. Laboratoire de Mathématiques et Physique, Univ. Perpignan Via Domitia, Perpignan, France
