Playing Stochastic Games Precisely

  • Taolue Chen
  • Vojtěch Forejt
  • Marta Kwiatkowska
  • Aistis Simaitis
  • Ashutosh Trivedi
  • Michael Ummels
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7454)


We study stochastic two-player games where the goal of one player is to achieve precisely a given expected value of the objective function, while the goal of the opponent is the opposite. Potential applications for such games include controller synthesis problems where the optimisation objective is to maximise or minimise a given payoff function while respecting a strict upper or lower bound, respectively. We consider a number of objective functions, including reachability, ω-regular, discounted reward, and total reward. We show that precise value games are not determined, and we compare the memory requirements for winning strategies. For stopping games we establish necessary and sufficient conditions for the existence of a winning strategy of the controller for a large class of functions, and we provide constructions of compact strategies for the studied objectives.
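To give a feel for the precise-value objective, the following is a minimal sketch of the one-player (MDP) special case only, not the paper's two-player construction: for expected reachability in an MDP, the achievable values form the interval between the minimal and maximal values, so an initial coin flip between a minimising and a maximising strategy hits any target in between. All names and numbers below are hypothetical.

```python
# Hypothetical illustration of the one-player special case: mixing a
# minimising and a maximising strategy to achieve a precise expected value.

def mixing_coefficient(v_min, v_max, target):
    """Probability of following the maximising strategy so that the
    resulting expected value is exactly `target`."""
    if not (v_min <= target <= v_max):
        raise ValueError("target outside the achievable interval")
    if v_max == v_min:
        return 0.0
    return (target - v_min) / (v_max - v_min)

# Toy numbers: one strategy reaches the goal with probability 0.2,
# another with probability 0.9; we want exactly 0.55.
v_min, v_max, target = 0.2, 0.9, 0.55
lam = mixing_coefficient(v_min, v_max, target)
expected = lam * v_max + (1 - lam) * v_min
assert abs(expected - target) < 1e-9
```

Note that such a mixed strategy needs randomisation (or one bit of memory) even when optimal strategies for the pure maximisation and minimisation problems are deterministic and memoryless; the two-player case treated in the paper is harder, since the games are not determined.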





Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Taolue Chen (1)
  • Vojtěch Forejt (1)
  • Marta Kwiatkowska (1)
  • Aistis Simaitis (1)
  • Ashutosh Trivedi (2)
  • Michael Ummels (3)

  1. Department of Computer Science, University of Oxford, Oxford, UK
  2. University of Pennsylvania, Philadelphia, USA
  3. Technische Universität Dresden, Dresden, Germany