Playing Stochastic Games Precisely
We study stochastic two-player games in which the goal of one player is to achieve precisely a given expected value of the objective function, while the goal of the opponent is the opposite. Potential applications of such games include controller synthesis problems where the optimisation objective is to maximise or minimise a given payoff function while respecting a strict upper or lower bound, respectively. We consider a number of objective functions, including reachability, ω-regular, discounted reward, and total reward. We show that precise value games are not determined, and we compare the memory requirements of winning strategies. For stopping games we establish necessary and sufficient conditions for the existence of a winning strategy of the controller for a large class of functions, and provide constructions of compact strategies for the studied objectives.
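To illustrate the flavour of the precise-value problem, the following is a minimal sketch in the special case of a one-player instance (an MDP) with a reachability objective. The toy model, state names, and probabilities are hypothetical and are not taken from the paper; the point is that once the minimal and maximal achievable expected values are computed, any value in between is achieved precisely by randomising between the two optimal strategies, by linearity of expectation.

```python
# Hypothetical toy MDP (not the paper's construction): states 0, 1, 2,
# where state 2 is the target and state 1 is a losing sink.
# In state 0 the controller picks action 'a' (reach target w.p. 0.9)
# or action 'b' (reach target w.p. 0.3).
trans = {
    0: {'a': [(2, 0.9), (1, 0.1)], 'b': [(2, 0.3), (1, 0.7)]},
}
TARGET, SINK = 2, 1

def reach_value(opt):
    """Value iteration for the optimal probability of reaching TARGET,
    where opt is max (maximising player) or min (minimising player)."""
    v = {0: 0.0, SINK: 0.0, TARGET: 1.0}
    for _ in range(100):
        v[0] = opt(sum(p * v[s] for s, p in succ)
                   for succ in trans[0].values())
    return v[0]

vmax, vmin = reach_value(max), reach_value(min)
# Any x in [vmin, vmax] is achieved *precisely* by playing the maximising
# strategy with probability (x - vmin) / (vmax - vmin) and the minimising
# strategy otherwise: the expected value is the corresponding mixture.
print(vmin, vmax)  # → 0.3 0.9
```

In the two-player games studied in the paper the situation is subtler: the opponent tries to push the expected value away from the target, and (as the abstract notes) precise value games are not determined, so this interval argument does not carry over directly.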