Abstraction-Based Computation of Reward Measures for Markov Automata

  • Bettina Braitling
  • Luis María Ferrer Fioriti
  • Hassan Hatefi
  • Ralf Wimmer
  • Bernd Becker
  • Holger Hermanns
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8931)


Markov automata allow us to model a wide range of complex real-life systems by combining continuous stochastic timing with probabilistic transitions and nondeterministic choices. By adding a reward function, it is also possible to model costs such as the energy consumption of a system.
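The ingredients named above can be made concrete in a small sketch. The encoding below is an assumed, simplified representation (the class and field names are hypothetical, not from the paper): Markovian states carry exponentially distributed delays given by rates, probabilistic states offer a nondeterministic choice among distributions, and rewards are attached to states as gain per time unit.

```python
from dataclasses import dataclass, field

@dataclass
class MRA:
    """Minimal, hypothetical encoding of a Markov reward automaton."""
    states: set
    # probabilistic actions: state -> list of distributions over successors
    prob: dict = field(default_factory=dict)
    # Markovian transitions: state -> {successor: exponential rate}
    rates: dict = field(default_factory=dict)
    # state_reward[s] is gained per time unit spent in s
    state_reward: dict = field(default_factory=dict)

    def exit_rate(self, s):
        """Total exponential exit rate of a Markovian state."""
        return sum(self.rates.get(s, {}).values())

# A toy two-state model: s0 is Markovian (rate 2 to s1); s1 offers a
# nondeterministic choice between two distributions.
m = MRA(
    states={"s0", "s1"},
    prob={"s1": [{"s0": 1.0}, {"s0": 0.5, "s1": 0.5}]},
    rates={"s0": {"s1": 2.0}},
    state_reward={"s0": 3.0, "s1": 0.0},
)
print(m.exit_rate("s0"))  # → 2.0
```

The sojourn time in s0 is exponentially distributed with the exit rate printed above, so the expected reward accumulated in s0 per visit is state_reward / exit_rate.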

However, models of real-life systems tend to be large, and analysis methods for powerful models such as Markov (reward) automata do not scale well, which limits their applicability. To address this problem we present an abstraction technique for Markov reward automata, based on stochastic games, together with automatic refinement methods for computing time-bounded accumulated reward properties. Experiments show a significant speed-up and a reduction in system size compared to direct analysis methods.
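The core idea of a game-based abstraction with refinement can be sketched as follows. This is an assumed, heavily simplified illustration (all names are hypothetical): states are grouped into blocks, and letting a second player resolve the choice of concrete state inside a block pessimistically or optimistically brackets the concrete value from below and above; blocks whose gap is still too wide are candidates for splitting in the next refinement round.

```python
def block_bounds(values, partition):
    """Bracket the concrete state values within each abstract block."""
    return {b: (min(values[s] for s in block),
                max(values[s] for s in block))
            for b, block in partition.items()}

def blocks_to_refine(bounds, eps):
    """Blocks whose lower/upper gap still exceeds the tolerance."""
    return [b for b, (lo, hi) in bounds.items() if hi - lo > eps]

# Toy data: per-state values and a two-block partition.
values = {"s0": 0.2, "s1": 0.9, "s2": 0.85}
partition = {"B0": ["s0", "s1"], "B1": ["s2"]}

bounds = block_bounds(values, partition)
print(bounds["B0"])               # → (0.2, 0.9): wide gap
print(blocks_to_refine(bounds, eps=0.1))  # → ['B0']
```

When the gap of every block is below the tolerance, the abstract game's bounds certify the concrete value up to that precision, which is what makes the abstraction sound despite analysing a much smaller system.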


Keywords: Markov Decision Process, Reward Function, Stochastic Game, Markovian Jump, Labelling Function





Copyright information

© Springer-Verlag Berlin Heidelberg 2015

Authors and Affiliations

  • Bettina Braitling (1)
  • Luis María Ferrer Fioriti (2)
  • Hassan Hatefi (2)
  • Ralf Wimmer (1)
  • Bernd Becker (1)
  • Holger Hermanns (2)
  1. Albert-Ludwigs-Universität Freiburg, Germany
  2. Saarland University, Saarbrücken, Germany
