Long-Run Rewards for Markov Automata

  • Yuliya Butkova
  • Ralf Wimmer
  • Holger Hermanns
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10206)


Markov automata are a powerful formalism for modelling systems that exhibit nondeterminism, probabilistic choices and continuous stochastic timing. We consider the computation of long-run average rewards, the most classical problem in continuous-time Markov model analysis. We propose an algorithm based on value iteration that improves the state of the art by orders of magnitude. The contribution is rooted in a fresh look at Markov automata, namely treating them as an efficient encoding of CTMDPs with – in the worst case – exponentially more transitions.
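To illustrate the classical machinery the paper builds on, the following is a minimal sketch of relative value iteration for the long-run average reward of a finite (e.g. uniformised) MDP. It is an assumption-laden illustration of the textbook scheme, not the paper's MA-specific algorithm; the function name, array shapes and the unichain assumption are ours.

```python
import numpy as np

def long_run_average_reward(P, r, eps=1e-9, max_iter=1_000_000):
    """Relative value iteration (maximising) for a finite unichain MDP.

    P[a, s, t]: probability of moving from state s to t under action a.
    r[a, s]:    immediate reward for taking action a in state s.
    Returns an approximation of the optimal long-run average reward.
    Illustrative textbook sketch only, not the paper's MA algorithm.
    """
    n_actions, n_states, _ = P.shape
    h = np.zeros(n_states)                 # relative value vector
    for _ in range(max_iter):
        q = r + P @ h                      # Bellman backup, shape (a, s)
        t_h = q.max(axis=0)                # best action value per state
        diff = t_h - h
        # The span of (T h - h) brackets the optimal gain.
        if diff.max() - diff.min() < eps:
            return 0.5 * (diff.max() + diff.min())
        h = t_h - t_h[0]                   # pin a reference state to 0
    raise RuntimeError("value iteration did not converge")
```

For example, a two-state MDP where each state can either be kept (rewards 1 and 2, respectively) or left yields an optimal long-run average reward of 2, attained by moving to and staying in the second state.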



Copyright information

© Springer-Verlag GmbH Germany 2017

Authors and Affiliations

  1. Saarland University, Saarbrücken, Germany
  2. Albert-Ludwigs-Universität Freiburg, Freiburg im Breisgau, Germany
