Markov Chains and Markov Decision Processes in Isabelle/HOL


DOI: 10.1007/s10817-016-9401-5

Cite this article as:
Hölzl, J.: Markov Chains and Markov Decision Processes in Isabelle/HOL. J. Autom. Reasoning (2016). doi:10.1007/s10817-016-9401-5


This paper presents an extensive formalization of Markov chains (MCs) and Markov decision processes (MDPs), with discrete time and (possibly infinite) discrete state spaces. The formalization takes a coalgebraic view of the transition systems representing MCs and constructs their trace spaces. On these trace spaces, properties such as fairness, reachability, and stationary distributions are formalized. Like MCs, MDPs are represented as transition systems with a construction for trace spaces. These trace spaces provide maximal and minimal expectations over all possible non-deterministic decisions. As applications, we provide a certifier for finite reachability problems and we relate the denotational and operational semantics of the probabilistic guarded command language. A distinctive feature of our formalization is the order-theoretic and coalgebraic view of our concepts: we view transition systems as coalgebras, we view traces as coinductive streams, we provide iterative computation rules for expectations, and we define many properties on traces as least or greatest fixed points.
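The least-fixed-point view of reachability mentioned in the abstract can be illustrated concretely. The sketch below is not the paper's Isabelle/HOL formalization; it is a small Python approximation on an invented finite Markov chain, where the reachability probabilities arise as the least fixed point of the usual Bellman-style operator, approximated by iterating from the bottom element (all zeros).

```python
# Hypothetical 3-state Markov chain (states 0, 1, 2).
# P[s] maps each successor state to its transition probability.
P = {
    0: {0: 0.5, 1: 0.25, 2: 0.25},
    1: {1: 1.0},  # absorbing "failure" state
    2: {2: 1.0},  # absorbing goal state
}
GOAL = {2}

def reach_prob(P, goal, iters=200):
    """Approximate the least fixed point of
         x[s] = 1                        if s in goal
         x[s] = sum_t P[s][t] * x[t]     otherwise
    by Kleene iteration starting from the all-zero vector."""
    x = {s: 0.0 for s in P}
    for _ in range(iters):
        x = {s: 1.0 if s in goal
             else sum(p * x[t] for t, p in P[s].items())
             for s in P}
    return x

probs = reach_prob(P, GOAL)
# From state 0, the fixed point satisfies x0 = 0.5*x0 + 0.25, so x0 = 0.5.
```

Starting from the bottom element matters: iterating from all ones would converge to a different (non-least) fixed point on chains with states that can neither reach the goal nor leave a loop, which is exactly why the paper's order-theoretic framing characterizes reachability as a *least* fixed point.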


Keywords: Markov chains · Markov decision processes · Probabilistic guarded command language · Probabilistic model checking · Isabelle/HOL

Funding information

Funder: Deutsche Forschungsgemeinschaft, Grant NI 491/15-1

Copyright information

© Springer Science+Business Media Dordrecht 2016

Authors and Affiliations

  1. Department of Informatics, Technical University of Munich, Garching, Germany
