Abstract
This chapter deals with a particular class of stochastic models that form a cornerstone of this book. Stochastic models are widely used to describe phenomena that change randomly as time progresses. We focus on Markov chains, as simple and adequate models for many such phenomena. More precisely, we cover discrete- as well as continuous-time Markov chains. We discuss details of their analysis to set the ground for the requirements of later chapters, and introduce useful equivalence relations for both types of models. These relations are defined in the style of bisimilarity and are akin to the notion of lumpability on Markov chains. Furthermore, we present efficient algorithms to compute these relations, which, as a side result, can be used to compute the 'best possible' lumping of a given Markov chain.
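The lumping computation mentioned in the abstract is, at its core, a partition-refinement procedure. The following is an illustrative sketch, not the chapter's own algorithm: it refines an initial partition of a small discrete-time Markov chain until every block is stable, i.e. states in the same block have equal aggregate transition probability into each block. The state names, the transition structure `P`, and the function name are hypothetical examples.

```python
def coarsest_lumping(P, initial_partition):
    """Refine initial_partition until stable: states in one block must
    have equal aggregate probability of moving into every block."""
    partition = [set(b) for b in initial_partition]
    changed = True
    while changed:
        changed = False
        new_partition = []
        for block in partition:
            # Signature of a state: aggregate probability into each block.
            def signature(s):
                return tuple(sum(P[s].get(t, 0.0) for t in b)
                             for b in partition)
            groups = {}
            for s in block:
                groups.setdefault(signature(s), set()).add(s)
            if len(groups) > 1:
                changed = True  # block was split; refine again
            new_partition.extend(groups.values())
        partition = new_partition
    return partition

# Hypothetical 4-state chain: s1 and s2 behave identically, s3 does not.
P = {
    "s0": {"s1": 0.5, "s2": 0.5},
    "s1": {"s0": 1.0},
    "s2": {"s0": 1.0},
    "s3": {"s3": 1.0},
}
# Initial partition, e.g. induced by state labels: s0 vs. the rest.
blocks = coarsest_lumping(P, [{"s0"}, {"s1", "s2", "s3"}])
# s1 and s2 end up lumped together; s3 is split off.
```

The efficient algorithms the chapter presents improve on this naive quadratic refinement (in the spirit of Paige–Tarjan style splitter selection), but the fixed-point idea is the same.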
Copyright information
© 2002 Springer-Verlag Berlin Heidelberg
About this chapter
Cite this chapter
Hermanns, H. (2002). Markov Chains. In: Interactive Markov Chains. Lecture Notes in Computer Science, vol 2428. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-45804-2_3
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-44261-5
Online ISBN: 978-3-540-45804-3