Abstract
We turn now to the continuous time version of the Markov property. Some of the simplicity of Chapter 2 is retained, because we assume the state space S is discrete; usually we can suppose that S = {0, 1, …}. The succession of states visited still follows a discrete parameter Markov chain, but now the flow of time is perturbed by exponentially distributed holding times in each state. An easy generalization of the dissection argument of Chapter 2 shows that the process regenerates at return times to a fixed reference state, so renewal theory and regenerative processes are useful.
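The construction described above — an embedded discrete parameter jump chain whose visits are stretched by exponential holding times — can be sketched in code. This is a minimal illustrative simulation, not taken from the chapter; the generator matrix `Q` below is a hypothetical 3-state example, and in state i the chain waits an exponential time with rate -Q[i][i] and then jumps to j ≠ i with probability Q[i][j]/(-Q[i][i]).

```python
import random

# Hypothetical generator (Q-matrix) for a 3-state chain; rows sum to 0.
# These rates are illustrative only.
Q = [
    [-2.0,  1.0,  1.0],
    [ 0.5, -1.0,  0.5],
    [ 1.0,  1.0, -2.0],
]

def simulate_ctmc(Q, start, t_max, seed=0):
    """Simulate a continuous time Markov chain by the jump-chain /
    holding-time construction: in state i, hold for an Exp(-Q[i][i])
    time, then move according to the embedded discrete Markov chain."""
    rng = random.Random(seed)
    t, state = 0.0, start
    path = [(t, state)]          # (jump time, state entered)
    while True:
        rate = -Q[state][state]
        if rate == 0.0:          # absorbing state: no further jumps
            break
        t += rng.expovariate(rate)
        if t >= t_max:
            break
        # Embedded jump chain: choose j != i with weight Q[i][j].
        others = [j for j in range(len(Q)) if j != state]
        weights = [Q[state][j] for j in others]
        state = rng.choices(others, weights=weights)[0]
        path.append((t, state))
    return path

path = simulate_ctmc(Q, start=0, t_max=10.0)
```

Returns the sequence of jump times and states entered up to `t_max`; successive states always differ, since the holding time in each state is absorbed into the exponential clock rather than into self-jumps.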
Copyright information
© 2002 Springer Science+Business Media New York
Cite this chapter
Resnick, S.I. (2002). Continuous Time Markov Chains. In: Adventures in Stochastic Processes. Birkhäuser, Boston, MA. https://doi.org/10.1007/978-1-4612-0387-2_5
Publisher Name: Birkhäuser, Boston, MA
Print ISBN: 978-1-4612-6738-6
Online ISBN: 978-1-4612-0387-2
eBook Packages: Springer Book Archive