Discrete-Time Markov Models

Part of the Springer Texts in Statistics book series (STS)


Consider a system that is observed at times 0, 1, 2, .... Let X_n be the state of the system at time n for n = 0, 1, 2, .... Suppose we are currently at time n = 10; that is, we have observed X_0, X_1, ..., X_10. The question is: can we predict, in a probabilistic way, the state of the system at time 11? In general, X_11 depends (in a possibly random fashion) on X_0, X_1, ..., X_10. Considerable simplification occurs if, given the complete history X_0, X_1, ..., X_10, the next state X_11 depends only upon X_10. That is, as far as predicting X_11 is concerned, the knowledge of X_0, X_1, ..., X_9 is redundant if X_10 is known. If the system has this property at all times n (and not just at n = 10), it is said to have the Markov property. (This is in honor of Andrey Markov, who first studied stochastic processes with this property in the early 1900s.) We start with a formal definition below.
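The Markov property described above can be illustrated with a small simulation sketch. The two-state chain and its transition matrix below are purely illustrative choices (not from the text): note that the `step` function consults only the current state, never the earlier history.

```python
import random

# Hypothetical two-state chain (0 = "dry", 1 = "wet"); the transition
# matrix P is an illustrative assumption, not taken from the chapter.
P = [[0.8, 0.2],   # P[i][j] = probability of moving from state i to state j
     [0.4, 0.6]]

def step(state, rng):
    """Draw the next state using only the current state -- the Markov property."""
    return 0 if rng.random() < P[state][0] else 1

def simulate(n, start=0, seed=42):
    """Simulate X_0, X_1, ..., X_n starting from the given state."""
    rng = random.Random(seed)
    path = [start]
    for _ in range(n):
        path.append(step(path[-1], rng))
    return path

path = simulate(10)  # the observed history X_0, ..., X_10
```

Predicting X_11 from this history requires only `path[-1]`; the earlier values are redundant, exactly as the text describes.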





Copyright information

© Springer Science+Business Media, LLC 2011

Authors and Affiliations

  1. Department of Statistics and Operations Research, University of North Carolina, Chapel Hill, USA
