Discrete-Time Markov Models
Consider a system that is observed at times 0, 1, 2, .... Let X_n be the state of the system at time n for n = 0, 1, 2, .... Suppose we are currently at time n = 10; that is, we have observed X_0, X_1, ..., X_10. The question is: can we predict, in a probabilistic way, the state of the system at time 11? In general, X_11 depends (in a possibly random fashion) on X_0, X_1, ..., X_10. Considerable simplification occurs if, given the complete history X_0, X_1, ..., X_10, the next state X_11 depends only upon X_10. That is, as far as predicting X_11 is concerned, the knowledge of X_0, X_1, ..., X_9 is redundant if X_10 is known. If the system has this property at all times n (and not just at n = 10), it is said to have the Markov property. (This is in honor of Andrey Markov, who, in the early 1900s, first studied stochastic processes with this property.) We start with a formal definition below.
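The Markov property described above can be illustrated with a short simulation. The sketch below uses a two-state weather model (a recurring example in this setting) with a hypothetical transition probability matrix P, where P[i][j] is the probability of moving from state i to state j; the specific probabilities are illustrative, not taken from the text. Note that the next-state function consumes only the current state X_n, never the earlier history.

```python
import random

# Two-state weather chain: state 0 = "sunny", state 1 = "rainy".
# Hypothetical transition matrix: P[i][j] = P(X_{n+1} = j | X_n = i).
P = [[0.8, 0.2],
     [0.4, 0.6]]

def next_state(current):
    """Sample X_{n+1} given only X_n -- the earlier history is irrelevant."""
    return 0 if random.random() < P[current][0] else 1

def simulate(x0, steps, seed=0):
    """Generate the path X_0, X_1, ..., X_steps starting from state x0."""
    random.seed(seed)
    path = [x0]
    for _ in range(steps):
        path.append(next_state(path[-1]))
    return path

print(simulate(0, 10))
```

Because each step depends only on the last state, the simulation never needs to store more than the most recent entry of the path; the full history is kept here only for display.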
Keywords: Stationary Distribution; Transition Probability Matrix; Conceptual Problem; Transition Diagram; Weather Model