# Discrete-Time Markov Models

## Abstract

Consider a system that is observed at times 0, 1, 2,.... Let *X* _{n} be the state of the system at time *n* for *n* = 0, 1, 2,.... Suppose we are currently at time *n* = 10. That is, we have observed *X* _{0}, *X* _{1},..., *X* _{10}. The question is: can we predict, in a probabilistic way, the state of the system at time 11? In general, *X* _{11} depends (in a possibly random fashion) on *X* _{0}, *X* _{1},..., *X* _{10}. Considerable simplification occurs if, given the complete history *X* _{0}, *X* _{1},..., *X* _{10}, the next state *X* _{11} depends only upon *X* _{10}. That is, as far as predicting *X* _{11} is concerned, the knowledge of *X* _{0}, *X* _{1},..., *X* _{9} is redundant if *X* _{10} is known. If the system has this property at all times *n* (and not just at *n* = 10), it is said to have the *Markov property*. (This is in honor of Andrey Markov, who, in the 1900s, first studied stochastic processes with this property.) We start with a formal definition below.
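The Markov property can be illustrated with a small simulation. The sketch below uses a hypothetical two-state weather model (the states and transition probabilities are illustrative assumptions, not taken from the text): note that the sampler for *X* _{n+1} looks only at *X* _{n}, never at the earlier history.

```python
import random

# Hypothetical two-state weather model (illustrative probabilities).
# P[i][j] = probability of moving from state i to state j in one step.
STATES = ["sunny", "rainy"]
P = {
    "sunny": {"sunny": 0.8, "rainy": 0.2},
    "rainy": {"sunny": 0.4, "rainy": 0.6},
}

def next_state(current, rng):
    """Sample X_{n+1} given only X_n -- the Markov property:
    the earlier history X_0, ..., X_{n-1} plays no role here."""
    probs = P[current]
    return rng.choices(list(probs), weights=list(probs.values()))[0]

def simulate(x0, n, seed=0):
    """Generate a sample path X_0, X_1, ..., X_n."""
    rng = random.Random(seed)
    path = [x0]
    for _ in range(n):
        path.append(next_state(path[-1], rng))
    return path

print(simulate("sunny", 5))
```

Because each step depends only on the current state, the entire model is captured by the transition probability matrix `P`; this is exactly the simplification the Markov property buys.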

## Keywords

Stationary Distribution, Transition Probability Matrix, Conceptual Problem, Transition Diagram, Weather Model
