## Summary

Throughout this chapter, \(\mathcal{S}\) is a countable set, called the *state space*; a *discrete-time stochastic process on* \(\mathcal{S}\) is a collection *X* of \(\mathcal{S}\)-valued random variables \((X_n)_{n \in \mathbb N}\) indexed by time *n*. We begin by formalizing the idea of lack of memory, which is made rigorous in the definition of a Markov process. An application of the law of total probability then leads to the transition matrix, which encodes the dynamics of *X*. The central problem is to extract from this transition matrix information about the behavior of *X*. In this chapter we tackle it mostly by probabilistic methods, and we return to it with linear-algebraic tools in the next chapter.

Although most results on the asymptotic behavior are quite intuitive, the proofs can be rather technical. Readers interested mainly in the modeling aspects of Markov processes may limit themselves to the first five sections and then consult the statement of the main convergence result, Theorem 1.39. Finally, some results specific to the case where \(\mathcal{S}\) is finite are collected in the last section.
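As an informal illustration of how a transition matrix encodes the dynamics of a process, the following sketch simulates a sample path of a chain on a small state space. The two-state "weather" chain and its probabilities are hypothetical, chosen only for the example; each row of `P` lists the probabilities of moving from the current state to each possible next state, so each row must sum to 1.

```python
import random

# Hypothetical two-state chain on S = {"sunny", "rainy"}.
# P[i][j] is the probability of moving from state i to state j;
# each row sums to 1, as required of a transition matrix.
P = {
    "sunny": {"sunny": 0.8, "rainy": 0.2},
    "rainy": {"sunny": 0.5, "rainy": 0.5},
}

def step(state, rng=random):
    """Sample the next state X_{n+1} given the current state X_n."""
    u = rng.random()
    cumulative = 0.0
    for next_state, p in P[state].items():
        cumulative += p
        if u < cumulative:
            return next_state
    return next_state  # guard against floating-point round-off

def simulate(x0, n, rng=random):
    """Return a sample path (X_0, ..., X_n) started from x0."""
    path = [x0]
    for _ in range(n):
        path.append(step(path[-1], rng))
    return path
```

Note that `step` consults only the current state, never the earlier history: this is precisely the lack of memory that the Markov property formalizes.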