Markov Chains

  • Christian P. Robert
  • George Casella
Part of the Springer Texts in Statistics book series (STS)


In this chapter we introduce fundamental notions of Markov chains and state the results that are needed to establish the convergence of various MCMC algorithms and, more generally, to understand the literature on this topic. Thus, this chapter, along with basic notions of probability theory, will provide enough foundation for the understanding of the following chapters. It is, unfortunately, a necessarily brief and, therefore, incomplete introduction to Markov chains, and we refer the reader to Meyn and Tweedie (1993), on which this chapter is based, for a thorough introduction to Markov chains. Other perspectives can be found in Doob (1953), Chung (1960), Feller (1970, 1971), and Billingsley (1995) for general treatments, and Norris (1997), Nummelin (1984), Revuz (1984), and Resnick (1994) for books entirely dedicated to Markov chains. Given the purely utilitarian goal of this chapter, its style and presentation differ from those of other chapters, especially with regard to the plethora of definitions and theorems and to the rarity of examples and proofs. In order to make the book accessible to those who are more interested in the implementation aspects of MCMC algorithms than in their theoretical foundations, we include a preliminary section that contains the essential facts about Markov chains.
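Since the chapter's central objects are transition kernels, invariant measures, and the convergence results underpinning MCMC, the following minimal numerical sketch (not from the book; the three-state transition matrix P is hypothetical) illustrates these notions on a finite state space: the invariant distribution solves πP = π, and the simulated chain's occupation frequencies converge to it.

```python
import numpy as np

# A hypothetical three-state transition matrix P (the "transition kernel"
# on a finite state space); each row sums to 1.
P = np.array([
    [0.5, 0.4, 0.1],
    [0.2, 0.6, 0.2],
    [0.1, 0.3, 0.6],
])

# The invariant (stationary) distribution pi solves pi P = pi, i.e. pi is
# a left eigenvector of P for eigenvalue 1, normalized to sum to 1.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
pi /= pi.sum()

# Simulate the chain; by the ergodic theorem the occupation frequencies
# converge to pi, the property on which MCMC convergence results rest.
rng = np.random.default_rng(0)
state, counts = 0, np.zeros(3)
for _ in range(100_000):
    state = rng.choice(3, p=P[state])
    counts[state] += 1

print("invariant pi:   ", np.round(pi, 4))
print("empirical freqs:", np.round(counts / counts.sum(), 4))
```

The same logic drives MCMC algorithms on general state spaces: one constructs a kernel whose invariant measure is the target distribution and then appeals to the convergence theory developed in this chapter.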


Keywords: Markov Chain · Invariant Measure · Central Limit Theorem · Markov Chain Monte Carlo Algorithm · Transition Kernel




References

  1. Meyn, S. and Tweedie, R. (1993). Markov Chains and Stochastic Stability. Springer-Verlag, New York.
  2. Norris, J. (1997). Markov Chains. Cambridge University Press, Cambridge.
  3. Mengersen, K. and Tweedie, R. (1996). Rates of convergence of the Hastings and Metropolis algorithms. Ann. Statist., 24:101–121.
  4. Eaton, M. (1992). A statistical diptych: Admissible inferences–recurrence of symmetric Markov chains. Ann. Statist., 20:1147–1179.
  5. Brown, L. (1971). Admissible estimators, recurrent diffusions, and insoluble boundary value problems. Ann. Math. Statist., 42:855–903.
  6. Brooks, S. and Roberts, G. (1999). On quantile estimation and MCMC convergence. Biometrika, 86:710–717.
  7. Athreya, K., Doss, H., and Sethuraman, J. (1996). On the convergence of the Markov chain simulation method. Ann. Statist., 24:69–100.
  8. Rosenblatt, M. (1971). Markov Processes: Structure and Asymptotic Behavior. Springer-Verlag, New York.
  9. Billingsley, P. (1995). Probability and Measure. John Wiley, New York, third edition.
  10. Bradley, R. (1986). Basic properties of strong mixing conditions. In Eberlein, E. and Taqqu, M., editors, Dependence in Probability and Statistics, pages 165–192. Birkhäuser, Boston.
  11. Tierney, L. (1994). Markov chains for exploring posterior distributions (with discussion). Ann. Statist., 22:1701–1786.

Copyright information

© Springer Science+Business Media New York 2004

Authors and Affiliations

  • Christian P. Robert (1)
  • George Casella (2)
  1. CEREMADE, Université Paris Dauphine, Paris Cedex 16, France
  2. Department of Statistics, University of Florida, Gainesville, USA
