Abstract
A Markov chain extends the idea of a single probabilistic experiment on the outcome space Ω to a sequence of experiments on Ω, one for every t = 0, 1, …. Letting X_t denote the tth outcome, we say that the process moves from state X_{t-1} to state X_t on the tth iteration. The other major novelty here is that the probabilities governing the next move can depend on the present state. In fact, it is usually the case that from any given state x it is possible to move to only a small subset of Ω, called the neighborhood of x.
It turns out that this setup of a recurring probabilistic process has wide applicability. Some examples are the changes from moment to moment of a thermodynamic system, the changes in a species’ DNA sequence wrought by mutations, the step-by-step folding of a protein molecule, the day-to-day price of a stock, a gambler’s fortune from gamble to gamble, and many others. The crucial aspect of a Markov chain is that the system must evolve from one moment to the next in a random way, but depending only on the state of the system at the given moment and not on the entire history.
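The mechanism described above can be sketched in a few lines of code. The example below is a minimal illustration, not taken from the chapter: a random walk on the states {0, 1, 2, 3, 4}, where the neighborhood of a state x is {x-1, x+1} (clipped at the boundary), and each move depends only on the present state.

```python
import random

def neighbors(x, n=5):
    """The neighborhood of x: states reachable from x in one step."""
    return [y for y in (x - 1, x + 1) if 0 <= y < n]

def simulate(x0, steps, rng=random):
    """Run the chain for `steps` iterations, returning X_0, ..., X_steps."""
    path = [x0]
    for _ in range(steps):
        # The next move depends only on the present state path[-1],
        # not on the earlier history -- the Markov property.
        path.append(rng.choice(neighbors(path[-1])))
    return path

random.seed(0)
print(simulate(2, 10))
```

The state space and transition rule here are illustrative assumptions; any rule that chooses the next state from a distribution depending only on the current state defines a Markov chain.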
As the process is performed repeatedly, what conclusions can be drawn about it?
Copyright information
© 2009 Springer Science+Business Media, LLC
About this chapter
Cite this chapter
Shonkwiler, R.W., Mendivil, F. (2009). Markov Chain Monte Carlo. In: Explorations in Monte Carlo Methods. Undergraduate Texts in Mathematics. Springer, New York, NY. https://doi.org/10.1007/978-0-387-87837-9_3
Publisher Name: Springer, New York, NY
Print ISBN: 978-0-387-87836-2
Online ISBN: 978-0-387-87837-9
eBook Packages: Mathematics and Statistics (R0)