Abstract
There is an obvious difference between the theoretical guarantee that f is the stationary distribution of a Markov chain (x(t)) and the practical requirement that (1.2) is close enough to (1.1). It is thus necessary to develop diagnostic tools towards the latter goal, namely convergence control. While control is the topic of this book, we first present in this chapter some of the usual methods, before embarking upon the description of new control methods. The reader is referred to the survey papers of Brooks (1998), Brooks and Roberts (1998), and Cowles and Carlin (1996), as well as to Robert and Casella (1998) and Gelfand and Smith (1998), for details.
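To make the distinction concrete: a typical convergence-control diagnostic compares several independent chains targeting the same distribution and flags disagreement between them. The sketch below implements one standard diagnostic from the survey literature cited above, the Gelman and Rubin potential scale reduction factor (R-hat); it is an illustrative example of the genre, not a method specific to this chapter, and the function name and interface are hypothetical.

```python
import numpy as np

def gelman_rubin(chains):
    """Potential scale reduction factor (R-hat) for m parallel chains of length n.

    chains: array of shape (m, n), one row per independent chain.
    Values near 1 suggest the chains have mixed; values well above 1
    indicate the sampler has not yet reached its stationary regime.
    """
    chains = np.asarray(chains, dtype=float)
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    # Between-chain variance: variance of the chain means, scaled by n
    B = n * chain_means.var(ddof=1)
    # Within-chain variance: sample variance inside each chain, averaged
    W = chains.var(axis=1, ddof=1).mean()
    # Pooled estimate of the posterior variance
    var_hat = (n - 1) / n * W + B / n
    return np.sqrt(var_hat / W)

# Four well-mixed chains drawn from the same distribution
rng = np.random.default_rng(0)
good = rng.normal(size=(4, 1000))
print(gelman_rubin(good))  # close to 1
```

Running the same function on chains stuck near different modes (e.g. normals centred at 0, 5, 10, 15) returns a value far above 1, which is exactly the practical signal that (1.2) is not yet close to (1.1).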
Copyright information
© 1998 Springer Science+Business Media New York
Cite this chapter
Robert, C.P., Cellier, D. (1998). Convergence Control of MCMC Algorithms. In: Robert, C.P. (eds) Discretization and MCMC Convergence Assessment. Lecture Notes in Statistics, vol 135. Springer, New York, NY. https://doi.org/10.1007/978-1-4612-1716-9_2
DOI: https://doi.org/10.1007/978-1-4612-1716-9_2
Publisher Name: Springer, New York, NY
Print ISBN: 978-0-387-98591-6
Online ISBN: 978-1-4612-1716-9
eBook Packages: Springer Book Archive