The Two-Stage Gibbs Sampler

  • Christian P. Robert
  • George Casella
Part of the Springer Texts in Statistics book series (STS)


Abstract

The previous chapter presented the slice sampler, a special case of a Markov chain algorithm that needs no Accept–Reject step to be valid, seemingly because of the uniform nature of the target distribution. The reason the slice sampler works is, however, unrelated to this uniformity, and in this chapter we will see a much more general family of algorithms that function on the same principle: using the true conditional distributions associated with the target distribution to generate from that distribution.
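
To make this principle concrete, here is a minimal sketch of a two-stage Gibbs sampler in Python. The bivariate normal target with correlation rho is an assumption chosen purely for illustration (it is not taken from the chapter text); it is convenient because both full conditionals are exact normal distributions.

```python
import numpy as np

def two_stage_gibbs(rho, n_iter=10_000, seed=0):
    # Illustrative target (assumed, not from the chapter):
    # N(0, [[1, rho], [rho, 1]]), whose full conditionals are known exactly:
    #   X | Y = y ~ N(rho * y, 1 - rho^2), and symmetrically for Y | X.
    rng = np.random.default_rng(seed)
    sd = np.sqrt(1.0 - rho ** 2)      # conditional standard deviation
    x, y = 0.0, 0.0                   # arbitrary starting point
    samples = np.empty((n_iter, 2))
    for t in range(n_iter):
        x = rng.normal(rho * y, sd)   # stage 1: draw from f(x | y)
        y = rng.normal(rho * x, sd)   # stage 2: draw from f(y | x)
        samples[t] = (x, y)
    return samples

draws = two_stage_gibbs(rho=0.8)
print(draws.mean(axis=0))             # close to (0, 0)
print(np.corrcoef(draws.T)[0, 1])     # close to 0.8
```

Alternating the two conditional draws defines the Markov chain; under the conditions developed in the chapter, its stationary distribution is the joint target.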



Copyright information

© Springer Science+Business Media New York 2004

Authors and Affiliations

  • Christian P. Robert, CEREMADE, Université Paris Dauphine, Paris Cedex 16, France
  • George Casella, Department of Statistics, University of Florida, Gainesville, USA
