
Statistics and Computing

Volume 6, Issue 2, pp 101–111

Accelerating Monte Carlo Markov chain convergence for cumulative-link generalized linear models

  • Mary Kathryn Cowles

Abstract

The ordinal probit, univariate or multivariate, is a generalized linear model (GLM) structure that arises frequently in areas of statistical application as disparate as medicine and econometrics. Although it is straightforward to implement with the Gibbs sampler, the ordinal probit can be slow to reach satisfactory convergence.
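
For context, here is a minimal sketch of the kind of single-site data-augmentation Gibbs sweep alluded to above: the latent data, each free cutpoint, and the regression coefficients are drawn in turn from their individual full conditionals under a cumulative probit model. The conventions (response coded 0, ..., K-1, first cutpoint fixed at 0 for identifiability, flat priors, every category observed) and all names are illustrative assumptions, not the paper's notation.

```python
import numpy as np
from scipy.stats import truncnorm

def single_site_gibbs_sweep(y, X, beta, gamma, rng):
    """One sweep of a standard data-augmentation Gibbs sampler for the
    cumulative (ordinal) probit.  Assumed conventions: y is an integer
    array coded 0..K-1, gamma = (-inf, 0, gamma_2, ..., gamma_{K-1}, +inf)
    is a float array, priors are flat, and every category is observed."""
    n, p = X.shape
    eta = X @ beta
    K = len(gamma) - 1                       # number of ordinal categories

    # Latent data: z_i ~ N(x_i'beta, 1) truncated to (gamma[y_i], gamma[y_i+1]).
    z = truncnorm.rvs(gamma[y] - eta, gamma[y + 1] - eta,
                      loc=eta, scale=1.0, random_state=rng)

    # Each free cutpoint is uniform between the largest latent value in the
    # category below it and the smallest latent value in the category above,
    # clipped to the neighbouring cutpoints.
    for k in range(2, K):
        lo = max(z[y == k - 1].max(initial=-np.inf), gamma[k - 1])
        hi = min(z[y == k].min(initial=np.inf), gamma[k + 1])
        gamma[k] = rng.uniform(lo, hi)

    # Regression coefficients: conjugate normal draw given the latent data.
    V = np.linalg.inv(X.T @ X)
    beta = V @ (X.T @ z) + np.linalg.cholesky(V) @ rng.standard_normal(p)
    return z, gamma, beta
```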

We present a multivariate Hastings-within-Gibbs update step for generating latent data and bin boundary parameters jointly, instead of individually from their respective full conditionals. When the latent data are parameters of interest, this algorithm substantially improves Gibbs sampler convergence for large datasets. We also discuss Monte Carlo Markov chain (MCMC) implementation of cumulative logit (proportional odds) and cumulative complementary log-log (proportional hazards) models with latent data.
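
The sketch below illustrates one way such a blocked update can be organized in the cumulative probit case: the interior cutpoints are proposed jointly (here from truncated-normal random walks, an assumed proposal), the latent data are then drawn from their exact full conditional given the proposal, and acceptance therefore depends only on the interval probabilities and the proposal correction. The tuning standard deviation `sd`, the flat cutpoint prior, and all names are assumptions for illustration; this should not be read as the paper's exact algorithm.

```python
import numpy as np
from scipy.stats import norm, truncnorm

def joint_cutpoint_latent_update(y, X, beta, gamma, rng, sd=0.05):
    """One Hastings-within-Gibbs step that updates the interior cutpoints and
    the latent data as a single block rather than one component at a time.
    Illustrative sketch only: cumulative probit link, flat cutpoint prior,
    truncated-normal random-walk proposal with tuning s.d. `sd`.  Assumed
    conventions: y is integer-coded 0..K-1 and gamma is the float array
    (-inf, 0, gamma_2, ..., gamma_{K-1}, +inf), so z_i lies in
    (gamma[y_i], gamma[y_i + 1])."""
    eta = X @ beta
    K = len(gamma) - 1                        # number of ordinal categories

    # 1. Propose the free cutpoints sequentially, truncating each draw so the
    #    proposed vector stays ordered.
    prop = gamma.copy()
    for k in range(2, K):
        prop[k] = truncnorm.rvs((prop[k - 1] - gamma[k]) / sd,
                                (gamma[k + 1] - gamma[k]) / sd,
                                loc=gamma[k], scale=sd, random_state=rng)

    # 2. Log Hastings ratio.  Because the latent data will be drawn from their
    #    exact full conditional, they integrate out of the ratio; only the
    #    interval probabilities Phi(g[y+1] - eta) - Phi(g[y] - eta) remain,
    #    plus the forward/reverse proposal correction.
    def log_lik(g):
        return np.sum(np.log(norm.cdf(g[y + 1] - eta) - norm.cdf(g[y] - eta)))

    log_ratio = log_lik(prop) - log_lik(gamma)
    for k in range(2, K):
        if not (gamma[k - 1] < gamma[k] < prop[k + 1]):
            log_ratio = -np.inf               # reverse move impossible: reject
            break
        log_ratio += truncnorm.logpdf(gamma[k], (gamma[k - 1] - prop[k]) / sd,
                                      (prop[k + 1] - prop[k]) / sd,
                                      loc=prop[k], scale=sd)   # reverse density
        log_ratio -= truncnorm.logpdf(prop[k], (prop[k - 1] - gamma[k]) / sd,
                                      (gamma[k + 1] - gamma[k]) / sd,
                                      loc=gamma[k], scale=sd)  # forward density

    if np.log(rng.uniform()) < log_ratio:
        gamma = prop                          # accept the whole block

    # 3. Draw the latent data from their full conditional given the (possibly
    #    updated) cutpoints, completing the joint update.
    z = truncnorm.rvs(gamma[y] - eta, gamma[y + 1] - eta,
                      loc=eta, scale=1.0, random_state=rng)
    return z, gamma
```

Analogous blocked updates can be written for the cumulative logit and complementary log-log links by giving the latent errors a logistic or extreme-value distribution in place of the standard normal.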

Keywords

Blocking; collapsing; data augmentation; Gibbs sampler; latent data

Copyright information

© Chapman & Hall 1996

Authors and Affiliations

  • Mary Kathryn Cowles
    1. Department of Biostatistics, Harvard School of Public Health, Boston, USA
