Marginal maximum a posteriori estimation using Markov chain Monte Carlo

Abstract

Markov chain Monte Carlo (MCMC) methods, while facilitating the solution of many complex problems in Bayesian inference, are not currently well adapted to the problem of marginal maximum a posteriori (MMAP) estimation, especially when the number of parameters is large. We present here a simple and novel MCMC strategy, called State-Augmentation for Marginal Estimation (SAME), which leads to MMAP estimates for Bayesian models. We illustrate the simplicity and utility of the approach for missing data interpolation in autoregressive time series and blind deconvolution of impulsive processes.
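
To make the state-augmentation idea concrete, the sketch below applies it to a toy conjugate Gaussian model. The model, the annealing schedule and all numerical settings are illustrative assumptions, not details taken from the paper: at iteration i the latent variable is replicated beta(i) times while the parameter theta is shared across all copies, so the marginal target in theta is raised to the power beta(i) and concentrates on the MMAP estimate as beta(i) grows.

    # Illustrative sketch of the state-augmentation idea (SAME) on a toy model.
    # All model choices and settings below are assumptions made for illustration.
    #
    # Toy model:
    #   theta ~ N(m0, s0^2)          (parameter of interest)
    #   z | theta ~ N(theta, sz^2)   (latent / missing variable)
    #   y | z ~ N(z, sy^2)           (single observation)
    #
    # At annealing level beta, the augmented target is proportional to
    # prod_{k=1..beta} p(theta, z_k | y), whose theta-marginal is p(theta | y)^beta.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hyperparameters and a single observation (illustrative values).
    m0, s0 = 0.0, 2.0
    sz, sy = 1.0, 0.5
    y = 1.7

    n_iter = 2000
    beta_max = 50
    theta = 0.0  # initial value

    for i in range(n_iter):
        # Slowly increasing number of replicated latent variables (annealing schedule).
        beta = min(1 + i // 40, beta_max)

        # Sample each replica z_k from p(z | theta, y), Gaussian by conjugacy.
        prec_z = 1.0 / sz**2 + 1.0 / sy**2
        mean_z = (theta / sz**2 + y / sy**2) / prec_z
        z = rng.normal(mean_z, np.sqrt(1.0 / prec_z), size=beta)

        # Sample theta from its conditional under the augmented target,
        # proportional to p(theta)^beta * prod_k p(z_k | theta), again Gaussian.
        prec_t = beta / s0**2 + beta / sz**2
        mean_t = (beta * m0 / s0**2 + z.sum() / sz**2) / prec_t
        theta = rng.normal(mean_t, np.sqrt(1.0 / prec_t))

    # Exact marginal MAP of theta for this toy model, for comparison.
    prec_exact = 1.0 / s0**2 + 1.0 / (sz**2 + sy**2)
    theta_mmap = (m0 / s0**2 + y / (sz**2 + sy**2)) / prec_exact
    print(f"SAME estimate: {theta:.3f}   exact MMAP: {theta_mmap:.3f}")

In this conjugate setting the MMAP value is available in closed form, so the final line simply checks that the annealed chain has settled near it; in the models treated in the paper (autoregressive interpolation, blind deconvolution) no such closed form exists and the same Gibbs-style recursion is the practical route.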

Cite this article

Doucet, A., Godsill, S.J. & Robert, C.P. Marginal maximum a posteriori estimation using Markov chain Monte Carlo. Statistics and Computing 12, 77–84 (2002). https://doi.org/10.1023/A:1013172322619

Keywords

  • Bayesian computation
  • data augmentation
  • deconvolution
  • missing data
  • simulated annealing