Bayesian inference via projections
Bayesian inference often poses difficult computational problems. Even when off-the-shelf Markov chain Monte Carlo (MCMC) methods are available for the problem at hand, mixing issues can compromise the quality of the results. We introduce a framework for situations where the model space can be naturally divided into two components: (i) a baseline black-box probability distribution for the observed variables, and (ii) constraints enforced on functionals of this probability distribution. Inference is performed by sampling from the posterior implied by the first component and finding projections onto the space defined by the second component. We discuss the implications of this separation for priors, model selection, and MCMC mixing in latent variable models. Case studies include probabilistic principal component analysis, models of marginal independence, and an interpretable class of structured ordinal probit models.
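The "sample, then project" idea can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes a zero-mean Gaussian baseline model with a conjugate inverse-Wishart prior (so unconstrained posterior draws are exact), and a PPCA-style constraint space of covariances with rank-k structure plus isotropic noise; each unconstrained posterior draw of the covariance is projected onto that space via eigenvalue truncation. All dimensions, hyperparameters, and function names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data: n observations from a zero-mean Gaussian.
n, d, k = 200, 5, 2
true_cov = np.diag([3.0, 2.0, 1.0, 0.1, 0.1])
X = rng.multivariate_normal(np.zeros(d), true_cov, size=n)
S = X.T @ X  # scatter matrix

def sample_posterior_cov(S, n, d, nu0=7.0, rng=rng):
    """Draw one covariance from the inverse-Wishart posterior
    IW(nu0 + n, I + S) implied by the unconstrained baseline model.
    Sigma ~ IW(df, V) is sampled as inv(W), W ~ Wishart(df, inv(V))."""
    df = int(nu0 + n)
    scale = np.eye(d) + S
    L = np.linalg.cholesky(np.linalg.inv(scale))
    A = rng.standard_normal((df, d)) @ L.T  # rows ~ N(0, inv(scale))
    W = A.T @ A
    return np.linalg.inv(W)

def project_to_rank_k(Sigma, k):
    """Project a posterior draw onto the constrained space: a rank-k
    component plus isotropic noise, as in probabilistic PCA. The
    projection keeps the top-k eigen-directions and averages the rest."""
    vals, vecs = np.linalg.eigh(Sigma)
    idx = np.argsort(vals)[::-1]
    vals, vecs = vals[idx], vecs[:, idx]
    noise = vals[k:].mean()                    # isotropic residual variance
    top = np.maximum(vals[:k], noise)          # leading structure
    proj_vals = np.concatenate([top, np.full(Sigma.shape[0] - k, noise)])
    return (vecs * proj_vals) @ vecs.T

# Sample from the unconstrained posterior, then project each draw
# onto the constrained space; summarize by the projected posterior mean.
draws = [project_to_rank_k(sample_posterior_cov(S, n, d), k) for _ in range(50)]
post_mean = np.mean(draws, axis=0)
```

Because the baseline posterior is sampled without the constraint, MCMC (here, exact conjugate sampling) never has to mix over the constrained space; the constraint enters only through the deterministic projection applied to each draw.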
Keywords: MCMC · Optimization · Latent variable models · Structured covariance matrices
We thank Irini Moustaki for the green consumer data. This work was partially funded by an EPSRC grant, EP/J013293/1.