Statistics and Computing, Volume 29, Issue 3, pp 449–482

Informed sub-sampling MCMC: approximate Bayesian inference for large datasets

  • Florian Maire
  • Nial Friel
  • Pierre Alquier


This paper introduces a framework for speeding up Bayesian inference conducted in the presence of large datasets. We design a Markov chain whose transition kernel uses an unknown fraction of fixed size of the available data that is randomly refreshed throughout the algorithm. Inspired by the Approximate Bayesian Computation literature, the subsampling process is guided by the fidelity to the observed data, as measured by summary statistics. The resulting algorithm, Informed Sub-Sampling MCMC, is a generic and flexible approach which, contrary to existing scalable methodologies, preserves the simplicity of the Metropolis–Hastings algorithm. Even though exactness is lost, i.e., the chain distribution only approximates the posterior, we study and quantify this bias theoretically and show on a diverse set of examples that the algorithm yields excellent performance when the computational budget is limited. We also give theoretical arguments supporting the choice of the maximum likelihood estimator as the summary statistic, whenever it is available and cheap to compute.
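The abstract's idea can be illustrated with a minimal toy sketch: a Metropolis–Hastings chain on the parameter, run on a fixed-size subsample whose log-likelihood is rescaled by N/n, with the subsample occasionally refreshed by a move weighted by how closely its summary statistic matches the full-data one. Everything below is an illustrative assumption, not the paper's implementation: the Gaussian model, the sample mean as summary statistic (here also the MLE), the tolerance `eps`, the refresh schedule, and the single-index swap proposal are all arbitrary choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: N observations from N(2, 1); we target the posterior of the mean.
N, n = 100_000, 500                 # full data size, subsample size (n << N)
x = rng.normal(2.0, 1.0, N)
S_full = x.mean()                   # full-data summary statistic (the MLE here)

def log_sublik(theta, idx):
    """Subsample log-likelihood, rescaled by N/n to mimic the full likelihood."""
    return (N / n) * -0.5 * np.sum((x[idx] - theta) ** 2)

def log_weight(idx, eps=1e-3):
    """Fidelity of the subsample's summary statistic to the full-data one."""
    return -((x[idx].mean() - S_full) ** 2) / (2 * eps ** 2)

theta = 0.0
idx = rng.choice(N, n, replace=False)
samples = []
for t in range(5_000):
    # (a) Random-walk Metropolis-Hastings move on theta, given the subsample.
    prop = theta + 0.01 * rng.standard_normal()
    if np.log(rng.uniform()) < log_sublik(prop, idx) - log_sublik(theta, idx):
        theta = prop
    # (b) Occasionally refresh the subsample, guided by summary statistics.
    if t % 10 == 0:
        new_idx = idx.copy()
        new_idx[rng.integers(n)] = rng.integers(N)   # swap one index at random
        delta = (log_weight(new_idx) - log_weight(idx)
                 + log_sublik(theta, new_idx) - log_sublik(theta, idx))
        if np.log(rng.uniform()) < delta:
            idx = new_idx
    samples.append(theta)

est = np.mean(samples[1_000:])      # posterior mean estimate after burn-in
```

Because only n of the N observations are touched per iteration, each step costs O(n) rather than O(N); the summary-statistic weight in step (b) is what keeps the randomly refreshed subsamples representative of the full dataset, which is the "informed" part of the scheme.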


Keywords

Bayesian inference · Big data · Approximate Bayesian Computation · Noisy Markov chain Monte Carlo

Mathematics Subject Classification

Primary 65C40 · 65C60; Secondary 62F15



Acknowledgements

The Insight Centre for Data Analytics is supported by Science Foundation Ireland under Grant Number SFI/12/RC/2289. Nial Friel’s research was also supported by a Science Foundation Ireland grant: 12/IP/1424. Pierre Alquier’s research was funded by Labex ECODEC (ANR-11-LABEX-0047) and by the research programme New Challenges for New Data from LCL and GENES, hosted by the Fondation du Risque. We thank the Associate Editor and two anonymous Referees for their contribution to this work.



Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2018

Authors and Affiliations

  1. School of Mathematics and Statistics, University College Dublin, Dublin, Ireland
  2. The Insight Centre for Data Analytics, University College Dublin, Dublin, Ireland
  3. CREST, ENSAE, Université Paris Saclay, Paris, France
