
Statistics and Computing, Volume 26, Issue 1–2, pp. 393–407

Adaptive particle allocation in iterated sequential Monte Carlo via approximating meta-models

  • Anindya Bhadra
  • Edward L. Ionides
Article

Abstract

Sequential Monte Carlo (SMC) filters (also known as particle filters) are widely used in the analysis of non-linear and non-Gaussian time series models in diverse application areas such as engineering, finance, and epidemiology. When a time series contains an observation that is very unlikely given the previous observations, evaluation of its conditional log likelihood by SMC can suffer from high variance. The presence of one or more such observations can result in a poor Monte Carlo estimate of the overall likelihood. In this article, we develop a novel particle allocation strategy for off-line, iterated SMC-based filters that reduces the overall variance of the likelihood estimate and thereby enables efficient computation. The complication arising from the intractability of the actual SMC variance is handled via an approximating meta-model, in which we model the SMC errors in the evaluation of the conditional log likelihoods of the observations as an autoregressive process. We present numerical results on both simulated and real data sets: adaptive particle allocation yields 54 % lower overall variance than the naïve equal allocation of particles across all time points in simulations, and 53 % lower variance on a real time series model of epidemic malaria transmission. The approximating-model approach presented in this article is novel in the context of SMC and offers a computationally attractive procedure for practical analysis of a broad class of time series models.
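To make the allocation idea concrete, the following is a minimal sketch, not the authors' procedure. It assumes the Monte Carlo variance of the conditional log-likelihood estimate at time t scales roughly as sigma_t^2 / J_t when J_t particles are used at that time point, estimates sigma_t^2 from pilot replications at an equal allocation, applies a crude AR(1)-style smoothing of the log-variances as a stand-in for the paper's autoregressive meta-model, and then spreads a fixed total particle budget so as to minimize the summed variance. The function name allocate_particles and all of its arguments are hypothetical.

```python
import numpy as np

def allocate_particles(pilot_cond_loglik, total_particles, pilot_particles, phi=0.5):
    """Variance-driven particle allocation across time points (illustrative sketch).

    pilot_cond_loglik : array of shape (R, T); R pilot replications of the
                        conditional log-likelihood estimates at T time points,
                        each computed with `pilot_particles` particles.
    total_particles   : total particle budget to spread over the T time points.
    phi               : AR(1)-style smoothing weight for the log-variances
                        (a stand-in for the paper's meta-model).
    Returns an integer array of length T summing to roughly total_particles.
    """
    # Per-time-point Monte Carlo variance at the pilot allocation, rescaled
    # to the per-single-particle scale sigma_t^2 (assuming variance ~ sigma_t^2 / J_t).
    var_pilot = pilot_cond_loglik.var(axis=0, ddof=1)
    sigma2 = var_pilot * pilot_particles

    # Smooth the log-variances with a simple AR(1)-type recursion so that
    # isolated noisy estimates do not dominate the allocation.
    log_s2 = np.log(np.maximum(sigma2, 1e-12))
    smoothed = np.empty_like(log_s2)
    smoothed[0] = log_s2[0]
    for t in range(1, len(log_s2)):
        smoothed[t] = phi * smoothed[t - 1] + (1 - phi) * log_s2[t]
    sigma = np.exp(0.5 * smoothed)

    # Minimizing sum_t sigma_t^2 / J_t subject to sum_t J_t = total_particles
    # gives J_t proportional to sigma_t (a Neyman-type allocation).
    J = total_particles * sigma / sigma.sum()
    return np.maximum(np.round(J).astype(int), 1)

# Example with synthetic pilot output: 10 replications, 50 time points.
rng = np.random.default_rng(0)
pilot = rng.normal(size=(10, 50)) * np.linspace(0.5, 3.0, 50)
print(allocate_particles(pilot, total_particles=50_000, pilot_particles=500))
```

Under these assumptions, time points whose conditional log likelihoods are hard to estimate (large sigma_t) receive proportionally more particles, which is the intuition behind the adaptive allocation studied in the article; the article itself replaces the pilot-replicate variance estimates with the approximating meta-model because the actual SMC variance is intractable.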

Keywords

Approximating meta-models · Optimization · Particle filters · Sequential Monte Carlo · Variance reduction

Notes

Acknowledgments

The authors thank two anonymous referees for their constructive suggestions.


Copyright information

© Springer Science+Business Media New York 2014

Authors and Affiliations

  1. Department of Statistics, Purdue University, West Lafayette, USA
  2. Department of Statistics, University of Michigan, Ann Arbor, USA
