Abstract
This article considers the problem of storing the paths generated by a particle filter and, more generally, by a sequential Monte Carlo algorithm. It provides a theoretical result bounding the expected memory cost by T + CN log N, where T is the time horizon, N is the number of particles and C is a constant, as well as an efficient algorithm that achieves this bound. The theoretical result and the algorithm are illustrated with numerical experiments.
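The storage idea at stake can be illustrated with a small sketch: instead of copying every particle's full path at each resampling step, the paths are stored as a tree of (parent, value) nodes, so that shared history is kept once, and branches whose descendants have all died out are pruned. The Python class below, with its names `PathStore`, `insert` and `path`, is a hypothetical minimal sketch under these assumptions, not the paper's implementation.

```python
class PathStore:
    """Store particle paths as a tree of (parent, value) nodes.

    Shared history is stored once; branches whose descendants have all
    died out during resampling are pruned.  Illustrative sketch only.
    """

    def __init__(self):
        self.nodes = {}      # node id -> (parent id or None, stored value)
        self.nchildren = {}  # node id -> number of child nodes still alive
        self.next_id = 0

    def _new_node(self, parent, value):
        i = self.next_id
        self.next_id += 1
        self.nodes[i] = (parent, value)
        self.nchildren[i] = 0
        if parent is not None:
            self.nchildren[parent] += 1
        return i

    def _release(self, i):
        # Free node i if it is childless, then walk up while parents die.
        while i is not None and self.nchildren[i] == 0:
            parent, _ = self.nodes.pop(i)
            del self.nchildren[i]
            if parent is not None:
                self.nchildren[parent] -= 1
            i = parent

    def insert(self, values, ancestors, prev_leaves):
        """Append one generation.  ancestors[i] is the node id of
        particle i's parent (None at the initial time).  Returns the
        node ids of the new leaves."""
        leaves = [self._new_node(a, v) for v, a in zip(values, ancestors)]
        for i in prev_leaves:  # prune branches with no offspring
            self._release(i)
        return leaves

    def path(self, i):
        """Reconstruct the path leading to node i, earliest state first."""
        out = []
        while i is not None:
            parent, v = self.nodes[i]
            out.append(v)
            i = parent
        return out[::-1]
```

The number of nodes kept alive by such a structure is the memory cost that the article's T + CN log N bound concerns: after coalescence, only the surviving backbone plus the recent generations remain stored.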
Appendix: Proof of Lemma 1
Let \(N\in\mathbb{N}\) and \(\varepsilon\in(0,1)\); define \((u_k)_{k\geq 0}\) as in the statement of the lemma and define \(g_{N,\varepsilon}\) as in Eq. (6). We are interested in \(\sum_{k\geq 0}(u_k-1)\). Note first that \(g_{N,\varepsilon}\) is contracting and satisfies \(g_{N,\varepsilon}(1)=1\), so that \(u_k\) goes to \(1\) by the Banach fixed-point theorem. The contraction coefficient of \(g_{N,\varepsilon}\) can be bounded explicitly; however, this contraction coefficient depends on \(N\), and a direct use of it yields a bound on \(\sum_{k\geq 0}(u_k-1)\) that is not of order \(N\log N\).
Note also that even though \(u_k\) goes to \(1\), we can focus on the partial sum \(\sum_{k=0}^{\sigma_2}(u_k-1)\), where \(\sigma_2=\inf\{k: u_k\leq 2\}\), because \(\sum_{k=\sigma_2}^{\infty}(u_k-1)\) is essentially bounded by \(N\). Indeed, for \(1\leq u\leq 2\) we have \((\varepsilon^3/6N^2)\,u(u-1)(u-2)\leq 0\), so that
\[
g_{N,\varepsilon}(u)-1\leq (u-1)\Bigl(1-\frac{\varepsilon^2}{2N}\Bigr),
\]
hence, summing a geometric series, \(\sum_{k=\sigma_2}^{\infty}(u_k-1)\leq 2N/\varepsilon^2\). Therefore we can focus on bounding \(\sum_{k=0}^{\sigma_2}(u_k-1)\) by \(N\log N\). Let us split this sum into partial sums: the first over indices \(k\) such that \(N/2\leq u_k\leq N\), the second over indices \(k\) such that \(N/4\leq u_k\leq N/2\), and so on. More formally, we introduce \((k_j)_{j=0}^{J}\) such that \(k_0=0\), \(k_1=\inf\{k: u_k\leq N/2\}\), …, \(k_j=\inf\{k: u_k\leq N/2^j\}\), up to \(k_J=\inf\{k: u_k\leq N/2^J\}\), where \(J\) is such that \(N/2^J\leq 2\), or equivalently \(\log N/\log 2-1\leq J\); for instance we take \(J=\lfloor\log N/\log 2\rfloor\). Thus we have split \(\sum_{k=0}^{\sigma_2}(u_k-1)\) into \(J\) partial sums of the form \(\sum_{k=k_j}^{k_{j+1}-1}(u_k-1)\), and we are now going to bound each of these partial sums by the same quantity \(C(\varepsilon)N\) for some \(C(\varepsilon)\) that depends only on \(\varepsilon\).
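The tail bound above can be spelled out via a geometric series. The one-step factor \(1-\varepsilon^2/(2N)\) used below is an assumption inferred from the stated bound \(2N/\varepsilon^2\) (Eq. (6) itself is not reproduced in this excerpt):

```latex
\sum_{k=\sigma_2}^{\infty}(u_k-1)
  \;\le\; (u_{\sigma_2}-1)\sum_{j\ge 0}\Bigl(1-\frac{\varepsilon^2}{2N}\Bigr)^{j}
  \;=\; (u_{\sigma_2}-1)\,\frac{2N}{\varepsilon^2}
  \;\le\; \frac{2N}{\varepsilon^2},
\qquad \text{since } u_{\sigma_2}\le 2 .
```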
To do so, we consider the time needed by \((u_k)_{k\geq 0}\) to decrease from a value \(N/m_j\) to a value \(N/m_{j+1}\), with \(m_{j+1}>m_j\); we will later take \(m_j=2^j\) and \(m_{j+1}=2^{j+1}\). For any \(m\), the per-step decrease of \(g_{N,\varepsilon}\) around the level \(N/m\) can be controlled through a quantity \(\beta(N,m,\varepsilon)\), and for any \(N\geq 6\) and \(m\leq N/2\) we have
\[
\beta(N,m,\varepsilon)\geq \frac{\varepsilon^2}{4},
\]
which is clear upon noticing that \(\beta(N,m,\varepsilon)\), as a function of \(m\) on \([1,N/2]\), is concave and thus reaches its minimum at \(1\) or at \(N/2\) (and this minimum is greater than \(\varepsilon^2/4\), provided \(N\geq 6\)). The same control can be checked for any \(x\geq N/m_{j+1}\), by noticing that \(g_{N,\varepsilon}\) is concave and that \(g_{N,\varepsilon}(x)\leq x\) for \(x\in[0,N]\). Hence every step \(k\geq 0\) such that \(u_{k-1}\geq N/m_{j+1}\) decreases the sequence by an amount controlled by \(\beta(N,m_{j+1},\varepsilon)\).
Now suppose that for some \(k_j\geq 0\) we have \(u_{k_j}\leq N/m_j\), and let us find \(K\) such that \(u_{k_j+K}\leq N/m_{j+1}\). Combining the inequalities above, one can exhibit an explicit \(K\), depending only on \(m_j\), \(m_{j+1}\) and \(\varepsilon\) but not on \(N\), that guarantees the inequality \(u_{k_j+K}\leq N/m_{j+1}\). In other words, \((u_k)_{k\geq 0}\) needs at most \(K\) steps to decrease from \(N/m_j\) to \(N/m_{j+1}\). Summing the at most \(K\) terms between \(k_j\) and \(k_j+K\), each of which is at most \(N/m_j\), we obtain
\[
\sum_{k=k_j}^{k_j+K}(u_k-1)\leq K\,\frac{N}{m_j}.
\]
Taking \(m_j=2^j\) and \(m_{j+1}=2^{j+1}\), we have \(k_{j+1}\leq k_j+K\) and thus obtain
\[
\sum_{k=k_j}^{k_{j+1}-1}(u_k-1)\leq C(\varepsilon)\,N,
\]
with \(C(\varepsilon)\) independent of \(N\). We have thus bounded the full sum by
\[
\sum_{k\geq 0}(u_k-1)\leq C(\varepsilon)\,N\,\Bigl\lfloor\frac{\log N}{\log 2}\Bigr\rfloor+\frac{2N}{\varepsilon^2}\leq D(\varepsilon)\,N\log N,
\]
for some \(D(\varepsilon)\) independent of \(N\).
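The behaviour established in this proof can be checked numerically. Since Eq. (6) is not reproduced in this excerpt, the sketch below substitutes an illustrative map \(g(u)=N(1-(1-1/N)^u)\), the expected number of distinct parents when \(u\) children each pick one of \(N\) parents uniformly at random. This map satisfies \(g(1)=1\) and is concave, and the accumulated sum of \(u_k-1\) stays within a small multiple of \(N\log N\).

```python
import math

def distinct_parents_map(u, N):
    """g(u) = N(1 - (1 - 1/N)^u): expected number of distinct parents
    when u children each pick one of N parents uniformly at random.
    Used as an illustrative stand-in for g_{N,eps} of Eq. (6)."""
    return N * (1.0 - (1.0 - 1.0 / N) ** u)

def ancestor_sum(N, steps=50_000):
    """Iterate u_{k+1} = g(u_k) from u_0 = N and accumulate sum(u_k - 1)."""
    u, total = float(N), 0.0
    for _ in range(steps):
        total += max(u - 1.0, 0.0)  # guard against floating-point noise near 1
        u = distinct_parents_map(u, N)
    return total, u

if __name__ == "__main__":
    N = 1024
    total, u_final = ancestor_sum(N)
    print(N, round(total), u_final)  # total is of order N log N
```

The choices N = 1024 and 50 000 iterations are arbitrary; the number of iterations is large enough for \(u_k\) to settle at the fixed point \(1\), so the truncated sum approximates the full series.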
Jacob, P.E., Murray, L.M. & Rubenthaler, S. Path storage in the particle filter. Stat Comput 25, 487–496 (2015). https://doi.org/10.1007/s11222-013-9445-x