Path storage in the particle filter


This article considers the problem of storing the paths generated by a particle filter and, more generally, by a sequential Monte Carlo algorithm. It provides a theoretical result bounding the expected memory cost by \(T+CN\log N\), where \(T\) is the time horizon, \(N\) is the number of particles and \(C\) is a constant, as well as an efficient algorithm to realise this bound. The theoretical result and the algorithm are illustrated with numerical experiments.
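The storage scheme can be sketched in code: keep the particle paths as a tree of ancestor pointers and prune every branch that no longer leads to a current particle, so that coalescing ancestries are stored only once. The following is an illustrative reference-counted sketch, not the authors' implementation; the names (`AncestryTree`, `insert`, `prune`, `toy_filter`) and the use of sampling with replacement as a stand-in for resampling are assumptions made for the example.

```python
import random

class AncestryTree:
    """Store particle paths as a tree of ancestor pointers.

    Each node records its parent and how many children it still has;
    prune() removes dead branches so that only paths leading to a
    current particle remain in memory.
    """

    def __init__(self):
        self.parent = {}     # node id -> parent node id (None for roots)
        self.nchildren = {}  # node id -> number of children still stored
        self.state = {}      # node id -> particle state at that time step
        self._next = 0

    def insert(self, parent_id, state):
        node = self._next
        self._next += 1
        self.parent[node] = parent_id
        self.nchildren[node] = 0
        self.state[node] = state
        if parent_id is not None:
            self.nchildren[parent_id] += 1
        return node

    def prune(self, node):
        # Walk up from a leaf, deleting every node that loses its last child.
        while node is not None and self.nchildren[node] == 0:
            parent = self.parent[node]
            del self.parent[node], self.nchildren[node], self.state[node]
            if parent is not None:
                self.nchildren[parent] -= 1
            node = parent

def toy_filter(tree, N, T):
    """Run a toy filter for T steps, storing the surviving paths in `tree`."""
    leaves = [tree.insert(None, random.gauss(0.0, 1.0)) for _ in range(N)]
    for _ in range(T - 1):
        ancestors = [random.choice(leaves) for _ in range(N)]  # stand-in for resampling
        new_leaves = [tree.insert(a, random.gauss(0.0, 1.0)) for a in ancestors]
        for leaf in leaves:   # old leaves that left no offspring are pruned away
            tree.prune(leaf)
        leaves = new_leaves
    return leaves
```

Tracing from any surviving leaf back to a root recovers a full path of length \(T\); because ancestries coalesce under resampling, the number of stored nodes is typically far below the naive \(NT\).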


(The article presents Algorithms 1–3 and Figs. 1–3.)



Author information



Corresponding author

Correspondence to Pierre E. Jacob.

Appendix: Proof of Lemma 1


Let \(N\in\mathbb{N}\) and \(\varepsilon\in(0,1)\), define \((u_{k})_{k\geq0}\) as in the statement of the lemma and define \(g_{N,\varepsilon}\) as in Eq. (6). We are interested in \(\sum_{k\geq0}(u_{k}-1)\). Note first that \(g_{N,\varepsilon}\) is contracting and satisfies \(g_{N,\varepsilon}(1)=1\), so that \(u_{k}\) converges to 1 by the Banach fixed-point theorem. The contraction coefficient of \(g_{N,\varepsilon}\) can be bounded by

$$\begin{aligned} \sup_{x}\bigl\lvert g_{N,\varepsilon}'(x)\bigr\rvert & \leq g_{N,\varepsilon }'(1)=1-\frac{\varepsilon^{2}}{2N}<1, \end{aligned}$$

however this contraction coefficient depends on \(N\), and a direct use of it yields a bound on \(\sum_{k\geq0}(u_{k}-1)\) that is not in \(N\log N\).

Note also that even though \(u_{k}\) goes to 1, we can focus on the partial sum \(\sum_{k=0}^{\sigma_{2}}(u_{k}-1)\) where \(\sigma_{2}=\inf\{k:u_{k}\leq2\}\), because \(\sum_{k=\sigma_{2}}^{\infty}(u_{k}-1)\) is essentially bounded by \(N\). Indeed note that for \(1\leq u\leq2\) we have \((\varepsilon^{3}/6N^{2})u(u-1)(u-2)\leq0\) so that

$$\begin{aligned} u_{k}-1 & \leq u_{k-1}-1-\frac{\varepsilon^{2}}{2N}u_{k-1}(u_{k-1}-1) \\ &\leq (u_{k-1}-1) \biggl(1-\frac{\varepsilon^{2}}{2N}\biggr), \end{aligned}$$

hence \(\sum_{k=\sigma_{2}}^{\infty}(u_{k}-1)\leq(2N/\varepsilon^{2})\). Therefore we can focus on bounding \(\sum_{k=0}^{\sigma_{2}}(u_{k}-1)\) by \(N\log N\). Let us split this sum into partial sums, where the first partial sum is over indices \(k\) such that \(N/2\leq u_{k}\leq N\), the second is over indices \(k\) such that \(N/4\leq u_{k}<N/2\), etc. More formally, we introduce \((k_{j})_{j=0}^{J}\) such that \(k_{0}=0\), \(k_{1}=\inf\{k:u_{k}\leq N/2\}\), …, \(k_{j}=\inf\{k:u_{k}\leq N/2^{j}\}\), up to \(k_{J}=\inf\{k:u_{k}\leq N/2^{J}\}\) where \(J\) is such that \(N/2^{J}\leq2\), or equivalently \(\log N/\log2-1\leq J\). For instance we take \(J=\lfloor\log N/\log2\rfloor\). Thus we have split \(\sum_{k=0}^{\sigma_{2}}(u_{k}-1)\) into \(J\) partial sums of the form \(\sum_{k=k_{j}}^{k_{j+1}-1}(u_{k}-1)\) and we are now going to bound each of these partial sums by the same quantity \(C(\varepsilon)N\) for some \(C(\varepsilon)\) that depends only on \(\varepsilon\).
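The tail bound \(\sum_{k=\sigma_{2}}^{\infty}(u_{k}-1)\leq 2N/\varepsilon^{2}\) can be sanity-checked numerically by iterating the dominating quadratic recursion \(u_{k}=u_{k-1}-(\varepsilon^{2}/2N)u_{k-1}(u_{k-1}-1)\) from \(u_{0}=2\). This is an illustration only: the recursion is the upper bound above, not the exact \(g_{N,\varepsilon}\) of Eq. (6), and the helper name is invented.

```python
def tail_sum(N, eps, tol=1e-6):
    """Accumulate sum of (u_k - 1) for the dominating recursion from u_0 = 2."""
    u, total = 2.0, 0.0
    while u - 1.0 > tol:
        total += u - 1.0
        # dominating quadratic recursion: u <- u - (eps^2 / 2N) u (u - 1)
        u -= (eps ** 2 / (2.0 * N)) * u * (u - 1.0)
    return total
```

Since the per-step decrement is at least \((\varepsilon^{2}/2N)(u_{k-1}-1)\), the sum is geometrically dominated and stays below \(2N/\varepsilon^{2}\) for every \(N\) and \(\varepsilon\).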

To do so, we consider the time needed by \((u_{k})_{k\geq0}\) to decrease from a value \(N/m_{j}\) to a value \(N/m_{j+1}\), with \(m_{j+1}>m_{j}\); we will later take \(m_{j}=2^{j}\) and \(m_{j+1}=2^{j+1}\). Note that for any \(m\) we have

$$\begin{aligned} &{g_{N,\varepsilon} \biggl(\frac{N}{m} \biggr) }\\ &{\quad = \frac{N}{m} \biggl(1-\frac{1}{m} \biggl[\frac{\varepsilon^{2}}{2}- \frac {m\varepsilon^{2}}{2N}-\frac{\varepsilon^{3}}{6m}+\frac{\varepsilon ^{3}}{2N}-\frac{m\varepsilon^{3}}{3N^{2}} \biggr] \biggr).} \end{aligned}$$


Define
$$\beta(N,m,\varepsilon)=\frac{\varepsilon^{2}}{2}-\frac{m\varepsilon^{2}}{2N}-\frac{\varepsilon^{3}}{6m}+\frac{\varepsilon^{3}}{2N}-\frac{m\varepsilon^{3}}{3N^{2}} $$

and note that for any \(N\geq6\) and \(m\leq N/2\) we have

$$\underline{\beta}(\varepsilon):=\frac{\varepsilon^{2}}{4}\leq\beta (N,m,\varepsilon), $$

which is clear upon noticing that \(\beta(N,m,\varepsilon)\), as a function of \(m\) on \([1,N/2]\), is concave and thus reaches its minimum at \(m=1\) or \(m=N/2\) (and this minimum is greater than \(\varepsilon^{2}/4\), provided \(N\geq6\)). For any \(x\geq N/m_{j+1}\) we can check that

$$g_{N,\varepsilon}(x)\leq\frac{g_{N,\varepsilon}(N/m_{j+1})}{N/m_{j+1}}\times x $$

by noticing that \(g_{N,\varepsilon}\) is concave and that \(g_{N,\varepsilon}(x)\leq x\) for \(x\in[0,N]\). Hence for \(k\geq0\) such that \(u_{k-1}\geq N/m_{j+1}\), we have

$$u_{k}\leq \biggl(1-\frac{1}{m_{j+1}}\underline{\beta}(\varepsilon) \biggr)u_{k-1}. $$

Now suppose that for some \(k_{j}\geq0\) we have \(u_{k_{j}}\leq N/m_{j}\). Then let us find \(K\) such that \(u_{k_{j}+K}\leq N/m_{j+1}\). It is sufficient to find \(K\) such that

$$\begin{aligned} &{ \biggl(1-\frac{1}{m_{j+1}}\underline{\beta}(\varepsilon) \biggr)^{K}\frac {N}{m_{j}}\leq\frac{N}{m_{j+1}}} \\ &{\quad \Leftrightarrow\quad K\geq\log \frac{m_{j+1}}{m_{j}} \biggl(-\log \biggl(1-\frac{1}{m_{j+1}}\underline{\beta}( \varepsilon) \biggr) \biggr)^{-1}.} \end{aligned}$$

Finally by using

$$\forall x\in(0,1)\quad\frac{1}{x}-1\leq\frac{1}{-\log(1-x)}\leq \frac{1}{x} $$

we conclude that K defined as

$$K= \biggl\lceil \biggl(\log\frac{m_{j+1}}{m_{j}} \biggr)\frac {m_{j+1}}{\underline{\beta}(\varepsilon)} \biggr\rceil $$

guarantees the inequality \(u_{k_{j}+K}\leq N/m_{j+1}\). In other words, \((u_{k})_{k\geq0}\) needs at most \(K\) steps to decrease from \(N/m_{j}\) to \(N/m_{j+1}\). Summing the terms between \(k_{j}\) and \(k_{j}+K\), we obtain

$$\begin{aligned} \sum_{k=k_{j}}^{k_{j}+K}u_{k} & \leq K \frac{N}{m_{j}} \leq \biggl[ \biggl(\log \frac{m_{j+1}}{m_{j}} \biggr)\frac {m_{j+1}}{\underline{\beta}(\varepsilon)}+1 \biggr]\frac{N}{m_{j}}. \end{aligned}$$
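As a sanity check (not part of the proof), one can verify numerically that \(K\) applications of the factor \(1-\underline{\beta}(\varepsilon)/m_{j+1}\) do bring \(N/m_{j}\) at or below \(N/m_{j+1}\), using \(\underline{\beta}(\varepsilon)=\varepsilon^{2}/4\). The helper names below are invented for the illustration.

```python
import math

def K_steps(m1, m2, eps):
    """K = ceil(log(m2/m1) * m2 / beta_low), with beta_low = eps^2 / 4."""
    beta_low = eps ** 2 / 4.0
    return math.ceil(math.log(m2 / m1) * m2 / beta_low)

def contraction_reaches_target(N, m1, m2, eps):
    """Check that K steps of the factor (1 - beta_low/m2) take N/m1 below N/m2."""
    beta_low = eps ** 2 / 4.0
    K = K_steps(m1, m2, eps)
    return (1.0 - beta_low / m2) ** K * (N / m1) <= N / m2
```

The inequality holds because \(1/(-\log(1-x))\leq 1/x\) on \((0,1)\), so the ceiling of \(\log(m_{2}/m_{1})\,m_{2}/\underline{\beta}(\varepsilon)\) steps always suffices.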

Taking \(m_{j}=2^{j}\) and \(m_{j+1}=2^{j+1}\), we have \(k_{j+1}\leq k_{j}+K\) and thus obtain

$$\begin{aligned} \sum_{k=k_{j}}^{k_{j+1}}u_{k}\leq\sum _{k=k_{j}}^{k_{j}+K}u_{k} & \leq \biggl[ (\log2 )\frac{2}{\underline{\beta}(\varepsilon )}+\frac{1}{2^{j}} \biggr]N = C(\varepsilon)N \end{aligned}$$

with C(ε) independent of N. We have thus bounded the full sum by

$$\begin{aligned} \sum_{k\geq0}(u_{k}-1) & \leq\sum _{k=0}^{\sigma_{2}}(u_{k}-1)+\sum _{k\geq\sigma_{2}}(u_{k}-1) \\ & \leq \biggl\lceil \frac{\log N}{\log2} \biggr\rceil C( \varepsilon)N+\frac {2N}{\varepsilon^{2}} \leq D(\varepsilon) N \log N \end{aligned}$$

for some D(ε) independent of N.
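The \(N\log N\) scaling can also be illustrated numerically with the dominating recursion \(u_{k}=u_{k-1}-(\varepsilon^{2}/2N)u_{k-1}(u_{k-1}-1)\) started at \(u_{0}=N\); this is a sketch, since the exact \(g_{N,\varepsilon}\) of Eq. (6) is not reproduced here, and the loose constants in the assertion are chosen for the illustration only.

```python
import math

def dominated_sum(N, eps, tol=1e-8):
    """Accumulate sum of (u_k - 1) for the dominating recursion from u_0 = N."""
    u, total = float(N), 0.0
    while u - 1.0 > tol:
        total += u - 1.0
        # dominating quadratic recursion: u <- u - (eps^2 / 2N) u (u - 1)
        u -= (eps ** 2 / (2.0 * N)) * u * (u - 1.0)
    return total
```

For \(\varepsilon=1/2\) the total stays within a constant multiple of \(N\log N\) as \(N\) grows, consistent with the \(D(\varepsilon)N\log N\) bound (a continuous-time approximation gives roughly \((2/\varepsilon^{2})N\log N\)).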



Cite this article

Jacob, P.E., Murray, L.M. & Rubenthaler, S. Path storage in the particle filter. Stat Comput 25, 487–496 (2015).



  • Sequential Monte Carlo
  • Particle filter
  • Memory cost
  • Parallel computation