## Abstract

This article considers the problem of storing the paths generated by a particle filter and more generally by a sequential Monte Carlo algorithm. It provides a theoretical result bounding the expected memory cost by *T*+*CN*log*N* where *T* is the time horizon, *N* is the number of particles and *C* is a constant, as well as an efficient algorithm to realise this. The theoretical result and the algorithm are illustrated with numerical experiments.
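The storage problem can be illustrated with a toy simulation. Under multinomial resampling the ancestral tree of the current particles coalesces quickly, so storing only the nodes reachable from the final generation requires far fewer than \((T+1)N\) slots. The sketch below (uniform resampling weights for simplicity; the function name is hypothetical, and this is not the paper's implementation) counts those surviving nodes.

```python
import random

def surviving_path_nodes(T, N, seed=0):
    """Simulate T steps of multinomial resampling among N particles and
    count how many tree nodes are needed to store the N surviving paths."""
    rng = random.Random(seed)
    # ancestors[t][i] = index of particle i's parent at time t-1
    ancestors = [[rng.randrange(N) for _ in range(N)] for _ in range(T)]
    # Walk back from the final generation, keeping only the ancestors
    # that are still referenced by some surviving path.
    alive = set(range(N))
    kept = N  # nodes at the final time
    for t in range(T - 1, -1, -1):
        alive = {ancestors[t][i] for i in alive}
        kept += len(alive)
    return kept

# The surviving tree coalesces fast: far fewer than (T+1)*N nodes remain.
print(surviving_path_nodes(T=1000, N=100))
```

Once the surviving lines have coalesced to a single common ancestor, each earlier generation contributes only one node, which is where the \(T\) term of the bound comes from; the coalescence phase itself accounts for the \(CN\log N\) term.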



## Author information

### Affiliations

### Corresponding author

## Appendix: Proof of Lemma 1


Let \(N\in\mathbb{N}\) and \(\varepsilon\in(0,1)\), define \((u_{k})_{k\geq0}\) as in the statement of the lemma and define \(g_{N,\varepsilon}\) as in Eq. (6). We are interested in \(\sum_{k\geq0}(u_{k}-1)\). Note first that \(g_{N,\varepsilon}\) is contracting and satisfies \(g_{N,\varepsilon}(1)=1\), so that \(u_{k}\) converges to 1 by the Banach fixed-point theorem. The contraction coefficient of \(g_{N,\varepsilon}\) can be bounded by

however this contraction coefficient depends on \(N\), and using it directly yields a bound on \(\sum_{k\geq0}(u_{k}-1)\) that is not of order \(N\log N\).

Note also that even though \(u_{k}\) goes to 1, we can focus on the partial sum \(\sum_{k=0}^{\sigma_{2}}(u_{k}-1)\) where \(\sigma_{2}=\inf\{k:u_{k}\leq2\}\), because \(\sum_{k=\sigma_{2}}^{\infty}(u_{k}-1)\) is essentially bounded by \(N\). Indeed note that for \(1\leq u\leq2\) we have \((\varepsilon^{3}/6N^{2})u(u-1)(u-2)\leq0\) so that

hence \(\sum_{k=\sigma_{2}}^{\infty}(u_{k}-1)\leq2N/\varepsilon^{2}\). Therefore we can focus on bounding \(\sum_{k=0}^{\sigma_{2}}(u_{k}-1)\) by \(N\log N\). Let us split this sum into partial sums, where the first partial sum is over indices \(k\) such that \(N/2\leq u_{k}\leq N\), the second is over indices \(k\) such that \(N/4\leq u_{k}\leq N/2\), and so on. More formally, we introduce \((k_{j})_{j=0}^{J}\) such that \(k_{0}=0\), \(k_{1}=\inf\{k:u_{k}\leq N/2\},\ldots,k_{j}=\inf\{k:u_{k}\leq N/2^{j}\}\), up to \(k_{J}=\inf\{k:u_{k}\leq N/2^{J}\}\), where \(J\) is such that \(N/2^{J}\leq2\), or equivalently \(\log N/\log2-1\leq J\); for instance we take \(J=\lfloor\log N/\log2\rfloor\). Thus we have split \(\sum_{k=0}^{\sigma_{2}}(u_{k}-1)\) into \(J\) partial sums of the form \(\sum_{k=k_{j}}^{k_{j+1}-1}(u_{k}-1)\), and we are now going to bound each of these partial sums by the same quantity \(C(\varepsilon)N\) for some \(C(\varepsilon)\) that depends only on \(\varepsilon\).
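The dyadic splitting can be checked numerically. Since Eq. (6) is not reproduced in this excerpt, the sketch below assumes for \(g_{N,\varepsilon}\) the cubic map \(x\mapsto x-(\varepsilon^{2}/2N)x(x-1)+(\varepsilon^{3}/6N^{2})x(x-1)(x-2)\), an assumed form chosen only because it is consistent with the \(\varepsilon^{2}/2N\) and \(\varepsilon^{3}/6N^{2}\) terms appearing in the proof; each dyadic block then contributes \(O(N)\) to the sum, uniformly in \(j\).

```python
import math

def g(u, N, eps):
    # Assumed cubic map standing in for Eq. (6), which this excerpt omits;
    # its eps^2/(2N) and eps^3/(6N^2) terms match those used in the proof.
    return (u
            - (eps**2 / (2 * N)) * u * (u - 1)
            + (eps**3 / (6 * N**2)) * u * (u - 1) * (u - 2))

def dyadic_partial_sums(N, eps):
    """Split sum_k (u_k - 1) at the times k_j = inf{k : u_k <= N / 2^j}."""
    J = int(math.log2(N))
    u, sums, block, j = float(N), [], 0.0, 1
    while u > 2.0 and j <= J:
        block += u - 1
        u = g(u, N, eps)
        if u <= N / 2**j:       # reached threshold k_j: close the block
            sums.append(block)
            block, j = 0.0, j + 1
    return sums

# Each dyadic block contributes at most C(eps) * N, uniformly in j.
for s in dyadic_partial_sums(N=2**10, eps=0.5):
    print(s / 2**10)
```

The printed ratios stay bounded by a constant depending on \(\varepsilon\) only, as the proof requires.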

To do so, we consider the time needed by \((u_{k})_{k\geq0}\) to decrease from a value \(N/m_{j}\) to a value \(N/m_{j+1}\), with \(m_{j+1}>m_{j}\); we will later take \(m_{j}=2^{j}\) and \(m_{j+1}=2^{j+1}\). Note that for any \(m\) we have

Define

and note that for any \(N\geq6\) and \(m\leq N/2\) we have

which is clear upon noticing that \(\beta(N,m,\varepsilon)\), as a function of \(m\) on \([1,N/2]\), is concave and thus reaches its minimum at 1 or \(N/2\) (and this minimum is greater than \(\varepsilon^{2}/4\), provided \(N\geq6\)). For any \(x\geq N/m_{j+1}\) we can check that

by noticing that \(g_{N,\varepsilon}\) is concave and that \(g_{N,\varepsilon}(x)\leq x\) for \(x\in[0,N]\). Hence for \(k\geq0\) such that \(u_{k-1}\geq N/m_{j+1}\), we have

Now suppose that for some \(k_{j}\geq0\) we have \(u_{k_{j}}\leq N/m_{j}\). Then let us find \(K\) such that \(u_{k_{j}+K}\leq N/m_{j+1}\). It is sufficient to find \(K\) such that

Finally by using

we conclude that *K* defined as

guarantees the inequality \(u_{k_{j}+K}\leq N/m_{j+1}\). In other words, \((u_{k})_{k\geq0}\) needs fewer than \(K\) steps to decrease from \(N/m_{j}\) to \(N/m_{j+1}\). Summing the terms between \(k_{j}\) and \(k_{j}+K\), we obtain

Taking \(m_{j}=2^{j}\) and \(m_{j+1}=2^{j+1}\), we have \(k_{j+1}\leq k_{j}+K\) and thus obtain

with *C*(*ε*) independent of *N*. We have thus bounded the full sum by

for some *D*(*ε*) independent of *N*.
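As a numerical sanity check of the final \(N\log N\) bound (again using the assumed cubic form of \(g_{N,\varepsilon}\) described above, since Eq. (6) is not reproduced here), one can iterate the recursion from \(u_{0}=N\) and compare the accumulated sum with \(N\log N\) across several values of \(N\).

```python
import math

def g(u, N, eps):
    # Assumed cubic map standing in for Eq. (6), which this excerpt omits;
    # its eps^2/(2N) and eps^3/(6N^2) terms match those used in the proof.
    return (u
            - (eps**2 / (2 * N)) * u * (u - 1)
            + (eps**3 / (6 * N**2)) * u * (u - 1) * (u - 2))

def total_sum(N, eps=0.5, tol=1e-3):
    """Approximate sum_{k>=0} (u_k - 1) with u_0 = N, truncating the
    (geometrically decaying) tail once u_k - 1 < tol."""
    u, total = float(N), 0.0
    while u - 1 > tol:
        total += u - 1
        u = g(u, N, eps)
    return total

# The ratio S(N) / (N log N) stays bounded as N grows, as the lemma states.
for N in (2**8, 2**10, 2**12):
    print(N, total_sum(N) / (N * math.log(N)))
```

The truncation at `tol` discards a tail that the proof bounds by a geometric series, so it does not affect the observed scaling.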


## About this article

### Cite this article

Jacob, P.E., Murray, L.M. & Rubenthaler, S. Path storage in the particle filter. *Stat Comput* **25**, 487–496 (2015). https://doi.org/10.1007/s11222-013-9445-x


### Keywords

- Sequential Monte Carlo
- Particle filter
- Memory cost
- Parallel computation