On the performance of particle filters with adaptive number of particles

We investigate the performance of a class of particle filters (PFs) that can automatically tune their computational complexity by evaluating online certain predictive statistics which are invariant for a broad class of state-space models. To be specific, we propose a family of block-adaptive PFs based on the methodology of Elvira et al. (IEEE Trans Signal Process 65(7):1781–1794, 2017). In this class of algorithms, the number of Monte Carlo samples (known as particles) is adjusted periodically, and we prove that the theoretical error bounds of the PF actually adapt to the updates in the number of particles. The evaluation of the predictive statistics that lies at the core of the methodology is done by generating fictitious observations, i.e., particles in the observation space. We study, both analytically and numerically, the impact of the number K of these particles on the performance of the algorithm. In particular, we prove that if the predictive statistics with K fictitious observations converged exactly, then the particle approximation of the filtering distribution would match the first K elements in a series of moments of the true filter. This result can be understood as a converse to some convergence theorems for PFs. From this analysis, we deduce an alternative predictive statistic that can be computed (for some models) without sampling any fictitious observations at all. Finally, we conduct an extensive simulation study that illustrates the theoretical results and provides further insights into the complexity, performance and behavior of the new class of algorithms.


Introduction
In science and engineering, there are many problems that are studied by way of dynamic probabilistic models. Some of these models describe mathematically the evolution of hidden states and their relations with observations, which are sequentially acquired. In many of these problems, the objective is to estimate sequentially the posterior probability distribution of the hidden model states. A methodology that has gained considerable popularity in the last two and a half decades is particle filtering (also known as sequential Monte Carlo) (Gordon et al. 1993; Liu et al. 1998; Doucet et al. 2001; Djurić et al. 2003; Künsch 2013). This is a Monte Carlo methodology that approximates the distributions of interest by means of random (weighted) samples.
Arguably, a key parameter of particle filters (PFs) is the number of generated Monte Carlo samples (usually termed particles). A larger number of particles improves the approximation of the filter but also increases the computational complexity. However, it is impossible to know a priori the number of particles needed to achieve a prescribed accuracy in the estimated parameters and distributions. A question of great practical interest, therefore, is how to determine the number of particles necessary to attain a prescribed performance and, in particular, how to determine it automatically and in real time.

Particle filtering with time-varying number of particles
Until the publication of Elvira et al. (2017), not many papers had considered the selection/adaptation of the number of particles in a systematic and rigorous manner. In Elvira et al. (2017), a methodology was introduced to address this problem with the goal of adapting the number of particles in real time. The method is based on a rigorous mathematical analysis and we discuss it in more detail in Sect. 1.2.
Other efforts toward the same goal include the use of a Kullback-Leibler divergence-based approximation error by Fox (2003), where the divergence was defined between the distribution of the PF and a discrete approximation of the true distribution computed on a predefined grid. The ideas in Fox (2003) were further explored by Soto (2005). A heuristic approach based on the effective sample size was proposed by Straka and Šimandl (2006), while Cornebise (2009, Chapter 4) pursued similar ideas with a scheme that regenerated particles until a certain performance criterion was met. A disadvantage of using the effective sample size is that once a PF loses track of the hidden state, the effective sample size does not provide information for adjusting the number of particles. See other issues related to the effective sample size in Elvira et al. (2018).
A method for selecting the number of particles based on the particle approximation of the variance of the particle estimators was reported by Lee and Whiteley (2018), where the Feynman-Kac framework of Del Moral (2004) was used for the analysis. While well principled, this technique cannot be implemented online. Bhadra and Ionides (2016) employed an autoregressive model for the variance of the estimators produced by the PF, but the resulting method works offline as well. In a group of papers on alive PFs, the number of particles is adaptive and based on sampling schemes that ensure that a predefined number of particles have non-zero weights (LeGland and Oudjane 2005; Jasra et al. 2013; Moral et al. 2015). In Martino et al. (2017), a fixed number of particles is adaptively allocated to several candidate models according to their performances. In Hu et al. (2008), particle sets of the same size are generated until an estimation criterion for their acceptance is met.

Some background
In Elvira et al. (2017), we introduced a methodology for assessing the convergence of PFs that works online and can be applied to a very broad class of state-space models and versions of the PF. The method is based on simulating fictitious observations from one-step-ahead predictive distributions approximated by the PF and comparing them with actual observations that are available at each time step. In the case of one-dimensional observations, a statistic is constructed that simply represents the number of fictitious observations which are smaller than the actual observation. It is proved in Elvira et al. (2017) that, as the PF converges, the predictive statistics become uniform on a discrete support and independent over time. From that result, we proposed an algorithm for statistically testing the uniformity of the predictive statistic and, based on the test, updating the number of particles in the PF.
The same type of predictive statistic has been used as a tool for evaluating ensemble forecasts in weather prediction, under the name of rank statistic; see, e.g., Anderson (1996), Hamill (2001) and Bröcker (2018). Recently, Talts et al. (2018) have also used the same statistic to validate both models and inferential methods in a Bayesian framework. The scheme in the latter paper is similar to the methodology we proposed in Elvira et al. (2017) for Bayesian inference in state-space models.

Contributions
In this paper, we propose a general block-adaptive PF where the number of particles is updated periodically, every W discrete time steps. It is a rather general scheme that provides a common framework for the procedures described in Elvira et al. (2017) and enables us to introduce different versions of the algorithm and to extend the analysis of the methodology.
In particular, we first tackle the question of whether the updates in the number of particles carried out at the end of each block of length W translate into changes in the theoretical error bounds for the Monte Carlo estimators. While this is the kind of behavior one would like to see (e.g., we expect smaller errors when we increase the number of particles), the standard arguments for proving the convergence of the PF yield directly error bounds that depend on the minimum of the number of particles over time.
Here, we use the approach of Del Moral (2004) to prove that, assuming that the state sequence is Markov and mixing, the approximation errors at the end of each block are bounded by the sum of two terms: one that depends on the number of particles in the current block and another one that decreases exponentially with the block length W .
Next, we turn our attention to the analysis of the impact of the number of fictitious observations, K, used by the algorithm to compute the predictive statistics. We first prove that if the predictive statistics with K fictitious observations are uniformly distributed, then the particle approximation of the filtering distribution matches at least the first K elements in a series of moments that characterize the true filter completely. Let us remark that this result is (close to) a converse to Theorem 2 in Elvira et al. (2017): the latter says that when the PF converges, the predictive statistics become uniform, while the new result says that if the predictive statistics with K fictitious observations become uniform, then the PF necessarily converges in the sense of matching at least K moments of the true filter. From this analysis, we deduce an alternative predictive statistic that can be computed (for some models) without sampling any fictitious observations at all, and we establish its connection with the original one.
Finally, we conduct an extensive simulation study that illustrates the theoretical results and provides further insights into the complexity, performance and behavior of the new class of algorithms. We show, for example, that choosing larger values of K leads to more accurate filters with a higher computational cost, while a smaller K yields a faster filter using fewer particles (but incurring larger errors). We also illustrate how the approximation errors change with the number of particles (as predicted by the theory) and how the adaptive PF stabilizes around the same number of particles regardless of the initial condition (i.e., whether it is started with many or few particles). Our simulations also show that the new predictive statistic (without fictitious observations) is effective but sensitive to prediction errors, and hence it leads to higher computational loads.

Organization of the paper
In the next section, we briefly describe particle filtering as a sequential Monte Carlo methodology, then we introduce our notation and a general block-adaptive PF that updates the number of particles periodically and admits different implementations. In Sect. 3, we present our convergence analysis of block-adaptive PFs. In Sect. 4, we provide a detailed analysis of the number of generated fictitious particles and introduce a new predictive statistic that does not require generation of fictitious particles. In the last two sections, we present results of numerical experiments and our conclusions, respectively.

State-space models and particle filtering
We investigate Markov state-space models described by the triplet of probability distributions

X_0 ~ p(x_0), (2.1)
X_t ~ p(x_t|x_{t-1}), (2.2)
Y_t ~ p(y_t|x_t), (2.3)

where t ∈ N denotes discrete time;
- X_t is the system state at time t, i.e., a d_x × 1 random vector taking values in a state space X ⊆ R^{d_x}, and p(x_0) is the a priori probability density function (pdf) of the state;
- p(x_t|x_{t-1}) is the conditional pdf of X_t given X_{t-1} = x_{t-1};
- Y_t ∈ Y ⊆ R^{d_y} is the observation at time t and is assumed conditionally independent of all the other observations given X_t; p(y_t|x_t) is the conditional pdf of Y_t given X_t = x_t. It is often referred to as the likelihood of x_t, when it is viewed as a function of x_t for some fixed y_t.
Based on the model (2.1)-(2.3), we aim at estimating the sequence of posterior probability distributions p(x_t|y_{1:t}), t = 1, 2, ..., recursively. Many schemes addressing this task rely on the decomposition

p(x_t|y_{1:t}) ∝ p(y_t|x_t) ∫ p(x_t|x_{t-1}) p(x_{t-1}|y_{1:t-1}) dx_{t-1},

which relates the so-called filtering pdf at time t, p(x_t|y_{1:t}), to the filtering density at time t − 1, p(x_{t-1}|y_{1:t-1}). Let us denote the filtering and the predictive posterior probability measures as

π_t(dx_t) := p(x_t|y_{1:t}) dx_t and ξ_t(dx_t) := p(x_t|y_{1:t-1}) dx_t, (2.4)

respectively. The measure π_t does not provide any further characterization of the probability distribution compared to the density p(x_t|y_{1:t}); however, Monte Carlo methods (including PFs) yield an approximation of π_t, rather than of the pdf p(x_t|y_{1:t}).
Another function that plays a central role in the methods investigated in this paper is the predictive pdf of the observations, p(y_t|y_{1:t-1}). We denote the associated probability measure as μ_t(dy_t) := p(y_t|y_{1:t-1}) dy_t.
It is well known that the predictive pdf is instrumental for model inference (Andrieu et al. 2010; Djurić and Míguez 2010; Chopin et al. 2012; Crisan and Miguez 2018). The goal of particle filtering algorithms is to estimate sequentially the probability measures {π_t}_{t≥1} as the observations {y_t}_{t≥1} are collected. The basic method for accomplishing this is the bootstrap filter (BF) introduced by Gordon et al. (1993) (see also Doucet et al. 2000).
At time t = 0, the algorithm applies standard Monte Carlo to approximate the prior probability distribution, i.e., we generate M i.i.d. samples x_0^{(m)}, m = 1, ..., M, from the pdf p(x_0). The samples x_0^{(m)} are often termed particles. Assume that the particles can be propagated over time, in such a way that we obtain a Monte Carlo approximation of the filtering distribution at time t − 1 given by the particle set {x_{t-1}^{(m)}}_{m=1}^{M}. At time t, the BF generates an estimate of π_t recursively, by taking three steps:

1. Draw new particles x̄_t^{(m)} ~ p(x_t|x_{t-1}^{(m)}), m = 1, ..., M, from the transition pdf of the model.
2. Compute normalized importance weights w_t^{(m)} ∝ p(y_t|x̄_t^{(m)}), m = 1, ..., M, with Σ_{m=1}^{M} w_t^{(m)} = 1.
3. Resample M times with replacement from the weighted set {x̄_t^{(m)}, w_t^{(m)}}_{m=1}^{M}; multinomial resampling is standard, but other schemes are possible (Li et al. 2015). This yields the new (and non-weighted) particle set {x_t^{(m)}}_{m=1}^{M}.

From the particles and their weights, one can compute estimates of several probability measures and pdfs. The filtering measure π_t can be approximated as

π_t^M(dx_t) = (1/M) Σ_{m=1}^{M} δ_{x_t^{(m)}}(dx_t), (2.5)

where δ_{x_t^{(m)}} represents the Dirac delta measure located at x_t^{(m)} ∈ X. Moreover, at time t, once Y_{1:t-1} = y_{1:t-1} are available but Y_t = y_t has not been observed yet, the predictive pdfs of X_t, denoted p̂_t(x_t) := p(x_t|y_{1:t-1}), and Y_t, denoted p_t(y_t) := p(y_t|y_{1:t-1}), can be approximated as

p̂_t^M(x_t) = (1/M) Σ_{m=1}^{M} p(x_t|x_{t-1}^{(m)}) and p_t^M(y_t) = (1/M) Σ_{m=1}^{M} p(y_t|x̄_t^{(m)}). (2.6)

A key parameter in the standard BF is the number of particles M, which determines both the computational cost of the algorithm and the accuracy of any estimators computed using the particles and weights (Del Moral 2004; Bain and Crisan 2008). While M is fixed in conventional particle filtering methods, the focus of this paper is on algorithms where M can be updated sequentially.
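The BF recursion above can be sketched in a few lines of code. The snippet below is a minimal illustration, not the paper's implementation: the callables `sample_prior`, `propagate` and `log_likelihood` are hypothetical names for the model-specific ingredients, and the toy example at the end is a linear-Gaussian random-walk model.

```python
import numpy as np

def bootstrap_filter(y, M, sample_prior, propagate, log_likelihood, rng):
    """Minimal sketch of the bootstrap filter (BF) with M particles.
    sample_prior(M, rng) draws from p(x_0), propagate(x, rng) draws from
    p(x_t | x_{t-1}), and log_likelihood(y, x) returns log p(y_t | x_t)."""
    x = sample_prior(M, rng)                  # M i.i.d. draws from p(x_0)
    means = []
    for y_t in y:
        x = propagate(x, rng)                 # step 1: propagate particles
        logw = log_likelihood(y_t, x)         # step 2: importance weights
        w = np.exp(logw - logw.max())
        w /= w.sum()                          # normalized weights
        means.append(np.sum(w * x))           # estimate of (f, pi_t) for f(x) = x
        x = x[rng.choice(M, size=M, p=w)]     # step 3: multinomial resampling
    return np.array(means)

# Toy linear-Gaussian model: random-walk state, y_t = x_t + Gaussian noise.
rng = np.random.default_rng(0)
T = 50
x_true = np.cumsum(rng.normal(size=T))
y_obs = x_true + 0.5 * rng.normal(size=T)
est = bootstrap_filter(
    y_obs, M=500,
    sample_prior=lambda M, rng: rng.normal(size=M),
    propagate=lambda x, rng: x + rng.normal(size=x.size),
    log_likelihood=lambda y, x: -0.5 * ((y - x) / 0.5) ** 2,
    rng=rng,
)
```

With M = 500 particles the posterior-mean estimates track the hidden random walk with an error well below the raw observation noise.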

Block-adaptive selection of the number of particles
A generic block-adaptive method for selecting the number of particles is summarized in Table 1. Hereafter, we assume that the observations are one-dimensional (and hence we denote them as y_t) unless explicitly indicated. The methods to be described can be adapted to systems with multidimensional observations in a number of ways; see Elvira et al. (2017, Section IV-E) for a discussion of this topic. Also note that we implement the algorithm based on the standard BF, but it is straightforward to extend the methodology to other PFs.

The block-adaptive method proceeds as follows. In Step 1(a) of Table 1, the filter is initialized with M_0 particles. The particle filter works at each time step in a standard manner with the current number of particles, as described in Step 2(a). The first modification with respect to (w.r.t.) the BF comes in Step 2(b), where K fictitious observations ỹ_t^{(1)}, ..., ỹ_t^{(K)} are drawn from the approximate predictive pdf of the observations, p_t^{M_n}(y_t) (see Elvira et al. 2017, Section IV-A for additional details). Assume that the fictitious observations are ordered, i.e., ỹ_t^{(1)} ≤ ... ≤ ỹ_t^{(K)}, and let y_t be the actual observation at time t. The statistic A_{K,M_n,t} is a r.v. constructed as

A_{K,M_n,t} = Σ_{k=1}^{K} 1(ỹ_t^{(k)} < y_t).

Therefore, A_{K,M_n,t} yields a non-negative integer (from 0 to K) that indicates how many fictitious observations are smaller than the actual observation y_t. We use A_{K,M_n,t} to denote the r.v., while a_{K,M_n,t} indicates a specific realization of it.

The algorithm works with windows of varying size W_n. At the end of the nth window (Step 2(c)), the sequence

S_n = a_{K,M_n,t-W_n+1:t} (2.7)

is collected and processed for assessing the convergence of the filter. The number of particles is adapted (increased, decreased, or kept constant) based on the assessment.

Table 1  Block-adaptive bootstrap particle filter

1. [Initialization]
   (a) Draw M_0 independent samples x_0^{(m)} from the prior p(x_0) and assign uniform weights, i.e., w_0^{(m)} = 1/M_0, m = 1, ..., M_0.
   (b) Set n = 0 (block counter) and choose W_0 > 0 (size of the first block).
2. [For t = 1, 2, ...]
   (a) Bootstrap particle filter: propagate the particles, compute their normalized weights and resample, as in Sect. 2, with M_n particles.
   (b) Draw K fictitious observations from the approximate predictive pdf of the observations and compute a_{K,M_n,t}, i.e., the position of the actual observation y_t within the set of ordered fictitious observations {ỹ_t^{(k)}}_{k=1}^{K}.
   (c) At the end of the nth block:
       - Assess the convergence of the filter using the collected statistics S_n.
       - Set n = n + 1.
       - Select the number of particles M_n > 0.
       - Select the block size W_n > 0.
       - Resample M_n particles, with replacement, from the weighted set {x_t^{(m)}, w_t^{(m)}}.

When we assume that
- the fictitious observations {ỹ_t^{(k)}}_{k=1}^{K} are independently drawn from the same pdf as the actual observation y_t, i.e., the predictive pdf p(y_t|y_{1:t-1}), and
- the actual and fictitious observations are i.i.d. samples from a common continuous (but otherwise arbitrary) probability distribution,

it is relatively straightforward to prove the two propositions below.

Proposition 1 If y_t and ỹ_t^{(1)}, ..., ỹ_t^{(K)} are i.i.d. samples from a common continuous (but otherwise arbitrary) probability distribution, then the pmf of the random variable (r.v.) A_{K,t} is

Q_K(n) = 1/(K+1), n = 0, 1, ..., K,

i.e., A_{K,t} is uniform on the discrete support {0, 1, ..., K}.

Proposition 2 Under the same assumptions, the statistics A_{K,t}, t ≥ 1, are independent over time.
In practical terms, Propositions 1 and 2 suggest that when the approximation errors in the PF are small, i.e., p_t^M(dy_t) ≈ p_t(dy_t), we can expect the statistics in the sequence S_n of Eq. (2.7) to be (nearly) independent and uniformly distributed. Therefore, testing whether the variates are independent and/or uniform is an indirect manner of assessing the convergence of the PF. The key advantage of this approach is that Propositions 1 and 2 do not depend on the specific choice of the transition density p(x_t|x_{t-1}) and the likelihood p(y_t|x_t); therefore, the statistics can be applied to a very general class of state-space models. A detailed analysis of the approximation errors in the statistic A_{K,M,t} and its pmf Q_{K,M,t}(n) is provided in Elvira et al. (2017). In particular, it is proved that lim_{M→∞} Q_{K,M,t}(n) = Q_K(n) almost surely (a.s.) for every t.
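Proposition 1 is easy to verify numerically. In the sketch below (an illustration, not code from the paper), the actual and fictitious observations are drawn from the same Gaussian distribution, and the empirical pmf of A_{K,t} is compared with Q_K(n) = 1/(K+1):

```python
import numpy as np

def a_statistic(y_t, y_fict):
    """Realization a_{K,t}: number of fictitious observations below y_t."""
    return int(np.sum(np.asarray(y_fict) < y_t))

# Empirical check of Proposition 1: when the actual observation and the K
# fictitious ones are i.i.d. from a common continuous distribution, A_{K,t}
# is uniform on {0, ..., K}.
rng = np.random.default_rng(1)
K, trials = 7, 20000
draws = rng.normal(size=(trials, K + 1))     # column 0 plays the actual y_t
counts = np.bincount(
    [a_statistic(row[0], row[1:]) for row in draws], minlength=K + 1
)
freqs = counts / trials                      # each entry should be near 1/(K+1)
```

The empirical frequencies concentrate around 1/8 for K = 7, as predicted.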

Summary of algorithms for adapting the number of particles
The general block-adaptive framework of Table 1 allows for developing different algorithms. Here we outline two specific procedures that differ in the implementation of Step 2(c) of Table 1, specifically in the analysis of the subsequence of statistics denoted S_n = a_{K,M_n,t-W_n+1:t}. The first scheme tests whether the samples in S_n have a uniform distribution, while the second scheme assesses the correlation among the elements of S_n.

Algorithm 1: Uniformity of A_{K,M,t}
This is the scheme originally proposed in Elvira et al. (2017), and it exploits Proposition 1. Under the null hypothesis of perfect convergence of the PF (i.e., p_t^M(dy_t) = p_t(dy_t)), the r.v.'s A_{K,M,t} are statistically uniform. Therefore, the algorithm tests whether the variates in the subset S_n are i.i.d. uniform draws from the set {0, ..., K}. In Elvira et al. (2017), this is done by performing Pearson's χ² test on the set S_n of Eq. (2.7). To be specific, let M_min and M_max be the minimum and maximum numbers of particles, respectively, to be admitted by the PF. Further, let there be two threshold values 0 < p_ℓ < p_h < 1. At the end of the nth block, i.e., at time t = Σ_{j=0}^{n} W_j − 1, Pearson's χ² test is run on S_n and it outputs a p value which we denote as p*_{K,n}. The p value is compared against the thresholds p_ℓ and p_h. In particular,

- if p*_{K,n} < p_ℓ, the number of particles is increased;
- if p*_{K,n} > p_h, the number of particles is decreased;
- otherwise, the number of particles is kept unchanged.

Intuitively, when the Pearson test yields a small p value, this is interpreted as "poor performance" of the PF and the number of particles is increased. If the p value is large, this is interpreted as "good performance" and the computational effort is relaxed (by decreasing M_n). If the p value is "average" (i.e., between the thresholds), then the computational effort is neither increased nor decreased. In the numerical examples in Elvira et al. (2017), the number of particles is increased or decreased by a factor of c = 2, while always keeping M_n between M_min and M_max.
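A minimal sketch of this adaptation rule might look as follows; the thresholds `p_lo` and `p_hi`, the factor c = 2 and the limits `M_min`, `M_max` are illustrative choices, and SciPy's `chisquare` is used for Pearson's χ² test:

```python
import numpy as np
from scipy.stats import chisquare

def adapt_particles(S_n, K, M_n, M_min=64, M_max=65536,
                    p_lo=0.05, p_hi=0.5, c=2):
    """Sketch of Step 2(c) of Algorithm 1 (thresholds and factor c are
    illustrative). S_n holds the realizations a_{K,M_n,t} collected in the
    nth block; Pearson's chi^2 test checks uniformity on {0, ..., K}."""
    counts = np.bincount(np.asarray(S_n), minlength=K + 1)
    p_value = chisquare(counts).pvalue        # H0: uniform on {0, ..., K}
    if p_value < p_lo:                        # poor fit -> more particles
        return min(c * M_n, M_max)
    if p_value > p_hi:                        # good fit -> relax the effort
        return max(M_n // c, M_min)
    return M_n                                # "average" fit -> keep M_n
```

For instance, a degenerate block (all statistics equal) triggers an increase, while a perfectly uniform block triggers a decrease, always clipped to [M_min, M_max].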

Algorithm 2: Correlation of A_{K,M,t}
Under the same null hypothesis of perfect convergence of the PF, the variates in S_n are i.i.d. (Proposition 2). Since independence implies absence of correlation, we can test whether the samples in S_n are correlated, e.g., using the scheme in Elvira et al. (2016). Note that in estimating the autocorrelation of A_{K,M,t}, longer windows (larger values of W_n) may be needed to improve accuracy. However, larger block sizes imply a loss of responsiveness in the adaptation of M_n.
The algorithm proposed in Elvira et al. (2016) follows the same general scheme described in Sect. 2.3.1. At the end of the nth block (time t = Σ_{j=0}^{n} W_j − 1), a Pearson's correlation coefficient r_n is computed from the statistics in the set S_n, and a statistical test based on the Student's t-distribution is performed in order to obtain the p value p*_{K,n}. This p value is used in the same way as in Sect. 2.3.1 in order to increase, decrease or maintain the number of particles M_n.
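A lag-1 version of this test can be sketched as follows (an illustration under our own choice of lag, not the exact procedure of Elvira et al. 2016):

```python
import numpy as np
from scipy.stats import t as student_t

def correlation_pvalue(S_n):
    """Sketch of Step 2(c) of Algorithm 2: lag-1 Pearson correlation
    coefficient of the statistics in S_n, followed by the usual two-sided
    Student-t test for the null hypothesis of zero correlation."""
    a = np.asarray(S_n[:-1], dtype=float)
    b = np.asarray(S_n[1:], dtype=float)
    r = np.corrcoef(a, b)[0, 1]               # lag-1 Pearson coefficient r_n
    df = len(a) - 2
    t_stat = r * np.sqrt(df / (1.0 - r ** 2)) # t statistic for H0: r = 0
    return 2.0 * student_t.sf(abs(t_stat), df)
```

A strongly autocorrelated block (a symptom of a poorly tracking filter) drives the p value toward zero, triggering an increase in M_n.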

Error bounds for block-adaptive particle filters
We present an analysis of the class of block-adaptive filters outlined in Table 1, with either fixed (W_n = W for all n) or adaptive (W_n updated together with M_n) block size, from a viewpoint that was ignored in Elvira et al. (2017). To be specific, we prove that at the end of the nth window, the error bounds for the estimators computed using the weighted particle set {x_t^{(m)}, w_t^{(m)}}_{m=1}^{M_n} can be written as a function of the current number of particles M_n, provided that the optimal filter π_t satisfies a stability condition (Del Moral 2004). If one were to rely directly on classical convergence results for algorithms with a fixed number of particles, the bounds would depend on the minimum number of particles used up to the nth block, M_n^min := min_{0≤j≤n} M_j. The current number of particles, M_n, can be considerably larger than M_n^min and, as a consequence, the error bound can be remarkably smaller.

Notation
For notational clarity and conciseness, let κ_t(dx_t|x_{t-1}) denote the Markov kernel that governs the dynamics of the state sequence {x_t}_{t>0} and write g_t^{y_t}(x_t) := p(y_t|x_t) to indicate the conditional pdf of the observations.
We analyze the algorithm outlined in Table 1, which is essentially a BF with M_n particles in the nth time window, where W_j is the length of the jth window. As pointed out, the theoretical results we introduce are valid both for variable window lengths and for fixed W_n = W. Our analysis also holds independently of the update rule for M_n, as long as only positive values are permitted. Specifically, we assume that there is a positive lower bound M_min such that M_n ≥ M_min for every n ≥ 0. In practice, we usually have a finite upper bound M_max ≥ M_n as well (but this plays no role in the analysis).
For any integrable real function f : X → R and a probability measure α, we use the shorthand notation

( f, α) := ∫ f(x) α(dx)

for integrals with respect to α. If α has a pdf a(x), we also denote ( f, a) := ( f, α) when convenient. Intuitively, we aim at proving that the bounds for the approximation errors effectively change when the number of particles M_n is updated. Since the measures π_t^{M_n} are random, the approximation errors ( f, π_t^{M_n}) − ( f, π_t) are real r.v.'s, and we can assess their L_p norms. We recall that for a real r.v. Z, the L_p norm of Z, with p ≥ 1, is

‖Z‖_p := (E[|Z|^p])^{1/p},

where E[·] denotes the expected value w.r.t. the distribution of the r.v.

Error bounds
We show hereafter that, by the end of the nth block of observations, the approximation error |( f, π_{t_n}^{M_n}) − ( f, π_{t_n})|, where t_n = Σ_{j=0}^{n} W_j − 1 and f is a bounded real function, can be upper bounded by a function that depends on the current number of particles M_n and "forgets" past errors exponentially fast. This is true under certain regularity assumptions that we detail below. Let us introduce the prediction-update (PU) operators Ψ_t that generate the sequence of filtering probability measures π_t given a prior measure π_0, the sequence of kernels κ_t and the likelihoods g_t^{y_t}.
Definition 1 Let B(X) denote the Borel σ-algebra of subsets of X and let P(X) be the set of probability measures on the space (X, B(X)). We construct the sequence of PU operators Ψ_t : P(X) → P(X), t ≥ 1, that satisfy

( f, Ψ_t(α)) = ( f g_t^{y_t}, κ_t α) / (g_t^{y_t}, κ_t α), t = 1, 2, ..., (3.1)

for any α ∈ P(X) and any integrable real function f, where κ_t α(dx_t) = ∫ κ_t(dx_t|x') α(dx') is the result of applying the Markov kernel κ_t to the probability measure α.

It is not hard to see that Definition 1 implies that π_t = Ψ_t(π_{t-1}). In order to represent the evolution of the sequence of filtering measures over several time steps, we introduce the composition of operators

Ψ_{t|t-r} := Ψ_t ∘ Ψ_{t-1} ∘ ... ∘ Ψ_{t-r+1}.

It is apparent that π_t = Ψ_{t|t-r}(π_{t-r}). The composition operator Ψ_{t|t-r} is most useful for representing the filters obtained after r consecutive steps when we start from different probability measures at time t − r, i.e., for comparing Ψ_{t|t-r}(α) and Ψ_{t|t-r}(β) for α, β ∈ P(X).
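For a finite state space, the PU operator reduces to a matrix-vector product followed by a weighted normalization, which makes Definition 1 easy to illustrate. All numbers below are purely illustrative; the loop also previews the forgetting property discussed next, by driving two different initial measures through the same sequence of PU steps:

```python
import numpy as np

def pu_operator(alpha, kernel, g):
    """Finite-state sketch of the PU operator Psi_t of Definition 1:
    prediction with the Markov kernel kappa_t, then a Bayes update with
    the likelihood vector g = (g_t^{y_t}(x))_x."""
    predicted = kernel.T @ alpha      # kappa_t alpha (prediction step)
    unnorm = g * predicted            # weight by the likelihood (update step)
    return unnorm / unnorm.sum()      # normalize to a probability vector

# Illustrative two-state model.
kernel = np.array([[0.9, 0.1],
                   [0.2, 0.8]])       # kappa(x_t | x_{t-1}); rows sum to 1
g = np.array([0.5, 0.05])             # likelihood values for the observed y_t

# Forgetting: two very different initial measures are driven together by the
# composition Psi_{t|t-r} (the kernel has strictly positive entries, so a
# mixing condition of the type in Assumption 1 holds).
alpha = np.array([0.5, 0.5])
beta = np.array([0.01, 0.99])
for _ in range(40):
    alpha = pu_operator(alpha, kernel, g)
    beta = pu_operator(beta, kernel, g)
gap = float(np.abs(alpha - beta).sum())
```

After 40 PU steps the two filter sequences are numerically indistinguishable, illustrating the exponential forgetting of the initial condition.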
In our analysis, we assume that the kernels κ_t(dx_t|x_{t-1}) satisfy a mixing assumption (Del Moral 2004; Künsch 2005). While this can be stated in various ways, we follow the approach in Del Moral (2004), which requires the composition of kernels to mix sufficiently over several time steps.

Assumption 1 (Mixing kernel) Let us write

κ_{t|t-m}(dx_t|x_{t-m}) := (κ_t ∘ κ_{t-1} ∘ ... ∘ κ_{t-m+1})(dx_t|x_{t-m})

for the composition of m consecutive Markov kernels. There exist an integer m ≥ 1 and a constant ε_m > 0, independent of t and S, such that

κ_{t|t-m}(S|x) ≥ ε_m κ_{t|t-m}(S|x') for every S ∈ B(X) and every x, x' ∈ X.

Assumption 1 implies that the sequence of optimal filters generated by the operators Ψ_t, t ≥ 1, is stable (Del Moral and Guionnet 2001). To be specific, it can be proved (Del Moral and Guionnet 2001; Del Moral 2004) that, under Assumption 1, the discrepancy between Ψ_{t|t-r}(α) and Ψ_{t|t-r}(β), for any α, β ∈ P(X), vanishes as r → ∞, exponentially fast. The intuitive meaning is that such sequences "forget" their initial condition over time. It also implies that approximation errors are forgotten over time when propagated through the operators Ψ_t, a fact that is often exploited in the analysis of PFs. The strongest assumption in our analysis is that the sequence of likelihoods is uniformly bounded away from zero (as well as upper bounded), as specified below.
Assumption 2 (Bounds) There exists a constant γ > 0 such that

γ ≤ g_t^{y_t}(x) ≤ 1

for every t ≥ 1 and every x ∈ X.
Assumption 2 depends not only on the form of the likelihood g_t^{y_t}(x) = p(y_t|x_t = x) but also on the specific sequence of observations y_1, y_2, ... While it may appear restrictive, this assumption is rather typical in the analysis of PFs (see Del Moral 2004; Künsch 2005; Gupta et al. 2015; Crisan and Miguez 2017) and is expected to hold naturally when the state space X is compact (as well as in other typical scenarios; see footnote 4 for an example). Also note that any bounded likelihood function can be normalized to guarantee g_t^{y_t} ≤ 1. The error bounds for the estimator ( f, π_{t_n}^{M_n}) are made precise by the following statement.
Theorem 1 Let t_n = Σ_{j=0}^{n} W_j − 1 and let π_{t_n}^{M_n} be the particle approximation of the filtering measure π_{t_n} produced by the block-adaptive BF in Table 1. If Assumption 1 (mixing kernel) and Assumption 2 (bounds) hold, then for any p ≥ 1 the approximation error

sup_{‖f‖_∞ ≤ 1} ‖( f, π_{t_n}^{M_n}) − ( f, π_{t_n})‖_p

is upper bounded by the sum of two terms: a term of order 1/√M_n, with a constant factor that depends on C, γ and ε_m, and a term that decays exponentially with the window length W_n, at a rate determined by ε_m and m. Here, sup_{‖f‖_∞ ≤ 1} denotes the supremum over all real functions f : X → R with ‖f‖_∞ ≤ 1, and the real constants C < ∞, γ > 0 and ε_m > 0, as well as the integer m ≥ 1, are independent of n, M_n and W_n.
See Appendix A for a proof. Theorem 1 states that the error bound at the end of the current (nth) block depends on the error at the end of the previous ((n − 1)th) block plus an additional term that depends on the number of particles, M_n, used within the current block. Moreover, the block size W_n can be chosen in such a way that the "inherited error" due to, e.g., a lower number of particles M_{n-1} in the (n − 1)th block is forgotten when a sufficiently large window length W_n is selected in the nth block. This is due to the stability property of the PU operator Ψ_t (which is guaranteed under Assumption 1).

Footnote 4: For example, suppose that the observations are collected by sensors with limited sensitivity. To see this, consider a sensor located at s that measures the power transmitted by an object located at x. Assuming free space, the received power (in dB) can be modeled as

y = 10 log_10 (P_0 ‖s − x‖^{−2} + η) + z,

where z ~ N(0, σ²) is Gaussian noise, P_0 is the transmitted power, and the parameter η > 0 determines the sensitivity of the sensor. The likelihood function is

g^y(x) ∝ exp{ −(1/2σ²) (y − 10 log_10(P_0 ‖s − x‖^{−2} + η))² }.

As a consequence, when ‖s − x‖ → ∞ the sensor observes y = 10 log_10(η) + z independently of the target position x and, in particular, for fixed s we have

lim_{‖x‖→∞} g^y(x) ∝ exp{ −(1/2σ²) (y − 10 log_10 η)² } > 0.

Intuitively, the sensor cannot "see" targets which are too far away.
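The sensor model of footnote 4 can be checked directly. In the snippet below (illustrative parameter values), the log-likelihood remains finite, and essentially constant, as the target moves arbitrarily far from the sensor:

```python
import numpy as np

def sensor_loglik(y, x, s=0.0, P0=1.0, eta=0.1, sigma=1.0):
    """Unnormalized log-likelihood of the limited-sensitivity sensor in
    footnote 4 (illustrative parameters): the sensor at s receives
    y = 10 log10(P0 * |s - x|^(-2) + eta) + z with z ~ N(0, sigma^2)."""
    mean = 10.0 * np.log10(P0 * np.abs(s - x) ** -2.0 + eta)
    return -0.5 * ((y - mean) / sigma) ** 2

# For a target far from the sensor, the predicted mean tends to
# 10 log10(eta), so the likelihood stays bounded away from zero.
y_far = 10.0 * np.log10(0.1)          # a plausible far-field reading
loglik_far = sensor_loglik(y_far, x=1e6)
```

The likelihood therefore admits a strictly positive lower bound of the kind required by Assumption 2, even for unbounded state spaces.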

Analysis of the number of fictitious observations, K
The performance analysis of Elvira et al. (2017) establishes the main results needed for a principled, online adaptation of the number of particles M, but leaves a number of questions unanswered. One of them, whether the error bounds of the particle estimators change as we update the number of particles online, has been addressed in Sect. 3. Two other major questions are (i) whether the statistics A_{K,M_n,t} becoming uniform is sufficient for the particle approximations p_t^{M_n} and π_t^{M_n} to converge toward p_t and π_t, respectively (the analysis in Elvira et al. (2017) only shows that this is necessary), and (ii) how the choice of the number of fictitious observations K affects the performance of the adaptive algorithm (i.e., the approximation error of either p_t^{M_n} or π_t^{M_n}).
We tackle these two issues in this section. To be precise, we prove that if the statistics A_{K,M_n,t} are uniform r.v.'s for every K ∈ N, then the approximate predictive pdf p_t^{M_n}(y) becomes equal to the actual density p_t(y) almost everywhere. This result serves as a converse for Theorem 2 in Elvira et al. (2017), which states that p_t^{M_n}(y) → p_t(y) as M_n → ∞ implies that the statistics A_{K,M_n,t} become uniform r.v.'s. Intuitively, the new result ensures that if the A_{K,M_n,t}'s are well behaved then so are the particle approximations p_t^{M_n}. Our analysis also provides insight into the choice of K < ∞. Specifically, it yields a quantitative interpretation of how p_t^{M_n} becomes closer to p_t when A_{K,M_n,t} is uniform for larger and larger K.
As a by-product of this analysis, we identify an alternative statistic B_t (and its particle estimator B_t^{M_n}) that can be used for assessing the performance of the particle filter without generating fictitious observations. This alternative statistic admits an interpretation as the limit of the sequence K^{-1} A_{K,M_n,t} as K → ∞ and, therefore, it inherits the key theoretical properties of the statistics A_{K,M_n,t}.

A converse theorem
Let us consider the true predictive density p_t(y) and an approximation, computed via particle filtering or otherwise, that we denote as p̂_t(y). In this subsection, we drop the number of particles M_n from the notation because the results to be presented are valid regardless of the type of approximation of the predictive distribution of the observations; that is, it does not matter whether it is approximated by a Monte Carlo-based method or obtained via an analytical approach. The analysis in this section relies to a large extent on the properties of the cumulative distribution functions (cdf's) associated to p_t(y) and p̂_t(y), which we denote as F_t(·) := (1_{(−∞,·)}, p_t) and F̂_t(·) := (1_{(−∞,·)}, p̂_t), respectively. The statistic Â_{K,t} is constructed by drawing K i.i.d. fictitious observations from p̂_t, ordering them, and then computing the relative position of the actual observation y_t (distributed according to p_t) within the ordered fictitious observations. From Proposition 1, we know that if p_t = p̂_t then Â_{K,t} is uniform for every K ∈ N. Here, we pose the reverse question: if Â_{K,t} is uniform for every K ∈ N, can we claim that p_t = p̂_t? Moreover, if only some statistic Â_{K,t} is uniform (i.e., for some finite K ∈ N), can we expect p̂_t to be close to p_t in some quantitative, well-defined sense?
Our analysis relies on two basic results in probability theory.
Lemma 1 Let Y be a continuous real r.v. on a probability space (Ω, F, P) and let p denote a pdf, with cdf F(·) = (1 (−∞,·) , p). The r.v. F(Y ) has uniform distribution U(0, 1) if, and only if, Y is distributed according to p.
Using the basic lemmas above, we establish the key result that relates the approximate cdf F̂_t to the true functions F_t and p_t.

Theorem 2 Assume the observation Y_t is a continuous r.v. with pdf p_t and cdf F_t. Let the pdf p̂_t and its associated cdf F̂_t(·) := (1_{(−∞,·)}, p̂_t) be estimates of p_t and F_t, respectively. If the r.v. Â_{K,t} constructed from p̂_t is uniform, then

E[F̂_t(Y_t)^n] = 1/(n+1), n = 0, 1, ..., K.

Remark 1 Let Y_t be the actual observation with pdf p_t. Given the actual cdf F_t and its estimate F̂_t, we can construct the r.v.'s

B_t := F_t(Y_t) and B̂_t := F̂_t(Y_t).

By Lemma 1, B_t has uniform distribution U(0, 1) and hence E[B_t^n] = 1/(n+1) for every n ≥ 0. Theorem 2 guarantees that if Â_{K,t} is uniform, then B_t and B̂_t share their first K moments. This is a quantitative characterization of the similarity between F_t and F̂_t. In particular, if Â_{K,t} is uniform for every K ∈ N, we have E[B̂_t^n] = E[B_t^n] = 1/(n+1) for every n ∈ N and, as a consequence, F̂_t = F_t and p̂_t(y) = p_t(y) almost everywhere in the observation space.

Assessment without fictitious observations: the statistic B_t
If Y_t is a continuous r.v. with pdf p_t, then the sequence of r.v.'s B_t = F_t(Y_t), t = 1, 2, ..., is i.i.d. with common distribution U(0, 1). From Remark 1, it is apparent that we can use the particle filter to compute estimators B̂_t ≡ B_t^{M_n} over a window of observations (i.e., for t_{n−1} < t ≤ t_n) and then use the estimates to assess the performance of the filter. Two straightforward approaches to performing this assessment are:
- testing for uniformity in (0, 1) of the estimates b_{t_{n−1}+1}^{M_n}, ..., b_{t_n}^{M_n}, or
- evaluating the sample moments (1/W_n) Σ_{t=t_{n−1}+1}^{t_n} (b_t^{M_n})^m, which should be close to 1/(m+1), according to Theorem 2, when the particle filter is "performing well."
Note, however, that calculating the b_t^{M_n}'s from the observation y_t demands the ability to integrate the conditional pdf of the observations g_{y_t}(x_t) = p(y_t | x_t). This is a straightforward numerical task when the observation noise is additive and Gaussian, but it may not be possible for other models.
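In the additive Gaussian case mentioned above, y_t = h(x_t) + v_t with v_t ∼ N(0, σ_v²), the estimate b_t^{M_n} reduces to a weighted sum of Gaussian cdf's evaluated at y_t. A minimal sketch (the function h, the particle set, and the weights are placeholders supplied by the user; this is our illustration, not the authors' code):

```python
import numpy as np
from scipy.stats import norm

def b_statistic(particles, weights, y_obs, h, sigma_v):
    """Estimate b_t = P(Y_t <= y_obs) under the particle approximation of the
    predictive pdf of the observation, assuming additive Gaussian observation
    noise y_t = h(x_t) + v_t with v_t ~ N(0, sigma_v^2).
    """
    # Predictive cdf at y_obs: sum_m w_m * Phi((y_obs - h(x_m)) / sigma_v)
    return float(np.sum(weights * norm.cdf((y_obs - h(particles)) / sigma_v)))
```

If the filter is performing well, the values b_t^{M_n} collected over a window should look like draws from U(0, 1), which is what the two assessment procedures above test.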
Provided it can be computed, the statistic B_t^{M_n} converges to the actual r.v. B_t as M_n → ∞ under the basic assumptions (L), (D) and (C) of Elvira et al. (2017), restricted here to the case of scalar observations. To be specific, we have the following result.

Proposition 3 Under Assumptions (L), (D) and (C), the statistic B_t^{M_n} converges to B_t as M_n → ∞.

Proof Recall that B_t = (1_{(−∞,Y_t)}, p_t) and B_t^{M_n} = (1_{(−∞,Y_t)}, p_t^{M_n}). Therefore, Proposition 3 is a straightforward consequence of Theorem 1 in Elvira et al. (2017), provided that Assumptions (L), (D), and (C) hold.
Finally, we note the strong connection between the statistics B_t^{M_n} and A_{K,M_n,t}. Recall that A_{K,M_n,t} represents the number of fictitious observations that are smaller than y_t, while B_t^{M_n} represents the probability ∫_{−∞}^{Y_t} p_t^{M_n}(y) dy. Intuitively, as K → ∞, the empirical fraction of fictitious observations smaller than Y_t should converge to the probability of a fictitious observation being smaller than Y_t. More precisely, we can state the proposition below.

Proposition 4 If Y_t ∼ p_t is a continuous r.v., then lim_{K→∞} K^{-1} A_{K,M_n,t} = B_t^{M_n} almost surely.
It is possible to estimate this integral by drawing K samples ỹ_t^{(k)}, k = 1, ..., K, from μ_t^M and building the standard Monte Carlo estimator K^{-1} Σ_{k=1}^K 1_{(−∞,Y_t)}(ỹ_t^{(k)}) = K^{-1} A_{K,M,t}. This estimator is unbiased and, according to the strong law of large numbers, it converges almost surely to B_t^M as K → ∞.
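The convergence K^{-1} A_{K,M,t} → B_t^M can be illustrated numerically. In the sketch below (ours, with arbitrary made-up numbers), a simple equally weighted Gaussian mixture stands in for the particle-approximate predictive distribution μ_t^M; the fraction of K fictitious observations below y_t approaches the mixture cdf evaluated at y_t as K grows.

```python
import numpy as np
from math import erf

rng = np.random.default_rng(1)

# Stand-in for the particle-approximate predictive distribution mu_t^M:
# an equally weighted Gaussian mixture centred at hypothetical h(x_t^(m)) values.
centers = rng.normal(size=200)
sigma_v = 0.5
y_t = 0.3  # the actual observation (arbitrary for this illustration)

def a_over_k(K):
    """Draw K fictitious observations from the mixture and return A_{K,M,t}/K,
    the fraction of fictitious observations that fall below y_t."""
    idx = rng.integers(0, len(centers), size=K)
    y_fict = centers[idx] + sigma_v * rng.normal(size=K)
    return float(np.mean(y_fict < y_t))

def b_exact():
    """The K -> infinity limit: the mixture cdf evaluated at y_t, i.e. B_t^M."""
    z = (y_t - centers) / (sigma_v * np.sqrt(2.0))
    return float(np.mean([0.5 * (1.0 + erf(zi)) for zi in z]))

for K in (10, 100, 10_000):
    print(K, a_over_k(K), b_exact())
```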

Numerical experiments
In the first experiment, we show the relation between the correlation coefficient of A_{K,M,t} and the MSE of an estimator obtained from the particle approximation in a nonlinear state-space model. Then, we complement the results of Sect. 4.1, showing numerically some properties of A_{K,M,t} for different values of K and M, and their connection to the statistic B_{M,t}. Third, we illustrate numerically the convergence of the block-adaptive BF.

Assessing convergence from the correlation of A_{K,M,t}
Consider the stochastic growth model (see, e.g., Djurić and Míguez 2010),

x_t = x_{t−1}/2 + 25 x_{t−1}/(1 + x_{t−1}^2) + 8 cos(φt) + u_t,
y_t = x_t^2/20 + v_t,

where φ = 0.4, and u_t and v_t are independent Gaussian r.v.'s with zero mean and variances σ_u^2 and σ_v^2, respectively. At this point, we define two models:
- Model 1: σ_u = 1 and σ_v = 0.5,
- Model 2: σ_u = 2 and σ_v = 0.1.
In this example, we ran the BF for T = 5,000 time steps, always with a fixed number of particles M. We tested different values, namely M ∈ {2, 2^2, 2^3, ..., 2^12}. In order to assess the behavior of A_{K,M,t}, we set K = 7 fictitious observations. Figure 1a shows the mean squared error (MSE) of the estimate of the posterior mean for each value of M, which obviously decreases as M increases. Figure 1b displays the p value of Pearson's χ² test for assessing the uniformity of A_{K,M,t} (on the domain A_{K,M,t} ∈ {0, ..., K}) in windows of length W = 20 (Algorithm 1 of Sect. 2.3; see more details in Elvira et al. 2017). Clearly, increasing the number of particles also increases the p value, i.e., the distribution of the statistic becomes closer to the uniform distribution. Figure 1c is related to Algorithm 2 of Sect. 2.3.2. We show the sample Pearson correlation coefficient r, computed with lag τ = 1 over the whole sequence of statistics {a_{K,M,t}}_{t=1}^T. All results are averaged over 200 independent runs.
We observe that increasing M decreases the correlation between consecutive statistics. It is interesting to note that the curve of the correlation coefficient r has a shape very similar to that of the MSE curve and, hence, can be used as a proxy for it. While r can be easily computed, the MSE is unknown in practice. This shows the utility of Algorithm 2.
It can be seen that both algorithms can detect a malfunctioning of the filter when the number of particles is insufficient. We note that Algorithm 2 works better for Model 1 than for Model 2, because the autocorrelation of the statistics is more sensitive to the malfunctioning at low M. However, Algorithm 1 works better for Model 2, because the p value of the uniformity test is always smaller than in Model 1, i.e., it is more discriminative. Therefore, there is no clear superiority of one algorithm over the other.
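The pipeline of this first experiment can be sketched as follows. This is an illustrative reimplementation, not the authors' code: the growth-model equations are the standard form we assume for this benchmark, and M and T are reduced for brevity. At each step, a bootstrap filter propagates and reweights the particles, and K fictitious observations are drawn to form the statistic a_{K,M,t}.

```python
import numpy as np

rng = np.random.default_rng(2)
phi, sigma_u, sigma_v = 0.4, 1.0, 0.5   # Model 1 parameters from the text
M, K, T = 256, 7, 200                   # particles, fictitious observations, steps

def f(x, t):
    # Transition mean of the growth model (assumed standard form)
    return 0.5 * x + 25.0 * x / (1.0 + x**2) + 8.0 * np.cos(phi * t)

def h(x):
    return x**2 / 20.0

# Simulate a synthetic trajectory from the model
x_true = np.zeros(T)
y_obs = np.zeros(T)
x = rng.normal()
for t in range(T):
    x = f(x, t) + sigma_u * rng.normal()
    x_true[t] = x
    y_obs[t] = h(x) + sigma_v * rng.normal()

# Bootstrap filter that also computes a_{K,M,t} at every step
particles = rng.normal(size=M)
a_stats = np.zeros(T, dtype=int)
for t in range(T):
    particles = f(particles, t) + sigma_u * rng.normal(size=M)   # propagate
    # K fictitious observations from the approximate predictive distribution
    y_fict = h(particles[rng.integers(0, M, size=K)]) + sigma_v * rng.normal(size=K)
    a_stats[t] = int(np.sum(y_fict < y_obs[t]))  # statistic in {0, ..., K}
    logw = -0.5 * ((y_obs[t] - h(particles)) / sigma_v) ** 2      # BF log-weights
    w = np.exp(logw - logw.max())
    w /= w.sum()
    particles = particles[rng.choice(M, size=M, p=w)]             # resampling
```

The sequence a_stats is then fed either to a per-window uniformity test (Algorithm 1) or to the lag-1 sample correlation coefficient r (Algorithm 2).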

Effect of the choice of the number of fictitious observations K
In this experiment, we evaluate the effect of the value of K on the performance of the uniformity test of the statistic A_{K,M,t}. We use the same model parameters as in the previous example. First, we fix the number of particles, M ∈ {2, 4, 16, 64, 256, 1024, 4096}, for each run during the T time steps (i.e., no adaptation is performed). Then, with W = 15 and K ∈ {2, 3, 5, 7, 10, 20}, we compute the p value of Pearson's χ² test for assessing the uniformity of the statistic A_{K,M,t}. In Table 2, we show the p value averaged over 1000 independent runs for all combinations of K and M. We can see that the misbehavior of the filter with a low number of particles M can be detected regardless of the number of fictitious observations K. For larger K, the p value generally decreases, but the difference is not significant, which confirms our hypotheses in Sect. 4: (a) the framework is robust to the selection of K; (b) increasing K increases the detection power of the algorithms; (c) for reasonable values of K, when the filter misbehaves, the assessment of the uniformity of A_{K,M,t} detects the misbehavior (and when the filter works well, there are no false alarms even with large K); and (d) a small K can be selected, which implies a low extra computational complexity of the proposed methodology.
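The per-window uniformity check used throughout these experiments can be sketched as follows: collect the W realizations of a_{K,M,t} in a window, build their histogram over the K + 1 possible values, and apply Pearson's χ² test against the uniform pmf (here via scipy.stats.chisquare; the simulated windows are stand-ins of our own, not data from the paper).

```python
import numpy as np
from scipy.stats import chisquare

rng = np.random.default_rng(3)
K, W = 7, 200  # fictitious observations per step, window length

def window_p_value(a_window, K):
    """Pearson chi-square p value for uniformity of a_{K,M,t} over {0, ..., K}."""
    counts = np.bincount(a_window, minlength=K + 1)
    # Under H0, every one of the K+1 values is equally likely within the window,
    # which is chisquare's default expected-frequency assumption.
    _, p = chisquare(counts)
    return float(p)

# A well-behaved filter produces statistics that look uniform on {0, ..., K} ...
a_good = rng.integers(0, K + 1, size=W)
# ... while a failing filter piles the statistic up at the extremes.
a_bad = np.where(rng.random(W) < 0.5, 0, K)

print(window_p_value(a_good, K), window_p_value(a_bad, K))
```

A small p value in a window is the signal used by Algorithm 1 to increase the number of particles; a large one allows decreasing it.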
In a second experiment, we implement Algorithm 1 described in Sect. 2.3 on the same model, now using T = 10^4 as the length of the time series. We set the algorithm parameters to p_ℓ = 0.2, p_h = 0.6, W ∈ {50, 200}, and an initial number of particles M_0 ∈ {16, 1024}. In Table 3, we show the number of particles, averaged over the last 50 windows of the adaptive algorithm, for K ∈ {2, 3, 5, 7, 9, 10, 11, 15, 20}. The results are also averaged over 100 independent runs. Figure 2 shows the same averaged number of particles as a function of K. We see again that the selected number of particles does not depend much on K, as observed in the previous experiment. We also see that the window length does have an effect, with a larger W requiring a higher number of particles. The reason is that a larger W implies that more realizations of the statistic are observed, so in cases where the filter is tracking but with some non-negligible errors, the statistical test is more likely to reject the null hypothesis as more evidence is accumulated. Figures 3 and 4 show the averaged number of particles (over 100 independent runs) as a function of time. In Fig. 3, each subplot is obtained by fixing M_0 ∈ {16, 128, 1024}, and each line represents the evolution of the number of particles for each K ∈ {2, 3, 5, 7, 9, 15, 20}. We see that for most values of K, the averaged number of particles is very similar. It is interesting to note that at some stages the required number of particles is larger, and this is better detected with slightly larger values of K. In Fig. 4, we show the same information, but now each subplot is obtained by fixing K ∈ {5, 9}, and each line represents the evolution of the number of particles for each initialization M_0 ∈ {16, 128, 1024}. We note that, regardless of the initial number of particles, after around 3,000 time steps the averaged number of particles is the same.

Behavior of A_{K,M,t} and its relation with B_{M,t}
In Fig. 5, we show the histograms of A_{K,M,t} and B_{M,t} for the stochastic growth model described in (5.1) and (5.2). We set K ∈ {3, 5, 7, 10, 20, 50, 100, 1000, 5000}. The BF is run with a fixed number of particles M = 2^14. As K grows, the pmf of A_{K,M,t} seems to converge to the pdf of B_{M,t}.
In Table 5, we show the average absolute error (distance) between the realizations of the r.v.'s A_{K,M,t}/K and B_{M,t} for the stochastic growth model with fixed M = 2^14. The results are averaged over T = 100 time steps in 100 independent runs. It is clear that, as K grows, the deviation between the two r.v.'s, which both take values in (0, 1), decreases. For K = 5000, the difference is on average 0.43%.

Forgetting property in the block-adaptive bootstrap particle filter
In this section, we assess the approximation errors when the block-adaptive BF increases the number of particles. To that end, we run two specific state-space models where, in the first half of the time steps, the number of particles is set to M_1 while, in the second half, the number of particles is M_2 > M_1. We then compute the MSE of the predicted observations (in the last quarter of the time steps), and we compare it with that of a standard BF run with M_2 particles from the beginning. Table 6 shows the MSE of a BF run on the linear Gaussian model described by (3), with T = 1000, σ_u = √0.5, σ_v = 1, and a = 0.9. We simulate one example with M_1 = 100 and M_2 = 1000 (left side of the table), and another one with M_1 = 1000 and M_2 = 10,000 (right side of the table). In both cases, we are able to show that the BF achieves, in the last quarter of time steps (from t = 750 to t = T = 1000), the same MSE as if the largest number of particles had been set at the very beginning. Table 7 presents analogous results for the stochastic growth model described in the first experiment, with T = 1000, σ_x = 1, σ_y^2 = 0.1, and φ = 0.4. Now we simulate the BF with the following pairs of particle numbers: (M_1, M_2) ∈ {(50, 1000), (200, 4000), (1000, 20,000)}. We arrive at the same conclusions.
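The forgetting experiment can be sketched on a linear Gaussian model x_t = a x_{t−1} + u_t, y_t = x_t + v_t (the identity observation equation and the reduced run length are assumptions of this sketch, not taken from the paper): run a BF whose particle count switches from M_1 to M_2 halfway through, and compare the MSE of the one-step predicted observations over the last quarter against a BF that uses M_2 throughout.

```python
import numpy as np

rng = np.random.default_rng(4)
a, sigma_u, sigma_v, T = 0.9, np.sqrt(0.5), 1.0, 1000

# Simulate the linear Gaussian state-space model
x = np.zeros(T)
y = np.zeros(T)
xc = 0.0
for t in range(T):
    xc = a * xc + sigma_u * rng.normal()
    x[t] = xc
    y[t] = xc + sigma_v * rng.normal()

def bf_predictive_mse(schedule):
    """Run a bootstrap filter whose particle count at time t is schedule[t];
    return the MSE of the one-step predicted observation over the last quarter."""
    particles = rng.normal(size=schedule[0])
    sq_err = []
    for t in range(T):
        if schedule[t] != len(particles):  # block-adaptive change of M
            particles = rng.choice(particles, size=schedule[t])
        particles = a * particles + sigma_u * rng.normal(size=len(particles))
        y_pred = particles.mean()  # predicted observation E[y_t | y_{1:t-1}]
        if t >= 3 * T // 4:
            sq_err.append((y[t] - y_pred) ** 2)
        logw = -0.5 * ((y[t] - particles) / sigma_v) ** 2
        w = np.exp(logw - logw.max())
        w /= w.sum()
        particles = particles[rng.choice(len(particles), size=len(particles), p=w)]
    return float(np.mean(sq_err))

adaptive = np.r_[np.full(T // 2, 100), np.full(T - T // 2, 1000)]
fixed = np.full(T, 1000)
m_adapt = bf_predictive_mse(adaptive)
m_fixed = bf_predictive_mse(fixed)
print(m_adapt, m_fixed)
```

Consistent with the forgetting property discussed above, the two MSE values over the last quarter should be close, since past approximation errors incurred with M_1 particles are forgotten exponentially fast after the switch to M_2.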

Summary and conclusions
In this paper, we have provided new methodological, theoretical and numerical results on the performance of particle filtering algorithms with an adaptive number of particles. We have looked into a class of PFs that update the number of particles periodically, at the end of observation blocks of a prescribed length. Decisions on whether to increase or decrease the computational effort are made automatically, based on predictive statistics which are computed by generating fictitious observations, i.e., particles in the observation space. For this type of algorithm, we have proved that: (a) The error bounds for the adaptive PF depend on the current number of particles (say, M_n) and the dependence on former values (say M_{n−1}, M_{n−2}, ...) decays exponentially with the block length. This result holds under standard assumptions on the Markov kernel of the state-space model (as discussed in Del Moral (2004), Künsch (2005) and others). This result, which does not follow from classical convergence theorems for Monte Carlo filters, implies that one can effectively tune the performance of the PF by adapting the computational effort. (b) Convergence of the predictive statistics used for making decisions on the adaptation of the computational effort implies convergence of the PF itself. To be specific, we have proved that if the predictive statistics computed with K fictitious observations attain a uniform distribution, then the true filtering distribution and its particle approximation have K common moments. This result can be understood as a converse to the convergence theorem introduced in Elvira et al. (2017). It guarantees that assessing the convergence of the PF using the proposed predictive statistics is a sound approach (the "more uniform" the predictive statistics, the better the overall performance of the PF).
In addition to the theoretical analysis, we have carried out an extensive computer simulation study. On one hand, the numerical results have corroborated the theoretical results, e.g., by showing how increasing the number of particles directly improves the performance (and past errors are forgotten), or how increasing the number of fictitious observations K (or the block size) leads to a higher computational effort and more accurate estimators. We have also shown that the proposed block-adaptive algorithms are stable w.r.t. the initial number of particles. Overall, the proposed class of algorithms is easy to implement and can be used with different versions of the PF. It also enables the automatic, online tuning of the computational complexity without time-consuming (and often unreliable) trial-and-error procedures.
Funding Open Access funding provided thanks to the CRUE-CSIC agreement with Springer Nature.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.