# Stability of noisy Metropolis–Hastings


## Abstract

Pseudo-marginal Markov chain Monte Carlo methods for sampling from intractable distributions have gained recent interest and have been theoretically studied in considerable depth. Their main appeal is that they are exact, in the sense that they target marginally the correct invariant distribution. However, the pseudo-marginal Markov chain can exhibit poor mixing and slow convergence towards its target. As an alternative, a subtly different Markov chain can be simulated, where better mixing is possible but the exactness property is sacrificed. This is the noisy algorithm, initially conceptualised as Monte Carlo within Metropolis, which has also been studied but to a lesser extent. The present article provides a further characterisation of the noisy algorithm, with a focus on fundamental stability properties like positive recurrence and geometric ergodicity. Sufficient conditions for inheriting geometric ergodicity from a standard Metropolis–Hastings chain are given, as well as convergence of the invariant distribution towards the true target distribution.

## Keywords

Markov chain Monte Carlo · Pseudo-marginal Monte Carlo · Monte Carlo within Metropolis · Intractable likelihoods · Geometric ergodicity

## 1 Introduction

### 1.1 Intractable target densities and the pseudo-marginal algorithm

Suppose our aim is to simulate from an intractable probability distribution \(\pi \) for some random variable *X*, which takes values in a measurable space \(\left( \mathcal {X},\mathcal {B}(\mathcal {X})\right) \). In addition, let \(\pi \) have a density \(\pi (x)\) with respect to some reference measure \(\mu (dx)\), e.g. the counting or the Lebesgue measure. By intractable we mean that an analytical expression for the density \(\pi (x)\) is not available and so implementation of a Markov chain Monte Carlo (MCMC) method targeting \(\pi \) is not straightforward.

The pseudo-marginal algorithm circumvents this by introducing non-negative weights \(W_{x,N}\), with distributions \(\left\{ Q_{x,N}\right\} _{x,N}\) and expectation 1, and by simulating a Markov chain for the pair (*X*, *W*) defined on the product space \(\left( \mathcal {X}\times \mathcal {W},\mathcal {B}(\mathcal {X})\times \mathcal {B}(\mathcal {W})\right) \), where \(\mathcal {W}\subseteq {\mathbb {R}}^{+}_0:=[0,\infty )\). Its invariant distribution is given by

$$\begin{aligned} \bar{\pi }_{N}(dx,dw) = \pi (dx)Q_{x,N}(dw)w. \end{aligned}$$

Due to its exactness and straightforward implementation in many settings, the pseudo-marginal has gained recent interest and has been theoretically studied in some depth, see e.g. Andrieu and Roberts (2009), Andrieu and Vihola (2014, 2015), Doucet et al. (2015), Girolami et al. (2013), Maire et al. (2014) and Sherlock et al. (2015). These studies typically compare the pseudo-marginal Markov chain with a “marginal” Markov chain, arising in the case where all the weights are almost surely equal to 1, and (3) is then the standard Metropolis–Hastings acceptance probability associated with the target density \(\pi \) and the proposal *q*.
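For concreteness, the construction can be sketched in a few lines of Python. This is a minimal illustration, not the paper's algorithm verbatim: the standard normal target, the lognormal weight distribution and the names `density_estimate` and `pseudo_marginal_mh` are all assumptions made for the example. The essential feature is that the current density estimate is recycled until a proposal is accepted.

```python
import numpy as np

rng = np.random.default_rng(1)

def density_estimate(x, N=10):
    """Unbiased estimator of an (assumed intractable) target density.
    Here: a standard normal density times an average of N lognormal
    weights with expectation 1 -- purely illustrative."""
    pi_x = np.exp(-0.5 * x**2) / np.sqrt(2 * np.pi)
    weights = rng.lognormal(mean=-0.125, sigma=0.5, size=N)  # E[W] = 1
    return pi_x * weights.mean()

def pseudo_marginal_mh(n_iter=5000, step=2.0, N=10):
    x, est_x = 0.0, density_estimate(0.0, N)
    chain = np.empty(n_iter)
    for i in range(n_iter):
        y = x + step * rng.standard_normal()
        est_y = density_estimate(y, N)
        # Accept with the ratio of *estimated* densities; the current
        # estimate est_x is recycled until a move is accepted, which is
        # what makes the chain exact.
        if rng.uniform() < min(1.0, est_y / est_x):
            x, est_x = y, est_y
        chain[i] = x
    return chain

chain = pseudo_marginal_mh()
```

Because the estimator is unbiased and non-negative, the chain targets \(\pi \) marginally, regardless of the estimator's variance; the variance affects only the mixing.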

### 1.2 Examples of pseudo-marginal algorithms

Importance sampling estimates arise naturally in latent variable models, where a random variable *Z* on \((Z,\mathcal {B}(Z))\) is used to model observed data, as in hidden Markov models (HMMs) or mixture models. Although the density \(\pi (x)\) cannot be computed, it can be approximated via importance sampling, using an appropriate auxiliary distribution, say \(\nu _{x}\). Here, appropriate means \(\pi _{x}\ll \nu _{x}\), where \(\pi _{x}\) denotes the conditional distribution of *Z* given \(X=x\). Therefore, for this setting, the weights are importance sampling estimators with expectation 1. Using *N* particles, we can then define the arithmetic average

$$\begin{aligned} W_{x,N} = \frac{1}{N}\sum _{i=1}^{N}W_{x}^{(i)}, \end{aligned}$$(4)

where the \(W_{x}^{(i)}\) are i.i.d. copies of a single weight \(W_{x}\sim Q_{x}\).

### 1.3 The noisy algorithm

Although the pseudo-marginal has the desirable property of exactness, it can suffer from “sticky” behaviour, exhibiting poor mixing and slow convergence towards the target distribution (Andrieu and Roberts 2009; Lee and Łatuszyński 2014). The cause of this is well known to be related to the value of the ratio of \(W_{y,N}\) to \(W_{x,N}\) at a particular iteration. Heuristically, when the value of the current weight (*w* in (3)) is large, proposed moves can have a low probability of acceptance. As a consequence, the resulting chain can get “stuck” and may not move for a considerable number of iterations.

In order to overcome this issue, a subtly different algorithm is used in some practical problems (see, e.g., McKinley et al. 2014). The basic idea is to refresh, independently of the past, the value of the current weight at every iteration. The ratio of the weights \(W_{y,N}\) and \(W_{x,N}\) still plays an important role in this alternative algorithm, but refreshing \(W_{x,N}\) at every iteration can improve mixing and the rate of convergence.

Even though these algorithms differ only slightly, the related chains have very different properties. In Algorithm 2, the value *w* is generated at every iteration whereas in Algorithm 1, it is treated as an input. As a consequence, Algorithm 1 produces a chain on \(\left( \mathcal {X}\times \mathcal {W},\mathcal {B}(\mathcal {X})\times \mathcal {B}(\mathcal {W})\right) \), contrasting with a chain from Algorithm 2 taking values in \(\left( \mathcal {X},\mathcal {B}(\mathcal {X})\right) \). However, the noisy chain does not leave \(\pi \) invariant and it is not reversible in general. Moreover, it may not even have an invariant distribution, as shown by some examples in Sect. 2.
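The difference can be made concrete with a short sketch of the noisy variant. As before, the standard normal target and the lognormal weights are illustrative assumptions, not taken from the paper; the key lines re-estimate the density at *both* the current and the proposed state on every iteration, so no weight is carried over.

```python
import numpy as np

rng = np.random.default_rng(2)

def density_estimate(x, N=10):
    """Illustrative unbiased estimator of a standard normal target:
    the true density times an average of N lognormal weights with mean 1."""
    pi_x = np.exp(-0.5 * x**2) / np.sqrt(2 * np.pi)
    return pi_x * rng.lognormal(mean=-0.125, sigma=0.5, size=N).mean()

def noisy_mh(n_iter=5000, step=2.0, N=10):
    x = 0.0
    chain = np.empty(n_iter)
    for i in range(n_iter):
        y = x + step * rng.standard_normal()
        # Unlike the pseudo-marginal chain, BOTH estimates are drawn
        # afresh, so the current weight is refreshed every iteration.
        ratio = density_estimate(y, N) / density_estimate(x, N)
        if rng.uniform() < min(1.0, ratio):
            x = y
        chain[i] = x
    return chain

chain = noisy_mh()
```

The chain now lives on \(\mathcal {X}\) alone and typically avoids the sticky behaviour, but \(\pi \) is no longer exactly invariant for it.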

From O’Neill et al. (2000) and Fernández-Villaverde and Rubio-Ramírez (2007), it is evident that implementation of the noisy algorithm predates the appearance of the pseudo-marginal, the latter initially conceptualised as Grouped Independence Metropolis–Hastings (GIMH) in Beaumont (2003). Theoretical properties of the noisy algorithm, however, have mainly been studied in tandem with the pseudo-marginal by Beaumont (2003), Andrieu and Roberts (2009) and more recently by Alquier et al. (2014).

### 1.4 Objectives of the article

As a motivating example, let the proposal *q* be a random walk given by \(q(x,\cdot )=\mathcal {N}\left( \cdot |x, 4 \right) \). For this example, Fig. 2 shows the estimated densities using the noisy chain for different values of *N*. It appears that the noisy chain has an invariant distribution, and as *N* increases it seems to approach the desired target \(\pi \). Our objectives here are to answer the following types of questions about the noisy algorithm in general:

1. Does an invariant distribution exist, at least for *N* large enough?
2. Does the noisy Markov chain behave like the marginal chain for sufficiently large *N*?
3. Does the invariant distribution, if it exists, converge to \(\pi \) as *N* increases?

### 1.5 Marginal chains and geometric ergodicity

Let *P* denote the Markov transition kernel of a standard MH chain on \(\left( \mathcal {X},\mathcal {B}(\mathcal {X})\right) \), targeting \(\pi \) with proposal *q*. We will refer to this chain and this algorithm using the term marginal (as in Andrieu and Roberts 2009; Andrieu and Vihola 2015), which is the idealised version for which the noisy chain and corresponding algorithm are simple approximations. In the noisy chain, moves are therefore still proposed according to *q* but are accepted using \({\bar{\alpha }}_{N}\) (as in (3)) instead of \(\alpha \), once values for \(W_{x,N}\) and \(W_{y,N}\) are sampled. In order to distinguish the acceptance probabilities between the noisy and the pseudo-marginal processes, despite being the same after sampling values for the weights, define

$$\begin{aligned} \tilde{\alpha }_{N}(x,y) := {\mathbb {E}}\left[ \min \left\{ 1,\frac{\pi (y)q(y,x)}{\pi (x)q(x,y)}\frac{W_{y,N}}{W_{x,N}}\right\} \right] , \end{aligned}$$

where \(W_{x,N}\sim Q_{x,N}\) and \(W_{y,N}\sim Q_{y,N}\) are independent. The resulting noisy kernel \({\tilde{P}}_{N}\) is then a perturbed version of *P* involving a ratio of weights in the noisy acceptance probability \(\tilde{\alpha }_{N}\). When such weights are identically one, i.e. \(Q_{x,N}(\{1\})=1\), the noisy chain reduces to the marginal chain, whereas the pseudo-marginal becomes the marginal chain with an extra component always equal to 1.

For a transition kernel *P* and \(n\in {\mathbb {N}}^{+}\), let \(P^{n}\) denote the *n*-step transition kernel, which is given by

$$\begin{aligned} P^{n}(x,A) = \int _{\mathcal {X}}P^{n-1}(x,dz)P(z,A),\quad \text {for}\;n\ge 2, \end{aligned}$$

with \(P^{1}=P\).

### **Definition 1.1**

(*Geometric ergodicity*) A \(\varphi \)-irreducible and aperiodic Markov chain \(\varvec{\Phi }:=(\varPhi _i)_{i\ge 0}\) on a measurable space \(\left( \mathcal {X},\mathcal {B}(\mathcal {X})\right) \), with transition kernel *P* and invariant distribution \(\pi \), is geometrically ergodic if there exist a finite function \(V\ge 1\) and constants \(\tau <1\), \(R<\infty \) such that

$$\begin{aligned} \Vert P^{n}(x,\cdot )-\pi (\cdot )\Vert _{TV} \le RV(x)\tau ^{n},\quad \text {for}\;x\in \mathcal {X}. \end{aligned}$$(8)

Geometric ergodicity does not necessarily provide fast convergence in an absolute sense. For instance, consider cases where \(\tau \) from Definition 1.1 is extremely close to one, or where *R* is very large. Then the decay of the total variation distance, though geometric, is not particularly fast (see Roberts and Rosenthal 2004 for some examples).

As noted in Andrieu and Roberts (2009), if the weights \(\left\{ W_{x,N}\right\} _{x,N}\) are not essentially bounded then the pseudo-marginal chain cannot be geometrically ergodic; in such cases the “stickiness” may be more evident. In addition, under mild assumptions (in particular, that \(\bar{P}_N\) has a left spectral gap), from Andrieu and Vihola (2015, Proposition 10) and Lee and Łatuszyński (2014), a sufficient but not necessary condition ensuring the pseudo-marginal inherits geometric ergodicity from the marginal, is that the weights are uniformly bounded. This certainly imposes a tight restriction in many practical problems.

The analyses in Andrieu and Roberts (2009) and Alquier et al. (2014) mainly study the noisy algorithm in the case where the marginal Markov chain is uniformly ergodic, i.e. when it satisfies (8) with \(\sup _{x\in \mathcal {X}} V(x)<\infty \). However, there are many Metropolis–Hastings Markov chains for statistical estimation that cannot be uniformly ergodic, e.g. random walk Metropolis chains when \(\pi \) is not compactly supported. Our focus is therefore on inheritance of geometric ergodicity by the noisy chain, complementing existing results for the pseudo-marginal chain.

### 1.6 Outline of the paper

In Sect. 2, some simple examples are presented for which the noisy chain is positive recurrent, so it has an invariant probability distribution. This is perhaps the weakest stability property that one would expect a Monte Carlo Markov chain to have. However, other fairly surprising examples are presented for which the noisy Markov chain is transient, even though the marginal and pseudo-marginal chains are geometrically ergodic. Section 3 is dedicated to the inheritance of geometric ergodicity from the marginal chain; two different sets of sufficient conditions are given and are further analysed in the context of arithmetic averages given by (4). Once geometric ergodicity is attained, it guarantees the existence of an invariant distribution \(\tilde{\pi }_{N}\) for the noisy chain. Under the same sets of conditions, we show in Sect. 4 that \(\tilde{\pi }_{N}\) and \(\pi \) can be made arbitrarily close in total variation as *N* increases. Moreover, explicit rates of convergence can in principle be obtained when the weights arise from an arithmetic average setting as in (4).

## 2 Motivating examples

### 2.1 Homogeneous weights with a random walk proposal

Writing the target as \(\pi (x)\propto e^{-h(x)}\) on the positive integers, when *h* is convex the noisy chain is geometrically ergodic, implying the existence of an invariant probability distribution.

### **Proposition 2.1**

Consider a log-concave target density on the positive integers and a proposal density as in (9). In addition, let the distribution of the weights be homogeneous as in (10). Then, the chain generated by the noisy kernel \({\tilde{P}}_{N}\) is geometrically ergodic.

### 2.2 Particle MCMC

Figure 3 shows the run and autocorrelation function (acf) for the parameter *a* of the marginal chain. Similarly, Fig. 4 shows the corresponding run and acf for both the pseudo-marginal and the noisy chain when \(N=250\). Plots for the other parameters and different values of *N* can be found in Online Appendix 2. It is noticeable how the pseudo-marginal gets “stuck”, resulting in a lower acceptance rate than the marginal and noisy chains. In addition, the acf of the noisy chain seems to decay faster than that of the pseudo-marginal chain.

For a fixed *N*, the pseudo-marginal will require more iterations due to its slow mixing, whereas the noisy chain converges faster towards an unknown noisy invariant distribution. By increasing *N*, the mixing of the pseudo-marginal improves and the noisy invariant approaches the true posterior. Plots for other values of *N* can also be found in Online Appendix 2.

### 2.3 Transient noisy chain with homogeneous weights

Here, *Ber*(*s*) denotes a Bernoulli random variable of parameter \(s\in (0,1)\). There exists a relationship between *s*, *b* and \(\varepsilon \) that guarantees the expectation of the weights is identically one. The following proposition, proven in Appendix 1 by taking \(\theta > 1/2\), shows that the resulting noisy chain can be transient for certain values of *b*, \(\varepsilon \) and \(\theta \).

### **Proposition 2.2**

Consider a geometric target density as in (11) and a proposal density as in (12). In addition, let the weights when \(N=1\) be given by (13). Then, for some *b*, \(\varepsilon \) and \(\theta \) the chain generated by the noisy kernel \({\tilde{P}}_{N=1}\) is transient.

In contrast, since the weights are uniformly bounded by *b*, the pseudo-marginal chain inherits geometric ergodicity for any \(\theta \), *b* and \(\varepsilon \). The left plot in Fig. 7 shows an example. We will discuss the behaviour of this example as *N* increases in Sect. 3.4.

### 2.4 Transient noisy chain with non-homogeneous weights

For non-homogeneous weights, the noisy chain can again be transient, now for *b* large enough. The proof can be found in Appendix 1.

### **Proposition 2.3**

Consider a geometric target density as in (11) and a proposal density as in (12). In addition, let the weights when \(N=1\) be given by (14). Then, for any \(\theta \in (0,1)\) there exists some \(b>1\) such that the chain generated by the noisy kernel \({\tilde{P}}_{N=1}\) is transient.

The transient behaviour occurs in the tails of the target, once the state *m* is large enough. Once again, the pseudo-marginal chain inherits the geometric ergodicity property from the marginal. See the central and right plots of Fig. 7 for two examples using different proposals. Again, we will come back to this example in Sect. 3.4, where we look at the behaviour of the associated noisy chain as *N* increases.

## 3 Inheritance of ergodic properties

The inheritance of various ergodic properties of the marginal chain by pseudo-marginal Markov chains has been established using techniques that are powerful but suitable only for reversible Markov chains (see, e.g. Andrieu and Vihola 2015). Since the noisy Markov chains treated here can be non-reversible, a suitable tool for establishing geometric ergodicity is the use of Foster–Lyapunov functions, via geometric drift towards a small set.

### **Definition 3.1**

(*Small set*) Let *P* be the transition kernel of a Markov chain \(\varvec{\Phi }\). A subset \(C\subseteq \mathcal {X}\) is small if there exist a positive integer \(n_{0}\), \(\varepsilon >0\) and a probability measure \(\nu (\cdot )\) on \(\left( \mathcal {X},\mathcal {B}(\mathcal {X})\right) \) such that the following minorisation condition holds

$$\begin{aligned} P^{n_{0}}(x,\cdot ) \ge \varepsilon \nu (\cdot ),\quad \text {for}\;x\in C. \end{aligned}$$(15)

### **Theorem 3.1**

Suppose \(\varvec{\Phi }\) is a \(\varphi \)-irreducible and aperiodic Markov chain with transition kernel *P* and invariant distribution \(\pi \). Then, the following statements are equivalent:

- (i) There exists a small set *C*, constants \(\lambda <1\) and \(b<\infty \), and a function \(V\ge 1\), finite for some \(x_0\in \mathcal {X}\), satisfying the geometric drift condition$$\begin{aligned} PV(x) \le \lambda V(x)+b\mathbbm {1}_{\left\{ x\in C\right\} },\quad \text {for}\;x\in \mathcal {X}. \end{aligned}$$(16)
- (ii) The chain is \(\pi \)-a.e. geometrically ergodic, meaning that for \(\pi \)-a.e. \(x\in \mathcal {X}\) it satisfies (8) for some \(V\ge 1\) (which can be taken as in (i)) and constants \(\tau <1\), \(R<\infty \).
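For intuition, the drift condition in (16) can be probed numerically for a simple chain. The sketch below is illustrative and not from the paper: it estimates \(PV(x)/V(x)\) by Monte Carlo for a random-walk Metropolis chain targeting a standard normal, with the candidate drift function \(V(x)=e^{|x|/2}\) (an assumption made for the example).

```python
import numpy as np

rng = np.random.default_rng(0)

def mh_step(x, step=1.0):
    """One random-walk Metropolis step targeting a standard normal."""
    y = x + step * rng.standard_normal()
    log_accept = -0.5 * (y**2 - x**2)      # log of the MH ratio
    return y if np.log(rng.uniform()) < log_accept else x

def drift_ratio(x, n=20000):
    """Monte Carlo estimate of P V(x) / V(x) with V(x) = exp(|x|/2)."""
    V = lambda z: np.exp(abs(z) / 2.0)
    return np.mean([V(mh_step(x)) for _ in range(n)]) / V(x)

# In the tails the ratio drops below 1 (geometric drift); near the
# mode it may exceed 1, which the term b*1_C in (16) absorbs for a
# small set C around the origin.
print(drift_ratio(8.0), drift_ratio(0.0))
```

The estimated ratio is well below 1 far out in the tails, while near the mode it exceeds 1; this is exactly the pattern that a small set around the origin accommodates.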

- (**P1**) The marginal chain is geometrically ergodic, implying its kernel *P* satisfies the geometric drift condition in (16) for some constants \(\lambda <1\) and \(b<\infty \), some function \(V\ge 1\) and a small set \(C\subseteq \mathcal {X}\).

### 3.1 Conditions involving a negative moment

- (**W1**) For any \(\delta >0\), the weights \(\left\{ W_{x,N}\right\} _{x,N}\) satisfy$$\begin{aligned} \lim _{N\rightarrow \infty }\sup _{x\in \mathcal {X}}{\mathbb {P}}_{Q_{x,N}}\left[ \big \vert W_{x,N}-1\big \vert \ge \delta \right] = 0. \end{aligned}$$
- (**W2**) The weights \(\left\{ W_{x,N}\right\} _{x,N}\) satisfy$$\begin{aligned} \lim _{N\rightarrow \infty }\sup _{x\in \mathcal {X}}{\mathbb {E}}_{Q_{x,N}}\left[ W_{x,N}^{-1}\right] = 1. \end{aligned}$$

### **Theorem 3.2**

Assume (P1), (W1) and (W2). Then, there exists \(N_{0}\in {\mathbb {N}}^{+}\) such that for all \(N\ge N_{0}\), the noisy chain with transition kernel \({\tilde{P}}_{N}\) is geometrically ergodic.

The above result is obtained by controlling the dissimilarity of the marginal and noisy kernels. This is done by looking at the corresponding rejection and acceptance probabilities. The proofs of the following lemmas appear in Appendix 1.

### **Lemma 3.1**

### **Lemma 3.2**

### **Lemma 3.3**

Notice that (W1) and (W2) allow control of the bounds in the above lemmas. While Lemma 3.2 provides a bound for the difference of the rejection probabilities, Lemma 3.3 gives one for the ratio of the acceptance probabilities. The proof of Theorem 3.2 is now presented.

### *Proof of Theorem 3.2*

Since the marginal kernel *P* is geometrically ergodic, it satisfies the geometric drift condition in (16) for some \(\lambda <1\), \(b<\infty \), some function \(V\ge 1\) and a small set \(C\subseteq \mathcal {X}\). Now, using the above lemmas, for *N* large enough the noisy kernel \({\tilde{P}}_{N}\) satisfies a geometric drift condition with the same drift function *V* and small set *C*, completing the proof. \(\square \)

### *Remark 3.1*

### 3.2 Conditions on the proposal distribution

The conditions in this section remove the need for the negative moment condition (W2), imposing a restriction on the proposal *q* instead.

- \((\mathbf{P1}^{*})\) (P1) holds and, for the same drift function *V* in (P1), there exists \(K<\infty \) such that the proposal kernel *q* satisfies$$\begin{aligned} qV(x) \le KV(x),\quad \text {for}\;x\in \mathcal {X}. \end{aligned}$$

### **Theorem 3.3**

Assume (P1*) and (W1). Then, there exists \(N_{0}\in {\mathbb {N}}^{+}\) such that for all \(N\ge N_{0}\), the noisy chain with transition kernel \({\tilde{P}}_{N}\) is geometrically ergodic.

In order to prove Theorem 3.3 the following lemma is required. Its proof can be found in Appendix 1. In contrast with Lemma 3.3, this lemma provides a bound for the additive difference of the noisy and marginal acceptance probabilities.

### **Lemma 3.4**

### *Proof of Theorem 3.3*

The proof proceeds as that of Theorem 3.2, now using Lemma 3.4: for *N* large enough the noisy kernel \({\tilde{P}}_{N}\) satisfies a geometric drift condition with the same drift function *V* and small set *C*, completing the proof. \(\square \)

### *Remark 3.2*

A similar result can be recovered taking the function *f* constant therein. Additionally, (W1) and (P1*) imply the required conditions on \(\mathcal {E}\) and \(\lambda \) in Rudolf and Schweizer (2015, Corollary 31), where a similar result is proved in terms of *V*-uniform ergodicity.

Condition (P1*) may not be straightforward to verify for a given drift function *V*, but it is easily satisfied when restricting to log-Lipschitz targets and when using a random walk proposal of the form$$\begin{aligned} q(x,y) = q\left( \Vert y-x\Vert \right) . \end{aligned}$$(17)

- \((\mathbf{P1}^{**})\) \(\mathcal {X}\subseteq {\mathbb {R}}^{d}\). The target \(\pi \) is log-Lipschitz, meaning that for some \(L>0\)$$\begin{aligned} |\log \pi (z)-\log \pi (x)| \le L\Vert z-x\Vert . \end{aligned}$$(P1) holds taking the drift function \(V=\pi ^{-s}\), for any \(s\in (0,1)\). The proposal *q* is a random walk as in (17) satisfying$$\begin{aligned} \int _{{\mathbb {R}}^{d}}\exp \left\{ a\Vert u\Vert \right\} q(\Vert u\Vert )du < \infty , \end{aligned}$$for some \(a>0\).

See Appendix 1 for a proof of the following proposition.

### **Proposition 3.1**

Assume \((\mathrm{P}1^{**})\) and (W1). Then, (P1*) holds.

### 3.3 Conditions for arithmetic averages

In the particular setting where the weights are given by (4), sufficient conditions on these can be obtained to ensure geometric ergodicity is inherited by the noisy chain. For the simple case where the weights are homogeneous with respect to the state space, (W1) is automatically satisfied. In order to attain (W2), the existence of a negative moment for a single weight is required. See Appendix 1 for a proof of the following result.

### **Proposition 3.2**

- (**W3**) The weights \(\left\{ W_{x}\right\} _{x}\) satisfy$$\begin{aligned} \lim _{K\rightarrow \infty }\sup _{x\in \mathcal {X}}{\mathbb {E}}_{Q_{x}}\left[ W_{x}\mathbbm {1}_{\{W_{x}>K\}}\right] =0. \end{aligned}$$

- (**W4**) There exist \(\gamma \in (0,1)\) and constants \(M<\infty \), \(\beta >0\) such that for \(w\in (0,\gamma )\) the weights \(\left\{ W_{x}\right\} _{x}\) satisfy$$\begin{aligned} \sup _{x\in \mathcal {X}} {\mathbb {P}}_{Q_x} \left[ W_x \le w \right] \le M w^{\beta }. \end{aligned}$$

### **Proposition 3.3**

- (i)
(W3) implies (W1);

- (ii)
(W1) and (W4) imply (W2).

The following corollary is obtained as an immediate consequence of the above proposition, Theorems 3.2 and 3.3.

### **Corollary 3.1**

- (i)
(P1) and (W4);

- (ii)
(P1*).

The proof of Proposition 3.3 follows the statement of Lemma 3.5, whose proof can be found in Appendix 1. This lemma allows us to characterise the distribution of \(W_{x,N}\) near 0 assuming (W4) and also provides conditions for the existence and convergence of negative moments.

### **Lemma 3.5**

- (i) Suppose *Z* is a positive random variable, and assume that for \(z\in (0,\gamma )\)$$\begin{aligned} {\mathbb {P}}\left[ Z \le z \right] \le Mz^{\alpha },\quad \text {where}\;\alpha >p,\,M<\infty . \end{aligned}$$Then,$$\begin{aligned} {\mathbb {E}}\left[ Z^{-p} \right] \le \frac{1}{\gamma ^p} + pM\frac{\gamma ^{\alpha -p}}{\alpha -p}. \end{aligned}$$
- (ii) Suppose \(\left\{ Z_{i}\right\} _{i=1}^{N}\) is a collection of positive and independent random variables, and assume that for each \(i\in \left\{ 1,\ldots ,N\right\} \) and \(z\in (0,\gamma )\)$$\begin{aligned} {\mathbb {P}}\left[ Z_{i} \le z \right] \le M_{i} z^{\alpha _{i}},\quad \text {where}\;\alpha _{i}>0,\,M_{i}<\infty . \end{aligned}$$Then, for \(z\in (0,\gamma )\)$$\begin{aligned} {\mathbb {P}}\left[ \sum _{i=1}^{N}Z_{i}\le z\right] \le \prod _{i=1}^N M_i z^{\sum _{i=1}^{N}\alpha _{i}}. \end{aligned}$$
- (iii) Let the weights be as in (4). If for some \(N_0\in {\mathbb {N}}^+\)$$\begin{aligned} {\mathbb {E}}_{Q_{x,N_0}}\left[ W_{x,N_0}^{-p} \right] < \infty , \end{aligned}$$then for any \(N\ge N_0\)$$\begin{aligned} {\mathbb {E}}_{Q_{x,N+1}}\left[ W_{x,N+1}^{-p} \right] \le {\mathbb {E}}_{Q_{x,N}}\left[ W_{x,N}^{-p} \right] . \end{aligned}$$
- (iv) Assume (W1) and let \(g:{\mathbb {R}}^{+}\rightarrow {\mathbb {R}}\) be a function that is continuous at 1 and bounded on the interval \([\gamma ,\infty )\). Then$$\begin{aligned} \lim _{N\rightarrow \infty }\sup _{x\in \mathcal {X}}{\mathbb {E}}_{Q_{x,N}}\left[ |g\left( W_{x,N}\right) -g\left( 1\right) |\mathbbm {1}_{W_{x,N}\ge \gamma }\right] = 0. \end{aligned}$$
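As a quick numerical sanity check of part (i) (an illustration, not part of the paper), take \(Z\sim \mathrm {Uniform}(0,1)\), so that \({\mathbb {P}}[Z\le z]=z\) and one may take \(M=1\), \(\alpha =1\); for \(p=1/4\) the exact negative moment \({\mathbb {E}}[Z^{-p}]=1/(1-p)\) should then lie below the stated bound \(\gamma ^{-p}+pM\gamma ^{\alpha -p}/(\alpha -p)\):

```python
import numpy as np

# Z ~ Uniform(0,1): P[Z <= z] = z, so take M = 1 and alpha = 1 in (i).
p, gamma, M, alpha = 0.25, 0.5, 1.0, 1.0

exact = 1.0 / (1.0 - p)          # E[Z^{-p}] for Z ~ Uniform(0,1), p < 1
bound = gamma**(-p) + p * M * gamma**(alpha - p) / (alpha - p)

# Monte Carlo estimate of the same negative moment.
rng = np.random.default_rng(0)
mc = (rng.uniform(size=200_000) ** (-p)).mean()

print(exact, mc, bound)          # the bound dominates both
```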

### *Proof of Proposition 3.3*

### 3.4 Remarks on results

Equipped with these results, we return to the examples in Sects. 2.3 and 2.4. Even though the noisy chain can be transient in these examples, the behaviour is quite different when considering weights that are arithmetic averages of the form in (4). Since in both examples the weights are uniformly bounded by the constant *b*, they immediately satisfy (W1). Additionally, by Proposition 3.2, condition (W2) is satisfied for the example in Sect. 2.3. This is not the case for the example in Sect. 2.4, but condition (P1*) is satisfied by taking \(V=\pi ^{-\frac{1}{2}}\). Therefore, applying Theorems 3.2 and 3.3 to the examples in Sects. 2.3 and 2.4, respectively, as *N* increases the corresponding chains will go from being transient to geometrically ergodic.

### **Proposition 3.4**

Finally, in many of the previous examples, increasing the value of *N* seems to improve the ergodic properties of the noisy chain. However, the geometric ergodicity property is not always inherited, no matter how large *N* is taken. The following proposition shows an example rather similar to Proposition 3.4, but in which the ratio \(\frac{\varepsilon _{m-1}}{\varepsilon _{m}}\) does not converge as \(m\rightarrow \infty \).

### **Proposition 3.5**

## 4 Convergence of the noisy invariant distribution

So far the only concern has been whether the noisy chain inherits the geometric ergodicity property from the marginal chain. As an immediate consequence, geometric ergodicity guarantees the existence of an invariant probability distribution \(\tilde{\pi }_{N}\) for \({\tilde{P}}_{N}\), provided *N* is large enough. In addition, using the same conditions from Sect. 3, we can characterise and in some cases quantify the convergence in total variation of \(\tilde{\pi }_{N}\) towards the desired target \(\pi \), as \(N\rightarrow \infty \).

### 4.1 Convergence in total variation

The following definition, taken from Roberts et al. (1998), characterises a class of kernels satisfying a geometric drift condition as in (16) for the same *V*, *C*, \(\lambda \) and *b*.

### **Definition 4.1**

(*Simultaneous geometric ergodicity*) A class of Markov chain kernels \(\left\{ P_{k}\right\} _{k\in \mathcal {K}}\) is simultaneously geometrically ergodic if there exist a class of probability measures \(\left\{ \nu _{k}\right\} _{k\in \mathcal {K}}\), a measurable set \(C\subseteq \mathcal {X}\), a real-valued measurable function \(V\ge 1\), a positive integer \(n_{0}\) and positive constants \(\varepsilon \), \(\lambda \), *b* such that for each \(k\in \mathcal {K}\):

- (i) *C* is small for \(P_{k}\), with \(P_{k}^{n_{0}}(x,\cdot )\ge \varepsilon \nu _{k}(\cdot )\) for all \(x\in C\);
- (ii) the chain \(P_{k}\) satisfies the geometric drift condition in (16) with drift function *V*, small set *C* and constants \(\lambda \) and *b*.

Under the conditions of Sect. 3, once *N* is large, the noisy kernels \(\{ {\tilde{P}}_{N+k}\}_{k\ge 0}\) together with the marginal *P* will be simultaneously geometrically ergodic. This will allow the use of coupling arguments for ensuring \(\tilde{\pi }_{N}\) and \(\pi \) get arbitrarily close in total variation. The main additional assumption is

- (**P2**) For some \(\varepsilon >0\), some probability measure \(\nu (\cdot )\) on \(\left( \mathcal {X},\mathcal {B}(\mathcal {X})\right) \) and some subset \(C\subseteq \mathcal {X}\), the marginal acceptance probability \(\alpha \) and the proposal kernel *q* satisfy$$\begin{aligned} \alpha (x,y)q(x,dy) \ge \varepsilon \nu (dy), \quad \text {for}\;x\in C. \end{aligned}$$

### *Remark 4.1*

(P2) ensures the marginal chain satisfies the minorisation condition in (15), purely attained by the sub-kernel \(\alpha (x,y)q(x,dy)\). This occurs under fairly mild assumptions (see, e.g., Roberts and Tweedie 1996, Theorem 2.2).

### **Theorem 4.1**

- (i)
there exists \(N_{0}\in {\mathbb {N}}^{+}\) such that the class of kernels \(\left\{ P,{\tilde{P}}_{N_{0}},{\tilde{P}}_{N_{0}+1},\ldots \right\} \) is simultaneously geometrically ergodic;

- (ii)
for all \(x\in \mathcal {X}\), \(\lim _{N\rightarrow \infty }\Vert {\tilde{P}}_{N}(x,\cdot )-P(x,\cdot )\Vert _{TV}=0\);

- (iii)
\(\lim _{N\rightarrow \infty }\Vert \tilde{\pi }_{N}(\cdot )-\pi (\cdot )\Vert _{TV}=0.\)

The proof of parts (i) and (ii) relies on bounding the dissimilarity of the kernels uniformly in *n*. In addition, due to the simultaneous geometric ergodicity property, the first term in (22) is uniformly controlled regardless of the value of *N*. Finally, using an inductive argument, part (ii) implies that for all \(x\in \mathcal {X}\) and all \(n\in {\mathbb {N}}^{+}\)

### *Proof of Theorem 4.1*

By Theorem 3.2 or Theorem 3.3, there exists \(N_{1}\in {\mathbb {N}}^{+}\) such that for all \(N\ge N_{1}\) the noisy kernel \({\tilde{P}}_{N}\) satisfies a geometric drift condition with common drift function *V*, small set *C* and constants \(\lambda _{N_{1}},b_{N_{1}}\). Regarding (i), for any \(\delta \in (0,1)\)

For (iii), see Theorem 9 in Roberts et al. (1998) for a detailed proof. \(\square \)

### 4.2 Rate of convergence

Let \(\tilde{\varPhi }^{N}\) and \(\varPhi \) denote the noisy and marginal chains, with kernels \({\tilde{P}}_{N}\) and *P*, respectively, and define \(c_{x}:=1-\Vert {\tilde{P}}_{N}(x,\cdot )-P(x,\cdot )\Vert _{TV}\). Using notions of maximal coupling for random variables defined on a Polish space (see Lindvall 2002 and Thorisson 2013), there exists a probability measure \(\nu _{x}(\cdot )\) such that

- If \(\tilde{\varPhi }_{n-1}^{N}=\varPhi _{n-1}=y\), with probability *c* draw \(\varPhi _{n}\sim \nu _{y}(\cdot )\) and set \(\tilde{\varPhi }_{n}^{N}=\varPhi _{n}\). Otherwise, draw independently \(\varPhi _{n}\sim R(y,\cdot )\) and \(\tilde{\varPhi }_{n}^{N}\sim \tilde{R}_{N}(y,\cdot )\), where$$\begin{aligned} R(y,\cdot )&:= \left( 1-c\right) ^{-1}\left( P(y,\cdot )-c\nu _{y}(\cdot )\right) \quad \text {and}\nonumber \\ \tilde{R}_{N}(y,\cdot )&:= \left( 1-c\right) ^{-1}\left( {\tilde{P}}_{N}(y,\cdot )-c\nu _{y}(\cdot )\right) . \end{aligned}$$
- If \(\tilde{\varPhi }_{n-1}^{N}\ne \varPhi _{n-1}\), draw independently \(\varPhi _{n}\sim P(y,\cdot )\) and \(\tilde{\varPhi }_{n}^{N}\sim {\tilde{P}}_{N}(y,\cdot )\).
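The maximal coupling used above can be illustrated for two finite distributions: with probability equal to the overlap mass \(c=1-\Vert p-q\Vert _{TV}\) both variables are drawn together from the normalised overlap, and otherwise they are drawn independently from the normalised residuals. The code below is a generic sketch of this standard construction, with illustrative distributions; it is not code from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def maximal_coupling(p, q):
    """Draw (X, Y) with X ~ p, Y ~ q and P[X = Y] = 1 - TV(p, q),
    for finite distributions given as probability vectors (p != q)."""
    overlap = np.minimum(p, q)
    c = overlap.sum()                      # c = 1 - total variation distance
    if rng.uniform() < c:
        k = rng.choice(len(p), p=overlap / c)
        return k, k                        # coupled: both chains move together
    # otherwise, sample independently from the normalised residuals
    x = rng.choice(len(p), p=(p - overlap) / (1 - c))
    y = rng.choice(len(q), p=(q - overlap) / (1 - c))
    return x, y

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.4, 0.4, 0.2])              # TV(p, q) = 0.1
draws = [maximal_coupling(p, q) for _ in range(50_000)]
meet = np.mean([x == y for x, y in draws])  # should be close to 0.9
```

By construction the marginals are preserved exactly, while the meeting probability attains the maximal value \(1-\Vert p-q\Vert _{TV}\).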

Once *N* is large enough, the noisy and marginal kernels will each satisfy a geometric drift condition as in (16) with a common drift function \(V\ge 1\), small set *C* and constants \(\lambda ,b\). Therefore, by Theorem 3.1, there exist \(R>0\) and \(\tau <1\) such that the marginal chain satisfies (8). Explicit values for *R* and \(\tau \) are in principle possible, as done in Rosenthal (1995) and Meyn and Tweedie (1994). For simplicity assume \(\inf _{x\in \mathcal {X}}V(x)=1\); then, combining (24) and (25) in (22), one obtains a bound holding for all \(n\in {\mathbb {N}}^{+}\). Hence, whenever an explicit bound in terms of *N* is available for the second term on the right-hand side of (26), it will be possible to obtain an explicit rate of convergence for \(\tilde{\pi }_N\) and \(\pi \).

### **Theorem 4.2**

### *Proof*

*r*large enough, such that

*f*to the positive integers and due to convexity, it is then minimised at either

*N*large enough such that

### *Remark 4.3*

A general result bounding the total variation between the law of a Markov chain and a perturbed version is presented in Rudolf and Schweizer (2015, Theorem 21). This is done using the connection between the *V*-norm distance and the Wasserstein distance introduced in Hairer and Mattingly (2011). With such a result, and considering the same assumptions in Theorem 4.2, one could in principle obtain an explicit value for *D* in (27).

An explicit form for *r*(*N*) can be obtained whenever there exists a uniformly bounded moment. This is a slightly stronger assumption than (W3).

- (**W5**) There exists \(k>0\) such that the weights \(\left\{ W_{x}\right\} _{x}\) satisfy$$\begin{aligned} \sup _{x\in \mathcal {X}}{\mathbb {E}}_{Q_{x}}\left[ W_{x}^{1+k}\right] < \infty . \end{aligned}$$

### **Proposition 4.1**

## 5 Discussion

In this article, fundamental stability properties of the noisy algorithm have been explored. The noisy Markov kernels considered are perturbed Metropolis–Hastings kernels defined by a collection of state-dependent distributions for non-negative weights all with expectation 1. The general results do not assume a specific form for these weights, which can be simple arithmetic averages or more complex random variables. The former may arise when unbiased importance sampling estimates of a target density are used, while the latter may arise when such densities are estimated unbiasedly using a particle filter.

Two different sets of sufficient conditions were provided under which the noisy chain inherits geometric ergodicity from the marginal chain. The first pair of conditions, (W1) and (W2), involve a stronger version of the Law of Large Numbers for the weights and uniform convergence of the first negative moment, respectively. For the second set, (W1) is still required but (W2) can be replaced with (P1*), which imposes a condition on the proposal distribution. These conditions also imply simultaneous geometric ergodicity of a sequence of noisy Markov kernels together with the marginal Markov kernel, which then ensures that the noisy invariant \(\tilde{\pi }_{N}\) converges to \(\pi \) in total variation as *N* increases. Moreover, an explicit bound for the rate of convergence between \(\tilde{\pi }_{N}\) and \(\pi \) is possible whenever an explicit bound (that is uniform in *x*) is available for the convergence between \({\tilde{P}}_{N}(x,\cdot )\) and \(P(x,\cdot )\).

When weights are arithmetic averages as in (4), specific conditions were given for inheriting geometric ergodicity from the corresponding marginal chain. The uniform integrability condition in (W3) ensures that (W1) is satisfied, whereas (W4) is essential for satisfying (W2). Regarding the noisy invariant distribution \(\tilde{\pi }_{N}\), (W5), which is slightly stronger than (W3), leads to an explicit bound on the rate of convergence of this distribution to \(\pi \).

The noisy algorithm remains undefined when the weights have positive probability of being zero. If both weights were zero, one could accept the move, reject the move, or keep sampling new weights until one of them is non-zero. Each of these leads to different behaviour.

As seen in the examples of Sect. 3.4, the behaviour of the ratio of the weights (at least in the tails of the target) plays an important role in the ergodic properties of the noisy chain. In this context, it seems plausible to obtain geometrically ergodic noisy chains, even when the marginal chain is not geometrically ergodic, provided the ratio of the weights decays to zero sufficiently fast in the tails. Another interesting possibility, which may lead to future research, is to relax the condition that the weights have expectation identically equal to one.

## Acknowledgments

All authors would like to thank the EPSRC-funded Centre for Research in Statistical Methodology (EP/D002060/1). The first author was also supported by Consejo Nacional de Ciencia y Tecnología. The third author’s research was also supported by the EPSRC Programme Grant *i-like* (EP/K014463/1).


## Copyright information

**Open Access** This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.