Appendix A: Autocorrelation Functions of Strictly Increasing Observables
Let P denote the transition matrix of the FK heat-bath process on a finite graph \(G=(V,E)\) with parameters \(p\in (0,1)\) and \(q\ge 1\), and let \(k=2^{|E|}\). To avoid trivialities, we assume \(|E|>1\). We regard elements of \(\mathbb {R}^k\) as functions from \(2^E\) to \(\mathbb {R}\), and we endow \(\mathbb {R}^k\) with the inner product \(\langle \cdot ,\cdot \rangle \) defined by
$$\begin{aligned} \langle g,h\rangle := \sum _{A\subseteq E} g(A)\,h(A)\,\phi (A). \end{aligned}$$
Denote the eigenvalues of P by \(1=\lambda _1 > \lambda _2\ge \ldots \ge \lambda _k\). As mentioned in Sect. 2.2, general results for heat-bath chains [13] imply that all \(\lambda _i\) are non-negative. Let \(\{\psi _i\}_{i=1}^k\) be an orthonormal basis for \(\mathbb {R}^k\) such that \(\psi _i\) is an eigenfunction of P corresponding to \(\lambda _i\). The Perron-Frobenius theorem implies that the eigenspace of \(\lambda _1\) is one-dimensional, and that we can take \(\psi _1(A)=1\) for all \(A\subseteq E\). Let W denote the eigenspace of \(\lambda _2\). For \(g\in \mathbb {R}^k\), we let \(g_W\) denote its projection onto W.
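Although not part of the formal argument, the following minimal Python sketch makes the setup concrete: it builds \(\phi \), a random-scan single-edge heat-bath matrix P, and its spectrum for a hypothetical triangle graph. The graph, the values of p and q, and all identifiers are illustrative choices of ours, and the update rule is the standard single-edge heat-bath resampling, which we assume matches the chain of Sect. 2.1.

```python
import itertools
import numpy as np

# Hypothetical triangle graph: three vertices, three edges.
V = [0, 1, 2]
E = [(0, 1), (1, 2), (0, 2)]
p, q = 0.6, 2.0

def components(A):
    """Number of connected components of (V, A), isolated vertices included."""
    parent = {v: v for v in V}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for (u, v) in A:
        parent[find(u)] = find(v)
    return len({find(v) for v in V})

# State space: all subsets A of E, so k = 2^{|E|} states.
subsets = [frozenset(s) for r in range(len(E) + 1)
           for s in itertools.combinations(E, r)]
k = len(subsets)
idx = {A: i for i, A in enumerate(subsets)}

# Random-cluster weights: phi(A) proportional to p^|A| (1-p)^(|E|-|A|) q^(#components).
w = np.array([p ** len(A) * (1 - p) ** (len(E) - len(A)) * q ** components(A)
              for A in subsets])
phi = w / w.sum()

# Random-scan heat bath: pick an edge uniformly at random, then resample its
# state from the conditional distribution given the rest of the configuration.
P = np.zeros((k, k))
for A in subsets:
    for e in E:
        Ain, Aout = A | {e}, A - {e}
        z = phi[idx[Ain]] + phi[idx[Aout]]
        P[idx[A], idx[Ain]] += phi[idx[Ain]] / (z * len(E))
        P[idx[A], idx[Aout]] += phi[idx[Aout]] / (z * len(E))

# Reversibility: phi(A) P(A,B) = phi(B) P(B,A).
assert np.allclose(phi[:, None] * P, (phi[:, None] * P).T)

# The spectrum is real, and for heat-bath chains also non-negative.
lam = np.sort(np.linalg.eigvals(P).real)[::-1]
print(lam[:4])   # lam[0] = 1 > lam[1] = lambda_2 >= ... >= 0
```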
We say \(g\in \mathbb {R}^k\) is increasing if \(A\subset B\) implies \(g(A)\le g(B)\), and strictly increasing if \(A\subset B\) implies \(g(A) < g(B)\).
Proposition A.1
Let \((X_t)_{t\in \mathbb {N}}\) be a stationary FK heat-bath process, and for \(g\in \mathbb {R}^k\) define \((g_t)_{t\in \mathbb {N}}\) via \(g_t\,:=g(X_t)\). If g is strictly increasing, then its autocorrelation function satisfies
$$\begin{aligned} \rho _{g}(t) := \frac{{\text {cov}}(g_0,g_t)}{{\text {var}}(g_0)} \sim C e^{-t/t_{\exp }},\qquad t\rightarrow \infty , \end{aligned}$$
for some constant \(C>0\).
Proof
Let \(\varPi \) denote the projection matrix onto the space of constant functions. General arguments (see e.g. [45] or [36, Chap. 9]) imply
$$\begin{aligned} {\text {cov}}(g_0,g_t) = \langle g,(P^t-\varPi )g\rangle = \sum _{l=2}^k \langle g,\psi _l\rangle ^2\lambda _l^t = \Vert g_W\Vert ^2\lambda _2^t \,+\, \sum _{l=\dim (W)+2}^k\langle g,\psi _l\rangle ^2\lambda _l^t. \end{aligned}$$
Since g is strictly increasing, Lemma A.2 implies that \(\Vert g_W\Vert ^2>0\), and therefore
$$\begin{aligned} {\text {cov}}(g_0,g_t) \sim \Vert g_W\Vert ^2 e^{-t/t_{\exp }}, \qquad t\rightarrow \infty . \end{aligned}$$
It follows that
$$\begin{aligned} \rho _g(t) \sim \frac{\Vert g_W\Vert ^2}{{\text {var}}(g)} e^{-t/t_{\exp }}, \qquad t\rightarrow \infty . \end{aligned}$$
\(\square \)
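Continuing the sketch above (same toy graph, same caveats), the spectral formula for \({\text {cov}}(g_0,g_t)\) and the asymptotics of Proposition A.1 can be checked numerically for the strictly increasing observable \(g(A)=|A|\):

```python
# Continues the sketch above (phi, P, subsets, k); illustrative only.
def inner(g, h):
    """The phi-weighted inner product <g, h> defined in the preamble."""
    return float(np.sum(g * h * phi))

sq = np.sqrt(phi)
S = P * sq[:, None] / sq[None, :]     # D^{1/2} P D^{-1/2}; symmetric by reversibility
eigval, U = np.linalg.eigh(S)
eigval, U = eigval[::-1], U[:, ::-1]  # descending order: eigval[0] = 1
Psi = U / sq[:, None]                 # phi-orthonormal eigenfunctions psi_i of P

g = np.array([len(A) for A in subsets], dtype=float)  # g(A) = |A|: strictly increasing
coeff = np.array([inner(g, Psi[:, i]) for i in range(k)])

lam2 = eigval[1]
in_W = np.isclose(eigval, lam2)       # the eigenspace W, with its multiplicity
norm_gW_sq = float(np.sum(coeff[in_W] ** 2))

for t in [1, 5, 10, 20, 40]:
    cov = float(np.sum(coeff[1:] ** 2 * eigval[1:] ** t))  # spectral formula
    print(t, cov / (norm_gW_sq * lam2 ** t))               # ratio tends to 1
```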
Lemma A.2
If g is strictly increasing, then its projection onto W is non-zero.
Proof
Lemma A.3 implies there exists \(\psi \in W\) which is non-zero and increasing. Positive association (see e.g. [24, Theorem 3.8 (b)]) then implies that for any increasing \(g\in \mathbb {R}^k\) we have
$$\begin{aligned} \langle g,\psi \rangle \ge \mathbb {E}(g)\,\mathbb {E}(\psi ) = 0, \end{aligned}$$
(A.1)
since \(\mathbb {E}(\psi )=\langle \psi _1,\psi \rangle =0\). Now suppose that g is strictly increasing. Since E is finite, the minimum of \(g(B)-g(A)\) over pairs \(A\subset B\subseteq E\) is strictly positive, so we may choose \(\alpha >0\) small enough that
$$\begin{aligned} g(B)-g(A) > \alpha [\psi (B)-\psi (A)], \qquad \text { for all } A \subset B\subseteq E, \end{aligned}$$
which guarantees that \(g-\alpha \psi \) is also strictly increasing. Applying (A.1) to \(g-\alpha \psi \) then yields
$$\begin{aligned} \langle g-\alpha \psi ,\psi \rangle \ge 0. \end{aligned}$$
Rearranging, and using the fact that \(\psi \) is non-zero, yields
$$\begin{aligned} \langle g,\psi \rangle \ge \alpha \langle \psi ,\psi \rangle >0. \end{aligned}$$
Therefore, g has a non-zero projection onto \(\psi \in W\), and the stated result follows. \(\square \)
The following lemma is the natural analogue, in the FK setting, of the result [39, Lemma 3] established for the Ising heat-bath process.
Lemma A.3
There exists \(\psi \in W\) which is non-zero and increasing.
Proof
Let \(g=\psi _2+ C (\mathscr {N}-\mathbb {E}(\mathscr {N}))\), where \(\mathscr {N}\in \mathbb {R}^k\) is defined so that \(\mathscr {N}(A)=|A|\) for each \(A\subseteq E\), and \(C>0\) is a constant. We have
$$\begin{aligned} g = [1+C\langle \mathscr {N},\psi _2\rangle ]\psi _2 + C \sum _{j=3}^k \langle \mathscr {N},\psi _j\rangle \psi _j. \end{aligned}$$
If \(\langle \mathscr {N},\psi _2\rangle =0\), then g has a non-zero projection onto \(\psi _2\), for any choice of \(C>0\). If \(\langle \mathscr {N},\psi _2\rangle \ne 0\), then choosing \(C>|\langle \mathscr {N},\psi _2\rangle |^{-1}\) suffices to guarantee that g again has a non-zero projection onto \(\psi _2\). In either case, assume C is so chosen. It follows that \(g_W\) is non-zero.
If \(A\subset B\), then, since \(\mathscr {N}(B)-\mathscr {N}(A)=|B|-|A|\ge 1\),
$$\begin{aligned} g(B) - g(A) = \psi _2(B)-\psi _2(A) + C[\mathscr {N}(B)-\mathscr {N}(A)] \ge \min _{A'\subset B'\subseteq E} [\psi _2(B') - \psi _2(A')] + C. \end{aligned}$$
Therefore, choosing C so that, in addition, \(C>\left| \min \limits _{A'\subset B'\subseteq E} [\psi _2(B') - \psi _2(A')]\right| \), we guarantee that g is increasing. Moreover, \(\mathbb {E}(g)=0\), since both \(\psi _2\) and \(\mathscr {N}-\mathbb {E}(\mathscr {N})\) have zero mean. Lemma A.4 then implies that \(g_W\) is increasing. Therefore, \(\psi =g_W\) is an increasing, non-zero element of W. \(\square \)
Lemma A.4
If g is increasing and has zero mean, then its projection onto W is also increasing.
Proof
Let \(g\in \mathbb {R}^k\) be any increasing observable with mean zero, and let \(t\in \mathbb {N}^+\). Since Lemma A.6 implies \(\lambda _2>0\), we can write
$$\begin{aligned} \frac{P^t g}{\lambda _2^t} = g_W + \sum _{l=\dim (W)+2}^k \langle g,\psi _l\rangle \psi _l\,\left( \frac{\lambda _l}{\lambda _2}\right) ^t. \end{aligned}$$
It follows that
$$\begin{aligned} \lim _{t\rightarrow \infty } \frac{P^t g}{\lambda _2^t} = g_W. \end{aligned}$$
(A.2)
Now, for any given \(t\ge 1\), Lemma A.5 implies that \(P^t g(A)\) is an increasing function of A, and hence so is \(\lambda _2^{-t}P^t g (A)\). Since a pointwise limit of increasing functions is increasing, (A.2) implies that \(g_W\) is also increasing. \(\square \)
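The limit (A.2) and the monotonicity of \(g_W\) can be observed numerically as well; the following fragment continues the illustrative sketch above and is not part of the proof.

```python
# Continues the sketch above; numerical illustration of (A.2), not a proof.
ones = np.ones(k)
g0 = g - inner(g, ones) * ones        # subtract the mean: <g0, psi_1> = 0
gW = sum(inner(g0, Psi[:, i]) * Psi[:, i] for i in np.where(in_W)[0])

x, T = g0.copy(), 60
for _ in range(T):
    x = P @ x                         # after the loop, x = P^T g0
print(np.max(np.abs(x / lam2 ** T - gW)))   # small: P^t g / lambda_2^t -> g_W

# g_W is increasing: g_W(A) <= g_W(B) whenever A is a proper subset of B.
assert all(gW[idx[A]] <= gW[idx[B]] + 1e-9
           for A in subsets for B in subsets if A < B)
```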
Lemma A.5
If \(g\in \mathbb {R}^k\) is increasing, then \(P^tg\) is also increasing, for every \(t\ge 1\).
Proof
Let \((f,\mathscr {E},U)\) be the random mapping representation for P given in Sect. 2.1; see (2.3). Let \(A_1\subset A_2\subseteq E\), and let \(B_i=f(A_i,\mathscr {E},U)\) for \(i=1,2\). Clearly, \((B_1,B_2)\) is a coupling of the distributions \(P(A_1,\cdot )\) and \(P(A_2,\cdot )\), and the monotonicity of f implies \(B_1\subseteq B_2\). Strassen’s theorem (see e.g. [25, Theorem 4.2]) then implies that
$$\begin{aligned} \mathbb {E}_{P(A_1,\cdot )}(g) \le \mathbb {E}_{P(A_2,\cdot )}(g) \end{aligned}$$
for any increasing \(g\in \mathbb {R}^k\). It follows that
$$\begin{aligned} (Pg)(A_1) = \sum _{B\subseteq E} P(A_1,B)\, g(B) = \mathbb {E}_{P(A_1,\cdot )}(g) \le \mathbb {E}_{P(A_2,\cdot )}(g) = \sum _{B\subseteq E} P(A_2,B)\, g(B) = (Pg)(A_2). \end{aligned}$$
Since this holds for any \(A_1\subset A_2\subseteq E\), it follows that Pg is increasing. It then follows by a simple induction that \(P^tg\) is increasing for any \(t\ge 1\). \(\square \)
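For illustration, the following fragment (continuing the sketch above) implements one plausible random mapping for a single heat-bath update, driven by a shared edge and uniform variable, and verifies the monotonicity \(B_1\subseteq B_2\) on the toy example; the precise representation (2.3) of Sect. 2.1 may differ in its details.

```python
# Continues the sketch above. One plausible random mapping for a single
# heat-bath step, driven by a shared edge e and uniform u; the representation
# (2.3) in Sect. 2.1 may differ in detail.
def f_update(A, e, u):
    Ain, Aout = A | {e}, A - {e}
    return Ain if u < phi[idx[Ain]] / (phi[idx[Ain]] + phi[idx[Aout]]) else Aout

rng = np.random.default_rng(0)
for _ in range(100):
    e, u = E[rng.integers(len(E))], rng.random()
    for A1 in subsets:
        for A2 in subsets:
            if A1 < A2:                               # A1 a proper subset of A2
                # Monotonicity of the update (uses q >= 1): B1 is contained in B2.
                assert f_update(A1, e, u) <= f_update(A2, e, u)
```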
Lemma A.6
The second-largest eigenvalue of P is positive.
Proof
Since P is reversible and irreducible we have the spectral decomposition (see e.g. [35, Lemma 12.2])
$$\begin{aligned} \frac{P(A,B)}{\phi (B)} = 1 + \sum _{j=2}^k\psi _j(A)\psi _j(B)\lambda _j. \end{aligned}$$
Since \(\lambda _2\ge \lambda _j\ge 0\) for all \(j>2\), it follows that if \(\lambda _2=0\), then \(P(A,B)=\phi (B)\) for all \(A,B\subseteq E\). But since, by assumption, we have \(|E|>1\), we can choose \(A,B\subseteq E\) with \(|A\triangle B| > 1\), where \(\triangle \) denotes symmetric difference, and (2.2) then implies
$$\begin{aligned} P(A,B) = 0 \ne \phi (B). \end{aligned}$$
We have therefore reached a contradiction, and we conclude that \(\lambda _2>0\). \(\square \)
Appendix B: Coupon Collecting
Let \(n\in \mathbb {N}^+\), and let \(C_1,C_2,\ldots \) be an iid sequence of uniformly random elements of \([n]\,:=\{1,2,\ldots ,n\}\). For \(t\in \mathbb {N}^+\), we think of \(C_t\) as the coupon collected at time t. For \(i\in [n]\), let \(D_i\in [n]\) denote the ith distinct type of coupon collected; i.e. the ith distinct element of the sequence \(C_1,C_2,\ldots \). Let \(S_i(t)\,:=\#\{s\le t : C_s = D_i\}\), the number of copies of \(D_i\) collected by time t. Define \(R_t\,:=\{c\in [n]: C_s = c \text { for some } s\le t\}\), the set of distinct coupon types collected up to time t. For any \(1\le k \le n\), let \(W_k=\inf \{t\in \mathbb {N}^+: |R_t| = k\}\), and note that \(W_k\) is simply the hitting time of \(D_{k}\). The coupon collector’s time is then defined as \(W\,:=W_n\).
For each \(c\in [n]\), define
$$\begin{aligned} H(c) = \sup \{t\le W: C_t=c\}. \end{aligned}$$
We refer to the time \(H(c)\) as the last visit to c. Let \((H_i)_{i=1}^n\) denote the sequence of the \(H(c)\), arranged in increasing order. In particular, \(H_1\) is the first time that a last visit occurs.
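The definitions above are mirrored in the following illustrative simulation; the value of n and the random seed are arbitrary choices of ours.

```python
import numpy as np

rng = np.random.default_rng(1)

def collect(n):
    """Sample coupons uniformly from [n] until every type has been seen;
    return the full sequence (C_1, ..., C_W)."""
    seen, C = set(), []
    while len(seen) < n:
        c = int(rng.integers(1, n + 1))
        C.append(c)
        seen.add(c)
    return C

n = 1000
C = collect(n)
W = len(C)                                   # the coupon collector's time W
H = {c: max(t for t, ct in enumerate(C, 1) if ct == c) for c in set(C)}
H1 = min(H.values())                         # H_1: the first last-visit time
R_H1 = len(set(C[:H1]))                      # |R_{H_1}|
print(W, H1, R_H1, np.log(n))                # Lemma B.1: |R_{H_1}| > ln(n) w.h.p.
```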
Lemma B.1
There exists \(\varphi >0\) such that \(\mathbb {P}(|R_{H_{1}}| \le \left\lfloor \ln n \right\rfloor ) = O(n^{-\varphi })\).
Proof
Inserting \(a_n=\lfloor \ln (n)\rfloor \) and \(c_n=\lfloor \ln (n)/4\rfloor \) into Lemma B.2 and applying the union bound yields
$$\begin{aligned} \mathbb {P}\left( \bigcup _{i=1}^{a_{n}} \left\{ S_{i}(W) \le c_n \right\} \right)&\le \ln (n) \exp \left( -\frac{1}{2}\ln (n-a_n) + \frac{\ln (n)}{4} + 1\right) \\&= e\,\ln (n) \exp \left( -\frac{1}{4}\ln (n) - \frac{1}{2}\ln \left( 1-\frac{a_n}{n}\right) \right) \\&= \frac{e}{\sqrt{1-\lfloor \ln (n)\rfloor /n}}\,\ln (n)\, n^{-1/4}. \end{aligned}$$
Therefore, for any \(0<\rho <1/4\), we have
$$\begin{aligned} \mathbb {P}\left( \bigcup _{i=1}^{a_{n}} \left\{ S_{i}(W) \le c_n \right\} \right) = O(n^{-\rho }), \qquad n\rightarrow \infty . \end{aligned}$$
It follows that
$$\begin{aligned} \mathbb {P}(|R_{H_1}| \le a_n) = \mathbb {P}\left( |R_{H_1}|\le a_n, \bigcap _{i=1}^{a_n}\{S_i(W)>c_n\}\right) + O(n^{-\rho }). \end{aligned}$$
(B.1)
Let \(I\,:=\inf \{t\in \mathbb {N}^+: S_i(t) = c_n \text { for some } i\in [n] \}\), the first time that there exists a coupon type for which exactly \(c_n\) copies have been collected, and define the random variable \(K\in [n]\) via \(C_{H_1}=D_K\). If \(|R_{H_1}|\le a_n\), then \(1\le K \le a_n\). Therefore, observing that \(S_{K}(W)=S_{K}(H_1)\), we find
$$\begin{aligned} \mathbb {P}\left( |R_{H_1}|\le a_n, \bigcap _{i=1}^{a_n}\{S_i(W)>c_n\}\right)&\le \mathbb {P}(|R_{H_1}|\le a_n, S_{K}(W)> c_n) \nonumber \\&= \mathbb {P}(|R_{H_1}|\le a_n, S_{K}(H_1) > c_n) \nonumber \\&\le \mathbb {P}(|R_I|\le a_n) \end{aligned}$$
(B.2)
since if \(|R_{H_1}|\le a_n\) and \(S_{K}(H_1) > c_n\) then \(|R_I|\le a_n\). Combining (B.1) and (B.2) then implies
$$\begin{aligned} \mathbb {P}(|R_{H_1}| \le a_n) \le \mathbb {P}(|R_I|\le a_n) + O(n^{-\rho }). \end{aligned}$$
However, Lemma B.3 implies that there exists \(\delta >0\) such that \(\mathbb {P}(|R_I|\le a_n) = O(n^{-\delta })\). We therefore conclude that, if \(\varphi =\min \{\rho ,\delta \}\), then
$$\begin{aligned} \mathbb {P}(|R_{H_{1}}| \le \left\lfloor \ln n \right\rfloor ) = O(n^{-\varphi }). \end{aligned}$$
\(\square \)
Lemma B.2
Let \((a_n)_{n\in \mathbb {N}^+}\) and \((c_n)_{n\in \mathbb {N}^+}\) be any two sequences of natural numbers. For \(n\in \mathbb {N}^+\), if \(a_n<n\) then for each \(1\le i\le a_n\) we have
$$\begin{aligned} \mathbb {P}\left( S_i(W) \le c_n \right) \le \exp \left( -\ln (\sqrt{n-a_n}) + c_n + 1\right) . \end{aligned}$$
Proof
Fix \(n\in \mathbb {N}^+\) and \(1\le i \le a_n\), and assume \(a_n<n\). Adopting the convention \(W_0=0\), for \(0\le k \le n-1\) we define
$$\begin{aligned} Y_i(k) \,:= \sum _{j=W_{k}+1}^{W_{k+1}-1} \mathbf {1}_{\{C_{j}= D_{i}\}}. \end{aligned}$$
Since \(Y_i(k)=0\) for all \(k<i\), and \(C_{W_k}=D_i\) iff \(k=i\), we then have
$$\begin{aligned} S_{i}(W) = 1 + \sum _{k=i}^{n-1} Y_i(k). \end{aligned}$$
Since the random variables \(Y_i(k)\) are non-negative and \(i\le a_n\), for any \(\theta <0\) we have
$$\begin{aligned} \mathbb {P}\left( S_i(W) \le c_n \right)&\le \mathbb {P}\left( \sum _{k=a_n}^{n-1} Y_i(k) \le c_n \right) \nonumber \\&= \mathbb {P}\left( \exp \left[ \theta \sum _{k=a_n}^{n-1} Y_i(k)\right] \ge e^{\theta c_n} \right) \nonumber \\&\le \exp \left( -\theta c_n + \sum _{k=a_n}^{n-1} \ln \mathbb {E}[e^{\theta Y_i(k)}]\right) , \end{aligned}$$
(B.3)
where the final step follows from Markov's inequality and the independence of the random variables \(Y_i(k)\).
The moment generating function of \(Y_i(k)\) can be calculated explicitly. Let \(i \le k \le n-1\). Given \(W_{k}\) and \(W_{k+1}\), the random variable \(Y_i(k)\) has binomial distribution with \(W_{k+1} -W_{k}-1\) trials and success probability \(1/k\), which implies
$$\begin{aligned} \mathbb {E}(e^{\theta Y_i(k)})&= \mathbb {E}(\mathbb {E}[e^{\theta Y_i(k)}|W_{k},W_{k+1}]) \\&= \mathbb {E}\left[ \left( \frac{e^{\theta }}{k} + 1 - \frac{1}{k}\right) ^{W_{k+1} - W_{k}-1}\right] . \end{aligned}$$
But since \(W_{k+1}-W_{k}\) has geometric distribution with parameter \(1-k/n\), this becomes
$$\begin{aligned} \mathbb {E}(e^{\theta Y_i(k)}) = \frac{n-k}{n-k+1-e^{\theta }}. \end{aligned}$$
Therefore, setting \(\lambda =1-e^\theta \) and \(b_n=n-a_n\), it follows from the fact that \(\ln (1+\lambda /k)\) is a decreasing function of k that
$$\begin{aligned} -\sum _{k=a_{n}}^{n-1} \ln \mathbb {E}(e^{\theta Y_i(k)})&= \sum _{k=1}^{b_n} \ln \left( 1 + \frac{\lambda }{k} \right) \nonumber \\&\ge \int _1^{b_n} \ln \left( 1+\frac{\lambda }{x}\right) \mathrm {d}\,x \nonumber \\&= \lambda \ln (b_n) + (b_n + \lambda ) \ln (1 + \lambda /b_n) - (1+\lambda )\ln (1+\lambda )\nonumber \\&\ge \lambda \ln (b_n) + \lambda - (1+\lambda )\ln (1+\lambda )\nonumber \\&\ge \lambda \ln (b_n) - 1 \end{aligned}$$
(B.4)
where, in the penultimate step, we used the fact that \(\ln (1+x)\ge x/(1+x)\) holds for all \(x>-1\), and in the last step we used the fact that \((1+\lambda )-(1+\lambda )\ln (1+\lambda )>0\) for any \(\lambda \in (0,1)\). Combining (B.3) and (B.4), we conclude that for all \(\lambda \in (0,1)\) we have
$$\begin{aligned} \mathbb {P}\left( S_i(W) \le c_n \right) \le \exp \left[ -\lambda \ln (n-a_n) -\ln (1-\lambda )\,c_n + 1 \right] . \end{aligned}$$
Choosing \(\lambda =1/2\), and noting that \(-\ln (1/2)=\ln 2<1\), yields the stated result. \(\square \)
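Continuing the simulation sketch above, the bound of Lemma B.2 can be compared with empirical frequencies for \(i=1\), taking \(a_n\) and \(c_n\) as in Lemma B.1; the number of trials is arbitrary.

```python
# Continues the simulation sketch above (collect, rng, n); illustrative only.
a_n, c_n = int(np.log(n)), int(np.log(n) / 4)
trials, hits = 200, 0
for _ in range(trials):
    C = collect(n)
    D1 = C[0]                                   # the first distinct type D_1
    hits += sum(ct == D1 for ct in C) <= c_n    # the event {S_1(W) <= c_n}
bound = np.exp(-np.log(np.sqrt(n - a_n)) + c_n + 1)
print(hits / trials, "<=", bound)
```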
Lemma B.3
Fix \(c\in (0,1)\), and define sequences \((a_n)_{n\in \mathbb {N}^+}\) and \((c_n)_{n\in \mathbb {N}^+}\) such that \(a_n=\lfloor \ln (n)\rfloor \) and \(c_n=\lfloor c\ln (n)\rfloor \). Let
$$\begin{aligned} I\,:=\inf \{t\in \mathbb {N}^+: S_i(t) = c_n \text { for some } i\in [n] \}, \end{aligned}$$
the first time that there exists a coupon type for which exactly \(c_n\) copies have been collected. Then there exists \(\delta >0\) such that
$$\begin{aligned} \mathbb {P}(|R_{I}|\le a_n)=O(n^{-\delta }), \qquad n \rightarrow \infty . \end{aligned}$$
Proof
We assume, in all that follows, that n is sufficiently large that \(c_n>1\). For \(k\in [n]\), let
$$\begin{aligned} I_k=\inf \{t\in \mathbb {N}^+: S_{k}(t) =c_n\} \end{aligned}$$
be the first time that \(c_n\) copies of coupon type \(D_k\) have been collected. For any sequence of natural numbers \((b_n)_{n\in \mathbb {N}^+}\), we have
$$\begin{aligned} \mathbb {P}(|R_{I_k}| \le a_n)&= \mathbb {P}(|R_{I_k}|\le a_n, I_k\le b_n) + \mathbb {P}(|R_{I_k}|\le a_n, I_k> b_n)\nonumber \\&\le \mathbb {P}(I_k\le b_n) + \mathbb {P}(|R_{I_k}|\le a_n, I_k> b_n)\nonumber \\&\le \mathbb {P}(I_k\le b_n) + \mathbb {P}(W_{a_n+1}>b_n), \end{aligned}$$
(B.5)
where the last inequality follows by observing that if \(|R_{I_k}|\le a_n\) and \(I_k>b_n\), then \(W_{a_n+1}>b_n\).
To find an upper bound for \(\mathbb {P}(I_k\le b_n)\), note that, for any \(s\ge 1\), the random time between the sth and \((s+1)\)th arrival of coupon type \(D_k\) is a geometric random variable with success probability \(1/n\). It follows that \(\varDelta _k\,:=I_k-W_k\) is a sum of \(c_n-1\) independent geometric random variables, each with success probability \(1/n\). Lemma B.4 therefore implies that for any \(0<\lambda <1\),
$$\begin{aligned} \mathbb {P}(\varDelta _k\le \lambda n(c_n-1)) \le e^{-f(\lambda ) c_n + f(\lambda )} \end{aligned}$$
where \(f(\lambda )>0\). But from the trivial lower bound \(W_k\ge 1\), it follows that \(\varDelta _k\le I_k -1\). Therefore, for any \(b_n\le \lambda n(c_n-1)+1\), we have
$$\begin{aligned} \mathbb {P}(I_k \le b_n) \le \mathbb {P}(I_k\le \lambda n (c_n-1) + 1) \le \mathbb {P}(\varDelta _k\le \lambda n (c_n-1)) \le e^{-f(\lambda ) c_n + f(\lambda )}. \end{aligned}$$
(B.6)
To find an upper bound for \(\mathbb {P}(W_{a_n+1}>b_n)\), we begin with the observation that, with the convention \(W_0=0\), we have
$$\begin{aligned} W_{a_n+1} = \sum _{i=0}^{a_n}(W_{i+1}-W_i). \end{aligned}$$
For \(0\le i \le a_n\), the random variables \(W_{i+1}-W_i\) are independent, and distributed according to a geometric distribution with success probability \(1-i/n\). Therefore, Lemma B.4 implies that for any \(\zeta >1\)
$$\begin{aligned} \mathbb {P}(W_{a_n+1}\ge \zeta \,\mathbb {E}[W_{a_n+1}]) \le e^{-f(\zeta )(1-a_n/n)\mathbb {E}(W_{a_n+1})} \end{aligned}$$
with \(f(\zeta )>0\). But explicit calculation shows that
$$\begin{aligned} \mathbb {E}(W_{a_n+1}) = n(\mathcal {H}_n - \mathcal {H}_{n-a_n-1})\sim a_n, \qquad n\rightarrow \infty , \end{aligned}$$
where \(\mathcal {H}_i\,:=\sum _{j=1}^i 1/j\) denotes the ith harmonic number, and the asymptotic result follows from \(\mathcal {H}_n\sim \ln (n)\) and the fact that \(a_n=o(n)\). It follows that for any choice of \(b_n \ge \zeta \, \mathbb {E}(W_{a_n+1})\) and \(\alpha \in (0,f(\zeta ))\), for sufficiently large n, we have
$$\begin{aligned} \mathbb {P}(W_{a_n+1}\ge b_n) \le e^{-\alpha \, a_n}. \end{aligned}$$
(B.7)
Any choice of \(b_n\) satisfying \(\zeta \,\mathbb {E}(W_{a_n+1}) \le b_n \le \lambda n(c_n-1)+1\), for sufficiently large n, suffices to ensure (B.6) and (B.7) hold simultaneously. It therefore suffices to set \(b_n=n\). For simplicity, \(\lambda \in (0,1)\) and \(\zeta >1\) can be chosen so that \(f(\lambda )=1=f(\zeta )\). Combining (B.5), (B.6) and (B.7) then implies that for any \(\alpha <1\) we have
$$\begin{aligned} \mathbb {P}(|R_{I_k}| \le a_n) \le e^{-c_n+1} + e^{-\alpha \,a_n} \end{aligned}$$
for sufficiently large n.
Finally, since \(|R_I|\le a_n\) implies \(|R_{I_k}|\le a_n\) for some \(1\le k \le a_n\), it follows from the union bound that, for sufficiently large n,
$$\begin{aligned} \mathbb {P}(|R_I|\le a_n) \le \mathbb {P}\left( \bigcup _{k=1}^{a_n}\{|R_{I_k}|\le a_n\}\right) \le \sum _{k=1}^{a_n} \mathbb {P}(|R_{I_k}|\le a_n)&\le a_n\, e^{-c_n+1} + a_n\, e^{-\alpha \,a_n}\\&\le e^2\,\ln (n)\,n^{-c} + e^{\alpha }\,\ln (n)\,n^{-\alpha }. \end{aligned}$$
Since \(c,\alpha >0\), we can choose \(0< \delta <\min \{c,\alpha \}\), and we obtain \(\mathbb {P}(|R_I|\le a_n) = O(n^{-\delta })\). \(\square \)
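The event \(\{|R_I|\le a_n\}\) can likewise be explored by simulation. The following fragment continues the sketch above; we floor \(c_n\) at 2 so that the stopping rule is non-trivial at moderate n.

```python
# Continues the simulation sketch above; illustrates Lemma B.3, not a proof.
def sample_R_I(n, c_n):
    """Collect coupons until some type first reaches c_n copies (time I);
    return |R_I|, the number of distinct types seen by then."""
    counts, R = {}, set()
    while True:
        c = int(rng.integers(1, n + 1))
        R.add(c)
        counts[c] = counts.get(c, 0) + 1
        if counts[c] == c_n:
            return len(R)

a_n = int(np.log(n))
c_n = max(2, int(np.log(n) / 4))     # floor at 2 so the stopping rule is non-trivial
print(sample_R_I(n, c_n), ">", a_n, "with high probability")
```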
Lemma B.4
Let \(X_1,X_2,\ldots ,X_n\) be independent random variables, such that \(X_i\) has geometric distribution with success probability \(p_i\), and let \(X=\sum _{i=1}^n X_i\). Then
$$\begin{aligned} \mathbb {P}(X\le \lambda \mu )&\le e^{-p_{*}\mu f(\lambda )}, \qquad \forall \,\lambda \in (0,1],\\ \mathbb {P}(X\ge \zeta \mu )&\le e^{-p_{*}\mu f(\zeta )}, \qquad \forall \,\zeta \ge 1, \end{aligned}$$
where \(\mu =\mathbb {E}(X) = \sum _{i=1}^n 1/p_i\), \(p_*= \min _{i\in [n]} p_i\) and \(f(x)=x -1 -\ln (x)\).
Proof
These results can be established, in the standard way, by applying Markov’s inequality to \(\mathbb {E}(e^{t X})\), and using the explicit form for \(\mathbb {E}(e^{t X_i})\); see e.g. [31]. \(\square \)
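As a sanity check of Lemma B.4, the following self-contained snippet compares the two bounds with empirical tail frequencies in the iid case, for which \(p_*=p\); all parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
p, m = 0.1, 50                 # m iid geometric(p) summands, so p_* = p
lam, zeta = 0.5, 1.5
mu = m / p                     # mu = E(X)

def f(x):
    return x - 1 - np.log(x)

X = rng.geometric(p, size=(100_000, m)).sum(axis=1)
print(np.mean(X <= lam * mu), "<=", np.exp(-p * mu * f(lam)))    # lower tail
print(np.mean(X >= zeta * mu), "<=", np.exp(-p * mu * f(zeta)))  # upper tail
```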