# Large Deviation Properties of the Empirical Measure of a Metastable Small Noise Diffusion

## Abstract

The aim of this paper is to develop tractable large deviation approximations for the empirical measure of a small noise diffusion. The starting point is the Freidlin–Wentzell theory, which shows how to approximate via a large deviation principle the invariant distribution of such a diffusion. The rate function of the invariant measure is formulated in terms of quasipotentials, quantities that measure the difficulty of a transition from the neighborhood of one metastable set to another. The theory provides an intuitive and useful approximation for the invariant measure, and along the way many useful related results (e.g., transition rates between metastable states) are also developed. With the specific goal of design of Monte Carlo schemes in mind, we prove large deviation limits for integrals with respect to the empirical measure, where the process is considered over a time interval whose length grows as the noise decreases to zero. In particular, we show how the first and second moments of these integrals can be expressed in terms of quasipotentials. When the dynamics of the process depend on parameters, these approximations can be used for algorithm design, and applications of this sort will appear elsewhere. The use of a small noise limit is well motivated, since in this limit good sampling of the state space becomes most challenging. The proof exploits a regenerative structure, and a number of new techniques are needed to turn large deviation estimates over a regenerative cycle into estimates for the empirical measure and its moments.

## Introduction

Among the many interesting results proved by Freidlin and Wentzell in the 70’s and 80’s concerning small random perturbations of dynamical systems, one of particular note is the large deviation principle for the invariant measure of such a system. Consider the small noise diffusion

\begin{aligned} dX_{t}^{\varepsilon }=b(X_{t}^{\varepsilon })dt+\sqrt{\varepsilon }\sigma (X_{t}^{\varepsilon })dW_{t},\quad X_{0}^{\varepsilon }=x, \end{aligned}

where $$X_{t}^{\varepsilon }\in {\mathbb {R}}^{d}$$, $$b:{\mathbb {R}}^{d}\rightarrow {\mathbb {R}}^{d}$$, $$\sigma :{\mathbb {R}}^{d}\rightarrow {\mathbb {R}}^{d} \times {\mathbb {R}}^{k}$$ (the $$d\times k$$ matrices) and $$W_{t}\in {\mathbb {R}}^{k}$$ is a standard Brownian motion. Under mild regularity conditions on b and $$\sigma$$, one has that for any $$T\in (0,\infty )$$ the processes $$\{X_{\cdot }^{\varepsilon }\}_{\varepsilon >0}$$ satisfy a large deviation principle on $$C([0,T]:{\mathbb {R}}^{d})$$ with rate function

\begin{aligned} I_{T}(\phi )\doteq \int _{0}^{T}\sup _{\alpha \in {\mathbb {R}}^{d}}\left[ \langle {\dot{\phi }}_{t},\alpha \rangle -\left\langle b(\phi _{t}),\alpha \right\rangle -\frac{1}{2}\left\| \sigma (\phi _{t} )^{\prime }\alpha \right\| ^{2}\right] dt \end{aligned}

when $$\phi$$ is absolutely continuous and $$\phi (0)=x$$, and $$I_{T}(\phi )=\infty$$ otherwise. If $$\sigma (x)\sigma (x)^{\prime }>0$$ (in the sense of symmetric square matrices) for all $$x\in {\mathbb {R}}^{d}$$, then one can evaluate the supremum and find

\begin{aligned} I_{T}(\phi )=\int _{0}^{T}\frac{1}{2}\left\langle {\dot{\phi }}_{t}-b(\phi _{t}),\left[ \sigma (\phi _{t})\sigma (\phi _{t})^{\prime }\right] ^{-1} ({\dot{\phi }}_{t}-b(\phi _{t}))\right\rangle dt. \end{aligned}
(1.1)

To simplify the discussion, we will assume this non-degeneracy condition. It is also assumed by Freidlin and Wentzell in , but can be weakened.

Define the quasipotential $$V(x,y)$$ for $$x,y \in {\mathbb {R}}^{d}$$ by

\begin{aligned} V(x,y)\doteq \inf \left\{ I_{T}(\phi ):\phi (0)=x,\phi (T)=y,T<\infty \right\} . \end{aligned}
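For a one-dimensional gradient system this definition can be checked numerically. The sketch below (our illustration, not from the text) uses $$b=-U'$$ with the double-well potential $$U(x)=(x^{2}-1)^{2}/4$$ and $$\sigma =1$$; for such systems the quasipotential from a local minimum to a point on the same uphill path is known to equal $$2(U(y)-U(x))$$, and the minimizing path is the time-reversed descent $${\dot{\phi }}=U'(\phi )$$.

```python
import numpy as np

# Illustrative double well U and its derivative; b(x) = -U'(x), sigma = 1.
U = lambda x: (x**2 - 1)**2 / 4
dU = lambda x: x**3 - x

def action(phi, dt):
    """I_T(phi) = int (1/2)|phi' - b(phi)|^2 dt with b = -U'."""
    dphi = np.gradient(phi, dt)
    return 0.5 * np.sum((dphi + dU(phi))**2) * dt

# Follow the uphill flow phi' = U'(phi) from the well at -1 toward the saddle at 0.
dt, n = 1e-3, 60_000
phi = np.empty(n)
phi[0] = -1.0 + 1e-4                  # start just off the equilibrium
for k in range(1, n):
    phi[k] = phi[k-1] + dU(phi[k-1]) * dt

# Freidlin-Wentzell prediction for this gradient case:
# V(-1, 0) = 2 * (U(0) - U(-1)) = 1/2.
print(action(phi, dt), 2 * (U(0.0) - U(-1.0)))
```

The computed action of the uphill path agrees with $$2(U(0)-U(-1))=1/2$$ up to discretization error, consistent with the infimum in the definition being attained (in the limit $$T\rightarrow \infty$$) by the reversed descent path.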

Suppose that $$\{X^{\varepsilon }\}$$ is ergodic on a compact manifold $$M\subset {\mathbb {R}}^{d}$$ with invariant measure $$\mu ^{\varepsilon } \in {\mathcal {P}}(M)$$. Then, under a number of additional assumptions, including assumptions on the structure of the dynamical system $${\dot{X}}_{t}^{0} =b(X_{t}^{0})$$, Freidlin and Wentzell [12, Chapter 6] show how to construct a function $$J:M\rightarrow [0,\infty ]$$ in terms of V, such that J is the large deviation rate function for $$\{\mu ^{\varepsilon }\}_{\varepsilon >0}$$: J has compact level sets, and

\begin{aligned}&\liminf _{\varepsilon \rightarrow 0}\varepsilon \log \mu ^{\varepsilon }(G)\ge -\inf _{y\in G}J(y)\text { for open }G\subset M, \\&\limsup _{\varepsilon \rightarrow 0}\varepsilon \log \mu ^{\varepsilon }(F)\le -\inf _{y\in F}J(y)\text { for closed }F\subset M. \end{aligned}

This gives a very useful approximation to $$\mu ^{\varepsilon }$$, and along the way many interesting related results (e.g., transition rates between metastable states) are also developed.

The aim of this paper is to develop large deviation type estimates for a quantity that is closely related to $$\mu ^{\varepsilon }$$, which is the empirical measure over an interval $$[0,T^{\varepsilon }]$$. This is defined by

\begin{aligned} \rho ^{\varepsilon }(A)\doteq \frac{1}{T^{\varepsilon }}\int _{0}^{T^{\varepsilon }} 1_{A}(X_{s}^{\varepsilon })ds \end{aligned}
(1.2)

for $$A\in {\mathcal {B}}(M)$$. For reasons that will be made precise later on, we will assume $$T^{\varepsilon }\rightarrow \infty$$ as $$\varepsilon \rightarrow 0$$, and typically $$T^{\varepsilon }$$ will grow exponentially in the form $$e^{c/\varepsilon }$$ for some $$c>0$$.
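A minimal Euler–Maruyama simulation illustrates how $$\rho ^{\varepsilon }$$ behaves for small $$\varepsilon$$. The dynamics below are our own illustrative choice (a tilted one-dimensional double well, not a model from the text): the left well is deeper, so the empirical measure should put most of its mass there even when the process starts in the shallow right well.

```python
import numpy as np

# Euler-Maruyama sketch of (1.2) for b = -U', sigma = 1, with the
# illustrative tilted double well U(x) = x^4/4 - x^2/2 + 0.2 x.
# The left well (near x = -1.1) is deeper than the right one.
rng = np.random.default_rng(0)
dU = lambda x: x**3 - x + 0.2

eps, dt, n_steps = 0.08, 0.01, 200_000        # time horizon T = 2000
noise = np.sqrt(eps * dt) * rng.standard_normal(n_steps)
x, steps_left = 1.0, 0                        # start in the shallow well
for k in range(n_steps):
    x += -dU(x) * dt + noise[k]
    steps_left += x < 0.0

frac_left = steps_left / n_steps              # rho^eps((-inf, 0))
print(f"empirical mass left of the saddle: {frac_left:.3f}")
```

Even from the "wrong" initial condition, nearly all of the mass of $$\rho ^{\varepsilon }$$ ends up near the deeper well, in line with the concentration of $$\mu ^{\varepsilon }$$ described above; for this to happen $$T^{\varepsilon }$$ must be long enough for the (exponentially rare) escape from the shallow well to occur, which is why $$T^{\varepsilon }\rightarrow \infty$$ exponentially is the natural regime.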

There is of course a large deviation theory for the empirical measure when $$\varepsilon >0$$ is held fixed and the length of the time interval tends to infinity (see, e.g., [7, 8]). However, it can be hard to extract information from the corresponding rate function. Our interest in proving large deviations estimates when $$\varepsilon \rightarrow 0$$ and $$T^{\varepsilon }\rightarrow \infty$$ is in the hope that one will find it easier to extract information in this double limit, analogous to the simplified approximation to $$\mu ^{\varepsilon }$$ just mentioned. These results will be applied in  to analyze and optimize a Monte Carlo method known as infinite swapping [9, 15] when the noise is small. Small noise models are common in applications and are also the setting in which Monte Carlo methods can have the greatest difficulty. We expect that the general set of results will be useful for other purposes as well.

We note that while developed in the context of small noise diffusions, the collection of results due to Freidlin and Wentzell that are discussed in  also hold for other classes of processes, such as scaled stochastic networks, when appropriate conditions are assumed and the finite time sample path large deviation results are available (see, e.g., ). We expect that such generalizations are possible for the results we prove as well.

The outline of the paper is as follows. In Sect. 2, we explain our motivation and the relevance of the particular quantities that are the topic of the paper. In Sect. 3, we provide definitions and assumptions that are used throughout the paper, and Sect. 4 states the main asymptotic results as well as a related conjecture. Examples that illustrate the results are given in Sect. 5. In Sect. 6, we introduce an important tool for our analysis, the regenerative structure, and with this concept we decompose the original asymptotic problem into two sub-problems that require very different forms of analysis. These two types of asymptotic problems are then analyzed separately in Sects. 7 and 8. In Sect. 9, we combine the partial asymptotic results from Sects. 7 and 8 to prove the main large deviation type results that are stated in Sect. 4. Section 10 gives the proof of a key theorem from Sect. 8, which asserts an approximately exponential distribution for the return times that arise in the decomposition based on regenerative structure, as well as a tail bound needed for some integrability arguments. The last section of the paper, Sect. 11, presents the proof of an upper bound on the rate of decay of the variance per unit time in a special case, thereby showing that for this case the lower bounds of Sect. 4 are in a sense tight. To focus on the main discussion, proofs of some lemmas are collected in an Appendix.

### Remark 1.1

There are certain time-scaling parameters that play key roles throughout this paper. For the reader’s convenience, we record here where they are first described: $$h_1$$ and w are defined in (4.1) and (4.2); c is introduced and its relation to $$h_1$$ and w is given in Theorem 4.3; m is introduced at the beginning of Sect. 6.2.

## Quantities of Interest

The quantities we are interested in are the higher order moments, and in particular second moments, of an integral of a risk-sensitive functional with respect to the empirical measure $$\rho ^{\varepsilon }$$ defined in (1.2). To be more precise, the integral is of the form

\begin{aligned} \int _{M}e^{-\frac{1}{\varepsilon }f\left( x\right) }1_{A}\left( x\right) \rho ^{\varepsilon }\left( dx\right) \end{aligned}
(2.1)

for some nice (e.g., bounded and continuous) function $$f:M\rightarrow {\mathbb {R}}$$ and a closed set $$A\in {\mathcal {B}}(M)$$. Note that this integral can also be expressed as

\begin{aligned} \frac{1}{T^{\varepsilon }}\int _{0}^{T^{\varepsilon }}e^{-\frac{1}{\varepsilon }f\left( X_{t}^{\varepsilon }\right) }1_{A}\left( X_{t}^{\varepsilon }\right) dt. \end{aligned}
(2.2)

In order to understand the large deviation behavior of moments of such an integral, we must identify the correct scaling to extract meaningful information. Moreover, as will be shown, there is an important difference between centered moments and ordinary (non-centered) moments.

By the use of the regenerative structure of $$\{X_{t}^{\varepsilon }\}_{t\ge 0}$$, we can decompose (2.2) [equivalently (2.1)] into the sum of a random number of independent and identically distributed (iid) random variables, plus a residual term which here we will ignore. To simplify the notation, we temporarily drop the $$\varepsilon$$, and without being precise about how the regenerative structure is introduced, let $$Y_{j}$$ denote the integral of $$e^{-\frac{1}{\varepsilon }f\left( X_{t}^{\varepsilon }\right) }1_{A}\left( X_{t}^{\varepsilon }\right)$$ over a regenerative cycle. (The specific regenerative structure we use will be identified later on.)

Thus, we consider a sequence $$\{Y_{j}\}_{j\in {\mathbb {N}} }$$ of iid random variables with finite second moments and want to compare the scaling properties of, for example, the second moment and the second centered moment of $$\frac{1}{n}\sum _{j=1}^{n}Y_{j}$$. When used for the small noise system, both n and moments of $$Y_{i}$$ will scale exponentially in $$1/\varepsilon$$, and n will be random, but for now we assume n is deterministic. The second moment is

\begin{aligned} E\left( \frac{1}{n}\sum _{k=1}^{n}Y_{k}\right) ^{2} =\frac{1}{n^{2}} \sum _{k=1}^{n}E\left( Y_{k}^{2}\right) +\frac{1}{n^{2}}\sum _{i,j:i\ne j}E\left( Y_{i}Y_{j}\right) =\left( EY_{1}\right) ^{2}+\frac{1}{n}\mathrm {Var}\left( Y_{1}\right) , \end{aligned}

and the second centered moment is

\begin{aligned} E\left( \frac{1}{n}\sum _{k=1}^{n}\left( Y_{k}-EY_{1}\right) \right) ^{2}=\mathrm {Var}\left( \frac{1}{n}\sum _{k=1}^{n}Y_{k}\right) =\frac{1}{n}\mathrm {Var}\left( Y_{1}\right) . \end{aligned}
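The two identities above can be verified exactly by brute-force enumeration for a small discrete distribution (the values and probabilities below are an arbitrary choice of ours):

```python
import itertools
import numpy as np

# Exact check of the two displays above for iid Y_1,...,Y_n:
# E[(mean)^2] = (EY_1)^2 + Var(Y_1)/n, while the centered second
# moment is Var(Y_1)/n alone.
vals = np.array([0.0, 1.0, 3.0])
probs = np.array([0.2, 0.5, 0.3])
n = 3

EY = vals @ probs                       # 1.4
VarY = (vals**2) @ probs - EY**2        # 1.24

second, centered = 0.0, 0.0
for idx in itertools.product(range(3), repeat=n):
    p = np.prod(probs[list(idx)])       # probability of this outcome
    m = np.mean(vals[list(idx)])
    second += p * m**2
    centered += p * (m - EY)**2

print(second, EY**2 + VarY / n)         # both 2.3733...
print(centered, VarY / n)               # both 0.4133...
```

Note that here the term $$(EY_{1})^{2}$$ dominates $$\mathrm {Var}(Y_{1})/n$$ already for $$n=3$$; in the small noise setting, where $$n$$ is exponentially large, the domination is far more extreme.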

When analyzing the performance of the Monte Carlo schemes, one is concerned of course with both bias and variance, but in situations where we would like to apply the results of this paper one assumes $$T^{\varepsilon }$$ is large enough that the bias term is unimportant, so that all we are concerned with is the variance. However, some care will be needed to determine a suitable measure of quality of the algorithm, since as noted $$Y_{i}$$ could scale exponentially in $$1/\varepsilon$$ with a negative coefficient (i.e., be exponentially small), while n will be exponentially large.

In the analysis of unbiased accelerated Monte Carlo methods for small noise systems over bounded time intervals (e.g., to estimate escape probabilities), it is standard to use the second moment, which is often easier to analyze, in lieu of the variance [3, Chapter VI], [4, Chapter 14]. This situation corresponds to $$n=1$$. The alternative criterion is more convenient since by Jensen’s inequality one can easily establish a best possible rate of decay of the second moment, and estimators are deemed efficient if they possess the optimal rate of decay [3, 4]. However, with n exponentially large this is no longer true. Using the previous calculations, we see that the second moment of $$\frac{1}{n}\sum _{j=1}^{n}Y_{j}$$ can be completely dominated by $$\left( EY_{1}\right) ^{2}$$, and therefore using this quantity to compare algorithms may be misleading, since our true concern is the variance of $$\frac{1}{n}\sum _{j=1}^{n}Y_{j}$$.

This observation suggests that in our study of moments of the empirical measure we should consider only centered moments, and in particular quantities like

\begin{aligned} T^{\varepsilon }\mathrm {Var}\left( \int _{M}e^{-\frac{1}{\varepsilon }f\left( x\right) }1_{A}\left( x\right) \rho ^{\varepsilon }\left( dx\right) \right) =T^{\varepsilon }\mathrm {Var}\left( \frac{1}{T^{\varepsilon }}\int _{0}^{T^{\varepsilon }}e^{-\frac{1}{\varepsilon }f\left( X_{t}^{\varepsilon }\right) }1_{A}\left( X_{t}^{\varepsilon }\right) dt\right) , \end{aligned}

which is the variance per unit time. For Monte Carlo, one wants to minimize the variance per unit time, and to make the problem more tractable we instead try to maximize the decay rate of the variance per unit time. Assuming the limit exists, this is defined by

\begin{aligned} \lim _{\varepsilon \rightarrow 0}-\varepsilon \log \left[ T^{\varepsilon }\mathrm {Var}\left( \frac{1}{T^{\varepsilon }}\int _{0}^{T^{\varepsilon }}e^{-\frac{1}{\varepsilon }f\left( X_{t}^{\varepsilon }\right) }1_{A}\left( X_{t}^{\varepsilon }\right) dt\right) \right] \end{aligned}

and so we are especially interested in lower bounds on this decay rate.

Thus, our goal is to develop methods that allow the approximation of at least first and second moments of (2.2). In fact, the methods we introduce can be developed further to obtain large deviation estimates of higher moments if that were needed or desired.

## Setting of the Problem, Assumptions and Definitions

The process model we would like to consider is an $${\mathbb {R}}^{d}$$-valued solution to an Itô stochastic differential equation (SDE) in which the drift returns the process to some compact set so strongly that exit from a larger compact set is rare enough to be ignored when analyzing the empirical measure. However, to simplify the analysis we follow the convention of [12, Chapter 6], and work with a small noise diffusion that takes values in a compact and connected manifold $$M\subset {\mathbb {R}}^{d}$$ of dimension r and with smooth boundary. The precise regularity assumptions for M are given on [12, page 135]. With this convention in mind, we consider a family of diffusion processes $$\{X^{\varepsilon }\}_{\varepsilon \in (0,\infty )},X^{\varepsilon }\in C([0,\infty ):M)$$, that satisfy the following condition.

### Condition 3.1

Consider continuous $$b:M\rightarrow {\mathbb {R}}^{d}$$ and $$\sigma :M\rightarrow {\mathbb {R}}^{d}\times {\mathbb {R}}^{d}$$ (the $$d\times d$$ matrices) and assume that $$\sigma$$ is uniformly nondegenerate, in that there is $$c>0$$ such that for any x and any v in the tangent space of M at x, $$\langle v,\sigma (x)\sigma (x)^{\prime }v\rangle \ge c\langle v,v\rangle$$. For absolutely continuous $$\phi \in C([0,T]:M)$$ define $$I_{T}(\phi )$$ by (1.1), where the inverse $$\left[ \sigma (x)\sigma (x)^{\prime }\right] ^{-1}$$ is relative to the tangent space of M at x. Let $$I_{T}(\phi )=\infty$$ for all other $$\phi \in C([0,T]:M)$$. Then, we assume that for each $$T<\infty$$, $$\{X^{\varepsilon }_t\}_{0\le t\le T}$$ satisfies the large deviation principle with rate function $$I_{T}$$, uniformly with respect to the initial condition [4, Definition 1.13].

We note that for such diffusion processes nondegeneracy of the diffusion matrix implies there is a unique invariant measure $$\mu ^{\varepsilon } \in {\mathcal {P}}(M)$$. A discussion of weak sufficient conditions under which Condition 3.1 holds appears in [12, Sect. 3, Chapter 5].

### Remark 3.2

There are several ways one can approximate a diffusion of the sort described at the beginning of this section by a diffusion on a smooth compact manifold. One such “compactification” of the state space can be obtained by assuming that, for some bounded but large enough rectangle, trajectories that exit the rectangle do not affect the large deviation behavior of quantities of interest, and then extending the coefficients of the process periodically and smoothly off an even larger rectangle to all of $${\mathbb {R}}^{d}$$ (a technique sometimes used to bound the state space for purposes of numerical approximation). One can then map $${\mathbb {R}}^{d}$$ to a manifold that is topologically equivalent to a torus, and even arrange that the metric structure on the part of the manifold corresponding to the smaller rectangle coincides with a Euclidean metric.

Define the quasipotential $$V(x,y):M\times M\rightarrow [0,\infty )$$ by

\begin{aligned} V(x,y)\doteq \inf \left\{ I_{T}(\phi ):\phi (0)=x,\phi (T)=y,T<\infty \right\} . \end{aligned}
(3.1)

For a given set $$A\subset M,$$ define $$V(x,A)\doteq \inf _{y\in A}V(x,y)$$ and $$V(A,y)\doteq \inf _{x\in A}V(x,y).$$

### Remark 3.3

For any fixed $$y$$ and set $$A$$, $$V(x,y)$$ and $$V(x,A)$$ are both continuous functions of $$x$$. Similarly, for any given $$x$$ and any set $$A$$, $$V(x,y)$$ and $$V(A,y)$$ are also continuous in $$y$$.

### Definition 3.4

We say that a set $$N\subset M$$ is stable if for any $$x\in N,y\notin N$$ we have $$V(x,y)>0.$$ A set which is not stable is called unstable.

### Definition 3.5

We say that $$O\in M$$ is an equilibrium point of the ordinary differential equation (ODE) $${\dot{x}}_{t}=b(x_{t})$$ if $$b(O)=0.$$ Moreover, we say that this equilibrium point O is asymptotically stable if for every neighborhood $${\mathcal {E}}_{1}$$ of O (relative to M) there exists a smaller neighborhood $${\mathcal {E}}_{2}$$ such that the trajectories of system $${\dot{x}}_{t}=b(x_{t})$$ starting in $${\mathcal {E}}_{2}$$ converge to O without leaving $${\mathcal {E}}_{1}$$ as $$t\rightarrow \infty .$$

### Remark 3.6

An asymptotically stable equilibrium point is a stable set, but a stable set might contain no asymptotically stable equilibrium point.

The following restrictions on the structure of the dynamical system in M will be used. These restrictions include the assumption that the equilibrium points are a finite collection. This is a more restrictive framework than that of , which allows, e.g., limit cycles. In a remark at the end of this section, we comment on what would be needed to extend to the general setup of .

### Condition 3.7

There exists a finite number of points $$\{O_{j} \}_{j\in L} \subset M$$ with $$L\doteq \{1,2,\ldots ,l\}$$ for some $$l\in {\mathbb {N}}$$, such that $$\cup _{j\in L}\{O_j\}$$ coincides with the $$\omega$$-limit set of the ODE $${\dot{x}}_{t}=b(x_{t})$$.

Without loss of generality, we may assume that $$O_{j}$$ is stable if and only if $$j\in L_{\mathrm{{s}}}$$ where $$L_{\mathrm{{s}}}\doteq \{1,\ldots ,l_{\mathrm{{s}}}\}$$ for some $$l_{\mathrm{{s}}}\le l.$$

Next, we give a definition from graph theory which will be used in the statement of the main results.

### Definition 3.8

Given a subset $$W\subset L=\{1,\ldots ,l\},$$ a directed graph consisting of arrows $$i\rightarrow j$$ $$(i\in L\setminus W,j\in L,i\ne j)$$ is called a W-graph on L if it satisfies the following conditions.

1. Every point $$i\in L\setminus W$$ is the initial point of exactly one arrow.

2. For any point $$i\in L\setminus W,$$ there exists a sequence of arrows leading from i to some point in W.

We note that we could replace the second condition by the requirement that there are no closed cycles in the graph. We denote by G(W) the set of W-graphs; we shall use the letter g to denote graphs. Moreover, if $$p_{ij}$$ ($$i,j\in L,j\ne i$$) are numbers, then $$\prod _{(i\rightarrow j)\in g}p_{ij}$$ will be denoted by $$\pi (g).$$

### Remark 3.9

We mostly consider the set of $$\{i\}$$-graphs, i.e., $$G(\{i\})$$ for some $$i\in L$$, and also use G(i) to denote $$G(\{i\}).$$ We occasionally consider the set of $$\{i,j\}$$-graphs, i.e., $$G(\{i,j\})$$ for some $$i,j\in L$$ with $$i\ne j.$$ Again, we also use G(i, j) to denote $$G(\{i,j\}).$$

### Definition 3.10

For all $$j\in L$$, define

\begin{aligned} W\left( O_{j}\right) \doteq \min _{g\in G\left( j\right) }\left[ {\textstyle \sum _{\left( m\rightarrow n\right) \in g}} V\left( O_{m},O_{n}\right) \right] \end{aligned}
(3.2)

and

\begin{aligned} W\left( O_1\cup O_{j}\right) \doteq \min _{g\in G\left( 1,j\right) }\left[ {\textstyle \sum _{\left( m\rightarrow n\right) \in g}} V\left( O_{m},O_{n}\right) \right] . \end{aligned}
(3.3)
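For small l, the minimization in (3.2) can be carried out by brute force: enumerate every assignment of one outgoing arrow to each node $$m\ne j$$, keep those from which every node leads to j (the $$\{j\}$$-graphs of Definition 3.8), and minimize the summed costs. The cost matrix below is an arbitrary illustration of ours.

```python
import itertools

# Brute-force evaluation of (3.2); V[m][n] plays the role of V(O_{m+1}, O_{n+1}).
def W(V, j):
    others = [m for m in range(len(V)) if m != j]
    best = float("inf")
    for heads in itertools.product(range(len(V)), repeat=len(others)):
        g = dict(zip(others, heads))
        if any(m == g[m] for m in others):          # no arrow i -> i
            continue
        def reaches_j(m):
            seen = set()
            while m != j:                           # follow the arrows
                if m in seen:
                    return False                    # closed cycle: not a W-graph
                seen.add(m)
                m = g[m]
            return True
        if all(reaches_j(m) for m in others):       # g is a {j}-graph
            best = min(best, sum(V[m][g[m]] for m in others))
    return best

V = [[0, 1, 4],
     [2, 0, 3],
     [5, 2, 0]]
print([W(V, j) for j in range(3)])      # -> [4, 3, 4]
```

For this V, for instance, $$W(O_{1})=4$$ is attained by the graph $$\{2\rightarrow 1,3\rightarrow 2\}$$ with cost $$V(O_{2},O_{1})+V(O_{3},O_{2})=2+2$$, which beats the "direct" graph $$\{2\rightarrow 1,3\rightarrow 1\}$$ of cost $$2+5$$.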

### Remark 3.11

Heuristically, if we interpret $$V\left( O_{m},O_{n}\right)$$ as the “cost” of moving from $$O_{m}$$ to $$O_{n},$$ then $$W\left( O_{j}\right)$$ is the “least total cost” of reaching $$O_{j}$$ from every $$O_{i}$$ with $$i\in L\setminus \{j\}.$$ According to [12, Theorem 4.1, Chapter 6], one can interpret $$W(O_i)-\min _{j \in L}W(O_j)$$ as the decay rate of $$\mu ^\varepsilon (B_\delta (O_i))$$, where $$B_\delta (O_i)$$ is a small open neighborhood of $$O_i$$.
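This interpretation can be checked numerically on a small Markov chain surrogate (our illustration, not from the text): take one-step probabilities of Arrhenius form $$p_{ij}=e^{-V_{ij}/\varepsilon }$$ for $$i\ne j$$ and compare the exponential decay rates of the stationary weights with $$W(O_{i})-\min _{j}W(O_{j})$$. For the cost matrix below, a hand computation of (3.2) gives $$W=(4,3,4)$$, hence predicted rates $$(1,0,1)$$.

```python
import numpy as np

# 3-state surrogate: p_ij = exp(-V_ij/eps) off the diagonal, with the
# diagonal chosen so that rows sum to 1. The V matrix is illustrative.
V = np.array([[0.0, 1.0, 4.0],
              [2.0, 0.0, 3.0],
              [5.0, 2.0, 0.0]])
eps = 0.1

P = np.exp(-V / eps)
np.fill_diagonal(P, 0.0)
np.fill_diagonal(P, 1.0 - P.sum(axis=1))   # rows now sum to 1

# stationary distribution: nu P = nu with sum(nu) = 1
A = np.vstack([(P.T - np.eye(3))[:-1], np.ones(3)])
nu = np.linalg.solve(A, np.array([0.0, 0.0, 1.0]))

print(-eps * np.log(nu / nu.max()))        # approx [1, 0, 1]
```

The agreement reflects the Markov chain tree theorem: the stationary weight of state i is proportional to a sum over $$\{i\}$$-graphs of the products $$\pi (g)$$, whose exponential order is exactly $$W(O_{i})$$.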

### Definition 3.12

We use $$G_{\mathrm{{s}}}\left( W\right)$$ to denote the collection of all W-graphs on $$L_{\mathrm{{s}}}=\{1,\ldots ,l_{\mathrm{{s}}}\}$$ with $$W\subset L_{\mathrm{{s}}}.$$

We make the following technical assumptions on the structure of the SDE. Let $$B_{\delta }(K)$$ denote the $$\delta$$-neighborhood of a set $$K\subset M.$$ Recall that $$\mu ^{\varepsilon }$$ is the unique invariant probability measure of the diffusion process $$\{X^{\varepsilon }_t\}_{t}.$$ The existence of the limits appearing in the first part of the condition is ensured by Theorem 4.1 in [12, Chapter 6].

### Condition 3.13

1. There exists a unique asymptotically stable equilibrium point $$O_{1}$$ of the system $${\dot{x}}_{t}=b(x_{t})$$ such that

\begin{aligned}&\lim _{\delta \rightarrow 0}\lim _{\varepsilon \rightarrow 0}-\varepsilon \log \mu ^{\varepsilon }(B_{\delta }(O_{1}))=0, \text { and } \nonumber \\&\lim _{\delta \rightarrow 0}\lim _{\varepsilon \rightarrow 0}-\varepsilon \log \mu ^{\varepsilon }(B_{\delta }(O_{j})) >0 \text{ for any } j\in L\setminus \{1\}. \end{aligned}

2. All of the eigenvalues of the matrix of partial derivatives of b at $$O_\ell$$ relative to M have negative real parts for $$\ell \in L_{\mathrm{{s}}}$$.

3. $$b:M\rightarrow {\mathbb {R}}^{d}$$ and $$\sigma :M\rightarrow {\mathbb {R}}^{d}\times {\mathbb {R}}^{d}$$ are $$C^{1}$$.

### Remark 3.14

According to [12, Theorem 4.1, Chapter 6] and the first part of Condition 3.13, we know that $$W(O_{j})>W(O_{1})$$ for all $$j\in L\setminus \{1\}.$$

### Remark 3.15

We comment on the use of the various parts of the condition. Part 1 means that neighborhoods of $$O_{1}$$ capture more of the mass as $$\varepsilon \rightarrow 0$$ than neighborhoods of any other equilibrium point. It simplifies the analysis greatly, but we expect it could be weakened if desired. Parts 2 and 3 are assumed in , which gives an explicit exponential bound on the tail probability of the exit time from the domain of attraction. It is largely because of our reliance on the results of  that we must assume that equilibrium sets are points in Condition 3.7, rather than the more general compacta as considered in . Both Conditions 3.7 and 3.13 could be weakened if the corresponding versions of the results we use from  were available.

### Remark 3.16

The quantities $$V(O_i,O_j)$$ determine various key transition probabilities and time scales in the analysis of the empirical measure. The more general framework of , as well as the one-dimensional case (i.e., $$r=1$$) in the present setting, requires some closely related but slightly more complicated quantities. These are essentially the analogues of $$V(O_i,O_j)$$ under the assumption that trajectories used in the definition are not allowed to pass through equilibrium compacta (such as a limit cycle) when traveling from $$O_i$$ to $$O_j$$. The related quantities, which are designated using notation of the form $${\tilde{V}}(O_i,O_j)$$ in , are needed since the probability of a direct transition from $$O_i$$ to $$O_j$$ without passing through another equilibrium structure may be zero, which means that transitions from $$O_i$$ to $$O_j$$ must be decomposed according to these intermediate transitions. To simplify the presentation, we do not provide the details of the one-dimensional case in our setup, but simply note that it can be handled by the introduction of these additional quantities.

Consider the filtration $$\{{\mathcal {F}}_{t}\}_{t\ge 0}$$ defined by $${\mathcal {F}}_{t}\doteq \sigma (X_{s}^{\varepsilon },s\le t)$$ for any $$t\ge 0.$$ For any $$\delta >0$$ smaller than a quarter of the minimum of the distances between $$O_{i}$$ and $$O_{j}$$ for all $$i\ne j$$, we consider two types of stopping times with respect to the filtration $$\{{\mathcal {F}}_{t}\}_{t}$$. The first type are the hitting times of $$\{X^{\varepsilon }_t\}_{t}$$ at the $$\delta$$-neighborhood of all equilibrium points $$\{O_{j}\}_{j\in L}$$ after traveling a reasonable distance away from those neighborhoods. More precisely, we define stopping times by $$\tau _{0}\doteq 0,$$

\begin{aligned} \sigma _{n}\!\doteq \!\inf \{t>\tau _{n}\!:\!X_{t}^{\varepsilon }\in {\mathop {\cup }\nolimits _{j\in L}}\partial B_{2\delta }(O_{j})\} \text { and }\tau _{n}\!\doteq \!\inf \{t>\sigma _{n-1}:X_{t}^{\varepsilon }\!\in \!{\mathop {\cup }\nolimits _{j\in L}} \partial B_{\delta }(O_{j})\}. \end{aligned}

The second type of stopping times is the return times of $$\{X^{\varepsilon }_t\}_{t}$$ to the $$\delta$$-neighborhood of $$O_{1}$$, where as noted previously $$O_{1}$$ is in some sense the most important equilibrium point. The exact definitions are $$\tau _{0}^{\varepsilon }\doteq 0,$$

\begin{aligned} \sigma _{n}^{\varepsilon }\!\doteq \!\inf \{t>\tau _{n}^{\varepsilon }:X_{t} ^{\varepsilon }\!\in \! {\textstyle \mathop {\cup }\nolimits _{j\in L\setminus \{1\}}} \partial B_{\delta }(O_{j})\} \text { and } \tau _{n}^{\varepsilon }\!\doteq \!\inf \left\{ t\!>\!\sigma _{n-1}^{\varepsilon } :X_{t}^{\varepsilon }\in \partial B_{\delta }(O_{1})\right\} .\nonumber \\ \end{aligned}
(3.4)

We then define two embedded Markov chains $$\{Z_{n}\}_{n\in {\mathbb {N}} _{0}}\doteq \{X_{\tau _{n}}^{\varepsilon }{}\}_{n\in {\mathbb {N}} _{0}}$$ with state space $${\textstyle \mathop {\cup }\nolimits _{j\in L}} \partial B_{\delta }(O_{j})$$, and $$\{Z_{n}^{\varepsilon }\}_{n\in {\mathbb {N}} _{0}}\doteq \{X_{\tau _{n}^{\varepsilon }}^{\varepsilon }{}\}_{n\in {\mathbb {N}} _{0}}$$ with state space $$\partial B_{\delta }(O_{1}).$$
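A loose discrete-time rendering shows how $$\{Z_{n}\}$$ is read off a trajectory. The dynamics below are an illustrative symmetric double well of ours ($$b(x)=x-x^{3}$$, equilibria $$O_{1}=-1$$ and $$O_{2}=+1$$; not a model from the text): after each $$\tau _{n}$$ we wait until the path is $$2\delta$$ away from every equilibrium (a time $$\sigma _{n}$$), then record which $$\delta$$-neighborhood it hits next (a time $$\tau _{n+1}$$).

```python
import numpy as np

# Read the embedded chain {Z_n} off an Euler-Maruyama path of the
# illustrative SDE dX = (X - X^3) dt + sqrt(eps) dW.
rng = np.random.default_rng(1)
O, delta = np.array([-1.0, 1.0]), 0.1
eps, dt, n_steps = 0.25, 0.005, 200_000

x, Z = -1.0, [0]              # tau_0 = 0, starting inside B_delta(O_1)
between = False               # True on the interval (sigma_n, tau_{n+1})
noise = np.sqrt(eps * dt) * rng.standard_normal(n_steps)
for k in range(n_steps):
    x += (x - x**3) * dt + noise[k]
    d = np.abs(x - O)
    if not between and d.min() >= 2 * delta:
        between = True                    # crossed a boundary of B_{2delta}
    elif between and d.min() <= delta:
        Z.append(int(d.argmin()))         # hit B_delta(O_j): next Z_n
        between = False

print(len(Z), Z[:12])
```

Most steps of the chain are returns to the same neighborhood, with occasional transitions between the two wells; it is precisely the exponentially small probabilities of the latter that Lemma-type estimates in terms of V quantify.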

Let $$p(x,\partial B_{\delta }(O_{j}))$$ denote the one-step transition probabilities of $$\{Z_{n}\}_{n\in {\mathbb {N}} _{0}}$$ starting from a point $$x\in {\textstyle \mathop {\cup }\nolimits _{i\in L}} \partial B_{\delta }(O_{i}),$$ namely,

\begin{aligned} p(x,\partial B_{\delta }(O_{j}))\doteq P_{x}(Z_{1}\in \partial B_{\delta } (O_{j})). \end{aligned}

We have the following estimates on $$p(x,\partial B_{\delta }(O_{j}))$$ in terms of V. The lemma is a consequence of [12, Lemma 2.1, Chapter 6] and the fact that under our conditions $$V(O_i,O_j)$$ and $${\tilde{V}}(O_i,O_j)$$ as defined in  coincide.

### Lemma 3.17

For any $$\eta >0,$$ there exists $$\delta _{0}\in (0,1)$$ and $$\varepsilon _{0}\in (0,1),$$ such that for any $$\delta \in (0,\delta _{0})$$ and $$\varepsilon \in (0,\varepsilon _{0}),$$ for all $$x\in \partial B_{\delta }(O_{i}),$$ the one-step transition probability of the Markov chain $$\{Z_{n}\}_{n\in {\mathbb {N}} }$$ on $$\partial B_{\delta }(O_{j})$$ satisfies

\begin{aligned} e^{-\frac{1}{\varepsilon }\left( V\left( O_{i},O_{j}\right) +\eta \right) }\le p(x,\partial B_{\delta }(O_{j}))\le e^{-\frac{1}{\varepsilon }\left( V\left( O_{i},O_{j}\right) -\eta \right) } \end{aligned}

for any $$i,j\in L.$$

### Remark 3.18

According to Lemma 4.6 in , Condition 3.1 guarantees the existence and uniqueness of invariant measures for $$\{Z_{n}\}_{n}$$ and $$\{Z_{n}^{\varepsilon }\}_{n}.$$ We use $$\nu ^{\varepsilon }\in {\mathcal {P}}(\cup _{i\in L}\partial B_{\delta }(O_{i}))$$ and $$\lambda ^{\varepsilon }\in {\mathcal {P}}(\partial B_{\delta }(O_{1}))$$ to denote the associated invariant measures.

## Results and a Conjecture

The following main results of this paper assume Conditions 3.1, 3.7 and 3.13. Although moments higher than the second moment are not considered in this paper, as noted previously one can use arguments such as those used here to identify and prove the analogous results.

Recall that $$\{O_{j}\}_{j\in L}$$ are the equilibrium points and that they satisfy Conditions 3.7 and  3.13. In addition, $$O_{j}$$ is stable if and only if $$j\in L_{\mathrm{{s}}}$$, where $$L_{\mathrm{{s}}}\doteq \{1,\ldots ,l_{\mathrm{{s}}}\}$$ for some $$l_{\mathrm{{s}}}\le l=\left| L\right|$$, and $$\tau ^\varepsilon _1$$ is the first return time to the $$\delta$$-neighborhood of $$O_1$$ after having first visited the $$\delta$$-neighborhood of any other equilibrium point.

### Lemma 4.1

For any $$\delta \in (0,1)$$ smaller than a quarter of the minimum of the distances between $$O_{i}$$ and $$O_{j}$$ for all $$i\ne j$$, any $$\varepsilon >0$$ and any nonnegative measurable function $$g:M\rightarrow {\mathbb {R}}$$

\begin{aligned} E_{\lambda ^{\varepsilon }}\left( \int _{0}^{\tau _{1}^{\varepsilon }}g\left( X_{s}^{\varepsilon }\right) ds\right) =E_{\lambda ^{\varepsilon }}\tau _{1}^{\varepsilon }\cdot \int _M g\left( x\right) \mu ^{\varepsilon }\left( dx\right) , \end{aligned}

where $$\lambda ^{\varepsilon }\in {\mathcal {P}}(\partial B_{\delta }(O_{1}))$$ is the unique invariant measure of $$\{Z_{n}^{\varepsilon }\}_{n}=\{X_{\tau _{n}^{\varepsilon }}^{\varepsilon }\}_{n}$$ and $$\mu ^{\varepsilon }\in {\mathcal {P}}(M)$$ is the unique invariant measure of $$\{X_{t}^{\varepsilon }\}_{t}.$$

### Proof

We define a measure on M by

\begin{aligned} {\hat{\mu }}^{\varepsilon }\left( B\right) \doteq E_{\lambda ^{\varepsilon } }\left( \int _{0}^{\tau _{1}^{\varepsilon }}1_{B}\left( X^{\varepsilon }_t \right) dt\right) \end{aligned}

for $$B\in {\mathcal {B}}(M),$$ so that for any nonnegative measurable function $$g:M\rightarrow {\mathbb {R}}$$

\begin{aligned} \int _{M}g\left( x\right) {\hat{\mu }}^{\varepsilon }\left( dx\right) =E_{\lambda ^{\varepsilon }}\left( \int _{0}^{\tau _{1}^{\varepsilon }}g\left( X^{\varepsilon }_t \right) dt\right) . \end{aligned}

According to the proof of Theorem 4.1 in , the measure given by $${\hat{\mu }}^{\varepsilon }\left( B\right) /{\hat{\mu }}^{\varepsilon }\left( M\right)$$ is an invariant measure of $$\{X^{\varepsilon }_t\}_{t}.$$ Since we already know that $$\mu ^{\varepsilon }$$ is the unique invariant measure of $$\{X^{\varepsilon }_t\}_{t},$$ this means that $$\mu ^{\varepsilon }(B)=\hat{\mu }^{\varepsilon }\left( B\right) /{\hat{\mu }}^{\varepsilon }\left( M\right)$$ for any $$B\in {\mathcal {B}}(M).$$ Therefore, for any nonnegative measurable function $$g:M\rightarrow {\mathbb {R}}$$

\begin{aligned} E_{\lambda ^{\varepsilon }}\left( \int _{0}^{\tau _{1}^{\varepsilon }}g\left( X^{\varepsilon }_t \right) dt\right)&=\int _{M}g\left( x\right) \mu ^{\varepsilon }\left( dx\right) \cdot {\hat{\mu }}^{\varepsilon }\left( M\right) =\int _{M}g\left( x\right) \mu ^{\varepsilon }\left( dx\right) \cdot E_{\lambda ^{\varepsilon }}\tau _{1}^{\varepsilon }. \end{aligned}

$$\square$$
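The identity above is a renewal–reward formula, and its discrete-time analogue can be checked exactly with linear algebra. The sketch below uses a hypothetical 3-state chain, with state 0 standing in for the regeneration set $$\partial B_{\delta }(O_{1})$$; the occupation measure over one cycle is computed from the fundamental matrix. None of the numbers come from the paper.

```python
import numpy as np

# Hypothetical 3-state chain standing in for the diffusion; state 0 plays
# the role of the regeneration set (a neighborhood of O_1).
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])

# Stationary distribution mu: solve mu P = mu with sum(mu) = 1.
A = np.vstack([P.T - np.eye(3), np.ones(3)])
mu = np.linalg.lstsq(A, np.array([0.0, 0.0, 0.0, 1.0]), rcond=None)[0]

# Expected occupation of each state during one cycle (from 0 back to 0):
# state 0 is visited once, and visits to states 1, 2 before hitting 0 are
# read off the fundamental matrix of the chain restricted to {1, 2}.
Q = P[1:, 1:]                       # substochastic part avoiding state 0
M = np.linalg.inv(np.eye(2) - Q)    # expected-visits matrix
occ = np.empty(3)
occ[0] = 1.0
occ[1:] = P[0, 1:] @ M              # first step out of 0, then count visits

E_tau = occ.sum()                   # expected cycle length (= 1 / mu[0])

# Renewal-reward: E[ sum of g over one cycle ] = E[tau] * mu(g).
g = np.array([1.0, -2.0, 0.5])
assert np.allclose(occ @ g, E_tau * (mu @ g))
```

The assertion is exact up to floating point, which mirrors the fact that the lemma is an identity rather than an asymptotic statement.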

Recall the definitions of $$W(O_j)$$ and $$W(O_1\cup O_j)$$ in Definition 3.10, as well as the definition of the quasipotential $$V(x,y)$$ in (3.1). For any $$k\in L$$, we define

\begin{aligned} h_k\doteq \min _{j\in L\setminus \{k\}}V(O_{k},O_{j}). \end{aligned}
(4.1)

\begin{aligned} w\doteq W(O_1)-\min _{j\in L\setminus \{1\}}W(O_1\cup O_j). \end{aligned}
(4.2)

### Remark 4.2

The quantity $$h_k$$ is related to the time that it takes for the process to leave a neighborhood of $$O_k$$, and $$W(O_1)-W(O_1\cup O_j)$$ is related to the transition time from a neighborhood of $$O_j$$ to one of $$O_1$$. It turns out that our results and arguments depend on which of $$h_1$$ or w is larger. Throughout the paper, the constructions used in the case when $$h_1>w$$ will be in terms of what we call a single cycle, and those for the case when $$h_1\le w$$ in terms of a multicycle.
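As a small illustration of definitions (4.1) and (4.2), the snippet below computes $$h_k$$ and w from hypothetical tables of quasipotentials and graph weights (the values are invented, not taken from the paper's examples) and classifies the regime:

```python
# Hypothetical quasipotentials V(O_k, O_j) and weights W(O_j), W(O_1 U O_j)
# for a three-equilibrium system; all numbers are invented for illustration.
V = {(1, 2): 4.0, (1, 3): 6.0,
     (2, 1): 1.0, (2, 3): 3.0,
     (3, 1): 2.0, (3, 2): 2.0}
W = {1: 5.0, 2: 8.0, 3: 7.0}
W_union = {2: 3.0, 3: 2.0}      # W(O_1 U O_j), j != 1

L = [1, 2, 3]
h = {k: min(V[(k, j)] for j in L if j != k) for k in L}   # (4.1)
w = W[1] - min(W_union[j] for j in L if j != 1)           # (4.2)

# h[1] > w selects the single cycle constructions; h[1] <= w the multicycle ones.
regime = "single" if h[1] > w else "multi"
assert h[1] == 4.0 and w == 3.0 and regime == "single"
```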

### Theorem 4.3

Let $$T^{\varepsilon }=e^{\frac{1}{\varepsilon }c}$$ for some $$c>h_1\vee w$$. Given $$\eta >0,$$ a continuous function $$f:M\rightarrow {\mathbb {R}}$$ and any compact set $$A\subset M,$$ there exists $$\delta _{0}\in (0,1)$$ such that for any $$\delta \in (0,\delta _{0})$$

\begin{aligned}&\liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left| E_{\lambda ^{\varepsilon }}\left( \frac{1}{T^{\varepsilon }}\int _{0}^{T^{\varepsilon } }e^{-\frac{1}{\varepsilon }f\left( X_{t}^{\varepsilon }\right) }1_{A}\left( X_{t}^{\varepsilon }\right) dt\right) -\int _M e^{-\frac{1}{\varepsilon }f\left( x\right) }1_{A}\left( x\right) \mu ^{\varepsilon }\left( dx\right) \right| \\&\qquad \ge \inf _{x\in A}\left[ f\left( x\right) +W\left( x\right) \right] -W\left( O_{1}\right) +c-(h_1\vee w)-\eta , \end{aligned}

where $$W(x)\doteq \min _{j\in L}[W(O_{j})+V(O_{j},x)]$$.

### Remark 4.4

Since $$W(x)=\min _{j\in L}[W(O_{j})+V(O_{j},x)],$$ the lower bound appearing in Theorem 4.3 is equivalent to

\begin{aligned} \min _{j\in L}\left( \inf _{x\in A}\left[ f\left( x\right) +V(O_{j} ,x)\right] +W(O_{j})-W\left( O_{1}\right) \right) +c-(h_1\vee w)-\eta . \end{aligned}

The next result gives an upper bound on the variance per unit time, or equivalently a lower bound on its rate of decay. In the design of a Markov chain Monte Carlo method, one would maximize this rate of decay to improve the method’s performance.

### Theorem 4.5

Let $$T^{\varepsilon }=e^{\frac{1}{\varepsilon }c}$$ for some $$c>h_1\vee w$$. Given $$\eta >0,$$ a continuous function $$f:M\rightarrow {\mathbb {R}}$$ and any compact set $$A\subset M,$$ there exists $$\delta _{0}\in (0,1)$$ such that for any $$\delta \in (0,\delta _{0})$$

\begin{aligned}&\liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left( T^{\varepsilon }\cdot \mathrm {Var}_{\lambda ^{\varepsilon }}\left( \frac{1}{T^{\varepsilon }} \int _{0}^{T^{\varepsilon }}e^{-\frac{1}{\varepsilon }f\left( X_{t} ^{\varepsilon }\right) }1_{A}\left( X_{t}^{\varepsilon }\right) dt\right) \right) \\&\qquad \ge {\left\{ \begin{array}{ll} \min _{j\in L}\left( R_{j}^{(1)}\wedge R_{j}^{(2)}\right) -\eta ,&{} \text {if } h_1>w\\ \min _{j\in L}\left( R_{j}^{(1)}\wedge R_{j}^{(2)}\wedge R_{j}^{(3)}\right) -\eta ,&{} \text {otherwise} \end{array}\right. }, \end{aligned}

where

\begin{aligned}&R_{j}^{(1)}\doteq \inf _{x\in A}\left[ 2f\left( x\right) +V\left( O_{j},x\right) \right] +W\left( O_{j}\right) -W\left( O_{1}\right) , \\&R_{1}^{(2)}\doteq 2\inf _{x\in A}\left[ f\left( x\right) +V\left( O_{1},x\right) \right] -h_1, \end{aligned}

and for $$j\in L\setminus \{1\}$$

\begin{aligned} R_{j}^{(2)}&\doteq 2\inf _{x\in A}\left[ f\left( x\right) +V\left( O_{j},x\right) \right] +W\left( O_{j}\right) -2W\left( O_{1}\right) +W(O_{1}\cup O_{j}),\\ R_{j}^{(3)}&\doteq 2\inf _{x\in A}\left[ f\left( x\right) +V\left( O_{j},x\right) \right] +2W\left( O_{j}\right) -2W\left( O_{1}\right) -w . \end{aligned}

### Remark 4.6

If one mistakenly treated a single cycle case as a multicycle case when applying Theorem 4.5, the result would be the same, since when $$h_1>w$$, (4.2) implies that $$R_{j}^{(3)}\ge R_{j}^{(2)}$$ for any $$j\in L$$.

### Remark 4.7

Although Theorems 4.3 and 4.5 as stated assume the starting distribution $$\lambda ^\varepsilon$$, they can be extended to general initial distributions by using results from Sect. 10, which show that the process essentially forgets the initial distribution before leaving the neighborhood of $$O_1$$.

### Remark 4.8

In this remark, we interpret the use of Theorems 4.3 and 4.5 in the context of Monte Carlo and also explain the role of the time scaling $$T^\varepsilon$$.

There is a minimum amount of time that must elapse before the process can visit all stable equilibrium points often enough that good estimation of risk-sensitive integrals is possible. As is well known, this time scales exponentially in the form $$T^\varepsilon = e^{c/\varepsilon }$$, and the issue is the selection of the constant $$c>0$$, which motivates the assumptions on $$T^\varepsilon$$ in the two cases. However, when designing a scheme there typically will be parameters available for selection. The growth constant in $$T^\varepsilon$$ will then depend on these parameters, which are chosen to (either directly or indirectly, depending on the criteria used) reduce the size of $$T^\varepsilon$$. For a compelling example, we refer to , which shows how, for a system with fixed well depths, a scheme known as infinite swapping can be designed so that for any $$a>0$$ an interval of length $$e^{a/\varepsilon }$$ suffices.

Theorem 4.3 is concerned with bias, which for $$T^\varepsilon$$ as above makes a negligible contribution to the total error in comparison with the variance. Thus, it is Theorem 4.5 that determines the performance of the scheme and serves as the criterion for optimization. Of particular note is that the value of c does not appear in the variational problem in Theorem 4.5.

Theorem 4.5 gives a lower bound on the rate of decay of variance per unit time. For applications to the design of Monte Carlo schemes as in , there is an a priori bound on the best possible performance, and so this lower bound (which yields an upper bound on variances) is sufficient to determine if a scheme is nearly optimal. However, for other purposes an upper bound on the decay rate could be useful, and we expect the other direction holds as well.

The proofs of Theorems 4.3 and 4.5 for single cycles and multicycles are almost identical with a few key differences. We focus on providing proofs in the single cycle case, and then point out the required modifications in the proofs for the multicycle case.

### Theorem 4.9

The bound in Theorem 4.3 can be calculated using only stable equilibrium points. Specifically,

1. $$W(x)=\min _{j\in L_{\mathrm{{s}}}}[W(O_{j})+V(O_{j},x)]$$
2. $$W\left( O_{j}\right) =\min _{g\in G_{\mathrm{{s}}}\left( j\right) }\left[ {\textstyle \sum _{\left( m\rightarrow n\right) \in g}} V\left( O_{m},O_{n}\right) \right]$$
3. $$W(O_{1}\cup O_{j})=\min _{g\in G_{\mathrm{{s}}}\left( 1,j\right) }\left[ {\textstyle \sum _{\left( m\rightarrow n\right) \in g}} V\left( O_{m},O_{n}\right) \right]$$
4. $$\min _{j\in L}( \inf _{x\in A}[ f( x) +V(O_{j},x)] +W(O_{j}) ) =\min _{j\in L_{\mathrm{{s}}}}( \inf _{x\in A}[ f(x)+V(O_{j},x)] +W(O_{j}) )$$.

### Remark 4.10

Theorem 4.9 says that the bound appearing in Theorem 4.3 depends only on the indices of stable equilibrium points. This is not surprising, since in [12, Chapter 6], it has been shown that the logarithmic asymptotics of the invariant measure of a Markov process in this framework can be characterized in terms of graphs on the set of indices of just stable equilibrium points. It is natural to ask if the same property holds for the lower bound appearing in Theorem 4.5. Notice that part 4 of Theorem 4.9 implies $$\min _{j\in L}R_{j}^{(1)}=\min _{j\in L_{\mathrm{{s}}}}R_{j}^{(1)}$$, so if one can prove (possibly under extra conditions, for example, by considering a double-well model as in Sect. 11) that $$\min _{j\in L}R_{j}^{(2)}=\min _{j\in L_{\mathrm{{s}}}}R_{j}^{(2)}$$, then these two equations assert the property we want for the single cycle case, namely, $$\min _{j\in L}( R_{j}^{(1)}\wedge R_{j}^{(2)}) =\min _{j\in L_{\mathrm{{s}}}}( R_{j}^{(1)}\wedge R_{j} ^{(2)}).$$ An analogous comment applies for the multicycle case.

### Conjecture 4.11

Let $$T^{\varepsilon }=e^{\frac{1}{\varepsilon }c}$$ for some $$c>h_1\vee w$$. Let f be continuous and suppose that A is the closure of its interior. Then for any $$\eta >0,$$ there exists $$\delta _{0}\in (0,1)$$ such that for any $$\delta \in (0,\delta _{0})$$

\begin{aligned}&\liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left( T^{\varepsilon }\cdot \mathrm {Var}_{\lambda ^{\varepsilon }}\left( \frac{1}{T^{\varepsilon }} \int _{0}^{T^{\varepsilon }}e^{-\frac{1}{\varepsilon }f\left( X_{t} ^{\varepsilon }\right) }1_{A}\left( X_{t}^{\varepsilon }\right) dt\right) \right) \\&\qquad \le {\left\{ \begin{array}{ll} \min _{j\in L}\left( R_{j}^{(1)}\wedge R_{j}^{(2)}\right) +\eta ,&{} \text {if } h_1>w\\ \min _{j\in L}\left( R_{j}^{(1)}\wedge R_{j}^{(2)}\wedge R_{j}^{(3)}\right) +\eta ,&{} \text {otherwise} \end{array}\right. }. \end{aligned}

In Section 11, we outline the proof of Conjecture 4.11 for a special case.

## Examples

### Example 5.1

We first consider the situation depicted in Fig. 1. Values of $$W(O_{j})$$ are given in the figure. If one interprets the figure as a potential with minimum zero, then the corresponding heights of the equilibrium points are given by $$W(O_{j})-W(O_{1})$$. We take $$f=0$$ and A to be a small closed interval about $$O_{5}$$. As we will see and should be clear from the figure, this example can be analyzed using single regenerative cycles.

Recall that

\begin{aligned}&R_{j}^{(1)}\doteq \inf _{x\in A}[2f(x)+V(O_{j},x)]+W(O_{j})-W(O_{1}) \\&R_{1}^{(2)}\doteq 2\inf _{x\in A}[f(x)+V(O_{1},x)]-h_{1} \end{aligned}

and for $$j>1$$

\begin{aligned} R_{j}^{(2)}\doteq 2\inf _{x\in A}[f(x)+V(O_{j},x)]+\left( W(O_{j})-W(O_{1})\right) -W(O_{1})+W(O_{1}\cup O_{j}) \end{aligned}

If one traces through the proof of Theorem 4.5 for the case of a single cycle, then one finds that the constraining bound is given in Lemma 7.23, which is in turn based on Lemma 7.9. As we will see, in the minimization problem $$\min _{j\in L}(R_{j}^{(1)}\wedge R_{j}^{(2)})$$ the min on j turns out to be achieved at $$j=5$$. This is of course not surprising, since A is an interval about $$O_{5}$$. It is then the minimum of $$R_{5}^{(1)}$$ and $$R_{5}^{(2)}$$ which determines the dominant source of the variance of the estimator.

We recall that $$\tau _{1}^{\varepsilon }$$ is the time for a full regenerative cycle, and that $$\tau _{1}$$ is the time to first reach the $$2\delta$$ neighborhood of an equilibrium point and then reach a $$\delta$$ neighborhood of a (perhaps the same) equilibrium point. The quantities that are relevant in Lemma 7.9 are

\begin{aligned} \sup _{y\in \partial B_{\delta }(O_{5})}E_{y}\left( \int _{0}^{\tau _{1}} 1_{A}(X_{t}^{\varepsilon })dt\right) ^{2}\text { and }E_{x}N_{5} \end{aligned}

for $$R_{j}^{(1)}$$ and

\begin{aligned} \left[ \sup _{y\in \partial B_{\delta }(O_{5})}E_{y}\int _{0}^{\tau _{1}} 1_{A}(X_{t}^{\varepsilon })dt\right] ^{2}\text { , }E_{x}N_{5}\text {, and essentially }\sup _{y\in \partial B_{\delta }(O_{5})}E_{y}N_{5} \end{aligned}

for $$R_{j}^{(2)}$$. Decay rates are in turn determined by (see the proof of Lemma 7.23)

\begin{aligned} 0\text { and }W(O_{1})-W(O_{5})+h_{1} \end{aligned}

and

\begin{aligned} 0,\text { }W(O_{1})-W(O_{5})+h_1\text { and }W(O_{1})-W(O_{1}\cup O_{5}), \end{aligned}

respectively. Thus, for this example it is only the term $$W(O_{1})-W(O_{1}\cup O_{5})$$ that distinguishes between the two. Since this is always greater than zero and it appears in $$R_{j}^{(2)}$$ in the form $$-(W(O_{1})-W(O_{1}\cup O_{5}))$$, it must be the case that $$R_{5}^{(2)}<R_{5}^{(1)}$$.

The numerical values for the example are

\begin{aligned}&(W(O_{1}\cup O_{j}),j=2,\ldots ,5)=(5,3,5,2) \\&(V(O_{j},O_{5}),j=1,\ldots ,5)=(8,4,4,0,0) \\&(W(O_{j})-W(O_{1}),j=1,\ldots ,5)=(0,4,2,6,3) \\&(R_{j}^{(1)},j=1,\ldots ,5)=(8,8,6,6,3) \\&(R_{j}^{(2)},j=2,\ldots ,5)=(12,8,6,0) \end{aligned}

and $$R_{1}^{(2)}=16-4=12$$, $$h_{1}=4$$ and $$w=5-2=3$$. Since $$w<h_{1}$$, this falls into the single cycle case. We therefore find that $$\min _{j}R_{j}^{(1)}\wedge R_{j}^{(2)}$$ equals 0, attained with superscript 2 at $$j=5$$.
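The table above can be reproduced mechanically. In the sketch below, $$W(O_1)=5$$ is inferred from the stated $$w=5-2$$; everything else is copied from the example.

```python
# Data from Example 5.1 (f = 0, A a small interval about O_5, so
# inf_{x in A}[f + V(O_j, x)] reduces to V(O_j, O_5)).
V5 = [8, 4, 4, 0, 0]            # V(O_j, O_5), j = 1..5
dW = [0, 4, 2, 6, 3]            # W(O_j) - W(O_1)
Wu = {2: 5, 3: 3, 4: 5, 5: 2}   # W(O_1 U O_j)
h1, W1 = 4, 5                   # W(O_1) = 5 is consistent with w = 5 - 2

w = W1 - min(Wu.values())
R1 = [V5[j] + dW[j] for j in range(5)]
R2 = [2 * V5[0] - h1] + [2 * V5[j] + dW[j] - W1 + Wu[j + 1] for j in range(1, 5)]

assert w == 3 and h1 > w                      # single cycle case
assert R1 == [8, 8, 6, 6, 3]
assert R2 == [12, 12, 8, 6, 0]
assert min(min(R1), min(R2)) == 0             # attained by R_5^{(2)}
```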

For an example where the dominant contribution to the variance is through the quantities associated with $$R_{j}^{(1)}$$, we move the set A further to the right of $$O_{5}$$. All other quantities are unchanged save

\begin{aligned} \sup _{y\in \partial B_{\delta }(O_{5})}E_{y}\left( \int _{0}^{\tau _{1}} 1_{A}(X_{t}^{\varepsilon })dt\right) ^{2}\text { and }\left[ \sup _{y\in \partial B_{\delta }(O_{5})}E_{y}\int _{0}^{\tau _{1}}1_{A}(X_{t} ^{\varepsilon })dt\right] ^{2}, \end{aligned}

whose decay rates are governed (for $$j=5$$) by $$\inf _{x\in A}[V(O_{5},x)]$$ and $$2\inf _{x\in A}[V(O_{5},x)],$$ respectively. Choosing A so that $$\inf _{x\in A}[V(O_{5},x)]>3$$, it is now the case that $$R_{5}^{(1)}<R_{5}^{(2)}$$.

### Example 5.2

We consider the situation depicted in Fig. 2. In this example, we again take $$f=0$$ and A to be a small closed interval about $$O_{3}$$. Since the well at $$O_5$$ is deeper than that at $$O_1$$, we expect that multicycles will be needed, and so recall

\begin{aligned} R_{j}^{(3)} \doteq 2\inf _{x\in A}\left[ f\left( x\right) +V\left( O_{j},x\right) \right] +2W\left( O_{j}\right) -2W\left( O_{1}\right) -w . \end{aligned}

The needed values are

\begin{aligned}&(W(O_{1}\cup O_{j}),j=2,\ldots ,5)=(7,5,7,2) \\&(V(O_{j},O_{3}),j=1,\ldots ,5)=(4,0,0,0,5) \\&(W(O_{j})-W(O_{1}),j=1,\ldots ,5)=(0,4,2,6,1) \\&(R_{j}^{(1)},j=1,\ldots ,5)=(4,4,2,6,6) \\&(R_{j}^{(2)},j=2,\ldots ,5)=(4,0,6,6) \\&(R_{j}^{(3)},j=1,\ldots ,5)=(3,3,-1,7,7) \end{aligned}

and $$R_{1}^{(2)}=8-4=4$$, $$h_{1}=4$$ and $$w=7-2=5$$. Since $$w>h_{1}$$, a single cycle cannot be used for the analysis of the variance, and we need multicycles. We find that $$\min _{j} R_{j}^{(1)}\wedge R_{j}^{(2)}\wedge R_{j}^{(3)}$$ equals $$-1$$, attained with superscript 3 at $$j=3$$.
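As with the first example, these values can be reproduced mechanically; here $$W(O_1)=7$$ is inferred from the stated $$w=7-2$$.

```python
# Data from Example 5.2 (f = 0, A a small interval about O_3).
V3 = [4, 0, 0, 0, 5]            # V(O_j, O_3), j = 1..5
dW = [0, 4, 2, 6, 1]            # W(O_j) - W(O_1)
Wu = {2: 7, 3: 5, 4: 7, 5: 2}   # W(O_1 U O_j)
h1, W1 = 4, 7                   # W(O_1) = 7 is consistent with w = 7 - 2

w = W1 - min(Wu.values())
R1 = [V3[j] + dW[j] for j in range(5)]
R2 = [2 * V3[0] - h1] + [2 * V3[j] + dW[j] - W1 + Wu[j + 1] for j in range(1, 5)]
R3 = [2 * V3[j] + 2 * dW[j] - w for j in range(5)]

assert w == 5 and h1 <= w                     # multicycle case
assert R1 == [4, 4, 2, 6, 6]
assert R2 == [4, 4, 0, 6, 6]
assert R3 == [3, 3, -1, 7, 7]
assert min(min(R1), min(R2), min(R3)) == -1   # attained by R_3^{(3)}
```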

## Wald’s Identities and Regenerative Structure

To prove Theorems 4.3 and 4.5, we will use the regenerative structure to analyze the system over the interval $$[0,T^{\varepsilon }]$$. Since the number of regenerative cycles will be random, Wald’s identities will be useful.

Recall that $$\tau _{n}^{\varepsilon }$$ is the n-th return time to $$\partial B_{\delta }\left( O_{1}\right)$$ after having visited the neighborhood of a different equilibrium point, and $$\lambda ^{\varepsilon }\in {\mathcal {P}}(\partial B_{\delta }(O_{1}))$$ is the invariant measure of the Markov process $$\{X_{\tau _{n}^{\varepsilon }}^{\varepsilon }\}_{n\in {\mathbb {N}}_{0}}$$ with state space $$\partial B_{\delta }(O_{1}).$$ If we let the process $$\{X^{\varepsilon }_t \}_{t}$$ start with $$\lambda ^{\varepsilon }$$ at time 0,  that is, assume the distribution of $$X^{\varepsilon }_0$$ is $$\lambda ^{\varepsilon },$$ then by the strong Markov property of $$\{X^{\varepsilon }_t \}_{t},$$ we find that $$\{X^{\varepsilon }_t \}_{t}$$ is a regenerative process and the cycles $$\{\{X^{\varepsilon }_{\tau _{n-1}^{\varepsilon }+t}:0\le t<\tau _{n}^{\varepsilon }-\tau _{n-1}^{\varepsilon }\},\tau _{n}^{\varepsilon } -\tau _{n-1}^{\varepsilon }\}$$ are iid objects. Moreover, $$\{\tau _{n}^{\varepsilon }\}_{n\in {\mathbb {N}} _{0}}$$ is a sequence of renewal times under $$\lambda ^{\varepsilon }.$$

### Single cycle

Define the filtration $$\{{\mathcal {H}}_{n}\}_{n\in {\mathbb {N}} },$$ where $${\mathcal {H}}_{n}\doteq {\mathcal {F}}_{\tau _{n}^{\varepsilon }}$$ and $${\mathcal {F}}_{t}\doteq \sigma (\{X^{\varepsilon }_s: s\le t\})$$. With respect to this filtration, in the single cycle case (i.e., when $$h_1>w$$), we consider the stopping times $$N^{\varepsilon }\left( T\right) \doteq \inf \left\{ n\in {\mathbb {N}}:\tau _{n}^{\varepsilon }>T\right\} .$$ Note that $$N^{\varepsilon }\left( T\right) -1$$ is the number of complete single renewal intervals contained in [0, T].

With this notation, we can bound $$\frac{1}{T^{\varepsilon }}\int _{0} ^{T^{\varepsilon }}e^{-\frac{1}{\varepsilon }f\left( X_{t}^{\varepsilon }\right) }1_{A}\left( X_{t}^{\varepsilon }\right) dt$$ from above and below by

\begin{aligned} \frac{1}{T^{\varepsilon }}\sum \limits _{n=1}^{N^{\varepsilon }\left( T^{\varepsilon }\right) -1}S_{n}^{\varepsilon }\le \frac{1}{T^{\varepsilon }}\int _{0}^{T^{\varepsilon }}e^{-\frac{1}{\varepsilon }f\left( X_{t}^{\varepsilon }\right) }1_{A}\left( X_{t}^{\varepsilon }\right) dt\le \frac{1}{T^{\varepsilon }}\sum \nolimits _{n=1}^{N^{\varepsilon }\left( T^{\varepsilon }\right) }S_{n}^{\varepsilon }, \end{aligned}
(6.1)

where

\begin{aligned} S_{n}^{\varepsilon }\doteq \int _{\tau _{n-1}^{\varepsilon }}^{\tau _{n}^{\varepsilon } }e^{-\frac{1}{\varepsilon }f\left( X_{t}^{\varepsilon }\right) }1_{A}\left( X_{t}^{\varepsilon }\right) dt. \end{aligned}

Applying Wald’s first identity shows

\begin{aligned} E_{\lambda ^{\varepsilon }}\left( \frac{1}{T^{\varepsilon }}\sum \limits _{n=1} ^{N^{\varepsilon }\left( T^{\varepsilon }\right) }S_{n}^{\varepsilon }\right) =\frac{1}{T^{\varepsilon }}E_{\lambda ^{\varepsilon }}\left( N^{\varepsilon }\left( T^{\varepsilon }\right) \right) E_{\lambda ^{\varepsilon }} S_{1}^{\varepsilon }. \end{aligned}
(6.2)

Therefore, the logarithmic asymptotics of $$E_{\lambda ^{\varepsilon }}(\int _{0}^{T^{\varepsilon }}e^{-\frac{1}{\varepsilon }f\left( X_{t}^{\varepsilon }\right) }1_{A}\left( X_{t}^{\varepsilon }\right) dt/T^{\varepsilon })$$ are determined by those of $$E_{\lambda ^{\varepsilon }}\left( N^{\varepsilon }\left( T^{\varepsilon }\right) \right) /T^{\varepsilon }$$ and $$E_{\lambda ^{\varepsilon }}S_{1}^{\varepsilon }.$$ Likewise, to understand the logarithmic asymptotics of $$T^{\varepsilon }\cdot \hbox {Var}_{\lambda ^{\varepsilon }}(\int _{0}^{T^{\varepsilon }}e^{-\frac{1}{\varepsilon }f\left( X_{t}^{\varepsilon }\right) }1_{A}\left( X_{t}^{\varepsilon }\right) dt/T^{\varepsilon }),$$ it is sufficient to identify the corresponding logarithmic asymptotics of $$\hbox {Var}_{\lambda ^{\varepsilon }}\left( N^{\varepsilon }\left( T^{\varepsilon }\right) \right) /T^{\varepsilon }$$, $$\hbox {Var}_{\lambda ^{\varepsilon }}(S_{1}^{\varepsilon }),$$ $$E_{\lambda ^{\varepsilon }}\left( N^{\varepsilon }\left( T^{\varepsilon }\right) \right) /T^{\varepsilon }$$ and $$E_{\lambda ^{\varepsilon }}S_{1}^{\varepsilon }$$. This can be done with the help of Wald’s second identity, since

\begin{aligned} T^{\varepsilon }&\cdot \mathrm {Var}_{\lambda ^{\varepsilon }}\left( \frac{1}{T^{\varepsilon }} {\textstyle \sum _{n=1}^{N^{\varepsilon }\left( T^{\varepsilon }\right) }} S_{n}^{\varepsilon }\right) \nonumber \\&\le 2T^{\varepsilon }\cdot E_{\lambda ^{\varepsilon }}\left( \frac{1}{T^{\varepsilon }} {\textstyle \sum _{n=1}^{N^{\varepsilon }\left( T^{\varepsilon }\right) }} S_{n}^{\varepsilon }-\frac{1}{T^{\varepsilon }}N^{\varepsilon }\left( T^{\varepsilon }\right) E_{\lambda ^{\varepsilon }}S_{1}^{\varepsilon }\right) ^{2}\nonumber \\&\quad +2T^{\varepsilon }\cdot E_{\lambda ^{\varepsilon }}\left( \frac{1}{T^{\varepsilon }}N^{\varepsilon }\left( T^{\varepsilon }\right) E_{\lambda ^{\varepsilon }}S_{1}^{\varepsilon }-\frac{1}{T^{\varepsilon } }E_{\lambda ^{\varepsilon }}\left( N^{\varepsilon }\left( T^{\varepsilon }\right) \right) E_{\lambda ^{\varepsilon }}S_{1}^{\varepsilon }\right) ^{2}\nonumber \\&=2\frac{E_{\lambda ^{\varepsilon }}\left( N^{\varepsilon }\left( T^{\varepsilon }\right) \right) }{T^{\varepsilon }}\mathrm {Var}_{\lambda ^{\varepsilon }}S_{1}^{\varepsilon }+2\frac{\mathrm {Var}_{\lambda ^{\varepsilon } }\left( N^{\varepsilon }\left( T^{\varepsilon }\right) \right) }{T^{\varepsilon }}\left( E_{\lambda ^{\varepsilon }}S_{1}^{\varepsilon }\right) ^{2}. \end{aligned}
(6.3)

In the next two sections, we derive bounds on $$E_{\lambda ^{\varepsilon }} S_{1}^{\varepsilon }$$, $$\hbox {Var}_{\lambda ^{\varepsilon }}(S_{1}^{\varepsilon })$$ and $$E_{\lambda ^{\varepsilon }}\left( N^{\varepsilon }\left( T^{\varepsilon }\right) \right)$$, $$\hbox {Var}_{\lambda ^{\varepsilon }}\left( N^{\varepsilon }\left( T^{\varepsilon }\right) \right)$$, respectively.
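Wald's first identity in this setting can be illustrated by a small simulation. The sketch below uses hypothetical Exp(1) cycle lengths (so that $$E N^{\varepsilon }(T)=T+1$$ is known in closed form) and rewards equal to half the cycle length; it is a toy stand-in for the regenerative cycles, not the diffusion itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical regenerative structure: iid cycle lengths tau_n ~ Exp(1)
# and, within each cycle, a reward S_n = tau_n / 2 (so E S_1 = 1/2).
# N(T) = inf{n : tau_1 + ... + tau_n > T} is a stopping time, so Wald's
# first identity gives E[sum_{n <= N(T)} S_n] = E[N(T)] * E[S_1].
T, reps = 10.0, 20000
totals, counts = np.empty(reps), np.empty(reps)
for r in range(reps):
    t, n, s = 0.0, 0, 0.0
    while t <= T:
        tau = rng.exponential(1.0)
        t += tau
        s += tau / 2.0
        n += 1
    totals[r], counts[r] = s, n

# For Exp(1) cycles, E[N(T)] = T + 1, and Wald predicts E[total] = (T + 1) / 2.
assert abs(counts.mean() - (T + 1)) < 0.15
assert abs(totals.mean() - counts.mean() * 0.5) < 0.1
```

The tolerances are generous relative to the Monte Carlo standard errors at 20000 replications.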

### Multicycle

Recall that in the case of a multicycle, we have $$w\ge h_1$$. For any $$m>0$$ such that $$h_1+m>w$$ and for any $$\varepsilon >0$$, on the same probability space as $$\{\tau ^{\varepsilon }_n\}$$, one can define a sequence of independent and geometrically distributed random variables $$\{{\mathbf {M}}^{\varepsilon }_i\}_{i\in {\mathbb {N}}}$$ with parameter $$e^{-m/\varepsilon }$$ that are independent of $$\{\tau ^{\varepsilon }_n\}$$. We then define multicycles according to

\begin{aligned} {\mathbf {K}}^\varepsilon _i\doteq \sum _{j=1}^i {\mathbf {M}}^{\varepsilon }_j, \quad {\hat{\tau }}^\varepsilon _i\doteq \sum _{n={\mathbf {K}}^\varepsilon _{i-1}+1}^{{\mathbf {K}}^{\varepsilon }_i} \tau ^\varepsilon _n, \quad i\in {\mathbb {N}}. \end{aligned}
(6.4)
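The bookkeeping in (6.4) amounts to grouping consecutive single cycles. A minimal sketch, with fixed counts $${\mathbf {M}}_i$$ in place of the geometric variables for clarity:

```python
def multicycles(taus, Ms):
    """Group single-cycle lengths tau_n into multicycle lengths per (6.4):
    the i-th multicycle sums the next M_i consecutive single cycles."""
    hat_taus, start = [], 0
    for M in Ms:
        hat_taus.append(sum(taus[start:start + M]))
        start += M
    return hat_taus

# Hypothetical cycle lengths; the counts M_i are fixed here for clarity,
# whereas in the text they are geometric with parameter e^{-m/eps}.
taus = [1.0, 2.0, 3.0, 4.0, 5.0]
Ms = [2, 3]                                   # K_1 = 2, K_2 = 5
assert multicycles(taus, Ms) == [3.0, 12.0]   # tau_1+tau_2, tau_3+tau_4+tau_5
```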

Consider the stopping times $${\hat{N}}^{\varepsilon }\left( T\right) \doteq \inf \left\{ n\in {\mathbb {N}} :{\hat{\tau }}_{n}^{\varepsilon }>T\right\} .$$ Note that $${\hat{N}}^{\varepsilon }\left( T\right) -1$$ is the number of complete multicycles contained in [0, T]. With this notation and by following the same idea as in the single cycle case, we can bound $$\frac{1}{T^{\varepsilon }}\int _{0} ^{T^{\varepsilon }}e^{-\frac{1}{\varepsilon }f\left( X_{t}^{\varepsilon }\right) }1_{A}\left( X_{t}^{\varepsilon }\right) dt$$ from above and below by

\begin{aligned} \frac{1}{T^{\varepsilon }}\sum \limits _{n=1}^{{\hat{N}}^{\varepsilon }\left( T^{\varepsilon }\right) -1}{\hat{S}}_{n}^{\varepsilon }\le \frac{1}{T^{\varepsilon }}\int _{0}^{T^{\varepsilon }}e^{-\frac{1}{\varepsilon }f\left( X_{t}^{\varepsilon }\right) }1_{A}\left( X_{t}^{\varepsilon }\right) dt\le \frac{1}{T^{\varepsilon }}\sum \limits _{n=1}^{{\hat{N}}^{\varepsilon }\left( T^{\varepsilon }\right) }{\hat{S}}_{n}^{\varepsilon }, \end{aligned}
(6.5)

where

\begin{aligned} {\hat{S}}_{n}^{\varepsilon }\doteq \int _{{\hat{\tau }}_{n-1}^{\varepsilon }}^{{\hat{\tau }}_{n}^{\varepsilon } }e^{-\frac{1}{\varepsilon }f\left( X_{t}^{\varepsilon }\right) }1_{A}\left( X_{t}^{\varepsilon }\right) dt. \end{aligned}

Therefore, by applying Wald’s first and second identities, we know that the logarithmic asymptotics of $$E_{\lambda ^{\varepsilon }}(\int _{0}^{T^{\varepsilon }}e^{-\frac{1}{\varepsilon }f\left( X_{t}^{\varepsilon }\right) }1_{A}\left( X_{t}^{\varepsilon }\right) dt/T^{\varepsilon })$$ are determined by those of $$E_{\lambda ^{\varepsilon }}( {\hat{N}}^{\varepsilon }\left( T^{\varepsilon }\right) ) /T^{\varepsilon }$$ and $$E_{\lambda ^{\varepsilon }}{\hat{S}}_{1}^{\varepsilon }$$, and the asymptotics of $$T^{\varepsilon }\cdot \hbox {Var}_{\lambda ^{\varepsilon }}(\int _{0}^{T^{\varepsilon }}e^{-\frac{1}{\varepsilon }f\left( X_{t}^{\varepsilon }\right) }1_{A}\left( X_{t}^{\varepsilon }\right) dt/T^{\varepsilon })$$ by those of $$\hbox {Var}_{\lambda ^{\varepsilon }}( {\hat{N}}^{\varepsilon }\left( T^{\varepsilon }\right) ) /T^{\varepsilon }$$, $$\hbox {Var}_{\lambda ^{\varepsilon }}({\hat{S}}_{1}^{\varepsilon })$$, $$E_{\lambda ^{\varepsilon }}( {\hat{N}}^{\varepsilon }\left( T^{\varepsilon }\right) ) /T^{\varepsilon }$$ and $$E_{\lambda ^{\varepsilon }}{\hat{S}}_{1}^{\varepsilon }$$. In particular, we have

\begin{aligned} E_{\lambda ^{\varepsilon }}\left( \frac{1}{T^{\varepsilon }}\sum \nolimits _{n=1} ^{{\hat{N}}^{\varepsilon }\left( T^{\varepsilon }\right) }{\hat{S}}_{n}^{\varepsilon }\right) =\frac{1}{T^{\varepsilon }}E_{\lambda ^{\varepsilon }}\left( {\hat{N}}^{\varepsilon }\left( T^{\varepsilon }\right) \right) E_{\lambda ^{\varepsilon }} {\hat{S}}_{1}^{\varepsilon } \end{aligned}
(6.6)

and

\begin{aligned}&T^{\varepsilon } \cdot \mathrm {Var}_{\lambda ^{\varepsilon }}\left( \frac{1}{T^{\varepsilon }} {\sum \nolimits _{n=1}^{{\hat{N}}^{\varepsilon }\left( T^{\varepsilon }\right) }} {\hat{S}}_{n}^{\varepsilon }\right) \nonumber \\&\qquad \le 2\frac{E_{\lambda ^{\varepsilon }}( {\hat{N}}^{\varepsilon }\left( T^{\varepsilon }\right) ) }{T^{\varepsilon }}\mathrm {Var}_{\lambda ^{\varepsilon }}{\hat{S}}_{1}^{\varepsilon }+2\frac{\mathrm {Var}_{\lambda ^{\varepsilon } }( {\hat{N}}^{\varepsilon }\left( T^{\varepsilon }\right) ) }{T^{\varepsilon }}\left( E_{\lambda ^{\varepsilon }}{\hat{S}}_{1}^{\varepsilon }\right) ^{2}. \end{aligned}
(6.7)

In the next two sections, we derive bounds on $$E_{\lambda ^{\varepsilon }} {\hat{S}}_{1}^{\varepsilon }$$, $$\hbox {Var}_{\lambda ^{\varepsilon }}({\hat{S}}_{1}^{\varepsilon })$$ and $$E_{\lambda ^{\varepsilon }}( {\hat{N}}^{\varepsilon }\left( T^{\varepsilon }\right) )$$, $$\hbox {Var}_{\lambda ^{\varepsilon }}( {\hat{N}}^{\varepsilon }\left( T^{\varepsilon }\right) )$$, respectively.

### Remark 6.1

It should be kept in mind that $${\hat{\tau }}^{\varepsilon }_n, {\hat{N}}^{\varepsilon }\left( T^{\varepsilon }\right)$$ and $${\hat{S}}_{n}^{\varepsilon }$$ all depend on m, although this dependence is not explicit in the notation.

### Remark 6.2

In general, for any quantity in the single cycle case, we use analogous notation with a “hat” on it to represent the corresponding quantity in the multicycle version. For instance, we use $$\tau ^{\varepsilon }_n$$ for a single regenerative cycle, and $${\hat{\tau }}^{\varepsilon }_n$$ for a multi-regenerative cycle.

## Asymptotics of Moments of $$S_{1}^{\varepsilon }$$ and $${\hat{S}}_{1}^{\varepsilon }$$

In this section, we will first introduce the elementary theory of an irreducible finite state Markov chain $$\{Z_{n}\}_{n\in {\mathbb {N}} _{0}}$$ with state space L, and then state and prove bounds for the asymptotics of moments of $$S_{1}^{\varepsilon }$$ and $${\hat{S}}_{1}^{\varepsilon }$$.

For the asymptotic analysis, the following elementary facts will be used repeatedly.

### Lemma 7.1

For any nonnegative sequences $$\left\{ a_{\varepsilon }\right\} _{\varepsilon >0}$$ and $$\left\{ b_{\varepsilon }\right\} _{\varepsilon >0}$$, we have

\begin{aligned}&\liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left( a_{\varepsilon }b_{\varepsilon }\right) \ge \liminf _{\varepsilon \rightarrow 0}-\varepsilon \log a_{\varepsilon }+\liminf _{\varepsilon \rightarrow 0}-\varepsilon \log b_{\varepsilon }, \end{aligned}
(7.1)
\begin{aligned}&\limsup _{\varepsilon \rightarrow 0}-\varepsilon \log \left( a_{\varepsilon }+b_{\varepsilon }\right) \le \min \left\{ \limsup _{\varepsilon \rightarrow 0}-\varepsilon \log a_{\varepsilon },\limsup _{\varepsilon \rightarrow 0}-\varepsilon \log b_{\varepsilon }\right\} , \nonumber \\&\liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left( a_{\varepsilon }+b_{\varepsilon }\right) =\min \left\{ \liminf _{\varepsilon \rightarrow 0}-\varepsilon \log a_{\varepsilon },\liminf _{\varepsilon \rightarrow 0}-\varepsilon \log b_{\varepsilon }\right\} . \end{aligned}
(7.2)
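Both facts are elementary, and on the exponential scale used throughout they can be checked numerically:

```python
import math

# a_eps = e^{-alpha/eps}, b_eps = e^{-beta/eps} on the scale used in the paper.
alpha, beta, eps = 2.0, 3.0, 0.05

a = math.exp(-alpha / eps)   # e^{-40}
b = math.exp(-beta / eps)    # e^{-60}

# (7.1) holds with equality here: -eps log(a b) = alpha + beta.
assert abs(-eps * math.log(a * b) - (alpha + beta)) < 1e-9

# (7.2): -eps log(a + b) is within about e^{-20} of min(alpha, beta).
assert abs(-eps * math.log(a + b) - min(alpha, beta)) < 1e-6
```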

### Markov Chains and Graph Theory

In this subsection, we state some elementary theory for finite state Markov chains taken from [1, Chapter 2]. For a finite state Markov chain, the invariant measure, the mean exit time, etc., can be expressed explicitly as the ratio of certain determinants, i.e., sums of products consisting of transition probabilities, and these sums only contain terms with a plus sign. Which products should appear in the various sums can be described conveniently by means of graphs on the set of states of the chain. This method of linking graphs and quantities associated with a finite state Markov chain was introduced by Freidlin and Wentzell in [12, Chapter 6].

Consider an irreducible finite state Markov chain $$\{Z_{n}\}_{n\in {\mathbb {N}} _{0}}$$ with state space L. For any $$i,j\in L,$$ let $$p_{ij}$$ be the one-step transition probability of $$\{Z_{n}\}_{n}$$ from state i to state j. Write $$P_{i}(\cdot )$$ and $$E_{i}(\cdot )$$ for probabilities and expectations of the chain started at state i at time 0. Recall the notation $$\pi (g)\doteq \prod _{(i\rightarrow j)\in g}p_{ij}$$.

### Lemma 7.2

The unique invariant measure of $$\{Z_{n}\}_{n\in {\mathbb {N}} }$$ can be expressed as

\begin{aligned} \lambda _i =\frac{\sum _{g\in G\left( i\right) }\pi \left( g\right) }{\sum _{j\in L}\left( \sum _{g\in G\left( j\right) }\pi \left( g\right) \right) }. \end{aligned}

### Proof

See Lemma 3.1, Chapter 6 in . $$\square$$
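Lemma 7.2 is the Markov chain tree theorem, and for a small chain the graph sums can be enumerated directly and compared against a linear solve. The chain below is hypothetical, with states $$0,1,2$$ standing in for L:

```python
import itertools
import numpy as np

# Hypothetical irreducible 3-state chain.
P = np.array([[0.2, 0.5, 0.3],
              [0.4, 0.4, 0.2],
              [0.1, 0.6, 0.3]])
L = range(3)

def _leads_to(g, m, i):
    # Follow the edges of g from m; return True iff the path reaches i
    # without closing a cycle.
    seen = set()
    while m != i:
        if m in seen:
            return False
        seen.add(m)
        m = g[m]
    return True

def i_graphs(i):
    """All graphs g in G(i): one outgoing edge m -> g(m) for each m != i,
    and following the edges from any m leads to i (no cycles)."""
    others = [m for m in L if m != i]
    for targets in itertools.product(L, repeat=len(others)):
        g = dict(zip(others, targets))
        if all(_leads_to(g, m, i) for m in others):
            yield g

def pi(g):
    out = 1.0
    for m, n in g.items():
        out *= P[m, n]
    return out

# Lemma 7.2: lambda_i is proportional to the sum of pi(g) over G(i).
sums = np.array([sum(pi(g) for g in i_graphs(i)) for i in L])
lam_graph = sums / sums.sum()

# Independent check: solve lambda P = lambda directly.
A = np.vstack([P.T - np.eye(3), np.ones(3)])
lam_solve = np.linalg.lstsq(A, np.array([0.0, 0.0, 0.0, 1.0]), rcond=None)[0]
assert np.allclose(lam_graph, lam_solve)
```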

To analyze the empirical measure, we will need additional results, including representations for the number of visits to a state during a regenerative cycle. Write

\begin{aligned} T_{i}\doteq \inf \left\{ n\ge 0:Z_{n}=i\right\} \end{aligned}

for the first hitting time of state i,  and write

\begin{aligned} T_{i}^{+}\doteq \inf \left\{ n\ge 1:Z_{n}=i\right\} . \end{aligned}

Observe that $$T_{i}^{+}=T_{i}$$ unless $$Z_{0}=i,$$ in which case we call $$T_{i}^{+}$$ the first return time to state i.

Let $${\hat{N}}\doteq \inf \{n\in {\mathbb {N}} _{0}:Z_{n}\in L\setminus \{1\}\}$$ and $$N\doteq \inf \{n\in {\mathbb {N}} :Z_{n}=1,n\ge {\hat{N}}\};$$ that is, $${\hat{N}}$$ is the first time the chain visits a state other than state 1, and N is the first time it visits state 1 after $${\hat{N}}.$$ For any $$j\in L,$$ let $$N_{j}$$ be the number of visits (including time 0) to state j before N, i.e., $$N_{j}=\left| \{n\in {\mathbb {N}} _{0}:n<N\text { and }Z_{n}=j\}\right| .$$ We would like to understand $$E_{1}N_{j}$$ and $$E_{j}N_{j}$$ for any $$j\in L;$$ these quantities will appear later in Subsection 7.2. The next lemma shows how they can be related to the invariant measure of $$\{Z_{n}\}_{n}$$.

### Lemma 7.3

1. For any $$j\in L\setminus \{1\}$$
\begin{aligned} E_{j}N_{j}=\frac{\sum _{g\in G\left( 1,j\right) }\pi \left( g\right) }{\sum _{g\in G\left( 1\right) }\pi \left( g\right) }\text { and }E_{j} N_{j}=\lambda _{j}\left( E_{j}T_{1}+E_{1}T_{j}\right) . \end{aligned}
2. For any $$i,j\in L,$$ $$j\ne i$$
\begin{aligned} P_{i}\left( T_{j}<T_{i}^{+}\right) =\frac{1}{\lambda _{i}\left( E_{j} T_{i}+E_{i}T_{j}\right) }. \end{aligned}
3. For any $$j\in L$$
\begin{aligned} E_{1}N_{j}=\frac{1}{1-p_{11}}\frac{\lambda _{j}}{\lambda _{1}}. \end{aligned}

### Proof

See Lemma 3.4 in [12, Chapter 6] for the first assertion of part 1 and see Lemma 2.7 in [1, Chapter 2] for the second assertion of part 1. For part 2, see Corollary 2.8 in [1, Chapter 2]. For part 3, since $$E_{1}N_{j}=\sum _{\ell =1}^{\infty }P_{1}\left( N_{j}\ge \ell \right) ,$$ we need to understand $$P_{1}\left( N_{j}\ge \ell \right)$$, which means we need to know how to count all the ways to get $$N_{j}\ge \ell$$ before returning to state 1.

We first have to move away from state 1, so the types of sequences are of the form

\begin{aligned} \underset{i\text { times}}{\underbrace{1,1,\ldots ,1}},k_{1},k_{2},\ldots ,k_{q},1 \end{aligned}

for some $$i,q\in {\mathbb {N}}$$ and $$k_{1}\ne 1,\cdots ,k_{q}\ne 1$$. When $$j=1,$$ we do not care about $$k_{1},k_{2},\ldots ,k_{q},$$ and therefore

\begin{aligned} P_{1}\left( N_{1}\ge i\right) =p_{11}^{i-1}\text { and }E_{1}N_{1} =\sum \nolimits _{i=1}^{\infty }P_{1}\left( N_{1}\ge i\right) =\frac{1}{1-p_{11}}. \end{aligned}

For $$j\in L\setminus \{1\},$$ the event $$\{N_{j}\ge \ell \}$$ requires that within $$k_{1},k_{2},\ldots ,k_{q},$$ we

1. first visit state j before returning to state 1, which has corresponding probability $$P_{1}(T_{j}<T_{1}^{+})$$,
2. then start from state j and again visit state j before returning to state 1, which has corresponding probability $$P_{j}(T_{j}^{+}<T_{1}).$$

Step 2 needs to happen at least $$\ell -1$$ times in a row, and after that we do not care. Thus,

\begin{aligned} P_{1}\left( N_{j}\ge \ell \right)&=\sum \nolimits _{i=1}^{\infty }\left( p_{11}\right) ^{i-1}P_{1}\left( T_{j}<T_{1}^{+}\right) ( P_{j}( T_{j}^{+}<T_{1}) ) ^{\ell -1}\\&=\frac{1}{1-p_{11}}P_{1}\left( T_{j}<T_{1}^{+}\right) ( P_{j}( T_{j}^{+}<T_{1}) ) ^{\ell -1} \end{aligned}

and

\begin{aligned} \sum \nolimits _{\ell =1}^{\infty }P_{1}\left( N_{j}\ge \ell \right)&=\frac{1}{1-p_{11}}\frac{P_{1}\left( T_{j}<T_{1}^{+}\right) }{P_{j} (T_{1}<T_{j}^{+})} =\frac{1}{1-p_{11}}\frac{\lambda _{j}\left( E_{1}T_{j}+E_{j}T_{1}\right) }{\lambda _{1}\left( E_{1}T_{j}+E_{j}T_{1}\right) }\\&=\frac{1}{1-p_{11}}\frac{\lambda _{j}}{\lambda _{1}}. \end{aligned}

The second equality in the last display comes from part 2. $$\square$$
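Part 3 can be checked exactly on a toy example. The sketch below (a hypothetical 3-state chain; all numbers are illustrative) computes the stationary law $$\lambda$$ and the quantities $$E_{1}N_{j}$$ for $$j\ne 1$$ via the fundamental matrix of the chain killed at state 1, and verifies the identity of part 3 with exact rational arithmetic.

```python
from fractions import Fraction as F

# Hypothetical 3-state transition matrix (rows sum to 1); purely illustrative.
P = [[F(1, 2), F(1, 4), F(1, 4)],
     [F(1, 3), F(1, 3), F(1, 3)],
     [F(1, 4), F(1, 4), F(1, 2)]]

def solve(A, b):
    """Solve A x = b by Gaussian elimination over the rationals."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = next(k for k in range(col, n) if M[k][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        M[col] = [x / M[col][col] for x in M[col]]
        for k in range(n):
            if k != col and M[k][col] != 0:
                f = M[k][col]
                M[k] = [x - f * y for x, y in zip(M[k], M[col])]
    return [M[k][n] for k in range(n)]

# Stationary law: lam P = lam, normalized by replacing one equation by sum = 1.
A = [[P[j][i] - (1 if i == j else 0) for j in range(3)] for i in range(3)]
A[2] = [F(1), F(1), F(1)]
lam = solve(A, [F(0), F(0), F(1)])

# E_1 N_j for j != 1: start from the exit law q(k) = p_{1k}/(1 - p_{11}) and
# count expected visits to j before absorption at state 1; this equals
# q^T (I - Q)^{-1} e_j, with Q the restriction of P to states {2, 3}.
p11 = P[0][0]
q = [P[0][1] / (1 - p11), P[0][2] / (1 - p11)]
IQt = [[(1 if i == j else 0) - P[j + 1][i + 1] for j in range(2)]
       for i in range(2)]          # (I - Q) transposed
v = solve(IQt, q)                  # v[0] = E_1 N_2, v[1] = E_1 N_3

# The identity of part 3: E_1 N_j = (1/(1 - p_{11})) * (lambda_j / lambda_1).
assert v[0] == lam[1] / ((1 - p11) * lam[0])
assert v[1] == lam[2] / ((1 - p11) * lam[0])
```

Because the arithmetic is exact, the asserted identities hold with equality; for this particular matrix, $$\lambda =(4/11,3/11,4/11)$$, $$E_{1}N_{2}=3/2$$ and $$E_{1}N_{3}=2$$.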

To apply the preceding results using the machinery developed by Freidlin and Wentzell, one must have analogues that allow for small perturbations of the transition probabilities due to the fact that initial conditions are to be taken in small neighborhoods of the equilibrium points. The addition of a tilde will be used to identify the corresponding objects, such as hitting and return times. Take as given a Markov chain $$\{\tilde{Z}_{n}\}_{n\in {\mathbb {N}} _{0}}$$ on a state space $${\mathcal {X}}= {\textstyle \cup _{i\in L}} {\mathcal {X}}_{i},$$ with $${\mathcal {X}}_{i}\cap {\mathcal {X}}_{j}=\emptyset$$ $$(i\ne j),$$ and assume there is $$a\in [1,\infty )$$ such that for any $$i,j\in L$$ and $$j\ne i,$$ the transition probability of the chain from $$x\in {\mathcal {X}}_{i}$$ to $${\mathcal {X}}_{j}$$ (denoted by $$p\left( x,{\mathcal {X}}_{j}\right)$$) satisfies the inequalities

\begin{aligned} a^{-1}p_{ij}\le p\left( x,{\mathcal {X}}_{j}\right) \le ap_{ij} \end{aligned}
(7.3)

for any $$x\in {\mathcal {X}}_{i}$$. Write $$P_{x}(\cdot )$$ and $$E_{x}(\cdot )$$ for probabilities and expectations of the chain started at $$x\in {\mathcal {X}}$$ at time 0. Write

\begin{aligned} {\tilde{T}}_{i}\doteq \inf \{ n\ge 0:{\tilde{Z}}_{n}\in {\mathcal {X}} _{i}\} \end{aligned}

for the first hitting time of $${\mathcal {X}}_{i},$$ and write

\begin{aligned} {\tilde{T}}_{i}^{+}\doteq \inf \{ n\ge 1:{\tilde{Z}}_{n}\in {\mathcal {X}} _{i}\} . \end{aligned}

Observe that $${\tilde{T}}_{i}^{+}={\tilde{T}}_{i}$$ unless $${\tilde{Z}}_{0} \in {\mathcal {X}}_{i},$$ in which case we call $${\tilde{T}}_{i}^{+}$$ the first return time to $${\mathcal {X}}_{i}.$$ Recall that $$l=|L|$$.

### Remark 7.4

Observe that given $$j\in L$$ and for any $$x\in {\mathcal {X}}_{j}$$, $$1-p\left( x,{\mathcal {X}}_{j}\right) =\textstyle \sum _{k\in L\setminus \left\{ j\right\} }p\left( x,{\mathcal {X}}_{k}\right) .$$ Therefore, we can apply (7.3) to obtain

\begin{aligned} a^{-1}\textstyle \sum _{k\in L\setminus \left\{ j\right\} }p_{jk}\le 1-p\left( x,{\mathcal {X}}_{j}\right) \le a\textstyle \sum _{k\in L\setminus \left\{ j\right\} }p_{jk}. \end{aligned}

### Lemma 7.5

1.

Consider distinct $$i,j,k\in L$$. Then, for $$x\in {\mathcal {X}}_{k},$$

\begin{aligned} a^{-4^{l-2}}P_{k}\left( T_{j}<T_{i}\right) \le P_{x}( {\tilde{T}} _{j}<{\tilde{T}}_{i}) \le a^{4^{l-2}}P_{k}\left( T_{j}<T_{i}\right) . \end{aligned}
2.

For any $$i\in L$$, $$j\in L\setminus \{i\}$$ and $$x\in {\mathcal {X}}_{i},$$

\begin{aligned} a^{-4^{l-2}-1}P_{i}\left( T_{j}<T_{i}^{+}\right) \le P_{x}( \tilde{T}_{j}<{\tilde{T}}_{i}^{+}) \le a^{4^{l-2}+1}P_{i}\left( T_{j} <T_{i}^{+}\right) . \end{aligned}

### Proof

For part 1, see Lemma 3.3 in [12, Chapter 6]. We only need to prove part 2. Note that by a first step analysis on $$\{{\tilde{Z}}_{n}\}_{n\in {\mathbb {N}} _{0}}$$, for any $$i\in L$$, $$j\in L\setminus \{i\}$$ and $$x\in {\mathcal {X}}_{i},$$

\begin{aligned} P_{x}( {\tilde{T}}_{j}<{\tilde{T}}_{i}^{+})&=p\left( x,{\mathcal {X}}_{j}\right) +\sum \nolimits _{k\in L\setminus \{i,j\}}\int _{{\mathcal {X}} _{k}}P_{y}( {\tilde{T}}_{j}<{\tilde{T}}_{i}) p\left( x,dy\right) \\&\le ap_{ij}+\sum \nolimits _{k\in L\setminus \{i,j\}}\left( a^{4^{l-2}}P_{k}\left( T_{j}<T_{i}\right) \right) \left( ap_{ik}\right) \\&\le a^{4^{l-2}+1}\left( p_{ij}+\sum \nolimits _{k\in L\setminus \{i,j\}}P_{k}\left( T_{j}<T_{i}\right) p_{ik}\right) \\&=a^{4^{l-2}+1}P_{i}\left( T_{j}<T_{i}^{+}\right) . \end{aligned}

The first inequality comes from the use of (7.3) and part 1; the last equality holds since we can do a first step analysis on $$\{Z_{n}\}_{n}.$$ Similarly, we can show the lower bound. $$\square$$
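The sandwich in part 2 can be illustrated numerically on a finite toy model with $$l=3$$ classes of two states each, so that the exponent is $$4^{l-2}+1=5$$. The base matrix $$p_{ij}$$, the factor a and the perturbation factors below are all made up for the experiment, chosen so that (7.3) holds.

```python
# Classes X_1 = {0,1}, X_2 = {2,3}, X_3 = {4,5}; base class-level matrix p.
a = 1.1
p = [[0.6, 0.2, 0.2],
     [0.3, 0.4, 0.3],
     [0.25, 0.25, 0.5]]
cls = [0, 0, 1, 1, 2, 2]      # class of each of the 6 states
# Perturbation factors c[x][j] in [1/a, a] for moves from state x to class j.
c = [[1.0, 1.05, 0.95], [1.0, 0.92, 1.08], [1.1, 1.0, 0.91],
     [0.95, 1.0, 1.02], [1.04, 0.96, 1.0], [0.93, 1.07, 1.0]]

# 6-state matrix: mass c[x][j] * p_{ij} to class j != i, split evenly over the
# two states of class j; the remainder stays in x's own class, so (7.3) holds.
P = []
for x in range(6):
    i = cls[x]
    row = [0.0] * 6
    for y in range(6):
        if cls[y] != i:
            row[y] = c[x][cls[y]] * p[i][cls[y]] / 2.0
    rem = 1.0 - sum(row)
    for y in range(6):
        if cls[y] == i:
            row[y] = rem / 2.0
    P.append(row)

# h(y) = P_y(hit X_2 before X_1): 1 on X_2, 0 on X_1, and on X_3 = {4,5} the
# solution of a 2x2 linear system, solved here by Cramer's rule.
b4 = P[4][2] + P[4][3]
b5 = P[5][2] + P[5][3]
det = (1 - P[4][4]) * (1 - P[5][5]) - P[4][5] * P[5][4]
h4 = (b4 * (1 - P[5][5]) + P[4][5] * b5) / det
h5 = ((1 - P[4][4]) * b5 + P[5][4] * b4) / det
h = [0.0, 0.0, 1.0, 1.0, h4, h5]

# One-step analysis: P_x(T~_2 < T~_1^+) for the two states x of X_1.
tilde = [sum(P[x][y] * h[y] for y in range(6)) for x in (0, 1)]

# Base-chain quantity P_1(T_2 < T_1^+) = p_12 + p_13 * P_3(T_2 < T_1).
base = p[0][1] + p[0][2] * p[2][1] / (p[2][0] + p[2][1])

# The sandwich of Lemma 7.5(2) with l = 3: exponent 4^(l-2) + 1 = 5.
lo, hi = base / a ** 5, base * a ** 5
assert all(lo <= t <= hi for t in tilde)
```

In this example the perturbed probabilities land within roughly a factor $$a^{2}$$ of the base value, comfortably inside the $$a^{5}$$ sandwich the lemma guarantees.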

Let $${\check{N}}\doteq \inf \{n\in {\mathbb {N}} _{0}:{\tilde{Z}}_{n}\in \cup _{j\in L\setminus \{1\}}{\mathcal {X}}_{j}\}$$ and $${\tilde{N}}\doteq \inf \{n\in {\mathbb {N}} :{\tilde{Z}}_{n}\in {\mathcal {X}}_{1},n\ge {\check{N}}\}.$$ For any $$j\in L,$$ let $$\tilde{N}_{j}$$ be the number of visits (including time 0) to the set $${\mathcal {X}} _{j}$$ before $${\tilde{N}},$$ i.e., $${\tilde{N}}_{j}=| \{n\in {\mathbb {N}} _{0}:n<{\tilde{N}}\text { and }{\tilde{Z}}_{n}\in {\mathcal {X}}_{j}\}| .$$ We would like to understand $$E_{x}{\tilde{N}}_{j}$$ for any $$j\in L$$ and $$x\in {\mathcal {X}}_{1}$$ or $${\mathcal {X}}_{j}.$$

### Lemma 7.6

For any $$j\in L$$ and $$x\in {\mathcal {X}}_{1}$$

\begin{aligned} E_{x}{\tilde{N}}_{j}\le \frac{a^{4^{l-1}}}{\sum _{\ell \in L\setminus \{1\}}p_{1\ell }}\frac{\sum _{g\in G\left( j\right) }\pi \left( g\right) }{\sum _{g\in G\left( 1\right) }\pi \left( g\right) }. \end{aligned}

Moreover, for any $$j\in L\setminus \{1\}$$

\begin{aligned} \sum _{\ell =1}^{\infty }\sup _{x\in {\mathcal {X}}_{j}}P_{x}\left( {\tilde{N}}_{j} \ge \ell \right) \le a^{4^{l-1}}\frac{\sum _{g\in G\left( 1,j\right) } \pi \left( g\right) }{\sum _{g\in G\left( 1\right) }\pi \left( g\right) } \end{aligned}

and

\begin{aligned} \sum _{\ell =1}^{\infty }\sup _{x\in {\mathcal {X}}_{1}}P_{x}\left( {\tilde{N}}_{1} \ge \ell \right) \le \frac{a}{\sum _{\ell \in L\setminus \{1\}}p_{1\ell }} \end{aligned}

### Proof

For any $$x\in {\mathcal {X}}_{1},$$ note that for any $$\ell \in {\mathbb {N}} ,$$ by a conditioning argument as in the proof of Lemma 7.3 (3), we find that for $$j\in L\setminus \{1\}$$

\begin{aligned} P_{x}( {\tilde{N}}_{j}\ge \ell ) \le \frac{\sup _{y\in {\mathcal {X}} _{1}}P_{y}( {\tilde{T}}_{j}<{\tilde{T}}_{1}^{+}) }{1-\sup _{y\in {\mathcal {X}}_{1}}p\left( y,{\mathcal {X}}_{1}\right) }\left( \sup \nolimits _{y\in {\mathcal {X}}_{j}}P_{y}( {\tilde{T}}_{j}^{+}<{\tilde{T}}_{1}) \right) ^{\ell -1} \end{aligned}

and

\begin{aligned} P_{x}( {\tilde{N}}_{1}\ge \ell ) \le \left( \sup \nolimits _{y\in {\mathcal {X}}_{1}}p\left( y,{\mathcal {X}}_{1}\right) \right) ^{\ell -1}. \end{aligned}

Thus, for any $$x\in {\mathcal {X}}_{1}$$ and for $$j\in L\setminus \{1\}$$

\begin{aligned} E_{x}{\tilde{N}}_{j}&=\sum _{\ell =1}^{\infty }P_{x}( {\tilde{N}}_{j} \ge \ell ) \le \frac{\sup _{y\in {\mathcal {X}}_{1}}P_{y}( {\tilde{T}}_{j}<\tilde{T}_{1}^{+}) }{1-\sup _{y\in {\mathcal {X}}_{1}}p\left( y,{\mathcal {X}} _{1}\right) }\cdot \frac{1}{1-\sup _{y\in {\mathcal {X}}_{j}}P_{y}( \tilde{T}_{j}^{+}<{\tilde{T}}_{1}) }\\&=\frac{\sup _{y\in {\mathcal {X}}_{1}}P_{y}( {\tilde{T}}_{j}<{\tilde{T}} _{1}^{+}) }{\left( \inf _{y\in {\mathcal {X}}_{1}}\left( 1-p\left( y,{\mathcal {X}}_{1}\right) \right) \right) ( \inf _{y\in {\mathcal {X}}_{j} }P_{y}( {\tilde{T}}_{1}<{\tilde{T}}_{j}^{+}) ) }\\&\le a^{4^{l-1}}\frac{P_{1}( T_{j}<T_{1}^{+}) }{( \sum _{\ell \in L\setminus \{1\}}p_{1\ell }) P_{j}( T_{1}<T_{j} ^{+}) }\\&=\frac{a^{4^{l-1}}}{\sum _{\ell \in L\setminus \{1\}}p_{1\ell }}\frac{\lambda _{j}}{\lambda _{1}} =\frac{a^{4^{l-1}}}{\sum _{\ell \in L\setminus \{1\}}p_{1\ell }}\frac{\sum _{g\in G\left( j\right) }\pi \left( g\right) }{\sum _{g\in G\left( 1\right) }\pi \left( g\right) }. \end{aligned}

The second inequality is from Remark 7.4 and Lemma 7.5 (2); the third equality comes from Lemma 7.3 (2); the last equality holds due to Lemma 7.2. Also,

\begin{aligned} E_{x}{\tilde{N}}_{1}&= \sum _{\ell =1}^{\infty }P_{x}( {\tilde{N}}_{1} \ge \ell ) \le \frac{1}{1-\sup _{y\in {\mathcal {X}}_{1}}p\left( y,{\mathcal {X}}_{1}\right) } =\frac{1}{\inf _{y\in {\mathcal {X}}_{1}}\left( 1-p\left( y,{\mathcal {X}} _{1}\right) \right) }\\&\le \frac{a}{\sum _{\ell \in L\setminus \{1\}}p_{1\ell }}. \end{aligned}

The last inequality is from Remark 7.4. This completes the proof of part 1.

Turning to part 2, since for any $$\ell \in {\mathbb {N}}$$

\begin{aligned} \sup \nolimits _{x\in {\mathcal {X}}_{1}}P_{x}( {\tilde{N}}_{1}\ge \ell ) \le \left( \sup \nolimits _{y\in {\mathcal {X}}_{1}}p\left( y,{\mathcal {X}}_{1}\right) \right) ^{\ell -1}, \end{aligned}

we have

\begin{aligned} \sum _{\ell =1}^{\infty }\sup _{x\in {\mathcal {X}}_{1}}P_{x}( {\tilde{N}}_{1} \ge \ell ) \le \frac{1}{1-\sup _{y\in {\mathcal {X}}_{1}}p\left( y,{\mathcal {X}}_{1}\right) }\le \frac{a}{\sum _{\ell \in L\setminus \{1\}}p_{1\ell }}. \end{aligned}

Furthermore, we use the conditioning argument again to find that for any $$j\in L\setminus \{1\}$$ and $$\ell \in {\mathbb {N}}$$

\begin{aligned} \sup \nolimits _{x\in {\mathcal {X}}_{j}}P_{x}( {\tilde{N}}_{j}\ge \ell ) \le ( \sup \nolimits _{y\in {\mathcal {X}}_{j}}P_{y}( {\tilde{T}}_{j}^{+}<\tilde{T}_{1}) ) ^{\ell -1}. \end{aligned}

This implies that

\begin{aligned}&\sum _{\ell =1}^{\infty }\sup \nolimits _{x\in {\mathcal {X}}_{j}}P_{x}( {\tilde{N}}_{j} \ge \ell ) \\&\qquad \le \sum _{\ell =1}^{\infty }( \sup \nolimits _{y\in {\mathcal {X}}_{j}}P_{y}( {\tilde{T}}_{j}^{+}<{\tilde{T}}_{1}) ) ^{\ell -1} =\frac{1}{1-\sup _{y\in {\mathcal {X}}_{j}}P_{y}( {\tilde{T}}_{j}^{+}<{\tilde{T}}_{1}) }\\&\qquad =\frac{1}{\inf _{y\in {\mathcal {X}}_{j}} P_{y}( {\tilde{T}} _{1}<{\tilde{T}}_{j}^{+}) } \le a^{4^{l-1}}\frac{1}{P_{j}( T_{1}<T_{j}^{+}) }\\&\qquad =a^{4^{l-1}}\lambda _{j}(E_{1}T_{j}+E_{j}T_{1}) =a^{4^{l-1}}\frac{\sum _{g\in G\left( 1,j\right) }\pi \left( g\right) }{\sum _{g\in G\left( 1\right) }\pi \left( g\right) }. \end{aligned}

We use Lemma 7.5 (2) to obtain the second inequality and Lemma 7.3, parts (2) and (1), for the penultimate and last equalities. $$\square$$
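The first bound of Lemma 7.6 can likewise be checked on a finite toy model: a hypothetical 6-state chain with $$l=3$$ classes of two states each, perturbed within a factor a of a base matrix so that (7.3) holds (all numbers illustrative). The sketch computes $$E_{x}{\tilde{N}}_{j}$$ for $$x\in {\mathcal {X}}_{1}$$ and $$j\ne 1$$ exactly by linear algebra, identifies the graph ratio with $$\lambda _{j}/\lambda _{1}$$ as in Lemma 7.2, and compares with the stated bound, here with exponent $$4^{l-1}=16$$.

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda k: abs(M[k][col]))
        M[col], M[piv] = M[piv], M[col]
        M[col] = [x / M[col][col] for x in M[col]]
        for k in range(n):
            if k != col:
                f = M[k][col]
                M[k] = [x - f * y for x, y in zip(M[k], M[col])]
    return [M[k][n] for k in range(n)]

a = 1.1
p = [[0.6, 0.2, 0.2], [0.3, 0.4, 0.3], [0.25, 0.25, 0.5]]
cls = [0, 0, 1, 1, 2, 2]      # class of each of the 6 states
c = [[1.0, 1.05, 0.95], [1.0, 0.92, 1.08], [1.1, 1.0, 0.91],
     [0.95, 1.0, 1.02], [1.04, 0.96, 1.0], [0.93, 1.07, 1.0]]

# Perturbed 6-state chain: mass c[x][j] * p_{ij} to class j != i, split evenly
# over the two states of class j; the remainder stays in x's own class.
P = []
for x in range(6):
    i = cls[x]
    row = [0.0] * 6
    for y in range(6):
        if cls[y] != i:
            row[y] = c[x][cls[y]] * p[i][cls[y]] / 2.0
    rem = 1.0 - sum(row)
    for y in range(6):
        if cls[y] == i:
            row[y] = rem / 2.0
    P.append(row)

# Stationary law of the *base* 3-state chain: lam p = lam with sum(lam) = 1.
A = [[p[j][i] - (i == j) for j in range(3)] for i in range(3)]
A[2] = [1.0, 1.0, 1.0]
lam = solve(A, [0.0, 0.0, 1.0])

T = [2, 3, 4, 5]              # states outside X_1
for j in (1, 2):              # classes X_2 and X_3
    # w[r] = expected visits to class j before hitting X_1, started at T[r];
    # solves (I - Q) w = 1_{X_j} with Q the restriction of P to T.
    IQ = [[(r == s) - P[T[r]][T[s]] for s in range(4)] for r in range(4)]
    w = solve(IQ, [1.0 if cls[T[r]] == j else 0.0 for r in range(4)])
    # E_x N~_j for x in X_1: first-step over the sojourn inside X_1, i.e.
    # solve (I - P|_{X_1}) m = h with h_x = sum over y in T of P[x][y] w(y).
    h = [sum(P[x][T[r]] * w[r] for r in range(4)) for x in (0, 1)]
    m = solve([[1 - P[0][0], -P[0][1]], [-P[1][0], 1 - P[1][1]]], h)
    bound = a ** 16 * lam[j] / ((p[0][1] + p[0][2]) * lam[0])
    assert all(mx <= bound for mx in m)
```

In this example the exact values of $$E_{x}{\tilde{N}}_{j}$$ sit well below the bound, as expected, since the factor $$a^{4^{l-1}}=a^{16}$$ is generous.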

### Asymptotics of Moments of $$S_{1}^{\varepsilon }$$

Recall that $$\{X^{\varepsilon }\}_{\varepsilon \in (0,\infty )}\subset C([0,\infty ):M)$$ is a sequence of stochastic processes satisfying Conditions 3.1–3.7 and 3.13. Moreover, recall that $$S_{1}^{\varepsilon }$$ is defined by

\begin{aligned} S_{1}^{\varepsilon }\doteq \int _{0}^{\tau _{1}^{\varepsilon } }e^{-\frac{1}{\varepsilon }f\left( X_{t}^{\varepsilon }\right) }1_{A}\left( X_{t}^{\varepsilon }\right) dt. \end{aligned}
(7.4)

As mentioned in Section 6, we are interested in the logarithmic asymptotics of $$E_{\lambda ^{\varepsilon }}S_{1}^{\varepsilon }$$ and $$E_{\lambda ^{\varepsilon }}(S_{1}^{\varepsilon })^{2}.$$ To find these asymptotics, the main tool we will use is Freidlin–Wentzell theory. In fact, we will generalize the results of Freidlin–Wentzell as follows: for any given continuous function $$f:M\rightarrow {\mathbb {R}}$$ and any compact set $$A\subset M,$$ we will provide lower bounds for

\begin{aligned} \liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left( \sup _{z\in \partial B_{\delta }(O_{1})}E_{z}\left( \int _{0}^{\tau _{1}^{\varepsilon }}e^{-\frac{1}{\varepsilon }f\left( X_{s}^{\varepsilon }\right) }1_{A}\left( X_{s}^{\varepsilon }\right) ds\right) \right) \end{aligned}
(7.5)

and

\begin{aligned} \liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left( \sup _{z\in \partial B_{\delta }(O_{1})}E_{z}\left( \int _{0}^{\tau _{1}^{\varepsilon }}e^{-\frac{1}{\varepsilon }f\left( X_{s}^{\varepsilon }\right) }1_{A}\left( X_{s}^{\varepsilon }\right) ds\right) ^{2}\right) . \end{aligned}
(7.6)

As will be shown, these two bounds can be expressed in terms of the quasipotentials $$V(O_{i},O_{j})$$ and $$V(O_{i},x).$$

### Remark 7.7

The Freidlin–Wentzell theory considers only bounds for

\begin{aligned} \liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left( \sup \nolimits _{z\in \partial B_{\delta }(O_{1})}E_{z}\tau _{1}^{\varepsilon }\right) . \end{aligned}

Thus, their result is a special case of (7.5) with $$f\equiv 0$$ and $$A=M$$. Moreover, we generalize their result further by considering the logarithmic asymptotics of higher moment quantities such as (7.6).

Before proceeding, we recall that $$L=\{1,\ldots ,l\}$$ and for any $$\delta >0,$$ we define $$\tau _{0}\doteq 0,$$

\begin{aligned} \sigma _{n}{\doteq } \inf \{t>\tau _{n}:X_{t}^{\varepsilon }\in {\textstyle \bigcup \nolimits _{j\in L}} \partial B_{2\delta }(O_{j})\} \text { and } \tau _{n} \doteq \inf \{t>\sigma _{n-1}{:}X_{t}^{\varepsilon }\in {\textstyle \bigcup \nolimits _{j\in L}} \partial B_{\delta }(O_{j})\}. \end{aligned}

Moreover, $$\tau _{0}^{\varepsilon }\doteq 0,$$

\begin{aligned} \sigma _{n}^{\varepsilon }{\doteq } \inf \{t>\tau _{n}^{\varepsilon }{:}X_{t} ^{\varepsilon }\in {\textstyle \bigcup \nolimits _{j\in L\setminus \{1\}}} \partial B_{\delta }(O_{j})\} \text { and }\tau _{n}^{\varepsilon } \doteq \inf \left\{ t>\sigma _{n-1}^{\varepsilon }:X_{t}^{\varepsilon }\in \partial B_{\delta }(O_{1})\right\} . \end{aligned}

In addition, $$\{Z_{n}\}_{n\in {\mathbb {N}} _{0}}\doteq \{X_{\tau _{n}}^{\varepsilon }\}_{n\in {\mathbb {N}} _{0}}$$ is a Markov chain on $${\textstyle \bigcup \nolimits _{j\in L}} \partial B_{\delta }(O_{j})$$ and $$\{Z_{n}^{\varepsilon }\}_{n\in {\mathbb {N}} _{0}}$$ $$\doteq \{X_{\tau _{n}^{\varepsilon }}^{\varepsilon }\}_{n\in {\mathbb {N}} _{0}}$$ is a Markov chain on $$\partial B_{\delta }(O_{1}).$$ It is essential to keep the distinction clear: with an $$\varepsilon$$ superscript, each step of the chain involves an excursion that visits the neighborhood of a different equilibrium before returning to $$\partial B_{\delta }(O_{1})$$; without it, such transitions are possible, but near a stable equilibrium the chain will typically make many transitions between the $$\delta$$- and $$2\delta$$-neighborhoods of the same equilibrium before moving on.

Following the notation of Subsect. 7.1, let $${\hat{N}}\doteq \inf \{n\in {\mathbb {N}} _{0}:Z_{n}\in {\textstyle \bigcup \nolimits _{j\in L\setminus \{1\}}} \partial B_{\delta }(O_{j})\}$$, $$N\doteq \inf \{n\ge {\hat{N}}:Z_{n}\in \partial B_{\delta }(O_{1})\}$$, and recall $${\mathcal {F}}_{t}\doteq \sigma (\{X_{s} ^{\varepsilon };s\le t\})$$. Then, since $$\{\tau _{n}\}_{n\in {\mathbb {N}} _{0}}$$ are stopping times with respect to the filtration $$\{{\mathcal {F}} _{t}\}_{t\ge 0},$$ $${\mathcal {F}}_{\tau _{n}}$$ are well-defined for any $$n\in {\mathbb {N}} _{0}$$ and we use $${\mathcal {G}}_{n}$$ to denote $${\mathcal {F}}_{\tau _{n}}.$$ One can prove that $${\hat{N}}$$ and N are stopping times with respect to $$\{{\mathcal {G}} _{n}\}_{n\in {\mathbb {N}} }.$$ For any $$j\in L,$$ let $$N_{j}$$ be the number of visits of $$\{Z_{n}\}_{n\in {\mathbb {N}} _{0}}$$ to $$\partial B_{\delta }(O_{j})$$ (including time 0) before N.

The proofs of the following two lemmas are given in the Appendix.

### Lemma 7.8

Given $$\delta >0$$ sufficiently small, for any $$x\in \partial B_{\delta }(O_{1})$$ and any nonnegative measurable function $$g:M\rightarrow {\mathbb {R}}$$,

\begin{aligned} E_{x}\left( \int _{0}^{\tau _{1}^{\varepsilon }}g\left( X_{s}^{\varepsilon }\right) ds\right) \le \sum _{j\in L}\left[ \sup _{y\in \partial B_{\delta }(O_{j})}E_{y}\left( \int _{0}^{\tau _{1}}g\left( X_{s}^{\varepsilon }\right) ds\right) \right] \cdot E_{x}N_{j}. \end{aligned}

### Lemma 7.9

Given $$\delta >0$$ sufficiently small, for any $$x\in \partial B_{\delta }(O_{1})$$ and any nonnegative measurable function $$g:M\rightarrow {\mathbb {R}}$$,

\begin{aligned} E_{x}\left( \int _{0}^{\tau _{1}^{\varepsilon }}g\left( X_{s}^{\varepsilon }\right) ds\right) ^{2}&\le l\sum _{j\in L}\left[ \sup _{y\in \partial B_{\delta }(O_{j})}E_{y}\left( \int _{0}^{\tau _{1}}g\left( X_{s}^{\varepsilon }\right) ds\right) ^{2}\right] \cdot E_{x}N_{j}\nonumber \\&\qquad +2l\sum _{j\in L}\left[ \sup _{y\in \partial B_{\delta }(O_{j})} E_{y}\left( \int _{0}^{\tau _{1}}g\left( X_{s}^{\varepsilon }\right) ds\right) \right] ^{2}\cdot E_{x}N_{j}\nonumber \\&\quad \quad \qquad \cdot \sum _{k=1}^{\infty }\sup _{y\in \partial B_{\delta } (O_{j})}P_{y}\left( k\le N_{j}\right) . \end{aligned}
(7.7)

Although, as noted, the proofs are given in the Appendix, these results follow in a straightforward way: decompose the excursion away from $$O_{1}$$ during $$[0,\tau _{1}^{\varepsilon }]$$, which ends only upon return to a neighborhood of $$O_{1}$$, into excursions between pairs of equilibrium points, count the number of such excursions that start near each particular equilibrium point, and use the strong Markov property.
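In symbols (a sketch, suppressing measurability and integrability details): since $$\tau _{1}^{\varepsilon }=\tau _{N}$$, one can decompose

```latex
\begin{aligned}
\int _{0}^{\tau _{1}^{\varepsilon }}g\left( X_{s}^{\varepsilon }\right) ds
  =\sum _{n=0}^{N-1}\int _{\tau _{n}}^{\tau _{n+1}}g\left( X_{s}^{\varepsilon }\right) ds
  =\sum _{j\in L}\sum _{n=0}^{N-1}1_{\{Z_{n}\in \partial B_{\delta }(O_{j})\}}
    \int _{\tau _{n}}^{\tau _{n+1}}g\left( X_{s}^{\varepsilon }\right) ds .
\end{aligned}
```

Conditioning on $${\mathcal {G}}_{n}$$ and using the strong Markov property bounds the expected contribution of the terms with $$Z_{n}\in \partial B_{\delta }(O_{j})$$ by $$\sup _{y\in \partial B_{\delta }(O_{j})}E_{y}(\int _{0}^{\tau _{1}}g(X_{s}^{\varepsilon })ds)\cdot E_{x}N_{j}$$, which gives Lemma 7.8; squaring the decomposition and applying a Cauchy–Schwarz bound to the sum over the l classes leads to the factor l in Lemma 7.9.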

### Remark 7.10

Following an argument analogous to the proofs of Lemmas 7.8 and 7.9, we can prove the following: Given $$\delta >0$$ sufficiently small, for any $$x\in \partial B_{\delta }(O_{1})$$ and any nonnegative measurable function $$g:M\rightarrow {\mathbb {R}}$$,

\begin{aligned} E_{x}\left( \int _{\sigma _{0}^{\varepsilon }}^{\tau _{1}^{\varepsilon }}g\left( X_{s}^{\varepsilon }\right) ds\right) \le {\textstyle \sum _{j\in L\setminus \{1\}}} \left[ \sup _{y\in \partial B_{\delta }(O_{j})}E_{y}\left( \int _{0}^{\tau _{1} }g\left( X_{s}^{\varepsilon }\right) ds\right) \right] \cdot E_{x}N_{j} \end{aligned}

and

\begin{aligned} E_{x}\left( \int _{\sigma _{0}^{\varepsilon }}^{\tau _{1}^{\varepsilon }}g\left( X_{s}^{\varepsilon }\right) ds\right) ^{2}&\le l {\textstyle \sum _{j\in L\setminus \{1\}}} \left[ \sup _{y\in \partial B_{\delta }(O_{j})}E_{y}\left( \int _{0}^{\tau _{1} }g\left( X_{s}^{\varepsilon }\right) ds\right) ^{2}\right] \cdot E_{x} N_{j}\\&\quad +2l {\textstyle \sum _{j\in L\setminus \{1\}}} \left[ \sup _{y\in \partial B_{\delta }(O_{j})}E_{y}\left( \int _{0}^{\tau _{1} }g\left( X_{s}^{\varepsilon }\right) ds\right) \right] ^{2}\cdot E_{x} N_{j}\\&\quad \cdot {\textstyle \sum _{\ell =1}^{\infty }} \sup _{y\in \partial B_{\delta }(O_{j})}P_{y}\left( \ell \le N_{j}\right) . \end{aligned}

The main difference is that if the integration starts from $$\sigma _{0}^{\varepsilon }$$ (the first hitting time of $${\textstyle \bigcup \nolimits _{j\in L\setminus \{1\}}} \partial B_{\delta }(O_{j})$$), then every summation appearing in the upper bounds runs over $$L\setminus \{1\}$$ instead of L.

Since this integral appears frequently with varying arguments, we introduce the notation

\begin{aligned} I^{\varepsilon }(t_{1},t_{2};f,A)\doteq \int _{t_{1}}^{t_{2}}e^{-\frac{1}{\varepsilon }f(X_{s}^{\varepsilon })}1_{A}(X_{s}^{\varepsilon })ds, \end{aligned}
(7.8)

and write $$I^{\varepsilon }(t;f,A)$$ if $$t_{1}=0$$ and $$t_{2}=t$$ so that, e.g., $$S_{1}^{\varepsilon }=I^{\varepsilon }(\tau _{1}^{\varepsilon };f,A)$$.

### Corollary 7.11

Given any measurable set $$A\subset M$$, a measurable function $$f:M\rightarrow {\mathbb {R}} ,$$ $$j\in L$$ and $$\delta >0,$$ we have

\begin{aligned}&\liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left( \sup _{z\in \partial B_{\delta }(O_{1})}E_{z}I^{\varepsilon }(\tau _{1}^{\varepsilon };f,A)\right) \\&\quad \ge \min _{j\in L}\left\{ \liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left( \sup _{z\in \partial B_{\delta }(O_{1})}E_{z} N_{j}\right) +\liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left( \sup _{z\in \partial B_{\delta }(O_{j})}E_{z}I^{\varepsilon }(\tau _{1} ;f,A)\right) \right\} , \end{aligned}

and

\begin{aligned} \liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left( \sup _{z\in \partial B_{\delta }(O_{1})}E_{z}I^{\varepsilon }(\tau _{1}^{\varepsilon };f,A)^{2}\right) \ge \min _{j\in L}\left( {\hat{R}}_{j}^{(1)}\wedge {\hat{R}}_{j}^{(2)}\right) , \end{aligned}

where

\begin{aligned} {\hat{R}}_{j}^{(1)}\doteq \liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left( \sup _{z\in \partial B_{\delta }(O_{j})}E_{z}I^{\varepsilon }(\tau _{1};f,A)^{2}\right) +\liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left( \sup _{z\in \partial B_{\delta }(O_{1})}E_{z}N_{j}\right) \end{aligned}

and

\begin{aligned} {\hat{R}}_{j}^{(2)}&\doteq 2\liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left( \sup _{z\in \partial B_{\delta }(O_{j})}E_{z}I^{\varepsilon }(\tau _{1};f,A)\right) +\liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left( \sup _{z\in \partial B_{\delta }(O_{1})}E_{z}N_{j}\right) \\&\quad +\liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left( \sum \nolimits _{\ell =1}^{\infty }\sup _{z\in \partial B_{\delta }(O_{j})}P_{z}\left( \ell \le N_{j}\right) \right) . \end{aligned}

### Proof

For the first part, applying Lemma 7.8 with $$g(x)=e^{-\frac{1}{\varepsilon }f\left( x\right) }1_{A}\left( x\right)$$ and using (7.1) and (7.2) completes the proof. For the second part, using Lemma 7.9 with $$g(x)=e^{-\frac{1}{\varepsilon }f\left( x\right) }1_{A}\left( x\right)$$ and using (7.1) and (7.2) again completes the proof. $$\square$$

### Remark 7.12

Owing to Remark 7.10, we can modify the proof of Corollary 7.11 and show that given any measurable set $$A\subset M,$$ a measurable function $$f:M\rightarrow {\mathbb {R}} ,$$ $$j\in L$$ and $$\delta >0,$$

\begin{aligned}&\liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left( \sup _{z\in \partial B_{\delta }(O_{1})}E_{z}I^{\varepsilon }(\sigma _{0}^{\varepsilon },\tau _{1}^{\varepsilon };f,A)\right) \\&\quad \ge \min _{j\in L\setminus \{1\}}\left\{ \liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left( \sup _{z\in \partial B_{\delta }(O_{1} )}E_{z}N_{j}\right) +\liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left( \sup _{z\in \partial B_{\delta }(O_{j})}E_{z}I^{\varepsilon }(\tau _{1} ;f,A)\right) \right\} . \end{aligned}

Moreover,

\begin{aligned} \liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left( \sup _{z\in \partial B_{\delta }(O_{1})}E_{z}I^{\varepsilon }(\sigma _{0}^{\varepsilon },\tau _{1}^{\varepsilon };f,A)^{2}\right) \ge \min _{j\in L\setminus \{1\}}\left( {\hat{R}}_{j}^{(1)}\wedge {\hat{R}}_{j}^{(2)}\right) , \end{aligned}

where the definitions of $${\hat{R}}_{j}^{(1)}$$ and $${\hat{R}}_{j}^{(2)}$$ can be found in Corollary 7.11.

We next consider lower bounds on

\begin{aligned}&\liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left( \sup _{z\in \partial B_{\delta }(O_{j})}E_{z}I^{\varepsilon }(\tau _{1};f,A)\right) \quad \text{ and }\quad \\&\quad \liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left( \sup _{z\in \partial B_{\delta }(O_{j})}E_{z}I^{\varepsilon }(\tau _{1} ;f,A)^{2}\right) \end{aligned}

for $$j\in L$$. We state some useful results before studying the lower bounds. Recall also that $$\tau _{1}$$ is the time to reach the $$\delta$$-neighborhood of any of the equilibrium points after leaving the $$2\delta$$-neighborhood of one of the equilibrium points.

### Lemma 7.13

For any $$\eta >0,$$ there exists $$\delta _{0}\in (0,1)$$ and $$\varepsilon _{0}\in (0,1)$$, such that for all $$\delta \in (0,\delta _{0})$$ and $$\varepsilon \in (0,\varepsilon _{0})$$

\begin{aligned} \sup _{x\in M}E_{x}\tau _{1}\le e^{\frac{\eta }{\varepsilon }}\text { and } \sup _{x\in M}E_{x}\left( \tau _{1}\right) ^{2}\le e^{\frac{\eta }{\varepsilon }}. \end{aligned}

### Proof

If x is not in $$\cup _{j\in L}B_{2\delta }(O_{j})$$, then a uniform (in x and small $$\varepsilon$$) upper bound on these expected values follows from the corollary to [12, Lemma 1.9, Chapter 6].

If $$x\in \cup _{j\in L}B_{2\delta }(O_{j})$$, then we must wait until the process reaches $$\cup _{j\in L}\partial B_{2\delta }(O_{j})$$, after which we can use the uniform bound (and the strong Markov property). Since there exists $$\delta >0$$ such that the lower bound $$P_{x}(\inf \{t\ge 0:X_{t}^{\varepsilon }\in \cup _{j\in L}\partial B_{2\delta }(O_{j})\}\le 1)\ge e^{-\eta /2\varepsilon }$$ is valid for all $$x\in \cup _{j\in L}B_{2\delta }(O_{j})$$ and small $$\varepsilon >0$$, upper bounds of the desired form follow from the Markov property and standard calculations. $$\square$$

For any compact set $$A\subset M$$, we use $$\vartheta _{A}$$ to denote the first hitting time

\begin{aligned} \vartheta _{A}\doteq \inf \left\{ t\ge 0:X_{t}^{\varepsilon }\in A\right\} . \end{aligned}

Note that $$\vartheta _{A}$$ is a stopping time with respect to the filtration $$\{{\mathcal {F}}_{t}\}_{t\ge 0}.$$ The following result is relatively straightforward given the bound just discussed on the distribution of $$\tau _{1}$$, and follows by partitioning according to $$\tau _{1} \ge T$$ and $$\tau _{1} < T$$ for large but fixed T.
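In outline (a heuristic sketch of that partition, with the choice of constants suppressed): for $$z\in \partial B_{\delta }(O_{j})$$ and fixed large T,

```latex
\begin{aligned}
P_{z}\left( \vartheta _{A}\le \tau _{1}\right)
  \le P_{z}\left( \tau _{1}\ge T\right) +P_{z}\left( \vartheta _{A}\le T\right) .
\end{aligned}
```

The first term can be driven below any $$e^{-s/\varepsilon }$$ by taking T large, using bounds of the type behind Lemma 7.13, while for fixed T the second term is the probability that a trajectory started near $$O_{j}$$ reaches A, which the large deviation upper bound controls in terms of a cost of at least $$\inf _{x\in A}V(O_{j},x)$$, up to an error that vanishes with $$\delta$$.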

### Lemma 7.14

For any compact set $$A\subset M,$$ $$j\in L$$ and any $$\eta >0,$$ there exists $$\delta _{0}\in (0,1)$$ and $$\varepsilon _{0}\in (0,1)$$, such that for all $$\varepsilon \in (0,\varepsilon _{0})$$ and $$\delta \in (0,\delta _{0})$$

\begin{aligned} \sup _{z\in \partial B_{\delta }(O_{j})}P_{z}\left( \vartheta _{A}\le \tau _{1}\right) \le e^{-\frac{1}{\varepsilon }\left( \inf _{x\in A}\left[ V\left( O_{j},x\right) \right] -\eta \right) }. \end{aligned}

### Lemma 7.15

Given a compact set $$A\subset M$$, any $$j\in L$$ and $$\eta >0,$$ there exists $$\delta _{0}\in (0,1),$$ such that for any $$\delta \in (0,\delta _{0})$$

\begin{aligned} \liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left( \sup _{z\in \partial B_{\delta }(O_{j})}E_{z}\left[ \int _{0}^{\tau _{1}}1_{A}\left( X_{s} ^{\varepsilon }\right) ds\right] \right) \ge \inf _{x\in A}V\left( O_{j},x\right) -\eta \end{aligned}

and

\begin{aligned} \liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left( \sup _{z\in \partial B_{\delta }(O_{j})}E_{z}\left( \int _{0}^{\tau _{1}}1_{A}\left( X_{s} ^{\varepsilon }\right) ds\right) ^{2}\right) \ge \inf _{x\in A}V\left( O_{j},x\right) -\eta . \end{aligned}

### Proof

The idea of this proof follows from the proof of Theorem 4.3 in [12, Chapter 4]. Since $$I^{\varepsilon }(\tau _{1};0,A)=\int _{0}^{\tau _{1}} 1_{A}\left( X_{s}^{\varepsilon }\right) ds$$, for any $$x\in \partial B_{\delta }(O_{j}),$$

\begin{aligned}&E_{x}I^{\varepsilon }(\tau _{1};0,A)\\&=E_{x}\left[ I^{\varepsilon }(\tau _{1};0,A)1_{\left\{ \vartheta _{A} \le \tau _{1}\right\} }\right] =E_{x}\left[ E_{x}\left[ \left. I^{\varepsilon }(\tau _{1};0,A)\right| {\mathcal {F}}_{\vartheta _{A}}\right] 1_{\left\{ \vartheta _{A}\le \tau _{1}\right\} }\right] \\&=E_{x}\left[ ( E_{X_{\vartheta _{A}}^{\varepsilon }}I^{\varepsilon }(\tau _{1};0,A)) 1_{\left\{ \vartheta _{A}\le \tau _{1}\right\} }\right] \le \sup \nolimits _{y\in \partial A}E_{y}\tau _{1} \cdot \sup \nolimits _{z\in \partial B_{\delta }(O_{j})}P_{z}\left( \vartheta _{A}\le \tau _{1}\right) . \end{aligned}

The inequality is due to $$E_{X_{\vartheta _{A}}^{\varepsilon }}I^{\varepsilon }(\tau _{1};0,A)\le E_{X_{\vartheta _{A}}^{\varepsilon }}\tau _{1}\le \sup _{y\in \partial A}E_{y} \tau _{1}.$$ We then apply Lemmas 7.13 and  7.14 to find that for the given $$\eta >0,$$ there exists $$\delta _{0}\in (0,1)$$ and $$\varepsilon _{0} \in (0,1)$$, such that for all $$\varepsilon \in (0,\varepsilon _{0})$$ and $$\delta \in (0,\delta _{0}),$$

\begin{aligned} E_{x}I^{\varepsilon }(\tau _{1};0,A)\le \sup _{y\in \partial A}E_{y}\tau _{1} \cdot \sup _{z\in \partial B_{\delta }(O_{j})}P_{z}\left( \vartheta _{A}\le \tau _{1}\right) \le e^{\frac{\eta /2}{\varepsilon }}e^{-\frac{1}{\varepsilon }\left( \inf _{y\in A}V\left( O_{j},y\right) -\eta /2\right) }. \end{aligned}

Thus,

\begin{aligned}&\liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left( \sup \nolimits _{z\in \partial B_{\delta }(O_{j})}E_{z}I^{\varepsilon }(\tau _{1};0,A)\right) \ge \inf _{x\in A}V\left( O_{j},x\right) -\eta . \end{aligned}

This completes the proof of part 1.

For part 2, following the same conditioning argument as for part 1 with the use of Lemmas 7.13 and  7.14 gives that for the given $$\eta >0,$$ there exists $$\delta _{0}\in (0,1)$$ and $$\varepsilon _{0}\in (0,1)$$, such that for all $$\varepsilon \in (0,\varepsilon _{0})$$ and $$\delta \in (0,\delta _{0}),$$

\begin{aligned} E_{x}I^{\varepsilon }(\tau _{1};0,A)^{2}\le \sup _{y\in \partial A}E_{y}\left( \tau _{1}\right) ^{2}\cdot \sup _{z\in \partial B_{\delta }(O_{j})}P_{z}\left( \vartheta _{A}\le \tau _{1}\right) \le e^{\frac{\eta /2}{\varepsilon }} e^{-\frac{1}{\varepsilon }\left( \inf _{x\in A}V\left( O_{j},x\right) -\eta /2\right) }. \end{aligned}

Therefore,

\begin{aligned} \liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left( \sup \nolimits _{z\in \partial B_{\delta }(O_{j})}E_{z}I^{\varepsilon }(\tau _{1};0,A)^{2}\right) \ge \inf _{x\in A}V\left( O_{j},x\right) -\eta . \end{aligned}

$$\square$$

### Lemma 7.16

Given compact sets $$A_{1},A_{2}\subset M$$, $$j\in L$$ and $$\eta >0,$$ there exists $$\delta _{0}\in (0,1),$$ such that for any $$\delta \in (0,\delta _{0})$$

\begin{aligned}&\liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left( \sup _{z\in \partial B_{\delta }(O_{j})}E_{z}\left[ \left( \int _{0}^{\tau _{1}}1_{A_{1}}\left( X_{s}^{\varepsilon }\right) ds\right) \left( \int _{0}^{\tau _{1}}1_{A_{2} }\left( X_{s}^{\varepsilon }\right) ds\right) \right] \right) \\&\qquad \ge \max \left\{ \inf _{x\in A_{1}}V\left( O_{j},x\right) ,\inf _{x\in A_{2}}V\left( O_{j},x\right) \right\} -\eta . \end{aligned}

### Proof

We set $$\vartheta _{A_{i}}\doteq \inf \left\{ t\ge 0:X_{t}^{\varepsilon }\in A_{i}\right\}$$ for $$i=1,2.$$ For any $$x\in \partial B_{\delta }(O_{j}),$$ using a conditioning argument as in the proof of Lemma 7.15 we obtain that for any $$\eta >0,$$ there exists $$\delta _{0}\in (0,1)$$ and $$\varepsilon _{0} \in (0,1)$$, such that for all $$\varepsilon \in (0,\varepsilon _{0})$$ and $$\delta \in (0,\delta _{0}),$$

\begin{aligned}&E_{x}\left[ \left( \int _{0}^{\tau _{1}}1_{{A}_{1}}\left( X_{s} ^{\varepsilon }\right) ds\right) \left( \int _{0}^{\tau _{1}}1_{{A}_{2} }\left( X_{s}^{\varepsilon }\right) ds\right) \right] \nonumber \\&=E_{x}\left[ \left( \int _{0}^{\tau _{1}}\int _{0}^{\tau _{1}}1_{{A}_{1} }\left( X_{s}^{\varepsilon }\right) 1_{{A}_{2}}\left( X_{t}^{\varepsilon }\right) dsdt\right) 1_{\left\{ \vartheta _{{A}_{1}}\vee \vartheta _{{A}_{2} }\le \tau _{1}\right\} }\right] \nonumber \\&=E_{x}\left[ \left( E_{X_{\vartheta _{{A}_{1}}\vee \vartheta _{{A}_{2}} }^{\varepsilon }}\left[ \int _{0}^{\tau _{1}}\int _{0}^{\tau _{1}}1_{{A}_{1} }\left( X_{s}^{\varepsilon }\right) 1_{{A}_{2}}\left( X_{t}^{\varepsilon }\right) dsdt\right] \right) 1_{\left\{ \vartheta _{{A}_{1}}\vee \vartheta _{{A}_{2}}\le \tau _{1}\right\} }\right] \nonumber \\&\le \sup \nolimits _{y\in \partial {A}_{1}\cup \partial {A}_{2}}E_{y}\left( \tau _{1}\right) ^{2}\cdot \sup \nolimits _{z\in \partial B_{\delta }(O_{j})}P_{z}\left( \vartheta _{{A}_{1}}\le \tau _{1},\vartheta _{{A}_{2}}\le \tau _{1}\right) \nonumber \\&\le e^{\frac{\eta /2}{\varepsilon }}\cdot \min \left\{ \sup \nolimits _{z\in \partial B_{\delta }(O_{j})}P_{z}\left( \vartheta _{{A}_{1}}\le \tau _{1}\right) ,\sup \nolimits _{z\in \partial B_{\delta }(O_{j})}P_{z}\left( \vartheta _{{A}_{2}}\le \tau _{1}\right) \right\} , \end{aligned}
(7.9)

The last inequality holds since for $$i=1,2$$

\begin{aligned} \sup \nolimits _{z\in \partial B_{\delta }(O_{j})}P_{z}\left( \vartheta _{A_{1} }\le \tau _{1},\vartheta _{A_{2}}\le \tau _{1}\right) \le \sup \nolimits _{z\in \partial B_{\delta }(O_{j})}P_{z}\left( \vartheta _{A_{i}}\le \tau _{1}\right) \end{aligned}

and owing to Lemma 7.13, for all $$\varepsilon \in (0,\varepsilon _{0})$$

\begin{aligned} \sup \nolimits _{y\in \partial A_{1}}E_{y}\left( \tau _{1}\right) ^{2}\le e^{\frac{\eta /2}{\varepsilon }}\text { and }\sup \nolimits _{y\in \partial A_{2} }E_{y}\left( \tau _{1}\right) ^{2}\le e^{\frac{\eta /2}{\varepsilon }}. \end{aligned}

Furthermore, for the given $$\eta >0,$$ by Lemma 7.14, there exists $$\delta _{i}\in (0,1)$$ such that for any $$\delta \in (0,\delta _{i})$$

\begin{aligned} \liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left( \sup \nolimits _{z\in \partial B_{\delta }(O_{j})}P_{z}\left( \vartheta _{A_{i}}\le \tau _{1}\right) \right) \ge \inf _{x\in A_{i}}V\left( O_{j},x\right) -\eta /2 \end{aligned}

for $$i=1,2.$$ Hence, letting $$\delta _{0}=\delta _{1}\wedge \delta _{2},$$ for any $$\delta \in (0,\delta _{0})$$

\begin{aligned}&\liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left( \sup \nolimits _{z\in \partial B_{\delta }(O_{j})}E_{z}\left[ \left( \int _{0} ^{\tau _{1}}1_{A_{1}}\left( X_{s}^{\varepsilon }\right) ds\right) \left( \int _{0}^{\tau _{1}}1_{A_{2}}\left( X_{s}^{\varepsilon }\right) ds\right) \right] \right) \\&\quad \ge \liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left( e^{\frac{\eta }{2\varepsilon }}\min \left\{ \sup \nolimits _{z\in \partial B_{\delta }(O_{j} )}P_{z}\left( \vartheta _{A_{1}}\le \tau _{1}\right) ,\sup \nolimits _{z\in \partial B_{\delta }(O_{j})}P_{z}\left( \vartheta _{A_{2}}\le \tau _{1}\right) \right\} \right) \\&\quad \ge -\eta /2+\max \left\{ \liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left( \sup \nolimits _{z\in \partial B_{\delta }(O_{j})}P_{z}\left( \vartheta _{A_{1}}\le \tau _{1}\right) \right) ,\right. \\&\left. \qquad \qquad \qquad \qquad \qquad \liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left( \sup \nolimits _{z\in \partial B_{\delta }(O_{j})} P_{z}\left( \vartheta _{A_{2}}\le \tau _{1}\right) \right) \right\} \\&\quad \ge \max \left\{ \inf \nolimits _{x\in A_{1}}V\left( O_{j},x\right) ,\inf \nolimits _{x\in A_{2}}V\left( O_{j},x\right) \right\} -\eta . \end{aligned}

The first inequality is from (7.9). $$\square$$

### Remark 7.17

The next lemma considers asymptotics of the first and second moments of a certain integral that will appear in a decomposition of $$S^{\varepsilon }_{1}$$. It is important to note that the variational bounds for both moments have the same structure, namely an infimum over a single $$x \in A$$. While one might expect the variational problem for the second moment to require a pair of parameters (e.g., an infimum over $$x,y \in A$$), the infimum is in fact achieved on the “diagonal” $$x=y$$. This means that the dominant contribution to the second moment likewise comes from mass along the “diagonal.”

### Lemma 7.18

Given a compact set $$A\subset M,$$ a continuous function $$f:M\rightarrow {\mathbb {R}} ,$$ $$j\in L$$ and $$\eta >0,$$ there exists $$\delta _{0}\in (0,1),$$ such that for any $$\delta \in (0,\delta _{0})$$

\begin{aligned} \liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left( \sup _{z\in \partial B_{\delta }(O_{j})}E_{z}I^{\varepsilon }(\tau _{1};f,A)\right) \ge \inf _{x\in A}\left[ f\left( x\right) +V\left( O_{j},x\right) \right] -\eta \end{aligned}

and

\begin{aligned} \liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left( \sup _{z\in \partial B_{\delta }(O_{j})}E_{z}I^{\varepsilon }(\tau _{1};f,A)^{2}\right) \ge \inf _{x\in A}\left[ 2f\left( x\right) +V\left( O_{j},x\right) \right] -\eta . \end{aligned}

### Proof

Since a continuous function is bounded on a compact set, there exists $$m\in (0,\infty )$$ such that $$-m\le f(x)\le m$$ for all $$x\in A.$$ For $$n\in {\mathbb {N}}$$ and $$k\in \{1,2,\ldots ,n\},$$ consider the sets

\begin{aligned} A_{n,k}\doteq \left\{ x\in A:f\left( x\right) \in \left[ -m+\frac{2\left( k-1\right) m}{n},-m+\frac{2km}{n}\right] \right\} . \end{aligned}

Note that $$A_{n,k}$$ is a compact set for any n and k. In addition, for any fixed n, $${\textstyle \bigcup _{k=1}^{n}} A_{n,k}=A.$$ Using this decomposition, for any $$x\in \partial B_{\delta }(O_{j})$$ and $$n\in {\mathbb {N}}$$

\begin{aligned}&E_{x}I^{\varepsilon }(\tau _{1};f,A) \le \sum \nolimits _{k=1}^{n}E_{x}I^{\varepsilon }(\tau _{1};f,{A_{n,k}})\\&\quad \le \sum \nolimits _{k=1}^{n}E_{x}I^{\varepsilon }(\tau _{1};0,{A_{n,k}})e^{-\frac{1}{\varepsilon }\left( F_{n,k} -2m/n\right) }. \end{aligned}

The second inequality holds because by definition of $$A_{n,k},$$ for any $$x\in A_{n,k}$$, $$f(x)\ge F_{n,k} -2m/n$$ with $$F_{n,k}\doteq \sup _{y\in A_{n,k}}f\left( y\right)$$.

Next, we first apply (7.2) and then Lemma 7.15 with compact sets $$A_{n,k}$$ for $$k\in \{1,2,\ldots ,n\}$$ to get

\begin{aligned}&\liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left( \sup _{z\in \partial B_{\delta }(O_{j})}E_{z}I^{\varepsilon }(\tau _{1};f,A)\right) \\&\quad \ge \min _{k\in \left\{ 1,\ldots ,n\right\} }\left\{ \liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left( \sup \limits _{z\in \partial B_{\delta }(O_{j})}E_{z}I^{\varepsilon }(\tau _{1};0,{A_{n,k}})e^{-\frac{1}{\varepsilon }\left( F_{n,k}-\frac{2m}{n}\right) }\right) \right\} \\&\quad =\min _{k\in \left\{ 1,\ldots ,n\right\} }\left\{ \liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left( \sup _{z\in \partial B_{\delta }(O_{j} )}E_{z}I^{\varepsilon }(\tau _{1};0,{A_{n,k}})\right) +F_{n,k}\right\} -\frac{2m}{n}\\&\quad \ge \min _{k\in \left\{ 1,\ldots ,n\right\} }\left\{ \sup _{x\in A_{n,k} }f\left( x\right) +\inf _{x\in A_{n,k}}V\left( O_{j},x\right) \right\} -\eta -\frac{2m}{n}. \end{aligned}

Finally, we know that $$V\left( O_{j},x\right)$$ is bounded below by 0. We then use the fact that for any two functions $$f,g: {\mathbb {R}} ^{d}\rightarrow {\mathbb {R}}$$ with g bounded below (to ensure that the right-hand side is well defined) and any set $$A\subset {\mathbb {R}} ^{d},$$ one has $$\inf _{x\in A}\left( f\left( x\right) +g\left( x\right) \right) \le \sup _{x\in A}f\left( x\right) +\inf _{x\in A}g\left( x\right)$$, to conclude that the last minimum in the previous display is greater than or equal to

\begin{aligned} \min _{k\in \left\{ 1,\ldots ,n\right\} }\left\{ \inf _{x\in A_{n,k}}\left[ f\left( x\right) +V\left( O_{j},x\right) \right] \right\} =\inf _{x\in A}\left[ f\left( x\right) +V\left( O_{j},x\right) \right] . \end{aligned}

Therefore,

\begin{aligned} \liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left( \sup _{z\in \partial B_{\delta }(O_{j})}E_{z}I^{\varepsilon }(\tau _{1};f,A)\right) \ge \inf _{x\in A}\left[ f\left( x\right) +V\left( O_{j},x\right) \right] -\eta -\frac{2m}{n}. \end{aligned}

Since n is arbitrary, sending $$n\rightarrow \infty$$ completes the proof of the first part.
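The elementary inequality invoked in this step can be checked numerically. The following is an illustrative sketch (not part of the argument), verifying $$\inf _{x\in A}(f(x)+g(x))\le \sup _{x\in A}f(x)+\inf _{x\in A}g(x)$$ on randomly generated finite sets:

```python
import random

# Sanity check of the elementary bound used in the proof:
#   inf_{x in A} (f(x) + g(x)) <= sup_{x in A} f(x) + inf_{x in A} g(x),
# here on a finite set A with randomly generated values (illustrative only).
random.seed(0)
for _ in range(1000):
    n = random.randint(1, 20)
    f = [random.uniform(-5, 5) for _ in range(n)]
    g = [random.uniform(0, 5) for _ in range(n)]  # g bounded below
    lhs = min(fx + gx for fx, gx in zip(f, g))
    rhs = max(f) + min(g)
    assert lhs <= rhs + 1e-12
print("inequality verified on 1000 random examples")
```

The inequality follows by evaluating $$f+g$$ at a near-minimizer of g, which is why boundedness of g from below suffices.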

Turning to part 2, we follow the same argument as for part 1. For any $$n\in {\mathbb {N}} ,$$ we use the decomposition of A into $${\textstyle \bigcup _{k=1}^{n}} A_{n,k}$$ to obtain that for any $$x\in \partial B_{\delta }(O_{j}),$$

\begin{aligned}&E_{x}I^{\varepsilon }(\tau _{1};f,A)^{2} \le E_{x}\left( \sum _{k=1}^{n}I^{\varepsilon }(\tau _{1};f,{A_{n,k} })\right) ^{2}\nonumber \\&\quad =\sum _{k=1}^{n}\sum _{\ell =1}^{n}E_{x}\left[ I^{\varepsilon }(\tau _{1};f,{A_{n,k}})I^{\varepsilon }(\tau _{1};f,A_{n,\ell })\right] . \end{aligned}

Recall that $$F_{n,k}$$ is used to denote $$\sup _{y\in A_{n,k}}f\left( y\right)$$. Using the definition of $$A_{n,k}$$ gives that for any $$k,\ell \in \{1,\ldots ,n\}$$

\begin{aligned}&E_{x}\left[ I^{\varepsilon }(\tau _{1};f,{A_{n,k}})I^{\varepsilon }(\tau _{1};f,A_{n,\ell })\right] \\&\quad \le \sup _{z\in \partial B_{\delta }(O_{j})}E_{z}\left[ I^{\varepsilon } (\tau _{1};0,{A_{n,k}})I^{\varepsilon }(\tau _{1};0,A_{n,\ell })\right] e^{-\frac{1}{\varepsilon }\left( F_{n,k}+F_{n,\ell }-\frac{4m}{n}\right) }. \end{aligned}

Applying (7.2) first and then Lemma 7.16 with compact sets $$A_{n,k}$$ and $$A_{n,\ell }$$ pairwise for all $$k,\ell \in \{1,2,\ldots ,n\}$$ gives that

\begin{aligned}&\liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left( \sup _{z\in \partial B_{\delta }(O_{j})}E_{z}I^{\varepsilon }(\tau _{1};f,A)^{2}\right) \\&\quad \ge \min _{k,\ell \in \left\{ 1,\ldots ,n\right\} }\liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \sup _{z\in \partial B_{\delta }(O_{j})}E_{z}\left[ I^{\varepsilon }(\tau _{1};f,{A_{n,k}})I^{\varepsilon }(\tau _{1};f,A_{n,\ell })\right] \\&\quad \ge \min _{k,\ell \in \left\{ 1,\ldots ,n\right\} }\left\{ \max \left\{ \inf _{x\in A_{n,k}}V\left( O_{j},x\right) ,\inf _{x\in A_{n,\ell }}V\left( O_{j},x\right) \right\} +F_{n,k}+F_{n,\ell }\right\} -\eta -\frac{4m}{n}\\&\quad \ge \min _{k\in \left\{ 1,\ldots ,n\right\} }\left\{ \sup _{x\in A_{n,k} }\left[ 2f\left( x\right) \right] +\inf _{x\in A_{n,k}}V\left( O_{j},x\right) \right\} -\eta -\frac{4m}{n}\\&\quad \ge \min _{k\in \left\{ 1,\ldots ,n\right\} }\left\{ \inf _{x\in A_{n,k} }\left[ 2f\left( x\right) +V\left( O_{j},x\right) \right] \right\} -\eta -\frac{4m}{n}\\&\quad =\inf _{x\in A}\left[ 2f\left( x\right) +V\left( O_{j},x\right) \right] -\eta -\frac{4m}{n}. \end{aligned}

Sending $$n\rightarrow \infty$$ completes the proof for the second part. $$\square$$
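The Laplace-type asymptotics used repeatedly above (via Lemma 7.1), namely that $$-\varepsilon \log \sum _{k}e^{-a_{k}/\varepsilon }\rightarrow \min _{k}a_{k}$$, can be illustrated numerically; the values of $$a_k$$ below are made up for illustration:

```python
import math

# Illustrative check of the Laplace principle:
#   -eps * log( sum_k exp(-a_k / eps) ) -> min_k a_k   as eps -> 0,
# with error at most eps * log(number of terms).
a = [2.0, 0.7, 1.3, 0.7]
for eps in [1.0, 0.1, 0.01, 0.001]:
    val = -eps * math.log(sum(math.exp(-ak / eps) for ak in a))
    print(eps, val)
# The values approach min(a) = 0.7.
assert abs(val - 0.7) <= 0.001 * math.log(len(a)) + 1e-12
```

This is why the prelimit sums over $$k$$ (and over graphs g below) contribute only their dominant exponent in the limit.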

Our next interest is to find lower bounds for

\begin{aligned} \liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left( \sup _{z\in \partial B_{\delta }(O_{1})}E_{z}N_{j}\right) \text { and } \liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left( \sum _{\ell =1}^{\infty }\sup _{z\in \partial B_{\delta }(O_{j})}P_{z}\left( \ell \le N_{j}\right) \right) . \end{aligned}

We first recall that $$N_{j}$$ is the number of visits of the embedded Markov chain $$\{Z_{n}\}_{n}=\{X_{\tau _{n}}^{\varepsilon }\}_{n}$$ to $$\partial B_{\delta }(O_{j})$$ within one regenerative cycle. Also, the definitions of G(i) and G(ij) for any $$i,j\in L$$ with $$i\ne j$$ are given in Definition 3.8 and Remark 3.9.

### Lemma 7.19

For any $$\eta >0,$$ there exists $$\delta _{0}\in (0,1),$$ such that for any $$\delta \in (0,\delta _{0})$$ and for any $$j\in L$$

\begin{aligned} \liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left( \sup \nolimits _{z\in \partial B_{\delta }(O_{1})}E_{z}N_{j}\right) \!\ge \!-\!\min _{\ell \in L\setminus \{1\}}V\left( O_{1},O_{\ell }\right) \!+\!W\left( O_{j}\right) -W\left( O_{1}\right) -\eta ,\text { } \end{aligned}

where

\begin{aligned} W\left( O_{j}\right) \doteq \min _{g\in G\left( j\right) }\left[ {\textstyle \sum _{\left( m\rightarrow n\right) \in g}} V\left( O_{m},O_{n}\right) \right] . \end{aligned}
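The quantity $$W(O_{j})$$ can be computed by brute force when L is small, by enumerating the j-graphs in $$G(j)$$ (graphs in which every state $$i\ne j$$ has exactly one outgoing arrow and following arrows from any state leads to j). The sketch below is illustrative only; the function name `W`, the helper `_reaches`, and the cost table `V` are made up for this example:

```python
from itertools import product

def W(j, L, V):
    """Minimize sum of V-costs over j-graphs on the index set L (brute force)."""
    others = [i for i in L if i != j]
    best = float("inf")
    # each state i != j chooses one successor distinct from itself
    for succ in product(*[[n for n in L if n != i] for i in others]):
        g = dict(zip(others, succ))
        # keep the graph only if every state is eventually absorbed at j
        if all(_reaches(i, j, g, len(L)) for i in others):
            best = min(best, sum(V[(i, g[i])] for i in others))
    return best

def _reaches(i, j, g, nmax):
    # follow arrows for at most nmax steps; detects cycles avoiding j
    for _ in range(nmax):
        if i == j:
            return True
        i = g[i]
    return i == j

# toy quasipotential costs V(O_m, O_n) on L = {1, 2, 3} (made up for illustration)
L = [1, 2, 3]
V = {(1, 2): 1.0, (1, 3): 4.0, (2, 1): 2.0, (2, 3): 1.5, (3, 1): 3.0, (3, 2): 0.5}
print(W(1, L, V))  # -> 2.5, attained by the graph {2 -> 1, 3 -> 2}
```

For j = 1 the admissible graphs are $$\{2\rightarrow 1,3\rightarrow 1\}$$, $$\{2\rightarrow 1,3\rightarrow 2\}$$ and $$\{2\rightarrow 3,3\rightarrow 1\}$$, with costs 5.0, 2.5 and 4.5, respectively.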

### Proof

According to Lemma 3.17, we know that for any $$\eta >0,$$ there exist $$\delta _{0}\in (0,1)$$ and $$\varepsilon _{0}\in (0,1),$$ such that for any $$\delta \in (0,\delta _{0})$$ and $$\varepsilon \in (0,\varepsilon _{0}),$$ for all $$x\in \partial B_{\delta }(O_{i}),$$ the one-step transition probability of the Markov chain $$\{Z_{n}\}_{n}$$ on $$\partial B_{\delta }(O_{j})$$ satisfies the inequalities

\begin{aligned} e^{-\frac{1}{\varepsilon }\left( V\left( O_{i},O_{j}\right) +\eta /4^{l-1}\right) }\le p(x,\partial B_{\delta }(O_{j}))\le e^{-\frac{1}{\varepsilon }\left( V\left( O_{i},O_{j}\right) -\eta /4^{l-1}\right) }. \end{aligned}
(7.10)

We can then apply Lemma 7.6 with $$p_{ij}=e^{-\frac{1}{\varepsilon }V\left( O_{i},O_{j}\right) }$$ and $$a=e^{\frac{1}{\varepsilon }\eta /4^{l-1}}$$ to obtain that

\begin{aligned} \sup _{z\in \partial B_{\delta }(O_{1})}E_{z}N_{j}&\le \frac{e^{\frac{1}{\varepsilon }\eta }}{\sum _{\ell \in L\setminus \{1\}}e^{-\frac{1}{\varepsilon }V\left( O_{1},O_{\ell }\right) }}\frac{ {\textstyle \sum _{g\in G\left( j\right) }} \pi \left( g\right) }{\sum _{g\in G\left( 1\right) }\pi \left( g\right) }\\&\le \frac{e^{\frac{1}{\varepsilon }\eta }}{e^{-\frac{1}{\varepsilon }\min _{\ell \in L\setminus \{1\}}V\left( O_{1},O_{\ell }\right) }}\frac{ {\textstyle \sum _{g\in G\left( j\right) }} \pi \left( g\right) }{\sum _{g\in G\left( 1\right) }\pi \left( g\right) }. \end{aligned}

Thus,

\begin{aligned}&\liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left( \sup _{z\in \partial B_{\delta }(O_{1})}E_{z}N_{j}\right) \\&\quad \ge -\min _{\ell \in L\setminus \{1\}}V\left( O_{1},O_{\ell }\right) -\eta +\liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left( \frac{\sum _{g\in G\left( j\right) }\pi \left( g\right) }{\sum _{g\in G\left( 1\right) }\pi \left( g\right) }\right) . \end{aligned}

Hence, it suffices to show that

\begin{aligned} \liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left( \frac{\sum _{g\in G\left( j\right) }\pi \left( g\right) }{\sum _{g\in G\left( 1\right) } \pi \left( g\right) }\right) \ge W\left( O_{j}\right) -W\left( O_{1}\right) . \end{aligned}

Observe that by definition for any $$j\in L$$ and $$g\in G\left( j\right)$$

\begin{aligned} \pi \left( g\right) = {\textstyle \prod _{\left( m\rightarrow n\right) \in g}} p_{mn}= {\textstyle \prod _{\left( m\rightarrow n\right) \in g}} e^{-\frac{1}{\varepsilon }V\left( O_{m},O_{n}\right) } =\exp \left\{ -\frac{1}{\varepsilon } {\textstyle \sum _{\left( m\rightarrow n\right) \in g}} V\left( O_{m},O_{n}\right) \right\} , \end{aligned}

which implies that

\begin{aligned}&\liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left( \frac{\sum _{g\in G\left( j\right) }\pi \left( g\right) }{\sum _{g\in G\left( 1\right) } \pi \left( g\right) }\right) \\&\quad \ge \min _{g\in G\left( j\right) }\left[ \liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left( \exp \left\{ -\frac{1}{\varepsilon } {\textstyle \sum _{\left( m\rightarrow n\right) \in g}} V\left( O_{m},O_{n}\right) \right\} \right) \right] \\&\qquad -\min _{g\in G\left( 1\right) }\left[ \limsup _{\varepsilon \rightarrow 0}-\varepsilon \log \left( \exp \left\{ -\frac{1}{\varepsilon } {\textstyle \sum _{\left( m\rightarrow n\right) \in g}} V\left( O_{m},O_{n}\right) \right\} \right) \right] \\&\quad =\min _{g\in G\left( j\right) }\left[ {\textstyle \sum _{\left( m\rightarrow n\right) \in g}} V\left( O_{m},O_{n}\right) \right] -\min _{g\in G\left( 1\right) }\left[ {\textstyle \sum _{\left( m\rightarrow n\right) \in g}} V\left( O_{m},O_{n}\right) \right] \\&\quad =W\left( O_{j}\right) -W\left( O_{1}\right) . \end{aligned}

The inequality is from Lemma 7.1; the last equality holds due to the definition of $$W\left( O_{j}\right)$$.

$$\square$$

Recall the definition of $$W(O_{1}\cup O_{j})$$ in (3.3). In the next result, we obtain bounds on, for example, a quantity close to the expected number of visits to $$B_{\delta }(O_{j})$$ before visiting a neighborhood of $$O_{1}$$, after starting near $$O_{j}$$.

### Lemma 7.20

For any $$\eta >0,$$ there exists $$\delta _{0}\in (0,1),$$ such that for any $$\delta \in (0,\delta _{0})$$

\begin{aligned} \liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left( \sum _{\ell =1} ^{\infty }\sup _{z\in \partial B_{\delta }(O_{1})}P_{z}\left( \ell \le N_{1}\right) \right) \ge -\min _{\ell \in L\setminus \{1\}}V\left( O_{1},O_{\ell }\right) -\eta \end{aligned}

and for any $$j\in L\setminus \{1\}$$

\begin{aligned} \liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left( \sum _{\ell =1} ^{\infty }\sup _{z\in \partial B_{\delta }(O_{j})}P_{z}\left( \ell \le N_{j}\right) \right) \ge W(O_{1}\cup O_{j})-W\left( O_{1}\right) -\eta . \end{aligned}

### Proof

We again use that by Lemma 3.17, for any $$\eta >0$$ there exist $$\delta _{0}\in (0,1)$$ and $$\varepsilon _{0}\in (0,1),$$ such that (7.10) holds for any $$\delta \in (0,\delta _{0})$$, $$\varepsilon \in (0,\varepsilon _{0})$$ and all $$x\in \partial B_{\delta }(O_{i}).$$ Then, by Lemma 7.6 with $$p_{ij}=e^{-\frac{1}{\varepsilon }V\left( O_{i},O_{j}\right) }$$ and $$a=e^{\frac{1}{\varepsilon }\eta /4^{l-1}}$$

\begin{aligned} \sum _{\ell =1}^{\infty }\sup _{x\in \partial B_{\delta }(O_{j})}P_{x}\left( N_{1}\ge \ell \right) \le \frac{e^{\frac{1}{\varepsilon }\eta }}{\sum _{\ell \in L\setminus \{1\}}e^{-\frac{1}{\varepsilon }V\left( O_{1},O_{\ell }\right) }} \end{aligned}

and for any $$j\in L\setminus \{1\}$$

\begin{aligned} \sum _{\ell =1}^{\infty }\sup _{x\in \partial B_{\delta }(O_{j})}P_{x}\left( N_{j}\ge \ell \right) \le e^{\frac{1}{\varepsilon }\eta }\frac{\sum _{g\in G\left( 1,j\right) }\pi \left( g\right) }{\sum _{g\in G\left( 1\right) }\pi \left( g\right) }. \end{aligned}

Thus,

\begin{aligned}&\liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left( \sum _{\ell =1} ^{\infty }\sup _{z\in \partial B_{\delta }(O_{1})}P_{z}\left( \ell \le N_{1}\right) \right) \\&\quad \ge -\limsup _{\varepsilon \rightarrow 0}-\varepsilon \log \left( \sum \nolimits _{\ell \in L\setminus \{1\}}e^{-\frac{1}{\varepsilon }V\left( O_{1},O_{\ell }\right) }\right) -\eta \end{aligned}

and

\begin{aligned}&\liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left( \sum _{\ell =1} ^{\infty }\sup _{z\in \partial B_{\delta }(O_{j})}P_{z}\left( \ell \le N_{j}\right) \right) \\&\quad \ge \liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left( \frac{\sum _{g\in G\left( 1,j\right) }\pi \left( g\right) }{\sum _{g\in G\left( 1\right) }\pi \left( g\right) }\right) -\eta . \end{aligned}

Following the same argument as for the proof of Lemma 7.19, we can use Lemma 7.1 to obtain that

\begin{aligned} -\limsup _{\varepsilon \rightarrow 0}-\varepsilon \log \left( \sum \nolimits _{\ell \in L\setminus \{1\}}e^{-\frac{1}{\varepsilon }V\left( O_{1},O_{\ell }\right) }\right) \ge -\min _{\ell \in L\setminus \{1\}}V\left( O_{1},O_{\ell }\right) \end{aligned}

and

\begin{aligned}&\liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left( \frac{\sum _{g\in G\left( 1,j\right) }\pi \left( g\right) }{\sum _{g\in G\left( 1\right) }\pi \left( g\right) }\right) \\&\qquad \ge \min _{g\in G\left( 1,j\right) }\left[ {\textstyle \sum _{\left( m\rightarrow n\right) \in g} }V\left( O_{m},O_{n}\right) \right] -\min _{g\in G\left( 1\right) }\left[ {\textstyle \sum _{\left( m\rightarrow n\right) \in g} }V\left( O_{m},O_{n}\right) \right] . \end{aligned}

Recalling (3.2) and (3.3), we are done. $$\square$$

As mentioned at the beginning of this subsection, our main goal is to provide lower bounds for

\begin{aligned} \liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left( \sup _{z\in \partial B_{\delta }(O_{1})}E_{z}\left( \int _{0}^{\tau _{1}^{\varepsilon }}e^{-\frac{1}{\varepsilon }f\left( X_{s}^{\varepsilon }\right) }1_{A}\left( X_{s}^{\varepsilon }\right) ds\right) \right) \end{aligned}

and

\begin{aligned} \liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left( \sup _{z\in \partial B_{\delta }(O_{1})}E_{z}\left( \int _{0}^{\tau _{1}^{\varepsilon }}e^{-\frac{1}{\varepsilon }f\left( X_{s}^{\varepsilon }\right) }1_{A}\left( X_{s}^{\varepsilon }\right) ds\right) ^{2}\right) \end{aligned}

for a given continuous function $$f:M\rightarrow {\mathbb {R}}$$ and compact set $$A\subset M.$$ We now state the main results of the subsection. Recall that $$h_{1}=\min _{\ell \in L\setminus \{1\}}V\left( O_{1},O_{\ell }\right)$$, $$S_{1}^{\varepsilon }\doteq \int _{0}^{\tau _{1}^{\varepsilon }}e^{-\frac{1}{\varepsilon }f\left( X_{s}^{\varepsilon }\right) }1_{A}\left( X_{s}^{\varepsilon }\right) ds$$ and $$W\left( O_{j}\right) \doteq \min _{g\in G\left( j\right) }[\sum _{\left( m\rightarrow n\right) \in g}V\left( O_{m},O_{n}\right) ]$$, as well as the definitions in (7.8).

### Lemma 7.21

Given a compact set $$A\subset M,$$ a continuous function $$f:M\rightarrow {\mathbb {R}}$$ and $$\eta >0,$$ there exists $$\delta _{0}\in (0,1),$$ such that for any $$\delta \in (0,\delta _{0})$$

\begin{aligned} \liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left[ \sup _{z\in \partial B_{\delta }(O_{1})}E_{z}S_{1}^{\varepsilon }\right]\ge & {} \min _{j\in L}\left\{ \inf _{x\in A}\left[ f\left( x\right) +V\left( O_{j},x\right) \right] +W\left( O_{j}\right) \right\} \\&\quad -W\left( O_{1}\right) -h_{1}-\eta . \end{aligned}

### Proof

Recall that by Lemma 7.18, we have shown that for the given $$\eta ,$$ there exists $$\delta _{1}\in (0,1),$$ such that for any $$\delta \in (0,\delta _{1})$$ and $$j\in L$$

\begin{aligned} \liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left( \sup _{z\in \partial B_{\delta }(O_{j})}E_{z}I^{\varepsilon }(\tau _{1};f,A)\right) \ge \inf _{x\in A}\left[ f\left( x\right) +V\left( O_{j},x\right) \right] -\frac{\eta }{2}. \end{aligned}

In addition, by Lemma 7.19, we know that for the same $$\eta ,$$ there exists $$\delta _{2}\in (0,1),$$ such that for any $$\delta \in (0,\delta _{2})$$

\begin{aligned} \liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left( \sup \nolimits _{z\in \partial B_{\delta }(O_{1})}E_{z}N_{j}\right) \ge -\min _{\ell \in L\setminus \{1\}}V\left( O_{1},O_{\ell }\right) +W\left( O_{j}\right) -W\left( O_{1}\right) -{\eta }/{2}. \end{aligned}

Hence, for any $$\delta \in (0,\delta _{0})$$ with $$\delta _{0}=\delta _{1} \wedge \delta _{2},$$ we apply Corollary 7.11 to get

\begin{aligned}&\liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left[ E_{x} I^{\varepsilon }(\tau _{1}^{\varepsilon };f,A)\right] \\&\quad \ge \min _{j\in L}\left\{ \liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left( \sup _{z\in \partial B_{\delta }(O_{j})}E_{z}I^{\varepsilon }(\tau _{1};f,A)\right) +\liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left( \sup _{z\in \partial B_{\delta }(O_{1} )}E_{z}\left( N_{j}\right) \right) \right\} \\&\quad \ge \min _{j\in L}\left\{ \inf _{x\in A}\left[ f\left( x\right) +V\left( O_{j},x\right) \right] +W\left( O_{j}\right) \right\} -W\left( O_{1}\right) -h_{1}-\eta , \end{aligned}

where $$\tau _{1}^{\varepsilon }$$ is the length of a regenerative cycle and $$\tau _{1}$$ is the first time of visiting a neighborhood of one of the equilibrium points after being a certain distance away from all of them. $$\square$$

### Remark 7.22

According to Remark 7.12 and using the same argument as in Lemma 7.21, we can find that given a compact set $$A\subset M,$$ a continuous function $$f:M\rightarrow {\mathbb {R}}$$ and $$\eta >0,$$ there exists $$\delta _{0}\in (0,1),$$ such that for any $$\delta \in (0,\delta _{0})$$

\begin{aligned}&\liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left[ \sup _{z\in \partial B_{\delta }(O_{1})}E_{z}I^{\varepsilon }(\sigma _{0}^{\varepsilon },\tau _{1}^{\varepsilon };f,A)\right] \\&\qquad \ge \min _{j\in L\setminus \{1\}}\left\{ \inf _{x\in A}\left[ f\left( x\right) +V\left( O_{j},x\right) \right] +W\left( O_{j}\right) \right\} -W\left( O_{1}\right) -h_{1}-\eta . \end{aligned}

### Lemma 7.23

Given a compact set $$A\subset M,$$ a continuous function $$f:M\rightarrow {\mathbb {R}}$$ and $$\eta >0,$$ there exists $$\delta _{0}\in (0,1),$$ such that for any $$\delta \in (0,\delta _{0})$$

\begin{aligned} \liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left[ \sup \nolimits _{z\in \partial B_{\delta }(O_{1})}E_{z}(S_{1}^{\varepsilon })^{2}\right] \ge \min _{j\in L}\left( R_{j}^{(1)}\wedge R_{j}^{(2)}\right) -h_{1}-\eta , \end{aligned}

where $$S_{1}^{\varepsilon }\doteq \int _{0}^{\tau _{1}^{\varepsilon }}e^{-\frac{1}{\varepsilon }f\left( X_{s}^{\varepsilon }\right) }1_{A}\left( X_{s}^{\varepsilon }\right) ds$$ and $$h_{1}=\min _{\ell \in L\setminus \{1\}}V\left( O_{1},O_{\ell }\right)$$, and

\begin{aligned}&R_{j}^{(1)}\doteq \inf _{x\in A}\left[ 2f\left( x\right) +V\left( O_{j},x\right) \right] +W\left( O_{j}\right) -W\left( O_{1}\right) \\&R_{1}^{(2)}\doteq 2\inf _{x\in A}\left[ f\left( x\right) +V\left( O_{1},x\right) \right] -h_{1} \end{aligned}

and for $$j\in L\setminus \{1\}$$

\begin{aligned} R_{j}^{(2)}\doteq 2\inf _{x\in A}\left[ f\left( x\right) +V\left( O_{j},x\right) \right] +W\left( O_{j}\right) -2W\left( O_{1}\right) +W(O_{1}\cup O_{j}). \end{aligned}

### Proof

Following a similar argument as for the proof of Lemma 7.21, given any $$\eta >0,$$ owing to Lemmas 7.18, 7.19 and 7.20, there exists $$\delta _{0}\in (0,1)$$ such that for any $$\delta \in (0,\delta _{0})$$ and for any $$j\in L$$

\begin{aligned}&\liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left( \sup _{z\in \partial B_{\delta }(O_{j})}E_{z}I^{\varepsilon }(\tau _{1};f,A)\right) \ge \inf _{x\in A}\left[ f\left( x\right) +V\left( O_{j},x\right) \right] -\frac{\eta }{4}, \\&\liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left( \sup _{z\in \partial B_{\delta }(O_{j})}E_{z}I^{\varepsilon }(\tau _{1};f,A)^{2}\right) \ge \inf _{x\in A}\left[ 2f\left( x\right) +V\left( O_{j},x\right) \right] -\frac{\eta }{4}, \\&\liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left( \sup _{z\in \partial B_{\delta }(O_{1})}E_{z}N_{j}\right) \ge -h_{1}+W\left( O_{j}\right) -W\left( O_{1}\right) -\frac{\eta }{4}, \\&\liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left( \sum _{\ell =1} ^{\infty }\sup _{z\in \partial B_{\delta }(O_{1})}P_{z}\left( \ell \le N_{1}\right) \right) \ge -h_{1}-\frac{\eta }{4}, \end{aligned}

and for any $$j\in L\setminus \{1\}$$,

\begin{aligned} \liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left( {\textstyle \sum _{\ell =1}^{\infty }} \sup _{z\in \partial B_{\delta }(O_{j})}P_{z}\left( \ell \le N_{j}\right) \right) \ge W(O_{1}\cup O_{j})-W\left( O_{1}\right) -\frac{\eta }{4}. \end{aligned}

Hence, for any $$\delta \in (0,\delta _{0})$$ we apply Corollary 7.11 to get

\begin{aligned} \liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left( \sup _{z\in \partial B_{\delta }(O_{1})}E_{z}\left( S_{1}^{\varepsilon }\right) ^{2}\right) \ge \min _{j\in L}\left( {\hat{R}}_{j}^{(1)}\wedge {\hat{R}}_{j}^{(2)}\right) , \end{aligned}

where

\begin{aligned} {\hat{R}}_{j}^{(1)}&\doteq \liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left( \sup _{z\in \partial B_{\delta }(O_{j})}E_{z}I^{\varepsilon }(\tau _{1};f,A)^{2}\right) +\liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left( \sup _{z\in \partial B_{\delta }(O_{1})}E_{z}N_{j}\right) \\&\ge \inf _{x\in A}\left[ 2f\left( x\right) +V\left( O_{j},x\right) \right] +W\left( O_{j}\right) -W\left( O_{1}\right) -h_{1}-\eta =R_{j}^{(1)}-h_{1}-\eta \end{aligned}

and

\begin{aligned} {\hat{R}}_{1}^{(2)}&\doteq 2\liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left( \sup _{z\in \partial B_{\delta }(O_{1})}E_{z}I^{\varepsilon }(\tau _{1};f,A)\right) \\&\quad +\liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left( \sup _{z\in \partial B_{\delta }(O_{1})}E_{z}N_{1}\right) +\liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left( {\textstyle \sum _{\ell =1}^{\infty }} \sup _{z\in \partial B_{\delta }(O_{1})}P_{z}\left( \ell \le N_{1}\right) \right) \\&\ge 2\left( \inf _{x\in A}\left[ f\left( x\right) +V\left( O_{1},x\right) \right] -\frac{\eta }{4}\right) +\left( -h_{1}-\frac{\eta }{4}\right) +\left( -h_{1}-\frac{\eta }{4}\right) \\&=2\inf _{x\in A}\left[ f\left( x\right) +V\left( O_{1},x\right) \right] -2h_{1}-\eta =R_{1}^{(2)}-h_{1}-\eta \end{aligned}

and for $$j\in L\setminus \{1\}$$

\begin{aligned} {\hat{R}}_{j}^{(2)}&\doteq 2\liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left( \sup _{z\in \partial B_{\delta }(O_{j})}E_{z}I^{\varepsilon }(\tau _{1};f,A)\right) \\&\quad +\liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left( \sup _{z\in \partial B_{\delta }(O_{1})}E_{z}N_{j}\right) +\liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left( {\textstyle \sum _{\ell =1}^{\infty }} \sup _{z\in \partial B_{\delta }(O_{j})}P_{z}\left( \ell \le N_{j}\right) \right) \\&\ge 2\left( \inf _{x\in A}\left[ f\left( x\right) +V\left( O_{j},x\right) \right] -\frac{\eta }{4}\right) +\left( -h_{1}+W\left( O_{j}\right) -W\left( O_{1}\right) -\frac{\eta }{4}\right) \\&\quad +\left( W(O_{1}\cup O_{j})-W\left( O_{1}\right) -\frac{\eta }{4}\right) \\&=2\inf _{x\in A}\left[ f\left( x\right) +V\left( O_{j},x\right) \right] +W\left( O_{j}\right) -2W\left( O_{1}\right) +W(O_{1}\cup O_{j})-h_{1}-\eta \\&=R_{j}^{(2)}-h_{1}-\eta . \end{aligned}

$$\square$$

### Asymptotics of Moments of $${\hat{S}}_{1}^{\varepsilon }$$

Recall that

\begin{aligned} {\hat{S}}_{n}^{\varepsilon }\doteq \int _{{\hat{\tau }}_{n-1}^{\varepsilon }}^{{\hat{\tau }}_{n}^{\varepsilon } }e^{-\frac{1}{\varepsilon }f\left( X_{t}^{\varepsilon }\right) }1_{A}\left( X_{t}^{\varepsilon }\right) dt, \end{aligned}

where $${\hat{\tau }}_{i}^{\varepsilon }$$ is a multicycle defined according to (6.4) and with $$\{{\mathbf {M}}^{\varepsilon }_i\}_{i\in {\mathbb {N}}}$$ being a sequence of independent and geometrically distributed random variables with parameter $$e^{-m/\varepsilon }$$ for some $$m>0$$ such that $$m+h_1>w$$. Moreover, $$\{{\mathbf {M}}^{\varepsilon }_i\}$$ is also independent of $$\{\tau ^{\varepsilon }_n\}$$. Using the independence of $$\{{\mathbf {M}}^{\varepsilon }_i\}$$ and $$\{\tau ^{\varepsilon }_n\}$$, and the fact that $$\{\tau ^{\varepsilon }_n\}$$ and $$\{S_{n}^{\varepsilon }\}$$ are both iid under $$P_{\lambda ^{\varepsilon }}$$, we find that $$\{{\hat{S}}_{n}^{\varepsilon }\}$$ is also iid under $$P_{\lambda ^{\varepsilon }}$$ and

\begin{aligned} E_{\lambda ^{\varepsilon }}{\hat{S}}_{1}^{\varepsilon } =E_{\lambda ^{\varepsilon }}{\mathbf {M}}^{\varepsilon }_1 \cdot E_{\lambda ^{\varepsilon }}S^\varepsilon _1 \end{aligned}
(7.11)

and

\begin{aligned} \mathrm {Var}_{\lambda ^{\varepsilon }}{\hat{S}}_{1}^{\varepsilon }&=E_{\lambda ^{\varepsilon }}{\mathbf {M}}^{\varepsilon }_1 \cdot \mathrm {Var}_{\lambda ^{\varepsilon }}(S^\varepsilon _1) +\mathrm {Var}_{\lambda ^{\varepsilon }}({\mathbf {M}}^{\varepsilon }_1) \cdot (E_{\lambda ^{\varepsilon }}S^\varepsilon _1)^2\nonumber \\&\le E_{\lambda ^{\varepsilon }}{\mathbf {M}}^{\varepsilon }_1 \cdot E_{\lambda ^{\varepsilon }}(S^\varepsilon _1)^2 +\mathrm {Var}_{\lambda ^{\varepsilon }}({\mathbf {M}}^{\varepsilon }_1) \cdot (E_{\lambda ^{\varepsilon }}S^\varepsilon _1)^2 \end{aligned}
(7.12)

On the other hand, since $${\mathbf {M}}^{\varepsilon }_1$$ is geometrically distributed with parameter $$e^{-m/\varepsilon }$$, we have

\begin{aligned} E_{\lambda ^{\varepsilon }}{\mathbf {M}}^{\varepsilon }_1=e^{\frac{m}{\varepsilon }} \text { and }\mathrm {Var}_{\lambda ^{\varepsilon }}({\mathbf {M}}^{\varepsilon }_1) = e^{\frac{2m}{\varepsilon }}(1-e^{\frac{-m}{\varepsilon }}). \end{aligned}
(7.13)
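The moment formulas (7.11)–(7.13) are standard identities for geometric random variables and independent random sums; the following illustrative sketch verifies them exactly on a toy discrete model (all numerical values below are made up for illustration):

```python
import math
from itertools import product

# (7.13): with p = exp(-m/eps), a geometric M on {1,2,...} satisfies
#   E[M] = 1/p  and  Var(M) = (1-p)/p**2,
# which matches exp(m/eps) and exp(2m/eps)*(1 - exp(-m/eps)).
p = 0.3
mean = sum(k * p * (1 - p) ** (k - 1) for k in range(1, 400))
second = sum(k ** 2 * p * (1 - p) ** (k - 1) for k in range(1, 400))
assert abs(mean - 1 / p) < 1e-9
assert abs(second - mean ** 2 - (1 - p) / p ** 2) < 1e-9

# (7.11)-(7.12): exact check of the random-sum identities on a tiny model:
# M in {1,2} and S_i in {1.0, 3.0} iid, with M independent of the S_i.
pm = {1: 0.6, 2: 0.4}          # law of M
ps = {1.0: 0.5, 3.0: 0.5}      # law of each S_i
dist = {}                      # law of S_1 + ... + S_M, by full enumeration
for mval, pmv in pm.items():
    for svals in product(ps, repeat=mval):
        tot = sum(svals)
        dist[tot] = dist.get(tot, 0.0) + pmv * math.prod(ps[s] for s in svals)
ES = sum(s * q for s, q in ps.items())
VarS = sum(s ** 2 * q for s, q in ps.items()) - ES ** 2
EM = sum(m * q for m, q in pm.items())
VarM = sum(m ** 2 * q for m, q in pm.items()) - EM ** 2
Ehat = sum(t * q for t, q in dist.items())
Varhat = sum(t ** 2 * q for t, q in dist.items()) - Ehat ** 2
assert abs(Ehat - EM * ES) < 1e-12                     # (7.11)
assert abs(Varhat - (EM * VarS + VarM * ES ** 2)) < 1e-12  # (7.12) with equality
```

Note that (7.12) is stated as an inequality only because $$\mathrm {Var}(S_1^{\varepsilon })\le E(S_1^{\varepsilon })^2$$; the underlying random-sum identity holds with equality.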

Therefore, by combining (7.11), (7.12) and (7.13) with Lemma 7.21 and Lemma 7.23, we have the following two lemmas.

### Lemma 7.24

Given a compact set $$A\subset M,$$ a continuous function $$f:M\rightarrow {\mathbb {R}}$$ and $$\eta >0,$$ there exists $$\delta _{0}\in (0,1),$$ such that for any $$\delta \in (0,\delta _{0})$$

\begin{aligned}&\liminf _{\varepsilon \rightarrow 0}-\varepsilon \log E_{\lambda ^{\varepsilon }}{\hat{S}}_{1}^{\varepsilon }\\&\quad \ge \min _{j\in L}\left\{ \inf _{x\in A}\left[ f\left( x\right) +V\left( O_{j},x\right) \right] +W\left( O_{j}\right) \right\} -W\left( O_{1}\right) -(m+h_1)-\eta . \end{aligned}

### Lemma 7.25

Given a compact set $$A\subset M,$$ a continuous function $$f:M\rightarrow {\mathbb {R}}$$ and $$\eta >0,$$ there exists $$\delta _{0}\in (0,1),$$ such that for any $$\delta \in (0,\delta _{0})$$

\begin{aligned} \liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \mathrm {Var}_{\lambda ^{\varepsilon }}( {\hat{S}}_{1}^{\varepsilon }) \ge \min _{j\in L}\left( R_{j}^{(1)}\wedge R_{j}^{(2)}\wedge R_{j}^{(3,m)}\right) -(m+h_1)-\eta , \end{aligned}

where $$R_{j}^{(1)}$$ and $$R_{j}^{(2)}$$ are defined as in Lemma 7.23, and

\begin{aligned} R_{j}^{(3,m)}\doteq 2\inf _{x\in A}\left[ f\left( x\right) +V\left( O_{j},x\right) \right] +2W\left( O_{j}\right) -2W\left( O_{1}\right) -(m+h_1). \end{aligned}

Later on, we will optimize over m to obtain the best possible lower bound. This will require that we first consider $$m>w-h_1$$, so that, as shown in the next section, $$N^{\varepsilon }(T^{\varepsilon })$$ can be suitably approximated in terms of a Poisson distribution, and then send $$m\downarrow w-h_1$$.

## Asymptotics of Moments of $$N^{\varepsilon }(T^{\varepsilon })$$ and $${\hat{N}}^{\varepsilon }(T^{\varepsilon })$$

Recall that the number of single cycles in the time interval $$[0,T^{\varepsilon }]$$ plus one is defined as

\begin{aligned} N^{\varepsilon }\left( T^{\varepsilon }\right) \doteq \inf \left\{ n\in {\mathbb {N}} :\tau _{n}^{\varepsilon }>T^{\varepsilon }\right\} , \end{aligned}

where the $$\tau _{n}^{\varepsilon }$$ are the return times to $$B_{\delta }(O_{1})$$ after a visit to the $$\delta$$-neighborhood of some equilibrium point other than $$O_{1}.$$ In addition, $$\lambda ^{\varepsilon }$$ is the unique invariant measure of $$\{Z_{n}^{\varepsilon }\}_{n}=\{X_{\tau _{n}^{\varepsilon } }^{\varepsilon }\}_{n}.$$ The number of multicycles in the time interval $$[0,T^{\varepsilon }]$$ plus one is defined as

\begin{aligned} {\hat{N}}^{\varepsilon }\left( T^\varepsilon \right) \doteq \inf \left\{ n\in {\mathbb {N}} :{\hat{\tau }}_{n}^{\varepsilon }>T^\varepsilon \right\} , \end{aligned}

where $${\hat{\tau }}^\varepsilon _i$$ are defined as in (6.4).

In this section, we will find the logarithmic asymptotics of the expected value and the variance of $$N^{\varepsilon }\left( T^{\varepsilon }\right)$$ with $$T^{\varepsilon }=e^{\frac{1}{\varepsilon }c}$$ for some $$c>h_1$$ in Lemmas 8.2 and 8.4 under the assumption that $$h_1>w$$ (i.e., single cycle case), and the analogous quantities for $${\hat{N}}^{\varepsilon }\left( T^\varepsilon \right)$$ with $$T^{\varepsilon }=e^{\frac{1}{\varepsilon }c}$$ for some $$c>w$$ in Lemmas 8.19 and 8.21 under the assumption that $$w\ge h_1$$ (i.e., multicycle case).

### Remark 8.1

While the proofs of these asymptotic results are quite detailed, it is essential that we obtain estimates good enough for a relatively precise comparison of the expected value and the variance of $$N^{\varepsilon }\left( T^{\varepsilon }\right)$$, and likewise for $${\hat{N}}^{\varepsilon }\left( T^\varepsilon \right)$$. For this, the key result needed is the characterization of $$N^{\varepsilon }\left( T^\varepsilon \right)$$ (and $${\hat{N}}^{\varepsilon }\left( T^\varepsilon \right)$$) as having an approximately Poisson distribution. These follow by exploiting the asymptotically exponential character of $$\tau _{n}^{\varepsilon }$$ (and $${\hat{\tau }}_{n}^{\varepsilon }$$), together with some uniform integrability properties.

Lemmas 8.2 and 8.4 below are proved in Sect. 8.3.

### Lemma 8.2

If $$h_1>w$$ and $$T^{\varepsilon }=e^{\frac{1}{\varepsilon }c}$$ for some $$c>h_1$$, then there exists $$\delta _{0}\in (0,1)$$ such that for any $$\delta \in (0,\delta _{0})$$

\begin{aligned} \liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left| \frac{E_{\lambda ^{\varepsilon }}\left( N^{\varepsilon }\left( T^{\varepsilon }\right) \right) }{T^{\varepsilon }}-\frac{1}{E_{\lambda ^{\varepsilon }} \tau _{1}^{\varepsilon }}\right| \ge c. \end{aligned}

### Corollary 8.3

If $$h_1>w$$ and $$T^{\varepsilon }=e^{\frac{1}{\varepsilon }c}$$ for some $$c>h_1$$, then there exists $$\delta _{0}\in (0,1)$$ such that for any $$\delta \in (0,\delta _{0})$$

\begin{aligned} \liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \frac{E_{\lambda ^{\varepsilon }}\left( N^{\varepsilon }\left( T^{\varepsilon }\right) \right) }{T^{\varepsilon }}\ge \varkappa _{\delta }, \end{aligned}

where $$\varkappa _{\delta }\doteq \min _{y\in \cup _{k\in L\setminus \{1\}}\partial B_{\delta }(O_{k})}V(O_{1},y)$$.

### Lemma 8.4

If $$h_1>w$$ and $$T^{\varepsilon }=e^{\frac{1}{\varepsilon }c}$$ for some $$c>h_1$$, then for any $$\eta >0,$$ there exists $$\delta _{0}\in (0,1)$$ such that for any $$\delta \in (0,\delta _{0})$$

\begin{aligned} \liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \frac{\mathrm {Var} _{\lambda ^{\varepsilon }}\left( N^{\varepsilon }\left( T^{\varepsilon }\right) \right) }{T^{\varepsilon }}\ge h_1-\eta . \end{aligned}

Before proceeding, we recall a result from [11] and define some notation that will be used in this section. Results in Sections 5 and 10 of [11, Chapter XI] show that for any $$t>0,$$ the first and second moments of $$N^{\varepsilon }\left( t\right)$$ can be represented as

\begin{aligned} E_{\lambda ^{\varepsilon }}\left( N^{\varepsilon }\left( t\right) \right) =\sum \nolimits _{n=0}^{\infty }P_{\lambda ^{\varepsilon }}\left( \tau _{n}^{\varepsilon }\le t\right) \text { and }E_{\lambda ^{\varepsilon }}\left( N^{\varepsilon }\left( t\right) \right) ^{2}=\sum \nolimits _{n=0}^{\infty }\left( 2n+1\right) P_{\lambda ^{\varepsilon }}\left( \tau _{n}^{\varepsilon }\le t\right) . \end{aligned}
(8.1)

Let $$\Gamma ^{\varepsilon }\doteq T^{\varepsilon }/E_{\lambda ^{\varepsilon }}\tau _{1}^{\varepsilon }$$ and $$\gamma ^{\varepsilon }\doteq \left( \Gamma ^{\varepsilon }\right) ^{-\ell }$$ for some $$\ell \in (0,1)$$ to be chosen later. Intuitively, $$\Gamma ^{\varepsilon }$$ is the typical number of regenerative cycles in $$[0,T^{\varepsilon }]$$, since $$E_{\lambda ^{\varepsilon }}\tau _{1}^{\varepsilon }$$ is the expected length of one regenerative cycle. To simplify notation, we pretend that $$\left( 1+2\gamma ^{\varepsilon }\right) \Gamma ^{\varepsilon }$$ and $$\left( 1-2\gamma ^{\varepsilon }\right) \Gamma ^{\varepsilon }$$ are positive integers, so that we can divide $$E_{\lambda ^{\varepsilon }}\left( N^{\varepsilon }\left( T^{\varepsilon }\right) \right)$$ into the three partial sums

\begin{aligned} {\mathfrak {P}}_{1}\doteq \sum \nolimits _{n=\left( 1+2\gamma ^{\varepsilon }\right) \Gamma ^{\varepsilon }+1}^{\infty }P_{\lambda ^{\varepsilon }}\left( \tau _{n}^{\varepsilon }\le T^{\varepsilon }\right) ,\text { }{\mathfrak {P}}_{2} \doteq \sum \nolimits _{n=\left( 1-2\gamma ^{\varepsilon }\right) \Gamma ^{\varepsilon } }^{\left( 1+2\gamma ^{\varepsilon }\right) \Gamma ^{\varepsilon }}\text { }P_{\lambda ^{\varepsilon }}\left( \tau _{n}^{\varepsilon }\le T^{\varepsilon }\right) \text { } \end{aligned}

and

\begin{aligned} {\mathfrak {P}}_{3}\doteq \sum \nolimits _{n=0}^{\left( 1-2\gamma ^{\varepsilon }\right) \Gamma ^{\varepsilon }-1}P_{\lambda ^{\varepsilon }}\left( \tau _{n}^{\varepsilon }\le T^{\varepsilon }\right) . \end{aligned}
(8.2)

Similarly, we divide $$E_{\lambda ^{\varepsilon }}\left( N^{\varepsilon }\left( T^{\varepsilon }\right) \right) ^{2}$$ into

\begin{aligned} {\mathfrak {R}}_{1}\doteq \sum _{n=\left( 1+2\gamma ^{\varepsilon }\right) \Gamma ^{\varepsilon }+1}^{\infty }\left( 2n+1\right) P_{\lambda ^{\varepsilon } }\left( \tau _{n}^{\varepsilon }\le T^{\varepsilon }\right) , \text { } {\mathfrak {R}}_{2}\doteq \sum _{n=\left( 1-2\gamma ^{\varepsilon }\right) \Gamma ^{\varepsilon }}^{\left( 1+2\gamma ^{\varepsilon }\right) \Gamma ^{\varepsilon }}\text { }\left( 2n+1\right) P_{\lambda ^{\varepsilon }}\left( \tau _{n}^{\varepsilon }\le T^{\varepsilon }\right) \text { } \end{aligned}

and

\begin{aligned} {\mathfrak {R}}_{3}\doteq \sum \nolimits _{n=0}^{\left( 1-2\gamma ^{\varepsilon }\right) \Gamma ^{\varepsilon }-1}\left( 2n+1\right) P_{\lambda ^{\varepsilon }}\left( \tau _{n}^{\varepsilon }\le T^{\varepsilon }\right) . \end{aligned}
(8.3)

The next step is to find upper bounds for these partial sums, which will in turn yield suitable lower bounds for the logarithmic asymptotics of $$E_{\lambda ^{\varepsilon }}\left( N^{\varepsilon }\left( T^{\varepsilon }\right) \right)$$ and $$\hbox {Var}_{\lambda ^{\varepsilon }}\left( N^{\varepsilon }\left( T^{\varepsilon }\right) \right)$$. Before deriving these upper bounds, we establish some preliminary properties.

### Theorem 8.5

If $$h_1>w$$, then for any $$\delta >0$$ sufficiently small,

\begin{aligned} \lim _{\varepsilon \rightarrow 0}\varepsilon \log E_{\lambda ^{\varepsilon }} \tau _{1}^{\varepsilon }=\varkappa _{\delta }\text { and }\tau _{1}^{\varepsilon }/E_{\lambda ^{\varepsilon }}\tau _{1}^{\varepsilon } \overset{d}{\rightarrow }\mathrm {Exp}(1). \end{aligned}

Moreover, there exists $$\varepsilon _{0}\in (0,1)$$ and a constant $${\tilde{c}}>0$$ such that

\begin{aligned} P_{\lambda ^{\varepsilon }}\left( \tau _{1}^{\varepsilon }/E_{\lambda ^{\varepsilon }}\tau _{1}^{\varepsilon }>t\right) \le e^{-{\tilde{c}}t} \end{aligned}

for any $$t>0$$ and any $$\varepsilon \in (0,\varepsilon _{0}).$$

### Remark 8.6

For any $$\delta >0,$$ $$\varkappa _{\delta }\le h_1.$$

The proof of Theorem 8.5 will be given in Section 10. In that section, we will first prove an analogous result for the exit time (more precisely, the first visiting time to the other equilibrium points) and then show how one can extend those results to the return time. The proof of the following lemma is straightforward and hence omitted.

### Lemma 8.7

If $$h_1>w$$ and $$T^{\varepsilon }=e^{\frac{1}{\varepsilon }c}$$ for some $$c>h_1$$, then for any $$\eta >0,$$ there exists $$\delta _{0}\in (0,1)$$ such that for any $$\delta \in (0,\delta _{0})$$,

\begin{aligned} h_1-\eta \ge \lim _{\varepsilon \rightarrow 0}-\varepsilon \log \Gamma ^{\varepsilon }\ge h_1-c-\eta . \end{aligned}

### Lemma 8.8

Define $${\mathcal {Z}}_{1}^{\varepsilon }\doteq \tau _{1}^{\varepsilon }/E_{\lambda ^{\varepsilon }}\tau _{1}^{\varepsilon }.$$ Then, for any $$\delta >0$$ sufficiently small,

• there exists some $$\varepsilon _{0}\in (0,1)$$ such that $$\sup _{\varepsilon \in (0,\varepsilon _{0})}E_{\lambda ^{\varepsilon }}\left( {\mathcal {Z}}_{1}^{\varepsilon }\right) ^{3}<\infty ,$$

• there exists some $$\varepsilon _{0}\in (0,1)$$ such that $$\inf _{\varepsilon \in (0,\varepsilon _{0})}\mathrm {Var}_{\lambda ^{\varepsilon }} ({\mathcal {Z}}_{1}^{\varepsilon })>0$$ and $$E_{\lambda ^{\varepsilon }}\left( {\mathcal {Z}}_{1}^{\varepsilon }\right) ^{2}=E_{\lambda ^{\varepsilon }}\left( \tau _{1}^{\varepsilon }\right) ^{2}/\left( E_{\lambda ^{\varepsilon }}\tau _{1}^{\varepsilon }\right) ^{2}\rightarrow 2$$ as $$\varepsilon \rightarrow 0.$$

### Proof

For the first part, we use Theorem 8.5 to find that there exists $$\varepsilon _{0}\in (0,1)$$ and a constant $${\tilde{c}}>0$$ such that

\begin{aligned} P_{\lambda ^{\varepsilon }}\left( {\mathcal {Z}}_{1}^{\varepsilon }>t\right) =P_{\lambda ^{\varepsilon }}\left( \tau _{1}^{\varepsilon }/E_{\lambda ^{\varepsilon }}\tau _{1}^{\varepsilon }>t\right) \le e^{-{\tilde{c}}t} \end{aligned}

for any $$t>0$$ and any $$\varepsilon \in (0,\varepsilon _{0}).$$ Therefore, for $$\varepsilon \in (0,\varepsilon _{0})$$

\begin{aligned} E_{\lambda ^{\varepsilon }}\left( {\mathcal {Z}}_{1}^{\varepsilon }\right) ^{3}=3\int _{0}^{\infty }t^{2}P_{\lambda ^{\varepsilon }}({\mathcal {Z}} _{1}^{\varepsilon }>t)dt\le 3\int _{0}^{\infty }t^{2}e^{-{\tilde{c}}t}dt<\infty . \end{aligned}

For the second assertion, the bound $$\sup _{0<\varepsilon<\varepsilon _{0} }E_{\lambda ^{\varepsilon }}\left( {\mathcal {Z}}_{1}^{\varepsilon }\right) ^{3}<\infty$$ implies that $$\{\left( {\mathcal {Z}}_{1}^{\varepsilon }\right) ^{2}\}_{0<\varepsilon <\varepsilon _{0}}$$ and $$\{{\mathcal {Z}}_{1}^{\varepsilon }\}_{0<\varepsilon <\varepsilon _{0}}$$ are both uniformly integrable. Moreover, because $${\mathcal {Z}}_{1}^{\varepsilon }\overset{d}{\rightarrow }$$ Exp(1) as $$\varepsilon \rightarrow 0$$ by Theorem 8.5, and since $$EX=1$$ and $$EX^{2}=2$$ for $$X\overset{d}{=}$$ Exp(1), we obtain

\begin{aligned} E_{\lambda ^{\varepsilon }}\left( \tau _{1}^{\varepsilon }/E_{\lambda ^{\varepsilon }}\tau _{1}^{\varepsilon }\right) ^{2}=E_{\lambda ^{\varepsilon } }\left( {\mathcal {Z}}_{1}^{\varepsilon }\right) ^{2}\rightarrow 2\text { and }E_{\lambda ^{\varepsilon }}{\mathcal {Z}}_{1}^{\varepsilon }\rightarrow 1. \end{aligned}

as $$\varepsilon \rightarrow 0.$$ This implies $$\hbox {Var}_{\lambda ^{\varepsilon } }({\mathcal {Z}}_{1}^{\varepsilon })\rightarrow 1$$ as $$\varepsilon \rightarrow 0,$$ and hence there exists some $$\varepsilon _{0}\in (0,1)$$ such that $$\inf \nolimits _{\varepsilon \in (0,\varepsilon _{0})}\mathrm {Var}_{\lambda ^{\varepsilon } }({\mathcal {Z}}_{1}^{\varepsilon })\ge 1/2>0.$$ This completes the proof. $$\square$$
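The tail-integral identity used above, $$E\left( {\mathcal {Z}}\right) ^{3}=3\int _{0}^{\infty }t^{2}P({\mathcal {Z}}>t)dt$$, can be checked numerically for the limiting Exp(1) distribution, where $$P({\mathcal {Z}}>t)=e^{-t}$$ and the exact third moment is $$3!=6$$ (an illustrative sketch, not tied to the diffusion itself):

```python
import math

# Midpoint-rule evaluation of 3 * ∫_0^∞ t^2 P(Z > t) dt for Z ~ Exp(1),
# where P(Z > t) = e^{-t}; the exact third moment is 3! = 6.
steps, upper = 200_000, 50.0  # the integrand beyond t = 50 is negligible
h = upper / steps
third_moment = 3.0 * sum(((k + 0.5) * h) ** 2 * math.exp(-(k + 0.5) * h) * h
                         for k in range(steps))
print(third_moment)  # ≈ 6.0
```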

### Remark 8.9

Throughout the rest of this section, we will use $$C$$ to denote a constant in $$(0,\infty )$$ which is independent of $$\varepsilon$$ but whose value may change from use to use.

### Chernoff Bound

In this subsection, we will provide upper bounds for

\begin{aligned} {\mathfrak {P}}_{1}\doteq \sum _{n=\left( 1+2\gamma ^{\varepsilon }\right) \Gamma ^{\varepsilon }+1}^{\infty }P_{\lambda ^{\varepsilon }}\left( \tau _{n}^{\varepsilon }\le T^{\varepsilon }\right) \text { and } {\mathfrak {R}}_{1}\doteq \sum _{n=\left( 1+2\gamma ^{\varepsilon }\right) \Gamma ^{\varepsilon }+1}^{\infty }\left( 2n+1\right) P_{\lambda ^{\varepsilon } }\left( \tau _{n}^{\varepsilon }\le T^{\varepsilon }\right) \end{aligned}

via a Chernoff bound. The following result is well known, and its proof is standard.

### Lemma 8.10

(Chernoff bound) Let $$X_{1},\ldots ,X_{n}$$ be an iid sequence of random variables. For any $$a\in {\mathbb {R}}$$ and for any $$t\in (0,\infty )$$

\begin{aligned} P\left( X_{1}+\cdots +X_{n}\le a\right) \le \left( Ee^{-tX_{1}}\right) ^{n}e^{ta}. \end{aligned}
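To illustrate Lemma 8.10, here is a small numerical check with iid Exp(1) variables (a hypothetical distribution, unrelated to the cycle times), for which $$Ee^{-tX_{1}}=1/(1+t)$$ and the exact probability of the sum is an Erlang CDF. With $$a=n/2$$, the choice $$t=1$$ minimizes the bound:

```python
import math

def erlang_cdf(n, t):
    """Exact P(X_1 + ... + X_n <= t) for iid Exp(1) variables."""
    return 1.0 - math.exp(-t) * sum(t ** j / math.factorial(j) for j in range(n))

def chernoff_bound(n, a, t):
    # P(X_1 + ... + X_n <= a) <= (E e^{-t X_1})^n e^{t a};  E e^{-t X_1} = 1/(1+t) for Exp(1)
    return (1.0 / (1.0 + t)) ** n * math.exp(t * a)

n, a = 50, 25.0
exact = erlang_cdf(n, a)
bound = chernoff_bound(n, a, t=1.0)  # t = 1 minimizes the bound when a = n/2
print(exact <= bound)  # True
```

The bound is already exponentially small here ($$2^{-50}e^{25}\approx 6\times 10^{-5}$$), which is the mechanism behind Lemma 8.11 below.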

Recall that $$\Gamma ^{\varepsilon }\doteq T^{\varepsilon }/E_{\lambda ^{\varepsilon }}\tau _{1}^{\varepsilon }$$ and $$\gamma ^{\varepsilon }\doteq \left( \Gamma ^{\varepsilon }\right) ^{-\ell }$$ with some $$\ell \in (0,1)$$ which will be chosen later.

### Lemma 8.11

Given any $$\delta >0$$ and any $$\ell >0,$$ there exists $$\varepsilon _{0}\in (0,1)$$ such that for any $$\varepsilon \in (0,\varepsilon _{0})$$

\begin{aligned} P_{\lambda ^{\varepsilon }}\left( \tau _{n}^{\varepsilon }\le T^{\varepsilon }\right) \le e^{-n\left( \Gamma ^{\varepsilon }\right) ^{-2\ell }} \end{aligned}

for any $$n\ge \left( 1+2\gamma ^{\varepsilon }\right) \Gamma ^{\varepsilon }.$$ In addition,

\begin{aligned} {\mathfrak {P}}_{1} \le C\left( \Gamma ^{\varepsilon }\right) ^{2\ell }e^{-\left( \Gamma ^{\varepsilon }\right) ^{1-2\ell }} \text { and } {\mathfrak {R}}_{1} \le C\left( \Gamma ^{\varepsilon }\right) ^{1+2\ell }e^{-\left( \Gamma ^{\varepsilon }\right) ^{1-2\ell }}+C\left( \Gamma ^{\varepsilon }\right) ^{4\ell }e^{-\left( \Gamma ^{\varepsilon }\right) ^{1-2\ell }}. \end{aligned}

### Proof

Given $$\delta >0$$, $$\ell >0$$ and $$\varepsilon \in (0,1),$$ we find that for $$n\ge \left( 1+2\gamma ^{\varepsilon }\right) \Gamma ^{\varepsilon }$$

\begin{aligned} P_{\lambda ^{\varepsilon }}\left( \tau _{n}^{\varepsilon }\le T^{\varepsilon }\right)&=P_{\lambda ^{\varepsilon }}\left( \frac{\tau _{1}^{\varepsilon }+\left( \tau _{2}^{\varepsilon }-\tau _{1}^{\varepsilon }\right) +\cdots +\left( \tau _{n}^{\varepsilon }-\tau _{n-1}^{\varepsilon }\right) }{E_{\lambda ^{\varepsilon }}\tau _{1}^{\varepsilon }}\le \Gamma ^{\varepsilon }\right) \\&\le P_{\lambda ^{\varepsilon }}\left( \frac{\tau _{1}^{\varepsilon }+\left( \tau _{2}^{\varepsilon }-\tau _{1}^{\varepsilon }\right) +\cdots +\left( \tau _{n}^{\varepsilon }-\tau _{n-1}^{\varepsilon }\right) }{E_{\lambda ^{\varepsilon }}\tau _{1}^{\varepsilon }}\le \frac{n}{1+2\gamma ^{\varepsilon }}\right) \\&\le \left( E_{\lambda ^{\varepsilon }}e^{-\gamma ^{\varepsilon }{\mathcal {Z}}_{1}^{\varepsilon }}\right) ^{n}e^{\frac{n\gamma ^{\varepsilon } }{1+2\gamma ^{\varepsilon }}}, \end{aligned}

where $${\mathcal {Z}}_{1}^{\varepsilon }=\tau _{1}^{\varepsilon }/E_{\lambda ^{\varepsilon }}\tau _{1}^{\varepsilon }.$$ We use the fact that $$\{\tau _{n}^{\varepsilon }-\tau _{n-1}^{\varepsilon }\}_{n\in {\mathbb {N}} }$$ are iid and apply Lemma 8.10 (Chernoff bound) with $$a=n/\left( 1+2\gamma ^{\varepsilon }\right)$$ and $$t=\gamma ^{\varepsilon }$$ for the last inequality. Therefore, in order to verify the first claim, it suffices to show that

\begin{aligned} \left( E_{\lambda ^{\varepsilon }}e^{-\gamma ^{\varepsilon }{\mathcal {Z}} _{1}^{\varepsilon }}\right) e^{\frac{\gamma ^{\varepsilon }}{1+2\gamma ^{\varepsilon }}}\le e^{-\left( \gamma ^{\varepsilon }\right) ^{2} }=e^{-\left( \Gamma ^{\varepsilon }\right) ^{-2\ell }}. \end{aligned}

We observe that for any $$x\ge 0,$$ $$e^{-x}\le 1-x+x^{2}/2,$$ and this gives

\begin{aligned} E_{\lambda ^{\varepsilon }}e^{-\gamma ^{\varepsilon }{\mathcal {Z}}_{1}^{\varepsilon }} \le 1-E_{\lambda ^{\varepsilon }}\left( \gamma ^{\varepsilon } {\mathcal {Z}}_{1}^{\varepsilon }\right) +E_{\lambda ^{\varepsilon } }\left( \gamma ^{\varepsilon }{\mathcal {Z}}_{1}^{\varepsilon }\right) ^{2}/2 =1-\gamma ^{\varepsilon }+\left( \gamma ^{\varepsilon }\right) ^{2}E_{\lambda ^{\varepsilon }}\left( {\mathcal {Z}}_{1}^{\varepsilon }\right) ^{2}/2. \end{aligned}

Moreover, since we can apply Lemma 8.8 to find $$E_{\lambda ^{\varepsilon }}\left( {\mathcal {Z}}_{1}^{\varepsilon }\right) ^{2}\rightarrow 2$$ as $$\varepsilon \rightarrow 0,$$ there exists $$\varepsilon _{0}\in (0,1)$$ such that for any $$\varepsilon \in (0,\varepsilon _{0})$$, $$E_{\lambda ^{\varepsilon }}\left( {\mathcal {Z}}_{1}^{\varepsilon }\right) ^{2}\le 9/4.$$ Thus, for any $$\varepsilon \in (0,\varepsilon _{0})$$

\begin{aligned} \left( E_{\lambda ^{\varepsilon }}e^{-\gamma ^{\varepsilon }{\mathcal {Z}} _{1}^{\varepsilon }}\right) e^{\frac{\gamma ^{\varepsilon }}{1+2\gamma ^{\varepsilon }}}\le \exp \left\{ \gamma ^{\varepsilon }/(1+2\gamma ^{\varepsilon })+\log ( 1-\gamma ^{\varepsilon }+(9/8)\left( \gamma ^{\varepsilon }\right) ^{2}) \right\} . \end{aligned}

Using a Taylor series expansion, we find that for all $$\left| x\right| <1$$

\begin{aligned} 1/(1+x)=1-x+O\left( x^{2}\right) \text { and }\log \left( 1+x\right) =x-x^{2}/2+O\left( x^{3}\right) , \end{aligned}

which gives

\begin{aligned}&\gamma ^{\varepsilon }/(1+2\gamma ^{\varepsilon })+\log ( 1-\gamma ^{\varepsilon }+(9/8)\left( \gamma ^{\varepsilon }\right) ^{2}) \\&\quad =\gamma ^{\varepsilon }-2\left( \gamma ^{\varepsilon }\right) ^{2}+\left[ -\gamma ^{\varepsilon }+(9/8)\left( \gamma ^{\varepsilon }\right) ^{2}\right] -\left[ -\gamma ^{\varepsilon }+(9/8)\left( \gamma ^{\varepsilon }\right) ^{2}\right] ^{2}/2+O( ( \gamma ^{\varepsilon }) ^{3}) \\&\quad =-(11/8)\left( \gamma ^{\varepsilon }\right) ^{2}+O( \left( \gamma ^{\varepsilon }\right) ^{3}) \le -\left( \gamma ^{\varepsilon }\right) ^{2}, \end{aligned}

for all $$\varepsilon \in (0,\varepsilon _{0})$$. This completes the proof of part 1.
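The cancellation down to $$-(11/8)\left( \gamma ^{\varepsilon }\right) ^{2}$$ can be confirmed numerically by evaluating the left-hand side for small values of $$\gamma ^{\varepsilon }$$ (here a plain float `g`):

```python
import math

# Check that g/(1+2g) + log(1 - g + (9/8) g^2) = -(11/8) g^2 + O(g^3):
# the ratio to g^2 should approach -11/8 = -1.375 as g -> 0.
ratios = []
for g in [1e-2, 1e-3, 1e-4]:
    lhs = g / (1 + 2 * g) + math.log(1 - g + 1.125 * g * g)
    ratios.append(lhs / (g * g))
print(ratios)  # → approaches -1.375
```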

For part 2, we use the estimate from part 1 and find

\begin{aligned} \sum _{n=\left( 1+2\gamma ^{\varepsilon }\right) \Gamma ^{\varepsilon }+1} ^{\infty }P_{\lambda ^{\varepsilon }}\left( \tau _{n}^{\varepsilon }\le T^{\varepsilon }\right) \le \sum _{n=\left( 1+2\gamma ^{\varepsilon }\right) \Gamma ^{\varepsilon }+1}^{\infty }e^{-n\left( \gamma ^{\varepsilon }\right) ^{2}}\le \frac{e^{-\left( 1+2\gamma ^{\varepsilon }\right) \Gamma ^{\varepsilon }\left( \gamma ^{\varepsilon }\right) ^{2}} }{1-e^{-\left( \gamma ^{\varepsilon }\right) ^{2}}}. \end{aligned}

Since $$e^{-x}\le 1-x+x^{2}/2$$ for any $$x\ge 0,$$ we have $$1-e^{-x}\ge x-x^{2}/2\ge x-x/2=x/2$$ for all $$x\in (0,1),$$ and thus $$1/(1-e^{-x})\le 2/x$$ for all $$x\in (0,1).$$ As a result,

\begin{aligned} \sum _{n=\left( 1+2\gamma ^{\varepsilon }\right) \Gamma ^{\varepsilon }+1} ^{\infty }P_{\lambda ^{\varepsilon }}\left( \tau _{n}^{\varepsilon }\le T^{\varepsilon }\right)&\le \frac{e^{-\left( 1+2\gamma ^{\varepsilon }\right) \Gamma ^{\varepsilon }\left( \gamma ^{\varepsilon }\right) ^{2}}}{1-e^{-\left( \gamma ^{\varepsilon }\right) ^{2}}}\le \frac{2}{\left( \gamma ^{\varepsilon }\right) ^{2}}e^{-\left( 1+2\gamma ^{\varepsilon }\right) \Gamma ^{\varepsilon }\left( \gamma ^{\varepsilon }\right) ^{2}}\\&\le 2\left( \Gamma ^{\varepsilon }\right) ^{2\ell }e^{-\left( \Gamma ^{\varepsilon }\right) ^{1-2\ell }}. \end{aligned}

This completes the proof of part 2.

Finally, for part 3, we use the fact that for $$x\in (0,1),$$ and for any $$k\in {\mathbb {N}} ,$$

\begin{aligned} \sum \nolimits _{n=k}^{\infty }nx^{n}=k x^{k}(1-x)^{-1}+x^{k+1}( 1-x)^{-2}\le (k(1-x)^{-1} + (1-x)^{-2}) x^{k}. \end{aligned}
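This series identity and the bound that follows it are easy to verify numerically (an illustrative check with arbitrary values of $$x$$ and $$k$$):

```python
# Verify sum_{n>=k} n x^n = k x^k/(1-x) + x^{k+1}/(1-x)^2 and the stated upper bound.
x, k = 0.9, 10
direct = sum(n * x ** n for n in range(k, 2000))  # tail beyond n = 2000 is negligible
closed = k * x ** k / (1 - x) + x ** (k + 1) / (1 - x) ** 2
bound = (k / (1 - x) + 1 / (1 - x) ** 2) * x ** k  # uses x^{k+1} <= x^k
print(abs(direct - closed) < 1e-9, closed <= bound)  # True True
```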

Using the estimate from part 1 once again, we have

\begin{aligned} \sum _{n=\left( 1+2\gamma ^{\varepsilon }\right) \Gamma ^{\varepsilon } +1}^{\infty }nP_{\lambda ^{\varepsilon }}\left( \tau _{n}^{\varepsilon }\le T^{\varepsilon }\right)&\le \sum _{n=\left( 1+2\gamma ^{\varepsilon }\right) \Gamma ^{\varepsilon }}^{\infty }ne^{-n\left( \gamma ^{\varepsilon }\right) ^{2}}\\&\le \left( \frac{\left( 1+2\gamma ^{\varepsilon }\right) \Gamma ^{\varepsilon }}{1-e^{-\left( \gamma ^{\varepsilon }\right) ^{2}}}+\left( 1-e^{-\left( \gamma ^{\varepsilon }\right) ^{2}}\right) ^{-2}\right) e^{-\left( 1+2\gamma ^{\varepsilon }\right) \Gamma ^{\varepsilon }\left( \gamma ^{\varepsilon }\right) ^{2}}\\&\le \left( 4\left( \Gamma ^{\varepsilon }\right) ^{1+2\ell }+4\left( \Gamma ^{\varepsilon }\right) ^{4\ell }\right) e^{-\left( \Gamma ^{\varepsilon }\right) ^{1-2\ell }}. \end{aligned}

We are done. $$\square$$

### Remark 8.12

If $$0<\ell <1/2,$$ then $${\mathfrak {P}}_{1}$$ and $${\mathfrak {R}}_{1}$$ converge to 0 doubly exponentially fast as $$\varepsilon \rightarrow 0$$ in the sense that for any $$k\in (0,\infty )$$

\begin{aligned} \liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left[ \left( \Gamma ^{\varepsilon }\right) ^{k}e^{-\left( \Gamma ^{\varepsilon }\right) ^{1-2\ell }}\right] =\infty . \end{aligned}

### Berry–Esseen Bound

In this subsection, we will provide upper bounds for

\begin{aligned} {\mathfrak {P}}_{2}\doteq \sum _{n=\left( 1-2\gamma ^{\varepsilon }\right) \Gamma ^{\varepsilon }}^{\left( 1+2\gamma ^{\varepsilon }\right) \Gamma ^{\varepsilon }}P_{\lambda ^{\varepsilon }}\left( \tau _{n}^{\varepsilon }\le T^{\varepsilon }\right) \text { and } {\mathfrak {R}}_{2}\doteq \sum _{n=\left( 1-2\gamma ^{\varepsilon }\right) \Gamma ^{\varepsilon }}^{\left( 1+2\gamma ^{\varepsilon }\right) \Gamma ^{\varepsilon }}\left( 2n+1\right) P_{\lambda ^{\varepsilon }}\left( \tau _{n}^{\varepsilon }\le T^{\varepsilon }\right) \end{aligned}

via the Berry–Esseen bound.

We first recall that $$\Gamma ^{\varepsilon }=T^{\varepsilon }/E_{\lambda ^{\varepsilon }}\tau _{1}^{\varepsilon }$$. The following is Theorem 1 in [11, Chapter XVI.5].

### Theorem 8.13

(Berry–Esseen) Let $$\left\{ X_{n}\right\} _{n\in {\mathbb {N}} }$$ be independent real-valued random variables with a common distribution such that

\begin{aligned} E\left( X_{1}\right) =0,\text { }\sigma ^{2}\doteq E\left( X_{1}\right) ^{2}>0,\text { }\rho \doteq E \left| X_{1}\right| ^{3} <\infty . \end{aligned}

Then, for all $$x\in {\mathbb {R}}$$ and $$n\in {\mathbb {N}} ,$$

\begin{aligned} \left| P\left( \frac{X_{1}+\cdots +X_{n}}{\sigma \sqrt{n}}\le x\right) -\Phi \left( x\right) \right| \le \frac{3\rho }{\sigma ^{3}\sqrt{n}}, \end{aligned}

where $$\Phi \left( \cdot \right)$$ is the distribution function of $$N\left( 0,1\right) .$$
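As an illustration of Theorem 8.13 (with a hypothetical distribution, unrelated to the cycle times), take $$X_{i}=Y_{i}-1$$ with $$Y_{i}$$ iid Exp(1), so that $$\sigma ^{2}=1$$, $$\rho =E\left| Y_{1}-1\right| ^{3}=12/e-2$$ by a direct computation, and the distribution of the partial sums is an exact Erlang CDF:

```python
import math

def erlang_cdf(n, t):
    """P(Y_1 + ... + Y_n <= t) for iid Exp(1) variables Y_i."""
    return 1.0 - math.exp(-t) * sum(t ** j / math.factorial(j) for j in range(n))

def std_normal_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

n = 100
rho = 12.0 / math.e - 2.0          # E|Y - 1|^3 for Y ~ Exp(1)
be_bound = 3.0 * rho / math.sqrt(n)  # Berry-Esseen bound with sigma = 1
# P((X_1+...+X_n)/sqrt(n) <= x) = P(Y_1+...+Y_n <= n + x sqrt(n)); compare with Phi(x).
worst = max(abs(erlang_cdf(n, n + x * math.sqrt(n)) - std_normal_cdf(x))
            for x in [i / 10.0 - 3.0 for i in range(61)])
print(worst <= be_bound)  # True
```

The actual discrepancy here is much smaller than the bound, which is of course one-sided: the theorem only guarantees an upper estimate uniform in $$x$$.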

### Corollary 8.14

For any $$\varepsilon >0,$$ let $$\left\{ X_{n}^{\varepsilon }\right\} _{n\in {\mathbb {N}} }$$ be independent real-valued random variables with a common distribution such that

\begin{aligned} E\left( X_{1}^{\varepsilon }\right) =0,\text { }\left( \sigma ^{\varepsilon }\right) ^{2}\doteq E\left( X_{1}^{\varepsilon }\right) ^{2}>0,\text { } \rho ^{\varepsilon }\doteq E \left| X_{1}^{\varepsilon }\right| ^{3} <\infty . \end{aligned}

Assume that there exists $$\varepsilon _{0}\in (0,1)$$ such that

\begin{aligned} {\hat{\rho }}\text { }\doteq \sup \nolimits _{\varepsilon \in (0,\varepsilon _{0})} \rho ^{\varepsilon }<\infty \text { and }{\hat{\sigma }}^{2}\doteq \inf \nolimits _{\varepsilon \in (0,\varepsilon _{0})}\left( \sigma ^{\varepsilon }\right) ^{2}>0. \end{aligned}

Then for all $$x\in {\mathbb {R}} ,n\in {\mathbb {N}}$$ and $$\varepsilon \in (0,\varepsilon _{0}),$$

\begin{aligned} \left| P\left( \frac{X_{1}^{\varepsilon }+\cdots +X_{n}^{\varepsilon } }{\sigma ^{\varepsilon }\sqrt{n}}\le x\right) -\Phi \left( x\right) \right| \le \frac{3\rho ^{\varepsilon }}{\left( \sigma ^{\varepsilon }\right) ^{3}\sqrt{n}}\le \frac{3{\hat{\rho }}}{{\hat{\sigma }}^{3}\sqrt{n}}. \end{aligned}

### Lemma 8.15

Given any $$\delta >0$$ and any $$\ell >0,$$ there exists $$\varepsilon _{0}\in (0,1)$$ such that for any $$\varepsilon \in (0,\varepsilon _{0})$$ and any $$k\in {\mathbb {N}} _{0}$$ with $$0\le k\le 2\gamma ^{\varepsilon }\Gamma ^{\varepsilon }$$

\begin{aligned} P_{\lambda ^{\varepsilon }}\left( \tau _{\Gamma ^{\varepsilon }+k}^{\varepsilon }\le T^{\varepsilon }\right) \le 1-\Phi \left( \frac{k}{\sigma ^{\varepsilon }\sqrt{\Gamma ^{\varepsilon }+k}}\right) +\frac{3{\hat{\rho }}}{{\hat{\sigma }} ^{3}\sqrt{\Gamma ^{\varepsilon }+k}} \end{aligned}

and

\begin{aligned} P_{\lambda ^{\varepsilon }}\left( \tau _{\Gamma ^{\varepsilon }-k}^{\varepsilon }\le T^{\varepsilon }\right) \le \Phi \left( \frac{k}{\sigma ^{\varepsilon }\sqrt{\Gamma ^{\varepsilon }-k}}\right) +\frac{3{\hat{\rho }}}{{\hat{\sigma }} ^{3}\sqrt{\Gamma ^{\varepsilon }-k}}, \end{aligned}

where $$(\sigma ^{\varepsilon })^{2}\doteq E_{\lambda ^{\varepsilon }}\left( {\mathfrak {X}}_{1}^{\varepsilon }\right) ^{2},$$ $${\hat{\rho }} \doteq \sup _{\varepsilon \in (0,\varepsilon _{0})}E_{\lambda ^{\varepsilon }} \left| {\mathfrak {X}} _{1}^{\varepsilon }\right| ^{3} <\infty$$ and $${\hat{\sigma }}^{2} \doteq \inf _{\varepsilon \in (0,\varepsilon _{0})}(\sigma ^{\varepsilon })^{2}>0$$ with $${\mathfrak {X}}_{1}^{\varepsilon }\doteq \tau _{1}^{\varepsilon }/E_{\lambda ^{\varepsilon }}\tau _{1}^{\varepsilon }-1.$$

### Proof

For any $$n\in {\mathbb {N}} ,$$ we define $${\mathfrak {X}}_{n}^{\varepsilon }\doteq {\mathcal {Z}}_{n}^{\varepsilon }-E_{\lambda ^{\varepsilon }}{\mathcal {Z}}_{1}^{\varepsilon }$$ with $${\mathcal {Z}}_{n}^{\varepsilon }\doteq (\tau _{n}^{\varepsilon }-\tau _{n-1} ^{\varepsilon })/E_{\lambda ^{\varepsilon }}\tau _{1}^{\varepsilon }.$$ Clearly, $$E_{\lambda ^{\varepsilon }}{\mathcal {Z}}_{n}^{\varepsilon }=1$$ and $$E_{\lambda ^{\varepsilon }}{\mathfrak {X}}_{n}^{\varepsilon }=0.$$ Applying Lemma 8.8, we find that there exists some $$\varepsilon _{0} \in (0,1)$$ such that $$\sup \nolimits _{\varepsilon \in (0,\varepsilon _{0})}E_{\lambda ^{\varepsilon }}\left( {\mathcal {Z}}_{1}^{\varepsilon }\right) ^{3}<\infty \text { and }\inf \nolimits _{\varepsilon \in (0,\varepsilon _{0})}\mathrm {Var}_{\lambda ^{\varepsilon } }({\mathcal {Z}}_{1}^{\varepsilon })>0.$$ Since $${\mathcal {Z}}_{1}^{\varepsilon }\ge 0$$, Jensen’s inequality implies $$\left( E_{\lambda ^{\varepsilon }}{\mathcal {Z}}_{1}^{\varepsilon }\right) ^{3}\le E_{\lambda ^{\varepsilon }}\left( {\mathcal {Z}}_{1}^{\varepsilon }\right) ^{3}$$, and therefore

\begin{aligned} {\hat{\rho }} \le 4\sup \nolimits _{\varepsilon \in (0,\varepsilon _{0})}\left( E_{\lambda ^{\varepsilon }}\left( {\mathcal {Z}}_{1}^{\varepsilon }\right) ^{3}+\left( E_{\lambda ^{\varepsilon }}{\mathcal {Z}}_{1}^{\varepsilon }\right) ^{3}\right) \le 8\sup \nolimits _{\varepsilon \in (0,\varepsilon _{0})}E_{\lambda ^{\varepsilon } }\left( {\mathcal {Z}}_{1}^{\varepsilon }\right) ^{3}<\infty , \end{aligned}

and

\begin{aligned} {\hat{\sigma }}^{2}=\inf \nolimits _{\varepsilon \in (0,\varepsilon _{0})}E_{\lambda ^{\varepsilon }}\left( {\mathfrak {X}}_{1}^{\varepsilon }\right) ^{2} =\inf \nolimits _{\varepsilon \in (0,\varepsilon _{0})}\mathrm {Var}_{\lambda ^{\varepsilon } }\left( {\mathcal {Z}}_{1}^{\varepsilon }\right) >0. \end{aligned}

Therefore, we can use Corollary 8.14 with the iid sequence $$\left\{ {\mathfrak {X}}_{n}^{\varepsilon }\right\} _{n\in {\mathbb {N}} }$$ to find that for any $$k\in {\mathbb {N}} _{0}$$ with $$0\le k\le 2\gamma ^{\varepsilon }\Gamma ^{\varepsilon }$$

\begin{aligned} P_{\lambda ^{\varepsilon }}\left( \tau _{\Gamma ^{\varepsilon }+k} ^{\varepsilon }\le T^{\varepsilon }\right)&=P_{\lambda ^{\varepsilon }}\left( {\mathcal {Z}}_{1}^{\varepsilon } +\cdots +{\mathcal {Z}}_{\Gamma ^{\varepsilon }+k}^{\varepsilon }\le \Gamma ^{\varepsilon }\right) \\&=P_{\lambda ^{\varepsilon }}\left( \frac{{\mathfrak {X}}_{1}^{\varepsilon }+\cdots +{\mathfrak {X}}_{\Gamma ^{\varepsilon }+k}^{\varepsilon }}{\sigma ^{\varepsilon }\sqrt{\Gamma ^{\varepsilon }+k}}\le \frac{-k}{\sigma ^{\varepsilon }\sqrt{\Gamma ^{\varepsilon }+k}}\right) \\&\le 1-\Phi \left( \frac{k}{\sigma ^{\varepsilon }\sqrt{\Gamma ^{\varepsilon }+k} }\right) +\frac{3{\hat{\rho }}}{{\hat{\sigma }}^{3}\sqrt{\Gamma ^{\varepsilon }+k}}, \end{aligned}

and similarly

\begin{aligned} P_{\lambda ^{\varepsilon }}\left( \tau _{\Gamma ^{\varepsilon }-k} ^{\varepsilon }\le T^{\varepsilon }\right)&\le \Phi \left( \frac{k}{\sigma ^{\varepsilon }\sqrt{\Gamma ^{\varepsilon }-k} }\right) +\frac{3{\hat{\rho }}}{{\hat{\sigma }}^{3}\sqrt{\Gamma ^{\varepsilon }-k}}. \end{aligned}

$$\square$$

### Lemma 8.16

Given any $$\delta >0$$ and any $$\ell \in (0,1/2),$$ there exists $$\varepsilon _{0}\in (0,1)$$ such that for any $$\varepsilon \in (0,\varepsilon _{0})$$, $${\mathfrak {P}}_{2} \le C\left( \Gamma ^{\varepsilon }\right) ^{\frac{1}{2}-\ell }+2\left( \Gamma ^{\varepsilon }\right) ^{1-\ell }.$$

### Proof

We rewrite $${\mathfrak {P}}_{2}$$ as

\begin{aligned} {\mathfrak {P}}_{2} =\sum \nolimits _{k=1}^{2\gamma ^{\varepsilon }\Gamma ^{\varepsilon }}P_{\lambda ^{\varepsilon }}\left( \tau _{\Gamma ^{\varepsilon }-k}^{\varepsilon }\le T^{\varepsilon }\right) +P_{\lambda ^{\varepsilon }}\left( \tau _{\Gamma ^{\varepsilon }}^{\varepsilon }\le T^{\varepsilon }\right) +\sum \nolimits _{k=1} ^{2\gamma ^{\varepsilon }\Gamma ^{\varepsilon }}P_{\lambda ^{\varepsilon }}\left( \tau _{\Gamma ^{\varepsilon }+k}^{\varepsilon }\le T^{\varepsilon }\right) . \end{aligned}

Then, we use the upper bounds from Lemma 8.15 to get

\begin{aligned} {\mathfrak {P}}_{2}&\le \sum _{k=1}^{2\gamma ^{\varepsilon }\Gamma ^{\varepsilon } }\left[ \Phi \left( \frac{k}{\sigma ^{\varepsilon }\sqrt{\Gamma ^{\varepsilon }-k}}\right) +\frac{3{\hat{\rho }}}{{\hat{\sigma }}^{3}\sqrt{\Gamma ^{\varepsilon }-k}}\right] \\&\quad +1 +\sum _{k=1}^{2\gamma ^{\varepsilon }\Gamma ^{\varepsilon }}\left[ 1-\Phi \left( \frac{k}{\sigma ^{\varepsilon }\sqrt{\Gamma ^{\varepsilon }+k} }\right) +\frac{3{\hat{\rho }}}{{\hat{\sigma }}^{3}\sqrt{\Gamma ^{\varepsilon }+k} }\right] \\&\le \frac{24{\hat{\rho }}}{{\hat{\sigma }}^{3}}\gamma ^{\varepsilon }\sqrt{\Gamma ^{\varepsilon }}+1+2\gamma ^{\varepsilon }\Gamma ^{\varepsilon }+\sum _{k=1}^{2\gamma ^{\varepsilon }\Gamma ^{\varepsilon }}\left[ \Phi \left( \frac{k}{\sigma ^{\varepsilon }\sqrt{\Gamma ^{\varepsilon }-k}}\right) -\Phi \left( \frac{k}{\sigma ^{\varepsilon }\sqrt{\Gamma ^{\varepsilon }+k}}\right) \right] . \end{aligned}

The sum of the first three terms is easily bounded above by $$C\left( \Gamma ^{\varepsilon }\right) ^{\frac{1}{2}-\ell }+2\left( \Gamma ^{\varepsilon }\right) ^{1-\ell }$$. We will show that the last term is bounded above by a constant to complete the proof.

To prove this, note that since $$\gamma ^{\varepsilon }\rightarrow 0$$, by taking $$\varepsilon$$ sufficiently small we may assume that every $$k\le 2\gamma ^{\varepsilon }\Gamma ^{\varepsilon }$$ satisfies $$k\le \Gamma ^{\varepsilon }/2.$$ Then, we apply the Mean Value Theorem and find

\begin{aligned}&\left| \Phi \left( \frac{k}{\sigma ^{\varepsilon }\sqrt{\Gamma ^{\varepsilon }-k}}\right) -\Phi \left( \frac{k}{\sigma ^{\varepsilon } \sqrt{\Gamma ^{\varepsilon }+k}}\right) \right| \\&\quad \le \sup \limits _{x\in \left[ \frac{\sqrt{2/3}k}{\sigma ^{\varepsilon }\sqrt{\Gamma ^{\varepsilon }}},\frac{\sqrt{2} k}{\sigma ^{\varepsilon }\sqrt{\Gamma ^{\varepsilon }}}\right] }\phi \left( x\right) \cdot \left( \frac{k}{\sigma ^{\varepsilon }\sqrt{\Gamma ^{\varepsilon }-k}}-\frac{k}{\sigma ^{\varepsilon }\sqrt{\Gamma ^{\varepsilon }+k}}\right) , \end{aligned}

where $$\phi \left( x\right) \doteq e^{-\frac{x^{2}}{2}}/\sqrt{2\pi }$$ and since $$0\le k\le \Gamma ^{\varepsilon }/2,$$ we have

\begin{aligned} \left[ \frac{k}{\sigma ^{\varepsilon }\sqrt{\Gamma ^{\varepsilon }+k}},\frac{k}{\sigma ^{\varepsilon }\sqrt{\Gamma ^{\varepsilon }-k}}\right] \subset \left[ \frac{\sqrt{2/3}k}{\sigma ^{\varepsilon }\sqrt{\Gamma ^{\varepsilon }}},\frac{\sqrt{2} k}{\sigma ^{\varepsilon }\sqrt{\Gamma ^{\varepsilon }}}\right] . \end{aligned}

Additionally, because $$\phi \left( x\right) =e^{-\frac{x^{2}}{2}}/\sqrt{2\pi }$$ is a monotone decreasing function on $$[0,\infty )$$, we find that

\begin{aligned} x\in \left[ \frac{\sqrt{2/3}k}{\sigma ^{\varepsilon }\sqrt{\Gamma ^{\varepsilon }}},\frac{\sqrt{2} k}{\sigma ^{\varepsilon }\sqrt{\Gamma ^{\varepsilon }}}\right] \quad \text{ implies } \quad \phi \left( x\right) \le e^{-\frac{k^{2} }{3\left( \sigma ^{\varepsilon }\right) ^{2}\Gamma ^{\varepsilon }}}/{\sqrt{2\pi }}. \end{aligned}

Also, using $$\sqrt{1+x}-\sqrt{1-x}\le 2x$$ for all $$x\in [0,1]$$ together with $$k\le \Gamma ^{\varepsilon }/2$$, a little algebra gives $${k}/{\sqrt{\Gamma ^{\varepsilon }-k}}-{k}/{\sqrt{\Gamma ^{\varepsilon } +k}} \le {4k^{2}}/{(\Gamma ^{\varepsilon }\sqrt{\Gamma ^{\varepsilon }})}.$$ Therefore, we find
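Both elementary inequalities used in this step can be checked numerically over their stated ranges (with an arbitrary illustrative value standing in for $$\Gamma ^{\varepsilon }$$):

```python
import math

# Check sqrt(1+x) - sqrt(1-x) <= 2x on [0,1] ...
xs = [i / 1000.0 for i in range(1001)]
ineq1 = all(math.sqrt(1 + x) - math.sqrt(1 - x) <= 2 * x + 1e-12 for x in xs)

# ... and k/sqrt(G-k) - k/sqrt(G+k) <= 4 k^2 / (G sqrt(G)) for all 1 <= k <= G/2.
G = 1000.0
ineq2 = all(k / math.sqrt(G - k) - k / math.sqrt(G + k) <= 4 * k * k / (G * math.sqrt(G))
            for k in range(1, 501))
print(ineq1, ineq2)  # True True
```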

\begin{aligned}&\sum \nolimits _{k=1}^{2\gamma ^{\varepsilon }\Gamma ^{\varepsilon }}\left[ \Phi \left( \frac{k}{\sigma ^{\varepsilon }\sqrt{\Gamma ^{\varepsilon }-k}}\right) -\Phi \left( \frac{k}{\sigma ^{\varepsilon }\sqrt{\Gamma ^{\varepsilon }+k} }\right) \right] \\&\quad \le \sum \nolimits _{k=1}^{2\gamma ^{\varepsilon }\Gamma ^{\varepsilon }}\frac{1}{\sqrt{2\pi }}e^{-\frac{k^{2}}{3\left( \sigma ^{\varepsilon }\right) ^{2}\Gamma ^{\varepsilon }}}\frac{4k^{2}}{\sigma ^{\varepsilon }\Gamma ^{\varepsilon }\sqrt{\Gamma ^{\varepsilon }}} \le \frac{4}{\sigma ^{\varepsilon }\Gamma ^{\varepsilon }}\sum \nolimits _{k=1}^{2\gamma ^{\varepsilon }\Gamma ^{\varepsilon }}\int _{k-1}^{k}\frac{\left( 1+x\right) ^{2}}{\sqrt{2\pi \Gamma ^{\varepsilon }}}e^{-\frac{x^{2}}{3\left( \sigma ^{\varepsilon }\right) ^{2}\Gamma ^{\varepsilon }}}dx\\&\quad \le \frac{4}{\Gamma ^{\varepsilon }}\sqrt{\frac{3}{2}}\int _{0}^{\infty }\frac{\left( 1+x\right) ^{2}}{\sqrt{3\pi \left( \sigma ^{\varepsilon }\right) ^{2} \Gamma ^{\varepsilon }}}e^{-\frac{x^{2}}{3\left( \sigma ^{\varepsilon }\right) ^{2}\Gamma ^{\varepsilon }}}dx \le \frac{6}{\Gamma ^{\varepsilon }}E\left( 1+X^{+}\right) ^{2}, \end{aligned}

where $$X\sim N(0,3\left( \sigma ^{\varepsilon }\right) ^{2}\Gamma ^{\varepsilon }/2).$$ Finally, since $$E\left( 1+X^{+}\right) ^{2}\le 2+2E\left( X^{2}\right) =2+3\left( \sigma ^{\varepsilon }\right) ^{2}\Gamma ^{\varepsilon },$$ this implies that

\begin{aligned} \sum _{k=1}^{2\gamma ^{\varepsilon }\Gamma ^{\varepsilon }}\left[ \Phi \left( \frac{k}{\sigma ^{\varepsilon }\sqrt{\Gamma ^{\varepsilon }-k}}\right) -\Phi \left( \frac{k}{\sigma ^{\varepsilon }\sqrt{\Gamma ^{\varepsilon }+k} }\right) \right]&\le \frac{6}{\Gamma ^{\varepsilon }}\left( 2+3\left( \sigma ^{\varepsilon }\right) ^{2}\Gamma ^{\varepsilon }\right) \le 12+18{\hat{\rho }}^{2/3}, \end{aligned}
(8.4)

where the last inequality is from

\begin{aligned} \sup \nolimits _{\varepsilon \in (0,\varepsilon _{0})}\sigma ^{\varepsilon }&=\sup \nolimits _{\varepsilon \in (0,\varepsilon _{0})}( E_{\lambda ^{\varepsilon }}( {\mathfrak {X}}_{1}^{\varepsilon }) ^{2}) ^{1/2} \le \sup \nolimits _{\varepsilon \in (0,\varepsilon _{0})}( E_{\lambda ^{\varepsilon }}\left| {\mathfrak {X}}_{1}^{\varepsilon }\right| ^{3}) ^{1/3}={\hat{\rho }}^{1/3}. \end{aligned}

Since, according to Lemma 8.15, $${\hat{\rho }}^{1/3}$$ is finite, we are done. $$\square$$
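As a quick numerical sanity check (not part of the proof), the three elementary estimates used above can be verified directly, with $$\Gamma ^{\varepsilon }$$ replaced by a fixed integer $$G$$ and the step $$\sup _{\varepsilon }\sigma ^{\varepsilon }\le {\hat{\rho }}^{1/3}$$ checked through the underlying Lyapunov moment inequality on sample distributions:

```python
import math
import random

# Numerical sanity checks (not part of the proof) of the elementary
# estimates used above; Gamma^eps is replaced by a fixed integer G.
def check_sqrt_bound(n=1000):
    # sqrt(1+x) - sqrt(1-x) <= 2x on [0,1]
    for i in range(n + 1):
        x = i / n
        assert math.sqrt(1 + x) - math.sqrt(1 - x) <= 2 * x + 1e-12

def check_ratio_bound(G=10_000):
    # k/sqrt(G-k) - k/sqrt(G+k) <= 4k^2 / G^(3/2) for k <= G/2
    for k in range(1, G // 2 + 1):
        lhs = k / math.sqrt(G - k) - k / math.sqrt(G + k)
        assert lhs <= 4 * k**2 / G**1.5 + 1e-12

def check_lyapunov(trials=100):
    # (E X^2)^(1/2) <= (E |X|^3)^(1/3) on random finite distributions,
    # the inequality behind sup sigma^eps <= rho_hat^(1/3)
    random.seed(0)
    for _ in range(trials):
        xs = [random.uniform(-5, 5) for _ in range(50)]
        m2 = sum(x * x for x in xs) / len(xs)
        m3 = sum(abs(x) ** 3 for x in xs) / len(xs)
        assert m2 ** 0.5 <= m3 ** (1 / 3) + 1e-12

check_sqrt_bound(); check_ratio_bound(); check_lyapunov()
```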

### Lemma 8.17

Given any $$\delta >0$$ and any $$\ell \in (0,1/2),$$ there exists $$\varepsilon _{0}\in (0,1)$$ and a constant $$C<\infty$$ such that for any $$\varepsilon \in (0,\varepsilon _{0})$$, $${\mathfrak {R}}_{2} \le 4\left( \Gamma ^{\varepsilon }\right) ^{2-\ell }+C\left( \Gamma ^{\varepsilon }\right) ^{2\left( 1-\ell \right) }.$$

### Proof

The proof of this lemma is similar to the proof of Lemma 8.16. We rewrite $${\mathfrak {R}}_{2}$$ as

\begin{aligned} {\mathfrak {R}}_{2}&=\sum \nolimits _{k=1}^{2\gamma ^{\varepsilon }\Gamma ^{\varepsilon } }\left( 2\Gamma ^{\varepsilon }-2k+1\right) P_{\lambda ^{\varepsilon }}\left( \tau _{\Gamma ^{\varepsilon }-k} ^{\varepsilon }\le T^{\varepsilon }\right) +\left( 2\Gamma ^{\varepsilon }+1\right) P_{\lambda ^{\varepsilon } }\left( \tau _{\Gamma ^{\varepsilon }} ^{\varepsilon }\le T^{\varepsilon }\right) \\&\quad +\sum \nolimits _{k=1}^{2\gamma ^{\varepsilon }\Gamma ^{\varepsilon }}\left( 2\Gamma ^{\varepsilon }+2k+1\right) P_{\lambda ^{\varepsilon }}\left( \tau _{\Gamma ^{\varepsilon }+k} ^{\varepsilon }\le T^{\varepsilon }\right) . \end{aligned}

Then, we use the upper bounds from Lemma 8.15 to get

\begin{aligned} {\mathfrak {R}}_{2}&\le \sum \nolimits _{k=1}^{2\gamma ^{\varepsilon }\Gamma ^{\varepsilon } }\left( 2\Gamma ^{\varepsilon }-2k+1\right) \left[ \Phi \left( \frac{k}{\sigma ^{\varepsilon }\sqrt{\Gamma ^{\varepsilon }-k}}\right) +\frac{3{\hat{\rho }}}{{\hat{\sigma }}^{3}\sqrt{\Gamma ^{\varepsilon }-k}}\right] +\left( 2\Gamma ^{\varepsilon }+1\right) \\&\quad +\sum \nolimits _{k=1}^{2\gamma ^{\varepsilon }\Gamma ^{\varepsilon }}\left( 2\Gamma ^{\varepsilon }+2k+1\right) \left[ 1-\Phi \left( \frac{k}{\sigma ^{\varepsilon }\sqrt{\Gamma ^{\varepsilon }+k}}\right) +\frac{3\hat{\rho }}{{\hat{\sigma }}^{3}\sqrt{\Gamma ^{\varepsilon }+k}}\right] . \end{aligned}

Next we pair the terms carefully and bound each pair separately. We start with

\begin{aligned}&\sum \nolimits _{k=1}^{2\gamma ^{\varepsilon }\Gamma ^{\varepsilon }}\left( 2\Gamma ^{\varepsilon }-2k+1\right) \Phi \left( \frac{k}{\sigma ^{\varepsilon } \sqrt{\Gamma ^{\varepsilon }-k}}\right) -\sum \nolimits _{k=1}^{2\gamma ^{\varepsilon } \Gamma ^{\varepsilon }}\left( 2\Gamma ^{\varepsilon }+2k+1\right) \Phi \left( \frac{k}{\sigma ^{\varepsilon }\sqrt{\Gamma ^{\varepsilon }+k}}\right) \\&\quad \le \left( 2\Gamma ^{\varepsilon }+1\right) \sum \nolimits _{k=1}^{2\gamma ^{\varepsilon }\Gamma ^{\varepsilon }}\left[ \Phi \left( \frac{k}{\sigma ^{\varepsilon }\sqrt{\Gamma ^{\varepsilon }-k}}\right) -\Phi \left( \frac{k}{\sigma ^{\varepsilon }\sqrt{\Gamma ^{\varepsilon }+k}}\right) \right] \le C\Gamma ^{\varepsilon }. \end{aligned}

We use (8.4) for the last inequality. The second pair is

\begin{aligned}&\sum \nolimits _{k=1}^{2\gamma ^{\varepsilon }\Gamma ^{\varepsilon }}\left( 2\Gamma ^{\varepsilon }-2k+1\right) \frac{3{\hat{\rho }}}{{\hat{\sigma }}^{3}\sqrt{\Gamma ^{\varepsilon }-k}}+\sum \nolimits _{k=1}^{2\gamma ^{\varepsilon }\Gamma ^{\varepsilon }}\left( 2\Gamma ^{\varepsilon }+2k+1\right) \frac{3{\hat{\rho }}}{\hat{\sigma }^{3}\sqrt{\Gamma ^{\varepsilon }+k}}\\&\quad =\frac{6{\hat{\rho }}}{{\hat{\sigma }}^{3}}\sum \nolimits _{k=1}^{2\gamma ^{\varepsilon } \Gamma ^{\varepsilon }}\left( \sqrt{\Gamma ^{\varepsilon }-k}+\sqrt{\Gamma ^{\varepsilon }+k}\right) +\frac{3{\hat{\rho }}}{{\hat{\sigma }}^{3}} \sum \nolimits _{k=1}^{2\gamma ^{\varepsilon }\Gamma ^{\varepsilon }}\left( \frac{1}{\sqrt{\Gamma ^{\varepsilon }-k}}+\frac{1}{\sqrt{\Gamma ^{\varepsilon }+k} }\right) \\&\quad \le \frac{6{\hat{\rho }}}{{\hat{\sigma }}^{3}} \sum \nolimits _{k=1}^{2\gamma ^{\varepsilon }\Gamma ^{\varepsilon }}2\sqrt{2\Gamma ^{\varepsilon }}+\frac{3{\hat{\rho }}}{\hat{\sigma }^{3}}\sum \nolimits _{k=1}^{2\gamma ^{\varepsilon }\Gamma ^{\varepsilon }}2\le C\gamma ^{\varepsilon }\Gamma ^{\varepsilon }\sqrt{\Gamma ^{\varepsilon } }+C\gamma ^{\varepsilon }\Gamma ^{\varepsilon }\le C\left( \Gamma ^{\varepsilon }\right) ^{\frac{3}{2}-\ell }, \end{aligned}

where the first inequality holds due to $$k \le \Gamma ^{\varepsilon }/2$$. The third term is

\begin{aligned}&\sum \nolimits _{k=1}^{2\gamma ^{\varepsilon }\Gamma ^{\varepsilon }}\left( 2\Gamma ^{\varepsilon }+2k+1\right) +\left( 2\Gamma ^{\varepsilon }+1\right) \\&\quad =4\gamma ^{\varepsilon }\left( \Gamma ^{\varepsilon }\right) ^{2} +2\gamma ^{\varepsilon }\Gamma ^{\varepsilon }+4\left( \gamma ^{\varepsilon } \Gamma ^{\varepsilon }\right) ^{2}+2\gamma ^{\varepsilon }\Gamma ^{\varepsilon }+\left( 2\Gamma ^{\varepsilon }+1\right) \\&\quad \le 4\gamma ^{\varepsilon }\left( \Gamma ^{\varepsilon }\right) ^{2}+C\left( \gamma ^{\varepsilon }\Gamma ^{\varepsilon }\right) ^{2} =4\left( \Gamma ^{\varepsilon }\right) ^{2-\ell }+C\left( \Gamma ^{\varepsilon }\right) ^{2\left( 1-\ell \right) }, \end{aligned}

where the inequality holds since for $$\ell \in (0,1/2),$$ $$2-2\ell \ge 1$$ and this implies that $$\left( 2\Gamma ^{\varepsilon }+1\right) \le C\left( \gamma ^{\varepsilon }\Gamma ^{\varepsilon }\right) ^{2}.$$ Lastly, combining all the pairs and the corresponding upper bounds, we find that for any $$\ell \in (0,1/2),$$

\begin{aligned} {\mathfrak {R}}_{2}&\le [ 4\left( \Gamma ^{\varepsilon }\right) ^{2-\ell }+C\left( \Gamma ^{\varepsilon }\right) ^{2\left( 1-\ell \right) }] +C\Gamma ^{\varepsilon }+C\left( \Gamma ^{\varepsilon }\right) ^{\frac{3}{2}-\ell } \le 4\left( \Gamma ^{\varepsilon }\right) ^{2-\ell }+C\left( \Gamma ^{\varepsilon }\right) ^{2\left( 1-\ell \right) }, \end{aligned}

where C is a constant which depends on $$\ell$$ only (and in particular is independent of $$\varepsilon$$). $$\square$$
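The integer arithmetic for the third term above can be checked directly with concrete values; here $$\gamma ^{\varepsilon }$$ and $$\Gamma ^{\varepsilon }$$ are replaced by the illustrative constants $$1/4$$ and $$400$$ (chosen so that $$2\gamma ^{\varepsilon }\Gamma ^{\varepsilon }$$ is an integer and the floating point arithmetic is exact):

```python
# Direct check of the third-term arithmetic, with gamma^eps and Gamma^eps
# replaced by the illustrative constants 1/4 and 400 (chosen so that
# 2*gamma*Gamma is an integer and floating point arithmetic is exact).
gamma, Gamma = 0.25, 400
m = int(2 * gamma * Gamma)  # upper summation limit 2*gamma*Gamma = 200
lhs = sum(2 * Gamma + 2 * k + 1 for k in range(1, m + 1)) + (2 * Gamma + 1)
rhs = (4 * gamma * Gamma**2 + 2 * gamma * Gamma
       + 4 * (gamma * Gamma) ** 2 + 2 * gamma * Gamma
       + (2 * Gamma + 1))
assert lhs == rhs
```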

### Asymptotics of Moments of $$N^{\varepsilon }(T^{\varepsilon })$$

In this subsection, we prove Lemmas 8.2 and 8.4.

### Proof of Lemma 8.2

First, recall that

\begin{aligned} E_{\lambda ^{\varepsilon }}\left( N^{\varepsilon }\left( T^{\varepsilon }\right) \right) =\sum \nolimits _{n=0}^{\infty }P_{\lambda ^{\varepsilon }}\left( \tau _{n}^{\varepsilon }\le T^{\varepsilon }\right) ={\mathfrak {P}} _{1}+{\mathfrak {P}}_{2}+{\mathfrak {P}}_{3}, \end{aligned}

where the $${\mathfrak {P}}_{i}$$ are defined in (8.2). We can simply bound $${\mathfrak {P}}_{3}$$ from above by $$\left( 1-2\gamma ^{\varepsilon }\right) \Gamma ^{\varepsilon }$$. Applying Lemma 8.11 and Lemma 8.16 to the other terms, we have for any $$\ell \in (0,1/2)$$ that

\begin{aligned} E_{\lambda ^{\varepsilon }}\left( N^{\varepsilon }\left( T^{\varepsilon }\right) \right)&\le C\left( \Gamma ^{\varepsilon }\right) ^{2\ell }e^{-\left( \Gamma ^{\varepsilon }\right) ^{1-2\ell }}+( C\left( \Gamma ^{\varepsilon }\right) ^{\frac{1}{2}-\ell }+2\left( \Gamma ^{\varepsilon }\right) ^{1-\ell }) +\left( 1-2\gamma ^{\varepsilon }\right) \Gamma ^{\varepsilon }\\&=T^{\varepsilon }/E_{\lambda ^{\varepsilon }}\tau _{1} ^{\varepsilon }+C\left( \Gamma ^{\varepsilon }\right) ^{\frac{1}{2}-\ell }+C\left( \Gamma ^{\varepsilon }\right) ^{2\ell }e^{-\left( \Gamma ^{\varepsilon }\right) ^{1-2\ell }}. \end{aligned}

On the other hand, from the definition of $$N^{\varepsilon }\left( T^{\varepsilon }\right) ,$$ $$E_{\lambda ^{\varepsilon }}\tau _{N^{\varepsilon }\left( T^{\varepsilon }\right) }^{\varepsilon }\ge T^{\varepsilon }.$$ Using Wald’s first identity, we find

\begin{aligned} E_{\lambda ^{\varepsilon }}\tau _{N^{\varepsilon }\left( T^{\varepsilon }\right) }^{\varepsilon }=E_{\lambda ^{\varepsilon }} \sum \nolimits _{n=1}^{N^{\varepsilon }\left( T^{\varepsilon }\right) }\left( \tau _{n}^{\varepsilon }-\tau _{n-1}^{\varepsilon }\right) =E_{\lambda ^{\varepsilon }}\left( N^{\varepsilon }\left( T^{\varepsilon }\right) \right) \cdot E_{\lambda ^{\varepsilon }}\tau _{1}^{\varepsilon }. \end{aligned}

Hence,

\begin{aligned} 0\le \frac{E_{\lambda ^{\varepsilon }}\left( N^{\varepsilon }\left( T^{\varepsilon }\right) \right) }{T^{\varepsilon }}-\frac{1}{E_{\lambda ^{\varepsilon }}\tau _{1}^{\varepsilon }}\le \frac{1}{T^{\varepsilon }}[ C\left( \Gamma ^{\varepsilon }\right) ^{\frac{1}{2}-\ell }+C\left( \Gamma ^{\varepsilon }\right) ^{2\ell }e^{-\left( \Gamma ^{\varepsilon }\right) ^{1-2\ell }}] . \end{aligned}

Therefore,

\begin{aligned}&\liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left| \frac{E_{\lambda ^{\varepsilon }}\left( N^{\varepsilon }\left( T^{\varepsilon }\right) \right) }{T^{\varepsilon }}-\frac{1}{E_{\lambda ^{\varepsilon }} \tau _{1}^{\varepsilon }}\right| \\&\quad \ge \liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left[ \frac{1}{T^{\varepsilon }}\left( C\left( \Gamma ^{\varepsilon }\right) ^{\frac{1}{2}-\ell }+\left( \Gamma ^{\varepsilon }\right) ^{2\ell }e^{-\left( \Gamma ^{\varepsilon }\right) ^{1-2\ell }}\right) \right] . \end{aligned}

It remains to find an appropriate lower bound for the liminf.

We use (7.2), Lemma 8.7 and Remark 8.12 to find that for any $$\eta >0,$$ there exists $$\delta _{0}\in (0,1)$$ such that for any $$\delta \in (0,\delta _{0})$$ and any $$\ell \in (0,1/2)$$

\begin{aligned}&\liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left[ \frac{1}{T^{\varepsilon }}\left( C\left( \Gamma ^{\varepsilon }\right) ^{\frac{1}{2}-\ell }+\left( \Gamma ^{\varepsilon }\right) ^{2\ell } e^{-\left( \Gamma ^{\varepsilon }\right) ^{1-2\ell }}\right) \right] \ge \liminf _{\varepsilon \rightarrow 0}\varepsilon \log T^{\varepsilon } \\&\quad +\min \left\{ \liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left( \Gamma ^{\varepsilon }\right) ^{1/2-\ell },\liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left( \left( \Gamma ^{\varepsilon }\right) ^{2\ell }e^{-\left( \Gamma ^{\varepsilon }\right) ^{1-2\ell }}\right) \right\} \\&\quad \ge c+\min \left\{ \left( 1/2-\ell \right) \left( h_1-c-\eta \right) ,\infty \right\} =c+\left( 1/2-\ell \right) \left( h_1-c-\eta \right) . \end{aligned}

We complete the proof by sending $$\ell$$ to 1/2. $$\square$$
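The Wald's-identity step in the proof above can be illustrated numerically: since $$N^{\varepsilon }(T^{\varepsilon })$$ is a stopping time for the i.i.d. cycle lengths, $$E\tau _{N}=EN\cdot E\tau _{1}$$. Below is a hedged Monte Carlo sketch in which the regeneration cycle lengths are replaced by exponential random variables, a placeholder distribution chosen purely for illustration:

```python
import random

# Monte Carlo illustration of Wald's first identity E(tau_N) = E(N)*E(tau_1)
# for the renewal count N(T) = min{n : tau_n >= T}. The exponential cycle
# lengths below are an illustrative stand-in for the regeneration times.
random.seed(1)
T, mean_cycle, trials = 50.0, 2.0, 20_000
sum_tau_N, sum_N = 0.0, 0
for _ in range(trials):
    t, n = 0.0, 0
    while t < T:
        t += random.expovariate(1.0 / mean_cycle)
        n += 1
    sum_tau_N += t  # tau_{N(T)}, the first renewal time at or past T
    sum_N += n      # N(T)
# The two estimates of E(tau_N) agree up to Monte Carlo error.
assert abs(sum_tau_N / trials - (sum_N / trials) * mean_cycle) < 0.3
```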

### Proof of Lemma 8.4

Recall that

\begin{aligned} E_{\lambda ^{\varepsilon }}\left( N^{\varepsilon }\left( T^{\varepsilon }\right) \right) ^{2}=\sum \nolimits _{n=0}^{\infty }\left( 2n+1\right) P_{\lambda ^{\varepsilon }}\left( \tau _{n}^{\varepsilon }\le T^{\varepsilon }\right) ={\mathfrak {R}} _{1}+{\mathfrak {R}}_{2}+{\mathfrak {R}}_{3} \end{aligned}

where the $${\mathfrak {R}}_{i}$$ are defined in (8.3). We can bound $${\mathfrak {R}}_{3}$$ from above by

\begin{aligned}&\sum \nolimits _{n=0}^{\left( 1-2\gamma ^{\varepsilon }\right) \Gamma ^{\varepsilon } -1}\left( 2n+1\right) =(1-4\gamma ^{\varepsilon }+4\left( \gamma ^{\varepsilon }\right) ^{2}) \left( \Gamma ^{\varepsilon }\right) ^{2}. \end{aligned}
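The bound on $${\mathfrak {R}}_{3}$$ uses only the elementary identity that the first $$m$$ odd numbers sum to $$m^{2}$$, applied with $$m=(1-2\gamma ^{\varepsilon })\Gamma ^{\varepsilon }$$; a one-line check:

```python
# The first m odd numbers sum to m^2 -- the identity behind the bound on
# R_3, applied with m = (1 - 2*gamma)*Gamma.
for m in (1, 7, 100, 12345):
    assert sum(2 * n + 1 for n in range(m)) == m * m
```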

Applying Lemma 8.11 and Lemma 8.17, we have for any $$\ell \in (0,1/2)$$ that

\begin{aligned} E_{\lambda ^{\varepsilon }}( N^{\varepsilon }\left( T^{\varepsilon }\right) ) ^{2}&\le C\left( \Gamma ^{\varepsilon }\right) ^{1+2\ell }e^{-\left( \Gamma ^{\varepsilon }\right) ^{1-2\ell }}\\&\quad + 4\left( \Gamma ^{\varepsilon }\right) ^{2-\ell }+C\left( \Gamma ^{\varepsilon }\right) ^{2\left( 1-\ell \right) } +( 1-4\gamma ^{\varepsilon }+4\left( \gamma ^{\varepsilon }\right) ^{2}) \left( \Gamma ^{\varepsilon }\right) ^{2}\\&\le \left( \Gamma ^{\varepsilon }\right) ^{2}+C\left( \Gamma ^{\varepsilon }\right) ^{2\left( 1-\ell \right) }+C\left( \Gamma ^{\varepsilon }\right) ^{1+2\ell }e^{-\left( \Gamma ^{\varepsilon }\right) ^{1-2\ell }}. \end{aligned}

As in the proof of Lemma 8.2, $$E_{\lambda ^{\varepsilon }}\left( N^{\varepsilon }\left( T^{\varepsilon }\right) \right) \ge \Gamma ^{\varepsilon }.$$ Thus, for any $$\ell \in (0,1/2)$$

\begin{aligned} \mathrm {Var}_{\lambda ^{\varepsilon }}\left( N^{\varepsilon }\left( T^{\varepsilon }\right) \right)&\le E_{\lambda ^{\varepsilon }}\left( N^{\varepsilon }\left( T^{\varepsilon }\right) \right) ^{2}-\left( \Gamma ^{\varepsilon }\right) ^{2}\\&\le [ \left( \Gamma ^{\varepsilon }\right) ^{2}+C\left( \Gamma ^{\varepsilon }\right) ^{2\left( 1-\ell \right) }+C\left( \Gamma ^{\varepsilon }\right) ^{1+2\ell }e^{-\left( \Gamma ^{\varepsilon }\right) ^{1-2\ell }}] -\left( \Gamma ^{\varepsilon }\right) ^{2}\\&=C\left( \Gamma ^{\varepsilon }\right) ^{2\left( 1-\ell \right) }+C\left( \Gamma ^{\varepsilon }\right) ^{1+2\ell }e^{-\left( \Gamma ^{\varepsilon }\right) ^{1-2\ell }}. \end{aligned}

Again we use (7.2), Lemma 8.7 and Remark 8.12 to find that for any $$\eta >0,$$ there exists $$\delta _{0}\in (0,1)$$ such that for any $$\delta \in (0,\delta _{0})$$ and for any $$\ell \in (0,1/2)$$,

\begin{aligned}&\liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \frac{\mathrm {Var} _{\lambda ^{\varepsilon }}\left( N^{\varepsilon }\left( T^{\varepsilon }\right) \right) }{T^{\varepsilon }}\\&\quad \ge \liminf _{\varepsilon \rightarrow 0}\varepsilon \log T^{\varepsilon } +\min \left\{ \liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left( \Gamma ^{\varepsilon }\right) ^{2\left( 1-\ell \right) } ,\liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left( \left( \Gamma ^{\varepsilon }\right) ^{1+2\ell }e^{-\left( \Gamma ^{\varepsilon }\right) ^{1-2\ell }}\right) \right\} \\&\quad \ge c+\min \left\{ 2\left( 1-\ell \right) \left( h_1-c-\eta \right) ,\infty \right\} =2\left( 1-\ell \right) (h_1-\eta )+\left( 2\ell -1\right) c. \end{aligned}

We complete the proof by sending $$\ell$$ to 1/2. $$\square$$
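The $$\min \{\cdot ,\cdot \}$$ steps in the last display (and in the proof of Lemma 8.2) are instances of the Laplace principle for sums of exponentially small terms: $$-\varepsilon \log (e^{-a/\varepsilon }+e^{-b/\varepsilon })\rightarrow \min \{a,b\}$$ as $$\varepsilon \rightarrow 0$$. A quick numerical illustration:

```python
import math

# Laplace principle for a sum of two exponentially small terms:
# -eps*log(e^(-a/eps) + e^(-b/eps)) -> min(a, b) as eps -> 0.
# The error is at most eps*log(2), hence O(eps).
a, b = 1.3, 2.7
for eps in (0.5, 0.05, 0.005):
    val = -eps * math.log(math.exp(-a / eps) + math.exp(-b / eps))
    assert abs(val - min(a, b)) <= eps  # error <= eps*log(2) < eps
```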

### Asymptotics of Moments of $${\hat{N}}^{\varepsilon }(T^{\varepsilon })$$

The proof of the following result is given in Section 10.

### Theorem 8.18

If $$w\ge h_1$$, then given any $$m>0$$ such that $$m+h_1>w$$ and for any $$\delta >0$$ sufficiently small,

\begin{aligned} \lim _{\varepsilon \rightarrow 0}\varepsilon \log E_{\lambda ^{\varepsilon }} {\hat{\tau }}_{1}^{\varepsilon }=m+\varkappa _{\delta }\text { and }{\hat{\tau }}_{1}^{\varepsilon } /E_{\lambda ^{\varepsilon }} {\hat{\tau }}_{1}^{\varepsilon }\overset{d}{\rightarrow }\mathrm {Exp}(1). \end{aligned}

Moreover, there exists $$\varepsilon _{0}\in (0,1)$$ and a constant $${\tilde{c}}>0$$ such that

\begin{aligned} P_{\lambda ^{\varepsilon }}\left( {\hat{\tau }}_{1}^{\varepsilon }/E_{\lambda ^{\varepsilon }}{\hat{\tau }}_{1}^{\varepsilon }>t\right) \le e^{-{\tilde{c}}t} \end{aligned}

for any $$t>0$$ and any $$\varepsilon \in (0,\varepsilon _{0}).$$

Notice that Theorem 8.18 is a multicycle version of Theorem 8.5, which is the key to the proofs of the asymptotics of moments of $$N^{\varepsilon }(T^{\varepsilon })$$, namely, Lemma 8.2 and Lemma 8.4. Given Theorem 8.18, the proofs of the following analogous results follow from essentially the same arguments as those of Lemma 8.2 and Lemma 8.4.

### Lemma 8.19

Suppose that $$w\ge h_1$$, $$m+h_1>w$$, and that $$T^{\varepsilon }=e^{\frac{1}{\varepsilon }c}$$ for some $$c>w$$. Then, there exists $$\delta _{0}\in (0,1)$$ such that for any $$\delta \in (0,\delta _{0})$$

\begin{aligned} \liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left| \frac{E_{\lambda ^{\varepsilon }}( {\hat{N}}^{\varepsilon }\left( T^{\varepsilon }\right) ) }{T^{\varepsilon }}-\frac{1}{E_{\lambda ^{\varepsilon }} {\hat{\tau }}_{1}^{\varepsilon }}\right| \ge c. \end{aligned}

### Corollary 8.20

Suppose that $$w\ge h_1$$, $$m+h_1>w$$ and that $$T^{\varepsilon }=e^{\frac{1}{\varepsilon }c}$$ for some $$c>w$$. Then, there exists $$\delta _{0}\in (0,1)$$ such that for any $$\delta \in (0,\delta _{0})$$

\begin{aligned} \liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \frac{E_{\lambda ^{\varepsilon }}( {\hat{N}}^{\varepsilon }\left( T^{\varepsilon }\right) ) }{T^{\varepsilon }}\ge m+\varkappa _{\delta }. \end{aligned}

### Lemma 8.21

Suppose that $$w\ge h_1$$, $$m+h_1>w$$ and that $$T^{\varepsilon }=e^{\frac{1}{\varepsilon }c}$$ for some $$c>w$$. Then, for any $$\eta >0,$$ there exists $$\delta _{0}\in (0,1)$$ such that for any $$\delta \in (0,\delta _{0})$$

\begin{aligned} \liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \frac{\mathrm {Var} _{\lambda ^{\varepsilon }}( {\hat{N}}^{\varepsilon }\left( T^{\varepsilon }\right) ) }{T^{\varepsilon }}\ge m+h_1-\eta . \end{aligned}

## Large Deviation Type Upper Bounds

In this section, we collect results from the previous sections to prove the main results of the paper, Theorems 4.3 and 4.5, which give large deviation upper bounds on the bias under the empirical measure and the variance per unit time. We also give the proof of Theorem 4.9, which shows how to simplify some expressions appearing in the large deviation bounds. Before giving the proof of the first result, we establish Lemmas 9.1 and 9.2 for the single cycle case, and Lemmas 9.3 and 9.4 for the multicycle case, which are needed in the proof of Theorem 4.3. Recall that for any $$n\in {\mathbb {N}}$$

\begin{aligned} S_{n}^{\varepsilon }\doteq \int _{\tau _{n-1}^{\varepsilon }}^{\tau _{n} ^{\varepsilon }}e^{-\frac{1}{\varepsilon }f\left( X_{t}^{\varepsilon }\right) }1_{A}\left( X_{t}^{\varepsilon }\right) dt. \end{aligned}
(9.1)

### Lemma 9.1

If $$h_1>w$$, $$A\subset M$$ is compact and $$T^{\varepsilon }=e^{\frac{1}{\varepsilon }c}$$ for some $$c>h_1$$, then for any $$\eta >0$$, there exists $$\delta _{0}\in (0,1)$$ such that for any $$\delta \in (0,\delta _{0})$$

\begin{aligned}&\liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left| \frac{E_{\lambda ^{\varepsilon }}N^{\varepsilon }\left( T^{\varepsilon }\right) }{T^{\varepsilon }}E_{\lambda ^{\varepsilon }}S_{1}^{\varepsilon }-\int _M e^{-\frac{1}{\varepsilon }f\left( x\right) }1_{A}\left( x\right) \mu ^{\varepsilon }\left( dx\right) \right| \\&\quad \ge \inf _{x\in A}\left[ f\left( x\right) +W\left( x\right) \right] -W\left( O_{1}\right) +c -h_1-\eta . \end{aligned}

### Proof

To begin, by Lemma 4.1 with $$g\left( x\right) =e^{-\frac{1}{\varepsilon }f\left( x\right) }1_{A}\left( x\right)$$, we know that for any $$\delta$$ sufficiently small and $$\varepsilon >0,$$

\begin{aligned} E_{\lambda ^{\varepsilon }}S_{1}^{\varepsilon }&=E_{\lambda ^{\varepsilon } }\left( \int _{0}^{\tau _{1}^{\varepsilon }}e^{-\frac{1}{\varepsilon }f\left( X_{s}^{\varepsilon }\right) }1_{A}\left( X_{s}^{\varepsilon }\right) ds\right) =E_{\lambda ^{\varepsilon }}\tau _{1}^{\varepsilon }\cdot \int _M e^{-\frac{1}{\varepsilon }f\left( x\right) }1_{A}\left( x\right) \mu ^{\varepsilon }\left( dx\right) . \end{aligned}

This implies that

\begin{aligned}&\left| \frac{E_{\lambda ^{\varepsilon }}\left( N^{\varepsilon }\left( T^{\varepsilon }\right) \right) }{T^{\varepsilon }}E_{\lambda ^{\varepsilon } }S_{1}^{\varepsilon }-\int _M e^{-\frac{1}{\varepsilon }f\left( x\right) } 1_{A}\left( x\right) \mu ^{\varepsilon }\left( dx\right) \right| \\&\quad = E_{\lambda ^{\varepsilon } }S_{1}^{\varepsilon } \cdot \left| \frac{E_{\lambda ^{\varepsilon }}\left( N^{\varepsilon }\left( T^{\varepsilon }\right) \right) }{T^{\varepsilon } }-\frac{1}{E_{\lambda ^{\varepsilon }}\tau _{1}^{\varepsilon }}\right| . \end{aligned}

Hence, by (7.1), Lemmas 7.21 and 8.2, we find that given $$\eta >0,$$ there exists $$\delta _{0}\in (0,1)$$ such that for any $$\delta \in (0,\delta _{0})$$

\begin{aligned}&\liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left| \frac{E_{\lambda ^{\varepsilon }}\left( N^{\varepsilon }\left( T^{\varepsilon }\right) \right) }{T^{\varepsilon }}E_{\lambda ^{\varepsilon }}S_{1} ^{\varepsilon }-\int _M e^{-\frac{1}{\varepsilon }f\left( x\right) }1_{A}\left( x\right) \mu ^{\varepsilon }\left( dx\right) \right| \\&\quad \ge \liminf _{\varepsilon \rightarrow 0}-\varepsilon \log E_{\lambda ^{\varepsilon }}S_{1} ^{\varepsilon } +\liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left| \frac{E_{\lambda ^{\varepsilon }}\left( N^{\varepsilon }\left( T^{\varepsilon }\right) \right) }{T^{\varepsilon }}-\frac{1}{E_{\lambda ^{\varepsilon }} \tau _{1}^{\varepsilon }}\right| \\&\quad \ge \inf _{x\in A}\left[ f\left( x\right) +W\left( x\right) \right] -W\left( O_{1}\right) +c -h_1-\eta . \end{aligned}

$$\square$$

In the application of Wald’s identity, a difficulty arises because, owing to the randomness of $$N^{\varepsilon }\left( T^{\varepsilon }\right)$$, $$S_{N^{\varepsilon }\left( T^{\varepsilon }\right) }^{\varepsilon }$$ need not have the same distribution as $$S_{1}^{\varepsilon }$$. Nevertheless, this minor term can be handled using the technique in, for example, [18, Theorem 3.16]. The proof of the following lemma can be found in the Appendix.

### Lemma 9.2

If $$h_1>w$$, $$A\subset M$$ is compact and $$T^{\varepsilon }=e^{\frac{1}{\varepsilon }c}$$ for some $$c>h_1$$, then for any $$\eta >0$$, there exists $$\delta _{0}\in (0,1)$$ such that for any $$\delta \in (0,\delta _{0})$$

\begin{aligned}&\liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \frac{E_{\lambda ^{\varepsilon }}S_{N^{\varepsilon }\left( T^{\varepsilon }\right) }^{\varepsilon }}{T^{\varepsilon }} \ge \inf _{x\in A}\left[ f\left( x\right) +W\left( x\right) \right] -W\left( O_{1}\right) + c -h_1-\eta . \end{aligned}

Analogous results hold for multicycles; specifically, we have the following two lemmas.

### Lemma 9.3

Suppose that $$w\ge h_1$$, $$m+h_1>w$$, $$A\subset M$$ is compact and that $$T^{\varepsilon }=e^{\frac{1}{\varepsilon }c}$$ for some $$c>w$$. Then, for any $$\eta >0$$, there exists $$\delta _{0}\in (0,1)$$ such that for any $$\delta \in (0,\delta _{0})$$

\begin{aligned}&\liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left| \frac{E_{\lambda ^{\varepsilon }}{\hat{N}}^{\varepsilon }\left( T^{\varepsilon }\right) }{T^{\varepsilon }}E_{\lambda ^{\varepsilon }}{\hat{S}}_{1}^{\varepsilon }-\int _M e^{-\frac{1}{\varepsilon }f\left( x\right) }1_{A}\left( x\right) \mu ^{\varepsilon }\left( dx\right) \right| \\&\quad \ge \inf _{x\in A}\left[ f\left( x\right) +W\left( x\right) \right] -W\left( O_{1}\right) +c-(m+h_1)-\eta . \end{aligned}

### Lemma 9.4

Suppose that $$w\ge h_1$$, $$m+h_1>w$$, $$A\subset M$$ is compact and that $$T^{\varepsilon }=e^{\frac{1}{\varepsilon }c}$$ for some $$c>w$$. Then, for any $$\eta >0$$, there exists $$\delta _{0}\in (0,1)$$ such that for any $$\delta \in (0,\delta _{0})$$

\begin{aligned} \liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \frac{E_{\lambda ^{\varepsilon }}{\hat{S}}_{{\hat{N}}^{\varepsilon }\left( T^{\varepsilon }\right) }^{\varepsilon }}{T^{\varepsilon }}\ge \inf _{x\in A}\left[ f\left( x\right) +W\left( x\right) \right] -W\left( O_{1}\right) + c-(m+h_1)-\eta . \end{aligned}

### Proof of Theorem 4.3

If $$h_1>w$$, then recall that

\begin{aligned} \frac{1}{T^{\varepsilon }}\sum \nolimits _{n=1}^{N^{\varepsilon }\left( T^{\varepsilon }\right) -1}S_{n}^{\varepsilon }\le \frac{1}{T^{\varepsilon }}\int _{0}^{T^{\varepsilon }}e^{-\frac{1}{\varepsilon }f\left( X_{t}^{\varepsilon }\right) }1_{A}\left( X_{t}^{\varepsilon }\right) dt\le \frac{1}{T^{\varepsilon }}\sum \nolimits _{n=1}^{N^{\varepsilon }\left( T^{\varepsilon }\right) }S_{n}^{\varepsilon }, \end{aligned}

where $$S_{n}^{\varepsilon }$$ is defined in (9.1). Then, we apply Wald’s first identity to obtain

\begin{aligned} E_{\lambda ^{\varepsilon }}\left( \sum \nolimits _{n=1}^{N^{\varepsilon }\left( T^{\varepsilon }\right) -1}S_{n}^{\varepsilon }\right)&=E_{\lambda ^{\varepsilon }}\left( \sum \nolimits _{n=1}^{N^{\varepsilon }\left( T^{\varepsilon }\right) }S_{n}^{\varepsilon }\right) -E_{\lambda ^{\varepsilon } }S_{N^{\varepsilon }\left( T^{\varepsilon }\right) }^{\varepsilon }\\&=E_{\lambda ^{\varepsilon }}\left( N^{\varepsilon }\left( T^{\varepsilon }\right) \right) E_{\lambda ^{\varepsilon }}S_{1}^{\varepsilon }-E_{\lambda ^{\varepsilon }}S_{N^{\varepsilon }\left( T^{\varepsilon }\right) }^{\varepsilon }. \end{aligned}

Thus,

\begin{aligned}&\left| E_{\lambda ^{\varepsilon }}\left( \frac{1}{T^{\varepsilon }} \int _{0}^{T^{\varepsilon }}e^{-\frac{1}{\varepsilon }f\left( X_{t} ^{\varepsilon }\right) }1_{A}\left( X_{t}^{\varepsilon }\right) dt\right) -\int _M e^{-\frac{1}{\varepsilon }f\left( x\right) }1_{A}\left( x\right) \mu ^{\varepsilon }\left( dx\right) \right| \\&\quad \le \left| \frac{E_{\lambda ^{\varepsilon }}\left( N^{\varepsilon }\left( T^{\varepsilon }\right) \right) }{T^{\varepsilon }}E_{\lambda ^{\varepsilon }}S_{1}^{\varepsilon }-\int _M e^{-\frac{1}{\varepsilon }f\left( x\right) }1_{A}\left( x\right) \mu ^{\varepsilon }\left( dx\right) \right| +\frac{E_{\lambda ^{\varepsilon }}S_{N^{\varepsilon }\left( T^{\varepsilon }\right) }^{\varepsilon }}{T^{\varepsilon }}. \end{aligned}

Therefore, by Lemmas 9.1 and 9.2 we have that for any $$\eta >0$$, there exists $$\delta _{0}\in (0,1)$$ such that for any $$\delta \in (0,\delta _{0})$$,

\begin{aligned}&\liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left| E_{\lambda ^{\varepsilon }}\left( \frac{1}{T^{\varepsilon }}\int _{0}^{T^{\varepsilon } }e^{-\frac{1}{\varepsilon }f\left( X_{t}^{\varepsilon }\right) }1_{A}\left( X_{t}^{\varepsilon }\right) dt\right) -\int _M e^{-\frac{1}{\varepsilon }f\left( x\right) }1_{A}\left( x\right) \mu ^{\varepsilon }\left( dx\right) \right| \\&\quad \ge \inf _{x\in A}\left[ f\left( x\right) +W\left( x\right) \right] -W\left( O_{1}\right) + c-h_1-\eta . \end{aligned}

The argument for $$h_1\le w$$ is entirely analogous but uses Lemmas 9.3 and 9.4 instead. $$\square$$

The following lemma bounds quantities that will arise in the proof of Theorem 4.5. Its proof is given in the Appendix.

### Lemma 9.5

Recall the definitions $$R_{1}^{(2)}\doteq 2\inf _{x\in A}\left[ f\left( x\right) +V\left( O_{1},x\right) \right] -h_1,$$ and for $$j\in L\setminus \{1\}$$, $$R_{j}^{(2)} \doteq 2\inf _{x\in A}\left[ f\left( x\right) +V\left( O_{j},x\right) \right] +W\left( O_{j}\right) -2W\left( O_{1}\right) +W\left( O_{1}\cup O_{j}\right)$$ with $$h_1\doteq \min _{\ell \in L\setminus \{1\}}V(O_{1},O_{\ell }).$$ Then, $$2\inf _{x\in A}\left[ f\left( x\right) +W\left( x\right) \right] -2W\left( O_{1}\right) -h_1\ge \min _{j\in L}R_{j}^{(2)}.$$

### Proof of Theorem 4.5

We begin with the observation that for any random variables $$X$$, $$Y$$ and $$Z$$ satisfying $$0\le Y-Z\le X\le Y,$$

\begin{aligned} \mathrm {Var}\left( X\right)&=EX^{2}-\left( EX\right) ^{2}\le EY^{2}-\left( E\left( Y-Z\right) \right) ^{2}\\&=\mathrm {Var}\left( Y\right) +2EY\cdot EZ-\left( EZ\right) ^{2} \le \mathrm {Var}\left( Y\right) +2EY\cdot EZ. \end{aligned}
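As a hedged numerical check of this observation (the distributions below are arbitrary placeholders), one can sample $$Y$$, $$Z$$ and an $$X$$ squeezed between $$Y-Z$$ and $$Y$$:

```python
import random

# Monte Carlo check of the observation: if 0 <= Y - Z <= X <= Y, then
# Var(X) <= Var(Y) + 2*E(Y)*E(Z). The distributions of Y and Z below are
# arbitrary placeholders; X is squeezed between Y - Z and Y.
random.seed(2)
n = 100_000
ys = [random.uniform(1.0, 3.0) for _ in range(n)]
zs = [random.uniform(0.0, 0.5) for _ in range(n)]
xs = [y - random.uniform(0.0, 1.0) * z for y, z in zip(ys, zs)]

def mean(v):
    return sum(v) / len(v)

def var(v):
    m = mean(v)
    return sum((x - m) ** 2 for x in v) / len(v)

assert all(0.0 <= y - z <= x <= y for x, y, z in zip(xs, ys, zs))
assert var(xs) <= var(ys) + 2.0 * mean(ys) * mean(zs)
```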

When $$h_1>w$$, since

\begin{aligned} 0\le \frac{1}{T^{\varepsilon }}\sum \nolimits _{n=1}^{N^{\varepsilon }\left( T^{\varepsilon }\right) }S_{n}^{\varepsilon }-\frac{1}{T^{\varepsilon } }S_{N^{\varepsilon }\left( T^{\varepsilon }\right) }^{\varepsilon }\le \frac{1}{T^{\varepsilon }}\int _{0}^{T^{\varepsilon }}e^{-\frac{1}{\varepsilon }f\left( X_{t}^{\varepsilon }\right) }1_{A}\left( X_{t}^{\varepsilon }\right) dt\le \frac{1}{T^{\varepsilon }}\sum \nolimits _{n=1}^{N^{\varepsilon }\left( T^{\varepsilon }\right) }S_{n}^{\varepsilon }, \end{aligned}

we have

\begin{aligned}&\mathrm {Var}_{\lambda ^{\varepsilon }}\left( \frac{1}{T^{\varepsilon }}\int _{0}^{T^{\varepsilon }}e^{-\frac{1}{\varepsilon }f\left( X_{t}^{\varepsilon }\right) }1_{A}\left( X_{t}^{\varepsilon }\right) dt\right) \\&\quad \le \mathrm {Var}_{\lambda ^{\varepsilon }}\left( \frac{1}{T^{\varepsilon }}\sum \nolimits _{n=1}^{N^{\varepsilon }\left( T^{\varepsilon }\right) }S_{n} ^{\varepsilon }\right) +2E_{\lambda ^{\varepsilon }}\left( \frac{1}{T^{\varepsilon }}\sum \nolimits _{n=1}^{N^{\varepsilon }\left( T^{\varepsilon }\right) }S_{n}^{\varepsilon }\right) \frac{E_{\lambda ^{\varepsilon }}S_{N^{\varepsilon }\left( T^{\varepsilon }\right) }^{\varepsilon }}{T^{\varepsilon }}, \end{aligned}

and with the help of (7.2)

\begin{aligned}&\liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left( \mathrm {Var} _{\lambda ^{\varepsilon }}\left( \frac{1}{T^{\varepsilon }}\int _{0} ^{T^{\varepsilon }}e^{-\frac{1}{\varepsilon }f\left( X_{t}^{\varepsilon }\right) }1_{A}\left( X_{t}^{\varepsilon }\right) dt\right) T^{\varepsilon }\right) \\&\quad \ge \min \left\{ \liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left[ \mathrm {Var}_{\lambda ^{\varepsilon }}\left( \frac{1}{T^{\varepsilon }}\sum \nolimits _{n=1}^{N^{\varepsilon }\left( T^{\varepsilon }\right) }S_{n} ^{\varepsilon }\right) T^{\varepsilon }\right] ,\right. \\&\quad \qquad \left. \liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left[ E_{\lambda ^{\varepsilon }}\left( \frac{1}{T^{\varepsilon }}\sum \nolimits _{n=1} ^{N^{\varepsilon }\left( T^{\varepsilon }\right) }S_{n}^{\varepsilon }\right) \frac{E_{\lambda ^{\varepsilon }}S_{N^{\varepsilon }\left( T^{\varepsilon }\right) }^{\varepsilon }}{T^{\varepsilon }}T^{\varepsilon }\right] \right\} . \end{aligned}

We complete the proof in the single cycle case by showing that both terms are bounded below by $$\min \nolimits _{j\in L}( R_{j}^{(1)}\wedge R_{j} ^{(2)}) -\eta$$, where we recall

\begin{aligned}&R_{j}^{(1)}\doteq \inf \nolimits _{x\in A}\left[ 2f\left( x\right) +V\left( O_{j},x\right) \right] +W\left( O_{j}\right) -W\left( O_{1}\right) , \\&R_{1}^{(2)}\doteq 2\inf \nolimits _{x\in A}\left[ f\left( x\right) +V\left( O_{1},x\right) \right] -h_1, \end{aligned}

and for $$j\in L\setminus \{1\}$$

\begin{aligned} R_{j}^{(2)}&\doteq 2\inf \nolimits _{x\in A}\left[ f\left( x\right) +V\left( O_{j},x\right) \right] +W\left( O_{j}\right) -2W\left( O_{1}\right) +W\left( O_{1}\cup O_{j}\right) . \end{aligned}

For the second term, we apply Wald’s first identity, Lemma 7.21, Corollary 8.3 and Lemma 9.2 to find that given $$\eta >0,$$ there exists $$\delta _{0}\in (0,1),$$ such that for any $$\delta \in (0,\delta _{0})$$

\begin{aligned}&\liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left[ E_{\lambda ^{\varepsilon }}\left( \frac{1}{T^{\varepsilon }}\sum \nolimits _{n=1}^{N^{\varepsilon }\left( T^{\varepsilon }\right) }S_{n}^{\varepsilon }\right) \frac{E_{\lambda ^{\varepsilon }}S_{N^{\varepsilon }\left( T^{\varepsilon }\right) }^{\varepsilon }}{T^{\varepsilon }}T^{\varepsilon }\right] \\&\quad \ge \liminf _{\varepsilon \rightarrow 0}-\varepsilon \log T^{\varepsilon }+\liminf _{\varepsilon \rightarrow 0}-\varepsilon \log E_{\lambda ^{\varepsilon } }\left( \frac{1}{T^{\varepsilon }}\sum \nolimits _{n=1}^{N^{\varepsilon }\left( T^{\varepsilon }\right) }S_{n}^{\varepsilon }\right) \\&\qquad +\liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \frac{E_{\lambda ^{\varepsilon }}S_{N^{\varepsilon }\left( T^{\varepsilon }\right) }^{\varepsilon }}{T^{\varepsilon }}\\&\quad \ge -c+(\inf \nolimits _{x\in A}\left[ f\left( x\right) +W\left( x\right) \right] -W\left( O_{1}\right) -h_1-\eta /3 ) + \varkappa _{\delta }\\&\qquad +( \inf \nolimits _{x\in A}\left[ f\left( x\right) +W\left( x\right) \right] -W\left( O_{1}\right) +\left( c-h_1\right) -\eta /3 ) \\&\quad \ge 2\inf \nolimits _{x\in A}\left[ f\left( x\right) +W\left( x\right) \right] -2W\left( O_{1}\right) -h_1-\eta \\&\quad \ge \min \nolimits _{j\in L}R_{j}^{(2)}-\eta \ge \min \nolimits _{j\in L}( R_{j}^{(1)}\wedge R_{j}^{(2)})-\eta . \end{aligned}

The third inequality holds by choosing $$\delta$$ sufficiently small so that $$h_\delta \ge h_1-\eta /3$$. The fourth inequality is from Lemma 9.5.

Turning to the first term, we can bound the variance by (6.3):

\begin{aligned} \mathrm {Var}_{\lambda ^{\varepsilon }}\left( \frac{1}{T^{\varepsilon }} \sum \nolimits _{n=1}^{N^{\varepsilon }\left( T^{\varepsilon }\right) }S_{n} ^{\varepsilon }\right) T^{\varepsilon }&\le 2\frac{E_{\lambda ^{\varepsilon }}\left( N^{\varepsilon }\left( T^{\varepsilon }\right) \right) }{T^{\varepsilon }}\mathrm {Var}_{\lambda ^{\varepsilon }}S_{1}^{\varepsilon }+2\frac{\mathrm {Var}_{\lambda ^{\varepsilon } }\left( N^{\varepsilon }\left( T^{\varepsilon }\right) \right) }{T^{\varepsilon }}\left( E_{\lambda ^{\varepsilon }}S_{1}^{\varepsilon }\right) ^{2}\\&\le 2\frac{E_{\lambda ^{\varepsilon }}\left( N^{\varepsilon }\left( T^{\varepsilon }\right) \right) }{T^{\varepsilon }}E_{\lambda ^{\varepsilon } }\left( S_{1}^{\varepsilon }\right) ^{2}+2\frac{\mathrm {Var}_{\lambda ^{\varepsilon }}\left( N^{\varepsilon }\left( T^{\varepsilon }\right) \right) }{T^{\varepsilon }}\left( E_{\lambda ^{\varepsilon }}S_{1}^{\varepsilon }\right) ^{2}. \end{aligned}
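For orientation, when $$N$$ is independent of the i.i.d. summands the analogue of (6.3) holds exactly, without the factor 2, by the law of total variance: $$\mathrm {Var}(\sum _{n=1}^{N}S_{n})=EN\cdot \mathrm {Var}(S_{1})+\mathrm {Var}(N)\left( ES_{1}\right) ^{2}$$; (6.3) pays a factor 2 because of the dependence through $$T^{\varepsilon }$$. A Monte Carlo sketch with illustrative placeholder distributions:

```python
import random

# When N is independent of the i.i.d. summands S_n, the analogue of (6.3)
# is exact: Var(sum_{n<=N} S_n) = E(N)*Var(S_1) + Var(N)*(E S_1)^2 (law of
# total variance). The distributions below are illustrative placeholders.
random.seed(3)
trials = 50_000
totals = []
for _ in range(trials):
    n = random.randint(1, 10)                   # N uniform on {1,...,10}
    totals.append(sum(random.uniform(0.0, 2.0) for _ in range(n)))

def mean(v):
    return sum(v) / len(v)

def var(v):
    m = mean(v)
    return sum((x - m) ** 2 for x in v) / len(v)

EN, VarN = 5.5, 99.0 / 12.0                     # moments of uniform {1,...,10}
ES, VarS = 1.0, 4.0 / 12.0                      # moments of uniform [0,2]
exact = EN * VarS + VarN * ES**2
assert abs(var(totals) - exact) < 0.3
```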

If we use Corollary 8.3 and Lemma 7.23, then we know that given $$\eta >0,$$ there exists $$\delta _{0}\in (0,1),$$ such that for any $$\delta \in (0,\delta _{0})$$

\begin{aligned}&\liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left[ \frac{E_{\lambda ^{\varepsilon }}\left( N^{\varepsilon }\left( T^{\varepsilon }\right) \right) }{T^{\varepsilon }}E_{\lambda ^{\varepsilon }}\left( S_{1}^{\varepsilon }\right) ^{2}\right] \\&\quad \ge \liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \frac{E_{\lambda ^{\varepsilon }}\left( N^{\varepsilon }\left( T^{\varepsilon }\right) \right) }{T^{\varepsilon }}+\liminf _{\varepsilon \rightarrow 0}-\varepsilon \log E_{\lambda ^{\varepsilon }}\left( S_{1}^{\varepsilon }\right) ^{2}\\&\quad \ge \min _{j\in L}( R_{j}^{(1)}\wedge R_{j}^{(2)}) -\eta . \end{aligned}

In addition, we can apply Lemmas 7.21 and 8.4 to show that given $$\eta >0,$$ there exists $$\delta _{0}\in (0,1),$$ such that for any $$\delta \in (0,\delta _{0})$$

\begin{aligned}&\liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left[ \frac{\mathrm {Var}_{\lambda ^{\varepsilon }}\left( N^{\varepsilon }\left( T^{\varepsilon }\right) \right) }{T^{\varepsilon }}\left( E_{\lambda ^{\varepsilon }}S_{1}^{\varepsilon }\right) ^{2}\right] \\&\quad \ge \liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \frac{\mathrm {Var}_{\lambda ^{\varepsilon }}\left( N^{\varepsilon }\left( T^{\varepsilon }\right) \right) }{T^{\varepsilon }}+2\liminf _{\varepsilon \rightarrow 0}-\varepsilon \log E_{\lambda ^{\varepsilon }}S_{1}^{\varepsilon } \\&\quad \ge 2\inf _{x\in A}\left[ f\left( x\right) +W\left( x\right) \right] -2W\left( O_{1}\right) -h_1-\eta \\&\quad \ge \min _{j\in L}R_{j}^{(2)}-\eta \ge \min _{j\in L}( R_{j}^{(1)}\wedge R_{j}^{(2)})-\eta . \end{aligned}

The second-to-last inequality comes from Lemma 9.5.

Hence, we find that given $$\eta >0,$$ there exists $$\delta _{0}\in (0,1),$$ such that for any $$\delta \in (0,\delta _{0})$$

\begin{aligned} \liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left( \mathrm {Var} _{\lambda ^{\varepsilon }}\left( \frac{1}{T^{\varepsilon }}\sum \nolimits _{n=1} ^{N^{\varepsilon }\left( T^{\varepsilon }\right) }S_{n}^{\varepsilon }\right) T^{\varepsilon }\right) \ge \min _{j\in L}( R_{j}^{(1)}\wedge R_{j} ^{(2)})-\eta , \end{aligned}

which completes the proof for the single-cycle case.

For the multicycle case, by using a similar argument and applying Lemmas 7.24, 7.25, 8.21 and 9.4 and Corollary 8.20, we find that

\begin{aligned}&\liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left( \mathrm {Var} _{\lambda ^{\varepsilon }}\left( \frac{1}{T^{\varepsilon }}\int _{0} ^{T^{\varepsilon }}e^{-\frac{1}{\varepsilon }f\left( X_{t}^{\varepsilon }\right) }1_{A}\left( X_{t}^{\varepsilon }\right) dt\right) T^{\varepsilon }\right) \\&\quad \ge \min _{j\in L}( R_{j}^{(1)}\wedge R_{j} ^{(2)}\wedge R_{j}^{(3,m)} ) -\eta , \end{aligned}

with

\begin{aligned} R_{j}^{(3,m)}\doteq 2\inf _{x\in A}\left[ f\left( x\right) +V\left( O_{j},x\right) \right] +2W\left( O_{j}\right) -2W\left( O_{1}\right) -(m+h_1). \end{aligned}

We complete the proof by sending $$m\downarrow w-h_1$$. $$\square$$

### Proof of Theorem 4.9

Parts 1, 2 and 3 are from Theorem 4.3, Lemma 4.3 (b) and Theorem 6.1 in [12, Chapter 6], respectively.

We now turn to part 4. Before giving the proof, we recall Lemma 4.3 (c) in [12, Chapter 6], which says that for any unstable equilibrium point $$O_{j},$$ there exists a stable equilibrium point $$O_{i}$$ such that $$W(O_{j})=W(O_{i})+V(O_{i},O_{j}).$$

Now, suppose that $$\min \nolimits _{j\in L}( \inf \nolimits _{x\in A}\left[ f\left( x\right) +V\left( O_{j},x\right) \right] +W\left( O_{j}\right) )$$ is attained at some $$\ell \in L$$ such that $$O_{\ell }$$ is unstable (i.e., $$\ell \in L\setminus L_{s}$$). Then, since there exists a stable equilibrium point $$O_{i}$$ (i.e., $$i\in L_s$$) such that $$W(O_{\ell })=W(O_{i})+V(O_{i} ,O_{\ell })$$ we find

\begin{aligned}&\min _{j\in L}\left( \inf _{x\in A}\left[ f\left( x\right) +V\left( O_{j},x\right) \right] +W\left( O_{j}\right) \right) \\&\quad =\inf _{x\in A}\left[ f\left( x\right) +V\left( O_{\ell },x\right) \right] +W\left( O_{\ell }\right) =\inf _{x\in A}\left[ f\left( x\right) +V\left( O_{\ell },x\right) \right] +V(O_{i},O_{\ell })+W(O_{i})\\&\quad \ge \inf _{x\in A}\left[ f\left( x\right) +V\left( O_{i},x\right) \right] +W(O_{i}) \ge \min _{j\in L_{\mathrm{{s}}}}\left( \inf _{x\in A}\left[ f\left( x\right) +V\left( O_{j},x\right) \right] +W\left( O_{j}\right) \right) \\&\quad \ge \min _{j\in L}\left( \inf _{x\in A}\left[ f\left( x\right) +V\left( O_{j},x\right) \right] +W\left( O_{j}\right) \right) . \end{aligned}

The first inequality follows from a dynamic programming inequality. Therefore, the minimum is also attained at $$i\in L_{\mathrm{{s}}}$$ and $$\min _{j\in L}R_{j}^{(1)} =\min _{j\in L_{\mathrm{{s}}}}R_{j}^{(1)}.$$ $$\square$$

## Exponential Return Law and Tail Behavior

In this section, we give the proof of Theorem 8.5, which was the key fact needed to obtain bounds on the distribution of $$N^{\varepsilon }(T^{\varepsilon })$$, and its multicycle analogue. A result of this type first appears in , which asserts that the time needed to escape from an open subset of the domain of attraction of a stable equilibrium point that contains the equilibrium point has an asymptotically exponential distribution.  also proves a nonasymptotic bound, again of exponential form, on the tail of the probability of escape before a given time. Theorem 8.5 is a more complicated statement, in that it asserts the asymptotically exponential form for the return time to the neighborhood of $$O_{1}$$. To prove this, we build on the results of  and decompose the return time into times of transitions between equilibrium points. This in turn requires the proof of a number of related results, such as establishing the independence of certain estimates from the initial distribution.

The existence of an exponentially distributed first hitting time is a central topic in the theory of quasistationary distributions. For a recent book-length treatment of the topic, we refer to . However, so far as we can tell, the types of situations we encounter are not covered by existing results, and so, as noted, we develop what is needed using  as the starting point. See Remark 3.15.

For any $$j\in L,$$ define $$\upsilon _{j}^{\varepsilon }$$ as the hitting time of $$\partial B_{\delta }(O_{k})$$ for some $$k\in L\setminus \{j\}$$, i.e.,

\begin{aligned} \upsilon _{j}^{\varepsilon }\doteq \inf \left\{ t>0:X_{t}^{\varepsilon }\in \cup _{k\in L\setminus \{j\}}\partial B_{\delta }(O_{k})\right\} . \end{aligned}
(10.1)

We will prove the following result for first hitting times of another equilibrium point, and later extend to return times.

### Lemma 10.1

For any $$j\in L_{\mathrm{{s}}}$$, there exists $$\delta _{0}\in (0,1)$$ such that for any $$\delta \in (0,\delta _{0})$$ and any distribution $$\lambda ^{\varepsilon }$$ on $$\partial B_{\delta }(O_{j}),$$

\begin{aligned} \lim _{\varepsilon \rightarrow 0}\varepsilon \log E_{\lambda ^{\varepsilon }}\upsilon _{j}^{\varepsilon }=\min _{y\in \cup _{k\in L\setminus \{j\}}\partial B_{\delta }(O_{k})} V(O_{j},y)\text { and }\upsilon _{j}^{\varepsilon }/E_{\lambda ^{\varepsilon }}\upsilon _{j}^{\varepsilon }\overset{d}{\rightarrow }\mathrm {Exp}(1). \end{aligned}

Moreover, there exists $$\varepsilon _{0}\in (0,1)$$ and a constant $${\tilde{c}}>0$$ such that

\begin{aligned} P_{\lambda ^{\varepsilon }}\left( \upsilon _{j}^{\varepsilon }/E_{\lambda ^{\varepsilon }}\upsilon _{j}^{\varepsilon }>t\right) \le e^{-{\tilde{c}}t} \end{aligned}

for any $$t>0$$ and any $$\varepsilon \in (0,\varepsilon _{0}).$$

The organization of this section is as follows. The first part of Lemma 10.1 that is concerned with mean first hitting times is proved in Section 10.1, while the second part that is concerned with an asymptotically exponential distribution but when starting with a special distribution is proved in Section 10.2. The last part of the lemma, which focuses on bounds on the tail of the hitting time of another equilibrium point but when starting with a special distribution, is proved in Section 10.3. We then extend the second and third parts of Lemma 10.1 to general initial distributions in Section 10.4 and Section 10.5. The last two subsections then extend all of Lemma 10.1 to return times for single cycles and multicycles, respectively.

### Lemma 10.2

For any $$\delta >0$$ sufficiently small and $$x\in \partial B_{\delta }(O_{j})$$ with $$j\in L_{\mathrm{{s}}}$$

\begin{aligned} \lim _{\varepsilon \rightarrow 0}\varepsilon \log E_{x} \upsilon _{j}^{\varepsilon }=\min _{y\in \cup _{k\in L\setminus \{j\}}\partial B_{\delta }(O_{k})}V(O_{j},y). \end{aligned}
(10.2)

### Proof

For the given $$j\in L_{\mathrm{{s}}}$$, let $$D_{j}$$ denote the corresponding domain of attraction. We claim there is $$k\in L\setminus L_{\mathrm{{s}}}$$ such that

\begin{aligned} q_{j}\doteq \inf _{y\in \partial D_{j}}V(O_{j},y)=V(O_{j},O_{k}). \end{aligned}

Since $$V(O_{j},\cdot )$$ is continuous and $$\partial D_{j}$$ is compact, there is a point $$y^{*}\in \partial D_{j}$$ such that $$q_{j}=V(O_{j},y^{*})$$. If $$y^{*}\in \cup _{k\in L\setminus L_{\mathrm{{s}}}}O_{k}$$, then we are done. If this is not true, then since $$y^{*}\notin (\cup _{k\in L_{\mathrm{{s}}}}D_{k})\cup (\cup _{k\in L\setminus L_{\mathrm{{s}}}}O_{k})$$, and since the solution to $${\dot{\phi }}=b(\phi ),\phi (0)=y^{*}$$ must converge to $$\cup _{k\in L}O_{k}$$ as $$t\rightarrow \infty$$, it must in fact converge to a point in $$\cup _{k\in L\setminus L_{\mathrm{{s}}} }O_{k}$$, say $$O_{k}$$. Since such trajectories have zero cost, by a standard argument for any $$\varepsilon >0$$ we can construct by concatenation a trajectory that connects $$O_{j}$$ to $$O_{k}$$ in finite time and with cost less than $$q_{j}+\varepsilon$$. Since $$\varepsilon >0$$ is arbitrary, we have $$q_{j}=V(O_{j},O_{k})$$.

There may be more than one $$l\in L\setminus L_{\mathrm{{s}}}$$ such that $$O_{l} \in \partial D_{j}$$ and $$q_{j}=V(O_{j},O_{l})$$, but we can assume that for some $$k\in L\setminus L_{\mathrm{{s}}}$$ and $${\bar{y}}\in \partial B_{\delta }(O_{k})$$ we attain the min in (10.2). Then, $${\bar{q}}_{j}\doteq V(O_{j},{\bar{y}})\le q_{j}$$, and we need to show $$\lim _{\varepsilon \rightarrow 0}\varepsilon \log E_{x}\upsilon _{j}^{\varepsilon }={\bar{q}}_{j}$$.

Given $$s<{\bar{q}}_{j}$$, let $$D_{j}(s)=\{x:V(O_{j},x)\le s\}$$ and assume s is large enough that $$B_{\delta }(O_{j})\subset D_{j}(s)^{\circ }$$. Then, $$D_{j}(s)\subset D_{j}^{\circ }$$ is closed and contained in the open set $$D_{j}\setminus \cup _{l\in L\setminus \{j\}}B_{\delta }(O_{l})$$, and thus the time to reach $$\partial D_{j}(s)$$ is never greater than $$\upsilon _{j}^{\varepsilon }$$. Given $$\eta >0$$, we can find a set $$D_{j}^{\eta }(s)$$ that is contained in $$D_{j}(s)$$ and satisfies the conditions of [12, Theorem 4.1, Chapter 4], and also $$\inf _{z\in \partial D_{j}^{\eta }(s)} V(O_{j},z)\ge s-\eta$$. This theorem gives the equality in the following display:

\begin{aligned}&\liminf _{\varepsilon \rightarrow 0}\varepsilon \log E_{x}\upsilon _{j} ^{\varepsilon }\ge \liminf _{\varepsilon \rightarrow 0}\varepsilon \log E_{x} \inf \{t\ge 0:X_{t}^{\varepsilon }\in \partial D_{j}^{\eta }(s)\}\\&\quad = \inf _{z\in \partial D_{j}^{\eta }(s)} V(O_{j},z) \ge s-\eta . \end{aligned}

Letting $$\eta \downarrow 0$$ and then $$s\uparrow {\bar{q}}_{j}$$ gives $$\liminf _{\varepsilon \rightarrow 0}\varepsilon \log E_{x}\upsilon _{j}^{\varepsilon } \ge {\bar{q}}_{j}$$.

For the reverse inequality, we also adapt an argument from the proof of [12, Theorem 4.1, Chapter 4]. One can find $$T_{1}<\infty$$ such that the probability for $$X_{t}^{\varepsilon }$$ to reach $$\cup _{l\in L}B_{\delta }(O_{l})$$ by time $$T_{1}$$ from any $$x\in M\setminus \cup _{l\in L}B_{\delta }(O_{l})$$ is bounded below by 1/2. (This follows easily from the law of large numbers and that all trajectories of the noiseless system reach $$\cup _{l\in L}B_{\delta /2}(O_{l})$$ in some finite time that is bounded uniformly in $$x\in M\setminus \cup _{l\in L}B_{\delta }(O_{l})$$.) Also, given $$\eta >0$$ there is $$T_{2}<\infty$$ and $$\varepsilon _{0}>0$$ such that $$P_{x}\{X_{t}^\varepsilon$$ reaches $$\cup _{k\in L\setminus \{j\}}\partial B_{\delta }(O_{k})$$ before $$T_{2}\}\ge e^{-({\bar{q}}_{j}+\eta )/\varepsilon }$$ for all $$x\in \partial B_{\delta }(O_{j})$$. It then follows from the strong Markov property that for any $$x\in M\setminus \cup _{l\in L}B_{\delta }(O_{l})$$

\begin{aligned} P_{x}\{\upsilon _{j}^{\varepsilon }\le T_{1}+T_{2}\}\ge e^{-\frac{1}{\varepsilon }({\bar{q}}_{j}+\eta )}/2. \end{aligned}

Using the ordinary Markov property, we have

\begin{aligned} E_{x}\upsilon _{j}^{\varepsilon }&\le \sum \nolimits _{n=0}^{\infty }(n+1)(T_{1} +T_{2})P_{x}\{n(T_{1}+T_{2})<\upsilon _{j}^{\varepsilon }\le (n+1)(T_{1} +T_{2})\}\\&=(T_{1}+T_{2})\sum \nolimits _{n=0}^{\infty }P_{x}\{\upsilon _{j}^{\varepsilon } >n(T_{1}+T_{2})\}\\&\le (T_{1}+T_{2})\sum \nolimits _{n=0}^{\infty }\left[ 1-\inf _{x\in M\setminus \cup _{l\in L}B_{\delta }(O_{l})}P_{x}\{\upsilon _{j}^{\varepsilon }\le T_{1}+T_{2}\}\right] ^{n}\\&=(T_{1}+T_{2})\left( \inf _{x\in M\setminus \cup _{l\in L}B_{\delta }(O_{l} )}P_{x}\{\upsilon _{j}^{\varepsilon }\le T_{1}+T_{2}\}\right) ^{-1}\\&\le 2(T_{1}+T_{2})e^{\frac{1}{\varepsilon }({\bar{q}}_{j}+\eta )}. \end{aligned}

Thus, $$\limsup _{\varepsilon \rightarrow 0}\varepsilon \log E_{x}\upsilon _{j}^{\varepsilon }\le {\bar{q}}_{j}+\eta$$, and letting $$\eta \downarrow 0$$ completes the proof. $$\square$$
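The chain of inequalities above is, at bottom, the mean of a geometrically distributed number of windows of length $$T_{1}+T_{2}$$: if each window succeeds with probability at least p, then the expected hitting time is at most $$(T_{1}+T_{2})/p$$. A minimal numeric sketch, with a hypothetical window length and success probability:

```python
import random

random.seed(2)
T, p = 3.0, 0.25  # hypothetical window length T1 + T2 and success probability

def hitting_time():
    # Worst case: each window of length T succeeds independently with
    # probability exactly p, so the hitting time is T times a
    # Geometric(p) trial count.
    n = 1
    while random.random() > p:
        n += 1
    return n * T

est = sum(hitting_time() for _ in range(50000)) / 50000
# est is close to T / p = 12, the value of the bound (T1 + T2) * (inf_x P_x)^{-1}
```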

### Remark 10.3

By the standard Freidlin–Wentzell theory, the convergence asserted in Lemma 10.2 is uniform on $$\partial B_{\delta }(O_j)$$. Therefore, we have the first part of Lemma 10.1.

### Lemma 10.4

For each $$j\in L_{\mathrm{{s}}}$$ there is a distribution $$u^{\varepsilon }$$ on $$\partial B_{2\delta }(O_{j})$$ such that $$\upsilon _{j}^{\varepsilon }/E_{u^{\varepsilon }}\upsilon _{j} ^{\varepsilon }\overset{d}{\rightarrow }\mathrm {Exp}(1).$$

### Proof

To simplify notation and since it plays no role, we write $$j=1$$ throughout the proof. We call $$\partial B_{\delta }\left( O_{1}\right)$$ and $$\partial B_{2\delta }\left( O_{1}\right)$$ the inner and outer rings of $$O_{1}$$. We can then decompose the hitting time as

\begin{aligned} \upsilon _{1}^{\varepsilon }=\sum \nolimits _{k=1}^{{\mathcal {N}}^{\varepsilon }-1}\theta _{k}^{\varepsilon }+\zeta ^{\varepsilon }, \end{aligned}
(10.3)

where $$\theta _{k}^{\varepsilon }$$ is the k-th amount of time that the process travels from the outer ring to the inner ring and back without visiting $$\cup _{j\in L\setminus \{1\}}\partial B_{\delta }(O_{j})$$, $$\zeta ^{\varepsilon }$$ is the amount of time that the process travels from the outer ring directly to $$\cup _{j\in L\setminus \{1\}}\partial B_{\delta }(O_{j})$$ without visiting the inner ring, and $${\mathcal {N}}^{\varepsilon }-1$$ is the number of times that the process goes back and forth between the inner ring and outer ring. (It is assumed that $$\delta >0$$ is small enough that $$B_{2\delta }\left( O_{1}\right) \subset M\setminus \cup _{j\in L\setminus \{1\}} B_{2\delta }(O_{j})$$.) Note that $$\theta _{k}^{\varepsilon }$$ grows exponentially in $$1/\varepsilon$$, with a rate of order $$\delta$$, due to the time taken to travel from the inner ring to the outer ring, while $$\zeta ^{\varepsilon }$$ is uniformly bounded in expected value.
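The decomposition (10.3) is the engine behind the exponential limit in Lemma 10.4: a geometric number of iid excursions, each negligible relative to the total, produces an asymptotically exponential sum (Rényi's theorem). A Monte Carlo sketch, with Uniform(0,1) excursion times as a purely hypothetical stand-in for the $$\theta _{k}^{\varepsilon }$$ and with the final stretch $$\zeta ^{\varepsilon }$$ ignored:

```python
import math
import random

random.seed(1)
p = 0.01  # hypothetical escape probability, playing the role of e^{-h/eps}

def total_time():
    # Sum a Geometric(p)-distributed number of iid excursion times.
    t = 0.0
    while random.random() > p:   # each excursion fails to escape w.p. 1 - p
        t += random.random()     # theta_k ~ Uniform(0, 1), a toy stand-in
    return t

samples = [total_time() for _ in range(20000)]
mean = sum(samples) / len(samples)
# For an Exp(1) limit, the survival probability at the mean is e^{-1}.
frac = sum(1 for s in samples if s > mean) / len(samples)
```

As p decreases, the empirical survival function of `total_time() / mean` approaches $$e^{-t}$$, matching the $$\mathrm {Exp}(1)$$ limit asserted in the lemma.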

For any set A, define the first hitting time by $$\tau \left( A\right) \doteq \inf \left\{ t>0:X_{t}^{\varepsilon }\in A\right\} .$$ Consider the conditional transition probability from $$x\in \partial B_{2\delta }\left( O_{1}\right)$$ to $$y\in \partial B_{\delta }\left( O_{1}\right)$$ given by

\begin{aligned} \psi _{1}^{\varepsilon }\left( dy|x\right) \doteq P\left( X_{\tau \left( \partial B_{\delta }\left( O_{1}\right) \right) }^{\varepsilon }\in dy|X_{0}^{\varepsilon }=x,\text { } X_{t}^{\varepsilon }\notin \cup _{j\in L\setminus \{1\}}\partial B_{\delta } (O_{j}),t\in [0,\tau (\partial B_{\delta }\left( O_{1}\right) ))\right) , \end{aligned}

and the transition probability from $$y\in \partial B_{\delta }\left( O_{1}\right)$$ to $$x\in \partial B_{2\delta }\left( O_{1}\right)$$ given by

\begin{aligned} \psi _{2}^{\varepsilon }\left( dx|y\right) \doteq P\left( X_{\tau \left( \partial B_{2\delta }\left( O_{1}\right) \right) }^{\varepsilon }\in dx|X_{0}^{\varepsilon }=y\right) . \end{aligned}
(10.4)

Then, we can create a transition probability from $$x\in \partial B_{2\delta }\left( O_{1}\right)$$ to $$y\in \partial B_{2\delta }\left( O_{1}\right)$$ by

\begin{aligned} \psi ^{\varepsilon }\left( dy|x\right) \doteq \int _{\partial B_{\delta }\left( O_{1}\right) }\psi _{2} ^{\varepsilon }\left( dy|z\right) \psi _{1}^{\varepsilon }\left( dz|x\right) . \end{aligned}
(10.5)

Since $$\partial B_{2\delta }\left( O_{1}\right)$$ is compact and $$\{X_{t}^{\varepsilon }\}_{t}$$ is non-degenerate and Feller, there exists an invariant measure $$u^{\varepsilon }\in {\mathcal {P}}\left( \partial B_{2\delta }\left( O_{1}\right) \right)$$ with respect to the transition probability $$\psi ^{\varepsilon }\left( dy|x\right) .$$ If we start with the distribution $$u^{\varepsilon }$$ on $$\partial B_{2\delta }\left( O_{1}\right)$$, then it follows from the definition of $$u^{\varepsilon }$$ and the strong Markov property that the $$\{\theta _{k}^{\varepsilon }\}_{k<{\mathcal {N}}^{\varepsilon }}$$ are iid. Moreover, the indicators of escape (i.e., $$1_{\{\tau (\cup _{j\in L\setminus \{1\}}\partial B_{\delta }(O_{j}))=\tau (\cup _{j\in L}\partial B_{\delta }(O_{j}))\}}$$) are iid Bernoulli, and we write them as $$Y_{k}^{\varepsilon }$$ with $$P_{u^{\varepsilon }}(Y_{k}^{\varepsilon }=1)=e^{-h_{1}^{\varepsilon } (\delta )/\varepsilon },$$ where $$\delta >0$$ is from the construction, $$h_{1}^{\varepsilon }(\delta )\rightarrow h_{1}(\delta )$$ as $$\varepsilon \rightarrow 0$$ and $$h_{1} (\delta )\uparrow h_{1}$$ as $$\delta \downarrow 0$$ with $$h_{1}=\min _{j\in L\setminus \{1\}}V(O_{1},O_{j})$$. Note that $${\mathcal {N}}^{\varepsilon }=\inf \left\{ k\in {\mathbb {N}} :Y_{k}^{\varepsilon }=1\right\} .$$ We therefore have

\begin{aligned} P_{u^{\varepsilon }}({\mathcal {N}}^{\varepsilon }=k)=(1-e^{-h_{1}^{\varepsilon }(\delta )/\varepsilon })^{k-1}e^{-h_{1}^{\varepsilon }(\delta )/\varepsilon }, \end{aligned}

and thus

\begin{aligned} E_{u^{\varepsilon }}\upsilon _{1}^{\varepsilon }=E_{u^{\varepsilon }}\left[ \sum \nolimits _{j=1}^{{\mathcal {N}}^{\varepsilon }-1}\theta _{j}^{\varepsilon }\right] +E_{u^{\varepsilon }}\zeta ^{\varepsilon }=E_{u^{\varepsilon }}({\mathcal {N}} ^{\varepsilon }-1)E_{u^{\varepsilon }}\theta _{1}^{\varepsilon }+E_{u^{\varepsilon }}\zeta ^{\varepsilon }, \end{aligned}

where the second equality comes from Wald’s identity. Using $$\sum _{k=1}^{\infty }ka^{k-1} ={1/(1-a)^{2}}$$ for $$a \in [0,1)$$, we also have

\begin{aligned} E_{u^{\varepsilon }}{\mathcal {N}}^{\varepsilon } =\sum \nolimits _{k=1}^{\infty }k(1-e^{-h_{1}^{\varepsilon }(\delta )/\varepsilon })^{k-1}e^{-h_{1} ^{\varepsilon }(\delta )/\varepsilon } =e^{-h_{1}^{\varepsilon } (\delta )/\varepsilon }e^{2h_{1}^{\varepsilon }(\delta )/\varepsilon } =e^{h_{1}^{\varepsilon }(\delta )/\varepsilon }, \end{aligned}

and therefore

\begin{aligned} E_{u^{\varepsilon }}\upsilon _{1}^{\varepsilon }=e^{h_{1}^{\varepsilon } (\delta )/\varepsilon }E_{u^{\varepsilon }}\theta _{1}^{\varepsilon } +(E_{u^{\varepsilon }}\zeta ^{\varepsilon }-E_{u^{\varepsilon }}\theta _{1}^{\varepsilon }). \end{aligned}
(10.6)
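The computation of $$E_{u^{\varepsilon }}{\mathcal {N}}^{\varepsilon }$$ is simply the mean of a geometric distribution, via the series identity above. A quick numeric check, with a hypothetical success probability p playing the role of $$e^{-h_{1}^{\varepsilon }(\delta )/\varepsilon }$$:

```python
p = 0.03        # hypothetical, stands in for e^{-h_1(delta)/eps}
a = 1.0 - p

# Series identity: sum_{k>=1} k a^{k-1} = 1/(1-a)^2 (partial sum; the
# tail beyond k = 5000 is negligible for this a).
s = sum(k * a ** (k - 1) for k in range(1, 5000))

# Hence E[N] = sum_{k>=1} k (1-p)^{k-1} p = p / p^2 = 1/p.
EN = p * s
```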

Next consider the characteristic function of $$\upsilon _{1}^{\varepsilon }/E_{u^{\varepsilon }}\upsilon _{1}^{\varepsilon }$$

\begin{aligned} \phi ^{\varepsilon }(t)=E_{u^{\varepsilon }}e^{it\upsilon _{1}^{\varepsilon }/E_{u^{\varepsilon }}\upsilon _{1}^{\varepsilon }}=\phi _{\upsilon }^{\varepsilon }(t/E_{u^{\varepsilon }}\upsilon _{1}^{\varepsilon }), \end{aligned}

where $$\phi _{\upsilon }^{\varepsilon }$$ is the characteristic function of $$\upsilon _{1}^{\varepsilon }.$$ By (10.3), we have

\begin{aligned} \phi _{\upsilon }^{\varepsilon }(s)&=E_{u^{\varepsilon }}e^{is\left( \sum _{k=1}^{{\mathcal {N}}^{\varepsilon }-1}\theta _{k}^{\varepsilon } +\zeta ^{\varepsilon }\right) } =E_{u^{\varepsilon }}e^{is\zeta ^{\varepsilon }}E_{u^{\varepsilon } }e^{is\left( \sum \nolimits _{k=1}^{{\mathcal {N}}^{\varepsilon }-1}\theta _{k}^{\varepsilon }\right) }\\&=\phi _{\zeta }^{\varepsilon }(s)\sum \nolimits _{k=1}^{\infty }(1-e^{-h^{\varepsilon } _{1}(\delta )/\varepsilon })^{k-1}e^{-h^{\varepsilon }_{1}(\delta )/\varepsilon }\phi _{\theta }^{\varepsilon }(s)^{k-1}\\&=\phi _{\zeta }^{\varepsilon }(s)e^{-h^{\varepsilon }_{1}(\delta )/\varepsilon } (1-[ (1-e^{-h^{\varepsilon }_{1}(\delta )/\varepsilon } )\phi _{\theta }^{\varepsilon }(s)] )^{-1}, \end{aligned}

where $$\phi _{\theta }^{\varepsilon }$$ and $$\phi _{\zeta }^{\varepsilon }$$ are the characteristic functions of $$\theta _{1}^{\varepsilon }$$ and $$\zeta ^{\varepsilon }$$, respectively. We want to show that for any $$t\in {\mathbb {R}}$$

\begin{aligned} \phi ^{\varepsilon }(t)=\phi _{\upsilon }^{\varepsilon }(t/E_{u^{\varepsilon }} \upsilon _{1}^{\varepsilon })\rightarrow 1/(1-it)\text { as }\varepsilon \rightarrow 0. \end{aligned}

We first show that $$\phi _{\zeta }^{\varepsilon }(t/E_{u^{\varepsilon }} \upsilon _{1}^{\varepsilon })\rightarrow 1.$$ By definition, $$\phi _{\zeta }^{\varepsilon }\left( t/E_{u^{\varepsilon }}\upsilon _{1}^{\varepsilon }\right) =E_{u^{\varepsilon }}\cos \left( t\zeta ^{\varepsilon }/E_{u^{\varepsilon }}\upsilon _{1}^{\varepsilon }\right) +iE_{u^{\varepsilon }}\sin \left( t\zeta ^{\varepsilon }/E_{u^{\varepsilon }}\upsilon _{1}^{\varepsilon } \right) .$$ According to [12, Lemma 1.9, Chapter 6], we know that there exist $$T_{0}\in (0,\infty )$$ and $$\beta >0$$ such that for any $$T\in (T_0,\infty )$$ and for all $$\varepsilon$$ sufficiently small

\begin{aligned} P_{u^{\varepsilon }}\left( \zeta ^{\varepsilon }>T\right) \le e^{-\frac{1}{\varepsilon }\beta \left( T-T_{0}\right) }, \end{aligned}
(10.7)

and therefore for any bounded and continuous function $$f:{\mathbb {R}} \rightarrow {\mathbb {R}}$$

\begin{aligned} \left| E_{u^{\varepsilon }}f\left( t\zeta ^{\varepsilon }/E_{u^{\varepsilon }}\upsilon _{1}^{\varepsilon }\right) -f\left( 0\right) \right|&\le 2\left\| f\right\| _{\infty }P_{u^{\varepsilon }}\left( \zeta ^{\varepsilon }>T\right) \\&\quad +E_{u^{\varepsilon }}\left[ \left| f\left( t\zeta ^{\varepsilon }/E_{u^{\varepsilon }}\upsilon _{1}^{\varepsilon } \right) -f\left( 0\right) \right| 1_{\left\{ \zeta ^{\varepsilon }\le T\right\} }\right] . \end{aligned}

The first term in the last display goes to 0 as $$\varepsilon \rightarrow 0$$. For any fixed t, $$t/E_{u^{\varepsilon }}\upsilon _{1}^{\varepsilon } \rightarrow 0$$ as $$\varepsilon \rightarrow 0$$. Since f is continuous, the second term in the last display also converges to 0 as $$\varepsilon \rightarrow 0.$$ The convergence $$\phi _{\zeta }^{\varepsilon }(t/E_{u^{\varepsilon }}\upsilon _{1}^{\varepsilon })\rightarrow 1$$ then follows by taking f to be $$\sin x$$ and $$\cos x.$$

It remains to show that for any $$t\in {\mathbb {R}}$$

\begin{aligned} e^{-h^{\varepsilon }_{1}(\delta )/\varepsilon } \left( 1-\left[ (1-e^{-h^{\varepsilon }_{1}(\delta )/\varepsilon })\phi _{\theta }^{\varepsilon }(t/E_{u^{\varepsilon }}\upsilon _{1}^{\varepsilon })\right] \right) ^{-1}\rightarrow 1/(1-it) \end{aligned}

as $$\varepsilon \rightarrow 0.$$ Observe that

\begin{aligned} e^{-h^{\varepsilon }_{1}(\delta )/\varepsilon } \left( 1-\left[ (1-e^{-h^{\varepsilon }_{1}(\delta )/\varepsilon })\phi _{\theta }^{\varepsilon }(t/E_{u^{\varepsilon }}\upsilon _{1}^{\varepsilon })\right] \right) ^{-1}=\left( \frac{1-\phi _{\theta }^{\varepsilon }(t/E_{u^{\varepsilon }}\upsilon _{1}^{\varepsilon })}{e^{-h^{\varepsilon }_{1}(\delta )/\varepsilon }}+\phi _{\theta }^{\varepsilon }(t/E_{u^{\varepsilon }}\upsilon _{1}^{\varepsilon })\right) ^{-1}, \end{aligned}

so it suffices to show that $$\phi _{\theta }^{\varepsilon }(t/E_{u^{\varepsilon } }\upsilon _{1}^{\varepsilon })\rightarrow 1$$ and $$[1-\phi _{\theta }^{\varepsilon }(t/E_{u^{\varepsilon }}\upsilon _{1}^{\varepsilon })]/e^{-h^{\varepsilon } _{1}(\delta )/\varepsilon }\rightarrow -it$$ as $$\varepsilon \rightarrow 0.$$

For the former, note that by (10.6)

\begin{aligned} 0\le E_{u^{\varepsilon }}\left( t\theta _{1}^{\varepsilon }/E_{u^{\varepsilon } }\upsilon _{1}^{\varepsilon }\right) \le \frac{tE_{u^{\varepsilon }}\theta _{1}^{\varepsilon }}{\left( e^{h^{\varepsilon }_{1}(\delta )/\varepsilon }-1\right) E_{u^{\varepsilon }}\theta _{1}^{\varepsilon }}\rightarrow 0 \end{aligned}

as $$\varepsilon \rightarrow 0,$$ and so $$t\theta _{1}^{\varepsilon } /E_{u^{\varepsilon }}\upsilon _{1}^{\varepsilon }$$ converges to 0 in distribution. Moreover, since $$e^{ix}$$ is bounded and continuous, we find $$\phi _{\theta }^{\varepsilon }(t/E_{u^{\varepsilon }}\upsilon _{1}^{\varepsilon }) \rightarrow 1$$. For the second part, using

\begin{aligned} x-{x^{3}}/{3!}\le \sin x\le x\text { and }1-{x^{2}}/{2}\le \cos x\le 1 \end{aligned}

for $$x\ge 0$$ (the case $$t<0$$ then follows by conjugate symmetry of characteristic functions), we find that

\begin{aligned} 0\le \frac{1-E_{u^{\varepsilon }}\cos \left( t\theta _{1}^{\varepsilon }/E_{u^{\varepsilon }}\upsilon _{1}^{\varepsilon }\right) }{e^{-h_{1} ^{\varepsilon }(\delta )/\varepsilon }}\le \frac{E_{u^{\varepsilon }}\left( t\theta _{1}^{\varepsilon }/E_{u^{\varepsilon }}\upsilon _{1}^{\varepsilon }\right) ^{2}}{2e^{-h_{1}^{\varepsilon }(\delta )/\varepsilon }} \end{aligned}

and

\begin{aligned} \frac{E_{u^{\varepsilon }}\left( t\theta _{1}^{\varepsilon }/E_{u^{\varepsilon } }\upsilon _{1}^{\varepsilon }\right) }{e^{-h_{1}^{\varepsilon }(\delta )/\varepsilon }}-\frac{E_{u^{\varepsilon }}\left( t\theta _{1}^{\varepsilon }/E_{u^{\varepsilon }}\upsilon _{1}^{\varepsilon }\right) ^{3}}{3!e^{-h_{1} ^{\varepsilon }(\delta )/\varepsilon }}\le \frac{E_{u^{\varepsilon }}\sin \left( t\theta _{1}^{\varepsilon }/E_{u^{\varepsilon }}\upsilon _{1}^{\varepsilon }\right) }{e^{-h_{1}^{\varepsilon }(\delta )/\varepsilon }}\le \frac{E_{u^{\varepsilon }}\left( t\theta _{1}^{\varepsilon }/E_{u^{\varepsilon } }\upsilon _{1}^{\varepsilon }\right) }{e^{-h_{1}^{\varepsilon }(\delta )/\varepsilon }}. \end{aligned}

From our previous observation regarding the distribution of $$\zeta ^{\varepsilon }$$ and (10.6)

\begin{aligned} \frac{E_{u^{\varepsilon }}\left( t\theta _{1}^{\varepsilon }/E_{u^{\varepsilon } }\upsilon _{1}^{\varepsilon }\right) }{e^{-h_{1}^{\varepsilon }(\delta )/\varepsilon }}\rightarrow t\text { as }\varepsilon \rightarrow 0. \end{aligned}

In addition, since $$\theta _{1}^{\varepsilon }$$ can be viewed as the time from the outer ring to the inner ring without visiting $$\cup _{j\in L\setminus \{1\}}\partial B_{\delta }(O_{j})$$ plus the time from the inner ring to the outer ring, by applying (10.7) to the former and using [6, Theorem 4 and Corollary 1] under Condition 3.13 to the latter, we find that

\begin{aligned} P_{u^{\varepsilon }}\left( \theta _{1}^{\varepsilon }/E_{u^{\varepsilon } }\theta _{1}^{\varepsilon }>t\right) \le 2e^{-t} \end{aligned}
(10.8)

for all $$t\in [0,\infty )$$ and $$\varepsilon$$ sufficiently small. This implies that

\begin{aligned} E_{u^{\varepsilon }}\left( \theta _{1}^{\varepsilon }/E_{u^{\varepsilon } }\theta _{1}^{\varepsilon }\right) ^{2} =2\int _{0}^{\infty }t P_{u^{\varepsilon }}\left( \theta _{1}^{\varepsilon }/E_{u^{\varepsilon } }\theta _{1}^{\varepsilon }>t\right) dt \le 4\int _{0}^{\infty }te^{-t}dt=4 \end{aligned}

and similarly $$E_{u^{\varepsilon }}\left( \theta _{1}^{\varepsilon }/E_{u^{\varepsilon } }\theta _{1}^{\varepsilon }\right) ^{3} =3\int _{0}^{\infty }t^{2}P_{u^{\varepsilon }}\left( \theta _{1}^{\varepsilon }/E_{u^{\varepsilon } }\theta _{1}^{\varepsilon }>t\right) dt\le 6\int _{0}^{\infty }t^{2}e^{-t}dt=12$$. Then combined with (10.6), we have

\begin{aligned} 0\le \frac{E_{u^{\varepsilon }}\left( t\theta _{1}^{\varepsilon }/E_{u^{\varepsilon } }\upsilon _{1}^{\varepsilon }\right) ^{2}}{2e^{-h_{1}^{\varepsilon } (\delta )/\varepsilon }} \le \frac{t^2 E_{u^{\varepsilon }}\left( \theta _{1}^{\varepsilon }/E_{u^{\varepsilon } }\theta _{1}^{\varepsilon }\right) ^{2}}{2e^{-h_{1}^{\varepsilon } (\delta )/\varepsilon }(e^{h_{1}^{\varepsilon } (\delta )/\varepsilon }-1)^2} \rightarrow 0 \end{aligned}

and

\begin{aligned} 0\le \frac{E_{u^{\varepsilon }}\left( t\theta _{1}^{\varepsilon }/E_{u^{\varepsilon } }\upsilon _{1}^{\varepsilon }\right) ^{3}}{3!e^{-h_{1}^{\varepsilon } (\delta )/\varepsilon }} \le \frac{t^3 E_{u^{\varepsilon }}\left( \theta _{1}^{\varepsilon }/E_{u^{\varepsilon } }\theta _{1}^{\varepsilon }\right) ^{3}}{3!e^{-h_{1}^{\varepsilon } (\delta )/\varepsilon }(e^{h_{1}^{\varepsilon } (\delta )/\varepsilon }-1)^3} \rightarrow 0. \end{aligned}

Therefore, we have shown that for any $$t\in {\mathbb {R}}$$

\begin{aligned} \frac{1-\phi _{\theta }^{\varepsilon }(t/E_{u^{\varepsilon }}\upsilon _{1}^{\varepsilon })}{e^{-h^{\varepsilon }_{1}(\delta )/\varepsilon }} =\frac{1-E_{u^{\varepsilon }}\cos \left( t\theta _{1}^{\varepsilon }/E_{u^{\varepsilon }}\upsilon _{1}^{\varepsilon }\right) }{e^{-h^{\varepsilon }_{1}(\delta )/\varepsilon }}-i\frac{E_{u^{\varepsilon }}\sin \left( t\theta _{1}^{\varepsilon }/E_{u^{\varepsilon }}\upsilon _{1}^{\varepsilon }\right) }{e^{-h^{\varepsilon }_{1}(\delta )/\varepsilon }}\rightarrow -it. \end{aligned}

$$\square$$
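The moment bounds in the proof rest on the tail identity $$E[X^{n}]=n\int _{0}^{\infty }t^{n-1}P(X>t)dt$$. A numerical sanity check with $$X\sim \mathrm {Exp}(1)$$, whose tail $$P(X>t)=e^{-t}$$ satisfies the bound $$2e^{-t}$$ of (10.8):

```python
import math

def moment_from_tail(n, tail, T=60.0, steps=200000):
    # E[X^n] = n * int_0^inf t^{n-1} P(X > t) dt,
    # approximated by the midpoint rule, truncated at T.
    h = T / steps
    return n * h * sum(((k + 0.5) * h) ** (n - 1) * tail((k + 0.5) * h)
                       for k in range(steps))

tail = lambda t: math.exp(-t)   # X ~ Exp(1): P(X > t) = e^{-t}
m2 = moment_from_tail(2, tail)  # close to E[X^2] = 2
m3 = moment_from_tail(3, tail)  # close to E[X^3] = 6
```

Replacing the tail by $$2e^{-t}$$ simply doubles these values, giving the finite upper bounds used above.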

### Remark 10.5

From the proof of Lemma 10.4, we actually know that

\begin{aligned} \phi _{\upsilon }^{\varepsilon }(t/E_{u^{\varepsilon }}\upsilon _{1}^{\varepsilon })\rightarrow 1/(1-it) \end{aligned}

uniformly on any compact set in $${\mathbb {R}}$$ as $$\varepsilon \rightarrow 0$$.

### Tail Probability

The goal of this subsection is to prove the following.

### Lemma 10.6

For each $$j\in L_{\mathrm{{s}}}$$ there is a distribution $$u^{\varepsilon }$$ on $$\partial B_{2\delta }(O_{j})$$ and $${\tilde{c}}>0$$ such that for any $$t\in [0,\infty )$$, $$P_{u^{\varepsilon }}( \upsilon _{j}^{\varepsilon }/E_{u^{\varepsilon }}\upsilon _{j}^{\varepsilon }>t) \le e^{-{\tilde{c}}t}$$ (here, $$\upsilon _{j}^{\varepsilon }$$ and $$u^{\varepsilon }$$ are defined as in the last subsection).

### Proof

As in the last subsection, we give the proof for the case $$j=1$$. To begin, we note that for any $$\alpha >0$$ Chebyshev’s inequality implies

\begin{aligned} P_{u^{\varepsilon }}\left( \upsilon _{1}^{\varepsilon }/E_{u^{\varepsilon }}\upsilon _{1}^{\varepsilon }>t\right) =P_{u^{\varepsilon }}( e^{\alpha \upsilon _{1}^{\varepsilon }/E_{u^{\varepsilon }}\upsilon _{1}^{\varepsilon }}>e^{\alpha t}) \le e^{-\alpha t}\cdot E_{u^{\varepsilon } }e^{\alpha \upsilon _{1}^{\varepsilon }/E_{u^{\varepsilon }}\upsilon _{1}^{\varepsilon }}. \end{aligned}

By picking $$\alpha = \alpha ^{*}\doteq 1/8$$, it suffices to show that $$E_{u^{\varepsilon }}e^{{\alpha ^{*}\upsilon _{1}^{\varepsilon }}/{E_{u^{\varepsilon }}\upsilon _{1}^{\varepsilon }}}$$ is bounded by a constant. We will do this by showing how the finiteness of $$E_{u^{\varepsilon } }e^{{\alpha ^{*}\upsilon _{1}^{\varepsilon }}/{E_{u^{\varepsilon }}\upsilon _{1}^{\varepsilon }}}$$ is implied by the finiteness of $$E_{u^{\varepsilon }}e^{{\alpha ^{*}\theta _{1}^{\varepsilon }}/{E_{u^{\varepsilon }}\upsilon _{1}^{\varepsilon }}}$$ and $$E_{u^{\varepsilon }}e^{{\alpha ^{*}\zeta ^{\varepsilon } }/{E_{u^{\varepsilon }}\upsilon _{1}^{\varepsilon }}}.$$
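The Chernoff step above can be checked exactly in the model case where $$\upsilon _{1}^{\varepsilon }/E_{u^{\varepsilon }}\upsilon _{1}^{\varepsilon }$$ is replaced by an $$\mathrm {Exp}(1)$$ random variable (a hypothetical stand-in): then $$Ee^{\alpha X}=1/(1-\alpha )$$ for $$\alpha <1$$, and the bound $$e^{-\alpha t}Ee^{\alpha X}$$ dominates the true tail $$e^{-t}$$:

```python
import math

alpha = 0.125            # alpha* = 1/8 as in the proof
mgf = 1.0 / (1.0 - alpha)  # E[e^{alpha X}] for X ~ Exp(1), finite since alpha < 1

# Markov/Chernoff bound: P(X > t) <= e^{-alpha t} * E[e^{alpha X}];
# for Exp(1) the left side is e^{-t}, so check domination at several t.
ok = all(math.exp(-t) <= math.exp(-alpha * t) * mgf
         for t in [0.0, 0.5, 1.0, 5.0, 20.0])
```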

Using (10.8), we find that for any $$\alpha >0$$

\begin{aligned} P_{u^{\varepsilon }}( e^{\alpha \theta _{1}^{\varepsilon }/E_{u^{\varepsilon }}\theta _{1}^{\varepsilon }}>t) \le 2e^{-\frac{1}{\alpha }\log t}=2t^{-\frac{1}{\alpha }} \end{aligned}

for all $$t\in [1,\infty )$$ and $$\varepsilon$$ sufficiently small. Then, (10.6) implies $$E_{u^{\varepsilon }}\upsilon _{1}^{\varepsilon } \ge \left( e^{h^{\varepsilon }_{1}\left( \delta \right) /\varepsilon }-1\right) E_{u^{\varepsilon }}\theta _{1}^{\varepsilon }$$ and therefore

\begin{aligned} E_{u^{\varepsilon }}e^{\alpha ^{*}\theta _{1}^{\varepsilon }/E_{u^{\varepsilon }}\upsilon _{1}^{\varepsilon }}&\le \int _{0}^{1}P_{u^{\varepsilon }}\left( \exp \left( \alpha ^{*} \theta _{1}^{\varepsilon }/[{(e}^{h^{\varepsilon }_{1}\left( \delta \right) /\varepsilon }-1{)}E_{u^{\varepsilon }}\theta _{1}^{\varepsilon }]\right)>t\right) dt\\&\quad +\int _{1}^{\infty }P_{u^{\varepsilon }}\left( \exp \left( \alpha ^{*}\theta _{1}^{\varepsilon }/[{(e}^{h^{\varepsilon }_{1}\left( \delta \right) /\varepsilon }-1{)}E_{u^{\varepsilon }}\theta _{1}^{\varepsilon }]\right) >t\right) dt\\&\le 1+2\int _{1}^{\infty }t^{-{(e}^{h^{\varepsilon }_{1}\left( \delta \right) /\varepsilon }-1{)/\alpha }^{*}}dt\\&=1+2[{(e}^{h^{\varepsilon }_{1}\left( \delta \right) /\varepsilon }-1{)/\alpha }^{*}-1]^{-1}=1+2\alpha ^{*}[{e}^{h^{\varepsilon }_{1}\left( \delta \right) /\varepsilon }-\alpha ^{*}-1]^{-1}. \end{aligned}

To estimate $$\zeta ^{\varepsilon },$$ we use that by (10.7) there are $$T_{0}\in (0,\infty )$$ and $$\beta >0$$ such that for any $$t\in (T_0,\infty )$$ and for all $$\varepsilon$$ sufficiently small $$P_{u^{\varepsilon }}\left( \zeta ^{\varepsilon }>t\right) \le e^{-\frac{1}{\varepsilon }\beta \left( t-T_{0}\right) },$$ so that for any $$\alpha >0$$ $$P_{u^{\varepsilon }}\left( e^{\alpha \zeta ^{\varepsilon }}>t\right) \le e^{-\frac{1}{\varepsilon }\beta \left( \frac{1}{\alpha }\log t-T_{0}\right) }$$ for any $$t\ge e^{\alpha T_{0}}.$$ Given $$n\in {\mathbb {N}} ,$$ for all sufficiently small $$\varepsilon$$ we have $$\alpha ^{*} /E_{u^{\varepsilon }}\upsilon _{1}^{\varepsilon }\le 1/n$$, and thus

\begin{aligned} P_{u^{\varepsilon }}\left( e^{\alpha ^{*}\zeta ^{\varepsilon }/E_{u^{\varepsilon }}\upsilon _{1}^{\varepsilon }}>t\right) \le P_{u^{\varepsilon } }\left( e^{\zeta ^{\varepsilon }/n}>t\right) \le e^{-\frac{1}{\varepsilon }\beta \left( n\log t-T_{0}\right) }. \end{aligned}

Hence for any n such that $$e^{T_{0}/n}\le 3/2$$ and $$\left( -\beta n+1\right) \log \left( 3/2\right) +\beta T_{0}<0,$$ and for $$\varepsilon$$ small enough that $$\alpha ^{*}/E_{u^{\varepsilon }}\upsilon _{1}^{\varepsilon }\le 1/n,$$ we have

\begin{aligned} E_{u^{\varepsilon }}e^{\alpha ^{*}\zeta ^{\varepsilon }/E_{u^{\varepsilon }}\upsilon _{1}^{\varepsilon }}&\le \int _{0}^{\infty }P_{u^{\varepsilon }}\left( e^{\alpha ^{*}\zeta ^{\varepsilon }/E_{u^{\varepsilon }}\upsilon _{1}^{\varepsilon }}>t\right) dt \le 3/2+\int _{\frac{3}{2}}^{\infty }P_{u^{\varepsilon }}\left( e^{\alpha ^{*}\zeta ^{\varepsilon }/E_{u^{\varepsilon }}\upsilon _{1}^{\varepsilon }}>t\right) dt\\&\le 3/2+\int _{\frac{3}{2}}^{\infty }e^{-\frac{1}{\varepsilon } \beta \left( n\log t-T_{0}\right) }dt =3/2+e^{\frac{1}{\varepsilon }\beta T_{0}}(\beta n/{\varepsilon }-1)^{-1}\left( 3/2\right) ^{\frac{1}{\varepsilon }\left( -\beta n+\varepsilon \right) }\\&=3/2+(\beta n/{\varepsilon }-1)^{-1}e^{\frac{1}{\varepsilon }\left[ \left( -\beta n+\varepsilon \right) \log \left( 3/2\right) +\beta T_{0}\right] } \le 3/2+(\beta n/{\varepsilon }-1)^{-1} \le 2. \end{aligned}
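The closed-form evaluation of the tail integral in the display can be checked numerically; the following sketch compares quadrature with the stated formula for hypothetical values of $$\beta ,n,T_{0}$$ and $$\varepsilon$$ (the values themselves play no role beyond requiring $$\beta n/\varepsilon >1$$).

```python
import numpy as np
from scipy.integrate import quad

beta, n, T0, eps = 0.5, 4, 1.0, 0.1   # hypothetical parameter values
c = beta * n / eps                    # the exponent beta*n/eps, here > 1

# left side: integral of exp(-(1/eps)*beta*(n*log t - T0)) over [3/2, inf)
lhs, _ = quad(lambda t: np.exp(-(beta / eps) * (n * np.log(t) - T0)), 1.5, np.inf)

# closed form from the proof:
# e^{beta*T0/eps} * (beta*n/eps - 1)^{-1} * (3/2)^{(eps - beta*n)/eps}
rhs = np.exp(beta * T0 / eps) / (c - 1.0) * 1.5 ** ((eps - beta * n) / eps)

assert abs(lhs - rhs) < 1e-6 * rhs
```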

We have shown that for this $$\alpha ^{*},$$ $$E_{u^{\varepsilon }}e^{\alpha ^{*}\zeta ^{\varepsilon } /E_{u^{\varepsilon }}\upsilon _{1}^{\varepsilon }}$$ and $$E_{u^{\varepsilon }}e^{\alpha ^{*}\theta _{1}^{\varepsilon }/E_{u^{\varepsilon }} \upsilon _{1}^{\varepsilon }}$$ are uniformly bounded for all $$\varepsilon$$ sufficiently small. Lastly, using the same calculation as that used for the characteristic function,

\begin{aligned} E_{u^{\varepsilon }}e^{\alpha ^{*}\upsilon _{1}^{\varepsilon } /E_{u^{\varepsilon }}\upsilon _{1}^{\varepsilon }}&=E_{u^{\varepsilon }}e^{\alpha ^{*}\zeta ^{\varepsilon }/E_{u^{\varepsilon }}\upsilon _{1}^{\varepsilon }}\cdot e^{-h^{\varepsilon }_{1}(\delta )/\varepsilon } \left( 1-\left[ (1-e^{-h^{\varepsilon }_{1}(\delta )/\varepsilon })E_{u^{\varepsilon }} e^{\alpha ^{*}\theta _{1}^{\varepsilon }/E_{u^{\varepsilon }}\upsilon _{1}^{\varepsilon }}\right] \right) ^{-1}\\&\le 2e^{-h^{\varepsilon }_{1}(\delta )/\varepsilon }\left( 1-\left[ (1-e^{-h^{\varepsilon }_{1}(\delta )/\varepsilon })\left( 1+\frac{2\alpha ^{*}}{{e}^{h^{\varepsilon }_{1}\left( \delta \right) /\varepsilon }-\alpha ^{*}-1}\right) \right] \right) ^{-1}\\&=2e^{-h^{\varepsilon }_{1}(\delta )/\varepsilon }\left( e^{-h^{\varepsilon }_{1}(\delta ) /\varepsilon } -\frac{2\alpha ^{*}}{{e}^{h^{\varepsilon }_{1}\left( \delta \right) /\varepsilon }-\alpha ^{*}-1} +\frac{2\alpha ^{*}}{{e}^{h^{\varepsilon }_{1}\left( \delta \right) /\varepsilon }-\alpha ^{*} -1}e^{-h^{\varepsilon }_{1}(\delta )/\varepsilon }\right) ^{-1}\\&=2\left( 1-2\alpha ^{*}\frac{e^{h^{\varepsilon }_{1}(\delta )/\varepsilon }-1}{{e}^{h^{\varepsilon }_{1}\left( \delta \right) /\varepsilon }-\alpha ^{*}-1}\right) ^{-1} \le 2/(1-4\alpha ^{*})=4. \end{aligned}

$$\square$$
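The geometric-sum structure behind the last display can be illustrated on a simplified toy model in which the cycle length is $$\upsilon =\theta _{1}+\cdots +\theta _{N}+\zeta$$ with i.i.d. excursion times $$\theta _{k}$$, an independent final excursion $$\zeta$$, and an independent geometric number N of failed attempts with success probability p (standing in for $$e^{-h_{1}^{\varepsilon }(\delta )/\varepsilon }$$). In that toy model $$Ee^{s\upsilon }=pEe^{s\zeta }/(1-(1-p)Ee^{s\theta })$$, which can be checked by Monte Carlo; all parameter values below are illustrative, and the independence assumed here is exactly what the stationary distribution $$u^{\varepsilon }$$ provides in the actual proof.

```python
import numpy as np

rng = np.random.default_rng(0)
p, s, m = 0.05, 0.01, 200_000   # illustrative success prob., exponent, sample size

# N = number of failed attempts before the first success
N = rng.geometric(p, size=m) - 1

# sum of N i.i.d. Exp(1) theta's is Gamma(N, 1); N = 0 contributes nothing
theta_sum = np.zeros(m)
mask = N > 0
theta_sum[mask] = rng.gamma(N[mask].astype(float), 1.0)

zeta = rng.exponential(2.0, size=m)   # independent final excursion, mean 2
nu = theta_sum + zeta

# Monte Carlo vs. the regenerative identity, using the exponential MGFs
# E e^{s theta} = 1/(1-s) and E e^{s zeta} = 1/(1-2s)
mgf_mc = np.exp(s * nu).mean()
mgf_exact = (p / (1.0 - 2.0 * s)) / (1.0 - (1.0 - p) / (1.0 - s))
assert abs(mgf_mc - mgf_exact) < 0.02
```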

### General Initial Condition

This subsection presents results that will allow us to extend the results of the previous two subsections to an arbitrary initial distribution $$\lambda ^{\varepsilon }\in {\mathcal {P}}(\partial B_{\delta }\left( O_{1}\right) )$$. Under our assumptions, for any $$j\in L_{\mathrm{{s}}}$$ we observe that the process model

\begin{aligned} dX_{t}^{\varepsilon }=b\left( X_{t}^{\varepsilon }\right) dt+\sqrt{\varepsilon }\sigma \left( X_{t}^{\varepsilon }\right) dW_{t} \end{aligned}
(10.9)

has the property that $$b(x)=A(x-O_{j})[1+o(1)]$$ and $$\sigma \left( x\right) ={\bar{\sigma }}[1+o\left( 1\right) ]$$, where $$o(1)\rightarrow 0$$ as $$\left\| x-O_{j}\right\| \rightarrow 0$$, A is stable, and $${\bar{\sigma }}$$ is invertible. By an invertible change of variables, we can arrange that $$O_{j}=0$$ and $${\bar{\sigma }}=I$$, and to simplify we assume this in the rest of the section.

Since A is stable, there exists a positive definite and symmetric solution M to the matrix equation $$AM+MA^{T}=-I$$ (in fact the solution can be exhibited in the form $$M=\int _{0}^{\infty } e^{At}e^{A^{T}t}dt$$). To prove ergodicity, we introduce some additional notation: $$U(x)\doteq \langle x, Mx \rangle$$, $$B_i\doteq \{x:U(x) < b_i^2\}$$ for $$i=0,1,2$$, and $${\mathcal {S}}_{i}(\varepsilon )\doteq \{x: U(x)< a_{i}^2 \varepsilon \}$$ for $$i=1,2$$, where $$0<a_{1}<a_{2}$$ and $$0<b_{0}<b_{1}<b_{2}$$. If $$\varepsilon _{0}=(b_0^2/a_2^2)/2$$, then with cl denoting closure, $$\mathrm{cl}({\mathcal {S}}_{2}(\varepsilon _{0})) \subset B_0,$$ and we will assume $$\varepsilon \in (0,\varepsilon _{0})$$ henceforth. For later use, we will also assume that $$a_1^2 = 2 \sup \nolimits _{x \in B_2}\text{ tr }[\sigma (x) \sigma (x)^TM].$$
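Both characterizations of M (the Lyapunov equation and the integral formula) can be verified numerically; a minimal sketch, assuming SciPy and an arbitrarily chosen stable matrix A for illustration:

```python
import numpy as np
from scipy.linalg import expm, solve_continuous_lyapunov

# An arbitrary stable matrix (eigenvalues -1 and -2); illustrative only
A = np.array([[-1.0, 0.5],
              [0.0, -2.0]])

# Solve A M + M A^T = -I; the solution is symmetric and positive definite
M = solve_continuous_lyapunov(A, -np.eye(2))
assert np.allclose(A @ M + M @ A.T, -np.eye(2))
assert np.allclose(M, M.T) and np.all(np.linalg.eigvalsh(M) > 0)

# M = int_0^inf e^{At} e^{A^T t} dt: compare with a trapezoidal approximation
ts = np.linspace(0.0, 30.0, 3001)
fs = np.array([expm(A * t) @ expm(A.T * t) for t in ts])
M_int = (fs[:-1] + fs[1:]).sum(axis=0) * (ts[1] - ts[0]) / 2.0
assert np.allclose(M, M_int, atol=1e-4)
```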

### Remark 10.7

The sets $$B_1$$ and $$B_2$$ will play the roles that $$B_\delta (O_1)$$ and $$B_{2\delta } (O_1)$$ played previously in this section. Although elsewhere in this paper, as well as in the reference, these sets are taken to be balls with respect to the Euclidean norm, in this subsection we take them to be level sets of U(x). The shape of these sets and the choice of the factor of 2 relating the radii play no role in that analysis or in our prior use in this paper. However, in this subsection it is notationally convenient for the sets to be level sets of U, since U is a Lyapunov function for the noiseless dynamics near 0. After this subsection, we will revert to the $$B_\delta (O_1)$$ and $$B_{2\delta } (O_1)$$ notation.

In addition to the restrictions $$a_1<a_2$$ and $$a_2^2 \varepsilon _0\le b_0^2$$, we also assume that $$a_{1}, a_{2}$$ and $$\varepsilon _{0}>0$$ are such that if $$\phi ^{x}$$ is the solution to the noiseless dynamics $${\dot{\phi }}=b(\phi )$$ with initial condition x, then: (i) for all $$x \in \partial {\mathcal {S}}_{2}(\varepsilon )$$, $$\phi ^{x}$$ never crosses $$\partial B_{1}$$; (ii) for all $$x \in \partial {\mathcal {S}}_{1}(\varepsilon )$$, $$\phi ^{x}$$ never exits $${\mathcal {S}}_{2}(\varepsilon )$$.

The idea that will be used to establish asymptotic independence from the starting distribution is the following. We start the process on $$\partial B_{1}$$. With some small probability, it will hit $$\partial B_{2}$$ before hitting $$\partial {\mathcal {S}}_{2}(\varepsilon )$$. This gives a contribution to $$\psi _{2}^{\varepsilon }(dz|x)$$ defined in (10.4) that will be relatively unimportant. If instead it hits $$\partial {\mathcal {S}}_{2}(\varepsilon )$$ first, then we do a Freidlin–Wentzell type analysis and decompose the trajectory into excursions between $$\partial {\mathcal {S}}_{2}(\varepsilon )$$ and $$\partial {\mathcal {S}}_{1}(\varepsilon )$$, before a final excursion from $$\partial {\mathcal {S}}_{2}(\varepsilon )$$ to $$\partial B_{2}$$.

To exhibit the asymptotic independence from $$\varepsilon$$, we introduce the scaled process $$Y^{\varepsilon }_t=X^{\varepsilon }_t/\sqrt{\varepsilon }$$, which solves the SDE

\begin{aligned} dY^{\varepsilon }_t=\frac{1}{\sqrt{\varepsilon }}b(\sqrt{\varepsilon }Y^{\varepsilon }_t)dt+ \sigma (\sqrt{\varepsilon }Y^{\varepsilon }_t)dW_t. \end{aligned}

Let $$\mathcal {{\bar{S}}}_{1}=\partial {\mathcal {S}}_{1}(1) \text { and }\mathcal {{\bar{S}} }_{2}=\partial {\mathcal {S}}_{2}(1) .$$ Let $$\omega ^{\varepsilon }(w|x)$$ denote the density of the hitting location on $$\mathcal {{\bar{S}}}_{2}$$ by the process $$Y^{\varepsilon }$$, given $$Y^{\varepsilon }_0=x\in \mathcal {{\bar{S}}}_{1}$$. The following estimate is essential. The density can be identified with the normal derivative of a related Green's function, which is bounded from above by the boundary gradient estimate and from below by the Hopf lemma.

### Lemma 10.8

Given $$\varepsilon _{0}>0$$, there are $$0<c_{1}<c_{2}<\infty$$ such that $$c_{1}\le \omega ^{\varepsilon }(w|x)\le c_{2}$$ for all $$x\in \mathcal {{\bar{S}}}_{1}$$, $$w\in \mathcal {{\bar{S}}}_{2}$$ and $$\varepsilon \in (0,\varepsilon _{0})$$.

Next let $$p^{\varepsilon }(u|w)$$ denote the density of the return location for $$Y^{\varepsilon }$$ on $$\mathcal {{\bar{S}}}_{2}$$, conditioned on visiting $$\mathcal {{\bar{S}}}_{1}$$ before $$\partial B_{2}/\sqrt{\varepsilon }$$, and starting at $$w \in \mathcal {{\bar{S}}}_{2}$$. The last lemma then directly gives the following.

### Lemma 10.9

For $$\varepsilon _{0}>0$$ and $$c_{1},c_{2}$$ as in the last lemma $$c_{1}\le p^{\varepsilon }(u|w)\le c_{2}$$ for all $$u,w\in \mathcal {{\bar{S}}}_{2}$$ and $$\varepsilon \in (0,\varepsilon _{0})$$.

Let $$r^{\varepsilon }(w)$$ denote the unique stationary distribution of $$p^{\varepsilon }(u|w)$$, and let $$p^{\varepsilon ,n}(u|w)$$ denote the n-step transition density. The preceding lemma, [14, Theorem 10.1 Chapter 3], and the existence of a uniform strictly positive lower bound on $$r^{\varepsilon }(u)$$ for all sufficiently small $$\varepsilon >0$$ imply the following.

### Lemma 10.10

There are $$K<\infty$$ and $$\alpha \in (0,1)$$ such that for all $$\varepsilon \in (0,\varepsilon _{0})$$

\begin{aligned} \sup _{w\in \mathcal {{\bar{S}}}_{2}}\left| p^{\varepsilon ,n} (u|w)-r^{\varepsilon }(u)\right| /r^{\varepsilon }(u) \le K\alpha ^{n}. \end{aligned}
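The mechanism behind Lemma 10.10 is a Doeblin-type argument: two-sided density bounds force geometric convergence to the stationary density. A discrete sketch of this effect, using a random kernel on a finite grid with entries bounded between fixed constants (all values illustrative, not part of the lemma):

```python
import numpy as np

rng = np.random.default_rng(1)
k = 50                                   # grid points standing in for the boundary

# transition densities bounded between c1 = 1 and c2 = 3 before normalization
P = rng.uniform(1.0, 3.0, size=(k, k))
P /= P.sum(axis=1, keepdims=True)        # rows sum to one

# stationary density r: normalized left eigenvector for eigenvalue 1
vals, vecs = np.linalg.eig(P.T)
r = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
r /= r.sum()

# relative error sup_{w,u} |P^n(w,u) - r(u)| / r(u) decays geometrically in n
Pn = np.eye(k)
errs = []
for n in range(1, 8):
    Pn = Pn @ P
    errs.append((np.abs(Pn - r) / r).max())
assert errs[-1] < 1e-2 * errs[0]         # sharp decay over a few steps
```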

Let $$\eta ^{\varepsilon }(dx|w)$$ denote the distribution of $$X^\varepsilon$$ upon first hitting $$\partial B_{2}$$ given that $$X^{\varepsilon }$$ reaches $$\partial {\mathcal {S}}_{1}(\varepsilon )$$ before $$\partial B_{2}$$ and starts at $$w \in \partial {\mathcal {S}}_{2}(\varepsilon )$$.

### Lemma 10.11

There are $$\kappa >0$$ and $$\varepsilon _{0}>0$$ such that for all $$\varepsilon \in (0,\varepsilon _{0})$$

\begin{aligned} \sup _{x \in \partial B_{1}}P_{x}\left\{ X^{\varepsilon } \text{ reaches } \partial B_{2} \text{ before } {\mathcal {S}}_{2}(\varepsilon )\right\} \le e^{-\kappa /\varepsilon }. \end{aligned}

### Lemma 10.12

There are $${\bar{\eta }}^{\varepsilon }(dz)\in$$ $${\mathcal {P}}(\partial B_{2})$$, $$s^{\varepsilon }$$ that tends to 0 as $$\varepsilon \rightarrow 0$$ and $$\varepsilon _0>0$$, such that for all $$A\in {\mathcal {B}}(\partial B_{2}),w\in \partial {\mathcal {S}}_{2}(\varepsilon )$$ and $$\varepsilon \in (0,\varepsilon _0)$$

\begin{aligned} {\bar{\eta }}^{\varepsilon }(A)[1-s^{\varepsilon }K/(1-\alpha )]\le \eta ^{\varepsilon }(A|w)\le {\bar{\eta }}^{\varepsilon }(A)[1+s^{\varepsilon }K/(1-\alpha )], \end{aligned}

where K and $$\alpha$$ are from Lemma 10.10.

### Proof of Lemma 10.11

Recall that $$a_{1}^{2}=2\sup _{x\in B_{2}}\text{tr}[\sigma (x)\sigma (x)^{T}M]$$. We then use that $$AM+MA^{T}=-I$$ to get that with $$U(x)\doteq \left\langle x,Mx\right\rangle$$,

\begin{aligned} \left\langle DU(x),b(x)\right\rangle \le -\varepsilon a_{1}^{2} \end{aligned}
(10.10)

for $$x\in B_{2}\setminus {\mathcal {S}}_{2}(\varepsilon )$$, and

\begin{aligned} \left\langle DU(x),b(x)\right\rangle \le -\frac{1}{8}b_{0}^{2} \end{aligned}
(10.11)

for $$x\in B_{2}\setminus (B_{0}/2)$$. By Itô's formula,

\begin{aligned} dU(X^{\varepsilon }_t)&=\left\langle DU(X^{\varepsilon }_t ),b(X^{\varepsilon }_t)\right\rangle dt+\varepsilon \, \text {tr}[\sigma (X^{\varepsilon }_t)\sigma (X^{\varepsilon }_t)^{T} M]dt \nonumber \\&\quad +\sqrt{\varepsilon }\left\langle DU(X^{\varepsilon }_t),\sigma (X^{\varepsilon }_t)dW_t\right\rangle . \end{aligned}
(10.12)

Starting at $$x\in \partial B_{1}$$, we are concerned with the probability

\begin{aligned} P_{x}\left\{ U(X^{\varepsilon }_t)\text { reaches }b_{2}^{2}\text { before }a_{2}^{2}\varepsilon \right\} , \end{aligned}

where $$U(x)=b_{1}^{2}$$. However, according to (10.12) and (10.11), reaching $$b_{2}^{2}$$ before $$b_{0}^{2}/4$$ is a rare event, and its probability decays exponentially in the form $$e^{-\kappa /\varepsilon }$$ for some $$\kappa >0$$ and uniformly in $$x\in \partial B_{1}$$. Once the process reaches $$B_{0}/2$$, (10.12) and (10.10) imply $$U(X^{\varepsilon }_t)$$ is a supermartingale as long as it is in the interval $$[0,b_{0}^{2}]$$, and therefore after $$X^{\varepsilon }_t$$ reaches $$B_{0}/2$$, the probability that $$U(X^{\varepsilon }_t)$$ reaches $$a_{2}^{2}\varepsilon$$ before $$b_{0}^{2}$$ is greater than 1/2. $$\square$$

### Proof of Lemma 10.12

Consider a starting position $$w\in \partial {\mathcal {S}}_{2}(\varepsilon )$$, and recall that $$\eta ^{\varepsilon }(dz|w)$$ denotes the hitting distribution on $$\partial B_{2}$$ after starting at w. Let $$\theta _{k}^{\varepsilon }$$ denote the return times to $$\partial {\mathcal {S}}_{2}(\varepsilon )$$ after visiting $$\partial {\mathcal {S}} _{1}(\varepsilon )$$, and let $$q_{n}^{\varepsilon }(w)$$ denote the probability that the first k for which $$X^{\varepsilon }$$ visits $$\partial B_{2}$$ before visiting $$\partial {\mathcal {S}}_{1}(\varepsilon )$$ during $$[\theta _{k}^{\varepsilon },\theta _{k+1}^{\varepsilon }]$$ is n. Then by the strong Markov property and using the rescaled process

\begin{aligned} \int _{\partial B_{2}}g(z)\eta ^{\varepsilon }(dz|w)=\sum \nolimits _{n=0}^{\infty }\int _{\partial B_{2} }g(z)q_{n}^{\varepsilon }(w)\int _{\partial {\mathcal {S}}_{2}(\varepsilon )} \eta ^{\varepsilon }(dz|u)J^{\varepsilon }(u)p^{\varepsilon ,n} (\sqrt{\varepsilon }u|\sqrt{\varepsilon }w)du, \end{aligned}

where $$J^{\varepsilon }(u)$$ is the Jacobian that accounts for the mapping between $$\partial {\mathcal {S}}_{2}(\varepsilon )$$ and $$\partial {\mathcal {S}} _{2}(1)$$ and is given by $$u/\sqrt{\varepsilon }$$. We next use that uniformly in $$w\in \partial {\mathcal {S}}_{2}(\varepsilon )$$

\begin{aligned} p^{\varepsilon ,n}(\sqrt{\varepsilon }u|\sqrt{\varepsilon }w)\le r^{\varepsilon }(\sqrt{\varepsilon }u)[1+K\alpha ^{n}] \end{aligned}

to get

\begin{aligned}&\sum \nolimits _{n=0}^{\infty }\int _{\partial B_{2}}g(z)q_{n}^{\varepsilon }(w)\int _{\partial {\mathcal {S}}_{2}(\varepsilon )}\eta ^{\varepsilon }(dz|u)J^{\varepsilon }(u)p^{\varepsilon ,n}(\sqrt{\varepsilon }u|\sqrt{\varepsilon }w)du\\&\quad \le \sum \nolimits _{n=0}^{\infty }[1+K\alpha ^{n}]\int _{\partial B_{2}}g(z)q_{n}^{\varepsilon }(w)\int _{\partial {\mathcal {S}}_{2}(\varepsilon )}\eta ^{\varepsilon }(dz|u)J^{\varepsilon }(u)r^{\varepsilon }(\sqrt{\varepsilon }u)du\\&\quad =\int _{\partial B_{2}}g(z)\int _{\partial {\mathcal {S}}_{2}(\varepsilon )} \eta ^{\varepsilon }(dz|u)J^{\varepsilon }(u)r^{\varepsilon }(\sqrt{\varepsilon }u)du\left[ 1+K\sum \nolimits _{n=0}^{\infty }q_{n}^{\varepsilon }(w)\alpha ^{n}\right] .\\ \end{aligned}

Now, use that $$K\sum _{n=0}^{\infty }\alpha ^{n}=K/(1-\alpha )<\infty$$ and $$\sup _{w\in \partial {\mathcal {S}}_{2}(\varepsilon )}\sup _{n\in {\mathbb {N}}_{0}}q_{n}^{\varepsilon }(w)\rightarrow 0$$ as $$\varepsilon \rightarrow 0$$ to get the upper bound with

\begin{aligned} {\bar{\eta }}^{\varepsilon }(dz)\doteq \int _{\partial {\mathcal {S}}_{2}(\varepsilon )}\eta ^{\varepsilon }(dz|u)J^{\varepsilon }(u)r^{\varepsilon }(\sqrt{\varepsilon }u)du. \end{aligned}

When combined with the lower bound which has an analogous proof, Lemma 10.12 follows. $$\square$$

### Lemma 10.13

For each $$j\in L_{\mathrm{{s}}}$$, there exist $${\tilde{c}}>0$$ and $$\varepsilon _{0}\in (0,1)$$ such that for any distribution $$\lambda ^{\varepsilon }$$ on $$\partial B_{\delta }(O_j)$$,

\begin{aligned} P_{\lambda ^{\varepsilon }}(\upsilon _{j}^{\varepsilon }/E_{\lambda ^{\varepsilon } }\upsilon _{j}^{\varepsilon }>t)\le e^{-{\tilde{c}}t} \end{aligned}

for all $$t>0$$ and $$\varepsilon \in (0,\varepsilon _{0}).$$

### Proof

We give the proof for the case $$j=1$$. We first show for any $$r\in (0,1)$$ there is $$\varepsilon _{0}>0$$ such that for any $$\varepsilon \in (0,\varepsilon _{0})$$ and $$\lambda ^{\varepsilon } ,\theta ^{\varepsilon }\in {\mathcal {P}}(\partial B_{\delta }(O_1))$$

\begin{aligned} E_{\lambda ^{\varepsilon }}\upsilon _{1}^{\varepsilon }/E_{\theta ^{\varepsilon }}\upsilon _{1}^{\varepsilon }\ge r. \end{aligned}
(10.13)

We use that $$\upsilon _{1}^{\varepsilon }$$ can be decomposed into $$\bar{\upsilon }_1^{\varepsilon }+{\hat{\upsilon }}_1^{\varepsilon }$$, where $$\bar{\upsilon }_1^{\varepsilon }$$ is the first hitting time to $$\partial B_{2\delta }(O_{1} )$$. Since by standard large deviation theory the exponential growth rate of the expected value of $$\upsilon _{1}^{\varepsilon }$$ is strictly greater than that of $${\bar{\upsilon }}_1^{\varepsilon }$$ (uniformly in the initial distribution), $$E_{\lambda ^{\varepsilon }}{\bar{\upsilon }}_{1}^{\varepsilon }$$ (respectively, $$E_{\theta ^{\varepsilon }}{\bar{\upsilon }}_{1}^{\varepsilon }$$) is negligible compared to $$E_{\lambda ^{\varepsilon }}\upsilon _{1}^{\varepsilon }$$ (respectively, $$E_{\theta ^{\varepsilon }}\upsilon _{1}^{\varepsilon }$$), and so it is enough to show $$E_{\lambda ^{\varepsilon }}{\hat{\upsilon }}_1^{\varepsilon }/E_{\theta ^{\varepsilon }}{\hat{\upsilon }}_1^{\varepsilon }\ge r.$$ Owing to Lemma 10.11 (and in particular because $$\kappa >0$$), the contribution to either $$E_{\lambda ^{\varepsilon }}{\hat{\upsilon }}_1^{\varepsilon }$$ or $$E_{\theta ^{\varepsilon }}{\hat{\upsilon }}_1^{\varepsilon }$$ from trajectories that reach $$\partial B_{2\delta }(O_1)$$ before $$\partial {\mathcal {S}}_{2}(\varepsilon )$$ can be neglected. Using Lemma 10.12 and the strong Markov property gives

\begin{aligned} \inf _{w_{1},w_{2}\in \partial {\mathcal {S}}_{2}(\varepsilon )}\frac{E_{w_{1}} {\hat{\upsilon }}_1^{\varepsilon }}{E_{w_{2}}{\hat{\upsilon }}_1^{\varepsilon }}\ge \frac{[1-s^{\varepsilon }K/(1-\alpha )]}{[1+s^{\varepsilon }K/(1-\alpha )]}, \end{aligned}

and the lower bound follows since $$s^{\varepsilon }\rightarrow 0$$.

We next claim that a suitable bound can be found for $$P_{\lambda ^{\varepsilon }}({\hat{\upsilon }}_{1}^{\varepsilon }/E_{\lambda ^{\varepsilon }}\upsilon _{1}^{\varepsilon }>t)$$. Recall that $$u^{\varepsilon }\in {\mathcal {P}}(\partial B_{2\delta }(O_1))$$ is the stationary probability for $$\psi ^{\varepsilon }$$ defined in (10.5). Let $$\beta ^{\varepsilon }$$ be the probability measure on $$\partial B_{\delta }(O_1)$$ obtained by integrating the transition kernel $$\psi _{1}^{\varepsilon }$$ with respect to $$u^{\varepsilon }$$, and note that integrating $$\psi _{2}^{\varepsilon }$$ with respect to $$\beta ^{\varepsilon }$$ returns $$u^{\varepsilon }$$. Since the diffusion matrix is uniformly nondegenerate, by using well-known “Gaussian type” bounds on the transition density for the process, there are $$K\in (0,\infty )$$ and $$p\in (0,\infty )$$ such that

\begin{aligned} P_{x}\left\{ X_{\theta }^{\varepsilon }\in A|X_{\theta }^{\varepsilon }\in \partial B_{2\delta }(O_1)\right\} \le Km(A)/\varepsilon ^p \end{aligned}

for all $$x\in \partial B_{\delta }(O_1)$$, where m is the uniform measure on $$\partial B_{2\delta }(O_1)$$ and $$\theta =\inf \{t>0:X_{t}^{\varepsilon }\in \partial B_{2\delta }(O_1)\cup {\mathcal {S}}_{2} (\varepsilon )\}$$. Together with Lemmas 10.11 and 10.12, this implies that for all sufficiently small $$\varepsilon >0$$ and any bounded measurable function $$h:\partial B_{2\delta }(O_1)\rightarrow {\mathbb {R}}$$,

\begin{aligned} \int _{\partial B_{2\delta }(O_1)}\int _{\partial B_{\delta }(O_1)}h(y)\psi _{2}^{\varepsilon }(dy|x)\lambda ^{\varepsilon }(dx)&\le 2\int _{\partial B_{2\delta }(O_1)}\int _{\partial B_{\delta }(O_1)}h(y)\psi _{2}^{\varepsilon } (dy|x)\beta ^{\varepsilon }(dx)\\&\le 2\int _{\partial B_{2\delta }(O_1)}h(y)u^{\varepsilon }(dy). \end{aligned}

Using the last display for the first inequality, (10.13) for the second, that $${\bar{\upsilon }}_{1}^{\varepsilon }$$ is small compared with $${\hat{\upsilon }} _{1}^{\varepsilon }$$ for the third and Lemma 10.6 for the last, there is $$\varepsilon _{1}>0$$ such that

\begin{aligned} P_{\lambda ^{\varepsilon }}({\hat{\upsilon }}_{1}^{\varepsilon }/E_{\lambda ^{\varepsilon }}\upsilon _{1}^{\varepsilon }>t)&=E_{\lambda ^{\varepsilon } }(P_{X_{{\bar{\upsilon }}_{1}^{\varepsilon }}^{\varepsilon }}({\hat{\upsilon }} _{1}^{\varepsilon }/E_{\lambda ^{\varepsilon }}\upsilon _{1}^{\varepsilon }>t)) \le 2P_{u^{\varepsilon }}(\upsilon _{1}^{\varepsilon }/E_{\lambda ^{\varepsilon }}\upsilon _{1}^{\varepsilon }>t)\\&\le 2P_{u^{\varepsilon }}(\upsilon _{1}^{\varepsilon }/E_{\beta ^{\varepsilon } }\upsilon _{1}^{\varepsilon }>t/2) \le 2P_{u^{\varepsilon }}(\upsilon _{1}^{\varepsilon }/E_{u^{\varepsilon } }\upsilon _{1}^{\varepsilon }>t/4) \le 2e^{-{\tilde{c}}t/4} \end{aligned}

for all $$\varepsilon \in (0,\varepsilon _{1})$$ and $$t\ge 0$$.

Since as noted previously $$E_{\lambda ^{\varepsilon }}\upsilon _{1}^{\varepsilon }\ge E_{\lambda ^{\varepsilon }}{\bar{\upsilon }}_{1}^{\varepsilon }$$ and since by [6, Theorem 4 and Corollary 1] there exists $$\varepsilon _{2}\in (0,1)$$ such that $$P_{\lambda ^{\varepsilon }}({\bar{\upsilon }}_1^{\varepsilon }/E_{\lambda ^{\varepsilon }}{\bar{\upsilon }}_1^{\varepsilon }>t)\le 2e^{-t/2}$$ for any $$t>0$$ and $$\varepsilon \in (0,\varepsilon _{2})$$, we conclude that for any $$t>0$$ $$P_{\lambda ^{\varepsilon }}({\bar{\upsilon }}^{\varepsilon }_{1}/E_{\lambda ^{\varepsilon }}\upsilon _{1}^{\varepsilon }>t/2) \le P_{\lambda ^{\varepsilon } }({\bar{\upsilon }}^{\varepsilon }_{1}/E_{\lambda ^{\varepsilon }}\bar{\upsilon }^{\varepsilon }_{1}> t/2) \le 2e^{-t/4}.$$ The conclusion of the lemma follows from these two bounds and

\begin{aligned} P_{\lambda ^{\varepsilon }}(\upsilon _{1}^{\varepsilon }/E_{\lambda ^{\varepsilon } }\upsilon _{1}^{\varepsilon }&>t)\le P_{\lambda ^{\varepsilon }} ({\bar{\upsilon }}^{\varepsilon }_{1}/E_{\lambda ^{\varepsilon }}\upsilon _{1}^{\varepsilon }>t/2)+P_{\lambda ^{\varepsilon }}({\hat{\upsilon }}^{\varepsilon }_{1}/E_{\lambda ^{\varepsilon }}\upsilon _{1}^{\varepsilon }>t/2). \end{aligned}

$$\square$$

### Lemma 10.14

For any $$j\in L_{\mathrm{{s}}}$$ and any distribution $$\lambda ^{\varepsilon }$$ on $$\partial B_{\delta }(O_j)$$, $$\upsilon _{j}^{\varepsilon }/E_{\lambda ^{\varepsilon }}\upsilon _{j} ^{\varepsilon }$$ converges in distribution to an Exp(1) random variable under $$P_{\lambda ^{\varepsilon }}.$$ Moreover, $$E_{\lambda ^{\varepsilon }}e^{it\upsilon _{j}^{\varepsilon }/E_{\lambda ^{\varepsilon } }\upsilon _{j}^{\varepsilon }}\rightarrow 1/(1-it)$$ uniformly on any compact set in $${\mathbb {R}}$$.

### Proof

We give the proof for the case $$j=1$$. Recall that $$E_{u^{\varepsilon }}e^{it\upsilon _{1}^{\varepsilon } /E_{u^{\varepsilon }}\upsilon _{1}^{\varepsilon }}\rightarrow 1/(1-it)$$ uniformly on any compact set in $${\mathbb {R}}$$ as $$\varepsilon \rightarrow 0$$ from Remark 10.5. We would like to show that $$E_{\lambda ^{\varepsilon }}e^{it\upsilon _{1}^{\varepsilon }/E_{\lambda ^{\varepsilon } }\upsilon _{1}^{\varepsilon }}\rightarrow 1/(1-it)$$ uniformly on any compact set in $${\mathbb {R}}$$. Since $$\upsilon _{1}^{\varepsilon }={\bar{\upsilon }}^{\varepsilon }_{1}+\hat{\upsilon }^{\varepsilon }_{1}$$ with $${\bar{\upsilon }}^{\varepsilon }_{1}$$ the first hitting time to $$\partial B_{2\delta }(O_{1}),$$ we know that $$E_{\lambda ^{\varepsilon }}{\bar{\upsilon }}^{\varepsilon }_{1}/E_{\lambda ^{\varepsilon } }\upsilon _{1}^{\varepsilon }\rightarrow 0$$ and thus $$E_{\lambda ^{\varepsilon } }{\hat{\upsilon }}^{\varepsilon }_{1}/E_{\lambda ^{\varepsilon }}\upsilon _{1}^{\varepsilon }\rightarrow 1.$$ Observe that

\begin{aligned} E_{\lambda ^{\varepsilon }}e^{it\upsilon _{1}^{\varepsilon }/E_{\lambda ^{\varepsilon }}\upsilon _{1}^{\varepsilon }} =E_{\lambda ^{\varepsilon }}\left[ e^{it{\bar{\upsilon }}^{\varepsilon } _{1}/E_{\lambda ^{\varepsilon }}\upsilon _{1}^{\varepsilon }}\cdot E_{X^{\varepsilon }\left( {\bar{\upsilon }}^{\varepsilon }_{1}\right) }\left( e^{it{\hat{\upsilon }}^{\varepsilon }_{1}/E_{\lambda ^{\varepsilon }}\upsilon _{1}^{\varepsilon }}\right) \right] . \end{aligned}

Moreover, by Lemma 10.12,

\begin{aligned} E_{\lambda ^{\varepsilon }}\left[ E_{X^{\varepsilon }\left( \bar{\upsilon }^{\varepsilon }_{1}\right) }\left( e^{it{\hat{\upsilon }}^{\varepsilon } _{1}/E_{\lambda ^{\varepsilon }}\upsilon _{1}^{\varepsilon }}\right) \right] \le \frac{[1+s^{\varepsilon }K/(1-\alpha )]}{[1-s^{\varepsilon }K/(1-\alpha )]}E_{u^{\varepsilon }}e^{it\upsilon _{1}^{\varepsilon }/E_{\lambda ^{\varepsilon }}\upsilon _{1}^{\varepsilon }}\rightarrow 1/(1-it) \end{aligned}

and

\begin{aligned} E_{\lambda ^{\varepsilon }}\left[ E_{X^{\varepsilon }\left( \bar{\upsilon }^{\varepsilon }_{1}\right) }\left( e^{it{\hat{\upsilon }}^{\varepsilon } _{1}/E_{\lambda ^{\varepsilon }}\upsilon _{1}^{\varepsilon }}\right) \right] \ge \frac{[1-s^{\varepsilon }K/(1-\alpha )]}{[1+s^{\varepsilon }K/(1-\alpha )]}E_{u^{\varepsilon }}e^{it\upsilon _{1}^{\varepsilon }/E_{\lambda ^{\varepsilon }}\upsilon _{1}^{\varepsilon }}\rightarrow 1/(1-it). \end{aligned}

Since $$E_{\lambda ^{\varepsilon }}{\bar{\upsilon }}^{\varepsilon }_{1} /E_{\lambda ^{\varepsilon }}\upsilon _{1}^{\varepsilon }\rightarrow 0$$ and $$e^{ix}$$ is a bounded and continuous function, a conditioning argument gives

\begin{aligned} \left| E_{\lambda ^{\varepsilon }}e^{it\upsilon _{1}^{\varepsilon } /E_{\lambda ^{\varepsilon }}\upsilon _{1}^{\varepsilon }}-E_{\lambda ^{\varepsilon }}\left[ E_{X^{\varepsilon }\left( {\bar{\upsilon }}^{\varepsilon }_{1}\right) }\left( e^{it{\hat{\upsilon }}^{\varepsilon }_{1}/E_{\lambda ^{\varepsilon } }\upsilon _{1}^{\varepsilon }}\right) \right] \right| \le E_{\lambda ^{\varepsilon }}\left| e^{it{\bar{\upsilon }}^{\varepsilon }_{1}/E_{\lambda ^{\varepsilon }}\upsilon _{1}^{\varepsilon }}-1\right| \rightarrow 0. \end{aligned}

We conclude that $$E_{\lambda ^{\varepsilon }}e^{it\upsilon _{1}^{\varepsilon }/E_{\lambda ^{\varepsilon }}\upsilon _{1}^{\varepsilon }}\rightarrow 1/(1-it)$$ uniformly on any compact set in $${\mathbb {R}}$$. $$\square$$
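The exponential limit in Lemma 10.14 can be observed in a simple simulation. The following Euler–Maruyama sketch uses a one-dimensional Ornstein–Uhlenbeck-type well with illustrative parameters ($$\varepsilon =0.25$$, exit at $$|x|=1$$, all chosen for the sketch rather than taken from the paper) and checks two signatures of an Exp(1) limit for the normalized exit time: a coefficient of variation near 1 and a tail probability $$P(T/ET>1)$$ near $$e^{-1}\approx 0.37$$.

```python
import numpy as np

rng = np.random.default_rng(42)
eps, dt, npaths, level = 0.25, 0.01, 1000, 1.0   # illustrative parameters

x = np.zeros(npaths)            # start all paths at the stable point 0
T = np.full(npaths, np.nan)     # exit times
alive = np.ones(npaths, dtype=bool)
t = 0.0
while alive.any() and t < 5000.0:
    n = int(alive.sum())
    # dX = -X dt + sqrt(eps) dW; the quasipotential at |x| = 1 equals 1
    x[alive] += -x[alive] * dt + np.sqrt(eps * dt) * rng.standard_normal(n)
    t += dt
    exited = alive & (np.abs(x) >= level)
    T[exited] = t
    alive &= ~exited

T = T[~np.isnan(T)]             # keep only paths that exited
Tn = T / T.mean()               # normalize by the empirical mean

cv = Tn.std() / Tn.mean()       # Exp(1) has coefficient of variation 1
tail = (Tn > 1.0).mean()        # Exp(1) has P(T > 1) = e^{-1}
assert 0.7 < cv < 1.3 and 0.25 < tail < 0.48
```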

### Return Times (Single Cycles)

In this subsection, we extend all three of the preceding results to return times in the single-cycle case (i.e., when $$h_1>w$$).

### Lemma 10.15

There exists $$\delta _{0}\in (0,1)$$ such that for any $$\delta \in (0,\delta _{0})$$ and any distribution $$\lambda ^{\varepsilon }$$ on $$\partial B_{\delta }(O_{1}),$$

\begin{aligned} \lim _{\varepsilon \rightarrow 0}\varepsilon \log E_{\lambda ^{\varepsilon }} \tau _{1}^{\varepsilon }=\min _{y\in \cup _{k\in L\setminus \{1\}}\partial B_{\delta }(O_{k})}V(O_{1},y). \end{aligned}

### Proof

We have $$E_{\lambda ^{\varepsilon }}\tau _{1}^{\varepsilon }=E_{\lambda ^{\varepsilon }}\upsilon _{1}^{\varepsilon }+E_{\lambda ^{\varepsilon }} (\tau _{1}^{\varepsilon }-\upsilon _{1}^{\varepsilon })$$, and by Lemma 10.2 we know that

\begin{aligned} \lim _{\varepsilon \rightarrow 0}\varepsilon \log E_{\lambda ^{\varepsilon } }\upsilon _{1}^{\varepsilon }=\min _{y\in \cup _{k\in L\setminus \{1\}}\partial B_{\delta }(O_{k})}V(O_{1},y). \end{aligned}

Moreover, observe that $$W(O_{j})>W(O_{1})$$ for any $$j\in L\setminus \{1\}$$ due to Remark 3.14. Note that $$\upsilon _{1}^{\varepsilon }$$ as defined in (10.1) coincides with $$\sigma _{0}^{\varepsilon }$$ defined in (3.4). Applying Remark 7.22 with $$f=0$$, $$A=M$$ and $$\eta =[\min _{j\in L\setminus \{1\}} W(O_{j})-W(O_{1})]/3,$$ we find that there exists $$\delta _{1}\in (0,1)$$ such that for any $$\delta \in (0,\delta _{1})$$

\begin{aligned}&\liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left( \sup _{z\in \partial B_{\delta }(O_{1})}E_{z}\left( \tau _{1}^{\varepsilon } -\upsilon _{1}^{\varepsilon }\right) \right) \\&\quad \ge \min _{j\in L\setminus \{1\}}W(O_{j})-W(O_{1})-\min _{j\in L\setminus \{1\}}V(O_{1},O_{j})-\eta \\&\quad =-\min _{j\in L\setminus \{1\}}V(O_{1},O_{j})+2\eta . \end{aligned}

On the other hand, by continuity of $$V(O_{1},\cdot ),$$ for this given $$\eta ,$$ there exists $$\delta _{2}\in (0,1)$$ such that for any $$\delta \in (0,\delta _{2})$$

\begin{aligned} \min _{y\in \cup _{k\in L\setminus \{1\}}\partial B_{\delta }(O_{k})} V(O_{1},y)\ge \min _{j\in L\setminus \{1\}}V(O_{1},O_{j})-\eta . \end{aligned}

Thus, for any $$\delta \in (0,\delta _{0})$$ with $$\delta _{0}\doteq \delta _{1} \wedge \delta _{2}$$

\begin{aligned} \limsup _{\varepsilon \rightarrow 0}\varepsilon \log E_{\lambda ^{\varepsilon } }(\tau _{1}^{\varepsilon }-\upsilon _{1}^{\varepsilon })&\le \limsup _{\varepsilon \rightarrow 0}\varepsilon \log \left( \sup _{z\in \partial B_{\delta }(O_{1})}E_{z} (\tau _{1}^{\varepsilon }-\upsilon _{1}^{\varepsilon })\right) \\&\le \min _{j\in L\setminus \{1\}}V(O_{1},O_{j})-2\eta \le \min _{y\in \cup _{k\in L\setminus \{1\}}\partial B_{\delta }(O_{k})} V(O_{1},y)-\eta \\&=\lim _{\varepsilon \rightarrow 0}\varepsilon \log E_{\lambda ^{\varepsilon } }\upsilon _{1}^{\varepsilon }-\eta \end{aligned}

and

\begin{aligned} \lim _{\varepsilon \rightarrow 0}\varepsilon \log E_{\lambda ^{\varepsilon }} \tau _{1}^{\varepsilon }&=\lim _{\varepsilon \rightarrow 0}\varepsilon \log E_{\lambda ^{\varepsilon }}\upsilon _{1}^{\varepsilon } =\min _{y\in \cup _{k\in L\setminus \{1\}}\partial B_{\delta }(O_{k})}V(O_{1},y). \end{aligned}

$$\square$$

### Lemma 10.16

For $$\delta >0$$ sufficiently small and any distribution $$\lambda ^{\varepsilon }$$ on $$\partial B_{\delta }(O_{1}),$$ there exist $${\tilde{c}}>0$$ and $$\varepsilon _{0}\in (0,1)$$ such that

\begin{aligned} P_{\lambda ^{\varepsilon }}(\tau _{1}^{\varepsilon }/E_{\lambda ^{\varepsilon }} \tau _{1}^{\varepsilon }>t)\le e^{-{\tilde{c}}t} \end{aligned}

for all $$t\ge 1$$ and $$\varepsilon \in (0,\varepsilon _{0}).$$

### Proof

For any $$t>0,$$ $$P_{\lambda ^{\varepsilon }}(\tau _{1}^{\varepsilon } /E_{\lambda ^{\varepsilon }}\tau _{1}^{\varepsilon }>t)\le P_{\lambda ^{\varepsilon }}(\upsilon _{1}^{\varepsilon }/E_{\lambda ^{\varepsilon }}\tau _{1}^{\varepsilon }>t/2)+P_{\lambda ^{\varepsilon }}((\tau _{1}^{\varepsilon } -\upsilon _{1}^{\varepsilon })/E_{\lambda ^{\varepsilon }}\tau _{1}^{\varepsilon }>t/2)$$. It is easy to see that the first term has this sort of bound due to Lemma 10.13 and $$E_{\lambda ^{\varepsilon }}\tau _{1}^{\varepsilon }\ge E_{\lambda ^{\varepsilon } }\upsilon _{1}^{\varepsilon }.$$

It suffices to show that this sort of bound holds for the second term, namely, there exists a constant $${\tilde{c}}>0$$ such that

\begin{aligned} P_{\lambda ^{\varepsilon }}\left( (\tau _{1}^{\varepsilon } -\upsilon _{1}^{\varepsilon })/E_{\lambda ^{\varepsilon }}\tau _{1}^{\varepsilon } >t\right) \le e^{-{\tilde{c}} t} \end{aligned}

for all $$t\in [0,\infty )$$ and $$\varepsilon$$ sufficiently small. By Chebyshev’s inequality,

\begin{aligned} P_{\lambda ^{\varepsilon }}\left( (\tau _{1}^{\varepsilon }-\upsilon _{1}^{\varepsilon }) /E_{\lambda ^{\varepsilon }}\tau _{1}^{\varepsilon }>t\right) =P_{\lambda ^{\varepsilon }} (e^{(\tau _{1}^{\varepsilon }-\upsilon _{1}^{\varepsilon })/E_{\lambda ^{\varepsilon }} \tau _{1}^{\varepsilon }}>e^{t})\le e^{-t}E_{\lambda ^{\varepsilon }}e^{(\tau _{1}^{\varepsilon }-\upsilon _{1}^{\varepsilon }) /E_{\lambda ^{\varepsilon }}\tau _{1}^{\varepsilon }}, \end{aligned}

and it therefore suffices to prove that $$E_{\lambda ^{\varepsilon }}e^{(\tau _{1}^{\varepsilon }-\upsilon _{1}^{\varepsilon }) /E_{\lambda ^{\varepsilon }}\tau _{1}^{\varepsilon }}$$ is less than a constant for all $$\varepsilon$$ sufficiently small. Observe that

\begin{aligned} \tau _{1}^{\varepsilon }-\upsilon _{1}^{\varepsilon }=\sum \nolimits _{j\in L\setminus \{1\}}\sum \nolimits _{k=1}^{N_{j}}\upsilon _{j}^{\varepsilon }(k), \end{aligned}

where $$N_{j}$$ is the number of visits to $$\partial B_{\delta }(O_{j})$$, and $$\upsilon _{j}^{\varepsilon }(k)$$ is the k-th copy of the first hitting time to $$\cup _{i\in L\setminus \{j\}}\partial B_\delta (O_{i})$$ after starting from $$\partial B_{\delta }(O_{j}).$$

If we consider $$\partial B_{\delta }(O_{j})$$ as the starting location of a regenerative cycle, as was done previously in the paper for $$\partial B_{\delta }(O_{1})$$, then there is a unique stationary distribution. If the process starts with that as the initial distribution, then the times $$\upsilon _{j}^{\varepsilon }(k)$$ are independent of each other and of the number of returns to $$\partial B_{\delta }(O_{j})$$ before the first visit to $$\partial B_{\delta }(O_{1})$$. While the random times as used here do not arise from starting with such a distribution, we can use Lemma 10.12 to bound the error in terms of a multiplicative factor that is independent of $$\varepsilon$$ for small $$\varepsilon >0$$, and thereby justify treating $$N_{j}$$ as though it were independent of the $$\upsilon _{j}^{\varepsilon }(k)$$.

Recalling that $$l\doteq |L|$$,

\begin{aligned} E_{\lambda ^{\varepsilon }}e^{(\tau _{1}^{\varepsilon }-\upsilon _{1}^{\varepsilon }) /E_{\lambda ^{\varepsilon }}\tau _{1}^{\varepsilon }}&= E_{\lambda ^{\varepsilon }}\prod \nolimits _{j\in L\setminus \{1\}}e^{\left( \sum _{k=1}^{N_{j}}\upsilon _{j}^{\varepsilon }(k)\right) /E_{\lambda ^{\varepsilon }} \tau _{1}^{\varepsilon }}\\&\le \prod \nolimits _{j\in L\setminus \{1\}}\left( E_{\lambda ^{\varepsilon }}\left[ e^{\left( \sum _{k=1}^{N_{j}} \upsilon _{j}^{\varepsilon }(k)\right) (l-1)/E_{\lambda ^{\varepsilon }} \tau _{1}^{\varepsilon }}\right] \right) ^{1/(l-1)}, \end{aligned}

where we use the generalized Hölder’s inequality for the last line. Thus, if we can show for each $$j\in L\setminus \{1\}$$ that $$E_{\lambda ^{\varepsilon }}\exp [{(\sum _{k=1}^{N_{j}}\upsilon _{j}^{\varepsilon }(k))(l-1)/E_{\lambda ^{\varepsilon }} \tau _{1}^{\varepsilon }}]$$ is less than a constant for all $$\varepsilon$$ sufficiently small, then we are done.

Such an estimate is straightforward for the case of an unstable equilibrium, i.e., for $$j\in L\backslash L_{\mathrm{{s}}}$$, and so we focus on the case $$j\in L_{\mathrm{{s}}}\backslash \{1\}$$. For this case, we apply Lemma 10.13 to find that there exist $$\tilde{{c}}>0$$ and $$\varepsilon _{0}\in (0,1)$$ such that for any $$j\in L$$ and any distribution $${\tilde{\lambda }}^{\varepsilon }$$ on $$\partial B_{\delta }(O_{j}),$$

\begin{aligned} P_{{\tilde{\lambda }}^{\varepsilon }}(\upsilon _{j}^{\varepsilon } /E_{{\tilde{\lambda }}^{\varepsilon }}\upsilon _{j}^{\varepsilon }>t)\le e^{-\tilde{{c}}t} \end{aligned}
(10.14)

for any $$t>0$$ and $$\varepsilon \in (0,\varepsilon _{0}).$$ Hence, given any $$\eta >0$$, there is $${\bar{\varepsilon }}_0 \in (0,\varepsilon _0)$$ such that for all $$\varepsilon \in (0,{\bar{\varepsilon }}_0)$$ and any $$j\in L\setminus \{1\}$$

\begin{aligned} E_{\lambda ^{\varepsilon }}\left[ e^{\upsilon _{j}^{\varepsilon }(l-1) /E_{\lambda ^{\varepsilon }}\tau _{1}^{\varepsilon }}\right]&\le 1+\int _{1}^{\infty }P_{\lambda ^{\varepsilon }}(e^{(l-1) \upsilon _{j}^{\varepsilon }/E_{\lambda ^{\varepsilon }}\tau _{1}^{\varepsilon }}>t)dt\\&\le 1+\int _{1}^{\infty }P_{\lambda ^{\varepsilon }} \left( \upsilon _{j}^{\varepsilon }/E_{\lambda ^{\varepsilon }}\upsilon _{j}^{\varepsilon } >\log t\cdot E_{\lambda ^{\varepsilon }}\tau _{1}^{\varepsilon }/((l-1) E_{\lambda ^{\varepsilon }}\upsilon _{j}^{\varepsilon })\right) dt\\&\le 1+\int _{1}^{\infty }t^{-\tilde{{c}}E_{\lambda ^{\varepsilon }} \tau _{1}^{\varepsilon }/((l-1)E_{\lambda ^{\varepsilon }}\upsilon _{j}^{\varepsilon })}dt\\&=1+\left( \tilde{{c}}E_{\lambda ^{\varepsilon }}\tau _{1}^{\varepsilon } /((l-1)E_{\lambda ^{\varepsilon }}\upsilon _{j}^{\varepsilon })-1\right) ^{-1} \le 1+e^{-\frac{1}{\varepsilon }(h_{1}-h_{j}-\eta )}, \end{aligned}

where the last inequality comes from Lemmas 10.2 and 10.15, after shrinking the range of $$\varepsilon$$ if necessary.
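The tail-to-moment step used in the display above is elementary but worth isolating: if a nonnegative random variable $$Y$$ satisfies $$P(Y>s)\le e^{-{\tilde{c}}s}$$, then for $$0<\theta <{\tilde{c}}$$ one gets $$Ee^{\theta Y}\le 1+\int _{1}^{\infty }t^{-{\tilde{c}}/\theta }dt=1+({\tilde{c}}/\theta -1)^{-1}$$. As an illustration (not part of the proof), the sketch below checks that for $$Y\sim \mathrm {Exp}({\tilde{c}})$$, where the tail bound holds with equality, this bound coincides with the exact moment generating function $${\tilde{c}}/({\tilde{c}}-\theta )$$:

```python
def mgf_exp(c, theta):
    # Exact MGF of Y ~ Exp(c): E[e^{theta Y}] = c/(c - theta) for theta < c.
    return c / (c - theta)

def tail_integral_bound(c, theta):
    # Bound from the proof: 1 + integral_1^infty t^{-c/theta} dt
    #                     = 1 + (c/theta - 1)^{-1}, valid when c/theta > 1.
    return 1.0 + 1.0 / (c / theta - 1.0)

# For an exact Exp(c) tail the two expressions coincide, i.e., the
# tail-integration step loses nothing in this case.
c, theta = 2.0, 0.5
assert abs(mgf_exp(c, theta) - tail_integral_bound(c, theta)) < 1e-12
```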

By using induction and a conditioning argument, it follows that for any $$\eta >0$$, for any $$j\in L\setminus \{1\}$$ and for any $$n\in {\mathbb {N}},$$

\begin{aligned} E_{\lambda ^{\varepsilon }}\left[ e^{\left( \sum _{k=1}^{n}\upsilon _{j}^{\varepsilon } (k)\right) (l-1)/E_{\lambda ^{\varepsilon }}\tau _{1}^{\varepsilon }}\right] \le \left( 1+e^{-\frac{1}{\varepsilon }(h_{1}-h_{j}-\eta )}\right) ^{n}. \end{aligned}

This implies that

\begin{aligned} E_{\lambda ^{\varepsilon }}\left[ e^{\left( \sum _{k=1}^{N_{j}}\upsilon _{j}^{\varepsilon } (k)\right) (l-1)/E_{\lambda ^{\varepsilon }}\tau _{1}^{\varepsilon }}\right] \le E_{\lambda ^{\varepsilon }}\left[ \left( 1+e^{-\frac{1}{\varepsilon }(h_{1}-h_{j}-\eta )} \right) ^{N_{j}}\right] . \end{aligned}

Next we need the distribution of $$N_{j}$$, i.e., $$P_{\lambda ^{\varepsilon }}(N_{j}=n)$$ for $$n\in {\mathbb {N}}$$. Following arguments similar to those in the proofs of Lemmas 7.3 and 7.6, for sufficiently small $$\varepsilon >0$$ we find

\begin{aligned} P_{\lambda ^{\varepsilon }}(N_{j}=n)&\le \left( 1\wedge e^{-\frac{1}{\varepsilon }(W(O_{j})-W(O_{1}\cup O_{j})-h_{1}-\eta )}\right) (1-q_{j})^{n-1}q_{j}, \end{aligned}

where

\begin{aligned} q_{j} \doteq \frac{\inf _{x\in \partial B_{\delta }(O_{j})}P_{x}({\tilde{T}}_{1}<{\tilde{T}}_{j}^{+})}{1-\sup _{y\in \partial B_{\delta }(O_{j})}p(y,\partial B_{\delta }(O_{j}))}\ge e^{-\frac{1}{\varepsilon }(W(O_{1})-W(O_{1}\cup O_{j})-h_{j}+\eta )}. \end{aligned}
(10.15)

Therefore,

\begin{aligned}&E_{\lambda ^{\varepsilon }}\left[ e^{\left( \sum _{k=1}^{N_{j}}\upsilon _{j}^{\varepsilon } (k)\right) (l-1)/E_{\lambda ^{\varepsilon }}\tau _{1}^{\varepsilon }}\right] \\&\quad \le E_{\lambda ^{\varepsilon }}\left[ \left( 1+e^{-\frac{1}{\varepsilon }(h_{1}-h_{j} -\eta )}\right) ^{N_{j}}\right] =\sum \nolimits _{n=1}^{\infty }\left( 1+e^{-\frac{1}{\varepsilon }(h_{1}-h_{j} -\eta )}\right) ^{n}P_{\lambda ^{\varepsilon }}(N_{j}=n)\\&\quad \le \sum _{n=1}^{\infty }\left( 1\wedge e^{-\frac{1}{\varepsilon }(W(O_{j}) -W(O_{1}\cup O_{j})-h_{1}-\eta )}\right) \left( 1+e^{-\frac{1}{\varepsilon }(h_{1} -h_{j}-\eta )}\right) ^{n}(1-q_{j})^{n-1}q_{j}\\&\quad =\frac{\left( 1\wedge e^{-\frac{1}{\varepsilon }(W(O_{j})-W(O_{1}\cup O_{j}) -h_{1}-\eta )}\right) q_{j}\left( 1+e^{-\frac{1}{\varepsilon }(h_{1}-h_{j}-\eta )}\right) }{1-\left( 1+e^{-\frac{1}{\varepsilon }(h_{1}-h_{j}-\eta )}\right) (1-q_{j})}\\&\quad \le \frac{\left( 1\wedge e^{-\frac{1}{\varepsilon }(W(O_{j})-W(O_{1}\cup O_{j}) -h_{1}-\eta )}\right) \left( 1+e^{-\frac{1}{\varepsilon }(h_{1}-h_{j}-\eta )}\right) }{-e^{-\frac{1}{\varepsilon }(h_{1}-h_{j}-\eta )}/q_{j}+1}\\&\quad \le \frac{\left( 1\wedge e^{-\frac{1}{\varepsilon }(W(O_{j})-W(O_{1}\cup O_{j})-h_{1}-\eta )}\right) \left( 1+e^{-\frac{1}{\varepsilon }(h_{1}-h_{j}-\eta )} \right) }{-e^{-\frac{1}{\varepsilon }(h_{1}+W(O_1\cup O_j)-W(O_1)-2\eta )}+1}. \end{aligned}

The second equality holds since $$h_1>w\ge h_j$$ and (10.15) imply $$(1-q_{j})(1+e^{-\frac{1}{\varepsilon }(h_{1}-h_{j}- \eta )})<1$$ for all $$\varepsilon$$ sufficiently small; the last inequality is from (10.15).
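The geometric-series identity behind that equality can be checked directly: for $$N$$ with $$P(N=n)=(1-q)^{n-1}q$$ and $$(1+x)(1-q)<1$$, one has $$E(1+x)^{N}=q(1+x)/(1-(1+x)(1-q))$$. A small numerical sketch (illustrative values only, not the small-$$\varepsilon$$ parameters of the proof):

```python
def geom_moment_truncated(x, q, terms=2000):
    # Truncated series for E[(1+x)^N] with P(N=n) = (1-q)^(n-1) q, n >= 1.
    return sum((1 + x) ** n * (1 - q) ** (n - 1) * q for n in range(1, terms))

def geom_moment_closed(x, q):
    # Closed form used in the proof, valid when (1+x)(1-q) < 1.
    return q * (1 + x) / (1 - (1 + x) * (1 - q))

x, q = 0.01, 0.3
assert (1 + x) * (1 - q) < 1  # summability condition from the text
assert abs(geom_moment_truncated(x, q) - geom_moment_closed(x, q)) < 1e-9
```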

Then, we use the fact that for $$x\in (0,1/2)$$, $$1/(1-x)\le 1+2x$$ to find that

\begin{aligned}&E_{\lambda ^{\varepsilon }}\left[ e^{\left( \sum _{k=1}^{N_{j}} \upsilon _{j}^{\varepsilon }(k)\right) (l-1)/E_{\lambda ^{\varepsilon }} \tau _{1}^{\varepsilon }}\right] \nonumber \\&\quad \le \left( 1\wedge e^{-\frac{1}{\varepsilon }(W(O_{j})-W(O_{1}\cup O_{j})-h_{1}-\eta )}\right) \left( 1+e^{-\frac{1}{\varepsilon }(h_{1}-h_{j}-\eta )}\right) \nonumber \\&\qquad \times \left( 1+2e^{-\frac{1}{\varepsilon }(h_{1}+W(O_1\cup O_j)-W(O_1)-2\eta )}\right) \nonumber \\&\quad \le \left( 1\wedge e^{-\frac{1}{\varepsilon }(W(O_{j})-W(O_{1}\cup O_{j})-h_{1} -\eta )}\right) \left( 1+5e^{-\frac{1}{\varepsilon }(h_{1}+W(O_1\cup O_j)-W(O_1)-2\eta )} \right) \nonumber \\&\quad \le 1\cdot 6=6. \end{aligned}
(10.16)

The third inequality holds due to the fact that $$W(O_1)\ge W(O_1\cup O_j)+h_j$$ and the last inequality comes from the assumption that $$h_1>w$$ and by picking $$\eta$$ to be smaller than $$(h_1-w)/2$$. This completes the proof. $$\square$$

### Lemma 10.17

Given $$\delta >0$$ sufficiently small, and for any distribution $$\lambda ^{\varepsilon }$$ on $$\partial B_{\delta }(O_{1})$$, $$\tau _{1}^{\varepsilon }/E_{\lambda ^{\varepsilon }}\tau _{1}^{\varepsilon }$$ converges in distribution to an Exp(1) random variable under $$P_{\lambda ^{\varepsilon }}.$$ Moreover, $$E_{\lambda ^{\varepsilon }}( e^{it\left( \tau _{1}^{\varepsilon }/E_{\lambda ^{\varepsilon }}\tau _{1}^{\varepsilon }\right) }) \rightarrow 1/(1-it)$$ uniformly on any compact set in $${\mathbb {R}}$$.

### Proof

Note that

\begin{aligned} E_{\lambda ^{\varepsilon }}\left( e^{it\left( \tau _{1}^{\varepsilon }/E_{\lambda ^{\varepsilon }}\tau _{1}^{\varepsilon }\right) }\right) =E_{\lambda ^{\varepsilon }}\left( e^{it\left( \upsilon _{1}^{\varepsilon }/E_{\lambda ^{\varepsilon }}\tau _{1}^{\varepsilon }\right) }E_{X^{\varepsilon }( \upsilon _{1}^{\varepsilon }) }\left( e^{it\left( (\tau _{1}^{\varepsilon }-\upsilon _{1}^{\varepsilon })/E_{\lambda ^{\varepsilon }} \tau _{1}^{\varepsilon }\right) }\right) \right) . \end{aligned}

Since

\begin{aligned} E_{\lambda ^{\varepsilon }}\left( e^{it\left( \upsilon _{1}^{\varepsilon }/E_{\lambda ^{\varepsilon }}\tau _{1}^{\varepsilon }\right) }\right) =E_{\lambda ^{\varepsilon }}\left( e^{it\left( E_{\lambda ^{\varepsilon } }\upsilon _{1}^{\varepsilon }/E_{\lambda ^{\varepsilon }}\tau _{1}^{\varepsilon }\right) \left( \upsilon _{1}^{\varepsilon }/E_{\lambda ^{\varepsilon }} \upsilon _{1}^{\varepsilon }\right) }\right) \end{aligned}

and we know that $$E_{\lambda ^{\varepsilon }}\upsilon _{1}^{\varepsilon }/E_{\lambda ^{\varepsilon }}\tau _{1}^{\varepsilon }\rightarrow 1$$ from the proof of Lemma 10.15, by applying Lemma 10.14 we have $$E_{\lambda ^{\varepsilon }}( e^{it\left( \upsilon _{1}^{\varepsilon }/E_{\lambda ^{\varepsilon }}\tau _{1}^{\varepsilon }\right) }) \rightarrow 1/(1-it)$$ uniformly on any compact set in $${\mathbb {R}}$$. Also

\begin{aligned} \left| E_{\lambda ^{\varepsilon }}\left( e^{it\left( \tau _{1} ^{\varepsilon }/E_{\lambda ^{\varepsilon }}\tau _{1}^{\varepsilon }\right) }\right) \!-\!E_{\lambda ^{\varepsilon }}\left( e^{it\left( \upsilon _{1}^{\varepsilon }/E_{\lambda ^{\varepsilon }}\tau _{1}^{\varepsilon }\right) }\right) \right| \!\le \! E_{\lambda ^{\varepsilon }}\left| E_{X^{\varepsilon }( \upsilon _{1}^{\varepsilon }) }\left( e^{it\left( (\tau _{1}^{\varepsilon }-\upsilon _{1}^{\varepsilon })/E_{\lambda ^{\varepsilon }}\tau _{1}^{\varepsilon }\right) }\right) -1\right| , \end{aligned}

where the right-hand side converges to 0 using $$E_{\lambda ^{\varepsilon }}(\tau _{1}^{\varepsilon }-\upsilon _{1}^{\varepsilon }) /E_{\lambda ^{\varepsilon }}\tau _{1}^{\varepsilon } \rightarrow 0$$ and the dominated convergence theorem. The convergence of $$\tau _{1}^{\varepsilon }/E_{\lambda ^{\varepsilon }}\tau _{1}^{\varepsilon }$$ to an Exp(1) random variable under $$P_{\lambda ^{\varepsilon }}$$ and the uniform convergence of $$E_{\lambda ^{\varepsilon }}( e^{it\left( \tau _{1}^{\varepsilon }/E_{\lambda ^{\varepsilon }}\tau _{1}^{\varepsilon }\right) })$$ to $$1/(1-it)$$ on compact sets in $${\mathbb {R}}$$ follow. $$\square$$

### Return Times (Multicycles)

In this subsection, we extend the three preceding results to multi-regenerative cycles (when $$w\ge h_1$$). Recall that the multicycle times $${\hat{\tau }}^\varepsilon _i$$ are defined according to (6.4), where $$\{{\mathbf {M}}^{\varepsilon }_i\}_{i\in {\mathbb {N}}}$$ is a sequence of independent and geometrically distributed random variables with parameter $$e^{-m/\varepsilon }$$ for some $$m>0$$ such that $$m+h_1>w$$. In addition, $$\{{\mathbf {M}}^{\varepsilon }_i\}$$ is independent of $$\{\tau ^\varepsilon _n\}$$.

### Lemma 10.18

There exists $$\delta _{0}\in (0,1)$$ such that for any $$\delta \in (0,\delta _{0})$$ and any distribution $$\lambda ^{\varepsilon }$$ on $$\partial B_{\delta }(O_{1}),$$

\begin{aligned} \lim _{\varepsilon \rightarrow 0}\varepsilon \log E_{\lambda ^{\varepsilon }} {\hat{\tau }}_{1}^{\varepsilon }=m+\min _{y\in \cup _{k\in L\setminus \{1\}}\partial B_{\delta }(O_{k})}V(O_{1},y). \end{aligned}

### Proof

Since $$\{{\mathbf {M}}^{\varepsilon }_i\}$$ is independent of $$\{\tau ^\varepsilon _n\}$$ and $$E_{\lambda ^{\varepsilon }}{\mathbf {M}}^{\varepsilon }_i= e^{m/\varepsilon }$$, we apply Lemma 10.15 to complete the proof. $$\square$$

### Lemma 10.19

Given $$\delta >0,$$ for any distribution $$\lambda ^{\varepsilon }$$ on $$\partial B_{\delta }(O_{1}),$$ there exist $${\tilde{c}}>0$$ and $$\varepsilon _{0}\in (0,1)$$ such that

\begin{aligned} P_{\lambda ^{\varepsilon }}({\hat{\tau }}_{1}^{\varepsilon }/E_{\lambda ^{\varepsilon }} {\hat{\tau }}_{1}^{\varepsilon }>t)\le e^{-{\tilde{c}}t} \end{aligned}

for all $$t\ge 1$$ and $$\varepsilon \in (0,\varepsilon _{0}).$$

### Proof

We divide the multicycle into a sum of two terms. The first term is the sum of all the hitting times to $$\cup _{j\in L\setminus \{1\}}\partial B_{\delta }(O_j)$$, and the second term is the sum of all residual times. That is, $${\hat{\tau }}_1^{\varepsilon }={\hat{\upsilon }}_{1}^{\varepsilon } +({\hat{\tau }}_1^{\varepsilon }-{\hat{\upsilon }}_{1}^{\varepsilon })$$, where

\begin{aligned} {\hat{\upsilon }}_{1}^{\varepsilon }=\sum \nolimits _{i=1}^{{\mathbf {M}}^{\varepsilon }_1} \upsilon _{1}^{\varepsilon }(i)\text { and }{\hat{\tau }}_1^{\varepsilon }-{\hat{\upsilon }}_{1}^{\varepsilon }=\sum \nolimits _{i=1}^{{\mathbf {M}}^{\varepsilon }_1}\left( \sum \nolimits _{j\in L\setminus \{1\}}\sum \nolimits _{k=1}^{N_{j}}\upsilon _{j}^{\varepsilon }(i,k)\right) . \end{aligned}

As discussed many times, it suffices to show that there exist $${\tilde{c}}>0$$ and $$\varepsilon _{0}\in (0,1)$$ such that

\begin{aligned} P_{\lambda ^{\varepsilon }}\left( {\hat{\upsilon }}_{1}^{\varepsilon } /E_{\lambda ^{\varepsilon }}{\hat{\tau }}_1^{\varepsilon }>t\right) \le e^{-{\tilde{c}}t}\text { and }P_{\lambda ^{\varepsilon }}\left( ({\hat{\tau }}_1^{\varepsilon } -{\hat{\upsilon }}_{1}^{\varepsilon })/E_{\lambda ^{\varepsilon }} {\hat{\tau }}_1^{\varepsilon }>t\right) \le e^{-{\tilde{c}}t} \end{aligned}

for all $$t\ge 1$$ and $$\varepsilon \in (0,\varepsilon _{0}).$$

The first bound is relatively easy, since each $$\upsilon _{1}^{\varepsilon }(i)$$ is an approximate exponential with a tail bound of the given sort, and the sum of a geometric number of independent and identically distributed exponential random variables is again exponentially distributed.
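The fact invoked here — a geometric sum of i.i.d. exponentials is again exponential — can be verified through moment generating functions: if $$M(\theta )=\lambda /(\lambda -\theta )$$ is the MGF of Exp($$\lambda$$) and the number of summands is Geom($$p$$), then $$pM(\theta )/(1-(1-p)M(\theta ))=p\lambda /(p\lambda -\theta )$$, the MGF of Exp($$p\lambda$$). A numerical sketch of this identity (illustrative parameters):

```python
def mgf_exponential(lam, theta):
    # MGF of Exp(lam) at theta < lam.
    return lam / (lam - theta)

def mgf_geometric_sum(lam, p, theta):
    # MGF of S = sum of a Geom(p) number of i.i.d. Exp(lam) variables,
    # computed by conditioning on the geometric count.
    m = mgf_exponential(lam, theta)
    return p * m / (1 - (1 - p) * m)

lam, p = 3.0, 0.25
for theta in (0.1, 0.3, 0.5):  # all below p * lam = 0.75
    # The geometric sum has the MGF of Exp(p * lam).
    assert abs(mgf_geometric_sum(lam, p, theta)
               - mgf_exponential(p * lam, theta)) < 1e-12
```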

For the second bound, we use Chebyshev’s inequality again as in the proof of Lemma 10.16 to find that it suffices to prove that $$E_{\lambda ^{\varepsilon }}e^{({\hat{\tau }}_1^{\varepsilon } -{\hat{\upsilon }}_{1}^{\varepsilon })/E_{\lambda ^{\varepsilon }} {\hat{\tau }}_1^{\varepsilon }}$$ is less than a constant for all $$\varepsilon$$ sufficiently small. Now due to the independence of $${\mathbf {M}}^{\varepsilon }_1$$ and $$\{\upsilon _{j}^{\varepsilon }(i,k)\}$$, we have

\begin{aligned}&E_{\lambda ^{\varepsilon }}e^{({\hat{\tau }}_1^{\varepsilon } -{\hat{\upsilon }}_{1}^{\varepsilon })/E_{\lambda ^{\varepsilon }} {\hat{\tau }}_1^{\varepsilon }}\nonumber \\&\qquad = \sum \limits _{i=1}^{\infty }\left( E_{\lambda ^{\varepsilon }} \left[ \prod \nolimits _{j\in L\setminus \{1\}}e^{\left( \sum _{k=1}^{N_{j}} \upsilon _{j}^{\varepsilon }(k)\right) /E_{\lambda ^{\varepsilon }} {\hat{\tau }}_1^{\varepsilon }}\right] \right) ^{i}\cdot P_{\lambda ^{\varepsilon }} ({\mathbf {M}}^{\varepsilon }_1=i)\nonumber \\&\qquad =e^{-m/\varepsilon }\cdot E_{\lambda ^{\varepsilon }}\left[ \prod \nolimits _{j\in L\setminus \{1\}}e^{\left( \sum _{k=1}^{N_{j}} \upsilon _{j}^{\varepsilon }(k)\right) /E_{\lambda ^{\varepsilon }} {\hat{\tau }}_1^{\varepsilon }}\right] \nonumber \\&\qquad \qquad \cdot \sum \limits _{i=1}^{\infty }\left( E_{\lambda ^{\varepsilon }}\left[ \prod \nolimits _{j\in L\setminus \{1\}}e^{\left( \sum _{k=1}^{N_{j}}\upsilon _{j}^{\varepsilon } (k)\right) /E_{\lambda ^{\varepsilon }}{\hat{\tau }}_1^{\varepsilon }} \right] (1-e^{-m/\varepsilon })\right) ^{i-1}. \end{aligned}
(10.17)

To proceed with the computation, we must first ensure that

\begin{aligned} E_{\lambda ^{\varepsilon }}\left[ \prod \nolimits _{j\in L\setminus \{1\}}e^{\left( \sum _{k=1}^{N_{j}}\upsilon _{j}^{\varepsilon }(k) \right) /E_{\lambda ^{\varepsilon }}{\hat{\tau }}_1^{\varepsilon }}\right] (1-e^{-m/\varepsilon })<1. \end{aligned}
(10.18)

To see this, we first use the generalized Hölder’s inequality to find

\begin{aligned}&E_{\lambda ^{\varepsilon }}\left[ \prod \nolimits _{j\in L\setminus \{1\}}e^{\left( \sum _{k=1}^{N_{j}}\upsilon _{j}^{\varepsilon }(k) \right) /E_{\lambda ^{\varepsilon }}{\hat{\tau }}_1^{\varepsilon }}\right] \\&\quad \le \prod \nolimits _{j\in L\setminus \{1\}}\left( E_{\lambda ^{\varepsilon }}\left[ e^{\left( \sum _{k=1}^{N_{j}} \upsilon _{j}^{\varepsilon }(k)\right) (l-1)/E_{\lambda ^{\varepsilon }} {\hat{\tau }}_{1}^{\varepsilon }}\right] \right) ^{1/(l-1)}. \end{aligned}

Moreover, since $$m+h_1>w$$ and $$E_{\lambda ^{\varepsilon }}{\hat{\tau }}_1^{\varepsilon }=E_{\lambda ^{\varepsilon }} \tau _{1}^{\varepsilon }\cdot E_{\lambda ^{\varepsilon }}{\mathbf {M}}^{\varepsilon }_1=e^{m/\varepsilon } E_{\lambda ^{\varepsilon }}\tau _{1}^{\varepsilon }$$, by the same argument that gives (10.16), for any $$\eta >0$$ and $$j\in L\setminus \{1\}$$

\begin{aligned}&E_{\lambda ^{\varepsilon }}\left[ e^{\left( \sum _{k=1}^{N_{j}}\upsilon _{j}^{\varepsilon } (k)\right) (l-1)/E_{\lambda ^{\varepsilon }}{\hat{\tau }}_{1}^{\varepsilon }}\right] \\&\quad \le \left( 1\wedge e^{-\frac{1}{\varepsilon }(W(O_{j})-W(O_{1}\cup O_{j})-h_{1}-\eta )}\right) \left( 1+5e^{-\frac{1}{\varepsilon }(m+h_{1}+W(O_1\cup O_j)-W(O_1)-2\eta )}\right) . \end{aligned}

Therefore,

\begin{aligned} E_{\lambda ^{\varepsilon }}\left[ \prod \nolimits _{j\in L\setminus \{1\}}e^{\left( \sum _{k=1}^{N_{j}}\upsilon _{j}^{\varepsilon }(k)\right) /E_{\lambda ^{\varepsilon }}{\hat{\tau }}_1^{\varepsilon }}\right] (1-e^{-m/\varepsilon }) \le \prod \nolimits _{j\in L\setminus \{1\}}s_j^{1/(l-1)}, \end{aligned}

with

\begin{aligned}&s_j\doteq \left( 1\wedge e^{-\frac{1}{\varepsilon }(W(O_{j})-W(O_{1}\cup O_{j})-h_{1}-\eta )}\right) \\&\qquad \qquad \qquad \cdot \left( 1+5e^{-\frac{1}{\varepsilon }(m+h_{1}+W(O_1\cup O_j)-W(O_1)-2\eta )}\right) (1-e^{-m/\varepsilon }). \end{aligned}

Using $$(a\wedge b)(c+d)\le ac +bd$$ for positive numbers $$a,b,c,d$$,

\begin{aligned} s_j \le \left( 1+5e^{-\frac{1}{\varepsilon }(m+W( O_j)-W(O_1)-3\eta )}\right) (1-e^{-m/\varepsilon }) \le 1-e^{-m/\varepsilon }/2, \end{aligned}

where we use $$W(O_j)>W(O_1)$$ for the second inequality, after shrinking the range of $$\varepsilon$$ if necessary. Thus, (10.18) holds, and by (10.17)

\begin{aligned} E_{\lambda ^{\varepsilon }}e^{({\hat{\tau }}_1^{\varepsilon } -{\hat{\upsilon }}_{1}^{\varepsilon })/E_{\lambda ^{\varepsilon }} {\hat{\tau }}_1^{\varepsilon }}\le e^{-m/\varepsilon }\cdot 2\sum _{i=1}^{\infty }\left( 1-e^{-m/\varepsilon }/2 \right) ^{i-1}=\frac{2e^{-m/\varepsilon }}{1-\left( 1-e^{-m/\varepsilon }/2\right) }=4. \end{aligned}

This completes the proof. $$\square$$

### Lemma 10.20

Given $$\delta >0,$$ for any distribution $$\lambda ^{\varepsilon }$$ on $$\partial B_{\delta }(O_{1})$$, $${\hat{\tau }}_{1}^{\varepsilon }/E_{\lambda ^{\varepsilon }}{\hat{\tau }}_{1}^{\varepsilon }$$ converges in distribution to an Exp(1) random variable under $$P_{\lambda ^{\varepsilon }}.$$

### Proof

Let $${\mathbf {M}}$$ be a geometrically distributed random variable with parameter $$p\in (0,1)$$, and assume that it is independent of $$\{\tau ^\varepsilon _n\}$$. Then, $$E_{\lambda ^{\varepsilon }}\left( \sum \nolimits _{n=1}^{{\mathbf {M}}} \tau ^\varepsilon _n\right) =E_{\lambda ^{\varepsilon }}\tau ^\varepsilon _1/p$$ and

\begin{aligned}&E_{\lambda ^{\varepsilon }}e^{it \left( \left( p\sum _{n=1}^{{\mathbf {M}}}\tau ^\varepsilon _n\right) /E_{\lambda ^{\varepsilon }} \tau ^\varepsilon _1\right) }\\&\quad =\sum _{k=1}^{\infty } \left( E_{\lambda ^{\varepsilon }}e^{it \left( p\tau ^\varepsilon _1/E_{\lambda ^{\varepsilon }}\tau ^\varepsilon _1\right) }\right) ^k (1-p)^{k-1}p =\frac{pE_{\lambda ^{\varepsilon }}e^{it \left( p\tau ^\varepsilon _1/E_{\lambda ^{\varepsilon }}\tau ^\varepsilon _1\right) }}{1-(1-p)E_{\lambda ^{\varepsilon }}e^{it \left( p\tau ^\varepsilon _1/E_{\lambda ^{\varepsilon }}\tau ^\varepsilon _1\right) }}. \end{aligned}

Given any fixed $$t\in {\mathbb {R}}$$, consider

\begin{aligned} f_{\varepsilon }(p)=\frac{pE_{\lambda ^{\varepsilon }}e^{it \left( p\tau ^\varepsilon _1/E_{\lambda ^{\varepsilon }}\tau ^\varepsilon _1\right) }}{1-(1-p)E_{\lambda ^{\varepsilon }}e^{it \left( p\tau ^\varepsilon _1/E_{\lambda ^{\varepsilon }}\tau ^\varepsilon _1\right) }} \text { and } f(p)=\frac{1}{1-it}. \end{aligned}

According to Lemma 10.17, $$E_{\lambda ^{\varepsilon }}e^{it \left( \tau ^\varepsilon _1/E_{\lambda ^{\varepsilon }}\tau ^\varepsilon _1\right) } \rightarrow 1/(1-it)$$ uniformly on any compact set in $${\mathbb {R}}$$. This implies that

\begin{aligned} f_{\varepsilon }(p)=\frac{pE_{\lambda ^{\varepsilon }}e^{it \left( p\tau ^\varepsilon _1/E_{\lambda ^{\varepsilon }}\tau ^\varepsilon _1\right) }}{1-(1-p)E_{\lambda ^{\varepsilon }}e^{it \left( p\tau ^\varepsilon _1/E_{\lambda ^{\varepsilon }}\tau ^\varepsilon _1\right) }} \rightarrow \frac{p/(1-itp)}{1-(1-p)/(1-itp)}=\frac{1}{1-it}=f(p) \end{aligned}

uniformly on $$p\in (0,1)$$. Therefore, if we consider $$p^{\varepsilon }\doteq e^{-m/{\varepsilon }}\rightarrow 0$$, it follows from the uniform (in p) convergence that

\begin{aligned} E_{\lambda ^{\varepsilon }}e^{it({\hat{\tau }}_{1}^{\varepsilon }/E_{\lambda ^{\varepsilon }}{\hat{\tau }}_{1}^{\varepsilon })} =f_{\varepsilon }(p^{\varepsilon }) \rightarrow f(0)=\frac{1}{1-it}. \end{aligned}

This completes the proof. $$\square$$
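The algebraic simplification behind the convergence of $$f_{\varepsilon }$$ is in fact exact for every $$p$$: with $$g=1/(1-itp)$$, one has $$pg/(1-(1-p)g)=1/(1-it)$$. The following sketch verifies this identity numerically over a range of $$p$$ and $$t$$ (illustrative values only):

```python
def f_limit(p, t):
    # The limiting expression from the proof: with g = 1/(1 - i t p),
    # p*g / (1 - (1-p)*g) simplifies algebraically to 1/(1 - i t)
    # for every p in (0, 1).
    g = 1 / (1 - 1j * t * p)
    return p * g / (1 - (1 - p) * g)

for p in (0.9, 0.5, 0.01, 1e-6):   # includes p close to 0, as when p = e^{-m/eps}
    for t in (-2.0, 0.3, 5.0):
        assert abs(f_limit(p, t) - 1 / (1 - 1j * t)) < 1e-9
```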

## Sketch of the Proof of Conjecture 4.11 for a Special Case

In this section, we outline the proof of the upper bound on the decay rate (giving a lower bound on the variance per unit time) that complements Theorem 4.5 for a special case. Consider $$U:{\mathbb {R}}\rightarrow {\mathbb {R}}$$ as shown in Fig. 3.

In particular, assume U is a bounded $$C^{2}$$ function satisfying the following conditions:

### Condition 11.1

• U is defined on a compact interval $$D\doteq [{\bar{x}}_L,{\bar{x}}_R] \subset {\mathbb {R}}$$ and extends periodically as a $$C^{2}$$ function.

• U has two local minima at $$x_{L}$$ and $$x_{R}$$ with values $$U(x_{L})<U(x_{R})$$ and $$[x_L-\delta ,x_R+\delta ]\subset D$$ for some $$\delta >0$$.

• U has one local maximum at $$0\in (x_{L},x_{R})$$.

• $$U(x_{L})=0,$$ $$U(0)=h_{L}$$ and $$U(x_{R})=h_{L}-h_{R}>0.$$

• $$\inf _{x\in \partial D}U(x)>h_{L}.$$

Consider the diffusion process $$\{X^{\varepsilon }_t\}_{t\ge 0}$$ satisfying the stochastic differential equation

\begin{aligned} dX^{\varepsilon }_t =-\nabla U\left( X^{\varepsilon }_t \right) dt+\sqrt{2\varepsilon }dW_t , \end{aligned}
(11.1)

where W is a one-dimensional standard Wiener process. Then, there are just two stable equilibrium points $$O_{1}=x_{L}$$ and $$O_{2}=x_{R}$$, and one unstable equilibrium point $$O_3=0.$$ Moreover, one easily finds that $$V(O_{1},O_{2})=h_{L}$$ and $$V(O_{2},O_{1})=h_{R},$$ and these give that $$W(O_{1})=V(O_{2},O_{1})$$, $$W(O_{2})=V(O_{1},O_{2})$$ and $$W\left( O_{1}\cup O_2\right) =0$$ (since $$L_{\mathrm{{s}}}=\{1,2\},$$ this implies that $$G_{\mathrm{{s}}}(1)=\{(2\rightarrow 1)\}$$ and $$G_{\mathrm{{s}}}(2)=\{(1\rightarrow 2)\}$$). Another observation is that $$h_1\doteq \min _{\ell \in {\mathcal {M}} \setminus \{1\}}V\left( O_{1},O_{\ell }\right) =V\left( O_{1},O_{3}\right) =h_{L}$$ in this model.
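To build intuition for the metastable behavior of (11.1), one can simulate the diffusion directly. The sketch below uses an Euler–Maruyama discretization with a hypothetical double-well potential $$U(x)=(x^{2}-1)^{2}/4$$ standing in for the potential of Fig. 3 (the minima at $$\pm 1$$ play the roles of $$x_{L}$$ and $$x_{R}$$, with the saddle at 0); it is an illustration under these assumptions, not the model analyzed in this section:

```python
import math
import random

def simulate_overdamped(grad_U, x0, eps, dt, n_steps, seed=0):
    # Euler-Maruyama discretization of dX = -U'(X) dt + sqrt(2*eps) dW, cf. (11.1).
    rng = random.Random(seed)
    x, path = x0, [x0]
    for _ in range(n_steps):
        x += -grad_U(x) * dt + math.sqrt(2 * eps * dt) * rng.gauss(0.0, 1.0)
        path.append(x)
    return path

# Hypothetical double-well potential U(x) = (x^2 - 1)^2 / 4, so U'(x) = x (x^2 - 1).
grad_U = lambda x: x * (x * x - 1.0)

path = simulate_overdamped(grad_U, x0=-1.0, eps=0.05, dt=0.01, n_steps=20000)
# For small eps the path spends most of its time near the wells, with rare
# transitions between them -- the metastability the quasipotentials quantify.
```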

If $$f\equiv 0,$$ then one obtains

\begin{aligned} R_{1}^{(1)}&\doteq \inf _{y\in A}V(O_{1},y)+W(O_{1})-W(O_{1})=\inf _{y\in A}V(O_{1},y);\\ R_{1}^{(2)}&\doteq 2\inf _{y\in A}V\left( O_{1},y\right) -h_1 =2\inf _{y\in A}V\left( O_{1},y\right) -h_{L}; \end{aligned}
\begin{aligned} R_{2}^{(1)}&\doteq \inf _{y\in A}V(O_{2},y)+W(O_{2})-W(O_{1}) =\inf _{y\in A}V(O_{2},y)+h_{L}-h_{R};\\ R_{2}^{(2)}&\doteq 2\inf _{y\in A}V\left( O_{2},y\right) +W\left( O_{2}\right) -2W\left( O_{1}\right) +0-W\left( O_{1}\cup O_2\right) \\&=2\inf _{y\in A}V\left( O_{2},y\right) +h_{L}-2h_{R}. \end{aligned}

Let $$A\subset [0,{\bar{x}}_R]$$ and assume that it contains a nonempty open interval, so that we are computing approximations to probabilities that are small under the stationary distribution (the case of bounded and continuous f can be dealt with by approximation, as in the case of the upper bound on the decay rate). We first compute the bounds one would obtain from Theorem 4.5.

Case I. If $$x_{R}\in A,$$ then $$\inf _{y\in A}V(O_{1},y)=h_{L}$$ and $$\inf _{y\in A}V\left( O_{2},y\right) =0.$$ Thus, the decay rate of variance per unit time is bounded below by

\begin{aligned} \min _{j=1,2}\left[ R_{j}^{(1)}\wedge R_{j}^{(2)}\right] =\min \left\{ h_{L},h_{L}-2h_{R}\right\} =h_{L}-2h_{R}. \end{aligned}

Case II. If $$A\subset [0,x_{R}-\delta ]$$ for some $$\delta >0$$ and $$\delta <x_{R},$$ then $$\inf _{y\in A}V(O_{1},y)=h_{L}$$ and $$\inf _{y\in A}V\left( O_{2},y\right) >0$$ (we denote it by $$b\in (0,h_{R}]$$). Thus, the decay rate of variance per unit time is bounded below by

\begin{aligned} \min _{j=1,2}\left[ R_{j}^{(1)}\wedge R_{j}^{(2)}\right] =\min \left\{ h_{L},h_{L}+2\left( b-h_{R}\right) \right\} =h_{L}+2\left( b-h_{R}\right) . \end{aligned}

Case III. If $$A\subset [x_{R}+\delta ,x^{*}]$$ with $$U(x^{*})=h_{L}$$ for some $$\delta >0$$ and $$\delta <x^{*}-x_{R},$$ then $$\inf _{y\in A}V(O_{1},y)=h_{L}+\inf _{y\in A}V\left( O_{2},y\right)$$ and $$\inf _{y\in A}V\left( O_{2},y\right) >0$$ (we denote it by $$b\in (0,h_{R}]$$). Thus, the decay rate of variance per unit time is bounded below by

\begin{aligned} \min _{j=1,2}\left[ R_{j}^{(1)}\wedge R_{j}^{(2)}\right] =\min \left\{ h_{L}+b,h_{L}+2\left( b-h_{R}\right) \right\} =h_{L}+2\left( b-h_{R}\right) . \end{aligned}

Case IV. If $$A\subset [x^{*}+\delta ,{\bar{x}}_R]$$ with $$U(x^{*})=h_{L}$$ for some $$\delta >0$$ and $$x^{*}>x_{R},$$ then $$\inf _{y\in A}V(O_{1},y)=h_{L}+\inf _{y\in A}V\left( O_{2},y\right)$$ and $$\inf _{y\in A}V\left( O_{2},y\right) >0$$ (we denote it by $${\bar{b}}>h_{R}$$). Thus, the decay rate of variance per unit time is bounded below by

\begin{aligned} \min _{j=1,2}\left[ R_{j}^{(1)}\wedge R_{j}^{(2)}\right] =\min \left\{ h_{L}+{\bar{b}},h_{L}+\left( {\bar{b}}-h_{R}\right) \right\} =h_{L}+\left( {\bar{b}}-h_{R}\right) . \end{aligned}
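The four case computations above all reduce to evaluating $$\min _{j=1,2}[R_{j}^{(1)}\wedge R_{j}^{(2)}]$$ with $$W(O_{1})=h_{R}$$, $$W(O_{2})=h_{L}$$, $$W(O_{1}\cup O_{2})=0$$ and $$h_{1}=h_{L}$$. The sketch below (hypothetical numerical values for $$h_{L},h_{R},b,{\bar{b}}$$) confirms the stated minima:

```python
def decay_rate_lower_bound(hL, hR, vA1, vA2):
    # min_{j=1,2}[R_j^(1) ^ R_j^(2)] with f = 0, using W(O1) = hR, W(O2) = hL,
    # W(O1 u O2) = 0 and h_1 = hL as in the text.
    # vA1 = inf_{y in A} V(O1, y), vA2 = inf_{y in A} V(O2, y).
    R1 = min(vA1, 2 * vA1 - hL)
    R2 = min(vA2 + hL - hR, 2 * vA2 + hL - 2 * hR)
    return min(R1, R2)

hL, hR = 3.0, 1.0  # hypothetical well heights with hL > hR
# Case I: x_R in A, so vA1 = hL and vA2 = 0; the bound is hL - 2 hR.
assert abs(decay_rate_lower_bound(hL, hR, hL, 0.0) - (hL - 2 * hR)) < 1e-9
# Case II: vA1 = hL and vA2 = b in (0, hR]; the bound is hL + 2 (b - hR).
b = 0.4
assert abs(decay_rate_lower_bound(hL, hR, hL, b) - (hL + 2 * (b - hR))) < 1e-9
# Case IV: vA2 = bbar > hR and vA1 = hL + bbar; the bound is hL + (bbar - hR).
bbar = 1.5
assert abs(decay_rate_lower_bound(hL, hR, hL + bbar, bbar)
           - (hL + (bbar - hR))) < 1e-9
```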

To find an upper bound for the decay rate of variance per unit time, we recall that

\begin{aligned} \frac{1}{T^{\varepsilon }}\sum _{j=1} ^{N^{\varepsilon }\left( T^{\varepsilon }\right) -1 }\int _{\tau _{j-1} ^{\varepsilon }}^{\tau _{j}^{\varepsilon }}1_{A}\left( X_{t}^{\varepsilon }\right) dt \le \frac{1}{T^{\varepsilon }}\int _{0}^{T^{\varepsilon }}1_{A}\left( X_{t} ^{\varepsilon }\right) dt \le \frac{1}{T^{\varepsilon }}\sum _{j=1} ^{N^{\varepsilon }\left( T^{\varepsilon }\right) }\int _{\tau _{j-1} ^{\varepsilon }}^{\tau _{j}^{\varepsilon }}1_{A}\left( X_{t}^{\varepsilon }\right) dt \end{aligned}

with $$\tau _{j}^{\varepsilon }$$ being the j-th regenerative cycle. In Case I, one might guess that

\begin{aligned} \int _{\tau _{j-1}^{\varepsilon }}^{\tau _{j}^{\varepsilon }}1_{A}\left( X_{t}^{\varepsilon }\right) dt \end{aligned}
(11.2)

has approximately the same distribution as the exit time from the shallower well, which has been shown to asymptotically have an exponential distribution with parameter $$\exp (-h_{R}/\varepsilon ).$$ Additionally, since the exit time from the shallower well is exponentially smaller than $$\tau _{j}^{\varepsilon },$$ this suggests that the random variables (11.2) can be taken as independent of $$N^{\varepsilon }\left( T^{\varepsilon }\right)$$ when $$\varepsilon$$ is small. We also know that

\begin{aligned} EN^{\varepsilon }\left( T^{\varepsilon }\right) /T^{\varepsilon }\approx 1/E\tau _{1}^{\varepsilon }\approx \exp \left( -h_{L}(\delta ) /\varepsilon \right) , \end{aligned}

where $$h_{L}(\delta )\uparrow h_{L}$$ as $$\delta \downarrow 0$$ and $$\approx$$ means that quantities on either side have the same exponential decay rate. Using Jensen’s inequality to find that $$E[N^\varepsilon (T^\varepsilon )]^2 \ge [EN^\varepsilon (T^\varepsilon )]^2$$ and then applying Wald’s identity, we obtain

\begin{aligned}&T^{\varepsilon }\mathrm {Var}\left( \frac{1}{T^{\varepsilon }}\int _{0}^{T^{\varepsilon }}1_{A}\left( X_{t}^{\varepsilon }\right) dt\right) \nonumber \\&\quad \approx \frac{1}{T^{\varepsilon }}E\left[ \sum \nolimits _{j=1}^{N^{\varepsilon }\left( T^{\varepsilon }\right) }\int _{\tau _{j-1}^{\varepsilon }}^{\tau _{j} ^{\varepsilon }}1_{A}\left( X_{t}^{\varepsilon }\right) dt-EN^{\varepsilon }\left( T^{\varepsilon }\right) E\left( \int _{\tau _{j-1}^{\varepsilon } }^{\tau _{j}^{\varepsilon }}1_{A}\left( X_{t}^{\varepsilon }\right) dt\right) \right] ^{2}\nonumber \\&\quad =\frac{1}{T^{\varepsilon }}E\left( \sum \nolimits _{j=1}^{N^{\varepsilon }\left( T^{\varepsilon }\right) } \int _{\tau _{j-1}^{\varepsilon }}^{\tau _{j}^{\varepsilon }}1_{A}\left( X_{t}^{\varepsilon }\right) dt \right) ^{2} - \frac{1}{T^{\varepsilon }} (E(N^{\varepsilon }(T^{\varepsilon })))^{2} \left( E\left( \int _{\tau _{j-1}^{\varepsilon }}^{\tau _{j} ^{\varepsilon }}1_{A}\left( X_{t}^{\varepsilon }\right) dt\right) \right) ^{2}\nonumber \\&\quad = \frac{1}{T^{\varepsilon }} EN^{\varepsilon }\left( T^{\varepsilon }\right) E\left( \int _{\tau _{j-1}^{\varepsilon }}^{\tau _{j}^{\varepsilon }}1_{A}\left( X_{t}^{\varepsilon }\right) dt\right) ^{2}-\frac{1}{T^{\varepsilon }}\left[ EN^{\varepsilon }\left( T^{\varepsilon }\right) \right] ^{2}\left( E\left( \int _{\tau _{j-1}^{\varepsilon }}^{\tau _{j}^{\varepsilon }}1_{A}\left( X_{t}^{\varepsilon }\right) dt\right) \right) ^{2}\nonumber \\&\qquad +\frac{1}{T^{\varepsilon }}\left( E\left[ N^{\varepsilon }\left( T^{\varepsilon }\right) \right] ^{2}-EN^{\varepsilon }\left( T^{\varepsilon }\right) \right) \left( E\left( \int _{\tau _{j-1}^{\varepsilon }}^{\tau _{j}^{\varepsilon }}1_{A}\left( X_{t}^{\varepsilon }\right) dt\right) \right) ^{2}\nonumber \\&\quad \ge \frac{EN^{\varepsilon }\left( T^{\varepsilon }\right) }{T^{\varepsilon }}\mathrm {Var}\left( \int _{\tau _{j-1}^{\varepsilon }}^{\tau _{j}^{\varepsilon } }1_{A}\left( 
X_{t}^{\varepsilon }\right) dt\right) \nonumber \\&\quad \approx \exp \left( -h_{L}(\delta )/\varepsilon \right) \cdot \exp (2h_{R} /\varepsilon ) =\exp (\left( 2h_{R}-h_{L}(\delta )\right) /\varepsilon ). \end{aligned}
(11.3)

Letting $$\delta \rightarrow 0$$, we see that the decay rate of the variance per unit time is bounded above by $$h_{L}-2h_{R}$$, which is the same as the lower bound found for Case I.
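The chain of (approximate) equalities in (11.3) rests on Wald-type identities for the first and second moments of a random sum. The following sketch checks them numerically in the simplest setting, where the number of summands is independent of the i.i.d. summands (the parameters p and q are hypothetical; for stopping times the identities hold only approximately, which is one source of the $$\approx$$ above):

```python
# N truncated geometric(p) on {1,...,n_max}, Y_j i.i.d. Bernoulli(q),
# independent of N; S_N = Y_1 + ... + Y_N.
p, q, n_max = 0.3, 0.4, 2000
pmf = [(1 - p) ** (n - 1) * p for n in range(1, n_max + 1)]

EN = sum(n * w for n, w in enumerate(pmf, start=1))
EN2 = sum(n * n * w for n, w in enumerate(pmf, start=1))

# Direct computation by conditioning on N: E[S_n^2] = n q(1-q) + (n q)^2
ES2_direct = sum((n * q * (1 - q) + (n * q) ** 2) * w
                 for n, w in enumerate(pmf, start=1))
# Second-moment Wald-type identity used in (11.3):
# E[S_N^2] = E[N] E[Y^2] + (E[N^2] - E[N]) (E[Y])^2
ES2_wald = EN * q + (EN2 - EN) * q ** 2
assert abs(ES2_direct - ES2_wald) < 1e-9

VarS = ES2_direct - (EN * q) ** 2   # first-moment Wald: E[S_N] = E[N] E[Y]
assert VarS >= EN * q * (1 - q)     # Var(S_N) >= E[N] Var(Y), as in (11.3)
```
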

For the other three cases (II, III and IV), the process, while in the shallower well, spends only a very small fraction of its time in the set A. In fact, using stopping time arguments of the sort that appear in [12, Chapter 4], the event that the process enters A during an excursion away from the neighborhood of $$x_R$$ can be accurately approximated (as far as large deviation behavior is concerned) using independent Bernoulli random variables $$\{B^\varepsilon _i\}$$ with success parameter $$e^{-b/\varepsilon }$$, and when this event occurs the process spends an order one amount of time in A before returning to the neighborhood of $$x_R$$. There is, however, another sequence of independent Bernoulli random variables with success parameter $$e^{-h_R/\varepsilon }$$, and the process accumulates time in A only up until the first success of this second sequence.

Then $$\mathrm {Var}( \int _{\tau _{j-1}^{\varepsilon }}^{\tau _{j}^{\varepsilon } }1_{A}\left( X_{t}^{\varepsilon }\right) dt)$$ has the same logarithmic asymptotics as $$\mathrm {Var}( \sum \nolimits _{i=1}^{R^\varepsilon }1_{\{B^\varepsilon _i=1\}} ),$$ where $$R^\varepsilon$$ is geometric with success parameter $$e^{-h_R/\varepsilon }$$ and independent of the $$\{B^\varepsilon _i\}$$. A straightforward calculation using Wald’s identity then gives the exponential rate of decay $$2h_R-2b$$ for Cases II and III and $$h_R-{\bar{b}}$$ for Case IV, so according to (11.3), we obtain

\begin{aligned} T^{\varepsilon }\mathrm {Var}\left( \frac{1}{T^{\varepsilon }}\int _{0} ^{T^{\varepsilon }}1_{A}\left( X_{t}^{\varepsilon }\right) dt\right) \ge \frac{EN^{\varepsilon }\left( T^{\varepsilon }\right) }{T^{\varepsilon } }\mathrm {Var}\left( \int _{\tau _{j-1}^{\varepsilon }}^{\tau _{j}^{\varepsilon } }1_{A}\left( X_{t}^{\varepsilon }\right) dt\right) \approx e^{\left[ \left( 2\left( h_{R}-b\right) -h_{L}(\delta )\right) /\varepsilon \right] } \end{aligned}

for Cases II and III and

\begin{aligned} T^{\varepsilon }\mathrm {Var}\left( \frac{1}{T^{\varepsilon }}\int _{0}^{T^{\varepsilon }}1_{A}\left( X_{t}^{\varepsilon }\right) dt\right) \ge \frac{EN^{\varepsilon }\left( T^{\varepsilon }\right) }{T^{\varepsilon }}\mathrm {Var}\left( \int _{\tau _{j-1}^{\varepsilon }}^{\tau _{j}^{\varepsilon } }1_{A}\left( X_{t}^{\varepsilon }\right) dt\right) \approx e^{\left[ \left( ( h_{R}-{\bar{b}}) -h_{L}(\delta )\right) /\varepsilon \right] } \end{aligned}

for Case IV.

Letting $$\delta \rightarrow 0$$, this means that the decay rate of the variance per unit time is bounded above by $$h_{L}+2\left( b-h_{R}\right)$$ for Cases II and III, and by $$h_L+({\bar{b}}-h_R)$$ for Case IV, which is again the same as the corresponding lower bound.
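In the Bernoulli/geometric reduction just described, the variance of the accumulated time admits a closed form via the conditional-variance formula, and its logarithmic asymptotics can be checked numerically. The values of $$h_R$$ and b below are hypothetical, with $$h_R>b$$ as in Cases II and III:

```python
import math

def var_S(h_R, b, eps):
    """Exact variance of S = sum_{i=1}^{R} 1{B_i = 1}, where R is geometric
    with success probability p = exp(-h_R/eps) (number of trials up to and
    including the first success) and the B_i are i.i.d. Bernoulli(q =
    exp(-b/eps)), independent of R.  Conditional-variance formula:
    Var S = q(1-q) E[R] + q^2 Var[R]."""
    p, q = math.exp(-h_R / eps), math.exp(-b / eps)
    return q * (1 - q) / p + q * q * (1 - p) / (p * p)

h_R, b = 1.0, 0.3   # hypothetical values, chosen so that h_R > b
rates = [eps * math.log(var_S(h_R, b, eps)) for eps in (0.1, 0.05, 0.02)]
# eps * log Var(S) tends to 2*h_R - 2*b as eps -> 0
assert abs(rates[-1] - 2 * (h_R - b)) < 1e-3
```
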

## References

1. Aldous, D., Fill, J.: Reversible Markov Chains and Random Walks on Graphs (monograph) (2002). Available at https://www.stat.berkeley.edu/users/aldous/RWG/book.pdf

2. Aronson, D.G.: Bounds for the fundamental solution of a parabolic equation. Bull. Am. Math. Soc. 73(6), 890–896 (1967)

3. Asmussen, S., Glynn, P.W.: Stochastic Simulation: Algorithms and Analysis. Springer, New York (2007)

4. Budhiraja, A., Dupuis, P.: Analysis and Approximation of Rare Events: Representations and Weak Convergence Methods. Number 94 in Probability Theory and Stochastic Modelling. Springer, New York (2019)

5. Collet, P., Martínez, S., San Martín, J.: Quasi-Stationary Distributions. Springer, Berlin (2013)

6. Day, M.V.: On the exponential exit law in the small parameter exit problem. Stochastics 8(4), 297–323 (1983)

7. Donsker, M.D., Varadhan, S.R.S.: Asymptotic evaluation of certain Markov process expectations for large time I. Comm. Pure Appl. Math. 28, 1–47 (1975)

8. Donsker, M.D., Varadhan, S.R.S.: Asymptotic evaluation of certain Markov process expectations for large time III. Comm. Pure Appl. Math. 29, 389–461 (1976)

9. Dupuis, P., Liu, Y., Plattner, N., Doll, J.D.: On the infinite swapping limit for parallel tempering. SIAM J. Multiscale Model. Simul. 10, 986–1022 (2012)

10. Dupuis, P., Wu, G.-J.: Analysis and optimization of certain parallel Monte Carlo methods in the low temperature limit. Working paper (2020)

11. Feller, W.: An Introduction to Probability Theory and Its Applications, vol. 2. Wiley, New York (1971)

12. Freidlin, M.I., Wentzell, A.D.: Random Perturbations of Dynamical Systems, 3rd edn. Springer, New York (2012)

13. Gilbarg, D., Trudinger, N.S.: Elliptic Partial Differential Equations of Second Order, 2nd edn. Springer, Berlin (1983)

14. Harris, T.E.: The Theory of Branching Processes. Die Grundlehren der Mathematischen Wissenschaften in Einzeldarstellungen. Springer, New York (1963)

15. Dupuis, P., Doll, J., Nyquist, P.: A large deviations analysis of certain qualitative properties of parallel tempering and infinite swapping algorithms. Appl. Math. and Opt. 78, 103–144 (2018)

16. Khasminskii, R.: Stochastic Stability of Differential Equations, 2nd edn. Springer, Berlin Heidelberg (2012)

17. Limnios, N., Oprisan, G.: Semi-Markov Processes and Reliability. Statistics for Industry and Technology. Birkhäuser, Boston (2001)

18. Ross, S.: Applied Probability Models with Optimization Applications. Dover Publications, New York (1992)

19. Shwartz, A., Weiss, A.: Large Deviations for Performance Analysis: Queues. Communication and Computing. Chapman and Hall, New York (1995)

## Acknowledgements

We thank the referee for corrections and suggestions that improved this paper.

## Funding

Open Access funding provided by Royal Institute of Technology.

## Author information

### Corresponding author

Correspondence to Guo-Jhen Wu.

### Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

P. Dupuis: Research supported in part by the National Science Foundation (DMS-1904992) and the AFOSR (FA-9550-18-1-0214). G.-J. Wu: Research supported in part by the AFOSR (FA-9550-18-1-0214).

## Appendix

### Proof of Lemma 7.8

Given a function g, we define the notation

\begin{aligned} I(t_1,t_2;g) \doteq \int _{t_1}^{t_2}g(X^{\varepsilon }_s)ds, \end{aligned}

for any $$0\le t_1\le t_2$$. By definition $$\tau _{1}^{\varepsilon }=\tau _{N}$$, and we observe that

\begin{aligned} I(0,\tau _{N};g)&=\sum \nolimits _{\ell =1}^{N}I(\tau _{\ell -1},\tau _{\ell };g)=\sum \nolimits _{\ell =1}^{\infty }I (\tau _{\ell -1},\tau _{\ell };g)\cdot 1_{\left\{ \ell \le N\right\} }\\&=\sum \nolimits _{\ell =1}^{\infty }I(\tau _{\ell -1},\tau _{\ell };g)\cdot 1_{\left\{ \ell \le {\hat{N}}\right\} }\\&\quad +\sum \nolimits _{j\in L\setminus \left\{ 1\right\} }\sum \nolimits _{\ell =1}^{\infty }\left( I(\tau _{\ell -1},\tau _{\ell };g)\cdot 1_{\left\{ {\hat{N}}+1\le \ell \le N,Z_{\ell -1}\in \partial B_{\delta }(O_{j})\right\} }\right) . \end{aligned}

Since $${\hat{N}}$$ and N are stopping times with respect to the filtration $$\{{\mathcal {G}}_{n}\}_{n},$$ it follows that $$\{ \ell \le {\hat{N}}\} =\{ {\hat{N}}\le \ell -1\}^{c} \in {\mathcal {G}}_{\ell -1}$$ and $$\{ {\hat{N}}+1\le \ell \le N,Z_{\ell -1}\in \partial B_{\delta } (O_{j})\} \in {\mathcal {G}}_{\ell -1}.$$ Let

\begin{aligned}&{\mathfrak {S}}_{1}=\sum _{\ell =1}^{\infty }I(\tau _{\ell -1},\tau _{\ell };g)\cdot 1_{\left\{ \ell \le \hat{N}\right\} } \text { and } \\&{\mathfrak {S}}_{j}=\sum _{\ell =1}^{\infty }\left( I(\tau _{\ell -1},\tau _{\ell };g)\cdot 1_{\left\{ \hat{N}+1\le \ell \le N,Z_{\ell -1}\in \partial B_{\delta }(O_{j})\right\} }\right) \end{aligned}

for all $$j\in L\setminus \left\{ 1\right\} .$$ We find

\begin{aligned} E_{x}\left( {\mathfrak {S}}_{1}\right)&=\sum \nolimits _{\ell =1}^{\infty }E_{x}\left( E_{x}\left[ \left. I(\tau _{\ell -1},\tau _{\ell };g)\cdot 1_{\left\{ \ell \le {\hat{N}}\right\} }\right| {\mathcal {G}}_{\ell -1}\right] \right) \\&=\sum \nolimits _{\ell =1}^{\infty }E_{x}\left( 1_{\left\{ \ell \le {\hat{N}}\right\} }E_{Z_{\ell -1}}\left[ I(0,\tau _{1};g)\right] \right) \\&\le \sup \nolimits _{y\in \partial B_{\delta }(O_{1})}E_{y}\left[ I(0,\tau _{1};g)\right] \cdot \left( \sum \nolimits _{\ell =1}^{\infty }P_{x}( {\hat{N}}\ge \ell ) \right) . \end{aligned}

In addition, for $$j\in L\setminus \left\{ 1\right\} ,$$

\begin{aligned} E_{x}\left( {\mathfrak {S}}_{j}\right)&=\sum \nolimits _{\ell =1}^{\infty }E_{x}\left( I(\tau _{\ell -1},\tau _{\ell };g)\cdot 1_{\left\{ {\hat{N}}+1\le \ell \le N,Z_{\ell -1}\in \partial B_{\delta }(O_{j})\right\} }\right) \\&=\sum \nolimits _{\ell =1}^{\infty }E_{x}\left( E_{x}\left[ \left. I(\tau _{\ell -1},\tau _{\ell };g)\cdot 1_{\left\{ \hat{N}+1\le \ell \le N,Z_{\ell -1}\in \partial B_{\delta }(O_{j})\right\} }\right| {\mathcal {G}}_{\ell -1}\right] \right) \\&=\sum \nolimits _{\ell =1}^{\infty }E_{x}\left( 1_{\left\{ {\hat{N}}+1\le \ell \le N,Z_{\ell -1}\in \partial B_{\delta }(O_{j})\right\} }E_{Z_{\ell -1}}\left[ I(0,\tau _{1};g)\right] \right) \\&\le \sup \nolimits _{y\in \partial B_{\delta }(O_{j})}E_{y}\left[ I(0,\tau _{1};g)\right] \cdot \left( \sum \nolimits _{\ell =1}^{\infty }E_{x}\left( 1_{\left\{ {\hat{N}}+1\le \ell \le N,Z_{\ell -1} \in \partial B_{\delta }(O_{j})\right\} }\right) \!\right) . \end{aligned}

It is straightforward to see that $${\hat{N}}=N_{1}$$. This implies that $$\sum \nolimits _{\ell =1}^{\infty }P_{x}( {\hat{N}}\ge \ell ) =E_{x}{\hat{N}} =E_{x}N_{1}.$$ Moreover, observe that for any $$j\in L\setminus \left\{ 1\right\} ,$$ $$\sum _{\ell =1}^{\infty }1_{\left\{ {\hat{N}}+1\le \ell \le N,Z_{\ell -1} \in \partial B_{\delta }(O_{j})\right\} }=N_{j},$$ which gives $$\sum _{\ell =1}^{\infty }E_{x}( 1_{\{ {\hat{N}}+1\le \ell \le N,Z_{\ell -1}\in \partial B_{\delta }(O_{j})\} }) =E_{x}N_{j}.$$ Hence,

\begin{aligned} E_{x}\left( I(0,\tau _{N};g)\right) =\sum \nolimits _{j\in L}E_{x}\left( {\mathfrak {S}}_{j}\right) \le \sum \nolimits _{j\in L}\left[ \sup \nolimits _{y\in \partial B_{\delta }(O_{j})}E_{y}\left( I(0,\tau _{1};g)\right) \right] \cdot E_{x}N_{j}. \end{aligned}

$$\square$$
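The identity $$\sum \nolimits _{\ell =1}^{\infty }P_{x}( {\hat{N}}\ge \ell ) =E_{x}{\hat{N}}$$ used above is the tail-sum formula for nonnegative integer-valued random variables, and it reappears several times below. A minimal numerical check, with a truncated geometric distribution as a hypothetical stand-in:

```python
# Tail-sum formula E[N] = sum_{l >= 1} P(N >= l), checked on a truncated
# geometric distribution (a hypothetical stand-in for N_hat or N_j).
p, n_max = 0.25, 2000
pmf = [(1 - p) ** (n - 1) * p for n in range(1, n_max + 1)]   # P(N = n)
EN = sum(n * w for n, w in enumerate(pmf, start=1))
tails, s = [], 0.0
for w in reversed(pmf):                  # suffix sums: tails[l-1] = P(N >= l)
    s += w
    tails.append(s)
tails.reverse()
assert abs(EN - sum(tails)) < 1e-9       # tail-sum formula
assert abs(EN - 1 / p) < 1e-6            # geometric mean 1/p; truncation negligible
```
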

### Proof of Lemma 7.9

Let $$l=|L|$$. For any $$j\in L$$ and $$n\in {\mathbb {N}},$$ define $$\xi _{1}^{(j)}=\inf \{k\in {\mathbb {N}}_{0}:Z_{k}\in \partial B_{\delta }(O_{j})\}$$ and $$\xi _{n}^{(j)}=\inf \{k\in {\mathbb {N}}:k>\xi _{n-1}^{(j)} \text{ and } Z_{k}\in \partial B_{\delta }(O_{j})\}$$, i.e., $$\xi _{n}^{(j)}$$ is the time of the n-th visit to $$\partial B_{\delta }(O_{j}).$$ Moreover, we define $$N^{(j)}=\inf \{n\in {\mathbb {N}} :\xi _{n}^{(j)}\ge N\},$$ recalling that $$N\doteq \inf \{n\ge {\hat{N}}:Z_{n}\in \partial B_{\delta }(O_{1})\}$$ and $${\hat{N}}\doteq \inf \{n\in {\mathbb {N}} :Z_{n}\in {\textstyle \mathop {\cup }\nolimits _{j\in L\setminus \{1\}}} \partial B_{\delta }(O_{j})\}.$$ Since each $$\xi _{n}^{(j)}$$ is a stopping time with respect to $$\{{\mathcal {G}}_{n}\}_{n},$$ we can define the filtration $$\{{\mathcal {G}}_{\xi _{n}^{(j)}}\},$$ and one can verify that $$N^{(j)}$$ is a stopping time with respect to $$\{{\mathcal {G}}_{\xi _{n}^{(j)}}\}_{n}.$$ As in the previous proof, for any function g and any $$0\le t_1\le t_2$$ we define $$I(t_1,t_2;g) \doteq \int _{t_1}^{t_2}g(X^{\varepsilon }_s)ds$$. With this notation, and since by definition $$\tau _{1}^{\varepsilon }=\tau _{N}$$, we can write

\begin{aligned} I(0,\tau _N;g)=\sum \nolimits _{j\in L}\sum \nolimits _{\ell =1}^{\infty }I(\tau _{\xi _{\ell }^{(j)}},\tau _{\xi _{\ell }^{(j)}+1};g)\cdot 1_{\left\{ \ell \le N^{(j)}-1\right\} }. \end{aligned}

Since $$(x_{1}+\cdots +x_{l})^{2}\le l(x_{1}^{2}+\cdots +x_{l}^{2})$$ for any $$(x_{1},\ldots ,x_{l})\in {\mathbb {R}} ^{l}$$ and $$l\in {\mathbb {N}} ,$$

\begin{aligned} I(0,\tau _{N};g)^2&\le l\sum \nolimits _{j\in L}\left( \sum \nolimits _{\ell =1}^{\infty }I(\tau _{\xi _{\ell }^{(j)}},\tau _{\xi _{\ell }^{(j)}+1};g) \cdot 1_{\left\{ \ell \le N^{(j)}-1\right\} }\right) ^{2}. \end{aligned}

Now for any $$j\in L$$, each squared term on the right can be written as the sum of two parts: the first is the sum of $$I(\tau _{\xi _{\ell }^{(j)}},\tau _{\xi _{\ell }^{(j)}+1};g)^2\cdot 1_{\{ \ell \le N^{(j)}-1\} }$$ over all $$\ell$$, and the second is twice the sum of $$I(\tau _{\xi _{\ell }^{(j)}},\tau _{\xi _{\ell }^{(j)}+1};g)\cdot 1_{\{ \ell \le N^{(j)}-1\} }I(\tau _{\xi _{k}^{(j)}},\tau _{\xi _{k}^{(j)}+1};g)\cdot 1_{\{ k\le N^{(j)}-1\} }$$ over $$k,\ell$$ with $$k<\ell$$. For the expected value of the first sum, since $$\{ \ell \le N^{(j)}-1\} =\{ N^{(j)}\le \ell \} ^{c}\in {\mathcal {G}} _{\xi _{\ell }^{(j)}},$$ we have

\begin{aligned}&\sum _{\ell =1}^{\infty }E_{x}\left[ I(\tau _{\xi _{\ell }^{(j)}},\tau _{\xi _{\ell }^{(j)}+1};g)^2 1_{\left\{ \ell \le N^{(j)}-1\right\} }\right] \\&\quad =\sum _{\ell =1}^{\infty }E_{x}\left[ 1_{\left\{ \ell \le N^{(j)}-1\right\} }E_{x}\left[ \left. I(\tau _{\xi _{\ell }^{(j)}},\tau _{\xi _{\ell }^{(j)}+1};g)^2\right| {\mathcal {G}}_{\xi _{\ell } ^{(j)}}\right] \right] \\&\quad \le \sup _{y\in \partial B_{\delta }(O_{j})}E_{y} I(0,\tau _{1};g)^2 \sum \nolimits _{\ell =1}^{\infty }P_{x}( N^{(j)}-1\ge \ell ) \\&\quad =\sup _{y\in \partial B_{\delta } (O_{j})}E_{y} I(0,\tau _{1};g)^2 E_{x}( N_{j}). \end{aligned}

The last equality holds since $$N^{(j)}-1=N_{j}$$ (recall that $$N_{j}$$ is the number of visits of $$\{Z_{n}\}_{n\in {\mathbb {N}}_{0}}$$ to $$\partial B_{\delta }(O_{j})$$ before N, including the initial position); this implies that $$\sum _{\ell =1}^{\infty }P_{x}( N^{(j)}-1\ge \ell ) =\sum _{\ell =1}^{\infty }P_{x}( N_{j}\ge \ell ) =E_{x}( N_{j}) .$$

Turning to the expected value of the second sum, conditioning on $${\mathcal {G}}_{\xi _{\ell }^{(j)}}$$ gives

\begin{aligned}&\sum _{\ell =2}^{\infty }\sum _{k=1}^{\ell -1}E_{x}\left[ I(\tau _{\xi _{\ell }^{(j)}},\tau _{\xi _{\ell }^{(j)}+1};g)\cdot 1_{\left\{ \ell \le N^{(j)}-1\right\} }I(\tau _{\xi _{k}^{(j)}},\tau _{\xi _{k}^{(j)}+1};g)\cdot 1_{\left\{ k\le N^{(j)}-1\right\} }\right] \\&\quad \le \sup _{y\in \partial B_{\delta }(O_{j})}E_{y}\left( I(0,\tau _{1};g)\right) \sum _{\ell =2}^{\infty }\sum _{k=1}^{\ell -1}E_{x}\left[ 1_{\left\{ \ell \le N^{(j)}-1\right\} }I(\tau _{\xi _{k}^{(j)}},\tau _{\xi _{k}^{(j)}+1};g)\cdot 1_{\left\{ k\le N^{(j)}-1\right\} }\right] . \end{aligned}

Now, since for any $$k\le \ell -1,$$ i.e., $$k+1\le \ell$$, $$I(\tau _{\xi _{k}^{(j)}},\tau _{\xi _{k}^{(j)}+1};g)\in {\mathcal {G}}_{\xi _{k}^{(j)} +1}\text { and }1_{\left\{ \ell \le N^{(j)}-1\right\} }\in {\mathcal {G}}_{\xi _{\ell }^{(j)}},$$ we have

\begin{aligned}&E_{x}\left[ 1_{\left\{ \ell \le N^{(j)}-1\right\} }I(\tau _{\xi _{k}^{(j)}},\tau _{\xi _{k}^{(j)}+1};g)\cdot 1_{\left\{ k\le N^{(j)}-1\right\} }\right] \\&=E_{x}\left[ I(\tau _{\xi _{k}^{(j)}},\tau _{\xi _{k}^{(j)}+1};g)\cdot 1_{\{\tau _{\xi _{1}^{(j)}}<N,\ldots ,\tau _{\xi _{\ell } ^{(j)}}<N\}}\right] \\&=E_{x}\left[ E_{Z_{\xi _{k+1}^{(j)}}}\left[ 1_{\{\tau _{\xi _{1}^{(j)} }<N,\ldots ,\tau _{\xi _{\ell -k}^{(j)}}<N\}}\right] 1_{\{\tau _{\xi _{k+1}^{(j)} }<N\}}I(\tau _{\xi _{k}^{(j)}},\tau _{\xi _{k}^{(j)}+1};g)\cdot 1_{\{\tau _{\xi _{1}^{(j)}}<N,\ldots ,\tau _{\xi _{k}^{(j)} }<N\}}\right] \\&=E_{x}\left[ E_{Z_{\xi _{k+1}^{(j)}}}\left[ 1_{\left\{ \ell -k\le N^{(j)}-1\right\} }\right] 1_{\{\tau _{\xi _{k+1}^{(j)}}<N\}}I(\tau _{\xi _{k}^{(j)}}, \tau _{\xi _{k}^{(j)}+1};g)\cdot 1_{\left\{ k\le N^{(j)}-1\right\} }\right] \\&\le \sup \nolimits _{y\in \partial B_{\delta }(O_{j})}P_{y}( \ell -k\le N^{(j)}-1) E_{x}\left[ I(\tau _{\xi _{k}^{(j)}},\tau _{\xi _{k}^{(j)}+1};g)\cdot 1_{\left\{ k\le N^{(j)} -1\right\} }\right] \\&=\sup \nolimits _{y\in \partial B_{\delta }(O_{j})}P_{y}\left( \ell -k\le N_{j}\right) E_{x}\left[ E_{x}\left[ \left. I(\tau _{\xi _{k}^{(j)}},\tau _{\xi _{k}^{(j)}+1};g)\right| {\mathcal {G}}_{\xi _{k}^{(j)}} \right] \cdot 1_{\left\{ k\le N^{(j)}-1\right\} }\right] \\&\le \sup \nolimits _{y\in \partial B_{\delta }(O_{j})}E_{y}\left( I(0,\tau _{1};g)\right) \cdot \sup \nolimits _{y\in \partial B_{\delta }(O_{j} )}P_{y}\left( \ell -k\le N_{j}\right) \cdot P_{x}\left( k\le N_{j}\right) . \end{aligned}

This gives that the expected value of the second sum is less than or equal to

\begin{aligned}&\left( \sup \nolimits _{y\in \partial B_{\delta }(O_{j})}E_{y} I(0,\tau _{1};g) \right) ^{2}\sum \nolimits _{\ell =2}^{\infty }\sum \nolimits _{k=1}^{\ell -1}\sup \nolimits _{y\in \partial B_{\delta }(O_{j})} P_{y}\left( \ell -k\le N_{j}\right) \cdot P_{x}\left( k\le N_{j}\right) \\&\qquad =\left( \sup \nolimits _{y\in \partial B_{\delta }(O_{j})}E_{y} I(0,\tau _{1};g) \right) ^{2}\sum \nolimits _{k =1}^{\infty }\sup \nolimits _{y\in \partial B_{\delta }(O_{j})}P_{y}\left( k\le N_{j}\right) \cdot E_{x}N_{j}. \end{aligned}

Therefore, putting the estimates together gives

\begin{aligned} E_{x} I(0,\tau _{1}^{\varepsilon };g)^{2}&\le 2l\sum _{j\in L}\left[ \sup _{y\in \partial B_{\delta }(O_{j})} E_{y} I(0,\tau _{1};g) \right] ^{2}\cdot E_{x}N_{j}\cdot \sum _{\ell =1}^{\infty }\sup _{y\in \partial B_{\delta }(O_{j})}P_{y}\left( \ell \le N_{j}\right) \\&\quad + l\sum _{j\in L}\left[ \sup _{y\in \partial B_{\delta }(O_{j})} E_{y} I(0,\tau _{1};g)^2\right] \cdot E_{x}N_{j}. \end{aligned}

$$\square$$

### Proof of Lemma 9.2

The main idea of the proof comes from [18, Theorem 3.16].

Given any $$\varepsilon >0,$$ we define $$g^{\varepsilon }\left( t\right) \doteq E_{\lambda ^{\varepsilon }}S_{N^{\varepsilon }\left( t\right) }^{\varepsilon }$$ for any $$t\ge 0.$$ Conditioning on $$\tau _{1}^{\varepsilon }$$ yields

\begin{aligned} g^{\varepsilon }\left( t\right) =\int _{0}^{\infty }E_{\lambda ^{\varepsilon } }[ S_{N^{\varepsilon }\left( t\right) }^{\varepsilon }|\tau _{1}^{\varepsilon }=x] dF^{\varepsilon }\left( x\right) , \end{aligned}

where $$F^{\varepsilon }\left( \cdot \right)$$ is the distribution function of $$\tau _{1}^{\varepsilon }.$$ Note that

\begin{aligned} E_{\lambda ^{\varepsilon }}\left[ S_{N^{\varepsilon }\left( t\right) }^{\varepsilon }|\tau _{1}^{\varepsilon }=x\right] =\left\{ \begin{array} [c]{c} g^{\varepsilon }\left( t-x\right) \text { if }x\le t\\ E_{\lambda ^{\varepsilon }}\left[ S_{1}^{\varepsilon }|\tau _{1}^{\varepsilon }=x\right] \text { if }x>t \end{array} \right. , \end{aligned}

which implies

\begin{aligned} g^{\varepsilon }\left( t\right) =\int _{0}^{t}g^{\varepsilon }\left( t-x\right) dF^{\varepsilon }\left( x\right) +h^{\varepsilon }\left( t\right) , \end{aligned}

with

\begin{aligned} h^{\varepsilon }\left( t\right) =\int _{t}^{\infty }E_{\lambda ^{\varepsilon } }\left[ S_{1}^{\varepsilon }|\tau _{1}^{\varepsilon }=x\right] dF^{\varepsilon }\left( x\right) . \end{aligned}

Since $$E_{\lambda ^{\varepsilon }}S_{1}^{\varepsilon }=\int _{0}^{\infty }E_{\lambda ^{\varepsilon }}\left[ S_{1}^{\varepsilon }|\tau _{1}^{\varepsilon }=x\right] dF^{\varepsilon }\left( x\right) <\infty ,$$ we have $$h^{\varepsilon }\left( t\right) \le E_{\lambda ^{\varepsilon }} S_{1}^{\varepsilon }$$ for all $$t\ge 0.$$ Moreover, if we apply Hölder’s inequality first and then the conditional Jensen’s inequality, we find that for all $$t\ge 0,$$

\begin{aligned} h^{\varepsilon }\left( t\right)&\le \left( \int _{t}^{\infty }\left( E_{\lambda ^{\varepsilon }}\left[ S_{1}^{\varepsilon }|\tau _{1}^{\varepsilon }=x\right] \right) ^{2} dF^{\varepsilon }\left( x\right) \right) ^{\frac{1}{2}}\left( \int _{t}^{\infty }1^{2}dF^{\varepsilon }\left( x\right) \right) ^{\frac{1}{2}}\\&\le \left( 1-F^{\varepsilon }\left( t\right) \right) ^{\frac{1}{2} }\left( \int _{t}^{\infty }E_{\lambda ^{\varepsilon }}[ \left( S_{1}^{\varepsilon }\right) ^{2}|\tau _{1}^{\varepsilon }=x] dF^{\varepsilon }\left( x\right) \right) ^{\frac{1}{2}} \le \left( 1-F^{\varepsilon }\left( t\right) \right) ^{\frac{1}{2} }( E_{\lambda ^{\varepsilon }}\left( S_{1}^{\varepsilon }\right) ^{2}) ^{\frac{1}{2}}. \end{aligned}
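The Hölder/conditional Jensen step above can be illustrated in a toy case: take $$\tau _{1}$$ exponential with rate one and (hypothetically) $$S_{1}=\tau _{1}$$, so that $$h(t)=E[S_{1}1_{\{\tau _{1}>t\}}]=(t+1)e^{-t}$$, $$1-F(t)=e^{-t}$$ and $$E S_{1}^{2}=2$$:

```python
import math

# Toy check of h(t) <= (1 - F(t))^{1/2} (E S_1^2)^{1/2} with tau_1 ~ Exp(1)
# and the hypothetical choice S_1 = tau_1:
#   h(t) = E[S_1 1{tau_1 > t}] = (t + 1) e^{-t},  1 - F(t) = e^{-t},  E S_1^2 = 2.
ts = [0.0, 0.5, 1.0, 2.0, 4.0]
hs = [(t + 1) * math.exp(-t) for t in ts]
bounds = [math.exp(-t / 2) * math.sqrt(2.0) for t in ts]
assert all(h <= bd for h, bd in zip(hs, bounds))
```
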

Given $$\ell \in (0,c-h_1)$$, let $$U^{\varepsilon }\doteq e^{{\ell }/{\varepsilon } }E_{\lambda ^{\varepsilon }}\tau _{1}^{\varepsilon }$$. According to Theorem 8.5, there exist $$\varepsilon _{0}\in (0,1)$$ and a constant $${\tilde{c}}>0$$ such that

\begin{aligned} 1-F^{\varepsilon }\left( U^{\varepsilon }\right) =P_{\lambda ^{\varepsilon } }( \tau _{1}^{\varepsilon }/E_{\lambda ^{\varepsilon }}\tau _{1}^{\varepsilon }>e^{{\ell }/{\varepsilon }}) \le e^{-\tilde{c}e^{{\ell }/{\varepsilon }}} \end{aligned}

for any $$\varepsilon \in (0,\varepsilon _{0}).$$ Also by Theorem 8.5, $$U^{\varepsilon }<T^{\varepsilon }$$ for all $$\varepsilon$$ small enough. Hence, for any $$t\ge U^{\varepsilon }$$,

\begin{aligned} 1-F^{\varepsilon }\left( t\right) \le 1-F^{\varepsilon }\left( U^{\varepsilon }\right) \le e^{-{\tilde{c}}e^{{\ell }/{\varepsilon }}}\text { and }h^{\varepsilon }\left( t\right) \le e^{-{\tilde{c}}e^{{\ell }/{\varepsilon }}/2}( E_{\lambda ^{\varepsilon }}\left( S_{1}^{\varepsilon }\right) ^{2}) ^{\frac{1}{2}}. \end{aligned}

By Proposition 3.4 in , we know that for any $$\varepsilon >0$$, for $$t\in [0,\infty )$$

\begin{aligned} g^{\varepsilon }\left( t\right) =h^{\varepsilon }\left( t\right) +\int _{0}^{t}h^{\varepsilon }\left( t-x\right) da^{\varepsilon }\left( x\right) , \end{aligned}

where

\begin{aligned} a^{\varepsilon }\left( t\right) \doteq \int _{0}^{\infty }E_{\lambda ^{\varepsilon }}\left[ N^{\varepsilon }\left( t\right) |\tau _{1} ^{\varepsilon }=x\right] dF^{\varepsilon }\left( x\right) =E_{\lambda ^{\varepsilon }}\left( N^{\varepsilon }\left( t\right) \right) . \end{aligned}

This implies

\begin{aligned} \frac{E_{\lambda ^{\varepsilon }}S_{N^{\varepsilon }\left( T^{\varepsilon }\right) }^{\varepsilon }}{T^{\varepsilon }}&=\frac{h^{\varepsilon }\left( T^{\varepsilon }\right) }{T^{\varepsilon } }+\frac{1}{T^{\varepsilon }}\int _{0}^{T^{\varepsilon }-U^{\varepsilon } }h^{\varepsilon }\left( T^{\varepsilon }-x\right) da^{\varepsilon }\left( x\right) +\frac{1}{T^{\varepsilon }}\int _{T^{\varepsilon }-U^{\varepsilon } }^{T^{\varepsilon }}h^{\varepsilon }\left( T^{\varepsilon }-x\right) da^{\varepsilon }\left( x\right) ,\\&\le \frac{E_{\lambda ^{\varepsilon }}S_{1}^{\varepsilon }}{T^{\varepsilon } }+ e^{-{\tilde{c}}e^{{\ell }/{\varepsilon }}/2} ( E_{\lambda ^{\varepsilon }}\left( S_{1}^{\varepsilon }\right) ^{2}) ^{\frac{1}{2}}\frac{a^{\varepsilon }\left( T^{\varepsilon }-U^{\varepsilon }\right) }{T^{\varepsilon }}+E_{\lambda ^{\varepsilon }}S_{1}^{\varepsilon }\frac{a^{\varepsilon }\left( T^{\varepsilon }\right) -a^{\varepsilon }\left( T^{\varepsilon }-U^{\varepsilon }\right) }{T^{\varepsilon }}, \end{aligned}

where we use $$h^{\varepsilon }\left( t\right) \le E_{\lambda ^{\varepsilon } }S_{1}^{\varepsilon }$$ to bound the first and third terms, and $$h^{\varepsilon }\left( t\right) \le e^{-{\tilde{c}}e^{{\ell }/{\varepsilon }}/2}(E_{\lambda ^{\varepsilon }}\left( S_{1}^{\varepsilon }\right) ^{2})^{1/2}$$, valid for any $$t\ge U^{\varepsilon }$$ (note that $$T^{\varepsilon }-x\ge U^{\varepsilon }$$ on the domain of the second integral), for the second term.
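As a sanity check of the renewal-equation machinery above, one can solve the renewal equation $$g(t)=h(t)+\int _{0}^{t}g(t-x)dF(x)$$ numerically in the special case of unit-rate exponential interarrival times, where $$a(t)=t$$ and the solution formula $$g=h+\int _{0}^{\cdot }h(\cdot -x)da(x)$$ reduces to a closed form. The choice $$h(t)=e^{-2t}$$ is hypothetical:

```python
import math

# Solve g(t) = h(t) + \int_0^t g(t-x) f(x) dx on [0, 2] by a rectangle-rule
# Volterra scheme, with f(x) = e^{-x} (unit-rate exponential interarrivals,
# so a(t) = E N(t) = t) and the hypothetical h(t) = e^{-2t}.  The solution
# formula g = h + \int_0^t h(t-x) da(x) then gives
#   g(t) = e^{-2t} + \int_0^t e^{-2x} dx = (1 + e^{-2t}) / 2.
dt, K = 0.002, 1000
h = [math.exp(-2 * k * dt) for k in range(K + 1)]
f = [math.exp(-k * dt) for k in range(K + 1)]
g = [h[0]]
for k in range(1, K + 1):
    g.append(h[k] + dt * sum(g[k - i] * f[i] for i in range(1, k + 1)))
closed_form = 0.5 * (1 + math.exp(-2 * K * dt))
assert abs(g[-1] - closed_form) < 0.02   # agreement up to O(dt)
```
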

To calculate the decay rate of the first term, we apply Lemma 7.21 to find that for any $$\eta >0$$, there exists $$\delta _{0}\in (0,1)$$ such that for any $$\delta \in (0,\delta _{0})$$

\begin{aligned}&\liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \frac{E_{\lambda ^{\varepsilon }}S_{1}^{\varepsilon }}{T^{\varepsilon }} \ge \inf _{x\in A}\left[ f\left( x\right) +W\left( x\right) \right] -W\left( O_{1}\right) + c-h_1-\eta . \end{aligned}
(12.1)

For the decay rate of the second term, given any $$\delta >0$$

\begin{aligned}&\liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left( e^{-\tilde{c}e^{{\ell }/{\varepsilon }}/2} (E_{\lambda ^{\varepsilon }}\left( S_{1}^{\varepsilon }\right) ^{2}) ^{\frac{1}{2}}\frac{a^{\varepsilon }\left( T^{\varepsilon }-U^{\varepsilon }\right) }{T^{\varepsilon }}\right) \nonumber \\&\quad =\frac{{\tilde{c}}}{2}\liminf _{\varepsilon \rightarrow 0}\varepsilon e^{{\ell }/{\varepsilon }}+\liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left( ( E_{\lambda ^{\varepsilon }}\left( S_{1}^{\varepsilon }\right) ^{2}) ^{\frac{1}{2}}\frac{a^{\varepsilon }\left( T^{\varepsilon }-U^{\varepsilon }\right) }{T^{\varepsilon }}\right) =\infty , \end{aligned}
(12.2)

where the last equality holds since $$\ell >0$$ implies $$\liminf _{\varepsilon \rightarrow 0}\varepsilon e^{{\ell }/{\varepsilon }}=\infty$$ and also because Lemma 7.23 and Corollary 8.3 ensure that

$$\liminf _{\varepsilon \rightarrow 0}-\varepsilon \log ((E_{\lambda ^{\varepsilon }}\left( S_{1}^{\varepsilon }\right) ^{2})^{1/2}a^{\varepsilon }(T^{\varepsilon }-U^{\varepsilon })/T^{\varepsilon })$$ is bounded below by a constant.

For the last term, note that for any fixed $$\varepsilon$$, the renewal function $$a^{\varepsilon }\left( t\right)$$ is subadditive in t (see for example Lemma 1.2 in ), so we have $$a^{\varepsilon }\left( T^{\varepsilon }\right) -a^{\varepsilon }\left( T^{\varepsilon }-U^{\varepsilon }\right) \le a^{\varepsilon }\left( U^{\varepsilon }\right) .$$ Thus, we apply Lemma 7.21, Corollary 8.3 and Theorem 8.5 to find that for any $$\eta >0$$, there exists $$\delta _{0}\in (0,1)$$ such that for any $$\delta \in (0,\delta _{0})$$,

\begin{aligned}&\liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \left( E_{\lambda ^{\varepsilon }}S_{1}^{\varepsilon }\frac{a^{\varepsilon }\left( T^{\varepsilon }\right) -a^{\varepsilon }\left( T^{\varepsilon }-U^{\varepsilon }\right) }{T^{\varepsilon }}\right) \nonumber \\&\quad \ge \liminf _{\varepsilon \rightarrow 0}-\varepsilon \log E_{\lambda ^{\varepsilon }}S_{1}^{\varepsilon }+\liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \frac{a^{\varepsilon }\left( U^{\varepsilon }\right) }{U^{\varepsilon }}+\liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \frac{U^{\varepsilon }}{T^{\varepsilon }}\nonumber \\&\quad \ge \inf _{x\in A}\left[ f\left( x\right) +W\left( x\right) \right] -W\left( O_{1}\right) +\left( c-h_1-\ell \right) -\eta . \end{aligned}
(12.3)

Since (12.3) holds for all $$\ell \in (0,c-h_1)$$, by sending $$\ell$$ to 0 we see that (12.3) also holds with $$\ell =0$$.

Putting the bounds (12.1), (12.2) and (12.3) with $$\ell =0$$ together gives that for any $$\eta >0$$, there exists $$\delta _{0}\in (0,1)$$ such that for any $$\delta \in (0,\delta _{0})$$,

\begin{aligned}&\liminf _{\varepsilon \rightarrow 0}-\varepsilon \log \frac{E_{\lambda ^{\varepsilon }}S_{N^{\varepsilon }\left( T^{\varepsilon }\right) }^{\varepsilon }}{T^{\varepsilon }} \ge \inf _{x\in A}\left[ f\left( x\right) +W\left( x\right) \right] -W\left( O_{1}\right) + c-h_1-\eta . \end{aligned}

$$\square$$

### Proof of Lemma 9.5

By the definition of W(x),

\begin{aligned}&2\inf _{x\in A}[ f\left( x\right) +W\left( x\right) ] -2W\left( O_{1}\right) -h_1\\&=2\inf _{x\in A}[ f\left( x\right) +\min _{j\in L}\left( V(O_{j},x)+W\left( O_{j}\right) \right) ] -2W\left( O_{1}\right) -h_1\\&=\min _{j\in L}\{ 2\inf _{x\in A}\left[ f\left( x\right) +V\left( O_{j},x\right) \right] +2W\left( O_{j}\right) -2W\left( O_{1}\right) -h_1\} . \end{aligned}

Define $$Q_{j}\doteq 2\inf _{x\in A}\left[ f\left( x\right) +V\left( O_{j},x\right) \right] +2W\left( O_{j}\right) -2W\left( O_{1}\right) -h_1$$. Then, it suffices to show that $$Q_{j}\ge R_{j}^{(2)}$$ for all $$j\in L.$$

For $$j=1,$$ $$Q_{1}=2\inf _{x\in A}\left[ f\left( x\right) +V\left( O_{1},x\right) \right] -h_1=R_{1}^{(2)}.$$ For $$j\in L\setminus \{1\},$$ $$Q_{j}\ge R_{j}^{(2)}$$ if and only if $$W\left( O_{j}\right) -h_1\ge W\left( O_{1}\cup O_{j}\right) .$$ Recall that

\begin{aligned} W\left( O_{j}\right)= & {} \min _{g\in G\left( j\right) }\left[ {\textstyle \sum _{\left( m\rightarrow n\right) \in g}}V\left( O_{m},O_{n}\right) \right] \text { and } \\ W\left( O_{1}\cup O_{j}\right)= & {} \min _{g\in G\left( 1,j\right) }\left[ {\textstyle \sum _{\left( m\rightarrow n\right) \in g}}V\left( O_{m},O_{n}\right) \right] . \end{aligned}

Therefore, for any $${\tilde{g}}\in G\left( j\right)$$ such that $$W\left( O_{j}\right) =\sum \nolimits _{\left( m\rightarrow n\right) \in {\tilde{g}} }V\left( O_{m},O_{n}\right) ,$$ if we remove the arrow starting from 1, and denote its endpoint by i, then it is easy to see that $${\hat{g}}\doteq {\tilde{g}}\setminus \{(1,i)\}\in G(1,j)$$. Since $$V(O_{1},O_{i})\ge h_1,$$ we find that

\begin{aligned} W\left( O_{j}\right) -h_1&=\sum \nolimits _{\left( m\rightarrow n\right) \in {\tilde{g}}}V\left( O_{m},O_{n}\right) -h_1\\&=\sum \nolimits _{\left( m\rightarrow n\right) \in {\hat{g}}}V\left( O_{m} ,O_{n}\right) +V(O_{1},O_{i})-h_1\\&\ge \min _{g\in G\left( 1,j\right) }\left[ {\textstyle \sum _{\left( m\rightarrow n\right) \in g}}V\left( O_{m},O_{n}\right) \right] =W\left( O_{1}\cup O_{j}\right) . \end{aligned}

$$\square$$
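The graph-counting step in this last proof can be checked by brute force for a small number of equilibria. The sketch below enumerates the Freidlin–Wentzell W-graphs for three states with random positive costs $$V(O_m,O_n)$$ and verifies $$W(O_j)-h_1\ge W(O_1\cup O_j)$$; here $$h_1$$ is taken to be $$\min _{i\ne 1}V(O_1,O_i)$$, which is an assumption about the definition of $$h_1$$ used in the paper:

```python
import itertools
import random

random.seed(0)
L = [0, 1, 2]            # three equilibria; index 0 plays the role of O_1
V = [[0.0 if i == j else random.uniform(0.5, 3.0) for j in L] for i in L]

def W(roots):
    """Minimum over graphs g in G(roots) of the sum of arrow costs V[m][n]:
    every node outside `roots` has exactly one outgoing arrow, and following
    arrows from any such node leads into `roots` without cycling."""
    nodes = [i for i in L if i not in roots]
    best = float("inf")
    for targets in itertools.product(L, repeat=len(nodes)):
        arrows = dict(zip(nodes, targets))
        if any(arrows[i] == i for i in nodes):
            continue                      # no self-arrows
        ok = True
        for i in nodes:                   # every path must reach `roots`
            seen, cur = set(), i
            while cur not in roots:
                if cur in seen:
                    ok = False
                    break
                seen.add(cur)
                cur = arrows[cur]
            if not ok:
                break
        if ok:
            best = min(best, sum(V[i][arrows[i]] for i in nodes))
    return best

h1 = min(V[0][i] for i in L if i != 0)    # assumed definition of h_1
for j in (1, 2):
    assert W({j}) - h1 >= W({0, j}) - 1e-12
```

Removing the arrow out of node 0 from an optimal graph in G(j) is exactly the operation in the proof, which is why the inequality holds for any positive cost matrix.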