1 Introduction

Let \(\{X_t,\ t\ge 0\}\) be a symmetric \(\alpha \)-stable process on \(\mathbb {R}^d\) with \(\alpha \in (0, 2]\), defined on a filtered probability space \((\Omega ,\{\mathscr {F}_t\},\mathbb {P})\). Denote by \(\{\mathbb {P}_x,\ x\in \mathbb {R}^d\}\) the corresponding Markov family and by \(\mathbb {E}_x\) the expectation under \(\mathbb {P}_x\). Let D be a domain in \(\mathbb {R}^d\) with finite Lebesgue measure, and let

$$\begin{aligned} \tau _D=\inf \{t>0:\ X_t\notin D\} \end{aligned}$$

be the first exit time from D. Adjoining an extra point \(\Delta \) to D, we define the process absorbed (killed) at \(\Delta \) by

$$\begin{aligned} X_t^D =\left\{ \begin{array}{cl}\displaystyle X_t, &{}\text{ if } \displaystyle \tau _D>t,\\ \displaystyle \Delta ,&{}\text{ if } \displaystyle \tau _D\le t.\\ \end{array}\right. \end{aligned}$$
(1.1)
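For concreteness, the absorbed process (1.1) can be simulated by a simple time-discretization; the sketch below treats the Brownian instance \(\alpha =2\) on the interval \(D=(-\pi /2,\pi /2)\) used in the closing example, with np.nan standing in for the cemetery point \(\Delta \). The function name, step size and domain are illustrative assumptions, not taken from the text.

```python
import numpy as np

def sample_killed_path(x0, t_max, dt=1e-3, rng=None):
    """Euler sketch of X^D in (1.1) for the Brownian case (alpha = 2)
    on D = (-pi/2, pi/2): the path is frozen at np.nan (the cemetery
    point Delta) from the first step at which it leaves D."""
    rng = np.random.default_rng() if rng is None else rng
    n = int(np.ceil(t_max / dt))
    times = np.arange(n + 1) * dt
    path = np.empty(n + 1)
    path[0] = x0
    for i in range(1, n + 1):
        path[i] = path[i - 1] + np.sqrt(dt) * rng.standard_normal()
        if abs(path[i]) >= np.pi / 2:     # exit: tau_D <= i*dt
            path[i:] = np.nan             # absorbed at Delta afterwards
            break
    return times, path

times, path = sample_killed_path(0.0, t_max=2.0, rng=np.random.default_rng(1))
alive = ~np.isnan(path)
assert np.all(np.abs(path[alive]) < np.pi / 2)   # strictly inside D before tau_D
```

For \(\alpha <2\) one would instead use increments of a symmetric \(\alpha \)-stable law scaled by \(dt^{1/\alpha }\).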

For such a process, an interesting problem is to study its long-term behavior conditioned on \(\{\tau _D>t\}\). A well-studied object is the quasi-stationary distribution, i.e., a probability distribution \(\nu \) satisfying

$$\begin{aligned} P_\nu (X_t\in \cdot |\tau _D>t)=\nu \quad \forall t>0, \end{aligned}$$

where \(P_\nu \) is the probability law of the process with initial distribution \(\nu \); see [5, 7,8,9,10]. Under some mild conditions, we also know that starting from any initial distribution \(\mu \),

$$\begin{aligned} P_\mu (X_t\in \cdot |\tau _D>t)\rightarrow \nu \quad \text{ as }\ t\rightarrow \infty . \end{aligned}$$

Further investigation revealed that, if we consider the Cesàro average \(\frac{1}{t}\int _0^tf(X_s)\mathrm{d}s\) of a functional f, the limiting distribution is substantially different from the quasi-stationary distribution; see [1,2,3] and the references cited therein. We call such a distribution a quasi-ergodic distribution. Recall that for a conservative Markov process \(X=\{X_t,\ t\ge 0\}\), under some irreducibility conditions, if \(\nu \) is the stationary distribution, then starting from any initial distribution \(\mu \), both \(P_\mu (X_t\in \cdot )\) and the above Cesàro average converge to \(\nu \). This difference deserves further investigation. In the present paper, in connection with the quasi-ergodic distribution, we study \(L^1\)-ergodicity by deriving a certain deviation inequality.

To state our results precisely, we need some notation. If we denote the transition function of \(X^D\) by \(P^D (t;\cdot ,\cdot )\), then it has a density \(p^D (t;\cdot ,\cdot )\) with respect to the Lebesgue measure on \(\mathbb {R}^d\). The semigroup and generator of \(X_t^D \) are denoted by \(P^D=\{P^D_t,\ t\ge 0\}\) and \(\mathcal {L}^D\), respectively. Since the Lebesgue measure of D is finite, \(\mathcal {L}^D\) has discrete spectrum. Let \(0<\lambda _1\le \lambda _2\le \cdots \), with \(\lambda _n\rightarrow \infty \), be the eigenvalues of \(-\mathcal {L}^D\), repeated according to multiplicity, and let \(\{\varphi _n,\ n\ge 1\}\) be the corresponding eigenfunctions, normalized so that they form an orthonormal basis of \(L^2(D)\). We impose the following assumptions:

(A \(_1\) ) :

\(\lambda _1<\lambda _2\), \(\lambda _1\) is simple and \(\varphi _1>0\) on D;

(A \(_2\) ) :

\(\{P^D_t,\ t\ge 0\}\) is ultracontractive, i.e., for each \(t>0\), \(P^D_t\) is bounded from \(L^2(D)\) to \(L^\infty (D)\).

The following facts are well known:

Theorem 1.1

(Davies [6]) Assume (A\(_1\)) and (A\(_2\)). Then

  1. (i)

    the \(\varphi _n\)’s are continuous, and there is a nonnegative function \(c_t\), decreasing in t, such that

    $$\begin{aligned} \varphi _n^2(x)\le c_te^{2\lambda _nt/3},\quad \forall n\ge 1,\ t>0 \text{ and } x\in D. \end{aligned}$$
    (1.2)
  2. (ii)

    For each \(t>0\),

    $$\begin{aligned} p^D(t;x,y)=\sum _{n=1}^\infty \exp (-\lambda _nt) \varphi _n(x)\varphi _n(y), \end{aligned}$$
    (1.3)

    and the series is uniformly convergent on \(D\times D\).

It immediately follows that

$$\begin{aligned} p^D(t;x,y)\thicksim e^{-\lambda _1 t}\varphi _1(x)\varphi _1(y)\quad \text{ as } t\rightarrow \infty ,\ \ x,y\in D. \end{aligned}$$
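This asymptotic relation, and the expansion (1.3) behind it, can be checked numerically for the killed Brownian motion on \(D=(-\pi /2,\pi /2)\) (the example at the end of the paper), where the eigendata are explicit: \(\lambda _n=n^2/2\) and \(\varphi _n(x)=\sqrt{2/\pi }\,\sin (n(x+\pi /2))\). A sketch:

```python
import numpy as np

# Explicit eigendata for Brownian motion (generator (1/2)d^2/dx^2) killed
# outside D = (-pi/2, pi/2): lambda_n = n^2/2 and, normalized in L^2(D),
# phi_n(x) = sqrt(2/pi) * sin(n(x + pi/2)).
def lam(n):
    return n * n / 2.0

def phi(n, x):
    return np.sqrt(2 / np.pi) * np.sin(n * (x + np.pi / 2))

def p_D(t, x, y, n_terms=200):
    # Truncation of the spectral series (1.3)
    n = np.arange(1, n_terms + 1)
    return np.sum(np.exp(-lam(n) * t) * phi(n, x) * phi(n, y))

x, y, t = 0.3, -0.5, 5.0
leading = np.exp(-lam(1) * t) * phi(1, x) * phi(1, y)
# For large t the first mode dominates exponentially
assert abs(p_D(t, x, y) - leading) / leading < 1e-3
```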

As a consequence of the theorem, we have the following corollary, which gives a probabilistic interpretation of \(\lambda _1\):

Corollary 1.1

Assume (A\(_1\)) and (A\(_2\)). Then for \(x\in D,\)

$$\begin{aligned} \lim _{t\rightarrow \infty }e^{\lambda _1t}P_x(\tau >t)=\varphi _1(x)||\varphi _1||_1 \end{aligned}$$
(1.4)

and

$$\begin{aligned} \lim _{t\rightarrow \infty }e^{\lambda _1t}\sup _{x\in D}P_x(\tau >t)=||\varphi _1||_{\infty }||\varphi _1||_1. \end{aligned}$$
(1.5)

More significantly, Theorem 1.1 allows the unique quasi-stationary distribution and the unique quasi-ergodic distribution of the process to be constructed directly. More explicitly, we have the following

Theorem 1.2

Assume (\(\hbox {A}_1\)) and (\(\hbox {A}_2\)). Define two probability measures \(\nu \) and \(\mu _0\) on D by

$$\begin{aligned} \mathrm{d}\nu =c_1\varphi _11_D\mathrm{d}x\qquad \text{ and }\quad \mathrm{d}\mu _0=c_2\varphi _1^21_D\mathrm{d}x \end{aligned}$$
(1.6)

respectively, where \(c_1\) and \(c_2\) are the respective normalizing constants. Then for any \(x\in D\) and any bounded measurable function f on D,

$$\begin{aligned} \lim _{t\rightarrow \infty }E_x[f(X_t)|\tau _D>t]=\int _D f\mathrm{d}\nu \end{aligned}$$

and

$$\begin{aligned} \lim _{t\rightarrow \infty }E_x\left[ \frac{1}{t}\int _0^tf(X_s)\mathrm{d}s|\tau _D>t\right] =\int _Df\mathrm{d}\mu _0. \end{aligned}$$
(1.7)

The proof is a direct application of assumptions (A\(_1\)), (A\(_2\)) and of (1.3).
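The two limiting distributions in Theorem 1.2 are genuinely different: \(\nu \) weights by \(\varphi _1\), while \(\mu _0\) weights by \(\varphi _1^2\). A quadrature sketch for the interval example at the end of the paper (\(\varphi _1(x)=\cos x\) on \(D=(-\pi /2,\pi /2)\)), comparing second moments; the closed forms are elementary integrals:

```python
import numpy as np

# Interval example: phi_1(x) = cos x on D = (-pi/2, pi/2), so
# d(nu) = c_1 cos(x) dx and d(mu_0) = c_2 cos^2(x) dx as in (1.6).
n = 200000
h = np.pi / n
x = -np.pi / 2 + h * (np.arange(n) + 0.5)      # midpoint grid on D
phi1 = np.cos(x)

def moment(weight, f):
    """E_w[f] for the probability density proportional to `weight`."""
    return np.sum(weight * f) / np.sum(weight)

m_nu = moment(phi1, x**2)       # second moment under nu
m_mu0 = moment(phi1**2, x**2)   # second moment under mu_0
# Closed forms: E_nu[x^2] = pi^2/4 - 2, E_mu0[x^2] = pi^2/12 - 1/2
assert abs(m_nu - (np.pi**2 / 4 - 2)) < 1e-6
assert abs(m_mu0 - (np.pi**2 / 12 - 0.5)) < 1e-6
assert m_mu0 < m_nu
```

Since \(\mu _0\) weights by \(\varphi _1^2\), it puts more mass where the ground state is large, whence the smaller second moment.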

The motivation for the present paper is to strengthen the convergence (1.7) to

$$\begin{aligned} \lim _{t\rightarrow \infty }E_x\left[ \left| \frac{1}{t}\int _0^tf(X_s)\hbox {d}s-\int _Df\hbox {d}\mu _0\right| \Big |\tau _D>t\right] =0 \end{aligned}$$
(1.8)

for not necessarily bounded f. The approach we adopt is via a deviation inequality. The deviation inequality we derive concerns functions in a Kato class J, defined as follows (see [4], p. 62, for more details). Set

$$\begin{aligned} g(u)=g(|u|)=\left\{ \begin{array}{cl}\displaystyle |u|^{2-d}, &{}\quad \text{ if } \displaystyle d\ge 3,\\ \displaystyle -\log |u|,&{}\quad \text{ if } \displaystyle d=2,\\ \displaystyle |u|,&{}\quad \text{ if } \displaystyle d=1.\\ \end{array}\right. \end{aligned}$$
(1.9)

Let V be a measurable function from \(\mathbb {R}^d\) to \([-\infty ,+\infty ]\). Then \(V\in J\) if and only if

$$\begin{aligned} \lim _{s\downarrow 0}\left[ \sup _{x\in \mathbb {R}^d}\int _{|y-x|<s}|g(y-x)V(y)|\hbox {d}y \right] =0. \end{aligned}$$

In this paper, V is defined only on the domain D, but we extend it to \(\mathbb {R}^d\) by setting it to be 0 on \(\mathbb {R}^d\setminus D\).
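To make the membership criterion concrete in dimension \(d=1\) (where \(g(u)=|u|\)), the following sketch approximates the quantity inside the limit for the hypothetical singular potential \(V(y)=|y|^{-\beta }\); for \(\beta <2\) the modulus vanishes like \(s^{2-\beta }\), so \(V\in J\). The reference interval for the centers and the grid sizes are illustrative assumptions.

```python
import numpy as np

def kato_modulus(V, s, n=200000, centers=np.linspace(-1.0, 1.0, 81)):
    """Approximate sup_x int_{|u|<s} |u| |V(x+u)| du for d = 1, where
    g(u) = |u|; the sup is taken over a finite grid of centers (a sketch)."""
    du = 2 * s / n
    u = -s + du * (np.arange(n) + 0.5)     # midpoint grid, avoids u = 0
    return max(np.sum(np.abs(u) * np.abs(V(x + u))) * du for x in centers)

beta = 0.5                                  # hypothetical V(y) = |y|^{-beta}
V = lambda y: np.abs(y) ** (-beta)          # beta < 2: modulus -> 0, so V in J
vals = [kato_modulus(V, s) for s in (0.4, 0.2, 0.1, 0.05)]
assert all(a > b for a, b in zip(vals, vals[1:]))   # decreasing as s -> 0
assert vals[-1] < 0.05
```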

For a function V in the Kato class J, we can define the Feynman–Kac semigroup \(\{P_t^{V},t\ge 0\}\) on \(L^2(D)\) by:

$$\begin{aligned} P_t^{V}f(x)=\mathbb {E}_x\left\{ \exp \left[ \int _0^tV(X_s)\hbox {d}s\right] f(X_t);\tau >t\right\} , \quad f\in L^2(D). \end{aligned}$$

The generator \(A_V\) of \(P^V\) is \(\mathcal {L}^D+V\). The domain of the generator is denoted by \(\mathcal {D}(A_V)\).

2 The governing functional

In this section, we introduce the functional which will govern the deviation inequality we are going to derive. We use the Poincaré inequality to obtain an upper bound on the exponential growth rate of the Feynman–Kac semigroup. The Legendre transform of this bound will be shown to be exactly the functional governing the deviation inequality. By the Fenchel–Legendre theorem, this functional can be expressed in a more explicit form. Following a standard variational approach, we provide some sufficient conditions for the functional to attain a unique minimum.

For \(\lambda \in \mathbb {R}\), we define

$$\begin{aligned} \begin{aligned} H_V(\lambda )&=\sup \left\{ \int _D(f\cdot A_{\lambda V}f)\hbox {d}x: f\in \mathcal {D}(A_{\lambda V}),\int _Df^2\hbox {d}x=1\right\} \\&=\sup \left\{ \int _D\left( \lambda Vf^2+f\mathcal {L}^Df\right) \hbox {d}x: f\in \mathcal {D}(A_{\lambda V}),\int _Df^2\hbox {d}x=1\right\} . \end{aligned} \end{aligned}$$
(2.1)

Then

$$\begin{aligned} ||P_t^{\lambda V}||_2\le e^{tH_V(\lambda )}. \end{aligned}$$
(2.2)

We further express \(H_V\) as the Legendre transform of a certain function \(J_V\) on \(\mathbb {R}\):

$$\begin{aligned} \begin{aligned} H_V(\lambda )&=\sup \left\{ \int _D\left( \lambda Vf^2+f\mathcal {L}^D f\right) \hbox {d}x: f\in \mathcal {D}(A_V),\int _Df^2\hbox {d}x=1\right\} \\&=\sup _{z\in \mathbb {R}}\sup _{\int _DVf^2\hbox {d}x=z}\left\{ \lambda z-\int _D-f\mathcal {L}^D f\hbox {d}x: f\in \mathcal {D}(A_V),\int _Df^2\hbox {d}x=1\right\} \\&=\sup _{z\in \mathbb {R}}\{\lambda z-J_V(z)\}, \end{aligned} \end{aligned}$$
(2.3)

where

$$\begin{aligned} J_V(z)=\inf \left\{ -\int _Df\mathcal {L}^Df\hbox {d}x:\ \int _Df^2\hbox {d}x=1,\ \int _DVf^2\hbox {d}x=z\right\} . \end{aligned}$$
(2.4)
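The objects \(H_V\), \(J_V\) and the duality (2.3) can be explored numerically. The following finite-difference sketch, for the interval example at the end of the paper with the illustrative choice \(V(x)=x^2\), computes \(H_V(\lambda )\) as the top eigenvalue of \(\mathcal {L}^D+\lambda V\), checks \(H_V(0)=-\lambda _1\), and checks that the Legendre transform of \(H_V\) attains the value \(\lambda _1\) at \(a=\int _DV\mathrm{d}\mu _0\) (cf. (2.6)); the grid sizes are ad hoc.

```python
import numpy as np

# Finite-difference sketch on D = (-pi/2, pi/2) with L^D = (1/2) d^2/dx^2
# (Dirichlet boundary) and the illustrative choice V(x) = x^2.
n = 400
h = np.pi / (n + 1)
x = -np.pi / 2 + h * np.arange(1, n + 1)
L = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / (2 * h * h)
V = x ** 2

def H(lmbda):
    # H_V(lambda) = top eigenvalue of L^D + lambda*V, as in (2.1)
    return np.linalg.eigvalsh(L + lmbda * np.diag(V))[-1]

assert abs(H(0.0) + 0.5) < 1e-3        # H_V(0) = -lambda_1 with lambda_1 = 1/2

# Ground state and a = int V d(mu_0), cf. (2.6)
w, U = np.linalg.eigh(L)
phi1 = U[:, -1]                         # eigenvector for eigenvalue -lambda_1
a = np.sum(V * phi1 ** 2) / np.sum(phi1 ** 2)
assert abs(a - (np.pi ** 2 / 12 - 0.5)) < 1e-3

# Duality (2.3): sup_lambda [lambda*a - H_V(lambda)] = lambda_1
I_a = max(l * a - H(l) for l in np.linspace(-2, 2, 41))
assert abs(I_a - 0.5) < 1e-3
```

Since the top eigenvalue of the affine family \(L+\lambda V\) is convex in \(\lambda \), the supremum is attained at \(\lambda =0\), in agreement with Lemma 2.1 below.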

As we stated in the Introduction, the probability measure \(\mu _0\) defined by

$$\begin{aligned} \hbox {d}\mu _0=c_2\varphi _1^2(x)1_D \hbox {d}x \end{aligned}$$
(2.5)

is crucial for our study: it gives the conditional limit of the Cesàro averages.

Lemma 2.1

\(J_V:\mathbb {R}\rightarrow [\lambda _1,\infty ]\) is a convex function and attains its minimum \(\lambda _1\) at

$$\begin{aligned} a=\int _DV\mathrm{d}\mu _0, \end{aligned}$$
(2.6)

provided that \(V\in L^1(\mu _0)\).

Proof

The convexity of \(J_V\) can be verified directly from its definition. The last statement is just a consequence of assumptions (\(\hbox {A}_1\)) and (\(\hbox {A}_2\)). \(\square \)

By the convexity of \(J_V\), \(\{J_V<+\infty \}^\circ \) is an open interval \((l_-,l_+)\), where \(-\infty \le l_-< l_+\le +\infty \). We now define the function \(I_V\) as the lower semicontinuous version of \(J_V\):

$$\begin{aligned} I_V(z)=\lim _{\delta \downarrow 0}\inf _{y: |y-z|<\delta }J_V(y), \end{aligned}$$

or more precisely,

$$\begin{aligned} I_V(x)=\left\{ \begin{array}{cl}\displaystyle J_V(x), &{}\quad \text{ if } \displaystyle l_-<x<l_+,\\ \displaystyle J_V(l_++),&{}\quad \text{ if } \displaystyle x=l_+,\\ \displaystyle J_V(l_--),&{}\quad \text{ if } \displaystyle x=l_-,\\ \displaystyle \infty ,&{}\quad \text{ otherwise }.\\ \end{array}\right. \end{aligned}$$
(2.7)

Corollary

Under the assumptions of the above lemma, \(I_V\) is convex and lower semicontinuous, and attains its minimum at the unique point a.

In the next section, we will see that \(I_V\) is actually the governing functional for the deviation inequality. Now we study some further properties of \(I_V\).

Proposition 2.1

  1. (i)

    For any \(x \in \mathbb {R}\), we have that

    $$\begin{aligned} I_V(x)=\sup _{\lambda \in \mathbb {R}}\{\lambda x-H_V(\lambda )\}. \end{aligned}$$
    (2.8)
  2. (ii)

\(H_V\) is a convex function, and for any \(b>a\),

    $$\begin{aligned} I_V(b)=\sup _{\lambda \ge 0}\{\lambda b-H_V(\lambda )\} \end{aligned}$$
    (2.9)

    and for \(b<a\)

    $$\begin{aligned} I_V(b)=\sup _{\lambda \le 0}\{\lambda b-H_V(\lambda )\}. \end{aligned}$$
    (2.10)

Proof

  1. (i)

    The result is due to the celebrated Fenchel–Legendre theorem.

  2. (ii)

We only need to prove (2.9). By the convexity of \(J_V\) (Lemma 2.1) and (2.3), we see that \(H_V\) is convex.

Taking \(z=a\) in (2.3), we see that

$$\begin{aligned} H_V(\lambda )\ge \lambda \cdot a-\lambda _1. \end{aligned}$$

Thus for \(\lambda <0,\)

$$\begin{aligned} \lambda b-H_V(\lambda )\le \lambda \cdot ( b-a)+\lambda _1 <\lambda _1. \end{aligned}$$

But since, by (2.7) and Lemma 2.1,

$$\begin{aligned} \inf _{x\in \mathbb {R}}I_V(x)=\lambda _1, \end{aligned}$$

it follows that

$$\begin{aligned} I_V(b)=\sup _{\lambda \in \mathbb {R}}\{\lambda b-H_V(\lambda )\}=\sup _{\lambda \ge 0}\{\lambda b-H_V(\lambda )\}. \end{aligned}$$

\(\square \)

The next results are important for the quasi-ergodicity to be discussed in §3.

Theorem 2.1

If \(V\in J\cap L^1(\mu _0)\), then for any \(\epsilon >0\), there exists a \(\gamma >0\), such that \(\forall b\ge a\),

$$\begin{aligned} I_V(b+\epsilon )\ge \lambda _1+\gamma (b-a) , \end{aligned}$$

and \(\forall b\le a\),

$$\begin{aligned} I_V(b-\epsilon )\ge \lambda _1+\gamma (a-b). \end{aligned}$$

Proof

We only need to prove the first inequality. We first note that according to the Corollary, \(I_V\) attains its minimum at the unique point a. It is easy to see that \(H_V\) is lower semicontinuous and \(H_V(0)=-\lambda _1\). Thus

$$\begin{aligned} \limsup _{\lambda \rightarrow 0+}[\lambda (a+\epsilon )-H_V(\lambda )]\le \lambda _1. \end{aligned}$$

From this, Proposition 2.1 and the assumption, we see that there is a \(\gamma >0\) such that

$$\begin{aligned} I_V(a+\epsilon )=\sup _{\lambda \ge \gamma }[\lambda (a+\epsilon )-H_V(\lambda )]>\lambda _1. \end{aligned}$$

Now it follows that for each \(b\ge 0\),

$$\begin{aligned} I_V(a+\epsilon +b)=\sup _{\lambda \ge 0}[\lambda (a+\epsilon +b)-H_V(\lambda )]\ge \sup _{\lambda \ge \gamma }[\lambda (a+\epsilon +b)-H_V(\lambda )]\ge \lambda _1+\gamma b. \end{aligned}$$

\(\square \)

3 The deviation inequality and quasi-ergodicity

In this section, we first derive a deviation inequality for \(\frac{1}{t}\int _0^tV(X_s)\hbox {d}s\), \(V\in J\), governed by the functional \(I_V\). Then we apply this inequality to quasi-ergodicity.

Let |D| denote the Lebesgue measure of D. Define a measure \(\mu _1\) by

$$\begin{aligned} \mu _1=\frac{1_Ddx}{|D|}. \end{aligned}$$
(3.1)

The main result of this section is the following

Theorem 3.1

Let \(V \in J\cap L^1(\mu _0)\) and let \(\nu \in \mathscr {P}_1(D)\) satisfy \(\nu \ll \mu _1\) with \(\left\| \frac{\mathrm{d}\nu }{\mathrm{d}\mu _1}\right\| _2<\infty \). Then for any \(t>0\) and \(b>0\), we have that

$$\begin{aligned} P_\nu \left[ \frac{1}{t}\int _0^tV(X_s)\mathrm{d}s-a>b,\tau >t\right]\le & {} \left\| \frac{\mathrm{d}\nu }{\mathrm{d}\mu _1}\right\| _2\exp [-tI_V(a+b)], \end{aligned}$$
(3.2)
$$\begin{aligned} P_\nu \left[ \frac{1}{t}\int _0^tV(X_s)\mathrm{d}s-a<-b,\tau >t\right]\le & {} \left\| \frac{\mathrm{d}\nu }{\mathrm{d}\mu _1}\right\| _2\exp [-tI_V(a-b)]. \end{aligned}$$
(3.3)

Proof

For any \(t,\ b>0\), by Chebyshev’s inequality and (2.2),

$$\begin{aligned} \begin{aligned}&P_\nu \left[ \frac{1}{t}\int _0^tV(X_s)\mathrm{d}s-a>b,\tau>t\right] \\&\quad \le \inf _{\lambda>0}\exp [-\lambda t(a+b)]E_\nu \left\{ \exp \left[ \int _0^t\lambda V(X_s)\mathrm{d}s\right] ,\tau>t\right\} \\&\quad \le \inf _{\lambda>0}\exp [-\lambda t(a+b)]\int _DE_x\left\{ \exp \left[ \int _0^t\lambda V(X_s)\mathrm{d}s\right] ,\tau>t\right\} \frac{\mathrm{d}\nu }{\mathrm{d}x}\mathrm{d}x\\&\quad \le \left\| \frac{\mathrm{d}\nu }{\mathrm{d}x}\right\| _2\cdot \inf _{\lambda>0}\exp [-\lambda t(a+b)]\cdot ||P_t^{\lambda V}1||_2\\&\quad \le \sqrt{|D|}\cdot \left\| \frac{\mathrm{d}\nu }{\mathrm{d}x}\right\| _2\cdot \inf _{\lambda>0}\left\{ \exp [-\lambda t(a+b)]\cdot e^{tH_V(\lambda )}\right\} \\&\quad =\left\| \frac{\mathrm{d}\nu }{\mathrm{d}\mu _1}\right\| _2\exp \left\{ -t\sup _{\lambda >0}[\lambda (a+b)-H_V(\lambda )]\right\} . \end{aligned} \end{aligned}$$
(3.4)

It follows from Proposition 2.1 that

$$\begin{aligned} \sup _{\lambda >0}\{\lambda (a+b)-H_V(\lambda )\}=I_V(a+b). \end{aligned}$$

The first assertion follows. The second one follows by replacing V with \(-V\). \(\square \)

Remark 3.1

The theorem is motivated by the paper [11], which deals with general conservative Markov processes.

The next lemma studies the absorption probability: the first Dirichlet eigenvalue characterizes the absorption rate of the killed process.

Lemma 3.1

As \(t\rightarrow \infty \), \(\sum _{n=2}^\infty \exp (-(\lambda _n-\lambda _1)t) \varphi _n(x)\varphi _n(y)\) converges to 0 absolutely and uniformly for \((x,y)\in D\times D\).

Proof

By (1.2), \(|\varphi _n(x)\varphi _n(y)|\le c_\epsilon e^{2\lambda _n\epsilon /3}\) for every \(\epsilon >0\) and \(x,y\in D\). Hence, for \(0<\epsilon <t\),

$$\begin{aligned} \begin{aligned} \sum _{n=2}^\infty \exp (-(\lambda _n-\lambda _1)t) |\varphi _n(x)\varphi _n(y)| \le c_\epsilon e^{2\lambda _1\epsilon /3}\sum _{n=2}^\infty \exp \left( -(\lambda _n-\lambda _1)\left( t-\frac{2}{3}\epsilon \right) \right) <+\infty ,\\ \end{aligned} \end{aligned}$$

where the last series is finite by Theorem 1.1. Observing that \(\displaystyle \exp (-(\lambda _n-\lambda _1)(t-\frac{2}{3}\epsilon ))\downarrow 0\) as \(t\rightarrow \infty \) for each \(n\ge 2\) and applying the dominated convergence theorem, we see that

$$\begin{aligned} \begin{aligned} \lim _{t\rightarrow \infty }&\sup _{x,y\in D}\left\{ \sum _{n=2}^\infty \exp (-(\lambda _n-\lambda _1)t) |\varphi _n(x)\varphi _n(y)|\right\} \\&\le c_\epsilon e^{2\lambda _1\epsilon /3}\lim _{t\rightarrow \infty }\sum _{n=2}^\infty \exp \left( -(\lambda _n-\lambda _1)\left( t-\frac{2}{3}\epsilon \right) \right) =0.\\ \end{aligned} \end{aligned}$$

\(\square \)

Proposition 3.1

Given \(\nu \in \mathscr {P}_1(D) \), for any \(t>0\),

$$\begin{aligned} \mathbb {P}_{\nu }(\tau >t)=C_{\nu }(t)e^{-\lambda _1t}, \end{aligned}$$
(3.5)

where \(C_{\nu }(t):(0,+\infty )\rightarrow (0,+\infty )\) is a continuous function such that

$$\begin{aligned} \lim _{t\rightarrow \infty }C_{\nu }(t)=\int \varphi _1(x)\nu (\mathrm{d}x)\int \varphi _1(y)\mathrm{d}y>0 \quad \text {and}\quad \lim _{t\rightarrow 0}C_{\nu }(t)=1. \end{aligned}$$

Proof

It follows from Theorem 1.1 that

$$\begin{aligned} \begin{aligned}&\mathbb {P}_{\nu }(\tau >t)=\int \int p^D(t;x,y)\mathrm{d}y\nu (\mathrm{d}x)\\&\quad =\int \int \sum _{n=1}^\infty \exp (-\lambda _nt) \varphi _n(x)\varphi _n(y)\mathrm{d}y\nu (\mathrm{d}x)\\&\quad =e^{-\lambda _1t}\left[ \int \varphi _1(x)\nu (\mathrm{d}x)\int \varphi _1(y)\mathrm{d}y+\sum _{n=2}^\infty e^{-(\lambda _n-\lambda _1)t} \int \varphi _n(x)\nu (\mathrm{d}x)\int \varphi _n(y)\mathrm{d}y\right] . \end{aligned} \end{aligned}$$
(3.6)

Thus if we define

$$\begin{aligned} C_{\nu }(t)=\int \varphi _1(x)\nu (\mathrm{d}x)\int \varphi _1(y)\mathrm{d}y+\sum _{n=2}^\infty e^{-(\lambda _n-\lambda _1)t} \int \varphi _n(x)\nu (\mathrm{d}x)\int \varphi _n(y)\mathrm{d}y, \end{aligned}$$
(3.7)

then \(C_{\nu }(t)>0\) since \(\mathbb {P}_{\nu }(\tau>t)>0\). The continuity of \(C_{\nu }(t)\) on \((0,+\infty )\) is guaranteed by Theorem 1.1, and Lemma 3.1 gives

$$\begin{aligned} \lim _{t\rightarrow \infty }C_\nu (t)=\int \varphi _1(x)\nu (\hbox {d}x)\int \varphi _1(y)\hbox {d}y>0. \end{aligned}$$

By the right-continuity of the paths of \(\{X_t\}_{t\ge 0}\) and since D is open, we have that

$$\begin{aligned} \lim _{t\rightarrow 0}\mathbb {P}_x(\tau >t)=1-\lim _{t\rightarrow 0}\mathbb {P}_x(\tau \le t)=1, \end{aligned}$$

which gives that

$$\begin{aligned} \lim _{t\rightarrow 0}\mathbb {P}_{\nu }(\tau >t)=1\quad \text {and}\quad \lim _{t\rightarrow 0}C_{\nu }(t)=1. \end{aligned}$$

\(\square \)
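Proposition 3.1 can be probed by simulation. The following Monte Carlo sketch, for the Brownian case \(\alpha =2\) on \(D=(-\pi /2,\pi /2)\) with \(\nu =\mu _1\) (the example at the end of the paper), estimates \(C_\nu (t)=e^{\lambda _1t}\mathbb {P}_\nu (\tau >t)\) at two times and compares with the limit \(\int \varphi _1\mathrm{d}\nu \int \varphi _1\mathrm{d}y=8/\pi ^2\) (for the normalized \(\varphi _1\)); the step size and path count are ad hoc, and the Euler scheme slightly overestimates survival, hence the loose tolerances.

```python
import numpy as np

# Killed Brownian motion on D = (-pi/2, pi/2), lambda_1 = 1/2,
# started from nu = mu_1 (uniform on D); limit of C_nu(t) is 8/pi^2.
rng = np.random.default_rng(0)
n_paths, dt, lam1 = 30000, 1e-3, 0.5
x = rng.uniform(-np.pi / 2, np.pi / 2, n_paths)
alive = np.ones(n_paths, dtype=bool)
C = {}                                   # t -> estimate of C_nu(t)
t = 0.0
while t < 3.0 - 1e-9:
    x[alive] += np.sqrt(dt) * rng.standard_normal(alive.sum())
    alive &= np.abs(x) < np.pi / 2       # crossing the boundary kills the path
    t += dt
    for tc in (2.0, 3.0):
        if abs(t - tc) < dt / 2:
            C[tc] = np.exp(lam1 * tc) * alive.mean()

limit = 8 / np.pi**2                     # = int phi_1 d(nu) * int phi_1 dy
assert abs(C[2.0] - limit) < 0.1 and abs(C[3.0] - limit) < 0.1
assert abs(C[3.0] - C[2.0]) < 0.05       # C_nu(t) has essentially stabilized
```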

As an easy consequence, we have the following conditional exponential convergence for \(\frac{1}{t}\int _0^tV(X_s)\mathrm{d}s\).

Corollary 3.1

Under the same hypotheses on V and \(\nu \) as in Theorem 3.1, for any \(t>0\) and \(b>0\), we have that

$$\begin{aligned} P_\nu \left[ \left. \frac{1}{t}\int _0^tV(X_s)\mathrm{d}s-a>b\right| \tau >t\right]\le & {} \frac{\left\| \frac{\mathrm{d}\nu }{\mathrm{d}\mu _1}\right\| _2\exp [-tI_V(a+b)]}{C_{\nu }(t)e^{-\lambda _1t}}, \end{aligned}$$
(3.8)
$$\begin{aligned} P_\nu \left[ \left. \frac{1}{t}\int _0^tV(X_s)\mathrm{d}s-a<-b\right| \tau >t\right]\le & {} \frac{\left\| \frac{\mathrm{d}\nu }{\mathrm{d}\mu _1}\right\| _2\exp [-tI_V(a-b)]}{C_{\nu }(t)e^{-\lambda _1t}}. \end{aligned}$$
(3.9)

Furthermore,

$$\begin{aligned} \limsup _{t\rightarrow \infty }\frac{1}{t}\log P_\nu \left[ \left. \frac{1}{t}\int _0^tV(X_s)\mathrm{d}s-a>b\right| \tau >t\right]\le & {} \lambda _1-I_V(a+b)<0, \end{aligned}$$
(3.10)
$$\begin{aligned} \limsup _{t\rightarrow \infty }\frac{1}{t}\log P_\nu \left[ \left. \frac{1}{t}\int _0^tV(X_s)\mathrm{d}s-a<-b\right| \tau >t\right]\le & {} \lambda _1-I_V(a-b)<0. \end{aligned}$$
(3.11)

Proof

A direct application of Theorem 3.1 and Proposition 3.1. \(\square \)

The following is the quasi-ergodic theorem.

Theorem 3.2

Let \(V\in J\cap L^1(\mu _0)\) be as in Theorem 2.1, and let \(a=\int _DV\mathrm{d}\mu _0\). Then for \(\nu \in \mathscr {P}_1(D)\) with \(\nu \ll \mu _1\) and \(\left\| \frac{\mathrm{d}\nu }{\mathrm{d}\mu _1}\right\| _2<\infty \),

$$\begin{aligned} \lim _{t\rightarrow \infty }E_\nu \left[ \left| \frac{1}{t}\int _0^tV(X_s)\mathrm{d}s-a\right| |\tau >t\right] =0. \end{aligned}$$

Proof

Given \(\epsilon >0\), let \(\gamma >0\) be as in Theorem 2.1. Denote

$$\begin{aligned} \Delta _t=\left| \frac{1}{t}\int _0^tV(X_s)\mathrm{d}s-a\right| . \end{aligned}$$

Then from Theorems 2.1 and 3.1, we see that

$$\begin{aligned} \begin{aligned}&E_\nu \left[ \left| \frac{1}{t}\int _0^tV(X_s)\mathrm{d}s-a\right| |\tau>t\right] \\&\quad =E_\nu \left[ \Delta _t;\Delta _t\le 2\epsilon | \tau>t\right] +E_\nu \left[ \Delta _t;\Delta _t> 2\epsilon |\tau>t\right] \\&\quad \le 2\epsilon +E_\nu \left[ \Delta _t;\Delta _t> 2\epsilon ,\ \tau>t\right] P_\nu ^{-1}(\tau>t)\\&\quad \le 2\epsilon +\sum _{k=2}^\infty E_\nu \left[ \Delta _t;\ k\epsilon \le \Delta _t <(k+1)\epsilon ,\ \tau>t\right] P_\nu ^{-1}(\tau>t)\\&\quad \le 2\epsilon +\sum _{k=2}^\infty (k+1) \epsilon P_\nu (\Delta _t\ge k\epsilon ,\tau>t)P_\nu ^{-1}(\tau>t)\\&\quad \le 2\epsilon +2\epsilon \left\| \frac{\mathrm{d}\nu }{\mathrm{d}\mu _1}\right\| _2 \sum _{k=2}^\infty (k+1)e^{-\lambda _1t}e^{-\gamma (k-1)\epsilon t}P_\nu ^{-1}(\tau >t). \end{aligned} \end{aligned}$$

Letting \(t\rightarrow \infty \) and applying Proposition 3.1, we obtain that

$$\begin{aligned} \limsup _{t\rightarrow \infty }E_\nu \left[ \left| \frac{1}{t}\int _0^tV(X_s)\mathrm{d}s-a\right| |\tau >t\right] \le 2\epsilon , \end{aligned}$$

completing the proof. \(\square \)

Example

Let \(d=1\) and \(D=(-\pi /2,\pi /2)\). Consider the Brownian motion on D killed once it reaches \(\partial D=\{-\pi /2,\pi /2\}\). Then assumptions (A\(_1\)) and (A\(_2\)) are fulfilled with \(\lambda _1=1/2\) and

$$\begin{aligned} \varphi _1(x)=\sin \left( x+\pi /2\right) , \quad -\pi /2<x<\pi /2. \end{aligned}$$

Therefore,

$$\begin{aligned} d\mu _0=c\sin ^2\left( x+\pi /2\right) \hbox {d}x. \end{aligned}$$

Here \(V\in J\) if and only if

$$\begin{aligned} \lim _{s\downarrow 0}\left[ \sup _{x\in (-\pi /2,\pi /2)}\int _{|u|<s}|uV(u+x)|\hbox {d}u \right] =0. \end{aligned}$$

Thus, for \(V\in J\cap L^1(D,\mathrm{d}x)\) and \(a=\int _DV\mathrm{d}\mu _0\),

$$\begin{aligned} \lim _{t\rightarrow \infty }E_\mu \left[ \left| \frac{1}{t}\int _0^tV(X_s)\hbox {d}s-a\right| |\tau _D>t\right] =0, \end{aligned}$$

provided that \(\hbox {d}\mu /\hbox {d}\mu _1\in L^2(D,\hbox {d}x)\).
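A Monte Carlo sketch of this example, with the illustrative choice \(V(x)=x^2\) (so that \(a=\int x^2\mathrm{d}\mu _0=\pi ^2/12-1/2\approx 0.32\)) and \(\mu =\mu _1\): the conditional Cesàro average should be close to a for moderately large t. The tolerances are loose because the convergence holds only as \(t\rightarrow \infty \) and the Euler scheme slightly overestimates survival.

```python
import numpy as np

# Brownian motion killed outside D = (-pi/2, pi/2); V(x) = x^2,
# a = int V d(mu_0) = pi^2/12 - 1/2; start from mu = mu_1 (uniform).
rng = np.random.default_rng(2)
n_paths, dt, t_end = 30000, 1e-3, 4.0
x = rng.uniform(-np.pi / 2, np.pi / 2, n_paths)
alive = np.ones(n_paths, dtype=bool)
integral = np.zeros(n_paths)             # running int_0^t V(X_s) ds per path
for _ in range(int(round(t_end / dt))):
    integral[alive] += x[alive] ** 2 * dt
    x[alive] += np.sqrt(dt) * rng.standard_normal(alive.sum())
    alive &= np.abs(x) < np.pi / 2       # kill paths that left D
a = np.pi**2 / 12 - 0.5
cesaro = integral[alive] / t_end         # Cesaro averages on {tau_D > t}
assert alive.sum() > 1000                # enough survivors to average over
assert abs(cesaro.mean() - a) < 0.15
```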