Abstract
In this note, we study quasi-ergodic behavior for killed Markov processes. For symmetric stable processes, we derive a conditional deviation inequality for \(\int _0^tV(X_s)\,\hbox {d}s\) for certain (unbounded) functions V. We then apply it to prove a quasi-\(L^1\)-ergodic theorem for the killed process.
1 Introduction
Let \(\{X_t,\ t\ge 0\}\) be a symmetric \(\alpha \)-stable process on \(\mathbb {R}^d\) with \(\alpha \in (0, 2]\). The underlying filtered probability space is \((\Omega ,\{\mathscr {F}_t\},\mathbb {P})\). Denote by \(\{\mathbb {P}_x,\ x\in \mathbb {R}^d\}\) the corresponding Markov family and by \(\mathbb {E}_x\) the expectation under \(\mathbb {P}_x\). Let D be a domain in \(\mathbb {R}^d\) with finite Lebesgue measure. Let
$$\begin{aligned} \tau _D=\inf \{t>0:\ X_t\notin D\} \end{aligned}$$
be the first exit time of D. Adjoining an extra point, say \(\Delta \), to D, we define the process that is absorbed or killed at \(\Delta \) by
$$\begin{aligned} X^D_t={\left\{ \begin{array}{ll} X_t, &{} t<\tau _D,\\ \Delta , &{} t\ge \tau _D. \end{array}\right. } \end{aligned}$$
For such a process, an interesting problem is to study its long-term behavior conditional on \(\{\tau _D>t\}\). A well-studied object is the quasi-stationary distribution, i.e., a probability distribution \(\nu \) satisfying
$$\begin{aligned} P_\nu (X_t\in A\mid \tau _D>t)=\nu (A)\quad \text{ for } \text{ all } t>0 \text{ and } \text{ Borel } A\subset D, \end{aligned}$$
where \(P_\nu \) is the probability law of the process with initial distribution \(\nu \); see [5, 7,8,9,10]. Under some mild conditions, it is also known that, starting from any initial distribution \(\mu \),
Further investigation revealed that, if we consider the Cesàro average \(\frac{1}{t}\int _0^tf(X_s)ds\) of a functional f, the limiting distribution is substantially different from the quasi-stationary distribution; see [1,2,3] and the references cited therein. We call such a distribution a quasi-ergodic distribution. Recall that for a conservative Markov process \(X=\{X_t,\ t\ge 0\}\), under some irreducibility conditions, if \(\nu \) is the stationary distribution, then starting from any initial distribution \(\mu \), both \(P_\mu (X_t\in \cdot )\) and the above Cesàro average converge to \(\nu \). This difference deserves further investigation. In the present paper, in connection with the quasi-ergodic distribution, we study \(L^1\)-ergodicity by deriving a certain deviation inequality.

To state the results precisely, we need some notation. If we denote the transition function of \(X^D\) by \(P^D (t;\cdot ,\cdot )\), then it has a density \(p^D (t;\cdot ,\cdot )\) with respect to the Lebesgue measure on \(\mathbb {R}^d\). The semigroup and generator of \(X_t^D \) are denoted by \(P^D=\{P^D_t,\ t\ge 0\}\) and \(\mathcal {L}^D\), respectively. Since the Lebesgue measure of D is finite, \(\mathcal {L}^D\) has discrete spectrum. Let \(0<\lambda _1\le \lambda _2\le \cdots \), with \(\lambda _n\rightarrow \infty \), be the eigenvalues of \(-\mathcal {L}^D\), repeated according to their multiplicity, and let \(\{\varphi _n, n\ge 1\}\) be the corresponding eigenfunctions, normalized so that they form an orthonormal basis of \(L^2(D)\). We impose the following assumptions:
- (A\(_1\)): \(\lambda _1<\lambda _2\), \(\lambda _1\) is simple and \(\varphi _1>0\) on D;
- (A\(_2\)): \(\{P^D_t,\ t\ge 0\}\) is ultracontractive, i.e., for each \(t>0\), \(P^D_t\) is bounded from \(L^2(D)\) to \(L^\infty (D)\).
The following facts are well known:
Theorem 1.1
(Davies [6]) Assume (A\(_1\)) and (A\(_2\)). Then
- (i) The \(\varphi _n\)'s are continuous, and there is a nonnegative function \(c_t\), decreasing in t, such that
$$\begin{aligned} \varphi _n^2(x)\le c_te^{2\lambda _nt/3},\quad \forall n\ge 1,\ t>0 \text{ and } x\in D. \end{aligned}$$(1.2)
- (ii) For each \(t>0\),
$$\begin{aligned} p^D(t;x,y)=\sum _{n=1}^\infty \exp (-\lambda _nt) \varphi _n(x)\varphi _n(y), \end{aligned}$$(1.3)
and the series converges uniformly on \(D\times D\).
It immediately follows that
As a consequence of the theorem, we have the following corollary, which gives a probabilistic interpretation of \(\lambda _1\):
Corollary 1.1
Assume (A\(_1\)) and (A\(_2\)). Then for \(x\in D,\)
and
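To make the corollary concrete, here is a small numerical sketch (our illustration, not part of the paper's argument): for Brownian motion killed outside \(D=(0,\pi )\) (the case \(\alpha =2\), generator \(\tfrac{1}{2}\tfrac{d^2}{dx^2}\)), the Dirichlet eigenpairs are \(\lambda _n=n^2/2\) and \(\varphi _n(x)=\sqrt{2/\pi }\sin (nx)\), and the expansion (1.3) gives the survival probability in closed form.

```python
import numpy as np

lam1 = 0.5  # first Dirichlet eigenvalue of -(1/2) d^2/dx^2 on (0, pi)

def survival(x, t, N=200):
    """P_x(tau_D > t) = sum_n exp(-lam_n t) phi_n(x) int_D phi_n, from (1.3)."""
    n = np.arange(1, N + 1)
    phi_n_x = np.sqrt(2 / np.pi) * np.sin(n * x)
    int_phi = np.sqrt(2 / np.pi) * (1 - np.cos(n * np.pi)) / n  # 0 for even n
    return np.sum(np.exp(-n**2 * t / 2) * phi_n_x * int_phi)

x, t = np.pi / 2, 10.0
print(np.exp(lam1 * t) * survival(x, t))  # -> phi_1(x) int_D phi_1 = 4/pi ~ 1.2732
print(-np.log(survival(x, t)) / t)        # approaches lam_1 = 1/2 as t grows
```

The first limit is the refined asymptotics \(e^{\lambda _1 t}\mathbb {P}_x(\tau _D>t)\rightarrow \varphi _1(x)\int _D\varphi _1\), and taking logarithms recovers the exponential rate \(\lambda _1\).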
More significantly, from Theorem 1.1, the unique quasi-stationary distribution and the unique quasi-ergodic distribution of the process can be directly constructed. More explicitly, we have the following
Theorem 1.2
Assume (\(\hbox {A}_1\)) and (\(\hbox {A}_2\)). Define two probability measures \(\nu \) and \(\mu _0\) on D by
$$\begin{aligned} \nu (\mathrm{d}x)=c_1\varphi _1(x)\,\mathrm{d}x\quad \text{ and }\quad \mu _0(\mathrm{d}x)=c_2\varphi _1^2(x)\,\mathrm{d}x, \end{aligned}$$
respectively, where \(c_1\) and \(c_2\) are the respective normalizing constants. Then for any \(x\in D\) and any bounded measurable function f on D,
and
The proof is a direct application of assumptions (A\(_1\)), (A\(_2\)) and of (1.3).
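As a hedged numerical check of the quasi-stationary part of Theorem 1.2 (our sketch, again in the explicit model of Brownian motion killed outside \(D=(0,\pi )\), with \(\lambda _n=n^2/2\), \(\varphi _n(x)=\sqrt{2/\pi }\sin (nx)\)): the density of the conditioned law \(\mathbb {P}_x(X_t\in \mathrm{d}y\mid \tau _D>t)\) should approach \(c_1\varphi _1(y)=\tfrac{1}{2}\sin y\).

```python
import numpy as np

# Illustrative model (assumption, not from the paper): killed BM on D = (0, pi),
# lam_n = n^2/2, phi_n(x) = sqrt(2/pi) sin(n x).  The conditioned law
# P_x(X_t in dy | tau_D > t) should approach nu(dy) = c_1 phi_1(y) dy = (sin y)/2 dy.
N = 200
n = np.arange(1, N + 1)

def pD(t, x, y):
    """Heat kernel of the killed process via the eigenexpansion (1.3)."""
    return np.sum(np.exp(-n**2 * t / 2) * (2 / np.pi) * np.sin(n * x) * np.sin(n * y))

def survival(t, x):
    """P_x(tau_D > t); the integrals int_D phi_n are explicit here."""
    int_phi = np.sqrt(2 / np.pi) * (1 - np.cos(n * np.pi)) / n
    return np.sum(np.exp(-n**2 * t / 2) * np.sqrt(2 / np.pi) * np.sin(n * x) * int_phi)

t, x = 8.0, 1.0
ys = np.linspace(0.3, np.pi - 0.3, 7)
cond = np.array([pD(t, x, y) for y in ys]) / survival(t, x)
print(np.max(np.abs(cond - np.sin(ys) / 2)))  # tiny: error is O(e^{-(lam_2-lam_1) t})
```

The discrepancy decays at the spectral-gap rate \(e^{-(\lambda _2-\lambda _1)t}\), which is what drives the proof via (1.3).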
The motivation of the present paper comes from strengthening the convergence (1.7) in the sense that
for not necessarily bounded f. The approach we adopt is a deviation inequality. The deviation inequality we derive concerns functions in a Kato class J, defined as follows (see [4, p. 62] for more details):
Let V be a measurable function from \(\mathbb {R}^d\) to \([-\infty ,+\infty ]\). Then \(V\in J\) if and only if
$$\begin{aligned} \lim _{t\downarrow 0}\sup _{x\in \mathbb {R}^d}\mathbb {E}_x\int _0^t|V(X_s)|\,\mathrm{d}s=0. \end{aligned}$$
In this paper, V is defined only on the domain D, but we extend it to \(\mathbb {R}^d\) by setting it to be 0 on \(\mathbb {R}^d\setminus D\).
For a function V in the Kato class J, we can define the Feynman–Kac semigroup \(\{P_t^{V},t\ge 0\}\) on \(L^2(D)\) by
$$\begin{aligned} P_t^Vf(x)=\mathbb {E}_x\left[ \exp \left( \int _0^tV(X_s)\,\mathrm{d}s\right) f(X_t);\ t<\tau _D\right] . \end{aligned}$$
The generator \(A_V\) of \(P^V\) is \(\mathcal {L}^D+V\). The domain of the generator is denoted by \(\mathcal {D}(A_V)\).
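For intuition about the Feynman–Kac semigroup and its generator, here is a rough numerical sketch (entirely our construction: the finite-difference discretization, the choice \(V(x)=\cos x\), and killed Brownian motion on \(D=(0,\pi )\) are all illustrative assumptions). It checks that \(\frac{1}{t}\log \Vert P_t^V\mathbf {1}\Vert _2\) tends to the top eigenvalue of the discretized \(\mathcal {L}^D+V\).

```python
import numpy as np

# Finite-difference model of L^D + V on D = (0, pi), Dirichlet boundary,
# L^D = (1/2) d^2/dx^2 and V(x) = cos(x) (illustrative choices).
M = 400
h = np.pi / (M + 1)
x = h * np.arange(1, M + 1)
A = np.zeros((M, M))
idx = np.arange(M)
A[idx, idx] = -1.0 / h**2 + np.cos(x)          # diagonal of (1/2)*D2 + V
A[idx[:-1], idx[:-1] + 1] = 0.5 / h**2         # off-diagonals of (1/2)*D2
A[idx[:-1] + 1, idx[:-1]] = 0.5 / h**2
w, U = np.linalg.eigh(A)                        # A is symmetric
top = w[-1]                                     # top eigenvalue of L^D + V

def PtV_one(t):
    """P_t^V applied to the constant function 1, via the eigendecomposition."""
    one = np.ones(M)
    return U @ (np.exp(w * t) * (U.T @ one))

for t in (5.0, 20.0):
    rate = np.log(np.linalg.norm(PtV_one(t)) * np.sqrt(h)) / t  # sqrt(h): L^2 norm
    print(rate, top)   # rate approaches top as t grows
```

The exponential growth rate of the semigroup is exactly the top of the spectrum of the generator; this is the quantity whose Legendre transform governs the deviation inequality below.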
2 The governing functional
In this section, we introduce the functional that will govern the deviation inequality we derive. We use the Poincaré inequality to obtain an upper bound on the exponential growth rate of the Feynman–Kac semigroup. The Legendre transform of this bound will be shown to be exactly the functional governing the deviation inequality. By the Fenchel–Legendre theorem, this functional can be expressed in a more explicit form. Following a standard variational approach, we give some sufficient conditions for the functional to attain a unique minimum.
For \(\lambda \in \mathbb {R}\), we define
Then
We further express \(H_V\) as the Legendre transform of a certain function \(J_V\) on \(\mathbb {R}\):
where
As stated in the Introduction, the probability measure \(\mu _0\) defined by
$$\begin{aligned} \mu _0(\mathrm{d}x)=c_2\varphi _1^2(x)\,\mathrm{d}x \end{aligned}$$
is crucial for our study: it gives the conditional limit of the Cesàro averages.
Lemma 2.1
\(J_V:\mathbb {R}\rightarrow [\lambda _1,\infty ]\) is a convex function and, provided that \(V\in L^1(\mu _0)\), attains its minimum \(\lambda _1\) at the point \(a=\int _D V\,\mathrm{d}\mu _0\).
Proof
The convexity of \(J_V\) can be verified directly from its definition. The last statement is just a consequence of assumptions (\(\hbox {A}_1\)) and (\(\hbox {A}_2\)). \(\square \)
By the convexity of \(J_V\), \(\{J_V<+\infty \}^\circ \) is an open interval \((l_-,l_+)\), where \(-\infty \le l_-< l_+\le +\infty \). We now define the function \(I_V\) as the lower semicontinuous version of \(J_V\):
or more precisely,
Corollary
Under the assumptions of the above lemma, \(I_V\) is convex and lower semicontinuous, and it attains its minimum at the unique point a.
In the next section, we will see that \(I_V\) is actually the governing functional for the deviation inequality. Now we study some further properties of \(I_V\).
Proposition 2.1
-
(i)
For any \(x \in \mathbb {R}\), we have that
$$\begin{aligned} I_V(x)=\sup _{\lambda \in \mathbb {R}}\{\lambda x-H_V(\lambda )\}. \end{aligned}$$(2.8) -
(ii)
\(H_V\) is a convex function, and for any \(b>a\)
$$\begin{aligned} I_V(b)=\sup _{\lambda \ge 0}\{\lambda b-H_V(\lambda )\} \end{aligned}$$(2.9)and for \(b<a\)
$$\begin{aligned} I_V(b)=\sup _{\lambda \le 0}\{\lambda b-H_V(\lambda )\}. \end{aligned}$$(2.10)
Proof
-
(i)
The result is due to the celebrated Fenchel–Legendre theorem.
-
(ii)
We only need to prove (2.9). By the convexity of \(J_V\) (Lemma 2.1) and (2.3), we see that \(H_V\) is convex.
Taking \(z=a\) in (2.3), we see that
Thus for \(\lambda <0,\)
it follows that
\(\square \)
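The variational identities (2.8)–(2.10) are easy to sanity-check numerically. In the following sketch, \(H(\lambda )=\lambda ^2/2-\lambda _1\) is a hypothetical convex function of our own choosing (it is not the paper's \(H_V\), but it satisfies \(H(0)=-\lambda _1\)); its Legendre transform is \(I(b)=b^2/2+\lambda _1\), minimized at \(a=0\), and the suprema restricted to \(\lambda \ge 0\) (for \(b>a\)) or \(\lambda \le 0\) (for \(b<a\)) agree with the full supremum.

```python
import numpy as np

lam1 = 0.5
H = lambda lam: lam**2 / 2 - lam1   # hypothetical convex H with H(0) = -lam1
grid = np.linspace(-50, 50, 200001)

def legendre(b, lams):
    """sup over lams of (lam * b - H(lam)), cf. (2.8)-(2.10)."""
    return np.max(lams * b - H(lams))

for b in (1.0, 0.3, -0.7):
    full = legendre(b, grid)                                            # (2.8)
    half = legendre(b, grid[grid >= 0] if b > 0 else grid[grid <= 0])   # (2.9)/(2.10)
    print(b, full, half, b**2 / 2 + lam1)   # the three values coincide
```

The point of (2.9)–(2.10) is precisely this: past the minimizer a, the optimizing \(\lambda \) has a definite sign, so the one-sided supremum loses nothing.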
The next results are important for the quasi-ergodicity to be discussed in §3.
Theorem 2.1
If \(V\in J\cap L^1(\mu _0)\), then for any \(\epsilon >0\), there exists a \(\gamma >0\), such that \(\forall b\ge a\),
and \(\forall b\le a\),
Proof
We only need to prove the first inequality. We first note that according to the Corollary, \(I_V\) attains its minimum at the unique point a. It is easy to see that \(H_V\) is lower semicontinuous and \(H_V(0)=-\lambda _1\). Thus
From this, Proposition 2.1 and the assumption, we see that there is a \(\gamma >0\) such that
Now it follows that for each \(b\ge a\),
\(\square \)
3 The deviation inequality and quasi-ergodicity
In this section, we first derive a deviation inequality for \(\frac{1}{t}\int _0^tV(X_s)\,\hbox {d}s\), governed by the functional \(I_V\), \(V\in J\). Then we apply this inequality to quasi-ergodicity.
Let |D| denote the Lebesgue measure of D. Define a measure \(\mu _1\) by
The main result of this section is the following
Theorem 3.1
Let \(V \in J\cap L^1(\mu _0)\), and let \(\nu \in \mathscr {P}_1(D)\) satisfy \(\nu \ll \mu _1\) with \(\left\| \frac{\mathrm{d}\nu }{\mathrm{d}\mu _1}\right\| _2<\infty \). Then for any \(t>0\) and \(b>0\), we have that:
Proof
For any numbers \(t,\ b>0\), by Chebyshev's inequality and (2.2),
It follows from Proposition 2.1 that
The first assertion follows. The second one follows by replacing V with \(-V\). \(\square \)
Remark 3.1
The theorem is motivated by the paper [11], which deals with general conservative Markov processes.
The next lemma studies the absorption probability. The first Dirichlet eigenvalue characterizes the rate of absorption for the killed process.
Lemma 3.1
As \(t\rightarrow \infty \), \(\sum _{n=2}^\infty \exp (-(\lambda _n-\lambda _1)t) \varphi _n(x)\varphi _n(y)\) converges to 0 absolutely and uniformly for \((x,y)\in D\times D\).
Proof
From Theorem 1.1, for \(0<\epsilon <t\) and \(x,y\in D\),
Observing that \(\displaystyle \lim _{t\rightarrow \infty }\exp (-\frac{1}{3}(\lambda _n-\lambda _1)(t-\epsilon ))=0\) monotonically and applying the dominated convergence theorem, we see that
\(\square \)
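In the explicit model used in our earlier sketches (killed Brownian motion on \(D=(0,\pi )\), an assumption for illustration only), the uniform convergence in Lemma 3.1 is easy to quantify: \(|\varphi _n|\le \sqrt{2/\pi }\), so the tail sum is dominated by \(\frac{2}{\pi }\sum _{n\ge 2}e^{-(n^2-1)t/2}\).

```python
import numpy as np

def tail_bound(t, N=1000):
    """(2/pi) * sum_{n>=2} exp(-(n^2 - 1) t / 2), dominating the tail in Lemma 3.1."""
    n = np.arange(2, N + 1)
    return (2 / np.pi) * np.sum(np.exp(-(n**2 - 1) * t / 2))

for t in (1.0, 2.0, 4.0):
    print(t, tail_bound(t))  # decreases to 0 as t grows
```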
Proposition 3.1
Given \(\nu \in \mathscr {P}_1(D) \), for any number \(t>0\)
where \(C_{\nu }:(0,+\infty )\rightarrow (0,+\infty )\) is a continuous function such that
Proof
It follows from Theorem 1.1 that
Thus if we define
then \(C_{\nu }(t)>0\) since \(\mathbb {P}_x(\tau _D>t)>0\). The continuity of \(C_{\nu }(t)\) on \((0,+\infty )\) is guaranteed by Theorem 1.1, and Lemma 3.1 gives
By the right continuity of the paths of \(\{X_t\}_{t\ge 0}\), we have that
which gives that
\(\square \)
As an easy consequence, we have the following conditional exponential convergence for \(\frac{1}{t}\int _0^tV(X_s)ds\).
Corollary 3.1
Under the same hypotheses on V and \(\nu \) as in Theorem 3.1, for any numbers \(t>0\) and \(b>0\), we have that
Furthermore, adding the hypotheses on V and \(\nu \) in Theorem 3.1,
Proof
A direct application of Theorem 3.1 and Proposition 3.1. \(\square \)
The following is the quasi-ergodic theorem.
Theorem 3.2
Let \(V\in J\) be as in Theorem 2.1, and let \(a=\int V\mathrm{d}\mu _0\). Then for \(\nu \in \mathscr {P}_1(D)\) with \(\nu \ll \mu _1\) and \(\left\| \frac{\mathrm{d}\nu }{\mathrm{d}\mu _1}\right\| _2<\infty \),
Proof
Given \(\epsilon >0\), let \(\gamma >0\) be as in Theorem 2.1. Denote
Then from Theorems 3.1 and 1.2, we see that
Letting \(t\rightarrow \infty \) and applying Proposition 3.1, we obtain that
completing the proof. \(\square \)
Example
Let \(d=1\) and \(D=(-\pi /2,\pi /2)\). Consider Brownian motion on D, killed once it reaches \(\partial D=\{-\pi /2,\pi /2\}\). Then assumptions (A\(_1\)) and (A\(_2\)) are fulfilled with \(\lambda _1=1/2\) and
$$\begin{aligned} \varphi _1(x)=\sqrt{\tfrac{2}{\pi }}\cos x. \end{aligned}$$
Therefore,
\(V\in J\) if
Thus for \(V\in J\cap L^1(D,dx)\)
provided that \(\hbox {d}\mu /\hbox {d}\mu _1\in L^2(D,dx)\).
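The limit in this example can be checked numerically. The following sketch is our own construction under the example's assumptions (killed Brownian motion, here shifted from \((-\pi /2,\pi /2)\) to \((0,\pi )\) so that \(\varphi _n(y)=\sqrt{2/\pi }\sin (ny)\), \(\lambda _n=n^2/2\)). It takes the illustrative choice \(V(x)=x^2\), for which the quasi-ergodic density \(\varphi _1^2\) gives \(\int V\,\mathrm{d}\mu _0=\int _{-\pi /2}^{\pi /2}x^2\,\tfrac{2}{\pi }\cos ^2x\,\mathrm{d}x=\pi ^2/12-1/2\), and evaluates the conditioned Cesàro average via the eigenexpansion.

```python
import numpy as np

# Illustrative computation (ours): killed BM shifted to (0, pi), lam_n = n^2/2,
# phi_n(y) = sqrt(2/pi) sin(n y).  In shifted coordinates V(x) = x^2 becomes
# Vt(y) = (y - pi/2)^2, and int V dmu_0 = pi^2/12 - 1/2.
N = 40
lam = np.arange(1, N + 1) ** 2 / 2.0
y = np.linspace(0, np.pi, 4001)
dy = y[1] - y[0]
Phi = np.sqrt(2 / np.pi) * np.sin(np.outer(np.arange(1, N + 1), y))
Vt = (y - np.pi / 2) ** 2
Vmn = (Phi * Vt) @ Phi.T * dy       # matrix elements <phi_m, V phi_n>
int_phi = Phi.sum(axis=1) * dy      # int_D phi_n

def cond_average(t, x=np.pi / 2):
    """E_x[(1/t) int_0^t V(X_s) ds | tau_D > t] via the eigenexpansion."""
    phix = np.sqrt(2 / np.pi) * np.sin(np.arange(1, N + 1) * x)
    L_m, L_n = np.meshgrid(lam, lam, indexing="ij")
    with np.errstate(divide="ignore", invalid="ignore"):
        # int_0^t exp(-lam_m s - lam_n (t - s)) ds for m != n
        T = (np.exp(-L_n * t) - np.exp(-L_m * t)) / (L_m - L_n)
    T[np.arange(N), np.arange(N)] = t * np.exp(-lam * t)       # m = n case
    numer = phix @ (T * Vmn) @ int_phi                         # E_x[int V; tau_D > t]
    denom = t * np.sum(np.exp(-lam * t) * phix * int_phi)      # t * P_x(tau_D > t)
    return numer / denom

print(cond_average(50.0), np.pi**2 / 12 - 0.5)  # close for large t
```

The convergence is of order 1/t, coming from the off-diagonal terms of the time integral, which is consistent with the rate one expects in the quasi-ergodic theorem.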
References
Breyer, L.A., Roberts, G.O.: A quasi-ergodic theorem for evanescent processes. Stoch. Process. Appl. 84, 177–186 (1999)
Chen, J.W., Jian, S.Q.: Some limit theorems of killed Brownian motion. Sci. China Math. 56(3), 497–514 (2013)
Chen, J.W., Li, H.T., Jian, S.Q.: Some limit theorems for absorbing Markov processes. J. Phys. A Math. Theor. 45, 345003 (2012)
Chung, K.L., Zhao, Z.X.: From Brownian Motion to Schrödinger's Equation. Springer, Berlin (1995)
Darroch, J.N., Seneta, E.: On quasi-stationary distributions in absorbing continuous-time finite Markov chains. J. Appl. Prob. 4, 192–196 (1967)
Davies, E.B.: Heat Kernels and Spectral Theory. Cambridge University Press, Cambridge (1990)
Flaspohler, D.C.: Quasi-stationary distributions for absorbing continuous-time denumerable Markov chains. Ann. Inst. Stat. Math. 26, 351–356 (1973)
Pinsky, R.G.: On the convergence of diffusion processes conditioned to remain in a bounded domain for large time to limiting positive recurrent diffusion processes. Ann. Probab. 18, 363–378 (1985)
van Doorn, E.A., Pollett, P.K.: Quasi-stationary distributions (2011) (preprint)
Vere-Jones, D.: Some limit theorems for evanescent processes. Aust. J. Stat. 11, 67–78 (1969)
Wu, L.: A deviation inequality for non-reversible Markov processes. Ann. Inst. Henri Poincaré Probab. Stat. 36, 435–445 (2000)
Acknowledgements
This work was supported by NSFC Grant 11671226.
Chen, J., Jian, S. A deviation inequality and quasi-ergodicity for absorbing Markov processes. Annali di Matematica 197, 641–650 (2018). https://doi.org/10.1007/s10231-017-0695-7