1 Introduction

Spectral theory is an active research subject, not only in mathematics but also in physics. The discrete spectrum has a special meaning in quantum physics: it represents the discrete energy levels. On the Internet, one may find a large number of publications in the field (more than 50,000 web pages in a scholar search for “discrete spectrum”). From the search, we learn that the theory began in the early 1900s, mainly from the interaction of mathematics and physics, through the work of F. Riesz, D. Hilbert, H. Weyl, J. von Neumann and many others. In particular, the concept of “essential spectrum” used below was first introduced by H. Weyl in 1910. Surprisingly, in a field with such a long history, the known complete results are still rather limited, even in dimension 1. We will review some of the related results case by case below.

This paper deals with the 1-dimensional case only. The results come mainly from three sources: (i) Mao’s criteria [10] in the ergodic case; (ii) the Karlin–McGregor dual technique (cf. [3]); and (iii) an isospectral transform introduced recently by the author and X. Zhang (2014). The last point is essentially different from the known approaches (compare [7, 12]). If the harmonic function is replaced by the ground state (i.e., the eigenfunction corresponding to the principal eigenvalue), then the transform in (iii) is just the \(H\)-transform often used in the study of the spectral gap for Schrödinger operators. Certainly, in practice, it is important to estimate the harmonic function. For this, we introduce some easier algorithms in the discrete context and review two successive approximation schemes in the continuous context.

A large part of the paper (five sections: §2–§6) deals with the discrete space. A typical result of the paper is presented in the next section (Theorem 2.1); its proof is given in §3. Some illustrative examples are also presented in the next section, and their proofs are deferred to §6. The new algorithms are presented in §4 and §5. The continuous analog of the results in the discrete case is presented in the last section (§7) of the paper. Additionally, a powerful application of our approach is illustrated by Corollary 7.9 and Examples 7.10 and 7.11.

2 Main Results in Discrete Case

Given a tridiagonal matrix \(Q^c=\{q_{ij}\}\) on \(E:=\{0, 1, 2, \ldots \}\): \(q_{i,i+1}=b_i>0\,(i\geqslant 0)\), \(q_{i, i-1}=a_i>0\,(i\geqslant 1)\), \(q_{i,i}=-(a_i+b_i+c_i)\), where \(c_i\geqslant 0\,(i\geqslant 0)\), and \(q_{i,j}=0\) for other \(j\ne i\). In probabilistic language, this matrix corresponds to a birth–death process with birth rates \(b_i\), death rates \(a_i\), and killing rates \(c_i\). Corresponding to the matrix \(Q^c\), we have an operator

$$\begin{aligned} \Omega ^c f(k)=b_k(f_{k+1}-f_k)+a_k(f_{k-1}-f_k)-c_k f_k,\qquad k\in E,\;\; a_0:=0. \end{aligned}$$

In what follows, we need two measures \(\mu \) and \(\hat{\nu }\) on \(E\):

$$\begin{aligned} \mu _0=1, \quad \mu _n=\frac{b_0\cdots b_{n-1}}{a_1\cdots a_n},\quad n\geqslant 1; \qquad {\hat{\nu }}_n=\frac{1}{\mu _n b_n},\quad n\geqslant 0. \end{aligned}$$
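As a quick numerical sketch (ours, not part of the paper), these measures are straightforward to compute from the rates; the rates in the example below are illustrative choices only.

```python
def measures(b, a, N):
    """Compute mu_n and nu_hat_n for n = 0..N-1 from rate functions b, a:
    mu_0 = 1, mu_n = (b_0...b_{n-1})/(a_1...a_n), nu_hat_n = 1/(mu_n b_n)."""
    mu = [1.0]
    for n in range(1, N):
        mu.append(mu[-1] * b(n - 1) / a(n))
    nu_hat = [1.0 / (mu[n] * b(n)) for n in range(N)]
    return mu, nu_hat

# Illustrative rates b_n = a_{n+1} = (n+1)^2, for which mu_n = 1 for all n:
mu, nu_hat = measures(lambda n: (n + 1) ** 2, lambda n: float(n ** 2), 10)
```

Here \(\mu _n\equiv 1\) and \(\hat{\nu }_n=(n+1)^{-2}\), so \(\sum _n\mu _n=\infty \) while \(\sum _n\hat{\nu }_n<\infty \).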

Corresponding to the operator \(\Omega ^c\), on \(L^2(\mu )\), there are two quadratic (Dirichlet) forms

$$\begin{aligned} D^c(f)=\sum _{k\geqslant 0}\mu _k \bigg [b_k (f_{k+1}-f_k)^2+c_k f_k^2\bigg ] \end{aligned}$$

either with the maximal domain

$$\begin{aligned} {\fancyscript{D}}_{\max }(D^c)=\{f\in L^2(\mu ): D^c(f)<\infty \} \end{aligned}$$

or with the minimal one \({\fancyscript{D}}_{\min }(D^c)\) which is the smallest closure of

$$\begin{aligned} \{f\in L^2(\mu ): f\text { has a finite support}\} \end{aligned}$$

with respect to the norm \(\Vert \cdot \Vert _D\): \(\Vert f\Vert _D^2= \Vert f\Vert _{L^2(\mu )}^2+ D^c(f)\). The spectrum we are going to study is that of these Dirichlet forms. We say that \(\big (D^c, {\fancyscript{D}}_{\min }(D^c)\big )\) has discrete spectrum (equivalently, the essential spectrum of \(\big (D^c, {\fancyscript{D}}_{\min }(D^c)\big )\), denoted by \(\sigma _\mathrm{ess}\big (\Omega _{\min }^c\big )\), is empty) if its spectrum consists only of isolated eigenvalues of finite multiplicity. For an operator \(L\), we have

$$\begin{aligned} \text {spectrum of } L= \text {discrete part} + \text {essential part}. \end{aligned}$$

Hence the statement “\(L\) has discrete spectrum” is exactly the same as “\(\sigma _\mathrm{ess}(L) =\emptyset \)”. To state our first main result, we need some notation. Define

$$\begin{aligned}&u_i=\frac{a_i}{b_i},\qquad v_i=\frac{c_i}{b_i},\qquad \xi _i=1+u_i+v_i,\qquad i\geqslant 0;\\&r_0\!=\!\frac{1}{1+v_0},\quad r_n\!=\!\frac{1}{\xi _n-\frac{u_n}{\xi _{n-1}- \frac{u_{n-1}}{\ddots { \xi _2- \frac{u_2}{\xi _1-\frac{u_1}{1+v_0}}}}}}\!=\!\frac{1}{\xi _n-u_n r_{n-1}},\qquad n\!\geqslant \! 1;\\&h_0=1,\qquad h_n=\bigg (\prod _{k=0}^{n-1}r_k\bigg )^{-1},\qquad n\geqslant 1. \end{aligned}$$
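Numerically, the continued fraction above never needs to be unrolled: the rightmost identity \(r_n=(\xi _n-u_n r_{n-1})^{-1}\) computes \((r_n)\), and hence \((h_n)\), in one forward pass. A minimal sketch (ours; the rate functions below are placeholders):

```python
def r_and_h(u, v, N):
    """r_0 = 1/(1+v_0), r_n = 1/(xi_n - u_n r_{n-1}) with xi_n = 1+u_n+v_n;
    h_0 = 1, h_n = (r_0 ... r_{n-1})^{-1}."""
    r = [1.0 / (1.0 + v(0))]
    for n in range(1, N):
        r.append(1.0 / (1.0 + u(n) + v(n) - u(n) * r[n - 1]))
    h, prod = [1.0], 1.0
    for k in range(N - 1):
        prod *= r[k]
        h.append(1.0 / prod)
    return r, h

# Placeholder rates with v == 0; then r_n == 1 and h_n == 1 identically.
r, h = r_and_h(lambda n: 0.5 + n, lambda n: 0.0, 8)
```

With \(v\equiv 0\) this reproduces \(r_n\equiv 1\) and \(h_n\equiv 1\), as noted in Remark 2.3 below.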

For simplicity, we write

$$\begin{aligned} \text {Spec}\big (\Omega _{\min }^c\big )= \text {The } L^2(\mu )\text {-}\text {spectrum of}\,\,\, \big (\!D^c\!, {\fancyscript{D}}_{\min }(D^c)\!\big )\!. \end{aligned}$$

Similarly, we have \(\text {Spec}\big (\Omega _{\max }^c\big )\).

Theorem 2.1

  1. (1)

    Let \(\sum _{k=0}^\infty (h_k h_{k+1}\mu _k b_k)^{-1}<\infty \). Then \(\text {Spec}\big (\Omega _{\min }^c\big )\) is discrete iff

    $$\begin{aligned} \lim _{n\rightarrow \infty }\sum _{j=0}^n \mu _j h_j^2\sum _{k=n}^\infty \frac{1}{h_k h_{k+1}\mu _k b_k}=0. \end{aligned}$$
  2. (2)

    Let \(\sum _{j=0}^{\infty } \mu _j h_j^2<\infty \). Then \(\text {Spec}\big (\Omega _{\max }^c\big )\) is discrete iff

    $$\begin{aligned} \lim _{n\rightarrow \infty }\sum _{j=n+1}^{\infty } \mu _j h_j^2\sum _{k=0}^{n}\frac{1}{h_k h_{k+1}\mu _k b_k}=0. \end{aligned}$$
  3. (3)

    Let \(\sum _{k=0}^\infty (h_k h_{k+1}\mu _k b_k)^{-1}=\infty =\sum _{j=0}^\infty \mu _jh_j^2\). Then \(\text {Spec}\big (\Omega _{\min }^c\big )= \text {Spec}\big (\Omega _{\max }^c\big )\) is not discrete.
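As a numerical illustration of part (1) (our own sketch, not part of the proof), take \(c_n\equiv 0\) (so \(h_n\equiv 1\)) and \(b_n=a_{n+1}=(n+1)^3\), so that \(\mu _n\equiv 1\) and \(\hat{\nu }_n=(n+1)^{-3}\); the truncation level \(K\) below is an assumption of the sketch, not part of the criterion.

```python
def part1_sequence(mu, nu_hat, n, K):
    """Truncated value of sum_{j<=n} mu_j h_j^2 * sum_{k=n}^{K-1} 1/(h_k h_{k+1} mu_k b_k)
    in the case h == 1, where the inner summand is nu_hat_k."""
    return sum(mu[j] for j in range(n + 1)) * sum(nu_hat[k] for k in range(n, K))

K = 20000
mu = [1.0] * K                        # b_n = a_{n+1} = (n+1)^3 gives mu_n = 1
nu_hat = [(k + 1.0) ** -3 for k in range(K)]
vals = [part1_sequence(mu, nu_hat, n, K) for n in (10, 100, 1000)]
# vals decreases toward 0, consistent with discreteness of the spectrum.
```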

Corollary 2.2

If \(\sigma _\mathrm{ess}\big (\Omega _{\min }^c\big )=\emptyset \), then \(\lambda _0\big (\Omega _{\min }^c\big )>0\), where

$$\begin{aligned} \lambda _0\big (\Omega _{\min }^c\big )=\inf \bigg \{D^c(f): f\in {\fancyscript{D}}_{\min }(D^c), \Vert f\Vert _{L^2(\mu )}=1\bigg \}. \end{aligned}$$

Proof

Once \(\sigma _\mathrm{ess}\big (\Omega _{\min }^c\big )=\emptyset \), by Theorem 2.1 (1), it is obvious that

$$\begin{aligned} \sup _n \sum _{j=0}^n\mu _j h_j^2\sum _{k=n}^\infty \frac{1}{h_k h_{k+1}\mu _k b_k}<\infty . \end{aligned}$$

Then the conclusion follows from [5, Theorem 2.6]. \(\square \)

Remark 2.3

If \(c_i\equiv 0\), then \(v_i\equiv 0\) and \(\xi _n\equiv 1+u_n\). Since \(r_0=1\) and \(r_n=(\xi _n-u_nr_{n-1})^{-1}\), it follows by induction that \(r_n\equiv 1\) and hence \(h_n\equiv 1\).

When \(c_i\equiv 0\), we drop the superscript \(c\) from \(\Omega ^c\) and \(D^c\) for simplicity. In this case, part (2) of the theorem is due to [10, Theorem 1.2]. Under the same condition, a parallel spectral property of birth–death processes has recently been obtained in [13]. The criteria in the present general setup seem to be new. Let us mention that the different sums \(\sum _n^{\infty }\) and \(\sum _{n+1}^{\infty }\) are used in the first two parts of Theorem 2.1, respectively.

Before moving further, let us explain the reason for the partition into the three parts given in the theorem.

Remark 2.4

Consider \(c_i\equiv 0\) only for simplicity.

(a) First, let \(\sum _n \mu _n <\infty \). If furthermore \(\sum _n (\mu _n b_n)^{-1} =\infty \), then the corresponding unique birth–death process is ergodic. It becomes exponentially ergodic iff the first non-trivial “eigenvalue” \(\lambda _1 \big (\hbox {or the spectral gap} \inf \{\text {Spec}(\Omega )\setminus \{0\}\}\big )\) is positive. Equivalently,

$$\begin{aligned} \sup _{n\geqslant 1}\sum _{k=0}^{n-1}\frac{1}{\mu _k b_k}\sum _{j=n}^\infty \mu _j<\infty \end{aligned}$$

(cf. [2, Theorem 9.25]). One may compare this condition with part (2) of Theorem 2.1 with \(h_n\equiv 1\). Clearly, this is a necessary condition for \(\text {Spec}\big (\Omega \big )\) to be discrete. Exponential ergodicity means that the process returns to its starting point exponentially fast. Hence, with probability one, it never goes to infinity.

(b) Conversely, if \(\sum _n (\mu _n b_n)^{-1} <\infty \), then the process is transient. It decays (or “goes to infinity”) exponentially fast iff

$$\begin{aligned} \sup _{n\geqslant 1}\sum _{j=0}^n\mu _j\sum _{k=n}^{\infty }\frac{1}{\mu _k b_k}<\infty . \end{aligned}$$

Refer to [3, Theorem 3.1] for more details. One may compare this condition with part (1) of Theorem 2.1 having \(h_n\equiv 1\). This conclusion holds even without the uniqueness assumption:

$$\begin{aligned} \sum _{k=0}^\infty \frac{1}{\mu _k b_k} \sum _{j=0}^k\mu _j =\infty . \end{aligned}$$

(cf. [2, Corollary 3.18] or [3, (1.2)]).

(c) Let \(\sum _n \mu _n <\infty \) and \({\fancyscript{D}}_{\min }(D)\ne {\fancyscript{D}}_{\max }(D)\). From [3, Proposition 1.3], it is known that \({\fancyscript{D}}_{\min }(D)= {\fancyscript{D}}_{\max }(D)\) iff

$$\begin{aligned} \sum _{k=0}^\infty \bigg (\frac{1}{\mu _k b_k} +\mu _k\bigg )=\infty . \end{aligned}$$

Hence we also have \(\sum _{k=0}^\infty (\mu _k b_k)^{-1}<\infty \). In this case, we should study the two spectra separately. For the maximal form \(\big (D, {\fancyscript{D}}_{\max }(D)\big )\), the solution is given by part (2) of the theorem; for the minimal one, it is given in part (1). In this case, both \(\text {Spec}(\Omega _{\min })\) and \(\text {Spec}(\Omega _{\max })\) are discrete. In [5, Theorem 2.6], the principal eigenvalue is studied only in one case, for \(\Omega _{\min }^c\). The other three cases (cf. [3]) should be parallel. For instance, \(\Omega _{\max }^c\) corresponds to an extended Hardy inequality:

$$\begin{aligned} \Vert f\Vert _{L^2(\mu )}^2\leqslant A D^c(f),\qquad f\in L^2(\mu ), \end{aligned}$$

where \(A\) is a constant. However, for \(\Omega _{\min }^c\), the condition “\(f\!\in \! L^2(\mu )\)” in the last line should be replaced by “\(f\) has finite support”.

(d) As for part (3) of the theorem, note that part (1) remains true even if \(\sum _n (\mu _n b_n)^{-1}=\infty \); dually, part (2) remains true even if \(\sum _n \mu _n=\infty \). Alternatively, in case (3), the birth–death process is null recurrent, and so the spectrum cannot be discrete. Actually, the process cannot have exponential decay; otherwise,

$$\begin{aligned} \infty =\int _0^\infty p_{ii}(t)\,\text {d}t\leqslant C\int _0^\infty e^{-\lambda _0 t}\,\text {d}t<\infty . \end{aligned}$$

Besides, \({\fancyscript{D}}_{\min }(D)= {\fancyscript{D}}_{\max }(D)\). The assertion is now clear.

The next four simple examples show that the three parts of Theorem 2.1 are independent. Note that in what follows, we do not care about \(b_0\) and \(a_0\), since a change of finitely many coefficients does not affect our conclusions (in general, the essential spectrum is invariant under compact perturbations).

Example 2.5

Let \(b_n=n^4\) and \(\mu _n=n^{-2}\). Then both \(\text {Spec}(\Omega _{\min })\) and \(\text {Spec}(\Omega _{\max })\) are discrete.

Proof

Since \(\hat{\nu }_n=n^{-2}\), we have \(\sum _n \mu _n<\infty \) and \(\sum _n \hat{\nu }_n<\infty \). The assertion follows from the first two parts of Theorem 2.1.\(\square \)

Example 2.6

Let \(c_n\equiv 0, b_n=a_{n+1}=n^{\gamma }\,(\gamma \geqslant 0)\). Then \(\text {Spec}(\Omega _{\min })\) is discrete iff \(\gamma >2\). In particular, if \(\gamma \in [0, 1]\), then \(\text {Spec}(\Omega _{\min })=\text {Spec}(\Omega _{\max })\) is not discrete.

Proof

Because \(\mu _n\sim 1\) and \(\hat{\nu }_n \sim n^{-\gamma }\), we have \(\sum _n \hat{\nu }_n<\infty \) iff \(\gamma >1\), and

$$\begin{aligned} \sum _0^n\mu _k \sum _n^\infty {\hat{\nu }}_j\sim n^{2-\gamma }. \end{aligned}$$

The main assertion follows from the last two parts of Theorem 2.1. In the particular case that \(\gamma \in [0, 1]\), we have \(\sum _n \mu _n=\infty \) and \(\sum _n \hat{\nu }_n\!=\!\infty \). The assertion follows from part (3) of Theorem 2.1. \(\square \)
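The asymptotics \(\sum _0^n\mu _k \sum _n^\infty {\hat{\nu }}_j\sim n^{2-\gamma }\) used above can be checked numerically. The sketch below (ours, not from the paper) takes \(\gamma =3\), with the rates shifted to \((n+1)^{\gamma }\) to avoid \(b_0=0\); the ratio should stabilize near \(1/2\), since the tail sum behaves like \(n^{-2}/2\).

```python
# Numerical check (ours): with mu_n = 1 and nu_hat_n = (n+1)^{-gamma},
# the quantity (sum_{k<=n} mu_k)(sum_{j>=n} nu_hat_j) * n^{gamma-2}
# should approach 1/2 for gamma = 3. K is a truncation assumption.
gamma, K = 3, 200000
nu_hat = [(k + 1.0) ** -gamma for k in range(K)]
tail = [0.0] * (K + 1)
for k in range(K - 1, -1, -1):          # tail[k] = sum_{j >= k} nu_hat_j
    tail[k] = tail[k + 1] + nu_hat[k]
ratios = [(n + 1) * tail[n] / n ** (2 - gamma) for n in (100, 1000, 10000)]
```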

Dually, we have the following example.

Example 2.7

Let \(c_n\equiv 0\), \(a_n=b_n=n^{\gamma }\,(\gamma \geqslant 0)\). Then \(\text {Spec}\big (\Omega _{\max }\big )\) is discrete iff \(\gamma >2\). In particular, when \(\gamma \in [0, 1]\), \(\text {Spec}(\Omega _{\min })=\text {Spec}(\Omega _{\max })\) is not discrete.

Since a local modification of the rates does not affect our conclusions, we obtain the next result.

Example 2.8

If \(c_i\ne 0\) only on a finite set, then the conclusions of the last three examples remain the same.

The next three examples are much more technical, since their \((c_n)\) are not local. This is the price we pay for our approach. The proofs are deferred to Sect. 6.

Example 2.9

Let \(a_n=b_n=1\) and \(c_n \downarrow 0\). Then \(\text {Spec}\big (\Omega _{\min }^c\big )\) is not discrete or equivalently \(\sigma _{\text {ess}}\big (\Omega _{\min }^c\big )\ne \emptyset \).

Example 2.10

Let \(a_n = b_n = (n + 1)/4\) and \(c_n = 9(n + 1)/16\). Then \(\sigma _\mathrm{ess}\big (\Omega _{\min }^c\big )=\emptyset \).

Example 2.11

Let \(a_n = b_n = (n + 1)^2\) and \(c_n = 5 + 10/(5n - 12)\). Then \(\sigma _{\mathrm{ess}}\big (\Omega _{\min }^c\big )\ne \emptyset \).

3 Proof of Theorem 2.1

(a) The computation of the \(\Omega ^c\)-harmonic function \(h\) (i.e., \(\Omega ^c h=0\)) used in the theorem is deferred to Sect. 5.

(b) By using \(h\), one can reduce the case of \(c_i\not \equiv 0\) to that of \(c_i\equiv 0\). Roughly speaking, the idea goes as follows. Let \(h\ne 0\), \(\mu \)-a.e. Then the mapping \(f\rightarrow {\tilde{f}}\): \({\tilde{f}}= 1\!\!1_{[h\ne 0]} f/h\) is an isometry from \(L^2(\mu )\) to \(L^2(\tilde{\mu })\), where \(\tilde{\mu }=h^2\mu \). Next, for a given operator \(\big (\Omega ^c, {\fancyscript{D}}\big (\Omega ^c\big )\big )\), one may introduce an operator \(\widetilde{\Omega }\) on \(L^2(\tilde{\mu })\) (without killing), with domain \({\fancyscript{D}}\big (\widetilde{\Omega }\big )\) deduced from \({\fancyscript{D}}\big (\Omega ^c\big )\) under the mapping \(f\rightarrow \tilde{f}\), such that

$$\begin{aligned} \big (\Omega ^c f, \, f\big )_{\mu } = \big ({\widetilde{\Omega }} {\tilde{f}}, \,{\tilde{f}}\big )_{\tilde{\mu }}, \qquad f\in {\fancyscript{D}}\big (\Omega ^c\big ). \end{aligned}$$

This implies that the corresponding quadratic form \(\big (D^c, {\fancyscript{D}}\big (D^c\big )\big )\) on \(L^2(\mu )\) coincides with \(\big (\widetilde{D}, {\fancyscript{D}}\big (\widetilde{D}\big )\big )\) on \(L^2(\tilde{\mu })\) under the same mapping, and hence

$$\begin{aligned} \hbox {Spec}_{\mu }\big (\Omega ^c\big )=\hbox {Spec}_{\tilde{\mu }}\big (\widetilde{\Omega }\big ). \end{aligned}$$

Refer to [5, Lemma 1.3 and §2]. Actually, as studied in the cited paper, this idea works in a rather general setup.

From now on in this section, we assume that \(c_i\equiv 0\).

(c) Consider first \(\text {Spec}\big (\Omega _{\max }\big )\). Without loss of generality, assume that \(\mu (E)<\infty \); otherwise, \(\sigma _\mathrm{ess}(\Omega _{\max })\ne \emptyset \). (Actually, in this case the spectral gap vanishes and so the spectrum cannot be discrete.) By [10, Theorem 1.2], \(\sigma _\mathrm{ess}(\Omega )=\emptyset \) iff

$$\begin{aligned} \lim _{n\rightarrow \infty }\mu [n, \infty )\sum _{j=0}^{n-1}\frac{1}{\mu _j b_j} =\lim _{n\rightarrow \infty }{\hat{\nu }}[0, n]\,\mu [n+1,\infty )=0, \end{aligned}$$

where \((\mu _n)\) and \(({\hat{\nu }}_n)\) are defined at the beginning of the paper. This is the condition given in Theorem 2.1 (2) with \(h_k\equiv 1\). We remark here that the original [10, Theorem 1.2] assumed non-explosiveness (uniqueness). However, as mentioned in [3, §6], one can use the maximal process instead of the uniqueness condition. This remains true in the present setup, since the basic estimates for the principal eigenvalue used in [10, Theorem 2.4] do not change if the uniqueness condition is replaced by the use of the maximal process, as proved in [3, §4].

(d) Define a dual birth–death process on \(\{0, 1, 2, \ldots \}\) by

$$\begin{aligned} b_i^*=a_{i+1},\qquad a_i^*=b_i,\qquad i\geqslant 0. \end{aligned}$$

Similar to \((\mu _n)\) and \(({\hat{\nu }}_n)\), we have

$$\begin{aligned} \mu _0^*=1, \quad \mu _n^*=\frac{b_0^*\cdots b_{n-1}^*}{a_1^*\cdots a_{n}^*}, \quad n\geqslant 1;\qquad {\hat{\nu }}_n^*= \frac{1}{\mu _n^* b_n^*},\quad n\geqslant 0. \end{aligned}$$

Then

$$\begin{aligned}&\mu _n =\frac{a_0^*\cdots a_{n-1}^*}{b_0^*\cdots b_{n-1}^*} =\frac{a_0^*}{\mu _{n-1}^*b_{n-1}^*}=a_0^*{\hat{\nu }}_{n-1}^*,\qquad n\geqslant 1.\\&{\hat{\nu }}_n=\frac{1}{\mu _n b_n}=\frac{1}{a_0^* {\hat{\nu }}_{n-1}^*a_{n}^*} =\frac{\mu _{n-1}^* b_{n-1}^*}{a_0^* a_n^*}=\frac{\mu _n^*}{a_0^*},\qquad n\geqslant 1. \end{aligned}$$

The last equality holds also at \(n=0\), and then

$$\begin{aligned} {\hat{\nu }}_n=\frac{1}{\mu _n b_n}=\frac{\mu _n^*}{a_0^*},\qquad n\geqslant 0. \end{aligned}$$

Therefore,

$$\begin{aligned} {\hat{\nu }}[0, n]\,\mu [n+1,\infty ) =\frac{1}{a_0^*} \mu ^*[0, n]\,a_0^*\,{\hat{\nu }}^*[n, \infty )=\mu ^*[0, n]\,{\hat{\nu }}^*[n, \infty ). \end{aligned}$$

Clearly, we have \({\hat{\nu }}^*(E)<\infty \) iff \(\mu (E)<\infty \).
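The duality identities above are easy to sanity-check numerically. The sketch below (ours, with arbitrary illustrative rates) verifies \({\hat{\nu }}_n=\mu _n^*/a_0^*\) termwise.

```python
def bd_measures(b, a, N):
    """mu_0 = 1, mu_n = prod_{k=1}^{n} b_{k-1}/a_k; nu_hat_n = 1/(mu_n b_n)."""
    mu = [1.0]
    for n in range(1, N):
        mu.append(mu[-1] * b[n - 1] / a[n])
    nu = [1.0 / (mu[n] * b[n]) for n in range(N)]
    return mu, nu

N = 12
b = [2.0 + 0.3 * i for i in range(N + 1)]          # illustrative birth rates
a = [0.0] + [1.0 + 0.5 * i for i in range(1, N + 1)]  # illustrative death rates
b_star = [a[i + 1] for i in range(N)]              # dual rates: b*_i = a_{i+1}
a_star = [b[i] for i in range(N)]                  #             a*_i = b_i

mu, nu = bd_measures(b, a, N)
mu_s, nu_s = bd_measures(b_star, a_star, N)
# Identity from the text: nu_hat_n = mu*_n / a*_0 for all n >= 0.
```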

Next, define

$$\begin{aligned} {M=\left[ \begin{array}{lllll} {\mu _0} &{}\quad {\mu _1} &{}\quad {\mu _2} &{}\quad {\mu _3} &{}\quad \ldots \\ 0 &{}\quad {\mu _1} &{}\quad {\mu _2} &{}\quad {\mu _3} &{}\quad \ldots \\ 0 &{}\quad 0 &{}\quad {\mu _2} &{}\quad {\mu _3} &{}\quad \ldots \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad {\mu _3} &{}\quad \ldots \\ \vdots &{}\quad \vdots &{}\quad \vdots &{}\quad \qquad \ddots \!\!\!\!\!\!\!\! &{}\quad {} \end{array}\right] \!,}\quad {M^{-1}\!=\!\left[ \begin{array}{lllll} \dfrac{1}{\mu _0} &{}\quad -\dfrac{1}{\mu _0} &{}\quad 0 &{}\quad 0 &{}\quad \ldots \\ 0 &{}\quad \dfrac{1}{\mu _1} &{}\quad -\dfrac{1}{\mu _1} &{}\quad 0 &{}\quad \ldots \\ 0 &{}\quad 0 &{}\quad \dfrac{1}{\mu _2} &{}\quad -\dfrac{1}{\mu _2} &{}\quad \ldots \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad \dfrac{1}{\mu _3} &{}\quad \ldots \\ \vdots &{}\quad \vdots &{}\quad \vdots &{}\quad \qquad \ddots \!\!\!\!\!\!\!\! &{}\quad {} \end{array}\right] \!.} \end{aligned}$$

Then we have \(\Omega ^*=M \Omega M^{-1}\), or equivalently, \(Q^*=M Q M^{-1}\). In other words, \(\Omega \) and \(\Omega ^*\) are similar and so have the same spectrum (one may worry about the domains of the operators, but they can be approximated by finite ones, as is often done in the literature; see for instance [3]). Now, we can read off from step (c) above a criterion for \(\Omega _{\min }^*\) to have discrete spectrum: \(\sigma _\mathrm{ess}(\Omega _{\min }^*)=\emptyset \) iff

$$\begin{aligned} \lim _{n\rightarrow \infty }\mu ^*[0, n]\,{\hat{\nu }}^*[n, \infty )=0. \end{aligned}$$

Ignoring the superscript \(*\), this is the condition given in Theorem 2.1 (1) with \(h_k\equiv 1\).

4 An Algorithm for \((h_i)\) in the “Lower-Triangle” Case

To get a representation of the harmonic function \(h\), as mentioned in [6, Remark 2.5 (3)], even in the special case of birth–death processes we still have to pass to a more general setup: the “lower-triangle” matrix (or single birth process). Readers interested only in the tridiagonal case may jump from here to the next section. The matrix we work with in this section is as follows: \(q_{i, i+1}>0\) for each \(i\geqslant 0\), but \(q_{ij}\geqslant 0\) can be arbitrary for \(j<i\). For \(c_i\in {\mathbb R}\), the operator \(\Omega ^c\) becomes

$$\begin{aligned} \Omega ^c f(i)=\sum _{j<i}q_{ij}(f_j-f_i)+q_{i, i+1}(f_{i+1}-f_i)-c_i f_i,\qquad i\geqslant 0. \end{aligned}$$

To be consistent with the notation used in the last section, we replace the \(c_i\) used in [6] by \(-c_i\) here. Following [6, Theorem 1.1], we adopt the notation:

$$\begin{aligned} {\tilde{q}}_n^{(k)}&= \sum _{j=0}^k q_{nj}+c_n\;\;(\hbox {here }c_n\in {\mathbb {R}}!), \qquad 0\leqslant k< n,\\ {\widetilde{F}}_i^{(i)}&= 1,\quad {\widetilde{F}}_n^{(i)}=\frac{1}{q_{n,n+1}}\sum _{k=i}^{n-1} {\tilde{q}}_n^{(k)} {\widetilde{F}}_k^{(i)},\qquad n>i\geqslant 0,\\ g_n&= g_0+\sum _{0\leqslant k\leqslant n-1} \sum _{0\leqslant j\leqslant k} {\widetilde{F}}_k^{(j)}\frac{f_j+c_jg_0}{q_{j,j+1}}\quad \bigg [\sum _{\emptyset }:=0\bigg ],\qquad n\geqslant 0. \end{aligned}$$

The theorem just cited says that \((g_n)\) is the solution to the Poisson equation

$$\begin{aligned} \Omega ^c g= f \quad \text { on }E=\{0, 1, \cdots \}. \end{aligned}$$

In particular, when \(f=0\), this \(g\) gives us a unified formula for the \(\Omega ^c\)-harmonic function \(h\).
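A direct transcription of the cited recursion for \({\widetilde{F}}_n^{(i)}\) is short (our sketch; the lower-triangular entries \(q_{nj}\), the superdiagonal \(q_{n,n+1}\) and \(c_n\) are passed in as data, and the rates below are illustrative). In the birth–death case with \(c_i\equiv 0\) it reproduces the product \(\prod _{s=i+1}^{n} a_s/b_s\), as recalled in Remark 4.2 below.

```python
def F_tilde(q, q_up, c, N):
    """F[i][n-i] = F_tilde_n^{(i)}, computed from the defining recursion:
    F_i^{(i)} = 1, F_n^{(i)} = (1/q_{n,n+1}) sum_{k=i}^{n-1} qt_n^{(k)} F_k^{(i)},
    with qt_n^{(k)} = sum_{j<=k} q_{nj} + c_n."""
    F = [[1.0] for _ in range(N)]
    for i in range(N):
        for n in range(i + 1, N):
            qt = [sum(q[n][j] for j in range(k + 1)) + c[n] for k in range(n)]
            F[i].append(sum(qt[k] * F[i][k - i] for k in range(i, n)) / q_up[n])
    return F

# Birth-death special case: q_{n,n-1} = a_n, other lower entries 0, c == 0.
N = 8
a = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]
b = [2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0]
q = [[a[n] if j == n - 1 else 0.0 for j in range(n)] for n in range(N)]
F = F_tilde(q, b, [0.0] * N, N)
```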

We now introduce an alternative algorithm for \(\big \{{\widetilde{F}}_n^{(i)}\big \}_{n\geqslant i\geqslant 0}\) (and then for \(\{g_n\}_{n\geqslant 0}\)). This is meaningful since it is the most important sequence used in [6]. The advantage of the new algorithm given in (1) below is that, at the \(k\)th step in computing \(G_{\cdot , k}^{(i)}\), we use only \(G_{\cdot , k-1}^{(i)}\), rather than \(G_{\cdot , s}^{(i)}\) for all \(s\) with \(i\leqslant s\leqslant k-2\), as in the original computation of \({\widetilde{F}}_n^{(i)}\), where the whole family \(\big \{{\widetilde{F}}_s^{(i)}\big \}_{s= i}^{n-1}\) is required.

Proposition 4.1

Let

$$\begin{aligned} u_{\ell }^{(i)}=\frac{{\tilde{q}}_{i+\ell }^{(i)}}{q_{i+\ell ,\, i+\ell +1}}, \qquad i\geqslant 0,\; \ell \geqslant 1. \end{aligned}$$

Fix \(i\geqslant 0\), define \(\big \{G_{\ell , k}^{(i)}: \ell \geqslant k\big \}_{k\geqslant 1}\), recursively in \(k\), by

$$\begin{aligned}&G_{\ell , k}^{(i)}= G_{\ell ,\, k-1}^{(i)}+ u_{\ell -k+1}^{(i+k-1)}G_{k-1,\, k-1}^{(i)},\qquad (\ell \geqslant )\, k\geqslant 2 \end{aligned}$$
(1)

with initial condition

$$\begin{aligned} G_{\ell , 1}^{(i)}=u_{\ell }^{(i)},\qquad \ell \geqslant 1. \end{aligned}$$

Then, with \(G_{0,0}^{(i)}\equiv 1\), we have the following alternative representation.

  1. (1)

    For each \(m\geqslant 0\) and \(i\geqslant 0\),

    $$\begin{aligned} {\widetilde{F}}_{i+m}^{(i)}= G_{m, m}^{(i)}. \end{aligned}$$
  2. (2)

    For each \(n\geqslant 0\) and \(i\geqslant 0\),

    $$\begin{aligned} g_n=g_0+ \sum _{0\leqslant j\leqslant n-1} v_j\sum _{k=0}^{n-j-1}G_{k, k}^{(j)}, \end{aligned}$$

    where

    $$\begin{aligned} v_j=\frac{f_j+c_j g_0}{q_{j,\,j+1}},\qquad j\geqslant 0. \end{aligned}$$

Proof

(a) To prove part (1) of the proposition, by [6, (2.7)], we have

$$\begin{aligned} {\widetilde{F}}_i^{(i)}=1,\quad {\widetilde{F}}_n^{(i)}=\sum _{k=i+1}^{n} {\widetilde{F}}_n^{(k)} \frac{{\tilde{q}}_k^{(i)}}{q_{k,\,k+1}},\qquad n\geqslant i+1. \end{aligned}$$

Rewrite

$$\begin{aligned} {\widetilde{F}}_n^{(i)}=\sum _{\ell =1}^{n-i} {\widetilde{F}}_n^{(i+\ell )} \frac{{\tilde{q}}_{i+\ell }^{(i)}}{q_{i+\ell ,\,i+\ell +1}},\qquad n\geqslant i+1. \end{aligned}$$

For simplicity, let

$$\begin{aligned} m=n-i, \quad f_m^{(i)}={\widetilde{F}}_{m+i}^{(i)},\quad u_{\ell }^{(i)}=\frac{{\tilde{q}}_{i+\ell }^{(i)}}{q_{i+\ell ,\,i+\ell +1}}. \end{aligned}$$

Then we have

$$\begin{aligned} f_0^{(i)}=1,\quad f_m^{(i)}=\sum _{\ell =1}^m f_{m-\ell }^{(i+\ell )}u_{\ell }^{(i)},\qquad m\geqslant 1,\; i\geqslant 0. \end{aligned}$$
(2)

The goal of the construction of \(\{G_{\cdot , k}^{(i)}\}\) is, for each \(k\) with \(1\leqslant k\leqslant m\), to express \(f_m^{(i)}\) as

$$\begin{aligned} f_m^{(i)}=\sum _{\ell =k}^m f_{m-\ell }^{(i+\ell )}G_{\ell , k}^{(i)}. \end{aligned}$$

Clearly, \(f_1^{(i)}=u_{1}^{(i)}\). Next, by (2), we have

$$\begin{aligned} f_{m-1}^{(i+1)}=\sum _{s=1}^{m-1}f_{m-1-s}^{(i+1+s)}u_s^{(i+1)} =\sum _{s=2}^{m}f_{m-s}^{(i+s)}u_{s-1}^{(i+1)},\qquad m\geqslant 2. \end{aligned}$$

Hence by (2) again, it follows that

$$\begin{aligned} f_m^{(i)}=\sum _{\ell =2}^m f_{m-\ell }^{(i+\ell )}u_{\ell }^{(i)} + f_{m-1}^{(i+1)} u_1^{(i)} =\sum _{\ell =2}^{m}f_{m-\ell }^{(i+\ell )}\bigg [u_{\ell }^{(i)} +u_{\ell -1}^{(i+1)}u_1^{(i)}\bigg ]. \end{aligned}$$

Comparing this with (2), it is clear that to replace the index set \(\{1, 2, \ldots , m\}\) by \(\{2, 3, \ldots , m\}\) in the summation, we should replace the term

$$\begin{aligned} u_{\ell }^{(i)}=: G_{\ell , 1}^{(i)},\qquad (m\geqslant )\,\ell \geqslant 1\quad \text {(at the first step)} \end{aligned}$$

by

$$\begin{aligned} u_{\ell }^{(i)}+u_{\ell -1}^{(i+1)}u_{1}^{(i)}= G_{\ell , 1}^{(i)}+u_{\ell -1}^{(i+1)}G_{1, 1}^{(i)}=:G_{\ell , 2}^{(i)},\;\; (m\geqslant )\,\ell \geqslant 2. \end{aligned}$$

Then, we have

$$\begin{aligned} f_m^{(i)}=\sum _{\ell =2}^{m}f_{m-\ell }^{(i+\ell )}G_{\ell , 2}^{(i)},\qquad m\geqslant 2 \quad \text {(at the second step).} \end{aligned}$$
(3)

Similarly, by (2), we have

$$\begin{aligned} f_{m-2}^{(i+2)}=\sum _{s=1}^{m-2}f_{m-2-s}^{(i+2+s)}u_s^{(i+2)} =\sum _{s=3}^{m}f_{m-s}^{(i+s)}u_{s-2}^{(i+2)},\qquad m\geqslant 3. \end{aligned}$$

Inserting this into (3), it follows that

$$\begin{aligned} f_m^{(i)}=\sum _{\ell =3}^{m}f_{m-\ell }^{(i+\ell )}G_{\ell , 3}^{(i)},\qquad m\geqslant 3 \quad \text {(at the third step)} \end{aligned}$$

with

$$\begin{aligned} G_{\ell , 3}^{(i)}= G_{\ell , 2}^{(i)}+ u_{\ell -2}^{(i+2)}G_{2, 2}^{(i)},\qquad (m\geqslant )\,\ell \geqslant 3. \end{aligned}$$

One may continue the construction of \(G_{\cdot , k}^{(i)}\) recursively in \(k\). In particular, with

$$\begin{aligned} f_m^{(i)}&= \sum _{\ell =m-1}^{m}f_{m-\ell }^{(i+\ell )}G_{\ell ,\, m-1}^{(i)}\\&= f_0^{(i+m)}G_{m,\, m-1}^{(i)}+f_1^{(i+m-1)}G_{m-1,\, m-1}^{(i)}\\&= G_{m,\, m-1}^{(i)}+u_1^{(i+m-1)}G_{m-1,\, m-1}^{(i)}\quad (\hbox {at }(m-1)\,\hbox {th step}) \end{aligned}$$

and

$$\begin{aligned} f_m^{(i)}=f_0^{(i+m)}G_{m, m}^{(i)}=G_{m, m}^{(i)}\quad \text {(by (2))}, \end{aligned}$$

at last, we obtain

$$\begin{aligned} f_m^{(i)}=G_{m, m}^{(i)} =G_{m,\, m-1}^{(i)}+u_1^{(i+m-1)}G_{m-1,\, m-1}^{(i)}\quad (\hbox {at the }m\,\hbox {th step}) \end{aligned}$$

for \(m\geqslant 2\) and \(i\geqslant 0.\) We have thus proved not only (1) but also the first assertion of the proposition.

(b) To prove part (2) of the proposition, we rewrite \(g_n\) as

$$\begin{aligned} g_n=g_0+ \sum _{0\leqslant j\leqslant n-1}v_j\sum _{k=j}^{n-1} {\widetilde{F}}_k^{(j)},\qquad n\geqslant 0. \end{aligned}$$

Then the second assertion follows from the first one of the proposition. \(\square \)

Remark 4.2

From (1), it follows that

$$\begin{aligned} G_{k, k}^{(i)}\geqslant u_1^{(i+k-1)} G_{k-1,\, k-1}^{(i)}. \end{aligned}$$

Successively, we get

$$\begin{aligned} G_{k, k}^{(i)}&\geqslant u_1^{(i+k-1)}u_1^{(i+k-2)} G_{k-2,\, k-2}^{(i)}\\&\cdots \cdots \\&\geqslant u_1^{(i+k-1)}u_1^{(i+k-2)}\cdots u_1^{(i+1)} G_{1, 1}^{(i)}\\&= \prod _{s=0}^{k-1} u_1^{(i+s)}. \end{aligned}$$

We have thus obtained a lower bound on \(G_{m, m}^{(i)}\) (and hence a lower bound on \(g_n\)):

$$\begin{aligned} G_{m, m}^{(i)}\geqslant \prod _{s=0}^{m-1}\frac{{\tilde{q}}_{i+s+1}^{(i+s)}}{q_{i+s+1,\, i+s+2}}. \end{aligned}$$

When \(c_i\equiv 0\), we return to the original \(F_m^{(0)}\):

$$\begin{aligned} F_m^{(0)}=G_{m, m}^{(0)}=\prod _{s=0}^{m-1}\frac{a_{s+1}}{b_{s+1}}. \end{aligned}$$
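The recursion (1) is easy to implement, keeping only the previous column as promised. In the birth–death case with \(c_i\equiv 0\), one has \(u_1^{(i)}=a_{i+1}/b_{i+1}\) and \(u_\ell ^{(i)}=0\) for \(\ell \geqslant 2\), so the diagonal \(G_{m,m}^{(0)}\) should reproduce the product formula above. A sketch (ours, with illustrative rates):

```python
def G_diag(u, i, m):
    """G_{m,m}^{(i)} from Proposition 4.1: G_{l,1}^{(i)} = u_l^{(i)};
    G_{l,k}^{(i)} = G_{l,k-1}^{(i)} + u_{l-k+1}^{(i+k-1)} G_{k-1,k-1}^{(i)}.
    Only the previous column is kept in memory."""
    if m == 0:
        return 1.0
    col = {l: u(l, i) for l in range(1, m + 1)}          # column k = 1
    for k in range(2, m + 1):
        prev_diag = col[k - 1]                           # G_{k-1,k-1}^{(i)}
        col = {l: col[l] + u(l - k + 1, i + k - 1) * prev_diag
               for l in range(k, m + 1)}
    return col[m]

# Birth-death case, c == 0: u_1^{(i)} = a_{i+1}/b_{i+1}, u_l^{(i)} = 0 for l >= 2.
a = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
b = [1.5, 2.5, 3.5, 4.5, 5.5, 6.5, 7.5]
u = lambda l, i: a[i + 1] / b[i + 1] if l == 1 else 0.0
val = G_diag(u, 0, 4)   # should equal prod_{s=1}^{4} a_s/b_s
```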

5 An Algorithm for \((h_i)\) in the Tridiagonal Case

We now come back to the birth–death processes and look for a simpler algorithm for the \(\Omega ^c\)-harmonic function \(h\).

Lemma 5.1

For a birth–death process with killing, the \(\Omega ^c\)-harmonic function \(h\):

$$\begin{aligned} b_i(h_{i+1}-h_i)+a_i(h_{i-1}-h_i)-c_ih_i=0,\qquad i\geqslant 0 \end{aligned}$$

can be expressed by the following recursive formula

$$\begin{aligned} {\left\{ \begin{array}{ll} h_0=1,\\ h_1=1+v_0,\\ h_i=(1+u_{i-1}+v_{i-1}) h_{i-1} -u_{i-1} h_{i-2},\qquad i\geqslant 2, \end{array}\right. } \end{aligned}$$

where

$$\begin{aligned} u_i=\frac{a_i}{b_i},\qquad v_i=\frac{c_i}{b_i},\qquad i\geqslant 0. \end{aligned}$$

From Lemma 5.1, it is clear that the sequence \((h_n)\) is completely determined by the sequences \((u_n)\) and \((v_n)\).
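A minimal sketch (ours, with illustrative rates) of the recursion in Lemma 5.1, checking directly that the output is \(\Omega ^c\)-harmonic:

```python
def harmonic(a, b, c, N):
    """h from Lemma 5.1: h_0 = 1, h_1 = 1 + v_0,
    h_i = (1 + u_{i-1} + v_{i-1}) h_{i-1} - u_{i-1} h_{i-2}."""
    u = lambda i: a[i] / b[i]
    v = lambda i: c[i] / b[i]
    h = [1.0, 1.0 + v(0)]
    for i in range(2, N):
        h.append((1.0 + u(i - 1) + v(i - 1)) * h[i - 1] - u(i - 1) * h[i - 2])
    return h

# Illustrative rates (a_0 = 0 by convention); then check the harmonic equation
# b_i (h_{i+1} - h_i) + a_i (h_{i-1} - h_i) - c_i h_i = 0 for 1 <= i <= N-2.
N = 10
a = [0.0] + [1.0 + 0.2 * i for i in range(1, N)]
b = [2.0] * N
c = [0.5 / (i + 1) for i in range(N)]
h = harmonic(a, b, c, N)
residuals = [b[i] * (h[i + 1] - h[i]) + a[i] * (h[i - 1] - h[i]) - c[i] * h[i]
             for i in range(1, N - 1)]
```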

Next, we introduce a first-order difference equation in place of the second-order one used in the last lemma. To do so, set

$$\begin{aligned} r_i=\frac{h_i}{h_{i+1}},\qquad i\geqslant 0,\qquad r_0=\frac{1}{1+v_0}. \end{aligned}$$

By induction, we have \(h_i\geqslant (1+v_{i-1})h_{i-1}\) and hence \(r_i\leqslant (1+v_i)^{-1}\). From

$$\begin{aligned} h_{n+1}=(1+u_n+v_n)h_n -u_n h_{n-1},\qquad n\geqslant 1, \end{aligned}$$

we get

$$\begin{aligned} 1=(1+u_n+v_n) r_n -u_n r_{n-1}r_n=(1+u_n+v_n-u_n r_{n-1}) r_n , \qquad n\geqslant 1. \end{aligned}$$

Clearly, we have

$$\begin{aligned} 1+u_n+v_n -u_n r_{n-1}=1+v_n+u_n(1-r_{n-1})\geqslant 1+v_n\geqslant 1. \end{aligned}$$

The next result says that we can describe \((h_n)\) by \((r_n)\) which has a simpler expression.

Proposition 5.2

Let \((u_n)\) and \((v_n)\) be given in the last lemma, set \(\xi _n=1+u_n+v_n.\) Then

$$\begin{aligned} r_0=\frac{1}{1+v_0},\quad r_n= \frac{1}{\xi _n -u_n r_{n-1}} \leqslant \bigg [1+v_n+\frac{u_n v_{n-1}}{1+ v_{n-1}}\bigg ]^{-1},\quad n\geqslant 1. \end{aligned}$$

Furthermore, the sequences \(\{r_n\}\) and \(\{h_n\}\) are presented in Theorem 2.1.

In what follows, we work out some more explicit bounds on \((r_n)\) and a more practical corollary of our main criterion (Theorem 2.1). We pay particular attention to the case \(u_n\equiv 1\), which is more attractive since then the principal eigenvalue \(\lambda _0\big (\Omega _{\min }^c\big )=0\) (and hence \(\sigma _{\text {ess}}\big (\Omega _{\min }^c\big )\ne \emptyset \)) once \(v_n\equiv 0\). Thus, one may get some impression of the role played by \((c_n)\).

Lemma 5.3

If

$$\begin{aligned} u_n\Bigg (\xi _{n-1}-\sqrt{\xi _{n-1}^2- 4 u_{n-1}}\,\Bigg ) \leqslant u_{n-1}\bigg (\xi _n-\sqrt{\xi _n^2- 4 u_n}\,\bigg ) \end{aligned}$$

for large \(n\), then by a local modification of the rates \((a_i, c_i)\) if necessary, we have

$$\begin{aligned} r_n\leqslant \frac{\xi _n-\sqrt{\xi _n^2- 4 u_n}}{2u_n},\qquad n\geqslant 1 \end{aligned}$$
(4)

and then

$$\begin{aligned} h_n\geqslant (1+v_0)\prod _{k=1}^{n-1}\frac{\xi _k+ \sqrt{\xi _k^2-4 u_k}}{2},\qquad n\geqslant 1. \end{aligned}$$

Besides, for \(r_{n-1}\leqslant r_n\), condition (4) is necessary.

Proof

Let us start the proof with an informal description of the idea of the lemma. Suppose that \(r_n\sim x\) as \(n\rightarrow \infty \). From the second equation given in Proposition 5.2, we obtain the approximating equation

$$\begin{aligned} x=\frac{1}{\xi _n -u_n x}. \end{aligned}$$

Since \(x\leqslant 1\), we have only one solution

$$\begin{aligned} x=\frac{\xi _n-\sqrt{\xi _n^2- 4 u_n}}{2u_n}. \end{aligned}$$

This suggests the upper bound

$$\begin{aligned} r_n\leqslant \frac{\xi _n-\sqrt{\xi _n^2- 4 u_n}}{2u_n} \end{aligned}$$

for large \(n\). This leads to the conclusion of the lemma.

(a) Assume that the condition in the lemma holds starting from \(n_0\), and suppose that (4) holds for \(n-1\,(n\geqslant n_0)\). Then we have

$$\begin{aligned} r_n&= \frac{1}{\xi _n -u_n r_{n-1}}\\&\leqslant \frac{1}{\xi _n -u_n (2u_{n-1})^{-1} \Big (\xi _{n-1}-\sqrt{\xi _{n-1}^2- 4 u_{n-1}}\,\Big )}\\&= \frac{2u_{n-1}}{2u_{n-1}\xi _n-u_n\Big (\xi _{n-1}-\sqrt{\xi _{n-1}^2- 4 u_{n-1}}\,\Big )}. \end{aligned}$$

We now show that the right-hand side is upper bounded by

$$\begin{aligned} \frac{\xi _n-\sqrt{\xi _n^2- 4 u_n}}{2u_n}=\frac{2}{\xi _n+\sqrt{\xi _n^2- 4 u_n}}. \end{aligned}$$

Or equivalently,

$$\begin{aligned} \frac{u_{n-1}}{2u_{n-1}\xi _n-u_n\Big (\xi _{n-1}-\sqrt{\xi _{n-1}^2- 4 u_{n-1}}\,\Big )} \leqslant \frac{1}{\xi _n+\sqrt{\xi _n^2- 4 u_n}}. \end{aligned}$$

This clearly holds by the condition of the lemma. We have thus obtained (4) for \(n\), completing the inductive step.

(b) The proof of the last assertion of the lemma is similar: from \(r_{n-1}\leqslant r_n\), one obtains

$$\begin{aligned} r_n=\frac{1}{\xi _n -u_n r_{n-1}}\leqslant \frac{1}{\xi _n -u_n r_{n}}. \end{aligned}$$

Solving this inequality and noting that \(r_n\leqslant 1\), we obtain again condition (4).

(c) It remains to show that (4) holds for every \(n\leqslant n_0-1\) by a suitable modification of the rates, and then complete the induction argument. To see this, we may modify the rates \((a_i, c_i)\) step by step. Let us start at \(n=1\). First, let \(c_1=0\), then \(v_1=0\). Moreover,

$$\begin{aligned} \frac{\xi _1-\sqrt{\xi _1^2- 4 u_{1}}}{2u_{1}} =\frac{1+u_{1}-|1-u_{1}|}{2u_{1}}= {\left\{ \begin{array}{ll} 1 \quad \text {if}\, u_{1}\leqslant 1,\\ u_1^{-1} \quad \text {if}\,u_{1}> 1. \end{array}\right. } \end{aligned}$$

Hence we can simply choose \(a_{1}\leqslant b_{1}\) which implies that \(u_{1}\leqslant 1\). At the same time,

$$\begin{aligned} r_1=\frac{1}{\xi _1-u_1 r_0}=\frac{1}{1+u_1(1-r_0)}\leqslant 1. \end{aligned}$$

Therefore, for the modified rates, the assertion holds at \(n=1\). Note that this modification does not change \(r_n\) for \(n\geqslant 3\) or \((a_n,b_n,c_n)\) for \(n\geqslant 2\). Besides, a smaller \(r_1\) yields a smaller \(r_2\). Continuing the modification step by step, we arrive at the required conclusion.\(\square \)

In particular, if \(u_n\equiv 1\), the condition of the lemma becomes

$$\begin{aligned} \sqrt{v_{n-1}(4+v_{n-1})}-v_{n-1}\geqslant \sqrt{v_{n}(4+v_{n})}-v_{n} \end{aligned}$$

which holds once \(v_n\) is decreasing in \(n\) since the function \(\sqrt{x(x+4)}-x\) is increasing in \(x\).
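The monotonicity of \(f(x)=\sqrt{x(x+4)}-x\) follows from \(f'(x)=(x+2)/\sqrt{x^2+4x}-1>0\), since \((x+2)^2>x^2+4x\). A quick numerical sketch in Python (the grid below is an assumption for illustration):

```python
import math

# Illustration (assumed grid, not a proof): f(x) = sqrt(x*(x+4)) - x is
# strictly increasing on (0, infinity); here we check it on [0.01, 10].
def f(x):
    return math.sqrt(x * (x + 4)) - x

xs = [0.01 * k for k in range(1, 1001)]
assert all(f(a) < f(b) for a, b in zip(xs, xs[1:]))
```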

Lemma 5.4

Given two sequences \(\{p_n\}\) and \(\{q_n\}\), suppose that \(q_n\uparrow \uparrow \infty \) as \((n_0\leqslant )\,n\uparrow \infty \).

  1. (1)

    If

    $$\begin{aligned} \frac{p_{n+1}-p_n}{q_{n+1}-q_n}\geqslant \eta ,\qquad n\geqslant n_0, \end{aligned}$$

    then

    $$\begin{aligned} \varliminf _n \frac{p_n}{q_n}\geqslant \eta . \end{aligned}$$
  2. (2)

    Dually, if

    $$\begin{aligned} \frac{p_{n+1}-p_n}{q_{n+1}-q_n}\leqslant \varepsilon ,\qquad n\geqslant n_0, \end{aligned}$$

    then

    $$\begin{aligned} \varlimsup _n \frac{p_n}{q_n}\leqslant \varepsilon . \end{aligned}$$

Proof

Here we prove part (1) of the lemma only. Since \(q_n\uparrow \uparrow \), by assumption and the proportional property, we have

$$\begin{aligned} \frac{p_{n+1}-p_{n_0}}{q_{n+1}-q_{n_0}} =\frac{(p_{n+1}-p_n)+\cdots + (p_{n_0+1}-p_{n_0})}{(q_{n+1}-q_n)+\cdots +(q_{n_0+1}-q_{n_0})} \geqslant \eta ,\qquad n\geqslant n_0. \end{aligned}$$

Because \(q_n\uparrow \infty \), we have

$$\begin{aligned} \varliminf _n \frac{p_n}{q_n} =\varliminf _n \frac{p_{n+1}/q_{n+1}-p_{n_0}/q_{n+1}}{1-q_{n_0}/q_{n+1}} =\sup _{m>n_0}\inf _{n>m}\frac{p_{n+1}-p_{n_0}}{q_{n+1}-q_{n_0}}\geqslant \eta \end{aligned}$$

as required. \(\square \)
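A concrete illustration of part (1), with an assumed pair of sequences chosen for the sketch: take \(q_n=n^2\) (strictly increasing to infinity) and \(p_n=2n^2+n\), so that the difference ratios are \((4n+3)/(2n+1)\geqslant 2=:\eta \), and indeed \(p_n/q_n=2+1/n\) stays above \(\eta \).

```python
# Assumed example sequences for part (1) of the lemma:
# q_n = n**2 increases strictly to infinity, p_n = 2*n**2 + n.
eta = 2.0
p = [2 * n * n + n for n in range(1, 2001)]
q = [n * n for n in range(1, 2001)]

ratios = [(p[i + 1] - p[i]) / (q[i + 1] - q[i]) for i in range(len(p) - 1)]
assert all(r >= eta for r in ratios)                 # hypothesis of part (1)
assert all(pn / qn >= eta for pn, qn in zip(p, q))   # conclusion (here exact)
```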

With some obvious changes, one may prove the following result.

Lemma 5.5

Suppose that \(q_n\downarrow \downarrow 0\) as \((n_0\leqslant )\,n\uparrow \infty \).

  1. (1)

    If

    $$\begin{aligned} \frac{p_n -p_{n+1}}{q_n-q_{n+1}}\geqslant \eta ,\qquad n\geqslant n_0, \end{aligned}$$

    then

    $$\begin{aligned} \varliminf _n \frac{p_n}{q_n}\geqslant \eta . \end{aligned}$$
  2. (2)

    If

    $$\begin{aligned} \frac{p_n -p_{n+1}}{q_n-q_{n+1}}\leqslant \varepsilon ,\qquad n\geqslant n_0, \end{aligned}$$

    then

    $$\begin{aligned} \varlimsup _n \frac{p_n}{q_n}\leqslant \varepsilon . \end{aligned}$$

Corollary 5.6

Let

$$\begin{aligned} A_n=\sum _0^n \mu _i h_i^2,\qquad B_n=\sum _{k\geqslant n}\frac{1}{h_kh_{k+1}\mu _k b_k}. \end{aligned}$$

If \(B_n=\infty \) and \(\lim _n A_n =\infty \), then \(\sigma _\mathrm{ess}(\Omega _{\min }^c)\ne \emptyset \). Next, assume that \(B_n<\infty \).

  1. (1)

    If \(\inf _{n\gg 1}a_n>0\) and \(\lim _n h_n^2\mu _n \sqrt{a_n}B_n=0\), then \(\lim _n A_n B_n=0\) and so \(\sigma _\mathrm{ess}\big (\Omega _{\min }^c\big )=\emptyset \).

  2. (2)

    If either \(\varliminf _n h_n^2\mu _n B_n>0\) or \(\varliminf _n h_n^2\mu _n \sqrt{a_n}B_n>0\) plus \(\inf _{n\gg 1} r_n>0\), then \(\varliminf _n A_n B_n>0\) and so \(\sigma _\mathrm{ess}\big (\Omega _{\min }^c\big )\ne \emptyset \).

Proof

The trivial case that \(B_n=\infty \) is easy by our criterion. Now, assume that \(B_n<\infty \). Note that \(B_n^{-1}\uparrow \uparrow \infty \) as \(n\uparrow \infty \). We have

$$\begin{aligned} \frac{A_{n+1}-A_n}{B_{n+1}^{-1}- B_n^{-1}}&= \frac{\mu _{n+1}h_{n+1}^2B_n B_{n+1}}{1/(h_n h_{n+1}\mu _n b_n)}\\&= h_n h_{n+1}^3\mu _n\mu _{n+1}b_n B_{n+1}\bigg (B_{n+1} +\frac{1}{h_n h_{n+1}\mu _n b_n}\bigg )\\&= \big (h_{n+1}^2\mu _{n+1}\sqrt{a_{n+1}}B_{n+1}\big )^2 r_n + h_{n+1}^2\mu _{n+1}B_{n+1}. \end{aligned}$$

By part (2) of Lemma 5.4, it follows that

$$\begin{aligned} \lim _n A_n B_n=0\quad \text {once}\quad \lim _n h_{n}^2\mu _{n}\sqrt{a_n}\,B_{n}=0. \end{aligned}$$

We have proved part (1) of the corollary. The proof of part (2) is similar.\(\square \)

Lemma 5.7

Assume that

$$\begin{aligned} r_n< \bigg (\frac{b_n}{u_n a_{n+1}}\bigg )^{1/4},\qquad n\gg 1 \end{aligned}$$

and \(\lim _n h_n^2\mu _n \sqrt{a_n}=\infty \).

  1. (1)

    If \(\lim _n \big [b_n/\sqrt{a_n} -r_n^2\sqrt{a_{n+1}}\big ]=\infty \), then \(\lim _n h_n^2\mu _n \sqrt{a_n}\,B_n=0\).

  2. (2)

    If \(\inf _{n\gg 1}r_n\!>\!0\) and \(\varlimsup _n \big [b_n/\sqrt{a_n} -r_n^2\sqrt{a_{n+1}}\big ]\!<\!\infty \), then \(\varliminf _n h_n^2\mu _n \sqrt{a_n}\,B_n >0\).

Proof

Note that \(h_n^2\mu _n \sqrt{a_n} \) is strictly increasing iff

$$\begin{aligned} r_n< \sqrt{\frac{b_n}{\sqrt{a_n a_{n+1}}}}=\bigg (\frac{b_n}{u_n a_{n+1}}\bigg )^{1/4}. \end{aligned}$$

If \(\lim _n h_n^2\mu _n \sqrt{a_n} =\infty \), then by Lemma 5.5, the study of the limit

$$\begin{aligned} h_n^2\mu _n \sqrt{a_n}\,B_n=\frac{B_n}{1/(h_n^2\mu _n \sqrt{a_n}\,)} \end{aligned}$$

can be reduced to examining the limit of

$$\begin{aligned} \frac{1/(h_n h_{n+1}\mu _n b_n)}{1/(h_n^2\mu _n \sqrt{a_n}) -1/( h_{n+1}^2\mu _{n+1} \sqrt{a_{n+1}} )} =\frac{r_n}{b_n/\sqrt{a_n} -r_n^2\sqrt{a_{n+1}}}. \end{aligned}$$

\(\square \)

The next result shows that once we know the precise leading order of the summands, the computation used in Theorem 2.1 becomes much easier.

Lemma 5.8

  1. (1)

    If both \(\mu _n h_n^2\) and \(\mu _n b_n h_n h_{n+1}\) have algebraic tail (i.e., \(\sim n^{\alpha }\) for some \(\alpha >0\)), then

    $$\begin{aligned}&\sum _{j=0}^n \mu _j h_j^2\sum _{k=n}^\infty \frac{1}{h_k h_{k+1}\mu _k b_k}\sim \frac{n^2}{b_n}r_n\qquad \text {as} \;\; n\rightarrow \infty .\\&\sum _{j=n+1}^{\infty } \mu _j h_j^2\sum _{k=0}^{n}\frac{1}{h_k h_{k+1}\mu _k b_k}\sim \frac{n^2}{a_{n+1} r_n}\qquad \text {as} \;\; n\rightarrow \infty . \end{aligned}$$
  2. (2)

    If both \(\mu _n h_n^2\) and \(\mu _n b_n h_n h_{n+1}\) have exponential tail (i.e., \(\sim e^{\alpha n}\) for some \(\alpha >0\)), then

    $$\begin{aligned}&\sum _{j=0}^n \mu _j h_j^2\sum _{k=n}^\infty \frac{1}{h_k h_{k+1}\mu _k b_k}\sim \frac{r_n}{b_n}\qquad \text {as} \;\; n\rightarrow \infty ,\\&\sum _{j=n+1}^{\infty } \mu _j h_j^2\sum _{k=0}^{n}\frac{1}{h_k h_{k+1}\mu _k b_k}\sim \frac{1}{a_{n+1} r_n}\qquad \text {as} \;\; n\rightarrow \infty . \end{aligned}$$

Proof

Let \(\mu _n h_n^2\sim n^{\alpha }\) and \(\mu _n b_n h_n h_{n+1}\sim n^{\beta }\). Then

$$\begin{aligned} \sum _{j=0}^n \mu _j h_j^2\sim n\mu _n h_n^2,\qquad \sum _{k=n}^\infty \frac{1}{h_k h_{k+1}\mu _k b_k}\sim \frac{n}{h_n h_{n+1}\mu _n b_n}. \end{aligned}$$

Hence we obtain the first assertion in part (1). The other assertion can be proved similarly.\(\square \)
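Note that for \(\mu _j h_j^2\sim j^{\alpha }\), the partial sum \(\sum _{j\leqslant n} j^{\alpha }\) is of the same order as \(n\cdot n^{\alpha }\), the ratio tending to \(1/(\alpha +1)\); only the order matters for the criterion. A small Python check with the assumed value \(\alpha =2\):

```python
# Numerical illustration (assumed alpha = 2): for an algebraic tail
# mu_j * h_j**2 ~ j**alpha, the partial sum sum_{j<=n} j**alpha grows like
# n * n**alpha up to a constant factor; the ratio tends to 1/(alpha + 1).
alpha = 2
n = 10_000
partial = sum(j**alpha for j in range(n + 1))
ratio = partial / (n * n**alpha)
assert abs(ratio - 1 / (alpha + 1)) < 1e-3
```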

6 Proofs of Examples 2.9–2.11

Proof of Example 2.9

The conclusion that \(\sigma _\mathrm{ess}\big (\Omega _{\min }^c\big )\ne \emptyset \) is actually known, since the principal eigenvalue \(\lambda _0\big (\Omega _{\min }^c\big )=0\) by [3, Example 9.16], which implies the required assertion.

We now prove the assertion by our new criterion. First, noting that \(h_n\uparrow \), if \(h_{\infty }:=\lim _n h_n<\infty \), then \(B_n=\infty \) and so the conclusion follows by Corollary 5.6. Next, let \(h_{\infty }=\infty \). Then \(\lim _n h_n^2\mu _n \sqrt{a_n}=\infty \). Because \(c_n \downarrow 0\),

$$\begin{aligned} r_n< 1=\bigg (\frac{b_n}{u_n a_{n+1}}\bigg )^{1/4},\quad n\geqslant 1,\qquad \lim _n r_n=1, \end{aligned}$$

we have \(\varlimsup _n \big [b_n/\sqrt{a_n} -r_n^2\sqrt{a_{n+1}}\big ]=0\) and so the assertion that \(\sigma _\mathrm{ess}\big (\Omega _{\min }^c\big )\ne \emptyset \) follows using part (2) of Lemma 5.7 and part (2) of Corollary 5.6. \(\square \)

Proof of Example 2.10

The model is modified from [3, Example 9.19] where it was proved that \(\lambda _0(\Omega _{\min }^c)>0\). The key for this example is that \(u_n\equiv 1\) and \(v_n\equiv 9/4\). Hence \(r_n\sim 1/4\) and then \(h_n\sim 4^n\). Because \(\mu _n\sim n^{-1}\), we have \(\sum _n\mu _n h_n^2=\infty \) and \(\lim _n h_n^2\mu _n \sqrt{a_n}=\infty \). It is obvious that

$$\begin{aligned} r_n< \bigg (\frac{b_n}{u_n a_{n+1}}\bigg )^{1/4}=\bigg (\frac{n+1}{ n+2}\bigg )^{1/4},\qquad n\geqslant 1. \end{aligned}$$

Besides, we have

$$\begin{aligned} b_n/\sqrt{a_n}-r_n^2\sqrt{a_{n+1}}\sim \frac{15}{16}\sqrt{n} \qquad \text {as}\;\; n\rightarrow \infty . \end{aligned}$$

The assertion that \(\sigma _\mathrm{ess}\big (\Omega _{\min }^c\big )=\emptyset \) now follows from part (1) of Lemma 5.7 and part (1) of Corollary 5.6.

Lemma 5.8 is applicable to this example. Because \(r_n\sim 1/4\) and then \(h_n\sim 4^n\), we are in the case of exponential tail. By the first assertion in part (2) of Lemma 5.8, we have

$$\begin{aligned} \sum _{j=0}^n \mu _j h_j^2\sum _{k=n}^\infty \frac{1}{h_k h_{k+1}\mu _k b_k} \sim \frac{r_n}{b_n}\sim \frac{1}{n}\rightarrow 0\qquad \text {as} \;\; n\rightarrow \infty . \end{aligned}$$

The required assertion then follows from part (1) of Theorem 2.1.\(\square \)
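The key claim \(r_n\sim 1/4\) can also be seen by iterating the recursion of Proposition 5.2 directly: with \(u_n\equiv 1\) and \(v_n\equiv 9/4\), one has \(\xi _n=2+v_n=17/4\), and the map \(r\mapsto 1/(\xi -r)\) is a contraction near its fixed point \(1/4\). A Python sketch (the starting value \(r_0=0.9\) is an arbitrary assumption; any start in \((0,1)\) gives the same limit):

```python
# Iterate r_n = 1/(xi_n - u_n * r_{n-1}) with u_n = 1, v_n = 9/4, so
# xi_n = 2 + v_n = 17/4; the admissible fixed point in (0, 1] is 1/4.
xi = 17 / 4
r = 0.9          # assumed initial value; illustrative only
for _ in range(100):
    r = 1.0 / (xi - r)
assert abs(r - 0.25) < 1e-12
```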

Proof of Example 2.11

The model is modified from [3, Example 9.20] where it was proved that \(\lambda _0\big (\Omega _{\min }^c\big )>0\). We have \(u_n\equiv 1\) and

$$\begin{aligned} v_n= \frac{1}{(n+1)^2}\bigg [5 +\frac{10}{5 n - 12}\bigg ] \end{aligned}$$

which is decreasing in \(n\geqslant 3\). Because we are studying a property at infinity, without loss of generality, we may apply Lemma 5.3 to derive

$$\begin{aligned} r_n\leqslant \frac{2+v_n-\sqrt{(4+v_n)v_n}}{2}<\bigg (\frac{n+1}{n+2}\bigg )^{1/2},\qquad n\geqslant 3. \end{aligned}$$
(5)
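Inequality (5) can be confirmed numerically on a large initial range (a sketch only, not a proof for all \(n\)):

```python
import math

# Check (5) for 3 <= n <= 10000: with v_n = (5 + 10/(5n - 12))/(n + 1)**2,
# the bound from Lemma 5.3 stays below ((n + 1)/(n + 2))**(1/2).
def v(n):
    return (5 + 10 / (5 * n - 12)) / (n + 1) ** 2

for n in range(3, 10_001):
    lhs = (2 + v(n) - math.sqrt((4 + v(n)) * v(n))) / 2
    rhs = math.sqrt((n + 1) / (n + 2))
    assert lhs < rhs
```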

From (5), we get

$$\begin{aligned} h_n=\bigg (\prod _{k=0}^{n-1}r_k\bigg )^{-1} > \bigg (\prod _{k=0}^{n-1}\bigg (\frac{k+1}{k+2}\bigg )^{1/2}\bigg )^{-1} =(n+1)^{1/2}. \end{aligned}$$

Hence \(\mu _n h_n^2\geqslant n^{-1}\) and so \(\sum \mu _n h_n^2=\infty \).

To estimate \(\varlimsup _n \big [b_n/\sqrt{a_n} -r_n^2\sqrt{a_{n+1}}\big ]\), we need a lower bound of \(r_n\). An easy way to obtain one is to modify the rather precise upper bound of \(r_n\):

$$\begin{aligned} r_n\leqslant \frac{\xi _n-\sqrt{\xi _n^2- 4 u_n}}{2u_n} =\frac{\xi _n}{2u_n}\Bigg [1-\sqrt{1-\frac{4u_n}{\xi _n^2}}\,\Bigg ]. \end{aligned}$$

Clearly, we need only look for a lower bound of \(-\sqrt{1-{4u_n}/{\xi _n^2}}\) as \(n\rightarrow \infty .\) For this example, \(u_n\equiv 1\), \(\xi _n=2+ v_n\). Since \(v_n\rightarrow 0\), it is clear that

$$\begin{aligned} -\sqrt{1-{4u_n}/{\xi _n^2}}\rightarrow 0\qquad \text {as} \;\; n\rightarrow \infty . \end{aligned}$$

Thus, it is natural to approximate this quantity by a second-order polynomial in \(1/n\):

$$\begin{aligned} -\sqrt{1-\frac{4u_n}{\xi _n^2}}\sim - \frac{2.23615}{n}+\frac{1.81327}{n^2}. \end{aligned}$$

This leads us to choose the following lower bound:

$$\begin{aligned} -\sqrt{1-\frac{4u_n}{\xi _n^2}}\geqslant -\frac{3}{n} + \frac{2}{n^2}. \end{aligned}$$

Then

$$\begin{aligned} r_n> \bigg (1+\frac{v_n}{2}\bigg )\bigg (1-\frac{3}{n}+\frac{2}{n^2}\bigg ) \;\;\text {(by a numerical check)}\,=:\eta _n \text { (Fig. 1).} \end{aligned}$$
Fig. 1 The top curve is \(r_n\). The bottom curve is \(\big (1+\frac{v_n}{2}\big )\big (1-\frac{3}{n}+\frac{2}{n^2}\big )\)

Furthermore,

$$\begin{aligned} \varlimsup _n \big [b_n/\sqrt{a_n} -r_n^2\sqrt{a_{n+1}}\big ] \leqslant \varlimsup _n \big [n+1-\eta _n^2 (n+2)\big ]=5 <\infty . \end{aligned}$$

We mention that there is enough freedom in choosing the lower bound. For instance, replacing \(2/n^2\) by \(5/n^2\) in the lower bound above, the result is the same. Using part (2) of Lemma 5.7 and part (2) of Corollary 5.6, we obtain \(\sigma _\mathrm{ess}\big (\Omega _{\min }^c\big )\ne \emptyset \).

Alternatively, we can also use Lemma 5.8 to prove this example. We have seen that \(r_n\sim 1-\alpha /n\) for some \(\alpha >0\). This means that \(h_n\sim n^{\alpha }\) and hence we are in the case of algebraic tail. By the first assertion in part (1) of Lemma 5.8, we have

$$\begin{aligned} \sum _{j=0}^n \mu _j h_j^2\sum _{k=n}^\infty \frac{1}{h_k h_{k+1}\mu _k b_k}\sim \frac{n^2}{b_n} r_n\sim r_n\sim 1\qquad \text { as} \;\; n\rightarrow \infty . \end{aligned}$$

Then the required assertion follows from part (1) of Theorem 2.1. \(\square \)

7 Elliptic Differential Operators (Diffusions)

Consider the elliptic differential (diffusion) operator

$$\begin{aligned} L^c= a(x)\frac{\text {d}^2}{\text {d}x^2}+ b(x)\frac{\text {d}}{\text {d}x}-c(x) ,\qquad a(x)>0,\; c(x)\geqslant 0 \end{aligned}$$

on \(E:=(0, \infty )\) or \(\mathbb R\). Define two measures

$$\begin{aligned} \mu (\text {d}x)= \frac{e^{C(x)}}{a(x)}\text {d}x,\qquad \nu (\text {d}x)= e^{C(x)}\text {d}x, \end{aligned}$$

where \(C(x)=\int _{\theta }^x (b/a)(y)\text {d}y\) and \(\theta \) is a reference point. Define also a measure deduced from \(\nu \)

$$\begin{aligned} {\hat{\nu }}(\text {d}x)=e^{-C(x)}\text {d}x. \end{aligned}$$

Corresponding to the operator, we have the following Dirichlet form

$$\begin{aligned} D^c(f)=\int _E {f'(x)}^2 \nu (\text {d}x)+\int _E c(x)f(x)^2\mu (\text {d}x) \end{aligned}$$

with domains: either the maximal one

$$\begin{aligned} {\fancyscript{D}}_{\max }(D^c)=\big \{f\in L^2(\mu ): f \text { is absolutely continuous and }D^c(f)<\infty \big \}, \end{aligned}$$

or the minimal one \({\fancyscript{D}}_{\min }(D^c)\), which is the closure of the set

$$\begin{aligned} \big \{f\in {\fancyscript{C}}^2(E): f \text { has a compact support}\big \} \end{aligned}$$

with respect to the norm \(\Vert \cdot \Vert _D\), as in the discrete case (§2).

In parallel to Theorem 2.1, we have the following result.

Theorem 7.1

Let \(E=(0, \infty )\) and let \(h\) (with \(h\ne 0\) a.e.) be an \(L^c\)-harmonic function (to be constructed in Theorem 7.4 below): \(L^c h=0\) a.e.

  1. (1)

    If \({\hat{\nu }}\big (h^{-2}\big )<\infty \), then \(\sigma _{\mathrm{ess}}(L_{\min }^c)=\emptyset \) iff

    $$\begin{aligned} \lim _{x\rightarrow \infty }\mu \big (h^21\!\!1_{(0,x)}\big ){\hat{\nu }}\big (h^{-2}1\!\!1_{(x,\infty )}\big ) =\lim _{x\rightarrow \infty }\int _0^x h^2\text {d}\mu \int _x^\infty \frac{1}{h^2}\text {d}{\hat{\nu }}=0. \end{aligned}$$
  2. (2)

    If \(\mu \big (h^2\big )<\infty \), then \(\sigma _{\mathrm{ess}}(L_{\max }^c)=\emptyset \) iff

    $$\begin{aligned} \lim _{x\rightarrow \infty }\mu \big (h^21\!\!1_{(x, \infty )}\big ){\hat{\nu }}\big (h^{-2}1\!\!1_{(0, x)}\big ) =\lim _{x\rightarrow \infty }\int _x^\infty h^2\text {d}\mu \int _0^x\frac{1}{h^2}\text {d}{\hat{\nu }}=0. \end{aligned}$$
  3. (3)

    If \({\hat{\nu }}\big (h^{-2}\big )=\infty =\mu \big (h^2\big )\), then \(\sigma _{\mathrm{ess}}(L_{\min }^c)=\sigma _{\mathrm{ess}}(L_{\max }^c)\ne \emptyset \).

When \(c(x)\equiv 0\) and \(b(x)\equiv 0\), the first two parts of the theorem go back to [9]. When \(c(x)\equiv 0\), part (1) was presented in [1, Theorem 4.1] and [7, Example 6.1]; under the same condition \(c(x)\equiv 0\), part (2) of the theorem is due to [10, Theorem 1.1], again replacing the uniqueness condition by a use of the maximal process. In the general setup, a different criterion for part (1) was presented in [12] and [7, Corollary 5.3], assuming some weak smoothness conditions on the coefficients of the operator. Unfortunately, we are unable to state their results here in a short way. In particular, in [7, Corollary 5.3], the family of intervals \(\{I(x): x\in E\}\) is assumed to exist but is not explicitly constructed. When \(b(x)\equiv 0\) in \(L^c\), a compact criterion for part (1) was presented in [12]. For general \(b\) in part (1), the problem was also handled in [9, 12] by a standard change of variables (called a time-change in probabilistic language). Unfortunately, as far as we know, the conditions of these general results are usually not easy to verify in practice, and so a different approach should be meaningful. Here is a key difference between the approaches: the time-change technique eliminates the first-order differential term \(b\), whereas our \(H\)-transform eliminates the killing (or potential) term \(c\).

Corollary 7.2

If \(\sigma _{\mathrm{ess}}(L_{\min }^c)=\emptyset \), then \(\lambda _0(L_{\min }^c)>0\).

To study a construction (existence and uniqueness) of an a.e. \(L^c\)-harmonic function, we need the following hypothesis.

Hypotheses 7.3

Let \(J\subset {\mathbb R}\). Suppose that

  1. (1)

    \(a>0\) on \(J\);

  2. (2)

    \(b/a\) and \(c/a\) are locally integrable with respect to the Lebesgue measure.

In an earlier version, we assumed that \(e^C/a\) is locally integrable. Actually, this is equivalent to the local integrability of \(b/a\), since \(C\), and hence \(e^C\), is locally bounded due to the assumption that \(b/a\) is locally integrable.

Theorem 7.4

Under Hypothesis 7.3, for every \( \gamma ^{(0)},\,\gamma ^{(1)}\! \in \! {\mathbb {R}}\), an a.e. \(L^c\)-harmonic function \(f\) always exists. More precisely, \(f\) can be chosen as the first component of the vector \(F^*\) obtained uniquely by the following successive approximation schemes.

  1. (1)

    The first successive approximation scheme. Define

    $$\begin{aligned} F^{(1)}(x)\!=\!F(\theta )\!=\! \begin{pmatrix} \gamma ^{(0)}\\ \gamma ^{(1)}\end{pmatrix}, \;\; F^{(n+1)}(x)\!=\!F(\theta )+\int _{\theta }^x G F^{(n)},\quad x\!\in \! J,\; n\!\geqslant \! 1, \qquad \end{aligned}$$
    (6)

    where \(G(x)=\begin{pmatrix} 0 &{} e^{-C}\\ c e^C/a &{} 0\end{pmatrix}.\) Then

    $$\begin{aligned} F^{(n)}\rightarrow \begin{pmatrix} f\\ e^C f'\end{pmatrix}=:F^*\qquad \text {as }n\rightarrow \infty \end{aligned}$$
    (7)

    uniformly on each compact subinterval of \(J\). In other words, \(F^*\) is the unique solution to the equation

    $$\begin{aligned} F(x)=F(\theta )+\int _{\theta }^x G F,\qquad x\in J \end{aligned}$$
    (8)

    and so it is absolutely continuous on each compact subinterval of \(J\).

  2. (2)

    The second successive approximation scheme. Define

    $$\begin{aligned} {\widetilde{F}}^{(1)}(x)=F(\theta ), \qquad {\widetilde{F}}^{(n+1)}(x)= \int _{\theta }^x G {\widetilde{F}}^{(n)},\quad x\in J,\; n\geqslant 1, \end{aligned}$$
    (9)

then \(F^*=\sum _{n=1}^\infty {\widetilde{F}}^{(n)}\) (the so-called Peano–Baker series).

Proof

(a) Part (1) is taken from [14, Theorem 1.2.1 and its proof plus Theorem 2.2.1].

(b) By induction, it is easy to check that \(F^{(n)}=\sum _{k=1}^n{\widetilde{F}}^{(k)}\). Then part (2) follows from part (1). \(\square \)

A simple way to understand Theorem 7.4 is to look at the differential form of (8):

$$\begin{aligned} F'= G F,\qquad \text {a.e.} \end{aligned}$$
(10)

From (9), one sees that the sequence \(\big \{{\widetilde{F}}^{(n)}\big \}_{n\geqslant 1}\) is given by a one-step algorithm, as the one for \(\{r_n\}_{n\geqslant 1}\) used in the discrete case. Then \(F^*\) is given by the summation of \(\big \{{\widetilde{F}}^{(n)}\big \}\), which is different from the discrete situation where \(h\) is defined by a product of \(\{r_n^{-1}\}\).
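As a minimal illustration of scheme (6), the following Python sketch runs the Picard iteration for an assumed toy case (Example 7.6 with \(\alpha =1\) on \(J=[0,1]\): \(a=1\), \(b=0\), hence \(C=0\), \(c\equiv 1/4\), \(\theta =0\), \(F(\theta )=(1,0)\)); the limit solves \(f''=f/4\) with \(f(0)=1\), \(f'(0)=0\), i.e., \(f(x)=\cosh (x/2)\).

```python
import math

# Picard iteration F^{(n+1)} = F(theta) + int_theta^x G F^{(n)} for the
# assumed toy data a = 1, b = 0 (so C = 0), c = 1/4, theta = 0, F(0) = (1, 0).
N = 2000                      # grid intervals on J = [0, 1] (assumed)
hstep = 1.0 / N

def cumtrapz(y):
    """Cumulative trapezoidal integral of the sampled function y from 0."""
    out = [0.0]
    for i in range(1, len(y)):
        out.append(out[-1] + 0.5 * hstep * (y[i - 1] + y[i]))
    return out

f = [1.0] * (N + 1)           # first component of F^{(1)}
g = [0.0] * (N + 1)           # second component, e^C f'
for _ in range(40):
    f_new = [1.0 + s for s in cumtrapz(g)]                  # f' = e^{-C} g = g
    g_new = cumtrapz([0.25 * fi for fi in f])               # g' = (c e^C / a) f
    f, g = f_new, g_new

assert abs(f[-1] - math.cosh(0.5)) < 1e-5   # limit is cosh(x/2) at x = 1
```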

Theorem 7.5

Let \(c, \gamma ^{(0)},\,\gamma ^{(1)}\geqslant 0\). Then under Hypothesis 7.3,

  1. (1)

    the solution \(F^*\) constructed in Theorem 7.4 is actually the (finite) minimal nonnegative solution to (8). Furthermore, \({F}^{(n)}\uparrow F^*\) (pointwise) as \(n\rightarrow \infty \).

  2. (2)

    Let \({\bar{F}}\) be a solution to the inequality

    $$\begin{aligned} F(x)\geqslant F(\theta )+\int _{\theta }^x G F,\qquad x\in J \end{aligned}$$
    (11)

or, more simply, to the inequality

    $$\begin{aligned} F'\geqslant G F,\qquad \text { with } {\bar{F}}(\theta )\geqslant F(\theta ). \end{aligned}$$
    (12)

    Then \({\bar{F}}\geqslant F^*\).

Proof

Apply [2, Theorems 2.2, 2.6 and 2.9].\(\square \)

Example 7.6

Let

$$\begin{aligned} L^c=\frac{\text {d}^2}{\text {d}x^2}-c(x),\quad c(x):=\frac{1}{4}x^{2\alpha -2}+\frac{\alpha -1}{2}x^{\alpha -2},\qquad \alpha \geqslant 1. \end{aligned}$$

Then \(\sigma _\mathrm{ess}(L_{\min }^c)=\emptyset \) if \(\alpha >1\) and \(\sigma _\mathrm{ess}(L_{\min }^c)\ne \emptyset \) if \(\alpha =1\).

Proof

By Theorem 7.4, we have \(F^*=\begin{pmatrix} h\\ e^C h'\end{pmatrix}.\) Hence, it is natural to choose \(F(\theta )=\begin{pmatrix} 1\\ 0\end{pmatrix}.\) Because of this, we may denote the first component of \(F^{(n)}\) by \(h^{(n)}\). Similarly, we have \({\tilde{h}}^{(n)}\) from the second successive approximation scheme.

First, using Mathematica, we have \({\tilde{h}}^{(2n)}=0\),

$$\begin{aligned} {\tilde{h}}^{(1)}(x)&= 1,\\ {\tilde{h}}^{(3)}(x)&= \frac{x^\alpha }{2 \alpha }+\frac{x^{2 \alpha }}{8 \alpha (2 \alpha -1)}, \\ {\tilde{h}}^{(5)}(x)&= \frac{(\alpha -1) x^{2 \alpha }}{8 \alpha ^2 (2 \alpha -1)}+\frac{(5 \alpha -3) x^{3 \alpha }}{48 \alpha ^2 (2 \alpha -1) (3 \alpha -1)}+\frac{x^{4 \alpha }}{128 \alpha ^2 (2 \alpha -1) (4 \alpha -1)},\\ {\tilde{h}}^{(7)}(x)&= \frac{(\alpha -1)^2 x^{3 \alpha }}{48 \alpha ^3 \left( 6 \alpha ^2-5 \alpha +1\right) }+\frac{(\alpha -1) (7 \alpha -3) x^{4 \alpha }}{192 \alpha ^3 (2 \alpha -1) (3 \alpha -1) (4 \alpha -1)}\\&+\,\frac{\left( 89 \alpha ^2-80 \alpha +15\right) x^{5 \alpha }}{3840 \alpha ^3 \left( 120 \alpha ^4-154 \alpha ^3+71 \alpha ^2-14 \alpha +1\right) }\\&+\,\frac{x^{6 \alpha }}{3072 \alpha ^3 \left( 48 \alpha ^3-44 \alpha ^2+12 \alpha -1\right) }. \end{aligned}$$

Their leading orders are as follows:

$$\begin{aligned} {\tilde{h}}^{(1)}(x)=1,\; {\tilde{h}}^{(3)}(x)\sim \frac{1}{4} \bigg (\frac{x^\alpha }{2 \alpha }\bigg )^2\!, \; {\tilde{h}}^{(5)}(x)\sim \frac{1}{64} \bigg (\frac{x^\alpha }{2 \alpha }\bigg )^4\!,\; {\tilde{h}}^{(7)}(x)\sim \frac{1}{2304} \bigg (\frac{x^\alpha }{2 \alpha }\bigg )^6\!. \end{aligned}$$

More simply, one may use \(x^{2\alpha -2}/4\) instead of the original \(c(x)\); one gets the same leading order of \({\tilde{h}}^{(2n+1)}\). From this, we guess that \(h=\sum _{n=1}^\infty {\tilde{h}}^{(n)}\) behaves like \(\exp \frac{x^\alpha }{2 \alpha }\). This becomes clearer when we use the first successive approximation scheme.

$$\begin{aligned} h^{(1)}(x)=h^{(2)}(x)&= 1,\\ h^{(3)}(x)=h^{(4)}(x)&= \underbrace{1+\frac{x^\alpha }{2 \alpha }}+\frac{x^{2 \alpha }}{8 \alpha (2 \alpha -1)},\\ h^{(5)}(x)=h^{(6)}(x)&= \underbrace{1+\frac{x^\alpha }{2 \alpha }\!+\!\frac{x^{2 \alpha }}{8 \alpha ^2}}+\frac{(5 \alpha -3) x^{3 \alpha }}{48 \alpha ^2 \left( 6 \alpha ^2-5 \alpha \!+\!1\right) }\!+\!\frac{x^{4 \alpha }}{128 \alpha ^2 \left( 8 \alpha ^2-6 \alpha \!+\!1\right) },\\ h^{(7)}(x)=h^{(8)}(x)&= \underbrace{1+\frac{x^\alpha }{2 \alpha }+\frac{x^{2 \alpha }}{8 \alpha ^2}+\frac{x^{3 \alpha }}{48 \alpha ^3}}+\frac{\left( 23 \alpha ^2-23 \alpha +6\right) x^{4 \alpha }}{384 \alpha ^3 (3 \alpha -1) \left( 8 \alpha ^2-6 \alpha +1\right) }\\&+\, \frac{\left( 89 \alpha ^2-80 \alpha +15\right) x^{5 \alpha }}{3840 \alpha ^3 (3 \alpha -1) (5 \alpha -1) \left( 8 \alpha ^2-6 \alpha +1\right) }\\&+\, \frac{x^{6 \alpha }}{3072 \alpha ^3 (6 \alpha -1) \left( 8 \alpha ^2-6 \alpha +1\right) }. \end{aligned}$$

Obviously, \(h^{(n)}\) approximates

$$\begin{aligned} \exp \bigg [\frac{x^{\alpha }}{2 \alpha }\bigg ] =\sum _{n=0}^\infty \frac{1}{n!}\bigg (\frac{x^{\alpha }}{2 \alpha }\bigg )^n \end{aligned}$$

step by step. Since \(\alpha \geqslant 1\), we have seen that

$$\begin{aligned} h^{(2n+2)}\geqslant \sum _{k=0}^{n} \frac{1}{k!}\bigg (\frac{x^{\alpha }}{2 \alpha }\bigg )^k,\qquad n=0, 1, 2, 3. \end{aligned}$$

This leads to the lower estimate \(h(x)\geqslant \exp \frac{x^\alpha }{2 \alpha }\). We now show that equality actually holds here.

In general, in order to check that \(F^*=\begin{pmatrix} h\\ e^C h'\end{pmatrix},\) it is easier to check (10). With \(h=\exp \psi \), from equation (10), it follows that

$$\begin{aligned} a(\psi ''+ {\psi '}^2)+b\psi '=c, \end{aligned}$$

or equivalently,

$$\begin{aligned} \psi ''+ {\psi '}^2+\frac{b}{a}\psi '=\frac{c}{a}. \end{aligned}$$
(13)

In the present case, it is simply

$$\begin{aligned} \psi ''(x) +{\psi '}(x)^2=\frac{1}{4}x^{2\alpha -2}+\frac{\alpha -1}{2}x^{\alpha -2} =\bigg (\frac{x^{\alpha -1}}{2}\bigg )^2 +\frac{\alpha -1}{2}x^{\alpha -2}. \end{aligned}$$

From this, we obtain \(\psi '(x)=x^{\alpha -1}/2\) and then \(\psi (x)=x^{\alpha }/(2\alpha )\). Having \(h\) at hand, the assertion of the example follows from Theorem 7.1. Since \(h\) is increasing, we have \(\mu (h^2)=\infty \), so we need only the last two parts of Theorem 7.1. The details are deferred to the next example, since this one is actually a particular case of Example 7.10 (2) with \(b=0\).

We remark that the precise leading order of \(h\) at infinity is required for our purpose; the natural lower estimate \(h\geqslant h^{(n)}\) for fixed \(n\) is usually not enough. Nevertheless, the successive approximation schemes are still effective in providing practical lower bounds of \(h\). An upper bound of \(h\) is often easier to obtain using (12). We also remark that the simplest way to prove Example 7.6 is to use the following criterion of Molchanov ([11], see also [8, p. 90, Theorem 6]): if \(b=0\), \(a=1\), and \(c\) is bounded below, then \(\sigma _\mathrm{ess}(L_{\min }^c)=\emptyset \) iff

$$\begin{aligned} \text {for each }\theta >0,\qquad \int _x^{x+\theta } c\rightarrow \infty \quad \text {as } x\rightarrow \infty . \end{aligned}$$

From this remark, it should be clear that there is quite a distance from the last special case to our general setup.

With a little modification of the proof of the last example in the case \(\alpha =2\), it follows that \(h(x):=\exp [x^2/2]\) is harmonic for the operator

$$\begin{aligned} L^c=\frac{\text {d}^2}{\text {d}x^2}-c(x),\quad c(x):=x^2+1. \end{aligned}$$

Then, using a shift, we obtain the following result.\(\square \)

Example 7.7

The 1-dimensional harmonic oscillator

$$\begin{aligned} L^c=\frac{\text {d}^2}{\text {d}x^2}-c(x),\quad c(x):=x^2 \end{aligned}$$

has discrete spectrum.

Actually, it is known that the eigenvalues of the last operator \(-L^c\) are simple: \(\lambda _n = 2n + 1\), \(n = 0, 1, \ldots \) with eigenfunction

$$\begin{aligned} g_n(x)= (-1)^n e^{x^2/2}\frac{\text {d}^n}{\text {d}x^n}e^{-x^2},\qquad n = 0, 1, \ldots , \end{aligned}$$

respectively. By symmetry, the conclusion holds not only on the half-line but also on the whole line.
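One can verify the first nontrivial eigenpair directly: the formula above gives \(g_1(x)=2x e^{-x^2/2}\), and \(-g_1''+x^2 g_1=3 g_1\) (that is, \(\lambda _1=3\)). A finite-difference check in Python (the step size \(h=10^{-4}\) and the sample points are assumptions of the sketch):

```python
import math

# Check -g'' + x**2 * g = 3 * g for g_1(x) = 2*x*exp(-x**2/2), i.e. the
# n = 1 case of the Rodrigues-type formula, with eigenvalue lambda_1 = 3.
def g1(x):
    return 2 * x * math.exp(-x * x / 2)

h = 1e-4
for x in [-1.5, -0.3, 0.7, 2.0]:   # assumed sample points
    g2 = (g1(x + h) - 2 * g1(x) + g1(x - h)) / h**2   # central 2nd difference
    assert abs(-g2 + x * x * g1(x) - 3 * g1(x)) < 1e-5
```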

Example 7.8

Let \(E=(0, \infty ), \gamma \geqslant 10/9\), and

$$\begin{aligned} L^c=(1+x)^{\gamma }\frac{\text {d}^2}{\text {d}x^2}+\frac{4\gamma }{5} (1+x)^{\gamma -1}\frac{\text {d}}{\text {d}x}- c(x), \quad c(x):=\frac{\gamma (9\gamma -10)}{100}(1+x)^{\gamma -2}. \end{aligned}$$

Then \(\sigma _\mathrm{ess}(L_{\min }^c)=\emptyset \) if \(\gamma >2\) and \(\sigma _\mathrm{ess}(L_{\min }^c)\ne \emptyset \) if \(\gamma \in [10/9, 2]\).

Proof

We remark that the condition \(\gamma \geqslant 10/9\) ensures that \(c(x)\geqslant 0\).

First, we look for the \(L^c\)-harmonic function \(h\) having the form \(h=\exp \psi \) for some \(\psi \). Then, by (13), we have

$$\begin{aligned} (1+x)^{2} (\psi ''+{\psi '}^2)+\frac{4\gamma }{5} (1+x) \psi '= \frac{\gamma (9\gamma -10)}{100}. \end{aligned}$$

This equation suggests first that \(\psi '=\beta (1+x)^{-1}\) for some constant \(\beta \), and then that \(\beta =\gamma /10\). Hence, we obtain \(h(x)=(1+x)^{\beta }.\)

Next, we have

$$\begin{aligned}&C(x)=\int _0^x \frac{b}{a} =\frac{4\gamma }{5}\log (1+x),\qquad e^{C(x)}=(1+x)^{4\gamma /5},\\&\mu (\text {d}x)= (1+x)^{-\gamma /5}\text {d}x, \qquad {\hat{\nu }}(\text {d}x)=(1+x)^{-4\gamma /5}\text {d}x,\\&\mu \big (h^21\!\!1_{(0, x)}\big )=x,\qquad {\hat{\nu }}\big (h^{-2}1\!\!1_{(x, \infty )}\big )=\frac{x}{(1+x)^{\gamma }}. \end{aligned}$$

Therefore, \(\mu \big (h^21\!\!1_{(0, x)}\big ){\hat{\nu }}\big (h^{-2}1\!\!1_{(x, \infty )}\big )\sim x^{2-\gamma }\) as \(x\rightarrow \infty .\) The result now follows from Theorem 7.1 (1). \(\square \)
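Substituting \(\psi (x)=\beta \log (1+x)\) into the displayed equation reduces it to the \(x\)-free identity \(-\beta +\beta ^2+\frac{4\gamma }{5}\beta =\frac{\gamma (9\gamma -10)}{100}\), which the choice \(\beta =\gamma /10\) satisfies. A quick Python check over sample values of \(\gamma \) (the samples are assumptions of the sketch):

```python
# With psi' = beta/(1+x) and psi'' = -beta/(1+x)**2, the equation becomes
# -beta + beta**2 + (4*gamma/5)*beta = gamma*(9*gamma - 10)/100, for all x.
for gamma in [10 / 9, 1.5, 2.0, 3.0, 7.0]:   # assumed sample values
    beta = gamma / 10
    lhs = -beta + beta**2 + (4 * gamma / 5) * beta
    rhs = gamma * (9 * gamma - 10) / 100
    assert abs(lhs - rhs) < 1e-12
```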

Actually, this example is a special case of Example 7.11 (2).

Up to now, we have worked in one direction: reducing the case \(c(x)\not \equiv 0\) to the case \(c(x)\equiv 0\). Certainly, we can go in the opposite direction: extending the result from \(c(x)\equiv 0\) to \(c(x)\not \equiv 0\). This is actually much easier but very powerful. For simplicity, we restrict ourselves to the special case that \(h>0\). Then one may write \(h=\exp \psi \) for some \(\psi \). This leads to the next result, which is a special case of [5, Corollary 3.7].

Corollary 7.9

Given

$$\begin{aligned} {\widetilde{L}}={\tilde{a}}(x)\frac{\text {d}^2}{\text {d}x^2}+{\tilde{b}}(x)\frac{\text {d}}{\text {d}x}\quad \text { with domain } {\fancyscript{D}}\big ({\widetilde{L}}\big ),\; {\tilde{a}}(x)>0 \end{aligned}$$

and \(\psi \in {\fancyscript{C}}^2(E)\,(E\subset {\mathbb R})\), define

$$\begin{aligned} L&={\widetilde{L}}- 2 {\tilde{a}} \psi '\frac{\text {d}}{\text {d}x} +\Big [{\tilde{a}} {\psi '}^2- {\widetilde{L}}\psi \Big ]\nonumber \\&= {\tilde{a}}\frac{\text {d}^2}{\text {d}x^2}+\big [{\tilde{b}}- 2 {\tilde{a}} \psi '\big ]\frac{\text {d}}{\text {d}x} +\Big [{\tilde{a}} {\psi '}^2- {\tilde{a}}\psi ''-{\tilde{b}}\psi '\Big ],\nonumber \\ {\fancyscript{D}}(L)&=\big \{f\exp [-\psi ]\in L^2(\tilde{\mu }): f\exp [-\psi ]\in {\fancyscript{D}}\big (\widetilde{L}\big )\big \}. \end{aligned}$$
(14)

Then \((L, {\fancyscript{D}}(L))\) and \(\big ({\widetilde{L}}, {\fancyscript{D}}\big ({\widetilde{L}}\big )\big )\) are isospectral \(\big (\hbox {in particular}, \sigma _\mathrm{ess}(L)=\sigma _\mathrm{ess}\big (\widetilde{L}\big )\big )\). Furthermore, if we replace \(\psi '\) by

$$\begin{aligned} \psi '=\frac{{\tilde{b}}-b}{2{\tilde{a}}}\;\;\text { for varying }b \end{aligned}$$
(15)

\(\big (\hbox {assuming}\, \tilde{a}, \tilde{b}, b \in {\fancyscript{C}}^1(E)\big )\), then the operator \(L\) becomes

$$\begin{aligned} L^b={\tilde{a}}\frac{\text {d}^2}{\text {d}x^2}+b\frac{\text {d}}{\text {d}x} +\frac{1}{2} \bigg [\frac{b^2-{\tilde{b}}^2}{2\tilde{a}}- {\tilde{a}}\frac{\text {d}}{\text {d}x}\bigg (\frac{{\tilde{b}}-b}{{\tilde{a}}}\bigg )\bigg ]. \end{aligned}$$

Corresponding to \({\fancyscript{D}}_{\max }\big (\widetilde{L}\big )\), we have \({\fancyscript{D}}_{\max }(L)\) defined by (14) in terms of \(\psi \). Then, we have \(L_{\max }\). Furthermore, we have \(L_{\max }^b\) in terms of (15). Similarly, corresponding to \({\fancyscript{D}}_{\min }\big (\widetilde{L}\big )\), we have \({\fancyscript{D}}_{\min }(L), L_{\min }\) and \(L_{\min }^b\), respectively.

Example 7.10

Let \(E=(0, \infty ), \alpha > 0\) and \(b\in {\fancyscript{C}}(E)\).

  1. (1)

    Define

    $$\begin{aligned} {\widetilde{L}}&= \frac{\text {d}^2}{\text {d}x^2}- x^{\alpha -1}\frac{\text {d}}{\text {d}x},\\ L^b&= \frac{\text {d}^2}{\text {d}x^2}+b(x)\frac{\text {d}}{\text {d}x} +\frac{1}{2}\bigg [\frac{1}{2} b(x)^2+b'(x)-\bigg (\frac{1}{2} x^{\alpha }-\alpha +1\bigg )x^{\alpha -2}\bigg ]. \end{aligned}$$

    Then for each \(b, L_{\max }^b\) and \({\widetilde{L}}_{\max }\) are isospectral, \(\sigma _{\mathrm{ess}}\big (L_{\max }^b\big )=\emptyset \) if \(\alpha >1\) and \(\sigma _{\mathrm{ess}}\big (L_{\max }^b\big )\ne \emptyset \) if \(\alpha \in (0, 1]\).

  2. (2)

    Define

    $$\begin{aligned} {\widetilde{L}}&= \frac{\text {d}^2}{\text {d}x^2}+ x^{\alpha -1}\frac{\text {d}}{\text {d}x},\\ L^b&= \frac{\text {d}^2}{\text {d}x^2}+b(x)\frac{\text {d}}{\text {d}x} +\frac{1}{2}\bigg [\frac{1}{2} b(x)^2+b'(x)-\bigg (\frac{1}{2} x^{\alpha }+\alpha -1\bigg )x^{\alpha -2}\bigg ]. \end{aligned}$$

    Then, for each \(b\), \(L_{\min }^b\) and \({\widetilde{L}}_{\min }\) are isospectral; moreover, \(\sigma _{\mathrm{ess}}\big (L_{\min }^b\big )=\emptyset \) if \(\alpha >1\) and \(\sigma _{\mathrm{ess}}\big (L_{\min }^b\big )\ne \emptyset \) if \(\alpha \in (0, 1]\).
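The zeroth-order terms in the two operators \(L^b\) above can be checked against the general formula for \(L^b\) preceding this example, taking \(\tilde{a}=1\) and \(\tilde{b}=\mp x^{\alpha -1}\). The following symbolic sketch (ours, for illustration only; the helper `potential` simply encodes the bracketed zeroth-order term of the general formula) confirms the algebra:

```python
import sympy as sp

x, alpha = sp.symbols('x alpha', positive=True)
b = sp.Function('b')(x)

def potential(a_t, b_t, b):
    # zeroth-order term of L^b in the general formula:
    # (1/2) [ (b^2 - b_t^2)/(2 a_t) - a_t d/dx((b_t - b)/a_t) ]
    return sp.Rational(1, 2) * ((b**2 - b_t**2) / (2 * a_t)
                                - a_t * sp.diff((b_t - b) / a_t, x))

# Part (1): a_t = 1, b_t = -x^(alpha - 1)
V1 = potential(1, -x**(alpha - 1), b)
V1_stated = sp.Rational(1, 2) * (b**2 / 2 + sp.diff(b, x)
                                 - (x**alpha / 2 - alpha + 1) * x**(alpha - 2))
assert sp.simplify(V1 - V1_stated) == 0

# Part (2): a_t = 1, b_t = +x^(alpha - 1)
V2 = potential(1, x**(alpha - 1), b)
V2_stated = sp.Rational(1, 2) * (b**2 / 2 + sp.diff(b, x)
                                 - (x**alpha / 2 + alpha - 1) * x**(alpha - 2))
assert sp.simplify(V2 - V2_stated) == 0
print("Example 7.10 potentials match the general formula")
```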

Proof

Note that \({\hat{\nu }}(E)=\infty \). By [10, Example 4.1], for the operator

$$\begin{aligned} L_0=\frac{\text {d}^2}{\text {d}x^2}- x^{\alpha -1}\frac{\text {d}}{\text {d}x}\quad \text {on}\quad (0, \infty ) \end{aligned}$$

with the maximal domain, we have \(\sigma _{\mathrm{ess}}(L_0)=\emptyset \) if \(\alpha >1\) and \(\sigma _{\mathrm{ess}}(L_0)\ne \emptyset \) if \(\alpha \in (0, 1]\). Actually,

$$\begin{aligned} C(x)= -\int _0^x t^{\alpha -1}\,\text {d}t=-\frac{x^{\alpha }}{\alpha },\quad \mu (x, \infty )= \int _x^\infty e^{C(t)}\,\text {d}t\sim x^{1-\alpha }e^{-x^{\alpha }/\alpha },\quad {\hat{\nu }}(0, x)=\int _0^x e^{-C(t)}\,\text {d}t\sim x^{1-\alpha } e^{x^{\alpha }/\alpha } \end{aligned}$$

as \(x\rightarrow \infty .\) Hence \({\hat{\nu }}(0, x)\mu (x, \infty )\sim x^{2(1-\alpha )}\) as \(x\rightarrow \infty .\) The required conclusion now follows from Theorem 7.1 (2) with \(h=1\). Then, by Corollary 7.9, we obtain part (1).
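The polynomial rate \(x^{2(1-\alpha )}\) may also be checked numerically. The sketch below (ours, not part of the proof) takes the illustrative value \(\alpha =2\), for which the product \({\hat{\nu }}(0,x)\mu (x,\infty )\) times \(x^{2(\alpha -1)}\) should tend to 1:

```python
import mpmath as mp

mp.mp.dps = 30
alpha = 2                               # illustrative choice of alpha

def C(t):
    return -t**alpha / alpha            # C(t) = -∫_0^t s^(alpha-1) ds

def product(x):
    mu_tail = mp.quad(lambda t: mp.exp(C(t)), [x, mp.inf])   # mu(x, ∞)
    nu_head = mp.quad(lambda t: mp.exp(-C(t)), [0, x])       # nu-hat(0, x)
    return mu_tail * nu_head

# nu-hat(0,x) mu(x,∞) ~ x^(2(1-alpha)), so this ratio should approach 1:
for x in (4, 6, 8):
    print(x, product(x) * x**(2 * (alpha - 1)))
```

For \(\alpha = 2\) the ratio is already close to 1 at moderate \(x\), consistent with the exponential factors cancelling exactly in the product.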

To prove part (2), recall that for the differential operator

$$\begin{aligned} L=a(x)\frac{\text {d}^2}{\text {d}x^2}+b(x)\frac{\text {d}}{\text {d}x}, \end{aligned}$$

its dual operator \(\widehat{L}\), defined in analogy with the duality for birth–death processes used in Sect. 3 (part (d)), takes the following form:

$$\begin{aligned} {\widehat{L}}=a(x)\frac{\text {d}^2}{\text {d}x^2}+\bigg (\frac{\text {d}}{\text {d}x}a(x)-b(x)\bigg )\frac{\text {d}}{\text {d}x} \end{aligned}$$

(cf. [3, (10.6)] or [4, §3.2]). Hence, the operator

$$\begin{aligned} {\widetilde{L}}=\frac{\text {d}^2}{\text {d}x^2}+x^{\alpha -1}\frac{\text {d}}{\text {d}x} \end{aligned}$$

with the minimal domain is a dual of \(L_0\) with the maximal domain (cf. [3, (10.6)]) and so they have the same spectrum. Thus, part (2) follows again from Corollary 7.9. \(\square \)
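The duality used in this proof is easy to verify symbolically: the map \(b\mapsto a'-b\) sends the drift of \(L_0\) to that of \({\widetilde{L}}\), and applying it twice restores the original drift. A small sketch (ours, for illustration):

```python
import sympy as sp

x, alpha = sp.symbols('x alpha', positive=True)

def dual_drift(a, b):
    # first-order coefficient of the dual operator: (d/dx) a(x) - b(x)
    return sp.diff(a, x) - b

# The dual of L0 = d^2/dx^2 - x^(alpha-1) d/dx (a = 1, b = -x^(alpha-1))
# has drift +x^(alpha-1), i.e. it is the operator L-tilde above:
d0 = dual_drift(sp.Integer(1), -x**(alpha - 1))
assert sp.simplify(d0 - x**(alpha - 1)) == 0

# Duality is an involution: taking the dual twice restores the drift
a = sp.Function('a')(x)
b = sp.Function('b')(x)
assert sp.simplify(dual_drift(a, dual_drift(a, b)) - b) == 0
print("dual of L0 is L-tilde; duality is an involution")
```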

Example 7.11

Let \(E=(0, \infty )\), \(\gamma > 1\), and \(b\in {\fancyscript{C}}(E)\).

  (1)

    Define

    $$\begin{aligned} {\widetilde{L}}&= (1+x)^{\gamma }\frac{\text {d}^2}{\text {d}x^2},\\ L^b&= (1+x)^{\gamma }\frac{\text {d}^2}{\text {d}x^2}+b(x)\frac{\text {d}}{\text {d}x} +\frac{1}{2}\bigg [\frac{b(x)^2}{2(1+x)^{\gamma }}+b'(x)-\frac{\gamma b(x)}{1+x}\bigg ]. \end{aligned}$$

    Then, for each \(b\), \(L_{\max }^b\) and \({\widetilde{L}}_{\max }\) are isospectral; moreover, \(\sigma _{\mathrm{ess}}\big (L_{\max }^b\big )=\emptyset \) if \(\gamma >2\) and \(\sigma _{\mathrm{ess}}\big (L_{\max }^b\big )\ne \emptyset \) if \(\gamma \in (1, 2]\).

  (2)

    Define

    $$\begin{aligned} {\widetilde{L}}&= (1+x)^{\gamma }\frac{\text {d}^2}{\text {d}x^2}+\gamma (1+x)^{\gamma -1}\frac{\text {d}}{\text {d}x},\\ L^b&= (1+x)^{\gamma }\frac{\text {d}^2}{\text {d}x^2}+b(x)\frac{\text {d}}{\text {d}x}\\&+\,\frac{1}{2}\bigg [\frac{b(x)^2}{2(1+x)^{\gamma }}-\frac{\gamma b(x)}{1+x}+b'(x) -{\gamma }\bigg (\frac{\gamma }{2}-1\bigg )(1+x)^{\gamma -2}\bigg ]. \end{aligned}$$

    Then, for each \(b\), \(L_{\min }^b\) and \({\widetilde{L}}_{\min }\) are isospectral; moreover, \(\sigma _{\mathrm{ess}}\big (L_{\min }^b\big )=\emptyset \) if \(\gamma >2\) and \(\sigma _{\mathrm{ess}}\big (L_{\min }^b\big )\ne \emptyset \) if \(\gamma \in (1, 2]\).
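As in Example 7.10, the zeroth-order terms above can be checked against the general formula for \(L^b\), now with \(\tilde{a}=(1+x)^{\gamma }\) and \(\tilde{b}=0\) or \(\tilde{b}=\gamma (1+x)^{\gamma -1}\). A symbolic sketch (ours, illustrative only; `potential` encodes the zeroth-order term of the general formula):

```python
import sympy as sp

x, gamma = sp.symbols('x gamma', positive=True)
b = sp.Function('b')(x)
a_t = (1 + x)**gamma

def potential(a_t, b_t, b):
    # zeroth-order term of L^b in the general formula
    return sp.Rational(1, 2) * ((b**2 - b_t**2) / (2 * a_t)
                                - a_t * sp.diff((b_t - b) / a_t, x))

# Part (1): b_t = 0
V1 = potential(a_t, 0, b)
V1_stated = sp.Rational(1, 2) * (b**2 / (2 * (1 + x)**gamma)
                                 + sp.diff(b, x) - gamma * b / (1 + x))
assert sp.simplify(V1 - V1_stated) == 0

# Part (2): b_t = gamma (1+x)^(gamma - 1)
V2 = potential(a_t, gamma * (1 + x)**(gamma - 1), b)
V2_stated = sp.Rational(1, 2) * (b**2 / (2 * (1 + x)**gamma)
                                 - gamma * b / (1 + x) + sp.diff(b, x)
                                 - gamma * (gamma / 2 - 1) * (1 + x)**(gamma - 2))
assert sp.simplify(V2 - V2_stated) == 0
print("Example 7.11 potentials match the general formula")
```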

Proof

As in the proof of Example 7.10, it suffices to study the spectrum of the operator

$$\begin{aligned} L_0=(1+x)^{\gamma }\frac{\text {d}^2}{\text {d}x^2} \end{aligned}$$

with the maximal domain. Clearly,

$$\begin{aligned} \mu (\text {d}x)=(1+x)^{-\gamma }\text {d}x,\quad {\hat{\nu }}(\text {d}x)=\text {d}x, \quad {\hat{\nu }}(0, \infty )=\infty , \quad \mu (E)<\infty \;\text {if } \gamma >1. \end{aligned}$$

We are in the case of Lemma 5.8 (1): \({\hat{\nu }}(0, x)\mu (x, \infty )\sim x^{2-\gamma }\) as \(x\rightarrow \infty \). Hence, by Theorem 7.1 (2) with \(h=1\), \(L_0\) has discrete spectrum if \(\gamma >2\), and \(\sigma _{\mathrm{ess}}(L_0)\ne \emptyset \) if \(\gamma \in (1, 2]\).\(\square \)
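The rate \(x^{2-\gamma }\) used in this proof comes from the closed form of the tail \(\mu (x, \infty )\). A short symbolic check (ours, illustrative only):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
gamma = sp.symbols('gamma', positive=True)

# Closed form of the tail mu(x, oo) = ∫_x^oo (1+t)^(-gamma) dt for gamma > 1,
# verified here by differentiation:
mu_tail = (1 + x)**(1 - gamma) / (gamma - 1)
assert sp.simplify(sp.diff(-mu_tail, x) - (1 + x)**(-gamma)) == 0

nu_head = x                      # nu-hat(0, x) = x
prod = nu_head * mu_tail

# nu-hat(0,x) mu(x,oo) / x^(2-gamma) -> 1/(gamma-1) as x -> oo,
# which gives the rate x^(2-gamma) up to a constant factor
lim = sp.limit(prod / x**(2 - gamma), x, sp.oo)
assert sp.simplify(lim - 1 / (gamma - 1)) == 0
print("nu-hat(0,x) mu(x,oo) ~ x^(2-gamma)/(gamma-1)")
```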

Remark 7.12

The condition \(c(x)\geqslant 0\) used in the paper has a probabilistic meaning (killing rate), but it is not necessary, as we have seen from Example 7.10 (2) with \(b(x)\equiv 0\) and \(\alpha \in (0, 1)\). Everything remains the same if \(c\) is bounded below, since this case can be reduced to the nonnegative one by a shift. Even this ‘bounded below’ condition is not necessary; refer to [5].

Up to now, we have studied the half-line only. The case of the whole line is parallel. To see this, fix the reference point \(\theta =0\) and use the measures \(\mu \) and \(\hat{\nu }\) defined at the beginning of this section. For simplicity, we write down only the symmetric case here (meaning that \(\mu \) is simultaneously finite or infinite on \((-\infty , 0)\) and \((0, \infty )\), and similarly for \(\hat{\nu }\)). The other cases may be handled in parallel.

Theorem 7.13

Let \(E={\mathbb R}\) and let \(h\), with \(h\ne 0\) a.e., be an \(L^c\)-harmonic function constructed in Theorem 7.4.

  (1)

    If \({\hat{\nu }}\big (h^{-2}\big )<\infty \), then \(\sigma _{\mathrm{ess}}(L_{\min }^c)=\emptyset \) iff

    $$\begin{aligned} \lim _{x\rightarrow \infty }\Big [\mu \big (h^21\!\!1_{(0,x)}\big ){\hat{\nu }}\big (h^{-2}1\!\!1_{(x,\infty )}\big ) +\mu \big (h^21\!\!1_{(-x, 0)}\big ){\hat{\nu }}\big (h^{-2}1\!\!1_{(-\infty , -x)}\big ) \Big ]=0. \end{aligned}$$
  (2)

    If \(\mu \big (h^2\big )<\infty \), then \(\sigma _{\mathrm{ess}}(L_{\max }^c)=\emptyset \) iff

    $$\begin{aligned} \lim _{x\rightarrow \infty }\Big [\mu \big (h^21\!\!1_{(x, \infty )}\big ) {\hat{\nu }}\big (h^{-2}1\!\!1_{(0, x)}\big )+\mu \big (h^21\!\!1_{(-\infty , -x)}\big ) {\hat{\nu }}\big (h^{-2}1\!\!1_{(-x, 0)}\big ) \Big ]=0. \end{aligned}$$
  (3)

    If \({\hat{\nu }}\big (h^{-2}1\!\!1_{(-\infty , 0)}\big )={\hat{\nu }} \big (h^{-2}1\!\!1_{(0, \infty )}\big )=\infty =\mu \big (h^21\!\!1_{(-\infty , 0)}\big )=\mu \big (h^21\!\!1_{(0, \infty )}\big )\), then \(\sigma _{\mathrm{ess}}(L_{\min }^c)=\sigma _{\mathrm{ess}}(L_{\max }^c)\ne \emptyset \).

Proof

(a) As in the proof of Theorem 2.1, by [5, Theorem 3.1], it suffices to consider only the case that \(c(x)\equiv 0\).

(b) Part (2) of the theorem follows from [10, Theorem 2.5 (2)].

(c) Part (1) of the theorem is a dual of part (2). Refer to [3, (10.6)] or [4, §3.2].

(d) In the present symmetric case, part (3) is obvious in view of Theorem 7.1 (3).   \(\square \)

The approach used in this paper is meaningful in a quite general setup. For instance, one may refer to [5] for some isospectral operators in higher dimensions.