1 Introduction

Given sequences \(\{a_n\}_{n=0}^\infty \) and \(\{b_n\}_{n=0}^\infty \) such that \(a_n > 0\) and \(b_n \in \mathbb {R}\), we set

$$\begin{aligned} A = \left( \begin{array}{cccccc} b_0 &{} a_0 &{} 0 &{} 0 &{} 0 &{}\ldots \\ a_0 &{} b_1 &{} a_1 &{} 0 &{} 0 &{} \ldots \\ 0 &{} a_1 &{} b_2 &{} a_2 &{} 0 &{} \ldots \\ 0 &{} 0 &{} a_2 &{} b_3 &{} a_3 &{} \ldots \\ \vdots &{} \vdots &{} \vdots &{} \vdots &{} \vdots &{} \ddots \end{array} \right) . \end{aligned}$$

The operator A is defined on the domain \(Dom(A) = \{ x \in \ell ^2 :A x \in \ell ^2\}\), where

$$\begin{aligned} \ell ^2 = \Big \{ x \in \mathbb {C}^{\mathbb {N}} :\sum _{n=0}^\infty |x_n|^2 < \infty \Big \}, \end{aligned}$$

and is called a Jacobi matrix.
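The following sketch builds a finite truncation of A, which is convenient for experimenting with the operators studied below; the sequences \(a_n = n+1\), \(b_n \equiv 0\) and the truncation size are illustrative assumptions, not data from this article.

```python
import numpy as np

def jacobi_truncation(a, b, N):
    """N x N truncation of the Jacobi matrix with off-diagonal
    weights a_0, ..., a_{N-2} and diagonal entries b_0, ..., b_{N-1}."""
    A = np.diag(np.asarray(b[:N], dtype=float))
    off = np.asarray(a[:N-1], dtype=float)
    return A + np.diag(off, 1) + np.diag(off, -1)

# illustrative sequences (an assumption): a_n = n + 1, b_n = 0
N = 6
a = [n + 1.0 for n in range(N)]
b = [0.0] * N
A_N = jacobi_truncation(a, b, N)
print(A_N)
print(np.allclose(A_N, A_N.T))   # the truncation is symmetric: True
```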

The study of Jacobi matrices is motivated by connections with orthogonal polynomials and the classical moment problem (see, e.g., [23]). Also, every self-adjoint operator can be represented as a direct sum of Jacobi matrices. In particular, generators of birth and death processes may be seen as Jacobi matrices acting on weighted \(\ell ^2\) spaces.

There are several approaches to the problem of the identification of the spectrum of unbounded Jacobi matrices. A method often used is based on subordination theory (see, e.g., [6, 15, 19]). Another technique uses the analysis of a commutator between a Jacobi matrix and a suitably chosen matrix (see, e.g., [22]). The case of Jacobi matrices with monotonic weights was considered mainly by Dombrowski (see, e.g., [8]), who developed commutator techniques enabling qualitative spectral analysis of the examined operators.

The present article is motivated by commutator techniques of Dombrowski and some ideas of Clark [6]. In fact, commutators do not appear here directly but are hidden in some of our expressions.

Let A be a Jacobi matrix, and assume that the matrix A is self-adjoint. The spectrum of the operator A will be denoted by \(\sigma (A)\), the set of all its eigenvalues by \(\sigma _\mathrm {p}(A)\), and the set of all accumulation points of \(\sigma (A)\) by \(\sigma _\mathrm {ess}(A)\). For a real number x, we define \(x^- = \max (-x,0)\).

Our main result is the following theorem.

Theorem 1.1

Let A be a Jacobi matrix. If there is a positive sequence \(\{ \alpha _n \}\) such that

figure a

then the Jacobi matrix A is self-adjoint and satisfies \(\sigma _\mathrm {p}(A) = \emptyset \) and \(\sigma (A) = \mathbb {R}\).

The importance of Theorem 1.1 lies in the fact that we have flexibility in the choice of the sequence \(\alpha _n\). Some choices of the sequence \(\alpha _n\) are given in Sect. 4.

Let us concentrate now on the simplest case of the theorem, i.e., when \(\alpha _n = a_n\). In [11, Lemma 2.6], it was proven that if the nonnegative sequence \(a_n^2 - a_{n-1}^2\) is bounded and \(b_n \equiv 0\), then the matrix A has no eigenvalues. Our theorem gives the additional information that in this case \(\sigma (A) = \mathbb {R}\) holds. Moreover, our assumptions are weaker than the conditions of [11, Lemma 2.6].

In Sect. 6, we provide examples showing the sharpness of the assumptions in the case \(\alpha _n = a_n\). In particular, condition (e) is necessary in the class of monotonic sequences \(\{ a_n \}\), and condition (b) cannot be replaced by \([(a_{n+1}/a_n)^2 - 1]^- \rightarrow 0\). Corollary 1.3 shows that in general, condition (g) is necessary.

In Sect. 5, we apply the theorem for \(\alpha _n = a_n\) to resolve a conjecture (see [21]) about continuous spectra of generators of birth and death processes. There we also present applications to the following problem.

Problem 1.2

(Chihara [4, 5]) Assume that a Jacobi matrix A is self-adjoint, \(b_n \rightarrow \infty \), the smallest point \(\rho \) of \(\sigma _\mathrm{ess}(A)\) is finite, and

$$\begin{aligned} \lim _{n \rightarrow \infty } \frac{a_n^2}{b_n b_{n+1}} = \frac{1}{4}. \end{aligned}$$

Find additional assumptions (if needed) to assure that \(\sigma _\mathrm{ess}(A) = [\rho , \infty )\).

A direct consequence of the theorem in the case \(\alpha _n = a_n\), providing easy-to-check additional assumptions for Problem 1.2, is the following result.

Corollary 1.3

Assume

figure b

Then the Jacobi matrix A satisfies \(\sigma _\mathrm {ess}(A) = [-M, \infty )\). Moreover, if \(a_{n+1}/a_n \rightarrow 1\), then

$$\begin{aligned} \lim _{n \rightarrow \infty } \frac{a_n^2}{b_n b_{n+1}} = \frac{1}{4}. \end{aligned}$$

Let us present the ideas behind the proof of the theorem. Let the difference operator J be defined by

$$\begin{aligned} (J x)_n = - \alpha _{n-1} x_{n-1} + \alpha _n x_{n+1} \quad (n \ge 0) \end{aligned}$$

for a positive sequence \(\{ \alpha _n \}_{n=0}^\infty \) and \(\alpha _{-1}=x_{-1}=0\). Then we define the commutator K on finitely supported sequences by the formula

$$\begin{aligned} 2 K = J A - A J. \end{aligned}$$

The expression \(S_n = \langle K(p^n), p^n \rangle \), where \(p^n = (p_0, p_1, \ldots , p_n, 0, 0, \ldots )\), \(\{p_k\}\) is the formal eigenvector of A, and \(\langle \cdot , \cdot \rangle \) is the scalar product on \(\ell ^2\), proved to be a useful tool to show that the matrix A has continuous spectrum (see, e.g., [8, 11, 13]).

An important observation is that we can give a closed form for \(S_n\) [see (3.1)]. To the author’s knowledge, this closed form has been known only for \(\alpha _n = a_n\) (see [7]). A related expression for \(\alpha _n \equiv 1\) was analyzed in [6]. Adaptation of techniques from [6] allows us to circumvent technical difficulties present in Dombrowski’s approach. Extending the definition of \(S_n\) to generalized eigenvectors [see (2.1)] enables us to show that \(\sigma (A) = \mathbb {R}\).
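A small numerical experiment illustrates these objects; the sequences \(a_n = n+1\), \(b_n \equiv 0\), the choice \(\alpha _n = a_n\) and the value of \(\lambda \) below are illustrative assumptions, not data from this article.

```python
import numpy as np

def jacobi(a, b, M):
    A = np.diag(np.asarray(b[:M], float))
    off = np.asarray(a[:M-1], float)
    return A + np.diag(off, 1) + np.diag(off, -1)

def J_matrix(alpha, M):
    # (J x)_n = -alpha_{n-1} x_{n-1} + alpha_n x_{n+1}
    off = np.asarray(alpha[:M-1], float)
    return np.diag(off, 1) - np.diag(off, -1)

def polys(a, b, lam, n):
    # p_0 = 1, a_k p_{k+1} = (lam - b_k) p_k - a_{k-1} p_{k-1}
    p = [1.0, (lam - b[0]) / a[0]]
    for k in range(1, n):
        p.append(((lam - b[k]) * p[k] - a[k-1] * p[k-1]) / a[k])
    return p   # p_0, ..., p_n

M, lam, n = 40, 0.7, 10
a = [k + 1.0 for k in range(M)]
b = [0.0] * M
A, J = jacobi(a, b, M), J_matrix(a, M)   # here alpha_n = a_n
K = 0.5 * (J @ A - A @ J)                # the commutator: 2K = JA - AJ

p = polys(a, b, lam, n)
pn = np.zeros(M); pn[:n+1] = p           # truncated vector p^n
S_commutator = pn @ (K @ pn)             # S_n = <K(p^n), p^n>
# closed form (3.1) from Sect. 3 (with u = p and alpha_n = a_n):
S_closed = (a[n-1]**2 * p[n-1]**2 + a[n]**2 * p[n]**2
            - (lam - b[n]) * a[n-1] * p[n-1] * p[n])
print(np.isclose(S_commutator, S_closed))   # True
```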

The article is organized as follows. In Sect. 2, we present definitions and well-known facts important for our argument. In Sect. 3, we prove Theorem 1.1, whereas in Sect. 4 we show its variants. In particular, we identify spectra of operators considered in [20] and [14]. In Sect. 5, we present applications of our theorem to some open problems. Finally, in the last section, we discuss the necessity of the assumptions in the case \(\alpha _n = a_n\). We also present examples showing that in some cases, our results are stronger than results known in the literature.

2 Tools

Given a Jacobi matrix A, \(\lambda \in \mathbb {R}\) and real numbers \((a,b) \ne (0, 0)\), we introduce a generalized eigenvector u by

$$\begin{aligned}&u_0 = a, \quad u_1 = b, \nonumber \\&a_n u_{n+1} = (\lambda - b_n) u_n - a_{n-1} u_{n-1} \quad (n \ge 1). \end{aligned}$$
(2.1)

Furthermore, we define the sequence

$$\begin{aligned}&p_{-1}(\lambda ) = 0, \quad p_0(\lambda ) = 1, \\&a_n p_{n+1}(\lambda ) = (\lambda - b_n) p_n(\lambda ) - a_{n-1} p_{n-1}(\lambda ) \quad (n \ge 0). \end{aligned}$$

The sequence \(\{ p_n(\lambda ) \}\) is a formal eigenvector of the matrix A associated with \(\lambda \).
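A short numerical illustration of this property (with illustrative \(a_n\), \(b_n\) and \(\lambda \), which are assumptions of the sketch only): the vector \((p_0(\lambda ), \ldots , p_{N-1}(\lambda ))\) satisfies the eigenvector equation for the \(N \times N\) truncation of A in every row except the last one, where the truncation cuts off the term \(a_{N-1} p_N(\lambda )\).

```python
import numpy as np

def polys(a, b, lam, N):
    # p_{-1} = 0, p_0 = 1, a_n p_{n+1} = (lam - b_n) p_n - a_{n-1} p_{n-1}
    p = np.zeros(N)
    p[0] = 1.0
    p[1] = (lam - b[0]) / a[0]
    for n in range(1, N - 1):
        p[n+1] = ((lam - b[n]) * p[n] - a[n-1] * p[n-1]) / a[n]
    return p

# illustrative data (an assumption): a_n = sqrt(n + 1), b_n = 0
N, lam = 12, 1.3
a = np.sqrt(np.arange(1.0, N + 1))
b = np.zeros(N)
A = np.diag(b) + np.diag(a[:N-1], 1) + np.diag(a[:N-1], -1)
p = polys(a, b, lam, N)

print(np.allclose((A @ p)[:N-1], lam * p[:N-1]))   # True
```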

Observe that \(\{ p_n(\cdot ) \}_{n=0}^\infty \) is a sequence of polynomials. Moreover, the sequence is orthonormal with respect to the measure \(\mu (\cdot ) = \langle E(\cdot ) \delta _0, \delta _0 \rangle \), where E is the spectral resolution of the matrix A, \(\langle \cdot , \cdot \rangle \) is the scalar product on \(\ell ^2\), and \(\delta _0 = (1, 0, 0, \ldots )\).

The following propositions are well known. We include them for the sake of completeness.

Proposition 2.1

Let \(\lambda \in \mathbb {R}\). If no generalized eigenvector u belongs to \(\ell ^2\), then the matrix A is self-adjoint, \(\lambda \notin \sigma _\mathrm {p}(A)\), and \(\lambda \in \sigma (A)\).

Proof

[23, Theorem 3] asserts that A is self-adjoint provided that at least one generalized eigenvector \(\{u_n\}\) does not belong to \(\ell ^2\). Direct computation shows that \(\lambda \in \sigma _\mathrm {p}(A)\) if and only if \(\{ p_n(\lambda ) \} \in \ell ^2\). Therefore the matrix A is self-adjoint and \(\lambda \notin \sigma _\mathrm {p}(A)\).

Observe that the vector x such that \((A - \lambda I) x = \delta _0\) satisfies the following recurrence relation:

$$\begin{aligned}&b_0 x_0 + a_0 x_1 = \lambda x_0 + 1, \\&a_{n-1} x_{n-1} + b_n x_n + a_n x_{n+1} = \lambda x_n \quad (n \ge 1). \end{aligned}$$

Hence x is a generalized eigenvector (observe that \((x_0, x_1) \ne (0,0)\), since otherwise the recurrence would force \(x \equiv 0\), contradicting the first equation); thus \(x \notin \ell ^2\). Therefore, the operator \(A - \lambda I\) is not surjective, i.e., \(\lambda \in \sigma (A)\). \(\square \)

The following proposition is well known. For the proof we refer to, e.g., [12, Corollary 3.6].

Proposition 2.2

Let A and \(\widehat{A}\) be Jacobi matrices defined by sequences \(\{a_n\}\), \(\{b_n\}\) and \(\{a_n\}\), \(\{-b_n\}\), respectively. Then

$$\begin{aligned} \sigma (A) = -\sigma (\widehat{A}), \quad \sigma _\mathrm {p}(A) = -\sigma _\mathrm {p}(\widehat{A}). \end{aligned}$$

The following proposition has been used many times in the literature (see, e.g., [11, 12]).

Proposition 2.3

Let A be a self-adjoint Jacobi matrix associated with the sequence \(b_n \equiv 0\). Let \(A_e\) and \(A_o\) be restrictions of \(A \cdot A\) to the subspaces \(span\{\delta _{2k} :k \in \mathbb {N}\}\) and \(span\{\delta _{2k+1} :k \in \mathbb {N}\}\), respectively. Then \(A_e\) and \(A_o\) are Jacobi matrices associated with

$$\begin{aligned}&a_n^e = a_{2n} a_{2n+1}, \quad b_n^e = a_{2n-1}^2 + a_{2n}^2,\nonumber \\&a_n^o = a_{2n+1} a_{2n+2}, \quad b_n^o = a_{2n}^2 + a_{2n+1}^2, \end{aligned}$$
(2.2)

respectively. Moreover, \(A_o\) and \(A_e\) are self-adjoint, and

$$\begin{aligned} \sigma (A_o) = \sigma (A_e) = \left( \sigma (A) \right) ^2, \quad \sigma _\mathrm{p}(A_o) = \sigma _\mathrm{p}(A_e) = \left( \sigma _\mathrm{p}(A) \right) ^2, \end{aligned}$$

when \(0 \notin \sigma _\mathrm {p}(A)\) and \(0 \notin \sigma _\mathrm {p}(\widetilde{A})\), where \(\widetilde{A}\) is a self-adjoint Jacobi matrix associated with the sequences \(\{a_{n+1}\}_{n=0}^\infty \) and \(b_n \equiv 0\), and for a set X, we define \(X^2 = \{ x^2 :x \in X \}\).

Proof

By direct computation it may be proved that \(A_o\) and \(A_e\) satisfy (2.2).

Let \(\{p_n^e\}\) be the sequence of polynomials associated with the matrix \(A_e\). Then [23, Theorem 3] asserts that \(A_e\) is self-adjoint provided \(\{p_n^e(0)\} \notin \ell ^2\). It is known that \(p_{2n}(x) = p_n^e(x^2)\) (see, e.g., [12, Section 4]). Since \(p_{2k+1}(0)=0\) and \(0 \notin \sigma _\mathrm {p}(A)\), we have

$$\begin{aligned} \infty = \sum _{n=0}^\infty p_n^2(0) = \sum _{n = 0}^\infty p^2_{2n}(0) = \sum _{n=0}^\infty \left( p_n^e(0) \right) ^2. \end{aligned}$$

Therefore \(A_e\) is self-adjoint.

Assume that \(0 \notin \sigma _\mathrm {p}(\widetilde{A})\). Observe that \(A_o = \widetilde{A}_e\). Therefore, the previous argument applied to \(\widetilde{A}\) implies also that \(A_o\) is self-adjoint.

The conclusion about the spectra follows from, e.g., [12, Section 4]. \(\square \)
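Formulas (2.2) can also be checked numerically on truncations; the weights below are an illustrative assumption, and for an even truncation size the even-indexed block of \(A \cdot A\) is not affected by the cut-off.

```python
import numpy as np

# illustrative weights (an assumption): a_n = n + 1, b_n = 0
M = 12
a = np.arange(1.0, M + 1)
A = np.diag(a[:M-1], 1) + np.diag(a[:M-1], -1)

even = np.arange(0, M, 2)
A_e_block = (A @ A)[np.ix_(even, even)]   # restriction of A*A to span{delta_{2k}}

# formulas (2.2): a^e_n = a_{2n} a_{2n+1},  b^e_n = a_{2n-1}^2 + a_{2n}^2  (a_{-1} = 0)
m = len(even)
b_e = np.array([(a[2*n-1]**2 if n > 0 else 0.0) + a[2*n]**2 for n in range(m)])
a_e = np.array([a[2*n] * a[2*n+1] for n in range(m - 1)])
A_e = np.diag(b_e) + np.diag(a_e, 1) + np.diag(a_e, -1)

print(np.allclose(A_e_block, A_e))   # True
```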

3 Proof of the Main Theorem

Given a generalized eigenvector u and a positive sequence \(\{ \alpha _n \}\), we set

$$\begin{aligned} S_n = a_{n-1} \alpha _{n-1} u_{n-1}^2 + a_n \alpha _n u_n^2 - (\lambda - b_n) \alpha _{n-1} u_{n-1} u_n \quad (n \ge 1). \end{aligned}$$
(3.1)

Using the identity \(a_{n-1} u_{n-1} = (\lambda - b_n)u_n - a_n u_{n+1}\), we get an equivalent formula,

$$\begin{aligned} S_n = a_n \alpha _n u_n^2 + \frac{\alpha _{n-1}}{a_{n-1}} a_n^2 u_{n+1}^2 - \frac{\alpha _{n-1}}{a_{n-1}} a_n (\lambda - b_n) u_n u_{n+1} \quad (n \ge 1). \end{aligned}$$
(3.2)

The sequence \(S_n\) for \(\alpha _n = a_n\) was previously used in the study of Jacobi matrices but only in the case of bounded ones (see, e.g., [7, 10]). In the case of unbounded operators, a sequence similar to \(S_n\) for \(\alpha _n \equiv 1\) was also used in [6].
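The equivalence of (3.1) and (3.2) is easy to verify numerically. In the sketch below the sequences \(a_n\), \(b_n\), the choice \(\alpha _n \equiv 1\), the initial data of u and the value of \(\lambda \) are illustrative assumptions.

```python
import numpy as np

def gen_eigenvector(a, b, lam, u0, u1, N):
    # (2.1): a_n u_{n+1} = (lam - b_n) u_n - a_{n-1} u_{n-1}   (n >= 1)
    u = np.zeros(N)
    u[0], u[1] = u0, u1
    for n in range(1, N - 1):
        u[n+1] = ((lam - b[n]) * u[n] - a[n-1] * u[n-1]) / a[n]
    return u

def S_31(a, b, alpha, u, lam, n):   # formula (3.1)
    return (a[n-1] * alpha[n-1] * u[n-1]**2 + a[n] * alpha[n] * u[n]**2
            - (lam - b[n]) * alpha[n-1] * u[n-1] * u[n])

def S_32(a, b, alpha, u, lam, n):   # formula (3.2)
    r = alpha[n-1] / a[n-1]
    return (a[n] * alpha[n] * u[n]**2 + r * a[n]**2 * u[n+1]**2
            - r * a[n] * (lam - b[n]) * u[n] * u[n+1])

# illustrative data (an assumption): a_n = n + 1, b_n = 1/(n + 1), alpha_n = 1
N, lam = 30, 0.4
a = np.arange(1.0, N + 1)
b = 1.0 / np.arange(1.0, N + 1)
alpha = np.ones(N)
u = gen_eigenvector(a, b, lam, 1.0, 0.5, N)

print(all(np.isclose(S_31(a, b, alpha, u, lam, n), S_32(a, b, alpha, u, lam, n))
          for n in range(1, N - 1)))   # True
```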

The following proposition is an adaptation of [6, Lemma 3.1].

Proposition 3.1

Let u be a generalized eigenvector associated with \(\lambda \in \mathbb {R}\), and set

$$\begin{aligned} \widetilde{S}_n&= u_n^2 + u_{n+1}^2. \end{aligned}$$

Assume that \(a_n \rightarrow \infty \), and

$$\begin{aligned} \lim _{n \rightarrow \infty } \frac{\alpha _{n-1}}{\alpha _n} \frac{a_n}{a_{n-1}} = 1, \quad \limsup _{n \rightarrow \infty } \frac{|b_n|}{a_n} < 2. \end{aligned}$$

Then there exist constants \(c_1>0, c_2>0\) such that for sufficiently large n,

$$\begin{aligned} c_1 a_n \alpha _n \le \frac{S_n}{\widetilde{S}_n} \le c_2 a_n \alpha _n. \end{aligned}$$

Proof

Observe that from the representation (3.2) we have that \(S_n\) is a quadratic form with respect to variables \(u_n\) and \(u_{n+1}\). Let the minimal and the maximal value of \(S_n\) under the condition \(\widetilde{S}_n = 1\) be denoted by \(w_n^{\text {min}}\) and \(w_n^{\text {max}}\), respectively. Then

$$\begin{aligned} \frac{2 w_n^{\text {min}}}{a_n \alpha _n}&= 1 + \frac{\alpha _{n-1}}{\alpha _n} \frac{a_n}{a_{n-1}} - \sqrt{\left( 1 - \frac{\alpha _{n-1}}{\alpha _n} \frac{a_n}{a_{n-1}} \right) ^2 + \left( \frac{\alpha _{n-1}}{\alpha _n} \frac{a_n}{a_{n-1}} \frac{\lambda - b_n}{a_n} \right) ^2},\\ \frac{2 w_n^{\text {max}}}{a_n \alpha _n}&= 1 + \frac{\alpha _{n-1}}{\alpha _n} \frac{a_n}{a_{n-1}} + \sqrt{\left( 1 - \frac{\alpha _{n-1}}{\alpha _n} \frac{a_n}{a_{n-1}} \right) ^2 + \left( \frac{\alpha _{n-1}}{\alpha _n} \frac{a_n}{a_{n-1}} \frac{\lambda - b_n}{a_n} \right) ^2}. \end{aligned}$$

Letting \(n \rightarrow \infty \) and using the assumptions, we see that for sufficiently large n the above expressions are bounded from below and from above by positive constants. This completes the proof. \(\square \)
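The closed formulas for \(w_n^{\text {min}}\) and \(w_n^{\text {max}}\) can be checked against the eigenvalues of the symmetric \(2 \times 2\) matrix of the quadratic form (3.2); the numerical values below are illustrative assumptions.

```python
import numpy as np

# illustrative values (an assumption), playing the roles of
# a_{n-1}, a_n, alpha_{n-1}, alpha_n, b_n and lambda
a_nm1, a_n, al_nm1, al_n, b_n, lam = 3.0, 3.5, 3.0, 3.5, 0.8, 1.1

r = al_nm1 / a_nm1
M = np.array([[a_n * al_n,                   -0.5 * r * a_n * (lam - b_n)],
              [-0.5 * r * a_n * (lam - b_n),  r * a_n**2                 ]])
w_min, w_max = np.linalg.eigvalsh(M)   # extrema of S_n on u_n^2 + u_{n+1}^2 = 1

beta = (al_nm1 / al_n) * (a_n / a_nm1)
root = np.sqrt((1 - beta)**2 + (beta * (lam - b_n) / a_n)**2)
print(np.isclose(w_min, 0.5 * a_n * al_n * (1 + beta - root)),
      np.isclose(w_max, 0.5 * a_n * al_n * (1 + beta + root)))   # True True
```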

Corollary 3.2

Under the assumptions of Proposition 3.1, together with

$$\begin{aligned} \sum _{n=0}^\infty \frac{1}{a_n \alpha _n} = \infty , \end{aligned}$$

if \(\liminf _n S_n > 0\), then \(u \notin \ell ^2\).

Proof

Since \(\liminf _n S_n > 0\), by Proposition 3.1 there exists a constant \(c > 0\) such that for every sufficiently large n we have

$$\begin{aligned} \frac{c}{a_n \alpha _n} \le \widetilde{S}_n. \end{aligned}$$

Together with the assumption \(\sum _{n=0}^\infty 1/(a_n \alpha _n) = \infty \), this shows that u cannot belong to \(\ell ^2\). \(\square \)

Now we are ready to prove Theorem 1.1.

Proof (of Theorem 1.1)

By virtue of Corollary 3.2, it is sufficient to show that \(\liminf _n S_n > 0\) for every generalized eigenvector \(\{ u_n \}\).

By Proposition 3.1, there exists N such that for every \(n \ge N, S_n > 0\) holds. Let us define \(F_n = (S_{n+1} - S_n) / S_n\). Then \(S_{n+1} / S_n = 1 + F_n\); thus

$$\begin{aligned} \frac{S_n}{S_N} = \prod _{k=N}^{n-1} (1 + F_k). \end{aligned}$$

Hence

$$\begin{aligned} \sum _{n=1}^\infty F_n^- < \infty \end{aligned}$$
(3.3)

implies \(\liminf _n S_n > 0\). Indeed, since \(F_k^- \rightarrow 0\), for large k we have \(1 + F_k \ge 1 - F_k^- > 0\), and \(\sum _k F_k^- < \infty \) implies that the corresponding products \(\prod _k (1 - F_k^-)\) are bounded away from zero. Observe that by (3.1) and (3.2), we get

$$\begin{aligned} S_{n+1} - S_n= & {} \left( a_{n+1} \alpha _{n+1} - \frac{\alpha _{n-1}}{a_{n-1}} a_n^2 \right) u_{n+1}^2\\&+\,\left( \frac{\alpha _{n-1}}{a_{n-1}} a_n (\lambda - b_n) - \alpha _n (\lambda - b_{n+1}) \right) u_n u_{n+1}. \end{aligned}$$

Therefore,

$$\begin{aligned} F_n= & {} \frac{S_{n+1} - S_n}{S_n} = \left[ \left( a_{n+1} \alpha _{n+1} - \frac{\alpha _{n-1}}{a_{n-1}} a_n^2 \right) \frac{u_{n+1}^2}{\widetilde{S}_n}\right. \\&\left. +\,\left( \frac{\alpha _{n-1}}{a_{n-1}} a_n (\lambda - b_n) - \alpha _n(\lambda - b_{n+1}) \right) \frac{u_n u_{n+1}}{\widetilde{S}_n} \right] \frac{\widetilde{S}_n}{S_n}, \end{aligned}$$

where \(\widetilde{S}_n = u_n^2 + u_{n+1}^2\). By Proposition 3.1 and \(|u_n u_{n+1}| / \widetilde{S}_n \le 1\), there exists a constant \(c > 0\) such that

$$\begin{aligned} F_n^- \le \frac{c}{a_n \alpha _n} \left( \left[ a_{n+1} \alpha _{n+1} - \frac{\alpha _{n-1}}{a_{n-1}} a_n^2 \right] ^- + \left| \frac{\alpha _{n-1}}{a_{n-1}} a_n (\lambda - b_n) - \alpha _n (\lambda - b_{n+1}) \right| \right) . \end{aligned}$$

Since

$$\begin{aligned} \frac{1}{a_n \alpha _n} \left[ a_{n+1} \alpha _{n+1} - \frac{\alpha _{n-1}}{a_{n-1}} a_n^2 \right] ^- = \left[ \frac{a_{n+1}}{a_n} \frac{\alpha _{n+1}}{\alpha _n} - \frac{a_n}{a_{n-1}} \frac{\alpha _{n-1}}{\alpha _n} \right] ^- \end{aligned}$$

and

$$\begin{aligned} \frac{1}{a_n \alpha _n} \left| \frac{\alpha _{n-1}}{a_{n-1}} a_n (\lambda - b_n) - \alpha _n (\lambda - b_{n+1}) \right| \le |\lambda | \left| \frac{\alpha _{n-1}}{a_{n-1} \alpha _n} - \frac{1}{a_n} \right| + \left| \frac{b_n}{a_{n-1}} \frac{\alpha _{n-1}}{\alpha _n} - \frac{b_{n+1}}{a_n} \right| , \end{aligned}$$

we obtain (3.3). \(\square \)

Remark 3.3

If we replace the condition (b) by

figure c

then \(S_n\) is in fact convergent to a positive number. This shows that for every generalized eigenvector u, there are constants \(c_1 > 0, c_2 > 0\) such that \(c_1/(a_n \alpha _n) \le u_n^2 + u_{n+1}^2 \le c_2 / (a_n \alpha _n)\). Hence for any two generalized eigenvectors u, v associated with \(\lambda \in \mathbb {R}\), there is a constant \(c > 0\) such that

$$\begin{aligned} \limsup _{N \rightarrow \infty } \frac{\sum _{n=0}^N |u_n|^2}{\sum _{n=0}^N |v_n|^2} \le c. \end{aligned}$$

This estimate together with the subordination method (see, e.g., [6, 15]) shows that the spectrum of the matrix A is purely absolutely continuous.

4 Special Cases

In this section, we present a few choices of the sequence \(\{ \alpha _n \}\) from Theorem 1.1. In this way, we show the flexibility of our approach.

The following theorem was proved in [14, Theorem 1.6] and is a generalization of [6, Theorem 1.10]. In the original proof, the authors analyzed transfer matrices; our argument therefore gives an alternative proof.

Theorem 4.1

(Janas and Moszyński [14] ) Assume

figure d

Then \(\sigma (A) = \mathbb {R}\), and the matrix A has purely absolutely continuous spectrum.

Proof

Let \(\alpha _n \equiv 1\). By virtue of Remark 3.3, we need to check the assumptions (b’), (d), and (f) of Theorem 1.1.

Since the sequence \(\{ a_{n+1}/a_n \}\) is of bounded variation, it is convergent to a number a. From the condition (b), we have \(a \le 1\), whereas the condition (a) gives \(a \ge 1\). This proves the conditions (b’) and (f) of Theorem 1.1.

The sequence \(\{ b_{n+1}/a_n \}\) is of bounded variation because \(\frac{b_{n+1}}{a_n} = \frac{b_{n+1}}{a_{n+1}} \cdot \frac{a_{n+1}}{a_n}\). The proof is complete. \(\square \)

The next theorem imposes very simple conditions on Jacobi matrices. In Sect. 5, we show its applications; furthermore, in Sect. 6, we discuss the sharpness of its assumptions.

Theorem 4.2

Assume

figure e

Then the Jacobi matrix A is self-adjoint and satisfies \(\sigma _\mathrm {p}(A) = \emptyset \) and \(\sigma (A) = \mathbb {R}\).

Proof

Apply Theorem 1.1 with \(\alpha _n = a_n\). \(\square \)

Special cases of the following theorem were examined in [20] and [14] using commutator methods.

Theorem 4.3

Let \(\log ^{(i)}\) be defined by \(\log ^{(0)}(x) = x, \log ^{(i+1)}(x) = \log (\log ^{(i)}(x))\). Let \(g_j(n) = \prod _{i=1}^j \log ^{(i)}(n)\). Assume that for positive numbers K, N and for a summable nonnegative sequence \(c_n\),

figure f

Then \(\sigma _\mathrm {p}(A) = \emptyset \) and \(\sigma (A) = \mathbb {R}\).

Proof

We can assume that \(\log ^{(K)}(N) > 0\). Set

$$\begin{aligned} \alpha _n = {\left\{ \begin{array}{ll} 1 &{} \text { for } n < N, \\ \frac{n g_K(n)}{a_n} &{} \text { otherwise.} \end{array}\right. } \end{aligned}$$
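For readers who wish to experiment with this choice of \(\{ \alpha _n \}\), the following sketch computes \(\log ^{(i)}\), \(g_j\) and \(\alpha _n\); the particular weights \(a_n\) (taken as in Example 4.5 below) and the constants K, N are illustrative assumptions.

```python
import math

def log_iter(i, x):
    # log^{(0)}(x) = x, log^{(i+1)}(x) = log(log^{(i)}(x))
    for _ in range(i):
        x = math.log(x)
    return x

def g(j, n):
    # g_j(n) = prod_{i=1}^{j} log^{(i)}(n)   (empty product for j = 0)
    prod = 1.0
    for i in range(1, j + 1):
        prod *= log_iter(i, n)
    return prod

K, N = 2, 20               # any N with log^{(K)}(N) > 0; here log(log(20)) > 0

def a(n):                  # a_n = (n + M) g_K(n + M) with M = N, as in Example 4.5
    return (n + N) * g(K, n + N)

def alpha(n):              # the weight used in the proof
    return 1.0 if n < N else n * g(K, n) / a(n)

print([round(alpha(n), 4) for n in (N, 100, 10**4)])
```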

To get the conclusion, we need to check the assumptions (b), (d), and (c) of Theorem 1.1.

To show Theorem 1.1(b), let us observe that the assumption (b) of the present theorem gives

$$\begin{aligned} \left( \frac{a_n}{a_{n-1}} \right) ^2 \le 1 + \frac{2}{n} + \sum _{j=1}^K \frac{2}{n g_j(n)} + c_n' \end{aligned}$$

for a summable sequence \(c_n'\). Therefore,

$$\begin{aligned} \frac{a_{n+1}}{a_n} \frac{\alpha _{n+1}}{\alpha _n} - \frac{a_n}{a_{n-1}} \frac{\alpha _{n-1}}{\alpha _n}&= \frac{(n+1) g_K(n+1)}{n g_K(n)} - \frac{(n-1) g_K(n-1)}{n g_K(n)} \left( \frac{a_n}{a_{n-1}} \right) ^2 \\&\ge \frac{(n+1) g_K(n+1) - (n-1) g_K(n-1)}{n g_K(n)} - \frac{(n-1) g_K(n-1)}{n g_K(n)} \left( \frac{2}{n} + \sum _{j=1}^K \frac{2}{n g_j(n)} + c_n' \right) . \end{aligned}$$

Since the functions \(g_j\) are increasing, we have

$$\begin{aligned} \ge \frac{n-1}{n} \left( \frac{g_K(n+1) - g_K(n-1)}{g_K(n)} \right) - \frac{g_K(n-1)}{n g_K(n)} \sum _{j=1}^K \frac{2}{g_j(n-1)} - c_n'. \end{aligned}$$
(4.1)

Next, observe that

$$\begin{aligned} g'_K(x) = g_K(x) \sum _{j=1}^K \frac{(\log ^{(j)})'(x)}{\log ^{(j)}(x)}. \end{aligned}$$

Since \((\log ^{(j)})'(x) = \frac{1}{x g_{j-1}(x)}\) (with the convention \(g_0 \equiv 1\)), this gives

$$\begin{aligned} g_K'(x) = g_K(x) \sum _{j=1}^K \frac{1}{x g_j(x)}. \end{aligned}$$

Hence Taylor’s formula applied to \(g_K\) at the point \(n-1\) gives

$$\begin{aligned} (n-1)[g_K(n+1) - g_K(n-1)] = g_K(n-1) \sum _{j=1}^K \frac{2}{g_j(n-1)} + 2 (n-1)g_K''(\xi ) \end{aligned}$$

for \(\xi \in (n-1, n+1)\). Direct computation shows \(|g_K''(x)| \le c/x^{3/2}\) for x sufficiently large and a constant \(c > 0\). Therefore, the right-hand side of (4.1) is absolutely summable, which proves the condition Theorem 1.1(b).

Next, since

$$\begin{aligned} \frac{b_{n+1}}{a_n} - \frac{b_n}{a_{n-1}} \frac{\alpha _{n-1}}{\alpha _n} = \frac{b_{n+1} - b_n}{a_n} + \frac{b_n}{a_{n-1}} \left( \frac{a_{n-1}}{a_n} - \frac{\alpha _{n-1}}{\alpha _n} \right) , \end{aligned}$$

the condition Theorem 1.1(d) reduces to showing Theorem 1.1(c):

$$\begin{aligned} \sum _{n=0}^\infty \frac{1}{a_{n-1}} \left| \frac{a_{n-1}}{a_n} - \frac{n-1}{n} \frac{g_K(n-1)}{g_K(n)} \frac{a_n}{a_{n-1}} \right| < \infty . \end{aligned}$$
(4.2)

For constants \(K'\) and \(c > 0\), we have

$$\begin{aligned} \frac{a_{n-1}}{a_n} - \frac{n-1}{n} \frac{g_K(n-1)}{g_K(n)} \frac{a_n}{a_{n-1}} \ge \frac{1}{1 + \frac{K'}{n} + c_n} - \left( 1 - \frac{1}{n} \right) \left( 1 + \frac{K'}{n} + c_n \right) \ge -\frac{c}{n} - c_n' \end{aligned}$$

for a summable sequence \(c_n'\). On the other hand,

$$\begin{aligned} \frac{a_{n-1}}{a_n} - \frac{n-1}{n} \frac{g_K(n-1)}{g_K(n)} \frac{a_n}{a_{n-1}}\le & {} \frac{1}{1-c_n} - \left( 1 - \frac{1}{n} \right) \frac{g_K(n-1)}{g_K(n)} (1-c_n) \\= & {} 1 - \frac{g_K(n-1)}{g_K(n)} + c_n' = \frac{g_K(n) - g_K(n-1)}{g_K(n)} + c_n' \end{aligned}$$

for a summable sequence \(c_n'\). Hence as previously, Taylor’s formula applied to \(g_K\) at the point \(n-1\) gives

$$\begin{aligned} \frac{a_{n-1}}{a_n} - \frac{n-1}{n} \frac{g_K(n-1)}{g_K(n)} \frac{a_n}{a_{n-1}} \le \frac{c}{n} + c''_n \end{aligned}$$

for a constant \(c > 0\) and summable sequence \(c''_n\). Finally, these bounds combined with condition (d) give (4.2). \(\square \)

Remark 4.4

When we compare Theorem 4.2 with Theorem 4.3, we see that Theorem 4.3 is interesting only in the case when \(\sum _{n=0}^\infty 1/a_n^2 < \infty \). In this case, the condition Theorem 4.3(d) is satisfied.

A sequence similar to \(\alpha _n = n a_n^{-1}\) was used in the proof of [20, Theorem 4.1] and [14, Theorem 2.1]. There it was shown that under stronger assumptions (which in particular imply \(c_n \equiv 0\), \(b_n \equiv 0\) and \(K=0\)), the measure \(\mu \) is absolutely continuous. Whether \(\sigma (A) = \mathbb {R}\) holds was not investigated.

Example 4.5

Let \(K > 0\). Fix M such that \(\log ^{(K)}(M) > 0\). Then for the sequences \(a_n = (n+M) g_K(n+M)\) and \(b_n \equiv 0\), the assumptions of Theorem 4.3 are satisfied.

5 Applications of Theorem 4.2

5.1 Birth and Death Processes

Given sequences \(\{\lambda _n\}_{n=0}^\infty \) and \(\{\mu _n\}_{n=0}^\infty \) such that \(\lambda _n > 0, \mu _{n+1} > 0 \ (n \ge 0)\) and \(\mu _0 \ge 0\), we set

$$\begin{aligned} Q = \left( \begin{array}{cccccc} -(\lambda _0 + \mu _0) &{} \lambda _0 &{} 0 &{} 0 &{}\ldots \\ \mu _1 &{} -(\lambda _1 + \mu _1) &{} \lambda _1 &{} 0 &{} \ldots \\ 0 &{} \mu _2 &{} -(\lambda _2 + \mu _2) &{} \lambda _2 &{} \ldots \\ 0 &{} 0 &{} \mu _3 &{} -(\lambda _3 + \mu _3) &{} \ldots \\ \vdots &{} \vdots &{} \vdots &{} \vdots &{} \ddots \end{array} \right) . \end{aligned}$$
(5.1)

Let us define

$$\begin{aligned} \ell ^2(\pi ) = \Big \{ x \in \mathbb {C}^\mathbb {N} :\sum _{n=0}^\infty \pi _n |x_n|^2 < \infty \Big \}, \quad \langle x, y \rangle _{\ell ^2(\pi )} = \sum _{n=0}^\infty \pi _n x_n \overline{y_n} \end{aligned}$$

where

$$\begin{aligned} \pi _0 = 1, \quad \pi _n = \frac{\lambda _0 \lambda _1 \ldots \lambda _{n-1}}{\mu _1 \mu _2 \ldots \mu _n}. \end{aligned}$$

The operator Q is well defined on the domain \(Dom(Q) = \{ x \in \ell ^2(\pi ) :Q x \in \ell ^2(\pi )\}\). Notice that any sequence with finite support belongs to \(Dom(Q)\). If the operator Q is self-adjoint, it is of probabilistic interest to examine the spectrum \(\sigma (Q)\) of the operator Q (see, e.g., [17]).
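The following sketch builds a truncation of Q for illustrative rates (an assumption: \(\lambda _n = n+1\), \(\mu _0 = 0\), \(\mu _n = n\)) and checks that Q is symmetric with respect to the \(\ell ^2(\pi )\) inner product, which is the property behind the symmetrization used in the proof of Theorem 5.1 below.

```python
import numpy as np

N = 8
lam = np.arange(1.0, N + 1)            # lambda_0, ..., lambda_{N-1}
mu = np.arange(0.0, N)                 # mu_0 = 0, mu_1, ..., mu_{N-1}

Q = (np.diag(-(lam + mu))
     + np.diag(lam[:N-1], 1)           # lambda_n above the diagonal
     + np.diag(mu[1:], -1))            # mu_{n+1} below the diagonal

# pi_0 = 1, pi_n = lambda_0 ... lambda_{n-1} / (mu_1 ... mu_n)
pi = np.ones(N)
for n in range(1, N):
    pi[n] = pi[n-1] * lam[n-1] / mu[n]

D = np.diag(pi)
print(np.allclose(D @ Q, Q.T @ D))     # pi_i Q_{ij} = pi_j Q_{ji}: True
```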

Theorem 5.1

Let \(a = (\mu _1, \lambda _1, \mu _2, \lambda _2, \mu _3, \lambda _3, \ldots )\). Assume

figure g

Then the matrix Q is self-adjoint and satisfies \(\sigma _\mathrm {p}(Q) = \emptyset \) and \(\sigma (Q) = (-\infty , 0]\).

Proof

Let P be a diagonal matrix with entries \(\sqrt{\pi _n}\) on the main diagonal. Then we have \(\bar{A} = P Q P^{-1}\), where \(\bar{A}\) is the Jacobi matrix associated with sequences \(\bar{a}_n = \sqrt{\lambda _n \mu _{n+1}}\) and \(\bar{b}_n = -(\lambda _n + \mu _n)\) (see [18, Section 2]). Since the matrix \(P :\ell ^2(\pi ) \rightarrow \ell ^2\) is an isometry (hence P and \(P^{-1}\) are bounded), it is enough to consider only the spectrum of \(\bar{A}\). By virtue of Proposition 2.2, it is sufficient to consider the spectrum of the matrix \(\widehat{A}\), corresponding to the sequences \(\{\bar{a}_n\}\) and \(\{-\bar{b}_n\}\).

Let us consider the case \(\mu _0 = 0\). Let \(\widetilde{b}_n \equiv 0\) and

$$\begin{aligned} \widetilde{a} = (\sqrt{\lambda _0}, \sqrt{\mu _1}, \sqrt{\lambda _1}, \sqrt{\mu _2}, \sqrt{\lambda _2}, \ldots ). \end{aligned}$$

Observe that by Proposition 2.3, we have \(\widetilde{A}_e = \widehat{A}\). Hence, by Theorem 4.2, the conclusion follows.

Next, suppose that \(\mu _0 > 0\). Let \(\widetilde{b}_n \equiv 0\) and

$$\begin{aligned} \widetilde{a} = (\sqrt{\mu _0}, \sqrt{\lambda _0}, \sqrt{\mu _1}, \sqrt{\lambda _1}, \sqrt{\mu _2}, \sqrt{\lambda _2}, \ldots ). \end{aligned}$$

Applying Proposition 2.3 (now with \(\widetilde{A}_o = \widehat{A}\)) together with Theorem 4.2 finishes the proof. \(\square \)

In [16] it was shown that in the case when \(\lambda _n = \mu _{n+1} = n+a,\ a > 0\), the matrix Q has purely absolutely continuous spectrum and \(\sigma (Q) = (-\infty , 0]\). Theorem 5.1 is applicable in more general situations.

In [21], the following conjecture about spectral properties of operators of the form (5.1) was stated.

Conjecture 5.2

(Roehner and Valent [21]) Assume that

$$\begin{aligned} \lim _{n \rightarrow \infty } \mu _n/\lambda _n = 1, \quad \lim _{n \rightarrow \infty } \lambda _n/n^\alpha = a \end{aligned}$$

for constants \(a > 0\) and \(0 < \alpha \le 2\). Then \(\sigma _p(Q) = \emptyset \).

In [3] it was shown that without additional assumptions, the conjecture is false. In Theorem 5.1, we provide sufficient conditions under which Conjecture 5.2 holds.

It is worthwhile to compare Theorem 5.1 with results obtained in [18]. Let

$$\begin{aligned} \lim _{n \rightarrow \infty } \mu _n / \lambda _n = q \quad (0 < q < \infty ). \end{aligned}$$

Then in [18], it was concluded that under additional assumptions (which in particular imply \(\lambda _{k+1}/\lambda _k \rightarrow 1, \ \mu _{k+1} / \mu _k \rightarrow 1\), \(\lambda _k \rightarrow \infty \) and \(\alpha < 1\)), the matrix Q satisfies \(\sigma _\mathrm{ess}(Q) = \emptyset \). However, there is a problem in the proof of Lemma 1(iii) on page 69. The author states that \(||F D^{-1} ||_{\ell ^2} < 1\) if for a certain \(\zeta > 0\),

$$\begin{aligned} \frac{\sqrt{\lambda _k \mu _{k+1}}}{\lambda _k + \mu _k + \zeta } < \frac{1}{2}, \quad \frac{\sqrt{\lambda _k \mu _{k+1}}}{\lambda _{k+1} + \mu _{k+1} + \zeta } < \frac{1}{2}. \end{aligned}$$

In fact what we need is

$$\begin{aligned} \frac{\sqrt{\lambda _k \mu _{k+1}}}{\lambda _k + \mu _k} < \frac{1}{2} - \epsilon , \quad \frac{\sqrt{\lambda _k \mu _{k+1}}}{\lambda _{k+1} + \mu _{k+1}} < \frac{1}{2} - \epsilon \end{aligned}$$

for a certain \(\epsilon > 0\), which under the assumption \(q = 1\) is impossible because the left-hand sides converge to 1/2. In fact, Theorem 5.1 implies the opposite conclusion to the results from [18].

Note that

$$\begin{aligned} \lim _{n \rightarrow \infty } \frac{\lambda _n \mu _{n+1}}{(\lambda _n + \mu _n) (\lambda _{n+1} + \mu _{n+1})} = \frac{q}{(1+q)^2}, \end{aligned}$$

which under the assumption \(q \ne 1\) is strictly less than 1/4. Therefore [3, Theorem 1] (for a functional analytic proof see [24, Theorem 2.6]) combined with Proposition 2.2 implies that if the matrix Q is self-adjoint and \(\lambda _k \rightarrow \infty \), then \(\sigma _\mathrm{ess}(Q) = \emptyset \).

5.2 Chihara’s Problem

In [1] (see also [2, IV-Theorem 4.2]), the following result was proved.

Theorem 5.3

(Chihara [1] ) Assume that a Jacobi matrix A is self-adjoint, \(b_n \rightarrow \infty \), the smallest point \(\rho \) of \(\sigma _{ess}(A)\) is finite, and

$$\begin{aligned} \lim _{n \rightarrow \infty } \frac{a_n^2}{b_n b_{n+1}} = \frac{1}{4}. \end{aligned}$$

Then the set \(\{ x :p_n(x) = 0, n \in \mathbb {N} \}\) of the zeros of orthogonal polynomials \(\{ p_n \}\) is dense in \([\rho , \infty )\).

This suggests the following problem stated in [4] and [5].

Problem 5.4

(Chihara [4, 5]) Let the assumptions of Theorem 5.3 be satisfied. Find additional assumptions (if needed) to assure that \(\sigma _\mathrm{ess}(A) = [\rho , \infty )\).

The following theorem gives sufficient (and easy to verify) additional conditions to Problem 5.4. In fact every Jacobi matrix with \(b_n \equiv 0\) and \(a_{n+1}/a_n \rightarrow 1\) from this article provides an example (via Proposition 2.3) satisfying the conclusion of Problem 5.4.

Theorem 5.5

Assume

figure h

Then the Jacobi matrix A satisfies \(\sigma _\mathrm {ess}(A) = [-M, \infty )\). Moreover, if \(a_{n+1}/a_n \rightarrow 1\), then

$$\begin{aligned} \lim _{n \rightarrow \infty } \frac{a_n^2}{b_n b_{n+1}} = \frac{1}{4}. \end{aligned}$$
(5.2)

Proof

We show (5.2) by a direct computation, which is sketched after the proof. Without loss of generality, we may assume that \(M = 0\). Let \(-r_n = a_{n-1} - b_n + a_n\). Then \(a_{n-1} - (b_n-r_n) + a_n = 0\). Let \(\widetilde{A}\) be the Jacobi matrix for sequences \(\widetilde{a}_n = a_n, \ \widetilde{b}_n = b_n - r_n\). The matrix \(R = A - \widetilde{A}\) defines a compact self-adjoint operator on \(\ell ^2\) (because \(r_n \rightarrow 0\)). Hence, by the Weyl perturbation theorem (see [25]), \(\sigma _\mathrm{ess}(A) = \sigma _\mathrm{ess}(\widetilde{A})\). Theorem 5.1 combined with Proposition 2.2 (applied to the matrix \(\widetilde{A}\)) implies that \(\sigma _\mathrm{ess}(\widetilde{A}) = [0, \infty )\), which finishes the proof. \(\square \)
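For completeness, here is a sketch of the direct computation behind (5.2) in the case \(M = 0\); we additionally assume that \(a_n \rightarrow \infty \) (so that \(r_n / a_n \rightarrow 0\)), which is an assumption of this sketch. Writing \(b_n = a_{n-1} + a_n + r_n\) with \(r_n \rightarrow 0\) and using \(a_{n+1}/a_n \rightarrow 1\), we get

$$\begin{aligned} \frac{a_n^2}{b_n b_{n+1}} = \frac{a_n^2}{(a_{n-1} + a_n + r_n)(a_n + a_{n+1} + r_{n+1})} = \frac{1}{\left( \frac{a_{n-1}}{a_n} + 1 + \frac{r_n}{a_n} \right) \left( 1 + \frac{a_{n+1}}{a_n} + \frac{r_{n+1}}{a_n} \right) } \longrightarrow \frac{1}{2 \cdot 2} = \frac{1}{4}. \end{aligned}$$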

6 Examples

Example 6.1

Let \(b_n \equiv 0, \ \epsilon > 0, \ a_0 = \epsilon \), and \(a_{2k - 1} = a_{2k} = \widetilde{a}_k \ (k \ge 1)\) for a sequence \(\widetilde{a}_k\) with \(\widetilde{a}_k \rightarrow \infty \). Then the matrix A is always self-adjoint. Moreover, 0 is its eigenvalue if and only if

$$\begin{aligned} \sum _{k=0}^\infty \left( \frac{a_0 a_2 \ldots a_{2k}}{a_1 a_3 \ldots a_{2k+1}} \right) ^2 = \epsilon ^2 \sum _{k=1}^\infty \frac{1}{\widetilde{a}_k^2} < \infty , \end{aligned}$$

(see, e.g., [12, Theorem 3.2]). Therefore the condition Theorem 4.2(b) cannot be weakened even for the class of monotonic sequences \(a_n\).

In [19] it was shown that for \(\widetilde{a}_k = k^\alpha \ (\alpha \in (0,1))\), the spectrum satisfies \(\sigma (A) = \mathbb {R}\). In the case \(\alpha \le 1/2\), the measure \(\mu (\cdot ) = \langle E(\cdot ) \delta _0, \delta _0 \rangle \) is absolutely continuous, whereas for \(\alpha > 1/2\), the measure \(\mu \) is absolutely continuous on the set \(\mathbb {R} \backslash \{ 0 \}\).

Example 6.2

Let \(b_n \equiv 0\) and \(a_n = n^\alpha + c_n \ (0 < \alpha \le 2/3)\), where \(c_{2n} = 1\) and \(c_{2n+1} = 0\). Then (see [9]) \(\sigma (A) = \mathbb {R} \backslash (-1,1)\), and the measure \(\mu \) is absolutely continuous on \(\mathbb {R} \backslash [-1,1]\). This shows that the condition Theorem 4.2(c) cannot be replaced by \([(a_{n+1}/a_n)^2 - 1]^- \rightarrow 0\).

Example 6.3

Let \(a_0 = 1\). For \(k! \le n < (k+1)!\), we define \(a_n = \sqrt{k!}\). For \(n > 0\), we have

$$\begin{aligned} \frac{a_{n+1}}{a_n} = {\left\{ \begin{array}{ll} \sqrt{k} &{} \quad \text {if } n+1=k!, \\ 1 &{} \quad \text {otherwise.} \end{array}\right. } \end{aligned}$$

Define \(b_n \equiv 0\). We have \(a_n \le \sqrt{n+1}\). Therefore \(\sum _{n=0}^\infty 1/a_n^2 = \infty \). Observe that the assumptions of Theorem 4.2 are satisfied. Moreover, \(a_{n+1}/a_n \nrightarrow 1\) and neither [15, Theorem 3.1] nor [11, Lemma 2.6] can be applied.
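The sequence in this example can be generated directly; the following sketch (the printed ranges are illustrative choices) computes \(a_n\), displays the jumps of \(a_{n+1}/a_n\) at \(n + 1 = k!\), and illustrates that the partial sums of \(1/a_n^2\) keep growing.

```python
import math

def a(n):
    # a_0 = 1; a_n = sqrt(k!) for k! <= n < (k + 1)!
    if n == 0:
        return 1.0
    k = 1
    while math.factorial(k + 1) <= n:
        k += 1
    return math.sqrt(math.factorial(k))

# the ratio a_{n+1}/a_n equals sqrt(k) exactly when n + 1 = k!, and 1 otherwise
print([round(a(n + 1) / a(n), 3) for n in range(1, 30)])

# partial sums of 1/a_n^2 grow without bound (recall a_n <= sqrt(n + 1))
print([sum(1.0 / a(n)**2 for n in range(m)) for m in (200, 2000, 20000)])
```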