## 1 Introduction

Let $$\mathcal {H}$$ be a complex Hilbert space. Consider two sequences $$a = (a_n :n \ge 0)$$ and $$b = (b_n :n \ge 0)$$ of bounded linear operators on $$\mathcal {H}$$ such that for every $$n \ge 0$$, the operator $$a_n$$ has a bounded inverse and $$b_n$$ is self-adjoint. Then one defines the symmetric tridiagonal matrix by the formula

\begin{aligned} \mathcal {A}= \begin{pmatrix} b_0 &{}\quad a_0 &{}\quad 0 &{}\quad 0 &{}\quad \ldots \\ a_0^* &{}\quad b_1 &{}\quad a_1 &{}\quad 0 &{}\quad \ldots \\ 0 &{}\quad a_1^* &{}\quad b_2 &{}\quad a_2 &{}\quad \ldots \\ 0 &{}\quad 0 &{}\quad a_2^* &{}\quad b_3 &{}\quad \\ \vdots &{}\quad \vdots &{}\quad \vdots &{}\quad &{}\quad \ddots \end{pmatrix}. \end{aligned}

The action of $$\mathcal {A}$$ on any sequence of elements from $$\mathcal {H}$$ is defined by formal matrix multiplication. Let the operator A be the minimal operator associated with $$\mathcal {A}$$. Specifically, by A we mean the closure in $$\ell ^2(\mathbb {N}; \mathcal {H})$$ of the restriction of $$\mathcal {A}$$ to the set of sequences with finite support. Let us recall that

\begin{aligned} \langle {x}, {y} \rangle _{\ell ^2(\mathbb {N}; \mathcal {H})} = \sum _{n=0}^\infty \langle {x_n}, {y_n} \rangle _\mathcal {H}, \quad \ell ^2(\mathbb {N}; \mathcal {H}) = \left\{ x \in \mathcal {H}^\mathbb {N} :\langle {x}, {x} \rangle _{\ell ^2(\mathbb {N}; \mathcal {H})} < \infty \right\} . \end{aligned}

The operator A is called a block Jacobi matrix. It is self-adjoint provided the Carleman condition is satisfied, i.e.

\begin{aligned} \sum _{n=0}^\infty \frac{1}{\left||{a_n} \right||} = \infty , \end{aligned}
(1)

where $$\left||{\cdot } \right||$$ is the operator norm (see [2, Theorem VII-2.9]).
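To build intuition, the Carleman condition (1) can be checked numerically in the scalar case $$\mathcal {H}= \mathbb {C}$$, where $$\left||{a_n} \right|| = |a_n|$$. The following sketch (our illustration, not part of the argument) compares linearly growing weights, for which the series diverges, with geometrically growing ones, for which it converges:

```python
# Partial sums of the Carleman series sum_n 1/|a_n| in the scalar case.
# Linear growth a_n = n + 1 gives divergence (so (1) holds);
# geometric growth a_n = 2^n gives convergence, so (1) fails.
def carleman_partial_sums(a, n_terms):
    """Return the partial sums of sum_{n=0}^{n_terms-1} 1/|a(n)|."""
    total, sums = 0.0, []
    for n in range(n_terms):
        total += 1.0 / abs(a(n))
        sums.append(total)
    return sums

linear = carleman_partial_sums(lambda n: n + 1, 500)        # harmonic series: diverges
geometric = carleman_partial_sums(lambda n: 2.0 ** n, 500)  # partial sums bounded by 2

print(linear[-1])     # ~6.79 after 500 terms, and growing without bound
print(geometric[-1])  # ~2.0, the limit of the geometric series
```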

Block Jacobi matrices are related to topics such as matrix orthogonal polynomials (see [8]), the matrix moment problem (see [13]), difference equations of finite order (see [10]), partial difference equations (see [2]), and level dependent quasi-birth–death processes (see [9] and references therein). For further applications, we refer to [20, 25].

The theory of block Jacobi matrices is much less developed than the scalar theory, i.e., the case $$\mathcal {H}= \mathbb {C}$$. The aim of this paper is to extend the results obtained in [26, 28] for $$\mathcal {H}= \mathbb {R}$$ to the case of arbitrary $$\mathcal {H}$$. This is of interest because we obtain new results even for $$\mathcal {H}= \mathbb {C}^d$$ with $$d \ge 1$$, which (apart from $$\mathbb {R}$$) is the most commonly studied case.

Originally, we were interested in the unbounded case, i.e.,

\begin{aligned} \begin{aligned} \lim _{n \rightarrow \infty } \big \Vert {a_n^{-1}} \big \Vert = 0. \end{aligned} \end{aligned}

But it seems that even the bounded case is not well understood (see [19, 23]). Therefore, we present a unified treatment of both bounded and unbounded cases. In the unbounded case, the formulation of our results is simpler.

In the proofs of the presented theorems we will use the following notion. A nonzero sequence $$(u_n : n \ge 0)$$ will be called a generalized eigenvector associated with $$z \in \mathbb {C}$$ if it satisfies the recurrence relation

\begin{aligned} a_{n-1}^* u_{n-1} + b_n u_n + a_n u_{n+1} = z u_n, \quad (n \ge 1). \end{aligned}

In Sect. 3, we show the correspondence between the asymptotic behavior of generalized eigenvectors and the spectral properties of A.

The first main result of this article is Theorem 4, which generalizes the results obtained in [26] to the operator case. Its formulation involves an additional parameter sequence $$\alpha = (\alpha _n : n \ge 0)$$. In Sect. 5, we present some of the possible choices of $$\alpha$$. The following theorem is a special case of Theorem 4 (obtained for $$\alpha _n = a_n$$).

### Theorem 1

Assume

\begin{aligned} \begin{aligned} \lim _{n \rightarrow \infty } \big \Vert {a_n^{-1}} \big \Vert = 0, \quad \lim _{n \rightarrow \infty } \big \Vert {a_n^{-1} b_n} \big \Vert = 0 \end{aligned} \end{aligned}

and

• $$(a) \quad \sum \limits _{n=1}^\infty \frac{\left||{[a_{n+1} a_{n+1}^* - a_n^* a_n]^-} \right||}{\left||{a_n} \right||^2} < \infty$$,

• $$(b) \quad \sum \limits _{n=0}^\infty \frac{\left||{a_n b_{n+1} - b_n a_n} \right||}{\left||{a_n} \right||^2} < \infty$$,

• $$(c) \quad \sum \limits _{n=0}^\infty \frac{1}{\left||{a_n} \right||^2} = \infty$$.

Then the operator A is self-adjoint. Moreover, $$\sigma (A) = \mathbb {R}$$ and $$\sigma _{\text {p}}(A) = \emptyset$$ provided

\begin{aligned} \lim _{n \rightarrow \infty } \left\| \frac{a_n}{\left||{a_n} \right||} - C \right\| = 0, \end{aligned}

where C is invertible.

Before we formulate the next result, we need a definition. Given a positive integer N, we define the total N-variation $$\mathcal {V}_N$$ of a sequence of vectors $$x = \big (x_n : n \ge 0 \big )$$ from a normed vector space V by

\begin{aligned} \mathcal {V}_N(x) = \sum _{n = 0}^\infty \left||{x_{n + N} - x_n} \right||. \end{aligned}

Observe that if $$(x_n : n \ge 0)$$ has a finite total N-variation, then for each $$j \in \{0, \ldots , N-1\}$$, a subsequence $$(x_{k N + j} : k \ge 0)$$ is a Cauchy sequence.

The following theorem is interesting even for $$N=1$$. Since block periodic Jacobi matrices have recently received some attention (see [7, 19]), we formulate it for an arbitrary natural number N.

### Theorem 2

Let $$N \ge 1$$ be an integer. Assume

\begin{aligned} \mathcal {V}_N \left( a_n^{-1} : n \ge 0\right) + \mathcal {V}_N \left( a_n^{-1} b_n : n \ge 0\right) + \mathcal {V}_N \left( a_n^{-1} a_{n-1}^* : n \ge 1\right) < \infty . \end{aligned}

Let

• $$(a) \quad \lim \limits _{n \rightarrow \infty } \left||{a_n^{-1} - T_n} \right|| = 0$$,

• $$(b) \quad \lim \limits _{n \rightarrow \infty } \left||{a_n^{-1} b_n - Q_n} \right|| = 0$$,

• $$(c) \quad \lim \limits _{n \rightarrow \infty } \left||{a_n^{-1} a_{n-1}^* - R_n} \right|| = 0$$,

• $$(d) \quad \lim \limits _{n \rightarrow \infty } \left\| \frac{a_n}{\left||{a_n} \right||} - C_n \right\| = 0$$

for N-periodic sequences $$(T_n: n \ge 0)$$, $$(Q_n : n \ge 0)$$, $$(R_n : n \ge 0)$$, and $$(C_n : n \ge 0)$$ with $$C_n$$ invertible. Let $$\Lambda$$ be the set of $$\lambda \in \mathbb {R}$$ such that

\begin{aligned} \mathcal {F}(\lambda ) = \mathrm {Re} \left[ { \begin{pmatrix} 0 &{}\quad -C_{N-1} \\ C_{N-1}^* &{}\quad 0 \end{pmatrix} \prod _{i=0}^{N-1} \begin{pmatrix} 0 &{}\quad \mathrm {Id}\\ -R_i &{}\quad \lambda T_i - Q_i \end{pmatrix} } \right] \end{aligned}

is a strictly positive or a strictly negative operator on $$\mathcal {H}\oplus \mathcal {H}$$. Then for every compact set $$K \subset \Lambda$$, there are positive constants $$c_1, c_2$$ such that for every generalized eigenvector associated with $$\lambda \in K$$ and every $$n \ge 1$$,

\begin{aligned} c_1 \left( \left||{u_0} \right||^2 + \left||{u_1} \right||^2\right) \le \left||{a_n} \right|| \left( \left||{u_{n-1}} \right||^2 + \left||{u_n} \right||^2\right) \le c_2 \left( \left||{u_0} \right||^2 + \left||{u_1} \right||^2\right) . \end{aligned}
(2)

When the Carleman condition is satisfied, the asymptotics (2) implies a conclusion similar to that of Theorem 1; i.e., $$\sigma _{\text {p}}(A) \cap \Lambda = \emptyset$$ and $$\sigma (A) \supset \overline{\Lambda }$$. In the scalar case, subordination theory (see, e.g., [6]) implies that the spectrum of A is in fact purely absolutely continuous on $$\Lambda$$. Unfortunately, a subordination theory for the nonscalar case has not yet been formulated (but there is some progress; see [5]). We expect that in our case the spectrum of A is, as in the scalar case, purely absolutely continuous of maximal multiplicity on $$\Lambda$$.

It is also of interest to obtain a characterization of when the symmetric operator A is not self-adjoint (see, e.g., [12, 29]). The following theorem shows that in the setting of Theorem 2, the Carleman condition is also necessary for the self-adjointness of A.

### Theorem 3

Let the assumptions of Theorem 2 be satisfied with $$\Lambda \ne \emptyset$$. If (1) is not satisfied, then the conclusion of Theorem 2 holds for $$\Lambda = \mathbb {C}$$. Consequently, for every $$z \in \mathbb {C}$$,

\begin{aligned} \ker [A^* - z\mathrm {Id}] \simeq \mathcal {H}. \end{aligned}

Hence, we have the so-called complete indeterminate case. In particular, the symmetric operator A is not self-adjoint but it has self-adjoint extensions.

The estimate implied by Theorem 3 is useful even in the scalar case (see [3]).

The method of the proofs of the presented theorems is based on an extension of the techniques used in [26, 28]. In these articles, one examines the positivity or the convergence of sequences of quadratic forms on $$\mathbb {R}^2$$ acting on the vector of two consecutive values of a generalized eigenvector u associated with $$\lambda \in \Lambda \subset \mathbb {R}$$; i.e.,

\begin{aligned} S_n = \left\langle X_n(\lambda ) \begin{pmatrix} u_{n-1} \\ u_n \end{pmatrix}, \begin{pmatrix} u_{n-1} \\ u_n \end{pmatrix} \right\rangle _{\mathbb {R}^2}, \end{aligned}

for a suitably chosen sequence $$(X_n(\lambda ) : n \ge 0)$$, $$X_n(\lambda ) \in \mathcal {B}(\mathbb {R}^2)$$. In trying to extend this method, one encounters several difficulties.

First of all, what is the right quadratic form in the operator case? A single real number should control the norm of generalized eigenvectors, which, unlike in the scalar case, need not be real. Moreover, the convergence (or at least the positivity) should be easily expressible in terms of the recurrence relation. What further complicates matters is that, in general, the parameters $$(a_n : n \ge 0)$$ and $$(b_n : n \ge 0)$$, unlike scalars, do not commute with each other. The former need not even be self-adjoint. Moreover, since the Hilbert space $$\mathcal {H}$$ can be arbitrary, we cannot assume that it is locally compact. This complicates the analysis of the proposed quadratic forms.

The second issue concerns how to express quantitatively the rate of divergence, or the deviation from positivity, of the parameters. As simple examples of diagonal $$a_n$$ and $$b_n$$ show, the divergence of the norms alone is too coarse. The scaling from Theorem 2(d) seems to be a natural one. However, different possibilities are also known in the literature (see [11]).

The article is organized as follows. In Sect. 2, we present basic notions needed in the rest of the article. In Sect. 3, we define generalized eigenvectors and prove the correspondence of their asymptotic behavior with the spectral properties of A. In Sect. 4, we prove Theorem 4. Next, in Sect. 5, we present its special cases. In particular, the choice of the parameter sequence $$\alpha _n \equiv \mathrm {Id}$$ motivates us to define the notion of N-shifted Turán determinants in Sect. 6, which is devoted to the proofs of Theorems 2 and 3. In Sect. 7, we present a situation in which one can compute the exact asymptotics of u. In the scalar case, this has applications to the so-called Christoffel functions. Finally, in Sect. 8, we present some examples illustrating the sharpness of the assumptions.

## 2 Preliminaries

In this section, we collect some basic notation and properties, which will be needed hereafter.

### 2.1 Operators

On the space of bounded operators, we consider only the norm topology. In particular, a sequence $$(X_n : n \ge 0)$$ converges to X provided

\begin{aligned} \lim _{n \rightarrow \infty } \left||{X_n - X} \right|| = 0, \end{aligned}

where $$\left||{\cdot } \right||$$ is the operator norm.

For a sequence of operators $$(X_n : n \ge 0)$$ and $$n_0, n_1 \in \mathbb {N}$$, we set

\begin{aligned} \prod _{k=n_0}^{n_1} X_k = {\left\{ \begin{array}{ll} X_{n_1} X_{n_1 - 1} \cdots X_{n_0}, &{} n_1 \ge n_0, \\ \mathrm {Id}, &{} \text {otherwise.} \end{array}\right. } \end{aligned}

For any bounded operator X, we define its real part by

\begin{aligned} \mathrm {Re} \left[ {X} \right] = \frac{1}{2} (X + X^*). \end{aligned}

Direct computation shows that for any bounded operator Y, one has

\begin{aligned} Y^* \mathrm {Re} \left[ {X} \right] Y = \mathrm {Re} \left[ {Y^* X Y} \right] \end{aligned}
(3)

and

\begin{aligned} \mathrm {Re} \left[ {X + Y} \right] = \mathrm {Re} \left[ {X} \right] + \mathrm {Re} \left[ {Y} \right] . \end{aligned}
(4)

Moreover,

\begin{aligned} \left||{\mathrm {Re} \left[ {X} \right] } \right|| \le \left||{X} \right||. \end{aligned}
(5)
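The identities (3)-(5) are straightforward to verify symbolically, and also easy to confirm numerically in a finite-dimensional case. The following sketch (our illustration, with $$\mathcal {H}= \mathbb {C}^3$$ and random matrices standing in for bounded operators) checks all three:

```python
import numpy as np

# Check (3), (4), (5) for Re[X] = (X + X*)/2 on H = C^3.
rng = np.random.default_rng(0)

def randc(n):
    """A random n x n complex matrix."""
    return rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

def re_part(X):
    return 0.5 * (X + X.conj().T)

X, Y = randc(3), randc(3)

# (3): Y* Re[X] Y = Re[Y* X Y]
assert np.allclose(Y.conj().T @ re_part(X) @ Y, re_part(Y.conj().T @ X @ Y))
# (4): Re[X + Y] = Re[X] + Re[Y]
assert np.allclose(re_part(X + Y), re_part(X) + re_part(Y))
# (5): ||Re[X]|| <= ||X|| in the operator (spectral) norm
assert np.linalg.norm(re_part(X), 2) <= np.linalg.norm(X, 2) + 1e-12
print("identities (3)-(5) verified")
```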

For a number $$x \in \mathbb {R}$$, we define its negative part by the formula

\begin{aligned} x^- = \max (0, -x). \end{aligned}

For a self-adjoint operator X, we define $$X^-$$ by the spectral theorem.

For any bounded operator X, we define its absolute value by

\begin{aligned} |X| = (X^* X)^{1/2}. \end{aligned}

### 2.2 Total Variation

Given a positive integer N, we define the total N-variation $$\mathcal {V}_N$$ of a sequence of vectors $$x = (x_n : n \ge 0)$$ from a normed vector space V by

\begin{aligned} \mathcal {V}_N(x) = \sum _{n = 0}^\infty \left||{x_{n + N} - x_n} \right||. \end{aligned}

Observe that if $$(x_n : n \ge 0)$$ has a finite total N-variation, then for each $$j \in \{0, \ldots , N-1\}$$, a subsequence $$(x_{k N + j} : k \ge 0)$$ is a Cauchy sequence.
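A concrete scalar example (ours, for illustration) shows how the total N-variation depends on N: the sequence $$x_n = (-1)^n + \frac{1}{n+1}$$ has infinite total 1-variation yet finite total 2-variation, and its even- and odd-indexed subsequences converge to 1 and $$-1$$, respectively.

```python
# Total N-variation of x_n = (-1)^n + 1/(n+1), truncated to finitely many terms.
def x(n):
    return (-1) ** n + 1.0 / (n + 1)

def var_partial(N, terms):
    """Partial sum of the total N-variation: sum_n |x_{n+N} - x_n|."""
    return sum(abs(x(n + N) - x(n)) for n in range(terms))

print(var_partial(1, 2000))  # grows linearly in the number of terms: V_1 is infinite
print(var_partial(2, 2000))  # ~1.5: |x_{n+2} - x_n| = 2/((n+1)(n+3)) telescopes
```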

### Proposition 1

If V is a normed algebra, then

\begin{aligned} \begin{aligned} \mathcal {V}_N(x_n y_n : n \ge 0) \le \sup _{n \in \mathbb {N}}{\Vert {x_n}\Vert }\ \mathcal {V}_N(y_n : n \ge 0) + \sup _{n \in \mathbb {N}}{\Vert {y_n} \Vert }\mathcal {V}_N(x_n : n \ge 0). \end{aligned}\end{aligned}

### Proof

Observe that

\begin{aligned} x_{n+N} y_{n+N} - x_n y_n = (x_{n+N} - x_n) y_{n+N} + x_{n} (y_{n+N} - y_n). \end{aligned}

Hence,

\begin{aligned} \left||{x_{n+N} y_{n+N} - x_n y_n} \right|| \le \left||{x_{n+N} - x_n} \right|| \left||{y_{n+N}} \right|| + \left||{x_{n}} \right|| \left||{y_{n+N} - y_n} \right||. \end{aligned}

Consequently,

\begin{aligned} \left||{x_{n+N} y_{n+N} - x_n y_n} \right|| \le \sup _{m \in \mathbb {N}} \left||{y_{m}} \right|| \left||{x_{n+N} - x_n} \right|| + \sup _{m \in \mathbb {N}} \left||{x_{m}} \right|| \left||{y_{n+N} - y_n} \right||. \end{aligned}

Summing over n, the result follows. $$\square$$
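Since the bound in the proof is termwise, the inequality of Proposition 1 also holds for partial sums of the N-variations, which makes it easy to sanity-check numerically. A sketch (ours) with random $$2 \times 2$$ matrix sequences:

```python
import numpy as np

# Termwise check of Proposition 1 for finite truncations of the N-variation.
rng = np.random.default_rng(1)
N, length = 3, 50
x = rng.standard_normal((length, 2, 2))
y = rng.standard_normal((length, 2, 2))

def var_N(seq, N):
    """Partial total N-variation in the spectral norm."""
    return sum(np.linalg.norm(seq[n + N] - seq[n], 2) for n in range(len(seq) - N))

sup_x = max(np.linalg.norm(m, 2) for m in x)
sup_y = max(np.linalg.norm(m, 2) for m in y)
xy = np.array([a @ b for a, b in zip(x, y)])  # the product sequence (x_n y_n)

assert var_N(xy, N) <= sup_x * var_N(y, N) + sup_y * var_N(x, N)
print("Proposition 1 inequality holds on this sample")
```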

## 3 Generalized Eigenvectors and the Transfer Matrix

For a number $$z \in \mathbb {C}$$, a nonzero sequence $$u = (u_n :n \ge 0)$$ will be called a generalized eigenvector provided that it satisfies

\begin{aligned} a_{n-1}^* u_{n-1} + b_n u_n + a_n u_{n+1} = z u_n, \quad (n \ge 1). \end{aligned}
(6)

For each nonzero $$\alpha \in \mathcal {H}\oplus \mathcal {H}$$, there is a unique generalized eigenvector u such that $$(u_0, u_1)^t = \alpha$$. If the recurrence relation (6) holds also for $$n = 0$$, with the convention that $$a_{-1} = u_{-1} = 0$$, then u is a formal eigenvector of the matrix $$\mathcal {A}$$ associated with z.

For each $$z \in \mathbb {C}$$, we define the transfer matrix $$B_n(z)$$ by

\begin{aligned} B_n(z) = \begin{pmatrix} 0 &{}\quad \mathrm {Id}\\ -a_n^{-1} a_{n-1}^* &{}\quad a_{n}^{-1} (z \mathrm {Id}- b_n) \end{pmatrix}, \quad (n > 0). \end{aligned}
(7)

Then for any generalized eigenvector u corresponding to z, we have

\begin{aligned} \begin{pmatrix} u_n \\ u_{n+1} \end{pmatrix} = B_n(z) \begin{pmatrix} u_{n-1}\\ u_n \end{pmatrix}, \quad (n > 0). \end{aligned}
(8)

It is easy to verify that

\begin{aligned} \begin{aligned} B_n^{-1}(z) = \begin{pmatrix} \big ( a_{n-1}^*\big ) ^{-1} (z \mathrm {Id}- b_n) &{}{}\quad -\big (a_{n-1}^* \big )^{-1} a_n \\ \mathrm {Id}&{}{}\quad 0 \end{pmatrix}. \end{aligned} \end{aligned}
(9)
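In the scalar case $$\mathcal {H}= \mathbb {C}$$, the transfer matrices are $$2 \times 2$$, and the relations (8) and (9) can be verified directly. The following sketch (our illustration; the coefficient sequences are arbitrary choices) runs the recurrence (6) both ways:

```python
import numpy as np

# Scalar case H = C: a_n are nonzero complex numbers, b_n real (self-adjoint).
rng = np.random.default_rng(2)
a = (rng.standard_normal(20) + 3.0) + 1j * rng.standard_normal(20)
b = rng.standard_normal(20)
z = 0.7 + 0.3j

def B(n):       # transfer matrix (7)
    return np.array([[0.0, 1.0],
                     [-np.conj(a[n - 1]) / a[n], (z - b[n]) / a[n]]])

def B_inv(n):   # its inverse, formula (9)
    return np.array([[(z - b[n]) / np.conj(a[n - 1]), -a[n] / np.conj(a[n - 1])],
                     [1.0, 0.0]])

# run the recurrence a_{n-1}^* u_{n-1} + b_n u_n + a_n u_{n+1} = z u_n directly...
u = [1.0 + 0j, 0.5 - 0.2j]
for n in range(1, 19):
    u.append(((z - b[n]) * u[n] - np.conj(a[n - 1]) * u[n - 1]) / a[n])

# ...and compare with the transfer-matrix iteration (8)
v = np.array([u[0], u[1]])
for n in range(1, 19):
    v = B(n) @ v
    assert np.allclose(v, [u[n], u[n + 1]])

assert np.allclose(B(5) @ B_inv(5), np.eye(2))
print("relations (8) and (9) verified")
```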

The rest of this section concerns relations between generalized eigenvectors and spectral properties of block Jacobi matrices.

The proof of [1, Lemma 2.1] implies that the adjoint operator to A can be described as the restriction of $$\mathcal {A}$$ to $$\ell ^2(\mathbb {N}; \mathcal {H})$$; i.e., $$A^* x = \mathcal {A}x$$ for $$x \in {{\mathrm{Dom}}}(A^*)$$, where

\begin{aligned} {{\mathrm{Dom}}}(A^*) = \{ x \in \ell ^2(\mathbb {N}; \mathcal {H}) :\mathcal {A}x \in \ell ^2(\mathbb {N}; \mathcal {H})\}. \end{aligned}
(10)

The following proposition is essential in examining properties of $$A^*$$.

### Proposition 2

Let $$z \in \mathbb {C}$$. The sequence u satisfies $$\mathcal {A}u = z u$$ if and only if

\begin{aligned} \begin{aligned}&u_0 \in \mathcal {H}, \quad u_1 = a_0^{-1} (z \mathrm {Id}- b_0) u_0, \\&a_{n-1}^* u_{n-1} + b_n u_n + a_n u_{n+1} = z u_n \quad (n \ge 1). \end{aligned} \end{aligned}
(11)

### Proof

It follows immediately from a direct computation. $$\square$$

The following corollary identifies some situations in which the deficiency spaces of the operator A can be described explicitly.

### Corollary 1

Let $$z \in \mathbb {C}$$. If every generalized eigenvector associated with z belongs to $$\ell ^2(\mathbb {N}; \mathcal {H})$$, then

\begin{aligned} \ker [A^* - z \mathrm {Id}] \simeq \mathcal {H}. \end{aligned}
(12)

In particular, if (12) is satisfied for $$z = \pm i$$, then the symmetric operator A is not self-adjoint, but it has self-adjoint extensions.

### Proof

Observe that the space $$\ker [A^* - z \mathrm {Id}]$$ is a Hilbert space. Indeed, since $$\ker [A^* - z \mathrm {Id}] = \mathrm {Im} \left[ {A - \overline{z} \mathrm {Id}} \right] ^\perp$$ (see, e.g., [24, formula (7.1.45)]), it is a closed subspace of $$\ell ^2(\mathbb {N}; \mathcal {H})$$.

Define the operator $$T : \ker [A^* - z \mathrm {Id}] \rightarrow \mathcal {H}$$ by $$T u = u_0$$. Then by (11), $$T u = 0$$ implies $$u=0$$; hence, T is injective. To prove the surjectivity, take $$u_0 \in \mathcal {H}\setminus \{ 0 \}$$; then the sequence u defined by (11) is a generalized eigenvector associated with z. Therefore, it belongs to $$\ell ^2(\mathbb {N}; \mathcal {H})$$. Hence, by (10), $$u \in {{\mathrm{Dom}}}(A^*)$$, and consequently, T is surjective. Since the mapping T is a contraction, it is a bounded linear bijection. By the inverse mapping theorem, the operator T is a linear isomorphism.

The assertion about the self-adjoint extensions of A follows from von Neumann’s extension theorem (see, e.g., [24, Theorem 7.4.1]). $$\square$$

### Remark 1

The proof of [21, Theorem 1] shows that the same conclusion holds if every generalized eigenvector associated with $$z=0$$ belongs to $$\ell ^2(\mathbb {N}; \mathcal {H})$$. As was pointed out in [4], the formulation of [21, Theorem 1] has a typo.

The following proposition is an adaptation of [26, Proposition 2.1]. We include it to keep the article self-contained.

### Proposition 3

Let $$z \in \mathbb {C}$$. If no generalized eigenvector associated with z belongs to $$\ell ^2(\mathbb {N}; \mathcal {H})$$, then $$z \notin \sigma _{\text {p}}(A^*)$$ and $$z \in \sigma (A^*)$$.

### Proof

Let $$u \ne 0$$ be such that $$\mathcal {A}u = z u$$. Then by Proposition 2, u is a generalized eigenvector associated with z. By assumption, $$u \notin \ell ^2(\mathbb {N}; \mathcal {H})$$. Therefore, $$u \notin {{\mathrm{Dom}}}(A^*)$$, and consequently, $$z \notin \sigma _{\text {p}}(A^*)$$.

Observe that the vector u such that $$(\mathcal {A}- z \mathrm {Id}) u = \delta _0 v$$, where $$0 \ne v \in \mathcal {H}$$, has to satisfy the following recurrence relation:

\begin{aligned} \begin{aligned}&b_0 u_0 + a_0 u_1 = z u_0 + v, \\ {}&a_{n-1}^* u_{n-1} + b_n u_n + a_n u_{n+1} = z u_n, \quad (n \ge 1). \end{aligned} \end{aligned}

Hence u is a generalized eigenvector; thus $$u \notin \ell ^2(\mathbb {N}; \mathcal {H})$$. Therefore, $$u \notin {{\mathrm{Dom}}}(A^*)$$, and consequently, the operator $$A^* - z \mathrm {Id}$$ is not surjective; i.e., $$z \in \sigma (A^*)$$. $$\square$$

### Remark 2

In the scalar case, if the assumptions of Proposition 3 are satisfied for $$z=0$$, then the operator A is self-adjoint. We expect the same behavior for every $$\mathcal {H}$$.

## 4 A Commutator Approach

The aim of this section is to prove the following theorem.

### Theorem 4

Let A be a block Jacobi matrix. Assume that there is a sequence $$(\alpha _n : n \ge 0)$$ of elements from $$\mathcal {B}(\mathcal {H})$$ such that

• $$(a) \quad \sum \limits _{n=1}^\infty \frac{\big \Vert \mathrm{Re}[\alpha _{n+1} a_{n+1}^* - a_n^* a_{n-1}^{-1} \alpha _{n-1} a_n]^-\big \Vert }{\left||{\alpha _n a_n^*} \right||} < \infty$$,

• $$(b) \quad \sum \limits _{n=1}^\infty \frac{\big \Vert a_{n-1}^{-1} \alpha _{n-1} a_n - \alpha _n\big \Vert }{\left||{\alpha _n a_n^*} \right||} < \infty$$,

• $$(c)\quad \sum \limits _{n=1}^\infty \frac{\big \Vert \alpha _n b_{n+1} - b_n a_{n-1}^{-1} \alpha _{n-1} a_n\big \Vert }{\left||{\alpha _n a_n^*} \right||} < \infty$$,

• $$(d)\quad \sum \limits _{n=1}^\infty \frac{1}{\left||{\alpha _n a_n^*} \right||} = \infty$$.

Let $$\Lambda$$ be the set of $$\lambda \in \mathbb {R}$$ such that the following limit exists in the norm and defines a strictly positive operator on $$\mathcal {H}\oplus \mathcal {H}$$:

\begin{aligned} C(\lambda ) = \lim _{n \rightarrow \infty } \frac{1}{\left||{\alpha _n a_n^*} \right||} \mathrm {Re} \left[ { \begin{pmatrix} \alpha _n a_{n}^* &{}\quad -(\lambda \mathrm {Id}- b_n) a_{n-1}^{-1} \alpha _{n-1} a_n \\ 0 &{}\quad a_n^* a_{n-1}^{-1} \alpha _{n-1} a_n \end{pmatrix}} \right] . \end{aligned}

Then $$\sigma _{\text {p}}(A^*) \cap \Lambda = \emptyset$$ and $$\sigma (A^*) \supset \overline{\Lambda }$$.

Given a sequence $$(\alpha _n : n \ge 0)$$ of elements from $$\mathcal {B}(\mathcal {H})$$ and $$\lambda \in \mathbb {R}$$, we define a sequence of quadratic forms $$(Q_n^\lambda : n \ge 0)$$ on $$\mathcal {H}\oplus \mathcal {H}$$ by the formula

\begin{aligned} Q_n^\lambda (v) = \frac{1}{\left||{\alpha _n a_n^*} \right||} \left\langle \mathrm {Re} \left[ { \begin{pmatrix} \alpha _{n-1} a_{n-1}^* &{}\quad -\alpha _{n-1} (\lambda \mathrm {Id}- b_n) \\ 0 &{}\quad \alpha _n a_n^* \end{pmatrix}} \right] v, v \right\rangle . \end{aligned}

Moreover, we define a sequence of functions $$(S_n : n \ge 0)$$ by the formula

\begin{aligned} \begin{aligned} S_n(\alpha , \lambda ) = \left\| {\alpha _n a_n^*} \right\| Q_n^\lambda \left( \begin{pmatrix} u_{n-1} \\ u_n \end{pmatrix} \right) , \end{aligned} \end{aligned}
(13)

where u is the generalized eigenvector corresponding to $$\lambda$$ such that $$(u_0, u_1)^t = \alpha \in \mathcal {H}\oplus \mathcal {H}$$.

The first proposition provides a different representation of $$S_n$$.

### Proposition 4

An alternative formula for $$S_n$$ is

\begin{aligned} S_n(\alpha , \lambda ) = \left\langle \mathrm {Re} \left[ { \begin{pmatrix} \alpha _n a_n^* &{}\quad -(\lambda \mathrm {Id}- b_n) a_{n-1}^{-1} \alpha _{n-1} a_n \\ 0 &{}\quad a_n^* a_{n-1}^{-1} \alpha _{n-1} a_n \end{pmatrix}} \right] \begin{pmatrix} u_{n} \\ u_{n+1} \end{pmatrix}, \begin{pmatrix} u_{n} \\ u_{n+1} \end{pmatrix} \right\rangle . \end{aligned}

### Proof

By (8), one has

\begin{aligned} S_n(\alpha , \lambda )= & {} \left\langle \mathrm {Re} \left[ { \begin{pmatrix} \alpha _{n-1} a_{n-1}^* &{}\quad -\alpha _{n-1} (\lambda \mathrm {Id}- b_n) \\ 0 &{}\quad \alpha _n a_n^* \end{pmatrix}} \right] \right. \\&\left. B^{-1}_n(\lambda ) \begin{pmatrix} u_{n} \\ u_{n+1} \end{pmatrix}, B^{-1}_n(\lambda ) \begin{pmatrix} u_{n} \\ u_{n+1} \end{pmatrix} \right\rangle \\= & {} \left\langle \left( B_n^{-1}(\lambda )\right) ^{*} \mathrm {Re} \left[ { \begin{pmatrix} \alpha _{n-1} a_{n-1}^* &{}\quad -\alpha _{n-1} (\lambda \mathrm {Id}- b_n) \\ 0 &{}\quad \alpha _n a_n^* \end{pmatrix} } \right] \right. \\&\left. B_n^{-1}(\lambda ) \begin{pmatrix} u_{n} \\ u_{n+1} \end{pmatrix}, \begin{pmatrix} u_{n} \\ u_{n+1} \end{pmatrix} \right\rangle . \end{aligned}

Then formula (9) implies

\begin{aligned}&\left( B_n^{-1}(\lambda )\right) ^{*} \begin{pmatrix} \alpha _{n-1} a_{n-1}^* &{}\quad -\alpha _{n-1} (\lambda \mathrm {Id}- b_n) \\ 0 &{}\quad \alpha _n a_n^* \end{pmatrix} B_n^{-1}(\lambda ) \\&\quad = \begin{pmatrix} (\lambda \mathrm {Id}- b_n) a_{n-1}^{-1} &{}\quad \mathrm {Id}\\ - a_{n}^* a_{n-1}^{-1} &{}\quad 0 \end{pmatrix} \begin{pmatrix} 0 &{}\quad -\alpha _{n-1} a_n \\ \alpha _n a_n^* &{}\quad 0 \end{pmatrix} \\&\quad = \begin{pmatrix} \alpha _n a_n^* &{}\quad -(\lambda \mathrm {Id}- b_n) a_{n-1}^{-1} \alpha _{n-1} a_n \\ 0 &{}\quad a_{n}^* a_{n-1}^{-1} \alpha _{n-1} a_n \end{pmatrix}. \end{aligned}

Hence, by formula (3),

\begin{aligned} S_n(\alpha , \lambda ) = \left\langle \mathrm {Re} \left[ { \begin{pmatrix} \alpha _n a_n^* &{}\quad -(\lambda \mathrm {Id}- b_n) a_{n-1}^{-1} \alpha _{n-1} a_n \\ 0 &{}\quad a_n^* a_{n-1}^{-1} \alpha _{n-1} a_n \end{pmatrix}} \right] \begin{pmatrix} u_{n} \\ u_{n+1} \end{pmatrix}, \begin{pmatrix} u_{n} \\ u_{n+1} \end{pmatrix} \right\rangle , \end{aligned}

which completes the proof. $$\square$$

The next proposition provides assumptions on the quadratic form under which it controls the norm of generalized eigenvectors.

### Proposition 5

Let $$\Lambda$$ be the set of $$\lambda \in \mathbb {R}$$ such that the following limit exists in the operator norm and defines a strictly positive operator:

\begin{aligned} C(\lambda ) = \lim _{n \rightarrow \infty } \frac{1}{\left||{\alpha _n a_n^*} \right||} \mathrm {Re} \left[ { \begin{pmatrix} \alpha _n a_{n}^* &{}\quad -(\lambda \mathrm {Id}- b_n) a_{n-1}^{-1} \alpha _{n-1} a_n \\ 0 &{}\quad a_n^* a_{n-1}^{-1} \alpha _{n-1} a_n \end{pmatrix}} \right] . \end{aligned}

Then for every $$\lambda \in \Lambda$$, there is an integer N and positive constants $$c_1, c_2$$ such that for every generalized eigenvector u associated with $$\lambda$$ and $$0 \ne \alpha \in \mathcal {H}\oplus \mathcal {H}$$,

\begin{aligned}&c_1 \left||{\alpha _n a_n^*} \right|| \left( \left||{u_{n}} \right||^2 + \left||{u_{n+1}} \right||^2\right) \le S_n(\alpha , \lambda ) \le c_2 \left||{\alpha _n a_n^*} \right|| \left( \left||{u_{n}} \right||^2 + \left||{u_{n+1}} \right||^2\right) , \\&\quad (n \ge N). \end{aligned}

### Proof

Fix $$\lambda \in \Lambda$$. Let

\begin{aligned} \mu ^{\text {min}}_n = \min \sigma (Z_n), \quad \mu ^{\text {max}}_n = \max \sigma (Z_n), \end{aligned}

where

\begin{aligned} \begin{aligned} Z_n = \frac{1}{\left\| {\alpha _n a_n^*} \right\| } \mathrm {Re} \left[ { \begin{pmatrix} \alpha _n a_{n}^* &{}{}\quad -(\lambda \mathrm {Id}- b_n) a_{n- 1}^{-1} \alpha _{n-1} a_n \\ 0 &{}{}\quad a_n^* a_{n-1}^{-1} \alpha _{n-1} a_n \end{pmatrix}} \right] . \end{aligned} \end{aligned}

Hence,

\begin{aligned} \begin{aligned} \mu ^{\text{ min }}_n \le \frac{S_n(\alpha , \lambda )}{\left\| {\alpha _n a_n^*} \right\| \big (\left\| {u_{n}} \right\| ^2 + \left\| {u_{n+1}} \right\| ^2 \big )} \le \mu ^{\text{ max }}_n. \end{aligned} \end{aligned}

But from the definition of $$C(\lambda )$$, we have

\begin{aligned} \lim _{n \rightarrow \infty } \mu ^{\text {min}}_n = \min \sigma (C(\lambda )), \quad \lim _{n \rightarrow \infty } \mu ^{\text {max}}_n = \max \sigma (C(\lambda )), \end{aligned}

which are positive numbers. Therefore, there is N and $$c_1, c_2>0$$ such that for every $$n \ge N$$,

\begin{aligned} c_1 \le \frac{S_n(\alpha , \lambda )}{\left||{\alpha _n a_n^*} \right||\left( \left||{u_{n}} \right||^2 + \left||{u_{n+1}} \right||^2\right) } \le c_2, \end{aligned}

and the proof is complete. $$\square$$

The next corollary, together with Proposition 3, suggests the method of proving that every $$\lambda \in \Lambda$$ is not an eigenvalue of A but belongs to $$\sigma (A)$$.

### Corollary 2

Under the assumptions of Proposition 5, together with

\begin{aligned} \sum _{n = 0}^\infty \frac{1}{\left||{\alpha _n a_n^*} \right||} = \infty , \end{aligned}

if

\begin{aligned} \liminf _{n \rightarrow \infty } S_n(\alpha , \lambda ) > 0, \end{aligned}

then u does not belong to $$\ell ^2(\mathbb {N}; \mathcal {H})$$.

### Proof

By Proposition 5,

\begin{aligned} \frac{S_n(\alpha , \lambda )}{c_2 \left||{\alpha _n a_n^*} \right||} \le \left||{u_n} \right||^2 + \left||{u_{n+1}} \right||^2 \end{aligned}

for a positive constant $$c_2$$. Therefore, there exists a constant $$c>0$$ such that

\begin{aligned} \frac{c}{\left||{\alpha _n a_n^*} \right||} \le \left||{u_n} \right||^2 + \left||{u_{n+1}} \right||^2, \end{aligned}

which is not summable because of the divergence of $$\sum _n 1/\left||{\alpha _n a_n^*} \right||$$. Hence $$u \notin \ell ^2(\mathbb {N}; \mathcal {H})$$. $$\square$$

The following lemma is the main algebraic part of the proof of Theorem 4.

### Lemma 1

Let u be a generalized eigenvector associated with $$\lambda \in \mathbb {R}$$ and $$\alpha \in \mathcal {H}\oplus \mathcal {H}$$. Then

\begin{aligned}&\frac{[S_{n+1}(\alpha , \lambda ) - S_n(\alpha , \lambda )]^-}{\left||{u_n} \right||^2 + \left||{u_{n+1}} \right||^2} \le \left||{\mathrm {Re} \left[ {\alpha _{n+1} a_{n+1}^* - a_{n}^* a_{n-1}^{-1} \alpha _{n-1} a_n} \right] ^-} \right|| \\&\quad +\, |\lambda | \left||{a_{n-1}^{-1} \alpha _{n-1} a_n - \alpha _n} \right|| + \left||{\alpha _n b_{n+1} - b_n a_{n-1}^{-1} \alpha _{n-1} a_n} \right||. \end{aligned}

### Proof

By Proposition 4 and formula (13), we have

\begin{aligned} S_{n+1}(\alpha , \lambda ) - S_n(\alpha , \lambda ) = \left\langle \mathrm {Re} \left[ {C_n^\lambda } \right] \begin{pmatrix} u_{n} \\ u_{n+1} \end{pmatrix}, \begin{pmatrix} u_{n} \\ u_{n+1} \end{pmatrix} \right\rangle \end{aligned}

for

\begin{aligned} C_n^\lambda= & {} \begin{pmatrix} \alpha _{n} a_{n}^* &{}\quad -\alpha _{n} (\lambda \mathrm {Id}- b_{n+1}) \\ 0 &{}\quad \alpha _{n+1} a_{n+1}^* \end{pmatrix} - \begin{pmatrix} \alpha _n a_n^* &{}\quad -(\lambda \mathrm {Id}- b_n) a_{n-1}^{-1} \alpha _{n-1} a_n \\ 0 &{}\quad a_n^* a_{n-1}^{-1} \alpha _{n-1} a_n \end{pmatrix} \\= & {} \begin{pmatrix} 0 &{}\quad (\lambda \mathrm {Id}- b_n) a_{n-1}^{-1} \alpha _{n-1} a_n - \alpha _n (\lambda \mathrm {Id}- b_{n+1}) \\ 0 &{}\quad \alpha _{n+1} a_{n+1}^* - a_{n}^* a_{n-1}^{-1} \alpha _{n-1} a_n \end{pmatrix}. \end{aligned}

Hence,

\begin{aligned} S_{n+1}(\alpha , \lambda ) - S_n(\alpha , \lambda )= & {} \left\langle \mathrm {Re} \left[ {\alpha _{n+1} a_{n+1}^* - a_{n}^* a_{n-1}^{-1} \alpha _{n-1} a_n} \right] u_{n+1}, u_{n+1} \right\rangle _\mathcal {H}\\&+\, \lambda \text {Re}\left\langle \left( a_{n-1}^{-1} \alpha _{n-1} a_n - \alpha _n\right) u_{n+1}, u_n \right\rangle _\mathcal {H}\\&+\, \text {Re}\left\langle \left( \alpha _n b_{n+1} - b_n a_{n-1}^{-1} \alpha _{n-1} a_n\right) u_{n+1}, u_n \right\rangle _\mathcal {H}. \end{aligned}

By the Schwarz inequality, the result follows. $$\square$$

We are ready to prove Theorem 4.

### Proof of Theorem 4

By virtue of Corollary 2 and Proposition 3, it is enough to show that $$\liminf _n S_n(\alpha , \lambda ) > 0$$ for every $$\lambda \in \Lambda$$ and a nonzero $$\alpha \in \mathcal {H}\oplus \mathcal {H}$$.

Fix $$\lambda \in \Lambda$$ and a nonzero $$\alpha \in \mathcal {H}\oplus \mathcal {H}$$. By Proposition 5, there exists N such that for every $$n \ge N$$, $$S_n(\alpha , \lambda ) > 0$$ holds. Let us define

\begin{aligned} F_n(\alpha , \lambda ) = \frac{S_{n+1}(\alpha , \lambda ) - S_n(\alpha , \lambda )}{S_n(\alpha , \lambda )}. \end{aligned}

Then

\begin{aligned} \frac{S_{n+1}(\alpha , \lambda )}{S_n(\alpha , \lambda )} = 1 + F_n(\alpha , \lambda ), \end{aligned}

and consequently,

\begin{aligned} \begin{aligned} \frac{S_n(\alpha , \lambda )}{S_N(\alpha , \lambda )} = \prod _{k=N}^{n-1} \big (1 + F_k(\alpha , \lambda ) \big ). \end{aligned} \end{aligned}

Hence,

\begin{aligned} \sum _{k=N}^\infty [F_k(\alpha , \lambda )]^- < \infty \end{aligned}
(14)

implies $$\liminf _n S_n(\alpha , \lambda ) > 0$$. Indeed, the factors $$1 + F_k$$ are positive for $$k \ge N$$, and for all but finitely many k, one has $$1 + F_k \ge 1 - [F_k]^- \ge e^{-2 [F_k]^-}$$, so (14) bounds the products from below by a positive constant. By Proposition 5,

\begin{aligned} \begin{aligned} S_n(\alpha , \lambda ) \ge c^{-1} \Vert \alpha _n a_n^* \Vert \big ( \Vert u_n \Vert ^2 + \Vert u_{n+1} \Vert ^2 \big ) \end{aligned} \end{aligned}

for some constant $$c>0$$. Hence, by Lemma 1,

\begin{aligned} \begin{aligned}{}[F_n(\alpha , \lambda )]^-\le&{}\,\, \frac{c}{\Vert {\alpha _n a_n^*} \Vert } \Big ( \big \Vert \mathrm {Re} [ {\alpha _{n+1} a_{n+1}^* - a_{n}^* a_{n-1}^{-1} \alpha _{n-1} a_n} ] ^-\big \Vert \\ {}&+ |\lambda | \big \Vert a_{n-1}^{-1} \alpha _{n-1} a_n - \alpha _n \big \Vert + \big \Vert \alpha _n b_{n+1} - b_n a_{n-1}^{-1} \alpha _{n-1} a_n \big \Vert \Big ) , \end{aligned} \end{aligned}

which is summable by assumptions (a), (b), and (c). This shows (14). The proof is complete. $$\square$$
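The product argument above rests on a standard fact: if the negative parts $$[F_k]^-$$ are summable, the partial products $$\prod (1 + F_k)$$, and hence $$S_n$$, stay bounded away from zero. The following Python sketch illustrates this numerically with a hypothetical scalar sequence $$F_k$$ whose negative parts decay like $$1/k^2$$; it is not part of the proof.

```python
import math

# Hypothetical sequence: F_k oscillates in sign, and its negative parts
# [F_k]^- = max(-F_k, 0) are summable (they decay like 1/k^2).
def F(k):
    return (-1.0) ** k / (k * k)

N = 2  # starting index, playing the role of N in the proof
neg_part_sum = sum(max(-F(k), 0.0) for k in range(N, 100_000))

# Partial products prod_{k=N}^{n-1} (1 + F_k) stay bounded away from 0.
prod = 1.0
running_min = math.inf
for k in range(N, 100_000):
    prod *= 1.0 + F(k)
    running_min = min(running_min, prod)

print(neg_part_sum, running_min)
```

Here `running_min` remains strictly positive, mirroring the conclusion $$\liminf _n S_n(\alpha , \lambda ) > 0$$.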

## 5 Special Cases of Theorem 4

In this section, we present several choices of the sequence $$(\alpha _n : n \ge 0)$$, which illustrate the flexibility of our approach. To simplify the condition on $$C(\lambda )$$, we assume that the sequence $$(a_n : n \ge 0)$$ tends to infinity; i.e.,

\begin{aligned} \begin{aligned} \lim _{n \rightarrow \infty } \big \Vert {a_n^{-1}} \big \Vert = 0. \end{aligned} \end{aligned}

This condition implies that $$C(\lambda )$$ does not depend on $$\lambda$$.

The first theorem is an extension of [18, Theorem 1.6] to the operator case. Since Sect. 6 is devoted to the proof of a far-reaching extension of this result, we omit the details.

### Theorem 5

Assume:

• $$(a) \quad \sum \limits _{n=1}^\infty \big \Vert a_{n+1}^* a_n^{-1} - a_n^* a_{n-1}^{-1}\big \Vert < \infty$$,

• $$(b) \quad \sum \limits _{n=1}^\infty \big \Vert a_{n-1}^{-1} - a_n^{-1}\big \Vert < \infty$$,

• $$(c) \quad \sum \limits _{n=1}^\infty \big \Vert b_{n+1} a_n^{-1} - b_n a_{n-1}^{-1}\big \Vert < \infty$$,

• $$(d) \quad \sum \limits _{n=0}^\infty \frac{1}{\left||{a_n} \right||} = \infty$$,

and $$C(\lambda )$$ defined for $$\alpha _n \equiv \mathrm {Id}$$ is a positive operator on $$\mathcal {H}\oplus \mathcal {H}$$. Then the assumptions of Theorem 4 are satisfied.

We are ready to prove Theorem 1. Let us note that this result is a vector-valued version of [26, Theorem 4.3]. In the scalar case, it has far-reaching applications (see [26, Section 5]).

### Proof of Theorem 1

Take $$\alpha _n = a_n$$. It is sufficient to show that $$\Lambda = \mathbb {R}$$. We have

\begin{aligned} \begin{aligned} C(\lambda )&= \lim _{n \rightarrow \infty } \frac{1}{\left\| {a_n a_n^*} \right\| } \mathrm {Re} \left[ { \begin{pmatrix} a_n a_{n}^* &{}{}\quad -(\lambda \mathrm {Id}- b_n) \left( a_n^*\right) ^{-1} a_n^* a_n \\ 0 &{}{}\quad a_n^* a_n \end{pmatrix} } \right] \\&= \mathrm {Re} \left[ { \begin{pmatrix} C C^* &{}{}\quad 0 \\ 0 &{}{}\quad C^* C \end{pmatrix}} \right] , \end{aligned} \end{aligned}

which is clearly positive for $$\lambda \in \mathbb {R}$$. Hence, $$\Lambda = \mathbb {R}$$. $$\square$$

To formulate the last example, we need a definition. Let

\begin{aligned} \log ^{(0)}(x) = x, \quad \log ^{(i+1)}(x) = \log \left( \log ^{(i)}(x)\right) , \quad (i \ge 0), \end{aligned}
(15)

and

\begin{aligned} g_j(x) = \prod _{i=1}^j \log ^{(i)}(x). \end{aligned}
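For numerical experimentation, the iterated logarithm (15) and the products $$g_j$$ can be implemented directly. The Python sketch below (with hypothetical function names) is only meaningful for x large enough that $$\log ^{(j)}(x) > 0$$, matching the requirement $$\log ^{(K)}(N) > 0$$ used later.

```python
import math

def iterated_log(i, x):
    # log^{(0)}(x) = x, log^{(i+1)}(x) = log(log^{(i)}(x)); cf. (15).
    # Only valid when every intermediate value stays positive.
    for _ in range(i):
        x = math.log(x)
    return x

def g(j, x):
    # g_j(x) = prod_{i=1}^{j} log^{(i)}(x); the empty product gives g_0(x) = 1.
    p = 1.0
    for i in range(1, j + 1):
        p *= iterated_log(i, x)
    return p

print(g(2, 100.0))  # log(100) * log(log(100))
```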

The following theorem is a vector-valued version of [26, Theorem 4.3], and its proof is inspired by the techniques employed in the proof of [17, Theorem 3].

### Theorem 6

Assume that for some positive integers K and N and a non-negative summable sequence $$(c_n : n \ge 0)$$:

• $$(a) \quad \lim \limits _{n \rightarrow \infty } a_n^{-1} = 0$$,

• $$(b) \quad (1 - c_n) \mathrm {Id}\le | (a_{n-1}^*)^{-1} a_n | \le \bigg ( 1 + \frac{1}{n} + \sum \limits _{j=1}^K \frac{1}{n g_j(n)} + c_n \bigg ) \mathrm {Id}$$    for $$n > N$$,

• (c)    the sequence $$(b_n : n \ge 0)$$ is bounded and $$\sum \limits _{n=0}^\infty \left||{a_n^{-1} b_n - b_{n+1} a_n^{-1} } \right|| < \infty$$,

• $$(d) \quad \sum \limits _{n=1}^\infty \frac{||a_n^{-1} ||}{n} < \infty$$.

Then the assumptions of Theorem 4 are satisfied with $$\Lambda = \mathbb {R}$$.

### Proof

We can assume that $$\log ^{(K)}(N) > 0$$. Let

\begin{aligned} \alpha _n = {\left\{ \begin{array}{ll} \mathrm {Id}&{} \text { for } n < N, \\ n g_K(n) \left( a_n^*\right) ^{-1} &{} \text { otherwise}. \end{array}\right. } \end{aligned}

We have to compute the set $$\Lambda$$ and check the assumptions (a), (b), (c) of Theorem 4.

Let us begin with the computation of $$\Lambda$$. We have

\begin{aligned}&\frac{1}{\left||{\alpha _n a_n^*} \right||} \begin{pmatrix} \alpha _n a_{n}^* &{}\quad -(\lambda \mathrm {Id}- b_n) a_{n-1}^{-1} \alpha _{n-1} a_n \\ 0 &{}\quad a_n^* a_{n-1}^{-1} \alpha _{n-1} a_n \end{pmatrix} \\&\quad = \begin{pmatrix} \mathrm {Id}&{}\quad -\frac{(n-1) g_K(n-1)}{n g_K(n)} (\lambda \mathrm {Id}- b_n) \left( a_n^*\right) ^{-1} \left| \left( a_{n-1}^*\right) ^{-1} a_n\right| ^2 \\ 0 &{}\quad \frac{(n-1) g_K(n-1)}{n g_K(n)} \left| \left( a_{n-1}^*\right) ^{-1} a_n\right| ^2 \end{pmatrix}, \end{aligned}

which by the hypotheses (a) and (b) tends to

\begin{aligned} \begin{pmatrix} \mathrm {Id}&{}\quad 0 \\ 0 &{}\quad \mathrm {Id}\\ \end{pmatrix}, \end{aligned}

which is clearly a positive operator on $$\mathcal {H}\oplus \mathcal {H}$$ for any $$\lambda \in \mathbb {R}$$. Hence, $$\Lambda = \mathbb {R}$$.

Let us show the assumption (a). We have

\begin{aligned} \begin{aligned} \frac{\alpha _{n+1} a_{n+1}^* - a_n^* a_{n-1}^{-1} \alpha _{n-1} a_n}{\left\| {\alpha _n a_n^*} \right\| } ={}&\, \frac{(n+1) g_K(n+1)}{n g_K(n)} \mathrm {Id} - \frac{(n-1) g_K(n-1)}{n g_K(n)} \big | \left( a_{n-1}^*\right) ^{-1} a_n \big | ^2 \\ \ge {}&\, \Bigg [ \frac{(n+1) g_K(n+1)}{n g_K(n)} - \frac{(n-1) g_K(n-1)}{n g_K(n)} \bigg ( 1 +\frac{1}{n} + \sum _{j=1}^K \frac{1}{n g_j(n)} + c_n \bigg ) ^2 \Bigg ] \mathrm {Id}. \end{aligned} \end{aligned}

The above expression has been estimated in the proof of [26, Theorem 4.3].

Next, since

\begin{aligned} \alpha _n b_{n+1} - b_n a_{n-1}^{-1} \alpha _{n-1} a_n = \alpha _n b_{n+1} - b_{n} \alpha _n + b_n\left( \alpha _n - a_{n-1}^{-1} \alpha _{n-1} a_n\right) , \end{aligned}

the hypothesis (c) implies that assumption (b) of Theorem 4 will be satisfied once we show that assumption (c) of Theorem 4 holds.

We have

\begin{aligned} \frac{a_{n-1}^{-1} \alpha _{n-1} a_n - \alpha _n}{\left||{\alpha _n a_n^*} \right||} = \frac{(n-1) g_K(n-1)}{n g_K(n)} a_{n-1}^{-1} \left( a_{n-1}^*\right) ^{-1} a_n - \left( a_n^*\right) ^{-1} = a_{n-1}^{-1} T_n, \end{aligned}

where

\begin{aligned} T_n = \frac{(n-1) g_K(n-1)}{n g_K(n)} W_n^* - W_n^{-1}, \quad W_n = a_n^* a_{n-1}^{-1}. \end{aligned}

By virtue of the hypothesis (d), the assumption (c) will be satisfied as long as

\begin{aligned} \left||{T_n} \right|| \le c \left( \frac{1}{n} + c_n' \right) \end{aligned}
(16)

for a constant $$c>0$$ and a non-negative summable sequence $$(c_n' : n \ge 0)$$. Because

\begin{aligned} T_n T_n^* = \left( \frac{(n-1) g_K(n-1)}{n g_K(n)} \right) ^2 W_n^* W_n - 2 \frac{(n-1) g_K(n-1)}{n g_K(n)} \mathrm {Id}+ \left( W_n^* W_n\right) ^{-1}, \end{aligned}

and because of the non-negativity of $$T_n T_n^*$$ and $$\left||{T_n T_n^*} \right|| = \left||{T_n} \right||^2$$, the inequality (16) will be satisfied if

\begin{aligned} T_n T_n^* \le c^2 \left( \frac{1}{n} + c_n' \right) ^2 \mathrm {Id}. \end{aligned}

The spectral theorem applied to $$W_n^* W_n$$ implies that the above inequality will be satisfied if

\begin{aligned} \left( \frac{(n-1) g_K(n-1)}{n g_K(n)} \right) ^2 \lambda _n - 2 \frac{(n-1) g_K(n-1)}{n g_K(n)} + \lambda _n^{-1} \le \left( \frac{1}{n} + c_n' \right) ^2 \end{aligned}
(17)

for every $$\lambda _n \in \sigma (W_n^* W_n)$$, which by the hypothesis (b) corresponds to

\begin{aligned} \begin{aligned} \lambda _n \in \Bigg [\big ( 1-c_n \big )^2, \bigg ( 1 + \frac{1}{n} + \sum _{j=1}^K \frac{1}{n g_j(n)} + c_n \bigg ) ^2 \Bigg ] . \end{aligned} \end{aligned}

But

\begin{aligned}&\left( \frac{(n-1) g_K(n-1)}{n g_K(n)} \right) ^2 \lambda _n - 2 \frac{(n-1) g_K(n-1)}{n g_K(n)} + \lambda ^{-1}_n \\&\quad = \left( \frac{(n-1) g_K(n-1)}{n g_K(n)} \sqrt{\lambda _n} - \frac{1}{\sqrt{\lambda _n}} \right) ^2, \end{aligned}

and the above expression has been estimated in the proof of [26, Theorem 4.3]. This shows (17) and ends the proof. $$\square$$

## 6 Turán Determinants

Let us note that for $$\mathcal {H}= \mathbb {R}$$, the expression $$S_n$$ for $$\alpha _n \equiv \mathrm {Id}$$ (see (13)) is known as the Turán determinant (see [14]). Hence, Theorem 5 motivates the following construction. Fix a positive integer N and a Jacobi matrix A. Let us define a sequence of quadratic forms $$Q^z$$ on $$\mathcal {H}\oplus \mathcal {H}$$ by the formula

\begin{aligned} Q_n^z(v) = \frac{1}{\left||{a_{n+N-1}} \right||} \left\langle \mathrm {Re} \left[ { \begin{pmatrix} a_{n+N-1} &{}\quad 0 \\ 0 &{}\quad a_{n+N-1}^* \end{pmatrix} E X_n(z) } \right] v, v \right\rangle , \end{aligned}
(18)

where

\begin{aligned} X_n(z) = \prod _{j = n}^{n+N-1} B_j(z) \quad \text {and}\quad E = \begin{pmatrix} 0 &{}\quad -\mathrm {Id}\\ \mathrm {Id}&{}\quad 0 \end{pmatrix}. \end{aligned}

Then we define the N-shifted Turán determinants by

\begin{aligned} \begin{aligned} S_n(\alpha , z) = \left\| {a_{n+N-1}} \right\| Q_n^z \left( \begin{pmatrix} u_{n-1} \\ u_n \end{pmatrix} \right) , \end{aligned} \end{aligned}
(19)

where u is the generalized eigenvector corresponding to $$z \in \mathbb {C}$$ such that $$(u_0, u_1)^t = \alpha \in \mathcal {H}\oplus \mathcal {H}$$.
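In the scalar case $$\mathcal {H}= \mathbb {R}$$ with $$a_n \equiv 1$$, $$b_n \equiv 0$$, and $$N = 1$$, the definition (19) reduces (up to the sign and index conventions used here, which the reader should verify against (18)) to the classical Turán determinant $$S_n = u_{n-1}^2 + u_n^2 - z u_{n-1} u_n = u_n^2 - u_{n-1} u_{n+1}$$, which is constant along the Chebyshev recurrence. A quick Python check of this scalar toy case:

```python
# Scalar toy model (assumed data: a_n = 1, b_n = 0, N = 1), where the
# generalized eigenvector satisfies u_{n+1} = z*u_n - u_{n-1} and (19)
# becomes the classical Turan determinant S_n = u_n^2 - u_{n-1}*u_{n+1}.
def turan_sequence(z, u0, u1, n_max):
    s, u_prev, u = [], u0, u1
    for _ in range(n_max):
        s.append(u_prev * u_prev + u * u - z * u_prev * u)
        u_prev, u = u, z * u - u_prev  # u_{n+1} = z u_n - u_{n-1}
    return s

# For the Chebyshev initial data u_0 = 1, u_1 = z and |z| < 2,
# every S_n equals 1, so the sequence trivially converges.
S = turan_sequence(z=1.2, u0=1.0, u1=1.2, n_max=50)
print(max(abs(s - 1.0) for s in S))
```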

The rest of this section is devoted to the analysis of the sequence $$S_n$$. Since the proof of the uniform convergence of $$S_n$$ is quite involved, we divide it into three subsections. The method used here is an adaptation of the techniques employed in [28].

### 6.1 Almost Uniform Nondegeneracy

Let $$\Lambda$$ be a subset of $$\mathbb {C}$$. In this section, we consider the family $$\{ Q^z : z \in \Lambda \}$$ defined in (18).

We say that $$\{ Q^z : z \in \Lambda \}$$ is uniformly nondegenerate on $$K \subset \Lambda$$ if there are $$c \ge 1$$ and $$M \ge 1$$ such that for all $$v \in \mathcal {H}\oplus \mathcal {H}$$, $$z \in K$$, and $$n \ge M$$,

\begin{aligned} c^{-1} \left||{v} \right||^2 \le \left|{Q_n^z(v)} \right| \le c \left||{v} \right||^2. \end{aligned}

We say that $$\{Q^z : z \in \Lambda \}$$ is almost uniformly nondegenerate on $$\Lambda$$ if it is uniformly nondegenerate on each compact subset of $$\Lambda$$.

We begin with two simple auxiliary results that will be needed in the proof of the nondegeneracy of the considered quadratic forms.

### Lemma 2

For every n and $$\lambda \in \mathbb {R}$$, one has

\begin{aligned} \begin{pmatrix} a_n &{}\quad 0 \\ 0 &{}\quad a_n^* \end{pmatrix} E B_n(\lambda ) = \left[ B_n^{-1}(\lambda )\right] ^* \begin{pmatrix} a_{n-1} &{}\quad 0 \\ 0 &{}\quad a_{n-1}^* \end{pmatrix} E. \end{aligned}

### Proof

Using (9) and (7), one can compute that both sides are equal to

\begin{aligned} \begin{pmatrix} a_{n-1}^* &{}\quad -(\lambda \mathrm {Id}- b_n) \\ 0 &{}\quad a_n^* \end{pmatrix}, \end{aligned}

and the result follows. $$\square$$

### Proposition 6

Let N be an integer. Assume:

• $$(a) \quad \lim \limits _{n \rightarrow \infty } \left||{a_n^{-1} a_{n-1}^* - R_n} \right|| = 0$$,

• $$(b) \quad \lim \limits _{n \rightarrow \infty } \left\| \frac{a_n}{\left||{a_n} \right||} - C_n \right\| = 0$$,

for N-periodic sequences of invertible operators R and C. Then

\begin{aligned} \lim _{n \rightarrow \infty } \left\| \frac{\left||{a_n} \right||}{\left||{a_{n-1}} \right||} \mathrm {Id}- C_n^{-1} C_{n-1}^* R_n^{-1} \right\| = 0. \end{aligned}

In particular,

\begin{aligned} \lim _{n \rightarrow \infty } \left| \frac{\left||{a_n} \right||}{\left||{a_{n-1}} \right||} - r_n \right| = 0 \end{aligned}

for a positive N-periodic sequence

\begin{aligned} r_n = \left||{C_n^{-1} C_{n-1}^* R_n^{-1}} \right||. \end{aligned}

### Proof

We have

\begin{aligned} \begin{aligned} \frac{\left\| {a_n} \right\| }{\left\| {a_{n-1}} \right\| } \mathrm {Id}= \left( \frac{a_n}{\left\| {a_n} \right\| } \right) ^{-1} \frac{a_{n-1}^*}{\left\| {a_{n-1}} \right\| } \big ( a_n^{-1} a_{n- 1}^* \big )^{-1}. \end{aligned} \end{aligned}

Hence, by the hypotheses (a) and (b),

\begin{aligned} \lim _{n \rightarrow \infty } \left\| \frac{\left||{a_n} \right||}{\left||{a_{n-1}} \right||} \mathrm {Id}- C_n^{-1} C_{n-1}^* R_n^{-1} \right\| = 0, \end{aligned}

and the result follows. $$\square$$
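A scalar sanity check of Proposition 6, with hypothetical data $$a_n = n$$ and $$N = 1$$, so that $$R = C = \mathrm {Id}$$ and $$r = 1$$; this is only an illustration of the statement, not of its proof:

```python
# Scalar illustration of Proposition 6 (assumed data: a_n = n, N = 1).
# Then a_n^{-1} a_{n-1}^* = (n - 1)/n -> R = 1, a_n/||a_n|| = 1 -> C = 1,
# so the norm ratio ||a_n||/||a_{n-1}|| = n/(n - 1) should approach
# r = ||C^{-1} C^* R^{-1}|| = 1, as the proposition predicts.
r = 1.0
errors = [abs(n / (n - 1) - r) for n in range(2, 10_001)]
print(errors[-1])  # small and decreasing
```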

In the next proposition, we examine the limiting behavior of the considered quadratic forms.

### Proposition 7

Let $$N \ge 1$$ be an integer. Assume:

• $$(a) \quad \lim \limits _{n \rightarrow \infty } \left||{a_n^{-1} - T_n} \right|| = 0$$,

• $$(b) \quad \lim \limits _{n \rightarrow \infty } \left||{a_n^{-1} b_n - Q_n} \right|| = 0$$,

• $$(c) \quad \lim \limits _{n \rightarrow \infty } \left||{a_n^{-1} a_{n-1}^* - R_n} \right|| = 0$$,

• $$(d) \quad \lim \limits _{n \rightarrow \infty } \left\| \frac{a_n}{\left||{a_n} \right||} - C_n \right\| = 0$$,

for N-periodic sequences T, Q, R, and C such that for every n, the operators $$R_n$$ and $$C_n$$ are invertible. Then on every compact subset of $$\mathbb {C}$$, the sequence $$(\left||{X_n(\cdot )} \right|| : n \ge 0)$$ is uniformly bounded. Moreover,

\begin{aligned} \lim _{n \rightarrow \infty } \left\| \begin{pmatrix} \frac{a_{n+N-1}}{\left||{a_{n+N-1}} \right||} &{}\quad 0 \\ 0 &{}\quad \frac{a^*_{n+N-1}}{\left||{a_{n+N-1}} \right||} \end{pmatrix} E X_n(\cdot ) - \mathcal {F}^n(\cdot ) \right\| = 0 \end{aligned}
(20)

uniformly on compact subsets of $$\mathbb {C}$$, where

\begin{aligned} \mathcal {F}^n(z) = \begin{pmatrix} C_{n+N-1} &{}\quad 0 \\ 0 &{}\quad C_{n+N-1}^* \end{pmatrix} E \prod _{k=n}^{N+n-1} \begin{pmatrix} 0 &{}\quad \mathrm {Id}\\ -R_k &{}\quad z T_k - Q_k \end{pmatrix}. \end{aligned}

### Proof

Let us define

\begin{aligned} \mathcal {X}_n(z) = \prod _{j=n}^{n+N-1} \mathcal {B}_j(z), \quad \text {where} \quad \mathcal {B}_n(z) = \begin{pmatrix} 0 &{}\quad \mathrm {Id}\\ -R_n &{}\quad z T_n - Q_n \end{pmatrix}. \end{aligned}

We have

\begin{aligned} \left||{B_n(z) - \mathcal {B}_n(z)} \right|| \le \left||{R_n - a_n^{-1} a_{n-1}^*} \right|| + |z| \left||{a_n^{-1} - T_n} \right|| + \left||{Q_n - a_n^{-1} b_n} \right||, \end{aligned}

which tends to 0 uniformly on compact subsets of $$\mathbb {C}$$. Consequently, since every function $$B_n(\cdot )$$ is continuous, one has

\begin{aligned} \lim _{n \rightarrow \infty } \left||{X_n(\cdot ) - \mathcal {X}_n(\cdot )} \right|| = 0 \end{aligned}

uniformly on the compact subsets of $$\mathbb {C}$$. In particular, this implies (20) and the uniform boundedness of $$(\left||{X_n(\cdot )} \right|| : n \ge 0)$$ on every compact subset of $$\mathbb {C}$$. $$\square$$

Finally, in the last proposition, we formulate conditions under which the family $$\{ Q^z : z \in \Lambda \}$$ is almost uniformly nondegenerate.

### Proposition 8

Let the assumptions of Proposition 7 be satisfied. If for every $$i \in \mathbb {N}$$ and every $$z \in \Lambda$$ there is $$\varepsilon (i, z) \in \{-1, 1\}$$ such that

\begin{aligned} \varepsilon (i, z) \mathrm {Re} \left[ {\mathcal {F}^i(z)} \right] > 0, \end{aligned}
(21)

then $$\{ Q^z : z \in \Lambda \}$$ is almost uniformly nondegenerate. Moreover, if $$\Lambda \subset \mathbb {R}$$, then the same conclusion follows provided (21) holds only for $$i=0$$.

### Proof

By (20) and (21), we have that for every compact $$K \subset \Lambda$$ there is a constant $$c>0$$ such that for n sufficiently large and all $$z \in K$$,

\begin{aligned} \varepsilon (i, z) \mathrm {Re} \left[ { \begin{pmatrix} \frac{a_{n+N-1}}{\left||{a_{n+N-1}} \right||} &{}\quad 0 \\ 0 &{}\quad \frac{a^*_{n+N-1}}{\left||{a_{n+N-1}} \right||} \end{pmatrix} E X_n(z) } \right] > c \mathrm {Id}. \end{aligned}

This implies the uniform nondegeneracy of $$\{ Q^z : z \in K \}$$.

Consider $$\lambda \in \mathbb {R}$$. According to Lemma 2, we have

\begin{aligned}&\frac{\left||{a_{n+N}} \right||}{\left||{a_{n+N-1}} \right||} \begin{pmatrix} \frac{a_{n+N}}{\left||{a_{n+N}} \right||} &{} 0 \\ 0 &{}\frac{a_{n+N}^*}{\left||{a_{n+N}} \right||} \end{pmatrix}\! E X_{n+1}(\lambda )\\&\quad = \left[ B_{n+N}^{-1}(\lambda )\right] ^* \begin{pmatrix} \frac{a_{n+N-1}}{\left||{a_{n+N-1}} \right||} &{}\quad 0 \\ 0 &{}\quad \frac{a_{n+N-1}^*}{\left||{a_{n+N-1}} \right||} \end{pmatrix} E X_n(\lambda ) B_n^{-1}(\lambda ). \end{aligned}

Let $$n = kN+i$$, and let us compute the limit of both sides as k tends to $$\infty$$. By Propositions 6 and 7, we have

\begin{aligned} \begin{aligned} r_i \mathcal {F}^i(\lambda ) = \big [ \mathcal {B}_i^{-1}(\lambda ) \big ]^* \mathcal {F}^{i- 1}(\lambda ) \mathcal {B}_i^{-1}(\lambda ), \end{aligned} \end{aligned}

where

\begin{aligned} \mathcal {B}_i(\lambda ) = \begin{pmatrix} 0 &{}\quad \mathrm {Id}\\ -R_i &{} \quad \lambda T_i - Q_i \end{pmatrix} \end{aligned}

and the convergence is uniform on every compact subset of $$\mathbb {R}$$. By (3), this implies that if for some $$\varepsilon (\lambda ) \in \{ -1, 1 \}$$,

\begin{aligned} \varepsilon (\lambda ) \mathrm {Re} \left[ {\mathcal {F}^0(\lambda )} \right] > 0, \end{aligned}

then for every $$j \in \{ 0, 1, \ldots , N-1 \}$$,

\begin{aligned} \varepsilon (\lambda ) \mathrm {Re} \left[ {\mathcal {F}^j(\lambda )} \right] > 0. \end{aligned}

The proof is complete. $$\square$$
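Condition (21) can be made concrete in the scalar free case. With the hypothetical data $$a_n \equiv 1$$, $$b_n \equiv 0$$, $$N = 1$$, one gets $$R = T = C = \mathrm {Id}$$, $$Q = 0$$, and the positivity of $$\mathrm {Re} [{\mathcal {F}^0(\lambda )}]$$ singles out exactly the interval $$(-2, 2)$$, the interior of the spectrum of the free Jacobi matrix. A short Python verification of this special case:

```python
# Scalar free case (assumed data: a_n = 1, b_n = 0, N = 1):
# F^0(lam) = E @ B(lam) with E = [[0, -1], [1, 0]], B(lam) = [[0, 1], [-1, lam]],
# hence F^0(lam) = [[1, -lam], [0, 1]] and
# Re[F^0(lam)] = [[1, -lam/2], [-lam/2, 1]] with eigenvalues 1 +/- lam/2.
def re_F0(lam):
    return [[1.0, -lam / 2.0], [-lam / 2.0, 1.0]]

def positive_definite_2x2(m):
    # Sylvester's criterion for a symmetric 2x2 matrix
    return m[0][0] > 0 and m[0][0] * m[1][1] - m[0][1] * m[1][0] > 0

# Condition (21) holds with epsilon = +1 precisely for lam in (-2, 2).
inside = all(positive_definite_2x2(re_F0(l / 10.0)) for l in range(-19, 20))
outside = positive_definite_2x2(re_F0(2.5))
print(inside, outside)  # True False
```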

### 6.2 Asymptotics of Generalized Eigenvectors

This section is devoted to showing what the nondegeneracy of $$\{ Q^z : z \in \Lambda \}$$, together with the positivity of $$|S_n|$$, implies for the asymptotics of the generalized eigenvectors.

### Theorem 7

Let the family $$\{Q^z : z \in K\}$$ defined in (18) be uniformly nondegenerate on a compact set K. Suppose that there are $$c \ge 1$$ and $$M' > 0$$ such that for all $$\alpha \in \mathcal {H}\oplus \mathcal {H}$$ such that $$\left||{\alpha } \right|| = 1$$, $$z \in K$$, and $$n \ge M'$$,

\begin{aligned} c^{-1} \le \left|{S_n(\alpha , z)} \right| \le c. \end{aligned}
(22)

Then there is $$c \ge 1$$ such that for all $$z \in K$$, $$n \ge 1$$, and for every generalized eigenvector u corresponding to z,

\begin{aligned} c^{-1} \left( \left||{u_0} \right||^2 + \left||{u_1} \right||^2\right) \le \left||{a_{n+N-1}} \right|| \left( \left||{u_{n-1}} \right||^2 + \left||{u_n} \right||^2\right) \le c \left( \left||{u_0} \right||^2 + \left||{u_1} \right||^2\right) . \end{aligned}

### Proof

Let $$z \in K$$, and let u be a generalized eigenvector corresponding to z such that $$(u_0, u_1)^t = \alpha$$, $$\left||{\alpha } \right|| = 1$$. Since $$\{Q^z : z \in K \}$$ is uniformly nondegenerate, there are $$c \ge 1$$ and $$M \ge M'$$ such that for all $$n \ge M$$,

\begin{aligned} c^{-1} \left||{a_{n+N-1}} \right|| \left( \left||{u_{n-1}} \right||^2 + \left||{u_n} \right||^2\right) \le |S_n(\alpha , z)| \le c \left||{a_{n+N-1}} \right|| \left( \left||{u_{n-1}} \right||^2 + \left||{u_n} \right||^2\right) , \end{aligned}

which together with (22) implies that there is $$c \ge 1$$ such that for all $$n \ge M$$,

\begin{aligned} \begin{aligned} c^{-1} \le \left\| {a_{n+N-1}} \right\| \left( \left\| {u_{n-1}} \right\| ^2 + \left\| {u_n} \right\| ^2 \right) \le c. \end{aligned} \end{aligned}

For the general nonzero $$\alpha$$, we use the fact that

\begin{aligned} S_n \left( \frac{\alpha }{\left||{\alpha } \right||}, z \right) = \frac{1}{\left||{\alpha } \right||^2} S_n(\alpha , z) \end{aligned}

and that generalized eigenvectors depend linearly on their initial conditions. $$\square$$

### Corollary 3

Suppose that the assumptions of Theorem 7 are satisfied. Let $$\Omega \subset \mathcal {H}\oplus \mathcal {H}\setminus \{ 0 \}$$ be a bounded closed set, and let $$K \subset \Lambda$$ be a compact set. Assume that for an N-periodic sequence of self-adjoint operators $$(D_n : n \ge 0)$$,

\begin{aligned} \lim _{n \rightarrow \infty } \left\| \frac{1}{\left||{a_{n+N-1}} \right||} \mathrm {Re} \left[ { \begin{pmatrix} a_{n+N-1} &{}\quad 0 \\ 0 &{}\quad a_{n+N-1}^* \end{pmatrix} E X_n(z) } \right] - \begin{pmatrix} D_n &{}\quad 0 \\ 0 &{}\quad D_n \end{pmatrix} \right\| = 0 \end{aligned}
(23)

uniformly on K and

\begin{aligned} g(\alpha , z) = \lim _{n \rightarrow \infty } S_n(\alpha , z) \end{aligned}

uniformly on $$\Omega \times K$$. Then

\begin{aligned} \begin{aligned} \lim _{n \rightarrow \infty } \left\| {a_{n+N-1}} \right\| \big ( \langle {D_n u_{n-1}}, {u_{n- 1}} \rangle _\mathcal {H}+ \langle {D_n u_{n}}, {u_{n}} \rangle _\mathcal {H} \big ) = g \end{aligned} \end{aligned}

uniformly on $$\Omega \times K$$.

### Proof

Fix $$\varepsilon > 0$$. By (23), there is M such that for all $$n \ge M$$, $$z \in K$$, and $$v \in \mathcal {H}\oplus \mathcal {H}$$,

\begin{aligned} \begin{aligned} \big | Q_n^z(v) - \big ( \langle {D_n v_1}, {v_1} \rangle _\mathcal {H}+ \langle {D_n v_2}, {v_2} \rangle _\mathcal {H} \big ) \big | \le \varepsilon \left\| {v} \right\| ^2. \end{aligned} \end{aligned}

Hence,

\begin{aligned} \begin{aligned} \big | S_n(\alpha , z) - \left\| {a_{n+N-1}} \right\| \big ( \langle {D_n u_{n-1}}, {u_{n-1}} \rangle _\mathcal {H}+ \langle {D_n u_{n}}, {u_{n}} \rangle _\mathcal {H} \big ) \big | \le \varepsilon \left\| {a_{n+N-1}} \right\| \big ( \left\| {u_{n-1}} \right\| ^2 + \left\| {u_n} \right\| ^2 \big ) \end{aligned} \end{aligned}

uniformly on $$\Omega \times K$$. By Theorem 7, there is a constant $$c' > 0$$ such that

\begin{aligned} \begin{aligned} \big | S_n - \left\| {a_{n+N-1}} \right\| \big ( \langle {D_n u_{n-1}}, {u_{n-1}} \rangle _\mathcal {H}+ \langle {D_n u_{n}}, {u_{n}} \rangle _\mathcal {H} \big ) \big | \le \varepsilon c' \end{aligned} \end{aligned}

uniformly on $$\Omega \times K$$. The proof is complete. $$\square$$

### 6.3 The Proof of the Convergence

In this section, we are going to prove that the sequence $$(S_n : n \ge 0)$$ is convergent, which leads to the proofs of Theorems 2 and 3.

Let us begin with the main algebraic part of the proof.

### Lemma 3

Let u be a generalized eigenvector associated with $$z \in \mathbb {C}$$ and $$\alpha \in \mathcal {H}\oplus \mathcal {H}$$. Then

\begin{aligned} \frac{|S_{n+1}(\alpha , z) - S_n(\alpha , z)|}{\left||{u_{n-1}} \right||^2 + \left||{u_n} \right||^2}\le & {} \left||{ X_n(z)} \right|| \left||{a_{n+N}} \right|| \left( \left||{a_{n+N}^{-1} a_{n+N-1}^* - a_n^{-1} a_{n-1}^*} \right|| \right. \\&\left. + \,|z| \left||{a_{n+N}^{-1} - a_{n}^{-1}} \right|| + |z - \overline{z}| \left||{a_{n+N}^{-1}} \right|| \right. \\&\left. + \left||{a_{n+N}^{-1} b_{n+N} - a_{n}^{-1} b_n} \right|| \right) . \end{aligned}

### Proof

The formula (8) implies

\begin{aligned} S_{n+1}(\alpha , z)&= \bigg \langle \mathrm {Re} \left[ { \begin{pmatrix} a_{n+N} &{}{}\quad 0 \\ 0 &{}{}\quad a_{n+N}^* \end{pmatrix} E X_{n+1}(z) } \right] \begin{pmatrix} u_{n}\\ u_{n+1} \end{pmatrix} , \begin{pmatrix} u_n \\ u_{n+1} \end{pmatrix} \bigg \rangle \\&= \bigg \langle \big ( B_n(z) \big )^* \mathrm {Re} \left[ { \begin{pmatrix} a_{n+N} &{}{}\quad 0 \\ 0 &{}{}\quad a_{n+N}^* \end{pmatrix} E X_{n+1}(z) } \right] B_n(z) \begin{pmatrix} u_{n-1}\\ u_{n} \end{pmatrix} , \begin{pmatrix} u_{n-1}\\ u_{n} \end{pmatrix} \bigg \rangle . \end{aligned}

Therefore, by the formulas (3) and (4),

\begin{aligned} S_{n+1} - S_n = \bigg \langle \mathrm {Re} \left[ {C_n(z)} \right] \begin{pmatrix} u_{n-1} \\ u_{n} \\ \end{pmatrix} , \begin{pmatrix} u_{n-1}\\ u_{n} \end{pmatrix} \bigg \rangle , \end{aligned}
(24)

where

\begin{aligned} \begin{aligned} C_n(z) = \big ( B_n(z) \big )^* \begin{pmatrix} a_{n+N} &{}{}\quad 0 \\ 0 &{}{}\quad a_{n+N}^* \end{pmatrix} E X_{n+1}(z) B_n(z) {-} \begin{pmatrix} a_{n+N-1} &{}{} 0 \\ 0 &{}{} a_{n+N-1}^* \end{pmatrix} E X_n(z). \end{aligned}\end{aligned}

By using $$E^{-1} = -E$$, we can write

\begin{aligned} \begin{aligned}&\big ( B_n(z) \big )^* \begin{pmatrix} a_{n+N} &{}{}\quad 0 \\ 0 &{}{}\quad a_{n+N}^* \end{pmatrix} E X_{n+1}(z) B_n(z) \\ {}&\quad = - \big ( B_n(z) \big )^* \begin{pmatrix} a_{n+N} &{}{}\quad 0 \\ 0 &{}{}\quad a_{n+N}^* \end{pmatrix} E B_{n+N}(z) E E X_{n}(z). \end{aligned} \end{aligned}

Hence,

\begin{aligned} \begin{aligned} C_n(z) = -\left[ \big ( B_n(z) \big )^* \begin{pmatrix} a_{n+N} &{}{} 0 \\ 0 &{}{} a_{n+N}^* \end{pmatrix} E B_{n+N}(z) E + \begin{pmatrix} a_{n+N-1} &{}{} 0 \\ 0 &{}{} a_{n+N-1}^* \end{pmatrix} \right] E X_n(z). \end{aligned} \end{aligned}

Now we can compute

\begin{aligned} \begin{aligned}&\big ( B_n(z) \big )^* \begin{pmatrix} a_{n+N} &{}{}\quad 0 \\ 0 &{}{}\quad a_{n+N}^*\end{pmatrix} E B_{n+N}(z) E \\ {}&\quad = \begin{pmatrix} 0 &{}{}\quad -a_{n-1} \left( a_n^*\right) ^{-1} \\ \mathrm {Id}&{}{}\quad (\overline{z} \mathrm {Id}- b_n) \left( a_n^*\right) ^{-1} \end{pmatrix} \begin{pmatrix} 0 &{}{}\quad -a_{n+N} \\ a_{n+N}^* &{}{}\quad 0 \end{pmatrix}\\ {}&\qquad \begin{pmatrix} \mathrm {Id}&{}{}\quad 0 \\ a_{n+N}^{-1} (z \mathrm {Id}- b_{n+N}) &{}{}\quad a_{n+N}^{-1} a_{n+N-1}^* \end{pmatrix} \\ {}&\quad = \begin{pmatrix} -a_{n-1} \left( a_n^*\right) ^{- 1} a_{n+N}^* &{}{}\quad 0 \\ (\overline{z} \mathrm {Id}- b_n) \left( a_n^*\right) ^{-1} a_{n+N}^* &{}{}\quad -a_{n+N} \end{pmatrix} \begin{pmatrix} \mathrm {Id}&{}{} \quad 0 \\ a_{n+N}^{-1}(z \mathrm {Id}- b_{n+N}) &{}{}\quad a_{n+N}^{-1} a_{n+N-1}^* \end{pmatrix} \\ {}&\quad = \begin{pmatrix} -a_{n-1} \left( a_n^*\right) ^{-1} a_{n+N}^* &{}{}\quad 0 \\ (\overline{z} \mathrm {Id}- b_n)\left( a_n^*\right) ^{-1} a_{n+N}^* -(z \mathrm {Id}- b_{n+N}) &{}{}\quad -a_{n+N-1}^* \end{pmatrix}. \end{aligned}\end{aligned}

Therefore,

\begin{aligned} C_n(z) =- \begin{pmatrix} -a_{n-1}\left( a_n^*\right) ^{-1} a_{n+N}^* + a_{n+N-1} &{}\quad 0 \\ (\overline{z} \mathrm {Id}- b_n)\left( a_n^*\right) ^{-1} a_{n+N}^* -(z \mathrm {Id}- b_{n+N}) &{}\quad 0 \end{pmatrix} E X_n(z). \end{aligned}

In particular, we can estimate

\begin{aligned} \left||{C_n(z)} \right||\le & {} \left||{X_n(z)} \right|| \left||{a_{n+N}^*} \right|| \left( \left||{a_{n+N-1} (a_{n+N}^*)^{-1} - a_{n-1}(a_{n}^*)^{-1}} \right|| \right. \\&+\, |z| \left||{(a_n^*)^{-1} - (a_{n+N}^*)^{-1}} \right|| + |z - \overline{z}| \left||{a_{n+N}^{-1}} \right||\\&\left. + \left||{b_{n+N}(a_{n+N}^*)^{-1} - b_n (a_n^*)^{-1}} \right|| \right) . \end{aligned}

Therefore, by the last inequality together with (24), the Schwarz inequality, and (5), the result follows. $$\square$$

The main result of this section is the following theorem.

### Theorem 8

Assume that for an integer $$N \ge 1$$:

• $$(a) \quad \mathcal {V}_N \bigg ( a_n^{-1} : n \ge 0 \bigg ) + \mathcal {V}_N \bigg ( a_{n}^{-1} b_n : n \ge 0 \bigg ) + \mathcal {V}_N \bigg (a_{n}^{-1} a_{n-1}^* : n \ge 1 \bigg ) < \infty$$;

• $$(b) \quad \frac{\left||{a_{n+1}} \right||}{\left||{a_n} \right||} < c_1$$ for a constant $$c_1 > 0$$ and all $$n \in \mathbb {N}$$;

• (c)    the family $$\big \{Q^z : z \in K \big \}$$ defined in (18) is uniformly nondegenerate on a compact connected set K.

Then there is $$c \ge 1$$ such that for every $$n \ge 1$$, for all $$z \in K \cap \mathbb {R}$$, and for every generalized eigenvector u corresponding to z, we have

\begin{aligned} c^{-1} \left( \left||{u_0} \right||^2 + \left||{u_1} \right||^2\right) \le \left||{a_n} \right||\left( \left||{u_{n-1}} \right||^2 + \left||{u_n} \right||^2\right) \le c \left( \left||{u_0} \right||^2 + \left||{u_1} \right||^2\right) . \end{aligned}

Moreover, if

\begin{aligned} \sum _{n=0}^\infty \big \Vert {a_{n}^{-1}} \big \Vert < \infty ,\end{aligned}
(25)

then the same conclusion holds for $$z \in K$$.

### Proof

Let $$\Omega \subset \mathcal {H}\oplus \mathcal {H}\setminus \{ 0 \}$$ be a connected bounded closed set. Let $$S_n$$ be a sequence of functions defined by (19). In view of Theorem 7, it is enough to show that there are $$c \ge 1$$ and $$M > 0$$ such that

\begin{aligned} c^{-1} \le \left|{S_n(\alpha , z)} \right| \le c \end{aligned}
(26)

for all $$\alpha \in \Omega$$, $$z \in K$$, and $$n > M$$. The study of the sequence $$(S_n : n \ge 1)$$ is motivated by the method developed in [28].

Given a generalized eigenvector u corresponding to $$z \in K$$ such that $$(u_0, u_1)^t = \alpha \in \Omega$$, we can easily see that for each $$n \ge 2$$, $$u_n$$, considered as a function of $$\alpha$$ and z, is continuous on $$\Omega \times K$$. As a consequence, the function $$S_n$$ is continuous on $$\Omega \times K$$. Since $$\{Q^z : z \in K\}$$ is uniformly nondegenerate, there is $$M > 0$$ such that for each $$n \ge M$$, the function $$S_n$$ has no zeros and has the same sign for all $$z \in K$$ and $$\alpha \in \Omega$$. Otherwise, by the connectedness of $$\Omega \times K$$, there would be $$\alpha \in \Omega$$ and $$z \in K$$ such that $$S_n(\alpha , z) = 0$$, which would contradict the nondegeneracy of $$Q_n^z$$.

Next, we define a sequence of functions $$(F_n : n \ge M)$$ on $$\Omega \times K$$ by setting

\begin{aligned} F_n = \frac{S_{n+1} - S_n}{S_n}. \end{aligned}

Then

\begin{aligned} \frac{S_n}{S_M} = \prod _{j=M}^{n-1} (1 + F_j). \end{aligned}
(27)

First of all, let us show that

\begin{aligned} C^{-1} \le |S_M(\alpha , z)| \le C \end{aligned}
(28)

for a constant $$C>1$$ independent of $$\alpha$$ and z. If this is the case, then by (27) and the fact that each function $$F_n$$ is continuous, to conclude (26) it is enough to show that the product

\begin{aligned} \prod _{j = M}^n (1 + F_j) \end{aligned}

converges uniformly on $$\Omega \times K$$ to a limit that is bounded away from 0, which will be satisfied if we prove that

\begin{aligned} \sum _{n = M}^\infty \sup _{\alpha \in \Omega } \sup _{z \in K} \left|{F_n(\alpha , z)} \right| < \infty . \end{aligned}
(29)

Let us observe that by (19) and (5),

\begin{aligned} \begin{aligned} |S_M(\alpha , z)| \le \left\| {a_{M+N-1}} \right\| \left\| {X_M(z)} \right\| \big ( \left\| {u_{M-1}(\alpha , z)} \right\| ^2 + \left\| {u_{M}(\alpha , z)} \right\| ^2 \big ). \end{aligned}\end{aligned}
(30)

Moreover, by (8),

\begin{aligned} \left||{u_{M-1}(\alpha , z)} \right||^2 + \left||{u_{M}(\alpha , z)} \right||^2 = \langle Y(z) \alpha , Y(z) \alpha \rangle = \langle [Y(z)]^* Y(z) \alpha , \alpha \rangle \end{aligned}
(31)

for

\begin{aligned} Y(z) = \prod _{i=1}^{M-1} B_i(z). \end{aligned}
(32)

Hence,

\begin{aligned} \left||{u_{M-1}(\alpha , z)} \right||^2 + \left||{u_{M}(\alpha , z)} \right||^2 \le \left[ \prod _{i=1}^{M-1} \left||{B_i(z)} \right||^2 \right] \left||{\alpha } \right||^2. \end{aligned}
(33)

For every i, the function $$z \mapsto \left||{B_i(z)} \right||$$ is continuous on the compact set K. Hence, it is uniformly bounded. Furthermore, by the boundedness of $$\Omega$$, one has that $$\left||{\alpha } \right||$$ is bounded as well. This shows that the right-hand side of (33) is uniformly bounded on $$\Omega \times K$$. Similarly,

\begin{aligned} \left||{X_M(z)} \right|| \le \prod _{i=M}^{M+N-1} \left||{B_{i}(z)} \right|| \end{aligned}

is uniformly bounded. This implies that the right-hand side of (30) is uniformly bounded as well. Thus, the upper bound in the inequality (28) is proved. To prove the lower bound, let us see that the uniform nondegeneracy implies

\begin{aligned} \begin{aligned} |S_M(\alpha , z)| \ge c \left\| {a_{N+M-1}} \right\| \big ( \left\| {u_{M-1}(\alpha , z)} \right\| ^2 + \left\| {u_{M}(\alpha , z)} \right\| ^2 \big ) \end{aligned} \end{aligned}
(34)

for a constant $$c>0$$ independent of $$\alpha$$ and z. So by (31), it remains to show that $$[Y(z)]^* Y(z)$$ is a strictly positive operator uniformly with respect to $$z \in K$$. This is implied by a uniform bound on $$\left||{([Y(z)]^* Y(z))^{-1}} \right||$$. According to (32),

\begin{aligned} \begin{aligned} \Big \Vert {\big ( [Y(z)]^* Y(z) \big )^{-1}} \Big \Vert \le \prod _{i=1}^{M-1} \big \Vert {B_i^{-1}(z)} \big \Vert ^2 \end{aligned}\end{aligned}

and by (9), as in (33), the right-hand side of this inequality is uniformly bounded on K. Hence, by (31), there is a constant $$c'>0$$ such that

\begin{aligned} \left||{u_{M-1}(\alpha , z)} \right||^2 + \left||{u_{M}(\alpha , z)} \right||^2 \ge c' \left||{\alpha } \right||^2. \end{aligned}

Consequently, since $$\Omega$$ has a positive distance to 0, (34) yields the remaining lower bound in (28).

It remains to prove (29). Let u be a generalized eigenvector corresponding to $$z \in K$$ such that $$(u_0, u_1)^t = \alpha \in \Omega$$. In view of (a), each subsequence $$(B_{kN+j}(z) : k \ge 1)$$ is uniformly convergent, and consequently, the norms $$\Vert X_n(z) \Vert$$ are uniformly bounded with respect to n and $$z \in K$$. Moreover, since $$\{Q^z : z \in K\}$$ is uniformly nondegenerate,

\begin{aligned} \begin{aligned} \left| {S_n(\alpha , z)} \right| \ge c^{-1} \left\| {a_{n+N-1}} \right\| \big ( \left\| {u_{n-1}} \right\| ^2 + \left\| {u_n} \right\| ^2 \big ) \end{aligned} \end{aligned}

for $$n \ge M$$. Therefore, by Lemma 3,

\begin{aligned} \left|{F_n(\alpha , z)} \right|\le & {} c c' c_1 \left( \left||{a_{n+N}^{-1} a_{n+N-1}^* - a_n^{-1} a_{n-1}^*} \right|| + |z| \left||{a_{n+N}^{-1} - a_{n}^{-1}} \right|| \right. \nonumber \\&\left. +\, |z - \overline{z}| \left||{a_{n+N}^{-1}} \right|| + \left||{a_{n+N}^{-1} b_{n+N} - a_{n}^{-1} b_n} \right|| \right) \end{aligned}
(35)

for every $$\alpha \in \Omega$$. Using (b), we can estimate

\begin{aligned} \begin{aligned}&\sum _{n = M}^\infty \sup _{\alpha \in \Omega } \sup _{z \in K} \left| {F_n(\alpha , z)} \right| \le c c' c_1 \mathcal {V}_N \left( a_{n}^{-1} a_{n-1}^{*} : n \ge M \right) \\ {}&\quad + c c' c_1 \mathcal {V}_N \left( a_{n}^{-1} b_n : n \ge M\right) \\ {}&\quad + c c' c_1 \sup _{z \in K} |z| \mathcal {V}_N \left( a_n^{-1} : n \ge M\right) + c c' c_1 \sup _{z \in K} |z - \overline{z}| \sum _{n=M}^\infty \big \Vert {a_{n}^{-1}} \big \Vert . \end{aligned} \end{aligned}

Thus, (a) and (25) imply (26). If condition (25) is not satisfied, consider $$K \cap \mathbb {R}$$ instead of K in the last inequality. The proof is complete. $$\square$$

The following corollary provides an estimate which, in the scalar case, bounds the rate of convergence of Turán determinants to the density of the spectral measure of A (see [27]). It follows by the standard argument used in proving the convergence of infinite products of numbers.

### Corollary 4

Under the hypothesis of Theorem 8, for every bounded and closed $$\Omega \subset \mathcal {H}\oplus \mathcal {H}\setminus \{ 0 \}$$, the sequence of continuous functions $$(S_n : n \ge 1)$$ converges uniformly on $$\Omega \times (K \cap \mathbb {R})$$ (or on $$\Omega \times K$$ if (25) is satisfied) to a function g bounded away from 0. Moreover, by (35), there is a constant $$c>0$$ such that for all $$m > 0$$,

\begin{aligned} \begin{aligned}\sup _{\alpha \in \Omega } \sup _{z \in K \cap \mathbb {R}} |g(\alpha , z) - S_m(\alpha , z)|&\le c \mathcal {V}_N \left( a_{n}^{-1} a_{n-1}^{*} : n \ge m \right) + c \mathcal {V}_N \left( a_n^{-1} : n \ge m\right) \\&\quad + c \mathcal {V}_N \left( a_{n}^{-1} b_n : n \ge m\right) . \end{aligned} \end{aligned}

Finally, we are ready to prove Theorems 2 and 3.

### Proof of Theorem 2

By Propositions 6 and 8, the assumptions of Theorem 8 are satisfied. Therefore, the result follows from Theorem 7. $$\square$$

### Proof of Theorem 3

Since every $$C_n$$ is invertible, we have

\begin{aligned} \lim _{n \rightarrow \infty } \left\| \left( \frac{a_n}{\left||{a_n} \right||} \right) ^{-1} - C_n^{-1} \right\| = 0. \end{aligned}

Hence, for some $$c > 0$$,

\begin{aligned} \begin{aligned} \left\| {a_n} \right\| \big \Vert {a_n^{-1}} \big \Vert \le c. \end{aligned}\end{aligned}

Consequently,

\begin{aligned} \begin{aligned} \big \Vert {a_n^{-1}} \big \Vert \le \frac{c}{\left\| {a_n} \right\| }, \end{aligned} \end{aligned}

and (25) is satisfied. Moreover, this implies that $$T_n \equiv 0$$, so, in the notation of Proposition 7, every $$\mathcal {F}^i(\cdot )$$ is constant. Hence, Proposition 8 implies the almost uniform nondegeneracy of $$\{ Q^z : z \in \mathbb {R}\}$$. Since $$\mathcal {F}^i(\cdot )$$ is constant on $$\mathbb {C}$$, Proposition 8 implies that $$\{ Q^z : z \in \mathbb {C}\}$$ is almost uniformly nondegenerate as well. Thus, the assumptions of Theorem 8 are satisfied, and consequently, Theorem 7 implies the requested asymptotics. Finally, Corollary 1 finishes the proof. $$\square$$

## 7 Exact Asymptotics of Generalized Eigenvectors

The following theorem is a vector-valued version of [27, Corollary 1].

### Theorem 9

Let $$\Omega \subset \mathcal {H}\oplus \mathcal {H}\setminus \{ 0 \}$$ be a bounded and closed set, and let $$K \subset \mathbb {R}$$ (or $$K \subset \mathbb {C}$$ if the Carleman condition is not satisfied) be a compact set. Let N be an odd integer. Let the hypotheses of Theorem 2 be satisfied. Assume further that

\begin{aligned} T_n \equiv 0, \quad Q_n \equiv 0, \quad R_n \equiv \mathrm{Id}, \quad C_n \equiv C. \end{aligned}

Then $$C = C^*$$, and

\begin{aligned} \lim _{n \rightarrow \infty } \left\| {a_{n}} \right\| \big ( \langle {C u_{n- 1}}, {u_{n-1}} \rangle _\mathcal {H}+ \langle {C u_{n}}, {u_{n}} \rangle _\mathcal {H} \big ) = g \end{aligned}

uniformly on $$\Omega \times K$$, where

\begin{aligned} g(\alpha , z) = \lim _{n \rightarrow \infty } S_n(\alpha , z) \end{aligned}

for $$S_n$$ defined in (19).

### Proof

We have

\begin{aligned} \begin{pmatrix} 0 &{} \quad \mathrm {Id}\\ -\mathrm {Id}&{}\quad 0 \end{pmatrix}^2 = -\begin{pmatrix} \mathrm {Id}&{}\quad 0 \\ 0 &{}\quad \mathrm {Id}\end{pmatrix}. \end{aligned}

Hence,

\begin{aligned} \begin{pmatrix} 0 &{}\quad -C \\ C^* &{}\quad 0 \end{pmatrix} \begin{pmatrix} 0 &{}\quad \mathrm {Id}\\ -\mathrm {Id}&{}\quad 0 \end{pmatrix}^N = (-1)^{(N-1)/2} \begin{pmatrix} C &{}\quad 0 \\ 0 &{}\quad C^* \end{pmatrix}. \end{aligned}

Consequently,

\begin{aligned} \mathcal {F}(\lambda ) = \begin{pmatrix} \mathrm {Re} \left[ {C} \right] &{}\quad 0 \\ 0 &{}\quad \mathrm {Re} \left[ {C} \right] \end{pmatrix}. \end{aligned}

Therefore, by Proposition 6, $$r \mathrm {Id}= C^{-1} C^*$$ for $$r = \left||{C^{-1} C^*} \right||$$. This implies that $$r C = C^*$$. Taking norms, we obtain $$r=1$$, and consequently, $$C = C^*$$. Moreover, by Corollary 4, g is a continuous function on $$\Omega \times K$$ that is bounded away from 0. Hence, by Corollary 3, the result follows. $$\square$$
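As a numerical sanity check of the matrix identity used in the proof (an aside, not part of the formal argument), one can verify that, for an arbitrary matrix C and odd N, multiplying $$\big ({\begin{matrix} 0 &{} -C \\ C^* &{} 0 \end{matrix}}\big )$$ by the N-th power of $$\big ({\begin{matrix} 0 &{} \mathrm {Id}\\ -\mathrm {Id}&{} 0 \end{matrix}}\big )$$ gives $$(-1)^{(N-1)/2} \mathrm {diag}(C, C^*)$$; here C is a random complex $$2 \times 2$$ matrix:

```python
import numpy as np

# Sanity check of the displayed identity: for a matrix C and odd N,
#   [[0, -C], [C*, 0]] @ J^N = (-1)^((N-1)/2) * diag(C, C*),
# where J = [[0, Id], [-Id, 0]] and C* is the adjoint of C.
rng = np.random.default_rng(0)
d = 2
C = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
Cs = C.conj().T                      # the adjoint C^*
Id, Z = np.eye(d), np.zeros((d, d))

J = np.block([[Z, Id], [-Id, Z]])
A = np.block([[Z, -C], [Cs, Z]])

for N in (1, 3, 5, 7):
    lhs = A @ np.linalg.matrix_power(J, N)
    rhs = (-1) ** ((N - 1) // 2) * np.block([[C, Z], [Z, Cs]])
    assert np.allclose(lhs, rhs)
```

The check exercises the two facts combined in the proof: $$J^2 = -\mathrm {Id}$$, so $$J^N = (-1)^{(N-1)/2} J$$ for odd N, and the block product with J swaps the off-diagonal blocks onto the diagonal.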

In the scalar case, and under stronger assumptions, similar results were obtained in [16]. To obtain the complete information of the asymptotics, it is of interest to identify the function g. In the scalar case, g is related to the density of the spectral measure of A (see [27, Corollary 1]).

The following corollary is an extension of [27, Corollary 3] to the operator case. In the scalar case, it provides exact asymptotics of the so-called Christoffel functions, which have applications, e.g., in random matrix theory (see [22]) or signal processing (see [15]). We believe that in the operator case, it will also have some applications.

### Corollary 5

Let the assumptions of Theorem 9 be satisfied. Assume further that

\begin{aligned} \sum _{k=0}^\infty \frac{1}{\left||{a_k} \right||} = \infty . \end{aligned}

Then

\begin{aligned} \lim _{n \rightarrow \infty } \left[ \sum _{k=0}^n \frac{1}{\left||{a_k} \right||}\right] ^{-1} \sum _{k=0}^n \langle {C u_{k}}, {u_{k}} \rangle _\mathcal {H}= \frac{1}{2} g \end{aligned}

uniformly on $$\Omega \times K$$, where

\begin{aligned} g(\alpha , z) = \lim _{n \rightarrow \infty } S_n(\alpha , z) \end{aligned}

for $$S_n$$ defined in (19).

### Proof

By the Stolz–Cesàro theorem (also known as L’Hôpital’s rule for sequences),

\begin{aligned} \begin{aligned}&\lim _{n \rightarrow \infty } \left[ \sum _{k=0}^n \frac{1}{\left\| {a_k} \right\| }\right] ^{-1} \sum _{k=0}^n \langle {C u_{k}}, {u_{k}} \rangle _\mathcal {H}= \lim _{n \rightarrow \infty } \frac{\langle {C u_{n-1}}, {u_{n-1}} \rangle _\mathcal {H}+ \langle {C u_{n}}, {u_{n}} \rangle _\mathcal {H}}{1/\left\| {a_{n-1}} \right\| + 1/\left\| {a_n} \right\| } \\ {}&\quad = \lim _{n \rightarrow \infty } \frac{\left\| {a_{n}} \right\| \big ( \langle {C u_{n-1}}, {u_{n-1}} \rangle _\mathcal {H}+ \langle {C u_{n}}, {u_{n}} \rangle _\mathcal {H} \big )}{\left\| {a_{n}} \right\| /\left\| {a_{n-1}} \right\| + 1}. \end{aligned} \end{aligned}

Theorem 9 implies that $$C=C^*$$, and consequently, Proposition 6 shows that $$\left||{a_n} \right||/\left||{a_{n-1}} \right||$$ tends to 1. Therefore, by Theorem 9, the result follows. $$\square$$
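The Stolz–Cesàro step above can be illustrated with a toy scalar example (not the operator setting of the corollary): if $$b_n / c_n \rightarrow L$$ and $$\sum c_n = \infty$$, then the ratio of partial sums also tends to L. Below, $$c_k = 1/\sqrt{k+1}$$ (a divergent series) and $$b_k = (2 + 1/(k+1)) c_k$$, so $$b_k/c_k \rightarrow 2$$:

```python
import math

# Toy illustration of Stolz-Cesaro: b_n / c_n -> 2 and sum c_n diverges,
# hence the ratio of partial sums of b and c also tends to 2.
def partial_sum_ratio(n):
    num = sum((2 + 1 / (k + 1)) / math.sqrt(k + 1) for k in range(n + 1))
    den = sum(1 / math.sqrt(k + 1) for k in range(n + 1))
    return num / den

for n in (10, 1000, 100000):
    print(n, partial_sum_ratio(n))   # the ratios approach the limit 2
```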

## 8 Examples

### 8.1 Examples of Theorem 4

In this section, we give examples for the special cases of Theorem 4 presented in Sect. 5, i.e., Theorems 1 and 6. Since Theorem 5 is a weaker version of Theorem 2, its examples are postponed to the next section.

### Example 1

Assume that X and Y are bounded noncommuting operators on $$\mathcal {H}$$ such that X is invertible normal and Y is self-adjoint. Let

\begin{aligned} \tilde{x}_k = k \sqrt{\log (k+1)}, \quad \tilde{y}_k = \frac{1}{k \log (k+1)}. \end{aligned}

Write

\begin{aligned} \tilde{x}^k = (\tilde{x}_k : 1 \le j \le k), \quad \tilde{y}^k = (\tilde{y}_k : 1 \le j \le k), \end{aligned}

i.e., $$\tilde{x}_k$$ and $$\tilde{y}_k$$ repeated k times each. We define, in block form,

\begin{aligned} x = (\tilde{x}^k : k \ge 1), \quad y = (\tilde{y}^k : k \ge 1). \end{aligned}

Then for

\begin{aligned} a_n = x_n X, \quad b_n = y_n Y, \end{aligned}

the assumptions of Theorem 1 are satisfied.

### Proof

We have

\begin{aligned} a_{n+1} a_{n+1}^* - a_n^* a_n = x_{n+1}^2 X X^* - x_n^2 X^* X, \end{aligned}

which, by the normality of X, equals $$(x_{n+1}^2 - x_n^2) X^* X$$ and hence, by the monotonicity of $$x_n$$, is positive. Thus, the hypothesis (a) is satisfied.

Next, one has $$\left||{a_n} \right|| = x_n \left||{X} \right||$$. Therefore, by

\begin{aligned} \sum _{n=0}^\infty \frac{1}{x_n^2} = \sum _{k=1}^\infty \frac{k}{\tilde{x}_k^2} = \sum _{k=1}^\infty \frac{1}{k \log (k+1)} = \infty , \end{aligned}

we obtain the hypothesis (c).

Finally,

\begin{aligned} \frac{\left||{a_n b_{n+1} - b_n a_n} \right||}{x_n^2} \le \frac{|y_{n+1} - y_n|}{x_n} \left||{X Y} \right|| + \frac{|y_n|}{x_n} \left||{X Y - Y X} \right||, \end{aligned}

and by the fact that $$(x_{n+1}/x_n : n \ge 0)$$ tends to 1, the hypothesis (b) will be satisfied if $$(y_n/ x_n : n \ge 0)$$ is summable. But

\begin{aligned} \sum _{n=0}^\infty \frac{y_n}{x_n} = \sum _{k=1}^\infty k \frac{\tilde{y}_k}{\tilde{x}_k} = \sum _{k=1}^\infty \frac{1}{k [\log (k+1)]^{3/2}} < \infty , \end{aligned}

and the result follows. $$\square$$
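The two summand identities used above can be spot-checked numerically (a quick aside; `x` and `y` below stand for $$\tilde{x}_k$$ and $$\tilde{y}_k$$):

```python
import math

# Spot-check of the summand computations in Example 1: with
# x_k = k*sqrt(log(k+1)) and y_k = 1/(k*log(k+1)), the summands are
#   k / x_k^2     = 1/(k*log(k+1))        -> a divergent series,
#   k * y_k / x_k = 1/(k*log(k+1)**1.5)   -> a convergent series.
for k in (1, 10, 1000):
    x = k * math.sqrt(math.log(k + 1))
    y = 1 / (k * math.log(k + 1))
    assert math.isclose(k / x**2, 1 / (k * math.log(k + 1)))
    assert math.isclose(k * y / x, 1 / (k * math.log(k + 1) ** 1.5))
```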

### Example 2

Let $$K \ge 1$$ be an integer and M be such that $$\log ^{(K)}(M) > 0$$ (see (15)). Assume that X and Y are bounded noncommuting self-adjoint operators on $$\mathcal {H}$$ such that X is invertible. Let

\begin{aligned} a_n = x_n X, \quad b_n = y_n Y, \end{aligned}

for

\begin{aligned} x_n = (n+M) g_K(n+M), \quad y_n = \frac{1}{\log ^{(K)}(n+M)}. \end{aligned}

Then the assumptions of Theorem 6 are satisfied.

### Proof

The hypotheses (a) and (d) from Theorem 6 are straightforward.

Since X is self-adjoint,

\begin{aligned} \left( a_{n-1}^*\right) ^{-1} a_{n} = \frac{x_{n}}{x_{n-1}} \mathrm {Id}. \end{aligned}

Therefore, by [26, Example 4.5], the hypothesis (b) is satisfied.

It remains to show the hypothesis (c). We have

\begin{aligned} a_n^{-1} b_n - b_{n+1} a_n^{-1} = \frac{y_n - y_{n+1}}{x_n} X^{-1} Y + \frac{y_{n+1}}{x_n} (X^{-1} Y - Y X^{-1}). \end{aligned}

Since $$(y_{n+1}/y_n : n \ge 0)$$ tends to 1, it remains to show that $$(y_n/x_n : n \ge 0)$$ is summable. But

\begin{aligned} \frac{y_n}{x_n} = \frac{1}{(n+M) g_{K-1}(n+M) [\log ^{(K)}(n+M)]^2}, \end{aligned}

which by the Cauchy condensation test applied K times is summable. The proof is complete. $$\square$$
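The simplification of $$y_n/x_n$$ above can be spot-checked numerically, under the assumption (our reading of (15), which is not reproduced here) that $$\log ^{(j)}$$ denotes the j-times iterated logarithm and $$g_K(t) = \prod _{j=1}^{K} \log ^{(j)}(t)$$:

```python
import math
from functools import reduce

# Spot-check that 1/(t*g_K(t)*log^{(K)}(t)) equals
# 1/(t*g_{K-1}(t)*(log^{(K)}(t))^2), under the assumed definitions
# iter_log(j, t) = j-times iterated log and g(K, t) = prod of iterated logs.
def iter_log(j, t):
    for _ in range(j):
        t = math.log(t)
    return t

def g(K, t):
    return reduce(lambda p, j: p * iter_log(j, t), range(1, K + 1), 1.0)

for K in (1, 2, 3):
    for t in (1e3, 1e6):             # large enough that log^{(K)}(t) > 0
        lhs = 1 / (t * g(K, t) * iter_log(K, t))
        rhs = 1 / (t * g(K - 1, t) * iter_log(K, t) ** 2)
        assert math.isclose(lhs, rhs)
```

The equality is immediate from $$g_K(t) = g_{K-1}(t) \log ^{(K)}(t)$$; the check merely confirms the bookkeeping.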

### 8.2 Examples of Theorems 2 and 3

The following proposition provides a simple way to construct sequences satisfying the bounded variation condition of Theorem 2.

### Proposition 9

Fix $$N \ge 1$$ and a Hilbert space $$\mathcal {H}$$. Let $$(x_n : n \ge 0)$$ and $$(y_n : n \ge 0)$$ be sequences of numbers such that $$x_n > 0$$, $$y_n \in \mathbb {R}$$, and

\begin{aligned} \mathcal {V}_N \left( \frac{x_{n-1}}{x_n} : n \ge 1 \right) + \mathcal {V}_N \left( \frac{y_n}{x_n} : n \ge 0 \right) + \mathcal {V}_N \left( \frac{1}{x_n} : n \ge 0 \right) < \infty . \end{aligned}

Let $$(X_n : n \in \mathbb {Z})$$ and $$(Y_n : n \in \mathbb {Z})$$ be N-periodic sequences of bounded operators on $$\mathcal {H}$$ such that for every n, each $$X_n$$ is invertible and each $$Y_n$$ is self-adjoint. Let us define

\begin{aligned} a_n = x_n X_n, \quad b_n = y_n Y_n. \end{aligned}

Then

\begin{aligned} \mathcal {V}_N\left( a_n^{-1} a_{n-1}^* : n \ge 1\right) + \mathcal {V}_N\left( a_n^{-1} b_n : n \ge 0\right) + \mathcal {V}_N\left( a_n^{-1} : n \ge 0\right) < \infty . \end{aligned}

### Proof

We have

\begin{aligned}&a_n^{-1} a_{n-1}^* = \left( \frac{x_{n-1}}{x_n} \mathrm {Id}\right) \left( X_{n}^{-1} X^*_{n-1} \right) , \quad a_{n}^{-1} b_n = \left( \frac{y_n}{x_n} \mathrm {Id}\right) \left( X^{-1}_n Y_n \right) , \\&a_n^{-1} = \left( \frac{1}{x_n} \mathrm {Id}\right) X_n^{-1}. \end{aligned}

Therefore, it is enough to apply Proposition 1. $$\square$$

The next proposition provides a convenient form of $$\mathcal {F}(\lambda )$$ for $$N=1$$.

### Proposition 10

Assume:

• $$(a) \quad \lim \limits _{n \rightarrow \infty } \left||{a_n} \right|| = a \in (0, \infty ]$$,

• $$(b) \quad \lim \limits _{n \rightarrow \infty } \frac{a_n}{\left||{a_n} \right||} = C$$,

• $$(c) \quad \lim \limits _{n \rightarrow \infty } \frac{b_n}{\left||{a_n} \right||}= D$$,

• $$(d) \quad \lim \limits _{n \rightarrow \infty } \frac{\left||{a_{n-1}} \right||}{\left||{a_n} \right||}= 1$$.

Then, in the notation of Theorem 2,

\begin{aligned} \mathcal {F}(\lambda ) = \begin{pmatrix} \mathrm {Re} \left[ {C} \right] &{}\quad \frac{1}{2} D - \frac{\lambda }{2a} \mathrm {Id}\\ \frac{1}{2} D - \frac{\lambda }{2a} \mathrm {Id}&{}\quad \mathrm {Re} \left[ {C} \right] \end{pmatrix}. \end{aligned}

### Proof

Since

\begin{aligned} a_n^{-1} b_n = \left( \frac{a_n}{\left||{a_n} \right||} \right) ^{-1} \frac{b_n}{\left||{a_n} \right||}, \quad a_{n}^{-1} a_{n-1}^* = \left( \frac{a_n}{\left||{a_n} \right||} \right) ^{-1} \frac{a_{n-1}^*}{\left||{a_{n-1}} \right||} \frac{\left||{a_{n-1}} \right||}{\left||{a_n} \right||}, \end{aligned}

we have

\begin{aligned} Q_0 = C^{-1} D, \quad R_0 = C^{-1} C^*. \end{aligned}

Hence, a direct computation shows that $$\mathcal {F}(\lambda )$$ has the requested form. $$\square$$

In the following example, we discuss the optimality of $$\Lambda$$ in the case of constant coefficients.

### Example 3

Let

\begin{aligned} a_n = \begin{pmatrix} 1 &{}\quad 1\\ 1 &{}\quad 2 \end{pmatrix}, \quad b_n = \begin{pmatrix} 2 &{}\quad 1\\ 1 &{}\quad 1 \end{pmatrix}. \end{aligned}

Then the assumptions of Theorem 2 are satisfied with

\begin{aligned} \Lambda = \left( \frac{-3+\sqrt{13}}{2}, \frac{9-\sqrt{37}}{2} \right) \supset [0.303, 1.458]. \end{aligned}

Moreover, $$\Lambda$$ is the maximal set where A has absolutely continuous spectrum of multiplicity 2.

### Proof

Let

\begin{aligned} M_1= & {} \left( \frac{-3-\sqrt{13}}{2}, \frac{-3+\sqrt{13}}{2} \right) \cup \left( \frac{9-\sqrt{37}}{2}, \frac{9+\sqrt{37}}{2} \right) , \\ M_2= & {} \left( \frac{-3+\sqrt{13}}{2}, \frac{9-\sqrt{37}}{2} \right) . \end{aligned}

Since $$(a_n : n \ge 0)$$ and $$(b_n : n \ge 0)$$ are constant, it is sufficient to show that the matrix $$\mathcal {F}(\lambda )$$ is positive definite for $$\lambda \in M_2$$.

According to Proposition 10, we have

\begin{aligned} \left||{a_n} \right|| \mathcal {F}(\lambda ) = \begin{pmatrix} 1 &{} \quad 1 &{}\quad 1 -\frac{\lambda }{2} &{}\quad \frac{1}{2} \\ 1 &{} \quad 2 &{}\quad \frac{1}{2} &{}\quad \frac{1}{2} - \frac{\lambda }{2} \\ 1 -\frac{\lambda }{2} &{}\quad \frac{1}{2} &{}\quad 1 &{}\quad 1 \\ \frac{1}{2} &{}\quad \frac{1}{2} - \frac{\lambda }{2} &{}\quad 1 &{}\quad 2 \end{pmatrix}. \end{aligned}

The determinants of its principal minors are equal to

\begin{aligned} 1, \quad 1, \quad - \frac{1}{2} \lambda ^2 + \frac{3}{2} \lambda -\frac{1}{4}, \quad \frac{1}{16} \lambda ^4 -\frac{3}{8} \lambda ^3 - \frac{17}{16} \lambda ^2 + \frac{21}{8} \lambda -\frac{11}{16}. \end{aligned}

Hence, the matrix $$\mathcal {F}(\lambda )$$ is positive definite whenever $$\lambda \in M_2$$. Moreover, the determinant of the last minor is negative only for $$\lambda \in M_1$$.

According to [30, Theorem 3], the operator A is purely absolutely continuous on the closure of the set $$M_1 \cup M_2$$. Moreover, the spectrum of A has multiplicity 1 on $$M_1$$ and multiplicity 2 on $$M_2$$. $$\square$$
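The minor computations in this proof can be verified symbolically. The following sketch rebuilds the displayed matrix $$\left||{a_n} \right|| \mathcal {F}(\lambda )$$, checks the stated determinants of the leading principal minors, and confirms that the last minor factors through the endpoints of $$M_1$$ and $$M_2$$:

```python
import sympy as sp

# Symbolic verification of the leading principal minors of ||a_n|| F(lambda)
# from Example 3, and of the factorization of the last minor.
lam = sp.symbols('lambda')
half = sp.Rational(1, 2)
F = sp.Matrix([
    [1, 1, 1 - lam / 2, half],
    [1, 2, half, half - lam / 2],
    [1 - lam / 2, half, 1, 1],
    [half, half - lam / 2, 1, 2],
])
expected = [
    sp.Integer(1),
    sp.Integer(1),
    -sp.Rational(1, 2) * lam**2 + sp.Rational(3, 2) * lam - sp.Rational(1, 4),
    sp.Rational(1, 16) * lam**4 - sp.Rational(3, 8) * lam**3
    - sp.Rational(17, 16) * lam**2 + sp.Rational(21, 8) * lam
    - sp.Rational(11, 16),
]
for k in range(1, 5):
    assert sp.expand(F[:k, :k].det() - expected[k - 1]) == 0

# 16 * (last minor) = (lam^2 + 3 lam - 1)(lam^2 - 9 lam + 11), whose roots
# are exactly the endpoints of M_1 and M_2.
product = (lam**2 + 3 * lam - 1) * (lam**2 - 9 * lam + 11)
assert sp.expand(16 * expected[3] - product) == 0
```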

In the next example, we consider the unbounded case for $$N=1$$.

### Example 4

Let

\begin{aligned} X = \begin{pmatrix} 1 &{}\quad 1\\ 1 &{}\quad 2 \end{pmatrix}, \quad Y = \begin{pmatrix} 2 &{}\quad 1\\ 1 &{}\quad 1 \end{pmatrix}. \end{aligned}

Assume that the real sequences $$(x_n: n \ge 0)$$ and $$(y_n: n \ge 0)$$, with $$x_n > 0$$ and $$y_n \in \mathbb {R}$$ for every n, satisfy

\begin{aligned} \mathcal {V}_1 \left( \frac{x_{n-1}}{x_n} : n \ge 1 \right) + \mathcal {V}_1 \left( \frac{y_n}{x_n} : n \ge 0 \right) + \mathcal {V}_1 \left( \frac{1}{x_n} : n \ge 0 \right) < \infty \end{aligned}

and

\begin{aligned} \lim _{n \rightarrow \infty } x_n = \infty , \quad \lim _{n \rightarrow \infty } \frac{x_{n-1}}{x_n} = 1, \quad \lim _{n \rightarrow \infty } \frac{y_n}{x_n} = q \in (\sqrt{5}-3, 3-\sqrt{5}). \end{aligned}

For example, one can take $$x_n = (n+1)^\alpha$$ and $$y_n = q x_n$$ for any $$\alpha > 0$$.

Then for

\begin{aligned} a_n = x_n X, \quad b_n = y_n Y, \end{aligned}

the assumptions of Theorem 2 are satisfied.

### Proof

In view of Proposition 9, it is enough to show that $$\mathcal {F}$$ is positive definite. In the notation of Proposition 10,

\begin{aligned} C = \frac{1}{\left||{X} \right||} X, \quad D = \frac{q}{\left||{X} \right||} Y, \quad a = \infty . \end{aligned}

Hence, by Proposition 10,

\begin{aligned} \left||{X} \right|| \cdot \mathcal {F}(\lambda ) = \begin{pmatrix} 1 &{}\quad 1 &{}\quad q &{}\quad q/2 \\ 1 &{}\quad 2 &{}\quad q/2 &{}\quad q/2 \\ q &{}\quad q/2 &{}\quad 1 &{}\quad 1 \\ q/2 &{}\quad q/2 &{}\quad 1 &{}\quad 2 \end{pmatrix}. \end{aligned}

The determinants of the principal minors of this matrix are equal to

\begin{aligned} 1, \quad 1, \quad -\frac{5}{4} q^2 + 1, \quad \frac{1}{16} q^4 - \frac{7}{4} q^2 + 1. \end{aligned}

Hence, this matrix is positive definite if and only if

\begin{aligned} q \in (\sqrt{5}-3, 3-\sqrt{5}) \supset [-0.763, 0.763]. \end{aligned}

$$\square$$
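As in Example 3, the minors can be checked symbolically; the sketch below also confirms that the fourth minor vanishes at the endpoint $$q = 3 - \sqrt{5}$$ of the admissible interval:

```python
import sympy as sp

# Symbolic verification of the leading principal minors of ||X|| F(lambda)
# from Example 4, as polynomials in q.
q = sp.symbols('q', real=True)
F = sp.Matrix([
    [1, 1, q, q / 2],
    [1, 2, q / 2, q / 2],
    [q, q / 2, 1, 1],
    [q / 2, q / 2, 1, 2],
])
expected = [
    sp.Integer(1),
    sp.Integer(1),
    -sp.Rational(5, 4) * q**2 + 1,
    sp.Rational(1, 16) * q**4 - sp.Rational(7, 4) * q**2 + 1,
]
for k in range(1, 5):
    assert sp.expand(F[:k, :k].det() - expected[k - 1]) == 0

# the fourth minor vanishes at q = 3 - sqrt(5), the right endpoint of
# the interval (sqrt(5) - 3, 3 - sqrt(5))
assert sp.expand(expected[3].subs(q, 3 - sp.sqrt(5))) == 0
```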