1 Introduction

Consider two sequences \(a = (a_n: n \in \mathbb {N}_0)\) and \(b = (b_n: n \in \mathbb {N}_0)\) such that \(a_n > 0\) and \(b_n \in \mathbb {R}\) for all \(n \ge 0\). Let A be the closure in \(\ell ^2(\mathbb {N}_0)\) of the operator acting by the matrix

$$\begin{aligned} \mathcal {A}=\begin{pmatrix} b_0 &{}\quad a_0 &{}\quad 0 &{}\quad 0 &{}\quad \ldots \\ a_0 &{}\quad b_1 &{}\quad a_1 &{}\quad 0 &{}\quad \ldots \\ 0 &{}\quad a_1 &{}\quad b_2 &{}\quad a_2 &{}\quad \ldots \\ 0 &{}\quad 0 &{}\quad a_2 &{}\quad b_3 &{}\quad \\ \vdots &{}\quad \vdots &{}\quad \vdots &{}\quad &{}\quad \ddots \end{pmatrix} \end{aligned}$$

on finitely supported sequences. The operator A is called a Jacobi matrix, and its Jacobi parameters are the sequences a and b. Recall that \(\ell ^2(\mathbb {N}_0)\) is the Hilbert space of square summable complex-valued sequences with the scalar product

$$\begin{aligned} \langle {x}, {y} \rangle _{\ell ^2(\mathbb {N}_0)} = \sum _{n=0}^\infty x_n \overline{y_n}. \end{aligned}$$

Its standard orthonormal basis will be denoted by \((\delta _n: n \in \mathbb {N}_0)\); namely, \(\delta _n\) is the sequence with 1 in the nth position and 0 elsewhere.

Let us observe that the operator A is always symmetric. However, if A is unbounded, that is, if at least one of the sequences a and b is unbounded, it need not be self-adjoint. If it is self-adjoint, then one can define a Borel probability measure \(\mu \) by

$$\begin{aligned} \mu (\cdot ) = \langle E_A(\cdot ) \delta _0, \delta _0 \rangle _{\ell ^2} \end{aligned}$$

where \(E_A\) is the spectral resolution of the identity of A. Then the sequence of polynomials \((p_n: n \in \mathbb {N}_0)\) satisfying

$$\begin{aligned} \begin{aligned}&p_0(x) = 1, \quad p_1(x) = \frac{x-b_0}{a_0}, \\&\quad a_{n-1} p_{n-1}(x) + b_n p_n(x) + a_n p_{n+1}(x) = x p_n(x), \quad n \ge 1. \end{aligned} \end{aligned}$$

is an orthonormal basis in \(L^2(\mathbb {R}, \mu )\), that is, the Hilbert space of square integrable complex-valued functions with the scalar product

$$\begin{aligned} \langle f, g \rangle _{L^2(\mathbb {R}, \mu )} = \int _{\mathbb {R}} f(x) \overline{g(x)} \mu (\textrm{d} x). \end{aligned}$$

Moreover, the operator \(U: \ell ^2(\mathbb {N}_0) \rightarrow L^2(\mathbb {R}, \mu )\) defined on the basis vectors by

$$\begin{aligned} U \delta _n = p_n \end{aligned}$$

is unitary and satisfies

$$\begin{aligned} (U A U^{-1} f)(x) = x f(x) \end{aligned}$$

for every \(f \in L^2(\mathbb {R}, \mu )\) such that \(x f \in L^2(\mathbb {R}, \mu )\), see [45, Section 6] for more details. It follows that the spectral properties of A are intimately related to the properties of \(\mu \). For example, \(\sigma _{\textrm{ess}}(A)\) is the set of accumulation points of \(\textrm{supp}\,(\mu )\). Furthermore, if

$$\begin{aligned} \mu = \mu _{\textrm{ac}} + \mu _{\textrm{sing}} \end{aligned}$$

is the Lebesgue decomposition of \(\mu \) into the absolutely continuous and the singular parts with respect to the Lebesgue measure, then \(\sigma _{\textrm{ac}}(A) = \textrm{supp}\,(\mu _{\textrm{ac}})\) and \(\sigma _{\textrm{sing}}(A) = \textrm{supp}\,(\mu _{\textrm{sing}})\).
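The correspondence between A, \(\mu \) and \((p_n)\) described above can be explored numerically. The following sketch is purely illustrative and not part of the paper's argument: the parameters \(a_n = \sqrt{(n+1)/2}\), \(b_n = 0\) are an arbitrary choice. It truncates the matrix \(\mathcal {A}\), reads off nodes and weights of a discrete approximation to \(\mu \) from the spectral decomposition, and checks that the recurrence indeed produces orthonormal polynomials.

```python
import numpy as np

# Hypothetical Jacobi parameters (illustrative choice): a_n = sqrt((n+1)/2), b_n = 0.
M = 30
a = np.sqrt((np.arange(M) + 1) / 2.0)
b = np.zeros(M)

# Truncated Jacobi matrix and its spectral decomposition.
A = np.diag(b) + np.diag(a[:-1], 1) + np.diag(a[:-1], -1)
x, V = np.linalg.eigh(A)
w = V[0, :] ** 2       # discrete analogue of <E_A(.) delta_0, delta_0>

# Orthonormal polynomials via the three-term recurrence above.
def p(n, t):
    p_prev, p_cur = np.zeros_like(t), np.ones_like(t)
    for k in range(n):
        a_km1 = a[k - 1] if k >= 1 else 0.0   # p_{-1} = 0, so this value is irrelevant at k = 0
        p_prev, p_cur = p_cur, ((t - b[k]) * p_cur - a_km1 * p_prev) / a[k]
    return p_cur

# The nodes/weights reproduce orthonormality for low degrees.
G = np.array([[np.sum(w * p(i, x) * p(j, x)) for j in range(5)] for i in range(5)])
assert np.allclose(G, np.eye(5), atol=1e-8)
```

Here the weights w sum to 1 because \(p_0 \equiv 1\); this is the standard Gauss quadrature attached to the truncation.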

Jacobi matrices are thoroughly studied. In the bounded case, let us only refer to the recent monograph [50] and to the references therein. For the unbounded case, see e.g. [10, 16, 21, 39, 42, 57,58,59,60] and the references therein. In this article we consider unbounded Jacobi matrices only.

An interesting class of unbounded Jacobi matrices is related to the so-called birth–death processes (see, e.g. [46]), that is, stationary Markov processes with the discrete state space \(\mathbb {N}_0\). According to [24], generators of birth–death processes correspond to the Jacobi parameters

$$\begin{aligned} a_n = \sqrt{\lambda _n \mu _{n+1}}, \quad b_n = -\lambda _n - \mu _n \end{aligned}$$

where positive sequences \((\lambda _n: n \in \mathbb {N}_0)\) and \((\mu _n: n \in \mathbb {N}_0)\) are called the birth and death rates, respectively. The simplest case is when \(\lambda _n = \mu _{n+1}\), which we call symmetric. In particular, we can consider the following example.

Example 1.1

Let \(\kappa \in (1,2)\) and set

$$\begin{aligned} a_n = (n+1)^\kappa , \quad b_n = -(n+1)^\kappa -n^\kappa . \end{aligned}$$

Then \(\lambda _n = (n+1)^\kappa \), \(\mu _n = n^\kappa \).
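As a quick sanity check, not part of the paper's argument, the identification above can be run in code: with the symmetric rates \(\lambda _n = (n+1)^\kappa \), \(\mu _n = n^\kappa \), the formulas \(a_n = \sqrt{\lambda _n \mu _{n+1}}\), \(b_n = -\lambda _n - \mu _n\) reproduce the Jacobi parameters of Example 1.1.

```python
kappa = 1.5                           # any kappa in (1, 2); illustrative value
lam = lambda n: (n + 1) ** kappa      # birth rates
mu = lambda n: n ** kappa             # death rates (symmetric: lam(n) == mu(n + 1))

for n in range(20):
    a_n = (lam(n) * mu(n + 1)) ** 0.5
    b_n = -lam(n) - mu(n)
    assert abs(a_n - (n + 1) ** kappa) < 1e-9                      # matches Example 1.1
    assert abs(b_n - (-(n + 1) ** kappa - n ** kappa)) < 1e-9
```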

Another interesting class of unbounded Jacobi matrices has been recently studied in [66].

Example 1.2

For \(\kappa \in (1, \infty )\) and \(f,g>-1\) we set

$$\begin{aligned} a_n = (n+1)^{\kappa } \Big ( 1 + \frac{f}{n+1} \Big ), \quad b_n = -2 (n+1)^{\kappa } \Big ( 1 + \frac{g}{n+1} \Big ). \end{aligned}$$

In particular, in [66], spectral properties of A have been described when \(\kappa \in (\frac{3}{2}, \infty )\) and \(\kappa + 2g - 2 f \ne 0\).

Let us observe that in both examples the Jacobi parameters satisfy

$$\begin{aligned} \lim _{n \rightarrow \infty } a_n = \infty , \quad \lim _{n \rightarrow \infty } \frac{a_{n-1}}{a_n} = 1, \quad \lim _{n \rightarrow \infty } \frac{b_n}{a_n} = -2. \end{aligned}$$
(1.1)
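For Example 1.2 the limits (1.1) can be confirmed numerically; the sketch below uses illustrative values of \(\kappa \), f, g and is not part of the paper.

```python
kappa, f, g = 2.0, 0.5, -0.25         # illustrative values for Example 1.2

def a(n): return (n + 1) ** kappa * (1 + f / (n + 1))
def b(n): return -2 * (n + 1) ** kappa * (1 + g / (n + 1))

n = 10**6
assert a(n) > 1e6                        # a_n -> infinity
assert abs(a(n - 1) / a(n) - 1) < 1e-4   # a_{n-1}/a_n -> 1
assert abs(b(n) / a(n) + 2) < 1e-4       # b_n/a_n -> -2
```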

The aim of this article is to study spectral properties of A as well as the asymptotic behavior of the associated orthogonal polynomials \((p_n: n \ge 0)\) for a large subclass of Jacobi parameters satisfying (1.1), containing the sequences from Examples 1.1 and 1.2 as special cases. In fact, in this article we go beyond (1.1) by allowing the sequences \((\frac{a_{n-1}}{a_n})\) and \((\frac{b_n}{a_n})\) to be asymptotically periodic. To be more precise, given a positive integer N, we say that Jacobi parameters \((a_n: n \in \mathbb {N}_0)\) and \((b_n: n \in \mathbb {N}_0)\) are N-periodically modulated if there are two N-periodic sequences \((\alpha _n: n \in \mathbb {Z})\) and \((\beta _n: n \in \mathbb {Z})\) of positive and real numbers, respectively, such that

  (a) \(\lim _{n \rightarrow \infty } a_n = \infty \),

  (b) \(\lim _{n \rightarrow \infty } \bigg | \frac{\alpha _{n-1}}{\alpha _n} - \frac{a_{n-1}}{a_n} \bigg | = 0 \),

  (c) \(\lim _{n \rightarrow \infty } \bigg | \frac{\beta _n}{\alpha _n} - \frac{b_n}{a_n} \bigg | = 0 \).

It turns out that spectral properties of N-periodically modulated Jacobi matrices depend on the matrix \(\mathfrak {X}_0(0)\), where for any \(n \ge 0\) we set

$$\begin{aligned} \mathfrak {X}_n(x) = \mathfrak {B}_{N+n-1}(x) \mathfrak {B}_{N+n-2}(x) \cdots \mathfrak {B}_n(x) \end{aligned}$$

where

$$\begin{aligned} \mathfrak {B}_j(x) = \begin{pmatrix} 0 &{} 1 \\ -\frac{\alpha _{j-1}}{\alpha _j} &{} \frac{x - \beta _j}{\alpha _j} \end{pmatrix}. \end{aligned}$$

More specifically, we can distinguish four cases:

  I. if \(|{\text {tr}}\mathfrak {X}_0(0)|<2\), then under some regularity assumptions on the Jacobi parameters one has \(\sigma (A) = \mathbb {R}\), and the spectrum is purely absolutely continuous, see e.g. [19, 21, 54, 56, 57];

  II. if \(|{\text {tr}}\mathfrak {X}_0(0)|=2\), then we have two subcases:

    (a) if \(\mathfrak {X}_0(0)\) is diagonalizable, then under some regularity assumptions on the Jacobi parameters there is a compact interval \(I \subset \mathbb {R}\) such that A is purely absolutely continuous on \(\mathbb {R}{\setminus } I\) and purely discrete in the interior of I, see e.g. [5,6,7, 10, 11, 17, 18, 23, 43, 54, 55, 60];

    (b) if \(\mathfrak {X}_0(0)\) is not diagonalizable, then the only situations understood so far are those in which the essential spectrum of A is either empty or a half-line, see e.g. [4, 8, 9, 20, 22, 33, 34, 36, 37, 39, 44, 51, 61, 66];

  III. if \(|{\text {tr}}\mathfrak {X}_0(0)|>2\), then under some regularity assumptions on the Jacobi parameters the essential spectrum of A is empty, see e.g. [16, 21, 38, 60, 64].
Observe that in case I the absolutely continuous spectrum fills the whole real line, whereas in case III it is empty. This phenomenon was originally observed in [21], where it was called a spectral phase transition of the first type. Notice that case II corresponds to the point where the actual phase transition occurs. In fact, in [21, Section 5] the analysis of case II was formulated as a very interesting open problem, whose solution required new tools. Nowadays case II.a is quite well understood, see [58, 60]. Therefore, in this article we are exclusively interested in case II.b, which for \(N = 1\) and \(\alpha _n \equiv 1, \beta _n \equiv -2\) covers (1.1).
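For the scalar case just mentioned (\(N = 1\), \(\alpha _n \equiv 1\), \(\beta _n \equiv -2\)) one can verify directly that \(\mathfrak {X}_0(0)\) is a non-trivial parabolic element, i.e. that we are in case II.b. A small illustrative sketch:

```python
import numpy as np

# N = 1 with alpha ≡ 1 and beta ≡ -2, the setting that covers (1.1).
alpha, beta = 1.0, -2.0
B0 = np.array([[0.0, 1.0],
               [-alpha / alpha, (0.0 - beta) / alpha]])   # the matrix B_0(0) defined above
X0 = B0                                  # for N = 1, X_0(0) = B_0(0)

assert abs(np.trace(X0)) == 2.0          # |tr X_0(0)| = 2: case II
assert abs(np.linalg.det(X0) - 1.0) < 1e-12
# Single eigenvalue 1 with a one-dimensional eigenspace, so X_0(0) is not
# diagonalizable: a non-trivial parabolic element, i.e. case II.b.
assert np.linalg.matrix_rank(X0 - np.eye(2)) == 1
```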

Let us introduce an auxiliary positive sequence \(\gamma = (\gamma _n: n \in \mathbb {N}_0)\) tending to infinity. In Examples 1.1 and 1.2 we take \(\gamma _n = a_n\) and \(\gamma _n = n+1\), respectively. We say that N-periodically modulated Jacobi parameters \((a_n), (b_n)\) are \(\gamma \)-tempered if the sequences

$$\begin{aligned} \bigg ( \sqrt{\gamma _n} \Big ( \frac{\alpha _{n-1}}{\alpha _n} - \frac{a_{n-1}}{a_n} \Big ): n \in \mathbb {N}\bigg ), \bigg ( \sqrt{\gamma _n} \Big ( \frac{\beta _n}{\alpha _n} - \frac{b_n}{a_n} \Big ): n \in \mathbb {N}\bigg ), \bigg ( \frac{\gamma _n}{a_n}: n \in \mathbb {N}\bigg ) \end{aligned}$$

belong to \(\mathcal {D}_1^N\). Let us recall that a sequence \((x_n: n \in \mathbb {N})\) belongs to \(\mathcal {D}_1^N\) if

$$\begin{aligned} \sum _{n=1}^\infty |x_{n+N} - x_n| < \infty . \end{aligned}$$

About the sequence \(\gamma \) we assume that

$$\begin{aligned} \bigg ( \sqrt{\gamma _n} \Big ( \sqrt{\frac{\alpha _{n-1}}{\alpha _n}} - \sqrt{\frac{\gamma _{n-1}}{\gamma _n}} \Big ) : n \in \mathbb {N}\bigg ), \bigg (\frac{1}{\sqrt{\gamma _n}} : n \in \mathbb {N}\bigg ) \in \mathcal {D}_1^N, \end{aligned}$$
(1.2)

and

$$\begin{aligned} \lim _{n \rightarrow \infty } \big ( \sqrt{\gamma _{n+N}} - \sqrt{\gamma _n} \big ) = 0. \end{aligned}$$
(1.3)

Moreover, we impose that

$$\begin{aligned} \begin{aligned}&\bigg (\gamma _n \big (1 - \varepsilon \big [\mathfrak {X}_n(0)\big ]_{11}\big ) \Big (\frac{\alpha _{n-1}}{\alpha _n} - \frac{a_{n-1}}{a_n} \Big ) \\&\quad - \gamma _n \varepsilon \big [\mathfrak {X}_n(0) \big ]_{21} \Big (\frac{\beta _n}{\alpha _n} - \frac{b_n}{a_n}\Big ) : n \in \mathbb {N}\bigg ) \in \mathcal {D}_1^N \end{aligned} \end{aligned}$$
(1.4)

where \(\varepsilon = {\text {sign}}({{\text {tr}}\mathfrak {X}_0(0)})\). To formulate the main results of this paper, we need further definitions. For \(x \in \mathbb {C}\) and \(n \in \mathbb {N}_0\) we define the transfer matrix by

$$\begin{aligned} B_n(x) =\begin{pmatrix} 0 &{}\quad 1 \\ -\frac{a_{n-1}}{a_n} &{}\quad \frac{x-b_n}{a_n} \end{pmatrix}. \end{aligned}$$

We use the convention that \(a_{-1}:= 1\). Moreover, for a matrix

$$\begin{aligned} Y =\begin{pmatrix} y_{11} &{}\quad y_{12} \\ y_{21} &{}\quad y_{22} \end{pmatrix} \end{aligned}$$

we set \([Y]_{ij} = y_{ij}\). The discriminant of Y is defined as \({\text {discr}}Y = ({\text {tr}}Y)^2 - 4 \det Y\).

The first main result of this article identifies the absolutely continuous and the essential spectrum of the studied class of Jacobi matrices.

Theorem A

Let N be a positive integer. Let \((\gamma _n)\) be a sequence of positive numbers tending to infinity and satisfying (1.2) and (1.3). Let \((a_n)\) and \((b_n)\) be \(\gamma \)-tempered N-periodically modulated Jacobi parameters such that \(\mathfrak {X}_0(0)\) is a non-trivial parabolic element. Suppose that (1.4) holds true with \(\varepsilon = {\text {sign}}({{\text {tr}}\mathfrak {X}_0(0)})\). Set

$$\begin{aligned} X_n(x) = B_{n+N-1}(x) B_{n+N-2}(x) \ldots B_{n+1}(x) B_n(x). \end{aligned}$$
(1.5)

Then the limit

$$\begin{aligned} \tau (x) = \frac{1}{4} \lim _{n \rightarrow \infty } \frac{\gamma _{n+N-1}}{\alpha _{n+N-1}} {\text {discr}}X_n(x), \quad x \in \mathbb {R}, \end{aligned}$$
(1.6)

exists and defines a polynomial of degree at most one. Let

$$\begin{aligned} \Lambda _- = \tau ^{-1} \big ( (-\infty , 0) \big ) \quad \text {and} \quad \Lambda _+ = \tau ^{-1} \big ( (0, \infty ) \big ). \end{aligned}$$

If \(\Lambda _- \cup \Lambda _+ \ne \emptyset \) and A is self-adjoint, then

$$\begin{aligned} \sigma _{\textrm{sing}}(A) \cap \Lambda _- = \emptyset \quad \text {and} \quad \sigma _{\textrm{ac}}(A) = \sigma _{\textrm{ess}}(A) = {\text {cl}}(\Lambda _-). \end{aligned}$$

Let us emphasize that Theorem A excludes the case \(\Lambda _- = \Lambda _+ = \emptyset \), that is, \(\tau \equiv 0\). Moreover, Theorem A implies that the operator A is not semi-bounded if \(\Lambda _+ = \emptyset \) and \(\Lambda _- \ne \emptyset \), because then \(\Lambda _- = \mathbb {R}= \sigma (A)\). However, it is unclear under what hypotheses the operator A is semi-bounded when \(\Lambda _+ \ne \emptyset \). Recall that in case III a characterization of semi-boundedness of the operator A was given in [38].

The condition (1.4) might look rather restrictive. However, it is always satisfied by the Jacobi parameters studied in [61], as well as by generators of symmetric birth–death processes (cf. Remark 11.5). Hence, Theorem A can be applied to the Jacobi parameters described in Example 1.1, where for \(\gamma _n = a_n\) we get \(\tau (x) = x\). Moreover, if \(N=1\) the condition (1.4) reduces to

$$\begin{aligned} \bigg ( \gamma _n \Big ( 1 + \frac{a_{n-1}}{a_n} + \varepsilon \frac{b_n}{a_n} \Big ) \bigg ) \in \mathcal {D}_1^1. \end{aligned}$$

Therefore, Theorem A can be applied to the Jacobi parameters given in Example 1.2, where for \(\gamma _n = n+1\) we obtain \(\tau (x) \equiv -\kappa - 2 g + 2 f\).
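The value of \(\tau \) can be checked numerically. The sketch below is illustrative only (\(\kappa = 3/2\)); it approximates the limit (1.6) for Example 1.1 with \(N = 1\), \(\alpha _n \equiv 1\) and \(\gamma _n = a_n\), for which \(X_n(x) = B_n(x)\), and observes \(\tau (x) \approx x\).

```python
kappa = 1.5                               # Example 1.1
def a(n): return (n + 1) ** kappa
def b(n): return -(n + 1) ** kappa - n ** kappa

def tau_approx(x, n):
    # N = 1, alpha ≡ 1, gamma_n = a_n:  tau(x) ≈ (1/4) * gamma_n * discr B_n(x)
    tr = (x - b(n)) / a(n)                # tr B_n(x)
    det = a(n - 1) / a(n)                 # det B_n(x)
    return 0.25 * a(n) * (tr * tr - 4.0 * det)

for x in (-3.0, 0.0, 2.0):
    assert abs(tau_approx(x, 10**6) - x) < 1e-2    # tau(x) = x, as stated above
```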

The proof of Theorem A uses the theory of subordinacy. It was first developed in [14] for one-dimensional Schrödinger operators on the real half-line, and later adapted to other classes of operators; see e.g. the survey [13] for more details. In particular, the extension to Jacobi matrices was accomplished in [26]. The theory of subordinacy links the asymptotic behavior of generalized eigenvectors to spectral properties of Jacobi matrices. Let us recall that a sequence \((u_n: n \in \mathbb {N}_0)\) is a generalized eigenvector associated to \(x \in \mathbb {C}\), and corresponding to \(\eta \in \mathbb {R}^2 {\setminus } \{0\}\), if the sequence of vectors

$$\begin{aligned} \vec {u}_0&= \eta , \\ \vec {u}_n&= \begin{pmatrix} u_{n-1} \\ u_n \end{pmatrix}, \quad n \ge 1, \end{aligned}$$

satisfies

$$\begin{aligned} \vec {u}_{n+1} = B_n(x) \vec {u}_n, \quad n \ge 0. \end{aligned}$$
(1.7)
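For illustration (with arbitrary parameters, not taken from the paper), one can check in code that the recursion (1.7) with \(\eta = (0,1)^t\) reproduces the orthonormal polynomials, using the convention \(a_{-1} = 1\):

```python
a = [2.0, 1.0, 3.0, 1.5, 2.5]          # arbitrary illustrative Jacobi parameters
b = [0.5, -1.0, 0.0, 2.0, -0.5]
a_m1 = 1.0                             # convention a_{-1} := 1

def transfer(n, x):                    # the transfer matrix B_n(x)
    a_prev = a_m1 if n == 0 else a[n - 1]
    return [[0.0, 1.0], [-a_prev / a[n], (x - b[n]) / a[n]]]

def gen_eigvec(x, steps):              # u_0, ..., u_steps via (1.7) with eta = (0, 1)^t
    u = (0.0, 1.0)
    seq = [u[1]]
    for n in range(steps):
        B = transfer(n, x)
        u = (B[0][0]*u[0] + B[0][1]*u[1], B[1][0]*u[0] + B[1][1]*u[1])
        seq.append(u[1])
    return seq

def poly(x, steps):                    # p_0, ..., p_steps via the three-term recurrence
    p = [1.0, (x - b[0]) / a[0]]
    for n in range(1, steps):
        p.append(((x - b[n]) * p[n] - a[n - 1] * p[n - 1]) / a[n])
    return p

x = 0.7
assert all(abs(u - p) < 1e-12 for u, p in zip(gen_eigvec(x, 4), poly(x, 4)))
```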

We often write \((u_n(\eta , x): n \in \mathbb {N}_0)\) to indicate the dependence on the parameters. In particular, the sequence of orthogonal polynomials \((p_n(x): n \in \mathbb {N}_0)\) is the generalized eigenvector associated to \(x \in \mathbb {C}\) and corresponding to \(\eta = (0,1)^t\). Motivated by [49, Section 8], it will be convenient to define the (generalized) Christoffel–Darboux kernel by

$$\begin{aligned} K_n(x, y; \eta ) = \sum _{j=0}^n u_j(\eta , x) u_j(\eta , y), \quad x, y \in \mathbb {R},\ \eta \in \mathbb {R}^2 {\setminus } \{0\}. \end{aligned}$$

Suppose that A is self-adjoint. According to [26, Theorem 3], if for some compact interval \(K \subset \mathbb {R}\) with non-empty interior,

$$\begin{aligned} \liminf _{n \rightarrow \infty } \frac{K_n(x,x;\eta )}{K_n(x,x;\tilde{\eta })} < \infty \quad \text {for any } x \in K \text { and } \eta , \tilde{\eta } \in \mathbb {S}^1 , \end{aligned}$$
(1.8)

where by \(\mathbb {S}^1\) we denote the unit sphere in \(\mathbb {R}^2\), then the measure \(\mu \) is absolutely continuous on K and \(K \subset \textrm{supp}\,(\mu )\). Consequently, A is absolutely continuous on K, and \(K \subset \sigma _{\textrm{ac}}(A)\). This theory has become a standard approach to the spectral analysis of Jacobi matrices. It has also been observed that by imposing some uniformity conditions on (1.8), more detailed information on the density of \(\mu \) can be obtained, see the references in [13, Section 4]. In the present article we show that for any compact interval \(K \subset \Lambda _-\) the following stronger version of (1.8) holds true

$$\begin{aligned} \sup _{n \in \mathbb {N}_0} \sup _{x \in K} \sup _{\eta , \tilde{\eta } \in \mathbb {S}^1} \frac{K_n(x, x; \eta )}{K_n(x, x; \tilde{\eta })} < \infty . \end{aligned}$$
(1.9)

In view of [2] (see also [32] for a different proof in a more general setup), the condition (1.9) implies the existence of positive constants \(c_1, c_2\) such that the density \(\mu '\) of \(\mu \) satisfies

$$\begin{aligned} c_1< \mu '(x) < c_2 \end{aligned}$$
(1.10)

for almost all \(x \in K\) with respect to the Lebesgue measure. Finally, in [47], the following consequence of subordinacy theory was established: if A is self-adjoint and for some \(K \subset \mathbb {R}\) there is a function \(\eta : K \rightarrow \mathbb {R}^2 {\setminus } \{0\}\) such that

$$\begin{aligned} \sum _{n=0}^\infty \sup _{x \in K} \big | u_n \big ( \eta (x), x \big ) \big |^2 < \infty , \end{aligned}$$
(1.11)

then \(K \cap \sigma _{\textrm{ess}}(A) = \emptyset \). In Theorem 4.1, with the help of a recently obtained variant of a discrete Levinson-type theorem (see [60]), we show that (1.11) holds for every compact interval \(K \subset \Lambda _+\). The fact that (1.9) holds for every compact interval \(K \subset \Lambda _-\) is a consequence of the following theorem.

Theorem B

Let N be a positive integer. Let \((\gamma _n)\) be a sequence of positive numbers tending to infinity and satisfying (1.2) and (1.3). Let \((a_n)\) and \((b_n)\) be \(\gamma \)-tempered N-periodically modulated Jacobi parameters such that \(\mathfrak {X}_0(0)\) is a non-trivial parabolic element. Suppose that (1.4) holds true with \(\varepsilon = {\text {sign}}({{\text {tr}}\mathfrak {X}_0(0)})\). Set

$$\begin{aligned} \rho _n = \sum _{j=0}^n \frac{\sqrt{\alpha _j \gamma _j}}{a_j}. \end{aligned}$$

If \(\Lambda _- \ne \emptyset \), then A is self-adjoint if and only if \(\rho _n \rightarrow \infty \). If this is the case, then the limit

$$\begin{aligned} \lim _{n \rightarrow \infty } \frac{1}{\rho _n} K_n(x, x; \eta ) \end{aligned}$$
(1.12)

exists locally uniformly with respect to \((x, \eta ) \in \Lambda _- \times \mathbb {S}^1\), and defines a continuous positive function.

Example 1.3

Let \(\kappa \in (1, \tfrac{3}{2}]\) and \(f, g > -1\) be such that

$$\begin{aligned} \kappa + 2 g - 2 f < 0. \end{aligned}$$

We set

$$\begin{aligned} a_n = (n+1)^{\kappa } \bigg (1 + \frac{f}{n+1}\bigg ), \quad b_n = 2 (n+1)^\kappa \bigg (1 + \frac{g}{n+1}\bigg ). \end{aligned}$$

Since \(\kappa > 1\), the Carleman condition is not satisfied, that is

$$\begin{aligned} \sum _{n=0}^\infty \frac{1}{a_n} < \infty . \end{aligned}$$

As is easy to check, Theorems A and B apply to the above Jacobi parameters, which leads to the conclusion that the corresponding Jacobi operator A is self-adjoint and \(\sigma _{\textrm{ac}}(A) = \mathbb {R}\).
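The failure of Carleman's condition can be illustrated numerically: for \(\kappa = 3/2\) the partial sums of \(\sum 1/a_n\) stay bounded, by \(\zeta (3/2) \approx 2.612\) in the unperturbed case \(f = 0\), which is the simplification used in this rough sketch.

```python
kappa = 1.5                    # Example 1.3 with f = 0 for simplicity
partial = sum((n + 1) ** -kappa for n in range(10**5))
tail_bound = (10**5) ** (1 - kappa) / (kappa - 1)    # integral comparison for the tail
assert partial + tail_bound < 2.62    # bounded by zeta(3/2) ≈ 2.6124: the series converges
```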

Example 1.3 is inspired by examples given by Kostyuchenko and Mirzoev in [28], who provided Jacobi parameters giving rise to self-adjoint Jacobi operators violating Carleman's condition. Later the original Kostyuchenko–Mirzoev class was somewhat extended, and it was proven that one usually has \(\sigma _{\textrm{ess}}(A) = \emptyset \), see e.g. [17, Section 2.2] and [60, Section 6.2]. To the best of our knowledge, the Jacobi parameters described in Example 1.3 provide the first instances of Jacobi operators violating Carleman's condition such that \(\sigma _{\textrm{ac}}(A) = \mathbb {R}\). In contrast, a construction of self-adjoint Jacobi matrices with \(\sigma _{\textrm{ac}}(A) = [0, \infty )\) violating Carleman's condition is well known, see e.g. [9].

To prove Theorem B, we first determine the asymptotic behavior of generalized eigenvectors, and then apply a non-trivial averaging procedure. The asymptotic formula is given in the following theorem.

Theorem C

Let N be a positive integer. Let \((\gamma _n)\) be a sequence of positive numbers tending to infinity and satisfying (1.2) and (1.3). Let \((a_n)\) and \((b_n)\) be \(\gamma \)-tempered N-periodically modulated Jacobi parameters such that \(\mathfrak {X}_0(0)\) is a non-trivial parabolic element. Suppose that (1.4) holds true with \(\varepsilon = {\text {sign}}({{\text {tr}}\mathfrak {X}_0(0)})\). If \(\Lambda _- \ne \emptyset \), then for each \(i \in \{0, 1, \ldots , N-1 \}\) and every compact interval \(K \subset \Lambda _-\), there are a continuous function \(\varphi _i: \mathbb {S}^1 \times K \rightarrow \mathbb {C}\) and \(j_0 \ge 1\) such that

$$\begin{aligned}{} & {} \lim _{j \rightarrow \infty } \sup _{(\eta ,x) \in \mathbb {S}^1 \times K} \bigg | \sqrt{\frac{a_{jN+i-1}}{\sqrt{\gamma _{jN+i-1}}}} u_{jN+i}(\eta , x)\\{} & {} \quad - |\varphi _i(\eta , x)| \sin \bigg ( \sum _{k=j_0}^{j-1} \theta _{k;i}(x) + \arg \varphi _i(\eta ,x) \bigg ) \bigg | = 0 \end{aligned}$$

where \(\theta _{k;i}: K \rightarrow \mathbb {R}\) are some explicit continuous functions. Moreover, \(\varphi _i(\eta ,x) = 0\) for some (and then for all) \((\eta ,x) \in \mathbb {S}^1 \times K\) if and only if \([\mathfrak {X}_i(0)]_{21} = 0\).

The proof of Theorem C is based on a uniform diagonalization of transfer matrices, which has already been used in [61]. However, in the current setup we were not able to relate \(|\varphi _i(\eta ,x)|\) to the density of \(\mu \). Hence, in order to prove that \(\varphi _i(\eta , x) \ne 0\) provided \([\mathfrak {X}_i(0)]_{21} \ne 0\), we needed an additional argument based on a consequence of the following theorem (see Corollary 6.3 for details), which studies the convergence of generalized N-shifted Turán determinants. The latter are defined as

$$\begin{aligned} \mathscr {D}_n(\eta , x)&= \det \begin{pmatrix} u_{n+N-1}(\eta ,x) &{} u_{n-1}(\eta ,x) \\ u_{n+N}(\eta ,x) &{} u_n(\eta ,x) \end{pmatrix} \\&= u_{n}(\eta ,x) u_{n+N-1}(\eta ,x) - u_{n-1}(\eta ,x) u_{n+N}(\eta ,x) \end{aligned}$$

where \((u_n(\eta ,x): n \in \mathbb {N}_0)\) is the generalized eigenvector associated to \(x \in \mathbb {R}\), and corresponding to \(\eta \in \mathbb {R}^2 {\setminus } \{0\}\). The (classical) shifted Turán determinants correspond to \(\eta = (0,1)^t\). They were first defined in [65] for \(N=1\), and then generalized in [12] to \(N \ge 1\). In [65] they were instrumental in studying the zeros of the Legendre polynomials, where it was observed that they are non-negative on the support of their orthogonality measure, see also [25] for later developments. As was shown in [40, Theorem 7.34] and [12, Theorem 6], if \(\textrm{supp}\,(\mu )\) is compact, the asymptotic behavior of shifted Turán determinants is usually closely related to the density of \(\mu \), see [30, 31] and the survey [41]. The extension of these phenomena to measures with unbounded support was accomplished in [54, 55, 57, 61]. For these reasons the following theorem is an important result in its own right.
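As an elementary illustration of why Turán determinants carry spectral information, consider the classical bounded-support toy case (not the regime of Theorem D): for \(a_n \equiv 1\), \(b_n \equiv 0\) and \(N = 1\), the shifted Turán determinant \(p_n^2 - p_{n-1} p_{n+1}\) is constant in n.

```python
# Chebyshev check: a_n ≡ 1, b_n ≡ 0, N = 1; then D_n = p_n^2 - p_{n-1} p_{n+1} ≡ 1.
def cheb_p(x, m):
    p = [1.0, x]                       # p_0 = 1, p_1 = x (here a_0 = 1, b_0 = 0)
    for _ in range(m - 1):
        p.append(x * p[-1] - p[-2])    # three-term recurrence
    return p

x = 0.3                                # a point inside the a.c. spectrum (-2, 2)
p = cheb_p(x, 10)
for n in range(1, 9):
    assert abs(p[n] * p[n] - p[n - 1] * p[n + 1] - 1.0) < 1e-12
```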

Theorem D

Let N be a positive integer. Let \((\gamma _n)\) be a sequence of positive numbers tending to infinity and satisfying (1.2) and (1.3). Let \((a_n)\) and \((b_n)\) be \(\gamma \)-tempered N-periodically modulated Jacobi parameters such that \(\mathfrak {X}_0(0)\) is a non-trivial parabolic element. Suppose that (1.4) holds true with \(\varepsilon = {\text {sign}}({{\text {tr}}\mathfrak {X}_0(0)})\). If \(\Lambda _- \ne \emptyset \), then for each \(i \in \{0, 1, \ldots , N-1 \}\) the limit

$$\begin{aligned} \lim _{\begin{array}{c} n \rightarrow \infty \\ n \equiv i \bmod N \end{array}} a_{n+N+1} \sqrt{\gamma _{n+N-1}} \big | \mathscr {D}_n(\eta ,x) \big | \end{aligned}$$
(1.13)

exists locally uniformly with respect to \((x, \eta ) \in \Lambda _- \times \mathbb {S}^1\) and defines a continuous positive function.

Let us remark that the first-order asymptotics of generalized eigenvectors provided by Theorem C is insufficient to prove (1.13). It is an open problem whether, similarly to [57,58,59, 61], one can relate the value of (1.13) to the density of the measure \(\mu \). We hope to return to this problem in the future.

In this article, we also consider \(\ell ^1\)-type perturbations of Jacobi parameters a, b satisfying the hypotheses of Theorem A. Namely, in Sect. 10, we study Jacobi parameters \(\tilde{a}, \tilde{b}\) of the form

$$\begin{aligned} \tilde{a}_n = a_n (1 + \xi _n), \quad \tilde{b}_n = b_n (1+\zeta _n), \end{aligned}$$

where \((\sqrt{\gamma _n} \xi _n), (\sqrt{\gamma _n} \zeta _n) \in \ell ^1\). We show that for the sequences \(\tilde{a}\) and \(\tilde{b}\) the analogues of Theorems A–C hold true. In particular, we can treat the following Jacobi parameters.

Example 1.4

For \(\kappa \in (1, \infty )\) and \(f,g \in \mathbb {R}\) we set

$$\begin{aligned} \tilde{a}_n = (n+1)^{\kappa } \Big ( 1 + \frac{f}{n+1} + \xi _n \Big ), \quad \tilde{b}_n = 2 (n+1)^{\kappa } \Big ( 1 + \frac{g}{n+1} + \zeta _n \Big ), \end{aligned}$$

where \((\sqrt{n} \xi _n), (\sqrt{n} \zeta _n) \in \ell ^1\) and \(\kappa + 2\,g - 2f \ne 0\).

The Jacobi parameters considered in Example 1.4, under the additional restrictions \(\kappa \in (\tfrac{3}{2}, \infty )\) and \(\xi _n, \zeta _n = \mathcal {O}(n^{-2})\), have recently been studied in [66].

Before we close the introduction, let us mention some of the approaches used in the literature for the analysis of case II.b. In [9] it was observed that a certain class of Jacobi matrices related to birth–death processes can be studied by considering the restriction, to a subspace of \(\ell ^2\), of the squares of Jacobi matrices belonging to case I with \(b_n \equiv 0\). This method is particularly effective in describing \(\sigma _{\textrm{ac}}(A)\). Next, in [36], the asymptotics of generalized eigenvectors was studied by reduction to the analysis of a discrete variant of the Riccati equation, whereas in [44, 51] the analysis was possible by applying the Birkhoff–Adams theorem. Further, in [39], by an adaptation of Kooman's method (see [27]) and the approach of [1], it was possible to obtain the asymptotic behavior of generalized eigenvectors for \(x \in \mathbb {C}{\setminus } \{0\}\) as well as the continuity of the density of the measure \(\mu \). A very important class of methods is motivated by the technique introduced by Harris and Lutz in [15]. In these methods, for a given \(i \in \{0,1, \ldots , N-1\}\), one considers the "change of variables"

$$\begin{aligned} \vec {u}_{nN+i} = Z_n \vec {v}_n, \quad n \ge 0 \end{aligned}$$
(1.14)

for some invertible matrices \(Z_n\). Then by (1.7) and (1.5) the sequence \((\vec {v}_n)\) satisfies the equation

$$\begin{aligned} \vec {v}_{n+1} = Z_{n+1}^{-1} X_{nN+i} Z_n \vec {v}_n, \quad n \ge 0. \end{aligned}$$
(1.15)

The matrices \(Z_n\) are chosen so that Levinson's theorem can be applied to the system (1.15). Then, thanks to the relation (1.14), the asymptotics of \((\vec {v}_n)\) easily leads to the asymptotics of \((\vec {u}_{nN+i}: n \ge n_0)\). The success of this approach depends on the properties of the matrices \(Z_n\). In [20] the construction of these matrices was motivated by a formal WKB method in which, by means of an ansatz, one guesses the form of the solution. This approach was later extended in [4, 22, 33, 37]. It should be emphasized that the resulting matrices \(Z_n\) were complex-valued, oscillating and unbounded.

In this work, we start by extending techniques which were successful in the prequel [61]. Namely, we construct matrices \(Z_n\) such that the system (1.15) satisfies the hypotheses of a uniform discrete Levinson's theorem, so it belongs to the Harris–Lutz paradigm. However, our matrices are very simple and explicit (see (3.1)), real and convergent (obviously to a singular matrix). These features make our approach more widely applicable than those of the previous works. Since the Jacobi parameters considered in this paper are more "singular" than in [61], we were forced to use a more general and delicate change of variables, so that we can exploit the condition (1.4) to "smooth them out". Using our change of variables, the spectral properties of A on \(\Lambda _+\) can be derived analogously to [61]. On \(\Lambda _-\) the situation is much more involved. Namely, in [61], in order to prove that \(\mu \) is absolutely continuous on every compact \(K \subset \Lambda _-\), we used an explicit sequence of probability measures \((\mu _n: n \in \mathbb {N})\) which converges weakly to \(\mu \), and such that the sequence of their densities converges uniformly on K to a continuous positive function. In the present paper this approach no longer works. To get around this issue, we apply the subordinacy theory. This requires analyzing the asymptotic behavior of the Christoffel–Darboux kernel, which was possible thanks to the asymptotics obtained in Theorem C. All of this reduces the problem to studying averages of highly oscillatory sums. For this reason we develop Lemma 8.1, which might be of independent interest.

The method of asymptotic analysis of generalized eigenvectors is similar to that of [61]. However, in the present situation we had to find another argument showing the positivity of the function \(|\varphi _i|\). Previously, by using the convergence of the densities of the sequence \((\mu _n: n \in \mathbb {N})\), we were able to explicitly compute the value of \(|\varphi _i|\) in terms of \(\mu '\). In the present work we use certain algebraic properties of \(\varphi _i\) together with Theorem D, see Claim 7.2 for details. Let us emphasize that the method of subordinacy gives the bound (1.10) only, which is weaker than the continuity of \(\mu '\). The drawback of the current approach compared to [61] is that we do not get a constructive method to approximate the density of \(\mu \). In the forthcoming article [62], by linking the asymptotic behavior of the zeros of the polynomials \((p_n: n \in \mathbb {N}_0)\) with the value of (1.12), we manage to prove that, under certain additional hypotheses, the density of the measure \(\mu \) for Jacobi matrices satisfying Theorem B is a continuous positive function on \(\Lambda _-\).

The article is organized as follows. In Sect. 2 we fix notation and formulate basic facts. Section 3 is devoted to our change of variables. In Sect. 4 we study spectral properties of A on \(\Lambda _+\). Next, in Sect. 5, we describe the uniform diagonalization of transfer matrices on \(\Lambda _-\), which is used in the rest of the article. The proof of Theorem D is presented in Sect. 6. Next, in Sect. 7, we prove Theorem C. Section 8 is devoted to the proof of Theorem B. In Sect. 9 we study the self-adjointness of A. The extensions of Theorems A–C to \(\ell ^1\)-type perturbations are achieved in Sect. 10. Finally, in Sect. 11, we present more concrete classes of sequences to illustrate the results of this article.

1.1 Notation

By \(\mathbb {N}\) we denote the set of positive integers and \(\mathbb {N}_0 = \mathbb {N}\cup \{0\}\). Throughout the whole article, we write \(A \lesssim B\) if there is an absolute constant \(c>0\) such that \(A \le cB\). We write \(A \asymp B\) if \(A \lesssim B\) and \(B \lesssim A\). Moreover, c stands for a positive constant whose value may vary from occurrence to occurrence. For any compact set K, by \(o_K(1)\) we denote the class of functions \(f_n: K \rightarrow \mathbb {R}\) such that \(\lim _{n \rightarrow \infty } f_n = 0\) uniformly on K.

2 Preliminaries

In this section we fix the notation which is used in the rest of the article.

2.1 Stolz Class

In this section we define a proper class of slowly oscillating sequences, motivated by [52], see also [57, Section 2]. Let V be a normed space. We say that a sequence \((x_n: n \in \mathbb {N})\) of vectors from V belongs to \(\mathcal {D}_r(V)\) for some \(r \in \mathbb {N}\) if it is bounded and for each \(j \in \{1,\ldots ,r\}\),

$$\begin{aligned} \sum _{n=1}^\infty \Vert \Delta ^j x_n \Vert ^{\tfrac{r}{j}} < \infty \end{aligned}$$

where

$$\begin{aligned} \Delta ^0 x_n&= x_n, \\ \Delta ^j x_n&= \Delta ^{j-1} x_{n+1} - \Delta ^{j-1} x_n, \quad j \ge 1. \end{aligned}$$

If V is the real line with the Euclidean norm, we abbreviate \(\mathcal {D}_{r} = \mathcal {D}_{r}(V)\). Given a compact set \(K \subset \mathbb {C}\) and a normed vector space R, we write \(\mathcal {D}_{r}(K, R)\) for \(\mathcal {D}_r(V)\) where V is the space of all continuous mappings from K to R equipped with the supremum norm. Let us recall that \(\mathcal {D}_r(V)\) is an algebra provided V is a normed algebra. Let N be a positive integer. We say that a sequence \((x_n: n \in \mathbb {N})\) belongs to \(\mathcal {D}_r^N (V)\), if for any \(i \in \{0, 1, \ldots , N-1 \}\),

$$\begin{aligned} (x_{nN+i}: n \in \mathbb {N}) \in \mathcal {D}_r(V). \end{aligned}$$

Again, \(\mathcal {D}_r^N(V)\) is an algebra provided V is a normed algebra. In what follows we shall use \(\mathcal {D}_1^N(V)\) only.
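Although the argument does not rely on numerics, the defining condition can be illustrated by a short computation. The following Python sketch (with sequences chosen only for illustration) evaluates the iterated forward differences and the sums \(\sum _n |\Delta ^j x_n|^{r/j}\) on a finite truncation; the sequence \(x_n = 1/n\) belongs to \(\mathcal {D}_1\), while the oscillating sequence \(x_n = (-1)^n/n\) does not.

```python
# Sketch (sequences chosen only for illustration): iterated forward
# differences and the Stolz-class sums sum_n |Delta^j x_n|^(r/j)
# evaluated on a finite truncation.

def diff(x):
    # (Delta x)_n = x_{n+1} - x_n
    return [x[n + 1] - x[n] for n in range(len(x) - 1)]

def stolz_sums(x, r):
    # partial sums sum_n |Delta^j x_n|^(r/j) for j = 1, ..., r
    sums, d = [], x
    for j in range(1, r + 1):
        d = diff(d)
        sums.append(sum(abs(v) ** (r / j) for v in d))
    return sums

decaying = [1.0 / n for n in range(1, 20001)]           # x_n = 1/n
oscillating = [(-1) ** n / n for n in range(1, 20001)]  # x_n = (-1)^n / n

# For x_n = 1/n the j = 1 sum telescopes to 1 - 1/20000, consistent with
# membership in D_1; for the oscillating sequence |Delta x_n| ~ 2/n, so the
# sum grows logarithmically with the truncation and the sequence is not in D_1.
print(stolz_sums(decaying, 1)[0])
print(stolz_sums(oscillating, 1)[0])
```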

2.2 Finite Matrices

By \({\text {Mat}}(2, \mathbb {C})\) and \({\text {Mat}}(2, \mathbb {R})\) we denote the space of \(2 \times 2\) matrices with complex and real entries, respectively, equipped with the spectral norm. Next, \({\text {GL}}(2, \mathbb {R})\) and \({\text {SL}}(2, \mathbb {R})\) consist of all matrices from \({\text {Mat}}(2, \mathbb {R})\) which are invertible and of determinant equal to 1, respectively. A matrix \(X \in {\text {SL}}(2, \mathbb {R})\) is a non-trivial parabolic element if it is not a multiple of the identity and \(|{\text {tr}}X| = 2\).

Let \(X \in {\text {Mat}}(2, \mathbb {C})\). By \(X^t\) we denote the transpose of the matrix X. Let us recall that symmetrization and the discriminant are defined as

$$\begin{aligned} {\text {sym}}(X) = \frac{1}{2} X + \frac{1}{2} X^*, \quad \text {and}\quad {\text {discr}}(X) = ({\text {tr}}X)^2 - 4 \det X, \end{aligned}$$

respectively. Here \(X^*\) denotes the Hermitian transpose of the matrix X.

By \(\{ e_1, e_2 \}\) we denote the standard orthonormal basis of \(\mathbb {C}^2\), i.e.

$$\begin{aligned} e_1 = \begin{pmatrix} 1 \\ 0 \end{pmatrix} \quad \text {and} \quad e_2 = \begin{pmatrix} 0 \\ 1 \end{pmatrix}. \end{aligned}$$

Lastly, for a sequence of square matrices \((C_n: n_0 \le n \le n_1)\) we set

$$\begin{aligned} \prod _{k = n_0}^{n_1} C_k = C_{n_1} C_{n_1-1} \cdots C_{n_0}. \end{aligned}$$

2.3 Generalized Eigenvectors

A sequence \((u_n: n \in \mathbb {N}_0)\) is a generalized eigenvector associated to \(x \in \mathbb {C}\) and corresponding to \(\eta \in \mathbb {R}^2 {\setminus } \{0\}\), if the sequence of vectors

$$\begin{aligned} \vec {u}_0&= \eta , \\ \vec {u}_n&= \begin{pmatrix} u_{n-1} \\ u_n \end{pmatrix}, \quad n \ge 1, \end{aligned}$$

satisfies

$$\begin{aligned} \vec {u}_{n+1} = B_n(x) \vec {u}_n, \quad n \ge 0, \end{aligned}$$
(2.1)

where \(B_n\) is the transfer matrix defined as

$$\begin{aligned} \begin{aligned} B_0(x)&= \begin{pmatrix} 0 &{}\quad 1 \\ -\frac{1}{a_0} &{}\quad \frac{x-b_0}{a_0} \end{pmatrix} \\ B_n(x)&= \begin{pmatrix} 0 &{}\quad 1 \\ -\frac{a_{n-1}}{a_n} &{}\quad \frac{x - b_n}{a_n} \end{pmatrix} , \quad n \ge 1. \end{aligned} \end{aligned}$$
(2.2)

To indicate the dependence on the parameters, we write \((u_n(\eta , x): n \in \mathbb {N}_0)\). In particular, the sequence of orthogonal polynomials \((p_n(x): n \in \mathbb {N}_0)\) is the generalized eigenvector associated to \(\eta = e_2\) and \(x \in \mathbb {C}\).
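The recursion (2.1)–(2.2) can be checked against the three-term recurrence from the Introduction by a short computation. In the sketch below the Jacobi parameters \(a_n = n+1\), \(b_n = 0\) are chosen only for illustration; both routines produce the same values \(p_0(x), \ldots , p_{10}(x)\).

```python
# Sketch: the transfer-matrix recursion (2.1)-(2.2) started from eta = e_2
# reproduces the orthonormal polynomials p_n of the three-term recurrence.
# The Jacobi parameters a_n = n + 1, b_n = 0 are chosen only for illustration.

def p_by_recurrence(a, b, x, n_max):
    # p_0 = 1, p_1 = (x - b_0)/a_0, then the three-term recurrence
    p = [1.0, (x - b[0]) / a[0]]
    for n in range(1, n_max):
        p.append(((x - b[n]) * p[n] - a[n - 1] * p[n - 1]) / a[n])
    return p

def p_by_transfer(a, b, x, n_max):
    v = (0.0, 1.0)                                       # vec u_0 = eta = e_2
    v = (v[1], -v[0] / a[0] + (x - b[0]) / a[0] * v[1])  # apply B_0
    p = [v[0], v[1]]                                     # now v = (p_0, p_1)
    for n in range(1, n_max):
        # apply B_n: (u_{n-1}, u_n) -> (u_n, u_{n+1})
        v = (v[1], -a[n - 1] / a[n] * v[0] + (x - b[n]) / a[n] * v[1])
        p.append(v[1])
    return p

a = [float(n + 1) for n in range(12)]
b = [0.0] * 12
p1 = p_by_recurrence(a, b, 0.7, 10)
p2 = p_by_transfer(a, b, 0.7, 10)
print(max(abs(u - v) for u, v in zip(p1, p2)))  # agreement up to rounding
```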

2.4 Periodic Jacobi Parameters

By \((\alpha _n: n \in \mathbb {Z})\) and \((\beta _n: n \in \mathbb {Z})\) we denote N-periodic sequences of real and positive numbers, respectively. For each \(k \ge 0\), let us define polynomials \((\mathfrak {p}^{[k]}_n: n \in \mathbb {N}_0)\) by relations

$$\begin{aligned} \begin{aligned}&\mathfrak {p}_0^{[k]}(x) = 1, \qquad \mathfrak {p}_1^{[k]}(x) = \frac{x-\beta _k}{\alpha _k}, \\&\quad \alpha _{n+k-1} \mathfrak {p}^{[k]}_{n-1}(x) + \beta _{n+k} \mathfrak {p}^{[k]}_n(x) + \alpha _{n+k} \mathfrak {p}^{[k]}_{n+1}(x) = x \mathfrak {p}^{[k]}_n(x), \quad n \ge 1. \end{aligned} \end{aligned}$$

Let

$$\begin{aligned} \mathfrak {B}_n(x) = \begin{pmatrix} 0 &{}\quad 1 \\ -\frac{\alpha _{n-1}}{\alpha _n} &{}\quad \frac{x - \beta _n}{\alpha _n} \end{pmatrix}, \quad \text {and}\quad \mathfrak {X}_n(x) = \prod _{j = n}^{N+n-1} \mathfrak {B}_j(x), \quad n \in \mathbb {Z}. \end{aligned}$$

By \(\mathfrak {A}\) we denote the Jacobi matrix corresponding to

$$\begin{aligned} \begin{pmatrix} \beta _0 &{}\quad \alpha _0 &{}\quad 0 &{}\quad 0 &{}\quad \ldots \\ \alpha _0 &{}\quad \beta _1 &{}\quad \alpha _1 &{}\quad 0 &{}\quad \ldots \\ 0 &{}\quad \alpha _1 &{}\quad \beta _2 &{}\quad \alpha _2 &{}\quad \ldots \\ 0 &{}\quad 0 &{}\quad \alpha _2 &{}\quad \beta _3 &{}\quad \\ \vdots &{}\quad \vdots &{}\quad \vdots &{}\quad &{}\quad \ddots \end{pmatrix}. \end{aligned}$$

2.5 Tempered Periodic Modulations

Let N be a positive integer. We say that Jacobi parameters \((a_n: n \in \mathbb {N}_0)\) and \((b_n: n \in \mathbb {N}_0)\) are N-periodically modulated if there are two N-periodic sequences \((\alpha _n: n \in \mathbb {Z})\) and \((\beta _n: n \in \mathbb {Z})\) of positive and real numbers, respectively, such that

  (a) \(\lim _{n \rightarrow \infty } a_n = \infty \),

  (b) \(\lim _{n \rightarrow \infty } \bigg | \frac{\alpha _{n-1}}{\alpha _n} - \frac{a_{n-1}}{a_n} \bigg | = 0 \),

  (c) \(\lim _{n \rightarrow \infty } \bigg | \frac{\beta _n}{\alpha _n} - \frac{b_n}{a_n} \bigg | = 0 \).

In this article we are mostly interested in tempered N-periodically modulated Jacobi parameters, i.e. we assume that there is a sequence of positive numbers \((\gamma _n: n \in \mathbb {N}_0)\) tending to infinity and satisfying

$$\begin{aligned} \bigg ( \sqrt{\gamma _n} \Big ( \sqrt{\frac{\alpha _{n-1}}{\alpha _n}} - \sqrt{\frac{\gamma _{n-1}}{\gamma _n}} \Big ) : n \in \mathbb {N}\bigg ), \bigg (\frac{1}{\sqrt{\gamma _n}} : n \in \mathbb {N}\bigg ) \in \mathcal {D}_1^N, \end{aligned}$$
(2.3)

and

$$\begin{aligned} \lim _{n \rightarrow \infty } \big ( \sqrt{\gamma _{n+N}} - \sqrt{\gamma _n} \big ) = 0, \end{aligned}$$
(2.4)

such that

$$\begin{aligned} \begin{aligned}&\bigg (\sqrt{\gamma _n} \Big (\frac{\alpha _{n-1}}{\alpha _n} - \frac{a_{n-1}}{a_n} \Big ) : n \in \mathbb {N}\bigg ), \bigg (\sqrt{\gamma _n} \Big (\frac{\beta _n}{\alpha _n} - \frac{b_n}{a_n} \Big ) : n \in \mathbb {N}\bigg ), \\&\quad \bigg (\frac{\gamma _n}{a_n} : n \in \mathbb {N}\bigg ) \in \mathcal {D}_1^N. \end{aligned} \end{aligned}$$
(2.5)

In view of (2.5), there are two N-periodic sequences \((\mathfrak {s}_n: n \in \mathbb {Z})\) and \((\mathfrak {r}_n: n \in \mathbb {Z})\) such that

$$\begin{aligned} \lim _{n \rightarrow \infty } \bigg | \sqrt{\alpha _n \gamma _n} \Big (\frac{\alpha _{n-1}}{\alpha _n} - \frac{a_{n-1}}{a_n} \Big ) - \mathfrak {s}_n \bigg | = \lim _{n \rightarrow \infty } \bigg | \sqrt{\alpha _n \gamma _n} \Big (\frac{\beta _n}{\alpha _n} - \frac{b_n}{a_n} \Big ) - \mathfrak {r}_n \bigg | = 0. \end{aligned}$$
(2.6)

From (2.3) it follows that

$$\begin{aligned} \lim _{n \rightarrow \infty } \bigg |\frac{\alpha _{n-1}}{\alpha _n} - \frac{\gamma _{n-1}}{\gamma _n} \bigg | = 0. \end{aligned}$$
(2.7)

Hence, there is \(\mathfrak {t}\ge 0\) such that

$$\begin{aligned} \lim _{n \rightarrow \infty } \frac{\gamma _n}{a_n} = \mathfrak {t}. \end{aligned}$$
(2.8)

Let us observe that if \(\mathfrak {t}> 0\), then without loss of generality we can assume that \(\mathfrak {t}= 1\) and \(\gamma _n \equiv a_n\). Therefore, in what follows we shall assume that \(\mathfrak {t}\in \{0, 1\}\).

Let us define the N-step transfer matrix by

$$\begin{aligned} X_n = B_{n+N-1} B_{n+N-2} \cdots B_{n+1} B_n. \end{aligned}$$

Observe that for each \(i \in \{0, 1, \ldots , N-1\}\),

$$\begin{aligned} \lim _{j \rightarrow \infty } B_{jN+i}(x) = \mathfrak {B}_i(0) \end{aligned}$$

and

$$\begin{aligned} \lim _{j \rightarrow \infty } X_{jN+i}(x) = \mathfrak {X}_i(0) \end{aligned}$$

locally uniformly with respect to \(x \in \mathbb {C}\). In the whole article we assume that the matrix \(\mathfrak {X}_0(0)\) is a non-trivial parabolic element of \({\text {SL}}(2, \mathbb {R})\). Let \(T_0\) be a matrix so that

$$\begin{aligned} \mathfrak {X}_0(0) = \varepsilon T_0 \begin{pmatrix} 0 &{}\quad 1 \\ -\,1 &{}\quad 2 \end{pmatrix} T_0^{-1} \end{aligned}$$
(2.9)

where

$$\begin{aligned} \varepsilon = {\text {sign}}({{\text {tr}}\mathfrak {X}_0(0)}). \end{aligned}$$
(2.10)
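The normal form (2.9) can be illustrated numerically. In the sketch below the matrix T0 stands for an arbitrary invertible matrix (chosen only for this test): conjugation preserves trace and discriminant, so any matrix of the form (2.9) is again a non-trivial parabolic element.

```python
# Sketch: the model matrix J = [[0, 1], [-1, 2]] in (2.9) is a non-trivial
# parabolic element, and conjugation by an invertible T preserves trace and
# discriminant.  T0 below is an arbitrary invertible matrix, chosen only
# for this test.

def mul(A, B):
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

def det(A):
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

def tr(A):
    return A[0][0] + A[1][1]

def inv(A):
    d = det(A)
    return [[A[1][1] / d, -A[0][1] / d], [-A[1][0] / d, A[0][0] / d]]

def discr(A):
    return tr(A) ** 2 - 4 * det(A)

J = [[0.0, 1.0], [-1.0, 2.0]]
print(tr(J), det(J), discr(J))  # 2.0 1.0 0.0, and J != Id: non-trivial parabolic

T0 = [[2.0, 1.0], [0.5, 1.0]]
X = mul(mul(T0, J), inv(T0))    # the case epsilon = +1 in (2.9)
print(tr(X), discr(X))          # trace ~ 2 and discriminant ~ 0 (up to rounding)
```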

Since

$$\begin{aligned} \mathfrak {X}_i(0) = \mathfrak {B}_{i-1}(0) \cdots \mathfrak {B}_0(0) \mathfrak {X}_0(0) \mathfrak {B}_0^{-1} (0) \cdots \mathfrak {B}_{i-1}^{-1}(0), \end{aligned}$$
(2.11)

by taking

$$\begin{aligned} T_i = \mathfrak {B}_{i-1}(0) \cdots \mathfrak {B}_0(0) T_0, \end{aligned}$$
(2.12)

we obtain

$$\begin{aligned} \mathfrak {X}_i(0) = \varepsilon T_i \begin{pmatrix} 0 &{}\quad 1 \\ -\,1 &{}\quad 2 \end{pmatrix} T_i^{-1}. \end{aligned}$$

Hence,

$$\begin{aligned} \big [\mathfrak {X}_i(0)\big ]_{11}&= \frac{\varepsilon }{\det T_i} \big ( \det T_i - ([T_i]_{11} + [T_i]_{12})([T_i]_{21} + [T_i]_{22}) \big ), \\ \big [\mathfrak {X}_i(0)\big ]_{12}&= \frac{\varepsilon }{\det T_i} \big ( [T_i]_{11}+[T_i]_{12} \big )^2, \\ \big [\mathfrak {X}_i(0)\big ]_{21}&= -\frac{\varepsilon }{\det T_i} \big ( [T_i]_{21} + [T_i]_{22} \big )^2, \\ \big [\mathfrak {X}_i(0)\big ]_{22}&= \frac{\varepsilon }{\det T_i} \big ( \det T_i + ([T_i]_{11} + [T_i]_{12})([T_i]_{21} + [T_i]_{22}) \big ). \end{aligned}$$

In particular,

$$\begin{aligned} \frac{([T_i]_{11} + [T_i]_{12})([T_i]_{21} + [T_i]_{22})}{\det T_i} = 1 - \varepsilon [\mathfrak {X}_i(0)]_{11} \end{aligned}$$
(2.13)

and

$$\begin{aligned} \frac{([T_i]_{21} + [T_i]_{22})^2}{\det T_i} = -\varepsilon [\mathfrak {X}_i(0)]_{21}. \end{aligned}$$
(2.14)
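The identities (2.13) and (2.14) can be verified numerically as well. In the sketch below, T is an arbitrary invertible matrix and \(\varepsilon = -1\), both chosen only for this test.

```python
# Sketch: numerical check of (2.13) and (2.14), where X = epsilon T J T^{-1}
# as in (2.9) with J = [[0, 1], [-1, 2]].  The matrix T and epsilon = -1
# are chosen only for this test.

def mul(A, B):
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

def det(A):
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

def inv(A):
    d = det(A)
    return [[A[1][1] / d, -A[0][1] / d], [-A[1][0] / d, A[0][0] / d]]

J = [[0.0, 1.0], [-1.0, 2.0]]
eps = -1.0
T = [[1.3, -0.7], [0.4, 2.1]]
X = [[eps * v for v in row] for row in mul(mul(T, J), inv(T))]

lhs13 = (T[0][0] + T[0][1]) * (T[1][0] + T[1][1]) / det(T)
lhs14 = (T[1][0] + T[1][1]) ** 2 / det(T)
print(abs(lhs13 - (1 - eps * X[0][0])))  # ~ 0: identity (2.13)
print(abs(lhs14 - (-eps * X[1][0])))     # ~ 0: identity (2.14)
```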

We often assume that

$$\begin{aligned} \begin{aligned}&\bigg (\gamma _n \big (1 - \varepsilon \big [\mathfrak {X}_n(0)\big ]_{11}\big ) \Big (\frac{\alpha _{n-1}}{\alpha _n} - \frac{a_{n-1}}{a_n} \Big ) \\&\quad - \gamma _n \varepsilon \big [\mathfrak {X}_n(0) \big ]_{21} \Big (\frac{\beta _n}{\alpha _n} - \frac{b_n}{a_n}\Big ) : n \in \mathbb {N}\bigg ) \in \mathcal {D}_1^N. \end{aligned} \end{aligned}$$
(2.15)

Therefore, there is an N-periodic sequence \((\mathfrak {u}_n: n \in \mathbb {N}_0)\) such that

$$\begin{aligned} \begin{aligned}&\lim _{n \rightarrow \infty } \bigg |\gamma _n \big (1 - \varepsilon \big [\mathfrak {X}_n(0)\big ]_{11}\big ) \Big (\frac{\alpha _{n-1}}{\alpha _n} - \frac{a_{n-1}}{a_n} \Big ) \\&\quad - \gamma _n \varepsilon \big [\mathfrak {X}_n(0) \big ]_{21} \Big (\frac{\beta _n}{\alpha _n} - \frac{b_n}{a_n} \Big ) - \mathfrak {u}_n \bigg | =0. \end{aligned} \end{aligned}$$
(2.16)

Let us define

$$\begin{aligned} \tau (x) = \frac{1}{4} \mathfrak {S}^2 - \upsilon (x) \end{aligned}$$
(2.17)

where

$$\begin{aligned} \upsilon (x) = \mathfrak {t}\varepsilon x \sum _{i' = 0}^{N-1} \frac{[\mathfrak {X}_{i'}(0)]_{21}}{\alpha _{i'-1}} -\mathfrak {U}, \end{aligned}$$
(2.18)

and

$$\begin{aligned} \mathfrak {U}= \sum _{i' = 0}^{N-1} \frac{\mathfrak {u}_{i'}}{\alpha _{i'-1}}, \quad \text {and}\quad \mathfrak {S}= \sum _{i' = 0}^{N-1} \frac{\mathfrak {s}_{i'}}{\alpha _{i'-1}}. \end{aligned}$$
(2.19)

In view of [61, Proposition 2.1],

$$\begin{aligned} \sum _{i' = 0}^{N-1} \frac{[\mathfrak {X}_{i'}(0)]_{21}}{\alpha _{i'-1}} = -{\text {tr}}\mathfrak {X}_0'(0), \end{aligned}$$
(2.20)

thus

$$\begin{aligned} \tau (x) = \frac{1}{4} \mathfrak {S}^2 + \mathfrak {U}+ \mathfrak {t}\varepsilon ({\text {tr}}\mathfrak {X}_0'(0)) x. \end{aligned}$$
(2.21)

The following proposition answers the question of when \(\mathfrak {X}_0(0)\) is a non-trivial parabolic element of \({\text {SL}}(2, \mathbb {R})\).

Proposition 2.1

Suppose that \(|{\text {tr}}\mathfrak {X}_0(0)| = 2\). Then \(\mathfrak {X}_0(0)\) is a non-trivial parabolic element of \({\text {SL}}(2, \mathbb {R})\) if and only if \({\text {tr}}\mathfrak {X}_0'(0) \ne 0\).

Proof

The matrix \(\mathfrak {X}_0(0)\) is a trivial parabolic element if and only if \(\mathfrak {X}_{0}(0) = \varepsilon {\text {Id}}\) where \(\varepsilon \) is defined in (2.10). Then by (2.11) we get \(\mathfrak {X}_i(0) \equiv \varepsilon {\text {Id}}\) for all \(i \in \{0, 1, \ldots , N-1\}\). Consequently, by (2.20), \({\text {tr}}\mathfrak {X}_0'(0) = 0\). On the other hand, if \(\mathfrak {X}_0(0) \ne \varepsilon {\text {Id}}\), then thanks to [55, Proposition 3] at least one of the numbers \([\mathfrak {X}_0(0)]_{21}\) and \([\mathfrak {X}_1(0)]_{21}\) is non-zero. In view of [61, Proposition 2.2] we have

$$\begin{aligned} \sum _{i' = 0}^{N-1} \frac{|[\mathfrak {X}_{i'}(0)]_{21}|}{\alpha _{i'-1}} = |{\text {tr}}\mathfrak {X}_0'(0)|, \end{aligned}$$

thus \({\text {tr}}\mathfrak {X}_0'(0) \ne 0\). \(\square \)

If \(\mathfrak {t}\ne 0\), then thanks to Proposition 2.1 we have \({\text {tr}}\mathfrak {X}_0'(0) \ne 0\), so we can set

$$\begin{aligned} x_0 = -\frac{\mathfrak {U}+ \tfrac{1}{4} \mathfrak {S}^2}{\varepsilon {\text {tr}}\mathfrak {X}_0'(0)}, \end{aligned}$$
(2.22)

and \(\Lambda = \mathbb {R}{\setminus } \{x_0\}\). Otherwise, we shall assume that \(\tfrac{1}{4} \mathfrak {S}^2 + \mathfrak {U}\ne 0\), and we set \(\Lambda = \mathbb {R}\).

Proposition 2.2

If (2.7) is satisfied, then

$$\begin{aligned} \mathfrak {S}= \lim _{n \rightarrow \infty } \sqrt{\frac{\gamma _n}{\alpha _n}} \Big ( 1 - \frac{a_n}{a_{n+N}} \Big ). \end{aligned}$$
(2.23)

In particular, \(\mathfrak {S}\ge 0\).

Proof

Let us first observe that

$$\begin{aligned} 1 - \frac{a_{n}}{a_{n+N}}&= \frac{\alpha _{n}}{\alpha _{n+N}} - \frac{a_{n}}{a_{n+N}} \\&= \sum _{j=0}^{N-1} \Big ( \prod _{\ell =j+1}^{N-1} \frac{\alpha _{n+\ell }}{\alpha _{n+\ell +1}} \Big ) \Big ( \frac{\alpha _{n+j}}{\alpha _{n+j+1}} - \frac{a_{n+j}}{a_{n+j+1}} \Big ) \Big ( \prod _{\ell =0}^{j-1} \frac{a_{n+\ell }}{a_{n+\ell +1}} \Big ) \\&= \sum _{j=0}^{N-1} \frac{\alpha _{n+j+1}}{\alpha _{n+N}} \frac{a_n}{a_{n+j}} \Big ( \frac{\alpha _{n+j}}{\alpha _{n+j+1}} - \frac{a_{n+j}}{a_{n+j+1}} \Big ). \end{aligned}$$

Thus

$$\begin{aligned} \sqrt{\frac{\gamma _n}{\alpha _n}} \Big ( 1 - \frac{a_{n}}{a_{n+N}} \Big ) = \sum _{j=0}^{N-1} \frac{\alpha _{n+j+1}}{\alpha _{n+N}} \frac{a_n}{a_{n+j}} \frac{1}{\sqrt{\alpha _n}} \sqrt{\frac{\gamma _n}{\alpha _{n+j+1} \gamma _{n+j+1}}} \sqrt{\alpha _{n+j+1} \gamma _{n+j+1}} \Big ( \frac{\alpha _{n+j}}{\alpha _{n+j+1}} - \frac{a_{n+j}}{a_{n+j+1}} \Big ). \end{aligned}$$

Hence, by the hypothesis (b), (2.7) and (2.6),

$$\begin{aligned} \lim _{n \rightarrow \infty } \sqrt{\frac{\gamma _n}{\alpha _n}} \Big ( 1 - \frac{a_{n}}{a_{n+N}} \Big )&= \lim _{n \rightarrow \infty } \sum _{j=0}^{N-1} \frac{\alpha _{n+j+1}}{\alpha _{n+N}} \frac{\alpha _n}{\alpha _{n+j}} \frac{1}{\sqrt{\alpha _n}} \sqrt{\frac{\alpha _n}{\alpha _{n+j+1} \alpha _{n+j+1}}}\mathfrak {s}_{n+j+1} \\&= \lim _{n \rightarrow \infty } \sum _{j=0}^{N-1} \frac{\mathfrak {s}_{n+j+1}}{\alpha _{n+j}} = \mathfrak {S}\end{aligned}$$

and (2.23) follows. To see the last statement, we assume, contrary to our claim, that \(\mathfrak {S}< 0\). Then there is \(n_0 > 0\) such that for \(n \ge n_0\),

$$\begin{aligned} a_n - a_{n+N} > 0. \end{aligned}$$

Therefore, for each \(n \ge n_0\),

$$\begin{aligned} a_n \le \max _{0 \le i \le N-1} a_{n_0+i} < \infty , \end{aligned}$$

which contradicts the hypothesis (a). \(\square \)
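The telescoping identity used at the beginning of the proof can also be checked numerically. In the sketch below, the period N = 3, the period of \(\alpha \) and the sequence \(a_n = (n+1)^{3/4}\) are chosen only for this test.

```python
# Sketch: the telescoping identity from the proof of Proposition 2.2,
# checked numerically.  N = 3, the period of alpha and the sequence
# a_n = (n + 1)^{3/4} are chosen only for this test.

N = 3
alpha_period = [2.0, 1.0, 3.0]
a = [(n + 1) ** 0.75 for n in range(50)]

def al(n):
    # alpha extended N-periodically
    return alpha_period[n % N]

n = 7
lhs = 1 - a[n] / a[n + N]
rhs = sum(
    (al(n + j + 1) / al(n + N)) * (a[n] / a[n + j])
    * (al(n + j) / al(n + j + 1) - a[n + j] / a[n + j + 1])
    for j in range(N)
)
print(abs(lhs - rhs))  # ~ 0: the two sides agree up to rounding
```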

3 The Shifted Conjugation

In this section we freely use the notation introduced in Sect. 2. Fix \(i \in \{0, 1, \ldots , N-1\}\) and set

$$\begin{aligned} Z_j = T_i \begin{pmatrix} 1 &{}\quad 1 \\ e^{\vartheta _j} &{}\quad e^{-\vartheta _j} \end{pmatrix} \end{aligned}$$
(3.1)

where \(T_i\) has been defined in (2.9) and (2.12), and

$$\begin{aligned} \vartheta _j(x) = \sqrt{\frac{\alpha _{i-1} |{\tau (x)} |}{\gamma _{(j+1)N+i-1}}}. \end{aligned}$$
(3.2)

The proof of the following theorem is a generalization of [61, Theorem 3.2].

Theorem 3.1

Let N be a positive integer and \(i \in \{0, 1, \ldots , N-1\}\). Let \((\gamma _n: n \in \mathbb {N})\) be a sequence of positive numbers tending to infinity and satisfying (1.2) and (1.3). Let \((a_n: n \in \mathbb {N}_0)\) and \((b_n: n \in \mathbb {N}_0)\) be \(\gamma \)-tempered N-periodically modulated Jacobi parameters such that \(\mathfrak {X}_0(0)\) is a non-trivial parabolic element. Suppose that (1.4) holds true with \(\varepsilon = {\text {sign}}({{\text {tr}}\mathfrak {X}_0(0)})\). Then for any compact interval \(K \subset \Lambda \),

$$\begin{aligned} Z_j^{-1} Z_{j+1} = {\text {Id}}+ \sqrt{\frac{\alpha _{i-1}}{\gamma _{(j+1)N+i-1}}} Q_j \end{aligned}$$
(3.3)

where \((Q_j)\) is a sequence from \(\mathcal {D}_1\big (K, {\text {Mat}}(2, \mathbb {R})\big )\) convergent uniformly on K to the zero matrix.

Proof

In the proof we denote by \((\delta _j)\) a generic sequence from \(\mathcal {D}_1\) tending to zero which may change from line to line. By a straightforward computation we obtain

$$\begin{aligned} Z_j^{-1} Z_{j+1}&= \frac{1}{\det Z_j} \begin{pmatrix} \text {e}^{-\vartheta _j} &{}\quad -1\\ -\text {e}^{\vartheta _j} &{}\quad 1 \end{pmatrix} \begin{pmatrix} 1 &{}\quad 1\\ \text {e}^{\vartheta _{j+1}} &{}\quad \text {e}^{-\vartheta _{j+1}} \end{pmatrix} \\&= \frac{1}{\text {e}^{-\vartheta _{j}} - \text {e}^{\vartheta _{j}}} \begin{pmatrix} f_j &{}\quad g_j \\ \tilde{g}_j &{}\quad \tilde{f}_j \end{pmatrix} \end{aligned}$$

where

$$\begin{aligned} f_j&= \text {e}^{-\vartheta _j} - \text {e}^{\vartheta _{j+1}}, \quad g_j = \text {e}^{-\vartheta _j} - \text {e}^{-\vartheta _{j+1}}, \\ \tilde{g}_j&= - \text {e}^{\vartheta _j} + \text {e}^{\vartheta _{j+1}}, \quad \tilde{f}_j = - \text {e}^{\vartheta _j} + \text {e}^{-\vartheta _{j+1}}. \end{aligned}$$

Observe that

$$\begin{aligned} \gamma _{n} \bigg ( \frac{1}{\sqrt{\gamma _{n}}} - \frac{1}{\sqrt{\gamma _{n+N}}} \bigg ) = \big (\sqrt{\gamma _{n+N}} - \sqrt{\gamma _n}\big ) \sqrt{\frac{\gamma _n}{\gamma _{n+N}}}. \end{aligned}$$
(3.4)

Notice that

$$\begin{aligned} \sqrt{\frac{\gamma _{n+N}}{\alpha _{n+N}}} - \sqrt{\frac{\gamma _{n}}{\alpha _{n}}} = \sum _{j=0}^{N-1} \bigg ( \sqrt{\frac{\gamma _{n+j+1}}{\alpha _{n+j+1}}} - \sqrt{\frac{\gamma _{n+j}}{\alpha _{n+j}}} \bigg ). \end{aligned}$$

Hence, by N-periodicity of \((\alpha _n)\),

$$\begin{aligned} \sqrt{\gamma _{n+N}} - \sqrt{\gamma _{n}} = \sum _{j=0}^{N-1} \sqrt{\frac{\alpha _n}{\alpha _{n+j}}} \sqrt{\gamma _{n+j+1}} \bigg ( \sqrt{\frac{\alpha _{n+j}}{\alpha _{n+j+1}}} - \sqrt{\frac{\gamma _{n+j}}{\gamma _{n+j+1}}} \bigg ). \end{aligned}$$

Now, by (2.3) we obtain

$$\begin{aligned} \big ( \sqrt{\gamma _{n+N}} - \sqrt{\gamma _{n}} : n \in \mathbb {N}\big ) \in \mathcal {D}_1^N. \end{aligned}$$
(3.5)

Similarly, N-periodicity of \((\alpha _n)\) and (2.3) leads to

$$\begin{aligned} \bigg ( \sqrt{\frac{\gamma _{n-1}}{\gamma _n}}: n \in \mathbb {N}\bigg ) \in \mathcal {D}_1^N. \end{aligned}$$

For fixed \(j \in \mathbb {N}\), we have

$$\begin{aligned} \sqrt{\frac{\gamma _{n}}{\gamma _{n+j}}} = \sqrt{\frac{\gamma _{n}}{\gamma _{n+1}}} \sqrt{\frac{\gamma _{n+1}}{\gamma _{n+2}}} \ldots \sqrt{\frac{\gamma _{n+j-1}}{\gamma _{n+j}}}, \end{aligned}$$

thus

$$\begin{aligned} \bigg ( \sqrt{\frac{\gamma _{n}}{\gamma _{n+j}}} : n \in \mathbb {N}\bigg ) \in \mathcal {D}_1^N. \end{aligned}$$
(3.6)

Hence, by (3.4)–(3.6)

$$\begin{aligned} \frac{1}{\sqrt{\gamma _{(j+2)N+i-1}}} = \frac{1}{\sqrt{\gamma _{(j+1)N+i-1}}} + \frac{1}{\gamma _{jN}} \delta _j, \end{aligned}$$

and so

$$\begin{aligned} \vartheta _{j+1} = \vartheta _j + \frac{1}{\gamma _{jN}} \delta _j. \end{aligned}$$
(3.7)

Next, we compute

$$\begin{aligned} \text {e}^{\vartheta _{j+1}} = 1 + \vartheta _{j+1} + \frac{1}{2} \vartheta _{j+1}^2 + \frac{1}{\gamma _{jN}} \delta _j \end{aligned}$$

and

$$\begin{aligned} \text {e}^{-\vartheta _j} = 1 - \vartheta _j + \frac{1}{2} \vartheta _j^2 + \frac{1}{\gamma _{jN}} \delta _j. \end{aligned}$$

Hence,

$$\begin{aligned} f_j&=1 - \vartheta _j + \frac{1}{2} \vartheta _j^2 -\bigg (1 + \vartheta _{j+1} + \frac{1}{2} \vartheta _{j+1}^2\bigg ) + \frac{1}{\gamma _{jN}} \delta _j\\&= -2 \vartheta _j + \frac{1}{\gamma _{jN}} \delta _j. \end{aligned}$$

Since \(\frac{x}{\sinh (x)}\) is an even \(\mathcal {C}^2(\mathbb {R})\) function, we have

$$\begin{aligned} \frac{\vartheta _{j}}{\sinh ( \vartheta _{j} )} = 1 + \frac{1}{\sqrt{\gamma _{jN}}} \delta _j. \end{aligned}$$

Therefore,

$$\begin{aligned} \frac{1}{\text {e}^{-\vartheta _{j}} - \text {e}^{\vartheta _{j}}} f_j&= \frac{f_j}{-2\vartheta _{j}} \frac{\vartheta _j}{\sinh (\vartheta _{j})} \\&= \bigg (1 + \frac{1}{\sqrt{\gamma _{jN}}} \delta _j \bigg ) \bigg (1 + \frac{1}{\sqrt{\gamma _{jN}}}\delta _j \bigg ) \\&= 1 + \frac{1}{\sqrt{\gamma _{jN}}} \delta _j. \end{aligned}$$

Analogously, we treat \(g_j\). Namely, we write

$$\begin{aligned} g_j&= 1 - \vartheta _j + \frac{1}{2} \vartheta _j^2 - \bigg (1 - \vartheta _{j+1} + \frac{1}{2}\vartheta _{j+1}^2\bigg ) + \frac{1}{\gamma _{jN}} \delta _j \\&= \frac{1}{\gamma _{jN}} \delta _j. \end{aligned}$$

Hence,

$$\begin{aligned} \frac{1}{\text {e}^{-\vartheta _{j}} - \text {e}^{\vartheta _{j}}} g_j&= \frac{1}{\sqrt{\gamma _{jN}}} \delta _j. \end{aligned}$$

Similarly, we can find that

$$\begin{aligned} \frac{1}{\text {e}^{-\vartheta _{j}} - \text {e}^{\vartheta _{j}}} \tilde{f}_j&= 1 + \frac{1}{\sqrt{\gamma _{jN}}} \delta _j,\\ \frac{1}{\text {e}^{-\vartheta _{j}} - \text {e}^{\vartheta _{j}}} \tilde{g}_j&= \frac{1}{\sqrt{\gamma _{jN}}} \delta _j. \end{aligned}$$

Consequently,

$$\begin{aligned} Z_{j}^{-1} Z_{j+1} = {\text {Id}}+ \sqrt{\frac{\alpha _{i-1}}{\gamma _{(j+1)N+i-1}}} Q_j \end{aligned}$$

where, for any compact interval \(K \subset \Lambda \), \((Q_j)\) is a sequence from \(\mathcal {D}_1\big (K, {\text {Mat}}(2, \mathbb {R})\big )\) convergent uniformly on K to the zero matrix, which proves formula (3.3). \(\square \)

Theorem 3.2

Let N be a positive integer and \(i \in \{0, 1, \ldots , N-1\}\). Let \((\gamma _n: n \in \mathbb {N})\) be a sequence of positive numbers tending to infinity and satisfying (1.2) and (1.3). Let \((a_n: n \in \mathbb {N}_0)\) and \((b_n: n \in \mathbb {N}_0)\) be \(\gamma \)-tempered N-periodically modulated Jacobi parameters such that \(\mathfrak {X}_0(0)\) is a non-trivial parabolic element. Suppose that (1.4) holds true with \(\varepsilon = {\text {sign}}({{\text {tr}}\mathfrak {X}_0(0)})\). Then for any compact interval \(K \subset \Lambda \),

$$\begin{aligned} Z_{j+1}^{-1} X_{jN+i} Z_{j} = \varepsilon \bigg ( {\text {Id}}+ \sqrt{\frac{\alpha _{i-1}}{\gamma _{(j+1)N+i-1}}} R_j \bigg ) \end{aligned}$$

where \((R_j)\) is a sequence from \(\mathcal {D}_1\big (K, {\text {Mat}}(2, \mathbb {R})\big )\) convergent uniformly on K to

$$\begin{aligned} \mathcal {R}_i(x) = \frac{\sqrt{|{\tau (x)} |}}{2} \begin{pmatrix} 1 &{}\quad -1\\ 1 &{}\quad -1 \end{pmatrix} - \frac{\upsilon (x)}{2 \sqrt{|{\tau (x)} |}} \begin{pmatrix} 1 &{}\quad 1\\ -1 &{}\quad -1 \end{pmatrix} - \frac{\mathfrak {S}}{2} \begin{pmatrix} 1 &{}\quad -1\\ -1 &{}\quad 1 \end{pmatrix}. \end{aligned}$$
(3.8)

In particular, \({\text {discr}}\mathcal {R}_i(x) = 4 \tau (x)\).
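The last identity follows from the direct computation \({\text {tr}}\mathcal {R}_i = -\mathfrak {S}\) and \(\det \mathcal {R}_i = \upsilon (x)\), so that \({\text {discr}}\mathcal {R}_i = \mathfrak {S}^2 - 4\upsilon (x) = 4\tau (x)\). It can also be checked numerically, as in the following sketch; the sample values of \(\tau (x)\) and \(\mathfrak {S}\) are chosen only for illustration, and \(\upsilon (x)\) is recovered from (2.17).

```python
# Sketch: numerical check of discr R_i = 4 tau for the limit matrix (3.8).
# Sample values of tau(x) and S (standing for \mathfrak{S}) are chosen only
# for this test; upsilon is recovered from (2.17) as upsilon = S^2/4 - tau.

import math

def discr_R(tau, S):
    ups = S * S / 4 - tau            # upsilon(x) from (2.17)
    r = math.sqrt(abs(tau))          # sqrt(|tau(x)|); tau != 0 assumed
    # entries of R_i(x) assembled from the three matrices in (3.8)
    R = [[ r / 2 - ups / (2 * r) - S / 2, -r / 2 - ups / (2 * r) + S / 2],
         [ r / 2 + ups / (2 * r) + S / 2, -r / 2 + ups / (2 * r) - S / 2]]
    trace = R[0][0] + R[1][1]
    d = R[0][0] * R[1][1] - R[0][1] * R[1][0]
    return trace ** 2 - 4 * d

for tau, S in [(0.8, 1.5), (-2.0, 0.3), (3.7, -1.1)]:
    print(abs(discr_R(tau, S) - 4 * tau))  # ~ 0 in each case
```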

Proof

In the following argument, we denote by \((\delta _j)\) and \((\mathcal {E}_j)\) generic sequences tending to zero from \(\mathcal {D}_1\) and \(\mathcal {D}_1\big (K, {\text {Mat}}(2, \mathbb {R})\big )\), respectively, which may change from line to line.

Observe that by (1.2)

$$\begin{aligned} \lim _{j \rightarrow \infty } \frac{\gamma _{jN+i'-1}}{\gamma _{jN+i'}} = \frac{\alpha _{i'-1}}{\alpha _{i'}}, \end{aligned}$$

and

$$\begin{aligned} \bigg (\frac{\gamma _{jN+i'-1}}{\gamma _{jN+i'}}: j \in \mathbb {N}\bigg ) \in \mathcal {D}_1. \end{aligned}$$

Hence,

$$\begin{aligned} \frac{1}{\gamma _{jN+i'}}&= \frac{1}{\gamma _{jN+i'-1}} \frac{\gamma _{jN+i'-1}}{\gamma _{jN+i'}} \\&= \frac{1}{\gamma _{jN+i'-1}} \frac{\alpha _{i'-1}}{\alpha _{i'}} - \frac{1}{\gamma _{jN+i'-1}} \bigg ( \frac{\gamma _{jN+i'-1}}{\gamma _{jN+i'}} - \frac{\alpha _{i'-1}}{\alpha _{i'}}\bigg )\\&= \frac{\alpha _{i'-1}}{\alpha _{i'}} \frac{1}{\gamma _{jN+i'-1}} + \frac{1}{\gamma _{jN}} \delta _j. \end{aligned}$$

Consequently,

$$\begin{aligned} \frac{\alpha _{i'}}{\gamma _{jN+i'}} = \frac{\alpha _{i-1}}{\gamma _{(j+1)N+i-1}} + \frac{1}{\gamma _{jN}} \delta _j. \end{aligned}$$
(3.9)

Next, for \(x \in K\), we have

$$\begin{aligned} \frac{x}{a_{jN+i'}}&= \frac{x}{\gamma _{jN+i'}} \frac{\gamma _{jN+i'}}{a_{jN+i'}} \nonumber \\&= \frac{\mathfrak {t}x}{\gamma _{jN+i'}} + \frac{x}{\gamma _{jN+i'}} \bigg (\frac{\gamma _{jN+i'}}{a_{jN+i'}} - \mathfrak {t}\bigg )\nonumber \\&= \frac{\mathfrak {t}x}{\gamma _{jN+i'}} + \frac{1}{\gamma _{jN}} \delta _j. \end{aligned}$$
(3.10)

Let

$$\begin{aligned} \xi _n = \frac{\alpha _{n-1}}{\alpha _n} - \frac{a_{n-1}}{a_n}, \quad \text {and}\quad \zeta _n = \frac{\beta _n}{\alpha _n} - \frac{b_n}{a_n}. \end{aligned}$$

Using (3.10), the matrix \(B_{jN+i'}\) can be written as

$$\begin{aligned}&\begin{pmatrix} 0 &{} 1 \\ -\frac{\alpha _{i'-1}}{\alpha _{i'}} + \xi _{jN+i'} + \frac{1}{\gamma _{jN+i'}} \delta _j &{} -\frac{\beta _{i'}}{\alpha _{i'}} +\frac{x \mathfrak {t}}{\gamma _{jN+i'}} +\zeta _{jN+i'} + \frac{1}{\gamma _{jN+i'}} \delta _j \end{pmatrix} \\&\quad =\begin{pmatrix} 0 &{} 1 \\ -\frac{\alpha _{i'-1}}{\alpha _{i'}} &{} - \frac{\beta _{i'}}{\alpha _{i'}} \end{pmatrix} + \begin{pmatrix} 0 &{} 0 \\ \xi _{jN+i'} &{} \frac{x \mathfrak {t}}{\gamma _{jN+i'}} + \zeta _{jN+i'} \end{pmatrix} + \frac{1}{\gamma _{jN+i'}} \mathcal {E}_j \\&\quad = \begin{pmatrix} 0 &{} 1 \\ -\frac{\alpha _{i'-1}}{\alpha _{i'}} &{} - \frac{\beta _{i'}}{\alpha _{i'}} \end{pmatrix} \Bigg \{ {\text {Id}}+ \frac{\alpha _{i'}}{\alpha _{i'-1}} \begin{pmatrix} -\frac{\beta _{i'}}{\alpha _{i'}} &{} -1 \\ \frac{\alpha _{i'-1}}{\alpha _{i'}} &{} 0 \end{pmatrix} \begin{pmatrix} 0 &{} 0 \\ \xi _{jN+i'} &{} \frac{x \mathfrak {t}}{\gamma _{jN+i'}} + \zeta _{jN+i'} \end{pmatrix} \\&\qquad + \frac{1}{\gamma _{jN+i'}} \mathcal {E}_j \Bigg \} \\&\quad = \begin{pmatrix} 0 &{} 1 \\ -\frac{\alpha _{i'-1}}{\alpha _{i'}} &{} - \frac{\beta _{i'}}{\alpha _{i'}} \end{pmatrix} \left\{ {\text {Id}}- \frac{\alpha _{i'}}{\alpha _{i'-1}} \begin{pmatrix} \xi _{jN+i'} &{} \zeta _{jN+i'} + \frac{x \mathfrak {t}}{\gamma _{jN+i'}}\\ 0 &{} 0 \end{pmatrix} + \frac{1}{\gamma _{jN}} \mathcal {E}_j \right\} . \end{aligned}$$

Hence,

$$\begin{aligned} X_{jN+i}&= B_{jN+i+N-1} \ldots B_{jN+i+1} B_{jN+i} \\&= \mathfrak {X}_{i}(0) \Bigg \{ {\text {Id}}- \sum _{i' = i}^{N+i-1} \frac{\alpha _{i'}}{\alpha _{i'-1}} \Big ( \mathfrak {B}_{i'-1}(0) \ldots \mathfrak {B}_i(0)\Big )^{-1} \\&\quad \times \begin{pmatrix} \xi _{jN+i'} &{}\quad \zeta _{jN+i'} + \frac{x \mathfrak {t}}{\gamma _{jN+i'}} \\ 0 &{}\quad 0 \end{pmatrix} \Big (\mathfrak {B}_{i'-1}(0) \ldots \mathfrak {B}_i(0)\Big ) + \frac{1}{\gamma _{jN}} \mathcal {E}_j \Bigg \}, \end{aligned}$$

and so

$$\begin{aligned} Z_{j+1}^{-1} X_{jN+i} Z_j&= Z_{j+1}^{-1} \mathfrak {X}_{i}(0) Z_j \Bigg \{ {\text {Id}}- \sum _{i' = i}^{N+i-1} \frac{\alpha _{i'}}{\alpha _{i'-1}} \begin{pmatrix} 1 &{}\quad 1 \\ \text {e}^{\vartheta _j} &{}\quad \text {e}^{-\vartheta _j} \end{pmatrix}^{-1} T_{i'}^{-1} \\&\quad \times \begin{pmatrix} \xi _{jN+i'}&{}\quad \zeta _{jN+i'} + \frac{x \mathfrak {t}}{\gamma _{jN+i'}}\\ 0 &{}\quad 0 \end{pmatrix} T_{i'} \begin{pmatrix} 1 &{}\quad 1 \\ \text {e}^{\vartheta _j} &{}\quad \text {e}^{-\vartheta _j} \end{pmatrix} + \frac{1}{\sqrt{\gamma _{jN}}} \mathcal {E}_j \Bigg \}. \end{aligned}$$

To find the asymptotic of the first factor, we write

$$\begin{aligned} Z_{j+1}^{-1} \mathfrak {X}_{i}(0) Z_j = \frac{\varepsilon }{\text {e}^{-\vartheta _{j+1}} - \text {e}^{\vartheta _{j+1}}} \begin{pmatrix} f_j &{}\quad g_j \\ \tilde{g}_j &{}\quad \tilde{f}_j \end{pmatrix} \end{aligned}$$

where

$$\begin{aligned} f_j&= \text {e}^{{\vartheta _{j}} - \vartheta _{j+1}} + 1 - 2\text {e}^{\vartheta _{j}}, \quad g_j = \text {e}^{{-\vartheta _{j}} - \vartheta _{j+1}} + 1 - 2\text {e}^{-\vartheta _{j}}, \\ \tilde{g}_j&= -\text {e}^{\vartheta _{j}+\vartheta _{j+1}} -1 +2\text {e}^{\vartheta _{j}}, \quad \tilde{f}_j= -\text {e}^{-\vartheta _{j}+\vartheta _{j+1}} -1 + 2 \text {e}^{-\vartheta _{j}}. \end{aligned}$$

Since by (3.7)

$$\begin{aligned} \text {e}^{{\vartheta _{j}} - \vartheta _{j+1}} = 1 + \frac{1}{\gamma _{jN}} \delta _j, \quad \text {and}\quad \text {e}^{\vartheta _{j}} = 1 + \vartheta _j + \frac{1}{2} \vartheta _j^2 + \frac{1}{\gamma _{jN}} \delta _j, \end{aligned}$$

we get

$$\begin{aligned} f_j&= 1 + 1 - 2 \bigg (1 + \vartheta _{j} + \frac{1}{2}\vartheta _{j}^2 \bigg )+ \frac{1}{\gamma _{jN}} \delta _j \\&= -2 \vartheta _{j} - \vartheta _{j}^2 + \frac{1}{\gamma _{jN}} \delta _j. \end{aligned}$$

Moreover,

$$\begin{aligned} \frac{\vartheta _j}{\vartheta _{j+1}} = 1 + \frac{1}{\sqrt{\gamma _{jN}}} \delta _j. \end{aligned}$$

Thus

$$\begin{aligned} \frac{1}{\text {e}^{-\vartheta _{j+1}} - \text {e}^{\vartheta _{j+1}}} f_j&= \frac{f_j}{-2\vartheta _{j}} \frac{\vartheta _j}{\vartheta _{j+1}} \frac{\vartheta _{j+1}}{\sinh \vartheta _{j+1}} \nonumber \\&= \bigg (1 + \frac{1}{2} \vartheta _j + \frac{1}{\sqrt{\gamma _{jN}}} \delta _j \bigg ) \bigg (1 + \frac{1}{\sqrt{\gamma _{jN}}} \delta _j \bigg )\bigg (1 + \frac{1}{\sqrt{\gamma _{jN}}} \delta _j\bigg ) \nonumber \\&= 1 + \frac{1}{2} \vartheta _{j} + \frac{1}{\sqrt{\gamma _{jN}}} \delta _j. \end{aligned}$$
(3.11)

Analogously, we can find that

$$\begin{aligned} \tilde{f}_j = -2 \vartheta _{j} + \vartheta _{j}^2 +\frac{1}{\gamma _{jN}} \delta _j, \end{aligned}$$

and

$$\begin{aligned} \frac{1}{\text {e}^{-\vartheta _{j+1}} - \text {e}^{\vartheta _{j+1}}} \tilde{f}_{j} = 1 -\frac{1}{2} \vartheta _{j} + \frac{1}{\sqrt{\gamma _{jN}}} \delta _j. \end{aligned}$$
(3.12)

Next, we write

$$\begin{aligned} g_j&= 1 - \vartheta _{j} - \vartheta _{j+1} + \frac{1}{2} (\vartheta _{j}+\vartheta _{j+1})^2 + 1 - 2 \bigg (1 -\vartheta _{j} + \frac{1}{2} \vartheta _{j}^2\bigg ) + \frac{1}{\gamma _{jN}} \delta _j \\&= \vartheta _{j}^2 + \frac{1}{\gamma _{jN}} \delta _j, \end{aligned}$$

thus

$$\begin{aligned} \frac{1}{\text {e}^{-\vartheta _{j+1}} - \text {e}^{\vartheta _{j+1}}} g_j = -\frac{1}{2} \vartheta _{j} + \frac{1}{\sqrt{\gamma _{jN}}} \delta _j. \end{aligned}$$
(3.13)

Similarly, we get

$$\begin{aligned} \tilde{g}_j = -\vartheta _{j}^2 + \frac{1}{\gamma _{jN}} \delta _j, \end{aligned}$$

and so

$$\begin{aligned} \frac{1}{\text {e}^{-\vartheta _{j+1}} - \text {e}^{\vartheta _{j+1}}} \tilde{g}_j = \frac{1}{2} \vartheta _{j} + \frac{1}{\sqrt{\gamma _{jN}}} \delta _j. \end{aligned}$$
(3.14)

Consequently, by (3.11)–(3.14) we obtain

$$\begin{aligned} Z_{j+1}^{-1} \mathfrak {X}_{i}(0) Z_{j} = \varepsilon \left\{ {\text {Id}}+ \frac{1}{2} \vartheta _{j} \begin{pmatrix} 1 &{} -1 \\ 1 &{} -1 \end{pmatrix} + \frac{1}{\sqrt{\gamma _{jN}}} \mathcal {E}_j \right\} . \end{aligned}$$
(3.15)

Next, we observe that

$$\begin{aligned} \text {e}^{\vartheta _{j}} = 1 + \delta _j, \quad \frac{\vartheta _{j}}{\sinh \vartheta _{j}} = 1 + \delta _j, \end{aligned}$$

thus in view of (2.6), for each \(i' \in \{0, 1, \ldots , N-1\}\) we have

$$\begin{aligned}&\begin{pmatrix} 1 &{}\quad 1 \\ \text {e}^{\vartheta _j} &{}\quad \text {e}^{-\vartheta _j} \end{pmatrix}^{-1} T_{i'}^{-1} \begin{pmatrix} 0 &{}\quad \frac{x \mathfrak {t}}{\gamma _{jN+i'}} \\ 0 &{}\quad 0 \end{pmatrix} T_{i'} \begin{pmatrix} 1 &{}\quad 1 \\ \text {e}^{\vartheta _j} &{}\quad \text {e}^{-\vartheta _j} \end{pmatrix} \\&\quad = - \frac{1}{2\vartheta _j} \frac{x \mathfrak {t}}{\gamma _{jN+i'}} \begin{pmatrix} 1 &{}\quad -1\\ -1 &{}\quad 1 \end{pmatrix} T_{i'}^{-1} \begin{pmatrix} 0 &{}\quad 1\\ 0 &{}\quad 0 \end{pmatrix} T_{i'} \begin{pmatrix} 1 &{}\quad 1 \\ 1 &{}\quad 1 \end{pmatrix} + \frac{1}{\sqrt{\gamma _{jN}}} \mathcal {E}_j \\&\quad = -\frac{1}{2\vartheta _j} \frac{x \mathfrak {t}}{\gamma _{jN+i'}} \frac{([T_{i'}]_{21} + [T_{i'}]_{22})^2}{\det T_{i'}} \begin{pmatrix} 1 &{}\quad 1\\ -1 &{}\quad -1 \end{pmatrix} + \frac{1}{\sqrt{\gamma _{jN}}} \mathcal {E}_j \\&\quad = \frac{1}{2\vartheta _j} \frac{x \mathfrak {t}}{\gamma _{jN+i'}} \big (\varepsilon [\mathfrak {X}_{i'}(0)]_{21}\big ) \begin{pmatrix} 1 &{}\quad 1\\ -1 &{}\quad -1 \end{pmatrix} + \frac{1}{\sqrt{\gamma _{jN}}} \mathcal {E}_j. \end{aligned}$$

We write

$$\begin{aligned}&\begin{pmatrix} 1 &{}\quad 1 \\ \text {e}^{\vartheta _j} &{}\quad \text {e}^{-\vartheta _j} \end{pmatrix}^{-1} T_{i'}^{-1} \begin{pmatrix} \xi _{jN+i'} &{}\quad \zeta _{jN+i'} \\ 0 &{}\quad 0 \end{pmatrix} T_{i'} \begin{pmatrix} 1 &{}\quad 1 \\ \text {e}^{\vartheta _j} &{}\quad \text {e}^{-\vartheta _j} \end{pmatrix} \\&\quad = \frac{1}{\text {e}^{-\vartheta _j} - \text {e}^{\vartheta _j}} \begin{pmatrix} 1 &{}\quad -1 \\ -1 &{}\quad 1 \end{pmatrix} T_{i'}^{-1} \begin{pmatrix} \xi _{jN+i'} &{}\quad \zeta _{jN+i'}\\ 0 &{}\quad 0 \end{pmatrix} T_{i'} \begin{pmatrix} 1 &{}\quad 1 \\ 1 &{}\quad 1 \end{pmatrix} \\&\qquad + \frac{1}{\text {e}^{-\vartheta _j} - \text {e}^{\vartheta _j}} \begin{pmatrix} \text {e}^{-\vartheta _j}-1 &{}\quad 0 \\ 1-\text {e}^{\vartheta _j} &{}\quad 0 \end{pmatrix} T_{i'}^{-1} \begin{pmatrix} \xi _{jN+i'} &{}\quad \zeta _{jN+i'} \\ 0 &{}\quad 0 \end{pmatrix} T_{i'} \begin{pmatrix} 1 &{}\quad 1 \\ 1 &{}\quad 1 \end{pmatrix} \\&\qquad + \frac{1}{\text {e}^{-\vartheta _j} - \text {e}^{\vartheta _j}} \begin{pmatrix} 1 &{}\quad -1 \\ -1 &{}\quad 1 \end{pmatrix} T_{i'}^{-1} \begin{pmatrix} \xi _{jN+i'} &{}\quad \zeta _{jN+i'} \\ 0 &{}\quad 0 \end{pmatrix} T_{i'} \begin{pmatrix} 0 &{}\quad 0 \\ \text {e}^{\vartheta _j}-1 &{}\quad \text {e}^{-\vartheta _j} -1 \end{pmatrix} \\&\qquad + \frac{1}{\sqrt{\gamma _{jN}}} \mathcal {E}_j. \end{aligned}$$

We observe that

$$\begin{aligned}&\frac{1}{\text {e}^{-\vartheta _j} - \text {e}^{\vartheta _j}} \begin{pmatrix} 1 &{}\quad -1 \\ -1 &{}\quad 1 \end{pmatrix} T_{i'}^{-1} \begin{pmatrix} \xi _{jN+i'} &{}\quad 0\\ 0 &{}\quad 0 \end{pmatrix} T_{i'} \begin{pmatrix} 1 &{}\quad 1 \\ 1 &{}\quad 1 \end{pmatrix} \\&\quad = \frac{\xi _{jN+i'}}{\text {e}^{-\vartheta _j} - \text {e}^{\vartheta _j}} \frac{([T_{i'}]_{11} + [T_{i'}]_{12})([T_{i'}]_{21} + [T_{i'}]_{22})}{\det T_{i'}} \begin{pmatrix} 1 &{}\quad 1 \\ -1 &{}\quad -1 \end{pmatrix} \\&\quad = \frac{\xi _{jN+i'}}{\text {e}^{-\vartheta _j} - \text {e}^{\vartheta _j}} \big (1 - \varepsilon [\mathfrak {X}_{i'}(0)]_{11}\big ) \begin{pmatrix} 1 &{}\quad 1 \\ -1 &{}\quad -1 \end{pmatrix}, \end{aligned}$$

and

$$\begin{aligned}&\frac{1}{\text {e}^{-\vartheta _j} - \text {e}^{\vartheta _j}} \begin{pmatrix} 1 &{}\quad -1 \\ -1 &{}\quad 1 \end{pmatrix} T_{i'}^{-1} \begin{pmatrix} 0 &{}\quad \zeta _{jN+i'} \\ 0 &{}\quad 0 \end{pmatrix} T_{i'} \begin{pmatrix} 1 &{}\quad 1 \\ 1 &{}\quad 1 \end{pmatrix}\\&\quad = \frac{\zeta _{jN+i'}}{\text {e}^{-\vartheta _j} - \text {e}^{\vartheta _j}} \frac{([T_{i'}]_{21} + [T_{i'}]_{22})^2}{\det T_{i'}} \begin{pmatrix} 1 &{}\quad 1 \\ -1 &{}\quad -1 \end{pmatrix} \\&\quad = - \frac{\zeta _{jN+i'}}{\text {e}^{-\vartheta _j} - \text {e}^{\vartheta _j}} \big (\varepsilon [\mathfrak {X}_{i'}(0)]_{21}\big ) \begin{pmatrix} 1 &{}\quad 1 \\ -1 &{}\quad -1 \end{pmatrix}. \end{aligned}$$

Hence, by (1.4) and (2.16),

$$\begin{aligned}&\frac{1}{\text {e}^{-\vartheta _j} - \text {e}^{\vartheta _j}} \begin{pmatrix} 1 &{}\quad -1 \\ -1 &{}\quad 1 \end{pmatrix} T_{i'}^{-1} \begin{pmatrix} \xi _{jN+i'} &{}\quad \zeta _{jN+i'} \\ 0 &{}\quad 0 \end{pmatrix} T_{i'} \begin{pmatrix} 1 &{}\quad 1 \\ 1 &{}\quad 1 \end{pmatrix} \\&\quad = -\frac{1}{2\vartheta _j} \frac{\mathfrak {u}_{i'}}{\gamma _{jN+i'}} \begin{pmatrix} 1 &{}\quad 1 \\ -1 &{}\quad -1 \end{pmatrix} + \frac{1}{\sqrt{\gamma _{jN}}} \mathcal {E}_j. \end{aligned}$$

Next, let us notice that

$$\begin{aligned} \bigg (\frac{e^{\vartheta _j} - 1}{e^{\vartheta _j} - e^{-\vartheta _j}}: j \in \mathbb {N}\bigg ) \in \mathcal {D}_1, \end{aligned}$$

and

$$\begin{aligned} \lim _{j \rightarrow \infty } \frac{e^{\vartheta _j} - 1}{e^{\vartheta _j} - e^{-\vartheta _j}} = \frac{1}{2}. \end{aligned}$$

Therefore, by (2.6), we get

$$\begin{aligned}&\frac{1}{\text {e}^{-\vartheta _j} - \text {e}^{\vartheta _j}} \begin{pmatrix} \text {e}^{-\vartheta _j}-1 &{}\quad 0 \\ 1-\text {e}^{\vartheta _j} &{}\quad 0 \end{pmatrix} T_{i'}^{-1} \begin{pmatrix} \xi _{jN+i'} &{}\quad 0 \\ 0 &{}\quad 0 \end{pmatrix} T_{i'} \begin{pmatrix} 1 &{}\quad 1 \\ 1 &{}\quad 1 \end{pmatrix}\\&\quad = \frac{1}{2} \begin{pmatrix} 1 &{}\quad 0 \\ 1 &{}\quad 0 \end{pmatrix} T_{i'}^{-1} \begin{pmatrix} \frac{\mathfrak {s}_{i'}}{\sqrt{\alpha _{i'} \gamma _{jN+i'}}} &{}\quad 0 \\ 0 &{}\quad 0 \end{pmatrix} T_{i'} \begin{pmatrix} 1 &{}\quad 1 \\ 1 &{}\quad 1 \end{pmatrix} + \frac{1}{\sqrt{\gamma _{jN}}} \mathcal {E}_j \\&\quad = \frac{\mathfrak {s}_{i'}}{2 \sqrt{ \alpha _{i'} \gamma _{jN+i'}}} \frac{([T_{i'}]_{11} + [T_{i'}]_{12}) [T_{i'}]_{22}}{\det T_{i'}} \begin{pmatrix} 1 &{}\quad 1 \\ 1 &{}\quad 1 \end{pmatrix} + \frac{1}{\sqrt{\gamma _{jN}}} \mathcal {E}_j, \end{aligned}$$

and

$$\begin{aligned}&\frac{1}{\text {e}^{-\vartheta _j} - \text {e}^{\vartheta _j}} \begin{pmatrix} 1 &{}\quad -1 \\ -1 &{}\quad 1 \end{pmatrix} T_{i'}^{-1} \begin{pmatrix} \xi _{jN+i'} &{}\quad 0 \\ 0 &{}\quad 0 \end{pmatrix} T_{i'} \begin{pmatrix} 0 &{}\quad 0 \\ e^{\vartheta _j}-1 &{}\quad e^{-\vartheta _j} - 1 \end{pmatrix} \\&\quad = \frac{1}{2} \begin{pmatrix} 1 &{}\quad -1\\ -1 &{}\quad 1 \end{pmatrix} T_{i'}^{-1} \begin{pmatrix} \frac{\mathfrak {s}_{i'}}{\sqrt{ \alpha _{i'} \gamma _{jN+i'}}} &{}\quad 0 \\ 0 &{}\quad 0 \end{pmatrix} T_{i'} \begin{pmatrix} 0 &{}\quad 0 \\ -1 &{}\quad 1 \end{pmatrix} + \frac{1}{\sqrt{\gamma _{jN}}} \mathcal {E}_j \\&\quad = \frac{\mathfrak {s}_{i'}}{2 \sqrt{ \alpha _{i'} \gamma _{jN+i'}}} \frac{([T_{i'}]_{21} + [T_{i'}]_{22}) [T_{i'}]_{12}}{\det T_{i'}} \begin{pmatrix} -1 &{}\quad 1 \\ 1 &{}\quad -1 \end{pmatrix} + \frac{1}{\sqrt{\gamma _{jN}}} \mathcal {E}_j. \end{aligned}$$

Analogously,

$$\begin{aligned}&\frac{1}{\text {e}^{-\vartheta _j} - \text {e}^{\vartheta _j}} \begin{pmatrix} \text {e}^{-\vartheta _j}-1 &{}\quad 0 \\ 1-\text {e}^{\vartheta _j} &{}\quad 0 \end{pmatrix} T_{i'}^{-1} \begin{pmatrix} 0 &{}\quad \zeta _{jN+i'} \\ 0 &{}\quad 0 \end{pmatrix} T_{i'} \begin{pmatrix} 1 &{}\quad 1 \\ 1 &{}\quad 1 \end{pmatrix}\nonumber \\&\quad = \frac{1}{2} \begin{pmatrix} 1 &{}\quad 0 \\ 1 &{}\quad 0 \end{pmatrix} T_{i'}^{-1} \begin{pmatrix} 0 &{}\quad \frac{\mathfrak {r}_{i'}}{\sqrt{ \alpha _{i'} \gamma _{jN+i'}}} \\ 0 &{}\quad 0 \end{pmatrix} T_{i'} \begin{pmatrix} 1 &{}\quad 1 \\ 1 &{}\quad 1 \end{pmatrix} +\frac{1}{\sqrt{\gamma _{jN}}} \mathcal {E}_j \nonumber \\&\quad = \frac{\mathfrak {r}_{i'}}{2 \sqrt{\alpha _{i'} \gamma _{jN+i'}}} \frac{([T_{i'}]_{21} + [T_{i'}]_{22}) [T_{i'}]_{22}}{\det T_{i'}} \begin{pmatrix} 1 &{}\quad 1 \\ 1 &{}\quad 1 \end{pmatrix} + \frac{1}{\sqrt{\gamma _{jN}}} \mathcal {E}_j, \end{aligned}$$
(3.16)

and

$$\begin{aligned}&\frac{1}{\text {e}^{-\vartheta _j} - \text {e}^{\vartheta _j}} \begin{pmatrix} 1 &{}\quad -1 \\ -1 &{}\quad 1 \end{pmatrix} T_{i'}^{-1} \begin{pmatrix} 0 &{}\quad \zeta _{jN+i'} \\ 0 &{}\quad 0 \end{pmatrix} T_{i'} \begin{pmatrix} 0 &{}\quad 0 \\ e^{\vartheta _j}-1 &{}\quad e^{-\vartheta _j} - 1 \end{pmatrix} \nonumber \\&\quad = \frac{1}{2} \begin{pmatrix} 1 &{}\quad -1\\ -1 &{}\quad 1 \end{pmatrix} T_{i'}^{-1} \begin{pmatrix} 0 &{}\quad \frac{\mathfrak {r}_{i'}}{\sqrt{ \alpha _{i'} \gamma _{jN+i'}}} \\ 0 &{}\quad 0 \end{pmatrix} T_{i'} \begin{pmatrix} 0 &{}\quad 0 \\ -1 &{}\quad 1 \end{pmatrix} + \frac{1}{\sqrt{\gamma _{jN}}} \mathcal {E}_j \nonumber \\&\quad = \frac{\mathfrak {r}_{i'}}{2 \sqrt{ \alpha _{i'} \gamma _{jN+i'}}} \frac{([T_{i'}]_{21} + [T_{i'}]_{22}) [T_{i'}]_{22}}{\det T_{i'}} \begin{pmatrix} -1 &{}\quad 1 \\ 1 &{}\quad -1 \end{pmatrix} + \frac{1}{\sqrt{\gamma _{jN}}} \mathcal {E}_j. \end{aligned}$$
(3.17)

In view of (2.13), (2.14) and (2.16), we have

$$\begin{aligned} \mathfrak {s}_{i'} ([T_{i'}]_{11} + [T_{i'}]_{12}) = -\mathfrak {r}_{i'} ([T_{i'}]_{21} + [T_{i'}]_{22}), \end{aligned}$$

thus by (3.16) and (3.17) we get

$$\begin{aligned}&\frac{1}{\text {e}^{-\vartheta _j} - \text {e}^{\vartheta _j}} \begin{pmatrix} \text {e}^{-\vartheta _j}-1 &{}\quad 0 \\ 1-\text {e}^{\vartheta _j} &{}\quad 0 \end{pmatrix} T_{i'}^{-1} \begin{pmatrix} 0 &{}\quad \zeta _{jN+i'} \\ 0 &{}\quad 0 \end{pmatrix} T_{i'} \begin{pmatrix} 1 &{}\quad 1 \\ 1 &{}\quad 1 \end{pmatrix}\\&\quad = \frac{\mathfrak {s}_{i'}}{2 \sqrt{\alpha _{i'} \gamma _{jN+i'}}} \frac{([T_{i'}]_{11} + [T_{i'}]_{12}) [T_{i'}]_{22}}{\det T_{i'}} \begin{pmatrix} -1 &{}\quad -1 \\ -1 &{}\quad -1 \end{pmatrix} + \frac{1}{\sqrt{\gamma _{jN}}} \mathcal {E}_j, \end{aligned}$$

and

$$\begin{aligned}&\frac{1}{\text {e}^{-\vartheta _j} - \text {e}^{\vartheta _j}} \begin{pmatrix} 1 &{}\quad -1 \\ -1 &{}\quad 1 \end{pmatrix} T_{i'}^{-1} \begin{pmatrix} 0 &{}\quad \zeta _{jN+i'} \\ 0 &{}\quad 0 \end{pmatrix} T_{i'} \begin{pmatrix} 0 &{}\quad 0 \\ e^{\vartheta _j}-1 &{}\quad e^{-\vartheta _j} - 1 \end{pmatrix} \\&\quad = \frac{\mathfrak {s}_{i'}}{2 \sqrt{ \alpha _{i'} \gamma _{jN+i'}}} \frac{([T_{i'}]_{11} + [T_{i'}]_{12}) [T_{i'}]_{22}}{\det T_{i'}} \begin{pmatrix} 1 &{}\quad -1 \\ -1 &{}\quad 1 \end{pmatrix} + \frac{1}{\sqrt{\gamma _{jN}}} \mathcal {E}_j. \end{aligned}$$

Hence,

$$\begin{aligned}&\frac{1}{\text {e}^{-\vartheta _j} - \text {e}^{\vartheta _j}} \begin{pmatrix} \text {e}^{-\vartheta _j}-1 &{}\quad 0 \\ 1-\text {e}^{\vartheta _j} &{}\quad 0 \end{pmatrix} T_{i'}^{-1} \begin{pmatrix} \xi _{jN+i'} &{}\quad \zeta _{jN+i'} \\ 0 &{}\quad 0 \end{pmatrix} T_{i'} \begin{pmatrix} 1 &{}\quad 1 \\ 1 &{}\quad 1 \end{pmatrix} \\&\qquad + \frac{1}{\text {e}^{-\vartheta _j} - \text {e}^{\vartheta _j}} \begin{pmatrix} 1 &{}\quad -1 \\ -1 &{}\quad 1 \end{pmatrix} T_{i'}^{-1} \begin{pmatrix} \xi _{jN+i'} &{}\quad \zeta _{jN+i'} \\ 0 &{}\quad 0 \end{pmatrix} T_{i'} \begin{pmatrix} 0 &{}\quad 0 \\ \text {e}^{\vartheta _j}-1 &{}\quad \text {e}^{-\vartheta _j} -1 \end{pmatrix}\\&\quad = \frac{\mathfrak {s}_{i'}}{2 \sqrt{\alpha _{i'} \gamma _{jN+i'}}} \begin{pmatrix} 1 &{}\quad -1 \\ -1 &{}\quad 1 \end{pmatrix} + \frac{1}{\sqrt{\gamma _{jN}}} \mathcal {E}_j. \end{aligned}$$

Summarizing, we obtain

$$\begin{aligned}&\begin{pmatrix} 1 &{}\quad 1 \\ \text {e}^{\vartheta _j} &{}\quad \text {e}^{-\vartheta _j} \end{pmatrix}^{-1} T_{i'}^{-1} \begin{pmatrix} \xi _{jN+i'} &{}\quad \zeta _{jN+i'}+\frac{x \mathfrak {t}}{\gamma _{jN+i'}} \\ 0 &{}\quad 0 \end{pmatrix} T_{i'} \begin{pmatrix} 1 &{}\quad 1 \\ \text {e}^{\vartheta _j} &{}\quad \text {e}^{-\vartheta _j} \end{pmatrix} \\&\quad = \frac{1}{2\vartheta _j} \frac{\mathfrak {t}x \big (\varepsilon [\mathfrak {X}_{i'}(0)]_{21} \big ) - \mathfrak {u}_{i'}}{\gamma _{jN+i'}} \begin{pmatrix} 1 &{}\quad 1\\ -1 &{}\quad -1 \end{pmatrix} \\&\qquad + \frac{\mathfrak {s}_{i'}}{2 \sqrt{\alpha _{i'} \gamma _{jN+i'}}} \begin{pmatrix} 1 &{}\quad -1\\ -1 &{}\quad 1 \end{pmatrix} + \frac{1}{\sqrt{\gamma _{jN}}} \mathcal {E}_j. \end{aligned}$$

By (3.9) and (2.18), we have

$$\begin{aligned}&\sqrt{\frac{\gamma _{(j+1)N+i-1}}{\alpha _{i-1}}} \sum _{i' = i}^{N+i-1} \frac{\alpha _{i'}}{\alpha _{i'-1}} \frac{x \mathfrak {t}\big (\varepsilon [\mathfrak {X}_{i'}(0)]_{21} \big )-\mathfrak {u}_{i'}}{\gamma _{jN+i'}} \\&\quad = \sqrt{\frac{\alpha _{i-1}}{\gamma _{(j+1)N+i-1}}} \upsilon + \frac{1}{\sqrt{\gamma _{jN}}} \delta _j, \end{aligned}$$

and

$$\begin{aligned} \sum _{i' = i}^{N+i-1} \frac{\alpha _{i'}}{\alpha _{i'-1}} \frac{\mathfrak {s}_{i'}}{\sqrt{\alpha _{i'} \gamma _{jN+i'}}} = \sqrt{\frac{\alpha _{i-1}}{\gamma _{(j+1)N+i-1}}} \mathfrak {S}+ \frac{1}{\sqrt{\gamma _{jN}}} \delta _j. \end{aligned}$$

Therefore,

$$\begin{aligned}&\sum _{i' = i}^{N+i-1} \frac{\alpha _{i'}}{\alpha _{i'-1}} \begin{pmatrix} 1 &{}\quad 1 \\ \text {e}^{\vartheta _j} &{}\quad \text {e}^{-\vartheta _j} \end{pmatrix}^{-1} T_{i'}^{-1} \begin{pmatrix} \xi _{jN+i'} &{}\quad \zeta _{jN+i'}+\frac{x \mathfrak {t}}{\gamma _{jN+i'}} \\ 0 &{}\quad 0 \end{pmatrix} T_{i'} \begin{pmatrix} 1 &{}\quad 1 \\ \text {e}^{\vartheta _j} &{}\quad \text {e}^{-\vartheta _j} \end{pmatrix} \\&\quad = \frac{\vartheta _j}{2} \frac{\upsilon }{|{\tau } |} \begin{pmatrix} 1 &{}\quad 1 \\ -1 &{}\quad -1 \end{pmatrix} + \frac{\sqrt{\alpha _{i-1}} \mathfrak {S}}{2\sqrt{\gamma _{(j+1)N+i-1}}} \begin{pmatrix} 1 &{}\quad -1\\ -1 &{}\quad 1 \end{pmatrix} + \frac{1}{\sqrt{\gamma _{jN}}} \mathcal {E}_j. \end{aligned}$$

Finally, by (3.15) we get

$$\begin{aligned} Z_{j+1}^{-1} X_{jN+i} Z_j = \varepsilon \left\{ {\text {Id}}+ \sqrt{\frac{\alpha _{i-1}}{\gamma _{(j+1)N+i-1}}} \mathcal {R}_i + \frac{1}{\sqrt{\gamma _{jN}}} \mathcal {E}_j \right\} \end{aligned}$$

where

$$\begin{aligned} \mathcal {R}_i(x) = \frac{\sqrt{|{\tau (x)} |}}{2} \begin{pmatrix} 1 &{}\quad -1\\ 1 &{}\quad -1 \end{pmatrix} - \frac{\upsilon (x)}{2 \sqrt{|{\tau (x)} |}} \begin{pmatrix} 1 &{}\quad 1\\ -1 &{}\quad -1 \end{pmatrix} - \frac{\mathfrak {S}}{2} \begin{pmatrix} 1 &{}\quad -1\\ -1 &{}\quad 1 \end{pmatrix} \end{aligned}$$

which finishes the proof. \(\square \)

Corollary 3.3

Suppose that the hypotheses of Theorem 3.2 are satisfied. Then

$$\begin{aligned} \lim _{j \rightarrow \infty } \gamma _{(j+1)N+i-1} {\text {discr}}\big (X_{jN+i} \big ) = 4 \tau \alpha _{i-1} \end{aligned}$$

locally uniformly on \(\Lambda \) where \(\tau \) is defined in (2.17).

Proof

We write

$$\begin{aligned} Z_j^{-1} X_{jN+i} Z_j = \big ( Z_j^{-1} Z_{j+1} \big ) \big ( Z_{j+1}^{-1} X_{jN+i} Z_j \big ), \end{aligned}$$

thus by Theorems 3.1 and 3.2, we obtain

$$\begin{aligned} \varepsilon Z_j^{-1} X_{jN+i} Z_j&= \bigg ( {\text {Id}}+ \sqrt{\frac{\alpha _{i-1}}{\gamma _{(j+1)N+i-1}}} Q_j \bigg ) \bigg ( {\text {Id}}+ \sqrt{\frac{\alpha _{i-1}}{\gamma _{(j+1)N+i-1}}} R_j \bigg ) \\&= {\text {Id}}+ \sqrt{\frac{\alpha _{i-1}}{\gamma _{(j+1)N+i-1}}} (Q_j + R_j) + \frac{\alpha _{i-1}}{\gamma _{(j+1)N+i-1}} Q_j R_j. \end{aligned}$$

Hence,

$$\begin{aligned} \frac{\gamma _{(j+1)N+i-1}}{\alpha _{i-1}} {\text {discr}}(X_{jN+i}) = {\text {discr}}\bigg ( R_j + Q_j + \sqrt{\frac{\alpha _{i-1}}{\gamma _{(j+1)N+i-1}}} Q_j R_j \bigg ), \end{aligned}$$

and consequently,

$$\begin{aligned} \lim _{j \rightarrow \infty } \frac{\gamma _{(j+1)N+i-1}}{\alpha _{i-1}} {\text {discr}}(X_{jN+i}) = {\text {discr}}(\mathcal {R}_i). \end{aligned}$$

Since \({\text {discr}}\mathcal {R}_i = 4 \tau \), the conclusion follows. \(\square \)
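The identity \({\text {discr}}\mathcal {R}_i = 4\tau \) can be sanity-checked numerically from the explicit formula (3.8). The sketch below (not part of the proof; the sample values standing in for \(\sqrt{|{\tau (x)} |}\), \(\upsilon (x)\) and \(\mathfrak {S}\) are arbitrary) confirms that \({\text {tr}}\mathcal {R}_i = -\mathfrak {S}\), \(\det \mathcal {R}_i = \upsilon \), and hence \({\text {discr}}\mathcal {R}_i = \mathfrak {S}^2 - 4\upsilon \), the expression identified with \(4\tau \) via (2.17).

```python
# Sanity check (not part of the proof) of discr(R_i) for the matrix in (3.8).
# s plays the role of sqrt(|tau(x)|), v of upsilon(x), S of the constant
# fraktur-S; the sample values below are arbitrary.
s, v, S = 1.7, 0.3, 0.9

# entries of R_i(x) assembled from the three matrices in (3.8)
R = [[ s/2 - v/(2*s) - S/2, -s/2 - v/(2*s) + S/2],
     [ s/2 + v/(2*s) + S/2, -s/2 + v/(2*s) - S/2]]

tr    = R[0][0] + R[1][1]
det   = R[0][0]*R[1][1] - R[0][1]*R[1][0]
discr = tr**2 - 4*det

assert abs(tr - (-S)) < 1e-12             # tr R_i = -S, as used in Section 4
assert abs(det - v) < 1e-12               # det R_i = upsilon
assert abs(discr - (S**2 - 4*v)) < 1e-12  # discr R_i = S^2 - 4*upsilon (= 4*tau)
```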

4 Essential Spectrum

In this section we want to understand spectral properties of Jacobi operators corresponding to tempered N-periodically modulated sequences. We set

$$\begin{aligned} \Lambda _- = \tau ^{-1}\big ((-\infty , 0)\big ), \quad \text {and}\quad \Lambda _+ = \tau ^{-1}\big ((0, +\infty )\big ). \end{aligned}$$

Observe that if \(\mathfrak {t}= 0\), then at least one of the sets \(\Lambda _-\) and \(\Lambda _+\) is empty. Similarly to the proof of [61, Theorem 4.1], we use the shifted conjugation (Theorems 3.1 and 3.2) together with a variant of Levinson's theorem for discrete systems developed in [60, Theorem 4.4].

Theorem 4.1

Let N be a positive integer. Let \((\gamma _n: n \in \mathbb {N})\) be a sequence of positive numbers tending to infinity and satisfying (1.2) and (1.3). Let \((a_n: n \in \mathbb {N}_0)\) and \((b_n: n \in \mathbb {N}_0)\) be \(\gamma \)-tempered N-periodically modulated Jacobi parameters such that \(\mathfrak {X}_0(0)\) is a non-trivial parabolic element. Suppose that (1.4) holds true with \(\varepsilon = {\text {sign}}({{\text {tr}}\mathfrak {X}_0(0)})\). If the operator A is self-adjoint then

$$\begin{aligned} \sigma _{\textrm{ess}}(A) \cap \Lambda _+ = \emptyset . \end{aligned}$$

Proof

Let us fix a compact interval \(K \subset \Lambda _+\) and \(i \in \{0, 1, \ldots , N-1\}\). We set

$$\begin{aligned} Y_j = Z_{j+1}^{-1} X_{jN+i} Z_j \end{aligned}$$

where \(Z_j\) is the matrix defined in (3.1). By Theorem 3.2, we have

$$\begin{aligned} Y_j = \varepsilon \bigg ({\text {Id}}+ \sqrt{\frac{\alpha _{i-1}}{\gamma _{(j+1)N+i-1}}} R_j\bigg ) \end{aligned}$$
(4.1)

where \((R_j: j \in \mathbb {N})\) is a sequence from \(\mathcal {D}_1\big (K, {\text {Mat}}(2, \mathbb {R}) \big )\) convergent uniformly on \(K\) to the matrix \(\mathcal {R}_i\) given by formula (3.8). By Corollary 3.3, there are \(j_0 \ge 0\) and \(\delta > 0\) such that for all \(j \ge j_0\) and \(x \in K\),

$$\begin{aligned} {\text {discr}}R_j(x) \ge \delta . \end{aligned}$$
(4.2)

In particular, the matrix \(R_j(x)\) has two eigenvalues

$$\begin{aligned} \xi _j^+(x) = \frac{{\text {tr}}R_j(x) + \sqrt{{\text {discr}}R_j(x)}}{2} \ \,\text { and }\ \, \xi _j^-(x) = \frac{{\text {tr}}R_j(x) - \sqrt{{\text {discr}}R_j(x)}}{2}. \end{aligned}$$

In view of (4.1), the matrix \(Y_j(x)\) has eigenvalues

$$\begin{aligned} \lambda _j^+ = \varepsilon \bigg (1 + \sqrt{\frac{\alpha _{i-1}}{\gamma _{(j+1)N+i-1}}} \xi _j^+\bigg ) \quad \text {and}\quad \lambda _j^- = \varepsilon \bigg (1 + \sqrt{\frac{\alpha _{i-1}}{\gamma _{(j+1)N+i-1}}} \xi _j^-\bigg ). \end{aligned}$$
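The formulas for \(\xi _j^\pm \) are the standard trace–discriminant expressions for the eigenvalues of a real \(2 \times 2\) matrix. A minimal numeric illustration (the sample matrix is arbitrary):

```python
import math

# For a real 2x2 matrix with positive discriminant, the two eigenvalues are
# (tr +- sqrt(discr))/2; we verify both are roots of the characteristic
# polynomial.  The sample matrix R is arbitrary.
R = [[2.0, 1.0],
     [1.5, -0.5]]
tr    = R[0][0] + R[1][1]
det   = R[0][0]*R[1][1] - R[0][1]*R[1][0]
discr = tr*tr - 4*det
assert discr > 0

xi_plus  = (tr + math.sqrt(discr)) / 2
xi_minus = (tr - math.sqrt(discr)) / 2

for xi in (xi_plus, xi_minus):
    char = (R[0][0] - xi)*(R[1][1] - xi) - R[0][1]*R[1][0]  # det(R - xi*Id)
    assert abs(char) < 1e-12
```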

By possibly increasing \(j_0\), we can guarantee that

$$\begin{aligned} |\lambda _j^-(x)| \le |\lambda _j^+(x)| \end{aligned}$$
(4.3)

for all \(j \ge j_0\) and \(x \in K\).

Now, by the Stolz–Cesàro theorem (see e.g. [35, Section 3.1.7]), (1.3) implies that

$$\begin{aligned} \lim _{j \rightarrow \infty } \frac{\sqrt{\gamma _{jN+i}}}{j} = \lim _{j \rightarrow \infty } \big (\sqrt{\gamma _{(j+1)N+i}} - \sqrt{\gamma _{jN+i}}\big ) = 0, \end{aligned}$$
(4.4)

and so

$$\begin{aligned} \sum _{j = 0}^\infty \frac{1}{\sqrt{\gamma _{jN+i}}} = \infty . \end{aligned}$$
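The implication used here is elementary: if the increments of \(\sqrt{\gamma _{jN+i}}\) tend to zero, then \(\sqrt{\gamma _{jN+i}}\) grows sublinearly, so the terms \(1/\sqrt{\gamma _{jN+i}}\) eventually dominate a multiple of 1/j and the series diverges. A numeric illustration with the model choice \(\gamma _j = j\) (an assumption made only for this sketch):

```python
import math

# With gamma_j = j the increments of sqrt(gamma_j) tend to 0 (as in (4.4)),
# and the partial sums of 1/sqrt(gamma_j) grow without bound: the integral
# comparison gives sum_{j<=n} 1/sqrt(j) >= 2*(sqrt(n+1) - 1).
n = 10_000
partial = sum(1.0 / math.sqrt(j) for j in range(1, n + 1))

assert partial >= 2.0 * (math.sqrt(n + 1) - 1.0)   # integral lower bound
assert math.sqrt(n) - math.sqrt(n - 1) < 1e-2      # increments are already small
```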

Therefore, by (4.2), we can apply [60, Theorem 4.4] to the system

$$\begin{aligned} \Psi _{j+1} = Y_j \Psi _j. \end{aligned}$$
(4.5)

Consequently, there are \((\Psi _j^-: j \ge j_0)\) and \((\Psi _j^+: j \ge j_0)\) such that

$$\begin{aligned} \begin{aligned}&\lim _{j \rightarrow \infty } \sup _{x \in K}{ \bigg \Vert \frac{\Psi _j^+(x)}{\prod _{k = j_0}^{j-1} \lambda ^+_k(x)} - v^+(x) \bigg \Vert } = 0, \quad \text {and}\\&\quad \lim _{j \rightarrow \infty } \sup _{x \in K}{ \bigg \Vert \frac{\Psi _j^-(x)}{\prod _{k = j_0}^{j-1} \lambda ^-_k(x)} - v^-(x) \bigg \Vert } = 0 \end{aligned} \end{aligned}$$
(4.6)

where \(v^-(x)\) and \(v^+(x)\) are continuous eigenvectors of \(\mathcal {R}_i(x)\) corresponding to

$$\begin{aligned} \xi ^+(x) = \frac{{\text {tr}}\mathcal {R}_i(x) + \sqrt{{\text {discr}}\mathcal {R}_i(x)}}{2} \quad \text {and}\quad \xi ^-(x) = \frac{{\text {tr}}\mathcal {R}_i(x) - \sqrt{{\text {discr}}\mathcal {R}_i(x)}}{2}. \end{aligned}$$

Since \(\tau (x) > 0\), by means of (3.8) one can verify that

$$\begin{aligned} v_1^+(x)+v_2^+(x) \ne 0, \quad \text {and}\quad v_1^-(x)+v_2^-(x) \ne 0. \end{aligned}$$
(4.7)

Indeed, otherwise \(e_1 - e_2\) would be an eigenvector of \(\mathcal {R}_i(x)\), but

$$\begin{aligned} \mathcal {R}_i(x) (e_1 - e_2)&= \big (\sqrt{|{\tau (x)} |} - \mathfrak {S}\big )e_1 + \big (\sqrt{|{\tau (x)} |} + \mathfrak {S}\big ) e_2 \\&= \big (\sqrt{|{\tau (x)} |} - \mathfrak {S}\big )(e_1-e_2) + 2 \sqrt{|{\tau (x)} |} e_2, \end{aligned}$$

thus we would need \(2 \sqrt{|{\tau (x)} |} = 0\), that is \(\tau (x) = 0\), which is impossible.

Now, by (4.5) the sequences \(\Phi _j^\pm = Z_j \Psi _j^\pm \) satisfy

$$\begin{aligned} \Phi _{j+1} = X_{jN+i} \Phi _j, \quad j \ge j_0. \end{aligned}$$

We set

$$\begin{aligned} \phi _1^\pm = B_1^{-1} \ldots B_{j_0}^{-1} \Phi _{j_0}^\pm \end{aligned}$$

and

$$\begin{aligned} \phi _{n+1}^\pm = B_n \phi _n^\pm , \quad n \ge 1. \end{aligned}$$

Then for \(jN+i' > j_0N+i\) with \(i' \in \{0, 1, \ldots , N-1\}\), we get

$$\begin{aligned} \phi _{jN+i'}^\pm = {\left\{ \begin{array}{ll} B_{jN+i'}^{-1} B_{jN+i'+1}^{-1} \ldots B_{jN+i-1}^{-1} \Phi _j^\pm &{}\text {if }\quad i' \in \{0, 1, \ldots , i-1\}, \\ \Phi _j^\pm &{} \text {if }\quad i ' = i,\\ B_{jN+i'-1} B_{jN+i'-2} \ldots B_{jN+i} \Phi _j^\pm &{} \text {if }\quad i' \in \{i+1, \ldots , N-1\}. \end{array}\right. } \end{aligned}$$

Since for \(i' \in \{0, 1, \ldots , i-1\}\),

$$\begin{aligned} \lim _{j \rightarrow \infty } B_{jN+i'}^{-1} B_{jN+i'+1}^{-1} \ldots B_{jN+i-1}^{-1} = \mathfrak {B}_{i'}^{-1}(0) \mathfrak {B}_{i'+1}^{-1}(0) \cdots \mathfrak {B}_{i-1}^{-1}(0), \end{aligned}$$

and

$$\begin{aligned} \lim _{j \rightarrow \infty } Z_j v^\pm = (v_1^\pm +v_2^\pm ) T_i (e_1 + e_2), \end{aligned}$$

we obtain

$$\begin{aligned} \lim _{j \rightarrow \infty } \sup _K{ \bigg \Vert \frac{\phi ^\pm _{jN+i'}}{\prod _{k = j_0}^{j-1} \lambda ^\pm _k} - (v_1^\pm +v_2^\pm ) T_{i'} (e_1 + e_2) \bigg \Vert } = 0. \end{aligned}$$
(4.8)

Analogously, we can show that (4.8) holds true also for \(i' \in \{i+1, \ldots , N-1\}\).

Since \((\phi _n^\pm : n \in \mathbb {N})\) satisfies (1.2), the sequence \((u_n^\pm (x): n \in \mathbb {N}_0)\) defined as

$$\begin{aligned} u_n^\pm (x) ={\left\{ \begin{array}{ll} \langle \phi _1^\pm (x), e_1 \rangle &{} \text {if }\quad n = 0, \\ \langle \phi _n^\pm (x), e_2 \rangle &{} \text {if }\quad n \ge 1, \end{array}\right. } \end{aligned}$$

is a generalized eigenvector associated to \(x \in K\), provided that \((u_0^\pm , u_1^\pm ) \ne 0\) on K. Suppose on the contrary that there is \(x \in K\) such that \(\phi _1^\pm (x) = 0\). Hence, \(\phi _n^\pm (x) = 0\) for all \(n \in \mathbb {N}\), thus by (4.7) and (4.8) we must have \(T_0(e_1 + e_2) = 0\) which is impossible since \(T_0\) is invertible.

Consequently, \((u_n^+(x): n \in \mathbb {N}_0)\) and \((u_n^-(x): n \in \mathbb {N}_0)\) are two generalized eigenvectors associated with \(x \in K\) with different asymptotic behavior, thus they are linearly independent.

Now, let us suppose that A is self-adjoint. By the proof of [47, Theorem 5.3], if

$$\begin{aligned} \sum _{n = 0}^\infty \sup _{x \in K}{\big |u^-_n(x)\big |^2} < \infty \end{aligned}$$
(4.9)

then \(K \cap \sigma _{\textrm{ess}}(A) = \emptyset \), and since K is any compact subset of \(\Lambda _+\) this implies that \(\sigma _{\textrm{ess}}(A) \cap \Lambda _+ = \emptyset \). Hence, it is enough to show (4.9). Let us observe that by (4.8), for each \(i' \in \{0, 1, \ldots , N-1\}\), \(j > j_0\), and \(x \in K\),

$$\begin{aligned} |u_{jN+i'}^-(x)| \le c \prod _{k = j_0}^{j-1} |\lambda _k^-(x)|. \end{aligned}$$
(4.10)

Since \((R_j: j \in \mathbb {N})\) converges to \(\mathcal {R}_i\) uniformly on K, and

$$\begin{aligned} \lim _{n \rightarrow \infty } \gamma _n = \infty , \end{aligned}$$

there is \(j_1 \ge j_0\), such that for \(j > j_1\),

$$\begin{aligned} \sqrt{\frac{\alpha _{i-1}}{\gamma _{(j+1)N+i-1}}} \Big ( |{\text {tr}}R_j(x)| + \sqrt{{\text {discr}}R_j(x)}\Big )\le 1. \end{aligned}$$

Therefore, for \(j \ge j_1\),

$$\begin{aligned} |\lambda _j^-(x)| = 1 + \sqrt{\frac{\alpha _{i-1}}{\gamma _{(j+1)N+i-1}}} \frac{ {\text {tr}}R_j(x) - \sqrt{{\text {discr}}R_j(x)} }{2}. \end{aligned}$$

By (3.8) and Proposition 2.2, \({\text {tr}}\mathcal {R}_i = -\mathfrak {S}\le 0\), thus (4.4) implies that

$$\begin{aligned} \lim _{j \rightarrow \infty } j \sqrt{\frac{\alpha _{i-1}}{\gamma _{(j+1)N+i-1}}} \frac{{\text {tr}}R_j(x) - \sqrt{{\text {discr}}R_j(x)}}{2} = -\infty . \end{aligned}$$

In particular, there is \(j_2 \ge j_1\) such that for all \(j > j_2\),

$$\begin{aligned} \sup _{x \in K} {|\lambda _j^-(x)|} \le 1 - \frac{1}{j}. \end{aligned}$$

Consequently, by (4.10), there is \(c' > 0\) such that for all \(i' \in \{0, 1, \ldots , N-1\}\) and \(j > j_2\),

$$\begin{aligned} \sup _{x \in K}{|u_{jN+i'}^-(x)|} \le c \prod _{k = j_2}^{j-1} \bigg (1 - \frac{1}{k} \bigg ) \le \frac{c'}{j}, \end{aligned}$$

which leads to (4.9) and the theorem follows. \(\square \)
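The final product bound is a telescoping identity: \(\prod _{k=m}^{j-1} (1 - 1/k) = (m-1)/(j-1)\), which is of order 1/j. A quick numeric confirmation (arbitrary choices of m and j):

```python
# prod_{k=m}^{j-1} (1 - 1/k) = prod_{k=m}^{j-1} (k-1)/k telescopes to
# (m-1)/(j-1), the O(1/j) bound used above.  m and j are arbitrary.
m, j = 5, 200
prod = 1.0
for k in range(m, j):
    prod *= 1.0 - 1.0 / k

assert abs(prod - (m - 1) / (j - 1)) < 1e-12
```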

Remark 4.2

In Sect. 9 we characterize when A is self-adjoint. In particular, Theorem 9.1 settles the problem when \(\Lambda _- \ne \emptyset \). If \(\Lambda _- = \emptyset \) but \(\Lambda _+ \ne \emptyset \), the formula (9.5) is a necessary and sufficient condition for self-adjointness of A.

5 Uniform Diagonalization

Fix a positive integer N and \(i \in \{0, 1, \ldots , N-1\}\). Let \((\gamma _n: n \in \mathbb {N})\) be a sequence of positive numbers tending to infinity and satisfying (1.2) and (1.3). Let \((a_n: n \in \mathbb {N}_0)\) and \((b_n: n \in \mathbb {N}_0)\) be \(\gamma \)-tempered N-periodically modulated Jacobi parameters such that \(\mathfrak {X}_0(0)\) is non-diagonalizable and let \(\varepsilon = {\text {sign}}({{\text {tr}}\mathfrak {X}_0(0)})\). Suppose that (1.4) holds true. Assume that \(\Lambda _- \ne \emptyset \). Let us consider a compact interval \(K \subset \Lambda _-\) and a generalized eigenvector \((u_n: n \in \mathbb {N}_0)\) associated to \(x \in K\) and corresponding to \(\eta \in \mathbb {S}^1\). We set

$$\begin{aligned} Y_j = Z_{j+1}^{-1} X_{jN+i} Z_j \end{aligned}$$
(5.1)

and

$$\begin{aligned} \vec {v}_j(\eta , x) = Z_j^{-1}(x) \vec {u}_{jN+i}(\eta , x) \end{aligned}$$
(5.2)

where \(Z_j\) is defined in (3.1). In view of Theorem 3.2, we have

$$\begin{aligned} Y_j = \varepsilon \bigg ({\text {Id}}+ \sqrt{\frac{\alpha _{i-1}}{\gamma _{(j+1)N+i-1}}} R_j \bigg ) \end{aligned}$$

where \((R_j)\) is a sequence from \(\mathcal {D}_1\big (K, {\text {Mat}}(2, \mathbb {R})\big )\) convergent to \(\mathcal {R}_i\) given by (3.8). Since \({\text {discr}}\mathcal {R}_i < 0\) on \(K\), we have

$$\begin{aligned} |[\mathcal {R}_i(x)]_{12}| > 0 \end{aligned}$$

and there are \(\delta > 0\) and \(j_0 \ge 1\) such that for all \(j \ge j_0\) and \(x \in K\),

$$\begin{aligned} {\text {discr}}R_j(x) < -\delta , \quad \text {and}\quad |[R_j(x)]_{12}| > \delta . \end{aligned}$$

Therefore, \(R_j(x)\) has two eigenvalues \(\xi _j(x)\) and \(\overline{\xi _j(x)}\) where

$$\begin{aligned} \xi _j(x) = \frac{{\text {tr}}R_j(x) + i \varepsilon \sqrt{-{\text {discr}}R_j(x)}}{2}. \end{aligned}$$
(5.3)

Moreover,

$$\begin{aligned} R_j = C_j \begin{pmatrix} \xi _j &{}\quad 0 \\ 0 &{}\quad \overline{\xi _j} \end{pmatrix} C_j^{-1} \end{aligned}$$

where

$$\begin{aligned} C_j = \begin{pmatrix} 1 &{}\quad 1 \\ \frac{\xi _j - [R_j]_{11}}{[R_j]_{12}} &{}\quad \frac{\overline{\xi _j} - [R_j]_{11}}{[R_j]_{12}} \end{pmatrix}. \end{aligned}$$
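The factorization can be checked numerically: for any real \(2 \times 2\) matrix with negative discriminant and nonzero (1, 2) entry, the matrix built as \(C_j\) above conjugates it to \({\text {diag}}(\xi _j, \overline{\xi _j})\). A self-contained sketch (arbitrary sample matrix, taking \(\varepsilon = 1\)):

```python
import math

# Check that C diagonalizes R when discr(R) < 0, following (5.3) and the
# definition of C_j above.  R is an arbitrary sample matrix; eps = 1.
R = [[0.5, -1.0],
     [2.0,  0.5]]
tr    = R[0][0] + R[1][1]
det   = R[0][0]*R[1][1] - R[0][1]*R[1][0]
discr = tr*tr - 4*det
assert discr < 0 and R[0][1] != 0

xi = (tr + 1j * math.sqrt(-discr)) / 2          # formula (5.3) with eps = 1
C  = [[1.0, 1.0],
      [(xi - R[0][0]) / R[0][1], (xi.conjugate() - R[0][0]) / R[0][1]]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

detC = C[0][0]*C[1][1] - C[0][1]*C[1][0]
Cinv = [[ C[1][1]/detC, -C[0][1]/detC],
        [-C[1][0]/detC,  C[0][0]/detC]]
D = [[xi, 0.0], [0.0, xi.conjugate()]]

CDCinv = matmul(matmul(C, D), Cinv)             # should reproduce R
assert all(abs(CDCinv[i][j] - R[i][j]) < 1e-12
           for i in range(2) for j in range(2))
```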

Using (5.1), \(Y_j(x)\) has two eigenvalues \(\lambda _j(x)\) and \(\overline{\lambda _j(x)}\) where

$$\begin{aligned} \lambda _j(x) = \varepsilon \bigg (1 + \sqrt{\frac{\alpha _{i-1}}{\gamma _{(j+1)N+i-1}}} \xi _j(x) \bigg ). \end{aligned}$$

Moreover,

$$\begin{aligned} Y_j = C_j D_j C_j^{-1} \end{aligned}$$
(5.4)

where

$$\begin{aligned} D_j = \begin{pmatrix} \lambda _j &{}\quad 0 \\ 0 &{}\quad \overline{\lambda _j} \end{pmatrix}. \end{aligned}$$
(5.5)

Theorem 3.2 implies that \((C_j: j \ge j_0)\) and \((D_j: j \ge j_0)\) belong to \(\mathcal {D}_1\big (K, {\text {Mat}}(2, \mathbb {C})\big )\). By (3.8), there is a mapping \(C_\infty : K \rightarrow {\text {GL}}(2, \mathbb {C})\) such that

$$\begin{aligned} \lim _{j \rightarrow \infty } C_j = C_\infty \end{aligned}$$

uniformly on K.

Claim 5.1

There is \(c > 0\) such that for all \(j \ge L > j_0\),

$$\begin{aligned} \Vert \vec {v}_j \Vert \le c \bigg (\prod _{k = L}^{j-1} \big \Vert D_k \big \Vert \bigg ) \Vert \vec {v}_{L} \Vert \end{aligned}$$

uniformly on \(\mathbb {S}^1 \times K\).

For the proof, we write

$$\begin{aligned} \vec {v}_j = Y_{j-1} \ldots Y_{L} \vec {v}_{L}, \end{aligned}$$

thus

$$\begin{aligned} \Vert \vec {v}_j\Vert \le \big \Vert Y_{j-1} \ldots Y_{L} \big \Vert \Vert \vec {v}_{L}\Vert . \end{aligned}$$

Next,

$$\begin{aligned} Y_{j-1} \ldots Y_{L} = C_{j-1} \Bigg (\prod _{k=L}^{j-1} \big ( D_{k} C_{k}^{-1} C_{k-1} \big ) \Bigg ) C_{L-1}^{-1} \end{aligned}$$

and so

$$\begin{aligned} \big \Vert Y_{j-1} \ldots Y_{L} \big \Vert&\le \big \Vert C_{j-1} \big \Vert \Bigg ( \prod _{k=L}^{j-1} \big \Vert D_{k} C_{k}^{-1} C_{k-1} \big \Vert \Bigg ) \big \Vert C_{L-1}^{-1} \big \Vert \\&\le c \prod _{k = L}^{j-1} \Vert D_k\Vert \end{aligned}$$

where the last estimate follows by [57, Proposition 1], proving Claim 5.1.

Next, we show the following statement.

Claim 5.2

We have

$$\begin{aligned} \lim _{j \rightarrow \infty } \frac{a_{jN+i-1}}{\sqrt{\gamma _{jN+i-1}}} \prod _{k=j_0}^{j-1} |\lambda _k|^2 = \frac{a_{j_0N+i-1} \sinh \vartheta _{j_0}}{\sqrt{\alpha _{i-1} |\tau |}} > 0 \end{aligned}$$

uniformly on \(K \subset \Lambda _-\).

By (5.4) and (5.5),

$$\begin{aligned} |\lambda _k(x)|^2 = \det D_k(x) = \det Y_k(x), \end{aligned}$$

which together with (5.1) gives

$$\begin{aligned} |\lambda _k(x)|^2 = \frac{\sinh \vartheta _k(x)}{\sinh \vartheta _{k+1}(x)} \cdot \frac{a_{kN+i-1}}{a_{(k+1)N+i-1}}. \end{aligned}$$

Hence,

$$\begin{aligned} \prod _{k=j_0}^{j-1} |\lambda _k(x)|^2 = \frac{\sinh \vartheta _{j_0}(x)}{\sinh \vartheta _j(x)} \cdot \frac{a_{j_0N+i-1}}{a_{jN+i-1}}, \end{aligned}$$

and since

$$\begin{aligned} \lim _{j \rightarrow \infty } \frac{\sinh \vartheta _j(x)}{\vartheta _j(x)} = 1 \end{aligned}$$

the claim follows.

6 Generalized Shifted Turán Determinants

In this section we study generalized N-shifted Turán determinants. Namely, for \(\eta \in \mathbb {R}^2 {\setminus } \{0\}\) and \(x \in \mathbb {R}\) we consider

$$\begin{aligned} S_n(\eta , x) = a_{n+N-1} \sqrt{\gamma _{n+N-1}} \big \langle E \vec {u}_{n+N}, \vec {u}_n \big \rangle \end{aligned}$$

where \((\vec {u}_n: n \in \mathbb {N}_0)\) corresponds to a generalized eigenvector associated to x and corresponding to \(\eta \), and

$$\begin{aligned} E = \begin{pmatrix} 0 &{}\quad -1 \\ 1 &{}\quad 0 \end{pmatrix}. \end{aligned}$$

Theorem 6.1

Let N be a positive integer and \(i \in \{0, 1, \ldots , N-1\}\). Let \((\gamma _n: n \in \mathbb {N})\) be a sequence of positive numbers tending to infinity and satisfying (1.2) and (1.3). Let \((a_n: n \in \mathbb {N}_0)\) and \((b_n: n \in \mathbb {N}_0)\) be \(\gamma \)-tempered N-periodically modulated Jacobi parameters such that \(\mathfrak {X}_0(0)\) is a non-trivial parabolic element. Suppose that (1.4) holds true with \(\varepsilon = {\text {sign}}({{\text {tr}}\mathfrak {X}_0(0)})\). Then the sequence \((|S_{jN+i}|: j \in \mathbb {N})\) converges locally uniformly on \(\mathbb {S}^1 \times \Lambda _-\) to a positive continuous function.

Proof

We use the uniform diagonalization described in Sect. 5. Let us define

$$\begin{aligned} \tilde{S}_j = a_{(j+1)N+i-1} \sqrt{\gamma _{(j+1)N+i-1}} (\det Z_j) \langle E \vec {v}_{j+1}, \vec {v}_j \rangle . \end{aligned}$$
(6.1)

The first step is to show that \((\tilde{S}_j: j \ge j_0)\) is asymptotically close to \((S_{jN+i}: j \ge j_0)\).

Claim 6.2

We have

$$\begin{aligned} \lim _{j \rightarrow \infty } \big | S_{jN+i} - \tilde{S}_j \big |= 0 \end{aligned}$$

uniformly on \(\mathbb {S}^1 \times K\).

For the proof we write

$$\begin{aligned} S_{jN+i}&= a_{(j+1)N+i-1} \sqrt{\gamma _{(j+1)N+i-1}} \langle E \vec {u}_{(j+1)N+i}, \vec {u}_{jN+i} \rangle \\&= a_{(j+1)N+i-1} \sqrt{\gamma _{(j+1)N+i-1}} \langle Z_j^* E Z_{j+1} \vec {v}_{j+1}, \vec {v}_j \rangle \\&= a_{(j+1)N+i-1} \sqrt{\gamma _{(j+1)N+i-1}} (\det Z_j) \langle E Z_j^{-1} Z_{j+1} \vec {v}_{j+1}, \vec {v}_j \rangle \end{aligned}$$

where we have used that for any \(Y \in {\text {GL}}(2, \mathbb {R})\),

$$\begin{aligned} (Y^{-1})^* E = \frac{1}{\det Y} E Y. \end{aligned}$$
(6.2)
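Identity (6.2) is the standard symplectic relation \(Y^* E Y = (\det Y) E\) for \(2 \times 2\) matrices in disguise (here \(*\) is the transpose, since the matrices are real). A quick numeric check with an arbitrary invertible matrix:

```python
# Check (Y^{-1})^T E = (1/det Y) E Y for an arbitrary invertible real 2x2 Y,
# with E the rotation matrix from Section 6.
E = [[0.0, -1.0],
     [1.0,  0.0]]
Y = [[3.0, 1.0],
     [2.0, 4.0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

detY  = Y[0][0]*Y[1][1] - Y[0][1]*Y[1][0]
Yinv  = [[ Y[1][1]/detY, -Y[0][1]/detY],
         [-Y[1][0]/detY,  Y[0][0]/detY]]
YinvT = [[Yinv[0][0], Yinv[1][0]],
         [Yinv[0][1], Yinv[1][1]]]

lhs = matmul(YinvT, E)
EY  = matmul(E, Y)
rhs = [[EY[i][j] / detY for j in range(2)] for i in range(2)]

assert all(abs(lhs[i][j] - rhs[i][j]) < 1e-12
           for i in range(2) for j in range(2))
```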

Now, by Theorem 3.1

$$\begin{aligned} S_{jN+i} - \tilde{S}_j&= a_{(j+1)N+i-1} \sqrt{\gamma _{(j+1)N+i-1}} (\det Z_j) \big \langle E (Z_j^{-1} Z_{j+1} - {\text {Id}}) \vec {v}_{j+1}, \vec {v}_j \big \rangle \\&= a_{(j+1)N+i-1} \sqrt{\alpha _{i-1}} (\det Z_j) \big \langle E Q_j \vec {v}_{j+1}, \vec {v}_j \big \rangle . \end{aligned}$$

Observe that by (5.5) and (5.4)

$$\begin{aligned} \Vert D_k\Vert ^2 = |{\lambda _k} |^2 = \lambda _k \overline{\lambda _k} = \det Y_k. \end{aligned}$$

Therefore, by (5.1),

$$\begin{aligned} \prod _{k = j_0}^{j-1} \Vert D_k\Vert ^2 = \frac{\det Z_{j_0}}{\det Z_j} \frac{a_{j_0 N+i-1}}{a_{jN+i-1}}. \end{aligned}$$

Next, in view of Claim 5.1, for \(j \ge j_0\),

$$\begin{aligned} \Vert \vec {v}_j\Vert ^2 \lesssim \prod _{k = j_0}^{j-1} \Vert D_k\Vert ^2 \lesssim \frac{1}{a_{jN+i-1} |\det Z_j|}. \end{aligned}$$

Hence,

$$\begin{aligned} \big |S_{jN+i} - \tilde{S}_j\big |&\lesssim a_{(j+1)N+i-1} \sqrt{\alpha _{i-1}} |\det Z_j| \cdot \Vert Q_j\Vert \cdot \Vert \vec {v}_j\Vert ^2 \\&\lesssim \Vert Q_j\Vert \end{aligned}$$

and the claim follows by Theorem 3.1.

We show next that the sequence \((\tilde{S}_j: j \ge j_0)\) converges uniformly on \(\mathbb {S}^1 \times K\) to a positive continuous function. By (6.1) and (6.2), we have

$$\begin{aligned} \tilde{S}_j&= a_{(j+1)N+i-1} \sqrt{\gamma _{(j+1)N+i-1}} (\det Z_j) \langle E \vec {v}_{j+1}, Y_j^{-1} \vec {v}_{j+1} \rangle \\&= a_{(j+1)N+i-1} \sqrt{\gamma _{(j+1)N+i-1}} (\det Z_j) \langle (Y_j^{-1})^* E \vec {v}_{j+1}, \vec {v}_{j+1} \rangle \\&= a_{(j+1)N+i-1} \sqrt{\gamma _{(j+1)N+i-1}} (\det Z_j \cdot \det Y_j^{-1}) \langle E Y_j \vec {v}_{j+1}, \vec {v}_{j+1} \rangle , \end{aligned}$$

and since

$$\begin{aligned} \det Y_j = \det \big ( Z_{j+1}^{-1} X_{jN+i} Z_j \big ), \end{aligned}$$

we obtain

$$\begin{aligned} \begin{aligned} \tilde{S}_j&= a_{(j+1)N+i-1} \sqrt{\gamma _{(j+1)N+i-1}} (\det Z_{j+1}) \frac{a_{(j+1)N+i-1}}{a_{jN+i-1}} \\&\quad \times \langle E Y_j \vec {v}_{j+1}, \vec {v}_{j+1} \rangle . \end{aligned} \end{aligned}$$
(6.3)

By (6.1) we have

$$\begin{aligned} \tilde{S}_{j+1} = a_{(j+2)N+i-1} \sqrt{\gamma _{(j+2)N+i-1}} (\det Z_{j+1}) \langle E Y_{j+1} \vec {v}_{j+1}, \vec {v}_{j+1} \rangle . \end{aligned}$$

Therefore, by Theorem 3.2

$$\begin{aligned} \tilde{S}_{j+1} - \tilde{S}_j = \varepsilon a_{(j+1)N+i-1} \sqrt{\gamma _{(j+1)N+i-1}} (\det Z_{j+1}) \left\langle E W_j \vec {v}_{j+1}, \vec {v}_{j+1} \right\rangle \end{aligned}$$

where

$$\begin{aligned} W_j= & {} \frac{a_{(j+2)N+i-1}}{a_{(j+1)N+i-1}} \frac{\sqrt{\gamma _{(j+2)N+i-1}}}{\sqrt{\gamma _{(j+1)N+i-1}}} \sqrt{\frac{\alpha _{i-1}}{\gamma _{(j+2)N+i-1}}} R_{j+1} \\{} & {} - \frac{a_{(j+1)N+i-1}}{a_{jN+i-1}} \sqrt{\frac{\alpha _{i-1}}{\gamma _{(j+1)N+i-1}}} R_j. \end{aligned}$$

Hence,

$$\begin{aligned} W_j = \sqrt{\frac{\alpha _{i-1}}{\gamma _{(j+1)N+i-1}}} \bigg ( \frac{a_{(j+2)N+i-1}}{a_{(j+1)N+i-1}} R_{j+1} - \frac{a_{(j+1)N+i-1}}{a_{jN+i-1}} R_j \bigg ), \end{aligned}$$

and so

$$\begin{aligned} \Vert W_j \Vert \lesssim \sqrt{\frac{\alpha _{i-1}}{\gamma _{(j+1)N+i-1}}} \Big ( \Big | \Delta \Big ( \frac{a_{(j+1)N+i-1}}{a_{jN+i-1}} \Big ) \Big | + \big \Vert \Delta R_j \big \Vert \Big ). \end{aligned}$$

Therefore,

$$\begin{aligned} \big |\tilde{S}_{j+1} - \tilde{S}_j\big | \lesssim a_{(j+1)N+i-1} |\det Z_{j+1}| \Big ( \Big | \Delta \Big ( \frac{a_{(j+1)N+i-1}}{a_{jN+i-1}} \Big ) \Big | + \big \Vert \Delta R_j \big \Vert \Big ) \Vert \vec {v}_{j+1}\Vert ^2. \end{aligned}$$

On the other hand, by (6.3),

$$\begin{aligned} \tilde{S}_j= & {} \varepsilon a_{(j+1)N+i-1} \sqrt{\gamma _{(j+1)N+i-1}} (\det Z_{j+1}) \\{} & {} \times \frac{a_{(j+1)N+i-1}}{a_{jN+i-1}} \sqrt{\frac{\alpha _{i-1}}{\gamma _{(j+1)N+i-1}}} \big \langle E R_j \vec {v}_{j+1}, \vec {v}_{j+1} \big \rangle . \end{aligned}$$

Since

$$\begin{aligned} \lim _{j\rightarrow \infty } {\text {sym}}(E R_j)&= {\text {sym}}(E \mathcal {R}_i) \nonumber \\&= \frac{\sqrt{|\tau |}}{2} \begin{pmatrix} -1 &{}\quad 1\\ 1 &{}\quad -1 \end{pmatrix} - \frac{\upsilon }{2 \sqrt{|\tau |}} \begin{pmatrix} 1 &{}\quad 1 \\ 1 &{}\quad 1 \end{pmatrix} - \frac{\mathfrak {S}}{2} \begin{pmatrix} 1 &{}\quad 0 \\ 0 &{}\quad -1 \end{pmatrix} \end{aligned}$$
(6.4)

and the matrix on the right-hand side of (6.4) has determinant equal to \(-\tau > 0\), we obtain

$$\begin{aligned} |\tilde{S}_j| \gtrsim a_{(j+1)N+i-1} |\det Z_{j+1}| \cdot \Vert \vec {v}_{j+1} \Vert ^2. \end{aligned}$$

Consequently, we arrive at

$$\begin{aligned} |\tilde{S}_{j+1} - \tilde{S}_j| \lesssim \Big ( \Big | \Delta \Big ( \frac{a_{(j+1)N+i-1}}{a_{jN+i-1}} \Big ) \Big | + \Vert \Delta R_j \Vert \Big ) |\tilde{S}_j|. \end{aligned}$$

Since \(\tilde{S}_j \ne 0\) on K, we get

$$\begin{aligned} \sum _{j = j_0}^\infty \sup _{\eta \in \mathbb {S}^1} \sup _{x \in K} { \bigg | \frac{|{\tilde{S}_{j+1}(\eta , x)} |}{|{\tilde{S}_j(\eta , x)} |} - 1 \bigg |} \lesssim \sum _{j = j_0}^\infty \bigg ( \Big | \Delta \Big ( \frac{a_{(j+1)N+i-1}}{a_{jN+i-1}} \Big ) \Big | + \sup _{x \in K}{\Vert \Delta R_j(x) \Vert } \bigg ), \end{aligned}$$

which implies that the product

$$\begin{aligned} \prod _{k = j_0}^\infty \bigg (1 + \frac{|{\tilde{S}_{k+1}} | - |{\tilde{S}_k} |}{|{\tilde{S}_k} |}\bigg ) \end{aligned}$$

converges uniformly on \(\mathbb {S}^1 \times K\) to a positive continuous function. Because

$$\begin{aligned} \bigg | \frac{\tilde{S}_j}{\tilde{S}_{j_0}} \bigg | = \prod _{k = j_0}^{j-1} \bigg (1 + \frac{|{\tilde{S}_{k+1}} | - |{\tilde{S}_k} |}{|{\tilde{S}_k} |}\bigg ), \end{aligned}$$

the same holds true for the sequence \((\tilde{S}_j: j \ge j_0)\). In view of Claim 6.2, the proof is complete. \(\square \)

Corollary 6.3

Suppose that the hypotheses of Theorem 6.1 are satisfied. Then for any compact \(K \subset \Lambda _-\) there is a constant \(c>1\) such that for any generalized eigenvector \(\vec {u}\) associated with \(x\in K\) and corresponding to \(\eta \in \mathbb {S}^1\), we have

$$\begin{aligned} c^{-1} \le \frac{a_{(j+1)N+i-1}}{\sqrt{\gamma _{(j+1)N+i-1}}} \big \Vert \vec {v}_j \big \Vert ^2 \le c \end{aligned}$$

where \(\vec {v}_j = Z_{j}^{-1} \vec {u}_{jN+i}\).

Proof

By (6.1) and Theorem 3.2 we have

$$\begin{aligned} \tilde{S}_j&= a_{(j+1)N+i-1} \sqrt{\gamma _{(j+1)N+i-1}} (\det Z_j) \langle E Y_j \vec {v}_j, \vec {v}_j \rangle \\&= \varepsilon a_{(j+1)N+i-1} \sqrt{\alpha _{i-1}} (\det Z_j) \langle E R_j \vec {v}_j, \vec {v}_j \rangle . \end{aligned}$$

Hence, by (6.4), we have

$$\begin{aligned} |\tilde{S}_j| \asymp a_{(j+1)N+i-1} |\det Z_j| \Vert \vec {v}_j \Vert ^2. \end{aligned}$$

Observe that

$$\begin{aligned} \lim _{j \rightarrow \infty } \sqrt{\gamma _{(j+1)N+i-1}} \det Z_j&= \lim _{j \rightarrow \infty } \frac{-2 \sinh \vartheta _j}{\vartheta _j} \vartheta _j \sqrt{\gamma _{(j+1)N+i-1}} \\&= -2 \sqrt{\alpha _{i-1} |\tau (x)|} \end{aligned}$$

uniformly on K. Since \((\tilde{S}_j)\) converges uniformly on \(\mathbb {S}^1 \times K\) to a positive function, the conclusion follows. \(\square \)

7 Asymptotics of the Generalized Eigenvectors

In this section we study the asymptotic behavior of generalized eigenvectors. We keep the notation introduced in Sect. 5.

Theorem 7.1

Let N be a positive integer. Let \((\gamma _n: n \in \mathbb {N})\) be a sequence of positive numbers tending to infinity and satisfying (1.2) and (1.3). Let \((a_n: n \in \mathbb {N}_0)\) and \((b_n: n \in \mathbb {N}_0)\) be \(\gamma \)-tempered N-periodically modulated Jacobi parameters such that \(\mathfrak {X}_0(0)\) is a non-trivial parabolic element. Suppose that (1.4) holds true with \(\varepsilon = {\text {sign}}({{\text {tr}}\mathfrak {X}_0(0)})\). Then for each \(i \in \{0, 1, \ldots , N-1\}\) and every compact interval \(K \subset \Lambda _-\), there are \(j_0 \in \mathbb {N}\) and a continuous function \(\varphi : \mathbb {S}^1 \times K \rightarrow \mathbb {C}\) such that for every generalized eigenvector \((u_n: n \in \mathbb {N}_0)\),

$$\begin{aligned}{} & {} \lim _{j \rightarrow \infty } \sup _{\eta \in \mathbb {S}^1} \sup _{x \in K} \bigg | \frac{\sqrt{\gamma _{(j+1)N+i-1}}}{\prod _{k = j_0}^{j-1} \lambda _k(x)} \Big (u_{(j+1)N+i}(\eta , x) - \overline{\lambda _j(x)} u_{jN+i}(\eta , x)\Big )\\{} & {} \quad - \varphi (\eta , x) \bigg | =0. \end{aligned}$$

Moreover, \(\varphi (\eta ,x) = 0\) if and only if \([\mathfrak {X}_i(0)]_{21} = 0\). Furthermore,

$$\begin{aligned} \frac{u_{jN+i}(\eta , x)}{\prod _{k = j_0}^{j-1} |\lambda _k(x)|} = \frac{|\varphi (\eta , x)|}{\sqrt{\alpha _{i-1} |\tau (x)|}} \sin \Big ( \sum _{k = j_0}^{j-1} \theta _k(x) + \arg \varphi (\eta , x) \Big ) + E_j(\eta , x) \end{aligned}$$

where

$$\begin{aligned} \theta _k(x) = \arccos \bigg (\frac{{\text {tr}}Y_k(x)}{2 \sqrt{\det Y_k(x)}} \bigg ) \end{aligned}$$
(7.1)

and

$$\begin{aligned} \lim _{j \rightarrow \infty } \sup _{\eta \in \mathbb {S}^1} \sup _{x \in K} {|E_j(\eta , x)|} = 0. \end{aligned}$$
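The angle (7.1) is the argument of the eigenvalues of \(Y_k(x)\): if Y is a real \(2 \times 2\) matrix with non-real eigenvalues \(\lambda = r e^{i\theta }\) and \(\overline{\lambda }\), then \({\text {tr}}Y = 2r\cos \theta \) and \(\det Y = r^2\), so \(\arccos \big ({\text {tr}}Y/(2\sqrt{\det Y})\big ) = |\arg \lambda |\). A numerical confirmation on a sample matrix (chosen purely for illustration):

```python
import numpy as np

def theta(Y):
    # the angle from (7.1)
    return np.arccos(np.trace(Y) / (2.0 * np.sqrt(np.linalg.det(Y))))

# sample real 2x2 matrix with non-real eigenvalues
Y = np.array([[0.8, -1.3], [0.9, 1.1]])
lam = np.linalg.eigvals(Y)
assert abs(lam[0].imag) > 0           # spectrum is genuinely complex
assert np.isclose(theta(Y), abs(np.angle(lam[0])))
print(f"theta = |arg lambda| = {theta(Y):.6f}")
```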

Proof

In the proof we use the uniform diagonalization constructed in Sect. 5. For \(j > j_0\), we set

$$\begin{aligned} \phi _j = \frac{u_{(j+1)N+i} - \overline{\lambda _j} u_{jN+i}}{\prod _{k = j_0}^{j-1} \lambda _k}. \end{aligned}$$

Let us observe that there is \(c > 0\) such that for all \(j \in \mathbb {N}\), and \(x \in K\),

$$\begin{aligned} \left\| Z_j^t e_2 - \begin{pmatrix} 1 &{}\quad 1 \\ 1 &{}\quad 1 \end{pmatrix} T_i^t e_2 \right\| \le c \vartheta _j. \end{aligned}$$
(7.2)

We show that the sequence \((\sqrt{\gamma _{(j+1) N + i-1}} \phi _j: j > j_0)\) converges uniformly on K. By (5.5), \(\Vert D_j\Vert = |\lambda _j|\), thus by Claim 5.1

$$\begin{aligned} \big | u_{(j+1)N+i} - \langle {\vec {v}_{j+1}}, {Z_j^t e_2} \rangle \big |&= \big | \big \langle \vec {v}_{j+1}, (Z^t_{j+1} - Z^t_j) e_2 \big \rangle \big | \\&\le \Vert \vec {v}_{j+1}\Vert \cdot |\vartheta _{j+1} - \vartheta _j| \\&\lesssim \bigg (\prod _{k = j_0}^{j-1} |\lambda _k| \bigg ) |\vartheta _{j+1} - \vartheta _j|. \end{aligned}$$

Hence,

$$\begin{aligned} \lim _{j \rightarrow \infty } \sqrt{\gamma _{(j+1)N+i-1}} \frac{\big | u_{(j+1)N+i} - \langle {\vec {v}_{j+1}}, {Z_j^t e_2} \rangle \big |}{\prod _{k = j_0}^{j-1} |\lambda _k|} = 0 \end{aligned}$$

uniformly on K. Next, by (5.4) we write

$$\begin{aligned} (Y_j - \overline{\lambda _j} {\text {Id}}) \vec {v}_j = C_j \begin{pmatrix} \lambda _j - \overline{\lambda _j} &{}\quad 0 \\ 0 &{}\quad 0 \end{pmatrix} C_j^{-1} \vec {v}_j, \end{aligned}$$

thus by (7.2) we obtain

$$\begin{aligned}&\left| \bigg \langle (Y_j - \overline{\lambda _j} {\text {Id}}) \vec {v}_j, Z_j^t e_2 \bigg \rangle - \bigg \langle (Y_j - \overline{\lambda _j} {\text {Id}}) \vec {v}_j, \begin{pmatrix} 1 &{}\quad 1\\ 1 &{}\quad 1 \end{pmatrix} T_i^t e_2 \bigg \rangle \right| \nonumber \\&\quad \le \vartheta _j |\lambda _j - \overline{\lambda _j}| \cdot \Vert \vec {v}_j\Vert \lesssim \vartheta _j^2 \Big (\prod _{k = j_0}^{j-1} |\lambda _k|\Big ) \end{aligned}$$
(7.3)

where in the last estimate we have used

$$\begin{aligned} |\lambda _j-\overline{\lambda _j}| = |2i \Im (\lambda _j)| = \sqrt{\frac{\alpha _{i-1}}{\gamma _{(j+1)N+i-1}}} \sqrt{-{\text {discr}}R_j}. \end{aligned}$$
(7.4)
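The equality (7.4) rests on a general fact about real \(2 \times 2\) matrices with negative discriminant: the eigenvalues are \(({\text {tr}}R \pm i \sqrt{-{\text {discr}}R})/2\), so \(|\lambda - \overline{\lambda }| = 2|\Im \lambda | = \sqrt{-{\text {discr}}R}\); the factor \(\sqrt{\alpha _{i-1}/\gamma _{(j+1)N+i-1}}\) then comes from the scaling of \(Y_j\) in terms of \(R_j\). A numerical check on a sample matrix:

```python
import numpy as np

def discr(R):
    # discriminant of the characteristic polynomial of a 2x2 matrix
    return np.trace(R) ** 2 - 4.0 * np.linalg.det(R)

R = np.array([[1.0, -2.0], [1.0, 1.0]])   # sample with discr R = -8 < 0
assert discr(R) < 0
lam = np.linalg.eigvals(R)[0]
assert np.isclose(2.0 * abs(lam.imag), np.sqrt(-discr(R)))
print("2|Im lambda| = sqrt(-discr R) on the sample")
```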

Hence, it is enough to show that the sequence \((\tilde{\phi }_j: j > j_0)\) where

$$\begin{aligned} \tilde{\phi }_j = \frac{\sqrt{\gamma _{(j+1)N+i-1}}}{\prod _{k = j_0}^{j-1} \lambda _k} \Big \langle \big ( Y_j - \overline{\lambda _j} {\text {Id}}\big ) \vec {v}_j, \begin{pmatrix} 1 &{}\quad 1 \\ 1 &{}\quad 1 \end{pmatrix} T_i^t e_2 \Big \rangle \end{aligned}$$

converges uniformly on \(\mathbb {S}^1 \times K\). To do so, for a given \(\epsilon > 0\) there is \(L_0 > j_0\) such that for all \(L \ge L_0\) we have

$$\begin{aligned} \sum _{k=L-1}^\infty \sup _{K} \Vert \Delta C_k \Vert < \epsilon . \end{aligned}$$

For \(j \ge L\), we set

$$\begin{aligned} \psi _{j;L} = \frac{\sqrt{\gamma _{(j+1)N+i-1}}}{\prod _{k = j_0}^{j-1} \lambda _k} \Big \langle C_{j} \big ( D_j - \overline{\lambda _j} {\text {Id}}\big ) \Big ( \prod _{k=L}^{j-1} D_k \Big ) C_{L-1}^{-1} \vec {v}_L, \begin{pmatrix} 1 &{}\quad 1 \\ 1 &{}\quad 1 \end{pmatrix} T_i^t e_2 \Big \rangle . \end{aligned}$$

Observe that

$$\begin{aligned} (Y_j - \overline{\lambda _j} {\text {Id}}) \vec {v}_j&= (Y_j - \overline{\lambda _j} {\text {Id}}) \Big ( \prod _{k=L}^{j-1} Y_k \Big ) \vec {v}_L \\&= C_j (D_j - \overline{\lambda _j} {\text {Id}}) C_j^{-1} \Big ( \prod _{k=L}^{j-1} C_k D_k C_k^{-1} \Big ) \vec {v}_L \\&= C_j (D_j - \overline{\lambda _j} {\text {Id}}) D_j^{-1} \Big ( \prod _{k=L}^{j} D_k C_k^{-1} C_{k-1} \Big ) C_{L-1}^{-1} \vec {v}_L. \end{aligned}$$

Hence, by [57, Proposition 1], there is \(c > 0\) such that for all \(j \ge L \ge j_0\),

$$\begin{aligned} \bigg \Vert \prod _{k=L}^j D_k C_k^{-1} C_{k-1} - \prod _{k=L}^j D_k \bigg \Vert \le c \prod _{k=L}^j |\lambda _k| \cdot \sum _{k=L-1}^\infty \sup _{K} \Vert \Delta C_k \Vert . \end{aligned}$$

Thus, by Claim 5.1, (7.4) and the fact that \(\Vert D_k\Vert = |\lambda _k|\), we obtain

$$\begin{aligned} |\tilde{\phi }_j - \psi _{j;L}| \le c \sum _{k=L-1}^\infty \sup _{K} \Vert \Delta C_k \Vert \le c\epsilon , \end{aligned}$$
(7.5)

for all \(j \ge L\). Hence, for all \(n > m \ge L\),

$$\begin{aligned} |\tilde{\phi }_n - \tilde{\phi }_m| \le c \epsilon + |\psi _{n; L} - \psi _{m; L}|. \end{aligned}$$

Therefore, our task is reduced to showing that the sequence \((\psi _{j;L}: j \ge L)\) converges uniformly on K. Since, by (7.4)

$$\begin{aligned} \frac{\sqrt{\gamma _{(j+1)N+i-1}}}{\prod _{k=L}^{j-1} \lambda _k} \big ( D_j - \overline{\lambda _j} {\text {Id}}\big ) \prod _{k=L}^{j-1} D_k&= i \sqrt{\alpha _{i-1}} \sqrt{-{\text {discr}}R_j} \begin{pmatrix} 1 &{}\quad 0 \\ 0 &{}\quad 0 \end{pmatrix} \prod _{k=L}^{j-1} \frac{D_k}{\lambda _k} \\&= i \sqrt{\alpha _{i-1}} \sqrt{-{\text {discr}}R_j} \begin{pmatrix} 1 &{}\quad 0 \\ 0 &{}\quad 0 \end{pmatrix} \end{aligned}$$

we get, uniformly on K,

$$\begin{aligned} \lim _{j \rightarrow \infty } \psi _{j; L} = \frac{i \sqrt{\alpha _{i-1}} \sqrt{-{\text {discr}}\mathcal {R}_i}}{\prod _{k=j_0}^{L-1} \lambda _k} \Big \langle C_{\infty } \begin{pmatrix} 1 &{}\quad 0 \\ 0 &{}\quad 0 \end{pmatrix} C_{L-1}^{-1} \vec {v}_{L}, \begin{pmatrix} 1 &{}\quad 1 \\ 1 &{}\quad 1 \end{pmatrix} T_i^t e_2 \Big \rangle \end{aligned}$$
(7.6)

where

$$\begin{aligned} C_{\infty } = \lim _{j \rightarrow \infty } C_j = \begin{pmatrix} 1 &{} 1\\ \frac{\xi _{\infty } - [\mathcal {R}_i]_{11}}{[\mathcal {R}_i]_{12}} &{} \frac{\overline{\xi _{\infty }} - [\mathcal {R}_i]_{11}}{[\mathcal {R}_i]_{12}} \end{pmatrix}. \end{aligned}$$
(7.7)

Thus, we have proved that both sequences \((\tilde{\phi }_j: j > j_0)\) and \((\psi _{j;L}: j \ge L)\) converge uniformly on K. Let us denote their limits by \(\tilde{\phi }_\infty \) and \(\psi _{\infty ;L}\), respectively. By (7.5), for all \(L \ge L_0\) we have

$$\begin{aligned} |\tilde{\phi }_\infty - \psi _{\infty ;L}| \le c \epsilon . \end{aligned}$$
(7.8)

Let us observe that

$$\begin{aligned} \begin{pmatrix} 1 &{}\quad 1 \\ 1 &{}\quad 1 \end{pmatrix} C_{\infty } \begin{pmatrix} 1 &{}\quad 0 \\ 0 &{}\quad 0 \end{pmatrix} = \big ( [C_{\infty }]_{11} + [C_\infty ]_{21} \big ) \begin{pmatrix} 1 &{}\quad 0 \\ 1 &{}\quad 0 \end{pmatrix}. \end{aligned}$$

By (7.7), the factor \([C_{\infty }]_{11} + [C_{\infty }]_{21}\) has non-zero imaginary part. Thus, from (7.6), we can write

$$\begin{aligned} \psi _{\infty ;L}&= \frac{h}{\prod _{k=j_0}^{L-1} \lambda _k} \Big \langle C_{L-1}^{-1} \vec {v}_L, \begin{pmatrix} 1 &{}\quad 1 \\ 0 &{}\quad 0 \end{pmatrix} T_i^t e_2 \Big \rangle \nonumber \\&= \frac{h}{\prod _{k=j_0}^{L-1} \lambda _k} \big ( [T_i]_{21} + [T_i]_{22} \big ) \langle C_{L-1}^{-1} \vec {v}_L, e_1 \rangle \end{aligned}$$
(7.9)

for some function h without zeros on \(\mathbb {S}^1 \times K\). Thus, by (2.14), if \([\mathfrak {X}_{i}(0)]_{21} = 0\), then \(\psi _{\infty ;L} \equiv 0\) for all L. Consequently, by (7.8), \(\tilde{\phi }_{\infty } \equiv 0\) on \(\mathbb {S}^1 \times K\). On the other hand, if \([\mathfrak {X}_{i}(0)]_{21} \ne 0\), then the following claim holds true.

Claim 7.2

For each \((\eta , x) \in \mathbb {S}^1 \times K\),

$$\begin{aligned} \liminf _{L \rightarrow \infty } |{\psi _{\infty ;L}(\eta , x)} | > 0. \end{aligned}$$

Suppose, on the contrary, that there are \(\eta \in \mathbb {S}^1\), \(x \in K\) and a sequence \((L_j: j \in \mathbb {N})\) such that

$$\begin{aligned} \lim _{j \rightarrow \infty } L_j = \infty , \end{aligned}$$

and

$$\begin{aligned} \lim _{j \rightarrow \infty } \psi _{\infty ;L_j} (\eta ,x) = 0. \end{aligned}$$

Setting \(\vec {v}_L = v^{L}_1 e_1 + v^L_2 e_2\), we have

$$\begin{aligned} \langle C_{L-1}^{-1} \vec {v}_L, e_1 \rangle&= \bigg \langle \begin{pmatrix} \frac{\overline{\xi _{L-1}} - [R_{L-1}]_{11}}{[R_{L-1}]_{12}} &{} -1 \\ \frac{\xi _{L-1} - [R_{L-1}]_{11}}{[R_{L-1}]_{12}} &{} 1 \end{pmatrix} \vec {v}_L, e_1 \bigg \rangle \\&= \frac{\overline{\xi _{L-1}} - [R_{L-1}]_{11}}{[R_{L-1}]_{12}} v^L_1 - v^L_2. \end{aligned}$$

Hence, by (7.9), we obtain

$$\begin{aligned} \lim _{j \rightarrow \infty } \frac{1}{\prod _{k=j_0}^{L_j-1} |\lambda _k(x)|} \bigg ( \frac{\overline{\xi _{L_j-1}(x)} - [R_{L_j-1}(x)]_{11}}{[R_{L_j-1}(x)]_{12}} v^{L_j}_1(\eta , x) - v^{L_j}_2(\eta , x) \bigg ) = 0. \end{aligned}$$

In view of (5.2), \(\vec {v}_{L_j}(\eta , x)\) is a real vector, thus by taking imaginary parts of the bracket, we conclude that

$$\begin{aligned} \lim _{j \rightarrow \infty } \frac{v^{L_j}_1(\eta , x)}{\prod _{k=j_0}^{L_j-1} |\lambda _k(x)|} = 0. \end{aligned}$$

Hence,

$$\begin{aligned} \lim _{j \rightarrow \infty } \frac{v^{L_j}_2(\eta , x)}{\prod _{k=j_0}^{L_j-1} |\lambda _k(x)|} = 0, \end{aligned}$$

which, in view of Claim 5.2, contradicts Corollary 6.3, proving the claim.

Next, let us consider \(\eta \in \mathbb {S}^1\) and \(x \in K\). By Claim 7.2,

$$\begin{aligned} A = \liminf _{L \rightarrow \infty } |{\psi _{\infty ;L}(\eta , x)} | > 0. \end{aligned}$$
(7.10)

Taking \(\epsilon = \frac{A}{2c}\), by (7.8), for all \(L \ge L_0\),

$$\begin{aligned} |{\tilde{\phi }_{\infty }(\eta , x)} | \ge |{\psi _{\infty ;L}(\eta , x)} | - c \epsilon = |{\psi _{\infty ;L}(\eta , x)} | - \frac{A}{2}. \end{aligned}$$

Thus, in view of (7.10),

$$\begin{aligned} |{\tilde{\phi }_{\infty }(\eta , x)} | \ge \frac{A}{2}. \end{aligned}$$

Consequently, \(\tilde{\phi }_{\infty }\) cannot be zero on \(\mathbb {S}^1 \times K\) provided that \([\mathfrak {X}_i(0)]_{21} \ne 0\).

In view of (7.3), there is a function \(\varphi : \mathbb {S}^1 \times K \rightarrow \mathbb {C}\) such that

$$\begin{aligned} \varphi = \lim _{j \rightarrow \infty } \sqrt{\gamma _{(j+1)N+i-1}} \phi _j \end{aligned}$$

uniformly on \(\mathbb {S}^1 \times K\). In fact, one has \(\varphi = \tilde{\phi }_{\infty }\). In particular, we obtain

$$\begin{aligned}{} & {} \lim _{j \rightarrow \infty } \sup _{\eta \in \mathbb {S}^1} \sup _{x \in K} \bigg | \sqrt{\gamma _{(j+1)N+i-1}} \frac{u_{(j+1)N+i}(\eta , x) - \overline{\lambda _j(x)} u_{jN+i}(\eta , x)}{\prod _{k = j_0}^{j-1} |\lambda _k(x)|} \\{} & {} \quad - \varphi (\eta , x) \prod _{k = j_0}^{j-1} \frac{\lambda _k(x)}{|\lambda _k(x)|} \bigg | =0. \end{aligned}$$

Since \(u_n(\eta , x) \in \mathbb {R}\), by taking imaginary parts we conclude that

$$\begin{aligned}{} & {} \lim _{j \rightarrow \infty } \sup _{\eta \in \mathbb {S}^1} \sup _{x \in K} \bigg | \frac{\sqrt{\alpha _{i-1}}}{2} \sqrt{-{\text {discr}}R_j(x)} \frac{u_{jN+i}(\eta , x)}{\prod _{k = j_0}^{j-1} |\lambda _k(x)|} \\{} & {} \quad - |{\varphi (\eta , x)} | \sin \Big (\sum _{k = j_0}^{j-1} \theta _k(x) + \arg \varphi (\eta , x) \Big ) \bigg | =0 \end{aligned}$$

where we have also used that

$$\begin{aligned} - \sqrt{\gamma _{(j+1)N+i-1}} \Im (\overline{\lambda _j(x)}) = \frac{\sqrt{\alpha _{i-1}}}{2} \sqrt{-{\text {discr}}R_j(x)}. \end{aligned}$$

Lastly, observe that

$$\begin{aligned} \bigg | \frac{1}{\sqrt{-{\text {discr}}R_j(x)}} - \frac{1}{2 \sqrt{|\tau (x)|}}\bigg | \lesssim \sum _{k = j}^\infty \big \Vert \Delta R_k(x)\big \Vert \end{aligned}$$

which completes the proof. \(\square \)

Remark 7.3

There is \(i \in \{0, 1, \ldots , N-1 \}\) such that \(|{\varphi _i(\eta , x)} | > 0\) for all \(x \in K\) and \(\eta \in \mathbb {S}^1\). Indeed, by [55, Proposition 3], if \([\mathfrak {X}_{i-1}(0)]_{21} = 0\) and \([\mathfrak {X}_i(0)]_{21} = 0\), then \(\mathfrak {X}_i(0)\) is a multiple of the identity, that is, a trivial parabolic element; a contradiction.

8 Christoffel–Darboux Kernel on the Diagonal

In this section we study the asymptotic behavior of the generalized Christoffel–Darboux kernel on the diagonal. Given Jacobi parameters \((a_n: n \in \mathbb {N}_0)\) and \((b_n: n \in \mathbb {N}_0)\), and \(\eta \in \mathbb {S}^1\), we set

$$\begin{aligned} K_n(x, y; \eta ) = \sum _{m = 0}^n u_m(x, \eta ) u_m(y, \eta ), \quad x, y \in \mathbb {R}, \end{aligned}$$

where \((u_n(x, \eta ): n \in \mathbb {N}_0)\) is the generalized eigenvector associated with x and corresponding to \(\eta \). Let

$$\begin{aligned} \rho _n = \sum _{m = 0}^n \frac{\sqrt{\alpha _m \gamma _m}}{a_m}. \end{aligned}$$

For N-periodically modulated Jacobi parameters we also study

$$\begin{aligned} K_{i;j}(x, y; \eta ) = \sum _{k=0}^j u_{kN+i}(x, \eta ) u_{kN+i}(y, \eta ), \quad x, y \in \mathbb {R}, \end{aligned}$$

where \(i \in \{0, 1, \ldots , N-1\}\). Let

$$\begin{aligned} \rho _{i;j} = \sum _{k=1}^j \frac{\sqrt{\gamma _{kN+i}}}{a_{kN+i}}. \end{aligned}$$

Lemma 8.1

Let \((\gamma _n: n \in \mathbb {N})\) and \((a_n: n \in \mathbb {N})\) be sequences of positive numbers such that

$$\begin{aligned} \lim _{n \rightarrow \infty } \gamma _n = \infty , \quad \sum _{n=0}^\infty \frac{\sqrt{\gamma _n}}{a_n} = \infty . \end{aligned}$$

Suppose that \((\xi _n: n \in \mathbb {N})\) is a sequence of real functions on some compact set \(K \subset \mathbb {R}^d\) such that

$$\begin{aligned} \lim _{n \rightarrow \infty } \sup _{x \in K}\big |\sqrt{\gamma _n} \xi _n(x) - \psi (x) \big | = 0 \end{aligned}$$

for a certain function \(\psi : K \rightarrow (0, \infty )\) satisfying

$$\begin{aligned} c^{-1} \le \psi (x) \le c, \quad \text {for all } x \in K. \end{aligned}$$

We set

$$\begin{aligned} \Xi _n(x) = \sum _{j = 0}^n \xi _j(x), \quad \text {and}\quad \Delta _n = \sum _{j = 0}^n \frac{\sqrt{\gamma _j}}{a_j}. \end{aligned}$$

If

$$\begin{aligned} \bigg (\frac{\gamma _n}{a_n} : n \in \mathbb {N}\bigg ) \in \mathcal {D}_1, \end{aligned}$$
(8.1)

then

$$\begin{aligned} \lim _{n \rightarrow \infty } \frac{1}{\Delta _n} \sum _{k=0}^n \frac{\sqrt{\gamma _k}}{a_k} \cos \big ( \Xi _k(x) \big ) = 0 \end{aligned}$$
(8.2)

uniformly with respect to \(x \in K\).

Proof

First, we write

$$\begin{aligned}{} & {} \bigg | \sum _{k = 0}^n \frac{\sqrt{\gamma _k}}{a_k} \cos \big ( \Xi _k(x) \big ) - \sum _{k = 0}^n \frac{\gamma _k}{a_k} \frac{\xi _k(x)}{\psi (x)} \cos \big ( \Xi _k(x) \big ) \bigg | \\{} & {} \quad \le \sum _{k = 0}^n \frac{\sqrt{\gamma _k}}{a_k} \bigg |1 - \sqrt{\gamma _k} \frac{\xi _k(x)}{\psi (x)}\bigg |. \end{aligned}$$

Since by the Stolz–Cesàro theorem

$$\begin{aligned} \lim _{n \rightarrow \infty } \frac{1}{\Delta _n} \sum _{k = 0}^n \frac{\sqrt{\gamma _k}}{a_k} \bigg |1 - \sqrt{\gamma _k} \frac{\xi _k(x)}{\psi (x)}\bigg | = \lim _{n \rightarrow \infty } \bigg |1 - \sqrt{\gamma _n} \frac{\xi _n(x)}{\psi (x)}\bigg | = 0, \end{aligned}$$

uniformly with respect to \(x \in K\), we obtain

$$\begin{aligned} \lim _{n \rightarrow \infty } \frac{1}{\Delta _n} \sum _{k = 0}^n \frac{\sqrt{\gamma _k}}{a_k} \cos \big ( \Xi _k(x) \big ) = \lim _{n \rightarrow \infty } \frac{1}{\Delta _n} \sum _{k = 0}^n \frac{\gamma _k}{a_k} \frac{\xi _k(x)}{\psi (x)} \cos \big ( \Xi _k(x) \big ). \end{aligned}$$

Next, we observe that

$$\begin{aligned}{} & {} \bigg | \sum _{k = 1}^n \frac{\gamma _k}{a_k} \bigg (\xi _k(x) \cos \big ( \Xi _k(x) \big ) - \int _{\Xi _{k-1}(x)}^{\Xi _k(x)} \cos (t) {\, \mathrm d}t \bigg )\bigg | \\{} & {} \quad \le \sum _{k = 1}^n \frac{\gamma _k}{a_k} \int _{\Xi _{k-1}(x)}^{\Xi _k(x)} \big |\cos (t) - \cos \big ( \Xi _k(x) \big ) \big | {\, \mathrm d}t \le \frac{1}{2} \sum _{k = 1}^n \frac{\gamma _k}{a_k} |\xi _k(x)|^2. \end{aligned}$$

In view of the Stolz–Cesàro theorem

$$\begin{aligned} \lim _{n \rightarrow \infty } \frac{1}{\Delta _n} \sum _{k = 1}^n \frac{\gamma _k}{a_k} |\xi _k(x)|^2&= \lim _{n \rightarrow \infty } \sqrt{\gamma _n} |\xi _n(x)|^2 = 0, \end{aligned}$$

thus

$$\begin{aligned} \lim _{n \rightarrow \infty } \frac{1}{\Delta _n} \sum _{k = 0}^n \frac{\sqrt{\gamma _k}}{a_k} \cos \big ( \Xi _k(x) \big ) = \lim _{n \rightarrow \infty } \frac{1}{\Delta _n} \sum _{k = 1}^n \frac{\gamma _k}{a_k} \big (\sin \Xi _k(x) - \sin \Xi _{k-1}(x)\big ). \end{aligned}$$

Now, by summation by parts, we get

$$\begin{aligned} \sum _{k = 1}^n \frac{\gamma _k}{a_k} \big (\sin \Xi _k(x) - \sin \Xi _{k-1}(x)\big )&= \frac{\gamma _n}{a_n} \sin \Xi _n(x) - \frac{\gamma _1}{a_1} \sin \Xi _0(x) \\&\quad + \sum _{k = 1}^{n-1} \bigg (\frac{\gamma _k}{a_k} - \frac{\gamma _{k+1}}{a_{k+1}}\bigg ) \sin \Xi _k(x). \end{aligned}$$
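The identity above is the classical Abel transform with \(c_k = \gamma _k/a_k\) and \(S_k = \sin \Xi _k(x)\): \(\sum _{k=1}^n c_k (S_k - S_{k-1}) = c_n S_n - c_1 S_0 + \sum _{k=1}^{n-1} (c_k - c_{k+1}) S_k\). As a sketch, it can be verified numerically on arbitrary data:

```python
import numpy as np

# Abel summation: sum_{k=1}^n c_k (S_k - S_{k-1})
#   = c_n S_n - c_1 S_0 + sum_{k=1}^{n-1} (c_k - c_{k+1}) S_k
rng = np.random.default_rng(7)
n = 50
c = rng.normal(size=n + 1)   # c[1..n] used, c[0] ignored
S = rng.normal(size=n + 1)   # S[0..n]

lhs = sum(c[k] * (S[k] - S[k - 1]) for k in range(1, n + 1))
rhs = c[n] * S[n] - c[1] * S[0] + sum((c[k] - c[k + 1]) * S[k] for k in range(1, n))
assert np.isclose(lhs, rhs)
print("summation-by-parts identity verified")
```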

Thus, by (8.1),

$$\begin{aligned} \sup _{x \in K} \bigg | \sum _{k = 1}^n \frac{\gamma _k}{a_k} \big (\sin \Xi _k(x) - \sin \Xi _{k-1}(x)\big ) \bigg | \le 2 \frac{\gamma _1}{a_1} + 2 \sum _{k = 1}^\infty \bigg |\frac{\gamma _{k+1}}{a_{k+1}} - \frac{\gamma _k}{a_k} \bigg |. \end{aligned}$$

Consequently,

$$\begin{aligned} \lim _{n \rightarrow \infty } \frac{1}{\Delta _n} \sum _{k = 1}^n \frac{\gamma _k}{a_k} \big (\sin \Xi _k(x) - \sin \Xi _{k-1}(x)\big ) =0 \end{aligned}$$

and the lemma follows. \(\square \)
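A toy instance of Lemma 8.1, with hypothetical parameters chosen only for illustration, and reading the class \(\mathcal {D}_1\) in (8.1) as sequences of bounded variation: take \(\gamma _n = n\), \(a_n = \gamma _n\), so that \(\gamma _n/a_n \equiv 1\), and \(\xi _n = 1/\sqrt{\gamma _n}\), so that \(\psi \equiv 1\). Then \(\Xi _k \approx 2\sqrt{k}\) and the weighted cosine average in (8.2) is indeed small for large n:

```python
import numpy as np

n = 200_000
gamma = np.arange(1.0, n + 1.0)        # gamma_k = k (toy choice)
a = gamma                               # gamma_k / a_k = 1: bounded variation
xi = 1.0 / np.sqrt(gamma)               # sqrt(gamma_k) * xi_k -> psi = 1
Xi = np.cumsum(xi)                      # Xi_k = sum_{j<=k} xi_j ~ 2 sqrt(k)
weights = np.sqrt(gamma) / a            # the weights sqrt(gamma_k) / a_k
Delta_n = weights.sum()                 # ~ 2 sqrt(n), tends to infinity
avg = float((weights * np.cos(Xi)).sum() / Delta_n)
assert abs(avg) < 0.05                  # the average in (8.2) is already small
print(f"Delta_n = {Delta_n:.1f}, averaged cosine sum = {avg:.5f}")
```

The oscillatory sum stays bounded (compare \(\int \cos (2\sqrt{t})\,t^{-1/2}\,\mathrm{d}t = \sin (2\sqrt{t})\)), while \(\Delta _n \rightarrow \infty \), which is exactly the mechanism of the proof.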

Theorem 8.2

Let N be a positive integer. Let \((\gamma _n: n \in \mathbb {N})\) be a sequence of positive numbers tending to infinity and satisfying (1.2) and (1.3). Let \((a_n: n \in \mathbb {N}_0)\) and \((b_n: n \in \mathbb {N}_0)\) be \(\gamma \)-tempered N-periodically modulated Jacobi parameters such that \(\mathfrak {X}_0(0)\) is a non-trivial parabolic element. Suppose that (1.4) holds true with \(\varepsilon = {\text {sign}}({{\text {tr}}\mathfrak {X}_0(0)})\). If

$$\begin{aligned} \lim _{n \rightarrow \infty } \rho _n = \infty , \end{aligned}$$
(8.3)

then there is \(j_0 \ge 1\) such that

$$\begin{aligned} \lim _{n \rightarrow \infty } \frac{1}{\rho _{n}} K_{n} (x, x; \eta ) = \frac{1}{2 N} \sum _{i = 0}^{N-1} \frac{|{\varphi _{i}(\eta , x)} |^2 a_{j_0N+i-1} \sinh \vartheta _{j_0N+i-1}(x)}{\big (\sqrt{\alpha _{i-1} |{\tau (x)} |}\big )^3} \end{aligned}$$

locally uniformly with respect to \((x, \eta ) \in \Lambda _- \times \mathbb {S}^1\).

Proof

Let K be a compact interval with non-empty interior contained in \(\Lambda _-\). By Theorem 7.1 and Claim 5.2, there is \(j_0 \ge 1\) such that for \(x \in K\), \(\eta \in \mathbb {S}^1\), and \(k > j_0\),

$$\begin{aligned}{} & {} \frac{a_{(k+1)N+i-1}}{\sqrt{\alpha _{i-1} \gamma _{(k+1)N+i-1}}} u_{kN+i}^2(\eta , x) \\{} & {} \quad = \frac{|{\varphi _i(\eta , x)} |^2 a_{j_0 N+i-1} \sinh \vartheta _{j_0N+i-1}(x) }{\big (\sqrt{\alpha _{i-1} |{\tau (x)} |}\big )^3} \sin ^2\Big (\sum _{\ell =j_0}^{k-1} \theta _{\ell N + i}(x) + \arg \varphi _i(\eta ,x) \Big ) \\{} & {} \qquad + E_{kN+i}(\eta , x) \end{aligned}$$

where

$$\begin{aligned} \lim _{k \rightarrow \infty } \sup _{\eta \in \mathbb {S}^1} \sup _{x \in K} |E_{kN+i}(\eta , x)| = 0. \end{aligned}$$

Therefore,

$$\begin{aligned}{} & {} \sum _{k = j_0+1}^j u_{kN+i}^2(\eta , x) \\{} & {} \quad = \frac{|{\varphi _i(\eta , x)} |^2 a_{j_0 N+i-1} \sinh \vartheta _{j_0N+i-1}(x)}{2 \big (\sqrt{\alpha _{i-1} |{\tau (x)} |}\big )^3} \sum _{k = j_0+1}^j \frac{\sqrt{\alpha _{i-1} \gamma _{(k+1)N+i-1}}}{a_{(k+1)N+i-1}} \\{} & {} \qquad \times \bigg (1 - \cos \Big (2\sum _{\ell =j_0}^{k-1} \theta _{\ell N + i}(x) + 2\arg \varphi _i(\eta ,x) \Big )\bigg ) \\{} & {} \qquad + \sum _{k = j_0+1}^j \frac{\sqrt{\alpha _{i-1} \gamma _{(k+1)N+i-1}}}{a_{(k+1)N+i-1}} E_{kN+i}(\eta , x). \end{aligned}$$

We claim that

$$\begin{aligned} \begin{aligned}&\lim _{j \rightarrow \infty } \frac{1}{\sqrt{\alpha _{i-1}} \rho _{i-1; j}} K_{i; j}(x, x; \eta ) \\&\quad = \frac{|{\varphi _i(\eta , x)} |^2 a_{j_0 N+i-1} \sinh \vartheta _{j_0N+i-1}(x)}{2 \big (\sqrt{\alpha _{i-1} |{\tau (x)} |} \big )^3} \end{aligned} \end{aligned}$$
(8.4)

uniformly with respect to \((x, \eta ) \in K \times \mathbb {S}^1\). To see this, we observe that by the Stolz–Cesàro theorem,

$$\begin{aligned}{} & {} \lim _{j \rightarrow \infty } \frac{1}{\rho _{i-1; j}} \sum _{k = j_0+1}^j \frac{\sqrt{\gamma _{(k+1)N+i-1}}}{a_{(k+1)N+i-1}} E_{kN+i}(\eta , x)\\{} & {} \quad = \lim _{j \rightarrow \infty } \sqrt{\frac{\gamma _{(j+1)N+i-1}}{\gamma _{jN+i-1}}} \cdot \frac{a_{jN+i-1}}{a_{(j+1)N+i-1}} E_{jN+i}(\eta , x) = 0. \end{aligned}$$

Since there is \(c > 0\) such that

$$\begin{aligned} \sup _{\eta \in \mathbb {S}^1} \sup _{x \in K} \sum _{k = 0}^{j_0} u_{kN+i}^2(\eta , x) \le c \end{aligned}$$

to prove (8.4) it is enough to show that

$$\begin{aligned}{} & {} \lim _{j \rightarrow \infty } \frac{1}{\rho _{i-1; j}} \sum _{k = j_0+1}^j \frac{\sqrt{\alpha _{i-1} \gamma _{(k+1)N+i-1}}}{a_{(k+1)N+i-1}}\nonumber \\{} & {} \quad \times \cos \Big (2 \sum _{\ell =j_0}^{k-1} \theta _{\ell N + i}(x) + 2\arg \varphi _i(\eta ,x) \Big ) =0 \end{aligned}$$
(8.5)

uniformly with respect to \((x, \eta ) \in K \times \mathbb {S}^1\). Observe that (8.5) is an easy consequence of Lemma 8.1, provided we show the following statement.

Claim 8.3

For all \(i \in \{0, 1, \ldots , N-1\}\),

$$\begin{aligned} \lim _{j \rightarrow \infty } \sqrt{\frac{\gamma _{(j+1)N+i-1}}{\alpha _{i-1}}} \theta _{jN+i}(x) = \sqrt{|{\tau (x)} |} \end{aligned}$$

uniformly with respect to \(x \in K\).

Using the notation introduced in Sect. 5, Theorem 3.2 gives

$$\begin{aligned} \lim _{j \rightarrow \infty } Y_j = \varepsilon {\text {Id}}\end{aligned}$$

locally uniformly on \(\Lambda _-\). In particular,

$$\begin{aligned} \lim _{j \rightarrow \infty } \frac{{\text {tr}}Y_j(x)}{2 \sqrt{\det Y_j(x)}} = \varepsilon . \end{aligned}$$

Since

$$\begin{aligned} \lim _{t \rightarrow 1^-} \frac{\arccos t}{\sqrt{1-t^2}} =1, \end{aligned}$$

we obtain

$$\begin{aligned} \lim _{j \rightarrow \infty } \bigg (1-\bigg ( \frac{{\text {tr}}Y_j(x)}{2 \sqrt{\det Y_j(x)}} \bigg )^2 \bigg )^{-1/2} \theta _j(x) = 1. \end{aligned}$$
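The limit used here is \(\lim _{s \rightarrow 0^+} s/\sin s = 1\) after the substitution \(t = \cos s\); a quick numerical check:

```python
import math

# arccos(t) / sqrt(1 - t^2) = s / sin(s) with t = cos(s), so the ratio
# tends to 1 as t -> 1^- (i.e. s -> 0^+).
ratios = [math.acos(t) / math.sqrt(1.0 - t * t) for t in (0.9, 0.99, 0.9999)]
assert all(r > 1.0 for r in ratios)      # s / sin(s) > 1 for s > 0
assert abs(ratios[-1] - 1.0) < 1e-3
print([f"{r:.6f}" for r in ratios])
```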

Let us observe that, by Theorem 3.2,

$$\begin{aligned} \sqrt{1-\bigg ( \frac{{\text {tr}}Y_j(x)}{2 \sqrt{\det Y_j(x)}} \bigg )^2}&= \frac{\sqrt{-{\text {discr}}Y_j(x)}}{2 \sqrt{\det Y_j(x)}} \\&= \sqrt{\frac{\alpha _{i-1}}{\gamma _{(j+1)N+i-1}}} \frac{\sqrt{-{\text {discr}}R_j(x)}}{2 \sqrt{\det Y_j(x)}}. \end{aligned}$$

Hence,

$$\begin{aligned} \lim _{j \rightarrow \infty } \sqrt{\frac{\gamma _{(j+1)N+i-1}}{\alpha _{i-1}}} \sqrt{1-\bigg ( \frac{{\text {tr}}Y_j(x)}{2 \sqrt{\det Y_j(x)}} \bigg )^2} = \sqrt{|\tau (x)|}, \end{aligned}$$
(8.6)

proving the claim.
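The algebraic step behind the last computation is, for \(\det Y > 0\) and \({\text {discr}}Y = ({\text {tr}}Y)^2 - 4 \det Y < 0\), the identity \(1 - \big ({\text {tr}}Y/(2\sqrt{\det Y})\big )^2 = \frac{4 \det Y - ({\text {tr}}Y)^2}{4 \det Y} = \frac{-{\text {discr}}Y}{(2\sqrt{\det Y})^2}\). A sanity check on a sample matrix (chosen only for illustration):

```python
import numpy as np

Y = np.array([[1.0, -1.0], [2.0, 1.5]])   # sample: det Y = 3.5, discr Y = -7.75
tr, det = np.trace(Y), np.linalg.det(Y)
discr = tr ** 2 - 4.0 * det
assert det > 0 and discr < 0
lhs = np.sqrt(1.0 - (tr / (2.0 * np.sqrt(det))) ** 2)
rhs = np.sqrt(-discr) / (2.0 * np.sqrt(det))
assert np.isclose(lhs, rhs)
print(f"both sides equal {lhs:.6f}")
```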

To complete the proof of the theorem we write

$$\begin{aligned} K_{jN+i}(x, x; \eta )&= \sum _{i' = 0}^{N-1} K_{i'; j}(x, x; \eta ) \\&\quad + \sum _{i' = i+1}^{N-1} \big (K_{i'; j-1}(x, x; \eta ) - K_{i'; j}(x, x; \eta )\big ). \end{aligned}$$

Observe that by Theorem 7.1,

$$\begin{aligned} \sup _{\eta \in \mathbb {S}^1} \sup _{x \in K}{\big |K_{i'; j-1}(x, x; \eta ) - K_{i'; j}(x, x; \eta )\big |} = \sup _{\eta \in \mathbb {S}^1} \sup _{x \in K}{|u_{jN+i'}(\eta , x)|^2 } \le c. \end{aligned}$$

Moreover, by [59, Proposition 3.7] and (2.7), for \(m, m' \in \mathbb {N}_0\),

$$\begin{aligned} \lim _{j \rightarrow \infty } \frac{a_{jN+m'}}{a_{jN+m}} = \lim _{j \rightarrow \infty } \frac{\gamma _{jN+m'}}{\gamma _{jN+m}} = \frac{\alpha _{m'}}{\alpha _m}, \end{aligned}$$

thus, by the Stolz–Cesàro theorem,

$$\begin{aligned} \lim _{j \rightarrow \infty } \frac{\rho _{i-1; j}}{\rho _{jN+i'}}&= \lim _{j \rightarrow \infty } \frac{\frac{\sqrt{\gamma _{jN+i-1}}}{a_{jN+i-1}}}{\sum _{k = 1}^N \frac{\sqrt{\alpha _{i'+k} \gamma _{jN+i'+k}}}{a_{jN+i'+k}}} \\&= \frac{1}{N \sqrt{\alpha _{i-1}}}. \end{aligned}$$

Hence, by (8.4)

$$\begin{aligned} \lim _{j \rightarrow \infty } \frac{1}{\rho _{jN+i}} K_{jN+i}(x, x; \eta )&= \lim _{j \rightarrow \infty } \sum _{i' = 0}^{N-1} \frac{1}{\rho _{i'-1; j}} K_{i'; j}(x, x; \eta ) \cdot \frac{\rho _{i'-1; j}}{\rho _{jN+i}} \\&= \sum _{i' = 0}^{N-1} \frac{|{\varphi _{i'}(\eta , x)} |^2 a_{j_0N+i'-1} \sinh \vartheta _{j_0N+i'-1}(x)}{2 N \big (\sqrt{\alpha _{i'-1} |{\tau (x)} |}\big )^3} \end{aligned}$$

and the theorem follows. \(\square \)

Remark 8.4

In view of Remark 7.3 and Theorem 8.2, no generalized eigenvector is square-summable; hence, by [45, Theorem 6.16], the operator A is self-adjoint. Next, by [3, Theorem 2.1], we conclude that \(\mu \) is absolutely continuous on \(\Lambda _-\) and its density \(\mu '\) has the property that for every compact interval \(K \subset \Lambda _-\) with non-empty interior there is \(c > 0\) such that

$$\begin{aligned} c^{-1} \le \mu '(x) \le c \end{aligned}$$

for almost all \(x \in K\) (with respect to the Lebesgue measure). Consequently, we have \(\sigma _{\textrm{ac}}(A) \supset {\text {cl}}(\Lambda _-)\). In view of Theorem 4.1 we actually have \(\sigma _{\textrm{ac}}(A) = \sigma _{\textrm{ess}}(A) = {\text {cl}}(\Lambda _-)\).

9 The Self-Adjointness of A

In this section we study the conditions that guarantee that the operator A is self-adjoint. The first theorem covers the case when \(\Lambda _- \ne \emptyset \).

Theorem 9.1

Let N be a positive integer. Let \((\gamma _n: n \in \mathbb {N})\) be a sequence of positive numbers tending to infinity and satisfying (1.2) and (1.3). Let \((a_n: n \in \mathbb {N}_0)\) and \((b_n: n \in \mathbb {N}_0)\) be \(\gamma \)-tempered N-periodically modulated Jacobi parameters such that \(\mathfrak {X}_0(0)\) is a non-trivial parabolic element. Suppose that (1.4) holds true with \(\varepsilon = {\text {sign}}({{\text {tr}}\mathfrak {X}_0(0)})\). If \(\Lambda _- \ne \emptyset \), then the Jacobi operator A associated with \((a_n: n \in \mathbb {N}_0)\) and \((b_n: n \in \mathbb {N}_0)\) is self-adjoint if and only if

$$\begin{aligned} \sum _{n=0}^\infty \frac{\sqrt{\gamma _n}}{a_n} = \infty . \end{aligned}$$
(9.1)

Proof

The case when (9.1) is satisfied is covered by Remark 8.4. Assume now that (9.1) is not satisfied. Let \(x \in \Lambda _-\). By Theorem 7.1 and Claim 5.1, there are \(j_0 \ge 1\) and \(c > 0\) such that for all \(j \ge j_0\), \(i \in \{0, 1, \ldots , N-1\}\), and \(\eta \in \mathbb {S}^1\),

$$\begin{aligned} \big |u_{jN+i}(\eta , x) \big |^2 \le c \frac{\sqrt{\gamma _{jN+i-1}}}{a_{jN+i-1}}. \end{aligned}$$

Hence, every generalized eigenvector associated to x is square-summable. In view of [45, Theorem 6.16] the operator A is not self-adjoint. This completes the proof. \(\square \)

The next theorem covers the case when \(\Lambda _- = \emptyset \) but \(\Lambda _+ \ne \emptyset \).

Theorem 9.2

Let N be a positive integer. Let \((\gamma _n: n \in \mathbb {N})\) be a sequence of positive numbers tending to infinity and satisfying (1.2) and (1.3). Let \((a_n: n \in \mathbb {N}_0)\) and \((b_n: n \in \mathbb {N}_0)\) be \(\gamma \)-tempered N-periodically modulated Jacobi parameters such that \(\mathfrak {X}_0(0)\) is a non-trivial parabolic element. Suppose that (1.4) holds true with \(\varepsilon = {\text {sign}}({{\text {tr}}\mathfrak {X}_0(0)})\). If \(\Lambda _- = \emptyset \) but \(\Lambda _+ \ne \emptyset \), then \(\Lambda _+ = \mathbb {R}\), and

  1. (i)

    if \(-\mathfrak {S}+ \sqrt{\mathfrak {S}^2 + 4 \mathfrak {U}} < 0\) then the operator A is not self-adjoint;

  2. (ii)

    if \(-\mathfrak {S}+ \sqrt{\mathfrak {S}^2 + 4 \mathfrak {U}} > 0\) then the operator A is self-adjoint.

Moreover, if the operator A is self-adjoint then \(\sigma _{\textrm{ess}}(A) = \emptyset \).

Proof

If \(\Lambda _- = \emptyset \) then \(\mathfrak {t}= 0\) and so \(\Lambda _+ = \mathbb {R}\). Let \(i=0\) and \(K = \{0\}\). We can repeat the first part of the proof of Theorem 4.1. Now, by (4.6) and (4.8), there are \(j_1 \ge j_0\) and \(c > 0\) such that for all \(j \ge j_1\),

$$\begin{aligned} \big |u^+_{jN}(0)\big |^2 + \big |u^+_{jN-1}(0)\big |^2 = \big \Vert \phi ^+_{jN}(0)\big \Vert ^2 \ge c \prod _{k = j_0}^j |\lambda _k^+(0)|^2. \end{aligned}$$
(9.2)

Moreover, for all \(j \ge j_1\) and \(i' \in \{0, 1, \ldots , N-1\}\),

$$\begin{aligned} \begin{aligned}&\big \Vert \phi ^-_{jN+i'}(0)\big \Vert ^2 \le c \prod _{k = j_0}^j |\lambda _k^-(0)|^2, \quad \text {and}\\&\quad \big \Vert \phi ^+_{jN+i'}(0)\big \Vert ^2 \le c \prod _{k = j_0}^j |\lambda _k^+(0)|^2. \end{aligned} \end{aligned}$$
(9.3)

By (4.3) we obtain

$$\begin{aligned} \sum _{j = j_1}^\infty \sum _{i' = 0}^{N-1} |u_{jN+i'}^-(0)|^2 \le c \sum _{j = j_1}^\infty \sum _{i' = 0}^{N-1} |u_{jN+i'}^+(0)|^2. \end{aligned}$$
(9.4)

Hence, the operator A is self-adjoint if and only if there is \(j_0 > 0\) such that

$$\begin{aligned} \sum _{j = j_0}^\infty \prod _{k = j_0}^j |\lambda _k^+(0)|^2 = \infty . \end{aligned}$$
(9.5)

Indeed, if (9.5) is satisfied then by (9.2) the generalized eigenvector \((u_n^+(0): n \in \mathbb {N}_0)\) is not square-summable, thus by [45, Theorem 6.16], the operator A is self-adjoint. On the other hand, if (9.5) is not satisfied, then by (9.3) and (9.4), all generalized eigenvectors associated to 0 are square-summable, thus by [45, Theorem 6.16], the operator A is not self-adjoint. The second part of the theorem follows by Theorem 4.1.

Since \((\gamma _{jN}: j \in \mathbb {N})\) tends to infinity, there is \(j_0 \ge 1\) such that for all \(j \ge j_0\),

$$\begin{aligned} |\lambda _j^+(0)| = 1 + \sqrt{\frac{\alpha _{N-1}}{\gamma _{(j+1)N-1}}} \frac{{\text {tr}}R_j(0) + \sqrt{{\text {discr}}R_j(0)}}{2}. \end{aligned}$$

Next, we observe that

$$\begin{aligned} \lim _{j \rightarrow \infty } \frac{\sqrt{\gamma _{jN}}}{j} = 0. \end{aligned}$$

Let us consider the case (i). Because \((R_{jN}(0): j \in \mathbb {N})\) converges to \(\mathcal {R}_0(0)\), there is \(j_1 \ge j_0\) such that for all \(j \ge j_1\),

$$\begin{aligned} j\sqrt{\frac{\alpha _{N-1}}{\gamma _{(j+1)N-1}}} \frac{{\text {tr}}R_j(0) + \sqrt{{\text {discr}}R_j(0)}}{2} < -1, \end{aligned}$$

hence

$$\begin{aligned} |\lambda _j^+(0)| \le 1 - \frac{1}{j}. \end{aligned}$$

Consequently,

$$\begin{aligned} \prod _{k = j_1}^j |\lambda _k^+(0)| \le \prod _{k = j_1}^j \bigg (1 - \frac{1}{k}\bigg ) = \frac{j_1-1}{j}, \end{aligned}$$

that is (9.5) is not satisfied and so the operator A is not self-adjoint.

The reasoning in case (ii) is analogous. Namely, there is \(j_1 \ge j_0\) such that for all \(j \ge j_1\),

$$\begin{aligned} j \sqrt{\frac{\alpha _{N-1}}{\gamma _{(j+1)N-1}}} \frac{{\text {tr}}R_j(0) + \sqrt{{\text {discr}}R_j(0)}}{2} > 1, \end{aligned}$$

hence

$$\begin{aligned} |\lambda _j^+(0)| \ge 1 + \frac{1}{j}, \end{aligned}$$

and so

$$\begin{aligned} \prod _{k = j_1}^j |\lambda _k^+(0)| \ge \prod _{k = j_1}^j \bigg (1 + \frac{1}{k}\bigg ) = \frac{j+1}{j_1}. \end{aligned}$$

Therefore, (9.5) is satisfied and the operator A is self-adjoint. \(\square \)
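As a quick sanity check (our illustration, not part of the argument), the two telescoping products used in the proof can be verified numerically:

```python
# Sanity check of the telescoping products from the proof of Theorem 9.2:
#   prod_{k=j1}^{j} (1 - 1/k) = (j1 - 1)/j,
#   prod_{k=j1}^{j} (1 + 1/k) = (j + 1)/j1.
from math import prod, isclose

j1, j = 5, 200
lower = prod(1 - 1/k for k in range(j1, j + 1))
upper = prod(1 + 1/k for k in range(j1, j + 1))
assert isclose(lower, (j1 - 1) / j)
assert isclose(upper, (j + 1) / j1)
```

In particular, the first product tends to 0 while the second diverges, which is exactly the dichotomy between cases (i) and (ii).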

Remark 9.3

If in Theorem 9.2 one has \(\Lambda _- = \emptyset \) and \(\Lambda _+ \ne \emptyset \), then A is self-adjoint if and only if (9.5) holds true. Let us emphasize that we cannot treat the case \(\Lambda _- = \Lambda _+ = \emptyset \), that is \(\tau \equiv 0\).

10 The \(\ell ^1\)-Type Perturbations

In this section we show how to obtain the main results of the paper in the presence of certain \(\ell ^1\)-type perturbations. Let \((\tilde{a}_n: n \in \mathbb {N}_0)\) and \((\tilde{b}_n: n \in \mathbb {N}_0)\) be Jacobi parameters satisfying

$$\begin{aligned} \tilde{a}_n = a_n(1 + \xi _n), \quad \tilde{b}_n = b_n(1 + \zeta _n) \end{aligned}$$

where \((a_n: n \in \mathbb {N}_0)\) and \((b_n: n \in \mathbb {N}_0)\) are \(\gamma \)-tempered N-periodically modulated Jacobi parameters such that \(\mathfrak {X}_0(0)\) is a non-trivial parabolic element, and \((\xi _n: n \in \mathbb {N}_0)\) and \((\zeta _n: n \in \mathbb {N}_0)\) are certain real sequences satisfying

$$\begin{aligned} \sum _{n = 0}^\infty \sqrt{\gamma _n} (|\xi _n| + |\zeta _n|) < \infty . \end{aligned}$$

We follow the reasoning explained in [61, Section 9].

Fix a compact set \(K \subset \mathbb {R}\). Let us denote by \((\Delta _n)\) any sequence of \(2\times 2\) matrices such that

$$\begin{aligned} \sum _{n = 0}^\infty \sup _K \Vert \Delta _n\Vert < \infty . \end{aligned}$$

We notice that

$$\begin{aligned} \tilde{B}_n(x) = B_n(x) + \frac{\sqrt{\gamma _n}}{a_n} \Delta _n(x) \end{aligned}$$
(10.1)

where

$$\begin{aligned} \tilde{B}_0(x)&= \begin{pmatrix} 0 &{}\quad 1 \\ -\frac{1}{\tilde{a}_0} &{}\quad \frac{x - \tilde{b}_0}{\tilde{a}_0} \end{pmatrix}, \\ \tilde{B}_n(x)&= \begin{pmatrix} 0 &{}\quad 1 \\ -\frac{\tilde{a}_{n-1}}{\tilde{a}_n} &{}\quad \frac{x- \tilde{b}_n}{\tilde{a}_n} \end{pmatrix} ,\quad n \ge 1. \end{aligned}$$

Moreover, for

$$\begin{aligned} \tilde{X}_n = \tilde{B}_{n+N-1} \tilde{B}_{n+N-2} \ldots \tilde{B}_n \end{aligned}$$

we have

$$\begin{aligned} \tilde{X}_n - X_n = \sum _{k = n}^{n+N-1} \frac{\sqrt{\gamma _k}}{a_k} \tilde{B}_{n+N-1} \ldots \tilde{B}_{k+1} \Delta _k B_{k-1} \ldots B_n, \end{aligned}$$

which together with

$$\begin{aligned} \sup _{n \in \mathbb {N}_0} \sup _{x \in K} \big (\Vert B_n(x)\Vert + \Vert \tilde{B}_n(x)\Vert \big ) < \infty , \end{aligned}$$

implies that

$$\begin{aligned} \tilde{X}_n = X_n + \frac{\sqrt{\gamma _n}}{a_n} \Delta _n. \end{aligned}$$
(10.2)

Suppose that \(K \subset \Lambda _+\). Then, by Theorem 3.2,

$$\begin{aligned} Z_{j+1}^{-1} \tilde{X}_{jN+i} Z_j&= \varepsilon \bigg ({\text {Id}}+ \sqrt{\frac{\alpha _{i-1}}{\gamma _{(j+1)N+i-1}}} R_j\bigg ) + \frac{\sqrt{\gamma _{jN+i}}}{a_{jN+i}} Z_{j+1}^{-1} \Delta _{jN+i} Z_j. \end{aligned}$$

Since there is \(c > 0\) such that for all \(j \in \mathbb {N}\),

$$\begin{aligned} \sup _K \big \Vert Z_{j+1}^{-1} \big \Vert \le c \sqrt{\gamma _{jN+i}}, \quad \text {and}\quad \sup _K \Vert Z_j\Vert \le c, \end{aligned}$$

by setting

$$\begin{aligned} V_j = \varepsilon \frac{\sqrt{\gamma _{jN+i}}}{a_{jN+i}} Z_{j+1}^{-1} \Delta _{jN+i} Z_j \end{aligned}$$

we get

$$\begin{aligned} Z_{j+1}^{-1} \tilde{X}_{jN+i} Z_j = \varepsilon \bigg ({\text {Id}}+ \sqrt{\frac{\alpha _{i-1}}{\gamma _{(j+1)N+i-1}}} R_j + V_j\bigg ) \end{aligned}$$

where \((R_j)\) is a sequence from \(\mathcal {D}_1\big (K, {\text {Mat}}(2, \mathbb {R})\big )\) convergent uniformly on K to \(\mathcal {R}_i\), and

$$\begin{aligned} \sum _{j = 1}^\infty \sup _K \Vert V_j\Vert < \infty . \end{aligned}$$
(10.3)

If \((\sqrt{\gamma _n})\) is sublinear and \((\sup _K \Vert \Delta _n\Vert )\) belongs to \(\ell ^1\), then for each subsequence there is a further subsequence \((L_j: j \in \mathbb {N}_0)\) such that

$$\begin{aligned} \sup _K \Vert \Delta _{L_j}\Vert \le c \frac{1}{\sqrt{\gamma _{L_j+N-1}}}. \end{aligned}$$

Consequently, we can find a subsequence with \(L_j \equiv i \bmod N\) such that

$$\begin{aligned} \sup _K \Vert V_{L_j}\Vert \le c \frac{\sqrt{\gamma _{L_j+N-1}}}{a_{L_j+N-1}}. \end{aligned}$$

Since [60, Theorem 4.4] allows perturbations satisfying (10.3), we can repeat the proof of Theorem 4.1 to get the following statement.

Theorem 10.1

Let N be a positive integer. Let \((\gamma _n: n \in \mathbb {N})\) be a sequence of positive numbers tending to infinity and satisfying (1.2) and (1.3). Let \(\tilde{A}\) be the Jacobi operator associated with Jacobi parameters \((\tilde{a}_n: n \in \mathbb {N}_0)\) and \((\tilde{b}_n: n \in \mathbb {N}_0)\) such that

$$\begin{aligned} \tilde{a}_n = a_n(1+\xi _n), \quad \tilde{b}_n = b_n(1+\zeta _n), \end{aligned}$$

where \((a_n: n \in \mathbb {N}_0)\) and \((b_n: n \in \mathbb {N}_0)\) are \(\gamma \)-tempered N-periodically modulated Jacobi parameters such that \(\mathfrak {X}_0(0)\) is a non-trivial parabolic element. Suppose that (1.4) holds true with \(\varepsilon = {\text {sign}}({{\text {tr}}\mathfrak {X}_0(0)})\). If

$$\begin{aligned} \sum _{n = 0}^\infty \sqrt{\gamma _n}(|\xi _n| + |\zeta _n|) < \infty \end{aligned}$$

for certain real sequences \((\xi _n: n \in \mathbb {N}_0)\) and \((\zeta _n: n \in \mathbb {N}_0)\), then

$$\begin{aligned} \sigma _{\textrm{ess}}(\tilde{A}) \cap \Lambda _+ = \emptyset . \end{aligned}$$

Next, let us consider a compact set \(K \subset \Lambda _-\). By Theorem 7.1 and Claim 5.1, there is \(c > 0\) such that for all \(n \in \mathbb {N}_0\),

$$\begin{aligned} \sup _K \big \Vert B_n B_{n-1} \ldots B_0 \big \Vert \le c \gamma _n^{1/4} a_n^{-1/2}, \end{aligned}$$
(10.4)

and since \(\det B_n = \frac{a_{n-1}}{a_n}\), we get

$$\begin{aligned} \sup _K \big \Vert \big (B_n B_{n-1} \ldots B_0\big )^{-1}\big \Vert \le c \gamma _n^{1/4} a_n^{1/2}. \end{aligned}$$
(10.5)

Moreover, by (10.1)

$$\begin{aligned} \tilde{B}_n \ldots \tilde{B}_1 \tilde{B}_0&= \tilde{B}_n \ldots \tilde{B}_1 B_0 \Big ({\text {Id}}+ \gamma _0^{1/2} a_0^{-1} B_0^{-1} \Delta _0\Big ) \\&=\tilde{B}_n \ldots \tilde{B}_2 B_1 B_0 \\&\quad \times \Big ({\text {Id}}+ \gamma _1^{1/2} a_1^{-1} (B_1 B_0)^{-1} \Delta _1 B_0 \Big )\Big ({\text {Id}}+ \gamma _0^{1/2} a_0^{-1} B_0^{-1} \Delta _0\Big ) \\&=B_n \ldots B_1 B_0 \\&\quad \times \prod _{j = 0}^n \Big ({\text {Id}}+ \gamma _j^{1/2} a_j^{-1} (B_j \ldots B_1 B_0)^{-1} \Delta _j (B_{j-1} \ldots B_1 B_0) \Big ) \end{aligned}$$

thus by (10.4) and (10.5)

$$\begin{aligned} \Vert \tilde{B}_n \ldots \tilde{B}_1 \tilde{B}_0\Vert&\le \Vert B_n \ldots B_1 B_0\Vert \prod _{j = 0}^{n} \Big (1 + \gamma _j^{1/2} a_j^{-1} \Vert (B_j \ldots B_1 B_0)^{-1} \Vert \\&\quad \times \Vert B_{j-1} \ldots B_1 B_0\Vert \cdot \Vert \Delta _j\Vert \Big ) \\&\le \Vert B_n \ldots B_1 B_0\Vert \prod _{j=0}^{n} \Big (1 + c \gamma _j^{3/4} \gamma _{j-1}^{1/4} a_j^{-3/2} a_{j-1}^{1/2} \Vert \Delta _j\Vert \Big ) \\&\le \Vert B_n \ldots B_1 B_0\Vert \exp \Big (c \sum _{j = 0}^n \Vert \Delta _j\Vert \Big ). \end{aligned}$$

Hence,

$$\begin{aligned} \sup _{K}{\Vert \tilde{B}_n \ldots \tilde{B}_1 \tilde{B}_0\Vert } \le c \gamma _n^{1/4} a_n^{-1/2}. \end{aligned}$$
(10.6)

Next, let us introduce the following sequence of matrices

$$\begin{aligned} M_j = \big (B_j B_{j-1} \ldots B_0 \big )^{-1} \big (\tilde{B}_j \tilde{B}_{j-1} \ldots \tilde{B}_0\big ). \end{aligned}$$
(10.7)

Since

$$\begin{aligned} M_{j+1} - M_j = \big (B_{j+1} B_{j} \ldots B_0 \big )^{-1} \big (\tilde{B}_{j+1} - B_{j+1}\big ) \big (\tilde{B}_j \tilde{B}_{j-1} \ldots \tilde{B}_0\big ), \end{aligned}$$

by (10.1), (10.5) and (10.6), we obtain

$$\begin{aligned} \sup _K{ \Vert M_{j+1} - M_j\Vert }&\le c \gamma _{j+1}^{1/4} a_{j+1}^{1/2} \gamma _j^{1/2} a_j^{-1} \gamma _j^{1/4} a_j^{-1/2} \sup _K{\Vert \Delta _{j+1}\Vert } \\&\le c \sup _K{\Vert \Delta _{j+1}\Vert }. \end{aligned}$$

Therefore, the sequence of matrices \((M_j)\) converges uniformly on K to a certain continuous mapping M, and

$$\begin{aligned} \sup _K{ \big \Vert M - M_j \big \Vert } \le c \sum _{k = j+1}^\infty \sup _K{\Vert \Delta _k\Vert }. \end{aligned}$$
(10.8)

Observe that for each \(x \in K\) the matrix M(x) is non-degenerate. Indeed, we have

$$\begin{aligned} \det M(x)&= \lim _{j \rightarrow \infty } \det M_j(x) \\&= \lim _{j \rightarrow \infty } \frac{a_j}{\tilde{a}_j} = 1. \end{aligned}$$

Given \(\eta \in \mathbb {S}^1\), we set

$$\begin{aligned} \eta _n = \frac{M_{n-1} \eta }{\Vert M_{n-1} \eta \Vert }. \end{aligned}$$

Let us denote by \((\tilde{u}_n(\eta , x): n \in \mathbb {N}_0)\) the generalized eigenvector associated to \(x \in \mathbb {R}\) and \(\eta \in \mathbb {S}^1\) and generated by \((\tilde{a}_n: n \in \mathbb {N}_0)\) and \((\tilde{b}_n: n \in \mathbb {N}_0)\). Notice that for all \(n \in \mathbb {N}\) and \(x \in K\), by (2.1) and (10.7), we have

$$\begin{aligned} \vec {u}_n\big (\eta _n(x), x\big ) = \frac{1}{\Vert M_{n-1}(x) \eta \Vert } \vec {\tilde{u}}_n\big (\eta , x\big ). \end{aligned}$$
(10.9)

By Theorem 7.1 and Claim 5.1,

$$\begin{aligned} \sup _{n \in \mathbb {N}} \sup _{x \in K} \frac{a_{n+N-1}}{\sqrt{\gamma _{n+N-1}}} \big \Vert \vec {u}_{n} \big ( \eta _n(x),x \big ) \big \Vert ^2 < \infty , \end{aligned}$$

which together with (10.9) implies that

$$\begin{aligned} \sup _{n \in \mathbb {N}}\sup _{x \in K}{ \frac{\tilde{a}_{n+N-1}}{\sqrt{\gamma _{n+N-1}}} \big \Vert \vec {\tilde{u}}_{n} \big ( \eta , x \big ) \big \Vert ^2} < \infty . \end{aligned}$$
(10.10)

Theorem 10.2

Let N be a positive integer. Let \((\gamma _n: n \in \mathbb {N})\) be a sequence of positive numbers tending to infinity and satisfying (1.2) and (1.3). Let \(\tilde{A}\) be the Jacobi operator associated with Jacobi parameters \((\tilde{a}_n: n \in \mathbb {N}_0)\) and \((\tilde{b}_n: n \in \mathbb {N}_0)\) such that

$$\begin{aligned} \tilde{a}_n = a_n(1+\xi _n), \quad \tilde{b}_n = b_n(1+\zeta _n), \end{aligned}$$

where \((a_n: n \in \mathbb {N}_0)\) and \((b_n: n \in \mathbb {N}_0)\) are \(\gamma \)-tempered N-periodically modulated Jacobi parameters such that \(\mathfrak {X}_0(0)\) is a non-trivial parabolic element. Suppose that (1.4) holds true with \(\varepsilon = {\text {sign}}({{\text {tr}}\mathfrak {X}_0(0)})\). If

$$\begin{aligned} \sum _{n = 0}^\infty \sqrt{\gamma _n}(|\xi _n| + |\zeta _n|) < \infty \end{aligned}$$

for certain real sequences \((\xi _n: n \in \mathbb {N}_0)\) and \((\zeta _n: n \in \mathbb {N}_0)\), then for each compact interval \(K \subset \Lambda _-\), there are \(j_0 \in \mathbb {N}\) and a continuous function \(\tilde{\varphi }: \mathbb {S}^1 \times K \rightarrow \mathbb {R}\) such that

$$\begin{aligned} \begin{aligned} \sqrt{\frac{\tilde{a}_{jN+i-1}}{\sqrt{\gamma _{jN+i-1}}}} \tilde{u}_{jN+i}(\eta , x)&= |\tilde{\varphi }(\eta , x)| \sin \Big (\sum _{k = j_0}^j \theta _k(x) + \arg \tilde{\varphi }(\eta , x)\Big )\\&\quad + E_j(\eta , x) \end{aligned} \end{aligned}$$
(10.11)

where \(\theta _k\) are given in (7.1) and

$$\begin{aligned} \lim _{j \rightarrow \infty } \sup _{\eta \in \mathbb {S}^1} \sup _{x \in K} |E_j(\eta , x)| =0. \end{aligned}$$

Moreover, \(\tilde{\varphi }(\eta , x) = 0\) if and only if \([\mathfrak {X}_i(0)]_{21} = 0\).

Proof

Fix a compact set \(K \subset \Lambda _-\). Since

$$\begin{aligned} \big \Vert M_{jN+i-1}(x) \eta \big \Vert = \big \Vert M(x) \eta \big \Vert + o_K(1) \end{aligned}$$

and

$$\begin{aligned} \varphi (\eta _{jN+i}(x), x) = \varphi (\eta (x), x) + o_K(1) \end{aligned}$$

where

$$\begin{aligned} \eta (x) = \frac{M(x) \eta }{\Vert M(x) \eta \Vert }, \end{aligned}$$

by (10.9) and Theorem 7.1, we obtain

$$\begin{aligned} \frac{\tilde{u}_{jN+i}(\eta , x)}{\prod _{k = j_0}^{j-1} |\lambda _k(x)|}&= \big \Vert M_{jN+i-1}(x) \eta \big \Vert \frac{|\varphi (\eta _{jN+i}(x), x)|}{\sqrt{\alpha _{i-1} |\tau (x)|}} \\&\quad \times \sin \Big (\sum _{k=j_0}^{j-1} \theta _k(x) + \arg \varphi (\eta _{jN+i}(x), x) \Big ) + o_K(1) \\&=\big \Vert M(x) \eta \big \Vert \\&\quad \times \frac{|\varphi (\eta (x), x)|}{\sqrt{\alpha _{i-1} |\tau (x)|}} \sin \Big (\sum _{k=j_0}^{j-1} \theta _k(x) + \arg \varphi (\eta (x), x) \Big ) + o_K(1). \end{aligned}$$

In view of Claim 5.2 we conclude the proof. \(\square \)

Now, by repeating the proofs of Theorems 8.2 and 9.1, the asymptotic formula (10.11) leads to the following statement.

Theorem 10.3

Let N be a positive integer. Let \((\gamma _n: n \in \mathbb {N})\) be a sequence of positive numbers tending to infinity and satisfying (1.2) and (1.3). Let \(\tilde{A}\) be the Jacobi operator associated with Jacobi parameters \((\tilde{a}_n: n \in \mathbb {N}_0)\) and \((\tilde{b}_n: n \in \mathbb {N}_0)\) such that

$$\begin{aligned} \tilde{a}_n = a_n(1+\xi _n), \quad \tilde{b}_n = b_n(1+\zeta _n), \end{aligned}$$

where \((a_n: n \in \mathbb {N}_0)\) and \((b_n: n \in \mathbb {N}_0)\) are \(\gamma \)-tempered N-periodically modulated Jacobi parameters such that \(\mathfrak {X}_0(0)\) is a non-trivial parabolic element. Suppose that (1.4) holds true with \(\varepsilon = {\text {sign}}({{\text {tr}}\mathfrak {X}_0(0)})\). Assume that

$$\begin{aligned} \sum _{n = 0}^\infty \sqrt{\gamma _n}(|\xi _n| + |\zeta _n|) < \infty \end{aligned}$$

for certain real sequences \((\xi _n: n \in \mathbb {N}_0)\) and \((\zeta _n: n \in \mathbb {N}_0)\). Set

$$\begin{aligned} \tilde{\rho }_n = \sum _{j = 0}^n \frac{\sqrt{\alpha _j \gamma _j}}{\tilde{a}_j}. \end{aligned}$$

If \(\Lambda _- \ne \emptyset \), then the Jacobi operator \(\tilde{A}\) associated with the parameters \(\tilde{a}\) and \(\tilde{b}\) is self-adjoint if and only if \(\tilde{\rho }_n \rightarrow \infty \). If this is the case, then the limit

$$\begin{aligned} \lim _{n \rightarrow \infty } \frac{1}{\tilde{\rho }_n} K_n(x, x; \eta ) \end{aligned}$$

exists locally uniformly with respect to \((x, \eta ) \in \Lambda _- \times \mathbb {S}^1\) and defines a continuous positive function.

11 Examples

11.1 Classes of Sequences

11.1.1 Kostyuchenko–Mirzoev

Let N be a positive integer and suppose that \((\alpha _n)\) and \((\beta _n)\) are N-periodic Jacobi parameters. We define

$$\begin{aligned} a_n = \alpha _n \hat{a}_n \Big ( 1 + \frac{f_n}{\delta _n} \Big ), \quad b_n = \beta _n \hat{a}_n \Big ( 1 + \frac{g_n}{\delta _n} \Big ), \end{aligned}$$
(11.1)

where \((f_n),(g_n)\) satisfy

$$\begin{aligned} \lim _{n \rightarrow \infty } |f_n - \mathfrak {f}_n| = 0, \quad \lim _{n \rightarrow \infty } |g_n - \mathfrak {g}_n| = 0, \end{aligned}$$
(11.2)

for some N-periodic sequences \((\mathfrak {f}_n), (\mathfrak {g}_n)\), and \((\hat{a}_n), (\delta _n)\) are positive sequences such that

$$\begin{aligned} \begin{aligned}&\sum _{n=0}^\infty \frac{1}{\hat{a}_n} < \infty , \quad \lim _{n \rightarrow \infty } \delta _n = \infty , \quad \text {and} \\&\quad \lim _{n \rightarrow \infty } \delta _n \Big ( 1 - \frac{\hat{a}_{n-1}}{\hat{a}_n} \Big ) = \kappa > 0. \end{aligned} \end{aligned}$$
(11.3)

The sequences \((a_n)\) and \((b_n)\) satisfying (11.1)–(11.3) are called N-periodically modulated Kostyuchenko–Mirzoev Jacobi parameters. This class has been studied before, see e.g. [17, 48, 60, 66].
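As a minimal illustration (our choice of sequences, not taken from the references above), the conditions (11.3) are satisfied by power-type weights: for \(p > 1\) one may take

```latex
\hat{a}_n = (n+1)^p, \qquad \delta_n = n + 1 \qquad (p > 1).
```

Indeed, \(\sum _{n=0}^\infty \hat{a}_n^{-1} = \sum _{n=0}^\infty (n+1)^{-p} < \infty \), \(\delta _n \rightarrow \infty \), and \(\delta _n \big ( 1 - \tfrac{\hat{a}_{n-1}}{\hat{a}_n} \big ) = (n+1) \big ( 1 - \big ( \tfrac{n}{n+1} \big )^p \big ) \rightarrow p\), so (11.3) holds with \(\kappa = p > 0\).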

11.1.2 Symmetric Birth–Death Processes

In [29, Section 2] it is shown that the generator of a birth–death process is unitarily equivalent to a Jacobi matrix of the form

$$\begin{aligned} a_n = \sqrt{\lambda _n \mu _{n+1}}, \quad b_n = -\lambda _n - \mu _n, \end{aligned}$$
(11.4)

where \((\lambda _n: n \in \mathbb {N}_0)\) and \((\mu _{n+1}: n \in \mathbb {N}_0)\) are some positive sequences. When \(\lambda _n = \mu _{n+1}\), we obtain a particularly simple class of Jacobi parameters

$$\begin{aligned} b_n = -a_{n-1} - a_n. \end{aligned}$$
(11.5)

If (11.5) is satisfied, we shall refer to Jacobi parameters \((a_n)\) and \((b_n)\) as corresponding to a symmetric birth–death process. This class has been studied before, see e.g. [8,9,10, 42, 53]. In fact, in view of Proposition 11.1 below, instead of (11.4) it is sufficient to consider Jacobi parameters

$$\begin{aligned} a_n = \sqrt{\lambda _n \mu _{n+1}}, \quad b_n = \lambda _n + \mu _n. \end{aligned}$$

Proposition 11.1

Let \((a_n: n \in \mathbb {N}_0)\) and \((b_n: n \in \mathbb {N}_0)\) be sequences of positive and real numbers respectively. Let A and \(\hat{A}\) be Jacobi matrices with Jacobi parameters \((a_n: n \in \mathbb {N}_0), (b_n: n \in \mathbb {N}_0)\) and \((a_n: n \in \mathbb {N}_0), (-b_n: n \in \mathbb {N}_0)\), respectively. Then

$$\begin{aligned} \hat{A} = U (-A) U^{-1}, \end{aligned}$$

where \(U: \ell ^2(\mathbb {N}_0) \rightarrow \ell ^2(\mathbb {N}_0)\) is a unitary operator defined by \((U x)_n = (-1)^n x_n\).

The proof of Proposition 11.1 is just a simple computation, see e.g. [9, Lemma 3.5] or [10, Proposition 3.5] for more details.
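The computation behind Proposition 11.1 can be checked on finite truncations; the following sketch (our illustration, plain Python, no external libraries) conjugates a finite Jacobi section by the sign-flip operator \(U\):

```python
# Finite-section check of Proposition 11.1: with (U x)_n = (-1)^n x_n
# one has U (-A) U^{-1} = A_hat, where A_hat has Jacobi parameters
# (a_n, -b_n).  We verify this on an n x n truncation with arbitrary
# positive a_n and real b_n.
def jacobi(a, b):
    n = len(b)
    J = [[0.0] * n for _ in range(n)]
    for i in range(n):
        J[i][i] = b[i]
        if i + 1 < n:
            J[i][i + 1] = J[i + 1][i] = a[i]
    return J

a = [1.0, 2.5, 0.7, 3.1]
b = [0.3, -1.2, 4.0, 0.0, -2.2]
A = jacobi(a, b)
A_hat = jacobi(a, [-x for x in b])
n = len(b)
# (U (-A) U^{-1})_{ij} = (-1)^i (-A_{ij}) (-1)^j
conj = [[(-1) ** i * (-A[i][j]) * (-1) ** j for j in range(n)]
        for i in range(n)]
assert conj == A_hat
```

The diagonal picks up the factor \((-1)^{2i} = 1\) applied to \(-b_i\), while each off-diagonal entry picks up \((-1)^{2i+1} = -1\) applied to \(-a_i\), which is the whole computation.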

11.2 The General N

11.2.1 Kostyuchenko–Mirzoev’s Class

Remark 11.2

Let N be a positive integer and let \((\alpha _n)\) and \((\beta _n)\) be N-periodic Jacobi parameters such that \(\mathfrak {X}_0(0)\) is a non-trivial parabolic element. Consider the sequences \((a_n), (b_n)\) satisfying (11.1)–(11.3), where

$$\begin{aligned} \Big ( \frac{\delta _n}{\hat{a}_n} \Big ), \Big ( \delta _n \Big ( 1 - \frac{\hat{a}_{n-1}}{\hat{a}_n} \Big ) \Big ), (f_n), (g_n) \in \mathcal {D}_1^N, \end{aligned}$$
(11.6)

and

$$\begin{aligned} (\delta _n - \delta _{n-1}), \Big ( \frac{1}{\sqrt{\delta _n}} \Big ) \in \mathcal {D}_1^N. \end{aligned}$$
(11.7)

Then for \(\gamma _n = \alpha _n \delta _n\), the hypotheses of Theorem 3.2 are satisfied. Moreover,

$$\begin{aligned}&\tau (x) \equiv N \kappa -\varepsilon \sum _{i=0}^{N-1} [\mathfrak {X}_i(0)]_{11} (\kappa + \mathfrak {f}_i - \mathfrak {f}_{i-1}) \\&\quad -\varepsilon \sum _{i=0}^{N-1} \frac{\beta _i}{\alpha _{i-1}} [\mathfrak {X}_i(0)]_{21} (\mathfrak {f}_i - \mathfrak {g}_i). \end{aligned}$$

To see this, let us first observe that

$$\begin{aligned} \frac{\gamma _n}{a_n} = \frac{\delta _n}{\hat{a}_n} \frac{1}{1+\tfrac{f_n}{\delta _n}}, \end{aligned}$$
(11.8)

which belongs to \(\mathcal {D}_1^N\). Next, we write

$$\begin{aligned} \frac{\alpha _{n-1}}{\alpha _n} - \frac{a_{n-1}}{a_n}&= \frac{\alpha _{n-1}}{\alpha _n} \bigg ( 1 - \frac{\hat{a}_{n-1}}{\hat{a}_n} \frac{1+\tfrac{f_{n-1}}{\delta _{n-1}}}{1+\tfrac{f_n}{\delta _n}} \bigg ) \\&= \frac{\alpha _{n-1}}{\alpha _n \delta _n} \bigg ( \delta _n \Big ( 1 - \frac{\hat{a}_{n-1}}{\hat{a}_n} \Big ) + \frac{\hat{a}_{n-1}}{\hat{a}_n} \frac{f_n - \tfrac{\delta _n}{\delta _{n-1}} f_{n-1}}{1+\tfrac{f_n}{\delta _n}} \bigg ). \end{aligned}$$

Hence,

$$\begin{aligned} \bigg ( \gamma _n \Big ( \frac{\alpha _{n-1}}{\alpha _n} - \frac{a_{n-1}}{a_n} \Big ) \bigg ) \in \mathcal {D}_1^N. \end{aligned}$$

Moreover,

$$\begin{aligned} \mathop {\lim _{n \rightarrow \infty }}_{n \equiv i \bmod N} \gamma _n \Big ( \frac{\alpha _{n-1}}{\alpha _n} - \frac{a_{n-1}}{a_n} \Big ) = \alpha _{i-1} ( \kappa + \mathfrak {f}_{i} - \mathfrak {f}_{i-1} ). \end{aligned}$$
(11.9)

Analogously, we write

$$\begin{aligned} \frac{\beta _n}{\alpha _n} - \frac{b_n}{a_n} = \frac{\beta _n}{\alpha _n} \frac{1}{\delta _n} \bigg ( 1 - \frac{1+\tfrac{g_n}{\delta _n}}{1+\tfrac{f_n}{\delta _n}} \bigg ) = \frac{\beta _n}{\alpha _n} \frac{1}{\delta _n} \frac{f_n - g_n}{1 + \tfrac{f_n}{\delta _n}}, \end{aligned}$$

thus

$$\begin{aligned} \bigg ( \gamma _n \Big ( \frac{\beta _n}{\alpha _n} - \frac{b_n}{a_n} \Big ) \bigg ) \in \mathcal {D}_1^N \end{aligned}$$

and

$$\begin{aligned} \mathop {\lim _{n \rightarrow \infty }}_{n \equiv i \bmod N} \gamma _n \Big ( \frac{\beta _n}{\alpha _n} - \frac{b_n}{a_n} \Big ) = \beta _i (\mathfrak {f}_i - \mathfrak {g}_i). \end{aligned}$$
(11.10)

Next, we easily compute that

$$\begin{aligned} \sqrt{\frac{\alpha _{n-1}}{\alpha _n}} \sqrt{\gamma _n} - \sqrt{\gamma _{n-1}} = \sqrt{\frac{\alpha _{n-1}}{\delta _n}} \frac{\delta _n - \delta _{n-1}}{1+\sqrt{\tfrac{\delta _{n-1}}{\delta _n}}}. \end{aligned}$$

Consequently, all the hypotheses of Theorem 3.2 are satisfied. Moreover, by (11.9) and (11.10), we obtain

$$\begin{aligned} \mathfrak {s}_n \equiv 0, \quad \mathfrak {r}_n \equiv 0 \end{aligned}$$

and

$$\begin{aligned} \mathfrak {u}_n = \alpha _{n-1} (\kappa + \mathfrak {f}_n - \mathfrak {f}_{n-1}) (1 - \varepsilon [\mathfrak {X}_n(0)]_{11}) - \varepsilon \beta _n (\mathfrak {f}_n - \mathfrak {g}_n) [\mathfrak {X}_n(0)]_{21}. \end{aligned}$$

To compute the value of \(\mathfrak {t}\), observe that by (11.7) the sequence \((\delta _n - \delta _{n-1}: n \in \mathbb {N})\) is bounded and by (11.3) \(\delta _n \rightarrow \infty \). Thus,

$$\begin{aligned} \lim _{n \rightarrow \infty } \frac{\delta _{n-1}}{\delta _n} = \lim _{n \rightarrow \infty } \bigg ( 1 - \frac{\delta _{n} - \delta _{n-1}}{\delta _n} \bigg ) = 1. \end{aligned}$$

Next, by (11.3)

$$\begin{aligned} \lim _{n \rightarrow \infty } \frac{\hat{a}_{n-1}}{\hat{a}_n} = 1 - \lim _{n \rightarrow \infty } \frac{\delta _n \big ( 1 - \tfrac{\hat{a}_{n-1}}{\hat{a}_n} \big )}{\delta _n} = 1 - \kappa \lim _{n \rightarrow \infty } \frac{1}{\delta _n} = 1. \end{aligned}$$

This together with (11.8) implies that

$$\begin{aligned} \mathfrak {t}= \lim _{n \rightarrow \infty } \frac{\delta _n}{\hat{a}_n} \end{aligned}$$

exists. If we had \(\mathfrak {t}> 0\), then there would exist \(n_0 \in \mathbb {N}\) and a constant \(c>0\) such that for all \(n \ge n_0\)

$$\begin{aligned} \frac{1}{\hat{a}_n} \ge c \frac{1}{\delta _n}. \end{aligned}$$

Consequently, by (11.3)

$$\begin{aligned} \sum _{n=0}^\infty \frac{1}{\delta _n} < \infty . \end{aligned}$$
(11.11)

On the other hand, we have

$$\begin{aligned} \delta _n \le \delta _0 + \sum _{k=0}^{n-1} |\delta _{k+1} - \delta _k|. \end{aligned}$$

Thus, by the boundedness of \((\delta _n - \delta _{n-1}: n \in \mathbb {N})\), we get that for some \(c'>0\) one has \(\delta _n \le c'(n+1)\). This contradicts (11.11). Hence, \(\mathfrak {t}=0\), which easily gives the formula for \(\tau \).

11.2.2 Symmetric Birth–Death Class

Lemma 11.3

Let N be a positive integer. Suppose that \((\tilde{\alpha }_n)\) is a 2N-periodic sequence of positive numbers such that

$$\begin{aligned} \tilde{\alpha }_{0} \tilde{\alpha }_{2} \ldots \tilde{\alpha }_{2N-2} = \tilde{\alpha }_{1} \tilde{\alpha }_{3} \ldots \tilde{\alpha }_{2N-1}. \end{aligned}$$
(11.12)

Set

$$\begin{aligned} \alpha _n = \tilde{\alpha }_{2n+1} \tilde{\alpha }_{2n+2}, \quad \beta _n = \tilde{\alpha }_{2n}^2 + \tilde{\alpha }_{2n+1}^2. \end{aligned}$$
(11.13)

Then \({\text {tr}}\mathfrak {X}_0(0) = 2 \varepsilon \) where \(\varepsilon = (-1)^N\). Moreover,

$$\begin{aligned} {\text {tr}}\mathfrak {X}_0'(0) = -\varepsilon \sum _{i=0}^{N-1} \frac{1}{\alpha _i} \frac{\tilde{\alpha }_{2i}}{\tilde{\alpha }_{2i-1}} \sum _{k=0}^{N-1} \prod _{j=i}^{i+k-1} \bigg ( \frac{\tilde{\alpha }_{2j}}{\tilde{\alpha }_{2j+1}} \bigg )^2 \end{aligned}$$
(11.14)

and

$$\begin{aligned} \big ( 1 - \varepsilon [\mathfrak {X}_n(0)]_{11} \big ) \frac{\alpha _{n-1}}{\alpha _n} - \frac{\tilde{\alpha }_{2n}^2}{\tilde{\alpha }_{2n+1} \tilde{\alpha }_{2n+2}} \varepsilon [\mathfrak {X}_n(0)]_{21} \equiv 0. \end{aligned}$$
(11.15)

Proof

We start with the following Claim, which is inspired by [63, Lemma 2].

Claim 11.4

Let \(\ell \ge 0\) and let \(\big ( \mathfrak {p}_n^{[\ell ]}: n \ge 0 \big )\) be the sequence of orthogonal polynomials associated with recurrence coefficients \((\alpha _{n+\ell }: n \ge 0)\) and \((\beta _{n+\ell }: n \ge 0)\), where \((\alpha _n: n \ge 0)\) and \((\beta _n: n \ge 0)\) satisfy (11.13). Then

$$\begin{aligned} \mathfrak {p}_n^{[\ell ]}(0) = \frac{\tilde{\alpha }_{2 \ell }}{\tilde{\alpha }_{2n+2\ell } w_n^{[\ell ]}} \sum _{k=0}^n \big ( w_k^{[\ell ]} \big )^2, \end{aligned}$$
(11.16)

where

$$\begin{aligned} w_k^{[\ell ]} = (-1)^k \prod _{j=\ell }^{\ell +k-1} \frac{\tilde{\alpha }_{2j}}{\tilde{\alpha }_{2j+1}}. \end{aligned}$$

To see this, we reason by induction on \(n \in \mathbb {N}_0\). For \(n=0\) and \(n=1\) the formula (11.16) can be checked by direct computation. Next, let us observe that

$$\begin{aligned} -\tilde{\alpha }_{2\ell +2k-1} w_k^{[\ell ]} = \tilde{\alpha }_{2\ell +2k-2} w_{k-1}^{[\ell ]}, \quad k \ge 1. \end{aligned}$$
(11.17)

By the recurrence relation we have

$$\begin{aligned} \alpha _{n+\ell } \mathfrak {p}_{n+1}^{[\ell ]}(0) = -\beta _{n+\ell } \mathfrak {p}_{n}^{[\ell ]}(0) -\alpha _{n+\ell -1} \mathfrak {p}_{n-1}^{[\ell ]}(0), \quad n \ge 1. \end{aligned}$$

Hence, by the induction hypothesis, (11.17) and (11.13) we obtain

$$\begin{aligned}&\alpha _{n+\ell } \mathfrak {p}_{n+1}^{[\ell ]}(0)\\&\quad = -\beta _{n+\ell } \frac{\tilde{\alpha }_{2\ell }}{\tilde{\alpha }_{2\ell +2n} w_n^{[\ell ]}} \sum _{k=0}^n \big ( w_k^{[\ell ]} \big )^2 -\frac{\tilde{\alpha }_{2\ell }}{\tilde{\alpha }_{2\ell +2n-2}} \frac{\alpha _{\ell +n-1}}{w_{n-1}^{[\ell ]}} \sum _{k=0}^{n-1} \big ( w_k^{[\ell ]} \big )^2 \\&\quad = \frac{\tilde{\alpha }_{2\ell }}{w_{n+1}^{[\ell ]}} \bigg ( \frac{\tilde{\alpha }_{2\ell +2n}^2 + \tilde{\alpha }_{2\ell +2n+1}^2}{\tilde{\alpha }_{2\ell +2n+1}} \sum _{k=0}^n \big ( w_k^{[\ell ]} \big )^2 - \frac{\tilde{\alpha }_{2\ell +2n}^2}{\tilde{\alpha }_{2\ell +2n+1}} \sum _{k=0}^{n-1} \big ( w_k^{[\ell ]} \big )^2 \bigg ) \\&\quad = \frac{\tilde{\alpha }_{2\ell }}{\tilde{\alpha }_{2\ell +2n+1} w_{n+1}^{[\ell ]}} \bigg ( \tilde{\alpha }_{2\ell +2n}^2 \big ( w_n^{[\ell ]} \big )^2 + \tilde{\alpha }_{2\ell +2n+1}^2 \sum _{k=0}^{n} \big ( w_k^{[\ell ]} \big )^2 \bigg ) \\&\quad = \frac{\tilde{\alpha }_{2\ell +2n+1} \tilde{\alpha }_{2\ell }}{w_{n+1}^{[\ell ]}} \sum _{k=0}^{n+1} \big ( w_k^{[\ell ]} \big )^2 \end{aligned}$$

and the conclusion follows by once again using (11.13).
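The closed form (11.16) can also be confirmed numerically; the following sketch (our illustration) compares the three-term recurrence at \(x = 0\) with the formula of Claim 11.4 for a sample positive \(2N\)-periodic sequence \((\tilde{\alpha }_n)\):

```python
# Numerical check of (11.16): compare the recurrence at x = 0,
#   alpha_{n+l} p_{n+1} = -beta_{n+l} p_n - alpha_{n+l-1} p_{n-1},
# with the closed form of Claim 11.4, where (11.13) defines
# alpha_n, beta_n from a 2N-periodic sequence ta = (alpha~_n).
from math import isclose

ta = [2.0, 1.0, 3.0, 6.0]          # 2N-periodic with N = 2
M = 16                             # how far we iterate
alpha = lambda n: ta[(2*n + 1) % len(ta)] * ta[(2*n + 2) % len(ta)]
beta = lambda n: ta[(2*n) % len(ta)] ** 2 + ta[(2*n + 1) % len(ta)] ** 2

def w(k, l):
    # w_k^{[l]} = (-1)^k prod_{j=l}^{l+k-1} alpha~_{2j}/alpha~_{2j+1}
    s = float((-1) ** k)
    for j in range(l, l + k):
        s *= ta[(2*j) % len(ta)] / ta[(2*j + 1) % len(ta)]
    return s

for l in range(len(ta)):
    p = [1.0, -beta(l) / alpha(l)]     # p_0(0), p_1(0)
    for n in range(1, M):
        p.append((-beta(n + l) * p[n] - alpha(n + l - 1) * p[n - 1])
                 / alpha(n + l))
    for n in range(M):                 # compare with (11.16)
        rhs = ta[(2*l) % len(ta)] / (ta[(2*n + 2*l) % len(ta)] * w(n, l)) \
              * sum(w(k, l) ** 2 for k in range(n + 1))
        assert isclose(p[n], rhs)
```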

Next, in view of [55, Proposition 3] we have

$$\begin{aligned} \mathfrak {X}_n(0) = \begin{pmatrix} -\frac{\alpha _{n-1}}{\alpha _n} \mathfrak {p}_{N-2}^{[n+1]}(0) &{}\quad \mathfrak {p}_{N-1}^{[n]}(0) \\ -\frac{\alpha _{n-1}}{\alpha _n} \mathfrak {p}_{N-1}^{[n+1]}(0) &{}\quad \mathfrak {p}_{N}^{[n]}(0) \end{pmatrix}. \end{aligned}$$
(11.18)

Thus, by (11.16),

$$\begin{aligned} {\text {tr}}\mathfrak {X}_0(0)&= \mathfrak {p}_{N}^{[0]}(0) - \frac{\alpha _{N-1}}{\alpha _N} \mathfrak {p}_{N-2}^{[1]}(0) \\&= \frac{1}{w_N^{[0]}} \sum _{k=0}^N \big ( w_k^{[0]} \big )^2 -\frac{\alpha _{N-1}}{\alpha _N} \frac{\tilde{\alpha }_2}{\tilde{\alpha }_{2N-2} w_{N-2}^{[1]}} \sum _{k=0}^{N-2} \big ( w_k^{[1]} \big )^2. \end{aligned}$$

Observe that

$$\begin{aligned} w^{[1]}_k = -\frac{\tilde{\alpha }_1}{\tilde{\alpha }_0} w_{k+1}^{[0]}, \quad k \ge 0. \end{aligned}$$
(11.19)

Therefore, by combining (11.17), (11.19) and (11.13) and using the 2N-periodicity of \((\tilde{\alpha }_n)\), we arrive at

$$\begin{aligned} {\text {tr}}\mathfrak {X}_0(0)&= \frac{1}{w_N^{[0]}} \bigg ( \sum _{k=0}^N \big ( w_k^{[0]} \big )^2 - \sum _{k=1}^{N-1} \big ( w_k^{[0]} \big )^2 \bigg ) \\&= \frac{1}{w_N^{[0]}} \Big ( 1 + \big (w_N^{[0]} \big )^2 \Big ) = 2 (-1)^N \end{aligned}$$

where the last equality follows from (11.16) and (11.12).

In view of [61, Proposition 2.1], (11.18) gives

$$\begin{aligned} {\text {tr}}\mathfrak {X}_0'(0)&= \sum _{i=0}^{N-1} \frac{\mathfrak {p}_{N-1}^{[i+1]}(0)}{\alpha _i} \\&= -\varepsilon \sum _{i=0}^{N-1} \frac{1}{\alpha _i} \frac{\tilde{\alpha }_{2i}}{\tilde{\alpha }_{2i-1}} \sum _{k=0}^{N-1} \big ( w_k^{[i]} \big )^2 \end{aligned}$$

where in the last equality we have used (11.16) and (11.17). Now, (11.14) is an easy consequence of (11.16). Since \(|{\text {tr}}\mathfrak {X}_0(0)|=2\) and \({\text {tr}}\mathfrak {X}'_0(0) \ne 0\), Proposition 2.1 implies that \(\mathfrak {X}_0(0)\) is a non-trivial parabolic element.

It remains to prove (11.15). Observe that by (11.16), (11.17), (11.13) and the 2N-periodicity of \((\tilde{\alpha }_n)\) we get

$$\begin{aligned}&\frac{\tilde{\alpha }_{2n}^2}{\tilde{\alpha }_{2n+1} \tilde{\alpha }_{2n+2}} \mathfrak {p}_{N-1}^{[n+1]}(0) + \frac{\alpha _{n-1}}{\alpha _{n}} \mathfrak {p}_{N-2}^{[n+1]}(0)\\&\quad = \frac{\tilde{\alpha }_{2n}}{\tilde{\alpha }_{2n+1} w_{N-1}^{[n+1]}} \bigg ( \sum _{k=0}^{N-1} \big ( w_k^{[n+1]} \big )^2 - \sum _{k=0}^{N-2} \big ( w_k^{[n+1]} \big )^2 \bigg ) \\&\quad = \frac{\tilde{\alpha }_{2n}}{\tilde{\alpha }_{2n+1}} w_{N-1}^{[n+1]} = -w_N^{[n+1]} = -\varepsilon . \end{aligned}$$

Hence, by (11.18),

$$\begin{aligned} \big ( 1 - \varepsilon [\mathfrak {X}_{n}(0)]_{11} \big ) \frac{\alpha _{n-1}}{\alpha _{n}} - \frac{\tilde{\alpha }_{2n}^2}{\tilde{\alpha }_{2n+1} \tilde{\alpha }_{2n+2}} \varepsilon [\mathfrak {X}_{n}(0)]_{21} = \frac{\alpha _{n-1}}{\alpha _{n}} (1 - \varepsilon ^2) = 0 \end{aligned}$$

which completes the proof. \(\square \)

Remark 11.5

Let N be a positive integer. Let \((\alpha _n)\) be a positive N-periodic sequence. Suppose that \((\gamma _n)\) is a positive sequence satisfying

$$\begin{aligned} \Big ( \sqrt{\frac{\alpha _{n-1}}{\alpha _n}} \sqrt{\gamma _n} - \sqrt{\gamma _{n-1}} \Big ), \Big ( \frac{1}{\sqrt{\gamma _n}} \Big ) \in \mathcal {D}_1^N, \end{aligned}$$

and

$$\begin{aligned} \lim _{n \rightarrow \infty } \big ( \sqrt{\gamma _{n+N}} - \sqrt{\gamma _n} \big ) = 0, \quad \lim _{n \rightarrow \infty } \gamma _n = \infty . \end{aligned}$$

Let us set

$$\begin{aligned} a_n = \gamma _n, \quad b_n = \gamma _{n-1} + \gamma _n. \end{aligned}$$

Then \(\beta _n = \alpha _{n-1} + \alpha _n\) and the hypotheses of Theorem 3.2 are satisfied with

$$\begin{aligned} \mathfrak {r}_i = \mathfrak {s}_i = 2 \sqrt{\alpha _{i-1}} \mathop {\lim _{n \rightarrow \infty }}_{n \equiv i \bmod N} \Big ( \sqrt{\frac{\alpha _{n-1}}{\alpha _n}} \sqrt{\gamma _n} - \sqrt{\gamma _{n-1}} \Big ), \end{aligned}$$
(11.20)

and

$$\begin{aligned} \mathfrak {t}= 1, \quad \mathfrak {u}_i \equiv 0. \end{aligned}$$
(11.21)

In particular,

$$\begin{aligned} \tau (x) = - \bigg ( \sum _{i=0}^{N-1} \frac{\alpha _{i-1}}{\alpha _i} \bigg ) \bigg ( \sum _{i=0}^{N-1} \frac{1}{\alpha _i} \bigg ) \cdot x. \end{aligned}$$
(11.22)

To see this, let us define

$$\begin{aligned} \tilde{\alpha }_{2n+1} = \tilde{\alpha }_{2n+2} = \sqrt{\alpha _n}, \quad n \in \mathbb {Z}. \end{aligned}$$
(11.23)

By Lemma 11.3, \(\mathfrak {X}_0(0)\) is a non-trivial parabolic element with \({\text {tr}}\mathfrak {X}_0(0) = 2 \varepsilon \) for \(\varepsilon = (-1)^N\). Next, we have

$$\begin{aligned} \begin{aligned} \frac{\beta _n}{\alpha _n} - \frac{b_n}{a_n}&= \frac{\alpha _{n-1} + \alpha _n}{\alpha _n} - \frac{a_{n-1} + a_n}{a_n} \\&= \Big ( \frac{\alpha _{n-1}}{\alpha _n} - \frac{a_{n-1}}{a_n} \Big ). \end{aligned} \end{aligned}$$
(11.24)

Hence, by (11.15) and (11.23),

$$\begin{aligned}&\Big (\frac{\alpha _{n-1}}{\alpha _n} - \frac{a_{n-1}}{a_n}\Big ) \big ( 1-\varepsilon [\mathfrak {X}_n(0)]_{11} \big ) - \varepsilon \Big ( \frac{\beta _n}{\alpha _n} - \frac{b_n}{a_n} \Big ) [\mathfrak {X}_n(0)]_{21} \\&\quad = \Big (\frac{\alpha _{n-1}}{\alpha _n} - \frac{a_{n-1}}{a_n}\Big ) \Big ( 1-\varepsilon [\mathfrak {X}_n(0)]_{11} - \varepsilon [\mathfrak {X}_n(0)]_{21} \Big ) \equiv 0. \end{aligned}$$

In particular, the left-hand side belongs to \(\mathcal {D}_1^N\) and \(\mathfrak {u}\equiv 0\).

Let us observe that

$$\begin{aligned} \frac{\alpha _{n-1}}{\alpha _n} - \frac{a_{n-1}}{a_n}&= \Big ( \sqrt{\frac{\alpha _{n-1}}{\alpha _n}} - \sqrt{\frac{a_{n-1}}{a_n}} \Big ) \Big ( \sqrt{\frac{\alpha _{n-1}}{\alpha _n}} + \sqrt{\frac{a_{n-1}}{a_n}} \Big ) \\&= \frac{1}{\sqrt{a_n}} \Big ( \sqrt{\frac{\alpha _{n-1}}{\alpha _n}} \sqrt{a_n} - \sqrt{a_{n-1}} \Big ) \Big ( \sqrt{\frac{\alpha _{n-1}}{\alpha _n}} + \sqrt{\frac{a_{n-1}}{a_n}} \Big ). \end{aligned}$$

Hence,

$$\begin{aligned} \begin{aligned}&\sqrt{\alpha _n a_n} \Big (\frac{\alpha _{n-1}}{\alpha _n} - \frac{a_{n-1}}{a_n} \Big ) \\&\quad = \sqrt{\alpha _n} \Big ( \sqrt{\frac{\alpha _{n-1}}{\alpha _n}} \sqrt{a_n} - \sqrt{a_{n-1}} \Big ) \Big ( \sqrt{\frac{\alpha _{n-1}}{\alpha _n}} + \sqrt{\frac{a_{n-1}}{a_n}} \Big ). \end{aligned} \end{aligned}$$
(11.25)

In particular, the left-hand side of (11.25) belongs to \(\mathcal {D}_1^N\). Moreover, we get

$$\begin{aligned} \mathfrak {s}_i = 2 \sqrt{\alpha _{i-1}} \mathop {\lim _{n \rightarrow \infty }}_{n \equiv i \bmod N} \Big ( \sqrt{\frac{\alpha _{n-1}}{\alpha _n}} \sqrt{\gamma _n} - \sqrt{\gamma _{n-1}} \Big ), \end{aligned}$$

which together with (11.24) gives (11.20). Finally, by (11.23) and (11.14) we get

$$\begin{aligned} {\text {tr}}\mathfrak {X}_0'(0) = -\varepsilon \sum _{i=0}^{N-1} \frac{1}{\alpha _i} \sum _{k=0}^{N-1} \frac{\alpha _{i-1}}{\alpha _{i+k-1}} = -\varepsilon \sum _{i=0}^{N-1} \frac{\alpha _{i-1}}{\alpha _i} \sum _{k=0}^{N-1} \frac{1}{\alpha _{k}}. \end{aligned}$$

By Proposition 2.2 we obtain \(\mathfrak {S}=0\). Hence, in view of (2.21), formula (11.22) follows.
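Since \({\text {tr}}\mathfrak {X}_0(0) = 2\varepsilon \) is the key structural fact used above, it admits a quick numerical sanity check. The sketch below is our own construction, not taken from the text: it assumes that \(\mathfrak {X}_0(0)\) is realized as the product \(B_{N-1}(0) \cdots B_0(0)\) of the standard one-step transfer matrices \(B_j(x) = \big (\begin{smallmatrix} 0 &{} 1 \\ -\alpha _{j-1}/\alpha _j &{} (x-\beta _j)/\alpha _j \end{smallmatrix}\big )\), with \(\beta _j = \alpha _{j-1} + \alpha _j\) as in this remark. Under that reading the trace comes out as \(2(-1)^N\) for every positive N-periodic \((\alpha _n)\) we try.

```python
# Sanity check of tr X_0(0) = 2(-1)^N when beta_j = alpha_{j-1} + alpha_j.
# Assumption (ours, not a quotation of the text): X_0(0) is the product
# B_{N-1}(0)...B_0(0) of the standard one-step transfer matrices
# B_j(x) = [[0, 1], [-alpha_{j-1}/alpha_j, (x - beta_j)/alpha_j]].

def mat_mul(A, B):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

def trace_X0(alpha):
    """alpha: one period of a positive N-periodic sequence (alpha_0, ..., alpha_{N-1})."""
    N = len(alpha)
    beta = [alpha[(j - 1) % N] + alpha[j] for j in range(N)]
    X = [[1.0, 0.0], [0.0, 1.0]]
    for j in range(N):  # X = B_{N-1} ... B_1 B_0 evaluated at x = 0
        B = [[0.0, 1.0], [-alpha[(j - 1) % N] / alpha[j], -beta[j] / alpha[j]]]
        X = mat_mul(B, X)
    return X[0][0] + X[1][1]

for alpha in ([2.0], [1.0, 2.0], [1.0, 2.0, 3.0], [0.5, 1.5, 2.5, 7.0]):
    print(len(alpha), trace_X0(alpha))  # trace equals 2*(-1)^N in each case
```

The check is of course no substitute for the proof above; it merely confirms the algebra for a few sample periods.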

11.3 \(N=1\)

In this section we specialize our results to \(N=1\).

Remark 11.6

Suppose that for some \(\varepsilon \in \{-1,1\}\)

$$\begin{aligned} \bigg ( \sqrt{\gamma _n} \Big ( \frac{a_{n-1}}{a_n} - 1 \Big ) \bigg ), \bigg ( \sqrt{\gamma _n} \Big ( \frac{b_n}{a_n} + 2 \varepsilon \Big ) \bigg ), \bigg ( \gamma _n \Big ( 1 + \frac{a_{n-1}}{a_n} + \varepsilon \frac{b_n}{a_n} \Big ) \bigg ), \bigg ( \frac{\gamma _n}{a_n} \bigg ) \in \mathcal {D}_1, \end{aligned}$$

where \((\gamma _n)\) is a positive sequence satisfying

$$\begin{aligned} \big ( \sqrt{\gamma _n} - \sqrt{\gamma _{n-1}} \big ), \bigg ( \frac{1}{\sqrt{\gamma _n}} \bigg ) \in \mathcal {D}_1, \end{aligned}$$

and

$$\begin{aligned} \lim _{n \rightarrow \infty } \big ( \sqrt{\gamma _n} - \sqrt{\gamma _{n-1}} \big ) = 0, \quad \lim _{n \rightarrow \infty } \gamma _n = \infty . \end{aligned}$$

Let

$$\begin{aligned} \mathfrak {s}= \lim _{n \rightarrow \infty } \sqrt{\gamma _n} \Big ( \frac{a_{n-1}}{a_n} - 1 \Big ), \quad \mathfrak {r}= \lim _{n \rightarrow \infty } \sqrt{\gamma _n} \Big ( \frac{b_n}{a_n} + 2 \varepsilon \Big ), \quad \mathfrak {t}= \lim _{n \rightarrow \infty } \frac{\gamma _n}{a_n} \end{aligned}$$

and

$$\begin{aligned} \mathfrak {u}= \lim _{n \rightarrow \infty } \gamma _n \Big ( 1 + \frac{a_{n-1}}{a_n} + \varepsilon \frac{b_n}{a_n} \Big ). \end{aligned}$$

Then

$$\begin{aligned} \tau (x) = x \mathfrak {t}\varepsilon - \mathfrak {u}+ \frac{1}{4} \mathfrak {s}^2. \end{aligned}$$

In particular, if \(\tau (x)\) is not identically zero, then the hypotheses of Theorems 3.1 and 3.2 are satisfied.
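As a concrete cross-check (an illustrative choice of ours, not taken from the text), take \(a_n = \gamma _n\) and \(b_n = \gamma _{n-1} + \gamma _n\) with \(\gamma _n = (n+1)^{3/2}\), i.e. the construction of Remark 11.5 with \(N = 1\) and \(\alpha \equiv 1\), so that \(\varepsilon = (-1)^1 = -1\). Then \(\mathfrak {s}= \mathfrak {r}= 0\), \(\mathfrak {t}= 1\) and \(\mathfrak {u}= 0\), and the formula above reduces to \(\tau (x) = -x\), in agreement with (11.22) where both sums equal 1. A minimal numerical sketch:

```python
import math

# Illustrative choice (not from the text): gamma_n = (n+1)^{3/2},
# a_n = gamma_n, b_n = gamma_{n-1} + gamma_n, eps = -1 (N = 1, alpha = 1).
eps = -1.0
gamma = lambda n: (n + 1.0) ** 1.5
a = gamma
b = lambda n: gamma(n - 1) + gamma(n)

n = 10 ** 8  # evaluate the defining sequences at a large index
s = math.sqrt(gamma(n)) * (a(n - 1) / a(n) - 1)            # -> s = 0
r = math.sqrt(gamma(n)) * (b(n) / a(n) + 2 * eps)          # -> r = 0
t = gamma(n) / a(n)                                        # -> t = 1
u = gamma(n) * (1 + a(n - 1) / a(n) + eps * b(n) / a(n))   # -> u = 0

tau = lambda x: x * t * eps - u + s ** 2 / 4               # tau(x) ~ -x
print(s, r, t, u, tau(1.0))
```

The convergence in \(n\) is slow (of order \(n^{-1/4}\) for \(\mathfrak {s}\) and \(\mathfrak {r}\)), which is why a large index is used.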

Remark 11.7

Suppose that the sequences \((\tilde{\xi }_n)\) and \((\tilde{\zeta }_n)\) satisfy

$$\begin{aligned} (\sqrt{\delta _n} \tilde{\xi }_n), (\sqrt{\delta _n} \tilde{\zeta }_n) \in \ell ^1. \end{aligned}$$

Then

$$\begin{aligned} \Big ( 1 + \frac{f_n}{\delta _n} \Big ) ( 1 + \tilde{\xi }_n)&= 1 + \frac{f_n}{\delta _n} + \xi _n, \quad \text {and}\\ \Big ( 1 + \frac{g_n}{\delta _n} \Big ) ( 1 + \tilde{\zeta }_n)&= 1 + \frac{g_n}{\delta _n} + \zeta _n, \end{aligned}$$

where \(\xi _n = \tilde{\xi }_n \big ( 1 + \frac{f_n}{\delta _n} \big )\) and \(\zeta _n = \tilde{\zeta }_n \big ( 1 + \frac{g_n}{\delta _n} \big )\) satisfy \((\sqrt{\delta _n} \xi _n), (\sqrt{\delta _n} \zeta _n) \in \ell ^1\). Thus \(\ell ^1\)-type perturbations of (11.1) cover the Jacobi parameters of the form

$$\begin{aligned} \tilde{a}_n = \hat{a}_n \Big ( 1 + \frac{f_n}{\delta _n} + \xi _n \Big ), \quad \tilde{b}_n = -2 \varepsilon \hat{a}_n \Big ( 1 + \frac{g_n}{\delta _n} + \zeta _n \Big ), \end{aligned}$$

where \((\sqrt{\delta _n} \xi _n), (\sqrt{\delta _n} \zeta _n) \in \ell ^1\).

Example 11.8

The case when \(\hat{a}_n = (n+1)^\kappa \) for some \(\kappa > \tfrac{3}{2}\) and \(\delta _n = n+1\) was considered in [66] under the assumption that \(\tau (x) \ne 0\). More specifically, it was assumed that

$$\begin{aligned} a_n = (n+1)^\kappa \Big ( 1 + \frac{\mathfrak {f}}{n+1} + \mathcal {O}(n^{-2}) \Big ), \quad b_n = -2 \varepsilon (n+1)^\kappa \Big ( 1 + \frac{\mathfrak {g}}{n+1} + \mathcal {O}(n^{-2}) \Big ). \end{aligned}$$

In view of Remarks 11.7 and 11.2, the above Jacobi parameters are covered by the present article. Let us emphasize that we can allow any \(\kappa > 1\) and more general perturbations \((\xi _n)\) and \((\zeta _n)\).
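For this family the limits of Remark 11.6 can also be evaluated numerically. The sketch below uses illustrative values \(\kappa = 2\), \(\mathfrak {f}= \tfrac{1}{2}\), \(\mathfrak {g}= \tfrac{1}{4}\), \(\varepsilon = 1\), sets the \(\mathcal {O}(n^{-2})\) tails to zero, and assumes the choice \(\gamma _n = \delta _n = n+1\); all of these are our assumptions, not specifics from [66]. One then finds \(\mathfrak {s}= \mathfrak {r}= \mathfrak {t}= 0\) and \(\mathfrak {u}= 2\mathfrak {f}- 2\mathfrak {g}- \kappa \), so \(\tau (x) \equiv \kappa + 2\mathfrak {g}- 2\mathfrak {f}\) is a nonzero constant unless \(\mathfrak {f}- \mathfrak {g}= \kappa /2\).

```python
import math

# Illustrative instance of Example 11.8 (kappa, f, g chosen by us; the
# O(n^{-2}) tails are set to zero), with gamma_n = delta_n = n + 1 assumed.
kappa, f, g, eps = 2.0, 0.5, 0.25, 1.0

def a(n): return (n + 1) ** kappa * (1 + f / (n + 1))
def b(n): return -2 * eps * (n + 1) ** kappa * (1 + g / (n + 1))

n = 10 ** 6
gamma = lambda m: m + 1.0

s = math.sqrt(gamma(n)) * (a(n - 1) / a(n) - 1)            # -> 0
r = math.sqrt(gamma(n)) * (b(n) / a(n) + 2 * eps)          # -> 0
t = gamma(n) / a(n)                                        # -> 0 since kappa > 1
u = gamma(n) * (1 + a(n - 1) / a(n) + eps * b(n) / a(n))   # -> 2f - 2g - kappa
tau0 = -u + s ** 2 / 4                                     # tau(x) ~ kappa + 2g - 2f
print(s, r, t, u, tau0)
```

With these sample values \(\mathfrak {u}= -\tfrac{3}{2}\) and \(\tau \equiv \tfrac{3}{2}\), so the non-vanishing hypothesis of Theorems 3.1 and 3.2 holds for this instance.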