Uni-asymptotic Linear Systems and Jacobi Operators

A family $\{U_s\}_{s\in S}$ of bounded linear operators in a normed space $X$ is uni-asymptotic when all its trajectories $\{U_s x\}_{s\in S}$ with $x \neq 0$ have the same norm-asymptotic behavior (see 1.5); $\{U_s\}_{s\in S}$ is tight when the operator norm and the minimal modulus of $U_s$ have the same asymptotic behavior (see 1.6). We prove that uni-asymptoticity is equivalent to tightness if $\dim X < +\infty$, and that the finite dimension is essential.
Some other conditions equivalent to uni-asymptoticity are provided, including asymptotic formulae for the operator norm and for the trajectories, expressed in terms of the determinants $\det U_s$ (see Theorem 1.7). We find a connection between these abstract results and some results and notions from the spectral theory of Jacobi operators, e.g., the H-class property for the transfer matrix sequence.


Introduction
Many mathematical models describing "real" processes are built on the basis of some families of operators. Typically, when we want to define a dynamical system (a mathematical model for the time dynamics of a real process), we try to find a family $\{U_s\}_{s\in S}$ of operators $U_s : X \to X$ which "codes the process mathematically". Then $\{U_s\}_{s\in S}$ has the following "real" interpretation: if the initial state of the process was $x \in X$, then the state of the process at the time moment $s \in S$ is $U_s x$.
$J$; namely, they are the so-called $2d$-generalized eigenvectors. We show here that the uni-asymptoticity of $\{U_n\}_{n\in\mathbb{N}_2}$ is equivalent to the well-known H-class property of the transfer matrix family, and we describe the asymptotic behavior of the norms of the vector terms of the above "eigenvectors" when the H-class property holds; see Theorem 2.7. It is expressed in a simple way by the determinants of the weight terms for $J$ (see 2.29). Such a behavior of the norms of the vector terms in the scalar case $d = 1$ was described before (without using the name H-class) by some authors; see, e.g., [12]. Note also that, thanks to subordination theory [5], the uni-asymptoticity of the set of 2-generalized eigenvectors (for the scalar case $d = 1$) is closely related to the absolute continuity of $J$; see Theorem 2.5.

Notation
We introduce here some notation used in the paper. The remaining notation is introduced "locally". Let us denote:
$$\mathbb{C}_* := \{z \in \mathbb{C} : z \neq 0\}, \quad \mathbb{R}_+ := \{t \in \mathbb{R} : t > 0\}, \quad \mathbb{N}_{n_0} := \{n \in \mathbb{Z} : n \geq n_0\}. \qquad (0.1)$$
For an arbitrary set $S$ and $f, g : S \to \mathbb{C}$ we define $f \prec g$ iff there exists $C \in \mathbb{R}_+$ such that $|f(s)| \leq C|g(s)|$ for all $s \in S$; moreover, $f \asymp g$ iff ($f \prec g$ and $g \prec f$). We shall also use the alternative notation $f(s) \prec_{s\in S} g(s)$, and analogously for the symbol $\asymp$.
Let $X$ be a (real or complex) normed space with the norm $\|\cdot\|$, and suppose that $\dim X > 0$. As usual, the same symbol is used here also for the operator norm in the space $B(X)$ of bounded linear operators, i.e., $\|A\| := \sup_{\|x\|=1} \|Ax\|$ for $A \in B(X)$. We denote $X_* := X \setminus \{0\}$, $S_X := \{x \in X : \|x\| = 1\}$, and $B_*(X) := \{A \in B(X) : \operatorname{Ker} A = \{0\}\}$. We shall also use the symbol $\Downarrow \cdot \Downarrow$ for the so-called minimum modulus of $A$:

$$\Downarrow A \Downarrow \; := \inf_{\|x\| = 1} \|Ax\|.$$
Recall that if $A \in B_*(X)$ and $\operatorname{Ran}(A) = X$, then $\|A^{-1}\| = \Downarrow A \Downarrow^{-1}$, which covers also the case of $\|A^{-1}\| = +\infty$ with the convention $0^{-1} := +\infty$.
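The relation between the minimum modulus and the norm of the inverse can be checked numerically in the special Euclidean case $X = \mathbb{C}^m$ (a hedged sketch, not from the paper): there, $\|A\|$ is the largest singular value of $A$ and $\Downarrow A \Downarrow$ is the smallest one, so for invertible $A$ one has $\Downarrow A \Downarrow = \|A^{-1}\|^{-1}$.

```python
import numpy as np

def op_norm(A):
    # operator norm in the Euclidean case = largest singular value
    return np.linalg.svd(A, compute_uv=False)[0]

def min_modulus(A):
    # minimum modulus: inf over the unit sphere of ||Ax|| = smallest singular value
    return np.linalg.svd(A, compute_uv=False)[-1]

A = np.array([[3.0, 1.0],
              [0.0, 2.0]])
# For invertible A: the minimum modulus is the reciprocal of ||A^{-1}||.
assert np.isclose(min_modulus(A), 1.0 / op_norm(np.linalg.inv(A)))
```

The matrix `A` here is an arbitrary illustrative choice; the identity itself holds for every invertible matrix, since the singular values of $A^{-1}$ are the reciprocals of those of $A$.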

Uni-asymptoticity and Tightness
Consider an "abstract" family $\{U_s\}_{s\in S}$ of bounded linear operators in a normed space $X$, where $S$ is a non-empty "index" set. Typically $S = \mathbb{N}$ or $\mathbb{N}_{n_0}$ for some "starting index" $n_0$; however, neither the cardinality nor any extra structure of $S$ is important now. Let us recall again that any indexed family $\{\mathrm{object}_s\}_{s\in S}$ is identified here with the function on $S$ acting by the formula $s \mapsto \mathrm{object}_s$. We distinguish several specific properties of such operator families. The main notion for us here is uni-asymptoticity.
The name "uni-asymptotic" will be used here also in a somewhat different situation: for some sets $F$ of functions $f = \{f(s)\}_{s\in S}$ from $S$ into $X$. Let $O$ be the constant $0$-vector function on $S$. (1.7) These two kinds of "uni-asymptoticity" are closely related. For a family $\{U_s\}_{s\in S}$ of operators from $B(X)$, consider the set $\operatorname{Orb}\{U_s\}_{s\in S}$ of all its trajectories (orbits). By the linearity of the operators, $\operatorname{Orb}\{U_s\}_{s\in S}$ is also a linear subspace of the space of all functions from $S$ into $X$. By the above definitions we immediately get the following. Let us also note the following properties of families of operators. 4. If $\{U_s\}_{s\in S}$ is tight, then it is uni-asymptotic.
Proof. Parts 1. and 2. are obvious. To prove part 3., suppose that $\{U_s\}_{s\in S}$ is uni-asymptotic and $U_s \notin B_*(X)$ for some $s \in S$. So, let $0 \neq x_0 \in \operatorname{Ker} U_s$. Then for any To get part 4., suppose that $\{U_s\}_{s\in S}$ is tight and choose $C \in \mathbb{R}_+$ such that $\|U_s\| \leq C \Downarrow U_s \Downarrow$ for any $s \in S$. For $x, y \in S_X$ and for any $s \in S$ we have i.e., by part 1., $\{U_s\}_{s\in S}$ is uni-asymptotic.
The main result of this part says that in the finite dimensional case the above point 4. can be essentially strengthened.
Proof. By Fact 1.4.4. it suffices to prove "$\Longrightarrow$". We include some remarks in this proof, showing whether the $\dim X < +\infty$ assumption is important or not for the given part of the proof. Suppose that $\{U_s\}_{s\in S}$ is uni-asymptotic. Consider first the special case with the extra assumption that $U_s \in B_*(X)$ for any $s \in S$. So, choosing an arbitrary $x_0 \in X_*$ we can define a "rescaling number" and, also by Fact 1.4.2., $\{V_s\}_{s\in S}$ is uni-asymptotic. So, applying this for any vector $x \in X_*$ and for $y = x_0$, let us choose $c_x, C_x \in \mathbb{R}_+$ such that (1.11) holds. Hence, in particular, the family $\{V_s\}_{s\in S}$ is bounded in $B(X)$ (as follows, e.g., from the Banach–Steinhaus theorem, but here it is an even more elementary fact, because $\dim X < +\infty$); so, choose $C \in \mathbb{R}_+$ such that (1.12) holds. We shall now prove that there exists $\varepsilon \in \mathbb{R}_+$ such that $\varepsilon \leq \Downarrow V_s \Downarrow$ for any $s \in S$. Suppose that this is not true. Thus, there exists a sequence (the constant $\frac{3}{2}$ can be replaced by $1$, by $\dim X < +\infty$, but this constant has no importance). We have (1.14). It is only now that we essentially use the $\dim X < +\infty$ assumption: we choose a convergent subsequence $\{v_{k_n}\}_{n\geq1}$ of $\{v_n\}_{n\geq1}$, i.e., $k_n \to +\infty$ and $\|v_{k_n} - v\| \to 0$ for some $v \in X$ with $\|v\| = 1$. But by (1.12) and (1.11), since $c_v > 0$, we get a contradiction with (1.14). This finishes the proof of the special case. Now we go back to the general case, and we define $S_1 := \{s \in S : U_s \in B_*(X)\}$ and $S_2 := S \setminus S_1$. By Fact 1.4.3. we have $\Downarrow U_s \Downarrow = 0 = \|U_s\|$ for any $s \in S_2$, i.e., $\Downarrow U_s \Downarrow \asymp_{s\in S_2} \|U_s\|$. But also $\Downarrow U_s \Downarrow \asymp_{s\in S_1} \|U_s\|$ by the special case just proven, so finally $\Downarrow U_s \Downarrow \asymp_{s\in S} \|U_s\|$.
The finite dimension assumption was important not only for the above proof. The assumption that X is a Banach space is not sufficient.
In particular, we have the following. Consider now some $p \in [1; +\infty)$ and the space $X := \ell^p = \ell^p(\mathbb{N})$ of "power $p$-summable" sequences. Let us study the family $\{U_s\}_{s\in S}$, where $U_s$ is the operator of multiplication by $F_s$ in $X$, i.e., $U_s x := F_s \cdot x$ for any sequence $x \in X$ and any $s \in S$. By (1.16), for each $x$ with $\|x\| = 1$ fix some $k_0 := k_0(x)$ such that $|x_{k_0}| > 0$. Then, again by (1.16), for such $x$ and for any $s < k_0$ we have On the other hand, if $s \geq k_0$, then by (1.15) holds for any $s \in \mathbb{N}$. But by (1.16) we also have $\|U_s x\| \leq s\|x\| = s$. Hence $\{U_s\}_{s\in S}$ is uni-asymptotic, but (1.17) shows that it is not tight. Note also that the same arguments work for the similar example with $X = \ell^\infty$, the standard space of bounded sequences.

Uni-asymptoticity and Determinants
We find here the "actual size" of the norms of the vectors from the trajectories for the uni-asymptotic finite dimensional case. It is described in a simple way by the determinants of the operators U s .
We shall use the following simple lemma in the proof of this result.

there exists a positive constant $C_m$ such that
Proof. Let $\lambda_1, \dots, \lambda_m$ be all the eigenvalues of $U$ (taking into account their algebraic multiplicities), and for each $j = 1, \dots, m$ let $x_j$ be some normalized eigenvector for $U$ and $\lambda_j$. By the definitions of $\|\cdot\|$ (the operator norm) and $\Downarrow \cdot \Downarrow$, we have $\Downarrow U \Downarrow \; \leq |\lambda_j| \leq \|U\|$ for each $j$. Now, multiplying all the $m$ inequalities "side-by-side" and using $\det U = \lambda_1 \cdot \ldots \cdot \lambda_m$, we get part 1. of the lemma. To get part 2., consider first $m \times m$ scalar matrices. Observe that defining $\|A\|_{\max} := \max\{|A_{ij}| : i, j = 1, \dots, m\}$ for any matrix $A$ (here $A_{ij}$ is the term of $A$ from the $i$-th row and $j$-th column), we get a certain norm in the matrix space. By Cramer's rule, if $A$ is invertible, then $(A^{-1})_{ij} = \frac{(A^C)_{ji}}{\det A}$, where $A^C$ denotes the matrix of cofactors of $A$. Applying "the permutation formula" for the determinant to the terms of $A^C$, we get the required estimate for any invertible $A$. Now, fixing a linear base in $X$ to establish a linear isomorphism between $B(X)$ and the $m \times m$ scalar matrix space, we use it to transfer $\|\cdot\|_{\max}$ onto a norm in $B(X)$. Finally, the equivalence of any two norms for $B(X)$ (it is an $m^2$-dimensional space) proves the assertion of part 2. Note that we have used here also $\det U = \det A$ for $A$ being the matrix of $U$ in the fixed base.
To obtain part 3., we first easily check that $\|\cdot\|_e$ satisfies all the conditions for a norm. Then the estimate $\|U\|_e \leq \|U\|$ follows just from the fact that the norm on the RHS is the operator norm and that the base vectors have been normalized. The second estimate, with some constant $D_e$, follows again from the equivalence of all the norms for $B(X)$.
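The two-sided determinant bound of Lemma 1.8.1, $\Downarrow U \Downarrow^m \leq |\det U| \leq \|U\|^m$, can be illustrated numerically in the Euclidean case (a hedged sketch, not the paper's argument): there $|\det U|$ is the product of the singular values, each of which lies between $\Downarrow U \Downarrow$ and $\|U\|$.

```python
import numpy as np

def check_det_bounds(U):
    # Euclidean-case check of  min_modulus^m <= |det U| <= norm^m:
    # |det U| is the product of singular values, each in [sigma_min, sigma_max].
    m = U.shape[0]
    s = np.linalg.svd(U, compute_uv=False)   # s[0] = sigma_max, s[-1] = sigma_min
    d = abs(np.linalg.det(U))
    tol = 1e-9                               # slack for floating-point error
    return s[-1] ** m <= d + tol and d <= s[0] ** m + tol

rng = np.random.default_rng(1)
# the random 3x3 test matrices are illustrative; the bound holds for all U
assert all(check_det_bounds(rng.normal(size=(3, 3))) for _ in range(100))
```

Note that the lemma itself concerns an arbitrary norm on $X$; the singular-value identity used here is specific to the Euclidean norm, which is why the constant $C_m$ appears in the general statement.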
Proof of Theorem 1.7. Observe that "(iii) $\Longrightarrow$ (iii')" follows directly from Lemma 1.8.1. Let us prove "(ii) $\Longleftrightarrow$ (iii)". Suppose (ii). By Lemma 1.8.3. and by (ii) applied to all the $m$ vectors of a fixed normalized base $e$, we get, for any $s \in S$ and with some constants $C, C'$, the required estimate. Hence, using also Lemma 1.8.1., we get (iii). Now, suppose (iii) and let $x \in X_*$. We thus have, for any $s \in S$ and with some constant $C$, the corresponding estimate. To get the "opposite direction" estimate, we can assume that $\|x\| = 1$. Then by Lemma 1.8.2., for any $s \in S$, and by (iii), for some constant $C' > 0$ and for any $s \in S$, (ii) is proved. By Theorem 1.5 we also have "(i) $\Longleftrightarrow$ (iv)", so it suffices to prove "(ii) $\Longrightarrow$ (i)" and "(iv) $\Longrightarrow$ (iii)". But the former is obvious by the transitivity of the $\asymp_{s\in S}$ relation, and the latter follows directly from Lemma 1.8.1.
As we could see, there are many equivalent ways to express uni-asymptoticity in the finite dimensional case. On the other hand, Example 1.6 shows that none of these ways works for a general Banach space $X$ and a general family $\{U_s\}_{s\in S}$. Thus, a natural open problem can be formulated: to find some equivalent criteria for uni-asymptoticity for some special kinds of families $\{U_s\}_{s\in S}$ of operators acting in infinite dimensional Banach spaces $X$, e.g., for discrete semigroups (with $S = \mathbb{N}_0$), for $C_0$-semigroups (with $S = [0; +\infty)$), and others.

Jacobi Operators: Generalized Eigenvectors and $2d$-Generalized Eigenvectors

Let us start by recalling the notions of block Jacobi matrix and operator (see, e.g., [2,6,13] and citations therein) and its particular "scalar" case. Let $d \geq 1$ and let $A_1, B_1, A_2, B_2, \dots$ be $d \times d$ self-adjoint real matrices. We consider the semi-infinite "block-tri-diagonal matrix" of the form: $\mathcal{J}$ is called a block Jacobi matrix, and in the main case $d = 1$ it is just a Jacobi matrix (also: scalar J. m.). The sequences $\{A_k\}_{k\geq1}$, $\{B_k\}_{k\geq1}$ are called the weight and diagonal sequence, respectively, and here it is assumed that We identify the matrix $\mathcal{J}$ with the formal Jacobi operator $\mathcal{J}$, acting in the linear space of all sequences of $\mathbb{C}^d$-vectors $u = \{u(k)\}_{k\geq1}$ by the formula:
$$(\mathcal{J}u)(k) = A_{k-1}u(k-1) + B_k u(k) + A_k u(k+1),$$
where we denote additionally $A_0 := 0$, $u(0) := 0$, so that the formula makes sense also for $k = 1$. We distinguish here the block Jacobi matrix (= formal operator) $\mathcal{J}$ and the block Jacobi operator $J$, being our main object here. Roughly speaking, $J$ is just "the restriction" of $\mathcal{J}$ to the Hilbert space of square-summable $\mathbb{C}^d$-vector sequences, and $J$ is self-adjoint when $\sum_{n\geq1} \frac{1}{\|A_n\|} = +\infty$ (see [1,6] for some more information related to the self-adjointness problem for block Jacobi operators). So, in particular, if the weight sequence is bounded, then $J$ is always self-adjoint, regardless of whether the diagonal sequence is bounded or not. In the present paper the self-adjoint case of $J$ is the most important for us.
Consider now $\lambda \in \mathbb{C}$ and a sequence $u$ of $\mathbb{C}^d$-vectors.
We use the abbreviation GEV for "generalized eigenvector".
Note that $k = 1$ is omitted above and we do not require that $u$ is in $D(J)$ (nor in $\ell^2$). (2.19) is equivalent to a recurrence which allows one to construct easily the unique GEV $u$ (for $\lambda$), starting from any pair of $\mathbb{C}^d$-vectors $u(1), u(2)$. Denote: $GEV(\lambda) := \{u : u \text{ is a GEV of } J \text{ for } \lambda\}$.
We have $\dim GEV(\lambda) = 2d$. Let us rewrite (2.20) in a $\mathbb{C}^{2d}$-vector form: for a sequence $u$ of $\mathbb{C}^d$-vectors, where $T_k(\lambda)$ is the $k$-th transfer matrix (of size $2d \times 2d$): $u \in GEV(\lambda)$.
(2.24) Some methods used in spectral studies of scalar Jacobi operators are based on the general idea of finding relations between asymptotic properties of sequences from $GEV(\lambda)$ or $\overrightarrow{GEV}(\lambda)$ and spectral properties of $J$. Let us stress that the above word "asymptotic" can have many particular meanings. Different spectral results can be related to different kinds of asymptotic properties, and for some asymptotic properties the use of $2d$-generalized eigenvectors can be more suitable than the use of generalized eigenvectors; see, e.g., [10]. These methods, however, are less developed in the case of block Jacobi operators (with $d > 1$).
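The transfer-matrix construction of generalized eigenvectors can be sketched numerically for the scalar case $d = 1$ (a hedged illustration: the weights $a_k$, diagonal $b_k$, $\lambda$, and the initial pair below are arbitrary choices, and the $2 \times 2$ matrix used is one standard convention for $T_k(\lambda)$, not necessarily the paper's (2.23)). The GEV equation $a_{k-1}u(k-1) + b_k u(k) + a_k u(k+1) = \lambda u(k)$, $k \geq 2$, is rewritten as $(u(k), u(k+1))^T = T_k(\lambda)(u(k-1), u(k))^T$.

```python
import numpy as np

def transfer_matrix(k, lam, a, b):
    # One standard convention for the k-th scalar transfer matrix:
    # maps (u(k-1), u(k)) to (u(k), u(k+1)).
    return np.array([[0.0, 1.0],
                     [-a[k - 1] / a[k], (lam - b[k]) / a[k]]])

def gev(u1, u2, lam, a, b, n):
    # builds u(1), ..., u(n) from the initial pair (u(1), u(2))
    u = [u1, u2]
    for k in range(2, n):
        v = transfer_matrix(k, lam, a, b) @ np.array([u[-2], u[-1]])
        u.append(v[1])
    return u

# illustrative weights a[k] and diagonal b[k], k = 1..5 (index 0 unused)
a = [None, 1.0, 2.0, 1.5, 1.0, 2.5]
b = [None, 0.5, -1.0, 0.0, 1.0, 0.5]
lam = 0.3
u = gev(1.0, -0.7, lam, a, b, 5)       # u[j] stands for u(j+1)
# the constructed sequence satisfies the GEV equation for k = 2, 3, 4
for k in range(2, 5):
    assert np.isclose(a[k - 1] * u[k - 2] + b[k] * u[k - 1] + a[k] * u[k],
                      lam * u[k - 1])
```

Since each $T_k(\lambda)$ is invertible whenever $a_k \neq 0$, the pair $(u(1), u(2))$ indeed determines the GEV uniquely, matching $\dim GEV(\lambda) = 2d = 2$ here.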

H-Class and Spectral Results for Scalar Jacobi Operator
The notion of H-class for sequences of $2 \times 2$ scalar matrices was introduced to study the absolutely continuous spectrum of scalar Jacobi operators; see, e.g., [3,9]. In a natural way, it can be extended to sequences of $m \times m$ matrices for any $m$. Note that the above requirement is quite strong, since for any $\{C_n\}_{n\geq n_0}$ the opposite-direction estimate with respect to (2.25) holds. Namely, by Lemma 1.8.1.,
$$\prod_{k=n_0}^{n} |\det C_k| = |\det(C_n \cdot \ldots \cdot C_{n_0})| \leq \|C_n \cdot \ldots \cdot C_{n_0}\|^m. \qquad (2.26)$$
So, we easily get an equivalent formulation of the H-class definition, expressed in the asymptotic terms "$\asymp$" used in this paper. Theorem 2.5. (see [9]) If $d = 1$ and $J$ is self-adjoint, then $J$ is absolutely continuous in $\operatorname{Int}(H(J))$ and $\operatorname{Int}(H(J)) \subset \sigma_{ac}(J)$.
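The automatic estimate (2.26) can be verified numerically in the Euclidean case (a hedged sketch with arbitrary random matrices, not tied to any particular transfer matrix sequence): the product of the $|\det C_k|$ never exceeds the $m$-th power of the norm of the ordered product $C_n \cdots C_{n_0}$.

```python
import numpy as np

def check_226(Cs):
    # Cs = [C_{n0}, ..., C_n]; checks  prod |det C_k| <= ||C_n ... C_{n0}||^m
    m = Cs[0].shape[0]
    P = np.eye(m)
    det_prod = 1.0
    for C in Cs:
        P = C @ P                      # running ordered product C_n ... C_{n0}
        det_prod *= abs(np.linalg.det(C))
    norm_P = np.linalg.svd(P, compute_uv=False)[0]   # operator (spectral) norm
    return det_prod <= norm_P ** m * (1.0 + 1e-9)    # slack for rounding

rng = np.random.default_rng(2)
Cs = [rng.normal(size=(2, 2)) for _ in range(6)]
assert check_226(Cs)
```

This is exactly why the H-class requirement (the reverse comparison) is the nontrivial part: one inequality is free, so H-class amounts to the $\asymp$-comparability of the two sides.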

H-Class and Uni-asymptoticity
The concept of H-class turns out to be closely related to our uni-asymptoticity notion. Consider a sequence $\{C_n\}_{n\geq n_0}$ of $m \times m$ complex matrices and define: