Non-Existence of Subordinate Solutions for Jacobi Operators in Some Critical Cases

We present a method of repairing gaps in proofs of the absolute continuity of the spectrum of Jacobi operators. Such gaps have been found in several recent papers, dealing mainly with the so-called critical (i.e., Jordan box) case. We solve the problem by proving that no subordinate solution exists in many cases with two linearly independent generalised eigenvectors possessing "similar" asymptotic behaviour.

In this paper we concentrate on the second step. It was usually treated as the simplest one, especially after the generalisation of the Behncke-Stolz lemma had been formulated by Janas and Naboko. This generalisation (GBS for short, see e.g. [10, Lemma 1.5]) states that for a fixed spectral parameter no subordinate solution exists (non-subordinacy) if all the generalised eigenvectors satisfy a certain estimate expressed in terms of the weights of the Jacobi operator. GBS proved to be applicable to many classes of Jacobi operators. Thus, to make its use more "automatic", several conditions simpler to check, yielding the estimate from GBS, have been found. One of them, formulated in terms of transfer matrices, is the so-called H-class condition (see [23]). Another one, exactly in the spirit of step 2. above, relates non-subordinacy to some asymptotic properties of the C^2-vector-generalised eigenvectors, i.e., of the C^2-vector sequences x = {x(n)}_{n≥1} built from a (scalar) generalised eigenvector u (i.e., a sequence satisfying (2.3)). The appropriate result can be formulated as follows (compare with [24, Lemma 5.9]).

Proposition 1.1. Suppose that the Jacobi operator J satisfies (2.2), and for a spectral parameter λ ∈ C it possesses two C^2-vector-generalised eigenvectors x_+, x_− such that
x_±(n) = ψ_±(n) y_±(n), n ∈ N, (1.1)
where y_± are sequences of C^2 vectors such that
y_±(n) −→ y_±^∞, (1.2)
y_−^∞, y_+^∞ are linearly independent C^2 vectors, and ψ_± are sequences of nonzero numbers satisfying (1.3) for n sufficiently large. Then no subordinate solution for J and λ exists.
Unfortunately, however well known this fact is, it has never been explicitly formulated in the above simple form; it has rather been used as one of the "folklore type" arguments. This was probably the first source of several further misunderstandings and gaps. Some authors forgot that the above result concerns not the generalised eigenvectors, but the C^2-vector-generalised eigenvectors. In particular, the linear independence of y_−^∞ and y_+^∞ is quite a strong condition, typically satisfied in the so-called non-critical case (see, e.g., [8]), and it cannot be replaced by the assumption that x_+ and x_− are linearly independent sequences of C^2 vectors. Thus it is not surprising that the following conjecture on generalised eigenvectors is also not true.

Conjecture 1.2. Suppose that the Jacobi operator J satisfies (2.2), and for a spectral parameter λ ∈ C it possesses two linearly independent generalised eigenvectors u_+, u_− such that
u_±(n) = ψ_±(n) · s_±(n), (1.4)
where s_± are scalar sequences and ψ_± are sequences of nonzero numbers satisfying
ψ_+(n) is the complex conjugate of ψ_−(n), for n sufficiently large. (1.5)
Then no subordinate solution for J and λ exists.
To check that this is false, one can consider a self-adjoint Jacobi matrix J with real coefficients satisfying (2.2), such that for some λ ∈ R there exists a subordinate solution for J and λ (surely, this is a frequent situation; e.g., J can be an arbitrary self-adjoint Jacobi operator with non-empty point spectrum, and λ an arbitrary eigenvalue of it). Let u now be the generalised eigenvector for these J and λ determined by the initial conditions u(1) = 1, u(2) = i. Define u_− := u and u_+ := ū (the complex conjugate of u). Both u_+, u_− are generalised eigenvectors for J and λ, since λ and the coefficients of J are real. Moreover, u_+, u_− are linearly independent, since u_+(1) = 1, u_+(2) = −i. So taking ψ_± := u_± and s_± ≡ 1, we see that the assumptions of the conjecture hold, but the assertion does not.
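The linear-independence step in this counterexample is a one-line determinant computation (a sketch using the initial data fixed above):

```latex
\det\begin{pmatrix} u_-(1) & u_+(1)\\ u_-(2) & u_+(2) \end{pmatrix}
  = \det\begin{pmatrix} 1 & 1\\ i & -i \end{pmatrix} = -2i \neq 0 ,
```

so u_+ and u_− cannot be proportional; at the same time ψ_+ = u_+ is the complex conjugate of ψ_− = u_−, so (1.5) holds with s_± ≡ 1, while a subordinate solution for J and λ exists by construction.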
Regrettably, this conjecture, though false, was also regarded as a "folklore type" result. This could be the direct reason for the gaps in the proofs of non-subordinacy in many recent papers on the spectral theory of Jacobi matrices: [4,8,17,19,26,27,33]. These papers concentrate mainly on the so-called Jordan box (i.e., critical) case; see [8].

M. Moszyński IEOT
The aim of the present article is to fill those gaps. We try to reach this goal, roughly speaking, by finding some extra conditions which, added to the assumptions of Conjecture 1.2, change it into a theorem. The reason why we attempt to repair just Conjecture 1.2 is that it concerns the typical form of asymptotic information obtained by the use of difference equation theory tools (see e.g. [6]). This information, in the scalar representation, can be expressed as follows: certain generalised eigenvectors u_± possess the form (1.4), where "the main scalar parts" ψ_± of the asymptotics are explicitly "computable", and we have only some general regularity information on the implicit parts s_±.
The main extra condition we add to the assumptions of Conjecture 1.2 is formulated here in terms of the so-called absolute γ,2-Cesaro convergence of some scalar sequences (for short, γ,2-convergence), which is defined in Sect. 2. The convergence weight γ is determined here by the absolute value of the main scalar parts, more precisely by (1.6), and the sequence z which is to be γ,2-convergent is given by the complex signum function (with sgn(a) := a/|a| for a ≠ 0) of these main scalar parts. The result is the following theorem. Note that this result gives even sufficient and necessary conditions for subordinacy, under the assumptions of Conjecture 1.2 together with the self-adjointness of J. Its proof is immediate, by a slightly more general result, Theorem 4.2, in which we do not assume that the main scalar parts ψ_± are mutually conjugated, and in which some weaker assumptions on s_± are made. We need this stronger version for some cases. But its proof is also elementary. In some sense, it is rather a "simple observation" than a "theorem". However, it is a convenient observation, since it transfers the essence of the problem from subordinacy to a more classical notion of convergence.
Studies on the absolute γ, p-convergence are the subject of Sect. 3. The main result of this section is Criterion 3.11, which gives some sufficient conditions for divergence in this sense. This will allow us to prove non-subordinacy in many important cases not covered by Proposition 1.1.
Section 4, the most technical one, is devoted to the proof of Theorem 4.2, which is the previously mentioned generalisation of Theorem 1.3. We also prove Theorem 4.3, which gives some necessary conditions for subordinacy in terms of γ,2-convergence. We shall need this second result later, in some proofs of non-subordinacy which cannot be obtained by the use of Theorem 4.2.
In Sect. 5 we show how to use the tools from the previous sections to prove that for certain Jacobi operators J and spectral parameters λ no subordinate solution exists. We prove several results covering various forms of asymptotics of two linearly independent generalised eigenvectors described in the recent literature. Finally, in Remarks 5.7, we show how to fill the gaps in most of the proofs of non-subordinacy mentioned before.
Some basic notation and definitions are collected in Sect. 2. The remaining notation can be found in some particular sections.
We close the paper with Sect. 6, containing some open problems.

Preliminaries
Let us consider the right-side infinite Jacobi matrix J acting by
(Ju)(n) = w_{n−1} u(n−1) + q_n u(n) + w_n u(n+1), n ∈ N,
for any u = {u(n)}_{n≥1} ∈ ℓ(N), with the convention that w_0 := 0 =: u(0) (note that we use both ways of denoting sequence terms in this paper: the standard one, like w_n, and the functional one, like u(n)). In this paper we assume that w_n ≠ 0 for all n ∈ N.
For a fixed λ ∈ C we consider generalised eigenvectors of J for λ, i.e., solutions of (2.3). The above form of the definition seems convenient for our further considerations, though note that it is usually formulated in another way.
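For intuition, the following sketch generates generalised eigenvectors numerically from the three-term recurrence w_{n−1}u(n−1) + q_n u(n) + w_n u(n+1) = λu(n), imposed for n ≥ 2 (so the solution space is two-dimensional). The diagonal symbol q_n and the concrete sample coefficients (the free Jacobi matrix) are our illustrative assumptions, not taken from the text. The sketch also checks the classical fact that the modified Wronskian w_n(u(n)v(n+1) − u(n+1)v(n)) of two solutions is constant in n:

```python
# Sketch: generalised eigenvectors of a Jacobi matrix via the recurrence
#   w_{n-1} u(n-1) + q_n u(n) + w_n u(n+1) = lambda * u(n),   n >= 2,
# with the convention w_0 := 0 =: u(0).  Sample coefficients (w_n = 1,
# q_n = 0: the free Jacobi matrix) are purely illustrative.

def solve_recurrence(w, q, lam, u1, u2, N):
    """Return the list [u(0), u(1), ..., u(N)] with u(1)=u1, u(2)=u2.

    w and q are lists indexed so that w[n], q[n] are the n-th off-diagonal
    and diagonal entries (index 0 holds the formal convention w_0 = 0)."""
    u = [0.0, u1, u2]                     # u[0] is the convention u(0) = 0
    for n in range(2, N):
        # u(n+1) = (lam - q_n) u(n)/w_n - w_{n-1} u(n-1)/w_n
        u.append((lam * u[n] - q[n] * u[n] - w[n - 1] * u[n - 1]) / w[n])
    return u

N = 50
w = [1.0] * (N + 1)
w[0] = 0.0                                # the convention w_0 := 0
q = [0.0] * (N + 1)
lam = 0.5

u = solve_recurrence(w, q, lam, 1.0, 0.0, N)
v = solve_recurrence(w, q, lam, 0.0, 1.0, N)

# the modified Wronskian w_n (u(n) v(n+1) - u(n+1) v(n)) is constant in n
wrons = [w[n] * (u[n] * v[n + 1] - u[n + 1] * v[n]) for n in range(1, N)]
print(min(wrons), max(wrons))
```

Two solutions are linearly independent exactly when this constant is nonzero, which is how linear independence of u_+, u_− is checked throughout the paper.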
Denote by Sub(λ) the set consisting of the 0 sequence and of all the subordinate solutions for J and λ. Let us also recall some fundamental properties of subordinate solutions (see [22], pp. 514-515 for the proofs).
Sometimes we shall also use a more precise notation, to stress the J-dependence of the sets introduced above. The abbreviation 'for k s.l.' means 'for k sufficiently large', i.e., there exists N ∈ Z such that the statement holds for all k ≥ N. Analogously, 'for k s.s.' ('for k sufficiently small') means that there exists N ∈ Z such that the statement holds for all k ≤ N.
If x is a sequence (in this paper 'sequence' always means a right-side infinite sequence), then its starting index will often be denoted by st(x). The set of all scalar (complex) sequences (with st(x) undetermined) is denoted here by ℓ, and as usual, ℓ^p and ℓ^∞ denote its subsets consisting of p-th power summable (p > 0) and, respectively, bounded sequences. The set of scalar sequences x which are bounded and satisfy lim inf_{n→+∞} |x_n| > 0 is denoted by ℓ^∞_*. To shorten the notation, we shall often omit writing k ∈ Z, n ∈ Z, etc., when the integer character of the variable is clear from the context; also, for two integers n_1, n_2 we shall write (n_1; n_2), [n_1; n_2), etc., to denote the intersection of Z with the appropriate real intervals (denoted in the same way).
The symbol '−→' (as well as 'lim'), without any superscripts, is reserved here only for the limit in the usual sense (also for convergence in C^2).

Vol. 75 (2013)
We shall consider several special kinds of convergence of complex sequences. They are determined by a weight sequence γ = {γ_k}_{k≥k_0} satisfying (2.5). To distinguish such a weight sequence from the weight sequence {w_n}_{n≥1} of the Jacobi operator J, we call it a "convergence weight". The generalised Cesaro convergence with the convergence weight γ is denoted here by γ−→, and in the basic case st(γ) = st(x) = k_0 = 1, which is sufficient for the spectral goals of this paper, it is defined as follows: for a complex sequence x = {x_n}_{n≥1} and g ∈ C,
x_n γ−→ g iff C_γ(x)_n := ( Σ_{k=1}^n γ_k x_k ) / ( Σ_{k=1}^n γ_k ) −→ g
(note that some terms of the sequence on the RHS may not be properly defined, but they are properly defined for n s.l., since by (2.5) the denominator is non-zero for n s.l.). For general cases of st(γ) and st(x) the definition is similar: for x = {x_n}_{n≥n_0} (where k_0 := st(γ) and n_0 := st(x)) one considers the analogous quotients C_γ(N, k_0, x)_n for N ≥ k_0 and n ≥ k_+ := min{k ≥ N : γ_k > 0} (by (2.5) this 'min' exists), and it is easily seen from (2.5) that neither the convergence nor the value of the limit depends on the choice of N. Note that for a constant convergence weight such a convergence is the standard Cesaro convergence.
However, the key kind of convergence in this paper is the absolute γ,p-Cesaro convergence with p > 0, denoted by γ,p−→, and defined as follows:
x_n γ,p−→ g iff |x_n − g|^p γ−→ 0.
In both names of convergence we often omit "Cesaro" and/or "absolute" for short. For our spectral applications we shall use p = 2. Sometimes, instead of −→, γ−→ and γ,p−→, we shall also write −n→, γ−n→ and γ,p−n→ to indicate the symbol used for the index of the sequence under the limit. We shall use the notions γ-convergence/divergence and γ,p-convergence/divergence in the usual way, i.e., convergence always means the existence of a finite limit (in C), and divergence means non-convergence in the respectively considered sense.
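A quick numerical illustration of the two notions (the weighted-mean formula C_γ(x)_n = Σγ_k x_k / Σγ_k and the reading of γ,p-convergence as C_γ(|x − g|^p)_n → 0 are as above; the concrete weight and sequence are our own toy choices):

```python
# Toy illustration of generalised (weighted) Cesaro means and of absolute
# gamma,p-convergence:  x ->(gamma,p) g  iff  C_gamma(|x - g|^p)_n -> 0.
# The sample weight and sequence below are illustrative assumptions.

def cesaro_means(gamma, x):
    """Weighted Cesaro means C_gamma(x)_n = sum_{k<=n} gamma_k x_k / sum_{k<=n} gamma_k."""
    means, num, den = [], 0.0, 0.0
    for g_k, x_k in zip(gamma, x):
        num += g_k * x_k
        den += g_k
        means.append(num / den)
    return means

N = 200_000
gamma = [1.0] * N                             # constant weight: classical Cesaro
x = [(-1.0) ** n for n in range(1, N + 1)]    # (-1)^n: divergent but Cesaro-convergent

classical = cesaro_means(gamma, x)[-1]        # close to 0

# absolute gamma,2-convergence of x to 0 would require C_gamma(|x|^2) -> 0,
# but |x_n|^2 = 1 identically, so C_gamma(|x|^2)_n = 1 for all n
abs_part = cesaro_means(gamma, [abs(v) ** 2 for v in x])[-1]
print(classical, abs_part)
```

So (−1)^n is γ-convergent to 0 but not absolutely γ,2-convergent to 0, which shows that γ,p−→ is a strictly stronger notion than plain Cesaro convergence toward the same limit.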

The Absolute γ, p-Cesaro Convergence
In this section we study the notions of γ-convergence and of the absolute γ, p-convergence. In particular a practical criterion (i.e., a sufficient condition) for γ, p-divergence is found here.
Let us start from some additional notation. The length of an interval I is denoted by |I|; the number of elements (∈ N ∪ {0, +∞}) of a set A is denoted by #A.
The set of the finite complex limit points of a scalar sequence x will be denoted by LIM C (x), i.e., LIM C (x) is the set of all g ∈ C for which there exists a sequence {k n } n≥1 of integers such that k n −→ +∞ and x kn −→ g.
The symbol x̄_M denotes the arithmetic average of x over a finite set M ⊂ Z, more precisely x̄_M := (Σ_{k∈M} x_k)/#M. We call two convergence weight sequences γ, γ′ equivalent, which will be denoted by γ ≡ γ′, iff there exist positive constants c, C such that cγ′_k ≤ γ_k ≤ Cγ′_k for k s.l.
A convergence weight γ is called here shiftable iff there exists ε > 0 such that γ_{k+1} ≥ ε γ_k for k s.l.
For a convergence weight γ = {γ_k}_{k≥k_0} satisfying (2.5), consider the measure μ_γ given for any M ⊂ Z by μ_γ(M) := Σ_{k∈M, k≥k_0} γ_k. We shall also use the following sets of indices of a complex sequence x, related to the terms being "far from / near to" the complex number g. For the Reader's convenience we collect here several basic properties of γ−→ and γ,p−→ (see also some general properties of various convergence notions, e.g., in [2]).
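A toy computation with these notions, assuming the natural reading μ_γ(M) = Σ_{k∈M} γ_k and taking the "far from g" indices to be those with |x_k − g| ≥ δ (the sample weight and sequence are illustrative):

```python
# Toy illustration of mu_gamma and of the "far from g" index sets:
# mu_gamma(M) = sum of gamma_k over k in M, and for delta > 0
# F_delta(g) = {k : |x_k - g| >= delta}.  Sample data are illustrative.
N = 1_000
gamma = [1.0 / (k ** 0.5) for k in range(1, N + 1)]       # a non-constant weight
x = [1.0 + (0.5 if k % 100 == 0 else 0.0) for k in range(1, N + 1)]

def mu(M):
    return sum(gamma[k - 1] for k in M)

g, delta = 1.0, 0.25
far = [k for k in range(1, N + 1) if abs(x[k - 1] - g) >= delta]

density = mu(far) / mu(range(1, N + 1))
print(len(far), density)
```

Here the far indices (every hundredth one) carry only a small share of the total weight, which is the kind of "μ_γ-smallness" that absolute γ,p-convergence forces on the far sets.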

11. If γ′ is a convergence weight equivalent to γ, then x_n γ,p−→ g iff x_n γ′,p−→ g.
14. If x_n γ,p−→ g, then for any δ > 0, μ_γ({k ∈ [k_0; n] : |x_k − g| ≥ δ}) / μ_γ([k_0; n]) −→ 0.
Proof. The following parts of the above proposition can be immediately checked with no difficulty, using (2.8): the uniqueness and the linearity for γ−→ in part 1.; part 2.; part 3.; part 4.; part 5.; part 11.; part 12. Observe that, for the case of γ with all γ_k > 0, the fact that γ−→ generalises −→ follows from the classical Stolz theorem. The general "nonnegative" case can easily be reduced to this special case. By the Minkowski inequality we have, for p ≥ 1,
(C_γ(|x + y|^p)_n)^{1/p} ≤ (C_γ(|x|^p)_n)^{1/p} + (C_γ(|y|^p)_n)^{1/p},
and using |a + b|^p ≤ |a|^p + |b|^p when 0 < p < 1 we get C_γ(|x + y|^p)_n ≤ C_γ(|x|^p)_n + C_γ(|y|^p)_n for 0 < p < 1. Hence, if x_n γ,p−→ g and x_n γ,p−→ g′ (treating complex numbers as the appropriate constant sequences), then |g − g′|^p γ−→ 0. Thus g = g′, and the uniqueness for γ,p−→ follows. The linearity for γ,p−→ also follows from (3.1) or from (3.2). To obtain that γ,p−→ generalises −→, it suffices to see that x_n −→ g implies |x_n − g|^p −→ 0, and then to use the proved fact that γ−→ generalises −→. This completes the proof of part 1.
To prove part 8., take x bounded with x_n γ,p−→ g and p′ > p, and choose C such that |x_n − g| ≤ C for any n ≥ st(x). We can also choose D such that |t|^{p′−p} ≤ D for |t| ≤ C; hence for any n ≥ st(x) we have 0 ≤ |x_n − g|^{p′} ≤ D·|x_n − g|^p, which gives part 8. by part 5.
Part 9. follows from parts 6. and 1. Part 13. follows from parts 12., 1. and 7. Part 14. follows from the following estimate, valid for n s.l.
Parts 13. and 14. of Proposition 3.1 can be employed to find some tools for proving γ,p-divergence. The asymptotic information on generalised eigenvectors that we typically obtain by asymptotic methods leads us (via the results of Sect. 4) to sequences of the form (3.3). Our aim is to find other tools, working for some sequences a satisfying (Δa)_n −→ 0.

Let A ⊂ R, T > 0, and let a = {a_n}_{n≥n_0} be a real sequence.

Definition 3.3. A is γ-essential mod T for a iff lim sup
Observation 3.4. Any interval of length greater than T is obviously γ-essential mod T for any sequence a (under the assumption (2.5)). The same is true for any non-open interval of length T. If A_2 ⊃ A_1 and A_1 is γ-essential mod T for a, then A_2 is also γ-essential mod T for a.
The γ-essentiality mod T for a can be used to obtain the γ,p-divergence of sequences given by (3.3), i.e., satisfying x_n = f(a_n) for n s.l., or by a more general formula. If, moreover, f and a are such that for any g ∈ f(R) there exists A ⊂ R such that (i) and (ii) above hold, then x is γ,p-divergent.
Proof. We can assume that x_n = f(a_n) for any n ≥ n_0. If s ≥ n_0 and a_s ∈ ⋃_{l∈Z}(lT + A), then x_s = f(a_s) ∈ f(A). Hence, assuming (ii), we see that for some δ > 0 all such indices s satisfy |x_s − g| ≥ δ. Therefore, if (i) holds, then x_n γ,p−→ g is impossible by Proposition 3.1.14. Now, to get the second assertion, it suffices to recall that if x_n γ,p−→ g, then by Proposition 3.1.7 we should have g ∈ f(R).
Let ϕ, y be real sequences. We define two kinds of "growth control of y by ϕ", denoted y ≺_I ϕ and y ≺_II ϕ. We shall be mainly interested in the situation with
|ϕ_n| −→ +∞. (3.5)
In such a case the condition y ≺_II ϕ is quite a strong requirement on y. In particular, the following can easily be checked.
Remark 3.7. If (3.5) holds and y ≺_II ϕ, then y ≺_I ϕ.
Here are some elementary growth control examples.
Soon we shall need the following generalisations of these examples.
Proposition 3.9. Suppose that ρ, h, r^(1), r^(2) are real sequences, with r^(1), r^(2) bounded. (i) If ϕ_n = ρ_n n^β and y_n = h_n n^b for n s.l., then y ≺_II ϕ and y ≺_I ϕ.
Proof. (i) follows directly from Example 3.8 (i), and to prove (ii) it suffices to use the Lagrange mean value theorem: assuming that C > 0, we can choose C′, C″, N, δ > 0 such that for s, s′ ≥ N with |ϕ_s − ϕ_{s′}| ≤ C we get the required estimate. One can easily check the following consequences of "growth control" for the estimates of averages of sequences.
We are ready now to formulate the main result of the present section.
If one of the following cases holds:
case I: (I.a) γ ≺_I ϕ and σ ≺_I ϕ, and (I.b) (a − ϕ) ∈ ℓ^∞;
case II: γ ≺_II ϕ and σ ≺_II ϕ;
then each interval with positive length is γ-essential mod T for a. If, moreover, f : R −→ C is a continuous, non-constant, T-periodic function and x_n = f(a_n) for n s.l., then x is γ,p-divergent for any p > 0.
Proof. Suppose first that A is such that
(A1.) C_n is finite for any n ∈ Z, C_n = ∅ for n s.s., and C_n ≠ ∅ for n s.l.;
(A2.) (nT + A) ∩ (n′T + A) = ∅ for n ≠ n′, n, n′ ∈ Z.
With all these assumptions, we can choose M > 0 and N_0, N_1 ∈ Z, N_0 ≤ N_1, such that the estimate (3.11) holds. But by m_n −→ +∞ and γ ∉ ℓ^1 we have μ_γ((m_N; m_n]) / μ_γ([k_0; m_n]) −→ 1; moreover, by (3.8) and (3.7), {max B_n}_{n≥N_1} is unbounded from above. Therefore (3.11) shows that A is γ-essential mod T for a. Now, for the first part of the assertion, it remains to prove that for A being any interval with positive length, the conditions (A1.)-(A4.) hold with some sequence m appropriately chosen for this interval. By Observation 3.4 it suffices to consider each A := (b; b + 3δ) with b ∈ R, 0 < δ ≤ T/3 (in fact, it would also suffice to check this only for shorter intervals). For (A1.)-(A3.) we shall not use the assumptions of either case I or case II. Let us start from the following lemma, simple but convenient (we omit the obvious proof here). Observe that by (v) and (ii) we have ϕ_{k+1} ≥ ϕ_k for k s.l., and by (iv) and (i),
a_k −→ +∞. (3.12)
In particular, C_n = ∅ for n s.s. and C_n is finite for any n ∈ Z. Let A′ := (b + δ; b + 2δ). Using (i), (ii), (iii) and (v), we choose K_1 ∈ Z, K_1 ≥ n_0, and C > 0 such that (3.13) holds. Now, by (3.12) we can apply Lemma 3.12 to the sequence {a_k}_{k≥K_1}, and choosing ñ_0 appropriately large we see that for n ≥ ñ_0 there exists a term of this sequence in A′ + nT; let k_n be its index. (3.14) So we also have k_n ∈ C_n for n ≥ ñ_0, which gives (A1.). Obviously (A2.) holds because 3δ < T. For n ≥ ñ_0 define
l_n := max{ l ≥ k_n : Σ_{s=k_n}^{l−1} σ_s < δ }. (3.15)
Note that l_n is properly defined, because Σ_{s=k_n}^{k_n−1} σ_s = 0 and Σ_{s=k_n}^{+∞} σ_s = +∞ (if this sum were finite, then by (ii) a would be bounded, a contradiction with (3.12)). Thus by (3.13) and (3.14) we obtain
[k_n; l_n] ⊂ C_n, n ≥ ñ_0, (3.16)
and defining j_n := l_n − k_n + 1 for n ≥ ñ_0 we get (3.17). Let us define the sequence m by (3.18). Note that m_n is properly defined, because the set under 'max' is finite (by (3.12)) and nonempty (it contains k_n, by (3.14)).
By the definition, m_{n+1} ≥ m_n. Let n_1 ≥ ñ_0 be such that b + n_1 T > max{a_{n_0}, . . . , a_{K_1}}. Suppose that n ≥ n_1 and k ∈ C_n. We have a_k ≥ b + Tn ≥ b + n_1 T > max{a_{n_0}, . . . , a_{K_1}}, which means that k > K_1. Hence, by (3.13), we also have ϕ_k ≤ a_k ≤ b + 3δ + Tn, and this is why k ≤ m_n. So we get max C_n ≤ m_n for n ≥ n_1; thus (A3.) holds. We have also obtained (3.20). From (3.18) and (3.15) we see that
j_n · σ̄_{[k_n; l_n]} = Σ_{s=k_n}^{l_n} σ_s ≥ δ for n ≥ n_1. (3.21)
By (3.13) and (3.20),
b + 3δ + Tn ≤ ϕ_k ≤ b + 3δ + T + Tn for k ∈ (m_n; m_{n+1}], n ≥ n_1.
The definition of C_n gives
b + Tn ≤ a_k ≤ b + 3δ + Tn for k ∈ C_n, n ≥ n_1, (3.23)
hence, using (3.13) and (3.19), the estimate (3.24) holds for k ∈ C_n, n ≥ n_1. Note that under the extra assumption (I.b), the estimate (3.23) gives a better lower bound for ϕ on C_n; namely, there exists a real constant c_1 such that (3.25) holds (note that C ≥ 1 by (3.13), so this is really better than (3.24) for "large" n). Using Lemma 3.

Now, to prove the γ,p-divergence of x it suffices to use Proposition 3.5.
To better understand the formulation of Criterion 3.11, let us explain in a few words the role of the sequences a, ϕ, σ. The sequence a is the main sequence defining the sequence x which is to be γ,p-divergent, and ϕ, σ are auxiliary sequences, which we should choose properly to satisfy conditions (i)-(v) and (I or II) of the Criterion. The typical idea is to choose them in such a way that, "for large indices": (1°) ϕ is a "regularly behaving" and possibly large lower bound sequence for a; (2°) σ is a "regularly behaving" and possibly small upper bound sequence for |Δa|.
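This recipe can be checked numerically. Here is a sketch for the concrete family a_n = c_n n^β treated in Example 3.13 below; the constants δ, S mirror the choices made there, while the particular sequence c_n is our own illustrative pick:

```python
# Sketch: choosing the auxiliary sequences phi (lower bound for a) and
# sigma (upper bound for |Delta a|) for a_n = c_n * n^beta, following the
# recipe of Example 3.13.  The concrete c_n below is an illustrative choice.
import math

beta = 0.5
N = 10_000
n0 = 1

c = [0.0] + [2.0 + math.sin(math.log(n + 1)) / (n + 1) for n in range(1, N + 2)]
a = [0.0] + [c[n] * n ** beta for n in range(1, N + 2)]

delta = 0.5 * min(c[n] for n in range(n0, N + 2))   # ~ (1/2) liminf c_n
S = 2.0 * max(c[n] for n in range(n0, N + 2))       # ~ 2 limsup c_n

phi = [0.0] + [delta * n ** beta for n in range(1, N + 2)]
sigma = [0.0] + [S * ((n + 1) ** beta - n ** beta) for n in range(1, N + 2)]

# (1) phi is a lower bound for a;  (2) sigma dominates |Delta a| (for n s.l.)
lower_ok = all(phi[n] <= a[n] for n in range(n0, N + 1))
delta_ok = all(abs(a[n + 1] - a[n]) <= sigma[n] for n in range(2, N + 1))
print(lower_ok, delta_ok)
```

Both checks come out true on this sample, matching (1°) and (2°): ϕ grows like a, while σ decays at the rate of the increments of n^β.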
We present examples of two concrete classes of sequences x and convergence weights γ, as illustrations of the use of the above criterion and of the above idea of choosing the auxiliary sequences. These examples will be important in Sect. 5.
Example 3.13. Consider γ ≥ 0 and x given by
x_n = f(c_n n^β) for n s.l.,
where f : R −→ C is a continuous, non-constant, periodic function, and c = {c_n}_{n≥1} is a real sequence satisfying c ∈ ℓ^∞, n(Δc)_n −→ 0 and (lim inf_{n→+∞} c_n > 0 or lim sup_{n→+∞} c_n < 0).
Proof. We can assume that lim inf_{n→+∞} c_n > 0, due to the possible substitution of −c_n for c_n and f(−t) for f(t). We use Criterion 3.11, taking: T a positive period of f, a_n := c_n n^β, ϕ_n := δ n^β, σ_n := S((n + 1)^β − n^β), n ∈ N, with δ := (1/2) lim inf_{n→+∞} c_n and S := 2 lim sup_{n→+∞} c_n. It is immediately clear that assumptions (i), (iii)-(v) of the criterion hold. By Proposition 3.9 (i) we are in case II of the criterion. We should only check assumption (ii); but this follows from the above choices for n s.l.
For the second example we introduce the following notation. Let η : [1; +∞) −→ R. We write
η ∈ Υ (3.31)
iff η satisfies:
a) for some a > 1, η restricted to (a; +∞) is differentiable, increasing (in the non-strict sense) and convex;
b) lim_{t→+∞} η′(t) > 0;
c) for any R > 0 there exist D > 0 and M ≥ 1 such that η′(t + R) ≤ D η′(t) for any t ≥ M;
d) lim_{t→+∞} η′(t) e^{−t} = 0.
Note that the limit in b) exists (finite or +∞) by convexity. For instance, η ∈ Υ if it is given for t ≥ 1 by the formula η(t) = t or, more generally, by η(t) = t^α with α ≥ 1.
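Numerically, the γ,p-divergence in Example 3.13 is visible already for the classical Cesaro weight: for x_n = cos(c_n n^β), the Cesaro averages of |x_n − g|² stay bounded away from 0 for every candidate limit g. A toy check (with an illustrative admissible c_n; the minimum over g of the average of (x_n − g)² is just the empirical variance):

```python
# Toy check of Example 3.13: for x_n = cos(c_n n^beta) the Cesaro averages
# C_gamma(|x - g|^2)_N stay bounded away from 0 for EVERY real g, i.e. x is
# gamma,2-divergent (constant weight gamma; c_n is an illustrative choice
# satisfying the assumptions: bounded, n*(Delta c)_n -> 0, liminf c_n > 0).
import math

beta = 0.5
N = 100_000
c = [1.0 + 1.0 / (n + 1) for n in range(1, N + 1)]
x = [math.cos(c[n - 1] * n ** beta) for n in range(1, N + 1)]

m1 = sum(x) / N                        # average of x_n   (close to 0)
m2 = sum(v * v for v in x) / N         # average of x_n^2 (close to 1/2)

# min over g of the average of (x_n - g)^2 equals the variance m2 - m1^2
best = m2 - m1 * m1
print(best)
```

The minimum stays near 1/2 instead of tending to 0, which is exactly the γ,2-divergence predicted by Criterion 3.11 for this class.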
Proof. We now use case I of Criterion 3.11, with a positive period of f as T, a_n := η(ln n + r^(1)_n) + r^(2)_n, and ϕ_n := η(ln n + r^(1)_n) + A, where A := inf_{n≥n_0} r^(2)_n for a fixed n_0 ≥ st(r^(2)), and B will be chosen soon. Using the Lagrange mean value theorem and η ∈ Υ, (3.32), (3.33), we can easily see that assumptions (i)-(iii) of the criterion hold with B chosen sufficiently large. We also obtain (iv) by the convexity of η and by b) from the definition of the class Υ. Now, using (3.32) with η ∈ Υ and with the boundedness of r^(1), we obtain assumption (v). By Proposition 3.9 (ii) we get γ ≺_I ϕ, and we also have (a − ϕ) ∈ ℓ^∞ by the choice of a and ϕ. Thus it remains only to prove that σ ≺_I ϕ. But again by the Lagrange mean value theorem, taking some C > 0, we can choose C̃, N, δ > 0 such that for any s, s′ ≥ N which satisfy |ϕ_s − ϕ_{s′}| ≤ C we get
C̃ ≥ |η(ln s′ + r^(1)_{s′}) − η(ln s + r^(1)_s)| ≥ δ |ln s′ + r^(1)_{s′} − ln s − r^(1)_s|,
and consequently C̃′ ≥ |ln s′ − ln s| = |ln(s′/s)|. Hence, using η ∈ Υ, we can choose Ñ ≥ N and D̃ > 0 such that for any s, s′ ≥ Ñ which satisfy |ϕ_s − ϕ_{s′}| ≤ C we have:

Asymptotics of Generalised Eigenvectors and Subordinacy
Let us first recall a simple result (following directly from [24, Lem. 5.9 and Prop. 5.5]) related to the asymptotics of C^2-vector-generalised eigenvectors rather than of scalar ones. It is a stronger version of Proposition 1.1.

(4.3) is in the H-class and, in particular, no subordinate solution for J and λ exists.
Now we formulate some results for generalised eigenvectors, i.e., for scalar solutions. These results can be treated as the "repairs" of Conjecture 1.2 promised in the Introduction. The first is a generalisation of Theorem 1.3.

Theorem 4.2.
Assume that J is self-adjoint, that (2.2) holds, and that for some λ ∈ C, u_+, u_− are two linearly independent vectors from Sol(λ) such that
u_±(n) = ψ_±(n) · s_±(n), with ψ_+(n) = z(n) · ψ_−(n), for n s.l.,
where ψ_±, s_±, z are scalar sequences. The second result gives only necessary conditions for subordinacy, but it can sometimes be used under weaker assumptions on the asymptotics.