1 Introduction

Many mathematical models describing “real” processes are built on the basis of some families of operators. Typically, when we want to define a dynamical system, i.e., a mathematical model for the time dynamics of a real process, we try to find a family \(\left\{ U_s\right\} _{s\in S}\) of operators \(U_s: X\longrightarrow X\) which “codes the process mathematically”. Then \(\left\{ U_s\right\} _{s\in S}\) has the following “real” interpretation:

If the initial state of the process was \(x\in X\), then the state of the process at the time moment \(s\in S\) is \(U_sx\).

Above, S represents the set of “admissible moments of time”, and X the set of “admissible states” of the process. Hence, in this paper, for fixed \(x\in X\), we use the name trajectory (or orbit) for the family \(\left\{ U_sx\right\} _{s\in S}\) of elements of X obtained from a family \(\left\{ U_s\right\} _{s\in S}\) of operators. Let us stress, however, that the actual role of the mathematical objects X, S, \(\left\{ U_s\right\} _{s\in S}\) and \(\left\{ U_sx\right\} _{s\in S}\) can be essentially different from their typical role above, i.e., as in mathematical descriptions of real processes, e.g., in physics. As we shall see in Sect. 2 (see 2.28), these objects can also be useful for the mathematical description of some “purely mathematical” processes, for purely mathematical goals.

In this paper we consider only the linear case, i.e., X is a Banach space and all the operators \(U_s\) are linear (and we shall mainly assume that X is finite dimensional). However, some notions introduced here also make sense in the non-linear case, and they could be worth future research.

Section 1 is devoted to some abstract studies of so-called uni-asymptotic families of operators, i.e., families \(\left\{ U_s\right\} _{s\in S}\) all of whose non-zero trajectories \(\left\{ U_sx\right\} _{s\in S}\) have “the same norm-asymptotic behavior”. Precisely, this means that for any non-zero initial conditions \(x, y\in X\) we have

$$\begin{aligned} \Vert U_sx\Vert _{}\asymp _{s}\Vert U_sy\Vert _{} \end{aligned}$$

(see 0.3). Although the word “asymptotic” may not fit the above definition well for general S, it makes sense for some ordered sets S, e.g., for \(S=\mathbb {N}_{n_0}\) (see 0.1). The name “uni-asymptotic” is also used for linear spaces consisting of some functions \(f=\left\{ f(s)\right\} _{s\in S}\) from S into X, with the natural analogous meaning.

This section contains the main results of the paper: Theorems 1.5 and 1.7 on some equivalent conditions for uni-asymptoticity in finite dimension. One of the most important observations is that uni-asymptoticity is equivalent to tightness, where \(\left\{ U_s\right\} _{s\in S}\) is tight when the operator norm and the minimal modulus of \(U_s\) have the same asymptotic behavior (see 1.6). Examples showing the necessity of the \(\dim X<\infty \) assumption are also provided. They also show that the apparent similarity of the problem to the Banach–Steinhaus theorem is misleading. Other equivalent conditions are asymptotic formulae for each non-zero trajectory and for the operator norm \(\Vert U_s\Vert _{}\), expressed in terms of the determinants \(\det U_s\) (see (ii), (iii) in Theorem 1.7). This seems to be a simple and, at the same time, seemingly new result, with potentially wide applications.

Section 2 is an illustration of the above abstract results. It helps to better understand some spectral results for Jacobi operators (matrices). Here each \(U_s\), for \(s\in S:=\mathbb {N}_2\), is an appropriate product (see 2.28) of the so-called transfer matrices for a Jacobi operator J in the infinite dimensional Hilbert space \(l^2(\mathbb {N})\). We also consider a more general case, the so-called block Jacobi operator, where the Hilbert space is \(l^2(\mathbb {N}, \mathbb {C}^d)\) with arbitrary \(d\ge 1\). The dimension of X is 2 in the “classical” (scalar) Jacobi case, and \(X=\mathbb {C}^{2d}\) in the block Jacobi case, so it has nothing to do with the infinite dimension of the Hilbert space. The trajectories are closely related to generalized eigenvectors of the Jacobi operator J; namely, they are so-called 2d-generalized eigenvectors. We show here that the uni-asymptoticity of \(\left\{ U_n\right\} _{n\in \mathbb {N}_2}\) is equivalent to the well-known H-class property of the transfer matrix family, and we describe the asymptotic behaviour of the norms of the vector terms of the above “eigenvectors” when the H-class property holds; see Theorem 2.7. It is expressed in a simple way by the determinants of the weight terms of J (see 2.29). Such behaviour of the norms of the vector terms in the scalar case \(d=1\) was described before (without using the name H-class) by some authors; see, e.g., [12]. Note also that, thanks to subordination theory [5], the uni-asymptoticity of the set of 2-generalized eigenvectors (for the scalar case \(d=1\)) is closely related to the absolute continuity of J; see Theorem 2.5.

1.1 Notation

We introduce here some notation used in the paper. The remaining notation is introduced “locally”. Let us denote:

$$\begin{aligned} \mathbb {C}_*\,{:=}\{z\in \mathbb {C}: \, z\not =0\},\;\, \mathbb {R}_+\,{:=}\{t\in \mathbb {R}: \ t>0\},\;\, \mathbb {N}_{n_0}\,{:=}\{n\in \mathbb {Z}: n\ge n_0\}. \end{aligned}$$
(0.1)

For an arbitrary set S and \(f, g:S\longrightarrow \mathbb {C}\) we define:

$$\begin{aligned}&f\prec g \ \ \iff \ \ \exists _{C\in \mathbb {R}_+} \forall _{s\in S}\ \ |f(s)|\le C |g(s)|; \end{aligned}$$
(0.2)
$$\begin{aligned}&f\asymp g \ \ \iff \ \ \exists _{c,C\in \mathbb {R}_+} \forall _{s\in S}\ \ c|g(s)|\le |f(s)|\le C |g(s)|, \end{aligned}$$
(0.3)

i.e., \(f\asymp g\) iff (\(f\prec g\) and \(g\prec f\)). We shall also use the alternative notations:

$$\begin{aligned} f(s)\prec \ g(s), \ \ f(s)\prec _{s} \ g(s), \ \ f(s)\begin{array}{c} \prec \\ {s\in S} \end{array} \ g(s) \end{aligned}$$

and analogously for the \(\asymp \) symbol.
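Since \(\prec \) and \(\asymp \) are defined by uniform constants over S, a conjectured equivalence \(f\asymp g\) can be sanity-checked numerically on a finite sample of indices. A minimal sketch (the functions f, g below are illustrative choices, not from the paper):

```python
# A finite-sample check of the relation f ≍ g from (0.3): over a sampled
# index set S0 we estimate the best constants c, C with c|g| <= |f| <= C|g|.
# This only tests finitely many indices; it cannot prove (0.3) itself.
def asymp_constants(f, g, S0):
    ratios = [abs(f(s)) / abs(g(s)) for s in S0]
    return min(ratios), max(ratios)

# Illustrative example: f(s) = 2s + 1 and g(s) = s satisfy f ≍ g on N_1.
c, C = asymp_constants(lambda s: 2 * s + 1, lambda s: s, range(1, 1000))
assert 0 < c <= C < float("inf")
```

Here the sampled constants are \(c=2+1/999\) (attained at the largest sampled index) and \(C=3\) (attained at \(s=1\)).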

Let X be a (real or complex) normed space with the norm \(\Vert \ \Vert _{}\), and suppose that \(\dim X>0\). As usual, the same symbol \(\Vert \ \Vert _{}\) is used here also for the operator norm in the space of bounded linear operators \({\mathcal {B}}(X)\)—i.e. \(\Vert A\Vert _{}:=\displaystyle \sup _{\Vert x\Vert _{}=1}\Vert Ax\Vert _{}\) for \(A\in {\mathcal {B}}(X)\). We denote:

$$\begin{aligned} X_*:= & {} \{x\in X:\ x\not =0\},\\ {\mathbb {S}}_{X}:= & {} \{x\in X:\ \Vert x\Vert _{}=1\}, \\ {\mathcal {B}}_*(X):= & {} \{A\in {\mathcal {B}}(X):\ \text{ Ker }(A)=\{0\}\}. \end{aligned}$$

We shall also use the symbol \(\Downarrow \ \Downarrow _{}\) for the so-called minimum modulus of A:

$$\begin{aligned} \Downarrow A \Downarrow _{}:=\inf _{\Vert x\Vert _{}=1}\Vert Ax\Vert _{}. \end{aligned}$$

Recall that if \(A\in {\mathcal {B}}_*(X)\) and \(\text{ Ran }(A)=X\), then

$$\begin{aligned} \Vert A^{-1}\Vert _{}=\Downarrow A \Downarrow _{}^{-1}, \end{aligned}$$
(0.4)

which also covers the case \(\Vert A^{-1}\Vert _{}=+\infty \) with the convention \(0^{-1}:=+\infty \).
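In finite dimension with the Euclidean norm, both sides of (0.4) are expressible through singular values: the operator norm is the largest singular value and the minimum modulus is the smallest. A quick numerical check (the matrix is a random illustrative choice):

```python
# Numerical illustration of (0.4): for an invertible matrix A and the
# Euclidean norm, ||A|| = sigma_max, the minimum modulus of A = sigma_min,
# and ||A^{-1}|| = 1 / sigma_min.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))                 # invertible almost surely
svals = np.linalg.svd(A, compute_uv=False)      # sorted in decreasing order
op_norm, min_mod = svals[0], svals[-1]

inv_norm = np.linalg.norm(np.linalg.inv(A), 2)  # spectral norm of A^{-1}
assert abs(inv_norm * min_mod - 1.0) < 1e-6     # (0.4)
```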

2 Uni-asymptotic Results for Abstract Linear Systems

2.1 Uni-asymptoticity and Tightness

Consider an “abstract” family \(\left\{ U_s\right\} _{s\in S}\) of bounded linear operators in a normed space X, where S is a non-empty “index” set. Typically \(S=\mathbb {N}\) or \(\mathbb {N}_{n_0}\) for some “starting index” \(n_0\); however, neither the cardinality nor any extra structure of S is important now. Let us recall again that any indexed family \(\left\{ \text{ object }_s\right\} _{s\in S}\) is identified here with the function on S acting by the formula “\(S\ni s\mapsto \text{ object }_s\)”.

We distinguish several specific properties of such operator families. The main notion for us here is uni-asymptoticity.

Definition 1.1

Let \(\left\{ U_s\right\} _{s\in S}\) be a family of operators from \({\mathcal {B}}(X)\).

  • \(\left\{ U_s\right\} _{s\in S}\) is uni-asymptotic iff

    $$\begin{aligned} \forall _{x,y\in X_*} \ \ \ \Vert U_sx\Vert _{}\asymp _{s}\Vert U_sy\Vert _{}; \end{aligned}$$
    (1.5)
  • \(\left\{ U_s\right\} _{s\in S}\) is tight iff

    $$\begin{aligned} \Downarrow U_s \Downarrow _{}\asymp _{s}\Vert U_s\Vert _{}. \end{aligned}$$
    (1.6)
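The two notions of Definition 1.1 can be illustrated numerically in \(X=\mathbb {R}^2\) with the Euclidean norm. The families below are illustrative choices, not from the paper: scalar multiples of plane rotations form a tight family, while \(U_s=\mathrm {diag}(s,1)\) is not tight (and, as is easy to see, not uni-asymptotic either).

```python
# Hedged illustration of Definition 1.1 in X = R^2: s * (rotation) is tight,
# since for it ||U_s|| = min-modulus(U_s) = s, while diag(s, 1) is not.
import numpy as np

def rot(t):
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

S = range(1, 200)
tight_family = {s: s * rot(0.3 * s) for s in S}
loose_family = {s: np.diag([float(s), 1.0]) for s in S}

def tight_ratio(family):
    # sup over s of ||U_s|| / min-modulus(U_s); bounded iff (1.6) holds
    r = []
    for U in family.values():
        sv = np.linalg.svd(U, compute_uv=False)
        r.append(sv[0] / sv[-1])
    return max(r)

assert tight_ratio(tight_family) < 1.0 + 1e-6   # tight: ratio stays 1
assert tight_ratio(loose_family) > 100.0        # not tight: ratio grows as s
```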

The name “uni-asymptotic” will also be used here in a somewhat different situation: for some sets F of functions \(f=\left\{ f(s)\right\} _{s\in S}\) from S into X. Let \(\mathbb {O}\) be the constant 0-vector function on S.

Definition 1.2

Let F be a set of some functions from S into X.

F is uni-asymptotic iff

$$\begin{aligned} \forall _{f,g\in F\setminus \{\mathbb {O}\}} \ \ \ \Vert f(s)\Vert _{}\asymp _{s}\Vert g(s)\Vert _{}. \end{aligned}$$
(1.7)

These two kinds of “uni-asymptoticity” are closely related. For a family \(\left\{ U_s\right\} _{s\in S}\) of operators from \({\mathcal {B}}(X)\) consider the set of all its trajectories (orbits):

$$\begin{aligned} \text{ Orb }\left( \left\{ U_s\right\} _{s\in S}\right) :=\{\left\{ U_sx\right\} _{s\in S}:\ x\in X\}. \end{aligned}$$

By the linearity of the operators, \(\text{ Orb }\big ({\left\{ U_s\right\} _{s\in S}}\big )\) is a linear subspace of the space of all functions from S into X. From the above definitions we immediately get:

Fact 1.3

\(\left\{ U_s\right\} _{s\in S}\) is uni-asymptotic iff \(\text{ Orb }\left( \left\{ U_s\right\} _{s\in S}\right) \) is uni-asymptotic.

Let us also note the following properties of families of operators.

Fact 1.4

Let \(\left\{ U_s\right\} _{s\in S}\) be a family of operators from \({\mathcal {B}}(X)\).

  1. \(\left\{ U_s\right\} _{s\in S}\) is uni-asymptotic iff

    $$\begin{aligned} \forall _{x,y\in {\mathbb {S}}_{X}} \ \ \ \Vert U_sx\Vert _{}\prec _{s}\Vert U_sy\Vert _{}; \end{aligned}$$
    (1.8)
  2. If \(\left\{ \lambda _s\right\} _{s\in S}\) is a family of non-zero numbers and \(\left\{ V_s\right\} _{s\in S}:=\left\{ \lambda _sU_s\right\} _{s\in S}\), then \(\left\{ U_s\right\} _{s\in S}\) is uni-asymptotic iff \(\left\{ V_s\right\} _{s\in S}\) is uni-asymptotic, and \(\left\{ U_s\right\} _{s\in S}\) is tight iff \(\left\{ V_s\right\} _{s\in S}\) is tight;

  3. If \(\left\{ U_s\right\} _{s\in S}\) is uni-asymptotic, then

    $$\begin{aligned} \forall _{s\in S} \ \ \ \left( U_s\in {\mathcal {B}}_*(X) \ \ \text{ or } \ \ U_s=0\right) ; \end{aligned}$$
    (1.9)
  4. If \(\left\{ U_s\right\} _{s\in S}\) is tight, then it is uni-asymptotic.

Proof

Parts 1. and 2. are obvious. To prove part 3., suppose that \(\left\{ U_s\right\} _{s\in S}\) is uni-asymptotic and \(U_s\not \in {\mathcal {B}}_*(X)\) for some \(s\in S\). So, let \(0\not =x_0\in \text{ Ker }U_{s}\). Then for any \(x\in X_*\) we have \(\Vert U_{s}x\Vert _{}\le C_x\Vert U_{s}x_0\Vert _{}=0\) for some \(C_x\in \mathbb {R}_+\), i.e., \(U_{s}=0\). To get part 4., suppose that \(\left\{ U_s\right\} _{s\in S}\) is tight and choose \(C\in \mathbb {R}_+\) such that \(\Vert U_s\Vert _{}\le C \Downarrow U_s \Downarrow _{}\) for any \(s\in S\). For \(x,y\in {\mathbb {S}}_{X}\) and for any \(s\in S\) we have

$$\begin{aligned} \Vert U_sx\Vert _{}\le \Vert U_s\Vert _{}\le C \Downarrow U_s \Downarrow _{}\le C \Vert U_sy\Vert _{}, \end{aligned}$$

i.e., by the part 1., \(\left\{ U_s\right\} _{s\in S}\) is uni-asymptotic. \(\square \)

The main result of this part says that in the finite dimensional case the above point 4. can be essentially strengthened.

Theorem 1.5

Suppose that \(\dim X<+\infty \) and \(\left\{ U_s\right\} _{s\in S}\) is a family of operators from \({\mathcal {B}}(X)\). Then \(\left\{ U_s\right\} _{s\in S}\) is uni-asymptotic iff it is tight.

Proof

By Fact 1.4.4. it suffices to prove “\(\Longrightarrow \)”. We include some remarks in this proof showing whether or not the \(\dim X<+\infty \) assumption is important for the given part of the proof. Suppose that \(\left\{ U_s\right\} _{s\in S}\) is uni-asymptotic. Consider first the special case with the extra assumption that \(U_s\in {\mathcal {B}}_*(X)\) for every \(s\in S\). Then, choosing an arbitrary \(x_0\in X_*\), we can define a “rescaling number” \(\lambda _s:=\Vert U_sx_0\Vert _{}^{-1}\not =0\) for each \(s\in S\). By Fact 1.4.2. it suffices to prove that \(\left\{ V_s\right\} _{s\in S}:=\left\{ \lambda _sU_s\right\} _{s\in S}\) is tight. Observe that

$$\begin{aligned} \forall _{s\in S} \ \ \Vert V_sx_0\Vert _{}=\Vert \lambda _sU_sx_0\Vert _{}=\lambda _s\Vert U_sx_0\Vert _{}=1, \end{aligned}$$
(1.10)

and, also by Fact 1.4.2., \(\left\{ V_s\right\} _{s\in S}\) is uni-asymptotic. So, applying this to an arbitrary vector \(x\in X_*\) and to \(y=x_0\), choose \(c_x, C_x\in \mathbb {R}_+\) such that

$$\begin{aligned} \forall _{s\in S} \ \ c_x=c_x\Vert V_sx_0\Vert _{}\le \Vert V_sx\Vert _{}\le C_x\Vert V_sx_0\Vert _{}=C_x. \end{aligned}$$
(1.11)

Hence, in particular, the family \(\left\{ V_s\right\} _{s\in S}\) is bounded in \({\mathcal {B}}(X)\) (as follows e.g. from Banach–Steinhaus theorem, but here it is an even more elementary fact, because \(\dim X<+\infty \))—so, choose \(C\in \mathbb {R}_+\) such that

$$\begin{aligned} \forall _{s\in S} \ \ \Vert V_s\Vert _{}\le C. \end{aligned}$$
(1.12)

We shall now prove that there exists \(\epsilon \in \mathbb {R}_+\) such that \(\epsilon \le \Downarrow V_s \Downarrow _{}\) for any \(s\in S\). Suppose this is not true. Then there exists a sequence \(\left\{ s_n\right\} _{n\ge 1}\) of indices from S such that \(\Downarrow V_{s_n} \Downarrow _{}\longrightarrow 0\). For each \(n\in \mathbb {N}\) choose \(v_n\) in X with \(\Vert v_n\Vert _{}=1\) such that

$$\begin{aligned} \Vert V_{s_n}v_n\Vert _{}\le \frac{3}{2}\Downarrow V_{s_n} \Downarrow _{}. \end{aligned}$$
(1.13)

(here \(\frac{3}{2}\) could be replaced by 1, using \(\dim X<+\infty \), but the constant has no importance). We have

$$\begin{aligned} \Vert V_{s_n}v_n\Vert _{}\longrightarrow 0. \end{aligned}$$
(1.14)

It is only now that we essentially use the \(\dim X<+\infty \) assumption: we choose a convergent subsequence \(\left\{ v_{k_n}\right\} _{n\ge 1}\) of \(\left\{ v_{n}\right\} _{n\ge 1}\), i.e., \(k_n\longrightarrow +\infty \) and \(\Vert v_{k_n} -v\Vert _{}\longrightarrow 0\) for some \(v\in X\) with \(\Vert v\Vert _{}=1\). But by (1.12) and (1.11)

$$\begin{aligned} \Vert V_{s_{k_n}}v_{k_n}\Vert _{}= & {} \Vert V_{s_{k_n}}v+V_{s_{k_n}}(v_{k_n}-v)\Vert _{}\ge \Vert V_{s_{k_n}}v\Vert _{}-\Vert V_{s_{k_n}}(v_{k_n}-v)\Vert _{}\\\ge & {} c_v-C\Vert v_{k_n}-v\Vert _{}\longrightarrow c_v, \end{aligned}$$

but \(c_v>0\), hence we get a contradiction with (1.14). This finishes the proof of the special case.

Now, we go back to the general case, and we define \(S_1:=\{s\in S: U_s\in {\mathcal {B}}_*(X) \}\) and \(S_2:=S\setminus S_1\). By Fact 1.4.3. we have \(\Downarrow U_s \Downarrow _{}=0=\Vert U_s\Vert _{}\) for any \(s\in S_2\), i.e., \(\Downarrow U_s \Downarrow _{}\asymp _{s\in S_2}\Vert U_s\Vert _{}\). But also \(\Downarrow U_s \Downarrow _{}\asymp _{s\in S_1}\Vert U_s\Vert _{}\) by the special case just proven, so finally \(\Downarrow U_s \Downarrow _{}\asymp _{s\in S}\Vert U_s\Vert _{}\). \(\square \)

The finite dimension assumption is important not only for the above proof but for the result itself; the assumption that X is a Banach space alone is not sufficient, as the following example shows.

Example 1.6

Consider \(S=\mathbb {N}\) and for any \(s\in S\) define \(F_s:\mathbb {N}\longrightarrow \mathbb {R}\) by

$$\begin{aligned} F_s(k):=\left\{ \begin{array}{ll} {s-(k-1)}&{}\quad \hbox {for}~{k\le s}\\ {1}&{}\quad \hbox {for}~{k>s.}\\ \end{array}\right. \end{aligned}$$
(1.15)

In particular we have

$$\begin{aligned} F_s(1)=s, \ \ F_s(s)=1 \ \ \text{ and } \ \ 1\le F_s(k)\le s \ \ \text{ for } \text{ any } \ \ k\in \mathbb {N}. \end{aligned}$$
(1.16)

Consider now some \(p\in [1;+\infty )\) and the space \(X:=\ell ^p=\ell ^p(\mathbb {N})\) of p-summable sequences. Let us study the family \(\left\{ U_s\right\} _{s\in S}\), where \(U_s\) is the operator of multiplication by \(F_s\) in X, i.e., \(U_sx:=F_s\cdot x\) for any sequence \(x\in X\) and any \(s\in S\). By (1.16)

$$\begin{aligned} \Vert U_s\Vert _{}=\sup _{k\in \mathbb {N}}|F_s(k)|=s,\ \ \ \ \ \Downarrow U_s \Downarrow _{}=\inf _{k\in \mathbb {N}}|F_s(k)|=1. \end{aligned}$$
(1.17)

For each x with \(\Vert x\Vert _{}=1\) fix some \(k_0:=k_0(x)\) such that \(|x_{k_0}|>0\). Then again by (1.16), for such x and for any \(s<k_0\) we have

$$\begin{aligned} \Vert U_sx\Vert _{}\ge |x_{k_0}|= \frac{|x_{k_0}|}{s}\cdot s\ge \frac{|x_{k_0}|}{k_0}\cdot s. \end{aligned}$$

On the other hand, if \(s\ge k_0\), then by (1.15)

$$\begin{aligned} \Vert U_sx\Vert _{}\ge (1+ s-k_0)|x_{k_0}| = \frac{1+ s-k_0}{s}|x_{k_0}|\cdot s\ge \frac{|x_{k_0}|}{k_0} \cdot s, \end{aligned}$$

because for \(s\ge k_0\) we have \(\frac{1+ s-k_0}{s}=1-\frac{k_0-1}{s}\ge 1-\frac{k_0-1}{k_0}= \frac{1}{k_0}\). Finally, if \(\Vert x\Vert _{}=1\), then

$$\begin{aligned} \Vert U_sx\Vert _{}\ge \frac{|x_{k_0}|}{k_0} \cdot s \end{aligned}$$

holds for any \(s\in \mathbb {N}\). But by (1.16) we also have \(\Vert U_sx\Vert _{}\le s\Vert x\Vert _{}=s\). Hence

$$\begin{aligned} \Vert U_sx\Vert _{}\asymp _{s} s. \end{aligned}$$

Thus \(\left\{ U_s\right\} _{s\in S}\) is uni-asymptotic, but (1.17) shows that it is not tight. Note that the same arguments also work for the analogous example with \(X=\ell ^{\infty }\), the standard space of bounded sequences.
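The computations of this example can be verified numerically on a finite truncation (a sketch; the test vector \(x=e_3\), i.e., \(k_0=3\), is an illustrative choice):

```python
# Finite truncation of Example 1.6 (any p works; here we only need the
# values of F_s): the multiplication operators U_s satisfy ||U_s|| = s and
# min-modulus 1, by (1.17), yet every trajectory behaves like s. We check
# this on x = e_3 (so k_0 = 3) for s = 1..50.
def F(s, k):
    return s - (k - 1) if k <= s else 1

for s in range(1, 51):
    values = [F(s, k) for k in range(1, 200)]
    assert max(values) == s and min(values) == 1   # sup/inf of F_s, cf. (1.17)
    ratio = F(s, 3) / s                            # ||U_s e_3|| / s for x = e_3
    assert 1 / 3 <= ratio <= 1                     # the bound |x_{k_0}| / k_0
```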

2.2 Uni-asymptoticity and Determinants

We find here the “actual size” of the norms of the vectors from the trajectories for the uni-asymptotic finite dimensional case. It is described in a simple way by the determinants of the operators \(U_s\).

Theorem 1.7

If \(\dim X=m\in \mathbb {N}\) and \(\det U_s\not =0\) for any \(s\in S\), then TFCAE (the following conditions are equivalent):

  (i) \(\left\{ U_s\right\} _{s\in S}\) is uni-asymptotic;

  (ii) \( \displaystyle \forall _{x\in X_*} \ \ \ \Vert U_sx\Vert _{}\asymp _{s}|\det U_s|^{1/m} \);

  (iii) \( \displaystyle \Vert U_s\Vert _{}\asymp _{s}|\det U_s|^{1/m} \);

  (iii’) \( \displaystyle \Vert U_s\Vert _{}\prec _{s}|\det U_s|^{1/m} \);

  (iv) \(\left\{ U_s\right\} _{s\in S}\) is tight.
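Conditions (iii) and (iii’) suggest a simple numerical diagnostic: track the ratio \(\Vert U_s\Vert _{}/|\det U_s|^{1/m}\), which by Lemma 1.8.1. below is always at least 1. A sketch with two illustrative diagonal families (\(m=2\), not from the paper):

```python
# Numerical reading of (iii)/(iii') of Theorem 1.7 for m = 2: the ratio
# ||U_s|| / |det U_s|^{1/2} stays bounded for the tight family
# diag(2^s, 2^s) and blows up for diag(2^s, 1).
import numpy as np

def det_ratio(U):
    return np.linalg.norm(U, 2) / abs(np.linalg.det(U)) ** 0.5

good = [det_ratio(np.diag([2.0 ** s, 2.0 ** s])) for s in range(1, 30)]
bad = [det_ratio(np.diag([2.0 ** s, 1.0])) for s in range(1, 30)]

assert max(good) <= 1.0 + 1e-9   # bounded ratio: (iii) holds
assert bad[-1] > 1.0e4           # ratio ~ 2^{s/2}: (iii') fails
```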

We shall use the following simple lemma in the proof of this result.

Lemma 1.8

If \(\dim X=m\in \mathbb {N}\), then:

  1. \(\displaystyle \forall _{U\in {\mathcal {B}}(X)}\ \ \ \Downarrow U \Downarrow _{}\le |\det U|^{1/m}\le \Vert U\Vert _{}\);

  2. there exists a positive constant \(C_m\) such that

    $$\begin{aligned} \forall _{U\in {\mathcal {B}}_*(X)}\ \ \ \Vert U^{-1}\Vert _{}\le C_m\frac{\Vert U\Vert _{}^{m-1}}{|\det U|}; \end{aligned}$$

  3. for any fixed linear base \(e:=\{e_j\}_{j=1,\ldots , m}\) of X consisting of norm-one vectors, the formula

    $$\begin{aligned} {\Vert U\Vert _{}}_e :=\max \{\Vert Ue_j\Vert _{}: j=1,\ldots , m\}, \ \ \ \ U\in {\mathcal {B}}(X), \end{aligned}$$

    defines a norm in \({\mathcal {B}}(X)\), and there exists a positive constant \(D_e\) such that

    $$\begin{aligned} \forall _{U\in {\mathcal {B}}(X)}\ \ \ {\Vert U\Vert _{}}_e\le {\Vert U\Vert _{}}\le D_e{\Vert U\Vert _{}}_e. \end{aligned}$$
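For the Euclidean norm the first two parts can be checked through singular values, since then \(\Downarrow U \Downarrow _{}\) is the smallest singular value, \(\Vert U\Vert _{}\) the largest, and \(|\det U|^{1/m}\) their geometric mean; in this particular norm even \(C_m=1\) works in part 2. A sketch on random matrices (an illustration, not a proof):

```python
# Sanity check of Lemma 1.8, parts 1 and 2, for the Euclidean norm: part 1
# via singular values, part 2 with C_m = 1, which suffices here because
# sigma_min >= |det U| / sigma_max^{m-1}.
import numpy as np

rng = np.random.default_rng(1)
m = 5
for _ in range(100):
    U = rng.standard_normal((m, m))                 # invertible almost surely
    sv = np.linalg.svd(U, compute_uv=False)
    det = abs(np.linalg.det(U))
    # part 1: min-modulus <= |det U|^{1/m} <= operator norm
    assert sv[-1] <= det ** (1.0 / m) * (1 + 1e-9)
    assert det ** (1.0 / m) <= sv[0] * (1 + 1e-9)
    # part 2 with C_m = 1 for the spectral norm
    inv_norm = np.linalg.norm(np.linalg.inv(U), 2)
    assert inv_norm <= sv[0] ** (m - 1) / det * (1 + 1e-6)
```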

Proof

Let \(\lambda _1, \ldots , \lambda _m\) be all the eigenvalues of U (counted with their algebraic multiplicities) and for each \(j=1,\ldots , m\) let \(x_j\) be some normalized eigenvector of U for \(\lambda _j\). By the definitions of \(\Vert \ \Vert _{}\) (the operator norm) and \(\Downarrow \ \Downarrow _{}\), we have

$$\begin{aligned} \Downarrow U \Downarrow _{}\le \Vert Ux_j\Vert _{}=|\lambda _j|\,\Vert x_j\Vert _{}=|\lambda _j|\le \Vert U\Vert _{},\ \ \ \ \ j=1,\ldots , m. \end{aligned}$$

Now, multiplying all the m inequalities “side-by-side” and using \(\det U=\lambda _1\cdot \ldots \cdot \lambda _m\), we get part 1. of the lemma.

To get part 2., consider first \(m\times m\) scalar matrices. Observe that defining \({\Vert A\Vert _{}}_{\max } :=\max \{|A_{ij}|: i,j=1,\ldots , m\}\) for any matrix A (here \(A_{ij}\) is the entry of A in the i-th row and j-th column), we get a certain norm on the matrix space. By Cramer’s rule, if A is invertible, then \((A^{-1})_{ij}=\frac{(A_C)_{ji}}{\det A}\), where \(A_C\) denotes the matrix of cofactors of A. Applying “the permutation formula” for the determinant to the entries of \(A_C\), we get

$$\begin{aligned} |(A_C)_{ji}|\le (m-1)!({\Vert A\Vert _{}}_{\max })^{m-1}. \end{aligned}$$

Hence \({\Vert A^{-1}\Vert _{}}_{\max } \le (m-1)!\frac{({\Vert A\Vert _{}}_{\max })^{m-1}}{|\det A|}\) for any invertible A. Now, fixing a linear base in X to establish a linear isomorphism between \({\mathcal {B}}(X)\) and the \(m\times m\) scalar matrix space, we use it to transfer \({\Vert \ \Vert _{}}_{\max }\) onto a norm on \({\mathcal {B}}(X)\). Finally, the equivalence of any two norms on \({\mathcal {B}}(X)\) (an \(m^2\)-dimensional space) proves the assertion of part 2. Note that we have also used \(\det U=\det A\), where A is the matrix of U in the fixed base.

To obtain part 3., we first easily check that \({\Vert \,\,\Vert _{}}_e\) satisfies all the conditions for a norm. The estimate \({\Vert U\Vert _{}}_e\le {\Vert U\Vert _{}}\) then follows from the fact that the norm on the RHS is the operator norm and that the base vectors are normalized. The second estimate, with some constant \(D_e\), again follows from the equivalence of all norms on \({\mathcal {B}}(X)\). \(\square \)

Proof of Theorem 1.7

Observe that “(iii)\(\Longrightarrow \) (iii’)” is trivial, while “(iii’)\(\Longrightarrow \) (iii)” follows directly from Lemma 1.8.1., which always gives the opposite estimate \(|\det U_s|^{1/m}\le \Vert U_s\Vert _{}\). Let us prove “(ii)\(\Longleftrightarrow \) (iii)”. Suppose (ii). By Lemma 1.8.3. and by (ii) applied to all the m vectors of a fixed normalized base e we get

$$\begin{aligned} \Vert U_s\Vert _{}\le C {\Vert U_s\Vert _{}}_e \le C'|\det U_s|^{1/m} \end{aligned}$$

for any \(s\in S\), with some constants \(C, C'\). Hence, using also Lemma 1.8.1., we get (iii). Now suppose (iii) and let \(x\in X_*\). We thus have

$$\begin{aligned} \Vert U_sx\Vert _{}\le \Vert x\Vert _{} {\Vert U_s\Vert _{}} \le \Vert x\Vert _{} C|\det U_s|^{1/m} \end{aligned}$$

for any \(s\in S\) with some constant C. To get the “opposite direction” estimate, we can assume that \(\Vert x\Vert _{}=1\). Then by Lemma 1.8.2. for any \(s\in S\) we have

$$\begin{aligned} 1=\Vert x\Vert _{}=\Vert (U_s)^{-1}U_sx\Vert _{}\le \Vert U_sx\Vert _{}\Vert (U_s)^{-1}\Vert _{}\le C_m \Vert U_sx\Vert _{} \frac{\Vert U_s\Vert _{}^{m-1}}{|\det U_s|}, \end{aligned}$$

and by (iii), for some constant \(C'>0\) and for any \(s\in S\)

$$\begin{aligned} \Vert U_sx\Vert _{}\ge (C_m)^{-1}\frac{|\det U_s|}{\Vert U_s\Vert _{}^{m-1}}\ge C'\frac{|\det U_s|}{\left( |\det U_s|^{1/m}\right) ^{m-1}}= C'|\det U_s|^{1/m}, \end{aligned}$$

and (ii) is proved.

By Theorem 1.5 we have also “(i)\(\Longleftrightarrow \) (iv)”, so it suffices to prove “(ii)\(\Longrightarrow \) (i)” and “(iv)\(\Longrightarrow \) (iii)”. But the former is obvious by the transitivity of \(\asymp _{s}\) relation, and the latter follows directly from Lemma 1.8.1. \(\square \)

As we have seen, there are many equivalent ways to express uni-asymptoticity in the finite dimensional case. On the other hand, Example 1.6 shows that none of them works for a general Banach space X and a general family \(\left\{ U_s\right\} _{s\in S}\). Thus a natural open problem can be formulated:

To find equivalent criteria for uni-asymptoticity for some special kinds of families \(\left\{ U_s\right\} _{s\in S}\) of operators acting in infinite dimensional Banach spaces X, e.g., for discrete semigroups (with \(S=\mathbb {N}_0\)), for \(C_0\)-semigroups (with \(S=[0;+\infty )\)) and others.

3 Uni-asymptoticity in Spectral Studies of Jacobi Operators

3.1 Jacobi Operators: Generalized Eigenvectors and \(\mathbf{2}{{\varvec{d}}}\)-generalized Eigenvectors

Let us start by recalling the notions of block Jacobi matrix and operator (see, e.g., [2, 6, 13] and the references therein) and its particular “scalar” case. Let \(d\ge 1\) and let \(A_1, B_1, A_2, B_2,\ldots \) be \(d\times d\) self-adjoint real matrices. We consider the semi-infinite “block-tri-diagonal” matrix of the form:

$$\begin{aligned} {\mathcal {J}}= \left( \begin{array}{ccccc} B_1 &{}\quad A_1 &{}\quad &{}\quad &{}\quad \\ A_1 &{}\quad B_2 &{}\quad A_2 &{}\quad &{}\quad \\ &{}\quad A_2 &{}\quad B_3 &{}\quad A_3 &{}\quad \\ &{}\quad &{}\quad A_3 &{}\quad B_4 &{}\quad \ddots \\ &{}\quad &{}\quad &{}\quad \ddots &{}\quad \ddots \end{array}\right) . \end{aligned}$$

\({\mathcal {J}}\) is called a block Jacobi matrix, and in the main case \(d=1\) it is just a Jacobi matrix (also: scalar Jacobi matrix). The sequences \(\left\{ A_k\right\} _{k\ge 1}\), \(\left\{ B_k\right\} _{k\ge 1}\) are called the weight and the diagonal sequence, respectively, and here it is assumed that

$$\begin{aligned} \det A_k\not =0, \ \ \ \ k\ge 1. \end{aligned}$$
(2.18)

We identify the matrix \({\mathcal {J}}\) with the formal Jacobi operator \({\mathcal {J}}\), acting in the linear space \(\ell \) of all sequences of \(\mathbb {C}^d\)-vectors \(u=\left\{ u(k)\right\} _{k\ge 1}\) by the formula:

$$\begin{aligned} ({\mathcal {J}}u)(k){:=}A_{k-1}u(k-1)+B_ku(k)+A_ku(k+1), \ \ \ \ k\ge 1, \end{aligned}$$

where additionally we set \(A_0:=0\), \(u(0):=0\), so that the formula also makes sense for \(k=1\). We distinguish between the block Jacobi matrix (= formal operator) \({\mathcal {J}}\) and the block Jacobi operator J, which is our main object here. Roughly speaking, J is just “the restriction” of \( {\mathcal {J}}\) to the Hilbert space \(\ell ^2=\ell ^2(\mathbb {N}, \mathbb {C}^d):=\{u\in \ell : \Vert u\Vert _{\ell ^2}<+\infty \} \) with \( \Vert u\Vert _{\ell ^2}{:=}\left( \sum _{k=1}^{+\infty }\Vert u(k)\Vert ^2_{_{\mathbb {C}^d}}\right) ^{\frac{1}{2}}\) for \(u=\left\{ u(k)\right\} _{k\ge 1}\in \ell \). Precisely (note the different notation: J and \({\mathcal {J}}\)): \(D(J){:=}\{u\in \ell ^2: {\mathcal {J}}u\in \ell ^2\}\) and \(J{:=}{{\mathcal {J}}}{\mid D(J)}\), i.e., J is the so-called maximal operator for \({\mathcal {J}}\) in \(\ell ^2\). It is a bounded self-adjoint operator in \(\ell ^2\) when \(\left\{ A_k\right\} _{k\ge 1}\) and \(\left\{ B_k\right\} _{k\ge 1}\) are bounded. But if at least one of \(\left\{ A_k\right\} _{k\ge 1}\), \(\left\{ B_k\right\} _{k\ge 1}\) is unbounded, then J is unbounded and its self-adjointness may require additional assumptions. Note that there are several well-known assumptions of this kind, e.g., the generalized Carleman condition \(\sum _{n=1}^{+\infty }\frac{1}{\Vert A_n\Vert }=+\infty \) (see [1, 6] for more information related to the self-adjointness problem for block Jacobi operators). So, in particular, if the weight sequence is bounded, then J is always self-adjoint, regardless of whether the diagonal sequence is bounded or not. In the present paper the self-adjoint case of J is the most important for us.
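The block-tri-diagonal structure of \({\mathcal {J}}\) can be sketched by assembling a finite N-block truncation (the concrete weights and diagonals below, with \(d=2\), are illustrative choices only, chosen self-adjoint with \(\det A_k\not =0\) as in (2.18)):

```python
# A minimal sketch: assemble an N-block truncation of the block Jacobi
# matrix with diagonal blocks B_k and off-diagonal blocks A_k, and check
# that the result is symmetric (the A_k, B_k here are illustrative).
import numpy as np

d, N = 2, 6
A = [(1.0 + 0.1 * k) * np.eye(d) for k in range(1, N)]          # weights
B = [np.diag([float(k), -float(k)]) for k in range(1, N + 1)]   # diagonals

Jmat = np.zeros((N * d, N * d))
for k in range(N):
    Jmat[k*d:(k+1)*d, k*d:(k+1)*d] = B[k]
for k in range(N - 1):
    Jmat[k*d:(k+1)*d, (k+1)*d:(k+2)*d] = A[k]
    Jmat[(k+1)*d:(k+2)*d, k*d:(k+1)*d] = A[k]   # A_k is self-adjoint

assert np.allclose(Jmat, Jmat.T)                 # formal self-adjointness
```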

Consider now \(\lambda \in \mathbb {C}\) and \(u\in \ell \).

Definition 2.1

u is a generalized eigenvector of J for \(\lambda \)

$$\begin{aligned} {\text{ iff }} \ \ \ \ \ ({\mathcal {J}}u)(k)=\lambda u(k) \ \ \ \text{ for } \ \ k\ge 2. \end{aligned}$$
(2.19)

We use the abbreviation GEV for “generalized eigenvector”.

Note that \(k=1\) is omitted above and that we do not require u to be in D(J) (nor even in \(\ell ^2\)). (2.19) is equivalent to

$$\begin{aligned} u(k+1)=-A_{k}^{-1}\left[ A_{k-1}u(k-1)+(B_{k}-\lambda )u(k)\right] , \ \ \ k\ge 2, \end{aligned}$$
(2.20)

which allows one to easily construct the unique GEV u (for \(\lambda \)) starting from any pair of \(\mathbb {C}^d\)-vectors u(1), u(2). Denote:

$$\begin{aligned} GEV(\lambda ){:=}\{u: \ ~ u~ \text{ is } \text{ a } \text{ GEV } \text{ of }~ J~ \text{ for }~ \lambda \}. \end{aligned}$$

We have \(\dim GEV(\lambda )=2d\). Let us rewrite (2.20) into a \(\mathbb {C}^{2d}\)-vector form:

$$\begin{aligned} x(k+1)=T_k(\lambda ) x(k), \ \ \ \ \ k \ge 2, \end{aligned}$$
(2.21)

where

$$\begin{aligned} x(k){:=}\left( \begin{array}{c} u(k-1) \\ u(k) \end{array}\right) \in \mathbb {C}^{2d} \end{aligned}$$
(2.22)

for a sequence u of \(\mathbb {C}^{d}\)-vectors, and \(T_k(\lambda )\) is the k-th transfer matrix (of size \(2d\times 2d\))

$$\begin{aligned} T_k(\lambda ){:=}\left( \begin{array}{cc} 0 &{}\quad I\\ -A_k^{-1}A_{k-1} &{}\quad A_k^{-1}(\lambda -B_k) \end{array}\right) . \end{aligned}$$
(2.23)
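The passage from (2.20) to (2.21) can be checked numerically. The sketch below (for simplicity with \(d=1\), so the blocks are scalars; the weights, diagonals and \(\lambda \) are illustrative choices) iterates the transfer matrices and verifies the three-term recurrence:

```python
# Check (d = 1 for simplicity) that the transfer matrix (2.23) reproduces
# the recursion (2.20): iterating x(k+1) = T_k(lam) x(k) yields u solving
# a_{k-1} u(k-1) + (b_k - lam) u(k) + a_k u(k+1) = 0 for k >= 2.
import numpy as np

a = {k: 1.0 + 0.05 * k for k in range(1, 60)}   # illustrative nonzero weights
b = {k: (-1.0) ** k for k in range(1, 60)}      # illustrative diagonal
lam = 0.3

def T(k):
    return np.array([[0.0, 1.0],
                     [-a[k - 1] / a[k], (lam - b[k]) / a[k]]])

x = np.array([0.7, -1.2])        # initial data (u(1), u(2))
u = {1: x[0], 2: x[1]}
for k in range(2, 50):
    x = T(k) @ x                 # now x = x(k+1) = (u(k), u(k+1))
    u[k + 1] = x[1]

for k in range(2, 50):
    res = a[k - 1] * u[k - 1] + (b[k] - lam) * u[k] + a[k] * u[k + 1]
    assert abs(res) <= 1e-9 * max(1.0, abs(u[k - 1]), abs(u[k]), abs(u[k + 1]))
```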

Hence, if \(x=\left\{ x(k)\right\} _{k\ge 2} \) and \(u=\left\{ u(k)\right\} _{k\ge 1}\) are related by (2.22), then (2.20) is equivalent to (2.21). We define:

Definition 2.2

x is a 2d-generalized eigenvector of J for \(\lambda \) iff (2.21) holds. \(\overrightarrow{GEV(\lambda )}\) denotes the 2d-dimensional space of all 2d-generalized eigenvectors of J for \(\lambda \).

Obviously

$$\begin{aligned} \overrightarrow{GEV(\lambda )}=\left\{ \left\{ \left( \begin{array}{c} u(k-1) \\ u(k) \end{array}\right) \right\} _{k\ge 2}\,: \ u\in GEV(\lambda )\right\} . \end{aligned}$$
(2.24)

Some methods used in spectral studies of scalar Jacobi operators are based on the general idea of finding relations between asymptotic properties of sequences from \({GEV(\lambda )}\) or \(\overrightarrow{GEV(\lambda )}\) and spectral properties of J. Let us stress that the word “asymptotic” above can have many particular meanings, and different spectral results may be related to different kinds of asymptotic properties. For some asymptotic properties the use of 2d-generalized eigenvectors can be more suitable than the use of generalized eigenvectors; see, e.g., [10]. These methods, however, are less developed in the case of block Jacobi operators (with \(d>1\)).

3.2 H-Class and Spectral Results for Scalar Jacobi Operator

The notion of H-class for sequences of \(2\times 2\) scalar matrices was introduced to study the absolutely continuous spectrum of scalar Jacobi operators; see, e.g., [3, 9]. It extends in a natural way, for any m, to sequences \(\{C_n\}_{n\ge n_0}\) of \(m\times m\) complex matrices:

Definition 2.3

\(\{C_n\}_{n\ge n_0}\in H\) iff

$$\begin{aligned} \exists _{M>0}\forall _{n\ge n_0}\ \ \Vert C_n\cdot \ldots \cdot C_{n_0}\Vert ^m\le M\prod _{k=n_0}^{n}|\det C_k|. \end{aligned}$$
(2.25)

Note that the above requirement is quite strong, since for any \(\{C_n\}_{n\ge n_0}\) the estimate opposite to (2.25) holds. Namely, by Lemma 1.8.1.,

$$\begin{aligned} \prod _{k=n_0}^{n}|\det C_k|= |\det (C_n\cdot \ldots \cdot C_{n_0})| \le \Vert C_n\cdot \ldots \cdot C_{n_0}\Vert ^m. \end{aligned}$$
(2.26)

So we easily get an equivalent formulation of the H-class definition, expressed in the asymptotic terms “\(\asymp \)” used in this paper.

Corollary 2.4

\(\{C_n\}_{n\ge n_0}\in H\) iff \(\Vert C_n\cdot \ldots \cdot C_{n_0}\Vert \asymp _{n}\left( \prod _{k=n_0}^{n}|\det C_k|\right) ^{\frac{1}{m}}\).
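Corollary 2.4 gives a convenient finite-horizon diagnostic: monitor \(r(n)=\Vert C_n\cdot \ldots \cdot C_{n_0}\Vert /\big (\prod _{k=n_0}^{n}|\det C_k|\big )^{1/m}\) along the products. A sketch with two illustrative constant sequences (\(m=2\), not from the paper):

```python
# Finite-horizon look at Corollary 2.4 (m = 2): the ratio r(n) stays equal
# to 1 for scalar multiples of a rotation (H-class behaviour) and grows
# like 2^{n/2} for C_n = diag(2, 1), which is therefore not in H.
import numpy as np

def ratios(mats):
    out, P, logdet = [], np.eye(2), 0.0
    for C in mats:
        P = C @ P                                     # C_n * ... * C_{n0}
        logdet += np.log(abs(np.linalg.det(C)))       # log of prod |det C_k|
        out.append(np.linalg.norm(P, 2) / np.exp(logdet / 2))
    return out

t = 0.7
R = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
r_rot = ratios([2.0 * R] * 40)
r_diag = ratios([np.diag([2.0, 1.0])] * 40)

assert max(r_rot) < 1.0 + 1e-6
assert r_diag[-1] > 1.0e5
```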

Consider now a (block) Jacobi operator J and define

$$\begin{aligned} H(J){:=}\{\lambda \in \mathbb {R}:\ \left\{ T_k(\lambda )\right\} _{k\ge 2}\in H\}, \end{aligned}$$

which we call the H-set of J. It should be noted that in the scalar case \(d=1\) the spectral character of H(J) can be described via the subordination theory of Gilbert–Pearson–Khan [5].

Theorem 2.5

(see [9]) If \(d=1\) and J is self-adjoint, then J is absolutely continuous in \(\text{ Int }(H(J))\) and \(\overline{\text{ Int }(H(J))}\subset \sigma _{\text{ ac }}(J)\).

3.3 H-Class and Uni-asymptoticity

The concept of H-class turns out to be closely related to our notion of uni-asymptoticity.

Consider a sequence \(\{C_n\}_{n\ge n_0}\) of \(m\times m\) complex matrices and define:

$$\begin{aligned} U_n {:=}C_n\cdot \ldots \cdot C_{n_0}, \ \ n\ge n_0. \end{aligned}$$
(2.27)

We now get:

Corollary 2.6

If \(\det C_n\not =0\) for any \(n\ge n_0\), then

$$\begin{aligned} \{C_n\}_{n\ge n_0}\in H \ \ \ \text{ iff } \ \ \ \{U_n\}_{n\ge n_0} \ \ \ \text{ is } \text{ uni-asymptotic. } \end{aligned}$$

Proof

It suffices to use Corollary 2.4 and Theorem 1.7 ((i) \(\Longleftrightarrow \) (iii)). \(\square \)

Now we shall apply this result to the sequence of transfer matrices of a (block) Jacobi operator J. So, for fixed \(\lambda \in \mathbb {C}\) we consider the dynamical system \(\left\{ U_s(\lambda )\right\} _{s\in S}\) with:

$$\begin{aligned} X:=\mathbb {C}^{2d}, \ \ S:=\mathbb {N}_2, \end{aligned}$$

and with

$$\begin{aligned} U_n(\lambda ) :=T_n(\lambda )\cdot \dots \cdot T_{2}(\lambda )\ \ \ \ \ \ \text{ for }~ n\ge 2 \end{aligned}$$
(2.28)

(see 2.23). We get

Theorem 2.7

For any \(\lambda \in \mathbb {C}\) TFCAE:

  1. \(\left\{ T_n(\lambda )\right\} _{n\ge 2}\in H\);

  2. \(\left\{ U_n(\lambda )\right\} _{n\ge 2}\) is uni-asymptotic;

  3. \(\overrightarrow{GEV(\lambda )}\) is uni-asymptotic;

  4. for any non-zero \(\left\{ x(n)\right\} _{n\ge 2}\in \overrightarrow{GEV(\lambda )}\)

    $$\begin{aligned} \Vert x(n)\Vert \asymp _{n}\left( \frac{1}{\root d \of {|\det A_{n-1}|}}\right) ^{\frac{1}{2}}. \end{aligned}$$
    (2.29)

Proof

We start from two simple observations on the “shifted” family of matrices \(\{{\widetilde{U}}_n(\lambda )\}_{n\ge 2}\), given by

$$\begin{aligned} \widetilde{U}_n(\lambda ):= \left\{ \begin{array}{ll} I &{}\quad \text{ for }~n=2,\\ U_{n-1}(\lambda ) &{}\quad \text{ for }~n\ge 3. \end{array}\right. \end{aligned}$$

First observe that, by Definition 2.2 and by (2.21),

$$\begin{aligned} \overrightarrow{GEV(\lambda )}=\left\{ \left\{ {\widetilde{U}}_n(\lambda )\alpha \right\} _{n\ge 2}\,: \ \alpha \in \mathbb {C}^{2d} \ \right\} . \end{aligned}$$
(2.30)

By part 1. of Fact 1.4 we get the second observation:

$$\begin{aligned} \{U_n(\lambda )\}_{n\ge 2} \ \text{ is } \text{ uni-asymptotic } \ \text{ iff } \ \{{\widetilde{U}}_n(\lambda )\}_{n\ge 2} \ \text{ is } \text{ uni-asymptotic. } \end{aligned}$$
(2.31)

The equivalence of 1. and 2. follows from Corollary 2.6. By our second observation (2.31), 2. is equivalent to the uni-asymptoticity of \(\{{\widetilde{U}}_n(\lambda )\}_{n\ge 2}\). Hence the first observation (2.30) and Fact 1.3 give the equivalence of 2. and 3.

Now, let us apply Theorem 1.7 ((i) \(\Longleftrightarrow \) (ii)) to the family \(\{{\widetilde{U}}_n(\lambda )\}_{n\ge 2}\). We also have

$$\begin{aligned} |\det \widetilde{U}_n(\lambda )| = |\det U_{n-1}(\lambda )| = \prod _{k=2}^{n-1}|\det T_{k}(\lambda )| = \prod _{k=2}^{n-1}\left| \frac{\det A_{k-1}}{\det A_{k}}\right| = \left| \frac{\det A_1}{\det A_{n-1}}\right| \end{aligned}$$

for \(n\ge 3\), by (2.23). This gives us the equivalence of 2. and 4. (note that here \(m:=2d\)). \(\square \)
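Theorem 2.7 can be sanity-checked numerically. The sketch below uses the free scalar Jacobi operator (\(d=1\), \(A_n\equiv 1\), \(B_n\equiv 0\)) and assumes the standard one-step transfer matrix \(T_n(\lambda )=\left[ {\begin{matrix} \lambda &{} -1 \\ 1 &{} 0 \end{matrix}}\right] \) (the paper's definition (2.23) is not reproduced in this section, so this concrete form is an assumption on our part; note it satisfies \(\det T_k=\det A_{k-1}/\det A_k=1\) here). Since \(\det A_n\equiv 1\), condition 4. predicts \(\Vert x(n)\Vert \asymp _n 1\) exactly when the H-class condition holds:

```python
import numpy as np

def product_norms(lam, n_max):
    # free scalar Jacobi operator: weights a_n = 1, diagonal b_n = 0, so
    # every one-step transfer matrix is T = [[lam, -1], [1, 0]], det T = 1
    # (an illustrative special case, not the paper's general (2.23))
    T = np.array([[lam, -1.0], [1.0, 0.0]])
    U = np.eye(2)
    norms = []
    for _ in range(n_max):
        U = T @ U                       # U_n = T_n ... T_2
        norms.append(np.linalg.norm(U, 2))
    return norms

inside = product_norms(1.0, 200)    # lam in (-2, 2): elliptic case
outside = product_norms(3.0, 200)   # lam outside [-2, 2]: hyperbolic case
print(min(inside), max(inside))     # bounded above and below, as (2.29) predicts
print(outside[-1])                  # geometric growth: {T_n(3)} is not in H
```

For \(\lambda =1\) the matrix \(T(1)\) satisfies \(T(1)^6=I\), so the product norms oscillate in a fixed finite range, matching the uni-asymptoticity of condition 2.; for \(\lambda =3\) they grow like \(\left( (3+\sqrt{5})/2\right) ^{n}\), so (2.25) fails.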

Remark 2.8

  1. The uni-asymptoticity results also give estimates from below, not only from above. Therefore condition 4. of Theorem 2.7 gives both an upper and a lower norm estimate for each non-zero 2d-generalized eigenvector in terms of the RHS of (2.29). The idea of such “two-sided norm estimates” has been used by many authors to construct Weyl sequences (see, e.g., [4, 7]). For instance, in [8] such a method allows one to obtain results on the essential spectrum \(\sigma _{\text{ ess }}(J)\) for scalar Jacobi operators of the form \(J=J_0+P\), where \(\text{ Int }(H(J_0))\) is non-empty and P is a perturbation of some special kind. If \(P=0\), such results are obvious from the point of view of subordination theory methods (see, e.g., Theorem 2.5), just because the “\(\sigma _{\text{ ac }}(J)\) information is stronger than the \(\sigma _{\text{ ess }}(J)\) one”. And if \(P\not =0\), the method becomes important, because \(\text{ Int }(H(J_0+P))\) can be empty, which makes Theorem 2.5 useless.

  2. Nevertheless, subordinacy does not work in the case \(d>1\). This is why, in the case of block Jacobi operators, essential spectrum results and methods are in a sense more significant than in the scalar case. For instance, if \(d>1\), then it makes sense to check whether \(\sigma _{\text{ ess }}(J_0)\) contains \(\text{ Int }(H(J_0))\). The methods used in [8] for \(P=0\) were based on a construction of Weyl sequences by “deforming” generalized eigenvectors of \(J_0\). And in fact, the methods can be extended from \(d=1\) to any d. However, we need some extra assumptions on the behaviour of the norms \(\Vert A_n\Vert \) and determinants \(\det A_n\) of the weight terms of J. Those extra assumptions can have various forms. As an example, we announce here the following result:

    If J is self-adjoint, W, V are scalar sequences satisfying

    1. W(n), V(n) are monotonic in n,

    2. \(W(n)\longrightarrow +\infty \), \(\frac{W(n)}{n}\longrightarrow 0\),

    3. \( W(2n)\prec _n W(n)\), \( V(2n)\asymp _n V(n)\),

    and \(\Vert A_n\Vert \asymp _n W(n)\), \(\det A_n \asymp _n V(n)\), then \(H(J)\subset \sigma _{\text{ ess }}(J)\).

    The detailed proof of the above result, and of some other versions, will appear in a paper in preparation. The general idea is based on mimicking the proof of [8, Th. 2.1]. However, some problems are related to the fact that the uni-asymptoticity of \(\overrightarrow{GEV(\lambda )}\) does not imply the uni-asymptoticity of \({GEV(\lambda )}\) (even for \(d=1\)!). If \(d=1\), then this problem can be avoided by an appropriate choice of the initial condition for the generalized eigenvector used to construct the Weyl sequences. But for \(d>1\) this is not possible, and this is why some extra assumptions (e.g. (1), (2), (3) above) are needed.

  3. Let us note some recent papers of Świderski. They contain a very interesting approach, leading, in particular, to uni-asymptoticity results for 2-generalized eigenvectors in the scalar case (together with some extra “\(\lambda \)-uniform” properties); see [12]. The paper [13] provides some spectral results, including also some essential spectrum results for the block case. Instead of a construction of Weyl sequences, the method is based on checking that there is no generalized eigenvector in \(\ell ^2(\mathbb {N}, \mathbb {C}^d)\).

  4. Due to “the gap” between the scalar and the block case spectral results, a natural open question arises:

    Is there any relation of the set H(J) with \(\sigma _{\text{ ac }}(J)\) for \(d>1\)?

    Basic analysis of the simplest examples with diagonal blocks shows that any possible relation would have to be much more delicate than for \(d=1\). Using very informal language: we could suspect that H(J) is related somehow to a special kind of “multiplicity-d absolute continuity” for J. To explain this more precisely, let us assume that all the blocks \(A_n\), \(B_n\) are diagonal, J is self-adjoint and that H(J) is non-empty. Then, by Theorems 2.5 and 1.7, one can easily prove the following assertion:

    There exist d scalar self-adjoint Jacobi operators \(J^{(1)},\ldots , J^{(d)}\) with the weight sequences \(\left\{ a^{(1)}_n\right\} _{n\ge 1},\ldots ,\left\{ a^{(d)}_n\right\} _{n\ge 1}\), respectively, such that

    (i) J is unitarily equivalent to \(J^{(1)}\oplus \ldots \oplus J^{(d)}\),

    (ii) \(J^{(j)}\) is absolutely continuous in \(\text{ Int }(H(J))\) and \(\overline{\text{ Int }(H(J))}\subset \sigma _{\text{ ac }}(J^{(j)})\) for any \(j=1,\ldots ,d\),

    (iii) \(a^{(i)}_n\asymp _{n}a^{(j)}_n\) for any \(i,j=1,\ldots ,d\).

    Note that in the above “trivial” diagonal case the scalar operator \(J^{(j)}\) can be defined simply by the diagonal terms of the blocks of J: its n-th weight \(a^{(j)}_n\) is just the j-th diagonal term of \(A_n\), and its n-th diagonal term is just the j-th diagonal term of \(B_n\) (also the unitary operator which gives (i) is easy to guess). Surely, the assertion will not change if we assume that any two blocks of J commute, instead of assuming the diagonality of all blocks (since there exists a common diagonalization of all blocks in this case). And the real problem starts when we reject the assumption of commutativity...
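    The diagonal-case assertion (i) can be verified numerically on finite sections. In the sketch below (the helper `block_jacobi`, the sizes \(N=6\), \(d=2\) and the random data are our illustrative choices), the permutation that regroups coordinates by their diagonal slot j realises the unitary equivalence of J with \(J^{(1)}\oplus J^{(2)}\):

```python
import numpy as np

def block_jacobi(A, B):
    """Finite N*d x N*d section of a block Jacobi matrix built from
    d x d blocks: B_n on the diagonal, A_n on the off-diagonals."""
    N, d = len(B), B[0].shape[0]
    J = np.zeros((N * d, N * d))
    for n in range(N):
        J[n*d:(n+1)*d, n*d:(n+1)*d] = B[n]
        if n + 1 < N:
            J[n*d:(n+1)*d, (n+1)*d:(n+2)*d] = A[n]
            J[(n+1)*d:(n+2)*d, n*d:(n+1)*d] = A[n].T
    return J

rng = np.random.default_rng(1)
N, d = 6, 2
A = [np.diag(rng.uniform(1, 2, d)) for _ in range(N - 1)]   # diagonal weights
B = [np.diag(rng.uniform(-1, 1, d)) for _ in range(N)]      # diagonal blocks
J = block_jacobi(A, B)

# permutation regrouping coordinates by the diagonal slot j: this is the
# "easy to guess" unitary (here: permutation) operator realising (i)
perm = [n * d + j for j in range(d) for n in range(N)]
PJP = J[np.ix_(perm, perm)]

# the scalar operators J^(j) built from the j-th diagonal entries
scalar = []
for j in range(d):
    aj = [A[n][j, j] for n in range(N - 1)]   # weights a_n^(j)
    bj = [B[n][j, j] for n in range(N)]       # diagonal terms b_n^(j)
    scalar.append(np.diag(bj) + np.diag(aj, 1) + np.diag(aj, -1))
direct_sum = np.block([[scalar[0], np.zeros((N, N))],
                       [np.zeros((N, N)), scalar[1]]])      # d = 2 here
print(np.allclose(PJP, direct_sum))   # True
```

    The same permutation works for any d; only the explicit \(2\times 2\) block layout of `direct_sum` above is tied to \(d=2\).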

    Thus, let us end with a more concrete formulation of the above open question:

    Is the above assertion with (i), (ii), (iii) also true in the general case of self-adjoint J (without the commutativity of blocks assumption)? And if it is not true, which parts of the assertion could be preserved?