1 Introduction

An important problem in the theory of differential equations is to determine the asymptotic behavior of solutions. One of the main issues in this topic concerns stability. In the case of finite-dimensional linear systems, (exponential or asymptotic) stability of a system is equivalent to the fact that all eigenvalues lie in the open left half-plane. For linear equations in an infinite-dimensional space the problem of stability is much more complicated. In particular, a system can be asymptotically stable even if it possesses a point of its spectrum on the imaginary axis (see Arendt and Batty [1], Lyubich and Phong [8], Sklyar and Shirman [17]). On the other hand, a system may be unstable even when its spectrum is contained in the open left half-plane, while the eigenvalues approach the imaginary axis. Such a situation may occur for hyperbolic equations or delay equations of neutral type (e.g. Rabah et al. [13, 14]).

One of the important characteristics describing the asymptotic behavior of solutions of a linear differential equation is the growth bound \(\omega _0=\omega _0(T)=\lim _{t\rightarrow +\infty }\frac{1}{t}\ln \Vert T(t)\Vert \) of the corresponding \(C_0\)-semigroup T(t). In the context of stability the critical situation is when \(\omega _0=0\). In this case the semigroup cannot be exponentially stable (\(\Vert T(t)\Vert \not \rightarrow 0\)), but it still may be asymptotically stable (\(\Vert T(t)x\Vert \rightarrow 0, \, x\in X\)). Then the solutions may tend to zero arbitrarily slowly. However, the solutions with sufficiently regular initial states (for example, from the domain of the generator) tend to zero uniformly, i.e. \(\Vert T(t)A^{-1}\Vert \rightarrow 0\). In particular, Batty [3, 4] and Phong [10] (cf. Sklyar [15]) proved that for a bounded \(C_0\)-semigroup T(t) on a Banach space with generator A the following holds: if

$$\begin{aligned} \sigma (A)\cap (i\mathbb {R})=\emptyset , \end{aligned}$$

then

$$\begin{aligned} \Vert T(t)A^{-1}\Vert \rightarrow 0,\quad t\rightarrow +\infty . \end{aligned}$$

This means that all solutions of the abstract Cauchy problem with initial condition in the domain of A tend uniformly to zero not slower than the function \(\Vert T(t)A^{-1}\Vert \). This fact leads to the concept of polynomial stability:

Definition 1.1

[2] We say the semigroup T(t) generated by A is polynomially stable if there exist constants \(\alpha ,\beta ,C>0\) such that

$$\begin{aligned} \Vert T(t)(A-dI)^{-\alpha }\Vert \le Ct^{-\beta }, \quad t>0, \end{aligned}$$
(1)

for some \(d\in \rho (A)\).

It is easy to see that the above definition does not depend on the choice of d. Hence if \(0\in \rho (A)\) then (1) is equivalent to

$$\begin{aligned} \Vert T(t)A^{-\alpha }\Vert \le Ct^{-\beta }, \quad t>0. \end{aligned}$$
(2)

The rate of decay of solutions with an initial condition from the set \(D(A^{-1})\), and more generally from the set \(D(A^{-\alpha }),\,\, \alpha >0\), is closely related to the asymptotic behavior of the resolvent \(R(A,\lambda )\) on the imaginary axis (Bátkai et al. [2]; Borichev and Tomilov [5]). In particular, it is shown in [5, Theorem 2.4] that for a bounded \(C_0\)-semigroup T(t) on a Hilbert space H and a fixed positive constant \(\alpha \), the following conditions are equivalent:

$$\begin{aligned} \text{(i) } \quad \Vert R(A,is)\Vert= & {} O(|s|^{\alpha }),\quad s\rightarrow \infty ,\\ \text{(ii) } \quad \Vert T(t)A^{-\alpha }\Vert= & {} O(t^{-1}),\quad t\rightarrow +\infty ,\\ \text{(iii) } \quad \Vert T(t)A^{-1}\Vert= & {} O(t^{-\frac{1}{\alpha }}),\quad t\rightarrow +\infty . \end{aligned}$$

It is shown in [2] that the rate of growth of the resolvent on the imaginary axis imposes some restrictions on the location of the spectrum. In particular, if condition (i) holds for the generator of a bounded \(C_0\)-semigroup acting in a Banach space, then the spectrum of the generator satisfies the following condition (see [2, Propositions 3.6, 3.7]):

$$\begin{aligned} (A)\quad \mathrm{Re}\,\lambda \le -\gamma |\mathrm{Im}\,\lambda |^{-\alpha }, \quad \lambda \in \sigma (\mathcal {A}), \end{aligned}$$

for some real, positive constant \(\gamma \) and small values of \(|\mathrm{Re}\,\lambda |\). However, in the general case knowledge of the location of the spectrum is not enough to determine the asymptotic behavior of the resolvent and/or the semigroup. At the same time, it is also shown in [2, Proposition 4.1] that if the generator A of a bounded semigroup is a normal operator in a Hilbert space with spectrum \(\sigma (A)\) in the open left half-plane, then condition (ii) is equivalent to (A).
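As a simple illustration of the normal case (an example of our own, not taken from [2, 5]), consider the diagonal operator A on \(\ell ^2(\mathbb {Z}\setminus \{0\})\) whose eigenvalues are \(\lambda _k=-|k|^{-\alpha }+ik\), \(k\ne 0\). Condition (A) then holds with \(\gamma =1\), and a direct computation gives

$$\begin{aligned} \Vert R(A,is)\Vert =\sup _{k}\frac{1}{|\lambda _k-is|}=O(|s|^{\alpha }), \qquad \Vert T(t)A^{-\alpha }\Vert =\sup _{k}\frac{e^{-|k|^{-\alpha }t}}{|\lambda _k|^{\alpha }}\le \sup _{x>0}xe^{-xt}=\frac{1}{et}, \end{aligned}$$

so in this example conditions (i) and (ii) can be read off directly.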

In this paper we analyze polynomial stability for a certain class of not necessarily bounded discrete semigroups acting in a Hilbert space, extending the results of [2, 5] to this class. Namely, we consider the class of semigroups whose generator has spectrum split into a family of finite separated sets, and the corresponding eigenspaces are finite dimensional and form a Riesz basis. We give an estimate of the asymptotic behavior of these semigroups on dense sets (such as \(D(A^{-\alpha })\)) depending on the asymptotic closeness of the eigenvalues to the vertical line \(\text{ Re }\lambda =\omega _0\). In particular, in the case of bounded semigroups of our class we show that condition (A) is equivalent to (iii). The class of semigroups mentioned above was considered earlier in the paper of Miloslavskii [9], where an estimate of the semigroup norm was obtained (see [9, Theorem 1]).

Such semigroups appear naturally in the analysis of delay equations of neutral type. Following [12, 13] we consider the equation

$$\begin{aligned} \dot{z}(t)=A_{-1}\dot{z}(t-1) + \int _{-1}^0A_2(\theta )\dot{z}(t+\theta )d\theta + \int _{-1}^0 A_3(\theta )z(t+\theta )d\theta , \end{aligned}$$
(3)

where \(A_{-1}\) is an \(n \times n\) invertible complex matrix, and \(A_2\) and \(A_3\) are \(n \times n\) matrices of functions from \(L_2(-1,0)\). Equation (3) can be rewritten in the operator form

$$\begin{aligned} \dot{x}=\mathcal {A}x, \qquad x \in \mathcal {M}_2, \end{aligned}$$
(4)

where \(\mathcal {M}_2=\mathbb {C}^n \times L_2(-1,0;\mathbb {C}^n)\), the operator \(\mathcal {A}\) is then given by

$$\begin{aligned} \mathcal {A} \left( {\begin{array}{c}y(t)\\ z_t(\cdot )\end{array}}\right) = \left( {\begin{array}{c}\int _{-1}^0A_2(\theta )\dot{z}_t(\theta )d\theta + \int _{-1}^0 A_3(\theta )z_t(\theta )d\theta \\ dz_t(\theta )/d\theta \end{array}}\right) , \end{aligned}$$
(5)

where \(z_t(\cdot )=z(t+\cdot )\) and the domain of \(\mathcal {A}\) is as follows:

$$\begin{aligned} \mathcal {D}(\mathcal {A})= \left\{ (y,z(\cdot )):z \in H^1(-1,0;\mathbb {C}^n), y=z(0)-A_{-1}z(-1) \right\} \subset \mathcal {M}_2. \end{aligned}$$
(6)

This model of a neutral type equation was introduced by Burns et al. [6]. The complete spectral analysis of the operator (5)–(6) was given in [13]. In particular, it was shown that the operator \(\mathcal {A}\) generates a discrete \(C_0\)-group whose spectrum consists of eigenvalues only, which lie asymptotically close to certain vertical lines and can be grouped into finite, separated families. The Riesz projections corresponding to these families generate a Riesz basis of subspaces of the space \(\mathcal {M}_2\). We generalize these properties and consider the following abstract class of operators. Let \(\mathcal {A}:D(\mathcal {A})\subset \mathcal {H} \rightarrow \mathcal {H}\) be the generator of a \(C_0\)-semigroup on a Hilbert space \(\mathcal {H}\). We assume that

  • (B1) \(\sigma (\mathcal {A})= \bigcup _{k \in \mathbb {Z}} \sigma _k\), and \(\inf \{|\lambda -\mu |:\lambda \in \sigma _i,\mu \in \sigma _j, i\ne j \}=d>0;\)

  • (B2) \(\mathrm{dim}\,\mathcal {P}_k\mathcal {H}\le N, k\in \mathbb {Z}\), where \(\mathcal {P}_k\) is the spectral projection corresponding to \(\sigma _k\);

  • (B3) the subspaces \(V_k:=\mathcal {P}_k\mathcal {H}, k\in \mathbb {Z}\), constitute a Riesz basis of subspaces of \(\mathcal {H}\).

Note that condition (B2) implies that the families \(\sigma _k\) must be finite and \(\#\sigma _k\le N\) for all \(k\in \mathbb {Z}\). It turns out (see Xu and Yung [20]; Zwart [21]) that in the case when \(\mathcal {A}\) generates a \(C_0\)-group (not only a \(C_0\)-semigroup) satisfying (B1)–(B2), condition (B3) is a consequence of the weaker condition:

(B3’) the linear span of the (generalized) eigenvectors of \(\mathcal {A}\) is dense in \(\mathcal {H}\).

The main goal of our paper is to extend the polynomial stability analysis to the class of \(C_0\)-semigroups mentioned above. We obtain a spectral criterion for polynomial stability of not necessarily bounded semigroups generated by operators satisfying (B1)–(B3). In particular, we describe the asymptotic behavior of the semigroups restricted to some dense, non-closed subsets in terms of the location of the spectrum. Thus we obtain

Theorem 1.1

Let \(\mathcal {A}:D(\mathcal {A})\subset \mathcal {H} \rightarrow \mathcal {H}\) generate a \(C_0\)-semigroup T(t) on \(\mathcal {H}\) and satisfy assumptions (B1)–(B3). If

$$\begin{aligned} (A') \quad \mathrm{Re}\,\lambda -\omega _0 \le -\gamma |\mathrm{Im}\lambda |^{-\alpha } \text{ for } \text{ all } \lambda \in \sigma (\mathcal {A}) \end{aligned}$$

for some real, positive constants \(\alpha ,\gamma \), then

$$\begin{aligned} \begin{array}{ll} (a) \quad &{}\Vert R(\mathcal {A},is+\omega _0)\Vert =O(|s|^{\alpha N}), \quad s\in \mathbb {R}, \quad s \rightarrow \infty , \\ (b) \quad &{}\Vert T(t)(\mathcal {A}-\omega _0I)^{-n}\Vert =O(e^{\omega _0t}t^{N-1-\frac{n}{\alpha }}), \quad t\rightarrow +\infty , \quad n\in \mathbb {N}. \end{array} \end{aligned}$$
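Note that for \(N=1\) and \(\omega _0=0\) assertion (b) reads \(\Vert T(t)\mathcal {A}^{-n}\Vert =O(t^{-\frac{n}{\alpha }})\), which for \(n=1\) is exactly the rate (iii) from [5] quoted above.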

Based on this theorem and the results from [2] we obtain the following:

Theorem 1.2

Let \(\mathcal {A}:D(\mathcal {A})\subset \mathcal {H} \rightarrow \mathcal {H}\) generate a \(C_0\)-semigroup T(t) on \(\mathcal {H}\), satisfy assumptions (B1)–(B3), and let \(\sigma (\mathcal {A})\subset \mathbb {C}^{-}\). Then the semigroup T(t) is polynomially stable if and only if condition (A) holds for some positive constants \(\gamma ,\alpha \) and small values of \(|\mathrm{Re}\,\lambda |\).

We also use Theorem 1.1 to describe the asymptotic behavior of solutions of neutral type equations (Theorem 3.1). This theorem complements our previous results [16] concerning the behavior of the norm of semigroups corresponding to equation (3).

The work is organized as follows. First we give the proof of Theorem 1.1, preceded by several technical results. The next section is devoted to the analysis of stability of the neutral type equations (3) and of regular feedback stabilizability of these equations [14]. In the appendix we give two simple statements about complex matrices which are used in our work.

2 Proof of the Main Results

We begin with some technical results which will be used in the proof of Theorem 1.1.

Lemma 2.1

For any sequence \(\{\lambda _1,\ldots ,\lambda _n,\hat{\lambda }_1,\ldots ,\hat{\lambda }_n \}\) of 2n pairwise different complex numbers and any complex number \(\tilde{\lambda }\) the system of linear equations

$$\begin{aligned} \sum _{k=1}^n\frac{\alpha _k}{\lambda _i-\hat{\lambda }_k}=\lambda _i-\tilde{\lambda }, \quad i=1,2,\ldots ,n, \end{aligned}$$
(7)

with n unknowns \(\alpha _1,\ldots , \alpha _n\) has a unique solution given by the formula

$$\begin{aligned} \alpha _k= \left( \hat{\lambda }_k-\tilde{\lambda }+\sum _{p=1}^n(\lambda _p-\hat{\lambda }_p) \right) \left( \lambda _k-\hat{\lambda }_k \right) \prod _{p=1;p\ne k}^n\frac{\lambda _p-\hat{\lambda }_k}{\hat{\lambda }_p-\hat{\lambda }_k}, \quad k=1,2,\ldots ,n. \end{aligned}$$
(8)

Proof

We solve the system by Cramer’s rule. It is easy to compute the main determinant D and the determinant \(D_k\), which is the determinant D with the k-th column replaced by the right-hand side of system (7). Namely, we have

$$\begin{aligned} D=\frac{\prod _{i>j}(\lambda _i-\lambda _j)(\hat{\lambda }_j-\hat{\lambda }_i)}{\prod _{i,j=1}^n(\lambda _i-\hat{\lambda }_j)}, \end{aligned}$$
$$\begin{aligned} D_k=\frac{\prod _{i>j}(\lambda _i-\lambda _j)\prod _{i>j;i,j\ne k}(\hat{\lambda }_j-\hat{\lambda }_i)}{\prod _{i,j=1;j \ne k}^n(\lambda _i-\hat{\lambda }_j)}\left( \hat{\lambda }_k-\tilde{\lambda }+\sum _{p=1}^n(\lambda _p-\hat{\lambda }_p) \right) . \end{aligned}$$

Taking \(\alpha _k=\frac{D_k}{D}, k=1,\ldots , n\) we arrive at (8). \(\square \)
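As a quick numerical sanity check of formula (8) (an illustration of our own, not part of the paper; numpy is assumed to be available and the data below are arbitrary sample values), one can solve system (7) directly and compare the result with the closed-form expression:

```python
import numpy as np

# Build the coefficient matrix of system (7) for random sample data,
# solve it directly, and compare with the closed-form expression (8).
rng = np.random.default_rng(0)
n = 4
lam = rng.normal(size=n) + 1j * rng.normal(size=n)       # lambda_1, ..., lambda_n
lam_hat = rng.normal(size=n) + 1j * rng.normal(size=n)   # hat-lambda_1, ..., hat-lambda_n
lam_tilde = 0.3 - 0.7j                                    # tilde-lambda

# System (7): sum_k alpha_k / (lambda_i - hat-lambda_k) = lambda_i - tilde-lambda
coeff = 1.0 / (lam[:, None] - lam_hat[None, :])
alpha_solved = np.linalg.solve(coeff, lam - lam_tilde)

# Closed-form solution (8)
alpha_formula = np.empty(n, dtype=complex)
for k in range(n):
    prod = 1.0 + 0.0j
    for p in range(n):
        if p != k:
            prod *= (lam[p] - lam_hat[k]) / (lam_hat[p] - lam_hat[k])
    alpha_formula[k] = ((lam_hat[k] - lam_tilde + np.sum(lam - lam_hat))
                        * (lam[k] - lam_hat[k]) * prod)

print(np.max(np.abs(alpha_solved - alpha_formula)))  # machine-precision agreement
```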

Lemma 2.2

Let \(\mathcal {A}:D(\mathcal {A})\subset \mathcal {H} \rightarrow \mathcal {H}\) be the generator of a \(C_0\)-semigroup \(e^{\mathcal {A}t}\) satisfying assumptions (B1)–(B3). In addition, we assume

(B1’) the spectral families \(\sigma _k\) are vertically separated, i.e. there exists a positive constant \(d_v\) such that

$$\begin{aligned} |\mathrm{Im}\,\lambda -\mathrm{Im}\,\mu |\ge d_v \quad \text{ for } \text{ all } \lambda \in \sigma _i,\ \mu \in \sigma _j,\ i\ne j. \end{aligned}$$

Then there exists a constant \(M>0\) independent of k such that \(\Vert e^{\mathcal {A}_kt}\Vert \le Me^{\omega _kt}(t^{N-1}+1)\), where \(\omega _k=\max \{\mathrm{Re}\,\lambda : \lambda \in \sigma _k\}\) and \(\mathcal {A}_k\) denotes the restriction of \(\mathcal {A}\) to \(V_k\).

Proof

We define the operator \(\mathcal {B}:D(\mathcal {A})\rightarrow \mathcal {H}\) in each subspace \(V_k\) separately, by the following formula:

$$\begin{aligned} \mathcal {B}|_{V_k}x_k:=\mathcal {B}_kx_k:=\mathcal {A}_kx_k+(\omega _0-\omega _k)x_k, \quad k \in \mathbb {Z}, \quad x=\sum _{k \in \mathbb {Z}} x_k, \end{aligned}$$

where \(\omega _0=\sup \{{\mathrm{Re}}\,\lambda : \lambda \in \sigma (\mathcal {A})\}\). Then for each \(k\in \mathbb {Z}\) we have \(\max \{\mathrm{Re}\,\lambda : \lambda \in \sigma (\mathcal {B}_k)\}\le \omega _0\). The operator \(\mathcal {B}\) generates a \(C_0\)-semigroup and satisfies assumptions (B1)–(B3). Hence (see [9, Theorem 1d])

$$\begin{aligned} \Vert e^{\mathcal {B}t}\Vert \le M e^{\omega _0 t}(t^{N-1}+1), \quad t\ge 0, \end{aligned}$$

and in the subspaces \(V_k\) we get

$$\begin{aligned} \Vert e^{\mathcal {A}_kt}\Vert \cdot e^{(\omega _0-\omega _k)t}=\Vert e^{\mathcal {B}_kt}\Vert \le M e^{\omega _0 t}(t^{N-1}+1), \quad t\ge 0, \end{aligned}$$

which implies the assertion. \(\square \)

Theorem 2.1

Let \(\mathcal {A}:D(\mathcal {A})\subset \mathcal {H} \rightarrow \mathcal {H}\) satisfy assumptions (B1)–(B3) and generate a \(C_0\)-semigroup on \(\mathcal {H}\). If there exists a constant \(L>0\) with

$$\begin{aligned} \omega _0-\mathrm{Re}\,\lambda \le L<+\infty , \quad \lambda \in \sigma (\mathcal {A}), \end{aligned}$$
(9)

then there exists a constant M independent of k such that

$$\begin{aligned} \Vert \mathcal {A}_k-\tilde{\lambda }_kI\Vert \le M, \quad k\in \mathbb {Z}, \end{aligned}$$

where \(\mathcal {A}_k\) denotes the restriction of \(\mathcal {A}\) to the corresponding basis subspace \(V_k:=\mathcal {P}_k\mathcal {H}, k\in \mathbb {Z}\) and \(\tilde{\lambda }_k\in \sigma _k\) is an eigenvalue from \(\sigma _k\) with maximal real part.

Proof

Without loss of generality we assume that each \(\mathcal {A}_k\) has \(n_k\le N\) different eigenvalues and no root vectors. Indeed, if this assumption is not satisfied, then for any \(\varepsilon >0\) we can find an operator \(\mathcal {A}_{\varepsilon }\) close to \(\mathcal {A}\), i.e. satisfying \(\Vert \mathcal {A}_{\varepsilon }-\mathcal {A}\Vert \le \varepsilon \), which has the same spectral subspaces \(V_k, k\in \mathbb {Z}\), and only simple, different eigenvalues. If the assertion is true for every operator \(\mathcal {A}_{\varepsilon },\,\varepsilon >0\), then it is also true for \(\mathcal {A}\).

The existence of a Riesz basis of subspaces allows us to consider the operator \(\mathcal {A}\) and its resolvent in each subspace separately. We recall our notation \(\mathcal {A}_k:=\mathcal {A}|_{V_k}\) and \(\mathcal {R}_k(\mathcal {A},\lambda ):=\mathcal {R}(\mathcal {A},\lambda )|_{V_k}\). In each subspace \(V_k\) we choose an orthonormal system \(\{e^{(k)}_i \}_{i=1}^{n_k}\). Note that the system \( \{ e^{(k)}_i: i=1,\ldots ,n_k; k\in \mathbb {Z} \}\) constitutes a Riesz basis in \(\mathcal {H}\). We define a family of matrices \(P_k \in M_{n_k}(\mathbb {C}), k\in \mathbb {Z}\), as \(P_k=[a^{(k)}_{ij}]\), where \(a^{(k)}_{ij}\) are the coefficients of the normalized eigenvector \(v^{(k)}_i\) in the basis \(\{e^{(k)}_j: j=1,\ldots ,n_k \}\). Let us denote the matrices of the operators \(\mathcal {A}_k\) and \(\mathcal {R}_k(\mathcal {A},\lambda )\) in the basis \(\{e^{(k)}_j: j=1,\ldots ,n_k \}\) by \(A_k\) and \(R_k(A,\lambda )\), respectively. Thus we have

$$\begin{aligned} A_k-\tilde{\lambda }_kI=P_k\Delta _k(\tilde{\lambda }_k)P^{-1}_k, \end{aligned}$$
(10)

where \(\Delta _k(\tilde{\lambda }_k)\) is a diagonal matrix with entries \(\{ \lambda ^{(k)}_1-\tilde{\lambda }_k, \lambda ^{(k)}_2-\tilde{\lambda }_k, \ldots , \lambda ^{(k)}_{n_k}-\tilde{\lambda }_k\}\) and

$$\begin{aligned} R_k(A,\lambda )=P_k\Delta _k^{-1}(\lambda )P^{-1}_k, \end{aligned}$$
(11)

where \(\Delta _k^{-1}(\lambda )\) is a diagonal matrix with entries \(\{ (\lambda ^{(k)}_1-\lambda )^{-1}, (\lambda ^{(k)}_2-\lambda )^{-1}, \ldots , (\lambda ^{(k)}_{n_k}-\lambda )^{-1}\}\).

\(\mathcal {A}\) generates a \(C_0\)-semigroup T(t), thus there exist constants \(M_1,\omega _0\) such that

$$\begin{aligned} \Vert T(t)\Vert \le M_1e^{(\omega _0+1)t}, \quad t\ge 0, \quad \text{ and } \quad \Vert \mathcal {R}(\mathcal {A},\lambda )\Vert \le \frac{M_1}{\mathrm{Re}\,\lambda -(\omega _0+1)}, \quad \mathrm{Re}\,\lambda >\omega _0+1. \end{aligned}$$
(12)

Using the Riesz basis property we can conclude the same for \(\Vert R(A_k,\lambda )\Vert \): there exists a constant \(M_2\) independent of k such that

$$\begin{aligned} \Vert R(A_k,\lambda )\Vert \le \frac{M_2}{\mathrm{Re}\,\lambda -(\omega _0+1)}, \quad \mathrm{Re}\,\lambda >\omega _0+1, \quad k\in \mathbb {Z}. \end{aligned}$$
(13)

To estimate \(\Vert A_k-\tilde{\lambda }_kI\Vert \), we decompose \(\Delta _k(\tilde{\lambda }_k)\) as follows:

$$\begin{aligned} \Delta _k(\tilde{\lambda }_k)=\sum _{j=1}^{n_k}\alpha ^{(k)}_j\Delta _k^{-1}(\hat{\lambda }^{(k)}_j), \end{aligned}$$
(14)

where \(\{\hat{\lambda }^{(k)}_j\}_{j=1}^{n_k}\) is any sequence of pairwise different complex numbers such that \(\hat{\lambda }^{(k)}_j\ne \lambda ^{(k)}_i\) for \(i,j=1,\ldots ,n_k\). We find the coefficients \(\alpha ^{(k)}_j, j=1,2,\ldots ,n_k; k\in \mathbb {Z}\), by Lemma 2.1; namely, we have

$$\begin{aligned} \alpha ^{(k)}_j=\left( \hat{\lambda }^{(k)}_j-\tilde{\lambda }_k+\sum _{p=1}^{n_k}(\lambda ^{(k)}_p-\hat{\lambda }^{(k)}_p) \right) \left( \lambda ^{(k)}_j-\hat{\lambda }^{(k)}_j \right) \prod _{p=1;p\ne j}^{n_k}\frac{\lambda ^{(k)}_p-\hat{\lambda }^{(k)}_j}{\hat{\lambda }^{(k)}_p-\hat{\lambda }^{(k)}_j}. \end{aligned}$$
(15)

We choose \(\hat{\lambda }^{(k)}_p:=i\mathrm{Im}\,\tilde{\lambda }_k+\omega _0+p+1\) for \(p=1,2,\ldots ,n_k,\, k\in \mathbb {Z}\), and observe that \(1\le |\hat{\lambda }^{(k)}_p-\hat{\lambda }^{(k)}_j|<n_k\le N\) for \(p\ne j\). Without loss of generality we assume that the radius of each set \(\sigma _k\) is uniformly bounded by a constant \(r>0\). Hence \(|\lambda ^{(k)}_p-\hat{\lambda }^{(k)}_j|\le N+L+2r\). Estimating each factor in (15) with the above bounds, we obtain

$$\begin{aligned} |\alpha ^{(k)}_j|\le (N+1)(2r+L+N)^{N+1}=:M(N,r,L). \end{aligned}$$
(16)

Now we estimate the norm of \(A_k-\tilde{\lambda }_kI\). From (10) and (14) we get

$$\begin{aligned} \Vert A_k-\tilde{\lambda }_kI\Vert \le \sum _{j=1}^{n_k}|\alpha ^{(k)}_j| \Vert P_k\Delta _k^{-1}(\hat{\lambda }^{(k)}_j)P^{-1}_k\Vert , \quad k \in \mathbb {Z}. \end{aligned}$$

Next we use (11) and inequality (13) with \(\lambda =\hat{\lambda }^{(k)}_j\) in the above to obtain

$$\begin{aligned} \Vert A_k-\tilde{\lambda }_kI\Vert \le \sum _{j=1}^{n_k}|\alpha ^{(k)}_j| \frac{M_2}{j}, \quad k \in \mathbb {Z}. \end{aligned}$$

Estimate (16) together with \(n_k\le N\) gives

$$\begin{aligned} \Vert A_k-\tilde{\lambda }_kI\Vert \le M_2 N M(N,r,L),\quad k \in \mathbb {Z}, \end{aligned}$$

that completes the proof. \(\square \)

Lemma 2.3

Let \(\mathcal {A}:D(\mathcal {A})\subset \mathcal {H} \rightarrow \mathcal {H}\) satisfy assumptions (B1)–(B3) and generate a \(C_0\)-semigroup on \(\mathcal {H}\). Then for sufficiently small \(\varepsilon >0\) and \(I_{\varepsilon }:=\{ k\in \mathbb {Z}: \sigma _k\cap \{\lambda :\mathrm{Re}\,\lambda >\omega _0-\varepsilon \}\ne \emptyset \}\), there exists a constant M independent of k such that

$$\begin{aligned} \Vert e^{A_kt}\Vert \le M e^{\omega _kt}(t^{N-1}+1), \quad t>0, \quad k\in I_{\varepsilon }, \end{aligned}$$

and

$$\begin{aligned} \Vert R(A_{k},\lambda )\Vert \le \frac{M}{(\mathrm{Re}\,\lambda -\omega _{k})^N}+\frac{M}{\mathrm{Re}\,\lambda -\omega _k}, \quad \mathrm{Re}\,\lambda >\omega _k, \quad k \in I_{\varepsilon }, \end{aligned}$$

where \(\omega _{0}=\sup \{\mathrm{Re}\,\lambda : \lambda \in \sigma ({\mathcal {A}})\}\), \(\omega _k=\max \{\mathrm{Re}\,\lambda : \lambda \in \sigma _{k}\}\), and \(A_k\) is the matrix of the restriction of the operator \(\mathcal {A}\) to the subspace \(V_k\).

Proof

We choose \(\varepsilon \) small enough so that the shifted families \(\sigma _k':=\sigma _k-\omega _k, k\in I_{\varepsilon }\), satisfy assumption (B1). Then the subspaces \(V_k, k\in I_{\varepsilon }\), constitute a Riesz basis of subspaces in the closure of the corresponding linear span, which can be extended to a Riesz basis of vectors (in this span) by choosing an orthonormal basis in each subspace. In this basis we consider the matrices \(A_k\) of the operators \(\mathcal {A}_k:=\mathcal {A}|_{V_k}\). We define a new operator \(\mathcal {B}\) on each subspace \(V_k\), namely we take \(\mathcal {B}_k:=\mathcal {B}|_{V_k}:=\mathcal {A}_k+(\omega _0-\omega _k)I\). It is easy to see that the operator \(\mathcal {B}\) still generates a \(C_0\)-semigroup and \(\omega _0(\mathcal {B})=\omega _0(\mathcal {A})=\omega _0\). Theorem 1 in [9] implies that

$$\begin{aligned} \Vert e^{\mathcal {B}t}\Vert \le M_2e^{\omega _0 t}(t^{N-1}+1), \quad t>0, \end{aligned}$$

for some constant \(M_2\). Hence for the operator \(e^{\mathcal {B}_kt}\) and its matrix \(e^{B_kt}\) we have a similar estimate

$$\begin{aligned} \Vert e^{B_k t}\Vert \le M_1e^{\omega _0 t}(t^{N-1}+1), \quad t>0, k\in I_{\varepsilon }, \end{aligned}$$

with a new constant \(M_1\). From the definition of \(\mathcal {B}_k\) and the last inequality we conclude that

$$\begin{aligned} \Vert e^{A_k t}\Vert \le M_1e^{\omega _k t}(t^{N-1}+1), \quad t>0, k\in I_{\varepsilon }, \end{aligned}$$

which proves the first assertion. Now we estimate the norm of the resolvent \(R(A_k,\lambda )\) using the Laplace transform of \(e^{A_kt}\), namely

$$\begin{aligned} R(A_k,\lambda )x=\int _0^{+\infty }e^{-\lambda t}e^{A_kt}xdt, \quad \mathrm{Re}\,\lambda >\omega _k. \end{aligned}$$

For the norms we get

$$\begin{aligned} \Vert R(A_k,\lambda )x\Vert \le \int _0^{+\infty }|e^{-\lambda t}|\cdot \Vert e^{A_kt}x\Vert dt\le \int _0^{+\infty }M_1e^{-(\mathrm{Re}\,\lambda -\omega _k) t} (t^{N-1}+1)\Vert x\Vert dt. \end{aligned}$$
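For completeness, setting \(a:=\mathrm{Re}\,\lambda -\omega _k>0\), the last integral can be evaluated explicitly:

$$\begin{aligned} \int _0^{+\infty }e^{-a t}(t^{N-1}+1)\,dt=\frac{(N-1)!}{a^{N}}+\frac{1}{a}. \end{aligned}$$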

This finally gives

$$\begin{aligned} \Vert R(A_k,\lambda )x\Vert \le \frac{M_2\Vert x\Vert }{(\mathrm{Re}\,\lambda -\omega _{k})^{N}}+\frac{M_2\Vert x\Vert }{\mathrm{Re}\,\lambda -\omega _{k}}, \quad \mathrm{Re}\,\lambda >\omega _{k}, \end{aligned}$$

where \(M_2\) is a constant independent of k. \(\square \)

Now we are ready to prove the main theorem.

Proof

Without loss of generality we assume that \(\omega _0=0\). To prove (a) we generate a basis in \(\mathcal {H}\) by taking the Riesz basis of subspaces and choosing an orthonormal basis in each of the subspaces. Obviously, such a basis is a Riesz basis, and we can consider the matrices \(R(A_k,\lambda )\) of the resolvent operators in this basis instead of the operators \(R(\mathcal {A}_k,\lambda )\) themselves. We prove that for some constant \(M_1>0\)

$$\begin{aligned} \sup _k \Vert R(A_k, is)\Vert \le M_1|s|^{N\alpha }, \quad s \rightarrow \infty . \end{aligned}$$
(17)

First, we split the set of all indices k into three subsets \(I_0(\varepsilon ):=\mathbb {Z}\setminus I_{\varepsilon }\), \(I_1(s,C,\varepsilon ):=\{k\in I_{\varepsilon }:|\tilde{\lambda }_k-is|<C\}\) and \(I_2(s,C,\varepsilon ):=\{k\in I_{\varepsilon }:|\tilde{\lambda }_k-is|\ge C\}\), where \(I_{\varepsilon }:=\{ k\in \mathbb {Z}: \sigma _k\cap \{\lambda :\mathrm{Re}\,\lambda >\omega _0-\varepsilon \}\ne \emptyset \}\). Second, we choose \(\varepsilon \) small enough to use Lemma 2.3. For \(k\in I_0\) the assertion is obvious, because the semigroup restricted to the corresponding subspaces is bounded by \(M_{\varepsilon }e^{-\frac{1}{2} \varepsilon t}\) (so the resolvent is uniformly bounded on the imaginary axis), and we need to prove it only for \(k\in I_{\varepsilon }\). Theorem 2.1 implies that \(\frac{1}{M}(A_k-isI)\) is close to \(\frac{1}{M}(\tilde{\lambda }_k-is)I\), i.e. \(\left\| \frac{1}{M}(A_k-isI) - \frac{1}{M}(\tilde{\lambda }_k-is)I\right\| \le 1, k\in \mathbb {Z}\). We estimate \(\Vert R(A_k,is)\Vert \) for \(k\in I_2(s,C,\varepsilon )\) using Statement 3.5 (see Appendix) for the family of matrices \(\frac{1}{M}(A_k-isI)\). We fix a constant C (independent of s) large enough to make sure that \(\frac{1}{M}|\tilde{\lambda }_k-is|\) satisfies the assumptions of Statement 3.5 for all \(k \in I_2(s,C,\varepsilon )\). Hence we have

$$\begin{aligned} \Vert R(A_k,is)\Vert \le \frac{C_1}{|\tilde{\lambda }_k-is|}\le \frac{C_1}{C} , \quad k\in I_2, \end{aligned}$$
(18)

where constant \(C_1\) is independent of k.

For \(k\in I_1(s,C,\varepsilon )\) we use Lemma 2.3,

$$\begin{aligned} \Vert R(A_k,\lambda )\Vert \le \frac{M}{|\mathrm{Re}\,\lambda -\omega _k|^N}+\frac{M}{|\mathrm{Re}\,\lambda -\omega _k|}, \quad \mathrm{Re}\,\lambda >\omega _k, \end{aligned}$$

where the constant M is independent of k. Taking \(\lambda =is\), noting that \(|\omega _k|<1\) for \(k\in I_{\varepsilon }\) and \(\varepsilon \) small, and using assumption (A), we get

$$\begin{aligned} \Vert R(A_k,is)\Vert \le \frac{2M}{|\omega _k|^N}\le \frac{2M}{\gamma ^N}|\mathrm{Im}\,\tilde{\lambda }_k|^{N\alpha } \le \frac{2M}{\gamma ^N}(|s|+C)^{N\alpha }\le M_2|s|^{N\alpha }, \quad k\in I_1. \end{aligned}$$
(19)

Combining (18) and (19) we get (17) which proves the assertion (a).

To prove (b) it suffices to show

$$\begin{aligned} \sup _{k\in \mathbb {Z}}\Vert T_k(t)\mathcal {A}_k^{-n}\Vert \le M t^{N-1-\frac{n}{\alpha }} , \quad t>1. \end{aligned}$$
(20)

For \(\varepsilon >0\) small enough and any \(C>0\) it is easy to see that

$$\begin{aligned} \Vert T_k(t)\mathcal {A}_k^{-n}\Vert \le M_{\varepsilon }e^{-\frac{1}{2} \varepsilon t} \le M_{\varepsilon }' t^{N-1-\frac{n}{\alpha }}, \quad t>1, k\in I_0\cup I_1(0,C,\varepsilon ). \end{aligned}$$
(21)

For \(k\in I_2(0,C,\varepsilon )\) we use Lemma 2.3 and obtain

$$\begin{aligned} \Vert T_k(t)\mathcal {A}_k^{-n}\Vert \le M_1 e^{\omega _k t}t^{N-1}\Vert A_k^{-1}\Vert ^n, \quad t>1, k\in I_2(0,C,\varepsilon ). \end{aligned}$$

For \(|\tilde{\lambda }_k|\) large enough we use Statement 3.5 to estimate \(\Vert A_k^{-1}\Vert \) in the same way as in the proof of assertion (a) i.e. \(\Vert A_k^{-1}\Vert \le \frac{M}{|\tilde{\lambda }_k|}\). Hence we obtain

$$\begin{aligned} \Vert T_k(t)\mathcal {A}_k^{-n}\Vert \le M_2 e^{\omega _k t}t^{N-1}|\tilde{\lambda }_k|^{-n} , \quad t>1, k\in I_2(0,C,\varepsilon ). \end{aligned}$$

From assumption (A) we have \(|\omega _k|=|\mathrm{Re}\,\tilde{\lambda }_k|\ge \gamma |\mathrm{Im}\tilde{\lambda }_k|^{-\alpha }\ge \gamma |\tilde{\lambda }_k|^{-\alpha }\), so

$$\begin{aligned} \Vert T_k(t)\mathcal {A}_k^{-n}\Vert \le M_2 e^{\omega _k t}t^{N-1}|\omega _k|^{\frac{n}{\alpha }}\le M_2 e^{\omega _k t}t^{N-1}|\omega _k t|^{\frac{n}{\alpha }}t^{-\frac{n}{\alpha }}, \quad t>1. \end{aligned}$$

For each \(\beta >0\) the function \(e^{-x}x^{\beta }, x\ge 0\), is bounded (its maximum equals \((\beta /e)^{\beta }\), attained at \(x=\beta \)), hence

$$\begin{aligned} \Vert T_k(t)\mathcal {A}_k^{-n}\Vert \le M_3 t^{N-1}t^{-\frac{n}{\alpha }}, \quad t>1, k\in I_2, \end{aligned}$$
(22)

where \(M_3\) is a new constant depending on \(n, \alpha \). Now (21) and (22) imply (20) and assertion (b) is proven. \(\square \)
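As a simple numerical illustration of assertion (b) (a toy example of our own, not part of the proof; numpy is assumed), consider the diagonal case \(N=1\), \(\omega _0=0\) with eigenvalues \(\lambda _k=-|k|^{-\alpha }+ik\); then \(\Vert T(t)\mathcal {A}^{-n}\Vert =\sup _k e^{-|k|^{-\alpha }t}|\lambda _k|^{-n}\), and the predicted decay rate is \(t^{-n/\alpha }\):

```python
import numpy as np

# Toy diagonal generator (N = 1, omega_0 = 0): eigenvalues lambda_k = -|k|^(-alpha) + i*k.
# Evaluate sup_k exp(-|k|^(-alpha) t) * |lambda_k|^(-n) over a large range of k
# and compare it with the rate t^(-n/alpha) predicted by Theorem 1.1(b).
alpha, n = 2.0, 1
k = np.arange(1, 200_000, dtype=float)
lam_abs = np.hypot(k ** (-alpha), k)      # |lambda_k|

for t in [1e1, 1e2, 1e3, 1e4]:
    norm = np.max(np.exp(-k ** (-alpha) * t) * lam_abs ** (-n))
    print(f"t = {t:8.0f}   sup_k ... = {norm:.3e}   t^(-n/alpha) = {t ** (-n / alpha):.3e}")
```

The two printed columns stay proportional as t grows, in agreement with the theorem.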

Proof of Theorem 1.2

Theorem 1.1 shows that condition (A) is sufficient for polynomial stability. Necessity: let T(t) be polynomially stable. We choose one eigenvalue from each family \(\sigma _k\) (say \(\lambda _k\)) and a corresponding normalized eigenvector \(\phi _k\). Consider the subspace \(S=\overline{\text{ span }\,\{\phi _k:k\in \mathbb {Z}\}}\). The vectors \(\phi _k\) form a Riesz basis of S, so S is T-invariant and the semigroup T(t) is bounded on S. Applying the results of Sect. 3 from [2] to the semigroup T(t) restricted to S, we see that the family \(\{\lambda _k\}_{k\in \mathbb {Z}}\) satisfies condition (A) with some positive constants \(\gamma , \alpha \).

Since the eigenvalues \(\lambda _k\) were chosen arbitrarily, the whole spectrum satisfies condition (A) with some positive constants \(\gamma , \alpha \). \(\square \)

3 Stability and Stabilizability of Neutral Type Equations

Following [13], we consider delay systems of neutral type of the form (3), which can be represented in the operator form (4), with the generator \(\mathcal {A}\) given by (5)–(6). Our goal is to investigate the asymptotic behavior of the solutions of the above equation, in particular their stability. The stability is closely related to the location of the spectrum of the operator \(\mathcal {A}\), thus we recall some important properties of \(\mathcal {A}\) (for more details see [13]).

We denote the eigenvalues of the matrix \(A_{-1}\) by \(\mu _m, m=1,\ldots , \ell \) (\(|\mu _1|\ge |\mu _2|\ge \cdots \ge |\mu _{\ell }|\)), and their multiplicities by \(p_m\) (\( \sum {p_m}=n\)). Without loss of generality we assume that if \(|\mu _1|=|\mu _2|=\cdots =|\mu _{\ell _0}|\) then \(p_1\ge p_2\ge \cdots \ge p_{\ell _0}\). The spectrum of \(\mathcal {A}\) cannot be determined explicitly in general, but it is close to the spectrum of the operator \(\tilde{\mathcal {A}}\) obtained by putting \(A_2=A_3=0\) in (5). The eigenvalues of \(\tilde{\mathcal {A}}\) are the complex logarithms of \(\mu _m\) and zero, i.e.

$$\begin{aligned} \sigma (\tilde{\mathcal {A}})= & {} \{ \tilde{\lambda }_m^{(k)}=\ln |\mu _m|+i(\arg \mu _m+2k\pi ),\mu _m\in \sigma (A_{-1}), m=1,\ldots ,\ell ;k \in \mathbb {Z} \}\\&\cup \{0\}. \end{aligned}$$

We denote \(\max \{\mathrm{Re}\,\lambda : \lambda \in \sigma (\tilde{\mathcal {A}})\}\) by \(\tilde{\omega }\) and \(\sup \{\mathrm{Re}\,\lambda : \lambda \in \sigma (\mathcal {A})\}\) by \(\omega \).

The spectrum of \(\mathcal {A}\) consists of eigenvalues only. Almost all of them lie close to \(\tilde{\lambda }_m^{(k)}\). More precisely, for k large enough they are contained in the discs \(L_m^{(k)}\) centered at \(\tilde{\lambda }_m^{(k)}\) with radii \(r_k \rightarrow 0\) (see [14, Theorem 4]). The sum of the multiplicities of the eigenvalues of \( \mathcal {A}\) lying in each disc centered at \(\tilde{\lambda }_m^{(k)}\) equals the multiplicity of \(\tilde{\lambda }_m^{(k)}\) and \(\mu _m\), that is \(p_m\). We denote the eigenvalues of the operator \(\mathcal {A}\) by \(\lambda _{m,i}^{(k)}, k\in \mathbb {Z}; m=1,\ldots ,\ell \), and we have \(\{\lambda _{m,i}^{(k)}\}_{i=1}^{p_m}\subset L_m^{(k)}, |k|>N; m=1,\ldots ,\ell \). If there exist eigenvalues of \(\mathcal {A}\) with real part \(\omega =\sup _{\lambda \in \sigma (\mathcal {A})} \mathrm{Re}\,\lambda \), we denote their maximal multiplicity by \(p_0\), and we set \(p_0=0\) if there are no such eigenvalues. Let us introduce the \(\mathcal {A}\)-invariant subspaces \(V_m^{(k)}=P_m^{(k)}\mathcal {M}_2\), where \(P_m^{(k)}x=\frac{1}{2\pi i} \int _{L_m^{(k)}} R(\mathcal {A},\lambda )xd\lambda \) are the Riesz projections, \(m=1,\ldots ,\ell , k \in \mathbb {Z}\). The sequence of \(p_m\)-dimensional subspaces \(V_m^{(k)}, m=1,\ldots ,\ell , |k| \ge N \), together with some \(2(N+1)n\)-dimensional subspace \(W_N\), constitutes an \(\mathcal {A}\)-invariant Riesz basis of subspaces of the space \(\mathcal {M}_2\).
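As an illustration (a sketch of our own with an arbitrary sample matrix \(A_{-1}\); numpy assumed), the approximate eigenvalue lattice \(\tilde{\lambda }_m^{(k)}=\ln |\mu _m|+i(\arg \mu _m+2k\pi )\) is easy to compute numerically:

```python
import numpy as np

# Sample (invertible) matrix A_{-1}; its eigenvalues mu_m determine the
# eigenvalues of tilde-A, around which the eigenvalues of A cluster for large |k|.
A_minus1 = np.array([[0.5, 1.0],
                     [0.0, -0.8]])
mu = np.linalg.eigvals(A_minus1)

# Points ln|mu_m| + i*(arg(mu_m) + 2*k*pi) for a few values of k.
for m in mu:
    for k in range(-2, 3):
        lam = np.log(np.abs(m)) + 1j * (np.angle(m) + 2 * np.pi * k)
        print(f"mu_m = {m}, k = {k:+d}: lambda = {lam.real:+.4f} {lam.imag:+.4f}i")
```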

Taking the above into account, we get \(\omega \ge \tilde{\omega }\), and there are a few possibilities for the location of the spectrum \(\sigma (\mathcal {A})\):

  1. (a)

    \(\omega >\tilde{\omega }\), which implies that \(p_0>0\);

  2. (b)

    \(\omega =\tilde{\omega }\) and \(p_0=0\).

  3. (c)

    \(\omega =\tilde{\omega }\) and \(0<p_0<q\), where \(q\ge 1\) is the maximal size of a Jordan block of the matrix \(A_{-1}\) corresponding to the eigenvalue \(\mu _1\);

  4. (d)

    \(\omega =\tilde{\omega }\) and \(q \le p_0<p_1\), where \(p_1\) is the multiplicity of \(\mu _1\);

  5. (e)

    \(\omega =\tilde{\omega }\) and \(p_0\ge p_1\).

In the cases (a) and (e) the asymptotic behavior of the corresponding semigroup is determined by the eigenvalues with maximal real part (equal to \(\omega \)) and multiplicity \(p_0\), i.e.

$$\begin{aligned} \Vert e^{\mathcal {A}t}\Vert \le M e^{\omega t}t^{p_0-1}, \quad t>1. \end{aligned}$$

In the cases (b)–(d) we have the following estimate for the norm of the semigroup (see [16] for more details):

$$\begin{aligned} me^{\tilde{\omega } t}t^{q-1}\le \Vert e^{\mathcal {A}t}\Vert \le M e^{\tilde{\omega } t}t^{p_1-1}, \quad t>1. \end{aligned}$$
(23)

Moreover, in the cases (b) and (c) the semigroup does not have any maximal asymptotics (even when \(q=p_1\)), i.e.

$$\begin{aligned} \lim _{t \rightarrow +\infty }\frac{\Vert e^{\mathcal {A}t}x\Vert }{\Vert e^{\mathcal {A}t}\Vert }=0, \quad x\in \mathcal {M}_2, \end{aligned}$$
(24)

while in the case (d) the existence of a maximal asymptotics is not determined by our assumptions.

Now we discuss the property of asymptotic stability in the above cases. A necessary condition for stability is \(\omega \le 0\), and if \(\omega < 0\) then we even have exponential stability; thus only the case \(\omega = 0\) is interesting. In this case stability cannot occur in (a), (c), (d), (e), because we can point out an initial state for which the solution does not decrease or, at least, such an initial state exists (by the Banach–Steinhaus theorem). We focus on the case (b), where \(\omega = 0\) and \(p_0=0\), and discuss the stability. If \(q=p_1=1\), then (23) and the lack of a maximal asymptotics (24) imply stability. For \(q>1\) the lack of stability is a consequence of inequality (23), and it is caused by the families of eigenvalues approaching the imaginary axis from the left (not by a single eigenvalue). If it is possible to describe the rate of this approach by the inequality

$$\begin{aligned} \mathrm{Re}\,\lambda \le - \frac{C}{|\mathrm{Im}\,\lambda |^{\alpha }}, \quad \lambda \in \sigma (\mathcal {A}), \end{aligned}$$
(25)

where \(C,\alpha \) are some real positive constants, then we are able to find a subset of initial states for which the system is stable. Using Theorem 1.1 we obtain a sufficient condition for the stability of the system on some non-closed subset, namely we have

Theorem 3.1

Let us consider system (4). If \(\mathrm{Re}\,\lambda < 0\) for all \(\lambda \in \sigma (\mathcal {A})\) and \(\mathrm{Re}\,\lambda \le - \frac{C}{|\mathrm{Im}\lambda |^{\alpha }}\) for all but finitely many \(\lambda \in \sigma (\mathcal {A})\), where \(C,\alpha \) are some real positive constants, then for any \(n\in \mathbb {N}\) there exists \(M>0\) with

$$\begin{aligned} \Vert e^{\mathcal {A}t}\mathcal {A}^{-n}x\Vert \le Mt^{p-1-\frac{n}{\alpha } }\Vert x\Vert , \quad t>1, \quad x\in \mathcal {M}_2. \end{aligned}$$

Proof of Theorem 3.1

The operator \(\mathcal {A}\) satisfies assumptions (B1)–(B3) and condition (A), so the assertion follows directly from Theorem 1.1. \(\square \)

Corollary 3.1

For the system (4) satisfying the assumptions of Theorem 3.1, and for any constant \(\beta >0\), there exist \(n_0\) large enough and a constant \(M>0\) such that

$$\begin{aligned} \Vert e^{\mathcal {A}t}x\Vert \le \frac{M\Vert x\Vert _{D(\mathcal {A}^{n_0})}}{t^{\beta }}, \quad t>1, \quad x\in D(\mathcal {A}^{n_0}), \end{aligned}$$

where \(\Vert \cdot \Vert _{D(\mathcal {A}^{n_0})}\) denotes the norm \(\Vert x\Vert _{D(\mathcal {A}^{n_0})}=\Vert \mathcal {A}^{n_0}x\Vert +\Vert x\Vert \).
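This follows from Theorem 3.1: it suffices to take any integer \(n_0\ge \alpha (\beta +p-1)\) and apply the theorem to \(\mathcal {A}^{n_0}x\); then \(t^{p-1-\frac{n_0}{\alpha }}\le t^{-\beta }\) for \(t>1\) and \(\Vert \mathcal {A}^{n_0}x\Vert \le \Vert x\Vert _{D(\mathcal {A}^{n_0})}\).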

Now, following [14], we consider regular feedback stabilizability of the system

$$\begin{aligned} \dot{z}(t)=A_{-1}\dot{z}(t-1) + \int _{-1}^0A_2(\theta )\dot{z}(t+\theta )d\theta + \int _{-1}^0 A_3(\theta )z(t+\theta )d\theta +Bu, \end{aligned}$$
(26)

where \(A_{-1}\) is an \(n \times n\) invertible complex matrix, \(A_2\) and \(A_3\) are \(n \times n\) matrices of functions from \(L_2(-1,0)\), B is an \(n \times p\) complex matrix, and \(z(t+\cdot )\in H^1(-1,0;\mathbb {C}^n)\). It was shown in [7, 11] that for any \(u\in L_2\) the system (26) has a unique solution \(z(t+\cdot )\in H^1(-1,0;\mathbb {C}^n)\). We say that the system (26) is asymptotically stabilizable if there exists a linear feedback control \(u(t)=F(z_t(\cdot ))=F(z(t+\cdot ))\) such that the system (26) becomes asymptotically stable. If, in addition, the asymptotic stabilizability is achieved by a feedback F which is bounded (as an operator acting on the space \(H^1\)), then we call it regular asymptotic stabilizability. In our case any regular feedback is of the form (see [14] for more details)

$$\begin{aligned} u(t)=F(z_t(\cdot ))=F(z(t+\cdot ))=\int _{-1}^0F_2(\theta )\dot{z}(t+\theta )d\theta + \int _{-1}^0 F_3(\theta )z(t+\theta )d\theta , \end{aligned}$$
(27)

where \(F_2(\cdot ),F_3(\cdot )\in L_2([-1,0],\mathbb {C}^{n\times p})\). Equation (26) can be rewritten in an operator form, similarly to equation (3),

$$\begin{aligned} \dot{x}=\mathcal {A}x+\mathcal {B}u, \qquad x \in \mathcal {M}_2, \end{aligned}$$
(28)

where the operator \(\mathcal {A}\) is given by (5)–(6), and \(\mathcal {B}u=\left( {\begin{array}{c}Bu\\ 0\end{array}}\right) \). Taking (27) into account we can rewrite equation (28) in the form

$$\begin{aligned} \dot{x}=(\mathcal {A}+\mathcal {BF})x, \qquad x \in \mathcal {M}_2. \end{aligned}$$
(29)

We notice that \(\mathcal {A+BF}\) has the same structure as the operator \(\mathcal {A}\). The operator \(\mathcal {BF}\) affects only the matrices \(A_2\), \(A_3\), so the operator \(\mathcal {A+BF}\) is of the same form as \(\mathcal {A}\) with only \(A_2\) and \(A_3\) changed. In particular, the operator \(\mathcal {A+BF}\) generates a \(C_0\)-group and its domain stays unchanged (because the operator \(\mathcal {BF}\) does not affect the matrix \(A_{-1}\)). In the case when the eigenvalues of the matrix \(A_{-1}\) with maximal modulus, say \(\mu _m, m=1,\ldots , \ell _0\), are distinct and simple, the corresponding eigenvalues of \(\mathcal {A}\) (and of \(\mathcal {A+BF}\)), say \(\{\lambda _k\}_{k\in \mathbb {Z}}\), are also simple and lie in some disjoint discs with square-summable radii \(r_k\). It was proven (see [14, Theorem 8] and [18, 19]) that in such a case, for any choice of a complex sequence \(\{\hat{\lambda }_k\}_{k\in \mathbb {Z}}\) in the same discs, there exists a feedback \(\mathcal {F}\) of the form (27) such that the numbers \(\hat{\lambda }_k\) are eigenvalues of \(\mathcal {A+BF}\). In other words, the eigenvalues of \(\mathcal {A}\) can be moved by the feedback to any points of the corresponding discs. In particular, if the centers of the discs are on the imaginary axis, then the eigenvalues of \(\mathcal {A}\) can be moved to the open left half-plane and the \(C_0\)-group generated by \(\mathcal {A+BF}\) will be stable (i.e. \(\Vert e^{(\mathcal {A+BF})t}\Vert \le M\)). The following statement describes the above situation.

Statement 3.2

Let us consider equation (28) with the additional assumptions that

  1. (a)

    the spectrum of \(\mathcal {A}\) consists of simple eigenvalues only, say \(\sigma (\mathcal {A})=\{\lambda _k:k\in \mathbb {Z}\}\),

  2. (b)

    the linear span of the eigenvectors of \(\mathcal {A}\) is dense in \(\mathcal {H}\),

  3. (c)

    the eigenvalues \(\lambda _k\) lie in disjoint balls \(B(x_k,r_k)\) centered at the points \(x_k=i(kd+d_0), 0\le d_0<d\), of the imaginary axis, with radii \(r_k\) satisfying \(\sum r_k^2<\infty \),

  4. (d)

    the vector \(b\in \mathcal {H}\) is not orthogonal to any of the eigenvectors \(\phi _k\) of the operator \(\mathcal {A}^*\).

Then for any \(\alpha >\frac{1}{2}\) there exists a regular feedback \(\mathcal {F}\) of the form (27) such that the group \(e^{(\mathcal {A}+b\mathcal {F})t}\) is:

  1. (i)

    stable, that is \(\Vert e^{(\mathcal {A}+b\mathcal {F})t}\Vert <M\) for some constant \(M>0\),

  2. (ii)

    polynomially stable, that is \(\Vert e^{(\mathcal {A}+b\mathcal {F})t}\mathcal {A}^{-1}\Vert \le M_{\alpha }t^{-\frac{1}{\alpha }}\) for some constant \(M_{\alpha }>0\).

Proof of Statement 3.2

By [14, Theorem 8, Lemma 13] there exists a feedback \(\mathcal {F}\) which shifts the eigenvalues \(\lambda _k\) to the points \(\hat{\lambda }_k=-r'_k+(kd+d_0)i\), where \(r'_k=\max \{r_k, Ck^{-\alpha }\}\). Then all \(\hat{\lambda }_k\) are in the open left half-plane and assertion (i) follows from [9, Theorem 1]. To prove (ii), we check that the eigenvalues \(\hat{\lambda }_k\) satisfy condition (A’) with the constants \(\omega _0=0, \alpha ,\gamma =Cd^{\alpha }\), and use Theorem 1.1. \(\square \)

In the case of multiple eigenvalues \(\mu _m, m=1,\ldots , \ell _0\), of the matrix \(A_{-1}\), even if the eigenvalues of \(\mathcal {A}\) can be moved to the open left half-plane, stability in general cannot be obtained, because the corresponding group can be unbounded (see [16]). However, if we assume that we are able to move the eigenvalues within each disc using a proper feedback (27), in the same way as in the case of simple eigenvalues, then using Theorem 1.1 we can obtain polynomial stability of the corresponding group. To illustrate this idea we focus on a special class of equations (26).

Let us denote the identity matrix in \(\mathbb {C}^n\) by \(I_n\) and the Jordan block with eigenvalue 1 of size n by \(J_n\), i.e. \(J_n=[a_{p,q}]\) with \(a_{p,p}=1, p=1,\ldots ,n\), \(a_{p,p+1}=1, p=1,\ldots ,n-1\), and all other entries equal to 0. We consider equation (26) with \(A_{-1}=I_n\), \(A_2=\tilde{f}(\theta )J_n\), \(A_3=\tilde{g}(\theta )J_n\), where \(\tilde{f},\tilde{g} \in L_2(-1,0)\) are fixed. We take \(B=J_n\) and the control u(t) of the form (27), i.e.

$$\begin{aligned} u(t)=\int _{-1}^0f_2(\theta )\dot{z}(t+\theta )d\theta + \int _{-1}^0 f_3(\theta )z(t+\theta )d\theta , \end{aligned}$$

where \(f_2,f_3 \in L_2(-1,0;\mathbb {C})\). Taking \(f=\tilde{f}+f_2, \quad g=\tilde{g}+f_3\), we rewrite equation (26) in the form

$$\begin{aligned} \dot{z}(t)= I_n\dot{z}(t-1) +J_n\left( \int _{-1}^0f(\theta )\dot{z}_t(\theta )d\theta + \int _{-1}^0 g(\theta )z_t(\theta )d\theta \right) . \end{aligned}$$
(30)

The corresponding characteristic function \(\Delta \) is of the form

$$\begin{aligned} \Delta (\lambda ) =\det \left[ I_n \left( \lambda e^{-\lambda }+ \lambda \int _{-1}^{0}f(s)e^{\lambda s}\text{ d }s+\int _{-1}^{0}g(s)e^{\lambda s}\text{ d }s -\lambda \right) \right. \\ \left. \quad +\,(J_n-I_n) \left( \lambda \int _{-1}^{0}f(s)e^{\lambda s}\text{ d }s+\int _{-1}^{0}g(s)e^{\lambda s}\text{ d }s -\lambda \right) \right] , \end{aligned}$$

and it equals zero if and only if

$$\begin{aligned} \lambda e^{-\lambda }+ \lambda \int _{-1}^{0}f(s)e^{\lambda s}\text{ d }s+\int _{-1}^{0}g(s)e^{\lambda s}\text{ d }s-\lambda =0. \end{aligned}$$
(31)

It is proven (see [13]) that the roots of this equation are asymptotically close to the roots of the equation \(\lambda (e^{\lambda }-1)=0\). More precisely, the roots of equation (31) lie in discs centered at \(\lambda _k=2k\pi i\) with square-summable radii \(r_k\). For the scalar version of equation (30) (i.e. \(n=1\)), Theorem 8 in [14] implies that for any choice of a complex sequence \(\tau _k\) in the above discs there exist functions \(f,g\in L_2\) such that the numbers \(\tau _k\) are roots of equation (31). Moreover, equation (31) does not depend on n, thus the same functions f, g move the roots in the same way in the general case (\(n>1\)). Now we choose \(\tau _k=2k\pi i-\frac{1}{|k|}\), \(k\ne 0\), which means that there exist functions f, g such that the numbers \(\tau _k\) are eigenvalues of the corresponding operator \(\mathcal {(A+BF)}\), whose eigenvalues are contained in the open left half-plane and satisfy (25) with \(C=2\pi ,\alpha =1\). Nevertheless, the system (28) cannot be stable, because the corresponding group is not bounded, i.e.

$$\begin{aligned} \Vert e^{\mathcal {(A+BF)}t}\Vert \ge M t^{n-1},\quad t>1. \end{aligned}$$

However, our paper provides tools to study the polynomial stability of the above unbounded group; in particular, by Theorem 3.1 we obtain that sufficiently regular solutions tend to zero polynomially fast. Namely, we have the following

Statement 3.3

We consider the control system (28) with a feedback control of the form (27). We fix \(n\in \mathbb {N}_+\), \(A_{-1}=I_n\), \(A_2=\tilde{f}(\theta )J_n\), \(A_3=\tilde{g}(\theta )J_n\), \(F_2=f_2(\theta )J_n\), \(F_3=f_3(\theta )J_n\), \(B=J_n\), where \(\tilde{f},\tilde{g}\) are arbitrary functions from \(L_2(-1,0; \mathbb {C})\) and \(J_n\) is the Jordan block with eigenvalue 1 of size n. Then there exist functions \(f_2, f_3\in L_2(-1,0; \mathbb {C})\) and a constant \(M>0\) such that for any \(k\in \mathbb {N}\)

$$\begin{aligned} \Vert e^{\mathcal {(A+BF)}t}\mathcal {(A+BF)}^{-(n+k-1)}\Vert \le M t^{-k},\quad t>1, \end{aligned}$$

or equivalently

$$\begin{aligned} \Vert e^{\mathcal {(A+BF)}t}x\Vert \le Mt^{-k}\Vert x\Vert _{D\left( \mathcal {A}^{n+k-1}\right) }, \quad t>1, x\in D\left( \mathcal {A}^{n+k-1}\right) . \end{aligned}$$