On Asymptotic Estimation of a Discrete Type $C_0$-Semigroups on Dense Sets: Application to Neutral Type Systems

We consider an abstract Cauchy problem for a certain class of linear differential equations in a Hilbert space. We obtain a criterion for stability on some dense subsets of the state space of the $C_0$-semigroup in terms of the location of the eigenvalues of its infinitesimal generator (so-called polynomial stability). We apply this result to the analysis of stability and stabilizability of a special class of neutral type systems with distributed delay.


Introduction
An important problem in the theory of differential equations is to determine the asymptotic behavior of solutions. One of the main issues in this topic concerns stability. In the case of finite dimensional linear systems, (exponential or asymptotic) stability of a system is equivalent to the fact that all eigenvalues lie in the open left half-plane. For linear equations in an infinite-dimensional space the problem of stability is much more complicated. In particular, a system can be asymptotically stable even if it possesses a point of spectrum on the imaginary axis (see Arendt and Phong [8], Sklyar and Shirman [17]). On the other hand, the system may be unstable even when its spectrum is contained in the open left half-plane, while the eigenvalues approach the imaginary axis. Such a situation may occur for hyperbolic equations or delay equations of neutral type (e.g. Rabah et al. [13,14]).
One of the important characteristics describing the asymptotic behavior of solutions of a linear differential equation is the growth bound
$$\omega_0 = \omega_0(T) = \lim_{t\to+\infty} \frac{1}{t}\ln\|T(t)\|$$
of the corresponding C_0-semigroup T(t). In the context of stability the critical situation is when ω_0 = 0. In this case the semigroup cannot be exponentially stable (‖T(t)‖ does not tend to 0), but it still may be asymptotically stable (‖T(t)x‖ → 0 for every x ∈ X). Then the solutions may tend to zero arbitrarily slowly. However, the solutions with sufficiently regular initial states (for example, from the domain of the generator) tend to zero uniformly, i.e. ‖T(t)A^{-1}‖ → 0. In particular, Batty [3,4] and Phong [10] (cf. Sklyar [15]) proved that for a bounded C_0-semigroup T(t) on a Banach space with generator A the following holds: if σ(A) ∩ iℝ = ∅, then ‖T(t)A^{-1}‖ → 0 as t → +∞. This means that all solutions of the abstract Cauchy problem with initial condition in the domain of A tend uniformly to zero not slower than the function ‖T(t)A^{-1}‖. This fact leads to the concept of polynomial stability:

Definition 1.1 [2] We say the semigroup T(t) generated by A is polynomially stable if there exist constants α, β, C > 0 such that
$$\|T(t)(dI-A)^{-\alpha}\| \le \frac{C}{t^{\beta}}, \qquad t \ge 1, \tag{1}$$
for some d ∈ ρ(A).
It is easy to see that the above definition does not depend on the choice of d. Hence if 0 ∈ ρ(A), then (1) is equivalent to
$$\|T(t)A^{-\alpha}\| \le \frac{C}{t^{\beta}}, \qquad t \ge 1.$$
The rate of decay of solutions with an initial condition from the set D(A^{-1}), and in general from the set D(A^{-α}), α > 0, is closely related to the asymptotic behavior of the resolvent R(A, λ) on the imaginary axis (Bátkai et al. [2]; Borichev and Tomilov [5]). In particular, it is shown in [5, Theorem 2.4] that for a bounded C_0-semigroup T(t) on a Hilbert space H and some positive, fixed constant α, the following conditions are equivalent:

(i) ‖R(A, is)‖ = O(|s|^α) as |s| → ∞;
(ii) ‖T(t)A^{-1}‖ = O(t^{-1/α}) as t → ∞.

It is shown in [2] that the rate of growth of the resolvent on the imaginary axis implies some restrictions on the location of the spectrum. In particular, if condition (i) holds for the generator of a bounded C_0-semigroup acting in a Banach space, then the spectrum of the generator satisfies the following condition (see [2, Propositions 3.6, 3.7]):
$$\operatorname{Re}\lambda \le -\frac{\gamma}{|\operatorname{Im}\lambda|^{\alpha}}$$
for some real, positive constant γ and small values of |Re λ|. However, in the general case, knowledge of the location of the spectrum is not enough to determine the asymptotic behavior of the resolvent and/or the semigroup. At the same time, it is also shown in [2, Proposition 4.1] that if the generator A of a bounded semigroup is a normal operator in a Hilbert space with spectrum σ(A) in the open left half-plane, then condition (ii) is equivalent to (A).
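The interplay between resolvent growth and decay on regularized initial data can be observed in a toy model. The following sketch is not taken from the paper: it assumes a hypothetical normal (diagonal) generator on ℓ² with eigenvalues λ_k = −1/k + ik, for which both norms can be evaluated by direct maximization over k.

```python
import math

# Hypothetical diagonal (normal) generator on l^2: A e_k = lam_k e_k with
# lam_k = -1/k + i*k, so T(t) e_k = exp(lam_k * t) e_k.  Then
#   ||T(t)||        = sup_k exp(t * Re lam_k)            (no uniform decay),
#   ||T(t) A^{-1}|| = sup_k exp(t * Re lam_k) / |lam_k|  (polynomial decay).
KMAX = 10**5

def norm_T(t):
    return max(math.exp(-t / k) for k in range(1, KMAX + 1))

def norm_T_A_inv(t):
    return max(math.exp(-t / k) / math.hypot(1.0 / k, k) for k in range(1, KMAX + 1))

# The plain semigroup norm stays close to 1, while the regularized norm
# decays like 1/t, matching alpha = 1 in the Borichev-Tomilov equivalence.
assert norm_T(1000.0) > 0.9
assert 8.0 < norm_T_A_inv(100.0) / norm_T_A_inv(1000.0) < 12.0
```

The maximum is attained near k ≈ t, giving ‖T(t)A^{-1}‖ ≈ 1/(et); since ‖R(A, is)‖ grows like |s| in this model (α = 1), this matches the rate t^{-1/α} of condition (ii).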
In this paper we conduct an analysis of polynomial stability for a certain class of not necessarily bounded discrete semigroups acting in a Hilbert space, extending the results of [2,5] to this class. Namely, we consider the class of semigroups whose generator has spectrum split into a family of finite, separated sets, and whose corresponding eigenspaces are finite dimensional and form a Riesz basis. We give an estimate of the asymptotic behavior of these semigroups on dense sets (like D(A^{-α})) depending on the asymptotic closeness of the eigenvalues to the vertical line Re λ = ω_0. In particular, in the case of bounded semigroups of our class we show that condition (A) is equivalent to (iii). The class of semigroups mentioned above was considered earlier in the paper of Miloslavskii [9], where an estimate of the semigroup norm was obtained (see [9, Theorem 1]).
Such semigroups appear naturally in the analysis of delay equations of neutral type. Following [12,13] we consider the equation
$$\dot z(t) = A_{-1}\dot z(t-1) + \int_{-1}^{0} A_2(\theta)\dot z(t+\theta)\,d\theta + \int_{-1}^{0} A_3(\theta) z(t+\theta)\,d\theta, \tag{3}$$
where A_{-1} is an n × n invertible complex matrix, and A_2 and A_3 are n × n matrices of functions from L_2(−1, 0). Eq. (3) can be rewritten in the operator form
$$\dot x(t) = A x(t), \qquad x(t) = (y(t), z_t(\cdot)), \tag{4}$$
where M_2 = C^n × L_2(−1, 0; C^n), the operator A is then given by
$$A(y, z(\cdot)) = \Bigl(\int_{-1}^{0} A_2(\theta)\dot z(\theta)\,d\theta + \int_{-1}^{0} A_3(\theta) z(\theta)\,d\theta,\ \dot z(\cdot)\Bigr), \tag{5}$$
where z_t(·) = z(t + ·) and the domain of A is as follows:
$$D(A) = \{(y, z(\cdot)) : z \in H^1(-1, 0; \mathbb{C}^n),\ y = z(0) - A_{-1} z(-1)\}. \tag{6}$$
This model of neutral type equation was introduced by Burns et al. [6]. The complete spectral analysis of the operator (5)-(6) was given in [13]. In particular, it was shown that the operator A is the generator of a discrete C_0-group whose spectrum consists of eigenvalues only, which lie asymptotically close to some vertical lines and can be grouped into finite, separated families. The Riesz projections corresponding to these families generate a Riesz basis of subspaces in the space M_2. We generalize these properties and consider the following abstract class of operators. Let A : D(A) ⊂ H → H be the generator of a C_0-semigroup in a Hilbert space H. We assume that

(B1) the spectrum decomposes as σ(A) = ⋃_{k∈ℤ} σ_k, where the sets σ_k are pairwise disjoint and uniformly separated;
(B2) sup_{k∈ℤ} dim P_k H = N < ∞, where P_k is the spectral projection corresponding to σ_k;
(B3) the subspaces V_k := P_k H, k ∈ ℤ, constitute a Riesz basis of subspaces.
Note that condition (B2) implies that the families σ_k must be finite, with #σ_k ≤ N for all k ∈ ℤ. It turns out (see Xu and Yung [20]; Zwart [21]) that in the case when A generates a C_0-group (not only a C_0-semigroup) satisfying (B1)-(B2), condition (B3) is a consequence of the weaker condition: (B3') the span of the (generalized) eigenvectors of A is dense in H.
The main goal of our paper is to extend the polynomial stability analysis to the class of C_0-semigroups mentioned above. We obtain a spectral criterion for polynomial stability of not necessarily bounded semigroups generated by operators satisfying (B1)-(B3). In particular, we describe the asymptotic behavior of the semigroups restricted to some dense, non-closed subsets in terms of the location of the spectrum. Thus we obtain Theorem 1.1: if the spectrum satisfies condition (A) for some real, positive constants α, γ, then the semigroup decays polynomially on the corresponding dense subsets. Based on this theorem and the results from [2] we obtain Theorem 1.2. We also use Theorem 1.1 to describe the asymptotic behavior of solutions of neutral type equations (Theorem 3.1). This theorem complements our previous results [16] concerning the behavior of the norm of the semigroups corresponding to equations (3).
The work is organized as follows. First we give the proof of Theorem 1.1, preceded by several technical results. The next section is devoted to the analysis of stability of neutral type equations (3) and of regular feedback stabilizability of these equations [14]. In the appendix we give two simple statements about complex matrices which are used in our work.

Proof of the Main Results
We begin with some technical results which will be used in the proof of Theorem 1.1.

Lemma 2.1
For any sequence {λ_1, …, λ_n, λ̃_1, …, λ̃_n} of 2n pairwise different complex numbers and any complex number λ̃, the system of linear equations (7) with n unknowns α_1, …, α_n has a unique solution, given by formula (8).

Proof We solve the system by Cramer's rule. It is easy to compute the main determinant D and the determinant D_j, i.e. the determinant D with the j-th column replaced by the right-hand side of system (7). Taking α_k = D_k/D, k = 1, …, n, we arrive at (8).
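Lemma 2.1 rests on Cramer's rule, and its mechanics are easy to test numerically. The sketch below is generic (it does not reproduce the specific system (7), whose coefficients are not restated here): it implements Cramer's rule for an arbitrary nonsingular complex system and checks the result against a direct solver.

```python
import numpy as np

def cramer_solve(A, b):
    """Solve A x = b by Cramer's rule: x_j = det(A_j) / det(A), where A_j
    is A with its j-th column replaced by the right-hand side b."""
    D = np.linalg.det(A)
    x = np.empty(len(b), dtype=complex)
    for j in range(len(b)):
        Aj = A.astype(complex).copy()
        Aj[:, j] = b
        x[j] = np.linalg.det(Aj) / D
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
b = rng.standard_normal(4) + 1j * rng.standard_normal(4)
assert np.allclose(cramer_solve(A, b), np.linalg.solve(A, b))
```

As in the lemma, uniqueness follows from D ≠ 0, which for the system (7) is guaranteed by the assumption that the 2n numbers are pairwise different.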

Lemma 2.2 Let
Proof We define the operator B : D(A) → H in each subspace V_k separately, by the following formula: The operator B generates a C_0-semigroup and satisfies assumptions (B1)-(B3). Hence (see [9, Theorem 1]) the required estimate holds, which implies the assertion.
then there exists a constant M independent of k such that
where A_k denotes the restriction of A to the corresponding basis subspace V_k := P_k H, k ∈ ℤ, and λ̃_k ∈ σ_k is an eigenvalue from σ_k with maximal real part.
Proof Without loss of generality we assume that each A_k has n_k ≤ N distinct eigenvalues and no root vectors. Indeed, if this assumption is not satisfied, then for any ε_1 > 0 we can find an operator A_ε close to A, i.e. satisfying ‖A_ε − A‖ ≤ ε_1 and V_k^ε = V_k, k ∈ ℤ, such that A_ε has only simple, distinct eigenvalues. If the assertion is true for every such operator A_ε, then it is also true for A. The existence of a Riesz basis of subspaces allows us to consider the operator A and its resolvent in each subspace separately. We recall our notation A_k := A|_{V_k}, and we denote by A_k also the matrices of the operators A_k. Since A generates a C_0-semigroup T(t), there exist constants M_1, ω_0 such that
$$\|R(A,\lambda)\| \le \frac{M_1}{\operatorname{Re}\lambda - \omega_0}, \qquad \operatorname{Re}\lambda > 1. \tag{12}$$
Using the Riesz basis property we can conclude the same for R(A_k, λ), which gives estimate (13). To estimate ‖A_k − λ̃_k I‖, we use the decomposition (14), where {λ̃_p^{(k)}} is an auxiliary family of points. We choose λ̃_p^{(k)} := i Im λ̃_k + ω_0 + p + 1 for p = 1, 2, …, n_k, k ∈ ℤ. Without loss of generality we assume that the radius of each set σ_k is uniformly bounded by a constant r > 0. Now we estimate the norm of A_k − λ̃_k I. From (10) and (14) we get an intermediate bound; next we use (11) and inequality (13) for λ = λ̃_j^{(k)} to arrive at (16). Estimate (16) together with n_k ≤ N completes the proof.
where ω_0 = sup{Re λ : λ ∈ σ(A)}, ω_k = max{Re λ : λ ∈ σ_k}, and A_k is the matrix of the operator A restricted to the subspace V_k.
Proof We choose ε small enough so that the families σ̃_k := σ_k − ω_k, k ∈ I_ε, satisfy assumptions (B1). Then the subspaces V_k, k ∈ I_ε, constitute a Riesz basis of subspaces in the closure of the corresponding linear span, which can be extended to a Riesz basis (in this span) by choosing an orthonormal basis in each subspace. In this basis we consider the matrices A_k of the operators A_k := A|_{V_k}. We define a new operator B on each subspace V_k, namely we take B_k := B|_{V_k} := A_k + (ω_0 − ω_k)I. It is easy to see that the operator B still generates a C_0-semigroup and ω_0(B) = ω_0(A) = ω_0. Theorem 1 in [9] implies a norm estimate for the semigroup generated by B with some constant M_2. Hence for the operator e^{B_k t} and its matrix we have a similar estimate with a new constant M_1. From the definition of B_k and the last inequality we conclude the first assertion. Now we estimate the norm of the resolvent operator R(A_k, λ) using the Laplace transform of e^{A_k t}, namely
$$R(A_k,\lambda) = \int_0^{\infty} e^{-\lambda t}\, e^{A_k t}\,dt.$$
Passing to the norms, this finally gives the required bound, where M_2 is a constant independent of k.
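Taking norms inside the Laplace integral makes the resolvent bound explicit. This is a standard sketch; the semigroup bound ‖e^{A_k t}‖ ≤ M_1(1 + t)^{N−1}e^{ω_0 t} used below is an assumed form suggested by [9, Theorem 1], and the exact exponent in the paper may differ:

```latex
\|R(A_k,\lambda)\|
  \le \int_0^{\infty} e^{-\operatorname{Re}\lambda\, t}\,\|e^{A_k t}\|\,dt
  \le M_1 \int_0^{\infty} e^{-(\operatorname{Re}\lambda-\omega_0)t}(1+t)^{N-1}\,dt
  \le C_N M_1\left(\frac{1}{\operatorname{Re}\lambda-\omega_0}
        + \frac{1}{(\operatorname{Re}\lambda-\omega_0)^{N}}\right),
  \qquad \operatorname{Re}\lambda > \omega_0 ,
```

where C_N depends only on N (using (1+t)^{N−1} ≤ 2^{N−1}(1 + t^{N−1}) and ∫₀^∞ e^{−at}t^{N−1}dt = (N−1)!/a^N). On the half-plane Re λ > ω_0 + 1 this reduces to a bound by a constant independent of k, which is the form used in the proof.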
Now we are ready to prove the main theorem.
Proof Without loss of generality we assume that ω_0 = 0. To prove (a) we generate a basis in H by taking the Riesz basis of subspaces and choosing an orthonormal basis in each of the subspaces. Obviously, such a basis is a Riesz basis, and we can consider the matrices R(A_k, λ) of the resolvent operators in this basis instead of the operators R(A_k, λ) themselves. We prove that (17) holds for some constant. First, we split the set of all k's into three subsets I_0(ε), I_1(C, s, ε) and I_2(C, s, ε). Second, we choose ε small enough to use Lemma 2.3. For k ∈ I_0 the assertion is obvious, because the restricted semigroup is bounded by M_ε e^{−εt/2}, and we need to prove it only for k ∈ I_ε. Theorem 2.1 implies the corresponding resolvent bound. We estimate ‖R(A_k, is)‖ for k ∈ I_2(C, s, ε) using Statement 3.5 (see Appendix) for the family of matrices (1/M)(A_k − isI). We fix a constant C (independently of s) large enough to make sure that (1/M)|λ̃_k − is| satisfies the assumptions of Statement 3.5 for all k ∈ I_2(C, s, ε). Hence we get (18), where the constant C_1 is independent of k. For k ∈ I_1(C, s, ε) we use Lemma 2.3, where the constant M is independent of k. Taking λ = is and using assumption (A) we get (19). Combining (18) and (19) we obtain (17), which proves assertion (a).

To prove (b) it suffices to show (20).
For ε > 0 small enough and any C > 0 the corresponding splitting is easy to see. For k ∈ I_2(0, C, ε) we use Lemma 2.3 and obtain (21). For |λ̃_k| large enough we use Statement 3.5 to estimate ‖A_k^{-1}‖ in the same way as in the proof of assertion (a), i.e. we get (22), where M_3 is a new constant depending on n, α. Now (21) and (22) imply (20), and assertion (b) is proven.
Proof Theorem 1.1 shows that condition (A) is sufficient for polynomial stability. Necessity: let T(t) be polynomially stable. We choose one eigenvalue from each family σ_k (say λ_k) and corresponding eigenvectors φ_k. Consider the subspace S = span{φ_k : k ∈ ℤ}. It is easy to see that the subspace S is T-invariant and the semigroup T(t) is bounded on S. Applying the results of Sect. 3 of [2] to the semigroup T(t) restricted to S, we see that the family {λ_k}_{k∈ℤ} satisfies condition (A) with some positive constants γ, α. Since the eigenvalues λ_k were chosen arbitrarily, the whole spectrum satisfies condition (A) with some positive constants γ, α.
Stability and Stabilizability of Neutral Type Systems

The analysis in this section is based on the spectral properties of the operator A; thus we recall some important properties of A (for more details see [13]).
The spectrum of A consists of eigenvalues only. Almost all of them lie close to λ̃
The sequence of p_m-dimensional subspaces V
Taking the above into account, we get ω ≥ ω̃, and there are several possibilities for the location of the spectrum σ(A): (a) ω > ω̃, which implies that p_0 > 0; (b) ω = ω̃ and p_0 = 0.
In the cases (a) and (e) the asymptotic behavior of the corresponding semigroup is determined by the eigenvalue with maximal real part (equal to ω) and multiplicity p_0, i.e.
In the cases (b)-(d) we have the estimate (23) for the norm of the semigroup (see [16] for more details). Moreover, in the cases (b) and (c) the semigroup does not have any maximal asymptotics (even when q = p_1), i.e. (24) holds;
in the case (d), the existence of a maximal asymptotics is independent of our assumptions. Now we discuss the property of asymptotic stability in the above cases. A necessary condition for stability is ω ≤ 0, and if ω < 0 then we even have exponential stability; thus only the case ω = 0 is interesting. In this case we see that stability cannot occur in (a), (c), (d), (e), because we can point out an initial state for which the solution does not decrease or, at least, such an initial state exists (by the Banach-Steinhaus theorem). We focus on the case (b), where ω = 0 and p_0 = 0, and discuss stability. If q = p_1 = 1, then (23) and the lack of maximal asymptotics (24) imply stability. For q > 1 the lack of stability is a consequence of inequality (23), and it is caused by the families of eigenvalues approaching the imaginary axis from the left-hand side (not by a single eigenvalue). If it is possible to describe the rate of this approach by the inequality
$$\operatorname{Re}\lambda \le -\frac{C}{|\operatorname{Im}\lambda|^{\alpha}}, \tag{25}$$
where C, α are some real positive constants, then we are able to find a subset of initial states for which the system is stable. Using Theorem 1.1 we obtain a sufficient condition for the stability of the system on some non-closed subset, namely we have

Theorem 3.1 Let us consider system (4). If Re λ < 0 for all λ ∈ σ(A) and Re λ ≤ −C/|Im λ|^α for all but finitely many λ ∈ σ(A), where C, α are some real positive constants, then for any n ∈ ℕ there exists M > 0 with

Proof of Theorem 1.2 The operator A satisfies the assumptions (B1)-(B3) and (A), so the theorem follows directly from Theorem 1.1.
where ‖·‖_{D(A_0^n)} denotes the norm ‖x‖_{D(A_0^n)} = ‖A_0^n x‖ + ‖x‖. Now, following [14], we consider regular feedback stabilizability of the system
$$\dot z(t) = A_{-1}\dot z(t-1) + \int_{-1}^{0} A_2(\theta)\dot z(t+\theta)\,d\theta + \int_{-1}^{0} A_3(\theta) z(t+\theta)\,d\theta + Bu(t), \tag{26}$$
where A_{-1} is an n × n invertible complex matrix, A_2 and A_3 are n × n matrices of functions from L_2(−1, 0), B is an n × p complex matrix, and z(t + ·) ∈ H^1(−1, 0; ℂ^n).
It was shown in [7,11] that for any u ∈ L_2 the system (26) has a unique solution z(t + ·) ∈ H^1(−1, 0; ℂ^n). We say that the system (26) is asymptotically stabilizable if there exists a linear feedback control u(t) = F(z_t(·)) = F(z(t + ·)) such that the system (26) becomes asymptotically stable. If, in addition, the asymptotic stabilizability is achieved by a feedback F which is bounded (as an operator acting on the space H^1), then we call it regular asymptotic stabilizability. In our case any regular feedback is of the form (27) (see [14] for more details), where the operator A is given by (5)-(6). Taking (27) into account we can rewrite Eq. (28) in the form
$$\dot x(t) = (A + BF)x(t). \tag{29}$$
We notice that A + BF is similar in structure to the operator A. The operator BF affects only the matrices A_2, A_3, and the operator A + BF is of the same form as A with only A_2 and A_3 changed. In particular, the operator A + BF generates a C_0-group and its domain stays unchanged (because the operator BF does not affect the matrix A_{−1}). In the case when the eigenvalues of the matrix A_{−1} with maximal modulus, say μ_m, m = 1, …, ℓ_0, are distinct and simple, the corresponding eigenvalues of A (and of A + BF), say {λ_k}_{k∈ℤ}, are also simple and lie in some disjoint circles with square-summable radii r_k. It was proven (see [14, Theorem 8] and [18,19]) that these eigenvalues can then be assigned arbitrarily within the circles by a suitable choice of the feedback. For the case of non-simple eigenvalues μ_m, m = 1, …, ℓ_0, of the matrix A_{−1}, even if the eigenvalues of A can be moved to the open left half-plane, then, in general, stability cannot be obtained, because the corresponding group can be unbounded (see [16]). However, if we assume that we are able to move the eigenvalues in each circle using a proper feedback (27), in the same way as in the case of simple eigenvalues, then using Theorem 1.1 we can obtain polynomial stability of the corresponding group. To illustrate this idea we focus on a special class of equations (26).
Let us denote the identity matrix in ℂ^n by I_n and the Jordan block of size n with eigenvalue 1 by J_n, i.e. J_n = {a_{p,q}} with a_{p,p} = 1 for p = 1, …, n, a_{p,p+1} = 1 for p = 1, …, n − 1, and all other entries equal to 0. We consider Eq. (26) with A_{−1} = I_n, A_2 = f̃(θ)J_n, A_3 = g̃(θ)J_n, where f̃, g̃ ∈ L_2(−1, 0) are fixed. We take B = J_n and the control u(t) of the form (27), i.e.
where f_2, f_3 ∈ L_2(−1, 0; ℂ). Taking f = f̃ + f_2, g = g̃ + f_3 we rewrite Eq. (26) in the form (30). The corresponding characteristic function is of the form (31). It is proven (see [13]) that the roots of this equation are asymptotically close to the roots of the equation λ(e^λ − 1) = 0. More precisely, the roots of equation (31) lie in circles centered at λ̃_k = 2kπi with square-summable radii r_k. For the scalar version of equation (30) (i.e. n = 1), Theorem 8 in [14] implies that for any choice of a complex sequence τ_k in the above circles there exist functions f, g ∈ L_2 such that the numbers τ_k are roots of equation (31). Moreover, Eq. (31) does not depend on n, thus the same functions f, g move the roots in the same way in the general case (n > 1). Now we choose τ_k = 2kπi − 1/k, which means that there exist functions f, g such that the numbers τ_k are eigenvalues of the corresponding operator (A + BF); these eigenvalues are contained in the open left half-plane and satisfy (25) with C = 2π, α = 1. Nevertheless, the system (28) cannot be stable, because the corresponding group is not bounded, i.e. ‖e^{(A+BF)t}‖ ≥ M t^{n−1}, t > 1.
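Two quantitative claims above can be checked directly. First, τ_k = 2kπi − 1/k satisfies (25) with C = 2π, α = 1, in fact with equality: Re τ_k = −1/k = −2π/|Im τ_k|. Second, the lower bound ‖e^{(A+BF)t}‖ ≥ Mt^{n−1} reflects the Jordan structure: for the nilpotent part N of a Jordan block of size n, the corner entry of e^{tN} equals t^{n−1}/(n−1)!. The sketch below is a numerical illustration of these two elementary facts, not a computation with the actual operator A + BF:

```python
import math
import numpy as np

# (i) tau_k = 2*k*pi*i - 1/k satisfies (25) with C = 2*pi, alpha = 1 (k >= 1).
C, alpha = 2 * math.pi, 1.0
for k in range(1, 1000):
    re_tau, im_tau = -1.0 / k, 2 * math.pi * k
    assert re_tau <= -C / abs(im_tau) ** alpha + 1e-12  # holds with equality

# (ii) exp(t*N) for the nilpotent superdiagonal part N of a Jordan block,
# computed via its finite power series sum_{j<n} (t*N)^j / j!.
def expm_nilpotent(N, t):
    n = N.shape[0]
    E, term = np.eye(n), np.eye(n)
    for j in range(1, n):
        term = term @ (t * N) / j
        E = E + term
    return E

N = np.diag(np.ones(4), k=1)   # nilpotent part of a 5x5 Jordan block (n = 5)
E = expm_nilpotent(N, 10.0)
# Corner entry t^(n-1)/(n-1)! -> polynomial growth of the matrix exponential.
assert abs(E[0, 4] - 10.0**4 / math.factorial(4)) < 1e-8
```

The corner entry grows like t^{n−1}, which is the source of the polynomial lower bound for the group norm when A_{−1} has non-simple eigenvalues.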
However, our paper provides tools to study the polynomial stability of the above unbounded group; in particular, due to Theorem 3.1, we obtain that sufficiently regular solutions tend to zero polynomially. Namely, we have the following

Statement 3.3 We consider the control system (28) with feedback control of the form (27). We fix n ∈ ℕ_+, A_{−1} = I_n, A_2 = f̃(θ)J_n, A_3 = g̃(θ)J_n, F_2 = f_2(θ)J_n, F_3 = f_3(θ)J_n, B = J_n, where f̃, g̃ are arbitrary functions from L_2(−1, 0; ℂ) and J_n is the Jordan block of size n with eigenvalue 1. Then there exist functions f_2, f_3 ∈ L_2(−1, 0; ℂ) and a constant M > 0 such that for any k ∈ ℕ
$$\|e^{(A+BF)t}(A + BF)^{-(n+k-1)}\| \le M t^{-k}, \qquad t > 1,$$
or equivalently
$$\|e^{(A+BF)t}x\| \le M t^{-k}\,\|x\|_{D(A^{n+k-1})}, \qquad t > 1,\ x \in D(A^{n+k-1}).$$
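The equivalence of the two inequalities in Statement 3.3 is the standard substitution argument. A brief sketch, assuming 0 ∈ ρ(A + BF) so that the negative powers are well defined: for x ∈ D((A + BF)^{n+k−1}),

```latex
\begin{aligned}
\|e^{(A+BF)t}x\|
 &= \bigl\|e^{(A+BF)t}(A+BF)^{-(n+k-1)}\,(A+BF)^{n+k-1}x\bigr\| \\
 &\le \bigl\|e^{(A+BF)t}(A+BF)^{-(n+k-1)}\bigr\|\,\bigl\|(A+BF)^{n+k-1}x\bigr\|
  \;\le\; M t^{-k}\,\|x\|_{D(A^{n+k-1})},
\end{aligned}
```

where the last step uses ‖(A+BF)^{n+k−1}x‖ ≤ ‖(A+BF)^{n+k−1}x‖ + ‖x‖, i.e. the graph norm from Statement 3.3.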
Remark 1 Under the same assumptions we can similarly prove the first inequality of Statement 3.4 for the cofactors of the matrix B_λ. Namely, if we denote the cofactors of the matrix B_λ ∈ M_n(ℂ) by B_{i,j}, then |B_{i,j}| ≤ M|λ|^{n−1}.

Statement 3.5 Let A_λ, B_λ ∈ M_n(ℂ) be such that A_λ consists of Jordan blocks with eigenvalue λ, where |λ| is sufficiently large, and ‖B_λ − A_λ‖ ≤ 1, where ‖A‖ = max_{i,j} |a_{i,j}|. Then there exists a constant C such that

Proof of Statement 3.5 Without loss of generality we assume that A_λ is a Jordan block.
Applying the inversion formula to the matrix B_λ we get
$$B_\lambda^{-1} = \frac{1}{\det B_\lambda}\,\bigl[B_{j,i}\bigr]_{i,j=1}^{n},$$
where B_{i,j} are the cofactors of the matrix B_λ. Using Statement 3.4 and Remark 1 to estimate |det B_λ| and |B_{i,j}|, we obtain the required bound, which ends the proof.