1 Introduction

One of the problems of the stability of motion is the problem of absolute stability. Problems of absolute stability of nonlinear control systems arise in the solution of practical tasks. In technical control systems, the control function is a function of one variable whose graph lies between two lines through the origin in the first and third quadrants of the coordinate plane. The stability of control systems with a control function located in such a sector is discussed, for example, in [16]. Originally, control systems described by ordinary differential equations were considered. Systems with aftereffect, which describe real processes better, became an object of study later, e.g., in [3, 4, 7, 8]. Some nonlinear systems with indirect regulation and delayed argument are considered in [9, 10]. Sufficient conditions for absolute interval stability are derived in the papers [3, 6] by means of Lyapunov-Krasovskii functionals in the form of the sum of a quadratic form and an integral of the nonlinear component of the considered system, and the coefficients of the exponential decay of solutions are calculated by the so-called S-procedure. In the case when the conditions of the theorems quoted there are not met, the linear feedback method is used to stabilize the system.

The main goal of the paper is to solve the problem of stabilization of an indirect control system. The sufficient conditions for absolute stability of the control system are obtained using Lyapunov-Krasovskii functionals which contain an exponential multiplier.

Throughout the paper we will use the following notation. Let $S$ be a real symmetric square matrix. Then the symbol $\lambda_{\min}(S)$ ($\lambda_{\max}(S)$) will denote the minimal (maximal) eigenvalue of $S$. We will also use the following vector norms:

$$\|x(t)\| := \sqrt{\sum_{i=1}^{n} x_i^2(t)}, \qquad \|x(t)\|_{\tau,\xi} := \left( \int_{t-\tau}^{t} e^{-\xi(t-s)} \|x(s)\|^2 \, ds \right)^{1/2},$$

where $x = (x_1, x_2, \dots, x_n)^T$ and $\xi$ is a real parameter.

The paper is organized as follows. Since for the one-dimensional process it is possible to get simple explicit criteria, Section 2 deals with the stabilization of one-dimensional processes described by two scalar equations with delay. Then the indirect control system in the general matrix form is considered in Section 3.

2 Stabilization of one-dimensional processes

Let us consider an indirect control system described by a system of two scalar equations with delayed argument in the form

$$\dot{x}(t) = a_1 x(t) + a_2 x(t-\tau) + b f(\sigma(t)),$$
(1)
$$\dot{\sigma}(t) = c x(t) - \rho f(\sigma(t)),$$
(2)

where $t \ge t_0 \ge 0$, $x$ is the state function, $\sigma$ is the control function defined on $[t_0, \infty)$, $a_1$, $a_2$, $b$, $c$, $\tau > 0$, $\rho > 0$ are constants, and $f(\sigma)$ is a continuous nonlinear function on $\mathbb{R}$ satisfying the so-called sector condition. This means there exist constants $k_1$, $k_2$, $k_2 > k_1 > 0$, such that the inequalities

$$k_1 \sigma^2 \le f(\sigma)\sigma \le k_2 \sigma^2$$
(3)

are satisfied.
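For intuition, the sector condition (3) is easy to test numerically. The sketch below uses a hypothetical nonlinearity $f(\sigma) = \sigma(2 + \sin^2\sigma)$ and sector bounds $k_1 = 2$, $k_2 = 3$, all chosen purely for illustration:

```python
import numpy as np

# Check the sector condition (3), k1*s^2 <= f(s)*s <= k2*s^2, on a grid.
# The function f and the bounds k1, k2 below are illustrative assumptions.
def in_sector(f, k1, k2, grid):
    s = np.asarray(grid)
    return bool(np.all(k1 * s**2 <= f(s) * s) and np.all(f(s) * s <= k2 * s**2))

f = lambda s: s * (2.0 + np.sin(s)**2)
print(in_sector(f, 2.0, 3.0, np.linspace(-10, 10, 2001)))  # True
```

Since $f(\sigma)\sigma = \sigma^2(2+\sin^2\sigma)$, the product indeed stays between $2\sigma^2$ and $3\sigma^2$ for every $\sigma$.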

Definition 1 The continuous vector function $(x,\sigma)\colon [t_0-\tau,\infty) \to \mathbb{R}^2$ is said to be a solution of (1), (2) on $[t_0,\infty)$ if $(x,\sigma)$ is continuously differentiable on $[t_0,\infty)$ and satisfies the system (1), (2) on $[t_0,\infty)$.

Definition 2 The system (1), (2) is called absolutely stable if the trivial solution (x,σ)=(0,0) of the system (1), (2) is globally asymptotically stable for an arbitrary function f(σ) satisfying (3).

In the investigation of absolute stability of the control systems with delay, we will use Lyapunov-Krasovskii functionals which contain, in addition to the quadratic form and the integral of the nonlinear component of the considered system, an exponential multiplier, i.e.,

$$V[x(t),\sigma(t)] = h x^2(t) + g \int_{t-\tau}^{t} e^{-\xi(t-s)} x^2(s)\,ds + \beta \int_{0}^{\sigma(t)} f(s)\,ds,$$
(4)

where $h$, $g$, $\beta$, $\xi$ are positive constants, $(x,\sigma)$ is a solution of (1), (2), and $t \ge t_0$. It is easy to see that the last term in (4) is always nonnegative due to the left-hand part of the sector condition (3). Using the coefficients of the functional (4), define the auxiliary numbers

$$s_{11}^1 = -2a_1 h - g, \quad s_{12}^1 = -a_2 h, \quad s_{13}^1 = -\left(hb + \tfrac{1}{2}\beta c\right), \quad s_{22}^1 = e^{-\xi\tau} g, \quad s_{33}^1 = \beta\rho$$

and a matrix

$$S_1 = S_1(g,h,\beta,\xi) := \begin{pmatrix} s_{11}^1 & s_{12}^1 & s_{13}^1 \\ s_{12}^1 & s_{22}^1 & 0 \\ s_{13}^1 & 0 & s_{33}^1 \end{pmatrix} = \begin{pmatrix} -2a_1h - g & -a_2h & -hb - \frac{1}{2}\beta c \\ -a_2h & e^{-\xi\tau}g & 0 \\ -hb - \frac{1}{2}\beta c & 0 & \beta\rho \end{pmatrix}.$$

Our first result is the theorem on absolute stability for the considered system (1), (2).

Theorem 1 Suppose that there exist constants $g>0$, $h>0$, $\beta>0$, and $\xi>0$ such that the matrix $S_1(g,h,\beta,\xi)$ is positive definite. Then the system (1), (2) is absolutely stable.

Proof Compute the full derivative of the functional V[x(t),σ(t)] defined by (4) along trajectories of the system (1), (2). Then

$$\begin{aligned}
\frac{d}{dt} V[x(t),\sigma(t)] ={}& 2hx(t)\left[a_1 x(t) + a_2 x(t-\tau) + b f(\sigma(t))\right] + g\left[x^2(t) - e^{-\xi\tau} x^2(t-\tau)\right] \\
&- g\xi \int_{t-\tau}^{t} e^{-\xi(t-s)} x^2(s)\,ds + \beta f(\sigma(t))\left[c x(t) - \rho f(\sigma(t))\right] \\
={}& \left[2ha_1 + g\right] x^2(t) + 2ha_2 x(t)x(t-\tau) - g e^{-\xi\tau} x^2(t-\tau) \\
&+ (2hb + c\beta)\, x(t) f(\sigma(t)) - \beta\rho f^2(\sigma(t)) - g\xi \int_{t-\tau}^{t} e^{-\xi(t-s)} x^2(s)\,ds \\
={}& -\left(x(t), x(t-\tau), f(\sigma(t))\right) S_1(g,h,\beta,\xi) \left(x(t), x(t-\tau), f(\sigma(t))\right)^T - g\xi \|x(t)\|_{\tau,\xi}^2 \\
\le{}& -\lambda_{\min}(S_1)\left( x^2(t) + x^2(t-\tau) + f^2(\sigma(t)) \right) - g\xi \|x(t)\|_{\tau,\xi}^2.
\end{aligned}$$

Using (3) we get

$$\frac{d}{dt} V[x(t),\sigma(t)] \le -\lambda_{\min}(S_1)\left( x^2(t) + x^2(t-\tau) + k_1^2 \sigma^2(t) \right) - g\xi \|x(t)\|_{\tau,\xi}^2.$$

From this inequality and the estimates

$$h x^2(t) \le V[x(t),\sigma(t)] \le h x^2(t) + g \|x(t)\|_{\tau,\xi}^2 + \tfrac{1}{2}\beta k_2 \sigma^2(t),$$

where the last term can be derived using the right-hand part of the sector condition (3), we deduce the absolute stability of the system (1), (2) (we also refer to a theorem by Krasovskii in [11, Theorem 2, p.145]). □
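The hypothesis of Theorem 1 can be verified numerically for concrete data. The following sketch (all coefficient values are illustrative assumptions, not taken from the paper) assembles $S_1(g,h,\beta,\xi)$ and tests positive definiteness via its eigenvalues:

```python
import numpy as np

# Assemble the matrix S1(g, h, beta, xi) of Theorem 1 for sample
# coefficients and test positive definiteness by checking that all
# eigenvalues are positive. The parameter values are assumptions.
def S1(a1, a2, b, c, rho, tau, g, h, beta, xi):
    s11 = -2.0 * a1 * h - g
    s12 = -a2 * h
    s13 = -(h * b + 0.5 * beta * c)
    s22 = np.exp(-xi * tau) * g
    s33 = beta * rho
    return np.array([[s11, s12, s13],
                     [s12, s22, 0.0],
                     [s13, 0.0, s33]])

def is_pos_def(S):
    # S is symmetric, so eigvalsh applies.
    return bool(np.all(np.linalg.eigvalsh(S) > 0))

# a1 must be negative (cf. Remark 1); these values are illustrative only.
S = S1(a1=-5.0, a2=0.5, b=1.0, c=1.0, rho=2.0, tau=1.0,
       g=1.0, h=1.0, beta=1.0, xi=0.1)
print(is_pos_def(S))  # True
```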

The crucial assumption in Theorem 1 is the assumption of positive definiteness of the matrix $S_1(g,h,\beta,\xi)$. If we cannot find suitable constants $g$, $h$, $\beta$, and $\xi$ to ensure positive definiteness, or such constants do not exist, Theorem 1 is not applicable. In such a case, we can modify the control function in (1) by adding a linear combination of the values of the state function at the moments $t$ and $t-\tau$, and we will consider a modified system

$$\dot{x}(t) = a_1 x(t) + a_2 x(t-\tau) + b f(\sigma(t)) + u(t),$$
(5)
$$\dot{\sigma}(t) = c x(t) - \rho f(\sigma(t)),$$
(6)

where

$$u(t) = c_1 x(t) + c_2 x(t-\tau),$$
(7)

$c_1$ and $c_2$ are suitable constants, and $t \ge t_0 \ge 0$.

Then we can apply the following result.

Theorem 2 Let $g>0$, $h>0$, $\beta>0$, and $\xi>0$ be fixed. Then the system (5), (6) is absolutely stable if the constants $c_1$, $c_2$ in the control function (7) fulfill the inequality

$$c_1 < \frac{1}{2h}\left[ s_{11}^1 - \frac{1}{s_{22}^1}\left(s_{12}^1 - c_2 h\right)^2 - \frac{1}{s_{33}^1}\left(s_{13}^1\right)^2 \right].$$
(8)

Proof We employ the functional (4) and follow the scheme of the proof of Theorem 1. Tracing that proof, we find that, for the absolute stability of the system (5), (6), it is sufficient that the matrix

$$S_2 = S_2(g,h,\beta,\xi) := \begin{pmatrix} s_{11}^1 - 2c_1 h & s_{12}^1 - c_2 h & s_{13}^1 \\ s_{12}^1 - c_2 h & s_{22}^1 & 0 \\ s_{13}^1 & 0 & s_{33}^1 \end{pmatrix}$$

is positive definite. Applying the well-known positivity (Sylvester) criterion [12, p.260], [13] to the matrix $S_2$, we require the positivity of its leading principal minors, i.e.,

$$\Delta_1 = s_{11}^1 - 2c_1 h > 0,$$
(9)
$$\Delta_2 = \left(s_{11}^1 - 2c_1 h\right) s_{22}^1 - \left(s_{12}^1 - c_2 h\right)^2 > 0,$$
(10)
$$\Delta_3 = \left(s_{11}^1 - 2c_1 h\right) s_{22}^1 s_{33}^1 - s_{22}^1 \left(s_{13}^1\right)^2 - s_{33}^1 \left(s_{12}^1 - c_2 h\right)^2 > 0.$$
(11)

The inequality (9) can be rewritten as

$$c_1 < \frac{1}{2h}\, s_{11}^1.$$
(12)

By a simple modification of inequality (10), we have

$$2c_1 h s_{22}^1 < s_{11}^1 s_{22}^1 - \left(s_{12}^1 - c_2 h\right)^2.$$

Hence, taking into account that $s_{22}^1 = e^{-\xi\tau} g > 0$, we get a more suitable relationship,

$$c_1 < \frac{1}{2h}\left[ s_{11}^1 - \frac{1}{s_{22}^1}\left(s_{12}^1 - c_2 h\right)^2 \right].$$
(13)

The inequality (11) can be modified into the form

$$2c_1 h s_{22}^1 s_{33}^1 < s_{11}^1 s_{22}^1 s_{33}^1 - s_{22}^1 \left(s_{13}^1\right)^2 - s_{33}^1 \left(s_{12}^1 - c_2 h\right)^2.$$
(14)

Finally, with regard to the assumptions

$$h > 0, \qquad s_{22}^1 = e^{-\xi\tau} g > 0, \qquad s_{33}^1 = \beta\rho > 0,$$

the inequality (14) can be written in the form (8). If this inequality holds, then obviously (12) and (13) hold as well. Moreover, it is easy to see that it is possible to find parameters c 1 and c 2 such that the inequality (8) is fulfilled. □
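The bound (8) lends itself to direct computation. In the sketch below, all numeric values are assumed for illustration; note that $a_1 > 0$ there, so Theorem 1 itself is not applicable, and stabilization via (7) is genuinely needed. We evaluate the right-hand side of (8) for a fixed $c_2$ and confirm that a $c_1$ strictly below it makes $S_2$ positive definite:

```python
import numpy as np

# Compute the upper bound (8) on c1 of Theorem 2 and verify that any c1
# strictly below it makes S2 positive definite. All values are assumptions.
def bound_c1(a1, a2, b, c, rho, tau, g, h, beta, xi, c2):
    s11 = -2.0 * a1 * h - g
    s12 = -a2 * h
    s13 = -(h * b + 0.5 * beta * c)
    s22 = np.exp(-xi * tau) * g
    s33 = beta * rho
    return (s11 - (s12 - c2 * h)**2 / s22 - s13**2 / s33) / (2.0 * h)

def S2(a1, a2, b, c, rho, tau, g, h, beta, xi, c1, c2):
    s11 = -2.0 * a1 * h - g
    s12 = -a2 * h
    s13 = -(h * b + 0.5 * beta * c)
    s22 = np.exp(-xi * tau) * g
    s33 = beta * rho
    return np.array([[s11 - 2*c1*h, s12 - c2*h, s13],
                     [s12 - c2*h,   s22,        0.0],
                     [s13,          0.0,        s33]])

p = dict(a1=1.0, a2=1.0, b=1.0, c=1.0, rho=2.0, tau=1.0,
         g=1.0, h=1.0, beta=1.0, xi=0.1)
cb = bound_c1(**p, c2=0.0)            # upper bound from (8)
S = S2(**p, c1=cb - 0.1, c2=0.0)      # pick c1 strictly below the bound
print(np.all(np.linalg.eigvalsh(S) > 0))  # True
```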

3 Stabilization of the indirect control systems with matrix coefficients

Our goal in this section is to extend the considerations developed in Section 2 to the study of stabilization of the indirect control systems whose coefficients are expressed in a matrix form. It means we will consider an n-dimensional process x described by the system of (n+1) equations,

$$\dot{x}(t) = A x(t) + B x(t-\tau) + b f(\sigma(t)),$$
(15)
$$\dot{\sigma}(t) = c x(t) - \rho f(\sigma(t)),$$
(16)

where $t \ge t_0 \ge 0$, $x = (x_1, x_2, \dots, x_n)^T$ is the n-dimensional column vector function of the state, $\sigma$ is the scalar control function defined on $[t_0,\infty)$, $A$ and $B$ are $n \times n$ constant matrices, $b = (b_1, b_2, \dots, b_n)^T$ is an n-dimensional constant column vector, $c = (c_1, c_2, \dots, c_n)$ is an n-dimensional constant row vector, $\tau > 0$ and $\rho > 0$ are constants, and $f(\sigma)$ is a continuous nonlinear function on $\mathbb{R}$ satisfying the sector condition (3).

To investigate the system (15), (16) we use a Lyapunov-Krasovskii functional, generalizing the functional (4), in the form

$$V[x(t),\sigma(t)] = x^T(t) H x(t) + \int_{t-\tau}^{t} e^{-\xi(t-s)} x^T(s) G x(s)\,ds + \beta \int_{0}^{\sigma(t)} f(s)\,ds,$$
(17)

where H and G are n×n constant positive definite symmetric matrices, and ξ and β are positive constants.

We give a generalization of Theorem 1 to the case of the control system (15), (16). To this end, we define the matrices

$$S_{11}^3 := -A^T H - H A - G, \qquad S_{12}^3 := -H B, \qquad S_{13}^3 := -\left(H b + \tfrac{1}{2}\beta c^T\right), \qquad S_{22}^3 := e^{-\xi\tau} G$$

and

$$S_3(G,H,\beta,\xi) := \begin{pmatrix} S_{11}^3 & S_{12}^3 & S_{13}^3 \\ (S_{12}^3)^T & S_{22}^3 & \theta \\ (S_{13}^3)^T & \theta^T & s_{33}^1 \end{pmatrix},$$

where $\theta = (0, 0, \dots, 0)^T$ is the n-dimensional zero column vector.

Theorem 3 Suppose that there exist positive definite symmetric matrices $H$, $G$ and constants $\beta>0$, $\xi>0$ such that the matrix $S_3(G,H,\beta,\xi)$ is positive definite. Then the system (15), (16) is absolutely stable.

Proof The scheme of the proof repeats the proof of Theorem 1. Compute the full derivative of the functional V[x(t),σ(t)] defined by (17) along trajectories of the system (15), (16). Then

$$\begin{aligned}
\frac{d}{dt} V[x(t),\sigma(t)] ={}& \dot{x}^T(t) H x(t) + x^T(t) H \dot{x}(t) + x^T(t) G x(t) - e^{-\xi\tau} x^T(t-\tau) G x(t-\tau) \\
&- \xi \int_{t-\tau}^{t} e^{-\xi(t-s)} x^T(s) G x(s)\,ds + \beta f(\sigma(t)) \dot{\sigma}(t) \\
={}& \left[A x(t) + B x(t-\tau) + b f(\sigma(t))\right]^T H x(t) + x^T(t) H \left[A x(t) + B x(t-\tau) + b f(\sigma(t))\right] \\
&+ x^T(t) G x(t) - e^{-\xi\tau} x^T(t-\tau) G x(t-\tau) - \xi \int_{t-\tau}^{t} e^{-\xi(t-s)} x^T(s) G x(s)\,ds \\
&+ \beta f(\sigma(t)) \left[c x(t) - \rho f(\sigma(t))\right] \\
={}& -\left(x^T(t), x^T(t-\tau), f(\sigma(t))\right) S_3(G,H,\beta,\xi) \left(x^T(t), x^T(t-\tau), f(\sigma(t))\right)^T \\
&- \xi \int_{t-\tau}^{t} e^{-\xi(t-s)} x^T(s) G x(s)\,ds \\
\le{}& -\lambda_{\min}(S_3)\left( \|x(t)\|^2 + \|x(t-\tau)\|^2 + f^2(\sigma(t)) \right) - \xi \lambda_{\min}(G) \|x(t)\|_{\tau,\xi}^2.
\end{aligned}$$

Using (3) we get

$$\frac{d}{dt} V[x(t),\sigma(t)] \le -\lambda_{\min}(S_3)\left( \|x(t)\|^2 + \|x(t-\tau)\|^2 + k_1^2 \sigma^2(t) \right) - \xi \lambda_{\min}(G) \|x(t)\|_{\tau,\xi}^2.$$

From this inequality and the estimates

$$\lambda_{\min}(H) \|x(t)\|^2 \le V[x(t),\sigma(t)] \le \lambda_{\max}(H) \|x(t)\|^2 + \lambda_{\max}(G) \|x(t)\|_{\tau,\xi}^2 + \tfrac{1}{2}\beta k_2 \sigma^2(t),$$

where the last term can be derived using the right-hand part of the sector condition (3), we deduce the absolute stability of the system (15), (16) (we also refer to a theorem by Krasovskii in [11, Theorem 2, p.145]). □
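As in the scalar case, the hypothesis of Theorem 3 is a finite-dimensional test. A minimal numeric sketch (with assumed $2\times 2$ data, chosen only for demonstration) builds the block matrix $S_3(G,H,\beta,\xi)$ and checks its eigenvalues:

```python
import numpy as np

# Assemble the (n+1)x(n+1) block matrix S3(G, H, beta, xi) of Theorem 3
# and test positive definiteness. All matrices below are assumptions.
def S3(A, B, b, c, rho, tau, G, H, beta, xi):
    n = A.shape[0]
    S11 = -A.T @ H - H @ A - G
    S12 = -H @ B
    S13 = -(H @ b + 0.5 * beta * c.T)     # n x 1 column
    S22 = np.exp(-xi * tau) * G
    top = np.hstack([S11, S12, S13])
    mid = np.hstack([S12.T, S22, np.zeros((n, 1))])
    bot = np.hstack([S13.T, np.zeros((1, n)), [[beta * rho]]])
    return np.vstack([top, mid, bot])

A = np.array([[-5.0, 1.0], [0.0, -4.0]])  # eigenvalues must have Re < 0
B = 0.1 * np.eye(2)
b = np.array([[1.0], [0.0]])
c = np.array([[1.0, 0.0]])                # row vector
S = S3(A, B, b, c, rho=2.0, tau=1.0, G=np.eye(2), H=np.eye(2),
       beta=1.0, xi=0.1)
print(np.all(np.linalg.eigvalsh(S) > 0))  # True
```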

It may happen that it is not easy to find suitable positive definite symmetric matrices $H$, $G$ and constants $\beta>0$, $\xi>0$ such that the matrix $S_3(G,H,\beta,\xi)$ will be positive definite, or such matrices and constants do not exist. In such a case, we can modify the control function in the system (15), (16) by adding a linear combination of the values of the state function at the moments $t$ and $t-\tau$. Therefore, instead of the system (15), (16), we will consider a modified system

$$\dot{x}(t) = A x(t) + B x(t-\tau) + b f(\sigma(t)) + u(t),$$
(18)
$$\dot{\sigma}(t) = c x(t) - \rho f(\sigma(t)),$$
(19)

where

$$u(t) = C_1 x(t) + C_2 x(t-\tau),$$
(20)

$C_1$ and $C_2$ are $n \times n$ constant matrices (the so-called control matrices), and $t \ge t_0 \ge 0$. Our task is to find conditions on the matrices $C_1$, $C_2$ such that the system (18), (19) will be absolutely stable.

We will need some auxiliary results from the theory of matrices.

Lemma 1 [13]

Let $A$ be a regular $n \times n$ matrix, $B$ an $n \times q$ matrix, and $C$ a $q \times q$ regular matrix. Let a Hermitian matrix $S$ be represented as

$$S = \begin{pmatrix} A & B \\ B^* & C \end{pmatrix}.$$

Then the matrix $S$ is positive definite if and only if the matrices $A$ and $C - B^* A^{-1} B$ are positive definite.

Lemma 2 [12, Frobenius formula]

Let $A$ be a regular $n \times n$ matrix, $D$ a $q \times q$ matrix, $B$ an $n \times q$ matrix, and $C$ a $q \times n$ matrix, and let the matrix

$$M = \begin{pmatrix} A & B \\ C & D \end{pmatrix}$$

be regular. Then the matrix $R = D - C A^{-1} B$ is regular and

$$M^{-1} = \begin{pmatrix} A^{-1} + A^{-1} B R^{-1} C A^{-1} & -A^{-1} B R^{-1} \\ -R^{-1} C A^{-1} & R^{-1} \end{pmatrix}.$$
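The Frobenius formula of Lemma 2 can be confirmed numerically on random data; the sketch below compares the block formula with a direct inverse (the shift by $4I$ merely keeps the blocks well conditioned, and the seed is arbitrary):

```python
import numpy as np

# Numeric check of Lemma 2 (Frobenius block-inversion formula) on random,
# well-conditioned blocks; all sizes and data here are arbitrary choices.
rng = np.random.default_rng(0)
n, q = 3, 2
A = rng.standard_normal((n, n)) + 4 * np.eye(n)   # keep A regular
B = rng.standard_normal((n, q))
C = rng.standard_normal((q, n))
D = rng.standard_normal((q, q)) + 4 * np.eye(q)

M = np.block([[A, B], [C, D]])
Ai = np.linalg.inv(A)
R = D - C @ Ai @ B                  # Schur complement of A in M
Ri = np.linalg.inv(R)
Minv = np.block([[Ai + Ai @ B @ Ri @ C @ Ai, -Ai @ B @ Ri],
                 [-Ri @ C @ Ai,              Ri]])
print(np.allclose(Minv, np.linalg.inv(M)))  # True
```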

Theorem 4 Suppose that there exist positive definite symmetric matrices H and G, control matrices C 1 and C 2 , and constants β>0 and ξ>0 such that

(1) The matrices

$$\Delta_1^4 := S_{11}^3 - C_1^T H - H C_1,$$
(21)
$$\Delta_2^4 := S_{22}^3 - \left[S_{12}^3 - H C_2\right]^T \left[S_{11}^3 - C_1^T H - H C_1\right]^{-1} \left[S_{12}^3 - H C_2\right]$$
(22)

are positive definite.

(2) The number

$$\Delta_3^4 := \beta\rho - \left(S_{13}^3\right)^T \left[ \left(S_{11}^4\right)^{-1} + \left(S_{11}^4\right)^{-1} S_{12}^4 R^{-1} \left(S_{12}^4\right)^T \left(S_{11}^4\right)^{-1} \right] S_{13}^3,$$
(23)

where

$$S_{11}^4 = \Delta_1^4, \qquad S_{12}^4 = S_{12}^3 - H C_2, \qquad R = S_{22}^3 - \left(S_{12}^4\right)^T \left(S_{11}^4\right)^{-1} S_{12}^4,$$
(24)

is positive.

Then the system (18), (19) is absolutely stable.

Proof The philosophy of the proof is the same as in the proof of Theorem 2; only the calculations are more complicated because we now work with the matrix case. In accordance with Theorem 3, the system (15), (16) is absolutely stable if the matrix $S_3(G,H,\beta,\xi)$ is positive definite. Define the auxiliary matrix

$$S_4 = S_4(G,H,C_1,C_2,\beta,\xi) := \begin{pmatrix} S_{11}^4 & S_{12}^4 & S_{13}^3 \\ (S_{12}^4)^T & S_{22}^3 & \theta \\ (S_{13}^3)^T & \theta^T & s_{33}^1 \end{pmatrix}.$$

The matrix $S_4$ plays the same role for the system (18), (19) as the matrix $S_3(G,H,\beta,\xi)$ for the system (15), (16). Therefore, the system (18), (19) is absolutely stable if the matrix $S_4(G,H,C_1,C_2,\beta,\xi)$ is positive definite. It follows from Lemma 1 that the matrix $S_4(G,H,C_1,C_2,\beta,\xi)$ is positive definite if and only if the matrix

$$M_4 = \begin{pmatrix} S_{11}^4 & S_{12}^4 \\ (S_{12}^4)^T & S_{22}^3 \end{pmatrix}$$

is positive definite and the inequality

$$s_{33}^1 > \left( (S_{13}^3)^T, \theta^T \right) (M_4)^{-1} \begin{pmatrix} S_{13}^3 \\ \theta \end{pmatrix}$$
(25)

holds.

The matrix $M_4$ is positive definite (we use Lemma 1 again) if and only if the matrices

$$S_{11}^4, \qquad S_{22}^3 - \left(S_{12}^4\right)^T \left(S_{11}^4\right)^{-1} S_{12}^4$$

are positive definite. The matrix $S_{11}^4$ is positive definite due to (21). The matrix

$$S_{22}^3 - \left(S_{12}^4\right)^T \left(S_{11}^4\right)^{-1} S_{12}^4 = S_{22}^3 - \left[S_{12}^3 - H C_2\right]^T \left[S_{11}^3 - C_1^T H - H C_1\right]^{-1} \left[S_{12}^3 - H C_2\right]$$

is positive definite due to (22).

We compute the inverse of the matrix $M_4$ using Lemma 2. We get

$$(M_4)^{-1} = \begin{pmatrix} M_4^{11} & -\left(S_{11}^4\right)^{-1} S_{12}^4 R^{-1} \\ -R^{-1} \left(S_{12}^4\right)^T \left(S_{11}^4\right)^{-1} & R^{-1} \end{pmatrix},$$

where

$$M_4^{11} = \left(S_{11}^4\right)^{-1} + \left(S_{11}^4\right)^{-1} S_{12}^4 R^{-1} \left(S_{12}^4\right)^T \left(S_{11}^4\right)^{-1}.$$

Therefore, the inequality (25) can be rewritten as

$$s_{33}^1 > \left(S_{13}^3\right)^T M_4^{11} S_{13}^3$$

and is valid due to (23).

Consequently, the system with control of the form (18), (19) is absolutely stable if there exist matrices $C_1$, $C_2$ in (20) such that the conditions (21)-(23) are valid. □
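Conditions (21)-(23) reduce to eigenvalue tests on $n \times n$ matrices and one scalar inequality, so they are straightforward to check numerically. The sketch below evaluates them for an illustrative system that is unstable without control; all matrices, and in particular the feedback guess $C_1 = -6I$, are assumptions chosen for demonstration:

```python
import numpy as np

# Check conditions (21)-(23) of Theorem 4 for illustrative data.
# Every matrix below is an assumed example, not a prescription.
def blocks(A, B, b, c, rho, tau, G, H, beta, xi, C1, C2):
    S11_3 = -A.T @ H - H @ A - G
    S12_3 = -H @ B
    S13_3 = -(H @ b + 0.5 * beta * c.T)
    S22_3 = np.exp(-xi * tau) * G
    S11_4 = S11_3 - C1.T @ H - H @ C1            # Delta_1^4, condition (21)
    S12_4 = S12_3 - H @ C2
    R = S22_3 - S12_4.T @ np.linalg.inv(S11_4) @ S12_4   # equals Delta_2^4, (22)
    return S11_4, S12_4, S13_3, R

A = np.array([[1.0, 0.0], [0.0, 1.0]])           # unstable without control
B, b, c = 0.1 * np.eye(2), np.array([[1.0], [0.0]]), np.array([[1.0, 0.0]])
G, H, beta, xi, rho, tau = np.eye(2), np.eye(2), 1.0, 0.1, 2.0, 1.0
C1, C2 = -6.0 * np.eye(2), np.zeros((2, 2))      # stabilizing feedback guess

S11_4, S12_4, S13_3, R = blocks(A, B, b, c, rho, tau, G, H, beta, xi, C1, C2)
Inv11 = np.linalg.inv(S11_4)
M11 = Inv11 + Inv11 @ S12_4 @ np.linalg.inv(R) @ S12_4.T @ Inv11
d3 = beta * rho - (S13_3.T @ M11 @ S13_3).item()          # Delta_3^4, (23)
cond = (np.all(np.linalg.eigvalsh(S11_4) > 0)
        and np.all(np.linalg.eigvalsh(R) > 0) and d3 > 0)
print(cond)  # True
```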

Remark 1 Let us recall the well-known facts that, for the validity of Theorem 1, it is necessary that $a_1 < 0$, and, for the validity of Theorem 3, it is necessary that all eigenvalues of the matrix $A$ have negative real parts.