1 Introduction

In the past few decades, neural networks have been widely investigated by researchers. In 1987, bidirectional associative memory (BAM) neural networks were first introduced by Kosko [1, 2]. Owing to their ability to store and recall associative patterns, BAM neural networks have attracted considerable attention in fields such as signal processing, pattern recognition, and optimization.

It is well known that time delay is unavoidable in the hardware implementation of neural networks due to the finite switching speed of neurons and amplifiers. The delay can cause instability, oscillation, or poor dynamical behavior. In practical applications, many types of time delays arise, such as discrete delays [3], time-varying delays [4], distributed delays [5, 6], random delays [7], and leakage delays (or forgetting delays) [8, 9]. Up to now, a large number of results on delayed BAM neural networks have been reported [10–13]. Broadly, these results fall into two categories: the stability analysis of equilibrium points, and the existence and stability of periodic or almost periodic solutions.

The leakage delay, which appears in the negative feedback term of a neural network system, has recently emerged as a research topic of primary importance. Gopalsamy [8] investigated the stability of BAM neural networks with constant leakage delays. Liu [14] further discussed the global exponential stability of BAM neural networks with time-varying leakage delays, which extends and improves the main results of Gopalsamy. Peng et al. [15–17] derived stability criteria for BAM neural networks with leakage delays, unbounded distributed delays, and probabilistic time-varying delays.

Sampled-data state feedback is a practical and useful control scheme and has been studied extensively over the past decades, with results on synchronization [18, 19], state estimation [20–22], and stability [23–29]. Recently, the work in [24] studied the stability of sampled-data piecewise affine systems via the input delay approach. Although the importance of the stability of neural networks has been widely recognized, no results have been established for the sampled-data stability of BAM neural networks with leakage time-varying delays. Motivated by the works above, we consider the sampled-data stability of BAM neural networks with leakage time-varying delays under variable sampling with a known upper bound on the sampling intervals.

The organization of this paper is as follows. In Section 2, the problem is formulated and some basic preliminaries and assumptions are given. The main results are presented in Section 3. In Section 4, a numerical example is given to demonstrate the effectiveness of the obtained results. Conclusions are drawn in Section 5.

2 Preliminaries

In this paper, we consider the following BAM neural networks with leakage time-varying delays and sampled-data state feedback inputs:

$$
\begin{cases}
\dot{x}_i(t) = -a_i x_i(t-\rho_i(t)) + \sum_{j=1}^{n} b_{ij}^{(1)} g_j\bigl(y_j(t)\bigr) + \sum_{j=1}^{n} b_{ij}^{(2)} g_j\bigl(y_j(t-\tau_{ij}(t))\bigr) + \tilde{u}_i(t),\\[1mm]
\dot{y}_i(t) = -c_i y_i(t-r_i(t)) + \sum_{j=1}^{n} d_{ij}^{(1)} f_j\bigl(x_j(t)\bigr) + \sum_{j=1}^{n} d_{ij}^{(2)} f_j\bigl(x_j(t-\sigma_{ij}(t))\bigr) + \tilde{v}_i(t),
\end{cases}
$$
(1)

where $i\in\tilde{N}=\{1,2,\ldots,n\}$, $x_i(t)$ and $y_i(t)$ are the neuron state variables, the positive constants $a_i$ and $c_i$ denote the time scales of the respective layers of the network, and $b_{ij}^{(1)}$, $b_{ij}^{(2)}$, $d_{ij}^{(1)}$, $d_{ij}^{(2)}$ are the connection weights of the network. $\rho_i(t)$ and $r_i(t)$ denote the leakage delays, $\tau_{ij}(t)$ and $\sigma_{ij}(t)$ are time-varying delays, $f_j(\cdot)$, $g_j(\cdot)$ are the neuron activation functions, and $\tilde{u}_i(t)=-k_i x_i(t_k)$, $\tilde{v}_i(t)=-l_i y_i(t_k)$ are the sampled-data state feedback inputs, where $t_k$ denotes the sampling instant, $t_k\le t<t_{k+1}$, $k\in\mathbb{N}$, and $\mathbb{N}$ denotes the set of all natural numbers.

Assume that there exists a positive constant $L$ such that the sampling intervals satisfy $t_{k+1}-t_k\le L$, $k\in\mathbb{N}$. Let $d_k(t)=t-t_k$ for $t\in[t_k,t_{k+1})$; then $t_k=t-d_k(t)$ with $0\le d_k(t)\le L$.
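To make this input delay reformulation concrete, the short Python sketch below (our own illustration, not part of the paper; the sampling instants and all names are assumptions) computes the sawtooth delay $d_k(t)=t-t_k$ for a given sampling sequence and checks the bound $0\le d_k(t)\le L$.

```python
import numpy as np

# Hypothetical illustration: the sampled-data feedback -k*x(t_k) is a zero-order
# hold, which can be viewed as the delayed feedback -k*x(t - d(t)) with the
# sawtooth input delay d(t) = t - t_k, bounded by the maximal sampling interval L.
t_samples = np.array([0.0, 0.3, 0.55, 0.9, 1.2])   # assumed sampling instants t_k
L = np.max(np.diff(t_samples))                      # upper bound on the sampling intervals

def input_delay(t):
    """Return d(t) = t - t_k, where t_k is the latest sampling instant with t_k <= t."""
    k = np.searchsorted(t_samples, t, side="right") - 1
    return t - t_samples[k]

grid = np.linspace(0.0, 1.19, 200)
d = np.array([input_delay(t) for t in grid])
assert np.all((0.0 <= d) & (d <= L))                # 0 <= d(t) <= L on [t_0, t_4)
print(f"L = {L:.2f}, max d(t) on the grid = {d.max():.3f}")
```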

For the sake of convenience, we give the following notations:

$$
\bar{\rho}_i=\sup_{t\in\mathbb{R}}\rho_i(t),\qquad \underline{\rho}_i=\inf_{t\in\mathbb{R}}\rho_i(t),\qquad \bar{r}_i=\sup_{t\in\mathbb{R}}r_i(t),\qquad \underline{r}_i=\inf_{t\in\mathbb{R}}r_i(t),
$$
$$
\bar{\tau}_{ij}=\sup_{t\in\mathbb{R}}\tau_{ij}(t),\qquad \underline{\tau}_{ij}=\inf_{t\in\mathbb{R}}\tau_{ij}(t),\qquad \bar{\sigma}_{ij}=\sup_{t\in\mathbb{R}}\sigma_{ij}(t),\qquad \underline{\sigma}_{ij}=\inf_{t\in\mathbb{R}}\sigma_{ij}(t),
$$
$$
\rho=\sup_{t\in\mathbb{R}}\dot{\rho}_i(t),\qquad r=\sup_{t\in\mathbb{R}}\dot{r}_i(t).
$$

Before ending this section, we introduce two assumptions, which will be used in the next section.

Assumption 1 There exist constants $L_j^{f}>0$ and $L_j^{g}>0$ such that

$$
0\le\frac{f_j(x)-f_j(y)}{x-y}\le L_j^{f},\qquad 0\le\frac{g_j(x)-g_j(y)}{x-y}\le L_j^{g},
$$

for all $x,y\in\mathbb{R}$, $x\ne y$, and $j\in\tilde{N}$.

Assumption 2 Let $a_i\bar{\rho}_i<1$ and $c_i\bar{r}_i<1$ for all $i\in\tilde{N}$. There exist positive constants $\xi_1,\xi_2,\ldots,\xi_n$ and $\eta_1,\eta_2,\ldots,\eta_n$ such that, for $t>0$ and $i\in\tilde{N}$, the following inequalities hold:

$$
\begin{cases}
\bigl[-a_i(1-2a_i\bar{\rho}_i)+a_i\rho+k_i\bigr]\dfrac{\xi_i}{1-a_i\bar{\rho}_i}
+\Bigl[\sum_{j=1}^{n}|b_{ij}^{(1)}|+\sum_{j=1}^{n}|b_{ij}^{(2)}|\Bigr]L_j^{g}\dfrac{\eta_j}{1-c_j\bar{r}_j}<0,\\[2mm]
\bigl[-c_i(1-2c_i\bar{r}_i)+c_i r+l_i\bigr]\dfrac{\eta_i}{1-c_i\bar{r}_i}
+\Bigl[\sum_{j=1}^{n}|d_{ij}^{(1)}|+\sum_{j=1}^{n}|d_{ij}^{(2)}|\Bigr]L_j^{f}\dfrac{\xi_j}{1-a_j\bar{\rho}_j}<0.
\end{cases}
$$
(2)
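As a purely numerical aid (our own sketch, not part of the paper), the following Python function evaluates the left-hand sides of the inequalities (2) for given network data. The function name, the array-based interface, and the term-by-term weighting inside the sums are assumptions; the weighting coincides with (2) when $L_j^{g}$, $L_j^{f}$, $\xi_j$, $\eta_j$, $a_j$, $c_j$, $\bar{\rho}_j$, $\bar{r}_j$ do not depend on $j$, as in the example of Section 4.

```python
import numpy as np

def assumption2_lhs(a, c, B1, B2, D1, D2, Lg, Lf, k, l,
                    rho_bar, r_bar, rho_sup, r_sup, xi, eta):
    """Evaluate the left-hand sides of the inequalities (2).

    a, c, k, l, Lg, Lf, rho_bar, r_bar, xi, eta are length-n arrays;
    B1, B2, D1, D2 are n x n weight matrices; rho_sup, r_sup are the suprema of
    the leakage-delay derivatives.  Assumption 2 is satisfied when every entry
    of both returned arrays is negative (and a_i*rho_bar_i < 1, c_i*r_bar_i < 1).
    """
    a, c = np.asarray(a, float), np.asarray(c, float)
    lhs_x = (-a * (1 - 2 * a * rho_bar) + a * rho_sup + k) * xi / (1 - a * rho_bar) \
        + (np.abs(B1) + np.abs(B2)) @ (Lg * eta / (1 - c * r_bar))
    lhs_y = (-c * (1 - 2 * c * r_bar) + c * r_sup + l) * eta / (1 - c * r_bar) \
        + (np.abs(D1) + np.abs(D2)) @ (Lf * xi / (1 - a * rho_bar))
    return lhs_x, lhs_y
```

Applying this function to the data of the example in Section 4 should return negative entries in both arrays, consistent with the conclusion of Theorem 1.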

3 Main results

In this section, we investigate the exponential stability of (1). By using the input delay approach [24], (1) can be rewritten in the following form:

$$
\begin{cases}
\dot{x}_i(t) = -a_i x_i(t-\rho_i(t)) + \sum_{j=1}^{n} b_{ij}^{(1)} g_j\bigl(y_j(t)\bigr) + \sum_{j=1}^{n} b_{ij}^{(2)} g_j\bigl(y_j(t-\tau_{ij}(t))\bigr) - k_i x_i(t-d_k(t)),\\[1mm]
\dot{y}_i(t) = -c_i y_i(t-r_i(t)) + \sum_{j=1}^{n} d_{ij}^{(1)} f_j\bigl(x_j(t)\bigr) + \sum_{j=1}^{n} d_{ij}^{(2)} f_j\bigl(x_j(t-\sigma_{ij}(t))\bigr) - l_i y_i(t-d_k(t)).
\end{cases}
$$
(3)

The initial conditions of (3) are $x_i(s)=\phi_i(s)$, $y_i(s)=\varphi_i(s)$, $s\in(-\infty,0]$, $i\in\tilde{N}$, where $\phi_i(s)$ and $\varphi_i(s)$ are continuous functions on $(-\infty,0]$.

The main results are stated as follows.

Theorem 1 Let Assumptions 1 and 2 hold; then the BAM neural network (3) is exponentially stable, i.e., there exists a positive constant $\lambda$ such that $|x_i(t)|=O(e^{-\lambda t})$, $|y_i(t)|=O(e^{-\lambda t})$, $i\in\tilde{N}$.

Proof Define the continuous functions

$$
\begin{cases}
\Phi_i(\omega) = \bigl[-(a_i-\omega)(1-2a_i\bar{\rho}_i)+a_i\bigl(e^{\omega\bar{\rho}_i}-(1-\rho)\bigr)+k_i e^{\omega L}\bigr]\dfrac{\xi_i}{1-a_i\bar{\rho}_i}
+\Bigl[\sum_{j=1}^{n}|b_{ij}^{(1)}|+e^{\omega\bar{\tau}_{ij}}\sum_{j=1}^{n}|b_{ij}^{(2)}|\Bigr]L_j^{g}\dfrac{\eta_j}{1-c_j\bar{r}_j},\\[2mm]
\Phi_{n+i}(\omega) = \bigl[-(c_i-\omega)(1-2c_i\bar{r}_i)+c_i\bigl(e^{\omega\bar{r}_i}-(1-r)\bigr)+l_i e^{\omega L}\bigr]\dfrac{\eta_i}{1-c_i\bar{r}_i}
+\Bigl[\sum_{j=1}^{n}|d_{ij}^{(1)}|+e^{\omega\bar{\sigma}_{ij}}\sum_{j=1}^{n}|d_{ij}^{(2)}|\Bigr]L_j^{f}\dfrac{\xi_j}{1-a_j\bar{\rho}_j},
\end{cases}
$$
(4)

where $\omega\ge 0$ and $i\in\tilde{N}$.

By Assumption 2, we have

$$
\begin{cases}
\Phi_i(0)=\bigl[-a_i(1-2a_i\bar{\rho}_i)+a_i\rho+k_i\bigr]\dfrac{\xi_i}{1-a_i\bar{\rho}_i}
+\Bigl[\sum_{j=1}^{n}|b_{ij}^{(1)}|+\sum_{j=1}^{n}|b_{ij}^{(2)}|\Bigr]L_j^{g}\dfrac{\eta_j}{1-c_j\bar{r}_j}<0,\\[2mm]
\Phi_{n+i}(0)=\bigl[-c_i(1-2c_i\bar{r}_i)+c_i r+l_i\bigr]\dfrac{\eta_i}{1-c_i\bar{r}_i}
+\Bigl[\sum_{j=1}^{n}|d_{ij}^{(1)}|+\sum_{j=1}^{n}|d_{ij}^{(2)}|\Bigr]L_j^{f}\dfrac{\xi_j}{1-a_j\bar{\rho}_j}<0.
\end{cases}
$$
(5)

Because $\Phi_i(\omega)$ and $\Phi_{n+i}(\omega)$ are continuous functions, we can choose a small positive constant $\lambda$ such that, for all $i\in\tilde{N}$,

$$
\begin{cases}
\Phi_i(\lambda)=\bigl[-(a_i-\lambda)(1-2a_i\bar{\rho}_i)+a_i\bigl(e^{\lambda\bar{\rho}_i}-(1-\rho)\bigr)+k_i e^{\lambda L}\bigr]\dfrac{\xi_i}{1-a_i\bar{\rho}_i}
+\Bigl[\sum_{j=1}^{n}|b_{ij}^{(1)}|+e^{\lambda\bar{\tau}_{ij}}\sum_{j=1}^{n}|b_{ij}^{(2)}|\Bigr]L_j^{g}\dfrac{\eta_j}{1-c_j\bar{r}_j}<0,\\[2mm]
\Phi_{n+i}(\lambda)=\bigl[-(c_i-\lambda)(1-2c_i\bar{r}_i)+c_i\bigl(e^{\lambda\bar{r}_i}-(1-r)\bigr)+l_i e^{\lambda L}\bigr]\dfrac{\eta_i}{1-c_i\bar{r}_i}
+\Bigl[\sum_{j=1}^{n}|d_{ij}^{(1)}|+e^{\lambda\bar{\sigma}_{ij}}\sum_{j=1}^{n}|d_{ij}^{(2)}|\Bigr]L_j^{f}\dfrac{\xi_j}{1-a_j\bar{\rho}_j}<0.
\end{cases}
$$
(6)

Let

$$
X_i(t)=e^{\lambda t}x_i(t)-\int_{t-\rho_i(t)}^{t}a_i e^{\lambda s}x_i(s)\,ds,\qquad
Y_i(t)=e^{\lambda t}y_i(t)-\int_{t-r_i(t)}^{t}c_i e^{\lambda s}y_i(s)\,ds,\qquad i\in\tilde{N}.
$$

Calculating the derivatives of $X_i(t)$ and $Y_i(t)$ along the solutions of (3), we have

$$
\begin{aligned}
\dot{X}_i(t)={}&\lambda e^{\lambda t}x_i(t)+e^{\lambda t}\dot{x}_i(t)-a_i\bigl[e^{\lambda t}x_i(t)-(1-\dot{\rho}_i(t))e^{\lambda(t-\rho_i(t))}x_i(t-\rho_i(t))\bigr]\\
={}&\lambda e^{\lambda t}x_i(t)+e^{\lambda t}\Bigl[-a_i x_i(t-\rho_i(t))+\sum_{j=1}^{n}b_{ij}^{(1)}g_j\bigl(y_j(t)\bigr)+\sum_{j=1}^{n}b_{ij}^{(2)}g_j\bigl(y_j(t-\tau_{ij}(t))\bigr)-k_i x_i(t-d_k(t))\Bigr]\\
&-a_i e^{\lambda t}x_i(t)+a_i(1-\dot{\rho}_i(t))e^{\lambda(t-\rho_i(t))}x_i(t-\rho_i(t))\\
={}&\lambda e^{\lambda t}x_i(t)-a_i e^{\lambda t}x_i(t)+a_i(1-\dot{\rho}_i(t))e^{\lambda(t-\rho_i(t))}x_i(t-\rho_i(t))-a_i e^{\lambda t}x_i(t-\rho_i(t))\\
&-k_i e^{\lambda t}x_i(t-d_k(t))+e^{\lambda t}\Bigl[\sum_{j=1}^{n}b_{ij}^{(1)}g_j\bigl(y_j(t)\bigr)+\sum_{j=1}^{n}b_{ij}^{(2)}g_j\bigl(y_j(t-\tau_{ij}(t))\bigr)\Bigr]\\
={}&-(a_i-\lambda)X_i(t)-(a_i-\lambda)\int_{t-\rho_i(t)}^{t}a_i e^{\lambda s}x_i(s)\,ds-\bigl[a_i-a_i(1-\dot{\rho}_i(t))e^{-\lambda\rho_i(t)}\bigr]e^{\lambda t}x_i(t-\rho_i(t))\\
&-k_i e^{\lambda t}x_i(t-d_k(t))+e^{\lambda t}\Bigl[\sum_{j=1}^{n}b_{ij}^{(1)}g_j\bigl(y_j(t)\bigr)+\sum_{j=1}^{n}b_{ij}^{(2)}g_j\bigl(y_j(t-\tau_{ij}(t))\bigr)\Bigr]
\end{aligned}
$$

and

$$
\begin{aligned}
\dot{Y}_i(t)={}&\lambda e^{\lambda t}y_i(t)+e^{\lambda t}\dot{y}_i(t)-c_i\bigl[e^{\lambda t}y_i(t)-(1-\dot{r}_i(t))e^{\lambda(t-r_i(t))}y_i(t-r_i(t))\bigr]\\
={}&\lambda e^{\lambda t}y_i(t)+e^{\lambda t}\Bigl[-c_i y_i(t-r_i(t))+\sum_{j=1}^{n}d_{ij}^{(1)}f_j\bigl(x_j(t)\bigr)+\sum_{j=1}^{n}d_{ij}^{(2)}f_j\bigl(x_j(t-\sigma_{ij}(t))\bigr)-l_i y_i(t-d_k(t))\Bigr]\\
&-c_i\bigl[e^{\lambda t}y_i(t)-(1-\dot{r}_i(t))e^{\lambda(t-r_i(t))}y_i(t-r_i(t))\bigr]\\
={}&-(c_i-\lambda)Y_i(t)-(c_i-\lambda)\int_{t-r_i(t)}^{t}c_i e^{\lambda s}y_i(s)\,ds-\bigl[c_i-c_i(1-\dot{r}_i(t))e^{-\lambda r_i(t)}\bigr]e^{\lambda t}y_i(t-r_i(t))\\
&-l_i e^{\lambda t}y_i(t-d_k(t))+e^{\lambda t}\Bigl[\sum_{j=1}^{n}d_{ij}^{(1)}f_j\bigl(x_j(t)\bigr)+\sum_{j=1}^{n}d_{ij}^{(2)}f_j\bigl(x_j(t-\sigma_{ij}(t))\bigr)\Bigr].
\end{aligned}
$$

We define a positive constant M as follows:

$$
M=\max_{1\le i\le n}\Bigl\{\sup_{t\in(-\infty,0]}|X_i(t)|,\ \sup_{t\in(-\infty,0]}|Y_i(t)|\Bigr\},\qquad M>0.
$$

Let K be a positive number such that

$$
|X_i(t)|\le M<K\xi_i,\qquad |Y_i(t)|\le M<K\eta_i,\qquad\text{for all } t\in(-\infty,0].
$$
(7)

Now, we will prove that

$$
|X_i(t)|\le M<K\xi_i,\qquad |Y_i(t)|\le M<K\eta_i,\qquad\text{for all } t>0.
$$
(8)

Let $t_0=0$. We first prove that

$$
|X_i(t)|\le M<K\xi_i,\qquad |Y_i(t)|\le M<K\eta_i,\qquad\text{for } t\in[t_0,t_1).
$$
(9)

In fact, if (9) is not valid, there exist $i\in\tilde{N}$ and $t^{0}\in[t_0,t_1)$ such that at least one of the following cases occurs:

$$
\begin{aligned}
&(\mathrm{a})\quad X_i(t^{0})=K\xi_i,\ \ \dot{X}_i(t^{0})\ge 0,\ \ |X_j(t)|<K\xi_j,\ |Y_j(t)|<K\eta_j \ \text{ for } t\in(-\infty,t^{0}),\ j\in\tilde{N};\\
&(\mathrm{b})\quad X_i(t^{0})=-K\xi_i,\ \ \dot{X}_i(t^{0})\le 0,\ \ |X_j(t)|<K\xi_j,\ |Y_j(t)|<K\eta_j \ \text{ for } t\in(-\infty,t^{0}),\ j\in\tilde{N};\\
&(\mathrm{c})\quad Y_i(t^{0})=K\eta_i,\ \ \dot{Y}_i(t^{0})\ge 0,\ \ |X_j(t)|<K\xi_j,\ |Y_j(t)|<K\eta_j \ \text{ for } t\in(-\infty,t^{0}),\ j\in\tilde{N};\\
&(\mathrm{d})\quad Y_i(t^{0})=-K\eta_i,\ \ \dot{Y}_i(t^{0})\le 0,\ \ |X_j(t)|<K\xi_j,\ |Y_j(t)|<K\eta_j \ \text{ for } t\in(-\infty,t^{0}),\ j\in\tilde{N}.
\end{aligned}
$$
(10)

For $t\in(-\infty,t^{0}]$ and $j\in\tilde{N}$,

$$
e^{\lambda t}|x_j(t)|\le\Bigl|e^{\lambda t}x_j(t)-\int_{t-\rho_j(t)}^{t}a_j e^{\lambda s}x_j(s)\,ds\Bigr|+\Bigl|\int_{t-\rho_j(t)}^{t}a_j e^{\lambda s}x_j(s)\,ds\Bigr|
\le K\xi_j+a_j\bar{\rho}_j\sup_{s\in(-\infty,t^{0}]}e^{\lambda s}|x_j(s)|.
$$

Hence

$$
e^{\lambda t}|x_j(t)|\le\sup_{s\in(-\infty,t^{0}]}e^{\lambda s}|x_j(s)|\le\frac{K\xi_j}{1-a_j\bar{\rho}_j}.
$$
(11)

Similarly, we have

$$
e^{\lambda t}|y_j(t)|\le\sup_{s\in(-\infty,t^{0}]}e^{\lambda s}|y_j(s)|\le\frac{K\eta_j}{1-c_j\bar{r}_j}.
$$

If (a) holds, we get

$$
\begin{aligned}
\dot{X}_i(t^{0})={}&-(a_i-\lambda)X_i(t^{0})-(a_i-\lambda)\int_{t^{0}-\rho_i(t^{0})}^{t^{0}}a_i e^{\lambda s}x_i(s)\,ds
-\bigl[a_i-a_i(1-\dot{\rho}_i(t^{0}))e^{-\lambda\rho_i(t^{0})}\bigr]e^{\lambda t^{0}}x_i(t^{0}-\rho_i(t^{0}))\\
&-k_i e^{\lambda t^{0}}x_i(t^{0}-d_k(t^{0}))+e^{\lambda t^{0}}\Bigl[\sum_{j=1}^{n}b_{ij}^{(1)}g_j\bigl(y_j(t^{0})\bigr)+\sum_{j=1}^{n}b_{ij}^{(2)}g_j\bigl(y_j(t^{0}-\tau_{ij}(t^{0}))\bigr)\Bigr]\\
\le{}&-(a_i-\lambda)K\xi_i+(a_i-\lambda)a_i\bar{\rho}_i\frac{K\xi_i}{1-a_i\bar{\rho}_i}
+\bigl[a_i-a_i(1-\dot{\rho}_i(t^{0}))e^{-\lambda\rho_i(t^{0})}\bigr]e^{\lambda\rho_i(t^{0})}e^{\lambda(t^{0}-\rho_i(t^{0}))}\bigl|x_i(t^{0}-\rho_i(t^{0}))\bigr|\\
&+k_i e^{\lambda d_k(t^{0})}e^{\lambda(t^{0}-d_k(t^{0}))}\bigl|x_i(t^{0}-d_k(t^{0}))\bigr|
+e^{\lambda t^{0}}\sum_{j=1}^{n}|b_{ij}^{(1)}|L_j^{g}|y_j(t^{0})|
+e^{\lambda\tau_{ij}(t^{0})}\sum_{j=1}^{n}|b_{ij}^{(2)}|L_j^{g}e^{\lambda(t^{0}-\tau_{ij}(t^{0}))}\bigl|y_j(t^{0}-\tau_{ij}(t^{0}))\bigr|\\
\le{}&-(a_i-\lambda)K\xi_i+(a_i-\lambda)a_i\bar{\rho}_i\frac{K\xi_i}{1-a_i\bar{\rho}_i}
+\bigl[a_i e^{\lambda\rho_i(t^{0})}-a_i(1-\dot{\rho}_i(t^{0}))\bigr]\frac{K\xi_i}{1-a_i\bar{\rho}_i}
+k_i e^{\lambda L}\frac{K\xi_i}{1-a_i\bar{\rho}_i}\\
&+\sum_{j=1}^{n}|b_{ij}^{(1)}|L_j^{g}\frac{K\eta_j}{1-c_j\bar{r}_j}
+e^{\lambda\bar{\tau}_{ij}}\sum_{j=1}^{n}|b_{ij}^{(2)}|L_j^{g}\frac{K\eta_j}{1-c_j\bar{r}_j}\\
\le{}&\Bigl\{\bigl[-(a_i-\lambda)(1-2a_i\bar{\rho}_i)+a_i\bigl(e^{\lambda\bar{\rho}_i}-(1-\rho)\bigr)+k_i e^{\lambda L}\bigr]\frac{\xi_i}{1-a_i\bar{\rho}_i}
+\Bigl[\sum_{j=1}^{n}|b_{ij}^{(1)}|+e^{\lambda\bar{\tau}_{ij}}\sum_{j=1}^{n}|b_{ij}^{(2)}|\Bigr]L_j^{g}\frac{\eta_j}{1-c_j\bar{r}_j}\Bigr\}K\\
={}&\Phi_i(\lambda)K<0,
\end{aligned}
$$

which contradicts (a).

If (b) holds, we get

$$
\begin{aligned}
\dot{X}_i(t^{0})\ge{}&(a_i-\lambda)K\xi_i-(a_i-\lambda)a_i\bar{\rho}_i\frac{K\xi_i}{1-a_i\bar{\rho}_i}
-\bigl[a_i-a_i(1-\dot{\rho}_i(t^{0}))e^{-\lambda\rho_i(t^{0})}\bigr]e^{\lambda\rho_i(t^{0})}e^{\lambda(t^{0}-\rho_i(t^{0}))}\bigl|x_i(t^{0}-\rho_i(t^{0}))\bigr|\\
&-k_i e^{\lambda L}\frac{K\xi_i}{1-a_i\bar{\rho}_i}
-e^{\lambda t^{0}}\sum_{j=1}^{n}|b_{ij}^{(1)}|L_j^{g}|y_j(t^{0})|
-e^{\lambda\tau_{ij}(t^{0})}\sum_{j=1}^{n}|b_{ij}^{(2)}|L_j^{g}e^{\lambda(t^{0}-\tau_{ij}(t^{0}))}\bigl|y_j(t^{0}-\tau_{ij}(t^{0}))\bigr|\\
\ge{}&(a_i-\lambda)K\xi_i-(a_i-\lambda)a_i\bar{\rho}_i\frac{K\xi_i}{1-a_i\bar{\rho}_i}
-\bigl[a_i e^{\lambda\rho_i(t^{0})}-a_i(1-\dot{\rho}_i(t^{0}))\bigr]\frac{K\xi_i}{1-a_i\bar{\rho}_i}
-k_i e^{\lambda L}\frac{K\xi_i}{1-a_i\bar{\rho}_i}\\
&-\sum_{j=1}^{n}|b_{ij}^{(1)}|L_j^{g}\frac{K\eta_j}{1-c_j\bar{r}_j}
-e^{\lambda\bar{\tau}_{ij}}\sum_{j=1}^{n}|b_{ij}^{(2)}|L_j^{g}\frac{K\eta_j}{1-c_j\bar{r}_j}\\
\ge{}&\Bigl\{\bigl[-(a_i-\lambda)(1-2a_i\bar{\rho}_i)+a_i\bigl(e^{\lambda\bar{\rho}_i}-(1-\rho)\bigr)+k_i e^{\lambda L}\bigr]\frac{\xi_i}{1-a_i\bar{\rho}_i}
+\Bigl[\sum_{j=1}^{n}|b_{ij}^{(1)}|+e^{\lambda\bar{\tau}_{ij}}\sum_{j=1}^{n}|b_{ij}^{(2)}|\Bigr]L_j^{g}\frac{\eta_j}{1-c_j\bar{r}_j}\Bigr\}(-K)\\
={}&-\Phi_i(\lambda)K>0.
\end{aligned}
$$

This contradicts (b).

Similarly, if (c) or (d) holds, a contradiction can be derived in the same way. Hence (9) holds. From (7) and (9), we have

$$
|X_i(t)|\le M<K\xi_i,\qquad |Y_i(t)|\le M<K\eta_i,\qquad\text{for all } t\in(-\infty,t_1).
$$
(12)

Next, we will prove

$$
|X_i(t)|\le M<K\xi_i,\qquad |Y_i(t)|\le M<K\eta_i,\qquad\text{for } t\in[t_1,t_2),\ i\in\tilde{N}.
$$
(13)

If this is not the case, there exist $i\in\tilde{N}$ and $t^{1}\in[t_1,t_2)$ such that one of the following cases occurs:

$$
\begin{aligned}
&(\mathrm{a})\quad X_i(t^{1})=K\xi_i,\ \ \dot{X}_i(t^{1})\ge 0,\ \ |X_j(t)|<K\xi_j,\ |Y_j(t)|<K\eta_j \ \text{ for } t\in(-\infty,t^{1}),\ j\in\tilde{N};\\
&(\mathrm{b})\quad X_i(t^{1})=-K\xi_i,\ \ \dot{X}_i(t^{1})\le 0,\ \ |X_j(t)|<K\xi_j,\ |Y_j(t)|<K\eta_j \ \text{ for } t\in(-\infty,t^{1}),\ j\in\tilde{N};\\
&(\mathrm{c})\quad Y_i(t^{1})=K\eta_i,\ \ \dot{Y}_i(t^{1})\ge 0,\ \ |X_j(t)|<K\xi_j,\ |Y_j(t)|<K\eta_j \ \text{ for } t\in(-\infty,t^{1}),\ j\in\tilde{N};\\
&(\mathrm{d})\quad Y_i(t^{1})=-K\eta_i,\ \ \dot{Y}_i(t^{1})\le 0,\ \ |X_j(t)|<K\xi_j,\ |Y_j(t)|<K\eta_j \ \text{ for } t\in(-\infty,t^{1}),\ j\in\tilde{N}.
\end{aligned}
$$
(14)

Similar to the proof of (9), we can deduce that (13) holds. Combining (12) and (13), we have

$$
|X_i(t)|\le M<K\xi_i,\qquad |Y_i(t)|\le M<K\eta_i,\qquad\text{for all } t\in(-\infty,t_2).
$$
(15)

Using mathematical induction, the inequalities (8) hold. By a proof similar to that of (11), we have $e^{\lambda t}|x_i(t)|\le\frac{K\xi_i}{1-a_i\bar{\rho}_i}$ and $e^{\lambda t}|y_i(t)|\le\frac{K\eta_i}{1-c_i\bar{r}_i}$ for $t>0$, which implies $|x_i(t)|=O(e^{-\lambda t})$, $|y_i(t)|=O(e^{-\lambda t})$, $i\in\tilde{N}$. This completes the proof. □

Remark 2 If the leakage delays in (3) are constant, that is, $\rho_i(t)\equiv\rho$ and $r_i(t)\equiv r$, then Assumption 2 reduces to the following form.

Assumption 2′ Let $a_i\rho<1$ and $c_i r<1$ for all $i\in\tilde{N}$. There exist positive constants $\xi_1,\xi_2,\ldots,\xi_n$ and $\eta_1,\eta_2,\ldots,\eta_n$ such that, for $t>0$ and $i\in\tilde{N}$, the following conditions hold:

$$
\begin{cases}
\bigl[-a_i(1-2\rho a_i)+k_i\bigr]\dfrac{\xi_i}{1-a_i\rho}
+\Bigl[\sum_{j=1}^{n}|b_{ij}^{(1)}|+\sum_{j=1}^{n}|b_{ij}^{(2)}|\Bigr]L_j^{g}\dfrac{\eta_j}{1-c_j r}<0,\\[2mm]
\bigl[-c_i(1-2r c_i)+l_i\bigr]\dfrac{\eta_i}{1-c_i r}
+\Bigl[\sum_{j=1}^{n}|d_{ij}^{(1)}|+\sum_{j=1}^{n}|d_{ij}^{(2)}|\Bigr]L_j^{f}\dfrac{\xi_j}{1-a_j\rho}<0.
\end{cases}
$$
(16)

Similar to the proof of Theorem 1, we get the following result.

Corollary 1 If Assumptions 1 and 2′ hold, then the BAM neural network with constant leakage delays and sampled-data state feedback inputs is exponentially stable.

4 Simulation example

In this section, we give an illustrative example to show the effectiveness of our theoretical results.

Example 1 Consider the following BAM neural network with leakage delays and sampled-data state feedback inputs:

$$
\begin{cases}
\dot{x}(t)=-Ax(t-\rho(t))+B_1 g\bigl(y(t)\bigr)+B_2 g\bigl(y(t-\tau(t))\bigr)-Kx(t_k),\\[1mm]
\dot{y}(t)=-Cy(t-r(t))+D_1 f\bigl(x(t)\bigr)+D_2 f\bigl(x(t-\sigma(t))\bigr)-Ly(t_k),
\end{cases}
$$
(17)

where

$$
A=\begin{bmatrix}1&0&0\\0&1&0\\0&0&1\end{bmatrix},\qquad
C=\begin{bmatrix}0.9&0&0\\0&0.9&0\\0&0&0.9\end{bmatrix},\qquad
B_1=\begin{bmatrix}0&0&0.2\\0.2&0&0.5\\0&0.2&1\end{bmatrix},
$$
$$
B_2=\begin{bmatrix}1&0&0.2\\0.2&0&0.4\\0.2&0.2&0\end{bmatrix},\qquad
D_1=\begin{bmatrix}0.1&0.1&0\\0.1&0.1&0.1\\0&0&0.1\end{bmatrix},\qquad
D_2=\begin{bmatrix}0.1&0&0\\0&0.1&0\\0&0&0.1\end{bmatrix},
$$

and the sampled-data gain matrices

$$
K=L=\begin{bmatrix}0.2&0&0\\0&0.2&0\\0&0&0.2\end{bmatrix}.
$$

The activation functions are taken as $f(\cdot)=g(\cdot)=0.4\tanh(\cdot)$. The time-varying delays are chosen as $\tau(t)=0.1|\sin t|$ and $\sigma(t)=0.1|\cos t|$, and the leakage delays are chosen as $\rho_i(t)=0.2+0.01\sin t$ and $r_i(t)=0.2+0.01\cos t$, respectively.

It is easy to verify that $a_i\bar{\rho}_i<1$ and $c_i\bar{r}_i<1$. Selecting $\xi_i=20$, $\eta_i=10$, $i=1,2,3$, we obtain

$$
\begin{cases}
\bigl[-a_1(1-2a_1\bar{\rho}_1)+a_1\rho+k_1\bigr]\dfrac{\xi_1}{1-a_1\bar{\rho}_1}+\Bigl[\sum_{j=1}^{3}|b_{1j}^{(1)}|+\sum_{j=1}^{3}|b_{1j}^{(2)}|\Bigr]L_1^{g}\dfrac{\eta_1}{1-c_1\bar{r}_1}=-2.9207<0,\\[2mm]
\bigl[-a_2(1-2a_2\bar{\rho}_2)+a_2\rho+k_2\bigr]\dfrac{\xi_2}{1-a_2\bar{\rho}_2}+\Bigl[\sum_{j=1}^{3}|b_{2j}^{(1)}|+\sum_{j=1}^{3}|b_{2j}^{(2)}|\Bigr]L_2^{g}\dfrac{\eta_2}{1-c_2\bar{r}_2}=-3.4085<0,\\[2mm]
\bigl[-a_3(1-2a_3\bar{\rho}_3)+a_3\rho+k_3\bigr]\dfrac{\xi_3}{1-a_3\bar{\rho}_3}+\Bigl[\sum_{j=1}^{3}|b_{3j}^{(1)}|+\sum_{j=1}^{3}|b_{3j}^{(2)}|\Bigr]L_3^{g}\dfrac{\eta_3}{1-c_3\bar{r}_3}=-1.9451<0,\\[2mm]
\bigl[-c_1(1-2c_1\bar{r}_1)+c_1 r+l_1\bigr]\dfrac{\eta_1}{1-c_1\bar{r}_1}+\Bigl[\sum_{j=1}^{3}|d_{1j}^{(1)}|+\sum_{j=1}^{3}|d_{1j}^{(2)}|\Bigr]L_1^{f}\dfrac{\xi_1}{1-a_1\bar{\rho}_1}=-1.4756<0,\\[2mm]
\bigl[-c_2(1-2c_2\bar{r}_2)+c_2 r+l_2\bigr]\dfrac{\eta_2}{1-c_2\bar{r}_2}+\Bigl[\sum_{j=1}^{3}|d_{2j}^{(1)}|+\sum_{j=1}^{3}|d_{2j}^{(2)}|\Bigr]L_2^{f}\dfrac{\xi_2}{1-a_2\bar{\rho}_2}=-0.4756<0,\\[2mm]
\bigl[-c_3(1-2c_3\bar{r}_3)+c_3 r+l_3\bigr]\dfrac{\eta_3}{1-c_3\bar{r}_3}+\Bigl[\sum_{j=1}^{3}|d_{3j}^{(1)}|+\sum_{j=1}^{3}|d_{3j}^{(2)}|\Bigr]L_3^{f}\dfrac{\xi_3}{1-a_3\bar{\rho}_3}=-0.4756<0.
\end{cases}
$$
(18)

This means that all conditions of Theorem 1 are satisfied; hence, by Theorem 1, system (17) is exponentially stable. The corresponding simulation result is shown in Figure 1.

Figure 1 State trajectory of the system (17).
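For readers who wish to reproduce a figure of this kind, the following is a minimal simulation sketch in Python (our own code, not the authors'). The forward Euler scheme, the step size, the sampling period of 0.1, and the constant initial histories are all assumptions, since the paper does not report these simulation details.

```python
import numpy as np

# Minimal Euler simulation of system (17); step size, sampling period, horizon,
# and initial histories below are assumed for illustration only.
A  = np.eye(3);  C = 0.9 * np.eye(3)
B1 = np.array([[0.0, 0.0, 0.2], [0.2, 0.0, 0.5], [0.0, 0.2, 1.0]])
B2 = np.array([[1.0, 0.0, 0.2], [0.2, 0.0, 0.4], [0.2, 0.2, 0.0]])
D1 = np.array([[0.1, 0.1, 0.0], [0.1, 0.1, 0.1], [0.0, 0.0, 0.1]])
D2 = 0.1 * np.eye(3)
K_gain = L_gain = 0.2 * np.eye(3)            # sampled-data gains K and L

f = g = lambda z: 0.4 * np.tanh(z)
h, T, Ts = 0.001, 30.0, 0.1                  # step size, horizon, sampling period (assumed)
steps, hist = int(T / h), int(0.3 / h)       # history window covers all delays (all < 0.3)

x = np.zeros((steps + hist, 3)); y = np.zeros((steps + hist, 3))
x[:hist] = [0.5, -0.4, 0.3]                  # assumed constant initial functions
y[:hist] = [-0.6, 0.2, -0.1]
xk, yk = x[hist].copy(), y[hist].copy()      # most recent sampled states

for n in range(hist, steps + hist - 1):
    t = (n - hist) * h
    if (n - hist) % round(Ts / h) == 0:      # sampling instant t_k: refresh the hold
        xk, yk = x[n].copy(), y[n].copy()
    lag = lambda arr, delay: arr[n - int(round(delay / h))]
    dx = -A @ lag(x, 0.2 + 0.01 * np.sin(t)) + B1 @ g(y[n]) \
         + B2 @ g(lag(y, 0.1 * abs(np.sin(t)))) - K_gain @ xk
    dy = -C @ lag(y, 0.2 + 0.01 * np.cos(t)) + D1 @ f(x[n]) \
         + D2 @ f(lag(x, 0.1 * abs(np.cos(t)))) - L_gain @ yk
    x[n + 1] = x[n] + h * dx
    y[n + 1] = y[n] + h * dy

print("max |x(T)|, |y(T)|:", np.abs(x[-1]).max(), np.abs(y[-1]).max())  # expected to decay toward 0
```

With these (assumed) settings the printed terminal values should be close to zero, which is consistent with the exponential stability asserted by Theorem 1.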

5 Conclusion

In this paper, we have investigated the stability of BAM neural networks with leakage delays and sampled-data state feedback inputs. By using the input delay approach, conditions ensuring the exponential stability of the system have been derived. It should be pointed out that, although many papers focus on the stability of sampled-data systems, leakage delays and sampled-data state feedback have not previously been considered together in BAM neural networks. To the best of our knowledge, this is the first work to study the stability of BAM neural networks with both leakage delays and sampled-data state feedback at the same time, so the results of this paper complement the existing ones. Finally, a numerical example and its computer simulation have been presented to show the effectiveness of our theoretical results.