1 Introduction

It is well known that neural networks with delays exhibit rich dynamical behavior, which has recently been investigated by Huang and Li [1] and the references therein. It is natural that such systems should contain some information about the past rate of change, since this allows them to describe and model the dynamics of neural network applications more effectively [24]. As a consequence, researchers have paid increasing attention to the stability of neural networks described by nonlinear delay differential equations of neutral type (see [4-8])

$$\dot{u}_i(t) = -a_i(t)u_i(t) + \sum_{j=1}^{m} b_{ij}(t)g_j(u_j(t)) + \sum_{j=1}^{m} c_{ij}\dot{u}_j(t-\tau) + \sum_{j=1}^{m} d_{ij}(t)g_j\!\left(\int_{-\infty}^{t} h_j(t-s)u_j(s)\,ds\right) + I_i(t), \quad i\in N:=\{1,2,\ldots,m\}.$$
(1.1)

Cheng et al. first investigated the global asymptotic stability of a class of neutral-type neural networks with delays [6]. A delay-dependent criterion was obtained in [5] by using Lyapunov stability theory and a linear matrix inequality. Recently, less conservative robust stability criteria for neutral-type networks with delays were proposed in [4] by using a new Lyapunov-Krasovskii functional and a novel series compensation technique. For more related results, we refer to [4, 7] and the references cited therein.

Difference equations, or discrete-time analogs of differential equations, can preserve the convergence dynamics of their continuous-time counterparts to some degree [9]. Thus, owing to their use in computer simulations and applications, such discrete-time or difference networks have been studied in depth by the authors of [10-15] and extended to periodic or almost periodic difference neural systems [16-21].

However, few papers deal with the multiperiodicity of neutral-type difference neural networks with delays. Stimulated by the articles [22, 23], in this article we consider the corresponding neutral-type difference version of (1.1) as follows:

$$u_i(n+1) = a_i(n)u_i(n) + \sum_{j=1}^{m} c_{ij}\Delta u_j(n-\tau) + \sum_{j=1}^{m} b_{ij}(n)g_j(u_j(n)) + \sum_{j=1}^{m} d_{ij}(n)g_j\!\left(\sum_{v=1}^{\infty} h_j(v)u_j(n-v)\right) + I_i(n),$$
(1.2)

where $i\in N:=\{1,2,\ldots,m\}$. Our main aim is to study the biperiodicity of the above neutral-type difference neural networks. Some new criteria for the coexistence of a periodic sequence solution of (1.2) and its anti-sign counterpart are derived by using Krasnoselskii's fixed point theorem. Our results are completely different from the monoperiodicity results in [16-20].
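Before turning to the analysis, the recursion (1.2) can be iterated directly. The following is a minimal simulation sketch; every parameter value below (the dimension, delay, matrices, inputs, and kernel) is made up purely for illustration, and only the recursion itself comes from (1.2).

```python
import numpy as np

# Hypothetical parameters; only the update rule mirrors system (1.2).
m, tau, K, N = 2, 3, 10, 200               # K truncates the kernel sum over v
a = lambda n: np.full(m, 0.9)               # 0 < a_i(n) < 1
B = lambda n: np.array([[0.5, 0.1], [0.1, 0.5]])
D = lambda n: np.full((m, m), 0.05)
C = np.array([[0.2, 0.1], [0.1, 0.2]])      # neutral coefficients c_ij
I = lambda n: 0.02 * np.array([np.cos(0.2 * np.pi * n),
                               np.sin(0.2 * np.pi * n)])
g = np.tanh                                 # bounded, strictly increasing, odd
h = np.zeros(K + 1); h[K] = 1.0             # kernel with sum_v h_j(v) = 1

hist = max(tau, K)                          # history length needed at n = 0
u = np.zeros((N + hist + 1, m))             # u[k] stores u(k - hist)
for k in range(hist, N + hist):
    n = k - hist
    dlt = u[k - tau + 1] - u[k - tau]       # Delta u_j(n - tau)
    conv = g(np.array([sum(h[v] * u[k - v, j] for v in range(1, K + 1))
                       for j in range(m)])) # g_j(sum_v h_j(v) u_j(n - v))
    u[k + 1] = a(n) * u[k] + C @ dlt + B(n) @ g(u[k]) + D(n) @ conv + I(n)
```

With a bounded activation and $0 < a_i(n) < 1$, the iterates remain bounded in this sketch, consistent with the qualitative picture studied below.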

The rest of this article is organized as follows. In Section 2, we make some preparations by giving several lemmas and stating Krasnoselskii's fixed point theorem. In Section 3, we give new criteria for the biperiodicity of (1.2). Finally, two numerical examples are given to illustrate our results.

2 Preliminaries

We begin this section by introducing some notations and lemmas. Let $S_T$ be the set of all real T-periodic sequences defined on ℤ, where T ≥ 1 is an integer. Then $S_T$ is a Banach space when it is endowed with the norm

$$\|u\| = \max_{i\in N}\,\sup_{s\in[0,T]}|u_i(s)|.$$

Denote $[a,b]:=\{a, a+1,\ldots,b\}$, where $a,b\in\mathbb{Z}$ and $a\le b$. Let $C((-\infty,0],\mathbb{R}^m)$ be the set of all continuous and bounded functions $\psi(s)=(\psi_1(s),\psi_2(s),\ldots,\psi_m(s))^{T}$ mapping $(-\infty,0]$ into $\mathbb{R}^m$. For any given $\psi\in C((-\infty,0],\mathbb{R}^m)$, we denote by $\{u(n;\psi)\}$ the sequence solution of system (1.2). Next, we present the basic assumptions:

  • Assumption (H1): Each of $a_i(\cdot)$, $b_{ij}(\cdot)$, $d_{ij}(\cdot)$, and $I_i(\cdot)$ is a T-periodic function defined on ℤ, and $0 < a_i(n) < 1$. The activation $g_j(\cdot)$ is strictly increasing and bounded with $g_j^- := \lim_{v\to-\infty} g_j(v) < g_j(v) < \lim_{v\to+\infty} g_j(v) =: g_j^+$ for all $v\in\mathbb{R}$. The kernel $h_j:\mathbb{N}\to\mathbb{R}^+$ is a bounded sequence with $\sum_{v=1}^{\infty}h_j(v)=1$, where $i,j\in N$.

For each iN and any n ∈ ℤ, we let

$$G_i(n,p) = \prod_{s=p+1}^{n+T-1} a_i(s)\left[1-\prod_{s=n}^{n+T-1}a_i(s)\right]^{-1}, \quad p\in[n,\,n+T-1].$$
(2.1)

Since $0 < a_i(n) < 1$ for all $n\in[0,T-1]$, each $G_i(n,p)$ is positive and

$$m_i := \min\{G_i(n,p): n\in\mathbb{Z},\ p\in[n,n+T-1]\} = G_i(n,n) = G_i(0,0) > 0,$$
$$M_i := \max\{G_i(n,p): n\in\mathbb{Z},\ p\in[n,n+T-1]\} = G_i(n,n+T-1) = G_i(0,T-1) > 0.$$
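A small numerical sketch can illustrate $G_i(n,p)$ from (2.1) and its extremes for one neuron; the T-periodic coefficient $a_i(n)$ below is a hypothetical choice, not taken from the results of this section.

```python
import numpy as np

# Hypothetical T-periodic coefficient with 0 < a_i(n) < 1.
T = 10
n_grid = np.arange(T)
a = np.exp(-0.1 - 0.01 * np.cos(0.2 * np.pi * n_grid))

def G(n, p):
    """G_i(n,p) = prod_{s=p+1}^{n+T-1} a_i(s) * [1 - prod_{s=n}^{n+T-1} a_i(s)]^{-1}."""
    num = np.prod([a[s % T] for s in range(p + 1, n + T)])  # empty product = 1
    den = 1.0 - np.prod([a[s % T] for s in range(n, n + T)])
    return num / den

vals = [G(n, p) for n in range(T) for p in range(n, n + T)]
m_i, M_i = min(vals), max(vals)   # extremes occur at p = n and p = n + T - 1
print(m_i, M_i)
```

Since each factor $a_i(s)$ lies in $(0,1)$, the numerator shrinks as $p$ decreases, which is why the minimum occurs at $p=n$ and the maximum at $p=n+T-1$.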

Lemma 2.1. For each $i\in N$ and $p\in\mathbb{Z}^+$,

$$P\!\left[\prod_{s=0}^{p-1}a_i^{-1}(s)\right]\Delta u_i(p-\tau) + u_i(p-\tau)\,\Delta\!\left[\prod_{s=0}^{p-1}a_i^{-1}(s)\right] = \Delta\!\left[\prod_{s=0}^{p-1}a_i^{-1}(s)\,u_i(p-\tau)\right]$$

holds for any sequence solution $\{u(n)\}$ of (1.2), where $P$ is the shift operator defined by $(Pf)(p) = f(p+1)$ and $\Delta$ is the forward difference operator $(\Delta f)(p) = f(p+1) - f(p)$, for $i\in N$ and $p\in\mathbb{Z}^+$.

Proof.

$$\begin{aligned}
&P\!\left[\prod_{s=0}^{p-1}a_i^{-1}(s)\right]\Delta u_i(p-\tau) + u_i(p-\tau)\,\Delta\!\left[\prod_{s=0}^{p-1}a_i^{-1}(s)\right]\\
&\quad=\prod_{s=0}^{p}a_i^{-1}(s)\big[u_i(p+1-\tau)-u_i(p-\tau)\big] + u_i(p-\tau)\left[\prod_{s=0}^{p}a_i^{-1}(s)-\prod_{s=0}^{p-1}a_i^{-1}(s)\right]\\
&\quad=\prod_{s=0}^{p}a_i^{-1}(s)\,u_i(p+1-\tau) - \prod_{s=0}^{p-1}a_i^{-1}(s)\,u_i(p-\tau)\\
&\quad=\Delta\!\left[\prod_{s=0}^{p-1}a_i^{-1}(s)\,u_i(p-\tau)\right].
\end{aligned}$$

The proof is complete.
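The identity in Lemma 2.1 is just the discrete product rule, and it can be sanity-checked numerically with arbitrary made-up data; the helper `A` below is a hypothetical stand-in for the product $\prod_{s=0}^{p-1}a_i^{-1}(s)$.

```python
import numpy as np

# P f(p) = f(p+1), Delta f(p) = f(p+1) - f(p); data is arbitrary.
rng = np.random.default_rng(0)
a = rng.uniform(0.8, 0.99, size=50)    # stands in for a_i(s)
u = rng.normal(size=60)                # stands in for u_i(. - tau)

def A(p):
    # A(p) = prod_{s=0}^{p-1} a^{-1}(s), so P[A](p) = A(p + 1)
    return float(np.prod(1.0 / a[:p]))

for p in range(1, 45):
    lhs = A(p + 1) * (u[p + 1] - u[p]) + u[p] * (A(p + 1) - A(p))
    rhs = A(p + 1) * u[p + 1] - A(p) * u[p]    # Delta[A(p) u(p)]
    assert abs(lhs - rhs) <= 1e-8 * (abs(lhs) + abs(rhs) + 1.0)
print("Lemma 2.1 identity holds numerically")
```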

Lemma 2.2. Assume that (H1) holds. A sequence $\{u(n)\}\in S_T^m := \underbrace{S_T\times S_T\times\cdots\times S_T}_{m}$ is a solution of (1.2) if and only if

$$u_i(n) = \sum_{j=1}^{m}c_{ij}u_j(n-\tau) + \sum_{p=n}^{n+T-1}G_i(n,p)\left[\sum_{j=1}^{m}b_{ij}(p)g_j(u_j(p)) + \sum_{j=1}^{m}d_{ij}(p)g_j\!\left(\sum_{v=1}^{\infty}h_j(v)u_j(p-v)\right) + I_i(p) - \sum_{j=1}^{m}c_{ij}u_j(p-\tau)(1-a_i(p))\right],$$
(2.2)

where $G_i(n,p)$ is defined by (2.1) for $i\in N$ and $p\in[n,n+T-1]$.

Proof. Rewrite (1.2) as

$$\Delta\!\left[u_i(n)\prod_{s=0}^{n-1}a_i^{-1}(s)\right] = \left[\sum_{j=1}^{m}c_{ij}\Delta u_j(n-\tau) + \sum_{j=1}^{m}b_{ij}(n)g_j(u_j(n)) + \sum_{j=1}^{m}d_{ij}(n)g_j\!\left(\sum_{v=1}^{\infty}h_j(v)u_j(n-v)\right) + I_i(n)\right]\prod_{s=0}^{n}a_i^{-1}(s),$$
(2.3)

where iN and n ∈ ℤ+. Summing (2.3) from n to n + T - 1, we obtain

$$\sum_{p=n}^{n+T-1}\Delta\!\left[u_i(p)\prod_{s=0}^{p-1}a_i^{-1}(s)\right] = \sum_{p=n}^{n+T-1}\left[\sum_{j=1}^{m}c_{ij}\Delta u_j(p-\tau) + \sum_{j=1}^{m}b_{ij}(p)g_j(u_j(p)) + \sum_{j=1}^{m}d_{ij}(p)g_j\!\left(\sum_{v=1}^{\infty}h_j(v)u_j(p-v)\right) + I_i(p)\right]\prod_{s=0}^{p}a_i^{-1}(s).$$

That is,

$$u_i(n+T)\prod_{s=0}^{n+T-1}a_i^{-1}(s) - u_i(n)\prod_{s=0}^{n-1}a_i^{-1}(s) = \sum_{p=n}^{n+T-1}\left[\sum_{j=1}^{m}c_{ij}\Delta u_j(p-\tau) + \sum_{j=1}^{m}b_{ij}(p)g_j(u_j(p)) + \sum_{j=1}^{m}d_{ij}(p)g_j\!\left(\sum_{v=1}^{\infty}h_j(v)u_j(p-v)\right) + I_i(p)\right]\prod_{s=0}^{p}a_i^{-1}(s).$$

Since u i (n + T) = u i (n), we obtain

$$u_i(n)\left[\prod_{s=0}^{n+T-1}a_i^{-1}(s) - \prod_{s=0}^{n-1}a_i^{-1}(s)\right] = \sum_{p=n}^{n+T-1}\left[\sum_{j=1}^{m}c_{ij}\Delta u_j(p-\tau) + \sum_{j=1}^{m}b_{ij}(p)g_j(u_j(p)) + \sum_{j=1}^{m}d_{ij}(p)g_j\!\left(\sum_{v=1}^{\infty}h_j(v)u_j(p-v)\right) + I_i(p)\right]\prod_{s=0}^{p}a_i^{-1}(s).$$
(2.4)

It follows from Lemma 2.1 that

$$\begin{aligned}
\sum_{p=n}^{n+T-1}\sum_{j=1}^{m}c_{ij}\Delta u_j(p-\tau)\prod_{s=0}^{p}a_i^{-1}(s) &= \sum_{p=n}^{n+T-1}\sum_{j=1}^{m}c_{ij}\Delta u_j(p-\tau)\,P\!\left[\prod_{s=0}^{p-1}a_i^{-1}(s)\right]\\
&= \sum_{p=n}^{n+T-1}\sum_{j=1}^{m}c_{ij}\left\{\Delta\!\left[\prod_{s=0}^{p-1}a_i^{-1}(s)\,u_j(p-\tau)\right] - u_j(p-\tau)\,\Delta\!\left[\prod_{s=0}^{p-1}a_i^{-1}(s)\right]\right\}\\
&= \sum_{j=1}^{m}c_{ij}u_j(n-\tau)\left[\prod_{s=0}^{n+T-1}a_i^{-1}(s)-\prod_{s=0}^{n-1}a_i^{-1}(s)\right] - \sum_{p=n}^{n+T-1}\sum_{j=1}^{m}c_{ij}u_j(p-\tau)(1-a_i(p))\prod_{s=0}^{p}a_i^{-1}(s),
\end{aligned}$$

where we used $\Delta\!\left[\prod_{s=0}^{p-1}a_i^{-1}(s)\right] = (1-a_i(p))\prod_{s=0}^{p}a_i^{-1}(s)$.

Therefore, one gets from (2.4) that

$$\begin{aligned}
u_i(n)\left[\prod_{s=0}^{n+T-1}a_i^{-1}(s)-\prod_{s=0}^{n-1}a_i^{-1}(s)\right] &= \sum_{j=1}^{m}c_{ij}u_j(n-\tau)\left[\prod_{s=0}^{n+T-1}a_i^{-1}(s)-\prod_{s=0}^{n-1}a_i^{-1}(s)\right] - \sum_{p=n}^{n+T-1}\sum_{j=1}^{m}c_{ij}u_j(p-\tau)(1-a_i(p))\prod_{s=0}^{p}a_i^{-1}(s)\\
&\quad + \sum_{p=n}^{n+T-1}\left[\sum_{j=1}^{m}b_{ij}(p)g_j(u_j(p)) + \sum_{j=1}^{m}d_{ij}(p)g_j\!\left(\sum_{v=1}^{\infty}h_j(v)u_j(p-v)\right) + I_i(p)\right]\prod_{s=0}^{p}a_i^{-1}(s).
\end{aligned}$$

Dividing both sides of the above equation by $\prod_{s=0}^{n+T-1}a_i^{-1}(s)-\prod_{s=0}^{n-1}a_i^{-1}(s)$ and noting that $\prod_{s=0}^{p}a_i^{-1}(s)\Big/\Big[\prod_{s=0}^{n+T-1}a_i^{-1}(s)-\prod_{s=0}^{n-1}a_i^{-1}(s)\Big] = G_i(n,p)$ completes the proof.

In what follows, we state Krasnoselskii's theorem.

Lemma 2.3. Let $M$ be a closed convex nonempty subset of a Banach space $(B, \|\cdot\|)$.

Suppose that C and B map M into B such that

(i) $x, y \in M$ implies that $Cx + By \in M$,

(ii) C is continuous and CM is contained in a compact set and

(iii) B is a contraction mapping.

Then there exists a $z \in M$ with $z = Cz + Bz$.

3 Biperiodicity of neutral-type difference networks

Due to the introduction of the neutral term $\sum_{j=1}^{m}c_{ij}\Delta u_j(n-\tau)$, we must construct two closed convex subsets $\mathbb{B}_L$ and $\mathbb{B}_R$ of $S_T^m$, which necessitates the use of Krasnoselskii's fixed point theorem. As a consequence, we are able to derive new biperiodicity criteria for (1.2): there exist a positive T-periodic sequence solution in $\mathbb{B}_R$ and an anti-sign T-periodic sequence solution in $\mathbb{B}_L$. First, for the case $c_{ij}\ge 0$, we present the following assumption:

  • Assumption (H2): For each $i,j\in N$, $c_{ij}\ge 0$, $b_{ii}(n) > 0$, and $0 < \hat{c}_i := \sum_{j=1}^{m}c_{ij} < 1$; moreover, $g_j(\cdot)$ satisfies $g_j(-v) = -g_j(v)$ for all $v\in\mathbb{R}$. In addition, there exist constants $\alpha > 0$ and $\beta > 0$ with $\alpha < \beta$ such that for all $i\in N$

$$-\frac{1-\hat{c}_i}{m_iT}\,\alpha + b_{ii}(n)g_i(\alpha) - (1-a_i(n))\hat{c}_i\beta > P_i, \qquad \frac{1-\hat{c}_i}{M_iT}\,\beta - b_{ii}(n)g_i(\beta) + (1-a_i(n))\hat{c}_i\alpha > P_i, \quad n\in\mathbb{Z},$$

where

$$P_i := \sup_{n\in\mathbb{Z}}\left\{\sum_{j\ne i}|b_{ij}(n)|g_j^+ + \sum_{j=1}^{m}|d_{ij}(n)|g_j^+ + |I_i(n)|\right\}, \quad i\in N.$$

Construct two subsets of S T as follows:

$$B_l := \{w\in S_T \mid -\beta \le w(n) \le -\alpha\}, \qquad B_r := \{w\in S_T \mid \alpha \le w(n) \le \beta\}.$$

Obviously, $\mathbb{B}_L := \underbrace{B_l\times B_l\times\cdots\times B_l}_{m}$ and $\mathbb{B}_R := \underbrace{B_r\times B_r\times\cdots\times B_r}_{m}$ are two closed convex subsets of the Banach space $S_T^m$. Define the map $B_\Sigma : \mathbb{B}_\Sigma \to S_T^m$ by

$$(B_\Sigma u)_i(n) = \sum_{j=1}^{m}c_{ij}u_j(n-\tau), \quad i\in N,$$

and the map $C_\Sigma : \mathbb{B}_\Sigma \to S_T^m$ by

$$(C_\Sigma u)_i(n) = \sum_{p=n}^{n+T-1}G_i(n,p)\left[\sum_{j=1}^{m}b_{ij}(p)g_j(u_j(p)) - \sum_{j=1}^{m}c_{ij}u_j(p-\tau)(1-a_i(p)) + \sum_{j=1}^{m}d_{ij}(p)g_j\!\left(\sum_{v=1}^{\infty}h_j(v)u_j(p-v)\right) + I_i(p)\right], \quad i\in N,$$
(3.1)

where $\Sigma = R$ or $L$. Due to the fact that $0 < \hat{c}_i < 1$, $B_\Sigma$ is a contraction mapping.
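The contraction property of $B_\Sigma$ can be illustrated numerically: the sup-norm distance between two images is bounded by $\max_i\hat{c}_i$ times the distance of the inputs. The sketch below uses made-up periodic data and the coefficient matrix of Example 1 below; none of it is required by the theory.

```python
import numpy as np

# Hypothetical one-period samples of two sequences in S_T^m.
rng = np.random.default_rng(1)
m, T, tau = 3, 10, 5
C = np.array([[0.2, 0.1, 0.05],
              [0.1, 0.25, 0.0],
              [0.05, 0.1, 0.2]])     # hat c_i = 0.35 < 1 for every row
u = rng.normal(size=(T, m))
v = rng.normal(size=(T, m))

def B_op(w):
    # (B_Sigma w)_i(n) = sum_j c_ij w_j(n - tau); T-periodicity -> index mod T
    return np.array([C @ w[(n - tau) % T] for n in range(T)])

lhs = np.max(np.abs(B_op(u) - B_op(v)))               # ||B u - B v||
rhs = np.abs(C).sum(axis=1).max() * np.max(np.abs(u - v))
print(lhs, "<=", rhs)
```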

Proposition 3.1. Under the basic assumptions (H1) and (H2), for each $\Sigma$, the operator $C_\Sigma$ is completely continuous on $\mathbb{B}_\Sigma$.

Proof. For any given $\Sigma$ and $u\in\mathbb{B}_\Sigma$, we estimate $(C_\Sigma u)_i(n)$ in two cases.

  • Case 1: If $\Sigma = R$ and $u\in\mathbb{B}_R$, then $u_i(n)\in[\alpha,\beta]$ for each $i\in N$ and all $n\in\mathbb{Z}$. It follows from (3.1) and (H2) that

$$\begin{aligned}
(C_Ru)_i(n) &\le \sum_{p=n}^{n+T-1}G_i(n,p)\left[b_{ii}(p)g_i(\beta) + \sum_{j\ne i}|b_{ij}(p)|g_j^+ - \sum_{j=1}^{m}c_{ij}\alpha(1-a_i(p)) + \sum_{j=1}^{m}|d_{ij}(p)|g_j^+ + |I_i(p)|\right]\\
&\le \sum_{p=n}^{n+T-1}G_i(n,p)\left[-\hat{c}_i(1-a_i(p))\alpha + b_{ii}(p)g_i(\beta) + P_i\right] \le TM_i\,\frac{1-\hat{c}_i}{M_iT}\,\beta = (1-\hat{c}_i)\beta
\end{aligned}$$

and

$$\begin{aligned}
(C_Ru)_i(n) &\ge \sum_{p=n}^{n+T-1}G_i(n,p)\left[b_{ii}(p)g_i(\alpha) - \sum_{j\ne i}|b_{ij}(p)|g_j^+ - \sum_{j=1}^{m}c_{ij}\beta(1-a_i(p)) - \sum_{j=1}^{m}|d_{ij}(p)|g_j^+ - |I_i(p)|\right]\\
&\ge \sum_{p=n}^{n+T-1}G_i(n,p)\left[-\hat{c}_i(1-a_i(p))\beta + b_{ii}(p)g_i(\alpha) - P_i\right] \ge Tm_i\,\frac{1-\hat{c}_i}{m_iT}\,\alpha = (1-\hat{c}_i)\alpha.
\end{aligned}$$
  • Case 2: If $\Sigma = L$ and $u\in\mathbb{B}_L$, then $u_i(n)\in[-\beta,-\alpha]$ for each $i\in N$ and all $n\in\mathbb{Z}$. It follows from (3.1) and (H2) that

$$\begin{aligned}
(C_Lu)_i(n) &\ge \sum_{p=n}^{n+T-1}G_i(n,p)\left[-b_{ii}(p)g_i(\beta) - \sum_{j\ne i}|b_{ij}(p)|g_j^+ - \sum_{j=1}^{m}c_{ij}(-\alpha)(1-a_i(p)) - \sum_{j=1}^{m}|d_{ij}(p)|g_j^+ - |I_i(p)|\right]\\
&\ge \sum_{p=n}^{n+T-1}G_i(n,p)\left[\hat{c}_i(1-a_i(p))\alpha - b_{ii}(p)g_i(\beta) - P_i\right] \ge TM_i\left[-\frac{1-\hat{c}_i}{M_iT}\,\beta\right] = -(1-\hat{c}_i)\beta
\end{aligned}$$

and

$$\begin{aligned}
(C_Lu)_i(n) &\le \sum_{p=n}^{n+T-1}G_i(n,p)\left[-b_{ii}(p)g_i(\alpha) + \sum_{j\ne i}|b_{ij}(p)|g_j^+ - \sum_{j=1}^{m}c_{ij}(-\beta)(1-a_i(p)) + \sum_{j=1}^{m}|d_{ij}(p)|g_j^+ + |I_i(p)|\right]\\
&\le \sum_{p=n}^{n+T-1}G_i(n,p)\left[\hat{c}_i(1-a_i(p))\beta - b_{ii}(p)g_i(\alpha) + P_i\right] \le Tm_i\left[-\frac{1-\hat{c}_i}{m_iT}\,\alpha\right] = -(1-\hat{c}_i)\alpha.
\end{aligned}$$

It follows from the above two estimates of $(C_\Sigma u)_i(n)$ that $\|C_\Sigma u\| \le (1-\min_{i\in N}\hat{c}_i)\beta \le \beta$. This shows that $C_\Sigma(\mathbb{B}_\Sigma)$ is uniformly bounded. Together with the continuity of $C_\Sigma$, for any bounded sequence $\{\psi_n\}$ in $\mathbb{B}_\Sigma$ there exists a subsequence $\{\psi_{n_k}\}$ such that $\{C_\Sigma(\psi_{n_k})\}$ is convergent in $C_\Sigma(\mathbb{B}_\Sigma)$. Therefore, $C_\Sigma$ is compact on $\mathbb{B}_\Sigma$. This completes the proof.

Theorem 3.1. Under the basic assumptions (H1) and (H2), for each $\Sigma$, (1.2) has a T-periodic solution $u^\Sigma$ satisfying $u^\Sigma\in\mathbb{B}_\Sigma$.

Proof. Let $u,\hat{u}\in\mathbb{B}_\Sigma$. We show that $B_\Sigma u + C_\Sigma\hat{u}\in\mathbb{B}_\Sigma$. For simplicity, we only consider the case $\Sigma = R$. It follows from (2.2) and (H2) that

$$\begin{aligned}
(B_Ru)_i(n) + (C_R\hat{u})_i(n) &= \sum_{j=1}^{m}c_{ij}u_j(n-\tau) + \sum_{p=n}^{n+T-1}G_i(n,p)\left[\sum_{j=1}^{m}b_{ij}(p)g_j(\hat{u}_j(p)) - \sum_{j=1}^{m}c_{ij}\hat{u}_j(p-\tau)(1-a_i(p)) + \sum_{j=1}^{m}d_{ij}(p)g_j\!\left(\sum_{v=1}^{\infty}h_j(v)\hat{u}_j(p-v)\right) + I_i(p)\right]\\
&\le \sum_{j=1}^{m}c_{ij}\beta + \sum_{p=n}^{n+T-1}G_i(n,p)\left[b_{ii}(p)g_i(\beta) + \sum_{j\ne i}|b_{ij}(p)|g_j^+ - \sum_{j=1}^{m}c_{ij}\alpha(1-a_i(p)) + \sum_{j=1}^{m}|d_{ij}(p)|g_j^+ + |I_i(p)|\right]\\
&\le \hat{c}_i\beta + TM_i\,\frac{1-\hat{c}_i}{M_iT}\,\beta = \beta.
\end{aligned}$$

On the other hand,

$$\begin{aligned}
(B_Ru)_i(n) + (C_R\hat{u})_i(n) &= \sum_{j=1}^{m}c_{ij}u_j(n-\tau) + \sum_{p=n}^{n+T-1}G_i(n,p)\left[\sum_{j=1}^{m}b_{ij}(p)g_j(\hat{u}_j(p)) - \sum_{j=1}^{m}c_{ij}\hat{u}_j(p-\tau)(1-a_i(p)) + \sum_{j=1}^{m}d_{ij}(p)g_j\!\left(\sum_{v=1}^{\infty}h_j(v)\hat{u}_j(p-v)\right) + I_i(p)\right]\\
&\ge \sum_{j=1}^{m}c_{ij}\alpha + \sum_{p=n}^{n+T-1}G_i(n,p)\left[b_{ii}(p)g_i(\alpha) - \sum_{j\ne i}|b_{ij}(p)|g_j^+ - \sum_{j=1}^{m}c_{ij}\beta(1-a_i(p)) - \sum_{j=1}^{m}|d_{ij}(p)|g_j^+ - |I_i(p)|\right]\\
&\ge \hat{c}_i\alpha + Tm_i\,\frac{1-\hat{c}_i}{m_iT}\,\alpha = \alpha.
\end{aligned}$$

Therefore, all the hypotheses of Lemma 2.3 are satisfied. Hence, (1.2) has a T-periodic solution $u^R$ satisfying $u^R\in\mathbb{B}_R$. Almost the same argument works for the case $\Sigma = L$. The proof is complete.

For the case c ij < 0, we present the following assumption:

  • Assumption $(\hat{H}_2)$: For each $i,j\in N$, $c_{ij}\le 0$ and $-1 < \hat{c}_i := \sum_{j=1}^{m}c_{ij} < 0$. There exist constants $\alpha > 0$ and $\beta > 0$ with $\alpha < \beta$ such that for all $i\in N$ and $n\in\mathbb{Z}$

$$(1-a_i(n))\hat{c}_i\beta + \frac{\beta-\hat{c}_i\alpha}{M_iT} > Q_i, \qquad -(1-a_i(n))\hat{c}_i\alpha + \frac{\hat{c}_i\beta-\alpha}{m_iT} > Q_i,$$

where

$$Q_i := \sup_{n\in\mathbb{Z}}\left\{\sum_{j=1}^{m}\big(|b_{ij}(n)| + |d_{ij}(n)|\big)g_j^+ + |I_i(n)|\right\}.$$

Similarly to Proposition 3.1, we can obtain the following.

Proposition 3.2. Under the basic assumptions (H1) and $(\hat{H}_2)$, for each $\Sigma$, the operator $C_\Sigma$ is completely continuous on $\mathbb{B}_\Sigma$.

Proof. For any given $\Sigma$ and $u\in\mathbb{B}_\Sigma$, we estimate $(C_\Sigma u)_i(n)$ in two cases.

  • Case 1: If $\Sigma = R$ and $u\in\mathbb{B}_R$, then $u_i(n)\in[\alpha,\beta]$ for each $i\in N$ and all $n\in\mathbb{Z}$. It follows from (3.1) and $(\hat{H}_2)$ that

$$(C_Ru)_i(n) \le \sum_{p=n}^{n+T-1}G_i(n,p)\left[-\sum_{j=1}^{m}c_{ij}\beta(1-a_i(p)) + Q_i\right] \le TM_i\,\frac{\beta-\hat{c}_i\alpha}{M_iT} = \beta - \hat{c}_i\alpha$$

and

$$(C_Ru)_i(n) \ge \sum_{p=n}^{n+T-1}G_i(n,p)\left[-\sum_{j=1}^{m}c_{ij}\alpha(1-a_i(p)) - Q_i\right] \ge Tm_i\,\frac{\alpha-\hat{c}_i\beta}{m_iT} = \alpha - \hat{c}_i\beta.$$
  • Case 2: If $\Sigma = L$ and $u\in\mathbb{B}_L$, then $u_i(n)\in[-\beta,-\alpha]$ for each $i\in N$ and all $n\in\mathbb{Z}$. It follows from (3.1) and $(\hat{H}_2)$ that

$$(C_Lu)_i(n) \ge \sum_{p=n}^{n+T-1}G_i(n,p)\left[-\sum_{j=1}^{m}c_{ij}(-\beta)(1-a_i(p)) - Q_i\right] \ge TM_i\,\frac{\hat{c}_i\alpha-\beta}{M_iT} = \hat{c}_i\alpha - \beta$$

and

$$(C_Lu)_i(n) \le \sum_{p=n}^{n+T-1}G_i(n,p)\left[-\sum_{j=1}^{m}c_{ij}(-\alpha)(1-a_i(p)) + Q_i\right] \le Tm_i\,\frac{\hat{c}_i\beta-\alpha}{m_iT} = \hat{c}_i\beta - \alpha.$$

By an argument similar to that of Proposition 3.1, $C_\Sigma$ is continuous and compact on $\mathbb{B}_\Sigma$. This completes the proof.

Theorem 3.2. Under the basic assumptions (H1) and $(\hat{H}_2)$, for each $\Sigma$, (1.2) has a T-periodic solution $u^\Sigma$ satisfying $u^\Sigma\in\mathbb{B}_\Sigma$.

Proof. Let $u,\hat{u}\in\mathbb{B}_\Sigma$. We show that $B_\Sigma u + C_\Sigma\hat{u}\in\mathbb{B}_\Sigma$. For simplicity, we only consider the case $\Sigma = L$. It follows from (2.2) and $(\hat{H}_2)$ that

$$\begin{aligned}
(B_Lu)_i(n) + (C_L\hat{u})_i(n) &= \sum_{j=1}^{m}c_{ij}u_j(n-\tau) + \sum_{p=n}^{n+T-1}G_i(n,p)\left[\sum_{j=1}^{m}b_{ij}(p)g_j(\hat{u}_j(p)) - \sum_{j=1}^{m}c_{ij}\hat{u}_j(p-\tau)(1-a_i(p)) + \sum_{j=1}^{m}d_{ij}(p)g_j\!\left(\sum_{v=1}^{\infty}h_j(v)\hat{u}_j(p-v)\right) + I_i(p)\right]\\
&\le \sum_{j=1}^{m}c_{ij}(-\beta) + \sum_{p=n}^{n+T-1}G_i(n,p)\left[-\sum_{j=1}^{m}c_{ij}(-\alpha)(1-a_i(p)) + Q_i\right]\\
&\le -\hat{c}_i\beta + Tm_i\,\frac{\hat{c}_i\beta-\alpha}{m_iT} = -\alpha.
\end{aligned}$$

On the other hand,

$$\begin{aligned}
(B_Lu)_i(n) + (C_L\hat{u})_i(n) &= \sum_{j=1}^{m}c_{ij}u_j(n-\tau) + \sum_{p=n}^{n+T-1}G_i(n,p)\left[\sum_{j=1}^{m}b_{ij}(p)g_j(\hat{u}_j(p)) - \sum_{j=1}^{m}c_{ij}\hat{u}_j(p-\tau)(1-a_i(p)) + \sum_{j=1}^{m}d_{ij}(p)g_j\!\left(\sum_{v=1}^{\infty}h_j(v)\hat{u}_j(p-v)\right) + I_i(p)\right]\\
&\ge \sum_{j=1}^{m}c_{ij}(-\alpha) + \sum_{p=n}^{n+T-1}G_i(n,p)\left[-\sum_{j=1}^{m}c_{ij}(-\beta)(1-a_i(p)) - Q_i\right]\\
&\ge -\hat{c}_i\alpha + TM_i\,\frac{\hat{c}_i\alpha-\beta}{M_iT} = -\beta.
\end{aligned}$$

Therefore, all the hypotheses of Lemma 2.3 are satisfied. Hence, (1.2) has a T-periodic solution $u^L$ satisfying $u^L\in\mathbb{B}_L$. A similar argument proves the case $\Sigma = R$. This completes the proof.

4 Numerical examples

Example 1. Consider the following neutral-type difference neural network with delays:

$$u_i(n+1) = a_i(n)u_i(n) + \sum_{j=1}^{3}c_{ij}\Delta u_j(n-\tau) + \sum_{j=1}^{3}b_{ij}(n)g_j(u_j(n)) + \sum_{j=1}^{3}d_{ij}(n)g_j\!\left(\sum_{v=1}^{\infty}h_j(v)u_j(n-v)\right) + I_i(n),$$
(4.1)

where

$$a_1(n) = a_2(n) = a_3(n) := \exp(-0.1 - 0.01\cos 0.2\pi n), \quad I_1(n) := 0.02\cos 0.2\pi n, \quad I_2(n) := 0.03\sin 0.2\pi n, \quad I_3(n) := 0.2\sin 0.2\pi n,$$
$$\tau = 5, \quad T = 10, \quad m = 3, \quad g(z) := g_1(z) = g_2(z) = g_3(z) = \tanh(z), \quad h_1(10) = h_2(10) = h_3(10) = 1,$$
$$C = (c_{ij}) = \begin{pmatrix} 0.2 & 0.1 & 0.05\\ 0.1 & 0.25 & 0\\ 0.05 & 0.1 & 0.2 \end{pmatrix}, \qquad D(n) = (d_{ij}(n)) = \begin{pmatrix} 0 & 0.05\cos(0.2\pi n) & 0\\ 0.1\sin(0.2\pi n) & 0 & 0\\ 0 & 0 & 0.01\sin(0.2\pi n) \end{pmatrix},$$
$$B(n) = (b_{ij}(n)) = \begin{pmatrix} 7+\sin(0.2\pi n) & 0.1\sin(0.2\pi n) & 0.01\sin(0.2\pi n)\\ 0.1\cos(0.2\pi n) & 7+\sin(0.2\pi n) & 0\\ 0.01\sin(0.2\pi n) & 0 & 7+\sin(0.2\pi n) \end{pmatrix}.$$

Obviously, the sigmoidal function $\tanh(z)$ is strictly increasing on ℝ with $|\tanh(z)| < 1$. It is easy to check that (H1) holds. After some computations, we have

$$\hat{c}_1 = \hat{c}_2 = \hat{c}_3 = 0.35, \quad m_1 = m_2 = m_3 = 0.6496, \quad M_1 = M_2 = M_3 = 1.2720, \quad P_1 = 0.18, \quad P_2 = 0.23, \quad P_3 = 0.22.$$
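The constants $\hat{c}_i$ and $P_i$ can be re-derived from the data. The sketch below uses hypothetical helper arrays holding the suprema of the entries ($g_j^+ = 1$ since $g = \tanh$, and $\sup_n|\sin| = \sup_n|\cos| = 1$), and, as the stated values suggest, bounds each supremum of a sum by the sum of the suprema:

```python
import numpy as np

C = np.array([[0.2, 0.1, 0.05],
              [0.1, 0.25, 0.0],
              [0.05, 0.1, 0.2]])
b_off = np.array([[0.0, 0.1, 0.01],    # sup_n |b_ij(n)|, j != i
                  [0.1, 0.0, 0.0],
                  [0.01, 0.0, 0.0]])
d_sup = np.array([[0.0, 0.05, 0.0],    # sup_n |d_ij(n)|
                  [0.1, 0.0, 0.0],
                  [0.0, 0.0, 0.01]])
I_sup = np.array([0.02, 0.03, 0.2])    # sup_n |I_i(n)|

c_hat = C.sum(axis=1)                  # hat c_i = sum_j c_ij
P = b_off.sum(axis=1) + d_sup.sum(axis=1) + I_sup
print(c_hat)   # approximately [0.35 0.35 0.35]
print(P)       # approximately [0.18 0.23 0.22]
```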

Take α = 3, β = 160 and define

$$S_1(n) := (1-a_i(n))\hat{c}_i\alpha + \frac{1-\hat{c}_i}{M_iT}\,\beta - b_{ii}(n)g_i(\beta), \qquad S_2(n) := -\frac{1-\hat{c}_i}{m_iT}\,\alpha + b_{ii}(n)g_i(\alpha) - (1-a_i(n))\hat{c}_i\beta.$$

From Figure 1, we can check that assumption (H2) holds. By Theorem 3.1, (4.1) has a positive ten-periodic sequence solution and a negative ten-periodic sequence solution. The coexistence of a positive periodic sequence solution and its anti-sign counterpart is shown in Figures 2 and 3; a phase view of the biperiodic dynamics of (4.1) is given in Figure 4.

Figure 1

The estimation of $S_1(n)$ and $S_2(n)$ for assumption (H2).

Figure 2

The existence of a positive T-periodic sequence solution of (4.1).

Figure 3

The existence of a negative T-periodic sequence solution of (4.1).

Figure 4

Phase view for biperiodicity of neutral-type difference neural networks (4.1).

Example 2. Consider the following neutral-type difference neural network with delays:

$$u_i(n+1) = a_i(n)u_i(n) + \sum_{j=1}^{2}c_{ij}\Delta u_j(n-\tau) + \sum_{j=1}^{2}b_{ij}(n)g_j(u_j(n)) + I_i(n),$$
(4.2)

where

$$a_1(n) := \exp(-0.1 - 0.01\cos 0.2\pi n), \quad a_2(n) := \exp(-0.2 - 0.1\sin 0.2\pi n), \quad I_1(n) := 0.02\sin 0.2\pi n, \quad I_2(n) := 0.02\cos 0.2\pi n,$$
$$\tau = 5, \quad T = 10, \quad g(z) := g_1(z) = g_2(z) = \tanh(z), \quad C = (c_{ij}) = \begin{pmatrix} -0.1 & -0.2\\ -0.2 & -0.1 \end{pmatrix}, \quad B(n) = \begin{pmatrix} 0.5 & 0.005\sin(0.2\pi n)\\ 0.1\cos(0.2\pi n) & 0.5 \end{pmatrix}.$$

Obviously, (H1) holds. After some computations, we have

$$\hat{c}_1 = \hat{c}_2 = -0.3, \quad m_1 = 0.6496, \quad m_2 = 0.1912, \quad M_1 = 1.2720, \quad M_2 = 0.8222, \quad Q_1 = 0.525, \quad Q_2 = 0.62.$$
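As in Example 1, the constants $\hat{c}_i$ and $Q_i$ can be re-derived from the data; the helper arrays below are hypothetical summaries of the suprema of the entries, with $g_j^+ = 1$ for $g = \tanh$.

```python
import numpy as np

C = np.array([[-0.1, -0.2],
              [-0.2, -0.1]])
b_sup = np.array([[0.5, 0.005],   # sup_n |b_ij(n)|
                  [0.1, 0.5]])
d_sup = np.zeros((2, 2))          # (4.2) has no distributed-delay term
I_sup = np.array([0.02, 0.02])    # sup_n |I_i(n)|

c_hat = C.sum(axis=1)             # hat c_i = sum_j c_ij
Q = (b_sup + d_sup).sum(axis=1) + I_sup
print(c_hat)   # approximately [-0.3 -0.3]
print(Q)       # approximately [0.525 0.62]
```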

Let α = 1, β = 20. We can check that assumption $(\hat{H}_2)$ holds. By Theorem 3.2, (4.2) has a positive ten-periodic sequence solution and an anti-sign one. The coexistence of a positive T-periodic sequence solution and its anti-sign counterpart is shown in Figure 5; Figure 6 shows a phase view of the biperiodic dynamics of (4.2).

Figure 5

Coexistence of a positive T-periodic solution of (4.2) and its anti-sign counterpart.

Figure 6

Phase view of biperiodicity for neutral-type difference neural networks (4.2).

5 Remarks and open problems

To the best of the authors' knowledge, this is the first time that biperiodicity criteria for neutral-type difference neural networks with delays

$$u_i(n+1) - a_i(n)u_i(n) = \sum_{j=1}^{m}c_{ij}\Delta u_j(n-\tau) + \sum_{j=1}^{m}b_{ij}(n)g_j(u_j(n)) + \sum_{j=1}^{m}d_{ij}(n)g_j\!\left(\sum_{v=1}^{\infty}h_j(v)u_j(n-v)\right) + I_i(n), \quad i\in N,$$

have been studied.

We propose the following open problems for future research:

Our new assumptions (H2) and $(\hat{H}_2)$ indicate that the neutral term plays an important role in the biperiodicity dynamics. Such a study has not been reported in the literature. However, there is still more to do. For example:

  (i) If we relax the condition $c_{ij}\le 0$ or $c_{ij}\ge 0$ for all $i,j\in N$ on the neutral term, do multiperiodic dynamics still exist?

  (ii) Evidently, in our work the biperiodicity of neural networks depends on the boundedness of the activation functions. Can this requirement be relaxed while still obtaining periodic sequence solutions, and are such solutions always of opposite sign?

Discussing the sign of each $c_{ij}$ and considering analytic properties of the activation functions is a possible way to investigate these problems.