1 Introduction

Let ℕ, ℤ, ℝ be the sets of natural, integer, and real numbers, respectively. By ℝ^m we denote the m-dimensional Euclidean space with elements x = col(x_1, x_2, . . . , x_m).

It is well known that the nonhomogeneous linear equation x'(t) = A(t)x(t) + f(t) has periodic solutions if and only if

$$\int_0^{\omega} y^{T}(t)\, f(t)\, dt = 0$$
(1)

for all periodic solutions y(t) of period ω of the adjoint equation y'(t) = -A^T(t)y(t), where A ∈ C(ℝ, ℝ^{m×m}) and f ∈ C(ℝ, ℝ^m) are periodic functions of period ω; see for instance [1]. The superscript "T" denotes transposition.

In his remarkable monograph [2], Halanay extended the above result to linear delay differential equations of the form

$$x'(t) = A(t)\,x(t) + B(t)\,x(t-\tau) + f(t), \quad t > 0,$$
(2)

where A, B ∈ C(ℝ, ℝ^{m×m}) and f ∈ C(ℝ, ℝ^m) are periodic functions of period ω and τ > 0 is a fixed real number. It was shown that the required condition involves the same integral (1). Indeed, Halanay proved that Equation (2) has periodic solutions if and only if (1) holds for all periodic solutions y(t) of period ω of the adjoint equation

$$y'(t) = -A^{T}(t)\,y(t) - B^{T}(t+\tau)\,y(t+\tau),$$

which is constructed with respect to the function

$$\langle y(t), x(t)\rangle = y^{T}(t)\,x(t) + \int_{t}^{t+\tau} y^{T}(s)\,B(s)\,x(s-\tau)\,ds.$$
(3)

The same problem has been investigated for linear impulsive delay differential equations [3, 4]. The discrete analog of the above-mentioned result has been recently studied in [5]. We suggest that the reader consult [6-10] for more results regarding the existence of periodic solutions of difference equations.

The purpose of this article is to establish a necessary and sufficient condition for the existence of periodic solutions for a class of linear difference equations with distributed delay of the form

$$\Delta x(n) = \sum_{k=-d}^{0} \Delta_k \zeta(n+1, k-1)\, x(n+k-1), \quad n \geq 1,$$
(4)

where ζ: ℕ × ℤ → ℝ^{m×m} is a kernel function satisfying the following conditions:

  1. (i)

    ζ(n, k) is normalized so that ζ(n, s) = 0 for s ≥ -1 and for s ≤ - d + 1 where d > 3 is a positive integer;

  2. (ii)

    There exists a positive real number γ such that sup_{t ≥ 0} Σ_{s=-d}^{0} ‖Δ_s ζ(t, s)‖ ≤ γ.

For any a, b ∈ ℕ, define ℕ(a) = {a, a + 1, . . .} and ℕ(a, b) = {a, a + 1, . . . , b}, where a ≤ b. By a solution of (4), we mean a sequence x(n) of elements in ℝ^m which is defined for all n ∈ ℕ(n0 - d + 1) and satisfies (4) for n ∈ ℕ(n0), for some n0 ∈ ℕ. It is easy to see that for any given n0 ∈ ℕ and initial conditions of the form

$$x(n) = \phi(n), \quad n \in \mathbb{N}(n_0 - d + 1,\; n_0 + 1),$$
(5)
Equation (4) has a unique solution x(n) which is defined for n ∈ ℕ(n0 - d + 1) and satisfies the initial conditions (5). To emphasize the dependence of the solution on the initial point n0 and the initial function ϕ, we may use the notation x(n) = x(n; n0, ϕ).
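
To illustrate how the initial value problem (4)-(5) determines a unique forward solution, the following minimal Python sketch steps (4) forward from given initial data. The kernel ζ used here is a hypothetical choice made only so that the normalization (i) and the boundedness condition (ii) hold; all names and parameter values are illustrative and not taken from the article.

```python
import numpy as np

m, d, n0 = 2, 5, 1          # state dimension, delay span (d > 3), initial point

def zeta(n, s):
    # Hypothetical kernel: zero for s >= -1 and for s <= -d + 1, as required by (i).
    if s >= -1 or s <= -d + 1:
        return np.zeros((m, m))
    return (0.1 / (1 + abs(s))) * np.cos(n) * np.eye(m)   # bounded, so (ii) holds

def step(x, n):
    # One step of (4): Delta x(n) = sum_{k=-d}^{0} Delta_k zeta(n+1, k-1) x(n+k-1).
    dx = np.zeros(m)
    for k in range(-d, 1):
        coeff = zeta(n + 1, k) - zeta(n + 1, k - 1)        # Delta_k zeta(n+1, k-1)
        if np.any(coeff):                                  # (i) kills the out-of-range terms
            dx += coeff @ x[n + k - 1]
    return x[n] + dx

# Initial conditions (5): x(n) = phi(n) on N(n0 - d + 1, n0 + 1); here phi is constant.
x = {n: np.ones(m) for n in range(n0 - d + 1, n0 + 2)}
for n in range(n0 + 1, 40):                                # unique forward continuation
    x[n + 1] = step(x, n)
print(x[40])
```

Because of the normalization (i), every term of the sum in (4) that would reach outside the initial interval has a zero coefficient, which is why the data (5) suffice to continue the solution forward step by step.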

Our approach is based on constructing an adjoint equation for (4) with respect to a discrete analog of the function (3) and proving that (4) and its adjoint equation have the same number of linearly independent periodic solutions. We shall employ elementary algebraic techniques to prove the main results of this article. It is worth mentioning here that the equation under consideration in this article (Equation (4)) is given in general form, so it includes many particular cases of difference equations with pure delays; see [5, 11-13] for more details.

2 Preliminary assertions

This section is devoted to certain auxiliary assertions that will be needed in the proof of the main theorem. Lemma 2.1, which is the main result of this section, is needed to define an adjoint equation for (4). Lemmas 2.4 and 2.7 give representations of solutions of the considered equations. The proofs of these lemmas were given in [14]. For the benefit of the readers, however, we state these lemmas along with their proofs.

Consider the function

$$\langle x(n), y(n)\rangle = x^{T}(n)\,y(n) + \sum_{s=n-d}^{n} x^{T}(s-1)\, \Delta_s \sum_{\alpha=n+1}^{s+d-1} \zeta^{T}(\alpha, s-\alpha)\, y(\alpha),$$
(6)

where Δ_n ζ(n, k) := ζ(n + 1, k) - ζ(n, k). We claim that the equation

$$\Delta y(n) = -\Delta_n \sum_{k=-d}^{0} \zeta^{T}(n-k, k+1)\, y(n-k)$$
(7)

is an adjoint equation of (4) with respect to (6). The following lemma makes this claim precise.

Lemma 2.1 Let x(n) be any solution of (4) and y(n) be any solution of (7). Then

$$\langle x(n), y(n)\rangle = \text{constant},$$
(8)

where < ·,· > is defined by (6).

Proof. Clearly, it suffices to show that Δ < x(n), y(n) > = 0. It follows that

$$\Delta \langle x(n), y(n)\rangle = x^{T}(n)\,\Delta y(n) + \Delta x^{T}(n)\, y(n+1) + \Delta_n \sum_{s=n-d}^{n} g(s, n),$$
(9)

where

$$g(s, n) = x^{T}(s-1)\, \Delta_s \sum_{\alpha=n+1}^{s+d-1} \zeta^{T}(\alpha, s-\alpha)\, y(\alpha).$$
(10)

It is easy to see that

$$\Delta_n \sum_{s=n-d}^{n} g(s, n) = g(n+1, n+1) - g(n-d, n) + \sum_{s=n-d+1}^{n} \Delta_n g(s, n).$$

Therefore (9) becomes

$$\Delta \langle x(n), y(n)\rangle = x^{T}(n)\,\Delta y(n) + \Delta x^{T}(n)\, y(n+1) + g(n+1, n+1) - g(n-d, n) + \sum_{s=n-d+1}^{n} \Delta_n g(s, n).$$

Thus

$$\begin{aligned}
\Delta \langle x(n), y(n)\rangle ={}& \underbrace{x^{T}(n)\Big(-\Delta_n \sum_{k=-d}^{0} \zeta^{T}(n-k, k+1)\, y(n-k)\Big)}_{\text{by (7)}}
+ \underbrace{\sum_{k=-d}^{0} x^{T}(n+k-1)\, \Delta_k \zeta^{T}(n+1, k-1)\, y(n+1)}_{\text{by (4)}} \\
&+ \underbrace{x^{T}(n)\sum_{\alpha=n+3}^{n+d} \Delta_n \zeta^{T}(\alpha, n+1-\alpha)\, y(\alpha)}_{\text{by (10)}}
- \underbrace{x^{T}(n-d-1)\, \Delta_{n-d} \sum_{\alpha=n+1}^{n-1} \zeta^{T}(\alpha, n-d-\alpha)\, y(\alpha)}_{\text{by (10)}} \\
&- \underbrace{\sum_{s=n-d+1}^{n} x^{T}(s-1)\, \Delta_s \zeta^{T}(n+1, s-n-1)\, y(n+1)}_{\text{by (10)}}.
\end{aligned}$$

By changing the indices of summation and using the properties of ζ, we see that the right-hand side is equal to zero. The proof is finished.

Remark 2.2 In view of Lemma 2.1, we may say that Equation (7) is an adjoint of (4). It is easy to verify also that the adjoint of (7) is (4), i.e., the two equations are mutually adjoint.

Consider the nonhomogeneous equation

$$\Delta x(n) = \sum_{k=-d}^{0} \Delta_k \zeta(n+1, k-1)\, x(n+k-1) + f(n), \quad n \geq 1,$$
(11)

where f is a sequence with values in ℝ^m.

Definition 2.3 A matrix solution X(n, α) of (4) satisfying X(α, α) = I (where I is the identity matrix) and X(n, α) = 0 for n < α is called a fundamental function of (4).
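
As a complement to Definition 2.3, the fundamental function can be generated numerically by continuing (4) forward from the matrix-valued data X(α, α) = I, X(n, α) = 0 for n < α. The sketch below does this for the same kind of hypothetical kernel as in the sketch of Section 1; it is purely illustrative.

```python
import numpy as np

m, d = 2, 5                      # state dimension and delay span (d > 3)

def zeta(n, s):
    # Hypothetical kernel obeying the normalization (i): zero for s >= -1 and s <= -d + 1.
    if s >= -1 or s <= -d + 1:
        return np.zeros((m, m))
    return (0.1 / (1 + abs(s))) * np.cos(n) * np.eye(m)

def fundamental(alpha, n_max):
    # X(alpha, alpha) = I, X(n, alpha) = 0 for n < alpha, then X solves (4) forward.
    X, zero = {alpha: np.eye(m)}, np.zeros((m, m))
    for n in range(alpha, n_max):
        dX = np.zeros((m, m))
        for k in range(-d, 1):
            coeff = zeta(n + 1, k) - zeta(n + 1, k - 1)    # Delta_k zeta(n+1, k-1)
            dX += coeff @ X.get(n + k - 1, zero)           # zero history below alpha
        X[n + 1] = X[n] + dX
    return X

X = fundamental(alpha=3, n_max=20)
print(X[20])                      # the matrix X(20, 3)
```

This X(·, α) is the object that enters the representation formulas (12) and (14) below.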

Lemma 2.4 Let X(n, α) be a fundamental function of (4) and n0 ∈ ℕ. If x(n) is a solution of (11), then

$$x(n) = X(n, n_0)\, x(n_0) + \sum_{s=n_0-d}^{n_0} \Delta_s \Big(\sum_{\alpha=n_0+1}^{s+d-1} X(n, \alpha)\, \zeta(\alpha, s-\alpha)\Big)\, x(s-1) + \sum_{k=n_0}^{n-1} X(n, k+1)\, f(k).$$
(12)

Proof. A direct substitution of (12) in (11) leads to the desired result. Indeed,

$$\Delta x(n) = \Delta X(n, n_0)\, x(n_0) + \Delta_n \sum_{s=n_0-d}^{n_0} \Delta_s \Big(\sum_{\alpha=n_0+1}^{s+d-1} X(n, \alpha)\, \zeta(\alpha, s-\alpha)\Big)\, x(s-1) + \Delta_n \sum_{k=n_0}^{n-1} X(n, k+1)\, f(k),$$
(13)

or

$$\begin{aligned}
\Delta x(n) ={}& \sum_{k=-d}^{0} \Delta_k \zeta(n+1, k-1)\, X(n+k-1, n_0)\, x(n_0) \\
&+ \sum_{s=n_0-d}^{n_0} \Delta_s \Big(\sum_{\alpha=n_0+1}^{s+d-1} \sum_{k=-d}^{0} \Delta_k \zeta(n+1, k-1)\, X(n+k-1, \alpha)\, \zeta(\alpha, s-\alpha)\Big)\, x(s-1) \\
&+ f(n) + \sum_{r=n_0}^{n-1} \sum_{k=-d}^{0} \Delta_k \zeta(n+1, k-1)\, X(n+k-1, r+1)\, f(r) \\
={}& f(n) + \sum_{k=-d}^{0} \Delta_k \zeta(n+1, k-1)\, x(n+k-1),
\end{aligned}$$
where (4) has been used for the fundamental function X.

Corollary 2.5 Let X(n, α) be a fundamental function of (4) and n0 ∈ ℕ. If x(n) is a solution of (4), then

$$x(n) = X(n, n_0)\, x(n_0) + \sum_{s=n_0-d}^{n_0} \Delta_s \Big(\sum_{\alpha=n_0+1}^{s+d-1} X(n, \alpha)\, \zeta(\alpha, s-\alpha)\Big)\, x(s-1).$$
(14)

Definition 2.6 A matrix solution Y (n, α) of (7) satisfying Y (α, α) = I and Y (n, α) = 0 for n > α is called a fundamental function of (7).

Lemma 2.7 Let Y(n, α) be a fundamental function of (7) and n0 ∈ ℕ. If y(n) is a solution of (7), then

$$y(n) = Y(n, n_0)\, y(n_0) + \sum_{s=n_0-d}^{n_0} Y(n, s-1)\, \Delta_s \sum_{\alpha=n_0+1}^{s+d-1} \zeta^{T}(\alpha, s-\alpha)\, y(\alpha).$$
(15)

Corollary 2.8 Let X(n, n0) be a fundamental function of (4) and Y(n, n0) be a fundamental function of (7). Then

$$X(n, n_0) = Y^{T}(n_0, n).$$
(16)

Proof. By following the same arguments used by Halanay in [2, p. 364], (8) can be written as follows:

$$\langle x(n), y(n)\rangle = \langle x(n_0), y(n_0)\rangle \quad \text{for any } n_0.$$

Further, by (6),

$$X^{T}(n, n)\, Y(n, n_0) + \sum_{s=n-d}^{n} X^{T}(s-1, n)\, \Delta_s \sum_{\alpha=n+1}^{s+d-1} \zeta^{T}(\alpha, s-\alpha)\, Y(\alpha, n_0) = X^{T}(n_0, n)\, Y(n_0, n_0) + \sum_{s=n_0-d}^{n_0} X^{T}(s-1, n)\, \Delta_s \sum_{\alpha=n_0+1}^{s+d-1} \zeta^{T}(\alpha, s-\alpha)\, Y(\alpha, n_0).$$
(17)

Upon using the properties of the fundamental functions X(n, α) and Y(n, α) (namely X(n, n) = Y(n0, n0) = I, X(s - 1, n) = 0 for s ≤ n, and Y(α, n0) = 0 for α > n0, so that both sums vanish), identity (16) is obtained.

Remark 2.9 Formulas (14) and (15) can be derived from function (6). Indeed, replacing X by x or Y by y in (17), using (16) and employing the properties of X and Y we obtain the desired results.

3 The main results

With regard to Equation (11), the following conditions are assumed to be valid throughout the remaining part of the article.

  1. (i)

    ζ(n, k): ℕ × ℤ → ℝ^{m×m} is a p-periodic sequence in n, with p > d;

  2. (ii)

    f: ℕ → ℝ^m is a p-periodic sequence, with p > d.

Let x(n) = x(n; φ) be the solution of Equation (11) defined for n ≥ 1 such that x(n) coincides with φ on [-d + 2, 2]. The periodicity of the equation implies that x(n + p; φ) is likewise a solution of the equation, defined for n + p ≥ -d + 2. If this solution coincides with φ on [-d + 2, 2], then on the basis of the uniqueness theorem it follows that x(n + p; φ) = x(n; φ) for all n ≥ -d + 2, and the solution is periodic. Thus the periodicity condition of the solution is written as x(n + p; φ) = φ(n) for n ∈ [-d + 2, 2]. If W is defined by Wφ = x(n + p; φ), n ∈ [-d + 2, 2], then it follows that x(n) is periodic if and only if Wφ = φ, i.e., φ is a fixed point of W.

Let z(n) = z(n; φ) be the solution of (4) defined for n ≥ 1 such that z(n) = φ(n) on [-d + 2, 2]. Then by Lemma 2.4,

$$x(n; \varphi) = z(n; \varphi) + \sum_{k=0}^{n-1} X(n, k+1)\, f(k).$$

Define U by Uφ = z(n + p; φ), n ∈ [-d + 2, 2]. Then, since

$$W\varphi = U\varphi + \sum_{k=0}^{n+p-1} X(n+p, k+1)\, f(k),$$

the periodicity condition reads as

$$\varphi = U\varphi + \sum_{k=0}^{n+p-1} X(n+p, k+1)\, f(k).$$
(18)

Let y(n) = y(n; ψ) be the solution of (7) defined for n ≤ p + d such that y(n) = ψ(n) on [p, p + d]. Similarly, we conclude that if y(n - p; ψ) coincides with ψ on [p, p + d], then y(n - p; ψ) = y(n; ψ) and hence the solution is periodic. From Lemma 2.7, we get

$$\psi(n) = X^{T}(p, n-p)\, \psi(p) + \sum_{s=p-d}^{p} X^{T}(s-1, n-p)\, \Delta_s \sum_{\alpha=p+1}^{s+d-1} \zeta^{T}(\alpha, s-\alpha)\, \psi(\alpha),$$

for n ∈ [p, p + d]. Let φ̃(s) = ψ(s + p + d) for s ∈ [-d + 2, 2]. Setting η = α - p - d, we find

$$\tilde{\varphi}(s) = X^{T}(p, s+d)\, \tilde{\varphi}(-d) + \sum_{\sigma=p-d}^{p} X^{T}(\sigma-1, s+d)\, \Delta_\sigma \sum_{\eta=-d+1}^{\sigma-p-1} \zeta^{T}(\eta+p+d,\ \sigma-\eta-p-d)\, \tilde{\varphi}(\eta).$$

For the sake of convenience, we also use the notation

$$\langle \Psi(s), \Phi(s)\rangle = \Psi^{T}(-d)\,\Phi(0) + \sum_{s=-d}^{0} \Delta_s \sum_{\eta=-d+1}^{s-p-1} \Psi^{T}(\eta)\, \zeta(\eta+p+d,\ s-\eta-p-d)\, \Phi(s),$$
(19)

for matrix sequences Ψ and Φ defined on [-d + 2, 2] as long as multiplication is possible. Note that < Ψ(s), Φ(s) > could be either a number or a vector or a matrix, depending on the sizes of Ψ and Φ.

The following lemma, which is a discrete analogue of [4, Lemma 4], plays a key role in our later analysis. Its proof is straightforward and can be achieved directly by changing the order of summations.

Lemma 3.1 For any matrix sequences N, M, L with values in ℝ^{m×m}, we have

$$\big\langle \langle L(\sigma), M(\alpha, \sigma)\rangle^{T},\, N(\alpha)\big\rangle = \big\langle L(\sigma),\, \langle M^{T}(\alpha, \sigma), N(\alpha)\rangle\big\rangle.$$

By using this notation, the operator U can be written as

$$U\varphi = \langle X^{T}(p+s,\ \eta+d),\ \varphi(\eta)\rangle.$$

If we define Ũφ̃ = ⟨φ̃(η), X(p + η, s + d)⟩^T, then in view of Lemma 3.1 we obtain

$$\langle \tilde{U}\tilde{\varphi}, \varphi\rangle = \big\langle \tilde{\varphi}(\eta),\ \langle X^{T}(p+\eta,\ s+d),\ \varphi(s)\rangle\big\rangle = \langle \tilde{\varphi}, U\varphi\rangle.$$

Define Ṽ by Ṽψ = y(n0 - p; ψ) for n0 ∈ [p, p + d]. That is,

$$\tilde{V}\psi = X^{T}(p, n_0-p)\, \psi(p) + \sum_{s=p-d}^{p} X^{T}(s-1, n_0-p)\, \Delta_s \sum_{\alpha=p+1}^{s+d-1} \zeta^{T}(\alpha, s-\alpha)\, \psi(\alpha),$$

for n0 ∈ [p, p + d]. If ρ is an eigenvalue of  Ṽ, then there exists a nonzero solution of

$$\rho\,\tilde{\varphi}(s) = X^{T}(p, s+d)\, \tilde{\varphi}(-d) + \sum_{\sigma=p-d}^{p} X^{T}(\sigma-1, s+d)\, \Delta_\sigma \sum_{\eta=-d+1}^{\sigma-p-1} \zeta^{T}(\eta+p+d,\ \sigma-\eta-p-d)\, \tilde{\varphi}(\eta),$$

where φ̃(s) = ψ(s + p + d), s ∈ [-d + 2, 2]. The right-hand side of the above equation is nothing but Ũφ̃. Thus the eigenvalues of the operators Ũ and Ṽ coincide; in addition, if ψ is an eigenfunction of Ṽ, then φ̃(s) = ψ(s + p + d) is an eigenfunction of Ũ.

Lemma 3.2 Equations (4) and (7) have the same number of linearly independent periodic solutions of period p > d.

Proof. Consider the equation

$$\rho\,\varphi(s) - U\varphi(s) = F(s).$$
(20)

The fundamental function X can be written as a combination of linearly independent vectors plus a remainder of small norm. That is,

$$X(p+s,\ \xi+d) = \sum_{k=1}^{m} a_k(s)\, b_k(\xi) + K_1(s, \xi), \quad (s, \xi) \in [-d+2, 2] \times [-d+2, 2],$$

where the a_k(s) are linearly independent column vectors, the b_k(ξ) are linearly independent row vectors, and K_1 is a matrix whose norm |K_1| is chosen small. Clearly, we have

$$X^{T}(p+s,\ \xi+d) = \sum_{k=1}^{m} b_k^{T}(\xi)\, a_k^{T}(s) + K_1^{T}(s, \xi).$$

Then, by using the fact that ⟨b_k^T(ξ) a_k^T(s), φ(s)⟩ = a_k(s) ⟨b_k^T(ξ), φ(s)⟩, (20) becomes

$$\rho\,\varphi(s) - \sum_{k=1}^{m} a_k(s)\, \langle b_k^{T}(\xi), \varphi(\xi)\rangle - \langle K_1^{T}(s, \xi), \varphi(\xi)\rangle = F(s).$$

Setting

$$\nu(s) = \frac{1}{\rho} \sum_{k=1}^{m} a_k(s)\, \langle b_k^{T}(\xi), \varphi(\xi)\rangle + \frac{1}{\rho}\, F(s),$$
(21)

we obtain

$$\nu(s) = \varphi(s) - \frac{1}{\rho}\, \langle K_1^{T}(s, \xi), \varphi(\xi)\rangle.$$
(22)

Now consider an equation of the form

$$\nu(s) = \varphi(s) - \lambda\, \langle K_1^{T}(s, \xi), \varphi(\xi)\rangle.$$
(23)

We seek a solution of the form φ(s) = Σ_{i=0}^{∞} λ^i φ_i(s). Substituting this into (23) and identifying the coefficients of the powers of λ, we obtain

$$\varphi_0(s) = \nu(s) \quad \text{and} \quad \varphi_i(s) = \langle K_1^{T}(s, \alpha), \varphi_{i-1}(\alpha)\rangle, \quad i = 1, 2, \ldots.$$

It follows that |φ_i(s)| ≤ M^i sup_s |ν(s)|, where M = sup |K_1^T| and i = 1, 2, . . . . Therefore, the series converges if |λ|M < 1. We have

$$\varphi_1(s) = \langle K_1^{T}(s, \alpha), \nu(\alpha)\rangle.$$

By the induction principle, we obtain

$$\varphi_l(s) = \langle K_l^{T}(s, \alpha), \nu(\alpha)\rangle,$$

where K_l(s, ξ) = ⟨K_1^T(s, α), K_{l-1}(α, ξ)⟩. Indeed, we have

$$\varphi_{l+1}(s) = \langle K_1^{T}(s, \alpha), \varphi_l(\alpha)\rangle = \big\langle K_1^{T}(s, \alpha),\ \langle K_l^{T}(\alpha, \xi), \nu(\xi)\rangle\big\rangle.$$

Using Lemma 3.1, we get

$$\varphi_{l+1}(s) = \big\langle \langle K_1^{T}(s, \alpha), K_l(\alpha, \xi)\rangle^{T},\ \nu(\xi)\big\rangle = \langle K_{l+1}^{T}(s, \xi), \nu(\xi)\rangle.$$

It follows that, if |λ| < 1/M, then the solution of Equation (23) can be written as

$$\varphi(s) = \nu(s) + \sum_{l=1}^{\infty} \lambda^{l}\, \varphi_l(s) = \nu(s) + \sum_{l=1}^{\infty} \lambda^{l}\, \langle K_l^{T}(s, \alpha), \nu(\alpha)\rangle.$$

Thus, φ(s) = ν(s) + ⟨Γ^T(s, α), ν(α)⟩, where Γ^T(s, α) = Σ_{l=1}^{∞} λ^l K_l^T(s, α). Therefore, if 1/|ρ| < 1/M and sup |K_1^T| < |ρ|, we deduce that

$$\varphi(s) = \nu(s) + \langle \Gamma^{T}(s, \alpha), \nu(\alpha)\rangle$$
(24)

is a solution of (22).
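
The construction above is a discrete analogue of a Neumann (resolvent) series. As a hedged finite-dimensional illustration (with an ordinary matrix product standing in for the bracket ⟨·,·⟩ and entirely illustrative data), the following sketch checks that φ = ν + Σ_{l≥1} λ^l K^l ν solves φ - λKφ = ν whenever |λ|·‖K‖ < 1.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
K = rng.standard_normal((n, n))
K *= 0.5 / np.linalg.norm(K, 2)          # rescale so that the spectral norm ||K|| = 0.5
nu = rng.standard_normal(n)
lam = 1.0                                # |lam| * ||K|| = 0.5 < 1, so the series converges

# Partial sums of the Neumann series phi = nu + sum_{l >= 1} lam^l K^l nu.
phi, term = nu.copy(), nu.copy()
for l in range(1, 200):
    term = lam * (K @ term)              # the term lam^l K^l nu
    phi += term

residual = np.linalg.norm(phi - lam * (K @ phi) - nu)
print(residual)                          # round-off level: phi solves (I - lam K) phi = nu
```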

On the other hand, consider the equation

$$\rho\,\tilde{\varphi}(s) - \tilde{U}\tilde{\varphi}(s) = 0,$$

which can be written as

$$\rho\,\tilde{\varphi}(s) = \sum_{k=1}^{m} b_k^{T}(s)\, \langle \tilde{\varphi}(\alpha), a_k(\alpha)\rangle^{T} + \langle \tilde{\varphi}(\alpha), K_1(\alpha, s)\rangle^{T}.$$

Setting

$$\tilde{\nu}(s) = \frac{1}{\rho} \sum_{k=1}^{m} b_k^{T}(s)\, \langle \tilde{\varphi}(\alpha), a_k(\alpha)\rangle^{T},$$
(25)

we obtain

$$\tilde{\nu}(s) = \tilde{\varphi}(s) - \frac{1}{\rho}\, \langle \tilde{\varphi}(\alpha), K_1(\alpha, s)\rangle^{T}.$$
(26)

Following a similar analysis, we find that the solution of (26) has the form

$$\tilde{\varphi}(s) = \tilde{\nu}(s) + \langle \tilde{\nu}(\alpha), \tilde{\Gamma}(\alpha, s)\rangle^{T},$$
(27)

where Γ̃(α, s) = Σ_{l=1}^{∞} λ^l K̃_l(α, s) and K̃_l(ξ, s) = ⟨K_{l-1}^T(ξ, α), K_1(α, s)⟩. However, using the induction principle and Lemma 3.1, it is easy to verify that K̃_l(ξ, s) = K_l(ξ, s), from which one can see that

$$\tilde{\Gamma}(\xi, s) = \Gamma(\xi, s).$$
(28)

In view of Equation (21), we have

$$\rho\,\nu(s) = \sum_{k=1}^{m} a_k(s)\, \langle b_k^{T}(\xi), \varphi(\xi)\rangle + F(s).$$
(29)

But φ(s) = ν(s) + ⟨Γ^T(s, α), ν(α)⟩. So

$$\rho\,\nu(s) = \sum_{k=1}^{m} a_k(s)\, \big\langle b_k^{T}(\xi),\ \nu(\xi) + \langle \Gamma^{T}(\xi, \alpha), \nu(\alpha)\rangle \big\rangle + F(s),$$

which can be written as

$$\rho\,\nu(s) = \sum_{k=1}^{m} a_k(s)\, \Big[ \langle b_k^{T}(\xi), \nu(\xi)\rangle + \big\langle b_k^{T}(\xi),\ \langle \Gamma^{T}(\xi, \alpha), \nu(\alpha)\rangle \big\rangle \Big] + F(s).$$

Using Lemma 3.1, we get

$$\rho\,\nu(s) = \sum_{k=1}^{m} a_k(s)\, \big\langle\, b_k^{T}(\alpha) + \langle b_k^{T}(\xi), \Gamma(\xi, \alpha)\rangle^{T},\ \nu(\alpha)\,\big\rangle + F(s).$$

Hence

$$\rho\,\nu(s) = \sum_{k=1}^{m} a_k(s)\, \langle \bar{b}_k^{T}(\alpha), \nu(\alpha)\rangle + F(s),$$
(30)

where b̄_k^T(α) = b_k^T(α) + ⟨b_k^T(ξ), Γ(ξ, α)⟩^T. Setting λ_k = ⟨b̄_k^T(α), ν(α)⟩, it follows from (30) that

$$\rho\,\nu(s) - F(s) = \sum_{k=1}^{m} \lambda_k\, a_k(s)$$
(31)

is the form of the solution of (30). Analogously, the solution of

$$\rho\,\tilde{\nu}(s) = \sum_{k=1}^{m} b_k^{T}(s)\, \langle \tilde{\nu}(\xi), \bar{a}_k(\xi)\rangle^{T},$$
(32)

has the form

$$\rho\,\tilde{\nu}(s) = \sum_{k=1}^{m} \mu_k\, b_k^{T}(s),$$
(33)

where μ_k = ⟨ν̃(ξ), ā_k(ξ)⟩^T and ā_k(ξ) = a_k(ξ) + ⟨Γ̃^T(ξ, α), a_k(α)⟩. In view of (30), (31) becomes

$$\sum_{k=1}^{m} \lambda_k\, a_k(s) = \sum_{k=1}^{m} a_k(s)\, \Big\langle \bar{b}_k^{T}(\alpha),\ \frac{1}{\rho}\, F(\alpha) + \frac{1}{\rho} \sum_{j=1}^{m} \lambda_j\, a_j(\alpha) \Big\rangle.$$
(34)

Similarly, Equation (32) implies that (33) can be written as

$$\sum_{k=1}^{m} \mu_k\, b_k^{T}(s) = \sum_{k=1}^{m} b_k^{T}(s)\, \Big\langle \frac{1}{\rho} \sum_{j=1}^{m} \mu_j\, b_j^{T}(\xi),\ \bar{a}_k(\xi) \Big\rangle^{T}.$$
(35)

Taking into account that the vectors {a_k} are linearly independent, we obtain from (34) the algebraic equation

$$\rho\,\lambda_k = \sum_{j=1}^{m} \gamma_{kj}\, \lambda_j + f_k,$$
(36)

where γ_{kj} = ⟨b̄_k^T(α), a_j(α)⟩ and f_k = ⟨b̄_k^T(α), F(α)⟩. Similarly, we get from (35) the algebraic equation

$$\rho\,\mu_k = \sum_{j=1}^{m} \tilde{\gamma}_{jk}^{T}\, \mu_j,$$
(37)

where γ̃_{jk}^T = ⟨b_j^T(ξ), ā_k(ξ)⟩. We know that Equation (36) for λ_k has a solution if and only if

$$\sum_{k=1}^{m} \mu_k\, f_k = 0$$
(38)

for all solutions μ_k of the equation

$$\rho\,\mu_k = \sum_{j=1}^{m} \gamma_{jk}\, \mu_j.$$
(39)

By employing Lemma 3.1 and relation (28), however, we obtain that γ̃_{jk}^T = γ_{jk}. Thus, Equations (37) and (39) coincide.

Therefore, we conclude that the equations

$$\rho\,\lambda_k = \sum_{j=1}^{m} \gamma_{kj}\, \lambda_j$$
(40)

and

$$\rho\,\mu_k = \sum_{j=1}^{m} \gamma_{jk}\, \mu_j$$
(41)

have the same number of linearly independent solutions. To a solution of (40) there corresponds ν(s) = (1/ρ) Σ_{k=1}^{m} λ_k a_k(s), and to this corresponds the solution φ(s) = ν(s) + ⟨Γ^T(s, α), ν(α)⟩ of the equation ρφ(s) - Uφ(s) = 0, linearly independent solutions corresponding to linearly independent solutions of Equation (40). Likewise, a solution of the equation ρφ̃(s) - Ũφ̃(s) = 0 corresponds to a solution of Equation (37), which coincides with (41), linearly independent solutions corresponding to linearly independent solutions. It follows that the equations ρφ(s) - Uφ(s) = 0 and ρφ̃(s) - Ũφ̃(s) = 0 have the same number of linearly independent solutions, which implies in particular that U and Ũ have the same eigenvalues; hence if ρ is a multiplier of the equation, then 1/ρ is a multiplier of the adjoint equation. The proof of Lemma 3.2 is completed.

We are now in a position to state and prove the main result of this article.

Theorem 3.3 A necessary and sufficient condition for the existence of periodic solutions of period p of Equation (11) is that

$$\sum_{k=0}^{p-1} y^{T}(k+1)\, f(k) = 0,$$
(42)

for all periodic solutions y(n) of period p of the adjoint Equation (7).

NECESSITY. Let x(n) be a p-periodic solution of (11) and y(n) a p-periodic solution of (7). It follows that < y(n), x(n) > is p-periodic. In view of (7) and (11), one can conclude that

$$\Delta \langle y(n), x(n)\rangle = y^{T}(n+1)\, f(n), \quad 0 \leq n \leq p.$$
(43)

Summing (43) over the interval [0, p - 1] results in

$$\sum_{k=0}^{p-1} y^{T}(k+1)\, f(k) = 0,$$

which is the same as (42).

SUFFICIENCY. Suppose that (42) is satisfied for all periodic solutions y(n) of period p of (7). In view of relation (38), Lemma 3.2 tells us that

$$\rho\,\varphi(s) - U\varphi(s) = F(s)$$

has solutions if and only if

$$\langle \tilde{\varphi}(\alpha), F(\alpha)\rangle = 0$$
(44)

for all φ̃ satisfying

$$\rho\,\tilde{\varphi}(s) - \tilde{U}\tilde{\varphi}(s) = 0.$$

Therefore, it suffices to show that (44) holds under condition (42). We observe from (18) that

$$F(s) = \varphi(s) - U\varphi(s) = \sum_{k=0}^{s+p-1} X(s+p, k+1)\, f(k).$$

It follows that

$$\langle \tilde{\varphi}(\alpha), F(\alpha)\rangle = \tilde{\varphi}^{T}(-d)\, F(0) + \sum_{s=-d}^{0} \Delta_s \sum_{\eta=-d+1}^{s-p-1} \tilde{\varphi}^{T}(\eta)\, \zeta(\eta+d,\ s-\eta-p-d)\, F(s).$$
(45)

Substituting F into (45) leads to

$$\langle \tilde{\varphi}(\alpha), F(\alpha)\rangle = \tilde{\varphi}^{T}(-d) \sum_{k=0}^{p-1} X(p, k+1)\, f(k) + \sum_{s=-d}^{0} \Delta_s h(s) \sum_{r=0}^{s+p-1} X(s+p, r+1)\, f(r),$$

where h(s) = Σ_{η=-d+1}^{s-p-1} φ̃^T(η) ζ(η + d, s - η - p - d). Setting φ̃(s) = ψ(s + p + d) and interchanging the order of summations, we obtain

$$\langle \tilde{\varphi}(\alpha), F(\alpha)\rangle = \psi^{T}(p) \sum_{k=0}^{p-1} X(p, k+1)\, f(k) + \sum_{k=0}^{p-1} \sum_{s=p-d}^{p} \Delta_s \sum_{\alpha=p+1}^{s+d-1} \psi^{T}(\alpha)\, \zeta(\alpha, s-\alpha)\, X(s-1, k+1)\, f(k).$$

Reordering the terms, we end up with

$$\langle \tilde{\varphi}(\alpha), F(\alpha)\rangle = \sum_{k=0}^{p-1} \Big[ \psi^{T}(p)\, X(p, k+1) + \sum_{s=p-d}^{p} \Delta_s \sum_{\alpha=p+1}^{s+d-1} \psi^{T}(\alpha)\, \zeta(\alpha, s-\alpha)\, X(s-1, k+1) \Big]\, f(k).$$

In view of Lemma 2.7, we see that the right-hand side of the above equation is nothing but

$$\sum_{k=0}^{p-1} y^{T}(k+1)\, f(k),$$

which is clearly zero by our assumption (42). The proof is finished.

Example 1 Equations (4) and (7) can be reduced to the following difference equations with pure delays

$$\Delta x(n) = A(n)\, x(n) + B(n+1)\, x(n-j+1), \quad n \geq 1,$$
(46)

and

$$\Delta y(n) = -A^{T}(n)\, y(n+1) - B^{T}(n+j)\, y(n+j),$$
(47)

where j > 2 is a fixed positive integer and A, B: ℕ → ℝ^{m×m} are p-periodic sequences, p > j. In view of [5, Lemma 2], we find that < y(n), x(n) > = constant, where

$$\langle y(n), x(n)\rangle = y^{T}(n)\, x(n) + \sum_{k=n+1}^{n+j-1} y^{T}(k)\, B(k)\, x(k-j).$$
(48)
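
As a quick numerical sanity check of this reduction (a hedged sketch, not part of the original argument), one can integrate (46) forward and (47) backward with hypothetical p-periodic coefficients and confirm that the bracket (48) stays constant, as Lemma 2.1 predicts. All concrete values below (m, j, p, the coefficient matrices, the initial and terminal data) are illustrative choices only.

```python
import numpy as np

rng = np.random.default_rng(0)
m, j, p, N = 2, 4, 7, 20                # illustrative sizes: delay j > 2, period p > j
A = [0.1 * rng.standard_normal((m, m)) for _ in range(p)]
B = [0.1 * rng.standard_normal((m, m)) for _ in range(p)]
Ap = lambda n: A[n % p]                 # hypothetical p-periodic coefficients
Bp = lambda n: B[n % p]
I = np.eye(m)

# Forward solution of (46): x(n+1) = (I + A(n)) x(n) + B(n+1) x(n - j + 1).
x = {n: rng.standard_normal(m) for n in range(-j + 2, 2)}          # initial data
for n in range(1, N):
    x[n + 1] = (I + Ap(n)) @ x[n] + Bp(n + 1) @ x[n - j + 1]

# Backward solution of (47): y(n) = (I + A(n))^T y(n+1) + B(n+j)^T y(n+j).
y = {n: rng.standard_normal(m) for n in range(N + 1, N + j + 1)}   # terminal data
for n in range(N, 0, -1):
    y[n] = (I + Ap(n)).T @ y[n + 1] + Bp(n + j).T @ y[n + j]

def bracket(n):
    # The bracket (48): y^T(n) x(n) + sum_{k=n+1}^{n+j-1} y^T(k) B(k) x(k-j).
    total = y[n] @ x[n]
    for k in range(n + 1, n + j):
        total += y[k] @ Bp(k) @ x[k - j]
    return total

vals = [bracket(n) for n in range(j, 15)]
print(max(abs(v - vals[0]) for v in vals))   # tiny (round-off level): the bracket is constant
```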

As a particular case, we take A(n) = 3, B(n) = 5, j = 3, and f(n) = cos(nπ/2). Then the equations

$$\Delta x(n) = 3x(n) + 5x(n-2) + \cos\frac{n\pi}{2}, \quad n \geq 1,$$
(49)

and

$$\Delta y(n) = -3y(n+1) - 5y(n+3)$$
(50)

are mutually adjoint to each other with respect to the function

$$\langle y(n), x(n)\rangle = y(n)\, x(n) + 5 \sum_{k=n+1}^{n+2} y(k)\, x(k-3).$$
(51)

One can easily see that cos(nπ/2) is periodic of period 4, so p = 4 > 3. It follows that condition (42) becomes

$$\sum_{k=0}^{3} y(k+1)\, \cos\frac{k\pi}{2} = y(1) - y(3),$$

which equals zero for any periodic solution y of Equation (50) satisfying the initial condition y(1) - y(3) = 0. By the result of Theorem 3.3, we conclude that there exist periodic solutions of period 4 for Equation (49).
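
As a hedged numerical check of this example (an illustrative sketch, not part of the original argument), one can impose 4-periodicity directly: writing x(n + 4) = x(n) turns (49) into a 4 × 4 linear system (a concrete instance of the fixed-point condition Wφ = φ of Section 3), and the analogous homogeneous system for (50) identifies its 4-periodic solutions, so condition (42), i.e. y(1) - y(3) = 0, can be inspected.

```python
import numpy as np

p = 4
f = lambda n: np.cos(n * np.pi / 2)

# Period-4 solutions of (49): impose x(n + 4) = x(n) in x(n+1) = 4x(n) + 5x(n-2) + f(n).
Ax, bx = np.zeros((p, p)), np.zeros(p)
for n in range(1, p + 1):
    Ax[n - 1, n % p]       += 1.0          # coefficient of x(n+1)
    Ax[n - 1, (n - 1) % p] -= 4.0          # coefficient of x(n)
    Ax[n - 1, (n - 3) % p] -= 5.0          # coefficient of x(n-2)
    bx[n - 1] = f(n)
x_per = np.linalg.solve(Ax, bx)            # this particular system turns out to be nonsingular
print("period-4 solution of (49):", x_per)

# Period-4 solutions of (50): impose y(n + 4) = y(n) in y(n) = 4y(n+1) + 5y(n+3).
Ay = np.zeros((p, p))
for n in range(1, p + 1):
    Ay[n - 1, (n - 1) % p] += 1.0          # coefficient of y(n)
    Ay[n - 1, n % p]       -= 4.0          # coefficient of y(n+1)
    Ay[n - 1, (n + 2) % p] -= 5.0          # coefficient of y(n+3)
null_dim = p - np.linalg.matrix_rank(Ay)
print("independent 4-periodic solutions of (50):", null_dim)     # 0: only y = 0
# Here every 4-periodic solution of (50) trivially satisfies y(1) - y(3) = 0, so
# condition (42) holds and a period-4 solution of (49) exists, matching x_per above.
```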