1 Introduction

Consider the following equations:

$$
\begin{cases}
u_{tt} + u_{xxxx} + \delta u_t + k u^{+} = l + \epsilon h(x,t), & \text{in } (0,L)\times\mathbb{R},\\
u(0,t) = u(L,t) = u_{xx}(0,t) = u_{xx}(L,t) = 0, & t\in\mathbb{R}.
\end{cases}
$$
(1.1)

The suspension bridge equation (1.1) was proposed by Lazer and McKenna in 1990 as a new problem in the field of nonlinear analysis [1]. The model is derived as follows. In the suspension bridge system, the bridge is regarded as an elastic, unloaded beam with hinged ends; $u(x,t)$ denotes the deflection in the downward direction, and $\delta u_t$ represents viscous damping. The restoring force due to the cables is modeled by a one-sided Hooke's law: the cables strongly resist expansion but do not resist compression. The simplest function modeling the restoring force of the stays is a constant $k$ times $u$, the expansion, when $u$ is positive, and zero when $u$ is negative, corresponding to compression; that is, $ku^{+}$, where

$$
u^{+}=\begin{cases} u, & \text{if } u>0,\\ 0, & \text{if } u\le 0. \end{cases}
$$

In addition, the right-hand side of (1.1) contains two terms: a large positive term $l$ corresponding to gravity, and a small oscillatory forcing term $\epsilon h(x,t)$, possibly aerodynamic in origin, where $\epsilon$ is small.
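To convey how the model (1.1) behaves before the analysis begins, the following is a minimal numerical sketch of a one-dimensional finite-difference discretization with hinged ends. Everything in it (grid, time step, parameter values, and the forcing profile) is an assumption chosen purely for illustration and is not taken from [1] or from the analysis below.

```python
import numpy as np

# Illustrative sketch of the 1-D model (1.1):
#     u_tt + u_xxxx + delta*u_t + k*u^+ = l + eps*h(x,t),
# with hinged ends u = u_xx = 0 at x = 0 and x = L.
# All parameter values below are assumptions chosen only for demonstration.

L_len, N = 1.0, 64                        # beam length and number of interior nodes
dx = L_len / (N + 1)
x = np.linspace(dx, L_len - dx, N)        # interior grid points
delta, k, l_grav, eps = 0.05, 10.0, 1.0, 0.1   # damping, spring constant, gravity, forcing size

def fourth_derivative(u):
    """Second-order finite difference for u_xxxx; the hinged boundary conditions
    u = u_xx = 0 give the ghost values u_{-1} = -u_1 and u_{N+2} = -u_N."""
    ext = np.concatenate(([-u[0], 0.0], u, [0.0, -u[-1]]))
    return (ext[:-4] - 4*ext[1:-3] + 6*ext[2:-2] - 4*ext[3:-1] + ext[4:]) / dx**4

def forcing(x, t):
    # assumed small oscillatory forcing profile h(x, t)
    return np.sin(np.pi * x / L_len) * np.cos(t)

u = np.zeros(N)                 # downward deflection
v = np.zeros(N)                 # velocity u_t
dt = 0.2 * dx**2                # explicit stepping needs dt = O(dx^2) for stability
for step in range(20000):
    t = step * dt
    accel = (-fourth_derivative(u) - delta*v - k*np.maximum(u, 0.0)
             + l_grav + eps*forcing(x, t))
    v += dt * accel             # symplectic (semi-implicit) Euler step
    u += dt * v

print("max deflection after t = %.3f: %.4f" % (20000 * dt, u.max()))
```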

There are many results for problem (1.1) (cf. [1–7]), for instance on the existence, multiplicity, and properties of traveling wave solutions.

In the study of equations of mathematical physics, the attractor is the appropriate mathematical object for describing the behavior of solutions as time becomes large or tends to infinity: it captures all possible limits of solutions. Over the past two decades, many authors have proved the existence of attractors and discussed their properties for various models of mathematical physics (e.g., see [8–10] and the references therein). Concerning the long-time behavior of suspension bridge-type equations in the autonomous case, the authors of [11, 12] discussed the long-time behavior of solutions of the problem on $\mathbb{R}^2$ and obtained the existence of global attractors in the spaces $H_0^2(\Omega)\times L^2(\Omega)$ and $D(A)\times H_0^2(\Omega)$.

It is well known that, since real-world systems are affected by many kinds of factors, a non-autonomous model is more natural and precise than an autonomous one; moreover, such models typically lead to nonlinear rather than merely linear equations. Therefore, in this paper we discuss the following non-autonomous suspension bridge-type equation. Let $\Omega$ be an open bounded subset of $\mathbb{R}^2$ with smooth boundary and set $\mathbb{R}_\tau=[\tau,+\infty)$. Adding a nonlinear forcing term $g(u,t)$ (depending on the deflection $u$ and on time $t$) to (1.1) and neglecting gravity, we obtain the following initial-boundary value problem:

$$
\begin{cases}
u_{tt} + \Delta^2 u + \alpha u_t + k u^{+} + g(u,t) = h(x,t), & \text{in } \Omega\times\mathbb{R}_\tau,\\
u(x,t) = \Delta u(x,t) = 0, & \text{on } \partial\Omega\times\mathbb{R}_\tau,\\
u(x,\tau) = u_1(x), \qquad u_t(x,\tau) = u_2(x), & x\in\Omega,
\end{cases}
$$
(1.2)

where $u(x,t)$ is an unknown function representing the deflection of the road bed in the vertical plane; $h(x,t)$ and $g(u,t)$ are time-dependent external forces; $ku^{+}$ represents the restoring force, with $k$ the spring constant; and $\alpha u_t$ represents viscous damping, where $\alpha$ is a given positive constant.

To our knowledge, this is the first time the non-autonomous dynamics of equation (1.2) have been considered. Moreover, on the mathematical side, we only assume that the forcing term $h(x,t)$ satisfies the so-called Condition (C*) (introduced in [13]), which is weaker than the assumption of translation compactness (see [8] or Section 2 below).

This paper is organized as follows. In Section 2 we recall some preliminaries, including the notation we will use, the assumptions on the nonlinearity $g(\cdot,t)$, and some general abstract results for non-autonomous dynamical systems. In Section 3 we prove our main result on the existence of a uniform attractor for the non-autonomous dynamical system generated by the solutions of (1.2).

2 Notation and preliminaries

With the usual notation, we introduce the spaces $H=L^2(\Omega)$, $V=H^2(\Omega)\cap H_0^1(\Omega)$, and $D(A)=\{u\in H^2(\Omega)\cap H_0^1(\Omega)\mid Au\in L^2(\Omega)\}$, where $A=\Delta^2$. We equip these spaces with the inner products and norms $\langle\cdot,\cdot\rangle$, $\|\cdot\|$; $\langle\cdot,\cdot\rangle_1$, $\|\cdot\|_1$; and $\langle\cdot,\cdot\rangle_2$, $\|\cdot\|_2$, respectively:

$$
\begin{aligned}
&\langle u,v\rangle=\int_\Omega u(x)v(x)\,dx, \qquad \|u\|^2=\int_\Omega|u(x)|^2\,dx, && \forall u,v\in H;\\
&\langle u,v\rangle_1=\int_\Omega\Delta u(x)\,\Delta v(x)\,dx, \qquad \|u\|_1^2=\int_\Omega|\Delta u(x)|^2\,dx, && \forall u,v\in V;\\
&\langle u,v\rangle_2=\int_\Omega\Delta^2 u(x)\,\Delta^2 v(x)\,dx, \qquad \|u\|_2^2=\int_\Omega|\Delta^2 u(x)|^2\,dx, && \forall u,v\in D(A).
\end{aligned}
$$

Obviously, we have

$$
D(A)\subset V\subset H=H'\subset V',
$$

where $H'$ and $V'$ denote the dual spaces of $H$ and $V$, respectively; the injections are continuous and each space is dense in the following one.

We now state the assumptions on the nonlinearity $g$. Let $g$ be a $C^1$ function from $\mathbb{R}\times\mathbb{R}$ to $\mathbb{R}$ satisfying

$$
\liminf_{|u|\to\infty}\frac{G(u,s)}{u^2}\ge 0,
$$
(2.1)

where $G(u,s)=\int_0^u g(w,s)\,dw$, and there exists $C_0>0$ such that

$$
\liminf_{|u|\to\infty}\frac{u\,g(u,s)-C_0\,G(u,s)}{u^2}\ge 0.
$$
(2.2)

Suppose that $\gamma$ is an arbitrary positive constant, and

$$
\bigl|g_u(u,s)\bigr|\le C_1\bigl(1+|u|^{\gamma}\bigr), \qquad \bigl|g_s(u,s)\bigr|\le C_1\bigl(1+|u|^{\gamma+1}\bigr),
$$
(2.3)
$$
G_s(u,s)\le\frac{\delta}{2}G(u,s)+C_2, \quad \forall(u,s)\in\mathbb{R}\times\mathbb{R},
$$
(2.4)

where δ is a sufficiently small constant.
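For orientation, here is one admissible nonlinearity (an illustrative example, not taken from the source) satisfying (2.1)-(2.4) with $\gamma=2$, $C_0=4$, and $C_2=0$:

$$
g(u,s)=u^3\Bigl(1+\frac{1}{2}\sin\frac{\delta s}{2}\Bigr), \qquad G(u,s)=\frac{u^4}{4}\Bigl(1+\frac{1}{2}\sin\frac{\delta s}{2}\Bigr),
$$

since $G(u,s)\ge u^4/8\ge 0$, $u\,g(u,s)=4\,G(u,s)$, $|g_u(u,s)|\le\frac{9}{2}u^2$, $|g_s(u,s)|\le\frac{\delta}{4}|u|^3$, and $G_s(u,s)\le\frac{\delta}{16}u^4\le\frac{\delta}{2}G(u,s)$.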

As a consequence of (2.1)-(2.2), if we set $\mathcal{G}(\varphi,s)=\int_\Omega G(\varphi,s)\,dx$, then there exist two positive constants $K_1$, $K_2$ such that

$$
\mathcal{G}(\varphi,s)+m\|\varphi\|^2+K_1\ge 0,
$$
(2.5)
$$
\bigl\langle\varphi,g(\varphi,s)\bigr\rangle-C_0\,\mathcal{G}(\varphi,s)+m\|\varphi\|^2+K_2\ge 0, \quad \forall\varphi\in V,\ s\in\mathbb{R},
$$
(2.6)

where $m,C_0>0$, and $m$ can be taken sufficiently small.

By virtue of (2.3), we get

$$
\bigl|g(u,s)\bigr|\le C_3\bigl(1+|u|^{\gamma+1}\bigr), \qquad \bigl|G(u,s)\bigr|\le C_3\bigl(1+|u|^{\gamma+2}\bigr).
$$
(2.7)

With $A=\Delta^2$, problem (1.2) is equivalent to the following equations in $H$:

$$
\begin{cases}
u_{tt}+\alpha u_t+Au+ku^{+}+g(u,t)=h(x,t),\\
u(\tau)=u_1, \qquad u_t(\tau)=u_2.
\end{cases}
$$
(2.8)

By the Poincaré inequality, there exists a constant $\lambda_1>0$ such that

$$
\lambda_1\|u\|^2\le\|u\|_1^2, \quad \forall u\in V.
$$
(2.9)

We introduce the Hilbert space

$$
E_0=V\times H,
$$

and endow this space with the norm

$$
\|z\|_{E_0}=\bigl\|(u,u_t)\bigr\|_{E_0}=\Bigl(\tfrac{1}{2}\bigl(\|u\|_1^2+\|u_t\|^2\bigr)\Bigr)^{1/2}.
$$

To prove the existence of uniform attractors corresponding to (2.8), we also need the following abstract results (e.g., see [8]).

Let $E$ be a Banach space, and consider a two-parameter family of mappings $\{U(t,\tau)\}=\{U(t,\tau)\mid t\ge\tau,\ \tau\in\mathbb{R}\}$ on $E$:

$$
U(t,\tau): E\to E, \quad t\ge\tau,\ \tau\in\mathbb{R}.
$$

Definition 2.1 ([8])

Let $\Sigma$ be a parameter set. The collection $\{U_\sigma(t,\tau)\mid t\ge\tau,\ \tau\in\mathbb{R}\}$, $\sigma\in\Sigma$, is said to be a family of processes in a Banach space $E$ if, for each $\sigma\in\Sigma$, $\{U_\sigma(t,\tau)\}$ is a process; that is, the two-parameter family of mappings $\{U_\sigma(t,\tau)\}$ from $E$ to $E$ satisfies

$$
U_\sigma(t,s)\,U_\sigma(s,\tau)=U_\sigma(t,\tau), \quad \forall t\ge s\ge\tau,\ \tau\in\mathbb{R},
$$
(2.10)
$$
U_\sigma(\tau,\tau)=I \ \text{(the identity operator)}, \quad \forall\tau\in\mathbb{R},
$$
(2.11)

where $\Sigma$ is called the symbol space and $\sigma\in\Sigma$ is the symbol.

Note that the following translation identity holds for a general family of processes $\{U_\sigma(t,\tau)\}$, $\sigma\in\Sigma$, provided the underlying problem is uniquely solvable and the translation semigroup $\{T(l)\mid l\ge 0\}$ satisfies $T(l)\Sigma=\Sigma$:

$$
U_\sigma(t+l,\tau+l)=U_{T(l)\sigma}(t,\tau), \quad \forall\sigma\in\Sigma,\ t\ge\tau,\ \tau\in\mathbb{R},\ l\ge 0.
$$

A set $B_0\subset E$ is said to be a uniformly (w.r.t. $\sigma\in\Sigma$) absorbing set for the family of processes $\{U_\sigma(t,\tau)\}$, $\sigma\in\Sigma$, if for any $\tau\in\mathbb{R}$ and $B\in\mathfrak{B}(E)$ (the collection of bounded subsets of $E$) there exists $t_0=t_0(\tau,B)\ge\tau$ such that $\bigcup_{\sigma\in\Sigma}U_\sigma(t,\tau)B\subset B_0$ for all $t\ge t_0$. A set $Y\subset E$ is said to be uniformly (w.r.t. $\sigma\in\Sigma$) attracting for the family of processes $\{U_\sigma(t,\tau)\}$, $\sigma\in\Sigma$, if for any fixed $\tau\in\mathbb{R}$ and every $B\in\mathfrak{B}(E)$,

$$
\lim_{t\to+\infty}\Bigl(\sup_{\sigma\in\Sigma}\operatorname{dist}_E\bigl(U_\sigma(t,\tau)B,\,Y\bigr)\Bigr)=0.
$$
(2.12)

Definition 2.2 ([8])

A closed set $\mathcal{A}_\Sigma\subset E$ is said to be the uniform (w.r.t. $\sigma\in\Sigma$) attractor of the family of processes $\{U_\sigma(t,\tau)\}$, $\sigma\in\Sigma$, if it is uniformly (w.r.t. $\sigma\in\Sigma$) attracting (attracting property) and is contained in any closed uniformly (w.r.t. $\sigma\in\Sigma$) attracting set $\mathcal{A}'$ of the family of processes $\{U_\sigma(t,\tau)\}$, $\sigma\in\Sigma$: $\mathcal{A}_\Sigma\subset\mathcal{A}'$ (minimality property).

Now we recall the results in [14].

Definition 2.3 ([14])

A family of processes $\{U_\sigma(t,\tau)\}$, $\sigma\in\Sigma$, is said to satisfy the uniform (w.r.t. $\sigma\in\Sigma$) Condition (C) if for any fixed $\tau\in\mathbb{R}$, $B\in\mathfrak{B}(E)$, and $\epsilon>0$, there exist $t_0=t_0(\tau,B,\epsilon)\ge\tau$ and a finite-dimensional subspace $E_m$ of $E$ such that

(i) $P_m\bigl(\bigcup_{\sigma\in\Sigma}\bigcup_{t\ge t_0}U_\sigma(t,\tau)B\bigr)$ is bounded; and

(ii) $\bigl\|(I-P_m)U_\sigma(t,\tau)x\bigr\|_E\le\epsilon$ for every $x\in B$, $t\ge t_0$, and $\sigma\in\Sigma$,

where $\dim E_m=m$ and $P_m: E\to E_m$ is a bounded projector.

Theorem 2.4 ([14])

Let $\Sigma$ be a complete metric space, and let $\{T(t)\}$ be a continuous invariant ($T(t)\Sigma=\Sigma$) semigroup on $\Sigma$ satisfying the translation identity. Then a family of processes $\{U_\sigma(t,\tau)\}$, $\sigma\in\Sigma$, possesses a compact uniform (w.r.t. $\sigma\in\Sigma$) attractor $\mathcal{A}_\Sigma$ in $E$ satisfying

$$
\mathcal{A}_\Sigma=\omega_{0,\Sigma}(B_0)=\omega_{\tau,\Sigma}(B_0), \quad \forall\tau\in\mathbb{R},
$$
(2.13)

if it

(i) has a bounded uniformly (w.r.t. $\sigma\in\Sigma$) absorbing set $B_0$; and

(ii) satisfies the uniform (w.r.t. $\sigma\in\Sigma$) Condition (C),

where $\omega_{\tau,\Sigma}(B_0)=\bigcap_{t\ge\tau}\overline{\bigcup_{\sigma\in\Sigma}\bigcup_{s\ge t}U_\sigma(s,\tau)B_0}$. Moreover, if $E$ is a uniformly convex Banach space, then the converse is also true.

Let $X$ be a Banach space. Consider the space $L^2_{loc}(\mathbb{R};X)$ of functions $\phi(s)$, $s\in\mathbb{R}$, with values in $X$ that are locally 2-power integrable in the Bochner sense. $L^2_c(\mathbb{R};X)$ denotes the set of all translation compact functions in $L^2_{loc}(\mathbb{R};X)$, and $L^2_b(\mathbb{R};X)$ denotes the set of all translation bounded functions in $L^2_{loc}(\mathbb{R};X)$.

In [13], the authors introduced a new class of functions which are translation bounded but not necessarily translation compact. In Section 3, assuming only that the forcing term $h(x,t)$ satisfies Condition (C*), we prove the existence of a compact uniform (w.r.t. $\sigma\in\mathcal{H}(\sigma_0)$, $\sigma_0(s)=(g_0(u,s),h_0(x,s))$) attractor for the non-autonomous suspension bridge equation in $E_0$.

Definition 2.5 ([13])

Let $X$ be a Banach space. A function $f\in L^2_b(\mathbb{R};X)$ is said to satisfy Condition (C*) if, for any $\epsilon>0$, there exists a finite-dimensional subspace $X_1$ of $X$ such that

$$
\sup_{t\in\mathbb{R}}\int_t^{t+1}\bigl\|(I-P_m)f(s)\bigr\|_X^2\,ds<\epsilon,
$$

where $P_m: X\to X_1$ is the canonical projector.

Denote by $L^2_{c^*}(\mathbb{R};X)$ the set of all functions satisfying Condition (C*). From [13] we see that $L^2_c(\mathbb{R};X)\subset L^2_{c^*}(\mathbb{R};X)\subset L^2_b(\mathbb{R};X)$.

Remark 2.6 In fact, a function satisfying Condition (C*) possesses a dissipative property in some sense, and Condition (C*) is very natural in view of the translation compactness condition and the uniform Condition (C).
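For example (an illustrative observation, not quoted from [13]), every translation bounded function taking values in a fixed finite-dimensional subspace satisfies Condition (C*) trivially: if

$$
f(s)=\sum_{i=1}^{N}\psi_i(s)\phi_i, \qquad \phi_i\in X,\ \psi_i\in L^2_b(\mathbb{R};\mathbb{R}),
$$

then choosing $X_1=\operatorname{span}\{\phi_1,\ldots,\phi_N\}$ gives $(I-P_m)f\equiv 0$, although such an $f$ need not be translation compact.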

Lemma 2.7 ([13])

If $f\in L^2_{c^*}(\mathbb{R};X)$, then for any $\epsilon>0$ and $\tau\in\mathbb{R}$ we have

$$
\sup_{t\ge\tau}\int_\tau^t e^{-\delta(t-s)}\bigl\|(I-P_m)f(s)\bigr\|_X^2\,ds\le\epsilon,
$$

where $P_m: X\to X_1$ is the canonical projector and $\delta$ is a positive constant.

In order to define the family of processes for equations (2.8), we also need the following results.

Proposition 2.8 ([8])

If $X$ is reflexive and separable, then

(i) for all $h_1\in\mathcal{H}(h_0)$, $\|h_1\|^2_{L^2_b(\mathbb{R};X)}\le\|h_0\|^2_{L^2_b(\mathbb{R};X)}$;

(ii) the translation group $\{T(t)\}$ is weakly continuous on $\mathcal{H}(h_0)$;

(iii) $T(t)\mathcal{H}(h_0)=\mathcal{H}(h_0)$ for all $t\in\mathbb{R}^{+}$.

Proposition 2.9 ([8])

Let $g_0(s)\in L^2_c(\mathbb{R};X)$; then

(i) for all $g_1\in\mathcal{H}(g_0)$, $g_1\in L^2_c(\mathbb{R};X)$, and the set $\mathcal{H}(g_0)$ is bounded in $L^2_b(\mathbb{R};X)$;

(ii) the translation group $\{T(t)\}$ is continuous on $\mathcal{H}(g_0)$ in the topology of $L^2_{loc}(\mathbb{R};X)$;

(iii) $T(t)\mathcal{H}(g_0)=\mathcal{H}(g_0)$ for all $t\in\mathbb{R}^{+}$.

3 Uniform attractors in $E_0$

To describe the asymptotic behavior of the solutions of our system, we take $h_0\in L^2_{c^*}(\mathbb{R}_\tau;H)\subset L^2_b(\mathbb{R}_\tau;H)$ and set $\mathcal{H}(h_0)=\overline{\{h_0(x,s+l)\mid l\in\mathbb{R}\}}^{\,L^{2,w}_{loc}(\mathbb{R}_\tau;H)}$, where $\overline{\{\cdot\}}$ denotes the closure of a set in the topological space $L^{2,w}_{loc}(\mathbb{R}_\tau;H)$. If $h\in\mathcal{H}(h_0)$, then $h\in L^2_b(\mathbb{R}_\tau;H)$; that is,

$$
\sup_{t\ge\tau}\int_t^{t+1}\bigl\|h(x,s)\bigr\|^2\,ds<+\infty,
$$

where $\|\cdot\|$ denotes the norm in $H$.

3.1 Existence and uniqueness of solutions

First, we give the notion of a solution to the initial-boundary value problem (2.8).

Definition 3.1 Set $I=[\tau,T]$ for $T>\tau\ge 0$. Suppose that $k>0$, $h\in L^2_b(\mathbb{R}_\tau;H)$, and $g\in C^1(\mathbb{R}\times\mathbb{R};\mathbb{R})$ satisfies (2.1)-(2.4) and $g(0,0)=0$. A function $z=(u,u_t)\in C(I;E_0)$ is said to be a weak solution to problem (2.8) on the time interval $I$, with initial data $z(\tau)=z_\tau=(u_1,u_2)\in E_0$, provided

$$
\langle u_{tt},\bar v\rangle+\alpha\langle u_t,\bar v\rangle+\int_\Omega\Delta u\,\Delta\bar v\,dx+\int_\Omega g(u,t)\bar v\,dx+k\langle u^{+},\bar v\rangle=\int_\Omega h(x,t)\bar v\,dx,
$$
(3.1)

for all $\bar v\in V$ and a.e. $t\in I$.

Then, using the methods of [15] (the Galerkin approximation method), we obtain the following result on the existence and uniqueness of solutions:

Theorem 3.2 (Existence and uniqueness of solutions)

Define $I=[\tau,T]$, $T>\tau$. Let $k>0$, $h\in L^2_b(\mathbb{R}_\tau;H)$, and let $g\in C^1(\mathbb{R}\times\mathbb{R};\mathbb{R})$ satisfy (2.1)-(2.4). Then for any given $z_\tau\in E_0$ there is a unique solution $z=(u,u_t)$ of problem (2.8) in $E_0$. Furthermore, for $i=1,2$, let $\{z_\tau^i,h_i\}$ (with $z_\tau^i\in E_0$ and $h_i\in L^2_b(\mathbb{R};H)$) be two sets of data, and denote by $z_i$ the corresponding solutions of problem (2.8). Then the following estimate holds for all $\tau\le t\le T+\tau$:

$$
\bigl\|z_1(t)-z_2(t)\bigr\|^2_{E_0}\le Q\bigl(\|z_\tau^i\|_{E_0},T\bigr)\Bigl(\bigl\|z_\tau^1-z_\tau^2\bigr\|^2_{E_0}+\|h_1-h_2\|^2_{L^2_b(\mathbb{R};H)}\Bigr).
$$
(3.2)

Thus (2.8) can be written as an evolutionary system. Setting $z(t)=(u(t),u_t(t))$ and $z_\tau=z(\tau)=(u_1,u_2)$ for brevity, with $\|z\|^2_{E_0}=\frac{1}{2}(\|u\|_1^2+\|u_t\|^2)$, system (2.8) can be written in the operator form

$$
\partial_t z=A_{\sigma(t)}(z), \qquad z|_{t=\tau}=z_\tau,
$$
(3.3)

where $\sigma(s)=(g(u,s),h(x,s))$ is the symbol of (3.3). If $z_\tau\in E_0$, then problem (3.3) has a unique solution $z(t)\in C(\mathbb{R}_\tau,E_0)$. This implies that the process $\{U_\sigma(t,\tau)\}$ given by the formula $U_\sigma(t,\tau)z_\tau=z(t)$ is well defined in $E_0$.

Now we define the symbol space. Fix a symbol $\sigma_0(s)=(g_0(u,s),h_0(x,s))$, where $h_0(x,s)\in L^2_{c^*}(\mathbb{R}_\tau;H)$, the function $g_0(u,s)\in L^2_c(\mathbb{R}_\tau;\mathcal{M})$ satisfies (2.1)-(2.4), and $\mathcal{M}$ is the Banach space

$$
\mathcal{M}=\Bigl\{g\in C^1(\mathbb{R}\times\mathbb{R},\mathbb{R})\Bigm|\sup_{u\in\mathbb{R}}\Bigl(\frac{|g(u,s)|+|g_s(u,s)|}{|u|^{\gamma+1}+1}+\frac{|g_u(u,s)|}{|u|^{\gamma}+1}\Bigr)<\infty\Bigr\},
$$

endowed with the following norm:

$$
\|g\|_{\mathcal{M}}=\sup_{u\in\mathbb{R}}\Bigl\{\frac{|g(u,s)|+|g_s(u,s)|}{|u|^{\gamma+1}+1}+\frac{|g_u(u,s)|}{|u|^{\gamma}+1}\Bigr\}.
$$

Obviously, the function $\sigma_0(s)=(g_0(u,s),h_0(x,s))$ belongs to $L^2_c(\mathbb{R}_\tau;\mathcal{M})\times L^2_{c^*}(\mathbb{R}_\tau;H)$. We define $\mathcal{H}(\sigma_0)=\mathcal{H}(g_0)\times\mathcal{H}(h_0)=\overline{\{g_0(u,s+l)\mid l\in\mathbb{R}\}}^{\,L^{2,w}_{loc}(\mathbb{R}_\tau;\mathcal{M})}\times\overline{\{h_0(x,s+l)\mid l\in\mathbb{R}\}}^{\,L^{2,w}_{loc}(\mathbb{R}_\tau;H)}$, where $\overline{\{\cdot\}}$ denotes the closure of a set in the topological space $L^{2,w}_{loc}(\mathbb{R}_\tau;\mathcal{M})$ (respectively $L^{2,w}_{loc}(\mathbb{R}_\tau;H)$). Hence, if $(g,h)\in\mathcal{H}(\sigma_0)$, then $g(u,t)$ and $h(x,t)$ both satisfy Condition (C*).

Applying Proposition 2.8, Proposition 2.9, and Theorem 3.2, we see that the family of processes $\{U_\sigma(t,\tau)\}: E_0\to E_0$, $\sigma\in\mathcal{H}(\sigma_0)$, $t\ge\tau$, is well defined. Furthermore, the translation semigroup $\{T(l)\mid l\in\mathbb{R}^{+}\}$ satisfies $T(l)\mathcal{H}(\sigma_0)=\mathcal{H}(\sigma_0)$ for all $l\in\mathbb{R}^{+}$, and the following translation identity holds:

$$
U_\sigma(t+l,\tau+l)=U_{T(l)\sigma}(t,\tau), \quad \forall\sigma\in\mathcal{H}(\sigma_0),\ t\ge\tau\ge 0,\ l\ge 0.
$$

Then, for any $\sigma\in\mathcal{H}(\sigma_0)$, problem (3.3) with $\sigma$ in place of $\sigma_0$ possesses a corresponding process $\{U_\sigma(t,\tau)\}$ acting on $E_0$.

Consequently, for each $\sigma\in\mathcal{H}(\sigma_0)$, with $\sigma_0(s)=(g_0(u,s),h_0(x,s))$ (here $h_0(x,s)\in L^2_{c^*}(\mathbb{R}_\tau;H)$ and $g_0(u,s)\in L^2_c(\mathbb{R}_\tau;\mathcal{M})$ satisfies (2.1)-(2.4)), we can define a process

$$
U_\sigma(t,\tau): E_0\to E_0, \qquad z_\tau=(u_1,u_2)\mapsto\bigl(u(t),u_t(t)\bigr)=U_\sigma(t,\tau)z_\tau,
$$

and $\{U_\sigma(t,\tau)\}$, $\sigma\in\mathcal{H}(\sigma_0)$, is a family of processes on $E_0$.

3.2 Bounded uniformly absorbing set

Before showing the existence of a bounded uniformly absorbing set, we first establish an a priori estimate for the solutions of equations (2.8) in $E_0$.

Lemma 3.3 Assume that $z(t)$ is a solution of (2.8) with initial data $z_\tau\in B$. If the nonlinearity $g(u,t)$ satisfies (2.1)-(2.4), $h_0\in L^2_{c^*}(\mathbb{R}_\tau;H)$, $h\in\mathcal{H}(h_0)$, and $k>0$, then there is a positive constant $\mu_0$ such that for any bounded (in $E_0$) subset $B$ there exists $t_0=t_0(\|B\|_{E_0})$ such that

$$
\bigl\|z(t)\bigr\|^2_{E_0}=\frac{1}{2}\bigl(\|u\|_1^2+\|u_t\|^2\bigr)\le\mu_0^2, \quad \forall t\ge t_0=t_0\bigl(\|B\|_{E_0}\bigr).
$$
(3.4)

Proof We prove that $z=(u,u_t)$ is bounded in $E_0=V\times H$.

We assume that $\varrho$ is positive and satisfies

$$
0<\varrho(\alpha-\varrho)<\lambda_1.
$$
(3.5)

Multiplying (2.8) by $v(t)=u_t(t)+\varrho u(t)$ and integrating over $\Omega$, we have

$$
\frac{1}{2}\frac{d}{dt}\bigl(\|v\|^2+\|u\|_1^2\bigr)+\varrho\|u\|_1^2+(\alpha-\varrho)\|v\|^2-\varrho(\alpha-\varrho)\langle u,v\rangle+k\langle u^{+},v\rangle+\langle g(u,t),v\rangle=\langle h(t),v\rangle.
$$
(3.6)

We easily see that

$$
\varrho(\alpha-\varrho)\langle u,v\rangle\le\frac{(\alpha-\varrho)\|v\|^2}{4}+(\alpha-\varrho)\varrho^2\|u\|^2,
$$
(3.7)
$$
\langle h(t),v\rangle\le\frac{(\alpha-\varrho)\|v\|^2}{4}+\frac{\|h(t)\|^2}{\alpha-\varrho}.
$$
(3.8)

Then, substituting (3.7)-(3.8) into (3.6), we obtain

$$
\frac{d}{dt}\bigl(\|v\|^2+\|u\|_1^2\bigr)+2\varrho\|u\|_1^2+(\alpha-\varrho)\|v\|^2-2\varrho^2(\alpha-\varrho)\|u\|^2+2k\langle u^{+},v\rangle+2\langle g,v\rangle\le\frac{2\|h(t)\|^2}{\alpha-\varrho}.
$$
(3.9)

In view of (2.4) and (2.6), we know

$$
\begin{aligned}
\langle g,v\rangle&=\langle g,u_t+\varrho u\rangle=\frac{d}{dt}\int_\Omega G\bigl(u(x,t),t\bigr)\,dx+\varrho\bigl\langle g(u,t),u\bigr\rangle-\int_\Omega G_s\bigl(u(x,t),t\bigr)\,dx\\
&=\frac{d}{dt}\int_\Omega G\bigl(u(x,t),t\bigr)\,dx+\varrho\int_\Omega g\bigl(u(x,t),t\bigr)u(x,t)\,dx-\varrho C_0\int_\Omega G\bigl(u(x,t),t\bigr)\,dx\\
&\quad+\varrho C_0\int_\Omega G\bigl(u(x,t),t\bigr)\,dx-\int_\Omega G_s\bigl(u(x,t),t\bigr)\,dx\\
&\ge\frac{d}{dt}\mathcal{G}\bigl(u(t),t\bigr)+\varrho C_0\,\mathcal{G}\bigl(u(t),t\bigr)-\varrho\bigl(m\|u\|^2+K_2\bigr)-\frac{\delta}{2}\mathcal{G}\bigl(u(t),t\bigr)-C_2|\Omega|
\end{aligned}
$$
(3.10)

and

$$
k\langle u^{+},v\rangle=\frac{1}{2}\frac{d}{dt}k\|u^{+}\|^2+\varrho k\|u^{+}\|^2.
$$
(3.11)

Consequently,

$$
\begin{aligned}
&\frac{d}{dt}\Bigl(\|v\|^2+\|u\|_1^2+k\|u^{+}\|^2+2\mathcal{G}\bigl(u(t),t\bigr)\Bigr)+(\alpha-\varrho)\|v\|^2+\frac{2\varrho}{\lambda_1}\bigl(\lambda_1-\varrho(\alpha-\varrho)-m\bigr)\|u\|_1^2\\
&\qquad+2\varrho k\|u^{+}\|^2+\Bigl(\varrho C_0-\frac{\delta}{2}\Bigr)2\mathcal{G}\bigl(u(t),t\bigr)\le\frac{2\|h(t)\|^2}{\alpha-\varrho}+2\bigl(\varrho K_2+C_2|\Omega|\bigr).
\end{aligned}
$$
(3.12)

We introduce the functional as follows:

$$
y(t)=\|v\|^2+\|u\|_1^2+k\|u^{+}\|^2+2\mathcal{G}\bigl(u(t),t\bigr)+2K_1, \quad t\ge\tau.
$$
(3.13)

Setting $\beta=\min\{\alpha-\varrho,\ 2\varrho\lambda_1^{-1}(\lambda_1-\varrho(\alpha-\varrho)-m),\ 2\varrho,\ \varrho C_0-\frac{\delta}{2}\}$, we choose suitable positive constants $m$ and $\delta$ such that

$$
m<\lambda_1-\varrho(\alpha-\varrho), \qquad \frac{\delta}{2}<\varrho C_0
$$
(3.14)

hold; then $\beta>0$.

We define $m_h(t)=\|h(t)\|^2$; then

$$
\frac{d}{dt}y(t)+\beta y(t)\le C_4+C_5\,m_h(t),
$$
(3.15)

where $C_4=2(\varrho K_2+C_2|\Omega|)+2\beta K_1$ and $C_5=2(\alpha-\varrho)^{-1}$.

Analogously to the proof of Lemma 2.1.3 in [8], we can estimate the integral and obtain

$$
\begin{aligned}
y(t)&\le y(\tau)e^{-\beta(t-\tau)}+C_4\beta^{-1}\bigl(1-e^{-\beta(t-\tau)}\bigr)+C_5\int_\tau^t m_h(s)e^{-\beta(t-s)}\,ds\\
&\le y(\tau)e^{-\beta(t-\tau)}+C_4\beta^{-1}\bigl(1-e^{-\beta(t-\tau)}\bigr)+C_5\int_{t-1}^t m_h(s)e^{-\beta(t-s)}\,ds+C_5\int_{t-2}^{t-1}m_h(s)e^{-\beta(t-s)}\,ds+\cdots\\
&\le y(\tau)e^{-\beta(t-\tau)}+C_4\beta^{-1}\bigl(1-e^{-\beta(t-\tau)}\bigr)+C_5\int_{t-1}^t m_h(s)\,ds+C_5e^{-\beta}\int_{t-2}^{t-1}m_h(s)\,ds+C_5e^{-2\beta}\int_{t-3}^{t-2}m_h(s)\,ds+\cdots\\
&\le y(\tau)e^{-\beta(t-\tau)}+C_4\beta^{-1}\bigl(1-e^{-\beta(t-\tau)}\bigr)+C_5\,m\bigl(1+e^{-\beta}+e^{-2\beta}+\cdots\bigr)\\
&\le y(\tau)e^{-\beta(t-\tau)}+C_4\beta^{-1}\bigl(1-e^{-\beta(t-\tau)}\bigr)+C_5\,m\bigl(1+\beta^{-1}\bigr)\\
&\le y(\tau)e^{-\beta(t-\tau)}+C_4\beta^{-1}+C_5\,m\bigl(1+\beta^{-1}\bigr), \quad t\ge\tau,
\end{aligned}
$$
(3.16)

where $m=\sup_{t\ge\tau}\int_t^{t+1}m_h(s)\,ds$.
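In the last two steps above, the geometric series is summed and bounded as follows (a small computation included here for completeness):

$$
1+e^{-\beta}+e^{-2\beta}+\cdots=\frac{1}{1-e^{-\beta}}\le 1+\frac{1}{\beta},
$$

since $1-e^{-\beta}\ge\frac{\beta}{1+\beta}$, i.e. $e^{\beta}\ge 1+\beta$, for every $\beta>0$.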

By virtue of (2.5) and (2.9), we get

$$
2\mathcal{G}(u,t)\ge-2m\|u\|^2-2K_1\ge-2m\lambda_1^{-1}\|u\|_1^2-2K_1.
$$

Choosing $m\le\lambda_1/4$, we obtain from (3.13)

$$
\begin{aligned}
y(t)&=\|u\|_1^2+\|u_t+\varrho u\|^2+k\|u^{+}\|^2+2\mathcal{G}(u,t)+2K_1\\
&\ge\frac{1}{2}\|u\|_1^2+\|u_t+\varrho u\|^2+k\|u^{+}\|^2\\
&\ge\bigl\|z(t)\bigr\|^2_{E_0}.
\end{aligned}
$$
(3.17)

In view of (2.7) and $0<\gamma<\infty$, we see that

$$
2\mathcal{G}\bigl(u_\tau(x),\tau\bigr)\le 2C_3\int_\Omega\bigl(|u_\tau(x)|^{\gamma+2}+1\bigr)\,dx\le C_6\bigl(\|u_\tau\|_1^{\gamma+2}+1\bigr)
$$
(3.18)

and

$$
\begin{aligned}
y(\tau)&=\bigl\|u(\tau)\bigr\|_1^2+\bigl\|u_t(\tau)+\varrho u(\tau)\bigr\|^2+k\bigl\|\bigl(u(\tau)\bigr)^{+}\bigr\|^2+2\mathcal{G}\bigl(u(\tau),\tau\bigr)+2K_1\\
&\le C_7\bigl(\|z(\tau)\|_{E_0}^{\gamma+2}+1\bigr).
\end{aligned}
$$
(3.19)

Combining (3.16), (3.17), and (3.19), we deduce that

$$
\begin{aligned}
\bigl\|z(t)\bigr\|^2_{E_0}&\le y(\tau)e^{-\beta(t-\tau)}+C_4\beta^{-1}+C_5\,m\bigl(1+\beta^{-1}\bigr)\\
&\le C_7\bigl(\|z(\tau)\|_{E_0}^{\gamma+2}+1\bigr)e^{-\beta(t-\tau)}+C_4\beta^{-1}+C_5\,m\bigl(1+\beta^{-1}\bigr)\\
&\le C_7\|z(\tau)\|_{E_0}^{\gamma+2}e^{-\beta(t-\tau)}+C_8, \quad \forall t\ge\tau.
\end{aligned}
$$

Assuming that $\|z(\tau)\|^2_{E_0}\le R$, for $t\ge t_0=t_0(\|B\|_{E_0})$ we have

$$
\bigl\|z(t)\bigr\|_{E_0}\le\mu_0.
$$
(3.20)

We thus complete the proof. □

Then, combining Theorem 3.2 with Lemma 3.3, we obtain the following result.

Theorem 3.4 (Bounded uniformly absorbing set)

Presume that $g_0\in L^2_c(\mathbb{R}_\tau;\mathcal{M})$ and $h_0\in L^2_{c^*}(\mathbb{R}_\tau;H)$. Let $g\in\mathcal{H}(g_0)$ satisfy (2.1)-(2.4), $h\in\mathcal{H}(h_0)$, and let $\{U_\sigma(t,\tau)\}$, $\sigma\in\mathcal{H}(\sigma_0)=\mathcal{H}(g_0)\times\mathcal{H}(h_0)$, be the family of processes corresponding to (2.8) in $E_0$. Then $\{U_\sigma(t,\tau)\}$ has a uniformly (w.r.t. $\sigma\in\mathcal{H}(\sigma_0)$) absorbing set $B_0=B_{E_0}(0,\mu_0)$ in $E_0$. That is, for any bounded subset $B\subset E_0$ there exists $t_0=t_0(\|B\|_{E_0})$ such that

$$
\bigcup_{\sigma\in\mathcal{H}(\sigma_0)}U_\sigma(t,\tau)B\subset B_0, \quad \forall t\ge t_0.
$$

3.3 The existence of uniform attractor

We now show the existence of the uniform attractor for problem (2.8) in $E_0$.

Theorem 3.5 (Uniform attractor)

Let $\{U_\sigma(t,\tau)\}$ be the family of processes corresponding to problem (2.8). If $g_0\in L^2_c(\mathbb{R}_\tau;\mathcal{M})$ satisfies (2.1), (2.2), (2.5), and (2.6), $h_0\in L^2_{c^*}(\mathbb{R}_\tau;H)$, and $\sigma_0=(g_0,h_0)$, then $\{U_\sigma(t,\tau)\}$ possesses a compact uniform (w.r.t. $\sigma\in\mathcal{H}(\sigma_0)$) attractor $\mathcal{A}_{\mathcal{H}(\sigma_0)}$ in $E_0$, which attracts any bounded set of $E_0$ in the norm of $E_0$ and satisfies

$$
\mathcal{A}_{\mathcal{H}(\sigma_0)}=\omega_{0,\mathcal{H}(\sigma_0)}(B_0)=\omega_{\tau,\mathcal{H}(\sigma_0)}(B_0),
$$
(3.21)

where $B_0$ is the uniformly (w.r.t. $\sigma\in\mathcal{H}(\sigma_0)$) absorbing set in $E_0$.

Proof By Theorem 2.4 and Theorem 3.4, we only need to prove that the family of processes $\{U_\sigma(t,\tau)\}$, $\sigma\in\mathcal{H}(\sigma_0)$, satisfies the uniform (w.r.t. $\sigma\in\mathcal{H}(\sigma_0)$) Condition (C) in $E_0$. Let $\tilde\lambda_i$, $i=1,2,\ldots$, be the eigenvalues of the operator $A$ in $D(A)$, satisfying

$$
0<\tilde\lambda_1<\tilde\lambda_2\le\cdots\le\tilde\lambda_j\le\cdots, \qquad \tilde\lambda_j\to+\infty \ \text{as } j\to\infty,
$$

and let $\tilde\omega_i$ denote the eigenvector corresponding to the eigenvalue $\tilde\lambda_i$, $i=1,2,3,\ldots$. These eigenvectors form an orthogonal basis of $D(A)$; at the same time they also form an orthogonal basis of $V$ and of $H$, and they satisfy

$$
A\tilde\omega_i=\tilde\lambda_i\tilde\omega_i, \quad \forall i\in\mathbb{N}.
$$

Let $H_m=\operatorname{span}\{\tilde\omega_1,\tilde\omega_2,\ldots,\tilde\omega_m\}$, and let $P_m: H\to H_m$ be the orthogonal projector. For any $(u,u_t)\in E_0$, we write

$$
(u,u_t)=(u_1,u_{1t})+(u_2,u_{2t}),
$$

where $(u_1,u_{1t})=(P_m u,P_m u_t)$.
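Since the eigenvectors $\tilde\omega_i$ are orthogonal in $H$ and, because $\langle\Delta\tilde\omega_i,\Delta\tilde\omega_j\rangle=\tilde\lambda_i\langle\tilde\omega_i,\tilde\omega_j\rangle$, also orthogonal in $V$, this splitting is orthogonal in $E_0$; in particular (a small observation used when choosing $t_1$ below),

$$
\bigl\|z_2(t)\bigr\|^2_{E_0}\le\bigl\|z(t)\bigr\|^2_{E_0}\le\mu_0^2, \quad \forall t\ge t_0,
$$

by Lemma 3.3.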

Choose $0<\varrho<1$ with $0<\varrho(\alpha-\varrho)<\lambda_1$. Taking the scalar product of (2.8) with $v_2(t)=u_{2t}(t)+\varrho u_2(t)$ in $H$, we have

$$
\frac{1}{2}\frac{d}{dt}\bigl(\|v_2\|^2+\|u_2\|_1^2\bigr)+\varrho\|u_2\|_1^2-\varrho(\alpha-\varrho)\langle u_2,v_2\rangle+(\alpha-\varrho)\|v_2\|^2+k\langle u^{+},v_2\rangle+\langle g(u,t),v_2\rangle=\langle h(t),v_2\rangle,
$$
(3.22)

where

$$
\langle h(t),v_2\rangle\le(\alpha-\varrho)\|v_2\|^2/8+2(\alpha-\varrho)^{-1}\bigl\|(I-P_m)h(t)\bigr\|^2,
$$
(3.23)
$$
\bigl|\bigl\langle g(u,t),v_2\bigr\rangle\bigr|\le(\alpha-\varrho)\|v_2\|^2/8+2(\alpha-\varrho)^{-1}\bigl\|(I-P_m)g(u,t)\bigr\|^2.
$$
(3.24)

Clearly, we get

$$
\varrho(\alpha-\varrho)\langle u_2,v_2\rangle\le(\alpha-\varrho)\|v_2\|^2/4+(\alpha-\varrho)\varrho^2\|u_2\|^2,
$$
(3.25)
$$
k\langle u^{+},v_2\rangle=\frac{1}{2}\frac{d}{dt}k\bigl\|(u_2)^{+}\bigr\|^2+\varrho k\bigl\|(u_2)^{+}\bigr\|^2.
$$
(3.26)

Combining (3.23)-(3.26), we obtain from (3.22)

$$
\begin{aligned}
&\frac{1}{2}\frac{d}{dt}\bigl(\|v_2\|^2+\|u_2\|_1^2+k\|(u_2)^{+}\|^2\bigr)+\varrho\lambda_1^{-1}\bigl(\lambda_1-(\alpha-\varrho)\varrho\bigr)\|u_2\|_1^2+\frac{1}{2}(\alpha-\varrho)\|v_2\|^2+\varrho k\|(u_2)^{+}\|^2\\
&\quad\le\frac{1}{2}\frac{d}{dt}\bigl(\|v_2\|^2+\|u_2\|_1^2+k\|(u_2)^{+}\|^2\bigr)+\varrho\|u_2\|_1^2+\frac{1}{2}(\alpha-\varrho)\|v_2\|^2+\varrho k\|(u_2)^{+}\|^2-(\alpha-\varrho)\varrho^2\|u_2\|^2\\
&\quad\le 2(\alpha-\varrho)^{-1}\bigl\|(I-P_m)g(u,t)\bigr\|^2+2(\alpha-\varrho)^{-1}\bigl\|(I-P_m)h(t)\bigr\|^2\\
&\quad\le 2C_9(\alpha-\varrho)^{-1}\bigl\|(I-P_m)g(u,t)\bigr\|^2_{\mathcal{M}}\bigl(1+\|u\|_1^{2\gamma+2}\bigr)+2(\alpha-\varrho)^{-1}\bigl\|(I-P_m)h(t)\bigr\|^2.
\end{aligned}
$$
(3.27)

We define the functional

$$
L(t)=\frac{1}{2}\bigl(\|v_2\|^2+\|u_2\|_1^2+k\|(u_2)^{+}\|^2\bigr),
$$

and we set $\omega=\min\{2\varrho\lambda_1^{-1}(\lambda_1-(\alpha-\varrho)\varrho),\ \alpha-\varrho,\ 2\varrho\}$; then

$$
\frac{d}{dt}L(t)+\omega L(t)\le 2C_9(\alpha-\varrho)^{-1}\bigl\|(I-P_m)g(u,t)\bigr\|^2_{\mathcal{M}}\bigl(1+(2\mu_0)^{2\gamma+2}\bigr)+2(\alpha-\varrho)^{-1}\bigl\|(I-P_m)h(t)\bigr\|^2, \quad t\ge t_0.
$$
(3.28)

By Gronwall’s lemma, we obtain

$$
\begin{aligned}
L(t)&\le L(t_0)e^{-\omega(t-t_0)}+\frac{2}{\alpha-\varrho}\int_{t_0}^t e^{-\omega(t-s)}\bigl\|(I-P_m)h(s)\bigr\|^2\,ds\\
&\quad+\frac{2C_{10}}{\alpha-\varrho}\int_{t_0}^t e^{-\omega(t-s)}\bigl\|(I-P_m)g(u,s)\bigr\|^2_{\mathcal{M}}\,ds, \quad t\ge t_0.
\end{aligned}
$$
(3.29)

Obviously, there exists a constant $\tilde C$ such that

$$
\bigl\|z_2(t)\bigr\|^2_{E_0}\le L(t)\le\tilde C\bigl\|z_2(t)\bigr\|^2_{E_0},
$$

so

$$
\begin{aligned}
\bigl\|z_2(t)\bigr\|^2_{E_0}&\le\tilde C\bigl\|z_2(t_0)\bigr\|^2_{E_0}e^{-\omega(t-t_0)}+\frac{2}{\alpha-\varrho}\int_{t_0}^t e^{-\omega(t-s)}\bigl\|(I-P_m)h(s)\bigr\|^2\,ds\\
&\quad+\frac{2C_{10}}{\alpha-\varrho}\int_{t_0}^t e^{-\omega(t-s)}\bigl\|(I-P_m)g(u,s)\bigr\|^2_{\mathcal{M}}\,ds.
\end{aligned}
$$
(3.30)

Since $g\in L^2_c(\mathbb{R}_\tau;\mathcal{M})\subset L^2_{c^*}(\mathbb{R}_\tau;\mathcal{M})$ and $h\in L^2_{c^*}(\mathbb{R}_\tau;H)$, from Lemma 2.7 we know that for any $\epsilon_1>0$ there exists $m$ large enough such that

$$
\frac{2}{\alpha-\varrho}\int_{t_0}^t e^{-\omega(t-s)}\bigl\|(I-P_m)h(s)\bigr\|^2\,ds\le\frac{\epsilon_1}{3}, \quad \forall h\in\mathcal{H}(h_0),
$$
(3.31)
$$
\frac{2C_{10}}{\alpha-\varrho}\int_{t_0}^t e^{-\omega(t-s)}\bigl\|(I-P_m)g(u,s)\bigr\|^2_{\mathcal{M}}\,ds\le\frac{\epsilon_1}{3}, \quad \forall g\in\mathcal{H}(g_0),
$$
(3.32)

where $t\ge\tau$.

Let $t_1=\frac{1}{\omega}\ln\bigl(3\tilde C\mu_0^2/\epsilon_1\bigr)+t_0$; then

$$
\tilde C\bigl\|z_2(t_0)\bigr\|^2_{E_0}e^{-\omega(t-t_0)}\le\frac{\epsilon_1}{3}, \quad \forall t\ge t_1.
$$

So for every $\sigma\in\mathcal{H}(\sigma_0)$ we get

$$
\bigl\|z_2(t)\bigr\|^2_{E_0}\le\epsilon_1, \quad \forall t\ge t_1,
$$
(3.33)

where $\|z_2(t)\|^2_{E_0}=\frac{1}{2}\bigl(\|u_2\|_1^2+\|u_{2t}\|^2\bigr)$.

Therefore, the family of processes $\{U_\sigma(t,\tau)\}$, $\sigma\in\mathcal{H}(\sigma_0)$, satisfies the uniform (w.r.t. $\sigma\in\mathcal{H}(\sigma_0)$) Condition (C) in $E_0$. Applying Theorem 2.4, we obtain the existence of a compact uniform (w.r.t. $\sigma\in\mathcal{H}(\sigma_0)$) attractor for the family of processes $\{U_\sigma(t,\tau)\}$, $\sigma\in\mathcal{H}(\sigma_0)$, in $E_0$, which satisfies (3.21).

We thus complete the proof. □

In conclusion, when the nonlinearity $g(u,t)$ is translation compact and the time-dependent external force $h(x,t)$ satisfies only Condition (C*) instead of being translation compact, the uniform attractor exists in $(H^2(\Omega)\cap H_0^1(\Omega))\times L^2(\Omega)$.