1 Introduction

In this paper we are interested in the existence, regularity and upper-semicontinuous convergence of pullback, uniform and cocycle attractors of the problems governed by the following family of weakly damped wave equations

$$\begin{aligned} u_{tt} + u_t - \Delta u = f_\varepsilon (t,u). \end{aligned}$$
(1.1)

We prove that these attractors converge as \(\varepsilon \rightarrow 0\) to the global attractor of the problem governed by the limit autonomous equation

$$\begin{aligned} u_{tt} + u_t - \Delta u = f_0(u), \end{aligned}$$
(1.2)

where \(f_\varepsilon \rightarrow f_0\) in an appropriate sense. The unknowns are the functions \(u:[t_0,\infty ) \times \Omega \rightarrow \mathbb {R}\), where \(\Omega \) is an open and bounded domain in \(\mathbb {R}^3\) with smooth boundary.

The theory of global attractors for the wave equation with damping term \(u_t\) has been developed by Babin and Vishik [3], Ghidaglia and Temam [20], Hale [21], Haraux [23, 24], Pata and Zelik [33]. An overview of the theory can be found, among others, in the monographs of Babin and Vishik [4], Haraux [25], Chueshov and Lasiecka [16]. We also mention the classical monographs of Henry [26], Hale [22], Robinson [34], Temam [41], and Dłotko and Cholewa [18] on infinite dimensional autonomous dynamical systems. Various types of non-autonomous attractors and their properties have been studied, among others, by Chepyzhov and Vishik [15], Cheban [13], Kloeden and Rasmussen [28], Carvalho et al. [11], Chepyzhov [14], and Bortolan et al. [8].

The existence of the global attractor for (1.2) with the cubic growth condition

$$\begin{aligned} |f_0(s)| \le C(1+|s|^3), \end{aligned}$$
(1.3)

has been obtained by Arrieta et al. [2]. This growth exponent had long been considered critical. In 2016 Kalantarov et al. [27] used the findings on the Strichartz estimates for the wave equation on bounded domains [6, 9] to obtain the existence of the global attractor for the so-called Shatah–Struwe solutions of the quintic weakly damped wave equation, i.e. where the exponent 3 in (1.3) is replaced by 5. These findings led to the rapid development of the theory for the weakly damped wave equation with supercubic growth. In particular, global attractors for Shatah–Struwe solutions in the supercubic case with forcing in \(H^{-1}\) have been studied by Liu et al. [29], and the exponential attractors were investigated by Meng and Liu in [32]. We also mention the work [10] of Carvalho, Cholewa, and Dłotko who proved the existence of a weak global attractor for a certain notion of solutions in the supercubic but subquintic case. Finally, the results on attractors for autonomous problems with supercubic nonlinearities have been generalized to the case of damping given by the fractional Laplacian in the subquintic case in [35] and in the quintic case in [36].

For a non-autonomous dynamical system there exist several important concepts of attractors: the pullback attractor, a time-dependent family of compact sets attracting “from the past” [11, 28], the uniform attractor, the minimal compact set attracting forwards in time uniformly with respect to the driving system of non-autonomous terms [15], and the cocycle attractor which, in a sense, unifies and extends the last two concepts [7, 28]. An overview of these notions can be found in the review article [5]. Recent intensive research on the characterization of pullback attractors and continuity properties for PDEs [7, 11, 28] has led to the results on the link between the notions of uniform, pullback, and cocycle attractors, namely an internal characterization of the uniform attractor as the union of the pullback attractors related to all their associated symbols (see [7], and Theorem 6.5 below), thus making it possible to define the notion of lifted invariance (see [7], and Definition 6.6 and Theorem 6.7 below) for uniform attractors.

There are several recent results on the non-autonomous version of the weakly damped wave equation with quintic, or at least supercubic, growth condition which use the concept of Shatah–Struwe solutions. Savostianov and Zelik in the article [37] obtained the existence of the uniform attractor for the problem governed by

$$\begin{aligned} u_{tt} + u_t + (1 - \Delta ) u + f(u) = \mu (t), \end{aligned}$$

on the three dimensional torus, where \(\mu (t)\) can be a measure. Mei, Xiong, and Sun [31] obtained the existence of the pullback attractor for the problem governed by the equation

$$\begin{aligned} u_{tt} + u_t - \Delta u + f(u) = g(t), \end{aligned}$$
(1.4)

for the subquintic case on the space domain given by the whole of \(\mathbb {R}^3\) in the so-called locally uniform spaces. Mei and Sun [30] obtained the existence of the uniform attractor for a forcing term which is not translation compact for the problem governed by (1.4) with subquintic f. Finally, Chang, Li, Sun, and Zelik [12] considered the problem of the form

$$\begin{aligned} u_{tt} + \gamma (t)u_t-\Delta u + f(u) = g, \end{aligned}$$

and showed the existence of several types of non-autonomous attractors with quintic nonlinearity for the case where the damping may change sign. None of these results considered the nonlinearity of the form \(f(t,u)\) and none of these results fully explored the structure of non-autonomous attractors and the relation between pullback, uniform, and cocycle attractors characterized in [7]. The present paper aims to fill this gap.

In this article we generalize the results of [27] to the problem governed by the weakly damped non-autonomous wave equation (1.1) with the semilinear term \(f_\varepsilon (t,u)\) which is a perturbation of the autonomous nonlinearity \(f_0(u)\), cf. assumptions (H2) and (H3) in Sect. 3. We stress that we deal only with the case of the subquintic growth

$$\begin{aligned} \left| \frac{\partial f_\varepsilon }{\partial u}(t,u)\right| \le C(1+|u|^{4-\kappa }), \ \end{aligned}$$

for which we prove the results on the existence and asymptotic smoothness of Shatah–Struwe solutions, derive the asymptotic smoothing estimates and obtain the result on the upper-semicontinuous convergence of attractors. Thus we extend and complete the previous results in [27], where only the autonomous case was considered, and in [30, 31], where the nonlinearity itself was autonomous and the time dependence entered only through the forcing term. We stress some key difficulties and achievements of our work. We follow the methodology of [27, Proposition 3.1 and Proposition 4.1] to derive the Strichartz estimate for the nonlinear problem from the one for the linear problem (where we use the continuation principle that can be found for example in [40, Proposition 1.21]), but in the proof we need the extra property that the constant \(C_h\) in the linear Strichartz estimate

$$\begin{aligned} \Vert u\Vert _{L^4(0,h;L^{12})} \le C_h(\Vert (u_0,u_1)\Vert _{\mathcal {E}_0} + \Vert G\Vert _{L^1(0,h;L^{2})}), \end{aligned}$$

is a nondecreasing function of h. We establish this fact with the use of the Christ–Kiselev Lemma [38, Lemma 3.1]. Moreover, we define the weak solutions as the limits of the Galerkin approximations. In [27, Sect. 3] the authors work with the Shatah–Struwe solutions (i.e. the weak solutions possessing the extra \(L^4(0,T;L^{12}(\Omega ))\) regularity), and they prove that such solutions are indeed the limits of the Galerkin approximations, cf. [27, Corollary 3.6]. We establish that in the subcritical case the two notions are in fact equivalent, cf. our Lemma 5.9.

The main results of our paper are contained in Sects. 6 and 7. The main result of Sect. 6 is Theorem 6.10 on the existence and smoothness of uniform and cocycle attractors for the considered problem and the relation between the two notions. The key property needed for the existence of these objects is the uniform asymptotic smoothness obtained in Lemma 6.8. In [27, Corollary 4.3] the authors derive only \(\mathcal {E}_\delta = (H^{1+\delta }\cap H^1_0)\times H^\delta \) estimates for the autonomous case, with small \(\delta >0\), mentioning in Remark 4.6 the possibility of using further bootstrapping arguments. We derive in Lemma 6.8 the relevant asymptotic smoothing estimates in \(\mathcal {E}_1 = (H^2\cap H^1_0)\times H^1_0\). While \(\mathcal {E}_\delta \) estimates are sufficient for the attractor existence, our estimates allow us to deduce its regularity, namely the fact that it belongs to \(\mathcal {E}_1\). Once we have the uniform asymptotic smoothing estimates we can use the recent findings of [7, 28], cf. [7, Theorem 3.12], recalled here as Theorem 6.5 below. This abstract result is applied to get our Theorem 6.10, where we establish the existence of the uniform attractor \(\mathcal {A}_\varepsilon \), and the cocycle attractor \(\{ \mathcal {A}_\varepsilon (p_\varepsilon ) \}_{p_\varepsilon \in \mathcal {H}(f_\varepsilon )}\), an object parameterised by elements \(p_\varepsilon \in \mathcal {H}(f_\varepsilon )\) of the hull of the time shifts of the translation compact non-autonomous term \(f_\varepsilon \). Apart from the existence, the application of [7, Theorem 3.12] allows us to get the relation between the two objects, namely that

$$\begin{aligned} \mathcal {A}_\varepsilon = \bigcup _{p_{\varepsilon }\in \mathcal {H}(f_\varepsilon )}\mathcal {A}_\varepsilon (p_\varepsilon ). \end{aligned}$$
(1.5)

Finally, another novelty of the present paper is the upper-semicontinuity result of Sect. 7. In Theorem 7.4 we obtain that uniform attractors \(\mathcal {A}_\varepsilon \) converge upper-semicontinuously to the limit attractor of the autonomous problem, i.e., that

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0^+}\mathrm {dist}_{\mathcal {E}_0}(\mathcal {A}_\varepsilon , \mathcal {A}_0) = 0, \end{aligned}$$

where \(\mathrm {dist}_{\mathcal {E}_0}\) is the Hausdorff semidistance in the space \(\mathcal {E}_0 = H^1_0 \times L^2\). The key role in the proof is played by the lifted-invariance property of the uniform attractor, cf. Definition 6.6 and Theorem 6.7, and uniform (with respect to \(\varepsilon \)) \(\mathcal {E}_1\) boundedness of the uniform attractors \(\mathcal {A}_\varepsilon \) obtained in Sect. 6. Note that due to (1.5) the obtained upper-semicontinuity result automatically allows us to deduce that

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0^+}\mathrm {dist}_{\mathcal {E}_0}(\mathcal {A}_\varepsilon (p_\varepsilon ), \mathcal {A}_0) = 0, \end{aligned}$$

for every family \(\{ p_\varepsilon \}_{\varepsilon \in (0,1]}\) with \(p_\varepsilon \in \mathcal {H}(f_\varepsilon )\), i.e. that the cocycle (and hence also the pullback) attractors converge to the limit global attractor in the upper-semicontinuous sense.

A possible extension of our results involves dealing with a non-autonomous nonlinearity with the critical quintic growth condition. This case is more delicate because the control of the energy norm of the initial data does not give control over the \(L^4(0,T;L^{12}(\Omega ))\) norm of the solution. To overcome this problem Kalantarov, Savostianov, and Zelik in [27] used the technique of trajectory attractors. Another interesting question is the possibility of extending the results of [19] on the convergence of non-autonomous attractors for the equations

$$\begin{aligned} \varepsilon u_{tt}+u_{t}-\Delta u =f_\varepsilon (t,u) \end{aligned}$$

to the attractor for the semilinear heat equation as \(\varepsilon \rightarrow 0\), in the case of a subquintic or quintic growth condition on f. The main difficulty is to obtain Strichartz estimates which are uniform with respect to \(\varepsilon \). Finally, we mention a possible further line of research involving the lower-semicontinuous convergence of attractors and the stability of the attractor structure under perturbation.

The structure of the article is as follows. After some preliminary facts are recalled in Sect. 2, the formulation of the problem, assumptions on its data, and some auxiliary results regarding the translation compactness of the non-autonomous term are presented in Sect. 3. Next, Sect. 4 is devoted to the Galerkin solutions and their dissipativity. The following Sect. 5 contains the results on the Strichartz estimates, Shatah–Struwe solutions, and their equivalence with the Galerkin solutions. The result on the existence and asymptotic smoothness of non-autonomous attractors, Theorem 6.10, is contained in Sect. 6, while in Sect. 7 we prove their upper-semicontinuous convergence to the global attractor of the limit autonomous problem.

2 Preliminaries

Let \(\Omega \subset \mathbb {R}^3\) be a bounded and open set with sufficiently smooth boundary. We will use the notation \(L^2\) for \(L^2(\Omega )\) and, in general, for brevity of notation, we will skip writing the dependence on \(\Omega \) in spaces of functions defined on this set. By \((\cdot ,\cdot ),\Vert \cdot \Vert \) we will denote respectively the scalar product and the norm in \(L^2\). We will also use the notation \(\mathcal {E}_0 = H^1_0\times L^2\) for the energy space. Its norm is defined by \(\Vert (u,v)\Vert ^2_{\mathcal {E}_0} = \Vert \nabla u\Vert ^2 + \Vert v\Vert ^2\). Throughout this paper, we denote a generic positive constant by C, whose value can vary from line to line. We recall some useful information concerning the spectral fractional Laplacian [1]. Denote by \(\{e_i\}_{i=1}^\infty \) the eigenfunctions (normalized in \(L^2(\Omega )\)) of the operator \(-\Delta \) with the Dirichlet boundary conditions, such that the corresponding eigenvalues satisfy

$$\begin{aligned} 0< \lambda _1 \le \lambda _2 \le \ldots \le \lambda _n \le \ldots . \end{aligned}$$

For \(u\in L^2\) its k-th Fourier coefficient is defined as \(\widehat{u}_k = (u,e_k)\). Let \(s\ge 0\). The spectral fractional Laplacian is defined by the formula

$$\begin{aligned} (-\Delta )^\frac{s}{2}u = \sum _{k=1}^\infty \lambda _k^\frac{s}{2} \widehat{u}_k e_k. \end{aligned}$$

The space \(\mathbb {H}^s\) is defined as

$$\begin{aligned} \mathbb {H}^s = \left\{ u\in L^2\,:\ \sum _{k=1}^\infty \lambda _k^s \widehat{u}_k^2 < \infty \right\} . \end{aligned}$$

The corresponding norm is given by

$$\begin{aligned} \Vert u\Vert _{\mathbb {H}^s} = \Vert (-\Delta )^{s/2}u\Vert = \sqrt{\sum _{k=1}^\infty \lambda _k^s \widehat{u}_k^2}. \end{aligned}$$

The space \(\mathbb {H}^s\) is a subspace of the fractional Sobolev space \(H^s\). In particular

$$\begin{aligned} \mathbb {H}^s = {\left\{ \begin{array}{ll} H^s = H^s_0\ \ \text {for}\ \ s\in (0,1/2),\\ H^s_0 \ \ \text {for}\ \ s\in (1/2,1]. \end{array}\right. } \end{aligned}$$

We also recall that the standard fractional Sobolev norm satisfies \(\Vert u\Vert _{H^s}\le C\Vert u\Vert _{\mathbb {H}^s}\) for \(u\in \mathbb {H}^s\), cf. [1, Proposition 2.1]. For \(s\in [0,1]\) we will use the notation \(\mathcal {E}_s = \mathbb {H}^{s+1} \times \mathbb {H}^s\). This space is equipped with the norm \(\Vert (u,v)\Vert ^2_{\mathcal {E}_s} = \Vert u\Vert ^2_{\mathbb {H}^{s+1}} + \Vert v\Vert ^2_{\mathbb {H}^s}\).
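
As a quick illustration of these scales (a direct computation from the above definitions, included here only for orientation), for the eigenfunctions we have \(\widehat{(e_k)}_j = \delta _{kj}\), hence

$$\begin{aligned} \Vert e_k\Vert _{\mathbb {H}^s} = \lambda _k^{s/2}, \qquad \Vert (e_k,e_j)\Vert ^2_{\mathcal {E}_s} = \lambda _k^{s+1} + \lambda _j^{s}, \end{aligned}$$

so the scale \(\mathbb {H}^s\) weights the k-th Fourier mode by \(\lambda _k^{s/2}\).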

3 Problem Definition and Assumptions

We consider the following family of problems parameterized by \(\varepsilon > 0\)

$$\begin{aligned} {\left\{ \begin{array}{ll} u_{tt}+u_t-\Delta u = f_\varepsilon (t,u)\; \text {for}\; (x,t)\in \Omega \times (0,\infty ),\\ u(t,x) = 0\; \text {for}\; x\in \partial \Omega ,\\ u(0,x) = u_0(x),\\ u_t(0,x) = u_1(x). \end{array}\right. } \end{aligned}$$
(3.1)

The initial data has the regularity \((u_0,u_1) \in \mathcal {E}_0\). Throughout the article we always assume that the non-autonomous and nonlinear term \(f_\varepsilon (t,u)\), treated as a mapping which assigns to each time \(t\in \mathbb {R}\) a function of the variable u, belongs to the space \(C(\mathbb {R};C^1(\mathbb {R}))\). This space is equipped with the metric

$$\begin{aligned}&d_{C(\mathbb {R};C^1(\mathbb {R}))}(g_1,g_2)\\&\quad = \sum _{i=1}^\infty \frac{1}{2^i} \frac{\sup _{t\in [-i,i]} d_{C^1(\mathbb {R})}(g_1(t,.),g_2(t,.)) }{1+\sup _{t\in [-i,i]} d_{C^1(\mathbb {R})}(g_1(t,.),g_2(t,.)) } \;\text {for }g_1,g_2\in C(\mathbb {R};C^1(\mathbb {R})) , \end{aligned}$$

where the metric in \(C^1(\mathbb {R})\) is defined as follows

$$\begin{aligned} d_{C^1(\mathbb {R})}(g_1,g_2) =\sum _{i=1}^\infty \frac{1}{2^i} \frac{ \Vert {g_1(u)-g_2(u)}\Vert _{C^1([-i,i])} }{1+ \Vert {g_1(u)-g_2(u)}\Vert _{C^1([-i,i])} } \;\text {for }g_1,g_2\in C^1(\mathbb {R}), \end{aligned}$$

and \(\Vert g\Vert _{C^1(A)} = \max _{r\in A}|g(r)| + \max _{r\in A}|g'(r)|\) for a compact set \(A\subset \mathbb {R}\).

Remark 3.1

If \(g_n\rightarrow g\) in \(C(\mathbb {R};C^1(\mathbb {R}))\) then \( g_n\rightarrow g\) and \(\frac{\partial { g_n}}{\partial u} \rightarrow \frac{\partial {g}}{\partial u} \) uniformly on every bounded subset of \(\mathbb {R}\).

We make the following assumptions on the functions \(f_\varepsilon :\mathbb {R}\times \mathbb {R}\rightarrow \mathbb {R}\) and \(f_0:\mathbb {R}\rightarrow \mathbb {R}\):

  1. (H1)

    For every \(\varepsilon \in (0,1]\) the function \(f_{\varepsilon }\in C(\mathbb {R};C^1(\mathbb {R}))\), and \(f_0\in C^1(\mathbb {R})\).

  2. (H2)

    For every \(u\in \mathbb {R}\)

    $$\begin{aligned} \lim _{\varepsilon \rightarrow 0} \sup _{t\in \mathbb {R}}|f_\varepsilon (t,u)-f_0(u)| = 0. \end{aligned}$$
  3. (H3)

    The following holds

    $$\begin{aligned} \sup _{\varepsilon \in [0,1]}\sup _{t\in \mathbb {R}}\sup _{u\in \mathbb {R}} |f_\varepsilon (t,u)-f_0(u)| < \infty . \end{aligned}$$
  4. (H4)

    The following holds

    $$\begin{aligned} \limsup _{|u|\rightarrow \infty } \frac{f_0(u)}{u} < \lambda _1, \end{aligned}$$

    where \(\lambda _1\) is the first eigenvalue of \(-\Delta \) operator with the Dirichlet boundary conditions.

  5. (H5)

    There exist \(0<\kappa \le 4\) and \(C>0\) such that

    $$\begin{aligned} \sup _{\varepsilon \in [0,1]}\sup _{t\in \mathbb {R}}\left| \frac{\partial f_\varepsilon }{\partial u}(t,u)\right| \le C(1+|u|^{4-\kappa })\ \ \text {for every}\ \ u\in \mathbb {R}. \end{aligned}$$
  6. (H6)

    For any fixed \(u\in \mathbb {R}\) the map \(f_\varepsilon (t,u)\) is uniformly continuous with respect to t. Moreover for every \(R>0\) the map \(\mathbb {R}\times [-R,R]\ni (t,u)\mapsto \frac{\partial f_\varepsilon }{\partial u}(t,u)\) is uniformly continuous.

Remark 3.2

An example of a family of functions satisfying conditions (H1)–(H6) is \(f_\varepsilon (t,u)=-u|u|^{4-\kappa }+g(u)+\varepsilon \sin (t)\sin (u^3) \), where the growth of g(u) is essentially lower than \(5-\kappa \).
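
As a sketch of how some of the conditions can be checked for this example (assuming \(|g'(u)|\le C(1+|u|^{4-\kappa })\) and, for the last term, \(\kappa \le 2\)), write \(f_0(u)=-u|u|^{4-\kappa }+g(u)\); then

$$\begin{aligned}&|f_\varepsilon (t,u)-f_0(u)| = \varepsilon |\sin (t)|\,|\sin (u^3)| \le \varepsilon ,\\&\left| \frac{\partial f_\varepsilon }{\partial u}(t,u)\right| \le (5-\kappa )|u|^{4-\kappa } + |g'(u)| + 3\varepsilon u^2 \le C(1+|u|^{4-\kappa }), \end{aligned}$$

which gives (H2), (H3), and (H5), while (H4) holds because \(f_0(u)/u = -|u|^{4-\kappa } + g(u)/u \rightarrow -\infty \) as \(|u|\rightarrow \infty \).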

Proposition 3.3

Assuming (H1), (H5), and (H6), for every \(\varepsilon \in [0,1]\) and every \(R > 0\) the mapping

$$\begin{aligned} \mathbb {R}\times [-R,R]\ni (t,u) \mapsto f_{\varepsilon }(t,u) \end{aligned}$$

is uniformly continuous.

Proof

Let \(u_1, u_2 \in [-R,R]\) and \(t_1, t_2 \in \mathbb {R}\). Using (H5), the following holds

$$\begin{aligned}&|f_{\varepsilon }(t_1,u_1)-f_{\varepsilon }(t_2,u_2)| \le |f_{\varepsilon }(t_1,u_1)-f_{\varepsilon }(t_1,u_2)| + |f_{\varepsilon }(t_1,u_2)-f_{\varepsilon }(t_2,u_2)| \\&\qquad \le C(1+R^{4-\kappa })|u_1-u_2| + \sup _{|u|\le R}|f_{\varepsilon }(t_1,u)-f_{\varepsilon }(t_2,u)|. \end{aligned}$$

It suffices to prove that for every \(\eta > 0 \) we can find \(\delta > 0 \) such that if \(|t_1-t_2|\le \delta \) then \(\sup _{|u|\le R}|f_{\varepsilon }(t_1,u)-f_{\varepsilon }(t_2,u)| \le \eta \). Assume for contradiction that there exists \(\eta _0 > 0\) such that for every \(n\in \mathbb {N}\) we can find \(t^n_1,t^n_2 \in \mathbb {R}\) with \(|t^n_1-t^n_2|\le \frac{1}{n}\) and

$$\begin{aligned} \sup _{|u|\le R}|f_{\varepsilon }(t^n_1,u)-f_{\varepsilon }(t^n_2,u)| > \eta _0. \end{aligned}$$

For every n there exists \(u^n\) with \(|u^n|\le R\) such that

$$\begin{aligned} |f_{\varepsilon }(t^n_1,u^n)-f_{\varepsilon }(t^n_2,u^n)| > \eta _0. \end{aligned}$$

For a subsequence \(u^n\rightarrow u^0\) with \(|u^0|\le R\), we have

$$\begin{aligned}&\eta _0 < |f_{\varepsilon }(t^n_1,u^n)-f_{\varepsilon }(t^n_1,u^0)| + |f_{\varepsilon }(t^n_1,u^0)-f_{\varepsilon }(t^n_2,u^0)| + |f_{\varepsilon }(t^n_2,u^0)-f_{\varepsilon }(t^n_2,u^n)|\\&\quad \le 2C(1+R^{4-\kappa }) |u^n-u^0| + |f_{\varepsilon }(t^n_1,u^0)-f_{\varepsilon }(t^n_2,u^0)|, \end{aligned}$$

where in the last estimate we used (H5). By taking n large enough we deduce that

$$\begin{aligned} \frac{\eta _0}{2} < |f_{\varepsilon }(t^n_1,u^0)-f_{\varepsilon }(t^n_2,u^0)|, \end{aligned}$$

a contradiction with the uniform continuity of \(f_{\varepsilon }(\cdot ,u^0)\) assumed in (H6). \(\square \)

We define the hull of f as the set \(\mathcal {H}(f):=\overline{\{f(t+\cdot ,\cdot )\,:\ t\in \mathbb {R}\}}\subset C(\mathbb {R};C^1(\mathbb {R}))\), where the closure is understood in the metric \(d_{C(\mathbb {R};C^1(\mathbb {R}))}\). We also define the set

$$\begin{aligned} \mathcal {H}_{[0,1]}:=\bigcup _{\varepsilon \in [0,1]}\mathcal {H}(f_\varepsilon ) = \bigcup _{\varepsilon \in (0,1]}\mathcal {H}(f_\varepsilon ) \cup \{f_0\}, \end{aligned}$$

where the last equality follows from the simple fact that \(\mathcal {H}(f_0)=\{f_0\}\). We say that a function f is translation compact if its hull \(\mathcal {H}(f)\) is a compact set. The following characterization of translation compactness can be found in [15, Proposition 2.5 and Remark 2.2].

Proposition 3.4

Let \(f\in C(\mathbb {R};C^1(\mathbb {R}))\). Then f is translation compact if and only if for every \(R>0\)

  1. (i)

    \(|f(t,u)|+ |\frac{\partial f}{\partial u}(t,u) |\le C_R\) for \((t,u)\in \mathbb {R}\times [-R,R],\)

  2. (ii)

    The functions \(f(t,u)\) and \(\frac{\partial f}{\partial u}(t,u)\) are uniformly continuous on \(\mathbb {R}\times [-R,R].\)

We prove two simple results concerning the translation compactness of \(f_\varepsilon \) and each function in its hull.

Corollary 3.5

Assuming (H1), (H3), (H5), and (H6), for every \(\varepsilon \in (0,1]\) the function \(f_\varepsilon \) is translation compact.

Proof

From assumption (H3), the fact that \(f_0\in C^1(\mathbb {R})\), and the growth condition (H5) one can deduce that (i) from Proposition 3.4 holds. Moreover, (H6) and Proposition 3.3 imply that (ii) holds, and the proof is complete. \(\square \)

Proposition 3.6

If \(f_\varepsilon \) satisfies conditions (H1), (H2), (H3), and (H5) then these conditions are satisfied by all elements from \(\mathcal {H}_{[0,1]}\). Moreover there exist constants \(C,K>0\) independent of \(\varepsilon \) such that for every \(p_\varepsilon \in \mathcal {H}(f_\varepsilon )\) the following bounds hold

$$\begin{aligned}&\sup _{\varepsilon \in [0,1]}\sup _{t\in \mathbb {R}}\sup _{u\in \mathbb {R}} \left| p_\varepsilon (t,u)-f_0(u)\right| \nonumber \\&\quad \le K, \qquad \sup _{\varepsilon \in [0,1]}\sup _{t\in \mathbb {R}}\left| \frac{\partial p_\varepsilon }{\partial u}(t,u)\right| \le C(1+|u|^{4-\kappa })\ \ \text {for every}\ \ u\in \mathbb {R}. \end{aligned}$$
(3.2)

Proof

Property (H1) is clear. Suppose that (H2) does not hold. Then there exist a number \(\delta > 0\), sequences \(\varepsilon _n \rightarrow 0\), \(p_{\varepsilon _n}\in \mathcal {H}(f_{\varepsilon _n})\), \(t_n \in \mathbb {R}\) and a number \(u\in \mathbb {R}\) such that

$$\begin{aligned} |p_{\varepsilon _n}(t_n,u)-f_0(u)|> 2\delta . \end{aligned}$$

Because \(p_{\varepsilon _n}\in \mathcal {H}(f_{\varepsilon _n})\) we can pick a sequence \(s_n\) such that \(|f_{\varepsilon _n}(s_n+t_n,u)-p_{\varepsilon _n}(t_n,u)| \le \delta \). Then

$$\begin{aligned} |f_{\varepsilon _n}(t_n+s_n,u)-f_0(u)|\ge -|f_{\varepsilon _n}(t_n+s_n,u)-p_{\varepsilon _n}(t_n,u)|+ |p_{\varepsilon _n}(t_n,u)-f_0(u)|\ge \delta . \end{aligned}$$

Now (H2) follows by contradiction. We denote

$$\begin{aligned}K:= \sup _{\varepsilon \in [0,1]}\sup _{t\in \mathbb {R}}\sup _{u\in \mathbb {R}} |f_\varepsilon (t,u)-f_0(u)|, \end{aligned}$$

which from assumption (H3) is a finite number. Taking \(p_\varepsilon \in \mathcal {H}_{[0,1]}\), for every \((t,u) \in \mathbb {R}^2\) we obtain

$$\begin{aligned}&|p_\varepsilon (t,u)-f_0(u)|\le |f_\varepsilon (t+s_n,u)-p_\varepsilon (t,u)|\\&\quad + |f_\varepsilon (t+s_n,u)-f_0(u)| \le K + |f_\varepsilon (t+s_n,u)-p_\varepsilon (t,u)|. \end{aligned}$$

We can pick a sequence \(s_n\) such that \(|f_{\varepsilon }(s_n+t,u)- p_{\varepsilon }(t,u)|\rightarrow 0\). So, passing to the limit, we get

$$\begin{aligned} |p_\varepsilon (t,u)-f_0(u)|\le K. \end{aligned}$$

We have proved that for every \(p_\varepsilon \in \mathcal {H}_{[0,1]}\) we have

$$\begin{aligned} \sup _{t\in \mathbb {R}}\sup _{u\in \mathbb {R}} |p_\varepsilon (t,u)-f_0(u)|\le K. \end{aligned}$$

From (H5) we obtain

$$\begin{aligned} \left| \frac{\partial {f_\varepsilon }}{\partial u} (t+s_n,u)\right| \le C(1 + |u|^{4-\kappa }), \end{aligned}$$

for every \(u,t,s_n\in \mathbb {R},\varepsilon \in [0,1]\). Again by choosing \(s_n\) such that \(|\frac{\partial {f_\varepsilon }}{\partial u} (s_n+t,u)- \frac{\partial {p_\varepsilon }}{\partial u} (t,u)|\rightarrow 0\) and passing to the limit we observe that for every \(p_\varepsilon \in \mathcal {H}_{[0,1]}\) the following holds

$$\begin{aligned} \sup _{t\in \mathbb {R}}\left| \frac{\partial p_\varepsilon }{\partial u}(t,u)\right| \le C(1+|u|^{4-\kappa })\ \ \text {for every}\ \ u\in \mathbb {R}, \end{aligned}$$
(3.3)

which completes the proof. \(\square \)

Proposition 3.7

If (H1), (H2), (H3), and (H5) hold, then for every \(R>0\) and every \(p_\varepsilon \in \mathcal {H}(f_\varepsilon )\)

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0}\sup _{|u|\le R}\sup _{t\in \mathbb {R}}|p_{\varepsilon }(t,u)-f_0(u)| = 0. \end{aligned}$$

Proof

By contradiction, assume that there exist \(\delta >0\) and sequences \(|u_n|\le R\), \(t_n\in \mathbb {R}\), \(\varepsilon _n\rightarrow 0\), and \(p_{\varepsilon _n}\in \mathcal {H}(f_{\varepsilon _n})\) such that

$$\begin{aligned} \delta \le |p_{\varepsilon _n}(t_n,u_n)-f_0(u_n)|. \end{aligned}$$

Passing to a subsequence, if necessary, we can assume that \(u_n\rightarrow u_0\), where \(|u_0|\le R\). Hence,

$$\begin{aligned}&\delta \le |p_{\varepsilon _n}(t_n,u_n)-p_{\varepsilon _n}(t_n,u_0)| +|p_{\varepsilon _n}(t_n,u_0)-f_0(u_0)|+|f_0(u_0)-f_0(u_n)|\\&\quad \le C(1+|\xi _n|^{4-\kappa })|u_n-u_0| + \sup _{t\in \mathbb {R}} |p_{\varepsilon _n}(t,u_0)-f_0(u_0)|+|f_0(u_0)-f_0(u_n)|\\&\quad \le C(1+|R|^{4-\kappa })|u_n-u_0| + \sup _{t\in \mathbb {R}} |p_{\varepsilon _n}(t,u_0)-f_0(u_0)|+|f_0(u_0)-f_0(u_n)|, \end{aligned}$$

where \(\xi _n\) is an intermediate point between \(u_n\) and \(u_0\) given by the mean value theorem, and in the second inequality we used the second bound in (3.2), i.e. (H5) applied to \(p_{\varepsilon _n}\). Letting \(n\rightarrow \infty \), the first and the last terms on the right-hand side tend to zero since \(u_n\rightarrow u_0\) and \(f_0\) is continuous, while the middle term tends to zero by (H2), which holds for \(p_{\varepsilon _n}\) by Proposition 3.6. This contradicts \(\delta >0\) and completes the proof. \(\square \)

4 Galerkin Solutions

Definition 4.1

Let \((u_0,u_1)\in \mathcal {E}_0\). The function \(u\in L^\infty _{loc}([0,\infty ); H^1_0)\) with \(u_t\in L^\infty _{loc}([0,\infty );L^2)\) and \(u_{tt}\in L^\infty _{loc}([0,\infty );H^{-1})\) is a weak solution of problem (3.1) if for every \(v\in L^2_{loc}([0,\infty );H^1_0)\) and \(t_1>0\) the following holds

$$\begin{aligned} \int _{0}^{t_1} \langle u_{tt}(t), v(t)\rangle _{H^{-1}\times H^1_0} + (u_t(t) - f_{\varepsilon }(t,u(t)),v(t)) + (\nabla u(t), \nabla v(t)) \, dt = 0, \end{aligned}$$

and \(u(0) = u_0\), \(u_t(0) = u_1\).

Note that as \(u\in C([0,\infty );L^2)\) and \(u_t\in C([0,\infty );H^{-1})\), pointwise values of u and \(u_t\), and thus the initial data, make sense. However, due to the lack of regularity of the nonlinear term \(f_{\varepsilon }(\cdot ,u(\cdot ))\), we cannot test the equation with \(u_t\). Indeed, by the Sobolev embedding \(H^1_0\hookrightarrow L^6\), it holds that \(u(t)\in L^6\) for a.e. t and by (H5) for the expression

$$\begin{aligned} \int _{\Omega } f_\varepsilon (t,u(x,t))u_t(x,t)\, dx, \end{aligned}$$

to make sense we would need \(u\in L^{10-2\kappa }\), which we cannot guarantee. Thus, although it is straightforward to prove (using the Galerkin method) the existence of the weak solution given by the above definition, we cannot establish the energy estimates required to work with this solution.

Let \(\{e_i \}_{i=1}^\infty \) be the eigenfunctions of the \(-\Delta \) operator with the Dirichlet boundary conditions on \(\partial \Omega \) sorted by the nondecreasing eigenvalues. They constitute an orthonormal basis of \(L^2\) and they are orthogonal in \(H^1_0\). Denote \(V_N = \text {span}\, \{ e_1,\ldots , e_N \}\). The family of finite dimensional spaces \(\{ V_N\}_{N=1}^\infty \) approximates \(H^1_0\) from the inside, that is

$$\begin{aligned} \overline{\bigcup _{N=1}^\infty V_N}^{H^1_0} = H^1_0\qquad \text {and}\qquad V_N\subset V_{N+1}\ \ \text {for every}\ \ N\ge 1. \end{aligned}$$

Let \(u^N_0\in V_N\) and \(u^N_1\in V_N\) be such that

$$\begin{aligned}&u^N_0 \rightarrow u_0\ \ \text {in}\ \ H^1_0\ \ \text {as}\ \ N\rightarrow \infty ,\\&u^N_1 \rightarrow u_1\ \ \text {in}\ \ L^2\ \ \text {as}\ \ N\rightarrow \infty . \end{aligned}$$

Now the N-th Galerkin approximate solution for (3.1) is defined as follows.

Definition 4.2

The function \(u^N\in C^1([0,\infty ); V_N)\) with \(u^N_t\in AC([0,\infty );V_N)\) is the N-th Galerkin approximate solution of problem (3.1) if \(u^N(0) = u^N_0\), \(u^N_t(0) = u^N_1\) and for every \(v\in V_N\) and a.e. \(t>0\) the following holds

$$\begin{aligned} (u^N_{tt}(t)+ u^N_t(t) - f_{\varepsilon }(t,u^N(t)),v) + (\nabla u^N(t), \nabla v) = 0. \end{aligned}$$
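
Equivalently (a standard reformulation, with the coefficient notation \(c^N_k\) introduced here only for illustration), writing \(u^N(t)=\sum _{k=1}^N c^N_k(t)e_k\) and taking \(v=e_k\), the Galerkin problem becomes the system of ordinary differential equations

$$\begin{aligned} (c^N_k)''+(c^N_k)'+\lambda _k c^N_k = (f_\varepsilon (t,u^N(t)),e_k),\qquad c^N_k(0)=(u^N_0,e_k),\quad (c^N_k)'(0)=(u^N_1,e_k), \end{aligned}$$

for \(k=1,\ldots ,N\), whose local solvability follows from classical ODE theory, while the energy estimates below guarantee global existence.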

We continue by defining the weak solution of Galerkin type.

Definition 4.3

The weak solution given by Definition 4.1 is said to be of Galerkin type if it is a limit of a subsequence of solutions of the Galerkin problems, understood in the following sense

$$\begin{aligned}&u^N \rightarrow u\ \ \text {weakly-* in}\ \ L^\infty _{loc}([0,\infty );H^1_0), \end{aligned}$$
(4.1)
$$\begin{aligned}&u^N_t \rightarrow u_t\ \ \text {weakly-* in}\ \ L^\infty _{loc}([0,\infty );L^2), \end{aligned}$$
(4.2)
$$\begin{aligned}&u^N_{tt} \rightarrow u_{tt}\ \ \text {weakly-* in}\ \ L^\infty _{loc}([0,\infty );H^{-1}). \end{aligned}$$
(4.3)

For brevity of notation we use the index N to denote the elements of the subsequence.

In the sequel we will consider problem (3.1) with \(p\in \mathcal {H}_{[0,1]}\) replacing \(f_\varepsilon \). In particular, the next two results hold for the function \(f_\varepsilon \) in (3.1) replaced by \(p\in \mathcal {H}_{[0,1]}\). We skip the proof of the following theorem which is standard in the framework of the Galerkin method.

Theorem 4.4

Assume (H1), (H3)–(H5). If \((u_0,u_1)\in \mathcal {E}_0\) then problem (3.1) has at least one weak solution of Galerkin type.

Proposition 4.5

The weak solutions of Galerkin type of problem (3.1) are bounded in \(\mathcal {E}_0\) and there exists a bounded set \(B_0\subset \mathcal {E}_0\) which is absorbing, i.e. for every bounded set \(B\subset \mathcal {E}_0\) there exists \(t_0\ge 0\) such that for every weak solution of Galerkin type \((u(t),u_t(t))\) with the initial conditions in B we have \((u(t),u_t(t)) \in B_0\) for every \(t \ge t_0\). Moreover \(B_0\) and \(t_0\) do not depend on the choice of \(p\in \mathcal {H}_{[0,1]}\) in place of \(f_\varepsilon \) in (3.1).

To prove the above proposition we will need the following Gronwall type lemma.

Lemma 4.6

Let \(I:[0,\infty )\rightarrow \mathbb {R}\) be an absolutely continuous function with \(I(t)=\sum _{i=1}^kI_i(t)\). Suppose that

$$\begin{aligned} \frac{d}{dt}I(t)\le -A_i I_i(t)^{\alpha _i} + B_i, \end{aligned}$$

for every \(i\in \{{1,\ldots ,k}\}\) and for almost every t such that \(I_i(t)\ge 0\), where \(\alpha _i, A_i, B_i > 0\) are constants. Then for every \(\eta >0\) there exists \(t_0>0\) such that

$$\begin{aligned} I(t)\le \sum _{i=1}^k\left( \frac{B_i}{A_i}\right) ^{\frac{1}{\alpha _i}} + \eta ,\; \text {for every }t\ge t_0. \end{aligned}$$

If, in addition, \(\{I^l(t)\}_{l \in \mathcal {L}}\) is a family of functions satisfying the above conditions and such that \(I^l(0)\le Q\) for each \(l\in \mathcal {L}\), then the time \(t_0\) is independent of l and there exists a constant C depending on \(Q, A_i, B_i, \alpha _i\) such that \(I^l(t)\le C\) for every \(t\ge 0\) and every \(l\in \mathcal {L}\).

Proof

We denote

$$\begin{aligned} B=\sum _{i=1}^k\left( \frac{B_i}{A_i}\right) ^{\frac{1}{\alpha _i}} \end{aligned}$$

and let \(A= \min _{i\in \{ 1,\ldots ,k\}} \{ A_i \}\). First we will show that for every \(\eta >0\) if \(I(t_0)\le B+\eta \), then \(I(t)\le B+\eta \) for every \(t\ge t_0\). For the sake of contradiction let us suppose that there exists some \(t_1>t_0\) such that \(I(t_1)>B+\eta \). Let \(t_2=\sup \{{ s\in [t_0,t_1]\,:\ I(s)\le B + \eta }\}\). Choose \(\delta >0\) such that

$$\begin{aligned} \eta \ge \sum _{i=1}^k\left( \left( \frac{B_i}{A_i}+\delta \right) ^{\frac{1}{\alpha _i}}-\left( \frac{B_i}{A_i}\right) ^{\frac{1}{\alpha _i}}\right) . \end{aligned}$$

Hence

$$\begin{aligned} I(s) > B+\eta \ge \sum _{i=1}^k\left( \frac{B_i}{A_i}+\delta \right) ^{\frac{1}{\alpha _i}}\quad \text {for}\ s\in (t_2,t_1]. \end{aligned}$$

We deduce that for every \(s\in (t_2,t_1]\) we can find an index i (which may depend on s) for which

$$\begin{aligned} I_{i}(s)>\left( \frac{B_{i}}{A_{i}}+\delta \right) ^{\frac{1}{\alpha _{i}}}. \end{aligned}$$

Then for a.e. \(s\in (t_2,t_1]\) we have

$$\begin{aligned} \frac{d}{dt}I(s)\le -A_{i}I_{i}(s)^{\alpha _{i}}+B_{i}\le -A\delta , \end{aligned}$$

and after integration we get \(I(t_1)\le I(t_2) -(t_1 - t_2) A\delta \le B+\eta \) (since \(I(t_2)\le B+\eta \) by the continuity of I), which is a contradiction. We observe that all functions from the family \(\{I^l(t)\}_{l\in \mathcal {L}}\) are bounded by \(\max \{Q,1\} + B\). Now we will prove the existence of \(t_0\). For the sake of contradiction suppose that there exist \(\eta >0\) and a sequence of times \(t_n \rightarrow \infty \) such that \(I^{l_n}(t_n)>B+\eta \) for some \(l_n\in \mathcal {L}\). Then for every \(s\in [0,t_n]\) we must have \(I^{l_n}(s)>B+\eta \). Thus, there exists \(\delta >0\) such that for all \(s\in [0,t_n]\) and \(l_n\) there exists an index \(i\in \{ 1,\ldots ,k\}\) (which may depend on s and n) for which \(I_i^{l_n}(s)>(\frac{B_i}{A_i}+\delta )^\frac{1}{\alpha _i}\). Again for a.e. \(s\in (0,t_n)\)

$$\begin{aligned} \frac{d}{dt}I^{l_n}(s)\le -A\delta , \end{aligned}$$

and after integrating we obtain \(I^{l_n}(t_n) \le Q - t_nA\delta \) for each n, which is a contradiction. \(\square \)

Proof of Proposition 4.5

Let u be the Galerkin solution to (3.1) with any function \(p\in \mathcal {H}_{[0,1]}\) in place of \(f_\varepsilon \) on the right-hand side of (3.1). The next estimates hold for the Galerkin problems, but since they do not depend on the dimension of the space used in those problems, they are also satisfied by the limit solution. To make the presentation simpler, we proceed in a formal way. By testing the equation with \(u+2u_{t}\) we obtain

$$\begin{aligned}&\frac{d}{dt}\left[ (u_{t},u)+\frac{1}{2}\Vert {u}\Vert ^2+ \Vert {u_{t}}\Vert ^2+\Vert {\nabla u}\Vert ^2- 2\int _\Omega F_0(u)dx \right] \\&\qquad =-\Vert {u_{t}}\Vert ^2-\Vert {\nabla u}\Vert ^2 +(f_0(u),u) + (p(t,u) - f_0(u),2 u_{t}+u) , \end{aligned}$$

where \(F_0(u)=\int _0^uf_0(v)dv\). Assumption (H4) implies the inequality

$$\begin{aligned} (f_0(u),u)\le C+K\Vert {u}\Vert ^2,\text { where }0\le K<\lambda _1. \end{aligned}$$

We define

$$\begin{aligned} I(t)= (u_{t},u)+\frac{1}{2}\Vert {u}\Vert ^2+ \Vert {u_{t}}\Vert ^2+\Vert {\nabla u}\Vert ^2- 2\int _\Omega F_0(u)dx. \end{aligned}$$

Using the Poincaré and Cauchy inequalities we obtain

$$\begin{aligned} \frac{d}{dt}I(t) \le - \Vert {u_t}\Vert ^2 -C \Vert {\nabla u}\Vert ^2 + \Vert {p(t,u) - f_0(u)}\Vert (2\Vert { u_{t}}\Vert + \Vert {u}\Vert )+ C. \end{aligned}$$
(4.4)

Using the Poincaré and Cauchy inequalities again it follows by Proposition 3.6 that

$$\begin{aligned} \frac{d}{dt}I(t) \le - C\left( \Vert {u_t}\Vert ^2 +\Vert {\nabla u}\Vert ^2\right) + C. \end{aligned}$$
(4.5)
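
A sketch of the absorption step leading from (4.4) to (4.5) (with K the constant of Proposition 3.6 and \(\delta >0\) a small auxiliary parameter): since \(\Vert {p(t,u) - f_0(u)}\Vert \le K|\Omega |^{1/2}\), the Cauchy and Poincaré inequalities give

$$\begin{aligned} \Vert {p(t,u) - f_0(u)}\Vert (2\Vert { u_{t}}\Vert + \Vert {u}\Vert ) \le \frac{1}{2}\Vert {u_t}\Vert ^2 + \frac{\delta }{2\lambda _1}\Vert {\nabla u}\Vert ^2 + C_\delta , \end{aligned}$$

and choosing \(\delta \) small enough the first two terms on the right-hand side are absorbed by the negative terms in (4.4).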

We represent the function I(t) as the sum of the following terms

$$\begin{aligned} I_1 = \Vert {u_{t}}\Vert ^2,\;I_2=\frac{1}{2}\Vert { u}\Vert ^2,\;I_3=\Vert {\nabla u}\Vert ^2,\; I_4(t)=(u_{t},u),\; I_5=-2\int _{\Omega }F_0(u)dx. \end{aligned}$$

From the estimate (4.5) and Poincaré inequality we can easily see that

$$\begin{aligned} \frac{d}{dt}I \le -A_iI_i +B_i \text { for } i\in \{1,2,3,4\}, \end{aligned}$$
(4.6)

where \(A_i, B_i\) are positive constants. To deal with the term \(I_5\) we observe that by the growth condition (H5) using the Hölder inequality we obtain

$$\begin{aligned} I_5&\le C \int _\Omega \left| \int _0^u 1+|v|^5dv\right| dx\le C\int _\Omega \left( |u|+|u|^6\right) dx\\&= C \left( \Vert {u}\Vert _{L^1}+\Vert {u}\Vert _{L^6}^6\right) \le C\left( \Vert {u}\Vert _{L^6}+\Vert {u}\Vert _{L^6}^6\right) \\&\le C\left( \Vert {u}\Vert _{L^6}^6 + 1\right) . \end{aligned}$$

From the Sobolev embedding \(H_0^1\hookrightarrow L^6\) it follows that

$$\begin{aligned} I_5^{\frac{1}{3}} \le C\left( \Vert {\nabla u}\Vert ^6+ 1\right) ^{\frac{1}{3}} \le C\left( \Vert {\nabla u}\Vert ^2 + 1\right) . \end{aligned}$$
(4.7)

From the estimates (4.5) and (4.7) we observe that

$$\begin{aligned} \frac{d}{dt}I \le -A_5I_5^{\frac{1}{3}} + B_5, \ \ \text {with}\ \ A_5,B_5 > 0. \end{aligned}$$
(4.8)

By Lemma 4.6 we deduce that there exists a constant \(D>0\) such that for every bounded set of initial data \(B\subset \mathcal {E}_0\) there exists a time \(t_0=t_0(B)\) such that for every \(p\in \mathcal {H}_{[0,1]}\) the following holds

$$\begin{aligned} I(t)\le D \text { for } t\ge t_0 \text { and } (u_0,u_1)\in B. \end{aligned}$$
(4.9)

We observe that from (H4) it follows that

$$\begin{aligned} F_0(u)\le C + \frac{K}{2}u^2 \text { where } 0\le K<\lambda _1. \end{aligned}$$
(4.10)
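
A short sketch of why (H4) yields (4.10) (with \(R_0, C_0\) auxiliary constants): by (H4) there exist \(0\le K<\lambda _1\) and \(R_0>0\) such that \(f_0(v)\,\mathrm {sgn}(v)\le K|v|\) for \(|v|\ge R_0\), while \(|f_0(v)|\le C_0\) for \(|v|\le R_0\) by continuity; hence for every \(u\in \mathbb {R}\)

$$\begin{aligned} F_0(u) = \int _0^{|u|} f_0(\mathrm {sgn}(u)\, r)\,\mathrm {sgn}(u)\, dr \le \int _0^{|u|}\left( K r + C_0\right) dr = \frac{K}{2}u^2 + C_0|u| \le \frac{K'}{2}u^2 + C, \end{aligned}$$

for any \(K<K'<\lambda _1\), which is exactly (4.10) with \(K'\) in place of K.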

We deduce

$$\begin{aligned} I(t)\ge \frac{1}{2}\Vert {u_{t}}\Vert ^2 + \Vert {\nabla u}\Vert ^2 - K\Vert {u}\Vert ^2 - C\ge C\Vert {(u,u_t)}\Vert _{\mathcal {E}_0}^2 - C. \end{aligned}$$
(4.11)

We have shown the existence of the absorbing set \(B_0\subset \mathcal {E}_0\) which is independent of the choice of \(p\in \mathcal {H}_{[0,1]}\). By Lemma 4.6 it follows that for every initial condition \((u_0,u_1)\in \mathcal {E}_0\) there exists a constant \(D=D(u_0,u_1) > 0\) such that for every \(p\in \mathcal {H}_{[0,1]}\) the following holds

$$\begin{aligned} I(t)\le D \text { for } t\in [0,\infty ). \end{aligned}$$
(4.12)

The proof is complete. \(\square \)

5 Shatah–Struwe Solutions, Their Regularity and a Priori Estimates

5.1 Auxiliary Linear Problem

Similarly to [27], we define an auxiliary non-autonomous problem for which we derive a priori estimates both in the energy and the Strichartz norms.

$$\begin{aligned} {\left\{ \begin{array}{ll} &{} u_{tt}+u_t-\Delta u = G(x,t)\; \text {for}\; (x,t)\in \Omega \times (t_0,\infty ),\\ &{}u(t,x) = 0\; \text {for}\; x\in \partial \Omega \\ &{}u(t_0,x) = u_0(x)\\ &{}u_t(t_0,x) = u_1(x) \end{array}\right. } \end{aligned}$$
(5.1)

It is well known that if \(G\in L^1_{loc}([t_0,\infty );L^2)\) and \((u_0,u_1)\in \mathcal {E}_0\), then problem (5.1) has a unique weak solution u belonging to \(C_{loc}([t_0,\infty );H^1_0)\) with \(u_t\in C_{loc}([t_0,\infty );L^2)\) and \(u_{tt}\in L^\infty _{loc}([t_0,\infty );H^{-1})\). This solution is the limit of the Galerkin approximations in the spaces spanned by the eigenfunctions of \(-\Delta \) with homogeneous Dirichlet boundary conditions, and the \(L^2\) projections of u on those spaces coincide with the Galerkin solutions. For details, cf. [4, 15, 34, 41]. The next result appears in [27, Proposition 2.1]. For completeness of our argument we provide an outline of the proof.

Proposition 5.1

Let u be the weak solution to problem (5.1) on the interval \([t_0,\infty )\) with \(G\in L^1_{loc}([t_0,\infty );L^2)\) and initial data \(u(t_0) = u_0\), \(u_t(t_0) = u_1\) with \((u_0,u_1) \in \mathcal {E}_0\). Then the following estimate holds

$$\begin{aligned} \Vert {(u(t),u_t(t))}\Vert _{\mathcal {E}_0}\le C \left( \Vert {(u_0,u_1)}\Vert _{\mathcal {E}_0}e^{-\alpha (t-t_0)}+\int _{t_0}^t e^{-\alpha (t-s)}\Vert {G(s)}\Vert ds\right) \end{aligned}$$

for every \(t\ge t_0\), where \(C,\alpha \) are positive constants independent of \(t,t_0,G\) and the initial conditions of (5.1).

Proof

Testing (5.1) by \(u+2u_t\) we obtain

$$\begin{aligned} \frac{d}{dt}\left( (u_{t},u)+\frac{1}{2}\Vert {u}\Vert ^2+ \Vert {u_{t}}\Vert ^2+\Vert {\nabla u}\Vert ^2\right) =-\Vert {u_{t}}\Vert ^2-\Vert {\nabla u}\Vert ^2 +(G(t),u+2u_{t}) \end{aligned}$$

We define \(I(t) = (u_{t},u)+\frac{1}{2}\Vert {u}\Vert ^2+ \Vert {u_{t}}\Vert ^2+\Vert {\nabla u}\Vert ^2\). We easily deduce

$$\begin{aligned} \frac{d}{dt}I(t)\le C\left( -I(t) + \sqrt{I(t)}\Vert {G(t)}\Vert \right) . \end{aligned}$$
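
A brief justification of this step (a routine computation based on the Cauchy and Poincaré inequalities): there exist constants \(c_1,c_2>0\) with

$$\begin{aligned} c_1\Vert {(u(t),u_t(t))}\Vert _{\mathcal {E}_0}^2 \le I(t)\le c_2\Vert {(u(t),u_t(t))}\Vert _{\mathcal {E}_0}^2, \end{aligned}$$

so that \(-\Vert {u_{t}}\Vert ^2-\Vert {\nabla u}\Vert ^2\le -c_2^{-1}I(t)\) and \((G(t),u+2u_{t})\le \Vert {G(t)}\Vert (\lambda _1^{-1/2}\Vert {\nabla u}\Vert +2\Vert {u_{t}}\Vert )\le C\Vert {G(t)}\Vert \sqrt{I(t)}\); this equivalence is also used at the end of the proof.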

Multiplying this differential inequality by \(e^{Ct}\) we obtain

$$\begin{aligned} \frac{d}{dt}\left( I(t)e^{Ct}\right) \le C e^{Ct}\Vert G(t)\Vert \sqrt{I(t)}. \end{aligned}$$

After integration it follows that

$$\begin{aligned} I(t)e^{Ct}-I(t_0)e^{Ct_0} \le C\int _{t_0}^t e^{Cs}\Vert G(s)\Vert \sqrt{I(s)}\, ds. \end{aligned}$$

Hence, for every \(\varepsilon > 0\)

$$\begin{aligned} I(t) \le (I(t_0)+\varepsilon )e^{C(t_0-t)} + e^{-Ct}C\int _{t_0}^t e^{Cs}\Vert G(s)\Vert \sqrt{I(s)}\, ds. \end{aligned}$$
(5.2)

Now let

$$\begin{aligned} J(t) = C\int _{t_0}^t e^{Cs}\Vert G(s)\Vert \sqrt{I(s)}\, ds. \end{aligned}$$

Then J is absolutely continuous, \(J(t_0) = 0\), and for almost every \(t > t_0\) we obtain

$$\begin{aligned} J'(t) = Ce^{Ct}\Vert G(t)\Vert \sqrt{I(t)}. \end{aligned}$$

From (5.2) it follows that

$$\begin{aligned} J'(t)\le & {} C e^{Ct}\Vert G(t)\Vert \sqrt{(I(t_0)+\varepsilon )e^{C(t_0-t)}+e^{-Ct}J(t)} \\= & {} C e^{\frac{Ct}{2}}\Vert G(t)\Vert \sqrt{(I(t_0)+\varepsilon )e^{Ct_0}+J(t)}. \end{aligned}$$

Hence

$$\begin{aligned} \frac{J'(t)}{\sqrt{(I(t_0)+\varepsilon )e^{Ct_0}+J(t)}} \le Ce^{\frac{Ct}{2}}\Vert G(t)\Vert , \end{aligned}$$

which makes sense since the denominator is positive as \(\varepsilon >0\). After integrating over the interval \([t_0,t]\) we obtain the following inequality, valid for every \(t\ge t_0\)

$$\begin{aligned} \sqrt{(I(t_0)+\varepsilon )e^{Ct_0} + J(t)} \le {\sqrt{(I(t_0)+\varepsilon )e^{Ct_0}}}+ \frac{C}{2}\int _{t_0}^t e^{\frac{Cs}{2}}\Vert G(s)\Vert \, ds. \end{aligned}$$

It follows that

$$\begin{aligned} J(t)\le C\left[ \left( \int _{t_0}^t e^{\frac{Cs}{2}}\Vert G(s)\Vert \, ds\right) ^2+(I(t_0)+\varepsilon )e^{Ct_0}\right] . \end{aligned}$$

From the definition of J(t), using inequality (5.2), we notice that

$$\begin{aligned} I(t)\le C \left( (I(t_0)+\varepsilon )e^{\alpha (t_0-t)}+ \left( \int _{t_0}^t e^{-\alpha (t - s)} \Vert G(s)\Vert \, ds\right) ^2\right) , \end{aligned}$$

for a constant \(\alpha >0\). As \(c_1\Vert {(u(t),u_t(t))}\Vert _{\mathcal {E}_0}\le \sqrt{I(t)}\le c_2\Vert {(u(t),u_t(t))}\Vert _{\mathcal {E}_0}\) for some \(c_1,c_2>0\), letting \(\varepsilon \rightarrow 0\) we obtain the required assertion. \(\square \)

The following lemma provides extra control of the \(L^4(L^{12})\) norm of the solution to the linear problem (5.1). The result is given in [27, Proposition 2.2 and Remark 2.3].

Lemma 5.2

Let \(h>0\) and let u be a weak solution to problem (5.1) on the time interval \((t_0,t_0+h)\) with \(G\in L^1(t_0,t_0+h;L^2)\) and \((u(t_0),u_t(t_0)) = (u_0,u_1)\in \mathcal {E}_0\). Then \(u\in L^4(t_0,t_0+h;L^{12})\) and the following estimate holds

$$\begin{aligned} \Vert {u}\Vert _{L^4(t_0,t_0+h;L^{12})} \le C_h\left( \Vert {(u_0,u_1)}\Vert _{\mathcal {E}_0} + \Vert {G}\Vert _{L^1(t_0,t_0+h;L^2)} \right) , \end{aligned}$$
(5.3)

where the constant \(C_h > 0\) depends only on h but is independent of \(t_0,(u_0,u_1),G\).

We will need the following result.

Proposition 5.3

It is possible to choose the constants \(C_h\) in the previous lemma such that the function \([0,\infty ) \ni h \mapsto C_h\) is nondecreasing.

The above proposition will be proved with the use of the following theorem known as the Christ–Kiselev Lemma, see e.g. [38, Lemma 3.1].

Theorem 5.4

Let X, Y be Banach spaces and assume that K(t,s) is a continuous function taking values in B(X,Y), the space of bounded linear mappings from X to Y. Suppose that \(-\infty \le a < b \le \infty \) and set

$$\begin{aligned} Tf(t)= & {} \int _a^b K(t,s) f(s)\, ds,\\ Wf(t)= & {} \int _a^t K(t,s) f(s)\, ds. \end{aligned}$$

If, for \(1\le p<q\le \infty \) it holds that

$$\begin{aligned} \Vert {Tf}\Vert _{L^q(a,b;Y)} \le C \Vert {f}\Vert _{L^p(a,b;X)} , \end{aligned}$$

then

$$\begin{aligned} \Vert {Wf}\Vert _{L^q(a,b;Y)} \le \overline{C} \Vert {f}\Vert _{L^p(a,b;X)},\; \text {with } \overline{C}=2C\frac{2^{2\left( \frac{1}{q} - \frac{1}{p} \right) } }{1 - 2^{\frac{1}{q} -\frac{1}{p} } } . \end{aligned}$$

Proof of Proposition 5.3

If \(G\equiv 0\) then we denote the corresponding constant by \(D_h\), i.e.

$$\begin{aligned} \Vert {u}\Vert _{L^4(t_0,t_0+h;L^{12})} \le D_h \Vert {(u_0,u_1)}\Vert _{\mathcal {E}_0}. \end{aligned}$$

Clearly, the function \([0,\infty ) \ni h\mapsto D_h \in [0,\infty )\) can be made nondecreasing. We will prove that (5.3) holds with a constant \(C_h\) which is a nondecreasing function of \(D_h\), and hence of h. Let \(\{S(t)\}_{t\in \mathbb {R}}\), \(S(t):\mathcal {E}_0\rightarrow \mathcal {E}_0\), be the solution group for the linear homogeneous problem (i.e. (5.1) with \(G\equiv 0\)); we denote \(S(t)(u_0,u_1) = (S_u(t)(u_0,u_1),S_{u_t}(t)(u_0,u_1))\). Let \(t_0\in \mathbb {R}\) and \(\delta > 0\). Using the Duhamel formula for equation (5.1) we obtain

$$\begin{aligned} u(t_0+\delta ) = S_u(\delta )(u_0,u_1) + \int _{0}^{\delta } S_u(\delta - s)(0,G(t_0+s))\, ds. \end{aligned}$$

Applying the \(L^4(0,h;L^{12})\) norm with respect to \(\delta \) to both sides we obtain

$$\begin{aligned} \Vert {u}\Vert _{L^4(t_0,t_0+h;L^{12}) }\le D_{h}\Vert {(u_0,u_1)}\Vert _{\mathcal {E}_0}+ \Vert {P_1}\Vert _{L^4(0,h;L^{12})}, \end{aligned}$$

for every \(h>0\), where \( P_1(\delta ) = \int _{0}^{\delta } S_u(\delta - s)(0,G(t_0+s))ds\). We will estimate the Strichartz norm of \(P_1\) using Theorem 5.4 with \(X=L^{2},Y=L^{12},q=4,p=1,a=0,b=h\). If \(\Pi _N:L^2\rightarrow V_N\) is the \(L^2\)-orthogonal projection, then \(S_u(h-s)(0,\Pi _N(\cdot ))\) is a continuous function of \((h,s)\) taking its values in \(B(L^2,L^{12})\). Hence the estimate should be derived separately for every N, and, since it is uniform with respect to N, it holds also in the limit. We skip this technicality and proceed with the formal estimates only. We set \(P_2(\delta ) = \int _{0}^{h} S_u(\delta - s)(0,G(t_0+s))ds\), and we estimate

$$\begin{aligned} \Vert {P_2}\Vert _{L^4(0,h;L^{12})}\le & {} \int _{0}^{h} \Vert { S_u(\delta - s)(0,G(t_0+s))}\Vert _{L^4(0,h;L^{12})}\,ds\\= & {} \int _{0}^{h} \Vert { S_u(\delta )S(-s)(0,G(t_0+s)) }\Vert _{L^4(0,h;L^{12})}\, ds \\\le & {} \int _{0}^{h} D_{h}\Vert {S(-s)(0,G(t_0+s))}\Vert _{\mathcal {E}_0} \, ds, \end{aligned}$$

where in the last inequality we used the homogeneous Strichartz estimate. Observe that there exists \(\beta >0\) such that

$$\begin{aligned} \Vert {S(-s)(u_0,u_1)}\Vert _{\mathcal {E}_0} \le e^{s \beta }\Vert {(u_0,u_1)}\Vert _{\mathcal {E}_0}. \end{aligned}$$

We deduce

$$\begin{aligned} \Vert {P_2}\Vert _{L^4(0,h;L^{12})} \le D_{h} e^{\beta h} \Vert {G}\Vert _{L^1(t_0,t_0+h,L^2)} . \end{aligned}$$

Hence, by Theorem 5.4 we obtain \( \Vert {P_1}\Vert _{L^4(0,h;L^{12})} \le C D_{h} e^{\beta h} \Vert {G}\Vert _{L^1(t_0,t_0+h,L^2)}\) for every \(h > 0\), and the proof is complete. \(\square \)

The following result will be useful in the bootstrap argument on the attractor regularity.

Lemma 5.5

Let \((u_0,u_1)\in \mathcal {E}_s\) and \(G \in L^1_{loc}([t_0,\infty );\mathbb {H}^s)\) for \(s\in (0,1]\). Then the weak solution of (5.1) has regularity \(u \in C_{loc}([t_0,\infty );\mathbb {H}^{s+1})\) and \(u_t\in C_{loc}([t_0,\infty );\mathbb {H}^s)\). Moreover, the following estimates hold

$$\begin{aligned}&\Vert (u(t),u_t(t))\Vert _{\mathcal {E}_s} \le C\left( \Vert {(u_0,u_1)}\Vert _{\mathcal {E}_s}e^{-\alpha (t-t_0)} + \int _{t_0}^t e^{-\alpha (t-r)}\Vert {G(r)}\Vert _{\mathbb {H}^s} dr\right) ,\\&\quad \Vert u\Vert _{L^4(t_0,t_0+h;W^{s,12})} \le C_h\left( \Vert {(u_0,u_1)}\Vert _{\mathcal {E}_s} + \Vert {G}\Vert _{L^1(t_0,t_0+h;\mathbb {H}^{s})}\right) . \end{aligned}$$

Proof

The problem

$$\begin{aligned} {\left\{ \begin{array}{ll} &{} w_{tt}(t)+w_t(t)-\Delta w(t) = (-\Delta )^{s/2}G(t)\; \text {for}\; (x,t)\in \Omega \times (t_0,\infty ),\\ &{}w(t,x) = 0\; \text {for}\; x\in \partial \Omega ,\\ &{}w(t_0) = (-\Delta )^{s/2}u_0,\\ &{}w_t(t_0) = (-\Delta )^{s/2}u_1, \end{array}\right. } \end{aligned}$$
(5.4)

has a unique weak solution \(w\in C_{loc}([t_0,\infty );H^1_0)\) with the derivative \(w_t\in C_{loc}([t_0,\infty );L^2)\). Both weak solutions u and w are the limits of the Galerkin approximations in the spaces spanned by the eigenfunctions of \(-\Delta \) with Dirichlet boundary conditions, and moreover the \(L^2\) orthogonal projections of u and w on those spaces coincide with the Galerkin solutions. It is enough to observe that

$$\begin{aligned} \widehat{w}_k(t) =\lambda _k^{\frac{s}{2}} \widehat{u}_k(t)\ \ \text {for every}\ \ k \in \mathbb {N}. \end{aligned}$$

Testing the weak solutions u and w with \(e_k\), we get the systems

$$\begin{aligned}&{\left\{ \begin{array}{ll} \widehat{u}_k''+\widehat{u}_k'+\lambda _k \widehat{u}_k = (G(t),e_k),\\ \widehat{u}_k(t_0)= \widehat{u_0}_k,\\ \widehat{u}_k'(t_0)= \widehat{u_1}_k, \end{array}\right. } \\&{\left\{ \begin{array}{ll} \widehat{w}_k''+\widehat{w}_k'+\lambda _k \widehat{w}_k = ((-\Delta )^{\frac{s}{2}}G(t),e_k) = \lambda _k^{\frac{s}{2}}(G(t),e_k),\\ \widehat{w}_k (t_0)= ((-\Delta )^{\frac{s}{2}}u_0, e_k) = \lambda _k^{\frac{s}{2}}\widehat{u_0}_k,\\ \widehat{w}_k'(t_0)= ((-\Delta )^{\frac{s}{2}} u_1,e_k)= \lambda _k^{\frac{s}{2}}\widehat{u_1}_k. \end{array}\right. } \end{aligned}$$

The difference \(\overline{w}_k(t)=\widehat{w}_k(t) -\lambda _k^{\frac{s}{2}} \widehat{u}_k(t)\) solves the problem

$$\begin{aligned} {\left\{ \begin{array}{ll} \overline{w}_k''+\overline{w}_k'+\lambda _k\overline{w}_k = 0,\\ \overline{w}_k(t_0) = 0,\\ \overline{w}_k'(t_0) = 0. \end{array}\right. } \end{aligned}$$

So \(\overline{w}_k(t)=0\) for every \(t\in [t_0,\infty )\). The assertion follows from Proposition 5.1 and Lemma 5.2. \(\square \)

5.2 Shatah–Struwe Solutions and Their Properties

This section recalls the results from [27]. The non-autonomous generalizations of these results are straightforward, so we skip some of the proofs, which follow the lines of the corresponding results from [27]. The following remark follows from the Gagliardo–Nirenberg interpolation inequality and the Sobolev embedding \(H^1_0\hookrightarrow L^6\).

Remark 5.6

If \(u\in L^4(0,t;L^{12})\) and \(u\in L^\infty (0,t;H^1_0)\) then

$$\begin{aligned} \Vert {u}\Vert _{L^5(0,t;L^{10})} \le C\Vert {u}\Vert _{L^4(0,t;L^{12})}^{\frac{4}{5}} \Vert {u}\Vert _{L^\infty (0,t;H^1_0)}^\frac{1}{5} . \end{aligned}$$
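
A minimal sketch of this estimate (the constant C comes from the Sobolev embedding \(H^1_0\hookrightarrow L^6\)): by the interpolation of Lebesgue norms with \(\tfrac{1}{10}=\tfrac{4/5}{12}+\tfrac{1/5}{6}\) we have \(\Vert {u(s)}\Vert _{L^{10}}\le \Vert {u(s)}\Vert _{L^{12}}^{4/5}\Vert {u(s)}\Vert _{L^{6}}^{1/5}\), hence

$$\begin{aligned} \int _0^t \Vert {u(s)}\Vert _{L^{10}}^5\, ds \le \int _0^t \Vert {u(s)}\Vert _{L^{12}}^{4}\Vert {u(s)}\Vert _{L^{6}}\, ds \le C\Vert {u}\Vert _{L^\infty (0,t;H^1_0)}\Vert {u}\Vert _{L^4(0,t;L^{12})}^{4}. \end{aligned}$$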

We define the Shatah–Struwe solution of problem (3.1).

Definition 5.7

Let \((u_0,u_1)\in \mathcal {E}_0\). A weak solution of problem (3.1), given by Definition 4.1, is called a Shatah–Struwe solution if \(u\in L^4_{loc}([0,\infty );L^{12})\).

Proposition 5.8

Shatah–Struwe solutions to problem (3.1) given by Definition 5.7 are unique and the mapping \(\mathcal {E}_0 \ni (u_0,u_1) \mapsto (u(t),u_t(t))\in \mathcal {E}_0\) is continuous for every \(t>0\).

Proof

Let u, v be Shatah–Struwe solutions to problem (3.1) with the initial data \((u_0,u_1)\) and \((v_0,v_1)\), respectively. Their difference \(w:=u-v\) satisfies the following equation

$$\begin{aligned} w_{tt}(t)+w_t(t)-\Delta w(t) = p(t,u(t))-p(t,v(t))=w\frac{\partial p(t,\theta u + (1-\theta )v)}{\partial u} . \end{aligned}$$

Testing this equation with \(w_t\) yields

$$\begin{aligned} \frac{1}{2}\frac{d}{dt}\left( \Vert {w_{t}}\Vert ^2+\Vert {\nabla w}\Vert ^2\right) + \Vert {w_{t}}\Vert ^2 =\left( w\frac{\partial p(t,\theta u + (1-\theta )v)}{\partial u},w_t\right) . \end{aligned}$$

Assumption (H5) gives the inequality

$$\begin{aligned} \frac{1}{2}\frac{d}{dt}\left( \Vert {w_{t}}\Vert ^2+\Vert {\nabla w}\Vert ^2\right) \le C\int _{\Omega }|w|(1+|u|^4+|v|^4)|w_t|\,dx. \end{aligned}$$

Then by using the Hölder inequality with the exponents 6, 3, 2 and the Sobolev embedding \(H_0^1 \hookrightarrow L^6\) we obtain

$$\begin{aligned} \frac{d}{dt}\left( \Vert {w_{t}}\Vert ^2+\Vert {\nabla w}\Vert ^2\right) \le C\left( \Vert {\nabla w}\Vert ^2+\Vert {w_t}\Vert ^2\right) \left( 1+ \Vert {u}\Vert _{L^{12}}^{4}+\Vert {v}\Vert _{L^{12}}^4\right) . \end{aligned}$$

Because u, v are Shatah–Struwe solutions, i.e. \(u,v\in L^4_{loc}([0,\infty );L^{12})\), it is possible to use the integral form of the Gronwall inequality, which gives us

$$\begin{aligned} \Vert {\nabla w}\Vert ^2+\Vert {w_t}\Vert ^2 \le (\Vert {\nabla w_0}\Vert ^2+ \Vert {w_1}\Vert ^2) \exp \left( C\left( t +\int _{0}^t\Vert {u}\Vert _{L^{12}}^{4}+\Vert {v}\Vert _{L^{12}}^{4}ds\right) \right) , \end{aligned}$$

for \(t\in [0,\infty )\), hence the assertion follows. \(\square \)

Lemma 5.9

Every weak solution of problem (3.1) is of Galerkin type if and only if it is a Shatah–Struwe solution. Moreover for every \(t>0\) there exists a constant \(C_t>0\) such that for every solution u of (3.1), with arbitrary \(p \in \mathcal {H}_{[0,1]}\) in the place of \(f_\varepsilon \), contained in the absorbing set \(B_0\), it holds that

$$\begin{aligned} \Vert {u}\Vert _{L^4(0,t;L^{12} )}\le C_t. \end{aligned}$$

Proof

Let u be a weak solution of Galerkin type with the initial data \((u_0,u_1)\in \mathcal {E}_0\). From assumption (H5) and Proposition 3.6 we see that

$$\begin{aligned} \Vert {p(t,u)}\Vert \le C(1+\Vert {|u|^{5-\kappa }}\Vert )= C(1+\Vert {u}\Vert ^{5-\kappa }_{L^{2(5-\kappa )}} ) \le C(1+ \Vert {u}\Vert ^{5-\kappa }_{L^{10}}) . \end{aligned}$$

We assume that \(t\in [0,1]\). From the Hölder inequality we obtain

$$\begin{aligned}&\int _{0}^{t}\Vert {p(s,u)}\Vert ds \le Ct +C\int _{0}^{t}\Vert {u}\Vert ^{5-\kappa }_{L^{10}}ds\\&\quad \le C\left( \left( \int _{0}^{t}\Vert {u}\Vert ^{5}_{L^{10}}ds\right) ^{\frac{5-\kappa }{5}} \left( \int _{0}^{t}1dt\right) ^{\frac{\kappa }{5}} + t\right) \\&\quad = C \left( \Vert {u}\Vert _{L^5(0,t; L^{10})}^{5-\kappa }t^{\frac{\kappa }{5}} + t \right) \le C\left( \Vert {u}\Vert _{L^5(0,t;L^{10}) }^{5 - \kappa } + 1\right) t^{\frac{\kappa }{5}} \\&\quad \le CR^{\frac{1}{5}}\left( \Vert {u}\Vert _{L^4(0,t;L^{12})}^{4 - \frac{4\kappa }{5}} + 1 \right) t^{\frac{\kappa }{5}} \end{aligned}$$

where R is the bound of the \(L^\infty (0,t;H^1_0)\) norm of u. We split u as the sum \( u = v+ w\), where v, w solve the following problems

$$\begin{aligned} {\left\{ \begin{array}{ll} v_{tt}+v_t-\Delta v = 0,\\ v(t,x) = 0\; \text {for}\; x\in \partial \Omega ,\\ v(0,x) = u_0(x),\\ v_t(0,x) = u_1(x), \end{array}\right. } \qquad \qquad {\left\{ \begin{array}{ll} w_{tt}+w_t-\Delta w = p(t,u),\\ w(t,x) = 0\; \text {for}\; x\in \partial \Omega ,\\ w(0,x) = 0,\\ w_t(0,x) = 0. \end{array}\right. } \end{aligned}$$

From the Strichartz estimate in Lemma 5.2 we deduce

$$\begin{aligned} \Vert {v}\Vert _{L^4(0,t;L^{12}) } \le C_1 \Vert {(u_0,u_1)}\Vert _{\mathcal {E}_0}, \end{aligned}$$

and

$$\begin{aligned} \Vert {w}\Vert _{L^4(0,t;L^{12}) }\le C R^{\frac{1}{5}} \left( \Vert {w}\Vert _{L^4(0,t;L^{12}) }^{4 -\frac{4\kappa }{5}} +\left( C_1\Vert {(u_0,u_1)}\Vert _{\mathcal {E}_0}\right) ^{4 -\frac{4\kappa }{5}} +1\right) t^{\frac{\kappa }{5}}. \end{aligned}$$

We define the function \(Y(t)= \Vert {w}\Vert _{L^4(0,t;L^{12})}\) for \(t\in [0,1]\). A priori we do not know whether this function is finite, so to make the proof rigorous we should work with the Galerkin approximations, cf. [27]; we continue the proof in a formal way. The function Y is continuous with \(Y(0)=0\) and

$$\begin{aligned} Y(t)\le CR^{\frac{1}{5}} ( Y(t)^{4 -\frac{4\kappa }{5}} +(C_1\Vert {(u_0,u_1)}\Vert _{\mathcal {E}_0}) ^{4 -\frac{4\kappa }{5}} +1) t^{\frac{\kappa }{5}} . \end{aligned}$$

We define

$$\begin{aligned} t^{\frac{\kappa }{5}}_{\max } = \min \left\{ \frac{1}{2 C R^\frac{1}{5}( (C_1 S )^{4-\frac{4\kappa }{5}}+2 )} , 1 \right\} , \text { where } S \ge \Vert {(u_0,u_1)}\Vert _{\mathcal {E}_0} \end{aligned}$$

Now we will use a continuation argument to prove that the estimate \(Y(t)\le 1\) holds on the interval \([0,t_{\max }]\). The argument follows the scheme of the proof of [40, Proposition 1.21]. Defining the logical predicates \(H(t)=(Y(t)\le 1)\) and \(C(t)= (Y(t)\le \frac{1}{2} )\) we observe that the following facts hold

  • C(0) is true.

  • If \(C(s_0)\) for some \(s_0\) is true then H(s) is true in some neighbourhood of \(s_0\).

  • If \(s_n\rightarrow s_0\) and \(C(s_n)\) holds for every n then \(C(s_0)\) is true.

  • H(t) implies C(t) for \(t\in [0,t_{\text {max}}]\), indeed

    $$\begin{aligned} Y(t)\le & {} CR^\frac{1}{5}\left( (C_1 \Vert {(u_0,u_1)}\Vert _{\mathcal {E}_0})^{4-\frac{4\kappa }{5}}+1+Y(t)^{4-\frac{4\kappa }{5}} \right) t^{\frac{\kappa }{5}} \\\le & {} C R^\frac{1}{5} \left( (C_1 S)^{4-\frac{4\kappa }{5}}+2\right) t^{\frac{\kappa }{5}}_{\max } \le \frac{1}{2}. \end{aligned}$$

The continuation argument implies that C(t) holds for every \(t\in [0,t_{\max }]\): the set of such t contains 0, is closed in \([0,t_{\max }]\) by the third property, and is relatively open by the second and fourth ones, so it is the whole interval. From the triangle inequality we conclude that

$$\begin{aligned} \Vert {u}\Vert _{L^4(0,t_{\max };L^{12})}\le C_1\Vert {(u_0,u_1)}\Vert _{\mathcal {E}_0} + 1. \end{aligned}$$

Observe that \(t_{\max }\) and \(C_1\) are independent of the choice of \(p\in \mathcal {H}_{[0,1]}\). Because all trajectories are bounded in the \(\mathcal {E}_0\) norm, cf. Proposition 4.5, we deduce that \(\Vert {u}\Vert _{L^4(0,t;L^{12})}\) is finite for every \(t>0\). Moreover, if \((u(t),u_t(t)) \in B_0\) for every \(t\ge 0\), then the bound on the \(\mathcal {E}_0\) norm of the solution is uniform and we deduce the bound \(\Vert {u}\Vert _{L^4(0,t;L^{12})} \le C_t\) with \(C_t\) independent of p. \(\square \)

Remark 5.10

As a consequence of Proposition 5.8 and Lemma 5.9 for every \((u_0,u_1)\in \mathcal {E}_0\), the weak solution of Galerkin type of problem (3.1) is unique.

Lemma 5.11

If the weak solution \((u,u_t)\) of problem (3.1) is of Galerkin type, then for every \(T>0\) it belongs to the space \(C([0,T];\mathcal {E}_0)\).

Proof

The proof follows the arguments of [27, Proposition 3.3]. The key fact is that Galerkin (or equivalently, Shatah–Struwe) solutions satisfy the energy equation. Let \(t_n\rightarrow t\) and let \(T> \sup _{n\in \mathbb {N}}\{ t_n \}\). Clearly, \((u,u_t) \in C_w([0,T];\mathcal {E}_0)\) and hence \((u(t_n),u_t(t_n))\rightarrow (u(t),u_t(t))\) weakly in \(\mathcal {E}_0\). To deduce that this convergence is strong we need to show that \(\Vert (u(t_n),u_t(t_n))\Vert _{\mathcal {E}_0} \rightarrow \Vert (u(t),u_t(t))\Vert _{\mathcal {E}_0}\). To this end we will use the energy equation

$$\begin{aligned} \Vert {(u(t),u_t(t))}\Vert _{\mathcal {E}_0}^2- \Vert {(u(t_n),u_t(t_n))}\Vert _{\mathcal {E}_0}^2 = 2\int _{t_n}^t (p(s,u(s)),u_t) -\Vert {u_t(s)}\Vert ^2\, ds. \end{aligned}$$

Then

$$\begin{aligned} \left| \Vert {(u(t),u_t(t))}\Vert _{\mathcal {E}_0}^2- \Vert {(u(t_n),u_t(t_n))}\Vert _{\mathcal {E}_0}^2\right| \le C R\left( (R+1)|t-t_n| + \Vert {u}\Vert _{L^5{(t_n,t;L^{10})}}\right) \end{aligned}$$

where R is the bound of the \(L^\infty (0,T;L^2)\) norm of \(u_t\). The right-hand side tends to zero as \(t_n\rightarrow t\), which proves the assertion. \(\square \)

5.3 Non-autonomous dynamical system.

We will denote by \((u(t),u_t(t)) = \varphi _\varepsilon (t,p)(u_0,u_1)\) the map which gives the Galerkin-type solution of (3.1) with \(p\in \mathcal {H}(f_\varepsilon )\) as the right-hand side and the initial conditions \(u(0)=u_0\), \(u_t(0)=u_1\). We recall the definitions of a non-autonomous dynamical system and a cocycle.

Definition 5.12

Let \(X,\Sigma \) be metric spaces. Assume that \(\{\theta _t\}_{t \ge 0}\) is a semigroup in \(\Sigma \) and \(\varphi :\mathbb {R}^+\times \Sigma \rightarrow C(X)\) is a continuous map. Let the following conditions hold

  • \(\varphi (0,\sigma ) = \text {Id}_X\) for every \(\sigma \in \Sigma \).

  • The map \( \mathbb {R}^+\times \Sigma \ni (t,\sigma )\rightarrow \varphi (t,\sigma )x \in X \) is continuous for every x.

  • For every \(t,s\ge 0\) and \(\sigma \in \Sigma \) the cocycle property holds

    $$\begin{aligned} \varphi (t+s,\sigma )=\varphi (t,\theta _s \sigma )\varphi (s, \sigma ) . \end{aligned}$$

Then the pair \((\varphi ,\theta )\) is called a non-autonomous dynamical system (NDS) and the map \(\varphi \) a cocycle semiflow.
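As a simple illustration, if \(S(t)\) is a semigroup on X generated by an autonomous problem, one may take \(\Sigma \) to be a one-point space, \(\theta _t = \text {Id}_\Sigma \) and \(\varphi (t,\sigma ) = S(t)\); the cocycle property then reduces to the semigroup property,

$$\begin{aligned} \varphi (t+s,\sigma ) = S(t+s) = S(t)S(s) = \varphi (t,\theta _s \sigma )\varphi (s,\sigma ). \end{aligned}$$

In the non-autonomous setting of this paper the symbol \(\sigma \) is a time-dependent nonlinearity and \(\theta _t\) is the time shift, as in Proposition 5.13 below.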

The next result shows that \(\varphi _\varepsilon \) is an NDS with \(X=\mathcal {E}\) and \(\Sigma = \mathcal {H}(f_\varepsilon )\).

Proposition 5.13

The mapping \(\varphi _\varepsilon :\mathbb {R}^+\times \mathcal {H}(f_\varepsilon )\rightarrow C(\mathcal {E})\) together with the time translation \(\theta _t p_\varepsilon = p_\varepsilon (\cdot + t)\) constitutes a non-autonomous dynamical system.

Proof

The property \(\varphi _\varepsilon (0,p) = \text {Id}_{\mathcal {E}_0}\) and the cocycle property are obvious from the definitions of \(\varphi _\varepsilon \) and \(\theta _t\). Let \((u^n_0,u^n_1)\rightarrow (u_0,u_1)\) in \(\mathcal {E}_0\), \(p_\varepsilon ^n\rightarrow p_\varepsilon \) in the metric of \(d_{C(\mathbb {R};C^1(\mathbb {R}))}\) restricted to \(\mathcal {H}(f_\varepsilon )\), \(t_n\rightarrow t\), and let \(\{u^n\}_{n=1}^\infty \) and u be the Galerkin-type weak solutions of the problems governed by the equations

$$\begin{aligned}&u^n_{tt} + u^n_t -\Delta u^n = p^n_\varepsilon (t,u^n), \end{aligned}$$
(5.5)
$$\begin{aligned}&u_{tt} + u_t -\Delta u = p_\varepsilon (t,u), \end{aligned}$$
(5.6)

with the boundary data \(u^n=u =0\) on \(\partial \Omega \) and initial data \((u^n(0),u^n_t(0)) = (u^n_0,u^n_1)\in \mathcal {E}_0\) and \((u(0),u_t(0)) = (u_0,u_1)\in \mathcal {E}_0\). Choose \(T>0\) such that \(T> \sup _{n\in \mathbb {N}}\{t_n \}\). The following bounds hold

$$\begin{aligned}&\Vert \nabla u^n(t)\Vert _{L^2} \le C, \ \ \Vert \nabla u(t)\Vert _{L^2} \le C,\\&\quad \Vert u^n_t(t)\Vert _{L^2} \le C, \ \ \Vert u_t(t)\Vert _{L^2} \le C,\\&\quad \Vert u^n_{tt}(t)\Vert _{H^{-1}} \le C, \ \ \Vert u_{tt}(t)\Vert _{H^{-1}} \le C. \end{aligned}$$

for \(t\in [0, T]\) with a constant \(C>0\). Moreover, the following bounds also hold

$$\begin{aligned} \Vert u^n\Vert _{L^4(0,T;L^{12})} \le C, \Vert u\Vert _{L^4(0,T;L^{12})} \le C. \end{aligned}$$
(5.7)

This means that, for a subsequence

$$\begin{aligned}&u^n \rightarrow v \ \ \text {weakly-*}\ \ \text {in}\ \ L^\infty (0,T;H^1_0),\\&\quad u^n_t \rightarrow v_t \ \ \text {weakly-*}\ \ \text {in}\ \ L^\infty (0,T;L^2),\\&\quad u^n_{tt} \rightarrow v_{tt} \ \ \text {weakly-*}\ \ \text {in}\ \ L^\infty (0,T;H^{-1}), \end{aligned}$$

for a certain function \(v \in L^\infty (0,T;H^1_0)\) with \(v_t \in L^\infty (0,T;L^2)\) and \(v_{tt} \in L^\infty (0,T;H^{-1})\). By Lemma 5.11, \(u^n, u\in C([0,T];\mathcal {E}_0)\). Moreover \(v\in C([0,T];L^2) \cap C_w([0,T];H^1_0)\) and \(v_t\in C([0,T];H^{-1})\cap C_w([0,T];L^2)\), cf. [41, Lemma 1.4, page 263]. We will show that \(v=u\) for \(t\in [0,T]\). Note that for every \(w\in L^2\)

$$\begin{aligned} (u^n(0),w) = (u^n(t),w) - \int _{0}^{t}(u^n_t(s),w)\, ds. \end{aligned}$$

Integrating with respect to t between 0 and T and exchanging the order of integration we obtain

$$\begin{aligned} T(u^n_0,w) = \int _{0}^{T}(u^n(t),w)\, dt - \int _{0}^{T}(u^n_t(s),(T-s)w)\, ds. \end{aligned}$$

Passing to the limit we obtain

$$\begin{aligned} T(u_0,w) = \int _{0}^{T}(v(t),w)\, dt - \int _{0}^{T}(v_t(s),(T-s)w)\, ds = T(v(0),w), \end{aligned}$$

whence \(v(0) = u_0\). It is straightforward to see that \(u^n(t)\rightarrow v(t)\) weakly in \(H^1_0\) for every \(t\in [0,T]\). Similar reasoning for \(u^n_t\) allows us to deduce that \(v_t(0) = u_1\) and \(u^n_t(t) \rightarrow v_t(t)\) weakly in \(L^2\) for every \(t\in [0,T]\). Now we have to show that v satisfies (5.6). Indeed, the weak form of (5.5) reads

$$\begin{aligned}&\int _{0}^{T}\langle u^{n}_{tt}(t), w(t)\rangle _{H^{-1}\times H^1_0{}}\, dt + \int _{0}^{T}( u^{n}_{t}(t), w(t)) \, dt + \int _{0}^{T}(\nabla u^{n}(t), \nabla w(t)) \, dt\\&\quad = \int _{0}^{T}\int _{\Omega } p^n_{\varepsilon }(t,u^n(x,t)) w(t) \, dx \, dt, \end{aligned}$$

for every \(w\in L^2(0,T;H^1_0)\). The linear terms pass to the limit by the weak-* convergences above, so it suffices to pass to the limit on the right-hand side. Fix \(t\in [0,T]\) and \(w\in H^1_0\). By the compact embedding \(H^1_0 \hookrightarrow L^p\) for \(p\in [1,6)\) it holds that \(u^n(\cdot ,t)\rightarrow u(\cdot ,t)\) strongly in \(L^{6-\frac{6}{5}\kappa }\) and, for a subsequence, \(u^n(x,t)\rightarrow u(x,t)\) for a.e. \(x\in \Omega \) and \(|u^n(x,t)|\le g(x)\) with \(g\in L^{6-\frac{6}{5}\kappa }\), where g can also depend on t. Hence

$$\begin{aligned} p^n_\varepsilon (t,u^n(x,t))w(x) \rightarrow p_\varepsilon (t,u(x,t))w(x)\ \ \text {for a.e.}\ \ x\in \Omega , \end{aligned}$$

moreover, by the Young inequality,

$$\begin{aligned} |p^n_\varepsilon (t,u^n(x,t))w(x)|\le & {} C(1+|u^n(x,t)|^{5-\kappa }) |w(x)| \\\le & {} |w(x)|^6 + C(1+g(x)^{6-\frac{6}{5}\kappa })\in L^1. \end{aligned}$$

We can use the Lebesgue dominated convergence theorem to obtain

$$\begin{aligned} \lim _{n\rightarrow \infty } \int _{\Omega } p^n_{\varepsilon }(t,u^n(x,t)) w(x) \, dx = \int _{\Omega } p_{\varepsilon }(t,u(x,t)) w(x) \, dx\qquad \text {for a.e.}\qquad t\in (0,T). \end{aligned}$$

Now let \(w\in L^2(0,T;H^1_0)\). The following holds

$$\begin{aligned}&\left| \int _{\Omega } p^n_{\varepsilon }(t,u^n(x,t)) w(x,t) \, dx \right| \\&\quad \le C \int _{\Omega }(1+|u^n(x,t)|^{5}) |w(x,t)| \, dx \le C\Vert w(t)\Vert _{L^6}(1+\Vert u^n(t)\Vert _{L^6}^5)\\&\quad \le C\Vert w(t)\Vert _{H^1_0}(1+\Vert u^n(t)\Vert _{H^1_0}^5) \le C\Vert w(t)\Vert _{H^1_0} \in L^1(0,T), \end{aligned}$$

whence we can pass to the limit in the nonlinear term. The fact that the \(L^4(0,T;L^{12})\) estimate on \(u^n\) is independent of n implies that v satisfies the same estimate which ends the proof that \(u=v\).

We must show that \(\Vert {(u^n(t_n),u^n_t(t_n))-(u(t),u_t(t))}\Vert _{\mathcal {E}_0}\rightarrow 0\). We already know that \(u^n(t) \rightarrow u(t)\) weakly in \(H^1_0\) and \(u^n_t(t) \rightarrow u_t(t)\) weakly in \(L^2\) for every \(t\in [0,T]\). We will first prove that these convergences are strong. To this end let \(w^n = u^n-u\). The following holds

$$\begin{aligned} w^n_{tt} + w^n_t - \Delta w^n = p^n_{\varepsilon }(t,u^n) - p_\varepsilon (t,u). \end{aligned}$$

Testing this equation with \(w^n_t\) we obtain

$$\begin{aligned} \frac{1}{2}\frac{d}{dt} \Vert {(w^n(t),w^n_t(t))}\Vert _{\mathcal {E}_0}^2 + \Vert {w^n_t(t)}\Vert ^2= \int _{\Omega }(p^n_{\varepsilon }(t,u^n) - p_\varepsilon (t,u))w^n_t(t)\, dx. \end{aligned}$$
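Here the nonlinear term can be split as \(p^n_{\varepsilon }(t,u^n) - p_\varepsilon (t,u) = \big (p^n_{\varepsilon }(t,u^n) - p^n_\varepsilon (t,u)\big ) + \big (p^n_{\varepsilon }(t,u) - p_\varepsilon (t,u)\big )\), and the second part is absorbed by the damping term via the Cauchy inequality,

$$\begin{aligned} \int _{\Omega }\big (p^n_{\varepsilon }(t,u) - p_\varepsilon (t,u)\big )w^n_t(t)\, dx \le \Vert {w^n_t(t)}\Vert ^2 + \frac{1}{4}\int _{\Omega }\big (p^n_{\varepsilon }(t,u) - p_\varepsilon (t,u)\big )^2\, dx. \end{aligned}$$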

Simple computations lead us to

$$\begin{aligned}&\frac{d}{dt}\Vert {(w^n(t),w^n_t(t))}\Vert _{\mathcal {E}_0}^2 \\&\quad \le \frac{1}{2}\int _{\Omega }(p^n_{\varepsilon }(t,u) - p_\varepsilon (t,u))^2\, dx \\&\qquad + 2\int _{\Omega }(p^n_{\varepsilon }(t,u^n) - p^n_\varepsilon (t,u))w^n_t(t)\, dx. \end{aligned}$$

After integration from 0 to t we obtain

$$\begin{aligned}&\Vert {(w^n(t),w^n_t(t))}\Vert _{\mathcal {E}_0}^2 \le \Vert {(u^n_0-u_0,u^n_1-u_1)}\Vert _{\mathcal {E}_0}^2 \nonumber \\&\qquad + \frac{1}{2}\int _{0}^T\int _{\Omega }(p^n_{\varepsilon }(s,u) - p_\varepsilon (s,u))^2\, dx\, ds \nonumber \\&\qquad + 2 \int _{0}^T\int _{\Omega }|(p^n_{\varepsilon }(s,u^n) - p^n_\varepsilon (s,u))w^n_t(s)|\, dx\, ds. \end{aligned}$$
(5.8)

We must show that the right-hand side of the above inequality tends to zero as n goes to infinity. Clearly, the first term tends to zero. To deal with the second term note that \(p^n_\varepsilon (s,u)\rightarrow p_\varepsilon (s,u)\) for almost every \((x,s)\in \Omega \times (0,T)\), by the convergence \(p^n_\varepsilon \rightarrow p_\varepsilon \) in \(\mathcal {H}(f_\varepsilon )\). Moreover, by (H3)

$$\begin{aligned} (p^n_{\varepsilon }(s,u) - p_\varepsilon (s,u))^2 \le C, \end{aligned}$$

and the Lebesgue dominated convergence theorem implies that the second term tends to zero. We now deal with the last term. By the mean value theorem and (H5) we obtain

$$\begin{aligned} |(p^n_{\varepsilon }(s,u^n) - p^n_\varepsilon (s,u))w^n_t(s)|\le C|u^n(s)-u(s)|(1+|u^n(s)|^{4-\kappa }+|u(s)|^{4-\kappa })|w^n_t(s)| \end{aligned}$$

From the compact embedding \(H^1_0\hookrightarrow L^q\) for \(q\in [1,6)\) and the Aubin–Lions lemma, cf. [39, Corollary 4], we know that, for a subsequence, \(u^n-u\rightarrow 0\) in \(C([0,T];L^q)\) with \(q\in [1,6)\). This motivates the use of the Hölder inequality with exponents \(q=\frac{12}{2+\kappa }<6, p=\frac{12}{4-\kappa }, r=2\), which yields

$$\begin{aligned}&\int _{\Omega }|(p^n_{\varepsilon }(s,u^n) - p^n_\varepsilon (s,u))w^n_t(s)|\, dx\\&\quad \le C\Vert u^n(s)-u(s)\Vert _{L^2}\Vert w^n_t(s)\Vert _{L^2} \\&\qquad + C\Vert u^n(s)-u(s)\Vert _{L^{\frac{12}{2+\kappa }}}(\Vert u^n(s)\Vert _{L^{12}}^{4-\kappa }+\Vert u(s)\Vert _{L^{12}}^{4-\kappa })\Vert w^n_t(s)\Vert _{L^2}. \end{aligned}$$

Using the fact that \(\Vert w^n_t(s)\Vert _{L^2}\le \Vert u^n_t(s)\Vert _{L^2}+\Vert u_t(s)\Vert _{L^2}\le C\), after integration in time we obtain

$$\begin{aligned}&\int _0^T\int _{\Omega }|(p^n_{\varepsilon }(s,u^n) - p^n_\varepsilon (s,u))w^n_t(s)|\, dx\, ds \le CT\sup _{s\in [0,T]}\Vert u^n(s)-u(s)\Vert _{L^2}\\&\qquad + C\sup _{s\in [0,T]}\Vert u^n(s)-u(s)\Vert _{L^{\frac{12}{2+\kappa }}}\left( \int _0^T\Vert u^n(s)\Vert _{L^{12}}^{4-\kappa }\, ds+\int _0^T\Vert u(s)\Vert _{L^{12}}^{4-\kappa }\, ds\right) . \end{aligned}$$

The last two time integrals are bounded from (5.7) by a constant independent of n, whence the whole expression converges to zero.

Now, the triangle inequality implies

$$\begin{aligned}&\Vert \nabla u^n(t_n)-\nabla u(t)\Vert _{L^2}^2 + \Vert u^n_t(t_n)- u_t(t)\Vert _{L^2}^2 \\&\quad \le 2\left( \Vert \nabla u^n(t_n)-\nabla u(t_n)\Vert _{L^2}^2 + \Vert u^n_t(t_n)- u_t(t_n)\Vert _{L^2}^2\right) \\&\qquad + 2\left( \Vert \nabla u(t_n)-\nabla u(t)\Vert _{L^2}^2 + \Vert u_t(t_n)- u_t(t)\Vert _{L^2}^2\right) , \end{aligned}$$

where both terms tend to zero: the first one because the right-hand side of (5.8) does not depend on t and tends to zero, and the second one by Lemma 5.11. This completes the proof. \(\square \)

6 Existence and regularity of non-autonomous attractors.

6.1 Abstract results on existence and structure of non-autonomous attractors.

In this subsection we recall the definitions of uniform and cocycle attractors, and the results on their existence and the relations between them. These results can be found for example in [7, 28]. We recall that the Hausdorff semidistance in the metric space \((X,d)\) between two sets \(A,B\subset X\) is defined as

$$\begin{aligned} \mathrm {dist}_X(A,B) = \sup _{x\in A}\inf _{y\in B} d(x,y). \end{aligned}$$
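Note that this semidistance is not symmetric: for instance, for \(A=\{0\}\) and \(B=\{0,1\}\) in \(\mathbb {R}\) one has

$$\begin{aligned} \mathrm {dist}_{\mathbb {R}}(A,B) = 0, \qquad \mathrm {dist}_{\mathbb {R}}(B,A) = 1, \end{aligned}$$

so \(\mathrm {dist}_X(A,B)=0\) expresses only that A is contained in the closure of B.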

Definition 6.1

The set \(\mathcal {A}\subset X\) is called the uniform attractor for the cocycle \(\varphi \) on X if \(\mathcal {A}\) is the smallest compact set such that for all bounded sets \(B\subset X\) and \(\Upsilon \subset \Sigma \) it holds that

$$\begin{aligned} \lim _{t\rightarrow \infty }\sup _{\sigma \in \Upsilon } \text {dist}_X(\varphi (t,\sigma )B,\mathcal {A}) = 0. \end{aligned}$$

Definition 6.2

Let \((\varphi ,\theta )\) be an NDS such that \(\theta \) is a group, i.e., \(\Sigma \) is invariant under every \(\theta _t\). Then we call the family of compact sets \(\{\mathcal {A}(\sigma ) \}_{\sigma \in \Sigma }\) the cocycle attractor if

  • \(\{\mathcal {A}(\sigma )\}_{\sigma \in \Sigma }\) is invariant under the NDS \((\varphi ,\theta )\), i.e.,

    $$\begin{aligned} \varphi (t,\sigma )\mathcal {A}(\sigma ) = \mathcal {A}(\theta _t \sigma ) \;\text { for every } t\ge 0. \end{aligned}$$
  • \(\{\mathcal {A}(\sigma )\}_{\sigma \in \Sigma }\) pullback attracts all bounded subsets \(B\subset X\), i.e.,

    $$\begin{aligned} \lim _{t\rightarrow \infty }\text {dist}_X(\varphi (t,\theta _{-t} \sigma ) B,\mathcal {A}(\sigma )) = 0. \end{aligned}$$

Remark 6.3

If for some \(\sigma \in \Sigma \) we consider the mapping \(S(t,\tau ) =\varphi (t-\tau ,\theta _\tau \sigma )\) for an NDS \((\varphi ,\theta )\), then the family of mappings \(\{S(t,\tau ):t\ge \tau \}\) forms an evolution process. Let \(\{\mathcal {A}(\sigma ) \}_{\sigma \in \Sigma }\) be a cocycle attractor for the NDS. Then \(\mathcal {A}(t) = \mathcal {A}(\theta _t \sigma )\) is called a pullback attractor for \(S(t,\tau )\).

Definition 6.4

We say that the NDS \((\varphi ,\theta )\) is uniformly asymptotically compact if there exists a compact set \(K\subset X\) such that for all bounded sets \(B\subset X\) and \(\Upsilon \subset \Sigma \)

$$\begin{aligned} \lim _{t\rightarrow \infty }\sup _{\sigma \in \Upsilon } \text {dist}_X(\varphi (t,\sigma )B,K)= 0 . \end{aligned}$$

Theorem 6.5

[7, Theorem 3.12] Let the NDS \((\varphi ,\theta )\) be such that \(\theta \) is a group. Assume that \((\varphi ,\theta )\) is uniformly asymptotically compact and that \(\Sigma \) is compact. Then the uniform and cocycle attractors exist and it holds that

$$\begin{aligned} \bigcup _{\sigma \in \Sigma }\mathcal {A}(\sigma ) = \mathcal {A}, \end{aligned}$$

where \( \{\mathcal {A}(\sigma )\}_{\sigma \in \Sigma }\) is the cocycle attractor and \(\mathcal {A}\) is the uniform attractor.

Definition 6.6

Let \((\varphi ,\theta )\) be an NDS such that \(\theta \) is a group. We call \(\xi :\mathbb {R}\rightarrow X\) a global solution through x and \(\sigma \) if, for all \(t\ge s\), it satisfies

$$\begin{aligned} \varphi (t-s,\theta _s \sigma )\xi (s) = \xi (t) \text { and } \xi (0) = x. \end{aligned}$$

Moreover, we say that a subset \(\mathcal {M}\subset X\) is lifted-invariant if for each \(x\in \mathcal {M}\) there exist \(\sigma \in \Sigma \) and a bounded global solution \(\xi :\mathbb {R}\rightarrow X\) through x and \(\sigma \).

Theorem 6.7

[7, Proposition 3.21] Let the NDS \((\varphi ,\theta )\) be such that \(\theta \) is a group. Assume that \((\varphi ,\theta )\) is uniformly asymptotically compact and that \(\Sigma \) is compact. Then the uniform attractor \(\mathcal {A}\) is the maximal bounded lifted-invariant set of the NDS \((\varphi ,\theta )\).

6.2 Uniform and cocycle attractors for the NDS \(\varphi _\varepsilon \).

We show the existence and regularity of the uniform and cocycle attractors, and the relation between them, for the NDS given by the Galerkin (or equivalently Shatah–Struwe) solutions of our problem with subquintic nonlinearity. The key property is the uniform asymptotic compactness. To obtain it, we start from the result which states that the solution can be split into the sum of two functions: one that decays to zero, and another one which is smoother than the initial data.

Lemma 6.8

Let u be the Shatah–Struwe solution of (3.1) such that \((u(t),u_t(t)) \in B_0\) for every \(t\ge 0\), where \(B_0\) is the absorbing set from Proposition 4.5. There exists a finite and increasing sequence \(\beta _0,\ldots ,\beta _k\) with \(\beta _0=0\), \(\beta _k=1\) and constants \(C, C_R, \alpha >0\) such that if \(i\in \{0,\ldots ,k-1\}\) and \(\Vert {(u(t),u_t(t))}\Vert _{\mathcal {E}_{\beta _i}}\le R\) for every \(t\in [0,\infty )\), then u can be represented as the sum of two functions v, w satisfying

$$\begin{aligned}&u(t)=v(t)+w(t), \ \ \Vert {(v(t),v_t(t))}\Vert _{\mathcal {E}_{\beta _{i}}}\le \Vert {(u_0,u_1)}\Vert _{\mathcal {E}_{\beta _i}} C e^{-\alpha t} \\&\qquad \text {and}\ \ \Vert {(w(t),w_t(t))}\Vert _{\mathcal {E}_{\beta _{i+1}}} \le C_R \ \ \text {for}\ \ i\in \{0,\ldots ,k-1\}. \end{aligned}$$

Moreover the constants \(C,C_R ,\alpha \) are independent of the choice of \(p_\varepsilon \in \mathcal {H}_{[0,1]}\) treated as the right-hand side in equation (3.1).

Proof

Our first aim is to obtain a relation between \(\beta _i\) and \(\beta _{i+1}\) such that if for every \(t\in [0,\infty )\) the bound \(\Vert {(u(t),u_t(t))}\Vert _{\mathcal {E}_{\beta _i}}\le R\) holds, then \(p_\varepsilon (t,u) \in L^1_{loc}([0,\infty );H^{\beta _{i+1}})\). To this end we first interpolate between \(L^q\) and \(W^{1,s}\): from the Gagliardo–Nirenberg inequality we obtain

$$\begin{aligned} \Vert {p_\varepsilon (t,u)}\Vert _{H^{\alpha }}\le C \Vert {\nabla p_\varepsilon (t,u)}\Vert _{L^s}^\theta \Vert {p_\varepsilon (t,u)}\Vert _{L^q}^{1-\theta }+C\Vert {p_\varepsilon (t,u)}\Vert _{L^q} \end{aligned}$$

with \(\alpha \le \theta \le 1\), \(\frac{1}{2}=\frac{\alpha }{3}+\left( \frac{1}{s}-\frac{1}{3}\right) \theta +\frac{1-\theta }{q}\) and \(s<2\). From the chain rule and the Hölder inequality with conjugate exponents p and \(p'\) we deduce

$$\begin{aligned}&\Vert {p_\varepsilon (t,u)}\Vert _{H^\alpha }\\&\quad \le C \left( \int _{\Omega }\left| \frac{\partial {p_\varepsilon }}{\partial u} (t,u)\right| ^{sp'}dx \right) ^\frac{\theta }{sp'} \left( \int _{\Omega }|\nabla u|^{sp}dx\right) ^\frac{\theta }{sp} \Vert {p_\varepsilon (t,u)}\Vert _{L^q}^{1-\theta }\\&\qquad +C\Vert {p_\varepsilon (t,u)}\Vert _{L^q}. \end{aligned}$$

From assumption (H5), the Cauchy inequality, and the fact that the solution u remains in the absorbing set, taking \(sp = 2,\;sp'=3,\;\theta =\frac{1}{2}\), we get the inequality

$$\begin{aligned}&\Vert {p_\varepsilon (t,u)}\Vert _{H^\alpha }\le C(R) \left( \left( \int _{\Omega }|u|^{12}dx\right) ^\frac{1}{3} + \left( \int _{\Omega }|u|^{(5-\kappa )q}dx\right) ^{\frac{1}{q}} +1 \right) \;\nonumber \\&\qquad \qquad \qquad \qquad \qquad \text {with}\; \alpha = \frac{3}{2}\left( \frac{1}{2}-\frac{1}{q}\right) ,\; \alpha <\frac{1}{2}. \end{aligned}$$
(6.1)
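For the reader's convenience we sketch the computation of the exponents, assuming, as in (H5), that \(\left| \frac{\partial {p_\varepsilon }}{\partial u} (t,s)\right| \le C(1+|s|^{4-\kappa })\). The conditions \(sp=2\), \(sp'=3\) and \(\frac{1}{p}+\frac{1}{p'}=1\) force

$$\begin{aligned} s=\frac{6}{5},\quad p=\frac{5}{3},\quad p'=\frac{5}{2}, \qquad \text {and, with } \theta =\frac{1}{2},\qquad \frac{1}{2}=\frac{\alpha }{3}+\frac{1}{4}+\frac{1}{2q},\ \text { i.e. }\ \alpha =\frac{3}{2}\left( \frac{1}{2}-\frac{1}{q}\right) . \end{aligned}$$

Moreover \(\left| \frac{\partial {p_\varepsilon }}{\partial u} (t,u)\right| ^{sp'} \le C(1+|u|^{12-3\kappa })\le C(1+|u|^{12})\), which leads to the term \(\left( \int _{\Omega }|u|^{12}dx\right) ^{\frac{1}{3}}\) in (6.1).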

The choice of \(s, p, \theta \) is motivated by the fact that we need the terms which we can control to appear on the right-hand side of (6.1) after time integration. In the first step of the bootstrap argument this can be either the Strichartz norm \(L^4(L^{12})\) of the solution u, or its energy norm \(L^\infty (H^1_0)\). In the consecutive steps of the bootstrap argument we need terms on the right-hand side which can be controlled having the bounds on the \(L^4(W^{\beta _i,12})\) and \(L^\infty (H^{\beta _{i}+1}_0)\) norms of the solution. Now we will inductively describe the sequence \(\beta _1,\ldots ,\beta _{k-1}\), starting with \(\beta _1\). If we set \( \frac{5-\kappa }{10}\le \frac{1}{q} < \frac{1}{2}\) in inequality (6.1), we obtain

$$\begin{aligned}&\int _{t_0}^{t_0+h} \Vert {p_\varepsilon (t,u)}\Vert _{H^{\beta _1}}dt\\&\quad \le C(R)\left( \Vert {u}\Vert _{L^4(t_0,t_0+h;L^{12})}^4 +\Vert {u}\Vert _{L^5(t_0,t_0+h;L^{10})}^5 +h\right) \\&\quad \le C(h,R). \end{aligned}$$

We observe that \(\beta _1\in (0,\delta )\), for some \(\delta >0\). Assume that for some \(i\in \{1,\dots ,k-2\}\)

$$\begin{aligned}&\Vert {(u(t),u_t(t))}\Vert _{\mathcal {E}_{\beta _i}}\\&\quad \le R \quad \text {for}\quad t\in [0,\infty ) \quad \text {and}\quad \int _{t_0}^{t_0+h} \Vert {p_\varepsilon (t,u)}\Vert _{H^{\beta _i}}dt\\&\quad \le C(h,R)\ \quad \text {for } t_0\in [0,\infty ) . \end{aligned}$$

From Lemma 5.5 we see that

$$\begin{aligned} u\in L^4(t_0,t_0+h;W^{\beta _i,12}), \qquad \Vert {u}\Vert _{L^4(t_0,t_0+h;W^{\beta _i,12})}\le C(h,R) . \end{aligned}$$

By the Sobolev embedding \(W^{\beta _i,10}\hookrightarrow L^{\frac{30}{3-10\beta _i}} \) and by interpolation we see that

$$\begin{aligned}&\Vert {u}\Vert _{L^5\left( t_0,t_0+h;L^{\frac{30}{3-10\beta _i}} \right) }\\&\quad \le \Vert {u}\Vert _{L^5(t_0,t_0+h;W^{\beta _i,10})}\\&\quad \le \Vert {u}\Vert _{L^4(t_0,t_0+h;W^{\beta _i,12})}^\frac{4}{5}\Vert {u}\Vert _{L^\infty (t_0,t_0+h;H^{\beta _i +1}_0)}^\frac{1}{5} \le C(h,R). \end{aligned}$$
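One possible justification of the interpolation above (a sketch, using standard interpolation and Sobolev embedding arguments): interpolating between \(W^{\beta _i,12}\) and \(H^{\beta _i+1}_0\) with weights \(\frac{4}{5}\) and \(\frac{1}{5}\) gives

$$\begin{aligned} \Vert {u}\Vert _{W^{\beta _i+\frac{1}{5},6}}\le C\Vert {u}\Vert _{W^{\beta _i,12}}^{\frac{4}{5}}\Vert {u}\Vert _{H^{\beta _i+1}_0}^{\frac{1}{5}}, \qquad \text {since}\qquad \frac{4}{5}\cdot \frac{1}{12}+\frac{1}{5}\cdot \frac{1}{2}=\frac{1}{6}, \end{aligned}$$

and \(W^{\beta _i+\frac{1}{5},6}\hookrightarrow W^{\beta _i,10}\) because \(\frac{1}{10}=\frac{1}{6}-\frac{1/5}{3}\); the Hölder inequality in time, with exponents combined as \(\frac{4}{5}\cdot \frac{1}{4}+\frac{1}{5}\cdot \frac{1}{\infty }=\frac{1}{5}\), then yields the displayed bound.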

Using (6.1) with \(q=\frac{6}{3-10\beta _i}\) we obtain

$$\begin{aligned}&\int _{t_0}^{t_0+h} \Vert {p_\varepsilon (t,u)}\Vert _{H^{\beta _{i+1}}}dt\\&\quad \le C(R)\left( \Vert {u}\Vert _{L^4(t_0,t_0+h;L^{12})}^4 +\Vert {u}\Vert _{L^5\left( t_0,t_0+h;L^{\frac{30}{3-10\beta _i}}\right) }^5 +h\right) \\&\quad \le C(h,R), \end{aligned}$$

with \(\beta _{i+1}=\frac{5}{2}\beta _i\). From this recurrence relation and the fact that \(\beta _1\) can be chosen anywhere in \((0,\delta )\), we can find a sequence \(\beta _1,\ldots ,\beta _{k-1}\) such that \(\beta _{k-1} = \frac{9}{20}\); for instance, one may take \(\beta _1 = \frac{9}{20}\left( \frac{2}{5}\right) ^{k-2}\) with k so large that \(\beta _1<\delta \). From this exponent we can pass to \(\beta _k=1\) in one final step. Indeed, if \(\beta _{k-1} = \frac{9}{20}\), then from the Sobolev embeddings \(H^{1+\frac{9}{20}}_0 \hookrightarrow L^{60}\) and \(H^{1+\frac{9}{20}}_0 \hookrightarrow W^{1,\frac{60}{21}}_0\) we get the bounds

$$\begin{aligned} \Vert {u}\Vert _{L^{60}}\le C(R) \quad \text {and}\quad \Vert {\nabla u}\Vert _{L^{\frac{60}{21}}}\le C(R). \end{aligned}$$

Hence,

$$\begin{aligned} \Vert {\nabla p_\varepsilon (t,u)}\Vert _{L^2} \le C\left( 1+ \int _\Omega |u|^8 |\nabla u|^2\, dx\right) \le C\left( 1+\Vert {u}\Vert _{L^{28}}^8 \Vert {\nabla u}\Vert _{L^{\frac{60}{21}}}^2\right) \le C(R), \end{aligned}$$

and consequently it holds that

$$\begin{aligned} \int _{t_0}^{t_0+h}\Vert {\nabla p_\varepsilon (t,u)}\Vert _{L^2}\, dt \le C(h,R). \end{aligned}$$

The proof now follows by a decomposition argument. Indeed, let us decompose \(u(t)=w(t)+v(t)\), where w and v satisfy the problems

$$\begin{aligned} {\left\{ \begin{array}{ll} v_{tt}+v_t-\Delta v = 0,\\ v(t,x) = 0\; \text {for}\; x\in \partial \Omega ,\\ v(0,x) = u_0(x),\\ v_t(0,x) = u_1(x), \end{array}\right. } \qquad \qquad {\left\{ \begin{array}{ll} w_{tt}+w_t-\Delta w = p_\varepsilon (t,v+w),\\ w(t,x) = 0\; \text {for}\; x\in \partial \Omega ,\\ w(0,x) = 0,\\ w_t(0,x) = 0. \end{array}\right. } \end{aligned}$$

From Lemma 5.5 we get that \(\Vert {(v(t),v_t(t))}\Vert _{\mathcal {E}_{\beta _i}} \le C \Vert {(u_0,u_1)}\Vert _{\mathcal {E}_{\beta _i}} e^{- \alpha t}\) and

$$\begin{aligned} \Vert {(w(t+h),w_t(t+h))}\Vert _{\mathcal {E}_{\beta _{i+1}}}\le C e^{-\alpha h}\Vert {(w(t),w_t(t))}\Vert _{\mathcal {E}_{\beta _{i+1}}}+ C(h,R), \end{aligned}$$

for every \(t\ge 0\) and \(h>0\). We choose h such that \( Ce^{-\alpha h}\le \frac{1}{2}\). Iterating the above inequality on consecutive intervals of length h we obtain that \(\Vert {(w(t),w_t(t))}\Vert _{\mathcal {E}_{\beta _{i+1}}}\le 2C(h,R) = C_R\) for every \(t\ge 0\) and \(i\in \{0,\ldots ,k-1 \}\). We stress that all constants are independent of \(p_\varepsilon \in \mathcal {H}_{[0,1]}\). \(\square \)

The bounds obtained in the previous lemma allow us to deduce the uniform asymptotic compactness of the considered non-autonomous dynamical system.

Proposition 6.9

For every \(\varepsilon \in [0,1]\), the non-autonomous dynamical system \((\varphi _\varepsilon ,\theta )\) is uniformly asymptotically compact.

Proof

Let \(B_0\) be the absorbing set from Proposition 4.5. Then for every bounded set \(B\subset \mathcal {E}\) there exists \(t_0\) such that for every \(t\ge t_0\) and every \(p_\varepsilon \in \mathcal {H}(f_\varepsilon )\) it holds that \(\varphi _\varepsilon (t,p_\varepsilon )B\subset B_0\). From Lemma 6.8 there exists a set \(B_{\beta _{1}}\subset \mathcal {E}_{\beta _1}\), compact in \(\mathcal {E}_0\), such that

$$\begin{aligned} \lim _{t\rightarrow \infty } \sup _{p_\varepsilon \in \mathcal {H}(f_\varepsilon )} \text {dist}_{\mathcal {E}_0}(\varphi _\varepsilon (t,p_\varepsilon )B,B_{\beta _1}) =0, \end{aligned}$$

which shows that the non-autonomous dynamical system \((\varphi _\varepsilon ,\theta )\) is uniformly asymptotically compact. \(\square \)

We are now in a position to formulate the main result of this section, the theorem on non-autonomous attractors.

Theorem 6.10

For every \(\varepsilon \in [0,1]\) problem (3.1) has a uniform attractor \(\mathcal {A}_\varepsilon \), a cocycle attractor \(\{\mathcal {A}_\varepsilon (p)\}_ {p\in \mathcal {H}(f_\varepsilon )}\) and pullback attractors, all bounded in \(\mathcal {E}_1\) uniformly with respect to \(\varepsilon \). Moreover, the following holds

$$\begin{aligned} \mathcal {A}_\varepsilon = \bigcup _{p\in \mathcal {H}(f_\varepsilon )} \mathcal {A}_\varepsilon (p). \end{aligned}$$

Proof

Because \((\varphi _\varepsilon ,\theta )\) is uniformly asymptotically compact, from Theorem 6.5 we get the existence of the uniform and cocycle attractors and the relation between them. For \((u_0,u_1) \in \mathcal {A}_\varepsilon \), by Theorem 6.7 there exists a bounded global solution u(t) with \((u(0),u_t(0)) = (u_0,u_1)\). If \(\mathcal {A}_\varepsilon \) is bounded in \(\mathcal {E}_{\beta _i}\), then from Lemma 6.8 we can split this solution into the sum \(u(t)=v^n(t)+w^n(t)\) for \(t\in [-n,\infty )\) such that

$$\begin{aligned} \Vert {(v^n(t),v^n_t(t))}\Vert _{\mathcal {E}_{\beta _{i}}}\le C e^{-\alpha (t+n)} \text { and } \Vert {(w^n(t),w^n_t(t))}\Vert _{\mathcal {E}_{\beta _{i+1}}} \le C. \end{aligned}$$

Then, for a subsequence, it holds that \(w^n(0) \rightarrow w\) weakly in \(\mathcal {E}_{\beta _{i+1}}\) for some \(w\in \mathcal {E}_{\beta _{i+1}}\) and \(v^n(0) \rightarrow 0\) in \(\mathcal {E}_{\beta _i}\) as \(n\rightarrow \infty \), so \((u_0,u_1) = w \in \mathcal {E}_{\beta _{i+1}}\). Because \(\mathcal {A}_\varepsilon \) is bounded in \(\mathcal {E}_0\), in a finite number of steps we obtain the boundedness of the uniform attractors in \(\mathcal {E}_1\). Moreover, due to Proposition 4.5 and Lemma 6.8 the \(\mathcal {E}_1\) bound of these attractors does not depend on \(\varepsilon \). \(\square \)

7 Upper-Semicontinuous Convergence of Attractors

The paper concludes with the result on the upper-semicontinuous convergence of attractors.

7.1 Definition and Properties of Upper-Semicontinuous Convergence of Sets

We recall the definitions of the Hausdorff and Kuratowski upper-semicontinuous convergence of sets, and the relation between these notions.

Definition 7.1

Let \((X,d)\) be a metric space and let \(\{A_\varepsilon \}_{\varepsilon \in [0,1]}\) be a family of sets in X. We say that this family converges to \(A_0\) upper-semicontinuously in Hausdorff sense if

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0^+}\text {dist}_X(A_\varepsilon ,A_0) = 0. \end{aligned}$$
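For example, in \(X=\mathbb {R}\) the family \(A_\varepsilon =[0,1+\varepsilon ]\) converges to \(A_0=[0,1]\) upper-semicontinuously in Hausdorff sense, since

$$\begin{aligned} \text {dist}_{\mathbb {R}}(A_\varepsilon ,A_0)=\varepsilon \rightarrow 0 \quad \text {as}\quad \varepsilon \rightarrow 0^+. \end{aligned}$$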

Definition 7.2

Let \((X,d)\) be a metric space and let \(\{A_\varepsilon \}_{\varepsilon \in [0,1]}\) be a family of sets in X. We say that this family converges to \(A_0\) upper-semicontinuously in Kuratowski sense if

$$\begin{aligned} X-\limsup _{\varepsilon \rightarrow 0^+}A_\varepsilon \subset A_0, \end{aligned}$$

where \(X-\limsup _{\varepsilon \rightarrow 0^+}A_\varepsilon \) is the Kuratowski upper limit defined by

$$\begin{aligned} X-\limsup _{\varepsilon \rightarrow 0^+}A_\varepsilon =\{x\in X:\ \text {there exist } \varepsilon _n\rightarrow 0^+ \text { and } x_{\varepsilon _n}\in A_{\varepsilon _n} \text { such that } d(x_{\varepsilon _n},x)\rightarrow 0 \}. \end{aligned}$$

The proof of the next result can be found for example in [17, Proposition 4.7.16].

Lemma 7.3

Assume that the sets \(\{A_\varepsilon \}_{\varepsilon \in [0,1]}\) are nonempty and closed and the set \(\cup _{\varepsilon \in [0,1]}A_{\varepsilon }\) is relatively compact in X. If the family \(\{A_\varepsilon \}_{\varepsilon \in [0,1]}\) converges to \(A_0\) upper-semicontinuously in Kuratowski sense then \(\{A_\varepsilon \}_{\varepsilon \in [0,1]}\) converges to \(A_0\) upper-semicontinuously in Hausdorff sense.
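The relative compactness assumption cannot be dropped here: for instance, in \(X=\mathbb {R}\) the family \(A_\varepsilon =\{0,\varepsilon ^{-1}\}\), \(A_0=\{0\}\), converges upper-semicontinuously in Kuratowski sense (any convergent sequence \(x_{\varepsilon _n}\in A_{\varepsilon _n}\) with \(\varepsilon _n\rightarrow 0^+\) must eventually equal 0), while

$$\begin{aligned} \text {dist}_{\mathbb {R}}(A_\varepsilon ,A_0)=\varepsilon ^{-1}\rightarrow \infty \quad \text {as}\quad \varepsilon \rightarrow 0^+, \end{aligned}$$

so the convergence in Hausdorff sense fails.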

7.2 Upper-Semicontinuous Convergence of Uniform Attractors

We conclude with the result on the upper-semicontinuous convergence of uniform attractors. Note that it is enough to obtain this property for the uniform attractors; the upper-semicontinuous convergence of the cocycle and pullback attractors is then a simple consequence.

Theorem 7.4

The family of uniform attractors \(\{\mathcal {A}_\varepsilon \}_{\varepsilon \in [0,1]}\) for the considered non-autonomous dynamical system \((\varphi _\varepsilon ,\theta _t)\) is upper-semicontinuous in Kuratowski and Hausdorff sense in \(\mathcal {E}_0\) as \(\varepsilon \rightarrow 0^+\).

Proof

Let \(\varepsilon _n\rightarrow 0^+\) and let \((u_0^n,u_1^n)\in \mathcal {A}_{\varepsilon _n} \) be such that \((u_0^n,u_1^n)\rightarrow (u_0,u_1)\) in \(\mathcal {E}_0\). There exists a function \(p_{\varepsilon _n} \in \mathcal {H}_{[0,1]}\) for which there exists a bounded global solution \(u^n(t,x)\) of the problem

$$\begin{aligned} {\left\{ \begin{array}{ll} u^n_{tt}+u^n_t-\Delta u^n = p_{\varepsilon _n}(t,u^n),\\ u^n(t,x) = 0\; \text {for}\; x\in \partial \Omega ,\\ u^n(0,x) = u^n_0(x),\\ u^n_t(0,x) = u^n_1(x). \end{array}\right. } \end{aligned}$$

As in the proof of Proposition 5.13 it follows that for every \(T>0\) there exists \(v\in L^\infty (-T,T;H^1_0)\) with \(v_t\in L^\infty (-T, T; L^2), v_{tt}\in L^\infty (-T,T;H^{-1})\) and \(v\in L^4(-T,T;L^{12})\) such that for the subsequence of \(u^n\) there hold the convergences

$$\begin{aligned}&u^n \rightarrow v \ \ \text {weakly-*}\ \ \text {in}\ \ L^\infty (-T,T;H^1_0),\\&\quad u^n_t \rightarrow v_t \ \ \text {weakly-*}\ \ \text {in}\ \ L^\infty (-T,T;L^2),\\&\quad u^n_{tt} \rightarrow v_{tt} \ \ \text {weakly-*}\ \ \text {in}\ \ L^\infty (-T,T;H^{-1}). \end{aligned}$$

Moreover \((u^n(t),u^n_t(t))\rightarrow (v(t),v_t(t))\) weakly in \(\mathcal {E}_0\) for every \(t\in [-T,T]\) which implies that \((v(0),v_t(0)) = (u_0,u_1)\) and \(u^n(t)\rightarrow v(t)\) strongly in \(L^2\). We will show that v is a weak solution for the autonomous problem, i.e., the problem with \(\varepsilon = 0\). It is enough to show that for every \(w\in L^2(-T,T;H^1_0)\) it holds that

$$\begin{aligned} \lim _{n\rightarrow \infty }\int _{-T}^T (p_{\varepsilon _n}(t,u^n(t)) - f_0(v(t)),w(t)) \, dt =0. \end{aligned}$$

Let us observe that \(\Vert {u^n(t)}\Vert _{C^0}\le R\) and \(\Vert {v(t)}\Vert _{C^0}\le R\) due to the fact that all attractors are bounded uniformly in \(\mathcal {E}_1\) and the Sobolev embedding \(H^2\hookrightarrow C^0\). Hence

$$\begin{aligned}&\left| \int _{-T}^T (p_{\varepsilon _n}(t,u^n(t))-f_0(v(t)),w(t))dt \right| \\&\quad \le \int _{-T}^T|( p_{\varepsilon _n}(t,u^n(t)) -f_0(u^n(t)),w(t))|dt \\&\qquad + \int _{-T}^T|( f_0(u^n(t)) -f_0(v(t)),w(t))|dt \\&\quad \le \sup _{t\in \mathbb {R}}\sup _{|s|\le R} | p_{\varepsilon _n}(t,s) -f_0(s)| \Vert {w}\Vert _{L^1(-T,T;L^2)} \\&\qquad +\sup _{|s|\le R} |f'_0(s)| \left( \int _{-T}^T \Vert {u^n(t)-v(t)}\Vert ^2 dt\right) ^{\frac{1}{2}} \Vert {w}\Vert _{L^2(-T,T;L^2)}. \end{aligned}$$

Due to the fact that by Proposition 3.6 the hypothesis (H2) holds for \(p_\varepsilon \), the first term tends to zero. The second term tends to zero by the Aubin–Lions lemma. Hence, v is a weak solution on the interval \([-T,T]\) with \((v(0),v_t(0))=(u_0,u_1)\). By a diagonal argument we can extend v to a global weak solution. Moreover \(\Vert {(v(t),v_t(t))}\Vert _{\mathcal {E}_1}\le C\) due to the uniform boundedness of the attractors \(\mathcal {A}_\varepsilon \) in \(\mathcal {E}_1\). Hence \(\{(v(t),v_t(t))\}_{t\in \mathbb {R}}\) is a bounded global orbit for the autonomous dynamical system \(\varphi _0\), which implies that \((u_0,u_1)\in \mathcal {A}_0 \) and shows the upper-semicontinuity in the Kuratowski sense. Because all uniform attractors \(\mathcal {A}_\varepsilon \) are uniformly bounded in \(\mathcal {E}_1\), their union \(\cup _{\varepsilon \in [0,1]}\mathcal {A}_\varepsilon \) is relatively compact in \(\mathcal {E}_0\). So, by Lemma 7.3, we also have upper-semicontinuity in Hausdorff sense. \(\square \)