Random attractors for stochastic partly dissipative systems

Abstract

We prove the existence of a global random attractor for a certain class of stochastic partly dissipative systems. These systems consist of a partial differential equation coupled with an ordinary differential equation, both perturbed by additive white noise. The deterministic counterparts of such systems and their long-time behaviour have already been considered, but there is no theory that deals with the stochastic version of partly dissipative systems in its full generality. We also provide several examples illustrating the application of the theory.

Introduction

In this work, we study classes of stochastic partial differential equations (SPDEs), which are part of the general partly dissipative system

$$\begin{aligned} \begin{array}{rcl} {\text {d}}u_1&{}=&{}(d\Delta u_1-h(x,u_1)-f(x,u_1,u_2))~{\text {d}}t+B_1(x,u_1,u_2)~{\text {d}}W_1,\\ {\text {d}}u_2&{}=&{}(-\sigma (x)u_2-g(x,u_1,u_2)) ~{\text {d}}t+B_2(x,u_1,u_2)~{\text {d}}W_2, \end{array} \end{aligned}$$
(1.1)

where \(W_{1,2}\) are cylindrical Wiener processes, \(\sigma ,f,g,h\) are given functions, \(B_{1,2}\) are operator-valued, \(\Delta \) is the Laplace operator, \(d>0\) is a parameter, the equation is posed on a bounded open domain \(D\subset {\mathbb {R}}^n\), \(u_{1,2}=u_{1,2}(x,t)\) are the unknowns for \((x,t)\in D\times [0,T_{\max })\), and \(T_{\max }\) is the maximal existence time. The term partly dissipative highlights the fact that only the first component contains the regularizing Laplace operator. In this work, we analyse the case of additive noise and a certain coupling, more precisely,

$$\begin{aligned} B_1(x,u_1,u_2)=B_1, \ \ B_2(x,u_1,u_2)=B_2,\ \ g(x,u_1,u_2)=g(x,u_1), \end{aligned}$$
(1.2)

where \(B_{1,2}\) are bounded linear operators. A deterministic version of such a system has been analysed by Marion [20]. We are going to use certain assumptions for the reaction terms, which are similar to those used in [20]. The precise technical setting of our work starts in Sect. 2.

The goal of this work is to provide a general theory for stochastic partly dissipative systems and to analyse the long-time behaviour of the solution using the random dynamical systems approach. To this aim, we first show that the solution of our system exists globally-in-time, i.e. one can take \(T_{\max }=+{\infty }\) above. Then we prove the existence of a pullback attractor. To the best of our knowledge, the well-posedness and asymptotic behaviour for such systems (and for other coupled SPDEs and SODEs) have only been explored for special cases, i.e. mainly for the FitzHugh–Nagumo equation, see [4, 24] for solution theory and [2, 19, 31, 32] for long-time behaviour/attractor theory. Here we develop a much more general theory of stochastic partly dissipative systems, motivated by the numerous applications in the natural sciences such as the cubic-quintic Allen–Cahn equation [17] in elasticity. Moreover, unlike several previous works mentioned above, we deal with infinite-dimensional noise that satisfies certain regularity assumptions. These assumptions, combined with the restrictions on the reaction terms, allow us to compute sharp a-priori bounds of the solution, which are used to construct a random absorbing set. Even once the absorbing set has been constructed, we emphasize that we cannot directly apply compact embedding results to obtain the existence of an attractor. This issue arises due to the absence of the regularizing effect of the Laplacian in the second component. To overcome this obstacle, we introduce an appropriate splitting of the solution into two components: a regular one, and one that asymptotically tends to zero. This splitting technique goes back (at least) to Temam [28] and it has also been used in the context of deterministic partly dissipative systems [20] and for a stochastic FitzHugh–Nagumo equation with linear multiplicative noise [33, 35]. The necessary additional technical steps for our setting are provided in Sect. 3.4.
Using the a-priori bounds, we establish the existence of a pullback attractor [9, 14, 25, 26]. Pullback attractors have been studied in several contexts to capture the long-time behaviour of stochastic (partial) differential equations, see for instance [3, 5, 8, 12, 15] and the references therein. In the stochastic case, pullback attractors are random compact sets of phase space that are invariant with respect to the dynamics. They can be viewed as the generalization of non-autonomous attractors for deterministic systems. In the context of coupled SPDEs and SODEs, to the best of our knowledge, random attractors have only been treated for the stochastic FitzHugh–Nagumo equation under various assumptions on the reaction and noise terms: finite-dimensional additive noise on bounded and unbounded domains [32, 33] and the (non-autonomous) FitzHugh–Nagumo equation driven by linear multiplicative noise [1, 19, 35]. Here we provide a general random attractor theory for stochastic partly dissipative systems perturbed by infinite-dimensional additive noise, which goes beyond the FitzHugh–Nagumo system. To this aim we have to employ more general techniques than those used in the references specified above. Furthermore, we emphasize that other dynamical aspects for similar systems have been investigated, e.g. inertial manifolds and master-slave synchronization in reference [7].

We also mention that numerous extensions of our work are conceivable. Evidently, the fully dissipative case is easier from the viewpoint of attractor theory. Hence, our results can be extended in a straightforward way to the case when both components of the SPDE contain a Laplacian. Systems with more than two components but with similar assumptions are likely to pose merely notational problems rather than intrinsic ones. From the point of view of applications it would be meaningful to incorporate non-linear couplings between the PDE and ODE parts. For example, this would allow us to use this theory to analyse various systems derived in chemical kinetics from mass-action laws. However, more complicated non-linear couplings are likely to be far more challenging. Moreover, one could also develop a general framework which allows one to deal with other random influences, e.g. multiplicative noise, or more general Gaussian processes than standard trace-class Wiener processes. Furthermore, it would be interesting to investigate several dynamical aspects of partly dissipative SPDEs such as invariant manifolds or patterns. Naturally, one could also aim to derive upper bounds for the Hausdorff dimension of the random attractor and compare them to the deterministic result given in [20].

This paper is structured as follows: Sect. 2 contains all the preliminaries. More precisely, in Sect. 2.1 we define the system that we are going to analyse and state all the required assumptions. Subsequently, in Sect. 2.2, we clarify the notion of solution that we are interested in. The main contribution of this work is given in Sect. 3. Firstly, some preliminary definitions and results about random attractor theory are summarized in Sect. 3.1. Secondly, we derive the random dynamical system associated to our SPDE system in Sect. 3.2. Thirdly, we prove the existence of a bounded absorbing set for the random dynamical system in Sect. 3.3. Lastly, in Sect. 3.4 it is shown that one can indeed find a compact absorbing set implying the existence of a random attractor. In Sect. 4 we illustrate the theory by several examples arising from applications.

Notation Before we start, we define/recall some standard notations that we will use within this work. When working with vectors we use \((\cdot )^\top \) to denote the transpose, while \(|\cdot |\) denotes the Euclidean norm. In a metric space \((M,d)\) we denote the ball of radius \(r>0\) centred at the origin by

$$\begin{aligned} B(r)=\{x\in M|d(x,0)\le r\}. \end{aligned}$$

We write \(\text {Id}\) for the identity operator/matrix. \(L(U,H)\) denotes the space of bounded linear operators from U to H. \(O^*\) denotes the adjoint operator of a bounded linear operator O. We let \(D\subset {\mathbb {R}}^n\) always be bounded, open, and with regular boundary, where \(n\in {\mathbb {N}}\). \(L^p(D)\), \(p\ge 1\), denotes the usual Lebesgue space with norm \(\Vert \cdot \Vert _p\). Furthermore, \(\langle \cdot ,\cdot \rangle \) denotes the associated scalar-product in \(L^2(D)\). \(C^p(D)\), \(p\in {\mathbb {N}}\cup \{0,\infty \}\), denotes the space of all functions whose derivatives up to order p exist and are continuous. Lastly, for \(k\in {\mathbb {N}}\), \(1\le p\le \infty \) we consider the Sobolev space of order k as

$$\begin{aligned} \displaystyle W^{k,p}(D)=\left\{ u\in L^{p}(D ):D^{\alpha }u\in L^{p}(D)\,\,\forall |\alpha |\leqslant k\right\} , \end{aligned}$$

with multi-index \(\alpha \), where the norm is given by

$$\begin{aligned} \displaystyle \Vert u\Vert _{W^{k,p}(D )}:={{\left\{ \begin{array}{ll}\left( \sum _{|\alpha |\leqslant k} \left\| D^{\alpha }u\right\| _{L^{p}(D)}^{p}\right) ^{\frac{1}{p}}&{}\quad 1\leqslant p<\infty ;\\ \max _{|\alpha |\leqslant k}\left\| D^{\alpha }u\right\| _{L^{\infty }(D )}&{}\quad p=\infty .\end{array}\right. }} \end{aligned}$$

The Sobolev space \(W^{k,p}(D)\) is a Banach space. \(H_0^k(D)\) denotes the space of functions in \(H^k(D)=W^{k,2}(D)\) that vanish at the boundary (in the sense of traces).

Stochastic partly dissipative systems

Basics

Let \(D\subset {\mathbb {R}}^n\) be a bounded open set with regular boundary, set \(H:=L^2(D)\) and let \(U_1,U_2\) be two separable Hilbert spaces. We consider the following coupled, partly dissipative system with additive noise

$$\begin{aligned}&{\text {d}}u_1=(d\Delta u_1-h(x,u_1)-f(x,u_1,u_2))~{\text {d}}t+B_1~{\text {d}}W_1, \end{aligned}$$
(2.1)
$$\begin{aligned}&{\text {d}}u_2=(-\sigma (x)u_2-g(x,u_1)) ~{\text {d}}t+B_2~{\text {d}}W_2, \end{aligned}$$
(2.2)

where \(u_{1,2}=u_{1,2}(x,t)\), \((x,t)\in D\times [0,T]\), \(T>0\), \(W_{1,2}\) are cylindrical Wiener processes on \(U_1\) and \(U_2\), respectively, and \(\Delta \) is the Laplace operator. Furthermore, \(B_1\in L(U_1,H)\), \(B_2\in L(U_2,H)\) and \(d>0\) is a parameter controlling the strength of the diffusion in the first component. The system is equipped with initial conditions

$$\begin{aligned} u_1(x,0)=u_1^0(x), ~~~ u_2(x,0)=u_2^0(x), \end{aligned}$$
(2.3)

and a Dirichlet boundary condition for the first component

$$\begin{aligned} u_1(x,t)=0 ~~~\text {on }\partial D\times [0,T]. \end{aligned}$$
(2.4)

We will denote by A the realization of the Laplace operator with Dirichlet boundary conditions, more precisely we define the operator \(A:{{\mathcal {D}}}(A)\rightarrow L^2(D)\) as \(Au=d\Delta u\) with domain \({{\mathcal {D}}}(A):=H^2(D)\cap H_0^1(D)\subset L^2(D)\). Note that A is a self-adjoint operator that possesses a complete orthonormal system of eigenfunctions \(\{e_k\}_{k=1}^\infty \) of \(L^2(D)\). Within this work we always assume that there exists \(\kappa >0\) such that \(|e_k(x)|^2<\kappa \) for \(k\in {\mathbb {N}}\) and \(x\in D\). This holds for example when \(D=[0,\pi ]^n\). For the deterministic reaction terms appearing in (2.1)–(2.2) we assume that:

Assumption 2.1

(Reaction terms)

  1. (1)

    \(h\in C^2({\mathbb {R}}^n\times {\mathbb {R}})\) and there exist \(\delta _1,\delta _2, \delta _3>0\), \(p>2\) such that

    $$\begin{aligned} \delta _1|u_1|^p-\delta _3\le h(x,u_1)u_1\le \delta _2|u_1|^p+\delta _3. \end{aligned}$$
    (2.5)
  2. (2)

    \(f\in C^2({\mathbb {R}}^n\times {\mathbb {R}}\times {\mathbb {R}})\) and there exist \(\delta _4>0\) and \(0<p_1<p-1\) such that

    $$\begin{aligned} |f(x,u_1,u_2)|\le \delta _4 (1+|u_1|^{p_1}+|u_2|). \end{aligned}$$
    (2.6)
  3. (3)

    \(\sigma \in C^2({\mathbb {R}}^n)\) and there exist \(\delta ,{\tilde{\delta }}>0\) such that

    $$\begin{aligned} \delta \le \sigma (x)\le {\tilde{\delta }}. \end{aligned}$$
    (2.7)
  4. (4)

    \(g\in C^2({\mathbb {R}}^n\times {\mathbb {R}})\) and there exists \(\delta _5>0\) such that

    $$\begin{aligned} |g_u(x,u_1)|\le \delta _5,~~ |g_{x_i}(x,u_1)|\le \delta _5(1+|u_1|),~~~i=1,\ldots ,n. \end{aligned}$$
    (2.8)

In particular, Assumptions 2.1 (1) and (4) imply that there exist \(\delta _7,\delta _8>0\) such that

$$\begin{aligned} |g(x,\xi )|&\le \delta _7(1+|\xi |),~~~~~~~ \text {for all } \xi \in {\mathbb {R}}, ~x\in D, \end{aligned}$$
(2.9)
$$\begin{aligned} |h(x,\xi )|&\le \delta _8(1+|\xi |^{p-1}),~~~~\text {for all } \xi \in {\mathbb {R}}, ~x\in D. \end{aligned}$$
(2.10)

The Assumptions 2.1(1)–(4) are identical to those given in [20], except that in the deterministic case only a lower bound on \(\sigma \) was assumed.
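For concreteness, we note that FitzHugh–Nagumo-type reaction terms (a standard special case; systems of this type are revisited in Sect. 4) satisfy Assumption 2.1: taking \(a\in (0,1)\) and \(\varepsilon ,\gamma >0\),

$$\begin{aligned} h(x,u_1)=u_1^3-(1+a)u_1^2+au_1,\quad f(x,u_1,u_2)=u_2,\quad g(x,u_1)=-\varepsilon u_1,\quad \sigma (x)\equiv \varepsilon \gamma , \end{aligned}$$

one obtains, using Young's inequality in the form \((1+a)|u_1|^3\le \frac{1}{2}|u_1|^4+C\) for some \(C>0\), the two-sided bound \(\frac{1}{2}|u_1|^4-C\le h(x,u_1)u_1\le 2|u_1|^4+C'\), i.e. (2.5) holds with \(p=4\). Moreover, \(|f|\le 1+|u_1|^{p_1}+|u_2|\) for any \(p_1\in (0,3)=(0,p-1)\), the constant \(\sigma \) satisfies (2.7) with \(\delta ={\tilde{\delta }}=\varepsilon \gamma \), and \(|g_u|=\varepsilon \), \(g_{x_i}\equiv 0\) give (2.8) with \(\delta _5=\varepsilon \).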

We always consider an underlying filtered probability space denoted as \((\Omega ,{\mathcal {F}},({\mathcal {F}}_t)_{t\ge 0},{\mathbb {P}})\) that will be specified later on. In order to guarantee certain regularity properties of the noise terms, we make the following additional assumptions:

Assumption 2.2

(Noise)

  1. (1)

    We assume that \(B_2:U_2\rightarrow H\) is a Hilbert-Schmidt operator. In particular, this implies that \(Q_2:=B_2B_2^*\) is a trace class operator and \(B_2W_2\) is a \(Q_2\)-Wiener process.

  2. (2)

    We assume that \(B_1\in L(U_1,H)\) and that the operator \(Q_t\) defined by

    $$\begin{aligned} Q_tu=\int \nolimits _0^t\exp \left( sA\right) Q_1\exp \left( sA^*\right) u~{\text {d}}s, ~~~u\in H, t\ge 0, \end{aligned}$$

where \(Q_1:=B_1B_1^*\), is of trace class. Moreover, \(B_1W_1\) is a \(Q_1\)-Wiener process as well.

  3. (3)

    Let \(U_1=H\). There exists an orthonormal basis \(\{e_k\}_{k=1}^{\infty }\) of H and sequences \(\{\lambda _k\}_{k=1}^{\infty }\) and \(\{\delta _k\}_{k=1}^{\infty }\) such that

    $$\begin{aligned} A e_k=-\lambda _ke_k,\qquad Q_1e_k=\delta _ke_k,~~k\in {\mathbb {N}}. \end{aligned}$$

    Furthermore, we assume that there exists \(\alpha \in \left( 0,\frac{1}{2}\right) \) such that

    $$\begin{aligned} \sum _{k=1}^\infty \delta _k\lambda _k^{2\alpha +1}<\infty . \end{aligned}$$

Assumptions 2.2 guarantee that the stochastic convolution introduced below is a well-defined process with sufficient regularity properties, see Lemmas 3.17 and 3.25. As an example, one could choose \(B_1=(-A)^{-\gamma /2}\) with \(\gamma >\frac{n}{2}-1\) to ensure that Assumptions 2.2 (2)–(3) hold for \(\alpha \) with \(2\alpha < \gamma -\frac{n}{2}+1\), see [10, Chapter 4].

Let us now formulate problem (2.1)–(2.2) as an abstract Cauchy problem. We define the following space

$$\begin{aligned} {\mathbb {H}}:=L^2(D)\times L^2(D), \end{aligned}$$

With the norm \(\Vert (u_1,u_2)^\top \Vert _{{\mathbb {H}}}^2=\Vert u_1\Vert _{2}^2+\Vert u_2\Vert _{2}^2\), this becomes a separable Hilbert space. \(\langle \cdot ,\cdot \rangle _{\mathbb {H}}\) denotes the corresponding scalar product. Furthermore, we let

$$\begin{aligned} {\mathbb {V}}:=H_0^1(D)\times L^2(D), \end{aligned}$$

with norm \(\Vert (u_1,u_2)^\top \Vert _{\mathbb {V}}^2=\Vert u_1\Vert _{H^1(D)}^2+\Vert u_2\Vert _{2}^2\). We define the following linear operator

$$\begin{aligned} {\mathbf {A}}:=\begin{pmatrix}A&{}\quad 0\\ 0&{}\quad -\sigma (x)\end{pmatrix}, \end{aligned}$$

where \({\mathbf {A}}:{{\mathcal {D}}}({\mathbf {A}})\subset {\mathbb {H}}\rightarrow {\mathbb {H}}\) with \({{\mathcal {D}}}({\mathbf {A}}) ={{\mathcal {D}}}(A)\times L^2(D)\). Since all the reaction terms are twice continuously differentiable, they in particular obey the Carathéodory conditions [34]. Thus, the corresponding Nemytskii operator is defined by

$$\begin{aligned} {\mathbf {F}}((u_1,u_2)^\top )(x)&:=\begin{pmatrix} F_1((u_1,u_2)^\top )(x)\\ F_2((u_1,u_2)^\top )(x)\end{pmatrix},\\&:=\begin{pmatrix}-h(x,u_1(x))-f(x,u_1(x),u_2(x))\\ -g(x,u_1(x))\end{pmatrix}, \end{aligned}$$

where \({\mathbf {F}}:{{\mathcal {D}}}({\mathbf {F}})\subset {\mathbb {H}}\rightarrow {\mathbb {H}}\) and \({{\mathcal {D}}}({\mathbf {F}}):={\mathbb {H}}\). By setting

$$\begin{aligned} {\mathbf {W}}:=\begin{pmatrix}W_1\\ W_2\end{pmatrix}, \qquad {\mathbf {B}}:=\begin{pmatrix}B_1\\ B_2\end{pmatrix},\quad \text {and}\quad u:=\begin{pmatrix}u_1\\ u_2\end{pmatrix} \end{aligned}$$

we can rewrite the system (2.1)–(2.2) as an abstract Cauchy problem on the space \({\mathbb {H}}\)

$$\begin{aligned} {\text {d}}u=({\mathbf {A}} u+{\mathbf {F}}(u))~{\text {d}}t+{\mathbf {B}} ~{\text {d}}{\mathbf {W}}, \end{aligned}$$
(2.11)

with initial condition

$$\begin{aligned} u(0)=u^0:=\begin{pmatrix}u_1^0\\ u_2^0\end{pmatrix}. \end{aligned}$$
(2.12)

Mild solutions and stochastic convolution

We are interested in the concept of mild solutions to SPDEs. First, let us note the following decomposition:

$$\begin{aligned} {\mathbf {A}}= \underbrace{\begin{pmatrix}A&{}\quad 0\\ 0&{}\quad 0\end{pmatrix}}_{=:A_1}+ \underbrace{\begin{pmatrix}0&{}\quad 0\\ 0&{}\quad -\sigma (x)\end{pmatrix}}_{=:A_2}. \end{aligned}$$

It is well known that \(A_1\) generates an analytic semigroup on \({\mathbb {H}}\) and \(A_2\) is a bounded multiplication operator on \({\mathbb {H}}\). Hence, \({\mathbf {A}}\) is the generator of an analytic semigroup \(\{\exp \left( t{\mathbf {A}}\right) \}_{t\ge 0}\) on \({\mathbb {H}}\) as well, see [23, Chapter 3, Theorem 2.1]. Also note that A generates an analytic semigroup \(\{\exp \left( tA\right) \}_{t\ge 0}\) on \(L^p(D)\) for every \(p\ge 1\). In particular, we have for \(u\in L^p(D)\) that for every \(\alpha \ge 0\) there exists a constant \(C_\alpha >0\) such that

$$\begin{aligned} \Vert (-A)^\alpha \exp \left( tA\right) u\Vert _p\le C_\alpha t^{-\alpha }\exp \left( a t\right) \Vert u\Vert _p, ~ ~ ~ \text { for all }t>0, \end{aligned}$$

where \(a>0\), see for instance [27, Theorem 37.5]. The domain \({{\mathcal {D}}}((-A)^\alpha )\) can be identified with the Sobolev space \(W^{2\alpha ,p}(D)\) and thus we have in our setting for \(t>0\)

$$\begin{aligned} \Vert \exp \left( tA\right) u\Vert _{W^{\alpha ,p}(D)}\le C_\alpha t^{-\alpha /2}\exp \left( a t\right) \Vert u\Vert _p. \end{aligned}$$
(2.13)
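The smoothing rate above can be made concrete on the spectral level. The following sketch assumes the one-dimensional Dirichlet case \(D=(0,\pi )\), \(n=1\), where A has eigenvalues \(\lambda _k=dk^2\) (an illustration only; the estimates in the text hold in far greater generality). It checks numerically that the \(L^2\) operator norm of \((-A)^\alpha \exp (tA)\), which equals \(\max _k \lambda _k^\alpha e^{-\lambda _k t}\), is dominated by \(\sup _{\lambda >0}\lambda ^\alpha e^{-\lambda t}=(\alpha /(et))^\alpha \), i.e. exactly the rate \(C_\alpha t^{-\alpha }\) (here with \(a=0\)):

```python
import numpy as np

# For u = sum_k c_k e_k with Dirichlet eigenpairs (lam_k, e_k) one has
#   ||(-A)^alpha exp(tA) u||_2^2 = sum_k (lam_k^alpha e^{-lam_k t})^2 c_k^2,
# so the operator norm is the max over the spectrum, bounded by the
# continuous supremum (alpha/(e*t))^alpha, attained at lam = alpha/t.

def op_norm(alpha, t, d=1.0, K=5000):
    lam = d * np.arange(1, K + 1) ** 2           # Dirichlet eigenvalues d*k^2
    return np.max(lam**alpha * np.exp(-lam * t))

def bound(alpha, t):
    return (alpha / (np.e * t)) ** alpha         # sup_{lam>0} lam^alpha e^{-lam t}

for alpha in (0.25, 0.5, 1.0):
    for t in (0.01, 0.1, 1.0):
        assert op_norm(alpha, t) <= bound(alpha, t) + 1e-12
```

Note that the discrete maximum can attain the continuous supremum (e.g. \(\alpha =1\), \(t=0.01\), where \(\lambda =100\) lies in the spectrum), so the constant in the \(t^{-\alpha }\) rate is sharp here.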

Remark 2.3

Omitting the additive noise term in equation (2.11), we are in the deterministic setting of [20]. From there the existence of a global-in-time solution \((u_1,u_2)\in C([0,\infty ),{\mathbb {H}})\) for every initial condition \(u^0\in {\mathbb {H}}\) already follows.

Let us now return to the stochastic Cauchy problem (2.11)–(2.12). We define

Definition 2.4

(Stochastic convolution) The stochastic process defined as

$$\begin{aligned} W_{\mathbf {A}}(t):=\begin{pmatrix}W_{\mathbf {A}}^1(t)\\ W_{\mathbf {A}}^2(t)\end{pmatrix} :=\int \nolimits _0^t\exp \left( (t-s){\mathbf {A}}\right) {\mathbf {B}} ~{\text {d}}{\mathbf {W}}(s), \end{aligned}$$

is called stochastic convolution.

More precisely, we have (see [22, Proposition 3.1])

$$\begin{aligned} W_{\mathbf {A}}(t)&=\int \nolimits _0^t\begin{pmatrix}\exp \left( (t-s)A\right) &{}\quad 0\\ 0&{}\quad \exp \left( -(t-s)\sigma (x)\right) \end{pmatrix} \begin{pmatrix}B_1\\ B_2\end{pmatrix}~{\text {d}}{\mathbf {W}}(s)\\&=\begin{pmatrix}\int \nolimits _0^t\exp \left( (t-s)A\right) B_1~{\text {d}}W_1(s)\\ \int \nolimits _0^t\exp \left( -(t-s)\sigma (x)\right) B_2~{\text {d}}W_2(s)\end{pmatrix}. \end{aligned}$$

This is a well-defined \({\mathbb {H}}\)-valued Gaussian process. Furthermore, Assumptions 2.2 (1) and (2) ensure that \(W_{\mathbf {A}}(t)\) is mean-square continuous and \({\mathcal {F}}_t\)-measurable, see [11].

Remark 2.5

As \(W_{{\mathbf {A}}}\) is a Gaussian process, we can bound all its higher-order moments, i.e. for \(p\ge 1\) we have

$$\begin{aligned} \sup _{t\in [0,T]}{\mathbb {E}}\Vert W_{\mathbf {A}}(t)\Vert _{\mathbb {H}}^p<\infty . \end{aligned}$$
(2.14)

This follows from the Kahane–Khintchine inequality, see [29, Theorem 3.12].

Definition 2.6

(Mild solution) A mean-square continuous, \({\mathcal {F}}_t\)-measurable, \({\mathbb {H}}\)-valued process \(u(t)\), \(t\in [0,T]\), is said to be a mild solution to (2.11)–(2.12) on [0, T] if \({\mathbb {P}}\)-almost surely we have for \(t\in [0,T]\)

$$\begin{aligned} u(t)=\exp \left( t{\mathbf {A}}\right) u^0+\int \nolimits _0^t\exp \left( (t-s){\mathbf {A}}\right) {\mathbf {F}}(u(s)) ~{\text {d}}s+W_{\mathbf {A}}(t). \end{aligned}$$
(2.15)

Under Assumptions 2.1 and 2.2 (1)–(2) a mild solution exists locally-in-time in

$$\begin{aligned} L^2(\Omega ;C([0,T];{\mathbb {H}}))\cap L^2(\Omega ;L^2([0,T];{\mathbb {V}})), \end{aligned}$$

for some \(T>0\), see [11, Theorem 7.7]. Hence, local-in-time existence for our problem is guaranteed by classical SPDE theory.
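The mild formulation (2.15) also suggests a natural numerical scheme: an exponential-Euler step that applies the semigroup to the current state plus the integrated drift and noise increments. The following sketch uses purely illustrative choices, \(D=(0,1)\), \(h(x,u_1)=u_1^3\), \(f=u_2\), \(g(x,u_1)=-u_1\), \(\sigma \equiv 1\), \(d=1\), and small additive noise on a finite-difference grid; none of these choices is prescribed by the text:

```python
import numpy as np

# Exponential-Euler sketch built from the mild formulation (2.15):
#   u_{n+1} = exp(dt*A) (u_n + dt*F(u_n) + dW_n).
rng = np.random.default_rng(0)

N, dt, n_steps, noise = 64, 1e-3, 200, 0.1
x = np.linspace(0.0, 1.0, N + 2)[1:-1]           # interior grid points
dx = x[1] - x[0]

# finite-difference Dirichlet Laplacian; its eigendecomposition lets us
# apply the semigroup exp(dt*A) exactly on the grid
A = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
     + np.diag(np.ones(N - 1), -1)) / dx**2
lam, V = np.linalg.eigh(A)                       # A = V diag(lam) V^T
S = V @ np.diag(np.exp(dt * lam)) @ V.T          # semigroup exp(dt*A)

u1, u2 = np.sin(np.pi * x), np.zeros(N)          # initial conditions

for _ in range(n_steps):
    F1 = -u1**3 - u2                             # -h(x,u1) - f(x,u1,u2)
    F2 = u1                                      # -g(x,u1) with g = -u1
    dW1 = noise * np.sqrt(dt) * rng.standard_normal(N)
    dW2 = noise * np.sqrt(dt) * rng.standard_normal(N)
    u1 = S @ (u1 + dt * F1 + dW1)                # PDE component
    u2 = np.exp(-dt) * (u2 + dt * F2 + dW2)      # ODE component, sigma = 1

assert np.all(np.isfinite(u1)) and np.all(np.isfinite(u2))
```

Observe the asymmetry of the system in the scheme as well: the first component is smoothed by the heat semigroup, while the second is merely damped by the factor \(e^{-\sigma \,dt}\).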

Random attractor

Preliminaries

We now recall some basic definitions related to random attractors. For more information the reader is referred to the sources given in the introduction.

Definition 3.1

(Metric dynamical system) Let \((\Omega , {\mathcal {F}},{\mathbb {P}})\) be a probability space and let \(\theta =\{\theta _t:\Omega \rightarrow \Omega \}_{t\in {\mathbb {R}}}\) be a family of \({\mathbb {P}}\)-preserving transformations (i.e. \(\theta _t{\mathbb {P}}={\mathbb {P}}\) for \(t\in {\mathbb {R}}\)), which satisfy for \(t,s\in {\mathbb {R}}\) that

  1. (1)

    \((t,\omega )\mapsto \theta _t\omega \) is measurable,

  2. (2)

    \(\theta _0=\text {Id}\),

  3. (3)

    \(\theta _{t+s}=\theta _t\circ \theta _s\).

Then \((\Omega ,{\mathcal {F}}, {\mathbb {P}},\theta )\) is called a metric dynamical system.

The metric dynamical system describes the dynamics of the noise.

Definition 3.2

(Random dynamical system) Let \(({{\mathcal {V}}},\Vert \cdot \Vert )\) be a separable Banach space. A random dynamical system (RDS) with time domain \({\mathbb {R}}_+\) on \(({{\mathcal {V}}},\Vert \cdot \Vert )\) over \(\theta \) is a measurable map

$$\begin{aligned} \varphi :{\mathbb {R}}_+\times {{\mathcal {V}}}\times \Omega \rightarrow {{\mathcal {V}}}; ~~~ (t,v,\omega )\mapsto \varphi (t,\omega )v \end{aligned}$$

such that \(\varphi (0,\omega )=\text {Id}_{{{\mathcal {V}}}}\) and

$$\begin{aligned} \varphi (t+s,\omega )=\varphi (t,\theta _s\omega )\circ \varphi (s,\omega ) \end{aligned}$$

for all \(s,t\in {\mathbb {R}}_+\) and for all \(\omega \in \Omega \). We say that \(\varphi \) is a continuous or differentiable RDS if \(v\mapsto \varphi (t,\omega )v\) is continuous or differentiable for all \(t\in {\mathbb {R}}_+\) and every \(\omega \in \Omega \).

We summarize some further definitions relevant for the theory of random attractors.

Definition 3.3

(Random set) A set-valued map \({{\mathcal {K}}}:\Omega \rightarrow 2^{{\mathcal {V}}}\) is said to be measurable if for all \(v\in {{\mathcal {V}}}\) the map \(\omega \mapsto d(v,{{\mathcal {K}}}(\omega ))\) is measurable. Here, \(d({{\mathcal {A}}},{{\mathcal {B}}}) =\sup _{v\in {{\mathcal {A}}}}\inf _{{\tilde{v}}\in {{\mathcal {B}}}}\Vert v-{\tilde{v}}\Vert \) for \({{\mathcal {A}}},{{\mathcal {B}}}\in 2^{{\mathcal {V}}}\), \({{\mathcal {A}}},{{\mathcal {B}}}\ne \emptyset \) and \(d(v,{{\mathcal {B}}})=d(\{v\},{{\mathcal {B}}})\). A measurable set-valued map is called a random set.

Definition 3.4

(Omega-limit set) For a random set \({{\mathcal {K}}}\) we define the omega-limit set to be

$$\begin{aligned} \Omega _{{\mathcal {K}}}(\omega ):=\bigcap _{T\ge 0}\overline{\bigcup _{t\ge T}\varphi (t,\theta _{-t}\omega ) {{\mathcal {K}}}(\theta _{-t}\omega )}. \end{aligned}$$

\(\Omega _{{\mathcal {K}}}(\omega )\) is closed by definition.

Definition 3.5

(Attracting and absorbing set) Let \({{\mathcal {A}}},{{\mathcal {B}}}\) be random sets and let \(\varphi \) be a RDS.

  • \({{\mathcal {B}}}\) is said to attract \({{\mathcal {A}}}\) for the RDS \(\varphi \), if

    $$\begin{aligned} d(\varphi (t,\theta _{-t}\omega ){{\mathcal {A}}}(\theta _{-t}\omega ),{{\mathcal {B}}}(\omega ))\rightarrow 0~~ \text {for }t\rightarrow \infty . \end{aligned}$$
  • \({{\mathcal {B}}}\) is said to absorb \({{\mathcal {A}}}\) for the RDS \(\varphi \), if there exists a (random) absorption time \(t_{{\mathcal {A}}}(\omega )\) such that for all \(t\ge t_{{\mathcal {A}}}(\omega )\)

    $$\begin{aligned} \varphi (t,\theta _{-t}\omega ){{\mathcal {A}}}(\theta _{-t}\omega )\subset {{\mathcal {B}}}(\omega ). \end{aligned}$$
  • Let \(\mathbf {{\mathcal {D}}}\) be a collection of random sets (of non-empty subsets of \({{\mathcal {V}}}\)), which is closed with respect to set inclusion. A set \({{\mathcal {B}}}\in \mathbf {{\mathcal {D}}}\) is called \(\mathbf {{\mathcal {D}}}\)-absorbing/\({{\mathcal {D}}}\)-attracting for the RDS \(\varphi \), if \({{\mathcal {B}}}\) absorbs/attracts all random sets in \(\mathbf {{\mathcal {D}}}\).

Remark 3.6

Throughout this work we use a convenient criterion to derive the existence of an absorbing set. Let \({{\mathcal {A}}}\) be a random set. If for every \(v\in {{\mathcal {A}}}(\theta _{-t}\omega )\) we have

$$\begin{aligned} \limsup _{t\rightarrow \infty } \Vert \varphi (t,\theta _{-t}\omega ,v)\Vert \le \rho (\omega ), \end{aligned}$$
(3.1)

where \(\rho (\omega )>0\) for every \(\omega \in \Omega \), then the ball centred at 0 with radius \(\rho (\omega )+\epsilon \) for some \(\epsilon >0\), i.e. \({{\mathcal {B}}}(\omega ):= B(0,\rho (\omega )+\epsilon )\), absorbs \({\mathcal {A}}\).

Definition 3.7

(Tempered set) A random set \({{\mathcal {A}}}\) is called tempered provided for \({\mathbb {P}}\)-a.e. \(\omega \in \Omega \)

$$\begin{aligned} \lim _{t\rightarrow \infty } \exp \left( -\beta t\right) d({{\mathcal {A}}}(\theta _{-t}\omega ))=0~~~ \text { for all }\beta >0, \end{aligned}$$

where \(d({{\mathcal {A}}})=\sup _{a\in {{\mathcal {A}}}}\Vert a\Vert \). We denote by \({\mathcal {T}}\) the set of all tempered subsets of \({{\mathcal {V}}}\).

Definition 3.8

(Tempered random variable) A real-valued random variable X on \((\Omega ,{\mathcal {F}},{\mathbb {P}},\theta )\) is called tempered if there is a set of full \({\mathbb {P}}\)-measure such that for all \(\omega \) in this set we have

$$\begin{aligned} \lim _{t\rightarrow \pm \infty }\frac{\log \left| X(\theta _t\omega )\right| }{|t|}=0. \end{aligned}$$
(3.2)

Hence a random variable X is tempered when the stationary random process \(X(\theta _t\omega )\) grows sub-exponentially.

Remark 3.9

A sufficient condition for a positive random variable X to be tempered is that (cf. [3, Proposition 4.1.3])

$$\begin{aligned} {\mathbb {E}}\left( \sup _{t\in [0,1]}X(\theta _t\omega )\right) <\infty . \end{aligned}$$
(3.3)

If \(\theta \) is an ergodic shift, then the only alternative to (3.2) is

$$\begin{aligned} \lim _{t\rightarrow \pm \infty }\frac{\log \left| X(\theta _t\omega )\right| }{|t|}=\infty , \end{aligned}$$

i.e., the random process \(X(\theta _t\omega )\) either grows sub-exponentially or blows up at least exponentially.

Definition 3.10

(Random attractor) Suppose \(\varphi \) is a RDS such that there exists a random compact set \({{\mathcal {A}}}\in {\mathcal {T}}\) which satisfies for any \(\omega \in \Omega \)

  • \({{\mathcal {A}}}\) is invariant, i.e., \(\varphi (t,\omega ){{\mathcal {A}}}(\omega )={{\mathcal {A}}}(\theta _t\omega )\) for all \(t\ge 0\).

  • \({{\mathcal {A}}}\) is \({\mathcal {T}}\)-attracting.

Then \({{\mathcal {A}}}\) is said to be a \({\mathcal {T}}\)-random attractor for the RDS.

Theorem 3.11

([9, 25]) Let \(\varphi \) be a continuous RDS and assume there exists a compact random set \({{\mathcal {B}}}\in {\mathcal {T}}\) that absorbs every \({{\mathcal {D}}}\in {\mathcal {T}}\), i.e. \({\mathcal {B}}\) is \({\mathcal {T}}\)-absorbing. Then there exists a unique \({\mathcal {T}}\)-random attractor \({{\mathcal {A}}}\), which is given by

$$\begin{aligned} {\mathcal {A}}(\omega )=\overline{\bigcup _{{\mathcal {D}} \in {{\mathcal {T}}}} \Omega _{{\mathcal {D}}}(\omega )}. \end{aligned}$$

We will use the above theorem to show the existence of a random attractor for the partly dissipative system at hand.

Associated RDS

We will now define the RDS corresponding to (2.11)–(2.12). We consider \({\mathcal {V}}={\mathbb {H}}:=L^2(D)\times L^2(D)\) and \({{\mathcal {T}}}\) is the set of all tempered subsets of \({\mathbb {H}}\). In the sequel, we consider the fixed canonical probability space \((\Omega ,{\mathcal {F}},{\mathbb {P}})\) corresponding to a two-sided Wiener process, more precisely

$$\begin{aligned} \Omega :=&\left\{ \omega =(\omega _1,\omega _2): \omega _1,\omega _2 \in C({\mathbb {R}},L^2(D)), \omega (0)=0\right\} , \end{aligned}$$

endowed with the compact-open topology. The \(\sigma \)-algebra \({\mathcal {F}}\) is the Borel \(\sigma \)-algebra on \(\Omega \) and \({\mathbb {P}}\) is the distribution of the trace class Wiener process \({\tilde{W}}(t):=({{\tilde{W}}}_1(t),{{\tilde{W}}}_2(t))=(B_1W_1(t),B_2W_2(t))\), where we recall that \(B_1\) and \(B_2\) fulfil Assumptions 2.2. We identify the elements of \(\Omega \) with the paths of these Wiener processes, more precisely

$$\begin{aligned} {{\tilde{W}}}(t,\omega ):=({{\tilde{W}}}_1(t,\omega _{1}),{{\tilde{W}}}_2(t,\omega _{2}))= (\omega _1(t),\omega _2(t))=:\omega (t), \text{ for } \omega \in \Omega . \end{aligned}$$
(3.4)

Furthermore, we introduce the Wiener shift, namely

$$\begin{aligned} \theta _t\omega (\cdot ):=\omega (\cdot +t)-\omega (t), ~\text{ for } \omega \in \Omega \text{ and } t\in {\mathbb {R}}. \end{aligned}$$
(3.5)
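The group property \(\theta _{t+s}=\theta _t\circ \theta _s\), verified analytically below, can also be checked path-wise on sampled data. The following sketch (a discrete illustration, not part of the construction) represents a one-sided path by its values on a uniform time grid and tests the identity for grid-multiple times:

```python
import numpy as np

# Path-wise check of the group property of the Wiener shift (3.5):
#   theta_{t+s} omega = theta_t (theta_s omega).
rng = np.random.default_rng(1)

N, dt = 1000, 0.01
W = np.concatenate([[0.0], np.cumsum(np.sqrt(dt) * rng.standard_normal(N))])
# W[i] samples omega(i*dt) for i = 0, ..., N, with omega(0) = 0

def shift(path, k):
    """theta_{k*dt} on grid paths: (theta_t omega)(s) = omega(s+t) - omega(t),
    returned on the (shortened) grid s = 0, dt, 2*dt, ..."""
    return path[k:] - path[k]

t_idx, s_idx = 137, 261
lhs = shift(W, t_idx + s_idx)                    # theta_{t+s} omega
rhs = shift(shift(W, s_idx), t_idx)              # theta_t (theta_s omega)
assert np.allclose(lhs, rhs)
assert np.allclose(shift(W, 0), W)               # theta_0 = Id
```

The assertion holds exactly, since `shift(shift(W, s), t)[j] = W[s+t+j] - W[s+t]` coincides term by term with `shift(W, t+s)[j]`, mirroring the cancellation in the analytic verification.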

Then \(\theta :{\mathbb {R}}\times \Omega \rightarrow \Omega \) is a measure-preserving transformation on \(\Omega \), i.e. \(\theta _{t}{\mathbb {P}}={\mathbb {P}}\), for \(t\in {\mathbb {R}}\). Furthermore, \(\theta _0\omega (s)=\omega (s)-\omega (0)=\omega (s)\) and \(\theta _{t+s}\omega (r) =\omega (r+t+s)-\omega (t+s)=\theta _t(\omega (r+s)-\omega (s))=\theta _t(\theta _s\omega (r))\). Hence, \((\Omega ,{\mathcal {F}},{\mathbb {P}},\theta )\) is a metric dynamical system. Next, we consider the following equations

$$\begin{aligned} {\text {d}}z_1&=Az_1~{\text {d}}t+~{\text {d}}\omega _1, \end{aligned}$$
(3.6)
$$\begin{aligned} {\text {d}}z_2&= -\sigma (x)z_2~{\text {d}}t + {\text {d}}\omega _2. \end{aligned}$$
(3.7)

The stationary solutions of (3.6)–(3.7) are given by

$$\begin{aligned} (t,\omega )\mapsto z_{1}(\theta _{t}\omega ) \text{ and } (t,\omega )\mapsto z_{2}(\theta _t\omega ), \end{aligned}$$

where

$$\begin{aligned} z_1(\theta _{t}\omega )&=\int \nolimits _{-\infty }^{t}e^{(t-s)A }~{\text {d}}\omega _1(s) =\int \nolimits _{-\infty }^{0}e^{-s A}~{\text {d}}\theta _{t}\omega _{1}(s), \\ z_2(\theta _{t}\omega )&= \int \nolimits _{-\infty }^{t} e^{-(t-s)\sigma (x)}~ {\text {d}}\omega _{2}(s) =\int \nolimits _{-\infty }^{0} e^{s \sigma (x)}~{\text {d}}\theta _{t}\omega _{2}(s). \end{aligned}$$

Here, we observe that for \(t=0\)

$$\begin{aligned} z_{1}(\omega )=\int \nolimits _{-\infty }^{0} e^{-s A}~{\text {d}}\omega _{1}(s), ~ ~ ~ z_{2}(\omega ) =\int \nolimits _{-\infty }^{0} e^{s\sigma (x)}~{\text {d}}\omega _{2}(s). \end{aligned}$$
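To get a feeling for \(z_2\), note that for constant \(\sigma (x)\equiv \sigma \) and scalar noise it reduces to a stationary Ornstein–Uhlenbeck process with variance \(1/(2\sigma )\). A Monte Carlo sanity check of this scalar reduction (an illustration only; the object in the text is infinite-dimensional), using the exact one-step OU transition:

```python
import numpy as np

# For constant sigma, z_2(theta_t omega) = int_{-inf}^t e^{-(t-s) sigma} dW(s)
# is a stationary Ornstein-Uhlenbeck process with variance 1/(2*sigma).
rng = np.random.default_rng(2)

sigma, dt, n_steps, n_paths = 2.0, 0.05, 400, 20000
a = np.exp(-sigma * dt)                          # exact decay factor
q = np.sqrt((1.0 - a**2) / (2.0 * sigma))        # exact transition noise std

z = np.sqrt(1.0 / (2.0 * sigma)) * rng.standard_normal(n_paths)  # stationary start
for _ in range(n_steps):
    z = a * z + q * rng.standard_normal(n_paths)

assert abs(z.var() - 1.0 / (2.0 * sigma)) < 0.02   # stationary variance 0.25
```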

Now consider the Doss–Sussmann transformation \(v(t)=u(t)-z(\theta _t \omega )\), where \(v(t)=(v_1(t),v_2(t))^\top \), \(z(\omega )=(z_1(\omega _{1}),z_2(\omega _{2}))^\top \) and \(u(t)=(u_1(t),u_2(t))^\top \) is a solution to the problem (2.1)–(2.4). Then v(t) satisfies

$$\begin{aligned} \frac{{\text {d}}v}{{\text {d}}t}&={\mathbf {A}} v+{\mathbf {F}}(v+z(\theta _t\omega )). \end{aligned}$$
(3.8)
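Formally, (3.8) follows by subtracting the equations (3.6)–(3.7) for the stationary process, written compactly as \({\text {d}}z={\mathbf {A}}z~{\text {d}}t+{\text {d}}\omega \), from the abstract problem (2.11), using the identification (3.4) of \(\omega \) with the paths of \({\mathbf {B}}{\mathbf {W}}\):

$$\begin{aligned} {\text {d}}v={\text {d}}u-{\text {d}}z(\theta _t\omega )=({\mathbf {A}}u+{\mathbf {F}}(u))~{\text {d}}t+{\mathbf {B}}~{\text {d}}{\mathbf {W}}-\left( {\mathbf {A}}z(\theta _t\omega )~{\text {d}}t+{\mathbf {B}}~{\text {d}}{\mathbf {W}}\right) =({\mathbf {A}}v+{\mathbf {F}}(v+z(\theta _t\omega )))~{\text {d}}t. \end{aligned}$$

The stochastic differentials cancel, and the noise enters only as the path-wise perturbation \(z(\theta _t\omega )\).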

Component-wise, this reads as

$$\begin{aligned} \frac{{\text {d}}v_1(t)}{{\text {d}}t}&=d\Delta v_1(t)-h(x,v_1(t)+z_1(\theta _t\omega ))\nonumber \\&\quad -f(x,v_1(t)+z_1(\theta _t\omega ),v_2(t)+z_2(\theta _t\omega )), \end{aligned}$$
(3.9)
$$\begin{aligned} \frac{{\text {d}}v_2(t)}{{\text {d}}t}&=-\sigma (x)v_2(t)-g(x,v_1(t)+z_1(\theta _t\omega )) . \end{aligned}$$
(3.10)

In the equations above no stochastic differentials appear, hence they can be considered path-wise, i.e., for every \(\omega \) instead of just for \({\mathbb {P}}\)-almost every \(\omega \). For every \(\omega \), (3.8) is a deterministic equation, where \(z(\theta _t\omega )\) can be regarded as a time-continuous perturbation. In particular, [6] guarantees that for all \(v^0=(v_1^0,v_2^0)^\top \in {\mathbb {H}}\) there exists a solution \(v(\cdot ,\omega ,v^0)\in C([0,\infty ),{\mathbb {H}})\) with \(v_1(0,\omega ,v_1^0)=v_1^0\), \(v_2(0,\omega ,v_2^0)=v_2^0\). Moreover, the mapping \({\mathbb {H}} \ni v_{0}\mapsto v(t,\omega ,v_{0})\in {\mathbb {H}}\) is continuous. Now, let

$$\begin{aligned} u_1(t,\omega ,u_1^0)&=v_1(t,\omega ,u_1^0-z_1(\omega ))+z_1(\theta _t\omega ), \\ u_2(t,\omega ,u_2^0)&=v_2(t,\omega ,u_2^0-z_2(\omega ))+z_2(\theta _t\omega ). \end{aligned}$$

Then \(u(t,\omega ,u^0)=(u_1(t,\omega ,u_1^0),u_2(t,\omega ,u_2^0))^\top \) is a solution to (2.1)–(2.4). In particular, we can conclude at this point that (2.1)–(2.4) has a global-in-time solution which belongs to \(C([0,\infty );{\mathbb {H}})\); see Remark 2.3. We define the corresponding solution operator \(\varphi :{\mathbb {R}}^+\times \Omega \times {\mathbb {H}}\rightarrow {\mathbb {H}}\) as

$$\begin{aligned} \varphi (t,\omega ,(u_1^0,u_2^0)):=(u_1(t,\omega ,u_1^0),u_2(t,\omega ,u_2^0)), \end{aligned}$$
(3.11)

for all \((t,\omega ,(u_1^0,u_2^0))\in {\mathbb {R}}^{+}\times \Omega \times {\mathbb {H}}\). Now, \(\varphi \) is a continuous RDS associated to our stochastic partly dissipative system. In particular, the cocycle property follows immediately from the mild formulation. In the following, we will prove the existence of a global random attractor for this RDS. Due to conjugacy, see [9, 25], this automatically gives us a global random attractor for the stochastic partly dissipative system (2.1)–(2.4).
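To make the path-wise point of view concrete, the following sketch (our own illustration, not part of the paper's argument) checks the Doss–Sussmann transformation on a scalar toy SDE; the stable drift \(a=-1\), the sine reaction term and the step size are illustrative stand-ins for \({\mathbf {A}}\) and \({\mathbf {F}}\). With the same Euler step and the same Brownian increments, the identity \(u=v+z\) is preserved exactly along the discretization.

```python
import math, random

# Scalar toy model of the Doss-Sussmann transformation: u solves the SDE
# du = (a*u + F(u)) dt + dW, z solves the linear SDE dz = a*z dt + dW, and
# v = u - z then solves the pathwise random ODE v' = a*v + F(v + z).
random.seed(0)
a = -1.0
F = math.sin                    # illustrative bounded reaction term
dt, n = 1e-3, 5000
u, z, v = 1.0, 0.0, 1.0         # v(0) = u(0) - z(0)
for _ in range(n):
    dW = random.gauss(0.0, math.sqrt(dt))
    v += (a*v + F(v + z)) * dt  # deterministic ODE; z enters as a parameter
    u += (a*u + F(u)) * dt + dW
    z += a*z*dt + dW
assert abs(u - (v + z)) < 1e-9  # transformation holds at every grid point
```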

Bounded absorbing set

In the following we will prove the existence of a bounded absorbing set for the RDS (3.11). In the calculations we will repeatedly make use of certain classical deterministic results. For completeness, and as an aid for following the calculations later on, we recall these results here.

Lemma 3.12

(\(\varepsilon \)-Young inequality) For \(x,y\in {\mathbb {R}}\), \(\varepsilon >0\), \({{\tilde{p}}}, {{\tilde{q}}}>1\), \(\frac{1}{{{\tilde{p}}}}+\frac{1}{{{\tilde{q}}}}=1\) we have

$$\begin{aligned} |xy|\le \varepsilon |x|^{{{\tilde{p}}}}+\frac{({{\tilde{p}}} \varepsilon )^{1-{{\tilde{q}}}}}{{{\tilde{q}}}}|y|^{{{\tilde{q}}}}. \end{aligned}$$
(3.12)

Lemma 3.13

(Gronwall’s inequality) Assume that \(\varphi \) is non-negative and absolutely continuous and that \(\alpha \) and \(\beta \) are locally integrable functions. If

$$\begin{aligned} \varphi '(t)\le \alpha (t)+\beta (t)\varphi (t), \end{aligned}$$
(3.13)

then

$$\begin{aligned} \varphi (t)\le \varphi (t_0)\exp \left( \int \nolimits _{t_0}^t\beta (\tau )d\tau \right) + \int \nolimits _{t_0}^t\alpha (s)\exp \left( \int \nolimits _s^t\beta (\tau )d\tau \right) ds, ~ ~ ~ t\ge t_0. \end{aligned}$$
(3.14)
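As a quick sanity check (our own toy computation, with the constant data \(\alpha \equiv 2\), \(\beta \equiv -1\), \(\varphi (0)=5\) chosen purely for illustration), one can integrate a function satisfying the differential inequality (3.13) with strict slack and compare it against the bound (3.14), which here reduces to \(\varphi (0)e^{\beta t}+\alpha (1-e^{\beta t})/(-\beta )\):

```python
import math

# Gronwall check: phi' = alpha + beta*phi - slack <= alpha + beta*phi,
# integrated by forward Euler, versus the closed-form Gronwall bound
# for constant alpha, beta and t0 = 0.
alpha, beta, slack = 2.0, -1.0, 0.5
phi, t = 5.0, 0.0
dt, n = 1e-4, 30000            # integrate up to t = 3
for _ in range(n):
    phi += (alpha + beta*phi - slack) * dt
    t += dt
bound = 5.0*math.exp(beta*t) + alpha*(1.0 - math.exp(beta*t))/(-beta)
assert phi <= bound            # the Gronwall bound dominates the solution
```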

Lemma 3.14

(Uniform Gronwall Lemma [28, Lemma 1.1]) Let g, h, y be positive locally integrable functions on \((t_0,\infty )\) such that \(y'\) is locally integrable on \((t_0,\infty )\) and which satisfy

$$\begin{aligned}&\frac{{\text {d}}y}{{\text {d}}t}\le gy+h, ~~~~ \text { for }t\ge t_0,\\&\int \nolimits _t^{t+r}g(s){\text {d}}s\le a_1,~~~\int \nolimits _t^{t+r}h(s){\text {d}}s\le a_2, ~~~ \int \nolimits _t^{t+r}y(s){\text {d}}s\le a_3~~~\text { for }t\ge t_0, \end{aligned}$$

where \(r,a_1,a_2,a_3\) are positive constants. Then

$$\begin{aligned} y(t+r)\le \left( \frac{a_3}{r}+a_2\right) \exp \left( a_1\right) ,~~~~\forall t\ge t_0. \end{aligned}$$

Lemma 3.15

(Convexity inequality) Let \(p>1\) and \(f,g\in {\mathbb {R}}\); then, by convexity of \(x\mapsto |x|^{p}\),

$$\begin{aligned} |f+g|^p\le 2^{p-1}(|f|^p+|g|^p). \end{aligned}$$

Lemma 3.16

(Poincaré’s inequality) Let \(1\le p < \infty \) and let \(D\subset {\mathbb {R}}^n\) be a bounded open subset. Then there exists a constant \(c= c(D,p)\) such that for every function \(u\in W_0^{1,p}(D)\)

$$\begin{aligned} \Vert u\Vert _{p}\le c\Vert \nabla u\Vert _{p}. \end{aligned}$$
(3.15)

Having recalled the relevant deterministic preliminaries, we can now proceed with the main line of our argument. For the following result about the stochastic convolutions Assumption 2.2 (3) is crucial.

Lemma 3.17

Suppose Assumptions 2.1 and 2.2 hold. Then for every \(p\ge 1\)

$$\begin{aligned} \Vert z_1(\omega )\Vert _p^p \text { and }\Vert z_2(\omega )\Vert _2^2 \end{aligned}$$

are tempered random variables.

Proof

Using \(0<\delta \le \sigma (x)\le {\tilde{\delta }}\) and the Burkholder–Davis–Gundy inequality we have

$$\begin{aligned}&{\mathbb {E}}\left( \sup _{t\in [0,1]}\Vert z_2(\theta _t\omega )\Vert _2^2\right) \\&\quad ={\mathbb {E}}\left( \sup _{t\in [0,1]}\left\| \int \nolimits _{-\infty }^t\exp \left( -(t-s)\sigma (x)\right) ~ {\text {d}}\omega _2(s)\right\| _2^2\right) \\&\quad ={\mathbb {E}}\left( \sup _{t\in [0,1]}\int \nolimits _D\exp \left( -2t\sigma (x)\right) \left| \int \nolimits _{-\infty }^t\exp \left( s\sigma (x)\right) ~{\text {d}}\omega _2(s)\right| ^2~{\text {d}}x\right) \\&\quad \le {\mathbb {E}}\left( \sup _{t\in [0,1]}\exp \left( -2t\delta \right) \int \nolimits _D\left| \int \nolimits _{-\infty }^t\exp \left( s\sigma (x)\right) ~{\text {d}}\omega _2(s)\right| ^2~{\text {d}}x\right) \\&\quad \le {\mathbb {E}}\left( \sup _{t\in [0,1]}\left\| \int \nolimits _{-\infty }^t\exp \left( s\sigma (x)\right) ~ {\text {d}}\omega _2(s)\right\| _2^2\right) \\&\quad \le C{\mathbb {E}}\left( \int \nolimits _{-\infty }^1\Vert \exp \left( s\sigma (x)\right) \Vert _2^2~{\text {d}}s\right) \\&\quad \le C|D|\int \nolimits _{-\infty }^1\exp \left( 2s{\tilde{\delta }}\right) ~{\text {d}}s= \frac{C|D|}{2{\tilde{\delta }}} \exp \left( 2{\tilde{\delta }}\right) \\&\quad <\infty . \end{aligned}$$

The temperedness of \(\Vert z_2(\omega )\Vert _2^2\) then follows directly using Remark 3.9. Now, we consider the random variable \(\Vert z_1(\omega )\Vert _p^p\). Note that using the so-called factorization method we have for \((x,t)\in D\times [0,T]\) and \(\alpha \in (0,1/2)\) (see [11, Ch. 5.3])

$$\begin{aligned} z_1(x,\theta _t\omega )=\frac{\sin (\pi \alpha )}{\pi }\int \nolimits _{-\infty }^t\exp \left( (t-\tau )A\right) (t-\tau )^{\alpha -1} Y(x,\tau )~{\text {d}}\tau , \end{aligned}$$
(3.16)

with

$$\begin{aligned} Y(x,\tau )&=\int \nolimits _0^\tau \exp \left( (\tau -s)A\right) (\tau -s)^{-\alpha } B_1~{\text {d}}W_1(x,s)\\&=\sum _{k=1}^\infty \int \nolimits _0^\tau \exp \left( (\tau -s)A\right) (\tau -s)^{-\alpha }B_1e_k(x)d\beta _k(s)\\&=\sum _{k=1}^\infty \int \nolimits _0^\tau \exp \left( -(\tau -s)\lambda _k\right) (\tau -s)^{-\alpha }\sqrt{\delta _k} e_k(x)d\beta _k(s), \end{aligned}$$

where we have used the formal representation \(W_1(x,s)=\sum _{k=1}^\infty \beta _k(s)e_k(x)\) of the cylindrical Wiener process, with \(\{\beta _k\}_{k=1}^\infty \) being a sequence of mutually independent real-valued Brownian motions. \(Y(x,\tau )\) is a real-valued Gaussian random variable with mean zero and variance

$$\begin{aligned}&\text {Var}(Y(x,\tau ))={\mathbb {E}}\left[ |Y(x,\tau )|^2\right] \\&\quad ={\mathbb {E}}\left[ \sum _{k=1}^\infty \left( \int \nolimits _0^\tau \exp \left( -(\tau -s)\lambda _k\right) (\tau -s)^{-\alpha } \sqrt{\delta _k}~{\text {d}}\beta _k(s)\right) ^2|e_k(x)|^2\right] \\&\quad =\sum _{k=1}^\infty \delta _k |e_k(x)|^2 {\mathbb {E}}\left[ \left( \int \nolimits _0^\tau \exp \left( -(\tau -s)\lambda _k\right) (\tau -s)^{-\alpha }~{\text {d}}\beta _k(s)\right) ^2\right] \\&\quad =\sum _{k=1}^\infty \delta _k |e_k(x)|^2 \int \nolimits _0^\tau \exp \left( -2s\lambda _k\right) s^{-2\alpha }~{\text {d}}s, \end{aligned}$$

where we have used Parseval’s identity and the Itô isometry. Our assumption on the boundedness of the eigenfunctions \(\{e_k\}_{k=1}^\infty \) yields together with Assumption 2.2 (3) that

$$\begin{aligned} \text {Var}(Y(x,\tau ))&<\sum _{k=1}^\infty \delta _k \kappa ^2\int \nolimits _0^\infty \exp \left( -2s\lambda _k\right) s^{-2\alpha }~{\text {d}}s\\&=\kappa ^22^{2\alpha -1}\Gamma (1-2\alpha )\sum _{k=1}^\infty \delta _k \lambda _k^{2\alpha -1}<\infty . \end{aligned}$$

Hence, \({\mathbb {E}}\left[ \left| Y(x,\tau )\right| ^{2m}\right] \le C_m\) for some \(C_m>0\) and every \(m\ge 1\) (note that all odd moments of a centered Gaussian random variable are zero). Thus we have

$$\begin{aligned} {\mathbb {E}}\left[ \int \nolimits _0^T\int _D|Y(x,\tau )|^{2m}{\text {d}}x{\text {d}}\tau \right] \le TC_m|D|, \end{aligned}$$

i.e., in particular for all \(p\ge 1\) we have \(Y\in L^{p}(D\times [0,T])\) \({\mathbb {P}}\)-a.s. We now observe

$$\begin{aligned}&\Vert z_1(\theta _t\omega )\Vert _{W^{\alpha ,p}(D)}\\&\quad \le \frac{\sin (\pi \alpha )}{\pi } \int \nolimits _{-\infty }^t(t-\tau )^{\alpha -1}\Vert \exp \left( (t-\tau )A\right) Y(\cdot ,\tau )\Vert _{W^{\alpha ,p}(D)}~{\text {d}}\tau \\&\quad \le C\frac{\sin (\pi \alpha )}{\pi }\int \nolimits _{-\infty }^t(t-\tau )^{\alpha -1}(t-\tau )^{-{\alpha /2}} e^{-\lambda (t-\tau )} \Vert Y(\cdot ,\tau )\Vert _{p}~{\text {d}}\tau \\&\quad \le C \sup _{\tau \in (-\infty ,t]}\Vert Y(\cdot ,\tau )\Vert _p \int \nolimits _{-\infty }^t(t-\tau )^{\alpha /2-1} e^{-\lambda (t-\tau )}~{\text {d}}\tau , \end{aligned}$$

where we have used (2.13) and thus

$$\begin{aligned}&{\mathbb {E}}\left( \sup _{t\in [0,1]}\Vert z_1(\theta _t\omega )\Vert _p\right) \\&\quad \le C~{\mathbb {E}}\left( \sup _{t\in [0,1]}\sup _{\tau \in (-\infty ,t]}\Vert Y(\cdot , \tau )\Vert _p\right) \int \nolimits _{0}^\infty \tau ^{\alpha /2-1}e^{-\lambda \tau }~{\text {d}}\tau \\&\quad = C~{\mathbb {E}}\left( \sup _{t\in [0,1]}\sup _{\tau \in (-\infty ,t]}\Vert Y(\cdot , \tau )\Vert _p\right) \frac{\Gamma (\alpha /2)}{\lambda ^{\alpha /2}}. \end{aligned}$$

Now, the right-hand side is finite, as all moments of \(Y(x,\tau )\) are bounded uniformly in \(x,\tau \); see above. Due to the embedding of Lebesgue spaces on a bounded domain we have that

$$\begin{aligned} {\mathbb {E}}\left( \sup _{t\in [0,1]} \Vert z_1(\theta _t\omega )\Vert _p\right)<\infty ~~ \text { implies }~~ {\mathbb {E}}\left( \sup _{t\in [0,1]}\Vert z_1(\theta _t\omega )\Vert _p^p\right) <\infty , \end{aligned}$$

i.e., temperedness of \(\Vert z_1(\omega )\Vert _p^p\) follows again with Remark 3.9. \(\square \)
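The quantities controlled in Lemma 3.17 can be illustrated numerically (our own toy computation, with a constant \(\sigma \equiv 1\) replacing \(\sigma (x)\) and a scalar Wiener process): the stationary scalar convolution \(z_2(t)=\int _{-\infty }^t e^{-(t-s)\sigma }\,{\text {d}}\omega _2(s)\) has second moment \(1/(2\sigma )\), and it is this finite moment that drives the temperedness argument via Remark 3.9.

```python
import math, random

# Time-average of z^2 for the stationary scalar OU convolution
# z(t) = int_{-infty}^t e^{-(t-s)*sigma} dW(s), for which E[z^2] = 1/(2*sigma).
# The one-step update z(t+dt) = e^{-sigma*dt} z(t) + Gaussian noise with
# variance (1 - e^{-2*sigma*dt})/(2*sigma) samples the process exactly.
random.seed(42)
sigma, dt, n = 1.0, 0.02, 500000
decay = math.exp(-sigma*dt)
noise_sd = math.sqrt((1.0 - decay**2) / (2.0*sigma))
z = random.gauss(0.0, math.sqrt(1.0/(2.0*sigma)))   # start in stationarity
acc = 0.0
for _ in range(n):
    z = decay*z + random.gauss(0.0, noise_sd)
    acc += z*z
est = acc/n                    # Monte-Carlo estimate of E[z^2]
assert abs(est - 1.0/(2.0*sigma)) < 0.1
```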

Remark 3.18

  1. (1)

Note that Assumption 2.2 (3) together with the boundedness of \(e_{k}\) for \(k\in {\mathbb {N}}\) is essential for this proof. One can extend such statements to general open bounded domains \(D\subset {\mathbb {R}}^{n}\), according to Remark 5.27 and Theorem 5.28 in [11].

  2. (2)

    Regarding again Assumption 2.2 (3) one can show in a similar way that \( z_1 \in W^{1,p}(D)\) and in particular also \(\Vert \nabla z_1(\omega )\Vert _p^p\) is a tempered random variable for all \(p\ge 1\).

Remark 3.19

Alternatively, one can introduce the Ornstein–Uhlenbeck processes \(z_1\) and \(z_2\) using integration by parts. We applied the factorization lemma for the definition of \(z_1\) in order to obtain regularity results for \(z_1\) based on the interplay between the eigenvalues of the linear part and of the covariance operator of the noise.

Using integration by parts, one infers that

$$\begin{aligned} z_{1}(\theta _t\omega )&=\int \nolimits _{-\infty }^{t} \exp ((t-\tau )A)~{\text {d}}\omega _1(\tau ) =\omega _{1}(t) + A \int \nolimits _{-\infty }^{t} \exp ((t-\tau )A)\omega _{1}(\tau )~{\text {d}}\tau \\ {}&= - A \int \nolimits _{-\infty }^{t} \exp ((t-\tau )A)(\omega _{1}(t) -\omega _{1}(\tau ))~{\text {d}}\tau . \end{aligned}$$

This expression can also be used in order to investigate the regularity of \(z_1\) in a Banach space \({{\mathcal {H}}}\) as follows:

$$\begin{aligned}&\Big \Vert A \int \nolimits _{-\infty }^{t} \exp ((t-\tau )A)(\omega _{1}(t) -\omega _{1}(\tau ))~{\text {d}}\tau \Big \Vert _{{{\mathcal {H}}}} \\&\quad \le C \int \nolimits _{-\infty }^{t} (t-\tau )^{-1} \Vert \exp ((t-\tau ) A)\Vert _{{{\mathcal {H}}}} \Vert \omega _{1}(t) -\omega _{1}(\tau )\Vert _{{{\mathcal {H}}}}~{\text {d}}\tau . \end{aligned}$$

Here one uses the Hölder-continuity of \(\omega _{1}\) in an appropriate function space in order to compensate the singularity in the previous formula.

In our case, we need \(z_1\in D(A^{\alpha /2})=W^{\alpha ,p}(D)\). Letting \(\omega _{1}\in D(A^{\varepsilon })\) for \(\varepsilon \ge 0\) and using that \(\omega _1\) is \(\beta \)-Hölder continuous with \(\beta < 1/2\) one has

$$\begin{aligned} \Vert z_{1}(\theta _{t}\omega )\Vert _{W^{\alpha ,p}(D)} \le \int \nolimits _{-\infty }^{t} (t-\tau )^{\beta +\varepsilon -\alpha /2-1} \Vert \omega _{1}\Vert _{\beta ,\varepsilon } \Vert \exp ((t-\tau )A)\Vert ~{\text {d}}\tau , \end{aligned}$$

which is well-defined if \(\beta +\varepsilon >\alpha /2\). Such a condition again reflects the interplay between the time and space regularity of the stochastic convolution.

Based on the results regarding the stochastic convolutions we can now investigate the long-time behaviour of our system. The first step is contained in the next lemma, which establishes the existence of an absorbing set.

Lemma 3.20

Suppose Assumptions 2.1 and 2.2 hold. Then there exists a set \(\{{{\mathcal {B}}}(\omega )\}_{\omega \in \Omega }\in {\mathcal {T}}\) such that \(\{{{\mathcal {B}}}(\omega )\}_{\omega \in \Omega }\) is a bounded absorbing set for \(\varphi \). In particular, for any \({{\mathcal {D}}}=\{{{\mathcal {D}}}(\omega )\}_{\omega \in \Omega }\in {\mathcal {T}}\) and every \(\omega \in \Omega \) there exists a random time \(t_{{\mathcal {D}}}(\omega )\) such that for all \(t\ge t_{{\mathcal {D}}}(\omega )\)

$$\begin{aligned} \varphi (t,\theta _{-t}\omega ,{{\mathcal {D}}}(\theta _{-t}\omega )) \subset {{\mathcal {B}}}(\omega ). \end{aligned}$$
(3.17)

Proof

To show the existence of a bounded absorbing set, we want to make use of Remark 3.6, i.e., we need an a-priori estimate in \({\mathbb {H}}\). For \(v=(v_1,v_2)^\top \) a solution of (3.8) we have

$$\begin{aligned}&\frac{1}{2}\frac{{\text {d}}}{{\text {d}}t}\left( \Vert v_1\Vert ^2_2+\Vert v_2\Vert ^2_2\right) =\frac{1}{2}\frac{{\text {d}}}{{\text {d}}t}\Vert v\Vert _{\mathbb {H}}^2 =\left\langle \frac{{\text {d}}}{{\text {d}}t}v, v\right\rangle _{\mathbb {H}} =\langle \mathbf{Av}+{\mathbf {F}}(v+z(\theta _t\omega )),v\rangle _{\mathbb {H}}\\&\quad =\langle dAv_1,v_1\rangle +\langle F_1(v+z(\theta _t\omega )),v_1\rangle -\langle \sigma (x)v_2,v_2\rangle +\langle F_2(v+z(\theta _t \omega )),v_2\rangle \\&\quad =-d \Vert \nabla v_1\Vert _{2}^2\underbrace{-\langle h(x,v_1+z_1(\theta _t\omega )),v_1\rangle }_{=:I_1} \underbrace{-\langle f(x,v_1+z_1(\theta _t\omega ),v_2+z_2(\theta _t\omega )),v_1\rangle }_{=:I_2}\\&\qquad {}-\delta \Vert v_2\Vert _2^2 \underbrace{-\langle g(x,v_1+z_1(\theta _t\omega )),v_2\rangle }_{=:I_3}, \end{aligned}$$

where we have used (2.7). We now estimate \(I_1\), \(I_2\) and \(I_3\) separately. Deterministic constants, denoted by \(C,C_1,C_2,\ldots \), may change from line to line. Using (2.5) and (2.10) we calculate

$$\begin{aligned} I_1&=-\int \nolimits _Dh(x,v_1+z_1( \theta _t\omega ))v_1~{\text {d}}x\\&\quad =-\int \nolimits _D h(x,v_1+z_1(\theta _t\omega ))(v_1+z_1(\theta _t\omega ))~{\text {d}}x\\&\qquad +\int \nolimits _Dh(x,v_1+z_1(\theta _t\omega ))z_1(\theta _t\omega )~{\text {d}}x\\&\quad \le -\int \nolimits _D\delta _1|u_1|^p~{\text {d}}x+\int \nolimits _D\delta _3~{\text {d}}x+\int \nolimits _D|h(x,v_1+z_1(\theta _t\omega ))||z_1(\theta _t\omega )|~{\text {d}}x\\&\quad \le -\delta _1\Vert u_1\Vert _{p}^p+C+\delta _8\int \nolimits _D(1+|u_1|^{p-1})|z_1(\theta _t\omega )|~{\text {d}}x\\&\quad =-\delta _1\Vert u_1\Vert _{p}^p+C+\delta _8\Vert z_1(\theta _t\omega )\Vert _{1}+\delta _8\int \nolimits _D|u_1|^{p-1}|z_1(\theta _t\omega )|~{\text {d}}x\\&\quad \le -\delta _1\Vert u_1\Vert _{p}^p+C+C_1\Vert z_1(\theta _t\omega )\Vert _{2}^2+\frac{\delta _1}{2}\Vert u_1\Vert _{p}^p+ C_2 \Vert z_1(\theta _t\omega )\Vert _{p}^p\\&\quad =-\frac{\delta _1}{2}\Vert u_1\Vert _{p}^p+C+ C_1\left( \Vert z_1(\theta _t\omega )\Vert _{2}^2+ \Vert z_1(\theta _t\omega )\Vert _{p}^p\right) . \end{aligned}$$

Furthermore, with (2.6) we estimate

$$\begin{aligned} I_2&=-\int \nolimits _Df(x,v_1+z_1(\theta _t\omega ),v_2+z_2(\theta _t\omega ))v_1~{\text {d}}x\\&\le \int \nolimits _D|f(x,v_1+z_1(\theta _t\omega ),v_2+z_2(\theta _t\omega ))||u_1-z_1(\theta _t\omega )|~{\text {d}}x\\&\le \int \nolimits _D \delta _4(1+|u_1|^{p_1}+|u_2|)|u_1|~{\text {d}}x\\&\qquad +\int \nolimits _D\delta _4(1+|u_1|^{p_1}+|u_2|)|z_1(\theta _t\omega )|~{\text {d}}x\\&=\int \nolimits _D \delta _4(|u_1|+|u_1|^{p_1+1})~{\text {d}}x+\int \nolimits _D\delta _4|u_1||u_2|~{\text {d}}x+\delta _4\Vert z_1(\theta _t\omega )\Vert _{1}\\&\qquad +\int \nolimits _D\delta _4|u_1|^{p_1}|z_1(\theta _t\omega )|~{\text {d}}x+\int \nolimits _D\delta _4|u_2||z_1(\theta _t\omega )|~{\text {d}}x\\&\le \int \nolimits _D \delta _4(|u_1|+|u_1|^{p_1+1})~{\text {d}}x+\int \nolimits _D\delta _4|u_1||u_2|~{\text {d}}x+\delta _4\Vert z_1(\theta _t\omega )\Vert _{2}^2+C \\&\qquad +\int \nolimits _D \frac{\delta _4}{2}|u_1|^{p_1+1}~{\text {d}}x+ C_1 \Vert z_1(\theta _t\omega )\Vert _{p_1+1}^{p_1+1}+\int \nolimits _D\delta _4|u_2||z_1(\theta _t\omega )|~{\text {d}}x\\&\le \int \nolimits _D \delta _4\frac{3}{2}(|u_1|+|u_1|^{p_1+1})~{\text {d}}x+\int \nolimits _D\delta _4|u_1||u_2|~{\text {d}}x+C\\&\qquad +C_1 \left( \Vert z_1(\theta _t\omega )\Vert _{2}^2+\Vert z_1(\theta _t\omega )\Vert _{p_1+1}^{p_1+1}\right) +\int \nolimits _D\delta _4|u_2||z_1(\theta _t\omega )|~{\text {d}}x. \end{aligned}$$

With (2.9) we compute

$$\begin{aligned} I_3&=-\int \nolimits _Dg(x,v_1+z_1(\theta _t\omega ))v_2~{\text {d}}x\\&\le \int \nolimits _D|g(x,u_1)||u_2-z_2(\theta _t\omega )|~{\text {d}}x\\&\le \int \nolimits _D\delta _7(1+|u_1|)|u_2|~{\text {d}}x+\int \nolimits _D\delta _7(1+|u_1|)|z_2(\theta _t\omega )|~{\text {d}}x\\&=\int \nolimits _D\delta _7(1+|u_1|)|u_2|~{\text {d}}x+\delta _7\Vert z_2(\theta _t\omega )\Vert _{1}+\int \nolimits _D\delta _7|u_1||z_2(\theta _t\omega )|~{\text {d}}x\\&\le \int \nolimits _D\delta _7(1+|u_1|)|u_2|~{\text {d}}x+\delta _7\Vert z_2(\theta _t\omega )\Vert _{2}^2+C+\int \nolimits _D\delta _7|u_1||z_2(\theta _t\omega )|~{\text {d}}x. \end{aligned}$$

Now, combining the estimates for \(I_2\) and \(I_3\) yields

$$\begin{aligned}&I_2+I_3\\&\quad \le \int \nolimits _D\delta _7(1+|u_1|)|u_2|~{\text {d}}x+\int \nolimits _D\delta _7|u_1||z_2(\theta _t\omega )|~{\text {d}}x\\&\qquad +\int \nolimits _D \delta _4\frac{3}{2}(|u_1|+|u_1|^{p_1+1})~{\text {d}}x+\int \nolimits _D\delta _4|u_1||u_2|~{\text {d}}x+\int \nolimits _D\delta _4|u_2||z_1(\theta _t\omega )|~{\text {d}}x\\&\qquad +C+ C_1 \left( \Vert z_2(\theta _t\omega )\Vert _{2}^2+\Vert z_1(\theta _t\omega )\Vert _{2}^2+\Vert z_1(\theta _t\omega )\Vert _{p_1+1}^{p_1+1}\right) \\&\quad \le (\delta _4+\delta _7)\int \nolimits _D(1+|u_1|)|u_2|~{\text {d}}x+\int \nolimits _D\delta _7|u_1||z_2(\theta _t\omega )|~{\text {d}}x\\&\qquad +\int \nolimits _D \delta _4\frac{3}{2}(|u_1|+|u_1|^{p_1+1})~{\text {d}}x+\int \nolimits _D\delta _4|u_2||z_1(\theta _t\omega )|~{\text {d}}x\\&\qquad +C+ C_1 \left( \Vert z_2(\theta _t\omega )\Vert _{2}^2+\Vert z_1(\theta _t\omega )\Vert _{2}^2+\Vert z_1(\theta _t\omega )\Vert _{p_1+1}^{p_1+1}\right) \\&\quad \le \frac{\delta }{16}\Vert u_2\Vert _{2}^2+C_2\int \nolimits _D(1+|u_1|)^2~{\text {d}}x+\int \nolimits _D\delta _7|u_1||z_2(\theta _t\omega )|~{\text {d}}x\\&\qquad +\int \nolimits _D \delta _4\frac{3}{2}(|u_1|+|u_1|^{p_1+1})~{\text {d}}x+\int \nolimits _D\delta _4|u_2||z_1(\theta _t\omega )|~{\text {d}}x\\&\qquad +C+C_1 \left( \Vert z_2(\theta _t\omega )\Vert _{2}^2+\Vert z_1(\theta _t\omega )\Vert _{2}^2+\Vert z_1(\theta _t\omega )\Vert _{p_1+1}^{p_1+1}\right) \\&\quad = \frac{\delta }{16}\Vert u_2\Vert _{2}^2+\delta _4\frac{3}{2}\int \nolimits _D \left( |u_1|+|u_1|^{p_1+1}+C_2(1+|u_1|)^2\right) ~{\text {d}}x\\&\qquad +\int \nolimits _D\delta _7|u_1||z_2(\theta _t\omega )|~{\text {d}}x +\int \nolimits _D\delta _4|u_2||z_1(\theta _t\omega )|~{\text {d}}x\\&\qquad +C+ C_1 \left( \Vert z_2(\theta _t\omega )\Vert _{2}^2+\Vert z_1(\theta _t\omega )\Vert _{2}^2+\Vert z_1(\theta _t\omega )\Vert _{p_1+1}^{p_1+1}\right) \\&\quad \le \frac{\delta }{16}\Vert u_2\Vert _{2}^2+C_2\int \nolimits 
_D(1+|u_1|^q)~{\text {d}}x+\frac{\delta _1}{8}\Vert u_1\Vert _{2}^2+\frac{\delta }{16}\Vert u_2\Vert _{2}^2\\&\qquad +C+ C_1\left( \Vert z_2(\theta _t\omega )\Vert _{2}^2+\Vert z_1(\theta _t\omega )\Vert _{2}^2+\Vert z_1(\theta _t\omega )\Vert _{p_1+1}^{p_1+1}\right) , \end{aligned}$$

where we have used that for \(q=\max \{p_1+1,2\}<p\) there exists a constant \(C_2\) such that

$$\begin{aligned} C_1\left( |\xi |+|\xi |^{p_1+1}+C(1+|\xi |)^2\right) \le C_2(|\xi |^q+1),~~~ \text {for all }\xi \in {\mathbb {R}}. \end{aligned}$$
(3.18)

Thus,

$$\begin{aligned}&I_2+I_3\\&\quad \le \frac{\delta }{8}\Vert u_2\Vert _{2}^2+\frac{\delta _1}{8}\Vert u_1\Vert _{2}^2+\frac{\delta _1}{4}\Vert u_1\Vert _{p}^p\\&\qquad +C+C_1 \left( \Vert z_2(\theta _t\omega )\Vert _{2}^2+\Vert z_1(\theta _t\omega )\Vert _{2}^2+\Vert z_1(\theta _t\omega )\Vert _{p_1+1}^{p_1+1}\right) \\&\quad \le \frac{\delta }{4}\Vert v_2\Vert _{2}^2+\frac{\delta _1 3}{8}\Vert u_1\Vert _{p}^p+C\\&\qquad +C_1 \left( \Vert z_2(\theta _t\omega )\Vert _{2}^2+\Vert z_1(\theta _t\omega )\Vert _{2}^2+\Vert z_1(\theta _t\omega )\Vert _{p_1+1}^{p_1+1}\right) . \end{aligned}$$

Hence, in total we obtain

$$\begin{aligned}&\frac{1}{2}\frac{{\text {d}}}{{\text {d}}t}(\Vert v_1\Vert ^2_{2}+\Vert v_2\Vert ^2_{2}) \nonumber \\&\quad \le -d\Vert \nabla v_1\Vert ^2_{2}-\frac{\delta _1}{2}\Vert u_1\Vert _{p}^p-\delta \Vert v_2\Vert _{2}^2+\frac{\delta }{4}\Vert v_2\Vert _{2}^2+\frac{\delta _1 3}{8}\Vert u_1\Vert _{p}^p\nonumber \\&\qquad {}+C+ C_1 \left( \Vert z_2(\theta _t\omega )\Vert _{2}^2+\Vert z_1(\theta _t\omega )\Vert _{2}^2+\Vert z_1(\theta _t\omega )\Vert _{p_1+1}^{p_1+1}+\Vert z_1(\theta _t\omega )\Vert _{p}^p\right) \nonumber \\&\quad = -d\Vert \nabla v_1\Vert ^2_{2}-\frac{\delta _1}{8}\Vert u_1\Vert _{p}^p-\frac{3\delta }{4}\Vert v_2\Vert _{2}^2\nonumber \\&\qquad {}+C+ C_1 \left( \Vert z_2(\theta _t\omega )\Vert _{2}^2+\Vert z_1(\theta _t\omega )\Vert _{2}^2+\Vert z_1(\theta _t\omega )\Vert _{p_1+1}^{p_1+1}+\Vert z_1(\theta _t\omega )\Vert _{p}^p\right) \nonumber \\&\quad \le -\frac{d}{2}\Vert \nabla v_1\Vert ^2_{2} -\frac{d}{2c}\Vert v_1\Vert ^2_{2}-\frac{3\delta }{4}\Vert v_2\Vert _{2}^2\nonumber \\&\qquad +C+ C_1 \left( \Vert z_2(\theta _t\omega )\Vert _{2}^2+\Vert z_1(\theta _t\omega )\Vert _{p}^p\right) \end{aligned}$$
(3.19)

and thus

$$\begin{aligned} \frac{{\text {d}}}{{\text {d}}t}(\Vert v_1\Vert ^2_{2}+\Vert v_2\Vert ^2_{2})\le -C_2\left( \Vert v_1\Vert ^2_{2}+\Vert v_2\Vert _{2}^2\right) +C+ C_1 \left( \Vert z_2(\theta _t\omega )\Vert _{2}^2+\Vert z_1(\theta _t\omega )\Vert _{p}^p\right) . \end{aligned}$$
(3.20)

Now, applying Gronwall’s inequality we obtain

$$\begin{aligned}&\Vert v_1\Vert ^2_{2}+\Vert v_2\Vert ^2_{2}\nonumber \\&\quad \le \left( \Vert v_1^0\Vert ^2_{2}+\Vert v_2^0\Vert ^2_{2}\right) \exp \left( -C_2t\right) +C_3\left( 1-\exp \left( -C_2t\right) \right) \nonumber \\&\qquad {}+C_1\int \nolimits _0^t\exp \left( -C_2(t-s)\right) \left( \Vert z_2(\theta _s\omega )\Vert _{2}^2+\Vert z_1(\theta _s\omega )\Vert _{p}^p\right) ~{\text {d}}s\nonumber \\&\quad \le \left( \Vert v_1^0\Vert ^2_{2}+\Vert v_2^0\Vert ^2_{2}\right) \exp \left( -C_2t\right) +C_3\nonumber \\&\qquad {}+ C_1\int \nolimits _0^t\exp \left( -C_2(t-s)\right) \left( \Vert z_2(\theta _s\omega )\Vert _{2}^2+\Vert z_1(\theta _s\omega )\Vert _{p}^p\right) ~{\text {d}}s. \end{aligned}$$
(3.21)

We replace \(\omega \) by \(\theta _{-t}\omega \) (note the \({\mathbb {P}}\)-preserving property of the MDS) and carry out a change of variables

$$\begin{aligned}&\Vert v_1(t,\theta _{-t}\omega ,v_1^0(\theta _{-t}\omega ))\Vert ^2_{2}+\Vert v_2(t,\theta _{-t}\omega ,v_2^0(\theta _{-t}\omega ))\Vert ^2_{2}\\&\le \left( \Vert v_1^0(\theta _{-t}\omega )\Vert ^2_{2}+\Vert v_2^0(\theta _{-t}\omega )\Vert ^2_{2}\right) \exp \left( -C_2t\right) +C_3\\&\qquad {}+C_1\int \nolimits _0^t\exp \left( -C_2(t-s)\right) \left( \Vert z_2(\theta _{s-t}\omega )\Vert _{2}^2+\Vert z_1(\theta _{s-t}\omega )\Vert _{p}^p\right) ~{\text {d}}s \\&\quad \le \left( \Vert v_1^0(\theta _{-t}\omega )\Vert ^2_{2}+\Vert v_2^0(\theta _{-t}\omega )\Vert ^2_{2}\right) \exp \left( -C_2t\right) +C_3\\&\qquad {}+C_1\int \nolimits _{-t}^0\exp \left( C_2s\right) \left( \Vert z_2(\theta _s\omega )\Vert _{2}^2+\Vert z_1(\theta _s\omega )\Vert _{p}^p\right) ~{\text {d}}s. \end{aligned}$$

Now let \({{\mathcal {D}}}\in {{\mathcal {T}}}\) be arbitrary and \((u_1^0,u_2^0)(\theta _{-t}\omega )\in {{\mathcal {D}}}(\theta _{-t}\omega )\). Then

$$\begin{aligned}&\Vert \varphi (t,\theta _{-t}\omega ,(u_1^0,u_2^0)(\theta _{-t}\omega ))\Vert _{\mathbb {H}}^2\\&\quad = \Vert v_1(t,\theta _{-t}\omega ,u_1^0(\theta _{-t}\omega )-z_1(\theta _{-t}\omega ))+z_1(\omega )\Vert _2^2\\&\qquad +\Vert v_2(t,\theta _{-t}\omega ,u_2^0(\theta _{-t}\omega )-z_2(\theta _{-t}\omega ))+z_2(\omega )\Vert _2^2\\&\quad \le 2\Vert v_1(t,\theta _{-t}\omega ,u_1^0(\theta _{-t}\omega )-z_1(\theta _{-t}\omega ))\Vert _2^2+2\Vert z_1(\omega )\Vert _2^2\\&\qquad +2\Vert v_2(t,\theta _{-t}\omega ,u_2^0(\theta _{-t}\omega )-z_2(\theta _{-t}\omega ))\Vert _2^2 +2\Vert z_2(\omega )\Vert _2^2\\&\quad \le 2\left( \Vert u_1^0(\theta _{-t}\omega )-z_1(\theta _{-t}\omega )\Vert ^2_{2}+\Vert u_2^0(\theta _{-t}\omega )-z_2(\theta _{-t}\omega )\Vert ^2_{2}\right) \exp \left( -C_2t\right) \\&\qquad {}+2C_3+2 C_1\int \nolimits _{-t}^0\exp \left( C_2s\right) \left( \Vert z_2(\theta _s\omega )\Vert _{2}^2+\Vert z_1(\theta _s\omega )\Vert _{p}^p\right) ~{\text {d}}s\\&\qquad +2\Vert z_1(\omega )\Vert _2^2+2\Vert z_2(\omega )\Vert _2^2\\&\quad \le 4\left( \Vert u_1^0(\theta _{-t}\omega )\Vert _2^2+\Vert z_1(\theta _{-t}\omega )\Vert ^2_{2}+\Vert u_2^0(\theta _{-t}\omega )\Vert _2^2+\Vert z_2(\theta _{-t}\omega )\Vert ^2_{2}\right) \exp \left( -C_2t\right) \\&\qquad {}+2C_3+2C_1\int \nolimits _{-t}^0\exp \left( C_2s\right) \left( \Vert z_2(\theta _s\omega )\Vert _{2}^2+\Vert z_1(\theta _s\omega )\Vert _{p}^p\right) ~{\text {d}}s\\&\qquad +2\Vert z_1(\omega )\Vert _2^2+2\Vert z_2(\omega )\Vert _2^2. \end{aligned}$$

Since \((u_1^0,u_2^0)(\theta _{-t}\omega )\in {{\mathcal {D}}}(\theta _{-t}\omega )\) and since \(\Vert z_1(\omega )\Vert _p^p\) (\(p\ge 1\)), \(\Vert z_2(\omega )\Vert _2^2\) are tempered random variables, we have

$$\begin{aligned}&\limsup _{t\rightarrow \infty }\left( \Vert u_1^0(\theta _{-t}\omega )\Vert _2^2+\Vert z_1(\theta _{-t}\omega )\Vert ^2_{2}\right. \\&\qquad \qquad \left. +\Vert u_2^0(\theta _{-t}\omega )\Vert _2^2+\Vert z_2(\theta _{-t}\omega )\Vert ^2_{2}\right) \exp \left( -C_2t\right) =0. \end{aligned}$$

Hence,

$$\begin{aligned}&\limsup _{t\rightarrow \infty } \Vert \varphi (t,\theta _{-t}\omega ,(u_1^0,u_2^0)(\theta _{-t}\omega ))\Vert _{\mathbb {H}}^2\nonumber \\&\quad \le 2C_3+2C_1\int \nolimits _{-\infty }^0\exp \left( C_2s\right) \left( \Vert z_2(\theta _s\omega )\Vert _{2}^2+\Vert z_1(\theta _s\omega )\Vert _{p}^p\right) ~{\text {d}}s\nonumber \\&\qquad +2\Vert z_1(\omega )\Vert _2^2+2\Vert z_2(\omega )\Vert _2^2 \nonumber \\&=:\rho (\omega ). \end{aligned}$$
(3.22)

Due to the temperedness of \(\Vert z_1(\omega )\Vert _p^p\) for \(p\ge 1\) and of \(\Vert z_2(\omega )\Vert _2^2\), the improper integral above exists and \(\rho (\omega )>0\) is an \(\omega \)-dependent constant. As described in Remark 3.6, we can define for some \(\epsilon >0\)

$$\begin{aligned} {\mathcal {B}}(\omega )=B(0,\rho (\omega )+\epsilon ). \end{aligned}$$

Then \({{\mathcal {B}}}=\{{{\mathcal {B}}}(\omega )\}_\omega \in {{\mathcal {T}}}\) is a \({{\mathcal {T}}}\)-absorbing set for the RDS \(\varphi \) with finite absorption time \(t_{{\mathcal {T}}}(\omega )=\sup _{{{\mathcal {D}}}\in {{\mathcal {T}}}}t_{{\mathcal {D}}}(\omega )\). \(\square \)
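The pullback mechanism behind the absorption statement (3.17) can be illustrated on a scalar toy model (our own sketch; the linear drift \(-z\) and the horizon \(T=20\) are illustrative): starting at time \(-t\) from different initial values and evaluating at time \(0\) along one fixed noise path, all solutions approach the same random point \(\int _{-\infty }^{0}e^{s}\,{\text {d}}\omega (s)\), so the dependence on the initial data is forgotten exponentially fast.

```python
import math, random

# Pullback convergence for the scalar OU toy model dz = -z dt + dW: fix one
# Brownian path on [-T, 0] and solve from time -t with initial value x0.
random.seed(3)
dt, T = 1e-3, 20.0
n = int(T/dt)
dW = [random.gauss(0.0, math.sqrt(dt)) for _ in range(n)]  # path on [-T, 0]

def pullback(x0, t):
    """Euler solution on [-t, 0] with u(-t) = x0, driven by the fixed path."""
    u = x0
    for k in range(n - int(t/dt), n):
        u += -u*dt + dW[k]
    return u

a = pullback(5.0, 20.0)
b = pullback(-5.0, 20.0)
assert abs(a - b) < 1e-6          # initial condition forgotten at time 0
assert abs(pullback(5.0, 10.0) - a) < 1e-2   # pullback limit stabilizes in t
```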

The random radius \(\rho (\omega )\) depends on the restrictions imposed on the non-linearity and the noise. These were heavily used in Lemma 3.20 in order to derive the expression (3.22) for \(\rho (\omega )\). Regarding the structure of \(\rho (\omega )\) we infer by Lemma 3.17 that \(\rho (\omega )\) is tempered. Although we have now shown the existence of a bounded \({{\mathcal {T}}}\)-absorbing set for the RDS at hand, we need further steps. To show the existence of a random attractor, we would like to make use of Theorem 3.11, i.e., we have to show the existence of a compact \({{\mathcal {T}}}\)-absorbing set. This is the goal of the next subsection.

Compact absorbing set

The classical strategy to find a compact absorbing set in \(L^2(D)\) for a reaction-diffusion equation is the following: firstly, one finds an absorbing set in \(L^2(D)\); secondly, this set is used to find an absorbing set in \(H^1(D)\), which, due to compact embedding, automatically defines a compact absorbing set in \(L^2(D)\). In our setting the construction of an absorbing set in \(H^1(D)\) is more complicated, as the regularizing effect of the Laplacian is missing in the second component of (3.8). That is, solutions with initial conditions in \(L^2(D)\) will in general only belong to \(L^2(D)\) and not to \(H^1(D)\). To overcome this difficulty, we split the solution of the second component into two terms: one which is regular enough, in the sense that it belongs to \(H^1(D)\), and another one which tends to zero asymptotically. This splitting method has been used by several authors in the context of partly dissipative systems, see for instance [20, 32]. Let us now explain the strategy for our setting in more detail. We consider the equations

$$\begin{aligned} \frac{{\text {d}}v_2^1(t)}{{\text {d}}t} =-\sigma (x)v_2^1(t)-g(x,v_1(t)+z_1(\theta _t\omega )),~~~~v_2^1(0)=0, \end{aligned}$$
(3.23)

and

$$\begin{aligned} \frac{{\text {d}}v_2^2}{{\text {d}}t}=-\sigma (x)v_2^2,~~~v_2^2(0)=v_2^0, \end{aligned}$$
(3.24)

then \(v_2=v_2^1+v_2^2\) solves (3.10). Note at this point that we associate the initial condition \(v_2^0\in L^2(D)\) to the second part. Now, let \({{\mathcal {D}}}=({{\mathcal {D}}}_1,{{\mathcal {D}}}_2) \in {{\mathcal {T}}}\) be arbitrary and \(u^0=(u_1^0,u_2^0)\in {{\mathcal {D}}}\). Then

$$\begin{aligned}&\varphi (t,\theta _{-t}\omega ,u^0(\theta _{-t}\omega ))\\&\quad =(u_1(t,\theta _{-t}\omega ,u_1^0(\theta _{-t}\omega )),u_2(t,\theta _{-t}\omega ,u_2^0(\theta _{-t}\omega )))\\&\quad =\left( v_1(t,\theta _{-t}\omega , v_1^0(\theta _{-t}\omega ))+z_1(\omega ),\right. \\&\qquad \left. v_2^1(t,\theta _{-t}\omega , v_2^0(\theta _{-t}\omega ))+v_2^2(t,\theta _{-t}\omega , v_2^0(\theta _{-t}\omega ))+z_2(\omega )\right) \\&\quad =\left( v_1(t,\theta _{-t}\omega , v_1^0(\theta _{-t}\omega ))+z_1(\omega ),\right. \\&\qquad \left. v_2^1(t,\theta _{-t}\omega , 0)+z_2(\omega )\right) +\left( 0,v_2^2(t,\theta _{-t}\omega , v_2^0(\theta _{-t}\omega ))\right) \\&\quad =:\varphi _1(t,\theta _{-t}\omega ,v_1^0(\theta _{-t}\omega ))+\varphi _2(t,\theta _{-t}\omega ,v_2^0(\theta _{-t}\omega )) \end{aligned}$$

If we can show that for a certain \(t^*\ge t_{{\mathcal {D}}}(\omega )\) there exist tempered random variables \(\rho _1(\omega )\), \(\rho _2(\omega )\) such that

$$\begin{aligned} \Vert v_1(t^*,\theta _{-t^*}\omega ,v_1^0(\theta _{-t^*}\omega ))+z_1(\omega )\Vert _{H^1(D)}<&\rho _1(\omega ), \end{aligned}$$
(3.25)
$$\begin{aligned} \Vert v_2^1(t^*,\theta _{-t^*}\omega ,0)+z_2(\omega )\Vert _{H^1(D)}<&\rho _2(\omega ), \end{aligned}$$
(3.26)

then, because of compact embedding, we know that \(\overline{\varphi _1(t^*,\theta _{-t^*}\omega , {{\mathcal {D}}}_1(\theta _{-t^*}\omega ))}\) is a compact set in \({\mathbb {H}}\). If, furthermore

$$\begin{aligned} \lim _{t\rightarrow \infty }\Vert v_2^2(t,\theta _{-t}\omega , v_2^0(\theta _{-t}\omega ))\Vert _{2} =0, \end{aligned}$$
(3.27)

then \(\varphi _2(t,\theta _{-t}\omega ,{{\mathcal {D}}}_2(\theta _{-t}\omega ))\) can be regarded as a (random) bounded perturbation and \(\overline{\varphi (t,\theta _{-t}\omega ,{{\mathcal {D}}}(\theta _{-t}\omega ))}\) is compact in \({\mathbb {H}}\) as well, see [28, Theorem 2.1]. Then,

$$\begin{aligned} \overline{\varphi (t^*,\theta _{-t^*}\omega ,{{\mathcal {B}}}(\theta _{-t^*}\omega ))} \end{aligned}$$
(3.28)

is a compact absorbing set for the RDS \(\varphi \). We will now prove the necessary estimates (3.25)–(3.27).
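Before turning to the estimates, the splitting can be checked on a spatially homogeneous toy version of (3.10) (our own sketch, with the constant \(\sigma \equiv 2\) and a cosine forcing replacing \(g\); both are illustrative choices): the forced part with zero initial data and the free part carrying the initial condition superpose exactly to the full solution, and the free part decays at rate \(\sigma \), mirroring (3.27).

```python
import math

# Splitting (3.23)-(3.24) for the scalar linear ODE v2' = -sigma*v2 - g(t):
# v21 carries the forcing with zero initial data, v22 carries the initial
# condition without forcing, and by linearity v2 = v21 + v22.
sigma, v0 = 2.0, 3.0
g = math.cos                     # illustrative forcing term
dt, n = 1e-3, 4000
v2, v21, v22, t = v0, 0.0, v0, 0.0
for _ in range(n):
    v2  += (-sigma*v2  - g(t)) * dt
    v21 += (-sigma*v21 - g(t)) * dt
    v22 += (-sigma*v22) * dt
    t += dt
assert abs(v2 - (v21 + v22)) < 1e-9          # exact superposition
assert v22 <= v0*math.exp(-sigma*t) + 1e-9   # free part decays like (3.27)
```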

Lemma 3.21

Let Assumptions 2.1 and 2.2 hold. Let \({{\mathcal {D}}}_2\subset L^2(D)\) be tempered and \(u_2^0\in {{\mathcal {D}}}_2\). Then

$$\begin{aligned} \lim _{t\rightarrow \infty }\Vert v_2^2(t,\theta _{-t}\omega ,v_2^0(\theta _{-t}\omega ))\Vert _{2}^2=0. \end{aligned}$$

Proof

The solution to (3.24) is given by

$$\begin{aligned} v_2^2(t)=v_2^0\exp \left( -\sigma (x)t\right) \end{aligned}$$

and thus

$$\begin{aligned}&\lim _{t\rightarrow \infty }\Vert v_2^2(t,\theta _{-t}\omega ,v_2^0(\theta _{-t}\omega ))\Vert _{2}^2\\&=\lim _{t\rightarrow \infty }\left\| v_2^0(\theta _{-t}\omega )\exp \left( -\sigma (x)t\right) \right\| _2^2\\&\quad \le \lim _{t\rightarrow \infty }\Vert v_2^0(\theta _{-t}\omega )\Vert _2^2\exp \left( -\delta t\right) \\&\quad \le \lim _{t\rightarrow \infty }\left( \Vert u_2^0(\theta _{-t}\omega )\Vert _2^2+\Vert z_2(\theta _{-t}\omega )\Vert _2^2\right) \exp \left( -\delta t\right) =0, \end{aligned}$$

as \(u_2^0\in {{\mathcal {D}}}_2\) and \(\Vert z_2(\omega )\Vert _2^2\) is a tempered random variable. \(\square \)
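Although the claim is purely analytic, the mechanism is easy to see numerically: a tempered random variable grows subexponentially along \(\theta _{-t}\omega \), so multiplication by \(\exp (-\delta t)\) forces decay. The following sketch is purely illustrative — the polynomial surrogate for \(\Vert u_2^0(\theta _{-t}\omega )\Vert _2^2+\Vert z_2(\theta _{-t}\omega )\Vert _2^2\) and the value of \(\delta \) are assumptions made for the demonstration, not quantities from the paper.

```python
import math

# Assumed lower bound delta > 0 on sigma(x), as in the estimate above.
delta = 0.5

def surrogate_tempered(t):
    # Stand-in for ||u_2^0(theta_{-t} omega)||^2 + ||z_2(theta_{-t} omega)||^2:
    # tempered random variables grow subexponentially, modelled here by a
    # polynomial (an assumption for illustration only).
    return (1.0 + t) ** 4

def v2_bound(t):
    # The bound used in the proof: subexponential growth times exp(-delta*t)
    # tends to zero as t -> infinity.
    return surrogate_tempered(t) * math.exp(-delta * t)

for t in (0.0, 10.0, 50.0, 100.0):
    print(t, v2_bound(t))
```

The exponential factor eventually dominates any subexponential growth, which is exactly why temperedness is the right notion here.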

We now prove the boundedness of \(v_1\) and \(v_2^1\) in \(H^1(D)\). To this end, we need some auxiliary estimates. First, let us derive uniform estimates for \(u_1\in L^p(D)\) and for \(v_1\in H^1(D)\).

Lemma 3.22

Let Assumptions 2.1 and 2.2 hold. Let \({{\mathcal {D}}}_1\subset L^2(D)\) be tempered and \(u_1^0\in {{\mathcal {D}}}_1\). Assume \(t\ge 0\) and \(r>0\). Then

$$\begin{aligned}&\int \nolimits _{t}^{t+r}\Vert u_1(s,\omega , u_1^0(\omega ))\Vert _p^p~{\text {d}}s\nonumber \\&\quad \le Cr+C_1\int \nolimits _t^{t+r}\left( \Vert z_2(\theta _s\omega )\Vert _{2}^2+\Vert z_1(\theta _s\omega )\Vert _{p}^p\right) {\text {d}}s\nonumber \\&\qquad +\Vert v_1(t,\omega ,v_1^0(\omega ))\Vert ^2_2+\Vert v_2(t,\omega ,v_2^0(\omega ))\Vert _2^2, \end{aligned}$$
(3.29)
$$\begin{aligned}&\int \nolimits _{t}^{t+r}\Vert \nabla v_1(s,\omega , v_1^0(\omega ))\Vert _2^2~{\text {d}}s\nonumber \\&\le Cr+C_1\int \nolimits _t^{t+r}\left( \Vert z_2(\theta _s\omega )\Vert _{2}^2+\Vert z_1(\theta _s\omega )\Vert _{p}^p\right) {\text {d}}s\nonumber \\&\qquad +\Vert v_1(t,\omega ,v_1^0(\omega ))\Vert ^2_2+\Vert v_2(t,\omega ,v_2^0(\omega ))\Vert _2^2, \end{aligned}$$
(3.30)

where \(C,C_1\) are deterministic constants.

Proof

From (3.19) we can derive

$$\begin{aligned}&\frac{{\text {d}}}{{\text {d}}t}(\Vert v_1\Vert ^2_{2}+\Vert v_2\Vert ^2_{2})\\&\quad \le -d\Vert \nabla v_1\Vert _2^2-\frac{\delta _1}{4}\Vert u_1\Vert _{p}^p+C+C_1 \left( \Vert z_2(\theta _t\omega )\Vert _{2}^2+\Vert z_1(\theta _t\omega )\Vert _{p}^p\right) , \end{aligned}$$

and thus by integration

$$\begin{aligned}&d\int \nolimits _{t}^{t+r}\Vert \nabla v_1(s,\omega ,v_1^0(\omega ))\Vert _2^2~{\text {d}}s+\frac{\delta _1}{4}\int \nolimits _{t}^{t+r}\Vert u_1(s,\omega ,u_1^0(\omega ))\Vert _p^p~{\text {d}}s\\&\quad \le Cr+C_1\int \nolimits _t^{t+r}\left( \Vert z_2(\theta _s\omega )\Vert _{2}^2+\Vert z_1(\theta _s\omega )\Vert _{p}^p\right) {\text {d}}s\\&\qquad +\Vert v_1(t,\omega ,v_1^0(\omega ))\Vert ^2_2+\Vert v_2(t,\omega ,v_2^0(\omega ))\Vert _2^2. \end{aligned}$$

The two statements of the lemma follow directly from this estimate. \(\square \)

Lemma 3.23

Let Assumptions 2.1 and 2.2 hold. Let \({{\mathcal {D}}}_1\subset L^2(D)\) be tempered and \(u_1^0\in {{\mathcal {D}}}_1\). Assume \(t\ge r\). Then

$$\begin{aligned}&\int \nolimits _{t}^{t+r} \Vert u_1(s,\omega ,u_1^0(\omega ))\Vert _{2p-2}^{2p-2}~{\text {d}}s\nonumber \\&\quad \le C_6r+\int \nolimits _{t-r}^{t+r}C_2\Vert z_1(\theta _s\omega )\Vert _{p^2-p}^{p^2-p}+C_3\Vert z_2(\theta _s\omega )\Vert _2^2+C_4\Vert v_2(s,\omega ,v_2^0(\omega ))\Vert _2^2~{\text {d}}s\nonumber \\&\qquad +C_5\Vert v_1(t-r,\omega ,v_1^0(\omega ))\Vert ^2_2+C_5\Vert v_2(t-r,\omega ,v_2^0(\omega ))\Vert _2^2, \end{aligned}$$
(3.31)

where \(C_2,C_3,C_4,C_5,C_6\) are deterministic constants.

Proof

Remember that \(v_1\) satisfies equation (3.9). Multiplying this equation by \(|v_1|^{p-2}v_1\) and integrating over D yields

$$\begin{aligned}&\frac{1}{p}\frac{{\text {d}}}{{\text {d}}t}\int \nolimits _D|v_1|^{p}~{\text {d}}x\\&\quad =d\int \nolimits _D\Delta v_1(t)|v_1|^{p-2}v_1~{\text {d}}x-\int \nolimits _D h(x,v_1(t)+z_1(\theta _t\omega ))|v_1|^{p-2}v_1~{\text {d}}x\\&\qquad {}-\int \nolimits _D f(x,v_1(t)+z_1(\theta _t\omega ),v_2(t)+z_2(\theta _t\omega ))|v_1|^{p-2}v_1 ~{\text {d}}x\\&\quad = -d(p-1)\int \nolimits _D |\nabla v_1|^2|v_1|^{p-2}~{\text {d}}x-\int \nolimits _D h(x,v_1(t)+z_1(\theta _t\omega ))|v_1|^{p-2}v_1~{\text {d}}x\\&\qquad {}-\int \nolimits _D f(x,v_1(t)+z_1(\theta _t\omega ),v_2(t)+z_2(\theta _t\omega ))|v_1|^{p-2}v_1 ~{\text {d}}x\\&\quad \le -\int \nolimits _D \left( \frac{\delta _1}{2^p}|v_1|^p-C-C_1(|z_1(\theta _t\omega )|^2+|z_1(\theta _t\omega )|^p)\right) |v_1|^{p-2}~{\text {d}}x\\&\qquad {}+\int \nolimits _D |f(x,v_1(t)+z_1(\theta _t\omega ),v_2(t)+z_2(\theta _t\omega ))||v_1|^{p-2}v_1 ~{\text {d}}x\\&\quad \le -\int \nolimits _D \frac{\delta _1}{2^p}|v_1|^{2p-2}~{\text {d}}x+C\int \nolimits _D|v_1|^{p-2}~{\text {d}}x\\&\qquad + C_1\int \nolimits _D (|z_1(\theta _t\omega )|^2+|z_1(\theta _t\omega )|^p)|v_1|^{p-2}~{\text {d}}x\\&\qquad +\int \nolimits _D \delta _4 (1+|v_1+z_1(\theta _t\omega )|^{p_1}+|v_2+z_2(\theta _t\omega )|)|v_1|^{p-2}v_1 ~{\text {d}}x\\&\quad \le -\int \nolimits _D \frac{\delta _1}{2^p}|v_1|^{2p-2}~{\text {d}}x+C\int \nolimits _D|v_1|^{p-2}~{\text {d}}x+C_1\int \nolimits _D|v_1|^{p-1}~{\text {d}}x\\&\qquad +C_2\int \nolimits _D(|z_1(\theta _t\omega )|^{2p-2}+|z_1(\theta _t\omega )|^{p^2-p}) ~{\text {d}}x+\int \nolimits _D \delta _4 \left( |v_1|^{p-1}\ldots \right. \\&\qquad \qquad +C_3\left( |v_1|^{p_1+p-1}+|z_1(\theta _t\omega )|^{p_1}|v_1|^{p-1}+|v_2||v_1|^{p-1}\ldots \right. \\&\qquad \qquad \left. \left. 
+|z_2(\theta _t\omega )||v_1|^{p-1}\right) \right) ~{\text {d}}x\\&\quad \le -\int \nolimits _D \frac{\delta _1}{2^p}|v_1|^{2p-2}~{\text {d}}x+\frac{\delta _1}{2^p 4}\int \nolimits _D|v_1|^{2p-2}~{\text {d}}x+C_6\\&\qquad +C_2\int \nolimits _D(|z_1(\theta _t\omega )|^{2p-2}+|z_1(\theta _t\omega )|^{p^2-p}) ~{\text {d}}x\\&\qquad {}+\int \nolimits _D C_3 (|z_1(\theta _t\omega )|^{p_1}|v_1|^{p-1}+|v_2||v_1|^{p-1}+|z_2(\theta _t\omega )||v_1|^{p-1}) ~{\text {d}}x, \end{aligned}$$

where we have used condition (2.6), the relations \(p-1,p-2,p_1+p-1<2p-2\) and the inequality

$$\begin{aligned} h(x,v_1+z_1)v_1\ge \frac{\delta _1}{2^p}|v_1|^p-C-C_1(|z_1|^2+|z_1|^p), \end{aligned}$$

which can be proved using conditions (2.5) and (2.10):

$$\begin{aligned} h(x,v_1+z_1)v_1&=h(x,v_1+z_1)(v_1+z_1)-h(x,v_1+z_1)z_1\\&\ge \delta _1 |v_1+z_1|^p-\delta _3-|h(x,v_1+z_1)||z_1|\\&\ge \delta _1|v_1+z_1|^p-\delta _3-(\delta _8+\delta _8|v_1+z_1|^{p-1})|z_1|\\&\ge \delta _1|v_1+z_1|^p-C-C_1|z_1|^2-\delta _1/2|v_1+z_1|^p-C_2|z_1|^p\\&=\frac{\delta _1}{2}|v_1+z_1|^p-C-C_1(|z_1|^2+|z_1|^p)\\&\ge \frac{\delta _1}{2}||v_1|-|z_1||^p-C-C_1(|z_1|^2+|z_1|^p)\\&\ge \frac{\delta _1}{2^p}|v_1|^p-C-C_1(|z_1|^2+|z_1|^p). \end{aligned}$$

Hence we have

$$\begin{aligned}&\frac{1}{p}\frac{{\text {d}}}{{\text {d}}t}\int \nolimits _D|v_1|^{p}~{\text {d}}x+\int \nolimits _D \frac{3}{4}\frac{\delta _1}{2^p}|v_1|^{2p-2}~{\text {d}}x\\&\quad \le C_6+C_2\int \nolimits _D(|z_1(\theta _t\omega )|^{2p-2}+|z_1(\theta _t\omega )|^{p^2-p}) ~{\text {d}}x\\&\qquad +\int \nolimits _D C_3 (|z_1(\theta _t\omega )|^{p_1}+|v_2|+|z_2(\theta _t\omega )|)|v_1|^{p-1} ~{\text {d}}x \\&\quad \le C_6+C_2\int \nolimits _D(|z_1(\theta _t\omega )|^{2p-2}+|z_1(\theta _t\omega )|^{p^2-p}) ~{\text {d}}x+\int \nolimits _D \frac{1}{4}\frac{\delta _1}{2^p} |v_1|^{2p-2}~{\text {d}}x \\&\qquad {}+\int \nolimits _D C_3 (|z_1(\theta _t\omega )|^{p_1}+|v_2|+|z_2(\theta _t\omega )|)^2 ~{\text {d}}x \end{aligned}$$

and thus

$$\begin{aligned}&\frac{1}{p}\frac{{\text {d}}}{{\text {d}}t}\int \nolimits _D|v_1|^{p}~{\text {d}}x+\int \nolimits _D \frac{1}{2}\frac{\delta _1}{2^p}|v_1|^{2p-2}~{\text {d}}x\nonumber \\&\quad \le C_6+C_2\int \nolimits _D(|z_1(\theta _t\omega )|^{2p-2}+|z_1(\theta _t\omega )|^{p^2-p}) ~{\text {d}}x \nonumber \\&\qquad +\int \nolimits _D C_3 (|z_1(\theta _t\omega )|^{2p_1}+|v_2(t)|^2+|z_2(\theta _t\omega )|^2 )~{\text {d}}x. \end{aligned}$$
(3.32)

We arrive at the following inequality

$$\begin{aligned} \frac{1}{p}\frac{{\text {d}}}{{\text {d}}t} \Vert v_1\Vert _p^p+\frac{\delta _1}{2^{p+1}}\Vert v_1\Vert ^{2p-2}_{2p-2}\le C_6+C_2\Vert z_1(\theta _t\omega )\Vert _{p^2-p}^{p^2-p}+C_3\Vert z_2(\theta _t\omega )\Vert _2^2+C_3\Vert v_2\Vert _2^2 \end{aligned}$$
(3.33)

and hence

$$\begin{aligned} \frac{{\text {d}}}{{\text {d}}t} \Vert v_1\Vert _p^p\le C_6+C_2\Vert z_1(\theta _t\omega )\Vert _{p^2-p}^{p^2-p}+C_3\Vert z_2(\theta _t\omega )\Vert _2^2+C_3\Vert v_2\Vert _2^2-\frac{\delta _1}{2^{p+1}}\Vert v_1\Vert ^{p}_{p}. \end{aligned}$$
(3.34)
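The passage from (3.33) to (3.34) rests on an elementary pointwise bound, valid whenever \(p\ge 2\) (so that \(2p-2\ge p\)); the resulting constant \(\frac{\delta _1}{2^{p+1}}|D|\) is absorbed into \(C_6\):

```latex
\[
|v|^{p} \le |v|^{2p-2} + 1,
\qquad\text{hence}\qquad
\Vert v_1\Vert _p^p \le \Vert v_1\Vert _{2p-2}^{2p-2} + |D|,
\]
\[
\text{so that}\quad
-\frac{\delta _1}{2^{p+1}}\Vert v_1\Vert ^{2p-2}_{2p-2}
\le -\frac{\delta _1}{2^{p+1}}\Vert v_1\Vert ^{p}_{p}
+\frac{\delta _1}{2^{p+1}}|D|.
\]
```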

With (3.29) we have

$$\begin{aligned}&\int \nolimits _t^{t+r}\Vert v_1(s,\omega ,v_1^0(\omega ))\Vert _p^p~{\text {d}}s\\&\quad =\int \nolimits _t^{t+r}\Vert u_1(s,\omega ,u_1^0(\omega ))-z_1(\theta _s\omega )\Vert _p^p~{\text {d}}s\\&\quad \le Cr+C_1\int \nolimits _t^{t+r}\left( \Vert z_2(\theta _s\omega )\Vert _{2}^2+\Vert z_1(\theta _s\omega )\Vert _{p}^p\right) {\text {d}}s\\&\qquad +C_2\Vert v_1(t,\omega ,v_1^0(\omega ))\Vert ^2_2+C_2\Vert v_2(t,\omega ,v_2^0(\omega ))\Vert _2^2. \end{aligned}$$

Thus by applying the uniform Gronwall Lemma to (3.34) we have

$$\begin{aligned}&\Vert v_1(t+r,\omega ,v_1^0(\omega ))\Vert _p^p\nonumber \\&\quad \le rC_6+\int \nolimits _t^{t+r} C_2\Vert z_1(\theta _s\omega )\Vert _{p^2-p}^{p^2-p}+C_3\Vert z_2(\theta _s\omega )\Vert _2^2+C_4\Vert v_2(s,\omega ,v_2^0(\omega ))\Vert _2^2~{\text {d}}s\nonumber \\&\qquad +C_5\Vert v_1(t,\omega ,v_1^0(\omega ))\Vert ^2_2+C_5\Vert v_2(t,\omega ,v_2^0(\omega ))\Vert _2^2. \end{aligned}$$
(3.35)

Now integrating (3.33) between t and \(t+r\) yields

$$\begin{aligned}&\int \nolimits _t^{t+r}\Vert v_1(s,\omega ,v_1^0(\omega ))\Vert ^{2p-2}_{2p-2}~{\text {d}}s\\&\quad \le C_6r+\int \nolimits _t^{t+r}C_2\Vert z_1(\theta _s\omega )\Vert _{p^2-p}^{p^2-p}+C_3\Vert z_2(\theta _s\omega )\Vert _2^2+C_3\Vert v_2(s,\omega ,v_2^0(\omega ))\Vert _2^2~{\text {d}}s\\&\qquad +C\Vert v_1(t,\omega ,v_1^0(\omega ))\Vert _p^p \end{aligned}$$

and thus for \(t\ge r\) using (3.35)

$$\begin{aligned}&\int \nolimits _t^{t+r}\Vert v_1(s,\omega ,v_1^0(\omega ))\Vert ^{2p-2}_{2p-2}~{\text {d}}s\\&\quad \le C_6r+\int \nolimits _{t-r}^{t+r}C_2\Vert z_1(\theta _s\omega )\Vert _{p^2-p}^{p^2-p}+C_3\Vert z_2(\theta _s\omega )\Vert _2^2+C_4\Vert v_2(s,\omega ,v_2^0(\omega ))\Vert _2^2~{\text {d}}s\\&\qquad +C_5\Vert v_1(t-r,\omega ,v_1^0(\omega ))\Vert ^2_2+C_5\Vert v_2(t-r,\omega ,v_2^0(\omega ))\Vert _2^2. \end{aligned}$$

In total this leads to

$$\begin{aligned}&\int \nolimits _t^{t+r}\Vert u_1(s,\omega ,u_1^0(\omega ))\Vert ^{2p-2}_{2p-2}~{\text {d}}s\\&\le C_6r+\int \nolimits _{t-r}^{t+r}C_2\Vert z_1(\theta _s\omega )\Vert _{p^2-p}^{p^2-p}+C_3\Vert z_2(\theta _s\omega )\Vert _2^2+C_4\Vert v_2(s,\omega ,v_2^0(\omega ))\Vert _2^2~{\text {d}}s\\&\qquad {}+C_5\Vert v_1(t-r,\omega ,v_1^0(\omega ))\Vert ^2_2+C_5\Vert v_2(t-r,\omega ,v_2^0(\omega ))\Vert _2^2\\&\qquad +\int \nolimits _t^{t+r}\Vert z_1(\theta _s\omega )\Vert _{2p-2}^{2p-2}~{\text {d}}s\\&\quad \le C_6r+\int \nolimits _{t-r}^{t+r}C_2\Vert z_1(\theta _s\omega )\Vert _{p^2-p}^{p^2-p}+C_3\Vert z_2(\theta _s\omega )\Vert _2^2+C_4\Vert v_2(s,\omega ,v_2^0(\omega ))\Vert _2^2~{\text {d}}s\\&\qquad {}+C_5\Vert v_1(t-r,\omega ,v_1^0(\omega ))\Vert ^2_2+C_5\Vert v_2(t-r,\omega ,v_2^0(\omega ))\Vert _2^2, \end{aligned}$$

and this finishes the proof. \(\square \)

One can also use appropriate shifts within the integrals on the left hand sides in (3.29), (3.30), (3.31) to obtain simpler forms of the \(\omega \)-dependent constants on the right hand side, see for instance [33, Lemma 4.3, 4.4]. More precisely, in case of (3.29) one can for instance obtain an estimate of the form

$$\begin{aligned} \int \nolimits _{t}^{t+r}\Vert u_{1}(s,\theta _{-t-r}\omega ,u^{0}_{1}(\theta _{-t-r}\omega ))\Vert ^p_{p}~{\text {d}}s \le c (1+{\widetilde{\rho }}(\omega )), \end{aligned}$$

where \({\tilde{\rho }}(\omega )\) is a random constant. Such estimates hold for every \(\omega \), independently of the shift one inserts inside the integral on the left hand side. Without appropriate shifts on the left hand sides, as in the lemmas above, the constants on the right hand sides depend on the shift. Next, we show the boundedness of \(v_1\) in \(H^1(D)\).

Lemma 3.24

Let Assumptions 2.1 and 2.2 hold. Let \({{\mathcal {D}}}=({{\mathcal {D}}}_1,{{\mathcal {D}}}_2)\in {{\mathcal {T}}}\) and \(u^0\in {{\mathcal {D}}}\). Assume \(t\ge t_{{\mathcal {D}}}(\omega )+2r\) for some \(r>0\). Then

$$\begin{aligned} \Vert \nabla v_1(t,\theta _{-t}\omega ,v_1^0(\theta _{-t}\omega ))\Vert _2^2\le \rho _1(\omega ), \end{aligned}$$
(3.36)

where \(\rho _1(\omega )\) is a tempered random variable.

Proof

Remember that \(v_1\) satisfies the equation (3.9) and thus

$$\begin{aligned}&\frac{1}{2}\frac{{\text {d}}}{{\text {d}}t}\Vert \nabla v_1\Vert _{2}^2=\left\langle \frac{{\text {d}}}{{\text {d}}t}v_1,-\Delta v_1\right\rangle \\&\quad =\langle d\Delta v_1-h(x,v_1+z_1(\theta _t\omega ))-f(x,v_1+z_1(\theta _t\omega ),v_2+z_2(\theta _t\omega )),-\Delta v_1\rangle \\&\quad =-d \Vert \Delta v_1\Vert _{2}^2+\langle h(x,v_1+z_1(\theta _t\omega )),\Delta v_1\rangle \\&\qquad +\langle f(x,v_1+z_1(\theta _t\omega ),v_2+z_2(\theta _t\omega )),\Delta v_1\rangle \\&\le -d\Vert \Delta v_1\Vert _2^2+\int \nolimits _D \delta _8(1+|u_1|^{p-1})|\Delta v_1|~{\text {d}}x\\&\qquad +\int \nolimits _D \delta _4(1+|u_1|^{p_1}+|u_2|)|\Delta v_1|~{\text {d}}x \\&\quad \le -d\Vert \Delta v_1\Vert _2^2+C\int \nolimits _D (2+|u_1|^{p-1}+|u_1|^{p_1}+|u_2|)|\Delta v_1|~{\text {d}}x \\&\quad \le -\frac{d}{2}\Vert \Delta v_1\Vert _2^2+C\int \nolimits _D (1+|u_1|^{p-1}+|u_1|^{p_1}+|u_2|)^2~{\text {d}}x \\&\quad \le -\frac{d}{2}\Vert \Delta v_1\Vert _2^2+C\int \nolimits _D (1+|u_1|^{2p-2}+|u_2|^2)~{\text {d}}x \\&\quad = -\frac{d}{2}\Vert \Delta v_1\Vert _2^2+C_1+ C\Vert u_1\Vert _{2p-2}^{2p-2}+ C \Vert u_2\Vert _2^2\\&\quad \le -\frac{dc}{2}\Vert \nabla v_1\Vert _2^2+C_1+ C\Vert u_1\Vert _{2p-2}^{2p-2}+C \Vert u_2\Vert _2^2. \end{aligned}$$

We want to apply the uniform Gronwall Lemma now. Therefore, note

$$\begin{aligned} \frac{{\text {d}}}{{\text {d}}t}\underbrace{\Vert \nabla v_1(t,\omega ,v_1^0(\omega ))\Vert _{2}^2}_{:=y(t)} \le&\underbrace{-dc}_{:=g(t)}\Vert \nabla v_1(t,\omega ,v_1^0(\omega ))\Vert _2^2\\&+\underbrace{C_1+ C\Vert u_1(t,\omega ,u_1^0(\omega ))\Vert _{2p-2}^{2p-2}+C \Vert u_2(t,\omega ,u_2^0(\omega ))\Vert _2^2}_{:=h(t)}. \end{aligned}$$

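For the reader's convenience, we recall the uniform Gronwall lemma in its standard form (as found, e.g., in Temam's monograph on infinite-dimensional dynamical systems); the names \(y,g,h\) match the identification above, and \(a_1,a_2,a_3\) are our notation:

```latex
% Uniform Gronwall lemma: let y, g, h be nonnegative and locally
% integrable on (t_0, \infty) with y' \le g y + h, and suppose
\[
\int_t^{t+r} g(s)\,\mathrm{d}s \le a_1, \qquad
\int_t^{t+r} h(s)\,\mathrm{d}s \le a_2, \qquad
\int_t^{t+r} y(s)\,\mathrm{d}s \le a_3
\quad \text{for all } t \ge t_0.
\]
% Then
\[
y(t+r) \le \Bigl( \frac{a_3}{r} + a_2 \Bigr) e^{a_1}
\quad \text{for all } t \ge t_0.
\]
```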
We calculate

$$\begin{aligned} \int \nolimits _{t}^{t+r}g(s)~{\text {d}}s\le 0 \end{aligned}$$
(3.37)

and

$$\begin{aligned}&\int \nolimits _{t}^{t+r}\Vert \nabla v_1(s,\omega ,v_1^0(\omega ))\Vert _2^2~{\text {d}}s\\&\quad \le Cr+C_1\int \nolimits _t^{t+r}\left( \Vert z_2(\theta _s\omega )\Vert _{2}^2+\Vert z_1(\theta _s\omega )\Vert _{p}^p\right) {\text {d}}s\\&\qquad +C_2\left( \Vert v_1(t,\omega ,v_1^0(\omega ))\Vert ^2_2+\Vert v_2(t,\omega ,v_2^0(\omega ))\Vert _2^2\right) , \end{aligned}$$

where we have applied Lemma 3.22. By Lemma 3.23 for \(t\ge r\)

$$\begin{aligned}&\int \nolimits _t^{t+r}\Vert u_1(s,\omega ,u_1^0(\omega ))\Vert ^{2p-2}_{2p-2}~{\text {d}}s \\&\quad \le C_6r+\int \nolimits _{t-r}^{t+r}C_2\Vert z_1(\theta _s\omega )\Vert _{p^2-p}^{p^2-p}+C_3\Vert z_2(\theta _s\omega )\Vert _2^2+C_4\Vert u_2(s,\omega ,v_2^0(\omega ))\Vert _2^2~{\text {d}}s\\&\qquad +C_5\Vert v_1(t-r,\omega ,v_1^0(\omega ))\Vert ^2_2+C_5\Vert v_2(t-r,\omega ,v_2^0(\omega ))\Vert _2^2. \end{aligned}$$

Now, the uniform Gronwall Lemma yields for \(t\ge r\)

$$\begin{aligned}&\Vert \nabla v_1(t+r,\omega ,v_1^0(\omega ))\Vert _2^2\\&\quad \le C+C_1\int \nolimits _t^{t+r}\left( \Vert z_2(\theta _s\omega )\Vert _{2}^2+\Vert z_1(\theta _s\omega )\Vert _{p}^p\right) {\text {d}}s\\&\qquad +C_2\left( \Vert v_1(t,\omega ,v_1^0(\omega ))\Vert ^2_2+\Vert v_2(t,\omega ,v_2^0(\omega ))\Vert _2^2\right) \\&\qquad {}+C_3\int \nolimits _{t-r}^{t+r}\Vert z_1(\theta _s\omega )\Vert _{p^2-p}^{p^2-p}+\Vert z_2(\theta _s\omega )\Vert _2^2+\Vert u_2(s,\omega ,v_2^0(\omega ))\Vert _2^2~{\text {d}}s\\&\qquad {}+C_4\left( \Vert v_1(t-r,\omega ,v_1^0(\omega ))\Vert ^2_2+\Vert v_2(t-r,\omega ,v_2^0(\omega ))\Vert _2^2\right) \\&\qquad +C_5\int \nolimits _t^{t+r}\Vert u_2(s,\omega ,u_2^0(\omega ))\Vert _2^2~{\text {d}}s\\&\quad \le C+C_1\int \nolimits _{t-r}^{t+r}\Vert u_2(s,\omega ,u_2^0(\omega ))\Vert _2^2~{\text {d}}s\\&\qquad +C_2\int \nolimits _{t-r}^{t+r}\Vert z_1(\theta _s\omega )\Vert _{p^2-p}^{p^2-p}+\Vert z_2(\theta _s\omega )\Vert _2^2~{\text {d}}s \\&\qquad +C_3\left( \Vert v_1(t,\omega ,v_1^0(\omega ))\Vert ^2_2 +\Vert v_2(t,\omega ,v_2^0(\omega ))\Vert _2^2 \ldots \right. \\&\qquad \qquad \left. +\Vert v_1(t-r,\omega ,v_1^0(\omega ))\Vert ^2_2+\Vert v_2(t-r,\omega ,v_2^0(\omega ))\Vert _2^2\right) . \end{aligned}$$

That is, for \(t\ge 0\) we have

$$\begin{aligned}&\Vert \nabla v_1(t+2r,\omega ,v_1^0(\omega ))\Vert _2^2\\&\quad \le C+C_1\int \nolimits _{t}^{t+2r}\Vert v_2(s,\omega ,u_2^0(\omega ))\Vert _2^2~{\text {d}}s\\&\qquad +C_2\int \nolimits _{t}^{t+2r}\Vert z_1(\theta _s\omega )\Vert _{p^2-p}^{p^2-p}+\Vert z_2(\theta _s\omega )\Vert _2^2~{\text {d}}s\\&\qquad {}+C_3\left( \Vert v_1(t+r,\omega ,v_1^0(\omega ))\Vert ^2_2+\Vert v_2(t+r,\omega ,v_2^0(\omega ))\Vert _2^2\ldots \right. \\&\qquad \qquad \left. +\Vert v_1(t,\omega ,v_1^0(\omega ))\Vert ^2_2+\Vert v_2(t,\omega ,v_2^0(\omega ))\Vert _2^2\right) . \end{aligned}$$

Let us recall that our goal is to find a \(t^*\ge t_{{\mathcal {D}}}(\omega )\) such that (3.25) holds. Now assume that \(t\ge t_{{\mathcal {D}}}(\omega )\). We replace \(\omega \) by \(\theta _{-t-2r}\omega \) (again note the \({\mathbb {P}}\)-preserving property of the MDS), then

$$\begin{aligned}&\Vert \nabla v_1(t+2r,\theta _{-t-2r}\omega ,v_1^0(\theta _{-t-2r}\omega ))\Vert _2^2\\&\quad \le C+C_1\int \nolimits _{t}^{t+2r}\Vert v_2(s,\theta _{-t-2r}\omega ,u_2^0(\theta _{-t-2r}\omega ))\Vert _2^2~{\text {d}}s\\&\qquad +C_2\int \nolimits _{t}^{t+2r}\Vert z_1(\theta _{s-t-2r}\omega )\Vert _{p^2-p}^{p^2-p}+\Vert z_2(\theta _{s-t-2r}\omega )\Vert _2^2~{\text {d}}s\\&\qquad +C_3\left( \Vert v_1(t+r,\theta _{-t-2r}\omega ,v_1^0(\theta _{-t-2r}\omega ))\Vert ^2_2\right. \ldots \\&\qquad \qquad +\Vert v_2(t+r,\theta _{-t-2r}\omega ,v_2^0(\theta _{-t-2r}\omega ))\Vert _2^2\ldots \\&\qquad \qquad + \Vert v_1(t,\theta _{-t-2r}\omega ,v_1^0(\theta _{-t-2r}\omega ))\Vert ^2_2\ldots \\&\qquad \qquad \left. +\Vert v_2(t,\theta _{-t-2r}\omega ,v_2^0(\theta _{-t-2r}\omega ))\Vert _2^2\right) . \end{aligned}$$

As \(t\ge t_{{\mathcal {D}}}(\omega )\) we know by the absorption property that there exists a \({\tilde{\rho }}(\omega )\) such that

$$\begin{aligned} \Vert v_1(t,\theta _{-t}\omega ,v_1^0(\theta _{-t}\omega ))\Vert _2^2\le {\tilde{\rho }}(\omega ), \end{aligned}$$

and thus replacing \(\omega \) by \(\theta _{-2r}\omega \)

$$\begin{aligned} \Vert v_1(t,\theta _{-t-2r}\omega ,v_1^0(\theta _{-t-2r}\omega ))\Vert _2^2\le {\tilde{\rho }}(\theta _{-2r}\omega ). \end{aligned}$$

Similarly, we know that

$$\begin{aligned} \Vert v_1(t+r,\theta _{-t-r}\omega ,v_1^0(\theta _{-t-r}\omega ))\Vert _2^2\le {\tilde{\rho }}(\theta _{-r}\omega ), \end{aligned}$$

and thus by replacing \(\omega \) by \(\theta _{-r}\omega \)

$$\begin{aligned} \Vert v_1(t+r,\theta _{-t-2r}\omega ,v_1^0(\theta _{-t-2r}\omega ))\Vert _2^2\le {\tilde{\rho }}(\theta _{-2r}\omega ). \end{aligned}$$

The same arguments hold for \(v_2\). Furthermore, as \(t\ge t_{{\mathcal {D}}}(\omega )\), we know from Lemma 3.20 that there exists a tempered random variable \({\hat{\rho }}(\omega )\) such that for \(s\in (t,t+2r)\)

$$\begin{aligned} \Vert v_{2}(s,\theta _{-s}\omega ,u_2^0(\theta _{-s}\omega ))\Vert ^{2}_{2} \le \hat{\rho }(\omega ) \end{aligned}$$

and thus

$$\begin{aligned}&\int \nolimits _{t}^{t+2r}\Vert v_{2}(s,\theta _{-t-2r}\omega ,u_2^0(\theta _{-t-2r}\omega ))\Vert ^{2}_{2} {\text {d}}s \\&\quad \le \int \nolimits _{t}^{t+2r} \hat{\rho } (\theta _{s-t-2r}\omega )~{\text {d}}s=\int \nolimits _{0}^{2r}\hat{\rho } (\theta _{\tau -2r}\omega )~{\text {d}}\tau =\int \nolimits _{-2r}^{0}\hat{\rho } (\theta _{y}\omega ){\text {d}}y. \end{aligned}$$

With similar substitutions in the integral over \(\Vert z_1(\theta _{s-t-2r}\omega )\Vert _{p^2-p}^{p^2-p}\) and \(\Vert z_2(\theta _{s-t-2r}\omega )\Vert _2^2\) we arrive at

$$\begin{aligned}&\Vert \nabla v_1(t+2r,\theta _{-t-2r}\omega ,v_1^0(\theta _{-t-2r}\omega ))\Vert _2^2\\&\quad \le C+C_1\int \nolimits _{-2r}^{0}\hat{\rho } (\theta _{y}\omega ){\text {d}}y+C_2\int \nolimits _{-2r}^{0}\Vert z_1(\theta _{y}\omega )\Vert _{p^2-p}^{p^2-p}+\Vert z_2(\theta _{y}\omega )\Vert _2^2~{\text {d}}y\\&\qquad +C_3 {\tilde{\rho }}(\theta _{-2r}\omega ), \end{aligned}$$

where the right hand side is independent of t. Due to the temperedness of all terms involved, they can be combined into one tempered random variable \(\rho _1(\omega )\) such that for \(t\ge t_{{\mathcal {D}}}(\omega )+2r=:t^*\) we have

$$\begin{aligned} \Vert \nabla v_1(t,\theta _{-t}\omega ,v_1^0(\theta _{-t}\omega ))\Vert _2^2\le \rho _1(\omega ), \end{aligned}$$

which concludes the proof. \(\square \)

We are now able to prove the boundedness of the first term of \(v_2\) in \(H^1(D)\).

Lemma 3.25

Let Assumptions 2.1 and  2.2 hold. Let \({{\mathcal {D}}}=({{\mathcal {D}}}_1,{{\mathcal {D}}}_2)\in {{\mathcal {T}}}\) and \(u^0\in {{\mathcal {D}}}\). Assume \(t\ge t_{{\mathcal {D}}}(\omega )+2r\) for some \(r>0\). Then we have

$$\begin{aligned} \Vert \nabla v_2^1(t,\theta _{-t}\omega ,0)\Vert _2^2\le \rho _2(\omega ), \end{aligned}$$
(3.38)

where \(\rho _2(\omega )\) is a tempered random variable.

Proof

Remember that \(v_2^1\) satisfies the equation (3.23) and thus

$$\begin{aligned} \frac{1}{2}\frac{{\text {d}}}{{\text {d}}t}\Vert \nabla v_2^1\Vert _{2}^2&=\langle \frac{{\text {d}}}{{\text {d}}t}v_2^1,-\Delta v_2^1\rangle \\&=\langle -\sigma (x)v_2^1-g(x,v_1+z_1),-\Delta v_2^1\rangle \\&=\underbrace{\langle \sigma (x)v_2^1, \Delta v_2^1\rangle }_{=:L_1}+\underbrace{\langle g(x,v_1+z_1),\Delta v_2^1\rangle }_{=:L_2}. \end{aligned}$$

We estimate \(L_1\) and \(L_2\) separately

$$\begin{aligned} L_1&=\int \nolimits _D\sigma (x)v_2^1\Delta v_2^1 {\text {d}}x\\&=-\int \nolimits _D\nabla (\sigma (x)v_2^1)\cdot \nabla v_2^1{\text {d}}x\\&\le -\delta \Vert \nabla v_2^1\Vert _{2}^2-\int \nolimits _D \nabla \sigma (x)v_2^1\cdot \nabla v_2^1{\text {d}}x, \end{aligned}$$

and

$$\begin{aligned} L_2&=\int \nolimits _Dg(x,v_1+z_1)\Delta v_2^1~{\text {d}}x=-\int \nolimits _D\nabla g(x,v_1+z_1)\cdot \nabla v_2^1~{\text {d}}x\\&=-\int \nolimits _D \left( \nabla g(x,v_1+z_1)+\partial _\xi g(x,v_1+z_1)\nabla (v_1+z_1)\right) \cdot \nabla v_2^1~{\text {d}}x, \end{aligned}$$

where in the last equation the gradient is to be understood as

$$\begin{aligned} \nabla g(x,v_1+z_1)=(\partial _{x_1}g(x,v_1+z_1),\ldots ,\partial _{x_n}g(x,v_1+z_1))^\top . \end{aligned}$$

Hence,

$$\begin{aligned}&\frac{{\text {d}}}{{\text {d}}t}\Vert \nabla v_2^1\Vert _{2}^2+2\delta \Vert \nabla v_2^1\Vert _{2}^2\\&\le 2\int \nolimits _D\left| \nabla \sigma (x)v_2^1+\nabla g(x,v_1+z_1)+\partial _\xi g(x,v_1+z_1)\nabla (v_1+z_1)\right| |\nabla v_2^1|~{\text {d}}x\\&\quad \le \frac{1}{\delta } \int \nolimits _D\left| \nabla \sigma (x)v_2^1+\nabla g(x,v_1+z_1)+\partial _\xi g(x,v_1+z_1)\nabla (v_1+z_1)\right| ^2~{\text {d}}x\\&\qquad +\delta \Vert \nabla v_2^1\Vert _{2}^2 \end{aligned}$$

and further with (2.8)

$$\begin{aligned}&\frac{{\text {d}}}{{\text {d}}t}\Vert \nabla v_2^1\Vert _{2}^2+\delta \Vert \nabla v_2^1\Vert _{2}^2\\&\quad \le \frac{1}{\delta } \int \nolimits _D\sum _{i=1}^n\left( |\partial _{x_i} \sigma (x)v_2^1|+|\partial _{x_i}g(x,v_1+z_1)|\right. \ldots \\&\qquad \left. +|\partial _\xi g(x,v_1+z_1)\partial _{x_i}(v_1+z_1)|\right) ^2~{\text {d}}x\\&\quad \le \frac{1}{\delta } \int \nolimits _D\sum _{i=1}^n\left( C|v_2^1|+\delta _5(1+|v_1+z_1|)+\delta _5|\partial _{x_i}(v_1+z_1)|\right) ^2~{\text {d}}x\\&\quad \le \frac{2}{\delta } (C+\delta _5)^2 n \int \nolimits _D\left( |v_2^1|+1+|v_1+z_1|\right) ^2~{\text {d}}x+\frac{2\delta _5^2}{\delta } \int \nolimits _D\sum _{i=1}^n |\partial _{x_i}(v_1+z_1)|^2~{\text {d}}x\\&\quad =\frac{2}{\delta } (C+\delta _5)^2 n \int \nolimits _D\left( |v_2^1|+1+|v_1+z_1|\right) ^2~{\text {d}}x+\frac{2\delta _5^2}{\delta } \Vert \nabla (v_1+z_1)\Vert _{2}^2\\&\quad \le C_1+C_2(\Vert v_2^1\Vert _2^2+\Vert v_1\Vert _2^2+\Vert z_1\Vert _2^2)+C_3(\Vert \nabla v_1\Vert _2^2+\Vert \nabla z_1\Vert _2^2). \end{aligned}$$

where \(C:=\max _{1\le i\le n}\max _{x\in {{\overline{D}}}}| \partial _{x_i}\sigma (x)|\). Next, we apply Gronwall’s inequality, taking the initial condition \(v_2^1(0)=0\) into account, and obtain for \(t\ge 0\)

$$\begin{aligned} \Vert \nabla v_2^1\Vert _2^2&\le \int \nolimits _0^t \left[ C_1+C_2(\Vert v_2^1\Vert _2^2+\Vert v_1\Vert _2^2+\Vert z_1\Vert _2^2)+C_3(\Vert \nabla v_1\Vert _2^2+\Vert \nabla z_1\Vert _2^2)\right] \nonumber \\&\qquad \times \exp \left( (s-t)\delta \right) ~{\text {d}}s. \end{aligned}$$
(3.39)

We have from (3.19) the following equation

$$\begin{aligned} \frac{{\text {d}}}{{\text {d}}t}(\Vert v_1\Vert _2^2{+}\Vert v_2\Vert _2^2){+}M(\Vert v_1\Vert _2^2{+}\Vert v_2\Vert _2^2){+}d\Vert \nabla v_1\Vert _2^2\le {{\hat{C}}}+{{\tilde{C}}}(\Vert z_2(\theta _t\omega )\Vert _2^2+\Vert z_1(\theta _t\omega )\Vert _p^p), \end{aligned}$$
(3.40)

where \(M=\min \{d/c,\delta \}\) and certain constants \({{\hat{C}}},{{\tilde{C}}}\). We multiply (3.40) by \(\exp (Mt)\) and integrate between 0 and t

$$\begin{aligned}&\int \nolimits _0^t \exp (Ms)\frac{{\text {d}}}{{\text {d}}s} (\Vert v_1\Vert _2^2+\Vert v_2\Vert _2^2){\text {d}}s+M\int \nolimits _0^t \exp (Ms) (\Vert v_1\Vert _2^2+\Vert v_2\Vert _2^2) {\text {d}}s\\&\qquad {} +d \int \nolimits _0^t \exp (Ms)\Vert \nabla v_1\Vert _2^2{\text {d}}s\\&\quad \le \int \nolimits _0^t {{\hat{C}}}\exp (Ms){\text {d}}s+{{\tilde{C}}}\int \nolimits _0^t \exp (Ms) (\Vert z_2(\theta _s\omega )\Vert _2^2+\Vert z_1(\theta _s\omega )\Vert _p^p){\text {d}}s. \end{aligned}$$

This yields

$$\begin{aligned}&\int \nolimits _0^t\exp (M(s-t))\Vert \nabla v_1(s,\omega ,v_1^0(\omega ))\Vert _2^2{\text {d}}s\nonumber \\&\quad \le \frac{1}{d}\exp (-Mt)(\Vert v_1^0(\omega )\Vert _2^2+\Vert v_2^0(\omega )\Vert _2^2) +{{\hat{C}}}\nonumber \\&\qquad {}+{{\tilde{C}}}\int \nolimits _0^t \exp (M(s-t)) (\Vert z_2(\theta _s\omega )\Vert _2^2+\Vert z_1(\theta _s\omega )\Vert _p^p){\text {d}}s, \end{aligned}$$
(3.41)

as well as

$$\begin{aligned}&\Vert v_1(t,\omega ,v_1^0(\omega ))\Vert ^2_{2}+\Vert v_2(t,\omega ,v_2^0(\omega ))\Vert ^2_{2}\nonumber \\&\quad \le \left( \Vert v_1^0(\omega )\Vert ^2_{2}+\Vert v_2^0(\omega )\Vert ^2_{2}\right) \exp \left( -Mt\right) +{{\hat{C}}}\nonumber \\&\qquad {}+ {{\tilde{C}}}\int \nolimits _0^t\exp \left( M(s-t)\right) \left( \Vert z_2(\theta _s\omega )\Vert _{2}^2+\Vert z_1(\theta _s\omega )\Vert _{p}^p\right) ~{\text {d}}s. \end{aligned}$$

In particular, from the last estimate we obtain

$$\begin{aligned}&\int \nolimits _0^{t_{{\mathcal {D}}}(\omega )}(\Vert v_1(s,\theta _{-t}\omega ,v_1^0(\theta _{-t}\omega )\Vert _2^2+\Vert v_2(s,\theta _{-t}\omega ,v_2^0(\theta _{-t}\omega ))\Vert _2^2)\exp (M(s-t)){\text {d}}s\nonumber \\&\quad \le \int \nolimits _0^{t_{{\mathcal {D}}}(\omega )} \left( \Vert v_1^0(\theta _{-t}\omega )\Vert ^2_{2}+\Vert v_2^0(\theta _{-t}\omega )\Vert ^2_{2}\right) \exp \left( -Mt\right) {\text {d}}s+{{\hat{C}}}\int \nolimits _0^{t_{{\mathcal {D}}}(\omega )}\exp (M(s-t)){\text {d}}s\nonumber \\&\qquad {}+ {{\tilde{C}}}\int \nolimits _0^{t_{{\mathcal {D}}}(\omega )}\int \nolimits _0^s\exp \left( M(\tau -t)\right) \left( \Vert z_2(\theta _{\tau -t}\omega )\Vert _{2}^2+\Vert z_1(\theta _{\tau -t}\omega )\Vert _{p}^p\right) ~{\text {d}}\tau {\text {d}}s\nonumber \\&\quad \le \left( \Vert v_1^0(\theta _{-t}\omega )\Vert ^2_{2}+\Vert v_2^0(\theta _{-t}\omega )\Vert ^2_{2}\right) \exp \left( -Mt\right) t_{{\mathcal {D}}}(\omega )+{{\hat{C}}}\nonumber \\&\qquad {}+ {{\tilde{C}}} t_{{\mathcal {D}}}(\omega ) \int \nolimits _0^{t_{{\mathcal {D}}}(\omega )}\exp \left( M(\tau -t)\right) \left( \Vert z_2(\theta _{\tau -t}\omega )\Vert _{2}^2+\Vert z_1(\theta _{\tau -t}\omega )\Vert _{p}^p\right) ~{\text {d}}\tau . \end{aligned}$$
(3.42)

Here we have replaced \(\omega \) by \(\theta _{-t}\omega \) after integrating and used that \(t\ge t_{{\mathcal {D}}}(\omega )\).

Now, replacing \(\omega \) by \(\theta _{-t}\omega \) in (3.39), noting that \(\delta \ge M\) and assuming that \(t\ge t_{{\mathcal {D}}}(\omega )\), we compute

$$\begin{aligned}&\Vert \nabla v_2^1(t,\theta _{-t}\omega ,0)\Vert _2^2\nonumber \\&\quad \le \frac{C_1}{\delta }+C_2\int \nolimits _0^t \left[ \Vert v_2^1(s,\theta _{-t}\omega , 0)\Vert _2^2+\Vert v_1(s,\theta _{-t}\omega ,v_1^0(\theta _{-t}\omega ))\Vert _2^2+\Vert z_1(\theta _{s-t}\omega )\Vert _2^2\right. \nonumber \\&\qquad {}\left. +\Vert \nabla v_1(s,\theta _{-t}\omega ,v_1^0(\theta _{-t}\omega ))\Vert _2^2+\Vert \nabla z_1(\theta _{s-t}\omega )\Vert _2^2\right] \exp \left( (s-t)M\right) ~{\text {d}}s\nonumber \\&\quad \le C_1+C_2\int \nolimits _0^{t_{{\mathcal {D}}}(\omega )} \left[ \Vert v_2^1(s,\theta _{-t}\omega , 0)\Vert _2^2+\Vert v_1(s,\theta _{-t}\omega ,v_1^0(\theta _{-t}\omega ))\Vert _2^2\right] \exp \left( (s-t)M\right) {\text {d}}s\\&\qquad {}+C_2 \int \nolimits _{t_{{\mathcal {D}}}(\omega )}^t \left[ \Vert v_2^1(s,\theta _{-t}\omega , 0)\Vert _2^2+\Vert v_1(s,\theta _{-t}\omega ,v_1^0(\theta _{-t}\omega ))\Vert _2^2\right] \exp \left( (s-t)M\right) {\text {d}}s\\&\qquad {} + C_3 \exp (-Mt)(\Vert v_1^0(\theta _{-t}\omega )\Vert _2^2+\Vert v_2^0(\theta _{-t}\omega )\Vert _2^2)+C_4\int \nolimits _0^t \exp (M(s-t))\\&\qquad {} \times (\Vert z_2(\theta _{s-t}\omega )\Vert _2^2+\Vert z_1(\theta _{s-t}\omega )\Vert _p^p+\Vert z_1(\theta _{s-t}\omega )\Vert _2^2+\Vert \nabla z_1(\theta _{s-t}\omega )\Vert _2^2){\text {d}}s\\&\quad \le C_1+C_2\left( \Vert v_1^0(\theta _{-t}\omega )\Vert ^2_{2}+\Vert v_2^0(\theta _{-t}\omega )\Vert ^2_{2}\right) \exp \left( -Mt\right) t_{{\mathcal {D}}}(\omega )\\&\qquad {}+ C_5 t_{{\mathcal {D}}}(\omega ) \int \nolimits _0^{t_{{\mathcal {D}}}(\omega )}\exp \left( M(\tau -t)\right) \left( \Vert z_2(\theta _{\tau -t}\omega )\Vert _{2}^2+\Vert z_1(\theta _{\tau -t}\omega )\Vert _{p}^p\right) ~{\text {d}}\tau \\&\qquad {} +C_2 \int \nolimits _{t_{{\mathcal {D}}}(\omega )}^t \rho (\omega )\exp \left( (s-t)M\right) {\text {d}}s\\&\qquad {} + C_3 \exp (-Mt)(\Vert v_1^0(\theta _{-t}\omega )\Vert _2^2+\Vert v_2^0(\theta _{-t}\omega )\Vert _2^2)\nonumber \\&\qquad {}+C_4\int \nolimits _{-\infty }^0 \exp (Ms) (\Vert z_2(\theta _{s}\omega )\Vert _2^2+\Vert z_1(\theta _{s}\omega )\Vert _p^p+\Vert z_1(\theta _{s}\omega )\Vert _2^2+\Vert \nabla z_1(\theta _{s}\omega )\Vert _2^2){\text {d}}s\\&\quad \le C_1+C_2(t_{{\mathcal {D}}}(\omega ))\left( \Vert v_1^0(\theta _{-t}\omega )\Vert ^2_{2}+\Vert v_2^0(\theta _{-t}\omega )\Vert ^2_{2}\right) \exp \left( -Mt\right) +C_3 \rho (\omega )\\&\qquad +C_4(t_{{\mathcal {D}}}(\omega ))\int \nolimits _{-\infty }^0 \exp (Ms)\\&\qquad \times (\Vert z_2(\theta _{s}\omega )\Vert _2^2+\Vert z_1(\theta _{s}\omega )\Vert _p^p+\Vert z_1(\theta _{s}\omega )\Vert _2^2+\Vert \nabla z_1(\theta _{s}\omega )\Vert _2^2){\text {d}}s \end{aligned}$$

where we have used (3.41) in the second inequality and (3.42) in the third inequality. Furthermore, we made use of the absorption property in the third inequality. Finally, since \(\Vert z_2(\theta _{s}\omega )\Vert _2^2,\Vert z_1(\theta _{s}\omega )\Vert _p^p,\Vert z_1(\theta _{s}\omega )\Vert _2^2,\Vert \nabla z_1(\theta _{s}\omega )\Vert _2^2\) (see Lemma 3.17 and Remark 3.18) and \(\Vert v_1^0(\theta _{-t}\omega )\Vert ^2_{2},\Vert v_2^0(\theta _{-t}\omega )\Vert ^2_{2}\) (by assumption) are tempered random variables, we can combine the right hand side into one tempered random variable \(\rho _2(\omega )\) and this concludes the proof. \(\square \)

Theorem 3.26

Let Assumptions 2.1 and 2.2 hold. Then the random dynamical system defined in (3.11) has a unique \({{\mathcal {T}}}\)-random attractor \({{\mathcal {A}}}\).

Proof

By the previous lemmas, there exists a compact absorbing set in \({\mathcal {T}}\) for the RDS \(\varphi \), given by (3.28). Thus, Theorem 3.11 guarantees the existence of a unique \({{\mathcal {T}}}\)-random attractor. \(\square \)

Applications

FitzHugh–Nagumo system

Let us consider the famous stochastic FitzHugh–Nagumo system, i.e.,

$$\begin{aligned} \begin{array}{rcl} {\text {d}}u_1 &{} =&{} \left( \nu _1\Delta u_1-p(x)u_1-u_1(u_1-1)(u_1-\alpha _1)-u_2\right) {\text {d}}t+~B_1{\text {d}}W_1,\\ {\text {d}}u_2 &{} =&{} \left( \alpha _2u_1-\alpha _3u_2\right) {\text {d}}t+~B_2{\text {d}}W_2, \end{array} \end{aligned}$$
(4.1)

where \(D=[0,1]\) and \(\alpha _j\in {\mathbb {R}}\) for \(j\in \{1,2,3\}\) are fixed parameters. We always assume that the noise terms satisfy Assumption 2.2 and that \(p\in C^{2}\). Such systems have been considered under various assumptions by numerous authors; see, for instance, [4, 31] and the references therein. Our general assumptions are satisfied in this example as follows. Identifying the terms with those given in (2.1)–(2.2), we have

$$\begin{aligned}&h(x,u_1)=p(x)u_1+u_1(u_1-1)(u_1-\alpha _1), \quad f(x,u_1,u_2)=u_2,\\&\sigma (x)u_2=\alpha _3 u_2,\quad g(x,u_1)=-\alpha _2u_1. \end{aligned}$$

We have \(\sigma (x)=\alpha _3\) and \(|f(x,u_1,u_2)|=|u_2|\), i.e., (2.7) and (2.6) are fulfilled. Furthermore, \(|\partial _ug(x,u_1)|=|\alpha _2|\) and \(|\partial _{x_i}g(x,u_1)|=0\) for \(i=1,\ldots ,n\), hence (2.8) is satisfied. Finally, as a polynomial of odd degree with positive leading coefficient, h fulfils (2.5). Thus the analysis above guarantees the existence of global mild solutions and of a random pullback attractor for the stochastic FitzHugh–Nagumo system.
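To complement this example, the following is a purely illustrative finite-difference/Euler–Maruyama sketch of the stochastic FitzHugh–Nagumo system (4.1) on \(D=[0,1]\) with homogeneous Dirichlet boundary conditions. All parameter values, the choice of \(p(x)\), the initial data, and the grid-truncated additive noise are hypothetical and not taken from the paper; they merely indicate how the dissipative structure keeps trajectories bounded.

```python
import numpy as np

def simulate_fhn(n=64, T=5.0, dt=1e-3, nu1=1e-2, alpha1=0.25,
                 alpha2=1.0, alpha3=2.0, noise=0.01, seed=0):
    """Illustrative Euler-Maruyama scheme for (4.1); parameters are hypothetical."""
    rng = np.random.default_rng(seed)
    x = np.linspace(0.0, 1.0, n + 2)[1:-1]   # interior grid points of [0,1]
    h = x[1] - x[0]
    p = 1.0 + 0.5 * np.sin(np.pi * x)        # a smooth, positive sample choice of p(x)
    u1 = 0.1 * np.sin(np.pi * x)             # initial data compatible with u1 = 0 on the boundary
    u2 = np.zeros(n)
    for _ in range(int(T / dt)):
        # second-order central-difference Laplacian with u1 = 0 at both boundary points
        lap = np.roll(u1, 1) + np.roll(u1, -1) - 2 * u1
        lap[0] = u1[1] - 2 * u1[0]
        lap[-1] = u1[-2] - 2 * u1[-1]
        lap /= h ** 2
        drift1 = nu1 * lap - p * u1 - u1 * (u1 - 1) * (u1 - alpha1) - u2
        drift2 = alpha2 * u1 - alpha3 * u2
        # additive noise, crudely truncated on the grid (illustration only)
        u1 = u1 + dt * drift1 + noise * np.sqrt(dt) * rng.standard_normal(n)
        u2 = u2 + dt * drift2 + noise * np.sqrt(dt) * rng.standard_normal(n)
    return u1, u2
```

The explicit step is stable here because \(\nu _1\,{\text {d}}t/h^2\approx 0.04\ll 1/2\) and the cubic term acts dissipatively for large \(|u_1|\); a rigorous treatment of the noise discretization is beyond the scope of this sketch.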

The driven cubic-quintic Allen–Cahn model

The cubic-quintic Allen–Cahn (or real Ginzburg–Landau) equation is given by

$$\begin{aligned} \partial _t u = \Delta u + p_1u+u^3-u^5,\qquad u=u(x,t), \end{aligned}$$
(4.2)

where \((x,t)\in D\times [0,T)\), \(p_1\in {\mathbb {R}}\) is a fixed parameter, and D is a bounded open domain with regular boundary. The cubic-quintic polynomial non-linearity frequently occurs in the modelling of Euler buckling [30], as a re-stabilization mechanism in paradigmatic models for fluid dynamics [21], in normal form theory and travelling wave dynamics [13, 16], as well as in test problems for deterministic [17] and stochastic numerical continuation [18]. If we want to allow for slowly-varying time-dependent forcing on u and sufficiently regular additive noise, it is natural to extend the model (4.2) to

$$\begin{aligned} \begin{array}{lcl} {\text {d}}u_1&{}=&{}\left( \Delta u_1 + p_1u_1+u_1^3-u_1^5-u_2\right) ~{\text {d}}t+B_1~{\text {d}}W_1,\\ {\text {d}}u_2&{}=&{}\varepsilon (p_2 u_2-q_2u_1) ~{\text {d}}t+B_2~{\text {d}}W_2, \end{array} \end{aligned}$$
(4.3)

where \(p_2\), \(q_2\), \(0<\varepsilon \ll 1 \) are parameters. One easily checks again that (4.3) fits our general framework as \(h(x,u_1)=-p_1u_1-u_1^3+u_1^5\) satisfies the crucial dissipation assumption (2.5).
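As a sketch of this check (the precise form of (2.5) is stated in Sect. 2; here we only verify the standard polynomial lower bound it encodes), Young's inequality gives \(u_1^4\le \tfrac{1}{4}u_1^6+C\) and \(|p_1|u_1^2\le \tfrac{1}{4}u_1^6+C(p_1)\), so that

$$\begin{aligned} h(x,u_1)\,u_1=-p_1u_1^2-u_1^4+u_1^6\ge \tfrac{1}{2}\,u_1^6-C(p_1), \end{aligned}$$

i.e. h is dissipative of polynomial order \(p=6\), uniformly in \(x\in D\).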

References

1. Adili, A., Wang, B.: Random attractors for non-autonomous stochastic FitzHugh–Nagumo systems with multiplicative noise. In: Discrete and Continuous Dynamical Systems SI (2013)

2. Adili, A., Wang, B.: Random attractors for stochastic FitzHugh–Nagumo systems driven by deterministic non-autonomous forcing. Discrete Contin. Dyn. Syst. Ser. B 18(3), 643–666 (2013)

3. Arnold, L.: Random Dynamical Systems. Springer, Berlin (2013)

4. Bonaccorsi, S., Mastrogiacomo, E.: Analysis of the stochastic FitzHugh–Nagumo system. Infin. Dimens. Anal. Quantum Probab. Relat. Top. 11(03), 427–446 (2008)

5. Caraballo, T., Langa, J.A., Robinson, J.C.: Stability and random attractors for a reaction–diffusion equation with multiplicative noise. Discrete Contin. Dyn. Syst. 6(4), 875–892 (2000)

6. Chepyzhov, V.V., Vishik, M.I.: Trajectory attractors for reaction–diffusion systems. Topol. Methods Nonlinear Anal. 7(1), 49–76 (1996)

7. Chueshov, I., Schmalfuss, B.: Master-slave synchronization and invariant manifolds for coupled stochastic systems. J. Math. Phys. 51(10), 102702 (2010)

8. Crauel, H., Debussche, A., Flandoli, F.: Random attractors. J. Dyn. Differ. Equ. 9(2), 307–341 (1997)

9. Crauel, H., Flandoli, F.: Attractors for random dynamical systems. Probab. Theory Relat. Fields 100(3), 365–393 (1994)

10. Da Prato, G.: Kolmogorov Equations for Stochastic PDEs. Birkhäuser, Basel (2012)

11. Da Prato, G., Zabczyk, J.: Stochastic Equations in Infinite Dimensions. Cambridge University Press, Cambridge (1992)

12. Debussche, A.: Hausdorff dimension of a random invariant set. J. Math. Pures Appl. 77(10), 967–988 (1998)

13. Deissler, R., Brand, H.: Periodic, quasiperiodic, and chaotic localized solutions of the quintic complex Ginzburg–Landau equation. Phys. Rev. Lett. 72(4), 478–481 (1994)

14. Flandoli, F., Schmalfuss, B.: Random attractors for the 3D stochastic Navier–Stokes equation with multiplicative white noise. Stoch. Int. J. Probab. Stoch. Process. 59(1–2), 21–45 (1996)

15. Gess, B., Liu, W., Röckner, M.: Random attractors for a class of stochastic partial differential equations driven by general additive noise. J. Differ. Equ. 251(4–5), 1225–1253 (2011)

16. Kapitula, T., Sandstede, B.: Instability mechanism for bright solitary-wave solutions to the cubic-quintic Ginzburg–Landau equation. JOSA B 15(11), 2757–2762 (1998)

17. Kuehn, C.: Efficient gluing of numerical continuation and a multiple solution method for elliptic PDEs. Appl. Math. Comput. 266, 656–674 (2015)

18. Kuehn, C.: Numerical continuation and SPDE stability for the 2d cubic-quintic Allen–Cahn equation. SIAM/ASA J. Uncertain. Quantif. 3(1), 762–789 (2015)

19. Li, Y., Yin, J.: A modified proof of pullback attractors in a Sobolev space for stochastic FitzHugh–Nagumo equations. Discrete Contin. Dyn. Syst. Ser. B 21(4), 1203–1223 (2016)

20. Marion, M.: Finite-dimensional attractors associated with partly dissipative reaction–diffusion systems. SIAM J. Math. Anal. 20(4), 816–844 (1989)

21. Morgan, D., Dawes, J.: The Swift–Hohenberg equation with a nonlocal nonlinearity. Physica D 270, 60–80 (2014)

22. Nagel, R.: Towards a "matrix theory" for unbounded operator matrices. Math. Z. 201(1), 57–68 (1989)

23. Pazy, A.: Semigroups of Linear Operators and Applications to Partial Differential Equations. Springer, Berlin (2012)

24. Sauer, M., Stannat, W.: Analysis and approximation of stochastic nerve axon equations. Math. Comput. 85(301), 2457–2481 (2016)

25. Schmalfuss, B.: Backward cocycles and attractors of stochastic differential equations. In: International Seminar on Applied Mathematics—Nonlinear Dynamics: Attractor Approximation and Global Behaviour, pp. 185–192 (1992)

26. Schmalfuss, B.: Attractors for the non-autonomous dynamical systems. In: Equadiff 99, vol. 2, pp. 684–689. World Scientific (2000)

27. Sell, G.R., You, Y.: Dynamics of Evolutionary Equations. Springer, Berlin (2002)

28. Temam, R.: Infinite-Dimensional Dynamical Systems in Mechanics and Physics. Springer, Berlin (2012)

29. Van Neerven, J.: Stochastic Evolution Equations. ISEM Lecture Notes. Cambridge University Press, Cambridge (2008)

30. Venkadesan, M., Guckenheimer, J., Valero-Cuevas, F.: Manipulating the edge of instability. J. Biomech. 40, 1653–1661 (2007)

31. Wang, B.: Attractors for reaction–diffusion equations in unbounded domains. Physica D 128(1), 41–52 (1999)

32. Wang, B.: Pullback attractors for the non-autonomous FitzHugh–Nagumo system on unbounded domains. Nonlinear Anal. Theor. 70(11), 3799–3815 (2009)

33. Wang, B.: Random attractors for the stochastic FitzHugh–Nagumo system on unbounded domains. Nonlinear Anal. Theor. 71(7–8), 2811–2828 (2009)

34. Zeidler, E.: Nonlinear Functional Analysis and Its Applications: Part 2 B: Nonlinear Monotone Operators. Springer, Berlin (1989)

35. Zhou, S., Wang, Z.: Finite fractal dimensions of random attractors for stochastic FitzHugh–Nagumo system with multiplicative white noise. J. Math. Anal. Appl. 441(2), 648–667 (2016)


Acknowledgements

Open Access funding provided by Projekt DEAL.

Author information

Corresponding author

Correspondence to Anne Pein.

We thank the anonymous referee for useful comments. CK and AN have been supported by a DFG grant in the D-A-CH framework (KU 3333/2-1). CK and AP acknowledge support by a Lichtenberg Professorship.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


Cite this article

Kuehn, C., Neamţu, A. & Pein, A. Random attractors for stochastic partly dissipative systems. Nonlinear Differ. Equ. Appl. 27, 35 (2020). https://doi.org/10.1007/s00030-020-00638-8


Mathematics Subject Classification

  • 60H15
  • 37H05
  • 37L55

Keywords

  • Random attractor
  • Partly dissipative systems
  • Stochastic partial differential equation
  • Random dynamical system