We first collect some basic notions and results from the theory of random dynamical systems, which are mainly taken from [3, 10, 28]. Then, in Sect. 2.2, we state a general existence theorem for random exponential attractors which was proven in [7]. In Sect. 2.3, we recall the notion of pathwise mild solutions for stochastic evolution equations introduced in [26].
Random dynamical systems and random attractors
In order to quantify uncertainty, we describe an appropriate model of the noise. If not further specified, \((\Omega ,\mathcal {F},\mathbb {P})\) denotes a probability space. Moreover, X is a separable and reflexive Banach space and \(\Vert \cdot \Vert _X\) denotes the norm in X.
Definition 2.1
Let \(\theta :\mathbb {R}\times \Omega \rightarrow \Omega \) be a family of \(\mathbb {P}\)-preserving transformations (meaning that \(\theta _{t}\mathbb {P}=\mathbb {P}\) for all \(t\in \mathbb {R}\)) with the following properties:
- (i) the mapping \((t,\omega )\mapsto \theta _{t}\omega \) is \((\mathcal {B}(\mathbb {R})\otimes \mathcal {F},\mathcal {F})\)-measurable;
- (ii) \(\theta _{0}=\text {Id}_{\Omega }\);
- (iii) \(\theta _{t+s}=\theta _{t}\circ \theta _{s}\) for all \(t,s\in \mathbb {R}\),
where \(\mathcal {B}(\mathbb {R})\) denotes the Borel \(\sigma \)-algebra. Then, the quadruple \((\Omega ,\mathcal {F},\mathbb {P},(\theta _{t})_{t\in \mathbb {R}})\) is called a metric dynamical system.
Remark 2.2
- (a) Here and in the sequel, we write \(\theta _{t}\omega \) for \(\theta (t,\omega ),\ t\in \mathbb {R},\omega \in \Omega \).
- (b) We always assume that \(\mathbb {P}\) is ergodic with respect to \((\theta _{t})_{t\in \mathbb {R}}\), i.e., any \((\theta _{t})_{t\in \mathbb {R}}\)-invariant subset has either zero or full measure.
Our aim is to introduce a metric dynamical system associated with a two-sided X-valued Wiener process.
For the sake of completeness, we recall the construction of such a process if X is not a Hilbert space. First, we introduce an auxiliary separable Hilbert space H and denote by \((W_H(t))_{t\ge 0}\) an H-cylindrical Brownian motion, i.e., \((W_{H}(t)h)_{t\ge 0}\) is a real-valued Brownian motion for every \(h\in H\) and \(\mathbb {E}[W_{H}(t)h\cdot W_H(s)g]=\min \{s,t\}[h,g]_{H}\) for \(s,t\ge 0\) and \(h,g\in H\), where \([\cdot ,\cdot ]\) denotes the inner product in H. Furthermore, an operator \(G:H\rightarrow X\) is called \(\gamma \)-radonifying if
$$\begin{aligned} \mathbb {E}\left\| \sum \limits _{n=1}^{\infty }\gamma _{n}G e_{n} \right\| _X^{2}<\infty , \end{aligned}$$
where \((\gamma _{n})_{n\in \mathbb {N}}\) is a sequence of independent standard Gaussian random variables and \((e_{n})_{n\in \mathbb {N}}\) is an orthonormal basis in H. If X is isomorphic to H, then the previous condition means that G is a Hilbert–Schmidt operator (notation: \(G\in \mathcal {L}_{2}(H)\)). In this framework, letting \((\tilde{e}_n)_{n\in \mathbb {N}}\) be an orthonormal basis of \((\text{ ker }G)^{\perp }\), we know according to [30, Prop. 8.8] that the series
$$\begin{aligned} \sum \limits _{n=1}^{\infty }W_{H}(t){\tilde{e}}_n\,G{\tilde{e}}_n \end{aligned}$$
converges almost surely and defines an X-valued Brownian motion. Its covariance operator is given by \(tGG^{*}\), where \(G^*\) denotes the adjoint of G. Moreover, every X-valued Brownian motion can be obtained in this way. Again, if X is isomorphic to H and \(G\in \mathcal {L}_{2}(H)\), i.e., \(\Vert G\Vert ^2_{\mathcal {L}_{2}(H)}=\text{ Tr }(G G^{*})<\infty \), then the previous definition yields a trace-class Wiener process. Finally, we extend this to a two-sided process in the standard way.
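For illustration, and only as a reminder of standard facts, in the Hilbert space case \(X=H\) the construction reduces to the familiar expansion of a Q-Wiener process: if \(Q=GG^*\) is trace class with eigenvalues \(q_n\ge 0\), orthonormal eigenvectors \(f_n\), and \((\beta _n)_{n\in \mathbb {N}}\) are independent real-valued Brownian motions, then
$$\begin{aligned} W(t)=\sum \limits _{n=1}^{\infty }\sqrt{q_n}\,\beta _n(t)f_n,\qquad \mathbb {E}\big [\langle W(t),f\rangle _H\langle W(s),g\rangle _H\big ]=\min \{s,t\}\,\langle Qf,g\rangle _H, \end{aligned}$$
where the series converges almost surely and in mean square since \(\text{ Tr }(Q)=\sum _{n}q_n<\infty \).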
To obtain a metric dynamical system associated with such a process, we let \(C_{0}(\mathbb {R};X)\) denote the set of continuous X-valued functions which are zero at zero equipped with the compact open topology. We take \(\mathbb {P}\) as the Wiener measure on \(\mathcal {B}(C_{0}(\mathbb {R};X))\) having a covariance operator Q on X. Then, Kolmogorov’s theorem about the existence of a continuous version yields the canonical probability space \((C_{0}(\mathbb {R};X),\mathcal {B}(C_{0}(\mathbb {R};X)),\mathbb {P})\). Moreover, to obtain an ergodic metric dynamical system we introduce the Wiener shift, which is defined as
$$\begin{aligned} \theta _{t}\omega (\cdot {}):=\omega (t+\cdot {})-\omega (t)\quad \text{ for } \text{ all } t\in \mathbb {R}, \omega \in C_{0}(\mathbb {R};X). \end{aligned}$$
(2.1)
Throughout this manuscript, \(\theta _t\omega (\cdot )\) will always denote the Wiener shift.
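For the reader's convenience, the flow property (iii) of Definition 2.1 can be checked for (2.1) by a direct computation: for \(s,t\in \mathbb {R}\) and \(\omega \in C_{0}(\mathbb {R};X)\),
$$\begin{aligned} (\theta _{t}(\theta _{s}\omega ))(\cdot )=(\theta _{s}\omega )(t+\cdot )-(\theta _{s}\omega )(t)=\big (\omega (s+t+\cdot )-\omega (s)\big )-\big (\omega (s+t)-\omega (s)\big )=(\theta _{t+s}\omega )(\cdot ). \end{aligned}$$
Property (ii) is immediate, and the invariance and ergodicity of the Wiener measure \(\mathbb {P}\) under \((\theta _t)_{t\in \mathbb {R}}\) are classical facts.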
We now recall the definition of a random dynamical system.
Definition 2.3
A continuous random dynamical system on X over a metric dynamical system \((\Omega ,\mathcal {F},\mathbb {P},(\theta _{t})_{t\in \mathbb {R}})\) is a mapping
$$\begin{aligned} \varphi :\mathbb {R^{+}}\times \Omega \times X\rightarrow X, (t,\omega ,x)\mapsto \varphi (t,\omega ,x), \end{aligned}$$
which is \((\mathcal {B}(\mathbb {R}^{+})\otimes \mathcal {F}\otimes \mathcal {B}(X),\mathcal {B}(X))\)-measurable and satisfies:
- (i) \(\varphi (0,\omega ,\cdot {})=\text {Id}_{X}\) for all \(\omega \in \Omega \);
- (ii) \(\varphi (t+\tau ,\omega ,x)=\varphi (t,\theta _{\tau }\omega ,\varphi (\tau ,\omega ,x))\) for all \(x\in X\), \(t,\tau \in \mathbb {R}^{+}\) and all \(\omega \in \Omega \);
- (iii) \(\varphi (t,\omega ,\cdot {}):X\rightarrow X\) is continuous for all \(t\in \mathbb {R}^{+}\) and \(\omega \in \Omega \).
The second property is referred to as the cocycle property and generalizes the semigroup property. In fact, if \(\varphi \) is independent of \(\omega \), (ii) reduces exactly to the semigroup property, i.e., \(\varphi (t+\tau ,x)=\varphi (t,\varphi (\tau ,x))\). For random dynamical systems, the evolution of the noise \((\theta _{t}\omega )_{t\in \mathbb {R}}\) must additionally be taken into account.
Under suitable assumptions, the solution operator of a random differential equation generates a random dynamical system. Stochastic (partial) differential equations are more involved, since stochastic integrals are only defined almost surely, whereas the cocycle property must hold for all \(\omega \).
It is well known, see the monograph by Arnold [3], that stochastic (ordinary) differential equations generate random dynamical systems under suitable assumptions on the coefficients. This is due to the flow property, see [19], which can be deduced from Kolmogorov's theorem about the existence of a (Hölder-)continuous random field with a finite-dimensional parameter range. Here, the parameters of this random field are the time and the nonrandom initial data.
Whether an SPDE generates a random dynamical system has been a long-standing open problem, since Kolmogorov's theorem breaks down for random fields parameterized by infinite-dimensional Hilbert spaces, see [23]. As a consequence, the question of how a random dynamical system can be obtained from an SPDE is not trivial: solutions are only defined almost surely, which is insufficient for the cocycle property. In particular, there exist exceptional sets which depend on the initial condition, and if more than countably many exceptional sets occur, it is unclear how the random dynamical system can be defined. This problem has been fully solved only under restrictive assumptions on the structure of the noise. More precisely, for SPDEs with additive or linear multiplicative noise, there are standard transformations which convert these SPDEs into PDEs with random coefficients. Since random PDEs can be solved pathwise, the generation of the random dynamical system is then straightforward.
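To illustrate the transformation in the additive noise case, consider, purely formally, the equation \(\text {d}u=Au~\text {d}t+F(u)~\text {d}t+\text {d}W\), and let \(z(\theta _t\omega )\) denote the stationary Ornstein–Uhlenbeck process solving the linear equation \(\text {d}z=Az~\text {d}t+\text {d}W\) (assuming, for instance, that A generates an exponentially stable analytic semigroup, so that z is well defined). Setting \(v(t):=u(t)-z(\theta _t\omega )\), the stochastic differential cancels and one obtains, for every fixed \(\omega \),
$$\begin{aligned} \frac{\text {d}}{\text {d}t}v(t)=Av(t)+F\big (v(t)+z(\theta _t\omega )\big ), \end{aligned}$$
a PDE with random coefficients that can be solved pathwise; the cocycle is then recovered via \(u(t)=v(t)+z(\theta _t\omega )\).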
Before we recall the notions of global and exponential random attractors, we need to introduce the class of tempered random sets. From now on, when stating properties involving a random parameter, we assume, unless otherwise specified, that they hold on a \((\theta _t)_{t\in \mathbb {R}}\)-invariant subset of \(\Omega \) of full measure, i.e., there exists a \((\theta _t)_{t\in \mathbb {R}}\)-invariant subset \(\Omega _0\subset \Omega \) of full measure such that the property holds for all \(\omega \in \Omega _0\). To simplify notation, we denote \(\Omega _0\) again by \(\Omega \).
Definition 2.4
A multifunction \(\mathcal {B}=\{B(\omega )\}_{\omega \in \Omega }\) of nonempty closed subsets \(B(\omega )\) of X is called a random set if
$$\begin{aligned} \omega \mapsto \inf _{y\in B(\omega )}\Vert x-y\Vert _X \end{aligned}$$
is a random variable for each \(x\in X\).
The random set \(\mathcal {B}\) is bounded (or compact) if the sets \(B(\omega )\subset X\) are bounded (or compact) for all \(\omega \in \Omega .\)
Definition 2.5
A random bounded set \(\{B(\omega )\}_{\omega \in \Omega }\) of X is called tempered with respect to \((\theta _{t})_{t\in \mathbb {R}}\) if for all \(\omega \in \Omega \) it holds that
$$\begin{aligned} \lim \limits _{t\rightarrow \infty }\text {e}^{-\beta t}\sup \limits _{x\in B(\theta _{-t}\omega )}\Vert x\Vert _X=0\quad \text{ for } \text{ all } \beta >0. \end{aligned}$$
Here and in the sequel, we denote by \(\mathcal {D}\) the collection of tempered random sets in X.
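For instance, every deterministic bounded set \(B(\omega )\equiv B\subset X\) is tempered, since
$$\begin{aligned} \text {e}^{-\beta t}\sup \limits _{x\in B(\theta _{-t}\omega )}\Vert x\Vert _X=\text {e}^{-\beta t}\sup \limits _{x\in B}\Vert x\Vert _X\rightarrow 0\quad \text{ as } t\rightarrow \infty \end{aligned}$$
for every \(\beta >0\). More generally, a random ball \(B(\omega )=B(0,r(\omega ))\) is tempered if and only if its radius grows at most subexponentially along the backward orbit of the noise, i.e., \(\lim _{t\rightarrow \infty }\text {e}^{-\beta t}r(\theta _{-t}\omega )=0\) for all \(\beta >0\).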
Definition 2.6
Let \(\varphi \) be a random dynamical system on X. A random set \(\{\mathcal {A}(\omega )\}_{\omega \in \Omega }\in \mathcal {D}\) is called a \(\mathcal {D}\)-random (pullback) attractor for \(\varphi \) if the following properties are satisfied:
- (a) \(\mathcal {A}(\omega )\) is compact for every \(\omega \in \Omega \);
- (b) \(\{\mathcal {A}(\omega )\}_{\omega \in \Omega }\) is \(\varphi \)-invariant, i.e.,
$$\begin{aligned} \varphi (t,\omega ,\mathcal {A}(\omega ))=\mathcal {A}(\theta _{t}\omega ) \text{ for } \text{ all } t\ge 0, \omega \in \Omega ; \end{aligned}$$
- (c) \(\{\mathcal {A}(\omega )\}_{\omega \in \Omega }\) pullback attracts every set in \(\mathcal {D}\), i.e., for every \(D=\{D(\omega )\}_{\omega \in \Omega }\in \mathcal {D}\),
$$\begin{aligned} \lim \limits _{t\rightarrow \infty }d(\varphi (t,\theta _{-t}\omega ,D(\theta _{-t}\omega )),\mathcal {A}(\omega ))=0, \end{aligned}$$
where d denotes the Hausdorff semimetric in X, \(d(A,B)=\sup \nolimits _{a\in A}\inf \nolimits _{b\in B}\Vert a-b\Vert _X\), for any subsets \(A\subseteq X\) and \(B\subseteq X\).
The following theorem provides a criterion for the existence of random attractors, see Theorem 4 in [11]. The uniqueness follows from Corollary 1 in [11].
Theorem 2.7
There exists a \(\mathcal {D}\)-random (pullback) attractor for \(\varphi \) if and only if there exists a compact random set that pullback attracts all random sets \(D\in \mathcal {D}\). Moreover, the random (pullback) attractor is unique.
One way of proving the existence of the random attractor, which in addition implies its finite fractal dimension, is to show that a random exponential attractor exists. Exponential attractors are compact sets of finite fractal dimension that contain the global attractor and attract at an exponential rate. This notion was first introduced for semigroups in the autonomous deterministic setting [12] and has later been extended to nonautonomous and random dynamical systems, see [7, 9] and the references therein.
Here, we consider so-called nonautonomous random exponential attractors, see [7]. While random exponential attractors in the strict sense are positively \(\varphi \)-invariant, nonautonomous random exponential attractors are only positively \(\varphi \)-invariant in the weaker, nonautonomous sense. Constructing exponential attractors for time-continuous random dynamical systems that are positively \(\varphi \)-invariant typically requires Hölder continuity in time of the cocycle, which is a restrictive assumption. If we relax the invariance property and consider nonautonomous random exponential attractors instead, only Lipschitz continuity of the cocycle in space is needed. In fact, the construction is essentially simplified, we obtain better bounds for the fractal dimension, and the assumption of Hölder continuity in time can be omitted, see [7]. Even though we could prove Hölder continuity in time of the cocycle for our particular problem, we do not pursue this, since it has no added value for our main results and would lead to weaker bounds for the fractal dimension.
Definition 2.8
A nonautonomous tempered random set \(\{\mathcal {M}(t,\omega )\}_{t\in \mathbb {R}, \omega \in \Omega }\) is called a nonautonomous \(\mathcal {D}\)-random (pullback) exponential attractor for \(\varphi \) if there exists \({\tilde{t}}>0\) such that \(\mathcal {M}(t+{\tilde{t}},\omega )=\mathcal {M}(t,\omega )\) for all \(t\in \mathbb {R},\omega \in \Omega ,\) and the following properties are satisfied:
- (a) \(\mathcal {M}(t,\omega )\) is compact for every \(t\in \mathbb {R},\omega \in \Omega \);
- (b) \(\{\mathcal {M}(t,\omega )\}_{t\in \mathbb {R},\omega \in \Omega }\) is positively \(\varphi \)-invariant in the nonautonomous sense, i.e.,
$$\begin{aligned} \varphi (s,\omega ,\mathcal {M}(t,\omega ))\subseteq \mathcal {M}(s+t,\theta _{s}\omega )\quad \text{ for } \text{ all } s\ge 0, t\in \mathbb {R}, \omega \in \Omega ; \end{aligned}$$
- (c) \(\{\mathcal {M}(t,\omega )\}_{t\in \mathbb {R}, \omega \in \Omega }\) is pullback \(\mathcal {D}\)-attracting at an exponential rate, i.e., there exists \(\alpha >0\) such that
$$\begin{aligned} \lim \limits _{s\rightarrow \infty }\text {e}^{\alpha s}d(\varphi (s,\theta _{-s}\omega ,D(\theta _{-s}\omega )),\mathcal {M}(t,\omega ))=0\quad \text {for all } D\in \mathcal {D}, t\in \mathbb {R},\omega \in \Omega ; \end{aligned}$$
- (d) the fractal dimension of \(\{\mathcal {M}(t,\omega )\}_{t\in \mathbb {R}, \omega \in \Omega }\) is finite, i.e., there exists a random variable \(k(\omega )\ge 0\) such that
$$\begin{aligned} \sup _{t\in \mathbb {R}}\text {dim}_f(\mathcal {M}(t,\omega ))\le k(\omega )<\infty \quad \text {for all } \omega \in \Omega . \end{aligned}$$
We recall that the fractal dimension of a precompact subset \(M\subset X\) is defined as
$$\begin{aligned} \text {dim}_f(M)=\limsup _{\varepsilon \rightarrow 0}\log _{\frac{1}{\varepsilon }}(N_\varepsilon (M)), \end{aligned}$$
where \(N_\varepsilon (M)\) denotes the minimal number of \(\varepsilon \)-balls in X with centers in M needed to cover the set M.
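As a simple illustration, for the unit cube \(M=[0,1]^n\subset \mathbb {R}^n\) one needs \(N_\varepsilon (M)\simeq \varepsilon ^{-n}\) balls of radius \(\varepsilon \) to cover M, so that
$$\begin{aligned} \text {dim}_f([0,1]^n)=\limsup _{\varepsilon \rightarrow 0}\frac{\log (N_\varepsilon ([0,1]^n))}{\log (\frac{1}{\varepsilon })}=n, \end{aligned}$$
in accordance with the usual notion of dimension, while for genuinely fractal sets such as the middle-third Cantor set the same formula yields the noninteger value \(\log _3 2\).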
By Theorem 2.7, the existence of a nonautonomous random exponential attractor immediately implies that the (global) random attractor exists. Moreover, the global random attractor is contained in the random exponential attractor, and hence, its fractal dimension is finite.
Existence proofs for global and exponential random attractors are typically based on the existence of a pullback \(\mathcal {D}\)-absorbing set for \(\varphi \).
Definition 2.9
A set \(\{B(\omega )\}_{\omega \in \Omega }\in \mathcal {D}\) is called random pullback \(\mathcal {D}\)-absorbing for \(\varphi \) if for every \(D=\{D(\omega )\}_{\omega \in \Omega }\in \mathcal {D}\) and \(\omega \in \Omega \), there exists a random time \(T_{D}(\omega )\ge 0\) such that
$$\begin{aligned} \varphi (t,\theta _{-t}\omega ,D(\theta _{-t}\omega ))\subseteq B(\omega ) \quad \text{ for } \text{ all } t\ge T_{D}(\omega ). \end{aligned}$$
The following condition is convenient for establishing the existence of an absorbing set. Namely, if for every \(D\in \mathcal {D}\), \(\omega \in \Omega \) and \(x\in D(\theta _{-t}\omega )\), it holds that
$$\begin{aligned} \limsup \limits _{t\rightarrow \infty } \Vert \varphi (t,\theta _{-t}\omega ,x)\Vert _X\le \rho (\omega ), \end{aligned}$$
(2.2)
where \(\rho (\omega )>0\) for every \(\omega \in \Omega \), then the ball \(B(\omega ):=B(0,\rho (\omega )+\delta )\) centered at 0 with radius \(\rho (\omega )+\delta \), for some constant \(\delta >0\), is a random absorbing set. For further details and applications, see [5, 28].
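A typical way to verify (2.2) is to establish a pathwise estimate of the form \(\Vert \varphi (t,\theta _{-t}\omega ,x)\Vert _X\le c\,\text {e}^{-\lambda t}\Vert x\Vert _X+\rho (\omega )\) with \(\lambda >0\); the precise form of this estimate is only an assumption for the purpose of illustration. Since \(D\in \mathcal {D}\) is tempered, the first term vanishes in the pullback limit:
$$\begin{aligned} \limsup \limits _{t\rightarrow \infty }\sup \limits _{x\in D(\theta _{-t}\omega )}\Vert \varphi (t,\theta _{-t}\omega ,x)\Vert _X\le \limsup \limits _{t\rightarrow \infty }c\,\text {e}^{-\lambda t}\sup \limits _{x\in D(\theta _{-t}\omega )}\Vert x\Vert _X+\rho (\omega )=\rho (\omega ), \end{aligned}$$
so that the ball \(B(0,\rho (\omega )+\delta )\) is indeed pullback \(\mathcal {D}\)-absorbing.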
Since constructing random exponential attractors is typically more involved and requires verifying additional properties of the cocycle, the existence of random attractors is frequently shown using the following result instead, see Theorem 2.1 in [28].
Theorem 2.10
Let \(\varphi \) be a continuous random dynamical system on X over \((\Omega ,\mathcal {F},\mathbb {P},(\theta _{t})_{t\in \mathbb {R}})\). Suppose that \(\{B(\omega )\}_{\omega \in \Omega }\) is a compact random absorbing set for \(\varphi \) in \(\mathcal {D}\). Then, \(\varphi \) has a unique \(\mathcal {D}\)-random attractor \(\{\mathcal {A}(\omega )\}_{\omega \in \Omega }\) which is given by
$$\begin{aligned} \mathcal {A}(\omega )=\bigcap \limits _{\tau \ge 0} \overline{\bigcup \limits _{t\ge \tau }\varphi (t,\theta _{-t}\omega ,B(\theta _{-t}\omega ))}. \end{aligned}$$
We could apply Theorem 2.10 to prove the existence of a random attractor for our particular problem. However, showing that a nonautonomous random exponential attractor exists not only implies the existence of the random attractor, but also its finite fractal dimension. Moreover, it turns out to be even simpler in our case than applying Theorem 2.10. To this end, we use an existence result for random exponential attractors obtained in [7], which we recall in the next subsection.
An existence result for random exponential attractors
The existence result for random pullback exponential attractors is based on an auxiliary normed space that is compactly embedded into the phase space and on the entropy properties of this embedding. We recall some notions and results that we will need in the sequel, see also [7,8,9].
The (Kolmogorov) \(\varepsilon \)-entropy of a precompact subset M of a Banach space X is defined as
$$\begin{aligned} {\mathcal {H}}_\varepsilon ^X(M)=\log _2(N_\varepsilon ^X(M)), \end{aligned}$$
where \(N_\varepsilon ^X(M)\) denotes the minimal number of \(\varepsilon \)-balls in X with centers in M needed to cover the set M. It was first introduced by Kolmogorov and Tihomirov [14]. The order of growth of \({\mathcal {H}}_\varepsilon ^X(M)\) as \(\varepsilon \) tends to zero is a measure of the massiveness of the set M in X, even if its fractal dimension is infinite.
If X and Y are Banach spaces such that the embedding \(Y\hookrightarrow X\) is compact, we use the notation
$$\begin{aligned} {\mathcal {H}}_\varepsilon (Y;X)={\mathcal {H}}_\varepsilon ^X(B^Y(0,1)), \end{aligned}$$
where \(B^Y(0,1)\) denotes the closed unit ball in Y.
Remark 2.11
The \(\varepsilon \)-entropy is related to the entropy numbers \({\hat{e}}_k\) for the embedding \(Y\hookrightarrow X,\) which are defined by
$$\begin{aligned} {\hat{e}}_k=\inf \left\{ \varepsilon >0 : B^Y(0,1)\subset \bigcup _{j=1}^{2^{k-1}}B^X(x_j,\varepsilon ),\ x_j\in X, \ j=1,\dots ,2^{k-1}\right\} , \end{aligned}$$
\(k\in \mathbb {N}.\) If the embedding is compact, then \({\hat{e}}_k\) is finite for all \(k\in \mathbb {N}\). For certain function spaces, the entropy numbers can be estimated explicitly (see [13]). For instance, if \(D\subset \mathbb {R}^n\) is a smooth bounded domain, then the embedding of the Sobolev spaces
$$\begin{aligned} W^{l_1,p_1}(D)\hookrightarrow W^{l_2,p_2}(D),\qquad l_1,l_2\in \mathbb {R}, \ p_1,p_2\in (1,\infty ), \end{aligned}$$
is compact if \(l_1>l_2\) and \(\frac{l_1}{n} - \frac{1}{p_1} > \frac{l_2}{n}-\frac{1}{p_2}.\) Moreover, the entropy numbers grow polynomially, namely
$$\begin{aligned} {\hat{e}}_k \simeq k^{-\frac{l_1-l_2}{n}} \end{aligned}$$
(see Theorem 2, Section 3.3.3 in [13]), and consequently,
$$\begin{aligned} {\mathcal {H}}_\varepsilon (W^{l_1,p_1}(D);W^{l_2,p_2}(D))\le c \varepsilon ^{-\frac{n}{l_1-l_2}}, \end{aligned}$$
for some constant \(c>0\). Here, we write \(f\simeq g,\) if there exist positive constants \(c_1\) and \(c_2\) such that
$$\begin{aligned} c_1f\le g \le c_2f. \end{aligned}$$
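For orientation, we sketch how the entropy estimate follows from the growth of the entropy numbers: if \({\hat{e}}_k\le Ck^{-\gamma }\) with \(\gamma =\frac{l_1-l_2}{n}\) and some constant \(C>0\), then for given \(\varepsilon >0\) the smallest integer k with \(Ck^{-\gamma }\le \varepsilon \) satisfies \(k\le (C/\varepsilon )^{1/\gamma }+1\), and \(B^Y(0,1)\) is covered by \(2^{k-1}\) balls in X of radius arbitrarily close to \(\varepsilon \). Hence
$$\begin{aligned} \log _2\big (N^X_\varepsilon (B^Y(0,1))\big )\le k-1\le \left( \frac{C}{\varepsilon }\right) ^{\frac{n}{l_1-l_2}}, \end{aligned}$$
which is the stated bound \(c\,\varepsilon ^{-\frac{n}{l_1-l_2}}\), up to a harmless adjustment of the constant (requiring the centers of the covering balls to lie in \(B^Y(0,1)\) at most doubles \(\varepsilon \)).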
The following existence result for nonautonomous random pullback exponential attractors is a special case of the main result in [7]. In fact, we formulate a simplified version that suffices for the parabolic stochastic evolution problem we consider. In particular, we assume that the cocycle is uniformly Lipschitz continuous and satisfies the smoothing property with a constant that is independent of \(\omega \). More generally, one can allow the constants to depend on the random parameter \(\omega \) and the cocycle to be asymptotically compact, i.e., the sum of a mapping satisfying the smoothing property and a contraction.
Theorem 2.12
Let \(\varphi \) be a random dynamical system in a separable Banach space X, and let \(\mathcal {D}\) denote the universe of tempered random sets. Moreover, we assume that the following properties hold for all \(\omega \in \Omega \):
- \((H_1)\) Compact embedding: There exists another separable Banach space Y that is compactly and densely embedded into X.
- \((H_2)\) Random pullback absorbing set: There exists a random closed set \(B\in \mathcal {D}\) that is pullback \(\mathcal {D}\)-absorbing, and the absorbing time corresponding to a random set \(D\in \mathcal {D}\) satisfies \(T_{D}(\theta _{-t}\omega )\le T_{D}(\omega )\) for all \(t\ge 0\).
- \((H_3)\) Smoothing property: There exist \({\tilde{t}}>T_{B}(\omega )\) and a constant \(\kappa >0\) such that
$$\begin{aligned} \Vert \varphi ({\tilde{t}},\omega ,u)-\varphi ({\tilde{t}},\omega ,v)\Vert _Y\le \kappa \Vert u-v\Vert _X\qquad \forall u,v\in B(\omega ). \end{aligned}$$
- \((H_4)\) Lipschitz continuity: There exists a constant \(L_\varphi >0\) such that
$$\begin{aligned} \Vert \varphi (s,\omega ,u)-\varphi (s,\omega ,v)\Vert _{X}\le L_\varphi \Vert u-v\Vert _{X}\qquad \forall s\in [0,\tilde{t}],\ u,v\in B(\omega ). \end{aligned}$$
Then, for every \(\nu \in (0,\frac{1}{2})\) there exists a nonautonomous random pullback exponential attractor \(\{\mathcal {M}^\nu (t,\omega )\}_{t\in \mathbb {R},\omega \in \Omega }\), and its fractal dimension is uniformly bounded by
$$\begin{aligned} \text {dim}_f(\mathcal {M}^\nu (t,\omega ))\le \log _{\frac{1}{2\nu }}\left( N_{\frac{\nu }{\kappa }}^X(B^Y(0,1))\right) \qquad \forall t\in \mathbb {R},\ \omega \in \Omega . \end{aligned}$$
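To make this bound explicit in a typical situation, suppose for illustration that \(X=L^2(D)\) and \(Y=W^{l,2}(D)\) for some \(l>0\) and a smooth bounded domain \(D\subset \mathbb {R}^n\). Using \(\log _{\frac{1}{2\nu }}(N)=\log _2(N)/\log _2(\frac{1}{2\nu })\) and the entropy estimate from Remark 2.11, we obtain
$$\begin{aligned} \text {dim}_f(\mathcal {M}^\nu (t,\omega ))\le \frac{{\mathcal {H}}_{\frac{\nu }{\kappa }}(Y;X)}{\log _2\left( \frac{1}{2\nu }\right) }\le \frac{c}{\log _2\left( \frac{1}{2\nu }\right) }\left( \frac{\kappa }{\nu }\right) ^{\frac{n}{l}}, \end{aligned}$$
so the dimension bound grows polynomially in the smoothing constant \(\kappa \) and deteriorates as \(\nu \uparrow \frac{1}{2}\).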
Pathwise mild solutions for parabolic SPDEs
Let \(\Delta :=\{(s,t)\in \mathbb {R}^2: s\le t\}\), X be a separable, reflexive, type 2 Banach space and \(({\overline{\Omega }},\overline{\mathcal {F}}, \overline{\mathbb {P}})\) be a probability space. Similarly to [26], we consider nonautonomous SPDEs of the form
$$\begin{aligned} du(t)&= A(t,\overline{\omega }) u(t) ~{\text {d}}t + F(u(t)) ~{\text {d}}t + \sigma (t,u(t))~{\text {d}}W_{t},&t>s,\nonumber \\ u(s)&=u_{0} \in X,&s\in \mathbb {R}, \end{aligned}$$
(2.3)
where \(A=\{A(t,{\overline{\omega }})\}_{t\in \mathbb {R},\overline{\omega }\in \overline{\Omega }}\) is a family of time-dependent random differential operators. Intuitively, this means that the differential operator depends on a stochastic process in a way that will be made precise below.
We aim to investigate the long-time behavior of (2.3) using a random dynamical systems approach. First, we recall sufficient conditions that ensure that the family A generates a parabolic stochastic evolution system, see [26]. In particular, we make the following assumptions concerning measurability, sectoriality and Hölder continuity of the operators.
Assumption 1
- (A0) We assume that the operators are closed, densely defined and have a common domain \(\mathcal {D}_A:=D(A(t,{\overline{\omega }}))\) for all \(t\in \mathbb {R}\), \(\overline{\omega }\in \overline{\Omega }\).
- (A1) The mapping \(A:\mathbb {R}\times \overline{\Omega }\rightarrow \mathcal {L}(\mathcal {D}_A,X)\) is strongly measurable and adapted.
- (A2) There exist \(\vartheta \in (\frac{\pi }{2},\pi )\) and \(M>0\) such that \(\Sigma _\vartheta \cup \{0\}\subset \rho (A(t,\overline{\omega }))\), where \(\Sigma _\vartheta :=\{\mu \in \mathbb {C}\setminus \{0\}: |\text {arg }\mu |<\vartheta \}\), and
$$\begin{aligned} \Vert R(\mu ,A(t,\overline{\omega }))\Vert _{\mathcal {L}(X)}\le \frac{M}{|\mu |+1}\qquad \text {for all}\ \mu \in \Sigma _\vartheta \cup \{0\},\ t\in \mathbb {R},\ \overline{\omega }\in \overline{\Omega }. \end{aligned}$$
- (A3) There exist \(\nu \in (0,1]\) and a mapping \(C:\overline{\Omega }\rightarrow (0,\infty )\) such that
$$\begin{aligned} \Vert A(t,\overline{\omega }) - A(s,\overline{\omega })\Vert _{\mathcal {L}(\mathcal {D}_A,X)} \le C({\overline{\omega }}) |t-s|^{\nu }\qquad \text {for all}\ s,t\in \mathbb {R},~ \overline{\omega }\in \overline{\Omega }, \end{aligned}$$
(2.4)
where we assume that \(C({\overline{\omega }})\) is uniformly bounded with respect to \({\overline{\omega }}\), see [26].
Assumptions (A2) and (A3) are referred to in the literature as the Kato–Tanabe assumptions, compare [2], p. 55, or [24], p. 150, and are common in the context of nonautonomous evolution equations. Since the constants in (A2) and (A3) are uniformly bounded w.r.t. \(\overline{\omega }\), all constants arising in the estimates below do not depend on \(\overline{\omega }\).
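A prototypical class of examples to keep in mind (stated here only for illustration; the coefficient a below is a hypothetical choice) is \(A(t,\overline{\omega })=a(t,\overline{\omega })\Delta \) on \(X=L^2(D)\) with Dirichlet boundary conditions and common domain \(\mathcal {D}_A=H^2(D)\cap H^1_0(D)\), where \(a:\mathbb {R}\times \overline{\Omega }\rightarrow [a_-,a_+]\subset (0,\infty )\) is adapted and uniformly \(\nu \)-Hölder continuous in time. Then (A0)–(A2) hold with constants independent of \(\overline{\omega }\), since \(a\Delta \) is sectorial uniformly in \(a\in [a_-,a_+]\), and (A3) follows from
$$\begin{aligned} \Vert A(t,\overline{\omega })-A(s,\overline{\omega })\Vert _{\mathcal {L}(\mathcal {D}_A,X)}=|a(t,\overline{\omega })-a(s,\overline{\omega })|\,\Vert \Delta \Vert _{\mathcal {L}(\mathcal {D}_A,X)}\le C|t-s|^{\nu }. \end{aligned}$$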
In the sequel, we denote by \(X_\eta \), \(\eta \in (-1,1]\), the fractional power spaces \(D((-A(t,\overline{\omega }))^\eta )\) endowed with the norm \(\Vert x\Vert _{X_\eta }=\Vert (-A(t,{\overline{\omega }}))^\eta x\Vert _X\) for \(t\in \mathbb {R}\), \(\overline{\omega }\in {\overline{\Omega }}\) and \(x\in X_\eta \).
Assumption 2
- (AC) We assume that the operators \(A(t,\overline{\omega })\), \(t\in \mathbb {R},\ \overline{\omega }\in \overline{\Omega }\), have a compact inverse. This implies that the embeddings \(X_\eta \hookrightarrow X\), \(\eta \in (0,1]\), are compact.
- (U) The evolution family is uniformly exponentially stable, i.e., there exist constants \(\lambda >0\) and \(c>0\) such that
$$\begin{aligned}&\Vert U(t,s,\overline{\omega })\Vert _{\mathcal {L}(X)} \le c\,\text {e}^{-\lambda (t-s)} \quad \text{ for } \text{ all } (s,t)\in \Delta \text{ and } \overline{\omega }\in \overline{\Omega }. \end{aligned}$$
(2.5)
- (Drift) The nonlinearity \(F:X\rightarrow X\) is globally Lipschitz continuous, i.e., there exists a constant \(C_{F}>0\) such that
$$\begin{aligned} \Vert F(x) -F(y)\Vert _{X}\le C_{F}\Vert x-y\Vert _{X}\quad \text{ for } \text{ all } x,y\in X. \end{aligned}$$
This implies a linear growth condition on F: there exists a positive constant \(\overline{C}_{F}\) (one may take \(\overline{C}_{F}=\Vert F(0)\Vert _X\)) such that
$$\begin{aligned} \Vert F(x)\Vert _{X}\le \overline{C}_{F} + C_{F}\Vert x\Vert _{X}\quad \text{ for } \text{ all } x\in X. \end{aligned}$$
(2.6)
Furthermore, we assume that \(\lambda - c C_{F}>0\).
- (Noise) We assume that W(t) is a two-sided Wiener process with values in \(X_{\beta }\), \(\beta \in (0,1]\). Furthermore, we set \(\sigma (t,u):=\sigma >0\), i.e., the noise intensity is a positive constant.
Based on Assumption 1, applying [1, Thm. 2.3] pointwise in \(\overline{\omega }\in \overline{\Omega }\) yields the following theorem, see [26, Theorem 2.2]. The measurability was shown in [26, Proposition 2.4]. Before we state the result, we recall the definition of strong measurability of random operators.
Definition 2.13
Let \(X_1\) and \(X_2\) be two separable Banach spaces. A random operator \(L:\overline{\Omega }\times X_1\rightarrow X_2\) is called strongly measurable if the mapping \(\overline{\omega }\mapsto L(\overline{\omega })x\), \({\bar{\omega }}\in \overline{\Omega }\), is a random variable on \(X_2\) for every \(x\in X_1\).
Theorem 2.14
There exists a unique parabolic evolution system \(U:\Delta \times \overline{\Omega }\rightarrow \mathcal {L}(X)\) with the following properties:
- (1) \(U(t,t,\overline{\omega })=\text{ Id }\) for all \(t\ge 0\), \(\overline{\omega }\in \overline{\Omega }\).
- (2) We have
$$\begin{aligned} U(t,s,\overline{\omega })U(s,r,\overline{\omega })=U(t,r,\overline{\omega }) \end{aligned}$$
(2.7)
for all \(0\le r\le s\le t\), \(\overline{\omega }\in \overline{\Omega }\).
- (3) The mapping \(U(\cdot ,\cdot ,{\overline{\omega }})\) is strongly continuous for all \(\overline{\omega }\in \overline{\Omega }\).
- (4) For \(s<t\), the following identity holds pointwise in \(\overline{\Omega }\):
$$\begin{aligned} \frac{d}{dt}U(t,s,\overline{\omega })=A(t,\overline{\omega })U(t,s,\overline{\omega }). \end{aligned}$$
- (5) The evolution system \(U:\Delta \times \overline{\Omega }\rightarrow \mathcal {L}(X)\) is strongly measurable in the uniform operator topology. Moreover, for every \(t\ge s\), the mapping \(\overline{\omega }\mapsto U(t,s,\overline{\omega })\in \mathcal {L}(X)\) is strongly \(\mathcal {F}_t\)-measurable in the uniform operator topology.
To prove the existence of random attractors, we need additional smoothing properties of the evolution system. The following properties and estimates were shown in Lemmas 2.6 and 2.7 in [26]. The exponential decay is a consequence of our assumption (U).
Lemma 2.15
We assume that the family of adjoint operators \(A^*(t,\overline{\omega })\) satisfies (A3) with exponent \(\nu ^*>0\). Then, for every \(t>0\), the mapping \(s\mapsto U(t,s,\overline{\omega })\) belongs to \(C^1([0,t);\mathcal {L}(X))\), and for all \(x\in \mathcal {D}_A\) one has
$$\begin{aligned} \frac{d}{ds}U(t,s,\overline{\omega })x=-U(t,s,\overline{\omega })A(s,\overline{\omega })x. \end{aligned}$$
Moreover, for \(\alpha \in [0,1]\) and \(\eta \in (0,1)\) there exist positive constants \({\widetilde{C}}_\alpha , {\widetilde{C}}_{\alpha ,\eta }\) such that the following estimates hold for \(t>s\) and \({\bar{\omega }}\in \overline{\Omega }\):
$$\begin{aligned} \Vert (-A(t,{\overline{\omega }}))^\alpha U(t,s,{\overline{\omega }})x\Vert _X&\le {\widetilde{C}}_\alpha \frac{\text {e}^{-\lambda (t-s)}}{(t-s)^{\alpha }}\Vert x\Vert _X,&x\in X;\\ \Vert U(t,s,{\overline{\omega }})(-A(s,{\overline{\omega }}))^\alpha x\Vert _X&\le {\widetilde{C}}_\alpha \frac{\text {e}^{-\lambda (t-s)}}{(t-s)^{\alpha }}\Vert x\Vert _X,&x\in X_\alpha ;\\ \Vert (-A(t,{\overline{\omega }}))^{-\alpha } U(t,s,{\overline{\omega }}) (-A(s,{\overline{\omega }}))^\eta x\Vert _X&\le {\widetilde{C}}_{\alpha ,\eta } \frac{\text {e}^{-\lambda (t-s)}}{(t-s)^{\eta -\alpha }}\Vert x\Vert _X,&x\in X_\eta . \end{aligned}$$
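These estimates are used repeatedly in the sequel. For instance, combining the first estimate with the linear growth bound (2.6) formally gives, for \(\alpha \in [0,1)\) and \(t>0\),
$$\begin{aligned} \left\| \int \limits _{0}^{t} U(t,s,{\overline{\omega }})F(u(s))~{\text {d}}s\right\| _{X_\alpha }\le {\widetilde{C}}_\alpha \int \limits _{0}^{t}\frac{\text {e}^{-\lambda (t-s)}}{(t-s)^{\alpha }}\big (\overline{C}_{F}+C_{F}\Vert u(s)\Vert _X\big )~{\text {d}}s, \end{aligned}$$
where the singularity \((t-s)^{-\alpha }\) is integrable precisely because \(\alpha <1\).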
To shorten notation, in the sequel we omit the \(\overline{\omega }\)-dependence of A and U if there is no danger of confusion. The classical mild formulation of the SPDE (2.3) is
$$\begin{aligned} u(t) = U(t,0)u_{0} + \int \limits _{0}^{t} U(t,s)F(u(s)) ~{\text {d}}s + \sigma \int \limits _{0}^{t} U(t,s)~{\text {d}}W(s). \end{aligned}$$
(2.8)
However, the Itô integral in (2.8) is not well defined, since the mapping \(\overline{\omega }\mapsto U(t,s,\overline{\omega })\) is, in general, only \(\mathcal {F}_{t}\)-measurable and not \(\mathcal {F}_{s}\)-measurable, i.e., the integrand is not adapted, see [26, Prop. 2.4]. To overcome this problem, Pronk and Veraar introduced in [26] the concept of pathwise mild solutions. In our particular case, this notion leads to the integral representation
$$\begin{aligned} u(t)=&\ U (t,0) u_{0} + \sigma U(t,0) W(t) + \int \limits _{0}^{t} U(t,s)F(u(s))~{\text {d}}s\nonumber \\&\ -\sigma \int \limits _{0}^{t}U(t,s)A(s) (W(t) -W(s)) ~{\text {d}}s. \end{aligned}$$
(2.9)
The formula is motivated by formally applying integration by parts to the stochastic integral, and, as shown in [26], it indeed yields a pathwise representation of the solution.
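More precisely, using \(\frac{d}{ds}U(t,s)=-U(t,s)A(s)\) from Lemma 2.15, a formal integration by parts gives
$$\begin{aligned} \sigma \int \limits _{0}^{t}U(t,s)~{\text {d}}W(s)=\sigma \big [U(t,s)W(s)\big ]_{s=0}^{s=t}+\sigma \int \limits _{0}^{t}U(t,s)A(s)W(s)~{\text {d}}s=\sigma W(t)+\sigma \int \limits _{0}^{t}U(t,s)A(s)W(s)~{\text {d}}s, \end{aligned}$$
and since \(\int _{0}^{t}U(t,s)A(s)~{\text {d}}s=-\int _{0}^{t}\frac{d}{ds}U(t,s)~{\text {d}}s=U(t,0)-\text{ Id }\), the right-hand side coincides with the two noise terms \(\sigma U(t,0)W(t)-\sigma \int _{0}^{t}U(t,s)A(s)(W(t)-W(s))~{\text {d}}s\) in (2.9). The advantage of (2.9) is that it only involves Lebesgue integrals of the path of W, and hence is defined for every fixed \(\overline{\omega }\).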
Our aim is to show the existence of random attractors for SPDEs using this concept of pathwise mild solutions. It allows us to study random attractors without transforming the SPDE into a random PDE, as is typically done.
Remark 2.16
We emphasize that the concept of pathwise mild solutions also applies if \(\sigma \) is not constant, see [26, Sec.5]. In this case, the solution of (2.3) is given by
$$\begin{aligned} u(t)&= U(t,0)u_{0} + U(t,0) \int \limits _{0}^{t}\sigma (s,u(s))~{\text {d}}W(s) + \int \limits _{0}^{t} U(t,s)F(u(s))~{\text {d}}s \end{aligned}$$
(2.10)
$$\begin{aligned}&\quad - \int \limits _{0}^{t} U(t,s)A(s)\int \limits _{s}^{t}\sigma (\tau ,u(\tau ))~{\text {d}}W(\tau )~{\text {d}}s. \end{aligned}$$
(2.11)
However, it is not possible to obtain a random dynamical system in this case, due to the presence of the stochastic integrals in (2.10) and (2.11), which are not defined in a pathwise sense. Consequently, this representation formula does not hold for every \(\overline{\omega }\in \overline{\Omega }\). We aim to investigate this issue in a future work.
Recalling that W is an \(X_{\beta }\)-valued Wiener process, we introduce the canonical probability space
$$\begin{aligned} \Omega :=(C_{0}(\mathbb {R};X_{\beta }),\mathcal {B}(C_{0}(\mathbb {R};X_{\beta })),\mathbb {P}) \end{aligned}$$
(2.12)
and identify \(W(t,\omega )\) with \(\omega (t)\) for \(\omega \in \Omega \). Moreover, together with the Wiener shift,
$$\begin{aligned} \theta _{t}\omega (s)=\omega (t+s)-\omega (t), \quad \omega \in \Omega , \ s,t\in \mathbb {R}, \end{aligned}$$
we obtain, analogously to Sect. 2.1, the ergodic metric dynamical system \((\Omega ,\mathcal {F},\mathbb {P},(\theta _{t})_{t\in \mathbb {R}})\).
In the following, \((\Omega ,\mathcal {F},\mathbb {P})\) always denotes the probability space (2.12).