
Exponential attractors for random dynamical systems and applications

  • Armen Shirikyan
  • Sergey Zelik

Abstract

The paper is devoted to constructing a random exponential attractor for some classes of stochastic PDE’s. We first prove the existence of an exponential attractor for abstract random dynamical systems and study its dependence on a parameter and then apply these results to a nonlinear reaction–diffusion system with a random perturbation. We show, in particular, that the attractors can be constructed in such a way that the symmetric distance between the attractors for stochastic and deterministic problems goes to zero with the amplitude of the random perturbation.

Keywords

Random exponential attractors · Stochastic PDE’s · Reaction–diffusion equation

AMS subject classifications

35B41 · 35K57 · 35R60 · 60H15

Introduction

The theory of attractors for partial differential equations (PDE’s) has been developed intensively since the late seventies of the last century. It is by now well known that many autonomous dissipative PDE’s possess an attractor, even if the Cauchy problem is not known to be well posed. Moreover, one can establish explicit upper and lower bounds for the dimension of an attractor. A comprehensive presentation of the theory of attractors can be found in [3, 13, 33].

The situation becomes more complicated when dealing with non-autonomous dissipative systems. In that case, there are at least two natural ways to extend the concept of an attractor. The first is based on reducing the non-autonomous dynamical system (DS) in question to an autonomous one and leads to an attractor that is independent of time and attracts the images of bounded sets uniformly with respect to time shifts. It is usually called a uniform attractor. A drawback of this approach is that the attractor is often huge (infinite-dimensional), even when the DS considered has trivial dynamics, with a single exponentially stable trajectory; see [13] and the references therein for details.

An alternative approach treats the attractor of a non-autonomous system as a family of time-dependent subsets obtained by restricting all bounded trajectories to all possible times. In that case, the resulting object is usually finite-dimensional (as in the autonomous case), but the attraction becomes non-uniform with respect to time shifts. Moreover, as a rule, attraction forward in time no longer holds, and one has only the pullback attraction property, so these objects are called pullback attractors (or kernel sections in the terminology of Vishik and Chepyzhov); see the books [9, 13] and the literature cited there.

The theory of attractors can also be extended to the case of random dynamical systems (RDS), mainly on the basis of the concept of a pullback attractor. Various results similar to the deterministic case were obtained for many RDS generated by stochastic PDE’s, such as the Navier–Stokes system or reaction–diffusion equations with random perturbations. The situation is even slightly better here since, in contrast to general non-autonomous deterministic DS, in the case of RDS one usually has the forward attraction property in probability; see [4, 5]. Moreover, if the random dynamics is Markovian and mixing, then a minimal random attractor in probability can be described as the support of the disintegration of the unique Markovian invariant measure of the extended DS corresponding to the problem in question; see [25].

However, there is an intrinsic drawback of the theory of attractors; namely, the rate of attraction to the (global, uniform, pullback) attractor can be arbitrarily slow and there is no way, in general, to express or to estimate this rate of convergence in terms of physical parameters of the system under study. As a consequence, the attractor is also very sensitive to perturbations which makes it, in a sense, unobservable in experiments and numerical simulations.

This drawback can be overcome using the concept of an inertial manifold (IM) instead. This is an invariant finite-dimensional manifold of the phase space which contains the attractor and possesses the so-called exponential tracking property (i.e., every trajectory of the DS considered is attracted exponentially to a trajectory on the manifold). The rate of attraction can be estimated in terms of physical parameters, and the manifold itself is robust with respect to perturbations; see [22, 33] and references therein. Moreover, the construction can be extended to the case of non-autonomous and random DS, and the resulting inertial manifold also resolves the problem of the lack of forward attraction: under some natural assumptions, the rate of exponential attraction to the non-autonomous/random inertial manifold is uniform with respect to time shifts; see [2, 7, 8, 10, 12] and the literature cited there.

Unfortunately, being a kind of center manifold, an IM requires a separation of the phase space into “fast” and “slow” variables. This leads, in turn, to very restrictive spectral gap conditions, which are violated in many interesting applications, including the 2D Navier–Stokes system, reaction–diffusion equations in higher dimensions, damped wave equations, etc. In addition, when a stochastic dissipative PDE is considered, e.g., with an additive white noise, to guarantee the existence of the IM one should impose the additional condition that all nonlinear terms are globally Lipschitz continuous.

To overcome these restrictive assumptions, an object intermediate between the IM and the attractor, the so-called exponential attractor (or inertial set), was introduced in [16] for the case of autonomous DS. This is a semi-invariant finite-dimensional set (but not necessarily a manifold) which contains the attractor and possesses the exponential attraction property, like an IM. Moreover, the rate of attraction is controlled, which leads to some stability under perturbations. The initial construction of an exponential attractor given in [16] was restricted to the case of Hilbert phase spaces and involved Zorn’s lemma. A relatively simple and effective explicit construction of this object was suggested later in [17], and, as is believed nowadays, exponential attractors are almost as general as the usual ones, and no restrictive or artificial assumptions are required for their existence; see the survey [31] and the references therein.

An extension of the theory of exponential attractors to the case of non-autonomous DS (including robustness) was given in [19] (see also [18, 30] for the so-called uniform exponential attractors and [28] for a slight extension of the result of [19]). As shown there, a non-autonomous exponential attractor remains finite-dimensional, like a pullback attractor, but attracts the images of bounded sets uniformly with respect to time shifts, like a uniform attractor. Thus, like an IM, a non-autonomous exponential attractor contains the pullback one and possesses the forward attraction property, but in contrast to the IM, no restrictive spectral gap assumptions are required.

The aim of the present paper is to extend the theory of exponential attractors to the case of dissipative RDS. Although the theory of random attractors is often similar to the non-autonomous deterministic one and our study is also strongly based on the construction given in [17, 19], there is a fundamental difference between the two cases. Namely, in contrast to the deterministic case considered in [19], a typical trajectory of an RDS is unbounded in time. For instance, this is the case for a dissipative stochastic PDE with an additive white noise. Thus, if we do not impose the restrictive assumption on the global Lipschitz continuity of all nonlinear terms, then all the constants in appropriate squeezing/smoothing properties (which play a key role in the construction of an exponential attractor) will depend on time (in other words, will be random), and a straightforward extension does not work. However, some time averages of these quantities can be controlled, and this turns out to be sufficient for constructing an exponential attractor.

As an application of the theory developed in the paper, the following problem in a bounded domain \(D\subset \mathbb{R }^n\) with a smooth boundary \({\partial }D\) will be considered:
$$\begin{aligned} \dot{u}-a\Delta u+f(u)&= h(x)+\eta (t,x),\end{aligned}$$
(1.1)
$$\begin{aligned} u\bigr |_{{\partial }D}&= 0,\end{aligned}$$
(1.2)
$$\begin{aligned} u(0,x)&= u_0(x). \end{aligned}$$
(1.3)
Here \(u=(u_1,\dots ,u_k)^t\) is an unknown vector function, \(a\) is a \(k\times k\) matrix such that \(a+a^t>0\), \(f\in C^2(\mathbb{R }^k,\mathbb{R }^k)\) is a function satisfying some natural growth and dissipativity conditions, \(h(x)\) is a deterministic external force acting on the system, and \(\eta \) is a random process, white in time and regular in the space variables; see Sect. 2.2 for the exact hypotheses imposed on \(f\) and \(\eta \). The Cauchy problem (1.1)–(1.3) is well posed in the space \(H:=L^2(D,\mathbb{R }^k)\), and we denote by \({\varvec{\varPhi }}=\{\varphi _t:H\rightarrow H,t\ge 0\}\) the corresponding RDS defined on a probability space \((\Omega ,\mathcal{F },\mathbb{P })\) with a group of shift operators \(\{\theta _t:\Omega \rightarrow \Omega ,t\in \mathbb{R }\}\) (see Sect. 2.2). We have the following result on the existence of an exponential attractor for \({\varvec{\varPhi }}\).

Theorem A

There is a random compact set \(\mathcal{M }_\omega \subset H\) and an event \(\Omega _*\subset \Omega \) of full measure such that the following properties hold for \(\omega \in \Omega _*\).

Semi-invariance: \(\varphi _t^\omega (\mathcal{M }_\omega )\subset \mathcal{M }_{\theta _t\omega }\) for all \(t\ge 0\).

Exponential attraction: There is \(\beta >0\) such that for any ball \(B\subset H\) we have
$$\begin{aligned} \sup _{u\in B}\,\inf _{v\in \mathcal{M }_{\theta _t\omega }} \Vert \varphi _t^\omega (u)-v\Vert \le C(B)e^{-\beta t}, \quad t\ge 0, \end{aligned}$$
where \(C(B)\) is a constant depending only on \(B\).

Finite-dimensionality: There is a number \(d>0\) such that \(\dim _f(\mathcal{M }_\omega )\le d\), where \(\dim _f\) stands for the fractal dimension of \(\mathcal{M }_\omega \).

Let us now assume that the random force \(\eta \) in Eq. (1.1) is replaced by \({\varepsilon }\eta \), where \({\varepsilon }\in [-1,1]\) is a parameter. We denote by \(\mathcal{M }_\omega ^{\varepsilon }\) the corresponding exponential attractors. Since in the limit case \({\varepsilon }=0\) the equation is no longer stochastic, the corresponding attractor \(\mathcal{M }=\mathcal{M }^0\) is also independent of \(\omega \). A natural question is whether one can construct \(\mathcal{M }_\omega ^{\varepsilon }\) in such a way that the symmetric distance between the attractors of stochastic and deterministic equations goes to zero as \({\varepsilon }\rightarrow 0\). The following theorem gives a positive answer to that question.

Theorem B

The exponential attractors \(\mathcal{M }_\omega ^{\varepsilon }, {\varepsilon }\in [-1,1]\), can be constructed in such a way that
$$\begin{aligned} d^s(\mathcal{M }_\omega ^{\varepsilon },\mathcal{M })\rightarrow 0 \quad \text{almost surely as }{\varepsilon }\rightarrow 0, \end{aligned}$$
where \(d^s\) stands for the symmetric distance between two subsets of \(H\).

We refer the reader to Sect. 4 for more precise statements of the results on the existence of exponential attractors and their dependence on a parameter. Let us note that various results similar to Theorem B were established earlier in the case of deterministic PDE’s; e.g., see the papers [19, 20], the first of which is devoted to studying the behaviour of exponential attractors under singular perturbations, while the second deals with non-autonomous DS and proves Hölder continuous dependence of the exponential attractor on a parameter.

We emphasize that the convergence in Theorem B differs from the one in the case of global attractors, for which, in general, only lower semicontinuity can be established. For instance, let us consider the following one-dimensional ODE perturbed by the time derivative of a standard Brownian motion \(w\):
$$\begin{aligned} \dot{u}=u-u^3+{\varepsilon }\dot{w}. \end{aligned}$$
(1.4)
When \({\varepsilon }=0\), the global attractor \(\mathcal{A }\) for (1.4) is the interval \([-1,1]\) and is regular in the sense that it consists of the stationary points and the unstable manifolds around them. It is well known that the regular structure of an attractor is very robust and survives rather general deterministic perturbations, and in many cases it is possible to prove that the symmetric distance between the attractors for the perturbed and unperturbed systems goes to zero; see [3, 14]. On the other hand, it is proved in [6] that the random attractor \(\mathcal{A }_\omega ^{\varepsilon }\) for (1.4) consists of a single trajectory and, hence, the symmetric distance between \(\mathcal{A }\) and \(\mathcal{A }_\omega ^{\varepsilon }\) does not go to zero as \({\varepsilon }\rightarrow 0\) (see also [11] for analogous results for order preserving stochastic PDEs).
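The contraction mechanism behind the single-trajectory attractor of (1.4) can be glimpsed numerically. The following Euler–Maruyama sketch is a toy illustration of ours (step size, horizon, noise amplitude, and initial data are arbitrary choices); it drives two solutions of (1.4) with the same noise path, both starting in the basin of the equilibrium \(u=1\), where the drift is contracting, and shows them merging. The full synchronization statement, including initial data on opposite sides of the origin, is the content of [6].

```python
import numpy as np

def euler_maruyama(u0, eps, dW, dt):
    """Euler-Maruyama scheme for du = (u - u^3) dt + eps dW."""
    u = u0
    for dw in dW:
        u = u + dt * (u - u**3) + eps * dw
    return u

rng = np.random.default_rng(0)
dt, n = 1e-3, 20_000                  # integrate up to t = 20
dW = rng.normal(0.0, np.sqrt(dt), n)  # one fixed noise path

# Two solutions driven by the SAME noise, both in the basin of u = 1,
# where 1 - 3u^2 < 0, so the difference contracts exponentially:
u1 = euler_maruyama(0.5, 0.1, dW, dt)
u2 = euler_maruyama(2.0, 0.1, dW, dt)
print(abs(u1 - u2))  # the two trajectories have merged
```

Since the noise is additive and common to both solutions, their difference obeys a noise-free contracting equation, which is why the merged pair represents a single random trajectory.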

In conclusion, let us mention that some results similar to those described above hold for other stochastic PDE’s, including the 2D Navier–Stokes system. They will be considered in a subsequent publication.

The paper is organised as follows. In Sect. 2, we present some preliminaries on RDS and a reaction–diffusion equation perturbed by a spatially regular white noise. Section 3 is devoted to some general results on the existence of exponential attractors and their dependence on a parameter. In Sect. 4, we apply our abstract construction to the stochastic reaction–diffusion system (1.1)–(1.3). The Appendix gathers some results on coverings of random compact sets and their images under random mappings, as well as the time-regularity of stochastic processes.

Notation

Let \(J\subset \mathbb{R }\) be an interval, let \(D\subset \mathbb{R }^n\) be a bounded domain with smooth boundary \({\partial }D\), and let \(X\) be a Banach space. Given a compact set \(\mathcal{K }\subset X\), we denote by \(\mathcal{H }_{\varepsilon }(\mathcal{K },X)\) and \(\dim _f(\mathcal{K })\) its Kolmogorov \({\varepsilon }\)-entropy and fractal dimension, respectively; see Chapt. 10 in [29] and Chapt. V in [33] for details. Recall that
$$\begin{aligned} \mathcal{H }_{\varepsilon }(\mathcal{K },X)=\ln N_{\varepsilon }(\mathcal{K }), \quad \dim _f(\mathcal{K })=\limsup _{{\varepsilon }\rightarrow 0^+}\frac{\ln N_{\varepsilon }(\mathcal{K })}{\ln {\varepsilon }^{-1}}, \end{aligned}$$
where \(N_{\varepsilon }(\mathcal{K })\) denotes the minimal number of closed balls of radius \({\varepsilon }\) needed to cover \(\mathcal{K }\). If \(Y\) is another Banach space with compact embedding \(Y\Subset X\), then we write \(\mathcal{H }_{\varepsilon }(Y,X)\) for the \({\varepsilon }\)-entropy of a unit ball in \(Y\) considered as a subset in \(X\). We denote by \(\dot{B}_X(v,r)\) and \(B_X(v,r)\) the open and closed balls in \(X\) of radius \(r\) centred at \(v\) and by \(\mathcal{O }_r(A)\) the closed \(r\)-neighbourhood of a subset \(A\subset X\). The closure of \(A\) in \(X\) is denoted by \([A]_X\). Given any set \(C\), we write \(\#C\) for the number of its elements.
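For orientation, the covering quantities above can be computed explicitly on a toy set. The following sketch (an illustration of ours; the set \([0,1]\subset \mathbb{R }\) and the sample values of \({\varepsilon }\) are arbitrary) evaluates \(N_{\varepsilon }\) and the finite-\({\varepsilon }\) quotient \(\ln N_{\varepsilon }/\ln {\varepsilon }^{-1}\) for the unit interval, whose fractal dimension is \(1\):

```python
import math

def N_eps(eps):
    """Minimal number of closed balls (intervals of radius eps)
    needed to cover [0, 1]: each ball covers a length 2*eps."""
    return math.ceil(1.0 / (2.0 * eps))

def dim_estimate(eps):
    """Finite-eps approximation of dim_f = limsup ln N_eps / ln(1/eps)."""
    return math.log(N_eps(eps)) / math.log(1.0 / eps)

for k in (2, 4, 6):
    print(k, dim_estimate(10.0 ** (-k)))  # increases towards 1 as eps -> 0
```

The quotient converges only logarithmically, which is consistent with taking a limsup rather than a value at any fixed \({\varepsilon }\).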

We shall use the following function spaces:

\(L^p=L^p(D)\) denotes the usual Lebesgue space in \(D\) endowed with the standard norm \(\Vert \cdot \Vert _{L^p}\). In the case \(p=2\), we omit the subscript from the notation of the norm. We shall write \(L^p(D,\mathbb{R }^k)\) if we need to emphasise the range of functions.

\(W^{s,p}=W^{s,p}(D)\) stands for the standard Sobolev space with a norm \(\Vert \cdot \Vert _{s,p}\). In the case \(p=2\), we write \(H^s=H^s(D)\) and \(\Vert \cdot \Vert _s\), respectively. We denote by \(H_0^s=H^s_0(D)\) the closure in \(H^s\) of the space of infinitely smooth functions with compact support.

\(C(J,X)\) stands for the space of continuous functions \(f:J\rightarrow X\).

When describing a property involving a random parameter \(\omega \), we shall assume that it holds almost surely, unless specified otherwise. Furthermore, when dealing with a property depending on \(\omega \) and an additional parameter \(y\in Y\), we say that it holds almost surely for \(y\in Y\) if there is a set of full measure \(\Omega _*\subset \Omega \) such that the property is true for \(\omega \in \Omega _*\) and \(y\in Y\).

Given a random function \(f_\omega :\mathbb{R }\rightarrow X\), we shall say that it is (almost surely) Hölder-continuous if there is \(\gamma \in (0,1)\) such that, for any bounded interval \(B\subset \mathbb{R }\), we have
$$\begin{aligned} \Vert f_\omega (t_1)-f_\omega (t_2)\Vert _X\le C_\omega |t_1-t_2|^\gamma , \quad t_1,t_2\in B, \end{aligned}$$
where \(C_\omega =C_\omega (B)\) is an almost surely finite random variable. If \(f\) depends on an additional parameter \(y\in Y\) (that is, \(f=f_\omega ^y(t)\)), then we say that \(f\) is Hölder-continuous uniformly in \(y\) if the above inequality holds for \(f_\omega ^y(t)\) with a random constant \(C_\omega (B)\) not depending on \(y\).

We denote by \(c_i\) and \(C_i\) inessential positive constants not depending on the other parameters.

Preliminaries

Random dynamical systems and their attractors

Let \((\Omega ,\mathcal{F },\mathbb{P })\) be a probability space, \(\{\theta _t,t\in \mathbb{R }\}\) be a group of measure-preserving transformations of \(\Omega \), and \(X\) be a separable Banach space. Recall that a continuous RDS in \(X\) over \(\{\theta _t\}\) (or simply an RDS in \(X\)) is defined as a family of continuous mappings \({\varvec{\varPhi }}=\{\varphi _t^\omega :X\rightarrow X,t\ge 0\}\) that satisfy the following conditions:
  • Measurability The mapping \((t,\omega ,u)\mapsto \varphi _t^\omega (u)\) from \(\mathbb{R }_+\times \Omega \times X\) to \(X\) is measurable with respect to the \(\sigma \)-algebras \(\mathcal{B }_{\mathbb{R }_+}\otimes \mathcal{F }\otimes \mathcal{B }_X\) and \(\mathcal{B }_X\).

  • Perfect co-cycle property For almost every \(\omega \in \Omega \), we have the identity
    $$\begin{aligned} \varphi _{t+s}^\omega =\varphi _t^{\theta _s\omega }\circ \varphi _s^\omega , \quad t,s\ge 0. \end{aligned}$$
    (2.1)
  • Time regularity For almost every \(\omega \in \Omega \), the function \((t,\tau )\mapsto \varphi _t^{\theta _{\tau }\omega }(u)\), defined on \(\mathbb{R }_+\times \mathbb{R }\) with range in \(X\), is Hölder-continuous with some deterministic exponent \(\gamma >0\), uniformly with respect to \(u\in \mathcal{K }\) for any compact subset \(\mathcal{K }\subset X\).
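The perfect co-cycle identity (2.1) can be checked by hand on a toy discrete-time RDS. In the following sketch (an illustration of ours; the affine map and the noise sequence are arbitrary choices), \(\omega \) is represented by the function \(j\mapsto \xi _j\) and the shift \(\sigma _m\) acts by re-indexing the noise:

```python
# Toy discrete-time RDS on R driven by a noise sequence xi:
#   psi_1^omega(u) = A*u + xi_0(omega), iterated k times,
# with (sigma_m omega)_j = xi_{j+m}.
A = 0.5

def psi(k, omega, u):
    for j in range(k):
        u = A * u + omega(j)
    return u

def shift(omega, m):
    return lambda j: omega(j + m)

omega = lambda j: (-1.0) ** j / (j + 1.0)  # a fixed "noise path"

u0, s, t = 3.0, 4, 7
lhs = psi(t + s, omega, u0)                       # psi_{t+s}^omega(u0)
rhs = psi(t, shift(omega, s), psi(s, omega, u0))  # psi_t^{sigma_s omega}(psi_s^omega(u0))
print(abs(lhs - rhs))  # the co-cycle identity holds exactly
```

Both sides apply the same sequence of maps in the same order, so the identity holds here without any rounding error.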

Let us note that the third property of the above definition is stronger than the usual hypothesis of time-continuity of trajectories (e.g., see Sect. 1.1 in [1]). This extra regularity will be important when estimating the fractal dimension of the exponential attractor; see Sect. 3.3. An example of an RDS is given in the next subsection, which is devoted to some preliminaries on a reaction–diffusion system with a random perturbation.

Large-time asymptotics of trajectories for RDS is often described in terms of attractors. This paper deals with random exponential attractors, and we now define some basic concepts.

Recall that the distance between a point \(u\in X\) and a subset \(F\subset X\) is given by \(d(u,F)=\inf _{v\in F}\Vert u-v\Vert \). The Hausdorff and symmetric distances between two subsets are defined by
$$\begin{aligned} d(F_1,F_2)&= \sup _{u\in F_1}d(u,F_2),\\ d^s(F_1,F_2)&= \max \bigl \{d(F_1,F_2),d(F_2,F_1)\bigr \}. \end{aligned}$$
We shall write \(d_X\) and \(d_X^s\) to emphasise that the distance is taken in the metric of \(X\). Let \(\{\mathcal{M }_\omega ,\omega \in \Omega \}\) be a random compact set in \(X\), that is, a family of compact subsets such that the mapping \(\omega \mapsto d(u,\mathcal{M }_\omega )\) is measurable for any \(u\in X\).
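The asymmetry of \(d\) and the role of the maximum in \(d^s\) are easy to see on finite sets. A minimal sketch (our toy example with two finite subsets of \(\mathbb{R }\)):

```python
def d(F1, F2):
    """One-sided Hausdorff distance between finite subsets of R:
    d(F1, F2) = sup_{u in F1} inf_{v in F2} |u - v|."""
    return max(min(abs(u - v) for v in F2) for u in F1)

def d_sym(F1, F2):
    """Symmetric distance d^s = max{d(F1, F2), d(F2, F1)}."""
    return max(d(F1, F2), d(F2, F1))

F1, F2 = [0.0], [0.0, 1.0]
print(d(F1, F2), d(F2, F1), d_sym(F1, F2))  # 0.0 1.0 1.0 -- d is not symmetric
```

Here \(F_1\subset F_2\), so \(d(F_1,F_2)=0\) while \(d(F_2,F_1)=1\); the symmetric distance records the larger of the two deviations.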

Definition 2.1

A random compact set \(\{\mathcal{M }_\omega \}\) is called a random exponential attractor for the RDS \(\{\varphi _t\}\) if there is a set of full measure \(\Omega _*\in \mathcal{F }\) such that the following properties hold for \(\omega \in \Omega _*\).
  • Semi-invariance For any \(t\ge 0\), we have \(\varphi _t^\omega (\mathcal{M }_\omega )\subset \mathcal{M }_{\theta _t\omega }\).

  • Exponential attraction There is a constant \(\beta >0\) such that
    $$\begin{aligned} d\bigl (\varphi _t^\omega (B),\mathcal{M }_{\theta _t\omega }\bigr ) \le C(B) e^{-\beta t} \quad \text{for } t\ge 0, \end{aligned}$$
    (2.2)
    where \(B\subset H\) is an arbitrary ball and \(C(B)\) is a constant that depends only on \(B\).
  • Finite-dimensionality There is a random variable \(d_\omega \ge 0\), finite on \(\Omega _*\), such that
    $$\begin{aligned} \dim _f\bigl (\mathcal{M }_\omega \bigr )\le d_\omega . \end{aligned}$$
    (2.3)
  • Time continuity The function \(t\mapsto d^s\bigl (\mathcal{M }_{\theta _t\omega },\mathcal{M }_\omega \bigr )\) is Hölder-continuous on \(\mathbb{R }\) with some exponent \(\delta >0\).

We shall also need the concept of a random absorbing set. Recall that a random compact set \(\mathcal{A }_\omega \) is said to be absorbing for \({\varvec{\varPhi }}\) if for any ball \(B\subset X\) there is \(T(B)\ge 0\) such that
$$\begin{aligned} \varphi _t^\omega (B)\subset \mathcal{A }_{\theta _t\omega } \quad \text{for } t\ge T(B),\ \omega \in \Omega . \end{aligned}$$
(2.4)
All the above definitions make sense also in the case of discrete time, that is, when the time variable varies on the integer lattice \(\mathbb{Z }\). The only difference is that the property of time continuity should be skipped for discrete-time RDS and their attractors. In what follows, we shall deal with both situations.

Reaction–diffusion system perturbed by white noise

Let \(D\subset \mathbb{R }^n\) be a bounded domain with a smooth boundary \({\partial }D\). We consider the reaction–diffusion system (1.1), (1.2), in which \(u=(u_1,\dots ,u_k)^t\) is an unknown vector function and \(a\) is a \(k\times k\) matrix such that
$$\begin{aligned} a+a^t>0. \end{aligned}$$
(2.5)
We assume that \(f\in C^2(\mathbb{R }^k,\mathbb{R }^k)\) satisfies the following growth and dissipativity conditions:
$$\begin{aligned}&\langle f(u),u\rangle \ge -C+c|u|^{p+1},\end{aligned}$$
(2.6)
$$\begin{aligned}&f^{\prime }(u)+f^{\prime }(u)^t\ge -C I,\end{aligned}$$
(2.7)
$$\begin{aligned}&|f^{\prime }(u)|\le C(1+|u|)^{p-1}, \end{aligned}$$
(2.8)
where \(\langle \cdot ,\cdot \rangle \) stands for the scalar product in \(\mathbb{R }^k\), \(f^{\prime }(u)\) is the Jacobi matrix for \(f\), \(I\) is the identity matrix, \(c\) and \(C\) are positive constants, and \(0\le p\le \frac{n+2}{n-2}\). As for the right-hand side of (1.1), we assume that \(h\in L^2(D,\mathbb{R }^k)\) is a deterministic function and \(\eta \) is a spatially regular white noise. That is,
$$\begin{aligned} \eta (t,x)=\frac{{\partial }}{{\partial }t}\zeta (t,x), \quad \zeta (t,x)=\sum _{j=1}^\infty b_j\beta _j(t)e_j(x), \end{aligned}$$
(2.9)
where \(\{\beta _j(t),t\in \mathbb{R }\}\) is a sequence of independent two-sided Brownian motions defined on a probability space \((\Omega ,\mathcal{F },\mathbb{P })\), \(\{e_j\}\) is an orthonormal basis in \(L^2(D,\mathbb{R }^k)\) formed of the eigenfunctions of the Dirichlet Laplacian, and \(b_j\) are real numbers satisfying the condition
$$\begin{aligned} \mathfrak{B }:=\sum _{j=1}^\infty b_j^2<\infty . \end{aligned}$$
(2.10)
In what follows, we shall assume that \((\Omega ,\mathcal{F },\mathbb{P })\) is the canonical space; that is, \(\Omega \) is the space of continuous functions \(\omega :\mathbb{R }\rightarrow H\) vanishing at zero, \(\mathbb{P }\) is the law of \(\zeta \) [see (2.9)], and \(\mathcal{F }\) is the \(\mathbb{P }\)-completion of the Borel \(\sigma \)-algebra. In this case, the process \(\zeta \) can be written in the form \(\zeta ^\omega (t)=\omega (t)\), and the group of shifts \(\theta _t\) acts on \(\Omega \) by the formula \((\theta _t\omega )(s)=\omega (t+s)-\omega (t)\). Furthermore, it is well known (e.g., see Chapt. VII in [32]) that the restriction of \(\{\theta _t,t\in \mathbb{R }\}\) to any lattice \(T\mathbb{Z }\) is ergodic.
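The shift operators defined by \((\theta _t\omega )(s)=\omega (t+s)-\omega (t)\) indeed form a group, \(\theta _t\circ \theta _s=\theta _{t+s}\); the following sketch (our illustration, with an arbitrary continuous sample path vanishing at zero) verifies the identity pointwise:

```python
# Shifts on the canonical space: (theta_t omega)(s) = omega(t+s) - omega(t).
def theta(t, omega):
    return lambda s: omega(t + s) - omega(t)

omega = lambda s: s ** 3 - 2.0 * s  # any continuous path with omega(0) = 0

t, s = 1.5, -0.7
lhs = theta(t, theta(s, omega))     # theta_t(theta_s omega)
rhs = theta(t + s, omega)           # theta_{t+s} omega
print(max(abs(lhs(r) - rhs(r)) for r in (-2.0, -0.5, 0.0, 1.0, 3.0)))
```

Expanding the definitions, both sides equal \(r\mapsto \omega (t+s+r)-\omega (t+s)\); the terms \(\omega (s)\) cancel, leaving only floating-point round-off in the numerical check.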

Let us denote \(H=L^2(D,\mathbb{R }^k)\) and \(V=H_0^1(D,\mathbb{R }^k)\). The following result on the well-posedness of problem (1.1)–(1.3) can be established by standard methods used in the theory of stochastic PDE’s (e.g., see [15, 21]).

Theorem 2.2

Under the above hypotheses, for any \(u_0\in H\) there is a stochastic process \(\{u(t),t\ge 0\}\) that is adapted to the filtration generated by \(\zeta (t)\) and possesses the following properties:
  • Regularity: Almost every trajectory of \(u(t)\) belongs to the space
    $$\begin{aligned} \mathcal{X }=C(\mathbb{R }_+,H)\cap L_{\mathrm{loc}}^2(\mathbb{R }_+,V)\cap L_{\mathrm{loc}}^{p+1}(\mathbb{R }_+\times D). \end{aligned}$$
  • Solution: With probability 1, we have the relation
    $$\begin{aligned} u(t)=u_0+\int \limits _0^t\bigl (a\Delta u-f(u)+h\bigr )\,ds+\zeta (t), \quad t\ge 0, \end{aligned}$$
    where the equality holds in the space \(H^{-1}(D)\).
Moreover, the process \(u(t)\) is unique in the sense that if \(v(t)\) is another process with the same properties, then with probability 1 we have \(u(t)=v(t)\) for all \(t\ge 0\).

The family of solutions for (1.1), (1.2) constructed in Theorem 2.2 forms an RDS in the space \(H\). Let us describe in more detail a set of full measure on which the perfect co-cycle property and the Hölder-continuity in time are true.

Let us denote by \(z=z^\omega (t)\) the solution of the linear equation
$$\begin{aligned} \dot{z}-a\Delta z=h+\eta (t), \end{aligned}$$
(2.11)
supplemented with the zero initial and boundary conditions. Such a solution exists and belongs to the space \(\mathcal{Y }:=C(\mathbb{R }_+,H)\cap L_{\mathrm{loc}}^2(\mathbb{R }_+,V)\) with probability \(1\). Moreover, one can find a set \(\Omega _*\in \mathcal{F }\) of full measure such that \(\theta _t(\Omega _*)=\Omega _*\) for all \(t\in \mathbb{R }\) and \(z^\omega \in \mathcal{Y }\) for \(\omega \in \Omega _*\). We now write a solution of (1.1)–(1.3) in the form \(u=z+v\) and note that \(v\) must satisfy the equation
$$\begin{aligned} \dot{v}-a\Delta v+f(z+v)=0. \end{aligned}$$
(2.12)
For any \(\omega \in \Omega _*\) and \(u_0\in H\), this equation has a unique solution \(v\in \mathcal{X }\) issued from \(u_0\). The RDS associated with (1.1), (1.2) can be written as
$$\begin{aligned} \varphi _t^\omega (u_0)= \left\{ \begin{array}{cl} z^\omega (t)+v^\omega (t)&\text{for } \omega \in \Omega _*,\\ 0&\text{for } \omega \notin \Omega _*. \end{array}\right. \end{aligned}$$
Then \({\varvec{\varPhi }}=\{\varphi _t,t\ge 0\}\) is an RDS in the sense defined at the beginning of Sect. 2.1, and the time continuity and perfect co-cycle properties hold on \(\Omega _*\).
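The decomposition \(u=z+v\) can be illustrated on a scalar toy analogue of (1.1) (our choice of data, not the system studied in the paper: \(k=n=1\), \(a=1\), \(f(u)=u^3\), \(h=0\), so \(\dot u=-u-u^3+\eta \)). With a common discretization of the noise, the sum of the solutions of the linear and residual equations reproduces the direct scheme for \(u\) up to round-off:

```python
import numpy as np

rng = np.random.default_rng(1)
dt, n = 1e-3, 5_000
dW = rng.normal(0.0, np.sqrt(dt), n)  # common discretized noise increments

u, z, v = 0.7, 0.0, 0.7               # u(0) = u0, z(0) = 0, v(0) = u0
for dw in dW:
    u_new = u + dt * (-u - u**3) + dw     # direct scheme: du = (-u - u^3) dt + dW
    z_new = z + dt * (-z) + dw            # linear part:   dz = -z dt + dW
    v_new = v + dt * (-v - (z + v)**3)    # residual, noise-free: v' = -v - (z+v)^3
    u, z, v = u_new, z_new, v_new

print(abs(u - (z + v)))               # the splitting u = z + v reproduces u
```

The point of the splitting is visible in the loop: all the stochastic forcing is absorbed by the linear equation for \(z\), while \(v\) solves a pathwise deterministic equation with \(z\) entering as a known coefficient.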

Abstract results on exponential attractors

Exponential attractor for discrete-time RDS

Let \(H\) be a Hilbert space and let \({\varvec{\varPsi }}=\{\psi _k^\omega ,k\in \mathbb{Z }_+\}\) be a discrete-time RDS in \(H\) over a group of measure-preserving transformations \(\{\sigma _k\}\) acting on a probability space \((\Omega ,\mathcal{F },\mathbb{P })\). We shall assume that \({\varvec{\varPsi }}\) satisfies the following condition.

Condition 3.1

There is a Hilbert space \(V\) compactly embedded in \(H\), a random compact set \(\{\mathcal{A }_\omega \}\), and constants \(m,r>0\) such that the properties below are satisfied.
  • Absorption The family \(\{\mathcal{A }_\omega \}\) is a random absorbing set for \({\varvec{\varPsi }}\).

  • Stability With probability 1, we have
    $$\begin{aligned} \psi _1^\omega \bigl (\mathcal{O }_r(\mathcal{A }_\omega )\bigr )\subset \mathcal{A }_{\sigma _1\omega }. \end{aligned}$$
    (3.1)
  • Lipschitz continuity There is an almost surely finite random variable \(K_\omega \ge 1\) such that \(K_\omega ^m\in L^1(\Omega ,\mathbb{P })\) and
    $$\begin{aligned} \Vert \psi _1^\omega (u_1)-\psi _1^\omega (u_2)\Vert _V\le K_\omega \Vert u_1-u_2\Vert _H \quad \text{for } u_1,u_2\in \mathcal{O }_r(\mathcal{A }_\omega ). \end{aligned}$$
    (3.2)
  • Kolmogorov \({\varepsilon }\)-entropy There is a constant \(C\) and an almost surely finite random variable \(C_\omega \) such that \(C_\omega K_\omega ^m\in L^1(\Omega ,\mathbb{P })\),
    $$\begin{aligned} \mathcal{H }_{\varepsilon }(V,H)&\le C\,{\varepsilon }^{-m},\end{aligned}$$
    (3.3)
    $$\begin{aligned} \mathcal{H }_{\varepsilon }(\mathcal{A }_\omega ,H)&\le C_\omega {\varepsilon }^{-m}. \end{aligned}$$
    (3.4)

The following theorem is an analogue for RDS of a well-known result on the existence of an exponential attractor for deterministic dynamical systems; e.g., see Sect. 3 of the paper [31] and the references therein.

Theorem 3.2

Assume that the discrete-time RDS \({\varvec{\varPsi }}\) satisfies Condition 3.1. Then \({\varvec{\varPsi }}\) possesses an exponential attractor \(\mathcal{M }_\omega \). Moreover, the attraction property holds for the norm of \(V\):
$$\begin{aligned} d_V\bigl (\psi _k^\omega (B),\mathcal{M }_{\sigma _k\omega }\bigr ) \le C(B) e^{-\beta k} \quad \text{for } k\ge 0, \end{aligned}$$
(3.5)
where \(B\subset H\) is an arbitrary ball and \(C(B)\) and \(\beta >0\) are some constants not depending on \(k\).
The proof given below will imply that (3.5) holds for \(B=\mathcal{A }_\omega \) with \(C(B)=r\), and that in inequality (3.5) the constant in front of \(e^{-\beta k}\) has the form
$$\begin{aligned} C(B)=2^{T(B)}r, \end{aligned}$$
(3.6)
where \(T(B)\) is a time after which the image of the ball \(B\) under the mapping \(\psi _k^\omega \) belongs to the absorbing set \(\mathcal{A }_{\sigma _k\omega }\). Furthermore, as is explained in Remark 3.4 below, under an additional assumption, the fractal dimension \(\dim _f(\mathcal{M }_\omega )\) can be bounded by a deterministic constant.

Proof

We repeat the scheme used in the case of deterministic dynamical systems. However, an essential difference is that we have a random parameter and need to follow the dependence on it. In addition, the constants entering various inequalities are now (unbounded) random variables, and we shall need to apply the Birkhoff ergodic theorem to bound some key quantities.

Step 1: An auxiliary construction. Let us define a sequence of random finite sets \(V_k(\omega )\) in the following way. Applying Lemma 5.1 with \(\delta _\omega =(2K_\omega )^{-1}r\) to the random compact set \(\mathcal{A }_{\omega }\), we construct a random finite set \(U_0(\omega )\) such that
$$\begin{aligned} d^s\bigl (\mathcal{A }_{\omega },U_0(\omega )\bigr )&\le \delta _\omega ,\end{aligned}$$
(3.7)
$$\begin{aligned} \ln \bigl (\#U_0(\omega )\bigr )&\le 2^mC_\omega \delta _\omega ^{-m} \le (4/r)^mC_\omega K_{\omega }^m. \end{aligned}$$
(3.8)
Since \(K_\omega \ge 1\), we have \(\delta _\omega \le r/2\), whence it follows that \(U_0(\omega )\subset \mathcal{O }_r(\mathcal{A }_{\omega })\). Setting \(V_1(\sigma _1\omega )=\psi _1^{\omega }(U_0(\omega ))\), in view of (3.1), (3.2), and (3.7), we obtain
$$\begin{aligned} \psi _1^{\omega }(\mathcal{A }_{\omega })\subset \bigcup _{u\in V_1(\sigma _1\omega )}B_V(u,r/2)=:\mathcal{C }_1(\omega ), \quad V_1(\sigma _1\omega )\subset \mathcal{O }_{r/2}\bigl (\psi _1^\omega (\mathcal{A }_\omega )\bigr )\cap \mathcal{A }_{\sigma _1\omega }. \end{aligned}$$
Now note that \(\mathcal{C }_1(\omega )\) is a random compact set in \(H\). Moreover, it follows from (3.3) and (3.8) that
$$\begin{aligned} \mathcal{H }_{\varepsilon }(\mathcal{C }_1(\omega ),H)&\le \ln \bigl (\#V_1(\sigma _1\omega ) \bigr )+\mathcal{H }_{2{\varepsilon }/r}(V,H)\nonumber \\&\le (4/r)^m C_\omega K_{\omega }^m+(r/2)^mC{\varepsilon }^{-m}. \end{aligned}$$
(3.9)
Applying Lemma 5.1 with \(\delta _\omega =(4K_{\sigma _{1}\omega })^{-1}r\) to \(\mathcal{C }_1(\omega )\), we construct a random finite set \(U_1(\omega )\) such that
$$\begin{aligned} d^s\bigl (\mathcal{C }_1(\omega ),U_1(\omega )\bigr )&\le \delta _\omega ,\\ \ln \bigl (\#U_1(\omega )\bigr )&\le \mathcal{H }_{\delta _\omega /2}(\mathcal{C }_1(\omega ),H) \le (4/r)^m C_\omega K_{\omega }^m +2^mC K_{\sigma _{1}\omega }^m. \end{aligned}$$
Repeating the above argument and setting \(V_2(\sigma _2\omega )=\psi _1^{\sigma _{1}\omega }(U_1(\omega ))\), we obtain
$$\begin{aligned} \psi _1^{\sigma _{1}\omega }\bigl (\mathcal{C }_1(\omega )\bigr )&\subset \bigcup _{u\in V_2(\sigma _2\omega )}B_V(u,r/4)=:\mathcal{C }_2(\omega ),\\ V_2(\sigma _2\omega )&\subset \mathcal{O }_{r/4}\bigl (\psi _1^{\sigma _1\omega } (\mathcal{C }_1(\omega )\bigr )\cap \mathcal{A }_{\sigma _2\omega }. \end{aligned}$$
Moreover, \(\mathcal{C }_2(\omega )\) is a random compact set in \(H\) whose \({\varepsilon }\)-entropy satisfies the inequality [cf. (3.9)]
$$\begin{aligned} \mathcal{H }_{\varepsilon }(\mathcal{C }_2(\omega ),H)&\le \ln \bigl (\#V_2(\sigma _2\omega )\bigr )+\mathcal{H }_{4{\varepsilon }/r}(V,H)\\&\le (4/r)^m C_\omega K_{\omega }^m +2^mC K_{\sigma _{1}\omega }^m+(r/4)^mC{\varepsilon }^{-m}. \end{aligned}$$
Iterating this procedure and recalling that \(\sigma _k:\Omega \rightarrow \Omega \) is a one-to-one transformation, we construct random finite sets \(V_k(\omega ), k\ge 1\), and unions of balls
$$\begin{aligned} \mathcal{C }_k(\omega ):=\bigcup _{u\in V_k(\sigma _k\omega )}B_V(u,2^{-k}r) \end{aligned}$$
such that the following properties hold for any integer \(k\ge 1\):
$$\begin{aligned} \psi _k^{\omega }(\mathcal{A }_{\omega })&\subset \mathcal{C }_k(\omega ),\end{aligned}$$
(3.10)
$$\begin{aligned} V_k(\omega )&\subset \mathcal{O }_{2^{1-k}r}\bigl (\psi _1^{\sigma _{-1}\omega } (\mathcal{C }_{k-1}(\sigma _{-k}\omega ))\bigr )\cap \mathcal{A }_{\omega }, \end{aligned}$$
(3.11)
$$\begin{aligned} \ln \bigl (\#V_k(\omega )\bigr )&\le (4/r)^m C_{\sigma _{-k}\omega } K_{\sigma _{-k}\omega }^m +2^mC\sum _{j=1}^{k-1} K_{\sigma _{j-k}\omega }^m. \end{aligned}$$
(3.12)
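The counting behind (3.12) is an additive recursion: \(\ln(\#U_0)\) is bounded by (3.8), and each refinement step enlarges the logarithmic count by at most \(2^mCK^m\). A minimal numerical sketch, with illustrative values of \(m\), \(C\), \(r\), and of the sequence \(K_j\) (none taken from the paper):

```python
# Additive recursion behind (3.12): ln(#U_0) <= (4/r)^m * C * K^m by (3.8),
# and each covering refinement adds at most 2^m * C * K_j^m.
m, C, r = 2, 1.5, 0.25
K = [1.3, 2.1, 1.0, 1.7, 1.2]          # illustrative values, K_j >= 1

bound = (4 / r) ** m * C * K[0] ** m   # bound on ln(#U_0)
for Kj in K[1:]:
    bound += 2 ** m * C * Kj ** m      # one refinement step per iteration

# The iteration reproduces the closed form stated in (3.12).
closed_form = (4 / r) ** m * C * K[0] ** m + 2 ** m * C * sum(k ** m for k in K[1:])
assert abs(bound - closed_form) < 1e-9
```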
Step 2: Description of an attractor. Let us define a sequence of random finite sets by the rule
$$\begin{aligned} E_1(\omega )=V_1(\omega ), \qquad E_k(\omega )=V_k(\omega )\cup \psi _1^{\sigma _{-1}\omega }\bigl (E_{k-1}(\sigma _{-1}\omega )\bigr ), \quad k\ge 2. \end{aligned}$$
The very definition of \(E_k\) implies that
$$\begin{aligned} \psi _1^\omega (E_k(\omega ))\subset E_{k+1}(\sigma _1\omega ), \end{aligned}$$
(3.13)
and since \(\#V_{k}(\omega )\le \#V_{k+1}(\sigma _1\omega )\), it follows from (3.12) that
$$\begin{aligned} \ln \bigl (\#E_k(\omega )\bigr )&\le \ln k+\ln \bigl (\#V_{k}(\omega )\bigr )\nonumber \\&\le \ln k+(4/r)^m C_{\sigma _{-k}\omega } K_{\sigma _{-k}\omega }^m +2^mC\sum _{j=1}^{k-1} K_{\sigma _{j-k}\omega }^m. \end{aligned}$$
(3.14)
Furthermore, it follows from (3.10) that
$$\begin{aligned} d_V\bigl (\psi _k^\omega (\mathcal{A }_\omega ),V_k(\sigma _k\omega )\bigr )\le 2^{-k}r, \quad k\ge 0. \end{aligned}$$
(3.15)
We now define a random compact set \(\mathcal{M }_\omega \) by the formulas
$$\begin{aligned} \mathcal{M }_\omega =\bigl [\mathcal{M }_\omega ^{\prime }\bigr ]_V\,, \quad \mathcal{M }_\omega ^{\prime }=\bigcup _{k=1}^\infty E_k(\omega ). \end{aligned}$$
(3.16)
We claim that \(\mathcal{M }_\omega \) is a random exponential attractor for \({\varvec{\varPsi }}\). Indeed, the semi-invariance follows immediately from (3.13). Furthermore, inequality (3.15) implies that
$$\begin{aligned} d_V\bigl (\psi _k^\omega (\mathcal{A }_\omega ),\mathcal{M }_{\sigma _k\omega }\bigr )\le 2^{-k}r \quad \text{ for any}\,k\ge 0. \end{aligned}$$
Recalling that \(\mathcal{A }_\omega \) is an absorbing set and using inclusion (2.4), together with the co-cycle property, we obtain
$$\begin{aligned} d_V\bigl (\psi _k^\omega (B),\mathcal{M }_{\sigma _k\omega }\bigr ) \le d_V\bigl (\psi _{k-T}^{\sigma _T\omega }(\mathcal{A }_{\sigma _T\omega }), \mathcal{M }_{\sigma _{k-T}(\sigma _T\omega )}\bigr )\le 2^{T-k}r, \end{aligned}$$
where \(T=T(B)\) is the constant entering (2.4). This implies the exponential attraction inequality (3.5) with \(\beta =\ln 2\) and \(C(B)=2^{T(B)}r\). It remains to prove that \(\mathcal{M }_\omega \) has a finite fractal dimension. This is done in the next step.
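The passage from the dyadic bound \(2^{T-k}r\) to the exponential form (3.5) is a direct rewriting. A quick numerical check, with illustrative values of the radius \(r\) and the absorption time \(T\):

```python
import math

# The bound d_V(psi_k(B), M_{sigma_k omega}) <= 2^{T-k} r obtained above
# equals C(B) * exp(-beta*k) with beta = ln 2 and C(B) = 2^{T(B)} r.
r, T = 0.5, 7                  # illustrative radius and absorption time
beta = math.log(2)
C_B = 2 ** T * r

for k in range(T, 40):
    lhs = 2 ** (T - k) * r             # bound from the construction
    rhs = C_B * math.exp(-beta * k)    # form appearing in (3.5)
    assert abs(lhs - rhs) <= 1e-9 * rhs
```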

Step 3: Estimation of the fractal dimension. We shall need the following lemma, whose proof is given at the end of this subsection.

Lemma 3.3

Under the hypotheses of Theorem 3.2, for any integers \(l\ge 0, k\in \mathbb{Z }\), and \(m\in [0,l]\), we have
$$\begin{aligned} d_V\bigl (E_k(\sigma _k\omega ),\psi _m^{\sigma _{k-m}\omega } (\mathcal{A }_{\sigma _{k-m}\omega })\bigr )\le 2^{2-(k-l)}r\prod _{j=1}^{l}K_{\sigma _{k-j}\omega }. \end{aligned}$$
(3.17)
Inequality (3.17) with \(m=l\) and \(\omega \) replaced by \(\sigma _{-k}\omega \) implies that
$$\begin{aligned} d_V\bigl (E_k(\omega ),\psi _{l}^{\sigma _{-l}\omega } (\mathcal{A }_{\sigma _{-l}\omega })\bigr )\le 2^{2-(k-l)}r\prod _{j=1}^{l}K_{\sigma _{-j}\omega }, \end{aligned}$$
(3.18)
where \(k\ge 1\) is arbitrary. On the other hand, in view of (3.15) with \(k=l\) and \(\omega \) replaced by \(\sigma _{-l}\omega \), we have
$$\begin{aligned} d_V\bigl (\psi _{l}^{\sigma _{-l}\omega }(\mathcal{A }_{\sigma _{-l}\omega }), V_l(\omega )\bigr )\le 2^{-l}r. \end{aligned}$$
Combining this with (3.18), we obtain
$$\begin{aligned} d_V\biggl (\,\bigcup _{k\ge n}E_k(\omega ),V_l(\omega )\biggr ) \le r\Bigl (2^{-l}+2^{2-(n-l)}\prod _{j=1}^{l}K_{\sigma _{-j}\omega }\Bigr ), \end{aligned}$$
(3.19)
where \(n\ge 1\) and \(l\in [1,n]\) are arbitrary integers. Since \(\{E_k(\omega )\}\) is an increasing sequence and \(V_l(\omega )\subset E_l(\omega )\subset \mathcal{M }_\omega \) for any \(l\ge 1\), inequality (3.19) implies that
$$\begin{aligned} d_V^s\biggl (\mathcal{M }_\omega ,\bigcup _{l=1}^n V_l(\omega )\biggr ) \le r\inf _{l\in [1,n]} \Bigl (2^{-l}+2^{2-(n-l)}\prod _{j=1}^{l} K_{\sigma _{-j}\omega }\Bigr )=:{\varepsilon }_n(\omega ),\nonumber \\ \end{aligned}$$
(3.20)
where \(n\ge 1\) is arbitrary. If we denote by \(N_{\varepsilon }(\omega )\) the minimal number of balls of radius \({\varepsilon }>0\) that are needed to cover \(\mathcal{M }_\omega \), then inequality (3.20) implies that \(N_{{\varepsilon }_n(\omega )}(\omega )\le \sum _{k=1}^n\#V_k(\omega )\). Since \(V_k\subset E_k\) and \(\#V_k(\omega )\ge \#V_{k-1}(\sigma _{-1}\omega )\), it follows from (3.12) that
$$\begin{aligned} \ln N_{{\varepsilon }_n(\omega )}(\omega )&\le \ln \bigl (n\,\#E_n(\omega )\bigr )\le \ln \bigl (n^2\,\#V_n(\omega )\bigr )\nonumber \\&\le 2\ln n+(4/r)^m C_{\sigma _{-n}\omega } K_{\sigma _{-n}\omega }^m + 2^m C\sum _{k=1}^{n-1} K_{\sigma _{-k}\omega }^m. \end{aligned}$$
(3.21)
Since \(K^m\in L^1(\Omega ,\mathbb{P })\), by the Birkhoff ergodic theorem (see Sect. 1.6 in [34]), we have
$$\begin{aligned} \lim _{n\rightarrow \infty }\frac{1}{n}\sum _{k=1}^n K_{\sigma _{-k}\omega }^m= \xi _\omega , \end{aligned}$$
(3.22)
where \(\xi _\omega \) is an integrable random variable. This implies, in particular, that \(n^{-1} K_{\sigma _{-n}\omega }^m\rightarrow 0\) as \(n\rightarrow \infty \). By a similar argument, \(n^{-1} C_{\sigma _{-n}\omega }K_{\sigma _{-n}\omega }^m\rightarrow 0\) as \(n\rightarrow \infty \). Combining this with (3.22) and (3.21), we derive
$$\begin{aligned} \ln N_{{\varepsilon }_n(\omega )}(\omega )\le 2^mC \xi _\omega n+o_\omega (n), \end{aligned}$$
(3.23)
where, given \(\alpha \in \mathbb{R }\), we denote by \(o_\omega (n^\alpha )\) any sequence of positive random variables such that \(n^{-\alpha }o_\omega (n^\alpha )\rightarrow 0\) a.s. as \(n\rightarrow \infty \). On the other hand, since the function \(\log _2 x\) is concave, it follows from (3.22) that
$$\begin{aligned} \frac{m}{l}\sum _{j=1}^l\log _2K_{\sigma _{-j}\omega }\le \log _2\Bigl (\frac{1}{l}\sum _{j=1}^l K_{\sigma _{-j}\omega }^m\Bigr ) =\log _2\bigl (\xi _\omega +o_\omega (1)\bigr ), \end{aligned}$$
(3.24)
whence we conclude that the random variable \({\varepsilon }_n\) defined in (3.20) satisfies the inequality
$$\begin{aligned} {\varepsilon }_n(\omega )\le r\inf _{l\in [1,n]} \bigl (2^{-l}+4\cdot 2^{-(n-l)}(\xi _\omega +o_\omega (1))^{l/m}\bigr ). \end{aligned}$$
Taking \(l=mn\bigl (2m+\log _2(\xi _\omega +o_\omega (1))\bigr )^{-1}\), we obtain
$$\begin{aligned} {\varepsilon }_n(\omega ) \le 5\exp \Bigl (-\frac{mn\ln 2}{2m+ \log _2(\xi _\omega +o_\omega (1))}\Bigr ). \end{aligned}$$
(3.25)
Combining this inequality with (3.23), we derive
$$\begin{aligned} \lim _{n\rightarrow \infty }\frac{\ln N_{{\varepsilon }_n(\omega )}(\omega )}{\ln {\varepsilon }_n^{-1}(\omega )} \le \frac{2^mC\xi _\omega (\ln \xi _\omega +2m)}{m\ln 2}=:d_\omega . \end{aligned}$$
It is now straightforward to see that
$$\begin{aligned} \dim _f(\mathcal{M }_\omega )=\limsup _{{\varepsilon }\rightarrow 0^+}\frac{\ln N_{\varepsilon }}{\ln {\varepsilon }^{-1}} \le d_\omega . \end{aligned}$$
(3.26)
The proof of the theorem is complete.\(\square \)
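The choice of \(l\) made before (3.25) balances the two terms in \({\varepsilon }_n(\omega )\). Ignoring the \(o_\omega (1)\) correction and treating \(l\) as a real parameter, this balance can be verified directly; the values of \(\xi\), \(m\), and \(n\) below are illustrative:

```python
import math

# With l = m*n / (2m + log2(xi)), the second term 4 * 2^{-(n-l)} * xi^{l/m}
# equals 4 times the first term 2^{-l}, so their sum is 5 * 2^{-l}, i.e. the
# bound 5 * exp(-m*n*ln2 / (2m + log2(xi))) of (3.25).
xi, m, n = 3.7, 2, 120
l = m * n / (2 * m + math.log2(xi))

t1 = 2.0 ** (-l)
t2 = 4.0 * 2.0 ** (-(n - l)) * xi ** (l / m)
bound = 5.0 * math.exp(-m * n * math.log(2) / (2 * m + math.log2(xi)))

assert abs(t2 - 4.0 * t1) <= 1e-9 * t2   # the two exponents balance
assert abs((t1 + t2) - bound) <= 1e-9 * bound
```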

Remark 3.4

It follows from (3.26) that if the random variable \(\xi _\omega \) entering the Birkhoff theorem is bounded [see (3.22)], then the fractal dimension of \(\mathcal{M }_\omega \) can be bounded by a deterministic constant. For instance, if the group of shift operators \(\{\sigma _k\}\) is ergodic, then \(\xi _\omega \) is constant, and the conclusion holds. This observation will be important in applications of Theorem 3.2.
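The role of the Birkhoff theorem here can be illustrated numerically. In the sketch below the values \(K_{\sigma _{-k}\omega }\) are replaced by i.i.d. draws, a particular ergodic situation in which \(\xi _\omega \) is the constant \(\mathbb{E }K^m\); all numbers are illustrative:

```python
import random

random.seed(0)
m = 2
n = 200_000
K = [1.0 + random.random() for _ in range(n)]   # i.i.d. samples, K >= 1

avg = sum(k ** m for k in K) / n                # Birkhoff average of K^m
xi_exact = 7.0 / 3.0                            # E[(1+U)^2] for U ~ Unif(0,1)

assert abs(avg - xi_exact) < 0.02               # averages settle near xi
assert K[-1] ** m / n < 1e-4                    # n^{-1} K^m -> 0, as used above
```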

Proof of Lemma 3.3

The co-cycle property (2.1) and inclusion (3.1) imply that \(\psi _m^{\sigma _{k-m}\omega }(\mathcal{A }_{\sigma _{k-m}\omega })\supset \psi _l^{\sigma _{k-l}\omega }(\mathcal{A }_{\sigma _{k-l}\omega })\) for \(m\le l\). Hence, it suffices to establish (3.17) for \(m=l\).

We first note that inequality (3.2), inclusion (3.11), and the definition of \(\mathcal{C }_k(\omega )\) imply that
$$\begin{aligned} d_V\bigl (V_n(\omega ),\psi _1^{\sigma _{-1}\omega }(V_{n-1}(\sigma _{-1}\omega )\bigr )\le 2^{2-n}rK_{\sigma _{-1}\omega }, \end{aligned}$$
where \(n\ge 1\) is an arbitrary integer, and we set \(V_0(\omega )=U_0(\omega )\). Combining this with (3.2) and the co-cycle property, for any integers \(n\ge 1\) and \(q\ge 0\) we derive
$$\begin{aligned} d_V\bigl (\psi _q^\omega (V_n(\omega )),\psi _{q+1}^{\sigma _{-1} \omega }(V_{n-1}(\sigma _{-1}\omega )\bigr ) \le 2^{2-n}r\prod _{j=0}^qK_{\sigma _{j-1}\omega }. \end{aligned}$$
(3.27)
Applying (3.27) to the pairs \((n,q)=(k-i,i), i=0,\dots ,l-1\), with \(\omega \) replaced by \(\sigma _{k-i}\omega \), using the triangle inequality, and recalling that \(K_\omega \ge 1\), we obtain
$$\begin{aligned} d_V\bigl (V_k(\sigma _k\omega ),\psi _{l}^{\sigma _{k-l}\omega }(V_{k-l} (\sigma _{k-l}\omega ))\bigr )&\le r\sum _{i=0}^{l-1} 2^{2-k+i} \prod _{j=0}^i K_{\sigma _{k-i+j-1}\omega }\\&\le 2^{2-(k-l)} r\prod _{j=1}^{l}K_{\sigma _{k-j}\omega }, \end{aligned}$$
where \(k\ge 1\) and \(l\in [1,k]\) are arbitrary integers. A similar argument based on the application of (3.27) to the pairs \((n,q)=(k-s-i,s+i), i=0,\dots , l-s-1\), with \(\omega \) replaced by \(\sigma _{k-s}\omega \), enables one to prove that for any integer \(l\in [1,k]\) we have
$$\begin{aligned} d_V\bigl (\psi _s^{\sigma _{k-s}\omega }(V_{k-s}(\sigma _{k-s}\omega )), \psi _{l}^{\sigma _{k-l}\omega }(V_{k-l}(\sigma _{k-l}\omega ))\bigr ) \le 2^{2-(k-l)} r\prod _{j=1}^{l}K_{\sigma _{k-j}\omega },\nonumber \\ \end{aligned}$$
(3.28)
where \(s\in [0,l-1]\) is an arbitrary integer. Recalling that \(V_n(\omega )\subset \mathcal{A }_\omega \) for any \(n\ge 1\) (see (3.11)), we deduce from (3.28) that
$$\begin{aligned} d_V\bigl (\psi _s^{\sigma _{k-s}\omega }(V_{k-s}(\sigma _{k-s}\omega )), \psi _{l}^{\sigma _{k-l}\omega }(\mathcal{A }_{\sigma _{k-l}\omega })\bigr ) \le 2^{2-(k-l)} r\prod _{j=1}^{l}K_{\sigma _{k-j}\omega } \end{aligned}$$
(3.29)
for any integer \(s\in [0,k-1]\). Since
$$\begin{aligned} E_k(\sigma _k\omega ) =\bigcup _{s=0}^{k-1} \psi _{s}^{\sigma _{k-s}\omega }(V_{k-s}(\sigma _{k-s}\omega )), \end{aligned}$$
inequality (3.29) immediately implies (3.17) with \(m=l\). \(\square \)

Dependence of attractors on a parameter

We now turn to the case in which the RDS in question depends on a parameter. Namely, let \(Y\subset \mathbb{R }\) and \(\mathcal{T }\subset \mathbb{R }\) be bounded closed intervals. We consider a discrete-time RDS \({\varvec{\varPsi }}^y=\{\psi _k^{y,\omega }:H\rightarrow H, k\ge 0\}\) depending on the parameter \(y\in Y\) and a family of measurable isomorphisms \(\{\theta _\tau :\Omega \rightarrow \Omega ,\tau \in \mathcal{T }\}\). We assume that \(\theta _\tau \) commutes with \(\sigma _1\) for any \(\tau \in \mathcal{T }\), and the following uniform version of Condition 3.1 is satisfied.

Condition 3.5

There is a Hilbert space \(V\) compactly embedded in \(H\), almost surely finite random variables \(R_\omega ^y, R_\omega \ge 0\), and positive constants \(m, r\), and \(\alpha \le 1\) such that \(R_\omega ^y\le R_\omega \) for all \(y\in Y\), and the following properties hold.
  • Absorption and continuity For any ball \(B\subset H\) there is a time \(T(B)\ge 0\) such that
    $$\begin{aligned} \psi _k^{y,\theta _\tau \omega }(B)\subset \mathcal{A }_\omega ^y\quad \text{ for}\, k\ge T(B), y\in Y, \tau \in \mathcal{T }, \omega \in \Omega , \end{aligned}$$
    (3.30)
    where we set \(\mathcal{A }_\omega ^y=B_V(R_\omega ^y)\). Moreover, there is an integrable random variable \(L_\omega \ge 1\) such that
    $$\begin{aligned} |R_{\theta _{\tau _1}\omega }^{y_1}-R_{\theta _{\tau _2}\omega }^{y_2}| \le L_\omega \bigl (|y_1-y_2|^\alpha +|\tau _1-\tau _2|^\alpha \bigr ) \end{aligned}$$
    (3.31)
    for \(y_1,y_2\in Y, \tau _1,\tau _2\in \mathcal{T }\), and \(\omega \in \Omega \).
  • Stability With probability 1, we have
$$\begin{aligned} \psi _1^{y,\omega }\bigl (\mathcal{O }_r(\mathcal{A }_\omega ^y)\bigr ) \subset \mathcal{A }_{\sigma _1\omega }^y\quad \text{ for}\, y\in Y. \end{aligned}$$
    (3.32)
  • Hölder continuity There are almost surely finite random variables \(K_\omega ^y,K_\omega \ge 1\) such that \(K_\omega ^y\le K_\omega \) for all \(y\in Y, (RK)^m\in L^1(\Omega ,\mathbb{P })\), and
    $$\begin{aligned} \Vert \psi _1^{y_1,\theta _{\tau _1}\omega }(u_1)\!-\!\psi _1^{y_2, \theta _{\tau _2}\omega }(u_2)\Vert _V \le K_\omega ^{y_1,y_2} \bigl (|y_1\!-\!y_2|^\alpha \!+\!|\tau _1\!-\!\tau _2|^\alpha \!+\!\Vert u_1-u_2\Vert _H\bigr )\nonumber \\ \end{aligned}$$
    (3.33)
    for \(y_1,y_2\in Y, \tau _1,\tau _2\in \mathcal{T }, u_1,u_2\in \mathcal{O }_r(\mathcal{A }_\omega ^{y_1}\cup \mathcal{A }_\omega ^{y_2})\), and \(\omega \in \Omega \), where we set \(K_\omega ^{y_1,y_2}=\max (K_\omega ^{y_1},K_\omega ^{y_2})\).
  • Kolmogorov \({\varepsilon }\)-entropy Inequality (3.3) holds with some \(C\) not depending on \({\varepsilon }\).

In particular, for any fixed \(y\in Y\), the RDS \({\varvec{\varPsi }}^y\) satisfies Condition 3.1 and, hence, possesses an exponential attractor \(\mathcal{M }_\omega ^y\). The following result is a refinement of Theorem 3.2.

Theorem 3.6

Let \({\varvec{\varPsi }}^y\) be a family of RDS satisfying Condition 3.5. Then there is a random compact set \((y,\omega )\mapsto \mathcal{M }_\omega ^y\) with the underlying space \(Y\times \Omega \) and a set of full measure \(\Omega _*\in \mathcal{F }\) such that the following properties hold.

Attraction: For any \(y\in Y\), the family \(\{\mathcal{M }_\omega ^y\}\) is a random exponential attractor for \({\varvec{\varPsi }}^y\). Moreover, the attraction property holds uniformly in \(y\) and \(\omega \) in the following sense: for any ball \(B\subset H\) there is \(C(B)>0\) such that
$$\begin{aligned} \sup _{y\in Y}d_V\bigl (\psi _k^{y,\omega }(B),\mathcal{M }_{\sigma _k \omega }^y\bigr ) \le C(B) e^{-\beta k} \quad \text{ for} \,k\ge 0, \omega \in \Omega _*, \end{aligned}$$
(3.34)
where \(\beta >0\) is a constant not depending on \(B, k, y\), and \(\omega \).
Hölder continuity: There are finite random variables \(P_\omega \) and \(\gamma _\omega \in (0,1]\) such that
$$\begin{aligned} d_V^s(\mathcal{M }_{\omega }^{y_1},\mathcal{M }_{\omega }^{y_2}) \le P_\omega |y_1-y_2|^{\gamma _\omega } \quad \text{ for}\, y_1,y_2\in Y, \omega \in \Omega _*. \end{aligned}$$
(3.35)
If, in addition, the random variable \(\xi _\omega \) entering (3.22) is bounded, then \(\gamma _\omega \) can be chosen to be constant, and we have the inequality
$$\begin{aligned} d_V^s(\mathcal{M }_{\theta _{\tau _1}\omega }^{y},\mathcal{M }_{\theta _{\tau _2}\omega }^{y}) \le Q_\omega |\tau _1-\tau _2|^\gamma \quad \text{ for}\, y\in Y, \tau _1,\tau _2\in \mathcal{T }, \omega \in \Omega _*, \end{aligned}$$
(3.36)
where \(\gamma \in (0,1]\), and \(Q_\omega \) is a finite random constant.

In addition, it can be shown that all the moments of the random variables \(P_\omega \) and \(Q_\omega \) are finite. The proof of this property requires some estimates for the rate of convergence in the Birkhoff ergodic theorem. Those estimates can be derived from exponential bounds for the time averages of some norms of solutions. Since the corresponding argument is technically rather complicated, we shall confine ourselves to the proof of the result stated above.

Proof of Theorem 3.6

To establish the first assertion, we repeat the scheme used in the proof of Theorem 3.2, applying Corollary 5.3 and Lemma 5.5 instead of Lemma 5.1 to construct coverings of random compact sets. Namely, let us denote by \(U_k^y(\omega ), V_k^y(\omega )\), and \(\mathcal{C }_k^y(\omega )\) with \(y\in Y\) the random sets described in the proof of Theorem 3.2 for the RDS \({\varvec{\varPsi }}^y\). In particular, \(U_k^y(\omega )\) is a random finite set such that
$$\begin{aligned} d^s\bigl (U_k^y(\omega ),\mathcal{C }_k^y(\omega )\bigr ) \le \bigl (2^{k+1}K_{\sigma _k\omega }\bigr )^{-1}r,\quad k\ge 0, \end{aligned}$$
(3.37)
where \(\mathcal{C }_0^y(\omega )=\mathcal{A }_\omega ^y\) and
$$\begin{aligned} \mathcal{C }_k^y(\omega )=\bigcup _{u\in V_k^y(\sigma _k\omega )}B_V(u,2^{-k}r),\quad V_k^y(\sigma _k\omega )=\psi _1^{y,\sigma _{k-1}\omega } \bigl (U_{k-1}^y(\omega )\bigr ) \end{aligned}$$
(3.38)
for \(k\ge 1\). We apply Corollary 5.3 to construct a random finite set \(R\mapsto U_{0,R}\) satisfying (5.14)–(5.16) with \(\delta =\frac{r}{2K_\omega ^y}\) and then define \(U_0^y(\omega ):=U_{0,R_\omega ^y}\). The subsequent sets \(U_k^y(\omega ), k\ge 1\), are constructed with the help of Lemma 5.5. What has been said implies the following bound for the number of elements of \(U_k^y(\omega )\) [cf. (3.12)]:
$$\begin{aligned} \ln \bigl (\#U_k^y(\omega )\bigr ) \le 4\bigl (\tfrac{32}{r}\bigr )^mC_\omega ^y K_\omega ^m +4^m C\sum _{j=1}^kK_{\sigma _j\omega }^m, \end{aligned}$$
where \(C_\omega ^y=C(R_\omega ^y)^m\). This enables one to repeat the argument of the proof of Theorem 3.2 and to conclude that the random compact set defined by relations (3.16) is an exponential attractor for \({\varvec{\varPsi }}^y\) (with a uniform rate of attraction).

We now turn to the property of Hölder continuity for \(\mathcal{M }_\omega ^y\). Inequalities (3.35) and (3.36) are proved by similar arguments, and therefore we give a detailed proof for the first of them and confine ourselves to the scheme of the proof for the other. Inequality (3.35) is established in four steps.

Step 1. We first show that
$$\begin{aligned} d_V^s\bigl (V_k^{y_1}(\omega ),V_k^{y_2}(\omega )\bigr )\le |y_1 -y_2|^\alpha \sum _{j=1}^{k}\prod _{i=1}^{j}K_{\sigma _{-i}\omega }, \end{aligned}$$
(3.39)
where \(|y_1-y_2|\le 1\). The proof is by induction on \(k\). For \(k=1\), the random finite set \(U_0^y(\omega )\) does not depend on \(\omega \). Recalling that \(V_1^y(\omega )=\psi _1^{y,\sigma _{-1}\omega }(U_0^y(\sigma _{-1}\omega ))\) and using (3.33), for \(|y_1-y_2|\le 1\) we get the inequality
$$\begin{aligned} d_V^s\bigl (V_1^{y_1}(\omega ),V_1^{y_2}(\omega )\bigr ) \le K_{\sigma _{-1}\omega } |y_1-y_2|^\alpha , \end{aligned}$$
which coincides with (3.39) for \(k=1\). Assuming that inequality (3.39) is established for \(1\le k\le m\), let us prove it for \(k=m+1\). In view of Lemma 5.5, the random finite set \(U_m^y(\omega )\) satisfying (3.37) can be constructed in such a way that
$$\begin{aligned} d^s\bigl (U_m^{y_1}(\omega ),U_m^{y_2}(\omega )\bigr ) \le d^s\bigl (V_m^{y_1}(\sigma _m\omega ),V_m^{y_2}(\sigma _m\omega )\bigr ). \end{aligned}$$
Combining this with (3.33), we see that
$$\begin{aligned} d_V^s\bigl (V_{m+1}^{y_1}(\omega ),V_{m+1}^{y_2}(\omega )\bigr ) \le K_{\sigma _{-1}\omega }\bigl \{|y_1-y_2|^\alpha +d_V^s\bigl (V_m^{y_1} (\sigma _{-1}\omega ),V_m^{y_2}(\sigma _{-1}\omega )\bigr )\bigr \}. \end{aligned}$$
Using inequality (3.39) with \(k=m\) and \(\omega \) replaced by \(\sigma _{-1}\omega \) to estimate the second term on the right-hand side, we arrive at (3.39) with \(k=m+1\).
Step 2. We now prove that
$$\begin{aligned} d_V^s(\mathcal{M }_\omega ^{y_1},\mathcal{M }_\omega ^{y_2}) \le 2{\varepsilon }_n(\omega ) +|y_1-y_2|^\alpha \sum _{k=1}^nk\prod _{i=1}^k K_{\sigma _{-i}\omega }, \end{aligned}$$
(3.40)
where \(n\ge 1\) is an arbitrary integer and \({\varepsilon }_n(\omega )\) is defined in (3.20). Indeed, inequality (3.20), which was proved in the case of a single RDS, remains true in the present parameter-dependent setting:
$$\begin{aligned} d_V^s\biggl (\mathcal{M }_\omega ^{y_p},\bigcup _{k=1}^n V_k^{y_p}(\omega )\biggr )\le {\varepsilon }_n(\omega ), \quad p=1,2. \end{aligned}$$
Combining this with the obvious inequality
$$\begin{aligned} d_V^s(A_1\cup A_2,B_1\cup B_2)\le d_V^s(A_1,B_1)+d_V^s(A_2,B_2), \end{aligned}$$
we derive
$$\begin{aligned} d_V^s(\mathcal{M }_\omega ^{y_1},\mathcal{M }_\omega ^{y_2}) \le 2{\varepsilon }_n(\omega ) +\sum _{k=1}^n d_V^s(V_k^{y_1}(\omega ),V_k^{y_2}(\omega )). \end{aligned}$$
Using (3.39) to estimate each term of the sum on the right-hand side, we arrive at (3.40).
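The "obvious inequality" for unions invoked above can be checked on small finite sets, with \(d_V^s\) realised as the symmetric (Hausdorff) distance; the point sets below are illustrative:

```python
def d_s(A, B):
    """Symmetric (Hausdorff) distance between finite subsets of R."""
    one = max(min(abs(a - b) for b in B) for a in A)
    two = max(min(abs(a - b) for a in A) for b in B)
    return max(one, two)

A1, B1 = [0.0, 1.0], [0.1, 1.2]
A2, B2 = [5.0], [4.6, 5.3]

lhs = d_s(A1 + A2, B1 + B2)       # distance between the unions
rhs = d_s(A1, B1) + d_s(A2, B2)
assert lhs <= rhs                  # d^s(A1 u A2, B1 u B2) <= d^s(A1,B1) + d^s(A2,B2)
```

(In fact the sharper bound with `max` in place of the sum also holds; the sum suffices for the argument.)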
Step 3. Suppose now we have shown that
$$\begin{aligned} \sum _{k=1}^nk\prod _{i=1}^k K_{\sigma _{-i}\omega }\le \exp (\zeta _\omega ^n\,n), \quad n\ge 1, \end{aligned}$$
(3.41)
where \(\zeta _\omega ^n\ge 1\) is a sequence of almost surely finite random variables such that
$$\begin{aligned} \lim _{n\rightarrow \infty }\zeta _\omega ^n=:\zeta _\omega <\infty \quad \text{with probability 1}. \end{aligned}$$
In this case, combining (3.40) with (3.25) and (3.41), we derive
$$\begin{aligned} d_V^s(\mathcal{M }_\omega ^{y_1},\mathcal{M }_\omega ^{y_2}) \le 10\exp (-\eta _\omega ^n\,n)+\exp (\zeta _\omega ^n\,n)|y_1-y_2|^\alpha , \end{aligned}$$
(3.42)
where we set
$$\begin{aligned} \eta _\omega ^n=\frac{m\ln 2}{2m+\log _2(\xi _\omega +o_\omega (1))}. \end{aligned}$$
We wish to optimize the choice of \(n\) in (3.42). To this end, first note that
$$\begin{aligned} \lim _{n\rightarrow \infty }\eta _\omega ^n=:\eta _\omega =\frac{m\ln 2}{2m+\log _2\xi _\omega }>0. \end{aligned}$$
Let \(n_1(\omega )\ge 1\) be the smallest integer such that
$$\begin{aligned} \zeta _\omega ^n\le 2\zeta _\omega , \quad \eta _\omega ^n\ge \frac{\eta _\omega }{2} \quad \text{ for}\,n\ge n_1(\omega ), \end{aligned}$$
(3.43)
and let \(n_2(\omega ,r)\) be the smallest integer greater than \(2\eta _\omega ^{-1}\gamma _\omega \ln r^{-1}\), where the small random constant \(\gamma _\omega >0\) will be chosen later. Note that if
$$\begin{aligned} r\le \tau _\omega :=\exp \bigl (-\tfrac{n_1(\omega )\eta _\omega }{2\gamma _\omega }\bigr ), \end{aligned}$$
then \(n_2(\omega ,r)\ge n_1(\omega )\). Combining (3.42) and (3.43), for \(|y_1-y_2|\le \tau _\omega \) and \(n=n_2(\omega ,|y_1-y_2|)\), we obtain
$$\begin{aligned} d_V^s(\mathcal{M }_\omega ^{y_1},\mathcal{M }_\omega ^{y_2})&\le 10\exp (-\eta _\omega n/2)+\exp (2\zeta _\omega n)|y_1-y_2|^\alpha \\&\le 10\,|y_1-y_2|^{\gamma _\omega } +|y_1-y_2|^{\alpha -8\zeta _\omega \gamma _\omega /\eta _\omega }. \end{aligned}$$
Choosing
$$\begin{aligned} \gamma _\omega =\frac{\alpha \eta _\omega }{\eta _\omega +8\zeta _\omega }, \end{aligned}$$
(3.44)
we obtain
$$\begin{aligned} d_V^s(\mathcal{M }_\omega ^{y_1},\mathcal{M }_\omega ^{y_2}) \le 11\,|y_1-y_2|^{\gamma _\omega } \quad \text{ for}\,|y_1-y_2|\le \tau _\omega . \end{aligned}$$
This obviously implies the required inequality (3.35) with an almost surely finite random constant \(P_\omega \).
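The exponent (3.44) is exactly the value of \(\gamma _\omega \) equating the two exponents \(\gamma _\omega \) and \(\alpha -8\zeta _\omega \gamma _\omega /\eta _\omega \) appearing in the last display. A one-line check with illustrative values of \(\alpha \), \(\eta _\omega \), and \(\zeta _\omega \):

```python
# gamma = alpha*eta / (eta + 8*zeta) solves gamma = alpha - 8*zeta*gamma/eta,
# i.e. it equates the exponents of the two terms in the Hoelder bound.
alpha, eta, zeta = 1.0, 0.3, 1.2    # illustrative values; eta, zeta > 0
gamma = alpha * eta / (eta + 8 * zeta)

assert abs(gamma - (alpha - 8 * zeta * gamma / eta)) < 1e-12
assert 0 < gamma <= alpha           # gamma is an admissible Hoelder exponent
```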
Step 4. It remains to prove (3.41). In view of (3.24), we have
$$\begin{aligned} \prod _{i=1}^k K_{\sigma _{-i}\omega } \le \bigl (\xi _\omega + o_\omega (1)\bigr )^{n/m} \quad \text{ for}\, 1\le k\le n. \end{aligned}$$
(3.45)
It follows that
$$\begin{aligned} \sum _{k=1}^nk\prod _{i=1}^k K_{\sigma _{-i}\omega } \le \frac{n(n+1)}{2}\bigl (\xi _\omega +o_\omega (1)\bigr )^{n/m}, \end{aligned}$$
whence we obtain (3.41) with
$$\begin{aligned} \zeta _\omega ^n=\zeta _\omega +o_\omega (1), \quad \zeta _\omega =\frac{1}{m}\ln \xi _\omega . \end{aligned}$$
(3.46)
This completes the proof of (3.35). It is straightforward to see from (3.44) and the explicit formulas for \(\zeta _\omega \) and \(\eta _\omega \) that if \(\xi _\omega \ge 1\) is bounded, then \(\gamma _\omega \) can be chosen to be independent of \(\omega \).
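The growth rate \(\zeta _\omega =m^{-1}\ln \xi _\omega \) in (3.46) can be seen in the simplest stationary situation \(K_{\sigma _{-i}\omega }\equiv K\), where \(\xi _\omega =K^m\) exactly; the values of \(K\) and \(m\) below are illustrative:

```python
import math

# With constant K we have prod_{i<=k} K = K^k = xi^{k/m} with xi = K^m, so
# sum_{k=1}^n k*K^k <= n(n+1)/2 * xi^{n/m}, and the (1/n)-log of the
# right-hand side tends to zeta = ln(xi)/m, as in (3.41) and (3.46).
K, m = 1.4, 2
xi = K ** m
zeta = math.log(xi) / m

for n in (10, 30, 60):
    s = sum(k * K ** k for k in range(1, n + 1))
    assert s <= n * (n + 1) / 2 * xi ** (n / m)

def rate(n):
    # (1/n) * ln( n(n+1)/2 * xi^{n/m} ), computed in the log domain
    return (math.log(n * (n + 1) / 2) + (n / m) * math.log(xi)) / n

assert abs(rate(100_000) - zeta) < 1e-3   # the polynomial factor is o(n)
```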
We now turn to the scheme of the proof of (3.36). Suppose we have shown that [cf. (3.39)]
$$\begin{aligned} d_V^s\bigl (V_k^{y}(\theta _{\tau _1}\omega ),V_k^{y}(\theta _{\tau _2} \omega )\bigr )\le D_k(\omega )|\tau _1-\tau _2|^\alpha \quad \text{ for}\, y\in Y, |\tau _1-\tau _2|\le 1,\nonumber \\ \end{aligned}$$
(3.47)
where we set
$$\begin{aligned} D_k(\omega )=c\,L_{\sigma _{-k}\omega } \prod _{i=1}^{k}K_{\sigma _{-i}\omega } +\sum _{j=1}^{k}\prod _{i=1}^{j}K_{\sigma _{-i}\omega }, \end{aligned}$$
(3.48)
and \(c\ge 1\) is the constant in (5.12). In this case, repeating the argument used in Step 2, we derive [cf. (3.40)]
$$\begin{aligned} d_V^s(\mathcal{M }_{\theta _{\tau _1}\omega }^{y},\mathcal{M }_{\theta _{\tau _2}\omega }^{y}) \le {\varepsilon }_n(\theta _{\tau _1}\omega )+{\varepsilon }_n(\theta _{\tau _2}\omega ) +D_n(\omega )\,|\tau _1-\tau _2|^\alpha , \end{aligned}$$
(3.49)
where \(n\ge 1\) is an arbitrary integer and \({\varepsilon }_n(\omega )\) is defined in (3.20). If we prove that [cf. (3.41)]
$$\begin{aligned} D_n(\omega )\le \exp (\zeta _\omega ^n\,n), \qquad \lim _{n\rightarrow \infty }\zeta _\omega ^n=\zeta \in \mathbb{R }_+\quad \text{with probability 1}, \end{aligned}$$
(3.50)
then the argument of Step 3 combined with the boundedness of \(\xi _\omega \) implies the required inequality (3.36). To prove (3.50), note that, by the Birkhoff theorem, there is an integrable random variable \(\lambda _\omega \ge 1\) such that
$$\begin{aligned} \sum _{k=1}^n L_{\sigma _{-k}\omega }=n\lambda _\omega +o_\omega (n), \quad n\ge 1. \end{aligned}$$
Combining this with (3.41), we obtain inequality (3.50) (with larger random variables \(\zeta _\omega ^n\)).
Thus, it remains to establish inequality (3.47). Its proof is by induction on \(k\). It follows from (5.16) and (3.31) that
$$\begin{aligned} d^s\bigl (U_0(\theta _{\tau _1}\omega ),U_0(\theta _{\tau _2}\omega )\bigr ) \le c\,\bigl |R_{\theta _{\tau _1}\omega }-R_{\theta _{\tau _2}\omega }\bigr | \le c\,L_\omega |\tau _1-\tau _2|^\alpha . \end{aligned}$$
Since \(V_1^y(\omega )=\psi _1^{y,\sigma _{-1}\omega } (U_0(\sigma _{-1}\omega ))\), using (3.33) we derive the inequality
$$\begin{aligned} d_V^s\bigl (V_1^y(\theta _{\tau _1}\omega ),V_1^y(\theta _{\tau _2}\omega ) \bigr )\le K_{\sigma _{-1}\omega }\bigl (1+c\,L_{\sigma _{-1}\omega }\bigr ) |\tau _1-\tau _2|^\alpha , \end{aligned}$$
which coincides with (3.47) for \(k=1\). Let us assume that (3.47) is true for \(k=m\) and prove it for \(k=m+1\). In view of (5.23), the random finite set \(U_m^y(\omega )\) satisfies the inequality
$$\begin{aligned} d^s\bigl (U_m^{y}(\omega _1),U_m^{y}(\omega _2)\bigr ) \le d^s\bigl (V_m^{y}(\sigma _m\omega _1),V_m^{y}(\sigma _m\omega _2)\bigr ), \end{aligned}$$
where \(\omega _i=\theta _{\tau _i}\omega \) for \(i=1,2\). It follows that
$$\begin{aligned} d_V^s\bigl (V_{m+1}^y(\omega _1),V_{m+1}^y(\omega _2)\bigr ) \le K_{\sigma _{-1}\omega }\bigl \{|\tau _1-\tau _2|^\alpha +d^s\bigl (V_m^{y}(\sigma _m\omega _1),V_m^{y}(\sigma _m\omega _2)\bigr )\bigr \}. \end{aligned}$$
The induction hypothesis now implies inequality (3.47) with \(k=m+1\). The proof of Theorem 3.6 is complete. \(\square \)

As in the case of Theorem 3.2, inequality (3.34) holds for \(B=\mathcal{A }_\omega \) with \(C(B)=r\). Furthermore, if the group of shift operators \(\{\sigma _k\}\) is ergodic, then the Hölder exponent in (3.35) is a deterministic constant (cf. Remark 3.4). Finally, if \(\psi _k^{y,\omega }, R_\omega ^y\), and \(K_\omega ^y\) do not depend on \(\omega \) for some \(y=y_0\in Y\), then the exponential attractor \(\mathcal{M }_\omega ^{y_0}\) constructed in the above theorem is also independent of \(\omega \). Indeed, \(\mathcal{M }_\omega ^{y}\) was defined in terms of \(\psi _k^{y,\omega }, R_\omega ^y, K_\omega ^y\) and the random finite sets \(U_k^y(\omega )\) that form \(\delta \)-nets for the random compact sets \(\mathcal{C }_k^y(\omega )\). As is mentioned after the proof of Lemma 5.5, these \(\delta \)-nets are independent of \(\omega \) if so are the random compact sets to be covered. Using this observation, it is easy to prove by recurrence that \(\mathcal{C }_k^{y_0}(\omega )\) and \(U_k^{y_0}(\omega )\) do not depend on \(\omega \), and therefore the same property is true for the attractor \(\mathcal{M }_\omega ^{y_0}\).

Exponential attractor for continuous-time RDS

We now turn to a construction of an exponential attractor for continuous-time RDS. Let us fix a bounded closed interval \(Y\subset \mathbb{R }\) and consider a family of RDS \({\varvec{\varPhi }}^y=\{\varphi _t^{y,\omega }:H\rightarrow H, t\ge 0\}, y\in Y\). We shall always assume that the associated group of shift operators \(\theta _t:\Omega \rightarrow \Omega \) satisfies the following condition.

Condition 3.7

The discrete-time dynamical system \(\{\theta _{k \tau _0}:\Omega \rightarrow \Omega , k\in \mathbb{Z }\}\) is ergodic for any \(\tau _0>0\).

Given \(\tau _0>0\), consider a family of discrete-time RDS \({\varvec{\varPsi }}^y=\{\psi _k^{y,\omega },k\in \mathbb{Z }_+\}\) defined by
$$\begin{aligned} \psi _k^{y,\omega }(u)=\varphi _{k\tau _0}^{y,\omega }(u), \quad u\in H, \quad k\ge 0, \quad \omega \in \Omega , \end{aligned}$$
with the group \(\{\sigma _k=\theta _{k\tau _0}, k\in \mathbb{Z }\}\) as the associated family of shift operators. The following theorem is the main result of this section.

Theorem 3.8

Suppose there is \(\tau _0>0\) such that the family \(\{{\varvec{\varPsi }}^y, y\in Y\}\) satisfies Condition 3.5, in which \(\mathcal{T }=[-\tau _0,\tau _0]\) and the measurable isomorphism \(\theta _\tau \) coincides with the shift operator. Furthermore, suppose that Condition 3.7 is also satisfied, \(\mathcal{A }_\omega ^y=B_V(R_\omega ^y)\) is a random absorbing set for \({\varvec{\varPhi }}^y\), and the mapping
$$\begin{aligned} (t,\tau ,y,u)\mapsto \varphi _t^{y,\theta _\tau \omega }(u), \quad \mathbb{R }_+\times [-\tau _0,\tau _0]\times Y\times V\rightarrow H, \end{aligned}$$
(3.51)
is uniformly Hölder continuous on compact subsets with a universal deterministic exponent. Then there is a random compact set \((y,\omega )\mapsto \mathcal{M }_\omega ^y\) in \(H\) with the underlying space \(Y\times \Omega \) such that the following properties hold.
Attraction: For any \(y\in Y\), the random compact set \(\mathcal{M }_\omega ^y\) is an exponential attractor for \({\varvec{\varPhi }}^y\). Moreover, the fractal dimension of \(\mathcal{M }_\omega ^y\) is bounded by a universal deterministic constant, and the attraction property holds for the norm of \(V\) uniformly with respect to \(y\in Y\):
$$\begin{aligned} d_V\bigl (\varphi _t^{y,\omega }(B),\mathcal{M }_{\theta _t\omega }^y\bigr ) \le C(B)e^{-\beta t}, \quad t\ge 0, \quad y\in Y. \end{aligned}$$
(3.52)
Here \(B\subset H\) is an arbitrary ball, \(C(B)\) and \(\beta \) are positive deterministic constants, and the inequality holds with probability 1.
Hölder continuity: The function \((t,y)\mapsto \mathcal{M }_{\theta _t\omega }^y\) is Hölder-continuous from \(Y\times \mathbb{R }\) to the space of random compact sets in \(H\) endowed with the metric \(d_H^s\). More precisely, there is \(\gamma \in (0,1]\) such that, for any \(T>0\), one can find an almost surely finite random variable \(P_{\omega ,T}\) for which
$$\begin{aligned} d_H^s\bigl (\mathcal{M }_{\theta _{t_1}\omega }^{y_1},\mathcal{M }_{\theta _{t_2} \omega }^{y_2}\bigr )\le P_{\omega ,T}\bigl (|t_1-t_2|^\gamma + |y_1-y_2|^\gamma \bigr ) \end{aligned}$$
(3.53)
for \(y_1,y_2\in Y, t_1,t_2\in [-T,T]\), and \(\omega \in \Omega \).

Proof

By rescaling the time, we can assume that \(\tau _0=1\). Let us denote by \(\{\widetilde{\mathcal{M }}_\omega ^y\}\) the random compact set constructed in Theorem 3.6 for the family of discrete-time RDS \({\varvec{\varPsi }}^y\) and define
$$\begin{aligned} \mathcal{M }_\omega ^y=\bigcup _{\tau \in [0,1]}\varphi _\tau ^{y,\theta _{-\tau } \omega } \bigl (\widetilde{\mathcal{M }}_{\theta _{-\tau }\omega }^y\bigr ). \end{aligned}$$
(3.54)
We shall prove that \(\{\mathcal{M }_\omega ^y\}\) possesses all the required properties.
Step 1: Measurability. Let us show that \((y,\omega )\mapsto \mathcal{M }_\omega ^y\) is a random compact set. We need to prove that, for any \(u\in H\), the function
$$\begin{aligned} (y,\omega )\mapsto \inf _{v\in \mathcal{M }_\omega ^y}\Vert u-v\Vert \end{aligned}$$
is measurable. To this end, we shall apply Proposition 5.6 to the family of compact sets
$$\begin{aligned} (y,\omega )\mapsto \mathcal{K }_{(y,\omega )}=\{(\tau ,u)\in [0,1]\times H:u\in \widetilde{\mathcal{M }}_{\theta _{-\tau }\omega }^y\} \end{aligned}$$
and the random mapping
$$\begin{aligned} \psi _{(y,\omega )}:[0,1]\times H\rightarrow H, \quad (\tau ,u)\mapsto \varphi _\tau ^{y,\theta _{-\tau }\omega }(u). \end{aligned}$$
It is straightforward to see that \(\mathcal{M }_\omega ^y=\psi _{(y,\omega )}(\mathcal{K }_{(y,\omega )})\). If we prove that \(\psi _{(y,\omega )}\) and \(\mathcal{K }_{(y,\omega )}\) satisfy the hypotheses of Proposition 5.6, then we can conclude that \(\mathcal{M }_\omega ^y\) is a random compact set in \(H\).
For any fixed \((y,\omega )\), the mapping \((\tau ,u)\mapsto \psi _{(y,\omega )}(\tau ,u)\) is continuous. On the other hand, the measurability in \(\omega \) and the continuity in \(y\) of the mapping \(\varphi _\tau ^{y,\theta _{-\tau }\omega }(u)\) imply that, for any fixed \((\tau ,u)\), the mapping \(\psi _{(y,\omega )}(\tau ,u)\) is measurable. Furthermore, for any \((\tau ,u)\in [0,1]\times H\), the mapping
$$\begin{aligned} (y,\omega )\mapsto \inf _{(\tau ^{\prime },u^{\prime })\in \mathcal{K }_{(y,\omega )}}\bigl (|\tau - \tau ^{\prime }|+\Vert u-u^{\prime }\Vert \bigr )=\inf _{\tau ^{\prime }\in \mathbb{Q }\cap [0,1]}\Bigl (|\tau -\tau ^{\prime }|+\inf _{u^{\prime }\in \widetilde{\mathcal{M }}_{\theta _{-\tau ^{\prime }}\omega }^y}\Vert u-u^{\prime }\Vert \Bigr ) \end{aligned}$$
is measurable, so that \(\mathcal{K }_{(y,\omega )}\) is a random compact set. Thus, the application of Proposition 5.6 is justified.
Step 2: Semi-invariance. Since \(\{\widetilde{\mathcal{M }}_\omega ^y\}\) is an exponential attractor for the discrete-time RDS \({\varvec{\varPsi }}^y\), for any \(y\in Y\) with probability 1 we have
$$\begin{aligned} \varphi _k^{y,\omega }(\widetilde{\mathcal{M }}_\omega ^y)\subset \widetilde{\mathcal{M }}_{\theta _k\omega }^y, \quad k\ge 0. \end{aligned}$$
It follows that, for any rational \(s\in \mathbb{R }\) and \(y\in Y\), the inclusion
$$\begin{aligned} \varphi _k^{y,\theta _s\omega }(\widetilde{\mathcal{M }}_{\theta _s\omega }^y)\subset \widetilde{\mathcal{M }}_{\theta _{k+s}\omega }^y, \quad k\ge 0, \end{aligned}$$
(3.55)
takes place almost surely. The continuity in \((s,y)\) of all the objects entering relation (3.55) implies that it holds, with probability 1, for all \(s\in \mathbb{R }, y\in Y\), and \(k\ge 0\). The semi-invariance can now be established by a standard argument. Namely, for any \(\tau \in [0,1]\) and \(t\ge 0\), we choose an integer \(k\ge 0\) so that \(\sigma =t+\tau -k\in [0,1)\) and write
$$\begin{aligned} \varphi _t^{y,\omega }\bigl (\varphi _\tau ^{y,\theta _{-\tau }\omega } \bigl (\widetilde{\mathcal{M }}_{\theta _{-\tau }\omega }^y\bigr )\bigr )&= \varphi _{k+\sigma }^{y,\theta _{-\tau }\omega }\bigl (\widetilde{\mathcal{M }}_{\theta _{-\tau }\omega }^y\bigr )\\&= \varphi _\sigma ^{y,\theta _{k-\tau }\omega }\bigl (\varphi _k^{y, \theta _{-\tau }\omega } \bigl (\widetilde{\mathcal{M }}_{\theta _{-\tau }\omega } ^y\bigr )\bigr )\\&\subset \varphi _\sigma ^{y,\theta _{-\sigma }(\theta _t\omega )}\bigl ( \widetilde{\mathcal{M }}_{\theta _{k-\tau }\omega }^y\bigr )\\&= \varphi _\sigma ^{y,\theta _{-\sigma }(\theta _t\omega )}\bigl ( \widetilde{\mathcal{M }}_{\theta _{-\sigma }(\theta _t\omega )}^y\bigr )\subset \mathcal{M }_{\theta _t\omega }^y, \end{aligned}$$
where we used (3.55) to derive the first inclusion. Since the above relation is true for any \(\tau \in [0,1]\), we conclude that \(\mathcal{M }_\omega ^y\) is semi-invariant under \(\varphi _t^{y,\omega }\).
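The identification of the fibres in the last two relations of the chain is the elementary shift identity (spelled out here for convenience): since \(\sigma =t+\tau -k\),

```latex
k-\tau = t-\sigma, \qquad\text{hence}\qquad
\theta_{k-\tau}\omega \;=\; \theta_{t-\sigma}\omega \;=\; \theta_{-\sigma}(\theta_t\omega).
```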
Step 3: Exponential attraction. We first note that, with probability 1,
$$\begin{aligned} \sup _{y\in Y}d_V\bigl (\varphi _k^{y,\omega }(\mathcal{A }_\omega ), \mathcal{M }_{\theta _k\omega }^y\bigr )\le r\,e^{-\beta k}, \quad k\ge 0. \end{aligned}$$
see the discussion following Theorem 3.2. It follows that
$$\begin{aligned} \sup _{y\in Y}d_V\bigl (\varphi _k^{y,\theta _s\omega }(\mathcal{A }_{\theta _s \omega }), \mathcal{M }_{\theta _{k+s}\omega }^y\bigr )\le r\,e^{-\beta k}, \quad k\ge 0, \end{aligned}$$
(3.56)
where the inequality holds a.s. for all rational numbers \(s\in \mathbb{R }\). The continuity in \(s\) of all the objects entering inequality (3.56) implies that, with probability 1, it remains true for all \(s\in \mathbb{R }\). We now fix an arbitrary ball \(B\subset H\) and denote by \(T(B)\ge 0\) the instant of time after which the trajectories starting from \(B\) are in \(\mathcal{A }_{\theta _t\omega }\). For any \(t\ge T(B)+1\), we choose \(s\in [T(B),T(B)+1)\) such that \(k:=t-s\) is an integer and use the cocycle property to write
$$\begin{aligned} d_V\bigl (\varphi _t^{y,\omega }(B),\mathcal{M }_{\theta _t\omega }^y\bigr )&= d_V\bigl (\varphi _k^{y,\theta _s\omega }(\varphi _s^{y,\omega }(B)), \mathcal{M }_{\theta _t\omega }^y\bigr )\\&\le d_V\bigl (\varphi _k^{y,\theta _s\omega }(\mathcal{A }_{\theta _s\omega }), \mathcal{M }_{\theta _{k+s}\omega }^y\bigr ). \end{aligned}$$
Taking the supremum in \(y\in Y\) and using (3.56), we obtain
$$\begin{aligned} \sup _{y\in Y} d_V\bigl (\varphi _t^{y,\omega }(B), \mathcal{M }_{\theta _t\omega }^y\bigr ) \le r\,e^{-\beta k}\le r\,e^{\beta (T(B)+1)}e^{-\beta t}. \end{aligned}$$
This proves inequality (3.52) with \(C(B)=r\,e^{\beta (T(B)+1)}\).
Step 4: Fractal dimension. As was established in the proof of Theorem 3.2, the fractal dimension of \(\widetilde{\mathcal{M }}_\omega ^y\) admits the explicit bound [see (3.26)]
$$\begin{aligned} \dim _f(\widetilde{\mathcal{M }}_\omega ^y) \le \frac{2^mC\xi _\omega (\ln \xi _\omega +2m)}{m\ln 2}, \end{aligned}$$
where \(\xi _\omega \) is the random variable defined in (3.22). Since the group \(\{\sigma _k\}\) is ergodic, \(\xi _\omega \) is constant, and \(\dim _f(\widetilde{\mathcal{M }}_\omega ^y)\) can be estimated, with probability 1, by a constant not depending on \(y\) and \(\omega \). Since the functions \(\tau \mapsto \varphi _\tau ^{y,\theta _{-\tau }\omega }(u)\) and \(\tau \mapsto \widetilde{\mathcal{M }}_{\theta _{-\tau }\omega }^y\) are Hölder continuous with a deterministic exponent, it is easy to prove that the fractal dimension of \(\mathcal{M }_\omega ^y\) is bounded by a universal constant.

Step 5: Time continuity. Since mapping (3.51) is Hölder continuous, the required inequality (3.53) will be established if we prove that (3.53) is true for \(\widetilde{\mathcal{M }}_\omega ^y\). However, this is an immediate consequence of inequalities (3.35) and (3.36) and the ergodicity of the group of shift operators \(\{\sigma _k\}\). The proof of the theorem is complete.

As in the case of discrete-time RDS, if \(\varphi _t^{y,\omega }, R_\omega ^y\), and the random objects entering Condition 3.5 do not depend on \(\omega \) for some \(y_0\), then the exponential attractor \(\mathcal{M }_\omega ^{y_0}\) is also independent of \(\omega \). This fact follows immediately from representation (3.54), because \(\varphi _{\tau }^{y_0,\theta _{-\tau }\omega }\) and \(\widetilde{\mathcal{M }}_{\theta _{-\tau }\omega }^{y_0}\) do not depend on \(\omega \).

Application to a reaction–diffusion system

Formulation of the main result

In this section, we apply Theorem 3.8 to the reaction–diffusion system (1.1), in which the amplitude of the random force depends on a parameter. Namely, we consider the equation
$$\begin{aligned} \dot{u}-a\Delta u+f(u)=h(x)+{\varepsilon }\,\eta (t,x),\quad x\in D, \end{aligned}$$
(4.1)
where \(D\subset \mathbb{R }^n\) is a bounded domain with smooth boundary and \({\varepsilon }\in [-1,1]\) is a parameter. Concerning the matrix \(a\), the nonlinear term \(f\), and the external forces \(h\) and \(\eta \), we assume that they satisfy the hypotheses described in Sect. 2.2, with the stronger condition \(p\le \frac{n}{n-2}\) for \(n\ge 3\). Moreover, we impose a higher regularity on the external force, assuming that
$$\begin{aligned} h\in H_0^1(D,\mathbb{R }^k)\cap H^2(D,\mathbb{R }^k), \quad \mathfrak{B }_3:=\sum _{j=1}^\infty \lambda _j^3b_j^2<\infty , \end{aligned}$$
(4.2)
where \(\lambda _j\) denotes the \(j{\mathrm{th}}\) eigenvalue of the Dirichlet Laplacian. This condition ensures that almost every trajectory of the solution of Eq. (4.1) with \(f\equiv 0\) is a continuous function of time with range in \(H^3\). The following theorem is the main result of this section.
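As a quick numerical illustration of condition (4.2): on the illustrative domain \(D=(0,1)\) the Dirichlet Laplacian has eigenvalues \(\lambda _j=(j\pi )^2\), and for the illustrative choice \(b_j=j^{-s}\) (neither the domain nor the coefficients come from the paper) the terms of \(\mathfrak{B }_3\) behave like \(j^{6-2s}\), so the series converges exactly when \(s>7/2\).

```python
import math

# Toy check of condition (4.2): B_3 = sum_j lambda_j**3 * b_j**2 with
# lambda_j = (j*pi)**2 on D = (0, 1) and the illustrative choice b_j = j**(-s).
# The terms behave like j**(6 - 2s), so B_3 < infinity exactly when s > 7/2.

def partial_B3(s, n_terms):
    """Partial sum of B_3 for lambda_j = (j*pi)**2 and b_j = j**(-s)."""
    return sum((j * math.pi) ** 6 * j ** (-2.0 * s) for j in range(1, n_terms + 1))

B3_convergent = partial_B3(4.0, 1000)   # terms ~ j**(-2): partial sums stabilise
B3_divergent = partial_B3(3.0, 1000)    # terms are constant: sums grow linearly
```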

Theorem 4.1

Under the above hypotheses, for any \({\varepsilon }\in [-1,1]\) problem (4.1), (1.2) possesses an exponential attractor \(\mathcal{M }_\omega ^{\varepsilon }\). Moreover, the sets \(\mathcal{M }_\omega ^{\varepsilon }\) can be constructed in such a way that \(\mathcal{M }_\omega ^0\) does not depend on \(\omega \), the fractal dimension of \(\mathcal{M }_\omega ^{\varepsilon }\) is bounded by a universal deterministic constant, the attraction property holds uniformly with respect to \({\varepsilon }\), and
$$\begin{aligned} d_H^s\bigl (\mathcal{M }_\omega ^{{\varepsilon }_1},\mathcal{M }_\omega ^{{\varepsilon }_2}\bigr ) \le P_\omega |{\varepsilon }_1-{\varepsilon }_2|^\gamma \quad \text{ for} \,{\varepsilon }_1,{\varepsilon }_2\in [-1,1], \end{aligned}$$
(4.3)
where \(\gamma \in (0,1]\) is a constant and \(P_\omega \) is an almost surely finite random variable.

To prove this result, we shall apply Theorem 3.8. For the reader’s convenience, let us describe briefly the conditions we need to check, postponing their verification to the next subsection.

Recall that \(H=L^2, V=H_0^1\), and the probability space \((\Omega ,\mathcal{F },\mathbb{P })\) and the corresponding group of shift operators \(\theta _t\) were defined in Sect. 2.2. The ergodicity of the restriction of \(\{\theta _t\}\) to any lattice \(\tau _0\mathbb{Z }\) is well known (see Condition 3.7), and the Kolmogorov \({\varepsilon }\)-entropy of the unit ball in \(V\), regarded as a subset of \(H\), can be estimated by \(C{\varepsilon }^{-n}\), where \(n\) is the space dimension (see the fourth item of Condition 3.5). We shall prove that the following properties are true for a sufficiently large \(\tau _0>0\).

Absorbing set There are random variables \(R_\omega ^{\varepsilon },R_\omega \ge 0\) such that \(R_\omega ^{\varepsilon }\le R_\omega \) for all \({\varepsilon }\in [-1,1]\), \(R_\omega \in L^q(\Omega ,\mathbb{P })\) for any \(q\ge 1\), and for any ball \(B\subset H\) and a sufficiently large \(T(B)>0\) we have
$$\begin{aligned} u^{{\varepsilon },\theta _{\tau }\omega }(t;u_0)\in B_V(R_{\theta _t\omega }^{\varepsilon })\quad \text{ for }t\ge T(B),\ |\tau |\le \tau _0,\ |{\varepsilon }|\le 1,\ u_0\in B, \end{aligned}$$
(4.4)
where \(u^{{\varepsilon },\omega }(t;u_0)\) denotes the solution of (4.1), (1.2), (1.3). Moreover, \(R_\omega ^{\varepsilon }\) satisfies inequality (3.31) with \(y_i={\varepsilon }_i\in [-1,1]\) for an integrable random variable \(L_\omega \) and a deterministic constant \(\alpha \in (0,1]\).
Stability There is \(r>0\) such that
$$\begin{aligned} u^{{\varepsilon },\omega }(\tau _0;u_0)\in B_V(R_{\theta _{\tau _0}\omega }^{\varepsilon }) \quad \text{ for}\,|{\varepsilon }|\le 1, u_0\in \mathcal{O }_r\bigl (B_V(R_\omega ^{\varepsilon })\bigr ). \end{aligned}$$
(4.5)
Hölder continuity There is \(\alpha >0\) such that, for any \(T>0\) and any random variable \(r_\omega >0\) all of whose moments are finite, one can construct a family of random variables \(K_\omega ^{\varepsilon }\ge 1\) satisfying the inequalities
$$\begin{aligned}&\Vert u^{{\varepsilon }_1,\theta _{\tau _1}\omega }(t_1;u_{01}) -u^{{\varepsilon }_2,\theta _{\tau _2}\omega }(t_2;u_{02})\Vert \nonumber \\&\quad \le K_\omega ^{{\varepsilon }_1,{\varepsilon }_2}\bigl (|{\varepsilon }_1-{\varepsilon }_2| +|\tau _1-\tau _2|^\alpha +\Vert u_{01}-u_{02}\Vert +|t_1-t_2|^\alpha \bigr ), \end{aligned}$$
(4.6)
$$\begin{aligned}&\Vert u^{{\varepsilon }_1,\theta _{\tau _1}\omega }(\tau _0;u_{01}) -u^{{\varepsilon }_2,\theta _{\tau _2}\omega }(\tau _0;u_{02})\Vert _1\nonumber \\&\quad \le K_\omega ^{{\varepsilon }_1,{\varepsilon }_2}\bigl (|{\varepsilon }_1-{\varepsilon }_2| +|\tau _1-\tau _2|^\alpha +\Vert \tilde{u}_{01}-\tilde{u}_{02}\Vert \bigr ), \end{aligned}$$
(4.7)
where \(|{\varepsilon }_i|\le 1, |\tau _i|\le \tau _0, 0\le t_i\le T, u_{0i}\in B_V(r_\omega ), \tilde{u}_{0i}\in B_H(r_\omega )\), and we set \(K_\omega ^{{\varepsilon }_1,{\varepsilon }_2}=\max \{K_\omega ^{{\varepsilon }_1},K_\omega ^{{\varepsilon }_2}\}\). Moreover, there is a random variable \(K_\omega \) belonging to \(L^q(\Omega ,\mathbb{P })\) for any \(q\ge 1\) such that \(K_\omega ^{\varepsilon }\le K_\omega \) for all \({\varepsilon }\in [-1,1]\).

We shall also prove that the random variables \(R_\omega ^0\) and \(K_\omega ^0\) are constants. If these properties are established, then all the hypotheses of Theorem 3.8 are fulfilled, and its application to the RDS associated with problem (4.1), (1.2) gives the conclusions of Theorem 4.1.

Proof of Theorem 4.1

Step 1: Absorbing set. Let \(U^{{\varepsilon },\omega }(t)\) be the unique stationary solution of the equation
$$\begin{aligned} \dot{u}-a\Delta u={\varepsilon }\,\eta (t,x), \quad t\in \mathbb{R }, \end{aligned}$$
(4.8)
supplemented with the Dirichlet boundary condition (1.2). It is straightforward to see that, with probability 1,
$$\begin{aligned} U^{{\varepsilon },\theta _\tau \omega }(t)={\varepsilon }\,U^{\omega }(t+\tau ), \quad t,\tau \in \mathbb{R }, \end{aligned}$$
(4.9)
where \(U^{\omega }(t)=U^{1,\omega }(t)\). Using the Itô formula and the regularity assumption (4.2), one can prove that
$$\begin{aligned} \mathbb{E }\,e^{\delta \sup _t M_t}<\infty , \quad M_t(\omega ):=\Vert U^\omega (t) \Vert _3^2+ \biggl |\int \limits _0^t\Vert U^\omega (s)\Vert _4^2\,ds\biggr |-C(1+|t|),\nonumber \\ \end{aligned}$$
(4.10)
where \(\delta >0\) and \(C>0\) are deterministic constants, and the supremum is taken over \(t\in \mathbb{R }\). Moreover, by Proposition 5.8, inequality (5.29) holds for \(U\).
Solutions of (4.1), (1.2) can be written as
$$\begin{aligned} u^{{\varepsilon },\theta _\tau \omega }(t,x)={\varepsilon }\,U^{\omega }(t+\tau ,x)+ v^{{\varepsilon },\tau ,\omega }(t,x), \end{aligned}$$
(4.11)
where \(v=v^{{\varepsilon },\tau ,\omega }\) is the solution of the problem
$$\begin{aligned} \dot{v}-a\Delta v+f(v+{\varepsilon }\,U^\omega (t+\tau ))&= h(x), \end{aligned}$$
(4.12)
$$\begin{aligned} v\bigr |_{{\partial }D}&= 0,\end{aligned}$$
(4.13)
$$\begin{aligned} v(0,x)&= v_0(x), \end{aligned}$$
(4.14)
where \(v_0(x)=u_0(x)-{\varepsilon }\,U^\omega (\tau ,x)\). In what follows, we shall often omit the superscripts \({\varepsilon }\), \(\tau \), and \(\omega \) to simplify notation. We wish to derive some a priori estimates for \(v\). Since the corresponding argument is rather standard, we only sketch it.
Taking the scalar product of (4.12) in \(L^2\) and carrying out some transformations, we derive
$$\begin{aligned} {\partial }_t\Vert v\Vert ^2+c_1\bigl (\Vert v\Vert ^2+\Vert v\Vert _1^2+\Vert v\Vert _{L^{p+1}}^{p+1}\bigr ) \le C_1\bigl (1+\Vert h\Vert _{-1}^2+\Vert {\varepsilon }\,U\Vert _{L^{p+1}}^{p+1}\bigr ),\nonumber \\ \end{aligned}$$
(4.15)
where \(U=U^\omega (\cdot +\tau )\), and we used inequalities (2.5), (2.6), and (2.8). Let us fix any \(\delta \in (0,c_1)\). Applying the Gronwall inequality, using the continuity of the embedding \(H^1\subset L^{p+1}\), and recalling that \(|\tau |\le \tau _0\), we obtain
$$\begin{aligned} \Vert v(t)\Vert ^2+c_1\int \limits _0^te^{-c_1(t-\sigma )}\bigl (\Vert v\Vert _1^2+ \Vert v\Vert _{L^{p+1}}^{p+1}\bigr )\,d\sigma \le e^{-c_1 t}\Vert v_0\Vert ^2+R_{\theta _t\omega }^{{\varepsilon },1},\nonumber \\ \end{aligned}$$
(4.16)
where we set
$$\begin{aligned} R_\omega ^{{\varepsilon },1}=C_1\int \limits _{-\infty }^0e^{\delta \sigma }\bigl (1+ \Vert h\Vert _{-1}^2 +\Vert {\varepsilon }\,U^\omega (\sigma +\tau _0)\Vert _{1}^{p+1}\bigr ) \,d\sigma . \end{aligned}$$
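The Gronwall step leading from (4.15) to (4.16) can be sanity-checked numerically on a scalar model; the constants, forcing, and initial value below are illustrative only and are not taken from the paper.

```python
import math

# Numerical sanity check of the Gronwall step: a differential inequality
#   y' <= -c*y + g(t)
# yields  y(t) <= e^{-c t} y(0) + int_0^t e^{-c(t-s)} g(s) ds.
# We integrate the worst case y' = -c*y + g by forward Euler and compare
# it with the same quadrature of the bound.

def gronwall_check(c, g, y0, T, n=20_000):
    dt = T / n
    y = y0
    for i in range(n):
        y += dt * (-c * y + g(i * dt))          # Euler step for y' = -c*y + g
    bound = math.exp(-c * T) * y0 + sum(
        dt * math.exp(-c * (T - i * dt)) * g(i * dt) for i in range(n))
    return y, bound

y, bound = gronwall_check(c=1.0, g=lambda t: 1.0 + math.sin(t), y0=5.0, T=3.0)
```

For the dissipative worst case the two quantities agree up to discretisation error, which is the content of the Gronwall lemma.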
We now derive a similar estimate for the \(H^1\) norm of \(v\). Taking the scalar product of (4.12) with \(-2(t-s)\Delta v\) in \(L^2\), after some transformations we derive
$$\begin{aligned}&{\partial }_t\bigl ((t-s)\Vert \nabla v\Vert ^2\bigr )+c_2(t-s)\,\Vert v\Vert _2^2\\&\le \Vert \nabla v\Vert ^2 +C_2(t-s)\bigl (1+\Vert h\Vert ^2+ \Vert v\Vert _{L^{p+1}}^{p+1}+\Vert {\varepsilon }\,U\Vert _2^{p+1}\bigr ). \end{aligned}$$
Integrating in \(t\in (s,s+1)\), we obtain
$$\begin{aligned}&\Vert \nabla v(s+1)\Vert ^2+c_2\int \limits _s^{s+1}(\sigma -s)\Vert \Delta v\Vert ^2d\sigma \\&\le C_3+\int \limits _s^{s+1}\bigl (\Vert \nabla v\Vert ^2+C_2\Vert v\Vert _{L^{p+1}}^{p+1}\bigr )d\sigma +C_2\int \limits _s^{s+1}\Vert {\varepsilon }\,U\Vert _2^{p+1}d\sigma , \end{aligned}$$
where \(C_3=C_2(1+\Vert h\Vert ^2)\). Taking \(s=t-1\) and using (4.16) to estimate the second term on the right-hand side, we obtain
$$\begin{aligned} \Vert v(t)\Vert _1^2\le C_3+C_4\bigl (e^{-c_1 t}\Vert v_0\Vert ^2+R_{\theta _t\omega }^{{\varepsilon },1}\bigr )+C_2 R_{\theta _t\omega }^{{\varepsilon },2},\quad t\ge 1, \end{aligned}$$
(4.17)
where we set
$$\begin{aligned} R_\omega ^{{\varepsilon },2}=\int \limits _{-\infty }^{0}e^{\delta (\sigma +3)}\Vert {\varepsilon }\, U^\omega (\sigma +\tau _0)\Vert _2^{p+1}d\sigma . \end{aligned}$$
Let us define \(R_\omega ^{\varepsilon }\) by the relation
$$\begin{aligned} (R_\omega ^{\varepsilon })^2=8\Bigl (1+C_3+C_4R_{\omega }^{{\varepsilon },1} +C_2R_{\omega }^{{\varepsilon },2}+\sup _{\sigma \le 0}\bigl ( e^{\delta (\sigma +2\tau _0)}\Vert {\varepsilon }\,U^\omega (\sigma + \tau _0)\Vert _1^2\bigr )\Bigr )\nonumber \\ \end{aligned}$$
(4.18)
and set \(R_\omega =R_\omega ^1\). It is clear that \(R_\omega ^{\varepsilon }\le R_\omega \) for all \({\varepsilon }\in [-1,1]\). Relations (4.11) and (4.17) imply that
$$\begin{aligned} \Vert u^{{\varepsilon },\theta _\tau \omega }(t)\Vert _1^2 \le 2C_4e^{-c_1 t}\bigl (\Vert u_0\Vert ^2+\Vert {\varepsilon }\,U^\omega (\tau )\Vert ^2\bigr ) +4^{-1}(R_{\theta _t\omega }^{\varepsilon })^2-1, \quad t\ge 1, \nonumber \\ \end{aligned}$$
(4.19)
whence we see that (4.4) holds for any ball \(B\subset H\) and a sufficiently large \(T(B)>0\). Moreover, it follows from (4.10) and (4.18) that all the moments of \(R_\omega \) are finite. Finally, Proposition 5.8 and the stationarity of \(U\) imply that \(R_\omega ^{\varepsilon }\) satisfies (3.31) with a constant \(\alpha \in (0,1/2)\) and an integrable random variable \(L_\omega \).
Step 2: Stability. It follows from (4.18) that the stability property (4.5) with parameters \(r>0\) and \(\tau _0>0\) will certainly be satisfied if
$$\begin{aligned} 2C_4e^{-c_1\tau _0}\bigl ((R_\omega ^{\varepsilon }+r)^2+\Vert {\varepsilon }\, U^\omega (\tau )\Vert ^2\bigr )+4^{-1}(R_{\theta _{\tau _0} \omega }^{\varepsilon })^2-1\le (R_{\theta _{\tau _0}\omega }^{\varepsilon })^2.\nonumber \\ \end{aligned}$$
(4.20)
Let us note that
$$\begin{aligned} (R_{\theta _t\omega }^{\varepsilon })^2\ge e^{-\delta t}(R_\omega ^{\varepsilon })^2, \quad t\ge 0. \end{aligned}$$
(4.21)
We now take an arbitrary \(r>0\) and choose \(\tau _0>0\) so large that
$$\begin{aligned} 4C_4e^{-c_1\tau _0}r^2\le 1, \quad 16 C_4e^{-(c_1-\delta )\tau _0}\le 1. \end{aligned}$$
In this case, inequality (4.20) holds, so that the stability condition is fulfilled.
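The choice of \(\tau _0\) in Step 2 is explicit: each of the two inequalities can be solved for \(\tau _0\). A minimal sketch, with illustrative constants that do not come from the paper:

```python
import math

# Step 2 fixes tau_0 so that  4*C4*exp(-c1*tau_0)*r**2 <= 1  and
# 16*C4*exp(-(c1 - delta)*tau_0) <= 1.  Solving each inequality for tau_0
# gives an explicit admissible threshold.

def minimal_tau0(C4, c1, delta, r):
    """Smallest tau_0 >= 0 satisfying both inequalities of Step 2."""
    assert 0 < delta < c1
    t1 = math.log(4.0 * C4 * r ** 2) / c1        # from 4*C4*e^{-c1 t}*r^2 = 1
    t2 = math.log(16.0 * C4) / (c1 - delta)      # from 16*C4*e^{-(c1-delta) t} = 1
    return max(t1, t2, 0.0)

tau0 = minimal_tau0(C4=2.0, c1=1.0, delta=0.25, r=3.0)
```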
Step 3: Hölder continuity. Representation (4.11) implies that it suffices to establish analogues of (4.6) and (4.7) for solutions of problem (4.12)–(4.14). Namely, we first prove that for any random variable \(r_\omega >0\) with finite moments there is a family of almost surely finite random variables \({\widetilde{K}}_\omega ^{\varepsilon }\) such that
$$\begin{aligned} \Vert v_1(t)-v_2(t)\Vert&\le {\widetilde{K}}_\omega ^{{\varepsilon }_1,{\varepsilon }_2}\bigl (\Vert {\varepsilon }_1U^{\omega _1}-{\varepsilon }_2U^{\omega _2} \Vert _{L^\infty (0,T;H^1)}+\Vert v_{01}-v_{02}\Vert \bigr ),\qquad \qquad \end{aligned}$$
(4.22)
$$\begin{aligned} \Vert v_1(\tau _0)-v_2(\tau _0)\Vert _1&\le {\widetilde{K}}_\omega ^{{\varepsilon }_1,{\varepsilon }_2}\bigl (\Vert {\varepsilon }_1U^{\omega _1}-{\varepsilon }_2U^{\omega _2} \Vert _{L^\infty (0, \tau _0;H^2)}+\Vert v_{01}-v_{02}\Vert \bigr ), \nonumber \\ \end{aligned}$$
(4.23)
where \(|{\varepsilon }_i|\le 1, |\tau _i|\le \tau _0, 0\le t\le T, v_{0i}\in B_H(r_\omega )\), and we set \(\omega _i=\theta _{\tau _i}\omega , v_i(t)=v^{{\varepsilon }_i,\tau _i,\omega }(t)\), and \({\widetilde{K}}_\omega ^{{\varepsilon }_1,{\varepsilon }_2}=\max \{{\widetilde{K}}_\omega ^{{\varepsilon }_1}, {\widetilde{K}}_\omega ^{{\varepsilon }_2}\}\). Moreover, our proof will imply that \({\widetilde{K}}_\omega ^{\varepsilon }\le {\widetilde{K}}_\omega \) for all \({\varepsilon }\in [-1,1]\), where the random constant \({\widetilde{K}}_\omega \) belongs to \(L^q(\Omega ,\mathbb{P })\) for any \(q\ge 1\). Once these properties are established, the Hölder continuity of \(U^\omega (t)\) and relations (4.10) and (4.11) will prove inequalities (4.6) and (4.7) with \(t_1=t_2\). We shall next show that the solutions of (4.12)–(4.14) with \(v_0\in B_V(r_\omega )\) satisfy the inequality
$$\begin{aligned} \Vert v^{{\varepsilon },\tau ,\omega }(t_1;v_0)-v^{{\varepsilon },\tau ,\omega }(t_2;v_0)\Vert \le {\widetilde{K}}_\omega ^{\varepsilon }|t_1-t_2|^{\alpha }, \quad t_1,t_2\in [0,T], \end{aligned}$$
(4.24)
with possibly a larger random constant \({\widetilde{K}}_\omega ^{\varepsilon }\) with the same property. This will complete the proof of the property of Hölder continuity and that of Theorem 4.1.
We begin with (4.22). To simplify the presentation, we shall assume that \(n\ge 3\). In what follows, we denote by \(\{K_\omega ^{{\varepsilon },i}, {\varepsilon }\in [-1,1]\}\) (where \(i\ge 1\)) families of random variables that can be bounded by a random constant belonging to \(L^q(\Omega ,\mathbb{P })\) for any \(q\ge 1\). The difference \(v=v_1-v_2\) satisfies the equation
$$\begin{aligned} \dot{v}-a\Delta v+f(u_1)-f(u_2)=0 \end{aligned}$$
(4.25)
and the boundary and initial conditions (4.13) and (4.14), where \(u_i=v_i+{\varepsilon }_i U^{\omega _i}\) and \(v_0=v_{01}-v_{02}\). Taking the scalar product of (4.25) with \(2v\) in \(L^2\) and using the “monotonicity” assumption (2.7), we derive
$$\begin{aligned} {\partial }_t\Vert v\Vert ^2+2c_3\Vert \nabla v\Vert ^2&\le -\bigl (f(u_1)-f(u_2),v\bigr )\\&\le C\Vert v\Vert ^2+C_5\bigl (\Vert \xi \Vert _{L^q}+\Vert |u_1|^{p-1}\xi \Vert _{L^q}+\Vert | u_2|^{p-1}\xi \Vert _{L^q}\bigr )\Vert v\Vert _{1}\\&\le C\Vert v\Vert ^2\!+\!c_3\Vert \nabla v\Vert ^2+C_6\bigl (1+\Vert u_1\Vert _{L^{p+1}}^{2(p-1)} +\Vert u_2\Vert _{L^{p+1}}^{2(p-1)}\bigr )\Vert \xi \Vert _1^2, \end{aligned}$$
where \(q=\frac{2n}{n+2}\) and \(\xi ={\varepsilon }_1U^{\omega _1}-{\varepsilon }_2U^{\omega _2}\). Applying the Gronwall inequality, we obtain
$$\begin{aligned} \Vert v(t)\Vert ^2\!+\!c_3\int \limits _0^te^{C(t-\sigma )}\Vert \nabla v\Vert ^2d\sigma \le e^{Ct}\Vert v_0\Vert ^2\!+\!C_7\max \{K_\omega ^{{\varepsilon },1},K_\omega ^{{\varepsilon },2}\}\Vert \xi \Vert _{L^\infty (0,T;H^1)}^2,\nonumber \\ \end{aligned}$$
(4.26)
where \(0\le t\le T, C_7=C_7(T)\), and we set
$$\begin{aligned} K_\omega ^{{\varepsilon },1}=\int \limits _0^T\bigl (1+\Vert u^{{\varepsilon },\omega }(\sigma )\Vert _{L^{p+1}}^{p+1} \bigr )\,d\sigma . \end{aligned}$$
Inequality (4.26) immediately implies (4.22).
To prove (4.23), we first note that, in view of (4.26), there is a measurable function \(s:\Omega \rightarrow \mathbb{R }\) such that, with probability \(1\), we have \(s_\omega \in [\frac{\tau _0}{4}, \frac{3\tau _0}{4}]\) and
$$\begin{aligned} \Vert \nabla v(s_\omega )\Vert ^2\le C_8\bigl (\Vert v_0\Vert ^2+K_\omega ^{{\varepsilon },1}\Vert \xi \Vert _{L^\infty (0,\tau _0;H^1)}^2\bigr ). \end{aligned}$$
(4.27)
Let us take the scalar product of (4.25) with \(-2\Delta v\) in \(L^2\). After some transformations, we obtain
$$\begin{aligned} {\partial }_t\Vert \nabla v\Vert ^2+2c_4\,\Vert \Delta v\Vert ^2 \le C_8 \bigl (1+\Vert u_1\Vert _{L^{n(p-1)}}^{p-1}+\Vert u_2\Vert _{L^{n(p-1)}}^{p-1}\bigr ) \Vert v+\xi \Vert _{L^q}\Vert \Delta v\Vert ,\nonumber \\ \end{aligned}$$
(4.28)
where \(q=\frac{2n}{n-2}\). Since \(H^1\subset L^q\) and \(H^1\subset L^{n(p-1)}\), applying the interpolation and Cauchy–Schwarz inequalities, from (4.28) we derive
$$\begin{aligned} {\partial }_t\Vert \nabla v(t)\Vert ^2+c_4\,\Vert \Delta v\Vert ^2\le C_9\bigl (1+\Vert u_1\Vert _1^{4(p-1)}+\Vert u_2\Vert _1^{4(p-1)}\bigr )\,(\Vert v\Vert ^2+\Vert \xi \Vert _1^2). \end{aligned}$$
Integrating in \(t\in [s_\omega ,\tau _0]\) and using (4.26), (4.27) and (4.19) (we can assume that \(\tau _0\ge 4\)), we obtain (4.23).
It remains to establish inequality (4.24). We shall only outline its proof. Taking the scalar product of (4.12) with \(-2\Delta v\) and using some standard arguments [cf. derivation of (4.17)], we obtain
$$\begin{aligned} \int \limits _0^T\Vert \Delta v\Vert ^2d\sigma \le C_{10}\Vert v_0\Vert _1^2+K_\omega ^{{\varepsilon },2}. \end{aligned}$$
Combining this with (4.10) and (4.12), we see that
$$\begin{aligned} \int \limits _0^T\Vert \dot{v}\Vert ^2d\sigma \le K_\omega ^{{\varepsilon },3}. \end{aligned}$$
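The passage from this \(L^2\)-in-time bound on \(\dot v\) to Hölder continuity is the Cauchy–Schwarz estimate, spelled out here for \(0\le t_2\le t_1\le T\):

```latex
\|v(t_1)-v(t_2)\|
=\Bigl\|\int_{t_2}^{t_1}\dot v(\sigma)\,d\sigma\Bigr\|
\le\int_{t_2}^{t_1}\|\dot v(\sigma)\|\,d\sigma
\le |t_1-t_2|^{1/2}\Bigl(\int_0^T\|\dot v\|^2\,d\sigma\Bigr)^{1/2}
\le\bigl(K_\omega^{\varepsilon,3}\bigr)^{1/2}\,|t_1-t_2|^{1/2}.
```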
It follows that \(v\) is Hölder continuous with the exponent \(1/2\). Since \(U^\omega \) is also Hölder continuous in time, in view of (4.11) we arrive at the required result. Finally, it is not difficult to see that the random variables \(R_\omega ^0\) and \(K_\omega ^0\) are constant. The proof of the theorem is complete.

Appendix

Coverings for random compact sets

In this section, we have gathered three auxiliary results on coverings of random compact sets by balls centred at the points of random finite sets. The first of them establishes the existence of a “minimal” covering with an explicit bound of the number of balls in terms of the Kolmogorov \({\varepsilon }\)-entropy of the random compact set in question.

Lemma 5.1

Let \(\{\mathcal{A }_\omega \}\) be a random compact set in a Hilbert space \(H\). Then for any measurable function \(\delta =\delta _\omega \) satisfying the inequality \(0<\delta \le 1\) one can construct a random finite set \(U_\delta (\omega )\subset H\) such that
$$\begin{aligned} d^s\bigl (\mathcal{A }_\omega ,U_\delta (\omega )\bigr )&\le \delta _\omega ,\end{aligned}$$
(5.1)
$$\begin{aligned} \ln \bigl (\#U_{\delta }(\omega )\bigr )&\le \mathcal{H }_{\delta _\omega /2}(\mathcal{A }_\omega ,H). \end{aligned}$$
(5.2)
Moreover, if \(\delta _\omega \equiv \delta \) is constant, then one can replace \(\delta _\omega /2\) on the right-hand side of (5.2) by \(\delta \).
Note that inequality (5.1) is equivalent to the inclusions
$$\begin{aligned} \mathcal{A }_\omega \subset \bigcup _{u\in U_\delta (\omega )} B_H(u,\delta _\omega ), \quad U_{\delta }(\omega )\subset \mathcal{O }_{\delta _\omega }(\mathcal{A }_\omega ). \end{aligned}$$
(5.3)

Proof

We first assume that \(\delta _\omega \equiv \delta \). Let \(\{u_k\}\subset H\) be a dense sequence. For any \({{\varvec{k}}}=\{k_1,\dots ,k_n\}\subset \mathbb{N }\), define the random variable
$$\begin{aligned} Z_\omega ({{\varvec{k}}})= \left\{ \begin{array}{cl} 1,&\mathcal{A }_\omega \subset \bigcup \limits _{i=1}^n B(u_{k_i},\delta ),\\ 0,&\text{ otherwise}. \end{array}\right. \end{aligned}$$
Since \(\mathcal{A }_\omega \) is a (random) compact set, for any \(\omega \) there is a finite subset \({{\varvec{k}}}\subset \mathbb{N }\) such that \(Z_\omega ({{\varvec{k}}})=1\). Let \(\Omega _n\) be the set of those \(\omega \in \Omega \) for which there is an \(n\)-tuple \({{\varvec{k}}}\subset \mathbb{N }\) such that \(Z_\omega ({{\varvec{k}}})=1\) and \(Z_\omega ({{\varvec{k}}}^{\prime })=0\) for any subset \({{\varvec{k}}}^{\prime }\subset \mathbb{N }\) containing fewer than \(n\) elements. Then we have \(\Omega =\cup _{n\ge 1}\Omega _n\). Furthermore, since \(\Omega _n\) is the intersection of the measurable sets
$$\begin{aligned} \bigcap _{\#{{\varvec{k}}}=n-1}\{Z_\omega ({{\varvec{k}}})=0\}\quad \text{ and}\quad \bigcup _{\#{{\varvec{k}}}=n}\{Z_\omega ({{\varvec{k}}})=1\}, \end{aligned}$$
we have \(\Omega _n\in \mathcal{F }\) for any \(n\ge 1\). Thus, it suffices to construct \(U_\delta \) on each subset \(\Omega _n\).
Indexing the set of all \(n\)-tuples \({{\varvec{k}}}\subset \mathbb{N }\) in an arbitrary way, it is easy to construct measurable functions \(I_k:\Omega _n\rightarrow \{0,1\}\) such that, for any \(\omega \in \Omega _n\), we have
$$\begin{aligned} \#{{\varvec{k}}}(\omega )=n, \quad \mathcal{A }_\omega \subset \bigcup \limits _{k\in {{\varvec{k}}}(\omega )} B_H(u_k,\delta ), \quad B_H(u_k,\delta )\cap \mathcal{A }_\omega \ne \varnothing , \end{aligned}$$
(5.4)
where \({{\varvec{k}}}(\omega )=\{k\in \mathbb{N }:I_k(\omega )=1\}\) and \(k\in {{\varvec{k}}}(\omega )\) in the third relation. We claim that \(U_\delta (\omega )=\{u_k,k\in {{\varvec{k}}}(\omega )\}\) satisfies the required properties. Indeed, for any \(u\in H\), we have
$$\begin{aligned} d(u,U_\delta (\omega ))=\min \{\Vert u-u_k\Vert :I_k(\omega )=1\}, \quad \omega \in \Omega _n, \end{aligned}$$
whence it follows easily that \(U_\delta (\omega )\) is a random finite set. Furthermore, inclusions (5.3) (which are equivalent to inequality (5.1)) are consequences of the second and third relations in (5.4). Let us prove that inequality (5.2) holds with \(\delta _\omega /2\) replaced by \(\delta _\omega \); that is,
$$\begin{aligned} \ln n\le \mathcal{H }_{\delta }(\mathcal{A }_\omega ,H) \quad \text{ for }\omega \in \Omega _n. \end{aligned}$$
(5.5)
To see this, note that the set \(\mathcal{A }_\omega \) admits a covering by balls \(\{B_j\}\) such that
$$\begin{aligned} \ln \bigl (\#\{B_j\}\bigr )\le \mathcal{H }_{\delta }(\mathcal{A }_\omega ,H), \quad {\mathop {\mathrm{diam}}\nolimits }(B_j)\le \delta . \end{aligned}$$
Choosing arbitrary points \(u_{k_j}\) in every ball \(B_j\), we see that one can cover \(\mathcal{A }_\omega \) by the balls \(\{B_H(u_{k_j},\delta )\}\). The choice of \(n\) now implies that \(n\le \#\{B_j\}\), whence it follows that (5.5) holds.
We now turn to the case of an arbitrary function \(\delta _\omega \) such that \(0<\delta _\omega \le 1\). Let us define \(\Omega ^{(k)}=\{\omega \in \Omega :2^{-k}<\delta _\omega \le 2^{1-k}\}\), so that \(\Omega =\cup _{k\ge 1}\Omega ^{(k)}\). In view of what has been proved above, on each \(\Omega ^{(k)}\) one can construct a random finite set \(U_k(\omega )\) such that, for \(\omega \in \Omega ^{(k)}\), we have
$$\begin{aligned} d^s\bigl (\mathcal{A }_\omega ,U_k(\omega )\bigr )\le 2^{-k}, \quad \ln \bigl (\#U_k(\omega )\bigr )\le \mathcal{H }_{2^{-k}}(\mathcal{A }_\omega ,H). \end{aligned}$$
Setting \(U_\delta (\omega )=U_k(\omega )\) for \(\omega \in \Omega ^{(k)}\), we obtain the required covering. The proof of the lemma is complete.
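The covering argument of the lemma can be illustrated numerically. The sketch below (plain Python, not part of the paper; all names are ours) builds a greedy \(\delta \)-net of a finite sample standing in for the compact set \(\mathcal{A }_\omega \) and checks the two properties required of \(U_\delta \): every point of the set is \(\delta \)-close to the net, and every ball around a net point meets the set.

```python
import math

def greedy_delta_net(points, delta):
    """Greedy delta-net: keep a point only if it is farther than delta
    from every point selected so far."""
    net = []
    for p in points:
        if all(math.dist(p, q) > delta for q in net):
            net.append(p)
    return net

# finite sample standing in for the compact set
sample = [(i / 10, j / 10) for i in range(11) for j in range(11)]
delta = 0.25
net = greedy_delta_net(sample, delta)

# analogue of (5.1)/(5.3): every point of the set is delta-close to the net
assert all(min(math.dist(p, q) for q in net) <= delta for p in sample)
# analogue of the third relation in (5.4): net points belong to the set,
# so every ball B(u, delta) around a net point meets the set
assert all(q in sample for q in net)
```

The greedy construction does not give the optimal cardinality bound of the lemma, but it produces a covering with the same two properties.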

The second result shows that, if a random compact set depends on a parameter in a Lipschitz manner, then the random finite set constructed above can be chosen to have a similar dependence on the parameter. To prove it, we shall need the following auxiliary construction.

Let us denote by \(\Delta _n\subset \mathbb{R }^n\) the set of vectors \(\theta =(\theta _1,\dots ,\theta _n)\) such that \(\theta _i\ge 0\) and \(\sum _i\theta _i=1\). Given subsets \(W_i\subset H, 1\le i\le n\), a vector \(\theta \in \Delta _n\), and a number \(\alpha >0\), we define
$$\begin{aligned}{}[W_1,\dots ,W_n]_\theta ^\alpha =\biggl \{\sum _{i=1}^n\theta _iu_i:u_i\in W_i,\ \Vert u_i-u_j\Vert _H\le \alpha \text{ for } 1\le i,j\le n\biggr \}. \end{aligned}$$
It is straightforward to check that
$$\begin{aligned}&\ln \bigl (\#[W_1,\dots ,W_n]_\theta ^\alpha \bigr ) \le \ln (\#W_1)+\cdots +\ln (\#W_n),\end{aligned}$$
(5.6)
$$\begin{aligned}&d^s\bigl ([W_1,\dots ,W_n]_{\theta ^1}^\alpha ,[W_1,\dots ,W_n]_{\theta ^2}^\alpha \bigr ) \le \alpha |\theta ^1-\theta ^2|, \end{aligned}$$
(5.7)
where \(\theta ^j=(\theta _1^j,\dots ,\theta _n^j)\) and \(|\theta ^1-\theta ^2|=\max _i|\theta _i^1-\theta _i^2|\). Moreover, if \(\mathcal{A }\subset H\) and \(r_i\ge 0\) are such that
$$\begin{aligned} \mathcal{A }\subset \bigcup _{u\in W_i}B_H(u,r_i), \quad 1\le i\le n, \end{aligned}$$
then for any \(\theta \in \Delta _n\) we have
$$\begin{aligned} \mathcal{A }\subset \bigcup _{u\in [W_1,\dots ,W_n]_{\theta }^{r}} B_H(u,\max \{r_i,1\le i\le n\}), \end{aligned}$$
(5.8)
where \(r=\max \{r_i+r_j,1\le i,j\le n\}\).
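The interpolation set and the cardinality bound (5.6) are easy to check on a toy example. The following sketch (plain Python, not part of the paper; the helper name is ours) enumerates the admissible tuples and verifies that the number of convex combinations does not exceed \(\#W_1\cdot \#W_2\).

```python
import itertools
import math

def interp_set(Ws, theta, alpha):
    """[W_1,...,W_n]_theta^alpha: all convex combinations sum_i theta_i*u_i
    over tuples (u_1,...,u_n) in W_1 x ... x W_n whose entries are pairwise
    at distance at most alpha."""
    dim = len(next(iter(Ws[0])))
    out = set()
    for tup in itertools.product(*Ws):
        if all(math.dist(u, v) <= alpha for u in tup for v in tup):
            out.add(tuple(sum(t * u[d] for t, u in zip(theta, tup))
                          for d in range(dim)))
    return out

W1 = {(0.0, 0.0), (1.0, 0.0)}
W2 = {(0.1, 0.0), (0.9, 0.1)}
S = interp_set([W1, W2], theta=(0.5, 0.5), alpha=0.5)

# cardinality bound (5.6): #S <= #W1 * #W2
assert len(S) <= len(W1) * len(W2)
```

Here the constraint \(\Vert u_1-u_2\Vert \le \alpha \) discards two of the four tuples, so the resulting set has two elements.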

Proposition 5.2

Let \(Y\subset \mathbb{R }\) be a closed interval and let \(\{\mathcal{A }_\omega ^y,y\in Y\}\) be a family of random compact sets in a Hilbert space \(H\) such that
$$\begin{aligned} d^s(\mathcal{A }_\omega ^{y_1},\mathcal{A }_\omega ^{y_2}) \le C\,|y_1-y_2|\quad \text{ for } y_1,y_2\in Y, \end{aligned}$$
(5.9)
where \(C\ge 1\) is a finite random constant. Then there exists a random finite set \((\delta ,y,\omega )\mapsto U_{\delta ,y}(\omega )\) with the underlying space \((0,1]\times Y\times \Omega \) such that
$$\begin{aligned} d^s\bigl (\mathcal{A }_\omega ^y,U_{\delta ,y}(\omega )\bigr )&\le \delta ,\end{aligned}$$
(5.10)
$$\begin{aligned} \ln \bigl (\#U_{\delta ,y}(\omega )\bigr )&\le 4\,\mathcal{H }_{2^{-4}\delta }(\mathcal{A }_\omega ^y,H),\end{aligned}$$
(5.11)
$$\begin{aligned} d^s\bigl (U_{\delta _1,y_1}(\omega ),U_{\delta _2,y_2}(\omega )\bigr )&\le c\,\bigl (|\delta _1-\delta _2|+C\, |y_1-y_2|\bigr ), \end{aligned}$$
(5.12)
where \(y,y_1,y_2\in Y, \delta ,\delta _1,\delta _2\in (0,1]\), and \(c\ge 1\) is an absolute constant.
In particular, taking a measurable function \(\delta =\delta _\omega \) with range in \((0,1]\), we can construct a random finite set \((y,\omega )\mapsto U_{\delta ,y}(\omega )\) such that
$$\begin{aligned} d^s\bigl (U_{\delta ,y_1}(\omega ),U_{\delta ,y_2}(\omega )\bigr ) \le c\,C\, |y_1-y_2|, \end{aligned}$$
(5.13)
and inequalities (5.10) and (5.11) hold with \(\delta =\delta _\omega \) in the right-hand side.

The proof given below will imply that if \(\mathcal{A }_\omega ^y\) does not depend on \(\omega \) for some \(y=y_0\), then the random set \(U_{\delta ,y}(\omega )\) satisfying (5.10)–(5.12) can be chosen in such a way that \(U_{\delta ,y_0}(\omega )\) is also independent of \(\omega \). Furthermore, if \(\mathcal{A }_\omega ^y\) does not depend on \(\omega \) for all \(y\in Y\), then \(U_{\delta ,y}\) is also independent of \(\omega \). The latter observation implies the following corollary used in the main text.

Corollary 5.3

Let \(V\subset H\) be two Hilbert spaces with compact embedding. Then there is a random finite set \((\delta ,R)\mapsto U_{\delta ,R}\) with the underlying space \((0,1]\times \mathbb{R }_+\) such that
$$\begin{aligned} d_H^s\bigl (B_V(R),U_{\delta ,R}\bigr )&\le \delta ,\end{aligned}$$
(5.14)
$$\begin{aligned} \ln \bigl (\#U_{\delta ,R}\bigr )&\le 4\,\mathcal{H }_{\delta /(16R)}(V,H),\end{aligned}$$
(5.15)
$$\begin{aligned} d_H^s(U_{\delta ,R_1},U_{\delta ,R_2})&\le c\,|R_1-R_2|, \end{aligned}$$
(5.16)
where \(R,R_1,R_2\ge 0\) and \(\delta \in (0,1]\) are arbitrary, and \(c>0\) is an absolute constant.

To prove this result, it suffices to apply Proposition 5.2 to the non-random compact set \(B_V(R)\) depending on the parameter \(R\in \mathbb{R }_+\).

Proof of Proposition 5.2

Without loss of generality, we assume that the random variable \(C\) is constant, since one can represent \(\Omega \) as the union of the subsets \(\Omega _{l}=\{\omega \in \Omega : l\le C<l+1\}\) and construct required random finite sets on each \(\Omega _{l}\).

Let us fix an integer \(k\ge 1\) and denote by \(\nu _k<C^{-1}2^{-k-4}\) the largest number such that \(N_k:=\nu _k^{-1}\) is an integer. We now set \(y_j^k=j\nu _k\) for \(j\in \mathbb{Z }_+\). In view of Lemma 5.1, there are random finite sets \(U_j^k(\omega )\subset H\) such that
$$\begin{aligned} d^s\bigl (\mathcal{A }_\omega ^{y_j},U_j^k(\omega )\bigr )&\le 2^{-k-3},\end{aligned}$$
(5.17)
$$\begin{aligned} \ln \bigl (\#U_j^k(\omega )\bigr )&\le \mathcal{H }_{2^{-k-3}}(\mathcal{A }_\omega ^{y_j},H), \end{aligned}$$
(5.18)
where we write \(y_j\) instead of \(y_j^k\) to simplify the notation. We now need the following lemma, whose proof is given at the end of this section.

Lemma 5.4

Let \(A_1,\dots ,A_4\) be the vertices of a rectangle \(\Pi \subset \mathbb{R }^2\). Then there are Lipschitz functions \(\theta _i:\Pi \rightarrow [0,1], 1\le i\le 4\), such that
$$\begin{aligned} \sum _{i=1}^4\theta _i(A)=1, \quad \sum _{i=1}^4\theta _i(A)A_i=A \quad \text{ for } A\in \Pi . \end{aligned}$$
Let \(\theta _i(A), 1\le i\le 4\), be the functions constructed in Lemma 5.4 for the rectangle \(\Pi =[2^{-k},2^{1-k}]\times [y_j,y_{j+1}]\). For \(2^{-k}<\delta \le 2^{1-k}\) and \(y_{j}\le y\le y_{j+1}\), denote by \(A_{\delta ,y}\in \Pi \) the point with the coordinates \((\delta ,y)\). Let us define
$$\begin{aligned} U_{\delta ,y}(\omega )=[U_j^k,U_j^{k+1},U_{j+1}^k,U_{j+1}^{k+1}]_{ \theta (\delta ,y)}^{2^{-k-1}}, \end{aligned}$$
where \(\theta (\delta ,y)=(\theta _i(A_{\delta ,y}), 1\le i\le 4)\in \Delta _4\). We claim that \(U_{\delta ,y}(\omega )\) satisfies the required properties.
Indeed, it follows from the choice of \(y_j\) that
$$\begin{aligned} d^s(\mathcal{A }_\omega ^{y_j},\mathcal{A }_\omega ^{y})\le 2^{-k-4} \quad \text{ for } y_j\le y\le y_{j+1}. \end{aligned}$$
(5.19)
Combining this with (5.17), we see that
$$\begin{aligned} d^s\bigl (\mathcal{A }_\omega ^y,U_{j}^k(\omega )\bigr )\le 2^{-k-2}. \end{aligned}$$
(5.20)
Inclusion (5.8) now implies that
$$\begin{aligned} d\bigl (\mathcal{A }_\omega ^{y},U_{\delta ,y}(\omega )\bigr )\le 2^{-k-2}\le \delta /4. \end{aligned}$$
(5.21)
On the other hand, the definition of \(U_{\delta ,y}(\omega )\) and inequality (5.20) imply that
$$\begin{aligned} d\bigl (U_{\delta ,y}(\omega ),\mathcal{A }_\omega ^{y}\bigr )\le 2^{-k}\le \delta . \end{aligned}$$
Combining this with (5.21), we obtain (5.10).
Inequality (5.19) implies that an \({\varepsilon }\)-covering for \(\mathcal{A }_\omega ^{y}\) with \(y_j\le y\le y_{j+1}\) is an \(({\varepsilon }+2^{-k-4})\)-covering for \(\mathcal{A }_\omega ^{y_j}\). Taking \({\varepsilon }=2^{-k-4}\), we see that
$$\begin{aligned} \mathcal{H }_{2^{-k-3}}(\mathcal{A }_\omega ^{y_j},H)\le \mathcal{H }_{2^{-k-4}}(\mathcal{A }_\omega ^{y},H). \end{aligned}$$
Combining this with (5.18) and (5.6), we obtain (5.11):
$$\begin{aligned} \ln \bigl (\#U_{\delta ,y}(\omega )\bigr ) \le 4\mathcal{H }_{2^{-k-3}}(\mathcal{A }_\omega ^{y_j},H) \le 4\mathcal{H }_{2^{-k-4}}(\mathcal{A }_\omega ^{y},H) \le 4\mathcal{H }_{2^{-4}\delta }(\mathcal{A }_\omega ^{y},H). \end{aligned}$$
Finally, inequality (5.12) follows from (5.7) and the explicit form of the functions \(\theta _i(A)\) [see (5.24)]:
$$\begin{aligned} d^s\bigl (U_{\delta _1,y_1},U_{\delta _2,y_2}\bigr )&\le 2^{-k-1}|\theta (A_{\delta _1,y_1})-\theta (A_{\delta _2,y_2})|\\&\le 2^{-k-1}(\nu _k 2^{-k})^{-1} \bigl (\nu _k|\delta _1-\delta _2|+2^{-k}|y_1-y_2|\bigr )\\&\le \tfrac{1}{2}\,|\delta _1-\delta _2|+8C\,|y_1-y_2|. \end{aligned}$$
The proof of the proposition is complete. \(\square \)

And, finally, our third result refines Proposition 5.2 in a particular case.

Lemma 5.5

Let \(Y\) be an arbitrary metric space, let \(\mathcal{K }\subset H\) be a compact subset, let \((y,\omega )\mapsto V^y(\omega )\) be a random finite set, and let
$$\begin{aligned} \mathcal{A }_\omega ^y=\bigcup _{v\in V^y(\omega )}(v+\mathcal{K }). \end{aligned}$$
Then there is a random finite set \((\delta ,y,\omega )\mapsto U_{\delta ,y}(\omega )\) with the underlying space \((0,1]\times Y\times \Omega \) such that (5.10) holds, and
$$\begin{aligned}&\ln \bigl (\# U_{\delta ,y}(\omega )\bigr )\le \ln (\# V^{y}(\omega )\bigr )+\mathcal{H }_{\delta /2}(\mathcal{K },H), \end{aligned}$$
(5.22)
$$\begin{aligned}&d^s\bigl (U_{\delta ,y_1}(\omega _1),U_{\delta ,y_2}(\omega _2)\bigr ) \le d^s\bigl (V^{y_1}(\omega _1),V^{y_2}(\omega _2)\bigr ). \end{aligned}$$
(5.23)

Proof

Applying Lemma 5.1 to the random compact set \(\delta \mapsto \delta \mathcal{K }\) with the underlying space \((0,1]\), we construct a random finite set \(\delta \mapsto U_\delta \) such that
$$\begin{aligned} d^s(\delta \mathcal{K },U_\delta )\le \delta ^2,\quad \ln (\#U_\delta )\le \mathcal{H }_{\delta ^2/2}(\delta \mathcal{K },H)=\mathcal{H }_{\delta /2}(\mathcal{K },H). \end{aligned}$$
It is straightforward to see that the random set
$$\begin{aligned} U_{\delta ,y}(\omega )=\delta ^{-1}U_\delta +V^y(\omega ) =\{\delta ^{-1}u+v:u\in U_\delta , v\in V^y(\omega )\} \end{aligned}$$
possesses all required properties. \(\square \)
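For the reader's convenience, let us spell out why (5.10) holds for this set; the check uses only the homogeneity of the norm and the invariance of \(d^s\) under translations:
$$\begin{aligned} d^s(\mathcal{K },\delta ^{-1}U_\delta )=\delta ^{-1}d^s(\delta \mathcal{K },U_\delta )\le \delta ^{-1}\delta ^2=\delta , \end{aligned}$$
whence
$$\begin{aligned} d^s\bigl (\mathcal{A }_\omega ^y,U_{\delta ,y}(\omega )\bigr ) =d^s\Bigl (\,\bigcup _{v\in V^y(\omega )}(v+\mathcal{K }),\ \bigcup _{v\in V^y(\omega )}(v+\delta ^{-1}U_\delta )\Bigr ) \le \max _{v\in V^y(\omega )}d^s\bigl (v+\mathcal{K },v+\delta ^{-1}U_\delta \bigr )\le \delta . \end{aligned}$$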

As is clear from the proof, if \(V^y(\omega )\) does not depend on \(\omega \) for some \(y=y_0\), then the random set \(U_{\delta ,y_0}(\omega )\) constructed in Lemma 5.5 is also independent of \(\omega \).

Proof of Lemma 5.4

Given a point \(A\in \Pi \), we divide the rectangle \(\Pi \) into four smaller rectangles \(\Pi _i\) (see Fig. 1). It is easy to prove that the functions
$$\begin{aligned} \theta _i(A)=\frac{{\mathop {\mathrm{Area}}}(\Pi _i)}{{\mathop {\mathrm{Area}}}(\Pi )}, \quad 1\le i\le 4, \end{aligned}$$
(5.24)
possess the required properties. \(\square \)
Fig. 1

Division of \(\Pi \) into four rectangles
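The area-ratio weights (5.24) are exactly the bilinear interpolation weights on \(\Pi \). A quick numerical check of the two identities of Lemma 5.4 on a sample rectangle (plain Python, not part of the original argument; the helper name is ours):

```python
def bilinear_weights(rect, A):
    """Weights theta_i = Area(Pi_i)/Area(Pi) for the rectangle
    rect = (a, b, c, d) ~ [a,b] x [c,d] at the point A = (x, y).
    theta_i is the relative area of the sub-rectangle opposite to
    the vertex A_i."""
    a, b, c, d = rect
    x, y = A
    area = (b - a) * (d - c)
    verts = [(a, c), (b, c), (a, d), (b, d)]
    thetas = [(b - x) * (d - y) / area,
              (x - a) * (d - y) / area,
              (b - x) * (y - c) / area,
              (x - a) * (y - c) / area]
    return verts, thetas

verts, th = bilinear_weights((0.0, 2.0, 1.0, 4.0), (0.5, 3.0))
# partition of unity: sum of the theta_i equals 1
assert abs(sum(th) - 1.0) < 1e-12
# barycentric identity: sum of theta_i * A_i reproduces A
bary = tuple(sum(t * v[d] for t, v in zip(th, verts)) for d in range(2))
assert abs(bary[0] - 0.5) < 1e-12 and abs(bary[1] - 3.0) < 1e-12
```

Each \(\theta _i\) is affine in each coordinate of \(A\), which is the source of the Lipschitz property used in the proof of Proposition 5.2.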

Image of random compact sets

Proposition 5.6

Let \(X\) and \(Y\) be Polish spaces, let \((\Omega ,\mathcal{F })\) be a measurable space, let \(\{\mathcal{K }_\omega ,\omega \in \Omega \}\) be a random compact set in \(X\), and let \(\psi _\omega :X\rightarrow Y\) be a family of continuous mappings such that, for any \(u\in X\), the mapping \(\omega \mapsto \psi _\omega (u)\) is measurable from \(\Omega \) to \(Y\). Then \(\{\psi _\omega (\mathcal{K }_\omega ), \omega \in \Omega \}\) is a random compact set in \(Y\).

Proof

By Proposition 1.6.3 in [1], a mapping \(\omega \mapsto \mathcal{K }_\omega \) from \(\Omega \) to the family of closed subsets of \(X\) defines a random closed set if and only if there is a sequence of random variables \(\xi _n:\Omega \rightarrow X\) such that \(\mathcal{K }_\omega =\bigl [\{\xi _n(\omega ),n\ge 1\}\bigr ]_X\). Since \(\mathcal{K }_\omega \) is compact for any \(\omega \in \Omega \), we have
$$\begin{aligned} \psi _\omega (\mathcal{K }_\omega )&= \psi _\omega \bigl (\bigl [\{\xi _n(\omega ),n\ge 1\}\bigr ]_X\bigr ) =\bigl [\psi _\omega \bigl (\{\xi _n(\omega ),n\ge 1\}\bigr )\bigr ]_Y\\&= \bigl [\{\psi _\omega (\xi _n(\omega )),n\ge 1\}\bigr ]_Y. \end{aligned}$$
It remains to note that \(\psi _\omega (\xi _n(\omega ))\) are \(Y\)-valued random variables, and therefore the right-hand side of the above relation defines a random compact set in \(Y\). \(\square \)

Kolmogorov–Čentsov theorem

The Kolmogorov–Čentsov theorem provides a sufficient condition for the Hölder continuity of trajectories of a random process. We shall need the following quantitative version of that result, which is a particular case of Theorem 1.4.4 in [27].

Theorem 5.7

Let \(X\) be a Banach space and let \(\{\xi _t, 0\le t\le T\}\) be an \(X\)-valued random process with almost surely continuous trajectories that is defined on a probability space \((\Omega ,\mathcal{F },\mathbb{P })\) and satisfies the inequality
$$\begin{aligned} \mathbb{E }\,\Vert \xi _t-\xi _s\Vert _X^{2p}\le C_p|t-s|^p\quad \text{ for any } t,s\in [0,T],\ p\ge 1, \end{aligned}$$
(5.25)
where \(C_p>0\) is a constant not depending on \(t\) and \(s\). Then for any \(\gamma \in (0,1/2)\) there is a constant \(K_\gamma >0\) and an almost surely positive random variable \(t_\gamma \) such that
$$\begin{aligned} \Vert \xi _t(\omega )-\xi _s(\omega )\Vert _X&\le K_\gamma |t-s|^\gamma \quad \text{ for } |t-s|\le t_\gamma (\omega ),\end{aligned}$$
(5.26)
$$\begin{aligned} \mathbb{E }\,t_\gamma ^{-q}&< \infty \quad \text{ for any } q\ge 1. \end{aligned}$$
(5.27)

Let us emphasise that we assume from the very beginning the continuity of almost all trajectories of \(\xi _t\), so that we do not need to modify our process.
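A standard example covered by Theorem 5.7 is a scalar Brownian motion \(\beta _t\): the increment \(\beta _t-\beta _s\) is a centred Gaussian variable with variance \(|t-s|\), so the Gaussian moment formula gives
$$\begin{aligned} \mathbb{E }\,|\beta _t-\beta _s|^{2p}=(2p-1)!!\,|t-s|^{p}, \end{aligned}$$
that is, (5.25) holds with \(C_p=(2p-1)!!\), and the theorem yields the \(\gamma \)-Hölder continuity of the trajectories for every \(\gamma \in (0,1/2)\).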

Sketch of the proof

We repeat the argument used in Sect. 2.2.B of [24]. Without loss of generality, we can assume that \(T=1\). Let us fix any \(\gamma \in (0,1/2)\) and introduce the events
$$\begin{aligned} \Omega _n^{(k)}=\bigl \{\omega \in \Omega : \Vert \xi _{k/2^n}(\omega )-\xi _{(k-1)/2^n}(\omega )\Vert _X \ge 2^{-\gamma n}\bigr \}, \quad \Omega _n=\bigcup _{k=1}^{2^n}\Omega _n^{(k)}, \end{aligned}$$
where \(n\ge 1\) and \(1\le k\le 2^n\). It follows from (5.25) and the Chebyshev inequality that
$$\begin{aligned} \mathbb{P }\bigl (\Omega _n^{(k)}\bigr )\le C_p2^{-np(1-2\gamma )}. \end{aligned}$$
Summing up over \(k=1,\dots ,2^n\), we derive
$$\begin{aligned} \mathbb{P }(\Omega _n)\le C_p2^{-n\alpha _p}, \quad \alpha _p=-1+p(1-2\gamma ). \end{aligned}$$
Choosing \(p\ge 1\) so large that \(\alpha _p>0\) and applying the Borel–Cantelli lemma, we construct an almost surely finite random integer \(n_0\ge 1\) such that \(\omega \notin \Omega _n\) for \(n\ge n_0(\omega )\) and \(\omega \in \Omega _{n_0-1}\) if \(n_0(\omega )\ge 2\). In particular, we have
$$\begin{aligned} \Vert \xi _{k/2^n}(\omega )-\xi _{(k-1)/2^n}(\omega )\Vert _X < 2^{-\gamma n} \quad \text{ for } n\ge n_0(\omega ),\ k=1,\dots ,2^n. \end{aligned}$$
(5.28)
As is shown in the proof of Theorem 2.8 of [24, Chapt. 2], inequality (5.28) implies (5.26) with \(K_\gamma =2/(1-2^{-\gamma })\) and \(t_\gamma =2^{-n_0}\). Thus, the theorem will be proved if we show that \(\mathbb{E }\,2^{qn_0}<\infty \) for any \(q\ge 1\).
To this end, note that \(\{n_0=m\}\subset \Omega _{m-1}\) for any \(m\ge 2\). It follows that
$$\begin{aligned} \mathbb{E }\,2^{qn_0}\le 2^q+\sum _{m=2}^\infty 2^{qm}\mathbb{P }(\Omega _{m-1}) \le 2^q+C_p\sum _{m=2}^\infty 2^{qm-\alpha _p(m-1)}. \end{aligned}$$
Choosing \(p\ge 1\) so large that \(\alpha _p>q\), we see that the series on the right-hand side of the above inequality converges. \(\square \)
Note that one can rewrite (5.26) and (5.27) in the form
$$\begin{aligned} \Vert \xi _t(\omega )-\xi _s(\omega )\Vert _X\le C_\gamma (\omega )\,|t-s|^\gamma \quad \text{ for } t,s\in [0,T], \end{aligned}$$
where \(C_\gamma \) is a random variable with finite moments. We now apply the above result to establish a time-regularity property for the process \(U^\omega \) defined in the beginning of Sect. 4.2.

Proposition 5.8

For any \(\gamma \in (0,1/2)\) and any \(T>0\) there is a random variable \(C_{\gamma ,T}>0\) all of whose moments are finite such that
$$\begin{aligned} \Vert U(t)-U(s)\Vert _2\le C_{\gamma ,T}\,|t-s|^\gamma \quad \text{ for } t,s\in [-T,T]. \end{aligned}$$
(5.29)

Proof

In view of the remark following the proof of Theorem 5.7, it suffices to check that \(U\) satisfies inequality (5.25) with \([0,T]\) replaced by \([-T,T]\) and \(X=H^1\). Since \(U\) is stationary, we can assume that \(s=0\). Equation (4.8) implies that
$$\begin{aligned} U(t)-U(0)=\int \limits _0^ta\Delta U(r)\,dr+\zeta (t), \end{aligned}$$
whence, applying the Hölder inequality, it follows that
$$\begin{aligned} \Vert U(t)-U(0)\Vert _2^{2p}\le 2^{2p-1} \biggl \{|t|^p\biggl (\int \limits _0^t\Vert a\Delta U(r)\Vert _2^{2}\,dr\biggr )^p+\Vert \zeta (t)\Vert _2^{2p}\biggr \}. \end{aligned}$$
Using (4.10), we see that the mean value of the first term on the right-hand side can be estimated by \(C|t|^{p}\). Thus, the required inequality will be established if we show that
$$\begin{aligned} \mathbb{E }\,\Vert \zeta \Vert _2^{2p}\le C_p|t|^p. \end{aligned}$$
(5.30)
To this end, we note that \(\Vert \zeta \Vert _2^2=\sum _jc_j^2\beta _j^2(t)\), where \(c_j=b_j\lambda _j\). The monotone convergence theorem and the Burkholder inequality (see Theorem 2.10 in [23]) imply that
$$\begin{aligned} \mathbb{E }\,\Vert \zeta \Vert _2^{2p}&= \lim _{n\rightarrow \infty }\mathbb{E }\biggl (\sum _{j=1}^n c_j^2\beta _j^2(t)\biggr )^p \le C_1\lim _{n\rightarrow \infty }\mathbb{E }\,\biggl |\sum _{j=1}^n c_j\beta _j(t)\biggr |^{2p}\\&= C_2(p)\lim _{n\rightarrow \infty }\Bigl (|t|\sum _{j=1}^nc_j^2\Bigr )^p \le C_3(p)\,|t|^p, \end{aligned}$$
where we used the fact that \(\sum _j c_j\beta _j(t)\) is a zero-mean Gaussian random variable with variance \(t\sum _jc_j^2\). This proves (5.30) and completes the proof of the proposition. \(\square \)
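The Gaussian fact used in the last step is worth recording explicitly: if \(\xi \) is a centred Gaussian random variable with variance \(\sigma ^2\), then
$$\begin{aligned} \mathbb{E }\,|\xi |^{2p}=(2p-1)!!\,\sigma ^{2p}. \end{aligned}$$
Applied to \(\xi =\sum _{j=1}^nc_j\beta _j(t)\), whose variance is \(|t|\sum _jc_j^2\), this gives the final bound in the chain of inequalities above.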

Footnotes

  1. In the case \(n=1\), the left-hand side of this inequality is zero.

  2. This family of isomorphisms is needed to describe the regularity of dependence of random objects on \(\omega \). In the next subsection, when dealing with continuous-time RDS, we shall take for \(\theta _\tau \) the underlying group of measure-preserving transformations.

  3. For instance, see Proposition 2.4.10 in [26] for the more complicated case of the Navier–Stokes system.

  4. To have an absorbing set, one could take for \(R_\omega ^{{\varepsilon },2}\) the integral of \(\Vert {\varepsilon }\,U^\omega (\sigma +\tau _0)\Vert _2^{p+1}\) in \(\sigma \in [-3,0]\). However, in this case the stability condition may not hold, and therefore we define \(R_\omega ^{{\varepsilon },2}\) in a different way. Our choice ensures that (4.21) holds for the radius of the absorbing ball.

  5. We thank A. Iftimovici for the simple geometric argument proving Lemma 5.4.

  6. We need, however, the additional inequality (5.27), which is not mentioned in [27].

Notes

Acknowledgments

We thank the anonymous referees for pertinent critical remarks which helped to improve the presentation and to remove some inaccuracies. This work was supported by the Royal Society–CNRS grant Long time behavior of solutions for stochastic Navier–Stokes equations (No. YFDRN93583). The first author was supported by the ANR grant STOSYMAP (ANR 2011 BS01 015 01).

References

  1. Arnold, L.: Random Dynamical Systems. Springer, Berlin (1998)
  2. Bensoussan, A., Flandoli, F.: Stochastic inertial manifold. Stoch. Stoch. Rep. 53(1–2), 13–39 (1995)
  3. Babin, A.V., Vishik, M.I.: Attractors of Evolution Equations. North-Holland, Amsterdam (1992)
  4. Crauel, H., Debussche, A., Flandoli, F.: Random attractors. J. Dyn. Differ. Equ. 9(2), 307–341 (1997)
  5. Crauel, H., Flandoli, F.: Attractors for random dynamical systems. Probab. Theory Relat. Fields 100(3), 365–393 (1994)
  6. Crauel, H., Flandoli, F.: Additive noise destroys a pitchfork bifurcation. J. Dyn. Differ. Equ. 10, 259–274 (1998)
  7. Chueshov, I., Girya, T.: Inertial manifolds and stationary measures for stochastically perturbed dissipative dynamical systems. Mat. Sb. 186(1), 29–46 (1995)
  8. Chepyzhov, V., Goritsky, A.: Global integral manifolds with exponential tracking for nonautonomous equations. Russ. J. Math. Phys. 5(1), 9–28 (1997)
  9. Carvalho, A., Langa, J., Robinson, J.: Attractors for Infinite-Dimensional Non-Autonomous Dynamical Systems. Springer, New York (2013)
  10. Chueshov, I., Scheutzow, M.: Inertial manifolds and forms for stochastically perturbed retarded semilinear parabolic equations. J. Dyn. Differ. Equ. 13(2), 355–380 (2001)
  11. Chueshov, I., Scheutzow, M.: On the structure of attractors and invariant measures for a class of monotone random systems. Dyn. Syst. 19(2), 127–144 (2004)
  12. Chueshov, I., Scheutzow, M., Schmalfuss, B.: Continuity properties of inertial manifolds for stochastic retarded semilinear parabolic equations. In: Interacting Stochastic Systems. Springer, Berlin (2005)
  13. Chepyzhov, V.V., Vishik, M.I.: Attractors for Equations of Mathematical Physics, vol. 49. AMS, Providence (2002)
  14. Chepyzhov, V., Vishik, M., Zelik, S.: Regular attractors and their non-autonomous perturbations. Mat. Sb. (N.S.) (2012, to appear)
  15. Da Prato, G., Zabczyk, J.: Stochastic Equations in Infinite Dimensions. Cambridge University Press, Cambridge (1992)
  16. Eden, A., Foias, C., Nicolaenko, B., Temam, R.: Exponential Attractors for Dissipative Evolution Equations. RAM: Research in Applied Mathematics, vol. 37. Masson, Paris (1994)
  17. Efendiev, M., Miranville, A., Zelik, S.: Exponential attractors for a nonlinear reaction–diffusion system in \(\mathbb{R }^3\). C. R. Acad. Sci. Paris I 330(8), 713–718 (2000)
  18. Efendiev, M., Miranville, A., Zelik, S.: Infinite dimensional exponential attractors for a non-autonomous reaction–diffusion system. Math. Nachr. 248/249, 72–96 (2003)
  19. Efendiev, M., Miranville, A., Zelik, S.: Exponential attractors and finite-dimensional reduction for non-autonomous dynamical systems. Proc. R. Soc. Edinb. A 135, 703–730 (2005)
  20. Fabrie, P., Galusinski, C., Miranville, A., Zelik, S.: Uniform exponential attractors for a singularly perturbed damped wave equation. Discret. Contin. Dyn. Syst. 10(1–2), 211–238 (2004)
  21. Flandoli, F.: Dissipativity and invariant measures for stochastic Navier–Stokes equations. NoDEA Nonlinear Differ. Equ. Appl. 1(4), 403–423 (1994)
  22. Foias, C., Sell, G., Temam, R.: Inertial manifolds for nonlinear evolutionary equations. J. Differ. Equ. 73(2), 309–353 (1988)
  23. Hall, P., Heyde, C.C.: Martingale Limit Theory and Its Application. Academic Press, New York (1980)
  24. Karatzas, I., Shreve, S.E.: Brownian Motion and Stochastic Calculus. Springer, New York (1991)
  25. Kuksin, S., Shirikyan, A.: On random attractors for systems of mixing type. Funct. Anal. Appl. 38(1), 28–37 (2004)
  26. Kuksin, S., Shirikyan, A.: Mathematics of Two-Dimensional Turbulence. Cambridge University Press, Cambridge (2012)
  27. Kunita, H.: Stochastic Flows and Stochastic Differential Equations. Cambridge University Press, Cambridge (1997)
  28. Langa, J., Miranville, A., Real, J.: Pullback exponential attractors. Discret. Contin. Dyn. Syst. 26(4), 1329–1357 (2010)
  29. Lorentz, G.G.: Approximation of Functions. Chelsea Publishing Co., New York (1986)
  30. Miranville, A.: Exponential attractors for nonautonomous evolution equations. Appl. Math. Lett. 11(2), 19–22 (1998)
  31. Miranville, A., Zelik, S.: Attractors for dissipative partial differential equations in bounded and unbounded domains. In: Handbook of Differential Equations: Evolutionary Equations, vol. IV, pp. 103–200. North-Holland, Amsterdam (2008)
  32. Stroock, D.: Probability. An Analytic Viewpoint. Cambridge University Press, Cambridge (1993)
  33. Temam, R.: Infinite-Dimensional Dynamical Systems in Mechanics and Physics. Springer, New York (1988)
  34. Walters, P.: An Introduction to Ergodic Theory. Graduate Texts in Mathematics, vol. 79. Springer, New York (1982)

Copyright information

© Springer Science+Business Media New York 2013

Authors and Affiliations

  1. Department of Mathematics, University of Cergy-Pontoise, Cergy-Pontoise, France
  2. Department of Mathematics, University of Surrey, Guildford, UK
