1 Introduction

Attractor theory is a useful tool in the study of infinite-dimensional dynamical systems, especially in numerical simulations and computations. Roughly speaking, if a system has an attractor, then any solution trajectory of the system can be tracked by trajectories within the attractor; moreover, if the attractor has finite dimension, then finitely many degrees of freedom are expected to fully determine the asymptotic behavior of the system, even though the phase space of the system is infinite dimensional. This is known as a finite-dimensional reduction of infinite-dimensional dynamical systems (Temam 1997; Robinson 2011). In order to describe the long-time behavior of infinite-dimensional dynamical systems, one often studies the attractors associated with them. Depending on the setting in which a problem is posed, a number of typical attractors have been introduced and extensively studied: global attractors (Temam 1997; Robinson 2001), exponential attractors (Eden et al. 1994), pullback/cocycle attractors (Kloeden and Rasmussen 2011; Carvalho et al. 2013), uniform attractors (Chepyzhov and Vishik 2002; Bortolan et al. 2020), etc., each describing in its own way the asymptotic dynamics of the system under consideration.

Ever since the work of Crauel and Flandoli (1994), attractor theory has been extended to random dynamical systems, for which stochastic perturbations are taken into account (Arnold 1998; Chueshov 2002). Particularly for a non-autonomous random dynamical system (abbrev. NRDS), i.e., a random dynamical system with additional time-dependent terms (e.g., a time-dependent forcing field), pullback random attractors have been extensively studied, see, e.g., Wang (2012, 2014), Cui et al. (2017) and also Caraballo and Sonner (2017) for pullback exponential attractors. However, the non-autonomous feature of the system prevents one from learning about the forward dynamics of the underlying system from these pullback attractors, and this was the motivation of our previous work (Cui and Langa 2017), where a random uniform attractor was developed that provides a certain description of the forward dynamics of the system through the property of uniform forward attraction in probability.

In this paper, we study conditions that ensure a random uniform attractor to have finite fractal dimension. Thanks to the pioneering works of Mallet-Paret (1976) and Mañé (1981), it is well understood that a finite bound on the fractal dimension of an attractor guarantees that the attractor can be embedded into a Euclidean space \({\mathbb {R}}^k\) for some \(k \in {\mathbb {N}}\), and this embedding can be taken to be linear with a Hölder continuous inverse, see, e.g., Robinson (2011). Hence, estimating the fractal dimension of an attractor is useful for the finite-dimensional reduction of an infinite-dimensional dynamical system. However, since the study of uniform attractors is usually based on a symbol space which contains auxiliary elements that need not belong to the original system, a uniform attractor is more often infinite dimensional. In fact, whether the uniform attractor of an infinite-dimensional system can be finite dimensional under acceptable conditions, and how flexible such conditions can be, has remained an untouched problem for almost twenty years.

In a previous work (Cui et al. 2021), we studied the finite-dimensionality of deterministic uniform attractors. By means of a smoothing property of the underlying system, we established criteria for a uniform attractor to have finite fractal dimension, with an upper bound consisting of two parts: the fractal dimension of the symbol space plus an auxiliary number arising from the smoothing property. This structure of the upper bound agrees with the result of Chepyzhov and Vishik (Chepyzhov and Vishik 2002, Theorem IX 2.1) established by studying the quasi-differentials of the system. In addition, we showed in Cui et al. (2021) that the finite-dimensionality of the symbol space is fully determined by the tails of the non-autonomous term of the original system; in other words, the tails of the non-autonomous term are what is crucial for the finite-dimensionality of a uniform attractor.

In this paper, we change the framework to a random environment, which entails crucial theoretical and technical differences from previous papers in the literature, and we provide an alternative to the smoothing property based on a squeezing property. More precisely, we shall present two general criteria for estimating the fractal dimension of random uniform attractors. One is based on a smoothing property of the system, which allows the phase space to be merely Banach but requires an auxiliary space that is compactly embedded into the phase space, see Theorem 3.3; the other is based on a squeezing property of the system, where no auxiliary space is needed but the phase space, in applications, should be a Hilbert space, see Theorem 3.6. Neither of the two theorems implies the other. Note that smoothing and squeezing properties have already been used in the literature to deal with various problems in dynamics; see for instance Málek et al. (1994) and also later papers (Caraballo and Sonner 2017; Carvalho and Sonner 2013; Czaja and Efendiev 2011; Efendiev et al. 2000, 2003; Efendiev and Zelik 2008; Efendiev et al. 2011; Shirikyan and Zelik 2013; Zhao and Zhou 2016) for the smoothing property in estimating fractal dimensions as well as constructing exponential attractors, and Foias and Temam (1979) and later literature (Debussche 1997; Eden et al. 1994; Flandoli and Langa 1999; Kloeden and Langa 2007; Zelati and Kalita 2015; Cui et al. 2018a) for the use of the squeezing property. Nevertheless, here we need to carefully overcome the difficulties arising jointly from three features of the problem:

  (a) the lack of invariance of the random uniform attractor;

  (b) the superposition of the base flow on the symbol space;

  (c) the stochastic nature of the problem.

For the first two problems, our previous work (Cui et al. 2021) provides some inspiration for solutions. We carefully make use of the relationship

$$\begin{aligned} {\mathscr {A}}(\omega ) =\bigcup _{\sigma \in \Sigma } {\mathcal {A}}_\sigma (\omega ),\quad \omega \in \Omega , \end{aligned}$$

between the uniform attractor \({\mathscr {A}}\) and the cocycle attractor \({\mathcal {A}}\) of the underlying system, where \(\Sigma \) is the symbol space of the system and \((\Omega , {\mathcal {F}}, {\mathbb {P}})\) a probability space. This allows us to decompose the uniform attractor into sections of the cocycle attractor, for which the invariance of the cocycle attractor \({\mathcal {A}}\) is then useful. Nevertheless, since the absorption time of the random absorbing set is usually random, the analysis in this paper is much more technical than in Cui et al. (2021). Our solution requires that the absorbing set absorb itself after a deterministic time. This condition is in fact slightly stronger than needed, but it facilitates our analysis. It also appeared in Shirikyan and Zelik (2013) for the construction of random exponential attractors; our application to a reaction–diffusion equation in Sect. 4 shows that this condition holds naturally in the additive noise case. The multiplicative noise case can also have this property, which will be shown in future work.

The third problem is the stochastic nature of the setting. Basically, Birkhoff’s ergodic theorem is frequently used, and so the coefficients in the conditions are often required to have finite expectation, which, however, is sometimes difficult to verify in applications. An example is the coefficient \(\kappa (\omega )\) appearing in the smoothing condition \((H_6)\), whose finite expectation is unknown for the application to the reaction–diffusion system (4.1). The squeezing condition (S) for system (4.1), however, can be verified. In this sense, the squeezing approach seems more applicable than the smoothing one.

On the other hand, an advantage of the smoothing approach is that, once the finite-dimensionality of the uniform attractor has been established, the smoothing property allows one to easily carry it over to more regular spaces, see Theorem 3.8. In addition, the coefficient in the smoothing property does not need to have finite expectation there, so it does not suffer from the application problem mentioned above. Hence, both the smoothing and the squeezing ideas are useful in estimating the fractal dimension of random uniform attractors. In Sect. 4, we present an application to a reaction–diffusion system for which the finite-dimensionality of the random uniform attractor in \(L^2\) is established by the squeezing method and in \(H_0^1\) by the smoothing method. An absorbing set with a deterministic absorption time for the system is also constructed, which is crucial for the analysis.

Finally, we note that Han and Zhou (2019) recently constructed a random uniform exponential attractor for a stochastic reaction–diffusion equation with quasi-periodic forcing. Their result was derived by considering an extended phase space and then studying the corresponding skew-product semiflow, whereby the problem is reduced to an autonomous one. In this case, i.e., when the skew-product semiflow method applies, the random uniform attractor can be studied via the random attractor of the skew-product semiflow and, as a particular case, the finite-dimensionality of random uniform attractors can be derived more directly from the finite-dimensional theory of random attractors for autonomous RDS. Here, however, since we consider non-autonomous terms more general than quasi-periodic forcings, so that the symbol space is no longer even a linear space, the skew-product semiflow approach fails. Thus, we have to treat the random uniform attractor in its own right, and our method has the advantage that more general non-autonomous terms are allowed.

Notations. Throughout the paper, for any metric space \(({\mathscr {X}},d_{\mathscr {X}})\) and real number \(r>0\) we denote by \(B_{\mathscr {X}}(x,r)\) the open ball centered at \(x\in {\mathscr {X}}\) with radius r, and by \(B_{\mathscr {X}}(A,r) :=\cup _{x\in A} B_{\mathscr {X}}(x,r)\) we denote the open r-neighborhood of any non-empty subset A of \({\mathscr {X}}\). Given a precompact set \(E\subset {\mathscr {X}}\), we denote by \(\sharp E\) the cardinality of E, by \(N_{\mathscr {X}}[E;r]\) the minimal number of open balls of radius r in \({\mathscr {X}}\) needed to cover E, and the fractal dimension of E in \({\mathscr {X}}\) is defined by

$$\begin{aligned} \dim _F(E;{\mathscr {X}}) :=\limsup _{\varepsilon \rightarrow 0^+} \frac{\log _2 N_{\mathscr {X}}[E;\varepsilon ]}{-\log _2\varepsilon } . \end{aligned}$$

The Hausdorff semi-distance between non-empty sets in \({\mathscr {X}}\) is defined by

$$\begin{aligned} \mathrm{dist}_{{\mathscr {X}}} (A,B) := \sup _{a\in A}\inf _{b\in B}d_{\mathscr {X}}(a,b), \qquad A, B\in 2^{\mathscr {X}}{\setminus } \emptyset . \end{aligned}$$
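For finite sets this semi-distance can be computed directly; the following minimal Python sketch (purely illustrative, not part of the paper's framework) also shows that the semi-distance is not symmetric:

```python
def semi_dist(A, B, d):
    # Hausdorff semi-distance: sup over a in A of the distance from a
    # to the nearest point of B.
    return max(min(d(a, b) for b in B) for a in A)

d = lambda x, y: abs(x - y)  # the usual metric on the real line
A, B = [0.0, 1.0], [0.0, 1.0, 2.0]
print(semi_dist(A, B, d))  # 0.0, since A is a subset of B
print(semi_dist(B, A, d))  # 1.0, realized by the point 2.0
```

In particular, \(\mathrm{dist}_{{\mathscr {X}}}(A,B)=0\) only means \(A\subseteq {\overline{B}}\); this one-sided notion is exactly what attraction statements require.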

We denote by \({\mathcal {B}}({\mathscr {X}})\) the Borel sigma-algebra of \({\mathscr {X}}\).

2 Preliminaries on Random Uniform Attractors

Given a separable metric space \((\Xi ,d_\Xi )\), suppose that \( \Sigma \subset \Xi \) is a compact metric subspace endowed with a group \(\{\theta _t\}_{t\in {\mathbb {R}}}\) acting on it, satisfying \(\theta _0 \sigma = \sigma \) and \(\theta _t(\theta _s\sigma )=\theta _{t+s}\sigma \) for all \(\sigma \in \Sigma \), \(t,s\in {\mathbb {R}}\), such that the map \((t,\sigma )\mapsto \theta _t\sigma \) is \(({\mathbb {R}}\times \Sigma , \Sigma )\)-continuous. Moreover, we assume that \(\Sigma \) is invariant under \(\{\theta _t\}_{t \in {\mathbb {R}}}\), i.e., \(\theta _t\Sigma =\Sigma \) for all \(t\in {\mathbb {R}}\). For a set A, let \({\mathcal {B}} (A)\) be the Borel sigma-algebra of A. Denote by \((\Omega , {\mathcal {F}}, {\mathbb {P}})\) a probability space, which need not be \({\mathbb {P}}\)-complete, endowed also with a flow \(\{\vartheta _t\}_{t\in {\mathbb {R}}}\) satisfying the following conditions:

  • \(\vartheta _0\) = identity operator on \(\Omega \);

  • \(\vartheta _t \Omega =\Omega \),   \(\forall t\in {\mathbb {R}}\);

  • \( \vartheta _{ s}\circ \vartheta _{ t}=\vartheta _{ t+s },\quad \forall t,s\in {\mathbb {R}}; \)

  • \((t,\omega )\mapsto \vartheta _t\omega \) is \( \big ( {\mathcal {B}}({\mathbb {R}})\times {\mathcal {F}}, {\mathcal {F}} \big )\)-measurable;

  • \(\{\vartheta _t\}_{t\in {\mathbb {R}}} \) is \({\mathbb {P}}\)-preserving: \({\mathbb {P}}(\vartheta _tF)={\mathbb {P}}(F)\),    \(\forall t\in {\mathbb {R}}\) and \(F\in {\mathcal {F}}\);

  • \(\{\vartheta _t\}_{t\in {\mathbb {R}}} \) is ergodic, namely, if \(F\in {\mathcal {F}}\) is invariant under \(\vartheta _t\), then \({\mathbb {P}}(F)=0\) or 1.

The two groups \(\{\theta _t\}_{t\in {\mathbb {R}}}\) and \(\{\vartheta _t\}_{t\in {\mathbb {R}}}\) acting on \(\Sigma \) and \(\Omega \), respectively, are called base flows. Since we do not assume the probability space to be complete, we shall not distinguish a full-measure subspace \({\tilde{\Omega }}\) from \(\Omega \); that is, by saying that a statement holds for all \(\omega \in \Omega \) we mean that it holds for all \(\omega \) in some full-measure subset \({\tilde{\Omega }}\subseteq \Omega \). We denote by \({\mathbb {E}}(L)=\int _\Omega L(\omega ){\mathbb {P}}( \mathrm{d}\omega )\) the expectation of a random variable L.

Suppose that \((X, \Vert \cdot \Vert _X)\) is a separable Banach space. The definition of a non-autonomous random dynamical system in X is given as follows.

Definition 2.1

A map \(\phi : {\mathbb {R}}^+\times \Omega \times \Sigma \times X\rightarrow X \) is said to be a non-autonomous random dynamical system (abbrev. NRDS) in X (with base flows \(\{\vartheta _t\}_{t\in {\mathbb {R}}}\) on \(\Omega \) and \(\{\theta _{t}\}_{t\in {\mathbb {R}}}\) on \(\Sigma \)) if

  (i) \(\phi \) is \(\big ({\mathcal {B}}({\mathbb {R}}^+)\times {\mathcal {F}}\times {\mathcal {B}}(\Sigma )\times {\mathcal {B}}(X), {\mathcal {B}}(X) \big )\)-measurable;

  (ii) For every \(\sigma \in \Sigma \) and \(\omega \in \Omega \), \(\phi (0,\omega ,\sigma ,\cdot )\) is the identity on X;

  (iii) \(\phi \) satisfies the cocycle property: for each fixed \(\sigma \in \Sigma ,\) \(x\in X\) and \(\omega \in \Omega \),

    $$\begin{aligned} \phi (t+s, \omega , \sigma ,x)=\phi (t,\vartheta _{s}\omega ,\theta _{s} \sigma )\circ \phi (s, \omega ,\sigma ,x) ,\quad \forall t,s\in {\mathbb {R}}^+ , \end{aligned}$$

    where and throughout the paper, \(\phi (t, \vartheta _{s}\omega ,\theta _{s} \sigma )\circ \phi (s, \omega ,\sigma ,x):=\phi \big (t,\vartheta _{s}\omega ,\theta _{s} \sigma , \phi (s, \omega ,\sigma ,x)\big )\).

An NRDS \(\phi \) is said to be \((\Sigma \times X,X)\)-continuous if, for each fixed \(t\in {\mathbb {R}}^+\) and \(\omega \in \Omega \), the map \((\sigma ,x)\mapsto \phi (t,\omega ,\sigma ,x)\) is continuous from \(\Sigma \times X\) to X.
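To make the cocycle property concrete, consider the scalar toy model \(\dot{x} = -x + f(\theta _t\sigma ,\vartheta _t\omega )\), where both base flows are replaced by translations on \({\mathbb {R}}\) as deterministic stand-ins (this example, including the choice of f, is purely illustrative and not taken from the paper). Its variation-of-constants solution defines an NRDS, and the cocycle property can be checked numerically:

```python
import math

def f(sigma, omega):
    # Illustrative non-autonomous + "random" input terms.
    return math.sin(sigma) + math.cos(omega)

def phi(t, omega, sigma, x, n=4000):
    # Variation-of-constants solution of dx/dt = -x + f(sigma + s, omega + s),
    # i.e. with translation base flows theta_s sigma = sigma + s and
    # vartheta_s omega = omega + s, integrated by the trapezoidal rule.
    h = t / n
    integral = 0.0
    for i in range(n + 1):
        s = i * h
        w = 0.5 if i in (0, n) else 1.0  # trapezoidal weights
        integral += w * math.exp(-(t - s)) * f(sigma + s, omega + s)
    return math.exp(-t) * x + h * integral

t, s, omega, sigma, x = 0.7, 1.3, 0.2, -0.5, 2.0
lhs = phi(t + s, omega, sigma, x)
rhs = phi(t, omega + s, sigma + s, phi(s, omega, sigma, x))
print(abs(lhs - rhs))  # small: equality holds up to quadrature error
```

Here the two shift parameters play the roles of \(\sigma \) and \(\omega \); in a genuine NRDS, \(\omega \) would be a sample point of the noise rather than a real number.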

In applications, an NRDS \(\phi \) is typically generated by an evolution equation with both a non-autonomous forcing (from the space \(\Xi \)) and random perturbations, while the space \(\Sigma \) is formulated via all the time translations of the forcing. In this case, the forcing is called the (non-autonomous) symbol of the equation, and the space \(\Sigma \) is called the symbol space of the NRDS \(\phi \).

Definition 2.2

A set-valued map \(D:\Omega \rightarrow 2^X\) taking values in the closed subsets of a Polish space X is said to be measurable if for each \(x\in X\) the map \(\omega \mapsto \mathrm{dist}_X(x,D(\omega ))\) is \(({\mathcal {F}},{\mathcal {B}}({\mathbb {R}}))\)-measurable. In this case, D is called a closed random set. If each section \(D(\omega )\) of D is in addition compact, then D is called a compact random set. D is said to be an open random set if its complement \(D^c\) is a closed random set.

Due to the attracting property of a random attractor, the distance between trajectories and the attractor is expected to be measurable; this is why we adopt the above notion of measurability, defined via the distance function. Compared with the alternative definition of a closed random set D as a measurable set in \(\Omega \times X\) with all (or almost all) of its sections \(D(\omega )\) closed, Definition 2.2 requires more: a measurable set in \(\Omega \times X\) with closed sections is not necessarily a closed random set in the sense of Definition 2.2, and the two definitions coincide only with respect to the universal sigma-algebra of \({\mathcal {F}}\), see for instance (Crauel 2002, Proposition 2.4). On the other hand, for an open set-valued map \(\omega \mapsto U(\omega )\), the fact that \({\overline{U}}\) is a closed random set does not suffice to conclude that U is an open random set; i.e., even with \(\mathrm{dist}_X(x, U(\cdot ))\) measurable we cannot conclude that U is an open random set. A counterexample is given by Crauel (2002, Remark 2.11).

In the following, a (closed or open) random set D is always understood in the sense of Definition 2.2, and D is often identified with its image \(\{D(\omega )\}_{\omega \in \Omega }\). Given two random sets \(D_1\) and \(D_2\), we say that \(D_1\) is inside of \(D_2\) if \(D_1(\omega )\subseteq D_2(\omega )\) for all \(\omega \in \Omega \).

2.1 Cocycle and Uniform Attractors of Non-autonomous Random Dynamical Systems

Before defining the attractors, let us first introduce the attraction universe \({\mathcal {D}}\), a collection of random sets that are expected to be attracted by an attractor. In this paper, we consider the universe \({\mathcal {D}}\) of all tempered random sets in X, i.e.,

$$\begin{aligned} {\mathcal {D}}:=\Big \{D:D \text { is a tempered closed random set in }X \Big \}, \end{aligned}$$

where a closed random set D in X is said to be tempered if \(\Vert D(\omega )\Vert _X:=\sup _{x\in D(\omega )}\Vert x\Vert _X \leqslant R(\omega )\) for some random variable \(R(\cdot ):\Omega \rightarrow {\mathbb {R}}\) which is tempered, i.e.,

$$\begin{aligned} \lim _ { t \rightarrow \pm \infty } \frac{ \ln R (\vartheta _t \omega ) }{|t| } = 0\qquad \big (\ln \cdot =\log _e\cdot \big ),\quad \forall \omega \in \Omega . \end{aligned}$$
(2.1)
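For instance, with the translation flow \(\vartheta _t\omega =\omega +t\) on \(\Omega ={\mathbb {R}}\) taken as a simple deterministic stand-in (an illustrative example, not from the paper), polynomially growing radii are tempered:

$$\begin{aligned} R(\omega )=1+|\omega |^p:\qquad \frac{\ln R(\vartheta _t\omega )}{|t|}=\frac{\ln \big (1+|\omega +t|^p\big )}{|t|}\sim \frac{p\ln |t|}{|t|}\rightarrow 0 \quad \text {as } t\rightarrow \pm \infty , \end{aligned}$$

whereas \(R(\omega )=e^{\omega ^2}\) gives \(\ln R(\vartheta _t\omega )/|t|=(\omega +t)^2/|t|\rightarrow \infty \) and hence is not tempered.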

Definition 2.3

(Cui et al. 2017) A family \({\mathcal {A}}=\{{\mathcal {A}}_\sigma (\cdot )\}_{\sigma \in \Sigma }\) of compact random sets is said to be the \({\mathcal {D}}\)-cocycle attractor of an NRDS \(\phi \), if

  • \({\mathcal {A}}\) is \({\mathcal {D}}\)-pullback attracting, i.e., for each \(D\in {\mathcal {D}}\),

    $$\begin{aligned} \lim _{t\rightarrow \infty } \mathrm{dist}_X \big (\phi (t,\vartheta _{-t}\omega , \theta _{-t}\sigma , D(\vartheta _{-t}\omega ) ), {\mathcal {A}}_\sigma (\omega ) \big ) =0 ,\quad \forall \omega \in \Omega , \sigma \in \Sigma ; \end{aligned}$$
  • (Minimality) if \({\mathcal {A}}'=\{{\mathcal {A}}'_\sigma (\cdot )\}_{\sigma \in \Sigma }\) is another family of compact random sets satisfying the above condition, then \({\mathcal {A}}_\sigma (\omega )\subseteq {\mathcal {A}}'_\sigma (\omega )\), for all \(\omega \in \Omega \), \(\sigma \in \Sigma \);

  • \({\mathcal {A}}\) is invariant under \(\phi \), that is

    $$\begin{aligned} \phi \big (t, \omega ,\sigma , {\mathcal {A}}_\sigma (\omega ) \big )= {\mathcal {A}}_{\theta _t\sigma }(\vartheta _t\omega ) ,\quad \forall t\geqslant 0, \, \omega \in \Omega ,\, \sigma \in \Sigma . \end{aligned}$$

In applications, every section \({\mathcal {A}}_\sigma (\cdot )\) of a cocycle attractor \({\mathcal {A}}\) is often observed to belong to the attraction universe, i.e., \({\mathcal {A}}_\sigma (\cdot )\in {\mathcal {D}}\) for every \(\sigma \in \Sigma \). This makes it possible to replace the minimality condition in Definition 2.3 by the condition that \({\mathcal {A}}_\sigma (\cdot )\in {\mathcal {D}}\) for every \(\sigma \in \Sigma \). Above we follow the definition of Cui et al. (2017), where a detailed analysis of the existence criteria, characterization and robustness of cocycle attractors was given.

Definition 2.4

(Cui and Langa 2017) A compact random set \({\mathscr {A}}\in {\mathcal {D}}\) is said to be the (random) \({\mathcal {D}}\)-uniform attractor of an NRDS \(\phi \), if

  (i) \( {\mathscr {A}}\) is uniformly (w.r.t. \(\sigma \in \Sigma \)) \( {\mathcal {D}}\)-pullback attracting, namely, for each \(D\in {\mathcal {D}}\),

    $$\begin{aligned} \lim _{t\rightarrow \infty } \left[ \, \sup _{\sigma \in \Sigma } \mathrm{dist}_X \big (\phi (t,\vartheta _{-t}\omega , \theta _{-t}\sigma , D(\vartheta _{-t}\omega )) , {\mathscr {A}}(\omega ) \big ) \right] =0 ,\qquad \forall \omega \in \Omega ; \end{aligned}$$
  (ii) (Minimality) \({\mathscr {A}}\) is inside of any compact random set satisfying (i).

The random uniform attractor can be regarded as a random generalization of the deterministic uniform attractor (Haraux 1988; Chepyzhov and Vishik 2002) or a non-autonomous generalization of the autonomous random attractor (Crauel et al. 1997; Crauel and Flandoli 1994; Flandoli and Schmalfuss 1996). Since a uniform attractor describes the dynamics in a uniform way w.r.t. symbols in the symbol space \(\Sigma \), the attraction universe \({\mathcal {D}}\) in consideration consists of some \(\sigma \)-independent autonomous random sets. This is also the case in the deterministic uniform attractor theory, where the attraction universe is usually the collection of (autonomous) bounded sets in the phase space.

Due to the nature of random perturbations in general applications, the uniform attraction of the attractor is defined in the pullback sense, but it implies forward attraction in probability, as given below; hence a random uniform attractor also describes the forward dynamics of the NRDS \(\phi \).

Proposition 2.5

(Cui and Langa 2017) A random \({\mathcal {D}}\)-uniform attractor \({\mathscr {A}}\) is forward uniformly attracting in probability:

$$\begin{aligned} \lim _{t\rightarrow \infty } {\mathbb {P}}\left\{ \omega \in \Omega : \sup _{\sigma \in \Sigma } \mathrm{dist}_X \big ( \phi (t,\omega ,\sigma , D(\omega )), {\mathscr {A}}(\vartheta _t\omega )\big )\!>\!\varepsilon \right\} \!=\!0, \quad \forall \varepsilon >0,\ D\!\in \! {\mathcal {D}}. \end{aligned}$$

Recall that a closed random set \({\mathscr {B}}=\{{\mathscr {B}}(\omega )\}_{\omega \in \Omega }\) is said to be a uniformly \({\mathcal {D}}\)-pullback absorbing set, if for each \(D \in {\mathcal {D}}\) and \(\omega \in \Omega \) there is an absorption time \(T_D ( \omega ) \geqslant 0\) such that

$$\begin{aligned} \bigcup _{ \sigma \in \Sigma } \phi \big ( t, \vartheta _{-t} \omega , \theta _{-t} \sigma , D ( \vartheta _{-t} \omega ) \big ) \subseteq {\mathscr {B}} ( \omega ), \qquad \forall t \geqslant T_D( \omega ) . \end{aligned}$$
(2.2)

Note that the absorption time \(T_D(\omega )\) in applications is usually a random variable in \(\omega \), but for some particular random sets D it can even be a deterministic number. The latter observation is crucial for our analysis later, see condition \((H_3)\) in Theorems 3.3 and 3.6.

We then have the following existence conditions for a \({\mathcal {D}}\)-uniform attractor, together with some properties that the attractor possesses.

Theorem 2.6

(Cui and Langa 2017) Suppose that \(\phi \) is a \((\Sigma \times X, X)\)-continuous NRDS. If \(\phi \) has a compact uniformly \({\mathcal {D}}\)-pullback attracting set K and a closed uniformly \({\mathcal {D}}\)-pullback absorbing set \({\mathscr {B}}\in {\mathcal {D}} \), then it has a unique random \({\mathcal {D}}\)-uniform attractor \({\mathscr {A}}\in {\mathcal {D}}\) given by

$$\begin{aligned} {\mathscr {A}}(\omega )= \bigcap _{s\geqslant 0} \overline{ \bigcup _{t\geqslant s} \bigcup _{\sigma \in \Sigma } \phi \big (t, \vartheta _{-t} \omega , \theta _{-t} \sigma ,{\mathscr {B}}( \vartheta _{-t} \omega )\big ) } ,\qquad \forall \omega \in \Omega . \end{aligned}$$

Moreover, the following properties hold:

  (i)

    The NRDS \(\phi \) has also a \({\mathcal {D}}\)-cocycle attractor \({\mathcal {A}}=\{{\mathcal {A}}_\sigma (\cdot )\}_{\sigma \in \Sigma }\) which satisfies the relation

    $$\begin{aligned} {\mathscr {A}} (\omega ) = \displaystyle \bigcup _{\sigma \in \Sigma } {\mathcal {A}}_\sigma (\omega ), \quad \forall \omega \in \Omega , \end{aligned}$$
    (2.3)

    and, for each \(\omega \) fixed, the set-valued map \(\sigma \mapsto {\mathcal {A}}_\sigma (\omega )\) is upper semi-continuous:

    $$\begin{aligned} \mathrm{dist}_X\big ( {\mathcal {A}}_\sigma (\omega ), {\mathcal {A}}_{\sigma _0}(\omega ) \big )\rightarrow 0,\quad \text { as }\sigma \rightarrow \sigma _0\text { in }\Sigma ; \end{aligned}$$
  (ii)

    \({\mathscr {A}}\) is negatively semi-invariant in the sense that

    $$\begin{aligned} \begin{aligned} {\mathscr {A}}(\vartheta _t\omega ) \subseteq \Phi \big (t,\omega , {\mathscr {A}}(\omega )\big ), \quad \forall t\geqslant 0, \ \omega \in \Omega , \end{aligned} \end{aligned}$$
    (2.4)

    where \(\Phi \big (t,\omega ,{\mathscr {A}}(\omega )\big ) := \cup _{\sigma \in \Sigma } \phi \big (t,\omega , \sigma ,{\mathscr {A}}(\omega )\big )\) defines a multi-valued random dynamical system;

  (iii)

    \({\mathscr {A}}\) is characterized by \({\mathcal {D}}\)-complete trajectories, that is,

    $$\begin{aligned} {\mathscr {A}}(\vartheta _t\omega )=\Big \{ \xi (\vartheta _t\omega , t): \xi \text { is a }{\mathcal {D}}\text {-complete trajectory of }\phi \Big \},\quad \forall t\in {\mathbb {R}}, \omega \in \Omega , \end{aligned}$$

    where a \({\mathcal {D}}\)-complete trajectory of the NRDS \(\phi \) is a map \(\xi : \Omega \times {\mathbb {R}}\rightarrow X\) for which there exists \(\sigma \in \Sigma \) such that \(\xi (\vartheta _t \omega , t) =\phi (t-s,\vartheta _s \omega ,\theta _s \sigma , \xi ( \vartheta _s \omega , s)) \) for all \(t\geqslant s \) and \(\omega \in \Omega \), and for which there exists \(D\in {\mathcal {D}}\) such that \(\cup _{t\in {\mathbb {R}}} \xi (\cdot ,t) \subset D(\cdot )\);

  (iv)

    \({\mathscr {A}}\) is fully determined by uniformly attracting deterministic compact sets: if we denote by \({\mathfrak {A}}\) the random \({\mathfrak {D}}\)-uniform attractor of \(\phi \) with \({\mathfrak {D}}\) the collection of all non-empty compact sets in X, then

    $$\begin{aligned} {\mathbb {P}}({\mathscr {A}}={\mathfrak {A}})=1. \end{aligned}$$

Remark 2.7

The random \({\mathcal {D}}\)-uniform attractor \({\mathscr {A}}\) of a \((\Sigma \times X, X)\)-continuous NRDS is inside of any uniformly \({\mathcal {D}}\)-pullback absorbing set \({\mathscr {B}}\), since from the negative semi-invariance (2.4) of \({\mathscr {A}}\) and the uniform pullback absorption of \({\mathscr {B}}\) it follows that

$$\begin{aligned} \begin{aligned} {\mathscr {A}}(\omega ) \subseteq \Phi \big (t,\vartheta _{-t}\omega ,{\mathscr {A}}(\vartheta _{-t}\omega )\big ) \subseteq {\mathscr {B}}(\omega ), \quad \text {for }t\text { large enough}. \end{aligned} \end{aligned}$$

2.2 Conjugate Attractors and Their Structural Relationship

The idea of conjugate dynamical systems has been widely used, e.g., in transforming a stochastic PDE into a deterministic PDE with random parameters, see, e.g., Chueshov (2002), Flandoli and Lisei (2004) and Cui et al. (2016). We now study, in a more abstract framework, the attractors under such transformations.

Suppose that X and \({\tilde{X}} \) are two Banach spaces (where \(X={\tilde{X}}\) is allowed), and that \(\phi \) and \({\tilde{\phi }}\) are two NRDS with the same base flows \((\theta , \Sigma )\) and \((\vartheta , \Omega )\) in phase spaces X and \({\tilde{X}}\), respectively.

Definition 2.8

\(\phi \) and \({\tilde{\phi }}\) are said to be conjugate NRDS if there is a map \({\mathsf {T}}:\Omega \times X\rightarrow {\tilde{X}}\), called a cohomology between \(\phi \) and \({\tilde{\phi }}\), with the following properties:

  (i) The map \(x\mapsto {\mathsf {T}}(\omega ,x)\) is a homeomorphism from X onto \({\tilde{X}}\) for every \(\omega \in \Omega \);

  (ii) The maps \(\omega \mapsto {\mathsf {T}}(\omega ,x_1)\) and \(\omega \mapsto {\mathsf {T}}^{-1}(\omega ,x_2)\) are measurable for each \(x_1\in X\) and \(x_2\in {\tilde{X}}\);

  (iii) For any \( t>0,\, \omega \in \Omega , \, \sigma \in \Sigma ,\, x\in X\),

    $$\begin{aligned} \begin{aligned} {\tilde{\phi }}\big (t,\omega ,\sigma , {\mathsf {T}}(\omega ,x)\big )={\mathsf {T}}\big (\vartheta _t\omega , \phi (t,\omega ,\sigma , x)\big ) . \end{aligned} \end{aligned}$$
    (2.5)

Let \({\mathcal {D}}\) and \({\tilde{{\mathcal {D}}}}\) be collections of tempered closed random sets in X and in \({\tilde{X}}\), respectively. In the following, we will need the cohomology \({\mathsf {T}}\) to be a bijection between \({\mathcal {D}}\) and \({\tilde{{\mathcal {D}}}}\), i.e., for each \(D\in {\mathcal {D}}\) there is a unique \({\tilde{D}}\in {\tilde{{\mathcal {D}}}}\), and for each \({\tilde{D}}\in {\tilde{{\mathcal {D}}}}\) a unique \( D\in {\mathcal {D}}\), such that \({\tilde{D}}(\omega )={\mathsf {T}}(\omega , D(\omega ))\) for all \(\omega \in \Omega \). A particular example of such a cohomology \({\mathsf {T}}\) is given later in the application section, see (4.16).
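As a toy illustration of the conjugacy relation (2.5) (everything here — the linear system, the translation flow and the function z — is an illustrative stand-in, not the cohomology used later in (4.16)): take \(\phi (t,\omega ,\sigma ,x)=e^{-t}x\), \(\vartheta _t\omega =\omega +t\), and the random translation \({\mathsf {T}}(\omega ,x)=x+z(\omega )\). The conjugated system is then \({\tilde{\phi }}(t,\omega ,\sigma ,y)={\mathsf {T}}\big (\vartheta _t\omega ,\phi (t,\omega ,\sigma ,{\mathsf {T}}^{-1}(\omega ,y))\big )\), and (2.5) can be checked directly:

```python
import math

z = math.sin  # stand-in for a stationary process (e.g., an Ornstein-Uhlenbeck path)

def phi(t, omega, sigma, x):
    # Toy NRDS: exponential decay (sigma plays no role in this simple example).
    return math.exp(-t) * x

def T(omega, x):
    # Cohomology: a random translation of the phase space.
    return x + z(omega)

def phi_tilde(t, omega, sigma, y):
    # Conjugated system, defined so that relation (2.5) holds by construction.
    return math.exp(-t) * (y - z(omega)) + z(omega + t)

t, omega, sigma, x = 1.5, 0.4, 0.0, 3.0
lhs = phi_tilde(t, omega, sigma, T(omega, x))
rhs = T(omega + t, phi(t, omega, sigma, x))
print(abs(lhs - rhs))  # 0 up to rounding: both sides realize relation (2.5)
```

This mirrors how a stochastic PDE with additive noise is transformed into a random PDE: the translation by z removes the noise from the equation while conjugating the two solution operators.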

Theorem 2.9

Suppose that \(\phi \) and \({\tilde{\phi }}\) are conjugate NRDS with cohomology \({\mathsf {T}}\) satisfying (2.5), and that the cohomology \({\mathsf {T}}\) is a bijection between \({\mathcal {D}}\) and \({\tilde{{\mathcal {D}}}}\). If \(\phi \) has a \({\mathcal {D}}\)-uniform attractor \({\mathscr {A}}\) in X, then \({\tilde{\phi }}\) has a \({\tilde{{\mathcal {D}}}}\)-uniform attractor \({\tilde{{\mathscr {A}}}}\) in \({\tilde{X}}\), and vice versa. Moreover, the two attractors have the relation

$$\begin{aligned} \begin{aligned} {\tilde{{\mathscr {A}}}} (\omega )={\mathsf {T}}\big (\omega ,{\mathscr {A}}( \omega )\big ), \quad \omega \in \Omega . \end{aligned} \end{aligned}$$
(2.6)

Proof

Without loss of generality, suppose that \(\phi \) has a \({\mathcal {D}}\)-uniform attractor \({\mathscr {A}}\); we prove by definition that \({\tilde{{\mathscr {A}}}}(\omega ):= {\mathsf {T}}(\omega ,{\mathscr {A}}(\omega ))\) defines the \({\tilde{{\mathcal {D}}}}\)-uniform attractor of \({\tilde{\phi }}\). Clearly, \({\tilde{{\mathscr {A}}}}\) is compact and measurable, since \({\mathscr {A}}\) is and \({\mathsf {T}}\) is a homeomorphism. Given \({\tilde{D}}\in {\tilde{{\mathcal {D}}}}\), since \({\mathsf {T}}\) is a bijection there is a \(D\in {\mathcal {D}}\) with \({\tilde{D}}(\omega )={\mathsf {T}}(\omega ,D(\omega ))\), so

$$\begin{aligned} \begin{aligned}&\sup _{\sigma \in \Sigma }\mathrm{dist}_{{\tilde{X}}} \Big ({\tilde{\phi }}\big (t,\vartheta _{-t}\omega ,\theta _{-t}\sigma , {\tilde{D}}(\vartheta _{-t}\omega )\big ), {\tilde{{\mathscr {A}}}}(\omega ) \Big ) \\&\quad = \sup _{\sigma \in \Sigma } \mathrm{dist}_{{\tilde{X}}} \Big ({\tilde{\phi }} \big (t,\vartheta _{-t}\omega ,\theta _{-t}\sigma , {\mathsf {T}}(\vartheta _{-t}\omega , D(\vartheta _{-t}\omega )) \big ), {\tilde{{\mathscr {A}}}}(\omega ) \Big ) \\&\quad = \sup _{\sigma \in \Sigma } \mathrm{dist}_{{\tilde{X}}} \Big ( {\mathsf {T}}\Big (\omega , \phi \big (t,\vartheta _{-t}\omega ,\theta _{-t}\sigma , D(\vartheta _{-t}\omega )\big )\Big ), {\mathsf {T}}\big (\omega , {\mathscr {A}}(\omega ) \big )\Big ) \\&\quad = \mathrm{dist}_{{\tilde{X}}} \Big ({\mathsf {T}}\Big (\omega , \cup _{\sigma \in \Sigma } \phi \big (t,\vartheta _{-t}\omega ,\theta _{-t}\sigma , D(\vartheta _{-t}\omega )\big )\Big ), {\mathsf {T}}\big (\omega , {\mathscr {A}}(\omega ) \big )\Big ) \rightarrow 0 , \end{aligned} \end{aligned}$$

where the last convergence holds because \( \mathrm{dist}_{X} \big ( \cup _{\sigma \in \Sigma } \phi (t,\vartheta _{-t}\omega ,\theta _{-t}\sigma , D(\vartheta _{-t}\omega )) , {\mathscr {A}}( \omega ) \big ) \rightarrow 0 \) and \({\mathsf {T}}(\omega ,\cdot )\) is a homeomorphism. Hence, \({\tilde{{\mathscr {A}}}}\) is uniformly \({\tilde{{\mathcal {D}}}}\)-pullback attracting under \({\tilde{\phi }}\). In the same way, the minimality of \({\tilde{{\mathscr {A}}}} \) follows from that of \({\mathscr {A}}\). \(\square \)

Analogously, we have the corresponding theorem for conjugate cocycle attractors.

Theorem 2.10

Suppose that \(\phi \) and \({\tilde{\phi }}\) are conjugate NRDS with cohomology \({\mathsf {T}}\) satisfying (2.5), and that the cohomology \({\mathsf {T}}\) is a bijection between \({\mathcal {D}}\) and \({\tilde{{\mathcal {D}}}}\). If \(\phi \) has a \({\mathcal {D}}\)-cocycle attractor \({\mathcal {A}}\) in X, then \({\tilde{\phi }}\) has a \({\tilde{{\mathcal {D}}}}\)-cocycle attractor \(\tilde{{\mathcal {A}}}\) in \({\tilde{X}}\), and vice versa. Moreover, the two attractors have the relation

$$\begin{aligned} \begin{aligned} \tilde{{\mathcal {A}}}_\sigma (\omega )= {\mathsf {T}}\big (\omega ,{\mathcal {A}}_\sigma ( \omega )\big ), \quad \sigma \in \Sigma ,\, \omega \in \Omega . \end{aligned} \end{aligned}$$
(2.7)

Remark 2.11

The structural relationships (2.6) and (2.7) allow one to learn the structure of an attractor from that of its conjugate attractor. For instance, conjugate attractors share the same fractal dimension whenever the cohomology \({\mathsf {T}}(\omega ,x)\) is bi-Lipschitz in x, e.g., when it is linear in x.

3 Finite-Dimensionality of Random Uniform Attractors

In this section, we estimate the fractal dimension of random uniform attractors. Two approaches will be presented, one based on a smoothing property and the other on a squeezing property of the system. Then, in Sect. 3.3, we show that the finite-dimensionality carries over to more regular Banach spaces.

First recall that, given a precompact subset E of a Banach space X, the fractal dimension (also called box-counting dimension or capacity dimension) of E in X is defined as

$$\begin{aligned} \dim _F(E;X) :=\limsup _{\varepsilon \rightarrow 0^+} \frac{\log _2 N_X[E;\varepsilon ]}{-\log _2\varepsilon } , \end{aligned}$$

where \(N_X[E;r]\) denotes the minimal number of open balls of radius r in X that are necessary to cover E. Note that \(\dim _F(E;X)=\dim _F({\bar{E}};X)\), where \({\bar{E}}\) denotes the closure of E.
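As a purely illustrative aside (not part of the original argument), this definition can be checked numerically on the middle-third Cantor set, whose fractal dimension is \(\ln 2/\ln 3\approx 0.6309\); the helper functions below are our own sketch of the box counting:

```python
import math

def cantor_intervals(depth):
    """Intervals of the depth-th construction step of the middle-third Cantor set."""
    intervals = [(0.0, 1.0)]
    for _ in range(depth):
        nxt = []
        for a, b in intervals:
            third = (b - a) / 3.0
            nxt.append((a, a + third))   # keep the left third
            nxt.append((b - third, b))   # keep the right third
        intervals = nxt
    return intervals

def box_count(points, eps):
    """Surrogate for N_X[E; eps]: number of grid boxes of side eps meeting the set."""
    return len({math.floor(p / eps) for p in points})

# Sample the set by the left endpoints of the depth-k intervals and count
# boxes at the matched scale eps = 3**-k; the log-log ratio estimates dim_F.
k = 8
points = [a for a, _ in cantor_intervals(k)]
eps = 3.0 ** (-k)
dim_est = math.log(box_count(points, eps)) / -math.log(eps)
print(dim_est)  # close to ln 2 / ln 3 ≈ 0.6309
```

Counting grid boxes instead of minimal covers changes \(N_X[E;\varepsilon ]\) only by a bounded factor, which the limsup in the definition absorbs.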

3.1 Smoothing Approach

Now we present the first approach, based on a smoothing property of the system. This approach allows the phase space X to be merely a Banach space, but technically requires an auxiliary Banach space \(Y\subset X\) with compact embedding \(I: Y\hookrightarrow X\). The following lemma on compact Sobolev embeddings gives examples of such Banach spaces.

Lemma 3.1

(Temam 1997) Let \({\mathcal {O}} \subset {\mathbb {R}}^N\) be a \({\mathcal {C}}^1\)-domain which is bounded (or at least bounded in one direction), \(N\in {\mathbb {N}}\). Then the embedding \(W^{1,p} ({\mathcal {O}})\hookrightarrow L^{q_1}({\mathcal {O}})\) is compact for any \(q_1\in [1,\infty )\) if \(p\geqslant N\), and for any \(q_1\in [1,q)\) with \( q^{-1}=p^{-1}-N^{-1}\) if \(1\leqslant p<N\).

We will need the Kolmogorov \(\varepsilon \)-entropy of the compact embedding \(I:Y\hookrightarrow X\), also called the Kolmogorov \(\varepsilon \)-entropy of Y in X, \(\varepsilon >0\), which is defined as

$$\begin{aligned} {\mathcal {K}}_ \varepsilon (Y;X) := \log _2 N_X \big [B_Y(0,1); \varepsilon \big ] . \end{aligned}$$
(3.1)

For this Kolmogorov \(\varepsilon \)-entropy, the following estimate is useful.

Lemma 3.2

(Triebel 1978, Section 4.10.3) Let \({\mathcal {O}} \subset {\mathbb {R}}^N\) be a bounded \({\mathcal {C}}^\infty \)-domain, \(N\in {\mathbb {N}}\). If \(1<p,q<\infty \), \(s-\frac{N}{q}>-\frac{N}{p}\), and \(s>0\), then there is a constant \(\alpha >0\) such that

$$\begin{aligned} \begin{aligned} {\mathcal {K}}_\varepsilon \big (W^{s,q}({\mathcal {O}});L^p({\mathcal {O}})\big ) \leqslant \alpha \varepsilon ^{-\frac{N}{s}}. \end{aligned} \end{aligned}$$
(3.2)

As a particular case, for some \(\alpha >0\),

$$\begin{aligned} \begin{aligned} {\mathcal {K}}_\varepsilon \big (W^{1,2}({\mathcal {O}});L^2({\mathcal {O}})\big ) \leqslant \alpha \varepsilon ^{- N}. \end{aligned} \end{aligned}$$
(3.3)

Now we give our main criterion for a random uniform attractor to have finite fractal dimension. Suppose that

\((H_1)\):

the symbol space \(\Sigma \) has finite fractal dimension

$$\begin{aligned} \dim _F (\Sigma ;\Xi ) < \infty , \end{aligned}$$

and the driving system \(\{\theta _t\}_{t\in {\mathbb {R}}}\) on \(\Sigma \) is Lipschitz, satisfying

$$\begin{aligned} d_\Xi (\theta _t \sigma _1 , \theta _t \sigma _2 ) \leqslant M(t) d_\Xi (\sigma _1 , \sigma _2), \qquad \forall t \in {\mathbb {R}}, \ \sigma _1 , \sigma _2 \in \Sigma , \end{aligned}$$
(3.4)

where \(M(\cdot )\) is a function with \(1\leqslant M(t)\leqslant c_1 e^{\mu |t|}\), \(t\in {\mathbb {R}}\), for some constants \(c_1, \mu >0\);

\((H_2)\):

\(\phi \) is \((\Sigma \times X,X)\)-continuous;

\((H_3)\):

\(\phi \) has a tempered uniformly \({\mathcal {D}}\)-pullback absorbing set \({\mathscr {B}} =\{{\mathscr {B}}(\omega )\}_{\omega \in \Omega } \) which pullback absorbs itself after a deterministic period of time, i.e., there exists a deterministic time \( T_{{\mathscr {B}}} > 0\) such that for all \(t \geqslant T_{{\mathscr {B}}}\)

$$\begin{aligned} \displaystyle \bigcup _{\sigma \in \Sigma } \phi \big ( t , \vartheta _{-t}\omega , \theta _{-t}\sigma ,{\mathscr {B}}(\vartheta _{-t}\omega ) \big ) \subseteq {\mathscr {B}}(\omega ), \qquad \forall \omega \in \Omega ; \end{aligned}$$
(3.5)
\((H_4)\):

\(\phi \) is Lipschitz continuous in symbols within the absorbing set \({\mathscr {B}}\):

$$\begin{aligned}&\Vert \phi (t, \omega , \sigma _1 , u ) - \phi (t, \omega , \sigma _2 , u ) \Vert _X \\&\quad \leqslant e^{\int _0^t L(\vartheta _s\omega ) \mathrm{d}s } d_\Xi (\sigma _1 , \sigma _2), \ \forall t\geqslant T_{{\mathscr {B}}} , \, \sigma _1 , \sigma _2 \in \Sigma , \, u \in {\mathscr {B}}(\omega ), \end{aligned}$$

for a random variable \(L(\cdot ): \Omega \rightarrow {\mathbb {R}}^+\) with finite expectation \( {\mathbb {E}} (L) <\infty \);

\((H_5)\):

Y is a separable Banach space densely and compactly embedded into X, and for any \(\varepsilon >0\) the Kolmogorov \(\varepsilon \)-entropy of Y in X satisfies

$$\begin{aligned} {\mathcal {K}}_ \varepsilon (Y;X) = \log _2 N_X[B_Y(0,1); \varepsilon ] \leqslant \alpha \varepsilon ^ { -\gamma } \end{aligned}$$
(3.6)

for positive constants \(\alpha , \gamma > 0\);

\((H_6)\):

\(\phi \) is \((X,Y)\)-smoothing within the absorbing set \({\mathscr {B}}\): there exist \({\tilde{t}}\geqslant T_{{\mathscr {B}}}\) and a random variable \(\kappa (\cdot ): \Omega \rightarrow {\mathbb {R}}^+\) with finite expectation \( {\mathbb {E}} (\kappa ^\gamma ) <\infty \) such that

$$\begin{aligned} \displaystyle \sup _{\sigma \in \Sigma }\Vert \phi ( \tilde{t},\omega , \sigma , u ) \!-\! \phi ( \tilde{t}, \omega , \sigma , v ) \Vert _ Y \!\leqslant \! \kappa ( \omega ) \Vert u \!-\! v \Vert _X , \quad \forall u, v \in {\mathscr {B}} ( \omega ), \omega \!\in \! \Omega .\nonumber \\ \end{aligned}$$
(3.7)

Under these hypotheses, the random uniform attractor \({\mathscr {A}}= \{{\mathscr {A}}(\omega )\}_{\omega \in \Omega }\) of \(\phi \) has finite fractal dimension that can be bounded by a deterministic number. More precisely,

Theorem 3.3

Suppose that \(\phi \) is an NRDS in X with \({\mathcal {D}}\)-uniform attractor \({\mathscr {A}}\). If conditions \((H_1)\)–\((H_6)\) hold, then the uniform attractor \( {\mathscr {A}} \) has finite fractal dimension in X: for any \(\nu \in (0, 1)\),

$$\begin{aligned} \dim _F \big ({\mathscr {A}}( \omega );X\big ) \leqslant \frac{ 2^\gamma \alpha \, {\mathbb {E}}(\kappa ^\gamma ) }{ -\nu ^\gamma \log _2 \nu } + \left( \frac{{\mathbb {E}}(L)+\mu }{ - \ln \nu } + 1 \right) \dim _F \big (\Sigma ; \Xi \big ), \qquad \forall \omega \in \Omega .\nonumber \\ \end{aligned}$$
(3.8)

In particular, taking \(\nu = 1/2\),

$$\begin{aligned} \dim _F \big ({\mathscr {A}}( \omega );X\big ) \leqslant 4^\gamma \alpha \, {\mathbb {E}}(\kappa ^\gamma ) + \left( \frac{{\mathbb {E}}(L)+\mu }{ \ln 2} + 1 \right) \dim _F \big (\Sigma ; \Xi \big ), \qquad \forall \omega \in \Omega .\nonumber \\ \end{aligned}$$
(3.9)

Remark 3.4

Note that

  1. (i)

    The upper bound given above is deterministic and uniform w.r.t. \(\omega \in \Omega \);

  2. (ii)

    The entropy condition \((H_5)\) depends only on the spaces X and Y, and is independent of the system. Lemma 3.2 is useful for obtaining such a property;

  3. (iii)

    In condition \((H_3)\), we required a deterministic absorbing time \(T_{\mathscr {B}}\), which seems unusual in the literature. However, this requirement is naturally satisfied in a broad class of applications. In Sect. 4.3, we will show by a reaction–diffusion model that in the additive noise case the closed random absorbing set \({\mathscr {B}}\) constructed in the usual way is satisfactory, see Proposition 4.5. Multiplicative noise requires a slight modification in the construction of the absorbing set, which will be addressed in our future work.
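A standard setting in which \((H_1)\) holds is a quasi-periodic symbol space: \(\Sigma \) a torus and \(\{\theta _t\}\) the rotation flow, which is an isometry, so (3.4) holds with \(M(t)\equiv 1\) (hence \(c_1=1\) and any \(\mu >0\)). A toy one-dimensional sketch (our own illustration, with hypothetical helper names):

```python
import math

TWO_PI = 2.0 * math.pi

def circle_dist(a, b):
    """Arc-length metric on the circle R / 2*pi*Z."""
    d = abs(a - b) % TWO_PI
    return min(d, TWO_PI - d)

def theta(t, sigma, speed=1.0):
    """Driving system: rotation flow theta_t(sigma) = sigma + speed * t (mod 2*pi)."""
    return (sigma + speed * t) % TWO_PI

# The rotation flow preserves distances, so the Lipschitz bound (3.4)
# holds with M(t) identically equal to 1.
s1, s2 = 0.3, 2.0
for t in (0.0, 1.0, 17.5, -3.2):
    assert abs(circle_dist(theta(t, s1), theta(t, s2)) - circle_dist(s1, s2)) < 1e-9
print("isometry verified")
```

Here \(\dim _F(\Sigma ;\Xi )=1\), so the symbol term in (3.8) stays finite.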

Proof of Theorem 3.3

Let \(\nu \in (0, 1)\) be given and fixed, and suppose without loss of generality that \(\tilde{t} = T_{\mathscr {B}}=1\) in hypotheses \((H_3)\) and \((H_6)\). Since the random absorbing set \({\mathscr {B}} \) is tempered, we have

$$\begin{aligned} {\mathscr {B}}(\omega ) = B_X\big (x_\omega , R(\omega )\big ) \cap {\mathscr {B}}(\omega ), \end{aligned}$$

for points \(x_\omega \in {\mathscr {B}}(\omega )\) and some tempered random variable \(R(\cdot )\) satisfying (2.1). Since Y is compactly embedded into X, the unit ball \(B_Y(0,1)\) of Y can be covered by finitely many \(\frac{\nu }{2\kappa (\omega )}\)-balls in X, and we denote by \(N(\omega )\) the minimal number of such balls needed, i.e.,

$$\begin{aligned} B_Y (0,1) \subseteq \bigcup _{ i=1 }^{ N ( \omega ) } B_X \left( p_i ^ \omega , \, \frac{ \nu }{2\kappa ( \omega )} \right) , \quad p_i^\omega \in B_Y(0,1). \end{aligned}$$
(3.10)

Next, for each \(\omega \in \Omega \) and \(\sigma \in \Sigma \) we construct sets \(U^n(\omega ,\sigma ) \subseteq {\mathscr {B}}(\omega )\) by induction on \(n \in {\mathbb {N}}\) such that

$$\begin{aligned}&U^n(\omega ,\sigma ) \subseteq {\mathscr {B}}(\omega ), \end{aligned}$$
(3.11)
$$\begin{aligned}&\sharp U^n(\omega ,\sigma ) \leqslant \displaystyle \prod _{j=1}^{n} N(\vartheta _{-j}\omega ), \end{aligned}$$
(3.12)
$$\begin{aligned}&\phi \big ( n , \vartheta _{-n}\omega , \theta _{-n}\sigma , {\mathscr {B}}(\vartheta _{-n}\omega ) \big ) \subseteq \displaystyle \bigcup _{u \in U^n(\omega ,\sigma )}B_X \big ( u , R (\vartheta _{-n}\omega ) \nu ^n \big ) \cap {\mathscr {B}}(\omega ). \end{aligned}$$
(3.13)

Note that the bound (3.12) of cardinality is independent of \(\sigma \).

For \(n=1\), by the smoothing property (3.7) in hypothesis \((H_6)\) we have

$$\begin{aligned}&\phi \big ( 1 , \vartheta _{ -1 } \omega , \theta _{ - 1 } \sigma , {\mathscr {B}}(\vartheta _{ - 1 } \omega ) \big ) = \phi \Big (1 , \vartheta _{ - 1 } \omega , \theta _ { - 1 } \sigma , B_X\big ( x_{ \vartheta _{ - 1 } \omega } , R ( \vartheta _{- 1 } \omega ) \big ) \cap {\mathscr {B}} ( \vartheta _ { - 1 } \omega ) \Big ) \\&\quad \subseteq B_Y \big ( \phi ( 1 , \vartheta _{ - 1 } \omega , \theta _ { - 1 } \sigma , x_{ \vartheta _{- 1} \omega } ) , \kappa ( \vartheta _{- 1} \omega ) R ( \vartheta _{- 1} \omega ) \big ) \cap \phi \big ( 1 , \vartheta _{ - 1 } \omega , \theta _ { - 1 } \sigma , {\mathscr {B}} ( \vartheta _{ - 1 } \omega ) \big ). \end{aligned}$$

Let \(y_{\omega ,\sigma } := \phi ( 1 , \vartheta _{ - 1 } \omega , \theta _ { - 1 } \sigma , x_{ \vartheta _{- 1} \omega } )\). From (3.5) (since \(1 = \tilde{t} = T_{{\mathscr {B}}}\)) and (3.10) we note that

$$\begin{aligned} \begin{aligned}&B_Y \big ( y_{\omega ,\sigma } , \kappa ( \vartheta _ {- 1 } \omega ) R ( \vartheta _ {- 1 } \omega ) \big ) \cap \phi \big ( 1 , \vartheta _ {- 1 } \omega , \theta _{ - 1 } \sigma , {\mathscr {B}} ( \vartheta _ {- 1 } \omega ) \big ) \\&\quad \subseteq \displaystyle \bigcup _{ i = 1 }^{ N ( \vartheta _ {-1 } \omega ) } B_X \left( y_{\omega ,\sigma } + \kappa ( \vartheta _ {- 1 } \omega ) R ( \vartheta _ {- 1 } \omega ) p_{i}^{ \vartheta _ {- 1 } \omega } , \, \frac{ \kappa ( \vartheta _ {- 1 } \omega ) R ( \vartheta _ {- 1 } \omega ) \nu }{2\kappa ( \vartheta _{- 1 } \omega ) } \right) \cap {\mathscr {B}}(\omega ) \\&\quad = \displaystyle \bigcup _{ i = 1 }^{ N ( \vartheta _ {- 1 } \omega ) } B_X \left( y_ {\omega ,\sigma } + \kappa ( \vartheta _ {- 1 } \omega ) R ( \vartheta _ {- 1 } \omega ) p_{i}^{ \vartheta _ {- 1 } \omega } , \, \frac{ R ( \vartheta _ {- 1 } \omega ) \nu }{2} \right) \cap {\mathscr {B}}(\omega ) \\&\quad \subseteq \displaystyle \bigcup _{ i = 1 }^{ N ( \vartheta _ {- 1 } \omega ) } B_X \big ( q_{i}^{ \omega ,\sigma } , R ( \vartheta _ {- 1 } \omega ) \nu \big ) \cap {\mathscr {B}}(\omega ) \end{aligned} \end{aligned}$$

for some \(q_{i}^{ \omega ,\sigma } \in {\mathscr {B}}(\omega )\), and with this we have

$$\begin{aligned} \phi \big ( 1 ,&\vartheta _{- 1 } \omega , \theta _{ - 1 } \sigma , {\mathscr {B}}( \vartheta _ {- 1 } \omega ) \big ) \subseteq \bigcup _{ i = 1 }^{ N ( \vartheta _ {- 1 } \omega ) } B_X \big ( q_{i}^{ \omega ,\sigma } , R ( \vartheta _ {- 1 } \omega ) \nu \big ) \cap {\mathscr {B}}(\omega ). \end{aligned}$$

Let \(U^1(\omega ,\sigma ) := \big \{ q_i^{\omega ,\sigma } : i = 1, \ldots , N(\vartheta _{-1}\omega )\big \} \subseteq {\mathscr {B}}(\omega )\), then \(U^1(\omega ,\sigma ) \) satisfies (3.11)–(3.13) for \(n=1\).

Assuming that the sets \(U^k(\omega ,\sigma )\) have been constructed for all \(1\leqslant k\leqslant n\), \(\omega \in \Omega \) and \(\sigma \in \Sigma \), we now construct the sets \(U^{n+1}(\omega ,\sigma )\). Given \(\omega \in \Omega \) and \(\sigma \in \Sigma \), by the cocycle property of \(\phi \) we have

$$\begin{aligned}&\phi \big ( n+1 , \vartheta _{-(n+1)}\omega , \theta _{-(n+1)}\sigma , {\mathscr {B}}(\vartheta _{-(n+1)}\omega ) \big ) \\&\quad = \phi \big ( 1 , \vartheta _{-1}\omega , \theta _{-1}\sigma , \cdot \big ) \circ \phi \big ( n , \vartheta _{-(n+1)}\omega , \theta _{-(n+1)}\sigma , {\mathscr {B}}(\vartheta _{-(n+1)}\omega ) \big ) , \end{aligned}$$

and by the induction hypothesis

$$\begin{aligned}&\phi \big ( n , \vartheta _{-(n+1)}\omega , \theta _{-(n+1)}\sigma , {\mathscr {B}}(\vartheta _{-(n+1)}\omega ) \big )\\&\quad = \phi \big ( n , \vartheta _{-n}\vartheta _{-1}\omega , \theta _{-n}\theta _{-1}\sigma , {\mathscr {B}}(\vartheta _{-n}\vartheta _{-1}\omega ) \big ) \\&\quad \subseteq \displaystyle \bigcup _{u \in U^n(\vartheta _{-1} \omega ,\theta _{-1}\sigma )}B_X \big ( u , R (\vartheta _{-(n+1)}\omega ) \nu ^n \big ) \cap {\mathscr {B}}(\vartheta _{-1}\omega ), \end{aligned}$$

where \(U^n(\vartheta _{-1}\omega ,\theta _{-1}\sigma ) \subseteq {\mathscr {B}}(\vartheta _{-1}\omega )\) and \(\sharp U^n(\vartheta _{-1}\omega ,\theta _{-1}\sigma ) \leqslant \displaystyle \prod _{j=1}^{n} N\big (\vartheta _{-j}(\vartheta _{-1}\omega )\big )=\displaystyle \prod _{j=2}^{n+1} N(\vartheta _{-j}\omega )\). Moreover, for each \(u \in U^n(\vartheta _{-1}\omega ,\theta _{-1}\sigma )\), by hypothesis \((H_3)\) and the smoothing property \((H_6)\) we obtain

$$\begin{aligned}&\phi \Big ( 1, \vartheta _{-1}\omega , \theta _{-1}\sigma , B_X \big ( u , R (\vartheta _{-(n+1)}\omega ) \nu ^n \big ) \cap {\mathscr {B}}(\vartheta _{-1}\omega ) \Big ) \\&\quad \subseteq B_Y \big ( \phi ( 1, \vartheta _{-1}\omega , \theta _{-1}\sigma , u ) , \kappa (\vartheta _{-1}\omega ) R (\vartheta _{-(n+1)}\omega ) \nu ^n \big ) \cap {\mathscr {B}}(\omega ) \\&\quad \subseteq \displaystyle \bigcup _{i =1}^{ N(\vartheta _{-1}\omega ) } B_X \big (p_{i,u}^{\omega } , R (\vartheta _{-(n+1)}\omega ) \nu ^{n+1} \big ) \cap {\mathscr {B}}(\omega ) \quad \ \text {for points }p_{i,u}^{\omega } \in {\mathscr {B}}(\omega ), \end{aligned}$$

so

$$\begin{aligned}&\phi \big ( n+1 , \vartheta _{-(n+1)}\omega , \theta _{-(n+1)}\sigma , {\mathscr {B}}(\vartheta _{-(n+1)}\omega ) \big ) \\&\quad \subseteq \bigcup _{u \in U^n(\vartheta _{-1}\omega ,\theta _{-1}\sigma )} \phi \Big ( 1, \vartheta _{-1}\omega , \theta _{-1}\sigma , B_X \big ( u , R (\vartheta _{-(n+1)}\omega ) \nu ^n \big ) \cap {\mathscr {B}}(\vartheta _{-1}\omega ) \Big ) \\&\quad \subseteq \bigcup _{u \in U^n(\vartheta _{-1}\omega ,\theta _{-1}\sigma )} \displaystyle \bigcup _{i =1}^{ N(\vartheta _{-1}\omega ) } B_X \big (p_{i,u}^{\omega } , R (\vartheta _{-(n+1)}\omega ) \nu ^{n+1} \big ) \cap {\mathscr {B}}(\omega ). \end{aligned}$$

Define \(U^{n+1}(\omega ,\sigma ) := \big \{ p_{i,u}^{\omega } : u \in U^n(\vartheta _{-1}\omega ,\theta _{-1}\sigma ) ,\, 1 \leqslant i \leqslant N(\vartheta _{-1} \omega ) \big \}\). Then \(U^{n+1}(\omega ,\sigma ) \subseteq {\mathscr {B}}(\omega )\) and \(\sharp U^{n+1}(\omega ,\sigma ) \leqslant \sharp U^{n}(\vartheta _{-1}\omega ,\theta _{-1}\sigma ) \cdot N(\vartheta _{-1}\omega ) =\displaystyle \prod _{j=1}^{n+1} N(\vartheta _{-j}\omega )\). Hence, the desired sets \(\{U^n(\omega ,\sigma )\}_{n\in {\mathbb {N}}}\) are constructed.

Now, to find a finite cover of the random uniform attractor \({\mathscr {A}}\), let us decompose it using the structure (2.3). By the compactness of the symbol space \(\Sigma \), for any \(\eta > 0\) there exists a finite cover of \(\Sigma \) by \(M_\eta := N_\Xi [ \Sigma ; \eta ]\) balls of radius \(\eta \), i.e., there are centers \(\sigma _l\in \Sigma \), \(l=1,2,\ldots , M_\eta \), such that

$$\begin{aligned} \Sigma = \displaystyle \bigcup _{l=1}^{M_\eta }B_\Xi (\sigma _l , \eta ) \cap \Sigma . \end{aligned}$$
(3.14)

For each \(l=1, \ldots , M_\eta \), set

$$\begin{aligned} \Sigma _l := B_\Xi (\sigma _l , \eta ) \cap \Sigma \quad \text{ and } \quad {\mathcal {A}}_{\Sigma _l}(\omega ) := \displaystyle \bigcup _{\sigma \in \Sigma _l}{\mathcal {A}}_\sigma (\omega ), \ \ \ \omega \in \Omega , \end{aligned}$$
(3.15)

where \( {\mathcal {A}} \) is the \({\mathcal {D}}\)-cocycle attractor of \(\phi \). Then by (2.3), the random uniform attractor \({\mathscr {A}}\) is decomposed as

$$\begin{aligned} {\mathscr {A}}(\omega ) = \displaystyle \bigcup _{l=1}^{M_\eta } {\mathcal {A}}_{\Sigma _l}(\omega ), \quad \omega \in \Omega . \end{aligned}$$
(3.16)

In the following, we shall find finite covers for each \({\mathcal {A}}_{\Sigma _l}(\omega )\). Note that the constant \(M_\eta \) is independent of \(\omega \in \Omega \) and depends only on the symbol space \(\Sigma \) and the given number \(\eta \).

For each l, let \(\sigma _l \in \Sigma _l\) be given as above. Then for any \(\sigma \in \Sigma _l\), \(d_\Xi (\sigma , \sigma _l) < \eta \). From the invariance of the cocycle attractor \( \{{\mathcal {A}}_ \sigma (\cdot ) \}_{ \sigma \in \Sigma }\) under \(\phi \), by hypotheses \((H_1)\) and \((H_4)\) we claim that

$$\begin{aligned} {\mathcal {A}}_{\Sigma _l}(\omega ) \subseteq B_X \Big ( \phi \big ( n, \vartheta _{-n} \omega , \theta _{-n} \sigma _l, {\mathcal {A}}_{\theta _{-n} \Sigma _l} (\vartheta _{-n} \omega ) \big ) , M(-n) e^{\int _{-n}^0 L(\vartheta _s \omega ) \mathrm{d}s } \eta \Big )\nonumber \\ \end{aligned}$$
(3.17)

for each \(1\leqslant l \leqslant M_\eta \), \(n\in {\mathbb {N}}\) and \(\omega \in \Omega \). Indeed, if \(h \in {\mathcal {A}}_{\Sigma _l} ( \omega )\) then \(h \in {\mathcal {A}}_{ \sigma } ( \omega )\) for some \( \sigma \in \Sigma _l\). Since \( {\mathcal {A}}_{\sigma }(\omega ) = \phi \big ( n, \vartheta _{-n} \omega , \theta _{-n} \sigma , {\mathcal {A}}_{\theta _{-n} \sigma } (\vartheta _{-n} \omega ) \big ) \), we have \(h = \phi ( n, \vartheta _{-n} \omega , \theta _ {-n} \sigma , u)\) for some \( u \in {\mathcal {A}}_{\theta _{-n} \sigma } (\vartheta _{-n} \omega ) \subseteq {\mathscr {A}} ( \vartheta _{-n} \omega ) \subseteq {\mathscr {B}}(\vartheta _{-n} \omega )\). Hence,

$$\begin{aligned} \begin{aligned} \Vert h - \phi ( n , \vartheta _{-n} \omega , \theta _{-n} \sigma _l , u ) \Vert _X&= \Vert \phi (n , \vartheta _{-n} \omega , \theta _{-n} \sigma , u ) - \phi ( n, \vartheta _{-n} \omega , \theta _{-n} \sigma _l , u ) \Vert _X \\&\leqslant e^{\int _0^n L(\vartheta _{s-n}\omega ) \mathrm{d}s } d_\Xi (\theta _{-n} \sigma , \theta _{-n} \sigma _l ) \\&\leqslant e^{\int _{-n}^0 L(\vartheta _s \omega ) \mathrm{d}s } M(-n) d_\Xi (\sigma ,\sigma _l) \\&< M(-n) e^{\int _{-n}^0 L(\vartheta _s \omega ) \mathrm{d}s } \eta , \end{aligned} \end{aligned}$$

and thus (3.17) holds. Notice that, since \({\mathcal {A}}_{\theta _{-n} \Sigma _l} (\vartheta _{-n} \omega ) \subseteq {\mathscr {B}}(\vartheta _{-n} \omega )\), it follows from (3.13) that

$$\begin{aligned} N_X\Big [ \phi \big ( n , \vartheta _{-n} \omega , \theta _{-n} \sigma _l , {\mathcal {A}}_{ \theta _{-n} \Sigma _l } ( \vartheta _{-n} \omega ) \big ) ; \, R (\vartheta _{-n} \omega ) \nu ^n \Big ] \leqslant \prod _{j=1}^{n} N ( \vartheta _{-j} \omega ) , \end{aligned}$$

and then

$$\begin{aligned}&N_X \Big [ B_X \Big ( \phi \big ( n, \vartheta _{-n} \omega , \theta _{-n} \sigma _l, {\mathcal {A}}_{\theta _{-n} \Sigma _l} (\vartheta _{-n} \omega ) \big ) , M(-n) e^{\int _{-n}^0 L(\vartheta _s \omega ) \mathrm{d}s } \eta \Big ) ; R (\vartheta _{-n} \omega ) \nu ^n \\&\quad + M(-n) e^{\int _{-n}^0 L(\vartheta _s \omega ) \mathrm{d}s } \eta \Big ] \leqslant \prod _{j=1}^{n} N ( \vartheta _{-j} \omega ). \end{aligned}$$

Hence, from (3.17),

$$\begin{aligned}&N_X \Big [ {\mathcal {A}}_{\Sigma _l} ( \omega ) ; \ R (\vartheta _{-n} \omega ) \nu ^n + M(-n) e^{\int _{-n}^0 L(\vartheta _s \omega ) \mathrm{d}s } \eta \Big ]\\&\quad \leqslant \prod _{j=1}^{n} N ( \vartheta _{-j} \omega ), \qquad l=1,2,\ldots , M_\eta , \end{aligned}$$

and then by (3.16) we conclude that

$$\begin{aligned} N_X \Big [ {\mathscr {A}}( \omega ) ; \ R (\vartheta _{-n} \omega ) \nu ^n + M(-n) e^{\int _{-n}^0 L(\vartheta _s \omega ) \mathrm{d}s } \eta \Big ] \leqslant \left( \prod _{j=1}^{n} N ( \vartheta _{-j} \omega ) \right) M_{\eta }.\nonumber \\ \end{aligned}$$
(3.18)

Given \(\tau >0\), by Birkhoff’s ergodic theorem there is \(n_0 \in {\mathbb {N}}\) such that for \(n \geqslant n_0\) we have

$$\begin{aligned} e^{\int _{-n}^0 L( \vartheta _s \omega )\mathrm{d}s } \leqslant e^{( {\mathbb {E}} (L ) + \tau ) n}, \end{aligned}$$

and then from (3.18)

$$\begin{aligned} N_X \Big [ {\mathscr {A}}( \omega ) ; \ R (\vartheta _{-n} \omega ) \nu ^n + M(-n) e^{({\mathbb {E}} (L) +\tau )n} \eta \Big ] \leqslant \left( \prod _{j=1}^{n} N ( \vartheta _{-j} \omega ) \right) M_{\eta }\nonumber \\ \end{aligned}$$
(3.19)

for all \(n\geqslant n_0\) and \(\eta >0\).
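The only role of Birkhoff's theorem in this step is that the time average \(\frac{1}{n}\int _{-n}^0 L(\vartheta _s\omega )\,\mathrm{d}s\) eventually stays below \({\mathbb {E}}(L)+\tau \). For an i.i.d. surrogate of L this reduces to the strong law of large numbers, which the following toy simulation (our own, with made-up parameters) illustrates:

```python
import random

random.seed(0)

# i.i.d. surrogate of the random variable L with E(L) = 2.0.
E_L = 2.0
samples = [random.expovariate(1.0 / E_L) for _ in range(200_000)]

def time_average(n):
    """Discrete analogue of (1/n) * integral_{-n}^0 L(theta_s omega) ds."""
    return sum(samples[:n]) / n

# For large n the average is below E(L) + tau for any fixed tau > 0,
# which is how the bound e^{(E(L)+tau) n} in (3.19) arises.
print(time_average(200_000))  # close to E_L = 2.0
```

For an ergodic but non-independent driving flow, Birkhoff's theorem delivers the same almost-sure convergence of the time averages.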

In the following, we use (3.19) to establish a finite \(\varepsilon \)-cover of \({\mathscr {A}}(\omega )\) for any small \(\varepsilon >0\). Let

$$\begin{aligned} \begin{aligned} \eta _n := \displaystyle \frac{R (\vartheta _{-n}\omega ) \nu ^n }{ M(-n) e^{({\mathbb {E}} (L) + \tau )n } },\qquad n\in {\mathbb {N}}. \end{aligned} \end{aligned}$$
(3.20)

Then \(\eta _n\rightarrow 0^+\) as \(n\rightarrow \infty \), and from (3.19) we have for n sufficiently large that

$$\begin{aligned} \begin{aligned} N_X \Big [ {\mathscr {A}}( \omega ) ; \ 2R (\vartheta _{-n} \omega ) \nu ^n \Big ]&= N_X \Big [ {\mathscr {A}}( \omega ) ; \ R (\vartheta _{-n} \omega ) \nu ^n + M(-n) e^{ ({\mathbb {E}} (L) + \tau ) n} \eta _n \Big ] \\&\leqslant \left( \displaystyle \prod _{j=1}^{n} N ( \vartheta _{-j} \omega ) \right) M_{\eta _n}. \end{aligned} \end{aligned}$$

Since the random variable \(R(\cdot )\) is tempered, for any \(\varepsilon \in (0,1)\) there exists an \( n_\varepsilon \in {\mathbb {N}}\) such that

$$\begin{aligned} 2 R (\vartheta _{-{n_\varepsilon }} \omega ) \nu ^{n_\varepsilon } < \varepsilon \leqslant 2 R (\vartheta _ {-({n_\varepsilon }-1) } \omega ) \nu ^{{n_\varepsilon }-1}, \end{aligned}$$
(3.21)

and the numbers \(n_\varepsilon \) can be chosen such that \(n_\varepsilon \rightarrow \infty \) as \(\varepsilon \rightarrow 0^+\). Hence, for \(\varepsilon >0\) sufficiently small

$$\begin{aligned} \begin{aligned} N_X \big [ {\mathscr {A}}( \omega ) ; \varepsilon \big ]&\leqslant N_X \Big [ {\mathscr {A}}( \omega ) ; 2R (\vartheta _{-{n_\varepsilon }} \omega ) \nu ^{n_\varepsilon } \Big ] \\&\leqslant \left( \displaystyle \prod _{j=1}^{{n_\varepsilon }} N ( \vartheta _{-j} \omega ) \right) M_{\eta _{n_\varepsilon }}, \end{aligned} \end{aligned}$$

and then

$$\begin{aligned} \displaystyle \frac{ \log _2 N_X \big [ {\mathscr {A}} ( \omega ) ; \varepsilon \big ] }{ - \log _2 \varepsilon } \leqslant \displaystyle \frac{ \displaystyle \sum _{j=1}^{{n_\varepsilon }} \big ( \log _2 N ( \vartheta _{-j} \omega ) \big ) + \log _2 M_{ \eta _{n_\varepsilon } } }{ - \log _2 \big [ 2 R (\vartheta _ {-({n_\varepsilon }-1) } \omega ) \nu ^{{n_\varepsilon }-1} \big ] }. \end{aligned}$$
(3.22)

Now we estimate the fractal dimension of \({\mathscr {A}}(\omega )\) by studying the limit as \(\varepsilon \rightarrow 0^+\). To begin with, let us carefully handle each term on the right-hand side of (3.22) to obtain (3.25) below. Firstly, by the entropy hypothesis \((H_5)\), \({\mathcal {K}}_\epsilon (Y; X) = \log _2 N_X \big [ B_Y (0 , 1) ; \epsilon \big ] \leqslant \alpha \epsilon ^{ - \gamma }\) for all \(\epsilon >0\), so

$$\begin{aligned} \begin{aligned} \log _2 N ( \vartheta _{-j} \omega )&= \log _2 \left( N_X\! \left[ B_Y (0 , 1) ;\ \frac{ \nu }{ 2 \kappa ( \vartheta _{-j} \omega ) } \right] \right) \leqslant \displaystyle \frac{ \alpha \big (\kappa ( \vartheta _{-j} \omega )\big )^ \gamma }{ (\nu /2)^\gamma }. \end{aligned} \end{aligned}$$
(3.23)

Secondly, for any \(\beta \in (\nu , 1)\) fixed, by the temperedness of the random variable \(R (\cdot )\) again there exists \(n_1 \in {\mathbb {N}}\) such that

$$\begin{aligned} R ( \vartheta _{-n} \omega ) \left( \frac{ \nu }{\beta } \right) ^n < \displaystyle \frac{1}{2}, \quad \text{ for } \text{ all } n \geqslant n_1, \end{aligned}$$

that is,

$$\begin{aligned} 2R ( \vartheta _{-n} \omega ) \nu ^n < \beta ^n, \quad \text{ for } \text{ all } n \geqslant n_1. \end{aligned}$$
(3.24)

Hence, for all \(\varepsilon >0\) small enough such that \(n_\varepsilon \geqslant \max \{n_0; n_1 +1\} \), it follows from (3.22), (3.23) and (3.24) that

$$\begin{aligned} \begin{aligned} \displaystyle \frac{\log _2 N_X \big [ {\mathscr {A}} ( \omega ); \varepsilon \big ] }{ - \log _2 \varepsilon }&\leqslant \displaystyle \frac{ \displaystyle \sum \nolimits _{j=1}^{{n_\varepsilon }} \big ( \log _2 N ( \vartheta _{-j} \omega ) \big ) + \log _2 M_{ \eta _{n_\varepsilon } } }{ - \log _2 \big [ 2 R (\vartheta _ {-({n_\varepsilon }-1) } \omega ) \nu ^{{n_\varepsilon }-1} \big ] }\\&\leqslant \displaystyle \frac{ \frac{\alpha }{(\nu /2)^\gamma } \displaystyle \sum \nolimits _{j=1}^{n_\varepsilon } \big ( \kappa (\vartheta _{-j} \omega ) \big ) ^{ \gamma } + \log _2 M_{ \eta _ {n_\varepsilon } } }{ - \log _2 \beta ^{n_\varepsilon -1} } \\&= \displaystyle \frac{ \frac{\alpha }{(\nu /2)^\gamma } \displaystyle \sum \nolimits _{j=1}^{n_\varepsilon } \big ( \kappa (\vartheta _{-j} \omega ) \big ) ^{ \gamma } }{ -( n_\varepsilon -1) \log _2 \beta } + \frac{ \log _2 N_{ \Xi } \big [ \Sigma ; \eta _{n_\varepsilon } \big ] }{ -( n_\varepsilon -1) \log _2 \beta }. \end{aligned} \end{aligned}$$
(3.25)

Now we take the limit as \(\varepsilon \rightarrow 0^+\) (which forces \(n_\varepsilon \rightarrow \infty \) and \(\eta _{n_\varepsilon }\rightarrow 0^+\)). Since \({\mathbb {E}}(\kappa ^\gamma )<\infty \), by Birkhoff's ergodic theorem we first obtain

$$\begin{aligned} \begin{aligned}&\dim _F\!\big ( {\mathscr {A}} (\omega ) ;X\big )= \displaystyle \limsup _{\varepsilon \rightarrow 0^+} \displaystyle \frac{ \log _2 N_X [ {\mathscr {A}}( \omega ) ; \varepsilon ] }{ - \log _2 \varepsilon } \\&\quad \leqslant \limsup _{\varepsilon \rightarrow 0^+} \displaystyle \frac{ \frac{\alpha }{(\nu /2)^\gamma } \displaystyle \sum \nolimits _{j=1}^{n_\varepsilon } \big ( \kappa (\vartheta _{-j} \omega ) \big ) ^{ \gamma } }{ -( n_\varepsilon -1) \log _2 \beta } + \limsup _{\varepsilon \rightarrow 0^+}\frac{ \log _2 N_{ \Xi } \big [ \Sigma ; \eta _{n_\varepsilon } \big ] }{ -( n_\varepsilon -1) \log _2 \beta } \\&\quad = \displaystyle \frac{ \alpha {\mathbb {E}}(\kappa ^\gamma ) }{ - (\nu / 2)^\gamma \log _2 \beta } + \limsup _{\varepsilon \rightarrow 0^+}\frac{ \log _2 N_{ \Xi } \big [ \Sigma ; \eta _{n_\varepsilon } \big ] }{ -( n_\varepsilon -1) \log _2 \beta }. \end{aligned} \end{aligned}$$
(3.26)

Then, we consider the last limit in (3.26). Since by \((H_1)\) the symbol space \(\Sigma \) has finite fractal dimension in \(\Xi \) and \(\eta _{n_\varepsilon } \rightarrow 0^+\) as \(\varepsilon \rightarrow 0^+\), for any \(\chi >0\) there exists \(\varepsilon _0 =\varepsilon _0(\chi ) \in (0,1)\) such that

$$\begin{aligned} N_\Xi \big [ \Sigma ; \eta _{n_\varepsilon } \big ] \leqslant \left( \displaystyle \frac{1}{\eta _{n_\varepsilon }} \right) ^{\dim _F (\Sigma ;\Xi ) + \chi }, \quad \forall \varepsilon \leqslant \varepsilon _0. \end{aligned}$$
(3.27)

From (3.27) and the definition (3.20) of \(\eta _n\) we obtain

$$\begin{aligned} \begin{aligned} \displaystyle \frac{ \log _2 N_{ \Xi } \big [ \Sigma ; \eta _{n_\varepsilon } \big ] }{ -({n_\varepsilon }-1) \log _2 \beta }&= \displaystyle \frac{ \ln N_{ \Xi } \big [ \Sigma ; \eta _{n_\varepsilon } \big ] }{ -({n_\varepsilon }-1) \ln \beta } \qquad \big (\ln \cdot =\log _e\cdot \big ) \\&\leqslant \displaystyle \frac{ \big ( \! \dim _F (\Sigma ;\Xi ) + \chi \big ) \ln \displaystyle \frac{1}{\eta _{n_\varepsilon }} }{ -( {n_\varepsilon }-1) \ln \beta } \\&= \displaystyle \big (\! \dim _F (\Sigma ;\Xi ) + \chi \big ) \ \frac{ \ln \left[ \displaystyle \frac{ M(-{n_\varepsilon }) e^{({\mathbb {E}}(L) + \tau )n_\varepsilon } }{ R (\vartheta _{-{n_\varepsilon }} \omega ) \nu ^{n_\varepsilon } } \right] }{ -( {n_\varepsilon }-1) \ln \beta },\quad \forall \varepsilon \leqslant \varepsilon _0, \end{aligned} \end{aligned}$$

while from \((H_1)\) we have

$$\begin{aligned} \begin{aligned}&\frac{ \ln \left[ \displaystyle \frac{ M(-{n_\varepsilon }) e^{({\mathbb {E}}(L) + \tau )n_\varepsilon } }{ R (\vartheta _{-{n_\varepsilon }} \omega ) \nu ^{n_\varepsilon } } \right] }{ -( {n_\varepsilon }-1) \ln \beta } \leqslant \frac{ \ln \left[ \displaystyle \frac{ c_1e^{ ( {\mathbb {E}}(L) + \tau + \mu ) {n_\varepsilon } } }{ R ( \vartheta _{-{n_\varepsilon }} \omega )\nu ^{n_\varepsilon } } \right] }{ -({n_\varepsilon }-1) \ln \beta } \quad \big ({\text {by}~(H_1)}\big ) \\&\qquad = \frac{\ln c_1 +\big ( {\mathbb {E}}(L)+\tau +\mu \big ) {n_\varepsilon } }{ -( {n_\varepsilon }-1) \ln \beta } - \frac{ \ln R (\vartheta _{-{n_\varepsilon }} \omega ) }{ -({n_\varepsilon }-1) \ln \beta } - \frac{{n_\varepsilon }\ln \nu }{ -( {n_\varepsilon }-1) \ln \beta }, \quad \forall \varepsilon \leqslant \varepsilon _0. \end{aligned} \end{aligned}$$

Hence, since \(n_\varepsilon \rightarrow \infty \) as \(\varepsilon \rightarrow 0^+\),

$$\begin{aligned} \displaystyle \limsup _{\varepsilon \rightarrow 0^+} \displaystyle \frac{ \log _2 N_{ \Xi } [ \Sigma ; \eta _{n_\varepsilon } ] }{ -( {n_\varepsilon }-1) \log _2 \beta } \leqslant \big ( \! \dim _F (\Sigma ;\Xi ) + \chi \big ) \left( \displaystyle \frac{ {\mathbb {E}}(L)+\tau +\mu }{ - \ln \beta } + \displaystyle \frac{ \ln \nu }{ \ln \beta } \right) , \end{aligned}$$

which together with (3.26) yields

$$\begin{aligned} \begin{aligned}&\dim _F \! \big ( {\mathscr {A}} (\omega ) ;X\big ) \leqslant \displaystyle \frac{ \alpha {\mathbb {E}}(\kappa ^\gamma ) }{ - (\nu / 2)^\gamma \log _2 \beta } \\&\quad +\big (\! \dim _F (\Sigma ;\Xi ) + \chi \big ) \left( \frac{ {\mathbb {E}}(L) + \tau +\mu }{ - \ln \beta } + \displaystyle \frac{ \ln \nu }{ \ln \beta } \right) . \end{aligned} \end{aligned}$$
(3.28)

Since the estimate (3.28) holds for all \(\chi , \tau >0\) and all \(\beta \in (\nu , 1)\), we finally obtain

$$\begin{aligned} \dim _F\big ( {\mathscr {A}} (\omega ) ;X\big ) \leqslant \displaystyle \frac{ \alpha {\mathbb {E}}(\kappa ^\gamma ) }{ - (\nu / 2)^\gamma \log _2 \nu } + \dim _F (\Sigma ;\Xi ) \left( \frac{ {\mathbb {E}}(L) +\mu }{ - \ln \nu } + 1 \right) . \end{aligned}$$

\(\square \)

3.2 Squeezing Approach

Theorem 3.3 gives a criterion for the finite-dimensionality of random uniform attractors; however, the finiteness of the expectation of the coefficient \(\kappa (\omega )\) in the smoothing condition \((H_6)\) is usually not easy to verify in applications. To overcome this, we next propose an alternative method based on a squeezing condition instead. In applications the squeezing condition applies mainly to Hilbert phase spaces X, but it allows the coefficient to be an exponential in which only the exponent needs to have finite expectation (the expectation of the entire exponential need not be finite; see (S)).

We first recall the following lemma on finite coverings of balls in Euclidean spaces.

Lemma 3.5

(Debussche 1997, Lemma 1.2) Let E be a Euclidean space of algebraic dimension \(m\in {\mathbb {N}}\) and let \(R\geqslant r>0\). Then for any \(x\in E\), it holds

$$\begin{aligned} N_E \big [ B_E(x, R) ; r \big ] \leqslant k(R,r) \leqslant \left( \frac{R \sqrt{m}}{ r} + 1 \right) ^m. \end{aligned}$$

In other words, any ball in E with radius \(R >0\) can be covered by \(k(R,r)\) balls of radius \(r >0\).
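As a numerical aside (not part of Debussche's lemma itself), the covering count \(k(R,r)\) can be evaluated directly; the helper below is an illustrative sketch, with the function name ours.

```python
import math

def covering_count(R: float, r: float, m: int) -> int:
    """Integer part of (R*sqrt(m)/r + 1)^m, the bound of Lemma 3.5 on the
    number of r-balls needed to cover a ball of radius R in an
    m-dimensional Euclidean space."""
    assert R >= r > 0 and m >= 1
    return math.floor((R * math.sqrt(m) / r + 1) ** m)

# The bound grows like (R/r)^m: covering a radius-2 ball in the plane
# by unit balls needs at most covering_count(2.0, 1.0, 2) balls.
print(covering_count(2.0, 1.0, 2))
```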

Let \(\phi \) be an NRDS, \(T_{{\mathscr {B}}}>0\) be as in \((H_3)\) and suppose in addition the following squeezing property:

(S):

\(\phi \) satisfies a random uniformly squeezing property on \({\mathscr {B}}\), i.e., there exist \({\tilde{t}} \geqslant T_{\mathscr {B}}\), \(\delta \in (0,1/4)\), an m-dimensional orthogonal projection \(P: X \rightarrow PX\) (\(\mathrm{dim}(PX) = m\)) and a random variable \(\zeta (\cdot ):\Omega \rightarrow {\mathbb {R}}\) with finite expectation \({\mathbb {E}} ( \zeta ) < -\ln \, ({4\delta })\) such that

$$\begin{aligned} \displaystyle \sup _{\sigma \in \Sigma } \big \Vert P\big ( \phi ({\tilde{t}},\omega ,\sigma , u) - \phi ({\tilde{t}},\omega ,\sigma , v) \big ) \big \Vert _X \leqslant e^{\int _0^{{\tilde{t}}} \zeta (\vartheta _s\omega ) \mathrm{d}s } \Vert u-v \Vert _X \end{aligned}$$
(3.29)

and

$$\begin{aligned} \displaystyle \sup _{\sigma \in \Sigma } \big \Vert Q\big ( \phi ({\tilde{t}},\omega ,\sigma , u) - \phi ({\tilde{t}},\omega ,\sigma , v) \big ) \big \Vert _X \leqslant \delta e^{\int _0^{{\tilde{t}}} \zeta (\vartheta _s\omega ) \mathrm{d}s } \Vert u-v \Vert _X \end{aligned}$$
(3.30)

for all \(u,v \in {\mathscr {B}} (\omega )\), \(\omega \in \Omega \), where \(Q:=I-P\).

We have then the following criterion for the finite-dimensionality of random uniform attractors.

Theorem 3.6

Suppose that \(\phi \) is an NRDS in X with \({\mathcal {D}}\)-uniform attractor \({\mathscr {A}}\). If conditions \((H_1)\)–\((H_4)\) and (S) hold, then \( {\mathscr {A}} \) has finite fractal dimension in X: for any \(0<\rho < \ln \, (1/4\delta )-{\mathbb {E}} ( \zeta )\),

$$\begin{aligned} \begin{aligned}&\dim _F \big ({\mathscr {A}}( \omega );X\big ) \leqslant \frac{ 2 m \ln \big ( \frac{\sqrt{m}}{\delta } +1 \big ) }{ \rho } \\&\quad + \left( \frac{ 2\big ( {\mathbb {E}} ( L ) +\mu \big )}{ \rho } + 1 \right) \dim _F (\Sigma ; \Xi ), \quad \forall \omega \in \Omega . \end{aligned} \end{aligned}$$
(3.31)

Proof

Suppose without loss of generality that \(\tilde{t} = T_{\mathscr {B}}=1\) in hypotheses \((H_3)\) and (S). Since the random absorbing set \({\mathscr {B}} \) is tempered, we have

$$\begin{aligned} {\mathscr {B}}(\omega ) = B_X\big (x_\omega , R(\omega )\big ) \cap {\mathscr {B}}(\omega ), \end{aligned}$$

for points \(x_\omega \in {\mathscr {B}}(\omega )\) and some tempered random variable \(R(\cdot )\) satisfying (2.1).

Next, for each \(\omega \in \Omega \) and \(\sigma \in \Sigma \) we construct sets \(U^n(\omega ,\sigma ) \subseteq {\mathscr {B}}(\omega )\) by induction on \(n \in {\mathbb {N}}\) such that

$$\begin{aligned}&U^n(\omega ,\sigma ) \subseteq {\mathscr {B}}(\omega ), \end{aligned}$$
(3.32)
$$\begin{aligned}&\sharp U^n(\omega ,\sigma ) \leqslant k_0^n, \ \text {where }k_0:=\text { the integer part of } \left( \frac{\sqrt{m}}{\delta } + 1 \right) ^m , \end{aligned}$$
(3.33)
$$\begin{aligned}&\phi \big ( n , \vartheta _{-n}\omega , \theta _{-n}\sigma , {\mathscr {B}}(\vartheta _{-n}\omega ) \big ) \!\subseteq \!\!\!\displaystyle \bigcup _{u \in U^n(\omega ,\sigma )}\!\!\! B_X \Big ( u , (4\delta )^n e^{\int _{-n}^0 \zeta (\vartheta _s \omega ) \mathrm{d}s } R (\vartheta _{-n}\omega ) \Big ) \!\cap \! {\mathscr {B}}(\omega ) . \end{aligned}$$
(3.34)

Note that the inclusion (3.32) is independent of \(\sigma \) and the bound (3.33) of cardinality is independent of both \(\sigma \) and \(\omega \).

Let \(n=1\), \(\omega \in \Omega \) and \(\sigma \in \Sigma \). Since \({\dim (PX) = m}\), from Lemma 3.5 we have

$$\begin{aligned} \begin{aligned}&N_{PX}\left[ B_{PX}\! \Big ( P \phi (1,\vartheta _{-1}\omega ,\theta _{-1}\sigma , x_{\vartheta _{-1}\omega }) , e^{\int _{-1}^0 \zeta (\vartheta _s \omega ) \mathrm{d}s} R(\vartheta _{-1}\omega ) \Big ) ;\delta e^{\int _{-1}^0 \zeta (\vartheta _s \omega ) \mathrm{d}s} R(\vartheta _{-1}\omega ) \right] \\&\quad \leqslant \left( \frac{\sqrt{m}}{\delta } + 1 \right) ^m , \end{aligned} \end{aligned}$$

so there exist \(k_0\) centers \(x_{\omega , \sigma }^1 , \ldots , x_{\omega , \sigma }^{k_0} \in P({\mathscr {B}}(\omega ))\), with \(k_0\) the integer part of \( \big (\frac{\sqrt{m}}{\delta } + 1 \big )^m \) as in (3.33), such that

$$\begin{aligned} \begin{aligned}&B_{PX} \Big ( P \phi (1,\vartheta _{-1}\omega ,\theta _{-1}\sigma , x_{\vartheta _{-1}\omega }) , e^{\int _{-1}^0 \zeta (\vartheta _s \omega ) \mathrm{d}s} R(\vartheta _{-1}\omega ) \Big ) \\&\quad \subseteq \displaystyle \bigcup _{i=1}^{k_0} B_{PX} \Big ( x_{\omega , \sigma }^{i} , \delta e^{\int _{-1}^0 \zeta (\vartheta _s \omega ) \mathrm{d}s} R(\vartheta _{-1}\omega ) \Big ). \end{aligned} \end{aligned}$$

Hence, for any \(u \in {\mathscr {B}}(\vartheta _{-1}\omega )\), since by (3.29)

$$\begin{aligned} \big \Vert P\big ( \phi (1,\vartheta _{-1}\omega ,\theta _{-1}\sigma , u) - \phi (1,\vartheta _{-1}\omega ,\theta _{-1}\sigma , x_{\vartheta _{-1}\omega }) \big ) \big \Vert _X \leqslant e^{\int _{-1}^0 \zeta (\vartheta _s\omega ) \mathrm{d}s } R(\vartheta _{-1}\omega ) , \end{aligned}$$

we have

$$\begin{aligned} P \phi (1, \vartheta _{-1}\omega , \theta _{-1}\sigma , u)&\in B_{PX} \Big ( P \phi (1,\vartheta _{-1}\omega ,\theta _{-1}\sigma , x_{\vartheta _{-1}\omega }) , e^{\int _{-1}^0 \zeta (\vartheta _s \omega ) \mathrm{d}s} R(\vartheta _{-1}\omega ) \Big ) \\&\subseteq \displaystyle \bigcup _{i=1}^{k_0} B_{PX} \Big ( x_{\omega , \sigma }^{i} , \delta e^{\int _{-1}^0 \zeta (\vartheta _s \omega ) \mathrm{d}s} R(\vartheta _{-1}\omega ) \Big ), \end{aligned}$$

and for some particular \(i_0\in \{1,2,\ldots ,k_0\}\)

$$\begin{aligned} P \phi (1, \vartheta _{-1}\omega , \theta _{-1}\sigma , u) \in B_{PX} \Big ( x_{\omega , \sigma }^{i_0} , \delta e^{\int _{-1}^0 \zeta (\vartheta _s \omega ) \mathrm{d}s} R(\vartheta _{-1}\omega ) \Big ). \end{aligned}$$
(3.35)

Setting

$$\begin{aligned} y_{\omega , \sigma }^{i} := x_{\omega , \sigma }^{i} + Q \phi (1, \vartheta _{-1}\omega , \theta _{-1}\sigma , x_{\vartheta _{-1}\omega } ) \in X,\quad i=1,2,\ldots , k_0, \end{aligned}$$

we obtain from (3.30) and (3.35)

$$\begin{aligned} \begin{aligned} \Vert \phi (1, \vartheta _{-1}\omega , \theta _{-1}\sigma , u) - y_{\omega , \sigma }^{i_0}\Vert _X&\leqslant \Vert P \phi (1, \vartheta _{-1}\omega , \theta _{-1}\sigma , u) - x_{\omega ,\sigma }^{i_0} \Vert _X \\&\quad + \Vert Q \phi (1, \vartheta _{-1}\omega , \theta _{-1}\sigma , u)\\&\quad - Q \phi (1, \vartheta _{-1}\omega , \theta _{-1}\sigma , x_{\vartheta _{-1}\omega } ) \Vert _X \\&\leqslant \delta e^{\int _{-1}^0 \zeta (\vartheta _s \omega ) \mathrm{d}s} R(\vartheta _{-1}\omega ) + \delta e^{\int _{-1}^0 \zeta (\vartheta _s \omega ) \mathrm{d}s} R(\vartheta _{-1}\omega ) \\&= (2\delta ) e^{\int _{-1}^0 \zeta (\vartheta _s \omega ) \mathrm{d}s} R(\vartheta _{-1}\omega ) . \end{aligned} \end{aligned}$$

Hence, since u was taken arbitrarily in \({\mathscr {B}}(\vartheta _{-1}\omega )\),

$$\begin{aligned} \phi \big (1, \vartheta _{-1}\omega , \theta _{-1}\sigma , {\mathscr {B}}(\vartheta _{-1} \omega )\big ) \subseteq \displaystyle \bigcup _{i=1}^{k_0} B_X \Big ({y}_{\omega ,\sigma }^i , (2\delta ) e^{\int _{-1}^0 \zeta (\vartheta _s \omega ) \mathrm{d}s} R(\vartheta _{-1}\omega ) \Big ) . \end{aligned}$$

In addition, as \(\phi (1, \vartheta _{-1}\omega , \theta _{-1}\sigma , {\mathscr {B}}(\vartheta _{-1} \omega ) ) \subseteq {\mathscr {B}}(\omega ) \), we finally have

$$\begin{aligned} \phi \big (1, \vartheta _{-1}\omega , \theta _{-1}\sigma , {\mathscr {B}}(\vartheta _{-1} \omega )\big ) \subseteq \displaystyle \bigcup _{i=1}^{k_0} B_X \Big ( \tilde{y}_{\omega ,\sigma }^i , (4\delta ) e^{\int _{-1}^0 \zeta (\vartheta _s \omega ) \mathrm{d}s} R(\vartheta _{-1}\omega ) \Big ) \cap {\mathscr {B}}(\omega ) \end{aligned}$$

for some points \(\tilde{y}_{\omega ,\sigma }^i \in {\mathscr {B}}(\omega )\); indeed, whenever one of the above balls meets \({\mathscr {B}}(\omega )\), replacing its center by a point of the intersection at most doubles the radius, and balls disjoint from \({\mathscr {B}}(\omega )\) may be discarded. Let \(U^1(\omega ,\sigma ) := \big \{ \tilde{y}_{\omega ,\sigma }^i : i = 1, \ldots , k_0 \big \} \subseteq {\mathscr {B}}(\omega )\). Then, \(U^1(\omega ,\sigma ) \) satisfies (3.32)–(3.34) for \(n=1\).

Assuming that the sets \(U^k(\omega ,\sigma )\) have been constructed for all \(1\leqslant k\leqslant n\), \(\omega \in \Omega \) and \(\sigma \in \Sigma \), we now construct the sets \(U^{n+1}(\omega ,\sigma )\). Given \(\omega \in \Omega \) and \(\sigma \in \Sigma \), by the cocycle property of \(\phi \) we have

$$\begin{aligned} \begin{aligned}&\phi \big ( n+1 , \vartheta _{-(n+1)}\omega , \theta _{-(n+1)}\sigma , {\mathscr {B}}(\vartheta _{-(n+1)}\omega ) \big ) \\&\quad = \phi \big ( 1 , \vartheta _{-1}\omega , \theta _{-1}\sigma \big ) \circ \phi \big ( n , \vartheta _{-(n+1)}\omega , \theta _{-(n+1)}\sigma , {\mathscr {B}}(\vartheta _{-(n+1)}\omega ) \big ) , \end{aligned} \end{aligned}$$
(3.36)

and by the induction hypothesis

$$\begin{aligned} \begin{aligned}&\phi \big ( n , \vartheta _{-(n+1)}\omega , \theta _{-(n+1)}\sigma , {\mathscr {B}}(\vartheta _{-(n+1)}\omega ) \big ) \\&\quad = \phi \big ( n , \vartheta _{-n}\vartheta _{-1}\omega , \theta _{-n}\theta _{-1}\sigma , {\mathscr {B}}(\vartheta _{-n}\vartheta _{-1}\omega ) \big ) \\&\quad \subseteq \displaystyle \bigcup _{u \in U^n(\vartheta _{-1} \omega ,\theta _{-1}\sigma )} B_X \Big ( u , (4\delta )^n e^{\int _{-(n+1)}^{-1} \zeta (\vartheta _s \omega ) \mathrm{d}s} R (\vartheta _{-(n+1)}\omega ) \Big ) \cap {\mathscr {B}}(\vartheta _{-1}\omega ), \end{aligned} \end{aligned}$$
(3.37)

where \(U^n(\vartheta _{-1}\omega ,\theta _{-1}\sigma ) \subseteq {\mathscr {B}}(\vartheta _{-1}\omega )\) and \(\sharp U^n(\vartheta _{-1}\omega ,\theta _{-1}\sigma ) \leqslant k_0^n\). Combine (3.36) and (3.37) to obtain

$$\begin{aligned}&\phi \big ( n+1 , \vartheta _{-(n+1)}\omega , \theta _{-(n+1)}\sigma , {\mathscr {B}}(\vartheta _{-(n+1)}\omega ) \big ) \nonumber \\&\quad \subseteq \displaystyle \bigcup _{u \in U^n(\vartheta _{-1} \omega ,\theta _{-1}\sigma )} \phi \Big ( 1 , \vartheta _{-1}\omega , \theta _{-1}\sigma , B_X \Big ( u , (4\delta )^n e^{\int _{-(n+1)}^{-1} \zeta (\vartheta _s \omega ) \mathrm{d}s} R (\vartheta _{-(n+1)}\omega ) \Big )\nonumber \\&\quad \cap {\mathscr {B}}(\vartheta _{-1}\omega )\Big ). \end{aligned}$$
(3.38)

Now we cover each set on the right-hand side to obtain (3.40). For each \(u \in U^n(\vartheta _{-1}\omega ,\theta _{-1}\sigma ) \subseteq {\mathscr {B}}(\vartheta _{-1}\omega )\), we have

$$\begin{aligned}&B_{PX} \Big ( P \phi (1,\vartheta _{-1}\omega ,\theta _{-1}\sigma , u) , (4\delta )^n e^{\int _{-(n+1)}^{0} \zeta (\vartheta _s \omega ) \mathrm{d}s} R (\vartheta _{-(n+1)}\omega ) \Big ) \\&\quad \subseteq \displaystyle \bigcup _{i=1}^{k_0} B_{PX} \Big ( x_{u}^i , \delta (4\delta )^n e^{\int _{-(n+1)}^{0} \zeta (\vartheta _s \omega ) \mathrm{d}s} R (\vartheta _{-(n+1)}\omega ) \Big ), \end{aligned}$$

where \(x_u^i\in P ({\mathscr {B}}(\omega )) \) since \( \phi (1,\vartheta _{-1}\omega ,\theta _{-1}\sigma , u)\in {\mathscr {B}}(\omega )\) by \((H_3)\), and \(k_0\) is given by (3.33). Hence, for any \(v \in B_X \Big ( u , (4\delta )^n e^{\int _{-(n+1)}^{-1} \zeta (\vartheta _s \omega ) \mathrm{d}s} R (\vartheta _{-(n+1)}\omega ) \Big ) \cap {\mathscr {B}}(\vartheta _{-1}\omega ) \) we obtain by (3.29)

$$\begin{aligned}&P\phi (1, \vartheta _{-1}\omega ,\theta _{-1}\sigma , v) \\&\quad \in B_{PX} \Big ( P \phi (1,\vartheta _{-1}\omega ,\theta _{-1}\sigma , u) , (4\delta )^n e^{\int _{-(n+1)}^0 \zeta (\vartheta _s \omega ) \mathrm{d}s} R (\vartheta _{-(n+1)}\omega ) \Big ) \\&\quad \subseteq \displaystyle \bigcup _{i=1}^{k_0} B_{PX} \Big ( x_{u}^i , \delta (4\delta )^n e^{\int _{-(n+1)}^0 \zeta (\vartheta _s \omega ) \mathrm{d}s} R (\vartheta _{-(n+1)}\omega ) \Big ) \end{aligned}$$

and then

$$\begin{aligned} P\phi (1, \vartheta _{-1}\omega ,\theta _{-1}\sigma , v) \in B_{PX} \Big ( x_{u}^{i_0} , \delta (4\delta )^n e^{\int _{-(n+1)}^0 \zeta (\vartheta _s \omega ) \mathrm{d}s} R (\vartheta _{-(n+1)}\omega ) \Big ) \end{aligned}$$
(3.39)

for some \(i_0\in \{1,\ldots ,k_0\}\). Setting

$$\begin{aligned} y_u^i := x_u^i + Q \phi (1,\vartheta _{-1}\omega ,\theta _{-1}\sigma , u)\in X, \quad i=1,2,\ldots , k_0, \end{aligned}$$

we have from (3.30) and (3.39) that

$$\begin{aligned} \begin{aligned} \Vert \phi (1,\vartheta _{-1}\omega , \theta _{-1}\sigma , v) - y_u^{i_0} \Vert _X&\leqslant \Vert P \phi (1,\vartheta _{-1}\omega ,\theta _{-1}\sigma , v) - x_u^{i_0} \Vert _X \\&\quad + \Vert Q \phi (1,\vartheta _{-1}\omega ,\theta _{-1}\sigma , v)\\&\quad - Q \phi (1,\vartheta _{-1}\omega ,\theta _{-1}\sigma , u) \Vert _X \\&\leqslant \delta (4\delta )^n e^{\int _{-(n+1)}^0 \zeta (\vartheta _s \omega ) \mathrm{d}s} R (\vartheta _{-(n+1)}\omega ) \\&\quad + \delta (4\delta )^n e^{\int _{-(n+1)}^0 \zeta (\vartheta _s \omega ) \mathrm{d}s} R (\vartheta _{-(n+1)}\omega ) \\&= 2 \delta (4\delta )^n e^{\int _{-(n+1)}^0 \zeta (\vartheta _s \omega ) \mathrm{d}s} R (\vartheta _{-(n+1)}\omega ) , \end{aligned} \end{aligned}$$

so

$$\begin{aligned} \begin{aligned}&\phi \Big (1,\vartheta _{-1}\omega ,\theta _{-1}\sigma , B_X \Big ( u , (4\delta )^n e^{\int _{-(n+1)}^{-1} \zeta (\vartheta _s \omega ) \mathrm{d}s} R (\vartheta _{-(n+1)}\omega ) \Big ) \cap {\mathscr {B}}(\vartheta _{-1}\omega ) \Big ) \\&\quad \subseteq \displaystyle \bigcup _{i=1}^{k_0} B_X \Big ( y_u^i , 2\delta (4\delta )^n e^{\int _{-(n+1)}^0 \zeta (\vartheta _s \omega ) \mathrm{d}s} R (\vartheta _{-(n+1)}\omega ) \Big ) \cap {\mathscr {B}}(\omega ) \\&\quad \subseteq \displaystyle \bigcup _{i=1}^{k_0} B_X \Big ( \tilde{y}_u^i , (4\delta )^{n+1} e^{\int _{-(n+1)}^0 \zeta (\vartheta _s \omega ) \mathrm{d}s} R (\vartheta _{-(n+1)}\omega ) \Big ) \cap {\mathscr {B}}(\omega ) \end{aligned} \end{aligned}$$

for some points \(\tilde{y}_u^i \in {\mathscr {B}}(\omega )\). Hence, by (3.38) we finally conclude that

$$\begin{aligned} \begin{aligned}&\phi \big ( n+1 , \vartheta _{-(n+1)}\omega , \theta _{-(n+1)}\sigma , {\mathscr {B}}(\vartheta _{-(n+1)}\omega ) \big ) \\&\quad \subseteq \displaystyle \bigcup _{u \in U^n(\vartheta _{-1} \omega ,\theta _{-1}\sigma )} \bigcup _{i=1}^{k_0} B_X \Big ( \tilde{y}_u^i , (4\delta )^{n+1} e^{\int _{-(n+1)}^0 \zeta (\vartheta _s \omega ) \mathrm{d}s} R (\vartheta _{-(n+1)}\omega ) \Big ) \cap {\mathscr {B}}(\omega ). \end{aligned}\nonumber \\ \end{aligned}$$
(3.40)

Define \(U^{n+1}(\omega ,\sigma ) := \big \{ \tilde{y}_{u}^{i} : u \in U^n(\vartheta _{-1}\omega ,\theta _{-1}\sigma ) \text{ and } 1 \leqslant i \leqslant k_0 \big \}\). Then, \(U^{n+1}(\omega ,\sigma ) \subseteq {\mathscr {B}}(\omega )\) and \(\sharp U^{n+1}(\omega ,\sigma ) \leqslant k_0^{n+1}\). The desired sets \(\{U^n (\omega , \sigma ) \}_{n\in {\mathbb {N}}}\) are constructed.

To find a finite cover of the uniform attractor \({\mathscr {A}}\), following the idea of the proof of Theorem 3.3 we make the decomposition

$$\begin{aligned} {\mathscr {A}}(\omega ) = \displaystyle \bigcup _{l=1}^{M_\eta } {\mathcal {A}}_{\Sigma _l}(\omega ), \quad \omega \in \Omega , \end{aligned}$$
(3.41)

where, for \(\eta >0\), we denote \(M_\eta = N_\Xi [\Sigma ; \eta ]\), \(\Sigma = \cup _{l=1}^{M_\eta }\Sigma _l\), \(\Sigma _l = B_\Xi (\sigma _l , \eta ) \cap \Sigma \) and \({\mathcal {A}}_{\Sigma _l}(\omega ) = \cup _{\sigma \in \Sigma _l}{\mathcal {A}}_\sigma (\omega )\). Moreover,

$$\begin{aligned} {\mathcal {A}}_{\Sigma _l}(\omega ) \subseteq B_X \Big ( \phi \big ( n, \vartheta _{-n} \omega , \theta _{-n} \sigma _l, {\mathcal {A}}_{\theta _{-n} \Sigma _l} (\vartheta _{-n} \omega ) \big ) , M(-n) e^{ \int _{-n}^0 L(\vartheta _s \omega ) \mathrm{d}s} \eta \Big ) \end{aligned}$$

for each \(1\leqslant l \leqslant M_\eta \), \(\eta >0,\) \(n\in {\mathbb {N}}\) and \(\omega \in \Omega \), see (3.17). Given \(\tau >0\), by Birkhoff’s ergodic theorem we have, for all \(n \in {\mathbb {N}}\) sufficiently large,

$$\begin{aligned} e^{\int _{-n}^0 L( \vartheta _s \omega )\mathrm{d}s } \leqslant e^{( {\mathbb {E}} (L ) + \tau ) n}, \end{aligned}$$
(3.42)

and consequently

$$\begin{aligned} {\mathcal {A}}_{\Sigma _l}(\omega ) \subseteq B_X \Big ( \phi \big ( n, \vartheta _{-n} \omega , \theta _{-n} \sigma _l, {\mathcal {A}}_{\theta _{-n} \Sigma _l} (\vartheta _{-n} \omega ) \big ) , M(-n) e^{ ({\mathbb {E}} (L) + \tau )n } \eta \Big ).\nonumber \\ \end{aligned}$$
(3.43)

Now fix \(0<\rho < \ln \, (1/4\delta )-{\mathbb {E}} ( \zeta )\) and let \(\gamma >0\) be sufficiently small such that

$$\begin{aligned} {\mathbb {E}} ( \zeta ) + \rho + \gamma < \ln \, (1/4\delta ). \end{aligned}$$

Since \(R(\omega )\) is a tempered random variable we have for n large enough that

$$\begin{aligned} \begin{aligned} (4\delta )^n e^{\int _{-n}^0 \zeta (\vartheta _{s} \omega ) \mathrm{d}s } R(\vartheta _{-n}\omega )&\leqslant (4\delta )^n e^{ ( {\mathbb {E}} (\zeta ) + \gamma ) n } R(\vartheta _{-n}\omega ) \\&= e^{\big ( {\mathbb {E}} ( \zeta ) + \gamma +\rho - \ln (1/4\delta ) \big ) n } e^{(-\rho /2 ) n } e^{(-\rho /2 ) n } R(\vartheta _{-n}\omega ) \\&\leqslant e^{(-\rho /2 ) n }, \end{aligned} \end{aligned}$$

and so (3.34) gives

$$\begin{aligned} \phi \big ( n , \vartheta _{-n}\omega , \theta _{-n}\sigma , {\mathscr {B}}(\vartheta _{-n}\omega ) \big ) \subseteq \displaystyle \bigcup _{u \in U^n(\omega ,\sigma )} B_X \Big ( u , e^{(-\rho /2) n} \Big ) \cap {\mathscr {B}}(\omega ) . \end{aligned}$$
(3.44)

Notice that \({\mathcal {A}}_{\theta _{-n} \Sigma _l} (\vartheta _{-n} \omega ) \subseteq {\mathscr {B}}(\vartheta _{-n} \omega )\), so from (3.44) it follows for \(n\in {\mathbb {N}}\) sufficiently large that

$$\begin{aligned} N_X\Big [ \phi \big ( n , \vartheta _{-n} \omega , \theta _{-n} \sigma _l , {\mathcal {A}}_{ \theta _{-n} \Sigma _l } ( \vartheta _{-n} \omega ) \big ) ; \, e^{(-\rho /2 ) n} \Big ] \leqslant k_0^n , \end{aligned}$$

and then

$$\begin{aligned}&N_X \! \Big [ B_X \! \Big ( \phi \big ( n, \vartheta _{-n} \omega , \theta _{-n} \sigma _l, {\mathcal {A}}_{\theta _{-n} \Sigma _l} (\vartheta _{-n} \omega ) \big ) , M(-n) e^{( {\mathbb {E}} (L ) + \tau ) n} \eta \Big ) ; e^{ -(\rho /2) n } \! \\&\qquad + M(-n) e^{( {\mathbb {E}} ( L) + \tau ) n} \eta \Big ] \\&\quad \leqslant k_0^n. \end{aligned}$$

Hence, from (3.43),

$$\begin{aligned} \begin{aligned} N_X \Big [ {\mathcal {A}}_{\Sigma _l} ( \omega ) ; \ e^{(-\rho /2 ) n} + M(-n) e^{( {\mathbb {E}} (L) + \tau ) n} \eta \Big ] \leqslant k_0^n, \qquad l=1,2,\ldots , M_\eta , \end{aligned} \end{aligned}$$

and then by (3.41) we conclude that

$$\begin{aligned} N_X \Big [ {\mathscr {A}}( \omega ) ; \ e^{(-\rho /2 ) n} + M(-n) e^{( {\mathbb {E}} (L ) + \tau ) n} \eta \Big ] \leqslant k_0^n M_{\eta } \end{aligned}$$
(3.45)

for all \( \eta >0\) and \(n\in {\mathbb {N}}\) large.

In the following, we use (3.45) to construct a finite \(\varepsilon \)-cover of \({\mathscr {A}}(\omega )\) for any small \(\varepsilon >0\). Let

$$\begin{aligned} \begin{aligned} \eta _n := \displaystyle \frac{e^{(-\rho /2 ) n} }{ M(-n) e^{( {\mathbb {E}} ( L ) + \tau ) n} },\qquad n\in {\mathbb {N}}\ \text{ large }. \end{aligned} \end{aligned}$$
(3.46)

Then \(\eta _n\rightarrow 0^+\) as \(n\rightarrow \infty \), and from (3.45) we have

$$\begin{aligned} \begin{aligned} N_X \Big [ {\mathscr {A}}( \omega ) ; \ 2e^{(-\rho /2 ) n} \Big ]&= N_X \Big [ {\mathscr {A}}( \omega ) ; \ e^{(-\rho /2 ) n} + M(-n) e^{( {\mathbb {E}} (L) + \tau ) n} \eta _n \Big ] \\&\leqslant k_0^n M_{\eta _n}. \end{aligned} \end{aligned}$$

Notice that for any \(\varepsilon \in (0,1)\) there exists an \( n_\varepsilon \in {\mathbb {N}}\) such that

$$\begin{aligned} 2 e^{(-\rho /2 ) n_{\varepsilon }} < \varepsilon \leqslant 2 e^{(-\rho /2 ) (n_{\varepsilon } - 1) }, \end{aligned}$$
(3.47)

and the numbers \(n_\varepsilon \) can be chosen such that \(n_\varepsilon \rightarrow \infty \) as \(\varepsilon \rightarrow 0^+\). Hence,

$$\begin{aligned} \begin{aligned} N_X \big [ {\mathscr {A}}( \omega ) ; \varepsilon \big ]&\leqslant N_X \Big [ {\mathscr {A}}( \omega ) ; 2e^{(-\rho /2 ) n_{\varepsilon }} \Big ] \\&\leqslant k_0^{n_\varepsilon } M_{\eta _{n_\varepsilon }}, \end{aligned} \end{aligned}$$

and then \(\big (\)recall that \(M_{\eta _{n_\varepsilon }} =N_\Xi [\Sigma ;\eta _{n_\varepsilon }]\big )\)

$$\begin{aligned} \begin{aligned} \displaystyle \frac{ \ln N_X \big [ {\mathscr {A}} ( \omega ) ; \varepsilon \big ] }{ - \ln \varepsilon }&\leqslant \displaystyle \frac{ n_\varepsilon \ln k_0 + \ln N_\Xi [\Sigma ;\eta _{n_\varepsilon }] }{ - \ln \big [ 2 e^{(-\rho /2 ) (n_{\varepsilon } - 1)} \big ] } \\&= \displaystyle \frac{ n_\varepsilon \ln k_0 + \ln N_\Xi [\Sigma ;\eta _{n_\varepsilon }] }{ - \ln 2+ (\rho /2) (n_{\varepsilon } - 1) } ,\qquad \forall \varepsilon \in (0,1). \end{aligned} \end{aligned}$$
(3.48)
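As an aside, the integer \(n_\varepsilon \) in (3.47) admits the explicit choice \(n_\varepsilon = \lfloor (2/\rho )\ln (2/\varepsilon )\rfloor + 1\); the snippet below (an illustrative sketch, with names ours) checks both inequalities of (3.47) numerically.

```python
import math

def n_eps(eps: float, rho: float) -> int:
    """Smallest integer n with 2*exp(-(rho/2)*n) < eps, as in (3.47)."""
    assert 0 < eps < 1 and rho > 0
    return math.floor((2.0 / rho) * math.log(2.0 / eps)) + 1

# Check (3.47): 2 e^{-(rho/2) n} < eps <= 2 e^{-(rho/2)(n-1)};
# n_eps grows (to infinity) as eps decreases to 0.
rho = 0.8
for eps in (0.5, 0.1, 1e-3, 1e-6):
    n = n_eps(eps, rho)
    assert 2 * math.exp(-(rho / 2) * n) < eps <= 2 * math.exp(-(rho / 2) * (n - 1))
```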

Since by \((H_1)\) the symbol space \(\Sigma \) has finite fractal dimension in \(\Xi \) and \(\eta _{n_\varepsilon } \rightarrow 0^+\) as \(\varepsilon \rightarrow 0^+\), for any \(\chi >0\) there exists \(\varepsilon _0 =\varepsilon _0(\chi ) \in (0,1)\) such that

$$\begin{aligned} N_\Xi \big [ \Sigma ; \eta _{n_\varepsilon } \big ] \leqslant \left( \displaystyle \frac{1}{\eta _{n_\varepsilon }} \right) ^{\dim _F (\Sigma ;\Xi ) + \chi }, \quad \forall \varepsilon \leqslant \varepsilon _0. \end{aligned}$$
(3.49)

From (3.49) and the definition (3.46) of \(\eta _n\), we obtain

$$\begin{aligned} \begin{aligned} \displaystyle \frac{ \ln N_{ \Xi } \big [ \Sigma ; \eta _{n_\varepsilon } \big ] }{ - \ln 2+ (\rho /2) (n_{\varepsilon } - 1) }&\leqslant \displaystyle \frac{ \big ( \! \dim _F (\Sigma ;\Xi ) + \chi \big ) \ln \displaystyle \frac{1}{\eta _{n_\varepsilon }} }{ - \ln 2+ (\rho /2) (n_{\varepsilon } - 1) } \\&= \displaystyle \big (\! \dim _F (\Sigma ;\Xi ) \!+\! \chi \big ) \ \frac{ \ln \left[ \displaystyle \frac{ M(-{n_\varepsilon }) e^{( {\mathbb {E}} (L ) \!+\! \tau ) n_\varepsilon } }{ e^{-(\rho /2) n_\varepsilon } } \right] }{ - \ln 2\!+\! (\rho /2) (n_{\varepsilon } \!-\! 1) },\quad \forall \varepsilon \!\leqslant \! \varepsilon _0, \end{aligned} \end{aligned}$$

while from \((H_1)\) we have

$$\begin{aligned} \begin{aligned} \frac{ \ln \left[ \displaystyle \frac{ M(-{n_\varepsilon }) e^{( {\mathbb {E}} ( L ) + \tau ) n_\varepsilon } }{ e^{-(\rho /2) n_\varepsilon } } \right] }{ - \ln 2+ (\rho /2) (n_{\varepsilon } - 1) }&\leqslant \frac{ \ln \left[ \displaystyle \frac{ c_1e^{( {\mathbb {E}} ( L ) +\tau +\mu ) {n_\varepsilon } } }{ e^{-(\rho /2) n_\varepsilon } } \right] }{ - \ln 2+ (\rho /2) (n_{\varepsilon } - 1) } \quad \big ({\text {by}~(H_1)}\big ) \\&= \frac{\ln c_1 +\big ( {\mathbb {E}} ( L ) + \tau +\mu \big ) {n_\varepsilon } }{ - \ln 2+ (\rho /2) (n_{\varepsilon } - 1) } + \frac{ (\rho /2)n_\varepsilon }{- \ln 2+ (\rho /2) (n_{\varepsilon } - 1) } \end{aligned} \end{aligned}$$

for all \(\varepsilon \leqslant \varepsilon _0\). Hence, since \(n_\varepsilon \rightarrow \infty \) as \(\varepsilon \rightarrow 0^+\),

$$\begin{aligned} \displaystyle \limsup _{\varepsilon \rightarrow 0^+} \displaystyle \frac{ \ln N_{ \Xi } [ \Sigma ; \eta _{n_\varepsilon } ] }{ - \ln 2+ (\rho /2) (n_{\varepsilon } - 1) } \leqslant \big ( \! \dim _F (\Sigma ;\Xi ) + \chi \big ) \left( \displaystyle \frac{2\big ( { {\mathbb {E}} (L ) + \tau } +\mu \big )}{ \rho } + 1 \right) . \end{aligned}$$

Therefore, by taking the limit in (3.48) as \(\varepsilon \rightarrow 0^+\) we conclude that

$$\begin{aligned} \begin{aligned} \dim _F\big ( {\mathscr {A}} (\omega ) ;X\big )&= \displaystyle \limsup _{\varepsilon \rightarrow 0^+} \displaystyle \frac{ \ln N_X [ {\mathscr {A}}( \omega ) ; \varepsilon ] }{ - \ln \varepsilon } \\&\leqslant \displaystyle \frac{ 2\ln k_0 }{ \rho } +\big (\! \dim _F (\Sigma ;\Xi ) + \chi \big ) \left( \frac{ 2\big ( { {\mathbb {E}} (L) + \tau } +\mu \big ) }{ \rho } + 1 \right) . \end{aligned} \end{aligned}$$
(3.50)

Since the estimate (3.50) holds for all \(\chi , \tau >0\) we finally obtain

$$\begin{aligned} \dim _F\big ( {\mathscr {A}} (\omega ) ;X\big ) \leqslant \displaystyle \frac{ 2 \ln k_0 }{ \rho } + \dim _F (\Sigma ;\Xi ) \left( \frac{ 2\big ( {\mathbb {E}}(L) + \mu \big )}{ \rho } + 1 \right) , \end{aligned}$$

which implies (3.31) since \(k_0\leqslant \big ( \frac{\sqrt{m}}{\delta } + 1 \big )^m\) by definition (3.33). \(\square \)
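To see the size of the estimate, the right-hand side of (3.31) is easy to evaluate once the quantities involved are known. The sketch below uses purely illustrative values for \(m\), \(\delta \), \(\rho \), \({\mathbb {E}}(L)\), \(\mu \) and \(\dim _F(\Sigma ;\Xi )\); none comes from a concrete system.

```python
import math

def dim_bound(m: int, delta: float, rho: float, EL: float, mu: float,
              dim_sigma: float) -> float:
    """Right-hand side of (3.31):
    2m ln(sqrt(m)/delta + 1)/rho + (2(E(L)+mu)/rho + 1) * dim_F(Sigma; Xi)."""
    assert 0 < delta < 0.25 and rho > 0  # as required by (S) and Theorem 3.6
    squeezing_term = 2 * m * math.log(math.sqrt(m) / delta + 1) / rho
    symbol_term = (2 * (EL + mu) / rho + 1) * dim_sigma
    return squeezing_term + symbol_term

# Illustrative values only; in the theorem rho must also satisfy
# rho < ln(1/(4*delta)) - E(zeta).
print(dim_bound(m=10, delta=0.1, rho=0.5, EL=0.3, mu=0.2, dim_sigma=1.0))
```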

3.3 Fractal Dimension in more Regular Spaces

By Theorems 3.3 and 3.6, we have established the finite-dimensionality of the random uniform attractor \({\mathscr {A}}= \{{\mathscr {A}}(\omega )\}_{\omega \in \Omega }\) in the phase space X. Now we are interested in improving the finite-dimensionality to a more regular space \(Y\subset X\). To be more general, in the following we study the problem in a Banach space Z, of which \(Z=Y\) is a particular case.

Let \((Z,\Vert \cdot \Vert _Z)\) be a Banach space and suppose that the NRDS \(\phi \) takes values in Z, i.e., for each \(u\in X\), \(\omega \in \Omega \) and \(\sigma \in \Sigma \) we have \(\phi (t,\omega ,\sigma ,u)\in Z\) for \(t >0\). Suppose also that the random uniform attractor is such that \({\mathscr {A}}(\omega ) \subseteq X\cap Z\) for all \(\omega \in \Omega \). In the following, we shall show that, under a \((\Sigma \times X , Z)\)-smoothing property, the fractal dimension of the random uniform attractor in Z can be bounded by its dimension in X plus the dimension of the symbol space \(\Sigma \) in \(\Xi \).

The \((\Sigma \times X , Z)\)-smoothing condition we need is as follows:

\((H_7)\):

There is a \({\bar{t}} > 0\) such that for some positive constants \(\delta _1 , \delta _2 >0\) and a random variable \({\bar{L}}(\omega ) >0\) it holds

$$\begin{aligned} \Vert \phi ( {\bar{t}} , \omega , \sigma _1 , u ) - \phi ( {\bar{t}} , \omega , \sigma _2 , v ) \Vert _Z \leqslant \bar{L} (\omega ) \Big [ \big (d_\Xi (\sigma _1 , \sigma _2)\big )^{\delta _1} + \Vert u - v \Vert _X^{\delta _2} \Big ], \end{aligned}$$

for all \(\sigma _1 , \sigma _2 \in \Sigma \), \(u, v \in {\mathscr {A}}(\omega )\), \(\omega \in \Omega \).

Remark 3.7

(i) The random variable \(\bar{L} (\omega )\) in \((H_7)\) does not need to have finite expectation.

(ii) Even in the case \(Z=Y\), \((H_7)\) is not implied by \((H_4)\) and \((H_6)\), and vice versa. Nevertheless, \((H_7)\) is often easier to verify in applications, since the powers \(\delta _1\) and \(\delta _2\) are allowed, whereas no powers are allowed in \((H_6)\). In fact, it is open whether or not Theorem 3.3 can be established using a weaker version of \((H_6)\) in which the right-hand side of (3.7) is allowed to carry some power \(\delta \in (0,1]\).

Theorem 3.8

Let \(\phi \) be an NRDS which is \((\Sigma \times X , X)\)-continuous and has a \({\mathcal {D}}\)-uniform attractor \({\mathscr {A}}\subseteq X\cap Z\). Suppose that \({\mathscr {A}}\) has finite fractal dimension in X, i.e., \(\dim _F({\mathscr {A}}(\omega ) ;X)< c(\omega ) < \infty \). Then if \((H_1)\) and \((H_7)\) are satisfied, \({\mathscr {A}}\) has finite fractal dimension in Z as well:

$$\begin{aligned} \mathrm{dim}_F \big ( {\mathscr {A}}(\omega ) ; Z \big ) \leqslant \frac{1}{\delta _1} \mathrm{dim}_F \big ( \Sigma ; \Xi \big ) + \frac{1}{\delta _2}\mathrm{dim}_F \big ( {\mathscr {A}}(\vartheta _{-{\bar{t}}} \omega ) ; X \big ), \quad \omega \in \Omega . \end{aligned}$$

Remark 3.9

Notice that

  1. (i)

    Z does not have to be a subset of X and no embedding from Z into X was required, so the theorem applies to the case of, e.g., \(X=L^2({\mathbb {R}})\) and \(Z=L^p({\mathbb {R}})\) with \(p>2\);

  2. (ii)

    If the fractal dimension of \({\mathscr {A}}\) in X is uniformly (w.r.t. \(\omega \in \Omega \)) bounded, i.e., \(\mathrm{dim}_F \big ( {\mathscr {A}}(\omega ) ; X \big ) \leqslant d\) for all \(\omega \in \Omega \), where d is a deterministic constant, then the fractal dimension of \({\mathscr {A}}\) in Z is also uniformly bounded by a deterministic number: \(\mathrm{dim}_F \big ( {\mathscr {A}}(\omega ) ; Z \big ) \leqslant \frac{1}{\delta _1} \mathrm{dim}_F \big ( \Sigma ; \Xi \big ) + \frac{d}{\delta _2} . \)
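Combining the two ingredients of (ii) is elementary arithmetic; a hedged sketch (function name ours):

```python
def dim_bound_Z(dim_sigma: float, d: float, delta1: float, delta2: float) -> float:
    """Uniform bound of Remark 3.9(ii):
    dim_F(A(omega); Z) <= dim_F(Sigma; Xi)/delta1 + d/delta2."""
    assert delta1 > 0 and delta2 > 0  # Hoelder exponents of (H_7)
    return dim_sigma / delta1 + d / delta2

# Smaller Hoelder exponents in (H_7) give a weaker (larger) bound.
print(dim_bound_Z(dim_sigma=1.0, d=5.0, delta1=0.5, delta2=1.0))
```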

Proof of Theorem 3.8

For any \(\varepsilon \in (0,1)\) let \(M_\varepsilon := N_\Xi [\Sigma ; \varepsilon ]\). Then, there exists a sequence \(\{\sigma _l\}_{l=1}^{M_{\varepsilon ^{1/\delta _1}}}\) of centers in \(\Sigma \) such that \(\Sigma = \bigcup _{l=1}^{M_{\varepsilon ^{1/{\delta _1}}} } \Sigma _l\), where \(\Sigma _l := B_\Xi \big ( \sigma _l , \varepsilon ^{ 1/\delta _1 } \big ) \cap \Sigma \).

Let \(\omega \in \Omega \). Since \({\mathscr {A}}(\omega )\) is compact in X we have

$$\begin{aligned} {\mathscr {A}}(\omega ) = \displaystyle \bigcup _{i=1}^{N_X[ {\mathscr {A}}(\omega ); \varepsilon ^{1/\delta _2} ]} B_X \big (x_i^{\omega } , \varepsilon ^{1/\delta _2} \big )\cap {\mathscr {A}}(\omega ), \qquad x_i^{\omega } \in {\mathscr {A}}(\omega ), \end{aligned}$$

and by the negative semi-invariance of \({\mathscr {A}}\) (see (2.4)) we obtain

$$\begin{aligned} \begin{aligned} {\mathscr {A}}(\omega )&\subseteq \displaystyle \bigcup _{\sigma \in \Sigma } \phi \big ( {\bar{t}} , \vartheta _{- {\bar{t}}} \omega , \theta _{- {\bar{t}}} \sigma , {\mathscr {A}}(\vartheta _{- {\bar{t}}} \omega ) \big ) \\&= \displaystyle \bigcup _{l=1}^{M_{\varepsilon ^{ 1/ \delta _1} }} \bigcup _{\sigma \in \Sigma _l} \phi \big ( {\bar{t}} , \vartheta _{- {\bar{t}}} \omega , \theta _{-{\bar{t}}} \sigma , {\mathscr {A}}(\vartheta _{-{\bar{t}}} \omega ) \big ) \\&= \displaystyle \bigcup _{l=1}^{M_{\varepsilon ^{ 1/ \delta _1} }} \phi \big ( {\bar{t}} , \vartheta _{-{\bar{t}}} \omega , \theta _{-{\bar{t}}} \Sigma _l , {\mathscr {A}}(\vartheta _{-{\bar{t}}} \omega ) \big ) \\&= \displaystyle \bigcup _{l=1}^{M_{\varepsilon ^{ 1/ \delta _1} }} \bigcup _{i=1}^{N_X[ {\mathscr {A}}(\vartheta _{-{\bar{t}}}\omega ); \varepsilon ^{1/\delta _2} ]} \!\!\phi \Big ( {\bar{t}} , \vartheta _{-{\bar{t}}} \omega , \theta _{-{\bar{t}}} \Sigma _l , B_X \big (x_i^{\vartheta _{-{\bar{t}}}\omega } , \varepsilon ^{1/\delta _2} \big )\cap {\mathscr {A}}(\vartheta _{-{\bar{t}}}\omega ) \Big ), \end{aligned} \end{aligned}$$
(3.51)

where \(\phi \big ( {\bar{t}} , \vartheta _{-{\bar{t}}} \omega , \theta _{-{\bar{t}}} \Sigma _l , {\mathscr {A}}(\vartheta _{-{\bar{t}}} \omega ) \big ) := \cup _{\sigma \in \Sigma _l} \phi \big ( {\bar{t}} , \vartheta _{- {\bar{t}}} \omega , \theta _{-{\bar{t}}} \sigma , {\mathscr {A}}(\vartheta _{-{\bar{t}}} \omega ) \big )\).

Let \(u_1, u_2 \in \phi \Big ( {\bar{t}} , \vartheta _{-{\bar{t}}} \omega , \theta _{-{\bar{t}}} \Sigma _l , B_X \big (x_i^{\vartheta _{-{\bar{t}}}\omega } , \varepsilon ^{1/\delta _2} \big )\cap {\mathscr {A}}(\vartheta _{- {\bar{t}}}\omega ) \Big )\). Then, \(u_1 = \phi \big ( {\bar{t}} , \vartheta _{-{\bar{t}}} \omega , \theta _{-{\bar{t}}} \sigma _1 ,v_1 \big )\) and \(u_2 = \phi \big ( {\bar{t}} , \vartheta _{-{\bar{t}}} \omega , \theta _{-{\bar{t}}} \sigma _2 ,v_2 \big )\) for some \(\sigma _1 , \sigma _2 \in \Sigma _l\) and \(v_1 , v_2 \in B_X \big (x_i^{\vartheta _{-{\bar{t}}}\omega } , \varepsilon ^{1/\delta _2} \big )\cap {\mathscr {A}}(\vartheta _{-{\bar{t}}}\omega ),\) and

$$\begin{aligned} \Vert u_1 - u_2\Vert _Z&= \Vert \phi \big ( {\bar{t}} , \vartheta _{-{\bar{t}}} \omega , \theta _{-{\bar{t}}} \sigma _1 ,v_1 \big ) - \phi \big ( {\bar{t}} , \vartheta _{-{\bar{t}}} \omega , \theta _{-{\bar{t}}} \sigma _2 ,v_2 \big ) \Vert _Z \\&\leqslant \bar{L} ( \vartheta _{-{\bar{t}}} \omega ) \Big [ \big (d_\Xi (\theta _{-{\bar{t}}}\sigma _1 ,\theta _{-{\bar{t}}} \sigma _2)\big )^{\delta _1} + \Vert v_1 - v_2 \Vert _X^{\delta _2} \Big ] \ \quad (\text {by}~(H_7))\\&\leqslant \bar{L} ( \vartheta _{-{\bar{t}}} \omega ) \Big [ M(-{\bar{t}})^{\delta _1} \big (d_\Xi (\sigma _1 ,\sigma _2)\big )^{\delta _1} + \Vert v_1 - v_2 \Vert _X^{\delta _2} \Big ] \ \quad (\text {by }(3.4)) \\&\leqslant \bar{L} ( \vartheta _{-{\bar{t}}} \omega ) \Big [ M(-{\bar{t}})^{\delta _1}2^{\delta _1}\varepsilon + 2^{\delta _2} \varepsilon \Big ] . \end{aligned}$$

Let \(r({\bar{t}}, \omega , \delta _1 , \delta _2) := \bar{L} ( \vartheta _{-{\bar{t}}} \omega ) \big [ M(-{\bar{t}})^{\delta _1}2^{\delta _1} + 2^{\delta _2} \big ]\). Then,

$$\begin{aligned} \mathrm{diam}_Z \Big ( \phi \big ({\bar{t}} , \vartheta _{-{\bar{t}}} \omega , \theta _{-{\bar{t}}}\Sigma _l , B_X \big ( x_i^{\vartheta _{-{\bar{t}}}\omega } , \varepsilon ^{1/\delta _2} \big ) \cap {\mathscr {A}}( \vartheta _{-{\bar{t}}}\omega ) \big ) \Big ) \leqslant r({\bar{t}}, \omega , \delta _1 , \delta _2) \varepsilon ,\nonumber \\ \end{aligned}$$
(3.52)

for all \( l = 1 , \ldots , M_{\varepsilon ^{1 / \delta _1}} \) and \( i = 1 , \ldots , N_X \big [ {\mathscr {A}}(\vartheta _{-{\bar{t}}}\omega ) ; \varepsilon ^{1/\delta _2} \big ] \), so by (3.51) we obtain

$$\begin{aligned} N_Z \big [ {\mathscr {A}}(\omega ) ; r({\bar{t}}, \omega , \delta _1 , \delta _2) \varepsilon \big ]&\leqslant M_{\varepsilon ^{1 / \delta _1}} \cdot N_X \big [ {\mathscr {A}}(\vartheta _{-{\bar{t}}}\omega ) ; \varepsilon ^{1/\delta _2} \big ] \\&= N_\Xi \big [ \Sigma ; \varepsilon ^{1 / \delta _1} \big ] \cdot N_X \big [ {\mathscr {A}}(\vartheta _{-{\bar{t}}}\omega ) ; \varepsilon ^{1/\delta _2} \big ] . \end{aligned}$$

Since \(r({\bar{t}}, \omega , \delta _1, \delta _2)\) is independent of \(\varepsilon \) and \({\mathscr {A}}(\vartheta _{-{\bar{t}}}\omega )\) is finite dimensional in X, taking the limit as \(\varepsilon \rightarrow 0^+\) in the standard way we conclude

$$\begin{aligned} \mathrm{dim}_F \big ( {\mathscr {A}}(\omega ) ; Z \big ) \leqslant \frac{1}{\delta _1} \mathrm{dim}_F \big ( \Sigma ; \Xi \big ) + \frac{1}{\delta _2}\mathrm{dim}_F \big ( {\mathscr {A}}(\vartheta _{-{\bar{t}}}\omega ) ; X \big ) < \infty , \quad \forall \omega \in \Omega . \end{aligned}$$

\(\square \)
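The limiting step invoked in the proof is standard; for the reader's convenience we sketch it here (this sketch is ours), writing \(r := r({\bar{t}}, \omega , \delta _1 , \delta _2)\) and using that the covering estimate holds at scale \(r\varepsilon \) for every small \(\varepsilon >0\):

$$\begin{aligned} \mathrm{dim}_F \big ( {\mathscr {A}}(\omega ) ; Z \big )&= \limsup _{\varepsilon \rightarrow 0^+} \frac{\log N_Z \big [ {\mathscr {A}}(\omega ) ; r \varepsilon \big ]}{\log \big (1/(r\varepsilon )\big )} \\&\leqslant \limsup _{\varepsilon \rightarrow 0^+} \frac{\log N_\Xi \big [ \Sigma ; \varepsilon ^{1/\delta _1} \big ]}{\log (1/\varepsilon ) - \log r} + \limsup _{\varepsilon \rightarrow 0^+} \frac{\log N_X \big [ {\mathscr {A}}(\vartheta _{-{\bar{t}}}\omega ) ; \varepsilon ^{1/\delta _2} \big ]}{\log (1/\varepsilon ) - \log r} \\&= \frac{1}{\delta _1} \limsup _{\varepsilon \rightarrow 0^+} \frac{\log N_\Xi \big [ \Sigma ; \varepsilon ^{1/\delta _1} \big ]}{\log \big (1/\varepsilon ^{1/\delta _1}\big )} + \frac{1}{\delta _2} \limsup _{\varepsilon \rightarrow 0^+} \frac{\log N_X \big [ {\mathscr {A}}(\vartheta _{-{\bar{t}}}\omega ) ; \varepsilon ^{1/\delta _2} \big ]}{\log \big (1/\varepsilon ^{1/\delta _2}\big )} , \end{aligned}$$

since \(\log r\) is independent of \(\varepsilon \) and \(\log \big (1/\varepsilon ^{1/\delta _i}\big ) = \frac{1}{\delta _i}\log (1/\varepsilon )\); the right-hand side equals \(\frac{1}{\delta _1} \mathrm{dim}_F (\Sigma ;\Xi ) + \frac{1}{\delta _2} \mathrm{dim}_F \big ({\mathscr {A}}(\vartheta _{-{\bar{t}}}\omega );X\big )\).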

3.4 Finite-Dimensional Symbol Space \(\Sigma \) of Continuous Functions

In condition \((H_1)\), the symbol space \(\Sigma \) is required to be finite dimensional, which is itself a technical issue in real applications. In fact, even in the deterministic theory the uniform attractor \({\mathscr {A}}\) and the compact symbol space \(\Sigma \) are closely related, as can be seen, for instance, from the representation

$$\begin{aligned} {\mathscr {A}}=\bigcup _{\sigma \in \Sigma } A(\sigma ), \end{aligned}$$

where \(\{A(\sigma )\}_{\sigma \in \Sigma }\) forms the cocycle attractor of the underlying system (Bortolan et al. 2014). This structure presents the uniform attractor \({\mathscr {A}}\) as the image of the map \(\pi : \Sigma \rightarrow {\mathscr {A}}, \sigma \mapsto A(\sigma )\). In the simple case where \(\pi \) is single-valued, i.e., each \(A(\sigma )\) is a single point in the phase space, the fractal dimension of \({\mathscr {A}}\) can equal that of the symbol space when, e.g., \(\pi \) is Lipschitz continuous. For this reason, we do not expect a general finite-dimensionality result for uniform attractors with infinite-dimensional symbol spaces.

Now, from the application point of view, we present some conditions that ensure that a symbol space is finite dimensional. Note that for a non-autonomous evolution equation with a time-dependent term g (called the (non-autonomous) symbol of the equation), the symbol space \(\Sigma \) is often taken to be the hull \({\mathcal {H}}(g)\) of g, which contains all time translations of g. More precisely, for \(g\in \Xi \) with \(\Xi \) a complete metric space,

$$\begin{aligned} \Sigma := {\mathcal {H}}(g) =\overline{ \big \{\theta _sg(\cdot ):s\in {\mathbb {R}}\big \}}^{d_\Xi } , \end{aligned}$$
(3.53)

and \(\theta _s:\Xi \rightarrow \Xi \) are translation operators on \(\Xi \):

$$\begin{aligned} \theta _s \xi (\cdot ) =\xi (s+\cdot ), \quad s\in {\mathbb {R}},\, \xi \in \Xi . \end{aligned}$$

In our previous work (Cui et al. 2021), taking \(\Xi \) as the Fréchet space of continuous functions, we gave conditions on a function \(g\in \Xi \) that ensure the hull of g to have finite fractal dimension, thereby weakening the quasi-periodicity condition required by Chepyzhov and Vishik (2002) in applications. We now briefly recall the main results, since they indicate the scope of applications of our theorem. We begin with the two preliminary concepts of almost periodic and quasi-periodic functions; the reader is referred to, e.g., Amerio and Prouse (1971) and Chepyzhov and Vishik (2002).

Let \(({\mathscr {X}},d_{\mathscr {X}})\) be a complete metric space and \(\xi (\cdot ): {\mathbb {R}}\rightarrow {\mathscr {X}}\) a continuous map. For any \(\varepsilon >0\), a number \(\tau \in {\mathbb {R}}\) is said to be an \(\varepsilon \)-period of \(\xi \) if

$$\begin{aligned} \sup _{s\in {\mathbb {R}}} d_{\mathscr {X}}\big (\xi (s+\tau ),\xi (s)\big )\leqslant \varepsilon . \end{aligned}$$

If for any \(\varepsilon >0\) the \(\varepsilon \)-periods of the function \(\xi \) form a relatively dense set in \({\mathbb {R}}\), i.e., there is a number \(l=l(\varepsilon )>0\) such that for any \(\alpha \in {\mathbb {R}}\) the interval \([\alpha ,\alpha +l]\) contains an \(\varepsilon \)-period \(\tau \) of \(\xi \), then \(\xi \) is said to be almost periodic. Note that for an almost periodic function \(\xi \), the set of values \(\{ \xi (t) : t \in {\mathbb {R}}\}\) is precompact in \({\mathscr {X}}\). Also, \(\xi \) is uniformly continuous on \({\mathbb {R}}\), and the sum of two almost periodic functions is again almost periodic. Clearly, periodic functions are almost periodic.

A particular class of almost periodic functions is the quasi-periodic functions. For \(k \in {\mathbb {N}}\), let \({\mathbb {T}}^k = [{\mathbb {R}}\ \ \mathrm{mod} \ 2\pi ]^k\) be the k-dimensional torus and denote by \({\mathcal {C}}({\mathbb {T}}^k ; {\mathscr {X}})\) the set of continuous functions \(\varphi \in {\mathcal {C}} ({\mathbb {R}}^k ; {\mathscr {X}})\) which are \(2\pi \)-periodic in each argument, i.e., for each \(i = 1, \ldots ,k\)

$$\begin{aligned} \varphi (x_1 , \ldots ,x_{i-1} , x_i + 2\pi , x_{i+1}, \ldots , x_k) = \varphi (x_1 , \ldots , x_{i-1}, x_i, x_{i+1}, \ldots , x_k). \end{aligned}$$

Let \(\alpha = (\alpha _1, \ldots ,\alpha _k) \in {\mathbb {T}}^k\), where \(\{ \alpha _i : i = 1, \ldots , k\}\) is a set of rationally independent real numbers, i.e., if \(n_1 , \ldots , n_k \in {\mathbb {Z}}\) are integers such that \(n_1 \alpha _1 + \cdots + n_k \alpha _k = 0\) then \(n_1=\cdots = n_k = 0\). For \(\varphi \in {{\mathcal {C}}} ({\mathbb {T}}^k ; {\mathscr {X}})\), a function \(\xi : {\mathbb {R}}\rightarrow {\mathscr {X}}\) with the form

$$\begin{aligned} \xi (t) := \varphi (\alpha _1 t, \ldots , \alpha _k t) = \varphi (\alpha t), \quad t \in {\mathbb {R}}, \end{aligned}$$

is said to be quasi-periodic (with k frequencies) with values in \({\mathscr {X}}\). Note that periodic functions are particular quasi-periodic functions, and quasi-periodic functions are almost periodic.
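As a concrete numerical illustration of ours (not part of the original argument), the function \(\xi (t)=\sin t+\sin (\sqrt{2}\,t)\) is quasi-periodic with the two rationally independent frequencies \(1\) and \(\sqrt{2}\), hence almost periodic. The sketch below approximates the supremum in the definition of an \(\varepsilon \)-period on a finite grid and checks that \(\tau =140\pi \) (coming from the continued-fraction convergent \(99/70\approx \sqrt{2}\)) is an \(\varepsilon \)-period for a small \(\varepsilon \), while \(\tau =\pi \) is not; the grid parameters are our choices.

```python
import numpy as np

def period_defect(xi, tau, s_max=500.0, n=200_001):
    """Grid approximation of sup_s |xi(s + tau) - xi(s)| over [0, s_max]."""
    s = np.linspace(0.0, s_max, n)
    return np.max(np.abs(xi(s + tau) - xi(s)))

# Quasi-periodic function with frequencies 1 and sqrt(2) (rationally independent).
xi = lambda t: np.sin(t) + np.sin(np.sqrt(2.0) * t)

# 99/70 is a continued-fraction convergent of sqrt(2), so tau = 2*pi*70
# nearly closes both phases and is an eps-period for a small eps.
tau_good = 140.0 * np.pi
tau_bad = np.pi

print(period_defect(xi, tau_good))  # small (about 0.03)
print(period_defect(xi, tau_bad))   # of order 1
```

By almost periodicity, such good translation numbers \(\tau \) form a relatively dense subset of \({\mathbb {R}}\); the script only exhibits one of them.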

For the space \( \Xi _b:={\mathcal {C}}_b({\mathbb {R}};{\mathscr {X}})\) of bounded continuous functions with the supremum metric, Chepyzhov and Vishik (2002) showed that the hull of Lipschitz continuous quasi-periodic functions has finite fractal dimension. More precisely,

Lemma 3.10

(Chepyzhov and Vishik 2002, Proposition IX.2.1) If \(\xi \in \Xi _b={\mathcal {C}}_b({\mathbb {R}};{\mathscr {X}})\) is a quasi-periodic function with k frequencies \(\xi (t) = \varphi (t\alpha )\) with \(\varphi \) Lipschitz continuous, then \( \dim _F\big ({\mathcal {H}}_b(\xi ); \Xi _b \big )\leqslant k\), where \({\mathcal {H}}_b(\xi )\) is given as in (3.53) with the closure taken over the supremum metric.

Moreover, the following lemma indicates that a necessary condition for \({\mathcal {H}}_b(\xi )\) to be finite dimensional in \(\Xi _b\) is that \(\xi \) is almost periodic.

Lemma 3.11

(Chepyzhov and Vishik 2002, Theorem V.1.1) A function \(\xi \in \Xi _b={\mathcal {C}}_b({\mathbb {R}};{\mathscr {X}})\) is almost periodic if and only if the hull \({\mathcal {H}}_b(\xi )\) of \(\xi \) is compact in \(\Xi _b\).

In order to study evolution equations with more general non-autonomous terms than quasi-periodic ones, in our previous work (Cui et al. 2021), we considered the space \(\Xi ={\mathcal {C}}({\mathbb {R}};{\mathscr {X}})\) of continuous functions with the Fréchet metric

$$\begin{aligned} d_\Xi (\xi _1,\xi _2) := \displaystyle \sum _{n=1}^{\infty }{ \displaystyle \frac{1}{2^{n}} \frac{d^{(n)}(\xi _1,\xi _2) }{ 1 + d^{(n)}(\xi _1,\xi _2) } } ,\quad \xi _1,\xi _2\in \Xi , \end{aligned}$$

where

$$\begin{aligned} d^{(n)} (\xi _1,\xi _2) := \displaystyle \max _{s \in [-n , n] }{d_{{\mathscr {X}}}\big ( \xi _1(s) , \xi _2(s) \big ) }, \quad n \in {\mathbb {N}}. \end{aligned}$$

In this space, the translation operators \(\theta _t\) are Lipschitz but with t-dependent Lipschitz constants, as indicated by the following lemma.

Lemma 3.12

(Cui et al. 2021, Proposition 4.3) For any \(t\in {\mathbb {R}}\) the translation operator \(\theta _t\) on \(\Xi ={\mathcal {C}}({\mathbb {R}};{\mathscr {X}})\) is Lipschitz:

$$\begin{aligned} \begin{aligned} d_\Xi (\theta _t\xi _1, \theta _t \xi _2) \leqslant 2^{|t|+1} d_\Xi (\xi _1,\xi _2),\quad \forall \xi _1,\xi _2\in \Xi . \end{aligned} \end{aligned}$$
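The following numerical sketch (our illustration, not from the original) approximates \(d_\Xi \) by truncating the defining series at \(n=30\) and checks the bound of Lemma 3.12 on one sample pair of scalar functions; the Gaussian bump, the zero function, and the shift \(t=3\) are arbitrary choices.

```python
import numpy as np

def d_Xi(xi1, xi2, n_max=30, pts=2001):
    """Truncated Frechet metric: sum_{n=1}^{n_max} 2^{-n} d_n / (1 + d_n),
    where d_n is the max distance over [-n, n], approximated on a grid."""
    total = 0.0
    for n in range(1, n_max + 1):
        s = np.linspace(-n, n, pts)
        d_n = np.max(np.abs(xi1(s) - xi2(s)))
        total += 2.0 ** (-n) * d_n / (1.0 + d_n)
    return total

xi1 = lambda s: np.exp(-(s - 5.0) ** 2)  # a bump centred at s = 5
xi2 = lambda s: np.zeros_like(s)         # the zero function

t = 3.0  # translating the bump towards the origin, where d_Xi weighs more
lhs = d_Xi(lambda s: xi1(s + t), lambda s: xi2(s + t))
rhs = 2.0 ** (abs(t) + 1) * d_Xi(xi1, xi2)
print(lhs <= rhs)  # the Lipschitz bound of Lemma 3.12 holds in this instance
```

Note that the translation genuinely increases the metric here (the bump moves into the heavily weighted region near \(0\)), so the t-dependence of the Lipschitz constant is not an artifact.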

In addition, the following theorem shows that the finite-dimensionality of the hull \({\mathcal {H}}(\xi )\) of a function \(\xi \) in \(\Xi \) is fully determined by the tails of the function.

Theorem 3.13

(Cui et al. 2021, Theorem 4.12) Suppose that \(g_+,\, g_-\in \Xi = {\mathcal {C}}({\mathbb {R}};{\mathscr {X}})\) are two functions with finite-dimensional hulls \({\mathcal {H}}(g_+)\) and \({\mathcal {H}}(g_-)\) in \(\Xi \), respectively. If \(g\in \Xi \) is a function such that

(G1):

g is Lipschitz continuous from \({\mathbb {R}}\) to \({\mathscr {X}}\);

(G2):

g converges forward to \( g_+ \) and backwards to \( g_- \) exponentially, i.e., there exist a time \(T_*\geqslant 0\) and constants \(C, \beta >0\) such that

$$\begin{aligned}&d_{\mathscr {X}}\big (g(t),g_+(t)\big )\!\leqslant \! C e^{-\beta t} \quad \text {and} \quad d_{\mathscr {X}}\big (g(-t),g_-(-t)\big )\!\leqslant \! C e^{-\beta t} \quad \text {for all } t\!\geqslant \! T_*.\nonumber \\ \end{aligned}$$
(3.54)

Then, the hull \({\mathcal {H}}(g)\) of g is finite dimensional in \(\Xi \) with

$$\begin{aligned} \dim _F \big ( {\mathcal {H}}(g); \Xi \big )\leqslant \max \Big \{ 1 , \, \dim _F \big ({\mathcal {H}}(g_+);\Xi \big ), \, \dim _F \big ({\mathcal {H}}(g_-);\Xi \big )\Big \}. \end{aligned}$$

Note that, by Lemma 3.10, quasi-periodic functions are examples of \(g_+\) and \(g_-\).

Theorem 3.13 allows us to consider in applications some non-autonomous terms that are not almost periodic, for instance, the smoothly switching forcing \(g \in {\mathcal {C}}({\mathbb {R}};{\mathbb {R}})\) such that

$$\begin{aligned} g(t)= {\left\{ \begin{array}{ll} 1,\quad t>1;\\ -1,\quad t<-1, \end{array}\right. } \end{aligned}$$

for which by Theorem 3.13 (with \(g_+(t)\equiv 1\) and \(g_-(t)\equiv -1\)) we have \(\dim _F\big ({\mathcal {H}}(g);{\mathcal {C}}({\mathbb {R}};{\mathbb {R}})\big )\leqslant 1\). More examples and comments were given in Cui et al. (2021).

Finally we recall a useful lemma.

Lemma 3.14

(Cui et al. 2021, Lemma 5.1) Let \(\big ({\mathscr {X}}, \Vert \cdot \Vert _{\mathscr {X}}\big )\) be a Banach space. If \(g \in \Xi :={\mathcal {C}}({\mathbb {R}};{\mathscr {X}})\) and the hull \( {\mathcal {H}}(g)\) of g is compact in \(\Xi \), then there is a constant \(c=c(g)>0\) such that for any \(\sigma _1,\sigma _2\in {\mathcal {H}}(g)\)

$$\begin{aligned} \int _\tau ^t \Vert \sigma _1(s)-\sigma _2(s)\Vert ^q_{{\mathscr {X}}}\ \mathrm{d}s \leqslant c\, 2^{q(t-\tau +|\tau |)} \Big (d_\Xi (\sigma _1,\sigma _2)\Big )^q,\quad \forall t\geqslant \tau \text { and } q\geqslant 1. \end{aligned}$$

In particular,

$$\begin{aligned} \begin{aligned} \int _0^t \Vert \sigma _1(s)-\sigma _2(s)\Vert ^2_{{\mathscr {X}}}\ \mathrm{d}s \leqslant c\, 4^{ t} \Big (d_\Xi (\sigma _1,\sigma _2)\Big )^2,\quad \forall t\geqslant 0. \end{aligned} \end{aligned}$$
(3.55)

4 Applications to a Stochastic Reaction–Diffusion Equation

In this section, we study a stochastic reaction–diffusion equation as an application of our theoretical results. Note that, under certain conditions, the existence of the random uniform attractor along with some preliminary results was established recently by Cui and Langa (2017). Now, with the non-autonomous term strengthened so that the symbol space is finite dimensional, we show the finite-dimensionality of the random uniform attractor.

4.1 Preliminary Settings and the Symbol Space

We consider the following reaction–diffusion equation with additive scalar white noise

$$\begin{aligned} \begin{aligned} \mathrm{d}u +(\lambda u-\Delta u) \mathrm{d}t= f(u)\mathrm{d}t+g(x,t)\mathrm{d}t+ h(x)\mathrm{d}\omega , \quad x\in {\mathcal {O}}, \ t\geqslant \tau \in {\mathbb {R}}, \end{aligned}\nonumber \\ \end{aligned}$$
(4.1)

endowed with the initial and boundary conditions

$$\begin{aligned} \begin{aligned} u(x,t)|_{t=\tau }&=u_\tau (x), \quad u(x,t)|_{\partial {\mathcal {O}}} =0, \end{aligned} \end{aligned}$$
(4.2)

where \({\mathcal {O}}\subset {\mathbb {R}}^N\), \(N\in {\mathbb {N}}\), is a bounded smooth domain and \(\lambda >0\) is a constant. The nonlinear term \(f \in {\mathcal {C}}^1\big ( {\mathbb {R}}, {\mathbb {R}}\big )\) is assumed to satisfy the following standard conditions

$$\begin{aligned} f(s)s&\leqslant -\alpha _1 |s|^p + \beta _1, \end{aligned}$$
(4.3)
$$\begin{aligned} |f(s)|&\leqslant \alpha _2 |s|^{p-1} + \alpha _2, \end{aligned}$$
(4.4)
$$\begin{aligned} |f'(s)|&\leqslant \kappa _2 |s|^{p-2} + l_2 , \end{aligned}$$
(4.5)
$$\begin{aligned} f'(s)&\leqslant -\kappa _1 |s|^{p-2} + l_1 , \end{aligned}$$
(4.6)

where all the coefficients are positive constants and the growth order \(p\geqslant 2\). Let \(h(x)\in W^{2,2p-2}({\mathcal {O}})\) for simplicity. To establish a smoothing property \((H_6)\), we will also need the growth order p to satisfy

$$\begin{aligned} \begin{aligned} {\left\{ \begin{array}{ll} p\geqslant 2,\quad N=1,2;\\ 2\leqslant p \leqslant \frac{2N-2}{N-2},\quad N\geqslant 3. \end{array}\right. } \end{aligned} \end{aligned}$$
(4.7)

This ensures the continuous embedding \(H_0^1({\mathcal {O}})\hookrightarrow L^{2p-2}({\mathcal {O}})\) with

$$\begin{aligned} \begin{aligned} \Vert u\Vert _{L^{2p-2}({\mathcal {O}})} \leqslant c\Vert \nabla u\Vert ,\quad \forall u\in H_0^1({\mathcal {O}}), \end{aligned} \end{aligned}$$
(4.8)

for some constant \(c>0\), where \(\Vert \cdot \Vert := \Vert \cdot \Vert _{L^2({\mathcal {O}})}\), see Robinson (2001, Theorem 5.26).
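For orientation, the upper limit \((2N-2)/(N-2)\) appearing in condition (4.7) can be tabulated for a few space dimensions; e.g., for \(N=3\) any growth order \(p\in [2,4]\) is admissible. The small helper below is ours, written only to make the admissible range concrete.

```python
from fractions import Fraction

def p_max(N: int) -> Fraction:
    """Upper bound on the growth order p in condition (4.7), for N >= 3."""
    if N < 3:
        raise ValueError("for N = 1, 2 any p >= 2 is admissible")
    return Fraction(2 * N - 2, N - 2)

for N in (3, 4, 5, 6):
    print(N, p_max(N))  # 3 -> 4, 4 -> 3, 5 -> 8/3, 6 -> 5/2
```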

The probability space \((\Omega , {\mathcal {F}} , {\mathbb {P}})\) is defined in a standard way. Let

$$\begin{aligned} \Omega := \big \{\omega \in C({\mathbb {R}};{\mathbb {R}}): \omega (0) =0 \big \}, \end{aligned}$$

\({\mathcal {F}}\) be the Borel sigma-algebra induced by the compact-open topology of \(\Omega \) and \({\mathbb {P}}\) the two-sided Wiener measure on \((\Omega ,{\mathcal {F}})\). Define the translation operators \(\vartheta _t\) on \(\Omega \) by

$$\begin{aligned} \vartheta _t \omega =\omega (\cdot +t) -\omega (t),\quad \forall t\in {\mathbb {R}}, \ \omega \in \Omega . \end{aligned}$$

Then \({\mathbb {P}}\) is ergodic and invariant under \(\vartheta \) (see Flandoli and Schmalfuss 1996). Setting

$$\begin{aligned} \begin{aligned} z(\omega ) := -\lambda \int ^0_{-\infty } e^{\lambda \tau } \omega (\tau )\ \mathrm{d}\tau , \quad \forall \omega \in \Omega , \end{aligned} \end{aligned}$$
(4.9)

we have that \(z(\omega )\) is a stationary solution of the one-dimensional Ornstein–Uhlenbeck equation

$$\begin{aligned} \begin{aligned} \mathrm{d}z(\vartheta _t\omega )+\lambda z(\vartheta _t\omega ) \mathrm{d}t=\mathrm{d}\omega . \end{aligned} \end{aligned}$$
(4.10)

Moreover, there is a \(\vartheta \)-invariant subset \({\tilde{\Omega }}\subset \Omega \) of full measure such that \(z(\vartheta _t\omega )\) is continuous in t for every \(\omega \in {\tilde{\Omega }}\) and the random variable \(|z(\cdot )|\) is tempered (see Fan 2006, Lemma 1). Hereafter, we will not distinguish \({\tilde{\Omega }}\) and \(\Omega \).
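As a numerical aside (ours, not part of the original), the stationary Ornstein–Uhlenbeck dynamics (4.9)–(4.10) can be simulated with the Euler–Maruyama scheme; for a scalar Wiener process the stationary variance of \(z\) is \(1/(2\lambda )\), which a long time average recovers. The step size, horizon, and seed below are our choices.

```python
import numpy as np

rng = np.random.default_rng(0)

lam, dt, T = 1.0, 0.01, 2000.0
n_steps = int(T / dt)

# Euler-Maruyama for dz = -lam * z dt + dW, with pre-generated Wiener increments.
dW = np.sqrt(dt) * rng.standard_normal(n_steps)
z = 0.0
samples = np.empty(n_steps)
for k in range(n_steps):
    z += -lam * z * dt + dW[k]
    samples[k] = z

burn_in = int(10.0 / dt)  # discard the initial transient
var_est = np.mean(samples[burn_in:] ** 2)
print(var_est)  # close to the stationary variance 1/(2*lam) = 0.5
```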

In order to study the finite-dimensionality of the random uniform attractor, we need the non-autonomous forcing g to have a finite-dimensional hull in some metric space \(\Xi \). By the analysis in Sect. 3.4, for the current reaction–diffusion equation we take \(\Xi :={\mathcal {C}}\big ({\mathbb {R}};L^2({\mathcal {O}})\big )\) and assume that

(G):

\(g\in \Xi ={\mathcal {C}}\big ({\mathbb {R}};L^2({\mathcal {O}})\big )\) and the hull of g

$$\begin{aligned} {\mathcal {H}}(g) =\overline{ \big \{\theta _r g:r\in {\mathbb {R}}\big \}} ^{d_\Xi } \end{aligned}$$

has finite fractal dimension in \(\Xi \), i.e., \(\dim _F\big ({\mathcal {H}}(g);\Xi \big )<\infty \).

Note that, by Lemma 3.10, Lipschitz continuous quasi-periodic functions are examples of such g’s with condition (G), and Theorem 3.13 indicates that Lipschitz continuous functions with tails eventually exponentially converging to quasi-periodic functions satisfy condition (G) as well. Some concrete examples were given in Cui et al. (2021).

Now for the reaction–diffusion equation (4.1), we define the symbol space as the hull of the forcing g in \(\Xi \):

$$\begin{aligned} \Sigma :={\mathcal {H}}(g). \end{aligned}$$

Then condition (G) ensures that \(\Sigma \) is a finite-dimensional compact subset of \(\Xi \). Moreover, the group \( \{\theta _t\}_{t\in {\mathbb {R}}}\) of translation operators forms a base flow on \(\Sigma \).

4.2 Generating an NRDS and the Random Uniform Attractor

Now for \(\sigma \in \Sigma \), we consider the following stochastic reaction–diffusion equation

$$\begin{aligned} \begin{aligned} \left\{ \begin{aligned}&\mathrm{d}u +(\lambda u-\Delta u) \mathrm{d}t = f(u)\mathrm{d}t+\sigma (x,t)\mathrm{d}t+ h(x)\mathrm{d}\omega , \quad x\in {\mathcal {O}}, \ t\geqslant 0, \\&u(x,t)|_{t=0} =u_0(x), \quad u(x,t)|_{\partial {\mathcal {O}}} =0 . \end{aligned} \right. \end{aligned} \end{aligned}$$
(4.11)

By means of the Ornstein–Uhlenbeck equation (4.10) and the change of variable \(v(t)=u(t)-hz(\vartheta _t\omega )\), we transform equation (4.11) into the following conjugate random problem

$$\begin{aligned} \begin{aligned} \left\{ \begin{aligned}&\frac{ \mathrm{d}v}{\mathrm{d}t} + \lambda v-\Delta v = f\big (v+h z(\vartheta _t\omega ) \big ) +\sigma (x,t) +z(\vartheta _t\omega )\triangle h(x) ,\\&v(x,t)|_{t=0} =v_0(x), \quad v(x,t)|_{\partial {\mathcal {O}}} =0 . \end{aligned} \right. \end{aligned} \end{aligned}$$
(4.12)

Denote by

$$\begin{aligned} H:=\big (L^2({\mathcal {O}}),\Vert \cdot \Vert \big ), \qquad V :=H_0^1 ({\mathcal {O}} ), \qquad Z:= L^p ( {\mathcal {O}} ), \end{aligned}$$

and let \({\mathcal {D}}\) be the collection of tempered closed random sets in H, i.e.,

$$\begin{aligned} {\mathcal {D}} = \Big \{D: D \text { is a tempered closed random set in }H \Big \}. \end{aligned}$$

Then, the embedding \(I: V\hookrightarrow H\) is compact, and by Lemma 3.2 we have the Kolmogorov \(\varepsilon \)-entropy condition

$$\begin{aligned} \begin{aligned} {\mathcal {K}}_\varepsilon (V;H ) < \alpha \varepsilon ^{- N} ,\quad \forall \varepsilon >0, \end{aligned} \end{aligned}$$
(4.13)

for some constant \(\alpha >0\).

Following a standard argument as in Temam (1997) and Chepyzhov and Vishik (2002), we know that for each initial datum \(v_0 \in H\), problem (4.12) has a unique solution \(v(\cdot , \omega , \sigma , v_0) \in {\mathcal {C}}([0,\infty );H)\cap L_{loc}^p((0,\infty ); Z)\cap L^2_{loc}((0,\infty );V)\) with \(v(0, \omega , \sigma , v_0)=v_0\). Moreover, v is \(({\mathcal {F}},{\mathcal {B}}(H))\)-measurable in \(\omega \), see, e.g., Cui et al. (2018b). Hence, setting, for each \(t\geqslant 0\), \(\omega \in \Omega ,\) \(\sigma \in \Sigma \) and \(v_0\in H\),

$$\begin{aligned} \begin{aligned} \phi (t,\omega ,\sigma , v_0) := v(t, \omega , \sigma , v_0 ), \end{aligned} \end{aligned}$$
(4.14)

we see that \(\phi \), generated by the solutions of (4.12), is a \((\Sigma \times H, H)\)-continuous NRDS in H (in fact, \(\phi \) is Lipschitz in both the initial data and the symbol, as indicated later by Lemma 4.7).

Now, for each \(t\geqslant 0\), \(\omega \in \Omega ,\) \(\sigma \in \Sigma \) and \(u_0\in H\), set

$$\begin{aligned} \begin{aligned} u(t,\omega ,\sigma , u_0)= v\big (t, \omega , \sigma , u_0-hz(\omega )\big )+hz(\vartheta _t\omega ) . \end{aligned} \end{aligned}$$
(4.15)

Then, \( u(t,\omega ,\sigma , u_0)\) is the solution of (4.11) at time t with initial datum \(u_0\) (given at time \(t=0\)) in the sense of Definition 2.1. Hence, \({\tilde{\phi }}(t,\omega ,\sigma , u_0):= u(t, \omega , \sigma , u_0 )\), generated by the solutions of the stochastic reaction–diffusion equation (4.11), is also a \((\Sigma \times H, H)\)-continuous NRDS in H. In fact, \(\phi \) and \({\tilde{\phi }}\) are conjugate NRDS, satisfying (2.5) with the cohomology

$$\begin{aligned} \begin{aligned} {\mathsf {T}}(\omega , u)=u+hz(\omega ), \quad \omega \in \Omega ,\, u\in H. \end{aligned} \end{aligned}$$
(4.16)

Since the cohomology \({\mathsf {T}}\) is a bijection from \({\mathcal {D}}\) onto \({\mathcal {D}}\), Theorem 2.10 shows that the conjugate \({\mathcal {D}}\)-uniform attractors \({\mathscr {A}}\) of \(\phi \) and \({\tilde{{\mathscr {A}}}}\) of \({\tilde{\phi }}\) satisfy the translation relation

$$\begin{aligned} \begin{aligned} {\tilde{{\mathscr {A}}}} (\omega )= {\mathsf {T}}\big (\omega ,{\mathscr {A}}( \omega )\big ) ={\mathscr {A}}(\omega )+hz(\omega ) , \quad \omega \in \Omega , \end{aligned} \end{aligned}$$
(4.17)

which indicates that the two attractors have the same fractal dimension.

In the remainder of the paper, we study the finite-dimensionality of \({\mathscr {A}}\): in H by checking conditions \((H_1)\)–\((H_4)\) and (S) of Theorem 3.6, and in V by checking \((H_7)\) of Theorem 3.8. The existence of the random uniform attractor was proved previously by Cui and Langa (2017) using Theorem 2.6.

4.3 An Admissible Uniformly Absorbing Set \({\mathscr {B}}\)

4.3.1 Estimates of Solutions

The estimates of solutions in this section follow, in spirit, the standard approach of Cui and Langa (2017), but we keep track of additional details that are crucial for bounding the fractal dimension of the uniform attractor later. Note that condition (4.7) on p is not needed in this section.

Lemma 4.1

(Estimate in H) Let conditions (4.3)–(4.6) hold. Then, any solution v of (4.12) with initial value \(v_0\in H\) satisfies

$$\begin{aligned}&\Vert v(t, \vartheta _{-t}\omega , \theta _{-t}\sigma ,v_0)\Vert ^2 + \int ^t _0 e^{\lambda (s-t)} \Big ( \Vert \nabla v(s , \vartheta _{-t}\omega , \theta _{-t}\sigma ,v_0)\Vert ^2 \\&\qquad + \Vert v (s , \vartheta _{-t}\omega , \theta _{-t}\sigma ,v_0) \Vert ^p_p \Big )\ \mathrm{d}s \\&\quad \leqslant e^{-\lambda t} \Vert v_0\Vert ^2 + C \int ^0_{-t} e^{\lambda s} \Big (|z(\vartheta _{s}\omega )|^p+1+ \Vert \sigma (s)\Vert ^2\Big ) \ \mathrm{d}s ,\quad t\geqslant 0, \end{aligned}$$

where \(C>0\) is a positive constant independent of \(v_0\in H\), \(\sigma \in \Sigma \) and \(\omega \in \Omega \).

Proof

Take the inner product of (4.12) with v in H to obtain

$$\begin{aligned}&\frac{1}{2} \frac{\mathrm{d}}{\mathrm{d}t} \Vert v \Vert ^2 +\lambda \Vert v\Vert ^2 +\Vert \nabla v\Vert ^2 = \int v \Big ( f( v+hz( \vartheta _t \omega ))+ \sigma (t) + z(\vartheta _t\omega ) \triangle h \Big )\, \mathrm{d}x.\nonumber \\ \end{aligned}$$
(4.18)

By (4.3), (4.4) and Young’s inequality, we have, since \(u=v+hz( \vartheta _t \omega )\),

$$\begin{aligned} \int v f(v+hz(\vartheta _t \omega )) \ \mathrm{d}x= & {} \int u f(u) \ \mathrm{d}x - \int hz(\vartheta _t\omega ) f(u) \ \mathrm{d}x \nonumber \\\leqslant & {} -\alpha _1 \Vert u\Vert ^p_p +c + \int \alpha _2 |h z(\vartheta _t\omega )|\Big (| u|^{p-1}+1\Big )\, \mathrm{d}x\nonumber \\\leqslant & {} - \alpha _1 \Vert u\Vert _p^p +c +\frac{\alpha _1}{2}\Vert u\Vert _p^p+ c|z(\vartheta _t\omega )|^p+c|z(\vartheta _t\omega )| \nonumber \\\leqslant & {} -\frac{\alpha _1}{2} \Vert v \Vert ^p_p + c\big (|z(\vartheta _t\omega )|^p+1\big ). \end{aligned}$$
(4.19)

As

$$\begin{aligned} \begin{aligned} \int v \big ( \sigma (t)+ z(\vartheta _t \omega ) \triangle h \big )\, \mathrm{d}x \leqslant \frac{\lambda }{2} \Vert v\Vert ^2 +\frac{1}{\lambda }\Vert \sigma \Vert ^2 + c(|z(\vartheta _t\omega )|^p+1), \end{aligned} \end{aligned}$$
(4.20)

by (4.18)–(4.20) we conclude that

$$\begin{aligned} \begin{aligned} \frac{\mathrm{d}}{\mathrm{d}t} \Vert v\Vert ^2 + \lambda \Vert v\Vert ^2 +\Vert \nabla v\Vert ^2 + \Vert v \Vert ^p_p \leqslant c\Big (|z(\vartheta _t\omega )|^p+1+ \Vert \sigma \Vert ^2 \Big ) . \end{aligned} \end{aligned}$$
(4.21)

Multiply (4.21) by \(e^{\lambda t}\) and then integrate over (0, t) to obtain

$$\begin{aligned} \begin{aligned}&\Vert v(t, \omega , \sigma ,v_0)\Vert ^2 + \int ^t _0 e^{\lambda (s-t)}\Big ( \Vert \nabla v(s, \omega , \sigma ,v_0)\Vert ^2 + \Vert v(s , \omega , \sigma ,v_0) \Vert ^p_p \Big ) \, \mathrm{d}s \\&\quad \leqslant e^{-\lambda t} \Vert v_0\Vert ^2 + c \int ^t_0 e^{\lambda (s-t)} \Big (|z(\vartheta _s\omega )|^p+1+\Vert \sigma (s)\Vert ^2\Big ) \ \mathrm{d}s . \end{aligned} \end{aligned}$$
(4.22)

Replacing \(\omega \) and \(\sigma \) by \(\vartheta _{-t}\omega \) and \(\theta _{-t}\sigma \), we have

$$\begin{aligned} \begin{aligned}&\Vert v(t, \vartheta _{-t}\omega , \theta _{-t}\sigma ,v_0)\Vert ^2 +\int ^t _0 e^{\lambda (s-t)} \Big (\Vert \nabla v(s, \vartheta _{-t}\omega , \theta _{-t}\sigma ,v_0) \Vert ^2 \\&\qquad + \Vert v (s, \vartheta _{-t}\omega , \theta _{-t}\sigma ,v_0) \Vert ^p_p \Big )\, \mathrm{d}s \\&\quad \leqslant e^{-\lambda t} \Vert v_0\Vert ^2 +c \int ^t _0 e^{\lambda (s-t)} \Big ( |z(\vartheta _{s-t}\omega )|^p+1+\Vert \sigma (s-t)\Vert ^2 \Big ) \ \mathrm{d}s \\&\quad = e^{-\lambda t} \Vert v_0\Vert ^2 + c \int ^0_{-t} e^{\lambda s} \Big (|z(\vartheta _{s}\omega )|^p+1+ \Vert \sigma (s)\Vert ^2\Big ) \ \mathrm{d}s , \end{aligned} \end{aligned}$$

and the proof is complete. \(\square \)

Lemma 4.2

(Estimate in V) Let conditions (4.3)–(4.6) hold. Then for any \(\epsilon >0\), there is a constant \(c_\epsilon \) such that any solution v of (4.12) with initial value \(v_0\in H\) satisfies

$$\begin{aligned} \begin{aligned}&\Vert \nabla v(t, \vartheta _{-t} \omega , \theta _{-t}\sigma ,v_0)\Vert ^2 \\&\quad \leqslant c_\epsilon e^{ -\lambda t} \Vert v_0\Vert ^2 + c_\epsilon \int ^0_{-t} e^{\lambda s} \Big (|z(\vartheta _{s}\omega )|^p+1+ \Vert \sigma (s)\Vert ^2\Big ) \ \mathrm{d}s,\quad \forall t\geqslant \epsilon , \end{aligned} \end{aligned}$$

where \(c_\epsilon \) is a positive constant depending only on \(\epsilon \).

Proof

Multiply (4.12) by \(-\triangle v\) and then integrate over \({\mathcal {O}}\) to obtain

$$\begin{aligned}&\frac{1}{2} \frac{\mathrm{d}}{\mathrm{d}t} \Vert \nabla v \Vert ^2 \!+\!\lambda \Vert \nabla v\Vert ^2 \!+\!\Vert \triangle v\Vert ^2 \!=\! -\! \int \triangle v \Big ( f \big ( v\!+\!hz( \vartheta _t \omega )\big )\!+\! \sigma (t) \!+\! z(\vartheta _t\omega ) \triangle h \Big )\mathrm{d}x.\nonumber \\ \end{aligned}$$
(4.23)

By (4.4)–(4.6), we have

$$\begin{aligned} \begin{aligned} - \int \triangle v f\big ( v+hz( \vartheta _t \omega )\big ) \ \mathrm{d}x&= - \int \triangle u f(u)\ \mathrm{d}x + \int \triangle h z(\vartheta _t\omega ) f( u)\ \mathrm{d}x \\&= \int |\nabla u |^2 \frac{\mathrm{d}f( u) }{\mathrm{d}u}\ \mathrm{d}x + \int \triangle h z(\vartheta _t\omega ) f( u) \mathrm{d}x \\&\leqslant l_1 \Vert \nabla u\Vert ^2 + \int |\triangle hz(\vartheta _t\omega ) | \Big (\alpha _2|u|^{p-1}+\alpha _2\Big )\, \mathrm{d}x \\&\leqslant l_1\Vert \nabla v\Vert ^2 +c \Vert v\Vert _p^p+c|z(\vartheta _t\omega )|^p+c. \end{aligned} \end{aligned}$$
(4.24)

Since

$$\begin{aligned} \begin{aligned} - \int \triangle v \big ( \sigma (t) + z(\vartheta _t\omega ) \triangle h \big )\mathrm{d}x \leqslant \Vert \triangle v\Vert ^2 + \frac{1}{2} \Vert \sigma \Vert ^2 + c|z(\vartheta _t\omega ) |^2, \end{aligned} \end{aligned}$$
(4.25)

from (4.23) to (4.25) it follows that

$$\begin{aligned}&\frac{\mathrm{d}}{\mathrm{d}t} \Vert \nabla v(t, \omega , \sigma ,v_0)\Vert ^2 \!\leqslant \! c\Vert \nabla v\Vert ^2 +c \Vert v\Vert _p^p+c|z(\vartheta _t\omega )|^p \!+\! \Vert \sigma \Vert ^2 +c , \quad t\!>\!0.\nonumber \\ \end{aligned}$$
(4.26)

For \(\epsilon >0\) and \(s\in (t-\epsilon , t)\), integrate (4.26) over \((s,t)\) to obtain

$$\begin{aligned} \begin{aligned} \Vert \nabla v(t, \omega , \sigma ,v_0)\Vert ^2 - \Vert \nabla v(s, \omega , \sigma ,v_0)\Vert ^2&\leqslant c\int ^t_{t-\epsilon } \Big (\Vert \nabla v(\tau )\Vert ^2 +\Vert v(\tau )\Vert _p^p \Big ) \mathrm{d}\tau \\&\quad +\!c\int ^t_{t-\epsilon }\Big ( |z(\vartheta _\tau \omega )|^p \!+\!1\!+\! \Vert \sigma (\tau ) \Vert ^2\Big ) \mathrm{d}\tau . \end{aligned} \end{aligned}$$
(4.27)

Then integrating (4.27) with respect to s over \((t-\epsilon ,t)\) we have

$$\begin{aligned} \begin{aligned} \Vert \nabla v(t, \omega , \sigma ,v_0)\Vert ^2&\leqslant \Big (c+\frac{1}{\epsilon }\Big )\int ^t_{t-\epsilon } \Big (\Vert \nabla v(\tau )\Vert ^2 +\Vert v(\tau )\Vert _p^p \Big ) \mathrm{d}\tau \\&\quad +c\int ^t_{t-\epsilon }\Big ( |z(\vartheta _\tau \omega )|^p +1 + \Vert \sigma (\tau ) \Vert ^2\Big ) \mathrm{d}\tau , \end{aligned}\nonumber \\ \end{aligned}$$

and replacing \(\omega \) and \(\sigma \) by \(\vartheta _{-t}\omega \) and \(\theta _{-t}\sigma \), respectively, we have

$$\begin{aligned} \begin{aligned} \Vert \nabla v(t, \vartheta _{-t} \omega , \theta _{-t}\sigma ,v_0)\Vert ^2&\!\leqslant \! \Big (c\!+\!\frac{1}{\epsilon }\Big ) \!\int ^t _{t-\epsilon } \!\Big ( \Vert \nabla v(s, \vartheta _{-t}\omega , \theta _{-t}\sigma ,v_0) \Vert ^2 \!+\! \Vert v (s ) \Vert ^p_p \Big )\ \mathrm{d}s \\&\quad +c\int ^0_{ -\epsilon }\Big ( |z(\vartheta _\tau \omega )|^p +1 + \Vert \sigma (\tau ) \Vert ^2\Big ) \ \mathrm{d}\tau . \end{aligned} \end{aligned}$$

Since, for \(t\geqslant \epsilon \),

$$\begin{aligned} \begin{aligned}&\int ^t _{t-\epsilon } \Big (\Vert \nabla v(s, \vartheta _{-t}\omega , \theta _{-t}\sigma ,v_0) \Vert ^2 + \Vert v (s, \vartheta _{-t}\omega , \theta _{-t}\sigma ,v_0) \Vert ^p_p \Big )\ \mathrm{d}s \\&\quad \leqslant e^{\lambda \epsilon } \int ^t _{t-\epsilon } e^{\lambda (s-t)} \Big (\Vert \nabla v(s, \vartheta _{-t}\omega , \theta _{-t}\sigma ,v_0) \Vert ^2 \ + \Vert v (s, \vartheta _{-t}\omega , \theta _{-t}\sigma ,v_0) \Vert ^p_p\Big ) \ \mathrm{d}s \\&\quad \leqslant e^{\lambda \epsilon } \int ^t _{0} e^{\lambda (s-t)} \Big (\Vert \nabla v(s, \vartheta _{-t}\omega , \theta _{-t}\sigma ,v_0) \Vert ^2 \ + \Vert v (s, \vartheta _{-t}\omega , \theta _{-t}\sigma ,v_0) \Vert ^p_p\Big ) \ \mathrm{d}s \\&\quad \leqslant c e^{ \lambda \epsilon -\lambda t} \Vert v_0\Vert ^2 + c e^{\lambda \epsilon }\int ^0_{-t} e^{\lambda s} \Big (|z(\vartheta _{s}\omega )|^p+1+ \Vert \sigma (s)\Vert ^2\Big ) \ \mathrm{d}s \quad \text {(by Lemma}~4.1) , \end{aligned} \end{aligned}$$

we conclude that

$$\begin{aligned} \begin{aligned} \Vert \nabla v(t, \vartheta _{-t} \omega , \theta _{-t}\sigma ,v_0)\Vert ^2&\leqslant c\Big (1+\frac{1}{\epsilon }\Big )e^{ \lambda \epsilon -\lambda t} \Vert v_0\Vert ^2 \\&\quad + c e^{\lambda \epsilon }\Big (1+\frac{1}{\epsilon }\Big )\int ^0_{-t} e^{\lambda s} \Big (|z(\vartheta _{s}\omega )|^p+1+ \Vert \sigma (s)\Vert ^2\Big ) \ \mathrm{d}s , \end{aligned} \end{aligned}$$

which completes the proof. \(\square \)

To establish the \((H,V)\)-smoothing property of the system, we need the following estimates.

Lemma 4.3

(Estimate in Z) Let conditions (4.3)–(4.6) hold. Then, any solution v of (4.12) with initial value \(v_0\in H\) satisfies

$$\begin{aligned} \begin{aligned}&\varepsilon \Vert v(t,\vartheta _{-t}\omega , \theta _{-t}\sigma ,v_0)\Vert _p^p +\int _0^\varepsilon \int ^t_r e^{\lambda (s-t)} \Vert v(s,\vartheta _{-t}\omega , \theta _{-t}\sigma ,v_0)\Vert _{2p-2}^{2p-2} \ \mathrm{d}s\mathrm{d}r \\&\quad \leqslant ce^{-\lambda t} \Vert v_0\Vert ^2+c ( \varepsilon +1) \int _{-\infty }^0 e^{\lambda s} \Big ( |z(\vartheta _s\omega )|^{2p-2} +1+ \Vert \sigma (s)\Vert ^2\Big ) \mathrm{d}s,\quad \forall t \geqslant \varepsilon >0, \end{aligned} \end{aligned}$$
(4.28)

where \(c>0\) is a constant independent of \(v_0\in H\), \(\sigma \in \Sigma \) and \(\omega \in \Omega \).

Proof

Taking the inner product of (4.12) with \(|v|^{p-2}v\) in H, we obtain

$$\begin{aligned} \begin{aligned}&\frac{1}{p} \frac{\mathrm{d}}{\mathrm{d}t} \Vert v\Vert _p^p+\lambda \Vert v\Vert _p^p \\&\quad \leqslant \big (f(v+hz(\vartheta _t\omega )), |v|^{p-2}v\big )+\big (\sigma , |v|^{p-2}v\big ) + \big (z(\vartheta _t\omega ) \triangle h, |v|^{p-2}v\big ) \\&\quad \leqslant \big (f(v+hz(\vartheta _t\omega )), |v|^{p-2}v\big )+ \frac{\alpha _1}{8} \Vert v\Vert _{2p-2}^{2p-2} +c\Vert \sigma \Vert ^2 + c|z(\vartheta _t\omega ) |^2. \end{aligned}\nonumber \\ \end{aligned}$$
(4.29)

Since \(u=v+hz(\vartheta _t\omega )\), by (4.3) and (4.4) we see that

$$\begin{aligned} \begin{aligned} f(v+hz(\vartheta _t\omega ))v&=f(u)(u-hz(\vartheta _t\omega ) )\\&\leqslant -\alpha _1 |u|^p +\beta _1 -f(u) hz(\vartheta _t\omega ) \quad \text {(by }(4.3))\\&\leqslant -\alpha _1 | u|^p +\beta _1 + \big (\alpha _2|u|^{p-1}+\alpha _2\big )|hz(\vartheta _t\omega )| \quad \text {(by }(4.4)) \\&\leqslant -\alpha _1 |u|^p +\beta _1 + \frac{3\alpha _1}{4} |u|^{p}+c|hz(\vartheta _t\omega )|^p +\alpha _2|hz(\vartheta _t\omega )| \\&\leqslant -\frac{\alpha _1}{4} |v|^p+ c|hz(\vartheta _t\omega )|^p+ \beta _1 +\alpha _2|hz(\vartheta _t\omega )| . \end{aligned} \end{aligned}$$

Hence,

$$\begin{aligned} \begin{aligned} \big (f(v+hz(\vartheta _t\omega )), |v|^{p-2}v\big )&=\big (f(v+hz(\vartheta _t\omega ))v, |v|^{p-2} \big )\\&\leqslant -\frac{\alpha _1}{4} \Vert v\Vert _{2p-2}^{2p-2} +c|z(\vartheta _t\omega )|^{2p-2}+ c. \end{aligned} \end{aligned}$$

Then from (4.29), it follows that

$$\begin{aligned} \begin{aligned} \frac{\mathrm{d}}{\mathrm{d}t} \Vert v\Vert _p^p+\lambda \Vert v\Vert _p^p + \Vert v\Vert _{2p-2}^{2p-2} \leqslant c |z(\vartheta _t\omega )|^{2p-2}+ c\Vert \sigma \Vert ^2+c . \end{aligned} \end{aligned}$$
(4.30)

Multiply (4.30) by \(e^{\lambda t}\) and then integrate over \((r,t)\) for \(r\in (0,\varepsilon )\) to obtain

$$\begin{aligned} \begin{aligned}&\Vert v(t)\Vert _p^p +\int ^t_r e^{\lambda (s-t)} \Vert v(s)\Vert _{2p-2}^{2p-2} \ \mathrm{d}s\\&\quad \leqslant ce^{-\lambda (t-r)} \Vert v(r)\Vert _p^p +c\int ^t_r e^{\lambda (s-t)} \Big ( |z(\vartheta _s\omega )|^{2p-2} +1 + \Vert \sigma (s)\Vert ^2\Big )\, \mathrm{d}s. \end{aligned} \end{aligned}$$
(4.31)

Integrating (4.31) over \(r\in (0,\varepsilon )\) yields

$$\begin{aligned} \begin{aligned}&\varepsilon \Vert v(t,\omega ,\sigma ,v_0)\Vert _p^p +\int _0^\varepsilon \int ^t_r e^{\lambda (s-t)} \Vert v(s,\omega ,\sigma ,v_0)\Vert _{2p-2}^{2p-2} \ \mathrm{d}s\mathrm{d}r \\&\quad \leqslant c\int _0^\varepsilon e^{-\lambda (t-r)} \Vert v(r)\Vert _p^p \ \mathrm{d}r+c\int _0^\varepsilon \int ^t_r e^{\lambda (s-t)} \Big ( |z(\vartheta _s\omega )|^{2p-2} +1 + \Vert \sigma (s)\Vert ^2\Big ) \, \mathrm{d}s\mathrm{d}r \\&\quad \leqslant c\int _0^t e^{-\lambda (t-r)} \Vert v(r)\Vert _p^p \ \mathrm{d}r + \varepsilon c \int _0^t e^{\lambda (s-t)} \Big (|z(\vartheta _s\omega )|^{2p-2} +1+ \Vert \sigma (s)\Vert ^2\Big )\, \mathrm{d}s \\&\quad \leqslant ce^{-\lambda t}\Vert v_0\Vert ^2+ (1+\varepsilon ) c \int _0^t e^{\lambda (s-t)} \Big (|z(\vartheta _s\omega )|^{2p-2} +1+ \Vert \sigma (s)\Vert ^2\Big ) \, \mathrm{d}s \quad \text {(by }(4.22)). \end{aligned} \end{aligned}$$
(4.32)

Replacing \(\omega \) and \(\sigma \) by \(\vartheta _{-t}\omega \) and \(\theta _{-t}\sigma \), respectively, we have the lemma. \(\square \)

For later use, we state the following corollary.

Corollary 4.4

Let conditions (4.3)–(4.6) hold. Then, any solution v of (4.12) with initial value \(v_0\in H\) satisfies

$$\begin{aligned} \begin{aligned}&\int ^t_1 \Vert v(s, \omega , \sigma ,v_0)\Vert _{2p-2}^{2p-2} \ \mathrm{d}s \!\leqslant \! c \Vert v_0\Vert ^2\!+\!c \!\int ^t_0 e^{\lambda s} \Big ( |z(\vartheta _s\omega )|^{2p-2} \!+\!1 \!+\! \Vert \sigma (s)\Vert ^2\Big ) \, \mathrm{d}s, \end{aligned} \end{aligned}$$
(4.33)

for all \(t\geqslant 1\), where \(c>0\) is a constant independent of \(v_0\in H\), \(\sigma \in \Sigma \) and \(\omega \in \Omega \).

Proof

By (4.32) with \(\varepsilon =1\), for \(t\geqslant 1\) we have

$$\begin{aligned} \int ^t_1 \Vert v(s, \omega , \sigma ,v_0)\Vert _{2p-2}^{2p-2} \ \mathrm{d}s&\leqslant \int _0^1 \int ^t_r \Vert v(s, \omega , \sigma ,v_0)\Vert _{2p-2}^{2p-2} \ \mathrm{d}s\mathrm{d}r \\&\leqslant e^{\lambda t} \int _0^1 \int ^t_r e^{\lambda (s-t)} \Vert v(s, \omega , \sigma ,v_0)\Vert _{2p-2}^{2p-2} \ \mathrm{d}s\mathrm{d}r \\&\leqslant c \Vert v_0\Vert ^2\!+\!c \!\int ^t_0 e^{\lambda s} \Big ( |z(\vartheta _s\omega ) |^{2p-2} \!+\!1\!+\! \Vert \sigma (s)\Vert ^2\Big ) \mathrm{d}s . \end{aligned}$$

\(\square \)

4.3.2 The Absorbing Set \({\mathscr {B}}\) with Deterministic Absorption Time

Now we construct an admissible uniformly \({\mathcal {D}}\)-absorbing set \({\mathscr {B}}\) satisfying \((H_3)\). Since \(\Sigma = {\mathcal {H}} (g)\) is compact in \(\Xi ={\mathcal {C}} ({\mathbb {R}};H)\) we know from Chepyzhov and Vishik (2002, Proposition V.2.3) that it is bounded in \({\mathcal {C}}_b({\mathbb {R}};H)\), i.e., for some constant \(C_\Sigma >0\)

$$\begin{aligned} \begin{aligned} \sup _{\sigma \in \Sigma } \Vert \sigma \Vert _{{\mathcal {C}}_b({\mathbb {R}};H)}= \sup _{\sigma \in \Sigma }\bigg ( \sup _{t\in {\mathbb {R}}} \Vert \sigma (t)\Vert \bigg )\leqslant C_\Sigma . \end{aligned} \end{aligned}$$
(4.34)

Hence, there exists a uniform bound \(C_b>0\) such that

$$\begin{aligned} \begin{aligned} \sup _{\sigma \in \Sigma } \int _{-\infty }^0 e^{\lambda s} \Vert \sigma (s)\Vert ^2\ \mathrm{d}s&\leqslant \sup _{\sigma \in \Sigma } \left( \Vert \sigma \Vert _{{\mathcal {C}}_b({\mathbb {R}};H)}^2 \int ^0_{-\infty } e^{\lambda s} \ \mathrm{d}s\right) \leqslant C_b. \end{aligned} \end{aligned}$$
(4.35)
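In fact, under (4.34) the constant \(C_b\) admits an explicit choice; evaluating the elementary integral in (4.35) gives, for instance,

```latex
\int_{-\infty}^{0} e^{\lambda s}\,\mathrm{d}s=\frac{1}{\lambda},
\qquad\text{so that one may take}\qquad
C_b:=\frac{C_\Sigma^{2}}{\lambda}.
```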

Let us define a random set \({\mathscr {B}}=\{{\mathscr {B}}(\omega )\}_{\omega \in \Omega } \) in H by

$$\begin{aligned}&{\mathscr {B}}(\omega )\!:=\!\left\{ u\in H: \Vert u\Vert ^2\!\leqslant \! \rho (\omega )\!:=\! 1 \!+\! C \int ^0_{-\infty } e^{\lambda s} |z(\vartheta _{s}\omega )|^p \ \mathrm{d}s \!+\frac{C}{\lambda }\!+\! CC_b \right\} ,\ \ \omega \in \Omega ,\nonumber \\ \end{aligned}$$
(4.36)

where \(C>0\) is the constant given in Lemma 4.1. Then, we have the following result.

Proposition 4.5

Let conditions (4.3)–(4.6) hold. Then, the random set \({\mathscr {B}}\) defined by (4.36) is a tempered and closed uniformly \({\mathcal {D}}\)-pullback absorbing set of the NRDS \(\phi \) generated by the reaction–diffusion equation (4.12). In addition, \({\mathscr {B}}\) uniformly absorbs itself after a deterministic period of time, i.e., there exists a deterministic \(T_{{\mathscr {B}}} >0\) such that for any \(t\geqslant T_{{\mathscr {B}}}\)

$$\begin{aligned} \bigcup _{\sigma \in \Sigma } v\big (t,\vartheta _{-t}\omega , \theta _{-t}\sigma , {\mathscr {B}}(\vartheta _{-t}\omega ) \big ) \subseteq {\mathscr {B}}(\omega ), \quad \forall \omega \in \Omega . \end{aligned}$$

Proof

The temperedness of \({\mathscr {B}}\) follows from that of \(\rho (\cdot )\). Now we show the uniformly \({\mathcal {D}}\)-pullback absorbing property. By Lemma 4.1, we know that for any tempered set \(D\in {\mathcal {D}}\) (i.e., there is some tempered random variable \(R (\cdot )\) such that \(\Vert D(\omega )\Vert ^2 \leqslant R(\omega )\)), the solutions with initial data in D satisfy

$$\begin{aligned} \begin{aligned}&\sup _{\sigma \in \Sigma } \big \Vert v \big (t, \vartheta _{-t} \omega , \theta _{-t}\sigma ,D(\vartheta _{-t}\omega )\big ) \big \Vert ^2 \\&\quad \leqslant \sup _{\sigma \in \Sigma } \bigg [ e^{ -\lambda t} R(\vartheta _{-t}\omega ) + C \int ^0_{- t} e^{\lambda s} \Big (|z(\vartheta _{s}\omega )|^p+1+ \Vert \sigma (s)\Vert ^2\Big ) \ \mathrm{d}s\bigg ] \\&\quad \leqslant e^{ -\lambda t} R(\vartheta _{-t}\omega ) + C \int ^0_{-t} e^{\lambda s} |z(\vartheta _{s}\omega )|^p \ \mathrm{d}s +\frac{C}{\lambda }+ CC_b \quad \text {(by }(4.35)) ,\quad \forall t>0. \end{aligned} \end{aligned}$$
(4.37)

In addition, since the random variable \(R(\cdot )\) is tempered (i.e., \(\lim _{t\rightarrow \infty } e^{-\delta t} R(\vartheta _{-t}\omega )=0\) for every \(\delta >0\)), there exists a random variable \(T_D(\cdot ) >0\) such that \(e^{ -\lambda t} R(\vartheta _{-t}\omega ) \leqslant 1\) for all \(t\geqslant T_D(\omega )\). Hence,

$$\begin{aligned} \sup _{\sigma \in \Sigma } \big \Vert v\big (t, \vartheta _{-t} \omega , \theta _{-t}\sigma , D(\vartheta _{-t}\omega )\big )\big \Vert ^2&\leqslant 1 + C \int ^0_{-\infty } e^{\lambda s} |z(\vartheta _{s}\omega )|^p \ \mathrm{d}s +\frac{C}{\lambda }+ CC_b =\rho (\omega ) , \end{aligned}$$
(4.38)

for all \(t\geqslant T_D(\omega )\), so \({\mathscr {B}}\) is a uniformly \({\mathcal {D}}\)-pullback absorbing set.

Now we show that \({\mathscr {B}}\) uniformly absorbs itself after a deterministic period of time \(T_{{\mathscr {B}}}\). By (4.37),

$$\begin{aligned}&\sup _{\sigma \in \Sigma } \left\| v\big (t,\vartheta _{-t}\omega ,\theta _{-t}\sigma , {\mathscr {B}}(\vartheta _{-t}\omega )\big ) \right\| ^2\\&\quad \leqslant e^{-\lambda t} \rho (\vartheta _{-t}\omega )+ C \int ^0_{- t} e^{\lambda s} |z(\vartheta _{s}\omega )|^p \ \mathrm{d}s +\frac{C}{\lambda }+ CC_b \\&\quad = e^{-\lambda t} \left( 1 + C \int ^0_{-\infty } e^{\lambda s} |z(\vartheta _{s-t}\omega )|^p \ \mathrm{d}s +\frac{C}{\lambda }+ CC_b \right) \\&\qquad + C \int ^0_{- t} e^{\lambda s} |z(\vartheta _{s}\omega )|^p \ \mathrm{d}s +\frac{C}{\lambda }+ CC_b \\&\quad = e^{-\lambda t} \left( 1+ \frac{C}{\lambda }+ CC_b \right) +C\int ^{-t}_{-\infty } e^{\lambda s}|z(\vartheta _{s} \omega )|^p\ \mathrm{d}s\\&\qquad + C \int ^0_{- t} e^{\lambda s} |z(\vartheta _{s}\omega )|^p \ \mathrm{d}s +\frac{C}{\lambda }+ CC_b . \end{aligned}$$

Hence, take \(T_{{\mathscr {B}}} >0\) such that

$$\begin{aligned} e^{-\lambda T_{{\mathscr {B}}}} \left( 1+ \frac{C}{\lambda }+ CC_b \right) =1, \quad {\text {i.e.}},\quad T_{{\mathscr {B}}} := \frac{1}{\lambda }\ln \left( 1+ \frac{C}{\lambda }+ CC_b \right) . \end{aligned}$$

Then, for all \(t\geqslant T_{{\mathscr {B}}},\)

$$\begin{aligned} \begin{aligned} \sup _{\sigma \in \Sigma } \left\| v\big (t,\vartheta _{-t}\omega ,\theta _{-t}\sigma , {\mathscr {B}}(\vartheta _{-t}\omega )\big ) \right\| ^2&\leqslant 1 +C\int ^0_{-\infty } e^{\lambda s} |z(\vartheta _{s}\omega )|^p\ \mathrm{d}s +\frac{C}{\lambda }+ CC_b=\rho (\omega ) , \end{aligned} \end{aligned}$$

which by the definition (4.36) indicates that

$$\begin{aligned} \begin{aligned} \bigcup _{\sigma \in \Sigma } v\big (t,\vartheta _{-t}\omega ,\theta _{-t}\sigma , {\mathscr {B}}(\vartheta _{-t}\omega )\big ) \subseteq {\mathscr {B}}(\omega ) ,\quad \forall t\geqslant T_{{\mathscr {B}}}, \end{aligned} \end{aligned}$$

as desired. \(\square \)

4.4 Finite Fractal Dimension of the Random Uniform Attractor in H and in V

4.4.1 \((\Sigma \times H, H)\)-Lipschitz Continuity and \((\Sigma \times H , V)\)-Smoothing

Notice that condition (4.5) is in fact equivalent to the following form, commonly used in the literature, e.g., in Zelati and Kalita (2015) and Zhu and Zhou (2016):

$$\begin{aligned} \begin{aligned} |f(s_1)-f(s_2)| \leqslant c|s_1-s_2| \big (1+|s_1|^{p-2}+|s_2|^{p-2}\big ), \qquad s_1, s_2 \in {\mathbb {R}}. \end{aligned} \end{aligned}$$
(4.39)
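To indicate one direction of this equivalence, suppose (as is standard; the precise form of (4.5) is not restated here) that (4.5) bounds the derivative as \(|f'(s)|\leqslant c\big (1+|s|^{p-2}\big )\). Then the mean value theorem yields (4.39):

```latex
|f(s_1)-f(s_2)|=|f'(\xi )|\,|s_1-s_2|
\leqslant c\big (1+|\xi |^{p-2}\big )|s_1-s_2|
\leqslant c\big (1+|s_1|^{p-2}+|s_2|^{p-2}\big )|s_1-s_2|,
```

where \(\xi \) lies between \(s_1\) and \(s_2\), so that \(|\xi |^{p-2}\leqslant |s_1|^{p-2}+|s_2|^{p-2}\).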

In addition, the following result, obtained via a decomposition of f in Cui et al. (2020), will facilitate our computations later.

Lemma 4.6

(Cui et al. 2020) For any \({\mathcal {C}}^1\)-function f with conditions (4.3), (4.4) and (4.6), there are positive constants \(c_1, c_2 >0\) such that

$$\begin{aligned} -\big ( f(s_1)-f(s_2)\big )(s_1-s_2) |s_1-s_2|^{r} \geqslant c_1 |s_1-s_2|^{p+r} -c_2 |s_1-s_2|^{r+2} \end{aligned}$$

for any \(r\geqslant 0\) and \( s_1, s_2\in {\mathbb {R}}\).

Now we derive the joint Lipschitz continuity of solutions in symbols and initial data. For any two solutions \(v_{j}\) of (4.12) corresponding to initial data \(v_{j,0} \in H\) and symbols \(\sigma _j \in \Sigma \), \(j=1,2\), respectively, with \({\bar{\sigma }}:= \sigma _1-\sigma _2\), their difference \({\bar{v}}(t,\omega ,{\bar{\sigma }},{\bar{v}}_0) := v_1(t,\omega ,\sigma _1,v_{1,0})-v_2(t,\omega ,\sigma _2,v_{2,0})\) satisfies

$$\begin{aligned} \begin{aligned} \frac{ \mathrm{d}{\bar{v}}}{\mathrm{d}t} + \lambda {\bar{v}}-\triangle {\bar{v}} = f\big (v_1+h z(\vartheta _t\omega )\big ) - f\big (v_2+h z(\vartheta _t\omega ) \big ) +{\bar{\sigma }} . \end{aligned} \end{aligned}$$
(4.40)

Lemma 4.7

(\((\Sigma \times H, H)\)-Lipschitz continuity) The NRDS \(\phi \) is Lipschitz continuous from \( \Sigma \times H \) to H with time-dependent Lipschitz constant. More precisely, there exist deterministic constants \(C_1=C_1(\Vert \Sigma \Vert _{{\mathcal {C}}_b({\mathbb {R}};H)}) >0\) and \(\beta =\beta (c_1,c_2,\lambda )>0\) such that for any two solutions \(v_j(t,\omega , \sigma _j, v_{j,0})\) of (4.12) with \(\sigma _j\in \Sigma \) and \(v_{j,0}\in H\), \(j=1,2\), we have

$$\begin{aligned} \Vert v_1(t,\omega ,\sigma _1,v_{1,0}) -v_2(t,\omega ,\sigma _2,v_{2,0}) \Vert ^2 \leqslant C_1e^{\beta t} \Big [\Vert v_{1,0}-v_{2,0}\Vert ^2 +\big ( d_\Xi (\sigma _1,\sigma _2) \big )^2\Big ] \end{aligned}$$

for all \(t>0\), \(\omega \in \Omega \).

Proof

Taking the inner product of (4.40) with \({\bar{v}} \, (={\bar{u}})\) in H, we obtain

$$\begin{aligned} \begin{aligned} \frac{1}{2} \frac{\mathrm{d}}{\mathrm{d}t} \Vert {\bar{v}}\Vert ^2 +\lambda \Vert {\bar{v}}\Vert ^2 +\Vert \nabla {\bar{v}}\Vert ^2&=\Big (f\big (v_1+hz(\vartheta _t\omega )\big )-f\big (v_2+hz(\vartheta _t\omega )\big ),{\bar{u}} \Big ) +({\bar{\sigma }} , {\bar{v}}) \\&\leqslant -c_1 \Vert {\bar{u}}\Vert _p^p+ c_2 \Vert {\bar{u}}\Vert ^2 +\Vert {\bar{\sigma }} \Vert \Vert {\bar{v}}\Vert \quad \text {(by Lemma}~4.6) \\&\leqslant -c_1 \Vert {\bar{v}}\Vert _p^p+ c \Vert {\bar{v}}\Vert ^2 +\Vert {\bar{\sigma }} \Vert ^2, \end{aligned} \end{aligned}$$

so

$$\begin{aligned} \frac{\mathrm{d}}{\mathrm{d}t} \Vert {\bar{v}}\Vert ^2 +\Vert {\bar{v}}\Vert _p^p +\Vert \nabla {\bar{v}}\Vert ^2 \leqslant c\Vert {\bar{v}}\Vert ^2 + \Vert {\bar{\sigma }} \Vert ^2. \end{aligned}$$

By Gronwall’s lemma, we have

$$\begin{aligned}&\Vert {\bar{v}}(t)\Vert ^2 +\int _0^t e^{c(t-s)} \Big (\Vert {\bar{v}}(s)\Vert _p^p+\Vert \nabla {\bar{v}}(s) \Vert ^2\Big ) \mathrm{d}s \leqslant e^{ct} \Vert {\bar{v}}_0 \Vert ^2 +\int _0^t e^{c(t-s)} \Vert {\bar{\sigma }} (s)\Vert ^2 \ \mathrm{d}s .\nonumber \\ \end{aligned}$$
(4.41)
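Here (4.41) is the integral form of the differential inequality above: multiplying by the integrating factor \(e^{-cs}\) gives

```latex
\frac{\mathrm{d}}{\mathrm{d}s}\Big (e^{-cs}\Vert {\bar{v}}(s)\Vert ^2\Big )
+e^{-cs}\Big (\Vert {\bar{v}}(s)\Vert _p^p+\Vert \nabla {\bar{v}}(s)\Vert ^2\Big )
\leqslant e^{-cs}\Vert {\bar{\sigma }}(s)\Vert ^2 ,
```

and integrating over \((0,t)\) and then multiplying the result by \(e^{ct}\) yields (4.41).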

Since, by (3.55),

$$\begin{aligned} \begin{aligned} \int _0^t e^{c(t-s)} \Vert {\bar{\sigma }} (s)\Vert ^2 \ \mathrm{d}s&\leqslant e^{ct} \int _0^t \Vert {\bar{\sigma }} (s)\Vert ^2 \ \mathrm{d}s\\&\leqslant c e^{(c+\ln 4) t} \big ( d_\Xi (\sigma _1,\sigma _2) \big )^2 , \end{aligned} \end{aligned}$$

the proof is complete. \(\square \)

Lemma 4.8

(\(( \Sigma \times H ,V )\)-smoothing) Let \({\mathscr {B}}\) be the tempered uniformly \({\mathcal {D}}\)-pullback absorbing set defined by (4.36). Then for all \(\omega \in \Omega \) the difference of two solutions of (4.12) with initial data \(v_{j,0}\in {\mathscr {B}}(\omega )\), \(j=1,2\), satisfies

$$\begin{aligned} \begin{aligned}&\Vert \nabla v_1(t,\omega ,\sigma _1, v_{1,0})-\nabla v_2(t,\omega ,\sigma _2, v_{2,0})\Vert ^2 \\&\quad \leqslant ce^{ c \rho (\omega )+c e^{\lambda t} \int ^t_0 |z(\vartheta _s\omega )|^{2p-2}\mathrm{d}s + ce^{c\lambda t} } \Big ( \Vert v_{1,0}-v_{2,0}\Vert ^2 + \big (d_\Xi (\sigma _1,\sigma _2)\big )^2 \Big ) ,\quad \forall t\geqslant 2, \end{aligned} \end{aligned}$$

where \(\rho (\cdot )\) is the random variable given in (4.36).

Proof

Taking the inner product of (4.40) with \(-\Delta {\bar{v}}\) in H, by (4.39) we obtain

$$\begin{aligned} \begin{aligned}&\frac{1}{2} \frac{\mathrm{d}}{\mathrm{d}t} \Vert \nabla {\bar{v}}\Vert ^2 +\lambda \Vert \nabla {\bar{v}}\Vert ^2 +\Vert \Delta {\bar{v}}\Vert ^2 \\&\quad = \Big ( f(v_1+h z(\vartheta _t\omega ) ) - f(v_2+h z(\vartheta _t\omega ) ) , -\Delta {\bar{v}}\Big ) +({\bar{\sigma }}, -\Delta {\bar{v}}) \\&\quad \leqslant c \int \! \Big ( |u_1|^{p-2}+ |u_2|^{p-2} + 1\Big ) |{\bar{v}}| |\Delta {\bar{v}}| \ \mathrm{d}x +({\bar{\sigma }}, -\Delta {\bar{v}})\\&\quad \leqslant \Vert \Delta {\bar{v}}\Vert ^2+c\int \big ( |u_1|^{2p-4}+ |u_2|^{2p-4}\big )|{\bar{v}}|^2\ \mathrm{d}x +c\Vert {\bar{v}}\Vert ^2 +c\Vert {\bar{\sigma }}\Vert ^2 \\&\quad \leqslant \Vert \Delta {\bar{v}}\Vert ^2+c \big ( \Vert u_1\Vert ^{2p-4}_{2p-2}+ \Vert u_2\Vert ^{2p-4}_{2p-2}\big )\Vert {\bar{v}}\Vert _{2p-2}^2 +c\Vert {\bar{v}}\Vert ^2 +c\Vert {\bar{\sigma }}\Vert ^2 . \end{aligned} \end{aligned}$$

Hence,

$$\begin{aligned} \begin{aligned} \frac{\mathrm{d}}{\mathrm{d}t} \Vert \nabla {\bar{v}}\Vert ^2&\leqslant c \big ( \Vert u_1\Vert ^{2p-4}_{2p-2}+ \Vert u_2\Vert ^{2p-4}_{2p-2}\big )\Vert {\bar{v}}\Vert _{2p-2}^2 +c\Vert {\bar{v}}\Vert ^2 +c\Vert {\bar{\sigma }}\Vert ^2 \\&\leqslant c \Big ( \Vert u_1\Vert ^{2p-2}_{2p-2}+ \Vert u_2\Vert ^{2p-2}_{2p-2}+1\Big )\Vert {\bar{v}}\Vert _{2p-2}^2 +c\Vert \nabla {\bar{v}}\Vert ^2 +c\Vert {\bar{\sigma }}\Vert ^2 , \end{aligned} \end{aligned}$$
(4.42)

and then, by the continuous embedding \(V\hookrightarrow L^{2p-2}\) in (4.8),

$$\begin{aligned} \begin{aligned} \frac{\mathrm{d}}{\mathrm{d}t} \Vert \nabla {\bar{v}}\Vert ^2 \leqslant c \Big ( \Vert u_1\Vert ^{2p-2}_{2p-2}+ \Vert u_2\Vert ^{2p-2}_{2p-2} +1\Big )\Vert \nabla {\bar{v}}\Vert ^2 +c\Vert {\bar{\sigma }}\Vert ^2 . \end{aligned} \end{aligned}$$

By Gronwall’s lemma, we have

$$\begin{aligned} \begin{aligned} \Vert \nabla {\bar{v}}(t)\Vert ^2&\leqslant e^{ \int _s^t c\big ( \Vert u_1(\eta )\Vert ^{2p-2}_{2p-2}+ \Vert u_2(\eta )\Vert ^{2p-2}_{2p-2} +1 \big ) \mathrm{d}\eta } \Vert \nabla {\bar{v}}(s)\Vert ^2 \\&\quad + \int _s^te^{ \int _\tau ^t c\big ( \Vert u_1(\eta )\Vert ^{2p-2}_{2p-2}+ \Vert u_2(\eta )\Vert ^{2p-2}_{2p-2} +1 \big ) \mathrm{d}\eta } \Vert {\bar{\sigma }}(\tau )\Vert ^2\ \mathrm{d}\tau ,\quad \forall t\geqslant s>1, \end{aligned} \end{aligned}$$
(4.43)

and integrating over \(s\in (1,2)\) we obtain

$$\begin{aligned} \begin{aligned} \Vert \nabla {\bar{v}}(t)\Vert ^2&\leqslant e^{ \int _1^t c\big ( \Vert u_1(\eta )\Vert ^{2p-2}_{2p-2}+ \Vert u_2(\eta )\Vert ^{2p-2}_{2p-2} +1 \big ) \mathrm{d}\eta } \int _1^2 \Vert \nabla {\bar{v}}(s)\Vert ^2\ \mathrm{d}s \\&\quad + e^{ \int _1^t c\big ( \Vert u_1(\eta )\Vert ^{2p-2}_{2p-2}+ \Vert u_2(\eta )\Vert ^{2p-2}_{2p-2} +1 \big ) \mathrm{d}\eta } \int _1^t \Vert {\bar{\sigma }}(\tau )\Vert ^2\ \mathrm{d}\tau ,\quad \forall t\geqslant 2 . \end{aligned} \end{aligned}$$
(4.44)

Since

$$\begin{aligned} \begin{aligned} \int _1^2 \Vert \nabla {\bar{v}}(s)\Vert ^2\ \mathrm{d}s&\leqslant \int _1^2 e^{c(2-s)} \Vert \nabla {\bar{v}}(s)\Vert ^2\ \mathrm{d}s \\&\leqslant c\Vert {\bar{v}}(0)\Vert ^2 +\int _0^2 e^{c(2-s)} \Vert {\bar{\sigma }} (s)\Vert ^2\ \mathrm{d}s \quad \text {(by }(4.41)) \\&\leqslant c\Vert {\bar{v}}(0)\Vert ^2 +c\int _0^2 \Vert {\bar{\sigma }} (s)\Vert ^2\ \mathrm{d}s, \end{aligned} \end{aligned}$$
(4.45)

we have

$$\begin{aligned}&\Vert \nabla {\bar{v}}(t)\Vert ^2 \leqslant ce^{ \int _1^t c\big ( \Vert u_1(\eta )\Vert ^{2p-2}_{2p-2}+ \Vert u_2(\eta )\Vert ^{2p-2}_{2p-2} +1 \big ) \mathrm{d}\eta } \bigg ( \Vert {\bar{v}}(0)\Vert ^2 + \int _0^t \Vert {\bar{\sigma }} (s)\Vert ^2\ \mathrm{d}s \bigg ),\quad t\geqslant 2.\nonumber \\ \end{aligned}$$
(4.46)

Notice that for all \(t\geqslant 2 \)

$$ \begin{aligned} \begin{aligned}&\int _1^t c\Big ( \Vert u_1(\eta , \omega , \sigma _1,u_{1,0})\Vert ^{2p-2}_{2p-2}+ \Vert u_2(\eta , \omega , \sigma _2 ,u_{2,0})\Vert ^{2p-2}_{2p-2} +1\Big )\ \mathrm{d}\eta \\&\quad \leqslant \int _1^t c\Big ( \Vert v_1(\eta , \omega , \sigma _1,v_{1,0})\Vert ^{2p-2}_{2p-2}+ \Vert v_2(\eta ,\omega , \sigma _2,v_{2,0})\Vert ^{2p-2}_{2p-2}+|z(\vartheta _{\eta }\omega )|^{2p-2} +1\Big )\ \mathrm{d}\eta \\&\quad \leqslant c \Vert {\mathscr {B}} (\omega )\Vert ^2+c \int ^t_0 e^{\lambda s} \Big ( |z(\vartheta _s\omega )|^{2p-2} +1+ C_\Sigma \Big ) \mathrm{d}s \ \ \ \quad \text {(by }(4.33) \, \& \, (4.34)) \\&\quad \leqslant c \rho (\omega )+c e^{\lambda t} \int ^t_0 |z(\vartheta _s\omega )|^{2p-2}\ \mathrm{d}s + ce^{\lambda t}, \end{aligned} \end{aligned}$$

and by (3.55) we have

$$\begin{aligned} \begin{aligned} c\int _0^t \Vert {\bar{\sigma }} (s)\Vert ^2 \ \mathrm{d}s \leqslant c e^{(\ln 4) t} \big ( d_\Xi (\sigma _1,\sigma _2) \big )^2 , \end{aligned} \end{aligned}$$

so, for all \(t\geqslant 2\),

$$\begin{aligned} \begin{aligned} \Vert \nabla {\bar{v}}(t)\Vert ^2&\leqslant ce^{ c \rho (\omega )+c e^{\lambda t} \int ^t_0 |z(\vartheta _s\omega )|^{2p-2}\mathrm{d}s + ce^{c \lambda t} } \Big ( \Vert {\bar{v}}(0)\Vert ^2 +\big ( d_\Xi (\sigma _1,\sigma _2) \big )^2 \Big ) . \end{aligned} \end{aligned}$$

The proof is complete. \(\square \)

4.4.2 Squeezing

Now we prove the squeezing property on the uniformly absorbing set \({\mathscr {B}}\) defined by (4.36), i.e.,

$$\begin{aligned} {\mathscr {B}} (\omega )=\Big \{u\in {H}:\Vert u\Vert ^2\leqslant \rho (\omega ) \Big \}, \end{aligned}$$

with \(\rho (\omega )\) a tempered random variable given by

$$\begin{aligned} \begin{aligned} \rho (\omega ):= C \int ^0_{-\infty } e^{\lambda s} |z(\vartheta _{s}\omega )|^p \ \mathrm{d}s +\frac{C}{\lambda }+ CC_b +1, \quad \forall \omega \in \Omega , \end{aligned} \end{aligned}$$
(4.47)

for some constant \(C >0\). Before establishing the desired squeezing property, we first prove a useful lemma about the random variable \(\rho (\omega )\).

Lemma 4.9

For the random variable \(\rho (\omega )\) defined above, let

$$\begin{aligned} \begin{aligned} {\hat{\rho }} (\omega ) := \max _{t\in [-1,0]} \rho (\vartheta _t \omega ) , \quad \forall \omega \in \Omega . \end{aligned} \end{aligned}$$
(4.48)

Then, for all \(\omega \in \Omega \),

$$\begin{aligned} {\hat{\rho }} (\omega )&\leqslant e^\lambda \rho (\omega ), \end{aligned}$$
(4.49)
$$\begin{aligned} \big [ \rho (\omega )\big ]^\gamma&\leqslant \int _0^1 \big [{\hat{\rho }}(\vartheta _s\omega )\big ]^\gamma \ \mathrm{d}s,\quad \forall \gamma \geqslant 1. \end{aligned}$$
(4.50)

Proof

By definition,

$$\begin{aligned} \begin{aligned} {\hat{\rho }} (\omega )&= \max _{t\in [-1,0]} C \int ^0_{-\infty } e^{\lambda s} |z(\vartheta _{s+t}\omega )|^p \ \mathrm{d}s +\frac{C}{\lambda }+ CC_b +1\\&=\max _{t\in [-1,0]} Ce^{-\lambda t} \int ^0_{-\infty } e^{\lambda (s+t)} |z(\vartheta _{s+t}\omega )|^p \ \mathrm{d}s +\frac{C}{\lambda }+ CC_b +1\\&=\max _{t\in [-1,0]} Ce^{-\lambda t} \int ^t_{-\infty } e^{\lambda s} |z(\vartheta _{s }\omega )|^p \ \mathrm{d}s +\frac{C}{\lambda }+ CC_b +1\\&\leqslant Ce^{\lambda } \int ^0_{-\infty } e^{\lambda s} |z(\vartheta _{s }\omega )|^p \ \mathrm{d}s +e^\lambda \bigg (\frac{C}{\lambda }+ CC_b +1\bigg ) =e^\lambda \rho (\omega ), \end{aligned} \end{aligned}$$

so (4.49) follows. Moreover, for any \(\gamma \geqslant 1\), by Jensen's inequality (as \(x\mapsto x^\gamma \) is convex) and the bound \({\hat{\rho }}(\vartheta _s\omega )\geqslant \rho (\omega )\) for \(s\in [0,1]\), we have

$$\begin{aligned} \begin{aligned} \int _0^1\big [ {\hat{\rho }}(\vartheta _s\omega )\big ]^\gamma \ \mathrm{d}s&\geqslant \left[ \int _0^1 {\hat{\rho }}(\vartheta _s\omega )\ \mathrm{d}s\right] ^\gamma \\&\geqslant \left[ \int _0^1 \rho (\omega )\ \mathrm{d}s \right] ^\gamma = \big [\rho (\omega )\big ]^\gamma , \end{aligned} \end{aligned}$$

(4.50) is proved. \(\square \)

Now we prove the squeezing property needed for condition (S). To this end, notice that for any two solutions \(v_{j}\) of (4.12) corresponding to initial data \(v_{j,0}\in {\mathscr {B}}(\omega )\), their difference \( y(t,\omega , \sigma ,{\bar{v}}_0) := v_1(t,\omega ,\sigma ,v_{1,0})-v_2(t,\omega ,\sigma ,v_{2,0})\) satisfies

$$\begin{aligned} \begin{aligned} \frac{ \mathrm{d}y}{\mathrm{d}t} + \lambda y-\Delta y = f\big (v_1+h z(\vartheta _t\omega )\big ) - f\big (v_2+h z(\vartheta _t\omega ) \big ) . \end{aligned} \end{aligned}$$
(4.51)

In addition, since \(A:=-\Delta \) is a self-adjoint positive operator on \(D(A)=H^2({\mathcal {O}})\cap H_0^1({\mathcal {O}})\) with compact inverse, there exists a sequence \(\{\lambda _j\}_{j=1}^\infty \) of eigenvalues satisfying

$$\begin{aligned} 0<\lambda _1\leqslant \lambda _2\leqslant \cdots \rightarrow \infty , \end{aligned}$$

and a corresponding sequence of eigenvectors \(\{e_j\}_{j=1}^\infty \) that forms an orthonormal basis of H and satisfies

$$\begin{aligned} -\Delta e_j=\lambda _je_j,\qquad j=1,2,\ldots . \end{aligned}$$

For \(n\in {\mathbb {N}}\), set

$$\begin{aligned} H_n := \mathrm{span}\{e_1,e_2,\ldots , e_n\}, \end{aligned}$$

and let \(P_n: H\rightarrow H_n\) and \(Q_n:=I-P_n\) be the orthogonal projectors on H. Then,

$$\begin{aligned} \begin{aligned} \lambda _{n+1} \Vert Q_nv\Vert ^2\leqslant \Vert \nabla v\Vert ^2,\quad \forall v\in V,\, n\in {\mathbb {N}}. \end{aligned} \end{aligned}$$
(4.52)
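Inequality (4.52) is the standard spectral estimate: writing \(v=\sum _{j=1}^\infty a_j e_j\) with \(a_j=(v,e_j)\), one has

```latex
\Vert \nabla v\Vert ^2=\sum _{j=1}^{\infty }\lambda _j a_j^2
\geqslant \lambda _{n+1}\sum _{j=n+1}^{\infty } a_j^2
=\lambda _{n+1}\Vert Q_n v\Vert ^2 .
```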

Proposition 4.10

(Squeezing property) There exist \(m\in {\mathbb {N}}\), \(\delta \in (0,1/4)\) and a tempered random variable \(C(\omega )\) with \({\mathbb {E}}\big (C(\omega ) \big ) <-\ln (4\delta )\) such that for any two solutions \(v_1\) and \(v_2\) of (4.12) corresponding to initial data \(v_{1,0},v_{2,0}\in {\mathscr {B}}(\omega )\), respectively, we have

$$\begin{aligned} \sup _{\sigma \in \Sigma } \big \Vert P_m \big ( v_1(T_{{\mathscr {B}}},\omega ,\sigma ,v_{1,0})-v_2(T_{{\mathscr {B}}},\omega ,\sigma ,v_{2,0}) \big ) \big \Vert&\leqslant e^{\int ^{T_{{\mathscr {B}}}}_0 C(\vartheta _s\omega ) \mathrm{d}s} \Vert v_{1,0}-v_{2,0}\Vert , \end{aligned}$$
(4.53)
$$\begin{aligned} \sup _{\sigma \in \Sigma } \big \Vert Q_m \big ( v_1(T_{{\mathscr {B}}},\omega ,\sigma ,v_{1,0})-v_2(T_{{\mathscr {B}}},\omega ,\sigma ,v_{2,0}) \big ) \big \Vert&\leqslant \delta e^{\int ^{T_{{\mathscr {B}}}}_0 C(\vartheta _s\omega ) \mathrm{d}s} \Vert v_{1,0}-v_{2,0}\Vert , \end{aligned}$$
(4.54)

where \(T_{{\mathscr {B}}} >0\) is the deterministic absorption time of \({\mathscr {B}}\) in Proposition 4.5.

Proof

Without loss of generality we let \(T_{{\mathscr {B}}}=1\). Take an arbitrary \(\sigma \in \Sigma \) (the proof below is independent of this choice). Then for two initial data \(v_{j,0}\in {\mathscr {B}}(\omega )\), by Lemma 4.7 the difference \( y(t,\omega , \sigma ,{\bar{v}}_0) = v_1(t,\omega ,\sigma ,v_{1,0})-v_2(t,\omega ,\sigma ,v_{2,0})\) of solutions satisfies

$$\begin{aligned} \begin{aligned} \Vert y(t,\omega , \sigma ,{\bar{v}}_0)\Vert ^2 \leqslant C_1e^{\beta t}\Vert {\bar{v}}_0\Vert ^2 \quad (\text {where }{\bar{v}}_0:=v_{1,0}-v_{2,0}=y(0)), \end{aligned} \end{aligned}$$
(4.55)

for positive constants \(C_1, \beta >0\) and all \(t >0\), and, particularly for \(t=1\),

$$\begin{aligned} \begin{aligned} \Vert y(1,\omega , \sigma ,{\bar{v}}_0)\Vert ^2 \leqslant C_1e^{\beta }\Vert {\bar{v}}_0\Vert ^2 =e^{\ln C_1+\beta } \Vert {\bar{v}}_0\Vert ^2. \end{aligned} \end{aligned}$$
(4.56)

In addition, for any initial value \(v(0)\in {\mathscr {B}}(\omega ) \), by Lemma 4.2 the solution v(t) of (4.12) for \(t>1/4\) is bounded by

$$\begin{aligned} \begin{aligned}&\sup _{\sigma \in \Sigma } \Vert \nabla v (t, \omega , \sigma , v(0) ) \Vert ^2\\&\quad \leqslant \sup _{\sigma \in \Sigma } \bigg [ ce^{ -\lambda t} \Vert v(0)\Vert ^2 +c \int ^0_{- t} e^{\lambda s} \Big (|z(\vartheta _{s+t}\omega )|^p+1+ \Vert \theta _t\sigma (s)\Vert ^2\Big ) \, \mathrm{d}s\bigg ] \\&\quad \leqslant c \Vert v(0)\Vert ^2 + c \int ^0_{-t} e^{\lambda s} |z(\vartheta _{s+t}\omega )|^p \ \mathrm{d}s +\frac{c}{\lambda }+ cC_b \quad \text {(by }(4.35)) \\&\quad \leqslant c \Vert v(0)\Vert ^2+c \rho (\vartheta _t\omega ) ,\qquad \forall t>\frac{1}{4}, \end{aligned} \end{aligned}$$

where c is an absolute constant and \(\rho (\cdot )\) is the tempered random variable given by (4.47), which also gives the squared radius of the absorbing set \({\mathscr {B}}\) in H. Analogously, since the solution u of (4.1) has the form \(u(t)=v(t)+hz(\vartheta _t\omega )\) with v the solution of (4.12), we have

$$\begin{aligned} \begin{aligned} \sup _{\sigma \in \Sigma } \Vert \nabla u(t, \omega , \sigma , u(0))\Vert ^2&\leqslant 2\sup _{\sigma \in \Sigma } \Vert \nabla v(t, \omega , \sigma , v(0))\Vert ^2 +c |z(\vartheta _t\omega )|^2 \\&\leqslant c \Vert v(0)\Vert ^2 + c\rho (\vartheta _t\omega ) +c |z(\vartheta _t\omega )|^2 \\&\leqslant c \rho (\omega ) +c {\tilde{\rho }}(\vartheta _t\omega ) ,\quad \forall t>\frac{1}{4}, \end{aligned} \end{aligned}$$
(4.57)

where

$$\begin{aligned} \begin{aligned} {\tilde{\rho }}( \omega ):= \rho ( \omega ) + |z( \omega )|^2 ,\quad \forall \omega \in \Omega . \end{aligned} \end{aligned}$$
(4.58)

Taking the inner product of (4.51) with \(y_n:=Q_n y\) in H, we obtain

$$\begin{aligned} \begin{aligned} \frac{1}{2} \frac{\mathrm{d}}{\mathrm{d}t} \Vert y_n\Vert ^2+\lambda \Vert y_n\Vert ^2 +\Vert \nabla y_n\Vert ^2 =\Big (f\big (v_1+hz(\vartheta _t\omega )\big )- f\big (v_2+h z(\vartheta _t\omega )\big ), y_n \Big ) . \end{aligned} \end{aligned}$$

Since, by Hölder's and Young's inequalities, we have

$$\begin{aligned} \begin{aligned} \int abc\ \mathrm{d}x&\leqslant \left( \int |a|^2\ \mathrm{d}x\right) ^{\frac{1}{2}} \left( \int |b|^{\frac{2p-2}{p-2}}\ \mathrm{d}x\right) ^{\frac{p-2}{2p-2}}\left( \int |c|^{2p-2}\ \mathrm{d}x\right) ^{\frac{1}{2p-2}} \\&\leqslant \varepsilon \left( \int |c|^{2p-2}\ \mathrm{d}x\right) ^{\frac{2}{2p-2}} + c_\varepsilon \left( \int |a|^2\ \mathrm{d}x\right) \left( \int |b|^{\frac{2p-2}{p-2}}\ \mathrm{d}x\right) ^{\frac{p-2}{p-1}} , \quad \forall \varepsilon >0, \end{aligned} \end{aligned}$$
(4.59)

where \(c_\varepsilon >0\) is a constant depending on \(\varepsilon \) (the exponents above are Hölder-conjugate: \(\frac{1}{2}+\frac{p-2}{2p-2}+\frac{1}{2p-2}=1\)), the nonlinear term can be estimated as

$$\begin{aligned} \begin{aligned}&\Big (f\big (v_1+hz(\vartheta _t\omega )\big )- f\big (v_2+h z(\vartheta _t\omega )\big ), y_n \Big )\\&\quad \leqslant c \int \! \Big ( |u_1|^{p-2}+ |u_2|^{p-2} + 1\Big ) | y| |y_n| \ \mathrm{d}x\quad \text { (by }(4.39)) \\&\quad \leqslant \frac{1}{2} \Vert y_n\Vert _{2p-2}^{2} + c \Big ( \Vert u_1\Vert _{2p-2}^{2p-4}+ \Vert u_2\Vert _{2p-2}^{2p-4} + 1\Big ) \Vert y\Vert ^2 \quad \text {(by }(4.59)) , \end{aligned} \end{aligned}$$

and finally by the continuous embedding (4.8) of \(V\hookrightarrow L^{2p-2}\) we conclude that

$$\begin{aligned} \begin{aligned}&\Big (f(v_1+hz(\vartheta _t\omega ))- f(v_2+h z(\vartheta _t\omega )), y_n \Big ) \\&\quad \leqslant \frac{1}{2} \Vert \nabla y_n\Vert ^2 + c \Big ( \Vert \nabla u_1\Vert ^{2p-4}+ \Vert \nabla u_2\Vert ^{2p-4} + 1\Big ) \Vert y\Vert ^2 . \end{aligned} \end{aligned}$$

Hence, for \(t>1/4\),

$$\begin{aligned} \begin{aligned} \frac{\mathrm{d}}{\mathrm{d}t} \Vert y_n\Vert ^2 +\Vert \nabla y_n\Vert ^2&\leqslant c \Big ( \Vert \nabla u_1\Vert ^{2p-4}+ \Vert \nabla u_2\Vert ^{2p-4} + 1\Big ) \Vert y\Vert ^2 \\&\leqslant c\Big ( \big [\rho (\omega )\big ]^{p-2} +\big [ {\tilde{\rho }}(\vartheta _t\omega )\big ]^{p-2} +1 \Big ) \Vert y\Vert ^2 \quad \text {(by }(4.57))\\&\leqslant c\Big ( \big [\rho (\omega )\big ]^{p-2} +\big [ {\tilde{\rho }}(\vartheta _t\omega )\big ]^{p-2} \Big ) \Vert y\Vert ^2 \quad \text {(since }\rho (\omega )\geqslant 1), \end{aligned} \end{aligned}$$

and then, by (4.52),

$$\begin{aligned} \begin{aligned} \frac{\mathrm{d}}{\mathrm{d}t} \Vert y_n\Vert ^2 + \lambda _{n+1}\Vert y_n\Vert ^2 \leqslant c\Big ( \big [\rho (\omega )\big ]^{p-2} +\big [ {\tilde{\rho }}(\vartheta _t\omega )\big ]^{p-2} \Big ) \Vert y\Vert ^2,\quad t>1/4. \end{aligned} \end{aligned}$$

For \(r\in (1/4,t)\), by Gronwall’s lemma we have

$$\begin{aligned} \begin{aligned} \Vert y_n(t)\Vert ^2 \leqslant&e^{-\lambda _{n+1}(t-r)} \Vert y_n(r)\Vert ^2 + \int _r^t ce^{\lambda _{n+1} (s-t)} \Big ( \big [\rho (\omega )\big ]^{p-2}\\&+\big [ {\tilde{\rho }}(\vartheta _s\omega )\big ]^{p-2} \Big )\Vert y(s)\Vert ^2\ \mathrm{d}s, \end{aligned} \end{aligned}$$
(4.60)
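For clarity, (4.60) results from multiplying the preceding differential inequality by the integrating factor \(e^{\lambda _{n+1} s}\); the spelled-out intermediate step is

$$\begin{aligned} \frac{\mathrm{d}}{\mathrm{d}s}\Big ( e^{\lambda _{n+1} s}\, \Vert y_n(s)\Vert ^2 \Big ) \leqslant c\, e^{\lambda _{n+1} s} \Big ( \big [\rho (\omega )\big ]^{p-2} +\big [ {\tilde{\rho }}(\vartheta _s\omega )\big ]^{p-2} \Big ) \Vert y(s)\Vert ^2 , \end{aligned}$$

and integrating over \(s\in (r,t)\) and then multiplying by \(e^{-\lambda _{n+1} t}\) gives (4.60).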

and then, for \(t\geqslant 1/2\), integrating (4.60) over \(r\in (1/4,1/2)\) yields

$$\begin{aligned} \begin{aligned} \Vert y_n(t)\Vert ^2 \leqslant&4\int _{\frac{1}{4}}^{\frac{1}{2}} e^{-\lambda _{n+1}(t-r)} \Vert y_n(r)\Vert ^2 \ \mathrm{d}r \\&+ \int _0^t ce^{\lambda _{n+1} (s-t)} \Big ( \big [\rho (\omega )\big ]^{p-2}+\big [ {\tilde{\rho }}(\vartheta _s\omega )\big ]^{p-2} \Big )\Vert y(s)\Vert ^2\ \mathrm{d}s. \end{aligned} \end{aligned}$$

Since

$$\begin{aligned} \begin{aligned} \int ^{\frac{1}{2}}_{\frac{1}{4}} e^{-\lambda _{n+1} (t-r)} \Vert y_n(r)\Vert ^2\ \mathrm{d}r&\leqslant e^{-\lambda _{n+1} t} \int ^{\frac{1}{2}}_{\frac{1}{4}} e^{\lambda _{n+1} r} \Vert y(r)\Vert ^2 \ \mathrm{d}r \\&\leqslant ce^{-\lambda _{n+1} t} \int ^{\frac{1}{2}}_{\frac{1}{4}} e^{\lambda _{n+1} r} \Big (e^{\beta r} \Vert {\bar{v}}_0\Vert ^2\Big ) \mathrm{d}r \quad \text {(by }(4.55)) \\&\leqslant ce^{-\lambda _{n+1} t} \cdot \frac{e^{\frac{1}{2}(\lambda _{n+1}+\beta )}}{\lambda _{n+1}+\beta }\Vert {\bar{v}}_0\Vert ^2\\&\leqslant \frac{c}{\lambda _{n+1}} \Vert {\bar{v}}_0\Vert ^2,\quad t\geqslant \frac{1}{2}, \end{aligned} \end{aligned}$$
(4.61)

we have

$$\begin{aligned} \Vert y_n(t)\Vert ^2&\leqslant \frac{c}{\lambda _{n+1}} \Vert {\bar{v}}_0\Vert ^2 + \int _0^t ce^{\lambda _{n+1} (s-t)} \Big ( \big [\rho (\omega )\big ]^{p-2}+\big [ {\tilde{\rho }}(\vartheta _s\omega )\big ]^{p-2} \Big )\Vert y(s)\Vert ^2\ \mathrm{d}s \nonumber \\&\leqslant \frac{c}{\lambda _{n+1}} \Vert {\bar{v}}_0\Vert ^2 + \int _0^t ce^{\lambda _{n+1} (s-t)} \Big ( \big [\rho (\omega )\big ]^{p-2}+\big [ {\tilde{\rho }}(\vartheta _s\omega )\big ]^{p-2} \Big ) e^{\beta s}\Vert {\bar{v}}_0\Vert ^2 \ \mathrm{d}s \quad \text {(by }(4.55)) \nonumber \\&\leqslant \frac{c}{\lambda _{n+1}} \Vert {\bar{v}}_0\Vert ^2 + ce^{\beta t}\Vert {\bar{v}}_0\Vert ^2 \int _0^t e^{\lambda _{n+1} (s-t)} \Big ( \big [\rho (\omega )\big ]^{p-2}+\big [ {\tilde{\rho }}(\vartheta _s\omega )\big ]^{p-2} \Big )\ \mathrm{d}s . \end{aligned}$$
(4.62)

Since, for the last term, we have

$$\begin{aligned} \begin{aligned}&\int _0^t e^{\lambda _{n+1}(s-t)} \Big ( \big [\rho (\omega )\big ]^{p-2} + \big [ {\tilde{\rho }}(\vartheta _s\omega )\big ]^{p-2}\Big ) \ \mathrm{d}s \\&\qquad \leqslant \bigg [\int _0^t e^{2\lambda _{n+1}(s-t)} \ \mathrm{d}s \bigg ]^{\frac{1}{2}} \bigg [\int _0^t \Big ( \big [\rho (\omega )\big ]^{p-2} + \big [ {\tilde{\rho }}(\vartheta _s\omega )\big ]^{p-2}\Big ) ^2 \ \mathrm{d}s \bigg ]^{\frac{1}{2}} \\&\qquad \leqslant \frac{1}{\sqrt{ 2\lambda _{n+1}}} \bigg [\int _0^t 2 \Big ( \big [\rho (\omega )\big ]^{2p-4} + \big [ {\tilde{\rho }}(\vartheta _s\omega )\big ]^{2p-4}\Big ) \ \mathrm{d}s \bigg ]^{\frac{1}{2}} \\&\qquad = \frac{1}{\sqrt{ \lambda _{n+1}}} \bigg [\int _0^t \Big ( \big [\rho (\omega )\big ]^{2p-4} + \big [ {\tilde{\rho }}(\vartheta _s\omega )\big ]^{2p-4}\Big ) \ \mathrm{d}s \bigg ]^{\frac{1}{2}} \\&\qquad \leqslant \frac{1}{\sqrt{ \lambda _{n+1}}} e^{ \int _0^t ( [\rho (\omega )]^{2p-4} + [ {\tilde{\rho }}(\vartheta _s\omega ) ]^{2p-4} ) \mathrm{d}s } \quad \text {(since } s^{\frac{1}{2}}\leqslant e^s\text { for }s>0) , \end{aligned} \end{aligned}$$

taking in particular \(t=1\) in (4.62), we obtain

$$\begin{aligned} \begin{aligned} \Vert y_n(1)\Vert ^2&\leqslant \frac{c}{\lambda _{n+1}} \Vert {\bar{v}}_0\Vert ^2 + c \Vert {\bar{v}}_0\Vert ^2 \left( \frac{1}{\sqrt{ \lambda _{n+1}}} e^{ \int _0^1 ( [\rho (\omega )]^{2p-4} + [ {\tilde{\rho }}(\vartheta _s\omega ) ]^{2p-4} ) \mathrm{d}s } \right) \\&= \frac{c}{\lambda _{n+1}} \Vert {\bar{v}}_0\Vert ^2 + \frac{c \Vert {\bar{v}}_0\Vert ^2}{\sqrt{ \lambda _{n+1}}} e^{ [\rho (\omega ) ]^{2p-4}+ \int _0^1 [ {\tilde{\rho }}(\vartheta _s\omega ) ]^{2p-4} \mathrm{d}s } \\&\leqslant \frac{c}{\lambda _{n+1}} \Vert {\bar{v}}_0\Vert ^2 + \frac{c \Vert {\bar{v}}_0\Vert ^2}{\sqrt{ \lambda _{n+1}}} e^{ [\rho (\omega ) ]^{2p-2}+ \int _0^1 [ {\tilde{\rho }}(\vartheta _s\omega ) ]^{2p-2} \mathrm{d}s } \\&\leqslant \frac{c}{\lambda _{n+1}} \Vert {\bar{v}}_0\Vert ^2 + \frac{c \Vert {\bar{v}}_0\Vert ^2}{\sqrt{ \lambda _{n+1}}} e^{ \int _0^1 [{\hat{\rho }}(\vartheta _s\omega ) ]^{2p-2}\mathrm{d}s + \int _0^1 [ {\tilde{\rho }}(\vartheta _s\omega ) ]^{2p-2} \mathrm{d}s } \quad \text {(by }(4.50)) \\&\leqslant \! \frac{c}{\lambda _{n+1}} \Vert {\bar{v}}_0\Vert ^2\! +\! \frac{c \Vert {\bar{v}}_0\Vert ^2}{\sqrt{ \lambda _{n+1}}} e^{ \int _0^1 ( e^{\lambda (2p-2)} [ \rho (\vartheta _s\omega ) ]^{2p-2}\!+\! [ {\tilde{\rho }}(\vartheta _s\omega ) ]^{2p-2} ) \mathrm{d}s }\quad \text {(by }(4.49)) . \end{aligned} \end{aligned}$$
(4.63)

By the definition of \({\tilde{\rho }}\) in (4.58), we have

$$\begin{aligned} \begin{aligned} e^{\lambda (2p-2)} [\rho ( \omega ) ]^{2p-2}+ [ {\tilde{\rho }}(\omega ) ]^{2p-2}&= e^{\lambda (2p-2)} [\rho ( \omega ) ]^{2p-2}+ \big [\rho (\omega ) +|z(\omega )|^2\big ]^{2p-2} \\&\leqslant \Big ( e^{\lambda (2p-2)} \!+\! 1\Big ) \big [\rho (\omega ) \!+\!|z(\omega )|^2\big ]^{2p-2} ,\, \omega \in \Omega , \end{aligned} \end{aligned}$$

and for later use set \(k:= e^{\lambda (2p-2)} +1+(\ln C_1+\beta )\) and

$$\begin{aligned} \begin{aligned} C(\omega ):= k \big [\rho (\omega ) +|z(\omega )|^2\big ]^{2p-2} ,\quad \forall \omega \in \Omega . \end{aligned} \end{aligned}$$
(4.64)

Then, \(C(\cdot )\) is a tempered random variable with finite expectation, and, by (4.63),

$$\begin{aligned} \begin{aligned} \Vert y_n(1) \Vert ^2&\leqslant \frac{c}{\lambda _{n+1}} \Vert {\bar{v}}_0\Vert ^2 + \frac{c \Vert {\bar{v}}_0\Vert ^2}{\sqrt{ \lambda _{n+1}}} e^{ \int _0^1 C(\vartheta _s\omega ) \mathrm{d}s } \\&\leqslant \left( \frac{c}{\lambda _{n+1}} + \frac{c }{\sqrt{ \lambda _{n+1}}} \right) e^{ \int _0^1 C(\vartheta _s\omega ) \mathrm{d}s }\, \Vert {\bar{v}}_0\Vert ^2 . \end{aligned} \end{aligned}$$

Clearly, there exists a \(\delta \in (0,1/4)\) such that \(4\delta <e^{-{\mathbb {E}}(C(\omega ))} \), or equivalently \({\mathbb {E}}(C(\omega )) <-\ln (4\delta )\). In addition, since \(\lambda _n\rightarrow \infty \) as \(n\rightarrow \infty \), there exists an \(m \in {\mathbb {N}}\) large enough such that

$$\begin{aligned} \frac{c}{\lambda _{m+1}} + \frac{ c }{\sqrt{ \lambda _{m+1}}} \leqslant \delta ^2. \end{aligned}$$

In this way we obtain

$$\begin{aligned} \Vert y_m(1)\Vert ^2 \leqslant \delta ^2 e^{2\int ^1_0 C(\vartheta _s\omega ) \mathrm{d}s} \Vert {\bar{v}}_0\Vert ^2 , \qquad \text { with } \ {\mathbb {E}}(C(\omega )) <-\ln (4\delta ) ; \end{aligned}$$

so (4.54) is verified. Since \(2C(\omega ) \geqslant \ln C_1+\beta \) for all \(\omega \in \Omega \), by (4.56) we have

$$\begin{aligned} \begin{aligned} \Vert y(1,\omega , \sigma ,{\bar{v}}_0)\Vert ^2&\leqslant e^{\ln C_1+\beta } \Vert {\bar{v}}_0\Vert ^2 \\&\leqslant e^{2\int _0^1\! C(\vartheta _s\omega )\mathrm{d}s} \Vert {\bar{v}}_0\Vert ^2, \end{aligned} \end{aligned}$$

and (4.53) is also clear. \(\square \)
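As an illustrative aside (not part of the proof), the spectral inequality \(\Vert \nabla y_n\Vert ^2 \geqslant \lambda _{n+1} \Vert y_n\Vert ^2\) for the tail projection \(y_n\), used via (4.52) above, can be checked on Dirichlet eigenmodes of the Laplacian on \((0,\pi )\), where \(\lambda _k = k^2\); the mode cutoff \(n=5\) and the number of retained modes below are arbitrary choices for the sketch.

```python
import numpy as np

# For y_n spanned by eigenmodes k >= n+1 with eigenvalues lam_k = k^2
# (Dirichlet Laplacian on (0, pi)), Parseval gives
#   ||grad y_n||^2 = sum k^2 a_k^2  >=  (n+1)^2 * sum a_k^2 = lam_{n+1} ||y_n||^2.
rng = np.random.default_rng(1)
n = 5                                  # illustrative cutoff
modes = np.arange(n + 1, 50)           # tail modes k = n+1, ..., 49
a = rng.standard_normal(modes.size)    # random tail Fourier coefficients

norm_y = np.sum(a**2)                  # ||y_n||^2
norm_grad_y = np.sum(modes**2 * a**2)  # ||grad y_n||^2
lam_np1 = (n + 1) ** 2                 # lam_{n+1}
```

This is exactly the mechanism by which the Poincaré-type constant improves to \(\lambda _{n+1}\) on the range of the tail projection.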

4.4.3 Finite-Dimensionality of the Random Uniform Attractor in H and in V

Now we are ready to conclude that the random uniform attractor \({\mathscr {A}}\) of (4.12) has a finite fractal dimension in H and in V.

Theorem 4.11

Suppose the symbol space \(\Sigma = {\mathcal {H}}(g)\) has finite fractal dimension in \(\Xi ={\mathcal {C}}({\mathbb {R}};H)\), i.e., (G) holds. Then, the fractal dimension in H and that in V of the random uniform attractor \({\mathscr {A}}\) of the NRDS \(\phi \) generated by (4.12) are both finite.

Proof

Let us first prove the finite-dimensionality of \({\mathscr {A}}\) in the phase space H. Set \(X := H\) and \(Y := V\). From Lemma 4.7 we have conditions \((H_2)\) and \((H_4)\); condition \((H_3)\) follows from Proposition 4.5; and finally, by Proposition 4.10, we obtain (S). Then by Theorem 3.6 we conclude that the fractal dimension of the random uniform attractor \({\mathscr {A}}\) is uniformly bounded in H, i.e., \(\mathrm{dim}_F \big ( {\mathscr {A}}(\omega ) ; H \big ) \leqslant c_0\) for all \(\omega \in \Omega \), for some deterministic constant \(c_0>0\).

On the other hand, the smoothing property \((H_7)\) follows from Lemma 4.8 (with \(\delta _1=\delta _2=1\)), and since \({\mathscr {A}}\) has fractal dimension uniformly bounded in H, we obtain by Theorem 3.8 that \({\mathscr {A}}\) is finite dimensional in V as well: \(\mathrm{dim}_F \big ( {\mathscr {A}}(\omega ) ; V \big ) \leqslant \dim _F(\Sigma ;\Xi )+c_0\) for all \(\omega \in \Omega \). \(\square \)