Abstract
We prove the existence of a global random attractor for a certain class of stochastic partly dissipative systems. These systems consist of a partial and an ordinary differential equation, where both equations are coupled and perturbed by additive white noise. The deterministic counterpart of such systems and their long-time behaviour have already been considered but there is no theory that deals with the stochastic version of partly dissipative systems in their full generality. We also provide several examples for the application of the theory.
1 Introduction
In this work, we study classes of stochastic partial differential equations (SPDEs), which are part of the general partly dissipative system
where \(W_{1,2}\) are cylindrical Wiener processes, \(\sigma ,f,g,h\) are given functions, \(B_{1,2}\) are operator-valued, \(\Delta \) is the Laplace operator, \(d>0\) is a parameter, the system is posed on a bounded open domain \(D\subset {\mathbb {R}}^n\), \(u_{1,2}=u_{1,2}(x,t)\) are the unknowns for \((x,t)\in D\times [0,T_{\max })\), and \(T_{\max }\) is the maximal existence time. The term partly dissipative highlights the fact that only the first component contains the regularizing Laplace operator. In this work we analyse the case of additive noise and a certain coupling, more precisely,
where \(B_{1,2}\) are bounded linear operators. A deterministic version of such a system has been analysed by Marion [20]. We are going to use certain assumptions for the reaction terms, which are similar to those used in [20]. The precise technical setting of our work starts in Sect. 2.
The goal of this work is to provide a general theory for stochastic partly dissipative systems and to analyse the long-time behaviour of the solution using the random dynamical systems approach. To this aim, we first show that the solution of our system exists globally in time, i.e. one can take \(T_{\max }=+{\infty }\) above. Then we prove the existence of a pullback attractor. To our best knowledge, the well-posedness and asymptotic behaviour for such systems (and for other coupled SPDEs and SODEs) has only been explored for special cases, mainly for the FitzHugh–Nagumo equation, see [4, 24] for solution theory and [2, 19, 31, 32] for long-time behaviour/attractor theory. Here we develop a much more general theory of stochastic partly dissipative systems, motivated by the numerous applications in the natural sciences such as the cubic-quintic Allen–Cahn equation [17] in elasticity. Moreover, unlike several previous works mentioned above, we deal with infinite-dimensional noise that satisfies certain regularity assumptions. These assumptions, combined with the restrictions on the reaction terms, allow us to compute sharp a-priori bounds on the solution, which are used to construct a random absorbing set. Even once the absorbing set has been constructed, we emphasize that we cannot directly apply compact embedding results to obtain the existence of an attractor. This issue arises due to the absence of the regularizing effect of the Laplacian in the second component. To overcome this obstacle, we introduce an appropriate splitting of the solution into two components: a regular one, and one that asymptotically tends to zero. This splitting technique goes (at least) back to Temam [28] and it has also been used in the context of deterministic partly dissipative systems [20] and for a stochastic FitzHugh–Nagumo equation with linear multiplicative noise [33, 35]. The necessary additional technical steps for our setting are provided in Sect. 3.4. Using the a-priori bounds, we establish the existence of a pullback attractor [9, 14, 25, 26], a concept which has been studied in several contexts to capture the long-time behaviour of stochastic (partial) differential equations, see for instance [3, 5, 8, 12, 15] and the references therein. In the stochastic case, pullback attractors are random compact sets of phase space that are invariant with respect to the dynamics. They can be viewed as the generalization of non-autonomous attractors for deterministic systems. In the context of coupled SPDEs and SODEs, to our best knowledge, only random attractors for the stochastic FitzHugh–Nagumo equation have been treated, under various assumptions on the reaction and noise terms: finite-dimensional additive noise on bounded and unbounded domains [32, 33] and a (non-autonomous) FitzHugh–Nagumo equation driven by linear multiplicative noise [1, 19, 35]. Here we provide a general random attractor theory for stochastic partly dissipative systems perturbed by infinite-dimensional additive noise, which goes beyond the FitzHugh–Nagumo system. To this aim we have to employ more general techniques than those used in the references specified above. Furthermore, we emphasize that other dynamical aspects of similar systems have been investigated, e.g. inertial manifolds and master-slave synchronization in [7].
We also mention that numerous extensions of our work are imaginable. Evidently the fully dissipative case is easier from the viewpoint of attractor theory. Hence, our results can be extended in a straightforward way to the case when both components of the SPDE contain a Laplacian. Systems with more than two components but with similar assumptions are likely just going to pose notational problems rather than intrinsic ones. From the point of view of applications it would be meaningful to incorporate non-linear couplings between the PDE and ODE parts. For example, this would allow us to use this theory to analyse various systems derived in chemical kinetics from mass-action laws. However, more complicated non-linear couplings are likely to be far more challenging. Moreover, one could also develop a general framework which allows one to deal with other random influences, e.g. multiplicative noise, or more general Gaussian processes than standard trace-class Wiener processes. Furthermore, it would be interesting to investigate several dynamical aspects of partly dissipative SPDEs such as invariant manifolds or patterns. Naturally, one could also aim to derive upper bounds for the Hausdorff dimension of the random attractor and compare them to the deterministic result given in [20].
This paper is structured as follows: Sect. 2 contains all the preliminaries. More precisely, in Sect. 2.1 we define the system that we are going to analyse and state all the required assumptions. Subsequently, in Sect. 2.2, we clarify the notion of solution that we are interested in. The main contribution of this work is given in Sect. 3. Firstly, some preliminary definitions and results about random attractor theory are summarized in Sect. 3.1. Secondly, we derive the random dynamical system associated to our SPDE system in Sect. 3.2. Thirdly, we prove the existence of a bounded absorbing set for the random dynamical system in Sect. 3.3. Lastly, in Sect. 3.4 it is shown that one can indeed find a compact absorbing set implying the existence of a random attractor. In Sect. 4 we illustrate the theory by several examples arising from applications.
Notation Before we start, we define/recall some standard notations that we will use within this work. When working with vectors we use \((\cdot )^\top \) to denote the transpose while \(|\cdot |\) denotes the Euclidean norm. In a metric space (M, d) we denote a ball of radius \(r>0\) centred in the origin by
We write \(\text {Id}\) for the identity operator/matrix. \(L(U,H)\) denotes the space of bounded linear operators from U to H and \(O^*\) denotes the adjoint of a bounded linear operator O. We let \(D\subset {\mathbb {R}}^n\) always be bounded, open, and with regular boundary, where \(n\in {\mathbb {N}}\). \(L^p(D)\), \(p\ge 1\), denotes the usual Lebesgue space with norm \(\Vert \cdot \Vert _p\). Furthermore, \(\langle \cdot ,\cdot \rangle \) denotes the associated scalar product in \(L^2(D)\). \(C^p(D)\), \(p\in {\mathbb {N}}\cup \{0,\infty \}\), denotes the space of all functions whose derivatives up to order p exist and are continuous. Lastly, for \(k\in {\mathbb {N}}\), \(1\le p\le \infty \), we consider the Sobolev space of order k as
with multi-index \(\alpha \), where the norm is given by
The Sobolev space \(W^{k,p}(D)\) is a Banach space. \(H_0^k(D)\) denotes the space of functions in \(H^k(D)=W^{k,2}(D)\) that vanish at the boundary (in the sense of traces).
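Explicitly, in the standard convention used throughout (with \(D^\alpha \) denoting weak derivatives and \(|\alpha |=\alpha _1+\cdots +\alpha _n\)), this means
$$\begin{aligned} W^{k,p}(D):=\left\{ u\in L^p(D):\,D^\alpha u\in L^p(D)\text { for all }|\alpha |\le k\right\} ,\qquad \Vert u\Vert _{W^{k,p}(D)}:=\Bigg (\sum _{|\alpha |\le k}\Vert D^\alpha u\Vert _p^p\Bigg )^{1/p}, \end{aligned}$$
with the usual modification of the norm for \(p=\infty \).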
2 Stochastic partly dissipative systems
2.1 Basics
Let \(D\subset {\mathbb {R}}^n\) be a bounded open set with regular boundary, set \(H:=L^2(D)\) and let \(U_1,U_2\) be two separable Hilbert spaces. We consider the following coupled, partly dissipative system with additive noise
where \(u_{1,2}=u_{1,2}(x,t)\), \((x,t)\in D\times [0,T]\), \(T>0\), \(W_{1,2}\) are cylindrical Wiener processes on \(U_1\) respectively \(U_2\), and \(\Delta \) is the Laplace operator. Furthermore, \(B_1\in L(U_1,H)\), \(B_2\in L(U_2,H)\) and \(d>0\) is a parameter controlling the strength of the diffusion in the first component. The system is equipped with initial conditions
and a Dirichlet boundary condition for the first component
We will denote by A the realization of the Laplace operator with Dirichlet boundary conditions, more precisely we define the operator \(A:{{\mathcal {D}}}(A)\rightarrow L^2(D)\) as \(Au=d\Delta u\) with domain \({{\mathcal {D}}}(A):=H^2(D)\cap H_0^1(D)\subset L^2(D)\). Note that A is a self-adjoint operator that possesses a complete orthonormal system of eigenfunctions \(\{e_k\}_{k=1}^\infty \) of \(L^2(D)\). Within this work we always assume that there exists \(\kappa >0\) such that \(|e_k(x)|^2<\kappa \) for \(k\in {\mathbb {N}}\) and \(x\in D\). This holds for example when \(D=[0,\pi ]^n\). For the deterministic reaction terms appearing in (2.1)–(2.2) we assume that:
Assumption 2.1
(Reaction terms)
(1) \(h\in C^2({\mathbb {R}}^n\times {\mathbb {R}})\) and there exist \(\delta _1,\delta _2, \delta _3>0\), \(p>2\) such that
$$\begin{aligned} \delta _1|u_1|^p-\delta _3\le h(x,u_1)u_1\le \delta _2|u_1|^p+\delta _3. \end{aligned}$$(2.5)
(2) \(f\in C^2({\mathbb {R}}^n\times {\mathbb {R}}\times {\mathbb {R}})\) and there exist \(\delta _4>0\) and \(0<p_1<p-1\) such that
$$\begin{aligned} |f(x,u_1,u_2)|\le \delta _4 (1+|u_1|^{p_1}+|u_2|). \end{aligned}$$(2.6)
(3) \(\sigma \in C^2({\mathbb {R}}^n)\) and there exist \(\delta ,{\tilde{\delta }}>0\) such that
$$\begin{aligned} \delta \le \sigma (x)\le {\tilde{\delta }}. \end{aligned}$$(2.7)
(4) \(g\in C^2({\mathbb {R}}^n\times {\mathbb {R}})\) and there exists \(\delta _5>0\) such that
$$\begin{aligned} |g_u(x,u_1)|\le \delta _5,~~ |g_{x_i}(x,u_1)|\le \delta _5(1+|u_1|),~~~i=1,\ldots ,n. \end{aligned}$$(2.8)
In particular, Assumptions 2.1 (1) and (4) imply that there exist \(\delta _7,\delta _8>0\) such that
The Assumptions 2.1(1)–(4) are identical to those given in [20], except that in the deterministic case only a lower bound on \(\sigma \) was assumed.
We always consider an underlying filtered probability space denoted as \((\Omega ,{\mathcal {F}},({\mathcal {F}}_t)_{t\ge 0},{\mathbb {P}})\) that will be specified later on. In order to guarantee certain regularity properties of the noise terms, we make the following additional assumptions:
Assumption 2.2
(Noise)
(1) We assume that \(B_2:U_2\rightarrow H\) is a Hilbert–Schmidt operator. In particular, this implies that \(Q_2:=B_2B_2^*\) is a trace class operator and \(B_2W_2\) is a \(Q_2\)-Wiener process.
(2) We assume that \(B_1\in L(U_1,H)\) and that the operator \(Q_t\) defined by
$$\begin{aligned} Q_tu=\int \nolimits _0^t\exp \left( sA\right) Q_1\exp \left( sA^*\right) u~{\text {d}}s, ~~~u\in H, t\ge 0, \end{aligned}$$
where \(Q_1:=B_1B_1^*\), is of trace class. Hence, \(B_1W_1\) is a \(Q_1\)-Wiener process as well.
(3) Let \(U_1=H\). There exists an orthonormal basis \(\{e_k\}_{k=1}^{\infty }\) of H and sequences \(\{\lambda _k\}_{k=1}^{\infty }\) and \(\{\delta _k\}_{k=1}^{\infty }\) such that
$$\begin{aligned} A e_k=-\lambda _ke_k,\qquad Q_1e_k=\delta _ke_k,~~k\in {\mathbb {N}}. \end{aligned}$$
Furthermore, we assume that there exists \(\alpha \in \left( 0,\frac{1}{2}\right) \) such that
$$\begin{aligned} \sum _{k=1}^\infty \delta _k\lambda _k^{2\alpha -1}<\infty . \end{aligned}$$
Assumptions 2.2 guarantee that the stochastic convolution introduced below is a well-defined process with sufficient regularity properties, see Lemmas 3.17 and 3.25. As an example, one could choose \(B_1=(-A)^{-\gamma /2}\) with \(\gamma >\frac{n}{2}-1\) to ensure that Assumptions 2.2 (2)–(3) hold for \(\alpha \) with \(2\alpha < \gamma -\frac{n}{2}+1\), see [10, Chapter 4].
Let us now formulate problem (2.1)–(2.2) as an abstract Cauchy problem. We define the following space
Equipped with the norm \(\Vert (u_1,u_2)^\top \Vert _{{\mathbb {H}}}^2=\Vert u_1\Vert _{2}^2+\Vert u_2\Vert _{2}^2\), this becomes a separable Hilbert space. \(\langle \cdot ,\cdot \rangle _{\mathbb {H}}\) denotes the corresponding scalar product. Furthermore, we let
with norm \(\Vert (u_1,u_2)^\top \Vert _{\mathbb {V}}^2=\Vert u_1\Vert _{H^1(D)}^2+\Vert u_2\Vert _{2}^2\). We define the following linear operator
where \({\mathbf {A}}:{{\mathcal {D}}}({\mathbf {A}})\subset {\mathbb {H}}\rightarrow {\mathbb {H}}\) with \({{\mathcal {D}}}({\mathbf {A}}) ={{\mathcal {D}}}(A)\times L^2(D)\). Since all the reaction terms are twice continuously differentiable they obey in particular the Carathéodory conditions [34]. Thus, the corresponding Nemytskii operator is defined by
where \({\mathbf {F}}:{{\mathcal {D}}}({\mathbf {F}})\subset {\mathbb {H}}\rightarrow {\mathbb {H}}\) and \({{\mathcal {D}}}({\mathbf {F}}):={\mathbb {H}}\). By setting
we can rewrite the system (2.1)–(2.2) as an abstract Cauchy problem on the space \({\mathbb {H}}\)
with initial condition
2.2 Mild solutions and stochastic convolution
We are interested in the concept of mild solutions to SPDEs. First of all, let us note the following. We have
It is well known that \(A_1\) generates an analytic semigroup on \({\mathbb {H}}\) and \(A_2\) is a bounded multiplication operator on \({\mathbb {H}}\). Hence, \({\mathbf {A}}\) is the generator of an analytic semigroup \(\{\exp \left( t{\mathbf {A}}\right) \}_{t\ge 0}\) on \({\mathbb {H}}\) as well, see [23, Chapter 3, Theorem 2.1]. Also note that A generates an analytic semigroup \(\{\exp \left( tA\right) \}_{t\ge 0}\) on \(L^p(D)\) for every \(p\ge 1\). In particular, we have for \(u\in L^p(D)\) that for every \(\alpha \ge 0\) there exists a constant \(C_\alpha >0\) such that
where \(a>0\), see for instance [27, Theorem 37.5]. The domain \({{\mathcal {D}}}((-A)^\alpha )\) can be identified with the Sobolev space \(W^{2\alpha ,p}(D)\) and thus we have in our setting for \(t>0\)
Remark 2.3
Omitting the additive noise term in equation (2.11), we are in the deterministic setting of [20]. From there the existence of a global-in-time solution \((u_1,u_2)\in C([0,\infty ),{\mathbb {H}})\) for every initial condition \(u^0\in {\mathbb {H}}\) already follows.
Let us now return to the stochastic Cauchy problem (2.11)–(2.12). We define
Definition 2.4
(Stochastic convolution) The stochastic process defined as
is called stochastic convolution.
More precisely, we have (see [22, Proposition 3.1])
This is a well-defined \({\mathbb {H}}\)-valued Gaussian process. Furthermore, Assumptions 2.2 (1) and (2) ensure that \(W_{\mathbf {A}}(t)\) is mean-square continuous and \({\mathcal {F}}_t\)-measurable, see [11].
Remark 2.5
As \(W_{{\mathbf {A}}}\) is a Gaussian process, we can bound all its higher-order moments, i.e. for \(p\ge 1\) we have
This follows from the Kahane–Khintchine inequality, see [29, Theorem 3.12].
Definition 2.6
(Mild solution) A mean-square continuous, \({\mathcal {F}}_t\)-measurable \({\mathbb {H}}\)-valued process u(t), \(t\in [0,T]\) is said to be a mild solution to (2.11)–(2.12) on [0, T] if \({\mathbb {P}}\)-almost surely we have for \(t\in [0,T]\)
Under Assumptions 2.1 and 2.2 (1)–(2) a mild solution exists locally-in-time in
for some \(T>0\), see [11, Theorem 7.7]. Hence, local in time existence for our problem is guaranteed by the classical SPDE theory.
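For orientation we note that, in the standard notation, the identity required in Definition 2.6 takes the form
$$\begin{aligned} u(t)=\exp \left( t{\mathbf {A}}\right) u^0+\int \nolimits _0^t\exp \left( (t-s){\mathbf {A}}\right) {\mathbf {F}}(u(s))~{\text {d}}s+W_{{\mathbf {A}}}(t),\qquad t\in [0,T], \end{aligned}$$
where \(W_{{\mathbf {A}}}\) is the stochastic convolution from Definition 2.4.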
3 Random attractor
3.1 Preliminaries
We now recall some basic definitions related to random attractors. For more information the reader is referred to the sources given in the introduction.
Definition 3.1
(Metric dynamical system) Let \((\Omega , {\mathcal {F}},{\mathbb {P}})\) be a probability space and let \(\theta =\{\theta _t:\Omega \rightarrow \Omega \}_{t\in {\mathbb {R}}}\) be a family of \({\mathbb {P}}\)-preserving transformations (i.e. \(\theta _t{\mathbb {P}}={\mathbb {P}}\) for \(t\in {\mathbb {R}}\)), which satisfy for \(t,s\in {\mathbb {R}}\) that
(1) \((t,\omega )\mapsto \theta _t\omega \) is measurable,
(2) \(\theta _0=\text {Id}\),
(3) \(\theta _{t+s}=\theta _t\circ \theta _s\).
Then \((\Omega ,{\mathcal {F}}, {\mathbb {P}},\theta )\) is called a metric dynamical system.
The metric dynamical system describes the dynamics of the noise.
Definition 3.2
(Random dynamical system) Let \(({{\mathcal {V}}},\Vert \cdot \Vert )\) be a separable Banach space. A random dynamical system (RDS) with time domain \({\mathbb {R}}_+\) on \(({{\mathcal {V}}},\Vert \cdot \Vert )\) over \(\theta \) is a measurable map
such that \(\varphi (0,\omega )=\text {Id}_{{{\mathcal {V}}}}\) and
for all \(s,t\in {\mathbb {R}}_+\) and for all \(\omega \in \Omega \). We say that \(\varphi \) is a continuous or differentiable RDS if \(v\mapsto \varphi (t,\omega )v\) is continuous or differentiable for all \(t\in {\mathbb {R}}_+\) and every \(\omega \in \Omega \).
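For completeness, we recall that the defining property above is the standard cocycle property: for all \(s,t\in {\mathbb {R}}_+\) and all \(\omega \in \Omega \),
$$\begin{aligned} \varphi (t+s,\omega )=\varphi (t,\theta _s\omega )\circ \varphi (s,\omega ). \end{aligned}$$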
We summarize some further definitions relevant for the theory of random attractors.
Definition 3.3
(Random set) A set-valued map \({{\mathcal {K}}}:\Omega \rightarrow 2^{{\mathcal {V}}}\) is said to be measurable if for all \(v\in {{\mathcal {V}}}\) the map \(\omega \mapsto d(v,{{\mathcal {K}}}(\omega ))\) is measurable. Here, \(d({{\mathcal {A}}},{{\mathcal {B}}}) =\sup _{v\in {{\mathcal {A}}}}\inf _{{\tilde{v}}\in {{\mathcal {B}}}}\Vert v-{\tilde{v}}\Vert \) for \({{\mathcal {A}}},{{\mathcal {B}}}\in 2^{{\mathcal {V}}}\), \({{\mathcal {A}}},{{\mathcal {B}}}\ne \emptyset \) and \(d(v,{{\mathcal {B}}})=d(\{v\},{{\mathcal {B}}})\). A measurable set-valued map is called a random set.
Definition 3.4
(Omega-limit set) For a random set \({{\mathcal {K}}}\) we define the omega-limit set to be
\(\Omega _{{\mathcal {K}}}(\omega )\) is closed by definition.
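In the pullback form standard in this context, the omega-limit set reads
$$\begin{aligned} \Omega _{{\mathcal {K}}}(\omega ):=\bigcap _{T\ge 0}\overline{\bigcup _{t\ge T}\varphi (t,\theta _{-t}\omega ){{\mathcal {K}}}(\theta _{-t}\omega )}, \end{aligned}$$
so that it is indeed closed, being an intersection of closed sets.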
Definition 3.5
(Attracting and absorbing set) Let \({{\mathcal {A}}},{{\mathcal {B}}}\) be random sets and let \(\varphi \) be a RDS.
- \({{\mathcal {B}}}\) is said to attract \({{\mathcal {A}}}\) for the RDS \(\varphi \), if
$$\begin{aligned} d(\varphi (t,\theta _{-t}\omega ){{\mathcal {A}}}(\theta _{-t}\omega ),{{\mathcal {B}}}(\omega ))\rightarrow 0~~ \text {for }t\rightarrow \infty . \end{aligned}$$
- \({{\mathcal {B}}}\) is said to absorb \({{\mathcal {A}}}\) for the RDS \(\varphi \), if there exists a (random) absorption time \(t_{{\mathcal {A}}}(\omega )\) such that for all \(t\ge t_{{\mathcal {A}}}(\omega )\)
$$\begin{aligned} \varphi (t,\theta _{-t}\omega ){{\mathcal {A}}}(\theta _{-t}\omega )\subset {{\mathcal {B}}}(\omega ). \end{aligned}$$
- Let \(\mathbf {{\mathcal {D}}}\) be a collection of random sets (of non-empty subsets of \({{\mathcal {V}}}\)), which is closed with respect to set inclusion. A set \({{\mathcal {B}}}\in \mathbf {{\mathcal {D}}}\) is called \(\mathbf {{\mathcal {D}}}\)-absorbing/\(\mathbf {{\mathcal {D}}}\)-attracting for the RDS \(\varphi \), if \({{\mathcal {B}}}\) absorbs/attracts all random sets in \(\mathbf {{\mathcal {D}}}\).
Remark 3.6
Throughout this work we use a convenient criterion to derive the existence of an absorbing set. Let \({{\mathcal {A}}}\) be a random set. If for every \(v\in {{\mathcal {A}}}(\theta _{-t}\omega )\) we have
where \(\rho (\omega )>0\) for every \(\omega \in \Omega \), then the ball centred at 0 with radius \(\rho (\omega )+\epsilon \) for an \(\epsilon >0\), i.e. \({{\mathcal {B}}}(\omega ):= B(0,\rho (\omega )+\epsilon )\), absorbs \({\mathcal {A}}\).
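A sufficient formulation of this criterion, in the form verified in the proofs below, is
$$\begin{aligned} \limsup _{t\rightarrow \infty }\Vert \varphi (t,\theta _{-t}\omega )v\Vert \le \rho (\omega )\qquad \text {for every } v\in {{\mathcal {A}}}(\theta _{-t}\omega ), \end{aligned}$$
in which case the ball \(B(0,\rho (\omega )+\epsilon )\) absorbs \({{\mathcal {A}}}\) for any \(\epsilon >0\).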
Definition 3.7
(Tempered set) A random set \({{\mathcal {A}}}\) is called tempered provided for \({\mathbb {P}}\)-a.e. \(\omega \in \Omega \)
where \(d({{\mathcal {A}}})=\sup _{a\in {{\mathcal {A}}}}\Vert a\Vert \). We denote by \({\mathcal {T}}\) the set of all tempered subsets of \({{\mathcal {V}}}\).
Definition 3.8
(Tempered random variable) A random variable \(X\in {\mathbb {R}}\) on \((\Omega ,{\mathcal {F}},{\mathbb {P}},\theta )\) is called tempered, if there is a set of full \({\mathbb {P}}\)-measure such that for all \(\omega \) in this set we have
Hence a random variable X is tempered when the stationary random process \(X(\theta _t\omega )\) grows sub-exponentially.
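Written out, the standard formulation of temperedness is that, for \({\mathbb {P}}\)-a.e. \(\omega \in \Omega \),
$$\begin{aligned} \lim _{t\rightarrow \pm \infty }{\text {e}}^{-\beta |t|}|X(\theta _t\omega )|=0\qquad \text {for all }\beta >0. \end{aligned}$$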
Remark 3.9
A sufficient condition that a positive random variable X is tempered is that (cf. [3, Proposition 4.1.3])
If \(\theta \) is an ergodic shift, then the only alternative to (3.2) is
i.e., the random process \(X(\theta _t\omega )\) either grows sub-exponentially or blows up at least exponentially.
Definition 3.10
(Random attractor) Suppose \(\varphi \) is a RDS such that there exists a random compact set \({{\mathcal {A}}}\in {\mathcal {T}}\) which satisfies for any \(\omega \in \Omega \)
- \({{\mathcal {A}}}\) is invariant, i.e., \(\varphi (t,\omega ){{\mathcal {A}}}(\omega )={{\mathcal {A}}}(\theta _t\omega )\) for all \(t\ge 0\).
- \({{\mathcal {A}}}\) is \({\mathcal {T}}\)-attracting.
Then \({{\mathcal {A}}}\) is said to be a \({\mathcal {T}}\)-random attractor for the RDS.
Theorem 3.11
([9, 25]) Let \(\varphi \) be a continuous RDS and assume there exists a compact random set \({{\mathcal {B}}}\in {\mathcal {T}}\) that absorbs every \({{\mathcal {D}}}\in {\mathcal {T}}\), i.e. \({\mathcal {B}}\) is \({\mathcal {T}}\)-absorbing. Then there exists a unique \({\mathcal {T}}\)-random attractor \({{\mathcal {A}}}\), which is given by
We will use the above theorem to show the existence of a random attractor for the partly dissipative system at hand.
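For later reference we note that, in the standard construction of [9, 25], the attractor can be expressed via omega-limit sets as
$$\begin{aligned} {{\mathcal {A}}}(\omega )=\overline{\bigcup _{{{\mathcal {D}}}\in {\mathcal {T}}}\Omega _{{\mathcal {D}}}(\omega )}, \end{aligned}$$
with \(\Omega _{{\mathcal {D}}}\) as in Definition 3.4.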
3.2 Associated RDS
We will now define the RDS corresponding to (2.11)–(2.12). We consider \({\mathcal {V}}={\mathbb {H}}:=L^2(D)\times L^2(D)\) and \({{\mathcal {T}}}\) is the set of all tempered subsets of \({\mathbb {H}}\). In the sequel, we consider the fixed canonical probability space \((\Omega ,{\mathcal {F}},{\mathbb {P}})\) corresponding to a two-sided Wiener process, more precisely
endowed with the compact-open topology. The \(\sigma \)-algebra \({\mathcal {F}}\) is the Borel \(\sigma \)-algebra on \(\Omega \) and \({\mathbb {P}}\) is the distribution of the trace class Wiener process \({\tilde{W}}(t):=({{\tilde{W}}}_1(t),{{\tilde{W}}}_2(t))=(B_1W_1(t),B_2W_2(t))\), where we recall that \(B_1\) and \(B_2\) fulfil Assumptions 2.2. We identify the elements of \(\Omega \) with the paths of these Wiener processes, more precisely
Furthermore, we introduce the Wiener shift, namely
Then \(\theta :{\mathbb {R}}\times \Omega \rightarrow \Omega \) is a measure-preserving transformation on \(\Omega \), i.e. \(\theta _{t}{\mathbb {P}}={\mathbb {P}}\), for \(t\in {\mathbb {R}}\). Furthermore, \(\theta _0\omega (s)=\omega (s)-\omega (0)=\omega (s)\) and \(\theta _{t+s}\omega (r) =\omega (r+t+s)-\omega (t+s)=\theta _t(\omega (r+s)-\omega (s))=\theta _t(\theta _s\omega (r))\). Hence, \((\Omega ,{\mathcal {F}},{\mathbb {P}},\theta )\) is a metric dynamical system. Next, we consider the following equations
The stationary solutions of (3.6)–(3.7) are given by
where
Here, we observe that for \(t=0\)
Now consider the Doss–Sussmann transformation \(v(t)=u(t)-z(\theta _t \omega )\), where \(v(t)=(v_1(t),v_2(t))^\top \), \(z(\omega )=(z_1(\omega _{1}),z_2(\omega _{2}))^\top \) and \(u(t)=(u_1(t),u_2(t))^\top \) is a solution to the problem (2.1)–(2.4). Then v(t) satisfies
More explicitly, i.e. component-wise, this reads as
In the equations above no stochastic differentials appear, hence they can be considered path-wise, i.e., for every \(\omega \) instead of just for \({\mathbb {P}}\)-almost every \(\omega \). For every \(\omega \), (3.8) is a deterministic equation, where \(z(\theta _t\omega )\) can be regarded as a time-continuous perturbation. In particular, [6] guarantees that for all \(v^0=(v_1^0,v_2^0)^\top \in {\mathbb {H}}\) there exists a solution \(v(\cdot ,\omega ,v^0)\in C([0,\infty ),{\mathbb {H}})\) with \(v_1(0,\omega ,v_1^0)=v_1^0\), \(v_2(0,\omega ,v_2^0)=v_2^0\). Moreover, the mapping \({\mathbb {H}} \ni v_{0}\mapsto v(t,\omega ,v_{0})\in {\mathbb {H}}\) is continuous. Now, let
Then \(u(t,\omega ,u^0)=(u_1(t,\omega ,u_1^0),u_2(t,\omega ,u_2^0))^\top \) is a solution to (2.1)–(2.4). In particular, we can conclude at this point that (2.1)–(2.4) has a global-in-time solution which belongs to \(C([0,\infty );{\mathbb {H}})\); see Remark 2.3. We define the corresponding solution operator \(\varphi :{\mathbb {R}}^+\times \Omega \times {\mathbb {H}}\rightarrow {\mathbb {H}}\) as
for all \((t,\omega ,(u_1^0,u_2^0))\in {\mathbb {R}}^{+}\times \Omega \times {\mathbb {H}}\). Now, \(\varphi \) is a continuous RDS associated to our stochastic partly dissipative system. In particular, the cocycle property obviously follows from the mild formulation. In the following, we will prove the existence of a global random attractor for this RDS. Due to conjugacy, see [9, 25], this automatically gives us a global random attractor for the stochastic partly dissipative system (2.1)–(2.4).
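In view of the transformation \(v(t)=u(t)-z(\theta _t\omega )\), the solution operator can be written (in the form used below) as
$$\begin{aligned} \varphi (t,\omega ,u^0)=v\left( t,\omega ,u^0-z(\omega )\right) +z(\theta _t\omega ),\qquad t\ge 0,~\omega \in \Omega ,~u^0\in {\mathbb {H}}. \end{aligned}$$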
3.3 Bounded absorbing set
In the following we will prove the existence of a bounded absorbing set for the RDS (3.11). In the calculations we will make use of some versions of certain classical deterministic results several times. Therefore, we recall these results here for completeness and as an aid to follow the calculations later on.
Lemma 3.12
(\(\varepsilon \)-Young inequality) For \(x,y\in {\mathbb {R}}\), \(\varepsilon >0\), \({{\tilde{p}}}, {{\tilde{q}}}>1\), \(\frac{1}{{{\tilde{p}}}}+\frac{1}{{{\tilde{q}}}}=1\) we have
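the following standard bound (one common normalization of the constants):
$$\begin{aligned} xy\le \varepsilon |x|^{{{\tilde{p}}}}+C(\varepsilon )|y|^{{{\tilde{q}}}},\qquad C(\varepsilon )=\frac{1}{{{\tilde{q}}}\left( \varepsilon {{\tilde{p}}}\right) ^{{{\tilde{q}}}/{{\tilde{p}}}}}. \end{aligned}$$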
Lemma 3.13
(Gronwall’s inequality) Assume that \(\varphi \), \(\alpha \) and \(\beta \) are integrable functions and \(\varphi (t)\ge 0\). If
then
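in its standard form (assuming additionally \(\beta \ge 0\)): whenever \(\varphi (t)\le \alpha (t)+\int \nolimits _0^t\beta (s)\varphi (s)~{\text {d}}s\) for \(t\ge 0\), it follows that
$$\begin{aligned} \varphi (t)\le \alpha (t)+\int \nolimits _0^t\alpha (s)\beta (s)\exp \left( \int \nolimits _s^t\beta (r)~{\text {d}}r\right) ~{\text {d}}s,\qquad t\ge 0. \end{aligned}$$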
Lemma 3.14
(Uniform Gronwall Lemma [28, Lemma 1.1]) Let g, h, y be positive locally integrable functions on \((t_0,\infty )\) such that \(y'\) is locally integrable on \((t_0,\infty )\) and which satisfy
where \(r,a_1,a_2,a_3\) are positive constants. Then
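in the standard form of [28, Lemma 1.1]: whenever
$$\begin{aligned} \frac{{\text {d}}y}{{\text {d}}t}\le gy+h,\qquad \int \nolimits _t^{t+r}g(s)~{\text {d}}s\le a_1,\quad \int \nolimits _t^{t+r}h(s)~{\text {d}}s\le a_2,\quad \int \nolimits _t^{t+r}y(s)~{\text {d}}s\le a_3 \end{aligned}$$
for all \(t\ge t_0\), one has
$$\begin{aligned} y(t+r)\le \left( \frac{a_3}{r}+a_2\right) {\text {e}}^{a_1}\qquad \text {for all }t\ge t_0. \end{aligned}$$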
Lemma 3.15
(Minkowski’s inequality) Let \(p>1\) and \(f,g\in {\mathbb {R}}\), then
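the elementary two-term bound used in our estimates holds, namely (in one standard normalization)
$$\begin{aligned} |f+g|^{p}\le 2^{p-1}\left( |f|^{p}+|g|^{p}\right) . \end{aligned}$$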
Lemma 3.16
(Poincaré’s inequality) Let \(1\le p < \infty \) and let \(D\subset {\mathbb {R}}^n\) be a bounded open subset. Then there exists a constant \(c= c(D,p)\) such that for every function \(u\in W_0^{1,p}(D)\)
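the standard estimate
$$\begin{aligned} \Vert u\Vert _{p}\le c\Vert \nabla u\Vert _{p} \end{aligned}$$
holds; this is the form in which the inequality will be applied.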
Having recalled the relevant deterministic preliminaries, we can now proceed with the main line of our argument. For the following result about the stochastic convolutions Assumption 2.2 (3) is crucial.
Lemma 3.17
Suppose Assumptions 2.1 and 2.2 hold. Then for every \(p\ge 1\)
are tempered random variables.
Proof
Using \(0<\delta \le \sigma (x)\le {\tilde{\delta }}\) and the Burkholder-Davis-Gundy inequality we have
The temperedness of \(\Vert z_2(\omega )\Vert _2^2\) then follows directly using Remark 3.9. Now, we consider the random variable \(\Vert z_1(\omega )\Vert _p^p\). Note that using the so-called factorization method we have for \((x,t)\in D\times [0,T]\) and \(\alpha \in (0,1/2)\) (see [11, Ch. 5.3])
with
where we have used the formal representation \(W_1(x,s)=\sum _{k=1}^\infty \beta _k(s)e_k(x)\) of the cylindrical Wiener process, with \(\{\beta _k\}_{k=1}^\infty \) being a sequence of mutually independent real-valued Brownian motions. \(Y(x,\tau )\) is a real-valued Gaussian random variable with mean zero and variance
where we have used Parseval’s identity and the Itô isometry. Our assumption on the boundedness of the eigenfunctions \(\{e_k\}_{k=1}^\infty \) yields together with Assumption 2.2 (3) that
Hence, \({\mathbb {E}}\left[ \left| Y(x,\tau )\right| ^{2m}\right] \le C_m\) for \(C_m>0\) and every \(m\ge 1\) (note that all odd moments of a Gaussian random variable are zero). Thus we have
i.e., in particular for all \(p\ge 1\) we have \(Y\in L^{p}(D\times [0,T])\) \({\mathbb {P}}\)-a.s. We now observe
where we have used (2.13) and thus
Now, the right hand side is finite as all moments of \(Y(x,\tau )\) are bounded uniformly in \(x,\tau \), see above. Due to embedding of Lebesgue spaces on a bounded domain we have that
i.e., temperedness of \(\Vert z_1(\omega )\Vert _p^p\) follows again with Remark 3.9. \(\square \)
Remark 3.18
(1) Note that Assumption 2.2 (3) together with the boundedness of \(e_{k}\) for \(k\in {\mathbb {N}}\) is essential for this proof. One can extend such statements to general open bounded domains \(D\subset {\mathbb {R}}^{n}\), according to Remark 5.27 and Theorem 5.28 in [11].
(2) Regarding again Assumption 2.2 (3), one can show in a similar way that \( z_1 \in W^{1,p}(D)\) and in particular also that \(\Vert \nabla z_1(\omega )\Vert _p^p\) is a tempered random variable for all \(p\ge 1\).
Remark 3.19
Alternatively, one can introduce the Ornstein–Uhlenbeck processes \(z_1\) and \(z_2\) using integration by parts. We applied the factorization Lemma for the definition of \(z_1\) in order to obtain regularity results for \(z_1\) based on the interplay between the eigenvalues of the linear part and of the covariance operator of the noise.
Using integration by parts, one infers that
This expression can also be used in order to investigate the regularity of \(z_1\) in a Banach space \({{\mathcal {H}}}\) as follows:
Here one uses the Hölder-continuity of \(\omega _{1}\) in an appropriate function space in order to compensate the singularity in the previous formula.
In our case, we need \(z_1\in D(A^{\alpha /2})=W^{\alpha ,p}(D)\). Letting \(\omega _{1}\in D(A^{\varepsilon })\) for \(\varepsilon \ge 0\) and using that \(\omega _1\) is \(\beta \)-Hölder continuous with \(\beta \le 1/2\) one has
which is well-defined if \(\beta +\varepsilon >\alpha /2\). Such a condition provides again an interplay between the time and space regularity of the stochastic convolution.
Based on the results regarding the stochastic convolutions we can now investigate the long-time behaviour of our system. The first step is contained in the next lemma, which establishes the existence of an absorbing set.
Lemma 3.20
Suppose Assumptions 2.1 and 2.2 hold. Then there exists a set \(\{{{\mathcal {B}}}(\omega )\}_{\omega \in \Omega }\in {\mathcal {T}}\) such that \(\{{{\mathcal {B}}}(\omega )\}_{\omega \in \Omega }\) is a bounded absorbing set for \(\varphi \). In particular, for any \({{\mathcal {D}}}=\{{{\mathcal {D}}}(\omega )\}_{\omega \in \Omega }\in {\mathcal {T}}\) and every \(\omega \in \Omega \) there exists a random time \(t_{{\mathcal {D}}}(\omega )\) such that for all \(t\ge t_{{\mathcal {D}}}(\omega )\)
Proof
To show the existence of a bounded absorbing set, we want to make use of Remark 3.6, i.e. we need an a-priori estimate in \({\mathbb {H}}\). For a solution \(v=(v_1,v_2)^\top \) of (3.8) we have
where we have used (2.7). We now estimate \(I_1\)–\(I_3\) separately. Deterministic constants denoted by \(C,C_1,C_2,\ldots \) may change from line to line. Using (2.5) and (2.10) we calculate
where we have used (2.7). We now estimate \(I_1\)-\(I_3\) separately. Deterministic constants denoted as \(C,C_1,C_2,\ldots \) may change from line to line. Using (2.5) and (2.10) we calculate
Furthermore, with (2.6) we estimate
With (2.9) we compute
Now, combining the estimates for \(I_2\) and \(I_3\) yields
where we have used that for \(q=\max \{p_1+1,2\}<p\) there exists a constant \(C_2\) such that
Thus,
Hence, in total we obtain
and thus
Now, applying Gronwall’s inequality we obtain
We replace \(\omega \) by \(\theta _{-t}\omega \) (note the \({\mathbb {P}}\)-preserving property of the MDS) and carry out a change of variables
Now let \({{\mathcal {D}}}\in {{\mathcal {T}}}\) be arbitrary and \((u_1^0,u_2^0)(\theta _{-t}\omega )\in {{\mathcal {D}}}(\theta _{-t}\omega )\). Then
Since \((u_1^0,u_2^0)(\theta _{-t}\omega )\in {{\mathcal {D}}}(\theta _{-t}\omega )\) and since \(\Vert z_1(\omega )\Vert _p^p\) (\(p\ge 1\)), \(\Vert z_2(\omega )\Vert _2^2\) are tempered random variables, we have
Hence,
Due to the temperedness of \(\Vert z_1(\omega )\Vert _p^p\) for \(p\ge 1\) and \(\Vert z_2(\omega )\Vert _2^2\), the improper integral above exists and \(\rho (\omega )>0\) is a \(\omega \)-dependent constant. As described in Remark 3.6, we can define for some \(\epsilon >0\)
Then \({{\mathcal {B}}}=\{{{\mathcal {B}}}(\omega )\}_\omega \in {{\mathcal {T}}}\) is a \({{\mathcal {T}}}\)-absorbing set for the RDS \(\varphi \) with finite absorption time \(t_{{\mathcal {T}}}(\omega )=\sup _{{{\mathcal {D}}}\in {{\mathcal {T}}}}t_{{\mathcal {D}}}(\omega )\). \(\square \)
The random radius \(\rho (\omega )\) depends on the restrictions imposed on the non-linearity and the noise. These were heavily used in Lemma 3.20 in order to derive the expression (3.22) for \(\rho (\omega )\). Regarding the structure of \(\rho (\omega )\), we infer by Lemma 3.17 that \(\rho (\omega )\) is tempered. Although we have now shown the existence of a bounded \({{\mathcal {T}}}\)-absorbing set for the RDS at hand, we need further steps. To show the existence of a random attractor, we would like to make use of Theorem 3.11, i.e., we have to show the existence of a compact \({{\mathcal {T}}}\)-absorbing set. This will be the goal of the next subsection.
3.4 Compact absorbing set
The classical strategy to find a compact absorbing set in \(L^2(D)\) for a reaction-diffusion equation is the following: Firstly, one needs to find an absorbing set in \(L^2(D)\). Secondly, this set is used to find an absorbing set in \(H^1(D)\) and due to compact embedding this automatically defines a compact absorbing set in \(L^2(D)\). In our setting the construction of an absorbing set in \(H^1(D)\) is more complicated as the regularizing effect of the Laplacian is missing in the second component of (3.8). That is, solutions with initial conditions in \(L^2(D)\) will in general only belong to \(L^2(D)\) and not to \(H^1(D)\). To overcome this difficulty, we split the solution of the second component into two terms: one which is regular enough, in the sense that it belongs to \(H^1(D)\), and another one which asymptotically tends to zero. This splitting method has been used by several authors in the context of partly dissipative systems, see for instance [20, 32]. Let us now explain the strategy for our setting in more detail. We consider the equations
and
then \(v_2=v_2^1+v_2^2\) solves (3.10). Note at this point that we associate the initial condition \(v_2^0\in L^2(D)\) to the second part. Now, let \({{\mathcal {D}}}=({{\mathcal {D}}}_1,{{\mathcal {D}}}_2) \in {{\mathcal {T}}}\) be arbitrary and \(u^0=(u_1^0,u_2^0)\in {{\mathcal {D}}}\). Then
If we can show that for a certain \(t^*\ge t_{{\mathcal {D}}}(\omega )\) there exist tempered random variables \(\rho _1(\omega )\), \(\rho _2(\omega )\) such that
then, because of compact embedding, we know that \(\overline{\varphi _1(t^*,\theta _{-t^*}\omega , {{\mathcal {D}}}_1(\theta _{-t^*}\omega ))}\) is a compact set in \({\mathbb {H}}\). If, furthermore
then \(\varphi _2(t,\theta _{-t}\omega ,{{\mathcal {D}}}_2(\theta _{-t}\omega ))\) can be regarded as a (random) bounded perturbation and \(\overline{\varphi (t,\theta _{-t}\omega ,{{\mathcal {D}}}(\theta _{-t}\omega ))}\) is compact in \({\mathbb {H}}\) as well, see [28, Theorem 2.1]. Then,
is a compact absorbing set for the RDS \(\varphi \). We will now prove the necessary estimates (3.25)–(3.27).
Lemma 3.21
Let Assumptions 2.1 and 2.2 hold. Let \({{\mathcal {D}}}_2\subset L^2(D)\) be tempered and \(u_2^0\in {{\mathcal {D}}}_2\). Then
Proof
The solution to (3.24) is given by
and thus
as \(u_2^0\in {{\mathcal {D}}}_2\) and \(\Vert z_2(\omega )\Vert _2^2\) is a tempered random variable. \(\square \)
We now prove boundedness of \(v_1\) and \(v_2^1\) in \(H^1(D)\). For this we need some auxiliary estimates. First, let us derive uniform estimates for \(u_1\in L^p(D)\) and for \(v_1\in H^1(D)\).
Lemma 3.22
Let Assumptions 2.1 and 2.2 hold. Let \({{\mathcal {D}}}_1\subset L^2(D)\) be tempered and \(u_1^0\in {{\mathcal {D}}}_1\). Assume \(t\ge 0\), \(r>0\), then
where \(C,C_1\) are deterministic constants.
Proof
From (3.19) we can derive
and thus by integration
The two statements of the lemma follow directly from this estimate. \(\square \)
Lemma 3.23
Let Assumptions 2.1 and 2.2 hold. Let \({{\mathcal {D}}}_1\subset L^2(D)\) be tempered and \(u_1^0\in {{\mathcal {D}}}_1\). Assume \(t\ge r\), then
where \(C_2,C_3,C_4,C_5,C_6\) are deterministic constants.
Proof
Remember that \(v_1\) satisfies equation (3.9). Multiplying this equation by \(|v_1|^{p-2}v_1\) and integrating over D yields
where we have used condition (2.6), the relations \(p-1,p-2,p_1+p-1<2p-2\) and the inequality
that can be proved by using conditions (2.5) and (2.10)
Hence we have
and thus
We arrive at the following inequality
and hence
With (3.29) we have
Thus by applying the uniform Gronwall Lemma to (3.34) we have
Now integrating (3.33) between t and \(t+r\) yields
and thus for \(t\ge r\) using (3.35)
In total this leads to
and this finishes the proof. \(\square \)
One can also use appropriate shifts within the integrals on the left hand sides in (3.29), (3.30), (3.31) to obtain simpler forms of the \(\omega \)-dependent constants on the right hand side, see for instance [33, Lemma 4.3, 4.4]. More precisely, in case of (3.29) one can for instance obtain an estimate of the form
where \({\tilde{\rho }}(\omega )\) is a random constant. Nevertheless such estimates hold for every \(\omega \), independent of the shift that one inserts inside the integral on the left hand side. Without the appropriate shifts on the left hand sides, as in the lemmas above, the constants on the right hand sides depend on the shift. Next, we are going to show the boundedness of \(v_1\) in \(H^1(D)\).
Lemma 3.24
Let Assumptions 2.1 and 2.2 hold. Let \({{\mathcal {D}}}=({{\mathcal {D}}}_1,{{\mathcal {D}}}_2)\in {{\mathcal {T}}}\) and \(u^0\in {{\mathcal {D}}}\). Assume \(t\ge t_{{\mathcal {D}}}(\omega )+2r\) for some \(r>0\) then
where \(\rho _1(\omega )\) is a tempered random variable.
Proof
Remember that \(v_1\) satisfies the equation (3.9) and thus
We want to apply the uniform Gronwall Lemma now. Therefore, note
We calculate
and
where we have applied Lemma 3.22. By Lemma 3.23 for \(t\ge r\)
Now, the uniform Gronwall Lemma yields for \(t\ge r\)
That is, for \(t\ge 0\) we have
Let us recall that our goal is to find a \(t^*\ge t_{{\mathcal {D}}}(\omega )\) such that (3.25) holds. Now assume that \(t\ge t_{{\mathcal {D}}}(\omega )\). We replace \(\omega \) by \(\theta _{-t-2r}\omega \) (again note the \({\mathbb {P}}\)-preserving property of the MDS), then
As \(t\ge t_{{\mathcal {D}}}(\omega )\) we know by the absorption property that there exists a \({\tilde{\rho }}(\omega )\) such that
and thus replacing \(\omega \) by \(\theta _{-2r}\omega \)
Similarly, we know that
and thus by replacing \(\omega \) by \(\theta _{-r}\omega \)
The same arguments hold for \(v_2\). Furthermore, as \(t\ge t_{{\mathcal {D}}}(\omega )\), we know from Lemma 3.20 that there exists a tempered random variable \({\hat{\rho }}(\omega )\) such that for \(s\in (t,t+2r)\)
and thus
With similar substitutions in the integrals over \(\Vert z_1(\theta _{s-t-2r}\omega )\Vert _{p^2-p}^{p^2-p}\) and \(\Vert z_2(\theta _{s-t-2r}\omega )\Vert _2^2\) we arrive at
where the right hand side is independent of t. Due to the temperedness of all terms involved, they can be combined into one tempered random variable \(\rho _1(\omega )\) such that for \(t\ge t_{{\mathcal {D}}}(\omega )+2r=:t^*\) we have
this concludes the proof. \(\square \)
We are now able to prove the boundedness of the first term of \(v_2\) in \(H^1(D)\).
Lemma 3.25
Let Assumptions 2.1 and 2.2 hold. Let \({{\mathcal {D}}}=({{\mathcal {D}}}_1,{{\mathcal {D}}}_2)\in {{\mathcal {T}}}\) and \(u^0\in {{\mathcal {D}}}\). Assume \(t\ge t_{{\mathcal {D}}}(\omega )+2r\) for some \(r>0\). Then we have
where \(\rho _2(\omega )\) is a tempered random variable.
Proof
Remember that \(v_2^1\) satisfies the equation (3.23) and thus
We estimate \(L_1\) and \(L_2\) separately
and
where in the last equation the gradient is to be understood as
Hence,
and further with (2.8)
where \(C:=\max _{1\le i\le n}\max _{x\in {{\overline{D}}}}| \partial _{x_i}\sigma (x)|\). Next, we apply Gronwall’s inequality while taking the initial condition into account and we obtain for \(t\ge 0\)
We have from (3.19) the following equation
where \(M=\min \{d/c,\delta \}\) and \({{\hat{C}}},{{\tilde{C}}}\) are certain deterministic constants. We multiply (3.40) by \(\exp (Mt)\) and integrate between 0 and t
This yields
as well as
In particular, from the last estimate we obtain
where we have replaced \(\omega \) by \(\theta _{-t}\omega \) after integrating and used that \(t\ge t_{{\mathcal {D}}}(\omega )\).
Now, replacing \(\omega \) by \(\theta _{-t}\omega \) in (3.39), noting that \(\delta \ge M\) and assuming that \(t\ge t_{{\mathcal {D}}}(\omega )\), we compute
where we have used (3.41) in the second inequality and (3.42) in the third inequality. Furthermore, we made use of the absorption property in the third inequality. Finally, since \(\Vert z_2(\theta _{s}\omega )\Vert _2^2,\Vert z_1(\theta _{s}\omega )\Vert _p^p,\Vert z_1(\theta _{s}\omega )\Vert _2^2,\Vert \nabla z_1(\theta _{s}\omega )\Vert _2^2\) (see Lemma 3.17 and Remark 3.18) and \(\Vert v_1^0(\theta _{-t}\omega )\Vert ^2_{2},\Vert v_2^0(\theta _{-t}\omega )\Vert ^2_{2}\) (by assumption) are tempered random variables, we can combine the right hand side into one tempered random variable \(\rho _2(\omega )\) and this concludes the proof. \(\square \)
Theorem 3.26
Let Assumptions 2.1 and 2.2 hold. The random dynamical system defined in (3.11) has a unique \({{\mathcal {T}}}\)-random attractor \({{\mathcal {A}}}\).
Proof
By the previous lemmas there exists a compact absorbing set, given by (3.28), in \({\mathcal {T}}\) for the RDS \(\varphi \). Thus Theorem 3.11 guarantees the existence of a unique \({{\mathcal {T}}}\)-random attractor. \(\square \)
4 Applications
4.1 FitzHugh–Nagumo system
Let us consider the famous stochastic FitzHugh–Nagumo system, i.e.,
with \(D=[0,1]\), where \(\alpha _j\in {\mathbb {R}}\) for \(j\in \{1,2,3\}\) are fixed parameters. We always assume that the noise terms satisfy Assumptions 2.2 and that \(p\in C^{2}\). Such systems have been considered under various assumptions by numerous authors, for instance see [4, 31] and the references specified therein. Our general assumptions are satisfied in this example as follows. Identifying the terms with those given in (2.1)–(2.2) we have
We have \(\sigma (x)=\alpha _3\) and \(|f(x,u_1,u_2)|=|u_2|\), i.e., (2.7) and (2.6) are fulfilled. Furthermore, \(|\partial _ug(x,u_1)|=|\alpha _2|\) and \(|\partial _{x_i}g(x,u_1)|=0\) for \(i=1,\ldots ,n\), hence (2.8) is satisfied. Finally, the cubic nonlinearity in the first equation is a polynomial of odd degree with negative leading coefficient, so the corresponding h fulfils (2.5). Thus the analysis above guarantees the existence of global mild solutions and the existence of a random pullback attractor for the stochastic FitzHugh–Nagumo system.
4.2 The driven cubic-quintic Allen–Cahn model
The cubic-quintic Allen–Cahn (or real Ginzburg–Landau) equation is given by
where \((x,t)\in D\times [0,T)\), \(p_1\in {\mathbb {R}}\) is a fixed parameter, and we will take D to be a bounded open domain with regular boundary. The cubic-quintic polynomial non-linearity frequently occurs in the modelling of Euler buckling [30], as a re-stabilization mechanism in paradigmatic models for fluid dynamics [21], in normal form theory and travelling wave dynamics [13, 16], and as a test problem for deterministic [17] and stochastic numerical continuation [18]. If we want to allow for time-dependent slowly-varying forcing on u and sufficiently regular additive noise, then it is actually very natural to extend the model (4.2) to
where \(p_2\), \(q_2\), \(0<\varepsilon \ll 1 \) are parameters. One easily checks again that (4.3) fits our general framework as \(h(x,u_1)=-p_1u_1-u_1^3+u_1^5\) satisfies the crucial dissipation assumption (2.5).
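Indeed, one possible direct verification (with \(p=6\) in (2.5) and one admissible choice of constants) is
$$\begin{aligned} h(x,u_1)u_1=-p_1u_1^2-u_1^4+u_1^6,\qquad \tfrac{1}{2}|u_1|^{6}-\delta _3\le h(x,u_1)u_1\le 2|u_1|^{6}+\delta _3, \end{aligned}$$
for a suitable \(\delta _3=\delta _3(p_1)>0\), since the lower-order terms \(-p_1u_1^2-u_1^4\) can be absorbed into \(\tfrac{1}{2}|u_1|^{6}\) up to an additive constant, e.g. by Young's inequality.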
References
Adili, A., Wang, B.: Random attractors for non-autonomous stochastic FitzHugh–Nagumo systems with multiplicative noise. In: Discrete and Continuous Dynamical Systems SI (2013)
Adili, A., Wang, B.: Random attractors for stochastic FitzHugh–Nagumo systems driven by deterministic non-autonomous forcing. Discrete Contin. Dyn. Syst. Ser. B 18(3), 643–666 (2013)
Arnold, L.: Random Dynamical Systems. Springer, Berlin (2013)
Bonaccorsi, S., Mastrogiacomo, E.: Analysis of the stochastic FitzHugh–Nagumo system. Infin. Dimens. Anal. Quantum Probab. Relat. Top. 11(03), 427–446 (2008)
Caraballo, T., Langa, J.A., Robinson, J.C.: Stability and random attractors for a reaction–diffusion equation with multiplicative noise. Discrete Contin. Dyn. Syst. 6(4), 875–892 (2000)
Chepyzhov, V.V., Vishik, M.I.: Trajectory attractors for reaction–diffusion systems. Topol. Methods Nonlinear Anal. 7(1), 49–76 (1996)
Chueshov, I., Schmalfuss, B.: Master-slave synchronization and invariant manifolds for coupled stochastic systems. J. Math. Phys. 51(10), 102702 (2010)
Crauel, H., Debussche, A., Flandoli, F.: Random attractors. J. Dyn. Differ. Equ. 9(2), 307–341 (1997)
Crauel, H., Flandoli, F.: Attractors for random dynamical systems. Probab. Theory Relat. Fields 100(3), 365–393 (1994)
Da Prato, G.: Kolmogorov Equations for Stochastic PDEs. Birkhäuser, Basel (2012)
Da Prato, G., Zabczyk, J.: Stochastic Equations in Infinite Dimensions. Cambridge University Press, Cambridge (1992)
Debussche, A.: Hausdorff dimension of a random invariant set. J. Math. Pures Appl. 77(10), 967–988 (1998)
Deissler, R., Brand, H.: Periodic, quasiperiodic, and chaotic localized solutions of the quintic complex Ginzburg–Landau equation. Phys. Rev. Lett. 72(4), 478–481 (1994)
Flandoli, F., Schmalfuss, B.: Random attractors for the 3D stochastic Navier–Stokes equation with multiplicative white noise. Stoch. Int. J. Probab. Stoch. Process. 59(1–2), 21–45 (1996)
Gess, B., Liu, W., Röckner, M.: Random attractors for a class of stochastic partial differential equations driven by general additive noise. J. Differ. Equ. 251(4–5), 1225–1253 (2011)
Kapitula, T., Sandstede, B.: Instability mechanism for bright solitary-wave solutions to the cubic-quintic Ginzburg–Landau equation. JOSA B 15(11), 2757–2762 (1998)
Kuehn, C.: Efficient gluing of numerical continuation and a multiple solution method for elliptic PDEs. Appl. Math. Comput. 266, 656–674 (2015)
Kuehn, C.: Numerical continuation and SPDE stability for the 2d cubic-quintic Allen–Cahn equation. SIAM/ASA J. Uncertain. Quantif. 3(1), 762–789 (2015)
Li, Y., Yin, J.: A modified proof of pullback attractors in a Sobolev space for stochastic FitzHugh–Nagumo equations. Discrete Contin. Dyn. Syst. Ser. B 21(4), 1203–1223 (2016)
Marion, M.: Finite-dimensional attractors associated with partly dissipative reaction–diffusion systems. SIAM J. Math. Anal. 20(4), 816–844 (1989)
Morgan, D., Dawes, J.: The Swift–Hohenberg equation with a nonlocal nonlinearity. Physica D 270, 60–80 (2014)
Nagel, R.: Towards a “matrix theory” for unbounded operator matrices. Math. Z. 201(1), 57–68 (1989)
Pazy, A.: Semigroups of Linear Operators and Applications to Partial Differential Equations. Springer, Berlin (2012)
Sauer, M., Stannat, W.: Analysis and approximation of stochastic nerve axon equations. Math. Comput. 85(301), 2457–2481 (2016)
Schmalfuss, B.: Backward cocycles and attractors of stochastic differential equations. In: International seminar on applied mathematics-nonlinear dynamics: attractor approximation and global behaviour, pp 185–192 (1992)
Schmalfuss, B.: Attractors for the non-autonomous dynamical systems. In: Equadiff 99, vol 2, pp 684–689. World Scientific (2000)
Sell, G.R., You, Y.: Dynamics of Evolutionary Equations. Springer, Berlin (2002)
Temam, R.: Infinite-Dimensional Dynamical Systems in Mechanics and Physics. Springer, Berlin (2012)
Van Neerven, J.: Stochastic Evolution Equations. ISEM Lecture Notes. Cambridge University Press, Cambridge (2008)
Venkadesan, M., Guckenheimer, J., Valero-Cuevas, F.: Manipulating the edge of instability. J. Biomech. 40, 1653–1661 (2007)
Wang, B.: Attractors for reaction–diffusion equations in unbounded domains. Physica D 128(1), 41–52 (1999)
Wang, B.: Pullback attractors for the non-autonomous FitzHugh–Nagumo system on unbounded domains. Nonlinear Anal. Theor. 70(11), 3799–3815 (2009)
Wang, B.: Random attractors for the stochastic FitzHugh–Nagumo system on unbounded domains. Nonlinear Anal. Theor. 71(7–8), 2811–2828 (2009)
Zeidler, E.: Nonlinear Functional Analysis and Its Applications: Part 2 B: Nonlinear Monotone Operators. Springer, Berlin (1989)
Zhou, S., Wang, Z.: Finite fractal dimensions of random attractors for stochastic FitzHugh–Nagumo system with multiplicative white noise. J. Math. Anal. Appl. 441(2), 648–667 (2016)
Acknowledgements
Open Access funding provided by Projekt DEAL.
We thank the anonymous referee for useful comments. CK and AN have been supported by a DFG grant in the D-A-CH framework (KU 3333/2-1). CK and AP acknowledge support by a Lichtenberg Professorship.
Mathematics Subject Classification
- 60H15
- 37H05
- 37L55
Keywords
- Random attractor
- Partly dissipative systems
- Stochastic partial differential equation
- Random dynamical system