Abstract
We provide a unified analytic approach to studying the asymptotic dynamics of Young differential equations, using the framework of random dynamical systems and random attractors. Our method helps to generalize recent results (Duc et al. in J Differ Equ 264:1119–1145, 2018, SIAM J Control Optim 57(4):3046–3071, 2019; Garrido-Atienza et al. in Int J Bifurc Chaos 20(9):2761–2782, 2010) on the existence of global pullback attractors for the generated random dynamical systems. We also prove sufficient conditions for the attractor to be a singleton, so that the pathwise convergence holds in both the pullback and forward senses.
1 Introduction
This paper studies the asymptotic behavior of the stochastic differential equation
where \(A \in {\mathbb {R}}^{d\times d}\), \(f: {\mathbb {R}}^d \rightarrow {\mathbb {R}}^d\), \(g: {\mathbb {R}}^d \rightarrow {\mathbb {R}}^{d\times m}\) are globally Lipschitz continuous functions, and Z is a two-sided stochastic process with stationary increments such that almost surely all realizations of Z lie in the space \({\mathcal {C}}^{p\mathrm{-var}}({\mathbb {R}}, {\mathbb {R}}^m)\) of continuous paths with finite p-variation norm, for some \(1 \le p < 2\). An example of such a process Z is a fractional Brownian motion \(B^H\) [20] with Hurst index \(H > \frac{1}{2}\). It is well known that Eq. (1.1) can be solved in the pathwise approach by taking a realization \(x \in {\mathcal {C}}^{p\mathrm{-var}}({\mathbb {R}}, {\mathbb {R}}^m)\) (which is also called a driving path) and considering the Young differential equation
This way, system (1.2) is understood in the integral form
where the second integral is understood in the Young sense [24]. The existence and uniqueness theorem for Young differential equations has been proved in many versions, e.g. [6, 17,18,19, 21, 25].
Our aim is to investigate the role of the driving noise in the long-term behavior of system (1.1). Namely, we impose assumptions on the drift coefficient so that there exists a unique equilibrium for the deterministic system \({\dot{\mu }} = A\mu + f(\mu )\) which is asymptotically stable, and then raise questions about the asymptotic dynamics of the perturbed system, in particular the existence of stationary states and their asymptotic (stochastic) stability [14] with respect to almost sure convergence.
These questions can be studied in the framework of random dynamical systems [3]. Specifically, results in [11] and recently in [4, 9] reveal that the stochastic Young system (1.1) generates a random dynamical system, hence asymptotic structures like random attractors are well understood. In this scenario, system (1.1) has no deterministic equilibrium but is expected to possess a random attractor, although little is known about the inner structure of the attractor and much less about whether or not the attractor is a (random) singleton.
We remind the reader of a well-known technique in [15, 16, 23] to generate RDS and to study random attractors of system (1.2) by a conjugacy transformation \(y_t = \psi _g(\eta _t,z_t)\), where \(\psi _g\) is the semigroup generated by the equation \({\dot{u}} = g(u)\) and \(\eta \) is the unique stationary solution of the Langevin equation \(d\eta = -\eta dt + dZ_t\). The transformed system
can then be solved in the pathwise sense, and the existence of a random attractor for (1.4) is equivalent to the existence of a random attractor for the original system. This conjugacy method works in some special cases, particularly if \(g(\cdot )\) is the identity matrix, or more generally if \(g(y) = Cy\) for some matrix C that commutes with A (see further details in Remark 3.10). For more general cases, the reader is referred to [8, 9] and the references therein for recent methods of studying the asymptotic behavior of Young differential equations.
Another approach in [8, 11] uses the semigroup technique to estimate the solution norms, which then proves the existence of a random absorbing set that contains a random attractor for the generated random dynamical system. Specifically, thanks to the rule of integration by parts for Young integrals, the "variation of constants" formula for Young differential equations holds (see e.g. [25] or [8]), so that \(y_t\) in Eq. (1.3) satisfies
where \(\Phi (t)\) is the semigroup generated by A. By constructing suitable stopping times \(\{\tau _k\}_{k \in {\mathbb {N}}}\), one can estimate the Hölder norm of y on the interval \([\tau _n,\tau _{n+1}]\) on the left-hand side of (1.5) by the same norm on the previous intervals \([\tau _k,\tau _{k+1}]\), \(k<n\), via a recurrence relation, and thereby apply the discrete Gronwall lemma (see Lemma 3.12 in the Appendix). However, this construction of stopping times only works under the assumption that the noise is small, in the sense that its Hölder seminorm is integrable and can be controlled to be sufficiently small.
In this paper, we propose a different approach in Lemma 3.3, which first estimates the Euclidean norm \(\Vert y_n\Vert \) of y at time n in (1.5) by applying the continuous Gronwall lemma, and then estimates the Young integrals on the right-hand side by the p-variation norms \(\Vert y\Vert _{p-\mathrm{var},[k,k+1]}\) using Proposition 3.2. Thanks to Theorem 2.4 and its corollaries, these norms \(\Vert y\Vert _{p-\mathrm{var},[k,k+1]}\) are estimated by \(\Vert y_k\Vert \), which leads to a recurrence relation between \(\Vert y_n\Vert \) and the previous terms \(\Vert y_k\Vert \). As a consequence, one can apply the discrete Gronwall Lemma 3.12 to yield a stability criterion in Theorem 3.4. Therefore, the method works for a general source of noise, and the stability criterion matches the classical one for ordinary differential equations when the effect of the driving noise is removed. Moreover, the same arguments can be applied to stochastic processes Z with lower regularity (for instance Z a fractional Brownian motion \(B^H\) with \(\frac{1}{3}<H<\frac{1}{2}\)); in that case equation (1.3) is no longer a Young equation but should be understood as a rough differential equation and can be solved by Lyons' rough path theory [18] (see also [10, 22]).
The paper is organized as follows. Section 2 is devoted to presenting the preliminaries and main results of the paper, with the norm estimates of the solution of (1.2) presented in Sect. 2.1. In Sect. 3.1, we introduce the generation of a random dynamical system from equation (1.1). Using Lemma 3.3, we prove the existence of a global random pullback attractor in Theorem 3.4. Finally, in Sect. 3.3, we prove that the attractor is both a pullback and a forward singleton attractor if g is a linear map (Theorem 3.9), or if \(g \in C^2_b\) with small enough Lipschitz constant \(C_g\) (Theorem 3.11).
2 Preliminaries and Main Results
Let us first briefly survey Young integrals. Denote by \({\mathcal {C}}([a,b],{\mathbb {R}}^r)\), for \(r\ge 1\), the space of all continuous paths \(x:\;[a,b] \rightarrow {\mathbb {R}}^r\) equipped with the supremum norm \(\Vert x\Vert _{\infty ,[a,b]}=\sup _{t\in [a,b]} \Vert x_t\Vert \), where \(\Vert \cdot \Vert \) is the Euclidean norm of a vector in \({\mathbb {R}}^r\). For \(p\ge 1\) and \([a,b] \subset {\mathbb {R}}\), denote by \({\mathcal {C}}^{p\mathrm{-var}}([a,b],{\mathbb {R}}^r)\) the space of all continuous paths \(x \in {\mathcal {C}}([a,b],{\mathbb {R}}^r)\) which are of finite \(p-\)variation, i.e. \(\left| \! \left| \! \left| x\right| \! \right| \! \right| _{p\mathrm{-var},[a,b]} :=\left( \sup _{\Pi (a,b)}\sum _{i=1}^n \Vert x_{t_{i+1}}-x_{t_i}\Vert ^p\right) ^{1/p} < \infty \), where the supremum is taken over the whole class \(\Pi (a,b)\) of finite partitions \(\Pi =\{ a=t_0<t_1<\cdots < t_n=b \}\) of [a, b] (see e.g. [10]). Then \({\mathcal {C}}^{p\mathrm{-var}}([a,b],{\mathbb {R}}^r)\), equipped with the \(p-\)var norm \( \Vert x\Vert _{p\mathrm{-var},[a,b]}:= \Vert x_a\Vert +\left| \! \left| \! \left| x\right| \! \right| \! \right| _{p\mathrm{-var},[a,b]}\), is a nonseparable Banach space [10, Theorem 5.25, p. 92]. Also, for each \(0<\alpha <1\), denote by \({\mathcal {C}}^{\alpha {\mathrm{-Hol}}}([a,b],{\mathbb {R}}^r)\) the space of Hölder continuous paths with exponent \(\alpha \) on [a, b], and equip it with the norm \(\Vert x\Vert _{\alpha \mathrm{-Hol},[a,b]}:= \Vert x_a\Vert + \sup _{a\le s<t\le b}\frac{\Vert x_t-x_s\Vert }{(t-s)^\alpha }\). Note that for \(\alpha > \frac{1}{p}\), it holds that \({\mathcal {C}}^{\alpha {\mathrm{-Hol}}}([a,b],{\mathbb {R}}^r) \subset {\mathcal {C}}^{p\mathrm{-var}}([a,b],{\mathbb {R}}^r)\).
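Since the \(p\)-variation norm is central to everything that follows, it may help to see it computed. For a path observed at finitely many sample points, the supremum over partitions can be evaluated exactly by dynamic programming over the sample indices. The following Python sketch is our own illustration (scalar paths only; the function name and setup are not from the paper):

```python
def p_var(x, p):
    """Exact p-variation seminorm |||x|||_{p-var} of a discretely sampled
    scalar path x[0..n]: dynamic program over all partitions whose points
    lie on the sample grid (some maximizing partition always does)."""
    V = [0.0] * len(x)          # V[j] = sup of partition sums over {0, ..., j}
    for j in range(1, len(x)):
        V[j] = max(V[i] + abs(x[j] - x[i]) ** p for i in range(j))
    return V[-1] ** (1.0 / p)
```

For a monotone path and \(p>1\) the coarsest partition \(\{a,b\}\) is optimal, so the seminorm is just \(|x_b-x_a|\); a zigzag path picks up every oscillation, e.g. `p_var([0, 1, 0], 2)` returns \(\sqrt{2}\).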
We recall here a result from [6, Lemma 2.1].
Lemma 2.1
Let \(x\in {\mathcal {C}}^{p\mathrm{-var}}([a,b],{\mathbb {R}}^d)\), \(p\ge 1\). If \(a = a_1<a_2<\cdots < a_k = b\), then
Now, for \(y\in {\mathcal {C}}^{q\mathrm{-var}}([a,b],{\mathbb {R}}^{d\times m})\) and \(x\in {\mathcal {C}}^{p\mathrm{-var}}([a,b],{\mathbb {R}}^m)\) with \(\frac{1}{p}+\frac{1}{q} > 1\), the Young integral \(\int _a^b y_t dx_t\) can be defined as \( \int _a^b y_s dx_s:= \lim \limits _{|\Pi | \rightarrow 0} \sum _{[u,v] \in \Pi } y_u(x_v-x_u)\), where the limit is taken over all the finite partitions \(\Pi \) of [a, b] with \(|\Pi | := \displaystyle \max _{[u,v]\in \Pi } |v-u|\) (see [24, pp. 264–265]). This integral satisfies the additive property and the so-called Young–Loève estimate [10, Theorem 6.8, p. 116]
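The defining limit of left-point Riemann–Stieltjes sums can be observed numerically. The sketch below is our illustration on smooth test paths (so the condition \(\frac{1}{p}+\frac{1}{q} > 1\) holds trivially); it approximates \(\int_0^1 t\, d(t^2) = \frac{2}{3}\):

```python
def young_sum(y, x):
    """Left-point Riemann-Stieltjes sum over consecutive samples; for
    1/p + 1/q > 1 these sums converge to the Young integral as the mesh -> 0."""
    return sum(y[i] * (x[i + 1] - x[i]) for i in range(len(x) - 1))

n = 10000
t = [i / n for i in range(n + 1)]
approx = young_sum(t, [s * s for s in t])   # int_0^1 t d(t^2) = 2/3
```

The error of the left-point sum is of order \(1/n\) here, so the approximation is accurate to a few times \(10^{-4}\) on this grid.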
From now on, we only consider \(q = p\) for convenience. We impose the following assumptions on the coefficients A, f and g and the driving path x.
Assumptions
- (\({\mathbf{H }}_1\)):
-
\(A \in {\mathbb {R}}^{d\times d}\) is a matrix whose eigenvalues all have negative real parts;
- (\({\mathbf{H }}_2\)):
-
\(f: {\mathbb {R}}^d \rightarrow {\mathbb {R}}^d\) and \(g: {\mathbb {R}}^d \rightarrow {\mathbb {R}}^{d\times m}\) are globally Lipschitz continuous functions. In addition, \(g \in C^1\) is such that \(D_g\) is also globally Lipschitz continuous. Denote by \(C_f,C_g\) the Lipschitz constants of f and g, respectively;
- (\({\mathbf{H }}_3\)):
-
for a given \(p \in (1,2)\), \(Z_t\) is a two-sided stochastic process with stationary increments such that almost surely all realizations belong to the space \(C^{p\mathrm{-var}}({\mathbb {R}}, {\mathbb {R}}^m)\) and that
$$\begin{aligned} \Gamma (p):=\Big (E \left| \! \left| \! \left| Z \right| \! \right| \! \right| ^p_{p\mathrm{-var},[-1,1]}\Big )^{\frac{1}{p}} < \infty . \end{aligned}$$(2.2)

For instance, Z could be an \(m\)-dimensional fractional Brownian motion \(B^H\) [20] with Hurst exponent \(H \in (\frac{1}{2},1)\), i.e. a family of centered Gaussian processes \(B^H = \{B^H_t\}\), \(t\in {\mathbb {R}}\) or \({\mathbb {R}}_+\), with continuous sample paths and covariance function
$$\begin{aligned} R_H(s,t) = \tfrac{1}{2}(t^{2H} + s^{2H} - |t-s|^{2H}),\quad \forall t,s \in {\mathbb {R}}. \end{aligned}$$

Assumption (\({\mathbf{H }}_1\)) ensures that the semigroup \(\Phi (t) =e^{At}, t\in {\mathbb {R}}\), generated by A satisfies the following properties.
Proposition 2.2
Assume that all eigenvalues of A have negative real parts. Then there exist constants \(C_A\ge 1,\lambda _A >0\) such that the generated semigroup \(\Phi (t) = e^{At}\) satisfies
in which \(|A| := \displaystyle \sup _{\Vert x\Vert =1}\Vert Ax\Vert \).
Proof
The first inequality is due to [1, Chapter 1, §3]. The second follows from the mean value theorem
for any \(u<v\) in [a, b], since \(e^{-\lambda _A \cdot }\) is a decreasing function. \(\square \)
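The exponential bound of Proposition 2.2 is easy to check numerically for a concrete Hurwitz matrix. The sketch below uses a truncated Taylor series for \(e^{At}\) and illustrative constants \(C_A = 2\), \(\lambda_A = 0.9\) chosen for this particular matrix; none of the numbers are taken from the paper.

```python
import numpy as np

def expm(A, t, terms=40):
    """e^{At} by truncated Taylor series (adequate for the small ||At|| used here)."""
    M, term = np.eye(len(A)), np.eye(len(A))
    for k in range(1, terms):
        term = term @ (A * t) / k
        M = M + term
    return M

A = np.array([[-1.0, 1.0], [0.0, -2.0]])     # eigenvalues -1 and -2: Hurwitz
C_A, lam = 2.0, 0.9                           # illustrative constants for (2.3)
decay_ok = all(np.linalg.norm(expm(A, t), 2) <= C_A * np.exp(-lam * t)
               for t in np.linspace(0.0, 5.0, 51))
# Semigroup property Phi(t+s) = Phi(t) Phi(s)
semigroup_ok = np.allclose(expm(A, 1.0) @ expm(A, 1.0), expm(A, 2.0), atol=1e-8)
```

Note that \(C_A = 1\) would fail here even though both eigenvalues are negative: the off-diagonal entry forces a transient growth, which is exactly why the constant \(C_A \ge 1\) appears in the proposition.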
Our main results (Theorems 3.4, 3.9 and 3.11) can be summarized as follows.
Theorem 2.3
Assume that the system (1.1) satisfies the assumptions \({\mathbf{H }}_1-{\mathbf{H }}_3\), and further that \(\lambda _A > C_fC_A\), where \(\lambda _A\) and \(C_A\) are given from (2.3), (2.4). If
where \(\Gamma (p)\) is defined in (2.2) and K in (2.7), then the generated random dynamical system \(\varphi \) of (1.1) possesses a pullback attractor \({\mathcal {A}}(x)\). Moreover, in the case that \(g(y) = Cy +g(0)\) is a linear map satisfying (2.5), or in the case that \(g \in C^2_b\) with Lipschitz constant \(C_g\) small enough, this attractor is a singleton, i.e. \({\mathcal {A}}(x) = \{a(x)\}\) a.s.; thus the pathwise convergence holds in both the pullback and forward directions.
For the convenience of the reader, we introduce some notation and constants which are used throughout the paper.
2.1 Solution Estimates
In this preparatory subsection we estimate several norms of the solution. To do so, the idea is to evaluate the norms of the solution on consecutive small intervals. Here we construct, for any \(\gamma >0\) and any given interval [a, b], a sequence of greedy times \(\{\tau _k(\gamma )\}_{k \in {\mathbb {N}}}\) as follows (see e.g. [5, 6, 8])
Define
then due to the superadditivity of \(\left| \! \left| \! \left| x \right| \! \right| \! \right| ^p_{p\mathrm{-var},[s,t]}\)
From now on, we fix \(p \in (1,2)\) and \(\gamma := \frac{1}{2(K+1)C_g}\), and write in short \(N_{[a,b]}(x)\) to specify the dependence of N on x and the interval [a, b].
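On a sample grid, the greedy times can be constructed literally: from \(\tau_k\), advance as far as possible while the \(p\)-variation seminorm stays below \(\gamma\). The sketch below is our discrete illustration (with an arbitrary \(\gamma\) rather than the fixed value \(\frac{1}{2(K+1)C_g}\)); it reuses the dynamic program for the discrete \(p\)-variation:

```python
def p_var_pow(x, p):
    """p-th power of the p-variation seminorm of samples x (DP over partitions)."""
    V = [0.0] * len(x)
    for j in range(1, len(x)):
        V[j] = max(V[i] + abs(x[j] - x[i]) ** p for i in range(j))
    return V[-1]

def greedy_times(x, p, gamma):
    """Discrete greedy times: tau_{k+1} is the largest grid index t > tau_k with
    |||x|||_{p-var,[tau_k, t]} <= gamma, always advancing at least one grid
    point; a small tolerance guards against floating-point rounding."""
    taus, n = [0], len(x) - 1
    while taus[-1] < n:
        a, t = taus[-1], taus[-1] + 1
        while t < n and p_var_pow(x[a:t + 2], p) ** (1.0 / p) <= gamma + 1e-9:
            t += 1
        taus.append(t)
    return taus
```

For the monotone path \(x_t = t\) sampled at the integers \(0,\dots,10\) and \(\gamma = 3\), the greedy times are \(0, 3, 6, 9, 10\), so \(N_{[0,10]}(x) = 4\).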
We assume throughout this section that assumptions (\({\mathbf{H }}_2\)) and (\({\mathbf{H }}_3\)) are satisfied. The following theorem presents a standard method to estimate the \(p-\)variation and supremum norms of the solution of (1.2), using the continuous Gronwall lemma and a discretization scheme with the greedy times (2.12).
Theorem 2.4
There exists a unique solution to (1.2) for any initial value, whose supremum and \(p-\)variation norms are estimated as follows
where \(L,\alpha \) and \(M_0\) are given by (2.6), (2.7) and (2.8) respectively.
Proof
There are similar versions of Theorem 2.4 in [19, Proposition 1] for Young equations, and in [5, Lemma 4.5] and [22, Theorem 3.1] for rough differential equations with bounded g; thus we only sketch the proof here for the benefit of the reader. To prove (2.15), we use the fact that \(\left| \! \left| \! \left| g(y)\right| \! \right| \! \right| _{p\mathrm{-var},[s,t]}\le C_g\left| \! \left| \! \left| y\right| \! \right| \! \right| _{p\mathrm{-var},[s,t]}\) and apply (2.1) with K in (2.7) to derive
As a result,
whenever \((K+1)C_g \left| \! \left| \! \left| x\right| \! \right| \! \right| _{p\mathrm{-var},[s,t]} \le \frac{1}{2}\). Applying the continuous Gronwall Lemma [2, Lemma 6.1, p 89] for \(\left| \! \left| \! \left| y \right| \! \right| \! \right| _{p\mathrm{-var},[s,t]}\) yields
whenever \(\left| \! \left| \! \left| x\right| \! \right| \! \right| _{p\mathrm{-var},[s,t]} \le \gamma = \frac{1}{2(K+1)C_g}\). Now, constructing the sequence of greedy times \(\{\tau _k=\tau _k(\gamma )\}_{k \in {\mathbb {N}}}\) on the interval [a, b] as in (2.12), it follows by induction that
which proves (2.15) since \(\tau _{N_{[a,b]}(x)} = b\). On the other hand, it follows from the \(p-\)variation seminorm inequality in Lemma 2.1 and from (2.18) that for all \(k = 0, \ldots , N_{[a,b]}(x)-1\),
which proves (2.16). \(\square \)
By the same arguments, we can prove the following results.
Corollary 2.5
If in addition g is bounded by \(\Vert g\Vert _\infty < \infty \), then
in which \(a\vee b := \max \{a,b\}\).
Corollary 2.6
The following estimate holds
The lemma below is useful for evaluating the difference of two solutions of Eq. (1.2). The proof is similar to [6, Lemma 3.1] and is omitted here.
Lemma 2.7
Let \(y^1,y^2\) be two solutions of (1.2). Assign
where g satisfies (\({\mathbf{H }}_2\)).
(i) If, in addition, \(D_g\) is Lipschitz continuous with Lipschitz constant \(C'_g\), then
(ii) If g is a linear map, then \(\left| \! \left| \! \left| Q\right| \! \right| \! \right| _{p\mathrm{-var},[u,v]}\le C_g\left| \! \left| \! \left| y^1-y^2\right| \! \right| \! \right| _{p\mathrm{-var},[u,v]}\).
Thanks to Lemma 2.7, the difference of two solutions of (1.2) can be estimated in p-var norm as follows.
Corollary 2.8
Let \(y^1,y^2\) be two solutions of (1.2) and assign \(z_t=y^2_t-y^1_t\) for all \(t\ge 0\).
(i) If \(D_g\) is Lipschitz continuous with Lipschitz constant \(C'_g\), then
in which
(ii) If in addition g is a linear map then
Proof
The proof uses similar arguments to the proof of Theorem 2.4 and is thus omitted here. The reader is referred to [19, Proposition 1] and [6, Theorem 3.9] for similar versions. \(\square \)
Remark 2.9
It follows from (2.14) that
As a result, the norm estimates (2.16), (2.19), (2.20) have the same form
in which \(\Lambda _i(x,[a,b])\) are functions of \(\left| \! \left| \! \left| x\right| \! \right| \! \right| _{p\mathrm{-var},[a,b]}\). Similarly, (2.22) (for a fixed solution \(y^1\)) and (2.24) can also be rewritten in the form (2.25) with \(\Lambda _2 \equiv 0\).
In the following, let y be a solution of (1.2) on \([a,b]\subset {\mathbb {R}}^+\) and let \(\mu \) be the solution of the corresponding deterministic system, i.e.
with the same initial condition \(\mu _a = y_a\). Assign \(h_t := y_t - \mu _t\). The following result, which is used in the study of singleton attractors in Theorem 3.11, estimates the norms of h in terms of the initial condition \(\Vert y_a\Vert \), up to a fractional order.
Corollary 2.10
Assume that g is bounded. Then for a fixed constant \(\beta = \frac{1}{p} \in (\frac{1}{2},1)\), there exists for each interval [a, b] a constant D depending on \(b-a\) such that
Proof
The proof follows steps similar to those of [13, Proposition 4.6], with only small modifications in the estimates of the p-variation norms and in the usage of the continuous Gronwall lemma. To sketch the proof, we first observe from (2.26) with \( r= b-a\) that
Next, it follows from \({\mathbf{H }}_2\) and the boundedness of g by \(\Vert g\Vert _\infty \) that
Observe that due to the boundedness of g,
where the last estimate is due to (2.29) and D is a generic constant depending on \(b-a\). This leads to
Replacing (2.31) into (2.30) yields
which is similar to (2.17). Using similar arguments to the proof of Theorem 2.4 and taking into account (2.19), we conclude that
for a generic constant D. Finally, (2.27) is derived since \(h_a=0\). The estimate (2.28) is obtained similarly. \(\square \)
3 Random Attractors
3.1 Generation of Random Dynamical Systems
In this subsection we would like to present the generation of a random dynamical system from the Young equation (1.1). Let \((\Omega ,{\mathcal {F}},{\mathbb {P}})\) be a probability space equipped with a so-called metric dynamical system \(\theta \), which is a measurable mapping \(\theta : {\mathbb {R}}\times \Omega \rightarrow \Omega \) such that \(\theta _t:\Omega \rightarrow \Omega \) is \({\mathbb {P}}\)-preserving, i.e. \({\mathbb {P}}(B) = {\mathbb {P}}(\theta ^{-1}_t(B))\) for all \(B\in {\mathcal {F}}, t\in {\mathbb {R}}\), and \(\theta _{t+s} = \theta _t \circ \theta _s\) for all \(t,s \in {\mathbb {R}}\). A continuous random dynamical system \(\varphi : {\mathbb {R}}\times \Omega \times {\mathbb {R}}^d \rightarrow {\mathbb {R}}^d\), \((t,\omega ,y_0)\mapsto \varphi (t,\omega )y_0\), is then defined as a measurable mapping which is also continuous in t and \(y_0\) such that the cocycle property
is satisfied [3].
In our scenario, denote by \({\mathcal {C}}^{0,p-{\mathrm{var}}}([a,b],{\mathbb {R}}^m)\) the closure of \({\mathcal {C}}^{\infty }([a,b],{\mathbb {R}}^m)\) in \({\mathcal {C}}^{p-{\mathrm{var}}}([a,b],{\mathbb {R}}^m)\), and by \({\mathcal {C}}^{0,p-{\mathrm{var}}}({\mathbb {R}},{\mathbb {R}}^m)\) the space of all \(x: {\mathbb {R}}\rightarrow {\mathbb {R}}^m\) such that \(x|_I \in {\mathcal {C}}^{0,p-{\mathrm{var}}}(I, {\mathbb {R}}^m)\) for each compact interval \(I\subset {\mathbb {R}}\). Then equip \({\mathcal {C}}^{0,p-{\mathrm{var}}}({\mathbb {R}},{\mathbb {R}}^m)\) with the compact open topology given by the \(p-\)variation norm, i.e. the topology generated by the metric:
Assign
and equip it with the Borel \(\sigma \)-algebra \({\mathcal {F}}\). Note that for \(x\in {\mathcal {C}}^{0,p-{\mathrm{var}}}_0({\mathbb {R}},{\mathbb {R}}^m)\), \(\left| \! \left| \! \left| x\right| \! \right| \! \right| _{p\mathrm{-var},I} \) and \(\Vert x\Vert _{p\mathrm{-var},I}\) are equivalent norms for every compact interval I containing 0.
To equip this measurable space \((\Omega ,{\mathcal {F}})\) with a metric dynamical system, consider a stochastic process \({\bar{Z}}\) defined on a probability space \(({\bar{\Omega }},\bar{{\mathcal {F}}},{\bar{{\mathbb {P}}}})\) with realizations in \(({\mathcal {C}}^{0,p-{\mathrm{var}}}_0({\mathbb {R}},{\mathbb {R}}^m), {\mathcal {F}})\). Assume further that \({\bar{Z}}\) has stationary increments. Denote by \(\theta \) the Wiener shift
It is easy to check that \(\theta \) forms a continuous (and thus measurable) dynamical system \((\theta _t)_{t\in {\mathbb {R}}}\) on \(({\mathcal {C}}^{0,p-{\mathrm{var}}}_0({\mathbb {R}},{\mathbb {R}}^m), {\mathcal {F}})\). Moreover, the Young integral satisfies the shift property with respect to \(\theta \), i.e.
(see details in [6, p. 1941]). It follows from [4, Theorem 5] that there exist a probability measure \({\mathbb {P}}\) on \((\Omega , {\mathcal {F}}) = ({\mathcal {C}}^{0,p-{\mathrm{var}}}_0({\mathbb {R}},{\mathbb {R}}^m), {\mathcal {F}})\) that is invariant under \(\theta \), and a so-called diagonal process \(Z: {\mathbb {R}}\times \Omega \rightarrow {\mathbb {R}}^m\), \(Z(t,x) = x_t\) for all \(t\in {\mathbb {R}}, x \in \Omega \), such that Z has the same law as \({\bar{Z}}\) and satisfies the helix property:
Such a stochastic process Z also has stationary increments, and almost all of its realizations belong to \({\mathcal {C}}^{0,p-{\mathrm{var}}}_0({\mathbb {R}},{\mathbb {R}}^m)\). It is important to note that the existence of \({\bar{Z}}\) is necessary to construct the diagonal process Z.
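The flow property \(\theta_{t+s} = \theta_t \circ \theta_s\) of the Wiener shift \((\theta_t x)_\cdot = x_{t+\cdot} - x_t\) is a one-line computation, sketched below on a smooth stand-in for a sample path (our illustration; the path and evaluation points are arbitrary):

```python
import math

def wiener_shift(x, t):
    """Wiener shift: (theta_t x)(s) = x(t + s) - x(t), so theta_t x vanishes at 0."""
    return lambda s: x(t + s) - x(t)

x = lambda s: math.sin(s) + 0.5 * s               # smooth path with x(0) = 0
lhs = wiener_shift(x, 3.0)                        # theta_{t+s} x with t = 1, s = 2
rhs = wiener_shift(wiener_shift(x, 2.0), 1.0)     # theta_t (theta_s x)
```

Both sides reduce to \(x_{3+\cdot} - x_3\), which is the algebra behind the cocycle property of \(\varphi\) below.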
When dealing with fractional Brownian motion [20], we can start with the space \({\mathcal {C}}_0({\mathbb {R}},{\mathbb {R}}^m)\) of continuous functions on \({\mathbb {R}}\) vanishing at zero, with the Borel \(\sigma \)-algebra \({\mathcal {F}}\), the Wiener shift, and the Wiener probability \({\mathbb {P}}\), and then follow [12, Theorem 1] to construct an invariant probability measure \({\mathbb {P}}^H = B^H {\mathbb {P}}\) on the subspace \({\mathcal {C}}^\nu \) such that \(B^H \circ \theta = \theta \circ B^H\). It can be proved that \(\theta \) is ergodic (see [12]).
Under these circumstances, if we assume further that (2.2) is satisfied, then it follows from the Birkhoff ergodic theorem that
for almost all realizations \(x_t = Z_t(x)\) of Z. In particular, in the case \(Z=B^H = (B_1^{H}, \dots , B_m^H)\), where the \(B_i^H\) are scalar fractional Brownian motions (not necessarily independent), we can apply Lemma 2.1 in [6] together with the estimate in [9, Lemma 4.1 (iii), p. 14] to obtain \(\Gamma (p) < \infty \).
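For completeness, here is one way to realize \(B^H\) numerically from the covariance \(R_H\): build the covariance matrix on a grid and take a Cholesky factor. This is our own sketch (circulant-embedding methods are the standard efficient choice; the grid, seed, and \(H\) are arbitrary). The stationarity of the increments, \(E(B^H_t - B^H_s)^2 = |t-s|^{2H}\), can be read off from \(R_H\) directly.

```python
import numpy as np

H = 0.7                                   # Hurst exponent in (1/2, 1)
t = np.linspace(0.01, 1.0, 50)            # grid away from 0, since R_H(0, 0) = 0
S, T = np.meshgrid(t, t)
# Covariance R_H(s, t) = (t^{2H} + s^{2H} - |t - s|^{2H}) / 2
R = 0.5 * (T ** (2 * H) + S ** (2 * H) - np.abs(T - S) ** (2 * H))

# Stationary increments: E (B_t - B_s)^2 = R(t,t) + R(s,s) - 2 R(s,t) = |t-s|^{2H}
i, j = 10, 30
incr_var = R[i, i] + R[j, j] - 2 * R[i, j]

# Sampling: B = L @ xi with R = L L^T and xi standard Gaussian
L = np.linalg.cholesky(R + 1e-12 * np.eye(len(t)))
path = L @ np.random.default_rng(0).standard_normal(len(t))
```

Empirical averages of \(\left| \! \left| \! \left| B^H \right| \! \right| \! \right| ^p_{p\mathrm{-var}}\) over such samples then give a (grid-level) Monte Carlo impression of the quantity \(\Gamma (p)\) in (2.2).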
Proposition 3.1
The system (1.1) generates a continuous random dynamical system.
Proof
The proof follows directly from [4] and [6, Section 4.2], so we only sketch it here. First, for each fixed driving path \(x \in \Omega \), Eq. (1.1) is solved in the pathwise sense by the Young equations (1.2) and (1.3) for the starting point \(y_0\) at time 0, in forward time if \(t >0\), or in backward time for \(t<0\) by the backward Young equation
(see details in [6, Theorem 3.8]). Define the mapping \(\varphi (t,x)y_0 := y_t(x,y_0)\), for \(t \in {\mathbb {R}}, x\in \Omega , y_0\in {\mathbb {R}}^d\), which is the pathwise solution of (1.1), then it follows from the existence and uniqueness theorem and the Wiener shift property (3.2) that \(\varphi \) satisfies the cocycle property (3.1) (see [6, Subsection 4.2] and [11] for more details). Also, it is proved in [6, Theorem 3.9] that the solution \(y_t(x,y_0)\) is continuous w.r.t. \((t,x,y_0)\), hence given a probability structure on \(\Omega \), \(\varphi \) is a continuous random dynamical system. \(\square \)
3.2 Existence of Pullback Attractors
Given a random dynamical system \(\varphi \) on \({\mathbb {R}}^d\), we follow [7] and [3, Chapter 9] to present the notion of a random pullback attractor. Recall that a set \({\hat{M}} := \{M(x)\}_{x \in \Omega }\) is called a random set if \(y \mapsto d(y|M(x))\) is \({\mathcal {F}}\)-measurable for each \(y \in {\mathbb {R}}^d\), where \(d(E|F) = \sup \{\inf \{d(y, z)|z \in F\} | y \in E\}\) for nonempty subsets E, F of \({\mathbb {R}}^d\), and \(d(y|E) = d(\{y\}|E)\). A universe \({\mathcal {D}}\) is a family of random sets which is closed w.r.t. inclusion (i.e. if \({\hat{D}}_1 \in {\mathcal {D}}\) and \({\hat{D}}_2 \subset {\hat{D}}_1\) then \({\hat{D}}_2 \in {\mathcal {D}}\)).
In our setting, we define the universe \({\mathcal {D}}\) to be a family of tempered random sets D(x), which means the following: A random variable \(\rho (x) >0\) is called tempered if it satisfies
(see e.g. [3, pp. 164, 386]), which, by [15, p. 220], is equivalent to the sub-exponential growth
A random set D(x) is called tempered if it is contained in a ball \(B(0,\rho (x))\) a.s., where the radius \(\rho (x)\) is a tempered random variable.
A random subset A is called invariant if \(\varphi (t,x)A(x) = A(\theta _t x)\) for all \(t\in {\mathbb {R}},\; x\in \Omega .\) An invariant random compact set \({\mathcal {A}} \in {\mathcal {D}}\) is called a pullback random attractor in \({\mathcal {D}}\) if \({\mathcal {A}} \) attracts any closed random set \({\hat{D}} \in {\mathcal {D}}\) in the pullback sense, i.e.
\({\mathcal {A}} \) is called a forward random attractor in \({\mathcal {D}}\), if \({\mathcal {A}} \) is invariant and attracts any closed random set \({\hat{D}} \in {\mathcal {D}}\) in the forward sense, i.e.
The existence of a random pullback attractor follows from the existence of a random pullback absorbing set (see [7, Theorem 3]). A random set \({\mathcal {B}} \in {\mathcal {D}}\) is called pullback absorbing in a universe \({\mathcal {D}}\) if \({\mathcal {B}} \) absorbs all sets in \({\mathcal {D}}\), i.e. for any \({\hat{D}} \in {\mathcal {D}}\), there exists a time \(t_0 = t_0(x,{\hat{D}})\) such that
Given a universe \({\mathcal {D}}\) and a random compact pullback absorbing set \({\mathcal {B}} \in {\mathcal {D}}\), there exists a unique random pullback attractor in \({\mathcal {D}}\), given by
Since the rule of integration by parts for Young integrals is proved in [25], the "variation of constants" formula for Young differential equations holds (see e.g. [8]), i.e. \(y_t\) satisfies
We need the following auxiliary results.
Proposition 3.2
Given (2.3) and (2.4), the following estimate holds: for any \(0\le a<b\le c\)
Proof
The proof follows directly from (2.3) and (2.4) as follows
\(\square \)
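In the simplest scalar setting, the variation-of-constants formula (3.9) can be checked against a direct discretization. The sketch below is our illustration with \(d = m = 1\), \(f \equiv 0\), constant \(g \equiv \sigma\), and the smooth driver \(x_t = \sin t\), so the Young equation reduces to an ODE:

```python
import math

A, sigma, y0, T, n = -1.0, 0.5, 2.0, 3.0, 20000
h = T / n
x = math.sin                                      # smooth driving path

# Direct Euler discretization of dy = A y dt + sigma dx_t
y = y0
for i in range(n):
    y += h * A * y + sigma * (x((i + 1) * h) - x(i * h))

# Variation of constants: y_T = e^{AT} y_0 + sigma int_0^T e^{A(T-s)} dx_s,
# with the Young integral approximated by a left-point sum
voc = math.exp(A * T) * y0 + sigma * sum(
    math.exp(A * (T - i * h)) * (x((i + 1) * h) - x(i * h)) for i in range(n))
```

Both quantities approximate the same exact solution, so they agree up to the discretization error of order \(h\).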
The following lemma is the crucial technique of this paper.
Lemma 3.3
Assume that \(y_t\) satisfies (3.9). Then for any \(n\ge 0\),
where \(\Delta _k:=[k,k+1]\), \(L_f\) and \(\lambda \) are defined in (2.6).
Proof
First, for any \(t\in [n,n+1)\), it follows from (2.3) and the global Lipschitz continuity of f that
where \(\beta _t := \Big \Vert \int _0^t\Phi (t-s)g(y_s) d x_s \Big \Vert \). Multiplying both sides of the above inequality with \(e^{\lambda _A t}\) yields
By applying the continuous Gronwall Lemma [2, Lemma 6.1, p 89], we obtain
Once again, multiplying both sides of the above inequality with \(e^{-L_f t}\) yields
Next, assign \(p([a,b]) := KC_A \Big [1+|A|(b-a)\Big ]\left| \! \left| \! \left| x\right| \! \right| \! \right| _{p\mathrm{-var},[a,b]}\Big [C_g \Vert y\Vert _{p\mathrm{-var},[a,b]}+\Vert g(0)\Vert \Big ]\) and apply (3.10) in Proposition 3.2; it follows that for all \(s\le t\)
Substituting (3.13) into (3.12), we obtain
where we use the fact that
The continuity of y at \(t= n+1\) then proves (3.11). \(\square \)
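The recursion obtained in Lemma 3.3 is closed by the discrete Gronwall lemma. One standard form (our sketch; the precise statement of Lemma 3.12 is in the Appendix and may differ in form) reads: if \(u_n \le a + \sum_{k<n} b_k u_k\) with \(b_k \ge 0\), then \(u_n \le a \prod_{k<n}(1+b_k)\).

```python
def gronwall_bounds(a, b):
    """Discrete Gronwall: if u_n <= a + sum_{k<n} b[k] u_k with b[k] >= 0,
    then u_n <= a * prod_{k<n} (1 + b[k]). Returns the bounds for u_0..u_n."""
    out, prod = [], 1.0
    for bk in b:
        out.append(a * prod)
        prod *= 1.0 + bk
    out.append(a * prod)
    return out

# Sanity check: the recursion taken with equality saturates the bound
a, b = 1.0, [0.3, 0.1, 0.25, 0.2]
u = []
for n in range(len(b) + 1):
    u.append(a + sum(b[k] * u[k] for k in range(n)))
bounds = gronwall_bounds(a, b)
```

The equality case shows the product bound is sharp, which is why the criterion obtained from it in Theorem 3.4 is explicit rather than merely qualitative.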
We now formulate the first main result of the paper.
Theorem 3.4
Under the assumptions \(({\mathbf{H }}_1)-({\mathbf{H }}_3)\), assume further that \(\lambda _A > C_fC_A\), where \(\lambda _A\) and \(C_A\) are given from (2.3), (2.4). If the criterion (2.5)
holds, where \(\Gamma (p)\) is defined in (2.2), then the generated random dynamical system \(\varphi \) of system (1.1) possesses a pullback attractor \({\mathcal {A}}\).
Proof
Step 1. To begin, we rewrite the estimate (2.15) in the short form, using (2.25) in Remark 2.9
where \(\Delta _k=[k,k+1]\), \(M_0\) is given by (2.8) and
for \(L, \alpha \) in (2.6) and (2.7) respectively. Replacing (3.15) into (3.11) in Lemma 3.3 and using \(M_1,M_2\) in (2.9) and (2.10), we obtain
Assign \(a:= C_A\Vert y_0\Vert \), \(u_k:= e^{\lambda k} \Vert y_k\Vert \) and
for all \(k \ge 0\), where \(\Lambda _1,\Lambda _2\) are given by (3.16). Observe that (3.17) has the form
We are now in a position to apply the discrete Gronwall Lemma 3.12, so that
Step 2. Next, for any \(t\in [n,n+1]\), due to (2.15) and (2.14), we can write
Consequently, replacing x with \(\theta _{-t}x\) in (3.21) and using (3.20) yields
(b(x) can take the value \(\infty \)). Applying the inequality \(\log (1+ae^b)\le a+b\) for \(a,b\ge 0\) and using (3.18), (3.15), we obtain
Together with (3.3) and (2.9), it follows that for almost all x,
where \( {\hat{G}}\) is defined in (2.11) and is also the right hand side of (2.5), and L is defined in (2.6). Hence for \(t\in \Delta _n\) with \(0<\delta < \frac{1}{2}(\lambda -{\hat{G}})\) and \(n \ge n_0\) large enough
Starting from any point \(y_0(\theta _{-t}x) \in D(\theta _{-t}x)\) of a tempered set \({\hat{D}}\), there exists, due to (3.4), an \(n_0\) large enough, independent of \(y_0\), such that for any \(n \ge n_0\) and any \(t\in [n,n+1]\)
where \(F, \Lambda _0\) are given in (3.16) and (3.22). In addition, it follows from (2.14) and the inequality \(\log (1+ ab) \le \log (1+a) + \log b\) for all \(a \ge 0, b\ge 1\), that
where D is a constant.
Step 3. Notice that (3.3) implies \(\lim \limits _{n\rightarrow \infty } \frac{\left| \! \left| \! \left| \theta _{-n}x\right| \! \right| \! \right| _{p\mathrm{-var},[-1,1]}}{n}=0\). The proof is complete once we prove Proposition 3.5 below, i.e. that b(x) is finite and tempered a.s. Indeed, if Proposition 3.5 holds, then by applying [3, Lemma 4.1.2] we obtain the temperedness of \({\hat{b}}(x)\) in the sense of (3.4). We conclude that there exists a compact absorbing set \({\mathcal {B}}(x) = {\bar{B}}(0,{\hat{b}}(x))\), and thus a pullback attractor \({\mathcal {A}}(x)\) for system (1.2) of the form (3.8). \(\square \)
To complete the proof of Theorem 3.4, we now formulate and prove Proposition 3.5.
Proposition 3.5
Assume that (2.5) holds. Then b(x) defined in (3.24)
is finite and tempered a.s. [in the sense of (3.4)].
Proof
From the definition of H in (3.19), by computations similar to those in (3.28) and using the integrability of \(\left| \! \left| \! \left| x\right| \! \right| \! \right| _{p\mathrm{-var},[-1,1]}\), it is easy to prove that \(\log H(x,[-1,1])\) is integrable; thus
Hence, under condition (2.5), there exists for each \(0<2\delta < \lambda - {\hat{G}}\) an \(n_0=n_0(\delta , x)\) such that for all \(n\ge n_0\),
Consequently,
which is finite a.s. Moreover, for each fixed x, \(\left| \! \left| \! \left| x\right| \! \right| \! \right| _{p\mathrm{-var},[s,t]}\) is continuous w.r.t. (s, t) on \(\{(s,t) \in {\mathbb {R}}^2| s\le t\}\) (see [10, Proposition 5.8, p. 80]). Therefore, \(G(x,[-\varepsilon ,1-\varepsilon ])\) and \(H(x,[-\varepsilon ,1-\varepsilon ])\) are continuous functions of \(\varepsilon \). Since \(b(x)\le b^*(x)\), the series
converges uniformly w.r.t. \(\varepsilon \in [0,1]\); thus the series is also continuous w.r.t. \(\varepsilon \in [0,1]\). Hence the supremum in the definition of b(x) in (3.24) can be taken over rational \(\varepsilon \), which proves that b(x) is a random variable on \(\Omega \).
To prove the temperedness of b, observe that for each \(t\in [n,n+1]\)
For \(n> 0\),
Therefore
Applying [3, Proposition 4.1.3(i), p. 165] we conclude that
i.e. b is tempered. \(\square \)
Corollary 3.6
Assume that \(f(0) = g(0) = 0\) so that \(y \equiv 0\) is a solution of (1.1). Then under the assumptions of Theorem 3.4 with condition (2.5), the random attractor \({\mathcal {A}}(x)\) is the fixed point 0.
Proof
Using (3.26) and the fact that \(M_0=M_2 =0\) if \(f(0) = g(0) =0\), we obtain
for \(t\in \Delta _n\). It follows that all other solutions converge exponentially in the pullback sense to the trivial solution, which plays the role of the global pullback attractor. \(\square \)
Remark 3.7
In [8, 11] the authors consider a Hilbert space V together with a covariance operator Q on V such that Q is of trace class, i.e. for a complete orthonormal basis \((e_i)_{i\in {\mathbb {N}}}\) of V, there exists a sequence of nonnegative numbers \((q_i)_{i \in {\mathbb {N}}}\) such that \(\text {tr}(Q) :=\sum _{i=1}^\infty q_i < \infty \). A \(V\)-valued fractional Brownian motion \(B^H = \sum _{i=1}^\infty \sqrt{q_i} \beta _i^H e_i\) is then considered, where \((\beta ^H_i)_{i \in {\mathbb {N}}}\) are stochastically independent scalar fractional Brownian motions with the same Hurst exponent H. The authors then develop the semigroup method to estimate the Hölder norm of y on intervals \([\tau _k, \tau _{k+1}]\), where \((\tau _k)\) is a sequence of stopping times
for some \(\mu \in (0,1)\) and \(\beta > \frac{1}{p}\), which leads to the estimate of the exponent as
where \(C(C_A,\mu )\) is a constant depending on \(C_A, \mu \). It is then proved that \(\liminf \limits _{n \rightarrow \infty } \frac{\tau _n}{n} = \frac{1}{d}\), where \(d = d(\mu , \text {tr}(Q))\) is a constant depending on the moment of the stochastic noise. As such, the exponent is estimated as
However, the stopping time analysis (see [8, Section 4]) technically requires the stochastic noise to be small, in the sense that the trace \(\text {tr}(Q)=\sum _{i=1}^\infty q_i\) must be sufficiently small. In addition, when the noise vanishes, i.e. \(g \equiv 0\), (3.31) reduces to a very rough criterion for exponential stability of the ordinary differential equation
In contrast, our method uses the greedy time sequence in Theorem 2.4, so that later we can work with the simpler (regular) discretization scheme without constructing an additional stopping time sequence. Also, in Lemma 3.3 we first apply the continuous Gronwall lemma in (3.12) in order to isolate the role of the drift coefficient f. Then, by using (2.15) to give a direct estimate of \(y_k\), we are able to apply the discrete Gronwall Lemma directly and obtain a very explicit criterion.
The left and the right hand sides of criterion (2.5)
can be interpreted as, respectively, the decay rate of the drift term and the intensity of the diffusion term, where the term \(e^{\lambda _A+4(|A|+C_f)}\) is the unavoidable effect of the discretization scheme. Criterion (2.5) is therefore a better generalization of the classical criterion for stability of ordinary differential equations, and is satisfied if either \(C_g\) or \(\Gamma (p)\) is sufficiently small. In particular, when \(g \equiv 0\), (2.5) reduces to \(\lambda _A > C_A C_f\), which matches the classical result.
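To see the reduced criterion \(\lambda _A > C_A C_f\) at work, consider the toy scalar equation \({\dot{\mu }} = -2\mu + \sin \mu \) (our own example, not taken from the paper), for which \(\lambda _A = 2\), \(C_A = 1\) and \(C_f = 1\); the criterion holds, and since \(|\sin \mu | \le |\mu |\), solutions decay at least like \(e^{-t}\). A rough Euler simulation confirms this:

```python
import math

def euler(mu0, t_end=10.0, dt=1e-3):
    """Explicit Euler for mu' = -2*mu + sin(mu)."""
    mu = mu0
    for _ in range(int(t_end / dt)):
        mu += dt * (-2.0 * mu + math.sin(mu))
    return mu

# lambda_A = 2 > C_A * C_f = 1, so 0 is exponentially stable;
# the effective decay rate is at least lambda_A - C_f = 1.
mu_T = euler(5.0)
print(abs(mu_T))  # far below the predicted envelope 5 * exp(-10)
```

The same simulation started from any other initial value shows the same exponential collapse onto the trivial solution.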
3.3 Singleton Attractors
In the rest of the paper, we study sufficient conditions for the global attractor to consist of only one point, as seen, for instance, in Corollary 3.6. First, the answer is affirmative for g of linear form, as proved in [9] for dissipative systems. Here we also present a similar version using the semigroup method.
To begin, let \(y^1,y^2\) be two solutions of (1.2) and assign \(z_t=y^2_t-y^1_t\) for all \(t\ge 0\). Similar to (3.9), z satisfies
where \(Q(s,z_s) = g(z_s+y^1_s) - g(y^1_s)\). Observe that by similar computations to (3.10), it is easy to prove that
We need the following auxiliary result.
Lemma 3.8
Assume that all the conditions in Theorem 3.4 are satisfied. Let \(y^1,y^2\) be two solutions of (1.2) and assign \(z_t=y^2_t-y^1_t\) for all \(t\ge 0\).
- (i) If \(Dg\) is Lipschitz continuous with Lipschitz constant \(C'_g\), then
$$\begin{aligned}&e^{\lambda n} \Vert z_{n}\Vert \le C_A\Vert z_0\Vert + e^{\lambda _A }KC_A(1+|A|)(C_g\vee C'_g) \sum _{k=0}^{n-1} \left| \! \left| \! \left| x\right| \! \right| \! \right| _{p\mathrm{-var},\Delta _k}e^{\lambda k} \nonumber \\&\qquad \Big (1+\left| \! \left| \! \left| y^1\right| \! \right| \! \right| _{{p\mathrm -var},\Delta _k}\Big ) \Vert z\Vert _{p\mathrm{-var},\Delta _k}. \end{aligned}$$(3.34)
- (ii) If \(g(y) = C y + g(0)\), then
$$\begin{aligned} e^{\lambda n} \Vert z_{n}\Vert \le C_A\Vert z_0\Vert + e^{\lambda _A }KC_A(1+|A|)C_g \sum _{k=0}^{n-1} \left| \! \left| \! \left| x\right| \! \right| \! \right| _{p\mathrm{-var},\Delta _k}e^{\lambda k} \Vert z\Vert _{p\mathrm{-var},\Delta _k}. \end{aligned}$$(3.35)
Proof
The arguments follow the proof of Lemma 3.3 step by step, applying Lemma 2.7 and Proposition 3.2 to obtain the estimate
where \(\beta _t= \Vert \int _0^t \Phi (t-s)Q(s,z_s)dx_s\Vert \) is estimated using (3.33). The remaining details are omitted. \(\square \)
Theorem 3.9
Assume that \(g(y) = Cy+g(0)\) is a linear map so that \(C_g=|C|\). Then under the condition (2.5),
the pullback attractor is a singleton, i.e. \({\mathcal {A}}(x) = \{a(x)\}\) almost surely. Moreover, it is also a forward singleton attractor.
Proof
The existence of the pullback attractor \({\mathcal {A}}\) follows from Theorem 3.4. Take any two points \(a_1,a_2\in {\mathcal {A}}(x) \). For a given \(n \in {\mathbb {N}}\), assign \(x^*:=\theta _{-n}x\) and consider the equation
Due to the invariance of \({\mathcal {A}}\) under the flow, there exist \(b_1,b_2\in {\mathcal {A}}(x^*)\) such that \(a_i=y_n(x^*,b_i)\). Put \(z_t= z_t(x^*):= y_t(x^*,b_1)- y_t(x^*,b_2)\) then \(z_n(x^*) =a_1- a_2\). By applying Lemma 3.8 with x replaced by \(x^*\), and using Lemma 2.7 (ii), we can rewrite the estimates in (3.35) as
Meanwhile, using Corollary 2.8(ii) and Remark 2.9, with \(M_0\) in (2.8) now equal to zero, we obtain
in which \(\Lambda _1\) is defined in (3.16). As a result, (3.37) has the form
where G is defined in (3.18). Now applying the discrete Gronwall Lemma 3.12, we conclude that
Similar to (3.25)
Therefore it follows from (3.38) that
under condition (2.5). It follows that \(\lim _{n\rightarrow \infty } \Vert a_1-a_2\Vert =0\), so \({\mathcal {A}}\) is a one-point set almost surely. Similar arguments in the forward direction (with \(x^*\) replaced by x) prove that \({\mathcal {A}}\) is a forward singleton attractor almost surely. \(\square \)
Remark 3.10
As pointed out in the Introduction, if we use the conjugacy transformation (developed in [15, 16, 23]) of the form \(y_t = e^{C \eta _t} z_t\), where the semigroup \(e^{C t}\) is generated by the equation \({\dot{u}} = C u\) and \(\eta \) is the unique stationary solution of the Langevin equation \(d\eta = -\eta dt + dZ_t\) (where Z is a scalar stochastic process), then the transformed system has the form
However, even in the simplest case \(f \equiv 0\), there is no effective method to study the asymptotic stability of the non-autonomous linear stochastic system
An exception is when A and C commute, since we can then reduce system (3.39) to the form
and thereby solve it explicitly as
In this case, the exponential stability is proved using the fact that \(\exp \{- C(\eta _t - \eta _0 - Z_t + Z_0) \}\) is tempered. However, since A and C do not commute in general, we cannot apply the conjugacy transformation and should instead use our method described in Theorems 3.4 and 3.9.
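The role of commutativity can be checked numerically: when C is, say, a polynomial in A, conjugating \(e^{At}\) by \(e^{Cs}\) changes nothing, which is the mechanism behind the explicit solution above. The sketch below is our own illustration, using a hand-rolled truncated-series matrix exponential (adequate for the small norms used here):

```python
import numpy as np

def matrix_exp(M, terms=40):
    """Truncated Taylor series for exp(M); fine for small-norm matrices."""
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

A = np.array([[0.0, 1.0], [-1.0, 0.0]])  # rotation generator
C = 2.0 * A + 0.5 * np.eye(2)            # a polynomial in A, hence AC = CA
t, s = 0.7, -0.3                         # stand-ins for the time t and eta_t

# Commuting case: conjugation leaves exp(A t) unchanged.
lhs = matrix_exp(-C * s) @ matrix_exp(A * t) @ matrix_exp(C * s)
rhs = matrix_exp(A * t)
print(np.max(np.abs(lhs - rhs)))  # ~ 0, up to floating-point error
```

Repeating the check with a C that does not commute with A produces a conjugated matrix genuinely different from \(e^{At}\), which is precisely why the transformation fails in general.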
Next, motivated by [13], we consider the case in which \(g \in C^2_b\) and \(C_g\) is also the Lipschitz constant of Dg. Notice that our conditions on A and f are comparable to the dissipativity condition in [13]. However, unlike the probabilistic conclusion on the existence and uniqueness of a stationary measure in [13], we go a step further by proving that for \(C_g\) small enough, the random attractor is indeed a singleton; thus the convergence to the attractor is in the pathwise sense and of exponential rate.
Theorem 3.11
Assume that \(g \in C^2_b\) with \(\Vert g\Vert _\infty < \infty \), and denote by \(C_g\) the Lipschitz constant of g and Dg. Assume further that \(\lambda _A>C_AC_f\) and
Then system (1.2) possesses a pullback attractor. Moreover, there exists a \(\delta >0\) small enough such that for any \(C_g\le \delta \) the attractor is a singleton almost surely, thus the pathwise convergence is in both the pullback and forward directions.
Proof
Step 1. Similar to [13, Proposition 4.6], we prove that there exist a time \(r>0\), a constant \(\eta \in (0,1)\), and an integrable random variable \(\xi _r(x)\) such that
First we fix \(r>0\) and consider \(\mu ,h\) as defined in Lemma 2.10 on [0, r], i.e. \(\mu _t\) is the solution of the deterministic system \({\dot{\mu }} = A\mu + f(\mu )\) which starts at \(\mu _0 = y_0\), and \(h_t=y_t-\mu _t\). Then using (3.12)
On the other hand, due to (2.27) in Corollary 2.10 and (2.14),
where \(\beta = \frac{1}{p}\) and \(\xi _0\) is a polynomial of \(\left| \! \left| \! \left| x\right| \! \right| \! \right| _{p\mathrm{-var},[0,r]}\) of the form
where D is a constant.
Now for \(\epsilon >0\) small enough, we apply a convexity inequality and Young's inequality to conclude that
where
for some generic constant D (depending on r). Thus \(\xi _r\) is integrable due to \(\frac{2p^2(p+1)}{p-1} \le \frac{4p(p+1)}{p-1}\) and (3.40). By choosing \(r >0\) large enough and \(\epsilon \in (0,1)\) small enough such that
we obtain (3.41).
Step 2. Next, for simplicity we only estimate y at the discrete times nr for \(n \in {\mathbb {N}}\); the estimate for \(t \in [nr,(n+1)r]\) is similar to (3.21). From (3.41), it is easy to prove by induction that
thus for n large enough
In this case we could choose \({\hat{b}}(x)\) in (3.27) to be \({\hat{b}}(x)=R_r(x)^{\frac{1}{2p}}\) so that there exists a pullback absorbing set \({\mathcal {B}}(x) = B(0,{\hat{b}}(x))\) containing our random attractor \({\mathcal {A}}(x)\). Moreover, due to the integrability of \(\xi _r(x)\), \(R_r(x)\) is also integrable with \({\mathbb {E}}R_r = 1 + \frac{{\mathbb {E}}\xi _r}{1-\eta }\).
Step 3. We now return to the arguments in the proof of Theorem 3.9, noting that Dg is also globally Lipschitz with the same constant \(C_g\). Using Lemma 2.7 (i) and rewriting (3.34) in Lemma 3.8 for \(x^*\) yields
where the p-variation norm of z can be estimated, due to Corollary 2.8(i), as
This, together with (3.43), yields
By applying the discrete Gronwall Lemma 3.12, we obtain
On the other hand, due to (2.14), (2.19) and (3.27), it is easy to prove with a generic constant D that
thus
All together, \(I_k\) is bounded from above by
where the right hand side of (3.48) is a function of \(\theta _{(k-n)}x\). The Birkhoff ergodic theorem is then applied to (3.47), so that
Applying the inequalities
it follows that
To estimate \({\hat{F}}^p(x)\), we apply Cauchy and Young inequalities to obtain, up to a generic constant \(D>0\),
Hence the right hand side in the last line of (3.50) is integrable due to (3.27), the integrability of \(\left| \! \left| \! \left| x \right| \! \right| \! \right| ^{\frac{4p(p+1)}{p-1}}_{p\mathrm{-var},[-1,1]}\) in (3.40), and that of \({\hat{b}}(x)^{2p}\) in Step 2. On the other hand, the expression under the expectation in (3.49) tends to zero a.s. as \(C_g\) tends to zero. By Lebesgue's dominated convergence theorem, the expectation therefore converges to zero as \(C_g\) tends to zero. As a result, there exists \(\delta \) small enough such that for \(C_g < \delta \) we have \(\Vert z_{n}\Vert = \Vert a_1 - a_2\Vert \rightarrow 0\) exponentially as n tends to infinity, with the uniform convergence rate in (3.49). This proves \(a_1 \equiv a_2\) a.s., and \({\mathcal {A}}\) is a singleton.
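The Birkhoff averaging used in Step 3 replaces a time average along the shift orbit by an expectation in the limit. As a hedged illustration of this mechanism (using i.i.d. sampling as the simplest ergodic stand-in for the shift \(\theta \), not the actual noise path x of the paper):

```python
import numpy as np

rng = np.random.default_rng(42)
# i.i.d. draws stand in for F evaluated along an ergodic shift orbit.
F_orbit = np.abs(rng.standard_normal(200_000))  # E|N(0,1)| = sqrt(2/pi)

time_average = F_orbit.mean()
expectation = np.sqrt(2.0 / np.pi)
print(time_average, expectation)  # the two agree to a few decimal places
```

For genuinely correlated ergodic orbits (as generated by the shift on the noise space) the convergence is slower but the limit is the same, which is all the proof needs.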
Step 4. Let \(y^1_t= y(t,x,a(x))\), \(y^2_t=y(t,x,y_0(x))\) be the solutions starting from a(x) and \(y_0(x)\), respectively, at \(t=0\). Since \({\mathcal {A}}\) is invariant, \(y^1_t= a(\theta _tx)\). By repeating the arguments in Step 3 (with \(x^*\) replaced by x), we conclude that \( {\mathcal {A}} (x)=\{a(x)\}\) is also a forward attractor. \(\square \)
References
Adrianova, L.Y.: Introduction to linear systems of differential equations. In: Translations of Mathematical Monographs, vol. 46, American Mathematical Society (1995)
Amann, H.: Ordinary Differential Equations: An Introduction to Nonlinear Analysis. Walter de Gruyter, Berlin (1990)
Arnold, L.: Random Dynamical Systems. Springer, Berlin (1998)
Bailleul, I., Riedel, S., Scheutzow, M.: Random dynamical systems, rough paths and rough flows. J. Differ. Equ. 262, 5792–5823 (2017)
Cass, T., Litterer, C., Lyons, T.: Integrability and tail estimates for Gaussian rough differential equations. Ann. Probab. 41(4), 3026–3050 (2013)
Cong, N.D., Duc, L.H., Hong, P.T.: Young differential equations revisited. J. Dyn. Differ. Equ. 30(4), 1921–1943 (2018)
Crauel, H., Kloeden, P.: Nonautonomous and random attractors. Jahresber Dtsch. Math. Ver. 117, 173–206 (2015)
Duc, L.H., Garrido-Atienza, M.J., Neuenkirch, A., Schmalfuß, B.: Exponential stability of stochastic evolution equations driven by small fractional Brownian motion with Hurst parameter in \((\frac{1}{2},1)\). J. Differ. Equ. 264, 1119–1145 (2018)
Duc, L.H., Hong, P.T., Cong, N.D.: Asymptotic stability for stochastic dissipative systems with a Hölder noise. SIAM J. Control. Optim. 57(4), 3046–3071 (2019)
Friz, P., Victoir, N.: Multidimensional stochastic processes as rough paths: theory and applications. In: Cambridge Studies in Advanced Mathematics, vol. 120, Cambridge University Press, Cambridge (2010)
Garrido-Atienza, M., Maslowski, B., Schmalfuß, B.: Random attractors for stochastic equations driven by a fractional Brownian motion. Int. J. Bifurc. Chaos 20(9), 2761–2782 (2010)
Garrido-Atienza, M., Schmalfuss, B.: Ergodicity of the infinite dimensional fractional Brownian motion. J. Dyn. Differ. Equ. 23, 671–681 (2011). https://doi.org/10.1007/s10884-011-9222-5
Hairer, M., Ohashi, A.: Ergodic theory for sdes with extrinsic memory. Ann. Probab. 35, 1950–1977 (2007)
Khasminskii, R.: Stochastic Stability of Differential Equations, vol. 66. Springer, Berlin (2011)
Imkeller, P., Schmalfuss, B.: The conjugacy of stochastic and random differential equations and the existence of global attractors. J. Dyn. Differ. Equ. 13(2), 215–249 (2001)
Keller, H., Schmalfuss, B.: Attractors for stochastic differential equations with nontrivial noise. Bul. Acad. Stiinte Repub. Mold. Mat. 26(1), 43–54 (1998)
Lyons, T.: Differential equations driven by rough signals, I: an extension of an inequality of LC Young. Math. Res. Lett. 1(4), 451–464 (1994)
Lyons, T.: Differential equations driven by rough signals. Rev. Mat. Iberoam. 14(2), 215–310 (1998)
Lejay, A.: Controlled differential equations as Young integrals: a simple approach. J. Differ. Equ. 249, 1777–1798 (2010)
Mandelbrot, B., van Ness, J.: Fractional Brownian motion, fractional noises and applications. SIAM Rev. 10(4), 422–437 (1968)
Nualart, D., Răşcanu, A.: Differential equations driven by fractional Brownian motion. Collect. Math. 53(1), 55–81 (2002)
Riedel, S., Scheutzow, M.: Rough differential equations with unbounded drift terms. J. Differ. Equ. 262, 283–312 (2017)
Sussmann, H.J.: On the gap between deterministic and stochastic ordinary differential equations. Ann. Probab. 6(1), 19–41 (1978)
Young, L.C.: An integration of Hölder type, connected with Stieltjes integration. Acta Math. 67, 251–282 (1936)
Zähle, M.: Integration with respect to fractal functions and stochastic calculus. I. Probab. Theory Rel. Fields 111(3), 333–374 (1998)
Acknowledgements
The authors would like to thank the anonymous referee for valuable remarks. This work was supported by the Max Planck Institute for Mathematics in the Sciences (MIS Leipzig), and the International Centre for Research and Postgraduate Training in Mathematics (ICRTM), Institute of Mathematics, Vietnam Academy of Science and Technology, under grant number ICRTM02_2020.02. L.H. Duc would like to thank the Vietnam Institute for Advanced Study in Mathematics (VIASM) for financial support during a three-month research stay at the institute in 2019. P.T. Hong would like to thank the IMU Breakout Graduate Fellowship Program for financial support.
Funding
Open Access funding enabled and organized by Projekt DEAL.
Dedicated to Nguyen Dinh Cong on the occasion of his 60th birthday.
Appendix
Lemma 3.12
(Discrete Gronwall Lemma) Let a be a nonnegative constant and \(u_n, \alpha _n,\beta _n\) be nonnegative sequences satisfying
Proof
Put
We will prove by induction that \(S_n\le T_n\) for all \(n\ge 1\). Namely, the statement holds for \(n=1\) since \(S_1= a+\alpha _0 u_0+\beta _0 \le \max \{a,u_0\}(1+\alpha _0) + \beta _0 =T_1\).
Assume now that \(S_n\le T_n\) holds for some \(n\ge 1\); then, since \(u_n \le S_n\), we obtain
Since \(u_n\le S_n\), (3.51) holds. \(\square \)
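Reading the induction backwards, the inequality (3.51) being proved has the standard discrete Gronwall form: if \(u_n \le a + \sum _{k=0}^{n-1}(\alpha _k u_k + \beta _k)\) for all n, then \(u_n \le \max \{a,u_0\}\prod _{k=0}^{n-1}(1+\alpha _k) + \sum _{k=0}^{n-1}\beta _k \prod _{j=k+1}^{n-1}(1+\alpha _j)\). This is our presumed reading, stated as an assumption; it is consistent with \(T_1 = \max \{a,u_0\}(1+\alpha _0)+\beta _0\) in the proof above. A brute-force numerical check of the reconstructed bound:

```python
import numpy as np

def gronwall_bound(a, u0, alpha, beta):
    """max(a, u0) * prod(1 + alpha_k) + sum_k beta_k * prod_{j > k} (1 + alpha_j)."""
    bound = max(a, u0) * np.prod(1.0 + alpha)
    for k in range(len(alpha)):
        bound += beta[k] * np.prod(1.0 + alpha[k + 1:])
    return bound

rng = np.random.default_rng(1)
a = 0.3
alpha = rng.uniform(0.0, 0.5, size=20)
beta = rng.uniform(0.0, 0.2, size=20)

# Worst case: u_n saturates the assumed inequality with equality.
u = [0.7]  # u_0
for n in range(1, 21):
    u.append(a + sum(alpha[k] * u[k] + beta[k] for k in range(n)))

print(u[20] <= gronwall_bound(a, u[0], alpha, beta) + 1e-12)  # True
```

Since the saturating sequence is the largest one allowed by the hypothesis, the check covers every admissible \(u_n\) for these coefficients.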
Duc, L.H., Hong, P.T. Asymptotic Dynamics of Young Differential Equations. J Dyn Diff Equat 35, 1667–1692 (2023). https://doi.org/10.1007/s10884-021-10095-1