
Pullback Attractors for Stochastic Young Differential Delay Equations


Abstract

We study the asymptotic dynamics of stochastic Young differential delay equations under regular assumptions on the Lipschitz continuity of the coefficient functions. Our main results show that if the drift contains a delay-free linear part whose eigenvalues have negative real parts, then the generated random dynamical system possesses a random pullback attractor, provided that the Lipschitz coefficients of the remaining parts are sufficiently small.


References

1. Adrianova, L.Ya.: Introduction to Linear Systems of Differential Equations. Translations of Mathematical Monographs, vol. 46. American Mathematical Society (1995)

2. Amann, H.: Ordinary Differential Equations. An Introduction to Nonlinear Analysis. Walter de Gruyter, Berlin (1990)

3. Arnold, L.: Random Dynamical Systems. Springer, Berlin (1998)

4. Bailleul, I., Riedel, S., Scheutzow, M.: Random dynamical systems, rough paths and rough flows. J. Differ. Equ. 262, 5792–5823 (2017)

5. Boufoussi, B., Hajji, S.: Stochastic delay differential equations in a Hilbert space driven by fractional Brownian motion. Statist. Probab. Lett. 129, 222–229 (2017)

6. Cass, T., Litterer, C., Lyons, T.: Integrability and tail estimates for Gaussian rough differential equations. Ann. Probab. 41(4), 3026–3050 (2013)

7. Cong, N.D., Duc, L.H., Hong, P.T.: Young differential equations revisited. J. Dyn. Differ. Equ. 30(4), 1921–1943 (2018)

8. Crauel, H.: A uniformly exponential random forward attractor which is not a pullback attractor. Arch. Math. 78, 329–336 (2002)

9. Crauel, H., Scheutzow, M.: Minimal random attractors. J. Differ. Equ. 265(2), 702–718 (2018)

10. Cong, N.D., Duc, L.H., Hong, P.T.: Lyapunov spectrum of nonautonomous linear Young differential equations. J. Dyn. Differ. Equ. (2019). https://doi.org/10.1007/s10884-019-09780-z

11. Crauel, H., Kloeden, P.: Nonautonomous and random attractors. Jahresber. Dtsch. Math.-Ver. 117, 173–206 (2015)

12. Cheban, D., Kloeden, P., Schmalfuß, B.: The relationship between pullback, forward and global attractors of nonautonomous dynamical systems. Nonlinear Dyn. Syst. Theory 2(2), 125–144 (2002)

13. Duc, L.H., Hong, P.T.: Young differential delay equations driven by Hölder continuous paths. In: Sadovnichiy, V.A., Zgurovsky, M.Z. (eds.) Modern Mathematics and Mechanics: Fundamentals, Problems and Challenges, Chapter 17, pp. 313–333. Springer International Publishing AG (2019)

14. Duc, L.H., Hong, P.T., Cong, N.D.: Asymptotic stability for stochastic dissipative systems with a Hölder noise. SIAM J. Control Optim. 57(4), 3046–3071 (2019)

15. Duc, L.H., Garrido-Atienza, M.J., Neuenkirch, A., Schmalfuß, B.: Exponential stability of stochastic evolution equations driven by small fractional Brownian motion with Hurst parameter in \((\frac{1}{2},1)\). J. Differ. Equ. 264, 1119–1145 (2018)

16. Duc, L.H., Hong, P.T.: Asymptotic stability of controlled differential equations. Part I: Young integrals. arXiv preprint arXiv:1905.04945 (2019)

17. Duc, L.H., Schmalfuß, B., Siegmund, S.: A note on the generation of random dynamical systems from fractional stochastic delay differential equations. Stoch. Dyn. 15(3), 1–13 (2015). https://doi.org/10.1142/S0219493715500185

18. Flandoli, F., Schmalfuss, B.: Random attractors for the 3D stochastic Navier–Stokes equation with multiplicative white noise. Stochast. Stochast. Rep. 59(1–2), 21–45 (1996)

19. Friz, P., Victoir, N.: Multidimensional Stochastic Processes as Rough Paths: Theory and Applications. Cambridge Studies in Advanced Mathematics, vol. 120. Cambridge University Press, Cambridge (2010)

20. Imkeller, P., Schmalfuss, B.: The conjugacy of stochastic and random differential equations and the existence of global attractors. J. Dyn. Differ. Equ. 13(2), 215–249 (2001)

21. Johnson, R., Munoz-Villarragut, V.: Some questions concerning attractors for non-autonomous dynamical systems. Nonlinear Anal. 71, e1858–e1868 (2009)

22. Garrido-Atienza, M., Maslowski, B., Schmalfuß, B.: Random attractors for stochastic equations driven by a fractional Brownian motion. Int. J. Bifurcation Chaos 20(9), 2761–2782 (2010)

23. O'Brien, G.L.: The occurrence of large values in stationary sequences. Z. Wahrscheinlichkeitstheorie verw. Gebiete 61, 347–353 (1982)

24. Scheutzow, M.: Comparison of various concepts of random attractor: a case study. Arch. Math. 78, 233–240 (2002)

25. Young, L.C.: An inequality of the Hölder type, connected with Stieltjes integration. Acta Math. 67, 251–282 (1936)


Acknowledgements

This work is supported by Vietnam National Foundation for Science and Technology Development (NAFOSTED) under grant number 101.03-2019.310. P.T. Hong would like to thank the IMU Breakout Graduate Fellowship Program for the financial support.

Author information

Corresponding author

Correspondence to Nguyen Dinh Cong.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

In memory of Russell Johnson.

Appendix

Appendix

1.1 Young Integrals

For \([a,b]\subset {\mathbb {R}}\), denote by \({\mathcal {C}}([a,b],{\mathbb {R}}^d)\) the space of all continuous functions \(y: [a,b]\rightarrow {\mathbb {R}}^d\), equipped with the sup norm

$$\begin{aligned} \Vert y\Vert _{\infty ,[a,b]} = \sup _{t\in [a,b]} \Vert y(t)\Vert , \end{aligned}$$

in which \(\Vert \cdot \Vert \) is the Euclidean norm of a vector in \({\mathbb {R}}^d\). Also, for \(0<\beta \le 1\) denote by \({\mathcal {C}}^{\beta }([a,b],{\mathbb {R}}^d)\) the Banach space of all Hölder continuous paths \(y:[a,b]\rightarrow {\mathbb {R}}^d\) with exponent \(\beta \), equipped with the norm

$$\begin{aligned} \Vert y\Vert _{\infty ,\beta ,[a,b]}:= & {} \Vert y\Vert _{\infty ,[a,b]} + \left| \! \left| \! \left| y\right| \! \right| \! \right| _{\beta ,[a,b]},\ \text {where}\nonumber \\ \left| \! \left| \! \left| y\right| \! \right| \! \right| _{\beta ,[a,b]}:= & {} \sup _{a\le s<t\le b} \frac{\Vert y(t)-y(s)\Vert }{(t-s)^{\beta }}< \infty . \end{aligned}$$
(4.47)

One can easily prove for any \(a\le s\le t\le u\le b\) that

$$\begin{aligned} \left| \! \left| \! \left| y\right| \! \right| \! \right| _{\beta ,[s,u]} \le \left| \! \left| \! \left| y\right| \! \right| \! \right| _{\beta ,[s,t]}+\left| \! \left| \! \left| y\right| \! \right| \! \right| _{\beta ,[t,u]}. \end{aligned}$$

Note that the space \({\mathcal {C}}^{\beta }([a,b],{\mathbb {R}}^d)\) is not separable. However, the closure of \({\mathcal {C}}^{\infty }([a,b],{\mathbb {R}}^d)\) denoted by \({\mathcal {C}}^{0,\beta }([a,b],{\mathbb {R}}^d)\) is a separable space (see [19, Theorem 5.31, p. 96]), which can be defined as

$$\begin{aligned} {\mathcal {C}}^{0,\beta }([a,b],{\mathbb {R}}^d) := \Big \{x \in {\mathcal {C}}^{\beta }([a,b],{\mathbb {R}}^d) \quad | \quad \lim \limits _{h \rightarrow 0} \quad \sup _{a\le s<t\le b, |t-s|\le h} \frac{\Vert x(t)-x(s)\Vert }{(t-s)^{\beta }} = 0 \Big \}. \end{aligned}$$

It is worth mentioning that for \(\beta <\alpha \), \({\mathcal {C}}^{\alpha }([a,b],{\mathbb {R}}^d)\) is a subspace of \({\mathcal {C}}^{0,\beta }([a,b],{\mathbb {R}}^d)\); moreover, the embedding operator

$$\begin{aligned} id: {\mathcal {C}}^{\alpha }([a,b],{\mathbb {R}}^d) \rightarrow {\mathcal {C}}^{\beta }([a,b],{\mathbb {R}}^d) \end{aligned}$$

is compact (see [19, Proposition 5.28, p. 94]).
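As a purely illustrative aside (ours, not part of the original paper), the seminorm in (4.47) can be approximated on a sampled path by maximizing the Hölder quotient over all grid pairs. The following minimal Python sketch does this for a scalar path; the helper name `holder_seminorm` is hypothetical.

```python
import numpy as np

def holder_seminorm(y, t, beta):
    """Discrete approximation of the beta-Hoelder seminorm in (4.47):
    the supremum of |y(t)-y(s)| / (t-s)^beta over grid points s < t."""
    best = 0.0
    for i in range(len(t)):
        for j in range(i + 1, len(t)):
            best = max(best, abs(y[j] - y[i]) / (t[j] - t[i]) ** beta)
    return best

# y(t) = sqrt(t) on [0,1] is 1/2-Hoelder with seminorm 1, attained at s = 0.
t = np.linspace(0.0, 1.0, 200)
print(holder_seminorm(np.sqrt(t), t, beta=0.5))   # approximately 1.0
```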

Now recall that for \(y \in {\mathcal {C}}^{\beta }([a,b],{\mathbb {R}}^{d\times k})\) and \(x \in {\mathcal {C}}^{\nu }([a,b],{\mathbb {R}}^k)\) with \(\beta +\nu >1\), the Young integral \(\int _a^by(t)dx(t)\) exists (see [25]) and satisfies the Young–Loève estimate [19, Theorem 6.8, p. 116],

$$\begin{aligned} \left\| \int _s^ty(u)dx(u) - y(s)[x(t)-x(s)]\right\| \le K (t-s)^{\beta +\nu }\left| \! \left| \! \left| x \right| \! \right| \! \right| _{\nu ,[s,t]} \left| \! \left| \! \left| y \right| \! \right| \! \right| _{\beta ,[s,t]},\quad \forall a \le s\le t \le b, \end{aligned}$$

where \(K:= \frac{1}{1-2^{1-(\beta +\nu )}}\). Hence

$$\begin{aligned} \left\| \int _s^ty(u)dx(u)\right\| \le (t-s)^\nu \left| \! \left| \! \left| x\right| \! \right| \! \right| _{\nu ,[s,t]} \left( \Vert y(s)\Vert +K(t-s)^{\beta }\left| \! \left| \! \left| y\right| \! \right| \! \right| _{\beta ,[s,t]} \right) . \end{aligned}$$
(4.48)
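The Young integral above can be approximated by left-point Riemann–Stieltjes sums, which converge along refining partitions when \(\beta +\nu >1\). The snippet below is a minimal numerical sketch of this construction (the function name `young_integral` is ours, not from the paper), tested on smooth paths where the limit coincides with the classical integral.

```python
import numpy as np

def young_integral(y, x):
    """Left-point Riemann-Stieltjes sum  sum_i y(t_i) (x(t_{i+1}) - x(t_i)),
    which approximates the Young integral int y(u) dx(u) on a fine grid."""
    return float(np.sum(y[:-1] * np.diff(x)))

# Smooth test case: int_0^1 t d(t^2) = int_0^1 2 t^2 dt = 2/3.
t = np.linspace(0.0, 1.0, 2000)
print(young_integral(t, t**2))   # approximately 0.6667
```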

1.2 Tempered Variables

Let \((\Omega ,{\mathcal {F}},{\mathbb {P}})\) be a probability space equipped with an ergodic metric dynamical system \(\theta \), i.e. a measurable, \({\mathbb {P}}\)-preserving mapping \(\theta : {\mathbb {T}} \times \Omega \rightarrow \Omega \), where \({\mathbb {T}}\) is either \({\mathbb {R}}\) or \({\mathbb {Z}}\), such that \(\theta _{t+s} = \theta _t \circ \theta _s\) for all \(t,s \in {\mathbb {T}}\). Recall that a random variable \(\rho : \Omega \rightarrow [0,\infty )\) is called tempered if

$$\begin{aligned} \lim \limits _{t \rightarrow \pm \infty } \frac{1}{t} \log ^+ \rho (\theta _{t}x) =0,\quad \text {a.s.} \end{aligned}$$
(4.49)

which, as shown in [20, p. 220], [22], is equivalent to the sub-exponential growth

$$\begin{aligned} \lim \limits _{t \rightarrow \pm \infty } e^{-c |t|} \rho (\theta _{t}x) =0\quad \text {a.s.}\quad \forall c >0. \end{aligned}$$

Note that our definition of temperedness corresponds to the notion of temperedness from above given in [3, Definition 4.1.1(ii)].
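As a simple illustration of (4.49) (our own, not the authors'), take \(\theta \) to be the shift on an i.i.d. sequence, which is a stationary ergodic system, and let \(\rho \) be lognormal; since \(\log ^+\rho \in L^1\), temperedness follows from Lemma 4.9(iii) below, and the normalized logarithms along the orbit indeed tend to zero. A minimal Python sketch under these assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# theta modelled as the shift on an i.i.d. (stationary, ergodic) sequence;
# rho(theta_n x) is the n-th sample of a lognormal random variable.
n = np.arange(1, 200001)
rho_orbit = rng.lognormal(mean=0.0, sigma=1.0, size=n.size)

# (1/n) log^+ rho(theta_n x) should tend to 0 a.s., i.e. (4.49) holds.
ratio = np.maximum(np.log(rho_orbit), 0.0) / n
print(ratio[-5:])   # values close to 0
```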

Lemma 4.9

  1. (i)

    If \(h_1, h_2\ge 0\) are tempered random variables then \(h_1+h_2\) and \(h_1h_2\) are tempered random variables.

  2. (ii)

    If \(h_1\ge 0\) is a tempered random variable, \(h_2\ge 0\) is a measurable random variable and \(h_2\le h_1\) almost surely, then \(h_2\) is a tempered random variable.

  3. (iii)

    Let \(h_1\) be a nonnegative measurable function. If \(\log ^+ h_1\in L^1\) then \(h_1\) is tempered.

Proof

(i):

See [3, Lemma 4.1.2, p. 164].

(ii):

Immediate from the definition of tempered random variable, formula (4.49).

(iii):

See [3, Proposition 4.1.3, p. 165].

\(\square \)

Lemma 4.10

(i) Let \(a : \Omega \rightarrow [0,\infty )\) be a random variable, \(\log (1+a(\cdot ))\in L^1\) and \({\hat{a}} := E\log (1+a(\cdot )) = \int _\Omega \log (1+a(\cdot )) d{\mathbb {P}}\). Let \(\lambda > {\hat{a}}\) be an arbitrary fixed positive number. Put

$$\begin{aligned} b(x) := \sum _{k=1}^\infty e^{-\lambda k} \prod _{i=0}^{k-1} (1+ a(\theta _{-i} x)). \end{aligned}$$

Then \(b(\cdot )\) is a nonnegative almost everywhere finite and tempered random variable.

(ii) Let \(c : \Omega \rightarrow [0,\infty )\) be a tempered random variable, and \(\delta >0\) be an arbitrary fixed positive number. Put

$$\begin{aligned} d(x) := \sum _{k=1}^\infty e^{-\delta k} c(\theta _{-k} x). \end{aligned}$$

Then \(d(\cdot )\) is a nonnegative almost everywhere finite and tempered random variable.

Proof

(i) Put \(b_n(x) := \sum _{k=1}^n e^{-\lambda k} \prod _{i=0}^{k-1} (1+ a(\theta _{-i} x))\). Then \(b_n(\cdot )\), \(n\in {\mathbb {N}}\), is an increasing sequence of nonnegative random variables, hence it converges to the nonnegative random variable \(b(\cdot )\). Since \(\log (1+a(\cdot ))\in L^1\), by the Birkhoff ergodic theorem there exists a \(\theta \)-invariant set \(\Omega '\subset \Omega \) of full measure such that for all \(x\in \Omega '\) we have \(\lim _{n\rightarrow \infty } \frac{1}{n}\sum _{i=0}^{n-1}\log (1+ a(\theta _{-i} x)) = {\hat{a}}\). Hence, given any fixed \(\delta \in (0,\lambda -{\hat{a}})\), for all \(n\) big enough \(\prod _{i=0}^{n-1} (1+ a(\theta _{-i} x)) < \exp \big (({\hat{a}} +\delta )n\big )\), so the \(k\)-th summand of \(b_n(x)\) is eventually dominated by \(e^{-(\lambda -{\hat{a}}-\delta )k}\). Consequently, the sequence \(b_n(\cdot )\), \(n\in {\mathbb {N}}\), tends to the limit \(b(\cdot )\), which is finite almost surely.

Now we show that \(b(\cdot )\) is tempered. For \(m\in {\mathbb {N}}\) and \(x\in \Omega '\) we obtain

$$\begin{aligned} b(\theta _{-m} x)= & {} \sum _{k=1}^\infty e^{-\lambda k} \prod _{i=0}^{k-1} (1+ a(\theta _{-i} \theta _{-m} x)) = \sum _{k=1}^\infty e^{-\lambda k} \prod _{j=m}^{k+m-1} (1+ a(\theta _{-j} x))\\\le & {} e^{\lambda m} \sum _{l=1+m}^\infty e^{-\lambda l} \prod _{j=0}^{l-1} (1+ a(\theta _{-j} x)) \le e^{\lambda m} \sum _{l=1}^\infty e^{-\lambda l} \prod _{j=0}^{l-1} (1+ a(\theta _{-j} x)) = e^{\lambda m} b(x). \end{aligned}$$

This implies that \(\limsup _{m\rightarrow \infty } \frac{1}{m} \log ^+ b(\theta _{-m} x) \le \lambda \). By virtue of [3, Proposition 4.1.3(i), p. 165] and [23, Lemma 4, Corollary 4], for all \(x\in \Omega '\) we have

$$\begin{aligned} \limsup _{m\rightarrow \infty } \frac{1}{m} \log ^+ b(\theta _{-m} x) = \limsup _{m\rightarrow -\infty } \frac{1}{-m} \log ^+ b(\theta _{-m} x)= 0, \end{aligned}$$

which proves that \(b(\cdot )\) is tempered.

(ii) Put \(d_n(x) := \sum _{k=1}^n e^{-\delta k} c(\theta _{-k} x)\). Then \(d_n(\cdot )\), \(n\in {\mathbb {N}}\), is an increasing sequence of nonnegative random variables, hence it converges to the nonnegative random variable \(d(\cdot )\). By temperedness of \(c(\cdot )\) we can find a measurable set \({{\tilde{\Omega }}}\subset \Omega \) of full measure such that for all \(x\in {{\tilde{\Omega }}}\) there exists \(n_0(x) >0\) such that for all \(n\ge n_0(x)\) we have \(c(\theta _{-n} x) \le e^{n\delta /2}\), and hence \(e^{-\delta n} c(\theta _{-n} x) \le e^{-\delta n/2}\). Therefore \(d_n(x)\), \(n\in {\mathbb {N}}\), is an increasing sequence of nonnegative numbers tending to the finite value \(d(x)\). Thus \(d(\cdot )\) is finite almost everywhere. Furthermore, for \(m\in {\mathbb {N}}\) and \(x\in {{\tilde{\Omega }}}\),

$$\begin{aligned} d(\theta _{-m} x)= & {} \sum _{k=1}^\infty e^{-\delta k} c(\theta _{-k} \theta _{-m} x) = \sum _{l=m+1}^\infty e^{-\delta (l-m)} c(\theta _{-l} x)\\\le & {} e^{\delta m} \sum _{l=1}^\infty e^{-\delta l} c(\theta _{-l} x) = e^{\delta m} d(x). \end{aligned}$$

This implies that \(\limsup _{m\rightarrow \infty } \frac{1}{m} \log ^+ d(\theta _{-m} x) \le \delta \). Similar to (i) above, \(d(\cdot )\) is tempered. \(\square \)
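To see the convergence in Lemma 4.10(i) concretely, here is a small numerical sketch (our own illustration, under the assumption that \(\theta \) is the shift on an i.i.d. nonnegative sequence): with \(a_k\) exponentially distributed and any \(\lambda >{\hat{a}}\), the partial sums \(b_n\) stabilize at a finite value.

```python
import numpy as np

rng = np.random.default_rng(1)

# a_k ~ Exp(1), i.i.d., standing in for a(theta_{-k} x); then
# hat_a = E log(1 + a) is roughly 0.6.
a = rng.exponential(scale=1.0, size=5000)
hat_a = np.mean(np.log1p(a))
lam = hat_a + 0.5                              # any lambda > hat_a works

# Partial sums b_n = sum_{k=1}^n exp(-lam*k) * prod_{i<k} (1 + a_i).
log_prod = np.cumsum(np.log1p(a))              # log of the products, k = 1..n
b_n = np.cumsum(np.exp(-lam * np.arange(1, a.size + 1) + log_prod))

# The tail contribution is negligible: b_n has essentially converged.
print(hat_a, b_n[-1], b_n[-1] - b_n[len(b_n) // 2])
```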

1.3 Gronwall Lemma

Lemma 4.11

(Continuous Gronwall Lemma) Let \([t_0,T]\) be an interval in \({\mathbb {R}}\). Assume that \(u(\cdot ), a(\cdot ): [t_0,T] \rightarrow {\mathbb {R}}^+\) are continuous functions and \(\beta >0\) is a constant such that

$$\begin{aligned} u(t) \le a(t) + \int _{t_0}^t \beta u(s) ds, \quad \forall t \in [t_0,T]. \end{aligned}$$

Then the following inequality holds

$$\begin{aligned} u(t) \le a(t) + \int _{t_0}^t a(s) \beta e^{\beta (t-s)} ds, \quad \forall t \in [t_0,T]. \end{aligned}$$

Proof

See [2, Lemma 6.1, p 89]. \(\square \)

Lemma 4.12

(Discrete Gronwall Lemma) Let \(a\) be a nonnegative constant and \(u_n, \alpha _n,\beta _n\) be nonnegative sequences satisfying, for all \(n\in {\mathbb {N}}\), \(n\ge 0\), the inequalities

$$\begin{aligned} u_n\le a + \sum _{k=0}^{n-1} \alpha _ku_k + \sum _{k=0}^{n-1} \beta _k. \end{aligned}$$

Then for all \(n\in {\mathbb {N}}\), \(n\ge 1\), the following inequality holds

$$\begin{aligned} u_n\le \max \{a,u_0\}\prod _{k=0}^{n-1} (1+\alpha _k) + \sum _{k=0}^{n-1}\beta _k\prod _{j=k+1}^{n-1}(1+\alpha _j). \end{aligned}$$
(4.50)

Proof

See [16]. \(\square \)
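For a quick sanity check of (4.50) (our own sketch, not taken from [16]): iterate the recursion with equality for random nonnegative \(\alpha _k,\beta _k\) and compare with the right-hand side of (4.50).

```python
import numpy as np

rng = np.random.default_rng(2)

a = 1.0
n = 50
alpha = rng.uniform(0.0, 0.2, size=n)
beta = rng.uniform(0.0, 0.1, size=n)

# u_n defined with equality in the hypothesis of Lemma 4.12, u_0 = a.
u = np.empty(n + 1)
u[0] = a
for m in range(1, n + 1):
    u[m] = a + np.dot(alpha[:m], u[:m]) + np.sum(beta[:m])

# Right-hand side of (4.50).
bound = [max(a, u[0])]
for m in range(1, n + 1):
    lead = max(a, u[0]) * np.prod(1.0 + alpha[:m])
    tail = sum(beta[k] * np.prod(1.0 + alpha[k + 1:m]) for k in range(m))
    bound.append(lead + tail)

print(np.all(u <= np.array(bound) + 1e-12))   # True
```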

Cite this article

Cong, N.D., Duc, L.H. & Hong, P.T. Pullback Attractors for Stochastic Young Differential Delay Equations. J Dyn Diff Equat 34, 605–636 (2022). https://doi.org/10.1007/s10884-020-09894-9
