
Lyapunov Spectrum of Nonautonomous Linear Young Differential Equations

Abstract

We show that a linear Young differential equation generates a topological two-parameter flow, so that the notions of Lyapunov exponents and Lyapunov spectrum are well-defined. The spectrum can be computed using the discretized flow and is independent of the driving path for triangular systems which are regular in the sense of Lyapunov. In the stochastic setting, the system generates a stochastic two-parameter flow which satisfies the integrability condition, hence the Lyapunov exponents are random variables with finite moments. Finally, we prove a Millionshchikov theorem stating that almost all, in the sense of an invariant measure, linear nonautonomous Young differential equations are Lyapunov regular.

Introduction

In this article we study the Lyapunov spectrum of the nonautonomous linear Young differential equation (abbreviated by YDE)

$$\begin{aligned} dx(t) = A(t)x(t) dt + C(t) x(t) d \omega (t),\ x(t_0)=x_0 \in {\mathbb {R}}^d, t\ge t_0, \end{aligned}$$
(1.1)

where A, C are continuous matrix-valued functions on \([0,\infty )\), and \(\omega \) is a continuous path on \([0,\infty )\) having finite p-th variation on each compact interval of \([0,\infty )\), for some \(p \in (1,2)\). Such a system (1.1) appears, for instance, when considering the linearization of the autonomous Young differential equation

$$\begin{aligned} dy(t) = f(y(t))dt + g(y(t))d\omega (t) \end{aligned}$$
(1.2)

along any reference solution \(y(t, y_0,\omega )\). An example arises when one wishes to solve, in the pathwise sense, stochastic differential equations driven by fractional Brownian motions with Hurst index \(H \in (\frac{1}{2},1)\) defined on a complete probability space \((\Omega ,{{\mathcal {F}}},{{\mathbb {P}}})\) [24]. In fact, it follows from [5] that (1.2) in this stochastic setting also satisfies the integrability condition.

The Eq. (1.1) can be rewritten in the integral form

$$\begin{aligned} x(t) = x_0 + \int _{t_0}^t A(s)x(s)ds + \int _{t_0}^t C(s)x(s)d\omega (s), \ t\ge t_0, \end{aligned}$$
(1.3)

where the second integral is understood in the Young sense [28], which can also be presented in terms of fractional derivatives [29]. Under some mild conditions, the unique solution of (1.1) generates a two-parameter flow \(\Phi _\omega (t_0,t)\), as seen in [9]. Under a certain stochastic setting, (1.1) actually generates a stochastic two-parameter flow in the sense of Kunita [16].

Our aim is to study the Lyapunov exponents and Lyapunov spectrum of the linear two-parameter flow generated by the Young Eq. (1.1). Notice that the Lyapunov spectrum and its splitting are the main content of the celebrated multiplicative ergodic theorem (MET) by Oseledets [25]. It was also investigated by Millionshchikov in [18,19,20,21] for linear nonautonomous differential equations. In the stochastic setting, the MET is also formulated for random dynamical systems in [1, Chapter 3]. Further investigations can be found in [6,7,8, 10] for stochastic flows generated by nonautonomous linear stochastic differential equations driven by standard Brownian motion.

For Young equations, we show that Lyapunov exponents can be computed based on the discretization scheme. Moreover, if the driving path \(\omega \) satisfies certain conditions, the Lyapunov spectrum can be computed independently of \(\omega \) for triangular systems (i.e. both AC are upper triangular matrices) which are Lyapunov regular.

One important issue is the non-randomness of Lyapunov exponents when the system is considered under a certain stochastic setting, namely if the driving path \(\omega \) is a realization of a certain stochastic noise. In case the system is driven by standard Brownian noise, a filtration of independent \(\sigma \)-algebras can be constructed and the argument of Kolmogorov’s zero-one law can be applied to prove the non-randomness of the Lyapunov exponents, which are measurable with respect to the tail \(\sigma \)-algebra, see [7].

In general, the stochastic noise might be a fractional Brownian motion, which is not Markov, hence it is difficult to construct such a filtration and to apply Kolmogorov’s zero-one law. The question of non-randomness of the Lyapunov spectrum is therefore still open. However, the answer is affirmative for some special cases. For example, autonomous and periodic systems can generate random dynamical systems satisfying the integrability condition, thus the Lyapunov spectrum is non-random by the multiplicative ergodic theorem [1]. Our investigation shows that the Lyapunov spectrum of triangular systems which are Lyapunov regular is also non-random. In general, we expect that the statement of non-randomness of the Lyapunov spectrum remains true for any Lyapunov regular system, although finding a counterexample of a nonautonomous system with random Lyapunov spectrum would also be of interest.

The paper is organized as follows. In Sect. 2, we prove in Proposition 2.4 the generation of a two-parameter flow from the unique solution of (1.1). The concepts of Lyapunov exponents and Lyapunov spectrum of system (1.1) are then defined in Sect. 3. Under the assumptions on the driving path \(\omega \) and on the coefficient functions, we prove in Theorem 3.3 that the Lyapunov spectrum can be computed using the discretized flow, and give an explicit formula for the spectrum in Theorem 3.7 in the case of triangular systems which are regular in the sense of Lyapunov. Theorem 3.11 provides a criterion for a triangular system of YDEs to be Lyapunov regular. In Sect. 4, we consider the system from a random perspective, in which the driving path acts as a realization of a stationary stochastic process in a function space equipped with a probabilistic framework. The system is then proved to generate a stochastic two-parameter flow which satisfies the integrability condition, hence the Lyapunov exponents are proved in Theorem 4.3 to be random variables with finite moments. Section 4.2 is devoted to studying the regularity of the system, where we prove a Millionshchikov Theorem 4.6 stating that almost all, in the sense of an invariant measure, nonautonomous linear Young differential equations are Lyapunov regular. We end with a discussion of the non-randomness of the Lyapunov spectrum in some special cases, and raise this interesting question in general.

Preliminary

In this section we present some well-known facts about Young differential equations and two-parameter flows. Let \(0 \le T_1< T_2 < \infty \). Denote by \({\mathcal {C}}([T_1,T_2], {\mathbb {R}}^{d\times d})\) the Banach space of continuous matrix-valued functions on \([T_1,T_2]\) equipped with the sup norm \(\Vert \cdot \Vert _{\infty ,[T_1,T_2]}\), and by \({\mathcal {C}}^{r\mathrm{-var}}([T_1,T_2],{\mathbb {R}}^d)\) the Banach space of continuous functions of bounded \(r-\)variation on \([T_1,T_2]\) with values in \({\mathbb {R}}^d\), equipped with the norm

$$\begin{aligned} \Vert u\Vert _{r\mathrm{-var},[T_1,T_2]}=|u(T_1)|+\left| \left| \left| u \right| \right| \right| _{r\mathrm{-var},[T_1,T_2]} <\infty , \end{aligned}$$

in which \(|\cdot |\) is the Euclidean norm and \(\left| \left| \left| \cdot \right| \right| \right| _{r\mathrm{-var},[T_1,T_2]}\) is the seminorm defined by

$$\begin{aligned} \left| \left| \left| u\right| \right| \right| _{r\mathrm{-var},[T_1,T_2]} = \left( \sup _{\Pi (T_1,T_2)}\sum _{i=0}^{n-1}|u(t_{i+1})-u(t_i)|^r\right) ^{1/r},\quad u\in {\mathcal {C}}^{r\mathrm {-}\mathrm {var}}([T_1,T_2],{\mathbb {R}}^d), \end{aligned}$$

where the supremum is taken over the whole class of finite partitions \(\Pi (T_1,T_2)=\{ T_1= t_0<t_1<\cdots < t_n=T_2\}\) of \([T_1,T_2]\). For each \(0<\alpha <1\), we denote by \({\mathcal {C}}^{\alpha \mathrm {-Hol}}([T_1,T_2],{\mathbb {R}}^d)\) the space of \(\alpha -\)Hölder continuous functions on \([T_1,T_2]\) equipped with the norm

$$\begin{aligned} \Vert u\Vert _{\alpha ,[T_1,T_2]}: = \Vert u\Vert _{\infty ,[T_1,T_2]} +\left| \left| \left| u \right| \right| \right| _{\alpha \mathrm{-Hol},[T_1,T_2]}, \end{aligned}$$

in which \(\Vert u \Vert _{\infty ,[T_1,T_2]}:= \sup _{t\in [T_1,T_2]}|u(t)|\) and \(\left| \left| \left| u \right| \right| \right| _{\alpha \mathrm{-Hol},[T_1,T_2]} = \sup _{T_1\le s<t\le T_2}\frac{|u(t)-u(s)|}{(t-s)^\alpha }\). It is obvious that for all \(u\in {\mathcal {C}}^{\alpha \mathrm {-Hol}}([T_1,T_2],{\mathbb {R}}^d)\),

$$\begin{aligned} \left| \left| \left| u \right| \right| \right| _{r\mathrm{-var},[T_1,T_2]}\le (T_2-T_1)^\alpha \left| \left| \left| u \right| \right| \right| _{\alpha \mathrm{-Hol},[T_1,T_2]}, \end{aligned}$$

with \(\alpha =1/r\). Moreover, we have the following estimate, whose proof follows directly from the definitions of the \(p-\)var seminorm and the sup norm and will be omitted here.

Lemma 2.1

Let \(t_0\ge 0\) and \(T>0\) be arbitrary. If \(C\in {\mathcal {C}}^{q\mathrm{-var}}([t_0,t_0+T],{\mathbb {R}}^{d\times d})\), \( x\in {\mathcal {C}}^{q\mathrm {\mathrm{-var}}}([t_0,t_0+T],{\mathbb {R}}^{d})\), then for all \(s<t\) in \([t_0,t_0+T]\),

$$\begin{aligned} \left| \left| \left| Cx\right| \right| \right| _{q\mathrm{-var},[s,t]}\le \Vert C\Vert _{\infty ,[s,t]}\left| \left| \left| x\right| \right| \right| _{q\mathrm{-var},[s,t]} + \Vert x\Vert _{\infty ,[s,t]}\left| \left| \left| C\right| \right| \right| _{q\mathrm{-var},[s,t]}. \end{aligned}$$

Now, for \(x\in {\mathcal {C}}^{q\mathrm {-}\mathrm {var}}([T_1,T_2],{\mathbb {R}}^{d\times m})\) and \(\omega \in {\mathcal {C}}^{p\mathrm {-}\mathrm {var}}([T_1,T_2],{\mathbb {R}}^m)\) with \(p,q \ge 1\) and \(\frac{1}{p}+\frac{1}{q} > 1\), the Young integral \(\int _a^bx(t)d\omega (t)\) can be defined as

$$\begin{aligned} \int _a^bx(t)d\omega (t):= \lim \nolimits _{|\Pi | \rightarrow 0} \sum _{t_i\in \Pi } x(t_i)(\omega (t_{i+1})-\omega (t_i)), \end{aligned}$$

where the limit is taken over all finite partitions \(\Pi = \{T_1 = t_0< t_1< \ldots < t_n = T_2\}\) with \(|\Pi |:= \displaystyle \max \nolimits _{0\le i \le n-1} |t_{i+1}-t_i|\) (see [28, p. 264–265]). This integral is additive by construction and satisfies the so-called Young–Loève estimate [12, Theorem 6.8, p. 116]

$$\begin{aligned} \Big |\int _s^t x(u)d\omega (u)-x(s)[\omega (t)-\omega (s)]\Big | \le K \left| \left| \left| x\right| \right| \right| _{q\mathrm{-var},[s,t]} \left| \left| \left| \omega \right| \right| \right| _{p\mathrm{-var},[s,t]}, \end{aligned}$$
(2.1)

where

$$\begin{aligned} K:=(1-2^{1-\theta })^{-1},\qquad \theta := \frac{1}{p} + \frac{1}{q} >1. \end{aligned}$$
(2.2)
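
To make the definition concrete, the following minimal Python sketch (not part of the paper; the particular integrand \(x(t)=t\), the driver \(\omega (t)=t^{0.7}\) and the uniform partition are illustrative assumptions) approximates a Young integral by the left-point Riemann–Stieltjes sums appearing in the definition above; since \(\frac{1}{p}+\frac{1}{q}>1\) for these choices, the sums converge as the mesh tends to zero.

```python
# Hypothetical illustration: approximate the Young integral int_a^b x(t) d omega(t)
# by its defining left-point Riemann-Stieltjes sums on a uniform partition.
import numpy as np

def young_integral(x, omega, a, b, n=4000):
    """Sum of x(t_i) * (omega(t_{i+1}) - omega(t_i)) over a uniform partition of [a, b]."""
    t = np.linspace(a, b, n + 1)
    return float(np.sum(x(t[:-1]) * np.diff(omega(t))))

# Example: x(t) = t and omega(t) = t^0.7, which has finite p-variation for p > 1/0.7.
approx = young_integral(lambda t: t, lambda t: t**0.7, 0.0, 1.0)
print(approx, 0.7 / 1.7)   # exact value of int_0^1 t d(t^0.7) is 0.7/1.7
```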

Now for any \(\omega \in {\mathcal {C}}^{p\mathrm{-var}}([t_0,t_0+T],{\mathbb {R}})\) with some \(1< p<2 \), we consider the deterministic Young equation

$$\begin{aligned} x(t)=x_0 + \int _{t_0}^tA(s)x(s)ds+\int _{t_0}^tC(s)x(s)d\omega (s), \end{aligned}$$
(2.3)

in which \(A\in {\mathcal {C}}([t_0,t_0+T],{\mathbb {R}}^{d\times d}),C\in {\mathcal {C}}^{q\mathrm{-var}}([t_0,t_0+T],{\mathbb {R}}^{d\times d})\) with \( q > p\) and \(\frac{1}{q}+\frac{1}{p}>1\). We first show that under mild conditions on the coefficient functions A, C, Eq. (2.3) has a unique solution in \( {\mathcal {C}}^{q\mathrm{-var}}([t_0,t_0+T],{\mathbb {R}}^{d})\).

Proposition 2.2

Fix \([t_0,t_0+T]\) and consider \(\omega \) varying as an element of the Banach space \({\mathcal {C}}^{p\mathrm{-var}}([t_0,t_0+T])\). Assume that \(A\in {\mathcal {C}}([t_0,t_0+T],{\mathbb {R}}^{d\times d}),C\in {\mathcal {C}}^{q\mathrm{-var}}([t_0,t_0+T],{\mathbb {R}}^{d\times d})\) with \(q>p\) and \(\frac{1}{q}+\frac{1}{p}>1\). Then Eq. (2.3) has a unique solution \(x (\cdot ,t_0,x_{0},\omega )\) in the space \({\mathcal {C}}^{p\mathrm{-var}}([t_0,t_0+T],{\mathbb {R}}^d)\) which satisfies

$$\begin{aligned}&(i)\;\; \Vert x(\cdot ,t_0,x_0,\omega )\Vert _{\infty ,[t_0,t_0+T]}\nonumber \\&\quad \le |x_0| \exp \Big \{\eta \Big [2+ \Big (\frac{2M^*}{\mu }\Big )^p\Big (T^p+\left| \left| \left| \omega \right| \right| \right| ^p_{p\mathrm{-var},[t_0,t_0+T]}\Big )\Big ]\Big \}, \end{aligned}$$
(2.4)
$$\begin{aligned}&(ii)\;\; \left| \left| \left| x(\cdot ,t_0,x_0,\omega )\right| \right| \right| _{p\mathrm{-var},[t_0,t_0+T]}\nonumber \\&\quad \le |x_0| \exp \Big \{(1+\eta )\Big [3+ \Big (\frac{2M^*}{\mu }\Big )^p\Big (T^p+\left| \left| \left| \omega \right| \right| \right| ^p_{p\mathrm{-var},[t_0,t_0+T]}\Big )\Big ]\Big \} \end{aligned}$$
(2.5)

where

$$\begin{aligned} M^*=M^*(t_0,T):=\max \{\Vert A\Vert _{\infty ,[t_0,t_0+T]}, 2K \Vert C\Vert _{q\mathrm{-var},[t_0,t_0+T]}\}<\infty , \end{aligned}$$
(2.6)

K is defined in (2.2), \(\mu \) is a constant such that \(0<\mu <\min \{1,M^*\}\) and \(\eta = -\,\log (1-\mu )\). In addition, the solution mapping

$$\begin{aligned} X: {\mathbb {R}}^d\times {\mathcal {C}}^{p\mathrm{-var}}([t_0,t_0+T],{\mathbb {R}})\longrightarrow & {} {\mathcal {C}}^{p\mathrm{-var}}([t_0,t_0+T],{\mathbb {R}}^d) \\ (x_0,\omega )\mapsto & {} x(\cdot ,t_0,x_0,\omega ). \end{aligned}$$

is continuous w.r.t \((x_0,\omega )\).

Proof

See the “Appendix”. \(\square \)

Remark 2.3

  1. (i)

    Fix \([t_0,t_0+T]\). By considering the backward equation, similarly to [9], we can draw the same conclusions on the existence and uniqueness of the solution of the backward equation starting at an arbitrary point \(a\in [t_0,t_0+T]\). Moreover, it can be proved that the solution mapping X is continuous with respect to \((a,x_0,\omega ) \in [t_0,t_0+T]\times {\mathbb {R}}^d\times {\mathcal {C}}^{p\mathrm{-var}}([t_0,t_0+T],{\mathbb {R}})\).

  2. (ii)

    If \(\omega \in {\mathcal {C}}^{1/p\mathrm {-Hol}}([t_0,t_0+T],{\mathbb {R}})\subset {\mathcal {C}}^{p\mathrm{-var}}([t_0,t_0+T],{\mathbb {R}})\) then similar arguments prove that the solution is \(1/p-\)Hölder continuous and the solution mapping X is continuous with respect to \((a,x_0,\omega ) \in [t_0,t_0+T]\times {\mathbb {R}}^d\times {\mathcal {C}}^{1/p\mathrm {-Hol}}([t_0,t_0+T],{\mathbb {R}})\).

For any \(t_0\le t_1\le t_2\le t_0+ T\) the Cauchy operator \(\Phi _\omega (t_1,t_2): {\mathbb {R}}^d \rightarrow {\mathbb {R}}^d\) of the YDE (1.1) is defined as \(\Phi _{\omega }(t_1,t_2)x_{t_1}:= x(t_2,t_1,x_{t_1},\omega )\) for any vector \(x_{t_1}\in {\mathbb {R}}^d\).

Following [1, p. 551], a family of mappings \(X_{s,t}: {\mathbb {R}}^d \rightarrow {\mathbb {R}}^d\) depending on two real variables \(s,t\in [a,b] \subset {\mathbb {R}}\) is called a two-parameter flow of homeomorphisms of \({\mathbb {R}}^d\) on \([a,b]\) if the mapping \(X_{s,t}\) is a homeomorphism of \({\mathbb {R}}^d\); \(X_{s,s} = id\); \(X_{s,t}^{-1} = X_{t,s}\) and \(X_{s,t} = X_{u,t}\circ X_{s,u}\) for any \(s,t,u\in [a,b]\). If in addition \(X_{s,t}\) is a linear operator for all \(s,t \in [a,b]\), then the family \(X_{s,t}\) is called a two-parameter flow of linear operators of \({\mathbb {R}}^d\) on \([a,b]\).

Proposition 2.4

Suppose that the assumptions of Proposition 2.2 are satisfied. Then the Eq. (1.1) generates a two-parameter flow of linear operators of \({\mathbb {R}}^d\) by means of its Cauchy operators.

Proof

First note that the same method as in the proof of Proposition 2.2 can be applied to prove the existence and uniqueness of the solution \(\Phi _\omega (t_0,t)\) of the matrix-valued differential equation

$$\begin{aligned} \Phi (t) = I + \int _{t_0}^t A(s)\Phi (s)ds + \int _{t_0}^t C(s)\Phi (s)d\omega (s),\ t \in [t_0,t_0+T]. \end{aligned}$$
(2.7)

It is easy to show that the solution \(\Phi _\omega (\cdot ,\cdot ): \Delta ^2 \rightarrow {\mathbb {R}}^{d\times d}\), with \(\Delta ^2 :=\{(s,t) \in [t_0,t_0+T] \times [t_0,t_0+T]: s\le t\}\), has the properties that \(\Phi _\omega (s,s)= I_{d\times d}\) for all \(s\in [t_0,t_0+T]\) and

$$\begin{aligned} \Phi _\omega (s,t)\circ \Phi _\omega (\tau ,s) = \Phi _\omega (\tau ,t),\quad \forall t_0\le \tau \le s \le t \le t_0+T. \end{aligned}$$
(2.8)

The solution \(\Phi _\omega (\cdot ,\cdot )\) is the mapping along trajectories of (2.3) in forward time, since the YDE is solved forward in time. As in the ODE case, in our setting the solution of the matrix Eq. (2.7) is the Cauchy operator of the vector Eq. (2.3).

Next, consider the adjoint matrix-valued pathwise differential equation

$$\begin{aligned} d\Psi (t_0,t) = -\,A^T(t) \Psi (t_0,t) dt -C^T(t) \Psi (t_0,t) d\omega (t) \end{aligned}$$
(2.9)

with initial value \(\Psi (t_0,t_0) =I\), where \(A^T(\cdot ), C^T(\cdot )\) are the transposes of \(A(\cdot )\) and \(C(\cdot )\), respectively. By similar arguments we can prove that there exists a unique solution \(\Psi _\omega (t_0,t)\) of (2.9). Introduce the transformation \(u(t) = \Psi _\omega (t_0,t)^T x(t)\). By the integration by parts formula (see [12, Proposition 6.12 and Exercise 6.13] or a fractional version in Zähle [29]), we conclude that

$$\begin{aligned} du(t)= & {} [d\Psi _\omega (t_0,t)^T ] x(t) + \Psi _\omega (t_0,t)^T dx(t) \\= & {} [-\Psi _\omega (t_0,t)^T A(t) dt - \Psi _\omega (t_0,t)^T C(t) d\omega (t) ] x(t)\\&+\,\Psi _\omega (t_0,t)^T [A(t)x(t) dt + C(t)x(t) d\omega (t)] \\= & {} 0. \end{aligned}$$

In other words, \(u(t) = u(t_0)=x(t_0)=x_0\), or equivalently \(\Psi _\omega (t_0,t)^T x(t) = x_0\). Combining with \(\Phi \) in Eq. (2.7) we conclude that \(\Psi _\omega (t_0,t)^T \Phi _\omega (t_0,t)x_0 = x_0\) for all \(x_0 \in {\mathbb {R}}^d\), hence \(\Phi _\omega (t_0,t)\) is invertible with \(\Phi _\omega (t_0,t)^{-1} = \Psi _\omega (t_0,t)^T\). As a result, for any \(x_0 \ne 0\) we have \(\Phi _\omega (t_0,t)x_0 \ne 0 \) for all \(t\ge t_0\). Thus we have shown that the linear operator \(\Phi _\omega (t_0,t)\), \(t\ge t_0\), is nondegenerate. Similarly, for all \(t_0\le s\le t\le t_0+T\) the operator \(\Phi _\omega (s,t)\) is nondegenerate and \(\Phi _\omega (s,t)^{-1} = \Psi _\omega (s,t)^T\). Putting \(\Phi _\omega (t,s) := \Psi _\omega (s,t)^T\) for \(t_0\le s\le t\le t_0+T\), we have defined the family \(\Phi _\omega (t,s)\) for all \(s,t\in [t_0,t_0+T]\), and it is clearly a continuous two-parameter flow generated by (2.3). \(\square \)

Remark 2.5

Using the solution formula for the one-dimensional system as in Sect. 3, one derives a Liouville-like formula as follows:

$$\begin{aligned} \det \Phi _\omega (t_0,t) = \exp \left\{ \int _{t_0}^t \mathrm{trace\ } A(s)ds + \int _{t_0}^t \mathrm{trace\ } C(s)d\omega (s)\right\} , \end{aligned}$$

which also proves the invertibility of \(\Phi _\omega (t_0,t)\).
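
As an illustration only (not part of the paper's argument), the Liouville-like formula can be checked numerically; in the hypothetical sketch below the matrices A, C, the driver \(\omega (t)=t^{0.7}\) and the first-order pathwise Euler scheme are illustrative assumptions.

```python
# Hypothetical check of det Phi_omega(0,t) = exp( int_0^t tr A ds + int_0^t tr C d omega )
# for an illustrative 2x2 system driven by omega(t) = t^0.7.
import numpy as np

A = lambda t: np.array([[np.sin(t), 1.0], [0.0, -0.5]])
C = lambda t: np.array([[0.3, np.cos(t)], [0.0, 0.1]])
omega = lambda t: t**0.7

T, n = 1.0, 20000
t = np.linspace(0.0, T, n + 1)
dt, dw = np.diff(t), np.diff(omega(t))

Phi = np.eye(2)                                   # Euler-type scheme for the matrix equation (2.7)
for k in range(n):
    Phi = Phi + (A(t[k]) * dt[k] + C(t[k]) * dw[k]) @ Phi

trA = float(np.sum(np.array([np.trace(A(s)) for s in t[:-1]]) * dt))
trC = float(np.sum(np.array([np.trace(C(s)) for s in t[:-1]]) * dw))
print(np.linalg.det(Phi), np.exp(trA + trC))      # the two values should nearly coincide
```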

Lyapunov Spectrum for Nonautonomous Linear System of YDEs

The classical Lyapunov spectrum of a linear system of ordinary differential equations (henceforth abbreviated as ODEs) is a powerful tool for investigating the qualitative behavior of the system, see e.g. [4, 23]. Since (1.3) generates a two-parameter flow of homeomorphisms, we can instead study the Lyapunov spectrum of the flow generated by the equation.

Exponents and Spectrum

We aim to follow the technique in [7, 18, 19]. From now on, let us consider the following assumptions on the coefficients of (1.3).

(\({\mathbf{H }}_1\)) \({\hat{A}}:=\Vert A\Vert _{\infty ,{\mathbb {R}}^+} < \infty .\)

(\({\mathbf{H }}_2\)) For some \(\delta >0\), \({\hat{C}}:=\Vert C\Vert _{q\mathrm{-var},\delta ,{\mathbb {R}}^+}:= \displaystyle \sup \nolimits _{0\le t-s \le \delta } \Vert C\Vert _{q\mathrm{-var},[s,t]}< \infty \).

In (\({\mathbf{H }}_2\)) we can assume, without loss of generality that \(\delta = 1\). Put

$$\begin{aligned} M_0:=\max \{{\hat{A}}, 2K{\hat{C}}\} \end{aligned}$$
(3.1)

where K is given by (2.2). It is obvious from (2.6) that, for any \(t_0\in {\mathbb {R}}^+\),

$$\begin{aligned} M^*(t_0,1)\le M_0. \end{aligned}$$

Note that conditions (\({\mathbf{H }}_1\)), (\({\mathbf{H }}_2\)) and Proposition 2.2 assure the existence and uniqueness of the solution of (1.3) on \({\mathbb {R}}^+\). Moreover, Proposition 2.4 asserts that (1.3) generates a two-parameter flow on \({\mathbb {R}}^d\) by means of its Cauchy operators \(\Phi _\omega (\cdot ,\cdot )\), and \(\Phi _\omega (s,t)x_0\) represents the value at time \(t\in {\mathbb {R}}^+\) of the solution of (1.3) started at \(x_0\in {\mathbb {R}}^d\) at time \(s\in {\mathbb {R}}^+\). Following [7], we first introduce the notion of Lyapunov exponents of a two-parameter flow of linear operators, and then use it to define the Lyapunov exponents of the equation. We shall denote by \({{\mathcal {G}}}_k\) the Grassmannian manifold of all linear k-dimensional subspaces of \({\mathbb {R}}^d\).

Recall that for a function \(h: {\mathbb {R}}^+\rightarrow {\mathbb {R}}^d\) the Lyapunov exponent of h is the number (which could be \(\infty \) or \(-\infty \))

$$\begin{aligned} \chi (h(t)) := \limsup _{t\rightarrow \infty }\frac{1}{t}\log |h(t)|. \end{aligned}$$

(We make the convention that \(\log \) is the logarithm of natural base and \(\log 0 := -\,\infty \).)

Definition 3.1

  1. (i)

    Given a two-parameter flow \(\Phi _\omega (s,t)\) of linear operators of \({\mathbb {R}}^d\) on the time interval \([t_0,\infty )\), the extended-real numbers (real numbers or symbol \(\infty \) or \(-\infty \))

    $$\begin{aligned} \lambda _k(\omega ) := \inf _{V\in {{\mathcal {G}}}_{d-k+1}} \sup _{y\in V} \limsup _{t\rightarrow \infty } \frac{1}{t} \log |\Phi _ \omega (t_0,t)y|, \quad k=1,\ldots , d, \end{aligned}$$
    (3.2)

    are called Lyapunov exponents of the flow \(\Phi _\omega (s,t)\). The collection \(\{ \lambda _1(\omega ),\ldots , \lambda _d(\omega )\}\) is called Lyapunov spectrum of the flow \(\Phi _\omega (s,t)\).

  2. (ii)

    For any \(u\in [t_0,\infty )\) the linear subspaces of \({\mathbb {R}}^d\)

    $$\begin{aligned} E_k^u(\omega ) := \big \{ y\in {\mathbb {R}}^d\bigm | \limsup _{t\rightarrow \infty } \frac{1}{t} \log |\Phi _\omega (u,t)y| \le \lambda _k(\omega ) \big \}, \quad k=1,\ldots ,d, \;\; \end{aligned}$$
    (3.3)

    are called Lyapunov subspaces at time u of the flow \(\Phi _\omega (s,t)\). The flag of nonincreasing linear subspaces of \({\mathbb {R}}^d\)

    $$\begin{aligned} {\mathbb {R}}^d = E_1^u(\omega ) \supset E_2^u(\omega ) \supset \cdots \supset E_d^u(\omega ) \supset \{0\} \end{aligned}$$

    is called Lyapunov flag at time u of the flow \(\Phi _\omega (s,t)\).

  3. (iii)

    The Lyapunov spectrum, Lyapunov exponents and Lyapunov subspaces of the linear YDE (1.3) are those of the two-parameter flow \(\Phi _\omega (s,t)\) generated by (1.3).

It is easily seen that the Lyapunov exponents in Definition 3.1 are independent of \(t_0\), and are ordered:

$$\begin{aligned} \lambda _1(\omega ) \ge \lambda _2(\omega ) \ge \cdots \ge \lambda _d(\omega ), \qquad \omega \in \Omega . \end{aligned}$$

Moreover, due to [7, Theorems 2.5, 2.7, 2.8], for any \(u\in [t_0,\infty )\) and \(k=1,\ldots , d\), the Lyapunov subspaces \(E_k^u(\omega )\) are invariant with respect to the flow in the following sense

$$\begin{aligned} \Phi _\omega (s,t) E_k^s (\omega ) = E_k^t(\omega ),\qquad \hbox {for all}\; s,t\in [t_0,\infty ), k=1,\ldots ,d. \end{aligned}$$

The classical definition of the Lyapunov spectrum of a linear system of ODEs is based on a normal basis of solutions of the system (see [11]). Millionshchikov [18] pointed out that these definitions are equivalent. In the following remark we restate some facts from [11].

Remark 3.2

  1. (i)

    For every invertible matrix \(B(\omega )\), the matrix \(\Phi _\omega (t_0,t)B(\omega )\) satisfies

    $$\begin{aligned} \sum _{i=1}^d \alpha _i(\omega )\ge \sum _{i=1}^d\lambda _i(\omega ) \end{aligned}$$

    where \(\alpha _i(\omega ) \) is the Lyapunov exponent of its \(i\mathrm{th}\) column.

  2. (ii)

    Furthermore, we have Lyapunov inequality

    $$\begin{aligned} \sum _{i=1}^d\lambda _i \ge \displaystyle \limsup _{t\rightarrow \infty } \frac{1}{t}\log |\det \Phi _{\omega }(t_0,t)|. \end{aligned}$$

    Note that if the Lyapunov exponents \(\{\alpha _i(\omega ), i=1,\ldots ,d\}\) of the columns of the matrix \(\Phi _\omega (t_0,t)B(\omega )\) satisfy the equality \(\sum _{i=1}^d\alpha _i(\omega ) =\limsup \nolimits _{t\rightarrow \infty } \frac{1}{t}\log |\det \Phi _{\omega }(t_0,t)|\) then \(\{\alpha _1(\omega ),\ldots , \alpha _d(\omega )\} \) is the spectrum of the flow \(\Phi _{\omega }(s,t)\), i.e.

    $$\begin{aligned} \{\alpha _i(\omega ),i=1,\ldots ,d\} = \{\lambda _i(\omega ),i=1,\ldots ,d\}, \end{aligned}$$

    (but the converse is not true).

Now let us consider the following assumptions on the driving path \(\omega \).

(\({\mathbf{H }}_3\)) \(\lim \nolimits _{\begin{array}{c} n \rightarrow \infty \\ n\in {\mathbb {N}} \end{array}} \frac{1}{n} \left| \left| \left| \omega \right| \right| \right| ^p_{p-\mathrm{var},[n,n+1]} = 0.\)

(\({\mathbf{H }}_3^\prime \)) \(\lim \nolimits _{\begin{array}{c} n \rightarrow \infty \\ n\in {\mathbb {N}} \end{array}} \frac{1}{n} \sum _{k =0}^{n-1}\left| \left| \left| \omega \right| \right| \right| ^p_{p-\mathrm{var},[k,k+1]} = \Gamma _p(\omega ) < \infty .\)

It is easy to see that assumption (\({\mathbf{H }}_3^\prime \)) implies (\({\mathbf{H }}_3\)). We formulate below the first main result of this paper on the Lyapunov spectrum of Eq. (1.3).

Theorem 3.3

Let \(\Phi _{\omega }(s,t)\) be the two-parameter flow generated by (1.3) and \(\{\lambda _1(\omega ),\ldots ,\lambda _d(\omega )\}\) be the Lyapunov spectrum of the flow \(\Phi _{\omega }(s,t)\), hence of Eq. (1.3). Then under assumptions (\({\mathbf{H }}_1\)), (\({\mathbf{H }}_2\)), (\({\mathbf{H }}_3\)), the Lyapunov exponents \(\lambda _k(\omega ), k=1,\ldots ,d,\) can be computed via a discrete-time interpolation of the flow, i.e.

$$\begin{aligned} \lambda _k(\omega ) = \inf _{V \in {\mathcal {G}}_{d-k+1}} \sup _{y\in V} \limsup \nolimits _{{\mathbb {N}} \ni t\rightarrow \infty } \frac{1}{t} \log |\Phi _{\omega }(t_0,t)y|,\ k = 1,\ldots ,d. \end{aligned}$$
(3.4)

In addition, if condition (\({\mathbf{H }}_3^\prime \)) is satisfied, then

$$\begin{aligned} |\lambda _k(\omega )| \le \eta \Big [ 2 + \Big (\frac{2M_0}{\mu }\Big )^{p} (1+ \Gamma _p(\omega ) )\Big ], \quad \forall k = 1,\ldots ,d, \end{aligned}$$
(3.5)

where \(M_0\) is determined by (3.1), \(0<\mu <\min \{1,M_0\}\) and \(\eta = -\,\log (1-\mu )\).

Proof

Recall from (2.4) that for each \(s\in {\mathbb {R}}^+\)

$$\begin{aligned} \sup _{t\in [s,s+1]} \log \Vert \Phi _\omega (s,t)\Vert\le & {} \eta \Big [2+ (\frac{2M_0}{\mu })^p(1+\left| \left| \left| \omega \right| \right| \right| ^p_{p{\mathrm{-var}},[s,s+1]})\Big ]. \end{aligned}$$
(3.6)

Fix \(k\in \{1,\ldots ,d\}\) and \(y\in {\mathbb {R}}^d\). Suppose \(t_1<t_2<t_3<\cdots \) is an increasing sequence of positive real numbers tending to infinity along which the upper limit

$$\begin{aligned} \limsup _{t\rightarrow \infty } \frac{1}{t} \log |\Phi _\omega (t_0,t)y| =: z\in {{{\bar{{\mathbb {R}}}}}} \end{aligned}$$

is realized, i.e.,

$$\begin{aligned} \lim _{m\rightarrow \infty } \frac{1}{t_m} \log |\Phi _\omega (t_0,t_m)y| = z. \end{aligned}$$

Let \(n_m\) denote the largest natural number which is smaller than or equal to \(t_m\). Using the flow property of \(\Phi _{\omega }(s,t)\) and assumption (\({\mathbf{H }}_3\)) we have

$$\begin{aligned} z= & {} \lim _{m\rightarrow \infty } \frac{1}{t_m} \log |\Phi _\omega (t_0,t_m)y| \\= & {} \lim _{m\rightarrow \infty } \frac{1}{t_m} \log ( |\Phi _\omega (n_m,t_m)\Phi _\omega (t_0,n_m)y|) \\\le & {} \lim _{m\rightarrow \infty } \frac{1}{t_m} \Big ( \log \Vert \Phi _\omega (n_m,t_m)\Vert + \log (|\Phi _\omega (t_0,n_m)y|)\Big ) \\\le & {} \limsup _{m\rightarrow \infty } \frac{1}{n_m} \log |\Phi _\omega (t_0,n_m)y| + \limsup _{m\rightarrow \infty } \frac{1}{n_m} \eta \Big [2+ \Big (\frac{2M_0}{\mu }\Big )^p(1+\left| \left| \left| \omega \right| \right| \right| ^p_{p{\mathrm{-var}},[n_m,n_m+1]})\Big ]\\= & {} \limsup _{m\rightarrow \infty } \frac{1}{n_m}\log |\Phi _\omega (t_0,n_m)y|\\\le & {} \limsup _{\begin{array}{c} t\rightarrow \infty \\ t\in {\mathbb {N}} \end{array}} \frac{1}{t} \log |\Phi _\omega (t_0,t)y|. \end{aligned}$$

On the other hand,

$$\begin{aligned} \limsup _{\begin{array}{c} t\rightarrow \infty \\ t\in {\mathbb {N}} \end{array}} \frac{1}{t} \log |\Phi _\omega (t_0,t)y| \le \limsup _{t\rightarrow \infty } \frac{1}{t} \log |\Phi _\omega (t_0,t)y| = z. \end{aligned}$$

Consequently, for all \(k\in \{1,\ldots ,d\}\) and \(y\in {\mathbb {R}}^d\), we have the equality

$$\begin{aligned} \limsup _{\begin{array}{c} t\rightarrow \infty \\ t\in {\mathbb {N}} \end{array}} \frac{1}{t} \log |\Phi _\omega (t_0,t)y| = \limsup _{t\rightarrow \infty } \frac{1}{t} \log |\Phi _\omega (t_0,t)y|, \end{aligned}$$

which proves (3.4).

Next, assume condition (\({\mathbf{H }}_3^\prime \)) is satisfied. Then

$$\begin{aligned} \limsup _{n \rightarrow \infty } \frac{1}{n} \log |\Phi _\omega (t_0,n)y|\le & {} \limsup _{n \rightarrow \infty } \frac{1}{n} \left( \log \Vert \Phi _\omega (t_0,\lceil t_0\rceil )\Vert + \sum _{j=\lceil t_0\rceil }^{n-1}\log \Vert \Phi _\omega (j,j+1)\Vert \right) \nonumber \\\le & {} \limsup _{n \rightarrow \infty } \frac{1}{n} \sum _{j=0}^{n-1} \eta \Big [2+ \Big (\frac{2M_0}{\mu }\Big )^{p} \Big (1+ \left| \left| \left| \omega \right| \right| \right| _{p\mathrm{-var},[j,j+1]}^{p} \Big )\Big ] \nonumber \\\le & {} \eta \Big [ 2 + \Big (\frac{2M_0}{\mu }\Big )^{p} (1+ \Gamma _p(\omega )) \Big ]. \end{aligned}$$
(3.7)

Since \(\Phi _\omega (s,t) = (\Psi _\omega (s,t)^T)^{-1}\) where \(\Psi \) is the solution matrix of the adjoint Eq. (2.9), it follows that

$$\begin{aligned} \limsup _{n \rightarrow \infty } \frac{1}{n} \log |\Phi _\omega (t_0,n)y|\ge \limsup _{n \rightarrow \infty } -\frac{1}{n} \log \Vert \Psi _\omega (t_0,n)\Vert = -\liminf _{n \rightarrow \infty } \frac{1}{n} \log \Vert \Psi _\omega (t_0,n)\Vert . \end{aligned}$$

Hence, either

$$\begin{aligned} 0\le \limsup _{n \rightarrow \infty } \frac{1}{n} \log |\Phi _\omega (t_0,n)y| \le \eta \Big [ 2 + \Big (\frac{2M_0}{\mu }\Big )^{p} (1+ \Gamma _p(\omega )) \Big ] \end{aligned}$$

or

$$\begin{aligned} 0\ge \limsup _{n \rightarrow \infty } \frac{1}{n} \log |\Phi _\omega (t_0,n)y| \ge -\liminf _{n \rightarrow \infty } \frac{1}{n} \log \Vert \Psi _\omega (t_0,n)\Vert , \end{aligned}$$

which yields

$$\begin{aligned} 0\le & {} \liminf _{n \rightarrow \infty } \frac{1}{n} \log \Vert \Psi _\omega (t_0,n)\Vert \le \limsup _{n \rightarrow \infty } \frac{1}{n} \log \Vert \Psi _\omega (t_0,n)\Vert \\\le & {} \eta \Big [ 2 + \Big (\frac{2M_0}{\mu }\Big )^{p} (1+ \Gamma _p(\omega )) \Big ] \end{aligned}$$

where the last inequality can be proved similarly to the one in (3.7). Hence (3.5) holds. \(\square \)

Remark 3.4

The discretization scheme in Theorem 3.3 can be formulated for any step size \(h>0\).
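
To make the discretization concrete, here is a hypothetical numerical sketch; the driving path \(\omega (t)=t^{0.7}\), the upper triangular matrices A, C and the first-order pathwise Euler scheme are illustrative assumptions and not the paper's construction. The top exponent \(\lambda _1(\omega )\) is estimated by evaluating \(\frac{1}{n}\log \Vert \Phi _\omega (0,n)\Vert \) at integer times only, as allowed by Theorem 3.3, with a per-step renormalization to avoid numerical overflow.

```python
# Hypothetical estimate of the top Lyapunov exponent via the discretized flow of Theorem 3.3.
import numpy as np

A = lambda t: np.array([[-0.2 + 0.1 * np.sin(t), 1.0], [0.0, -0.7]])
C = lambda t: np.array([[0.05, 0.2], [0.0, 0.05]])
omega = lambda t: t**0.7          # satisfies (H3) and, for bounded C, (H4)

def unit_flow(k, steps=2000):
    """First-order pathwise scheme for Phi_omega(k, k+1)."""
    t = np.linspace(k, k + 1.0, steps + 1)
    dt, dw, Phi = np.diff(t), np.diff(omega(t)), np.eye(2)
    for i in range(steps):
        Phi = Phi + (A(t[i]) * dt[i] + C(t[i]) * dw[i]) @ Phi
    return Phi

log_norm, M = 0.0, np.eye(2)
for n in range(1, 51):
    M = unit_flow(n - 1) @ M                      # flow property: Phi(0,n) = Phi(n-1,n) Phi(0,n-1)
    s = np.linalg.norm(M, 2)
    log_norm, M = log_norm + np.log(s), M / s     # accumulated logs give log ||Phi(0,n)||
    if n % 10 == 0:
        print(n, log_norm / n)
```

For this triangular example Theorem 3.7 below gives the spectrum \(\{-0.2,-0.7\}\), so the printed values should drift towards \(\lambda _1 = -0.2\); the convergence is slow because the contribution of \(\int _0^n c_{11}(s)d\omega (s)\) decays only like \(n^{-0.3}\).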

Lyapunov Spectrum of Triangular Systems

It is well known in the theory of ODEs that a linear triangular system can be solved successively and its Lyapunov spectrum is easily computed from its coefficients. In this subsection we present a similar result for linear triangular systems of YDEs, under additional assumptions. Let us consider the system

$$\begin{aligned} dX(t) = A(t)X(t)dt+C(t)X(t)d\omega (t) \end{aligned}$$
(3.8)

in which \(X= (x_1,x_2,\ldots ,x_d)\), \(A = (a_{ij}(t)),C=(c_{ij}(t))\) are d-dimensional upper triangular matrices of coefficient functions satisfying conditions (\({\mathbf{H }}_1\)), (\({\mathbf{H }}_2\)), and the driving path \(\omega \) satisfies (\({\mathbf{H }}_3\)) as well as the additional assumption

(\({\mathbf{H }}_4\)) \(\displaystyle \lim \nolimits _{\begin{array}{c} n\rightarrow \infty \\ n \in {\mathbb {N}} \end{array}} \dfrac{\Big |\int _0^n c_{ii}(s)d\omega (s)\Big |}{n} = 0\) for any elements \(c_{ii}(t),\; i=1,\ldots , d\) in the diagonal of C.

As a motivation for our assumptions, (\({\mathbf{H }}_4\)) is satisfied for almost all realizations \(\omega \) of a fractional Brownian motion (see Lemma 5.3 in Sect. 5 for the proof and [22] for details on fractional Brownian motions). Another situation satisfying (\({\mathbf{H }}_4\)) is the case in which \(\omega (t) = t^{\alpha }\) with \(0<\alpha <1\) and \(C(\cdot )\) is continuous and bounded, as illustrated numerically below.
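
The second situation can be checked with a short hypothetical computation (the bounded coefficient below is an illustrative assumption): for \(\omega (t)=t^{\alpha }\) the averaged Young integral in (\({\mathbf{H }}_4\)) decays like \(n^{\alpha -1}\).

```python
# Hypothetical check of (H4) for omega(t) = t^alpha and a bounded coefficient c.
import numpy as np

alpha = 0.7
c = lambda t: 0.5 + 0.3 * np.cos(t)
omega = lambda t: t**alpha

for n in (10, 100, 1000):
    t = np.linspace(0.0, n, 200 * n + 1)
    integral = float(np.sum(c(t[:-1]) * np.diff(omega(t))))
    print(n, abs(integral) / n)     # decays roughly like 0.5 * n**(alpha - 1)
```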

To see how assumption (\({\mathbf{H }}_4\)) is applied, we first consider Eq. (3.8) in the one dimensional case

$$\begin{aligned} dz(t) = a(t)z(t)dt + c(t)z(t)d\omega (t), \; z(0) = z_0. \end{aligned}$$
(3.9)

Thanks to the integration by parts formula (see Zähle [29, Theorem 3.1]), (3.9) can be solved explicitly as

$$\begin{aligned} z(t) = z_0e^{\int _{0}^ta(s)ds+\int _{0}^tc(s)d\omega (s)}. \end{aligned}$$
(3.10)
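
The explicit formula can be checked numerically; in the following hypothetical sketch the coefficients a, c and the driver \(\omega (t)=t^{0.7}\) are illustrative assumptions, and the integrals in (3.10) are evaluated by left Riemann and Young sums.

```python
# Hypothetical check that formula (3.10) matches a direct first-order pathwise scheme for (3.9).
import numpy as np

a = lambda t: -0.3 + np.sin(t)
c = lambda t: 0.2 * np.cos(t)
omega = lambda t: t**0.7
z0, T, n = 1.5, 2.0, 50000

t = np.linspace(0.0, T, n + 1)
dt, dw = np.diff(t), np.diff(omega(t))

z = z0                                    # direct scheme for dz = a z dt + c z d omega
for k in range(n):
    z = z + a(t[k]) * z * dt[k] + c(t[k]) * z * dw[k]

z_formula = z0 * np.exp(np.sum(a(t[:-1]) * dt) + np.sum(c(t[:-1]) * dw))
print(z, z_formula)                       # the two values agree up to discretization error
```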

Moreover, we have the following lemma.

Lemma 3.5

The following estimates hold for any nontrivial solution \(z\not \equiv 0\) of (3.9)

  1. (i)

    \(\chi (z(t)) = {\overline{a}}\),

  2. (ii)

    \(\chi (\left| \left| \left| z\right| \right| \right| _{q\mathrm{-var},[t,t+1]}) \le {\overline{a}}\),

    where \({\overline{a}}:=\displaystyle \limsup _{\begin{array}{c} n\rightarrow \infty \\ n\in {\mathbb {N}} \end{array}}\frac{1}{n} \int _0^na(s)ds \).

Proof

(i) The statement is evident under the assumption (\(\mathbf{H }_4\)). Namely,

$$\begin{aligned} \chi (z(t))= & {} \limsup _{\begin{array}{c} n\rightarrow \infty \\ n\in {\mathbb {N}} \end{array}}\left( \frac{\log |z_0|}{n}+\frac{\int _0^na(s)ds}{n}+\frac{\int _0^nc(s)d\omega (s)}{n}\right) = \limsup _{\begin{array}{c} n\rightarrow \infty \\ n\in {\mathbb {N}} \end{array}}\frac{\int _0^na(s)ds}{n}={\overline{a}}. \end{aligned}$$

(ii) Due to linearity it suffices to prove for \(z_0=1\). Introduce the notations \(f(t)= \int _0^ta(s)ds,\;\; g(t) = \int _0^tc(s)d\omega (s)\), then \(z(t) = e^{f(t)+g(t)}\). We have

$$\begin{aligned} \left| \left| \left| f\right| \right| \right| _{q\mathrm{-var},[s,t]}\le & {} (t-s)\Vert a\Vert _{\infty ,[s,t]},\; \left| \left| \left| g\right| \right| \right| _{q\mathrm{-var},[s,t]}\\\le & {} K \Vert c\Vert _{q\mathrm{-var},[s,t]}\left| \left| \left| \omega \right| \right| \right| _{p\mathrm{-var},[s,t]},\; \hbox {for all} \;\; 0\le s<t; \end{aligned}$$

and

$$\begin{aligned} \chi (e^{f(t)})={\overline{a}}, \;\; \chi (e^{g(t)}) = 0. \end{aligned}$$

For given \(\varepsilon >0\), there exists \(D_1\) such that

$$\begin{aligned} e^{f(s)}< D_1e^{({\overline{a}}+\varepsilon /3)s}, \;\; e^{g(s)}<D_1 e^{\varepsilon s /3},\; \forall s\ge 0. \end{aligned}$$

Hence, for any \(t_0\ge 0\), the estimates

$$\begin{aligned} \Vert e^{f}\Vert _{\infty ,[t_0,t_0+1]}\le D_2e^{({\overline{a}}+\varepsilon /3)t_0};\;\;\Vert e^{g}\Vert _{\infty ,[t_0,t_0+1]}\le D_2 e^{\varepsilon t_0/3} \end{aligned}$$

hold for \(D_2=\max \{D_1, D_1e^{{\overline{a}}+\varepsilon /3}\}\). On the other hand, due to the inequality \(|e^a-e^b|\le |a-b|e^{\max \{a,b\}}\) for all \(a,b\in {\mathbb {R}}\) we have

$$\begin{aligned} |e^{f(t)}-e^{f(s)}|\le & {} \Vert e^{f}\Vert _{\infty ,[s,t]} |f(t)-f(s)|\le \Vert e^{f}\Vert _{\infty ,[s,t]} \left| \left| \left| f\right| \right| \right| _{q\mathrm{-var},[s,t]}, \end{aligned}$$

which yields

$$\begin{aligned} \left| \left| \left| e^f\right| \right| \right| _{q\mathrm{-var},[s,t]}\le \Vert e^{f}\Vert _{\infty ,[s,t]} \left| \left| \left| f\right| \right| \right| _{q\mathrm{-var},[s,t]}. \end{aligned}$$

Similarly,

$$\begin{aligned} \left| \left| \left| e^g\right| \right| \right| _{q\mathrm{-var},[s,t]}\le \Vert e^{g}\Vert _{\infty ,[s,t]} \left| \left| \left| g\right| \right| \right| _{q\mathrm{-var},[s,t]}. \end{aligned}$$

For \(s,t\in [t_0,t_0+1]\),

$$\begin{aligned} |z(t)-z(s)|= & {} |e^{f(t)+g(t)}-e^{f(s)+g(s)}|\\\le & {} e^{f(t)}|e^{g(t)}-e^{g(s)}|+ e^{g(s)}|e^{f(t)}-e^{f(s)}|\\\le & {} \Vert e^{f}\Vert _{\infty ,[t_0,t_0+1]}\Vert e^{g}\Vert _{\infty ,[t_0,t_0+1]} \left( \left| \left| \left| f\right| \right| \right| _{q\mathrm{-var},[s,t]}+\left| \left| \left| g\right| \right| \right| _{q\mathrm{-var},[s,t]}\right) , \end{aligned}$$

hence by using Minkowski inequality we get

$$\begin{aligned} \left| \left| \left| z\right| \right| \right| _{q\mathrm{-var},[t_0,t_0+1]}\le & {} \Vert e^{f}\Vert _{\infty ,[t_0,t_0+1]}\Vert e^{g}\Vert _{\infty ,[t_0,t_0+1]} \left( \left| \left| \left| f\right| \right| \right| _{q\mathrm{-var},[t_0,t_0+1]}+\left| \left| \left| g\right| \right| \right| _{q\mathrm{-var},[t_0,t_0+1]}\right) \\\le & {} D_2^2e^{({\overline{a}}+2\varepsilon /3)t_0} (\Vert a\Vert _{\infty ,{\mathbb {R}}^+}+K\Vert c\Vert _{q\mathrm{-var},1,{\mathbb {R}}^+}\left| \left| \left| \omega \right| \right| \right| _{p\mathrm{-var},[t_0,t_0+1]}). \end{aligned}$$

Note that condition (\({\mathbf{H }}_3\)) implies the boundedness of \(\frac{\left| \left| \left| \omega \right| \right| \right| _{p\mathrm{-var},[t_0,t_0+1]}}{t_0}\), \(t_0\in {\mathbb {R}}^+\). Therefore, there exists a constant \(D_3\) such that

$$\begin{aligned} \left| \left| \left| z\right| \right| \right| _{q\mathrm{-var},[t_0,t_0+1] }\le D_3 e^{({\overline{a}}+\varepsilon )t_0}, \end{aligned}$$

which proves (ii). \(\square \)

Next we will show by induction that the Lyapunov spectrum of system (3.8) is \(\{{\overline{a}}_{kk}, 1\le k\le d\}\) with \({\overline{a}}_{kk}:=\lim \nolimits _{t\rightarrow \infty }\frac{\int _0^ta_{kk}(s)ds}{t}\), provided that these limits exist.

The following lemma is a modified version of Demidovich [11, Theorem 1, p. 127].

Lemma 3.6

Assume that \(g^i:{\mathbb {R}}^+\rightarrow {\mathbb {R}}\), \(i=1,\ldots , n\), are continuous functions of finite q-variation norm on any compact interval of \({\mathbb {R}}^+\), which satisfy

$$\begin{aligned} \chi (g^i(t)),\;\chi \left( \left| \left| \left| g^i\right| \right| \right| _{q\mathrm{-var},[t,t+1]}\right) \le \lambda _i\in {\mathbb {R}}, \; i=1,\ldots ,n. \end{aligned}$$

Then

  1. (i)

    \( \chi \left( \sum _{i=1}^n g^i(t)\right) ,\;\chi \left( \left| \left| \left| \sum _{i=1}^n g^i\right| \right| \right| _{q\mathrm{-var},[t,t+1]}\right) \le \max _{1\le i\le n}\lambda _i,\)

  2. (ii)

    \( \chi \left( \prod _{i=1}^n g^i(t)\right) ,\;\chi \left( \left| \left| \left| \prod _{i=1}^n g^i\right| \right| \right| _{q\mathrm{-var},[t,t+1]}\right) \le \sum _{i=1}^n\lambda _i.\)

Proof

(i):

The proof is similar to [11, Theorem 1, p. 127], noting that

$$\begin{aligned} \left| \left| \left| \sum _{i=1}^ng^i\right| \right| \right| _{q\mathrm{-var},[t,t+1]}\le \sum _{i=1}^n \left| \left| \left| g^i\right| \right| \right| _{q\mathrm{-var},[t,t+1]}. \end{aligned}$$
(ii):

The first inequality is known due to [11, Theorem 2, p. 19]. For the second one, it suffices to consider \(n=2\), since the general case is obtained by induction.

It follows from Lemma 2.1 that

$$\begin{aligned} \left| \left| \left| g^1g^2\right| \right| \right| _{q\mathrm{-var},[t,t+1]}\le & {} \left( \Vert g^1\Vert _{\infty ,[t,t+1]}+ \left| \left| \left| g^1\right| \right| \right| _{q\mathrm{-var},[t,t+1]}\right) \left( \Vert g^2\Vert _{\infty ,[t,t+1]}\right. \\&\left. +\, \left| \left| \left| g^2\right| \right| \right| _{q\mathrm{-var},[t,t+1]}\right) \\\le & {} 4 \left( \Vert g^1(t)\Vert + \left| \left| \left| g^1\right| \right| \right| _{q\mathrm{-var},[t,t+1]}\right) \left( \Vert g^2(t)\Vert + \left| \left| \left| g^2\right| \right| \right| _{q\mathrm{-var},[t,t+1]}\right) . \end{aligned}$$

Therefore the second inequality follows from the first one and (i). \(\square \)

By similar arguments using the integration by parts formula, the non-homogeneous one-dimensional linear equation

$$\begin{aligned} dx(t) = [a(t)x(t)+h_1(t)]dt + [c(t)x(t)+h_2(t)]d\omega (t) \end{aligned}$$
(3.11)

can be solved explicitly as

$$\begin{aligned} x(t)= & {} e^{\int _0^ta(u)du+\int _0^tc(u)d\omega (u)}\left( x_0 + \int _0^t e^{-\int _0^sa(u)du-\int _0^sc(u)d\omega (u)}h_1(s) ds \right. \\&\left. +\, \int _0^t e^{-\int _0^sa(u)du-\int _0^sc(u)d\omega (u)}h_2(s)d\omega (s)\right) , \end{aligned}$$

provided that \(h_1,h_2\) are in \({\mathcal {C}}^{q\mathrm {-var}}([0,t],{\mathbb {R}})\) for all \(t>0\). This allows us to solve triangular systems by back substitution, as seen in the following theorem.

Theorem 3.7

Under assumptions (\({\mathbf{H }}_1\)) – (\({\mathbf{H }}_4\)), if there exist the exact limits

$$\begin{aligned} {\overline{a}}_{kk}:=\lim _{t\rightarrow \infty } \frac{1}{t}\int _{0}^t a_{kk}(s)ds,\;\; k= 1, \ldots , d, \end{aligned}$$
(3.12)

then the spectrum of system (3.8) is given by

$$\begin{aligned} \{{\overline{a}}_{11},{\overline{a}}_{22},\ldots ,{\overline{a}}_{dd}\}. \end{aligned}$$

Proof

For all \(k=1,2,\ldots ,d\), put \(Y_k(t)=e^{ \int _0^t a_{kk}(s)ds +\int _0^t c_{kk}(s)d\omega (s)} \). Then due to Lemma 3.5

$$\begin{aligned} \chi (Y_k(t))= & {} {\overline{a}}_{kk},\; \chi (\left| \left| \left| Y_k\right| \right| \right| _{q\mathrm{-var},[t,t+1]})\le {\overline{a}}_{kk},\; \\ \chi (Y^{-1}_k(t))= & {} - {\overline{a}}_{kk}, \;\chi \left( \left| \left| \left| Y^{-1}_k\right| \right| \right| _{q\mathrm{-var},[t,t+1]}\right) \le -{\overline{a}}_{kk}. \end{aligned}$$

We construct a fundamental solution matrix \(X(t)=\left( x_{ij}(t)\right) _{d\times d}\) of (3.8) as follows.

$$\begin{aligned} x_{ik}(t)= {\left\{ \begin{array}{ll} 0 &{} \mathrm{if} \; i>k,\\ Y_k(t)&{} \mathrm{if} \; i=k,\\ Y_i(t) \left[ \displaystyle \int _{t_{ik}}^t Y^{-1}_i(s)\sum _{j=i+1}^k a_{ij}(s)x_{jk}(s)ds + \int _{t_{ik}}^t Y^{-1}_i(s)\sum _{j=i+1}^k c_{ij}(s)x_{jk}(s)d\omega (s) \right] &{} \mathrm{if} \; i<k, \end{array}\right. } \end{aligned}$$

in which, \(t_{ik}= {\left\{ \begin{array}{ll} 0,\;\;\mathrm{if}\;\; {\overline{a}}_{kk}-{\overline{a}}_{ii}\ge 0\\ +\infty ,\;\;\mathrm{if}\;\; {\overline{a}}_{kk}-{\overline{a}}_{ii}< 0. \end{array}\right. } \)

Now we consider the \(d\mathrm{th}\) column of X and prove by induction that

$$\begin{aligned} \chi (x_{jd}(t)), \; \chi (\left| \left| \left| x_{jd}\right| \right| \right| _{q{{\mathrm{-var}}},[t,t+1]})\le {\overline{a}}_{dd},\; j=1,2,\ldots , d. \end{aligned}$$

First, by Lemma 3.5 the statement is true for \(j=d\). Assuming that \(\chi (x_{jd}(t)), \; \chi (\left| \left| \left| x_{jd}\right| \right| \right| _{q{{\mathrm{-var}}},[t,t+1]})\le {\overline{a}}_{dd}\) for all \(i+1\le j\le d\), we will prove that

$$\begin{aligned} \chi (x_{id}(t)), \; \chi (\left| \left| \left| x_{id}\right| \right| \right| _{q{{\mathrm{-var}}},[t,t+1]})\le {\overline{a}}_{dd}. \end{aligned}$$

Put

$$\begin{aligned} I(t):=\displaystyle \int _{t_{id}}^t Y^{-1}_i(s)\sum _{j=i+1}^d a_{ij}(s)x_{jd}(s)ds \; \;\mathrm{{and }}\;\; J(t):=\displaystyle \int _{t_{id}}^t Y^{-1}_i(s)\sum _{j=i+1}^d c_{ij}(s)x_{jd}(s)d\omega (s), \end{aligned}$$

then

$$\begin{aligned} x_{id}(t) = Y_i(t) [I(t)+J(t)]. \end{aligned}$$

Since A is bounded, we apply [11, Corollary of Theorem 2, p. 129] to get

$$\begin{aligned} \chi \left( \sum _{j=i+1}^{d} a_{ij}(s)x_{jd}(s)\right) \le {\overline{a}}_{dd}. \end{aligned}$$
(3.13)

Therefore, \(\chi \left( Y^{-1}_i(s) \sum _{j=i+1}^{d} a_{ij}(s)x_{jd}(s)\right) \le {\overline{a}}_{dd}-{\overline{a}}_{ii}\). Due to [11, Theorem 4, p. 131] we obtain

$$\begin{aligned} \chi (I(t))\le {\overline{a}}_{dd}-{\overline{a}}_{ii}. \end{aligned}$$

On the other hand, the following estimate holds

$$\begin{aligned} \chi (\left| \left| \left| I\right| \right| \right| _{q{\mathrm{-var}},[t,t+1]})\le {\overline{a}}_{dd}-{\overline{a}}_{ii}. \end{aligned}$$

Indeed, with \(I(t) = \int _0^t k(s)ds \) and \(\chi (k(s))\le \lambda \), we have for \(u,v\in [t,t+1]\),

$$\begin{aligned} |I(u)-I(v)|\le & {} |u-v| \Vert k\Vert _{\infty ,[u,v]}\\\le & {} |u-v| D(\varepsilon )e^{(\lambda +\varepsilon )t}, \text { for each } \varepsilon >0. \end{aligned}$$

This implies \(\left| \left| \left| I\right| \right| \right| _{q\mathrm{-var},[t,t+1]}\le D(\varepsilon )e^{(\lambda +\varepsilon )t}\). The proof for the case \(I(t) = \int _t^\infty k(s)ds \) is similar.

Next, \(\chi (Y^{-1}_i(t)),\chi (\left| \left| \left| Y^{-1}_i\right| \right| \right| _{q{\mathrm{-var}},[t,t+1]})\le - {\overline{a}}_{ii}\) and C satisfies (\({\mathbf{H }}_2\)), i.e. \(\chi (C(t)),\;\chi (\left| \left| \left| C\right| \right| \right| _{q\mathrm{-var},[t,t+1]})\le 0\). Together with the induction hypothesis that

$$\begin{aligned} \chi (x_{jd}(t)), \; \chi (\left| \left| \left| x_{jd}\right| \right| \right| _{q{{\mathrm{-var}}},[t,t+1]})\le {\overline{a}}_{dd}, \;\;\forall i+1\le j\le d \end{aligned}$$

and Lemma 3.6 we obtain

$$\begin{aligned} \chi \left( Y^{-1}_i(t)\sum _{j=i+1}^d c_{ij}(t)x_{jd}(t)\right) ,\; \chi \left( \left| \left| \left| Y^{-1}_i\sum _{j=i+1}^d c_{ij}x_{jd} \right| \right| \right| _{q\mathrm{-var},[t,t+1]}\right) \le {\overline{a}}_{dd}-{\overline{a}}_{ii}. \end{aligned}$$

Due to Lemmas 5.1 and 5.2,

$$\begin{aligned} \chi (J(t))\le {\overline{a}}_{dd}-{\overline{a}}_{ii}, \; \chi (\left| \left| \left| J\right| \right| \right| _{q\mathrm{-var},[t,t+1]})\le {\overline{a}}_{dd}-{\overline{a}}_{ii}. \end{aligned}$$

Again, we apply Lemma 3.6 for \(Y_i\), I and J to get

$$\begin{aligned} \chi (x_{id}(t)), \; \chi (\left| \left| \left| x_{id}\right| \right| \right| _{q{{\mathrm{-var}}},[t,t+1]})\le {\overline{a}}_{ii} +{\overline{a}}_{dd}-{\overline{a}}_{ii}={\overline{a}}_{dd}. \end{aligned}$$

Hence the Lyapunov exponent of the \(d\mathrm{th}\) column \(X_d\) of the matrix X does not exceed \({\overline{a}}_{dd}\), while \(\chi (x_{dd}(t)) = {\overline{a}}_{dd}\). This proves \(\chi (X_d(t))={\overline{a}}_{dd}\).

Similarly, \(\chi (X_i(t))={\overline{a}}_{ii}\) for \(i=1,2,\ldots ,d\), in which \(X_i\) is the \(i\mathrm{th}\) column of X. Finally, since \(\sum _{i=1}^d{\overline{a}}_{ii} = \lim _{t\rightarrow \infty }\frac{1}{t}\log |\det X(t)|\), X(t) is a normal matrix solution of (3.8) and the Lyapunov spectrum of (3.8) is \(\{{\overline{a}}_{11},{\overline{a}}_{22},\ldots ,{\overline{a}}_{dd}\}\). \(\square \)
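
As a hypothetical numerical illustration of Theorem 3.7 (all concrete choices below, including the driver \(\omega (t)=t^{0.7}\), the Euler-type scheme and the discrete QR procedure, are assumptions made for the sketch and are not part of the proof), both exponents of a 2×2 upper triangular system can be estimated from the unit-time flow maps and compared with \(\{{\overline{a}}_{11}, {\overline{a}}_{22}\}\).

```python
# Hypothetical estimate of the full Lyapunov spectrum of an upper triangular Young system
# via a discrete QR iteration on the unit-time flow maps Phi_omega(n-1, n).
import numpy as np

A = lambda t: np.array([[-0.2 + 0.1 * np.sin(t), 1.0], [0.0, -0.7]])
C = lambda t: np.array([[0.05, 0.2], [0.0, 0.05]])
omega = lambda t: t**0.7

def unit_flow(k, steps=2000):
    t = np.linspace(k, k + 1.0, steps + 1)
    dt, dw, Phi = np.diff(t), np.diff(omega(t)), np.eye(2)
    for i in range(steps):
        Phi = Phi + (A(t[i]) * dt[i] + C(t[i]) * dw[i]) @ Phi
    return Phi

N, Q, logs = 50, np.eye(2), np.zeros(2)
for n in range(1, N + 1):
    Q, R = np.linalg.qr(unit_flow(n - 1) @ Q)     # accumulate log |diag R| of the QR factors
    logs += np.log(np.abs(np.diag(R)))

print(logs / N)   # should be close to {bar a_11, bar a_22} = {-0.2, -0.7} for large N
```

Products of the leading diagonal entries of the R factors track the growth of nested k-dimensional volumes, which is why the time averages of \(\log |r_{ii}|\) approximate the ordered Lyapunov exponents.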

Remark 3.8

In the theory of ODEs, the Perron theorem states that a linear equation can be reduced to a linear triangular system (see [11, p. 180]). However, we do not know whether this is true for linear Young differential equations. The reason is that for a linear YDE, besides the drift term A corresponding to dt, we also have the diffusion term C corresponding to \(d\omega \). Hence it is difficult to transform the original system into a triangular form whose coefficient matrices depend only on t.

Lyapunov Regularity

The concept of regularity was introduced by Lyapunov for linear ODEs, and has since attracted a lot of interest (see e.g. [1, Chapter 3, p. 115], [8, 20], or [3, Section 1.2]). For a linear YDE, we define the concept of Lyapunov regularity via the generated two-parameter flow.

Definition 3.9

Let \(\Phi _{\omega }(s,t)\) be a two-parameter flow of linear operators of \({\mathbb {R}}^d\) and \(\{\lambda _1(\omega ),\ldots , \lambda _d(\omega )\}\) be the Lyapunov spectrum of \(\Phi _{\omega }(s,t)\). Then the non-negative \({{\bar{{\mathbb {R}}}}}\)-valued random variable

$$\begin{aligned} \sigma (\omega ):= \sum _{k=1}^d \lambda _k -\liminf _{t\rightarrow \infty } \frac{1}{t} \log |\det \Phi _\omega (0,t)| \end{aligned}$$

is called coefficient of nonregularity of the two-parameter flow \(\Phi _{\omega }(s,t)\).

The coefficient of nonregularity of the linear YDE (1.3) is, by definition, the coefficient of nonregularity of the two-parameter flow generated by (1.3).

A two-parameter flow is called Lyapunov regular if its coefficient of nonregularity equals 0 identically. A linear YDE is called Lyapunov regular if its coefficient of nonregularity equals 0.

It follows from [7] that if a two-parameter linear flow \(\Phi _{\omega }(s,t)\) is Lyapunov regular then its determinant \(\det \Phi _{\omega }(s,t)\) as well as every trajectory has an exact Lyapunov exponent, i.e. the upper limit in (3.2) is in fact a limit.

We define the adjoint equation of (1.1) [and also of the equivalent integral Eq. (1.3)] by

$$\begin{aligned} dy(t) = -\,A^T(t)y(t) dt -C^T(t) y(t)d\omega (t). \end{aligned}$$
(3.14)

The following lemma is a version of Perron Theorem from the classical ODE case.

Lemma 3.10

(Perron Theorem) Let \(\alpha _1\ge \cdots \ge \alpha _d\) and \(\beta _1\le \cdots \le \beta _d\) be the Lyapunov spectra of (1.3) and (3.14), respectively. Then (1.3) is Lyapunov regular if and only if \(\alpha _i + \beta _{i}=0\) for all \(i=1,\ldots ,d\).

Proof

The proof follows line by line the ODE version in Demidovich [11, pp. 170–173]. \(\square \)

Theorem 3.11

(Lyapunov theorem on regularity of triangular systems) Suppose that the matrices A(t), C(t) are upper triangular and satisfy (\({\mathbf{H }}_1\)) – (\({\mathbf{H }}_4\)). Then system (3.8) is Lyapunov regular if and only if the limits \(\lim \nolimits _{t\rightarrow \infty } \frac{1}{t}\int _{t_0}^t a_{kk}(s)ds\), \(k=1,\ldots ,d\), exist.

Proof

The if part is proved in Theorem 3.7. For the only if part, the proof is similar to [11, p. 174]. Indeed, based on the normal basis of \({\mathbb {R}}^d\) which forms the unit matrix, we construct a fundamental solution matrix \( {\tilde{X}}\) of the system which is upper triangular with diagonal entries

$$\begin{aligned} Y_{1}(t),Y_{2}(t),\ldots , Y_{d}(t), \end{aligned}$$

where \(Y_{k}\) are defined in Theorem 3.7.

We choose an upper triangular matrix \(D=D(\omega )\) whose diagonal elements are 1, such that \(X:={\tilde{X}}D\) is a normal basis of (1.3) with column vectors \(x^i\) (see also Remark 3.2). Putting \(Y=(y_{ij})=(X^{-1})^T\) and repeating the arguments in Lemma 3.10 under the regularity assumption, it follows that Y is a normal basis of (3.14). Moreover, \( y_{kk}= Y_{k}^{-1}\) and

$$\begin{aligned} \chi (x^k(t))+\chi (y^k(t))=0,\forall k = 1,\ldots ,d. \end{aligned}$$

Hence

$$\begin{aligned} \chi (x^k(t))\ge & {} \chi (Y_{k}(t)) = \limsup _{t\rightarrow \infty }\frac{1}{t}\int _{t_0}^ta_{kk}(s)ds \end{aligned}$$

and similarly,

$$\begin{aligned} \chi (y^k(t))\ge & {} \chi (Y^{-1}_{k}(t)) = -\liminf _{t\rightarrow \infty }\frac{1}{t}\int _{t_0}^ta_{kk}(s)ds. \end{aligned}$$

Therefore,

$$\begin{aligned} 0\ge \limsup _{t\rightarrow \infty }\frac{1}{t}\int _{t_0}^ta_{kk}(s)ds-\liminf _{t\rightarrow \infty }\frac{1}{t}\int _{t_0}^ta_{kk}(s)ds\ge 0 \end{aligned}$$

which implies that there exists the limit \(\lim _{t\rightarrow \infty }\frac{1}{t}\int _{t_0}^ta_{kk}(s)ds, \;\; k = 1,\ldots , d\). \(\square \)

Lyapunov Spectrum for Linear Stochastic Differential Equations

In this section, we would like to investigate the same question from the random perspective, i.e. when the driving path \(\omega \) is a realization of a stochastic process Z with stationary increments. System (1.1) can then be embedded into a stochastic differential equation, or more precisely a random differential equation, which can be solved in the pathwise sense. Such a system generates a stochastic two-parameter flow, hence it makes sense to study its Lyapunov spectrum and also to raise the question of the non-randomness of the spectrum.

Generation of Stochastic Two-Parameter Flows

More precisely, recall that \({\mathcal {C}}^{0,p-\mathrm {var}}([a,b],{\mathbb {R}}^d)\) is the closure of the set of smooth paths from \([a,b]\) to \({\mathbb {R}}^d\) in the \(p\)-variation norm. It is well known (see e.g. [12, Proposition 5.36,  p. 98]) that \({\mathcal {C}}^{0,p-\mathrm {var}}([a,b],{\mathbb {R}}^d)\) is a separable Banach space and moreover

$$\begin{aligned} {\mathcal {C}}^{\alpha \mathrm {-Hol}}([a,b],{\mathbb {R}}^d)\subset {\mathcal {C}}^{0,p\mathrm{-var}}([a,b],{\mathbb {R}}^d) \end{aligned}$$

for all \(\alpha >1/p\). Denote by \({\mathcal {C}}^{0,p-\mathrm {var}}({\mathbb {R}},{\mathbb {R}}^d)\) the space of all \(x: {\mathbb {R}}\rightarrow {\mathbb {R}}^d\) such that \(x|_I \in {\mathcal {C}}^{0,p-\mathrm {var}}(I, {\mathbb {R}}^d)\) for each compact interval \(I\subset {\mathbb {R}}\). We then equip \({\mathcal {C}}^{0,p-\mathrm {var}}({\mathbb {R}},{\mathbb {R}}^d)\) with the compact open topology given by the \(p\)-variation norm, i.e. the topology generated by the metric

$$\begin{aligned} d_p(x,y): = \sum _{m\ge 1} \frac{1}{2^m} (\Vert x-y\Vert _{p\mathrm{-var},[-m,m]}\wedge 1). \end{aligned}$$

Assign

$$\begin{aligned} {\mathcal {C}}^{0,p-\mathrm {var}}_0({\mathbb {R}},{\mathbb {R}}^d):= \{x\in {\mathcal {C}}^{0,p-\mathrm {var}}({\mathbb {R}},{\mathbb {R}}^d)|\; x(0)=0\}. \end{aligned}$$

Note that for \(x\in {\mathcal {C}}^{0,p-\mathrm {var}}_0({\mathbb {R}},{\mathbb {R}}^d)\), \(\left| \left| \left| x\right| \right| \right| _{p\mathrm{-var},I} \) and \(\Vert x\Vert _{p\mathrm{-var},I}\) are equivalent norms for every compact interval I containing 0.

Let us consider a stochastic process \({\bar{Z}}\) defined on a probability space \(({\bar{\Omega }},\bar{{\mathcal {F}}},{\bar{{\mathbb {P}}}})\) with realizations in \(({\mathcal {C}}^{0,p-\mathrm {var}}_0({\mathbb {R}},{\mathbb {R}}), {\mathcal {B}})\), where \({\mathcal {B}}\) is the Borel \(\sigma \)-algebra. Denote by \(\theta \) the Wiener shift on \(({\mathcal {C}}^{0,p-\mathrm {var}}_0({\mathbb {R}},{\mathbb {R}}), {\mathcal {B}})\)

$$\begin{aligned} \theta _t m(\cdot ) = m(t+\cdot ) - m(t),\quad \forall t\in {\mathbb {R}}, \; m\in {\mathcal {C}}^{0,p-\mathrm {var}}_0({\mathbb {R}},{\mathbb {R}}^d). \end{aligned}$$

Due to [2, Theorem 5], \(\theta \) forms a measurable dynamical system \((\theta _t)_{t\in {\mathbb {R}}}\) on \(({\mathcal {C}}^{0,p-\mathrm {var}}_0({\mathbb {R}},{\mathbb {R}}), {\mathcal {B}})\) (see also [9]). Moreover, by its very definition, the Young integral satisfies the shift property with respect to \(\theta \), i.e.

$$\begin{aligned} \int _a^b x(u)d\omega (u) = \int _{a-r}^{b-r} x(u+r) d\theta _r \omega (u). \end{aligned}$$
(4.1)

Namely,

$$\begin{aligned} \int _a^bx(u)d\omega (u):= & {} \lim \nolimits _{|\Pi | \rightarrow 0} \sum _{t_i\in \Pi [a,b]} x(t_i)(\omega (t_{i+1})-\omega (t_i))\nonumber \\= & {} \lim \nolimits _{|\Pi | \rightarrow 0} \sum _{t_i\in \Pi [a-r,b-r]} x(r+t_i)(\omega (r+t_{i+1})-\omega (r+t_i))\nonumber \\= & {} \lim \nolimits _{|\Pi | \rightarrow 0} \sum _{t_i\in \Pi [a-r,b-r]} x(r+t_i)(\theta _r\omega (t_{i+1})-\theta _r\omega (t_i)). \end{aligned}$$
(4.2)
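
A small hypothetical sketch (the integrand x and the path \(\omega \) below are illustrative choices) makes the computation in (4.2) concrete: the left-point Riemann–Stieltjes sums for the two sides of (4.1) coincide term by term.

```python
# Hypothetical check of the shift property (4.1) of the Young integral.
import numpy as np

x = lambda t: np.cos(t)
omega = lambda t: np.sign(t) * np.abs(t)**0.7     # a path on R with omega(0) = 0
a, b, r, n = 1.0, 3.0, 0.75, 100000

def young(f, w, lo, hi):
    t = np.linspace(lo, hi, n + 1)
    return float(np.sum(f(t[:-1]) * np.diff(w(t))))

theta_r_omega = lambda u: omega(u + r) - omega(r)  # Wiener shift theta_r applied to the path
lhs = young(x, omega, a, b)
rhs = young(lambda u: x(u + r), theta_r_omega, a - r, b - r)
print(lhs, rhs)   # identical up to rounding: the two sums agree term by term
```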

Assume further that \({\bar{Z}}\) has stationary increments. It follows, as the simplest version of the rough cocycle construction in [2, Theorem 5] with respect to Young integrals, that there exist a probability measure \({\mathbb {P}}\) on \((\Omega , {\mathcal {F}}) = ({\mathcal {C}}^{0,p-\mathrm {var}}_0({\mathbb {R}},{\mathbb {R}}), {\mathcal {B}})\) that is invariant under \(\theta \), and a so-called diagonal process \(Z: {\mathbb {R}}\times \Omega \rightarrow {\mathbb {R}}\), \(Z(t,{\tilde{\omega }}) = {\tilde{\omega }}(t)\) for all \(t\in {\mathbb {R}}, {\tilde{\omega }} \in \Omega \), such that Z has the same law as \({\bar{Z}}\) and satisfies the helix property:

$$\begin{aligned} Z_{t+s}(\omega ) = Z_s(\omega ) + Z_t(\theta _s\omega ), \forall \omega \in \Omega , t,s\in {\mathbb {R}}. \end{aligned}$$

Such a stochastic process Z also has stationary increments and almost all of its realizations belong to \({\mathcal {C}}^{0,p-\mathrm {var}}_0({\mathbb {R}},{\mathbb {R}})\). It is important to note that the existence of \({\bar{Z}}\) is necessary to construct the diagonal process Z. For example, if \({\bar{Z}}\) is a fractional Brownian motion then the corresponding probability space \(({\bar{\Omega }},\bar{{\mathcal {F}}},{\bar{{\mathbb {P}}}})\) can be constructed explicitly as in [13]. The fact that almost all realizations of a fractional Brownian motion are Hölder continuous is a direct consequence of the Kolmogorov continuity theorem.

Next, we consider the stochastic differential equation

$$\begin{aligned} dx(t) = A(t)x(t) dt + C(t) x(t) d Z(t,\omega ),\ x(t_0)=x_0 \in {\mathbb {R}}^d, t\ge t_0, \end{aligned}$$
(4.3)

where the second differential is understood in the pathwise sense as a Young differential. Under the assumptions of Proposition 2.2, there exists, for almost all \(\omega \in \Omega \), a unique solution to (4.3) in the pathwise sense with the initial value \(x_0\in {\mathbb {R}}^d\). Moreover, the solution \(X:[t_0,t_0+T]\times [t_0,t_0+T]\times {\mathbb {R}}^d \times \Omega \rightarrow {\mathbb {R}}^d\) satisfies: (i) for almost all \(\omega \in \Omega \), \(X(\cdot ,a,x_0,\omega ) \in {\mathcal {C}}^{0,q\mathrm{-var}}([t_0,t_0+T],{\mathbb {R}}^d)\), and (ii) \(X(t,\cdot ,\cdot ,\cdot )\) is measurable w.r.t \((a,x_0,\omega )\). As a result, the generated two-parameter flow \(\Phi _{\omega }(s,t): {\mathbb {R}}^d \rightarrow {\mathbb {R}}^d\) in Proposition 2.4, understood in the pathwise sense, is also a stochastic two-parameter flow (see the definition in [16, p. 114]).

Proposition 4.1

  1. (i)

    The Lyapunov exponents \(\lambda _k(\omega )\), \(k=1,\ldots , d\), of \(\Phi _\omega (s,t)\) are measurable functions of \(\omega \in \Omega \).

  2. (ii)

    For any \(u\in [t_0,\infty )\), the Lyapunov subspaces \(E_k^u(\omega )\), \(k=1,\ldots , d\), of \(\Phi _\omega (s,t)\) are measurable with respect to \(\omega \in \Omega \), and invariant with respect to the flow in the following sense

    $$\begin{aligned} \Phi _\omega (s,t) E_k^s (\omega ) = E_k^t(\omega ),\qquad \hbox {for all}\; s,t\in [t_0,\infty ), \omega \in \Omega , k=1,\ldots ,d. \end{aligned}$$

Proof

The proof of Proposition 4.1 is similar to the one in [7, Theorems 2.5, 2.7, 2.8]. \(\square \)

Lemma 4.2

(Integrability condition) Assume that there exists a function \(H(\cdot ,\cdot )\) which is increasing in the second variable, such that for any \(r \ge 0\)

$$\begin{aligned} E\left| \left| \left| Z\right| \right| \right| ^r_{p\mathrm{-var},[s,t]} \le H(r,t-s),\ \forall 0\le s\le t \le 1. \end{aligned}$$
(4.4)

Then under assumptions (\({\mathbf{H }}_1\)) and (\({\mathbf{H }}_2\)), \(\Phi _\omega \) satisfies the following integrability condition for any \(t_0\ge 0\)

$$\begin{aligned} E \sup _{t_0\le s \le t \le t_0+1} \log ^+ \Vert \Phi _\omega (s,t)^{\pm 1}\Vert \le \eta \Big [2+ \Big (\frac{2M_0}{\mu }\Big )^{p} \Big (1+ H(p,1) \Big )\Big ], \end{aligned}$$
(4.5)

where \(M_0\) is determined by (3.1), \(0<\mu <\min \{1,M_0\}\) and \(\eta = -\,\log (1-\mu )\), and we use the notation

$$\begin{aligned} \log ^+ \Vert \Phi _\omega (s,t)\Vert := \max \{ \log \Vert \Phi _\omega (s,t)\Vert , 0\}. \end{aligned}$$

Proof

The proof follows directly from (3.6) for \(\Phi \) and \(\Psi \) and from (4.4), noting that for the inverse flow \(\Psi _\omega (s,t)^{\mathrm{T}} = \Phi _\omega (s,t)^{-1}\)

$$\begin{aligned} \sup \limits _{t_0 \le s \le t \le t_0+ 1} \log ^+ \Vert \Phi _\omega (s,t)^{-1}\Vert = \sup \limits _{t_0 \le s \le t \le t_0+1} \log ^+ \Vert \Psi _\omega (s,t)\Vert \end{aligned}$$

and that (4.4) is still satisfied for all \(s,t\in [t_0,t_0+1]\) due to the stationarity of the increments of Z. \(\square \)

Notice that condition (4.4) implies (\(\mathbf{H }_3^\prime \)) for almost all driving paths \(\omega \), by the Birkhoff ergodic theorem. Moreover, \(\Gamma _p(\omega )\) is a random variable in \(L^r(\Omega ,{\mathcal {F}},{\mathbb {P}})\) for all \(r>0\). If the metric dynamical system \((\Omega , {\mathcal {F}}, {\mathbb {P}}, (\theta _t)_{t\in {\mathbb {R}}})\) is ergodic, it is known that \(\Gamma _p(\omega )=E\left| \left| \left| Z \right| \right| \right| ^p_{p\mathrm{-var},[0,1]} \) almost surely. As a result, the estimate (3.5) implies the following theorem.

Theorem 4.3

Under assumptions (\({\mathbf{H }}_1\)) and (\({\mathbf{H }}_2\)) and condition (4.4), for each \(k=1,\ldots , d\) the Lyapunov exponent \(\lambda _k(\omega )\) has finite moments of any order \(r>0\). More precisely,

$$\begin{aligned} E|\lambda _k(\omega )|^r \le \eta ^r E \Big [ 2 + \Big (\frac{2M_0}{\mu }\Big )^{p} (1+ \Gamma _p(\omega )) \Big ]^r, \quad \forall k = 1,\ldots ,d. \end{aligned}$$

In particular, if the metric dynamical system \((\Omega , {\mathcal {F}}, {\mathbb {P}}, (\theta _t)_{t\in {\mathbb {R}}})\) is ergodic, the Lyapunov spectrum is bounded a.s. by non-random constants as follows:

$$\begin{aligned} |\lambda _k(\omega )| \le \eta \Big [ 2 + \Big (\frac{2M_0}{\mu }\Big )^{p} (1+E\left| \left| \left| Z \right| \right| \right| ^p_{p\mathrm{-var},[0,1]} ) \Big ], \quad \forall k = 1,\ldots ,d. \end{aligned}$$

Remark 4.4

Assumption (4.4) is satisfied in case Z is a fractional Brownian motion with \(H> \frac{1}{2}\), see [22, Corollary 1.9.2]. Indeed, applying the Garsia–Rodemich–Rumsey inequality, see [24, Lemma 7.3, Lemma 7.4], we see that for any fixed \(r\ge 1\) and \(\frac{1}{2}<\nu < H\)

$$\begin{aligned} E\left| \left| \left| Z\right| \right| \right| ^r_{p\mathrm{-var},[s,t]}\le & {} |t-s|^{\nu r} E \big (\left| \left| \left| Z \right| \right| \right| _{\nu \mathrm{-Hol},[s,t]}\big )^r \le C_{\nu ,H,q,m} |t-s|^{\nu r} |t-s|^{(H-\nu )r} \\= & {} C_{\nu ,H,q,m} |t-s|^{H r}. \end{aligned}$$

Moreover, it is known in [13] that Z can be defined on a metric dynamical system \((\Omega , {\mathcal {F}}, {\mathbb {P}}, (\theta _t)_{t\in {\mathbb {R}}})\) which is ergodic.
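The scaling \(E\left| \left| \left| Z\right| \right| \right| ^r_{p\mathrm{-var},[s,t]}\lesssim |t-s|^{Hr}\) can also be observed empirically. The sketch below simulates fractional Brownian motion by a Cholesky factorization of its covariance and estimates the moment of a grid-based p-variation; the grid sum only bounds the true p-variation from below, and all numerical parameters are arbitrary choices.

```python
import numpy as np

def fbm_paths(H, T, n, n_paths, rng):
    """Sample fBm on a uniform grid of [0, T] via Cholesky of the covariance
       R(s, t) = 0.5 * (s^{2H} + t^{2H} - |t - s|^{2H})."""
    t = np.linspace(0.0, T, n + 1)[1:]                     # exclude t = 0 (B_0 = 0)
    R = 0.5 * (t[:, None]**(2*H) + t[None, :]**(2*H) - np.abs(t[:, None] - t[None, :])**(2*H))
    L = np.linalg.cholesky(R + 1e-12 * np.eye(n))
    Z = L @ rng.standard_normal((n, n_paths))
    return np.vstack([np.zeros((1, n_paths)), Z])          # shape (n + 1, n_paths)

def grid_p_var(paths, p):
    """Grid-based approximation (lower bound) of the p-variation seminorm on the whole interval."""
    inc = np.abs(np.diff(paths, axis=0))
    return (np.sum(inc**p, axis=0))**(1.0 / p)

H, r = 0.75, 2.0
p = 1.0 / 0.7                                              # p > 1/H, so fBm has finite p-variation
rng = np.random.default_rng(0)
for T in (0.25, 0.5, 1.0):
    V = grid_p_var(fbm_paths(H, T, 200, 400, rng), p)
    print(T, np.mean(V**r), T**(H * r))                    # empirical r-th moment vs. the |t-s|^{Hr} scaling
```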

Almost Sure Lyapunov Regularity

In this subsection, for simplicity of presentation we consider all the equations on the whole time line \({\mathbb {R}}\). The half-line case \({\mathbb {R}}^+\) can be easily treated in a similar manner.

We start the subsection with a very special situation in which the coefficient functions are autonomous, i.e. \(A(\cdot ) \equiv A, C(\cdot ) \equiv C\). In this case, the stochastic two-parameter flow \(\Phi _\omega (s,t)\) of (4.3) generates a linear random dynamical system \(\Phi ^\prime \) (see e.g. Arnold [1, Chapter 1] for the definition of random dynamical systems). Indeed, from (4.1) and the fact that

$$\begin{aligned} x(t)= & {} x_0 + \int _0^s A x(u)du + \int _0^s C x(u)d\omega (u) + \int _s^t A x(u)du + \int _s^t C x(u)d\omega (u)\\= & {} x(s) + \int _0^{t-s} A x(u+s)du + \int _0^{t-s} C x(u+s)d\theta _s\omega (u), \end{aligned}$$

it follows due to the autonomy that \(\Phi _\omega (s,t)= \Phi (t-s,\theta _s \omega )\). Hence \(\Phi ^\prime (t,\omega ):= \Phi _\omega (0,t)\) satisfies the cocycle property

$$\begin{aligned} \Phi ^\prime (t+s,\omega ) = \Phi ^\prime (t,\theta _s\omega ) \circ \Phi ^\prime (s,\omega ). \end{aligned}$$

Setting \(t_0 = 0\), it follows from (4.5) that

$$\begin{aligned} \sup _{t\in [0,1]} \log \Vert \Phi ^\prime (t,\omega )^{\pm 1}\Vert \in L^1(\Omega ,{\mathbb {P}}). \end{aligned}$$

By applying the multiplicative ergodic theorem (see Oseledets [25] and Arnold [1, Chapter 3]) to \(\Phi ^\prime \) generated from (4.3), there exists a Lyapunov spectrum consisting of exact Lyapunov exponents, and it coincides with the Lyapunov spectrum defined in Definition 3.1. In addition, the flag of Oseledets’ subspaces coincides with the flag of Lyapunov spaces.
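A crude way to observe the exact Lyapunov exponents numerically is through the singular values of the discretized flow: for a regular system, \(\frac{1}{t}\log \sigma _k(\Phi ^\prime (t,\omega ))\) approaches \(\lambda _k\) as \(t\rightarrow \infty \). The sketch below uses the same heuristic Euler discretization as before, with arbitrary constant coefficients and an arbitrary bounded driving path; it is an illustration only, with no claim about convergence rates.

```python
import numpy as np

def flow_euler(A, C, omega, s, t, n, d=2):
    """Heuristic Euler approximation of the matrix flow Phi_omega(s, t)."""
    grid = np.linspace(s, t, n + 1)
    Phi = np.eye(d)
    for u, v in zip(grid[:-1], grid[1:]):
        Phi = Phi + (v - u) * A(u) @ Phi + (omega(v) - omega(u)) * C(u) @ Phi
    return Phi

# autonomous illustrative coefficients (upper triangular) and an arbitrary bounded driving path
A = lambda t: np.array([[-0.2, 1.0], [0.0, -1.0]])
C = lambda t: 0.05 * np.array([[1.0, 0.0], [0.0, -1.0]])
omega = lambda t: np.sin(7.0 * t) + 0.3 * np.sin(t)

for T in (10.0, 20.0, 40.0):
    Phi = flow_euler(A, C, omega, 0.0, T, n=int(200 * T))
    sv = np.linalg.svd(Phi, compute_uv=False)
    # crude estimates of the Lyapunov exponents; for this triangular example with a bounded
    # driving path one expects values near -0.2 and -1.0
    print(T, np.log(sv) / T)
```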

The same conclusions hold if the system is periodic with period r, i.e. \(A(\cdot + r) = A(\cdot ), C(\cdot +r) = C(\cdot )\). In fact, we can prove that \(\Phi _\omega (s,t) = \Phi _{\theta _r \omega }(s-r,t-r)\) and

$$\begin{aligned} \Phi (nr,\omega ) = \Phi (r,\theta _{(n-1)r}\omega ) \circ \Phi ((n-1)r,\omega ),\ \forall n \in {\mathbb {Z}}. \end{aligned}$$

In this case, (4.3) generates a discrete random dynamical system \(\{\Phi (nr,\omega )\}_{n\in {\mathbb {Z}}}\) which satisfies the integrability condition (4.5). Hence all the conclusions of the MET hold and almost all the pathwise systems are Lyapunov regular.

In general, it might not be true that system (4.3) is regular for almost all \(\omega \). However, under further assumptions on A, C, we can construct a linear random dynamical system such that almost surely all the pathwise systems are Lyapunov regular. The construction uses the so-called Bebutov flow, as investigated by Millionshchikov [20, 21] (see also [14, 26, 27]). Specifically, assume that A satisfies the stronger condition

$$\begin{aligned} ({\mathbf{H }}_1^\prime ) \quad {\hat{A}}:=\Vert A\Vert _{\infty ,[0,\infty )}< \infty \quad \hbox {and}\quad \lim \limits _{\delta \rightarrow 0} \sup \limits _{|t-s| < \delta } |A(t)-A(s)| =0. \end{aligned}$$

Consider the shift dynamical system \(S^A_t(A)(\cdot ):= A(\cdot +t)\) in the space \({\mathcal {C}}^b = {\mathcal {C}}^b({\mathbb {R}},{\mathbb {R}}^{d \times d})\) of bounded and uniformly continuous matrix-valued functions on \({\mathbb {R}}\) with the supremum norm. The closed hull \({\mathcal {H}}^A:=\overline{\cup _t S_t(A)}\) in \({\mathcal {C}}^b\) is then compact, hence we can construct on \({\mathcal {H}}^A\) a probability structure such that \(({\mathcal {H}}^A,{\mathcal {F}}^A,\mu ^A, S^A)\) is a probability space, where \(\mu ^A\) is an \(S^A\)-invariant probability measure, see e.g. [15, Theorem 4.9, p. 63].

When applying Millionshchikov’s approach of using Bebutov flows to our system (4.3), we need to construct not only \(({\mathcal {H}}^A,{\mathcal {F}}^A,\mu ^A, S^A)\) but also \(({\mathcal {H}}^C,{\mathcal {F}}^C,\mu ^C, S^C)\), with a slightly stronger regularity condition on C. Recall that \({\mathcal {C}}^{0,\alpha -\mathrm{Hol}}([a,b],{\mathbb {R}}^{d \times d})\) is the closure of smooth paths from [a, b] to \({\mathbb {R}}^{d \times d}\) in the \(\alpha \)-Hölder norm, and \({\mathcal {C}}^{0,\alpha -\mathrm{Hol}}({\mathbb {R}},{\mathbb {R}}^{d \times d})\) is the space of all \(x: {\mathbb {R}}\rightarrow {\mathbb {R}}^{d \times d}\) such that \(x|_I \in {\mathcal {C}}^{0,\alpha -\mathrm{Hol}}(I,{\mathbb {R}}^{d \times d})\) for each compact interval \(I\subset {\mathbb {R}}\), equipped with the compact open topology given by the Hölder norm, i.e. the topology generated by the metric

$$\begin{aligned} d(x,y): = \sum _{m\ge 1} \frac{1}{2^m} (\Vert x-y\Vert _{\alpha ,[-m,m]}\wedge 1). \end{aligned}$$

Following [15, Chapter 2, p. 62], for any \(c \in {\mathcal {C}}^{0,\alpha -\mathrm{Hol}}({\mathbb {R}},{\mathbb {R}}^{d \times d})\), any interval [a, b] and \(\delta >0\), we define the modulus of \(\alpha \)-Hölder continuity on [a, b]:

$$\begin{aligned} m^{[a,b]}(c,\delta ):= \left| \left| \left| c \right| \right| \right| _{\alpha ,\delta ,[a,b]} = \sup _{a\le s<t \le b, t-s \le \delta } \frac{|c(t)-c(s)|}{|t-s|^\alpha }. \end{aligned}$$
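For discretely sampled paths, both the modulus \(m^{[a,b]}(c,\delta )\) and the metric d admit elementary grid approximations. The following sketch is illustrative only: it works on finite grids, truncates the sum defining d, and takes the \(\alpha \)-Hölder norm \(\Vert \cdot \Vert _{\alpha ,[-m,m]}\) to be the supremum norm plus the Hölder seminorm, which is an assumption about the convention rather than something stated above.

```python
import numpy as np

def holder_modulus(times, values, alpha, delta):
    """Grid approximation of m^{[a,b]}(c, delta) = sup_{0 < t - s <= delta} |c(t) - c(s)| / (t - s)^alpha."""
    t = np.asarray(times); c = np.asarray(values)
    best = 0.0
    for i in range(len(t)):
        for j in range(i + 1, len(t)):
            h = t[j] - t[i]
            if h > delta:
                break
            best = max(best, abs(c[j] - c[i]) / h**alpha)
    return best

def holder_metric(c, c_prime, alpha, m_max=5, n=200):
    """Truncated approximation of d(x, y) = sum_m 2^{-m} (||x - y||_{alpha, [-m, m]} capped at 1),
       with the Hölder norm taken as sup norm + Hölder seminorm (assumed convention)."""
    total = 0.0
    for m in range(1, m_max + 1):
        t = np.linspace(-m, m, n + 1)
        diff = c(t) - c_prime(t)
        sup_norm = np.max(np.abs(diff))
        seminorm = holder_modulus(t, diff, alpha, delta=2.0 * m)   # delta = interval length: full seminorm
        total += 2.0**(-m) * min(sup_norm + seminorm, 1.0)
    return total

# illustrative paths
c1 = lambda t: np.cos(t)
c2 = lambda t: np.cos(t) + 0.05 * np.sin(3.0 * t)
print(holder_metric(c1, c2, alpha=0.5))
```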

By the same arguments as in [15, Theorems 4.9, 4.10, pp. 62–64] we get the following result, of which the proof is given in the “Appendix”.

Lemma 4.5

A set \({\mathcal {H}}\subset {\mathcal {C}}^{0,\alpha -\mathrm{Hol}}({\mathbb {R}},{\mathbb {R}}^k)\) has a compact closure if and only if the following conditions hold:

$$\begin{aligned} \sup _{c \in {\mathcal {H}}} |c(0)|< & {} \infty , \end{aligned}$$
(4.6)
$$\begin{aligned} \lim \limits _{\delta \rightarrow 0} \sup _{c \in {\mathcal {H}}} m^{[a,b]}(c,\delta )= & {} 0,\quad \forall [a,b]. \end{aligned}$$
(4.7)

To construct a Bebutov flow for C, assume that there exists \(\alpha > \frac{1}{q}\) such that \(C \in {\mathcal {C}}^{0,\alpha -\mathrm{Hol}}({\mathbb {R}}, {\mathbb {R}}^{d\times d})\) satisfies a condition stronger than (\(\mathbf{H }_2\)):

$$\begin{aligned} \mathrm{(}{\mathbf{H }}_2^\prime \mathrm{)}\quad \Vert C\Vert _{\infty ,{\mathbb {R}}} = \sup _{ t\in {\mathbb {R}}} |C(t)|< \infty \qquad \text {and} \qquad \lim \limits _{\delta \rightarrow 0} \sup _{-\infty<s< t < \infty , |t-s|\le \delta } \frac{|C(t) - C(s) |}{|t-s|^\alpha } = 0.\nonumber \\ \end{aligned}$$
(4.8)

Consider the set of translations \(C_r (\cdot ):= C(r + \cdot ) \in {\mathcal {C}}^{0,\alpha -\mathrm{Hol}}({\mathbb {R}}, {\mathbb {R}}^{d\times d})\). Under condition (4.8), Lemma 4.5 shows that the closure \({\mathcal {H}}^C:=\overline{\{C_r: r\in {\mathbb {R}}\}}\) is compact in the separable completely metrizable topological space \({\mathcal {C}}^{0,\alpha -\mathrm{Hol}}({\mathbb {R}},{\mathbb {R}}^{d \times d})\) (the separability of \({\mathcal {C}}^{0,\alpha -\mathrm{Hol}}({\mathbb {R}},{\mathbb {R}}^{d \times d})\) comes from the separability of \({\mathcal {C}}^{0,\alpha -\mathrm{Hol}}([-m,m],{\mathbb {R}}^{d \times d})\) [12, Proposition 5.36, p. 98] and the properties of the metric d(x, y), see also [2, Proposition 1]); in fact \(\theta _t\) also preserves the norm on \({\mathcal {C}}^{0,\alpha -\mathrm{Hol}}({\mathbb {R}},{\mathbb {R}}^{d \times d})\). The shift dynamical system \(S^C_t c(\cdot ) = c(t + \cdot )\) maps \({\mathcal {H}}^C\) into itself, hence by the Krylov–Bogolyubov theorem [23, Chapter VI, §9], there exists at least one probability measure \(\mu ^C\) on \({\mathcal {H}}^C\) that is invariant under \(S^C\), i.e. \(\mu ^C (S^C_t \cdot ) = \mu ^C(\cdot )\) for all \(t \in {\mathbb {R}}\).

It makes sense then to construct the product probability space \({\mathbb {B}} = {\mathcal {H}}^A \times {\mathcal {H}}^C \times \Omega \) with the product sigma field \({\mathcal {F}}^A \times {\mathcal {F}}^C \times {\mathcal {F}}\), the product measure \(\mu ^{{\mathbb {B}}}:=\mu ^A \times \mu ^C \times {\mathbb {P}}\) and the product dynamical system \(\Theta = S^A \times S^C \times \theta \) given by

$$\begin{aligned} \Theta _t({\tilde{A}},{\tilde{C}},\omega ) := (S^A_t({\tilde{A}}), S^C_t({\tilde{C}}), \theta _t \omega ). \end{aligned}$$

Now for each point \(b = ({\tilde{A}},{\tilde{C}},\omega ) \in {\mathbb {B}}\), the fundamental (matrix) solution \(\Phi ^*(t,b)\) of the equation

$$\begin{aligned} dx(t) = {\tilde{A}}(t)x(t)dt + {\tilde{C}}(t)x(t)d\omega (t), \quad x(0)=x_0\in {\mathbb {R}}^d, \end{aligned}$$
(4.9)

defined by \(\Phi ^*(t,b)x_0 := x(t)\), satisfies the cocycle property due to the existence and uniqueness theorem and the fact that

$$\begin{aligned} x(t+s)= & {} x_0 + \int _0^s {\tilde{A}}(u) x(u)du + \int _0^s {\tilde{C}}(u) x(u)d\omega (u) \\&+\, \int _s^{t+s} {\tilde{A}}(u) x(u)du + \int _s^{t+s} {\tilde{C}}(u) x(u)d\omega (u)\\= & {} x(s) + \int _0^{t} S^A_s({\tilde{A}})(u) x(u+s)du + \int _0^{t} S^C_s({\tilde{C}})(u) x(u+s)d\theta _s\omega (u). \end{aligned}$$

Therefore the nonautonomous linear YDE (4.9) generates a cocycle (random dynamical system) \(\Phi ^*: {\mathbb {R}}\times {\mathbb {B}} \times {\mathbb {R}}^d \rightarrow {\mathbb {R}}^d\) over the metric dynamical system \(({\mathbb {B}},\mu ^{{\mathbb {B}}})\). Thus, starting from one linear stochastic nonautonomous YDE (4.3), we consider it \(\omega \)-wise, embed it into a Bebutov flow using Millionshchikov’s approach [21], and thereby construct a random dynamical system over the product probability space, for which the following statement holds.

Theorem 4.6

(Millionshchikov theorem) Under assumptions (\({\mathbf{H }}_1^\prime \)), (\({\mathbf{H }}_2^\prime \)) and (4.4), the nonautonomous linear stochastic (\(\omega \)-wise) Young Eq. (4.9) is Lyapunov regular for almost all \(b \in {\mathbb {B}}\) in the sense of the probability measure \(\mu ^{{\mathbb {B}}}\).

Proof

The integrability condition for the product probability measure \(\mu ^{\mathbb {B}}\) is a direct consequence of (4.5). Hence all the conclusions of the multiplicative ergodic theorem hold for almost all \(b \in {\mathbb {B}}\), which implies the Lyapunov regularity of (4.9) for almost all \(b\in {\mathbb {B}}\) in the sense of the probability measure \(\mu ^{{\mathbb {B}}}\). \(\square \)

Remark 4.7

  1. (i)

In [20, 21], Millionshchikov proved the Lyapunov regularity of the ordinary differential equation \({\dot{x}} = A(t)x\), almost surely with respect to an arbitrary invariant measure of the Bebutov flow on \({\mathcal {H}}^A\), using the triangularization scheme provided by the Perron theorem for ordinary differential equations. In other words, Millionshchikov obtained an alternative proof of the multiplicative ergodic theorem (see also Arnold [1, p. 112], Johnson et al. [14]). In fact, Millionshchikov proved a property slightly stronger than Lyapunov regularity, namely that almost all such systems are statistically regular.

  2. (ii)

Theorem 4.6 can be viewed as a version of the multiplicative ergodic theorem for a nonautonomous linear stochastic Young differential equation, which combines Millionshchikov’s approach [21] (a topological setting using the Bebutov flow of the differential equation) with Oseledets’ approach [25] (a measurable setting with the probability space \((\Omega ,{\mathcal {F}},{\mathbb {P}})\)).

  3. (iii)

    It is important to note that, although for almost all \(b\in {\mathbb {B}}\) the nonautonomous linear stochastic (\(\omega \)-wise) Young Eq. (4.9) is Lyapunov regular, it does not follow that the original system (4.3) is Lyapunov regular.

Discussions on the Non-randomness of Lyapunov Exponents

Since we are dealing with the stochastic YDE (4.3), it is important and interesting to know whether its Lyapunov spectrum is nonrandom. We give here a brief discussion of this problem.

We remind the reader of the non-randomness of the Lyapunov exponents \(\lambda _1(\omega ),\ldots , \lambda _d(\omega )\) for systems driven by standard Brownian noises (see e.g. [7, 10]). Since Theorem 3.3 still holds in that situation and the definition of Lyapunov exponent does not depend on the initial time \(t_0\), it follows that \(\lambda _k(\omega )\) is measurable with respect to the sigma algebra generated by \(\{W(n+1)-W(n): n\ge m\}\) for any \(m\ge 0\), thus measurable w.r.t. the tail sigma field \(\cap _m \sigma (\{W(n+1)-W(n): n\ge m\})\). Due to the independence of the increments \(W(n+1)-W(n)\), one can apply Kolmogorov’s zero-one law [15] to conclude that the Lyapunov exponents are in fact non-random constants. Thus we have nonrandomness of the Lyapunov spectrum in the case of nonautonomous linear stochastic differential equations driven by standard Brownian motions. Note that here the Lyapunov exponents of the systems can be nonexact.

In general, a stochastic process Z does not have independent increments, so it is difficult to construct such a filtration and to apply Kolmogorov’s zero-one law. The second case of a nonrandom Lyapunov spectrum is that of autonomous or periodic linear stochastic Young equations, discussed at the beginning of the previous subsection, where we may apply the classical Oseledets MET by exploiting the autonomy or periodicity of the system. Note that in this case the probability measure is that of the process Z and the Lyapunov exponents of the systems are exact.

The third case is that of triangular nonautonomous linear stochastic Young differential equations, treated in Sect. 3. In this case, due to the triangular form of the system we may solve it successively and use the explicit formula of the solution to derive Theorem 3.7, showing that the Lyapunov spectrum consists of exact Lyapunov exponents and is nonrandom. Note that in this case the system is nonautonomous, the measure is the probability measure of the process Z and the Lyapunov exponents of the systems are exact.

For a general system (4.3) which satisfies assumptions (\({\mathbf{H }}_1^\prime \)), (\({\mathbf{H }}_2^\prime \)) and (4.4), the non-randomness of the Lyapunov spectrum depends on whether the product dynamical system \(\Theta \) is ergodic with respect to the product probability measure \(\mu ^{\mathbb {B}}\), as a consequence of the Birkhoff ergodic theorem. The answer is affirmative in case \(S^A\) and \(S^C\) are weakly mixing and \(\theta \) is ergodic, i.e. \(S^A\) (respectively \(S^C\)) satisfies the condition

$$\begin{aligned} \lim \limits _{n \rightarrow \infty } \frac{1}{n} \sum _{j=0}^{n-1} \Big |\mu ^A\Big (S^A_j(Q_1) \cap Q_2\Big ) - \mu ^A (Q_1) \mu ^A(Q_2)\Big | =0,\quad \forall Q_1, Q_2 \in {\mathcal {F}}^A \end{aligned}$$

(respectively for \(S^C\)). It is well known (see e.g. Mañé [17, p. 147]) that the weak mixing of \(S^A\) and \(S^C\) implies the weak mixing of the product dynamical system \(S^A \times S^C\) which, together with the ergodicity of \(\theta \), implies the ergodicity of the product flow \(\Theta \). The problem of the non-randomness of the Lyapunov spectrum can therefore be translated into the question of the weak mixing of the dynamical systems \(S^A\) and \(S^C\).

References

  1. Arnold, L.: Random Dynamical Systems. Springer, Berlin (1998)

  2. Bailleul, I., Riedel, S., Scheutzow, M.: Random dynamical systems, rough paths and rough flows. J. Differ. Equ. 262(12), 5792–5823 (2017)

  3. Barreira, L.: Lyapunov Exponents. Birkhäuser, Basel (2017)

  4. Bylov, B.F., Vinograd, R.E., Grobman, D.M., Nemytskii, V.V.: Theory of Lyapunov Exponents. Nauka, Moscow (1966) (in Russian)

  5. Cass, T., Litterer, C., Lyons, T.: Integrability and tail estimates for Gaussian rough differential equations. Ann. Probab. 41(4), 3026–3050 (2013)

  6. Cong, N.D.: On central and auxiliary exponents of linear equations with coefficients perturbed by a white noise. Differentsial'nye Uravneniya 26, 420–427 (1990). English transl. in Differ. Equ. 26(3), 307–313 (1990)

  7. Cong, N.D.: Lyapunov spectrum of nonautonomous linear stochastic differential equations. Stoch. Dyn. 1(1), 1–31 (2001)

  8. Cong, N.D.: Almost all nonautonomous linear stochastic differential equations are regular. Stoch. Dyn. 4(3), 351–371 (2004)

  9. Cong, N.D., Duc, L.H., Hong, P.T.: Young differential equations revisited. J. Dyn. Differ. Equ. 30(4), 1921–1943 (2018)

  10. Cong, N.D., Quynh, N.T.T.: Lyapunov exponents and central exponents of linear Ito stochastic differential equations. Acta Math. Vietnam. 36, 35–53 (2011)

  11. Demidovich, B.P.: Lectures on Mathematical Theory of Stability. Nauka, Moscow (1967) (in Russian)

  12. Friz, P., Victoir, N.: Multidimensional Stochastic Processes as Rough Paths: Theory and Applications. Cambridge Studies in Advanced Mathematics, vol. 120. Cambridge University Press, Cambridge (2010)

  13. Garrido-Atienza, M.J., Schmalfuß, B.: Ergodicity of the infinite dimensional fractional Brownian motion. J. Dyn. Differ. Equ. 23, 671–681 (2011). https://doi.org/10.1007/s10884-011-9222-5

  14. Johnson, R.A., Palmer, K.J., Sell, G.R.: Ergodic properties of linear dynamical systems. SIAM J. Math. Anal. 18(1), 1–33 (1987)

  15. Karatzas, I., Shreve, S.: Brownian Motion and Stochastic Calculus, 2nd edn. Springer, Berlin (1991)

  16. Kunita, H.: Stochastic Flows and Stochastic Differential Equations. Cambridge University Press, Cambridge (1990)

  17. Mañé, R.: Ergodic Theory and Differentiable Dynamics. Springer, Berlin (1987)

  18. Millionshchikov, V.M.: Formulae for Lyapunov exponents of a family of endomorphisms of a metrized vector bundle. Mat. Zametki 39, 29–51 (1986). English transl. in Math. Notes 39, 17–30 (1986)

  19. Millionshchikov, V.M.: Formulae for Lyapunov exponents of linear systems of differential equations. Trans. I. N. Vekua Institute of Applied Mathematics 22, 150–179 (1987)

  20. Millionshchikov, V.M.: Statistically regular systems. Math. USSR-Sbornik 4(1), 125–135 (1968)

  21. Millionshchikov, V.M.: Metric theory of linear systems of differential equations. Math. USSR-Sbornik 4(2), 149–158 (1968)

  22. Mishura, Y.: Stochastic Calculus for Fractional Brownian Motion and Related Processes. Lecture Notes in Mathematics. Springer, Berlin (2008)

  23. Nemytskii, V.V., Stepanov, V.V.: Qualitative Theory of Differential Equations. Princeton University Press, Princeton (1960). English translation

  24. Nualart, D., Răşcanu, A.: Differential equations driven by fractional Brownian motion. Collect. Math. 53(1), 55–81 (2002)

  25. Oseledets, V.I.: A multiplicative ergodic theorem. Lyapunov characteristic numbers for dynamical systems. Trans. Moscow Math. Soc. 19, 97–231 (1968)

  26. Sacker, R.J., Sell, G.: Lifting properties in skew-product flows with applications to differential equations. Memoirs of the American Mathematical Society 11(190) (1977)

  27. Sell, G.: Nonautonomous differential equations and topological dynamics. I. The basic theory. Trans. Am. Math. Soc. 127(2), 241–262 (1967)

  28. Young, L.C.: An inequality of the Hölder type, connected with Stieltjes integration. Acta Math. 67, 251–282 (1936)

  29. Zähle, M.: Integration with respect to fractal functions and stochastic calculus. I. Probab. Theory Relat. Fields 111(3), 333–374 (1998)


Acknowledgements

Open access funding provided by Max Planck Society. This research is partly funded by Vietnam National Foundation for Science and Technology Development (NAFOSTED) under Grant Number FWO.101.2017.01.


Corresponding author

Correspondence to Luu Hoang Duc.

Additional information

In memory of V. M. Millionshchikov.


Appendix

Proof of Proposition 2.2

The proof follows the same techniques as in [24] and [9], with some modifications. First, consider \(x\in {\mathcal {C}}^{q\mathrm{-var}}([a,b],{\mathbb {R}}^d)\) with some \([a,b] \subset [t_0,t_0+T]\). Define the mapping given by

$$\begin{aligned} F(x)(t)= & {} x(a) + I(x)(t)+J(x)(t)\nonumber \\:= & {} x(a) + \int _{a}^t A(s)x(s) ds +\int _{a}^t C(s)x(s)d\omega (s), \quad \forall t\in [a,b]. \end{aligned}$$
(5.1)

Then \(F(x)\in {\mathcal {C}}^{p\mathrm{-var}}([a,b],{\mathbb {R}}^{d})\) and direct computations show that for every \([s,t]\subset [a,b]\)

$$\begin{aligned} \left| \left| \left| Fx\right| \right| \right| _{p\mathrm{-var},[s,t]}\le & {} P \Vert x\Vert _{q\mathrm{-var},[s,t]} \end{aligned}$$
(5.2)
$$\begin{aligned} \left| \left| \left| Fx-Fy\right| \right| \right| _{p\mathrm{-var},[s,t]}\le & {} P \Vert x-y\Vert _{q\mathrm{-var},[s,t]}, \end{aligned}$$
(5.3)

where

$$\begin{aligned} P:=\Vert A\Vert _{\infty ,[t_0,t_0+T]}(t-s)+2K\Vert C\Vert _{q\mathrm{-var},[t_0,t_0+T]}\left| \left| \left| \omega \right| \right| \right| _{p\mathrm{-var},[s,t]}, \quad K \; \hbox {is defined in }(2.2). \end{aligned}$$

Similarly to [9], for a given \(0<\mu <\min \{1,M^*\}\), where \(M^*\) is defined by (2.6), we construct the sequence of strictly increasing greedy times \(\tau _n\) with \(\tau _0 = 0\) satisfying

$$\begin{aligned} (\tau _{k}-\tau _{k-1})+\left| \left| \left| \omega \right| \right| \right| _{p\mathrm{-var},[\tau _{k-1},\tau _k]}=\mu /M^*. \end{aligned}$$
(5.4)

Then \(\tau _k\rightarrow \infty \) as \(k\rightarrow \infty \) (see the proof in [9]). Denote by \(N(a,b,\omega )\) the number of the \(\tau _k\) in the finite interval (a, b]; then from [9]

$$\begin{aligned} N(t_0,t_0+T,\omega )-1\le & {} \left( \frac{2M^*}{\mu }\right) ^p(T^p+\left| \left| \left| \omega \right| \right| \right| ^p_{p\mathrm{-var},[t_0,t_0+T]}). \end{aligned}$$
(5.5)
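The greedy times (5.4) can be approximated on a grid by advancing \(\tau _k\) until the elapsed time plus the p-variation of \(\omega \) over \([\tau _{k-1},\tau _k]\) reaches the threshold \(\mu /M^*\). In the sketch below the supremum defining the p-variation is restricted to partition points of the grid, and the path and all parameters are arbitrary; it is meant only to illustrate the construction.

```python
import numpy as np

def p_variation(values, p):
    """p-variation of a discretely sampled path (supremum over sub-partitions of the grid),
       computed by dynamic programming."""
    v = np.asarray(values)
    best = np.zeros(len(v))           # best[j] = sup over partitions of {0,...,j} of sum of |increments|^p
    for j in range(1, len(v)):
        best[j] = max(best[i] + abs(v[j] - v[i])**p for i in range(j))
    return best[-1]**(1.0 / p)

def greedy_times(times, omega_values, p, threshold):
    """Grid approximation of the greedy times tau_k with
       (tau_k - tau_{k-1}) + |||omega|||_{p-var,[tau_{k-1}, tau_k]} ~ threshold (= mu / M*)."""
    taus, k0 = [0], 0
    for k in range(1, len(times)):
        cost = (times[k] - times[k0]) + p_variation(omega_values[k0:k + 1], p)
        if cost >= threshold:
            taus.append(k)
            k0 = k
    return [times[k] for k in taus]

# illustrative data: an irregular sampled path
rng = np.random.default_rng(1)
t = np.linspace(0.0, 5.0, 501)
omega = np.cumsum(rng.standard_normal(501)) * (5.0 / 501)**0.6
print(greedy_times(t, omega, p=1.8, threshold=0.5))
```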

Without loss of generality assume that \(t_0=0\). Define the set

$$\begin{aligned} B=\{x\in {\mathcal {C}}^{q\mathrm{-var}}([\tau _0,\tau _1],{\mathbb {R}}^d)|\ x(\tau _0)= x_{0},\;\Vert x\Vert _{q\mathrm{-var},[\tau _0,\tau _1]}\le \frac{1}{1-\mu }|x_{0}|\}. \end{aligned}$$

It is easy to check that B is a closed ball in the Banach space \( {\mathcal {C}}^{q\mathrm{-var}}([\tau _0,\tau _1],{\mathbb {R}}^d)\). Using (5.2), (5.4) and the fact that \(p<q\), we have

$$\begin{aligned} \Vert F(x)\Vert _{q\mathrm{-var},[\tau _0,\tau _1]}\le & {} |x_{0}| + M^* (\tau _1-\tau _0 + \left| \left| \left| \omega \right| \right| \right| _{p\mathrm{-var},[\tau _0,\tau _1]} ) \Vert x\Vert _{q\mathrm{-var},[\tau _0,\tau _1]}\\\le & {} |x_{0}| +\mu \Vert x\Vert _{q\mathrm{-var},[\tau _0,\tau _1]}\\\le & {} \frac{1}{1-\mu }|x_{0}|, \ \text {for each } x\in B. \end{aligned}$$

Hence, \(F:B\rightarrow B\). On the other hand, by (5.3) and (5.4), for any \(x,y\in B\)

$$\begin{aligned} \Vert F(x)-F(y)\Vert _{q\mathrm{-var},[\tau _0,\tau _1]} \le \mu \Vert x-y\Vert _{q\mathrm{-var},[\tau _0,\tau _1]}. \end{aligned}$$

Since \(\mu <1\), F is a contraction mapping on B. We conclude that there exists a unique fixed point of F in B, i.e. there exists a local solution of (2.3) on \([\tau _0,\tau _1]\). By induction we obtain the solution on \([\tau _i,\tau _{i+1}]\) for \(i=1,\ldots ,N(0,T,\omega )-1\) and finally on \([\tau _{N(0,T,\omega )-1},T]\). The global solution of (2.3) then exists uniquely. From (5.2) it is obvious that the solution is in \( {\mathcal {C}}^{p\mathrm{-var}}([t_0,t_0+T],{\mathbb {R}}^d)\). Estimate (2.4) can then be derived using arguments similar to [9, Remark 3.4iii]. In fact,

$$\begin{aligned} \Vert x\Vert _{\infty ,[\tau _0,\tau _1]}\le \Vert x\Vert _{q\mathrm{-var},[\tau _0,\tau _1]} \le \frac{1}{1-\mu }|x_{0}|, \end{aligned}$$

thus by induction, it is evident that

$$\begin{aligned} \Vert x\Vert _{\infty ,[t_0,t_0+T]}\le |x_{0}| \left( \frac{1}{1-\mu }\right) ^{N(t_0,t_0+T,\omega ) +1} \le |x_{0}| e^{\eta [N(t_0,t_0+T,\omega ) +1]}. \end{aligned}$$

Combining with (5.5), we obtain (2.4).

Since

$$\begin{aligned} \left| \left| \left| x\right| \right| \right| _{q\mathrm{-var},[s,t]}=\left| \left| \left| F(x)\right| \right| \right| _{q\mathrm{-var},[s,t]} \le M^* (t-s + \left| \left| \left| \omega \right| \right| \right| _{p\mathrm{-var},[s,t]})\Vert x\Vert _{q\mathrm{-var},[s,t]} \end{aligned}$$

for all \(s<t\in [t_0, t_0+T]\), arguing similarly to [9, Corollary 3.5] we obtain

$$\begin{aligned} \left| \left| \left| x\right| \right| \right| _{q\mathrm{-var},[t_0,t_0+T]}\le & {} |x_{0}| e^{(1+\eta )[3+ (\frac{2M^*}{\mu })^p(T^p+\left| \left| \left| \omega \right| \right| \right| ^p_{p\mathrm{-var},[t_0,t_0+T]})]}. \end{aligned}$$

Next, fix \((x_0,\omega )\) and consider \((x_0',\omega ')\) in the ball centered at \((x_0,\omega )\) with radius 1.

Put \(x(\cdot ) = x(\cdot , t_0,x_0,\omega )\), \(x'(\cdot ) = x(\cdot , t_0,x'_0,\omega ')\) and \(y(\cdot ) =x(\cdot )-x'(\cdot ) \). By (2.4) and (2.5), we can choose a positive number \(D_1\) (depending on \(M^*,x_0,\omega \)) such that

$$\begin{aligned} \Vert x\Vert _{p\mathrm{-var},[t_0,t_0+T]}, \Vert x'\Vert _{p\mathrm{-var},[t_0,t_0+T]}\le D_1. \end{aligned}$$

We have

$$\begin{aligned} |y(t)-y(s)|\le & {} \Big |\int _s^t A(u)y(u)du\Big | + \Big |\int _s^t C(u)y(u)d\omega (u)\Big | \\&+\, \Big |\int _s^t C(u)x'(u)d(\omega (u)-\omega '(u))\Big |\\\le & {} \Vert A\Vert _{\infty ,[t_0,t_0+T]}\Vert y\Vert _{\infty ,[s,t]}(t-s) \\&+\, 2K\Vert C\Vert _{q\mathrm{-var},[t_0,t_0+T]}\Vert y\Vert _{p\mathrm{-var},[s,t]}\left| \left| \left| \omega \right| \right| \right| _{p\mathrm{-var},[s,t]}\\&+\, 2K\Vert C\Vert _{q\mathrm{-var},[t_0,t_0+T]}\Vert x'\Vert _{p\mathrm{-var},[s,t]}\left| \left| \left| \omega -\omega '\right| \right| \right| _{p\mathrm{-var},[s,t]} \\\le & {} M^* (t-s+\left| \left| \left| \omega \right| \right| \right| _{p\mathrm{-var},[s,t]})\Vert y\Vert _{p\mathrm{-var},[s,t]}\\&+\,M^*\Vert x'\Vert _{p\mathrm{-var},[s,t]}\left| \left| \left| \omega -\omega '\right| \right| \right| _{p\mathrm{-var},[s,t]}, \end{aligned}$$

which yields

$$\begin{aligned} \left| \left| \left| y\right| \right| \right| _{p\mathrm{-var},[s,t]}\le & {} M^* (t-s+\left| \left| \left| \omega \right| \right| \right| _{p\mathrm{-var},[s,t]})\Vert y\Vert _{p\mathrm{-var},[s,t]}\\&+\,M^*\Vert x'\Vert _{p\mathrm{-var},[t_0,t_0+T]}\left| \left| \left| \omega -\omega '\right| \right| \right| _{p\mathrm{-var},[s,t]} \end{aligned}$$

By applying [9, Corollary 3.5], we obtain

$$\begin{aligned}&\left| \left| \left| y\right| \right| \right| _{p\mathrm{-var},[t_0,t_0+T]} \\&\quad \le (|y(t_0)|+M^*\Vert x'\Vert _{p\mathrm{-var},[t_0,t_0+T]}\left| \left| \left| \omega -\omega '\right| \right| \right| _{p\mathrm{-var},[t_0,t_0+T]}) e^{D_2(T^p+\left| \left| \left| \omega \right| \right| \right| ^p_{p\mathrm{-var},[t_0,t_0+T]})}\\&\quad \le \left( |y(t_0)|+D_1M^* \left| \left| \left| \omega -\omega '\right| \right| \right| _{p\mathrm{-var},[t_0,t_0+T]}\right) e^{D_2(T^p+\left| \left| \left| \omega \right| \right| \right| ^p_{p\mathrm{-var},[t_0,t_0+T]})}\\&\quad \le D_3( |x_0 -x_0'|+ \left| \left| \left| \omega \right| \right| \right| _{p\mathrm{-var},[a,a']}+ \left| \left| \left| \omega -\omega '\right| \right| \right| _{p\mathrm{-var},[t_0,t_0+T]}), \end{aligned}$$

where \(D_2, D_3\) are constants depending on x and \(M^*\). Therefore

$$\begin{aligned} \Vert y\Vert _{p\mathrm{-var},[t_0,t_0+T]}\le & {} |y(t_0)| + \left| \left| \left| y\right| \right| \right| _{p\mathrm{-var},[t_0,t_0+T]}\\\le & {} D_4(|x_0 -x_0'|+ \left| \left| \left| \omega \right| \right| \right| _{p\mathrm{-var},[a,a']}+ \left| \left| \left| \omega -\omega '\right| \right| \right| _{p\mathrm{-var},[t_0,t_0+T]}), \end{aligned}$$

with some constant \(D_4\), which proves the continuity of X. \(\square \)

Young Integral on Infinite Domain

Consider \(f:{\mathbb {R}}^+\rightarrow {\mathbb {R}}\) such that \(\int _a^b f(s)d\omega (s)\) exists for all \(a<b\in {\mathbb {R}}^+\). Fix \(t_0\ge 0\); we define \(\int _{t_0}^\infty f(s)d\omega (s)\) as the limit \(\displaystyle \lim _{t\rightarrow \infty }\int _{t_0}^{t}f(s)d\omega (s)\) if the limit exists and is finite. In this case,

$$\begin{aligned} \int _0^{t_0}f(s)d\omega (s) = \int _0^{\infty }f(s)d\omega (s)-\int ^\infty _{t_0}f(s)d\omega (s) \end{aligned}$$

By assumption (\({\mathbf{H }}_3\)) the sequence \(\Big \{\frac{ \left| \left| \left| \omega \right| \right| \right| _{p\mathrm{-var},[k,k+1]}}{k},\; k\ge 1 \Big \}\) is bounded.

Lemma 5.1

Consider \(G(t) = \int _0^tg(s)d\omega (s)\), where g is a function of bounded \(q-\)variation on every compact interval. If \(\chi (g(t)),\;\chi (\left| \left| \left| g\right| \right| \right| _{q\mathrm{-var},[t,t+1]})\le \lambda \in [0,+\infty )\) then

$$\begin{aligned} \chi (G(t)), \chi (\left| \left| \left| G\right| \right| \right| _{q\mathrm{-var},[t,t+1]})\le \lambda . \end{aligned}$$

Proof

Since \(\chi (g(t)),\;\chi (\left| \left| \left| g\right| \right| \right| _{q\mathrm{-var},[t,t+1]})\le \lambda \), for any \(\varepsilon >0\), there exists \(D_1=D_1(\varepsilon )\) such that \(|g(s)|\le D_1e^{(\lambda +\varepsilon /2)s}\), \(\left| \left| \left| g\right| \right| \right| _{q\mathrm{-var},[s,s+1]}\le D_1e^{(\lambda +\varepsilon /2)s}\) for all \(s>0\). Then

$$\begin{aligned} |G(t)|\le & {} \sum _{k=0}^{\lfloor t\rfloor -1}\Big |\int _k^{k+1}g(s)d\omega (s)\Big |+ \Big |\int _{\lfloor t\rfloor }^tg(s)d\omega (s)\Big | \\\le & {} K \sum _{k=0}^{\lceil t\rceil -1}\left| \left| \left| \omega \right| \right| \right| _{p\mathrm{-var},[k,k+1]} (|g(k)|+\left| \left| \left| g\right| \right| \right| _{q\mathrm{-var},[k,k+1]})\\\le & {} 2KD_1 \sum _{k=0}^{\lceil t\rceil -1}\frac{\left| \left| \left| \omega \right| \right| \right| _{p\mathrm{-var},[k,k+1]}}{k} ke^{(\lambda +\varepsilon /2)k}\\\le & {} 2KD_1 \sup _{k\ge 1}\frac{\left| \left| \left| \omega \right| \right| \right| _{p\mathrm{-var},[k,k+1]}}{k} (t+1)^2e^{t(\lambda +\varepsilon /2)}\\\le & {} D_2e^{(\lambda +\varepsilon )t}, \end{aligned}$$

where \(D_2\) is a generic constant depending on \(\varepsilon \). This yields \(\chi (G(t))\le \lambda \).

Next, fix \(t_0> 0\); then \([t_0,t_0+1]\subset [n_0,n_0+2]\) for some \(n_0\in {\mathbb {N}}\). For each \(s,t\in [t_0,t_0+1]\) we have

$$\begin{aligned} |G(t)-G(s)|\le & {} K\left| \left| \left| \omega \right| \right| \right| _{p\mathrm{-var},[s,t]} (\Vert g\Vert _{\infty ,[s,t]}+\left| \left| \left| g\right| \right| \right| _{q\mathrm{-var},[s,t]})\\\le & {} 2KD_1\left| \left| \left| \omega \right| \right| \right| _{p\mathrm{-var},[s,t]} e^{(\lambda +\varepsilon /2)(t_0+1)}. \end{aligned}$$

Hence

$$\begin{aligned}&\left| \left| \left| G\right| \right| \right| _{q\mathrm{-var},[t_0,t_0+1]} \\&\quad \le 2 KD_1 \left| \left| \left| \omega \right| \right| \right| _{p\mathrm{-var},[t_0,t_0+1]} e^{(\lambda +\varepsilon /2)(t_0+1)}\\&\quad \le 2\cdot 2^{\frac{p-1}{p}}KD_1 \frac{(\left| \left| \left| \omega \right| \right| \right| _{p\mathrm{-var},[n_0,n_0+1]}+\left| \left| \left| \omega \right| \right| \right| _{p\mathrm{-var},[n_0+1,n_0+2]})}{t_0+1}(t_0+1) e^{(\lambda +\varepsilon /2)(t_0+1)}\\&\quad \le D_2 e^{(\lambda +\varepsilon )t_0}, \end{aligned}$$

which implies \(\chi (\left| \left| \left| G\right| \right| \right| _{q\mathrm{-var},[t,t+1]})\le \lambda \). \(\square \)

Lemma 5.2

Let g be a function of bounded \(q-\)variation on every compact interval, satisfying \(\chi (g(t)),\;\chi (\left| \left| \left| g\right| \right| \right| _{q\mathrm{-var},[t,t+1]})\le -\lambda \in (-\infty ,0)\). Then the integral \(G(t) := \int _t^\infty g(s)d\omega (s)\) exists for all \(t\in {\mathbb {R}}^+\) and

$$\begin{aligned} \chi (G(t)), \chi (\left| \left| \left| G\right| \right| \right| _{q\mathrm{-var},[t,t+1]})\le -\lambda . \end{aligned}$$

Proof

For each \(\varepsilon >0\) such that \(2\varepsilon <\lambda \), there exists a constant \(D_1\) such that

$$\begin{aligned} |g(s)|\le D_1e^{(-\lambda +\varepsilon )s},\;\;\left| \left| \left| g\right| \right| \right| _{q\mathrm{-var},[s,s+1]}\le D_1e^{(-\lambda +\varepsilon )s}. \end{aligned}$$

Now fix \(t_0 \ge 0\); we first prove the existence and finiteness of \(\lim \nolimits _{\begin{array}{c} n\in {\mathbb {N}}\\ n\rightarrow \infty \end{array}}\int _{t_0}^ng(s)d\omega (s)\). For all \(n<m\in {\mathbb {N}}\) we have

$$\begin{aligned} \Big |\int _{n}^{m}g(s)d\omega (s)\Big |\le & {} \sum _{k=n}^{m-1}\Big |\int _{k}^{k+1}g(s)d\omega (s)\Big | \\\le & {} K \sum _{k=n}^{m-1} \left| \left| \left| \omega \right| \right| \right| _{p\mathrm{-var},[k,k+1]}(|g(k)|+\left| \left| \left| g\right| \right| \right| _{q\mathrm{-var},[k,k+1]})\\\le & {} 2KD_1\sum _{k=n}^{m-1} \left| \left| \left| \omega \right| \right| \right| _{p\mathrm{-var},[k,k+1]}e^{(-\lambda +\varepsilon )k}\\\le & {} 2KD_1e^{(-\lambda +2\varepsilon )n} \sup _k \frac{ \left| \left| \left| \omega \right| \right| \right| _{p\mathrm{-var},[k,k+1]}}{k}\sum _{k=0}^{\infty }ke^{-\varepsilon k} \\\le & {} D_2e^{(-\lambda +2\varepsilon )n} \end{aligned}$$

which converges to zero as \(n,m \rightarrow \infty \), since the quantities \(\frac{ \left| \left| \left| \omega \right| \right| \right| _{p\mathrm{-var},[k,k+1]}}{k}\) are bounded and the series \(\sum _{k=0}^{\infty }ke^{-\varepsilon k}\) converges. Therefore \(\lim \nolimits _{\begin{array}{c} n\in {\mathbb {N}}\\ n\rightarrow \infty \end{array}}\int _{t_0}^ng(s)d\omega (s)<\infty \). Moreover, for \(t>t_0\), by a similar estimate we have

$$\begin{aligned} \Big |\int _{t_0}^{t}g(s)d\omega (s)-\int _{t_0}^{{\lfloor t\rfloor }}g(s)d\omega (s)\Big |=\Big |\int _{{\lfloor t\rfloor }}^{t}g(s)d\omega (s)\Big | \rightarrow 0 \end{aligned}$$

as \(t\rightarrow \infty .\) This implies the existence of \(\int _{t_0}^\infty g(s)d\omega (s) \). Moreover, \(|G(t)|\le C(\varepsilon )e^{(-\lambda +2\varepsilon )t}\) which yields \(\chi (G(t))\le -\lambda \).

The second conclusion can be proved similarly to Lemma 5.1. \(\square \)

The following lemma shows that condition (\({\mathbf{H }}_4\)) is satisfied for almost all realizations \(\omega \) of a fractional Brownian motion \(B^H_t(\omega )\) (see [22] for the definition and details on fractional Brownian motions).

Lemma 5.3

Assume that \(c_0:=\Vert c\Vert _{\infty ,{\mathbb {R}}^+} < \infty \) and the integral

$$\begin{aligned} X(t,\omega ) = \int _0^t c(s)dB^H_s(\omega ) \end{aligned}$$

exists for all \(t\in {\mathbb {R}}^+\). Then \(\lim \nolimits _{\begin{array}{c} n\rightarrow \infty \\ n\in {\mathbb {N}} \end{array}} \frac{X(n,\cdot )}{n} = \lim \limits _{\begin{array}{c} n\rightarrow \infty \\ n\in {\mathbb {N}} \end{array}} \ \frac{\int _0^nc(s)dB^H_s}{n} = 0,\ \text {a.s.} \)

Proof

Fix \(T>0\), and assume that \(\pi _n\) is a sequence of partitions of [0, T] such that \(mesh(\pi _n) \rightarrow 0\) as \(n\rightarrow \infty \). Denote

$$\begin{aligned} X_n(t,\omega ) = \sum c(t_i) (B^H_{t_{i+1}}(\omega )-B^H_{t_i}(\omega )). \end{aligned}$$

Then \(X_n(t,\omega ) \rightarrow X(t,\omega )\) as \(n\rightarrow \infty \). It is evident that \(X_n\) is a Gaussian random variable with mean zero. Since

$$\begin{aligned} E(B^H_t-B^H_s)^2= & {} |t-s|^{2H}=H(2H-1)\int _s^t\int _s^t|a-b|^{2H-2}dadb\nonumber \\ E(B^H_t-B^H_s)(B^H_u-B^H_v)= & {} \frac{1}{2} \left[ |s-u|^{2H}+|t-v|^{2H}-|t-u|^{2H}-|s-v|^{2H}\right] \nonumber \\= & {} (2H-1)H\int _v^u\int _s^t|a-b|^{2H-2}dadb \end{aligned}$$
(5.6)

for all \(v<u\le s<t\) (see [22, pp. 7–8]), we have

$$\begin{aligned} V(X_n)= EX_n^2= & {} \sum _{i,j=1}^nc(t_i)c(t_j) E(\Delta B^H_{t_i}\Delta B^H_{t_j}) \\= & {} (2H-1)H\sum _i c^2(t_i)\int _{t_i}^{t_{i+1}}\int _{t_i}^{t_{i+1}}|u-v|^{2H-2}dudv \\&+\, 2(2H-1)H\sum _{i<j} c(t_i)c(t_j)\int _{t_i}^{t_{i+1}}\int _{t_j}^{t_{j+1}}|u-v|^{2H-2}dudv \\= & {} D(H)\int _0^t\int _0^tc(u)c(v)|u-v|^{2H-2}dudv \le D(H)c_0^2 t^{2H}, \end{aligned}$$

where D(H) is a generic constant depending on H.
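The above computation can be checked numerically: evaluating the increment covariances with (5.6), the double sum defining \(V(X_n)\) equals \(t^{2H}\) exactly when \(c\equiv 1\) (it telescopes to \(E(B^H_t)^2\)), and for a general bounded c it is dominated by \(c_0^2t^{2H}\), since for \(H>\frac{1}{2}\) the increment covariances are nonnegative. A small illustrative sketch:

```python
import numpy as np

def increment_cov(H, a, b, c, d):
    """Cov(B_b - B_a, B_d - B_c) for fBm with Hurst index H, computed from the covariance
       R(s, t) = 0.5 * (s^{2H} + t^{2H} - |t - s|^{2H}), cf. (5.6)."""
    f = lambda x: np.abs(x)**(2 * H)
    return 0.5 * (f(b - c) + f(a - d) - f(b - d) - f(a - c))

def riemann_variance(H, c, T, n):
    """V(X_n) = sum_{i,j} c(t_i) c(t_j) Cov(dB_i, dB_j) on a uniform grid of [0, T]."""
    t = np.linspace(0.0, T, n + 1)
    ci = c(t[:-1])
    V = 0.0
    for i in range(n):
        for j in range(n):
            V += ci[i] * ci[j] * increment_cov(H, t[i], t[i + 1], t[j], t[j + 1])
    return V

H, T, n = 0.75, 2.0, 200
print(riemann_variance(H, lambda t: np.ones_like(t), T, n), T**(2 * H))  # equal: Var(B^H_T) = T^{2H}
c = lambda t: np.cos(t)                                                  # a bounded c with c_0 = 1
print(riemann_variance(H, c, T, n), "<=", T**(2 * H))                    # dominated by c_0^2 T^{2H}
```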

Since \(X_n \rightarrow X\) a.s., X(t, .) is a centered normal random variable with \(V(X(t,\cdot ))\le D(H)c^2_0t^{2H}\). It follows that \(EX(t,.)^{2k}\le D(H,k,c_0) t^{2kH}\), where \(D(H,k,c_0)\) is a constant depending on \(H,k,c_0\). Fix \(0<\varepsilon < 1-H\) and choose k large enough so that \(k(1-\varepsilon -H) \ge 1\); we then have

$$\begin{aligned} \sum _{n=1}^\infty P\left( |\frac{X(n,\cdot )}{n}| > \frac{1}{n^\varepsilon }\right)\le & {} \sum _{n=1}^\infty \frac{EX(n,\cdot )^{2k} }{n^{2k(1-\varepsilon )}}\\\le & {} D(H,k,c_0)\sum _{n=1}^\infty \frac{1}{n^{2k(1-\varepsilon -H)}}\\\le & {} D(H,k,c_0)\sum _{n=1}^\infty \frac{1}{n^2} < \infty . \end{aligned}$$

Using the Borel–Cantelli lemma, we conclude that \(\frac{X(n,\cdot )}{n} \rightarrow 0\) as \(n\rightarrow \infty \) almost surely. \(\square \)

Proof of Lemma 4.5

The ”only if” part is obvious, since it can be proved that

$$\begin{aligned} \lim \limits _{\delta \rightarrow 0} m^{[a,b]}(c,\delta )= & {} 0, \end{aligned}$$
(5.7)
$$\begin{aligned} |m^{[a,b]}(c,\delta ) - m^{[a,b]}(c^\prime ,\delta )|\le & {} \left| \left| \left| c -c^\prime \right| \right| \right| _{\alpha ,[a,b]}, \end{aligned}$$
(5.8)

which shows the continuity of m on \({\mathcal {C}}^{0,\alpha -\mathrm{Hol}}({\mathbb {R}},{\mathbb {R}}^k)\). Hence m is uniformly continuous on a compact set, which shows (4.6) and (4.7).

To be more precise, denote by \({\tilde{C}}\) the space \({\mathcal {C}}^{0,\alpha -\mathrm{Hol}}({\mathbb {R}},{\mathbb {R}}^k)\). Assume that \({\mathcal {H}}\) has a compact closure in \({\tilde{C}}\); we prove that (4.6) and (4.7) are fulfilled. For each \(n\in {\mathbb {N}}^*\), put

$$\begin{aligned} G_n = \{c\in {\tilde{C}}\; \mid |c(0)|< n\}. \end{aligned}$$

Then \(G_n\) is open in \({\tilde{C}}\).

Since \({\overline{{\mathcal {H}}}}\) is compact, \({\overline{{\mathcal {H}}}} \subset \bigcup _{n=1}^{\infty }G_n\) and \(G_n\) is an increasing sequence of open sets, there exists \(n_0\) such that \({\mathcal {H}}\subset G_{n_0}\), which proves (4.6).

To prove (4.7), first note that for each \(c\in {\tilde{C}}\) and \([a,b] \subset {\mathbb {R}}\), \(\lim \nolimits _{\delta \rightarrow 0} m^{[a,b]}(c,\delta ) =0\) (see [12, Theorem 5.31, p. 96]). Secondly

$$\begin{aligned} |m^{[a,b]}(c,\delta ) - m^{[a,b]}(c^\prime ,\delta )| \le \left| \left| \left| c -c^\prime \right| \right| \right| _{\alpha \mathrm{-Hol},[a,b]}. \end{aligned}$$

Indeed, due to the definition of \(m^{[a,b]}(c,\delta )\), there exists for any \(\varepsilon >0\) two points \(s_0,t_0\in [a,b]\), \(0<|s_0-t_0|\le \delta \) such that

$$\begin{aligned} m^{[a,b]}(c,\delta ) \le \frac{|c(t_0)-c(s_0)|}{|t_0-s_0|^\alpha }+\varepsilon . \end{aligned}$$

On the other hand, \(m^{[a,b]}(c',\delta )\ge \frac{|c'(t_0)-c'(s_0)|}{|t_0-s_0|^\alpha }\), which yields

$$\begin{aligned} m^{[a,b]}(c,\delta ) - m^{[a,b]}(c',\delta )\le & {} \frac{|c(t_0)-c(s_0)|- | c'(t_0)-c'(s_0) |}{|t_0-s_0|^\alpha }+\varepsilon \\\le & {} \frac{|c(t_0)-c(s_0) -c'(t_0)+c'(s_0) |}{|t_0-s_0|^\alpha }+\varepsilon \\\le & {} \left| \left| \left| c-c'\right| \right| \right| _{\alpha \mathrm{-Hol},[a,b]}+\varepsilon \end{aligned}$$

Exchanging the role of c and \(c'\) we obtain

$$\begin{aligned} |m^{[a,b]}(c,\delta ) - m^{[a,b]}(c',\delta )| \le \left| \left| \left| c-c'\right| \right| \right| _{\alpha \mathrm{-Hol},[a,b]} \end{aligned}$$

since \(\varepsilon \) is arbitrary.

We now prove the continuity of the map

$$\begin{aligned} m^{[a,b]}(\cdot \;,\delta ): ({\tilde{C}},d)\rightarrow {\mathbb {R}}. \end{aligned}$$

In fact, fix \([-n,n]\) containing [a, b]. For each \(c_0\in {\tilde{C}}\) and \(\varepsilon \in (0,1)\), choose \(\eta =\varepsilon /2^n\). If \(d(c,c_0)<\eta \) we have \(\Vert c-c_0\Vert _{\alpha ,[-n,n]}\wedge 1\le 2^nd(c,c_0)< \varepsilon \). Therefore

$$\begin{aligned} |m^{[a,b]}(c,\delta ) - m^{[a,b]}(c_0,\delta )|\le \left| \left| \left| c-c_0\right| \right| \right| _{\alpha ,[-n,n]}\le \varepsilon . \end{aligned}$$

Next, fix \(\varepsilon >0\) and define the set

$$\begin{aligned} K_{\delta }:=\{c\in \overline{{\mathcal {H}}}\;\mid \; m^{[a,b]}(c,\delta ) \ge \varepsilon \}. \end{aligned}$$

Then \(K_{\delta }\) is closed, hence compact, for all \(\delta \), and \(K_{\delta }\subset K_{\delta ^\prime }\) whenever \(\delta \le \delta ^\prime \). Due to the fact that \(\lim \nolimits _{\delta \rightarrow 0} m^{[a,b]}(c,\delta ) =0\) for all \(c\in {\tilde{C}}\), we have \(\displaystyle \bigcap _{\delta >0}K_{\delta }=\emptyset \). By the finite intersection property of compact sets, there exists \(\delta =\delta (\varepsilon )>0\) such that \(K_{\delta }=\emptyset \), which proves (4.7).

For the ”if” part, assume that (4.6) and (4.7) hold; we are going to prove the compactness of \({\bar{{\mathcal {H}}}}\). Since \({\tilde{C}}\) is a complete metric space, it suffices to prove that every sequence \(\{c_n\}_{n=1}^\infty \subset {\mathcal {H}}\) has a convergent subsequence. Following the arguments of [15, Theorem 4.9, p. 63] line by line, we can construct by a ”diagonal sequence” argument a subsequence \(\{{\tilde{c}}_n\}_{n=1}^\infty \) such that \({\tilde{c}}_n(r) \rightarrow c(r)\) as \(n\rightarrow \infty \) for any rational number \(r \in {\mathbb {Q}}\). With (4.6) and (4.7), \({\mathcal {H}}\) satisfies the condition in [15, Theorem 4.9, p. 63], hence \({\tilde{c}}_n\) converges uniformly to a continuous function c on every \([a,b]\subset {\mathbb {R}}\).

Fix [ab], by (4.7) for each \(\varepsilon >0\) there exist \(\delta _0>0\) such that if \(\delta \le \delta _0\), \(\sup \nolimits _{\begin{array}{c} s,t\in [a,b] \\ |s-t|\le \delta \end{array}}\frac{|{\tilde{c}}_n(t)-{\tilde{c}}_n(s)|}{|t-s|^{\alpha }}\le \varepsilon \) for all n. Hence

$$\begin{aligned} \sup _{\begin{array}{c} s,t\in [a,b] \\ |s-t|\le \delta \end{array}}\frac{|c(t)-c(s)|}{|t-s|^{\alpha }}\le \varepsilon , \end{aligned}$$

thus \(c\in {\tilde{C}}\). Finally, we prove that \({\tilde{c}}_n\) converges to c in the Hölder seminorm on every compact interval [a, b]. Namely, with \(\varepsilon , \delta _0\) given, there exists \(n_0\) such that for all \(n\ge n_0\), \(\Vert {\tilde{c}}_n-c\Vert _{\infty ,[a,b]}\le \delta ^{\alpha }_0\varepsilon \). Then for \(n\ge n_0\)

$$\begin{aligned} \sup _{s,t\in [a,b]}\frac{|({\tilde{c}}_n-c)(t)-({\tilde{c}}_n-c)(s)|}{|t-s|^\alpha }\le & {} \sup _{\begin{array}{c} s,t\in [a,b] \\ |t-s|\le \delta _0 \end{array}}\frac{|({\tilde{c}}_n-c)(t)-({\tilde{c}}_n-c)(s)|}{|t-s|^\alpha }\\&+\,\sup _{\begin{array}{c} s,t\in [a,b] \\ |t-s|\ge \delta _0 \end{array}}\frac{|({\tilde{c}}_n-c)(t)-({\tilde{c}}_n-c)(s)|}{|t-s|^\alpha }\\\le & {} m^{[a,b]}({\tilde{c}}_n,\delta _0)+m^{[a,b]}(c,\delta _0)+ \frac{2}{\delta ^{\alpha }_0}\Vert {\tilde{c}}_n-c\Vert _{\infty ,[a,b]}\\\le & {} 4\varepsilon , \end{aligned}$$

which implies that \(\left| \left| \left| {\tilde{c}}_n-c\right| \right| \right| _{\alpha \mathrm{-Hol},[a,b]}\) converges to 0 as \(n\rightarrow \infty \). This completes the proof. \(\square \)

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Cong, N.D., Duc, L.H. & Hong, P.T. Lyapunov Spectrum of Nonautonomous Linear Young Differential Equations. J Dyn Diff Equat 32, 1749–1777 (2020). https://doi.org/10.1007/s10884-019-09780-z


Keywords

  • Young differential equation
  • Two parameter flow
  • Lyapunov exponent
  • Lyapunov spectrum
  • Lyapunov regularity
  • Multiplicative ergodic theorem
  • Bebutov flow