1 Introduction

Periodic behavior is ubiquitous in the natural sciences and in engineering. Accordingly, many mathematical models of dynamical systems, usually given by ordinary differential equations (ODEs), are characterized by the existence of attracting periodic orbits, also called limit cycles. Interpreting the limit cycle as a “clock” for the system, one can ask which parts of the state space can be associated with which “time” on the clock.

It turns out that one can generally divide the state space into sections, called isochrons, intersecting the asymptotically stable periodic orbit. Trajectories starting on a particular isochron all converge to the trajectory starting at the intersection of the isochron and the limit cycle. Hence, each point in the basin of attraction of the limit cycle can be allocated a time on the periodic orbit, by belonging to a particular isochron. Isochrons can then be characterized as the sections intersecting the limit cycle, such that the return time under the flow to the same section always equals the period of the attracting orbit and, hence, the return time is the same for all isochrons. The analysis of ODEs provides additional characterizations of isochrons, involving, for example, an isochron map or eigenfunctions of associated operators.

Clearly, mathematical models are simplifications which often leave out parameters and details of the described physical or biological system. Hence, a large number of degrees of freedom is inherent in the modeling. The introduction of random noise is often a suitable way to integrate such non-specified components into the model such that, for example, an ODE becomes a stochastic differential equation (SDE). Examples of stochastic oscillators/oscillations can be found in a wide variety of applications, such as neuroscience [4, 12, 31, 43], ecology [37, 39], bio-mechanics [25, 35], and geoscience [6, 33], among many others. In addition, stochastic oscillations have recently become a very active research topic in the rigorous theory of stochastic dynamical systems with small noise [3, 7, 8, 26].

Lately, there has been a lively discussion [34, 45] in the mathematical physics community about how to extend the definition and analysis of isochrons to the stochastic setting. As pointed out above, there are several different characterizations in the deterministic case inspiring analogous stochastic approaches. So far, there are two main approaches to define stochastic isochrons in the physics literature, both focused on stochastic differential equations. One approach, due to Thomas and Lindner [44], focuses on eigenfunctions of the associated infinitesimal generator \({\mathcal {L}}\). The other one is due to Schwabedal and Pikovsky [41], who introduce isochrons for noisy systems as sections \(W^\mathbb {E}(x)\) with the mean first return time to the same section \(W^\mathbb {E}(x)\) being a constant \({\bar{T}}\), equaling the average oscillation period. Cao, Lindner and Thomas [13] have used the Andronov–Vitt–Pontryagin formula, involving the backward Kolmogorov operator \({\mathcal {L}}\), with appropriate boundary conditions to establish the isochron functions for \(W^\mathbb {E}(x)\) more rigorously.

These approaches have in common that they focus on the “macroscopic” or “coarse-grained” level by considering averaged objects and associated operators. We complement the existing suggestions by a new approach within the theory of random dynamical systems (see e.g. [1]) which has proven to give a framework for translating many deterministic dynamical concepts into the stochastic context. A random dynamical system in this sense consists of a model of the time-dependent noise, formalized as a dynamical system \(\theta \) on the probability space, and a model of the dynamics on the state space, formalized as a cocycle \(\varphi \) over \(\theta \). This point of view considers the asymptotic behaviour of typical trajectories. As trajectories of random dynamical systems depend on the noise realization, individual trajectories cannot be expected to converge to a fixed attractor. The forward-in-time evolution of sets under the same noise realization yields the random forward attractor A, which is a time-dependent object with fibers \(A(\theta _t \omega )\). An alternative viewpoint is to consider, for a fixed noise realization \(\omega \in \Omega \), the flow of a set of initial conditions from time \(t=-T\) to a fixed endpoint in time, say \(t=0\), and then take the (pullback) limit \(T\rightarrow \infty \). If trajectories of initial conditions converge under this procedure to fibers \(\tilde{A}(\omega )\) of some random set \({\tilde{A}}\), then this set is called a random pullback attractor.

In this paper, we will mainly consider situations where the random dynamical system is induced by an SDE and there exists a random (forward and/or pullback) attractor A which is topologically equivalent to a cycle for each noise realization, i.e. an attracting random cycle, whose existence can be made generic in a suitably localized setup around a deterministic limit cycle. We will extend the definition of a random periodic solution \(\psi \) [46] living on such a random attractor to situations where the period is random, giving a pair \((\psi , T)\). Isochrons can then be defined as random stable manifolds \(W^{{\text {f}}}(\omega , x)\) for points x on the attracting random cycle \(A(\omega )\), in particular for random periodic solutions. We usually consider situations with a spectrum of exponential asymptotic growth rates, the Lyapunov exponents \(\lambda _1> \lambda _2> \dots > \lambda _p\), which allows us to transfer the idea of hyperbolicity to the random context. Additionally, we can introduce a time-dependent random isochron map \({\tilde{\phi }}\) such that the isochrons are level sets of this map. Hence, on a pathwise level, we achieve a complete generalization of deterministic to random isochronicity, which is the key contribution of this work. The main results can be summarized in the following theorem:

Theorem A

Assume the random dynamical system \((\theta , \varphi )\) on \(\mathbb {R}^m\) has a hyperbolic random limit cycle, supporting a random periodic solution with possibly noise-dependent period. Then, under appropriate assumptions on smoothness and boundedness,

  1. The random forward isochrons are smooth invariant random manifolds which foliate the stable neighbourhood of the random limit cycle on each noise fibre,

  2. There exists a smooth and measurable (non-autonomous) random isochron map \({\tilde{\phi }}\) whose level sets are the random isochrons and whose time derivative along the random flow is constant.

The remainder of the paper is structured as follows. Section 2 gives an introduction to the deterministic theory of isochrons, summarizing the main properties that we can then transform into the random dynamical systems setting. The latter is discussed in Sect. 3, where we elucidate the notions of Lyapunov exponents, random attractors and, specifically, random limit cycles and their existence. Section 4 establishes the two main statements, contained in Theorem A: in Sect. 4.1, we show Theorem 2, summarizing different scenarios in which random isochrons are random stable manifolds foliating the neighbourhoods of the random limit cycle. In Sect. 4.2, we prove Theorem 3, generalizing characteristic properties of the isochron map to the random case. We conclude Sect. 4 with an elaboration on the relationship between expected quantities of the RDS approach and the definition of stochastic isochrons via mean first return times, i.e., one of the main physics approaches. Additionally, the paper contains a brief conclusion with outlook, and an appendix with some background on random dynamical systems.

2 The Deterministic Case

The basic facts about isochrons have been established in [27]. Here we summarize some facts restricted to the state space \({\mathcal {X}}= \mathbb {R}^m\) but the theory easily lifts to ordinary differential equations (ODEs) on smooth manifolds \({\mathcal {M}}={\mathcal {X}}\). Consider an ODE

$$\begin{aligned} x'=f(x),\qquad x(0)=x_0 \in \mathbb {R}^m, \end{aligned}$$
(2.1)

where f is \(C^k\) for \(k\ge 1\). Let \(\Phi (x_0,t)=x(t)\) be the flow associated to (2.1) and suppose \(\gamma =\{\gamma (t)\}_{t\in [0,\tau _\gamma ]}\) is a hyperbolic periodic orbit with minimal period \(\tau _\gamma >0\). A cross-section \({\mathcal {N}}\subset \mathbb {R}^m\) at \(x\in \gamma \) is a submanifold such that \(x\in {\mathcal {N}}\), \(\bar{{\mathcal {N}}}\cap \gamma =\{x\}\), and

$$\begin{aligned} {\text {T}}_x{\mathcal {N}}\oplus {\text {T}}_x \gamma = {\text {T}}_x\mathbb {R}^m\simeq \mathbb {R}^m, \end{aligned}$$

i.e. the submanifold \({\mathcal {N}}\) and the orbit \(\gamma \) intersect transversally.

Let \(g:{\mathcal {N}}\rightarrow {\mathcal {N}}\) be the Poincaré map defined by the first return of \(y\in {\mathcal {N}}\) under the flow \(\Phi \) to \({\mathcal {N}}\) (see Fig. 1); locally near any point \(x\in \gamma \) the map g is well-defined. For simplicity (and looking forward to the noisy case) let us assume that \(\gamma \) is a stable hyperbolic periodic orbit, i.e. the eigenvalues \(\mu _i\) of \({\text {D}}g(x)\), also called characteristic multipliers, satisfy \(\mu _1 = 1\) and \(\left| \mu _2\right| , \dots , \left| \mu _m\right| < 1\), counting multiplicities. The numbers

$$\begin{aligned} \lambda _i = \frac{1}{\tau _\gamma }\ln \mu _i \end{aligned}$$

are called the characteristic exponents (for more background on the stability of linear non-autonomous systems and associated Floquet theory see e.g. [14, Chapter 2.4]). We call such a stable hyperbolic periodic orbit a stable (hyperbolic) limit cycle since there is a neighbourhood \({\mathcal {U}}\) of \(\gamma \) such that for \(y \in {\mathcal {U}}\) we have \({\text {d}}(\Phi (y,t),\gamma ) \rightarrow 0\), as \(t \rightarrow \infty \), where \({\text {d}}\) is the Euclidean metric on \(\mathbb {R}^m\). In particular, note that there is a lower bound on the speed of exponential convergence to the limit cycle, given by

$$\begin{aligned} \lambda := \min _{i: \lambda _i \ne 0} \mathfrak {R}(-\lambda _i) > 0. \end{aligned}$$

We give a definition of isochrons as stable sets and then establish their equivalence to level sets of a specific map. We further find these level sets to be cross-sections to \(\gamma \) for which the time of first return is identical to the period \(\tau _{\gamma }\), explaining the name isochrons.

Definition 1

The isochron W(x) of a point \(x\in \gamma \) on a hyperbolic limit cycle is given by its stable set

$$\begin{aligned} W(x) :=\left\{ y\in \mathbb {R}^m:\lim _{t\rightarrow +\infty }{\text {d}}(\Phi (x,t),\Phi (y,t))=0\right\} . \end{aligned}$$
(2.2)

In particular, due to hyperbolicity, we have that for every \( {\tilde{\lambda }} \in (0, \lambda )\)

$$\begin{aligned} W(x) =\left\{ y\in \mathbb {R}^m:\sup _{t \ge 0}{\text {e}}^{{\tilde{\lambda }} t} \, {\text {d}}(\Phi (x,t),\Phi (y,t)) < \infty \right\} . \end{aligned}$$
(2.3)

It is by now classical that stable sets are manifolds and for each \(x\in \gamma \), we get a stable manifold \(W^{\text {s}}(x)\) diffeomorphic to \(\mathbb {R}^{m-1}\), precisely coinciding with the isochron W(x). We can foliate a neighbourhood \({\mathcal {U}}\) of \(\gamma \) by the manifolds W(x) and these manifolds are permuted by the flow since

$$\begin{aligned} W(\Phi (x,t))=\Phi (W(x),t)\quad \forall t\in \mathbb {R}. \end{aligned}$$
(2.4)

We summarize these crucial observations in the following theorem.

Theorem 1

(Theorem A in [27], Theorem 2.1 in [26]). Consider the flow \(\Phi : \mathbb {R}^m \times \mathbb {R} \rightarrow \mathbb {R}^m\) for the ODE (2.1) with hyperbolic stable limit cycle \(\gamma =\{\gamma (t)\}_{t\in [0,\tau _\gamma ]}\). Then the following holds:

  1. For each \(x \in \gamma \), the isochron W(x) is an \((m-1)\)-dimensional manifold transverse to \(\gamma \), in particular it is a cross-section of \(\gamma \), of the same regularity as the vector field f in the ODE (2.1) (i.e. \(C^k\) if f is \(C^k\)).

  2. The stable manifold \(W^{\text {s}}(\gamma )\) contains a full neighbourhood of \(\gamma \) and can be written as

    $$\begin{aligned} W^{\text {s}}(\gamma )=\bigcup _{x\in \gamma } W(x), \end{aligned}$$

    where the union of isochrons is disjoint.

  3. The map \(\xi : W^{\text {s}}(\gamma ) \rightarrow \mathbb {R} \mod \tau _{\gamma } \), also called the isochron map, is given for every \(y \in W^{\text {s}}(\gamma )\) as the unique t such that \(y \in W(\gamma (t))\), i.e.

    $$\begin{aligned} \lim _{s\rightarrow +\infty }{\text {d}}(\Phi (\gamma (\xi (y)),s),\Phi (y,s))=\lim _{s\rightarrow +\infty }{\text {d}}(\gamma (s + \xi (y)),\Phi (y,s))=0 \,, \end{aligned}$$
    (2.5)

    and \(\xi \) is also \(C^k\).

Using the properties established in Theorem 1, we can derive the following well-known characterizations of the isochrons W(x), \(x \in \gamma \), and of the isochron map \(\xi \).

Proposition 1

Assume that we are in the situation of Theorem 1. We have that

  1. For each \(x \in \gamma \), the isochron W(x) is precisely the level set of \(\xi (x)\), i.e.

    $$\begin{aligned} W(x)=\left\{ y\in W^{\text {s}}(\gamma ): \ \xi (y) = \xi (x)\right\} \,, \end{aligned}$$
    (2.6)
  2. The isochron map \(\xi : W^{\text {s}}(\gamma ) \rightarrow \mathbb {R} \mod \tau _{\gamma } \) satisfies

    $$\begin{aligned} \frac{{\text {d}}}{{\text {d}}t} \xi (\Phi (y,t)) = 1 \ \text { for all } t \ge 0, \, y \in W^{\text {s}}(\gamma )\,, \end{aligned}$$
    (2.7)
  3. The isochron W(x) is the cross-section \({\mathcal {N}}_x\) at x such that

    $$\begin{aligned} \Phi ({\mathcal {N}}_x,\tau _\gamma )\subseteq {\mathcal {N}}_x, \end{aligned}$$
    (2.8)

    i.e. the cross-section on which all starting points return in the same time \(\tau _{\gamma }\).

Proof

The first statement follows from the fact that \(\gamma (\xi (x)) = x\) for all \(x \in \gamma \) and Eq. (2.5) in combination with the definition of W(x): in more detail, we have \(y \in W(x)\) if and only if \(\lim _{s \rightarrow \infty } d(\Phi (x,s), \Phi (y,s)) = 0\) which is equivalent to \(\lim _{s \rightarrow \infty } d(\gamma (s + \xi (x)), \Phi (y,s)) = 0\) which holds if and only if \(\xi (x) = \xi (y)\).

The second statement can be deduced from the invariance property \(\Phi (\cdot , t) W(x) = W(\Phi (x,t))\) for any \(x \in \gamma \) since it implies for \(y \in W(x)\), i.e. \(\xi (y) = \xi (x)\), that

$$\begin{aligned} \xi (\Phi (y,t)) = \xi (\Phi (x,t)) = \xi (x) + t \mod \tau _{\gamma } = \xi (y) + t \mod \tau _{\gamma }, \end{aligned}$$

which is equivalent to the claim, as differentiating \(\xi (\Phi (y,t)) = \xi (y) + t \mod \tau _{\gamma }\) with respect to t yields exactly (2.7).

The third statement can be easily derived from the fact that for all \(y \in W^{\text {s}}(\gamma )\)

$$\begin{aligned} \lim _{t\rightarrow +\infty }{\text {d}}(\gamma (t + \xi (y)),\Phi (\Phi (y,\tau _{\gamma }),t))&= \lim _{t\rightarrow +\infty }{\text {d}}(\gamma (t + \xi (y)),\Phi (y,t+ \tau _{\gamma }))\\&=\lim _{s\rightarrow +\infty }{\text {d}}(\gamma (s - \tau _{\gamma } + \xi (y)),\Phi (y,s))\\&= \lim _{s\rightarrow +\infty }{\text {d}}(\gamma (s + \xi (y)),\Phi (y,s)) = 0\,. \end{aligned}$$

This finishes the proof.

Summarizing, we can view isochrons W(x) as stable manifolds of points on the limit cycle. The sets W(x) are uniquely defined, have codimension one, and locally foliate neighbourhoods of the limit cycle. They can also be characterized, and computed, as level sets of a specific isochron map whose total derivative along the flow equals 1, or as cross-sections with fixed return time \(\tau _{\gamma }\) under the flow. In the course of this article, we will transfer all the discussed properties to the random case.

Guckenheimer [27] tackles additional questions regarding the boundary of \(W^{\text {s}}(\gamma )\). These questions concern global properties of isochrons. Since we want to first understand a neighbourhood \({\mathcal {U}}\) of \(\gamma \) in the stochastic setting, we skip these problems here. With this in mind, we consider an adjustment of the main planar example in [27] which does not involve the boundary of \(W^{\text {s}}(\gamma )\). The example is simple but illuminating and already contains the main aspects of the difficulties in extending isochronicity to the stochastic context, as we will see later.

Example 1

Consider the ODE

$$\begin{aligned} \begin{array}{lcl} \vartheta ' &{}=&{} h(r),\\ r' &{}=&{} r(r_1^2-r^2), \end{array} \end{aligned}$$
(2.9)

in polar coordinates \((\vartheta ,r)\in [0,2\pi )\times (0,+\infty )\), where \(r_1 > 0\) is fixed, \(h(r)\ge K > 0\) for some constant K, and h is smooth, such that there is always the periodic orbit \(\gamma =\{r=r_1\}\). If \(h(r)\equiv 1\), then one easily checks that the isochrons of \(\gamma \) are (see Fig. 1a)

$$\begin{aligned} W((\vartheta _*,r_*))=\{(\vartheta ,r):r \in (0,\infty ),\vartheta =\vartheta _*\}. \end{aligned}$$
(2.10)

However, if we consider h such that \(h'(r_1)\ne 0\), then the isochrons bend into curves instead of being straight radial rays. Indeed, the periodic orbit has period \(\tau _{\gamma } = 2\pi /h(r_1)\) but the return time to the same \(\vartheta \)-coordinate changes near \(\gamma \) (see Fig. 1b).

Fig. 1

Sketch of isochrons for the limit cycle \(\gamma \) in Example 1, with \(h \equiv 1\) (a), where the isochrons are simply given by Eq. (2.10), and with \(h'(r_1) \ne 0\) (b), where the isochrons are curved
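The asymptotic phase underlying this picture is easy to approximate numerically. The following minimal sketch (our own illustration, not taken from [27]; the concrete choice of h and all function names are hypothetical) integrates (2.9) for a long time and subtracts the phase accumulated on the limit cycle itself, so that two initial conditions lie on the same isochron exactly when the computed values agree.

```python
# Minimal sketch: asymptotic phase for the planar system (2.9).
# Since r(t) -> r_1 exponentially, theta(t) - h(r_1)*t converges, and its
# limit (mod 2*pi) is the asymptotic phase; isochrons are its level sets.
import numpy as np
from scipy.integrate import solve_ivp

r1 = 1.0
h = lambda r: 1.0 + 0.5 * (r - r1)     # h'(r1) != 0: curved isochrons (Fig. 1b)

def rhs(t, u):
    theta, r = u
    return [h(r), r * (r1**2 - r**2)]

def asymptotic_phase(theta0, r0, T=30.0):
    sol = solve_ivp(rhs, [0.0, T], [theta0, r0], rtol=1e-10, atol=1e-10)
    # subtract the phase accumulated on the limit cycle itself
    return (sol.y[0, -1] - h(r1) * T) % (2 * np.pi)

# With h = const the value is just theta0, recovering the radial rays (2.10);
# with h'(r1) != 0 it depends on r0, i.e. the isochrons bend.
print(asymptotic_phase(0.0, 1.0), asymptotic_phase(0.0, 0.5))
```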

Our considerations indicate that, in order to find isochrons in the stochastic case, a first approach is to consider “stable manifolds” also for this situation. The most suitable framework for this approach turns out to be the one of random dynamical systems (RDS).

3 Stochastically Driven Limit Cycles in the Framework of Random Dynamical Systems

In the following, we develop a theory of isochrons within the framework of random dynamical systems. A continuous-time random dynamical system on a topological state space \({\mathcal {X}}\) consists of

  (i) A model of the noise on a probability space \((\Omega , \mathcal {F}, \mathbb {P})\), formalized as a measurable flow \((\theta _t)_{t \in \mathbb {R}}\) of \({\mathbb {P}}\)-preserving transformations \(\theta _t: \Omega \rightarrow \Omega \),

  (ii) A model of the dynamics on \({\mathcal {X}}\) perturbed by noise, formalized as a cocycle \(\varphi \) over \(\theta \).

This setting is very helpful to understand properties of dynamical systems under the influence of stochastic noise. In technical detail, the definition of a random dynamical system is given as follows [1, Definition 1.1.2].

Definition 2

(Random dynamical system). Let \((\Omega ,{\mathcal {F}},{\mathbb {P}})\) be a probability space and \({\mathcal {X}}\) be a topological space. A random dynamical system (RDS) is a pair of mappings \((\theta , \varphi )\).

  • The (\({\mathcal {B}}(\mathbb {R})\otimes {\mathcal {F}}\), \({\mathcal {F}}\))-measurable mapping \(\theta : \mathbb {R}\times \Omega \rightarrow \Omega \), \((t,\omega )\mapsto \theta _t\omega \), is a metric dynamical system, i.e.

    (i) \(\theta _0={{\,\mathrm{id}\,}}\) and \(\theta _{t+s}=\theta _t\circ \theta _s\) for \(t,s\in {\mathbb {R}}\),

    (ii) \({\mathbb {P}} (A) = {\mathbb {P}}(\theta _t A)\) for all \(A\in {\mathcal {F}}\) and \(t\in {\mathbb {R}}\).

  • The (\(\mathcal {B}(\mathbb {R})\otimes \mathcal {F} \otimes \mathcal {B}({\mathcal {X}})\), \(\mathcal {B}({\mathcal {X}})\))-measurable mapping \(\varphi : \mathbb {R} \times \Omega \times {\mathcal {X}}\rightarrow {\mathcal {X}}, (t, \omega , x) \mapsto \varphi (t, \omega , x)\), is a cocycle over \(\theta \), i.e.

    $$\begin{aligned} \varphi (0, \omega , \cdot ) = {{\,\mathrm{id}\,}}\quad \text {and} \quad \varphi (t+s, \omega , \cdot ) = \varphi (t, \theta _s \omega , \varphi (s, \omega , \cdot )) \quad \text { for all } \omega \in \Omega \text { and } t, s \in \mathbb {R}\,. \end{aligned}$$

The random dynamical system \((\theta ,\varphi )\) is called continuous if \((t, x) \mapsto \varphi (t, \omega ,x)\) is continuous for every \(\omega \in \Omega \). We still speak of a random dynamical system if its cocycle is only defined in forward time, i.e. if the mapping \(\varphi \) is only defined on \(\mathbb {R}_0^+ \times \Omega \times {\mathcal {X}}\). We will point this out whenever it is the case.

In the following, the metric dynamical system \((\theta _t)_{t \in \mathbb {R}}\) is often even ergodic, i.e. any \(A\in {\mathcal {F}}\) with \(\theta _t^{-1}A =A\) for all \(t\in {\mathbb {R}}\) satisfies \(\mathbb P(A)\in \{0,1\}\). Note that we define \(\theta \) in two-sided time whereas \(\varphi \) can be restricted to one-sided time. This is motivated by the fact that a large part of this article will deal with random dynamical systems generated by stochastic differential equations (SDEs). Hence, we are interested in random dynamical systems adapted to a suitable filtration and of white noise type (see “Appendix A.1”). In this context, we can understand \(\varphi \) as the “stochastic flow” induced by solving the corresponding SDE and \(\theta _t\) as a time shift on the canonical space \(\Omega \) of all continuous paths starting at 0, equipped with the Wiener measure.

Additionally note that the RDS generates a skew product flow, i.e. a family of maps \((\Theta _t)_{t \in \mathbb {T}}\), \({\mathbb {T}} \in \{ \mathbb {R}, \mathbb {R}_0^+\}\), from \(\Omega \times {\mathcal {X}}\) to itself such that for all \(t \in {\mathbb {T}}\) and \(\omega \in \Omega , x \in {\mathcal {X}}\)

$$\begin{aligned} \Theta _t(\omega , x) = (\theta _t \omega , \varphi (t, \omega ,x))\,. \end{aligned}$$
(3.1)
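For the SDE-driven systems considered below, the cocycle property can be checked very concretely in simulations: fixing a discretized noise path as \(\omega \) and implementing the shift \(\theta _s\) as discarding the first increments of that path, the solution map of an Euler–Maruyama scheme satisfies the cocycle identity exactly. The following sketch is our own illustration; the scalar test equation and all names are chosen ad hoc.

```python
# Sketch of the cocycle property phi(t+s, omega, x) = phi(t, theta_s omega,
# phi(s, omega, x)): omega is a fixed array of Brownian increments and
# theta_s discards the first s/dt of them.
import numpy as np

rng = np.random.default_rng(0)
dt, n = 1e-3, 4000
dW = rng.normal(0.0, np.sqrt(dt), n)     # one fixed noise realization omega

def phi(steps, dW, x, sigma=0.2):
    """Euler-Maruyama solution map of dX = (X - X^3) dt + sigma * X dW."""
    for k in range(steps):
        x = x + (x - x**3) * dt + sigma * x * dW[k]
    return x

s_steps, t_steps, x0 = 1500, 2000, 0.3
lhs = phi(s_steps + t_steps, dW, x0)                    # phi(t+s, omega, x0)
rhs = phi(t_steps, dW[s_steps:], phi(s_steps, dW, x0))  # phi(t, theta_s omega, .)
print(lhs, rhs)   # identical: the same increments are applied in the same order
```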

3.1 Differentiability and Lyapunov exponents

The random dynamical system \((\theta , \varphi )\) is called \(C^k\) if \(\varphi (t, \omega , \cdot ) \in C^k\) for all \(t\in {\mathbb {T}}\) and \(\omega \in \Omega \), where again \( {\mathbb {T}} \in \{ \mathbb {R}, \mathbb {R}_0^+\}\). As in the deterministic case, let us assume that the state space is \({\mathcal {X}}= \mathbb {R}^m\) (the following can also be extended to smooth m-dimensional manifolds as in “Appendix A.1”) and that \((\theta , \varphi )\) is \(C^1\). The linearization or derivative \(\mathrm {D}\varphi (t,\omega ,x)\) of \(\varphi (t,\omega ,\cdot )\) at \(x \in \mathbb {R}^m\) is the Jacobian \(m\times m\) matrix

$$\begin{aligned} \mathrm {D}\varphi (t,\omega ,x) = \frac{\partial \varphi (t, \omega ,x)}{\partial x}\,. \end{aligned}$$

Differentiating the equation

$$\begin{aligned} \varphi (t+s,\omega ,x) = \varphi (t, \theta _s \omega , \varphi (s,\omega ,x)) \end{aligned}$$

on both sides and applying the chain rule to the right hand side yields

$$\begin{aligned} \mathrm {D}\varphi (t+s,\omega ,x) = \mathrm {D}\varphi (t, \theta _s \omega , \varphi (s,\omega ,x))\mathrm {D}\varphi (s,\omega ,x) = \mathrm {D}\varphi (t, \Theta _s(\omega ,x))\mathrm {D}\varphi (s,\omega ,x)\,, \end{aligned}$$

i.e. the cocycle property of the fiberwise mappings with respect to the skew product maps \((\Theta _t)_{t \in {\mathbb {T}}}\) (see Eq. (3.1)). Let us further assume that the random dynamical system possesses an invariant measure \(\mu \) (see “Appendix A.1”). This implies that \((\Theta ,\mathrm {D}\varphi )\) is a random dynamical system with linear cocycle \(\mathrm {D}\varphi \) over the metric dynamical system \((\Omega \times \mathbb {R}^m, {\mathcal {F}} \otimes \mathcal {B}(\mathbb {R}^m), \mu , (\Theta _t)_{t \in {\mathbb {T}}})\), see e.g. [1, Proposition 4.2.1].

The main models in this article are stochastic differential equations in Stratonovich form

$$\begin{aligned} \mathrm {d}X_t = b(X_t) \mathrm {d}t + \sum _{i=1}^n \sigma _i(X_t) \circ \mathrm {d}W_t^i,\qquad X_0=x \in \mathbb {R}^m, \end{aligned}$$
(3.2)

where \(W_t^i\) are independent real-valued Brownian motions, b is a \(C^k\) vector field, \(k \ge 1\), and \(\sigma _1, \dots , \sigma _n\) are \(C^{k+1}\) vector fields satisfying bounded growth conditions, such as (global) Lipschitz continuity, in all derivatives, to guarantee the existence of a (global) random dynamical system for \(\varphi \) and \(\mathrm {D}\varphi \). We write the equation in Stratonovich form since, whenever differentiation is concerned, the classical rules of calculus are preserved. We can apply the conversion formula to the Itô integral to obtain the situation of (A.1). According to [2], the derivative \(\mathrm {D}\varphi (t,\omega ,x)\) applied to an initial condition \(v_0 \in \mathbb {R}^m\) uniquely solves the variational equation given by

$$\begin{aligned} \mathrm {d}v = \mathrm {D}b(\varphi (t,\omega )x) v \,\mathrm {d}t + \sum _{i=1}^n \mathrm {D}\sigma _i(\varphi (t,\omega )x) v \circ \mathrm {d}W_t^i \,, \quad \text {where } v \in \mathbb {R}^m\,. \end{aligned}$$
(3.3)

The hyperbolicity of such a differentiable RDS with ergodic invariant measure \(\mu \) and random cycle A is expressed via its Lyapunov spectrum, whose existence is guaranteed by the Multiplicative Ergodic Theorem (MET) (see Theorem A.3 in “Appendix A.1”) under the integrability assumption

$$\begin{aligned} \sup _{0 \le t \le 1} \log ^+ \Vert \mathrm {D}\varphi (t, \omega , \cdot ) \Vert \in L^1(\mu ), \end{aligned}$$
(3.4)

where \(\Vert \mathrm {D}\varphi (t, \omega , \cdot ) \Vert \) denotes the operator norm of the Jacobian as a linear operator from \({\text {T}}_x \mathbb {R}^m\) to \({\text {T}}_{\varphi (t, \omega ,x)} \mathbb {R}^m \) induced by the Euclidean norm and \( \log ^+(a) = \max \{\log (a);0\}\).

Analogously to the characteristic exponents discussed for the deterministic case in Sect. 2, the spectrum of \(p \le m\) Lyapunov exponents \(\lambda _1> \lambda _2> \dots > \lambda _p\) quantifies the asymptotic exponential separation rates of infinitesimally close trajectories.
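To illustrate how the variational equation (3.3) is used in practice, the following sketch (our own, not part of the formal development) estimates the nonzero Lyapunov exponent of the radial SDE \(\mathrm {d}r = (r - r^3)\, \mathrm {d}t + \sigma r \circ \mathrm {d}W_t\) appearing in Example 2 below. Along the linearization, \(\mathrm {d}\log |v| = (1 - 3r^2)\, \mathrm {d}t + \sigma \circ \mathrm {d}W_t\), and the martingale part vanishes in the time average, so the exponent is the time average of \(1 - 3r^2\); the angular direction carries the exponent \(\lambda _1 = 0\).

```python
# Sketch: Lyapunov exponent of dr = (r - r^3) dt + sigma * r o dW via the
# time average of 1 - 3 r^2 (the log-derivative from the variational equation).
import numpy as np

rng = np.random.default_rng(1)
sigma, dt, n = 0.5, 1e-3, 500_000
r, acc = 1.0, 0.0
for _ in range(n):
    dW = rng.normal(0.0, np.sqrt(dt))
    # Ito form of the Stratonovich SDE: the drift gains sigma^2 * r / 2
    r += (r - r**3 + 0.5 * sigma**2 * r) * dt + sigma * r * dW
    acc += (1.0 - 3.0 * r**2) * dt
print("lambda_2 estimate:", acc / (n * dt))   # -2 for sigma = 0; negative here
```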

3.2 Random attractors

Let \((\theta , \varphi )\) be a white noise random dynamical system on \(\mathbb {R}^m\). (Note that the following can be formulated more generally in complete metric spaces \(({\mathcal {X}}, d)\) but that we again restrict ourselves to the Euclidean case for reasons of clarity). Due to the non-autonomous nature of the RDS, there are no fixed attractors for dissipative systems and different notions of a random attractor exist. We introduce these related but different definitions of random attractors in the following, with respect to tempered sets. Specific random attractors, attracting random cycles, will play a crucial role in the following chapters. A random variable \(R:\Omega \rightarrow \mathbb {R}\) is called tempered if

$$\begin{aligned} \lim _{t \rightarrow \pm \infty } \frac{1}{|t|} \ln ^{+} R(\theta _t\omega )=0 \quad \text {for almost all } \,\omega \in \Omega \,, \end{aligned}$$

see also [1, p. 164]. A set \(D\in {\mathcal {F}}\otimes \mathcal B(\mathbb {R}^m)\) is called tempered if there exists a tempered random variable R such that

$$\begin{aligned} D(\omega )\subset B_{R(\omega )}(0) \quad \text {for almost all } \,\omega \in \Omega \,, \end{aligned}$$

where \(B_{R(\omega )}(0)\) denotes a ball centered at zero with radius \(R(\omega )\) and \(D(\omega ):=\{x\in \mathbb {R}^m: (\omega , x)\in D\}\). D is called compact if \(D(\omega )\subset \mathbb {R}^m\) is compact for almost all \(\omega \in \Omega \). Denote by \({\mathcal {D}}\) the set of all compact tempered sets \(D\in {\mathcal {F}}\otimes {\mathcal {B}}(\mathbb {R}^m)\) and by

$$\begin{aligned} {\text {dist}}(E, F):= \sup _{x\in E}\inf _{y\in F} d(x,y) \end{aligned}$$

the Hausdorff separation or semi-distance, where d denotes again the Euclidean metric. We now define different notions of a random attractor with respect to a family of sets \({\mathcal {S}} \subset {\mathcal {D}}\), see also [28, Definition 14.3] and [17, Definition 15].

Definition 3

(Random attractor). The set \(A \in {\mathcal {S}} \subset {\mathcal {D}}\) that is strictly \(\varphi \)-invariant, i.e.

$$\begin{aligned} \varphi (t,\omega ) A(\omega ) = A (\theta _t \omega ) \quad \text {for all } \,t\ge 0 \text{ and } \text{ almost } \text{ all } \omega \in \Omega \,, \end{aligned}$$

is called

  (i) A random pullback attractor with respect to \({\mathcal {S}}\) if for all \(D \in {\mathcal {S}}\) we have

    $$\begin{aligned} \lim _{t \rightarrow \infty } {\text {dist}} \big (\varphi (t, \theta _{-t} \omega )D(\theta _{-t}\omega ), A(\omega )\big ) = 0 \quad \text {for almost all } \,\omega \in \Omega \,, \end{aligned}$$
  (ii) A random forward attractor with respect to \({\mathcal {S}}\) if for all \(D \in {\mathcal {S}}\) we have

    $$\begin{aligned} \lim _{t \rightarrow \infty } {\text {dist}} \big (\varphi (t, \omega )D(\omega ), A(\theta _t\omega )\big ) = 0 \quad \text {for almost all } \,\omega \in \Omega \,, \end{aligned}$$
  (iii) A weak random attractor if it satisfies the convergence property in (i) (or (ii)) with almost sure convergence replaced by convergence in probability,

  (iv) A (weak) random (pullback or forward) point attractor if it satisfies the corresponding properties above for \(\mathcal {S}= \{ D \subset \mathbb {R}^m \,: \, D = \{y\} \text { for some } y \in \mathbb {R}^m\}\), i.e. for single points \(y \in \mathbb {R}^m\).

Note that due to the \(\mathbb {P}\)-invariance of \(\theta _t\) for all \(t \in \mathbb {R}\), it is easy to derive that weak attraction in the pullback and the forward sense are the same and, hence, the notion of a weak random attractor in Definition 3 (iii) is consistent. However, random pullback attractors and random forward attractors with almost sure convergence, as defined above, are generally not equivalent (see [38] for counter-examples). In the following, we will be careful with this distinction, yet in our main examples the random pullback attractor and random forward attractor will be the same. In this case we will simply speak of the random attractor.

Before we introduce random cycles and random periodic solutions, we add some remarks on Definition 3.

Remark 1

Note that we require that the random attractor is measurable with respect to \({\mathcal {F}}\otimes {\mathcal {B}}(\mathbb {R}^m)\), in contrast to a weaker statement often used in the literature (see also [17, Remark 4]).

Remark 2

In many cases, the family of sets \(\mathcal {S}\) is chosen to be the family of all bounded or compact (deterministic) subsets \(B \subset \mathbb {R}^m\), as for example in [23]. Note that our definition of random attractors is a generalization of this weaker definition.

3.2.1 Attracting random cycles and random periodic solutions

Consider a random dynamical system \((\theta , \varphi )\) on \(\mathbb {R}^m\). In the situation of a deterministic limit cycle, the limit cycle is the attractor for all subsets of a neighbourhood of this attractor. Analogously, we give the following definition for the random setting.

Definition 4

(Attracting Random Cycle). We call a random (forward or pullback) attractor A for \((\theta , \varphi )\), with respect to a collection of sets \(\mathcal {S}\), an attracting random cycle if for almost all \(\omega \in \Omega \) we have \(A(\omega ) \cong S^1\), i.e. every fiber is homeomorphic to the circle.

Furthermore, we need to find a stochastic analogue to the limit cycle as a periodic orbit. First, we follow [46] in introducing the notion of random periodic solutions:

Definition 5

(Random periodic solution). Let \(\mathbb {T} \in \{ \mathbb {R}, \mathbb {R}_0^+ \}\). A random periodic solution is an \(\mathcal {F}\)-measurable periodic function \(\psi : \Omega \times \mathbb {T} \rightarrow \mathbb {R}^m\) of period \(T>0\) such that for all \(\omega \in \Omega \)

$$\begin{aligned} \psi ( t + T, \omega ) = \psi (t, \omega ) \ \text { and } \ \varphi (t,\omega , \psi (t_0,\omega )) = \psi (t + t_0,\theta _t \omega ) \ \text { for all } t,t_0 \in \mathbb {T}\,. \end{aligned}$$
(3.5)

Note that this definition assumes that \(T\in \mathbb {R}\) does not depend on the noise realization \(\omega \). We will see the limitations of that concept in Example 3, extending the following example which we introduce first.

Example 2

Similarly to [46], consider the planar stochastic differential equation

$$\begin{aligned} \begin{array}{lcl} \mathrm {d}x &{}=&{} \left( x - y - x\left( x^2 + y^2\right) \right) \, \mathrm {d}t + \sigma x \circ \mathrm {d}W_t\,,\\ \mathrm {d}y &{}=&{} \left( x + y - y\left( x^2 + y^2\right) \right) \, \mathrm {d}t + \sigma y \circ \mathrm {d}W_t\,. \end{array} \end{aligned}$$
(3.6)

where \(\sigma \ge 0\), \(W_t\) denotes a one-dimensional standard Brownian motion and the noise is of Stratonovich type. We denote the cocycle of the induced random dynamical system by \(\varphi = (\varphi _1, \varphi _2)\). Equation (3.6) can be transformed into polar coordinates \((\vartheta , r) \in [0, 2 \pi ) \times [0, \infty )\)

$$\begin{aligned} \begin{array}{lcl} \mathrm {d}\vartheta &{}=&{} 1 \, \mathrm {d}t,\\ \mathrm {d}r &{}=&{} (r - r^3) \, \mathrm {d}t + \sigma r \circ \, \mathrm {d}W_t\,. \end{array} \end{aligned}$$
(3.7)

Therefore, in the situation without noise (\(\sigma = 0\)), the system is as in Example 1 with \(h\equiv 1\) and attracting limit cycle at radius \(r=1\). With noise switched on (\(\sigma > 0\)), Eq. (3.7) has an explicit unique solution given by

$$\begin{aligned} {\hat{\varphi }}(t, \omega , (\vartheta _0, r_0))&= \left( \vartheta _0 + t \mod 2 \pi , \ \frac{r_0 {\text {e}}^{t + \sigma W_t(\omega )}}{\left( 1 + 2 r_0^2 \int _0^t e^{2(s + \sigma W_s(\omega ))} \mathrm {d}s \right) ^{1/2} } \right) \\&=: ( \vartheta (t, \omega , \vartheta _0), r(t,\omega , r_0)) \,. \end{aligned}$$

Moreover, there is a stationary solution for the radial component, satisfying \(r(t, \omega , r^*(\omega )) = r^*(\theta _t \omega )\), and given by

$$\begin{aligned} r^*(\omega ) = \left( 2 \int _{-\infty }^{0} {\text {e}}^{2s + 2 \sigma W_s(\omega )} \mathrm {d}s \right) ^{-1/2}\,. \end{aligned}$$
(3.8)

Furthermore, one can see from a straightforward computation that for all \((x,y) \ne (0,0)\) and almost all \(\omega \in \Omega \)

$$\begin{aligned} \left( \varphi _1(t, \theta _{-t} \omega ,x)^2 + \varphi _2(t, \theta _{-t} \omega , y)^2\right) ^{1/2} \rightarrow r^* (\omega )\ \text { as } t \rightarrow \infty \,, \end{aligned}$$

and also

$$\begin{aligned} \left( \varphi _1(t, \omega ,x)^2 + \varphi _2(t,\omega , y)^2\right) ^{1/2} \rightarrow r^* (\theta _t \omega )\ \text { as } t \rightarrow \infty \,. \end{aligned}$$

Hence, the planar system (3.6) has a random attractor A in the pullback and forward sense, with respect to \(\mathcal {S} =\mathcal {D} {\setminus } \{ \{0\} \}\), where \({\mathcal {D}}\) denotes the set of all compact tempered sets \(D\in {\mathcal {F}}\otimes \mathcal B(\mathbb {R}^2)\) (see also “Appendix A.1”), and the fibers of A are given by (see Fig. 2)

$$\begin{aligned} A(\omega ) = \{ r^* (\omega )(\cos \alpha , \sin \alpha ) \,: \, \alpha \in [0, 2 \pi ) \}. \end{aligned}$$
(3.9)

The system possesses, for any fixed \(\vartheta _0 \in [0, 2 \pi )\), the random periodic solution \(\psi \) which is defined by

$$\begin{aligned} \psi (t, \omega ) = r^* (\omega )(\cos ( \vartheta _0 + t), \sin (\vartheta _0 +t))\,. \end{aligned}$$

Indeed, it is easy to check that \(\psi (t, \omega ) = \psi (t + 2 \pi , \omega ) \) and \( \varphi (t, \omega , \psi (t_0, \omega )) = \psi (t + t_0, \theta _t \omega )\) for all \(t, t_0 \ge 0\).
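The pullback mechanism in this example can be made concrete numerically. The following sketch (our own; the horizon S and all parameters are arbitrary choices) approximates \(r^*(\omega )\) from (3.8) by truncating the integral, realizing the two-sided path via \(B_t := W_{-t}\), and then checks that flowing an arbitrary initial radius from time \(-S\) to 0 over the same noise path indeed lands near \(r^*(\omega )\).

```python
# Sketch: truncated evaluation of (3.8) and pullback convergence to r*(omega).
import numpy as np

rng = np.random.default_rng(2)
sigma, dt, S = 0.5, 1e-3, 20.0
n = int(S / dt)
B = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))))
t = np.linspace(0.0, S, n + 1)

# r*(omega) = (2 * int_{-S}^0 exp(2s + 2 sigma W_s) ds)^{-1/2} with s = -t
r_star = (2.0 * np.sum(np.exp(-2.0 * t + 2.0 * sigma * B)[:-1]) * dt) ** -0.5

# pullback run: dr = (r - r^3) dt + sigma r o dW from time -S to 0 with the
# same path; np.diff(B[::-1]) are the increments of W on [-S, 0]
r = 5.0                                  # arbitrary initial condition at -S
for dW in np.diff(B[::-1]):
    r += (r - r**3 + 0.5 * sigma**2 * r) * dt + sigma * r * dW
print(r_star, r)    # close for S large: pullback convergence to r*(omega)
```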

Example 3

(a) Now consider a stochastic version of Example 1 when the phase dynamics depends on the amplitude, i.e.

$$\begin{aligned} \begin{array}{lcl} \mathrm {d}\vartheta &{}=&{} h(r) \, \mathrm {d}t,\\ \mathrm {d}r &{}=&{} (r - r^3) \, \mathrm {d}t + \sigma r \circ \, \mathrm {d}W_t\,, \end{array} \end{aligned}$$
(3.10)

where the smooth function \(h: \mathbb {R}\rightarrow \mathbb {R}\) with \(h \ge K_h > 0\) is non-constant. The random attractor A for the corresponding planar system

$$\begin{aligned} \begin{array}{lcl} \mathrm {d}x &{}=&{} \left( x - h\left( \sqrt{x^2 + y^2}\right) y - x\left( x^2 + y^2\right) \right) \, \mathrm {d}t + \sigma x \circ \mathrm {d}W_t\,,\\ \mathrm {d}y &{}=&{} \left( h\left( \sqrt{x^2 + y^2}\right) x + y - y\left( x^2 + y^2\right) \right) \, \mathrm {d}t + \sigma y \circ \mathrm {d}W_t\,. \end{array} \end{aligned}$$
(3.11)

is exactly the same as before, as illustrated in Fig. 2. We observe for a point \( a(\omega ) : = r^*(\omega )(\cos \vartheta _0, \sin \vartheta _0) \in A(\omega )\), where \(r^*\) is the random variable defined in Eq. (3.8) and \( \vartheta _0 \in [0, 2 \pi )\), that the cocycle satisfies

$$\begin{aligned} \varphi (t, \omega , a(\omega )) = r^*(\theta _t \omega )\left( \cos \left( \vartheta _0 + \int _0^t h(r^*(\theta _s \omega )) \mathrm {d}s \right) ,\, \sin \left( \vartheta _0 + \int _0^t h(r^*(\theta _s \omega )) \mathrm {d}s\right) \right) \,. \end{aligned}$$

There cannot be a random periodic solution in the sense of Definition 5, since noise-independent periodicity is not possible if h is non-constant.

(b) Naturally, we can also consider the case where the phase is additionally perturbed by noise, i.e.

$$\begin{aligned} \begin{array}{lcl} \mathrm {d}\vartheta &{}=&{} h(r) \, \mathrm {d}t + {\tilde{h}}(r) \circ \, \mathrm {d}W_t^2,\\ \mathrm {d}r &{}=&{} (r - r^3) \, \mathrm {d}t + \sigma r \circ \, \mathrm {d}W_t^1\,, \end{array} \end{aligned}$$
(3.12)

where \(W_t = (W_t^1, W_t^2)\) is now two-dimensional Brownian motion and \(h, {\tilde{h}}: \mathbb {R}\rightarrow \mathbb {R}\) are smooth functions.

Fig. 2

Numerical simulations in (x, y)-coordinates, using Euler–Maruyama integration with step size \(\mathrm {d}t =10^{-2}\), of forward and pullback dynamics of system (3.6) for a set B of initial conditions generated by a trajectory of (3.6) ((a) and (e)). In (b)–(d), we show the numerical approximation of \(\varphi (T,\omega , B)\) for some \(\omega \in \Omega \), approaching the fiber \(A(\theta _T \omega )\) of the random attractor, changing in forward time. In (f)–(h), we show the numerical approximation of \(\varphi (T,\theta _{-T} \omega , B)\) for some \(\omega \in \Omega \), approaching the fiber \(A(\omega )\) of the random attractor, fixed by the pullback mechanism

Example 3 motivates us to introduce the following notion of a more general form of random periodic solution. The potential relevance of finding such a generalization was first discussed by Hans Crauel; hence, we have chosen the name.

Definition 6

(Crauel random periodic solution). Let \(\mathbb {T} \in \{ \mathbb {R}, \mathbb {R}_0^+ \}\). A Crauel random periodic solution (CRPS) is a pair \((\psi ,T)\) consisting of \(\mathcal {F}\)-measurable functions \(\psi : \Omega \times \mathbb {T} \rightarrow \mathbb {R}^m\) and \(T : \Omega \rightarrow \mathbb {R}\) such that for all \(\omega \in \Omega \)

$$\begin{aligned} \psi ( t, \omega ) = \psi (t + T(\theta _{-t} \omega ), \omega ) \ \text { and } \ \varphi (t,\omega , \psi (t_0,\omega )) = \psi (t + t_0,\theta _t \omega ) \ \text { for all } t,t_0 \in \mathbb {T}\,. \end{aligned}$$
(3.13)

In particular, note that condition (3.13) implies \(\psi (0, \omega ) = \psi (T(\omega ), \omega )\) (see Fig. 3 for further details). Furthermore, observe that the classical random periodic solution according to Definition 5 is simply a Crauel random periodic solution with constant T. We show that Definition 6 applies to system (3.10), demonstrating the suitability of this definition.

Proposition 2

  (a) The planar system associated with (3.10) has a family of Crauel random periodic solutions \((\psi _{\vartheta },T)\) which is defined for every \(\vartheta \in [0, 2 \pi )\) by

    $$\begin{aligned} \psi _{\vartheta }(t, \omega ) = r^* (\omega )\left( \cos \left( \vartheta + \int _{-t}^0 h(r^*(\theta _s \omega )) \mathrm {d}s \right) , \ \sin \left( \vartheta + \int _{-t}^0 h(r^*(\theta _s \omega )) \mathrm {d}s \right) \right) \,, \end{aligned}$$
    (3.14)

    and

    $$\begin{aligned} \int _{-T( \omega )}^{0} h(r^*(\theta _s \omega )) \mathrm {d}s = 2 \pi \,, \end{aligned}$$
    (3.15)

    for almost all \(\omega \in \Omega \) and all \( t \in \mathbb {R}_0^+\).

  (b) The system associated with (3.12) has a family of Crauel random periodic solutions \((\psi _{\vartheta },T)\) which is defined for every \(\vartheta \in [0, 2 \pi )\) by \(\psi _{\vartheta }\) analogously to (3.14), just adding \(\int _{-t}^{0} \tilde{h}(r^*(\theta _s \omega )) \circ \, \mathrm {d}W_s^2(\omega )\) to the angular direction, and

    $$\begin{aligned} T(\omega ) = \inf \left\{ t > 0: \left| \int _{-t}^{0} h(r^*(\theta _s \omega )) \mathrm {d}s + \int _{-t}^{0} {\tilde{h}}(r^*(\theta _s \omega )) \circ \, \mathrm {d}W_s^2(\omega )\right| = 2 \pi \right\} \,. \end{aligned}$$
    (3.16)

    for almost all \(\omega \in \Omega \) and all \( t \in \mathbb {R}_0^+\).

Proof

Without loss of generality let \(\vartheta =0\).

  (a) The fact that \(T: \Omega \rightarrow \mathbb {R}\) is well defined can be seen as follows: fix \(\omega \in \Omega \) and let

    $$\begin{aligned} g_{\omega }(t) = \int _{-t}^{0} h(r^*(\theta _s \omega )) \mathrm {d}s - 2 \pi . \end{aligned}$$

Then \(g_{\omega }(0) < 0\), and \(g_{\omega }\) is continuous and strictly increasing with \(g_{\omega }(t) \ge K_h t - 2 \pi \); hence, the existence of \(T(\omega )\) follows from the intermediate value theorem. Moreover, we have by a change of variables that

    $$\begin{aligned} 2 \pi = \int _{-T( \theta _{-t} \omega )}^{0} h(r^*(\theta _{s-t} \omega )) \mathrm {d}s =\int _{-(t + T(\theta _{-t} \omega ))}^{-t} h(r^*(\theta _s \omega )) \mathrm {d}s \,. \end{aligned}$$

    We use this observation to conclude that for almost all \(\omega \in \Omega \) and any \(t \ge 0\)

    $$\begin{aligned} \psi (t + T(\theta _{-t} \omega ), \omega )&= r^* (\omega )\left( \cos \left( \int _{-(t + T(\theta _{-t} \omega ))}^{0} h(r^*(\theta _s \omega )) \mathrm {d}s \right) , \ \sin \left( \int _{-(t + T(\theta _{-t} \omega ))}^{0} h(r^*(\theta _s \omega )) \mathrm {d}s\right) \right) \\&= r^* (\omega )\left( \cos \left( 2 \pi + \int _{-t }^{0} h(r^*(\theta _s \omega )) \mathrm {d}s \right) , \ \sin \left( 2 \pi + \int _{-t}^{0} h(r^*(\theta _s \omega )) \mathrm {d}s\right) \right) \\&= \psi (t, \omega )\,. \end{aligned}$$

    Furthermore, we observe that for almost all \(\omega \in \Omega \) and \(t, t_0 \ge 0\)

    $$\begin{aligned} \varphi (t, \omega ,\psi (t_0, \omega ))&= r^* (\theta _t \omega )\left( \cos \left( \int _{-t_0}^{t} h(r^*(\theta _s \omega )) \mathrm {d}s \right) , \ \sin \left( \int _{-t_0}^{t} h(r^*(\theta _s \omega )) \mathrm {d}s\right) \right) \\&= r^* (\theta _t \omega )\left( \cos \left( \int _{-t_0 -t}^{0} h(r^*(\theta _{s+t} \omega )) \mathrm {d}s \right) , \ \sin \left( \int _{-t_0 - t}^{0} h(r^*(\theta _{s+t} \omega )) \mathrm {d}s\right) \right) \\&= \psi (t + t_0, \theta _t \omega )\,. \end{aligned}$$
  (b) The fact that \(T: \Omega \rightarrow \mathbb {R}\) is well defined almost surely in this case follows directly from the properties of SDEs on compact intervals, in this case \([-2 \pi , 2 \pi ]\). Moreover, we have by a change of variables that

    $$\begin{aligned} 2 \pi&= \left| \int _{-T( \theta _{-t} \omega )}^{0} h(r^*(\theta _{s-t} \omega )) \mathrm {d}s + \int _{-T(\theta _{-t} \omega )}^{0} {\tilde{h}}(r^*(\theta _{s-t} \omega )) \circ \, \mathrm {d}W_s^2(\theta _{-t} \omega )\right| \\&= \left| \int _{-T( \theta _{-t} \omega )}^{0} h(r^*(\theta _{s-t} \omega )) \mathrm {d}s + \int _{-T( \theta _{-t} \omega )}^{0} {\tilde{h}}(r^* (\theta _{s-t} \omega )) \circ \, \mathrm {d}W_{s-t}^2( \omega )\right| \\&= \left| \int _{-(t + T(\theta _{-t} \omega ))}^{-t} h(r^*(\theta _s \omega )) \mathrm {d}s + \int _{-(t+T( \theta _{-t} \omega ))}^{-t} {\tilde{h}}(r^*(\theta _{s} \omega )) \circ \, \mathrm {d}W_{s}^2( \omega )\right| \,. \end{aligned}$$

    We use this observation to conclude \(\psi (t + T(\theta _{-t} \omega ), \omega ) = \psi (t, \omega )\) as in (a). Furthermore, we observe that for almost all \(\omega \in \Omega \) and \(t, t_0 \ge 0\)

$$\begin{aligned} \int _{-t_0}^{t} {\tilde{h}}(r^*(\theta _{s} \omega )) \circ \, \mathrm {d}W_{s}^2( \omega )&= \int _{-t_0 -t}^{0} {\tilde{h}}(r^*(\theta _{s+t} \omega )) \circ \, \mathrm {d}W_{s+t}^2( \omega ) \\&= \int _{-t_0 -t}^{0} {\tilde{h}}(r^*(\theta _{s} (\theta _t \omega ))) \circ \, \mathrm {d}W_{s}^2( \theta _t \omega ) \,, \end{aligned}$$

    such that \(\varphi (t, \omega ,\psi (t_0, \omega )) = \psi (t + t_0, \theta _t \omega )\) follows as in (a). This finishes the proof. \(\square \)
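The random period defined implicitly by (3.15) can also be approximated along a single simulated path. The following sketch is our own illustration, with an ad hoc choice of h satisfying \(h \ge K_h > 0\): we run the radial SDE long enough that \(r(s)\) approximates the stationary path \(r^*(\theta _s \omega )\), and then integrate h backwards from time 0 until the accumulated phase reaches \(2\pi \).

```python
# Sketch: approximate T(omega) from (3.15) along one noise realization.
import numpy as np

rng = np.random.default_rng(3)
sigma, dt = 0.5, 1e-3
h = lambda r: 1.0 + 0.5 * (r - 1.0) ** 2     # smooth, h >= K_h = 1 > 0

n_burn, n_keep = 100_000, 100_000            # burn-in, then path on [-100, 0]
r, path = 1.0, np.empty(n_keep)
for k in range(n_burn + n_keep):
    dW = rng.normal(0.0, np.sqrt(dt))
    r += (r - r**3 + 0.5 * sigma**2 * r) * dt + sigma * r * dW
    if k >= n_burn:
        path[k - n_burn] = r                 # approximates r*(theta_s omega)

phase, k = 0.0, n_keep - 1
while phase < 2.0 * np.pi:                   # integrate h(r*) backwards from 0
    phase += h(path[k]) * dt
    k -= 1
T_omega = (n_keep - 1 - k) * dt
print("T(omega) estimate:", T_omega)         # noise-dependent period
```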

Fig. 3

Sketch of Crauel random periodic solutions (CRPS), following two points along the dynamics from \(A(\theta _{-t} \omega )\) via \(A(\omega )\) to \(A(\theta _t \omega )\). The point \(\psi (0, \theta _{-t} \omega )\) is mapped by \(\varphi (t, \theta _{-t} \omega , \cdot )\) to \(\psi (t, \omega )\) which is then mapped by \(\varphi (t, \omega , \cdot )\) to \(\psi (2t, \theta _t\omega )\), in each case preserving the period \(T(\theta _{-t} \omega )\). Similarly, the point \(\psi (-t, \theta _{-t} \omega )\) is mapped by \(\varphi (t, \theta _{-t} \omega , \cdot )\) to \(\psi (0, \omega )\) which is then mapped by \(\varphi (t, \omega , \cdot )\) to \(\psi (t, \theta _t\omega )\), in each case preserving the period \(T(\omega )\). The arrows indicate that the CRPS parametrizes the fiber of the attractor as \(A(\omega ) = \{ \psi (t, \omega )\,: \, t \in [0, T(\omega ))\}\)

Note that in Example 3, and hence also in the simpler subcase Example 2, it is easy to check that the Lyapunov exponents satisfy \(\lambda _1 = 0\) and \(\lambda _2 < 0\). We want to make three additional remarks on Proposition 2, also concerning Definition 6.

Remark 3

The proof of Proposition 2 shows why we require \(\psi (t+T(\theta _{-t}\omega ), \omega ) = \psi (t, \omega )\) in Definition 6 instead of choosing \(T(\omega )\) or \(T(\theta _t \omega )\) in such a formula. It is precisely the relation we obtain from Eqs. (3.14) and (3.15). Instead of Eq. (3.15), one might alternatively consider

$$\begin{aligned} \int _{0}^{T( \omega )} h(r^*(\theta _s \omega )) \mathrm {d}s = 2 \pi \,, \end{aligned}$$

and replace the time integral in \(\psi _{\vartheta }(t, \omega )\) (3.14) accordingly. However, it is easy to check that the invariance requirement \(\varphi (t, \omega , \psi _{\vartheta }(t_0, \omega )) = \psi _{\vartheta }(t+t_0, \theta _t \omega )\) is not satisfied in this situation. Hence, the choice of period in Definition 6 turns out to be the appropriate one for an application to Example 3 which we see as the fundamental model for extending random periodic solutions to noise-dependent periods. Additionally note that, when \({\tilde{h}} \ne 0\) in Eq. (3.12), the direction of periodicity depends on the noise realization \(\omega \).

Remark 4

Note that for any \(\vartheta \in [0, 2 \pi )\) we have \(\psi _{\vartheta } (t, \omega ) \in A(\omega )\) for all \(t \ge 0\), \(\omega \in \Omega \), where A is the random attractor given in Eq. (3.9). Hence, we have established the analogous situation to the deterministic case in the sense that the attracting random cycle corresponds to a random periodic solution; see also Fig. 3.

Remark 5

One may ask what happens when \(h, {\tilde{h}}\) in Eq. (3.12) also depend on \(\vartheta \). Then there can, of course, still be a CRPS, but we do not know a priori the existence of some stationary process \(\vartheta ^*\), similar to \(r^*\), which we would need in order to write down an explicit solution such as (3.14).

We will see later in Proposition 6 that we can determine \(\mathbb {E}[T(\omega )] < \infty \), using a variant of the Andronov–Vitt–Pontryagin formula (cf. [40]).

3.2.2 Chaotic random attractors and singletons

More generally, i.e., in addition to the case with first Lyapunov exponent \(\lambda _1=0\), we want to consider the situations where \(\lambda _1 > 0\) and \(\lambda _1 < 0\) (always assuming volume contraction to an attractor expressed by \(\sum _{j} \lambda _j < 0\)). For \(\lambda _1<0\), this typically means that the random attractor is a singleton (see, for example, [23]) and one speaks of complete synchronization. In such a situation, the dynamics on the random attractor is trivial, so there is no natural notion of isochronicity. In the case \(\lambda _1>0\), one typically speaks of a chaotic random attractor which is not a singleton. We can illustrate these two cases by the following example very similar to the previous ones.

Example 4

We consider the following stochastic differential equations on \(\mathbb {R}^2\) with purely external noise of intensity \(\sigma \ge 0\),

$$\begin{aligned} \begin{array}{ll} \mathrm {d}x &{}= (x - y - (x-by)(x^2 + y^2))\mathrm {d}t + \sigma \circ \mathrm {d}W_t^1,\\ \mathrm {d}y &{}= ( y + x - (bx+y)(x^2 + y^2)) \mathrm {d}t + \sigma \circ \mathrm {d}W_t^2, \end{array} \end{aligned}$$
(3.17)

where \( b \in \mathbb {R}\) and \(W_t^1, W_t^2\) denote independent one-dimensional Brownian motions. In polar coordinates the system can be written as

$$\begin{aligned} \mathrm {d}r&= \left( r - r^3 \right) \mathrm {d}t + \sigma (\cos \vartheta \circ \, \mathrm {d}W_t^1 + \sin \vartheta \circ \, \mathrm {d}W_t^2), \nonumber \\ \mathrm {d}\vartheta&= (1 + br^2) \, \mathrm {d}t + \frac{\sigma }{r}( - \sin \vartheta \circ \, \mathrm {d}W_t^1 + \cos \vartheta \circ \, \mathrm {d}W_t^2). \end{aligned}$$
(3.18)

This form illustrates the role of the parameter b inducing a shear force: if \(b > 0\), the phase velocity \(\frac{\mathrm {d}\vartheta }{\mathrm {d}t}\) depends on the amplitude r. Since Gaussian random vectors are invariant under orthogonal transformations, one might think of rewriting the problem in terms of the independent Wiener processes

$$\begin{aligned} \mathrm {d}W_r&= \cos \vartheta \,\mathrm {d}W_t^1 + \sin \vartheta \, \mathrm {d}W_t^2,\\ \mathrm {d}W_\vartheta&= - \sin \vartheta \, \mathrm {d}W_t^1 + \cos \vartheta \, \mathrm {d}W_t^2. \end{aligned}$$

However, the pathwise properties of the processes seen as random dynamical systems change under this transformation. In (3.18), the radial components of the trajectories depend on \(\vartheta \), which appears in the diffusion term and destroys the skew-product structure we had in the previous Example 3.

It has been shown in [20] that for b small enough the first Lyapunov exponent \(\lambda _1\) is negative such that the corresponding random attractor A is indeed a singleton. For b large, one can see numerically that the attractor becomes chaotic. A proof of \(\lambda _1 > 0\) has been obtained in [22] for a simplified model of (4.20) in cylindrical coordinates and recently also in the setting of restricting the state space to a bounded domain and only considering the dynamics conditioned on survival in this domain, using a computer-assisted proof technique [11].
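This dichotomy is also visible in a direct simulation: two copies of (3.17) driven by the same noise realization approach each other when \(\lambda _1 < 0\) (synchronization), while for strong shear their distance remains of order one. The following sketch is our own numerical illustration; the parameter values are arbitrary.

```python
# Sketch: synchronization vs. chaos for (3.17) under one noise realization.
import numpy as np

def run(b, sigma=0.5, dt=1e-3, n=200_000, seed=4):
    rng = np.random.default_rng(seed)
    u = np.array([1.0, 0.0])
    v = np.array([-0.5, 0.8])
    for _ in range(n):
        dW = rng.normal(0.0, np.sqrt(dt), 2)   # the SAME increments for both
        for p in (u, v):
            x, y = p
            r2 = x * x + y * y
            p += np.array([x - y - (x - b * y) * r2,
                           y + x - (b * x + y) * r2]) * dt + sigma * dW
    return np.linalg.norm(u - v)

print("b = 0.5:", run(0.5))   # small shear: distance near 0 (synchronization)
print("b = 5.0:", run(5.0))   # strong shear: trajectories stay apart
```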

One can characterize chaotic random attractors as non-trivial geometric objects and supports of SRB measures, i.e. sample measures with densities on unstable manifolds. For details see [10, 29] and for further discussions relevant for our setting e.g. [9, 21]. Due to the compactness and the minimality property of random attractors there must be recurrence on these objects and one may even find Crauel random periodic solutions there. However, it is questionable to what extent one can speak of isochronicity, given the very irregular recurrence properties. This already makes isochronicity a difficult issue for deterministic chaotic oscillators, see e.g. [42].

3.3 Random limit cycles as normally hyperbolic random invariant manifolds

As we have seen in Sect. 3.2.2, we can generally not expect the persistence of periodic orbits from the deterministic to the stochastic case under (global) white noise perturbations. A point of view that considers only local, bounded noise perturbations of normally hyperbolic manifolds, in particular hyperbolic limit cycles, is presented in [30], where normally hyperbolic random invariant manifolds and their foliations are studied. In more detail, consider the ODE (2.1) with a small random perturbation, i.e. the random differential equation

$$\begin{aligned} \dot{x} = f(x) + \varepsilon F(\theta _t \omega , x), \end{aligned}$$
(3.19)

where \(\varepsilon > 0\) is a small parameter and F is \(C^1\), uniformly bounded in x, \(C^0\) in t for fixed \(\omega \), and measurable in \(\omega \). In several cases, SDEs can be transformed into a random differential equation of the form (3.19), in particular when the noise is additive or linear multiplicative; however, in this case, F is generally not uniformly bounded. Hence, for an application of the following, one has to truncate the Brownian motion by a fixed large constant, as we will discuss later. Let us first give the following definition:

Definition 7

A random invariant manifold for an RDS is a collection of nonempty closed random sets \({\mathcal {M}} (\omega )\), \(\omega \in \Omega \), such that each \(\mathcal {M}(\omega )\) is a manifold and

$$\begin{aligned} \varphi (t, \omega , {\mathcal {M}}(\omega )) = {\mathcal {M}}(\theta _t \omega ) \ \text { for all } t \in \mathbb {R}, \omega \in \Omega . \end{aligned}$$

The random invariant manifold \({\mathcal {M}}\) is called normally hyperbolic if for almost every \(\omega \in \Omega \) and any \(x \in {\mathcal {M}}(\omega )\), there exists a splitting which is \(C^0\) in x and measurable:

$$\begin{aligned} \mathbb {R}^m = E^u(\omega , x) \oplus E^c(\omega , x) \oplus E^s(\omega , x) \end{aligned}$$

of closed subspaces with associated projections \(\Pi ^u(\omega ,x),\Pi ^c(\omega ,x)\) and \(\Pi ^s(\omega ,x)\) such that

  (i) the splitting is invariant

    $$\begin{aligned} \mathrm {D}\varphi (t,\omega ,x) E^i(\omega ,x) = E^i(\theta _t \omega , \varphi (t, \omega ,x)), \ \text { for } i=u,c, \end{aligned}$$

    and

    $$\begin{aligned} \mathrm {D}\varphi (t,\omega ,x) E^s(\omega ,x) \subset E^s(\theta _t \omega , \varphi (t, \omega ,x)), \end{aligned}$$
  (ii) \(\mathrm {D}\varphi (t, \omega ,x)|_{E^i(\omega ,x)} : E^i(\omega ,x) \rightarrow E^i(\theta _t \omega , \varphi (t, \omega ,x))\) is an isomorphism for \(i=u,c,s\) and \(E^c(\omega ,x)\) is the tangent space of \({\mathcal {M}}(\omega )\) at x,

  (iii) there are \((\theta , \varphi )\)-invariant random variables \({\bar{\alpha }}, {\bar{\beta }}: {\mathcal {M}} \rightarrow (0, \infty ), {\bar{\alpha }} < {\bar{\beta }}\), and a tempered random variable \(K(\omega ,x) : {\mathcal {M}} \rightarrow [1, \infty )\) such that

    $$\begin{aligned} \Vert \mathrm {D}\varphi (t,\omega ,x) \Pi ^s(\omega ,x)\Vert&\le K(\omega ,x) e^{- {\bar{\beta }}(\omega ,x) t} \ \text { for } t\ge 0, \end{aligned}$$
    (3.20)
    $$\begin{aligned} \Vert \mathrm {D}\varphi (t,\omega ,x) \Pi ^u(\omega ,x)\Vert&\le K(\omega ,x) e^{{\bar{\beta }}(\omega ,x) t} \ \text { for } t\le 0, \end{aligned}$$
    (3.21)
    $$\begin{aligned} \Vert \mathrm {D}\varphi (t,\omega ,x) \Pi ^c(\omega ,x)\Vert&\le K(\omega ,x) e^{{\bar{\alpha }}(\omega ,x) \left| t\right| } \ \text { for } - \infty< t < \infty . \end{aligned}$$
    (3.22)

We can then deduce the following statements:

Proposition 3

Assume that \(\Phi \) is a \(C^k\) flow, \(k \ge 1\), in \(\mathbb {R}^m\) which has a hyperbolic periodic orbit \(\gamma \), with exponents \(\bar{\alpha }=0 < {\bar{\beta }}\) characterizing the normal hyperbolicity as in (3.20), (3.22). Then there exists a \(\delta > 0\) such that for any random \(C^1\) flow \(\varphi (t, \omega , \cdot )\) in \(\mathbb {R}^m\), as for example induced by an RDE (3.19), with

$$\begin{aligned} \Vert \Phi (t, \cdot ) - \varphi (t, \omega , \cdot )\Vert _{C^1} < \delta , \ \text { for all } t \in [0,1], \omega \in \Omega , \end{aligned}$$

we have that

  (i) The random flow \(\varphi (t, \omega , \cdot )\) has a \(C^1\) normally hyperbolic invariant random manifold \({\mathcal {M}}(\omega )\) in a small neighbourhood of \(\gamma \),

  (ii) If \(\varphi (t, \omega , \cdot )\) is \(C^k\), then \({\mathcal {M}}(\omega )\) is a \(C^k\) manifold diffeomorphic to \(\gamma \) for each \(\omega \in \Omega \),

  (iii) There exists a stable manifold \({\mathcal {W}}^s(\omega )\) of \({\mathcal {M}}(\omega )\) under \(\varphi (t, \omega , \cdot )\), i.e. for all \(x \in {\mathcal {W}}^s(\omega )\)

    $$\begin{aligned} \lim _{t \rightarrow \infty } {\text {dist}} \big (\varphi (t, \omega , x), {\mathcal {M}}(\theta _t\omega )\big ) = 0 \quad \text {for almost all } \,\omega \in \Omega \end{aligned}$$
  4. (iv)

    The manifold \({\mathcal {M}}(\omega )\) is, in fact, a random limit cycle in the sense of Definition 4.

Proof

The statements (i)–(iii) follow directly from [30, Theorem 2.2]. It is clear from (iii) that \({\mathcal {M}}(\omega )\) is a random forward attractor with respect to the collection \(\mathcal {S}\) of tempered random sets whose fibers \(S(\omega )\) are contained in \({\mathcal {W}}^s(\omega )\). Additionally, from (ii), it follows directly that \({\mathcal {M}}(\omega )\) is diffeomorphic to the unit circle, and, hence, we can conclude statement (iv). \(\quad \square \)

4 Random Isochrons

4.1 Isochrons as stable manifolds

4.1.1 Definition of forward isochrons

Let A be an attracting random cycle for the random dynamical system \((\theta , \varphi )\) where A is a random forward attractor (and possibly also a random pullback attractor). One may think of equations of the type (3.12), (3.18) or similar such that almost sure forward and pullback convergence coincide (see e.g. [20, Proof of Theorem B] or [38, Example 2.7 (i)]). We further assume that we are in the situation of a differentiable hyperbolic random dynamical system as discussed in Sect. 3.1.

In the typical setting of attracting random cycles, we may assume that \(\lambda _1 = 0\) with single multiplicity and \(\lambda _i < 0\) for all \(2\le i \le p\). In analogy to the stable manifolds of points on a deterministic limit cycle, we can then establish the following key novel definition (see also Fig. 4).

Definition 8

The random forward isochron \(W^{{\text {f}}}(\omega , x)\) of a pair \((\omega , x) \in \Omega \times \mathbb {R}^m\) with \(x \in A(\omega )\) is given by the stable set

$$\begin{aligned} W^{{\text {f}}}(\omega , x):= \left\{ y\in \mathbb {R}^m:\lim _{t\rightarrow +\infty }{\text {d}}(\varphi (t, \omega , y),\varphi (t, \omega , x))=0\right\} , \end{aligned}$$
(4.1)

for almost all \(\omega \in \Omega \) and all \(x \in A(\omega )\). In particular, we have for all \({\tilde{\lambda }} \in (0, - \lambda _2)\), where \(\lambda _2\) denotes the largest nonzero Lyapunov exponent,

$$\begin{aligned} W^{{\text {f}}}(\omega , x) =\left\{ y\in \mathbb {R}^m:\sup _{t\ge 0} {\text {e}}^{{\tilde{\lambda }} t} {\text {d}}(\varphi (t, \omega , y),\varphi (t, \omega , x)) < \infty \right\} . \end{aligned}$$
(4.2)

Remark 6

It is clear from the definition why we exclude the case \(\lambda _1 <0\). In this situation, the set \(W^{{\text {f}}}(\omega ,x)\) is the whole absorbing set and, hence, no information about the decomposition of the state space by the dynamics can be obtained that way.

As indicated in Sect. 3.2.2, a chaotic random attractor, characterized by \(\lambda _1 >0\), also exhibits recurrence properties, so that Definition 8 can in principle also be applied in this situation. However, it is debatable to what extent one can speak of isochronicity, given the irregular recurrence properties. Since this already makes isochronicity a difficult issue for deterministic chaotic oscillators [42], we leave a detailed analysis of random isochrons for chaotic random attractors as a topic for future work.

It is easy to observe that for all \(s \ge 0\) we have

$$\begin{aligned} \varphi (s,\omega ) W^{{\text {f}}}(\omega , x) = W^{{\text {f}}}(\theta _s \omega , \varphi (s,\omega ,x)), \end{aligned}$$
(4.3)

i.e. the forward isochrons are \(\varphi \)-invariant, as depicted in Fig. 4.

Fig. 4: Sketch of isochrons \(W^{{\text {f}}}(\omega , x)\) at \(A(\omega )\) and \(W^{{\text {f}}}(\theta _t \omega , \varphi (t,\omega ,x))\) at \(A(\theta _t \omega )\), illustrating Definition 8 and the invariance relation (4.3), for \(A(\omega )\) being a random limit cycle
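
Definition 8 also lends itself to a direct numerical test. The following sketch is purely illustrative and not part of the theory above: it fixes one noise realization \(\omega \) (one Brownian path), runs two initial conditions under this same path for a Hopf-type drift with an attracting deterministic limit cycle, and monitors the distance in (4.1); the function names and all parameter values are our own choices.

```python
import numpy as np

def drift(v):
    # Hopf-type drift with a deterministic limit cycle at |v| = 1 (our choice)
    x, y = v
    r2 = x * x + y * y
    return np.array([x - r2 * x - y, y - r2 * y + x])

def pair_distance(x0, y0, sigma=0.1, dt=1e-3, t_end=30.0, seed=0):
    """Run two initial conditions under the SAME Brownian path (one fixed omega)."""
    rng = np.random.default_rng(seed)
    a, b = np.array(x0, float), np.array(y0, float)
    dist = []
    for _ in range(int(t_end / dt)):
        dW = sigma * rng.normal(scale=np.sqrt(dt), size=2)  # shared noise increment
        a = a + drift(a) * dt + dW
        b = b + drift(b) * dt + dW
        dist.append(np.linalg.norm(a - b))
    return np.array(dist)

# decay to zero suggests y lies on the forward isochron of x as in (4.1);
# a plateau indicates a persistent asymptotic phase difference
print(pair_distance((1.0, 0.0), (1.3, 0.0))[::5000])
```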

4.1.2 Existence and properties of random stable sets

In the literature on (global) random dynamical systems, the existence of stable sets such as \(W^{{\text {f}}}(\omega ,x)\) as stable manifolds is often first established for discrete time, see e.g. [36] or [32, Chapter III]. (Arnold's treatment [1, Chapter 7] is limited to equilibria.) Even though the local view in [30], as described in Sect. 3.3, is different, we also need to account for the global situation in order to provide the full picture. Hence, we begin by adopting the discrete-time approach, reducing the analysis to time-one maps \(\varphi (1,\omega , \cdot )\) and their concatenations

$$\begin{aligned} \varphi (n, \omega , x) = (\varphi (1, \theta _{n-1} \omega , \cdot )\circ \varphi (1, \theta _{n-2} \omega , \cdot ) \circ \cdots \circ \varphi (1, \omega , \cdot ))(x), \ n \in \mathbb {N} \,. \end{aligned}$$
(4.4)
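
As a small illustration of the reduction (4.4), assuming a scalar toy SDE \(\mathrm {d}x = (x-x^3)\,\mathrm {d}t + \sigma \,\mathrm {d}W_t\) and an Euler discretization (both our choices, not taken from the text), one can realize \(\varphi (n, \omega , x)\) by concatenating time-one maps and shifting the noise segment in the \(\omega \)-slot at every step:

```python
import numpy as np

def time_one_map(x, increments, dt=1e-2, sigma=0.2):
    """phi(1, omega, .): Euler steps over one unit of time; `increments` holds the
    Brownian increments of that unit interval, i.e. the omega-dependence."""
    for dW in increments:
        x = x + (x - x**3) * dt + sigma * dW
    return x

rng = np.random.default_rng(1)
steps = int(1 / 1e-2)
# noise[n] plays the role of theta_n omega restricted to the interval [n, n+1]
noise = [rng.normal(scale=np.sqrt(1e-2), size=steps) for _ in range(5)]

x = 0.1
for n in range(5):  # phi(5, omega, x) as the concatenation (4.4)
    x = time_one_map(x, noise[n])
print(x)
```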

First we want to conclude for all \({\tilde{\lambda }} \in (0, - \lambda _2)\) that

$$\begin{aligned} {\tilde{W}}^{{\text {s}}}(\omega , x):=\left\{ y\in \mathbb {R}^m: \sup _{n\ge 0} e^{{\tilde{\lambda }} n} {\text {d}}(\varphi (n, \omega , y),\varphi (n, \omega , x)) < \infty \right\} \end{aligned}$$
(4.5)

is an \((m-1)\)-dimensional immersed \(C^k\)-submanifold under sufficient boundedness assumptions which would be immediately satisfied if the state space \({\mathcal {X}}\) is a compact manifold (cf. [32, Chapter III, Theorem 3.2]). We will state such conditions for our setting \({\mathcal {X}}= \mathbb {R}^m\) in the following. The transition to the time-continuous case, i.e. establishing \(W^{{\text {f}}}(\omega , x) = {\tilde{W}}^{{\text {s}}}(\omega , x)\), then follows immediately from the integrability assumption (3.4) for the MET, as one can observe with the proof of [32, Chapter V, Theorem 2.2].

One possible approach can be found in [9]: consider the maps (4.4). For \(x \in \mathbb {R}^m\), we define the local linear shift function

$$\begin{aligned} f_x \,: \, \mathbb {R}^m \cong {\text {T}}_x \mathbb {R}^m \rightarrow \mathbb {R}^m, \quad y \mapsto f_x(y) := x + y \,. \end{aligned}$$

Further, we define the map

$$\begin{aligned}&F_{(\omega ,x),n} \,:\, {\text {T}}_{\varphi (n, \omega ,x)} \mathbb {R}^m \rightarrow {\text {T}}_{\varphi (n+1, \omega ,x)} \mathbb {R}^m; \\&\quad F_{(\omega ,x),n} := f_{\varphi (n+1, \omega ,x)} ^{-1} \, \circ \varphi (1, \theta _{n} \omega , \cdot ) \circ f_{\varphi (n, \omega ,x)}\,, \end{aligned}$$

which is the evolution process of the linearization around the trajectory starting at \(x \in \mathbb {R}^m\). Assume that there is an invariant probability measure \(\mathbb {P} \times \rho \) for \((\Theta _t)_{t \ge 0}\) on \((\Omega \times \mathbb {R}^m, \mathcal {F}_0^\infty \times \mathcal {B}(\mathbb {R}^m))\) (see “Appendix A.1”). If the RDS is induced by an SDE, the measure \(\rho \) is exactly the stationary measure of the associated Markov process. The integrability condition of the MET with respect to this measure reads

$$\begin{aligned} \log ^+ \Vert \mathrm {D}\varphi (1, \omega , \cdot ) \Vert \in L^1(\mathbb {P} \times \rho )\,. \end{aligned}$$
(4.6)

The crucial boundedness assumption that compensates for the lack of compactness in the proof of a stable manifold theorem reads

$$\begin{aligned} \log \left( \sup _{ \xi \in B_1(x)} \Vert \mathrm {D}_{\xi }^2 F_{(\omega ,x),0}\Vert \right) \in L^1(\mathbb {P} \times \rho ) \,, \end{aligned}$$
(4.7)

where \(\mathrm {D}^2\) is the second derivative operator and \(B_1(x)\) denotes the ball of radius 1 centered at \(x \in \mathbb {R}^m\).

In the situation where the maps (4.4) of the discrete-time RDS are the time-one maps of the continuous-time RDS induced by the SDE (3.2) with the stationary distribution fulfilling

$$\begin{aligned} \int _{\mathbb {R}^m} \log (\left\| x \right\| +1)^{1/2} \, \mathrm {d}\rho (x) < \infty \,, \end{aligned}$$
(4.8)

we have the following requirements on \(b, \sigma _i \in C^{k+1}\), \(1 \le i \le n, k \ge 2\), such that assumption (4.7) is satisfied:

$$\begin{aligned} \Vert b\Vert _{k,\delta } + \sum _{i=1}^n \Vert \sigma _i\Vert _{k,\delta } < \infty \,, \end{aligned}$$
(4.9)

where \(0 < \delta \le 1\) and with multi index notation \(\alpha = (\alpha _1, \dots , \alpha _m)\), \(\left| \alpha \right| = \sum _{i=1}^m \left| \alpha _i \right| \), for \(f \in C^k\)

$$\begin{aligned} \Vert f\Vert _{k,\delta } = \sup _{x \in \mathbb {R}^m} \frac{\Vert f(x)\Vert }{1 + \Vert x\Vert } + \sum _{1 \le \left| \alpha \right| \le k} \sup _{x \in \mathbb {R}^m} \Vert \mathrm {D}^{\alpha } f(x) \Vert + \sum _{\left| \alpha \right| = k} \sup _{x \ne y} \frac{\Vert \mathrm {D}^{\alpha } f(x) - \mathrm {D}^{\alpha } f(y)\Vert }{\Vert x-y\Vert ^{\delta }}. \end{aligned}$$
(4.10)

This means that the coefficients of the SDE have at most linear growth and globally bounded derivatives, and that the k-th derivatives have bounded \(\delta \)-Hölder norm. In [9], the backward flow and a condition similar to (4.7) for the inverse are also considered, but these are not needed when we consider only the stable manifold problem. These conditions on the drift b are generally too restrictive, since even the examples (3.6), (3.10) and (3.11) are not covered. Of course, one can always consider the dynamics on a compact domain \({\mathcal {K}}\), with absorbing or reflecting conditions at the boundary of the domain, as we will see later in Sect. 4.3 for the averaged problem on the level of the Kolmogorov equations. However, this involves further technicalities for the random dynamical systems approach which we try to avoid here. The easiest way of reducing to a compact domain \({\mathcal {K}}\) is to assume compact support of the noise and absorption into \({\mathcal {K}}\) through the drift dynamics, such that neither global nor boundary conditions are needed (see Theorem 2 (iii)).
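
To make the norm (4.10) concrete, here is a rough sampled proxy for \(\Vert f\Vert _{1,1}\) of a scalar function on a finite window (an illustrative sketch only; the true norm takes suprema over all of \(\mathbb {R}^m\), and the function and grid choices below are ours). It shows numerically why a bounded coefficient like \(\tanh \) is admissible while a cubic drift, as in (3.10) and (3.11), is not: its derivative term grows without bound as the window widens.

```python
import numpy as np

def norm_k_delta(f, df, xs):
    """Sampled proxy for (4.10) with k = 1, delta = 1, for scalar f on a grid xs."""
    growth = np.max(np.abs(f(xs)) / (1.0 + np.abs(xs)))
    dbound = np.max(np.abs(df(xs)))
    # Lipschitz (delta = 1 Hoelder) constant of the highest derivative on the sample;
    # adding the identity matrix avoids division by zero on the diagonal
    num = np.abs(df(xs)[:, None] - df(xs)[None, :])
    den = np.abs(xs[:, None] - xs[None, :]) + np.eye(len(xs))
    return growth + dbound + np.max(num / den)

xs = np.linspace(-50, 50, 801)
print(norm_k_delta(np.tanh, lambda x: 1 / np.cosh(x)**2, xs))        # stays bounded
print(norm_k_delta(lambda x: x - x**3, lambda x: 1 - 3 * x**2, xs))  # grows with the window
```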

Additionally, we consider [23, Section 3], which discusses conditions for synchronization to a singleton random attractor for random dynamical systems induced by an SDE (3.2) with additive noise, i.e. \(n=m\) and, for all \(1\le i,j \le n\), \(\sigma _i^j = \sigma \delta _{i,j}\), where \(\sigma > 0\) and \(\sigma _i^j\) denotes the j-th entry of the vector \(\sigma _i\). The authors formulate a special local stable manifold theorem for the case \(\lambda _1 < 0\), which is, however, based on [36], where stable manifold theorems are treated in full generality. The assumption for deducing the local stable manifold theorem amounts to a (weaker) combination of conditions (4.6) and (4.7), and reads

$$\begin{aligned} \mathbb {E} \int _{\mathbb {R}^m} \log ^+ \Vert \varphi (1,\omega , \cdot + x) - \varphi (1,\omega ,x) \Vert _{C^{1,\delta }({\overline{B}}_1(0))} \, \mathrm {d}\rho (x) < \infty \,, \end{aligned}$$
(4.11)

where \(C^{1,\delta }\) is the space of \(C^1\)-functions whose derivatives are \(\delta \)-Hölder continuous for some \(\delta \in (0,1)\) and \(\rho \) denotes the stationary measure of the associated Markov process. We introduce a classical dissipativity condition, the one-sided Lipschitz condition

$$\begin{aligned} \langle b(x) - b(y), x - y \rangle \le \kappa \left\| x - y \right\| ^2\,, \end{aligned}$$
(4.12)

for all \(x,y \in \mathbb {R}^m\) and \(\kappa > 0\). According to [23, Lemma 3.9], condition (4.11) is satisfied in the case of additive noise if \(b \in C^2(\mathbb {R}^m)\) fulfills (4.12), admits at most polynomial growth of the second derivative, i.e.

$$\begin{aligned} \left\| \mathrm {D}^2 b(x) \right\| \le C(\left\| x \right\| ^M +1) \quad \text {for all } x \in \mathbb {R}^m \text { and some } C>0, M \in \mathbb {N}\,, \end{aligned}$$
(4.13)

and the stationary distribution \(\rho \) satisfies

$$\begin{aligned} \int _{\mathbb {R}^m} \log ^+(\left\| x \right\| ) \mathrm {d}\rho (x) < \infty \,. \end{aligned}$$
(4.14)
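
The one-sided Lipschitz condition (4.12) is also easy to probe numerically. The sketch below samples random pairs for a Hopf-type drift \(b(x) = x - \Vert x\Vert ^2 x\) (cf. the estimate (4.15) in Remark 7 below) and estimates the smallest admissible \(\kappa \); the drift choice, sample size and all names are our own illustrative choices.

```python
import numpy as np

def b(v):
    # Hopf-type drift b(x) = x - |x|^2 x, cf. the estimate (4.15) in Remark 7
    return v - (v @ v) * v

rng = np.random.default_rng(0)
kappa_est = -np.inf
for _ in range(100_000):
    x, y = rng.normal(size=2), rng.normal(size=2)
    d = x - y
    n2 = d @ d
    if n2 > 1e-12:
        kappa_est = max(kappa_est, (b(x) - b(y)) @ d / n2)
print(kappa_est)  # should stay <= 1, matching (4.12) with kappa = 1
```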

4.1.3 Main theorem about random isochrons

Assumptions (4.12) and (4.13) on the drift are weaker than condition (4.9) but, in [23], only applied to situations with additive noise whereas at least linear multiplicative noise as in (3.10) is a desirable model for random periodicity. We address this issue in Remark 7 and point (iii) of the following theorem, which summarizes the findings from above:

Theorem 2

(Forward isochrons are stable manifolds). Consider an ergodic \(C^k\), \(k \ge 2\), random dynamical system \((\theta , \varphi )\) on \(\mathbb {R}^m\) with random attractor A, satisfying the integrability assumption (3.4) of the Multiplicative Ergodic Theorem such that \(\lambda _1 = 0\) with single multiplicity and \(\lambda _i < 0\) for all \(2\le i \le p\). Let further one of the following assumptions be satisfied:

  1. (i)

    The RDS \((\theta , \varphi )\) is induced by an SDE of the form (3.2) such that the unique stationary measure \(\rho \) satisfies (4.8) and the drift and diffusion coefficients satisfy (4.9),

  2. (ii)

    The RDS \((\theta , \varphi )\) is induced by an SDE of the form (3.2) with \(n=m\) and, for all \(1\le i,j \le n\), \(\sigma _i^j = \sigma \delta _{i,j}\) where \(\sigma > 0\), such that the unique stationary measure \(\rho \) satisfies (4.14) and the drift satisfies conditions (4.12) and (4.13),

  3. (iii)

    The RDS \((\theta , \varphi )\) is induced by an SDE of the form (3.2) such that \({{\,\mathrm{supp}\,}}(\sigma ) \subset \mathbb {R}^m\) is compact, the drift b satisfies condition (4.12) with \(\kappa < 0\) for all \( \Vert x\Vert , \Vert y\Vert > R\) for some \(R>0\) and there is a unique stationary measure \(\rho \) with \({{\,\mathrm{supp}\,}}(\rho ) \subset \mathbb {R}^m\) compact.

  4. (iv)

    The RDS satisfies the conditions of Proposition 3.

Then for almost all \(\omega \in \Omega \) and all \(x \in A(\omega )\), the random forward isochrons \(W^{{\text {f}}}(\omega , x)\) (see (4.2)) form a uniquely determined family, \(C^{k-1}\) in x, of \(C^k\) \((m-1)\)-dimensional submanifolds (at least locally, i.e. within a neighbourhood \({\mathcal {U}}\) of x) of the stable manifold \({\mathcal {W}}^s(\omega )\) such that

$$\begin{aligned} {\mathcal {W}}^s(\omega ) = \cup _{x \in A(\omega )} W^{{\text {f}}}(\omega , x), \end{aligned}$$

where the union is disjoint.

Proof

As already discussed, in most of the cited literature the stable manifold theorem is shown for discrete time. However, the transition to the time-continuous case, i.e. establishing \(W^{{\text {f}}}(\omega , x) = {\tilde{W}}^{{\text {s}}}(\omega , x)\), follows immediately from the integrability assumption (3.4) for the MET, as one can observe with the proof of [32, Chapter V, Theorem 2.2]. Hence, the fact that the sets \(W^{{\text {f}}}(\omega , x)\) form a uniquely determined family, \(C^{k-1}\) in x, of \(C^k\) \((m-1)\)-dimensional submanifolds of the stable manifold \(\mathcal W^s(\omega )\) can be deduced in the various situations as follows:

Assumption (i) is derived from [9, Theorem 4.7 and Theorem 9.1], where the \(W^{{\text {f}}}(\omega , x)\) are global stable manifolds. Assumption (ii) is derived from [23, Lemma 3.9], showing that the conditions for the local stable manifold theorem [36, Theorem 5.1] are satisfied, i.e. \(W^{{\text {f}}}(\omega , x)\) is a \(C^{k}\) submanifold of \(\mathbb {R}^m\) of dimension \(m-1\), at least within a neighbourhood \({\mathcal {U}}\) of x. Furthermore, it is obvious from the assumptions that condition (4.11) is satisfied and, hence, assumption (iii) is treated similarly to assumption (ii). Assumption (iv) is covered by [30, Theorem 2.4].

This leaves to prove the foliation property in all these cases: the proof that

$$\begin{aligned} {\mathcal {W}}^s(\omega ) = \cup _{x \in A(\omega )} W^{{\text {f}}}(\omega , x), \end{aligned}$$

can be deduced in direct analogy to the proof of [30, Proposition 9 (iv)]. The fact that the union is disjoint can be seen as follows: assume there is a \(y \in W^{{\text {f}}}(\omega , x) \cap W^{{\text {f}}}(\omega , x')\) for \(x\ne x'\). Since \(A(\omega )\) is an invariant hyperbolic limit cycle and \(x,x' \in A(\omega )\), we have that \(d(\varphi (t, \omega ,x), \varphi (t, \omega , x')) \ge \delta > 0\) for all \(t \ge 0\). Hence, we obtain by definition of \(W^{{\text {f}}}\) and the triangle inequality that

$$\begin{aligned} 1 \le \frac{d(\varphi (t, \omega ,x), \varphi (t, \omega , y)) + d(\varphi (t, \omega ,y), \varphi (t, \omega , x'))}{d(\varphi (t, \omega ,x), \varphi (t, \omega , x'))} \rightarrow 0, \end{aligned}$$

which is a contradiction (see proof of [30, Proposition 9 (iii)] for a similar argument). \(\quad \square \)

Remark 7

  1. (i)

    One could also try to extend Theorem 2 (ii) to the situation with arbitrary diffusion coefficients satisfying (4.9) instead of only additive noise. To show this, first notice that under the assumptions on \(\sigma \) the drift \({\hat{b}}= b + b_0\) with the Itô-Stratonovich conversion term

    $$\begin{aligned} b_0 := \frac{1}{2} \sum _{i=1}^n \sum _{j=1}^m \sigma _i^j \frac{\partial }{\partial x_j} \sigma _i \end{aligned}$$

    still satisfies assumptions (4.12) and (4.13). Due to the mild behaviour (4.9) of the diffusion coefficients, one could then try to carry out estimates analogous to those in [23, Lemma 3.9] to deduce that condition (4.11) is satisfied. Since we are mainly interested in the local behavior, we refrain from conducting such estimates here, but point out that this would be an interesting general extension.

  2. (ii)

    Consider the example Eq. (3.11) (and by that Eq. (3.10)): the drift b is polynomial such that condition (4.13) is satisfied and we have

    $$\begin{aligned} \langle b(x) -b(y), x-y\rangle&= \Vert x- y \Vert ^2 - \Vert x\Vert ^4 - \Vert y\Vert ^4 + \langle x,y\rangle (\Vert x\Vert ^2 + \Vert y\Vert ^2) \nonumber \\&= \Vert x- y \Vert ^2 - \Vert x\Vert ^4 - \Vert y\Vert ^4 + \frac{1}{2}(\Vert x\Vert ^2 + \Vert y\Vert ^2)^2 \nonumber \\&\quad - \frac{1}{2}(\Vert x\Vert ^2 + \Vert y\Vert ^2)\Vert x- y \Vert ^2 \nonumber \\&= \left( 1- \frac{1}{2}(\Vert x\Vert ^2 + \Vert y\Vert ^2)\right) \Vert x- y \Vert ^2 - \frac{1}{2}(\Vert x\Vert ^2 - \Vert y\Vert ^2)^2 \nonumber \\&\le \Vert x- y \Vert ^2. \end{aligned}$$
    (4.15)

    Hence, also condition (4.12) is satisfied. Furthermore, the unique stationary distribution \(\rho \) has a density

    $$\begin{aligned} p(x,y) = \frac{1}{Z} \left( x^2 + y^2\right) ^{\frac{1}{\sigma ^2}-1} \exp \left( - \frac{x^2 + y^2}{\sigma ^2} \right) , \end{aligned}$$
    (4.16)

    solving the stationary Fokker-Planck equation. Hence, also condition (4.14) is fulfilled. Since the noise term is linear, we obviously have \(\sum _{i=1}^n \Vert \sigma _i\Vert _{k,\delta } < \infty \) for all \(k \ge 2, \delta \in (0,1]\). Hence, we could deduce the assertions of Theorem 2 if we had proven the extension as discussed in (i).

    However, for our purposes, this is not necessary: we additionally have, using the same transformation as in estimate (4.15), that for \(R = \sqrt{3}\) and \(\Vert x\Vert , \Vert y\Vert > R\)

    $$\begin{aligned} \langle b(x) -b(y), x-y\rangle&\le \left( 1- \frac{1}{2}(\Vert x\Vert ^2 + \Vert y\Vert ^2)\right) \Vert x- y \Vert ^2 \\&\le - \kappa \Vert x- y \Vert ^2, \end{aligned}$$

    where \(\kappa = R^2/2 -1\). Now choosing a smooth cut-off of \(\sigma \), say \({\tilde{\sigma }}\), such that \(\sigma = {\tilde{\sigma }}\) on \(B_{R^*}(0)\) for some large \(R^* > R\) and \( {\tilde{\sigma }} \equiv 0\) on \(\mathbb {R}^m {\setminus } B_{R^*+1}(0)\), we obtain a stationary density \({\tilde{p}}\) with \({\tilde{p}} = p/{\tilde{Z}} \) on \(B_{R^*}(0)\), where \({\tilde{Z}} > 0\) is a normalization constant, and \( {\tilde{p}} \equiv 0\) on \(\mathbb {R}^m {\setminus } B_{R^*+1}(0)\). Hence, we can apply Theorem 2 (iii). In particular, note that this construction allows, when \({\tilde{\sigma }}\) is small enough, for a transformation into the random ODE (3.19) with sufficiently small bounded noise such that Proposition 3 and, by that, Theorem 2 (iv) can be applied. This procedure is, of course, independent of the particular form of Eq. (3.11) and can be used for any SDE around a deterministic limit cycle when the transformation into the random ODE (3.19) is possible (which is always the case for additive and linear multiplicative noise).

Given (4.2), we further assume that there exists a Crauel random periodic solution \((\psi ,T)\) such that \(\psi (t, \omega ) \in A(\omega )\) for all \(\omega \in \Omega \) and \(t \ge 0\), as for example seen in Proposition 2. Then we can investigate the behaviour of

$$\begin{aligned} W^{{\text {f}}}(\omega , \psi (0, \omega ))=\left\{ y\in \mathbb {R}^m:\lim _{t\rightarrow +\infty }{\text {d}}(\varphi (t, \omega , y), \psi (t, \theta _t \omega ))=0\right\} . \end{aligned}$$

If, as in Proposition 2, each \(x \in A(\omega )\) can be identified as \(\psi _x(0, \omega )\) for some Crauel random periodic solution, then \(T_x(\omega )\) is the period we can associate with \(W^{{\text {f}}}(\omega , \psi _x(0, \omega ))\). We summarize this insight in the following definition:

Definition 9

(Period of random forward isochron). Let \((\psi ,T)\) be a Crauel random periodic solution for the RDS \((\theta , \varphi )\) such that \(\psi (t, \omega ) \in A(\omega )\) for all \(\omega \in \Omega \) and \(t \ge 0\), where A is an attracting random cycle or chaotic random attractor. Then we call \(T(\omega )\) the period of the corresponding random forward isochron \(W^{{\text {f}}}(\omega , \psi (0, \omega ))\) for all \(\omega \in \Omega \).

The natural question arises whether

$$\begin{aligned} \varphi (T_x(\omega ), \omega , {\mathcal {N}}_x(\omega )) \subset {\mathcal {N}}_x(\theta _{T_x(\omega )} \omega ) \end{aligned}$$

holds for some measurable family \({\mathcal {N}}_x(\omega )\) of cross-sections, in particular, whether we can identify \({\mathcal {N}}_x(\omega )= W^{{\text {f}}}(\omega , \psi _x(0, \omega ))\). What we observe is the following:

Proposition 4

Let \((\theta , \varphi )\) be a random dynamical system with random attractor A and the isochrons \(W^{{\text {f}}}(\omega , x)\) as given in (4.1) such that each \(x \in A(\omega )\) can be identified with \(\psi _x(0, \omega )\) for some Crauel random periodic solution \((\psi _x, T_x)\). Then we have

$$\begin{aligned} \varphi (T_x(\omega ), \omega , W^{{\text {f}}}(\omega , \psi _x(0, \omega ))) \subset W^{{\text {f}}}(\theta _{T_x(\omega )}\omega , \psi _x(T_x(\omega ), \theta _{T_x(\omega )}\omega )). \end{aligned}$$
(4.17)

Proof

Let \(y \in W^{{\text {f}}}(\omega , \psi _x(0, \omega ))\). Then

$$\begin{aligned}&\lim _{t\rightarrow +\infty }{\text {d}}(\varphi (t, \theta _{T_x(\omega )}\omega , \varphi (T_x(\omega ), \omega , y)),\psi _x(t + T_x(\omega ), \theta _{t+T_x(\omega )} \omega )) \\&= \lim _{t\rightarrow +\infty }{\text {d}}(\varphi (T_x(\omega ) + t, \omega , y),\psi _x(t + T_x(\omega ), \theta _{t+T_x(\omega )} \omega )) \\&= \lim _{s\rightarrow +\infty }{\text {d}}(\varphi (s, \omega , y),\psi _x(s, \theta _{s} \omega )) =0. \end{aligned}$$

Hence, the statement follows directly. \(\quad \square \)

4.1.4 Pullback isochrons

In analogy to the different notions of a random attractor, one could also consider defining fiberwise isochrons for random dynamical systems in a pullback sense, as follows:

Again assume there is a Crauel random periodic solution \((\psi ,T)\) on an attracting random cycle A (or chaotic random attractor A). Then the random pullback isochrons could only be defined as

$$\begin{aligned} W^{{\text {p}}}(\omega , \psi (0, \omega ))&:= \left\{ y\in \mathbb {R}^m:\lim _{t\rightarrow +\infty }{\text {d}}(\varphi (t, \theta _{-t} \omega , y),\varphi (t, \theta _{-t} \omega , \psi (0, \theta _{-t} \omega )))=0\right\} \nonumber \\&= \left\{ y\in \mathbb {R}^m:\lim _{t\rightarrow +\infty }{\text {d}}(\varphi (t, \theta _{-t} \omega , y),\psi (t, \omega ))=0\right\} , \end{aligned}$$
(4.18)

for almost all \(\omega \in \Omega \).

In contrast to the random forward isochron \(W^{{\text {f}}}(\omega , \psi (0, \omega ))\), the set \(W^{{\text {p}}}(\omega , \psi (0, \omega ))\) is not given as a stable set for the point \(\psi (0, \omega )\) but as the set of points whose pullback trajectories converge to the trajectories starting in \(\psi (0, \theta _{-t} \omega )\) as \(t \rightarrow \infty \). Hence, such a definition cannot coincide with a stable manifold for a given point on a given fiber of the random attractor and, in particular, there does not seem to be a way to connect the set \(W^{{\text {p}}}(\omega , \psi (0, \omega ))\) to the set \(W^{{\text {f}}}(\omega , \psi (0, \omega ))\). In other words, it is not clear what geometric interpretation such a random pullback isochron could have and it is apparent that the definition in forward time, i.e. Definition 8, yields the most immediately meaningful object in this context.

4.2 The random isochron map

For the following, recall the stochastic differential Eq. (3.2) as

$$\begin{aligned} \mathrm {d}X_t = b(X_t) \mathrm {d}t + \sum _{i=1}^n \sigma _i(X_t) \circ \mathrm {d}W_t^i\,,\qquad X_0=x, \end{aligned}$$
(4.19)

where \(W_t^i\) are independent real-valued Brownian motions, b is a \(C^k\) vector field, \(k \ge 1\), and \(\sigma _1, \dots , \sigma _n\) are \(C^{k+1}\) vector fields satisfying bounded growth conditions, such as (global) Lipschitz continuity, in all derivatives, to guarantee the existence of a (global) random dynamical system with cocycle \(\varphi \) and derivative cocycle \(\mathrm {D}\varphi \).

Example 5

As before, the main examples we have in mind are two-dimensional. In particular, we may consider the corresponding stochastic differential equation in polar coordinates \((\vartheta , r) \in [0, 2 \pi ) \times [0, \infty )\)

$$\begin{aligned} \mathrm {d}\vartheta&= f_1(\vartheta , r) \, \mathrm {d}t + \sigma _1 g_1(\vartheta , r) \circ \, \mathrm {d}W_t^1,\\ \mathrm {d}r&= f_2(\vartheta , r) \, \mathrm {d}t + \sigma _2 g_2(\vartheta , r) \circ \, \mathrm {d}W_t^2\,. \end{aligned}$$
(4.20)

As in Examples 3 and 4, we usually consider a situation in which, in the deterministic case \(\sigma _1 = \sigma _2 = 0\), there is an attracting limit cycle at \(r=r^* >0\).
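
For concreteness, a Stratonovich equation such as (4.20) can be simulated with a predictor-corrector (Euler-Heun) scheme. The following minimal sketch uses illustrative coefficients of the type in Examples 3 and 4, with \(f_1 = 1 + (r^2 - 1)\) and \(f_2 = r - r^3\); none of the function names or parameter values below come from the text.

```python
import numpy as np

def euler_heun_step(s, dt, dW, f, g):
    """Predictor-corrector step for the Stratonovich SDE dX = f(X) dt + g(X) o dW."""
    pred = s + f(s) * dt + g(s) * dW
    return s + f(s) * dt + 0.5 * (g(s) + g(pred)) * dW

# illustrative coefficients for (4.20), state s = (theta, r); r* = 1 without noise
f = lambda s: np.array([1.0 + (s[1]**2 - 1.0), s[1] - s[1]**3])  # (f1, f2)
g = lambda s: np.array([0.2, 0.2 * s[1]])                        # sigma_i g_i, diagonal

rng = np.random.default_rng(3)
s, dt = np.array([0.0, 1.2]), 1e-3
for _ in range(int(50 / dt)):
    s = euler_heun_step(s, dt, rng.normal(scale=np.sqrt(dt), size=2), f, g)
    s[0] %= 2 * np.pi
print(s)  # the radius component should hover near r* = 1
```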

From Theorem 1 recall the isochron map \(\xi : W^{\text {s}}(\gamma ) \rightarrow \mathbb {R} \mod \tau _{\gamma } \) for a limit cycle \(\gamma \) with period \(\tau _{\gamma }\), which is given for every \(y \in W^{\text {s}}(\gamma )\) as the unique t such that \(y \in W^{\text {s}}(\gamma (t))\), i.e.

$$\begin{aligned} \lim _{s\rightarrow +\infty }{\text {d}}(\Phi (\gamma (\xi (y)),s),\Phi (y,s))=\lim _{s\rightarrow +\infty }{\text {d}}(\gamma (s + \xi (y)),\Phi (y,s))=0 \,. \end{aligned}$$

Analogously, we now introduce the following new notion for the random case; recall that for a CRPS \((\psi , T)\) we have, in particular, that \(\psi (0, \omega ) = \psi (T(\omega ), \omega )\) for all \(\omega \in \Omega \).

Theorem 3

Consider the SDE (4.19) such that the induced RDS has a random attractor A with CRPS \((\psi , T)\) and parametrization \(A(\omega ) = \{ \psi (t+s, \omega )\,: \, t \in [0, T(\theta _{-s} \omega ))\}\) for all \(\omega \in \Omega \), \(s \in \mathbb {R}\). Then

  1. 1.

    There exists a random isochron map \({\tilde{\phi }}\), i.e. a measurable function \({\tilde{\phi }}: \mathbb {R}^m \times \Omega \times \mathbb {R} \rightarrow \mathbb {R}\), \(C^k\) in the phase space variable, such that in a neighbourhood \({\mathcal {U}}(\omega )\) of \( A(\omega )\) we have

    $$\begin{aligned} {\tilde{\phi }}(\cdot , \omega , s): {\mathcal {U}}(\omega )\rightarrow [s, s + T(\theta _{-s} \omega )) \end{aligned}$$

    and for each \(y \in {\mathcal {U}}(\omega )\), \(s \in \mathbb {R}\)

    $$\begin{aligned}&\lim _{t\rightarrow +\infty }{\text {d}}(\varphi (t, \omega , y),\varphi (t, \omega , \psi ({\tilde{\phi }} (y, \omega ,s) , \omega ))) \nonumber \\&= \lim _{t\rightarrow +\infty }{\text {d}}(\varphi (t, \omega , y),\psi (t + \tilde{\phi }(y, \omega ,s), \theta _t \omega ))=0, \end{aligned}$$
    (4.21)
  2. 2.

    For any \(\omega \in \Omega \), \(s \in \mathbb {R}\) and \(t \in [0, T(\theta _{-s} \omega ))\), the random \({\tilde{\phi }}\)-isochron \({\tilde{I}}(\omega , \psi (t+s, \omega ),s)\) given by

    $$\begin{aligned} {\tilde{I}}(\omega ,\psi (t+s, \omega ),s) = \{ y \in \mathcal {U}(\omega ) : {\tilde{\phi }}( y, \omega ,s) = {\tilde{\phi }}(\psi (t+s, \omega ), \omega ,s)\} \end{aligned}$$
    (4.22)

    satisfies

    $$\begin{aligned} {\tilde{I}}(\omega , \psi (t+s, \omega ),s) = W^{{\text {f}}}( \omega , \psi (t+s, \omega )). \end{aligned}$$
    (4.23)
  3. 3.

    For any \(\omega \in \Omega \), \(s \in \mathbb {R}\) and \(y \in \mathcal {U}(\omega )\)

    $$\begin{aligned} {\tilde{\phi }} ( \varphi (s, \omega , y), \theta _s \omega , s) = {\tilde{\phi }}(y, \omega ,0) +s\,, \end{aligned}$$
    (4.24)

    or, equivalently,

    $$\begin{aligned} \frac{\mathrm {d}}{\mathrm {d}s} {\tilde{\phi }} ( \varphi (s, \omega , y), \theta _s \omega , s) = 1. \end{aligned}$$
    (4.25)

Proof

Since \(A(\omega )\) is a random attractor, we have that for given \(y \in \mathcal {U}(\omega )\) there is an \(x \in A(\omega )\) such that \(y \in W^{{\text {f}}}(\omega , x)\). Due to the assumptions, for any \(s \in \mathbb {R}\) there is a \(t_x \in [0, T(\theta _{-s}\omega ))\) such that \(x = \psi (s + t_x, \omega )\). Then \({\tilde{\phi }}(y, \omega ,s) := t_x + s\) satisfies the required properties, where measurability follows from the measurability of T and, writing \(t=t_x\), differentiability from

$$\begin{aligned} {\tilde{I}}(\omega , \psi (t+s, \omega ),s) = W^{{\text {f}}}(\omega , \psi (t+s, \omega )), \end{aligned}$$

which can be deduced as follows: we have \(y \in {\tilde{I}}(\omega , \psi (t+s, \omega ),s) \) if and only if \({\tilde{\phi }}(y, \omega ,s) = {\tilde{\phi }}(\psi (t+s, \omega ), \omega ,s) =t + s\) which, according to Eq. (4.21), is equivalent to

$$\begin{aligned} \lim _{r\rightarrow +\infty }{\text {d}}(\varphi (r, \omega , y),\varphi (r, \omega , \psi (t +s, \omega )))=0, \end{aligned}$$

which is the case if and only if \(y \in W^{{\text {f}}}(\omega , \psi (t+s, \omega ))\).

It remains to show the third point: firstly, we derive from the invariance of the stable manifolds and equality (4.23) that

$$\begin{aligned}&\varphi (s, \omega , \cdot ) {\tilde{I}} (\omega , \psi (t, \omega ), 0) = \varphi (s, \omega , \cdot ) W^{{\text {f}}}(\omega , \psi (t, \omega ))\nonumber \\&= W^{{\text {f}}}(\theta _s \omega , \psi (t+s, \theta _s \omega )) = \tilde{I} (\theta _s \omega , \psi (t+s, \theta _s \omega ), s)\,. \end{aligned}$$
(4.26)

This means that for \(x \in \mathcal {U}(\theta _s \omega )\) we have that \(x = \varphi (s,\omega ,y)\) for some \(y \in \mathcal {U}( \omega )\) with \({\tilde{\phi }} (y, \omega ,0) = t \in [0, T(\omega ))\) if and only if

$$\begin{aligned} {\tilde{\phi }} (x, \theta _s \omega , s) = {\tilde{\phi }} (\psi (t+s, \theta _s \omega ), \theta _s \omega , s) = t+s \,. \end{aligned}$$

Hence, we obtain Eq. (4.24), or equivalently Eq. (4.25), for any \(y \in \mathcal {U}( \omega )\). \(\quad \square \)

Note that, due to the time dependence, we always give the image of the random isochron map \({\tilde{\phi }}(\cdot , \omega , s)\) as an interval \([s, s + T(\theta _{-s} \omega ))\), in contrast to the deterministic case, where the values of the isochron map \(\xi \) lie in \(\mathbb {R} \mod \tau _{\gamma }\), which can be identified with \([0, \tau _{\gamma })\), for the fixed period \(\tau _{\gamma }\) (see Proposition 1). We add a few further remarks to the last theorem in order to highlight its coherence with the above and the analogy to the deterministic case.

Remark 8

  1. (i)

    As seen in the proof of Theorem 3, note that for all \(s \in \mathbb {R}\), \( t \in [0, T(\theta _{-s} \omega ))\)

    $$\begin{aligned} {\tilde{\phi }}(\psi (t+s, \omega ), \omega , s) = t + s, \end{aligned}$$
    (4.27)

    and, in particular,

    $$\begin{aligned} {\tilde{\phi }}( \varphi (t, \theta _{-t} \omega , \psi (0,\theta _{-t} \omega )), \theta _t (\theta _{-t} \omega ), 0) = {\tilde{\phi }} (\psi (t, \omega ), \omega , 0) = t \quad \text {for all } \,t \in [0, T(\omega )). \end{aligned}$$
    (4.28)

    Additionally, observe that the parametrization of the random attractor in Theorem 3 is generally possible when there is a CRPS; with Definition 6 we have for all \(s \ge 0\) that \(\psi (s + T(\omega ), \theta _s \omega ) = \psi (s, \theta _s \omega )\) and, hence, we can also consider

    $$\begin{aligned} A(\theta _s \omega ) = \{ \psi (t + s, \theta _s \omega )\,: \, t \in [0, T(\omega ))\}, \end{aligned}$$

    for which we find, for \(t \in [0, T(\omega ))\),

    $$\begin{aligned} {\tilde{\phi }}(\cdot , \theta _s \omega , s): {\mathcal {U}}(\theta _s \omega )\rightarrow [s, s+ T(\omega )), \ {\tilde{\phi }} (\psi (t+s, \theta _s \omega ), \theta _s \omega , s) = t+s. \end{aligned}$$
  2. (ii)

    From Proposition 1 recall that the isochron map \(\xi : W^{\text {s}}(\gamma ) \rightarrow \mathbb {R} \mod \tau _{\gamma } \) for a deterministic limit cycle \(\gamma \) satisfies Eq. (2.7)

    $$\begin{aligned} \frac{{\text {d}}}{{\text {d}}t} \xi (\Phi (y,t)) = 1 \ \text { for all } t \ge 0, \, y \in W^{\text {s}}(\gamma )\,. \end{aligned}$$

    Equation (4.25) is the analogous equation for the random dynamical system.

  3. (iii)

    In certain cases, it may be convenient to anchor the random \(\tilde{\phi }\)-isochrons at the deterministic limit cycle to compare with the averaging approaches from the physics literature later on. Consider for example the SDE (4.20) with attracting limit cycle at \(r=r^* >0\) in the deterministic case \(\sigma _1 = \sigma _2 = 0\). We can then write the random isochron map \({\tilde{\phi }}: [0, 2 \pi ) \times [0, \infty ) \times \Omega \times \mathbb {R} \rightarrow \mathbb {R}\) such that in a neighbourhood \({\mathcal {U}}\) of the circle \(\{r=r^*\}\) we have

    $$\begin{aligned} {\tilde{\phi }}(\cdot , \omega , s): {\mathcal {U}} \rightarrow [s, s + T(\theta _{-s} \omega )) \end{aligned}$$

    and, based on Eqs. (4.25) and (4.24),

    $$\begin{aligned} {\tilde{\phi }} ( \varphi (s, \omega , (\vartheta _0, r_0)), \theta _s \omega , s) = \tilde{\phi }((\vartheta _0, r_0), \omega ,0) +s\,, \end{aligned}$$
    (4.29)

    or equivalently

    $$\begin{aligned} \mathrm {d}\, {\tilde{\phi }}(\varphi (s, \omega ,(\vartheta _0, r_0)), \theta _s \omega , s) = 1 \, \mathrm {d}s\,, \end{aligned}$$
    (4.30)

    for any \((\vartheta _0, r_0) \in {\mathcal {U}}\), \(s \in \mathbb {R}\) and \(\omega \in \Omega \). For any \(\vartheta \in [0, 2 \pi )\), \(s \in \mathbb {R}\) and \(\omega \in \Omega \), we can write \(\tilde{I}_{\vartheta }(\omega ,s)\) for the level set

    $$\begin{aligned} {\tilde{I}}_{\vartheta }(\omega ,s) = \{ ({\tilde{\vartheta }}, {\tilde{r}}) \in {\mathcal {U}}: {\tilde{\phi }}( ({\tilde{\vartheta }}, {\tilde{r}}), \omega , s) = \tilde{\phi }((\vartheta , r^*), \omega , s)\}. \end{aligned}$$
    (4.31)

Following Theorem 3, we can simply define isochrons for any point \( x \in {\mathcal {U}}(\omega )\) by setting

$$\begin{aligned} {\tilde{I}}(\omega ,x,s) := {\tilde{I}}(\omega ,\psi (t+s, \omega ),s) \ \text { for } x \in {\tilde{I}}(\omega ,\psi (t+s, \omega ),s), t \in [0, T(\theta _{-s} \omega ))\,. \end{aligned}$$
(4.32)

We can then show the invariance of \({\tilde{I}}(\omega , x, 0)\) under the RDS, similarly to the invariance property (4.3) of the forward isochrons, extending property (4.26) to any \(x \in \mathcal U(\omega )\).

Proposition 5

The random \({\tilde{\phi }}\)-isochrons \({\tilde{I}}(\omega , x, 0)\) for \(x \in \mathcal {U}(\omega )\), where \(\mathcal {U}(\omega )\) is an attracting neighbourhood of \( A(\omega )\), are forward-invariant under the RDS cocycle, i.e.

$$\begin{aligned} \varphi (s,\omega ) {\tilde{I}}(\omega , x, 0) \subset {\tilde{I}} (\theta _s \omega , \varphi (s,\omega ,x), s) \ \text { for almost all } \omega \in \Omega \text { and all } x \in {\mathcal {U}}(\omega ), s \ge 0. \end{aligned}$$
(4.33)

Proof

Let \(y \in \varphi (s,\omega ) {\tilde{I}}( \omega , x, 0)\). This means that there is a \(z \in \mathbb {R}^m\) such that \(y = \varphi (s, \omega ,z)\) and \({\tilde{\phi }} (z, \omega , 0) = {\tilde{\phi }} (x, \omega , 0)\). We obtain from Eq. (4.24) that

$$\begin{aligned} {\tilde{\phi }} (y, \theta _s \omega , s)&= {\tilde{\phi }} (\varphi (s, \omega ,z), \theta _s \omega , s) \\&= {\tilde{\phi }}(z, \omega , 0) + s \\&={\tilde{\phi }}(x, \omega , 0) + s\\&= {\tilde{\phi }} (\varphi (s, \omega ,x), \theta _s \omega , s). \end{aligned}$$

Hence, we have \(y \in {\tilde{I}} (\theta _s \omega , \varphi (s,\omega ,x), s)\) and therefore

$$\begin{aligned} \varphi (s,\omega ) {\tilde{I}}(\omega , x,0) \subset {\tilde{I}} (\theta _s \omega , \varphi (s,\omega ,x),s). \end{aligned}$$

This finishes the proof. \(\quad \square \)

4.3 Stochastic isochrons via mean return time and random isochrons

One main approach to define stochastic isochrons in the physics literature is due to Schwabedal and Pikovsky [41], who introduce isochrons (or isophase surfaces) for noisy systems as sections \(W^\mathbb {E}(x)\) such that the mean first return time to the same section \(W^\mathbb {E}(x)\) is a constant \({\bar{T}}\), equaling the average oscillation period. Note that such an object is not a priori well-defined, as it is unclear what “return” means here, i.e., return to what? The paper does not establish these objects rigorously but gives a numerical algorithm, tested successfully on several examples. According to the algorithm, a deterministic starting section \({\mathcal {N}}\) is adjusted according to the mean return time, i.e., points are moved according to the mismatch between their return time and the mean period for \({\mathcal {N}}\), and this procedure is repeated until all points have the same mean return time.

4.3.1 The modified Andronov–Vitt–Pontryagin formula in [13]

Cao, Lindner and Thomas [13] have made this approach rigorous by using a modified version of the Andronov–Vitt–Pontryagin formula for the mean first passage time (MFPT) \(\tau _D\) on a bounded domain D through its boundary \(\partial D\). In more detail (cf. [40, Chapter 4.4]), the associated boundary value problem for \(\mathcal L\) denoting the generator of the process, also called backward Kolmogorov operator, is given by

$$\begin{aligned} \mathcal {L} u(x) = -1 \quad \text {for all } \,x \in D, \quad u(x) = 0 \quad \text {for all } \,x \in \partial D, \end{aligned}$$
(4.34)

which is solved by

$$\begin{aligned} u(x) = \mathbb {E}[\tau _D | x(0) =x]. \end{aligned}$$

The problem in our case is that if we consider a domain whose absorbing boundary in the \(\vartheta \)-direction is a line \({\tilde{l}} := \{({\tilde{\vartheta }}({\tilde{r}}), {\tilde{r}}) \,:\, R_1 \le {\tilde{r}} \le R_2 \} \), where \({\tilde{\vartheta }}\) is a smooth function, the stochastic motion might not perform a full rotation before reaching this boundary line. In particular, the mean return time for trajectories starting on \({\tilde{l}}\) will be zero. To circumvent this problem, Cao et al. unwrap the phase by considering infinitely many copies of \({\tilde{l}}\) on the extended domain \( \mathbb {R} \times [R_1, R_2]\). For some \((\vartheta ,r)\) with \(\vartheta< 2 \pi < \tilde{\vartheta } (r)\), the mean first passage time \(T(\vartheta , r)\) is then calculated via the Andronov–Vitt–Pontryagin formula with periodic-plus-jump boundary condition in the \(\vartheta \)-direction and reflecting boundary condition in the r-direction.

In more detail, the process solving Eq. (4.20), or its Itô version respectively, with strongly elliptic generator \({\mathcal {L}}\) and its adjoint \({\mathcal {L}}^*\), the forward Kolmogorov operator, is assumed to have a unique stationary density \(\rho \) on \(\Omega = [0, 2 \pi ) \times [R_1, R_2]\) solving the stationary Fokker-Planck equation

$$\begin{aligned} \mathcal {L}^* \rho = 0\,, \end{aligned}$$

with reflecting (Neumann) boundary conditions at \(r \in \{R_1, R_2\}\) and periodic boundaries \(\rho (0,r) = \rho (2 \pi ,r)\) for all \(r \in [R_1, R_2]\). For model (4.20), the stationary probability current \(J_{\rho }\) reads, for \(j=1,2\),

$$\begin{aligned} J_{\rho , j} (\vartheta , r) = \left( f_j(\vartheta , r) + \frac{1}{2}g_j(\vartheta , r) \partial _j g_j(\vartheta , r) \right) \rho (\vartheta , r) - \frac{1}{2} \partial _j \left( g_j^2(\vartheta ,r) \rho (\vartheta , r) \right) . \end{aligned}$$

Furthermore, for a \(C^1\)-function \(\gamma : [R_1, R_2] \rightarrow [0, 2 \pi ]\) the graph \(C_{\gamma }\) (cf. \({\tilde{l}}\) above) separates the domain \(\Omega _{\text {ext}}= \mathbb {R} \times [R_1, R_2]\) into a left and right connected component, with unit normal vector n(r) oriented to the right. It is then assumed that the mean rightward probability flux through \(C_{\gamma }\) is positive, which means that

$$\begin{aligned} {\bar{J}}_{\rho } := \int _{R_1}^{R_2} n^{\top }(r) J_{\rho } (\gamma (r),r) \, \mathrm {d}r > 0. \end{aligned}$$
(4.35)

The mean period of the oscillator is then given as

$$\begin{aligned} {\bar{T}} = \frac{1}{{\bar{J}}_{\rho }}\,. \end{aligned}$$
(4.36)

The modified Andronov–Vitt–Pontryagin formula is then given by the following PDE, with reflecting and jump-periodic boundary conditions

$$\begin{aligned} {\mathcal {L}} T&= -1, \quad \text {on } \Omega , \nonumber \\ g_2^2(\vartheta , r) \partial _2 T(\vartheta , r)&= 0, \quad \forall \vartheta \in \mathbb {R}, r \in \{R_1, R_2\}\,\nonumber \\ T(\vartheta , r) - T(\vartheta + 2 \pi , r)&= {\bar{T}}, \quad \forall (\vartheta , r) \in \Omega _{\text {ext}}. \end{aligned}$$
(4.37)

In fact, the last condition can be weakened to

$$\begin{aligned} T(0, r) - T(2 \pi , r) = {\bar{T}}, \quad \forall r \in [R_1,R_2]. \end{aligned}$$
(4.38)

Under the discussed assumptions, it is then shown in [13, Theorem 3.1] that the equation has a solution \(T(\vartheta , r)\) on \(\Omega _{\text {ext}}\) and, hence by restriction, on \(\Omega \), which is unique up to an additive constant. The level sets of \(T(\vartheta ,r)\) are then supposed to be the stochastic isochrons \(W^\mathbb {E}((\vartheta ,r))\) with mean return time \({\bar{T}}\) and associated isophase (up to some constant \(\bar{\Theta }_0\))

$$\begin{aligned} {\bar{\Theta }}(\vartheta , r) = - T(\vartheta , r) \frac{2 \pi }{{\bar{T}}}\,, \end{aligned}$$

which therefore satisfies

$$\begin{aligned} \mathcal {L} \bar{\Theta }= \frac{2 \pi }{{\bar{T}}}. \end{aligned}$$
(4.39)
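
To illustrate the structure of the boundary value problem (4.37), the following sketch solves a one-dimensional caricature, a phase-only SDE \(\mathrm {d}\vartheta = f(\vartheta )\,\mathrm {d}t + \sigma \,\mathrm {d}W_t\) with constant noise, by finite differences: it computes the stationary density, the flux (4.35) and \({\bar{T}} = 1/{\bar{J}}_{\rho }\) as in (4.36), and then the jump-periodic problem via the substitution \(T = u - {\bar{T}}\vartheta /(2\pi )\) with u periodic. This is our own reduced example, not the two-dimensional computation of [13]; the drift, noise level and grid are illustrative choices.

```python
import numpy as np

# 1D caricature of (4.37): phase SDE  d theta = f(theta) dt + sigma dW  on [0, 2*pi)
N, sigma = 400, 0.5
h = 2 * np.pi / N
theta = np.arange(N) * h
f = 1.0 + 0.5 * np.sin(theta)  # positive drift => positive mean flux (4.35)
D = 0.5 * sigma**2

# stationary Fokker-Planck: -(f rho)' + D rho'' = 0 with periodic boundaries
A = np.zeros((N, N))
for i in range(N):
    ip, im = (i + 1) % N, (i - 1) % N
    A[i, ip] += -f[ip] / (2 * h) + D / h**2
    A[i, im] += f[im] / (2 * h) + D / h**2
    A[i, i] += -2 * D / h**2
w, V = np.linalg.eigh(A.T @ A)       # null vector of A gives rho
rho = np.abs(V[:, 0])
rho /= rho.sum() * h                 # normalize to a probability density

# constant probability current J and mean period Tbar = 1/J, cf. (4.35)-(4.36)
J = f[0] * rho[0] - D * (rho[1] - rho[-1]) / (2 * h)
Tbar = 1.0 / J

# L T = -1 with jump-periodic condition T(theta) - T(theta + 2*pi) = Tbar:
# substitute T = u - Tbar*theta/(2*pi) with u periodic
B = np.zeros((N, N))
rhs = -1.0 + f * Tbar / (2 * np.pi)
for i in range(N):
    ip, im = (i + 1) % N, (i - 1) % N
    B[i, ip] += f[i] / (2 * h) + D / h**2
    B[i, im] += -f[i] / (2 * h) + D / h**2
    B[i, i] += -2 * D / h**2
B[0, :] = 0.0
B[0, 0] = 1.0
rhs[0] = 0.0                         # pin u(0) = 0 (T is unique up to a constant)
u = np.linalg.solve(B, rhs)
T = u - Tbar * theta / (2 * np.pi)   # the level sets of T are the isophases
print("mean period Tbar =", Tbar)
```

In the constant-drift case \(f \equiv \omega _0\), this reproduces the explicit solution \(T(\vartheta ) = -\vartheta /\omega _0\) up to a constant, with \({\bar{T}} = 2\pi /\omega _0\), which serves as a sanity check of the discretization.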

4.3.2 Relation to random isochrons

Recall from Definition 9 that, for a CRPS \((\psi ,T)\), the random period \(T(\omega )\) corresponds to the random forward isochron \(W^{{\text {f}}}(\omega , \psi (0, \omega ))\) for all \(\omega \in \Omega \). Hence, we can define the expected period as

$$\begin{aligned} {\bar{T}}_{\text {RDS}} := \mathbb {E} [T(\cdot )]\,, \end{aligned}$$
(4.40)

where the index RDS indicates the random dynamical systems perspective. In the following, we discuss how \({\bar{T}}_{\text {RDS}}\) is related to \({\bar{T}}\) and the isochron function \({\bar{\Theta }}\) (4.39).

4.3.3 Expectation of random period

Similarly to Sect. 4.3.1, consider Eq. (4.20) in an annulus \({\mathcal {R}}\) given by \(0 \le R_1 \le r \le R_2 \le \infty \), i.e. including the full space case \({\mathcal {R}} = [0, 2 \pi ) \times [0, \infty )\). Consider the slightly modified version of the PDE system (4.37)

$$\begin{aligned} {\mathcal {L}} u&= -1, \quad \text {on } (-2 \pi , 2 \pi ) \times (R_1, R_2) , \nonumber \\ u(\pm 2 \pi , r) - u(0, r)&= {\bar{T}}, \quad \forall r \in (R_1,R_2)\,, \end{aligned}$$
(4.41)

where for the case \(R_1 > 0, R_2 < \infty \) one can take again Neumann boundary conditions

$$\begin{aligned} g_2^2(\vartheta , r) \partial _2 u(\vartheta , r) = 0, \quad \forall \vartheta \in \mathbb {R}, r \in \{R_1, R_2\}. \end{aligned}$$

Then we can formulate the following observation.

Proposition 6

Assume that system (4.20) has a CRPS \((\psi ,T)\), fixing \(\psi (0, \omega ) \in \{0\} \times (R_1, R_2)\) for \(0 \le R_1 < R_2 \le \infty \), where the RDS and its attracting random cycle are supported within \([0, 2 \pi ) \times (R_1, R_2)\) (see Theorem 2 and Remark 7). Then we obtain that

  1. (a)

    The expectation of the random period is given by

    $$\begin{aligned} \mathbb {E} [T(\cdot )] = \mathbb {E} [u(\psi (- T(\cdot ),\theta _{-T(\cdot )}(\cdot )))] - \mathbb {E} [u(\psi (0,\cdot ))], \end{aligned}$$
    (4.42)

    where u solves Eq. (4.41),

  2. (b)

    And, in particular, if the radial components of \(\psi (0,\cdot )\) and \(\psi (- T(\cdot ),\theta _{-T(\cdot )}(\cdot ))\) are identically distributed on \((R_1, R_2)\), we have

    $$\begin{aligned} {\bar{T}} = {\bar{T}}_{\text {RDS}} = \mathbb {E} [T(\cdot )]\,. \end{aligned}$$

Proof

As we have seen in the proof of Proposition 2, the period \(T(\omega )\) has to satisfy for system (4.20)

$$\begin{aligned} T(\omega ) = \inf \left\{ t > 0: \left| \int _{-t}^{0} f_1(\psi (s, \theta _s \omega )) \mathrm {d}s + \sigma _1 \int _{-t}^{0} g_1(\psi (s,\theta _s \omega )) \circ \, \mathrm {d}W_s^1(\omega ) \right| = 2 \pi \right\} \,. \end{aligned}$$

Hence, using Dynkin’s equation for the solution u of the boundary value problem (4.41), we obtain

$$\begin{aligned} \mathbb {E}[u(\psi (0,\cdot ))]&= \mathbb {E}[u(\psi (-T(\cdot ), \theta _{-T(\cdot )}(\cdot )))] + \mathbb {E}\left[ \int _{-T(\omega )}^0 {\mathcal {L}} u(\psi (s, \theta _s \cdot ))\, \mathrm {d}s \right] \\&= \mathbb {E}[u(\psi (-T(\cdot ), \theta _{-T(\cdot )}(\cdot )))] - \mathbb {E}[T(\cdot )], \end{aligned}$$

which shows claim (a).

Claim (b) follows straightforwardly, inserting (4.41) into (4.42). \(\quad \square \)

Note that this result is consistent with the basic Example 2, where we have \(T(\omega ) = {\bar{T}}\) for all \(\omega \in \Omega \), since in this case \(\psi (0,\cdot )\) and \(\psi (- T(\cdot ),\theta _{-T(\cdot )}(\cdot ))\) are both distributed according to the stationary radial solution \(r^*(\omega )\). Additionally, note that u is the isochron function via mean return time, as discussed in Sect. 4.3.1.

4.3.4 Expectation of isochron function

Furthermore, we want to give an alternative derivation to the one in Sect. 4.3.1 of an isochron function \(\bar{\phi }: \mathcal {R} \rightarrow \mathbb {R}\), yielding the sections \(W^\mathbb {E}((\vartheta ,r))\) with fixed mean return time, given as the level sets

$$\begin{aligned} W^\mathbb {E}((\vartheta ,r)) = \{ ({\tilde{\vartheta }},{\tilde{r}}) \in \mathcal {R}\,:\, {\bar{\phi }}((\tilde{\vartheta },{\tilde{r}})) = {\bar{\phi }}((\vartheta ,r)) \}. \end{aligned}$$
(4.43)

In more detail, we try to find the function \({\bar{\phi }}\) via an expected version of Eqs. (4.29) and (4.30). We fix \((\vartheta _0, r_0) \in \mathcal R\) and require that the function \({\bar{\phi }}\) satisfies along solutions \((\vartheta (t), r(t))\) of the SDE (4.20) the equality (cf. Eq. (2.7) in the deterministic case)

$$\begin{aligned} \mathbb {E} \left[ \mathrm {d}\, {\bar{\phi }}(\vartheta (t),r(t)) | (\vartheta (0),r(0)) = (\vartheta _0, r_0)\right] = 1 \, \mathrm {d}t\,. \end{aligned}$$
(4.44)

By this, we can show the following result:

Proposition 7

There exist \({\bar{\phi }}: \mathcal {R} \rightarrow \mathbb {R}\) and a period \({\bar{T}} > 0\) with

$$\begin{aligned} \mathbb {E} \left[ {\bar{\phi }}(\vartheta (t),r(t)) | (\vartheta (0),r(0)) = (\vartheta _0, r_0)\right] = \bar{\phi }(\vartheta (0), r(0)) + t \mod \bar{T}\,. \end{aligned}$$
(4.45)

This \({\bar{T}}\) is the expected return time to the isochron \(W^\mathbb {E}((\vartheta _0,r_0))\) which is the level set of \(\bar{\phi }(\vartheta _0, r_0)\).

In particular, the function \({\bar{\phi }}\) can be identified with the solution \({\bar{\Theta }}\) of Eq. (4.39).

Proof

Using the chain rule of Stratonovich calculus and inserting (4.20), Eq. (4.44) can be rewritten as

$$\begin{aligned} 1 \, \mathrm {d}t&= \mathbb {E} \left[ \frac{\mathrm {d}}{\mathrm {d}\vartheta }\, {\bar{\phi }}(\vartheta (t),r(t)) \, \mathrm {d}\vartheta + \frac{\mathrm {d}}{\mathrm {d}r}\, {\bar{\phi }}(\vartheta (t),r(t)) \, \mathrm {d}r \bigg | (\vartheta (0),r(0)) = (\vartheta _0, r_0)\right] \\&= \mathbb {E} \bigg [ \frac{\mathrm {d}}{\mathrm {d}\vartheta }\, {\bar{\phi }}(\vartheta (t),r(t)) \left( f_1(\vartheta (t), r(t)) \, \mathrm {d}t + \sigma _1 g_1(\vartheta (t), r(t)) \circ \, \mathrm {d}W_t^1 \right) \\&\quad +\,\frac{\mathrm {d}}{\mathrm {d}r}\, {\bar{\phi }}(\vartheta (t),r(t)) \left( f_2(\vartheta (t), r(t)) \, \mathrm {d}t + \sigma _2 g_2(\vartheta (t), r(t)) \circ \, \mathrm {d}W_t^2 \right) \bigg | (\vartheta (0),r(0)) = (\vartheta _0, r_0)\bigg ] \,, \end{aligned}$$

where the boundary condition in angular direction is

$$\begin{aligned} {\bar{\phi }} (2 \pi , r) ={\bar{\phi }} (0, r) \mod \bar{T}, \end{aligned}$$
(4.46)

for all \(R_1 \le r \le R_2\), fixing

$$\begin{aligned} {\bar{\phi }} (0, r^*) = 0\,, \end{aligned}$$

and

$$\begin{aligned} {\bar{T}} = {\bar{\phi }}(2 \pi , r^*)\,. \end{aligned}$$

In the radial direction, if \(0< R_1< R_2 < \infty \), one can choose reflecting boundary conditions as in Sect. 4.3.1.

Writing time t as an index, transforming the Stratonovich noise terms into Itô noise terms and using the fact that the Itô integrals have zero expectation, leads to the equation

$$\begin{aligned} 1&= \mathbb {E} \bigg [ \left( f_1(\vartheta _t, r_t) + \frac{1}{2}\sigma _1^2 g_1(\vartheta _t, r_t) \frac{\partial }{\partial \vartheta }g_1(\vartheta _t, r_t), f_2(\vartheta _t, r_t) + \frac{1}{2}\sigma _2^2 g_2(\vartheta _t, r_t) \frac{\partial }{\partial r}g_2(\vartheta _t, r_t) \right) \cdot \nabla {\bar{\phi }} (\vartheta _t, r_t) \nonumber \\&\quad + \frac{1}{2} \sigma _1^2 g_1^2(\vartheta _t,r_t) \frac{\partial ^2}{\partial \vartheta ^2} {\bar{\phi }}(\vartheta _t,r_t) + \frac{1}{2} \sigma _2^2 g_2^2(\vartheta _t,r_t) \frac{\partial ^2}{\partial r^2} {\bar{\phi }}(\vartheta _t,r_t) \bigg | (\vartheta (0),r(0)) = (\vartheta _0, r_0)\bigg ] \nonumber \\&= \mathbb {E} \bigg [ {\mathcal {L}} {\bar{\phi }}(\vartheta _t,r_t)\bigg | (\vartheta (0),r(0)) = (\vartheta _0, r_0)\bigg ]\,, \end{aligned}$$
(4.47)

where \({\mathcal {L}}\) denotes the backward Kolmogorov operator associated with the SDE (4.20). In particular, a solution is given by the stationary version

$$\begin{aligned} {\mathcal {L}} {\bar{\phi }}(\vartheta ,r) = 1, \end{aligned}$$
(4.48)

with boundary condition (4.46). Note that, up to the change of sign \({\bar{\phi }} \rightarrow - {\bar{\phi }}\), Eq. (4.48) is Dynkin's equation, and that Eq. (4.39) is equivalent to Eq. (4.48) with boundary condition (4.46) such that \({\bar{\phi }}\) is taken as a function from the domain \(\Omega \) to \(\mathbb {R} \mod \bar{T}\). Hence, the two approaches, one starting from (4.44) and the other considering the MFPT, lead to the same outcome regarding the stochastic isochrons \(W^\mathbb {E}((\vartheta ,r))\). \(\quad \square \)

We exemplify this derivation of an isochron function \({\bar{\phi }}\) with the fundamental Example 3:

Example 6

Recall Example 3 with Eq. (3.12), i.e. in its most general form,

$$\begin{aligned} \mathrm {d}\vartheta&= h(r) \, \mathrm {d}t + {\tilde{h}}(r) \circ \, \mathrm {d}W_t^2,\\ \mathrm {d}r&= (r - r^3) \, \mathrm {d}t + \sigma r \circ \, \mathrm {d}W_t^1\,, \end{aligned}$$

choosing \(h(r) = \kappa + (r^2 -1) \), \(\kappa \ge 1\), similarly to [41, Example (1)], and \({\tilde{h}}\) some arbitrary smooth and bounded function. Note that \(r^*=1\) for this case and that there is a stationary density p for the radial process which has the form

$$\begin{aligned} p(r) = \frac{1}{Z} r^{\frac{2}{\sigma ^2} -1} {\text {e}}^{-\frac{r^2}{\sigma ^2}} , \end{aligned}$$

where \(Z > 0\) is a normalization constant. One can then additionally observe that \(\mathbb {E}_{p} [r^2] = 1\) for all \(\sigma \ge 0\), and, hence, \(\mathbb {E}_{p} [h(r)] = \kappa .\)
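
Indeed, for \(\sigma > 0\) this moment identity is a short Gamma-function computation, using \(\int _0^\infty r^{a-1} {\text {e}}^{-r^2/\sigma ^2} \, \mathrm {d}r = \frac{1}{2}\sigma ^a \Gamma (a/2)\):

$$\begin{aligned} \mathbb {E}_{p} [r^2] = \frac{\int _0^\infty r^{\frac{2}{\sigma ^2}+1} {\text {e}}^{-\frac{r^2}{\sigma ^2}} \, \mathrm {d}r}{\int _0^\infty r^{\frac{2}{\sigma ^2}-1} {\text {e}}^{-\frac{r^2}{\sigma ^2}} \, \mathrm {d}r} = \frac{\frac{1}{2}\sigma ^{\frac{2}{\sigma ^2}+2} \, \Gamma \left( \frac{1}{\sigma ^2}+1\right) }{\frac{1}{2}\sigma ^{\frac{2}{\sigma ^2}} \, \Gamma \left( \frac{1}{\sigma ^2}\right) } = \sigma ^2 \cdot \frac{1}{\sigma ^2} = 1\,, \end{aligned}$$

while the case \(\sigma = 0\) is the deterministic limit with \(r^* = 1\).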

It is easy to see that

$$\begin{aligned} {\hat{\phi }} (\vartheta , r) = \frac{1}{\kappa }(\vartheta + \ln r) \end{aligned}$$

solves (4.48) such that (4.45) is actually satisfied with \({\bar{T}} = \frac{2\pi }{\kappa }\). In fact, we have (up to some constant \({\bar{\phi }}_0\))

$$\begin{aligned} {\bar{\phi }} (\vartheta , r) = \frac{1}{\kappa }(\vartheta + \ln r) \mod \bar{T}, \end{aligned}$$

which, in this case, coincides with the deterministic isochron map.
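
As a numerical cross-check of (4.45) in this example, one can take \({\tilde{h}}\) constant (so that the Stratonovich noise terms in \(\mathrm {d}{\hat{\phi }}\) have zero mean) and verify by Monte Carlo that the unwrapped mean of \({\hat{\phi }}\) grows at unit speed; the scheme, parameters and variable names below are our own illustrative choices, with the radial equation simulated in its Itô form (Stratonovich correction \(+\frac{1}{2}\sigma ^2 r\)).

```python
import numpy as np

kappa, htilde, sigma, dt, t_end, n_paths = 2.0, 0.5, 0.4, 1e-3, 3.0, 4000
rng = np.random.default_rng(5)
theta = np.full(n_paths, 0.3)   # unwrapped phase, not reduced mod 2*pi
r = np.ones(n_paths)
for _ in range(int(t_end / dt)):
    dW1 = rng.normal(scale=np.sqrt(dt), size=n_paths)
    dW2 = rng.normal(scale=np.sqrt(dt), size=n_paths)
    theta += (kappa + r**2 - 1.0) * dt + htilde * dW2            # h(r) = kappa + r^2 - 1
    r += (r - r**3 + 0.5 * sigma**2 * r) * dt + sigma * r * dW1  # Ito form of dr
phi_hat = (theta + np.log(r)) / kappa
# (4.45), unwrapped: the mean should equal phi_hat(0) + t up to Monte Carlo error
print(phi_hat.mean(), (0.3 + np.log(1.0)) / kappa + t_end)
```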

Similarly to \({\bar{T}}_{\text {RDS}} := \mathbb {E} [T(\cdot )]\), we can introduce for the associated random isochron map \({\tilde{\phi }}\) the expected quantity

$$\begin{aligned} {\bar{\phi }}_{\text {RDS}}(x) = \mathbb {E} [ {\tilde{\phi }} (x, \cdot , 0 )], \end{aligned}$$
(4.49)

for fixed \(x \in \mathbb {R}^m\), where \({\tilde{\phi }}\) is the random isochron map from Sect. 4.2. It remains to clarify how the isochron function \({\bar{\phi }}\), or equivalently \(\bar{\Theta }\) (4.39), may be related to \(\bar{\phi }_{\text {RDS}}\), assuming the existence of a CRPS \((\psi , T)\) as for Example 3 (see Proposition 2). We give a brief discussion of a possible approach to this question in the Appendix, leaving a more thorough investigation as future work.

5 Conclusion

We have introduced a new perspective on the problem of stochastic isochronicity, by considering random isochrons as random stable manifolds anchored at attracting random cycles with random periodic solutions. We have further characterized these random isochrons as level sets of a time-dependent random isochron map. Precisely this time-dependence of the random dynamical system, i.e., its non-autonomous nature, makes it difficult to specify the concrete relation to the definitions of stochastic isochrons given by fixed expected mean return times for whom we have given an alternative derivation of the isochron function \({\bar{\phi }}\) with return time \({\bar{T}}\). We suggest an extended investigation of their relationship to the expected quantity \({\bar{\phi }}_{\text {RDS}}\) as an intriguing problem for future work. Additionally, it would be interesting to study the relation between stochastic isochronicity via eigenfunctions of the backward Kolmogorov operator [44] and random Koopman operators (see [18]), extending the eigenfunction approach from the deterministic setting to the random dynamical systems case.