Abstract
For an attracting periodic orbit (limit cycle) of a deterministic dynamical system, one defines the isochron for each point of the orbit as the cross-section with fixed return time under the flow. Equivalently, isochrons can be characterized as stable manifolds foliating neighborhoods of the limit cycle or as level sets of an isochron map. In recent years, there has been a lively discussion in the mathematical physics community on how to define isochrons for stochastic oscillations, i.e. limit cycles or heteroclinic cycles exposed to stochastic noise. The main discussion has concerned an approach finding stochastic isochrons as sections of equal expected return times versus the idea of considering eigenfunctions of the backward Kolmogorov operator. We discuss the problem in the framework of random dynamical systems and introduce a new rigorous definition of stochastic isochrons as random stable manifolds for random periodic solutions with noise-dependent period. This allows us to establish a random version of isochron maps whose level sets coincide with the random stable manifolds. Finally, we discuss links between the random dynamical systems interpretation and the equal expected return time approach via averaged quantities.
1 Introduction
Periodic behavior is ubiquitous in the natural sciences and in engineering. Accordingly, many mathematical models of dynamical systems, usually given by ordinary differential equations (ODEs), are characterized by the existence of attracting periodic orbits, also called limit cycles. Interpreting the limit cycle as a “clock” for the system, one can ask which parts of the state space can be associated with which “time” on the clock.
It turns out that one can generally divide the state space into sections, called isochrons, intersecting the asymptotically stable periodic orbit. Trajectories starting on a particular isochron all converge to the trajectory starting at the intersection of the isochron and the limit cycle. Hence, each point in the basin of attraction of the limit cycle can be allocated a time on the periodic orbit, by belonging to a particular isochron. Isochrons can then be characterized as the sections intersecting the limit cycle, such that the return time under the flow to the same section always equals the period of the attracting orbit and, hence, the return time is the same for all isochrons. The analysis of ODEs provides additional characterizations of isochrons, involving, for example, an isochron map or eigenfunctions of associated operators.
Clearly, mathematical models are simplifications which often leave out parameters and details of the described physical or biological system. Hence, a large number of degrees of freedom is inherent in the modeling. The introduction of random noise is often a suitable way to integrate such unspecified components into the model such that, for example, an ODE becomes a stochastic differential equation (SDE). Examples of stochastic oscillators/oscillations can be found in a wide variety of applications such as neuroscience [4, 12, 31, 43], ecology [37, 39], biomechanics [25, 35], geoscience [6, 33], among many others. In addition, stochastic oscillations have recently become a very active research topic in the rigorous theory of stochastic dynamical systems with small noise [3, 7, 8, 26].
Lately, there has been a lively discussion [34, 45] in the mathematical physics community about how to extend the definition and analysis of isochrons to the stochastic setting. As pointed out above, there are several different characterizations in the deterministic case inspiring analogous stochastic approaches. So far, there are two main approaches to define stochastic isochrons in the physics literature, both focused on stochastic differential equations. One approach, due to Thomas and Lindner [44], focuses on eigenfunctions of the associated infinitesimal generator \({\mathcal {L}}\). The other one is due to Schwabedal and Pikovsky [41], who introduce isochrons for noisy systems as sections \(W^\mathbb {E}(x)\) with the mean first return time to the same section \(W^\mathbb {E}(x)\) being a constant \({\bar{T}}\), equaling the average oscillation period. Cao, Lindner and Thomas [13] have used the Andronov–Vitt–Pontryagin formula, involving the backward Kolmogorov operator \({\mathcal {L}}\), with appropriate boundary conditions to establish the isochron functions for \(W^\mathbb {E}(x)\) more rigorously.
These approaches have in common that they focus on the “macroscopic” or “coarse-grained” level by considering averaged objects and associated operators. We complement the existing suggestions by a new approach within the theory of random dynamical systems (see e.g. [1]) which has proven to give a framework for translating many deterministic dynamical concepts into the stochastic context. A random dynamical system in this sense consists of a model of the time-dependent noise formalized as a dynamical system \(\theta \) on the probability space, and a model of the dynamics on the state space formalized as a cocycle \(\varphi \) over \(\theta \). This point of view considers the asymptotic behaviour of typical trajectories. As trajectories of random dynamical systems depend on the noise realization, any convergent behaviour of individual trajectories to a fixed attractor cannot be expected. The forward in time evolution of sets under the same noise realization yields the random forward attractor A which is a time-dependent object with fibers \(A(\theta _t \omega )\). An alternative viewpoint is to consider, for a fixed noise realization \(\omega \in \Omega \), the flow of a set of initial conditions from time \(t=-T\) to a fixed endpoint in time, say \(t=0\), and then take the (pullback) limit \(T\rightarrow \infty \). If trajectories of initial conditions converge under this procedure to fibers \(\tilde{A}(\omega )\) of some random set \({\tilde{A}}\), then this set is called a random pullback attractor.
In this paper, we will consider mainly situations where the random dynamical system is induced by an SDE and there exists a random (forward and/or pullback) attractor A which is topologically equivalent to a cycle for each noise realization, i.e. an attracting random cycle, whose existence can be made generic in a suitably localized setup around a deterministic limit cycle. We will extend the definition of a random periodic solution \(\psi \) [46] living on such a random attractor to situations where the period is random, giving a pair \((\psi , T)\). Isochrons can then be defined as random stable manifolds \(W^{{\text {f}}}(\omega , x)\) for points x on the attracting random cycle \(A(\omega )\), in particular for random periodic solutions. We usually consider situations with a spectrum of exponential asymptotic growth rates, the Lyapunov exponents \(\lambda _1> \lambda _2> \dots > \lambda _p\), which allows us to transform the idea of hyperbolicity to the random context. Additionally, we can introduce a time-dependent random isochron map \({\tilde{\phi }}\), such that the isochrons are level sets of such a map. Hence, on a pathwise level, we achieve a complete generalization of deterministic to random isochronicity, which is the key contribution of this work. The main results can be summarized in the following theorem:
Theorem A
Assume the random dynamical system \((\theta , \varphi )\) on \(\mathbb {R}^m\) has a hyperbolic random limit cycle, supporting a random periodic solution with possibly noise-dependent period. Then, under appropriate assumptions on smoothness and boundedness,

1.
The random forward isochrons are smooth invariant random manifolds which foliate the stable neighbourhood of the random limit cycle on each noise fibre,

2.
There exists a smooth and measurable (nonautonomous) random isochron map \({\tilde{\phi }}\) whose level sets are the random isochrons and whose time derivative along the random flow is constant.
The remainder of the paper is structured as follows. Section 2 gives an introduction to the deterministic theory of isochrons, summarizing the main properties that we can then transform into the random dynamical systems setting. The latter is discussed in Sect. 3, where we elucidate the notions of Lyapunov exponents, random attractors and, specifically, random limit cycles and their existence. Section 4 establishes the two main statements, contained in Theorem A: in Sect. 4.1, we show Theorem 2, summarizing different scenarios in which random isochrons are random stable manifolds foliating the neighbourhoods of the random limit cycle. In Sect. 4.2, we prove Theorem 3, generalizing characteristic properties of the isochron map to the random case. We conclude Sect. 4 with an elaboration on the relationship between expected quantities of the RDS approach and the definition of stochastic isochrons via mean first return times, i.e., one of the main physics approaches. Additionally, the paper contains a brief conclusion with outlook, and an appendix with some background on random dynamical systems.
2 The Deterministic Case
The basic facts about isochrons have been established in [27]. Here we summarize some facts restricted to the state space \({\mathcal {X}}= \mathbb {R}^m\) but the theory easily lifts to ordinary differential equations (ODEs) on smooth manifolds \({\mathcal {M}}={\mathcal {X}}\). Consider an ODE
where f is \(C^k\) for \(k\ge 1\). Let \(\Phi (x_0,t)=x(t)\) be the flow associated to (2.1) and suppose \(\gamma =\{\gamma (t)\}_{t\in [0,\tau _\gamma ]}\) is a hyperbolic periodic orbit with minimal period \(\tau _\gamma >0\). A cross-section \({\mathcal {N}}\subset \mathbb {R}^m\) at \(x\in \gamma \) is a submanifold such that \(x\in {\mathcal {N}}\), \(\bar{{\mathcal {N}}}\cap \gamma =\{x\}\), and
i.e. the submanifold \({\mathcal {N}}\) and the orbit \(\gamma \) intersect transversally.
Let \(g:{\mathcal {N}}\rightarrow {\mathcal {N}}\) be the Poincaré map defined by the first return of \(y\in {\mathcal {N}}\) under the flow \(\Phi \) to \({\mathcal {N}}\) (see Fig. 1); locally near any point \(x\in \gamma \) the map g is well-defined. For simplicity (and looking ahead to the noisy case) let us assume that \(\gamma \) is a stable hyperbolic periodic orbit, i.e. the eigenvalues \(\mu _i\) of \({\text {D}}g(x)\), also called characteristic multipliers, satisfy \(\mu _1 = 1\) and \(|\mu _2|, \dots , |\mu _m| < 1\), counting multiplicities. The numbers
are called the characteristic exponents (for more background on the stability of linear nonautonomous systems and associated Floquet theory see e.g. [14, Chapter 2.4]). We call such a stable hyperbolic periodic orbit a stable (hyperbolic) limit cycle since there is a neighbourhood \({\mathcal {U}}\) of \(\gamma \) such that for \(y \in {\mathcal {U}}\) we have \({\text {d}}(\Phi (y,t),\gamma ) \rightarrow 0\), as \(t \rightarrow \infty \), where \({\text {d}}\) is the Euclidean metric on \(\mathbb {R}^m\). In particular, note that there is a lower bound on the speed of exponential convergence to the limit cycle, given by
We give a definition of isochrons as stable sets and then establish their equivalence to level sets of a specific map. We further find these level sets to be cross-sections to \(\gamma \) for which the time of first return is identical to the period \(\tau _{\gamma }\), explaining the name isochrons.
Definition 1
The isochron W(x) of a point on a hyperbolic limit cycle \(x\in \gamma \) is given by its stable set
In particular, due to hyperbolicity, we have that for every \( {\tilde{\lambda }} \in (0, \lambda )\)
It is by now classical that stable sets are manifolds and for each \(x\in \gamma \), we get a stable manifold \(W^{\text {s}}(x)\) diffeomorphic to \(\mathbb {R}^{m-1}\), precisely coinciding with the isochron W(x). We can foliate a neighbourhood \({\mathcal {U}}\) of \(\gamma \) by the manifolds W(x) and these manifolds are permuted by the flow since
We summarize these crucial observations in the following theorem.
Theorem 1
(Theorem A in [27], Theorem 2.1 in [26]). Consider the flow \(\Phi : \mathbb {R}^m \times \mathbb {R} \rightarrow \mathbb {R}^m\) for the ODE (2.1) with hyperbolic stable limit cycle \(\gamma =\{\gamma (t)\}_{t\in [0,\tau _\gamma ]}\). Then the following holds:

1.
For each \(x \in \gamma \), the isochron W(x) is an \((m-1)\)-dimensional manifold transverse to \(\gamma \), in particular it is a cross-section of \(\gamma \), of the same regularity as the vector field f in the ODE (2.1) (i.e. \(C^k\) if f is \(C^k\)).

2.
The stable manifold \(W^{\text {s}}(\gamma )\) contains a full neighbourhood of \(\gamma \) and can be written as
$$\begin{aligned} W^{\text {s}}(\gamma )=\bigcup _{x\in \gamma } W(x), \end{aligned}$$ where the union of isochrons is disjoint.

3.
The map \(\xi : W^{\text {s}}(\gamma ) \rightarrow \mathbb {R} \mod \tau _{\gamma } \), also called the isochron map, is given for every \(y \in W^{\text {s}}(\gamma )\) as the unique t such that \(y \in W(\gamma (t))\), i.e.
$$\begin{aligned} \lim _{s\rightarrow +\infty }{\text {d}}(\Phi (\gamma (\xi (y)),s),\Phi (y,s))=\lim _{s\rightarrow +\infty }{\text {d}}(\gamma (s + \xi (y)),\Phi (y,s))=0 \,, \end{aligned}$$ (2.5) and \(\xi \) is also \(C^k\).
Using the properties established in Theorem 1, we can derive the following wellknown characterizations of the isochrons W(x), \(x \in \gamma \), and of the isochron map \(\xi \).
Proposition 1
Assume that we are in the situation of Theorem 1. We have that

1.
For each \(x \in \gamma \), the isochron W(x) is precisely the level set of \(\xi (x)\), i.e.
$$\begin{aligned} W(x)=\left\{ y\in W^{\text {s}}(\gamma ): \ \xi (y) = \xi (x)\right\} \,, \end{aligned}$$(2.6) 
2.
The isochron map \(\xi : W^{\text {s}}(\gamma ) \rightarrow \mathbb {R} \mod \tau _{\gamma } \) satisfies
$$\begin{aligned} \frac{{\text {d}}}{{\text {d}}t} \xi (\Phi (y,t)) = 1 \ \text { for all } t \ge 0, \, y \in W^{\text {s}}(\gamma )\,, \end{aligned}$$(2.7) 
3.
The isochron W(x) is the crosssection \({\mathcal {N}}_x\) at x such that
$$\begin{aligned} \Phi ({\mathcal {N}}_x,\tau _\gamma )\subseteq {\mathcal {N}}_x, \end{aligned}$$ (2.8) i.e. the cross-section on which all starting points return in the same time \(\tau _{\gamma }\).
Proof
The first statement follows from the fact that \(\gamma (\xi (x)) = x\) for all \(x \in \gamma \) and Eq. (2.5) in combination with the definition of W(x): in more detail, we have \(y \in W(x)\) if and only if \(\lim _{s \rightarrow \infty } d(\Phi (x,s), \Phi (y,s)) = 0\) which is equivalent to \(\lim _{s \rightarrow \infty } d(\gamma (s + \xi (x)), \Phi (y,s)) = 0\) which holds if and only if \(\xi (x) = \xi (y)\).
The second statement can be deduced from the invariance property \(\Phi (\cdot , t) W(x) = W(\Phi (x,t))\) for any \(x \in \gamma \) since it implies for \(y \in W(x)\), i.e. \(\xi (y) = \xi (x)\), that \(\xi (\Phi (y,t)) = \xi (\Phi (x,t)) = \xi (x) + t = \xi (y) + t \mod \tau _{\gamma }\), which is equivalent to the claim.
The third statement can be easily derived from the fact that for all \(y \in W^{\text {s}}(\gamma )\)
This finishes the proof.
Summarizing, we can view isochrons W(x) as stable manifolds of points on the limit cycle. The sets W(x) are uniquely defined and have codimension one. They locally foliate neighborhoods of the limit cycle. They can also be characterized and computed as level sets of a specific isochron map whose total derivative along the flow equals 1, or by looking for sections of fixed return time under the flow. In the course of this article, we will transform all the discussed properties to the random case.
Guckenheimer [27] tackles additional questions regarding the boundary of \(W^{\text {s}}(\gamma )\). These questions concern global properties of isochrons. Since we want to first understand a neighbourhood \({\mathcal {U}}\) of \(\gamma \) in the stochastic setting, we skip these problems here. With this in mind, we consider an adjustment of the main planar example in [27] which does not involve the boundary of \(W^{\text {s}}(\gamma )\). The example is simple but illuminating and already contains the main aspects of the difficulties in extending isochronicity to the stochastic context, as we will see later.
Example 1
Consider the ODE
in polar coordinates \((\vartheta ,r)\in [0,2\pi )\times (0,+\infty )\), where \(r_1 > 0\) is fixed, \(h(r)\ge K > 0\) for some constant K, and h is smooth, such that there is always the periodic orbit \(\gamma =\{r=r_1\}\). If \(h(r)\equiv 1\), then one easily checks that the isochrons of \(\gamma \) are (see Fig. 1a)
However, if we consider h such that \(h'(r_1)\ne 0\), then the isochrons bend into curves instead of being straight rays. Indeed, the periodic orbit has period \(\tau _{\gamma } = 2\pi /h(r_1)\) but the return time to the same \(\vartheta \)-coordinate changes near \(\gamma \) (see Fig. 1b).
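The effect described in Example 1 can be checked numerically. The sketch below assumes, purely for illustration, the radial dynamics \(\dot{r} = r_1 - r\) with \(r_1 = 1\) and the hypothetical choice \(h(r) = 1 + (r - r_1)\), so that \(h(r_1) = 1\) and \(h'(r_1) \ne 0\); it measures the time for the angular coordinate to advance by \(2\pi \), starting on and off the cycle.

```python
import math

# Euler integration of the polar system d(theta)/dt = h(r), dr/dt = r1 - r.
# The radial law and h are illustrative assumptions, not the (unspecified)
# radial equation of the text; they satisfy h(r1) = 1 and h'(r1) = 1 != 0.
R1 = 1.0

def h(r):
    return 1.0 + (r - R1)

def return_time(r0, dt=1e-4):
    """Time for theta to advance by 2*pi, starting at theta = 0, r = r0."""
    r, theta, t = r0, 0.0, 0.0
    while theta < 2.0 * math.pi:
        r += dt * (R1 - r)
        theta += dt * h(r)
        t += dt
    return t

tau_gamma = return_time(1.0)  # on the cycle: the period tau_gamma = 2*pi
tau_off = return_time(1.5)    # off the cycle: a strictly shorter time here
```

Since \(h > 1\) along the transient from \(r = 1.5\), the return time to the section \(\{\vartheta = 0\}\) is shorter than \(\tau _\gamma \) off the cycle, so the ray \(\{\vartheta = 0\}\) is not an isochron in this case.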
Our considerations indicate that, in order to find isochrons in the stochastic case, a first approach is to consider “stable manifolds” also for this situation. The most suitable framework for this approach turns out to be the one of random dynamical systems (RDS).
3 Stochastically Driven Limit Cycles in the Framework of Random Dynamical Systems
In the following, we develop a theory of isochrons within the framework of random dynamical systems. A continuoustime random dynamical system on a topological state space \({\mathcal {X}}\) consists of

(i)
A model of the noise on a probability space \((\Omega , \mathcal {F}, \mathbb {P})\), formalized as a measurable flow \((\theta _t)_{t \in \mathbb {R}}\) of \({\mathbb {P}}\)-preserving transformations \(\theta _t: \Omega \rightarrow \Omega \),

(ii)
A model of the dynamics on \({\mathcal {X}}\) perturbed by noise formalized as a cocycle \(\varphi \) over \(\theta \).
This setting is very helpful to understand properties of dynamical systems under the influence of stochastic noise. In technical detail, the definition of a random dynamical system is given as follows [1, Definition 1.1.2].
Definition 2
(Random dynamical system). Let \((\Omega ,{\mathcal {F}},{\mathbb {P}})\) be a probability space and \({\mathcal {X}}\) be a topological space. A random dynamical system (RDS) is a pair of mappings \((\theta , \varphi )\).

\(\bullet \) The (\({\mathcal {B}}(\mathbb {R})\otimes {\mathcal {F}}\), \({\mathcal {F}}\))-measurable mapping \(\theta : \mathbb {R}\times \Omega \rightarrow \Omega \), \((t,\omega )\mapsto \theta _t\omega \), is a metric dynamical system, i.e.

(i)
\(\theta _0={{\,\mathrm{id}\,}}\) and \(\theta _{t+s}=\theta _t\circ \theta _s\) for \(t,s\in {\mathbb {R}}\),

(ii)
\({\mathbb {P}} (A) = {\mathbb {P}}(\theta _t A)\) for all \(A\in {\mathcal {F}}\) and \(t\in {\mathbb {R}}\).


\(\bullet \) The (\(\mathcal {B}(\mathbb {R})\otimes \mathcal {F} \otimes \mathcal {B}({\mathcal {X}})\), \(\mathcal {B}({\mathcal {X}})\))-measurable mapping \(\varphi : \mathbb {R} \times \Omega \times {\mathcal {X}}\rightarrow {\mathcal {X}}, (t, \omega , x) \mapsto \varphi (t, \omega , x)\), is a cocycle over \(\theta \), i.e.
$$\begin{aligned} \varphi (0, \omega , \cdot ) = {{\,\mathrm{id}\,}}\quad \text {and} \quad \varphi (t+s, \omega , \cdot ) = \varphi (t, \theta _s \omega , \varphi (s, \omega , \cdot )) \quad \text { for all } \omega \in \Omega \text { and } t, s \in \mathbb {R}\,. \end{aligned}$$
The random dynamical system \((\theta ,\varphi )\) is called continuous if \((t, x) \mapsto \varphi (t, \omega ,x)\) is continuous for every \(\omega \in \Omega \). We still speak of a random dynamical system if its cocycle is only defined in forward time, i.e. if the mapping \(\varphi \) is only defined on \(\mathbb {R}_0^+ \times \Omega \times {\mathcal {X}}\). We will state this explicitly whenever it is the case.
In the following, the metric dynamical system \((\theta _t)_{t \in \mathbb {R}}\) is often even ergodic, i.e. any \(A\in {\mathcal {F}}\) with \(\theta _t^{-1}A =A\) for all \(t\in {\mathbb {R}}\) satisfies \(\mathbb P(A)\in \{0,1\}\). Note that we define \(\theta \) in two-sided time whereas \(\varphi \) can be restricted to one-sided time. This is motivated by the fact that a large part of this article will deal with random dynamical systems generated by stochastic differential equations (SDEs). Hence, we are interested in random dynamical systems adapted to a suitable filtration and of white noise type (see “Appendix A.1”). In this context, we can understand \(\varphi \) as the “stochastic flow” induced by solving the corresponding SDE and \(\theta _t\) as a time shift on the canonical space \(\Omega \) of all continuous paths starting at 0, equipped with the Wiener measure.
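The cocycle property in Definition 2 can be made concrete with a toy discrete-time system (a hypothetical example, not from the text): take the noise space to be i.i.d. Gaussian sequences \((\omega _k)\), let \(\theta \) be the left shift, and let the one-step random maps \(x \mapsto x/2 + \omega _k\) generate the cocycle.

```python
import random

# Minimal discrete-time sketch of Definition 2 (hypothetical toy system):
# noise = an i.i.d. sequence (omega_k), theta = left shift on sequences,
# and the one-step maps x -> x/2 + omega_k generate the cocycle phi.
def theta(omega, m):
    """Shift the noise sequence by m steps."""
    return omega[m:]

def phi(n, omega, x):
    """Cocycle: compose the first n one-step random maps."""
    for k in range(n):
        x = 0.5 * x + omega[k]
    return x

rng = random.Random(0)
omega = [rng.gauss(0.0, 1.0) for _ in range(100)]
x0, n, m = 0.3, 7, 5
lhs = phi(n + m, omega, x0)                       # phi(t+s, omega, .)
rhs = phi(n, theta(omega, m), phi(m, omega, x0))  # phi(t, theta_s omega, phi(s, omega, .))
```

Both sides apply the same multipliers and shifts \(\omega _0, \dots , \omega _{n+m-1}\) in the same order, so the cocycle identity holds exactly.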
Additionally note that the RDS generates a skew product flow, i.e. a family of maps \((\Theta _t)_{t \in \mathbb {T}}\) from \(\Omega \times {\mathcal {X}}\) to itself such that for all \(t \in {\mathbb {T}}\) and \(\omega \in \Omega , x \in {\mathcal {X}}\)
3.1 Differentiability and Lyapunov exponents
The random dynamical system \((\theta , \varphi )\) is called \(C^k\) if \(\varphi (t, \omega , \cdot ) \in C^k\) for all \(t\in {\mathbb {T}}\) and \(\omega \in \Omega \), where again \( {\mathbb {T}} \in \{ \mathbb {R}, \mathbb {R}_0^+\}\). As in the deterministic case, let us assume that the state space is \({\mathcal {X}}= \mathbb {R}^m\) (the following can also be extended to smooth m-dimensional manifolds as in “Appendix A.1”) and that \((\theta , \varphi )\) is \(C^1\). The linearization or derivative \(\mathrm {D}\varphi (t,\omega ,x)\) of \(\varphi (t,\omega ,\cdot )\) at \(x \in \mathbb {R}^m\) is the Jacobian \(m\times m\) matrix
Differentiating the equation
on both sides and applying the chain rule to the right hand side yields
i.e. the cocycle property of the fiberwise mappings with respect to the skew product maps \((\Theta _t)_{t \in {\mathbb {T}}}\) (see Eq. (3.1)). Let us further assume that the random dynamical system possesses an invariant measure \(\mu \) (see “Appendix A.1”). This implies that \((\Theta ,\mathrm {D}\varphi )\) is a random dynamical system with linear cocycle \(\mathrm {D}\varphi \) over the metric dynamical system \((\Omega \times \mathbb {R}^m, {\mathcal {F}} \times \mathcal {B}(\mathbb {R}^m), (\Theta _t)_{t \in {\mathbb {T}}})\), see e.g. [1, Proposition 4.2.1].
The main models in this article are stochastic differential equations in Stratonovich form
where \(W_t^i\) are independent real-valued Brownian motions, b is a \(C^k\) vector field, \(k \ge 1\), and \(\sigma _1, \dots , \sigma _n\) are \(C^{k+1}\) vector fields satisfying bounded growth conditions, such as (global) Lipschitz continuity, in all derivatives to guarantee the existence of a (global) random dynamical system for \(\varphi \) and \(\mathrm {D}\varphi \). We write the equation in Stratonovich form since, where differentiation is concerned, the classical rules of calculus are preserved. We can apply the conversion formula to the Itô integral to obtain the situation of (A.1). According to [2], the derivative \(\mathrm {D}\varphi (t,\omega ,x)\) applied to an initial condition \(v_0 \in \mathbb {R}^m\) uniquely solves the variational equation given by
The hyperbolicity of such a differentiable RDS with ergodic invariant measure \(\mu \) and random cycle A is expressed via its Lyapunov spectrum, which is given by the Multiplicative Ergodic Theorem (MET) (see Theorem A.3 in “Appendix A.1”) under the integrability assumption
where \(\Vert \mathrm {D}\varphi (t, \omega , \cdot ) \Vert \) denotes the operator norm of the Jacobian as a linear operator from \({\text {T}}_x \mathbb {R}^m\) to \({\text {T}}_{\varphi (t, \omega ,x)} \mathbb {R}^m \) induced by the Euclidean norm and \( \log ^+(a) = \max \{\log (a);0\}\).
Analogously to the characteristic exponents discussed for the deterministic case in Sect. 2, the spectrum of \(p \le m\) Lyapunov exponents \(\lambda _1> \lambda _2> \dots > \lambda _p\) quantifies the asymptotic exponential separation rates of infinitesimally close trajectories.
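In the simplest case of a scalar linear cocycle with i.i.d. multipliers (a hypothetical toy example, not a system from the text), the MET reduces to Birkhoff's ergodic theorem and the single Lyapunov exponent \(\lambda = \mathbb {E}[\log a]\) can be checked numerically:

```python
import math, random

# Scalar linear cocycle D(n, omega) = a(theta^{n-1} omega) * ... * a(omega)
# with i.i.d. multipliers a_k ~ Uniform(0.5, 1.5) (a hypothetical toy model).
# The Lyapunov exponent is the a.s. limit of (1/n) log |D(n, omega)|, which
# here equals E[log a] by Birkhoff's ergodic theorem.
rng = random.Random(1)
n = 200_000
log_growth = sum(math.log(rng.uniform(0.5, 1.5)) for _ in range(n))
lyapunov_estimate = log_growth / n

# Closed form: E[log a] = (1/(a2-a1)) * integral of log(a) over [a1, a2]
a1, a2 = 0.5, 1.5
lyapunov_exact = ((a2 * math.log(a2) - a2) - (a1 * math.log(a1) - a1)) / (a2 - a1)
```

Here \(\lambda \approx -0.045 < 0\): products of multipliers that are larger than 1 half of the time can still contract exponentially, which is the scalar shadow of the hyperbolicity assumption \(\lambda _2 < 0\) used later.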
3.2 Random attractors
Let \((\theta , \varphi )\) be a white noise random dynamical system on \(\mathbb {R}^m\). (Note that the following can be formulated more generally in complete metric spaces \(({\mathcal {X}}, d)\) but that we again restrict ourselves to the Euclidean case for reasons of clarity). Due to the nonautonomous nature of the RDS, there are no fixed attractors for dissipative systems and different notions of a random attractor exist. We introduce these related but different definitions of random attractors in the following, with respect to tempered sets. Specific random attractors, attracting random cycles, will play a crucial role in the following chapters. A random variable \(R:\Omega \rightarrow \mathbb {R}\) is called tempered if
see also [1, p. 164]. A set \(D\in {\mathcal {F}}\otimes \mathcal B(\mathbb {R}^m)\) is called tempered if there exists a tempered random variable R such that
where \(B_{R(\omega )}(0)\) denotes a ball centered at zero with radius \(R(\omega )\) and \(D(\omega ):=\{x\in \mathbb {R}^m: (\omega , x)\in D\}\). D is called compact if \(D(\omega )\subset \mathbb {R}^m\) is compact for almost all \(\omega \in \Omega \). Denote by \({\mathcal {D}}\) the set of all compact tempered sets \(D\in {\mathcal {F}}\otimes {\mathcal {B}}(\mathbb {R}^m)\) and by
the Hausdorff separation or semi-distance, where d denotes again the Euclidean metric. We now define different notions of a random attractor with respect to a family of sets \({\mathcal {S}} \subset {\mathcal {D}}\), see also [28, Definition 14.3] and [17, Definition 15].
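For finite point sets, the Hausdorff semi-distance reads \({\text {dist}}(D, A) = \max _{x \in D} \min _{y \in A} d(x,y)\) and can be sketched directly; the asymmetry is what makes the attraction properties below one-sided containment statements.

```python
import math

# Hausdorff semi-distance for finite point sets in the plane:
# dist(D, A) = max over x in D of the distance from x to the set A.
# Note dist(D, A) = 0 only says D lies in (a neighbourhood of) A, not A = D.
def dist(D, A):
    return max(min(math.dist(x, y) for y in A) for x in D)

A = [(0.0, 0.0), (1.0, 0.0)]
D = [(0.0, 0.0), (0.0, 2.0)]
# dist(D, A) = 2.0 (farthest point of D from A), while dist(A, D) = 1.0
```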
Definition 3
(Random attractor). The set \(A \in {\mathcal {S}} \subset {\mathcal {D}}\) that is strictly \(\varphi \)-invariant, i.e.
is called

(i)
A random pullback attractor with respect to \({\mathcal {S}}\) if for all \(D \in {\mathcal {S}}\) we have
$$\begin{aligned} \lim _{t \rightarrow \infty } {\text {dist}} \big (\varphi (t, \theta _{-t} \omega )D(\theta _{-t}\omega ), A(\omega )\big ) = 0 \quad \text {for almost all } \,\omega \in \Omega \,, \end{aligned}$$ 
(ii)
A random forward attractor with respect to \({\mathcal {S}}\) if for all \(D \in {\mathcal {S}}\) we have
$$\begin{aligned} \lim _{t \rightarrow \infty } {\text {dist}} \big (\varphi (t, \omega )D(\omega ), A(\theta _t\omega )\big ) = 0 \quad \text {for almost all } \,\omega \in \Omega \,, \end{aligned}$$ 
(iii)
A weak random attractor if it satisfies the convergence property in (i) (or (ii)) with almost sure convergence replaced by convergence in probability,

(iv)
A (weak) random (pullback or forward) point attractor if it satisfies the corresponding properties above for \(\mathcal {S}= \{ D \subset \mathbb {R}^m \,: \, D = \{y\} \text { for some } y \in \mathbb {R}^m\}\), i.e. for single points \(y \in \mathbb {R}^m\).
Note that due to the \(\mathbb {P}\)invariance of \(\theta _t\) for all \(t \in \mathbb {R}\), it is easy to derive that weak attraction in the pullback and the forward sense are the same and, hence, the notion of a weak random attractor in Definition 3 (iii) is consistent. However, random pullback attractors and random forward attractors with almost sure convergence, as defined above, are generally not equivalent (see [38] for counterexamples). In the following, we will be careful with this distinction, yet in our main examples the random pullback attractor and random forward attractor will be the same. In this case we will simply speak of the random attractor.
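The pullback mechanism can be illustrated on the Ornstein–Uhlenbeck equation \(\mathrm {d}x = -x\,\mathrm {d}t + \mathrm {d}W_t\) (a standard toy example, not a system from the text): fixing one noise path and releasing initial conditions at ever earlier times \(-T\), their states at time 0 collapse onto the single pullback fibre \(A(\omega ) = \{\int _{-\infty }^0 e^{s}\,\mathrm {d}W_s(\omega )\}\).

```python
import random

# Pullback convergence for dx = -x dt + dW: fix one noise path on [-T_max, 0],
# start a pair of initial conditions at time -T, flow them to time 0, and
# watch the pair collapse onto the single pullback fibre as T grows.
dt, T_max = 0.01, 20.0
n_max = round(T_max / dt)
rng = random.Random(2)
dW = [rng.gauss(0.0, dt ** 0.5) for _ in range(n_max)]  # increments on [-T_max, 0]

def flow_to_zero(x, T):
    """Euler-Maruyama from time -T to time 0 along the fixed noise path."""
    start = n_max - round(T / dt)
    for k in range(start, n_max):
        x += -x * dt + dW[k]
    return x

spreads = []
for T in (1.0, 5.0, 20.0):
    xs = [flow_to_zero(x0, T) for x0 in (-5.0, 5.0)]
    spreads.append(abs(xs[1] - xs[0]))
# spreads shrink like 10 * exp(-T): the pullback limit is a single point
```

For this linear equation the noise cancels in the difference of the two solutions, so the spread contracts deterministically; the random path only determines where the single limiting point \(A(\omega )\) sits.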
Before we introduce random cycles and random periodic solutions, we add some remarks on Definition 3.
Remark 1
Note that we require that the random attractor is measurable with respect to \({\mathcal {F}}\otimes {\mathcal {B}}(\mathbb {R}^m)\), in contrast to a weaker statement often used in the literature (see also [17, Remark 4]).
Remark 2
In many cases, the family of sets \(\mathcal {S}\) is chosen to be the family of all bounded or compact (deterministic) subsets \(B \subset \mathbb {R}^m\), as for example in [23]. Note that our definition of random attractors is a generalization of this weaker definition.
3.2.1 Attracting random cycles and random periodic solutions
Consider a random dynamical system \((\theta , \varphi )\) on \(\mathbb {R}^m\). In the situation of a deterministic limit cycle, the limit cycle is the attractor for all subsets of a neighbourhood of this attractor. Analogously, we give the following definition for the random setting.
Definition 4
(Attracting Random Cycle). We call a random (forward or pullback) attractor A for \((\theta , \varphi )\), with respect to a collection of sets \(\mathcal {S}\), an attracting random cycle if for almost all \(\omega \in \Omega \) we have \(A(\omega ) \cong S^1\), i.e. every fiber is homeomorphic to the circle.
Furthermore, we need to find a stochastic analogue to the limit cycle as a periodic orbit. Firstly, we follow [46] for introducing the notion of random periodic solutions:
Definition 5
(Random periodic solution). Let \(\mathbb {T} \in \{ \mathbb {R}, \mathbb {R}_0^+ \}\). A random periodic solution is an \(\mathcal {F}\)-measurable periodic function \(\psi : \Omega \times \mathbb {T} \rightarrow \mathbb {R}^m\) of period \(T>0\) such that for all \(\omega \in \Omega \)
Note that this definition assumes that \(T\in \mathbb {R}\) does not depend on the noise realization \(\omega \). We will see the limitations of this concept in Example 3, which extends the following example.
Example 2
Similarly to [46], consider the planar stochastic differential equation
where \(\sigma \ge 0\), \(W_t\) denotes a onedimensional standard Brownian motion and the noise is of Stratonovich type. We denote the cocycle of the induced random dynamical system by \(\varphi = (\varphi _1, \varphi _2)\). Equation (3.6) can be transformed into polar coordinates \((\vartheta , r) \in [0, 2 \pi ) \times [0, \infty )\)
Therefore, in the situation without noise (\(\sigma = 0\)), the system is as in Example 1 with \(h\equiv 1\) and attracting limit cycle at radius \(r=1\). With noise switched on (\(\sigma > 0\)), Eq. (3.7) has an explicit unique solution given by
Moreover, there is a stationary solution for the radial component, satisfying \(r(t, \omega , r^*(\omega )) = r^*(\theta _t \omega )\), and given by
Furthermore, one can see from a straightforward computation that for all \((x,y) \ne (0,0)\) and almost all \(\omega \in \Omega \)
and also
Hence, the planar system (3.6) has a random attractor A in the pullback and forward sense, with respect to \(\mathcal {S} =\mathcal {D} {\setminus } \{ \{0\} \}\), where \({\mathcal {D}}\) denotes the set of all compact tempered sets \(D\in {\mathcal {F}}\otimes \mathcal B(\mathbb {R}^2)\) (see also Sect. .1), and the fibers of A are given by (see Fig. 2)
The system possesses, for any fixed \(\vartheta _0 \in [0, 2 \pi )\), the random periodic solution \(\psi \) which is defined by
Indeed, it is easy to check that \(\psi (t, \omega ) = \psi (t + 2 \pi , \omega ) \) and \( \varphi (t, \omega , \psi (t_0, \omega )) = \psi (t + t_0, \theta _t \omega )\) for all \(t, t_0 \ge 0\).
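The radial behaviour in Example 2 can be sketched numerically. Since the displayed equations are omitted here, we assume for concreteness the radial law \(\mathrm {d}r = (r - r^3)\,\mathrm {d}t + \sigma r \circ \mathrm {d}W_t\), which is consistent with \(h\equiv 1\) and a stationary radius \(r^*\); the substitution \(u = \log r\) then gives \(\mathrm {d}u = (1 - e^{2u})\,\mathrm {d}t + \sigma \,\mathrm {d}W_t\). Two radii driven by the same noise path synchronize, reflecting that the attractor fibre is a single random circle of radius \(r^*(\theta _t \omega )\).

```python
import math, random

# Assumed radial SDE (an illustrative stand-in for the omitted display):
# dr = (r - r^3) dt + sigma * r o dW_t (Stratonovich), simulated for u = log r
# via du = (1 - e^{2u}) dt + sigma dW_t. Two initial radii share ONE noise
# path and collapse onto the same random value r*(theta_t omega).
dt, sigma, n = 0.001, 0.3, 50_000
rng = random.Random(3)
u_pair = [math.log(0.2), math.log(3.0)]  # two initial radii, same noise
for _ in range(n):
    dW = rng.gauss(0.0, dt ** 0.5)
    u_pair = [u + (1.0 - math.exp(2.0 * u)) * dt + sigma * dW for u in u_pair]
r_final = [math.exp(u) for u in u_pair]
gap = abs(r_final[0] - r_final[1])  # synchronization of the radial components
```

Near the random circle the pairwise contraction rate is roughly \(2r^2\), so after time 50 the two radii are numerically indistinguishable while the common value keeps fluctuating with the noise.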
Example 3
(a) Now consider a stochastic version of Example 1 when the phase dynamics depends on the amplitude, i.e.
where the smooth function \(h: \mathbb {R}\rightarrow \mathbb {R}\) with \(h \ge K_h > 0\) is nonconstant. The random attractor A for the corresponding planar system
is exactly the same as before, as illustrated in Fig. 2. We observe for a point \( a(\omega ) : = r^*(\omega )(\cos \vartheta _0, \sin \vartheta _0) \in A(\omega )\), where \(r^*\) is the random variable defined in Eq. (3.8) and \( \vartheta _0 \in [0, 2 \pi )\), that the cocycle satisfies
There cannot be a random periodic solution in the sense of Definition 5, since noise-independent periodicity is not possible if h is non-constant.
(b) Naturally, we can also consider the case where the phase is additionally perturbed by noise, i.e.
where \(W_t = (W_t^1, W_t^2)\) is now twodimensional Brownian motion and \(h, {\tilde{h}}: \mathbb {R}\rightarrow \mathbb {R}\) are smooth functions.
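The obstruction in part (a) can be seen numerically. Assuming, for illustration, the radial law \(\mathrm {d}r = (r - r^3)\,\mathrm {d}t + \sigma r \circ \mathrm {d}W_t\) and the hypothetical choice \(h(r) = 1 + \tfrac{1}{2}\tanh (r-1) \ge \tfrac{1}{2} > 0\), the time \(T(\omega )\) needed for the phase integral of h along r(t) to advance by \(2\pi \) varies from one noise realization to the next:

```python
import math, random

# Per noise realization, measure the "period" T(omega): the time for the
# phase integral of h(r(t)) to advance by 2*pi. The radial law and h are
# illustrative assumptions (h is bounded: 1/2 < h < 3/2), simulated for
# u = log r via du = (1 - e^{2u}) dt + sigma dW_t.
dt, sigma = 0.001, 0.3

def h(r):
    return 1.0 + 0.5 * math.tanh(r - 1.0)

def one_period(seed):
    rng = random.Random(seed)
    u = 0.0                                # u = log r, start on the circle r = 1
    for _ in range(5_000):                 # burn-in towards the stationary radius
        u += (1.0 - math.exp(2.0 * u)) * dt + sigma * rng.gauss(0.0, dt ** 0.5)
    phase, T = 0.0, 0.0
    while phase < 2.0 * math.pi:
        u += (1.0 - math.exp(2.0 * u)) * dt + sigma * rng.gauss(0.0, dt ** 0.5)
        phase += h(math.exp(u)) * dt
        T += dt
    return T

periods = [one_period(seed) for seed in range(5)]  # five noise realizations
```

Every realized value lies in \((2\pi /\max h, 2\pi /\min h)\) since h is bounded away from 0 and \(\infty \), but the values differ across realizations, matching the need for a noise-dependent period.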
Example 3 motivates us to introduce the following, more general notion of a random periodic solution. The potential relevance of finding such a generalization was first discussed by Hans Crauel; hence, we have chosen the name.
Definition 6
(Crauel random periodic solution). Let \(\mathbb {T} \in \{ \mathbb {R}, \mathbb {R}_0^+ \}\). A Crauel random periodic solution (CRPS) is a pair \((\psi ,T)\) consisting of \(\mathcal {F}\)-measurable functions \(\psi : \Omega \times \mathbb {T} \rightarrow \mathbb {R}^m\) and \(T : \Omega \rightarrow \mathbb {R}\) such that for all \(\omega \in \Omega \)
In particular, note that condition (3.13) implies \(\psi (0, \omega ) = \psi (T(\omega ), \omega )\) (see Fig. 3 for further details). Furthermore, observe that the classical random periodic solution according to Definition 5 is simply a Crauel random periodic solution with constant T. We show that Definition 6 applies to system (3.10), demonstrating the suitability of this definition.
Proposition 2

(a)
The planar system associated with (3.10) has a family of Crauel random periodic solutions \((\psi _{\vartheta },T)\) which is defined for every \(\vartheta \in [0, 2 \pi )\) by
$$\begin{aligned} \psi _{\vartheta }(t, \omega ) = r^* (\omega )\left( \cos \left( \vartheta + \int _{-t}^0 h(r^*(\theta _s \omega )) \mathrm {d}s \right) , \ \sin \left( \vartheta + \int _{-t}^0 h(r^*(\theta _s \omega )) \mathrm {d}s \right) \right) \,, \end{aligned}$$(3.14) and
$$\begin{aligned} \int _{-T( \omega )}^{0} h(r^*(\theta _s \omega )) \mathrm {d}s = 2 \pi \,, \end{aligned}$$(3.15) for almost all \(\omega \in \Omega \) and all \( t \in \mathbb {R}_0^+\).

(b)
The system associated with (3.12) has a family of Crauel random periodic solutions \((\psi _{\vartheta },T)\) which is defined for every \(\vartheta \in [0, 2 \pi )\) by \(\psi _{\vartheta }\) analogously to (3.14), just adding \(\int _{-t}^{0} \tilde{h}(r^*(\theta _s \omega )) \circ \, \mathrm {d}W_s^2(\omega )\) to the angular direction, and
$$\begin{aligned} T(\omega ) = \inf \left\{ t > 0: \left| \int _{-t}^{0} h(r^*(\theta _s \omega )) \mathrm {d}s + \int _{-t}^{0} {\tilde{h}}(r^*(\theta _s \omega )) \circ \, \mathrm {d}W_s^2(\omega )\right| = 2 \pi \right\} \,, \end{aligned}$$(3.16) for almost all \(\omega \in \Omega \) and all \( t \in \mathbb {R}_0^+\).
Proof
Without loss of generality let \(\vartheta =0\).

(a)
The fact that \(T: \Omega \rightarrow \mathbb {R}\) is well defined can be seen as follows: fix \(\omega \in \Omega \) and let
$$\begin{aligned} g_{\omega }(t) = \int _{-t}^{0} h(r^*(\theta _s \omega )) \mathrm {d}s - 2 \pi . \end{aligned}$$Then \(g_{\omega }(0) < 0\) and \(g_{\omega }(2 \pi /K_h) > 0\) and, hence, the existence of \(T(\omega )\) follows from the intermediate value theorem. Moreover, we have by a change of variables that
$$\begin{aligned} 2 \pi = \int _{-T( \theta _{-t} \omega )}^{0} h(r^*(\theta _{s-t} \omega )) \mathrm {d}s =\int _{-(t + T(\theta _{-t} \omega ))}^{-t} h(r^*(\theta _s \omega )) \mathrm {d}s \,. \end{aligned}$$We use this observation to conclude that for almost all \(\omega \in \Omega \) and any \(t \ge 0\)
$$\begin{aligned} \psi (t + T(\theta _{-t} \omega ), \omega )&= r^* (\omega )\left( \cos \left( \int _{-(t + T(\theta _{-t} \omega ))}^{0} h(r^*(\theta _s \omega )) \mathrm {d}s \right) , \ \sin \left( \int _{-(t + T(\theta _{-t} \omega ))}^{0} h(r^*(\theta _s \omega )) \mathrm {d}s\right) \right) \\&= r^* (\omega )\left( \cos \left( 2 \pi + \int _{-t}^{0} h(r^*(\theta _s \omega )) \mathrm {d}s \right) , \ \sin \left( 2 \pi + \int _{-t}^{0} h(r^*(\theta _s \omega )) \mathrm {d}s\right) \right) \\&= \psi (t, \omega )\,. \end{aligned}$$Furthermore, we observe that for almost all \(\omega \in \Omega \) and \(t, t_0 \ge 0\)
$$\begin{aligned} \varphi (t, \omega ,\psi (t_0, \omega ))&= r^* (\theta _t \omega )\left( \cos \left( \int _{-t_0}^{t} h(r^*(\theta _s \omega )) \mathrm {d}s \right) , \ \sin \left( \int _{-t_0}^{t} h(r^*(\theta _s \omega )) \mathrm {d}s\right) \right) \\&= r^* (\theta _t \omega )\left( \cos \left( \int _{-t_0 - t}^{0} h(r^*(\theta _{s+t} \omega )) \mathrm {d}s \right) , \ \sin \left( \int _{-t_0 - t}^{0} h(r^*(\theta _{s+t} \omega )) \mathrm {d}s\right) \right) \\&= \psi (t + t_0, \theta _t \omega )\,. \end{aligned}$$ 
(b)
The fact that \(T: \Omega \rightarrow \mathbb {R}\) is well defined almost surely in this case follows directly from the properties of SDEs on compact intervals, in this case \([-2 \pi , 2 \pi ]\). Moreover, we have by a change of variables that
$$\begin{aligned} 2 \pi&= \left| \int _{-T( \theta _{-t} \omega )}^{0} h(r^*(\theta _{s-t} \omega )) \mathrm {d}s + \int _{-T(\theta _{-t} \omega )}^{0} {\tilde{h}}(r^*(\theta _{s-t} \omega )) \circ \, \mathrm {d}W_s^2(\theta _{-t} \omega )\right| \\&= \left| \int _{-T( \theta _{-t} \omega )}^{0} h(r^*(\theta _{s-t} \omega )) \mathrm {d}s + \int _{-T( \theta _{-t} \omega )}^{0} {\tilde{h}}(r^* (\theta _{s-t} \omega )) \circ \, \mathrm {d}W_{s-t}^2( \omega )\right| \\&= \left| \int _{-(t + T(\theta _{-t} \omega ))}^{-t} h(r^*(\theta _s \omega )) \mathrm {d}s + \int _{-(t+T( \theta _{-t} \omega ))}^{-t} {\tilde{h}}(r^*(\theta _{s} \omega )) \circ \, \mathrm {d}W_{s}^2( \omega )\right| \,. \end{aligned}$$We use this observation to conclude \(\psi (t + T(\theta _{-t} \omega ), \omega ) = \psi (t, \omega )\) as in (a). Furthermore, we observe that for almost all \(\omega \in \Omega \) and \(t, t_0 \ge 0\)
$$\begin{aligned} \int _{-t_0}^{t} {\tilde{h}}(r^*(\theta _{s} \omega )) \circ \, \mathrm {d}W_{s}^2( \omega )&= \int _{-t_0 - t}^{0} {\tilde{h}}(r^*(\theta _{s+t} \omega )) \circ \, \mathrm {d}W_{s+t}^2( \omega ) \\&= \int _{-t_0 - t}^{0} {\tilde{h}}(r^*(\theta _{s} (\theta _t \omega ))) \circ \, \mathrm {d}W_{s}^2( \theta _t \omega ) \,, \end{aligned}$$such that \(\varphi (t, \omega ,\psi (t_0, \omega )) = \psi (t + t_0, \theta _t \omega )\) follows as in (a). This finishes the proof. \(\square \)
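The intermediate-value argument for the well-definedness of \(T(\omega )\) translates directly into a bisection on the cumulative integral of h along a path of \(r^*\). The following hedged sketch approximates the path \(s \mapsto r^*(\theta _s \omega )\) by a long simulated trajectory of the Itô toy equation \(\mathrm {d}r = (r - r^3)\,\mathrm {d}t + \sigma r\,\mathrm {d}W_t\) (a modelling shortcut, not \(r^*\) itself) and uses the hypothetical \(h(r) = 1 + r^2/2\) with \(K_h = 1\).

```python
import math
import random

def radial_path(n, dt=1e-3, sigma=0.3, seed=3, burn=5000):
    """Surrogate for s -> r*(theta_s omega): one long sample path of the
    radial toy SDE after discarding a transient."""
    rng = random.Random(seed)
    r, out = 1.0, []
    for i in range(burn + n):
        r += (r - r ** 3) * dt + sigma * r * rng.gauss(0.0, math.sqrt(dt))
        if i >= burn:
            out.append(r)
    return out

def find_period(path, h, dt=1e-3):
    """Bisection for the first t with int_0^t h(r_s) ds = 2*pi; since
    h >= K_h > 0 the integral is strictly increasing, mirroring the
    intermediate-value argument in the proof."""
    cum = [0.0]
    for r in path:
        cum.append(cum[-1] + h(r) * dt)
    assert cum[-1] > 2.0 * math.pi, "path too short"
    lo, hi = 0, len(path)  # g(0) = -2*pi < 0 at the left endpoint
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if cum[mid] < 2.0 * math.pi:
            lo = mid
        else:
            hi = mid
    return hi * dt

K_h = 1.0
T = find_period(radial_path(10000), lambda r: 1.0 + 0.5 * r * r)
print(T)  # bounded above by 2*pi / K_h (up to one time step)
```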
Note that in Example 3, and hence also in the simpler subcase Example 2, it is easy to check that the Lyapunov exponents satisfy \(\lambda _1 = 0\) and \(\lambda _2 < 0\). We want to make three additional remarks on Proposition 2, also concerning Definition 6.
Remark 3
The proof of Proposition 2 shows why we require \(\psi (t+T(\theta _{-t}\omega ), \omega ) = \psi (t, \omega )\) in Definition 6 instead of choosing \(T(\omega )\) or \(T(\theta _t \omega )\) in such a formula. It is precisely the relation we obtain from Eqs. (3.14) and (3.15). Instead of Eq. (3.15), one might alternatively consider
and replace the time integral in \(\psi _{\vartheta }(t, \omega )\) (3.14) accordingly. However, it is easy to check that the invariance requirement \(\varphi (t, \omega , \psi _{\vartheta }(t_0, \omega )) = \psi _{\vartheta }(t+t_0, \theta _t \omega )\) is not satisfied in this situation. Hence, the choice of period in Definition 6 turns out to be the appropriate one for an application to Example 3 which we see as the fundamental model for extending random periodic solutions to noisedependent periods. Additionally note that, when \({\tilde{h}} \ne 0\) in Eq. (3.12), the direction of periodicity depends on the noise realization \(\omega \).
Remark 4
Note that for any \(\vartheta \in [0, 2 \pi )\) we have \(\psi _{\vartheta } (t, \omega ) \in A(\omega )\) for all \(t \ge 0\), \(\omega \in \Omega \), where A is the random attractor given in Eq. (3.9). Hence, we have established the analogous situation to the deterministic case in the sense that the attracting random cycle corresponds to a random periodic solution; see also Fig. 3.
Remark 5
One may ask what happens when \(h, {\tilde{h}}\) in Eq. (3.12) also depend on \(\vartheta \). Then there can, of course, still be a CRPS, but we do not know a priori the existence of a stationary process \(\vartheta ^*\), similar to \(r^*\), which we would need in order to write down an explicit solution such as (3.14).
We will see later in Proposition 6 that one can establish \(\mathbb {E}[T(\omega )] < \infty \), using a variant of the Andronov–Vitt–Pontryagin formula (cf. [40]).
3.2.2 Chaotic random attractors and singletons
More generally, i.e., in addition to the case with first Lyapunov exponent \(\lambda _1=0\), we want to consider the situations where \(\lambda _1 > 0\) and \(\lambda _1 < 0\) (always assuming volume contraction to an attractor, expressed by \(\sum _{j} \lambda _j < 0\)). For \(\lambda _1<0\), this typically means that the random attractor is a singleton (see, for example, [23]) and one speaks of complete synchronization. In such a situation, the dynamics on the random attractor is trivial, so there is no natural notion of isochronicity. In the case \(\lambda _1>0\), one typically speaks of a chaotic random attractor, which is not a singleton. We can illustrate these two cases by the following example, which is very similar to the previous ones.
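The sign of \(\lambda _1\) can be probed numerically. For a scalar system with additive noise, the simplest toy case of complete synchronization, the top Lyapunov exponent is the time average of \(f'(x)\) along a trajectory. The following hedged sketch uses the hypothetical drift \(f(x) = x - x^3\) and confirms a negative exponent:

```python
import math
import random

def lyapunov_1d(sigma=0.5, dt=1e-3, n=200000, seed=4):
    """For dx = (x - x^3) dt + sigma dW (additive noise), the top Lyapunov
    exponent is the time average of f'(x) = 1 - 3 x^2 along the trajectory,
    since the linearization along two copies driven by the same noise is
    delta' = f'(x_t) delta."""
    rng = random.Random(seed)
    x, acc = 1.0, 0.0
    for _ in range(n):
        x += (x - x ** 3) * dt + sigma * rng.gauss(0.0, math.sqrt(dt))
        acc += (1.0 - 3.0 * x * x) * dt
    return acc / (n * dt)

lam = lyapunov_1d()
print(lam)  # negative: trajectories driven by the same noise synchronize
```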
Example 4
We consider the following stochastic differential equations on \(\mathbb {R}^2\) with purely external noise of intensity \(\sigma \ge 0\),
where \( b \in \mathbb {R}\) and \(W_t^1, W_t^2\) denote independent onedimensional Brownian motions. In polar coordinates the system can be written as
This form illustrates the role of the parameter b inducing a shear force: if \(b > 0\), the phase velocity \(\frac{\mathrm {d}\vartheta }{\mathrm {d}t}\) depends on the amplitude r. Since Gaussian random vectors are invariant under orthogonal transformations, one might think of writing the problems with the independent Wiener processes
However, the pathwise properties of the processes, seen as random dynamical systems, change under this transformation. In (3.18), the radial components of the trajectories depend on \(\vartheta \), which appears in the diffusion term and destroys the skew-product structure we had in the previous Example 3.
It has been shown in [20] that for b small enough the first Lyapunov exponent \(\lambda _1\) is negative, such that the corresponding random attractor A is indeed a singleton. For b large, one can see numerically that the attractor becomes chaotic. A proof of \(\lambda _1 > 0\) has been obtained in [22] for a simplified model of (4.20) in cylindrical coordinates, and recently also in the setting of restricting the state space to a bounded domain and only considering the dynamics conditioned on survival in this domain, using a computer-assisted proof technique [11].
One can characterize chaotic random attractors as nontrivial geometric objects and supports of SRB measures, i.e. sample measures with densities on unstable manifolds. For details see [10, 29], and for further discussions relevant to our setting e.g. [9, 21]. Due to the compactness and the minimality property of random attractors, there must be recurrence on these objects, and one may even find Crauel random periodic solutions there. However, it is questionable to what extent one can speak of isochronicity, given the very irregular recurrence properties. This already makes isochronicity a difficult issue for deterministic chaotic oscillators, see e.g. [42].
3.3 Random limit cycles as normally hyperbolic random invariant manifolds
As we have seen in Sect. 3.2.2, we can generally not expect the persistence of periodic orbits from the deterministic to the stochastic case under (global) white noise perturbations. A point of view that considers only local, bounded noise perturbations of normally hyperbolic manifolds, i.e. implicitly also of hyperbolic limit cycles, is presented in [30], where normally hyperbolic random invariant manifolds and their foliations are studied. In more detail, consider the ODE (2.1) with a small random perturbation, i.e. the random differential equation
where \(\varepsilon > 0\) is a small parameter and F is \(C^1\), uniformly bounded in x, \(C^0\) in t for fixed \(\omega \), and measurable in \(\omega \). In several cases, SDEs can be transformed into a random differential equation (3.19), in particular when the noise is additive or linear multiplicative; however, in this case, F is generally not uniformly bounded. Hence, for an application of the following, one has to truncate the Brownian motion by a fixed large constant, as we will discuss later. Let us first give the following definition:
Definition 7
A random invariant manifold for an RDS is a collection of nonempty closed random sets \({\mathcal {M}} (\omega )\), \(\omega \in \Omega \), such that each \(\mathcal {M}(\omega )\) is a manifold and
The random invariant manifold \({\mathcal {M}}\) is called normally hyperbolic if for almost every \(\omega \in \Omega \) and any \(x \in {\mathcal {M}}(\omega )\), there exists a splitting which is \(C^0\) in x and measurable:
of closed subspaces with associated projections \(\Pi ^u(\omega ,x),\Pi ^c(\omega ,x)\) and \(\Pi ^s(\omega ,x)\) such that

(i)
the splitting is invariant
$$\begin{aligned} \mathrm {D}\varphi (t,\omega ,x) E^i(\omega ,x) = E^i(\theta _t \omega , \varphi (t, \omega ,x)), \ \text { for } i=u,c, \end{aligned}$$and
$$\begin{aligned} \mathrm {D}\varphi (t,\omega ,x) E^s(\omega ,x) \subset E^s(\theta _t \omega , \varphi (t, \omega ,x)), \end{aligned}$$ 
(ii)
\(\mathrm {D}\varphi (t, \omega ,x)|_{E^i(\omega ,x)} : E^i(\omega ,x) \rightarrow E^i(\theta _t \omega , \varphi (t, \omega ,x))\) is an isomorphism for \(i=u,c,s\) and \(E^c(\omega ,x)\) is the tangent space of \({\mathcal {M}}(\omega )\) at x,

(iii)
there are \((\theta , \varphi )\)-invariant random variables \({\bar{\alpha }}, {\bar{\beta }}: {\mathcal {M}} \rightarrow (0, \infty )\), \({\bar{\alpha }} < {\bar{\beta }}\), and a tempered random variable \(K(\omega ,x) : {\mathcal {M}} \rightarrow [1, \infty )\) such that
$$\begin{aligned} \Vert \mathrm {D}\varphi (t,\omega ,x) \Pi ^s(\omega ,x)\Vert&\le K(\omega ,x) e^{- {\bar{\beta }}(\omega ,x) t} \ \text { for } t\ge 0, \end{aligned}$$(3.20)$$\begin{aligned} \Vert \mathrm {D}\varphi (t,\omega ,x) \Pi ^u(\omega ,x)\Vert&\le K(\omega ,x) e^{{\bar{\beta }}(\omega ,x) t} \ \text { for } t\le 0, \end{aligned}$$(3.21)$$\begin{aligned} \Vert \mathrm {D}\varphi (t,\omega ,x) \Pi ^c(\omega ,x)\Vert&\le K(\omega ,x) e^{{\bar{\alpha }}(\omega ,x) \left| t\right| } \ \text { for } - \infty< t < \infty . \end{aligned}$$(3.22)
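In the deterministic special case of Example 1 with \(h \equiv 1\), the exponents in such a splitting can be computed explicitly: the monodromy matrix of the limit cycle \(r = 1\) has Floquet multipliers \(1\) and \(e^{-4\pi }\), corresponding to \({\bar{\alpha }} = 0\) and \({\bar{\beta }} = 2\) in (3.20)–(3.22). The following sketch verifies this by integrating the variational equation over one period (the Cartesian form of the flow is an assumption matching the polar description):

```python
import math

def rhs(state):
    # Hopf-type flow xdot = x(1 - r^2) - y, ydot = y(1 - r^2) + x (h == 1),
    # together with the variational equation Phidot = J(x, y) Phi,
    # where Phi = [[a, b], [c, d]].
    x, y, a, b, c, d = state
    r2 = x * x + y * y
    j11 = 1.0 - 3.0 * x * x - y * y
    j12 = -2.0 * x * y - 1.0
    j21 = -2.0 * x * y + 1.0
    j22 = 1.0 - x * x - 3.0 * y * y
    return [x * (1.0 - r2) - y, y * (1.0 - r2) + x,
            j11 * a + j12 * c, j11 * b + j12 * d,
            j21 * a + j22 * c, j21 * b + j22 * d]

def rk4_step(state, dt):
    k1 = rhs(state)
    k2 = rhs([s + 0.5 * dt * k for s, k in zip(state, k1)])
    k3 = rhs([s + 0.5 * dt * k for s, k in zip(state, k2)])
    k4 = rhs([s + dt * k for s, k in zip(state, k3)])
    return [s + dt / 6.0 * (p + 2.0 * q + 2.0 * u + v)
            for s, p, q, u, v in zip(state, k1, k2, k3, k4)]

n = 20000
dt = 2.0 * math.pi / n  # the period of the cycle r = 1 is 2*pi
state = [1.0, 0.0, 1.0, 0.0, 0.0, 1.0]  # start on the cycle, Phi(0) = id
for _ in range(n):
    state = rk4_step(state, dt)
x, y, a, b, c, d = state
tr, det = a + d, a * d - b * c
disc = math.sqrt(max(tr * tr - 4.0 * det, 0.0))
mults = sorted([0.5 * (tr - disc), 0.5 * (tr + disc)])
print(mults)  # Floquet multipliers, approximately [e^{-4 pi}, 1]
```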
We can then deduce the following statements:
Proposition 3
Assume that \(\Phi \) is a \(C^k\) flow, \(k \ge 1\), in \(\mathbb {R}^m\) which has a hyperbolic periodic orbit \(\gamma \), with exponents \(\bar{\alpha }=0 < {\bar{\beta }}\) characterizing the normal hyperbolicity as in (3.20), (3.22). Then there exists a \(\delta > 0\) such that for any random \(C^1\) flow \(\varphi (t, \omega , \cdot )\) in \(\mathbb {R}^m\), as for example induced by an RDE (3.19), with
we have that

(i)
The random flow \(\varphi (t, \omega , \cdot )\) has a \(C^1\) normally hyperbolic random invariant manifold \({\mathcal {M}}(\omega )\) in a small neighbourhood of \(\gamma \),

(ii)
If \(\varphi (t, \omega , \cdot )\) is \(C^k\), then \({\mathcal {M}}(\omega )\) is a \(C^k\) manifold diffeomorphic to \(\gamma \) for each \(\omega \in \Omega \),

(iii)
There exists a stable manifold \({\mathcal {W}}^s(\omega )\) of \({\mathcal {M}}(\omega )\) under \(\varphi (t, \omega , \cdot )\), i.e. for all \(x \in {\mathcal {W}}^s(\omega )\)
$$\begin{aligned} \lim _{t \rightarrow \infty } {\text {dist}} \big (\varphi (t, \omega , x), {\mathcal {M}}(\theta _t\omega )\big ) = 0 \quad \text {for almost all } \,\omega \in \Omega \end{aligned}$$ 
(iv)
The manifold \({\mathcal {M}}(\omega )\) is, in fact, a random limit cycle in the sense of Definition 4.
Proof
The statements (i)–(iii) follow directly from [30, Theorem 2.2]. It is clear from (iii) that \({\mathcal {M}}(\omega )\) is a random forward attractor with respect to the collection \(\mathcal {S}\) of tempered random sets whose fibers \(S(\omega )\) are contained in \({\mathcal {W}}^s(\omega )\). Additionally, from (ii), it follows directly that \({\mathcal {M}}(\omega )\) is diffeomorphic to the unit circle, and, hence, we can conclude statement (iv). \(\quad \square \)
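Proposition 3 can be illustrated with a bounded perturbation of the limit cycle of Example 1: the attracting circle \(r=1\) survives as a nearby invariant set whose deviation is of order \(\varepsilon \). In this hedged sketch the bounded random forcing F of (3.19) is replaced by a deterministic quasi-periodic stand-in with \(|F| \le 1\), acting on the radial direction only:

```python
import math

def perturbed_radius(eps, t_end=50.0, dt=1e-3, r0=1.0):
    """Radial part of a random-ODE-style sketch: rdot = r(1 - r^2) + eps*F(t),
    with a uniformly bounded stand-in forcing |F| <= 1 playing the role of
    the bounded random term in (3.19). Returns the maximal deviation from
    the unperturbed cycle r = 1 after an initial transient."""
    r, t, max_dev = r0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        F = math.cos(t) * math.cos(math.sqrt(2.0) * t)
        r += (r * (1.0 - r * r) + eps * F) * dt
        t += dt
        if t > 10.0:
            max_dev = max(max_dev, abs(r - 1.0))
    return max_dev

dev = perturbed_radius(eps=0.05)
print(dev)  # stays O(eps): the invariant circle deforms but persists
```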
4 Random Isochrons
4.1 Isochrons as stable manifolds
4.1.1 Definition of forward isochrons
Let A be an attracting random cycle for the random dynamical system \((\theta , \varphi )\) where A is a random forward attractor (and possibly also a random pullback attractor). One may think of equations of the type (3.12), (3.18) or similar such that almost sure forward and pullback convergence coincide (see e.g. [20, Proof of Theorem B] or [38, Example 2.7 (i)]). We further assume that we are in the situation of a differentiable hyperbolic random dynamical system as discussed in Sect. 3.1.
In the typical setting of attracting random cycles, we may assume that \(\lambda _1 = 0\) with single multiplicity and \(\lambda _i < 0\) for all \(2\le i \le p\). In analogy to the stable manifolds of points on a deterministic limit cycle, we can then establish the following key novel definition (see also Fig. 4).
Definition 8
The random forward isochron \(W^{{\text {f}}}(\omega , x)\) of a pair \((\omega , x) \in \Omega \times \mathbb {R}^m\) with \(x \in A(\omega )\) is given by the stable set
for almost all \(\omega \in \Omega \) and all \(x \in A(\omega )\). In particular, we have for all \({\tilde{\lambda }} \in (0, - \lambda _2)\), where \(\lambda _2\) denotes the largest nonzero Lyapunov exponent,
Remark 6
It is clear from the definition why we exclude the case \(\lambda _1 <0\). In this situation, the set \(W^{{\text {f}}}(\omega ,x)\) is the whole absorbing set and, hence, no information about the decomposition of the state space by the dynamics can be obtained that way.
As indicated in Sect. 3.2.2, a chaotic random attractor, characterized by \(\lambda _1 >0\), also exhibits recurrence properties such that Definition 8 can in principle also be applied to this situation. However, it is arguable to what extent one can speak of isochronicity, given the irregular recurrence properties. Since this already makes isochronicity a difficult issue for deterministic chaotic oscillators [42], we leave a detailed analysis of random isochrons for chaotic random attractors as a topic for future work.
It is easy to observe that for all \(s \ge 0\) we have
i.e. the forward isochrons are \(\varphi \)invariant, as depicted in Fig. 4.
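In the deterministic limit \(\sigma = 0\), the stable-manifold structure behind Definition 8 can be checked directly. For \(\dot{r} = r(1-r^2)\), \(\dot{\vartheta } = 1 + c(1-r^2)\) (a hypothetical shear-type choice of h), a short computation of \(\int _0^\infty (1 - r(s)^2)\,\mathrm {d}s = -\ln r_0\) gives the asymptotic phase \(\vartheta - c \ln r\), so the isochron through \((1,0)\) contains the point with radius r and angle \(c \ln r\). The sketch below verifies that a point on this isochron converges to the trajectory of \((1,0)\), while the point with the same angle but radius 1.5 does not:

```python
import math

C = 1.0  # hypothetical shear strength in h(r) = 1 + C * (1 - r^2)

def flow(r, th, t_end=15.0, dt=1e-3):
    """Euler integration of rdot = r(1 - r^2), thdot = 1 + C(1 - r^2),
    returning the final point in Cartesian coordinates."""
    for _ in range(int(t_end / dt)):
        dr = r * (1.0 - r * r) * dt
        th += (1.0 + C * (1.0 - r * r)) * dt
        r += dr
    return r * math.cos(th), r * math.sin(th)

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

on_cycle = flow(1.0, 0.0)
# asymptotic phase of (r, th) is th - C*log(r): the isochron through (1, 0)
# contains (1.5, C*log(1.5)), while (1.5, 0) lies on a different isochron
partner = flow(1.5, C * math.log(1.5))
naive = flow(1.5, 0.0)
print(dist(on_cycle, partner), dist(on_cycle, naive))
```

The first distance shrinks to the size of the discretization error, the second saturates at the chordal distance corresponding to the phase offset \(c \ln 1.5\).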
4.1.2 Existence and properties of random stable sets
In the literature on (global) random dynamical systems, the existence of stable sets such as \(W^{{\text {f}}}(\omega ,x)\) as stable manifolds is often first established for discrete time, see e.g. [36] or [32, Chapter III]. (Arnold's treatment [1, Chapter 7] is limited to equilibria.) Even though the local view in [30], as described in Sect. 3.3, is different, we also need to account for the global situation in order to provide the full picture. Hence, we begin by adopting the discrete-time approach, reducing the analysis to time-one maps \(\varphi (1,\omega , \cdot )\) and their concatenations
First we want to conclude for all \({\tilde{\lambda }} \in (0, - \lambda _2)\) that
is an \((m-1)\)-dimensional immersed \(C^k\)-submanifold under sufficient boundedness assumptions, which would be immediately satisfied if the state space \({\mathcal {X}}\) is a compact manifold (cf. [32, Chapter III, Theorem 3.2]). We will state such conditions for our setting \({\mathcal {X}}= \mathbb {R}^m\) in the following. The transition to the time-continuous case, i.e. establishing \(W^{{\text {f}}}(\omega , x) = {\tilde{W}}^{{\text {s}}}(\omega , x)\), then follows immediately from the integrability assumption (3.4) for the MET, as one can observe with the proof of [32, Chapter V, Theorem 2.2].
One possible approach can be found in [9]: consider the maps (4.4). For \(x \in \mathbb {R}^m\), we define the local linear shift function
Further, we define the map
which is the evolution process of the linearization around the trajectory starting at \(x \in \mathbb {R}^m\). Assume that there is an invariant probability measure \(\mathbb {P} \times \rho \) for \((\Theta _t)_{t \ge 0}\) on \((\Omega \times \mathbb {R}^m, \mathcal {F}_0^\infty \times \mathcal {B}(\mathbb {R}^m))\) (see “Appendix A.1” and .1). If the RDS is induced by an SDE, the measure \(\rho \) is exactly the stationary measure of the associated Markov process. The integrability condition of the MET with respect to this measure reads
The crucial boundedness assumption that compensates for the lack of compactness in the proof of a stable manifold theorem reads
where \(\mathrm {D}^2\) is the second derivative operator and \(B_1(x)\) denotes the ball of radius 1 centered at \(x \in \mathbb {R}^m\).
In the situation where the maps (4.4) of the discrete-time RDS are the time-one maps of the continuous-time RDS induced by the SDE (3.2) with the stationary distribution fulfilling
we have the following requirements on \(b, \sigma _i \in C^{k+1}\), \(1 \le i \le n, k \ge 2\), such that assumption (4.7) is satisfied:
where \(0 < \delta \le 1\) and with multi-index notation \(\alpha = (\alpha _1, \dots , \alpha _m)\), \(\left| \alpha \right| = \sum _{i=1}^m \left| \alpha _i \right| \), for \(f \in C^k\)
This means that the coefficients of the SDE have at most linear growth, globally bounded derivatives, and kth derivatives with bounded \(\delta \)-Hölder norm. In [9], also the backward flow and a condition similar to (4.7) for the inverse are considered, but these are not needed when we purely regard the stable manifold problem. These conditions on the drift b are generally too restrictive, since already examples (3.6), (3.10) and (3.11) are not covered. Of course, one can always consider the dynamics on a compact domain \({\mathcal {K}}\), with absorbing or reflecting boundary conditions at the boundary of the domain, as we will see later in Sect. 4.3 for the averaged problem on the level of the Kolmogorov equations. However, this involves further technicalities for the random dynamical systems approach which we try to avoid here. The easiest way of reducing to a compact domain \({\mathcal {K}}\) is to assume compact support of the noise and absorption into \({\mathcal {K}}\) through the drift dynamics such that neither global nor boundary conditions are needed (see Theorem 2 (iii)).
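The smooth cutoff of the diffusion coefficient used in such a reduction can be built from the standard \(C^\infty \) transition function; the following sketch is a minimal construction with hypothetical radii:

```python
import math

def smoothstep(t):
    """C-infinity transition: equals 0 for t <= 0 and 1 for t >= 1,
    built from the classical bump ingredient exp(-1/t)."""
    def f(s):
        return math.exp(-1.0 / s) if s > 0 else 0.0
    return f(t) / (f(t) + f(1.0 - t))

def cutoff(x, R_star=5.0):
    """chi = 1 on the ball of radius R_star and chi = 0 outside radius
    R_star + 1; multiplying sigma by chi yields a compactly supported
    coefficient sigma_tilde that agrees with sigma on B_{R_star}(0)."""
    r = math.sqrt(sum(xi * xi for xi in x))
    return 1.0 - smoothstep(r - R_star)

print(cutoff([1.0, 2.0]), cutoff([4.0, 4.0]), cutoff([6.0, 6.0]))
```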
Additionally we consider [23, Section 3] which discusses conditions for synchronization to a singleton random attractor for random dynamical systems induced by an SDE (3.2) with additive noise, i.e. \(n=m\) and, for all \(1\le i,j \le n\), \(\sigma _i^j = \sigma \delta _{i,j}\) where \(\sigma > 0\) and \(\sigma _i^j\) denotes the jth entry of the vector \(\sigma _i\). The authors formulate a special local stable manifold theorem for the case \(\lambda _1 < 0\), which is, however, based on [36] where stable manifold theorems are considered in full generality. The assumption for deducing the local stable manifold theorem amounts to a (weaker) combination of conditions (4.6) and (4.7), and reads
where \(C^{1,\delta }\) is the space of \(C^1\)-functions whose derivatives are \(\delta \)-Hölder continuous for some \(\delta \in (0,1)\) and \(\rho \) denotes the stationary measure of the associated Markov process. We introduce a classical dissipativity condition, the one-sided Lipschitz condition
for all \(x,y \in \mathbb {R}^m\) and \(\kappa > 0\). According to [23, Lemma 3.9], condition (4.11) is satisfied in the case of additive noise if \(b \in C^2(\mathbb {R}^m)\) fulfills (4.12), admits at most polynomial growth of the second derivative, i.e.
and the stationary distribution \(\rho \) satisfies
4.1.3 Main theorem about random isochrons
Assumptions (4.12) and (4.13) on the drift are weaker than condition (4.9) but, in [23], only applied to situations with additive noise whereas at least linear multiplicative noise as in (3.10) is a desirable model for random periodicity. We address this issue in Remark 7 and point (iii) of the following theorem, which summarizes the findings from above:
Theorem 2
(Forward isochrons are stable manifolds). Consider an ergodic \(C^k\), \(k \ge 2\), random dynamical system \((\theta , \varphi )\) on \(\mathbb {R}^m\) with random attractor A, satisfying the integrability assumption (3.4) of the Multiplicative Ergodic Theorem such that \(\lambda _1 = 0\) with single multiplicity and \(\lambda _i < 0\) for all \(2\le i \le p\). Let further one of the following assumptions be satisfied:

(i)
The RDS \((\theta , \varphi )\) is induced by an SDE of the form (3.2) such that the unique stationary measure \(\rho \) satisfies (4.8) and the drift and diffusion coefficients satisfy (4.9),

(ii)
The RDS \((\theta , \varphi )\) is induced by an SDE of the form (3.2) with \(n=m\) and, for all \(1\le i,j \le n\), \(\sigma _i^j = \sigma \delta _{i,j}\) where \(\sigma > 0\), such that the unique stationary measure \(\rho \) satisfies (4.14) and the drift satisfies conditions (4.12) and (4.13),

(iii)
The RDS \((\theta , \varphi )\) is induced by an SDE of the form (3.2) such that \({{\,\mathrm{supp}\,}}(\sigma ) \subset \mathbb {R}^m\) is compact, the drift b satisfies condition (4.12) with \(\kappa < 0\) for all \( \Vert x\Vert , \Vert y\Vert > R\) for some \(R>0\) and there is a unique stationary measure \(\rho \) with \({{\,\mathrm{supp}\,}}(\rho ) \subset \mathbb {R}^m\) compact.

(iv)
The RDS satisfies the conditions of Proposition 3.
Then for almost all \(\omega \in \Omega \) and all \(x \in A(\omega )\) the random forward isochrons \(W^{{\text {f}}}(\omega , x)\) (see (4.2)) are a uniquely determined, \(C^{k-1}\) in x, family of \(C^k\) \((m-1)\)-dimensional submanifolds (at least locally, i.e. within a neighbourhood \({\mathcal {U}}\) of x) of the stable manifold \({\mathcal {W}}^s(\omega )\) such that
where the union is disjoint.
Proof
As already discussed, in most of the cited literature, the stable manifold theorem is shown for discrete time. However, the transition to the time-continuous case, i.e. establishing \(W^{{\text {f}}}(\omega , x) = {\tilde{W}}^{{\text {s}}}(\omega , x)\), follows immediately from the integrability assumption (3.4) for the MET, as one can observe with the proof of [32, Chapter V, Theorem 2.2]. Hence, the fact that the sets \(W^{{\text {f}}}(\omega , x)\) are a uniquely determined, \(C^{k-1}\) in x, family of \(C^k\) \((m-1)\)-dimensional submanifolds of the stable manifold \(\mathcal W^s(\omega )\) can be deduced in various situations as follows:
Assumption (i) is derived from [9, Theorem 4.7 and Theorem 9.1], where the \(W^{{\text {f}}}(\omega , x)\) are global stable manifolds. Assumption (ii) is derived from [23, Lemma 3.9], showing that the conditions for the local stable manifold theorem [36, Theorem 5.1] are satisfied, i.e. \(W^{{\text {f}}}(\omega , x)\) is a \(C^{k}\) submanifold of \(\mathbb {R}^m\) of dimension \(m-1\), at least within a neighbourhood \({\mathcal {U}}\) of x. Furthermore, it is obvious from the assumptions that condition (4.11) is satisfied and, hence, assumption (iii) is derived similarly to assumption (ii). Assumption (iv) can be handled according to [30, Theorem 2.4].
It remains to prove the foliation property in all these cases: the fact that
can be deduced in direct analogy to the proof of [30, Proposition 9 (iv)]. The fact that the union is disjoint can be seen as follows: assume there is a \(y \in W^{{\text {f}}}(\omega , x) \cap W^{{\text {f}}}(\omega , x')\) for \(x\ne x'\). Since \(A(\omega )\) is an invariant hyperbolic limit cycle and \(x,x' \in A(\omega )\), we have that \(d(\varphi (t, \omega ,x), \varphi (t, \omega , x')) \ge \delta > 0\) for all \(t \ge 0\). Hence, we obtain by definition of \(W^{{\text {f}}}\) and the triangle inequality that
which is a contradiction (see proof of [30, Proposition 9 (iii)] for a similar argument). \(\quad \square \)
Remark 7

(i)
One could also try to extend Theorem 2 (ii) to the situation with arbitrary diffusion coefficients satisfying (4.9) instead of only additive noise. To show this, first notice that under the assumptions on \(\sigma \) the drift \({\hat{b}}= b + b_0\) with the Itô–Stratonovich conversion term
$$\begin{aligned} b_0 := \frac{1}{2} \sum _{i=1}^n \sum _{j=1}^m \sigma _i^j \frac{\partial }{\partial x_j} \sigma _i \end{aligned}$$still satisfies assumptions (4.12) and (4.13). Due to the mild behaviour (4.9) of the diffusion coefficients, one could then try to make estimates analogous to [23, Lemma 3.9] to deduce that condition (4.11) is satisfied. Since we are mainly interested in the local behavior, we refrain from conducting such estimates here, but point out that this would be an interesting general extension.

(ii)
Consider the example Eq. (3.11) (and by that Eq. (3.10)): the drift b is polynomial such that condition (4.13) is satisfied and we have
$$\begin{aligned} \langle b(x) - b(y), x-y\rangle&= \Vert x- y \Vert ^2 - \Vert x\Vert ^4 - \Vert y\Vert ^4 + \langle x,y\rangle (\Vert x\Vert ^2 + \Vert y\Vert ^2) \nonumber \\&= \Vert x- y \Vert ^2 - \Vert x\Vert ^4 - \Vert y\Vert ^4 + \frac{1}{2}(\Vert x\Vert ^2 + \Vert y\Vert ^2)^2 \nonumber \\&\quad - \frac{1}{2}(\Vert x\Vert ^2 + \Vert y\Vert ^2)\Vert x- y \Vert ^2 \nonumber \\&= \left( 1- \frac{1}{2}(\Vert x\Vert ^2 + \Vert y\Vert ^2)\right) \Vert x- y \Vert ^2 - \frac{1}{2}(\Vert x\Vert ^2 - \Vert y\Vert ^2)^2 \nonumber \\&\le \Vert x- y \Vert ^2. \end{aligned}$$(4.15)Hence, also condition (4.12) is satisfied. Furthermore, the unique stationary distribution \(\rho \) has a density
$$\begin{aligned} p(x,y) = \frac{1}{Z} \left( x^2 + y^2\right) ^{\frac{1}{\sigma ^2}-1} \exp \left( - \frac{x^2 + y^2}{\sigma ^2} \right) , \end{aligned}$$(4.16) solving the stationary Fokker–Planck equation. Hence, also condition (4.14) is fulfilled. Since the noise term is linear, we obviously have \(\sum _{i=1}^n \Vert \sigma _i\Vert _{k,\delta } < \infty \) for all \(k \ge 2, \delta \in (0,1]\). Hence, we could deduce the assertions of Theorem 2 if we had proven the extension as discussed in (i).
However, for our purposes, this is not necessary: we additionally have, using the same transformation as in estimate (4.15), that for \(R = \sqrt{3}\) and \(\Vert x\Vert , \Vert y\Vert > R\)
$$\begin{aligned} \langle b(x) - b(y), x-y\rangle&\le \left( 1- \frac{1}{2}(\Vert x\Vert ^2 + \Vert y\Vert ^2)\right) \Vert x- y \Vert ^2 \\&\le - \kappa \Vert x- y \Vert ^2, \end{aligned}$$where \(\kappa = R^2/2 - 1\). Now choosing a smooth cutoff of \(\sigma \), say \({\tilde{\sigma }}\), such that \(\sigma = {\tilde{\sigma }}\) on \(B_{R^*}(0)\) for some large \(R^* > R\) and \( {\tilde{\sigma }} \equiv 0\) on \(\mathbb {R}^m {\setminus } B_{R^*+1}(0)\), we obtain a stationary density \({\tilde{p}}\) with \({\tilde{p}} = p/{\tilde{Z}} \) on \(B_{R^*}(0)\), where \({\tilde{Z}} > 0\) is a normalization constant, and \( {\tilde{p}} \equiv 0\) on \(\mathbb {R}^m {\setminus } B_{R^*+1}(0)\). Hence, we can apply Theorem 2 (iii). In particular, note that this construction allows, when \({\tilde{\sigma }}\) is small enough, for a transformation into the random ODE (3.19) with sufficiently small bounded noise such that Proposition 3 and, by that, Theorem 2 (iv) can be applied. This procedure is, of course, independent of the particular form of Eq. (3.11) and can be used for any SDEs around deterministic limit cycles when the transformation into the random ODE (3.19) is possible (which is always the case for additive and linear multiplicative noise).
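The algebraic identity behind estimate (4.15) can be double-checked numerically by sampling random pairs of points. The sketch below verifies both the closed form and the resulting bound \(\langle b(x)-b(y), x-y\rangle \le \Vert x-y\Vert ^2\) for the drift \(b(x) = x(1 - \Vert x\Vert ^2)\), which matches the expansion in (4.15):

```python
import random

def b(v):
    # Hopf-type drift b(x) = x * (1 - |x|^2) on R^2
    n2 = v[0] * v[0] + v[1] * v[1]
    return (v[0] * (1.0 - n2), v[1] * (1.0 - n2))

def check_one_sided_lipschitz(trials=10000, seed=7):
    """Verify the closed form of <b(x)-b(y), x-y> from (4.15) and the
    one-sided Lipschitz bound with kappa = 1 on random samples."""
    rng = random.Random(seed)
    for _ in range(trials):
        x = (rng.uniform(-3, 3), rng.uniform(-3, 3))
        y = (rng.uniform(-3, 3), rng.uniform(-3, 3))
        bx, by = b(x), b(y)
        lhs = (bx[0] - by[0]) * (x[0] - y[0]) + (bx[1] - by[1]) * (x[1] - y[1])
        nx, ny = x[0] ** 2 + x[1] ** 2, y[0] ** 2 + y[1] ** 2
        d2 = (x[0] - y[0]) ** 2 + (x[1] - y[1]) ** 2
        rhs = (1.0 - 0.5 * (nx + ny)) * d2 - 0.5 * (nx - ny) ** 2
        assert abs(lhs - rhs) < 1e-8 * max(1.0, abs(rhs))
        assert lhs <= d2 + 1e-12
    return True

print(check_one_sided_lipschitz())
```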
Given (4.2), we further assume that there exists a Crauel random periodic solution \((\psi ,T)\) such that \(\psi (t, \omega ) \in A(\omega )\) for all \(\omega \in \Omega \) and \(t \ge 0\), as for example seen in Proposition 2. Then we can investigate the behaviour of
If, as in Proposition 2, each \(x \in A(\omega )\) can be identified with \(\psi _x(0, \omega )\) for some Crauel random periodic solution, then \(T_x(\omega )\) is the period we can associate with \(W^{{\text {f}}}(\omega , \psi _x(0, \omega ))\). We summarize this insight in the following definition:
Definition 9
(Period of random forward isochron). Let \((\psi ,T)\) be a Crauel random periodic solution for the RDS \((\theta , \varphi )\) such that \(\psi (t, \omega ) \in A(\omega )\) for all \(\omega \in \Omega \) and \(t \ge 0\), where A is an attracting random cycle or chaotic random attractor. Then we call \(T(\omega )\) the period of the corresponding random forward isochron \(W^{{\text {f}}}(\omega , \psi (0, \omega ))\) for all \(\omega \in \Omega \).
The natural question arises whether
holds for some measurable family \({\mathcal {N}}_x(\omega )\) of cross-sections, in particular, whether we can identify \({\mathcal {N}}_x(\omega )= W^{{\text {f}}}(\omega , \psi _x(0, \omega ))\). What we observe is the following:
Proposition 4
Let \((\theta , \varphi )\) be a random dynamical system with random attractor A and the isochrons \(W^{{\text {f}}}(\omega , x)\) as given in (4.1) such that each \(x \in A(\omega )\) can be identified with \(\psi _x(0, \omega )\) for some Crauel random periodic solution \((\psi _x, T_x)\). Then we have
Proof
Let \(y \in W^{{\text {f}}}(\omega , \psi _x(0, \omega ))\). Then
Hence, the statement follows directly. \(\quad \square \)
4.1.4 Pullback isochrons
In analogy to the different notions of a random attractor, one could also consider defining fiberwise isochrons for random dynamical systems in a pullback sense, as follows:
Again assume there is a Crauel random periodic solution \((\psi ,T)\) on an attracting random cycle A (or chaotic random attractor A). Then the random pullback isochrons could only be defined as
for almost all \(\omega \in \Omega \).
In contrast to the random forward isochron \(W^{{\text {f}}}(\omega , \psi (0, \omega ))\), the set \(W^{{\text {p}}}(\omega , \psi (0, \omega ))\) is not given as a stable set for the point \(\psi (0, \omega )\) but as the set of points whose pullback trajectories converge to the trajectories starting in \(\psi (0, \theta _{t} \omega )\) as \(t \rightarrow \infty \). Hence, such a definition cannot coincide with a stable manifold for a given point on a given fiber of the random attractor and, in particular, there does not seem to be a way to connect the set \(W^{{\text {p}}}(\omega , \psi (0, \omega ))\) to the set \(W^{{\text {f}}}(\omega , \psi (0, \omega ))\). In other words, it is not clear what geometric interpretation such a random pullback isochron could have and it is apparent that the definition in forward time, i.e. Definition 8, yields the most immediately meaningful object in this context.
4.2 The random isochron map
For the following, recall the stochastic differential Eq. (3.2) as
where \(W_t^i\) are independent real-valued Brownian motions, b is a \(C^k\) vector field, \(k \ge 1\), and \(\sigma _1, \dots , \sigma _n\) are \(C^{k+1}\) vector fields satisfying bounded growth conditions, such as (global) Lipschitz continuity, in all derivatives, to guarantee the existence of a (global) random dynamical system with cocycle \(\varphi \) and derivative cocycle \(\mathrm {D}\varphi \).
Example 5
As before, the main examples we have in mind are two-dimensional. In particular, we may consider the corresponding stochastic differential equation in polar coordinates \((\vartheta , r) \in [0, 2 \pi ) \times [0, \infty )\)
As in Examples 3 and 4, we usually regard a situation such that in the deterministic case \(\sigma _1 = \sigma _2 = 0\) there is an attracting limit cycle at \(r=r^* >0\).
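To build intuition for such systems, a sample path can be generated with a simple Euler–Maruyama discretization. The sketch below uses an assumed illustrative model, \(\mathrm {d}\vartheta = \kappa \, \mathrm {d}t + \sigma \, \mathrm {d}W_t^1\), \(\mathrm {d}r = (r - r^3)\, \mathrm {d}t + \sigma \, \mathrm {d}W_t^2\), which for \(\sigma = 0\) has an attracting limit cycle at \(r^* = 1\); it is not the specific system (4.20) from the text, and all parameter values are arbitrary.

```python
import numpy as np

def simulate_polar(kappa=2.0, sigma=0.05, dt=1e-3, n_steps=20_000,
                   theta0=0.0, r0=0.2, seed=0):
    """Euler-Maruyama simulation of an illustrative noisy oscillator in polar
    coordinates:  d theta = kappa dt + sigma dW1,  dr = (r - r^3) dt + sigma dW2.
    For sigma = 0 the radial equation has an attracting limit cycle at r* = 1."""
    rng = np.random.default_rng(seed)
    theta, r = theta0, r0
    thetas, rs = [theta], [r]
    for _ in range(n_steps):
        dW1, dW2 = rng.normal(0.0, np.sqrt(dt), size=2)
        theta += kappa * dt + sigma * dW1
        r += (r - r**3) * dt + sigma * dW2
        thetas.append(theta)
        rs.append(r)
    return np.array(thetas), np.array(rs)

theta, r = simulate_polar()
# a trajectory started inside the cycle settles near the deterministic radius r* = 1
```

For small \(\sigma\), the radial component fluctuates in a narrow band around \(r^* = 1\), while the phase drifts forward at average speed \(\kappa\); this is the situation in which the isochron questions of this section are posed.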
From Theorem 1 recall the isochron map \(\xi : W^{\text {s}}(\gamma ) \rightarrow \mathbb {R} \mod \tau _{\gamma } \) for a limit cycle \(\gamma \) with period \(\tau _{\gamma }\), which is given for every \(y \in W^{\text {s}}(\gamma )\) as the unique t such that \(y \in W^{\text {s}}(\gamma (t))\), i.e.
Analogously, we now introduce the following new notion for the random case; recall that for a CRPS \((\psi , T)\) we have, in particular, that \(\psi (0, \omega ) = \psi (T(\omega ), \omega )\) for all \(\omega \in \Omega \).
Theorem 3
Consider the SDE (4.19) such that the induced RDS has a random attractor A with CRPS \((\psi , T)\) and parametrization \(A(\omega ) = \{ \psi (t+s, \omega )\,: \, t \in [0, T(\theta _{-s} \omega ))\}\) for all \(\omega \in \Omega \), \(s \in \mathbb {R}\). Then

1.
There exists a random isochron map \({\tilde{\phi }}\), i.e. a measurable function \({\tilde{\phi }}: \mathbb {R}^m \times \Omega \times \mathbb {R} \rightarrow \mathbb {R}\), \(C^k\) in the phase space variable, such that in a neighbourhood \({\mathcal {U}}(\omega )\) of \( A(\omega )\) we have
$$\begin{aligned} {\tilde{\phi }}(\cdot , \omega , s): {\mathcal {U}}(\omega )\rightarrow [s, s + T(\theta _{-s} \omega )) \end{aligned}$$and for each \(y \in {\mathcal {U}}(\omega )\), \(s \in \mathbb {R}\)
$$\begin{aligned}&\lim _{t\rightarrow +\infty }{\text {d}}(\varphi (t, \omega , y),\varphi (t, \omega , \psi ({\tilde{\phi }} (y, \omega ,s) , \omega ))) \nonumber \\&= \lim _{t\rightarrow +\infty }{\text {d}}(\varphi (t, \omega , y),\psi (t + \tilde{\phi }(y, \omega ,s), \theta _t \omega ))=0, \end{aligned}$$(4.21) 
2.
For any \(\omega \in \Omega \), \(s \in \mathbb {R}\) and \(t \in [0, T(\theta _{-s} \omega ))\), the random \({\tilde{\phi }}\)-isochron \({\tilde{I}}(\omega , \psi (t+s, \omega ),s)\) given by
$$\begin{aligned} {\tilde{I}}(\omega ,\psi (t+s, \omega ),s) = \{ y \in \mathcal {U}(\omega ) : {\tilde{\phi }}( y, \omega ,s) = {\tilde{\phi }}(\psi (t+s, \omega ), \omega ,s)\} \end{aligned}$$(4.22)satisfies
$$\begin{aligned} {\tilde{I}}(\omega , \psi (t+s, \omega ),s) = W^{{\text {f}}}( \omega , \psi (t+s, \omega )). \end{aligned}$$(4.23) 
3.
For any \(\omega \in \Omega \), \(s \in \mathbb {R}\) and \(y \in \mathcal {U}(\omega )\)
$$\begin{aligned} {\tilde{\phi }} ( \varphi (s, \omega , y), \theta _s \omega , s) = {\tilde{\phi }}(y, \omega ,0) +s\,, \end{aligned}$$(4.24)or, equivalently,
$$\begin{aligned} \frac{\mathrm {d}}{\mathrm {d}s} {\tilde{\phi }} ( \varphi (s, \omega , y), \theta _s \omega , s) = 1. \end{aligned}$$(4.25)
Proof
Since \(A(\omega )\) is a random attractor, we have that for given \(y \in \mathcal {U}(\omega )\) there is an \(x \in A(\omega )\) such that \(y \in W^{{\text {f}}}(\omega , x)\). Due to the assumptions, for any \(s \in \mathbb {R}\) there is a \(t_x \in [0, T(\theta _{-s}\omega ))\) such that \(x = \psi (s + t_x, \omega )\). Then \({\tilde{\phi }}(y, \omega ,s) := t_x + s\) satisfies the required properties, where measurability follows from the measurability of T and, writing \(t=t_x\), differentiability from
which can be deduced as follows: we have \(y \in {\tilde{I}}(\omega , \psi (t+s, \omega ),s) \) if and only if \({\tilde{\phi }}(y, \omega ,s) = {\tilde{\phi }}(\psi (t+s, \omega ), \omega ,s) =t + s\) which, according to Eq. (4.21), is equivalent to
which is the case if and only if \(y \in W^{{\text {f}}}(\omega , \psi (t+s, \omega ))\).
It remains to show the third point: firstly, we derive from the invariance of the stable manifolds and equality (4.23) that
This means that for \(x \in \mathcal {U}(\theta _s \omega )\) we have that \(x = \varphi (s,\omega ,y)\) for some \(y \in \mathcal {U}( \omega )\) with \({\tilde{\phi }} (y, \omega ,0) = t \in [0, T(\omega ))\) if and only if
Hence, we obtain Eq. (4.24), or equivalently Eq. (4.25), for any \(y \in \mathcal {U}( \omega )\). \(\quad \square \)
Note that, due to the time dependence, we always give the image of the random isochron map \({\tilde{\phi }}(\cdot , \omega , s)\) as an interval \([s, s + T(\theta _{-s} \omega ))\), in contrast to the deterministic case where the values of the isochron map \(\xi \) lie in \(\mathbb {R} \mod \tau _{\gamma }\), which can be identified with \([0, \tau _{\gamma })\) for fixed period \(\tau _{\gamma }\) (see Proposition 1). We add a couple of further remarks to the last theorem in order to highlight its coherence with the above and the analogy to the deterministic case.
Remark 8

(i)
As seen in the proof of Theorem 3, note that for all \(s \in \mathbb {R}\), \( t \in [0, T(\theta _{-s} \omega ))\)
$$\begin{aligned} {\tilde{\phi }}(\psi (t+s, \omega ), \omega , s) = t + s, \end{aligned}$$(4.27)and, in particular,
$$\begin{aligned} {\tilde{\phi }}( \varphi (t, \theta _{-t} \omega , \psi (0,\theta _{-t} \omega )), \theta _t (\theta _{-t} \omega ), 0) = {\tilde{\phi }} (\psi (t, \omega ), \omega , 0) = t \quad \text {for all } \,t \in [0, T(\omega )). \end{aligned}$$(4.28)Additionally, observe that the parametrization of the random attractor in Theorem 3 is generally possible when there is a CRPS; with Definition 6 we have for all \(s \ge 0\) that \(\psi (s + T(\omega ), \theta _s \omega ) = \psi (s, \theta _s \omega )\) and, hence, we can also consider
$$\begin{aligned} A(\theta _s \omega ) = \{ \psi (t + s, \theta _s \omega )\,: \, t \in [0, T(\omega ))\}, \end{aligned}$$for which we find, for \(t \in [0, T(\omega ))\),
$$\begin{aligned} {\tilde{\phi }}(\cdot , \theta _s \omega , s): {\mathcal {U}}(\theta _s \omega )\rightarrow [s, s+ T(\omega )), \ {\tilde{\phi }} (\psi (t+s, \theta _s \omega ), \theta _s \omega , s) = t+s. \end{aligned}$$ 
(ii)
From Proposition 1 recall that the isochron map \(\xi : W^{\text {s}}(\gamma ) \rightarrow \mathbb {R} \mod \tau _{\gamma } \) for a deterministic limit cycle \(\gamma \) satisfies Eq. (2.7)
$$\begin{aligned} \frac{{\text {d}}}{{\text {d}}t} \xi (\Phi (y,t)) = 1 \ \text { for all } t \ge 0, \, y \in W^{\text {s}}(\gamma )\,. \end{aligned}$$Equation (4.25) is the analogous equation for the random dynamical system.

(iii)
In certain cases, it may be convenient to anchor the random \(\tilde{\phi }\)-isochrons at the deterministic limit cycle to compare with the averaging approaches from the physics literature later on. Consider for example the SDE (4.20) with attracting limit cycle at \(r=r^* >0\) in the deterministic case \(\sigma _1 = \sigma _2 = 0\). We can then write the random isochron map \({\tilde{\phi }}: [0, 2 \pi ) \times [0, \infty ) \times \Omega \times \mathbb {R} \rightarrow \mathbb {R}\) such that in a neighbourhood \({\mathcal {U}}\) of the circle \(\{r=r^*\}\) we have
$$\begin{aligned} {\tilde{\phi }}(\cdot , \omega , s): {\mathcal {U}} \rightarrow [s, s + T(\theta _{-s} \omega )) \end{aligned}$$and, based on Eqs. (4.25) and (4.24),
$$\begin{aligned} {\tilde{\phi }} ( \varphi (s, \omega , (\vartheta _0, r_0)), \theta _s \omega , s) = \tilde{\phi }((\vartheta _0, r_0), \omega ,0) +s\,, \end{aligned}$$(4.29)or equivalently
$$\begin{aligned} \mathrm {d}\, {\tilde{\phi }}(\varphi (s, \omega ,(\vartheta _0, r_0)), \theta _s \omega , s) = 1 \, \mathrm {d}s\,, \end{aligned}$$(4.30)for any \((\vartheta _0, r_0) \in {\mathcal {U}}\), \(s \in \mathbb {R}\) and \(\omega \in \Omega \). For any \(\vartheta \in [0, 2 \pi )\), \(s \in \mathbb {R}\) and \(\omega \in \Omega \), we can write \(\tilde{I}_{\vartheta }(\omega ,s)\) for the level set
$$\begin{aligned} {\tilde{I}}_{\vartheta }(\omega ,s) = \{ ({\tilde{\vartheta }}, {\tilde{r}}) \in {\mathcal {U}}: {\tilde{\phi }}( ({\tilde{\vartheta }}, {\tilde{r}}), \omega , s) = \tilde{\phi }((\vartheta , r^*), \omega , s)\}. \end{aligned}$$(4.31)
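The unit-rate property of the deterministic isochron map recalled in Remark 8 (ii) can be verified numerically in a planar example where \(\xi \) is available in closed form. For \(\dot{r} = r(1 - r^2)\), \(\dot{\vartheta } = 1 + c(r^2 - 1)\) (an assumed example with shear, not taken from the text), a direct computation gives the asymptotic phase \(\xi (\vartheta , r) = \vartheta + c \ln r\), since \(\frac{\mathrm {d}}{\mathrm {d}t}(\vartheta + c \ln r) = 1 + c(r^2-1) + c(1-r^2) = 1\):

```python
import numpy as np

def rhs(state, c=1.5):
    # planar oscillator with shear: theta' = 1 + c (r^2 - 1), r' = r (1 - r^2)
    theta, r = state
    return np.array([1.0 + c * (r**2 - 1.0), r * (1.0 - r**2)])

def rk4_step(state, dt, c=1.5):
    k1 = rhs(state, c)
    k2 = rhs(state + 0.5 * dt * k1, c)
    k3 = rhs(state + 0.5 * dt * k2, c)
    k4 = rhs(state + dt * k3, c)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def isochron_map(theta, r, c=1.5):
    # closed-form asymptotic phase xi for this example
    return theta + c * np.log(r)

c, dt = 1.5, 1e-3
state = np.array([0.3, 2.0])          # start off the limit cycle r* = 1
xi0 = isochron_map(*state, c)
for _ in range(5000):                 # integrate up to t = 5
    state = rk4_step(state, dt, c)
print(isochron_map(*state, c) - xi0)  # increases at unit rate: close to 5.0
```

The value \(\xi (\Phi (y,t)) - \xi (y)\) equals the elapsed time up to the integrator error, which is the deterministic counterpart of Eq. (4.25).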
Following Theorem 3, we can simply define isochrons for any point \( x \in {\mathcal {U}}(\omega )\) by setting
We can then show the invariance of \({\tilde{I}}(\omega , x, 0)\) under the RDS, similarly to the invariance property (4.3) of the forward isochrons, extending property (4.26) to any \(x \in \mathcal U(\omega )\).
Proposition 5
The random \({\tilde{\phi }}\)-isochrons \({\tilde{I}}(\omega , x, 0)\) for \(x \in \mathcal {U}(\omega )\), where \(\mathcal {U}(\omega )\) is an attracting neighbourhood of \( A(\omega )\), are forward-invariant under the RDS cocycle, i.e.
Proof
Let \(y \in \varphi (s,\omega ) {\tilde{I}}( \omega , x, 0)\). This means that there is a \(z \in \mathbb {R}^m\) such that \(y = \varphi (s, \omega ,z)\) and \({\tilde{\phi }} (z, \omega , 0) = {\tilde{\phi }} (x, \omega , 0)\). We obtain from Eq. (4.24) that
Hence, we have \(y \in {\tilde{I}} (\theta _s \omega , \varphi (s,\omega ,x), s)\) and therefore
This finishes the proof. \(\quad \square \)
4.3 Stochastic isochrons via mean return time and random isochrons
One main approach to define stochastic isochrons in the physics literature is due to Schwabedal and Pikovsky [41], who introduce isochrons (or isophase surfaces) for noisy systems as sections \(W^\mathbb {E}(x)\) such that the mean first return time to the same section \(W^\mathbb {E}(x)\) is a constant \({\bar{T}}\), equaling the average oscillation period. Note that such an object is not well-defined a priori, since it is unclear what “return” means here, i.e., return to what? The paper does not rigorously establish these objects but only gives a numerical algorithm which is successfully tested on several examples. According to the algorithm, a deterministic starting section \({\mathcal {N}}\) is adjusted according to the mean return time, i.e., points are moved in proportion to the mismatch between their return time and the mean period for \({\mathcal {N}}\), and this procedure is repeated until all points have the same mean return time.
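The adjustment loop just described can be sketched for an assumed toy model \(\mathrm {d}\vartheta = (\kappa + r^2 - 1)\, \mathrm {d}t + \sigma \, \mathrm {d}W^1\), \(\mathrm {d}r = (r - r^3)\, \mathrm {d}t + \sigma \, \mathrm {d}W^2\); this is not the implementation of [41], and the section parametrization, update gain, and model are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
kappa, sigma, dt = 2.0, 0.2, 2e-3
radii = np.linspace(0.7, 1.3, 5)     # radial anchor points of the candidate section
offsets = np.zeros_like(radii)       # angular offsets; all zero = radial line {theta = 0}

def mean_return_times(offsets, n_paths=300, max_steps=6000):
    """Monte Carlo mean first return time to the section {theta = offset(r)} for
    trajectories started on it, in the toy model
        d theta = (kappa + r^2 - 1) dt + sigma dW1,  dr = (r - r^3) dt + sigma dW2."""
    T = np.empty_like(radii)
    for i, r0 in enumerate(radii):
        theta = np.full(n_paths, offsets[i])     # start on the section at radius r0
        r = np.full(n_paths, r0)
        t_hit = np.full(n_paths, np.nan)
        for k in range(1, max_steps + 1):
            dW = rng.normal(0.0, np.sqrt(dt), size=(2, n_paths))
            theta = theta + (kappa + r**2 - 1.0) * dt + sigma * dW[0]
            r = np.clip(r + (r - r**3) * dt + sigma * dW[1], 0.5, 1.5)
            # return = unwrapped phase has gained 2*pi relative to the section
            hit = np.isnan(t_hit) & (theta >= np.interp(r, radii, offsets) + 2.0 * np.pi)
            t_hit[hit] = k * dt
            if not np.isnan(t_hit).any():
                break
        T[i] = np.nanmean(t_hit)
    return T

for _ in range(3):                   # a few sweeps of the adjustment loop
    T = mean_return_times(offsets)
    # heuristic update: shift each anchor point by its return-time mismatch
    offsets = offsets + 0.5 * (T - T.mean())
```

In each sweep the mean return time is estimated for every anchor point and the angular offsets are shifted by the mismatch with the section average; [41] iterates such updates until the mean return times equalize across the section.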
4.3.1 The modified Andronov–Vitt–Pontryagin formula in [13]
Cao, Lindner and Thomas [13] have made this approach rigorous by using a modified version of the Andronov–Vitt–Pontryagin formula for the mean first passage time (MFPT) \(\tau _D\) on a bounded domain D through its boundary \(\partial D\). In more detail (cf. [40, Chapter 4.4]), the associated boundary value problem for \(\mathcal L\) denoting the generator of the process, also called backward Kolmogorov operator, is given by
which is solved by
The problem in our case is that if we consider a domain whose absorbing boundary in the \(\vartheta \)-direction is a line \({\tilde{l}} := \{({\tilde{\vartheta }}({\tilde{r}}), {\tilde{r}}) \,:\, R_1 \le {\tilde{r}} \le R_2 \} \), where \({\tilde{\vartheta }}\) is a smooth function, the stochastic motion might not perform a full rotation to reach this boundary line. In particular, the mean return time for trajectories starting on \({\tilde{l}}\) will be zero. To circumvent this problem, Cao et al. unwrap the phase by considering infinitely many copies of \({\tilde{l}}\) on the extended domain \( \mathbb {R} \times [R_1, R_2]\). For some \((\vartheta ,r)\) with \(\vartheta< 2 \pi < \tilde{\vartheta } (r)\), the mean first passage time \(T(\vartheta , r)\) is then calculated via the Andronov–Vitt–Pontryagin formula with periodic-plus-jump boundary condition in the \(\vartheta \)-direction and reflecting boundary condition in the \(r\)-direction.
In more detail, the process solving Eq. (4.20) (or its Itô version, respectively), with strongly elliptic generator \({\mathcal {L}}\) and its adjoint \({\mathcal {L}}^*\), the forward Kolmogorov operator, is assumed to have a unique stationary density \(\rho \) on \(\Omega = [0, 2 \pi ) \times [R_1, R_2]\) solving the stationary Fokker–Planck equation
with reflecting (Neumann) boundary conditions at \(r \in \{R_1, R_2\}\) and periodic boundaries \(\rho (0,r) = \rho (2 \pi ,r)\) for all \(r \in [R_1, R_2]\). For model (4.20), the stationary probability current \(J_{\rho }\) reads, for \(j=1,2\),
Furthermore, for a \(C^1\)function \(\gamma : [R_1, R_2] \rightarrow [0, 2 \pi ]\) the graph \(C_{\gamma }\) (cf. \({\tilde{l}}\) above) separates the domain \(\Omega _{\text {ext}}= \mathbb {R} \times [R_1, R_2]\) into a left and right connected component, with unit normal vector n(r) oriented to the right. It is then assumed that the mean rightward probability flux through \(C_{\gamma }\) is positive, which means that
The mean period of the oscillator is then given as
The modified Andronov–Vitt–Pontryagin formula is then given by the following PDE, with reflecting and jump-periodic boundary conditions
In fact, the last condition can be weakened to
Under the discussed assumptions, it is then shown in [13, Theorem 3.1] that the equation has a solution \(T(\vartheta , r)\) on \(\Omega _{\text {ext}}\) and, hence by restriction, on \(\Omega \), which is unique up to an additive constant. The level sets of \(T(\vartheta ,r)\) are then supposed to be the stochastic isochrons \(W^\mathbb {E}((\vartheta ,r))\) with mean return time \({\bar{T}}\) and associated isophase (up to some constant \(\bar{\Theta }_0\))
which therefore satisfies
4.3.2 Relation to random isochrons
Recall from Definition 9 that, for a CRPS \((\psi ,T)\), the random period \(T(\omega )\) corresponds to the random forward isochron \(W^{{\text {f}}}(\omega , \psi (0, \omega ))\) for all \(\omega \in \Omega \). Hence, we can define the expected period as
where the index RDS indicates the random dynamical systems perspective. In the following, we discuss how \({\bar{T}}_{\text {RDS}}\) is related to \({\bar{T}}\) and the isochron function \({\bar{\Theta }}\) (4.39).
4.3.3 Expectation of random period
Similarly to Sect. 4.3.1, consider Eq. (4.20) in an annulus \({\mathcal {R}}\) given by \(0 \le R_1 \le r \le R_2 \le \infty \), i.e. including the full space case \({\mathcal {R}} = [0, \infty ) \times [0, 2 \pi )\). Consider the slightly modified version of the PDE system (4.37)
where for the case \(R_1 > 0, R_2 < \infty \) one can take again Neumann boundary conditions
Then we can formulate the following observation.
Proposition 6
Assume that system (4.20) has a CRPS \((\psi ,T)\), fixing \(\psi (0, \omega ) \in \{0\} \times (R_1, R_2)\) for \(0 \le R_1 < R_2 \le \infty \), where the RDS and its attracting random cycle are supported within \((R_1, R_2) \times [0, 2 \pi )\) (see Theorem 2 and Remark 7). Then we obtain that

(a)
The expectation of the random period is given by
$$\begin{aligned} \mathbb {E} [T(\cdot )] = \mathbb {E} [u(\psi ( T(\cdot ),\theta _{T(\cdot )}(\cdot )))] - \mathbb {E} [u(\psi (0,\cdot ))], \end{aligned}$$(4.42)where u solves Eq. (4.41),

(b)
In particular, if the radial components of \(\psi (0,\cdot )\) and \(\psi ( T(\cdot ),\theta _{T(\cdot )}(\cdot ))\) have the same distribution on \((R_1, R_2)\), we have
$$\begin{aligned} {\bar{T}} = {\bar{T}}_{\text {RDS}} = \mathbb {E} [T(\cdot )]\,. \end{aligned}$$
Proof
As we have seen in the proof of Proposition 2, the period \(T(\omega )\) has to satisfy for system (4.20)
Hence, using Dynkin’s equation for the solution u of the boundary value problem (4.41), we obtain
which shows claim (a).
Claim (b) follows straightforwardly, inserting (4.41) into (4.42). \(\quad \square \)
Note that this result is consistent with the basic Example 2, where we have \(T(\omega ) = {\bar{T}}\) for all \(\omega \in \Omega \) since in this case \(\psi (0,\cdot )\) and \(\psi ( T(\cdot ),\theta _{T(\cdot )}(\cdot ))\) are both distributed according to the stationary radial solution \(r^*(\omega )\). Additionally, note that u is the isochron function via mean return time, as discussed in Sect. 4.3.1.
4.3.4 Expectation of isochron function
Furthermore, we want to give an alternative derivation to Sect. 4.3.1 of an isochron function \(\bar{\phi }((\vartheta ,r)): \mathcal {R} \rightarrow \mathbb {R}\), yielding the sections \(W^\mathbb {E}((\vartheta ,r))\) with fixed mean return time given as level sets
In more detail, we try to find the function \({\bar{\phi }}\) via an expected version of Eqs. (4.29) and (4.30). We fix \((\vartheta _0, r_0) \in \mathcal R\) and require that the function \({\bar{\phi }}\) satisfies along solutions \((\vartheta (t), r(t))\) of the SDE (4.20) the equality (cf. Eq. (2.7) in the deterministic case)
By this, we can show the following result:
Proposition 7
There is \({\bar{\phi }}((\vartheta ,r)): \mathcal {R} \rightarrow \mathbb {R}\) and a period \({\bar{T}} > 0\) with
This \({\bar{T}}\) is the expected return time to the isochron \(W^\mathbb {E}((\vartheta _0,r_0))\) which is the level set of \(\bar{\phi }(\vartheta _0, r_0)\).
In particular, the function \({\bar{\phi }}\) can be identified with the solution \({\bar{\Theta }}\) of Eq. (4.39).
Proof
Using the chain rule of Stratonovich calculus and inserting (4.20), Eq. (4.44) can be rewritten as
where the boundary condition in angular direction is
for all \(R_1 \le r \le R_2\), fixing
and
In radial direction, if \(0< R_1< R_2 < \infty \), one can choose reflecting boundary conditions as in Sect. 4.3.1.
Writing time t as an index, transforming the Stratonovich noise terms into Itô noise terms and using the fact that the Itô noise terms have zero expectation, leads to the equation
where \({\mathcal {L}}\) denotes the backward Kolmogorov operator associated with the SDE (4.20). In particular, a solution is given by the stationary version
with boundary condition (4.46). Note that, up to the change of sign \(\phi \rightarrow -\phi \), Eq. (4.48) is Dynkin’s equation and that Eq. (4.39) is equivalent to Eq. (4.48) with boundary condition (4.46) such that \({\bar{\phi }}\) is taken as a function from the domain \(\Omega \) to \(\mathbb {R} \mod \bar{T}\). Hence, the two approaches, one starting with (4.44) and the other considering the MFPT, lead to the same outcome regarding the stochastic isochrons \(W^\mathbb {E}((\vartheta ,r))\). \(\quad \square \)
We exemplify this derivation of an isochron function \({\bar{\phi }}\) with the fundamental Example 3:
Example 6
Recall Example 3 with Eq. (3.12), i.e. in its most general form,
choosing \(h(r) = \kappa + (r^2 - 1) \), \(\kappa \ge 1\), similarly to [41, Example (1)], and \({\tilde{h}}\) some arbitrary smooth and bounded function. Note that \(r^*=1\) for this case and that there is a stationary density p for the radial process which has the form
where \(Z > 0\) is a normalization constant. One can then additionally observe that \(\mathbb {E}_{p} [r^2] = 1\) for all \(\sigma \ge 0\), and, hence, \(\mathbb {E}_{p} [h(r)] = \kappa .\)
It is easy to see that
solves (4.48) such that (4.45) is actually satisfied with \({\bar{T}} = \frac{2\pi }{\kappa }\). In fact, we have (up to some constant \({\bar{\phi }}_0\))
which, in this case, is also the deterministic isochron.
Similarly to \({\bar{T}}_{\text {RDS}} := \mathbb {E} [T(\cdot )]\), we can introduce for the associated random isochron map \({\tilde{\phi }}\) the expected quantity
for fixed \(x \in \mathbb {R}^m\), where \({\tilde{\phi }}\) is the random isochron map from Sect. 4.2. It remains to clarify how the isochron function \({\bar{\phi }}\), or equivalently \(\bar{\Theta }\) (4.39), may be related to \(\bar{\phi }_{\text {RDS}}\), assuming the existence of a CRPS \((\psi , T)\) as for Example 3 (see Proposition 2). We give a brief discussion of a possible approach to this question in “Appendix .1”, leaving a more thorough investigation as future work.
5 Conclusion
We have introduced a new perspective on the problem of stochastic isochronicity, by considering random isochrons as random stable manifolds anchored at attracting random cycles with random periodic solutions. We have further characterized these random isochrons as level sets of a time-dependent random isochron map. Precisely this time-dependence of the random dynamical system, i.e., its nonautonomous nature, makes it difficult to specify the concrete relation to the definitions of stochastic isochrons given by fixed expected mean return times, for which we have given an alternative derivation of the isochron function \({\bar{\phi }}\) with return time \({\bar{T}}\). We suggest an extended investigation of their relationship to the expected quantity \({\bar{\phi }}_{\text {RDS}}\) as an intriguing problem for future work. Additionally, it would be interesting to study the relation between stochastic isochronicity via eigenfunctions of the backward Kolmogorov operator [44] and random Koopman operators (see [18]), extending the eigenfunction approach from the deterministic setting to the random dynamical systems case.
Notes
Through personal communication.
References
Arnold, L.: Random Dynamical Systems. Springer, Berlin (1998)
Arnold, L., Scheutzow, M.: Perfect cocycles through stochastic differential equations. Probab. Theory Relat. Fields 101(1), 65–88 (1995)
Baudel, M., Berglund, N.: Spectral theory for random Poincaré maps. SIAM J. Math. Anal. 49(6), 4319–4375 (2017)
Bauermeister, C., Schwalger, T., Russell, D., Neiman, A., Lindner, B.: Characteristic effects of stochastic oscillatory forcing on neural firing: analytical theory and comparison to paddlefish electroreceptor data. PLoS Comput. Bio. 9(8), e1003170 (2013)
Baxendale, P.: Statistical equilibrium and two-point motion for a stochastic flow of diffeomorphisms. Spatial Stochastic Processes. Volume 19 of Progress in Probability, pp. 189–218. Birkhäuser, Boston (1991)
Benzi, R., Parisi, G., Sutera, A., Vulpiani, A.: Stochastic resonance in climatic change. Tellus 34(11), 10–16 (1982)
Berglund, N., Gentz, B., Kuehn, C.: From random Poincaré maps to stochastic mixed-mode-oscillation patterns. J. Dyn. Differ. Equ. 27(1), 83–136 (2015)
Berglund, N., Landon, D.: Mixed-mode oscillations and interspike interval statistics in the stochastic FitzHugh–Nagumo model. Nonlinearity 25, 2303–2335 (2012)
Biskamp, M.: Pesin’s formula for random dynamical systems on \(\mathbb{R}^d\). J. Dyn. Differ. Equ. 26(1), 109–142 (2014)
Blumenthal, A., Young, L.S.: Equivalence of physical and SRB measures in random dynamical systems. Nonlinearity 32(4), 1494–1524 (2019)
Breden, M., Engel, M.: Computer-assisted proof of shear-induced chaos in stochastically perturbed Hopf systems (2021). arXiv:2101.01491
Brooks, H., Bressloff, P.: Quasicycles in the stochastic hybrid Morris–Lecar neural model. Phys. Rev. E 92(1), 012704 (2015)
Cao, A., Lindner, B., Thomas, P.J.: A partial differential equation for the mean-return-time phase of planar stochastic oscillators. SIAM J. Appl. Math. 80(1), 422–447 (2020)
Chicone, C.: Ordinary Differential Equations with Applications. Volume 34 of Texts in Applied Mathematics, 2nd edn. Springer, New York (2006)
Crauel, H.: Markov measures for random dynamical systems. Stoch. Stoch. Rep. 37(3), 153–173 (1991)
Crauel, H., Flandoli, F.: Attractors for random dynamical systems. Probab. Theory Relat. Fields 100, 365–393 (1994)
Crauel, H., Kloeden, P.: Nonautonomous and random attractors. Jahresbericht der Deutschen MathematikerVereinigung 117(3), 173–206 (2015)
Črnjarić-Žic, N., Maćešić, S., Mezić, I.: Koopman operator spectrum for random dynamical systems. J. Nonlinear Sci. 30(5), 2007–2056 (2019)
Dimitroff, G., Scheutzow, M.: Attractors and expansion for Brownian flows. Electron. J. Probab. 16(42), 1193–1213 (2011)
Doan, T.S., Engel, M., Lamb, J.S.W., Rasmussen, M.: Hopf bifurcation with additive noise. Nonlinearity 31(10), 4567–4601 (2018)
Engel, M.: Local phenomena in random dynamical systems: bifurcations, synchronisation, and quasistationary dynamics. Ph.D. thesis, Imperial College London (2018)
Engel, M., Lamb, J.S.W., Rasmussen, M.: Bifurcation analysis of a stochastically driven limit cycle. Commun. Math. Phys. 365(3), 935–942 (2019)
Flandoli, F., Gess, B., Scheutzow, M.: Synchronization by noise. Probab. Theory Relat. Fields 168(3–4), 511–556 (2017)
Flandoli, F., Schmalfuss, B.: Random attractors for the 3D stochastic Navier–Stokes equation with multiplicative white noise. Stoch. Stoch. Rep. 59(1–2), 21–45 (1996)
Gates, D., Su, J., Dingwell, J.: Possible biomechanical origins of the long-range correlations in stride intervals of walking. Phys. A 380, 259–270 (2007)
Giacomin, G., Poquet, C., Shapira, A.: Small noise and long time phase diffusion in stochastic limit cycle oscillators. J. Differ. Equ. 264(2), 1019–1049 (2018)
Guckenheimer, J.: Isochrons and phaseless sets. J. Math. Biol. 1(3), 259–273 (1975)
Kloeden, P., Rasmussen, M.: Nonautonomous Dynamical Systems. Mathematical Surveys and Monographs, vol. 176. American Mathematical Society, Providence (2011)
Ledrappier, F., Young, L.S.: Entropy formula for random transformations. Probab. Theory Related Fields 80(2), 217–240 (1988)
Li, J., Lu, K., Bates, P.W.: Invariant foliations for random dynamical systems. Discrete Contin. Dyn. Syst. 34(9), 3639–3666 (2014)
Lindner, B., Garcia-Ojalvo, J., Neiman, A., Schimansky-Geier, L.: Effects of noise in excitable systems. Phys. Rep. 392, 321–424 (2004)
Liu, P., Qian, M.: Smooth Ergodic Theory of Random Dynamical Systems. Lecture Notes in Mathematics, vol. 1606. Springer, Berlin (1995)
Nicolis, C., Nicolis, G.: Stochastic aspects of climatic transitions–additive fluctuations. Tellus 33(3), 225–234 (1981)
Pikovsky, A.: Comment on “asymptotic phase for stochastic oscillators”. Phys. Rev. Lett. 115, 069401 (2015)
Revzen, S., Guckenheimer, J.: Finding the dimension of slow dynamics in a rhythmic system. J. R. Soc. Interface 9, 957–971 (2012)
Ruelle, D.: Ergodic theory of differentiable dynamical systems. Institut des Hautes Études Scientifiques. Publications Mathématiques 50, 27–58 (1979)
Sadhu, S., Kuehn, C.: Stochastic mixed-mode oscillations in a three-species predator-prey model. Chaos 28(3), 033606 (2017)
Scheutzow, M.: Comparison of various concepts of a random attractor: a case study. Arch. Math. (Basel) 78(3), 233–240 (2002)
Schreiber, S., Benaïm, M., Atchadé, K.: Persistence in fluctuating environments. J. Math. Biol. 62(5), 655–683 (2011)
Schuss, Z.: Theory and Applications of Stochastic Processes. Volume 170 of Applied Mathematical Sciences. Springer, Berlin (2010)
Schwabedal, J., Pikovsky, A.: Phase description of stochastic oscillations. Phys. Rev. Lett. 110(20), 204102 (2013)
Schwabedal, J., Pikovsky, A., Kralemann, B., Rosenblum, M.: Optimal phase description of chaotic oscillators. Phys. Rev. E 85, 026216 (2012)
Su, J., Rubin, J., Terman, D.: Effects of noise on elliptic bursters. Nonlinearity 17, 133–157 (2004)
Thomas, P., Lindner, B.: Asymptotic phase for stochastic oscillators. Phys. Rev. Lett. 113(25), 254101 (2014)
Thomas, P.J., Lindner, B.: Thomas and Lindner reply. Phys. Rev. Lett. 115, 069402 (2015)
Zhao, H., Zheng, Z.H.: Random periodic solutions of random dynamical systems. J. Differ. Equ. 246(5), 2020–2038 (2009)
Acknowledgements
The authors gratefully acknowledge support by the DFG via the SFB/TR 109 Discretization in Geometry and Dynamics. ME has also been supported by Germany's Excellence Strategy – The Berlin Mathematics Research Center MATH+ (EXC-2046/1, Project ID: 390685689). CK acknowledges support by a Lichtenberg Professorship of the VolkswagenFoundation.
Funding
Open Access funding enabled and organized by Projekt DEAL.
Author information
Authors and Affiliations
Corresponding author
Additional information
Communicated by M. Hairer
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
A Random Dynamical Systems
In this appendix we have collected several constructions for reference from the theory of random dynamical systems, which we have used throughout the main part of this work.
1.1 A.1 Random dynamical systems induced by stochastic differential equations
Following [23], we make the following definition:
Definition A.1
(White noise RDS). Let \((\theta , \varphi )\) be a random dynamical system over a probability space \((\Omega ,{\mathcal {F}},{\mathbb {P}})\) on a topological space \({\mathcal {X}}\) where \(\varphi \) is defined in forward time. Let \((\mathcal {F}_s^t)_{-\infty \le s \le t \le \infty } \) be a family of sub-\(\sigma \)-algebras of \({\mathcal {F}}\) such that

(i)
\({\mathcal {F}}_t^u \subset {\mathcal {F}}_s^v\) for all \( s \le t \le u \le v\),

(ii)
\({\mathcal {F}}_s^t\) is independent of \({\mathcal {F}}_u^v\) for all \( s \le t \le u \le v\),

(iii)
\( \theta _r^{-1}({\mathcal {F}}_s^t) = \mathcal {F}_{s+r}^{t+r}\) for all \( s \le t\), \(r \in \mathbb {R}\),

(iv)
\(\varphi (t, \cdot , x)\) is \({\mathcal {F}}_0^t\)measurable for all \(t \ge 0\) and \(x \in {\mathcal {X}}\).
Furthermore we denote by \(\mathcal {F}_{-\infty }^t\) the smallest \(\sigma \)-algebra containing all \({\mathcal {F}}_s^t\), \(s \le t\), and by \(\mathcal {F}_{t}^{\infty }\) the smallest \(\sigma \)-algebra containing all \({\mathcal {F}}_t^u\), \(t \le u\). Then \((\theta , \varphi )\) is called a white noise (filtered) random dynamical system.
Consider a stochastic differential equation (SDE)
where \((W_t)\) denotes some r-dimensional standard Brownian motion, the drift \(f: \mathbb {R}^d \rightarrow \mathbb {R}^d\) is a locally Lipschitz continuous vector field and the diffusion coefficient \(g: \mathbb {R}^d \rightarrow \mathbb {R}^{d \times r}\) a Lipschitz continuous matrix-valued map. If in addition f satisfies a bounded growth condition, as for example a one-sided Lipschitz condition, then by [19] there is a white noise random dynamical system \((\theta , \varphi )\) associated to the diffusion process solving (A.1). The probabilistic setting is as follows: We set \(\Omega =C_0({\mathbb {R}},{\mathbb {R}}^r)\), i.e. the space of all continuous functions \(\omega :{\mathbb {R}}\rightarrow {\mathbb {R}}^r\) satisfying \(\omega (0)=0\in {\mathbb {R}}^r\). If we endow \(\Omega \) with the compact open topology given by the complete metric
we can set \({\mathcal {F}} = \mathcal {B} (\Omega )\), the Borel \(\sigma \)-algebra on \((\Omega ,\kappa )\). There exists a probability measure \({\mathbb {P}}\) on \((\Omega ,{\mathcal {F}})\), called the Wiener measure, such that the r processes \((W_t^1), \dots , (W_t^r)\) defined by \((W_t^1(\omega ), \dots , W_t^r(\omega ))^{\mathrm T}:=\omega (t)\) for \(\omega \in \Omega \) are independent one-dimensional Brownian motions. Furthermore, we define the sub-\(\sigma \)-algebra \({\mathcal {F}}_s^t\) as the \(\sigma \)-algebra generated by \(\omega (u) - \omega (v)\) for \(s \le v \le u \le t\). The ergodic metric dynamical system \((\theta _t)_{t\in {\mathbb {R}}}\) on \((\Omega ,{\mathcal {F}},{\mathbb {P}})\) is given by the shift maps
$$\begin{aligned} \theta _t \omega (\cdot ) = \omega (t + \cdot ) - \omega (t)\,, \quad t \in {\mathbb {R}}\,. \end{aligned}$$
Indeed, these maps form an ergodic flow preserving the probability \({\mathbb {P}}\), see e.g. [1].
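For a path \(\omega \) discretized on a grid, the Wiener shift and its flow property can be checked directly; the grid size, path length and seed below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
dt, n = 0.01, 1000
# Discretized sample path omega on [0, 10] with omega(0) = 0; a two-sided
# path would be handled analogously.
omega = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))])

def shift(om, k):
    """(theta_{k dt} omega)(j dt) = omega((k + j) dt) - omega(k dt)."""
    return om[k:] - om[k]

th = shift(omega, 300)
print(th[0])                  # shifted path again starts at 0
# Flow property on the grid: theta_{s+t} = theta_s o theta_t.
lhs = shift(omega, 500)
rhs = shift(shift(omega, 200), 300)
print(np.allclose(lhs, rhs))  # True
```

The re-basing term \(-\omega (t)\) is what keeps the shifted path in \(\Omega = C_0\); without it the shifts would not preserve the Wiener measure's normalization \(\omega (0) = 0\).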
Note that, by the Itô–Stratonovich conversion formula, equation (A.1) with Stratonovich noise instead of Itô noise also induces a random dynamical system under analogous assumptions.
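In the scalar case \(d = r = 1\) the conversion can be written out explicitly; a sketch (in higher dimensions the correction term \(\tfrac{1}{2} g g'\) is replaced componentwise by \(\tfrac{1}{2} \sum _{j,k} g_{jk}\, \partial _j g_{ik}\)):

```latex
% Ito-Stratonovich conversion, scalar case d = r = 1:
% the Stratonovich SDE
%   dX_t = f(X_t) dt + g(X_t) \circ dW_t
% has the same solutions as the Ito SDE with corrected drift
\mathrm{d}X_t = \Bigl( f(X_t) + \tfrac{1}{2}\, g(X_t)\, g'(X_t) \Bigr)\, \mathrm{d}t
              + g(X_t)\, \mathrm{d}W_t \,.
```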
A.2 Invariant measures
Let \((\theta , \varphi )\) be a random dynamical system with the cocycle \(\varphi \) being defined on one- or two-sided time \({\mathbb {T}} \in \{ \mathbb {R}_0^+, \mathbb {R} \}\). Then the system generates a skew product flow, i.e. a family of maps \((\Theta _t)_{t \in \mathbb {T}}\) from \(\Omega \times {\mathcal {X}}\) to itself such that for all \(t \in {\mathbb {T}}\) and \(\omega \in \Omega , x \in {\mathcal {X}}\)
$$\begin{aligned} \Theta _t(\omega , x) = \left( \theta _t \omega , \varphi (t, \omega , x) \right) \,. \end{aligned}$$
The notion of an invariant measure for the random dynamical system is given via invariance with respect to the skew product flow, see e.g. [1, Definition 1.4.1]. We denote by \( T\mu \) the pushforward of a measure \(\mu \) by a map T, i.e. \(T \mu (\cdot ) = \mu (T^{-1}(\cdot ))\).
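A minimal sketch of the skew-product structure: an Euler–Maruyama scheme for the illustrative SDE \(\mathrm {d}X = -X\, \mathrm {d}t + \sigma \, \mathrm {d}W\) (our choice of example, not from the text) is itself a cocycle over the shift that discards consumed noise increments, so the cocycle identity \(\varphi (t+s, \omega , x) = \varphi (t, \theta _s \omega , \varphi (s, \omega , x))\) holds exactly on the grid:

```python
import numpy as np

rng = np.random.default_rng(2)
dt, sigma = 0.01, 0.5
dW = rng.normal(0.0, np.sqrt(dt), 500)   # noise increments playing the role of omega

def phi(n, inc, x):
    """Euler-Maruyama cocycle for the illustrative SDE dX = -X dt + sigma dW."""
    for i in range(n):
        x = x - x * dt + sigma * inc[i]
    return x

def theta(n, inc):
    """Discrete shift: forget the first n increments."""
    return inc[n:]

# Skew product Theta_t(omega, x) = (theta_t omega, phi(t, omega, x)); the
# cocycle property phi(t+s, omega, x) = phi(t, theta_s omega, phi(s, omega, x))
# holds exactly on the grid.
x0 = 1.3
lhs = phi(500, dW, x0)
rhs = phi(300, theta(200, dW), phi(200, dW, x0))
print(np.isclose(lhs, rhs))   # True
```

The exactness on the grid reflects that measurability condition (iv) lets \(\varphi (t, \cdot , x)\) depend only on the noise increments up to time t, which is all the shift removes.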
Definition A.2
(Invariant measure). A probability measure \(\mu \) on \(\Omega \times {\mathcal {X}}\) is invariant for the random dynamical system \((\theta , \varphi )\) if

(i)
\(\Theta _t \mu = \mu \) for all \( t \in {\mathbb {T}}\) ,

(ii)
the marginal of \(\mu \) on \(\Omega \) is \(\mathbb {P}\), i.e. \(\mu \) can be factorised uniquely into \(\mu (\mathrm {d}\omega , \mathrm {d}x) = \mu _{\omega }(\mathrm {d}x) \mathbb {P}(\mathrm {d}\omega )\) where \(\omega \mapsto \mu _{\omega }\) is a random measure (or disintegration or sample measure) on \({\mathcal {X}}\), i.e. \(\mu _{\omega }\) is a probability measure on \({\mathcal {X}}\) for \(\mathbb {P}\)a.a. \(\omega \in \Omega \) and \(\omega \mapsto \mu _{\omega }(B)\) is measurable for all \(B \in \mathcal {B}({\mathcal {X}})\).
The marginal of \(\mu \) on the probability space is required to be \(\mathbb {P}\) since we assume the model of the noise to be fixed. Note that the invariance of \(\mu \) is equivalent to the invariance of the random measure \(\omega \mapsto \mu _{\omega }\) on the state space \({\mathcal {X}}\) in the sense that
$$\begin{aligned} \varphi (t, \omega , \cdot ) \mu _{\omega } = \mu _{\theta _t \omega } \quad \text {for all } t \in {\mathbb {T}} \text { and } {\mathbb {P}}\text {-a.a. } \omega \in \Omega \,. \end{aligned}$$(A.2)
For white noise random dynamical systems \((\theta , \varphi )\), in particular random dynamical systems induced by a stochastic differential equation, there is a one-to-one correspondence between certain invariant random measures and stationary measures of the associated stochastic process, first observed in [15]. In more detail, we can define a Markov semigroup \((P_t)_{t \ge 0}\) by setting
$$\begin{aligned} P_t f(x) = {\mathbb {E}} \left[ f(\varphi (t, \cdot , x)) \right] \end{aligned}$$
for all measurable and bounded functions \(f: {\mathcal {X}}\rightarrow \mathbb {R}\). If \(\omega \mapsto \mu _{\omega }\) is an \({\mathcal {F}}_{-\infty }^{0}\)-measurable invariant random measure in the sense of (A.2), also called a Markov measure, then
$$\begin{aligned} \rho := {\mathbb {E}} \mu _{\cdot } = \int _{\Omega } \mu _{\omega } \, {\mathbb {P}}(\mathrm {d}\omega ) \end{aligned}$$
turns out to be an invariant measure for the Markov semigroup \((P_t)_{t \ge 0}\), often also called a stationary measure for the associated process. If \(\rho \) is an invariant measure for the Markov semigroup, then
$$\begin{aligned} \mu _{\omega } := \lim _{t \rightarrow \infty } \varphi (t, \theta _{-t} \omega , \cdot )\, \rho \end{aligned}$$
exists \(\mathbb {P}\)-a.s. and is an \({\mathcal {F}}_{-\infty }^{0}\)-measurable invariant random measure.
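For the Ornstein–Uhlenbeck process \(\mathrm {d}X = -X \,\mathrm {d}t + \sigma \,\mathrm {d}W\) (chosen here as a worked example, not taken from the text) this pullback construction is easy to watch numerically: for fixed noise, all initial conditions are pulled to the same random point, so \(\mu _{\omega }\) is a random Dirac measure, and averaging the random fixed point over \(\omega \) recovers the stationary measure \(\rho = N(0, \sigma ^2/2)\). The discretization below is an illustrative sketch:

```python
import numpy as np

rng = np.random.default_rng(3)
dt, sigma, T = 0.01, 1.0, 10.0
n = int(T / dt)

def pullback(x, dW):
    """Euler-Maruyama for dX = -X dt + sigma dW, run from time -T up to 0."""
    for inc in dW:
        x = x - x * dt + sigma * inc
    return x

# One fixed noise realization: all initial conditions are pulled to the same
# point, so the fiber mu_omega is a random Dirac mass.
dW = rng.normal(0.0, np.sqrt(dt), n)
a, b = pullback(-5.0, dW), pullback(7.0, dW)
print(abs(a - b))             # ~ e^{-T} * 12, essentially zero

# Averaging the random fixed point over omega recovers the stationary
# measure rho = N(0, sigma^2 / 2), i.e. variance 0.5.
dWs = rng.normal(0.0, np.sqrt(dt), size=(2000, n))
xs = np.zeros(2000)
for i in range(n):
    xs = xs - xs * dt + sigma * dWs[:, i]
print(xs.var())               # ~ 0.5
```

Note the pullback (starting further and further in the past, observing at time 0) rather than pushing forward: it is the former limit that converges pathwise.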
We observe similarly to [5] that, in the situation of \(\mu \) and \(\rho \) corresponding in the way described above, the \({\mathcal {F}}_{-\infty }^{0}\)-measurable random variable \(\omega \mapsto \mu _{\omega }(B)\) is independent of \({\mathcal {F}}_0^{\infty }\), so that
$$\begin{aligned} {\mathbb {E}} \left[ \mu _{\cdot }(B) \, \big | \, {\mathcal {F}}_0^{\infty } \right] = {\mathbb {E}} \left[ \mu _{\cdot }(B) \right] = \rho (B) \quad \text {for all } B \in \mathcal {B}({\mathcal {X}})\,, \end{aligned}$$
and, hence,
$$\begin{aligned} \mu = {\mathbb {P}} \times \rho \quad \text {on } {\mathcal {F}}_0^{\infty } \times \mathcal {B}({\mathcal {X}})\,. \end{aligned}$$
Therefore the probability measure \(\mathbb {P} \times \rho \) is invariant for \((\Theta _t)_{t \ge 0}\) on \((\Omega \times {\mathcal {X}}, \mathcal {F}_0^\infty \times \mathcal {B}({\mathcal {X}}))\). In words, the product measure with marginals \(\mathbb {P}\) and \(\rho \) is invariant for the random dynamical system restricted to one-sided path space.
A.3 Lyapunov spectrum
Consider a \(C^k\) random dynamical system \((\theta , \varphi )\), i.e. \(\varphi (t, \omega , \cdot ) \in C^k\) for all \(t\in {\mathbb {T}}\) and \(\omega \in \Omega \), where again \( {\mathbb {T}} \in \{ \mathbb {R}, \mathbb {R}_0^+\}\). We assume that \({\mathcal {X}}\) is a smooth m-dimensional manifold and that \((\theta , \varphi )\) is \(C^1\). Recall that the linearization or derivative \(\mathrm {D}\varphi (t,\omega ,x)\) of \(\varphi (t,\omega ,\cdot )\) at \(x \in {\mathcal {X}}\) is a linear map from the tangent space \(T_x\) to the tangent space \(T_{\varphi (t,\omega ,x)}\). If \({\mathcal {X}}= \mathbb {R}^m\), the linearization is simply the Jacobian \(m\times m\) matrix
$$\begin{aligned} \mathrm {D}\varphi (t,\omega ,x) = \left( \frac{\partial \varphi _i(t,\omega ,x)}{\partial x_j} \right) _{i,j=1,\dots ,m}\,. \end{aligned}$$
Further assume that the random dynamical system possesses an invariant measure \(\mu \). In case \({\mathcal {X}}= \mathbb {R}^m\), this implies that \((\Theta ,\mathrm {D}\varphi )\) is a random dynamical system with linear cocycle \(\mathrm {D}\varphi \) over the metric dynamical system \((\Omega \times {\mathcal {X}}, {\mathcal {F}} \times \mathcal {B}({\mathcal {X}}), (\Theta _t)_{t \in {\mathbb {T}}})\), see e.g. [1, Proposition 4.2.1]. Generally, we have that \(\mathrm {D}\varphi \) is a linear bundle random dynamical system on the tangent bundle \(T{\mathcal {X}}\) (see [1, Definition 1.9.3, Proposition 4.25]).
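The derivative cocycle can be computed alongside the flow by integrating the variational equation. A deterministic sketch (noise switched off, with an illustrative vector field of our choosing), comparing the result with a finite-difference derivative of the flow:

```python
import numpy as np

# Deterministic sketch (g = 0): for x' = f(x) = x - x^3, the derivative
# D phi(t, x) solves the variational equation v' = f'(phi(s, x)) v, v(0) = 1.
f = lambda x: x - x**3
fp = lambda x: 1.0 - 3.0 * x**2

def flow_and_derivative(t, x, dt=1e-4):
    v = 1.0
    for _ in range(int(round(t / dt))):
        # Euler step for the flow and its exact discrete derivative
        # (both updates use the current x).
        x, v = x + f(x) * dt, v + fp(x) * v * dt
    return x, v

x0, h, t = 0.3, 1e-5, 2.0
_, v = flow_and_derivative(t, x0)
# Compare with a central finite difference of the flow itself.
fd = (flow_and_derivative(t, x0 + h)[0] - flow_and_derivative(t, x0 - h)[0]) / (2 * h)
print(v, fd)   # the two values agree
```

For an SDE the same coupling applies, with the variational equation driven by the Jacobians of drift and diffusion along the trajectory.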
In case the derivative can be written as a matrix, as for example for \({\mathcal {X}}= \mathbb {R}^m\), the Jacobian \(\mathrm {D}\varphi (t,\omega ,x)\) satisfies Liouville’s equation
We summarise the different versions of the Multiplicative Ergodic Theorem for differentiable random dynamical systems in one-sided and two-sided time in the following theorem [1, Theorem 3.4.1, Theorem 3.4.11, Theorem 4.2.6], establishing a Lyapunov spectrum with an associated filtration of random sets and, in two-sided time, with a splitting into invariant random subspaces.
Theorem A.3

(a)
Suppose the \(C^1\) random dynamical system \((\theta ,\varphi )\), where \(\varphi \) is defined in forward time, has an ergodic invariant measure \(\nu \) and satisfies the integrability condition
$$\begin{aligned} \sup _{0 \le t \le 1} \ln ^+ \Vert \mathrm {D}\varphi (t, \omega , x) \Vert \in L^1(\nu ). \end{aligned}$$Then there exist a \(\Theta \)invariant set \(\Delta \subset \Omega \times {\mathcal {X}}\) with \(\nu (\Delta ) = 1 \), a number \(1 \le p \le m\) and real numbers \(\lambda _1> \dots > \lambda _p\), the Lyapunov exponents with respect to \(\nu \), such that for all \(0 \ne v \in T_x {\mathcal {X}}\cong \mathbb {R}^m\) and \((\omega ,x) \in \Delta \)
$$\begin{aligned} \lambda (\omega , x, v) := \lim _{t \rightarrow \infty } \frac{1}{t} \ln \Vert \mathrm {D}\varphi (t, \omega , x)v \Vert \in \{\lambda _p, \dots , \lambda _1\}\,. \end{aligned}$$Furthermore, the tangent space \(T_x {\mathcal {X}}\cong \mathbb {R}^m\) admits a filtration
$$\begin{aligned} {\mathbb {R}}^m = V_1(\omega , x) \supsetneq V_2(\omega ,x)\supsetneq \dots \supsetneq V_p(\omega ,x) \supsetneq V_{p+1}(\omega ,x)= \{0\}\,, \end{aligned}$$for all \((\omega ,x) \in \Delta \) such that
$$\begin{aligned} \lambda (\omega , x, v) = \lambda _{i}\quad \Longleftrightarrow \quad v \in V_i(\omega ,x) {\setminus } V_{i+1}(\omega ,x) \quad \text { for all } i\in \{1, \dots , p\}\,. \end{aligned}$$In case the derivative can be written as a matrix, we have for all \((\omega ,x) \in \Delta \)
$$\begin{aligned} \lim _{t \rightarrow \infty } \frac{1}{t} \ln \det \mathrm {D}\varphi (t, \omega , x) = \sum _{i=1}^p d_i \lambda _i\,, \end{aligned}$$(A.4)where \(d_i\) is the multiplicity of the Lyapunov exponent \(\lambda _i\) and \(\sum _{i=1}^p d_i =m\).

(b)
If the cocycle \(\varphi \) is defined in two-sided time and satisfies the above integrability condition also in backward time, there exists the Oseledets splitting
$$\begin{aligned} \mathbb {R}^m = E_1(\omega ,x) \oplus \cdots \oplus E_p(\omega ,x) \end{aligned}$$of the tangent space into random subspaces \(E_i(\omega ,x)\), the Oseledets spaces, for all \((\omega ,x) \in \Delta \). These have the following properties for all \((\omega ,x) \in \Delta \):

(i)
The Oseledets spaces are invariant under the derivative flow, i.e. for all \(t \in \mathbb {R}\)
$$\begin{aligned} \mathrm {D}\varphi (t, \omega ,x) E_i(\omega ,x) = E_i( \Theta _t(\omega ,x)) \,, \end{aligned}$$ 
(ii)
The Oseledets space \(E_i\) corresponds with \(\lambda _i\) in the sense that
$$\begin{aligned} \lim _{t \rightarrow \infty } \frac{1}{t} \ln \Vert \mathrm {D}\varphi (t, \omega , x)v \Vert = \lambda _{i}\quad \Longleftrightarrow \quad v \in E_i(\omega ,x) {\setminus } \{0\} \quad \text { for all } i\in \{1, \dots , p\}\,, \end{aligned}$$ 
(iii)
The dimension equals the multiplicity of the associated Lyapunov exponent, i.e.
$$\begin{aligned} \dim E_i(\omega ,x) = d_i\,. \end{aligned}$$

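A quick numerical sanity check of the theorem for the simplest linear cocycle: for \(\mathrm {d}v = a v\, \mathrm {d}t + \sigma v \,\mathrm {d}W\) (an illustrative example with parameters of our choosing), Itô's formula gives the Lyapunov exponent \(\lambda = a - \sigma ^2/2\), and the time average \(\frac{1}{t} \ln |v_t|\) along one long trajectory reproduces it:

```python
import numpy as np

rng = np.random.default_rng(4)
a, sigma, dt, T = 1.0, 0.5, 1e-3, 200.0
n = int(T / dt)

# Linear cocycle dv = a v dt + sigma v dW; by Ito's formula its Lyapunov
# exponent is a - sigma^2 / 2 = 0.875 for these (illustrative) parameters.
dWs = rng.normal(0.0, np.sqrt(dt), n)
# Accumulate log |v_t| step by step (Euler factors 1 + a dt + sigma dW)
# to avoid overflow of v_t itself.
lam = np.log(np.abs(1.0 + a * dt + sigma * dWs)).sum() / T
print(lam)   # close to 0.875
```

The fluctuation of the estimate around \(\lambda \) is of order \(\sigma /\sqrt{T}\), so long horizons are needed for tight estimates.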
A.4 Existence of random attractors
The existence of random attractors is proved via so-called absorbing sets. A set \(B\in {\mathcal {D}}\) is called an absorbing set if for almost all \(\omega \in \Omega \) and any \(D \in {\mathcal {D}}\), there exists a \(T>0\) such that
$$\begin{aligned} \varphi \left( t, \theta _{-t} \omega , D(\theta _{-t} \omega ) \right) \subset B(\omega ) \quad \text {for all } t \ge T\,. \end{aligned}$$
A proof of the following theorem can be found in [24, Theorem 3.5].
Theorem A.4
(Existence of random attractors). Suppose that \((\theta ,\varphi )\) is a continuous random dynamical system with an absorbing set B. Then there exists a unique random attractor A, given by
$$\begin{aligned} A(\omega ) = \bigcap _{\tau \ge 0} \overline{ \bigcup _{t \ge \tau } \varphi \left( t, \theta _{-t} \omega , B(\theta _{-t} \omega ) \right) }\,. \end{aligned}$$
Furthermore, \( \omega \mapsto A(\omega )\) is measurable with respect to \({\mathcal {F}}_{-\infty }^{0}\), i.e. the past of the system.
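A numerical sketch of absorption for the bistable SDE \(\mathrm {d}X = (X - X^3)\, \mathrm {d}t + \sigma \, \mathrm {d}W\) (a standard illustrative example; parameters are our choice): the cubic drift pushes widely separated initial conditions into a fixed bounded set within finite time, which is exactly the behaviour an absorbing set \(B\) encodes:

```python
import numpy as np

rng = np.random.default_rng(5)
dt, sigma, T = 1e-3, 0.2, 5.0
dW = rng.normal(0.0, np.sqrt(dt), int(T / dt))

def phi(x, dW):
    """Euler-Maruyama for dX = (X - X^3) dt + sigma dW."""
    for inc in dW:
        x = x + (x - x**3) * dt + sigma * inc
    return x

# The cubic drift forces every bounded set of initial conditions into a
# fixed ball within finite time; here [-2, 2] absorbs all four starts.
finals = [phi(x0, dW) for x0 in (-10.0, -1.0, 1.0, 10.0)]
print(finals)   # all values lie well inside [-2, 2]
```

The pullback formulation in the definition (starting from \(\theta _{-t}\omega \)) matters for the attractor itself, but absorption into \(B\) can already be seen in this forward simulation.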
Remark A.5
Naturally, random attractors are related to invariant probability measures of a random dynamical system \((\theta , \varphi )\). It follows directly from [16, Proposition 4.5] that, if the fibers of a random attractor A, i.e. \(\omega \mapsto A(\omega )\), are measurable with respect to \({\mathcal {F}}_{-\infty }^0\), there is an invariant measure \(\mu \) for \((\theta , \varphi )\) such that \(\omega \mapsto \mu _{\omega }\) is measurable with respect to \({\mathcal {F}}_{-\infty }^0\), i.e. is a Markov measure, and satisfies \(\mu _{\omega } (A(\omega )) =1\) for almost all \(\omega \in \Omega \). In particular, if there exists a unique invariant probability measure \(\rho \) for the Markov semigroup \((P_t)_{t \ge 0}\), then the invariant Markov measure, supported on A, is unique by the one-to-one correspondence explained above. Additionally, if the Markov semigroup is strongly mixing, i.e.
$$\begin{aligned} \lim _{t \rightarrow \infty } P_t f(x) = \int _{{\mathcal {X}}} f \, \mathrm {d}\rho \quad \text {for all } f \in C_b({\mathcal {X}}) \text { and all } x \in {\mathcal {X}}\,, \end{aligned}$$
then the set \({\tilde{A}} \in \mathcal {F} \times \mathcal {B}({\mathcal {X}})\), given by \({\tilde{A}}(\omega ) = {{\,\mathrm{supp}\,}}\mu _{\omega } \subset A(\omega )\) for almost all \(\omega \in \Omega \), is a minimal weak random point attractor according to [23, Proposition 2.20].
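The strong mixing property can be illustrated for the Ornstein–Uhlenbeck process \(\mathrm {d}X = -X \,\mathrm {d}t + \sigma \,\mathrm {d}W\) (our choice of example): Monte Carlo estimates of \(P_t f(x)\) for \(f(x) = x^2\), started from very different initial points, both approach \(\int f \,\mathrm {d}\rho = \sigma ^2/2\):

```python
import numpy as np

rng = np.random.default_rng(6)
dt, sigma, T, n_paths = 0.01, 1.0, 8.0, 20_000
n = int(T / dt)

def P_t_f(x0):
    """Monte Carlo estimate of P_T f(x0), f(x) = x^2, for dX = -X dt + sigma dW."""
    x = np.full(n_paths, x0)
    for _ in range(n):
        x = x - x * dt + sigma * rng.normal(0.0, np.sqrt(dt), n_paths)
    return (x**2).mean()

# Strong mixing: P_t f(x) tends to the rho-average of f for every x;
# here rho = N(0, sigma^2 / 2), so the limit is 0.5.
p1, p2 = P_t_f(-3.0), P_t_f(4.0)
print(p1, p2)   # both close to 0.5
```

Forgetting the initial condition in this sense is what makes the support of the sample measures a weak point attractor.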
A.5 Expectation of random isochron map
We observe from Eq. (4.29) that the random isochron map \(\tilde{\phi }\) satisfies
Assume now that there is a function \(\phi : \mathcal {R} \rightarrow \mathbb {R}\) such that for all t in some interval \(J=[0,T]\), \(T >0\), we have
Then we obtain from Eq. (A.5) that
Hence, assuming the appropriate boundary conditions, we can deduce that \(\phi = {\bar{\phi }}\), where \({\bar{\phi }}\) is the isochron function as derived above, satisfying Eq. (4.48). Furthermore, we can observe directly that
is the only candidate for relation (A.6) to hold. When we insert equality (A.8) back into Eq. (A.6), we obtain
If we choose \((\vartheta , r)\) to be a point on the random attractor, belonging to the CRPS \(\psi \), say \((\vartheta , r)= \psi (0,\omega )\), then due to the fact that \( {\tilde{\phi }} ( \psi (t, \theta _t \omega ), \theta _t \omega , t) = t\) for (almost) all \(\omega \in \Omega \), this means that
Verifying equality (A.9) would therefore establish \({\bar{\phi }}_{\text {RDS}} = {\bar{\phi }}\). We have not found a clear argument for when and why (or why not) relation (A.9) holds, and leave a better understanding of this gap as an open problem.
Engel, M., Kuehn, C. A Random Dynamical Systems Perspective on Isochronicity for Stochastic Oscillations. Commun. Math. Phys. 386, 1603–1641 (2021). https://doi.org/10.1007/s0022002104077z