A Random Dynamical Systems Perspective on Isochronicity for Stochastic Oscillations

For an attracting periodic orbit (limit cycle) of a deterministic dynamical system, one defines the isochron for each point of the orbit as the cross-section with fixed return time under the flow. Equivalently, isochrons can be characterized as stable manifolds foliating neighborhoods of the limit cycle or as level sets of an isochron map. In recent years, there has been a lively discussion in the mathematical physics community on how to define isochrons for stochastic oscillations, i.e. limit cycles or heteroclinic cycles exposed to stochastic noise. The main discussion has concerned an approach finding stochastic isochrons as sections of equal expected return times versus the idea of considering eigenfunctions of the backward Kolmogorov operator. We discuss the problem in the framework of random dynamical systems and introduce a new rigorous definition of stochastic isochrons as random stable manifolds for random periodic solutions with noise-dependent period. This allows us to establish a random version of isochron maps whose level sets coincide with the random stable manifolds. Finally, we discuss links between the random dynamical systems interpretation and the equal expected return time approach via averaged quantities.


Introduction
Periodic behavior is ubiquitous in the natural sciences and in engineering. Accordingly, many mathematical models of dynamical systems, usually given by ordinary differential equations (ODEs), are characterized by the existence of attracting periodic orbits, also called limit cycles. Interpreting the limit cycle as a "clock" for the system, one can ask which parts of the state space can be associated with which "time" on the clock.
It turns out that one can generally divide the state space into sections, called isochrons, intersecting the asymptotically stable periodic orbit. Trajectories starting on a particular isochron all converge to the trajectory starting at the intersection of the isochron and the limit cycle. Hence, each point in the basin of attraction of the limit cycle can be allocated a time on the periodic orbit, by belonging to a particular isochron. Isochrons can then be characterized as the sections intersecting the limit cycle, such that the return time under the flow to the same section always equals the period of the attracting orbit and, hence, the return time is the same for all isochrons. The analysis of ODEs provides additional characterizations of isochrons, involving, for example, an isochron map or eigenfunctions of associated operators.
Clearly, mathematical models are simplifications which often leave out parameters and details of the described physical or biological system. Hence, a large number of degrees of freedom is inherent in the modeling. The introduction of random noise is often a suitable way to integrate such non-specified components into the model such that, for example, an ODE becomes a stochastic differential equation (SDE). Examples of stochastic oscillators/oscillations can be found in a wide variety of applications such as neuroscience [4,12,31,43], ecology [39,37], biomechanics [25,35], and geoscience [6,33], among many others. In addition, stochastic oscillations have recently become a very active research topic in the rigorous theory of stochastic dynamical systems with small noise [3,7,8,26].
Lately, there has been a lively discussion [34,45] in the mathematical physics community about how to extend the definition and analysis of isochrons to the stochastic setting. As pointed out above, there are several different characterizations in the deterministic case inspiring analogous stochastic approaches. So far, there are two main approaches to define stochastic isochrons in the physics literature, both focused on stochastic differential equations. One approach, due to Thomas and Lindner [44], focuses on eigenfunctions of the associated infinitesimal generator L. The other one is due to Schwabedal and Pikovsky [41], who introduce isochrons for noisy systems as sections W^E(x) with the mean first return time to the same section W^E(x) being a constant T̄, equaling the average oscillation period. Cao, Lindner and Thomas [13] have used the Andronov-Vitt-Pontryagin formula, involving the backward Kolmogorov operator L, with appropriate boundary conditions to establish the isochron functions for W^E(x) more rigorously.
These approaches have in common that they focus on the "macroscopic" or "coarse-grained" level by considering averaged objects and associated operators. We complement the existing suggestions by a new approach within the theory of random dynamical systems (see e.g. [1]), which has proven to give a framework for translating many deterministic dynamical concepts into the stochastic context. A random dynamical system in this sense consists of a model of the time-dependent noise, formalized as a dynamical system θ on the probability space, and a model of the dynamics on the state space, formalized as a cocycle ϕ over θ. This point of view considers the asymptotic behaviour of typical trajectories. As trajectories of random dynamical systems depend on the noise realization, convergence of individual trajectories to a fixed attractor cannot be expected. The forward-in-time evolution of sets under the same noise realization yields the random forward attractor A, which is a time-dependent object with fibers A(θ_t ω). An alternative viewpoint is to consider, for a fixed noise realization ω ∈ Ω, the flow of a set of initial conditions from time t = −T to a fixed endpoint in time, say t = 0, and then take the (pullback) limit T → ∞. If trajectories of initial conditions converge under this procedure to fibers Ã(ω) of some random set Ã, then this set is called a random pullback attractor.
In this paper, we will mainly consider situations where the random dynamical system is induced by an SDE and there exists a random (forward and/or pullback) attractor A which is topologically equivalent to a cycle for each noise realization, i.e. an attracting random cycle, whose existence can be made generic in a suitably localized setup around a deterministic limit cycle. We will extend the definition of a random periodic solution ψ [46] living on such a random attractor to situations where the period is random, giving a pair (ψ, T). Isochrons can then be defined as random stable manifolds W^f(ω, x) for points x on the attracting random cycle A(ω), in particular for random periodic solutions. We usually consider situations with a spectrum of exponential asymptotic growth rates, the Lyapunov exponents λ_1 > λ_2 > · · · > λ_p, which allows us to carry the idea of hyperbolicity over to the random context. Additionally, we can introduce a time-dependent random isochron map φ̃, such that the isochrons are level sets of such a map. Hence, on a pathwise level, we achieve a complete generalization of deterministic to random isochronicity, which is the key contribution of this work. The main results can be summarized in the following theorem:
Theorem A. Assume the random dynamical system (θ, ϕ) on R^m has a hyperbolic random limit cycle, supporting a random periodic solution with possibly noise-dependent period. Then, under appropriate assumptions on smoothness and boundedness,
1. the random forward isochrons are smooth invariant random manifolds which foliate the stable neighbourhood of the random limit cycle on each noise fibre,
2. there exists a smooth and measurable (non-autonomous) random isochron map φ̃ whose level sets are the random isochrons and whose time derivative along the random flow is constant.
The remainder of the paper is structured as follows. Section 2 gives an introduction to the deterministic theory of isochrons, summarizing the main properties that we can then transform into the random dynamical systems setting. The latter is discussed in Section 3, where we elucidate the notions of Lyapunov exponents, random attractors and, specifically, random limit cycles and their existence. Section 4 establishes the two main statements, contained in Theorem A: in Section 4.1, we show Theorem 4.3, summarizing different scenarios in which random isochrons are random stable manifolds foliating the neighbourhoods of the random limit cycle. In Section 4.2, we prove Theorem 4.8, generalizing characteristic properties of the isochron map to the random case. We conclude Section 4 with an elaboration on the relationship between expected quantities of the RDS approach and the definition of stochastic isochrons via mean first return times, i.e., one of the main physics approaches. Additionally, the paper contains a brief conclusion with outlook, and an appendix with some background on random dynamical systems.

The deterministic case
The basic facts about isochrons have been established in [27]. Here we summarize some facts restricted to the state space X = R^m, but the theory easily lifts to ordinary differential equations (ODEs) on smooth manifolds M = X. Consider an ODE
ẋ = f(x), x ∈ R^m, (2.1)
for a sufficiently smooth vector field f, let Φ be the flow associated to (2.1), and suppose γ = {γ(t)}_{t∈[0,τ_γ]} is a hyperbolic periodic orbit with minimal period τ_γ > 0. A cross-section N ⊂ R^m at x ∈ γ is a submanifold such that x ∈ N, N ∩ γ = {x}, and T_x N ⊕ T_x γ = T_x R^m, i.e. the submanifold N and the orbit γ intersect transversally. Let g : N → N be the Poincaré map defined by the first return of y ∈ N under the flow Φ to N (see Figure 1); locally near any point x ∈ γ the map g is well-defined. For simplicity (and looking forward towards the noisy case) let us assume that γ is a stable hyperbolic periodic orbit, i.e. the eigenvalues µ_i of Dg(x), also called characteristic multipliers, satisfy µ_1 = 1 and |µ_2|, . . . , |µ_m| < 1, counting multiplicities. The numbers λ_i := τ_γ^{−1} log µ_i are called the characteristic exponents (for more background on the stability of linear nonautonomous systems and associated Floquet theory see e.g. [14, Chapter 2.4]). We call such a stable hyperbolic periodic orbit a stable (hyperbolic) limit cycle since there is a neighbourhood U of γ such that for y ∈ U we have d(Φ(y, t), γ) → 0 as t → ∞, where d is the Euclidean metric on R^m. In particular, note that there is a lower bound on the speed of exponential convergence to the limit cycle, given by λ := min_{2≤i≤m} (−Re λ_i) > 0. We give a definition of isochrons as stable sets and then establish its equivalence to level sets of a specific map. We further find these level sets to be cross-sections of γ for which the time of first return is identical to the period τ_γ, explaining the name isochrons.
Definition 2.1. The isochron W(x) of a point x ∈ γ on a hyperbolic limit cycle is given by its stable set
W(x) := { y ∈ R^m : d(Φ(y, t), Φ(x, t)) → 0 as t → ∞ }.
In particular, due to hyperbolicity, we have for every λ̃ ∈ (0, λ) that
sup_{t≥0} e^{λ̃ t} d(Φ(y, t), Φ(x, t)) < ∞ for all y ∈ W(x).
It is by now classical that stable sets are manifolds, and for each x ∈ γ we get a stable manifold W^s(x), diffeomorphic to R^{m−1}, precisely coinciding with the isochron W(x). We can foliate a neighbourhood U of γ by the manifolds W(x), and these manifolds are permuted by the flow since Φ(W(x), t) = W(Φ(x, t)). We summarize these crucial observations in the following theorem.
Theorem 2.2.
1. For each x ∈ γ, the isochron W(x) is an (m − 1)-dimensional manifold transverse to γ, in particular it is a cross-section of γ, of the same regularity as the vector field f in (2.1).
2. The stable manifold W^s(γ) contains a full neighbourhood of γ and can be written as the union W^s(γ) = ⋃_{x∈γ} W(x), where the union of isochrons is disjoint.
Using the properties established in Theorem 2.2, we can derive the following well-known characterizations of the isochrons W(x), x ∈ γ, and of the isochron map ξ : W^s(γ) → γ, which assigns to each y the unique point ξ(y) ∈ γ with y ∈ W(ξ(y)).
Proposition 2.3. Assume that we are in the situation of Theorem 2.2. We have that
1. for each x ∈ γ, the isochron W(x) is precisely the level set of ξ at x, i.e. W(x) = ξ^{−1}(x);
2. the isochron map commutes with the flow, i.e. ξ(Φ(y, t)) = Φ(ξ(y), t) for all y ∈ W^s(γ) and t ≥ 0;
3. for each x ∈ γ, the isochron W(x) is the cross-section on which all starting points return to the same section in the same time τ_γ.
The third statement can easily be derived from the fact that for all y ∈ W^s(γ) we have Φ(y, τ_γ) ∈ W(ξ(y)), since Φ(ξ(y), τ_γ) = ξ(y) and the isochrons are permuted by the flow. This finishes the proof.
Summarizing, we can view isochrons W(x) as stable manifolds of points on the limit cycle. The sets W(x) are uniquely defined and have codimension one. They locally foliate neighborhoods of the limit cycle. They can also be characterized and computed as level sets of a specific isochron map, whose total derivative along the flow equals 1, or by looking for sections of fixed return time under the flow. In the course of this article, we will transfer all the discussed properties to the random case.
Guckenheimer [27] tackles additional questions regarding the boundary of W s (γ). These questions concern global properties of isochrons. Since we want to first understand a neighbourhood U of γ in the stochastic setting, we skip these problems here. With this in mind, we consider an adjustment of the main planar example in [27] which does not involve the boundary of W s (γ). The example is simple but illuminating and already contains the main aspects of the difficulties in extending isochronicity to the stochastic context, as we will see later.
Example 2.4. Consider an ODE in polar coordinates (ϑ, r) ∈ [0, 2π) × (0, +∞) whose angular component is ϑ' = h(r) and whose radial component has r = r_1 as an attracting fixed point, where r_1 > 0 is fixed, h(r) ≥ K > 0 for some constant K, and h is smooth, such that there is always the attracting periodic orbit γ = {r = r_1}. If h(r) ≡ 1, then one easily checks that the isochrons of γ are the straight radial rays {(ϑ, r) : ϑ = const} (see Figure 1 (a)). However, if we consider h such that h'(r_1) ≠ 0, then the isochrons bend into curves instead of being straight rays. Indeed, the periodic orbit has period τ_γ = 2π/h(r_1), but the return time to the same ϑ-coordinate changes near γ (see Figure 1 (b)). Our considerations indicate that, in order to find isochrons in the stochastic case, a first approach is to consider "stable manifolds" also for this situation. The most suitable framework for this approach turns out to be the one of random dynamical systems (RDS).
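This bending can be checked numerically. The following sketch uses hypothetical concrete choices (the example's own equations are not fully reproduced above): radial law r' = r − r³, so r_1 = 1, and h(r) = r². It computes the asymptotic phase χ(ϑ_0, r_0) = lim_{t→∞}[ϑ(t) − t h(1)]; since dϑ/dt − 1 = r² − 1 = −d(log r)/dt along trajectories, the closed form χ = ϑ_0 + log r_0 is available as a check, and the nonzero shift for r_0 ≠ 1 shows the isochrons are curved.

```python
import math

def flow(theta0, r0, h, T=40.0, dt=1e-4):
    # forward Euler for  theta' = h(r),  r' = r - r^3  (attracting limit cycle at r = 1)
    theta, r = theta0, r0
    for _ in range(int(T / dt)):
        theta, r = theta + h(r) * dt, r + (r - r**3) * dt
    return theta, r

def asymptotic_phase(theta0, r0, h, T=40.0):
    # chi = lim_{t -> infty} [theta(t) - t * h(1)]; points with equal chi lie on one isochron
    thetaT, _ = flow(theta0, r0, h, T=T)
    return thetaT - T * h(1.0)

chi_flat = asymptotic_phase(0.3, 2.0, lambda r: 1.0)    # h constant: chi = theta0 = 0.3
chi_bent = asymptotic_phase(0.3, 2.0, lambda r: r * r)  # h'(1) != 0: chi = 0.3 + log 2
print(chi_flat, chi_bent)
```

For h ≡ 1 the phase of a point equals its initial angle (radial isochrons), while for h(r) = r² it picks up the shift log r_0, in line with the return-time argument above.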
Stochastically driven limit cycles in the framework of random dynamical systems

In the following, we develop a theory of isochrons within the framework of random dynamical systems. A continuous-time random dynamical system on a topological state space X consists of (i) a model of the noise on a probability space (Ω, F, P), formalized as a measurable flow (θ_t)_{t∈R} of P-preserving transformations θ_t : Ω → Ω, and (ii) a model of the dynamics on X perturbed by noise, formalized as a cocycle ϕ over θ.
This setting is very helpful to understand properties of dynamical systems under the influence of stochastic noise. In technical detail, the definition of a random dynamical system is given as follows [1, Definition 1.1.2].
Definition 3.1 (Random dynamical system). Let (Ω, F, P) be a probability space and X be a topological space. A random dynamical system (RDS) is a pair of mappings (θ, ϕ), where θ : R × Ω → Ω is a measurable flow satisfying
(i) θ_0 = id and θ_{t+s} = θ_t ∘ θ_s for t, s ∈ R,
(ii) P(A) = P(θ_t A) for all A ∈ F and t ∈ R,
and ϕ : R × Ω × X → X is a measurable cocycle over θ, i.e. ϕ(0, ω, ·) = id and
ϕ(t + s, ω, x) = ϕ(t, θ_s ω, ϕ(s, ω, x)) for all t, s ∈ R, ω ∈ Ω and x ∈ X.
The random dynamical system (θ, ϕ) is called continuous if (t, x) → ϕ(t, ω, x) is continuous for every ω ∈ Ω. We still speak of a random dynamical system if its cocycle is only defined in forward time, i.e. if the mapping ϕ is only defined on R_0^+ × Ω × X. We will make this explicit whenever it is the case.
In the following, the metric dynamical system (θ t ) t∈R is often even ergodic, i.e. any A ∈ F with θ −1 t A = A for all t ∈ R satisfies P(A) ∈ {0, 1}. Note that we define θ in two-sided time whereas ϕ can be restricted to one-sided time. This is motivated by the fact that a large part of this article will deal with random dynamical systems generated by stochastic differential equations (SDEs). Hence, we are interested in random dynamical systems adapted to a suitable filtration and of white noise type (see Appendix A.1). In this context, we can understand ϕ as the "stochastic flow" induced by solving the corresponding SDE and θ t as a time shift on the canonical space Ω of all continuous paths starting at 0, equipped with the Wiener measure. Additionally note that the RDS generates a skew product flow, i.e. a family of maps (Θ t ) t∈T from Ω × X to itself such that for all t ∈ T and ω ∈ Ω, x ∈ X Θ t (ω, x) = (θ t ω, ϕ(t, ω, x)) . (3.1)
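As a toy illustration of the cocycle and skew-product structure, the following sketch (all choices hypothetical) models ω as a finite window of noise letters, θ as the shift on that window, and ϕ as the composition of the random affine maps attached to the letters, and verifies the cocycle property ϕ(t + s, ω, x) = ϕ(t, θ_s ω, ϕ(s, ω, x)) numerically:

```python
import random

# Toy discrete-time RDS: omega is a window of noise letters (modelled as a dict),
# theta shifts it, and the cocycle composes the random maps  x -> a_k * x + b_k.
random.seed(1)
omega = {k: (random.uniform(0.2, 0.8), random.uniform(-1, 1)) for k in range(-50, 50)}

def theta(n, om):
    """Shift the noise realization by n steps (the metric dynamical system)."""
    return {k: om[k + n] for k in range(-50 + abs(n), 50 - abs(n))}

def phi(n, om, x):
    """Cocycle: apply the maps attached to the noise letters om[0], ..., om[n-1]."""
    for k in range(n):
        a, b = om[k]
        x = a * x + b
    return x

# cocycle property: phi(t+s, omega) = phi(t, theta_s omega) o phi(s, omega)
s, t, x = 7, 11, 0.5
lhs = phi(t + s, omega, x)
rhs = phi(t, theta(s, omega), phi(s, omega, x))
print(abs(lhs - rhs))
```

The two evaluations compose exactly the same sequence of maps, so they agree up to floating-point rounding; this is the discrete-time analogue of the skew product (3.1).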

Differentiability and Lyapunov exponents
The random dynamical system (θ, ϕ) is called C^k if ϕ(t, ω, ·) ∈ C^k for all t ∈ T and ω ∈ Ω, where again T ∈ {R, R_0^+}. As in the deterministic case, let us assume that the state space is X = R^m (the following can also be extended to smooth m-dimensional manifolds as in Appendix A.3) and that (θ, ϕ) is C^1. The linearization or derivative Dϕ(t, ω, x) of ϕ(t, ω, ·) at x ∈ R^m is the Jacobian m × m matrix of partial derivatives. Differentiating the cocycle property on both sides and applying the chain rule to the right-hand side yields
Dϕ(t + s, ω, x) = Dϕ(t, θ_s ω, ϕ(s, ω, x)) Dϕ(s, ω, x),
i.e. the cocycle property of the fiberwise mappings with respect to the skew product maps (Θ_t)_{t∈T} (see equation (3.1)). Let us further assume that the random dynamical system possesses an invariant measure µ (see Appendix A.2). This implies that (Θ, Dϕ) is a random dynamical system with linear cocycle Dϕ over the metric dynamical system (Ω × R^m, F ⊗ B(R^m), µ, (Θ_t)_{t∈T}). The main models in this article are stochastic differential equations in Stratonovich form
dX_t = b(X_t) dt + Σ_{i=1}^n σ_i(X_t) ∘ dW_t^i,
where the W_t^i are independent real-valued Brownian motions, b is a C^k vector field, k ≥ 1, and σ_1, . . . , σ_n are C^{k+1} vector fields satisfying bounded growth conditions, e.g. (global) Lipschitz continuity, in all derivatives, to guarantee the existence of a (global) random dynamical system for ϕ and Dϕ. We write the equation in Stratonovich form when differentiation is concerned, as the classical rules of calculus are preserved; applying the conversion formula to the Itô integral yields the situation of (A.1).
According to [2], the derivative Dϕ(t, ω, x) applied to an initial condition v_0 ∈ R^m uniquely solves the variational equation
dv_t = Db(ϕ(t, ω, x)) v_t dt + Σ_{i=1}^n Dσ_i(ϕ(t, ω, x)) v_t ∘ dW_t^i.
The hyperbolicity of such a differentiable RDS with ergodic invariant measure µ and random cycle A is expressed via its Lyapunov spectrum, which is given by the Multiplicative Ergodic Theorem (MET) (see Theorem A.3 in Appendix A.3) under the integrability assumption
sup_{0≤t≤1} log^+ ‖Dϕ(t, ω, ·)‖ ∈ L^1(µ), (3.4)
where ‖Dϕ(t, ω, ·)‖ denotes the operator norm of the Jacobian as a linear operator from T_x R^m to T_{ϕ(t,ω,x)} R^m induced by the Euclidean norm, and log^+(a) = max{log(a), 0}. Analogously to the characteristic exponents discussed for the deterministic case in Section 2, the spectrum of p ≤ m Lyapunov exponents λ_1 > λ_2 > · · · > λ_p quantifies the asymptotic exponential rates of separation of infinitesimally close trajectories.
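As a minimal numerical illustration, assume a radial SDE of the form dr = (r − r³) dt + σ r ∘ dW_t (the type appearing in the examples below; this concrete form is an assumption here). Its variational equation along a trajectory reads dv = (1 − 3r²) v dt + σ v ∘ dW_t, so log |v_t| can be accumulated directly and the Lyapunov exponent estimated as log |v_T| / T:

```python
import random, math

random.seed(2)
dt, T, sigma = 1e-3, 200.0, 0.2

# Itô form of  dr = (r - r^3) dt + sigma r o dW  (Stratonovich -> Itô adds sigma^2 r / 2)
r = 1.0
log_v = 0.0   # log |v_t| for the variational equation dv = (1 - 3 r^2) v dt + sigma v o dW
for _ in range(int(T / dt)):
    dw = random.gauss(0.0, math.sqrt(dt))
    log_v += (1.0 - 3.0 * r * r) * dt + sigma * dw   # Stratonovich chain rule for log|v|
    r += (r - r**3 + 0.5 * sigma**2 * r) * dt + sigma * r * dw
lam = log_v / T
print(lam)
```

For small σ the estimate is close to the deterministic linearization rate −2 at the limit cycle r = 1; the negativity of this exponent is what the stable-manifold constructions below rely on.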

Random attractors
Let (θ, ϕ) be a white noise random dynamical system on R^m. (Note that the following can be formulated more generally in complete metric spaces (X, d), but we again restrict ourselves to the Euclidean case for reasons of clarity.) Due to the non-autonomous nature of the RDS, there are no fixed attractors for dissipative systems, and different notions of a random attractor exist. We introduce these related but different definitions of random attractors in the following, with respect to tempered sets; specific random attractors, attracting random cycles, will play a crucial role in the following chapters. A random variable R : Ω → (0, ∞) is called tempered if it grows sub-exponentially along orbits of θ, i.e. lim_{t→±∞} (1/|t|) log R(θ_t ω) = 0 for almost all ω ∈ Ω, and a random set D is called tempered if D(ω) ⊂ B_{R(ω)}(0) for some tempered random variable R, where B_{R(ω)}(0) denotes a ball centered at zero with radius R(ω).
Definition 3.2 (Random attractor). Let S be a collection of tempered random sets. A compact random set A ∈ F ⊗ B(R^m) which is invariant, i.e. ϕ(t, ω, A(ω)) = A(θ_t ω) for all t ≥ 0 and almost all ω ∈ Ω, is called
(i) a random pullback attractor with respect to S if for all D ∈ S and almost all ω ∈ Ω we have lim_{t→∞} dist(ϕ(t, θ_{−t} ω, D(θ_{−t} ω)), A(ω)) = 0,
(ii) a random forward attractor with respect to S if for all D ∈ S and almost all ω ∈ Ω we have lim_{t→∞} dist(ϕ(t, ω, D(ω)), A(θ_t ω)) = 0,
(iii) a weak random attractor with respect to S if the respective convergence holds in probability,
where dist denotes the Hausdorff semi-distance.
Note that due to the P-invariance of θ_t for all t ∈ R, it is easy to derive that weak attraction in the pullback and the forward sense are the same and, hence, the notion of a weak random attractor in Definition 3.2 (iii) is consistent. However, random pullback attractors and random forward attractors with almost sure convergence, as defined above, are generally not equivalent (see [38] for counter-examples). In the following, we will be careful with this distinction, yet in our main examples the random pullback attractor and random forward attractor will be the same. In this case we will simply speak of the random attractor.
Before we introduce random cycles and random periodic solutions, we add some remarks on Definition 3.2.
Remark 3.3. Note that we require that the random attractor is measurable with respect to F ⊗ B(R m ), in contrast to a weaker statement often used in the literature (see also [17,Remark 4]).
Remark 3.4. In many cases, the family of sets S is chosen to be the family of all bounded or compact (deterministic) subsets B ⊂ R m , as for example in [23]. Note that our definition of random attractors is a generalization of this weaker definition.

Attracting random cycles and random periodic solutions
Consider a random dynamical system (θ, ϕ) on R^m. In the situation of a deterministic limit cycle, the limit cycle is the attractor for all subsets of a neighbourhood of this attractor. Analogously, we give the following definition for the random setting.
Definition 3.5 (Attracting Random Cycle). We call a random (forward or pullback) attractor A for (θ, ϕ), with respect to a collection of sets S, an attracting random cycle if for almost all ω ∈ Ω we have A(ω) ≅ S^1, i.e. every fiber is homeomorphic to the circle.
Furthermore, we need to find a stochastic analogue of the limit cycle as a periodic orbit. Firstly, we follow [46] in introducing the notion of random periodic solutions. Note that this definition assumes that the period T ∈ R does not depend on the noise realization ω. We will see the limitations of that concept in Example 3.8, which extends the following example.
Example 3.7. Similarly to [46], consider the planar stochastic differential equation (3.6), where σ ≥ 0, W_t denotes a one-dimensional standard Brownian motion and the noise is of Stratonovich type. We denote the cocycle of the induced random dynamical system by ϕ = (ϕ_1, ϕ_2). Equation (3.6) can be transformed into polar coordinates (ϑ, r), yielding
dϑ = dt,   dr = (r − r^3) dt + σ r ∘ dW_t. (3.7)
Therefore, in the situation without noise (σ = 0), the system is as in Example 2.4 with h ≡ 1 and attracting limit cycle at radius r = 1. With noise switched on (σ > 0), equation (3.7) has an explicit unique solution. Moreover, there is a stationary solution for the radial component, satisfying r(t, ω, r^*(ω)) = r^*(θ_t ω), given by
r^*(ω) = ( 2 ∫_{−∞}^0 e^{2s + 2σ W_s(ω)} ds )^{−1/2}. (3.8)
Furthermore, one can see from a straightforward computation that for all (x, y) ≠ (0, 0) and almost all ω ∈ Ω the trajectories converge to the circle of radius r^*. Hence, the planar system (3.6) has a random attractor A in the pullback and forward sense, with respect to S = D \ {{0}}, where D denotes the set of all compact tempered sets D ∈ F ⊗ B(R^2) (see also Section A.4), and the fibers of A are given by (see Figure 2)
A(ω) = { r^*(ω)(cos ϑ, sin ϑ) : ϑ ∈ [0, 2π) }. (3.9)
The system possesses, for any fixed ϑ_0 ∈ [0, 2π), the random periodic solution ψ which is defined by ψ(t, ω) = r^*(ω)(cos(ϑ_0 + t), sin(ϑ_0 + t)).
Example 3.8. We now generalize Example 3.7 by letting the angular velocity depend on the radius, i.e. we consider
dϑ = h(r) dt,   dr = (r − r^3) dt + σ r ∘ dW_t, (3.10)
where the smooth function h : (0, ∞) → R satisfies h(r) ≥ K_h > 0 for some constant K_h, analogously to Example 2.4. The random attractor A for the corresponding planar system is exactly the same as before, as illustrated in Figure 2. We observe for a point a(ω) := r^*(ω)(cos ϑ_0, sin ϑ_0) ∈ A(ω), where r^* is the random variable defined in equation (3.8) and ϑ_0 ∈ [0, 2π), that the cocycle satisfies
ϕ(t, ω, a(ω)) = r^*(θ_t ω) ( cos(ϑ_0 + ∫_0^t h(r^*(θ_s ω)) ds), sin(ϑ_0 + ∫_0^t h(r^*(θ_s ω)) ds) ).
There cannot be a random periodic solution in the sense of Definition 3.6, since noise-independent periodicity is not possible if h is non-constant.
(b) Naturally, we can also consider the case where the phase is additionally perturbed by noise, i.e. equation (3.10) with an additional noise term in the angular component (equation (3.12)).
Figure 2: Numerical simulations in (x, y)-coordinates, using Euler–Maruyama integration with step size dt = 10^{−2}, of forward and pullback dynamics of system (3.6) for a set B of initial conditions generated by a trajectory of (3.6) ((a) and (e)). In (b)-(d), we show the numerical approximation of ϕ(T, ω, B) for some ω ∈ Ω, approaching the fiber A(θ_T ω) of the random attractor, changing in forward time. In (f)-(h), we show the numerical approximation of ϕ(−T, θ_{−T} ω, B) for some ω ∈ Ω, approaching the fiber A(ω) of the random attractor, fixed by the pullback mechanism.
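The pullback mechanism behind the figure can be sketched in a few lines, assuming the radial SDE dr = (r − r³) dt + σ r ∘ dW_t of the example (in its Itô form, with drift corrected by σ²r/2): starting further and further in the past, with one and the same noise realization ending at time 0, the state at time 0 stabilizes at a random radius r*(ω) regardless of the initial condition.

```python
import random, math

def run(r0, increments, dt, sigma):
    # Euler-Maruyama for the Itô form of  dr = (r - r^3) dt + sigma r o dW
    r = r0
    for dw in increments:
        r += (r - r**3 + 0.5 * sigma**2 * r) * dt + sigma * r * dw
    return r

random.seed(3)
dt, sigma, N = 1e-3, 0.3, 40000
dW = [random.gauss(0.0, math.sqrt(dt)) for _ in range(N)]  # one fixed realization on [-T, 0]

# pullback limit: start further and further in the past, always with the SAME noise
# increments ending at time 0; the endpoint stabilizes at the random radius r*(omega)
r_half = run(0.5, dW[N // 2:], dt, sigma)   # started at time -T/2
r_full = run(0.5, dW, dt, sigma)            # started at time -T
r_far  = run(3.0, dW, dt, sigma)            # different initial radius, same path
print(r_half, r_full, r_far)
```

All three endpoints agree to high precision, reflecting the negative radial Lyapunov exponent: the fiber of the attractor in the radial coordinate is the single point r*(ω).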
Example 3.8 motivates us to introduce the following notion of a more general form of random periodic solution. The potential relevance of finding such a generalization was first discussed by Hans Crauel; hence, we have chosen the name.
Definition 3.9 (Crauel random periodic solution). A Crauel random periodic solution (CRPS) for a random dynamical system (θ, ϕ) is a pair of measurable maps ψ : R_0^+ × Ω → R^m and T : Ω → (0, ∞) such that for almost all ω ∈ Ω
ϕ(t, ω, ψ(t_0, ω)) = ψ(t + t_0, θ_t ω) for all t, t_0 ∈ R_0^+, and
ψ(t + T(θ_{−t} ω), ω) = ψ(t, ω) for all t ∈ R_0^+. (3.13)
In particular, note that condition (3.13) implies ψ(0, ω) = ψ(T(ω), ω) (see Figure 3 for further details). Furthermore, observe that the classical random periodic solution according to Definition 3.6 is simply a Crauel random periodic solution with constant T. We show that Definition 3.9 applies to system (3.10), demonstrating the suitability of this definition.
Proposition 3.10. (a) The planar system associated with (3.10) has a family of Crauel random periodic solutions (ψ_ϑ, T), defined for every ϑ ∈ [0, 2π) by
ψ_ϑ(t, ω) = r^*(ω) ( cos(ϑ + ∫_{−t}^0 h(r^*(θ_s ω)) ds), sin(ϑ + ∫_{−t}^0 h(r^*(θ_s ω)) ds) ), (3.14)
where the random period T(ω) is determined by
∫_{−T(ω)}^0 h(r^*(θ_s ω)) ds = 2π, (3.15)
for almost all ω ∈ Ω and all t ∈ R_0^+.
(b) The system associated with (3.12) has a family of Crauel random periodic solutions (ψ_ϑ, T) which is defined for every ϑ ∈ [0, 2π) by ψ_ϑ analogously to (3.14), just adding the angular noise increment to the angular direction, and T analogously to (3.15), for almost all ω ∈ Ω and all t ∈ R_0^+.
Proof. (a) The fact that T : Ω → R is well defined can be seen as follows: fix ω ∈ Ω and let
g_ω(t) := ∫_{−t}^0 h(r^*(θ_s ω)) ds − 2π.
Then g_ω(0) < 0 and g_ω(2π/K_h) > 0 and, hence, the existence of T(ω) follows from the intermediate value theorem. Moreover, we have by a change of variables that
∫_{−t−T(θ_{−t} ω)}^{−t} h(r^*(θ_s ω)) ds = ∫_{−T(θ_{−t} ω)}^{0} h(r^*(θ_s θ_{−t} ω)) ds.
We use this observation to conclude that for almost all ω ∈ Ω and any t ≥ 0
ψ_ϑ(t + T(θ_{−t} ω), ω) = ψ_ϑ(t, ω).
Furthermore, we observe that for almost all ω ∈ Ω and t, t_0 ≥ 0 the invariance ϕ(t, ω, ψ_ϑ(t_0, ω)) = ψ_ϑ(t + t_0, θ_t ω) holds.
(b) The fact that T : Ω → R is well defined almost surely in this case follows directly from the properties of SDEs on compact time intervals, in this case [−2π, 2π]. Moreover, we have by a change of variables the analogous identity as in (a), and we use this observation to conclude ψ(t + T(θ_{−t} ω), ω) = ψ(t, ω) as in (a). Furthermore, the invariance for almost all ω ∈ Ω and t, t_0 ≥ 0 follows as in (a). This finishes the proof.
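The intermediate value argument for the noise-dependent period, i.e. that the integral of h along the stationary radius accumulates to 2π, can be evaluated numerically. The sketch below uses hypothetical choices: h(r) = 0.5 + r² (so K_h = 0.5) and an Euler–Maruyama path of dr = (r − r³) dt + σ r ∘ dW_t as a stand-in for s ↦ r*(θ_s ω).

```python
import random, math

def h(r):                       # hypothetical smooth angular speed, bounded below by K_h = 0.5
    return 0.5 + r * r

def random_period(r_path, dt, target=2.0 * math.pi):
    """First T with  integral over [0, T] of h(r(s)) ds = 2*pi; the integrand is
    >= K_h > 0, so the accumulated integral is strictly increasing in T."""
    acc = 0.0
    for k, r in enumerate(r_path):
        step = h(r) * dt
        if acc + step >= target:
            return k * dt + (target - acc) / h(r)   # linear interpolation in the last step
        acc += step
    raise ValueError("path too short")

random.seed(4)
dt, sigma = 1e-3, 0.3
r, path = 1.0, []
for _ in range(20000):                               # Euler-Maruyama path of the radial SDE
    dw = random.gauss(0.0, math.sqrt(dt))
    r += (r - r**3 + 0.5 * sigma**2 * r) * dt + sigma * r * dw
    path.append(r)

T_omega = random_period(path, dt)
print(T_omega)
```

Different noise realizations (seeds) yield different values of T_omega, which is exactly the noise-dependence of the period that Definition 3.9 accommodates.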
Figure 3: Illustration of a CRPS: a point is mapped to ψ(0, ω), which is then mapped by ϕ(t, ω, ·) to ψ(t, θ_t ω), in each case preserving the period T(ω). The arrows indicate that the CRPS parametrizes the fiber of the attractor.
Note that in Example 3.8, and thereby also in the simpler subcase Example 3.7, it is easy to check that the Lyapunov exponents satisfy λ_1 = 0 and λ_2 < 0. We want to make three additional remarks on Proposition 3.10, also concerning Definition 3.9.
Remark 3.11. The proof of Proposition 3.10 shows why we require ψ(t + T(θ_{−t} ω), ω) = ψ(t, ω) in Definition 3.9 instead of choosing T(ω) or T(θ_t ω) in such a formula: it is precisely the relation we obtain from equations (3.14) and (3.15). Instead of equation (3.15), one might alternatively consider a period defined by a forward-in-time integral and replace the time integral in ψ_ϑ(t, ω) (3.14) accordingly. However, it is easy to check that the invariance requirement ϕ(t, ω, ψ_ϑ(t_0, ω)) = ψ_ϑ(t + t_0, θ_t ω) is not satisfied in this situation. Hence, the choice of period in Definition 3.9 turns out to be the appropriate one for an application to Example 3.8, which we see as the fundamental model for extending random periodic solutions to noise-dependent periods. Additionally note that, when the angular noise term in equation (3.12) does not vanish, the direction of periodicity depends on the noise realization ω.
The family of Crauel random periodic solutions from Proposition 3.10 parametrizes the fibers of the attractor, i.e. A(ω) = { ψ_ϑ(0, ω) : ϑ ∈ [0, 2π) }, where A is the random attractor given in equation (3.9). Hence, we have established the analogous situation to the deterministic case in the sense that the attracting random cycle corresponds to a random periodic solution; see also Figure 3. In more general situations, without the skew-product structure of the examples above, there can, of course, still be a CRPS, but we do not know a priori the existence of some stationary process ϑ^* similar to r^*, which we would need in order to write down an explicit solution such as (3.14).

Chaotic random attractors and singletons
More generally, i.e., in addition to the case with first Lyapunov exponent λ_1 = 0, we want to consider the situations where λ_1 > 0 and λ_1 < 0 (always assuming volume contraction to an attractor, expressed by Σ_j λ_j < 0). For λ_1 < 0, this typically means that the random attractor is a singleton (see, for example, [23]) and one speaks of complete synchronization. In such a situation, the dynamics on the random attractor is trivial, so there is no natural notion of isochronicity. In the case λ_1 > 0, one typically speaks of a chaotic random attractor, which is not a singleton. We can illustrate these two cases by the following example, very similar to the previous ones.
Example 3.14. We consider the following stochastic differential equations on R^2 with purely external noise of intensity σ ≥ 0, where b ∈ R and W_t^1, W_t^2 denote independent one-dimensional Brownian motions. In polar coordinates, the system can be written in a form that illustrates the role of the parameter b as inducing a shear force: if b > 0, the phase velocity dϑ/dt depends on the amplitude r. Since Gaussian random vectors are invariant under orthogonal transformations, one might think of rewriting the problem in polar coordinates with new independent Wiener processes. However, the pathwise properties of the processes seen as random dynamical systems change under this transformation. In (3.18), the radial components of the trajectories depend on ϑ, which appears in the diffusion term and destroys the skew-product structure we had in the previous Example 3.8.
It has been shown in [20] that for b small enough the first Lyapunov exponent λ_1 is negative, such that the corresponding random attractor A is indeed a singleton. For b large, one can see numerically that the attractor becomes chaotic. A proof of λ_1 > 0 has been obtained in [22] for a simplified model of (4.20) in cylindrical coordinates, and recently also in the setting of restricting the state space to a bounded domain and only considering the dynamics conditioned on survival in this domain, using a computer-assisted proof technique [11].
One can characterize chaotic random attractors as non-trivial geometric objects and supports of SRB measures, i.e. sample measures with densities on unstable manifolds. For details see [10,29] and for further discussions relevant for our setting e.g. [9,21]. Due to the compactness and the minimality property of random attractors there must be recurrence on these objects and one may even find Crauel Random Periodic Solutions there. However, it is questionable to what extent one can speak of isochronicity, given the very irregular recurrence properties. This already makes isochronicity a difficult issue for deterministic chaotic oscillators, see e.g. [42].

Random limit cycles as normally hyperbolic random invariant manifolds
As we have seen in Section 3.2.2, we can generally not expect the persistence of periodic orbits from the deterministic to the stochastic case under (global) white noise perturbations. A point of view considering only local, bounded noise perturbations of normally hyperbolic manifolds, i.e. implicitly also hyperbolic limit cycles, is presented in [30], where normally hyperbolic random invariant manifolds and their foliations are studied. In more detail, consider the ODE (2.1) with a small random perturbation, i.e. the random differential equation
ẋ = f(x) + εF(x, θ_t ω), (3.19)
where ε > 0 is a small parameter and F is C^1, uniformly bounded in x, C^0 in t for fixed ω, and measurable in ω. In several cases, SDEs can be transformed into a random differential equation (3.19), in particular when the noise is additive or linear multiplicative; however, in this case, F is generally not uniformly bounded. Hence, for an application of the following, one has to truncate the Brownian motion by a fixed large constant, as we will discuss later. Let us firstly give the following definition: the random invariant manifold M is called normally hyperbolic if for almost every ω ∈ Ω and any x ∈ M(ω) there exists a splitting, C^0 in x and measurable,
T_x R^m = E^u(ω, x) ⊕ E^c(ω, x) ⊕ E^s(ω, x),
of closed subspaces with associated projections Π^u(ω, x), Π^c(ω, x) and Π^s(ω, x) such that (i) the splitting is invariant, and the expansion and contraction rates along E^u and E^s dominate those along E^c. We can then deduce the following statements:
Proposition 3.16. Assume that Φ is a C^k flow, k ≥ 1, in R^m which has a hyperbolic periodic orbit γ, with exponents ᾱ = 0 < β̄ characterizing the normal hyperbolicity as in (3.20), (3.22).
Then there exists a δ > 0 such that for any random C^1 flow ϕ(t, ω, ·) in R^m, as for example induced by an RDE (3.19), which is δ-close to Φ in the C^1 sense, we have that (i) the random flow ϕ(t, ω, ·) has a C^1 normally hyperbolic invariant random manifold M(ω) in a small neighbourhood of γ.
Proof. The statements (i)-(iii) follow directly from [30, Theorem 2.2]. It is clear from (iii) that M(ω) is a random forward attractor with respect to the collection S of tempered random sets whose fibers S(ω) are contained in W^s(ω). Additionally, from (ii), it follows directly that M(ω) is diffeomorphic to the unit circle and, hence, we can conclude statement (iv).
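The persistence statement can be illustrated in the simplest radial caricature: a bounded time-dependent perturbation of strength ε added to a one-dimensional equation with a hyperbolic attracting state. The forcing F below is a hypothetical bounded stand-in for a (truncated) noise realization; trajectories from a whole neighbourhood end up in a thin annulus of width O(ε) around the unperturbed cycle.

```python
import math

def run_perturbed(r0, eps, T=50.0, dt=1e-3):
    # radial part of a perturbed limit cycle:  r' = r - r^3 + eps * F(t),
    # with the bounded, hypothetical forcing F(t) = sin(t) + 0.5*cos(sqrt(2)*t), |F| <= 1.5
    r, t = r0, 0.0
    for _ in range(int(T / dt)):
        F = math.sin(t) + 0.5 * math.cos(math.sqrt(2.0) * t)
        r += (r - r**3 + eps * F) * dt
        t += dt
    return r

# for small eps, trajectories from a whole neighbourhood end up in a thin
# annulus around the unperturbed cycle r = 1: the invariant circle persists
ends = [run_perturbed(r0, eps=0.1) for r0 in (0.4, 1.0, 1.8)]
print(ends)
```

Since the linearization at r = 1 contracts at rate about −2, the perturbed radius stays within roughly ε·sup|F|/2 of 1, which is the quantitative content of the smallness condition in the proposition.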

Definition of forward isochrons
Let A be an attracting random cycle for the random dynamical system (θ, ϕ), where A is a random forward attractor (and possibly also a random pullback attractor). One may think of equations of the type (3.12) or (3.18). We further assume that we are in the situation of a differentiable hyperbolic random dynamical system as discussed in Section 3.1.
In the typical setting of attracting random cycles, we may assume that λ 1 = 0 with single multiplicity and λ i < 0 for all 2 ≤ i ≤ p. In analogy to the stable manifolds of points on a deterministic limit cycle, we can then establish the following key novel definition (see also Figure 4).
Definition 4.1 (Random forward isochron). For an attracting random cycle A, define the forward isochron of x as

W^f(ω, x) := { y ∈ R^m : lim_{t→∞} d(ϕ(t, ω, y), ϕ(t, ω, x)) = 0 }   (4.1)

for almost all ω ∈ Ω and all x ∈ A(ω). In particular, we have for all λ̃ ∈ (0, −λ_2), where λ_2 denotes the largest nonzero Lyapunov exponent,

W^f(ω, x) = { y ∈ R^m : sup_{t≥0} e^{λ̃ t} d(ϕ(t, ω, y), ϕ(t, ω, x)) < ∞ }.   (4.2)

Remark 4.2. It is clear from the definition why we exclude the case λ_1 < 0: in this situation, the set W^f(ω, x) is the whole absorbing set and, hence, no information about the decomposition of the state space by the dynamics can be obtained this way. As indicated in Section 3.2.2, a chaotic random attractor, characterized by λ_1 > 0, also exhibits recurrence properties, so that Definition 4.1 can in principle also be applied to this situation. However, it is arguable to what extent one can speak of isochronicity, given the irregular recurrence properties. Since this already makes isochronicity a difficult issue for deterministic chaotic oscillators [42], we leave a detailed analysis of random isochrons for chaotic random attractors as a topic for future work.
It is easy to observe that for all s ≥ 0 we have

ϕ(s, ω, W^f(ω, x)) ⊆ W^f(θ_s ω, ϕ(s, ω, x)),

i.e. the forward isochrons are ϕ-invariant, as depicted in Figure 4.
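The role of the shared noise realization ω in this definition can be illustrated numerically. The following sketch uses an illustrative polar SDE with linear multiplicative radial noise, in the spirit of the (3.10)/(3.11)-type examples but not identical to them: under one fixed noise path, two initial conditions with the same angle (the same deterministic isochron) converge to each other in forward time, while an angular offset persists, marking a different isochron.

```python
import numpy as np

def flow(theta0, r0, W_incr, dt=1e-3, sigma=0.2):
    """Euler-Maruyama for the illustrative polar SDE
        dtheta = dt,   dr = (r - r^3) dt + sigma * r dW,
    driven by one fixed Brownian increment path W_incr (one noise
    realization omega shared by all initial conditions)."""
    th, r = theta0, r0
    for dW in W_incr:
        r = r + (r - r**3) * dt + sigma * r * dW
        th = th + dt
    return th, r

rng = np.random.default_rng(1)
dt, n = 1e-3, 20_000
W = rng.normal(0.0, np.sqrt(dt), size=n)   # one realization of the noise

# same angle, different radii: same forward isochron
th_a, r_a = flow(0.0, 1.4, W, dt)
th_b, r_b = flow(0.0, 0.6, W, dt)
# angular offset pi/2: different forward isochron
th_c, r_c = flow(np.pi / 2, 1.0, W, dt)

gap_same = float(np.hypot(r_a - r_b, th_a - th_b))  # decays towards 0
gap_diff = float(abs(th_c - th_a))                   # stays at about pi/2
```

Here the radial contraction under one fixed ω makes the distance of the first pair decay exponentially (at a rate bounded by the negative Lyapunov exponent), while the angular separation of the third trajectory is preserved.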

Existence and properties of random stable sets
In the literature on (global) random dynamical systems, the existence of stable sets such as W^f(ω, x) as stable manifolds is often first established for discrete time, see e.g. [36] or [32, Chapter III]. (Arnold's treatment [1, Chapter 7] is limited to equilibria.) Even though the local view in [30], as described in Section 3.3, is different, we also need to account for the global situation in order to provide the full picture. Hence, we begin by adopting the discrete-time approach, reducing the analysis to the time-one maps ϕ(1, ω, ·) and their concatenations

ϕ(n, ω, ·) = ϕ(1, θ_{n−1}ω, ·) ∘ ⋯ ∘ ϕ(1, θ_1 ω, ·) ∘ ϕ(1, ω, ·), n ∈ N.   (4.4)

First we want to conclude for all λ̃ ∈ (0, −λ_2) that

W̃^s(ω, x) := { y ∈ R^m : sup_{n≥0} e^{λ̃ n} d(ϕ(n, ω, y), ϕ(n, ω, x)) < ∞ }   (4.5)

is an (m − 1)-dimensional immersed C^k-submanifold under sufficient boundedness assumptions, which would be immediately satisfied if the state space X were a compact manifold (cf. [32, Chapter III, Theorem 3.2]). We will state such conditions for our setting X = R^m in the following. The transition to the time-continuous case, i.e. establishing W^f(ω, x) = W̃^s(ω, x), then follows immediately from the integrability assumption (3.4) for the MET, as one can observe from the proof of [32, Chapter V, Theorem 2.2].

One possible approach can be found in [9]: consider the maps (4.4). For x ∈ R^m, we define the local linear shift function f_x : T_x R^m → R^m, f_x(v) := x + v. Further, we define the map

F_{(ω,x),n} : T_{ϕ(n,ω,x)} R^m → T_{ϕ(n+1,ω,x)} R^m, F_{(ω,x),n} := f^{−1}_{ϕ(n+1,ω,x)} ∘ ϕ(1, θ_n ω, ·) ∘ f_{ϕ(n,ω,x)},

which is the evolution process of the dynamics in local coordinates around the trajectory starting at x ∈ R^m. Assume that there is an invariant probability measure P × ρ for (Θ_t)_{t≥0} on (Ω × R^m, F^∞_0 × B(R^m)) (see Appendix A.1 and A.2). If the RDS is induced by an SDE, the measure ρ is exactly the stationary measure of the associated Markov process. The integrability condition of the MET with respect to this measure reads

log⁺ ‖Dϕ(1, ω, x)‖ ∈ L^1(P × ρ).   (4.6)

The crucial boundedness assumption that compensates for the lack of compactness in the proof of a stable manifold theorem reads

log sup_{y ∈ B_1(x)} ( ‖Dϕ(1, ω, y)‖ + ‖D²ϕ(1, ω, y)‖ ) ∈ L^1(P × ρ),   (4.7)

where D² is the second derivative operator and B_1(x) denotes the ball of radius 1 centered at x ∈ R^m. In the situation where the maps (4.4) of the discrete-time RDS are the time-one maps of the continuous-time RDS induced by the SDE (3.2), with the stationary distribution fulfilling (4.8), we have the following requirements (4.9) on b, σ_i ∈ C^{k+1}, 1 ≤ i ≤ n, k ≥ 2, such that assumption (4.7) is satisfied, where 0 < δ ≤ 1 and with multi-index notation α = (α_1, . . . , α_m): the coefficients of the SDE have at most linear growth, globally bounded derivatives, and k-th derivatives with bounded δ-Hölder norm. In [9], also the backward flow and a condition similar to (4.7) for the inverse are considered, but these are not needed when we purely regard the stable manifold problem.

These conditions on the drift b are generally too restrictive, since already examples (3.6), (3.10) and (3.11) are not covered. Of course, one can always consider the dynamics on a compact domain K, with absorbing or reflecting conditions at the boundary of the domain, as we will see later in Section 4.3 for the averaged problem on the level of the Kolmogorov equations. However, this involves further technicalities for the random dynamical systems approach which we try to avoid here. The easiest way of reduction to a compact domain K is to assume compact support of the noise and absorption into K through the drift dynamics, such that neither global growth conditions nor boundary conditions are needed (see Theorem 4.3 (iii)).

Additionally, we consider [23, Section 3], which discusses conditions for synchronization to a singleton random attractor for random dynamical systems induced by an SDE (3.2) with additive noise, i.e. n = m and, for all 1 ≤ i, j ≤ n, σ^j_i = σδ_{i,j}, where σ > 0 and σ^j_i denotes the j-th entry of the vector σ_i.
The authors formulate a special local stable manifold theorem for the case λ_1 < 0, which is, however, based on [36], where stable manifold theorems are treated in full generality. The assumption for deducing the local stable manifold theorem amounts to a (weaker) combination of conditions (4.6) and (4.7), and reads as an integrability condition (4.11) on the local C^{1,δ}-norm of the time-one map, where C^{1,δ} is the space of C^1-functions whose derivatives are δ-Hölder continuous for some δ ∈ (0, 1) and ρ denotes the stationary measure of the associated Markov process. We introduce a classical dissipativity condition, the one-sided Lipschitz condition

⟨x − y, b(x) − b(y)⟩ ≤ κ ‖x − y‖²   (4.12)

for all x, y ∈ R^m and some κ > 0. According to [23, Lemma 3.9], condition (4.11) is satisfied in the case of additive noise if b ∈ C²(R^m) fulfills (4.12), admits at most polynomial growth of the second derivative, i.e. condition (4.13), and the stationary distribution ρ satisfies (4.14).

Main theorem about random isochrons
Assumptions (4.12) and (4.13) on the drift are weaker than condition (4.9) but are, in [23], only applied to situations with additive noise, whereas at least linear multiplicative noise as in (3.10) is a desirable model for random periodicity. We address this issue in Remark 4.4 and in point (iii) of the following theorem, which summarizes the findings from above:

Theorem 4.3 (Forward isochrons are stable manifolds).
Consider an ergodic C^k, k ≥ 2, random dynamical system (θ, ϕ) on R^m with random attractor A, satisfying the integrability assumption (3.4) of the Multiplicative Ergodic Theorem such that λ_1 = 0 with multiplicity one and λ_i < 0 for all 2 ≤ i ≤ p. Let further one of the following assumptions be satisfied:
(i) the RDS (θ, ϕ) is induced by an SDE of the form (3.2) such that the unique stationary measure ρ satisfies (4.8) and the drift and diffusion coefficients satisfy (4.9);
(ii) the RDS (θ, ϕ) is induced by an SDE of the form (3.2) with n = m and, for all 1 ≤ i, j ≤ n, σ^j_i = σδ_{i,j}, where σ > 0, such that the unique stationary measure ρ satisfies (4.14) and the drift satisfies conditions (4.12) and (4.13);
(iii) the RDS (θ, ϕ) is induced by an SDE of the form (3.2) such that supp(σ) ⊂ R^m is compact, the drift b satisfies condition (4.12) with κ < 0 for all ‖x‖, ‖y‖ > R for some R > 0, and there is a unique stationary measure ρ with supp(ρ) ⊂ R^m compact;
(iv) the RDS satisfies the conditions of Proposition 3.16.
Then for almost all ω ∈ Ω and all x ∈ A(ω), the random forward isochrons W^f(ω, x) (see (4.2)) form a uniquely determined family, C^{k−1} in x, of C^k (m − 1)-dimensional submanifolds (at least locally, i.e. within a neighbourhood U of x) of the stable manifold W^s(ω) such that

W^s(ω) = ⋃_{x ∈ A(ω)} W^f(ω, x),

where the union is disjoint.

Proof. Under assumptions (i) or (ii), the stable manifold theorems discussed above yield that W^f(ω, x) is a C^k submanifold of R^m of dimension m − 1, at least within a neighbourhood U of x. Furthermore, it is obvious from the assumptions that condition (4.11) is satisfied and, hence, the claim under assumption (iii) is derived similarly to the one under assumption (ii). The claim under assumption (iv) follows from [30, Theorem 2.4].
This leaves to prove the foliation property in all these cases; the proof can be deduced in direct analogy to the proof of [30, Proposition 9 (iv)]. The fact that the union is disjoint can be seen as follows: assume there is a y ∈ W^f(ω, x) ∩ W^f(ω, x′) for x ≠ x′ in A(ω). Since A(ω) is an invariant hyperbolic limit cycle and x, x′ ∈ A(ω), we have that d(ϕ(t, ω, x), ϕ(t, ω, x′)) ≥ δ > 0 for all t ≥ 0. Hence, we obtain by the definition of W^f and the triangle inequality that

d(ϕ(t, ω, x), ϕ(t, ω, x′)) ≤ d(ϕ(t, ω, x), ϕ(t, ω, y)) + d(ϕ(t, ω, y), ϕ(t, ω, x′)) → 0 as t → ∞,

which is a contradiction (see the proof of [30, Proposition 9 (iii)] for a similar argument).

Remark 4.4. (i) Concerning an extension of Theorem 4.3 (ii) beyond additive noise: due to the mild behaviour (4.9) of the diffusion coefficients, one could try to make estimates analogous to [23, Lemma 3.9] to deduce that condition (4.11) is satisfied. Since we are mainly interested in the local behavior, we refrain from conducting such estimates here, but point out that this would be an interesting general extension.
(ii) Consider the example equation (3.11) (and thereby equation (3.10)): the drift b is polynomial, so that condition (4.13) is satisfied, and a direct computation shows that condition (4.12) is satisfied as well. Furthermore, the unique stationary distribution ρ has a density solving the stationary Fokker-Planck equation, so that condition (4.14) is also fulfilled. Since the noise term is linear, we obviously have ∑_{i=1}^n ‖σ_i‖_{k,δ} < ∞ for all k ≥ 2, δ ∈ (0, 1]. Hence, we could deduce the assertions of Theorem 4.3 if we had proven the extension discussed in (i).
However, for our purposes, this is not necessary: using the same transformation as in estimate (4.15), we additionally see that the assumptions of Theorem 4.3 (iii) can be arranged, and hence we can apply Theorem 4.3 (iii). In particular, note that this construction allows, when σ is small enough, for a transformation into the random ODE (3.19) with sufficiently small bounded noise, such that Proposition 3.16 and, by that, Theorem 4.3 (iv) can be applied. This procedure is, of course, independent of the particular form of equation (3.11) and can be used for any SDE around a deterministic limit cycle whenever the transformation into the random ODE (3.19) is possible (which is always the case for additive and linear multiplicative noise).
If, as in Proposition 3.10, each x ∈ A(ω) can be identified as ψ_x(0, ω) for some Crauel random periodic solution, then T_x(ω) is the period we can associate with W^f(ω, ψ_x(0, ω)). We summarize this insight in the following definition:

Definition 4.5 (Period of random forward isochron). Let (ψ, T) be a Crauel random periodic solution for the RDS (θ, ϕ) such that ψ(t, ω) ∈ A(ω) for all ω ∈ Ω and t ≥ 0, where A is an attracting random cycle or a chaotic random attractor. Then we call T(ω) the period of the corresponding random forward isochron W^f(ω, ψ(0, ω)) for all ω ∈ Ω.
The natural question arises whether

ϕ(T_x(ω), ω, N_x(ω)) ⊆ N_x(θ_{T_x(ω)} ω)

holds for some measurable family N_x(ω) of cross-sections, in particular whether we can identify N_x(ω) = W^f(ω, ψ_x(0, ω)). What we observe is the following:

Proposition 4.6. Let (θ, ϕ) be a random dynamical system with random attractor A and isochrons W^f(ω, x) as given in (4.1), such that each x ∈ A(ω) can be identified with ψ_x(0, ω) for some Crauel random periodic solution (ψ_x, T_x). Then we have

ϕ(T_x(ω), ω, W^f(ω, ψ_x(0, ω))) ⊆ W^f(θ_{T_x(ω)} ω, ψ_x(T_x(ω), θ_{T_x(ω)} ω)).   (4.17)

Proof. Let y ∈ W^f(ω, ψ_x(0, ω)). Then, by the cocycle property and ϕ(T_x(ω), ω, ψ_x(0, ω)) = ψ_x(T_x(ω), θ_{T_x(ω)}ω),

d(ϕ(t, θ_{T_x(ω)}ω, ϕ(T_x(ω), ω, y)), ϕ(t, θ_{T_x(ω)}ω, ψ_x(T_x(ω), θ_{T_x(ω)}ω))) = d(ϕ(t + T_x(ω), ω, y), ϕ(t + T_x(ω), ω, ψ_x(0, ω))) → 0 as t → ∞.

Hence, the statement follows directly.

Pullback isochrons
In analogy to the different notions of a random attractor, one could also consider defining fiberwise isochrons for random dynamical systems in a pullback sense, as follows: again assume there is a Crauel random periodic solution (ψ, T) on an attracting random cycle A (or chaotic random attractor A). Then the random pullback isochrons could only be defined as

W^p(ω, ψ(0, ω)) := { y ∈ R^m : lim_{t→∞} d(ϕ(t, θ_{−t}ω, y), ϕ(t, θ_{−t}ω, ψ(0, θ_{−t}ω))) = 0 }

for almost all ω ∈ Ω. In contrast to the random forward isochron W^f(ω, ψ(0, ω)), the set W^p(ω, ψ(0, ω)) is not given as a stable set for the point ψ(0, ω) but as the set of points whose pullback trajectories converge to the trajectories starting in ψ(0, θ_{−t}ω) as t → ∞. Hence, such a definition cannot coincide with a stable manifold for a given point on a given fiber of the random attractor, and, in particular, there does not seem to be a way to connect the set W^p(ω, ψ(0, ω)) to the set W^f(ω, ψ(0, ω)). In other words, it is not clear what geometric interpretation such a random pullback isochron could have, and it is apparent that the definition in forward time, i.e. Definition 4.1, yields the most immediately meaningful object in this context.

The random isochron map
For the following, recall the stochastic differential equation (3.2),

dX_t = b(X_t) dt + ∑_{i=1}^n σ_i(X_t) dW_t^i,

where the W_t^i are independent real-valued Brownian motions, b is a C^k vector field, k ≥ 1, and σ_1, . . . , σ_n are C^{k+1} vector fields satisfying bounded growth conditions, e.g. (global) Lipschitz continuity in all derivatives, to guarantee the existence of a (global) random dynamical system with cocycle ϕ and derivative cocycle Dϕ.

Example 4.7. As before, the main examples we have in mind are two-dimensional. In particular, we may consider the corresponding stochastic differential equation in polar coordinates (ϑ, r), see (4.20). As in Examples 3.8 and 3.14, we usually regard a situation such that in the deterministic case σ_1 = σ_2 = 0 there is an attracting limit cycle at r = r* > 0.
Equation (4.25) is the analogous equation for the random dynamical system.
This finishes the proof.

Stochastic isochrons via mean return time and random isochrons
One main approach to defining stochastic isochrons in the physics literature is due to Schwabedal and Pikovsky [41], who introduce isochrons (or isophase surfaces) for noisy systems as sections W^E(x) for which the mean first return time to the same section W^E(x) is a constant T̄, equaling the average oscillation period. Note that such an object is not well-defined a priori, as it is unclear what is meant by "return", i.e., return to what? The paper does not rigorously establish these objects but only gives a numerical algorithm, which is successfully tested on several examples. According to the algorithm, a deterministic starting section N is adjusted according to the mean return time, i.e., points are moved in proportion to the mismatch between their return time and the mean period of N, and this procedure is repeated until all points have the same mean return time.
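The inner step of this algorithm, Monte-Carlo estimation of the mean return time to a candidate section, can be sketched for a toy phase equation with decoupled amplitude (an illustrative model, not one of the examples in the text). Because the phase dynamics here do not depend on the amplitude, every radial section is already an isophase, the adjustment loop would terminate immediately, and the mean return time equals the mean period 2π.

```python
import numpy as np

def mean_return_time(n_paths=2000, sigma=0.3, dt=5e-3, seed=2):
    """Monte-Carlo estimate of the mean first return time to the section
    {theta = 0 mod 2pi} for the toy phase equation
        dtheta = dt + sigma dW.
    We use the *unwrapped* phase: the return time of a path is the first
    time its phase has advanced by a full rotation 2pi."""
    rng = np.random.default_rng(seed)
    horizon = int(3 * 2 * np.pi / dt)          # generous time horizon
    # Euler-Maruyama phase increments for all paths at once
    incr = dt + sigma * np.sqrt(dt) * rng.normal(size=(n_paths, horizon))
    theta = np.cumsum(incr, axis=1)            # unwrapped phase paths
    first = np.argmax(theta >= 2 * np.pi, axis=1)  # first crossing index
    return float((first + 1).mean() * dt)

tbar = mean_return_time()   # close to the mean period 2*pi
```

In the full algorithm this estimate would be computed for each point of the section, and points would then be shifted according to the mismatch between their return time and the mean period; for this rotationally symmetric toy model no shift is necessary.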

The modified Andronov-Vitt-Pontryagin formula in [13]
Cao, Lindner and Thomas [13] have made this approach rigorous by using a modified version of the Andronov-Vitt-Pontryagin formula for the mean first passage time (MFPT) τ_D of a bounded domain D through its boundary ∂D. In more detail (cf. [40, Chapter 4.4]), the associated boundary value problem, with L denoting the generator of the process, also called the backward Kolmogorov operator, is given by

L τ_D = −1 on D, τ_D = 0 on ∂D.

The problem in our case is that if we consider a domain whose absorbing boundary in the ϑ-direction is a line l̄ := {(θ̄(r̄), r̄) : R_1 ≤ r̄ ≤ R_2}, where θ̄ is a smooth function, the stochastic motion might not perform a full rotation to reach this boundary line. In particular, the mean return time for trajectories starting on l̄ will be zero. To circumvent this problem, Cao et al. unwrap the phase by considering infinitely many copies of l̄ on the extended domain R × [R_1, R_2]. For some (ϑ, r) with ϑ < 2π < θ̄(r), the mean first passage time T(ϑ, r) is then calculated via the Andronov-Vitt-Pontryagin formula with a periodic-plus-jump boundary condition in the ϑ-direction and a reflecting boundary condition in the r-direction.
Furthermore, for a C^1-function γ : [R_1, R_2] → [0, 2π], the graph C_γ (cf. l̄ above) separates the domain Ω_ext = R × [R_1, R_2] into a left and a right connected component, with unit normal vector n(r) oriented to the right. It is then assumed that the mean rightward probability flux through C_γ is positive, and the mean period T̄ of the oscillator is then given as the reciprocal of this total flux. The modified Andronov-Vitt-Pontryagin formula is then given by the PDE

L T = −1 on Ω_ext,

with reflecting boundary conditions in the r-direction and the jump-periodic condition T(ϑ + 2π, r) = T(ϑ, r) − T̄ in the ϑ-direction. In fact, the last condition can be weakened to (4.38). Under the discussed assumptions, it is then shown in [13, Theorem 3.1] that the equation has a solution T(ϑ, r) on Ω_ext and, hence, by restriction, on Ω, which is unique up to an additive constant. The level sets of T(ϑ, r) are then supposed to be the stochastic isochrons W^E((ϑ, r)) with mean return time T̄ and associated isophase Θ (up to some constant Θ̄_0), which therefore satisfies

LΘ = 2π / T̄.   (4.39)
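The structure of the Andronov-Vitt-Pontryagin boundary value problem can be illustrated in one dimension, where L T = −1 with absorbing boundary conditions reduces to a two-point boundary value problem that a standard finite-difference scheme solves directly. This is an illustrative analogue of the problem quoted above, not the extended-domain problem of [13]; for Brownian motion the exact MFPT T(x) = x(L − x)/(2D) is recovered.

```python
import numpy as np

def mfpt_1d(b, D, L=1.0, n=400):
    """Finite-difference solve of the 1D Andronov-Vitt-Pontryagin problem
        D * T''(x) + b(x) * T'(x) = -1  on (0, L),   T(0) = T(L) = 0,
    i.e. L T = -1 with absorbing boundary, using central differences."""
    h = L / n
    x = np.linspace(0.0, L, n + 1)
    A = np.zeros((n - 1, n - 1))
    rhs = -np.ones(n - 1)
    for i in range(1, n):                 # interior grid points x_1..x_{n-1}
        bi = b(x[i])
        if i > 1:
            A[i - 1, i - 2] = D / h**2 - bi / (2 * h)   # coefficient of T_{i-1}
        A[i - 1, i - 1] = -2.0 * D / h**2               # coefficient of T_i
        if i < n - 1:
            A[i - 1, i] = D / h**2 + bi / (2 * h)       # coefficient of T_{i+1}
    T_inner = np.linalg.solve(A, rhs)
    return x, np.concatenate(([0.0], T_inner, [0.0]))

# Brownian motion: generator (1/2) d^2/dx^2, exact MFPT is T(x) = x(1 - x)
x, T = mfpt_1d(lambda s: 0.0, D=0.5)
err = float(np.max(np.abs(T - x * (1.0 - x))))
```

Since the exact solution is quadratic, the central-difference scheme reproduces it to machine precision; the two-dimensional problem of [13] differs mainly in the jump-periodic and reflecting boundary conditions.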

Relation to random isochrons
Recall from Definition 4.5 that, for a CRPS (ψ, T), the random period T(ω) corresponds to the random forward isochron W^f(ω, ψ(0, ω)) for all ω ∈ Ω. Hence, we can define the expected period as T̄_RDS := E[T(·)], where the index RDS indicates the random dynamical systems perspective. In the following, we discuss how T̄_RDS is related to T̄ and to the isochron function Θ from (4.39).

(4.45)
This T̄ is the expected return time to the isochron W^E((ϑ_0, r_0)), which is the level set of φ̄ through (ϑ_0, r_0). In particular, the function φ̄ can be identified with the solution Θ of equation (4.39).
Proof. Using the chain rule of Stratonovich calculus and inserting (4.20), equation (4.44) can be rewritten accordingly; the boundary condition in the angular direction then gives, in particular, T̄ = φ̄(2π, r*).
In radial direction, if 0 < R 1 < R 2 < ∞, one can choose reflecting boundary conditions as in Section 4.3.1.
Writing time t as an index, transforming the Stratonovich noise terms into Itô noise terms, and using the fact that the Itô noise terms have zero expectation leads to equation (4.48), whose level sets then yield the isochrons W^E((ϑ, r)).
We exemplify this derivation of an isochron function φ̄ with the fundamental Example 3.8:

Example 4.13. Recall Example 3.8 with equation (3.12), i.e. in its most general form, choosing h(r) = κ + (r² − 1), κ ≥ 1, similarly to [41, Example (1)], and h̃ some arbitrary smooth and bounded function. Note that r* = 1 in this case and that there is a stationary density p for the radial process, with normalization constant Z > 0. One can then additionally observe that E_p[r²] = 1 for all σ ≥ 0. It is easy to see that φ̄ solves (4.48), such that (4.45) is actually satisfied with T̄ = 2π/κ. In fact, φ̄ coincides (up to an additive constant) with the deterministic isochron map in this case.

Similarly to T̄_RDS := E[T(·)], we can introduce for the associated random isochron map φ̃ the expected quantity

φ̄_RDS(x) := E[φ̃(x, ·, 0)]

for fixed x ∈ R^m, where φ̃ is the random isochron map from Section 4.2. It remains to clarify how the isochron function φ̄, or equivalently Θ from (4.39), may be related to φ̄_RDS, assuming the existence of a CRPS (ψ, T) as in Example 3.8 (see Proposition 3.10). We give a brief discussion of a possible approach to this question in Appendix A.5, leaving a more thorough investigation as future work.

Conclusion
We have introduced a new perspective on the problem of stochastic isochronicity by considering random isochrons as random stable manifolds anchored at attracting random cycles with random periodic solutions. We have further characterized these random isochrons as level sets of a time-dependent random isochron map. Precisely this time-dependence of the random dynamical system, i.e., its non-autonomous nature, makes it difficult to specify the concrete relation to the definitions of stochastic isochrons given by fixed expected mean return times, for which we have given an alternative derivation of the isochron function φ̄ with return time T̄. We suggest an extended investigation of their relationship to the expected quantity φ̄_RDS as an intriguing problem for future work. Additionally, it would be interesting to study the relation between stochastic isochronicity via eigenfunctions of the backward Kolmogorov operator [44] and random Koopman operators (see [18]), extending the eigenfunction approach from the deterministic setting to the random dynamical systems case.

Acknowledgments:
The authors gratefully acknowledge support by the DFG via the SFB/TR109 "Discretization in Geometry and Dynamics". ME has also been supported by Germany's Excellence Strategy, The Berlin Mathematics Research Center MATH+ (EXC-2046/1, project ID: 390685689). CK acknowledges support by a Lichtenberg Professorship of the VolkswagenStiftung.

A Random dynamical systems
In this appendix we collect, for reference, several constructions from the theory of random dynamical systems that are used throughout the main part of this work.

A.1 Random dynamical systems induced by stochastic differential equations
Following [23], we make the following definition:

Definition A.1 (White noise RDS). Let (θ, ϕ) be a random dynamical system over a probability space (Ω, F, P) on a topological space X, where ϕ is defined in forward time. Let (F^t_s)_{−∞≤s≤t≤∞} be a family of sub-σ-algebras of F such that, among other compatibility properties, (iv) ϕ(t, ·, x) is F^t_0-measurable for all t ≥ 0 and x ∈ X.
Furthermore, we denote by F^t_{−∞} the smallest σ-algebra containing all F^t_s, s ≤ t, and by F^∞_t the smallest σ-algebra containing all F^u_t, t ≤ u. Then (θ, ϕ) is called a white noise (filtered) random dynamical system.

Consider a stochastic differential equation (SDE)

dX_t = f(X_t) dt + g(X_t) dW_t,   (A.1)

where (W_t) denotes some r-dimensional standard Brownian motion, the drift f : R^d → R^d is a locally Lipschitz continuous vector field, and the diffusion coefficient g : R^d → R^{d×r} is a Lipschitz continuous matrix-valued map. If in addition f satisfies a bounded growth condition, as for example a one-sided Lipschitz condition, then by [19] there is a white noise random dynamical system (θ, ϕ) associated to the diffusion process solving (A.1). The probabilistic setting is as follows: we set Ω = C_0(R, R^r), i.e. the space of all continuous functions ω : R → R^r satisfying ω(0) = 0 ∈ R^r. Endowing Ω with the compact-open topology, induced by a complete metric κ, we can set F = B(Ω), the Borel σ-algebra on (Ω, κ). There exists a probability measure P on (Ω, F), called the Wiener measure, such that the r processes (W^1_t), . . . , (W^r_t) defined by (W^1_t(ω), . . . , W^r_t(ω))^T := ω(t) for ω ∈ Ω are independent one-dimensional Brownian motions. Furthermore, we define the sub-σ-algebra F^t_s as the σ-algebra generated by ω(u) − ω(v) for s ≤ v ≤ u ≤ t. The ergodic metric dynamical system (θ_t)_{t∈R} on (Ω, F, P) is given by the shift maps

θ_t : Ω → Ω, (θ_t ω)(s) = ω(s + t) − ω(t).
Indeed, these maps form an ergodic flow preserving the probability measure P, see e.g. [1]. Note that, by the Itô-Stratonovich conversion formula, equation (A.1) with Stratonovich noise instead of Itô noise also induces a random dynamical system under analogous assumptions.
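The shift maps θ_t can be illustrated on a discretized Brownian path; the following purely numerical sketch checks the defining identity (θ_tω)(s) = ω(s + t) − ω(t) on a sample path.

```python
import numpy as np

# Sample a Brownian path on a grid and apply the discrete Wiener shift:
# the shifted path starts at zero again and its increments are exactly
# the increments of the original path after time t. (Numerical sketch only.)
rng = np.random.default_rng(5)
dt, n = 1e-3, 10_000
W = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))))

k = 2_500                        # shift by t = k * dt
W_shifted = W[k:] - W[k]         # (theta_t omega)(s) on the remaining grid

starts_at_zero = bool(W_shifted[0] == 0.0)
increments_match = bool(np.allclose(np.diff(W_shifted), np.diff(W)[k:]))
```

Both checks hold by construction; the point of the shift is that the Wiener measure P is invariant and ergodic under the family (θ_t), which is what makes (θ, ϕ) a random dynamical system over an ergodic base.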

A.2 Invariant measures
Let (θ, ϕ) be a random dynamical system with the cocycle ϕ defined on one- or two-sided time T ∈ {R⁺_0, R}. Then the system generates a skew product flow, i.e. a family of maps (Θ_t)_{t∈T} from Ω × X to itself such that for all t ∈ T and ω ∈ Ω, x ∈ X,

Θ_t(ω, x) = (θ_t ω, ϕ(t, ω, x)).
The notion of an invariant measure for the random dynamical system is given via the invariance with respect to the skew product flow, see e.g. [1, Definition 1.4.1]. We denote by T µ the push forward of a measure µ by a map T , i.e. T µ(·) = µ(T −1 (·)).
The marginal of µ on the probability space is demanded to be P, since we assume the model of the noise to be fixed. Note that the invariance of µ is equivalent to the invariance of the random measure ω → µ_ω on the state space X in the sense that

ϕ(t, ω, ·)µ_ω = µ_{θ_tω} P-a.s. for all t ∈ T.   (A.2)

For white noise random dynamical systems (θ, ϕ), in particular random dynamical systems induced by a stochastic differential equation, there is a one-to-one correspondence between certain invariant random measures and stationary measures of the associated stochastic process, first observed in [15]. In more detail, we can define a Markov semigroup (P_t)_{t≥0} by setting

P_t f(x) := E[f(ϕ(t, ·, x))]

for all measurable and bounded functions f : X → R. If ω → µ_ω is an F^0_{−∞}-measurable invariant random measure in the sense of (A.2), also called a Markov measure, then ρ(·) = E[µ_ω(·)] = ∫_Ω µ_ω(·) P(dω) turns out to be an invariant measure for the Markov semigroup (P_t)_{t≥0}, often also called a stationary measure for the associated process. If ρ is an invariant measure for the Markov semigroup, then µ_ω = lim_{t→∞} ϕ(t, θ_{−t}ω, ·)ρ exists P-a.s. and is an F^0_{−∞}-measurable invariant random measure. We observe, similarly to [5], that, in the situation of µ and ρ corresponding in the way described above,

E[µ_ω(·) | F^∞_0] = E[µ_ω(·)] = ρ(·), and, hence, E[µ(·) | F^∞_0] = (P × ρ)(·).

Therefore the probability measure P × ρ is invariant for (Θ_t)_{t≥0} on (Ω × X, F^∞_0 × B(X)). In words, the product measure with marginals P and ρ is invariant for the random dynamical system restricted to the one-sided path space.
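The correspondence between the stationary measure ρ and long-time behavior can be checked numerically in the simplest example, an Ornstein-Uhlenbeck process (chosen here purely for illustration, not taken from the text).

```python
import numpy as np

def ou_path(x0=3.0, sigma=1.0, T=2000.0, dt=1e-2, seed=3):
    """Euler-Maruyama path of the Ornstein-Uhlenbeck equation
        dX = -X dt + sigma dW.
    Its unique stationary measure is rho = N(0, sigma^2 / 2); the
    corresponding Markov invariant random measure is the Dirac measure
    at the random pullback equilibrium."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    out = np.empty(n)
    x, sq = x0, np.sqrt(dt)
    for k in range(n):
        x += -x * dt + sigma * sq * rng.normal()
        out[k] = x
    return out

path = ou_path()
samples = path[10_000:]                     # discard the transient
emp_mean, emp_var = float(samples.mean()), float(samples.var())
# ergodic averages approximate the moments of rho: mean 0, variance 1/2
```

The empirical mean and variance of the long-run path approximate the moments of ρ, reflecting that ρ(·) = E[µ_ω(·)] and that time averages converge under ergodicity.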

A.3 Lyapunov spectrum
Consider a C^k random dynamical system (θ, ϕ), i.e. ϕ(t, ω, ·) ∈ C^k for all t ∈ T and ω ∈ Ω, where again T ∈ {R, R⁺_0}. Let us assume that X is a smooth m-dimensional manifold and that (θ, ϕ) is at least C^1. Recall that the linearization or derivative Dϕ(t, ω, x) of ϕ(t, ω, ·) at x ∈ X is a linear map from the tangent space T_x X to the tangent space T_{ϕ(t,ω,x)} X. If X = R^m, the linearization is simply the Jacobian m × m matrix Dϕ(t, ω, x) = ∂ϕ(t, ω, x)/∂x.
Further assume that the random dynamical system possesses an invariant measure µ. In the case X = R^m, this implies that (Θ, Dϕ) is a random dynamical system with linear cocycle Dϕ over the metric dynamical system (Ω × X, F × B(X), (Θ_t)_{t∈T}), see e.g. [1, Proposition 4.2.1]. Generally, we have that Dϕ is a linear bundle random dynamical system on the tangent bundle TX (see [1, Definition 1.9.3, Proposition 4.25]). In case the derivative can be written as a matrix, as for example for X = R^m, the Jacobian Dϕ(t, ω, x) satisfies Liouville's equation. The Multiplicative Ergodic Theorem then requires the integrability condition ln⁺ ‖Dϕ(t, ω, x)‖ ∈ L^1(ν).
Furthermore, the tangent space T_x X ≅ R^m admits a filtration

R^m = V_1(ω, x) ⊋ V_2(ω, x) ⊋ ⋯ ⊋ V_p(ω, x) ⊋ V_{p+1}(ω, x) = {0}

for all (ω, x) ∈ ∆ such that

λ_i = lim_{t→∞} (1/t) ln ‖Dϕ(t, ω, x)v‖ for all v ∈ V_i(ω, x) \ V_{i+1}(ω, x), for all i ∈ {1, . . . , p}.
In case the derivative can be written as a matrix, we have for all (ω, x) ∈ ∆

lim_{t→∞} (1/t) ln |det Dϕ(t, ω, x)| = ∑_{i=1}^p d_i λ_i,

where d_i is the multiplicity of the Lyapunov exponent λ_i and ∑_{i=1}^p d_i = m.

b) If the cocycle ϕ is defined in two-sided time and satisfies the above integrability condition also in backwards time, there exists the Oseledets splitting of the tangent space into random subspaces E_i(ω, x), the Oseledets spaces, for all (ω, x) ∈ ∆. These have the following properties for all (ω, x) ∈ ∆: (i) the Oseledets spaces are invariant under the derivative flow, i.e. for all t ∈ R, Dϕ(t, ω, x)E_i(ω, x) = E_i(Θ_t(ω, x)); (ii) the Oseledets space E_i corresponds with λ_i in the sense that dim E_i(ω, x) = d_i.
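The Lyapunov exponents of the Multiplicative Ergodic Theorem can be approximated numerically by integrating the variational equation for Dϕ along a trajectory. The following sketch does this for an illustrative scalar SDE with an exponent that can be computed in closed form (this example is not a system from the text; the value −2 + σ² follows from the Gamma-distributed stationary law of r² for this equation).

```python
import numpy as np

def lyapunov_estimate(sigma=0.5, T=500.0, dt=1e-3, seed=4):
    """Estimate the Lyapunov exponent of the derivative cocycle along a
    trajectory of the Ito SDE   dr = (r - r^3) dt + sigma * r dW.
    The variational equation dv = (1 - 3 r^2) v dt + sigma v dW gives
        lambda = lim (1/t) ln|v_t| = E_rho[1 - 3 r^2] - sigma^2 / 2,
    which for this equation evaluates to -2 + sigma^2."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    r, log_v = 1.0, 0.0
    sq = np.sqrt(dt)
    for _ in range(n):
        dW = sq * rng.normal()
        # integrate ln|v| along the trajectory (Ito correction included)
        log_v += (1.0 - 3.0 * r * r - 0.5 * sigma**2) * dt + sigma * dW
        r += (r - r**3) * dt + sigma * r * dW
    return log_v / T

lam = lyapunov_estimate()   # close to -2 + sigma^2 = -1.75
```

Tracking ln|v_t| instead of v_t avoids numerical under- or overflow, the scalar analogue of the QR-based methods used for matrix cocycles.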

A.4 Existence of random attractors
The existence of random attractors is proved via so-called absorbing sets. A set B ∈ D is called an absorbing set if, for almost all ω ∈ Ω and any D ∈ D, there exists a T > 0 such that ϕ(t, θ_{−t}ω)D(θ_{−t}ω) ⊂ B(ω) for all t ≥ T.
A proof of the following theorem can be found in [24,Theorem 3.5].
Furthermore, ω → A(ω) is measurable with respect to F^0_{−∞}, i.e. the past of the system.

Remark A.5. Naturally, random attractors are related to invariant probability measures of a random dynamical system (θ, ϕ). It follows directly from [16, Proposition 4.5] that, if the fibers of a random attractor A, i.e. ω → A(ω), are measurable with respect to F^0_{−∞}, there is an invariant measure µ for (θ, ϕ) such that ω → µ_ω is measurable with respect to F^0_{−∞}, i.e. is a Markov measure, and satisfies µ_ω(A(ω)) = 1 for almost all ω ∈ Ω. In particular, if there exists a unique invariant probability measure ρ for the Markov semigroup (P_t)_{t≥0}, then the invariant Markov measure, supported on A, is unique by the one-to-one correspondence explained above. Additionally, if the Markov semigroup is strongly mixing, i.e.

P_t f(x) → ∫_X f(y) ρ(dy) as t → ∞

for all continuous and bounded f : X → R and x ∈ X, then the set Ã ∈ F × B(X), given by Ã(ω) = supp µ_ω ⊂ A(ω) for almost all ω ∈ Ω, is a minimal weak random point attractor according to [23, Proposition 2.20].
If we choose (ϑ, r) to be a point on the random attractor belonging to the CRPS ψ, say (ϑ, r) = ψ(0, ω), then, due to the fact that φ̃(ψ(t, θ_tω), θ_tω, t) = t for (almost) all ω ∈ Ω, this means that

E[E[φ̃(ψ(t, θ_tω), ω′, 0)]] = t.   (A.9)

Verifying equality (A.9) would therefore lead to establishing φ̄_RDS = φ̄. We have not found a clear reasoning when and why (or why not) relation (A.9) holds and leave it as an open problem to obtain a better understanding of this gap.