Abstract
Given two distinct subsets A, B in the state space of some dynamical system, transition path theory (TPT) has been successfully used to describe the statistical behavior of transitions from A to B in the ergodic limit of the stationary system. We derive generalizations of TPT that remove the requirements of stationarity and of the ergodic limit and provide this powerful tool for the analysis of other dynamical scenarios: periodically forced dynamics and time-dependent finite-time systems. This is partially motivated by studying applications such as climate, ocean, and social dynamics. On simple model examples, we show how the new tools are able to deliver quantitative understanding about the statistical behavior of such systems. We also point out explicit cases where the more general dynamical regimes show different behaviors to their stationary counterparts, linking these tools directly to bifurcations in non-deterministic systems.
Introduction
The understanding of when and how dynamical transitions, such as tipping processes, happen is important for many systems from physics, biology (Noé et al. 2009), ecology (Scheffer et al. 2001; Hastings et al. 2018), the climate (Lenton et al. 2008; Lenton 2013), and the social sciences (Nyborg et al. 2016; Otto et al. 2020).
If the system dynamics can be modeled by a stationary Markov process running for infinite time, transition path theory (TPT) provides a rigorous approach for studying the transitions from one subset A to another subset B of the state space. The main tools of transition path theory (Weinan and Vanden-Eijnden 2006; Metzner et al. 2009) are the forward and backward committor probabilities, which give the probability that the Markov process next commits to (i.e., hits) B rather than A, either forward or backward in time. Given these committor probabilities, one can derive important statistics of the ensemble of reactive trajectories (i.e., of the collection of all possible paths of the Markov process that start in A and end in B), such as

the density of reactive trajectories telling us about the bottlenecks during transitions,

the current of reactive trajectories indicating the most likely transition channels,

the rate of reactive trajectories leaving A or entering B, and

the mean duration of reactive trajectories.
Other approaches that characterize the ensemble of transition paths either place the focus elsewhere or consider different objects: For instance, in transition path sampling (Bolhuis et al. 2002) one is interested in directly sampling trajectories of the reactive ensemble, while other works study the steepest descent path (Ulitsky and Elber 1990; Czerminski and Elber 1990; Olender and Elber 1997), the most probable path (Olender and Elber 1996; Elber and Shalloway 2000; Pinski and Stuart 2010; Faccioli et al. 2010; Beccara et al. 2012) [also in temporal networks (Ser-Giacomi et al. 2015)], or the first passage path ensemble (von Kleist et al. 2018) (see also Remark 5.6).
For many physical, especially molecular, systems that are equilibrated and where transitions happen on a smaller time scale than the observation window, the assumption of a stationary, infinite-time Markov process is reasonable and common practice (Noé et al. 2009). For illustration, we consider the overdamped Langevin dynamics
$$\begin{aligned} \mathrm {d}X_t = -\nabla V(X_t)\, \mathrm {d}t + \sigma \, \mathrm {d}W_t \end{aligned}$$
in the triple well landscape V(x, y) (as in Fig. 1), and we are interested in the transitions between the deep well A and the other deep well B. If the noise intensity \(\sigma \) is sufficiently small, then the system tends to spend long times near the local minima of the deep wells (this behavior is called metastability), and transitions predominantly happen across regions where the potential V is comparatively low (e.g., near saddle points). If the system from Fig. 1 is stationary and has infinite time for transitioning, the transition channel via the metastable well centered at (0, 1.5) is preferred, since the barriers along it are lower and it does not matter that these transitions take a very long time due to being stuck in the metastable set.
However, in order to study transitions and tipping paths in different dynamical contexts (e.g., social systems or climate models), which are often characterized by time-dependent (e.g., seasonal) dynamics as well as transitions of interest within a finite-time window, the current theory of transition paths has to be extended.
In these applications, one might, for instance, ask: What are the possible transition channels from the current state to a desirable and sustainable state of our social or climate system within the next 30 years (Steffen et al. 2018; Otto et al. 2020)? By requiring the transitions to depart from A and arrive in B within a finite-time interval, the affinity of the system for taking the different transition channels is altered. This is visualized in the triple well dynamics, as shown in Fig. 1c, where now only the lower transition channel passing the high barrier is possible. Whenever the trajectory takes the channel through the upper metastable set (cf. black trajectory in subfigure (a)), it is stuck there for a long time and will not reach B anymore within the finite-time horizon.
Moreover, systems containing human agents are usually time-inhomogeneous and not equilibrated, while climate systems are often affected by seasonal forcing, raising questions such as: What are the likely spreading paths of a contagion in a time-evolving network (Pan and Saramäki 2011; Brockmann and Helbing 2013; Valdano et al. 2018)? What are bottlenecks in the transient dynamics toward the equilibrium state (Schonmann 1992; Hollander et al. 2000)? How does tipping occur under the joint effect of noise and parameter changes (Ashwin et al. 2012; Giorgini et al. 2019) or periodic forcing (Herrmann et al. 2005)?
In this paper, we generalize TPT to a broader class of dynamical scenarios. In particular, we focus on two generalizations which we consider as natural but not exclusive building blocks for these more general cases: (a) periodically forced infinite-time systems and (b) arbitrary time-inhomogeneous finite-time systems.
We start in Sect. 2 by formulating the general setting of TPT for Markov chains \((X_n)_{n\in {\mathbb {T}}}\) on a finite state space^{Footnote 1} by introducing time-dependent forward committor functions \(q^+_i(n)\), giving the probability within the time horizon \({\mathbb {T}}\) to next commit to B and not A conditional on being in \(X_n=i\), as well as time-dependent backward committor functions \(q^-_i(n)\), both of which are needed for computing the desired statistics of the transitions from A to B.
We then in more detail work out the following main cases.

(i)
Under the assumption of stationary, infinite-time dynamics, it is known (Weinan and Vanden-Eijnden 2006; Metzner et al. 2009) that the committor functions are time-independent, \(q^+(n) = q^+\), by stationarity and solve the following linear system
$$\begin{aligned} \left\{ \begin{array}{rcll} q_i^+ &=& \sum \limits _{j\in {\mathbb {S}}} P_{ij} \, q_j^+ & i \in (A\cup B)^c \\ q_i^+ &=& 0 & i \in A \\ q_i^+ &=& 1 & i \in B \end{array}\right. \end{aligned}$$ (1)
where \(P = (P_{ij})_{i,j\in {\mathbb {S}}}\) is the transition matrix. Similarly, all the transition statistics are time-independent, and by ergodicity, the statistics can also be found by averaging along one infinitely long equilibrium trajectory (Weinan and Vanden-Eijnden 2006; Metzner et al. 2009). In Sect. 3, we recall the theory (Weinan and Vanden-Eijnden 2006; Metzner et al. 2009) from a different point of view: Instead of defining all quantities in terms of trajectory-wise time averages, we prove by using the Markov property that they can be written in terms of the committors and the stationary distribution. Further, we extend the theory by Lemma 3.5, decomposing the committors into path probabilities.
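Restricted to the transition region, system (1) becomes a small linear solve. Below is a minimal sketch in Python, assuming a hypothetical 3-state chain with \(A=\{0\}\), \(B=\{2\}\) (the matrix is illustrative example data, not one of the paper's examples):

```python
import numpy as np

# Hypothetical 3-state chain: A = {0}, B = {2}, transition region C = {1}.
P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.4, 0.3],
              [0.1, 0.2, 0.7]])

def forward_committor(P, A, B):
    """Solve system (1): q_i = sum_j P_ij q_j on C, q = 0 on A, q = 1 on B."""
    n = P.shape[0]
    C = [i for i in range(n) if i not in A and i not in B]
    # Restricted to C, the system reads (I - P_CC) q_C = P_CB @ 1.
    P_CC = P[np.ix_(C, C)]
    rhs = P[np.ix_(C, sorted(B))].sum(axis=1)
    q = np.zeros(n)
    q[sorted(B)] = 1.0
    q[C] = np.linalg.solve(np.eye(len(C)) - P_CC, rhs)
    return q

q_plus = forward_committor(P, A={0}, B={2})  # here q_plus = [0, 0.5, 1]
```

For larger chains, the same restriction-to-C structure applies; only the sizes of \(P_{CC}\) and the right-hand side change.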

(ii)
In Sect. 4, we derive the committors and transition statistics for periodically varying dynamics with a period of length M that are equilibrated (i.e., the law of the chain is also periodic). It follows that the committors are periodically varying \(q^+(n) = q^+_m\) whenever \(n=m\) modulo M, and we show that the committors solve the following linear system with periodic boundary conditions in time \(q^+_0 = q^+_M\)
$$\begin{aligned} \left\{ \begin{array}{rcll} q_{m,i}^+ &=& \sum \limits _{j\in {\mathbb {S}}} P_{m,ij} \, q_{m+1,j}^+ & i \in (A\cup B)^c \\ q_{m,i}^+ &=& 0 & i \in A \\ q_{m,i}^+ &=& 1 & i \in B \end{array}\right. \end{aligned}$$ (2)
where \(P_m = (P_{m,ij})_{i,j\in {\mathbb {S}}}\) is the transition matrix at time \(n=m\) modulo M. This is consistent with the previous case when choosing a period of length \(M=1\).
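The periodic system (2) can be solved, for instance, by sweeping the backward recursion over full periods until the periodic fixed point is reached. A sketch under the assumption of two hypothetical transition matrices (M = 2, \(A=\{0\}\), \(B=\{2\}\)):

```python
import numpy as np

# Hypothetical 2-periodic dynamics on 3 states; A = {0}, B = {2}.
P0 = np.array([[0.7, 0.2, 0.1],
               [0.3, 0.4, 0.3],
               [0.1, 0.2, 0.7]])
P1 = np.array([[0.5, 0.4, 0.1],
               [0.2, 0.5, 0.3],
               [0.1, 0.1, 0.8]])

def periodic_forward_committors(P_list, A, B, tol=1e-13, max_sweeps=10_000):
    """Fixed-point iteration of recursion (2) with q_M^+ = q_0^+."""
    M = len(P_list)
    n = P_list[0].shape[0]
    A, B = sorted(A), sorted(B)
    q = [np.zeros(n) for _ in range(M)]
    for qm in q:
        qm[B] = 1.0
    for _ in range(max_sweeps):
        diff = 0.0
        for m in reversed(range(M)):
            new = P_list[m] @ q[(m + 1) % M]  # q_m^+ = P_m q_{m+1}^+ on C
            new[A] = 0.0                      # boundary condition on A
            new[B] = 1.0                      # boundary condition on B
            diff = max(diff, np.max(np.abs(new - q[m])))
            q[m] = new
        if diff < tol:
            break
    return q

q0, q1 = periodic_forward_committors([P0, P1], A={0}, B={2})
```

With both matrices equal, the iteration reproduces the stationary committor, consistent with the \(M=1\) reduction noted above.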

(iii)
In Sect. 5, we derive the committor equations and transition statistics for general time-inhomogeneous Markov chains \((X_n)_{n\in {\mathbb {T}}}\) on a finite-time interval \({\mathbb {T}}=\{0,\dots ,N-1\}\) defined by the transition probabilities \(P(n) = (P_{ij}(n))_{i,j\in {\mathbb {S}}}\) and an initial density \(\lambda (0)\). The forward committor \(q^+(n)\) for a finite-time Markov chain satisfies the following iterative system of equations:
$$\begin{aligned} \left\{ \begin{array}{rcll} q_i^+(n) &=& \sum \limits _{j \in {\mathbb {S}}} P_{ij}(n) \, q_j^+(n+1) & i \in (A\cup B)^c \\ q_i^+(n) &=& 0 & i \in A \\ q_i^+(n) &=& 1 & i \in B \end{array}\right. \end{aligned}$$ (3)
with final condition \(q^+_i(N-1) = \mathbb {1}_B(i)\). The transition statistics depend on the current time point in the time interval and can be related to an average over the ensemble of reactive trajectories. We also show consistency, i.e., that given a stationary process on a finite-time interval, the committors and statistics converge to their classical counterparts (i) in the infinite-time limit.
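System (3) is solved by a single backward pass from the final condition. A minimal sketch, again with hypothetical example data (a time-homogeneous 3-state chain, \(A=\{0\}\), \(B=\{2\}\)):

```python
import numpy as np

# Hypothetical homogeneous chain used for all N-1 time steps; A = {0}, B = {2}.
P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.4, 0.3],
              [0.1, 0.2, 0.7]])
N = 3
P_seq = [P] * (N - 1)                # P(0), ..., P(N-2)

def finite_time_forward_committor(P_seq, A, B):
    """Backward recursion (3) with final condition q_i^+(N-1) = 1_B(i)."""
    N = len(P_seq) + 1
    n = P_seq[0].shape[0]
    A, B = sorted(A), sorted(B)
    q = np.zeros((N, n))
    q[N - 1, B] = 1.0                # final condition
    for t in range(N - 2, -1, -1):   # iterate backwards in time
        q[t] = P_seq[t] @ q[t + 1]
        q[t, A] = 0.0
        q[t, B] = 1.0
    return q

q = finite_time_forward_committor(P_seq, A={0}, B={2})  # q[0] = [0, 0.42, 1]
```

For this chain, letting the horizon N grow makes \(q^+_1(0)\) increase toward its stationary value, illustrating the consistency statement above.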
We note that in the stationary regime, committor functions have been used for finding basins of attraction of stochastic dynamics (Koltai 2011b; Koltai and Volf 2014; Lindner and Hellmann 2019), as reaction coordinates (Lu and Vanden-Eijnden 2014), as basis functions in the core-set approach (Schütte et al. 2011; Sarich 2011; Schütte and Sarich 2013), and for studying modules and flows in networks (Djurdjevac et al. 2011; Cameron and Vanden-Eijnden 2014). We expect our results to enable similar uses for the time-dependent regime.
The theoretical results from this paper are accompanied by numerical studies on two toy examples: a network of 5 nodes with time-varying transition probabilities, and a discrete model of the overdamped Langevin dynamics in a triple well potential with time-dependent forcing (as in Fig. 1). In these examples, we will particularly show to what extent time-dependent dynamics or finite-time restrictions affect the transition statistics in contrast to those of stationary, infinite-time dynamics. The TPT-related objects derived here allow for a quantitative assessment of the dominant statistical behavior in complicated dynamical regimes:

(i)
By adding a periodic forcing to the stationary dynamics, the reaction channels are perturbed and new transition paths, that were not possible before, can appear (see Examples 2 and 7).

(ii)
By restricting the stationary dynamics with transition matrix P to a finite-time window (cf. Examples 3 and 8), only transitions within this window are allowed and the average rate of transitions is much lower than in the infinite-time situation. By additionally applying a forcing (cf. Example 4), the system is not equilibrated anymore and we can get a higher average rate of transitions than without forcing, although we choose the time-dependent transition matrix P(n) such that its time average equals the transition matrix P of the stationary case. A similar approach is used in rare-event simulation, where the system is pushed by an optimal nonequilibrium forcing under which the rare events become more likely (Hartmann and Schütte 2012; Hartmann et al. 2014).

(iii)
The finite-time case can also be employed for studying qualitative changes in the transition dynamics when parameters are perturbed. In Example 9, we show that by increasing the length N of the finite-time interval, the transition dynamics change qualitatively, even though the dynamics are stationary. Thus, TPT can be used as a quantitative tool describing bifurcations in non-stationary, non-deterministic systems.
The code used for the examples is available on Github at www.github.com/LuzieH/pytpt.
Preliminaries and General Setup
The objective of transition path theory (TPT) is to understand the mechanisms by which noise-induced transitions of a Markov chain \((X_n)_{n\in {\mathbb {T}}}\) from one subset of the state space \(A \subset {\mathbb {S}}\) to another subset \(B \subset {\mathbb {S}}\) take place.^{Footnote 2} A and B are chosen as two nonempty, disjoint subsets of the finite state space \({\mathbb {S}}\), such that the transition region \(C:= (A \cup B)^c\) is also nonempty. Since historically in TPT one thinks of A as the reactant states of a system, B as the product states, and the transitions from A to B as reaction events, we call the pieces of a trajectory that connect A to B reactive trajectories. Each reactive trajectory contains the successive states in C visited during a transition that starts in A and ends in B, see Fig. 2, and we are interested in gathering dynamical information about the ensemble of reactive trajectories.
In this paper, we are interested not only in Markov chains living on the infinite-time frame \({\mathbb {T}}= {\mathbb {Z}}\) but also in those on finite-time intervals \({\mathbb {T}}=\{0,1,\dots , N-1\}\), that is, where the transitions from A to B have to take place during the finite-time frame \({\mathbb {T}}\). Moreover, we also consider nonstationary dynamics. This either means

that the system is in the transient phase toward equilibrium but otherwise has a timeindependent transition matrix, or

that the dynamical rules (the transition matrices) are varying in time, as well as

that we are dealing with time-varying sets A and B (see the comment in Sect. 4.2 for the case of periodic dynamics and Remark 5.4 for finite-time dynamics).
In this section, we will define the committor functions and show how they can be used to derive important statistics of the ensemble of reactive trajectories, such as the frequency of transitions, the most important transition channels, and where the process on the way to B gets stuck or spends most of its time. In doing so, we will keep everything general enough for time-dependent and finite-time dynamics and for the moment only assume the Markovianity of the chain \((X_n)_{n\in {\mathbb {T}}}\) and that the distribution \({\mathbb {P}}(X_n=i)\) and the time-dependent transition rule \(P(n) = ({\mathbb {P}}(X_{n+1}=j \mid X_n=i))_{i,j\in {\mathbb {S}}}\) are given for all \(n\in {\mathbb {T}}\).
Later, the results from this section will be applied to the special cases of (i) infinite-time, stationary dynamics (Sect. 3), (ii) infinite-time, periodic dynamics (Sect. 4), and (iii) finite-time, time-dependent systems (Sect. 5), where we will also prove the existence of committors and give the linear systems of equations they solve. These three are just some selected special cases; other interesting cases are, e.g., systems with stochastic regime-switching (see Remark A.1).
Committor Equations
All of the transition statistics and characteristics can be computed from the committor probabilities; therefore, we will start by defining them. The forward committor is the probability that the Markov chain, currently in some state i, will next^{Footnote 3} go to set B and not to A. The backward committor is the same for the time-reversed Markov chain, i.e., the probability that the time-reversed process will next hit A and not B, or equivalently, the probability that the chain last came from A and not B.
More precisely, the forward committor \(q^+(n)=(q^+_i(n))_{i\in {\mathbb {S}}}\) gives the probability that the process, being in state \(i\in {\mathbb {S}}\) at time \(n\in {\mathbb {T}}\), next reaches B and not A within \({\mathbb {T}}\). We write
$$\begin{aligned} q^+_i(n) = {\mathbb {P}}(\tau ^+_B(n) < \tau ^+_A(n) \mid X_n = i), \end{aligned}$$ (4)
where the first entrance time of a set \(S\subset {\mathbb {S}}\) after or at time \(n\in {\mathbb {T}}\) is given by
$$\begin{aligned} \tau ^+_S(n) := \inf \{k \in {\mathbb {T}},\, k \ge n :\, X_k \in S\}, \end{aligned}$$
with the convention \(\inf \emptyset = \infty \).
The backward committor \(q^-(n)=(q^-_i(n))_{i\in {\mathbb {S}}}\) gives the probability that the trajectory arriving at time \(n\in {\mathbb {T}}\) in state \(i\in {\mathbb {S}}\) last came from A and not B within \({\mathbb {T}}\),
$$\begin{aligned} q^-_i(n) = {\mathbb {P}}(\tau ^-_A(n) > \tau ^-_B(n) \mid X_n = i), \end{aligned}$$ (5)
where the last exit time of a set S before or at time n is given by
$$\begin{aligned} \tau ^-_S(n) := \sup \{k \in {\mathbb {T}},\, k \le n :\, X_k \in S\}, \end{aligned}$$
with the convention \(\sup \emptyset = -\infty \).
Note that the first entrance and last exit times are stopping times with respect to the forward and time-reversed process, respectively (Ribera Borrell 2019, Lemma 3.1.1). The time-reversed process will be introduced in the following sections for infinite-time and finite-time Markov chains.
The forward committor, as it is defined, only considers trajectory pieces arriving at B within the time horizon \({\mathbb {T}}\); similarly, the backward committor only considers excursions that left A within \({\mathbb {T}}\).
Transition Statistics
We will now define statistical objects that characterize the ensemble of reactive trajectories from subset A to B and see that they can be computed using the committor probabilities and the Markovianity assumption.
The first two objects, the distribution of reactive trajectories \(\mu ^{AB}(n)\) and its normalized version \({\hat{\mu }}^{AB}(n)\), tell us where the reactive trajectories are most likely to be found, i.e., where the transitioning trajectories spend most of their time.
Definition 2.1
The distribution of reactive trajectories \(\mu ^{AB}(n) =(\mu ^{AB}_i(n))_{i\in {\mathbb {S}}}\) for \(n\in {\mathbb {T}}\) gives the joint probability that the Markov chain is in state i at time n while transitioning from A to B:
$$\begin{aligned} \mu ^{AB}_i(n) := {\mathbb {P}}(X_n = i,\, \tau ^-_A(n)>\tau ^-_B(n),\, \tau ^+_B(n) < \tau ^+_A(n)). \end{aligned}$$
Note that \(\mu ^{AB}_i(n)=0\) for \(i \notin C\), i.e., we only get information about the density of transitions passing through C. Direct transitions from A to B are neglected, and in general assumed not to exist.
Theorem 2.2
For a general Markov chain \((X_n)_{n\in {\mathbb {T}}}\) with committors \(q^+(n)\), \(q^-(n)\), the distribution of reactive trajectories can be expressed as
$$\begin{aligned} \mu ^{AB}_i(n) = q^-_i(n)\, {\mathbb {P}}(X_n=i)\, q^+_i(n). \end{aligned}$$
Proof
We can compute
$$\begin{aligned} \mu ^{AB}_i(n)&= {\mathbb {P}}(X_n=i)\, {\mathbb {P}}(\tau _A^-(n)>\tau _B^-(n),\, \tau _B^+(n)<\tau _A^+(n) \mid X_n=i) \\&= {\mathbb {P}}(X_n=i)\, q^-_i(n)\, q^+_i(n) \end{aligned}$$
by conditioning on \(\{X_n=i\}\), and by using the independence of the two events \(\{\tau _A^-(n) >\tau _B^-(n) \}\), \(\{\tau _B^+(n) < \tau _A^+(n)\}\) given \(\{X_n=i\}\), which follows from the Markov property.^{Footnote 4} \(\square \)
The distribution \(\mu ^{AB}(n)\) is not normalized but can easily be normalized by dividing \(\mu ^{AB}(n)\) by the probability to be on a transition at time n,
$$\begin{aligned} Z^{AB}(n) := {\mathbb {P}}(\tau _A^-(n)>\tau _B^-(n),\, \tau _B^+(n) < \tau _A^+(n)) = \sum _{i\in {\mathbb {S}}} \mu ^{AB}_i(n), \end{aligned}$$
to give a probability distribution on \({\mathbb {S}}\):
Definition 2.3
Whenever \({\mathbb {P}}(\tau _A^-(n)>\tau _B^-(n) ,\, \tau _B^+(n) < \tau _A^+(n))>0\) for \(n\in {\mathbb {T}}\), we can define the normalized distribution of reactive trajectories at time \(n\in {\mathbb {T}}\) by
$$\begin{aligned} {\hat{\mu }}^{AB}_i(n) := \frac{\mu ^{AB}_i(n)}{{\mathbb {P}}(\tau _A^-(n)>\tau _B^-(n) ,\, \tau _B^+(n) < \tau _A^+(n))}, \end{aligned}$$
giving the density of states in which trajectories transitioning from A to B spend their time.
The next object tells us about the average number of jumps from i to j during one time step (i.e., the probability flux), while the trajectory is on its way from A to B:
Definition 2.4
The current of reactive trajectories \(f^{AB}(n) = (f^{AB}_{ij}(n))_{i,j\in {\mathbb {S}}}\) at time \(n\in {\mathbb {T}}\) gives the average flux of trajectories going through i at time \(n\in {\mathbb {T}}\) and j at time \(n+1\in {\mathbb {T}}\) consecutively while on their way from A to B:
$$\begin{aligned} f^{AB}_{ij}(n) := {\mathbb {P}}(X_n = i,\, X_{n+1} = j,\, \tau ^-_A(n)>\tau ^-_B(n),\, \tau ^+_B(n+1) < \tau ^+_A(n+1)). \end{aligned}$$
Theorem 2.5
The current of reactive trajectories for a Markov chain \((X_n)_{n\in {\mathbb {T}}}\) with transition probabilities \(P(n)\) and committors \(q^+(n)\), \(q^-(n)\) is given by
$$\begin{aligned} f^{AB}_{ij}(n) = q^-_i(n)\, {\mathbb {P}}(X_n=i)\, P_{ij}(n)\, q^+_j(n+1). \end{aligned}$$
Proof
The reactive current can be computed as
$$\begin{aligned} f^{AB}_{ij}(n)&= {\mathbb {P}}(X_n=i)\, {\mathbb {P}}(\tau _A^-(n)>\tau _B^-(n) \mid X_n=i)\, {\mathbb {P}}(X_{n+1}=j,\, \tau _B^+(n+1)<\tau _A^+(n+1) \mid X_n=i) \\&= q^-_i(n)\, {\mathbb {P}}(X_n=i)\, P_{ij}(n)\, q^+_j(n+1) \end{aligned}$$
by first conditioning on \(\{X_n=i\}\), then by independence of \(\{X_{n+1}=j, \tau _B^+(n+1) < \tau _A^+(n+1)\}\) and \(\{ \tau _A^-(n) >\tau _B^-(n)\}\) given \(\{X_n=i\}\), by conditioning on \(\{X_{n+1}=j\}\), and last by the Markov property. \(\square \)
Let us note that the reactive current also counts the transitions going directly from \(i\in A\) to \(j\in B\); these are not accounted for in the reactive distribution, which only accounts for transitions passing through the region \((A\cup B)^c\).
In order to eliminate information about detours of reactive trajectories, we define:
Definition 2.6
The effective current of reactive trajectories \(f^+(n) = (f^+_{ij}(n))_{i,j\in {\mathbb {S}}}\) at time \(n\in {\mathbb {T}}\) gives the net amount of reactive current going through i at time \(n\in {\mathbb {T}}\) and j at time \(n+1\in {\mathbb {T}}\) consecutively,
$$\begin{aligned} f^+_{ij}(n) := \max \{f^{AB}_{ij}(n) - f^{AB}_{ji}(n),\, 0\}. \end{aligned}$$
Ultimately, the effective current of reactive trajectories can be used to find the dominant transition channels in state space between A and B (see, e.g., Metzner et al. 2009).
The current of reactive trajectories only goes out of A, not into A; moreover, the current of reactive trajectories only points into B, not out of B. Therefore, A can be thought of as a source of reactive trajectories, whereas B acts like their sink. This leads us to our next characteristic of reactive trajectories: By summing the current of reactive trajectories over A, we get the discrete rate of reactive trajectories flowing out of A, and by summing the current over B, we obtain the rate of inflow into B:
Definition 2.7
For \(n, n+1 \in {\mathbb {T}}\), the discrete rate of transitions leaving A at time n is given by
$$\begin{aligned} k^{A\rightarrow }(n) := {\mathbb {P}}(X_n \in A,\, \tau ^+_B(n+1) < \tau ^+_A(n+1)), \end{aligned}$$
i.e., the probability of a reactive trajectory leaving A at time n. When \(n-1, n \in {\mathbb {T}}\), the discrete rate of transitions entering B at time n is given by
$$\begin{aligned} k^{\rightarrow B}(n) := {\mathbb {P}}(X_n \in B,\, \tau ^-_A(n-1) > \tau ^-_B(n-1)), \end{aligned}$$
i.e., the probability of a reactive trajectory entering B at time n.
Theorem 2.8
For a Markov chain \((X_n)_{n\in {\mathbb {T}}}\) with current of reactive trajectories \(f^{AB}(n)\), we find the discrete rates to be
$$\begin{aligned} k^{A\rightarrow }(n) = \sum _{i\in A,\, j \in {\mathbb {S}}} f^{AB}_{ij}(n), \qquad k^{\rightarrow B}(n) = \sum _{i\in {\mathbb {S}},\, j \in B} f^{AB}_{ij}(n-1). \end{aligned}$$
Proof
We can compute by using the law of total probability
\(\square \)
TPT for Stationary, Infinite-Time Markov Chains
TPT was originally designed for stationary, infinite-time Markov processes (Weinan and Vanden-Eijnden 2006; Metzner et al. 2009) that are often used as models for molecular systems (Noé et al. 2009). Here, we will recall that theory by using the results from the previous section and equip it with some new results (e.g., Lemma 3.5) that will be needed later.
Setting
We begin with describing the processes of interest in this section.
Assumption 3.1
We consider a Markov chain \((X_n)_{n\in {\mathbb {Z}}}\) taking values in a discrete and finite state space \({\mathbb {S}}\), where the time-discrete jumps between states \(i\in {\mathbb {S}}\) and \(j\in {\mathbb {S}}\) occur with probability
$$\begin{aligned} P_{ij} = {\mathbb {P}}(X_{n+1} = j \mid X_n = i), \end{aligned}$$
stored in the row-stochastic transition matrix \(P=(P_{ij})_{i,j \in {\mathbb {S}}}\). We assume that the process is irreducible and ergodic with respect to the unique, strictly positive invariant distribution \(\pi = (\pi _i)_{i \in {\mathbb {S}}}\) (also called stationary distribution interchangeably) solving \(\pi ^\top = \pi ^\top P\).
The time-reversed process \((X^-_n)_{n \in {\mathbb {Z}}}\), \(X^-_n := X_{-n}\), traverses the chain backwards in time. It is also a Markov chain (Ribera Borrell 2019, Thm 2.1.19) and stationary with respect to the same invariant distribution. The transition probabilities of the time-reversed process \(P^-= (P^-_{ij})_{i,j \in {\mathbb {S}}}\) with entries
$$\begin{aligned} P^-_{ij} = {\mathbb {P}}(X^-_{n+1} = j \mid X^-_n = i) \end{aligned}$$
can be found from expressing the flux between two states in two ways,
$$\begin{aligned} \pi _i \, P^-_{ij} = \pi _j \, P_{ji}, \quad \text {i.e.,} \quad P^-_{ij} = \frac{\pi _j \, P_{ji}}{\pi _i}. \end{aligned}$$
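The flux identity \(\pi _i P^-_{ij} = \pi _j P_{ji}\), i.e., \(P^-_{ij} = \pi _j P_{ji}/\pi _i\), is easy to evaluate numerically. A minimal sketch with a hypothetical (non-reversible) 3-state chain:

```python
import numpy as np

# Hypothetical irreducible, non-reversible 3-state chain.
P = np.array([[0.8, 0.2, 0.0],
              [0.3, 0.4, 0.3],
              [0.0, 0.5, 0.5]])

def stationary_distribution(P):
    """Left eigenvector of P for eigenvalue 1, normalized to sum 1."""
    evals, evecs = np.linalg.eig(P.T)
    pi = np.real(evecs[:, np.argmin(np.abs(evals - 1.0))])
    return pi / pi.sum()

def reversed_transition_matrix(P, pi):
    """P^-_ij = pi_j P_ji / pi_i, from the flux identity pi_i P^-_ij = pi_j P_ji."""
    return (pi[None, :] * P.T) / pi[:, None]

pi = stationary_distribution(P)
P_rev = reversed_transition_matrix(P, pi)
# P^- is again row-stochastic and has the same stationary distribution pi.
```

For a reversible chain, this construction returns \(P^- = P\), matching Remark 3.3 below.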
Committor Probabilities
Due to the stationarity of the chain (Assumption 3.1), the law of the chain is the same for all times, and we simply have that the committors (4) and (5) are time-independent, \(q_i^+(n)= q_i^+\) and \(q_i^-(n)=q_i^-\) for all n.
The forward and backward committors can be found by solving a linear system of size \(|C|\) with appropriate boundary conditions (Norris 1998, Chapter 1.3).
Theorem 3.2
The forward committor for a Markov chain according to Assumption 3.1 with transition probabilities \(P=(P_{ij})_{i,j\in {\mathbb {S}}}\) satisfies the following linear system
$$\begin{aligned} \left\{ \begin{array}{rcll} q_i^+ &=& \sum \limits _{j\in {\mathbb {S}}} P_{ij} \, q_j^+ & i \in C \\ q_i^+ &=& 0 & i \in A \\ q_i^+ &=& 1 & i \in B \end{array}\right. \end{aligned}$$ (8)
Analogously, for the backward committor we have to solve the following linear system involving the time-reversed transition probabilities
$$\begin{aligned} \left\{ \begin{array}{rcll} q_i^- &=& \sum \limits _{j\in {\mathbb {S}}} P^-_{ij} \, q_j^- & i \in C \\ q_i^- &=& 1 & i \in A \\ q_i^- &=& 0 & i \in B \end{array}\right. \end{aligned}$$ (9)
Proof
From the definition of the committors (4), it immediately follows that \( q_i^+ =0 \) for \(i \in A\), since we always have \(\tau _A^+(n)=n\) while \(\tau _B^+(n)>n\). Analogously, we have \(q_i^+ = 1\) for \(i \in B\) since in that case \(\tau _A^+(n)>n\) and \(\tau _B^+(n)=n\). For the committor at node i in the transition region C, we can sum the forward committor at all the other states j, weighted with the probability to transition from i to j:
$$\begin{aligned} q_i^+ = {\mathbb {P}}(\tau ^+_B(n)<\tau ^+_A(n) \mid X_n = i) = \sum _{j\in {\mathbb {S}}} P_{ij}\, {\mathbb {P}}(\tau ^+_B(n+1)<\tau ^+_A(n+1) \mid X_{n+1} = j) = \sum _{j\in {\mathbb {S}}} P_{ij}\, q_j^+, \end{aligned}$$
first using the law of total probability, then conditioning on \(\{X_{n+1}=j\}\), using that at time n the chain is in \(i\in C\) and thus \(\tau _A^+(n),\tau _B^+(n)\ge n+1\), and last using the Markov property.
For the backward committor equations, we can proceed in a similar way, by additionally using the timereversed process. \(\square \)
Remark 3.3
If the Markov chain in addition is reversible, i.e., if \(P_{ij} \pi _i = P_{ji} \pi _j\) (equivalently, \(P^-_{ij}=P_{ij}\)) holds, then it follows from Theorem 3.2 that the forward and backward committors are related by \(q_i^+=1-q_i^-\).
The following lemma provides us with the necessary condition such that existence and uniqueness of the committors is guaranteed. For a proof, see Ribera Borrell (2019, Lemma 3.2.4) or Norris (1998, Chapter 4.2).
Lemma 3.4
If \(P\) is irreducible, then the two problems (8) and (9) each have a unique solution.
There is a second characterization of the committors in the transition region using path probabilities, as summarized in the following lemma. The proof can be found in Appendix A.2.1.
Lemma 3.5
For any \(i \in C\), the forward committor can also be specified as the probability of all possible paths starting from node i that reach the set B before A,
$$\begin{aligned} q^+_i = \sum _{n\ge 1}\ \sum _{j_1,\dots ,j_{n-1} \in C}\ \sum _{j_n \in B} P_{i j_1} P_{j_1 j_2} \cdots P_{j_{n-1} j_n}. \end{aligned}$$
Similarly, for any \(i \in C\), the backward committor can also be understood as the sum of path probabilities of all possible paths arriving at node i that last came from A and not B,
$$\begin{aligned} q^-_i = \sum _{n\ge 1}\ \sum _{j_1,\dots ,j_{n-1} \in C}\ \sum _{j_n \in A} P^-_{i j_1} P^-_{j_1 j_2} \cdots P^-_{j_{n-1} j_n}. \end{aligned}$$
Transition Statistics
The committor, the distribution, and the transition probabilities are time-independent; thus, the statistics from Sect. 2.2 are time-independent. We write the distribution of reactive trajectories (Theorem 2.2) as \(\mu ^{AB}=(\mu ^{AB}_i)_{i\in {\mathbb {S}}}\), where
$$\begin{aligned} \mu ^{AB}_i = q^-_i \, \pi _i \, q^+_i, \end{aligned}$$
and the normalized distribution as \({\hat{\mu }}^{AB}=({\hat{\mu }}^{AB}_i)_{i\in {\mathbb {S}}}\). The current of reactive trajectories (Theorem 2.5) \(f^{AB}=(f^{AB}_{ij})_{i,j\in {\mathbb {S}}}\) is given by
$$\begin{aligned} f^{AB}_{ij} = q^-_i \, \pi _i \, P_{ij} \, q^+_j, \end{aligned}$$
and the effective reactive current is denoted \(f^+= (f^+_{ij})_{i,j\in {\mathbb {S}}}\).
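These stationary statistics can be evaluated directly from \(\mu ^{AB}_i = q^-_i \pi _i q^+_i\) and \(f^{AB}_{ij} = q^-_i \pi _i P_{ij} q^+_j\) (Theorems 2.2 and 2.5 with the time dependence dropped). A minimal sketch; the 3-state chain, its stationary distribution, and its committors are hypothetical example data, and the chain is reversible so that \(q^- = 1 - q^+\) (Remark 3.3):

```python
import numpy as np

# Hypothetical reversible 3-state chain, A = {0}, B = {2}.
P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.4, 0.3],
              [0.1, 0.2, 0.7]])
pi = np.array([0.375, 0.25, 0.375])       # solves pi^T = pi^T P
q_plus = np.array([0.0, 0.5, 1.0])        # solves the committor system (8)
q_minus = 1.0 - q_plus                    # reversibility (Remark 3.3)

mu = q_minus * pi * q_plus                          # reactive distribution
f = (q_minus * pi)[:, None] * P * q_plus[None, :]   # reactive current
f_eff = np.maximum(f - f.T, 0.0)                    # effective current
k_out_A = f[0, :].sum()                   # rate of leaving A
k_in_B = f[:, 2].sum()                    # rate of entering B
```

The two rates agree, as Theorem 3.6 below guarantees, and the current is conserved at the interior node.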
Theorem 3.6
For a stationary Markov chain \((X_n)_{n\in {\mathbb {Z}}}\), the reactive current out of a node \(i\in C\) equals the current flowing into the node \(i\in C\), i.e.,
$$\begin{aligned} \sum _{j\in {\mathbb {S}}} f^{AB}_{ij} = \sum _{j\in {\mathbb {S}}} f^{AB}_{ji} \quad \text {for all } i \in C. \end{aligned}$$
Further, the reactive current flowing out of A into \({\mathbb {S}}\) (equivalently into \(C\cup B\)) equals the flow of reactive trajectories from \({\mathbb {S}}\) (equivalently from \(C \cup A\)) into B,
$$\begin{aligned} \sum _{i\in A,\, j\in {\mathbb {S}}} f^{AB}_{ij} = \sum _{i\in {\mathbb {S}},\, j\in B} f^{AB}_{ij}. \end{aligned}$$
For the proof, see Appendix A.2.2.
Remark 3.7
Due to these conservation laws, there is a close relation of the committors and the effective current (when the chain is reversible) to the voltage and the electric current in an electric resistor network (Doyle 1984; Norris 1998) with a voltage applied between A and B; see Metzner (2008), Metzner et al. (2009) for work in this direction. Also, in Metzner (2008), Metzner et al. (2009), a decomposition algorithm of the effective current into the paths from A to B carrying a substantial portion of the transition rate is proposed, yielding the dominant transition channels between A and B. The effective current of reactive trajectories is loop-erased; therefore, in Banisch et al. (2015) a decomposition of the current of reactive trajectories into cyclic structures and non-cyclic parts is suggested.
Further, since \(\smash { \sum _{i\in A, j\in {\mathbb {S}}} f^{AB}_{ij} = \sum _{ i\in {\mathbb {S}}, j\in B} f^{AB}_{ij} }\) by Theorem 3.6, the discrete rate of leaving A equals the discrete rate of entering into B; thus, we denote \(k^{AB}:= k^{A\rightarrow }= k^{\rightarrow B}\). This tells us the probability of a realized transition per time step, i.e., either the probability to leave A and be on the way to B next, or the probability to reach B when coming from A. That \(k^{AB}\) indeed has the physical interpretation of a rate becomes clear from the characterization in Theorem 3.8.
Interpretation of the Statistics as Time Averages Along Trajectories
The statistics from the previous section give us dynamical information about the ensemble of reactive trajectories. Due to ergodicity, the Markov chain will visit all states infinitely many times, and the ensemble space average of a quantity equals the time average of this quantity along a single infinitely long trajectory (Birkhoff's ergodic theorem). Therefore, the reactive trajectory statistics from the previous section can also be found by considering the reactive pieces of a single infinitely long trajectory and by averaging over them.
Theorem 3.8
For a Markov chain \((X_n)_{n\in {\mathbb {Z}}}\) satisfying Assumption 3.1, we have the following \({\mathbb {P}}\)-almost sure convergence results as \(N\rightarrow \infty \):
$$\begin{aligned} \frac{1}{2N+1} \sum _{n=-N}^{N} \mathbb {1}_{\{X_n=i\}}\, \mathbb {1}_{\{\tau _A^-(n)>\tau _B^-(n),\ \tau _B^+(n)<\tau _A^+(n)\}}&\rightarrow \mu ^{AB}_i, \\ \frac{1}{2N+1} \sum _{n=-N}^{N} \mathbb {1}_{\{X_n=i,\ X_{n+1}=j\}}\, \mathbb {1}_{\{\tau _A^-(n)>\tau _B^-(n),\ \tau _B^+(n+1)<\tau _A^+(n+1)\}}&\rightarrow f^{AB}_{ij}, \\ \frac{1}{2N+1} \sum _{n=-N}^{N} \mathbb {1}_{\{X_n\in A\}}\, \mathbb {1}_{\{\tau _B^+(n+1)<\tau _A^+(n+1)\}}&\rightarrow k^{AB}, \end{aligned}$$
where \(i,j \in {\mathbb {S}}\) and \(\mathbb {1}_A(x)\) is the indicator function on the set A.
The proof of Theorem 3.8 can be found in Ribera Borrell (2019, Thm 3.3.2, Thm 3.3.7, Thm 3.3.11) and relies on Birkhoff’s ergodic theorem (Walters 2000) for the canonical representation of the process as a Markov shift.
The theorem not only offers a data-driven approach to approximate the transition statistics by averaging along a given, sufficiently long trajectory sample but also gives interpretability to the statistics. While \(\mu ^{AB}\) as the (not normalized) trajectory-wise distribution of reactive trajectories and \( f^{AB}\) as the trajectory-wise flux of reactive trajectories are still straightforward to understand, we can also give meaning to the rate \(k^{AB}\) and to \(Z^{AB}=\sum _{i \in C} \mu ^{AB}_i\). We can think of \(k^{AB}\) as the total number of reactive transitions taking place within the time interval \(\{-N,\dots ,N\}\) divided by the number of time steps \(2N+1\) in the limit of \(N\rightarrow \infty \). Similarly, we can give meaning to \(Z^{AB}\) as the total time spent transitioning during \(\{-N,\dots ,N\}\) divided by \(2N+1\) in the limit of \(N\rightarrow \infty \). Last, we note that the ratio of \( Z^{AB}\) and \(k^{AB}\) provides us with a further characteristic transition quantity (Vanden-Eijnden 2006),
$$\begin{aligned} t^{AB} := \frac{Z^{AB}}{k^{AB}}, \end{aligned}$$
telling us the total time spent transitioning divided by the total number of transitions, i.e., the expected length of a transition from A to B.
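The trajectory-average characterization can be checked by direct simulation. The sketch below estimates \(k^{AB}\) and \(Z^{AB}\) from one long sample path of a hypothetical reversible 3-state chain (\(A=\{0\}\), \(B=\{2\}\)), for which the committor-based reference values are \(k^{AB} = 0.075\) and \(Z^{AB} = 0.0625\), and forms the duration estimate \(t^{AB} = Z^{AB}/k^{AB}\); the resulting numbers carry Monte Carlo error:

```python
import numpy as np

# Hypothetical 3-state chain, A = {0}, B = {2}, C = {1}.
P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.4, 0.3],
              [0.1, 0.2, 0.7]])
rng = np.random.default_rng(0)

n_steps = 200_000
x = 0                      # start in A
came_from_A = True         # did the chain last visit A (rather than B)?
clock = 0                  # time spent in C since last leaving A or B
transitions = 0            # completed A -> B transitions
reactive_time = 0          # total time spent in C while reactive

for _ in range(n_steps):
    x = rng.choice(3, p=P[x])
    if x == 2:                       # entered B
        if came_from_A:              # this excursion was reactive
            transitions += 1
            reactive_time += clock
        came_from_A, clock = False, 0
    elif x == 0:                     # entered A
        came_from_A, clock = True, 0
    else:                            # in the transition region C
        clock += 1

k_hat = transitions / n_steps        # estimates k^AB
Z_hat = reactive_time / n_steps      # estimates Z^AB
t_hat = Z_hat / k_hat                # estimates t^AB = Z^AB / k^AB
```

This mirrors the interpretation above: a count of completed transitions per time step, and the fraction of time spent in C while reactive.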
Numerical Example 1: Infinite-Time, Stationary Dynamics on a 5-State Network
We consider a Markov chain on a network of five states \({\mathbb {S}}= \{0,1,2,3,4\}\), of which 0, 2, 4 have a high probability to remain in the same state. The transition matrix is given by the row-stochastic matrix \(P\), see Fig. 3. We are interested in transitions from the subset \(A = \{0\}\) to the subset \(B=\{4\}\). What is the most likely route that the transitions take?
The numerically computed committors and transition statistics are shown in Fig. 3, and the discrete rate of transitions is \(k^{AB}= 0.018\), i.e., on average every 56 time steps a transition from A to B is completed. We see that the effective current is strongest along the path \(A \rightarrow 1 \rightarrow B\), i.e., most transitions effectively happen via this path, but a share of 33.3% of the transitions happens via the route \(A \rightarrow 3 \rightarrow B\). Also, we can note that the density of reactive trajectories has a very high value in state 2, indicating that many reactive trajectories pass this state or stay there for a long time. Thus, reactive trajectories on the effective path \(A \rightarrow 1 \rightarrow B\) will likely make a detour to 2, which is not visible from the effective current since it does not tell us about detours.
TPT for Periodically Driven, Infinite-Time Markov Chains
Many real-world systems exhibit periodicity, for example, any system subject to seasonal driving, or physical systems with periodic external stimuli.
For studying transitions in these systems, we extend TPT to Markov chains with periodically varying transition probabilities that are equilibrated to the forcing and cycle through the same distributions each period. If the period is only one time step long, this case reduces to the previous case of stationary, infinite-time dynamics.
We start by laying out the exact setting of the process that we consider, before turning to the computation of committors and transition statistics for periodically forced dynamics. As we will see, by writing the committor equations on a time-augmented state space, we can also find committors for systems with stochastic switching between different dynamical regimes.
Setting
Assumption 4.1
Consider a Markov chain \((X_n)_{n\in {\mathbb {Z}}}\) on a finite and discrete state space \({\mathbb {S}}\) with transition probabilities \(P_{ij} (n) = {\mathbb {P}}(X_{n+1}=j \mid X_n=i) \) that are periodically varying in time with period length \(M\in {\mathbb {N}}\), i.e., the transition matrices fulfill
Therefore, the transition matrices at the times within one period \({\mathbb {M}}:= \{0,\dots ,M-1\}\) are sufficient to describe all the dynamics, and we denote them by \(P_m := P(n)\) for time \(n\in {\mathbb {Z}}\) congruent to \(m\in {\mathbb {M}}\) modulo M.
The product of transition matrices over one \(M\)-period starting at a time equivalent to \(m\in {\mathbb {M}}\) (modulo M) is denoted by \({\bar{P}}_m := P_m P_{m+1} \cdots P_{m+M-1}\), which is again a transition matrix, pushing the Markov chain M time instances forward in time starting from time \(\equiv m \ (\mathrm {mod}\ M)\), see Fig. 4. The chain described by \({\bar{P}}_m\) is no longer time-dependent, and it resolves the state of the original system only every M time instances with the time-independent transition matrix \({\bar{P}}_m\).
Proposition 4.2
If \({\bar{P}}_0\) is irreducible and assuming the setting 4.1 as described above, then for all \(m\in {\mathbb {M}}\), there exists an invariant distribution \(\pi _m\) of the transition matrix \({\bar{P}}_m\) such that \( \pi _m^\top = \pi _m^\top {\bar{P}}_m \). Further, \(\pi _0\) is unique and \(\pi _{0,i}>0\) for all \(i\in {\mathbb {S}}\). If we in addition require \( \pi _{m+1}^\top = \pi _m^\top P_m\) for all m, then the entire family \((\pi _m)_{m=0,\ldots ,M-1}\) is unique.^{Footnote 5}
Proof
Since \({\bar{P}}_0\) is irreducible and the state space is finite, the Markov chain induced by \({\bar{P}}_0\) has a unique and positive invariant density \(\pi _0=(\pi _{0,i})_{i\in {\mathbb {S}}}\) such that \(\pi _0^\top {\bar{P}}_0 = \pi _0^\top \). It follows that also \({\bar{P}}_1\) has an invariant density, namely \(\pi _1^\top :=\pi _0^\top P_0 \), since it fulfills
In the same way, \({{\bar{P}}_m,\ m=2,\dots , M-1,}\) have an invariant distribution \(\pi _m^\top :=\pi _{m-1}^\top P_{m-1}\), such that \({\pi _m^\top {\bar{P}}_m = \pi _m^\top }\). \(\square \)
Thus, the densities \(\pi _1,\dots ,\pi _{M-1}\) are not necessarily unique, unless we require irreducibility. Doing so, there is a unique periodic family of distributions that the chain can admit, and we call such a chain \(M\)-stationary; see Fig. 4. Having the long-time behavior of chains in mind in this section, we will assume this property, relying on ergodicity.
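As an illustration of Proposition 4.2 and the propagation rule \(\pi_{m+1}^\top = \pi_m^\top P_m\), the following small sketch (Python/NumPy) computes the \(M\)-stationary family for a period of length \(M=2\); the two 3-state matrices are hypothetical, illustrative choices.

```python
import numpy as np

# Two hypothetical 3-state transition matrices; period length M = 2.
P0 = np.array([[0.7, 0.2, 0.1],
               [0.3, 0.4, 0.3],
               [0.1, 0.3, 0.6]])
P1 = np.array([[0.5, 0.3, 0.2],
               [0.2, 0.6, 0.2],
               [0.3, 0.3, 0.4]])
Pbar0 = P0 @ P1                  # product over one period starting at m = 0

# pi_0: the unique invariant density of the irreducible matrix Pbar_0,
# obtained as the left eigenvector for eigenvalue 1.
evals, evecs = np.linalg.eig(Pbar0.T)
pi0 = np.real(evecs[:, np.argmin(np.abs(evals - 1))])
pi0 /= pi0.sum()

# Propagation pi_{m+1}^T = pi_m^T P_m yields the rest of the family.
pi1 = pi0 @ P0
```

Closing the period recovers \(\pi_0\) again, i.e., \(\pi_1^\top P_1 = \pi_0^\top\), and \(\pi_1\) is invariant for \(\bar P_1 = P_1 P_0\), which can be used as a numerical check.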
Assumption 4.3
We assume that \({\bar{P}}_0\) is irreducible and that the chain is M-stationary, i.e., \({\mathbb {P}}(X_n=i)=\pi _{m,i}\) whenever n is congruent to m modulo M.
Next, we introduce the time-reversed chain \((X^-_n)_{n\in {\mathbb {Z}}}\) with \(X^-_n = X_{-n}\). Due to \(M\)-stationarity, the transition probabilities of the time-reversed chain are also \(M\)-periodic, and it is enough to give the transition probabilities backward in time \(P^-_{m}\) for each time point during the period \(m\in {\mathbb {M}}\)
whenever \(\pi _{m,i}>0\); otherwise, for \(\pi _{m,i}=0\), we set \(P^-_{m,ij} :=0\).
Committor Probabilities
We will first look at the forward and backward committors and the system of equations that can be solved to acquire them. The forward committor \(q^+(n)\), respectively, backward committor \(q^-(n)\), is defined as before in (4), respectively, (5), but since the law of the Markov chain is now the same every M time instances, the committors also vary periodically and are identical every M time steps, and we therefore denote
whenever n is equivalent to \(m\in {\mathbb {M}}\).
Again, we consider nonempty and disjoint subsets A, B of the state space. It is straightforward to extend the theory to periodically varying sets \(A = (A_m)_{m\in {\mathbb {M}}}\), \(B=(B_m)_{m\in {\mathbb {M}}}\) defined on \({\mathbb {S}}^M\).
Theorem 4.4
Consider a Markov chain satisfying Assumptions 4.1 and 4.3. Then, the \(M\)-periodic forward committor \(q^+_{m}=(q^+_{m,i})_{i\in {\mathbb {S}}}\) fulfills the following iterative system with periodic conditions \(q_{M}^+=q_{0}^+\)
whereas the \(M\)-periodic backward committor \(q^-_{m}=(q^-_{m,i})_{i\in {\mathbb {S}}}\) satisfies
where \(q_{M}^-=q_{0}^-\).
The proof follows the lines of the proof above for the stationary, infinitetime case and can be found in Appendix A.2.3.
Before proving that the two systems are uniquely solvable, we characterize the forward and backward committors in terms of the path probabilities over one period \({\mathbb {M}}\):
Lemma 4.5
For any time \(n\equiv m\) modulo M and \(i \in C\), the committor functions (15) satisfy the following equalities
Proof
First, it follows from (16) for \(i\in C\) that
since \(q^+_{m+1,i_1} = 1\) if \( i_1\in B \) and \(q^+_{m+1,i_1} = 0\) if \( i_1\in A \). By inserting the committor equations at the following times iteratively and by using that \(q^+_0=q^+_M\), we get (18). We can proceed analogously for the backward committor, starting from (17) and reinserting the committor equations. \(\square \)
Equation (18) with \(m=0\) only contains one unknown and can be solved, e.g., numerically for all \(i\in C\), whereas for A, respectively, B, the committor is simply 0, respectively, 1. The committor for the remaining times \(m=1,2,\ldots \) can then be computed thereof by using (16), analogously for (19).
The time resolution of the Markov chain during the period is important for the committors, since we can resolve hitting events of B within the period. The committors one would compute for a more coarsely resolved chain without state information during the period, i.e., for the chain described by \({\bar{P}}_0\) (time-homogeneous, but mapping one period forward in time), will not notice that the chain has hit B at times other than \(m=0\). To see this, compare Lemma 4.5 with Lemma 3.5 using \({\bar{P}}_0\).
Lemma 4.6
By the irreducibility of \({\bar{P}}_0\), the solutions to (16) and (17) exist and are unique.
The proof can be found in Appendix A.2.4.
Remark 4.7
The committor equations can also be written on a timeaugmented state space using a periodaugmented transition matrix that pushes the dynamics deterministically forward in time
Extending this approach, one can also consider committor equations for systems that switch stochastically with probabilities \({\hat{P}}\in {\mathbb {R}}^{M\times M}\) between M different regimes, and each regime is described by a transition matrix \(P_m\). This is essentially the Markov chain analogue of a random dynamical system (Arnold 1995). The regimeaugmented transition matrix is given by
We refer to Appendix A.1 for more details on both ansatzes and the computation of committor probabilities on the augmented space. The augmented approach also offers a numerical way of solving the committor equations with periodic boundary conditions.
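A minimal sketch of the time-augmented ansatz: the periodic committor equations are solved as one standard committor problem on the augmented space \({\mathbb {S}}\times {\mathbb {M}}\), where the time index is pushed forward deterministically. The two 4-state matrices, the sets A, B, and the period \(M=2\) are hypothetical, illustrative choices.

```python
import numpy as np

# Hypothetical periodic chain: M = 2 regimes on S = 4 states, A = {0}, B = {3}.
S, M = 4, 2
Ps = [np.array([[0.7, 0.2, 0.1, 0.0],
                [0.2, 0.5, 0.2, 0.1],
                [0.1, 0.2, 0.5, 0.2],
                [0.0, 0.1, 0.2, 0.7]]),
      np.array([[0.6, 0.3, 0.1, 0.0],
                [0.3, 0.4, 0.2, 0.1],
                [0.0, 0.3, 0.4, 0.3],
                [0.1, 0.1, 0.2, 0.6]])]
A, B = {0}, {3}
idx = lambda i, m: i * M + m              # enumerate the pairs (i, m)

# Period-augmented matrix: move with P_m in space, step m -> m+1 (mod M).
Phat = np.zeros((S * M, S * M))
for m in range(M):
    rows = [idx(i, m) for i in range(S)]
    cols = [idx(j, (m + 1) % M) for j in range(S)]
    Phat[np.ix_(rows, cols)] = Ps[m]

# Standard forward committor problem on the augmented space.
Chat = [idx(i, m) for i in range(S) for m in range(M) if i not in A | B]
Bhat = [idx(i, m) for i in range(S) for m in range(M) if i in B]
q = np.zeros(S * M)
q[Bhat] = 1.0
q[Chat] = np.linalg.solve(np.eye(len(Chat)) - Phat[np.ix_(Chat, Chat)],
                          Phat[np.ix_(Chat, Bhat)].sum(axis=1))
q_periodic = q.reshape(S, M)              # q_periodic[i, m] = q^+_{m, i}
```

The solution can be checked against the periodic committor equations: for \(i\in C\), \(q^+_{m,i} = \sum_j P_{m,ij}\, q^+_{m+1,j}\) with the boundary values 0 on A and 1 on B.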
Transition Statistics
We have seen that the forward and backward committors in the case of periodically driven dynamics are also \(M\)periodic and can be computed from the iterative equations (16), (17) with periodic conditions in time. Since committors, densities, and transition matrices are \(M\)periodic, all statistics computed thereof are so too by the theory in Sect. 2.2, and we equip them with a subscript m, e.g., \({\hat{\mu }}^{AB}(n) = {\hat{\mu }}^{AB}_m\), \(f^{AB}(n) = f^{AB}_{m} \), whenever \(n\equiv m\) modulo M.
Compared to the previous case of stationary, infinite-time dynamics, the discrete rate of reactive trajectories leaving A at time m, \(\smash { k^{A\rightarrow }_m = \sum _{i\in A, j\in {\mathbb {S}}} f^{AB}_{ij}(m) }\), no longer equals the discrete rate of reactive trajectories arriving in B at time m, \(\smash { k^{\rightarrow B}_m = \sum _{i\in {\mathbb {S}}, j\in B} f^{AB}_{ij}(m-1) }\).
The next theorem provides us with the reactive current conservation laws in the case of periodic dynamics and will allow us to find the relation between \(k^{A\rightarrow }_m\) and \(k^{\rightarrow B}_m\).
Theorem 4.8
Consider a Markov chain satisfying Assumptions 4.1 and 4.3. Then, for each node \(i\in C\) and time \(m\in {\mathbb {M}}\) we have the following current conservation law
i.e., all the reactive trajectories that flow out of i at time (congruent to) m have flowed into i at time equivalent to \(m-1\).
Further, over one period the amount of reactive flux leaving A is the same as the amount of flux entering B, i.e.,
The proof can be found in Appendix A.2.5 and follows from straightforward computations using the committor equations to rewrite the reactive current.
As a result of (22), the discrete out-rate averaged over one period equals the average discrete in-rate, which we define to be \({\bar{k}}^{AB}_{M}\), i.e.,
This period-averaged discrete rate tells us the average probability per time step of a reactive trajectory to depart from A, or in other words, the expected number of reactive trajectories leaving A per time step.
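The conservation laws of Theorem 4.8 and the period-averaged rate can be verified numerically. The sketch below computes the \(M\)-stationary densities and the periodic committors by fixed-point iteration of the periodic committor equations for a hypothetical \(M=2\) chain, then checks that the reactive current out of A over one period balances the current into B; all matrices are illustrative assumptions.

```python
import numpy as np

# Hypothetical periodic chain: M = 2 regimes on 4 states, A = {0}, B = {3}.
S, M = 4, 2
Ps = [np.array([[0.7, 0.2, 0.1, 0.0],
                [0.2, 0.5, 0.2, 0.1],
                [0.1, 0.2, 0.5, 0.2],
                [0.0, 0.1, 0.2, 0.7]]),
      np.array([[0.6, 0.3, 0.1, 0.0],
                [0.3, 0.4, 0.2, 0.1],
                [0.0, 0.3, 0.4, 0.3],
                [0.1, 0.1, 0.2, 0.6]])]
A, B, C = [0], [3], [1, 2]

# M-stationary family: pi_0 from Pbar_0, then pi_{m+1}^T = pi_m^T P_m.
evals, evecs = np.linalg.eig((Ps[0] @ Ps[1]).T)
pi0 = np.real(evecs[:, np.argmin(np.abs(evals - 1))])
pi = [pi0 / pi0.sum()]
pi.append(pi[0] @ Ps[0])

# Periodic committors by fixed-point iteration with periodic conditions;
# boundary values (0 on A, 1 on B for q+; 1 on A, 0 on B for q-) stay fixed.
qp = [np.zeros(S) for _ in range(M)]
qm = [np.zeros(S) for _ in range(M)]
for m in range(M):
    qp[m][B], qm[m][A] = 1.0, 1.0
for _ in range(200):
    for m in reversed(range(M)):
        qp[m][C] = Ps[m][C] @ qp[(m + 1) % M]
    for m in range(M):
        Pm_minus = (pi[(m - 1) % M] * Ps[(m - 1) % M].T) / pi[m][:, None]
        qm[m][C] = Pm_minus[C] @ qm[(m - 1) % M]

# Reactive currents f_m and period-averaged rate kbar.
f = [np.outer(qm[m] * pi[m], qp[(m + 1) % M]) * Ps[m] for m in range(M)]
kA = sum(f[m][A, :].sum() for m in range(M))   # departures from A per period
kB = sum(f[m][:, B].sum() for m in range(M))   # arrivals in B per period
kbar = kA / M                                  # period-averaged rate
```

Besides the period-summed balance (22), the per-node conservation law can be checked: the current out of each \(i\in C\) at time m equals the current into i at time \(m-1\).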
Numerical Example 2: Periodically Varying Dynamics on a 5-State Network
We consider the same \(5\)-state network as before in Example 1, but this time the Markov chain is in equilibrium with the periodically varying transition probabilities
with period length \(M=6\). The transition matrices are chosen such that at time \(m=0\), the dynamics are the same as in the stationary Example 1, \(P_0=T+L=P\), whereas at time \(m=3\), the dynamics are reversed, \(P_3 = T-L\). The matrix T is a fixed row-stochastic matrix whose transition probabilities are symmetric about the axis through A and B, and L is a 0-row-sum matrix that is not symmetric about this axis. The probabilities of the transition matrices are shown in Fig. 5.
With the numerically computed transition statistics as shown in Fig. 6, we try to answer whether the added perturbation results in alternative effective paths compared to Example 1. The effective current indicates that the most likely effective transition path from A to B either goes via \(A\rightarrow 1\rightarrow B\) (with a likely detour to 2) during the first half of the period, or via \(A\rightarrow 3\rightarrow B\) (with a detour to 2) toward the second half of the period. But interestingly, we have additional transition paths that go via \(A \rightarrow 1\rightarrow 2\rightarrow 3\rightarrow B\) and \(A \rightarrow 3\rightarrow 2\rightarrow 1\rightarrow B\). In neither the stationary system described by \(P_0=T+L\) nor the one described by \(P_3 = T-L\) would these paths be possible. Additionally, the period-averaged rate \({\bar{k}}^{AB}_{6} = 0.034\) is higher than the rate in the stationary case (Example 1).
TPT for Markov Chains on a Finite-Time Interval
We now develop TPT for Markov chains with the transitions of interest taking place during a finite-time interval. The transition rules can be time-dependent, and the dynamics can be non-stationary, i.e., out of equilibrium.
The resulting committor equations (Sect. 5.2) and statistics (Sect. 5.3) in the case of finite-time dynamics are similar to those in the periodic case, yet there are some distinctions. The committor equations are now equipped with final, respectively, initial conditions, and the statistics show some boundary effects at the limits of the time interval.
In Sect. 5.4, we also provide a consistency result between the stationary, infinite-time case and the finite-time case for a stationary Markov chain, by considering the limit of the time interval going to infinity.
Setting
Let us start by describing the systems of interest in this section.
Assumption 5.1
We consider a Markov chain on a finite-time interval \((X_n)_{0 \le n \le N-1}\), \(N \in {\mathbb {N}}\), taking values in a discrete and finite space \({\mathbb {S}}\). The probability of a jump from the state \(i \in {\mathbb {S}}\) to the state \(j \in {\mathbb {S}}\) at time \(n \in \{0, \dots , N-2\}\) is given by the \((i,j)\)-entry of the row-stochastic transition matrix \(P(n) = (P_{ij}(n))_{i,j \in {\mathbb {S}}}\):
Setting the initial density \(\lambda (0) = (\lambda _i(0))_{i \in {\mathbb {S}}}\), the densities at later times \(n\in \{1,\dots ,N-1\}\) are given by \(\lambda (n+1)^\top = \lambda (n)^\top P(n) \).
By these assumptions, the chain can have time-inhomogeneous transition probabilities, or, even if \(P(n)=P\) for all n, the densities \(\lambda (n)\) can be changing in time. Also, we no longer require the chain to be irreducible.
The time-reversed process \((X^-_n)_{0 \le n \le N-1}\) defined by \({X^-_n := X_{N-1-n}}\) is also a Markov chain (Ribera Borrell 2019, Thm 2.1.18). Its transition probabilities are given for any \(n \in \{1, \dots , N-1\}\) by
whenever \( \lambda _{i}(n) >0\). From the backward transition probabilities (23), we note that even the time-reversed process of a finite-time, time-homogeneous Markov chain (i.e., \(P(n)=P\) for all n) is in general a finite-time, time-inhomogeneous Markov chain, unless also the distribution \(\lambda (n)\) is time-independent.
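A small sketch of the time-reversal construction (23), for a hypothetical time-homogeneous 3-state chain started from a non-stationary initial density; it illustrates the point just made, namely that the reversed chain is time-inhomogeneous even though P is constant.

```python
import numpy as np

# Hypothetical time-homogeneous 3-state chain started out of equilibrium.
P = np.array([[0.8, 0.1, 0.1],
              [0.2, 0.6, 0.2],
              [0.1, 0.2, 0.7]])
N = 5
lam = [np.array([1.0, 0.0, 0.0])]       # lambda(0): all mass in state 0
for n in range(N - 1):
    lam.append(lam[-1] @ P)             # lambda(n+1)^T = lambda(n)^T P

def P_minus(n):
    """Backward transition matrix (23) at time n in {1, ..., N-1}:
    P^-_{ij}(n) = lambda_j(n-1) P_{ji} / lambda_i(n), whenever lambda_i(n) > 0."""
    Pm = np.zeros_like(P)
    for i in range(3):
        if lam[n][i] > 0:
            Pm[i, :] = lam[n - 1] * P[:, i] / lam[n][i]
    return Pm
```

Each row with \(\lambda_i(n)>0\) is a probability vector, while \(P^-(1)\neq P^-(2)\) here, confirming the time-inhomogeneity of the reversed chain.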
Committor Probabilities
The forward (4) and backward committors (5) keep their dependence on the time of the chain \(n \in \{ 0, \dots , N-1\}\). The following theorem provides us with two iterative equations for the forward and backward committors. Since one can solve (24) and (25) iteratively, the solutions exist and are unique.
Theorem 5.2
The forward committor for a finite-time Markov chain of the form 5.1 satisfies the following iterative system of equations for \(n \in \{0, \dots , N-2\}\):
with final condition \(q^+_i(N-1) = \mathbb {1}_B(i)\). Analogously, the backward committor for a finite-time Markov chain satisfies for \({n \in \{1, \dots , N-1\}}\)
with initial condition \(q_i^-(0) = \mathbb {1}_A(i)\).
The proof uses some of the arguments of the proofs above for the stationary, infinite-time and periodic, infinite-time cases and can be found in Appendix A.2.6.
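A minimal sketch of the two iterations of Theorem 5.2, for a hypothetical time-homogeneous 4-state chain with an assumed uniform initial density (the matrix, the sets, and N are illustrative choices, not taken from the examples of this paper).

```python
import numpy as np

# Hypothetical time-homogeneous 4-state chain, A = {0}, B = {3}.
P = np.array([[0.7, 0.2, 0.1, 0.0],
              [0.2, 0.5, 0.2, 0.1],
              [0.1, 0.2, 0.5, 0.2],
              [0.0, 0.1, 0.2, 0.7]])
N, A, B, C = 6, [0], [3], [1, 2]

# Forward committor: final condition q+(N-1) = 1_B, then iterate backward;
# q+ stays 1 on B and 0 on A at every time.
qp = np.zeros((N, 4))
qp[:, B] = 1.0
for n in range(N - 2, -1, -1):
    qp[n, C] = P[C] @ qp[n + 1]

# Backward committor: initial condition q-(0) = 1_A, then iterate forward
# with the backward transition probabilities of the time-reversed chain.
lam = np.zeros((N, 4))
lam[0] = 0.25                               # assumed uniform initial density
for n in range(N - 1):
    lam[n + 1] = lam[n] @ P
qm = np.zeros((N, 4))
qm[:, A] = 1.0
for n in range(1, N):
    Pminus = (lam[n - 1] * P.T) / lam[n][:, None]   # P^-_{ij}(n)
    qm[n, C] = Pminus[C] @ qm[n - 1]
```

Note that the forward committor on C shrinks toward the final time: with less remaining time, reaching B before A becomes less likely.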
The following theorem provides us with a result analogous to Theorem 3.5 for the forward and the backward committors of a finite-time Markov chain, written in terms of the path probabilities of the paths that start in A and end in B within the restricted time frame \({\mathbb {T}}\).
Lemma 5.3
The forward committor at time \(n \in \{0, \dots , N-2\}\) and the backward committor at time \({n \in \{1, \dots , N-1\}}\), respectively, satisfy for \(i\in C\) the following equalities:
The proof follows from rewriting \(q_i^+(n)\) for any time \(n \in \{0, \dots , N-2\}\) into a decomposition of the probabilities of all possible paths that reach B within the time interval \(\{n+1,\dots ,N-1\}\) and rewriting \(q_i^-(n)\) for any time \({n \in \{1, \dots , N-1\}}\) into a decomposition of the probabilities of all possible paths that came from A within \(\{1, \dots , n-1\}\).
Remark 5.4
Similarly to the periodic case, it is possible to extend the approach to time-dependent sets (i.e., space-time sets) A(n) and B(n). For instance, in order to study transitions that leave a set at a certain time (e.g., at time \(n=0\)) and arrive at a certain time (e.g., at \(n=N\)), we can choose \(A(n) = \mathbb {1}_{\{0\}}(n) A\) and \(B(n) = \mathbb {1}_{\{N\}}(n) B\).
Transition Statistics and Their Interpretation
We have seen that the forward and backward committors for a finitetime Markov chain can be computed from the iterative equations (24) and (25) with final, respectively, initial conditions. Based on these, we will next introduce the corresponding transition statistics.
The distribution of reactive trajectories (Definition 2.1) is defined for any time \({n \in \{0, \dots , N-1 \}}\), and by Theorem 2.2, it is given by
Observe that \(\mu ^{AB}_i(0) = \mu ^{AB}_i(N-1) = 0\) because there are no reactive trajectories at these times. Thus, the distribution of reactive trajectories cannot be normalized at times 0 and \(N-1\). As a consequence, the normalized distribution of reactive trajectories \( {\hat{\mu }}^{AB}(n)\) is defined only for times \(n \in \{1, \dots , N-2\}\).
The current of reactive trajectories is defined for any time \(n \in \{0, \dots , N-2\}\), and it is given by (Theorem 2.5)
Similarly, the effective current of reactive trajectories
is defined only for times \(n \in \{0, \dots , N-2\}\).
Also in this case, the current satisfies certain conservation principles:
Theorem 5.5
For a finite-time Markov chain \((X_n)_{0 \le n \le N-1}\) satisfying Assumption 5.1, the reactive current flowing out of the node \(i \in C\) at time n equals the current flowing into the node \(i \in C\) at time \(n-1\), i.e.,
for \(n \in \{1, \dots , N-2\}\). Further, the reactive current flowing out of A into \({\mathbb {S}}\) over the whole time period \(\{0,\dots ,N-2\}\) equals the flow of reactive trajectories from \({\mathbb {S}}\) into B over the period
The proof can be found in Appendix A.2.7.
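Both conservation statements of Theorem 5.5 can be checked numerically; the following sketch reuses a hypothetical time-homogeneous 4-state chain with a uniform initial density (all choices illustrative, not the paper's example).

```python
import numpy as np

# Hypothetical time-homogeneous 4-state chain, A = {0}, B = {3}.
P = np.array([[0.7, 0.2, 0.1, 0.0],
              [0.2, 0.5, 0.2, 0.1],
              [0.1, 0.2, 0.5, 0.2],
              [0.0, 0.1, 0.2, 0.7]])
N, A, B, C = 6, [0], [3], [1, 2]

# Densities and finite-time committors (iterations of Theorem 5.2).
lam = np.zeros((N, 4)); lam[0] = 0.25
for n in range(N - 1):
    lam[n + 1] = lam[n] @ P
qp = np.zeros((N, 4)); qp[:, B] = 1.0
for n in range(N - 2, -1, -1):
    qp[n, C] = P[C] @ qp[n + 1]
qm = np.zeros((N, 4)); qm[:, A] = 1.0
for n in range(1, N):
    qm[n, C] = ((lam[n - 1] * P.T) / lam[n][:, None])[C] @ qm[n - 1]

# Reactive distribution mu_i(n) and reactive current f_ij(n).
mu = qm * lam * qp
f = np.array([np.outer(qm[n] * lam[n], qp[n + 1]) * P for n in range(N - 1)])

out_flow = f.sum(axis=2)   # out_flow[n, i] = sum_j f_ij(n)
in_flow = f.sum(axis=1)    # in_flow[n, j]  = sum_i f_ij(n)
```

The checks below confirm that \(\mu^{AB}\) vanishes at times 0 and \(N-1\), the per-node conservation law on C, and the total balance (29) between departures from A and arrivals in B.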
The discrete rate of transitions leaving A is defined for times \(n \in \{0, \dots , N-2\}\)
whereas the discrete rate of transitions entering B is defined for times \(n \in \{1, \dots , N-1\}\)
By plugging the definitions of the rates into result (29) of Theorem 5.5 and by reindexing the times, we note that the discrete departure rate averaged over the times \({n \in \{0, \dots , N-2\}}\) equals the time-averaged discrete arrival rate over the times \({n \in \{1, \dots , N-1\}}\), which we denote by \({\bar{k}}^{AB}_{N}\), i.e.,
In the infinite-time, stationary case, Theorem 3.8 tells us that \(k^{AB}\) equals the time average of the number of reactive pieces departing per time step along a single infinitely long trajectory. Here, we cannot apply the ergodic theorem to turn \({\bar{k}}^{AB}_{N}\) into an average along a single trajectory. Instead, we can write the time-averaged rate in terms of an ensemble of trajectories to get a better understanding. For this, we take \(K\in {\mathbb {N}}\) i.i.d. realizations of the finite-time chain, i.e., each sample \(\smash { ({\hat{X}}_n^i)_{n\in \{0,\dots ,N-1\}} }\) is distributed according to the law of the finite-time dynamics with \({\hat{X}}_0^i \sim \lambda (0)\). Then, we have by the law of large numbers:
Further, we can rewrite the second line of (30) as
thus giving the average rate \({\bar{k}}^{AB}_{N}\) the interpretation of the total expected number of reactive trajectories within \(\{0,\dots ,N-1\}\) divided by the number of time steps. Analogously, we can apply the same argument to the time-averaged probability of being on a transition
which can be understood as the expected number of time steps the Markov chain is on a transition during \(\{0, \dots , N1\}\) divided by N. Last, we define the ratio
and observe that it provides us with the average expected duration of a reactive trajectory over \({n \in \{0, \dots , N-1\}}\).
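To connect the committor-based average rate with the ensemble view above, the following sketch estimates \(\bar k^{AB}_N\) once from the finite-time committors and once by counting completed A-to-B transitions in K i.i.d. realizations; the chain (a hypothetical 4-state example with uniform initial density) and all parameters are illustrative assumptions.

```python
import numpy as np

# Hypothetical time-homogeneous 4-state chain, A = {0}, B = {3}.
P = np.array([[0.7, 0.2, 0.1, 0.0],
              [0.2, 0.5, 0.2, 0.1],
              [0.1, 0.2, 0.5, 0.2],
              [0.0, 0.1, 0.2, 0.7]])
N, K, A, B, C = 6, 10000, [0], [3], [1, 2]
rng = np.random.default_rng(0)

# Committor-based time-averaged rate: departures from A per time step
# (q- = 1 on A, so only lambda and q+ enter the current out of A).
lam = np.zeros((N, 4)); lam[0] = 0.25
for n in range(N - 1):
    lam[n + 1] = lam[n] @ P
qp = np.zeros((N, 4)); qp[:, B] = 1.0
for n in range(N - 2, -1, -1):
    qp[n, C] = P[C] @ qp[n + 1]
kbar_tpt = sum(lam[n, i] * P[i, j] * qp[n + 1, j]
               for n in range(N - 1) for i in A for j in range(4)) / (N - 1)

# Monte Carlo: count completed A -> B transitions in K i.i.d. realizations.
count = 0
for _ in range(K):
    x = rng.choice(4, p=lam[0])
    last = 'A' if x in A else ('B' if x in B else None)
    for n in range(1, N):
        x = rng.choice(4, p=P[x])
        if x in A:
            last = 'A'
        elif x in B:
            if last == 'A':        # reactive piece completed within the window
                count += 1
            last = 'B'
kbar_mc = count / (K * (N - 1))
```

By the law of large numbers, `kbar_mc` approaches `kbar_tpt` as K grows; with the modest K above, the two agree up to sampling error.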
Remark 5.6
In von Kleist et al. (2018), transitions from A to B are studied in non-ergodic and non-stationary processes. But the focus there is put on the first pieces of the trajectories starting in \(B^c\) which arrive in B after having been to A (called the first passage paths), whereas we consider the ensemble of all the transitions from A to B within the time interval \({\mathbb {T}}\). Further, the first passage paths are divided into non-reactive and reactive segments, and statistics for both the reactive and the non-reactive ensembles are computed. Last, their approach does not place any restrictions on the length of the first passage path and therefore is not straightforwardly extendable to finite-time processes.
Convergence Results of TPT Objects from Finite-Time Systems to Infinite-Time
The aim of this section is to show the consistency of the committors and TPT objects between the finite-time case and the infinite-time case. We will show that, when assuming a time-homogeneous chain on a finite-time interval that is stationary, the TPT objects converge to those of the infinite-time, stationary case when letting the time interval go to infinity.
To this end, let us consider the Markov chain \((X_n)_{-N \le n \le N}\) on the time interval \(\{-N, \dots , N\}\) with a time-homogeneous and irreducible transition matrix \(P=(P_{ij})_{i,j \in {\mathbb {S}}}\). When choosing the unique, strictly positive invariant density \({\pi = (\pi _i)_{i \in {\mathbb {S}}}}\) of P as the initial density, the density for all \({n \in \{-N, \dots , N\}}\) is given by
The time-reversed process \((X^-_n)_{-N \le n \le N}\) is also time-homogeneous and stationary, since its transition probabilities are given by
The forward committor and the backward committor for a finite-time Markov chain on the time interval \( \{-N, \dots , N\}\) satisfy (24) and (25) with a slight adjustment of the time interval.
The next theorem provides us the desired result:
Theorem 5.7
The committors and transition statistics defined for an irreducible Markov chain in stationarity on a finite-time interval correspond, in the limit that the interval \(\{-N, \dots , N\}\) tends to \({\mathbb {Z}}\), to the objects defined for a stationary, infinite-time Markov chain. For any \(i, j \in {\mathbb {S}}\), it holds that
Proof
First, we see that the forward committor at time \({n \in \{-N, \dots , N-1\}}\) and the backward committor at time \({n \in \{-N+1, \dots , N\}}\) for finite-time Markov chains correspond in the limit \(N \rightarrow \infty \) to the forward and the backward committors defined for stationary, infinite-time Markov chains. For any \({n \in \{-N, \dots , N-1\}}\), we can see that
where (1) follows from applying first Lemma 5.3 and then reindexing the coefficients of the transition matrix P, and (2) follows directly from Lemma 3.5. Analogously, we obtain for any \(n \in \{-N+1, \dots , N\}\) that
Hence, by putting together (34), (35) and (32) we show that for any \(i, j \in {\mathbb {S}}\)
\(\square \)
Numerical Example 3: Time-Homogeneous, Finite-Time Dynamics on a 5-State Network
We consider a time-homogeneous Markov chain over \({\mathbb {T}}=\{0,1,\dots ,N-1\}\), \(N=5\), with the transition rules given by \(P(n) = P\) for all \(n \in \{0,1,\dots ,N-2\}\). P is the transition matrix used in Example 1, and the initial density is given by the stationary density \(\pi \) of P. Thus, we are in the same setting as before in Example 1, but we are now looking at transitions between A and B that are restricted to take place within the finite-time window.
The transition statistics computed for this example are shown in Fig. 7. Since the time interval was chosen rather small, transitions via 1, which usually make a detour to the metastable state 2, are very unlikely. The transition path via 3 is more likely: the effective current through 3 is much stronger, and the normalized reactive distribution indicates that a reactive trajectory is most likely to be in node 3. We also note that the rate \({\bar{k}}_5^{AB}=0.005\) is much smaller than in the stationary case, and only a few transitions from A to B are completed (on average) in the short time frame \(\{0,1,\ldots ,N-1\}\).
Numerical Example 4: TimeInhomogeneous, FiniteTime Dynamics on a 5State Network
As a next example, we consider a time-inhomogeneous chain over the finite-time window \({\mathbb {T}}= \{0, \dots , N-1\}\), \(N=5\). We again set the stationary density of P as the initial density and let the transition matrices depend on time, but in such a way that \(\frac{1}{N-1}\sum _n P(n) = P\). For any \(n \in \{0, \dots , N-2\}\), let
where K is the 0-row-sum matrix given in Fig. 8. At times \(n = 0, 2\), the transition matrices become \(P + K\), and the subsets A and B are less metastable. At times \(n=1, 3\), the transition matrices become \(P - K\), and A and B are more metastable.
The computed transition statistics are shown in Fig. 9. The effective current plot shows that the majority of reactive trajectories leave A at times \(n=0, 2\) and go to 3, and at times \(n=1, 3\), they move from 3 to B. The time-averaged rate for the finite-time, time-homogeneous case from Example 3 is \({\bar{k}}_{5}^{AB} = 0.005\), while the time-averaged rate for this time-inhomogeneous case is \({\bar{k}}_{5}^{AB} =0.012\), more than two times larger. We have thus demonstrated that by adding a forcing to the time-homogeneous dynamics that changes the metastability of A and B, we can increase the rate of transitions, even though the forcing vanishes on average. We note that utilizing perturbations that tip a system out of equilibrium is also used in statistical mechanics to accelerate the convergence of statistics (Hamelberg et al. 2004; Lelièvre et al. 2013).
Numerical Example 5: Growing Finite-Time Window for the 5-State Network
Let us consider the stationary Markov chain introduced in Example 3 but on the time interval \({\mathbb {T}}= \{-N, \dots , N\}\). We want to see numerically that the forward, respectively, backward committor at time n converges under the \(l^2\)-norm to the forward, respectively, backward committor of the infinite-time, stationary system when extending the time interval \({{\mathbb {T}}= \{-N, \dots , N\}}\), for N big enough such that \(N \pm n \gg 1\), i.e.,
In Fig. 10, we show this convergence numerically; note that the statistics will not necessarily converge near the boundary of the interval.
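This convergence can be reproduced in a few lines by iterating the finite-time committor equation backward from the final condition; the 4-state matrix below is a hypothetical stand-in for the P of the 5-state examples.

```python
import numpy as np

# Hypothetical irreducible 4-state chain, A = {0}, B = {3}.
P = np.array([[0.7, 0.2, 0.1, 0.0],
              [0.2, 0.5, 0.2, 0.1],
              [0.1, 0.2, 0.5, 0.2],
              [0.0, 0.1, 0.2, 0.7]])
A, B, C = [0], [3], [1, 2]

# Infinite-time (stationary) forward committor.
q_inf = np.zeros(4); q_inf[B] = 1.0
q_inf[C] = np.linalg.solve(np.eye(2) - P[np.ix_(C, C)],
                           P[np.ix_(C, B)].sum(axis=1))

def q0_finite(N):
    """Finite-time forward committor at the middle time n = 0 of {-N,...,N}."""
    q = np.zeros(4); q[B] = 1.0          # final condition at time N
    for _ in range(N):                   # iterate (24) backward down to n = 0
        qC = P[C] @ q
        q = np.zeros(4); q[B] = 1.0; q[C] = qC
    return q

errs = [np.linalg.norm(q0_finite(N) - q_inf) for N in (1, 3, 6, 12)]
```

The \(l^2\)-errors `errs` decrease monotonically in N, mirroring the convergence shown in Fig. 10 away from the interval boundary.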
Numerical Examples in a Triple Well Landscape
In this section, we study, by way of example, the transition behavior of a particle diffusing in a triple well energy landscape for several scenarios: for infinite-time, stationary dynamics (Sect. 6.1), for infinite-time dynamics with an added periodic forcing (Sect. 6.2), and for stationary, finite-time dynamics (Sects. 6.3, 6.4).
Numerical Example 6: Infinite-Time, Stationary Dynamics
We consider the diffusive motion of a particle \((X_t)_{t\in {\mathbb {R}}}\) in \({\mathbb {R}}^2\) according to the overdamped Langevin equation
where \(\sigma >0\) is the diffusion constant, \(W_t\) is a \(2\)-dimensional standard Brownian motion (Wiener process), and \(V(x,y): {\mathbb {R}}^2 \rightarrow {\mathbb {R}}\) is the triple well potential given in Metzner et al. (2009), Schütte and Sarich (2013),
and shown in Fig. 11. The particle is pushed by the force \(-\nabla V(x,y)\) “downhill” in the energy landscape, while simultaneously being subject to a random forcing due to the Brownian motion term. The stationary density of the process is given by the Boltzmann distribution
with normalization factor Z.
Before applying TPT to this example, we have to discretize the process into a Markov chain. We want to estimate a transition matrix that gives us the probability to jump between discrete cells of the state space \([-2,2]\times [-1,2]\); here we choose regular square grid cells \(\{A_i,i=1,\ldots \}\) of size \(0.2 \times 0.2\). By means of Ulam's method (Ulam 1960; see also Koltai 2011a, Chapter 2.3 for a summary of the method), one can project the dynamics of (40) onto the space spanned by indicator functions on the grid cells and then further approximate the projected dynamics by a Monte Carlo sum of sampled trajectory snippets
where we sample \(\hat{X}^c\), \(c=1,\dots ,C\), uniformly from the cell \(A_i\) and obtain \({\hat{Y}}^c\) by evolving the sample forward in time according to (40) with time step \(\tau \) (e.g., by using the Euler–Maruyama discretization of the stochastic differential equation (40)). The resulting process defined by the transition matrix \(P\) is a discrete-time, discrete-space Markov chain with time step \(\tau \).^{Footnote 6}
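The Ulam/Euler–Maruyama estimation can be sketched as follows in a minimal 1D setting; a double-well potential \(V(x)=(x^2-1)^2\) on \([-2,2]\) stands in for the 2D triple well, and the grid size, time step, and sample number are illustrative assumptions.

```python
import numpy as np

# 1D stand-in: overdamped Langevin dynamics in V(x) = (x^2 - 1)^2.
rng = np.random.default_rng(1)
gradV = lambda x: 4.0 * x * (x**2 - 1.0)
sigma, tau, dt = 1.0, 0.3, 0.01
steps = int(round(tau / dt))
edges = np.linspace(-2.0, 2.0, 21)        # 20 grid cells of width 0.2
n_cells, C = len(edges) - 1, 1000         # C sample points per cell

P = np.zeros((n_cells, n_cells))
for i in range(n_cells):
    # Sample uniformly from cell i, evolve by Euler-Maruyama over time tau.
    x = rng.uniform(edges[i], edges[i + 1], size=C)
    for _ in range(steps):
        x = x - gradV(x) * dt + sigma * np.sqrt(dt) * rng.normal(size=C)
    # Bin the endpoints; samples leaving [-2, 2] are clipped to the boundary.
    x = np.clip(x, edges[0], edges[-1] - 1e-9)
    j = np.digitize(x, edges) - 1
    P[i] = np.bincount(j, minlength=n_cells) / C
```

Each row of the estimated matrix is a probability vector by construction; clipping at the domain boundary is one crude way of keeping the discretized chain on the finite grid.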
We are now interested in the transition behavior between the deep wells of V when the dynamics are stationary. Therefore, we choose the sets A and B as centered at \((-1,0)\) and (1, 0), respectively, and we ask: Which transition path from A to B is more likely: via the third, metastable shallow well at (0, 1.5), or via the direct barrier between A and B?
The computed committor functions and statistics of the reactive trajectories are shown in Fig. 12; we chose \(\sigma = 1\), \(\tau = 0.3\) and sampled \(C=10{,}000\) for estimating the transition matrix. Since the dynamics are reversible, the backward committor is just \(1-q^+\). Also, we can see that the committors are close to constant inside the metastable sets (the wells) due to the fast mixing inside the wells, but vary across the barriers. The computed effective current^{Footnote 7} indicates that most transitions occur via the direct barrier between A and B. The effective current through the shallow well at (0, 1.5) is only very small, but due to the metastability of the well, reactive trajectories taking that path are stuck there for long and thus contribute a lot to the density \(\mu ^{AB}\). The rate of transitions is \(k^{AB}= 0.0142\), and the mean transition time is \(t^{AB}= 10.01\).
Numerical Example 7: Periodic, Infinite-Time Dynamics
Next, we are interested in studying transitions in the triple well landscape when a periodically varying forcing is applied. We add to the dynamics (40) the force \(F(x_1,x_2, t) = 1.4 \, \cos \left( \frac{2 \pi t}{ 1.8}\right) (-x_2,x_1)\), which due to the cosine modulation alternatingly exhibits an anticlockwise and a clockwise circulation, resulting in the diffusion process with 1.8-periodic forcing,
We again discretize the dynamics and estimate transition matrices \(P_0, P_1, \dots ,P_{M-1}\) (\(M=6\)) for \(\tau \)-spaced time points during the period, each transition matrix mapping \(\tau =0.3\) into the future. In Fig. 11, the force from the potential plus the circulation is shown for the time points \(m=0\) and \(m=3\).
Considering the same A and B as before, we are now interested in the transition channels when the dynamics are equilibrated to the periodic forcing. The computed results are shown in Fig. 13: the reactive trajectories take the lower channel via the direct barrier at the beginning of the period, due to the additional push from the forcing, and the upper channel via the shallow well toward the end of the period, when the applied circulation is clockwise. It is interesting to note that the rate of reactive trajectories leaving A and entering B is highest toward the end of the period, when the added forcing is clockwise and the preferred channel is the upper channel. This is contrary to the stationary example before, where reactive trajectories dominantly passed through the lower channel.
Numerical Example 8: Finite-Time, Time-Homogeneous Dynamics
To demonstrate the effect of the finite-time restriction on the transition behavior between A and B, we now study the time-homogeneous triple well dynamics restricted to the time window \({\mathbb {T}}= \{0,\dots , N-1\}\), \(N=6\), and initiated in the stationary density.
Even though we study the same underlying dynamics as in the stationary, infinite-time case (Example 6), the possible transition paths between A and B are limited to the pathway that is fast to traverse, i.e., the lower channel via the direct barrier, see Fig. 14. Since only a small portion of the reactive trajectories from the infinite-time Example 6 have a short enough transition time to be considered in this case, the average rate of transitions \({\bar{k}}^{AB}_6= 0.0017\) is much lower than the corresponding rate \(k^{AB}= 0.0142\) in the infinite-time case (Example 6), and the average time a transition takes, \({\bar{t}}^{AB}_6= 2.055\), is much shorter than in the infinite-time case.
Numerical Example 9: Bifurcation Studies Using Finite-Time TPT
Lastly, we want to highlight the use of finite-time TPT for studying large qualitative changes in the transition behavior of a system. We consider the stationary triple-well example over a finite interval \({\mathbb {T}}=\{0,\dots ,N-1\}\), but this time with a smaller noise strength of \(\sigma =0.26\) compared to the previous examples.
As we increase the interval length N from \(N=20\) to \(N=500\), we allow reactive trajectories to be longer and longer, and thus, the average reactive trajectory changes. Whereas for \(N=20\), most of the density and current are concentrated around the lower transition channel, see Fig. 15, for \(N=500\), most of the density and current are around the upper transition channel. The transition behavior restricted to the time interval of size \(N=500\) is already close to the infinite-time transition dynamics; this can be seen by comparing the case of \(N=500\) with Fig. 1b, where the effective current of the same dynamics in infinite time is depicted.
Conclusion
In this paper, we generalized transition path theory such that it is applicable not only to infinite-time, stationary Markov processes but also to periodically varying and to time-dependent Markov processes on finite-time intervals. We restricted our results to Markov processes on discrete state spaces and in discrete time, but generalizations should be straightforward (e.g., following Weinan and Vanden-Eijnden 2006; Metzner et al. 2009).
The theory is intended to generalize TPT toward applicability in, e.g., nonequilibrium molecular, climate, fluid, or social dynamics (agent-based models). In most of these applications, the problem of computing the TPT objects arises, as the state space can be high-dimensional or even infinite-dimensional. This is a nontrivial task even in the traditional TPT context stemming from molecular dynamics—where these tools have nevertheless been successfully applied. Resolving the time-dependence poses an additional computational challenge. All this goes beyond the scope of the present work and will be addressed elsewhere.
First results toward the application of stationary TPT in high-dimensional state spaces have already been proposed. In Thiede et al. (2019), a workaround was given for solving the committor equations in the case of infinite-time, stationary dynamics by a Galerkin projection, which works well as long as the dynamics of the Markov chain can be described by low-dimensional representations. Another interesting first work (Khoo et al. 2019) goes in the direction of using neural networks for solving the committor equations in high dimensions.
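On a discrete state space, the stationary forward committor itself reduces to a small linear system, \((I - P_{CC})\,q_C = P_{CB}\,\mathbb{1}\), with \(q^+=0\) on A and \(q^+=1\) on B. The following is a minimal sketch (not the Galerkin or neural-network approaches mentioned above); the function name and the three-state example chain are hypothetical, chosen only for illustration.

```python
import numpy as np

def forward_committor(P, A, B):
    """Forward committor q+ of a stationary Markov chain with row-stochastic
    transition matrix P and disjoint index sets A, B.

    Solves (I - P_CC) q_C = P_CB 1 on C = S \ (A u B),
    with boundary values q+ = 0 on A and q+ = 1 on B.
    """
    n = P.shape[0]
    C = np.setdiff1d(np.arange(n), np.union1d(A, B))
    q = np.zeros(n)
    q[B] = 1.0
    P_CC = P[np.ix_(C, C)]                 # transitions staying inside C
    b = P[np.ix_(C, B)].sum(axis=1)        # one-step probability to hit B
    q[C] = np.linalg.solve(np.eye(len(C)) - P_CC, b)
    return q

# Hypothetical 3-state chain with A = {0}, B = {2}
P = np.array([[0.5, 0.5, 0.0],
              [0.25, 0.5, 0.25],
              [0.0, 0.5, 0.5]])
q = forward_committor(P, A=[0], B=[2])
```

For this symmetric chain, the middle state is equally likely to reach A or B first, so its committor value is 0.5.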
Further, the interpretability and visualization of the transition statistics in high dimensions or for large time intervals (large N or M) is a point of future research. In Metzner et al. (2009), an algorithm for computing transition channels from the effective current of reactive trajectories was proposed; a generalization to time-dependent dynamics is ongoing work.
An implementation of the tools developed in this paper can be found at www.github.com/LuzieH/pytpt.
Notes
Here, with “next” we also include the current time point, i.e., the case that the system is already in B.
Ribera Borrell (2019, Prop 2.1.10) provides us with a generalization of the Markov property for Markov chains for events like \(\{\tau _A^-(n) >\tau _B^-(n) \}\) resp. \(\{\tau _B^+(n) < \tau _A^+(n)\}\), which belong to the \(\sigma \)-algebra that contains the present and past resp. the present and future of the chain.
Sometimes such a family of invariant densities is called equivariant.
Regarding plotting \(f^+\): If the underlying process is a diffusion process in \({\mathbb {R}}^d\), we can estimate, for each cell i at time n, the average direction and magnitude of the effective current by attaching to i the vector \(\sum _{j\ne i} f^+_{ij} (n) v_{ij}\), where \(v_{ij}\) is the unit vector pointing from the center of grid cell i to the center of grid cell j (see, e.g., Fig. 12).
References
Arnold, L.: Random dynamical systems. In: Johnson, R. (ed.) Dynamical Systems, pp. 1–43. Springer, Berlin (1995)
Ashwin, P., Wieczorek, S., Vitolo, R., Cox, P.: Tipping points in open systems: bifurcation, noise-induced and rate-dependent examples in the climate system. Philos. Trans. R. Soc. A Math. Phys. Eng. Sci. 370(1962), 1166–1184 (2012)
Banisch, R., Conrad, N.D., Schütte, C.: Reactive flows and unproductive cycles for random walks on complex networks. Eur. Phys. J. Spec. Top. 224(12), 2369–2387 (2015)
a Beccara, S., Škrbić, T., Covino, R., Faccioli, P.: Dominant folding pathways of a WW domain. Proc. Natl. Acad. Sci. 109(7), 2330–2335 (2012)
Bolhuis, P.G., Chandler, D., Dellago, C., Geissler, P.L.: Transition path sampling: throwing ropes over rough mountain passes, in the dark. Annu. Rev. Phys. Chem. 53(1), 291–318 (2002). PMID: 11972010
Brockmann, D., Helbing, D.: The hidden geometry of complex, network-driven contagion phenomena. Science 342(6164), 1337–1342 (2013)
Cameron, M., Vanden-Eijnden, E.: Flows in complex networks: theory, algorithms, and application to Lennard–Jones cluster rearrangement. J. Stat. Phys. 156(3), 427–454 (2014)
Czerminski, R., Elber, R.: Self-avoiding walk between two fixed points as a tool to calculate reaction paths in large molecular systems. Int. J. Quantum Chem. 38(S24), 167–185 (1990)
Djurdjevac, N., Bruckner, S., Conrad, T., Schütte, C.: Random walks on complex modular networks. JNAIAM 6(1–2), 29–50 (2011)
Doyle, P.G., Snell, J.L.: Random Walks and Electric Networks, vol. 22. American Mathematical Society (1984)
Elber, R., Shalloway, D.: Temperature dependent reaction coordinates. J. Chem. Phys. 112(13), 5539–5545 (2000)
Faccioli, P., Lonardi, A., Orland, H.: Dominant reaction pathways in protein folding: a direct validation against molecular dynamics simulations. J. Chem. Phys. 133(4), 045104 (2010)
Giorgini, L.T., Lim, S., Moon, W., Wettlaufer, J.: Predicting rare events in stochastic resonance. arXiv preprint arXiv:1906.10469 (2019)
Hamelberg, D., Mongan, J., McCammon, J.A.: Accelerated molecular dynamics: a promising and efficient simulation method for biomolecules. J. Chem. Phys. 120(24), 11919–11929 (2004)
Hartmann, C., Schütte, C.: Efficient rare event simulation by optimal nonequilibrium forcing. J. Stat. Mech. Theory Exp. 2012(11), P11004 (2012)
Hartmann, C., Banisch, R., Sarich, M., Badowski, T., Schütte, C.: Characterization of rare events in molecular dynamics. Entropy 16(1), 350–376 (2014)
Hastings, A., Abbott, K.C., Cuddington, K., Francis, T., Gellner, G., Lai, Y.C., Morozov, A., Petrovskii, S., Scranton, K., Zeeman, M.L.: Transient phenomena in ecology. Science 361(6406), eaat6412 (2018)
Heida, M.: Convergences of the square-root approximation scheme to the Fokker–Planck operator. Math. Models Methods Appl. Sci. 28(13), 2599–2635 (2018)
Herrmann, S., Imkeller, P., et al.: The exit problem for diffusions with time-periodic drift and stochastic resonance. Ann. Appl. Probab. 15(1A), 39–68 (2005)
Hollander, F.d., Olivieri, E., Scoppola, E.: Metastability and nucleation for conservative dynamics. J. Math. Phys. 41(3), 1424–1498 (2000)
Khoo, Y., Lu, J., Ying, L.: Solving for high-dimensional committor functions using artificial neural networks. Res. Math. Sci. 6(1), 1 (2019)
Koltai, P.: A stochastic approach for computing the domain of attraction without trajectory simulation. Discrete. Contin. Dyn. Syst. Suppl. 2011, 854–863 (2011a)
Koltai, P.: Efficient Approximation Methods for the Global LongTerm Behavior of Dynamical Systems: Theory, Algorithms and Examples. Logos Verlag, Berlin (2011b)
Koltai, P., Volf, A.: Optimizing the stable behavior of parameterdependent dynamical systems—maximal domains of attraction, minimal absorption times. J. Comput. Dyn. 1(2), 339–356 (2014)
Lelièvre, T., Nier, F., Pavliotis, G.A.: Optimal nonreversible linear drift for the convergence to equilibrium of a diffusion. J. Stat. Phys. 152(2), 237–274 (2013)
Lenton, T.M.: Environmental tipping points. Annu. Rev. Environ. Resour. 38, 1–29 (2013)
Lenton, T.M., Held, H., Kriegler, E., Hall, J.W., Lucht, W., Rahmstorf, S., Schellnhuber, H.J.: Tipping elements in the Earth’s climate system. Proc. Natl. Acad. Sci. 105(6), 1786–1793 (2008)
Lie, H.C., Fackeldey, K., Weber, M.: A square root approximation of transition rates for a Markov state model. SIAM J. Matrix Anal. Appl. 34(2), 738–756 (2013)
Lindner, M., Hellmann, F.: Stochastic basins of attraction and generalized committor functions. Phys. Rev. E 100(2), 022124 (2019)
Lu, J., Vanden-Eijnden, E.: Exact dynamical coarse-graining without timescale separation. J. Chem. Phys. 141(4), 07B6191 (2014)
Metzner, P.: Transition Path Theory for Markov Processes. PhD thesis, Freie Universität Berlin (2008)
Metzner, P., Schütte, C., Vanden-Eijnden, E.: Transition path theory for Markov jump processes. Multiscale Model. Simul. 7(3), 1192–1219 (2009)
Noé, F., Schütte, C., Vanden-Eijnden, E., Reich, L., Weikl, T.R.: Constructing the equilibrium ensemble of folding pathways from short off-equilibrium simulations. Proc. Natl. Acad. Sci. 106(45), 19011–19016 (2009)
Norris, J.R.: Markov Chains. Number 2 in Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge University Press, Cambridge (1998)
Nyborg, K., Anderies, J.M., Dannenberg, A., Lindahl, T., Schill, C., Schlüter, M., Adger, W.N., Arrow, K.J., Barrett, S., Carpenter, S., et al.: Social norms as solutions. Science 354(6308), 42–43 (2016)
Olender, R., Elber, R.: Calculation of classical trajectories with a very large time step: formalism and numerical examples. J. Chem. Phys. 105(20), 9299–9315 (1996)
Olender, R., Elber, R.: Yet another look at the steepest descent path. J. Mol. Struct. Theochem. 398, 63–71 (1997)
Otto, I.M., Donges, J.F., Cremades, R., Bhowmik, A., Hewitt, R.J., Lucht, W., Rockström, J., Allerberger, F., McCaffrey, M., Doe, S.S., et al.: Social tipping dynamics for stabilizing Earth’s climate by 2050. Proc. Natl. Acad. Sci. 117, 2354–2365 (2020)
Pan, R.K., Saramäki, J.: Path lengths, correlations, and centrality in temporal networks. Phys. Rev. E 84(1), 016105 (2011)
Pinski, F., Stuart, A.: Transition paths in molecules at finite temperature. J. Chem. Phys. 132(18), 184104 (2010)
Ribera Borrell, E.: From Ergodic InfiniteTime to FiniteTime Transition Path Theory. Master’s thesis, Freie Universität Berlin (2019)
Sarich, M.: Projected Transfer Operators: Discretization of Markov Processes in High-dimensional State Spaces. PhD thesis, Freie Universität Berlin (2011)
Scheffer, M., Carpenter, S., Foley, J.A., Folke, C., Walker, B.: Catastrophic shifts in ecosystems. Nature 413(6856), 591–596 (2001)
Schonmann, R.H.: The pattern of escape from metastability of a stochastic Ising model. Commun. Math. Phys. 147(2), 231–240 (1992)
Schütte, C., Sarich, M.: Metastability and Markov State Models in Molecular Dynamics: Modeling, Analysis, Algorithmic Approaches, Vol. 24. American Mathematical Society, Philadelphia (2013)
Schütte, C., Noé, F., Lu, J., Sarich, M., Vanden-Eijnden, E.: Markov state models based on milestoning. J. Chem. Phys. 134(20), 05B609 (2011)
Ser-Giacomi, E., Vasile, R., Hernández-García, E., López, C.: Most probable paths in temporal weighted networks: an application to ocean transport. Phys. Rev. E 92(1), 012818 (2015)
Steffen, W., Rockström, J., Richardson, K., Lenton, T.M., Folke, C., Liverman, D., Summerhayes, C.P., Barnosky, A.D., Cornell, S.E., Crucifix, M., et al.: Trajectories of the Earth System in the Anthropocene. Proc. Natl. Acad. Sci. 115(33), 8252–8259 (2018)
Thiede, E.H., Giannakis, D., Dinner, A.R., Weare, J.: Galerkin approximation of dynamical quantities using trajectory data. J. Chem. Phys. 150(24), 244111 (2019)
Ulam, S.M.: A Collection of Mathematical Problems, vol. 8. Interscience Publishers, New York (1960)
Ulitsky, A., Elber, R.: A new technique to calculate steepest descent paths in flexible polyatomic systems. J. Chem. Phys. 92(2), 1510–1511 (1990)
Valdano, E., Fiorentin, M.R., Poletto, C., Colizza, V.: Epidemic threshold in continuous-time evolving networks. Phys. Rev. Lett. 120(6), 068302 (2018)
Vanden-Eijnden, E.: Transition path theory. In: Ferrario, M., Ciccotti, G., Binder, K. (eds.) Computer Simulations in Condensed Matter Systems: From Materials to Chemical Biology, vol. 1, pp. 453–493. Springer, Berlin (2006)
von Kleist, M., Schütte, C., Zhang, W.: Statistical analysis of the first passage path ensemble of jump processes. J. Stat. Phys. 170(4), 809–843 (2018)
Walters, P.: An Introduction to Ergodic Theory, Vol. 79. Springer, Berlin (2000)
Weinan, E., Vanden-Eijnden, E.: Towards a theory of transition paths. J. Stat. Phys. 123(3), 503 (2006)
Acknowledgements
We would like to thank Alexander Sikorski and Jobst Heitzig for insightful discussions on the theory, as well as Niklas Wulkow for valuable feedback on the manuscript. LH received funding from the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s Excellence Strategy—The Berlin Mathematics Research Center MATH+ (EXC-2046/1, Project ID: 390685689). ERB and PK acknowledge support by the DFG through Grant CRC 1114, Projects A05 and A01, respectively.
Funding
Open Access funding provided by Projekt DEAL.
Communicated by Dr. Oliver Junge.
Appendix
Committors for Periodic, InfiniteTime Dynamics: Augmenting the State Space
As an alternative to the approach in Sect. 4.2, we can consider the dynamics and the committor equations on the augmented state space \( {\mathbb {S}}^M\). The augmented transition matrix \(P_\text {Aug}\) of size \(M|{\mathbb {S}}| \times M|{\mathbb {S}}|\) contains all transition matrices \(P_0,\dots , P_{M-1}\) and applies them consecutively, one after the other,
e.g., applying \(P_\text {Aug}\) to a space-time distribution with mass only at the \(0\)th time point shifts all the mass to the next time point. Since the transition matrices are row-stochastic, so is the augmented transition matrix.
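The cyclic block structure of \(P_\text{Aug}\) can be sketched as follows (the function name and the tiny example matrices are hypothetical): block row m carries \(P_m\) in block column \((m+1) \bmod M\), so row-stochasticity is inherited from the individual \(P_m\).

```python
import numpy as np

def augment_periodic(Ps):
    """Build the cyclic augmented matrix: block row m carries P_m in
    block column (m+1) mod M, so applying P_Aug advances a space-time
    distribution by one time step within the period."""
    M, S = len(Ps), Ps[0].shape[0]
    P_aug = np.zeros((M * S, M * S))
    for m, Pm in enumerate(Ps):
        mp = (m + 1) % M
        P_aug[m * S:(m + 1) * S, mp * S:(mp + 1) * S] = Pm
    return P_aug

# two hypothetical 2-state transition matrices, i.e., M = 2
Ps = [np.array([[0.9, 0.1], [0.2, 0.8]]),
      np.array([[0.7, 0.3], [0.4, 0.6]])]
P_aug = augment_periodic(Ps)
```

A distribution with mass only in the time-0 block is mapped entirely into the time-1 block, as described above, and all row sums remain 1.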
With that set, we can take a slightly different approach for writing the forward committor equations as one augmented matrix equation. Again, we need to solve the committor only on C; therefore, we define the augmented \(|C|M \times |C|M \) matrix \({\tilde{P}}_\text {Aug}\), which encodes only the transitions inside C,
and the \(|C|M\)-dimensional vectors
Then from (16), we arrive at the following equation for \({\tilde{q}}^+\)
Note that \((I - {\tilde{P}}_\text {Aug}) \) is large but sparse compared to using the stacked equations (18).
For the backward committor equations, we proceed similarly and arrive at the equations
where we defined
Remark A.1
This approach can also be used for studying committors and transitions in systems with stochastic switching between different regimes. Consider the dynamics of a Markov chain that can be in M different regimes, each described by a transition matrix \(P_m\), \(m\in \{1,\dots ,M\}\). The probability to switch between regimes is given by \({\hat{P}}\in {\mathbb {R}}^{M \times M}\). The committor probabilities would give the probability to next hit B and not A, given that the chain is in a certain state and regime and assuming the dynamics are equilibrated. Note that the periodic dynamics described in this paper are just a special case of stochastic switching, where the switching is deterministic from time m to time \(m+1\), i.e., \({\hat{P}}_{mm'} = \delta _{m,m'-1}\) with indices modulo M. The regime-augmented matrix can be written as:
which is still a rowstochastic matrix, since \(\sum _{m'=1}^M {\hat{P}}_{mm'} =1 \) for all \(m=1,\dots ,M\). Exactly as above, \(P^{\text {switch}}_\text {Aug}\) can be used for finding committor probabilities.
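A sketch of this regime-augmented matrix, under the assumption (ours, for illustration) that block \((m, m')\) equals \(\hat{P}_{mm'} P_m\), i.e., the chain moves under its current regime's transition matrix while switching regimes; with a deterministic cyclic \(\hat{P}\) this recovers the periodic \(P_\text{Aug}\) from above. Function and variable names are hypothetical.

```python
import numpy as np

def augment_switching(Ps, P_hat):
    """Regime-augmented matrix: block (m, m') is P_hat[m, m'] * Ps[m],
    i.e., the chain moves under the current regime's transition matrix
    and simultaneously switches regime with probability P_hat[m, m']."""
    M, S = len(Ps), Ps[0].shape[0]
    P_aug = np.zeros((M * S, M * S))
    for m in range(M):
        for mp in range(M):
            P_aug[m * S:(m + 1) * S, mp * S:(mp + 1) * S] = P_hat[m, mp] * Ps[m]
    return P_aug

# two hypothetical 2-state regimes and a stochastic switching matrix
Ps = [np.array([[0.9, 0.1], [0.2, 0.8]]),
      np.array([[0.7, 0.3], [0.4, 0.6]])]
P_hat = np.array([[0.9, 0.1], [0.3, 0.7]])
P_sw = augment_switching(Ps, P_hat)
```

Row-stochasticity follows exactly as in the remark: each block row m sums to \(\sum_{m'} \hat{P}_{mm'} = 1\) times the row sums of \(P_m\).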
Proofs
Lemma 3.5
Proof
Let us consider the joint probability that the process starting in \(i \in {\mathbb {S}}\) at time \(n \in {\mathbb {Z}}\) first reaches B, and not A, at time \(\tau \in {\mathbb {Z}}_n^+ \cup \{ \infty \}\):
The law of total probability lets us write the forward committor in terms of a countable sum of the abovementioned joint probability:
where we have used that \({\mathbb {P}}(\tau _{A \cup B}^+(n) = \infty ) = 0\) due to the ergodicity of the process, which ensures that the process will eventually reach the subset \(A \cup B\).
Next, by applying the same arguments of the proof of Theorem 3.2 we see that the joint probability \(J_i^+(n, \tau )\) satisfies the following iterative system of equations for \(\tau \in {\mathbb {Z}}_{n+1}^+\):
with initial condition \(J_i^+(n, n) = \mathbb {1}_B(i)\).
Last, by using (47) recursively we can compute for any \(i \in C\) and \(\tau \in {\mathbb {Z}}_n^+\)
where we recursively used \(J_i^+(k, \tau ) = 0\) for any \(i \in A \cup B\) and \(n+1\le k<\tau \), and in the last iteration the fact that \(J_i^+(\tau , \tau ) = \mathbb {1}_B(i)\).
Putting the results (46) and (48) together completes the proof for the forward committor. The proof for the backward committor follows by using the same arguments. \(\square \)
Theorem 3.6
Proof
First, for any \(i\in C\), we have
using the definition of the time-reversed transition probabilities and the committor equations (4), (5) for \(i\in C\).
Second, using that \(f^{AB}_{ij}=0\) if \(i\in B, j\in {\mathbb {S}}\) and also if \(i\in {\mathbb {S}}, j\in A\), we can compute
and by the current conservation (49) for \(i\in C\), \(\sum _{\begin{array}{c} i\in C \\ j \in {\mathbb {S}} \end{array}} f^{AB}_{ij} = \sum _{\begin{array}{c} i\in C \\ j \in {\mathbb {S}} \end{array}} f^{AB}_{ji}=\sum _{\begin{array}{c} j\in C \\ i \in {\mathbb {S}} \end{array}} f^{AB}_{ij}\), we arrive at
implying that \(\sum _{\begin{array}{c} i\in A \\ j \in {\mathbb {S}} \end{array}} f^{AB}_{ij} = \sum _{\begin{array}{c} i\in {\mathbb {S}}\\ j \in B \end{array}} f^{AB}_{ij}.\) \(\square \)
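The statement of Theorem 3.6 can be probed numerically on a small example: with the reactive current \(f^{AB}_{ij} = q^-_i \pi_i P_{ij} q^+_j\), the current leaving A equals the current entering B. The three-state chain below is hypothetical; its committors can be written down by hand thanks to reversibility.

```python
import numpy as np

# Hypothetical reversible 3-state chain with A = {0}, B = {2}, C = {1}
P = np.array([[0.5, 0.5, 0.0],
              [0.25, 0.5, 0.25],
              [0.0, 0.5, 0.5]])
pi = np.array([0.25, 0.5, 0.25])   # stationary density, pi @ P == pi
qp = np.array([0.0, 0.5, 1.0])     # forward committor
qm = np.array([1.0, 0.5, 0.0])     # backward committor (reversible chain)

# reactive current f^{AB}_{ij} = q^-_i * pi_i * P_ij * q^+_j
f = qm[:, None] * pi[:, None] * P * qp[None, :]

out_A = f[0, :].sum()   # current leaving A
in_B = f[:, 2].sum()    # current entering B
```

For this chain, both sums equal 1/16, in agreement with the theorem.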
Theorem 4.4
Proof
For \(i\in C\), the forward committor at time \(n=m\) modulo M reads
using (1) that \(\tau _B^+(n), \tau _A^+(n)\ge n+1\) for \(i\in C\), (2) the law of total probability, (3) the definition of conditional probabilities, (4) the strong Markov property.
For \(i\in A \) at time n, it follows from the definition that \(\tau ^+_A(n)=n\) whereas \(\tau ^+_B(n)>n\), thus \(q^+_{i}(n)=0\); similarly, for \(i\in B\), \(\tau ^+_A(n)>n\) and \(\tau ^+_B(n)=n\), thus \(q^+_{ i}(n)=1\).
The proof for the backward committor equations follows the same lines. \(\square \)
Lemma 4.6
Proof
We start with the case of the forward committor (16).
We can rewrite (18) with \(m=0\), with \(\left. P_0\right|_{I\rightarrow J}\) denoting the restriction of the matrix \(P_0\) to entries from \(i\in I\) to \(j\in J\), as the matrix equation
equivalently
We note that the equation is uniquely solvable as long as \((I - D)\) is invertible, and \((I - D) \) is invertible if \(\rho (D) <1\).
By assuming that \({\bar{P}}_0\) is irreducible, we will show that \( \Vert vD\Vert _1 < \Vert v \Vert _1 \) for all \(v \in {\mathbb {R}}^{|C|}\). Since this holds in particular for the eigenvectors, it follows that all eigenvalues satisfy \(|\lambda | <1\) and thus \(\rho (D) <1\).
We know that \(D\) is a substochastic matrix (its row sums are \(\le 1\)) since it is a product of substochastic matrices, and that all entries are nonnegative, i.e., \( \sum _j D_{ij} \le 1\) for all \(i\in C\).
Moreover, there exists at least one row with row sum less than 1, since by irreducibility of \({\bar{P}}_0\), there must be at least one state i in C with a positive probability to go to A or B. We denote one such state by \(i^*\), with \( \sum _{j} D_{i^*j} < 1\). Thus, we can compute
We have shown that a unique solution which we will call \(q^+_0\) exists. The forward committors for \(m=1,\dots , M1\) can uniquely be computed thereof by using (16).
For the case of the backward committor, we can proceed analogously by using the time-reversed transition probabilities mapping M instances back
and by noting that if \({\bar{P}}_0\) is irreducible, then \({\bar{P}}^-_0\) is also irreducible, which follows from the definition of irreducibility of \({\bar{P}}_0\) and using (15). \(\square \)
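The core fact behind the lemma, that a nonnegative, irreducible, substochastic matrix with at least one row sum strictly below 1 has spectral radius \(\rho(D) < 1\), is easy to check numerically. The matrix below is a hypothetical stand-in for the product of restricted transition matrices.

```python
import numpy as np

# Hypothetical substochastic D on C: nonnegative, irreducible
# (all entries positive), and the last row "leaks" probability
# to A u B, so its row sum is strictly below 1.
D = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.5, 0.3],
              [0.1, 0.4, 0.4]])   # row sums: 1.0, 1.0, 0.9
rho = max(abs(np.linalg.eigvals(D)))  # spectral radius
```

Since \(\rho(D) < 1\), the matrix \((I - D)\) is invertible and the committor system has a unique solution, exactly as in Lemma 4.6.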
Theorem 4.8
Proof
To show that the flux conservation in node \(i\in C\) holds, we compute
using (1) \(P^-_{m,ij}\pi _{m,i} = P_{m-1,ji}\pi _{m-1,j}\) and (2) the backward and forward committor equations for \(i\in C\).
Next, we want to show that the current of reactive trajectories leaving A during one period equals the current entering B during one period. We calculate
using that \(f^{AB}_{m,ij}=0\) if \(i\in B, j\in {\mathbb {S}}\) and if \(i\in {\mathbb {S}}, j\in A\). And by the current conservation for \(i\in C\), \(m\in {\mathbb {M}}\) and by relabeling i, j, m,
we arrive at
implying that \( \sum _{m\in {\mathbb {M}}} \sum _{\begin{array}{c} i\in A \\ j \in {\mathbb {S}} \end{array}} f^{AB}_{m,ij} = \sum _{m\in {\mathbb {M}}} \sum _{\begin{array}{c} i\in {\mathbb {S}}\\ j \in B \end{array}} f^{AB}_{m,ij}.\) \(\square \)
Theorem 5.2
Proof
First, we recall what we have already seen in the proof of Theorem 4.4. If \(i \in A\), then for any \(n \in \{0, \dots , N-1\}\) we have \(q_i^+(n) = 0\), \(q_i^-(n) = 1\), and if \(i \in B\), then for any \(n \in \{0, \dots , N-1\}\) we have \(q_i^+(n) = 1\), \(q_i^-(n) = 0\).
Second, we find a final condition for the forward committor on \(i \in C\)
and an initial condition for the backward committor on \(i \in C\)
where we have used in (1) and (2) that \(i \in C\).
Third, by following the same arguments used to prove (50) (in the proof of Theorem 4.4), we get for any \(n \in \{0, \dots , N-2\}\) that
Analogously, for any \(n \in \{1, \dots , N-1\}\) we get that
\(\square \)
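Theorem 5.2 turns the finite-time forward committor into a backward-in-time recursion with final condition \(q^+_i(N-1) = \mathbb{1}_B(i)\) and \(q^+_i(n) = \sum_j P_{ij}(n)\, q^+_j(n+1)\) on C. A minimal sketch of this iteration (function name and example chain are hypothetical):

```python
import numpy as np

def finite_time_forward_committor(Ps, A, B, C):
    """q^+_i(n) on the window {0, ..., N-1} by backward iteration.
    Ps = [P(0), ..., P(N-2)] are the one-step transition matrices."""
    N = len(Ps) + 1
    S = Ps[0].shape[0]
    q = np.zeros((N, S))
    q[N - 1, B] = 1.0                       # final condition q^+(N-1) = 1_B
    for n in range(N - 2, -1, -1):
        q[n, A] = 0.0                       # boundary values on A and B
        q[n, B] = 1.0
        q[n, C] = (Ps[n] @ q[n + 1])[C]     # recursion on C
    return q

# time-homogeneous example: hypothetical 3-state chain, N = 3
P = np.array([[0.5, 0.5, 0.0],
              [0.25, 0.5, 0.25],
              [0.0, 0.5, 0.5]])
q = finite_time_forward_committor([P, P], A=[0], B=[2], C=[1])
```

The committor on C shrinks as the remaining time shrinks: the fewer steps are left, the less likely the chain reaches B within the window.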
Theorem 5.5
Proof
First, for any \(i\in C\) and \(n \in \{1, \dots , N-2\}\) we have on one hand that
and on the other hand that
where (1) and (3) follow by (24) and (25) and (2) follows by (23).
Second, by using that \(f^{AB}_{ij}(n)=0\) for any \(n \in \{0, \dots , N-2\}\) if \(i\in B, j\in {\mathbb {S}}\) and if \(i\in {\mathbb {S}}, j\in A\), we arrive at the following equality
Then, we show that
where in (4) we have applied the time-dependent current conservation for \(i \in C\), \({n \in \{1, \dots , N-2\}}\) and we have used that \({f^{AB}_{ij}(0)=0}\), and in (5) we have relabeled i, j and used that \({f^{AB}_{ij}(N-2)=0}\). As a consequence,
\(\square \)
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Helfmann, L., Ribera Borrell, E., Schütte, C. et al. Extending Transition Path Theory: Periodically Driven and Finite-Time Dynamics. J Nonlinear Sci 30, 3321–3366 (2020). https://doi.org/10.1007/s00332-020-09652-7
Keywords
 Transition path theory
 Markov chains
 Time-inhomogeneous process
 Periodic driving
 Finite-time dynamics
Mathematics Subject Classification
 60J22
 82C26
 60J45
 60J10