Abstract
For a network of discrete states with a periodically driven Markovian dynamics, we develop an inference scheme for an external observer who has access to some transitions. Based on waiting-time distributions between these transitions, the periodic probabilities of states connected by these observed transitions and their time-dependent transition rates can be inferred. Moreover, the smallest number of hidden transitions between accessible ones and some of their transition rates can be extracted. We prove and conjecture lower bounds on the total entropy production for such periodic stationary states. Even though our techniques are based on generalizations of known methods for steady states, we obtain original results for those as well.
1 Introduction
The framework of stochastic thermodynamics provides rules to describe small physical systems that are embedded into a thermal reservoir but remain out of equilibrium due to external driving [1,2,3]. If the relevant degrees of freedom can be described by a memoryless, i.e., Markovian, dynamics on a discrete set of states, the time evolution of the system is governed by the network structure and the transition rates between the states. In the case of periodically driven transition rates, such a dynamics relaxes into a periodic stationary state (PSS) [4,5,6,7], which, as a special case, becomes a non-equilibrium steady state (NESS) [8,9,10,11] for constant transition rates.
Since a model is fully specified only if all transition rates are known, practically relevant scenarios in which parts of the model remain hidden [12, 13] require methods to recover, e.g., hidden transition rates on the basis of observable data of a particular form. The combination of such methods with the physical constraints provided by the rules of stochastic thermodynamics comprises the field of thermodynamic inference [14]. With a focus on quantities that have a thermodynamic interpretation, recent works in the field obtain bounds on entropy production [15,16,17,18,19,20] or affinities [19, 21,22,23], which are complemented with techniques to recover topological information [24, 25] and speed limits [26,27,28].
Many of the methods discussed above apply to the case of time-independent driving and cannot straightforwardly be generalized to a PSS. For one of the standard methods of estimating entropy production, the thermodynamic uncertainty relation [29, 30], generalizations to PSSs exist [31,32,33,34], which require more input than their time-independent counterparts in general.
For the purpose of estimating entropy production, the usual rationale when given information about residence in states is to identify appropriate transitions or currents, since such time-antisymmetric data allow one to infer the entropy production. When observing transitions, one can ask the converse question: Can we infer information about states, which are time-symmetric quantities, from antisymmetric data like transitions? In this work, we address how observing transitions allows us to recover occupation probabilities of states if the system is in a PSS. In addition, we generalize and extend methods from [19] to the periodically driven case to infer transition rates and the number of hidden transitions between two observable ones. We also formulate and compare different lower bounds on the mean total entropy production. These entropy estimators are either proved or supported with strong numerical evidence.
The paper is structured as follows. In Sect. 2, we describe the setting and identify waiting-time distributions between observed transitions as the basic quantities we use to formulate our results. In Sect. 3, we investigate how these quantities can be used to infer kinetic information about the hidden part of a system in a PSS or NESS. Estimators for the mean entropy production are discussed in Sect. 4. We conclude and give an outlook on further work in Sect. 5.
2 General Setup
We consider a network of N states \(i\in \left\{ 1,\dots ,N\right\} \) that is periodically driven. The system is in state i(t) at time t and follows a stochastic description by allowing transitions between states sharing an edge in the graph. A transition from i to j happens instantaneously with rate \(k_{{ij}}(t)\), which has the periodicity of the driving. To ensure thermodynamic consistency, we assume the local detailed balance condition [1,2,3]

$$\begin{aligned} \frac{k_{ij}(t)}{k_{ji}(t)} = \textrm{e}^{F_i(t)-F_j(t)+f_{ij}(t)} \end{aligned}$$
(1)
at each link, i.e., for each transition and its reverse. The driving with period \(\mathcal {T}\) may change the free energy \(F_k(t)\) of states k or act as a non-conservative force along transitions from i to j with \(f_{{ij}}(t) = -f_{{ji}}(t)\). Energies in this work are given in units of thermal energy so that entropy production is dimensionless.
The dynamics of the probability \(p_{i}(t)\) to occupy state i at time t obeys the master equation

$$\begin{aligned} \partial _t p_i(t) = \sum _{j\ne i}\left[ p_j(t)k_{ji}(t) - p_i(t)k_{ij}(t)\right] . \end{aligned}$$
(2)
In the long-time limit \(t\rightarrow \infty \), these networks approach a periodic stationary state (PSS) \(p^\text {pss}_{i}(t)\). The transition rates and these probabilities \(p^\text {pss}_{i}(t)\) determine the mean entropy production rate in the PSS [1,2,3]

$$\begin{aligned} \left\langle \sigma \right\rangle _\text {pss} = \frac{1}{\mathcal {T}}\int _0^{\mathcal {T}}\!\textrm{d}t \sum _{i<j}\left[ p^\text {pss}_i(t)k_{ij}(t)-p^\text {pss}_j(t)k_{ji}(t)\right] \ln \frac{p^\text {pss}_i(t)k_{ij}(t)}{p^\text {pss}_j(t)k_{ji}(t)}. \end{aligned}$$
(3)
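As a concrete illustration of the relaxation toward a PSS, the following minimal sketch integrates the master equation for a two-state toy model of our own choosing (a sinusoidally modulated rate \(k_{12}(t)\), not a system studied in this paper) and verifies that the long-time solution repeats after one period:

```python
import math

# Toy two-state model (our own choice, not from the paper): with
# periodically driven rates, the master equation relaxes to a
# periodic stationary state p^pss(t) = p^pss(t + T).

T = 1.0                                        # driving period
def k12(t): return 1.0 + 0.5 * math.sin(2.0 * math.pi * t / T)
def k21(t): return 1.0

def evolve(p1, t0, t1, dt=1e-4):
    """Euler integration of dp1/dt = p2*k21(t) - p1*k12(t), p2 = 1 - p1."""
    t = t0
    while t < t1 - 1e-12:
        h = min(dt, t1 - t)
        p1 += h * ((1.0 - p1) * k21(t) - p1 * k12(t))
        t += h
    return p1

p1 = 1.0                                # arbitrary initial condition
p1 = evolve(p1, 0.0, 50 * T)            # long-time limit
p1_next = evolve(p1, 50 * T, 51 * T)    # propagate one more period
print(abs(p1 - p1_next) < 1e-6)         # periodicity of the PSS
```

The same relaxation scheme, applied to larger networks, is how we generate reference PSS data numerically.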
In this work, we assume that at least one pair of transitions of a Markov network in its PSS or NESS is observable for an external observer while other transitions and all states are hidden, i.e., not directly accessible for the observer. We illustrate this with graphs of two exemplary Markov networks in Fig. 1. States with an observable transition between them will be called boundary states. If two boundary states are connected with one hidden transition, these transitions and the boundary states form the boundary of the hidden network. Additionally, we assume the period \(\mathcal {T}\) of the driving to be known.
The task is to determine hidden quantities like the probabilities \(p^\text {pss}_{i}(t)\) of such partially accessible networks as well as to estimate the overall entropy production. In such a network, we can determine distributions of waiting times t between two successive observable transitions \(I=(ij)\) and \(J=({lm})\), whereas observing the full microscopic dynamics is impossible. These waiting-time distributions are of the form

$$\begin{aligned} \psi _{I\rightarrow J}(t\vert t_0) = \sum _{\gamma } \mathcal {P}\left[ \gamma _{I\rightarrow J}(t, t_0) \,\big \vert \, I,t_0\right] . \end{aligned}$$
(4)
They depend on the time \(t_0\in [0,\mathcal {T}]\) at which transition I occurs within one period of the PSS. Since an arbitrary number of hidden transitions occurs between I and J, the distributions are given by the sum of conditional path weights \(\mathcal {P}\left[ \gamma _{I\rightarrow J}(t, t_0) \big | I,t_0\right] \) corresponding to all microscopic trajectories \(\gamma _{I\rightarrow J}(t, t_0)\) that start directly after a transition I at \(t_0\) and end with the next observable transition J after waiting time t.
Furthermore, we define

$$\begin{aligned} \Psi _{I\rightarrow J}(t) = \int _0^{\mathcal {T}}\!\textrm{d}t_0\, p^\text {pss}(t_0\vert I)\, \psi _{I\rightarrow J}(t\vert t_0), \end{aligned}$$
(5)
where we use the conditional probability \(p^\text {pss}(t_0\vert I)\) to detect a particular transition I at a specific time \(t_0\in [0,\mathcal {T})\) within the period. Since using waiting times with uncorrelated \(t_0\), e.g., from observed trajectories with unknown \(\mathcal {T}\) in which we discard a sufficient number of successive waiting times between two recorded ones, effectively marginalizes \(t_0\) as in Equation (5), we can always obtain these waiting-time distributions from measured waiting times. In the special case of a NESS,

$$\begin{aligned} \psi _{I\rightarrow J}(t\vert t_0) = \Psi _{I\rightarrow J}(t) \equiv \psi _{I\rightarrow J}(t) \end{aligned}$$
(6)
holds for an arbitrarily assigned period \(\mathcal {T}\), which we emphasize by using \(\psi _{I\rightarrow J}(t)\).
3 Shortest Hidden Paths, Transition Rates and Occupation Probabilities
We first generalize methods to infer the number of hidden transitions in the shortest path between any two observable transitions from a NESS [19, 24] to a PSS. For any two transitions I, J for which the waiting-time distribution does not vanish, the number of hidden transitions \(M_{IJ}\) along the shortest path between I and J is given by

$$\begin{aligned} M_{IJ} = \lim _{t\rightarrow 0} \frac{\partial \ln \psi _{I\rightarrow J}(t\vert t_0)}{\partial \ln t} = \lim _{t\rightarrow 0} \frac{\partial \ln \Psi _{I\rightarrow J}(t)}{\partial \ln t}, \end{aligned}$$
(7)
which can be derived following an idea adopted in reference [19] for systems in a NESS. To sketch how waiting-time distributions relate to the number of transitions in the short-time limit, we first consider a trajectory that starts in state i at time \(t_0\) and ends in a neighboring state j at time \(t_0+t\). In the short-time limit \(t\rightarrow 0\), the probability of such a trajectory fulfills \(\lim _{t\rightarrow 0}p(j,t_0+t|i,t_0)/t= k_{ij}(t_0)\), which is the path weight of an infinitesimally short trajectory that contains only a single transition. Paths with multiple transitions contribute to higher-order terms in t and thus become irrelevant. In a second step, we use the same idea to compute the path weight of trajectories \(\gamma _{I\rightarrow J}(t, t_0)\) in the short-time limit. In expanded form, a concrete realization of \(\gamma _{I\rightarrow J}(t, t_0)\) that contributes to the sum in Equation (4) reads
where we assume that transition I ends in state \(i_0\) at time \(t_0+\tau _0\) and the concluding transition J connects states \(i_L\) and \(i_{L+1}\) after duration \(\tau _{L+1}=t\). With the explicit expression for the path weight [1] in a Markov network we calculate the probability of a particular sequence \(i_0\rightarrow \dots \rightarrow i_{L+1}\) as
which is given as an integral over waiting times. From definition (4), we identify waiting-time distributions as the sum of these probabilities over all possible sequences. By inserting Equation (9) into the definition (4), we find
where we assume that the shortest path of the form (8) has \(L=M_{IJ}\) hidden transitions. For systems in a PSS, the waiting-time distributions \(\psi _{I\rightarrow J}(t\vert t_0)\) and \(\Psi _{I\rightarrow J}(t)\) are interchangeable since both of their short-time limits are proportional to \(t^{M_{IJ}}\), i.e., are dominated by the shortest path between I and J. Since this shortest path between I and J consists of the same number of transitions, no matter at what time \(t_0\) a trajectory starts, the expression in the middle of Equation (7) is independent of \(t_0\). In the example of Fig. 1a, with \(I=+\) and \(J=+\), we get \(M_{++}=2\) since the two transitions (23) and (31) form the corresponding shortest hidden path. For the graph shown in Fig. 1b with \(I=L_-\) and \(J=R_+\), we find \(M_{L_-R_+}=1\) since (16) is the transition between \(L_-\) and \(R_+\).
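The short-time scaling behind Equation (7) can be illustrated numerically. The sketch below uses a toy configuration of our own choosing: one hidden state between I and J, traversed with assumed constant rates r1 and r2, so that \(\psi (t)\propto t\) for \(t\rightarrow 0\) and the logarithmic slope recovers \(M_{IJ}=1\):

```python
import math

# Toy setting for the short-time scaling of Eq. (7): one hidden state
# between I and J, traversed with assumed rates r1 and r2, gives a
# waiting-time density psi(t) ~ r1*r2*t for t -> 0, i.e. M_IJ = 1.

r1, r2 = 2.0, 5.0
def psi(t):
    """Convolution of two exponential waiting times (constant rates)."""
    return r1 * r2 * (math.exp(-r1 * t) - math.exp(-r2 * t)) / (r2 - r1)

# M_IJ = lim_{t->0} d ln psi / d ln t, here via a central finite difference
t, eps = 1e-6, 1e-8
M = (math.log(psi(t + eps)) - math.log(psi(t - eps))) * t / (2.0 * eps)
print(round(M))  # number of hidden transitions on the shortest path
```

In practice, \(\psi \) would be a histogram of measured waiting times, and the slope is read off a log-log plot at small t.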
The rates of observable transitions can be recovered from waiting-time distributions. Given a transition \(J=({lm})\) and its reverse \(\tilde{J}=({ml})\), we obtain the corresponding rate \(k_{{lm}}\) through

$$\begin{aligned} k_{lm}(t_0) = \lim _{t\rightarrow 0} \psi _{\tilde{J}\rightarrow J}(t\vert t_0), \end{aligned}$$
(11)
where we use that the short-time limit is dominated by the path with the fewest transitions, which, in this case, is \(\tilde{J}\) followed by J without any hidden intermediate transitions. Moreover, the state of the system is known immediately after the initial transition \(\tilde{J}\) at time \(t_0\) within the period, which leads to \(p_{l}(t=0\vert \tilde{J},t_0)=1\).
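As a hedged illustration of this rate inference, the following Monte Carlo sketch simulates a toy unicyclic three-state network with rates of our own choosing (not a system from this paper), in which only the transitions between states 1 and 2 are observable, and recovers \(k_{12}\) from the short-time behavior of the waiting-time distribution after the reversed transition:

```python
import random

# Toy unicyclic 3-state NESS (rates of our own choosing); only the
# transitions between states 1 and 2 are observable. In the short-time
# limit, psi_{(21)->(12)}(t) -> k12, which we exploit to infer k12.

rates = {(1, 2): 2.0, (2, 1): 1.0, (2, 3): 1.5, (3, 2): 0.5,
         (3, 1): 1.0, (1, 3): 0.7}
random.seed(1)

def gillespie(nsteps, start=1):
    """Yield (time, i, j) for every jump of the Markov process."""
    t, i = 0.0, start
    for _ in range(nsteps):
        out = [(j, r) for (a, j), r in rates.items() if a == i]
        ktot = sum(r for _, r in out)
        t += random.expovariate(ktot)
        x, acc = random.random() * ktot, 0.0
        for j, r in out:
            acc += r
            if x <= acc:
                break
        yield t, i, j
        i = j

observable = {(1, 2), (2, 1)}
waits, last = [], None        # last = (time, type) of last observed jump
for t, i, j in gillespie(400_000):
    if (i, j) in observable:
        if last is not None and last[1] == (2, 1):
            waits.append((t - last[0], (i, j)))
        last = (t, (i, j))

dt = 0.01                     # small histogram bin approximating t -> 0
k12_est = sum(1 for w, typ in waits
              if typ == (1, 2) and w < dt) / (len(waits) * dt)
print(f"inferred k12 = {k12_est:.2f} (true value 2.0)")
```

The finite bin width introduces a small systematic bias of order \(dt\), which vanishes in the limit of fine bins and long trajectories.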
Further transition rates are inferable for any combination of transitions I and J with \(M_{IJ} = 1\), i.e., whenever the shortest path between \(I=(ij)\) and \(J=({lm})\) consists of only one hidden transition. For \(t\rightarrow 0\), its transition rate \(k_{j{l}}(t_0)\) then follows from a Taylor expansion as

$$\begin{aligned} k_{jl}(t_0) = \lim _{t\rightarrow 0} \frac{\psi _{I\rightarrow J}(t\vert t_0)}{k_{lm}(t_0)\,t}, \end{aligned}$$
(12)
with \(k_{{l}j}(t_0)\) following analogously. As an example, we get the transition rates \(k_{14},\,k_{16},\,k_{41}\) and \(k_{61}\) for the network shown in Fig. 1b even though the related links are hidden.
Occupation probabilities of boundary states of the hidden network can be inferred as follows. During a measurement of length \(M\mathcal {T}\) with large \(M\in \mathbb {N}\), we count the number \(N_I(t_0\le \tau \le t_0+\Delta t)\) of transitions \(I=(ij)\) that occur during the infinitesimal interval \([t_0,t_0+\Delta t]\), where we map all times at which transitions happen into one period of the PSS using a modulo operation. We, therefore, obtain the rate of transitions I at time \(t_0\in [0,\mathcal {T})\) within one period of the PSS as

$$\begin{aligned} n_I(t_0) = \lim _{M\rightarrow \infty }\frac{N_I(t_0\le \tau \le t_0+\Delta t)}{M\Delta t} = p^\text {pss}_i(t_0)\,k_{ij}(t_0). \end{aligned}$$
(13)
As the transition rate \(k_{ij}(t_0)\) can be determined as described above, we can thus infer \(p_i^\text {pss}(t_0)\) from experimentally accessible data. Knowing the occupation probabilities of all boundary states of the hidden network allows us to calculate instantaneous currents along single transitions between them using the corresponding inferred transition rates.
These results can be specialized to NESSs, where, to the best of our knowledge, they have not been reported yet either. In this special case, dropping the irrelevant \(t_0\) in Equations (11) and (12) leads to constant transition rates. Moreover, in a NESS, the mean rate of transitions I, \(\left\langle n_I\right\rangle _\text {ss}\), can be obtained directly from the total number \(N_{I,T}\) of observed transitions I along a measured trajectory of length T. Inferring the occupation probability \(p_i^\text {ss}\) then only requires dividing by the already inferred transition rate \(k_{ij}\), i.e.,

$$\begin{aligned} p_i^\text {ss} = \frac{\left\langle n_I\right\rangle _\text {ss}}{k_{ij}} = \frac{N_{I,T}}{T\,k_{ij}}. \end{aligned}$$
(14)
Equations (13) and (14) show how to infer occupation probabilities of boundary states of the hidden network. Given the inferable quantities \(p_i^\text {pss}(t_0)\) or \(p_i^\text {ss}\), we can calculate how much probability rests on states in the network beyond the so-identified boundary states. As an example, Fig. 2b and c illustrate the probability of finding the network shown in Fig. 2a in its boundary states rather than in states within the interior of the hidden network. In both figures, different sets of observable transitions lead to different boundaries of the hidden network. Figure 2b displays sums of probabilities for these systems in a PSS, while Fig. 2c gives an example for NESSs. Each sum of inferable occupation probabilities quantifies the probability of finding the system in the boundary of the hidden network. The closer this sum is to one, the less relevant the inaccessible states in the interior of the hidden network are for the dynamics.
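A minimal numerical check of the NESS inference of Equation (14), again with a toy three-state ring and rates of our own choosing, compares the occupation of state 1 inferred from the observed transition count with the, normally hidden, fraction of time the trajectory actually spends there:

```python
import random

# Toy three-state ring in a NESS (rates of our own choosing). An observer
# counting only the transitions (12) infers p_1 = N_{I,T} / (T * k12);
# we compare this with the actual fraction of time spent in state 1.

rates = {(1, 2): 2.0, (2, 1): 1.0, (2, 3): 1.5, (3, 2): 0.5,
         (3, 1): 1.0, (1, 3): 0.7}
random.seed(2)

t, i = 0.0, 1
time_in_1, n12 = 0.0, 0
for _ in range(300_000):
    out = [(j, r) for (a, j), r in rates.items() if a == i]
    ktot = sum(r for _, r in out)
    dwell = random.expovariate(ktot)
    if i == 1:
        time_in_1 += dwell   # hidden information, used only for the check
    t += dwell
    x, acc = random.random() * ktot, 0.0
    for j, r in out:
        acc += r
        if x <= acc:
            break
    if (i, j) == (1, 2):
        n12 += 1             # the only datum the observer needs to count
    i = j

p1_inferred = n12 / (t * rates[(1, 2)])
p1_true = time_in_1 / t
print(abs(p1_inferred - p1_true) < 0.01)
```

Here \(k_{12}\) is taken as known; in a fully partially accessible setting it would itself be inferred from waiting times as described above.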
4 Three Estimators for Entropy Production in PSSs
In this section, we estimate irreversibility via the entropy production rate in a PSS. We have seen above how waiting-time distributions contain information on the hidden dynamics of a network. It therefore seems sensible to expect that these quantities can be used as entropy estimators to infer irreversibility in both the observable and the hidden parts of the network.
For a trajectory \(\Gamma \) of length T, reversing the driving protocol leads to transition rates \(\tilde{k}_{ij}(t)= k_{ij}(T-t)\). The corresponding waiting-time distributions \(\tilde{\psi }_{\tilde{J}\rightarrow \tilde{I}}(t\vert t_0+t)\) for reversed paths \(\tilde{J}\rightarrow \tilde{I}\) are the time-reversed versions of \(\psi _{I\rightarrow J}(t\vert t_0)\). Once waiting-time distributions of the form \(\tilde{\psi }_{\tilde{J}\rightarrow \tilde{I}}(t\vert t_0+t)\) have been determined, the fluctuation relation

$$\begin{aligned} \frac{\mathcal {P}\left[ \Gamma \right] }{\tilde{\mathcal {P}}\left[ \widetilde{\Gamma }\right] } = \textrm{e}^{\Delta \sigma \left[ \Gamma \right] } \end{aligned}$$
(15)
for a trajectory \(\Gamma \) of length T and its time-reverse \(\widetilde{\Gamma }\) allows us to derive an estimator \(\left\langle \hat{\sigma }_\psi \right\rangle _\text {pss}\) that fulfills

$$\begin{aligned} \left\langle \hat{\sigma }_\psi \right\rangle _\text {pss} \le \left\langle \sigma \right\rangle _\text {pss}. \end{aligned}$$
(16)
Here, the index \(\psi \) of the estimator highlights the type of waiting-time distribution that enters its expression in the above inequality. We prove this inequality in Appendix B as a generalization of the trajectory-based entropy estimator \(\left\langle \hat{\sigma }\right\rangle \) introduced in [19] for NESSs.
For time-symmetric driving, estimating \(\left\langle \sigma \right\rangle _\text {pss}\) with \(\left\langle \hat{\sigma }_\psi \right\rangle _\text {pss}\) does not require reversing the driving protocol in an experiment. In this case, \(\tilde{\psi }_{\tilde{J}\rightarrow \tilde{I}}(t\vert t_0+t)\) results from \(\psi _{I\rightarrow J}(t\vert t_0)\) by exploiting the symmetry \(k_{ij}(t_*+t_0)=k_{ij}(t_*-t_0)\) of the protocol for all transitions (ij) after finding \(t_*\in [0,\mathcal {T})\). In the next paragraphs, we discuss experimentally accessible entropy estimators that do not require waiting-time distributions of the time-reversed process, regardless of symmetry properties of the driving.
We have performed extensive numerical computations for randomly generated, periodically driven Markov networks corresponding to different underlying graphs to compute
where the index \(\Psi \) indicates the type of waiting-time distribution used. Here, \(\left\langle n_I\right\rangle _\text {pss}\) is the mean of \(n_I(t_0)\) over one period \(t_0\in [0,\mathcal {T}]\), obtained from measured data. For over \(10^5\) randomly chosen systems from unicyclic graphs of three states, diamond-shaped graphs as displayed in Fig. 1a and more complex underlying graphs, the inequalities
hold true as shown in the scatter plots in Fig. 3a, c and e. Therefore, we conjecture inequality (18) to hold true for periodically driven Markov networks, so that \(\left\langle \hat{\sigma }_\Psi \right\rangle _\text {pss}\) is a thermodynamically consistent estimator of \(\left\langle \sigma \right\rangle _\text {pss}\).
Furthermore, transition rates and occupation probabilities that are inferred as described in Sect. 3 allow us to prove another lower bound on the entropy production rate of a Markov network in a PSS which complements the previous two. The estimator

$$\begin{aligned} \left\langle \hat{\sigma }_{pk}\right\rangle _\text {pss} = \frac{1}{\mathcal {T}}\int _0^{\mathcal {T}}\!\textrm{d}t \sum _{(ij)\in \mathcal {V}}\left[ p^\text {pss}_i(t)k_{ij}(t)-p^\text {pss}_j(t)k_{ji}(t)\right] \ln \frac{p^\text {pss}_i(t)k_{ij}(t)}{p^\text {pss}_j(t)k_{ji}(t)} \end{aligned}$$
(19)
adds up the contributions to entropy production along transitions of the set \(\mathcal {V}\) containing all transitions that are either observable or within the boundary of the hidden network. As \(\left\langle \hat{\sigma }_{pk}\right\rangle _\text {pss}\) solely depends on inferable probabilities and rates, its index is pk. Since each of the terms in Equation (19) is non-negative for all t and part of \(\left\langle \sigma \right\rangle _\text {pss}\) as given in Equation (3), \(\left\langle \hat{\sigma }_{pk}\right\rangle _\text {pss}\) constitutes a lower bound on the total entropy production rate of the system. The bound is tight if the set \(\mathcal {V}\) comprises all edges along which the current does not vanish identically. Put differently, \(\left\langle \hat{\sigma }_{pk}\right\rangle _\text {pss}= \left\langle \sigma \right\rangle _\text {pss}\) if and only if all edges with non-vanishing current are either observable or within the boundary of the hidden network. This bound may often be less tight than the conjectured bound (17) for periodically driven Markov networks though this ordering does not hold in general as shown in Fig. 3b, d and f.
In the special case of a NESS, the last bound (19) acquires the familiar form

$$\begin{aligned} \left\langle \hat{\sigma }_{pk}\right\rangle _\text {ss} = \sum _{(ij)\in \mathcal {V}}\left[ p^\text {ss}_i k_{ij}-p^\text {ss}_j k_{ji}\right] \ln \frac{p^\text {ss}_i k_{ij}}{p^\text {ss}_j k_{ji}}. \end{aligned}$$
(20)
Crucially, the entropy estimator \(\left\langle \hat{\sigma }_{pk}\right\rangle _\text {pss}\) is based on occupation probabilities and transition rates inferred from distributions of waiting times between observable transitions as described in Sect. 3. Although expression (20) shows superficial similarities to the main result of reference [35] interpreted as an entropy estimator, our estimator \(\left\langle \hat{\sigma }_{pk}\right\rangle _\text {pss}\) differs in two ways. First, \(\left\langle \hat{\sigma }_{pk}\right\rangle _\text {pss}\) can be used for partially accessible Markov networks in a PSS, which include systems in a NESS as special cases. Second, the sum in Equations (19) and (20) includes contributions of both observable transitions and transitions within the boundary of the hidden network. These additional contributions allow for a more accurate estimate of the entropy production rate.
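The logic of this partial-sum bound can be sketched numerically. In the toy NESS below (a three-state ring with rates of our own choosing, not one of the networks studied above), each edge contributes a non-negative term to the entropy production rate, so restricting the sum to an accessible edge set \(\mathcal {V}\) can only underestimate the total:

```python
import math

# Toy three-state ring in a NESS (rates of our own choosing). Each edge
# contributes a non-negative term to the entropy production rate, so a
# sum restricted to an accessible edge set V is a lower bound on sigma.

rates = {(1, 2): 2.0, (2, 1): 1.0, (2, 3): 1.5, (3, 2): 0.5,
         (3, 1): 1.0, (1, 3): 0.7}
states = [1, 2, 3]

# stationary distribution via relaxation of the master equation
p = {s: 1.0 / 3.0 for s in states}
for _ in range(100_000):
    dp = {s: 0.0 for s in states}
    for (i, j), k in rates.items():
        dp[j] += p[i] * k
        dp[i] -= p[i] * k
    for s in states:
        p[s] += 1e-3 * dp[s]

def edge_sigma(i, j):
    """Entropy production rate of a single link, counted once per edge."""
    fwd, bwd = p[i] * rates[(i, j)], p[j] * rates[(j, i)]
    return (fwd - bwd) * math.log(fwd / bwd)

total = sum(edge_sigma(i, j) for (i, j) in [(1, 2), (2, 3), (3, 1)])
partial = edge_sigma(1, 2)   # V contains only the observable link (12)
print(0.0 <= partial <= total)
```

The gap between `partial` and `total` is exactly the entropy production on the edges outside \(\mathcal {V}\), which is why enlarging \(\mathcal {V}\) by inferred boundary links tightens the bound.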
5 Concluding Perspectives
5.1 Comparisons with Known Entropy Estimators
In contrast to the steady-state case, only a few methods bound entropy production in partially accessible Markov networks that are driven periodically. One class of such estimators relies on measuring currents and their precision in a similar fashion as the thermodynamic uncertainty relation [29, 30]. A variant of the thermodynamic uncertainty relation proved in reference [36] makes use of the response of currents to a change in the speed of the protocol that drives the system out of equilibrium. Since this method requires control over the driving speed, its prerequisites are more restrictive than those of our approach. Similar methods exist that estimate quantities related to entropy production in a broader sense. The result in reference [37] yields an estimate on the combined entropy production of a time-dependently driven process and its time-reverse, whereas reference [32] describes a method to estimate the entropy production of a related auxiliary process.
In addition to the class of methods above, which rely on the measurement of currents, we can identify a second class of methods in which the entropy estimator takes the form of a Kullback-Leibler divergence, which includes our proven bounds (16) and (19) but not the conjectured one (17). More specifically, two of these methods require access to the time-reversed process and use waiting-time distributions. The recent reference [38] yields a proven and a conjectured lower bound on the entropy production rate \(\left\langle \sigma \right\rangle _\text {pss}\) with the aim of utilizing waiting-time distributions that are independent of the time that is called \(t_0\) in the present work. Since the proven estimator of reference [38] uses distributions with marginalized \(t_0\), it is at best as tight as \(\left\langle \hat{\sigma }_\psi \right\rangle _\text {pss}\), which makes use of the full distributions. A similar relation applies to the conjectured estimator in reference [38], which requires less data than \(\left\langle \hat{\sigma }_\psi \right\rangle _\text {pss}\) since it is independent of \(t_0\). However, obtaining this conjectured bound still requires measurements in the time-reversed system and is therefore more restrictive than our conjectured bound \(\left\langle \hat{\sigma }_\Psi \right\rangle _\text {pss}\). In a more general setup, we could also allow for additional observations during the time interval between two visible transitions. The method reported in reference [39] allows us to find a stronger lower bound on \(\left\langle \sigma \right\rangle _\text {pss}\) by utilizing such additional, potentially even non-Markovian data. However, kinetic results like particular transition rates or, in particular, the entropy estimator (19) are inaccessible in this more general setup.
Finally, the technique of reference [40], developed for the case in which one is able to count transitions in time-dependent time series, can also be applied to partially accessible, periodically driven Markov networks. Within this setup, the counting yields empirical currents of visible transitions that are used to obtain a lower bound on \(\left\langle \sigma \right\rangle _\text {pss}\) using machine learning and multiple time series from repeated measurements. Our bound \(\left\langle \hat{\sigma }_{pk}\right\rangle _\text {pss}\) is tighter than the lower bound established by the method of reference [40] because it contains not only the entropy production of all visible links but also that of some hidden transitions between visible ones.
5.2 Summary and Outlook
In this paper, we have introduced inference methods based on distributions of waiting times between consecutive observed transitions in partially accessible, periodically driven Markov networks. Successive use of these methods yields information about the kinetics of such a Markov network as well as its underlying topology, including hidden parts, as summarized in Fig. 4.
We have first shown how to infer the number of hidden transitions along the shortest path between two observable transitions. We have then derived methods to infer transition rates between boundary states of the hidden network. Occupation probabilities of these boundary states then follow by discerning when the observable transitions happen within one period. Consequently, we find the total probability resting on the hidden states in the interior of the hidden network.
In addition, we have presented three entropy estimators enabling us to estimate the irreversibility of a driven Markov network based on observed transitions during a partially accessible dynamics. The first and third one are proven to be lower bounds on the mean entropy production rate, whereas we conjecture the second estimator to have this property too. Its proof remains an open theoretical challenge. The second and third estimators have the advantage of not requiring control of the driving, since its time reversal is not needed. Furthermore, we emphasize that even for the simpler NESS most of these results are original as well.
Finally, it will be interesting to explore whether and how such an approach can be adapted to continuous systems described by a Langevin dynamics. We also hope that our non-invasive method yielding time-dependent transition rates and occupation probabilities will be applied to experimental data of periodically driven small systems.
Data availability
Data generated during this study are available from the authors upon reasonable request.
References
Seifert, U.: Stochastic thermodynamics, fluctuation theorems, and molecular machines. Rep. Prog. Phys. 75, 126001 (2012). https://doi.org/10.1088/0034-4885/75/12/126001
Peliti, L., Pigolotti, S.: Stochastic Thermodynamics. An Introduction. Princeton Univ. Press, Princeton and Oxford (2021)
Shiraishi, N.: An Introduction to Stochastic Thermodynamics. Fundamental Theories of Physics. Springer, Singapore (2023). https://doi.org/10.1007/978-981-19-8186-9
Rahav, S., Horowitz, J., Jarzynski, C.: Directed flow in nonadiabatic stochastic pumps. Phys. Rev. Lett. 101, 140602 (2008). https://doi.org/10.1103/PhysRevLett.101.140602
Chernyak, V.Y., Sinitsyn, N.A.: Pumping restriction theorem for stochastic networks. Phys. Rev. Lett. 101, 160601 (2008). https://doi.org/10.1103/PhysRevLett.101.160601
Raz, O., Subasi, Y., Jarzynski, C.: Mimicking nonequilibrium steady states with time-periodic driving. Phys. Rev. X 6, 021022 (2016). https://doi.org/10.1103/PhysRevX.6.021022
Rotskoff, G.M.: Mapping current fluctuations of stochastic pumps to nonequilibrium steady states. Phys. Rev. E 95, 030101 (2017). https://doi.org/10.1103/PhysRevE.95.030101
Stigler, J., Ziegler, F., Gieseke, A., Gebhardt, J.C.M., Rief, M.: The complex folding network of single calmodulin molecules. Science 334(6055), 512–516 (2011). https://doi.org/10.1126/science.1207598
Roldan, E., Parrondo, J.M.R.: Estimating dissipation from single stationary trajectories. Phys. Rev. Lett. 105, 150607 (2010). https://doi.org/10.1103/PhysRevLett.105.150607
Muy, S., Kundu, A., Lacoste, D.: Non-invasive estimation of dissipation from non-equilibrium fluctuations in chemical reactions. The Journal of Chemical Physics 139(12), 124109 (2013). https://doi.org/10.1063/1.4821760
Ge, H., Qian, M., Qian, H.: Stochastic theory of nonequilibrium steady states. Part II: Applications in chemical biophysics. Physics Reports 510(3), 87–118 (2012). https://doi.org/10.1016/j.physrep.2011.09.001
Esposito, M.: Stochastic thermodynamics under coarse-graining. Phys. Rev. E 85, 041125 (2012). https://doi.org/10.1103/PhysRevE.85.041125
Ariga, T., Tomishige, M., Mizuno, D.: Nonequilibrium energetics of molecular motor kinesin. Phys. Rev. Lett. 121, 218101 (2018). https://doi.org/10.1103/PhysRevLett.121.218101
Seifert, U.: From stochastic thermodynamics to thermodynamic inference. Ann. Rev. Cond. Mat. Phys. 10(1), 171–192 (2019). https://doi.org/10.1146/annurev-conmatphys-031218-013554
Martínez, I.A., Bisker, G., Horowitz, J.M., Parrondo, J.M.R.: Inferring broken detailed balance in the absence of observable currents. Nature Communications 10, 3542 (2019). https://doi.org/10.1038/s41467-019-11051-w
Dechant, A., Sasa, S.I.: Improving thermodynamic bounds using correlations. Phys. Rev. X 11, 041061 (2021). https://doi.org/10.1103/PhysRevX.11.041061
Skinner, D.J., Dunkel, J.: Improved bounds on entropy production in living systems. Proc. Natl. Acad. Sci. USA (2021). https://doi.org/10.1073/pnas.2024300118
Harunari, P.E., Dutta, A., Polettini, M., Roldan, E.: What to learn from a few visible transitions’ statistics? Phys. Rev. X 12, 041026 (2022). https://doi.org/10.1103/PhysRevX.12.041026
van der Meer, J., Ertel, B., Seifert, U.: Thermodynamic inference in partially accessible Markov networks: A unifying perspective from transition-based waiting time distributions. Phys. Rev. X 12, 031025 (2022). https://doi.org/10.1103/PhysRevX.12.031025
van der Meer, J., Degünther, J., Seifert, U.: Time-resolved statistics of snippets as general framework for model-free entropy estimators. Phys. Rev. Lett. 130, 257101 (2023). https://doi.org/10.1103/PhysRevLett.130.257101
Ohga, N., Ito, S., Kolchinsky, A.: Thermodynamic bound on the asymmetry of cross-correlations. Phys. Rev. Lett. 131, 077101 (2023). https://doi.org/10.1103/PhysRevLett.131.077101
Liang, S., Pigolotti, S.: Thermodynamic bounds on time-reversal asymmetry. Phys. Rev. E 108, L062101 (2023). https://doi.org/10.1103/PhysRevE.108.L062101
Degünther, J., van der Meer, J., Seifert, U.: Fluctuating entropy production on the coarse-grained level: Inference and localization of irreversibility. Phys. Rev. Research 6, 023175 (2024). https://doi.org/10.1103/PhysRevResearch.6.023175
Li, X., Kolomeisky, A.B.: Mechanisms and topology determination of complex chemical and biological network systems from first-passage theoretical approach. The Journal of Chemical Physics 139(14), 144106 (2013). https://doi.org/10.1063/1.4824392
Van Vu, T., Saito, K.: Topological speed limit. Phys. Rev. Lett. 130, 010402 (2023). https://doi.org/10.1103/PhysRevLett.130.010402
Ito, S., Dechant, A.: Stochastic time evolution, information geometry, and the Cramér-Rao bound. Phys. Rev. X 10, 021056 (2020). https://doi.org/10.1103/PhysRevX.10.021056
Shiraishi, N., Funo, K., Saito, K.: Speed limit for classical stochastic processes. Phys. Rev. Lett. 121, 070601 (2018). https://doi.org/10.1103/PhysRevLett.121.070601
Dechant, A., Garnier-Brun, J., Sasa, S.-I.: Thermodynamic bounds on correlation times. Phys. Rev. Lett. 131, 167101 (2023). https://doi.org/10.1103/PhysRevLett.131.167101
Barato, A.C., Seifert, U.: Thermodynamic uncertainty relation for biomolecular processes. Phys. Rev. Lett. 114, 158101 (2015). https://doi.org/10.1103/PhysRevLett.114.158101
Gingrich, T.R., Horowitz, J.M., Perunov, N., England, J.L.: Dissipation bounds all steady-state current fluctuations. Phys. Rev. Lett. 116, 120601 (2016). https://doi.org/10.1103/PhysRevLett.116.120601
Proesmans, K., van den Broeck, C.: Discrete-time thermodynamic uncertainty relation. EPL 119(2), 20001 (2017). https://doi.org/10.1209/0295-5075/119/20001
Barato, A.C., Chetrite, R., Faggionato, A., Gabrielli, D.: Bounds on current fluctuations in periodically driven systems. New J. Phys. 20, 103023 (2018). https://doi.org/10.1088/1367-2630/aae512
Koyuk, T., Seifert, U.: Operationally accessible bounds on fluctuations and entropy production in periodically driven systems. Phys. Rev. Lett. 122(23), 230601 (2019). https://doi.org/10.1103/PhysRevLett.122.230601
Barato, A.C., Chetrite, R., Faggionato, A., Gabrielli, D.: A unifying picture of generalized thermodynamic uncertainty relations*. Journal of Statistical Mechanics: Theory and Experiment 2019(8), 084017 (2019). https://doi.org/10.1088/1742-5468/ab3457
Shiraishi, N., Sagawa, T.: Fluctuation theorem for partially masked nonequilibrium dynamics. Phys. Rev. E 91, 012130 (2015). https://doi.org/10.1103/PhysRevE.91.012130
Koyuk, T., Seifert, U.: Thermodynamic uncertainty relation for time-dependent driving. Phys. Rev. Lett. 125, 260604 (2020). https://doi.org/10.1103/PhysRevLett.125.260604
Proesmans, K., Horowitz, J.M.: Hysteretic thermodynamic uncertainty relation for systems with broken time-reversal symmetry. J. Stat. Mech.: Theor. Exp. 2019(5), 054005 (2019)
Harunari, P.E., Fiore, C.E., Barato, A.C.: Inference of entropy production for periodically driven systems (2024) arXiv:2406.12792 [cond-mat.stat-mech]
Degünther, J., van der Meer, J., Seifert, U.: Unraveling the where and when of coarse-grained entropy production: General theory meets single-molecule experiments. Proc. Natl. Acad. Sci. USA, in press (2024). arXiv:2405.18316 [cond-mat.stat-mech]
Otsubo, S., Manikandan, S.K., Sagawa, T., Krishnamurthy, S.: Estimating time-dependent entropy production from non-equilibrium trajectories. Commun. Phys. 5(1), 11 (2022). https://doi.org/10.1038/s42005-021-00787-x
Sekimoto, K.: Derivation of the first passage time distribution for Markovian process on discrete network (2022). arXiv:2110.02216 [cond-mat.stat-mech]
Gomez-Marin, A., Parrondo, J.M.R., Van den Broeck, C.: Lower bounds on dissipation upon coarse-graining. Phys. Rev. E 78, 011107 (2008). https://doi.org/10.1103/PhysRevE.78.011107
Cover, T.M., Thomas, J.A.: Elements of Information Theory. Telecommunications and signal processing. Wiley, Hoboken (2006)
Funding
Open Access funding enabled and organized by Projekt DEAL.
Ethics declarations
Conflict of interest
The authors have no relevant financial or non-financial interests to disclose.
Additional information
Communicated by Keiji Saito.
Appendices
Appendix A: Parameters Used for Numerical Data
A.1 Parameters for Data Shown in Fig. 2
For the network shown in Fig. 2a, the PSS is generated with transition rates
Therein, we set all \(\kappa _{ij}=1\) as well as \(f_{12}=f_{23}=f_{34}=f_{41}=2\) and \(f_{45}=f_{56}=f_{67}=f_{73}=20/3\). Furthermore, we choose the free energies
and solve the master equation (2) of the network for the occupation probabilities \(p_i^\text {pss}(t)\).
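As an illustration of how such a PSS emerges, the master equation can be integrated stroboscopically until the occupation probabilities repeat after one period. The following sketch uses a hypothetical three-state unicycle with one periodically modulated rate — illustrative values, not the network or parameters of Fig. 2:

```python
import numpy as np

PERIOD = 1.0
STEPS = 1000            # Euler steps per period
DT = PERIOD / STEPS

def rate_matrix(t):
    """k[i, j]: transition rate from state i to state j (hypothetical values)."""
    k = np.zeros((3, 3))
    k[0, 1] = 1.0 + 0.5 * np.sin(2 * np.pi * t / PERIOD)  # periodic driving
    k[1, 0] = 0.3
    k[1, 2] = 2.0
    k[2, 1] = 0.4
    k[2, 0] = 1.5
    k[0, 2] = 0.2
    return k

def evolve_one_period(p):
    """Integrate dp_j/dt = sum_i p_i k_ij - p_j sum_i k_ji over one period."""
    for n in range(STEPS):
        k = rate_matrix(n * DT)
        p = p + DT * (p @ k - p * k.sum(axis=1))
    return p

p = np.array([1.0, 0.0, 0.0])
for _ in range(50):      # relax over many periods
    p_prev = p
    p = evolve_one_period(p)

# in the PSS, the stroboscopic map has converged: p(t + T) = p(t)
print(np.allclose(p, p_prev, atol=1e-9))
```

The explicit Euler scheme conserves normalization exactly, since gain and loss terms sum to zero; convergence of the stroboscopic map signals that the PSS has been reached.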
The NESSs for this network as shown in Fig. 2c are generated with the non-zero transition rates \(k_{12} = 1.7,\ k_{14} = 0.4,\ k_{21} = 0.6,\ k_{23} = 3.5,\ k_{32} = 0.3,\ k_{34} = 3.3,\ k_{37} = 0.02,\ k_{41} = 5.7,\ k_{43} = 0.3,\ k_{54} = 0.1,\ k_{56} = 0.7,\ k_{65} = 0.2,\ k_{67} = 0.8,\ k_{73} = 4.6,\ k_{76} = 0.05\) and \(k_{45} \in [0.2,75]\).
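For a NESS, the stationary occupation probabilities follow directly from the generator built from these rates. A minimal sketch, using the rates listed above with \(k_{45}=1.0\) picked from the stated interval:

```python
import numpy as np

# non-zero rates k_ij (state i -> state j) from the NESS network;
# k45 = 1.0 is one choice from the stated interval [0.2, 75]
rates = {(1, 2): 1.7, (1, 4): 0.4, (2, 1): 0.6, (2, 3): 3.5, (3, 2): 0.3,
         (3, 4): 3.3, (3, 7): 0.02, (4, 1): 5.7, (4, 3): 0.3, (4, 5): 1.0,
         (5, 4): 0.1, (5, 6): 0.7, (6, 5): 0.2, (6, 7): 0.8, (7, 3): 4.6,
         (7, 6): 0.05}

n = 7
L = np.zeros((n, n))
for (i, j), k in rates.items():
    L[i - 1, j - 1] = k
np.fill_diagonal(L, -L.sum(axis=1))   # generator: rows sum to zero

# stationary condition p^T L = 0, supplemented by normalization sum(p) = 1
A = np.vstack([L.T, np.ones(n)])
b = np.zeros(n + 1)
b[-1] = 1.0
p, *_ = np.linalg.lstsq(A, b, rcond=None)

print(np.allclose(p @ L, 0.0, atol=1e-10), round(p.sum(), 12))
```

Since the network is irreducible, the stationary distribution is unique and strictly positive.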
A.2 Parameters for Data Shown in Fig. 3
For Fig. 3a and b, we have used diamond-shaped networks as shown in Fig. 1a but with observable transitions \(1\leftrightarrow 2\) and \(1\leftrightarrow 4\). For Fig. 3c and d, we have used unicyclic three-state systems with observable transitions \(1\leftrightarrow 2\). All transition rates are parameterized as in Equation (A1) with \(\kappa _{ij}=1\) unless otherwise specified.
All diamond-shaped systems are characterized by \(f_{12}=f_{14}=f_{23}=f_{31}=f_{43}=2\). The free energies of the states in each simulated diamond network are given by
where constant energies \(F_{ci}\), energy amplitudes \(F_{ai}\) and the angles \(\varphi _{0,i}\) are randomly picked from normal distributions with mean and variance as given in Table 1. For \(j\in \lbrace 1,\dots 4\rbrace \), normally distributed \(r_j\sim \mathcal {N}(0,1)\) define
for the data sets plotted in Fig. 3a and in Fig. 3b, respectively, and for both data sets
With the exception \(k_{13} = 1\) and \(k_{31} = \exp \left[ -F_1(t)+F_3(t)+ f_{31}\right] \), the transition rates of the three-state networks used for Fig. 3c and d are given by Equation (A1) with \(\kappa _{ij}=1\) and \(f_{12}=f_{23}=f_{31}=2\). Moreover, the parameters in the free energies
are normally distributed with mean and variance as listed in Table 2. The normally distributed \(r_j\sim \mathcal {N}(0,1)\) for \(j=1,2\) define
The graph in Fig. 1b with observable transitions \(1\leftrightarrow 2\) and \(4\leftrightarrow 6\) underlies the Markov networks simulated for Fig. 3e and f. Their dynamics is determined by transition rates with parametrization (A1), wherein \(\kappa _{ij} = 1\).
We have set the non-conservative driving to \(f_{12} = f_{23} = f_{34} = f_{65} = f_{64} = 2.0\), \(f_{41} = 1.8\), \(f_{45} = 1.6\) and \(f_{61} = 0.4\) for all networks. For \(x\in \lbrace 2,3,\dots 6\rbrace \), the time-dependent free energies are given by
where we have randomly picked the constant energies \(F_{ci}\), energy amplitudes \(F_{ai}\) and the angles \(\varphi _{0,i}\) from normal distributions with mean and variance as given in Table 3. Normally distributed \(r_j\sim \mathcal {N}(0,1)\) with \(j\in \lbrace 1,\dots 6\rbrace \) define
for the data sets plotted in Fig. 3e and in Fig. 3f, respectively. For both data sets,
In all cases, we have computed the entropy production rate \(\left\langle \sigma \right\rangle _\text {pss}\) through Equation (3). For Fig. 3b, d and f, we have also calculated \(\left\langle \hat{\sigma }_{pk}\right\rangle _\text {pss}\) via Equation (19), to which only observable transitions, e.g., (12) and (21) for the unicycles, contribute. Integrating initial value problems of the absorbing network, where observed transitions are redirected into auxiliary states [19, 41], yields waiting-time distributions \(\psi _{I\rightarrow J}(t\vert t_0)\). Using the previously obtained probabilities and transition rates, the estimator \(\left\langle \hat{\sigma }_\Psi \right\rangle _\text {pss}\) can be determined after integrating out the phase-like time on which all waiting-time distributions depend.
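In a NESS, the entropy production rate takes the standard form \(\left\langle \sigma \right\rangle = \sum_{ij} p_i k_{ij} \ln \left[ p_i k_{ij}/(p_j k_{ji})\right]\). The following sketch evaluates it for a unicyclic three-state network with hypothetical rates (not the parameters used in Fig. 3) and checks it against the identity that, for a single cycle, the entropy production rate equals the cycle current times the cycle affinity:

```python
import numpy as np

# hypothetical unicyclic three-state rates; k[i, j]: rate from i to j
k = np.array([[0.0, 2.0, 0.5],
              [0.4, 0.0, 1.5],
              [1.0, 0.3, 0.0]])

# stationary distribution from p^T L = 0 with normalization sum(p) = 1
L = k - np.diag(k.sum(axis=1))
A = np.vstack([L.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
p, *_ = np.linalg.lstsq(A, b, rcond=None)

# steady-state entropy production rate, summed over undirected edges
sigma = 0.0
for i in range(3):
    for j in range(i + 1, 3):
        J_ij = p[i] * k[i, j] - p[j] * k[j, i]     # net probability current
        sigma += J_ij * np.log(p[i] * k[i, j] / (p[j] * k[j, i]))

# consistency check: sigma = cycle current * cycle affinity
J_cyc = p[0] * k[0, 1] - p[1] * k[1, 0]
A_cyc = np.log(k[0, 1] * k[1, 2] * k[2, 0] / (k[1, 0] * k[2, 1] * k[0, 2]))
print(np.isclose(sigma, J_cyc * A_cyc), sigma > 0)
```

The check works because in a unicyclic NESS the net current is the same along every edge, so the stationary probabilities cancel in the sum of the logarithms.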
Appendix B: Proof for the Entropy Estimator (16)
The observed pairs of directed transitions yield coarse-grained trajectories \(\Gamma (t)\)
of length T in time, where we choose the starting time of a period such that we observe the first transition at \(t_{0_0}=0\). A trajectory consists of tuples of an observed transition \(I_i\) and the phase-like time \(t_{0_i}\) of its observation, as well as of the waiting times \(\Delta t_i\) in between. During \(\Delta t_i\), an arbitrary number of hidden transitions can occur. Moreover, as we know \(\mathcal {T}\), we know the state of the network directly after each instantaneous transition. Thus, the next observable transition is independent of the past, i.e., the system has the Markov property at observable transitions. Therefore, the path weight of coarse-grained trajectories \(\Gamma (t)\) factors into
This factoring introduces waiting-time distributions of the form
defined in Equation (4) in terms of conditional path weights of microscopic trajectories \(\gamma _{I\rightarrow J}(\Delta t, t_0)\) that start right after an observed transition I and end with the next observed transition J.
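Writing \(N\) for the number of observed transitions in \(\Gamma(t)\) (notation introduced here for illustration), the factorization can be sketched, suppressing boundary contributions before the first and after the last transition, as

```latex
\mathcal{P}\left[\Gamma(t)\right] \simeq
  P\!\left(I_0, t_{0_0}\right)
  \prod_{i=0}^{N-2} \psi_{I_i \rightarrow I_{i+1}}\!\left(\Delta t_i \,\middle|\, t_{0_i}\right),
\qquad
  t_{0_{i+1}} = \left(t_{0_i} + \Delta t_i\right) \bmod \mathcal{T},
```

where the Markov property at observable transitions guarantees that each factor depends on the past only through the preceding transition and its phase-like time.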
The time-reversed process results from reversing the protocol, the trajectory and the time. For a trajectory \(\Gamma (t)\) of length T that starts at the phase-like time \(t_0\), the time-reversed transition rates read \(\tilde{k}_{ij}(t) = k_{ij}(T-t)\) while the time transforms as \(\tilde{t} = T-t\). Similarly, all quantities obtained by time-reversal will be marked with a tilde. The path weight of the time-reversed trajectory \(\widetilde{\Gamma }\) is, in analogy to Equation (B18), given by
Similar to reference [42], we estimate the entropy production rate \(\left\langle \sigma \right\rangle _\text {pss}\) using the log-sum inequality (see, e.g., [43]) as
Here, \(\mathcal {P}\left[ \Gamma (t) \big | \zeta (t)\right] = 1 =\widetilde{\mathcal {P}}\left[ \widetilde{\Gamma }(t)\big | \tilde{\zeta }(t)\right] \) holds if \(\Gamma (t)\) is the correct coarse-grained trajectory onto which \(\zeta (t)\) is mapped under coarse-graining. Otherwise, these conditional path weights vanish.
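For reference, the log-sum inequality states that for non-negative numbers \(a_i\) and \(b_i\) [43]

```latex
\sum_i a_i \ln\frac{a_i}{b_i}
  \;\geq\;
  \Bigl(\sum_i a_i\Bigr)\ln\frac{\sum_i a_i}{\sum_i b_i}.
```

Applied with the \(a_i\) running over the weights of all microscopic trajectories \(\zeta(t)\) that coarse-grain to the same \(\Gamma(t)\), and the \(b_i\) over their time-reversed counterparts, it bounds the full Kullback-Leibler divergence from below by the one between coarse-grained path weights.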
Replacing the sums of conditional path weights in the logarithm of the second line of inequality (B21) with waiting-time distributions as in Equations (B18) and (B20) yields
The first term on the right-hand side, \(\delta (T,t_{0_0})\), is a periodic function of each of the fixed times \(t_{0_0}\) and \(T\). Hence, \(\left| {\delta (T,t_{0_0})}\right| \le c\) holds for some constant \(c\in \mathbb {R}^+\).
To reformulate the sum on the right-hand side of Equation (B22), we define the conditional counter
It sums all terms of trajectories that start with I at \(t_0\) and end with the succeeding observable transition J after waiting time t. Substituting the conditional counter into Equation (B22) leads to
With \(\lim _{T\rightarrow \infty }\left| {\delta (T,t_{0_0})}\right| /T =0\), the calculation of the expectation value of Equation (B24) reduces to determining the expectation value of the conditional counter. Following Ref. [19], we argue that
holds true. Together with \(n_I(t_0)/\mathcal {T}= \left\langle n_I\right\rangle _\text {pss} p^\text {pss}(t_0\vert I)\), this results in
In total, the estimator \(\left\langle \hat{\sigma }_\psi \right\rangle _\text {pss}\) of the mean entropy production rate is given by
where the second equality follows from the vanishing \(\delta (T,t_{0_0})/T\) in the limit \(T\rightarrow \infty \). The estimator is non-negative as its definition (B21) has the form of a Kullback-Leibler divergence. In the special case of a NESS, rewriting \(\left\langle \hat{\sigma }_\psi \right\rangle _\text {pss}\) using Equation (B26) reveals that this estimator reduces to \(\left\langle \hat{\sigma }_\Psi \right\rangle _\text {pss}\), which we define in Equation (17), since then \(p^\text {pss}(t_0\vert I)=1/\mathcal {T}\) and the waiting-time distributions do not depend on \(t_0\).
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Maier, A.M., Degünther, J., van der Meer, J. et al. Inferring Kinetics and Entropy Production from Observable Transitions in Partially Accessible, Periodically Driven Markov Networks. J Stat Phys 191, 104 (2024). https://doi.org/10.1007/s10955-024-03315-7