1 Introduction

The framework of stochastic thermodynamics provides rules to describe small physical systems that are embedded into a thermal reservoir but remain out of equilibrium due to external driving [1,2,3]. If the relevant degrees of freedom can be described by a memoryless, i.e., Markovian, dynamics on a discrete set of states, the time evolution of the system is governed by the network structure and the transition rates between the states. In the case of periodically driven transition rates, such a dynamics relaxes into a periodic stationary state (PSS) [4,5,6,7], which, as a special case, becomes a non-equilibrium steady state (NESS) [8,9,10,11] for constant transition rates.

Since a model is fully specified only if all transition rates are known, practically relevant scenarios in which parts of the model remain hidden [12, 13] require methods to recover, e.g., hidden transition rates from observable data of a particular form. Combining such methods with the physical constraints provided by the rules of stochastic thermodynamics constitutes the field of thermodynamic inference [14]. With a focus on quantities that have a thermodynamic interpretation, recent works in the field obtain bounds on entropy production [15,16,17,18,19,20] or affinities [19, 21,22,23], complemented by techniques to recover topological information [24, 25] and speed limits [26,27,28].

Many of the methods discussed above apply to the case of time-independent driving and cannot straightforwardly be generalized to a PSS. For one of the standard methods of estimating entropy production, the thermodynamic uncertainty relation [29, 30], generalizations to PSSs exist [31,32,33,34], which, in general, require more input than their time-independent counterparts.

For the purpose of estimating entropy production, the usual rationale, given information about residence in states, is to identify appropriate transitions or currents, since such time-antisymmetric data allow one to infer the entropy production. When observing transitions, one can ask the converse question: Can we infer information about states, which are time-symmetric, from antisymmetric data like transitions? In this work, we address how observing transitions allows us to recover occupation probabilities of states if the system is in a PSS. In addition, we generalize and extend methods from [19] to the periodically driven case to infer transition rates and the number of hidden transitions between two observable ones. We also formulate and compare different lower bounds on the mean total entropy production. These entropy estimators are either proved or supported by strong numerical evidence.

The paper is structured as follows. In Sect. 2, we describe the setting and identify waiting-time distributions between observed transitions as the basic quantities we use to formulate our results. In Sect. 3, we investigate how these quantities can be used to infer kinetic information about the hidden part of a system in a PSS or NESS. Estimators for the mean entropy production are discussed in Sect. 4. We conclude and give an outlook on further work in Sect. 5.

2 General Setup

We consider a network of N states \(i\in \left\{ 1,\dots ,N\right\} \) that is periodically driven. The system is in state i(t) at time t and evolves stochastically through transitions between states that share an edge in the graph. A transition from i to j happens instantaneously with rate \(k_{{ij}}(t)\), which has the periodicity of the driving. To ensure thermodynamic consistency, we assume the local detailed balance condition [1,2,3]

$$\begin{aligned} \frac{k_{{ij}}(t)}{k_{{ji}}(t)} = e^{F_{{i}}(t) - F_{{j}}(t) + f_{{ij}}(t)} \end{aligned}$$
(1)

at each link, i.e., for each transition and its reverse. The driving with period \(\mathcal {T}\) may change the free energy \(F_k(t)\) of states k or act as a non-conservative force along transitions from i to j with \(f_{{ij}}(t) = -f_{{ji}}(t)\). Energies in this work are given in units of thermal energy so that entropy production is dimensionless.

The dynamics of the probability \(p_{i}(t)\) to occupy state i at time t obeys the master equation

$$\begin{aligned} \partial _t p_{i}(t) = \sum _{{j}}\left[ -p_{i}(t)k_{{ij}}(t) + p_{j}(t)k_{{ji}}(t)\right] . \end{aligned}$$
(2)

In the long-time limit \(t\rightarrow \infty \), these networks approach a periodic stationary state (PSS) \(p^\text {pss}_{i}(t)\). The transition rates and these probabilities \(p^\text {pss}_{i}(t)\) determine the mean entropy production rate in the PSS [1,2,3]

$$\begin{aligned} \left\langle \sigma \right\rangle _\text {pss} \equiv \frac{1}{\mathcal {T}}\int _{0}^{\mathcal {T}} \sum _{{ij}}p^\text {pss}_{i}(t)k_{{ij}}(t) \ln \frac{p^\text {pss}_{i}(t)k_{{ij}}(t)}{p^\text {pss}_{j}(t)k_{{ji}}(t)} {\textrm{d}}{t}. \end{aligned}$$
(3)
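To make this setup concrete, the following minimal Python sketch (our own illustration, not code from this work) constructs a hypothetical three-state unicyclic network with rates obeying the local detailed balance condition (1), relaxes the master equation (2) into its PSS, and evaluates the entropy production rate (3). All parameter choices, including the free energies and the constant non-conservative force, are assumptions made purely for illustration.

```python
# Illustrative sketch only; network, rates and numerical parameters are assumed.
import numpy as np

T_PERIOD = 1.0   # driving period (illustrative choice)
N = 3            # hypothetical three-state unicyclic network

def free_energy(i, t):
    """Hypothetical periodically modulated free energies F_i(t)."""
    return np.sin(2 * np.pi * t / T_PERIOD + 2 * np.pi * i / N)

def rates(t, f=1.0):
    """Rate matrix k[i, j] obeying Eq. (1); f is a constant
    non-conservative force along each cycle link (an assumption)."""
    k = np.zeros((N, N))
    for i in range(N):
        j = (i + 1) % N
        dF = free_energy(i, t) - free_energy(j, t)
        k[i, j] = np.exp(0.5 * (dF + f))    # i -> j
        k[j, i] = np.exp(-0.5 * (dF + f))   # j -> i
    return k

def master_rhs(p, t):
    """Right-hand side of the master equation (2)."""
    k = rates(t)
    return p @ k - p * k.sum(axis=1)

# Relax into the PSS by integrating over many periods (simple Euler step).
dt, n_periods = 1e-3, 50
p, t = np.full(N, 1.0 / N), 0.0
for _ in range(int(n_periods * T_PERIOD / dt)):
    p, t = p + dt * master_rhs(p, t), t + dt

# Average the entropy production rate of Eq. (3) over one more period.
sigma = 0.0
for _ in range(int(T_PERIOD / dt)):
    k = rates(t)
    for i in range(N):
        for j in range(N):
            if k[i, j] > 0 and k[j, i] > 0:
                sigma += dt * p[i] * k[i, j] * np.log(
                    (p[i] * k[i, j]) / (p[j] * k[j, i]))
    p, t = p + dt * master_rhs(p, t), t + dt

print("<sigma>_pss approx:", sigma / T_PERIOD)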
Fig. 1

Graphs of partially observable, periodically driven Markov networks. Observable transitions are labeled and displayed in blue. All states and the remaining transitions are assumed to be hidden. Purple states and purple transitions form the boundary of the hidden network. In a, only one pair of transitions can be observed. In b, the two pairs \(1\leftrightarrow 2\) and \(4\leftrightarrow 6\) are observable. The whole network consists of the observable transitions, the boundary of the hidden network and its interior.

In this work, we assume that at least one pair of transitions of a Markov network in its PSS or NESS is observable for an external observer, while the other transitions and all states are hidden, i.e., not directly accessible to the observer. We illustrate this with the graphs of two exemplary Markov networks in Fig. 1. States with an observable transition between them will be called boundary states. If two boundary states are connected by a single hidden transition, these transitions and the boundary states form the boundary of the hidden network. Additionally, we assume the period \(\mathcal {T}\) of the driving to be known.

The task is to determine hidden quantities like the probabilities \(p^\text {pss}_{i}(t)\) of such partially accessible networks as well as to estimate the overall entropy production. In such a network, we can determine distributions of waiting times t between two successive observable transitions \(I=(ij)\) and \(J=({lm})\), whereas observing the full microscopic dynamics is impossible. These waiting-time distributions are of the form

$$\begin{aligned} \psi _{I\rightarrow J}(t\vert t_0)&\equiv \sum _{\gamma _{I\rightarrow J}(t, t_0)} \mathcal {P}\left[ \gamma _{I\rightarrow J}(t, t_0) \big | I,t_0\right] . \end{aligned}$$
(4)

They depend on the time \(t_0\in [0,\mathcal {T}]\) at which transition I occurs within one period of the PSS. Since an arbitrary number of hidden transitions occurs between I and J, the distributions are given by the sum of conditional path weights \(\mathcal {P}\left[ \gamma _{I\rightarrow J}(t, t_0) \big | I,t_0\right] \) corresponding to all microscopic trajectories \(\gamma _{I\rightarrow J}(t, t_0)\) that start directly after a transition I at \(t_0\) and end with the next observable transition J after waiting time t.

Furthermore, we define

$$\begin{aligned} \Psi _{I\rightarrow J}(t) = \int _{0}^{\mathcal {T}} p^\text {pss}(t_0\vert I)\psi _{I\rightarrow J}(t\vert t_0){\textrm{d}}{t_0}, \end{aligned}$$
(5)

where we use the conditional probability \(p^\text {pss}(t_0\vert I)\) to detect a particular transition I at a specific time \(t_0\in [0,\mathcal {T})\) within the period. Since using waiting times with uncorrelated \(t_0\) effectively marginalizes \(t_0\) as in Equation (5), e.g., for observed trajectories with unknown \(\mathcal {T}\) in which we discard a sufficient number of successive waiting times between two recorded ones, we can always obtain these waiting-time distributions from measured waiting times. In the special case of a NESS,

$$\begin{aligned} \psi _{I\rightarrow J}(t\vert t_0) = \Psi _{I\rightarrow J}(t) \equiv \psi _{I\rightarrow J}(t) \end{aligned}$$
(6)

holds for an arbitrarily assigned period \(\mathcal {T}\), which we emphasize by using \(\psi _{I\rightarrow J}(t)\).
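As a practical illustration, the following Python sketch estimates \(\Psi _{I\rightarrow J}(t)\) from a recorded, time-ordered sequence of observable transition events. The input format, the function name and the binning strategy are our own assumptions, not a prescription from this work.

```python
# Illustrative sketch only; the event format [(time, label), ...] is assumed.
import numpy as np

def waiting_time_histograms(events, labels, t_max, n_bins=100):
    """Estimate Psi_{I->J}(t) of Eq. (5) from observable transition events.

    Marginalizing the transition time t0 happens automatically when the
    recorded waiting times have uncorrelated t0; otherwise one may discard
    a sufficient number of waiting times between two recorded ones.
    """
    width = t_max / n_bins
    hist = {I: {J: np.zeros(n_bins) for J in labels} for I in labels}
    counts = {I: 0 for I in labels}
    for (t1, I), (t2, J) in zip(events[:-1], events[1:]):
        counts[I] += 1
        t = t2 - t1
        if 0.0 <= t < t_max:
            hist[I][J][int(t / width)] += 1
    # Normalize so that sum_J int Psi_{I->J}(t) dt = 1 for each initial I,
    # up to waiting times exceeding the cutoff t_max.
    for I in labels:
        for J in labels:
            if counts[I] > 0:
                hist[I][J] /= counts[I] * width
    return hist

# Hypothetical usage with event data from an experiment or simulation:
# hist = waiting_time_histograms(events, labels={"+", "-"}, t_max=5.0)
```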

3 Shortest Hidden Paths, Transition Rates and Occupation Probabilities

We first generalize methods to infer the number of hidden transitions in the shortest path between any two observable transitions from a NESS [19, 24] to a PSS. For any two transitions I, J for which the waiting-time distribution does not vanish, the number of hidden transitions \(M_{IJ}\) along the shortest path between I and J is given by

$$\begin{aligned} M_{IJ}&= \lim _{t\rightarrow 0} \left( t \frac{\text {d}}{\text {d}{t}}\ln \left[ \psi _{I\rightarrow J}(t\vert t_0)\right] \right) = \lim _{t\rightarrow 0} \left( t \frac{\text {d}}{\text {d}{t}}\ln \left[ \Psi _{I\rightarrow J}(t)\right] \right) , \end{aligned}$$
(7)

which can be derived following an idea adopted in reference [19] for systems in a NESS. To sketch the general idea of how waiting-time distributions relate to the number of transitions in the short-time limit, we first consider a trajectory that starts in state i at time \(t_0\) and ends in a neighboring state j at time \(t_0+t\). In the short-time limit \(t\rightarrow 0\), the probability of such a trajectory fulfills \(\lim _{t\rightarrow 0}p(j,t_0+t|i,t_0)/t= k_{ij}(t_0)\), which is the path weight of an infinitesimally short trajectory that contains only a single transition. Paths with multiple transitions contribute to higher-order terms in t and thus become irrelevant. In a second step, we use the same idea to compute the path weight of trajectories \(\gamma _{I\rightarrow J}(t, t_0)\) in the short-time limit. In expanded form, a concrete realization of \(\gamma _{I\rightarrow J}(t, t_0)\) that contributes to the sum in Equation (4) reads

$$\begin{aligned} \gamma _{I\rightarrow J}(t, t_0)=(i_0,t_0+\tau _0)\rightarrow (i_1,t_0+\tau _1)\rightarrow \dots \rightarrow (i_L,t_0+\tau _L)\rightarrow (i_{L+1},t_0+\tau _{L+1}), \end{aligned}$$
(8)

where we assume that transition I ends in state \(i_0\) at time \(t_0+\tau _0\) and the concluding transition J connects states \(i_L\) and \(i_{L+1}\) after duration \(\tau _{L+1}=t\). With the explicit expression for the path weight [1] in a Markov network we calculate the probability of a particular sequence \(i_0\rightarrow \dots \rightarrow i_{L+1}\) as

$$\begin{aligned} P(i_0\rightarrow \dots&\rightarrow i_{L+1},t\vert I,t_0) = \int _{0}^{t}{\textrm{d}}{\tau _1}\dots \int _{\tau _{L-1}}^{t}{\textrm{d}}{\tau _L} \mathcal {P}\left[ \gamma _{I\rightarrow J}(t, t_0) \big | I,t_0\right] \nonumber \\&= \int _{0}^{t}{\textrm{d}}{\tau _1}\dots \int _{\tau _{L-1}}^{t}{\textrm{d}}{\tau _L} \prod _{l=0}^{L} e^{-\int _{\tau _l}^{\tau _{l+1}} \sum _j k_{i_lj}(t'+t_0) {\textrm{d}}{t'}} k_{i_li_{l+1}}(\tau _{l+1} + t_0) \nonumber \\&= \left( \prod _{l=0}^{L} k_{i_li_{l+1}}(t_0)\right) \int _{0}^{t} {\textrm{d}}{\tau _{L}}\dots \int _{0}^{\tau _3} {\textrm{d}}{\tau _2}\int _{0}^{\tau _2} {\textrm{d}}{\tau _1} + \mathcal {O}(t^{L+1}), \end{aligned}$$
(9)

which is given as an integral over waiting times. From definition (4), we identify the waiting-time distributions as the sum of these probabilities over all possible sequences. By inserting Equation (9) into definition (4), we find

$$\begin{aligned} \lim _{t\rightarrow 0}\psi _{I\rightarrow J}(t\vert t_0)&=\left( \prod _{l=0}^{M_{IJ}} k_{i_li_{l+1}}(t_0)\right) \frac{t^{M_{IJ}}}{M_{IJ}!} \sim t^{M_{IJ}}, \end{aligned}$$
(10)

where we assume that the shortest path of the form (8) has \(L=M_{IJ}\) hidden transitions. For systems in a PSS, the waiting-time distributions \(\psi _{I\rightarrow J}(t\vert t_0)\) and \(\Psi _{I\rightarrow J}(t)\) are interchangeable since both of their short-time limits are proportional to \(t^{M_{IJ}}\), i.e., are dominated by the shortest path between I and J. Since this shortest path between I and J consists of the same number of transitions, no matter at what time \(t_0\) a trajectory starts, the expression in the middle of Equation (7) is independent of \(t_0\). In the example of Fig. 1a, with \(I=+\) and \(J=+\), we get \(M_{++}=2\) since the two transitions (23) and (31) form the corresponding shortest hidden path. For the graph shown in Fig. 1b with \(I=L_-\) and \(J=R_+\), we find \(M_{L_-R_+}=1\) since (16) is the transition in between \(L_-\) and \(R_+\).
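In practice, Equation (7) can be applied to binned data by fitting the slope of \(\ln \Psi _{I\rightarrow J}(t)\) against \(\ln t\) over the shortest resolved time bins. A minimal sketch, with all names and the fitting window our own choices:

```python
# Illustrative sketch only; function name and fitting window are assumed.
import numpy as np

def hidden_path_length(t_centers, Psi_IJ, n_short=10):
    """Estimate M_IJ via Eq. (7): for t -> 0, Psi_{I->J}(t) ~ t^{M_IJ},
    so the log-log slope over the first non-empty bins gives M_IJ."""
    mask = Psi_IJ > 0
    t_fit = t_centers[mask][:n_short]
    psi_fit = Psi_IJ[mask][:n_short]
    slope, _ = np.polyfit(np.log(t_fit), np.log(psi_fit), 1)
    return int(round(slope))
```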

The rates of observable transitions can be recovered from waiting-time distributions. Given a transition \(J=({lm})\) and its reverse \(\tilde{J}=({ml})\), we obtain the corresponding rate \(k_{{lm}}\) through

$$\begin{aligned} \lim _{t\rightarrow 0} \psi _{\tilde{J}\rightarrow J}(t\vert t_0)&= \lim _{t\rightarrow 0}k_{{lm}}(t+t_0)p_{l}(t\vert \tilde{J},t_0) = k_{{lm}}(t_0), \end{aligned}$$
(11)

where we use that the short-time limit is dominated by the path with the fewest transitions, which, in this case, is \(\tilde{J}\) followed by J without any hidden intermediate transitions. Moreover, the state of the system is known immediately after the initial transition \(\tilde{J}\) at time \(t_0\) within the period, which leads to \(p_{l}(t=0\vert \tilde{J},t_0)=1\).

Further transition rates are inferable for any combination of transitions I and J with \(M_{IJ} = 1\), i.e., whenever the shortest path between \(I=(ij)\) and \(J=({lm})\) consists of only one hidden transition. For \(t\rightarrow 0\), its transition rate \(k_{j{l}}(t_0)\) then follows from a Taylor expansion as

$$\begin{aligned} k_{j{l}}(t_0) = \lim _{t\rightarrow 0} \frac{\psi _{I\rightarrow J}(t\vert t_0)}{k_{{lm}}(t_0+t)t} \end{aligned}$$
(12)

with \(k_{{l}j}(t_0)\) following analogously. As an example, we get the transition rates \(k_{14},\,k_{16},\,k_{41}\) and \(k_{61}\) for the network shown in Fig. 1b even though the related links are hidden.
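The two limits (11) and (12) translate into simple extrapolations of binned waiting-time data to \(t=0\). A minimal sketch, under the assumption that \(\psi (t\vert t_0)\) has already been estimated on a grid of short times for a fixed \(t_0\); all names are our own:

```python
# Illustrative sketch only; input arrays and names are assumed.
import numpy as np

def observable_rate(psi_rev_fwd, t_grid, n_short=5):
    """Eq. (11): k_lm(t0) is the t -> 0 limit of psi_{J~ -> J}(t|t0),
    estimated here by linearly extrapolating the first bins to t = 0."""
    slope, intercept = np.polyfit(t_grid[:n_short], psi_rev_fwd[:n_short], 1)
    return intercept

def hidden_rate(psi_IJ, k_lm, t_grid, n_short=5):
    """Eq. (12): k_jl(t0) = lim_{t->0} psi_{I->J}(t|t0) / (k_lm(t0) t),
    i.e., the initial slope of psi_{I->J}(t|t0) divided by k_lm(t0)."""
    slope, _ = np.polyfit(t_grid[:n_short], psi_IJ[:n_short], 1)
    return slope / k_lm
```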

Occupation probabilities of boundary states of the hidden network can be inferred as follows. During a measurement of length \(M\mathcal {T}\) with large \(M\in \mathbb {N}\), we count the number \(N_I(t_0\le \tau \le t_0+\Delta t)\) of transitions \(I=(ij)\) that occur during the infinitesimal interval \([t_0,t_0+\Delta t]\), where we map all times at which transitions happen into one period of the PSS using a modulo operation. We, therefore, obtain the rate of transitions I at time \(t_0\in [0,\mathcal {T})\) within one period of the PSS as

$$\begin{aligned} n_I(t_0) = \lim _{M\rightarrow \infty }\lim _{\Delta t\rightarrow 0} \frac{N_I(t_0\le \tau \le t_0+\Delta t)}{M\Delta t} = p_i^\text {pss}(t_0)k_{ij}(t_0). \end{aligned}$$
(13)

As the transition rate \(k_{ij}(t_0)\) can be determined as described above, we can thus infer \(p_i^\text {pss}(t_0)\) from experimentally accessible data. Knowing the occupation probabilities of all boundary states of the hidden network allows us to calculate instantaneous currents along single transitions between them using the corresponding inferred transition rates.

These results can be specialized to NESSs where, to the best of our knowledge, they have not been reported before either. In this special case, dropping the irrelevant \(t_0\) in Equations (11) and (12) leads to constant transition rates. Moreover, in a NESS, the mean rate of transitions I, \(\left\langle n_I\right\rangle _\text {ss}\), can directly be obtained from the total number \(N_{I,T}\) of observed transitions I along a measured trajectory of length T. Inferring occupation probabilities \(p_i^\text {ss}\) then only requires dividing by the already calculated transition rate \(k_{ij}\), i.e.,

$$\begin{aligned} p_i^\text {ss}k_{ij} = \left\langle n_I\right\rangle _\text {ss} = \lim _{T\rightarrow \infty }\frac{N_{I,T}}{T}. \end{aligned}$$
(14)

Equations (13) and (14) show how to infer occupation probabilities of boundary states of the hidden network. Given the inferable quantities \(p_i^\text {pss}(t_0)\) or \(p_i^\text {ss}\), we can calculate how much probability rests on states in the network beyond the boundary states identified in this way. As an example, Fig. 2b and c illustrate the probability of finding the network shown in Fig. 2a in its boundary states rather than in states within the interior of the hidden network. In both figures, different sets of observable transitions lead to different boundaries of the hidden network. Figure 2b displays sums of probabilities for these systems in a PSS, while Fig. 2c gives an example for NESSs. Each sum of inferable occupation probabilities quantifies the probability of finding the system in the boundary of the hidden network. The closer this sum is to one, the less relevant the inaccessible states in the interior of the hidden network are for the dynamics.
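A minimal sketch of Equations (13) and (14), assuming a list of detection times of transition I and the previously inferred rate \(k_{ij}\); the input format and all names are our own:

```python
# Illustrative sketch only; input format and names are assumed.
import numpy as np

def p_ss(n_transitions, T_total, k_ij):
    """NESS case, Eq. (14): p_i^ss = <n_I>_ss / k_ij."""
    return (n_transitions / T_total) / k_ij

def p_pss(event_times, period, k_ij_of_t0, n_bins=50):
    """PSS case, Eq. (13): histogram transition times modulo the period
    to obtain n_I(t0), then divide by the inferred rate k_ij(t0)."""
    t0 = np.mod(event_times, period)
    n_periods = np.max(event_times) / period   # number M of observed periods
    counts, edges = np.histogram(t0, bins=n_bins, range=(0.0, period))
    n_I = counts / (n_periods * (edges[1] - edges[0]))
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, n_I / k_ij_of_t0(centers)
```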

Fig. 2

Inference of occupation probabilities of boundary states. a Network with hidden states and hidden (black) transitions. Beginning with the light green transitions (12) and (21) in a, darker colored transitions are successively considered as observable. b The sums of occupation probabilities of states within the boundary of each hidden network in the PSS are shown in the respective color. The time-dependent transition rates are given in Appendix A.1. c As in b but for constant driving, i.e., for a NESS as a function of \(k_{45}\) with the other rates given in Appendix A.1

4 Three Estimators for Entropy Production in PSSs

In this section, we estimate irreversibility via the entropy production rate in a PSS. We have seen above how waiting-time distributions contain information about the hidden dynamics of a network. Thus, it seems sensible to expect that these quantities can be used as entropy estimators to infer irreversibility in both the observable and the hidden parts of the network.

For a trajectory \(\Gamma \) of length T, reversing the driving protocol leads to transition rates \(\tilde{k}_{ij}(t)= k_{ij}(T-t)\). The corresponding waiting-time distributions \(\tilde{\psi }_{\tilde{J}\rightarrow \tilde{I}}(t\vert t_0+t)\) for reversed paths \(\tilde{J}\rightarrow \tilde{I}\) are the time-reversed versions of \(\psi _{I\rightarrow J}(t\vert t_0)\). Once waiting-time distributions of the form \(\tilde{\psi }_{\tilde{J}\rightarrow \tilde{I}}(t\vert t_0+t)\) have been determined, the fluctuation relation

$$\begin{aligned} \hat{\sigma }_\psi \equiv \lim _{T\rightarrow \infty }\frac{1}{T} \ln \frac{\mathcal {P}\left[ \Gamma \right] }{ \widetilde{\mathcal {P}}[\widetilde{\Gamma }]} \end{aligned}$$
(15)

for a trajectory \(\Gamma \) of length T and its time-reverse \(\widetilde{\Gamma }\) allows us to derive an estimator \(\left\langle \hat{\sigma }_\psi \right\rangle _\text {pss}\) that fulfills

$$\begin{aligned} \left\langle \sigma \right\rangle _\text {pss} \ge \left\langle \hat{\sigma }_\psi \right\rangle _\text {pss}=\sum _{I,J} \int _{0}^{\infty }\int _{0}^{\mathcal {T}} \frac{n_I(t_0)}{\mathcal {T}} \psi _{I\rightarrow J}(t\vert t_0) \ln \frac{\psi _{I\rightarrow J}(t\vert t_0)}{\tilde{\psi }_{\tilde{J}\rightarrow \tilde{I}}(t\vert t_0+t)} {\textrm{d}}{t_0}{\textrm{d}}{t} \ge 0. \end{aligned}$$
(16)

Here, the index \(\psi \) of the estimator highlights the type of waiting-time distribution that enters its expression in the above inequality. We prove this inequality in Appendix B as a generalization of the trajectory-based entropy estimator \(\left\langle \hat{\sigma }\right\rangle \) introduced in [19] for NESSs.

For time-symmetric driving, estimating \(\left\langle \sigma \right\rangle _\text {pss}\) with \(\left\langle \hat{\sigma }_\psi \right\rangle _\text {pss}\) does not require reversing the driving protocol in an experiment. In this case, \(\tilde{\psi }_{\tilde{J}\rightarrow \tilde{I}}(t\vert t_0+t)\) results from \(\psi _{I\rightarrow J}(t\vert t_0)\) by exploiting the symmetry \(k_{ij}(t_*+t_0)=k_{ij}(t_*-t_0)\) of the protocol for all transitions (ij) after finding \(t_*\in [0,\mathcal {T})\). In the following paragraphs, we discuss experimentally accessible entropy estimators that do not require waiting-time distributions of the time-reversed process, regardless of any symmetry properties of the driving.

We have performed extensive numerical computations for random, periodically driven Markov networks corresponding to different underlying graphs to compute

$$\begin{aligned} \left\langle \hat{\sigma }_\Psi \right\rangle _\text {pss}\equiv \sum _{I,J} \left\langle n_I\right\rangle _\text {pss} \int _{0}^{\infty } \Psi _{I\rightarrow J}(t) \ln \frac{\Psi _{I\rightarrow J}(t)}{\Psi _{\tilde{J}\rightarrow \tilde{I}}(t)} {\textrm{d}}{t}, \end{aligned}$$
(17)

where the index \(\Psi \) indicates the type of waiting-time distribution used. Here, \(\left\langle n_I\right\rangle _\text {pss}\) is the mean of \(n_I(t_0)\) over one period \(t_0\in [0,\mathcal {T}]\), which results from measured data. For over \(10^5\) randomly chosen systems from unicyclic graphs of three states, diamond-shaped graphs as displayed in Fig. 1a and more complex underlying graphs, the inequalities

$$\begin{aligned} \left\langle \sigma \right\rangle _\text {pss}\ge \left\langle \hat{\sigma }_\Psi \right\rangle _\text {pss}\ge 0 \end{aligned}$$
(18)

hold true as shown in the scatter plots in Fig. 3a, c and e. Therefore, we conjecture inequality (18) to hold true for periodically driven Markov networks, so that \(\left\langle \hat{\sigma }_\Psi \right\rangle _\text {pss}\) is a thermodynamically consistent estimator of \(\left\langle \sigma \right\rangle _\text {pss}\).
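For numerical evaluation, the integral in Equation (17) reduces to a quadrature over the binned distributions. A minimal sketch follows; the implementation choices, including the handling of empty bins and the uniform time grid, are our own:

```python
# Illustrative sketch only; data layout and names are assumed.
import numpy as np

def sigma_Psi(n_mean, Psi, t_grid, labels, reverse):
    """Evaluate Eq. (17).  n_mean[I]: mean rate <n_I>_pss; Psi[I][J]:
    binned Psi_{I->J}(t) on the uniform grid t_grid; reverse[I]: label
    of the reversed transition I~.  Bins where either distribution
    vanishes are skipped, which at finite statistics avoids spurious
    divergences of the logarithm."""
    dt = t_grid[1] - t_grid[0]
    total = 0.0
    for I in labels:
        for J in labels:
            p = Psi[I][J]
            q = Psi[reverse[J]][reverse[I]]
            mask = (p > 0) & (q > 0)
            integrand = np.zeros_like(p)
            integrand[mask] = p[mask] * np.log(p[mask] / q[mask])
            total += n_mean[I] * integrand.sum() * dt
    return total
```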

Fig. 3

Ratios \(\left\langle \sigma \right\rangle _\text {pss}/\left\langle \hat{\sigma }_\Psi \right\rangle _\text {pss}\) and \(\left\langle \hat{\sigma }_{pk}\right\rangle _\text {pss}/\left\langle \hat{\sigma }_\Psi \right\rangle _\text {pss}\) involving entropy estimators in scatter plots. a Quality factor \(\left\langle \hat{\sigma }_\Psi \right\rangle _\text {pss}/\left\langle \sigma \right\rangle _\text {pss}\) for two data sets of networks with a diamond-shaped graph as shown in Fig. 1a. b Comparison between \(\left\langle \hat{\sigma }_{pk}\right\rangle _\text {pss}\) and \(\left\langle \hat{\sigma }_\Psi \right\rangle _\text {pss}\) for the blue data set of a. c and d Ratios as in a and b, respectively, for unicyclic three-state systems. e and f Both ratios as in a and b for networks with a graph as shown in Fig. 1b. The ratios in all scatter plots are plotted against the random angle \(\varphi _0\) that is part of the free-energy parametrization detailed in Appendix A.2

Furthermore, transition rates and occupation probabilities that are inferred as described in Sect. 3 allow us to prove another lower bound on the entropy production rate of a Markov network in a PSS which complements the previous two. The estimator

$$\begin{aligned} \left\langle \hat{\sigma }_{pk}\right\rangle _\text {pss}\equiv \int _{0}^{\mathcal {T}} \sum _{ij\in \mathcal {V}} \frac{p^\text {pss}_i(t) k_{ij}(t)}{\mathcal {T}} \ln \frac{p^\text {pss}_i(t) k_{ij}(t)}{p^\text {pss}_j(t) k_{ji}(t)} {\textrm{d}}{t} \end{aligned}$$
(19)

adds up the contributions to entropy production along transitions of the set \(\mathcal {V}\), which contains all transitions that are either observable or within the boundary of the hidden network. As \(\left\langle \hat{\sigma }_{pk}\right\rangle _\text {pss}\) depends solely on inferable probabilities and rates, its index is pk. Since each of the terms in Equation (19) is non-negative for all t and part of \(\left\langle \sigma \right\rangle _\text {pss}\) as given in Equation (3), \(\left\langle \hat{\sigma }_{pk}\right\rangle _\text {pss}\) constitutes a lower bound on the total entropy production rate of the system. The bound is tight if the set \(\mathcal {V}\) comprises all edges along which the current does not vanish identically. Put differently, \(\left\langle \hat{\sigma }_{pk}\right\rangle _\text {pss}= \left\langle \sigma \right\rangle _\text {pss}\) holds if and only if all edges with non-vanishing current are either observable or within the boundary of the hidden network. This bound may often be less tight than the conjectured bound (17) for periodically driven Markov networks, though this ordering does not hold in general, as shown in Fig. 3b, d and f.

In the special case of a NESS, the last bound (19) acquires the familiar form

$$\begin{aligned} \left\langle \hat{\sigma }_{pk}\right\rangle _\text {pss}\overset{\text {NESS}}{=} \sum _{ij\in \mathcal {V}} p^\text {ss}_i k_{ij} \ln \frac{p^\text {ss}_i k_{ij}}{p^\text {ss}_j k_{ji}}. \end{aligned}$$
(20)

The crucial point is that here the entropy estimator \(\left\langle \hat{\sigma }_{pk}\right\rangle _\text {pss}\) is based on occupation probabilities and transition rates inferred from distributions of waiting times between observable transitions as described in Sect. 3. Although the term (20) shows superficial similarities to the main result of reference [35] interpreted as an entropy estimator, our estimator \(\left\langle \hat{\sigma }_{pk}\right\rangle _\text {pss}\) differs in two ways. First, \(\left\langle \hat{\sigma }_{pk}\right\rangle _\text {pss}\) can be used for partially accessible Markov networks in a PSS, which include systems in a NESS as special cases. Second, the sum in Equations (19) and (20) includes contributions of both observable transitions and transitions within the boundary of the hidden network. These additional contributions allow for a more accurate estimate of the entropy production rate.
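In the NESS case, the estimator (20) is a direct sum over the inferred quantities. A minimal sketch, assuming \(\mathcal {V}\) is stored as a set of ordered state pairs containing both orientations of each edge; all names are our own:

```python
# Illustrative sketch only; data layout and names are assumed.
import numpy as np

def sigma_pk_ness(p_ss, k, visible_edges):
    """Evaluate Eq. (20).  p_ss[i]: inferred occupation probabilities;
    k[(i, j)]: inferred transition rates; visible_edges: the set V of
    ordered pairs (i, j) that are observable or lie within the boundary
    of the hidden network (both orientations included)."""
    total = 0.0
    for (i, j) in visible_edges:
        fwd = p_ss[i] * k[(i, j)]
        bwd = p_ss[j] * k[(j, i)]
        if fwd > 0 and bwd > 0:
            total += fwd * np.log(fwd / bwd)
    return total
```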

5 Concluding Perspectives

5.1 Comparisons with Known Entropy Estimators

In contrast to the steady-state case, only a few methods bound entropy production in partially accessible Markov networks that are driven periodically. One class of such estimators relies on measuring currents and their precision in a similar fashion to the thermodynamic uncertainty relation [29, 30]. A variant of the thermodynamic uncertainty relation proved in reference [36] makes use of the response of currents to a change in the speed of the protocol that drives the system out of equilibrium. Since this method requires control over the driving speed, its prerequisites are more restrictive than ours. Similar methods exist that estimate quantities related to entropy production in a broader sense. The result in reference [37] yields an estimate of the combined entropy production of a time-dependently driven process and its time-reverse, whereas reference [32] describes a method to estimate the entropy production of a related auxiliary process.

In addition to the class of methods above, which rely on the measurement of currents, we can identify a second class of methods in which the entropy estimator takes the form of a Kullback-Leibler divergence, which includes our proven bounds (16) and (19) but not the conjectured one (17). More specifically, two of these methods require access to the time-reversed process and use waiting-time distributions. The recent reference [38] yields a proven and a conjectured lower bound on the entropy production rate \(\left\langle \sigma \right\rangle _\text {pss}\) utilizing waiting-time distributions that are independent of the time called \(t_0\) in the present work. Since the proven estimator of reference [38] uses distributions with marginalized \(t_0\), it is at best as tight as \(\left\langle \hat{\sigma }_\psi \right\rangle _\text {pss}\), which makes use of the full distributions. A similar relation applies to the conjectured estimator in reference [38], which requires less data than \(\left\langle \hat{\sigma }_\psi \right\rangle _\text {pss}\) since it is independent of \(t_0\). However, obtaining this conjectured bound still requires measurements in the time-reversed system and is therefore more restrictive than our conjectured bound \(\left\langle \hat{\sigma }_\Psi \right\rangle _\text {pss}\). In a more general setup, we could also allow for additional observations during the time interval between two visible transitions. The method reported in reference [39] allows one to find a stronger lower bound on \(\left\langle \sigma \right\rangle _\text {pss}\) by utilizing such additional, potentially even non-Markovian data. However, kinetic results like particular transition rates and, in particular, the entropy estimator (19) are inaccessible in this more general setup.

Finally, the technique of reference [40], published for the case in which one is able to count transitions in time-dependent time series, can also be applied to partially accessible, periodically driven Markov networks. Within this setup, the counting yields empirical currents of visible transitions that are used to obtain a lower bound on \(\left\langle \sigma \right\rangle _\text {pss}\) using machine learning and multiple time series from repeated measurements. Hence, our bound \(\left\langle \hat{\sigma }_{pk}\right\rangle _\text {pss}\) is better than the lower bound established by the method of reference [40] because it contains not only the entropy production of all visible links but also that of some hidden transitions between visible ones.

5.2 Summary and Outlook

In this paper, we have introduced inference methods based on distributions of waiting times between consecutive observed transitions in partially accessible, periodically driven Markov networks. Successive use of these methods yields information about the kinetics of such a Markov network as well as its underlying topology, including hidden parts, as summarized in Fig. 4.

We have first shown how to infer the number of hidden transitions along the shortest path between two observable transitions. We have then derived methods to infer transition rates between boundary states of the hidden network. Occupation probabilities of these boundary states then follow by discerning when the observable transitions happen within one period. Consequently, we find the total probability resting on the hidden states in the interior of the hidden network.

Fig. 4

Summary of the inference scheme. Starting from waiting-time distributions and the known period \(\mathcal {T}\) of the driving, the number of hidden transitions \(M_{IJ}\) along the shortest path between two observable transitions I and J, the occupation probabilities of boundary states \(p_i^\text {pss}(t)\) as well as the rates \(k_{ij}(t)\) of transitions between boundary states are inferable. These quantities enter the three lower bounds on entropy production

In addition, we have presented three entropy estimators enabling us to estimate the irreversibility of a driven Markov network based on transitions observed during a partially accessible dynamics. The first and third ones are proven to be lower bounds on the mean entropy production rate, whereas we conjecture the second estimator to have this property too. Its proof remains an open theoretical challenge. The second and third estimators have the advantage of not requiring control of the driving since its time reversal is not needed. Furthermore, we emphasize that even for the simpler case of a NESS, most of these results are original as well.

Finally, it will be interesting to explore whether and how such an approach can be adapted to continuous systems described by a Langevin dynamics. We also hope that our non-invasive method yielding time-dependent transition rates and occupation probabilities will be applied to experimental data of periodically driven small systems.