Introduction

Energy transduction and information processing, the hallmarks of biological systems, can only happen in finite time at the cost of continuous dissipation. Quantifying time irreversibility and entropy production is thus a central challenge in experiments on mesoscopic systems out of equilibrium1,2, including living matter3,4,5,6,7,8,9,10 and technological devices11. Experimental efforts have been paralleled by recent developments in nonequilibrium statistical mechanics, which have produced methods for estimating the entropy production rate σ in steady states10,12,13,14,15,16,17,18,19,20. Several approaches, for example, focus on improving estimates of σ in cases of incomplete information and coarse-graining21,22,23,24,25. Some are based on lower bounds on σ2,18,19, of which the thermodynamic uncertainty relation (TUR) is a prominent example17,26,27,28,29,30,31,32,33,34. Classical methods estimate σ by adding local contributions from non-zero fluxes between states35, while recent approaches have also introduced methods exploiting the statistics of return times or waiting times36,37,38,39,40.

For high-dimensional diffusive systems or Markov jump processes on an ample state space, where the experimental estimate of microscopic forces becomes difficult, the TUR is a valuable tool for approximately quantifying σ. Indeed, the TUR is a frugal inequality, based only on knowledge of the first two cumulants of any current J integrated over a time τ, i.e., its average 〈J〉 and its variance var(J) = 〈J2〉 − 〈J〉2,

$$\frac{\sigma }{{k}_{B}}\ge 2\frac{{\langle J\rangle }^{2}}{{{{{\rm{var}}}}}(J)\,\tau }.$$
(1)

However, (1) might provide a loose bound on σ. For example, far from equilibrium, kinetic factors41 often constrain the right-hand side of (1) to values much smaller than σ.

Let us focus on a Markov jump process with transition rate wij from state i to state j, in a stationary regime with steady-state probability ρi. Owing to the coupling between the system and equilibrium reservoirs, for each forward transition between two states i and j, denoted (i, j), the backward transition (j, i) is also possible. By defining fluxes ϕij = ρiwij, the mean entropy production rate is written42,43,44 as

$$\frac{\sigma }{{k}_{B}}={\sum}_{i < j}({\phi }_{ij}-{\phi }_{ji})\ln \frac{{\phi }_{ij}}{{\phi }_{ji}}.$$
(2)

The system is in equilibrium only if all currents are zero, that is, ϕij = ϕji for every pair {i, j}. Experimentally, fluxes ϕij are estimated by counting the transitions between states i and j in a sufficiently long time interval t.
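As a concrete illustration, the estimator (2) can be evaluated directly from a matrix of measured fluxes. The following minimal Python sketch (with made-up two-state fluxes, not data from the paper) computes σ/kB:

```python
import math

def entropy_production_rate(phi):
    """Mean entropy production rate (in units of k_B), Eq. (2):
    sigma = sum_{i<j} (phi_ij - phi_ji) * ln(phi_ij / phi_ji)."""
    n = len(phi)
    sigma = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            if phi[i][j] > 0 and phi[j][i] > 0:
                sigma += (phi[i][j] - phi[j][i]) * math.log(phi[i][j] / phi[j][i])
    return sigma

# Two-state example: balanced fluxes give sigma = 0 (equilibrium).
phi_eq = [[0.0, 0.3], [0.3, 0.0]]
print(entropy_production_rate(phi_eq))  # -> 0.0

# Unbalanced fluxes give strictly positive dissipation.
phi_neq = [[0.0, 0.5], [0.1, 0.0]]
print(entropy_production_rate(phi_neq))  # (0.5 - 0.1)*ln(5) ≈ 0.644
```

Each edge contributes a non-negative term, since the flux difference and the log-ratio always share the same sign.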

The challenge we focus on is evaluating the entropy production rate with experimental data missing the observation of one or more backward transitions: any null flux makes (2) inapplicable. This situation is encountered in a vast class of idealized mesoscopic systems such as totally asymmetric exclusion processes45, directed percolation46, spontaneous emission in a zero-temperature environment47, enzymatic reactions48, and perfect resetting49. A first solution to this problem replaces each unobserved flux ϕij by a fictitious rate ϕij ~ t−1 scaling with the observation time. Intuitively, this corresponds to assuming that no transition was observed during a time t because its flux was barely lower than t−1. A Bayesian approach refines this simple argument, proposing the optimized assumption ϕij ≃ (ρj/ρi)/t for null fluxes in unidirectional transitions35. This regularization then allows one to estimate the entropy production directly from (2).

In this Letter, we put forward an alternative estimate of the entropy production rate, not based on (2). Instead, we introduce a lower bound on σ (see (15) with (3) below) using the very quantities needed for the estimation of (2), i.e., single fluxes. Nevertheless, our inequality can be tight and give an estimate of σ that outperforms (2) (and any standard TUR) in regimes lacking data, in which transitions between states may appear experimentally as unidirectional in a trajectory of finite duration t. The efficacy of our approach derives from several optimizations related to (i) the short-time limit τ → 018,19, (ii) the so-called hyperaccurate current32,33,50,51, and, most importantly, (iii) an uncertainty relation32,52 in which the specific presence of the inverse hyperbolic tangent boosts the lower limit imposed on σ by the TUR (1) (the hyperbolic tangent appears in several derivations of TURs29,30). In the extreme case in which all observed transitions appear unidirectional, the inequality simplifies to

$$\frac{\sigma }{{k}_{B}}\ge \kappa \log (\kappa \,t),\qquad {{{{\rm{for}}}}}\,\kappa \,t \, \gg \, 1,$$
(3)

where the average jumping rate (or dynamical activity, or frenesy53) κ characterizes the degree of agitation of the system. Thus, far from equilibrium, the nondissipative quantity κ bounds the average dissipation rate σ. Key to our approach is the assumption that the overall rate of all unobserved (reverse) transitions is of order t−1, while we make no assumption about the specific reverse rate of each unidirectional transition. Equation (3) can also be used if only forward rates are known analytically and backward rates are small and unknown; the chemical reactions described at the end of the paper fall into this scenario.

Results and discussion

Empirical estimation

Equation (2) holds for Markov jump processes with transition rates satisfying the local detailed balance condition \({w}_{ij}/{w}_{ji}={e}^{{s}_{ij}}\), where sij = −sji is the entropy increase in the environment (in units of kB = 1, hereafter) when transition (i, j) takes place. For these processes, the experimental data we consider are time series of states (i(0), i(1), …, i(n)) and of the corresponding jumping times (t(1), t(2), …, t(n) < t). The total residence time \({t}_{i}^{R}\) that a trajectory spends in a state i gives the empirical steady-state distribution \({p}_{i}={t}_{i}^{R}/t\), which approaches the steady-state probability distribution ρi for long times t.
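The residence-time bookkeeping above can be sketched as follows; the short trajectory in the example is hypothetical and only illustrates the accounting:

```python
def empirical_distribution(states, jump_times, t_total):
    """Empirical steady-state distribution p_i = t_i^R / t from a state
    sequence (i(0), ..., i(n)) and jumping times (t(1), ..., t(n) < t)."""
    residence = {}
    # each visit ends at the next jump, the last one at the final time t
    boundaries = list(jump_times) + [t_total]
    t_prev = 0.0
    for state, t_next in zip(states, boundaries):
        residence[state] = residence.get(state, 0.0) + (t_next - t_prev)
        t_prev = t_next
    return {s: ti / t_total for s, ti in residence.items()}

# Three visits: state 0 on [0, 0.5), state 1 on [0.5, 1.5), state 0 on [1.5, 2.0).
p = empirical_distribution(states=[0, 1, 0], jump_times=[0.5, 1.5], t_total=2.0)
print(p)  # -> {0: 0.5, 1: 0.5}
```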

The estimation of σ is based on empirical measurements of fluxes ϕij, which we define starting from the number nij = nij(t) of observed transitions (i, j),

$${\phi }_{ij}\simeq {\dot{n}}_{ij}\equiv \frac{{n}_{ij}}{t}.$$
(4)

If the observation time t is much larger than the largest time scale, i.e. \(t \, \gg \, {\tau }_{{{{{\rm{sys}}}}}}={({\min }_{i,j}{\phi }_{ij})}^{-1}\), the empirical flux \({\dot{n}}_{ij}\) converges to ϕij and the estimate of the entropy production rate simply becomes

$$\sigma \simeq {\sigma }_{{{{{\rm{emp}}}}}}\equiv {\sum}_{i < j}({\dot{n}}_{ij}-{\dot{n}}_{ji})\ln \frac{{\dot{n}}_{ij}}{{\dot{n}}_{ji}}.$$
(5)

However, our focus is on systems in which t < τsys, so that some transitions \((i,j)\in {{{{\mathcal{I}}}}}\) may never be observed (nij = 0) while their reverse ones (j, i) are (nji ≠ 0). In this case, the process appears absolutely irreversible54 and the estimate (5) built on (4) is inapplicable, as it would give an infinite entropy production. We assume that the network remains connected if one removes the transitions belonging to the set \({{{{\mathcal{I}}}}}\), so that the dynamics stays ergodic. Note that the case where both nij = 0 and nji = 0 poses no difficulty: one simply neglects the edge {i, j}, at the possible price of underestimating σ. We leave this understood and deal with the residual cases in which only one of the two countings is null.

If in a time t a transition from i to j is not observed, we conclude that the typical time scale of the transition is not shorter than t, that is ϕij ≲ t−1. A more quantitative argument35 suggests “curing” the numerical estimates of fluxes by introducing a similar minimal assumption,

$${\dot{n}}_{ij}=\left\{\begin{array}{ll}{n}_{ij}/t,\quad \hfill &{{{{\rm{if}}}}}\,{n}_{ij} \, > \, 0 \hfill \\ {p}_{j}/({p}_{i}t),\quad &{{{{\rm{if}}}}}\,(i,j)\in {{{{\mathcal{I}}}}}\\ 0,\quad \hfill&{{{{\rm{otherwise}}}}}, \hfill \end{array}\right.$$
(6)

and one then uses these regularized estimates of the ϕ’s in (5).
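A minimal sketch of the regularization (6), assuming the counts nij, the empirical occupations pi, and the trajectory duration t are available (the variable names are ours):

```python
def regularized_rates(counts, p, t):
    """Regularized empirical rates, Eq. (6): observed transitions get n_ij/t;
    an unobserved transition whose reverse was seen gets p_j/(p_i * t)."""
    n = len(counts)
    ndot = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if counts[i][j] > 0:
                ndot[i][j] = counts[i][j] / t
            elif counts[j][i] > 0:          # (i, j) belongs to the set I
                ndot[i][j] = p[j] / (p[i] * t)
            # if both countings are null, the edge {i, j} is neglected
    return ndot

# Illustrative counts: transition (1, 0) was never observed during t = 10.
ndot = regularized_rates(counts=[[0, 7], [0, 0]], p=[0.8, 0.2], t=10.0)
print(ndot)  # -> [[0.0, 0.7], [0.4, 0.0]]
```

The cured rates can then be inserted into (5) without producing divergent logarithms.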

Lower bounds to σ

To use TURs, we define currents by counting algebraically transitions between states. For example, for (i, j), an integrated current during a time τ is just the counting nij(τ) − nji(τ) during that period. By linearly combining single transition currents via antisymmetric weights cij = − cji (stored in a matrix c), one may define a generic current J = Jc = ∑i < jcij[nij(τ) − nji(τ)].

Among all possible currents, one can choose those giving the best lower bound to σ, for instance, by machine learning methods18,19,20. However, refs. 50,51 show that the TUR can be analytically optimized by choosing the hyperaccurate current Jhyp. In the limit τ → 0, the coefficients defining Jhyp take the simple form32

$${c}_{ij}^{{{{{\rm{hyp}}}}}}=\frac{{\phi }_{ij}-{\phi }_{ji}}{{\phi }_{ij}+{\phi }_{ji}}\simeq \frac{{\dot{n}}_{ij}-{\dot{n}}_{ji}}{{\dot{n}}_{ij}+{\dot{n}}_{ji}}.$$
(7)

Moreover, the TUR (1) is tightest when J is integrated over an infinitesimal time τ → 018,19, and it can become an equality in the limit τ → 0 only for overdamped Langevin dynamics. However, for Markovian stationary processes, Eq. (24) in ref. 32 states that

$$\frac{{\langle {J}_{\tau }\rangle }^{2}}{{{{{\rm{var}}}}}({J}_{\tau })\,\tau }\ge \frac{{\langle {J}_{{{{{\mathcal{T}}}}}}\rangle }^{2}}{{{{{\rm{var}}}}}({J}_{{{{{\mathcal{T}}}}}})\,{{{{\mathcal{T}}}}}}\qquad {{{{\rm{for}}}}}\quad J={J}^{{{{{\rm{hyp}}}}}},$$
(8)

where \({{{{\mathcal{T}}}}}\equiv M\tau\) is a time span collecting M short steps of duration τ. It is therefore useful to exploit the TUR in the short τ limit and define the short-time precision \({\mathfrak{p}}(J)\) as

$${\mathfrak{p}}(J)\equiv {\lim}_{\tau \to 0}\frac{{\langle J\rangle }^{2}}{{{{{\rm{var}}}}}(J)\,\tau }.$$
(9)

For Markov jump processes, for a generic J, we have

$$\langle J\rangle = {\sum}_{i < j}{c}_{ij}({\phi }_{ij}-{\phi }_{ji})\tau ,\\ {{{{\rm{var}}}}}(J) = {\sum}_{i < j}{c}_{ij}^{2}({\phi }_{ij}+{\phi }_{ji})\tau +O({\tau }^{2}),$$
(10)

and the related TUR in the limit τ → 0 is

$$\sigma \ge {\sigma }_{{{{{\rm{TUR}}}}}}^{J}=2\,{\mathfrak{p}}(J).$$
(11)

Focusing on the hyperaccurate current, which is also characterized by 〈Jhyp〉 = var(Jhyp), the optimized TUR reduces to \(\sigma \ge {\sigma }_{{{{{\rm{TUR}}}}}}^{{{{{\rm{hyp}}}}}}=2{{\mathfrak{p}}}^{{{{{\rm{hyp}}}}}}\) (involving the so-called pseudo-entropy55) with

$${{\mathfrak{p}}}^{{{{{\rm{hyp}}}}}}\equiv {\sum}_{i < j}\frac{{({\phi }_{ij}-{\phi }_{ji})}^{2}}{{\phi }_{ij}+{\phi }_{ji}}.$$
(12)

In the following, we will use an improvement of the TUR. We start from the implicit formula derived in ref. 52,

$${\mathfrak{p}}(J)\le \frac{{\sigma }^{2}}{4\kappa {f\left(\frac{\sigma }{2\kappa }\right)}^{2}}$$
(13)

which depends on the frenesy

$$\kappa = {\sum}_{i < j}({\phi }_{ij}+{\phi }_{ji}),$$
(14)

and the function f, defined as the inverse of \(x\tanh x\). We use the relation g(x) = x/f(x), with g the inverse function of \(x{\tanh }^{-1}x\) (see ref. 56), to turn (13) into an explicit lower bound on the entropy production

$$\sigma \ge 2\sqrt{{{\mathfrak{p}}}^{{{{{\rm{hyp}}}}}}\kappa }\,{\tanh }^{-1}\left(\sqrt{{{\mathfrak{p}}}^{{{{{\rm{hyp}}}}}}/\kappa }\right).$$
(15)

The inequality (15) reduces to the TUR close to equilibrium, where σ → 0, and is tighter than the kinetic uncertainty relation41 far from equilibrium where \(\tanh (\sigma /2\sqrt{{{\mathfrak{p}}}^{{{{{\rm{hyp}}}}}}\kappa })\to 1\), namely \(\kappa \ge {{\mathfrak{p}}}^{{{{{\rm{hyp}}}}}}\).
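A numerical sketch of the bound (15): from a flux matrix we evaluate the short-time hyperaccurate precision (12), the frenesy (14), and the inverse-hyperbolic-tangent bound. One can check that for a single pair of states the bound saturates, since 2(ϕ12 − ϕ21) tanh−1[(ϕ12 − ϕ21)/(ϕ12 + ϕ21)] = (ϕ12 − ϕ21) ln(ϕ12/ϕ21); the fluxes below are illustrative:

```python
import math

def tanh_bound(phi):
    """Lower bound (15) on sigma/k_B from the hyperaccurate precision (12)
    and the frenesy (14), both computed from a flux matrix phi[i][j]."""
    n = len(phi)
    p_hyp, kappa = 0.0, 0.0
    for i in range(n):
        for j in range(i + 1, n):
            s = phi[i][j] + phi[j][i]
            if s > 0.0:
                p_hyp += (phi[i][j] - phi[j][i]) ** 2 / s
                kappa += s
    # sigma >= 2*sqrt(p_hyp*kappa)*atanh(sqrt(p_hyp/kappa)); needs p_hyp < kappa
    return 2.0 * math.sqrt(p_hyp * kappa) * math.atanh(math.sqrt(p_hyp / kappa))

# Single-edge example: the bound equals (phi12 - phi21)*ln(phi12/phi21),
# whereas the plain TUR bound 2*p_hyp = 2*(0.4**2)/0.6 is looser.
phi = [[0.0, 0.5], [0.1, 0.0]]
print(tanh_bound(phi))  # -> 0.4*ln(5) ≈ 0.6438
```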

Empirically, for κt ≫ 1, we can approximate the optimized precision (12) and the frenesy (14) with

$${{\mathfrak{p}}}_{{{{{\rm{emp}}}}}}^{{{{{\rm{hyp}}}}}}={\sum}_{i < j}\frac{{({\dot{n}}_{ij}-{\dot{n}}_{ji})}^{2}}{{\dot{n}}_{ij}+{\dot{n}}_{ji}},$$
(16)
$${\kappa }_{{{{{\rm{emp}}}}}}= {\sum}_{i < j}({\dot{n}}_{ij}+{\dot{n}}_{ji}).$$
(17)

Neither of them requires the regularization (6). However, the (positive) argument of the \({\tanh }^{-1}\) function in (15) needs to be strictly lower than 1, i.e., \({{\mathfrak{p}}}_{{{{{\rm{emp}}}}}}^{{{{{\rm{hyp}}}}}} < {\kappa }_{{{{{\rm{emp}}}}}}\).

A problem arises with \({{\mathfrak{p}}}_{{{{{\rm{emp}}}}}}^{{{{{\rm{hyp}}}}}}\) when one measures irreversible transitions on all edges, \(\min ({\dot{n}}_{ij},{\dot{n}}_{ji})=0\) for every i and j: in that case, one can see that \({{\mathfrak{p}}}_{{{{{\rm{emp}}}}}}^{{{{{\rm{hyp}}}}}}={\kappa }_{{{{{\rm{emp}}}}}}\). To cure the divergence of \({\tanh }^{-1}\) that this would cause, we use an assumption milder than the requirement of reversibility for all transitions underlying (6).

If \({{\mathfrak{p}}}_{{{{{\rm{emp}}}}}}^{{{{{\rm{hyp}}}}}}={\kappa }_{{{{{\rm{emp}}}}}}\), we assume that the observation time t is barely larger than the typical time needed to have any reverse transition. By denoting a tiny rate of any unobserved transition \((i,j)\in {{{{\mathcal{I}}}}}\) as ϵij, the ratio of the precision of the hyperaccurate current over the frenesy becomes

$$\frac{{{\mathfrak{p}}}^{{{{{\rm{hyp}}}}}}}{\kappa } \, \simeq \, \frac{1}{{\sum}_{(i,j)\in {{{{\mathcal{I}}}}}}({\dot{n}}_{ji}+{\epsilon }_{ij})} {\sum}_{(i,j)\in {{{{\mathcal{I}}}}}}\frac{{({\dot{n}}_{ji}-{\epsilon }_{ij})}^{2}}{{\dot{n}}_{ji}+{\epsilon }_{ij}}\\ \simeq \, \frac{{\kappa }_{{{{{\rm{emp}}}}}}-3{\sum}_{(i,j)\in {{{{\mathcal{I}}}}}}{\epsilon }_{ij}}{{\kappa }_{{{{{\rm{emp}}}}}}+{\sum}_{(i,j)\in {{{{\mathcal{I}}}}}}{\epsilon }_{ij}}\\ \simeq \, 1-\frac{4}{{\kappa }_{{{{{\rm{emp}}}}}}}{\sum}_{(i,j)\in {{{{\mathcal{I}}}}}}{\epsilon }_{ij}\\ = \, 1-\frac{4}{{\kappa }_{{{{{\rm{emp}}}}}}\,t}\,\simeq 1-\frac{4}{\kappa \,t},$$
(18)

where we have set \({\sum }_{(i,j)\in {{{{\mathcal{I}}}}}}{\epsilon }_{ij}=1/t\) according to our assumption and we replaced κemp with κ because they give the same bound to leading order in t.

Hence, in an experiment measuring only irreversible transitions for t ≫ 1/κ, (15) with (18) still provides a lower bound on the entropy production rate:

$$\frac{\sigma }{{k}_{B}}\ge 2\kappa\, {\tanh }^{-1}\left(\sqrt{1-\frac{4}{\kappa \,t}}\right)$$
(19)

By Taylor expanding the argument of \({\tanh }^{-1}\) to leading order, (19) simplifies to the lower bound anticipated in (3), which is based simply on evaluating the frenesy κ empirically with (17).
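When all observed transitions are unidirectional, the bound (19) requires only the empirical frenesy (17). A sketch with illustrative counts for a three-state cycle (variable names are ours):

```python
import math

def irreversible_bound(counts, t):
    """Bound (19) for a trajectory showing only unidirectional transitions:
    sigma/k_B >= 2*kappa*atanh(sqrt(1 - 4/(kappa*t))), with the empirical
    frenesy (17) kappa = (total number of observed jumps) / t. For
    kappa*t >> 1 it approaches the frenetic bound (3), kappa*log(kappa*t)."""
    kappa = sum(sum(row) for row in counts) / t
    return 2.0 * kappa * math.atanh(math.sqrt(1.0 - 4.0 / (kappa * t)))

# Only forward jumps observed on a 3-state cycle: 100 jumps in t = 10
# give kappa = 10 and kappa*t = 100 >> 1.
counts = [[0, 60, 0],
          [0, 0, 40],
          [0, 0, 0]]
b = irreversible_bound(counts, t=10.0)
print(b)  # slightly below the asymptotic value kappa*log(kappa*t) ≈ 46.05
```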

Examples

Let us illustrate the performance of the various estimators of the entropy production rate with the four-state nonequilibrium model sketched in Fig. 1. Some transitions (black arrows) have a constant rate wij = 1 (in dimensionless units). Other transition rates depend on a nonequilibrium strength α, with α = 0 representing equilibrium: wij = eα (red arrows) and wij = e−3α (thin blue arrows). The latter class is the first to go undetected for sufficiently large α, which causes the empirical estimate σemp to start deviating from the theoretical value (α ≳ 1.7 in Fig. 1). However, σemp remains the best option for evaluating σ up to a value α ≈ 4, where the lower bound \({\sigma }_{\tanh }^{{{{{\rm{hyp}}}}}}\) takes over as the best σ estimator far from equilibrium. Interestingly, for α ≳ 4, the probability firr to measure a trajectory with only irreversible transitions (green curve in Fig. 1) is still small. Hence, it is the full scheme, (15) supplemented by the extreme case (3), that provides a good estimate of σ. For this setup, the TUR optimized with the hyperaccurate current is only useful close to equilibrium and is never the best option. For comparison with the hyperaccurate version, in Fig. 1, we also show the loose lower bound offered by the TUR for the current J defined on the single edge {i = 1, j = 2}.

Fig. 1: Estimation of the entropy production in a 4-state model.
figure 1

For the four-state model (inset; description in the subsection Examples), estimates of the entropy production rate and theoretical value σth (left axis), and the fraction of trajectories displaying only irreversible transitions (green curve, right axis), as a function of the nonequilibrium strength α. The sampling time is t = 103, and bands show one standard deviation of variability over trajectories. Highlighted regions: (i) the empirical estimate works well while lower bounds progressively depart from σth; (ii) σemp deviates from σth but remains the best estimator; (iii) the lower bound \({\sigma }_{\tanh }^{{{{{\rm{hyp}}}}}}\) is the best estimator.

To appreciate the influence of the trajectory duration on the estimators, in Fig. 2a, we plot the scaling with the sampling time t of σemp and \({\sigma }_{\tanh }^{{{{{\rm{hyp}}}}}}\). In this example, the farther one goes from equilibrium, the longer \({\sigma }_{\tanh }^{{{{{\rm{hyp}}}}}}\) remains the better estimator. This aspect can be crucial if, for experimental limitations, one is restricted to a finite t. It is also emphasized in Fig. 2b, where we plot the time at which σemp becomes larger than \({\sigma }_{\tanh }^{{{{{\rm{hyp}}}}}}\), as a function of the true dissipation rate σth. Again, it shows that in more dissipative regimes, one can rely on \({\sigma }_{\tanh }^{{{{{\rm{hyp}}}}}}\) if trajectories are too short to obtain a good estimate of σth with σemp.

Fig. 2: Estimation of the entropy production as a function of the sampling time.
figure 2

a For three values of the nonequilibrium parameter α (see legend) in the model of Fig. 1, scaling with the sampling time t of the estimators σemp and \({\sigma }_{\tanh }^{{{{{\rm{hyp}}}}}}\), and theoretical values (horizontal thick lines). Bands show one standard deviation variability over trajectories. b Time when σemp becomes larger than \({\sigma }_{\tanh }^{{{{{\rm{hyp}}}}}}\), as a function of σth.

The second example shows how our lower bound may scale favorably with the system size compared to the other σ estimators. We study a periodic ring of N states with local energy \({u}_{i}=-\cos (2\pi i/N)\) and transition rates wi,i+1 = 1, \({w}_{i,i-1}=\exp [-\alpha /N+{u}_{i}-{u}_{i-1}]\) (the temperature is T = 1). Figure 3 shows the ratios of \({\sigma }_{{{{{\rm{TUR}}}}}}^{{{{{\rm{hyp}}}}}}\), \({\sigma }_{\tanh }^{{{{{\rm{hyp}}}}}}\), and σemp over the theoretical value σth, as a function of the nonequilibrium force α, both for a ring with N = 10 and for a longer ring with N = 20 states. For each N, we see that \({\sigma }_{\tanh }^{{{{{\rm{hyp}}}}}}\) is the best estimator of the entropy production rate far from equilibrium. Furthermore, it also appears to be the estimator that scales best with increasing system size: its plateau at σ/σth ≃ 1 extends linearly with N (the inset of Fig. 3 shows the value α* where σ/σth drops below the arbitrary threshold 0.9, divided by N, as a function of N), while this is not the case for the empirical estimate σemp. These results suggest \({\sigma }_{\tanh }^{{{{{\rm{hyp}}}}}}\) as a valuable resource in the continuum limit toward deterministic, macroscopic conditions. Next, we explore this possibility.

Fig. 3: Estimation of the entropy production for a ring model.
figure 3

For the ring model described in the subsection Examples, we show various estimates of the entropy production rate divided by the theoretical value, for t = 104, as a function of the nonequilibrium strength α/N, for ring lengths N = 10 and N = 20. The inset shows α*/N. For α → 0, the standard deviation of σ/σth (shaded bands) is amplified because σth → 0 as well.

Deterministic limit

Our approach extends to deterministic dynamics resulting from the macroscopic limit of underlying Markov jump processes for many interacting particles, such as driven or active gases57,58, diffusive and reacting particles59,60, mean-field Ising and Potts models61,62, and charges in electronic circuits63. In these models, the state’s label i is a vector with entries that indicate the number of particles of a given type. The system becomes deterministic when the typical number of particles goes to infinity, controlled by a parameter such as a system size V → ∞. In this limit, it is customary to introduce continuous states x = i/V64, e.g., a vector of concentrations, such that wij = Vωr(x), where ±r labels the transitions to (or from) the states infinitesimally close to x. Such transition-rate scaling is equivalent to measuring events in a macroscopic time \(\tilde{t}\equiv tV\). The probability pi ~ δ(x − x*) peaks around the most probable state 〈x〉 ≡ x*. The entropy production rate (5) becomes extensive in V65, and its density takes the deterministic value

$$\tilde{\sigma }\equiv \frac{\sigma }{V}= {\sum}_{r > 0}[{\omega }_{r}({x}^{* })-{\omega }_{-r}({x}^{* })]\ln \frac{{\omega }_{r}({x}^{* })}{{\omega }_{-r}({x}^{* })},$$
(20)

with ω±r(x*) the macroscopic fluxes.

We are interested in the case where backward transitions (r < 0) are practically not observable, i.e., the fluxes ω−∣r(x*) are negligibly small compared to the experimental errors. Since κ is also extensive in V, we define the frenesy density \(\tilde{\kappa }\equiv \kappa /V={\sum }_{r}{\omega }_{r}({x}^{* })\) and apply (3) as

$$\tilde{\sigma }\ge \tilde{\kappa }\log (\tilde{\kappa }\,\tilde{t}).$$
(21)

The formula above holds for \(\tilde{t} \, \ll \, 1/\max ({\omega }_{-| r| }({x}^{* }))\), which is the typical time when backward fluxes become sizable.
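As a sketch, the frenetic bound (21) needs only the forward macroscopic fluxes and the macroscopic observation time; the fluxes in the example below are hypothetical placeholders, not those of the network studied next:

```python
import math

def frenetic_bound(omega_fwd, t_macro):
    """Bound (21): sigma_tilde >= kappa_tilde * log(kappa_tilde * t_tilde),
    with frenesy density kappa_tilde = sum of the forward fluxes omega_r(x*)."""
    kappa = sum(omega_fwd)
    return kappa * math.log(kappa * t_macro)

def sigma_deterministic(omega_fwd, omega_bwd):
    """Reference value (20), usable only when backward fluxes are also known."""
    return sum((wf - wb) * math.log(wf / wb)
               for wf, wb in zip(omega_fwd, omega_bwd))

# Hypothetical fluxes with tiny backward components: as expected, the
# frenetic bound stays below the full deterministic value (20).
fwd, bwd = [2.0, 1.0], [1e-5, 1e-5]
print(frenetic_bound(fwd, 10.0), sigma_deterministic(fwd, bwd))
```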

We compare (20) and (21) for systems of chemical reactions with mass action kinetics, i.e., each flux ω±r(x*) is given by the product of reactant concentrations times the rate constant k±r66. In particular, we take the following model of two chemical species, X and Y, with uni- and bimolecular reactions,

$$\begin{array}{l}{{\emptyset}}{\rightleftharpoons}_{{k}_{-1}}^{{k}_{+1}}{{{\rm{X}}}}\qquad 2\,{{{\rm{X}}}}{\rightleftharpoons}_{{k}_{-2}}^{{k}_{+2}}{{{\rm{X}}}}+{{{\rm{Y}}}}\\ {{{\rm{X}}}}{\rightleftharpoons}_{{k}_{-3}}^{{k}_{+3}}{{{\rm{Y}}}}\qquad {{{\rm{Y}}}}{\rightleftharpoons}_{{k}_{-4}}^{{k}_{+4}}{{\emptyset}}.\end{array}$$
(22)

Figure 4a shows, for a specific set of rate constants, that the bound (21) outperforms the empirical estimation (20) at all times when the dynamics appears absolutely irreversible. This occurs for all randomly drawn values of the rate constants (Fig. 4b). Additionally, we plot the histogram of times \({\tilde{t}}_{q}\) at which (21) and (20) reach the fraction q < 1 of the theoretical entropy production rate, for q = 0.6 (Fig. 4c) and q = 0.8 (Fig. 4d). Times resulting from (21) are significantly shorter than those from the empirical measure (20).

Fig. 4: Estimation of the entropy production in a deterministic chemical reaction network.
figure 4

a Estimates (20) and (21) normalized to the true σth as a function of macroscopic time \(\tilde{t}\) up to the inverse of the maximum flux, for the chemical reaction network (22) with forward rate constants k+r = {5, 2, 1, 0.2} and backward kr = 10−5. b With 2000 random realizations of the chemical reaction network (uniformly sampled rate constants k+r ∈ [10−2, 102], and kr = 10−5), estimate (20) vs. (21), both normalized to the true \({\tilde{\sigma }}_{{{{{\rm{th}}}}}}\), for a fixed time \(\tilde{t}=0.1/\max ({\omega }_{r}({x}^{* }))\). c Probability distribution of \({\tilde{t}}_{q}\) for q = 0.6, and d q = 0.8.

The application of (3) is possible either when the forward rates are analytically known (as in the example in Fig. 4) or when they can be experimentally reconstructed. Direct measurement of chemical fluxes is a challenging task. One case in which they can be measured with high precision is photochemical reactions, in which the emission of photons at a specific known frequency signals the reaction events (see ref. 47 and references therein). In general, one can reliably measure only concentrations, from which reaction fluxes can be extracted only in specific networks (and if the reaction constants are known). We note, however, that the same method exemplified with chemical reactions can be used, e.g., for electronic circuits, where counting electron fluxes between resistive elements is much easier.

Conclusion

In summary, the lower bound (15), which turns into (3) for systems that appear absolutely irreversible, is more effective than the direct estimate (5) with (6) when data are lacking. Knowledge of the macroscopic fluxes is enough to apply our formula (3), which outperforms the direct estimation in strongly irreversible systems where all backward fluxes are undetectable. Thus, we provide an effective tool to estimate the dissipation in biological and artificial systems, whose performance is limited by energetic constraints7,67,68.

Methods

The dynamics of the stochastic model systems described in the subsection Examples and of the deterministic chemical reaction network described in the subsection Deterministic limit were obtained by the standard Gillespie algorithm and by numerical integration of the rate equations, respectively.
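For completeness, a minimal (non-optimized) sketch of the Gillespie algorithm for a Markov jump process with rate matrix w[i][j]; it assumes every state has at least one outgoing transition:

```python
import random

def gillespie(w, i0, t_max, seed=0):
    """Standard Gillespie simulation: the waiting time is exponential with
    the total escape rate, and the next state is drawn with probability
    proportional to w[i][j]. Returns the state sequence and jumping times."""
    rng = random.Random(seed)
    i, t = i0, 0.0
    states, times = [i0], []
    while True:
        total = sum(w[i])                # total escape rate from state i
        t += rng.expovariate(total)      # exponential waiting time
        if t >= t_max:
            return states, times
        r, acc = rng.uniform(0.0, total), 0.0
        for j, wij in enumerate(w[i]):   # pick jump with probability w_ij/total
            acc += wij
            if r < acc:
                break
        i = j
        states.append(i)
        times.append(t)

# Two-state example with unit rates: the trajectory alternates 0, 1, 0, 1, ...
states, times = gillespie([[0.0, 1.0], [1.0, 0.0]], i0=0, t_max=50.0)
```

The state and time lists returned here have exactly the format assumed in the subsection Empirical estimation.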