# Canonical Structure and Orthogonality of Forces and Currents in Irreversible Markov Chains


## Abstract

We discuss a canonical structure that provides a unifying description of dynamical large deviations for irreversible finite state Markov chains (continuous time), Onsager theory, and Macroscopic Fluctuation Theory (MFT). For Markov chains, this theory involves a non-linear relation between probability currents and their conjugate forces. Within this framework, we show how the forces can be split into two components, which are orthogonal to each other, in a generalised sense. This splitting allows a decomposition of the pathwise rate function into three terms, which have physical interpretations in terms of dissipation and convergence to equilibrium. Similar decompositions hold for rate functions at level 2 and level 2.5. These results clarify how bounds on entropy production and fluctuation theorems emerge from the underlying dynamical rules. We discuss how these results for Markov chains are related to similar structures within MFT, which describes hydrodynamic limits of such microscopic models.

## Keywords

Nonequilibrium dynamical fluctuations · Large deviations · Macroscopic fluctuation theory · Irreversible Markov chains

## Mathematics Subject Classification

82C22 · 82C35 · 60J27 · 60F10

## 1 Introduction

We consider dynamical fluctuations in systems described by Markov chains. The nature of such fluctuations in physical systems constrains the mathematical models that can be used to describe them. For example, there are well-known relationships between equilibrium physical systems and detailed balance in Markov models [20, Sect. 5.3.4]. Away from equilibrium, fluctuation theorems [12, 19, 25, 32, 37] and associated ideas of local detailed balance [32, 39] have shown how the entropy production of a system must be accounted for correctly when modelling physical systems. However, the mathematical structures that determine the probabilities of non-equilibrium fluctuations are still only partially understood.

We characterise dynamical fluctuations using an approach based on the *Onsager–Machlup (OM) theory* [36], which is concerned with fluctuations of macroscopic properties of physical systems (for example, density or energy). Associated to these fluctuations is a *large-deviation principle* (LDP), which encodes the probability of rare dynamical trajectories. The classical ideas of OM theory have been extended in recent years, through the *Macroscopic Fluctuation Theory* (MFT) of Bertini et al. [7]. This theory uses an LDP to describe path probabilities for the density and current in diffusive systems, on the hydrodynamic scale. At the centre of MFT is a decomposition of the current into two orthogonal terms, one of which is symmetric under time-reversal, and another which is anti-symmetric. The resulting theory is a general framework for the analysis of dynamical fluctuations in a large class of non-equilibrium systems. It also connects dynamical fluctuations with thermodynamic quantities like free energy and entropy production, and with associated non-equilibrium objects like the quasi-potential (which extends the thermodynamic free energy to non-equilibrium settings).

Here, we show how several features that appear in MFT can be attributed to a general structure that characterises dynamical fluctuations in microscopic Markov models. That is, the properties of the hydrodynamic (MFT) theory can be traced back to the properties of the underlying stochastic processes. Our approach builds on recent work by Mielke, Renger and M. A. Peletier, in which the analogue of the OM theory for reversible Markov chains has been described in terms of a *generalised gradient-flow structure* [43]. To describe non-equilibrium processes, that theory must be generalised to include irreversible Markov chains. This can be achieved using the canonical structure of fluctuations discovered by Maes and Netočný [38]. Extending their approach, we decompose currents in the system into two parts, and we identify a kind of orthogonality relationship associated with this decomposition. However, in contrast to the classical OM theory and to MFT, the large deviation principles that appear in our approach have non-quadratic rate functions, which means that fluxes have non-linear dependence on their conjugate forces. Thus, the idea of orthogonality between currents needs to be generalised, just as the notion of gradient flows in macroscopic equilibrium systems can be extended to generalised gradient flows.

The central players in our analysis are the probability density \(\rho \) and the probability current *j*. For a given Markov chain, the relation between these quantities is fully encoded in the master equation, which also fully specifies the dynamical fluctuations in that model. However, thermodynamic aspects of the system—the roles of heat, free energy, and entropy production—are not apparent in the master equation. Within the Onsager–Machlup theory, these thermodynamic quantities appear in the action functional for paths, and solutions of the master equation appear as paths of minimal action. Hence, the structure that we discuss here, and particularly the decomposition of the current into two components, links the dynamical properties of the system to thermodynamic concepts, both for equilibrium and non-equilibrium systems.

### 1.1 Summary

We now sketch the setting considered in this article (precise definitions of the systems of interest and the relevant currents, densities and forces will be given in Sect. 2).

We consider the empirical density \(\hat{\rho }_t^{\;\!{\mathcal {N}}}\) of \(\mathcal {N}\) copies of a system, on a time interval \([0,T]\). Consider a random initial condition such that \(\mathrm {Prob}( \hat{\rho }_0^{\;\!{\mathcal {N}}} \approx \rho ) \asymp \exp [-\mathcal {N} I_0(\rho ) ]\), asymptotically as \(\mathcal {N}\rightarrow \infty \), for some rate functional \(I_0\). Paths that in addition satisfy a continuity equation \(\dot{\rho } + {\text {div}} j =0\) have the asymptotic probability

*rate functional*

*generalised OM functional*, which has the general form

*j* and a force *f*, while \(\varPsi \) and \(\varPsi ^\star \) are a pair of functions which satisfy

*f* indicates a force, while *F* is a function whose (density-dependent) value is a force.

The large deviation principle stated in (1) is somewhat abstract: for example, \(\hat{\rho }_t^{\;\!{\mathcal {N}}}\) might be defined as a density on a discrete space or on \(\mathbb {R}^d\), depending on the system of interest. Specific examples will be given below. In addition, all microscopic parameters of the system (particle hopping rates, diffusion constants, etc.) will enter the (system-dependent) functions \(\varPsi \), \(\varPsi ^\star \) and *F*.

*n* currents \(j=(j^\alpha )_{\alpha =1}^n\) and a set of conjugate applied forces \(F=(F^\alpha )_{\alpha =1}^n\). Examples of currents might be particle flow or heat flow, and the relevant forces might be pressure or temperature gradients. The large parameter \({\mathcal {N}}\) corresponds to the size of a macroscopic system. The theory aims to describe the typical (average) response of the current *j* to the force *F*, and also the fluctuations of *j*. In this (simplest) case, the density \(\rho \) plays no role, so the force *F* has a fixed value in \(\mathbb {R}^n\). The dual pairing is simply \(j\cdot f = \sum _\alpha j^\alpha f^\alpha \) and \(\varPsi \) is given by \(\varPsi (\rho ,j)=\frac{1}{2} \sum _{\alpha ,\beta } j^\alpha R^{\alpha \beta } j^\beta \), where *R* is a symmetric \(n\times n\) matrix with elements \(R^{\alpha \beta }\). The Legendre dual of \(\varPsi \) is \(\varPsi ^\star (\rho ,f) =\frac{1}{2} \sum _{\alpha ,\beta } f^\alpha L^{\alpha \beta } f^\beta \), where \(L=R^{-1}\) is the *Onsager matrix*, whose elements are the linear response coefficients of the system. One sees that \(\varPsi \) and \(\varPsi ^\star \) can be interpreted as squared norms for currents and forces respectively. Denoting this norm by \(\Vert j \Vert ^2_{L^{-1}} := \varPsi (\rho ,j)\), one has

For a given force *F*, the response of the current *j* is obtained as the minimum of \(\varPhi \), so \(j=LF\) (that is, \(j^\alpha = \sum _\beta L^{\alpha \beta } F^\beta \)). One sees that \(\varPhi \) measures the deviation of the current *j* from its expected value *LF*, within an appropriate norm. From the LDP (1), one sees that the size of this deviation determines the probability of observing a current fluctuation of this size.

In this article, we show in Sect. 2 that finite Markov chains have an LDP rate functional of the form (3), where \(\varPhi \) (and thus \(\varPsi ^\star \)) are *not* quadratic. In that case, \(\rho \) and *j* correspond to probability densities and probability currents, while the transition rates of the Markov chain determine the functions *F*, \(\varPsi \) and \(\varPsi ^\star \). Since \(\varPsi \) and \(\varPsi ^\star \) measure respectively the sizes of the currents and forces, we interpret them as generalisations of the squared norms that appear in the classical case. The resulting \(\varPhi \) is not a squared norm, but it is still a non-negative function that measures the deviation of *j* from its most likely value. This leads to nonlinear relations between forces and currents. The MFT theory [7] also fits in this framework, as we show in Sect. 4: in that case \(\rho ,j\) are a particle density and a particle current. However, there are relationships between the functions \(\varPhi \) for MFT and for general Markov chains, as we discuss in Sect. 4.5.

Hence, the general structure of Eqs. (1)–(4) describes classical OM theory [36], MFT, and finite Markov chains. A benefit is that the terms have a physical interpretation. For a path \((\rho , j)\), the time-reversed path is \((\rho ^*_t,j^*_t):=(\rho _{T-t},-j_{T-t})\). Since both \(\varPsi \) and \(\varPsi ^\star \) are symmetric in their second argument and thus invariant under time reversal, it holds that \(\varPhi (\rho ,j,f) - \varPhi (\rho ^*,j^*,f) = - 2 j \cdot f\). This allows us to identify \(j\cdot F(\rho )\) as a rate of entropy production. In contrast, the term \(\varPsi (\rho ,j) + \varPsi ^\star (\rho , {F(\rho )})\) is symmetric under time reversal and encodes the frenesy (see [3]). Thus, within this general structure, the physical significance of Eqs. (1)–(4) is that they connect path probabilities to physical notions such as force, current, entropy production and breaking of time-reversal symmetry. Furthermore, we introduce in Sect. 3 decompositions of forces and the (path-wise) rate functional. Sect. 4 shows that some results of MFT originate from generalised orthogonalities of the underlying Markov chains derived in Sect. 3. Similar results hold for time-average large deviation principles, as shown in Sect. 5. In Sect. 6, we show how some properties of MFT can be derived directly from the canonical structure (1)–(4), independent of the specific models of interest. Hence these results of MFT have analogues in Markov chains. Finally we briefly summarise our conclusions in Sect. 7.

## 2 Onsager–Machlup Theory for Markov Chains

In this section, we collect results on forces and currents in Markov chains and on associated LDPs. In particular, we recall the setting of [38, 39]; other references for this section are for example [49] (for the definition of forces and currents in Markov chains) and [43] for LDPs.

### 2.1 Setting

We consider an irreducible continuous time Markov chain \(X_t\) on a finite state space *V* with a unique stationary distribution \(\pi \) that satisfies \(\pi (x)>0\) for all \(x\in V\). The transition rate from state *x* to state *y* is denoted with \(r_{xy}\). We assume that \(r_{xy}>0\) if and only if \(r_{yx}>0\).
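As a concrete numerical illustration (ours, not part of the original text), the stationary distribution \(\pi \) of a small irreducible chain can be computed as the left null vector of the generator; the rate matrix below is a hypothetical example.

```python
import numpy as np

# A hypothetical irreducible 3-state chain: r[x, y] is the transition rate
# from x to y (off-diagonal), with r_xy > 0 iff r_yx > 0, as assumed above.
r = np.array([[0.0, 2.0, 1.0],
              [1.0, 0.0, 3.0],
              [4.0, 1.0, 0.0]])

# Generator: L[x, y] = r_xy for x != y, and L[x, x] = -sum_y r_xy.
L = r - np.diag(r.sum(axis=1))

# The stationary distribution pi solves pi L = 0 with sum(pi) = 1:
# take the left null vector of L (eigenvector of L^T for eigenvalue 0).
w, V = np.linalg.eig(L.T)
pi = np.real(V[:, np.argmin(np.abs(w))])
pi = pi / pi.sum()

assert np.all(pi > 0)              # pi(x) > 0 for all x, as stated above
assert np.allclose(pi @ L, 0.0)    # stationarity: pi L = 0
```

For larger state spaces one would solve the linear system directly rather than via a full eigendecomposition; this sketch only fixes notation for the examples below.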

We restrict to finite Markov chains for simplicity: the theory can be extended to countable state Markov chains, but this requires some additional assumptions. Briefly, one requires that the Markov chain is positive recurrent and ergodic (see for instance [9]), for which it is sufficient that (i) the transition rates are not degenerate: \(\sum _{y\in V}r_{xy}<\infty \) for all \(x\in V\), and (ii) for each \(x\in V\), almost all trajectories of the Markov chain started in *x* do not exhibit infinitely many jumps in finite time (“no explosion”). Second, one has to invoke a summability condition for the currents considered below (see, e.g., Eqs. 9 and 10), such that in particular the discrete integration by parts (or summation by parts) formula (15) holds. Finally, note that the cited result for existence and uniqueness of the optimal control potential (the solution to (70)) is only valid for finite state Markov chains.

*V* and edges \(E=\left\{ xy \bigm | x,y\in V, r_{xy}>0\right\} \), such that \(xy\in E\) if and only if \(yx\in E\). Let \(\rho \) be a probability measure on *V*. We define rescaled transition rates with respect to \(\pi \) as

*detailed balance* condition \(\pi (x) r_{xy} = \pi (y) r_{yx}\) reads \(q_{xy} = q_{yx}\), so this equality holds precisely if the Markov chain is reversible (i.e. satisfies detailed balance). In general (not assuming reversibility), since \(\pi \) is the invariant measure for the Markov chain, one has (for all *x*) that

*free energy* \(\mathcal {F}\) on *V* to be the *relative entropy* (or *Kullback–Leibler divergence*) with respect to \(\pi \),

*probability current* \(J(\rho )\) is defined as [49, Eq. (7.4)]

*j* such that \(j_{xy}=-j_{yx}\), we define the *divergence* as

*j* is *divergence free* if \({\mathrm{div}\,}j(x) = 0\) for every \(x \in V\). The time evolution of the probability density \(\rho \) is then given by the master equation

### 2.2 Non-linear Flux–Force Relation and the Associated Functionals \(\varPsi \) and \(\varPsi ^\star \)

To apply the theory outlined in Sect. 1.1, the next step is to identify the appropriate forces \(F(\rho )\) and also a set of mobilities \(a(\rho )\). In this section we define these forces, following [38, 39, 49]. This amounts to a reparameterisation of the rates of the Markov process in terms of physically-relevant variables: an example is given in Sect. 3.5.

*E* we assign a *force* *F* and a *mobility* *a*, as

*affinity* [49, Eq. (7.5)]; see also [1]. With this definition, the probability current (9) is

*non-linear* relation between forces and fluxes, although one recovers a linear structure for small forces (recall the classical theory in Sect. 1.1, for which \(j=Lf\)).
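The non-linear flux-force relation can be checked numerically. We assume here the standard parameterisation \(F_{xy}(\rho )=\log [\rho (x)r_{xy}/\rho (y)r_{yx}]\) and \(a_{xy}(\rho )=2\sqrt{\rho (x)r_{xy}\,\rho (y)r_{yx}}\) (an assumption on our part, since the displayed Eq. (12) is not reproduced above); with these definitions the current satisfies \(J_{xy}=a_{xy}\sinh (F_{xy}/2)\).

```python
import numpy as np

# Assumed standard parameterisation (cf. [38, 39, 49]):
#   F_xy(rho) = log( rho(x) r_xy / (rho(y) r_yx) )   (force)
#   a_xy(rho) = 2 sqrt( rho(x) r_xy rho(y) r_yx )    (mobility)
# Then J_xy = rho(x) r_xy - rho(y) r_yx = a_xy sinh(F_xy / 2),
# a non-linear flux-force relation, linear only for small F.
r = np.array([[0.0, 2.0, 1.0],
              [1.0, 0.0, 3.0],
              [4.0, 1.0, 0.0]])
rho = np.array([0.5, 0.3, 0.2])

mask = r > 0
q_fwd = rho[:, None] * r      # rho(x) r_xy
q_bwd = (rho[:, None] * r).T  # rho(y) r_yx

F = np.zeros_like(r)
F[mask] = np.log(q_fwd[mask] / q_bwd[mask])
a = 2.0 * np.sqrt(q_fwd * q_bwd)

J = q_fwd - q_bwd
assert np.allclose(F, -F.T)                                # F_xy = -F_yx
assert np.allclose(J[mask], (a * np.sinh(F / 2.0))[mask])  # J = a sinh(F/2)
```

The identity holds because \(2\sqrt{AB}\sinh (\tfrac{1}{2}\log \tfrac{A}{B})=A-B\) with \(A=\rho (x)r_{xy}\), \(B=\rho (y)r_{yx}\).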

*j* defined on *E*, with \(j_{xy}=-j_{yx}\), and a general force *f* that satisfies \(f_{xy}=-f_{yx}\) (which is not in general given by (12)). Define a dual pair on *E* as

*E* is a set of directed edges, so it contains both *xy* and *yx*, which have the same contribution to \(j\cdot f\)).

*j*.

*f* and \(\varPsi (\rho ,j)\) is a measure of the magnitude of the current *j*. Consistent with this interpretation, note that \(\varPsi \) and \(\varPsi ^\star \) are symmetric in their second arguments. Moreover, for small forces and currents, \(\varPsi ^\star \) and \(\varPsi \) are quadratic in their second arguments, and can be interpreted as generalisations of squared norms of the force and current respectively. Note that Eqs. (16) and (18) can alternatively be represented as
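To illustrate the symmetry and the small-force quadratic behaviour, one can take \(\varPsi ^\star (\rho ,f)=\tfrac{1}{2}\sum _{xy}a_{xy}(\rho )[\cosh (f_{xy}/2)-1]\) as a working assumption (the displayed Eq. (16) is not reproduced above, and the prefactor is a convention); the sketch below checks that this \(\varPsi ^\star \) is even in *f* and reduces to a quadratic form for small forces.

```python
import numpy as np

def psi_star(a, f):
    # Assumed form: Psi*(rho, f) = (1/2) sum_{xy} a_xy [cosh(f_xy/2) - 1];
    # the 1/2 compensates the double count over directed edges xy and yx.
    return 0.5 * np.sum(a * (np.cosh(f / 2.0) - 1.0))

# Hypothetical symmetric mobilities and an antisymmetric force field.
a = np.array([[0.0, 1.2, 0.7],
              [1.2, 0.0, 2.0],
              [0.7, 2.0, 0.0]])
f = np.array([[0.0, 0.5, -0.3],
              [-0.5, 0.0, 0.8],
              [0.3, -0.8, 0.0]])

# Symmetry in the second argument: Psi*(rho, f) = Psi*(rho, -f).
assert np.isclose(psi_star(a, f), psi_star(a, -f))

# Small forces: cosh(u) - 1 ~ u^2/2, so Psi*(rho, eps f) ~ eps^2/16 sum a f^2.
for eps in [1e-2, 1e-3]:
    quad = eps**2 / 16.0 * np.sum(a * f**2)
    assert abs(psi_star(a, eps * f) - quad) < 1e-3 * quad
```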

### 2.3 Large Deviations and the Onsager–Machlup Functional

As anticipated in Sect. 1.1, the motivation for the definitions of \(\varPsi \), \(\varPsi ^\star \), and *F* is that there is a large deviation principle for these Markov chains, whose rate function is of the form given in (2). This large deviation principle appears when one considers \(\mathcal {N}\) independent copies of the Markov chain.

*i*th copy of the Markov chain by \(X^i_t\) and define the empirical density for this copy as \(\hat{\rho }^{\;\!i}_t(x)=\delta _{X^i_t,x}\), where \(\delta \) is a Kronecker delta function. Let the times at which the Markov chain \(X^i_t\) has jumps in \([0,T]\) be \(t_1^i, t_2^i, \dots , t^i_{K_i}\). Further denote the state just before the *k*th jump with \(x_{k-1}^i\) (such that the state after the *k*th jump is \(x_{k}^i\)). With this, the empirical current is given by

*T*] and consider the large \(\mathcal {N}\) limit. We assume that the \(\mathcal {N}\) copies at time \(t=0\) have initial conditions drawn from the invariant measure of the process (the generalisation to other initial conditions is straightforward). Then, the probability to observe a joint density and current \((\rho _t, j_t)_{t\in [0,T]}\) over the time interval \([0,T]\) is in the limit as \(\mathcal {N}\rightarrow \infty \) given by (1). That is,

We emphasise that the arguments \(\rho \) and *j* of the function \(\varPhi \) correspond to the random variables that appear in the LDP, while the functions *F*, \(\varPsi \) and \(\varPsi ^\star \) that appear in \(\varPhi \) encapsulate the transition rates of the Markov chain. Thus, by reparameterising the rates \(r_{xy}\) in terms of forces *F* and mobilities *a*, we arrive at a representation of the rate function which helps to make its properties transparent (convexity, positivity, symmetries such as (25)).

We note that for reversible Markov chains, the force \(F(\rho )\) is a pure gradient \(F=\nabla G\) for some potential *G* (see Sect. 3), in which case one may write \(j\cdot F=\sum _x \dot{\rho }(x) G(x)\), which follows from an integration by parts and application of the continuity equation. In this case, Mielke, M. A. Peletier, and Renger [43] also identified a slightly different canonical structure to the one presented here, in which the dual pairing is \(\sum _x v(x) G(x)\), for a velocity \(v(x)=\dot{\rho }(x)\) and a potential *G*. The analogues of \(\varPsi \) and \(\varPsi ^\star \) in that setting depend on *v* and *G* respectively, instead of *j* and *F*. The setting of (3) and (4) is more general, in that the functions \(\varPsi ,\varPsi ^\star \) for the velocity/potential setting are fully determined by those for the current/force setting. Also, focusing on the velocity *v* prevents any analysis of the divergence-free part of the current, and restricting to potential forces does not generalise in a simple way to irreversible Markov chains. For this reason, we use the current/force setting in this work.

In a separate development, Maas [35] identified a quadratic cost function for paths (in fact a metric structure) for which the master equation (11) is the minimiser in the case of reversible dynamics. This metric corresponds to the solution of an optimal mass transfer problem which seems to have no straightforward extension to irreversible systems. Of course, in the reversible case, the pathwise rate function (24) has the same minimiser, but is non-quadratic and therefore does not correspond to a metric structure, so there is no simple geometrical interpretation of (24). It seems that the non-quadratic structure in the rate function is essential in order to capture the large deviations encoded by (23).

### 2.4 Time-Reversal Symmetry, Entropy Production, and the Gallavotti–Cohen Theorem

The rate function for the large-deviation principle (23) is given by (24), which has been written in terms of forces *F*, currents *j*, and densities \(\rho \). To explain why it is useful to write the rate function in this way, we compare the probability of a path \((\rho _t,j_t)_{t\in [0,T]}\) with that of its time-reversed counterpart \((\rho _t^*,j_t^*)_{t\in [0,T]}\), where \((\rho ^*_t,j^*_t) =(\rho _{T-t},-j_{T-t})\) as before.

*F* in (12) has been chosen so that the dual pairing \(j\cdot F\) is equal to this rate of heat flow: this means that the forces and currents are conjugate variables, just as (for example) pressure and volume are conjugate in equilibrium thermodynamics. See also the example in Sect. 3.5.

## 3 Decomposition of Forces and Rate Functional

We now introduce a splitting of the force \(F(\rho )\) into two parts \(F^S(\rho )\) and \(F^A\), which are related to the behaviour of the system under time-reversal, as well as to the splitting of the heat current into “excess” and “housekeeping” contributions [50]. We use this splitting to decompose the function \(\varPhi \) into three pieces, which allows us to compare (for example) the behaviour of reversible and irreversible Markov chains. This splitting also mirrors a similar construction within MFT [7], and this link will be discussed in Sect. 4. Related splittings have been introduced elsewhere; see [30] and [47] for decompositions of forces in stochastic differential equations, and [13] for decompositions of the instantaneous current in interacting particle systems.

### 3.1 Splitting of the Force According to Time-Reversal Symmetry

We define the *adjoint process* associated with the original Markov chain of interest. The transition rates of the adjoint process are \(r^*_{xy}:=\pi (y)r_{yx}\pi (x)^{-1}\). It is easily verified that the adjoint process has invariant measure \(\pi \), so \(q^*_{xy}:=\pi (x)r^*_{xy}=q_{yx}\). Under the assumption that the initial distribution is sampled from the steady state, the probability to observe a trajectory for the adjoint process coincides with the probability to observe the time-reversed trajectory for the original process.
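A quick numerical check, on a hypothetical irreversible chain, confirms the two stated properties of the adjoint rates \(r^*_{xy}=\pi (y)r_{yx}/\pi (x)\): they leave \(\pi \) invariant and satisfy \(q^*_{xy}=q_{yx}\).

```python
import numpy as np

# Hypothetical irreversible 3-state chain.
r = np.array([[0.0, 2.0, 1.0],
              [1.0, 0.0, 3.0],
              [4.0, 1.0, 0.0]])
L = r - np.diag(r.sum(axis=1))
w, V = np.linalg.eig(L.T)
pi = np.real(V[:, np.argmin(np.abs(w))])
pi = pi / pi.sum()

# Adjoint rates r*_xy = pi(y) r_yx / pi(x).
r_adj = (pi[None, :] * r.T) / pi[:, None]
np.fill_diagonal(r_adj, 0.0)

q = pi[:, None] * r            # q_xy = pi(x) r_xy
q_adj = pi[:, None] * r_adj
assert np.allclose(q_adj, q.T)  # q*_xy = q_yx

L_adj = r_adj - np.diag(r_adj.sum(axis=1))
assert np.allclose(pi @ L_adj, 0.0)  # pi is invariant for the adjoint chain
```

The second assertion works because the escape rates of the chain and its adjoint agree at every state, which is exactly the stationarity of \(\pi \).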

### Lemma 1

### Proof

In Sect. 4.4, we will reformulate the so-called Hamilton–Jacobi relation of MFT in terms of forces, and show that this yields an equation analogous to (28).

### 3.2 Physical Interpretation of \(F^S\) and \(F^A\)

In stochastic thermodynamics, one may identify \(F^A_{xy}\) as the *housekeeping heat* (or *adiabatic entropy production*) associated with a single transition from state *x* to state *y*, see [16, 50]. (Within the Markov chain formalism, there is some mixing of the notions of force and energy: usually an energy would be a product of a force and a distance but there is no notion of a distance between states of the Markov chain, so forces and energies have the same units in our analysis.) Hence \(j\cdot F^A\) is the rate of flow of housekeeping heat into the environment. The meaning of the housekeeping heat is that for irreversible systems, transitions between states involve unavoidable dissipated heat which cannot be transformed into work (this dissipation is required in order to “do the housekeeping”).

*j*. Moreover it is easy to see that

*j* and \(F^S\) is equal to (the negative of) the rate of change of the free energy. It follows that the right hand side of (25) can alternatively be written as \(-\int j\cdot F^A\, \mathrm {d}t\).

We also recall from Sect. 2.2 that the force *F* acts in the space of probability densities: \(F_{xy}\) depends not only on the states *x*, *y* but also on the density \(\rho \). (Physical forces acting on individual copies of the system should not depend on \(\rho \) since each copy evolves independently, but *F* includes entropic terms associated with the ensemble of copies.) To understand this dependence, it is useful to write \(\mathcal {F}(\rho ) = -\sum _x \rho (x) \log \pi (x) + \sum _x \rho (x) \log \rho (x)\). We also write the invariant measure in a Gibbs-Boltzmann form: \(\pi (x) = \exp (-U(x))/Z\), where *U*(*x*) is the internal energy of state *x* and \(Z=\sum _x \exp (-U(x))\) is a normalisation constant. Then \(-\sum _x \rho (x) \log \pi (x) = \mathbb {E}_\rho (U) + \log Z\) depends on the mean energy of the system, while \(\sum _x \rho (x) \log \rho (x)\) is (the negative of) the mixing entropy, which comes from the many possible permutations of the copies of the system among the states of the Markov chain. From (31) one then sees that \(F^S\) has two contributions: one term (independent of \(\rho \)) that comes from the gradient of the energy *U* and the other (which depends on \(\rho \)) comes from the gradient of the entropy. These entropic forces account for the fact that a given empirical density \(\rho ^{\;\!{\mathcal {N}}}\) can be achieved in many different ways, since individual copies of the system can be permuted among the different states of the system.
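The splitting discussed in this section can be verified numerically. We assume here the forms \(F^S_{xy}(\rho )=\log [\rho (x)/\pi (x)]-\log [\rho (y)/\pi (y)]\) and \(F^A_{xy}=\log (q_{xy}/q_{yx})\) (an assumption on our part, since the displayed equations are not reproduced above); their sum then recovers \(F_{xy}(\rho )=\log [\rho (x)r_{xy}/\rho (y)r_{yx}]\).

```python
import numpy as np

# Assumed splitting: F^S is the (density-dependent) gradient part coming
# from the free energy, F^A the density-independent non-gradient part.
r = np.array([[0.0, 2.0, 1.0],
              [1.0, 0.0, 3.0],
              [4.0, 1.0, 0.0]])
L = r - np.diag(r.sum(axis=1))
w, V = np.linalg.eig(L.T)
pi = np.real(V[:, np.argmin(np.abs(w))])
pi = pi / pi.sum()

rho = np.array([0.5, 0.3, 0.2])
mask = r > 0

F = np.log((rho[:, None] * r)[mask] / (rho[:, None] * r).T[mask])
F_S = np.log((rho / pi)[:, None] / (rho / pi)[None, :])[mask]
F_A = np.log((pi[:, None] * r)[mask] / (pi[:, None] * r).T[mask])

assert np.allclose(F, F_S + F_A)   # F(rho) = F^S(rho) + F^A
```

Note that with these forms \(F^A\) vanishes identically for a reversible chain (\(q_{xy}=q_{yx}\)), consistent with the interpretation of \(F^A\) as the irreversible, housekeeping part of the force.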

### 3.3 Generalised Orthogonality for Forces

Let \(\varPsi _S^\star \) be the symmetric version of \(\varPsi ^\star \) obtained from (16) with \(a_{xy}(\rho )\) replaced by \(a^S_{xy}(\rho )\). (The Legendre transform of \(\varPsi ^\star _S\) is similarly denoted \(\varPsi _S\)). This leads to a separation of \(\varPsi ^\star (\rho ,F(\rho ))\) in a term corresponding to \(F^S(\rho )\) and a term corresponding to \(F^A\).

### Lemma 2

### Proof

The physical interpretation of Lemma 2 is that the strength of the force \(F(\rho )\) can be written as separate contributions from \(F^S(\rho )\) and \(F^A\). The following corollary allows us to think of a generalised orthogonality of the forces \(F^S(\rho )\) and \(F^A\).

### Proposition 3

### Proof

This follows directly from Lemma 2 and the symmetry of \(\varPsi ^\star (\rho ,\cdot )\). \(\square \)

We refer to Proposition 3 as a generalised orthogonality between \(F^S\) and \(F^A\) because \(\varPsi ^\star \) is acting as generalisation of a squared norm (see Sect. 1.1), so (39) can be viewed as a nonlinear generalisation of \(\Vert F^S + F^A \Vert ^2 = \Vert F^S - F^A \Vert ^2\), which would be a standard orthogonality between forces.
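The generalised orthogonality can also be checked numerically. The sketch below assumes the forms \(\varPsi ^\star (\rho ,f)=\tfrac{1}{2}\sum _{xy}a_{xy}(\rho )[\cosh (f_{xy}/2)-1]\) and \(a_{xy}(\rho )=2\sqrt{\rho (x)r_{xy}\,\rho (y)r_{yx}}\) (our assumptions, since the displayed equations are not reproduced above) and verifies \(\varPsi ^\star (\rho ,F^S+F^A)=\varPsi ^\star (\rho ,F^S-F^A)\) on a hypothetical irreversible chain.

```python
import numpy as np

# Numerical check of Proposition 3 under the assumed forms of Psi*, a, F^S, F^A.
r = np.array([[0.0, 2.0, 1.0],
              [1.0, 0.0, 3.0],
              [4.0, 1.0, 0.0]])
L = r - np.diag(r.sum(axis=1))
w, V = np.linalg.eig(L.T)
pi = np.real(V[:, np.argmin(np.abs(w))])
pi = pi / pi.sum()

rho = np.array([0.5, 0.3, 0.2])
mask = r > 0

a = 2.0 * np.sqrt((rho[:, None] * r) * (rho[:, None] * r).T)
F_S = np.log((rho / pi)[:, None] / (rho / pi)[None, :])
F_A = np.zeros_like(r)
F_A[mask] = np.log((pi[:, None] * r)[mask] / (pi[:, None] * r).T[mask])

def psi_star(f):
    return 0.5 * np.sum(a[mask] * (np.cosh(f[mask] / 2.0) - 1.0))

# Generalised orthogonality: flipping the sign of F^A leaves Psi* unchanged.
assert np.isclose(psi_star(F_S + F_A), psi_star(F_S - F_A))
```

The cross terms \(a_{xy}\sinh (F^S_{xy}/2)\sinh (F^A_{xy}/2)\), summed over directed edges, cancel precisely because the chain and its adjoint have equal escape rates, i.e. by stationarity of \(\pi \).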

Moreover, Lemma 2 can be used to decompose the OM functional as a sum of three terms.

### Corollary 4

### Proof

Recall from Sect. 1.1 that \(\varPhi \) measures how much the current *j* deviates from the typical (or most likely) current \(J(\rho )\). One sees from (40) that it can be large for three reasons. The first term is large if the current is pushing the system up in free energy (because *D* is the rate of change of free energy induced by the current *j*). The second term comes from the time-reversal symmetric (gradient) force \(F^S(\rho )\), which is pushing the system towards equilibrium. The third term comes from the time-reversal anti-symmetric force \(F^A\); namely, it measures how far the current *j* is from the value induced by the force \(F^A\).

Corollary 4 also makes it apparent that the free energy \(\mathcal {F}\) is monotonically decreasing for solutions of (11), which are minimisers of \(I_{[0,T]}\).

### Corollary 5

### Proof

### 3.4 Hamilton–Jacobi Like Equation for Markov Chains

*extended Hamiltonian*, for reasons discussed in Sect. 6.3 (see also Sect. IV.G of [7]).

*extended Hamilton–Jacobi equation* for a functional \(\mathcal {S}\) is then (cf. equation (100) in Sect. 6.3) given by

### 3.5 Example: Simple Ring Network

To illustrate these abstract ideas, we consider a very simple Markov chain, in which *n* states are arranged in a circle, see Fig. 1. So \(V=\{1,2,\dots ,n\}\) and the only allowed transitions take place between state *x* and states \(x\pm 1\) (to incorporate the circular geometry we interpret \(n+1=1\) and \(1-1=n\)). In physics, such Markov chains arise (for example) as simple models of nano-machines or motors, where an external energy source might be used to drive circular motion [29, 53]. Alternatively, such a Markov chain might describe a protein molecule that goes through a cyclic sequence of conformations, as it catalyses a chemical reaction [31]. In both cases, the systems evolve stochastically because the relevant objects have sizes on the nano-scale, so thermal fluctuations play an important role.

To apply the analysis presented here, the first step is to identify forces and mobilities, as in (12). Let \(R_x = \sqrt{r_{x,x+1} r_{x+1,x}}\). The invariant measure may be identified by solving \(\sum _y \pi (x) r_{xy}= \sum _y \pi (y) r_{yx}\) subject to \(\sum _y \pi (y)=1\). Finally, one computes the steady state current \({\mathcal {J}} = \pi (x) r_{x,x+1} - \pi (x\!+\!1) r_{x+1,x}\), where the right hand side is independent of *x* (this follows from the steady-state condition on \(\pi \)). The original Markov process has 2*n* parameters, which are the rates \(r_{x,x\pm 1}\): these are completely determined by the \(n-1\) independent elements of \(\pi \), the *n* mobilities \((R_x)_{x=1}^n\) and the current \(\mathcal {J}\). The idea is that this reparameterisation allows access to the physically important quantities in the system.
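The reparameterisation for the ring is easy to reproduce numerically; the sketch below (with hypothetical random rates) computes \(\pi \), the mobility parameters \(R_x\), and checks that the steady-state current \(\mathcal {J}\) is indeed independent of *x*.

```python
import numpy as np

# Hypothetical ring with n = 4 states and random nearest-neighbour rates
# r_{x,x+1}, r_{x,x-1} (indices mod n, encoding the circular geometry).
n = 4
rng = np.random.default_rng(1)
r = np.zeros((n, n))
for x in range(n):
    r[x, (x + 1) % n] = rng.uniform(1.0, 3.0)
    r[x, (x - 1) % n] = rng.uniform(1.0, 3.0)

L = r - np.diag(r.sum(axis=1))
w, V = np.linalg.eig(L.T)
pi = np.real(V[:, np.argmin(np.abs(w))])
pi = pi / pi.sum()

# R_x = sqrt(r_{x,x+1} r_{x+1,x}) and the steady-state current
# Jcal_x = pi(x) r_{x,x+1} - pi(x+1) r_{x+1,x}.
R = np.array([np.sqrt(r[x, (x + 1) % n] * r[(x + 1) % n, x]) for x in range(n)])
Jcal = np.array([pi[x] * r[x, (x + 1) % n] - pi[(x + 1) % n] * r[(x + 1) % n, x]
                 for x in range(n)])

assert np.all(R > 0)
assert np.allclose(Jcal, Jcal[0])   # current is the same on every edge
```

On a ring, each state has exactly two neighbours, so \({\mathrm{div}\,}J(\pi )=0\) forces the current through every edge to coincide, which is the constancy checked above.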

*R*, it may be verified that

*a* and the forces \(F^S\) and \(F^A\). The physical meaning of these quantities may not be obvious from these definitions, but we show in the following that reparameterising the transition rates in this way reveals structure in the dynamical fluctuations.

For example, equilibrium models (with detailed balance) can be identified via \(F^A_{x,x+1}=0\) (for all *x*). In general \(F^A_{x,x+1}\) is the (steady-state) entropy production associated with a transition from *x* to \(x+1\), see Sect. 3.2. The steady state entropy production associated with going once round the circuit is \(\sum _x F^A_{x,x+1}=\log \prod _x (r_{x,x+1}/r_{x+1,x})\), as it must be [1].
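The circuit affinity can be verified numerically as well. Assuming \(F^A_{x,x+1}=\log [\pi (x)r_{x,x+1}/\pi (x+1)r_{x+1,x}]\) (cf. Sect. 3.1; the displayed equations are not reproduced above), the \(\pi \) factors telescope around the ring and only the rates survive.

```python
import numpy as np

# Hypothetical ring, as in the previous sketch.
n = 4
rng = np.random.default_rng(2)
r = np.zeros((n, n))
for x in range(n):
    r[x, (x + 1) % n] = rng.uniform(1.0, 3.0)
    r[x, (x - 1) % n] = rng.uniform(1.0, 3.0)

L = r - np.diag(r.sum(axis=1))
w, V = np.linalg.eig(L.T)
pi = np.real(V[:, np.argmin(np.abs(w))])
pi = pi / pi.sum()

# F^A_{x,x+1} = log( pi(x) r_{x,x+1} / (pi(x+1) r_{x+1,x}) ) (assumed form).
F_A = [np.log(pi[x] * r[x, (x + 1) % n] / (pi[(x + 1) % n] * r[(x + 1) % n, x]))
       for x in range(n)]
# Affinity of the circuit: log prod_x r_{x,x+1} / r_{x+1,x}.
affinity = np.log(np.prod([r[x, (x + 1) % n] / r[(x + 1) % n, x]
                           for x in range(n)]))

assert np.isclose(sum(F_A), affinity)   # sum_x F^A_{x,x+1} telescopes
```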

*J* obeys the simple formula

*j* [not necessarily equal to \(J(\rho )\)] then the rate of change of free energy of the ensemble can be written compactly as \(D(\rho ,j) = -j\cdot F^S(\rho )\), from (32). The quantity \(j\cdot F^A\) is the rate of dissipation via housekeeping heat (see Sect. 3.2). This (physically-motivated) splitting of \(j\cdot F=j\cdot (F^S+F^A)\) motivates our introduction of the two forces \(F^S\) and \(F^A\). Note that \(j \cdot F\) is the rate of heat flow from the system to its environment, and appears in the fluctuation theorem (25).

Finally we turn to the large deviations of this ensemble of nano-scale objects. There is an LDP (23), whose rate function can be decomposed into three pieces (Corollary 4), because of the generalised orthogonality of the forces \(F^S\) and \(F^A\) (Lemma 2). This splitting of the rate function is useful because the symmetry properties of the various terms yields bounds on rate functions for some other LDPs obtained from \(\varPhi \) by contraction, see Sect. 5.

## 4 Connections to MFT

MFT is a field theory which describes the mass evolution of particle systems in the drift-diffusive regime, on the level of hydrodynamics. In this setting, it can be seen as a generalisation of Onsager–Machlup theory [36]. For a comprehensive review, we refer to [7]. This section gives an overview of the theory, focussing on the connections to the results presented in Sects. 2 and 3.

We seek to emphasise two points: first, while the particle currents in MFT and the probability current in Markov chains are very different objects, they both obey large-deviation principles of the form presented in Sect. 1.1. This illustrates the broad applicability of this general setting. Second, we note that many of the particle models for which MFT gives a macroscopic description are Markov chains on discrete spaces. Starting from this observation, we argue in Sect. 4.5 that some results that are well-known in MFT originate from properties of these underlying Markov chains, particularly Proposition 3 and Corollary 4.

### 4.1 Setting

*N* of indistinguishable particles, moving on a lattice \(\varLambda _L\) (indexed by \(L\in {\mathbb {N}}\), such that the number of sites \(|\varLambda _L|\) is strictly increasing with *L*). These particles are described by a Markov chain, so the relevant forces and currents satisfy the equations derived in Sects. 2 and 3. The hydrodynamic limit is obtained by letting \(L\rightarrow \infty \) such that the total density \(N/|\varLambda _L|\) converges to a fixed number \(\bar{\rho }\). In this limit, the lattice \(\varLambda _L\) is rescaled into a domain \(\varLambda \subset {\mathbb {R}}^d\) and one can characterise the system by a local (mass) density \(\rho :\varLambda \rightarrow [0,\infty )\) together with a local current \(j :\varLambda \rightarrow {\mathbb {R}}^d\), which evolve deterministically as a function of time [7, 28]. This time evolution depends on some (density-dependent) applied forces \(F(\rho ) :\varLambda \rightarrow {\mathbb {R}}^d\). The force at \(x\in \varLambda \) can be written as

(This quantity is called *f* in [7]; here we use a different notation, since *f* indicates a force in this work.) With these definitions, the deterministic currents satisfy the linear relation [41]

### 4.2 Onsager–Machlup Functional

For a given density \(\rho \) and force *f*, we have that \(\varPhi _{{\mathrm {MFT}}}\) is uniquely minimised (and equal to zero) for the current \(j = \chi (\rho ) f\).

### 4.3 Large Deviation Principle

Within MFT, one considers an empirical density and an empirical current. We emphasise that these refer to particles, which are interacting and move on the lattice \(\varLambda _L\); this is in contrast to the case of Markov chains, where the copies of the system were non-interacting and one considers a density and current of probability. The average number of particles at site \(i\in \varLambda _L\) is denoted by \(\hat{\rho }_t^{\;\!L}(x_i)\), where \(x_i\) is the image in the rescaled domain \(\varLambda \) of site \(i\in \varLambda _L\), and the corresponding particle current is given by \(\hat{\jmath }_t^{\;\!L}\) (cf. Sect. VIII.F in [7] for details). Note that both the particle density \(\hat{\rho }_t^{\;\!L}\) and the particle current \(\hat{\jmath }_t^{\;\!L}\) are random quantities (see also Sect. 4.5).

The functional \(\mathcal {V}\) is the *quasipotential*, which plays the role of a non-equilibrium free energy. We may think of \(\mathcal {V}\) as the macroscopic analogue of the free energy \(\mathcal {F}\) defined in (8). It is the rate functional for the process sampled from the invariant measure, which is consistent with the case for Markov chains in (24). We assume that \(\mathcal {V}\) has a unique minimiser \(\pi \), which is the steady-state density profile (so \(\mathcal {V}(\pi )=0\)).

An important difference between the Markov chain setting and MFT is that the OM functional for Markov chains is non-quadratic, which is equivalent to a non-linear force–flux relation, whereas MFT is restricted to quadratic OM functionals.

Equation (53) is the basic assumption in MFT [7], in the sense that all systems considered by MFT are assumed to satisfy this pathwise LDP. In fact, both the process and its adjoint are assumed to satisfy such LDPs (with similar rate functionals, but different forces) [7].

### 4.4 Decomposition of the Force *F*

The force *F* in (49) can be written as the sum of a symmetric and an anti-symmetric part, \(F(\rho )=F_S(\rho )+F_A(\rho )\), just as in Sect. 3.1. The force for the adjoint process is given by \(F^*(\rho )=F_S(\rho )-F_A(\rho )\). Note that, unlike in the case of Markov chains, \(F_A(\rho )\) can here depend on \(\rho \). More precisely, \(F_S(\rho ) = -\nabla \frac{\delta \mathcal {V}}{\delta \rho }\) and \(F_A(\rho )\) is given implicitly by \(F_A(\rho ) = F(\rho )-F_S(\rho )\).

The components \(F_S\) and \(F_A\) satisfy a relation known as *Hamilton–Jacobi orthogonality*, which states that they are orthogonal with respect to the mobility \(\chi \), in the sense that \(\int _\varLambda F_S(\rho )\cdot \chi (\rho )\,F_A(\rho )\,\mathrm {d}x = 0\).
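As a concrete check (our addition, for the WASEP of Sect. 4.5 with the standard coefficients \(D(\rho )=1\) and \(\chi (\rho )=\rho (1-\rho )\) [7], where \(F_S(\rho )=-\nabla \log \frac{\rho }{1-\rho }\) and \(F_A=E\)):

```latex
\int_\varLambda F_S(\rho)\cdot\chi(\rho)\,F_A(\rho)\,\mathrm{d}x
  = -E\int_\varLambda \rho(1-\rho)\,\nabla\log\frac{\rho}{1-\rho}\,\mathrm{d}x
  = -E\int_\varLambda \nabla\rho\,\mathrm{d}x
  = 0,
```

where the final integral vanishes by periodicity of the domain.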

### 4.5 Relating Markov Chains to MFT: Hydrodynamic Limits

We have discussed a formal analogy between current/density fluctuations in Markov chains and in MFT: the large deviation principles (23) and (53) refer to different objects and different limits, but they both fall within the general setting described in Sect. 1.1. We argue here that the similarities between these two large deviation principles are not coincidental—they arise naturally when MFT is interpreted as a theory for hydrodynamic limits of interacting particle systems.

To avoid confusion between particle densities and probability densities, we introduce (only for this section) a different notation for some properties of discrete Markov chains, which is standard for interacting particle systems. Let \(\eta \) represent a state of the Markov chain (in place of the notation *x* of Sect. 2), and let \(\mu \) be a probability distribution over these states (in place of the notation \(\rho \) of Sect. 2). Let \(\jmath \) be the probability current.

We illustrate our argument using the weakly asymmetric simple exclusion process (WASEP) in one dimension, so the lattice is \(\varLambda _L=\{1,2,\dots ,L\}\), and each lattice site contains at most one particle, so \(V=\{0,1\}^L\). The lattice has periodic boundary conditions and the occupancy of site *i* is \(\eta (i)\). Particles hop to the right with rate \(L^2\) and to the left with rate \(L^2(1-({ E}/L))\), but in either case only if the destination site is empty. Here *E* is a fixed parameter (an external field); the dependence of the hop rates on *L* is chosen to ensure a diffusive hydrodynamic limit (as required for MFT).
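The dynamics just described can be simulated directly; the following sketch (our addition, with illustrative parameters and helper names) implements one jump of the WASEP on a ring with exactly the rates stated above:

```python
import random

def wasep_step(eta, L, E, rng):
    """Perform one jump of the WASEP on a periodic lattice of L sites.

    eta is a list of occupancies (0 or 1).  As in the text, particles hop
    right at rate L^2 and left at rate L^2 * (1 - E/L), onto empty sites
    only.  eta is updated in place; the (exponential) waiting time of the
    jump is returned.
    """
    right, left = float(L**2), L**2 * (1.0 - E / L)
    moves = []  # (source, destination, rate) for every allowed hop
    for i in range(L):
        nxt = (i + 1) % L
        if eta[i] == 1 and eta[nxt] == 0:
            moves.append((i, nxt, right))   # hop i -> i+1
        if eta[nxt] == 1 and eta[i] == 0:
            moves.append((nxt, i, left))    # hop i+1 -> i
    total = sum(rate for _, _, rate in moves)
    if total == 0.0:        # completely full or empty lattice: nothing moves
        return float('inf')
    u = rng.uniform(0.0, total)
    acc = 0.0
    for src, dst, rate in moves:            # pick a move with prob ∝ rate
        acc += rate
        if u <= acc:
            eta[src], eta[dst] = 0, 1
            break
    return rng.expovariate(total)

# usage: half-filled ring, L = 8, field E = 1
rng = random.Random(0)
eta = [1, 0] * 4
for _ in range(100):
    wasep_step(eta, L=8, E=1.0, rng=rng)
assert sum(eta) == 4    # particle number is conserved
```

The exclusion constraint enters only through the list of allowed moves, which is the discrete analogue of the mobility \(\chi (\rho )=\rho (1-\rho )\) appearing in the hydrodynamic limit.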

For each particle configuration \(\eta \in V\), one can write a corresponding smoothed particle density \(\rho ^\epsilon \) on \(\varLambda \), as

Here \(\eta ^{i,i+1}\) denotes the configuration obtained from \(\eta \) by moving a particle from site *i* to site \(i+1\); if there is no particle on site *i* then define \(\eta ^{i,i+1}=\eta \) so that \(\jmath _{\eta ,\eta ^{i,i+1}}=0\). Physically, \(\rho ^\epsilon \) is the average particle density associated to \(\mu \), and \(j^\epsilon \) is the particle current associated to \(\jmath \).

As noted above, MFT is concerned with the limit \(L\rightarrow \infty \). The LDP (23) is not relevant for that limit (it applies when one considers many (\({\mathcal {N}}\rightarrow \infty \)) independent copies of the Markov chain, with *L* being finite for each copy). However, the rate function \(I_{[0,T]}\) that appears in (23) has an alternative physical interpretation, as the relative entropy between two path measures: see Appendix A. This relative entropy can be seen as a property of the WASEP; there is no requirement to invoke many copies of the system. Physically, the relative entropy measures how different the WASEP is from an alternative Markov process with a given probability and current \((\mu _t,\jmath _t)_{t\in [0,T]}\).
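To make the relative-entropy interpretation concrete, here is a minimal numerical sketch. It assumes the standard (Girsanov-type) formula for the relative entropy rate between path measures of two Markov jump processes; the conventions of Appendix A may differ in details, and the 3-state rate matrix is a hypothetical example:

```python
import numpy as np

def relent_rate(mu, r_alt, r):
    """Relative entropy rate (per unit time) of the path measure of an
    alternative chain (rates r_alt, occupation law mu) with respect to the
    original chain (rates r): the standard Girsanov formula for jump
    processes."""
    h = 0.0
    for x in range(len(r)):
        for y in range(len(r)):
            if x == y or r[x, y] == 0:
                continue
            a, b = r[x, y], r_alt[x, y]
            h += mu[x] * (a - b + (b * np.log(b / a) if b > 0 else 0.0))
    return h

# hypothetical rates for a 3-state chain
r = np.array([[0.0, 2.0, 1.0], [0.5, 0.0, 3.0], [2.0, 1.0, 0.0]])
print(relent_rate(np.full(3, 1 / 3), r, r))       # 0.0: identical dynamics cost nothing
print(relent_rate(np.full(3, 1 / 3), 2 * r, r))   # > 0: modified dynamics carry a cost
```

The rate vanishes if and only if the two sets of jump rates agree on all edges visited by \(\mu \), which is the sense in which \(I_{[0,T]}\) measures how different the modified process is from the original one.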

The key point is that in cases where MFT applies, one expects that the rate function \(I^\mathrm{MFT}_{[0,T]}\) can be related to this relative entropy. In fact, there is a deeper relation between relative entropies and rate functionals: it can be shown that Large Deviation Principles are equivalent to \(\varGamma \)-convergence of relative entropy functionals (see [42] for details).

More precisely, one expects that there exists (for each *L*) a time-dependent probability and current \((\mu _t^L,\jmath _t^L)_{t\in [0,T]}\), with \( \dot{\mu }^L_t = -{\mathrm{div}\,}\jmath ^L_t\), such that on taking the limit \(\epsilon \rightarrow 0\) *after* \(L\rightarrow \infty \), the associated particle densities \((\rho ^\epsilon _t,j^\epsilon _t) \rightarrow (\rho _t,j_t)\) and moreover

For interacting particle systems, this “controlled” process is usually obtained by adding a time-dependent external field to the system that acts on the individual particles. This was first derived for the symmetric SEP in [27] (see also [4] for a treatment of the zero-range process). For the WASEP (in a slightly different situation, with open boundaries) a proof of (61) can be found in [6, Lemma 3.7].

Moreover, on decomposing \(I^\mathrm{MFT}_{[0,T]}\) and \(I_{[0,T]}\) as in (3), the separate functions \(\varPsi \) and \(\varPsi ^\star \) obey formulae analogous to (61): this is the sense in which the structure of the MFT rate function is inherited from the relative entropy of the Markov chains. The quadratic functions \(\varPsi \) and \(\varPsi ^\star \) in MFT arise because the forces that appear in the underlying Markov chains are small (compared to unity), so second-order Taylor expansions of \(\varPsi ^\star \) and \(\varPsi \) give an accurate description in the limit, similar to [2]. We will return to this discussion in a later publication.

## 5 LDPs for Time-Averaged Quantities

So far we have considered large deviation principles for hydrodynamic limits, and for systems consisting of many independent copies of a single Markov chain. We now show how some of the results derived in Sects. 2 and 3 also have analogues for large deviations for a single Markov chain, in the large-time limit.

### 5.1 Large Deviations at Level 2.5

Joint fluctuations of the empirical density and current are described by *level 2.5 LDPs*. For countable state Markov chains the rate functional \(I_{2.5}(\rho ,j)\) was derived in [39], and was proven rigorously in [8, 9] for Markov chains in the setting of Sect. 2.1 under some additional conditions (see [8, 9] for the details). We can recast the rate functional (see [8, Theorem 6.1]) as

We have stated this LDP for joint fluctuations of the density and the current. For Markov chains, the LDP for the density and the *flow* is also known as a level-2.5 LDP [9], so our general use of the name level-2.5 for (63) may be non-standard, but it seems reasonable. The rate functional for the density and the current in (63) can be obtained by contraction from the rate functional for the density and the flow (see Theorem 6.1 in [8]).
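To illustrate the flow-level functional mentioned above, here is a minimal numerical sketch, assuming the standard level-2.5 form for empirical flows of a jump process (cf. [8, 9]); the 3-state rate matrix is a hypothetical example:

```python
import numpy as np

# hypothetical rates for a 3-state chain (r[x, y] = rate of jump x -> y)
r = np.array([[0.0, 2.0, 1.0],
              [0.5, 0.0, 3.0],
              [2.0, 1.0, 0.0]])

def invariant_measure(r):
    """Stationary distribution of the continuous-time chain with rates r."""
    Q = r - np.diag(r.sum(axis=1))          # generator: rows sum to zero
    A = np.vstack([Q.T, np.ones(len(r))])   # solve pi Q = 0, sum(pi) = 1
    b = np.zeros(len(r) + 1); b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

def flow_rate_function(rho, q, r):
    """Level-2.5 rate function for empirical density rho and flow q,
    in the standard form for jump processes (cf. [8, 9])."""
    val = 0.0
    for x in range(len(r)):
        for y in range(len(r)):
            if x == y or r[x, y] == 0:
                continue
            mean = rho[x] * r[x, y]         # typical flow on edge (x, y)
            if q[x, y] == 0:
                val += mean                 # limit q log q -> 0
                continue
            val += q[x, y] * np.log(q[x, y] / mean) - q[x, y] + mean
    return val

pi = invariant_measure(r)
q_typ = pi[:, None] * r                     # typical flow: q_xy = pi_x r_xy
print(flow_rate_function(pi, q_typ, r))     # 0.0: typical values cost nothing
```

The functional is zero exactly at the typical density and flow, and strictly positive for any other pair, consistent with the contraction to the current-level functional (63).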

Using the splitting obtained in Sect. 3.3, we obtain the following representation for the rate functional on level-2.5.

### Proposition 6

Let the current *j* be divergence free. Then the level-2.5 rate functional (64) is given by

### 5.2 Large Deviations for Currents

The significance of the splitting (65) for this result is that \(J^\mathrm{ss}_{xy}F_{xy}^A\) is the rate of flow of housekeeping heat associated with edge *xy*: the appearance of the housekeeping heat is natural since the bound comes from the second term in (65), which is independent of \(F^S\) and depends only on \(F^A\).

### 5.3 Optimal Control Theory

It is useful to introduce a *controlled process*, where the rates are modified by a *control potential* \(\varphi \), as
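A minimal numerical sketch of such a controlled process (the symmetric tilt \(\tilde{r}_{xy} = r_{xy}\, e^{(\varphi _y - \varphi _x)/2}\) is one standard choice, cf. [10, 11]; the precise form used in the equation above may differ, and the 3-state rates and potential below are hypothetical):

```python
import numpy as np

def controlled_rates(r, phi):
    """Tilt the jump rates by a control potential phi (one standard choice)."""
    phi = np.asarray(phi, dtype=float)
    rt = r * np.exp((phi[None, :] - phi[:, None]) / 2.0)
    np.fill_diagonal(rt, 0.0)
    return rt

def invariant_measure(r):
    """Stationary distribution of the chain with rate matrix r."""
    Q = r - np.diag(r.sum(axis=1))
    A = np.vstack([Q.T, np.ones(len(r))])
    b = np.zeros(len(r) + 1); b[-1] = 1.0
    return np.linalg.lstsq(A, b, rcond=None)[0]

# hypothetical 3-state example: steady-state current of the controlled chain
r = np.array([[0.0, 2.0, 1.0], [0.5, 0.0, 3.0], [2.0, 1.0, 0.0]])
phi = np.array([0.0, 0.3, -0.2])
rt = controlled_rates(r, phi)
rho_t = invariant_measure(rt)
J_ss = rho_t[:, None] * rt - rho_t[None, :] * rt.T  # antisymmetric current
print(np.allclose(J_ss.sum(axis=1), 0.0))           # True: divergence free
```

The steady-state current \(\tilde{J}^{{\mathrm {ss}}}\) of the controlled process is divergence free by stationarity, which is the property exploited in the decompositions below.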

### 5.4 Decomposition of Rate Functions

The ideas of optimal control theory are useful since they facilitate the further decomposition of the level-2.5 rate function into several contributions.

### Lemma 7

Suppose that \(\rho \) and *j* are given and that \({\mathrm{div}\,}j=0\). Then

### Proof

The physical interpretation of (75) is as follows. The contribution \(\frac{1}{2}\varPhi ( \rho , j, \tilde{F}^A )\) is a rate functional for observing an empirical current *j* in the controlled process, while \(\frac{1}{2}\varPhi ( \rho , \tilde{J}^{{\mathrm {ss}}}, F(\rho ) )\) is the rate functional for observing an empirical current \(\tilde{J}^{{\mathrm {ss}}}\) in the original process. Since \(\tilde{J}^{{\mathrm {ss}}}\) is the (deterministic) probability current for the controlled process, the more the controlled process differs from the original one, the larger \(\varPhi ( \rho , \tilde{J}^{{\mathrm {ss}}}, F(\rho ) )\) will be. Hence the level-2.5 rate functional is large if the controlled process is very different from the original one, as one might expect. The rate functional also takes larger values if the empirical current *j* is very different from the probability current of the controlled process.

We obtain our final representation for the level-2.5 rate functional, consisting of the sum of three different OM functionals.

### Proposition 8

Let the current *j* be divergence free. Then the level-2.5 rate functional (64) can be represented as

### Proof

This follows immediately from Lemma 7, followed by an application of Corollary 4 to \(\varPhi \bigl (\rho ,\tilde{J}^{{\mathrm {ss}}}, F^A \bigr )\) together with the fact that \(D=0\), from (33). \(\square \)

The three terms in (78) also appear in Lemma 7 and Corollary 4, and their interpretations have been discussed in the context of those results. Briefly, we recall that \(I_{2.5}(\rho ,j)\) sets the probability of fluctuations in which a non-typical density \(\rho \) and current *j* are sustained over a long time period. The first term in (78) reflects the fact that the free-energy gradient \(F^S(\rho )\) tends to push \(\rho \) towards the steady state \(\pi \), so maintaining any non-typical density is unlikely if \(F^S(\rho )\) is large. Similarly, the second term in (78) reflects the fact that large non-gradient forces \(F^A\) also tend to suppress the probability that \(\rho \) maintains its non-typical value. The final term is the only place in which the (divergence-free) current *j* appears: it vanishes if the current *j* is typical within the controlled process (see Corollary 9); otherwise it reflects the probability cost of maintaining a non-typical circulating current.

### 5.5 Large Deviations at Level 2

Contraction of the level-2.5 LDP yields the *level-2 LDP*, where one considers the density only. It is formally given by

Minimising over the current *j* yields the level-2 rate functional

### Corollary 9

### Proof

This follows from (80) and (78), since \(\varPhi \bigl ( \rho , j, \tilde{F}^A \bigr )\) has a minimal value of zero. \(\square \)

This last identity extends the results of [26] on the accelerated convergence to equilibrium for irreversible processes, obtained there via LDPs at the macroscopic scale (i.e. in the regime of MFT), to Markov chains. The level-2 rate function in (82) can be interpreted as a rate of convergence to the steady state. It was shown in [26] that this rate is higher for irreversible processes than for reversible ones (since the second term \(\varPhi (\rho ,\tilde{J}^{{\mathrm {ss}}}, F^A)\) vanishes for reversible processes). We remark that splitting techniques for irreversible jump processes have been used to devise efficient MCMC samplers; see for example [5, 34].

### 5.6 Connection to MFT

The MFT analogue concerns the rate function for a time-averaged current *j* with \({\text {div}}\,j=0\), given by

The rate function is infinite if *j* is not divergence free. If \({\mathrm{div}\,}j=0\) then the rate function can be written in the form [26]

### Proposition 10

This proposition is equivalent to Proposition 5 of [26], but has now been rewritten in the language of optimal control theory. As discussed in [26], Eq. (89) quantifies the extent to which breaking detailed balance accelerates convergence of systems to equilibrium, at the hydrodynamic level. For this work, the key point is that this result originates from Corollary 9, which is the equivalent statement for Markov chains (without taking any hydrodynamic limit).

## 6 Consequences of the Structure of the OM Functional \(\varPhi \)

We have shown that the rate functions for several LDPs in several different contexts depend on functionals \(\varPhi \) with the general structure presented in (3) and (4). In this section, we show how this structure alone is sufficient to establish some features that are well-known in MFT. This means that these results within MFT have analogues for Markov chains. Our derivations mostly follow the standard MFT routes [7], but we use a more abstract notation to emphasise the minimal assumptions that are required.

### 6.1 Assumptions

The following minimal assumptions are easily verified for Markov chains; they are also either assumed or easily proven for MFT. The results of this section are therefore valid in both settings.

We consider a process described by a time-dependent density \(\rho \) and current *j*, with an associated continuity equation \(\dot{\rho } = -{\mathrm{div}\,}j\) and unique steady state \(\pi \). We are given a set of (\(\rho \)-dependent) forces denoted by \(F(\rho )\), a dual pairing \(j\cdot f\) between forces and currents, and a function \(\varPsi (\rho ,j)\) which is convex in *j* and satisfies \(\varPsi (\rho ,j)=\varPsi (\rho ,-j)\). With these choices, the functions \(\varPsi ^\star \) and \(\varPhi \) are fully specified via (3) and (4). We assume that for initial conditions chosen from the invariant measure, the system satisfies an LDP of the form (1) with rate function of the form (2).
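The canonical structure (3)–(4) can be illustrated numerically. The following sketch (our addition) uses a Legendre pair of the form familiar from Markov-chain OM functionals (cf. [38, 39]); the activity parameter \(a\) and the precise convention-dependent factors are illustrative assumptions, not necessarily those of the text:

```python
import numpy as np

# For a single edge with activity a, one consistent Legendre pair is
#   Psi*(f) = 2a (cosh f - 1),
#   Psi(j)  = j arcsinh(j / 2a) - sqrt(j^2 + 4a^2) + 2a.
a = 1.3   # illustrative activity parameter

def psi(j):
    return j * np.arcsinh(j / (2 * a)) - np.sqrt(j**2 + 4 * a**2) + 2 * a

def psi_star(f):
    return 2 * a * (np.cosh(f) - 1)

def phi(j, f):
    """Onsager-Machlup-type cost (4): non-negative, zero iff j = 2a sinh f."""
    return psi(j) - j * f + psi_star(f)

f = 0.7
# Legendre duality (3): sup_j [ j f - Psi(j) ] = Psi*(f)
js = np.linspace(-20, 20, 400001)
sup = np.max(js * f - psi(js))
print(abs(sup - psi_star(f)) < 1e-6)        # True
print(phi(2 * a * np.sinh(f), f) < 1e-12)   # True: zero cost at the typical current
```

Both \(\varPsi \) and \(\varPsi ^\star \) are convex and even, as required by the assumptions above, and \(\varPhi \ge 0\) with equality exactly at the typical current.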

We assume also that the adjoint process satisfies an LDP, with a rate function obtained from *I* by replacing the force \(F(\rho )\) with some adjoint force \(F^*(\rho )\). That is,

### 6.2 Symmetric and Anti-symmetric Forces

### Proposition 11

### Proof

*T* and using (3) together with \(\varPsi (\rho ,j)=\varPsi (\rho ,-j)\) and (92), one has

Proposition 11 also yields a variational characterisation of \(I_0\). The following corollary is analogous to Eq. (4.8) of [7], as is its proof.

### Corollary 12

### Proof

### 6.3 Hamilton–Jacobi Like Equation for the Extended Hamiltonian

Here *h* solves \({\mathrm{div}\,}(\nabla h) = (\rho _0-\bar{\rho })\); see [13] for the relevant properties of these vector fields. With this choice, and using \(\dot{\rho } = -{\mathrm{div}\,}j\), one has \(\rho _t=\bar{\rho }+ {\mathrm{div}\,}A_t\) for all *t*, and one may also write (formally) \(A_t = {\mathrm{div}\,}^{-1}(\rho _t-\bar{\rho })\). Comparing with [7, Sect. IV.G], we write \(\rho =\bar{\rho }+{\mathrm{div}\,}A\) instead of \(\rho ={\mathrm{div}\,}A\), since for Markov chains one has (for any discrete vector field *A*) that \(\sum _x {\mathrm{div}\,}A(x)=0\), so it is not possible to solve \({\mathrm{div}\,}A = \rho \) if \(\rho \) is normalised to unity (recall that discrete vector fields have by definition \(A_{xy}=-A_{yx}\) [13]).

The fluctuations of *A* are therefore determined by the fluctuations of \((\rho ,j)\), so the LDP (1) implies a similar LDP for *A*, whose rate function is \(I^\mathrm{ex}_{[0,T]}((A_t)_{t\in [0,T]}) = I^\mathrm{ex}_0(A_0) + \int _0^T \mathbb {L}^\mathrm{ex}(A_t,\dot{A}_t)\mathrm {d}t\), where \(\mathbb {L}^\mathrm{ex}\) is a Lagrangian that depends on *A* and its time derivative (which we again refer to as the extended Lagrangian, cf. [7]). The function \(\mathbb {L}\) in (97) is then related to \(\mathbb {L}^\mathrm{ex}\) via the bijection between \((\rho ,j)\) and *A*. Considering again the case of Markov chains, since the time evolution of the system depends only on \({\mathrm{div}\,}A\) (which is \(\rho -\bar{\rho }\)) and not on *A* itself, one sees that \(\mathbb {L}^\mathrm{ex}(A,\dot{A})\) depends only on \({\mathrm{div}\,}A\) and \(\dot{A}\) (which is *j*). Hence we write, formally, \(\mathbb {L}(\rho ,j) = \mathbb {L}^\mathrm{ex}({\mathrm{div}\,}^{-1}(\rho -\bar{\rho }),-j)\), and we recover (97).

Hence \(\mathbb {L}\) is nothing but the extended Lagrangian \(\mathbb {L}^\mathrm{ex}\), written in different variables: for this reason we refer to \(\mathbb {L}\) as an (extended) Lagrangian.

We identify \(\mathbb {H}\) as the scaled cumulant generating function associated with the rate function \(I_{2.5}(\rho ,j)=\mathbb {L}(\rho ,j)\) [52, Sect. 3.1]. Analysis of rare fluctuations in terms of the field \(\xi \) is often more convenient than direct analysis of the rate function [32, 33] and is the basis of the “*s*-ensemble” method that has recently been exploited in a number of physical applications (for example [21, 24]). Using (3) and (4), we obtain

This leads to an *extended Hamilton–Jacobi equation*, which, for a functional \(\mathcal {S}\), is given by

### Proposition 13

The free energy \(I_0\) is the maximal non-negative solution to (100) which vanishes at the steady state \(\pi \). In other words, any functional \(\mathcal {S}\) that solves (100) and has \(\mathcal {S}(\pi )=0\) also satisfies \(\mathcal {S}\le I_0\).

### Proof

### 6.4 Generalisation of Lemma 2

Before ending, we note that (94) is analogous to Proposition 3 in the general setting of this section, but we have not yet proved any analogue of Lemma 2. Hence we have not obtained a generalisation of Corollary 4, nor any of its further consequences. To achieve this, one requires a further assumption within the general framework considered here, which amounts to a splitting of the Hamiltonian. This assumption holds for MFT and for Markov chains, and is a sufficient condition for a generalised Lemma 2.

One defines an adjoint Hamiltonian by replacing *F* by \(F^*\) in (99). Then, one assumes further that

## 7 Conclusion

In this article, we have presented several results for dynamical fluctuations in Markov chains. The central object in our discussion has been the function \(\varPhi \), which plays a number of different roles—it is the rate function for large deviations at level 2.5 (Eq. 64), and it also appears in the rate function for pathwise large deviation functions (Eq. 2). These results—derived originally by Maes et al. [38, 39]—originate from the relationship between \(\varPhi \) and the relative entropy between path measures (Appendix A). The canonical (Legendre transform) structure of \(\varPhi \) (Eq. 4) and its relation to time reversal (Eq. 25) have also been discussed before [38].

The function \(\varPhi \) depends on probability currents *j* and their conjugate forces *f*. Our Proposition 3 and Corollary 4 show how the rate functions in which \(\varPhi \) appears have another level of structure, based on the decomposition of the forces *F* into two pieces \(F=F^S+F^A\), according to their behaviour under time-reversal. A similar decomposition is applied in MFT [7]: the discussion of Sects. 5 and 6 shows how several results of that theory—which applies on macroscopic (hydrodynamic) scales—already have analogues for Markov chains, which provide microscopic descriptions of interacting particle systems. These results—which concern symmetries, gradient structures and (generalised) orthogonality relationships—show how properties of the rate functions are directly connected to physical ideas of free energy, dissipation, and time-reversal.

Looking forward, we hope that these structures can be exploited both in mathematics and physics. From a mathematical viewpoint, the canonical structure and generalised orthogonality relationships may provide new routes for scale-bridging calculations, just as the geometrical structure identified by Maas [35] has been used to develop new proofs of hydrodynamic limits [17]. In physics, a common technique is to propose macroscopic descriptions of physical systems based on symmetries and general principles—examples in non-equilibrium (active) systems include [51, 54]. However, this level of description leaves some ambiguity as to the best definitions of some physical quantities, such as the local entropy production [44]. We hope that the structures identified here can be useful in relating such macroscopic theories to underlying microscopic behaviour.

## Notes

### Acknowledgements

We thank Freddy Bouchet, Davide Gabrielli, Juan Garrahan, Jan Maas, Michiel Renger and Hugo Touchette for useful discussions. MK is supported by a scholarship from the EPSRC Centre for Doctoral Training in Statistical Applied Mathematics at Bath (SAMBa), under the project EP/L015684/1. JZ gratefully acknowledges funding by the EPSRC through project EP/K027743/1, the Leverhulme Trust (RPG-2013-261) and a Royal Society Wolfson Research Merit Award. The authors thank the anonymous referees for their careful reading of the manuscript and for many helpful comments and suggestions.

## References

- 1. Andrieux, D., Gaspard, P.: Fluctuation theorem for currents and Schnakenberg network theory. J. Stat. Phys. **127**(1), 107–131 (2007)
- 2. Basile, G., Benedetto, D., Bertini, L.: A gradient flow approach to linear Boltzmann equations. arXiv:1707.09204 (2017)
- 3. Basu, U., Maes, C.: Nonequilibrium response and frenesy. J. Phys.: Conf. Ser. **638**(1), 012001 (2015)
- 4. Benois, O., Kipnis, C., Landim, C.: Large deviations from the hydrodynamical limit of mean zero asymmetric zero range processes. Stoch. Process. Appl. **55**(1), 65–89 (1995)
- 5. Bernard, E.P., Krauth, W.: Two-step melting in two dimensions: first-order liquid-hexatic transition. Phys. Rev. Lett. **107**, 155704 (2011)
- 6. Bertini, L., Landim, C., Mourragui, M., et al.: Dynamical large deviations for the boundary driven weakly asymmetric exclusion process. Ann. Probab. **37**(6), 2357–2403 (2009)
- 7. Bertini, L., De Sole, A., Gabrielli, D., Jona-Lasinio, G., Landim, C.: Macroscopic fluctuation theory. Rev. Mod. Phys. **87**(2), 593–636 (2015)
- 8. Bertini, L., Faggionato, A., Gabrielli, D.: Flows, currents, and cycles for Markov chains: large deviation asymptotics. Stoch. Process. Appl. **125**(7), 2786–2819 (2015)
- 9. Bertini, L., Faggionato, A., Gabrielli, D.: Large deviations of the empirical flow for continuous time Markov chains. Ann. Inst. Henri Poincaré Probab. Stat. **51**(3), 867–900 (2015)
- 10. Chernyak, V.Y., Chertkov, M., Bierkens, J., Kappen, H.J.: Stochastic optimal control as non-equilibrium statistical mechanics: calculus of variations over density and current. J. Phys. A **47**(2), 022001 (2014)
- 11. Chetrite, R., Touchette, H.: Variational and optimal control representations of conditioned and driven processes. J. Stat. Mech. Theory Exp. **2015**(12), P12001 (2015)
- 12. Crooks, G.E.: Path-ensemble averages in systems driven far from equilibrium. Phys. Rev. E **61**, 2361–2366 (2000)
- 13. De Carlo, L., Gabrielli, D.: Gibbsian stationary non-equilibrium states. J. Stat. Phys. **168**(6), 1191–1222 (2017)
- 14. Dembo, A., Zeitouni, O.: Large Deviations Techniques and Applications. Stochastic Modelling and Applied Probability, vol. 38. Springer, Berlin (2010). Corrected reprint of 2nd edn (1998)
- 15. den Hollander, F.: Large Deviations. Fields Institute Monographs, vol. 14. American Mathematical Society, Providence (2000)
- 16. Esposito, M., Van den Broeck, C.: Three faces of the second law. I. Master equation formulation. Phys. Rev. E **82**, 011143 (2010)
- 17. Fathi, M., Simon, M.: The Gradient Flow Approach to Hydrodynamic Limits for the Simple Exclusion Process, pp. 167–184. Springer, Cham (2016)
- 18. Fleming, W.H., Soner, H.M.: Controlled Markov Processes and Viscosity Solutions. Stochastic Modelling and Applied Probability, vol. 25, 2nd edn. Springer, New York (2006)
- 19. Gallavotti, G., Cohen, E.G.D.: Dynamical ensembles in stationary states. J. Stat. Phys. **80**(5–6), 931–970 (1995)
- 20. Gardiner, C.: Stochastic Methods. A Handbook for the Natural and Social Sciences. Springer Series in Synergetics, 4th edn. Springer, Berlin (2009)
- 21. Garrahan, J.P., Jack, R.L., Lecomte, V., Pitard, E., van Duijvendijk, K., van Wijland, F.: First-order dynamical phase transition in models of glasses: an approach based on ensembles of histories. J. Phys. A **42**(7), 075007 (2009)
- 22. Gingrich, T.R., Horowitz, J.M., Perunov, N., England, J.L.: Dissipation bounds all steady-state current fluctuations. Phys. Rev. Lett. **116**, 120601 (2016)
- 23. Gingrich, T.R., Rotskoff, G.M., Horowitz, J.M.: Inferring dissipation from current fluctuations. J. Phys. A **50**(18), 184004 (2017)
- 24. Jack, R.L., Sollich, P.: Effective interactions and large deviations in stochastic processes. Eur. Phys. J. Spec. Top. **224**(12), 2351–2367 (2015)
- 25. Jarzynski, C.: Nonequilibrium equality for free energy differences. Phys. Rev. Lett. **78**, 2690–2693 (1997)
- 26. Kaiser, M., Jack, R.L., Zimmer, J.: Acceleration of convergence to equilibrium in Markov chains by breaking detailed balance. J. Stat. Phys. **168**, 259–287 (2017)
- 27. Kipnis, C., Olla, S., Varadhan, S.R.S.: Hydrodynamics and large deviation for simple exclusion processes. Commun. Pure Appl. Math. **42**(2), 115–137 (1989)
- 28. Kipnis, C., Landim, C.: Scaling Limits of Interacting Particle Systems. Grundlehren der Mathematischen Wissenschaften, vol. 320. Springer, Berlin (1999)
- 29. Kolomeisky, A.B., Fisher, M.E.: Molecular motors: a theorist’s perspective. Annu. Rev. Phys. Chem. **58**(1), 675–695 (2007)
- 30. Kwon, C., Ao, P., Thouless, D.J.: Structure of stochastic dynamics near fixed points. Proc. Natl Acad. Sci. U.S.A. **102**(37), 13029–13033 (2005)
- 31. Lavorel, J.: Matrix analysis of the oxygen evolving system of photosynthesis. J. Theor. Biol. **57**(1), 171–185 (1976)
- 32. Lebowitz, J.L.: A Gallavotti–Cohen-type symmetry in the large deviation functional for stochastic dynamics. J. Stat. Phys. **95**(1–2), 333–365 (1999)
- 33. Lecomte, V., Appert-Rolland, C., van Wijland, F.: Thermodynamic formalism for systems with Markov dynamics. J. Stat. Phys. **127**(1), 51–106 (2007)
- 34. Ma, Y.-A., Fox, E.B., Chen, T., Wu, L.: A unifying framework for devising efficient and irreversible MCMC samplers. arXiv:1608.05973 (2016)
- 35. Maas, J.: Gradient flows of the entropy for finite Markov chains. J. Funct. Anal. **261**(8), 2250–2292 (2011)
- 36. Machlup, S., Onsager, L.: Fluctuations and irreversible process. II. Systems with kinetic energy. Phys. Rev. **91**, 1512–1515 (1953)
- 37. Maes, C.: The fluctuation theorem as a Gibbs property. J. Stat. Phys. **95**(1–2), 367–392 (1999)
- 38. Maes, C., Netočný, K.: Canonical structure of dynamical fluctuations in mesoscopic nonequilibrium steady states. Europhys. Lett. **82**(3), 30003 (2008)
- 39. Maes, C., Netočný, K., Wynants, B.: On and beyond entropy production: the case of Markov jump processes. Markov Process. Relat. Fields **14**(3), 445–464 (2008)
- 40. Maes, C., Netočný, K., Wynants, B.: Monotonicity of the dynamical activity. J. Phys. A **45**(45), 455001 (2012)
- 41. Maes, C., Netočný, K.: Revisiting the Glansdorff–Prigogine criterion for stability within irreversible thermodynamics. J. Stat. Phys. **159**(6), 1286–1299 (2015)
- 42. Mariani, M.: A Gamma-convergence approach to large deviations. arXiv:1204.0640 (2012)
- 43. Mielke, A., Peletier, M.A., Renger, D.R.M.: On the relation between gradient flows and the large-deviation principle, with applications to Markov chains and diffusion. Potential Anal. **41**(4), 1293–1327 (2014)
- 44. Nardini, C., Fodor, É., Tjhung, E., van Wijland, F., Tailleur, J., Cates, M.E.: Entropy production in field theories without time-reversal symmetry: quantifying the non-equilibrium character of active matter. Phys. Rev. X **7**, 021007 (2017)
- 45. Pietzonka, P., Barato, A.C., Seifert, U.: Universal bounds on current fluctuations. Phys. Rev. E **93**, 052145 (2016)
- 46. Polettini, M., Lazarescu, A., Esposito, M.: Tightening the uncertainty principle for stochastic currents. Phys. Rev. E **94**, 052104 (2016)
- 47. Qian, H.: A decomposition of irreversible diffusion processes without detailed balance. J. Math. Phys. **54**(5), 053302 (2013)
- 48. Renger, D.R.M.: Large deviations of specific empirical fluxes of independent Markov chains, with implications for macroscopic fluctuation theory. Weierstrass Institute, Preprint 2375 (2017)
- 49. Schnakenberg, J.: Network theory of microscopic and macroscopic behavior of master equation systems. Rev. Mod. Phys. **48**(4), 571–585 (1976)
- 50. Seifert, U.: Stochastic thermodynamics, fluctuation theorems and molecular machines. Rep. Prog. Phys. **75**(12), 126001 (2012)
- 51. Toner, J., Tu, Y.: Long-range order in a two-dimensional dynamical XY model: how birds fly together. Phys. Rev. Lett. **75**, 4326–4329 (1995)
- 52. Touchette, H.: The large deviation approach to statistical mechanics. Phys. Rep. **478**(1–3), 1–69 (2009)
- 53. Vaikuntanathan, S., Gingrich, T.R., Geissler, P.L.: Dynamic phase transitions in simple driven kinetic networks. Phys. Rev. E **89**, 062108 (2014)
- 54. Wittkowski, R., Tiribocchi, A., Stenhammar, J., Allen, R.J., Marenduzzo, D., Cates, M.E.: Scalar \(\phi ^4\) field theory for active-particle phase separation. Nat. Commun. **5**, 4351 (2014)

## Copyright information

**Open Access**This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.