1 Introduction

In this paper we investigate the large deviation principle for a model of random permutations called the one-dimensional interchange process. The process can be roughly described as follows. We put N particles, labelled from 1 to N, on the line \(\{1,\ldots , N\}\) and at each time step perform the following procedure: an edge is chosen at random and the particles at its endpoints are swapped. By comparing the particles’ initial positions with their positions after a given time t we obtain a random permutation from the symmetric group \(\mathcal {S}_N\) on N elements.

The interchange process on the interval (whose discrete time analog is known as the adjacent transposition shuffle) and on more general graphs has attracted considerable attention in probability theory, for example with regard to the analysis of mixing times. It is natural to ask whether, after proper rescaling and as \(N \rightarrow \infty \), the permutations obtained in the interchange process converge in distribution to an appropriately defined limiting process.

Such limits have recently been studied ([HKM+13, RVV19]) under the name of permutons and permuton processes. These notions were inspired by the theory of graph limits ([Lov12]), where the analogous notion of a graphon appears as a limit of dense graphs. A permuton is a Borel probability measure on \([0,1]^2\) with uniform marginals on each coordinate. A sequence of permutations \(\sigma ^N \in \mathcal {S}_N\) is said to converge to a permuton \(\mu \) as \(N \rightarrow \infty \) if the corresponding empirical measures

$$\begin{aligned} \frac{1}{N}\sum _{i=1}^N \delta _{\left( \frac{i}{N},\frac{\sigma ^N(i)}{N}\right) } \end{aligned}$$

converge weakly to \(\mu \). A permuton process is a stochastic process \(X = (X_{t}, 0 \le t \le T)\) taking values in [0, 1], with continuous sample paths and having uniform marginals at each time \(t \in [0,T]\). A permutation-valued path, such as a sample from the interchange process, is said to converge to X if the trajectory of a randomly chosen particle converges in distribution to X.
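
To make this concrete, the following short numerical illustration (a Python sketch; the helper name empirical_measure and all parameter choices are ours, not part of the formal development) bins the empirical measure of a uniformly random permutation and checks that it approaches the uniform permuton, i.e. Lebesgue measure on \([0,1]^2\).

```python
import numpy as np

def empirical_measure(sigma, bins=10):
    """Bin the N points (i/N, sigma(i)/N) into a bins x bins histogram."""
    N = len(sigma)
    x = np.arange(1, N + 1) / N
    y = (sigma + 1) / N          # sigma is a 0-indexed numpy permutation
    H, _, _ = np.histogram2d(x, y, bins=bins, range=[[0, 1], [0, 1]])
    return H / N                 # total mass 1

rng = np.random.default_rng(0)
N = 10_000
H = empirical_measure(rng.permutation(N))
# Every cell of the 10 x 10 grid should carry mass close to 1/100.
print(np.abs(H - 0.01).max())
```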

Depending on the time scale considered, one observes different asymptotic structure in the permutations arising from the interchange process. If the average number of all swaps is greater than \(\sim N^3 \log N\), the process will be close to its stationary distribution ([Ald83, Lac16]), which is the uniform distribution on \(\mathcal {S}_N\). For \(\sim N^3\) swaps each particle has displacement of order N and the whole process converges, in the sense of permuton processes, to a Brownian motion on [0, 1] ([RV17]).

Here we will be interested in yet shorter time scales, corresponding to \(\sim N^{2+\varepsilon }\) swaps for fixed \(\varepsilon \in (0,1)\). In this scaling each particle has displacement \(\ll N\), so the resulting permutations will be close to the identity permutation. Nevertheless, in the spirit of large deviation theory one can still ask questions about rare events, for example “what is the probability that starting from the identity permutation we are close to a fixed permuton after time t?” or, more generally, “what is the probability that the interchange process behaves like a given permuton process X?”. We expect such probabilities to decay exponentially in \(N^\gamma \) for some \(\gamma > 0\), with the decay rate given by a rate function on the space of permuton processes.

The large deviation principle we obtain in this paper can be informally summarized as follows: for a class of permuton processes solving a natural energy minimization problem, the probability \(\mathbb {P}(A)\) that the interchange process is close in distribution to a process X satisfies asymptotically

$$\begin{aligned} \frac{1}{N^{\gamma }} \log \mathbb {P}(A) \approx - I(X), \end{aligned}$$
(1)

where \(\gamma = 2 - \varepsilon \) and I(X) is the energy of X, defined as the expected Dirichlet energy of a path sampled from X. Apart from a purely probabilistic interest, the result is relevant to two other seemingly unrelated subjects, namely the study of Euler equations in fluid dynamics and the study of sorting networks in combinatorics.

Let us first state the energy minimization problem in question, which is as follows – given a permuton \(\mu \), find

$$\begin{aligned} \inf \limits _{(X_0, X_T) \sim \mu } I(X), \end{aligned}$$
(2)

where the infimum is over all permuton processes X such that \((X_0, X_T)\) has distribution \(\mu \). As it happens, such energy-minimizing processes have been considered in fluid dynamics in the study of the incompressible Euler equations, under the name of generalized incompressible flows. This connection is discussed in more detail in Sect. 2.2. Very roughly speaking, the Euler equations in a domain \(D \subseteq \mathbb {R}^d\) describe the motion of fluid particles whose trajectories satisfy the equation

$$\begin{aligned} x''(t) = - \nabla p (t, x) \end{aligned}$$
(3)

for some function p called the pressure. The incompressibility constraint means that the flow defined by the equation has to be volume-preserving. Classical, smooth solutions to Euler equations correspond to flows which are diffeomorphisms of D. Generalized incompressible flows are a stochastic variant of such solutions in which each particle can choose its initial velocity independently from a given probability distribution.

It turns out that, under additional regularity assumptions, such generalized solutions to the Euler equation (3) for \(D = [0,1]\) correspond exactly to permuton processes solving the energy minimization problem (2) for some permuton \(\mu \). Our large deviation result (1) is valid precisely for such energy-minimizing permuton processes (again, under certain regularity assumptions).

As it happens, the original motivation for our work came from a different direction, namely from the study of sorting networks in combinatorics. This connection is explained in more detail below. Using our large deviation principle (1), we are able to prove novel results on a variant of the model we call relaxed sorting networks. Thus the large deviation principle presented in this paper provides a rather unexpected link between problems in combinatorics (sorting networks) and fluid dynamics (incompressible Euler equations), along with a quite general framework for analyzing permuton processes which we hope will find further applications.

Main results. Let us now state our main results more formally, with complete definitions and a discussion of the assumptions deferred until Sects. 2.1 and 3. Let \(\mathcal {D}= \mathcal {D}([0,T], [0,1])\) be the space of càdlàg paths from [0, T] to [0, 1] and let \(\mathcal {M}(\mathcal {D})\) be the space of Borel probability measures on \(\mathcal {D}\). Let \(\mathcal {P}\subseteq \mathcal {M}(\mathcal {D})\) denote the space of permuton processes and their approximations by permutation-valued processes. For \(\pi \in \mathcal {M}(\mathcal {D})\) we will denote by \(I(\pi )\) the expected Dirichlet energy of the process X whose distribution is \(\pi \).

Let \(\eta ^N\) denote the interchange process in continuous time on the interval \(\{1, \ldots , N\}\), sped up by \(N^{\alpha }\) for some \(\alpha \in (1,2)\). Let \(\gamma = 3 - \alpha \). We have the following large deviation principle.

Theorem A

(Large deviation lower bound). Let \(\mathbb {P}^{N}\) be the law of the interchange process \(\eta ^N\) and let \(\mu ^{\eta ^{N}} \in \mathcal {M}(\mathcal {D})\) be the empirical distribution of its trajectories. Let \(\pi \) be a permuton process which is a generalized solution to the Euler equations (19). Provided \(\pi \) satisfies Assumptions (3.1), for any open set \(\mathcal {O}\subseteq \mathcal {P}\) such that \(\pi \in \mathcal {O}\) we have

$$\begin{aligned} \liminf _{N \rightarrow \infty } N^{-\gamma } \log \mathbb {P}^{N}\left( \mu ^{\eta ^{N}} \in \mathcal {O}\right) \ge - I(\pi ). \end{aligned}$$

Theorem B

(Large deviation upper bound). Let \(\mathbb {P}^{N}\) be the law of the interchange process \(\eta ^{N}\) and let \(\mu ^{\eta ^{N}} \in \mathcal {M}(\mathcal {D})\) be the empirical distribution of its trajectories. For any closed set \({\mathcal {C}} \subseteq \mathcal {P}\) we have

$$\begin{aligned} \limsup _{N \rightarrow \infty } N^{-\gamma } \log \mathbb {P}^{N}\left( \mu ^{\eta ^{N}} \in {\mathcal {C}} \right) \le - \inf \limits _{\pi \in {\mathcal {C}}} I(\pi ). \end{aligned}$$

These results appear below as Theorems 7.3 and 8.4, respectively. The large deviation upper bound is valid for all permuton processes, without any additional assumptions. On the other hand, in the proof of the lower bound we exploit rather heavily the special structure possessed by generalized solutions to Euler equations. We expect the lower bound to hold for arbitrary permuton processes as well, since one can locally approximate any permuton process by energy minimizers. However, for our techniques to apply one would need a more detailed understanding of the regularity of the associated velocity distributions and pressure functions, which falls outside the scope of our work.

The reader may notice that the rate function, which is the energy \(I(\pi )\), is similar to the one appearing in the analysis of large deviations for independent random walks. In fact, the crux of our proofs lies in showing that particles in the interchange process and its perturbations are in a certain sense almost independent. The main techniques used here come from the field of interacting particle systems; a comprehensive introduction to the subject can be found in [KL99]. The novelty in our approach lies in applying tools usually used to study hydrodynamic limits in a setting which is in some respects more involved, since the limiting objects we consider, permuton processes, are stochastic processes rather than deterministic objects such as the solutions of PDEs appearing, for example, for exclusion processes.

Sorting networks and the sine curve process. The large deviation bounds can be applied to obtain results on a model related to sorting networks. A sorting network on N elements is a sequence of \(M = \left( {\begin{array}{c}N\\ 2\end{array}}\right) \) transpositions \((\tau _{1}\), \(\tau _{2}\), \(\ldots \), \(\tau _{M})\) such that each \(\tau _{i}\) is a transposition of adjacent elements and \(\tau _{M} \circ \ldots \circ \tau _{1} = \textrm{rev}_{N}\), where \(\textrm{rev}_{N} = (N \,\ldots \,2 \, 1)\) denotes the reverse permutation. Since the reverse permutation has \(\left( {\begin{array}{c}N\\ 2\end{array}}\right) \) inversions and each adjacent transposition changes the number of inversions by exactly one, any sequence of adjacent transpositions giving the reverse permutation must have length at least \(\left( {\begin{array}{c}N\\ 2\end{array}}\right) \). Hence sorting networks can be thought of as shortest paths joining the identity permutation and the reverse permutation in the Cayley graph of \(\mathcal {S}_{N}\) generated by adjacent transpositions.

A random sorting network is obtained by sampling a sorting network uniformly at random among all sorting networks on N elements. Let us work in continuous time, assuming each transposition \(\tau _i\) happens at time \(\frac{i}{M+1}\). It was conjectured in [AHRV07] and recently proved in [Dau22] that the trajectory of a randomly chosen particle in a random sorting network has a remarkable limiting behavior as \(N \rightarrow \infty \), namely it converges in the sense of permuton processes to a deterministic limit, which is the sine curve process described below.

Here it will be more natural to consider the square \([-1,1]^2\) and processes with values in \([-1,1]\) instead of [0, 1] (with the obvious changes in the notion of a permuton and a permuton process which we leave implicit). The Archimedean law is the measure on \([-1,1]^2\) obtained by projecting the normalized surface area of a 2-dimensional half-sphere to the plane or, equivalently, the measure supported inside the unit disk \(\{x^2 + y^2 \le 1\}\) whose density is given by \(1 /(2\pi \sqrt{1 - x^2 - y^2}) \, dx \, dy\). Observe that thanks to the well-known plank property each strip \([a,b] \times [-1,1]\) has measure proportional to \(b-a\), hence the Archimedean law defines a permuton.

The sine curve process is the permuton process \({\mathcal {A}} = ({\mathcal {A}}_{t}, 0 \le t \le 1)\) with the following distribution – we sample (XY) from the Archimedean law and then follow the path

$$\begin{aligned} {\mathcal {A}}_t = X \cos \pi t + Y \sin \pi t. \end{aligned}$$

One can directly check that \({\mathcal {A}}_t\) has uniform distribution on \([-1,1]\) at each time t, hence \({\mathcal {A}}_t\) indeed defines a permuton process. Observe that \(({\mathcal {A}}_{0}, {\mathcal {A}}_{0}) = (X,X)\) and \(({\mathcal {A}}_{0}, {\mathcal {A}}_{1}) = (X, -X)\), thus the sine curve process defines a path between the identity permuton and the reverse permuton.

An equivalent way of describing the sine curve process consists of choosing a pair \((R, \theta )\) at random, where the angle \(\theta \) is uniform on \([0,2\pi ]\), independent of R, which has density \(r / \sqrt{1-r^2} \, dr\) on [0, 1], and following the path \({\mathcal {A}}_t = R \cos (\pi t + \theta )\). Thus the trajectories of this process are sine curves with random initial phase and amplitude – the path of a random particle is determined by its initial position X and velocity V, given by \((X,V) = (R \cos \theta , -\pi R \sin \theta )\).
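
This description is easy to test by simulation. The sketch below (ours; it uses the inverse-transform observation that \(R = \sqrt{1 - U^2}\) with U uniform on [0, 1] has density \(r/\sqrt{1-r^2}\)) samples the Archimedean law and confirms that each marginal \({\mathcal {A}}_t\) is uniform on \([-1,1]\), i.e. has mean 0 and variance 1/3.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
theta = rng.uniform(0, 2 * np.pi, n)             # uniform phase
R = np.sqrt(1 - rng.uniform(0, 1, n) ** 2)       # density r / sqrt(1 - r^2) on [0, 1]
X, Y = R * np.cos(theta), R * np.sin(theta)      # (X, Y) ~ Archimedean law

for t in [0.0, 0.25, 0.5, 1.0]:
    A_t = X * np.cos(np.pi * t) + Y * np.sin(np.pi * t)
    # Uniform on [-1, 1] means mean 0 and variance 1/3.
    print(t, round(A_t.mean(), 3), round(A_t.var(), 3))
```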

Recall now the energy minimization problem (18). The sine curve process is the unique minimizer of energy among all permuton processes joining the identity to the reverse permuton ([Bre89], see also [RVV19]), with the minimal energy equal to \(I({\mathcal {A}}) = \frac{\pi ^2}{6}\). It is one of the few examples where the solution to the problem (18) can be explicitly calculated for a target permuton \(\mu \). It also seems to play a special role in constructing generalized incompressible flows which are non-unique solutions to the energy minimization problem in dimensions greater than one, see, e.g., [BFS09].

The sine curve process is a generalized solution to Euler equations with the pressure function \(p(x) = \frac{x^2}{2}\), which unsurprisingly leads to each particle satisfying the harmonic oscillator equation \(x'' = -x\). The reader may check that the sine curve process satisfies Assumptions (3.1) (with the velocity distribution being time-independent), thus providing a non-trivial and explicit example for which our large deviation bounds hold. To the best of our knowledge the connection between sorting networks on the one hand and Euler equations on the other was first observed in the literature in [Dau22].

Let us now describe the results on relaxed sorting networks. Fix \(\delta > 0\) and \(N \ge 1\). We define a \(\delta \)-relaxed sorting network of length M on N elements to be a sequence of M adjacent transpositions \((\tau _1, \ldots , \tau _M)\) such that the permutation \(\sigma _M = \tau _M \circ \ldots \circ \tau _1\) is \(\delta \)-close to the reverse permutation \(\textrm{rev} = (N \, \ldots \, 2 \, 1)\) in the Wasserstein distance on the space \(\mathcal {M}([0,1]^2)\) of Borel probability measures on \([0,1]^2\) (see Sect. 2.1 for the definition). For fixed \(\kappa \in (0,1)\) we define a random \(\delta \)-relaxed sorting network on N elements by choosing M from a Poisson distribution with mean \(\lfloor \frac{1}{2} N^{1 + \kappa }(N-1) \rfloor \) and then sampling a \(\delta \)-relaxed sorting network of length M uniformly at random.

Our first result is that the analog of the sorting network conjecture holds for relaxed sorting networks, that is, in a random relaxed sorting network the trajectory of a random particle is with high probability close in distribution to the sine curve process. Precisely, we have the following

Theorem 1.1

Fix \(\kappa \in (0,1)\) and let \(\pi ^{N}_{\delta }\) denote the empirical distribution of the permutation process (as defined in (5)) associated to a random \(\delta \)-relaxed sorting network on N elements. Let \(\pi _{{\mathcal {A}}}\) denote the distribution of the sine curve process. Given any \(\varepsilon > 0\), for all sufficiently small \(\delta > 0\) we have

$$\begin{aligned} \lim \limits _{N \rightarrow \infty } \mathbb {P}^N \left( \pi ^{N}_{\delta } \in B(\pi _{{\mathcal {A}}}, \varepsilon ) \right) = 1, \end{aligned}$$

where \(B(\pi _{{\mathcal {A}}}, \varepsilon )\) is the \(\varepsilon \)-ball in the Wasserstein distance on \(\mathcal {P}\).

Here for consistency of notation we assume that the sine curve process is rescaled so that it is supported on [0, 1] rather than \([-1,1]\).

The second result is more combinatorial and concerns the problem of enumerating sorting networks. A remarkable formula due to Stanley ([Sta84]) says that the number of all sorting networks on N elements is equal to

$$\begin{aligned} \frac{\left( {\begin{array}{c}N\\ 2\end{array}}\right) !}{1^{N-1} 3^{N-2} \ldots (2N-3)^{1}}, \end{aligned}$$

which is asymptotic to \(\exp \left\{ \frac{N^2}{2}\log N + (\frac{1}{4} - \log 2)N^2 + O(N \log N) \right\} \).
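
Both the exact count and the asymptotics are easy to check numerically. The sketch below (helper names ours) counts sorting networks for small N by dynamic programming, using the fact noted above that each adjacent transposition changes the number of inversions by exactly one (so along a shortest path only ascents may be swapped), and then compares the logarithm of Stanley's formula with the stated asymptotics.

```python
from functools import lru_cache
from math import comb, factorial, lgamma, log

def count_sorting_networks(N):
    """Number of sequences of binom(N,2) adjacent transpositions composing
    to the reverse permutation; only swaps of ascents can occur on a
    shortest path, since each must increase the inversion number."""
    target = tuple(range(N - 1, -1, -1))

    @lru_cache(maxsize=None)
    def walks(perm, steps):
        if steps == 0:
            return 1 if perm == target else 0
        total = 0
        for x in range(N - 1):
            if perm[x] < perm[x + 1]:          # swap an ascent
                q = list(perm)
                q[x], q[x + 1] = q[x + 1], q[x]
                total += walks(tuple(q), steps - 1)
        return total

    return walks(tuple(range(N)), comb(N, 2))

def stanley(N):
    """Stanley's formula: binom(N,2)! / (1^(N-1) 3^(N-2) ... (2N-3)^1)."""
    denom = 1
    for j in range(N - 1):
        denom *= (2 * j + 1) ** (N - 1 - j)
    return factorial(comb(N, 2)) // denom

for N in range(2, 6):
    print(N, count_sorting_networks(N), stanley(N))   # the two counts agree

# Logarithm of Stanley's formula vs. (N^2/2) log N + (1/4 - log 2) N^2:
for N in [20, 100, 400]:
    exact = lgamma(comb(N, 2) + 1) - sum((N - 1 - j) * log(2 * j + 1)
                                         for j in range(N - 1))
    print(N, round(exact), round(0.5 * N**2 * log(N) + (0.25 - log(2)) * N**2))
```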

For relaxed sorting networks we have the following asymptotic estimate

Theorem 1.2

For any \(\kappa \in (0,1)\) let \({\mathcal {S}}^{N}_{\kappa , \delta }\) be the number of \(\delta \)-relaxed sorting networks on N elements of length \(M = \lfloor \frac{1}{2} N^{1+\kappa }(N-1)\rfloor \). We have

$$\begin{aligned} {\mathcal {S}}^{N}_{\kappa ,\delta } = \exp \left\{ \frac{1}{2} N^{1 + \kappa } (N-1)\log (N-1) - \left( \frac{\pi ^2}{6} + \varepsilon ^{N}_{\delta }\right) N^{2 - \kappa }\right\} , \end{aligned}$$

where \(\varepsilon ^{N}_{\delta }\) satisfies \(\lim \limits _{\delta \rightarrow 0} \lim \limits _{N \rightarrow \infty } \varepsilon ^{N}_{\delta } = 0\).

The asymptotics is analogous to that of Stanley’s formula – the first term in the exponent corresponds simply to the number of all paths of the required length, and, crucially, the factor \(\frac{\pi ^2}{6}\) corresponds to the energy of the sine curve process.

The proofs of Theorems 1.1 and 1.2 are given in Sect. 9. It would be an interesting problem to obtain analogous results for relaxed sorting networks reaching exactly the reverse permutation, not only being \(\delta \)-close in the permuton topology. This case is not covered by the results of this paper, since the set of permuton processes reaching exactly the reverse permuton is not open, hence the lower bound of Theorem A does not apply.

2 Preliminaries

2.1 Permutons and stochastic processes.

Permutons. Consider the space \(\mathcal {M}([0,1]^2)\) of all Borel probability measures on the unit square \([0,1]^2\), endowed with the weak topology. A permuton is a probability measure \(\mu \in \mathcal {M}([0,1]^2)\) with uniform marginals. In other words, \(\mu \) is the joint distribution of a pair of random variables (X, Y), with X, Y taking values in [0, 1] and having marginal distributions \(X, Y \sim {\mathcal {U}}[0,1]\). We will sometimes call the pair (X, Y) itself a permuton if there is no risk of ambiguity. A few simple examples of permutons are the identity permuton (X, X), the uniform permuton (the distribution of two independent copies of X, which is the uniform measure on the square) and the reverse permuton \((X, 1 - X)\).

Permutons can be thought of as continuous limits of permutations in the following sense. Let \(\mathcal {S}_{N}\) be the symmetric group on N elements and let \(\sigma \in \mathcal {S}_{N}\). We associate to \(\sigma \) its empirical measure

$$\begin{aligned} \mu _{\sigma } = \frac{1}{N} \sum \limits _{i=1}^{N} \delta _{\left( \frac{i}{N}, \, \frac{\sigma (i)}{N}\right) }, \end{aligned}$$
(4)

which is an element of \(\mathcal {M}([0,1]^2)\). By a slight abuse of terminology we will sometimes identify \(\sigma \) with \(\mu _\sigma \). Since every such measure has uniform marginals on \(\left\{ \frac{1}{N}, \frac{2}{N}, \ldots , 1 \right\} \), it is not difficult to see that if a sequence of empirical measures converges weakly, the limiting measure will be a permuton. Conversely, every permuton can be realized as a limit of finite permutations, in the sense of weak convergence of empirical measures (see [HKM+13]). We will consider \(\mathcal {M}([0,1]^2)\) endowed with the Wasserstein distance corresponding to the Euclidean metric on \([0,1]^2\), under which the distance of measures \(\mu \) and \(\nu \) is given by

$$\begin{aligned} d_{{\mathcal {W}}}(\mu , \nu ) = \inf \limits _{\left\{ (X,Y), (X',Y')\right\} } \mathbb {E}\left[ \sqrt{(X - X')^2 + (Y-Y')^2} \right] , \end{aligned}$$

where the infimum is over all couplings of (X, Y) and \((X',Y')\) such that \((X,Y) \sim \mu \), \((X',Y') \sim \nu \).
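
Between two empirical measures of permutations of the same size N, both consisting of N atoms of mass \(\frac{1}{N}\), the infimum over couplings is attained at a permutation matrix (a vertex of the transport polytope), so \(d_{{\mathcal {W}}}\) can be computed exactly as an optimal assignment. A sketch (the helper is ours, built on SciPy's Hungarian-algorithm routine):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def wasserstein_permutations(sigma, tau):
    """Exact d_W between the empirical measures of two permutations of size
    N (0-indexed numpy arrays). Both measures have N atoms of mass 1/N, so
    an optimal coupling can be taken to be a permutation matrix."""
    N = len(sigma)
    grid = np.arange(1, N + 1) / N
    P = np.stack([grid, (sigma + 1) / N], axis=1)   # atoms of mu_sigma
    Q = np.stack([grid, (tau + 1) / N], axis=1)     # atoms of mu_tau
    C = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=2)  # Euclidean costs
    rows, cols = linear_sum_assignment(C)           # optimal assignment
    return C[rows, cols].mean()

N = 500
ident = np.arange(N)
print(wasserstein_permutations(ident, ident[::-1]))  # identity vs. reverse
```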

The path space \(\mathcal {D}\) and stochastic processes. A natural setting for analyzing trajectories of particles in random permutation sequences is to consider \(\mathcal {D}= \mathcal {D}([0,T], [0,1])\), the space of all càdlàg paths from [0, T] to [0, 1]. We endow it with the standard Skorokhod topology, metrized by a metric \(\rho \) under which \(\mathcal {D}\) is separable and complete. By \(\mathcal {M}(\mathcal {D})\) we will denote the space of all Borel probability measures on \(\mathcal {D}\), endowed with the weak topology. It will be convenient to metrize \(\mathcal {M}(\mathcal {D})\) by the Wasserstein distance, under which the distance between measures \(\mu \) and \(\nu \) is given by

$$\begin{aligned} d_{{\mathcal {W}}}(\mu , \nu ) = \inf \limits _{(X,Y)} \mathbb {E}\left[ \rho (X,Y) \right] , \end{aligned}$$

where the infimum is over all couplings (X, Y) such that \(X \sim \mu \), \(Y \sim \nu \). We will also make use of the Wasserstein distance associated to the supremum norm, given by

$$\begin{aligned} d_{{\mathcal {W}}}^{sup}(\mu , \nu ) = \inf \limits _{(X,Y)} \mathbb {E}\left[ \left\| X - Y \right\| _{sup} \right] , \end{aligned}$$

where \(\left\| \cdot \right\| _{sup}\) is the supremum norm on \(\mathcal {D}\) and again the infimum is over all couplings (X, Y) as above.

Given two times \(0 \le s \le t \le T\) and a stochastic process \(X = (X_{t}, 0 \le t \le T)\) with distribution \(\mu \in \mathcal {M}(\mathcal {D})\), by \(\mu _{s, t} \in \mathcal {M}([0,1]^2)\) we will denote the distribution of the marginal \((X_{s}, X_{t})\). Note that the projection \(\mu \mapsto \mu _{s, t}\) is continuous as a map from \(\mathcal {M}(\mathcal {D})\) to \(\mathcal {M}([0,1]^2)\) as long as paths \(X \sim \mu \) sampled from \(\mu \) have almost surely no jumps at times s and t. We will sometimes implicitly identify the stochastic process with its distribution when there is no risk of misunderstanding.

Permutation processes and permuton processes. Consider a permutation-valued path \(\eta ^{N} = (\eta ^{N}_{t}, 0 \le t \le T)\), with \(\eta ^{N}_{t}\) taking values in the symmetric group \({\mathcal {S}}_{N}\). We will always assume that \(\eta ^N\) is càdlàg as a map from [0, T] to \(\mathcal {S}_N\). Let \(\eta ^{N}(i) = \left( \eta ^{N}_{t}(i), 0 \le t \le T \right) \) be the trajectory of i under \(\eta ^{N}\) and let \(X^{\eta ^{N}}(i) = \frac{1}{N}\eta ^{N}(i)\) be the rescaled trajectory. We define the empirical measure

$$\begin{aligned} \mu ^{\eta ^{N}} = \frac{1}{N} \sum \limits _{i=1}^{N} \delta _{X^{\eta ^{N}}(i)}, \end{aligned}$$
(5)

where \(\delta _{X^{\eta ^{N}}(i)}\) is the delta measure concentrated on the trajectory \(X^{\eta ^{N}}(i)\).

The associated permutation process \(X^{\eta ^{N}} = (X^{\eta ^{N}}_{t}, 0 \le t \le T)\) is obtained by choosing \(i = 1, \ldots , N\) uniformly at random and following the path \(X^{\eta ^{N}}(i)\). In other words, \(X^{\eta ^{N}}\) is a random path with values in [0, 1] whose distribution is \(\mu ^{\eta ^{N}} \in \mathcal {M}(\mathcal {D})\). If \(\eta ^N\) is fixed, the only randomness here comes from the random choice of the particle i. Note that at each time t the marginal distribution of \(X_{t}^{\eta ^{N}}\) is uniform on \(\left\{ \frac{1}{N}, \frac{2}{N}, \ldots , 1 \right\} \).

A permuton process is a stochastic process \(X = (X_{t}, 0 \le t \le T)\) taking values in [0, 1], with continuous sample paths and such that for every \(t \in [0,T]\) the marginal \(X_{t}\) is uniformly distributed on [0, 1]. The name is justified by observing that if \(\pi \) is the distribution of X, then for any fixed \(s,t \in [0,T]\) the joint distribution \(\pi _{s,t} \in \mathcal {M}([0,1]^2)\) of \((X_{s}, X_{t})\) defines a permuton. As explained in the next subsection, permuton processes arise naturally as limits of permutation processes defined above.

Since every permutation process has marginals uniform on \(\left\{ \frac{1}{N}, \frac{2}{N}, \ldots , 1 \right\} \), we will call it an approximate permuton process. By \(\mathcal {P}\) we will denote the space of all permuton processes and approximate permuton processes, treated as a subspace of \(\mathcal {M}(\mathcal {D})\) (with the same topology and the metric \(d_{{\mathcal {W}}}\)).

Random permutation and permuton processes. A random permuton process is a permuton process chosen from some probability distribution on the space of all permuton processes, i.e., a random variable X, defined on a probability space \(\Omega \), such that \(X(\omega )\) is a permuton process for each \(\omega \in \Omega \). By identifying the random variable with its distribution we can also think of a random permuton process as an element of \(\mathcal {M}(\mathcal {P})\). In this setting, with the weak topology on \(\mathcal {M}(\mathcal {P})\), one can consider convergence in distribution of random permuton processes \(X_{n}\) to a (possibly also random) permuton process X.

One can prove (see [RVV19]) that if a sequence of random permutation processes \(X^{\eta ^N}\) converges in distribution, then the limit is a permuton process (in general also random). Of particular interest will be sequences of random permutation-valued paths \(\eta ^{N}\) (coming for example from the interchange process) such that the corresponding permutation processes \(X^{\eta ^N}\) converge in distribution to a deterministic permuton process (for example the sine curve process described below).

For any random permuton process X we define its associated random particle process \({\bar{X}} = \mathbb {E}_{\omega } X(\omega )\), which is a process with a deterministic distribution, obtained by first sampling a permuton process \(X(\omega )\) and then sampling a random path according to \(X(\omega )\).

To elucidate the difference between random and deterministic permuton processes, consider a random permuton process X and its associated random particle process \({\bar{X}}\). If we sample an outcome \(X(\omega )\) and then a path from \(X(\omega )\), then obviously the distribution of paths will be the same as for \({\bar{X}}\). However, consider now sampling an outcome \(X(\omega )\) and then sampling independently two paths from \(X(\omega )\). The distribution of a pair of paths obtained in this way will not in general be the same as the distribution of two independent copies sampled from \({\bar{X}}\), since the paths might be correlated within the outcome \(X(\omega )\). The following general lemma will be useful later for showing that limits of certain random permutation processes are in fact deterministic ([RV17, Lemma 3]):

Lemma 2.1

Let K be a compact metric space and let \(\mu \) be a random probability measure on K, i.e., a random variable with values in \(\mathcal {M}(K)\). Let X and Y be two independent samples from an outcome of \(\mu \) and let Z be a sample from an outcome of an independent copy of \(\mu \). If (X, Y), as a \(K^2\)-valued random variable, has the same distribution as (X, Z), then \(\mu \) is in fact deterministic, i.e., there exists \(\nu \in \mathcal {M}(K)\) such that \(\mu = \nu \) almost surely.

Energy. Here we introduce several related notions of energy for paths, permutations, permutons and permuton processes.

Given a path \(\gamma : [0,T] \rightarrow [0,1]\) and a finite partition \(\Pi = \{ 0 = t_{0}< t_{1}< \ldots < t_{k} = T \}\) we define the energy of \(\gamma \) with respect to \(\Pi \) as

$$\begin{aligned} \mathcal {E}^{\Pi }(\gamma ) = \frac{1}{2} \sum \limits _{i=1}^{k} \frac{| \gamma (t_{i}) - \gamma (t_{i-1}) |^2}{t_{i} - t_{i-1}}, \end{aligned}$$
(6)

and the energy of \(\gamma \) as

$$\begin{aligned} \mathcal {E}(\gamma ) = \sup _{\Pi } \mathcal {E}^{\Pi }(\gamma ), \end{aligned}$$
(7)

where the supremum is over all finite partitions \(\Pi = \{ 0 = t_{0}< t_{1}< \ldots < t_{k} = T \}\). For a path which is not absolutely continuous the supremum is equal to \(+\infty \). If a path \(\gamma \) is differentiable, its energy is equal to

$$\begin{aligned} \frac{1}{2} \int \limits _{0}^{T} {\dot{\gamma }}(s)^2 \, ds. \end{aligned}$$

For a permutation \(\sigma \in {\mathcal {S}}_N\) we define its energy as

$$\begin{aligned} I(\sigma ) = \frac{1}{2} \left( \frac{1}{N} \sum \limits _{i=1}^{N} \left( \frac{\sigma (i) - i}{N} \right) ^2 \right) . \end{aligned}$$
(8)

Likewise, for a permuton \(\mu \in \mathcal {M}([0,1]^2)\) its energy is defined by

$$\begin{aligned} I(\mu ) = \frac{1}{2} \mathbb {E}|X-Y|^2, \end{aligned}$$
(9)

where the pair (X, Y) has distribution \(\mu \). If \(\mu = \mu _{\sigma }\) is the empirical measure of a permutation \(\sigma \in {\mathcal {S}}_N\), defined by (4), then we have \(I(\mu _{\sigma }) = I(\sigma )\). Note also that \(I = I(\mu )\) is a continuous function of \(\mu \) in the weak topology on \(\mathcal {M}([0,1]^2)\).
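
For a quick sanity check of (8) and (9): for the reverse permutation, \(I(\textrm{rev}_N) \rightarrow \frac{1}{2}\int _0^1 (2x-1)^2 \, dx = \frac{1}{6}\), the energy of the reverse permuton \((X, 1-X)\). A numerical sketch (naming ours):

```python
import numpy as np

def permutation_energy(sigma):
    """I(sigma) = (1/2) * (1/N) * sum_i ((sigma(i) - i)/N)^2, cf. (8);
    sigma is a 0-indexed numpy permutation (differences are unchanged)."""
    N = len(sigma)
    i = np.arange(N)
    return 0.5 * np.mean(((sigma - i) / N) ** 2)

for N in [10, 100, 10_000]:
    rev = np.arange(N)[::-1]
    print(N, permutation_energy(rev))   # approaches 1/6 as N grows
```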

Finally, we define the energy of a permuton process \(\pi \) as

$$\begin{aligned} I(\pi ) = \mathbb {E}_{\gamma \sim \pi } \mathcal {E}(\gamma ), \end{aligned}$$
(10)

where the expectation is over paths \(\gamma \) sampled from \(\pi \). We can extend this definition to any process \(\pi \in \mathcal {M}(\mathcal {D})\) by adopting the convention that \(I(\pi ) = + \infty \) if paths sampled from \(\pi \) are not almost surely absolutely continuous. The function I will turn out to correspond to the rate function in large deviation bounds for random permuton processes. It can be checked that I is lower semicontinuous (in the weak topology on \(\mathcal {P}\)) and its level sets \(\{ \pi \in \mathcal {P}: I(\pi ) \le C\}\) are compact.

We will also use the notation

$$\begin{aligned} I^{\Pi }(\pi ) = \mathbb {E}_{\gamma \sim \pi } \mathcal {E}^{\Pi }(\gamma ) \end{aligned}$$
(11)

to denote the approximation of energy of \(\pi \) associated to the finite partition \(\Pi \). The following lemma will be useful in characterizing the large deviation rate function in terms of these approximations

Lemma 2.2

For any process \(\pi \in \mathcal {M}(\mathcal {D})\) we have

$$\begin{aligned} I(\pi ) = \sup \limits _{\Pi } I^{\Pi }(\pi ), \end{aligned}$$

where the supremum is taken over all finite partitions \(\Pi = \{ 0 = t_{0}< t_{1}< \ldots < t_{k} = T \}\).

Proof

Let \(\Pi _n = \left\{ 0< \frac{T}{2^n}< \frac{2T}{2^n}< \ldots < T\right\} \), \(n=0,1,2,\ldots \), be the sequence of dyadic partitions of [0, T]. It is elementary to show that if a path \(\gamma \) is continuous, then \(\mathcal {E}(\gamma ) = \lim \limits _{n \rightarrow \infty } \mathcal {E}^{\Pi _n}(\gamma )\). Note that if \(\Pi '\) is a refinement of \(\Pi \), then we have \(\mathcal {E}^{\Pi }(\gamma ) \le \mathcal {E}^{\Pi '}(\gamma )\), thus \(\mathcal {E}^{\Pi _n}(\gamma ) \rightarrow \mathcal {E}(\gamma )\) monotonically as \(n \rightarrow \infty \). Now we apply the monotone convergence theorem to get the same convergence for the expectations \(\mathbb {E}_{\gamma \sim \pi }\mathcal {E}^{\Pi _n}(\gamma )\). \(\square \)
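
The monotone convergence along dyadic partitions is easy to observe numerically. A sketch (ours, with \(T = 1\)) for a single sine path \(\gamma (t) = R \cos (\pi t + \theta )\), whose energy is \(\mathcal {E}(\gamma ) = \frac{1}{2}\int _0^1 {\dot{\gamma }}(s)^2 \, ds = \frac{\pi ^2 R^2}{4}\):

```python
import numpy as np

def dyadic_energy(gamma, n, T=1.0):
    """E^{Pi_n}(gamma) for the dyadic partition of [0, T] into 2^n pieces."""
    t = np.linspace(0.0, T, 2**n + 1)
    dg, dt = np.diff(gamma(t)), np.diff(t)
    return 0.5 * np.sum(dg**2 / dt)

R, theta = 0.8, 0.3
gamma = lambda t: R * np.cos(np.pi * t + theta)
for n in [1, 2, 4, 8, 12]:
    print(n, dyadic_energy(gamma, n))    # increases monotonically in n
print("limit:", np.pi**2 * R**2 / 4)     # the energy E(gamma)
```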

The interchange process. The interchange process on the interval \(\{1, \ldots , N\}\) is a Markov process in continuous time defined in the following way. Consider particles labelled from 1 to N on a line with N vertices. Each edge has an independent exponential clock that rings at rate 1. Whenever a clock rings, the particles at the endpoints of the corresponding edge swap places. By comparing the initial position of each particle with its position after time t we obtain a random permutation of \(\{ 1, \ldots ,N\}\).

Formally, we define the state space of the process as consisting of permutations \(\eta \in {\mathcal {S}}_N\), with the notation \( \eta = (x_1, \ldots , x_N)\) indicating that the particle with label i is at the position \(x_{i}\), or in other words, \(x_i = \eta (i)\). The dynamics is given by the generator

$$\begin{aligned} (\mathcal {L}f)(\eta ) = \frac{1}{2} N^{\alpha } \sum \limits _{x=1}^{N-1} \left( f(\eta ^{x,x+1}) - f(\eta ) \right) , \end{aligned}$$
(12)

where \(\eta ^{x, x+1}\) is the configuration \(\eta \) with particles at locations x and \(x+1\) swapped and \(\alpha \in (1,2)\) is a fixed parameter (introduced so that we will be able to consider the limit \(N \rightarrow \infty \)). Since we will also be considering variants of this process with modified rates, we will often refer to the process with generator \(\mathcal {L}\) as the unbiased interchange process.

The interchange process defines a probability distribution on permutation-valued paths \(\eta ^N = (\eta ^N_t, 0 \le t \le T)\) for any \(T \ge 0\). Consider now the permutation process \(X^{\eta ^N}\) associated to \(\eta ^N\), that is, sample \(\eta ^N\) according to the interchange process, pick a particle uniformly at random and follow its trajectory in \(\eta ^N\). The distribution \(\mu ^{\eta ^N}\) of \(X^{\eta ^N}\), defined by (5), is then a random element of \(\mathcal {M}(\mathcal {D})\).

The position of a random particle in the interchange process will be distributed as the stationary simple random walk (in continuous time) on the line \(\{1, \ldots , N\}\). If we look at timescales much shorter than \(N^2\), typically each particle will have distance o(N) from its origin, so the permutation obtained at time t such that \(tN^{\alpha } \ll N^{2}\) will be close (in the sense of permutons) to the identity permutation. As mentioned in the introduction, we will be interested in large deviation bounds for rare events such as seeing a nontrivial permutation after a short time.
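
A minimal event-driven simulation of the unbiased dynamics (12) illustrates this scaling. The sketch below (all helper names and parameter choices are ours) runs the process and records the rescaled trajectory of one uniformly chosen particle; since every edge rings at rate \(\frac{1}{2}N^{\alpha }\), the total jump rate is \(\frac{1}{2}(N-1)N^{\alpha }\) and the ringing edge is uniform.

```python
import numpy as np

def interchange_trajectory(N, alpha, T, rng):
    """Simulate the interchange process sped up by N^alpha on {1,...,N}
    and return the (time, position/N) path of one random particle."""
    pos = np.arange(N)            # pos[i] = position of particle i (0-indexed)
    at = np.arange(N)             # at[x]  = label of the particle at position x
    tracked = rng.integers(N)     # uniformly chosen particle to follow
    total_rate = (N - 1) * N**alpha / 2
    t, times, path = 0.0, [0.0], [pos[tracked] / N]
    while True:
        t += rng.exponential(1.0 / total_rate)   # next clock ring
        if t > T:
            break
        x = rng.integers(N - 1)                  # edge {x, x+1} rings
        i, j = at[x], at[x + 1]
        at[x], at[x + 1] = j, i                  # swap the two particles
        pos[i], pos[j] = x + 1, x
        if tracked in (i, j):
            times.append(t)
            path.append(pos[tracked] / N)
    return np.array(times), np.array(path)

rng = np.random.default_rng(2)
times, path = interchange_trajectory(N=200, alpha=1.5, T=1.0, rng=rng)
# At this timescale the rescaled displacement tends to 0 as N grows.
print(len(times), path[-1] - path[0])
```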

2.2 Euler equations and generalized incompressible flows.

Let us now discuss the connection to fluid dynamics and incompressible flows (the discussion here follows [AF09] and [BFS09]). The Euler equations describe the motion of an incompressible fluid in a domain \(D \subseteq \mathbb {R}^d\) in terms of its velocity field u(tx), which is assumed to be divergence-free. The evolution of u is given in terms of the pressure field p

$$\begin{aligned} {\left\{ \begin{array}{ll} \partial _t u + (u \cdot \nabla )u = - \nabla p \quad &{} \text{ in } [0,T] \times D, \\ \textrm{div} \, u = 0 \quad &{} \text{ in } [0,T] \times D,\\ u \cdot n = 0 \quad &{} \text{ on } [0,T] \times \partial D, \end{array}\right. } \end{aligned}$$

where the second equation encodes the incompressibility constraint and the third equation means that u is parallel to the boundary \(\partial D\).

Assuming u is smooth, the trajectory g(tx) of a fluid particle initially at position x is obtained by solving the equation

$$\begin{aligned} {\left\{ \begin{array}{ll} {\dot{g}}(t,x) = u(t, g(t,x)), \\ g(0,x) = x.\\ \end{array}\right. } \end{aligned}$$

Since u is assumed to be divergence-free, the flow map \(\Phi ^{t}_{g} : D \rightarrow D\) given by \(\Phi ^{t}_{g}(x) = g(t,x)\) is a measure-preserving diffeomorphism of D for each \(t \in [0,T]\). This means that \((\Phi ^{t}_{g})_{*} \mu _D = \mu _D\), where from now on by \(f_{*}\) we denote the pushforward map on measures, associated to f, and \(\mu _D\) is the Lebesgue measure inside D. Denoting by \(\textrm{SDiff}(D)\) the space of all measure-preserving diffeomorphisms of D, we can rewrite the Euler equations in terms of g

$$\begin{aligned} {\left\{ \begin{array}{ll} \ddot{g}(t,x) = - \nabla p (t, g(t,x)) \quad &{} \text{ in } [0,T] \times D, \\ g(0,x) = x &{} \text{ in } D,\\ g(t, \cdot ) \in \textrm{SDiff}(D) &{} \text{ for } \text{ each } t \in [0,T]. \end{array}\right. } \end{aligned}$$
(13)

Arnold proposed an interpretation according to which the equation above can be viewed as a geodesic equation on \(\textrm{SDiff}(D)\). Thus one can look for solutions to (13) by considering the variational problem

$$\begin{aligned} \text{ minimize } \quad \frac{1}{2}\int \limits _{0}^{T} \int \limits _{D} |{\dot{g}}(t,x)|^2 \, d\mu _{D}(x) \, dt \end{aligned}$$
(14)

among all paths \(g(t, \cdot ) : [0,T] \rightarrow \textrm{SDiff}(D)\) such that \(g(0, \cdot ) = f\), \(g(T, \cdot ) = h\) for some prescribed \(f, h \in \textrm{SDiff}(D)\) (by right invariance without loss of generality f can be assumed to be the identity). The pressure p then arises as a Lagrange multiplier coming from the incompressibility constraint.

Shnirelman proved ([Shn87]) that in dimensions \(d \ge 3\) the infimum in this minimization problem is not attained in general and in dimension \(d=2\) there exist diffeomorphisms \(h = g(T, \cdot )\) which cannot be connected to the identity map by a path with finite action. This motivated Brenier ([Bre89]) to consider the following relaxation of this problem. With C(D) denoting the space of continuous paths from [0, T] to D and \(\mathcal {M}(C(D))\) the set of probability measures on C(D), the variational problem is

$$\begin{aligned} \text{ minimize } \quad \int \limits _{C(D)} \left( \frac{1}{2}\int \limits _{0}^{T} |{\dot{\gamma }}(t)|^2 \, dt \right) d\pi (\gamma ) \end{aligned}$$
(15)

over all \(\pi \in \mathcal {M}(C(D))\) satisfying the constraints

$$\begin{aligned} {\left\{ \begin{array}{ll} \pi _{0,T} = (id, h)_{*} \mu _D, \\ \pi _t = \mu _D \quad \text{ for } \text{ each } t \in [0,T], \end{array}\right. } \end{aligned}$$
(16)

where \(\pi _{0,T}\), \(\pi _t\) denote the marginals of \(\pi \) at times respectively 0, T and at time t.

Following Brenier, a probability measure \(\pi \in \mathcal {M}(C(D))\) satisfying constraints (16) is called a generalized incompressible flow between the identity id and h. To see that (15) is indeed a relaxation of (14), note that any sufficiently regular path \(g(t, \cdot ) : [0,T] \rightarrow \textrm{SDiff}(D)\), for example one corresponding to a solution of (13), induces a generalized incompressible flow given by \(\pi = (\Phi _g)_{*} \mu _D\), where as before \(\Phi _g(x) = g(\cdot , x)\). As evidenced by the sine curve process mentioned in the introduction, the converse is false – trajectories of particles sampled from a generalized flow can cross each other, or particles starting from the same position can split at a later time, which is not possible for classical, smooth flows. We refer the reader to [Bre08] for an interesting discussion of the physical relevance of this phenomenon.

The problem admits a natural further relaxation in which the target map is “non-deterministic”, in the sense that we have \(\pi _{0,T} = \mu \) with \(\mu \) being an arbitrary probability measure supported on \(D \times D\) and having uniform marginals on each coordinate, not necessarily of the form \(\mu = (id, h)_{*}\mu _{D}\) for some map h. From now on, whenever we refer to problem (15) or to generalized incompressible flows, we will always be considering this more general variant.

The connection between the generalized problem (15) and the original Euler equations (13) is provided by a theorem due to Ambrosio and Figalli ([AF09]), with earlier weaker results by Brenier ([Bre99]). Roughly speaking, they showed that given a measure \(\mu \) with uniform marginals there exists a pressure function p(tx) such that the following holds – one can replace the problem of minimizing the functional (15) over incompressible flows satisfying \(\pi _{0,T} = \mu \) by an easier problem in which the incompressibility constraint is dropped, provided one adds to the functional a Lagrange multiplier given by p. We refer the reader to [AF09, Section 6] for a precise formulation and further results on regularity of p.

In particular, if \(\pi \) is optimal for (15) and the corresponding pressure p is smooth enough, their result implies that almost every path \(\gamma \) sampled from \(\pi \) minimizes the functional

$$\begin{aligned} \gamma \mapsto \int \limits _{0}^{T} \left( \frac{1}{2} |{\dot{\gamma }}(t)|^2 - p(t, \gamma (t)) \right) dt. \end{aligned}$$
(17)

In that case the equation \(\ddot{g}(t,x) = - \nabla p(t, g(t,x))\) from (13) is nothing but the Euler-Lagrange equation for extremal points of the functional (17). We can therefore, at least under some regularity assumptions on p, think of generalized incompressible flows as solutions to (13) in which instead of having a diffeomorphism we assume random initial conditions for each particle.

From now on let us restrict the discussion to \(D = [0,1]\), which will be most directly relevant to the results of this paper. In this case the original problem (14) is somewhat uninteresting, since the only measure-preserving diffeomorphisms of [0, 1] are \(f(x) = x\) and \(f(x) = 1 - x\). However, the relaxed problem (15) is non-trivial and indeed for the target map \(h(x) = 1 - x\) and \(T = 1\) the unique optimal solution is given by the sine curve process.

In this setting, the reader may recognize that generalized incompressible flows are in fact the same objects as permuton processes. The term measure-preserving plans is used in [AF09] for what we call permutons. The functional minimized in (15) is the energy \(I(\pi )\) of a permuton process, defined in (10). In this language the optimization problem we are interested in can be rephrased as follows:

$$\begin{aligned} \text{ find } \inf \limits _{\begin{array}{c} \pi \in \mathcal {P}\\ \pi _{0,T} = \mu \end{array}} I(\pi ), \end{aligned}$$
(18)

where the infimum is over all permuton processes \(\pi \in \mathcal {P}\) satisfying \(\pi _{0,T} = \mu \) for a given permuton \(\mu \in \mathcal {M}([0,1]^2)\).

Generalized solutions to Euler equations. We will say that a permuton process \(\pi \) is a generalized solution to Euler equations if there exists a function \(p : [0,T] \times [0,1] \rightarrow \mathbb {R}\), differentiable in the second variable, such that almost every path \(x : [0,T] \rightarrow [0,1]\) sampled from \(\pi \) satisfies the equation

$$\begin{aligned} {\left\{ \begin{array}{ll} x'(t) = v(t), \\ v'(t) = - \partial _x p(t, x(t)), \\ \end{array}\right. } \end{aligned}$$
(19)

for \(t \in [0,T]\). This is of course equivalent to \(x''(t) = - \partial _x p (t, x(t))\).
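
For instance, for the sine curve process the pressure \(p(t,x) = \frac{x^2}{2}\) turns (19) into the harmonic oscillator \(x'' = -x\), whose solutions \(x_0 \cos t + v_0 \sin t\) on \([0,\pi ]\) become, after the time rescaling \(t = \pi s\), the sine curves from Sect. 1. A numerical sketch (ours, using SciPy's initial value problem solver):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Equation (19) with the sine curve pressure p(t, x) = x^2 / 2:
# x' = v, v' = -dp/dx = -x.
def rhs(t, y):
    x, v = y
    return [v, -x]

x0, v0 = 0.3, -0.4
sol = solve_ivp(rhs, (0.0, np.pi), [x0, v0], dense_output=True,
                rtol=1e-10, atol=1e-12)
t = np.linspace(0.0, np.pi, 5)
exact = x0 * np.cos(t) + v0 * np.sin(t)          # harmonic oscillator solution
print(np.max(np.abs(sol.sol(t)[0] - exact)))     # agreement up to solver tolerance
print(sol.sol(np.pi)[0], -x0)                    # time pi carries x0 to -x0
```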

By the remarks above, if \(\pi \) minimizes the energy in (18) and the associated pressure p is smooth enough, then \(\pi \) is always a generalized solution to Euler equations. However, this is only a necessary condition – for a discussion of corresponding sufficient conditions see [BFS09].

2.3 Proof outline and structure of the paper.

Let us now give a brief outline of the proof strategy for Theorems A and B. For the lower bound, given a process X we construct a perturbation of the interchange process (defined by introducing asymmetric jump rates based on (19)) for which a law of large numbers holds, namely, the distribution of the path of a random particle converges to a deterministic limit (the distribution of X). The large deviation principle is then proved by estimating the Radon–Nikodym derivative between the biased process and the original one.

The key property which makes this construction possible is that the process X satisfies a second order ODE given by (19), so its trajectories are fully specified by the particle’s position and velocity (the latter chosen initially from a mean zero distribution). The biased process is then constructed by endowing each particle with an additional parameter keeping track of its velocity, but we perform an additional change of variables, working not with the velocity itself but with a variable we call color. The advantage of this is that the uniform distribution of colors is stationary when the jump rates are properly chosen, which will greatly facilitate the analysis. An additional technical difficulty arises if the velocity distribution of X is time-dependent or not regular enough near the boundary, in which case we first approximate X by a process with a sufficiently regular and piecewise time-homogeneous velocity distribution.

To prove the law of large numbers we need to show that in the biased interchange process particles’ trajectories behave approximately like independent samples from X. This requires proving that their velocities remain uncorrelated when averaged over time and is accomplished by means of a local mixing result called the one block estimate. It is here that we rely on stationarity of the uniform distribution of colors in the biased process and the fact that X has velocity zero on average.

The strategy for proving the upper bound is somewhat simpler. We consider a family of exponential martingales similar to the one employed in analyzing independent random walks and use the one block estimate to show that the particles’ velocities are typically nonnegatively correlated. This enables us to prove the large deviation upper bound for compact sets and the extension to closed sets is done by proving exponential tightness.

Structure of the paper. The rest of the paper is structured as follows. In Sect. 3 we introduce the change of variables needed to define the process with colors and prove the approximation result for X mentioned above (Proposition 3.7). In Sect. 4 we define the biased interchange process and derive the conditions on its rates which guarantee stationarity. Section 5 contains the proof of the law of large numbers for the biased interchange process (Theorem 5.1). In Sect. 6 we prove two variants of the one block estimate – one needed for the large deviation upper bound (Lemma 6.2) and a more involved one needed for the proof of the law of large numbers (Lemma 5.4). In Sect. 7 these pieces are then used to prove the large deviation lower bound (Theorem 7.3). Section 8 is devoted to the proof of the large deviation upper bound (Theorem 8.4) and is independent of the previous sections (apart from the use of Lemma 6.2). Finally, in Sect. 9 we prove Theorem 1.1 and Theorem 1.2 on relaxed sorting networks.

3 ODEs and Generalized Solutions to Euler Equations

Regularity assumptions and properties of generalized solutions. Suppose \(\pi \) is a generalized solution to the Euler equations (19) and let X be a process with distribution \(\pi \). For the proof of the large deviation lower bound we will need to impose additional regularity assumptions on \(\pi \). For \(t \in [0,T]\) let \(\mu _t\) denote the joint distribution of \((x(t), x'(t))\) when x is sampled according to \(\pi \). In particular, \(\mu _0\) is the joint distribution of the initial conditions of the ODE (19). If \(\Phi ^{t,s}(x,v)\) denotes the solution x(s) of (19) satisfying \((x(t), v(t)) = (x, v)\), then \(\mu _t = \Phi ^{0,t}_{*}\mu _0\), where with a slight abuse of notation \(\Phi ^{0,t}\) stands for the full flow map \((x,v) \mapsto (x(t), x'(t))\).

We will assume that each \(\mu _t\) has a density \(\rho _t(x,v)\) with respect to the Lebesgue measure on \([0,1] \times \mathbb {R}\). For \(x \in [0,1]\) and \(t \in [0,T]\) let \(\mu _{t,x}\) denote the conditional distribution of v, given x, at time t. In addition we assume that for \(x=0\) or 1 the distribution \(\mu _{t,x}\) is a delta mass at 0, as otherwise the process X cannot stay confined to [0, 1] and have mean velocity zero everywhere (see the discussion of incompressibility below).

Let \(F_{t,x}\) denote the cumulative distribution function of \(\mu _{t,x}\) and let \(V_{t}(x, \cdot ) : [0,1] \rightarrow \mathbb {R}\) be the quantile function of \(\mu _{t,x}\), defined for \(x \in [0,1]\) and \(\phi \in (0,1]\) by

$$\begin{aligned} V_{t}(x,\phi ) = \inf \left\{ v \in \mathbb {R}\, | \, F_{t,x}(v) \ge \phi \right\} \end{aligned}$$

and \(V_t(x, 0) = \inf \left\{ v \in \mathbb {R}\, | \, F_{t,x}(v) > 0 \right\} .\) In particular for \(x=0,1\) we have \(V_{t}(x,\phi ) = 0\).

Assumption 3.1

Throughout the paper, we will assume that for a generalized solution to Euler equations \(\pi \) the following properties are satisfied:

  (1) the pressure function \((t,x) \mapsto p(t,x)\) in (19) is measurable in t and differentiable in x, with the derivative \(\partial _x p(t,x)\) Lipschitz continuous in x (with the Lipschitz constant uniform in t);

  (2) there exists a compact set \(K \subseteq [0,1] \times \mathbb {R}\) such that for each \(t \in [0,T]\) the density \(\rho _t\) is supported in K;

  (3) for \(t \in [0,T], x \in [0,1]\) the support of \(\mu _{t,x}\) is a connected interval in \(\mathbb {R}\);

  (4) the density \(\rho _t\) is continuously differentiable in t, x and v for each \(t \in [0,T]\) and x, v in the interior of the support of \(\rho _t\).

Let us comment on the relevance of these assumptions. Assumption (1) will guarantee uniqueness of solutions to (19). Assumption (2) implies that the velocity of a particle moving along a path sampled from \(\pi \) stays uniformly bounded in time. Assumption (3) implies that for any \(x \in (0,1)\) and \(\phi \in [0,1]\) we have \(F_{t,x}(V_{t}(x, \phi )) = \phi \), i.e., \(V_t(x, \cdot )\) is the inverse function of \(F_{t,x}\). Assumptions (3) and (4) imply that \(V_{t}(x,\phi )\) is a continuous function of t, x, \(\phi \) and it is continuously differentiable in all variables for \(x \in (0,1)\).

Note that for \(V_t(x,\phi )\) to be differentiable at \(\phi = 0,1\), the distribution function \(F_{t,x}\) necessarily has to be non-differentiable at corresponding v such that \(F_{t,x}(v) = \phi \). This is why we can require the density \(\rho _t\) to be smooth only in the interior of its support and not at the boundary.

From now on we assume that \(\pi \) is a fixed generalized solution to Euler equations, satisfying Assumptions (3.1). Almost every path \(x : [0,T] \rightarrow [0,1]\) sampled from \(\pi \) satisfies the ODE

$$\begin{aligned} {\left\{ \begin{array}{ll} x'(t) = v(t), \\ v'(t) = - \partial _x p(t, x(t)). \\ \end{array}\right. } \end{aligned}$$
(20)

Note that since \(\pi \) is a permuton process, each measure \(\mu _t\) satisfies the incompressibility condition, meaning that its projection onto the first coordinate is equal to the uniform measure on [0, 1]. This is equivalent to the property that for any test function \(f : [0,1] \rightarrow \mathbb {R}\) we have

$$\begin{aligned} \int \limits f(x) \, d\mu _t(x,v) = \int \limits _{0}^{1} f(x) \, dx. \end{aligned}$$

An important consequence of the incompressibility assumption is that under \(\mu _t\) the velocity has mean zero at each x, that is, we have the following

Lemma 3.2

For any \(t \in [0,T]\) and \(x \in [0,1]\) we have

$$\begin{aligned} \int v \, d\mu _{t,x}(v) = 0. \end{aligned}$$

Proof

Consider any test function \(f : [0,1] \rightarrow \mathbb {R}\) and write

$$\begin{aligned} \int \limits f(x) \, d\mu _{t+s}(x,v) = \int \limits f(x) \, d(\Phi ^{t,t+s}_{*}\mu _t) (x,v) = \int \limits f(\Phi ^{t,t+s}(x,v)) \, d\mu _{t}(x,v). \end{aligned}$$

By incompressibility the integral above is always equal to \(\int \limits _{0}^{1} f(x) \, dx \), in particular does not depend on time. On the other hand its derivative with respect to s is

$$\begin{aligned} \frac{d}{ds} \int \limits f(x) \, d\mu _{t+s}(x,v)&= \frac{d}{ds} \int \limits f(\Phi ^{t,t+s}(x,v)) \, d\mu _{t}(x,v)\\&= \int \limits f'(\Phi ^{t,t+s}(x,v)) \frac{d\Phi ^{t,t+s}}{ds}(x,v) \, d\mu _{t}(x,v). \end{aligned}$$

Since \(\Phi ^{t,t+s}(x,v)\vert _{s=0} = x\) and \(\frac{d\Phi ^{t,t+s}}{ds}(x,v)\vert _{s=0} = v\), by evaluating the derivative at \(s = 0\) we arrive at \(\int \limits f'(x) v \, d\mu _{t}(x,v) = 0\). Since \(\int g(x,v) \, d\mu _t(x,v) = \int g(x,v) \, d\mu _{t,x}(v) dx\) for any measurable g and f was an arbitrary test function, the claim of the lemma holds for almost every x. Since we have assumed that \(\mu _{t}\) has a continuous density, the claim in fact holds for all x, which ends the proof. \(\square \)

We will also make use of an explicit evolution equation that the densities \(\rho _t\) have to satisfy. This is the content of the following lemma.

Lemma 3.3

For any \(t \in [0,T]\) and x, v in the interior of the support of \(\rho _t\) we have

$$\begin{aligned} \frac{\partial \rho _t}{\partial t} (x,v) = - v \frac{\partial \rho _t}{\partial x} (x,v) + \partial _x p(t, x) \frac{\partial \rho _t}{\partial v} (x,v). \end{aligned}$$

Proof

Let \(f : [0,1] \times \mathbb {R}\rightarrow \mathbb {R}\) be any test function and consider the integral

$$\begin{aligned} I_{t+s} = \int f(x,v) \, d\mu _{t+s}(x,v). \end{aligned}$$

On the one hand, its derivative with respect to s is equal to

$$\begin{aligned} \frac{d}{ds}I_{t+s}&= \frac{d}{ds} \int f(x,v) \, d\mu _{t+s}(x,v)\\&=\frac{d}{ds} \int f\left( \Phi ^{t,t+s}(x,v),\frac{d\Phi ^{t,t+s}}{ds}(x,v)\right) \rho _t (x,v) \, dx \, dv \\&= \int \bigg [ \frac{\partial f}{\partial x}\left( \Phi ^{t,t+s}(x,v),\frac{d\Phi ^{t,t+s}}{ds}(x,v)\right) \frac{d\Phi ^{t,t+s}}{ds}(x,v) \\&\quad + \frac{\partial f}{\partial v}\left( \Phi ^{t,t+s}(x,v),\frac{d\Phi ^{t,t+s}}{ds}(x,v)\right) \frac{d^2\Phi ^{t,t+s}}{ds^2}(x,v) \bigg ] \rho _t (x,v) \, dx \, dv. \end{aligned}$$

Since \(\Phi ^{t,t+s}(x,v)\) is a solution to (20), we have \(\frac{d\Phi ^{t,t+s}}{ds}(x,v)\big \vert _{s=0} = v\) and \(\frac{d^2\Phi ^{t,t+s}}{ds^2}(x,v)\big \vert _{s=0} = -\partial _x p(t, x)\), which gives us

$$\begin{aligned} \frac{d}{ds}I_{t+s}\Big \vert _{s=0} = \int \left( \frac{\partial f}{\partial x}(x,v)v - \frac{\partial f}{\partial v}(x,v)\partial _x p(t, x) \right) \rho _t (x,v) \, dx \, dv. \end{aligned}$$

Performing integration by parts with respect to x for the first term and with respect to v for the second term gives (noting that f has compact support so the boundary terms vanish)

$$\begin{aligned} \frac{d}{ds}I_{t+s}\Big \vert _{s=0} = -\int f(x,v)v \frac{\partial \rho _t}{\partial x} (x,v) \, dx \, dv + \int f(x,v)\partial _x p(t, x) \frac{\partial \rho _t}{\partial v} (x,v) \, dx \, dv. \end{aligned}$$

On the other hand, we have

$$\begin{aligned} \frac{d}{ds}I_{t+s}&= \frac{d}{ds} \int f(x,v) \, d\mu _{t+s}(x,v) = \frac{d}{ds} \int f(x,v) \rho _{t+s} (x,v) \, dx \, dv\\&= \int f(x,v) \frac{\partial \rho _{t+s}}{\partial s} (x,v) \, dx \, dv, \end{aligned}$$

so

$$\begin{aligned} \frac{d}{ds}I_{t+s}\Big \vert _{s=0} = \int f(x,v) \frac{\partial \rho _{t}}{\partial t} (x,v) \, dx \, dv \end{aligned}$$

and thus

$$\begin{aligned} \int f(x,v) \left( - v \frac{\partial \rho _t}{\partial x} (x,v) + \partial _x p(t, x) \frac{\partial \rho _t}{\partial v} (x,v) - \frac{\partial \rho _{t}}{\partial t} (x,v) \right) dx \, dv = 0. \end{aligned}$$

Since the test function f was arbitrary, the equation from the statement of the lemma must hold for all t and all x, v in the interior of the support of \(\rho _t\). \(\square \)

The colored trajectory process. Let \(X = (X_t, 0 \le t \le T)\) be the permuton process with distribution \(\pi \). For the large deviation lower bound we will need to construct a suitable interacting particle system in which the behavior of a random particle mimics that of the permuton process X. A crucial ingredient will be a property analogous to Lemma 3.2, i.e., having a velocity distribution whose mean is locally zero. Instead of working with the velocity v, whose distribution \(\rho _t(x,v)\) at a given site x may change in time, it will be more convenient to perform a change of variables and use another variable \(\phi \), which we call color, whose distribution will be invariant in time.

Recall that under Assumptions (3.1) the distribution function \(F_{t,x}(\cdot )\) and the quantile function \(V_{t}(x, \cdot )\) are related by

$$\begin{aligned} {\left\{ \begin{array}{ll} F_{t,x}(V_{t}(x, \phi )) = \phi , \\ V_t(x, F_{t,x}(v)) = v, \end{array}\right. } \end{aligned}$$
(21)

for any \(t \in [0,T]\), \(x \in (0,1)\), \(\phi \in [0,1]\), \(v \in \textrm{supp}\, \mu _{t,x}\).

The reason for introducing the variable \(\phi \) is the following elementary property – if \(\phi \) is sampled from the uniform distribution on [0, 1], then \(V_t(x, \phi )\) is distributed according to \(\mu _{t,x}\). Thus instead of working with (xv) variables in the ODE (20), where the distribution of v evolves in time, we can set up an ODE for x and \(\phi \) such that the joint distribution of \((x, \phi )\) will be uniform on \([0,1]^2\) at each time. The velocity v and its distribution can then be recovered via the equation \(v = V_t(x, \phi )\).
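
As a concrete example, for the sine curve process (in its \([-1,1]\) parametrization) a direct computation from the Archimedean law, which is our own and not spelled out in the text, shows that conditionally on the position x the velocity follows the arcsine law on \([-\pi \sqrt{1-x^2}, \pi \sqrt{1-x^2}]\), so that \(V_t(x, \phi ) = \pi \sqrt{1-x^2} \sin \left( \pi (\phi - \frac{1}{2})\right) \) independently of t; the mean zero property (cf. Lemma 3.2) is then visible directly. A simulation check (ours):

```python
import numpy as np

# Conjectured quantile function of the velocity at position x for the
# sine curve process (our computation from the Archimedean law):
V = lambda x, phi: np.pi * np.sqrt(1 - x**2) * np.sin(np.pi * (phi - 0.5))

rng = np.random.default_rng(3)
n = 400_000
theta = rng.uniform(0, 2 * np.pi, n)
R = np.sqrt(1 - rng.uniform(0, 1, n) ** 2)
X, W = R * np.cos(theta), -np.pi * R * np.sin(theta)   # position and velocity

x0 = 0.25
sel = np.abs(X - x0) < 0.01                    # condition on position near x0
phi = np.array([0.1, 0.3, 0.5, 0.7, 0.9])
print(np.quantile(W[sel], phi))                # empirical velocity quantiles ...
print(V(x0, phi))                              # ... match V(x0, .)
print(V(x0, np.linspace(0, 1, 10001)).mean())  # mean zero in phi, cf. (24)
```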

Let (x(t), v(t)) be a solution to (20) such that \(x(t) \ne 0,1\) and let

$$\begin{aligned} \phi (t) = F_{t, x(t)}(v(t)). \end{aligned}$$

Let us derive the ODE that \((x(t), \phi (t))\) satisfies. Since (x(t), v(t)) is a solution of (20), we have

$$\begin{aligned} \phi '(t) =&\frac{\partial F_{t, x(t)}}{\partial t}(v(t)) + \frac{\partial F_{t, x(t)}}{\partial x}(v(t))x'(t) + \frac{\partial F_{t, x(t)}}{\partial v}(v(t)) v'(t) \\ =&\frac{\partial F_{t, x(t)}}{\partial t}(v(t)) + \frac{\partial F_{t, x(t)}}{\partial x}(v(t))v(t) + \rho _t(x(t), v(t)) \left[ -\partial _x p (t, x(t)) \right] . \end{aligned}$$

Lemma 3.3 implies that

$$\begin{aligned} \frac{\partial F_{t, x(t)}}{\partial t} (v(t)) = - \int \limits _{-\infty }^{v(t)} w \frac{\partial \rho _t}{\partial x} (x(t),w) \, dw + \left[ \partial _x p(t, x(t)) \right] \rho _t (x(t),v(t)), \end{aligned}$$

which gives

$$\begin{aligned} \phi '(t) = \frac{\partial F_{t, x(t)}}{\partial x}(v(t))v(t) - \int \limits _{-\infty }^{v(t)} w \frac{\partial \rho _t}{\partial x} (x(t),w) \, dw \end{aligned}$$

and upon integrating by parts in the last integral (the boundary term at \(w = v(t)\) cancels the first summand) we obtain

$$\begin{aligned} \phi '(t) = \int \limits _{-\infty }^{v(t)} \frac{\partial F_{t,x(t)}}{\partial x}(w) \, dw. \end{aligned}$$
(22)

Now, differentiating (21) with respect to x and \(\phi \) gives

$$\begin{aligned} {\left\{ \begin{array}{ll} \frac{\partial F_{t,x}}{\partial x}(V_t(x, \phi )) + \rho _t(x,V_t(x,\phi )) \frac{\partial V_t}{\partial x}(x, \phi ) = 0, \\ \rho _t(x, V_t(x,\phi )) \frac{\partial V_t}{\partial \phi }(x, \phi ) = 1. \end{array}\right. } \end{aligned}$$

Also by (21) we have \(v(t) = V_t(x(t), \phi (t))\), so a change of variables \(w = V_t(x(t), \psi )\) in (22) yields

$$\begin{aligned} \phi '(t) = R_t(x(t), \phi (t)), \end{aligned}$$

where \(R_t(x,\phi ) = - \int \limits _{0}^{\phi } \frac{\partial V_t}{\partial x}(x, \psi ) \, d\psi \).

Thus we have shown that \((x(t), \phi (t))\) satisfies the ODE

$$\begin{aligned} {\left\{ \begin{array}{ll} x'(t) = V_t(x(t), \phi (t)), \\ \phi '(t) = R_t(x(t), \phi (t)). \\ \end{array}\right. } \end{aligned}$$
(23)

If \(x(t) \ne 0,1\), this equation is equivalent to (20), i.e., \((x(t), \phi (t))\) is a solution of (23) with initial conditions \((x(0), \phi (0)) = (x_0, \phi _0)\) if and only if (x(t), v(t)) is a solution of (20) with initial conditions \((x(0), v(0))= (x_0, V_0(x_0, \phi _0))\). We also note that Lemma 3.2 expressed in terms of \((x, \phi )\) variables states that for each \(t \in [0,T]\) and \(x \in [0,1]\) we have

$$\begin{aligned} \int \limits _{0}^{1} V_t(x, \psi ) \, d\psi = 0. \end{aligned}$$
(24)
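Indeed, substituting \(\psi = F_{t,x}(v)\), so that \(d\psi = \rho _t(x,v) \, dv\) and \(V_t(x, \psi ) = v\) by (21), we obtain

$$\begin{aligned} \int \limits _{0}^{1} V_t(x, \psi ) \, d\psi = \int v \, \rho _t(x,v) \, dv = 0, \end{aligned}$$

where the last equality is the statement of Lemma 3.2.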

From now on we work exclusively with (23). We will need two approximations for the interacting particle system analysis later on. One is necessitated by the fact that the function \(V_t(x, \phi )\) might not be smooth with respect to x at the boundaries \(x = 0,1\) (this happens, for example, for the sine curve process). We will therefore replace the function by its smooth approximation in a \(\beta \)-neighborhood of the boundary and in the end take \(\beta \rightarrow 0\). The other approximation consists in dividing the time interval [0, T] into intervals of length \(\delta \) and approximating \(V_t(x, \phi )\) for given \(x, \phi \) by a piecewise-constant function of t. This will enable us to give a simple stationarity condition for the corresponding interacting particle system and in the end take \(\delta \rightarrow 0\) as well.

Let \(\beta \in (0,\frac{1}{4})\) and let \(V_{t}^{\beta }(x, \phi )\) be a function with the following properties:

  (a) \(V_{t}^{\beta }(x, \phi )\) is continuously differentiable for every \(t \in [0,T]\), \(x \in [0,1]\), \(\phi \in [0,1]\);

  (b) \(V_{t}^{\beta }(x, \phi ) = V_{t}(x, \phi )\) for \(x \in \left[ \beta , 1 - \beta \right] \) and \(V_{t}^{\beta }(0,\phi ) = V_{t}^{\beta }(1,\phi ) = 0\);

  (c) for each \(x \in [0,1]\) we have \(\int \limits _{0}^{1} V_{t}^{\beta }(x, \psi ) \, d\psi = 0\);

  (d) \(|V_{t}^{\beta }(x, \phi )| \le |V_{t}(x, \phi )| + 1\);

  (e) \(\lim \limits _{\beta \rightarrow 0} \int \limits _{0}^{1}\int \limits _{0}^{1} |V_{t}^{\beta }(x, \phi )|^2 \, dx \, d\phi = \int \limits _{0}^{1}\int \limits _{0}^{1} |V_{t}(x, \phi )|^2 \, dx \, d\phi \).

The existence of such a function \(V_{t}^{\beta }\) is proved at the end of this section. By \((x^{\beta }(t), \phi ^{\beta }(t))\) we will denote the solution to the ODE

$$\begin{aligned} {\left\{ \begin{array}{ll} x'(t) = V^{\beta }_{t}(x(t), \phi (t)), \\ \phi '(t) = R^{\beta }_{t}(x(t), \phi (t)). \\ \end{array}\right. } \end{aligned}$$
(25)

Take any \(\delta > 0\) (to simplify notation we will assume that T is an integer multiple of \(\delta \); this will not influence the argument in any substantial way) and consider a partition \(0 = t_0< t_1< \ldots < t_M = T\) of [0, T] into \(M = \frac{T}{\delta }\) intervals of length \(\delta \), with \(t_k = k \delta \). Let \(V^{\beta , \delta }(t, x,\phi )\) be the piecewise-constant in time approximation of \(V^{\beta }_{t}(x, \phi )\), defined by

$$\begin{aligned} V^{\beta , \delta }(t, x,\phi ) = V^{\beta }_{t_k}(x,\phi ) \quad \text{ for } t \in [t_k, t_{k+1}), \, \, k=0,1,\ldots ,M - 1. \end{aligned}$$
(26)

We can now define the piecewise-stationary process which will be our main tool in subsequent arguments. Consider the ODE

$$\begin{aligned} {\left\{ \begin{array}{ll} y'(t) = V^{\beta , \delta }(t, y(t),\phi (t)), \\ \phi '(t) = R^{\beta , \delta }(t, y(t), \phi (t)), \\ \end{array}\right. } \end{aligned}$$
(27)

where

$$\begin{aligned} R^{\beta , \delta }(t, y, \phi ) = - \int \limits _{0}^{\phi } \frac{\partial V^{\beta , \delta }}{\partial y}(t, y, \psi ) \, d\psi . \end{aligned}$$

Solutions to (27) exist and are unique for any initial conditions, provided we interpret \((y'(t), \phi '(t))\) above as right-hand derivatives at \(t = 0, t_1, t_2, \ldots , t_{M-1}\) (we adopt this convention from now on).

Let \(P^{\beta , \delta } = \left( (X^{\beta , \delta }_{t}, \Phi ^{\beta , \delta }_{t}), 0 \le t \le T \right) \) be the stochastic process with values in \([0,1]^2\) with the following distribution: choose \((X^{\beta , \delta }_{0}, \Phi ^{\beta , \delta }_{0})\) uniformly at random from \([0,1]^2\) and then take \((X^{\beta , \delta }_{t}, \Phi ^{\beta , \delta }_{t}) = (y(t), \phi (t))\), where \((y, \phi )\) is the solution of the system (27) with initial conditions given by \((y(0), \phi (0)) = (X^{\beta , \delta }_{0}, \Phi ^{\beta , \delta }_{0})\). We will call this process the colored trajectory process associated to (27).

We also define the process \(P^{\beta } = \left( (X^{\beta }_{t}, \Phi ^{\beta }_{t}), 0 \le t \le T \right) \), which is obtained in the same way as \(P^{\beta , \delta }\) except that we follow solutions to (25) instead of (27), i.e., make no piecewise approximation in time of \(V_{t}^{\beta }\).

The key property of the process \(P^{\beta , \delta }\) is the following

Lemma 3.4

For each \(t \in [0,T]\) the distribution of \((X^{\beta , \delta }_t, \Phi ^{\beta , \delta }_t)\) is uniform on \([0,1]^2\).

Proof

First we show that the process stays confined to \([0,1]^2\). Because of uniqueness of solutions to (27) it is enough to show that if a solution starts in the interior of \([0,1]^2\), it never reaches the boundary, or, equivalently, that if a solution is at the boundary at some t, it is actually at the boundary for all \(s \in [0,T]\). If \(y(t) = 0\) or 1 for some t, then \(y'(t) = 0\), since \(V^{\beta , \delta }(t, 0,\phi ) = V^{\beta , \delta }(t, 1,\phi ) = 0\) for any \(\phi \). By uniqueness of solutions we then have \(y(t) \equiv 0\) or 1. If \(\phi (t) = 0\) for some t, then \(R^{\beta , \delta }(t, y, 0) = 0\) regardless of y, so as before \(\phi '(t) = 0\) and \(\phi (t) \equiv 0\). Finally, if \(\phi (t) = 1\), then using the property (c) of the function \(V^{\beta }_{t}(x, \phi )\) we have

$$\begin{aligned} R^{\beta , \delta }(t,y,1) = - \int \limits _{0}^{1} \frac{\partial V^{\beta , \delta }}{\partial y}(t, y, \psi ) \, d\psi = - \frac{\partial }{\partial y} \int \limits _{0}^{1} V^{\beta , \delta } (t, y, \psi ) \, d\psi = 0, \end{aligned}$$

so as before \(\phi '(t) = 0\) and \(\phi (t) \equiv 1\).

Now we observe that the form of \(V^{\beta , \delta }\) and \(R^{\beta , \delta }\) in (27) implies that the vector field \((V^{\beta , \delta }(t,\cdot ,\cdot ),R^{\beta , \delta }(t,\cdot ,\cdot ))\) is divergence-free at each t; indeed,

$$\begin{aligned} \frac{\partial V^{\beta , \delta }}{\partial y}(t,y,\phi ) + \frac{\partial R^{\beta , \delta }}{\partial \phi }(t,y,\phi ) = \frac{\partial V^{\beta , \delta }}{\partial y}(t,y,\phi ) - \frac{\partial V^{\beta , \delta }}{\partial y}(t,y,\phi ) = 0, \end{aligned}$$

so by Liouville’s theorem the uniform measure on \([0,1]^2\) is invariant for the corresponding flow map. \(\square \)
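For intuition, the invariance in Lemma 3.4 can also be checked numerically for a concrete field. In the sketch below the velocity field \(V(x,\phi ) = \sin (\pi x)(\phi - \frac{1}{2})\) is an assumed example satisfying properties (b) and (c) (it vanishes at \(x = 0,1\) and has zero \(\phi \)-average), not a field constructed from a permuton process; the corresponding R is computed in closed form.

```python
import numpy as np

# Illustrative check of Lemma 3.4 with an assumed velocity field:
# V(x, phi) = sin(pi x)(phi - 1/2) and R = -int_0^phi dV/dx dpsi.
def V(x, phi):
    return np.sin(np.pi * x) * (phi - 0.5)

def R(x, phi):
    return 0.5 * np.pi * np.cos(np.pi * x) * phi * (1.0 - phi)

rng = np.random.default_rng(1)
x, phi = rng.uniform(size=(2, 200_000))   # uniform initial data on [0,1]^2
dt, steps = 1e-3, 1000
for _ in range(steps):                    # explicit Euler integration of (27)
    x, phi = x + dt * V(x, phi), phi + dt * R(x, phi)
# Both marginals should stay approximately uniform (bin frequencies near 0.1):
print(np.histogram(x, bins=10, range=(0, 1))[0] / x.size)
print(np.histogram(phi, bins=10, range=(0, 1))[0] / phi.size)
```

Since the field is divergence-free, the histograms remain flat up to the \(O(dt)\) error of the Euler scheme.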

In particular, the process \(X^{\beta , \delta } = (X^{\beta , \delta }_t, 0 \le t \le T)\) is a permuton process. Crucially, we can couple it to the process X in a natural way. Consider \((x_0, \phi _0)\) chosen uniformly at random from \([0,1]^2\) and take \((x(0), v(0)) = (x_0, V_{0}(x_0, \phi _0))\), resp. \((y(0), \phi (0)) = (x_0, \phi _0)\), as initial conditions for (23), resp. (27). By definition of \(V_0(x, \phi )\), the pair (x(0), v(0)) has distribution given by \(\mu _0\), so indeed the pair of solutions (x(t), y(t)) corresponding to the initial conditions above defines a coupling of X and \(X^{\beta , \delta }\). From now on X and \(X^{\beta , \delta }\) are always assumed to be coupled in this way.

It is readily seen that the statements above also hold for \(P^{\beta }\) instead of \(P^{\beta , \delta }\), hence with a slight abuse of notation we can allow \(\delta = 0\) and write \(P^{\beta , 0} = P^{\beta }\), \(X^{\beta ,0} = X^{\beta }\) etc.

Our goal in the remainder of this section is to show that, as \(\beta , \delta \rightarrow 0\), the processes X and \(X^{\beta , \delta }\) typically stay close to each other and have approximately the same Dirichlet energy, so in the probabilistic part of the arguments it will be enough to work with the process \((X^{\beta , \delta }, \Phi ^{\beta , \delta })\), which is more convenient thanks to piecewise stationarity.

First we prove a simple lemma, showing that \(X^{\beta }\) is unlikely to ever be close to the boundary (so that approximation of X with \(X^{\beta }\) is meaningful as \(\beta \rightarrow 0\)).

Lemma 3.5

Let \(\mathbb {P}\) denote the law of the process \(X^{\beta }\). Let

$$\begin{aligned} B^{\beta } = \left\{ \exists t \in [0,T] \, X^{\beta }_{t} \notin [\beta , 1 - \beta ] \right\} . \end{aligned}$$

We have

$$\begin{aligned} \mathbb {P}\left( B^{\beta } \right) \xrightarrow {\beta \rightarrow 0} 0. \end{aligned}$$

Proof

We will prove that \(X^{\beta }_{t} \notin [0, \beta ]\) with high probability as \(\beta \rightarrow 0\) (the proof for \([1-\beta , 1]\) is analogous). Suppose that y is a solution of (27) with initial condition \(y(0) \notin [0, 2\beta ]\) and that \(y(t) \in [0, \beta ]\) for some \(t \in [0,T]\). Then there exists a time interval \([s,s']\) such that \(y(s) = 2\beta \), \(y(s') = \beta \) and \(y(u) \in [\beta , 2 \beta ]\) for every \(u \in [s,s']\). Without loss of generality we can assume that \([s,s'] \subseteq [t_k, t_{k+1})\) for some k (the other case is easily dealt with by further subdividing \([\beta , 2 \beta ]\) into two equal subintervals and repeating the argument for each of them). By the mean value theorem

$$\begin{aligned} |y(s) - y(s')| = (s' - s) |y'(w)| \end{aligned}$$

for some \(w \in [s,s']\). For \(x \in [\beta , 2\beta ]\) we have \(V^{\beta , \delta }(w, x, \phi ) = V^{\beta }_{t_k}(x,\phi ) = V_{t_k}(x,\phi )\), so \(y'(w) = V_{t_k}(y(w), \phi (w))\). Since \(y(w) \le 2\beta \) and \(V_{t_k}(x,\phi )\) is continuous at \(x=0\), we have \(|y'(w)| \le f(\beta )\) for some function f (depending only on V) satisfying \(\lim \limits _{\beta \rightarrow 0} f(\beta ) = 0\). As \(|y(s) - y(s')| = \beta \), altogether this implies that \(s'- s \ge \frac{\beta }{f(\beta )}\), i.e., if the process \(X^{\beta }\) starts outside \([0, 2\beta ]\), it has to spend time at least \(\frac{\beta }{f(\beta )}\) before it reaches \([0, \beta ]\). Thus

$$\begin{aligned} \int \limits _{0}^{T} \mathbb {1}_{\{ X^{\beta }_{s} \in [0, \beta ] \}} \, ds \ge \frac{\beta }{f(\beta )} \mathbb {1}_{\{\exists t \in [0,T] \, X^{\beta }_{t} \in [0,\beta ]\}} \mathbb {1}_{\{X^{\beta }_{0} \notin [0,2\beta ]\}}. \end{aligned}$$

Taking expectation yields

$$\begin{aligned} \mathbb {E}\int \limits _{0}^{T} \mathbb {1}_{\{ X^{\beta }_{s} \in [0, \beta ] \}} \, ds \ge \frac{\beta }{f(\beta )} \mathbb {P}\left( \{\exists t \in [0,T] \, X^{\beta }_{t} \in [0,\beta ]\} \cap \{X^{\beta }_{0} \notin [0,2\beta ]\} \right) . \end{aligned}$$

Since \(X^{\beta }\) is a permuton process, \(X^{\beta }_s\) has uniform distribution for each s, which gives

$$\begin{aligned} \mathbb {E}\int \limits _{0}^{T} \mathbb {1}_{\{ X^{\beta }_{s} \in [0, \beta ] \}} \, ds = \int \limits _{0}^{T} \mathbb {E}\mathbb {1}_{\{ X^{\beta }_{s} \in [0, \beta ] \}} \, ds = \int \limits _{0}^{T} \mathbb {P}\left( X^{\beta }_{s} \in [0, \beta ] \right) ds = T \beta . \end{aligned}$$

Together with the inequality above this implies

$$\begin{aligned} T \beta \ge \frac{\beta }{f(\beta )} \left( \mathbb {P}\left( \exists t \in [0,T] \, X^{\beta }_{t} \in [0,\beta ] \right) - \mathbb {P}(X^{\beta }_{0} \in [0, 2\beta ])\right) . \end{aligned}$$

Since \(X^{\beta }_{0}\) has uniform distribution, we have \(\mathbb {P}(X^{\beta }_{0} \in [0, 2\beta ]) = 2\beta \). Thus

$$\begin{aligned} \mathbb {P}\left( \exists t \in [0,T] \, X^{\beta }_{t} \in [0,\beta ] \right) \le 2 \beta + T f(\beta ). \end{aligned}$$

Since \(f(\beta ) \rightarrow 0\) as \(\beta \rightarrow 0\), the claim is proved. \(\square \)

Proposition 3.6

Fix \(\beta \in (0, \frac{1}{4})\) and \((x_0, \phi _0) \in [0,1]^2\). Let \((x^{\beta }(t), \phi ^{\beta }(t))\), resp. \((x^{\beta , \delta }(t), \phi ^{\beta , \delta }(t))\), be the solution to (25), resp. (27), with initial conditions \((x_0, \phi _0)\). We have

$$\begin{aligned}&\sup \limits _{t \in [0,T]} | x^{\beta , \delta }(t) - x^{\beta }(t)| \xrightarrow {\delta \rightarrow 0} 0, \\&\sup \limits _{t \in [0,T]} | \phi ^{\beta , \delta }(t) - \phi ^{\beta }(t)| \xrightarrow {\delta \rightarrow 0} 0. \end{aligned}$$

Proof

The statement follows from continuous dependence of solutions to an ODE on parameters, see e.g., [CL55, Theorem 4.2]. Denoting \(V^{\beta , 0}(t, y, \phi ) = V^{\beta }_t(y, \phi )\), \(R^{\beta , 0}(t, y, \phi ) = R^{\beta }_t(y, \phi )\), we only need to check that for \(f(t, y, \phi , \delta ) = V^{\beta , \delta }(t,y,\phi )\), \(g(t, y, \phi , \delta ) = R^{\beta , \delta }(t, y, \phi )\) we have

  (1) \(f(\cdot , y, \phi , \delta )\) and \(g(\cdot , y, \phi , \delta )\) are measurable on [0, T];

  (2) for any fixed \(t \in [0,T]\) and \(\delta > 0\), the functions \(f(t,\cdot ,\cdot ,\delta )\) and \(g(t,\cdot ,\cdot ,\delta )\) are continuous in \((y, \phi )\);

  (3) for any fixed \(t \in [0,T]\), the functions \(f(t,\cdot ,\cdot ,\cdot )\) and \(g(t,\cdot ,\cdot ,\cdot )\) are continuous in \((y,\phi ,\delta )\) at \(\delta = 0\);

  (4) \(f(t,y,\phi ,\delta )\) and \(g(t,y,\phi ,\delta )\) are uniformly bounded.

Properties (1), (2) and (4) follow directly from our regularity assumptions about \(V^{\beta , \delta }(t,y,\phi )\) (in the case of \(R^{\beta , \delta }(t,y,\phi )\) we use continuity of \(\frac{\partial V^{\beta , \delta }}{\partial y}(t, y, \phi )\)). Property (3) follows from pointwise convergence \(f(t,y,\phi ,\delta ) \xrightarrow {\delta \rightarrow 0} f(t,y,\phi ,0)\) and equicontinuity of \(\{f(t,y,\phi ,\delta )\}_{\delta \ge 0}\) in \((y, \phi )\), which in turn follows from uniform continuity of \(V^{\beta }_{t}(y, \phi )\) in t, y and \(\phi \). The argument for \(g(t,y,\phi , \delta )\) is analogous (again, using uniform continuity of \(\frac{\partial V^{\beta , \delta }}{\partial y}(t, y, \phi )\)). \(\square \)

Now we can prove the main result of this section, which states that the trajectories of the process X and its energy can be approximated by those of the process \(X^{\beta , \delta }\).

Proposition 3.7

Let \(\pi \in \mathcal {M}(\mathcal {D})\) be the distribution of the process X and let \(\pi ^{\beta , \delta } \in \mathcal {M}(\mathcal {D})\) be the distribution of the process \(X^{\beta , \delta }\). Then we have

$$\begin{aligned} \lim \limits _{\beta \rightarrow 0} \lim \limits _{\delta \rightarrow 0} d_{{\mathcal {W}}}^{sup}(\pi , \pi ^{\beta , \delta }) = 0, \end{aligned}$$

where \(d_{{\mathcal {W}}}^{sup}\) is the Wasserstein distance associated to the supremum norm on \(\mathcal {D}\).

Furthermore,

$$\begin{aligned} \lim \limits _{\beta \rightarrow 0} \lim \limits _{\delta \rightarrow 0} I(\pi ^{\beta , \delta }) = I(\pi ), \end{aligned}$$

where \(I(\mu )\) is the energy of the process \(\mu \) defined in (10).

Proof

For the first convergence it is enough to show that \(\mathbb {E}\left\| X - X^{\beta , \delta } \right\| _{sup} \rightarrow 0\) in the coupling between X and \(X^{\beta , \delta }\) considered before. We have

$$\begin{aligned} \mathbb {E}\left\| X - X^{\beta , \delta } \right\| _{sup} \le \mathbb {E}\left\| X - X^{\beta } \right\| _{sup} + \mathbb {E}\left\| X^{\beta } - X^{\beta , \delta } \right\| _{sup}. \end{aligned}$$

Let \(B^{\beta }\) be the event from the statement of Lemma 3.5. Since the supremum norm is bounded by 1, we have

$$\begin{aligned} \mathbb {E}\left\| X - X^{\beta } \right\| _{sup} \le \mathbb {P}\left( B^{\beta } \right) + \mathbb {E}\left[ \left\| X - X^{\beta } \right\| _{sup} \mathbb {1}_{(B^{\beta })^c} \right] . \end{aligned}$$

By Lemma 3.5 the first term is o(1) as \(\beta \rightarrow 0\). Since \(V^{\beta }_{t}(x, \phi ) = V_{t}(x, \phi )\) if \(x \in [\beta , 1-\beta ]\), on the event \((B^{\beta })^c\) we have \(X^{\beta } = X\), so the second term above is equal to 0. As for \(\mathbb {E}\left\| X^{\beta } - X^{\beta , \delta } \right\| _{sup}\), by Proposition 3.6 for fixed \(\beta > 0\) we have \(\left\| X^{\beta } - X^{\beta , \delta } \right\| _{sup} \rightarrow 0\) with probability one as \(\delta \rightarrow 0\), so by bounded convergence \(\mathbb {E}\left\| X^{\beta } - X^{\beta , \delta } \right\| _{sup} \rightarrow 0\) as well. Together with the estimate on \(\mathbb {E}\left\| X - X^\beta \right\| _{sup}\) this proves the first claim of the proposition.

As for the energy, let \(\pi ^\beta \) denote the distribution of the process \(X^\beta \), with X, \(X^\beta \) and \(X^{\beta , \delta }\) coupled as before. Since

$$\begin{aligned} |I(\pi ) - I(\pi ^{\beta , \delta })| \le |I(\pi ) - I(\pi ^{\beta })| + |I(\pi ^{\beta }) - I(\pi ^{\beta , \delta })| \end{aligned}$$

it is enough to show that \(\lim \limits _{\delta \rightarrow 0} I(\pi ^{\beta , \delta }) = I(\pi ^{\beta })\) and \(\lim \limits _{\beta \rightarrow 0} I(\pi ^{\beta }) = I(\pi )\). We have

$$\begin{aligned} I(\pi ^{\beta , \delta }) = \mathbb {E}\int \limits _{0}^{T} |{\dot{X}}^{\beta , \delta }(t)|^2 \, dt = \mathbb {E}\int \limits _{0}^{T} V^{\beta , \delta }(t, X^{\beta , \delta }(t), \Phi ^{\beta , \delta }(t))^2 \, dt. \end{aligned}$$

For fixed \(t \in [0,T]\), by Lemma 3.4 \((X^{\beta , \delta }(t), \Phi ^{\beta , \delta }(t))\) has uniform distribution on \([0,1]^2\), so moving the expectation inside the integral we obtain

$$\begin{aligned} I(\pi ^{\beta , \delta }) = \int \limits _{0}^{T} \mathbb {E}\left[ V^{\beta , \delta }(t, X^{\beta , \delta }(t), \Phi ^{\beta , \delta }(t))^2 \right] dt = \int \limits _{0}^{T} \left( \int \limits _{0}^{1} \int \limits _{0}^{1} V^{\beta , \delta }(t, x, \phi )^2 \, dx \, d\phi \right) dt. \end{aligned}$$

The analogous formula is valid for \(I(\pi )\) as well. Now, for fixed \(\beta > 0\) we have \(V^{\beta , \delta }(t,x,\phi ) \xrightarrow {\delta \rightarrow 0} V^{\beta }_t(x,\phi )\) and \(V^{\beta , \delta }(t,x,\phi )\) is uniformly bounded in t, x and \(\phi \), independently of \(\delta \), which by dominated convergence implies the convergence of the integrals above as well. Thus \(\lim \limits _{\delta \rightarrow 0} I(\pi ^{\beta , \delta }) = I(\pi ^{\beta })\). The convergence \(\lim \limits _{\beta \rightarrow 0} I(\pi ^{\beta }) = I(\pi )\) follows directly from properties (d) and (e) of \(V^{\beta }_{t}(x, \phi )\) and dominated convergence.

\(\square \)

Construction of \(V_{t}^{\beta }\). We will construct the desired modification of \(V_t(x, \phi )\) for \(x \in [0,\beta ]\); the construction for \(x \in [1-\beta ,1]\) is analogous. Fix \(t \in [0,T]\). Let

$$\begin{aligned} L_t(x, \phi ) = \frac{\partial V_t}{\partial x}(\beta , \phi ) (x - \beta ) + V_{t}(\beta , \phi ). \end{aligned}$$

Let \(\beta ' < \beta / 2\) be a parameter to be fixed later and let f be a smooth approximation of a step function which has values in [0, 1], is equal to 0 on \([0, \beta - 2 \beta ']\), equal to 1 on \([\beta - \beta ', \beta ]\) and is increasing on \([\beta - 2\beta ', \beta - \beta ']\). In particular we have \(f(0) = 0\), \(f(\beta ) = 1\) and \(f'(\beta ) = 0\).
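One admissible choice (an example only; any \(C^1\) step with these properties works) is the cubic smoothstep

$$\begin{aligned} f(x) = {\left\{ \begin{array}{ll} 0 &{} \text{ for } x \in [0, \beta - 2\beta '], \\ S\left( \frac{x - \beta + 2\beta '}{\beta '}\right) &{} \text{ for } x \in [\beta - 2\beta ', \beta - \beta '], \\ 1 &{} \text{ for } x \in [\beta - \beta ', \beta ], \end{array}\right. } \qquad S(u) = 3u^2 - 2u^3, \end{aligned}$$

for which \(S(0) = S'(0) = 0\), \(S(1) = 1\) and \(S'(1) = 0\), so that f is continuously differentiable and increasing where required.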

Let us now take

$$\begin{aligned} {\widetilde{V}}_t(x, \phi ) = f(x)L_t(x, \phi ) \end{aligned}$$

and

$$\begin{aligned} V^{\beta }_{t}(x, \phi ) = {\left\{ \begin{array}{ll} {\widetilde{V}}_t(x, \phi ) \quad \text{ for } x \in [0, \beta ], \\ V_{t}(x, \phi ) \quad \text{ otherwise }. \end{array}\right. } \end{aligned}$$

We will check that \(V^{\beta }_{t}(x, \phi )\) indeed satisfies the desired properties.

Let us first check that the property (c) is satisfied for \(x \in [0, \beta ]\). We have

$$\begin{aligned}&\int \limits _{0}^{1} {\widetilde{V}}_t(x, \psi ) \, d\psi = \int \limits _{0}^{1} f(x)L_t(x, \psi ) \, d\psi = f(x) \int \limits _{0}^{1} \left( \frac{\partial V_t}{\partial x}(\beta , \psi ) (x - \beta ) + V_{t}(\beta , \psi ) \right) d\psi \\&\quad = f(x)(x - \beta ) \int \limits _{0}^{1} \frac{\partial V_t}{\partial x}(\beta , \psi ) \, d\psi + f(x) \int \limits _{0}^{1} V_{t}(\beta , \psi ) \, d\psi \\&\quad = f(x)(x - \beta ) \frac{d}{d x}\Big \vert _{x=\beta } \left( \int \limits _{0}^{1} V_t(x, \psi ) \, d\psi \right) + f(x) \int \limits _{0}^{1} V_{t}(\beta , \psi ) \, d\psi = 0, \end{aligned}$$

thanks to (24).

Property (b) follows directly from \(f(0) = 0\). As for (a), for \(x \in [0,\beta )\) continuous differentiability of \(V^{\beta }_t(x, \phi )\) follows from continuous differentiability of f(x). At \(x = \beta \) we have

$$\begin{aligned} {\widetilde{V}}_t(\beta , \phi ) = f(\beta ) L_t (\beta , \phi ) = f(\beta ) V_t(\beta , \phi ) \end{aligned}$$

and \(f(\beta ) = 1\), so \({\widetilde{V}}_t(x, \phi )\) is continuous at \(x=\beta \). Likewise,

$$\begin{aligned} \frac{\partial {\widetilde{V}}_t}{\partial x}(x, \phi ) = f'(x) L_t (x, \phi ) + f(x) \frac{\partial L_t}{\partial x}(x, \phi ) = f'(x) L_t (x, \phi ) + f(x) \frac{\partial V_t}{\partial x} (x, \phi ). \end{aligned}$$

Since \(f(\beta ) = 1\) and \(f'(\beta ) = 0\), we have \(\frac{\partial {\widetilde{V}}_t}{\partial x}(\beta , \phi ) = \frac{\partial V_t}{\partial x} (\beta , \phi )\). As the functions in the formula above are continuously differentiable at \(x=\beta \), \(V^{\beta }_t(x, \phi )\) is continuously differentiable at \(x=\beta \) as well.

To see that property (d) is satisfied, we note that by continuity of \(V_t(x, \phi )\) and \(\frac{\partial V_t}{\partial x}(x, \phi )\) for \(x \ne 0,1\) we can take \(\beta '\) in the definition of f(x) above to be arbitrarily small (depending on \(V_t\), \(\frac{\partial V_t}{\partial x}\) and \(\beta \)) so that on \([\beta - 2 \beta ', \beta ]\) the function \({\widetilde{V}}_t(x, \phi )\) is less than \(|V_t(\beta , \phi )| + 1\) in absolute value. Since on \([0, \beta - 2\beta ']\) we have \(V_{t}^{\beta }(x, \phi ) = 0\), the desired bound on \(|V_{t}^{\beta }(x, \phi )|\) follows.

Finally, to prove that property (e) holds it is enough to show that

$$\begin{aligned} \int \limits _{0}^{1} \int \limits _{0}^{\beta } |V_{t}^{\beta }(x, \phi )|^2 \, dx \, d\phi \rightarrow 0 \end{aligned}$$

as \(\beta \rightarrow 0\), since \(V_{t}^{\beta }(x, \phi ) = V_{t}(x, \phi )\) for \(x \notin [0,\beta ]\). The claim follows immediately from property (d), since the integrand is bounded independently of \(\beta \).

4 The Biased Interchange Process and Stationarity

The biased interchange process. For the sake of proving a large deviation lower bound, we will need to perturb the interchange process to obtain dynamics which typically exhibit the (otherwise rare) behavior of a fixed permuton process. Let us introduce the biased interchange process. Its configuration space E consists of sequences \(\eta = \left( (x_{i}, \phi _{i})\right) _{i=1}^{N}\), where as before \((x_1, \ldots , x_N)\) is a permutation of \(\{1, \ldots , N\}\) and \(\phi _{i}\) has N possible values, \(1, \ldots , N\). Here \(x_{i}\) will be the position of the particle with label i and \(\phi _{i}\) will be its color.

By a slight abuse of notation we will write \(\eta ^{-1}(x)\) to denote the label (number) of the particle at position x in configuration \(\eta \) (so that \(\eta ^{-1}(x_{i}) = i\)). For a position x we will often write \(\phi _{x}\) as a shorthand for \(\phi _{\eta ^{-1}(x)}\) (the positions will always be denoted by x or y and labels by i, so there is no risk of ambiguity). In this way we can treat any configuration \(\eta \) as a function which assigns to each site x a pair \((\eta ^{-1}(x), \phi _x)\), the label and the color of the particle present at x.

The configuration at time t will be denoted by \(\eta ^{N}_{t}\) (or simply \(\eta _{t}\)), and likewise by \(x_{i}(\eta ^{N}_{t})\) and \(\phi _{i}(\eta ^{N}_{t})\) we denote the position and the color of the particle number i at time t. We will use notation \(X_{i}(\eta ^{N}_{t}) = \frac{1}{N}x_{i}(\eta ^{N}_{t})\), \(\Phi _{i}(\eta ^{N}_{t}) = \frac{1}{N} \phi _{i}(\eta ^{N}_{t})\) for the rescaled positions and colors. By the same convention as above \(\Phi _x(\eta ^{N}_{t})\) will denote the rescaled color of the particle at site x at time t.

Let \(\varepsilon = N^{1-\alpha }\), with the same \(\alpha \in (1,2)\) as in (12). Suppose we are given functions \(v, r : [0,T] \times \{1, \ldots , N\} \times \{1, \ldots , N\} \rightarrow \mathbb {R}\). The dynamics of the corresponding biased interchange process is defined by the (time-inhomogeneous) generator

$$\begin{aligned} ({\widetilde{\mathcal {L}}}_t f)(\eta )&= \frac{1}{2} N^{\alpha } \sum \limits _{x=1}^{N-1} \big ( 1 + \varepsilon \left[ v(t, x, \phi _{x}(\eta )) - v(t,x+1, \phi _{x+1}(\eta ))\right] \big ) (f(\eta ^{x, x+1}) - f(\eta )) \nonumber \\&\quad + \frac{1}{2} N^{\alpha } \sum \limits _{x=1}^{N} \Big [ \big ( 1 + \varepsilon r(t,x, \phi _{x}(\eta )) \big ) (f(\eta ^{x,+}) - f(\eta )) + \big ( 1 - \varepsilon r (t,x, \phi _{x}(\eta )) \big ) (f(\eta ^{x,-}) - f(\eta )) \Big ]. \end{aligned}$$
(28)

Here \(\eta ^{x, x+1}\) is the configuration \(\eta \) with particles at locations x and \(x+1\) swapped, and \(\eta ^{y, \pm }\) is the configuration \(\eta \) with \(\phi _{y}\) changed by \(\pm 1\) (with the convention that \(\eta ^{y,+} = \eta ^{y}\) if \(\phi _y = N\) and likewise \(\eta ^{y,-} = \eta ^{y}\) if \(\phi _y = 1\)). We will often use the abbreviated notation \(v_{x}(t,\eta ) = v(t,x, \phi _{x}(\eta ))\) (with the convention \(v_{0}(t,\eta ) = v_{N+1}(t,\eta ) = 0\)).

In other words, neighboring particles swap at rate close to \(\frac{1}{2} N^{\alpha }\), with bias proportional to the difference of their velocities \(v(t,x,\phi _{x})\), and each particle independently changes its color by \(\pm 1\), also at rate close to \(\frac{1}{2} N^{\alpha }\), with bias proportional to \(\pm r(t,x, \phi _{x})\). The parameter \(\varepsilon \) has been chosen so that we expect particles to have displacement of order N at macroscopic times.
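For intuition, the dynamics defined by (28) can be simulated directly. The sketch below is illustrative only: it realizes the generator by standard uniformization (every edge and every site carries a rate-\(N^{\alpha }\) clock), the rate functions v and r are passed as arguments (in our setting they will be the specific rates (32)), and the function name and default parameters are not from the text.

```python
import numpy as np

def simulate_biased_interchange(N, T, v, r, alpha=1.5, seed=0):
    """Uniformized simulation of the generator (28): every edge and every
    site rings at rate N^alpha; a ringing edge swaps its two particles with
    probability (1 + eps*dv)/2, a ringing site moves the color of the
    particle there by +1 or -1 with probabilities (1 +- eps*r)/2.
    Assumes eps*|v| <= 1 and eps*|r| <= 1; intended for small N."""
    rng = np.random.default_rng(seed)
    eps = N ** (1.0 - alpha)
    label_at = np.arange(N)                  # label_at[x-1] = particle at site x
    col = rng.integers(1, N + 1, size=N)     # col[i] = color of particle i
    t, rate = 0.0, N**alpha * (2 * N - 1)    # total ring rate (N-1 edges, N sites)
    while (t := t + rng.exponential(1.0 / rate)) <= T:
        k = rng.integers(2 * N - 1)          # which clock rang
        if k < N - 1:                        # edge between sites k+1 and k+2
            a, b = label_at[k], label_at[k + 1]
            dv = v(t, k + 1, col[a]) - v(t, k + 2, col[b])
            if rng.uniform() < 0.5 * (1.0 + eps * dv):
                label_at[k], label_at[k + 1] = b, a
        else:                                # color move at site x = k - N + 2
            i = label_at[k - (N - 1)]
            if rng.uniform() < 0.5 * (1.0 + eps * r(t, k - N + 2, col[i])):
                col[i] = min(col[i] + 1, N)  # eta^{x,+}, frozen at phi = N
            else:
                col[i] = max(col[i] - 1, 1)  # eta^{x,-}, frozen at phi = 1
    return label_at, col
```

With \(v = r \equiv 0\) this reduces to the usual interchange process together with independent symmetric color walks.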

Since the interchange process is a pure jump Markov process, for each particle its rescaled position \(X_{i}(\eta ^{N})\) and color \(\Phi _{i}(\eta ^{N})\) will be càdlàg paths from [0, T] to [0, 1] and thus elements of \(\mathcal {D}\). In the same way we can consider the joint trajectory \(P_{i}(\eta ^{N}) = (X_{i}(\eta ^{N}), \Phi _{i}(\eta ^{N}))\) as an element of \(\widetilde{\mathcal {D}}= \mathcal {D}([0,T], [0,1]^2)\), the space of càdlàg paths from [0, T] to \([0,1]^2\) (equipped with the Skorokhod topology). By \(\mathcal {M}(\widetilde{\mathcal {D}})\) we will denote the space of Borel probability measures on \(\widetilde{\mathcal {D}}\), endowed with the weak topology, and by a slight abuse of notation the corresponding Wasserstein distance will be denoted by \(d_{{\mathcal {W}}}\), as for \(\mathcal {M}(\mathcal {D})\).

If \(\eta ^{N}\) is the trajectory of the biased interchange process, then by analogy with the permutation process \(X^{\eta ^{N}}\) we can define the colored permutation process \(P^{\eta ^{N}} = (X^{\eta ^{N}}, \Phi ^{\eta ^{N}})\), obtained by choosing a particle i at random and following the path \((X_{i}(\eta ^{N}_{t}), \Phi _{i}(\eta ^{N}_{t}))\). Thus we keep track both of the position and the color of a random particle. Since \(\eta ^N\) is random, the distribution \(\nu ^{\eta ^N}\) of \(P^{\eta ^{N}}\), given by

$$\begin{aligned} \nu ^{\eta ^N} = \frac{1}{N} \sum \limits _{i=1}^{N} \delta _{P^{\eta ^{N}}_{i}}, \end{aligned}$$

is a random element of \(\mathcal {M}(\widetilde{\mathcal {D}})\).

Stationarity conditions. Let us now connect the discussion of the interchange process with deterministic permuton processes and generalized solutions to Euler equations considered in Sect. 3. Recall the colored trajectory process \(P^{\beta , \delta } = (X^{\beta , \delta }, \Phi ^{\beta , \delta })\) defined in Sect. 3. From now on we consider \(\beta \in (0, \frac{1}{4})\) and \(\delta > 0\) to be fixed and we suppress them in the notation, writing \(X = X^{\beta , \delta }\), \(\Phi = \Phi ^{\beta , \delta }\), \(V(t, x, \phi ) = V^{\beta , \delta }(t, x, \phi )\), \(R(t, x, \phi ) = R^{\beta , \delta }(t, x, \phi )\). Note that this should not be confused with the actual generalized solution to Euler equations, which was also denoted by X, but does not appear in this and the following sections except in Theorem 7.3.

Our goal is to set up a biased interchange process so that typically trajectories of particles will behave like trajectories of the process X. We would also like to preserve the stationarity of the uniform distribution of colors, which will greatly facilitate parts of the argument. To find the correct rates \(v(t, x, \phi )\) and \(r(t, x, \phi )\) in (28), recall that by definition the trajectories of the colored trajectory process \(P = (X, \Phi )\) satisfy the equation

$$\begin{aligned} {\left\{ \begin{array}{ll} \frac{dX}{dt}(t) = V(t, X(t), \Phi (t)), \\ \frac{d\Phi }{dt}(t) = R(t, X(t), \Phi (t)), \\ \end{array}\right. } \end{aligned}$$
(29)

with the functions V and R satisfying

$$\begin{aligned} {\left\{ \begin{array}{ll} V(t,X, \Phi ) = \frac{\partial F}{\partial \Phi }(t,X, \Phi ), \\ R(t,X, \Phi ) = -\frac{\partial F}{\partial X}(t,X, \Phi ) \end{array}\right. } \end{aligned}$$
(30)

for \(F(t,X,\Phi ) = \int \limits _{0}^{\Phi } V(t,X,\psi ) \, d\psi \). Note that \(F(t,X,0) = 0\) and \(F(t,X,1)=0\), where the latter equality follows from property (c) of \(V^{\beta }_{t}(x,\phi )\) (and thus of \(V = V^{\beta ,\delta }\)).

It is clear that v and r should be chosen so that approximately we have \(v(t,x, \phi ) \approx V\left( t,\frac{x}{N} , \frac{\phi }{N}\right) \), \(r(t,x, \phi ) \approx R\left( t,\frac{x}{N} , \frac{\phi }{N}\right) \). To analyze the stationarity condition, consider the uniform distribution on configurations of the biased interchange process, i.e., a distribution in which the labelling of particles is a uniformly random permutation and each particle has a uniformly random color, chosen independently from \(\{1, \ldots , N\}\) for each of them. We want to find a condition on rates \(v(t,x, \phi )\) and \(r(t,x, \phi )\) such that this measure will be invariant for the dynamics of \({\widetilde{\mathcal {L}}}_t\).

Note that since \(V(t,X,\Phi )\), \(R(t,X,\Phi )\) are piecewise-constant as functions of t, the dynamics induced by \({\widetilde{\mathcal {L}}}_t\) is time-homogeneous on each interval \([t_k, t_{k+1})\) from the definition (26) of V. Thus the stationarity condition for the uniform measure is that for each state (i.e., each configuration \(\eta \)) the sums of outgoing and incoming jump rates have to be equal. We write down this condition as follows. For any given configuration \(\eta \), with particle at location x having color \(\phi _{x} = \phi _{x}(\eta )\), there are the following possible outgoing jumps:

  • for some \(x \in \{ 1, \ldots , N - 1\}\) the particles at locations x and \(x+1\) swap, at rate \(1 + \varepsilon \left[ v (t,x, \phi _{x}) - v (t,x+1, \phi _{x+1}) \right] \);

  • for some \(x \in \{1, \ldots , N\}\) the particle at x changes its color from \(\phi _x\) to \(\phi _x \pm 1\), at rate \(1 \pm \varepsilon r (t,x, \phi _{x})\);

and incoming jumps:

  • for some \(x \in \{ 1, \ldots , N - 1\}\) the particles at locations x and \(x+1\) swap, at rate \(1 + \varepsilon \left[ v (t,x, \phi _{x+1}) - v (t,x+1, \phi _{x}) \right] \);

  • for some \(x \in \{1, \ldots , N\}\) the particle at x changes its color from \(\phi _x \pm 1\) to \(\phi _x\), at rate \(1 \mp \varepsilon r (t,x, \phi _{x} \pm 1)\).

Thus the condition on the sums of jump rates is

$$\begin{aligned}&\sum \limits _{x=1}^{N-1} \left( v (t,x, \phi _{x}) - v (t,x+1, \phi _{x+1}) \right) \\&\quad = \sum \limits _{x=1}^{N-1} \left( v (t,x, \phi _{x+1}) - v (t,x+1, \phi _{x}) \right) + \sum \limits _{x=1}^{N} \left( r (t,x, \phi _{x} - 1) - r (t,x, \phi _{x} + 1) \right) , \end{aligned}$$

where we adopt the convention \(r(t,x,0) = r(t,x,N+1) = 0\). This implies

$$\begin{aligned}&\sum \limits _{x=2}^{N-1} \big ( v (t,x-1, \phi _{x}) - v (t,x+1, \phi _{x}) + r(t,x,\phi _{x} - 1) - r(t,x,\phi _{x} + 1) \big ) \\&\quad + v(t,N-1, \phi _{N}) + v(t,N, \phi _{N}) - v(t,1, \phi _{1}) - v(t,2, \phi _{1}) \\&\quad + \left[ r(t,1,\phi _{1} - 1) - r(t,1,\phi _{1} + 1) \right] + \left[ r(t,N,\phi _{N} - 1) - r(t,N,\phi _{N} + 1) \right] = 0. \end{aligned}$$

Since we would like this equation to be satisfied for any configuration, regardless of the choice of \(\phi _x\) for each x, we want each term in the sum and each of the boundary terms to vanish. This gives us a set of equations

$$\begin{aligned} {\left\{ \begin{array}{ll} v(t,1, \phi ) + v(t,2, \phi ) = r(t,1,\phi - 1) - r(t,1,\phi + 1), \\ v (t,x+1, \phi ) - v (t,x-1, \phi ) = r(t,x,\phi - 1) - r(t,x,\phi + 1), \, \, \, x = 2, \ldots , N-1, \\ v(t,N-1, \phi ) + v(t,N, \phi ) = r(t,N,\phi + 1) - r(t,N,\phi - 1), \end{array}\right. } \end{aligned}$$
(31)

which have to be satisfied for every \(\phi = 1, \ldots , N\).

Let us consider the function \(f(t,x, \phi )\) defined for \(x \in \{0, \ldots , N+1\}\), \(\phi \in \{0, \ldots , N+1\}\) by

$$\begin{aligned} f (t,x, \phi ) = {\left\{ \begin{array}{ll} F\left( t,\frac{x}{N}, \frac{\phi }{N+1} \right) , \quad &{} x = 2, \ldots , N-1, \\ 0, \quad &{} x = 0, 1, N, N + 1, \end{array}\right. } \end{aligned}$$

where F is the function appearing in (30). It is straightforward to check that the rates given by

$$\begin{aligned} {\left\{ \begin{array}{ll} v(t,x, \phi ) = \frac{N}{2} \left( f(t,x, \phi - 1) - f(t,x, \phi + 1) \right) , \\ r(t,x, \phi ) = \frac{N}{2} \left( f(t,x+1, \phi ) - f(t,x-1,\phi ) \right) , \end{array}\right. } \end{aligned}$$
(32)

solve the equations for stationarity, given by (31), for any \(x, \phi \in \{1, \ldots , N\}\).
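For instance, for \(x = 2, \ldots , N-1\) both sides of the bulk equation in (31) expand into the same mixed difference of f:

$$\begin{aligned} v (t,x+1, \phi ) - v (t,x-1, \phi )&= \frac{N}{2} \big ( f(t,x+1, \phi - 1) - f(t,x+1, \phi + 1) \\&\quad - f(t,x-1, \phi - 1) + f(t,x-1, \phi + 1) \big ) = r(t,x,\phi - 1) - r(t,x,\phi + 1), \end{aligned}$$

and the boundary equations follow in the same way using \(f(t,0,\phi ) = f(t,1,\phi ) = f(t,N,\phi ) = f(t,N+1,\phi ) = 0\).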

Note that with this choice of rates we have for any \(x, \phi \in \{1, \ldots , N\}\)

$$\begin{aligned} {\left\{ \begin{array}{ll} v(t,x, \phi ) = V \left( t, \frac{x}{N}, \frac{\phi }{N} \right) + O\left( \frac{1}{N} \right) , \\ r(t,x, \phi ) = R \left( t, \frac{x}{N}, \frac{\phi }{N} \right) + O\left( \frac{1}{N} \right) , \end{array}\right. } \end{aligned}$$
(33)

uniformly in x, \(\phi \) and t, because of smoothness of \(F(t, X, \Phi )\) in X and \(\Phi \) variables. In particular the rates v and r are uniformly bounded for all N.

From now on we will always assume that the biased interchange process has rates \(v(t,x, \phi )\) and \(r(t,x, \phi )\) given by (32) and is started from the uniform distribution (which by the discussion above is stationary). The properties of v and r relevant to our analysis are that they are bounded and approximately equal to the smooth functions V, R, that the corresponding dynamics has the uniform measure as its stationary distribution and, crucially, that in stationarity the velocities are independent and have mean zero. This last property, which should be thought of as the particle system analog of Lemma 3.2, is conveniently summarized in the following proposition.

Proposition 4.1

Let \(\phi _{x}\), \(x=1, \ldots , N\), be independent and uniformly distributed on \(\{1, \ldots , N\}\). Then for each \(t \in [0,T]\) the random variables \(v(t,x, \phi _x)\), \(x=1, \ldots , N\), are independent and for each x we have

$$\begin{aligned} \mathbb {E}\, v(t,x, \phi _x) = 0. \end{aligned}$$

Proof

Under the uniform distribution of \(\phi _x\) we have

$$\begin{aligned} \mathbb {E}\, v(t,x, \phi _x) = \frac{1}{N}\sum \limits _{\phi =1}^{N} v(t,x,\phi ), \end{aligned}$$

which by definition of v is equal to

$$\begin{aligned} \frac{1}{N}\sum \limits _{\phi =1}^{N} \frac{N}{2} \left( f(t,x, \phi - 1) - f(t,x, \phi + 1) \right) = \frac{1}{2} \left( F\left( t, \frac{x}{N}, 0\right) - F\left( t,\frac{x}{N}, 1\right) \right) . \end{aligned}$$

Recalling the definition of F below (30), the right-hand side is equal to 0. \(\square \)

5 Law of Large Numbers

Throughout this section \({\widetilde{\mathbb {P}}}^{N}\) will denote the probability law of the biased interchange process on N particles, started in stationarity, associated to the equation (29) (with all the assumptions from the previous section). To simplify notation we will usually write \(\eta = \eta ^{N}\). Whenever we use \(o(\cdot )\) or \(O(\cdot )\) asymptotic notation the implicit constants will depend only on the rates v, r and possibly on T.

Let \(P = (X, \Phi )\) be the colored trajectory process associated to the Eq. (29) and let \(P^{\eta ^{N}}\) be the colored permutation process defined in Sect. 4. Let us denote the distributions of P and \(P^{\eta ^{N}}\) respectively by \(\nu \) and \(\nu ^{\eta ^{N}}\), with \(\nu , \nu ^{\eta ^{N}} \in \mathcal {M}(\widetilde{\mathcal {D}})\). We will prove the following theorem

Theorem 5.1

Let \(\eta ^{N}\) be the trajectory of the biased interchange process. The measures \(\nu ^{\eta ^{N}}\) converge in distribution, as random elements of \(\mathcal {M}(\widetilde{\mathcal {D}})\), to the deterministic measure \(\nu \) as \(N \rightarrow \infty \).

In other words, the random processes \(P^{\eta ^{N}}\) converge in distribution to the process P whose distribution is deterministic. The theorem above can be thought of as a law of large numbers for random permuton processes and it will be useful for establishing the large deviation lower bound.

Remark 5.2

Since the limiting measure \(\nu \) is deterministic and supported on continuous trajectories, Theorem 5.1 implies that the convergence \(\nu ^{\eta ^N} \rightarrow \nu \) in fact holds in a stronger sense, namely in probability when \(\mathcal {M}(\widetilde{\mathcal {D}})\) is endowed with the Wasserstein distance \(d_{{\mathcal {W}}}^{sup}\) associated to the supremum norm on \(\widetilde{\mathcal {D}}\).

To prove Theorem 5.1, we will show that typically trajectories of most particles approximately follow the same ODE (29) as trajectories of the limiting process. In other words, if a given particle is at site x, it should locally move according to its velocity \(v(t, x, \phi _{x})\). However, because of swaps between particles the actual jump rates of the particle will be influenced by velocities of its neighbors. Nevertheless, since velocity at each site has mean 0 in stationarity, we will be able to show that the contribution from velocities of the particle’s neighbors cancels out when averaged over time – this will be the content of the one block estimate proved in the next section.

Note that to prove that the random processes indeed converge to a deterministic process, it is not enough to look only at single path distributions, as explained in Sect. 2.1. Nevertheless, we will show that in the interchange process typically any two particles (in fact almost all of them) behave like independent random walks, which by Lemma 2.1 will be enough to establish a deterministic limit.

Throughout this and the following sections we will make extensive use of martingales associated to Markov processes (see [KL99] for a comprehensive treatment of such techniques applied to interacting particle systems). For any Markov process with generator \(\mathcal {L}\) and a bounded function \(F : E \rightarrow \mathbb {R}\), where E is the configuration space of the process, the following processes are mean zero martingales ([KL99, Lemma A1.5.1])

$$\begin{aligned}&M_t = F(\eta _t) - F(\eta _0) - \int \limits _{0}^{t} \mathcal {L}F(\eta _s) \, ds, \end{aligned}$$
(34)
$$\begin{aligned}&N_t = M_{t}^{2} - \int \limits _{0}^{t} \left( \mathcal {L}(F^{2})(\eta _s) - 2F(\eta _s)\mathcal {L}F(\eta _s)\right) ds. \end{aligned}$$
(35)

Furthermore, for any F as above the following process is a mean one positive martingale (see discussion following [KL99, Lemma A1.7.1])

$$\begin{aligned} {\mathbb {M}}_t = \exp \left\{ F(\eta _t) - F(\eta _0) - \int \limits _{0}^{t} e^{-F(\eta _s)} (\mathcal {L}e^{F})(\eta _s) \, ds \right\} . \end{aligned}$$
(36)

In the following sections we will also consider the case when F is not necessarily bounded, in which case \(M_t\), \(N_t\), \({\mathbb {M}}_t\) are only local martingales.

Our first goal is to prove that with high probability almost all particles move according to their local velocity \(v(t,x_{i}, \phi _i)\). Recall that

$$\begin{aligned} X_{i}(\eta _{t}) = \frac{1}{N} x_{i}(\eta _{t}), \quad \Phi _{i}(\eta _{t}) = \frac{1}{N} \phi _{i}(\eta _{t}) \end{aligned}$$

are respectively the rescaled position and color of the particle with label i. More precisely, we will prove the following

Proposition 5.3

For any fixed \(t \in [0,T]\) and \(\varepsilon > 0\) we have in the biased interchange process

$$\begin{aligned}&{\widetilde{\mathbb {P}}}^{N} \left( \frac{1}{N} \sum \limits _{i=1}^{N}\left| X_{i}(\eta _{t}) - X_{i}(\eta _{0}) - \int \limits _{0}^{t} v(s,x_{i}(\eta _{s}), \phi _{i}(\eta _{s})) \, ds \right|> \varepsilon \right) \rightarrow 0, \\&{\widetilde{\mathbb {P}}}^{N} \left( \frac{1}{N} \sum \limits _{i=1}^{N} \left| \Phi _{i}(\eta _{t}) - \Phi _{i}(\eta _{0}) - \int \limits _{0}^{t} r(s,x_{i}(\eta _{s}), \phi _{i}(\eta _{s})) \, ds \right| > \varepsilon \right) \rightarrow 0, \end{aligned}$$

as \(N \rightarrow \infty \).

As a starting point let us rewrite \(X_i(\eta _t)\) in a more useful form. Recall from (28) that \({\widetilde{\mathcal {L}}}\) denotes the generator of the biased interchange process. By the formula (34) applied to \(F(\eta _s) = X_{i}(\eta _s)\) we have

$$\begin{aligned} X_{i}(\eta _{t}) - X_{i}(\eta _{0}) = M_{t}^{i} + \int \limits _{0}^{t} {\widetilde{\mathcal {L}}} X_{i}(\eta _{s}) \, ds, \end{aligned}$$

where \(M_{t}^{i}\) is a mean zero martingale with respect to \({\widetilde{\mathbb {P}}}^{N}\). Recall that \(v_x(t,\eta ) = v(t,x, \phi _{x}(\eta ))\) denotes the velocity of the particle at site x in configuration \(\eta \) at time t. For simplicity we will also write \(v_{x_{i}}(t,\eta ) = v(t,x_{i}(\eta ), \phi _{i}(\eta ))\) for the velocity of the particle with label i. We have

$$\begin{aligned} {\widetilde{\mathcal {L}}} X_{i}(\eta _{s})&= \frac{1}{N} {\widetilde{\mathcal {L}}} (x_{i}(\eta _{s})) \\&= \frac{1}{2} N^{\alpha -1} \sum \limits _{x=1}^{N-1} \left( 1 + \varepsilon \left[ v_{x}(s,\eta _{s}) - v_{x+1}(s,\eta _{s})\right] \right) (x_{i}(\eta ^{x, x+1}_{s}) - x_{i}(\eta _{s})) \\&= \frac{1}{2} N^{\alpha -1}\varepsilon \Big [ - \left[ v_{x_{i}-1}(s,\eta _{s}) - v_{x_{i}}(s,\eta _{s}) \right] + \left[ v_{x_{i}}(s,\eta _{s}) - v_{x_{i}+1}(s,\eta _{s}) \right] \Big ] \\&= \frac{1}{2} \left( 2 v_{x_{i}}(s,\eta _{s}) - v_{x_{i}-1}(s,\eta _{s}) - v_{x_{i}+1}(s,\eta _{s})\right) , \end{aligned}$$

since the position of the particle i changes by \(\pm 1\) depending on whether it makes a swap with its left or right neighbor.

Thus we obtain

$$\begin{aligned}&X_{i}(\eta _{t}) - X_{i}(\eta _{0}) = M_{t}^{i} + \int \limits _{0}^{t} v_{x_{i}}(s,\eta _{s}) \, ds - \frac{1}{2} \int \limits _{0}^{t} \left( v_{x_{i}-1}(s,\eta _{s}) + v_{x_{i}+1}(s,\eta _{s}) \right) ds, \end{aligned}$$

or in other words

$$\begin{aligned}&X_{i}(\eta _{t}) - X_{i}(\eta _{0}) - \int \limits _{0}^{t} v_{x_{i}}(s,\eta _{s}) \, ds = M_{t}^{i} - \frac{1}{2} \int \limits _{0}^{t} \left( v_{x_{i}-1}(s,\eta _{s}) + v_{x_{i}+1}(s,\eta _{s}) \right) \, ds. \end{aligned}$$
(37)

For the sake of proving the first part of Proposition 5.3 it will be enough to show that

$$\begin{aligned} \frac{1}{N} \sum \limits _{i=1}^{N} \mathbb {E}\Big ( X_{i}(\eta _{t}) - X_{i}(\eta _{0}) - \int \limits _{0}^{t} v_{x_{i}}(s,\eta _{s}) \, ds \Big )^2 \rightarrow 0 \end{aligned}$$
(38)

as \(N \rightarrow \infty \). First we prove that for most particles the martingale term \(M_{t}^{i}\) will be small with high probability. Let us define

$$\begin{aligned} Q_{s}^{i} = {\widetilde{\mathcal {L}}} (X_{i}^{2})(\eta _{s}) - 2 X_{i}(\eta _{s}) {\widetilde{\mathcal {L}}} X_{i}(\eta _{s}). \end{aligned}$$

By the martingale formula (35) we have that

$$\begin{aligned} N_{t}^{i} = (M_{t}^{i})^2 - \int \limits _{0}^{t} Q_{s}^{i} \, ds \end{aligned}$$
(39)

is a mean zero martingale. A quick calculation gives

$$\begin{aligned} {\widetilde{\mathcal {L}}} (X_{i}^{2})(\eta _{s}) = \frac{1}{2} \Big [ \left( v_{x_{i}-1}(s,\eta _{s}) - v_{x_{i}}(s,\eta _{s})\right) \left( \frac{-2 x_{i}(\eta _{s})+1}{N} \right) + \left( v_{x_{i}}(s,\eta _{s}) - v_{x_{i}+1}(s,\eta _{s})\right) \left( \frac{2 x_{i}(\eta _{s})+1}{N} \right) \Big ] + N^{\alpha - 2} \end{aligned}$$

and

$$\begin{aligned}&2 X_{i}(\eta _{s}) {\widetilde{\mathcal {L}}} X_{i}(\eta _{s}) = \frac{x_{i}(\eta _{s})}{N} \left( 2 v_{x_{i}}(s,\eta _{s}) - v_{x_{i}-1}(s,\eta _{s}) - v_{x_{i}+1}(s,\eta _{s}) \right) , \end{aligned}$$

so these two quantities are the same up to terms of order o(1). Thus \(Q_{s}^{i} = o(1)\) (uniformly in s and i) and, since \(\mathbb {E}N_{t}^{i} = 0\), we obtain from (39) that \(\mathbb {E}(M_{t}^{i})^2 = o(1)\) as well.

Incidentally, a similar calculation (only simpler, since it does not involve correlations between adjacent particles) combined with the same martingale argument shows that for \(\Phi _{i}(\eta _{t}) = \frac{1}{N}\phi _{i}(\eta _{t})\) we have

$$\begin{aligned} \Phi _{i}(\eta _{t}) - \Phi _{i}(\eta _{0}) - \int \limits _{0}^{t} r(s,x_{i}(\eta _{s}), \phi _{i}(\eta _{s})) \, ds = o(1) \end{aligned}$$

in \(L^2({\widetilde{\mathbb {P}}}^{N})\), uniformly in the particle i; by Markov’s inequality this proves the second part of Proposition 5.3.

Recalling (37) and (38), to finish the proof of the first part of Proposition 5.3 we only need to show that

$$\begin{aligned} \frac{1}{N} \sum \limits _{i=1}^{N} \mathbb {E}\left( Y_{i}^{t}\right) ^2 \rightarrow 0 \end{aligned}$$

as \(N \rightarrow \infty \), where

$$\begin{aligned} Y_{i}^{t} = \int \limits _{0}^{t} \left( v_{x_{i}-1}(s,\eta _{s}) + v_{x_{i}+1}(s,\eta _{s}) \right) ds. \end{aligned}$$

Recall from (26) that \(V^{\beta ,\delta }(s,x,\phi )\) was defined in terms of a partition \(0 = t_0< t_1< \ldots < t_M = T\). We would like to take advantage of the fact that on each interval the dynamics of the biased interchange process is time-homogeneous. Suppose that \(t \in [t_l, t_{l+1})\) for some \(l \le M - 1\) and let us write

$$\begin{aligned} Y_{i}^{t} = \sum \limits _{k=0}^{l-1} \int \limits _{t_k}^{t_{k+1}} \left( v_{x_{i}-1}(s,\eta _{s}) + v_{x_{i}+1}(s,\eta _{s}) \right) ds + \int \limits _{t_l}^{t} \left( v_{x_{i}-1}(s,\eta _{s}) + v_{x_{i}+1}(s,\eta _{s}) \right) ds. \end{aligned}$$

For any \(t \ge 0\) let

$$\begin{aligned} Y_{i}^{t,k} = \int \limits _{t_k}^{t_k + t} \left( v_{x_{i}-1}(s,\eta _{s}) + v_{x_{i}+1}(s,\eta _{s}) \right) ds. \end{aligned}$$

Since M is fixed, it is enough to show that for any fixed \(k \le M - 1\) and \(t \in [0, t_{k+1} - t_{k}]\) we have

$$\begin{aligned} \frac{1}{N} \sum \limits _{i=1}^{N} \mathbb {E}\left( Y_{i}^{t,k}\right) ^2 \rightarrow 0 \end{aligned}$$

as \(N \rightarrow \infty \).

To keep the notation simple we will prove the desired statement just for \(k=0\), with the general case being exactly analogous. Recall that \(t_0 = 0\). By definition of the piecewise-constant in time approximation of \(V^{\beta ,\delta }\), for \(s \in [0, t_{1})\) we have \(v_{x}(s, \eta _s) = v_{x}(0, \eta _s)\). Let us define \(v_{x}(\eta ) = v_{x}(0, \eta )\). Fix any \(t \in [0, t_{1}] \) and let us look at

$$\begin{aligned} \left( Y_{i}^{t,0}\right) ^2 = \left( \int \limits _{0}^{t} \left( v_{x_{i}-1}(\eta _{s}) + v_{x_{i}+1}(\eta _{s}) \right) ds \right) ^2. \end{aligned}$$

We will have four cross-terms here; it is enough to show that each of them is small in expectation. The argument will be similar in all cases, so we will only present the proof for one of them. Let us focus on

$$\begin{aligned}&\mathbb {E}\left[ \left( \int \limits _{0}^{t} v_{x_{i}-1}(\eta _{s}) \, ds \right) \left( \int \limits _{0}^{t} v_{x_{i}-1}(\eta _{s}) \, ds \right) \right] = \mathbb {E}\int \limits _{0}^{t} \int \limits _{0}^{t} v_{x_{i}-1}(\eta _{u_{1}}) v_{x_{i}-1}(\eta _{u_{2}}) \, du_{1} \, du_{2}. \end{aligned}$$

For each particle i we are looking at the correlation of the velocity of its left neighbor at time \(u_1\) with the velocity of its left neighbor at time \(u_{2}\). By averaging over particles \(i = 1, \ldots , N\) and using the symmetry between \(u_1\) and \(u_2\) we can write the contribution to the second moment of \(Y_{i}^{t,0}\) as

$$\begin{aligned}&\frac{2}{N} \sum \limits _{i=1}^{N} \mathbb {E}\int \limits _{0}^{t} \int \limits _{u_{1}}^{t} v_{x_{i}-1}(\eta _{u_{1}}) v_{x_{i}-1}(\eta _{u_{2}}) \, du_{2} \, du_{1} \\&\quad = 2 \int \limits _{0}^{t} \, du_{1} \left( \frac{1}{N} \sum \limits _{i=1}^{N} \mathbb {E}\int \limits _{u_{1}}^{t} v_{x_{i}-1}(\eta _{u_{1}}) v_{x_{i}-1}(\eta _{u_{2}}) \, du_{2} \right) . \end{aligned}$$

Since the rates v are bounded, it is enough to show that for each fixed \(u_{1} \in [0,t]\) the expression inside the bracket is close to 0 as \(N \rightarrow \infty \). Let us look at

$$\begin{aligned} \frac{1}{N} \sum \limits _{i=1}^{N} \mathbb {E}\int \limits _{u_{1}}^{t} v_{x_{i}-1}(\eta _{u_{1}}) v_{x_{i}-1}(\eta _{u_{2}}) \, du_{2}. \end{aligned}$$

Since the average here depends only on the configuration at time \(u_{1}\) and its evolution from that point on (and not otherwise on the trajectory of the process before time \(u_{1}\)), by stationarity of the biased interchange process it will be the same as

$$\begin{aligned} \frac{1}{N} \sum \limits _{i=1}^{N} \mathbb {E}\int \limits _{0}^{t - u_1} v_{x_{i}-1}(\eta _{0}) v_{x_{i}-1}(\eta _{s}) \, ds , \end{aligned}$$
(40)

since the dynamics of the process is time-homogeneous on \([0, t_{1})\).

Thus we have to prove that for a random particle the velocity of its initial left neighbor is uncorrelated (when averaged over time) with the velocity of its current left neighbor. Let us introduce the following setup. We can rewrite the average above as a sum over sites (with \(y = x_{i}(\eta _{s})\)) instead of a sum over particles:

$$\begin{aligned} \frac{1}{N} \sum \limits _{y=1}^{N} \mathbb {E}\int \limits _{0}^{t - u_1} v_{x_{\eta ^{-1}_{s}(y)}(\eta _{0})-1}(\eta _{0}) v_{y-1}(\eta _{s}) \, ds \end{aligned}$$
(41)

To analyze this average we introduce the following extension of the biased interchange process. Consider the extended configuration space \({\widetilde{E}}\) consisting of sequences \(\left( (x_i, \phi _i, L_i) \right) _{i=1}^{N}\), where \(L_i\) takes values in the finite set of possible velocities. Here each particle, in addition to its color \(\phi _{i}\), also has an additional color \(L_{i}\) in which we keep information about the velocity of its left neighbor at time 0, that is

$$\begin{aligned} L_{i} = v_{x_{i}(\eta _{0})-1}(\eta _{0}). \end{aligned}$$

The dynamics is given by the same generator (28) as before, i.e., labels (together with their corresponding colors \(\phi _i\) and \(L_i\)) are exchanged by swaps of adjacent particles, each \(\phi _i\) has its own evolution and \(L_i\) does not evolve. For a site x let \(L_{x}(\eta )\) be the additional color at site x in configuration \(\eta \), i.e., \(L_x(\eta ) = L_{\eta ^{-1}(x)}\). We can now treat \(\eta \) as a function which assigns to each site x a triple \((\eta ^{-1}(x), \phi _{x}, L_{x})\) or simply a pair \((\phi _{x}, L_{x})\), since we are not interested in particles’ labels at this point, only in the distribution of colors.

In this setup the average (41) can be written as

$$\begin{aligned} \frac{1}{N} \sum \limits _{y=1}^{N} \mathbb {E}\int \limits _{0}^{t - u_1} f_{y}(\eta _s) \, ds, \end{aligned}$$
(42)

where \(f_{y}(\eta ) = L_{y}(\eta ) v_{y-1}(\eta )\). Let

$$\begin{aligned} \Lambda _{x, l} = \{x-l, x-l + 1, \ldots , x+l\} \end{aligned}$$

denote a box of size l around x (with the convention that the box is truncated if the endpoints \(x-l\) or \(x+l\) exceed 1 or N, but this will not influence the argument in any substantial way) and let \({\widehat{\mu }}_{x,l}^{\eta }\) be the empirical distribution of colors in \(\Lambda _{x,l}\) in configuration \(\eta \), given for any \((L, \phi )\) by

$$\begin{aligned} {\widehat{\mu }}_{x, l}^{\eta } \left( L, \phi \right) = \frac{1}{|\Lambda _{x,l}|} \# \{ z \in \Lambda _{x,l} \, | \, \left( L_z(\eta ), \phi _z(\eta ) \right) = (L, \phi )\}. \end{aligned}$$

Consider the associated i.i.d. distribution on configurations restricted to \(\Lambda _{x,l}\), given by

$$\begin{aligned} \mu _{x, l}^{\eta } \left( (L_y, \phi _y)_{y=x-l}^{x+l} \right) = \prod \limits _{y=x-l}^{x+l} {\widehat{\mu }}_{x, l}^{\eta } \left( L_y, \phi _y \right) . \end{aligned}$$

In other words, under the measure \(\mu _{x, l}^{\eta }\) the probability of seeing a color pair \((L, \phi )\) at site \(y \in \Lambda _{x,l}\) is proportional to the number of sites in \(\Lambda _{x,l}\) with the color pair \((L, \phi )\), independently for each site.
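The way such product averages factorize, which we will use in a moment for \(f_y = L_y v_{y-1}\), can be sketched in a few lines; the arrays L, phi and the function v0 below are illustrative stand-ins for the quantities above (the dependence of the rate on the site is suppressed).

```python
import numpy as np

def block_average(L, phi, y, l, v0):
    """Product-measure average of f_y = L_y * v_{y-1} over the box
    Lambda_{y,l}: under mu_{y,l}^eta the two factors are independent, so
    the average factorizes into two empirical block means."""
    lo, hi = max(0, y - l), min(len(L), y + l + 1)   # truncated box
    return L[lo:hi].mean() * v0(phi[lo:hi]).mean()

rng = np.random.default_rng(2)
N = 1000
L = rng.choice([-1.0, 0.0, 1.0], size=N)        # stand-in recorded velocities
phi = rng.integers(1, N + 1, size=N)            # colors, uniform on {1,...,N}
v0 = lambda p: np.sin(2 * np.pi * p / N)        # assumed bounded mean-zero rate
print(block_average(L, phi, y=500, l=50, v0=v0))  # typically close to 0
```

Both empirical means concentrate around the corresponding uniform averages, which is what makes the replacement by \(\mathbb {E}_{\mu _{x,l}^{\eta }}(f)\) useful.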

The superexponential one block estimate says that on an event of high probability we can replace \(f_{y}(\eta _{s})\) in the time average (42) by its average \(\mathbb {E}_{\mu _{y,l}^{\eta _{s}}} (f)\) with respect to the local i.i.d. distribution over a sufficiently large box. In other words, due to local mixing the distribution of colors in a microscopic box can be approximated by an i.i.d. distribution for large l.

Lemma 5.4

Let \(U_{x,l}(\eta ) = | f_{x}(\eta ) - \mathbb {E}_{\mu _{x,l}^{\eta }} (f) |\). For any \(t \in [0,t_{1}]\) and \(\delta > 0\) we have

$$\begin{aligned} \limsup \limits _{l \rightarrow \infty } \limsup \limits _{N \rightarrow \infty } N^{-\gamma } \log {\widetilde{\mathbb {P}}}^{N} \left( \int \limits _{0}^{t} \frac{1}{N} \sum \limits _{x=1}^{N} U_{x,l}(\eta _{s}) \, ds > \delta \right) = - \infty , \end{aligned}$$

where \(\gamma = 3-\alpha \).

The lemma is proved in the next section. Let us see how it enables us to finish the proof of Proposition 5.3. By the one block estimate, in (42) we can replace

$$\begin{aligned} \frac{1}{N} \sum \limits _{y=1}^{N} \int \limits _{0}^{t - u_1} f_{y} (\eta _s) \, ds \end{aligned}$$

by

$$\begin{aligned} \frac{1}{N} \sum \limits _{y=1}^{N} \int \limits _{0}^{t - u_1} \mathbb {E}_{\mu _{y,l}^{\eta _{s}}} f_{y} (\eta _s) \, ds, \end{aligned}$$
(43)

with the difference going to 0 in expectation as first \(N \rightarrow \infty \) and then \(l \rightarrow \infty \), so we only need to show that the latter expression goes to 0 in the same limit.

Observe that in \(f_{y}(\eta ) = L_y(\eta )v_{y-1}(\eta ) = L_y(\eta )v(0, y-1, \phi _{y-1}(\eta ))\) the colors \(\phi _{y-1}\) and \(L_y\) depend on different sites, so they are independent under \(\mu _{y,l}^{\eta _{s}}\), since the measure is product. Thus in the average above we can simply write

$$\begin{aligned} \mathbb {E}_{\mu _{y,l}^{\eta _{s}}} f_{y} (\eta _s) = \mathbb {E}_{\mu _{y,l}^{\eta _{s}}} \left[ L_y(\eta )v_{y-1}(\eta ) \right] = \left( \mathbb {E}_{\sigma \sim \mu _{y,l}^{\eta _{s}}} L_{y}(\sigma ) \right) \left( \mathbb {E}_{\sigma \sim \mu _{y,l}^{\eta _{s}}} v_{y-1}(\sigma ) \right) , \end{aligned}$$

where by a slight abuse of notation we have denoted by \(\sigma \) the local configuration of colors in a box \(\Lambda _{y,l}\) and considered \(L_{y}\), \(v_{y-1}\) as functions of \(\sigma \). The average (43) now becomes

$$\begin{aligned} \frac{1}{N} \sum \limits _{y=1}^{N} \int \limits _{0}^{t - u_1} \left( \mathbb {E}_{\sigma \sim \mu _{y,l}^{\eta _{s}}} L_{y}(\sigma ) \right) \left( \mathbb {E}_{\sigma \sim \mu _{y,l}^{\eta _{s}}} v_{y-1}(\sigma ) \right) ds. \end{aligned}$$

Since the distribution of \(\eta _{s}\) in the biased interchange process without the additional colors \(L_i\) is stationary, the distribution of the average \(\mathbb {E}_{\sigma \sim \mu _{y,l}^{\eta _{s}}} v_{y-1}(\sigma )\) does not depend on s. So we only need to show that \(\mathbb {E}_{\sigma \sim \mu _{y,l}^{\eta _{0}}} v_{y-1}(\sigma )\) is small, since \(L_{y}\) is bounded.

Recall that in stationarity \(\phi _{y}\) has uniform distribution, so for any y the expectation of \(v_{y-1}(\sigma ) = v(0, y-1, \phi _{y-1}(\sigma ))\) with respect to \(\mu _{y,l}^{\eta _{0}}\) is simply equal to

$$\begin{aligned} \frac{1}{2l+1} \sum \limits _{j=1}^{2l+1} v(0, y-1, \phi _{j}), \end{aligned}$$

where \(\phi _{j}\) are independent and uniformly distributed on \(\{1, \ldots , N\}\). As for each x the random variables \(v(0, x, \phi _{j})\) are independent, bounded and have mean 0 (see Proposition 4.1), an easy application of Hoeffding’s inequality gives that for fixed y the sum above goes to 0 in probability as \(l \rightarrow \infty \). This finishes the proof of Proposition 5.3.
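
The following toy numerical check illustrates this concentration step (our example; \(\cos (2\pi \phi /N)\) stands in for the bounded mean-zero variables \(v(0,x,\phi _j)\)): box averages concentrate as l grows, at the rate given by Hoeffding's inequality for \([-1,1]\)-valued variables.

```python
import numpy as np

# Toy check (our example; cos(2*pi*phi/N) stands in for the bounded
# mean-zero variables v(0, x, phi_j)): averages over a box of size 2l+1
# concentrate around 0, in line with Hoeffding's bound
# P(|mean| > delta) <= 2 * exp(-(2l+1) * delta**2 / 2).
rng = np.random.default_rng(1)
N, delta, trials = 1000, 0.1, 2000
for l in [10, 100, 1000]:
    phi = rng.integers(1, N + 1, size=(trials, 2 * l + 1))
    box_means = np.cos(2 * np.pi * phi / N).mean(axis=1)   # exactly mean zero
    empirical = (np.abs(box_means) > delta).mean()
    bound = min(2 * np.exp(-(2 * l + 1) * delta**2 / 2), 1.0)
    print(f"l = {l:4d}   empirical = {empirical:.4f}   Hoeffding = {bound:.4f}")
```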

We can now prove the law of large numbers.

Proof of Theorem 5.1

Consider the random particle process \({\bar{P}}^{N} = ({\bar{X}}^{N}, {\bar{\Phi }}^{N})\), obtained by first sampling \(\eta = \eta ^N\) and then following the trajectory \(P_{i}(\eta _t) = (X_i(\eta _t), \Phi _{i}(\eta _t))\) of a randomly chosen particle i. We will first show that the (deterministic) distribution \({\bar{\nu }}^{N}\) converges to \(\nu \), the distribution of P (in the metric \(d_{{\mathcal {W}}}^{sup}\)).

Let us start by proving that the estimate from Proposition 5.3 holds not only at each time t, but also with the supremum over all times \(t \le T\) under the sum over particles. Consider the process \((A^{N}, B^{N})\) defined as

$$\begin{aligned}&A_{t}^{N} = X_{i}(\eta _{t}) - X_{i}(\eta _{0}) - \int \limits _{0}^{t} v(s, x_{i}(\eta _{s}), \phi _{i}(\eta _{s})) \, ds, \\&B_{t}^{N} = \Phi _{i}(\eta _{t}) - \Phi _{i}(\eta _{0}) - \int \limits _{0}^{t} r(s, x_{i}(\eta _{s}), \phi _{i}(\eta _{s})) \, ds, \end{aligned}$$

where i is a random particle and \(\eta = \eta ^{N}\) comes from the biased interchange process. Proposition 5.3 implies that all finite-dimensional marginals of \((A^{N},B^{N})\) converge to 0. To obtain convergence to 0 for the whole process in the supremum norm we only need to check tightness in the Skorokhod topology (which will imply convergence in the supremum norm, since the limiting process is continuous). We will use the following stopping time criterion ([KL99, Proposition 4.1.6]). Let \(Y^{N}\) be a family of stochastic processes with sample paths in \(\widetilde{\mathcal {D}}\) such that for each time \(t \in [0,T]\) the marginal distribution of \(Y^{N}_{t}\) is tight. If for every \(\varepsilon > 0\) we have

$$\begin{aligned} \lim _{\gamma \rightarrow 0} \limsup _{N \rightarrow \infty } \, \sup _{\begin{array}{c} \tau \\ \theta \le \gamma \end{array}} \mathbb {P}\left( \left\| Y^{N}_{\tau + \theta } - Y^{N}_{\tau } \right\| > \varepsilon \right) = 0, \end{aligned}$$
(44)

where the supremum is over all stopping times \(\tau \) bounded by T, then the family \(Y^{N}\) is tight. Here \(\left\| \cdot \right\| \) denotes the Euclidean distance on \([0,1]^2\) and for simplicity we write \(\tau + \theta \) instead of \((\tau + \theta ) \wedge T\). Let \(\tau \) be any stopping time bounded by T. From formula (37) we have

$$\begin{aligned}{} & {} A_{\tau + \theta }^{N} - A_{\tau }^{N} = M^{i}_{\tau + \theta } - M^{i}_{\tau } \\{} & {} \quad - \frac{1}{2} \int \limits _{\tau }^{\tau + \theta } \left[ v(s, x_{i}(\eta _{s})-1, \phi _{x_{i}(\eta _{s}) - 1}(\eta _{s})) + v(s, x_{i}(\eta _{s})+1, \phi _{x_{i}(\eta _{s}) + 1}(\eta _{s})) \right] ds. \end{aligned}$$

Since \(v(\cdot , \cdot ,\cdot )\) is bounded, the integral is bounded by \(C \theta \) for some constant \(C > 0\), regardless of \(\tau \), so it goes to 0 as \(\theta \rightarrow 0\) (deterministically and for every i). Thus it only remains to bound the martingale term. As \(\tau \) is a stopping time, by formula (39) we have for each i

$$\begin{aligned} \mathbb {E}\left[ \left( M^{i}_{\tau + \theta }\right) ^2 - \left( M^{i}_{\tau }\right) ^2\right] = \mathbb {E}\int \limits _{\tau }^{\tau + \theta } Q_{s} \, ds. \end{aligned}$$

As in the calculation of \(\mathbb {E}(M^{i}_{t})^2\) following (39) we have that for fixed \(\theta \) the right hand side is o(1) as \(N \rightarrow \infty \). Since \(M^{i}_{t}\) is bounded, we obtain \( \mathbb {E}\left| M^{i}_{\tau + \theta } - M^{i}_{\tau }\right| \rightarrow 0\) as \(N \rightarrow \infty \), for any \(\theta \) and i (independently of \(\tau \)). The calculation for \(B^{N}\) is analogous.
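
Explicitly, the step used above follows from the Cauchy–Schwarz inequality and the orthogonality of martingale increments (optional stopping, which applies since \(M^{i}\) is bounded, makes the cross term vanish):

$$\begin{aligned} \mathbb {E}\left| M^{i}_{\tau + \theta } - M^{i}_{\tau }\right| \le \left( \mathbb {E}\left( M^{i}_{\tau + \theta } - M^{i}_{\tau }\right) ^2 \right) ^{1/2} = \left( \mathbb {E}\left[ \left( M^{i}_{\tau + \theta }\right) ^2 - \left( M^{i}_{\tau }\right) ^2\right] \right) ^{1/2} = o(1). \end{aligned}$$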

This shows that the family \((A^{N}, B^{N})\) satisfies the tightness criterion (44). In particular it converges to 0 in the supremum norm as \(N \rightarrow \infty \). Thus for any \(\varepsilon > 0\) we have

$$\begin{aligned}&{\widetilde{\mathbb {P}}}^{N} \left( \frac{1}{N} \sum \limits _{i=1}^{N} \sup _{0 \le t \le T} \left| X_{i}(\eta _{t}) - X_{i}(\eta _{0}) - \int \limits _{0}^{t} v(s, x_{i}(\eta _{s}), \phi _{i}(\eta _{s})) \, ds \right| > \varepsilon \right) \rightarrow 0, \end{aligned}$$
(45)
$$\begin{aligned}&{\widetilde{\mathbb {P}}}^{N} \left( \frac{1}{N} \sum \limits _{i=1}^{N} \sup _{0 \le t \le T} \left| \Phi _{i}(\eta _{t}) - \Phi _{i}(\eta _{0}) - \int \limits _{0}^{t} r(s,x_{i}(\eta _{s}), \phi _{i}(\eta _{s})) \, ds \right| > \varepsilon \right) \rightarrow 0, \end{aligned}$$
(46)

as \(N \rightarrow \infty \).

Now we can prove that \({\bar{\nu }}^N\) converges to \(\nu \). Recalling the definition of the Wasserstein distance \(d_{{\mathcal {W}}}^{sup}\), it is enough to construct for each N a coupling \(({\bar{P}}^{N}, P)\) such that

$$\begin{aligned} \mathbb {E}\left\| {\bar{P}}^{N} - P \right\| _{sup} \rightarrow 0 \end{aligned}$$

as \(N \rightarrow \infty \).

Let us couple these two processes in the following way: first we let \({\bar{P}}^{N} = \left( \left( {\bar{X}}^{N}_{t}, {\bar{\Phi }}^{N}_{t}\right) , 0 \le t \le T\right) \) be a path sampled according to \({\bar{\nu }}^{\eta ^N}\), starting at \(({\bar{X}}^{N}_{0}, {\bar{\Phi }}^{N}_{0})\) (whose distribution is uniform on \(\left\{ \frac{1}{N}, \ldots , 1\right\} \times \left\{ \frac{1}{N}, \ldots , 1\right\} \)). We then take \(P(t) = (X(t), \Phi (t))\) to be the solution of the ODE (29) started from an initial condition \((X(0), \Phi (0))\) chosen uniformly at random from \(\left[ {\bar{X}}^{N}_{0} - \frac{1}{N}, {\bar{X}}^{N}_{0} \right] \times \left[ {\bar{\Phi }}^{N}_{0} - \frac{1}{N}, {\bar{\Phi }}^{N}_{0} \right] \) (so the two processes start close to each other). Because the initial condition is distributed uniformly on \([0,1]^2\), the path \(P = \left( P(t), 0 \le t \le T \right) \) will be distributed according to \(\nu \).

Since \(P(t) = (X(t), \Phi (t))\) is the solution of (29), we have at each time \(t \le T\)

$$\begin{aligned}&X(t) - X(0) = \int \limits _{0}^{t} V(s,X(s), \Phi (s)) \, ds, \\&\Phi (t) - \Phi (0) = \int \limits _{0}^{t} R(s,X(s), \Phi (s)) \, ds. \end{aligned}$$

Bounds (45), (46) imply that for all times \(t \le T\) we have

$$\begin{aligned}&{\bar{X}}^{N}(t) - {\bar{X}}^{N}(0) = \int \limits _{0}^{t} v\left( s, N {\bar{X}}^{N}(s), N {\bar{\Phi }}^{N}(s) \right) \, ds + \varepsilon _{t}^{1}, \\&{\bar{\Phi }}^{N}(t) - {\bar{\Phi }}^{N}(0) = \int \limits _{0}^{t} r\left( s, N {\bar{X}}^{N}(s), N {\bar{\Phi }}^{N}(s) \right) \, ds + \varepsilon _{t}^{2}, \end{aligned}$$

with \(\varepsilon _{t}^{1}\), \(\varepsilon _{t}^{2}\) satisfying \(\sup \limits _{ 0 \le t \le T} |\varepsilon _{t}^{i}| \rightarrow 0\) in probability as \(N \rightarrow \infty \). Recalling from (33) that \(v(\cdot , \cdot , \cdot ), r(\cdot , \cdot , \cdot )\) are approximately equal to \(V(\cdot , \cdot , \cdot ), R(\cdot , \cdot , \cdot )\) after rescaling of the arguments, we obtain

$$\begin{aligned}&{\bar{X}}^{N}(t) - {\bar{X}}^{N}(0) = \int \limits _{0}^{t} V\left( s, {\bar{X}}^{N}(s), {\bar{\Phi }}^{N}(s) \right) ds + o(1), \\&{\bar{\Phi }}^{N}(t) - {\bar{\Phi }}^{N}(0) = \int \limits _{0}^{t} R\left( s, {\bar{X}}^{N}(s), {\bar{\Phi }}^{N}(s) \right) ds + o(1), \end{aligned}$$

with the o(1) terms going to 0 in probability (in the supremum norm over t) as \(N \rightarrow \infty \).

Thus \(({\bar{X}}^{N}, {\bar{\Phi }}^{N})\) approximately satisfies the same ODE as \((X, \Phi )\) and an application of Grönwall’s inequality gives that for any \(\varepsilon > 0\) with probability approaching 1 as \(N \rightarrow \infty \) we have

$$\begin{aligned} \left\| {\bar{P}}^{N} - P \right\| _{sup} \le C \max \{ |{\bar{X}}^{N}(0) - X(0)| + \varepsilon , |{\bar{\Phi }}^{N}(0) - \Phi (0)| + \varepsilon \} e^{KT} \end{aligned}$$

for some \(C>0\), where \(K>0\) depends only on the Lipschitz constants of V and R.
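
For concreteness, here is the Grönwall step behind this bound: writing \(D(t) = \max \left\{ \left| {\bar{X}}^{N}(t) - X(t)\right| , \left| {\bar{\Phi }}^{N}(t) - \Phi (t)\right| \right\} \) and using the Lipschitz continuity of V and R, the integral representations above give (with \(\varepsilon \) bounding the o(1) error terms)

$$\begin{aligned} D(t) \le D(0) + \varepsilon + K \int \limits _{0}^{t} D(s) \, ds, \qquad 0 \le t \le T, \end{aligned}$$

so Grönwall's inequality yields \(\sup \limits _{t \le T} D(t) \le \left( D(0) + \varepsilon \right) e^{KT}\).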

By definition of the processes \({\bar{P}}^{N}\) and P the initial conditions \({\bar{X}}^{N}(0)\), X(0) and \({\bar{\Phi }}^{N}(0)\), \(\Phi (0)\) differ by at most \(\frac{1}{N}\), which implies that \(\mathbb {E}\left\| {\bar{P}}^{N} - P \right\| _{sup} \rightarrow 0\) as \(N \rightarrow \infty \). Thus the distribution \({\bar{\nu }}^{N}\) of the random particle process \({\bar{P}}^{N}\) converges to \(\nu \) in the \(d_{{\mathcal {W}}}^{sup}\) metric as desired.

Now we can show that the random measures \(\nu ^{\eta ^{N}}\) converge in distribution to the deterministic measure \(\nu \). By the characterization of tightness for random measures (see, e.g., [Kal21, Theorem 23.15]) the family \(\nu ^{\eta ^{N}}\) will be tight, as a family of \(\mathcal {M}(\widetilde{\mathcal {D}})\)-valued random variables, if for any \(\varepsilon > 0\) there exists a compact set \(K \subseteq \widetilde{\mathcal {D}}\) such that \(\limsup \limits _{N \rightarrow \infty } \mathbb {E}\left( \nu ^{\eta ^{N}}(K) \right) \ge 1 - \varepsilon \), or, more simply put, \(\limsup \limits _{N \rightarrow \infty } {\widetilde{\mathbb {P}}}^{N} \left( P^{\eta ^{N}} \in K \right) \ge 1 - \varepsilon \). Exactly the same calculation as for the processes \((A^N, B^N)\) above shows that the processes \(P^{\eta ^{N}}\) satisfy the tightness criterion (44), which guarantees the existence of the desired compact sets K and in turn the tightness of \(\nu ^{\eta ^{N}}\).

Now to finish the proof we only need to show uniqueness of subsequential limits for the family \(\nu ^{\eta ^{N}}\). Since any such (possibly random) limit must have the associated random particle process distributed according to \(\nu \), it is enough to show that the limit is deterministic.

Consider an outcome of \(\nu ^{\eta ^{N}}\), which is a measure from \(\mathcal {M}(\widetilde{\mathcal {D}})\), and sample independently two paths \(P_{1}^{N}, P_{2}^{N}\) from it. This corresponds to sampling \(\eta ^{N}\) according to the biased interchange process, then choosing uniformly at random a pair of particles i, j (possibly with \(i=j\), but this event has vanishing probability) and following their trajectories in \(\eta ^{N}\). By the already established convergence \({\bar{\nu }}^{N} \rightarrow \nu \) in \(\mathcal {M}(\widetilde{\mathcal {D}})\), each of the paths \(P_{1}^{N}\) and \(P_{2}^{N}\) separately has distribution converging to \(\nu \). Moreover, due to stationarity of \(\eta ^{N}\) the initial colors \(\phi _i(\eta ^{N}_{0})\), \(\phi _j(\eta ^{N}_{0})\) of any two particles i, j are uniformly distributed, in particular they are independent for \(i \ne j\). Since a path P sampled from \(\nu \) is uniquely determined by its initial conditions, the joint distribution of \((P_{1}^{N},P_{2}^{N})\) converges to the distribution of two independent paths sampled from \(\nu \). Since we already have tightness, applying Lemma 2.1 gives that any limit of a subsequence has to be deterministic, which finishes the proof. \(\square \)

6 One Block Estimate

In this section we prove the one block estimate of Lemma 5.4, needed for the proof of Theorem 5.1. Since another, simpler variant of this estimate will also be needed for the proof of the large deviation upper bound (Lemma 8.2), we prove the result in generality suited for both of these applications.

Let us fix a continuous function \(w : [0,1]^2 \rightarrow \mathbb {R}\) and let \(I^N_{w} = \left\{ w\left( \frac{i}{N}, \frac{j}{N} \right) \right\} _{i,j=1}^{N}\). Let \(I^N = \left\{ \frac{1}{N}, \ldots , 1 \right\} \). Consider the interchange process on an extended configuration space \(E'\) in which each particle in addition to its label i has two colors \((a_i, \phi _i)\), with \(a_i \in I^N_{w}\), \(\phi _i \in I^N\). The dynamics is given by the usual generator \(\mathcal {L}\) – adjacent particles swap at rate \(\frac{1}{2}N^{\alpha }\) and the colors \(a_i, \phi _i\) of particle i do not evolve in time. Since the one block estimate concerns only the distribution of colors, from now on we ignore the labels of the particles altogether. As before we use the notation \(a_x = a_x (\eta ), \phi _x = \phi _x (\eta )\) to denote the colors of the particle at site x in configuration \(\eta \). The configuration at time s is denoted by \(\eta _s\).

Consider a continuous function \(g : [0,1] \rightarrow [-1,1]\) and for \(\eta \in E'\) let \(h_x(\eta ) = a_x(\eta ) b_{x-1}(\eta )\), where \(b_x(\eta ) = g(\phi _{x}(\eta ))\) or \(b_x (\eta ) = a_x (\eta )\). As in the previous section let \(\Lambda _{x, l} = \{x-l, x-l + 1, \ldots , x+l\}\) denote the box of size l around x (with an appropriate truncation if \(x - l < 1\) or \(x + l > N\), which we suppress in the notation from now on) and let \({\widehat{\mu }}_{x, l}^{\eta }\) be the empirical distribution of colors in \(\Lambda _{x,l}\) in configuration \(\eta \), given for any \((\alpha , \varphi ) \in I^N_{w} \times I^N\) by

$$\begin{aligned} {\widehat{\mu }}_{x, l}^{\eta } \left( \alpha , \varphi \right) = \frac{1}{|\Lambda _{x,l}|} \# \{ z \in \Lambda _{x,l} \, | \, \left( a_z(\eta ), \phi _z(\eta ) \right) = (\alpha , \varphi )\}. \end{aligned}$$

Consider the associated i.i.d. distribution on configurations restricted to \(\Lambda _{x,l}\), given for \((\alpha _y, \varphi _y)_{y=x-l}^{x+l} \in \left( I^N_{w} \times I^N \right) ^{2l+1}\) by

$$\begin{aligned} \mu _{x, l}^{\eta } \left( (\alpha _y, \varphi _y)_{y=x-l}^{x+l} \right) = \prod \limits _{y=x-l}^{x+l} {\widehat{\mu }}_{x, l}^{\eta } \left( \alpha _y, \varphi _y \right) . \end{aligned}$$

Since \(h_x\) depends on \(\eta \) only through the colors at x and \(x-1\), we will slightly abuse notation by writing \(\mathbb {E}_{\mu _{x,l}^{\eta }} (h_x)\) for the expectation of \(h_x\) with respect to \(\mu _{x,l}^{\eta }\).

Let \(\psi : [0,1] \rightarrow \mathbb {R}\) be a continuous function and let

$$\begin{aligned} U^{N}_{x,l}(\eta ) = \psi (x) \left( h_{x}(\eta ) - \mathbb {E}_{\mu _{x,l}^{\eta }} (h_x) \right) . \end{aligned}$$

We define

$$\begin{aligned} U^{N}_{l}(\eta ) = \frac{1}{N} \sum \limits _{x=1}^{N} |U^{N}_{x,l}(\eta )|. \end{aligned}$$

Let \(\mu \) denote the uniform distribution on \(E'\). Note that the dynamics given by \(\mathcal {L}\) is reversible with respect to \(\mu \) and the associated Dirichlet form is given by

$$\begin{aligned} D^{N}(f) = \frac{1}{4} N^{\alpha } \int \sum \limits _{x=1}^{N-1} \left( \sqrt{f(\eta ^{x, x+1})} - \sqrt{f(\eta )}\right) ^2 \, d\mu (\eta ) \end{aligned}$$

for any \(f : E' \rightarrow [0, \infty )\).

Lemma 6.1

With \(\mu \) denoting the uniform distribution on \(E'\), we have for any \(C_0 > 0\)

$$\begin{aligned} \limsup \limits _{l \rightarrow \infty } \limsup \limits _{N \rightarrow \infty } \sup \limits _{\begin{array}{c} f \\ D^{N}(f) \le C_0 N^\gamma \end{array}} \int U^{N}_{l}(\eta )f(\eta ) \, d\mu (\eta ) = 0, \end{aligned}$$

where \(\gamma = 3-\alpha \) and the supremum is over all densities f with respect to \(\mu \) such that \(D^{N}(f) \le C_0 N^\gamma \).

Proof

Let us decompose \(a_x = a_x(\eta )\) and \(b_x = b_x(\eta )\) into their positive and negative parts, \(a_x = a_x^{+} - a_x^{-}\), \(b_x = b_x^{+} - b_x^{-}\). Since

$$\begin{aligned} h_x = a_x b_{x-1} = a_x^{+} b_{x-1}^{+} - a_x^{+} b_{x-1}^{-} - a_x^{-} b_{x-1}^{+} + a_x^{-} b_{x-1}^{-}, \end{aligned}$$

by the triangle inequality it is enough to prove the lemma with \(h_x\) replaced by one of the terms in the sum above, say, \(a_x^{+} b_{x-1}^{+}\). Let \(K = \max \{ 1, \left\| w \right\| _{\infty } \}\) and let us write

$$\begin{aligned}&a_x^{+}(\eta ) = \int \limits _{0}^{K} \mathbb {1}_{\{ a_x(\eta )> \lambda \}} \, d\lambda , \\&b_x^{+}(\eta ) = \int \limits _{0}^{K} \mathbb {1}_{\{ b_x(\eta ) > \theta \}} \, d\theta . \end{aligned}$$
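
Both displays are instances of the layer-cake identity: for any \(a \in [-K, K]\),

$$\begin{aligned} \int \limits _{0}^{K} \mathbb {1}_{\{ a> \lambda \}} \, d\lambda = \max \{a, 0\} = a^{+}, \end{aligned}$$

since the integrand equals 1 precisely for \(\lambda \in [0, a)\) when \(a > 0\) and vanishes identically otherwise.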

We have

$$\begin{aligned}&\frac{1}{N} \sum \limits _{x=1}^{N}\left| \psi (x) \left( a_x^{+}(\eta ) b_{x-1}^{+}(\eta ) - \mathbb {E}_{\mu _{x,l}^{\eta }} \left[ a_x^{+}(\eta ) b_{x-1}^{+}(\eta ) \right] \right) \right| \\&\quad \le \int \limits _{0}^{K} \int \limits _{0}^{K} \frac{1}{N} \sum \limits _{x=1}^{N} \left| \psi (x) \left( \mathbb {1}_{\{ a_x(\eta )> \lambda \}} \mathbb {1}_{\{ b_{x-1}(\eta )> \theta \}} - \mathbb {E}_{\mu _{x,l}^{\eta }} \left[ \mathbb {1}_{\{ a_x(\eta )> \lambda \}} \mathbb {1}_{\{ b_{x-1}(\eta ) > \theta \}} \right] \right) \right| d\lambda \, d\theta , \end{aligned}$$

where the inequality comes from pulling the integrals over \(\lambda \) and \(\theta \) outside the absolute value. Let us denote the expression under the integrals on the right hand side by \(U^{N}_{l, \lambda , \theta }\). Since it is nonnegative and bounded, we can write

$$\begin{aligned} \sup \limits _{\begin{array}{c} f \end{array}} \int \left( \int \limits _{0}^{K} \int \limits _{0}^{K} U^{N}_{l, \lambda , \theta }(\eta ) \, d\lambda \, d\theta \right) f(\eta ) \, d\mu (\eta ) \le \int \limits _{0}^{K} \int \limits _{0}^{K} \left( \sup \limits _{\begin{array}{c} f \\ \end{array}} \int U^{N}_{l, \lambda , \theta }(\eta ) f(\eta ) \, d\mu (\eta ) \right) \, d\lambda \, d\theta , \end{aligned}$$

where the supremum is over all densities f satisfying \(D^{N}(f) \le C_0 N^\gamma \). By the same token, when taking the \(\limsup \) first over N and then over l, we can bound the resulting limit from above by one with the integral over \(\lambda \) and \(\theta \) outside the \(\limsup \). Thus we see that it is enough to prove for fixed \(\lambda , \theta \in [0, K]\)

$$\begin{aligned} \limsup \limits _{l \rightarrow \infty } \limsup \limits _{N \rightarrow \infty } \sup \limits _{\begin{array}{c} f \\ D^{N}(f) \le C_0 N^\gamma \end{array}} \int U^{N}_{l,\lambda , \theta }(\eta )f(\eta ) \, d\mu (\eta ) = 0. \end{aligned}$$

Since

$$\begin{aligned} U^{N}_{l,\lambda , \theta }(\eta ) = \frac{1}{N} \sum \limits _{x=1}^{N} \left| \psi (x) \left( \mathbb {1}_{\{ a_x(\eta )> \lambda \}} \mathbb {1}_{\{ b_{x-1}(\eta )> \theta \}} - \mathbb {E}_{\mu _{x,l}^{\eta }} \left[ \mathbb {1}_{\{ a_x(\eta )> \lambda \}} \mathbb {1}_{\{ b_{x-1}(\eta ) > \theta \}} \right] \right) \right| , \end{aligned}$$

we have reduced the problem to proving the one block estimate for the interchange process in which each particle has only four possible colors, corresponding to the possible values of the pair \((\mathbb {1}_{\{ a_i(\eta )> \lambda \}}, \mathbb {1}_{\{ b_i(\eta ) > \theta \}})\). This in turn follows by essentially the same argument as for the simple exclusion process, which can be thought of as an interchange process with just two colors (see e.g., [KL99, Lemma 5.3.1]). Since the argument is by now standard and used in several places in the literature (see e.g., [FT04] for the case of three possible colors), let us only explain that the bound on the Dirichlet form under the supremum is of the right order. The argument for the simple exclusion process goes through (see the remark following the proof of [KL99, Lemma 5.4.2]) if we assume that the Dirichlet form corresponding to the generator without time scaling is o(N) and the process is speeded up by \(N^2\). In our case the generator \(\mathcal {L}\) has a scaling factor of \(N^{\alpha }\), so if \(N^{-\alpha } D^{N}(f)\) is the Dirichlet form corresponding to the process without time scaling, then our bound on this Dirichlet form is \(\le C_0 N^{\gamma - \alpha } = C_{0} N^{3 - 2\alpha }\). Since \(\alpha \in (1,2)\), this is o(N), which agrees with the assumptions for the simple exclusion process. \(\square \)

Lemma 6.2

Let \(\mathbb {P}^N\) denote the law of the interchange process on \(E'\) with an arbitrary initial distribution. With the notation as above we have for any \(t \ge 0\) and \(\delta > 0\)

$$\begin{aligned} \limsup \limits _{l \rightarrow \infty } \limsup \limits _{N \rightarrow \infty } N^{-\gamma } \log \mathbb {P}^{N} \left( \int \limits _{0}^{t} U^{N}_{l}(\eta _s) \, ds > \delta \right) = - \infty . \end{aligned}$$

Proof

Let \(\mu _{0}\) be an arbitrary initial distribution. Let \(\mathbb {P}_{0, \mu }^{N}\), resp. \(\mathbb {P}_{0, \mu _{0}}^{N}\), denote the distribution of the process started from \(\mu \), resp. \(\mu _{0}\), and let \(\mathbb {E}_{\mu }\), resp. \(\mathbb {E}_{\mu _{0}}\), denote the corresponding expectation.

By Chebyshev’s inequality we have for any \(c > 0\)

$$\begin{aligned} \mathbb {P}_{0, \mu _{0}}^{N} \left( \int \limits _{0}^{t} U^{N}_{l}(\eta _s) \, ds > \delta \right) \le e^{-cN^{\gamma }} \mathbb {E}_{\mu _{0}} \exp \left\{ c N^{\gamma } \int \limits _{0}^{t} U^{N}_{l}(\eta _s) \, ds\right\} . \end{aligned}$$
(47)

We also have

$$\begin{aligned} \mathbb {E}_{\mu _{0}} \exp \left\{ c N^{\gamma } \int \limits _{0}^{t} U^{N}_{l}(\eta _s) \, ds\right\}&= \mathbb {E}_{\mu } \left[ \frac{\textrm{d}\mathbb {P}_{0,\mu _{0}}^{N}}{\textrm{d}\mathbb {P}_{0,\mu }^{N}} (t) \exp \left\{ c N^{\gamma } \int \limits _{0}^{t} U^{N}_{l}(\eta _s) \, ds\right\} \right] \\&\le \left\| \frac{\textrm{d}\mathbb {P}_{0,\mu _{0}}^{N}}{\textrm{d}\mathbb {P}_{0,\mu }^{N}} \right\| _{\infty } \mathbb {E}_{\mu } \exp \left\{ c N^{\gamma } \int \limits _{0}^{t} U^{N}_{l}(\eta _s) \, ds\right\} . \end{aligned}$$

Let \(M = |I^{N}_{w}|\). Since \(M \le N^2\) and under \(\mu \) each initial configuration has probability \((MN)^{-N} = e^{-o(N^{\gamma })}\) (note that \(N \log (MN) \le 3 N \log N = o(N^{\gamma })\), as \(\gamma > 1\)), the supremum norm of the Radon–Nikodym derivative above is \(e^{o(N^\gamma )}\). Hence, in view of (47), it is in fact enough to show that for any \(c > 0\)

$$\begin{aligned} \limsup \limits _{l \rightarrow \infty } \limsup \limits _{N \rightarrow \infty } N^{-\gamma } \log \mathbb {E}_{\mu } \exp \left\{ c N^{\gamma } \int \limits _{0}^{t} U^{N}_{l}(\eta _s) \, ds\right\} \le 0 \end{aligned}$$
(48)

and then take \(c \rightarrow \infty \).

An application of the Feynman–Kac formula to the semigroup generated by \(\mathcal {L}\) shows (see e.g., [KL99, Theorem 10.3.1 and Section A1.7]) that to obtain (48) it is sufficient to prove for any \(c > 0\)

$$\begin{aligned} \limsup \limits _{l \rightarrow \infty } \limsup \limits _{N \rightarrow \infty } \sup \limits _{f} \left\{ \int c U^{N}_{l}(\eta )f(\eta ) \, d\mu (\eta ) - N^{-\gamma } D^{N}(f) \right\} \le 0, \end{aligned}$$

where the supremum is taken over all densities with respect to \(\mu \). Since \(U^{N}_{l}\) is bounded by a constant \(C > 0\) depending only on \(\psi \) and g, the expression under the supremum becomes negative if \(D^{N}(f) > c C N^{\gamma }\). Thus it is enough to show that for any constant \(C_0 > 0\) we have

$$\begin{aligned} \limsup \limits _{l \rightarrow \infty } \limsup \limits _{N \rightarrow \infty } \sup \limits _{\begin{array}{c} f \\ D^{N}(f) \le C_0 N^\gamma \end{array}} \int U^{N}_{l}(\eta )f(\eta ) \, d\mu (\eta ) \le 0, \end{aligned}$$

which is exactly the statement of Lemma 6.1. \(\square \)

This estimate will be enough for application in the proof of Lemma 8.2. As for the proof of Lemma 5.4, we will first show that the one block estimate holds for the unbiased process with color evolution, but with all rates equal to 1, i.e., the process with state space \(E'\) and the generator

$$\begin{aligned} (\mathcal {L}_{0} f)(\eta ) =&\frac{1}{2} N^{\alpha } \sum \limits _{x=1}^{N-1} (f(\eta ^{x, x+1}) - f(\eta )) \\&+ \frac{1}{2} N^{\alpha } \sum \limits _{x=1}^{N} \left[ (f(\eta ^{x,+}) - f(\eta )) + (f(\eta ^{x,-}) - f(\eta )) \right] . \end{aligned}$$

Here as usual \(\eta ^{x, \pm }\) denotes the configuration obtained from \(\eta \) by changing the color \(\phi _x\) of the particle at site x to \(\phi _x \pm 1\) (note that the colors \(a_i\) do not evolve in time here). We will then transfer the result to the biased process by estimating its Radon–Nikodym derivative.

Lemma 6.3

Let \(\mathbb {P}_{0}^{N}\) be the law of the unbiased process with rates 1 described above (with an arbitrary initial distribution). With the notation from Lemma 6.2, we have for any \(t \ge 0\) and \(\delta > 0\)

$$\begin{aligned} \limsup \limits _{l \rightarrow \infty } \limsup \limits _{N \rightarrow \infty } N^{-\gamma } \log \mathbb {P}_{0}^{N} \left( \int \limits _{0}^{t} U^{N}_{l}(\eta _s) \, ds > \delta \right) = - \infty . \end{aligned}$$

Proof

Let us write \(\mathcal {L}_{0} = \mathcal {L}+ \mathcal {L}_{c}\), where \(\mathcal {L}\) is the first term in the definition of \(\mathcal {L}_{0}\) and \(\mathcal {L}_{c}\) is the second term. The dynamics induced by \(\mathcal {L}\) and by \(\mathcal {L}_{c}\) is reversible with respect to \(\mu \), so the Dirichlet forms associated respectively to \(\mathcal {L}_{c}\) and \(\mathcal {L}_{0}\) can be written as

$$\begin{aligned}&D^{N}_{c}(f) = \frac{1}{4} N^{\alpha } \int \sum \limits _{x=1}^{N} \left[ \left( \sqrt{f(\eta ^{x,+})} - \sqrt{f(\eta )} \right) ^2 + \left( \sqrt{f(\eta ^{x,-})} - \sqrt{f(\eta )} \right) ^2 \right] \, d\mu (\eta ), \\&D^{N}_{0}(f) = D^{N}(f) + D^{N}_{c}(f). \end{aligned}$$

By repeating the argument from the proof of Lemma 6.2 with the generator \(\mathcal {L}_0\) instead of \(\mathcal {L}\) we obtain that it is enough to prove that for any \(c > 0\)

$$\begin{aligned} \limsup \limits _{l \rightarrow \infty } \limsup \limits _{N \rightarrow \infty } \sup \limits _{f} \left\{ \int c U^{N}_{l}(\eta )f(\eta ) \, d\mu (\eta ) - N^{-\gamma } D^{N}_{0}(f) \right\} \le 0, \end{aligned}$$

where the supremum is taken over all densities with respect to \(\mu \).

Now observe that since \(D^{N}_{c}(f) \ge 0\) for any nonnegative f, it is in fact enough to prove the statement above with \(D^{N}_{0}(f)\) replaced by \(D^{N}(f)\). Thus we have eliminated color evolution and the conclusion follows as in the proof of Lemma 6.2. \(\square \)

We can now prove the superexponential estimate for the biased process.

Proof of Lemma 5.4

Recall that \(f_x(\eta ) = L_{x}(\eta ) v_{x-1}(\eta )\). Since we can uniformly approximate \(v(0,x,\phi )\) by finite sums of terms of product form in x and \(\phi \), by the triangle inequality we can without loss of generality assume that \(v_{x}(\eta ) = \psi (x)g(\phi _x)\) for some continuous functions \(\psi : [0,1] \rightarrow \mathbb {R}, g : [0,1] \rightarrow [-1,1]\). Applying Lemma 6.3 with \(w(x, \phi )= v(0,x,\phi )\), \(a_i = L_i\) and \(h_x = L_x g(\phi _{x-1})\) provides us with the superexponential estimate for the process \(\mathbb {P}_{0}^{N}\). To transfer the estimate to the biased process \({\widetilde{\mathbb {P}}}^{N}\) we need to estimate the Radon–Nikodym derivative between the two processes.

If \(\mathbb {P}\) is a Markov process with jump rates \(\lambda (x)p(x,y)\) and \({\widetilde{\mathbb {P}}}\) is another process on the same state space with rates \({\widetilde{\lambda }}(x) {\widetilde{p}}(x,y)\), the Radon–Nikodym derivative up to time t is given by (see, e.g., [KL99, Proposition A1.2.6])

$$\begin{aligned} \frac{\textrm{d}{\widetilde{\mathbb {P}}}}{\textrm{d}\mathbb {P}} (t) = \exp \left\{ - \int \limits _{0}^{t} \left( {\widetilde{\lambda }}(X_{s}) - \lambda (X_{s}) \right) ds + \sum \limits _{s \le t}\log \frac{{\widetilde{\lambda }}(X_{s-}){\widetilde{p}}(X_{s-},X_{s})}{\lambda (X_{s-})p(X_{s-},X_{s})}\right\} , \end{aligned}$$
(49)

where the sum is over jump times \(s \le t\).

Let us look at \(\frac{\textrm{d}{\widetilde{\mathbb {P}}}^{N}}{\textrm{d}\mathbb {P}_{0}^{N}}\). By the form (28) of the generator of \({\widetilde{\mathbb {P}}}^{N}\) the sum of outgoing rates for any \(\eta \) is equal to

$$\begin{aligned} \frac{1}{2}N^{\alpha } \left( \sum \limits _{x=1}^{N-1} \left[ 1 + \varepsilon \left( v_{x}(\eta ) - v_{x+1}(\eta ) \right) \right] + \sum \limits _{x=1}^{N} \left[ 1 + \varepsilon r_{x}(\eta ) \right] + \sum \limits _{x=1}^{N} \left[ 1 - \varepsilon r_{x}(\eta ) \right] \right) . \end{aligned}$$

Since the sum of \(\varepsilon \left( v_{x} - v_{x+1} \right) \) telescopes and the velocities \(v_x\) vanish at the boundaries \(x=1, N\), while the rates \(r_x\) for the \(\pm 1\) color changes cancel in pairs, the total jump intensities \({\widetilde{\lambda }}\) and \(\lambda \) are equal. The Radon–Nikodym derivative takes the form

$$\begin{aligned} \frac{\textrm{d}{\widetilde{\mathbb {P}}}^{N}}{\textrm{d}\mathbb {P}_{0}^{N}}(t) =&\exp \Big \{ \sum \limits _{s \le t} \log \left( 1 + \varepsilon \left[ v(x_{j_{s}}, \phi _{x_{j_{s}}}(\eta _{s})) - v(x_{j_{s}}+1, \phi _{x_{j_{s}}+1}(\eta _{s}))\right] \right) \nonumber \\&+ \sum \limits _{s_{+} \le t} \log \left( 1 + \varepsilon \left[ r(x_{j_{s_{+}}}, \phi _{x_{j_{s_{+}}}}(\eta _{s_{+}}))\right] \right) \nonumber \\&+ \sum \limits _{s_{-} \le t} \log \left( 1 - \varepsilon \left[ r(x_{j_{s_{-}}}, \phi _{x_{j_{s_{-}}}}(\eta _{s_{-}}))\right] \right) \Big \}, \end{aligned}$$
(50)

where \(j_{s}\) is the label of the particle which makes a swap at time s and \(j_{s_{\pm }}\) is the label of the particle that changes its color by \(\pm 1\) at time \(s_{\pm }\).

To simplify this formula we will use the fact that empirical currents across edges can be approximated by their averages, modulo a small martingale. More precisely, let us denote for simplicity

$$\begin{aligned} (\nabla _{x} v)(\eta ) = v_x (\eta ) - v_{x+1} (\eta ). \end{aligned}$$

We will sometimes use this notation with \(x=N\), in which case we assume \((\nabla _{x} v)(\eta ) = 0\). For brevity of notation whenever sums involving both \(r_x\) and \(-r_x\) appear, we will write them as one term with a ± sign, that is, with \(\sum \limits _{x}(1 \pm \varepsilon r_x)\) serving as a shorthand for \(\sum \limits _{x}(1 + \varepsilon r_x) + \sum \limits _{x}(1 - \varepsilon r_x)\) and so on.

We introduce the following extension of the dynamics under \(\mathbb {P}_{0}^{N}\) – for any functions \(h(x,\eta )\), \(h^{\pm }(x,\eta )\), \(x \in \{1, \ldots , N\}\), consider the extended state space consisting of pairs \((\eta , J)\) with \(\eta \in E'\), \(J \in {\mathbb {R}}\), and the generator \(\mathcal {L}'\) acting by

$$\begin{aligned} (\mathcal {L}' f)(\eta , J) =&\frac{1}{2} N^{\alpha } \Big [ \sum \limits _{x=1}^{N-1} \left( f(\eta ^{x,x+1}, J + h(x, \eta )) - f(\eta , J) \right) \\&+ \sum \limits _{x=1}^{N} \left( f(\eta ^{x,+}, J + h^{+}(x, \eta )) - f(\eta , J) \right) \\&+ \sum \limits _{x=1}^{N} \left( f(\eta ^{x,-}, J + h^{-}(x, \eta )) - f(\eta , J) \right) \Big ]. \end{aligned}$$

In other words, in the evolution of the extended configuration \((\eta _t, J_t)\) each time the process makes a jump, \(J_t\) is increased by \(h(x, \eta _t)\), \(h^{+}(x, \eta _t)\) or \(h^{-}(x, \eta _t)\), depending on the type of the jump (swap or color change). Now if we take

$$\begin{aligned}&h(x,\eta ) = \log \left[ 1 + \varepsilon (\nabla _{x} v)(\eta )\right] , \\&h^{\pm }(x,\eta ) = \log \left[ 1 \pm \varepsilon r_{x}(\eta )\right] , \end{aligned}$$

we see that \(J_t\) is simply equal to the sum over jumps appearing in the exponent in (50). Thus to bound the Radon–Nikodym derivative we only need to bound \(J_t\).
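
The bookkeeping above is easy to simulate. The following is a minimal sketch (our own illustration; N, \(\alpha \), T and the placeholder functions v and r are hypothetical choices, and the boundary convention for the colors is ignored): we run the unit-rate colored dynamics of \(\mathcal {L}_0\) and accumulate \(J_t\), adding h, \(h^{+}\) or \(h^{-}\) evaluated in the pre-jump configuration at each jump.

```python
import numpy as np

# Sketch: simulate the dynamics of L_0 and accumulate the additive
# functional J_t of the extended generator L'. All concrete choices
# (N, alpha, T, v, r) are placeholders for illustration only.
rng = np.random.default_rng(2)
N, alpha, T = 50, 1.5, 0.01
eps = N ** (1 - alpha)
v = lambda x, p: 0.5 * np.sin(2 * np.pi * p / N)   # placeholder velocity field
r = lambda x, p: 0.5 * np.cos(2 * np.pi * x / N)   # placeholder color-change rate

phi = rng.integers(1, N + 1, size=N + 1)   # phi[x] = color at site x, x = 1..N
t, J = 0.0, 0.0
# (N-1) swap events and 2N color-change events, each at rate N^alpha / 2:
total_rate = 0.5 * N**alpha * (3 * N - 1)
while True:
    t += rng.exponential(1.0 / total_rate)
    if t > T:
        break
    u = int(rng.integers(0, 3 * N - 1))    # all events have equal rate
    if u < N - 1:                          # swap across the edge (x, x+1)
        x = u + 1
        J += np.log1p(eps * (v(x, phi[x]) - v(x + 1, phi[x + 1])))  # h(x, eta)
        phi[x], phi[x + 1] = phi[x + 1], phi[x]
    else:                                  # color change phi_x -> phi_x +/- 1
        x = (u - (N - 1)) % N + 1
        sign = 1 if u < 2 * N - 1 else -1
        J += np.log1p(sign * eps * r(x, phi[x]))                    # h^{+/-}
        phi[x] += sign                     # boundary convention ignored here
print(f"J_T = {J:.6f}")
```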

This is done by use of an exponential martingale – for any \(\lambda > 0\) the following process

$$\begin{aligned} Z_{t} = \exp \left\{ \lambda J_{t}- \int \limits _{0}^{t} e^{-\lambda J_{s}} \mathcal {L}' e^{\lambda J_{s}} \, ds \right\} \end{aligned}$$

is a local martingale with respect to \(\mathbb {P}_{0}^{N}\). We will actually only need to consider \(\lambda = 2\). Writing out the action of \(\mathcal {L}'\) on the function \(g(\eta , J) = e^{2 J}\) we obtain

$$\begin{aligned} Z_{t} = \exp \left\{ 2 J_{t}- \frac{1}{2} N^{\alpha } \int \limits _{0}^{t} \sum \limits _{x=1}^{N} \left[ \left( e^{2 \log (1 + \varepsilon (\nabla _{x} v)(\eta _{s}))} - 1\right) + \left( e^{2 \log (1 \pm \varepsilon r_x (\eta _{s}))} - 1 \right) \right] ds \right\}. \end{aligned}$$

Now we have

$$\begin{aligned}&e^{2 \log (1 + \varepsilon (\nabla _{x} v)(\eta _{s}))} - 1 = (1 + \varepsilon (\nabla _{x} v)(\eta _{s}))^2 - 1 = 2 \varepsilon (\nabla _{x} v)(\eta _{s}) + \varepsilon ^2 \left[ (\nabla _{x} v)(\eta _{s})\right] ^2, \\&e^{2 \log (1 \pm \varepsilon r_{x}(\eta _{s}))} - 1 = (1 \pm \varepsilon r_{x}(\eta _{s}))^2 - 1 = \pm 2 \varepsilon r_{x}(\eta _{s}) + \varepsilon ^2 r_{x}(\eta _{s})^2. \end{aligned}$$

The sum of terms linear in \(\varepsilon \) vanishes – the rates r for \(\pm 1\) color change have opposite sign and the sum involving \(\nabla _{x}v\) telescopes. Recalling that \(\varepsilon = N^{1-\alpha }\) and \(\gamma = 3 - \alpha \), so \(N^{\alpha +1}\varepsilon ^2 = N^\gamma \), we can then write

$$\begin{aligned} Z_{t} = \exp \left\{ 2 J_{t} - \frac{1}{2} N^{\gamma } \int \limits _{0}^{t} \frac{1}{N}\sum \limits _{x=1}^{N} \left[ (\nabla _{x} v)(\eta _{s})^2 + r_x(\eta _{s})^2\right] \, ds \right\} . \end{aligned}$$

Since the rates v and r are bounded, we have \(Z_t = e^{ 2 J_t - N^{\gamma } X_t}\), where \(|X_t| \le C\) for some constant \(C > 0\) depending only on v, r and T. In particular we get

$$\begin{aligned} \mathbb {E}e^{2 J_t} = \mathbb {E}\left( e^{2 J_t - N^{\gamma } X_t} e^{N^{\gamma }X_t} \right) = \mathbb {E}\left( Z_t e^{N^{\gamma }X_t} \right) \le e^{C N^{\gamma }} \mathbb {E}Z_t. \end{aligned}$$

Since \(Z_t\) is a local martingale bounded from below, it is a supermartingale, so we have \(\mathbb {E}Z_t \le \mathbb {E}Z_0 = 1\) and thus

$$\begin{aligned} \mathbb {E}e^{2 J_t} \le e^{C N^\gamma }. \end{aligned}$$
(51)

Now we can transfer the superexponential bound of Lemma 6.3 from \(\mathbb {P}_{0}^{N}\) to \({\widetilde{\mathbb {P}}}^{N}\). Let \(\mathcal {O}_{N,l}\) be the event from the statement of the lemma and let us write simply \(\frac{\textrm{d}{\widetilde{\mathbb {P}}}^{N}}{\textrm{d}\mathbb {P}_{0}^{N}} = \frac{\textrm{d}{\widetilde{\mathbb {P}}}^{N}}{\textrm{d}\mathbb {P}_{0}^{N}}(T)\). Denoting by \({\widetilde{\mathbb {E}}}\) the expectation with respect to \({\widetilde{\mathbb {P}}}^{N}\) we have

$$\begin{aligned} {\widetilde{\mathbb {P}}}^{N}\left( \mathcal {O}_{N,l} \right) = {\widetilde{\mathbb {E}}} \left( \mathbb {1}_{\mathcal {O}_{N,l}} \right) = \mathbb {E}\left( \frac{\textrm{d}{\widetilde{\mathbb {P}}}^{N}}{\textrm{d}\mathbb {P}_{0}^{N}} \mathbb {1}_{\mathcal {O}_{N,l}} \right) . \end{aligned}$$

Applying the Cauchy–Schwarz inequality gives

$$\begin{aligned} {\widetilde{\mathbb {P}}}^{N}\left( \mathcal {O}_{N,l} \right) \le \left[ \mathbb {E}\left( \frac{\textrm{d}{\widetilde{\mathbb {P}}}^{N}}{\textrm{d}\mathbb {P}_{0}^{N}}\right) ^2 \right] ^{1/2} \cdot \mathbb {P}_{0}^{N}\left( \mathcal {O}_{N,l}\right) ^{1/2}. \end{aligned}$$

Recalling that \(\frac{\textrm{d}{\widetilde{\mathbb {P}}}^{N}}{\textrm{d}\mathbb {P}_{0}^{N}} = e^{J_T}\) and applying the bound (51) we obtain

$$\begin{aligned} {\widetilde{\mathbb {P}}}^{N}\left( \mathcal {O}_{N,l} \right) \le e^{c N^{\gamma }} \mathbb {P}_{0}^{N}\left( \mathcal {O}_{N,l}\right) ^{1/2} \end{aligned}$$

with \(c = \frac{C}{2}\). Thus

$$\begin{aligned} \limsup \limits _{N \rightarrow \infty } N^{-\gamma } \log {\widetilde{\mathbb {P}}}^{N}\left( \mathcal {O}_{N,l} \right) \le c + \frac{1}{2}\limsup \limits _{N \rightarrow \infty } N^{-\gamma } \log \mathbb {P}_{0}^{N}\left( \mathcal {O}_{N,l} \right) \end{aligned}$$

and taking \(\limsup \) as \(l \rightarrow \infty \) together with an application of Lemma 6.3 finishes the proof. \(\square \)

7 Large Deviation Lower Bound

In this section we prove the large deviation lower bound of Theorem A. Let us assume that the permuton process X satisfies Eq. (29). Since we already know how to construct a biased interchange process that will typically display the behavior of X, to bound the probability that the trajectory of a random particle in the interchange process is close in distribution to X we only need to compare the unbiased process with the biased one by means of calculating their Radon–Nikodym derivative.

Since these two processes have different configuration spaces, for convenience we introduce the unbiased interchange process with colors, which has the same configuration space as the biased process associated to (29), and the generator \(\mathcal {L}^{u}\) obtained by setting all velocities v to 0

$$\begin{aligned} (\mathcal {L}^{u}_{t} f)(\eta )&= \frac{1}{2} N^{\alpha } \sum \limits _{y=1}^{N-1} (f(\eta ^{y, y+1}) - f(\eta )) \nonumber \\&\quad + \frac{1}{2} N^{\alpha } \sum \limits _{x=1}^{N} \left[ 1 \pm \varepsilon r(t, x, \phi _{x}(\eta )) \right] (f(\eta ^{x,\pm }) - f(\eta )). \end{aligned}$$
(52)

Since here the colors do not influence the dynamics of swaps, the corresponding permutation process \(X^{\eta ^{N}}\) will be the same as for the ordinary unbiased interchange process (and we will never be interested in the distribution of \(\Phi ^{\eta ^{N}}\) for the unbiased process with colors).

Let us start by deriving the formula for the Radon–Nikodym derivative of the unbiased process with colors with respect to the biased one. Recall that \(v_x (s, \eta _{s}) = v(s,x,\phi _{x}(\eta _s))\) denotes the velocity at time s of the particle at site x. Let \(\mathbb {P}^{N}_{u}\) denote the law of the unbiased process with colors. We will prove the following statement

Lemma 7.1

We have

$$\begin{aligned} \frac{\textrm{d}\mathbb {P}_{u}^{N}}{\textrm{d}{\widetilde{\mathbb {P}}}^{N}}(T) = \exp \Bigg \{ - \frac{1}{2} N^{\gamma } \Bigg [ \int \limits _{0}^{T} \frac{1}{N} \sum \limits _{x=1}^{N} v_x (s, \eta _{s})^2 \, ds + o(1) \Bigg ]\Bigg \}, \end{aligned}$$

where the o(1) term goes to 0 in probability as \(N \rightarrow \infty \).

Proof

The calculation is similar to the one in the proof of Lemma 5.4, with the difference that we use the generator \({\widetilde{\mathcal {L}}}\) instead of \(\mathcal {L}_0\). By the analog of formula (49) for time-inhomogeneous processes we have

$$\begin{aligned} \frac{\textrm{d}\mathbb {P}_{u}^{N}}{\textrm{d}{\widetilde{\mathbb {P}}}^{N}}(t) = \exp \left\{ -\sum \limits _{s \le t} \log \left( 1 + \varepsilon \left[ v(s,x_{j_{s}}, \phi _{x_{j_{s}}}(\eta _{s})) - v(s,x_{j_{s}}+1, \phi _{x_{j_{s}}+1}(\eta _{s}))\right] \right) \right\} , \end{aligned}$$

where the sum is over jump times \(s \le t\).

Denoting the sum in the exponent by \(J_{t}\), we obtain by (34) (by considering as before the generator \({\widetilde{\mathcal {L}}}\) acting on an extended configuration space) that

$$\begin{aligned} J_{t} = M_{t} + \frac{1}{2} N^{\alpha }\sum \limits _{x=1}^{N-1} \int \limits _{0}^{t} \left[ 1 + \varepsilon (\nabla _{x} v)(s, \eta _{s})\right] \log \left[ 1 + \varepsilon (\nabla _{x} v)(s, \eta _{s})\right] ds, \end{aligned}$$

where \(M_{t}\) is a local martingale with respect to \({\widetilde{\mathbb {P}}}^{N}\). Expanding all terms up to order \(\varepsilon ^2\) allows us to write

$$\begin{aligned} \frac{\textrm{d}\mathbb {P}_{u}^{N}}{\textrm{d}{\widetilde{\mathbb {P}}}^{N}}(t)&= \exp \left\{ - M_{t} - \frac{1}{2} N^{\alpha } \int \limits _{0}^{t} \sum \limits _{x=1}^{N-1} \left[ \varepsilon (\nabla _{x} v)(s,\eta _{s}) + \frac{\varepsilon ^2}{2} \left[ (\nabla _{x} v)(s,\eta _{s}) \right] ^2 \right] ds \right. \nonumber \\&\quad \left. + O(N^{\alpha +1}\varepsilon ^3)\right\} . \end{aligned}$$

As before the term linear in \(\varepsilon \) vanishes. Recalling that \(\varepsilon = N^{1-\alpha }\) and \(\gamma = 3 - \alpha \) we have

$$\begin{aligned} \frac{\textrm{d}\mathbb {P}_{u}^{N}}{\textrm{d}{\widetilde{\mathbb {P}}}^{N}}(t) = \exp \left\{ - M_{t} - \frac{1}{4} N^{\gamma } \int \limits _{0}^{t} \frac{1}{N} \sum \limits _{x=1}^{N-1} \left[ (\nabla _{x} v)(s,\eta _{s}) \right] ^2 \, ds + o(N^{\gamma })\right\} . \end{aligned}$$

Expanding \(\left[ (\nabla _{x} v)(s,\eta _s)\right] ^2 = (v_x(s,\eta _s) - v_{x+1}(s,\eta _s))^2\) leads us to

$$\begin{aligned} \frac{\textrm{d}\mathbb {P}_{u}^{N}}{\textrm{d}{\widetilde{\mathbb {P}}}^{N}}(t)&= \exp \Bigg \{ - M_{t} - \frac{1}{2} N^{\gamma } \int \limits _{0}^{t} \frac{1}{N} \sum \limits _{x=1}^{N} v_x(s,\eta _{s})^2 \, ds \nonumber \\&\quad + \frac{1}{2} N^{\gamma } \int \limits _{0}^{t} \frac{1}{N} \sum \limits _{x=1}^{N} v_x(s,\eta _{s})v_{x+1}(s,\eta _{s}) \, ds + o(N^{\gamma })\Bigg \}. \end{aligned}$$
(53)

The martingale term will typically be \(o(N^{\gamma })\). To see this, we use formula (35) – by performing a calculation similar to the one above we get that

$$\begin{aligned} N_{t} = M_{t}^2 - \frac{1}{2} N^{\alpha } \int \limits _{0}^{t} \sum \limits _{x=1}^{N} \left[ 1 + \varepsilon (\nabla _{x} v)(s,\eta _{s})\right] \left[ \log \left( 1 + \varepsilon (\nabla _{x} v)(s,\eta _{s})\right) \right] ^2 \, ds \end{aligned}$$

is a local martingale with respect to \({\widetilde{\mathbb {P}}}^{N}\). By expanding the \(\log \) terms up to \(\varepsilon ^2\) we see that the second term above is bounded by \(C N^{\alpha +1} \varepsilon ^2 = C N^{\gamma }\) for some \(C > 0\). In particular \(N_{t}\) is bounded from below, so it is a supermartingale. Thus \(\mathbb {E}N_T \le \mathbb {E}N_0 = 0\) and \(\mathbb {E}M_T^2 \le C N^\gamma \), so Chebyshev’s inequality implies that \(M_T = o(N^\gamma )\) with high probability.
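
Explicitly, Chebyshev's inequality gives for any \(\delta > 0\)

$$\begin{aligned} {\widetilde{\mathbb {P}}}^{N}\left( |M_{T}| > \delta N^{\gamma } \right) \le \frac{\mathbb {E}M_{T}^2}{\delta ^2 N^{2\gamma }} \le \frac{C}{\delta ^2 N^{\gamma }} \rightarrow 0 \end{aligned}$$

as \(N \rightarrow \infty \).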

The second sum in the exponent in (53) will be small by invariance of the uniform distribution of colors in the biased process. More precisely, at fixed time s for each x the correlation term \(v_{x}(s,\eta _{s})v_{x+1}(s,\eta _{s})\) has mean 0, since \(\eta _{s}\) has stationary distribution and by Proposition 4.1 in stationarity velocities at different sites are independent with mean 0. Moreover, for the same reason these terms are uncorrelated for different x, so by the weak law of large numbers we get that for any \(s \le T\) and \(\delta > 0\)

$$\begin{aligned} {\widetilde{\mathbb {P}}}^{N} \left( \left| \frac{1}{N} \sum \limits _{x=1}^{N} v_{x}(s,\eta _{s})v_{x+1}(s,\eta _{s}) \right| > \delta \right) \rightarrow 0 \end{aligned}$$

as \(N \rightarrow \infty \).
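
Indeed, expanding the second moment, all cross terms vanish: for \(x \ne y\) at least one of the four factors is independent of the remaining three and has mean 0. Hence, with \(\mathbb {E}\) denoting expectation under \({\widetilde{\mathbb {P}}}^{N}\),

$$\begin{aligned} \mathbb {E}\left( \frac{1}{N} \sum \limits _{x=1}^{N} v_{x}(s,\eta _{s})v_{x+1}(s,\eta _{s}) \right) ^2 = \frac{1}{N^2} \sum \limits _{x=1}^{N} \mathbb {E}\left[ v_{x}(s,\eta _{s})^2 v_{x+1}(s,\eta _{s})^2\right] \le \frac{C}{N}, \end{aligned}$$

and the claim follows from Chebyshev's inequality.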

Since this holds for any fixed s and the random variables are bounded, we also have

$$\begin{aligned} {\widetilde{\mathbb {P}}}^{N} \left( \left| \int \limits _{0}^{T} \frac{1}{N} \sum \limits _{x=1}^{N} v_{x}(s,\eta _{s})v_{x+1}(s,\eta _{s}) \, ds \right| > \delta \right) \rightarrow 0 \end{aligned}$$

as \(N \rightarrow \infty \), which proves that the correlation term is \(o(N^{\gamma })\) with high probability. Together with the bound on \(M_t\) this proves the desired formula for the Radon–Nikodym derivative. \(\square \)

We can now use Lemma 7.1 and the law of large numbers established in Theorem 5.1 to prove a large deviation lower bound for the interchange process. As the formula from the lemma suggests, the large deviation rate function will be related to the energy of the process to which the biased interchange process converges.

Recall from (10) that for any process \(\pi \in \mathcal {P}\) its energy was defined by

$$\begin{aligned} I(\pi ) = \mathbb {E}_{\gamma \sim \pi } \mathcal {E}(\gamma ), \end{aligned}$$

where \(\mathcal {E}(\gamma )\) is the Dirichlet energy of the path \(\gamma \) defined by (7). We have the following large deviation lower bound

Theorem 7.2

Let \(\mathbb {P}^{N}\) be the law of the unbiased interchange process \(\eta ^N\) and let \(\mu ^{\eta ^{N}}\) be the (random) distribution of the corresponding permutation process \(X^{\eta ^N}\). Let \(P = (X, \Phi )\) be the colored trajectory process associated to the equation (29) and let \(\mu \) denote the distribution of X. For any open set \(\mathcal {O}\subseteq \mathcal {P}\) such that \(\mu \in \mathcal {O}\) we have

$$\begin{aligned} \liminf _{N \rightarrow \infty } N^{-\gamma } \log \mathbb {P}^{N}\left( \mu ^{\eta ^{N}} \in \mathcal {O}\right) \ge - I(\mu ). \end{aligned}$$

Proof

It will be enough to show the bound above for \(\mathcal {O}\) being any open ball \(B(\mu , \varepsilon )\) in \(\mathcal {P}\) around \(\mu \). Let \(\mathbb {P}_{u}^{N}\) be the distribution of the unbiased process with colors, \(\nu ^{\eta ^{N}}\) the distribution of the colored permutation process \(P^{\eta ^N} = (X^{\eta ^N}, \Phi ^{\eta ^N})\) associated to \(\eta ^N\). Let \(\nu \) denote the distribution of \(P = (X, \Phi )\) and \({\widetilde{B}}(\nu , \varepsilon )\) an open ball around \(\nu \) in \(\mathcal {M}(\widetilde{\mathcal {D}})\). Since the projection \((X, \Phi ) \mapsto X\) is continuous as a map from \(\widetilde{\mathcal {D}}\) to \(\mathcal {D}\), the corresponding projection from \(\mathcal {M}(\widetilde{\mathcal {D}})\) to \(\mathcal {M}(\mathcal {D})\) is also continuous. As \(\mu ^{\eta ^N}\) has the same law under \(\mathbb {P}^N\) and \(\mathbb {P}_{u}^N\) (remember that in the latter process the colors do not influence the dynamics of swaps), we have that for any \(\varepsilon > 0\) there exists \(\varepsilon ' > 0\) such that \(\mathbb {P}^{N}\left( \mu ^{\eta ^{N}} \in B(\mu , \varepsilon ) \right) \ge \mathbb {P}_{u}^{N}\left( \nu ^{\eta ^{N}} \in {\widetilde{B}}(\nu , \varepsilon ')\right) \). Thus to prove the large deviation bound it is sufficient to prove the local lower bound

$$\begin{aligned} \liminf \limits _{\varepsilon \rightarrow 0}\liminf _{N \rightarrow \infty } N^{-\gamma } \log \mathbb {P}_{u}^{N}\left( \nu ^{\eta ^{N}} \in {\widetilde{B}}(\nu , \varepsilon )\right) \ge - I(\mu ). \end{aligned}$$
(54)

Recall that \({\widetilde{\mathbb {P}}}^{N}\) denotes the distribution of the biased process associated to (29) and consider the Radon–Nikodym derivative \(\frac{\textrm{d}\mathbb {P}_{u}^{N}}{\textrm{d}{\widetilde{\mathbb {P}}}^{N}}(t)\). By Lemma 7.1 we have

$$\begin{aligned} \frac{\textrm{d}\mathbb {P}_{u}^{N}}{\textrm{d}{\widetilde{\mathbb {P}}}^{N}}(T) = \exp \Bigg \{ -\frac{1}{2} N^{\gamma } \Bigg [ \int \limits _{0}^{T} \frac{1}{N} \sum \limits _{x=1}^{N} v_{x}(\eta _{s})^2 \, ds + Y_N\Bigg ]\Bigg \}, \end{aligned}$$
(55)

where \(Y_N\) goes to 0 in probability as \(N \rightarrow \infty \).

Now by the law of large numbers from Theorem 5.1 and Remark 5.2 the distributions \(\nu ^{\eta ^{N}}\) converge in probability in the \(d_{{\mathcal {W}}}^{sup}\) metric to \(\nu \) when \(\eta ^{N}\) is sampled according to \({\widetilde{\mathbb {P}}}^{N}\). Thus for any \(\varepsilon > 0\) and an open ball \({\widetilde{B}}_{\varepsilon } = \{ \zeta \in \mathcal {M}(\widetilde{\mathcal {D}}) \, | \, d_{{\mathcal {W}}}^{sup}(\zeta , \nu ) < \varepsilon \}\) around \(\nu \) in the \(d_{{\mathcal {W}}}^{sup}\) metric we have \(\lim \limits _{N \rightarrow \infty } {\widetilde{\mathbb {P}}}^{N}\left( \nu ^{\eta ^{N}} \in {\widetilde{B}}_{\varepsilon }\right) = 1\). Since convergence in \(d_{{\mathcal {W}}}^{sup}\) implies convergence in \(d_{{\mathcal {W}}}\), to prove (54) it is enough to analyze the probability \(\mathbb {P}^N_{u}\left( \nu ^{\eta ^{N}} \in {\widetilde{B}}_{\varepsilon }\right) \).

Fix an arbitrary \(\delta > 0\) and let \(U_{N} = \{ |Y_N| \le \delta \}\). Let \(V_{N,\varepsilon } = U_{N} \cap \{ \nu ^{\eta ^{N}} \in {\widetilde{B}}_{\varepsilon }\}\) and \(\frac{\textrm{d}\mathbb {P}_{u}^{N}}{\textrm{d}{\widetilde{\mathbb {P}}}^{N}} = \frac{\textrm{d}\mathbb {P}_{u}^{N}}{\textrm{d}{\widetilde{\mathbb {P}}}^{N}}(T)\). With \(\mathbb {E}\) denoting the expectation with respect to \(\mathbb {P}_{u}^{N}\) and \({\widetilde{\mathbb {E}}}\) with respect to \({\widetilde{\mathbb {P}}}^{N}\) we have for any \(\varepsilon > 0\) and sufficiently large N

$$\begin{aligned} \mathbb {P}^{N}_{u}\left( \nu ^{\eta ^{N}} \in {\widetilde{B}}_{\varepsilon }\right)= & {} \mathbb {E}\left( \mathbb {1}_{\{ \nu ^{\eta ^{N}} \in {\widetilde{B}}_{\varepsilon } \}}\right) \ge \mathbb {E}(\mathbb {1}_{V_{N,\varepsilon }}) \\= & {} {\widetilde{\mathbb {E}}} \left( \frac{\textrm{d}\mathbb {P}_{u}^{N}}{\textrm{d}{\widetilde{\mathbb {P}}}^{N}} \mathbb {1}_{V_{N,\varepsilon }} \right) \ge {\widetilde{\mathbb {P}}}^{N}(V_{N,\varepsilon }) \left( \inf _{ V_{N,\varepsilon }}\frac{\textrm{d}\mathbb {P}_{u}^{N}}{\textrm{d}{\widetilde{\mathbb {P}}}^{N}} \right) . \end{aligned}$$

We have \(\lim \limits _{N \rightarrow \infty } {\widetilde{\mathbb {P}}}^{N}(V_{N,\varepsilon }) = 1 \) and on the event \(U_N\) we have

$$\begin{aligned} \frac{\textrm{d}\mathbb {P}_{u}^{N}}{\textrm{d}{\widetilde{\mathbb {P}}}^{N}}(T) \ge \exp \Bigg \{ -\frac{1}{2} N^{\gamma } \Bigg [ \int \limits _{0}^{T} \frac{1}{N} \sum \limits _{x=1}^{N} v_{x}(\eta _{s})^2 \, ds + \delta \Bigg ]\Bigg \}. \end{aligned}$$

This implies

$$\begin{aligned} N^{-\gamma } \log \mathbb {P}_{u}^{N}\left( \nu ^{\eta ^{N}} \in {\widetilde{B}}_{\varepsilon }\right) \ge - \sup _{\eta \in V_{N,\varepsilon }} I_N (\eta ) - \delta , \end{aligned}$$
(56)

where

$$\begin{aligned} I_{N}(\eta ) = \frac{1}{2} \left( \frac{1}{N} \sum \limits _{x=1}^{N} \int \limits _{0}^{T} v_x(s,\eta _{s})^2 \, ds \right) . \end{aligned}$$

Now it is not difficult to see that the supremum on the right hand side of (56) converges to \(I(\mu )\) as \(N \rightarrow \infty \) and then \(\varepsilon \rightarrow 0\). When \((X, \Phi )\) is sampled from \(\nu \), X is the solution of (29) with a uniformly random initial condition, so the energy \(I(\mu )\) is simply equal to

$$\begin{aligned} \frac{1}{2} \, \mathbb {E}\int \limits _{0}^{T} V(s, X(s),\Phi (s))^2 \, ds, \end{aligned}$$

where the expectation is with respect to the choice of \((X(0), \Phi (0))\). Recall the notation \(X_{i}(\eta ^{N}_{t}) = \frac{1}{N} x_{i}(\eta ^{N}_{t})\), \(\Phi _{i}(\eta ^{N}_{t}) = \frac{1}{N} \phi _{i}(\eta ^{N}_{t})\). In light of (33) what we need to show is that

$$\begin{aligned} \sup \limits _{\eta \in V_{N,\varepsilon }} \left( \frac{1}{N} \sum \limits _{i=1}^{N} \int \limits _{0}^{T} V(s, X_{i}(\eta ^{N}_{s}),\Phi _{i}(\eta ^{N}_{s}))^2 \, ds \right) \rightarrow \mathbb {E}\int \limits _{0}^{T} V(s, X(s),\Phi (s))^2 \, ds \end{aligned}$$
(57)

as \(N \rightarrow \infty \) and then \(\varepsilon \rightarrow 0\).
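
The expectation on the right hand side of (57) is easy to approximate numerically. The following is a minimal Monte Carlo sketch (our construction; V and R below are smooth placeholders standing in for the true fields of (29), and the boundary behavior of (29) is ignored):

```python
import numpy as np

# Sketch: Monte Carlo estimate of E int_0^T V(s, X(s), Phi(s))^2 ds by
# sampling the uniform initial condition and integrating the ODE with
# forward Euler. V and R are placeholder fields, not the paper's.
rng = np.random.default_rng(3)
T, steps, samples = 1.0, 1000, 2000
dt = T / steps
V = lambda s, x, p: 0.3 * np.sin(np.pi * x) * np.cos(2 * np.pi * p)
R = lambda s, x, p: 0.3 * np.cos(np.pi * x) * np.sin(2 * np.pi * p)

x = rng.random(samples)            # X(0) uniform on [0, 1]
p = rng.random(samples)            # Phi(0) uniform on [0, 1]
energy = np.zeros(samples)
for k in range(steps):
    s = k * dt
    vx = V(s, x, p)
    energy += vx**2 * dt           # accumulates int_0^T V(s, X, Phi)^2 ds
    x, p = x + dt * vx, p + dt * R(s, x, p)
print("estimated E int V^2 ds:", energy.mean())   # twice I(mu) in this notation
```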

Consider the trajectory \(\eta ^N\) and for any particle i let \((X_{i}(t), \Phi _{i}(t))\) denote the solution of (29) corresponding to the initial condition \((X_{i}(\eta ^{N}_{0}), \Phi _{i}(\eta ^{N}_{0}))\). Since the velocities V are bounded, we can write

$$\begin{aligned}&\left| \int \limits _{0}^{T} \left[ V(s,X_{i}(\eta ^{N}_{s}),\Phi _{i}(\eta ^{N}_{s}))^2 - V(s,X_{i}(s),\Phi _{i}(s))^2 \right] ds \right| \nonumber \\&\quad \le C \int \limits _{0}^{T} \big | V(s,X_{i}(\eta ^{N}_{s}),\Phi _{i}(\eta ^{N}_{s})) - V(s,X_{i}(s),\Phi _{i}(s)) \big | \, ds \nonumber \\&\quad \le K T \max \left\{ \sup _{t \le T} \left| X_{i}(\eta ^{N}_{t}) - X_{i}(t)\right| , \sup _{t \le T} \left| \Phi _{i}(\eta ^{N}_{t}) - \Phi _{i}(t)\right| \right\} \end{aligned}$$
(58)

for some \(C, K > 0\) depending on the bound on V and the Lipschitz constant of V.

Now note that if \(\nu ^{\eta ^N} \in {\widetilde{B}}_{\varepsilon }\), then by considering the same coupling as in the proof of Theorem 5.1 we have

$$\begin{aligned} \limsup \limits _{N \rightarrow \infty } \frac{1}{N} \sum \limits _{i=1}^{N} \left| \max \left\{ \sup _{t \le T} \left| X_{i}(\eta ^{N}_{t}) - X_{i}(t)\right| , \sup _{t \le T} \left| \Phi _{i}(\eta ^{N}_{t}) - \Phi _{i}(t)\right| \right\} \right| \le \varepsilon ' \end{aligned}$$
(59)

for some \(\varepsilon ' > 0\) satisfying \(\varepsilon ' \rightarrow 0\) as \(\varepsilon \rightarrow 0\). Since \(V_{N,\varepsilon } \subseteq \{ \nu ^{\eta ^{N}} \in {\widetilde{B}}_{\varepsilon }\}\), combining this with (58) we obtain that the left hand side of (57) differs from

$$\begin{aligned} \frac{1}{N} \sum \limits _{i=1}^{N} \int \limits _{0}^{T} V(s,X_{i}(s),\Phi _{i}(s))^2 \, ds \end{aligned}$$

by a term which vanishes as \(N \rightarrow \infty \) and then \(\varepsilon \rightarrow 0\).

Since \((X_{i}(t), \Phi _{i}(t))\) is a solution of (29) and V is the derivative of X, the integral is simply equal to \(\int \limits _{0}^{T} \dot{X}_{i}(t)^2 \, dt\). Since for each i the initial condition \(\Phi _{i}(\eta ^{N}_{0})\) has uniform distribution on \(\left\{ \frac{1}{N}, \ldots , 1 \right\} \), independently for all i, it follows easily that this expression converges with high probability to the expectation on the right hand side of (57). This implies \(\sup _{\eta \in V_{N,\varepsilon }} I_N (\eta ) \rightarrow I(\mu )\) as \(N \rightarrow \infty \) and then \(\varepsilon \rightarrow 0\). Since in (56) we can take \(\delta \) to be arbitrarily small, this proves (54) and finishes the proof of the lower bound. \(\square \)

With this theorem the large deviation lower bound for generalized solutions to Euler equations, announced as Theorem A in the introduction, is now an easy corollary.

Theorem 7.3

Let \(\mathbb {P}^{N}\) be the law of the interchange process \(\eta ^N\) and let \(\mu ^{\eta ^{N}}\) be the (random) distribution of the corresponding permutation process \(X^{\eta ^N}\). Let \(\pi \) be a permuton process which is a generalized solution to Euler equations (19). Provided \(\pi \) satisfies Assumptions (3.1), for any open set \(\mathcal {O}\subseteq \mathcal {P}\) such that \(\pi \in \mathcal {O}\) we have

$$\begin{aligned} \liminf _{N \rightarrow \infty } N^{-\gamma } \log \mathbb {P}^{N}\left( \mu ^{\eta ^{N}} \in \mathcal {O}\right) \ge - I(\pi ). \end{aligned}$$

Proof

Let \(\pi ^{\beta , \delta }\) be the distribution of the process \(X^{\beta , \delta }\) defined in Sect. 3. By the first part of Proposition 3.7 we have \(d_{{\mathcal {W}}}^{sup}(\pi , \pi ^{\beta , \delta }) \rightarrow 0\) as first \(\delta \rightarrow 0\) and then \(\beta \rightarrow 0\); in particular, for small enough \(\delta \) and \(\beta \) we have \(\pi ^{\beta , \delta } \in \mathcal {O}\). Then Theorem 7.2 implies that

$$\begin{aligned} \liminf _{N \rightarrow \infty } N^{-\gamma } \log \mathbb {P}^{N}\left( \mu ^{\eta ^{N}} \in \mathcal {O}\right) \ge - I(\pi ^{\beta , \delta }). \end{aligned}$$

Since by the second part of Proposition 3.7 we have \(\lim \limits _{\beta \rightarrow 0} \lim \limits _{\delta \rightarrow 0} I(\pi ^{\beta , \delta }) = I(\pi )\), the lower bound is proved. \(\square \)

8 Large Deviation Upper Bound

In this section we prove Theorem B, a large deviation upper bound for the distribution of the interchange process (we will drop the term “unbiased” from now on). As a first step we will bound the probability that after a (possibly short) time \(t > 0\) we see a fixed permutation in the interchange process. This is summarized in the following

Proposition 8.1

Let \(\mathbb {P}^{N}\) be the law of the interchange process, with \(\eta = \eta ^N\) denoting the trajectory of the process. Let \(\sigma ^N \in {\mathcal {S}}_N\) be a sequence of permutations. For any \(t > 0\) we have

$$\begin{aligned} \limsup \limits _{N \rightarrow \infty } N^{-\gamma }\log \mathbb {P}^{N}(\eta _{0}^{-1}\eta _{t} = \sigma ^N) \le - \frac{1}{t} \left( \liminf \limits _{N \rightarrow \infty } I(\sigma ^N) \right) , \end{aligned}$$

where \(I(\sigma )\) is the energy of the permutation \(\sigma \) defined in (8).

In other words, the probability of seeing a permutation \(\sigma \) at time t in the interchange process decays exponentially in \(N^{\gamma }\), with decay exponent asymptotically bounded from below by \(\frac{1}{t}\) times the energy of \(\sigma \).

To prove Proposition 8.1 we will employ exponential martingales. The idea is as follows – if \(M_{S}(\eta )\) is a function of the process (depending on some set of parameters S) which is a positive mean one martingale, then for any permutation \(\sigma \in {\mathcal {S}}_N\) we can write

$$\begin{aligned}&\mathbb {P}^{N}(\eta _{0}^{-1}\eta _{t} = \sigma ) = \mathbb {E}(\mathbb {1}_{\{\eta _{0}^{-1}\eta _{t} = \sigma \}} )= \mathbb {E}\left( M_{S}(\eta ) M_{S}(\eta )^{-1} \mathbb {1}_{\{\eta _{0}^{-1}\eta _{t} = \sigma \}}\right) \nonumber \\&\quad \le \sup _{ \{\chi _{0}^{-1}\chi _{t} = \sigma \}} M_{S}(\chi )^{-1} \mathbb {E}\left( M_{S}(\eta ) \mathbb {1}_{\{\eta _{0}^{-1}\eta _{t} = \sigma \}}\right) \le \sup _{\{\chi _{0}^{-1}\chi _{t} = \sigma \}} M_{S}(\chi )^{-1}, \end{aligned}$$
(60)

where the supremum is over all deterministic permutation-valued paths \(\chi = (\chi _s, 0 \le s \le T)\) satisfying \(\chi _{0}^{-1}\chi _{t} = \sigma \) and the last inequality comes from the fact that \(M_{S}(\chi )\) is a positive mean one martingale. If \(M_{S}\) depends only on the increment \(\chi _{0}^{-1}\chi _{t}\), we obtain a particularly simple expression

$$\begin{aligned} \mathbb {P}^{N}(\eta _{0}^{-1}\eta _{t} = \sigma ) \le M_{S}(\sigma )^{-1}. \end{aligned}$$

We can then optimize over the set of parameters S to obtain a large deviation upper bound. The family of martingales we will use is similar to the one used in analyzing large deviations for a simple random walk.
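
As a numeric sanity check of this tilting scheme in its simplest instance (our illustration, with parameters chosen for visibility; for a simple random walk with \(\pm 1\) steps the relevant martingale is \(e^{\theta S_n}/\cosh (\theta )^n\)), Markov's inequality applied to \(e^{\theta S_n}\) gives \(\mathbb {P}(S_n \ge k) \le e^{-\theta k} \cosh (\theta )^n\), which we then optimize over \(\theta \):

```python
import numpy as np

# Chernoff/tilting bound for a simple random walk, the toy case of (60):
# P(S_n >= k) <= exp(-theta * k) * E exp(theta * S_n)
#             =  exp(-theta * k) * cosh(theta)**n,   optimized over theta.
rng = np.random.default_rng(4)
n, k = 1000, 60
thetas = np.linspace(0.005, 0.5, 500)
bounds = np.exp(-thetas * k) * np.cosh(thetas) ** n
print("optimized Chernoff bound:", bounds.min())

# Empirical comparison over 20,000 sampled walks with +/-1 steps.
steps = rng.integers(0, 2, size=(20_000, n), dtype=np.int8) * 2 - 1
print("empirical P(S_n >= k):  ", (steps.sum(axis=1) >= k).mean())
```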

Fix \(t > 0\) and a sequence \(S = (s_{1}, \ldots , s_{N})\), with \(s_{i} \in \left\{ \frac{-1 + \frac{1}{N}}{t}, \frac{-1 + \frac{2}{N}}{t}, \ldots , \frac{1 - \frac{2}{N}}{t},\right. \left. \frac{1 - \frac{1}{N}}{t} \right\} \). We will think of \(s_i\) as the “velocity” assigned to the particle i. Consider the function

$$\begin{aligned} F_{S}(\eta _{t}) = \varepsilon \sum \limits _{i=1}^{N} s_{i} x_{i}(\eta _{t}), \end{aligned}$$

where \(x_{i}(\eta _{t})\) is the position of the particle i in the configuration \(\eta _{t}\). If \(\mathcal {L}\) is the generator of the interchange process, given by (12), then by the formula (36) for exponential martingales we obtain that

$$\begin{aligned} M_{t}^{S} = \exp \left\{ F_{S}(\eta _{t}) - F_{S}(\eta _{0}) - \int \limits _{0}^{t} e^{-F_{S}(\eta _{s})} \mathcal {L}e^{F_{S}(\eta _{s})} \, ds \right\} \end{aligned}$$

is a mean one positive martingale with respect to \(\mathbb {P}^{N}\).
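As a sanity check of this formula, one can evaluate \(M_{t}^{S}\) along simulated trajectories and verify numerically that its mean is 1. The sketch below (with illustrative parameter values, not tied to the scaling used later) computes the compensator exactly, using that \(F_{S}(\eta _{s})\) is constant between jumps.

```python
import numpy as np

def martingale_value(N, t, alpha, eps, s, rng):
    """Run one trajectory of the interchange process and evaluate M_t^S.
    Here eta[x] is the label of the particle at site x, so s[eta] is the
    field x -> s_x(eta) = s_{eta^{-1}(x)}."""
    eta = np.arange(N)
    pos = np.arange(N, dtype=float)        # pos[i] = x_i(eta), position of particle i
    total_rate = 0.5 * N**alpha * (N - 1)
    time, log_M = 0.0, 0.0
    while True:
        wait = rng.exponential(1.0 / total_rate)
        dt = min(wait, t - time)
        sx = s[eta]
        # compensator: (1/2) N^alpha sum_x (exp(eps (s_x - s_{x+1})) - 1) dt
        log_M -= 0.5 * N**alpha * np.sum(np.exp(eps * (sx[:-1] - sx[1:])) - 1.0) * dt
        time += wait
        if time >= t:
            break
        x = rng.integers(N - 1)            # swap the contents of sites x and x+1
        i, j = eta[x], eta[x + 1]
        eta[x], eta[x + 1] = j, i
        pos[i], pos[j] = x + 1.0, float(x)
    log_M += eps * np.dot(s, pos - np.arange(N))   # F_S(eta_t) - F_S(eta_0)
    return np.exp(log_M)

rng = np.random.default_rng(1)
N, t, alpha, eps = 5, 0.3, 1.0, 0.1
s = rng.uniform(-1.0, 1.0, size=N) / t             # any fixed parameters work
print(np.mean([martingale_value(N, t, alpha, eps, s, rng) for _ in range(5000)]))
# the average should be close to 1, consistent with E M_t^S = 1
```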

For simplicity we will use the same notation \(s_{x}(\eta ) = s_{\eta ^{-1}(x)}\) as for the velocities \(v_x\) of particles in the previous sections (with the convention that i denotes labels of particles and x denotes positions), but bear in mind that now the \(s_x\) are just parameters, not related in any way to the biased interchange process considered in the preceding sections. We have

$$\begin{aligned} \mathcal {L}e^{F_{S}(\eta )} = \frac{1}{2} N^{\alpha } \sum \limits _{x=1}^{N-1} \left( e^{F_{S}(\eta ^{x,x+1})} - e^{F_{S}(\eta )} \right) = \frac{1}{2} N^{\alpha } \sum \limits _{x=1}^{N-1} \left( e^{F_{S}(\eta ) + \varepsilon \left[ s_{x}(\eta ) - s_{x+1}(\eta )\right] } - e^{F_{S}(\eta )} \right) , \end{aligned}$$

so

$$\begin{aligned} M_{t}^{S} = \exp \left\{ \varepsilon \sum \limits _{i=1}^{N} s_{i} \left( x_{i}(\eta _{t}) - x_{i}(\eta _{0})\right) - \frac{1}{2} N^{\alpha } \int \limits _{0}^{t} \sum \limits _{x=1}^{N-1} \left( e^{\varepsilon [s_{x}(\eta _{s}) - s_{x+1}(\eta _{s})]} -1\right) ds \right\} . \end{aligned}$$

Expanding up to order \(\varepsilon ^2\) we get

$$\begin{aligned} M_{t}^{S}&= \exp \Bigg \{ \varepsilon \sum \limits _{i=1}^{N} s_{i} \left( x_{i}(\eta _{t}) - x_{i}(\eta _{0})\right) - \frac{1}{2} N^{\alpha } \varepsilon \int \limits _{0}^{t} \sum \limits _{x=1}^{N-1} [s_{x}(\eta _{s}) - s_{x+1}(\eta _{s})] \, ds \nonumber \\&\quad - \frac{1}{4} N^{\alpha }\varepsilon ^2 \int \limits _{0}^{t} \sum \limits _{x=1}^{N-1} \left( s_{x}(\eta _{s}) - s_{x+1}(\eta _{s})\right) ^2 \, ds + O(N^{\alpha +1} \varepsilon ^3 ) \Bigg \}, \end{aligned}$$
(61)

where the constants in the \(O(\cdot )\) notation depend on t (which is fixed). Observe that the sum of \(s_{x} - s_{x+1}\) telescopes, leaving only terms with \(s_{1}\) and \(s_{N}\), which are \(O(N^{\alpha } \varepsilon ) = o(N^{\gamma })\). Rescaling by appropriate powers of N and expressing the exponents in terms of the large deviation exponent \(\gamma \) we get

$$\begin{aligned} M_{t}^{S} = \exp \Bigg \{ N^{\gamma } \left[ \frac{1}{N}\sum \limits _{i=1}^{N} s_{i} \left( \frac{x_{i}(\eta _{t}) - x_{i}(\eta _{0})}{N} \right) - \frac{1}{4} \int \limits _{0}^{t} \frac{1}{N}\sum \limits _{x=1}^{N-1} \left( s_{x}(\eta _{s}) - s_{x+1}(\eta _{s})\right) ^2 \, ds \right] + o(N^{\gamma }) \Bigg \}. \end{aligned}$$

Expanding \((s_x - s_{x+1})^2\) and adding and subtracting the boundary terms \(s_1^2\), \(s_N^2\) (which are only o(1) after rescaling), we obtain \(\sum \limits _{x=1}^{N-1} \left( s_{x} - s_{x+1}\right) ^2 = 2\sum \limits _{x=1}^{N} s_x^2 - 2\sum \limits _{x=1}^{N-1} s_x s_{x+1} - s_1^2 - s_N^2\). Since \(\sum \limits _{x=1}^N s_x^2 = \sum \limits _{i=1}^N s_i^2 \) does not depend on time, we can write

$$\begin{aligned} M_{t}^{S}&= \exp \Bigg \{ N^{\gamma } \Bigg [ \frac{1}{N}\sum \limits _{i=1}^{N} s_{i} \left( \frac{x_{i}(\eta _{t}) - x_{i}(\eta _{0})}{N} \right) - \frac{t}{2} \left( \frac{1}{N}\sum \limits _{i=1}^{N} s_{i}^{2} \right) \\&+ \frac{1}{2} \int \limits _{0}^{t} \frac{1}{N}\sum \limits _{x=1}^{N-1} s_{x}(\eta _{s}) s_{x+1}(\eta _{s}) \, ds + o(1)\Bigg ] \Bigg \}. \end{aligned}$$

As in the proof of the law of large numbers we want to use the one block estimate to get rid of the sum involving correlations between \(s_{x}\) for adjacent x. This time the correlation term might not be small, since \(s_{i}\) are arbitrary, but typically it will be nonnegative, so we can neglect it for the sake of the upper bound. More precisely, we have the following

Lemma 8.2

Let \(\mathbb {P}^{N}\) be the law of the interchange process. Fix \(t > 0\) and let \(s_{i} \in \left\{ \frac{-1}{t}, \frac{-1 + \frac{1}{N}}{t}, \ldots , \frac{1 - \frac{1}{N}}{t}, \frac{1}{t} \right\} \). Then, with notation as above, we have for any \(\delta > 0\)

$$\begin{aligned} \limsup \limits _{N \rightarrow \infty } N^{-\gamma } \log \mathbb {P}^{N} \left( \int \limits _{0}^{t} \frac{1}{N}\sum \limits _{x=1}^{N-1} s_{x}(\eta _{s}) s_{x+1}(\eta _{s}) \, ds \le - \delta \right) = - \infty . \end{aligned}$$

Proof

We employ Lemma 6.2 with \(w(x,\phi ) = \frac{2x-1}{t}\), \(a_i = s_i\) and \(b_{x}(\eta ) = a_{x}(\eta )\), in particular \(h_{x}(\eta ) = s_x(\eta ) s_{x-1}(\eta )\). As in the lemma consider \(\mathbb {E}_{\mu _{x,l}^{\eta _s}} \left( s_{x}(\eta ) s_{x+1}(\eta )\right) \), where \(\mu _{x,l}^{\eta _s}\) is the empirical distribution of \(a_i\) in a box \(\Lambda _{x,l}\). Let us write

$$\begin{aligned}&\int \limits _{0}^{t} \frac{1}{N}\sum \limits _{x=1}^{N-1} s_{x}(\eta _s) s_{x+1}(\eta _s) \, ds \nonumber \\&\quad = \int \limits _{0}^{t} \frac{1}{N}\sum \limits _{x=1}^{N-1} \left( s_{x}(\eta _s) s_{x+1}(\eta _s) - \mathbb {E}_{\mu _{x,l}^{\eta _s}} \left[ s_{x}(\eta ) s_{x+1}(\eta )\right] \right) \, ds \nonumber \\&\quad \quad + \int \limits _{0}^{t} \frac{1}{N}\sum \limits _{x=1}^{N-1} \mathbb {E}_{\mu _{x,l}^{\eta _s}} \left[ s_{x}(\eta ) s_{x+1}(\eta ) \right] \, ds. \end{aligned}$$
(62)

Since under \(\mu _{x,l}^{\eta _s}\) the colors are i.i.d. random variables, we have \(\mathbb {E}_{\mu _{x,l}^{\eta _s}} \left[ s_{x}(\eta ) s_{x+1}(\eta )\right] = \left( \mathbb {E}_{\mu _{x,l}^{\eta _{s}}} s_{x}(\eta )\right) ^2 \ge 0\), so the second term on the right hand side of (62) is nonnegative for every l. Lemma 6.2 guarantees that for any \(\delta > 0\)

$$\begin{aligned} \limsup \limits _{l \rightarrow \infty } \limsup \limits _{N \rightarrow \infty } N^{-\gamma } \log \mathbb {P}^{N} \left( \int \limits _{0}^{t} \frac{1}{N}\sum \limits _{x=1}^{N-1} \left( s_{x}(\eta _s) s_{x+1}(\eta _s) - \mathbb {E}_{\mu _{x,l}^{\eta _s}} \left[ s_{x}(\eta ) s_{x+1}(\eta )\right] \right) \, ds \le - \delta \right) = - \infty . \end{aligned}$$

Since the left hand side of (62) does not depend on l, this finishes the proof. \(\square \)

With this lemma the proof of Proposition 8.1 is rather straightforward.

Proof of Proposition 8.1

Lemma 8.2 implies that for any \(a > 0\) there exist sets \(\mathcal {O}_{N,a}\) such that on \(\mathcal {O}_{N,a}\) we have

$$\begin{aligned} M_{t}^{S}(\eta ) \ge \exp \Bigg \{ N^{\gamma } \Bigg [ \frac{1}{N}\sum \limits _{i=1}^{N} s_{i} \left( \frac{x_{i}(\eta _{t}) - x_{i}(\eta _{0})}{N} \right) - \frac{t}{2} \left( \frac{1}{N}\sum \limits _{i=1}^{N} s_{i}^{2} \right) - a + o(1)\Bigg ] \Bigg \}, \end{aligned}$$
(63)

with the o(1) term depending on t, and \(\mathbb {P}^{N}(\mathcal {O}_{N,a}^{c}) \rightarrow 0 \) as \(N \rightarrow \infty \) superexponentially fast:

$$\begin{aligned} \limsup _{N \rightarrow \infty } N^{-\gamma } \log \mathbb {P}^{N}(\mathcal {O}_{N,a}^{c}) = - \infty . \end{aligned}$$

Now we can use the strategy outlined earlier with the positive mean one martingale \(M_{t}^{S}(\eta )\). We write

$$\begin{aligned}&\mathbb {P}^{N}(\eta _{0}^{-1}\eta _{t} = \sigma ^N) = \mathbb {E}\left( \mathbb {1}_{\{\eta _{0}^{-1}\eta _{t} = \sigma ^N\}} \right) = \mathbb {E}\left( M_{t}^{S}(\eta )^{-1} M_{t}^{S}(\eta ) \mathbb {1}_{\{\eta _{0}^{-1}\eta _{t} = \sigma ^N\}}\right) \\&\quad = \mathbb {E}\left( M_{t}^{S}(\eta )^{-1} M_{t}^{S}(\eta ) \mathbb {1}_{\{\eta _{0}^{-1}\eta _{t} = \sigma ^N \}}\mathbb {1}_{\mathcal {O}_{N,a}} \right) + \mathbb {E}\left( \mathbb {1}_{\{\eta _{0}^{-1}\eta _{t} = \sigma ^N \}}\mathbb {1}_{\mathcal {O}_{N,a}^{c}} \right) . \end{aligned}$$

On \(\mathcal {O}_{N,a}\) we can use the bound (63) obtained above. Note also that on the event \(\{\eta _{0}^{-1}\eta _{t} = \sigma ^N\}\) we have \(x_{i}(\eta _{t}) - x_{i}(\eta _{0}) = \sigma ^N(i) - i\), which together with (60) leads us to

$$\begin{aligned} \mathbb {P}^{N}(\eta _{0}^{-1}\eta _{t} = \sigma ^N) \le e^{-N^{\gamma } \left( I_{S}(\sigma ^N) - a + o(1)\right) } + \mathbb {P}^{N}(\mathcal {O}_{N,a}^{c}), \end{aligned}$$
(64)

where

$$\begin{aligned} I_{S}(\sigma ^N) = \frac{1}{N}\sum \limits _{i=1}^{N} s_{i} \left( \frac{\sigma ^N(i) - i}{N} \right) - \frac{t}{2} \left( \frac{1}{N}\sum \limits _{i=1}^{N} s_{i}^{2} \right) . \end{aligned}$$

To optimize over the choice of \(S = (s_1, \ldots , s_N)\), write \(d_i = \frac{\sigma ^N(i) - i}{N}\) and observe that \(I_{S}(\sigma ^N) = \frac{1}{N}\sum \limits _{i=1}^{N} \left( s_i d_i - \frac{t}{2} s_i^2 \right) \) decouples over i; each summand is a quadratic in \(s_{i}\), maximized at \(s_i = \frac{d_i}{t}\) with maximal value \(\frac{d_i^2}{2t}\). Hence the optimal choice is

$$\begin{aligned} s_{i} = \frac{\sigma ^N(i) - i}{t N}, \end{aligned}$$

which is a valid choice, since \(\sigma ^N(i) - i \in \{-(N-1), \ldots , N-1\}\), so that \(s_i\) indeed belongs to the allowed set \(\left\{ \frac{-1 + \frac{1}{N}}{t}, \frac{-1 + \frac{2}{N}}{t}, \ldots , \frac{1 - \frac{1}{N}}{t}\right\} \). This gives the maximal value of \(I_{S}(\sigma ^N)\) equal to

$$\begin{aligned} \frac{1}{2} \left( \frac{1}{N}\sum \limits _{i=1}^{N} \frac{1}{t}\left( \frac{\sigma ^N(i) - i}{N} \right) ^2 \right) , \end{aligned}$$

which is exactly the energy \(I(\sigma ^N)\) rescaled by t. Inserting this into (64) gives us

$$\begin{aligned} \mathbb {P}^{N}(\eta _{0}^{-1}\eta _{t} = \sigma ^N) \le e^{-N^{\gamma } \left( I(\sigma ^N) - a + o(1)\right) } + \mathbb {P}^{N}(\mathcal {O}_{N,a}^{c}). \end{aligned}$$

Since \(\limsup \limits _{n \rightarrow \infty } \frac{1}{n} \log (a_n + b_n) = \max \{\limsup \limits _{n \rightarrow \infty } \frac{1}{n} \log a_n, \limsup \limits _{n \rightarrow \infty } \frac{1}{n} \log b_n\}\), we obtain

$$\begin{aligned} \limsup \limits _{N \rightarrow \infty } N^{-\gamma } \log \mathbb {P}^{N}(\eta _{0}^{-1}\eta _{t} = \sigma ^N) \le \max \left\{ - \liminf \limits _{N \rightarrow \infty } I(\sigma ^N) + a, \limsup \limits _{N \rightarrow \infty } N^{-\gamma } \log \mathbb {P}^{N} (\mathcal {O}_{N,a}^{c}) \right\} . \end{aligned}$$

The second lim sup is \(-\infty \) and by taking \(a \rightarrow 0\) we arrive at

$$\begin{aligned} \limsup \limits _{N \rightarrow \infty } N^{-\gamma } \log \mathbb {P}^{N}(\eta _{0}^{-1}\eta _{t} = \sigma ^N) \le - \liminf \limits _{N \rightarrow \infty } I(\sigma ^N) \end{aligned}$$

as desired. \(\square \)
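To make the rate explicit, the energy \(I(\sigma )\) appearing in the bound is a simple quadratic functional of the displacements. A short numerical illustration (with 0-indexed permutations, which shifts (8) only by O(1/N)):

```python
import numpy as np

def energy(sigma):
    """Energy I(sigma) of a permutation of {0,...,N-1}, following (8):
    I(sigma) = (1/2) * (1/N) * sum_i ((sigma(i) - i)/N)^2."""
    N = len(sigma)
    d = (np.asarray(sigma) - np.arange(N)) / N
    return 0.5 * np.mean(d * d)

N = 1000
print(energy(np.arange(N)))        # identity permutation: energy 0
print(energy(np.arange(N)[::-1]))  # reverse permutation: close to 1/6
```

For the reverse permutation the energy converges to \(\frac{1}{2} \int _{0}^{1} (1 - 2u)^2 \, du = \frac{1}{6}\) as \(N \rightarrow \infty \).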

We can readily extend the bound from Proposition 8.1 to all finite-dimensional distributions of the interchange process as follows. Fix a finite set of times \(0 \le t_0< t_1< \ldots < t_k \le T\) and for clarity of notation let us write \(\vec {\eta }_{t_0, \ldots , t_k} = (\eta _{t_{0}}^{-1}\eta _{t_1}, \ldots , \eta _{t_{k-1}}^{-1}\eta _{t_k})\) for the corresponding sequence of increments of \(\eta \). Suppose we want to bound the probability \(\mathbb {P}^{N}(\vec {\eta }_{t_0, \ldots , t_k} = (\sigma ^N_{1}, \ldots , \sigma ^N_{k}))\), where \((\sigma ^N_{1}, \ldots , \sigma ^N_{k})\) is a fixed sequence of permutations for each N, \(\sigma ^N_j \in {\mathcal {S}}_N\). Recall that the interchange process has independent increments, i.e., the permutations \(\left( \eta _{t_{j}}\right) ^{-1} \eta _{t_{j+1}}\) for any family of non-overlapping intervals \([t_{j}, t_{j+1})\) are independent. Therefore we can write

$$\begin{aligned} \mathbb {P}^{N}(\vec {\eta }_{t_0, \ldots , t_k} = (\sigma ^N_{1}, \ldots , \sigma ^N_{k})) = \prod \limits _{j=1}^{k} \mathbb {P}^{N}(\eta _{t_{j-1}}^{-1} \eta _{t_j} = \sigma ^N_{j}). \end{aligned}$$

As the increments of the interchange process are stationary, we have \(\mathbb {P}^{N}(\eta _{t_{j-1}}^{-1} \eta _{t_j} = \sigma ^N_{j}) = \mathbb {P}^{N}(\eta _{0}^{-1} \eta _{t_{j} - t_{j-1}} = \sigma ^N_{j})\). Thus by applying Proposition 8.1 we obtain

$$\begin{aligned} \limsup \limits _{N \rightarrow \infty } N^{-\gamma }\log \mathbb {P}^{N}(\vec {\eta }_{t_0, \ldots , t_k} = (\sigma ^N_{1}, \ldots , \sigma ^N_{k})) \le - \liminf \limits _{N \rightarrow \infty } \sum \limits _{j=1}^{k} \frac{1}{t_j - t_{j-1}}I(\sigma ^N_j). \end{aligned}$$
(65)

Recall that \(\mu ^{\eta ^N}\) denotes the distribution of the random permutation process associated to \(\eta ^N\) (defined by (5)) and that for a finite partition \(\Pi \) by \(I^{\Pi }(\mu ^{\eta ^N})\) we denote the approximation of the energy of \(\mu ^{\eta ^N}\) associated to \(\Pi \) (defined by (11)). From Eq. (65) we obtain the following corollary, which will be useful later

Corollary 8.3

For any \(C > 0\) and any finite partition \(\Pi = \{ 0 = t_0< t_1< \ldots < t_k = T\}\) we have

$$\begin{aligned} \limsup \limits _{N \rightarrow \infty } N^{-\gamma }\log \mathbb {P}^{N}(I^{\Pi }(\mu ^{\eta ^{N}}) \ge C) \le - C. \end{aligned}$$

Proof

Consider the set \(A_{C}^{N}\) of all sequences of permutations \((\sigma ^N_1, \ldots , \sigma ^N_k)\), \(\sigma ^N_j \in {\mathcal {S}}_N\), such that \(\sum \limits _{j=1}^{k} \frac{1}{t_j - t_{j-1}}I(\sigma ^N_j) \ge C\). By performing a union bound over all such sequences we get

$$\begin{aligned} \mathbb {P}^{N}(I^{\Pi }(\mu ^{\eta ^{N}}) \ge C) \le N!^k \sup \limits _{A_{C}^{N}} \mathbb {P}^{N}(\vec {\eta }_{t_0, \ldots , t_k} = (\sigma ^N_{1}, \ldots , \sigma ^N_{k})). \end{aligned}$$

Now it is enough to observe that for fixed k we have \(\log (N!^k) = k \log N! \le k N \log N = o(N^\gamma )\) (recall that \(\gamma = 3 - \alpha > 1\)) and apply (65). \(\square \)
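The counting step above only uses that \(\log N! = O(N \log N)\) is negligible compared to \(N^{\gamma }\) for \(\gamma > 1\). A quick numerical confirmation of this (slow) decay, with illustrative values of \(\gamma \) and k:

```python
import math

# k * log(N!) ~ k * N * log N is o(N^gamma) whenever gamma > 1:
gamma, k = 1.5, 10
for N in [10**2, 10**4, 10**6]:
    print(N, k * math.lgamma(N + 1) / N**gamma)   # the ratio decreases to 0
```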

Now we can proceed to prove a general large deviation upper bound, announced as Theorem B in the introduction. Recall that \(\mathcal {P}\subseteq \mathcal {M}(\mathcal {D})\) denotes the space of all permuton and approximate permuton processes.

Theorem 8.4

Let \(\mathbb {P}^{N}\) be the law of the interchange process \(\eta ^{N}\) and let \(\mu ^{\eta ^{N}}\) be the (random) distribution of the corresponding random permutation process \(X^{\eta ^N}\). For any closed set \({\mathcal {C}} \subseteq \mathcal {P}\) we have

$$\begin{aligned} \limsup _{N \rightarrow \infty } N^{-\gamma } \log \mathbb {P}^{N}\left( \mu ^{\eta ^{N}} \in {\mathcal {C}} \right) \le - \inf \limits _{\pi \in {\mathcal {C}}} I(\pi ), \end{aligned}$$

where \(I(\pi )\) is the energy of the process \(\pi \) defined by (10).

Proof

It is standard (see, e.g., [Var16, Lemma 2.3]) that the large deviation upper bound for closed sets follows from a local upper bound for open balls and exponential tightness of the sequence \(\mu ^{\eta ^{N}}\). The exponential tightness part will be proved in Proposition 8.5 below, so here we focus on the first part, that is, we will prove that for any \(\pi \in \mathcal {P}\) we have

$$\begin{aligned} \limsup _{\varepsilon \rightarrow 0} \limsup _{N \rightarrow \infty } N^{-\gamma } \log \mathbb {P}^{N}\left( \mu ^{\eta ^{N}} \in B(\pi , \varepsilon ) \right) \le - I(\pi ), \end{aligned}$$
(66)

where \(B(\pi , \varepsilon )\) denotes the open \(\varepsilon \)-ball around \(\pi \) in the Wasserstein distance \(d_{{\mathcal {W}}}\) on \(\mathcal {P}\).

Fix a finite set of times \(0 = t_{0}< t_{1}< \ldots < t_{k} = T\). Since almost surely the interchange process does not make jumps at any of the prescribed times \(t_0, t_1, \ldots , t_k\), by continuity of projections for any \(\varepsilon > 0\) there exists \(\varepsilon ' > 0\) such that

$$\begin{aligned} \mathbb {P}^{N}\left( d_{{\mathcal {W}}}(\mu ^{\eta ^{N}}, \pi )< \varepsilon ' \right) \le \mathbb {P}^{N} \left( d(\mu ^{\eta ^{N}}_{t_{0}, t_{1}}, \pi _{t_0, t_1})< \varepsilon \wedge \ldots \wedge d(\mu ^{\eta ^{N}}_{t_{k-1}, t_{k}},\pi _{t_{k-1}, t_{k}}) < \varepsilon \right) , \end{aligned}$$
(67)

where d denotes the Wasserstein distance on \(\mathcal {M}([0,1]^2)\). Furthermore, note that the permutation process with distribution \(\mu ^{\eta ^{N}}\) has independent increments, i.e., the permutations \(\left( \eta _{t_{j}}^{N}\right) ^{-1} \eta _{t_{j+1}}^{N}\) for any family of non-overlapping intervals \([t_{j}, t_{j+1})\) are independent. Thus we can write

$$\begin{aligned}&\mathbb {P}^{N} \left( d(\mu ^{\eta ^{N}}_{t_{0}, t_{1}}, \pi _{t_0, t_1})< \varepsilon \wedge \ldots \wedge d(\mu ^{\eta ^{N}}_{t_{k-1}, t_{k}},\pi _{t_{k-1}, t_{k}})< \varepsilon \right) \nonumber \\&\quad = \prod \limits _{i=0}^{k-1} \mathbb {P}^{N} \left( d(\mu ^{\eta ^{N}}_{t_{i}, t_{i+1}}, \pi _{t_{i}, t_{i+1}}) < \varepsilon \right) . \end{aligned}$$
(68)

In this way we have reduced the problem to bounding the probability that the random measure \(\mu ^{\eta ^{N}}_{t_{i}, t_{i+1}}\) is close to a fixed permuton \(\pi _{t_{i}, t_{i+1}}\).

Fix i and consider all permutations \(\sigma \in {\mathcal {S}}_{N}\) such that the empirical measure \(\mu _{\sigma }\) satisfies \(d(\mu _{\sigma }, \pi _{t_{i}, t_{i+1}}) < \varepsilon \). As there are at most N! such permutations, by performing a union bound over this set we obtain

$$\begin{aligned} \mathbb {P}^{N} \left( d(\mu ^{\eta ^{N}}_{t_{i}, t_{i+1}}, \pi _{t_{i}, t_{i+1}})< \varepsilon \right) \le N! \sup _{ \begin{array}{c} \sigma \in {\mathcal {S}}_{N} \\ d(\mu _{\sigma }, \pi _{t_{i}, t_{i+1}}) < \varepsilon \end{array}} \mathbb {P}^{N} \left( \mu ^{\eta ^{N}}_{t_{i}, t_{i+1}} = \mu _{\sigma }\right) , \end{aligned}$$

where on the right hand side we have the probability that the random measure \(\mu _{t_i, t_{i+1}}^{\eta ^{N}}\) is equal to \(\mu _{\sigma }\). This probability is simply equal to \(\mathbb {P}^{N} \left( \left( \eta _{t_i}^{N}\right) ^{-1} \eta _{t_{i+1}}^{N} = \sigma \right) \) and by stationarity of the increments of the interchange process this is the same as \(\mathbb {P}^{N} \left( \left( \eta _{0}^{N}\right) ^{-1} \eta _{t_{i+1} - t_i}^{N} = \sigma \right) \). By employing Proposition 8.1, with \(\sigma ^N \in {\mathcal {S}}_N\) being any permutation attaining the supremum above, and noticing that \(\log N! = o(N^{\gamma })\) we get

$$\begin{aligned} \limsup _{N \rightarrow \infty } N^{-\gamma } \log \mathbb {P}^{N} \left( d(\mu ^{\eta ^{N}}_{t_i, t_{i+1}}, \pi _{t_{i}, t_{i+1}}) < \varepsilon \right) \le \limsup _{N \rightarrow \infty } \left( - \frac{1}{t_{i+1} - t_i} I(\sigma ^N)\right) . \end{aligned}$$

Now observe that for any \(\sigma \) such that \(d(\mu _{\sigma }, \pi _{t_{i}, t_{i+1}}) < \varepsilon \) the energy \(I(\sigma ) = I(\mu _{\sigma })\) has to be close to \(I(\pi _{t_{i}, t_{i+1}})\), the energy of the permuton \(\pi _{t_i, t_{i+1}}\) (recall (9)), since I is continuous in the weak topology on \(\mathcal {M}([0,1]^2)\). Thus upon taking \(\varepsilon \rightarrow 0\) we obtain

$$\begin{aligned} \limsup _{\varepsilon \rightarrow 0} \limsup _{N \rightarrow \infty } N^{-\gamma } \log \mathbb {P}^{N} \left( d(\mu ^{\eta ^{N}}_{t_i, t_{i+1}}, \pi _{t_{i}, t_{i+1}}) < \varepsilon \right) \le -\frac{1}{t_{i+1} - t_{i}} I(\pi _{t_{i}, t_{i+1}}) . \end{aligned}$$

Applying this estimate to the product in (68) and observing that in (67) without loss of generality we can assume \(\varepsilon ' \le \varepsilon \), we arrive at the following bound

$$\begin{aligned}&\limsup _{\varepsilon \rightarrow 0} \limsup _{N \rightarrow \infty } N^{-\gamma } \log \mathbb {P}^{N}\left( d_{{\mathcal {W}}}(\mu ^{\eta ^{N}}, \pi ) < \varepsilon \right) \le -\sum \limits _{i=0}^{k-1} \frac{1}{t_{i+1} - t_{i}} I(\pi _{t_{i}, t_{i+1}}). \end{aligned}$$

Since \(t_0, t_1, \ldots , t_k\) were arbitrary, by optimizing over all finite partitions \(\Pi = \{ 0 = t_{0}< t_{1}< \ldots < t_{k} = T \}\) we obtain

$$\begin{aligned} \limsup _{\varepsilon \rightarrow 0} \limsup _{N \rightarrow \infty } N^{-\gamma } \log \mathbb {P}^{N}\left( d_{{\mathcal {W}}}(\mu ^{\eta ^{N}}, \pi ) < \varepsilon \right) \le - \sup _{\Pi } \sum \limits _{i=0}^{k-1} \frac{1}{t_{i+1} - t_{i}} I(\pi _{t_{i}, t_{i+1}}). \end{aligned}$$

Recalling the definitions (6), (10) and (11), to prove (66) it remains to show that we have \(I(\pi ) = \sup \limits _{\Pi } I^{\Pi }(\pi )\), which is exactly the statement of Lemma 2.2. \(\square \)

Proposition 8.5

The family of measures \(\mu ^{\eta ^{N}}\) is exponentially tight, that is, there exists a sequence of compact sets \(K_{m} \subseteq \mathcal {P}\) such that

$$\begin{aligned} \limsup _{N \rightarrow \infty } N^{-\gamma } \log \mathbb {P}^{N} (\mu ^{\eta ^{N}} \notin K_{m}) \le - m. \end{aligned}$$

Proof

The idea of the proof is to show that having many particles whose trajectories have a poor modulus of continuity (which would spoil compactness) necessarily implies that the process has high energy, which by Corollary 8.3 is unlikely.

Recall that \(\mathcal {D}= \mathcal {D}([0,T], [0,1])\) is the space of all càdlàg paths from [0, T] to [0, 1]. It will be convenient to work with the following notion of a càdlàg modulus of continuity: for a path \(f \in \mathcal {D}\) we define

$$\begin{aligned} w_{\delta }''(f) = \sup _{\begin{array}{c} t_1 \le t \le t_2 \\ t_2 - t_1 \le \delta \end{array}} \left\{ |f(t) - f(t_1)| \wedge |f(t_2) - f(t)| \right\} . \end{aligned}$$
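For intuition, here is a brute-force evaluation of \(w_{\delta }''\) on piecewise-constant paths (an illustrative sketch, exact only up to the grid resolution). It shows why this modulus suits càdlàg paths: it ignores isolated jumps but detects two large jumps within a single \(\delta \)-window.

```python
import numpy as np

def w2_modulus(jump_times, values, T, delta, grid=100):
    """Brute-force w''_delta(f) for a piecewise-constant path f with
    f(u) = values[j] for u in [jump_times[j], jump_times[j+1]);
    requires jump_times[0] == 0."""
    f = lambda u: values[np.searchsorted(jump_times, u, side='right') - 1]
    ts = np.linspace(0.0, T, grid)
    best = 0.0
    for t1 in ts:
        window = ts[(ts >= t1) & (ts <= t1 + delta)]
        for t in window:
            for t2 in window[window >= t]:
                best = max(best, min(abs(f(t) - f(t1)), abs(f(t2) - f(t))))
    return best

# An isolated jump is invisible to w'':
print(w2_modulus(np.array([0.0, 0.5]), np.array([0.0, 0.3]), T=1.0, delta=0.1))
# Two jumps within a delta-window are detected (the result is ~0.3):
print(w2_modulus(np.array([0.0, 0.5, 0.55]), np.array([0.0, 0.3, 0.0]),
                 T=1.0, delta=0.1))
```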

By a characterization of compactness in the Skorokhod space ([Bil13, Theorem 12.4]) a set \(A \subseteq \mathcal {D}\) has compact closure if and only if the following conditions hold

$$\begin{aligned} {\left\{ \begin{array}{ll} \sup \limits _{f \in A} \sup \limits _{t \in [0,T]} |f(t)| < \infty , \\ \lim \limits _{\delta \rightarrow 0} \sup \limits _{f \in A} w_{\delta }''(f) = 0, \\ \lim \limits _{\delta \rightarrow 0} \sup \limits _{f \in A} |f(\delta ) - f(0)| = 0, \\ \lim \limits _{\delta \rightarrow 0} \sup \limits _{f \in A} |f(T-) - f(T-\delta )| = 0. \end{array}\right. } \end{aligned}$$

In our setting the first condition is trivially satisfied. To exploit the other conditions let us introduce for any \(m,r \ge 1\) the following sets

$$\begin{aligned}&K^{w}_{m,r} = \bigcap \limits _{k \ge 1}\left\{ f \in \mathcal {D}\, \big \vert \, w_{\delta _{k}(m,r)}''(f) \le \varepsilon _{k} \right\} , \\&K^{0}_{m,r} = \bigcap \limits _{k \ge 1}\left\{ f \in \mathcal {D}\, \big \vert \, |f(\delta _{k}(m,r)) - f(0)| \le \varepsilon _{k} \right\} , \\&K^{T}_{m,r} = \bigcap \limits _{k \ge 1}\left\{ f \in \mathcal {D}\, \big \vert \, |f(T-) - f(T - \delta _{k}(m,r))| \le \varepsilon _{k} \right\} , \end{aligned}$$

and

$$\begin{aligned} K_{m,r} = K^{w}_{m,r} \cap K^{0}_{m,r} \cap K^{T}_{m,r}, \end{aligned}$$

where \(\varepsilon _{k} = 4^{-k}\) and \(\delta _{k}(m,r)\) will be appropriately chosen later. We will assume that for fixed m, r we have \(\lim \limits _{k \rightarrow \infty } \delta _{k}(m,r) = 0\) and that for any \(k \ge 1\) both \(\frac{T}{\delta _k(m,r)}\) and \(\frac{\delta _{k}(m,r)}{\delta _{k+1}(m,r)}\) are integers (the latter assumption is for simplicity of notation only). Note that by the aforementioned compactness conditions each set \(K_{m,r}\) has compact closure in \(\mathcal {D}\).

Let

$$\begin{aligned} K_m = \bigcap \limits _{r \ge 1} \left\{ \mu \in \mathcal {M}(\mathcal {D}) \, \bigg \vert \, \mu (K_{m,r}) \ge 1 - \frac{1}{r} \right\} . \end{aligned}$$

We claim that \(K_m\) has compact closure in \(\mathcal {M}(\mathcal {D})\). Indeed, by Prokhorov’s theorem it is enough to prove that \(K_m\) is tight. If \(\mu \in K_m\), then for any \(r \ge 1\) we have \(\mu (K_{m,r}^{c}) \le \frac{1}{r}\), so the sets \(K_{m,r}\) form the family of compact sets needed for tightness of \(K_m\).

The sets \(K_m\) (possibly after taking their closures) will form the family of compact sets needed for exponential tightness. Thus our goal is to bound \(\mathbb {P}^{N}(\mu ^{\eta ^{N}} \notin K_{m})\). Let us write

$$\begin{aligned} \mathbb {P}^{N}(\mu ^{\eta ^{N}} \notin K_{m}) = \mathbb {P}^{N}\left( \exists _{r \ge 1} \, \, \mu ^{\eta ^{N}} (K_{m,r}^{c}) \ge \frac{1}{r} \right) \le \sum \limits _{r \ge 1} \mathbb {P}^{N}\left( \mu ^{\eta ^{N}} (K_{m,r}^{c}) \ge \frac{1}{r}\right) . \end{aligned}$$

It is enough to show that for any \(m,r \ge 1\) and any \(N \ge 1\) we have

$$\begin{aligned} \mathbb {P}^{N}\left( \mu ^{\eta ^{N}} (K_{m,r}^{c}) \ge \frac{1}{r} \right) \le C e^{-mr N^{\gamma }}, \end{aligned}$$
(69)

where \(C > 0\) is some global constant.

For any given m and r, observe that \(\mu ^{\eta ^{N}} (K_{m,r}^{c}) \ge \frac{1}{r}\) means that in \(\eta ^N\) we have at least \(\frac{N}{r}\) particles with paths \(f \notin K_{m,r}\). Since \(K_{m,r} = K^{w}_{m,r} \cap K^{0}_{m,r} \cap K^{T}_{m,r}\), clearly it is enough to estimate separately the probabilities that at least \(\frac{N}{3r}\) particles have paths not in \(K^{w}_{m,r}\), \(K^{0}_{m,r}\) or \(K^{T}_{m,r}\), respectively. The argument for \(K^{0}_{m,r}\) and \(K^{T}_{m,r}\) is much simpler, so we skip it and concentrate only on the case of \(K^{w}_{m,r}\). For simplicity we will write \(\alpha (r) = \frac{1}{3r}\).

For fixed m and r we will call a path f bad if \(w_{\delta _{k}(m,r)}''(f) > \varepsilon _{k}\) for some \(k \ge 1\). We will call f bad exactly at scale k if \(w_{\delta _{k}(m,r)}''(f) > \varepsilon _{k}\), but \(w_{\delta _{j}(m,r)}''(f) \le \varepsilon _{j}\) for all \(j \ge k+1\). Recalling the definition of the set \(K^{w}_{m,r}\), the event whose probability we would like to bound is

$$\begin{aligned} A_{N}^{m,r} = \left\{ \text {there exist at least } \alpha (r) N \text { particles with bad paths} \right\} . \end{aligned}$$

Consider now the events

$$\begin{aligned} B_{N}^{m,r,k} = \left\{ \text {there exist at least } \frac{\alpha (r)}{2^k} N \text { particles whose paths are bad exactly at scale } k \right\} . \end{aligned}$$

Note that if f is a bad path with jumps of fixed size \(\frac{1}{N}\), then there exists \(k \ge 1\) such that f is bad exactly at scale k (since all paths we are considering are càdlàg). Thus we have \(A_{N}^{m,r} \subseteq \bigcup \limits _{k \ge 1} B_{N}^{m,r,k}\), so

$$\begin{aligned} \mathbb {P}^{N}\left( \mu ^{\eta ^{N}} ((K_{m,r}^{w})^c) \ge \alpha (r)\right) = \mathbb {P}^{N}\left( A_{N}^{m,r}\right) \le \sum \limits _{k \ge 1} \mathbb {P}^{N}(B_{N}^{m,r,k}). \end{aligned}$$

Thus it is enough to show that for any \(m,r,k \ge 1\) and any \(N \ge 1\) we have

$$\begin{aligned} \mathbb {P}^{N}(B_{N}^{m,r,k}) \le e^{-mrk N^\gamma }. \end{aligned}$$
(70)

From now on we fix m, r, k and N. All paths we are considering are assumed to come from the interchange process \(\eta ^N\), in particular they have jumps of fixed size \(\frac{1}{N}\). For the sake of brevity we will simply write \(\delta _k = \delta _k(m,r)\).

Let us divide the interval [0, T] into \(J = \frac{T}{\delta _k}\) intervals of the form \([j \delta _k, (j+1)\delta _k]\), \(j=0, \ldots , J-1\). Consider any path f which is bad exactly at scale k. The condition \(w_{\delta _{k}}''(f) > \varepsilon _{k}\) implies that for some \(t, t_1, t_2\) such that \(t ,t_2 \in [t_1, t_1 + \delta _k]\) we have \(|f(t) - f(t_1)| > \varepsilon _k\) and \(|f(t_2) - f(t)| > \varepsilon _k\). A simple application of the triangle inequality implies that there exists \(j \in \{0, \ldots , J-1 \}\) and \(t' \in [j \delta _k, (j+1)\delta _k]\) such that \(|f(j\delta _k) - f(t')| > \frac{\varepsilon _k}{2}\).

Let us consider the interval \([s, s'] = [j \delta _k, (j+1)\delta _k]\) obtained above. Subdivide it into \(L = \frac{\delta _k}{\delta _{k+1}}\) intervals of the form \([s_{\ell }, s_{\ell +1}]\), \(\ell = 0, \ldots , L-1\), where \(s_{\ell } = s + \ell \delta _{k+1}\). For \(\ell =0, \ldots , L-1\) let \(\Delta _{\ell }(f) = |f(s_{\ell }) - f(s_{\ell +1})|\).

The crucial observation is that \(\sum \limits _{\ell =0}^{L-1} \Delta _{\ell }(f) > \frac{\varepsilon _k}{4}\). To see this, consider \(t'\) such that \(|f(s) - f(t')| > \frac{\varepsilon _k}{2}\), obtained above, and let \({\tilde{\ell }}\) be such that \(t' \in (s_{{\tilde{\ell }}}, s_{{\tilde{\ell }} + 1}]\). By the triangle inequality we have

$$\begin{aligned} \frac{\varepsilon _{k}}{2} < |f(s) - f(t')| \le \sum \limits _{\ell =0}^{{\tilde{\ell }} - 1} \Delta _{\ell }(f) + |f(s_{{\tilde{\ell }}}) - f(t')|. \end{aligned}$$

Since f is bad exactly at scale k, we have \(w_{\delta _{k+1}}''(f) \le \varepsilon _{k+1}\), which together with \(|s_{{\tilde{\ell }}} - t'| \le \delta _{k+1}\) implies \(|f(s_{{\tilde{\ell }}}) - f(t')| \le \varepsilon _{k+1} = 4^{-(k+1)} = \frac{\varepsilon _{k}}{4}\). Thus necessarily \(\sum \limits _{\ell =0}^{{\tilde{\ell }} - 1} \Delta _{\ell }(f) > \frac{\varepsilon _k}{4}\). From this we obtain

$$\begin{aligned} \frac{\varepsilon _k^2}{16} < \left( \sum \limits _{\ell =0}^{L-1} \Delta _{\ell }(f) \right) ^2 \le L \sum \limits _{\ell =0}^{L-1} \Delta _{\ell }(f)^2, \end{aligned}$$
(71)

where the right-hand side estimate follows from the Cauchy-Schwarz inequality.

Now let us suppose that the event \(B_{N}^{m,r,k}\) holds. Then there exist at least \(\frac{\alpha (r)}{2^k}N\) paths \(f_i\) for which the estimate (71) holds. Consider the partition \(\Pi = \{0 = t_0< t_1< \ldots < t_n = T \}\) where \(n = \frac{T}{\delta _{k+1}}\), \(t_j = j \delta _{k+1}\) for \(j=0, \ldots , n\). Recalling that \(f_i = \frac{1}{N}\eta ^{N}(i)\), the definition of \(\Delta _{\ell }(f)\) and the definition (11) of the energy \(I^{\Pi }(\mu ^{\eta ^{N}})\) we obtain that on \(B_{N}^{m,r,k}\) we have

$$\begin{aligned} I^{\Pi }(\mu ^{\eta ^{N}})&= \frac{1}{N} \sum \limits _{i=1}^{N} \left( \frac{1}{2} \sum \limits _{j=1}^{n} \frac{| f_{i}(t_{j}) - f_{i}(t_{j-1}) |^2}{t_{j} - t_{j-1}} \right) \\&= \frac{1}{N} \sum \limits _{i=1}^{N} \left( \frac{1}{2\delta _{k+1}} \sum \limits _{j=1}^{n} | f_{i}(t_{j}) - f_{i}(t_{j-1}) |^2 \right) > \frac{1}{N} \cdot \frac{\alpha (r)}{2^k}N \cdot \left( \frac{1}{2\delta _{k+1}} \frac{\varepsilon _k^2}{16 L} \right) \\&=\frac{\alpha (r)}{2^{k+5}} \frac{1}{\delta _{k+1}} \frac{\varepsilon _k^2}{ \frac{\delta _k}{\delta _{k+1}}} = \frac{\varepsilon _k^2}{\delta _k} \frac{\alpha (r)}{2^{k+5}}. \end{aligned}$$

Writing again \(\delta _k = \delta _k(m,r)\), we have thus obtained the bound

$$\begin{aligned} \mathbb {P}^{N}(B_{N}^{m,r,k}) \le \mathbb {P}^{N}\left( I^{\Pi }(\mu ^{\eta ^{N}}) \ge \frac{\varepsilon _k^2}{\delta _k(m,r)} \frac{\alpha (r)}{2^{k+5}}\right) . \end{aligned}$$

Recalling \(\varepsilon _k = 4^{-k}\), \(\alpha (r) = \frac{1}{3r}\), we see that to prove (70) it is sufficient to take \(\delta _k(m,r)\) small enough so that

$$\begin{aligned} \frac{4^{-2k}}{\delta _k(m,r)} \frac{1}{3r 2^{k+5}} \ge 2 mrk. \end{aligned}$$
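Ignoring the divisibility requirements on \(\frac{T}{\delta _k}\) and \(\frac{\delta _{k}}{\delta _{k+1}}\) (which can only force \(\delta _k\) to be smaller), one admissible choice simply solves the displayed inequality with equality; a sketch:

```python
def delta_k(m, r, k):
    """Largest delta_k(m,r) allowed by the inequality
    4^{-2k} / (delta * 3r * 2^{k+5}) >= 2mrk (the divisibility
    adjustments are omitted here)."""
    return 4.0**(-2 * k) / (3 * r * 2**(k + 5) * 2 * m * r * k)

print(delta_k(1, 1, 1))   # ~1.6e-4 already at the first scale
```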

By applying Corollary 8.3 we obtain that

$$\begin{aligned} \mathbb {P}^{N}(B_{N}^{m,r,k}) \le \mathbb {P}^{N}\left( I^{\Pi }(\mu ^{\eta ^{N}}) \ge 2mrk \right) \le e^{-mrkN^{\gamma }} \end{aligned}$$

for N large enough. By taking \(\delta _k(m,r)\) even smaller if necessary we can make this estimate true for all values of \(N \ge 1\), which proves (70) and finishes the proof of exponential tightness. \(\square \)

9 Asymptotics of Relaxed Sorting Networks

In this section we prove Theorem 1.1, describing the limiting behavior of random relaxed sorting networks, and the asymptotic counting formula of Theorem 1.2. With the large deviation bounds obtained in the preceding sections both proofs are now rather straightforward.

Proof of Theorem 1.1

Let \({\mathcal {R}} \subseteq \mathcal {P}\) be the set of permuton processes X reaching exactly the reverse permuton at time 1, i.e., such that \((X_0, X_1) \sim (X, 1 - X)\), and likewise let \({\mathcal {R}}_N\) be the set of permutation processes on N elements reaching exactly the reverse permutation \(\textrm{rev}_N = (N \, \ldots \, 2 \, 1)\) at time 1. Let \({\mathcal {R}}_{\delta }\) denote the \(\delta \)-neighborhood in the Wasserstein distance on \(\mathcal {P}\) of the set \({\mathcal {R}} \cup \bigcup \limits _{N \ge 1} {\mathcal {R}}_N\).

Let \(\eta ^N\) be the interchange process with \(\alpha = 1 + \kappa \in (1,2)\) and let \(\mu ^{\eta ^N}\) be the distribution of the corresponding permutation process. By definition of a random relaxed sorting network, for any given \(\delta > 0\) we have for sufficiently large N

$$\begin{aligned} \mathbb {P}^N \left( \pi ^{N}_{\delta } \in B(\pi _{{\mathcal {A}}}, \varepsilon ) \right) = \mathbb {P}^N \left( \mu ^{\eta ^N} \in B(\pi _{{\mathcal {A}}}, \varepsilon ) \big \vert \mu ^{\eta ^N} \in {\mathcal {R}}_{\delta } \right) . \end{aligned}$$

Now, we have

$$\begin{aligned} \mathbb {P}^N \left( \mu ^{\eta ^N} \notin B(\pi _{{\mathcal {A}}}, \varepsilon ) \big \vert \mu ^{\eta ^N} \in {\mathcal {R}}_{\delta } \right) = \frac{1}{\mathbb {P}^N\left( \mu ^{\eta ^N} \in {\mathcal {R}}_{\delta } \right) }\mathbb {P}^N \left( \left\{ \mu ^{\eta ^N} \notin B(\pi _{{\mathcal {A}}}, \varepsilon ) \right\} \cap \left\{ \mu ^{\eta ^N} \in {\mathcal {R}}_{\delta } \right\} \right) . \end{aligned}$$

By the large deviation lower bound of Theorem 7.3 we have

$$\begin{aligned} \mathbb {P}^N\left( \mu ^{\eta ^N} \in {\mathcal {R}}_{\delta } \right) \ge \exp \left\{ -N^\gamma \left( I(\pi _{{\mathcal {A}}}) + o(1) \right) \right\} , \end{aligned}$$

where \(\gamma = 3 - \alpha \).

Let \({\mathcal {C}}_{\varepsilon , \delta } = B(\pi _{{\mathcal {A}}}, \varepsilon )^{c} \cap \overline{{\mathcal {R}}_{\delta }}\). By the large deviation upper bound of Theorem 8.4 we have

$$\begin{aligned} \mathbb {P}^N \left( \mu ^{\eta ^N} \in {\mathcal {C}}_{\varepsilon , \delta } \right) \le \exp \left\{ -N^\gamma \left( \inf \limits _{\mu \in {\mathcal {C}}_{\varepsilon , \delta }} I(\mu ) + o(1)\right) \right\} . \end{aligned}$$

Since \(\pi _{{\mathcal {A}}}\) is the unique minimizer of energy on \({\mathcal {R}}\) ([Bre89, RVV19]), given \(\varepsilon > 0\) there exists \(\beta = \beta (\varepsilon ) > 0\) such that

$$\begin{aligned} \inf \limits _{\mu \in B(\pi _{{\mathcal {A}}}, \varepsilon )^c \cap {\mathcal {R}}} I(\mu ) \ge I(\pi _{{\mathcal {A}}}) + \beta . \end{aligned}$$
(72)

Since \(I(\cdot )\) is lower semi-continuous, by (72) we obtain (possibly after adjusting \(\delta \) to replace \(\overline{{\mathcal {R}}_{\delta }}\) with \({\mathcal {R}}_{\delta }\)) that for all sufficiently small \(\delta \) we have

$$\begin{aligned} \inf \limits _{\mu \in B(\pi _{{\mathcal {A}}}, \varepsilon )^c \cap {\mathcal {R}}_{\delta }} I(\mu ) \ge I(\pi _{{\mathcal {A}}}) + \frac{\beta }{2}. \end{aligned}$$

Altogether we obtain that for all sufficiently small \(\delta \) (depending on \(\varepsilon \) only)

$$\begin{aligned} \mathbb {P}^N \left( \mu ^{\eta ^N} \notin B(\pi _{{\mathcal {A}}}, \varepsilon ) \big \vert \mu ^{\eta ^N} \in {\mathcal {R}}_{\delta } \right) \le e^{N^{\gamma } \left( I(\pi _{{\mathcal {A}}}) + o(1) \right) } e^{- N^{\gamma } \left( I(\pi _{{\mathcal {A}}}) + \frac{\beta }{2} + o(1) \right) } = e^{-N^{\gamma } \left( \frac{\beta }{2} + o(1) \right) } \end{aligned}$$

and the right-hand side goes to 0 as \(N \rightarrow \infty \). \(\square \)

Proof of Theorem 1.2

Let \(\mathbb {P}^N\) denote the law of the interchange process with \(\alpha = 1 + \kappa \). Let J be the number of all particle swaps in the process and let \(M = \left\lfloor \frac{1}{2} N^{\alpha }(N-1) \right\rfloor \).

Let \({\mathcal {S}}_{\delta }\) denote the \(\delta \)-neighborhood of the reverse permuton in the Wasserstein distance on \(\mathcal {M}([0,1]^2)\). Observe that for sufficiently large N we have for any \(k \le M\)

$$\begin{aligned} \mathbb {P}^N \left( \mu ^{\eta ^N}_{0,T} \in {\mathcal {S}}_{\delta } \big \vert J = k\right) \le \mathbb {P}^N \left( \mu ^{\eta ^N}_{0,T} \in {\mathcal {S}}_{\delta } \big \vert J = M\right) (1 + o(1)). \end{aligned}$$
(73)

This is because if the process has made k swaps up to time \(T_k\) (where \(T_j\) denotes the time of the j-th swap) and \(\mu ^{\eta ^N}_{0,T_k} \in {\mathcal {S}}_{\delta }\), then with high probability \(\mu ^{\eta ^N}_{0,T_M} \in {\mathcal {S}}_{\delta }\) as well, since \({\mathcal {S}}_{\delta }\) is an open set in \(\mathcal {M}([0,1]^2)\) and the additional number of steps made between \(T_k\) and \(T_M\) is \(\le \frac{1}{2} N^{\alpha }(N-1) = o(N^3)\), so typically almost all particles have negligible displacement.

On the other hand, since in the interchange process each sequence of swaps of a given length is equally likely, we have

$$\begin{aligned} \mathbb {P}^N \left( \mu ^{\eta ^N}_{0,T} \in {\mathcal {S}}_{\delta } \big \vert J = M\right) = \frac{|{\mathcal {S}}^{N}_{\kappa , \delta }|}{|\mathcal {P}^{N}_{M}|}, \end{aligned}$$

where \(\mathcal {P}^{N}_{M}\) is the set of all sequences of adjacent transpositions of length M. Summing (73) over \(k \le M\) we obtain

$$\begin{aligned} \mathbb {P}^N \left( \mu ^{\eta ^N}_{0,T} \in {\mathcal {S}}_{\delta } \cap \{ J \le M \}\right) \le \mathbb {P}^N \left( J \le M \right) \frac{|{\mathcal {S}}^{N}_{\kappa , \delta }|}{|\mathcal {P}^{N}_{M}|}(1 + o(1)). \end{aligned}$$

Since under \(\mathbb {P}^N\) the number of swaps J has the Poisson distribution with mean \(\frac{1}{2}N^{\alpha }(N-1)\), we have \(\mathbb {P}^N ( J \le M ) \rightarrow 1/2\) as \(N \rightarrow \infty \).
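That \(\mathbb {P}^N ( J \le M ) \rightarrow 1/2\) is simply the central limit theorem for the Poisson distribution: a Poisson variable falls below the integer part of its mean with asymptotic probability \(\frac{1}{2}\). A quick numerical check:

```python
from scipy.stats import poisson

# P(J <= floor(lambda)) -> 1/2 for J ~ Poisson(lambda) as lambda grows:
for lam in [10.0, 10**3, 10**5]:
    print(lam, poisson.cdf(int(lam), lam))
```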

To estimate the left-hand side, let \({\widetilde{\mathbb {P}}}^N\) be the law of the biased interchange process corresponding to the sine curve process \(\pi _{{\mathcal {A}}}\). Recall Lemma 7.1 and for fixed \(\varepsilon > 0\) let A be the event that the o(1) term in the formula for \(\frac{\textrm{d}\mathbb {P}_{u}^{N}}{\textrm{d}{\widetilde{\mathbb {P}}}^{N}}(T)\) is at most \(\varepsilon \). Let us write

$$\begin{aligned} \mathbb {P}^N \left( \mu ^{\eta ^N}_{0,T} \in {\mathcal {S}}_{\delta } \cap \{ J \le M \} \cap A\right) \le \mathbb {P}^N \left( \mu ^{\eta ^N}_{0,T} \in {\mathcal {S}}_{\delta } \cap \{ J \le M \}\right) \end{aligned}$$

and

$$\begin{aligned}&\mathbb {P}^N \left( \mu ^{\eta ^N}_{0,T} \in {\mathcal {S}}_{\delta } \cap \{ J \le M \} \cap A \right) \\&\quad = {\widetilde{\mathbb {P}}}^N \left( \mu ^{\eta ^N}_{0,T} \in {\mathcal {S}}_{\delta } \cap \{ J \le M \} \cap A \right) \frac{\mathbb {P}^N \left( \mu ^{\eta ^N}_{0,T} \in {\mathcal {S}}_{\delta } \cap \{ J \le M \}\cap A \right) }{{\widetilde{\mathbb {P}}}^N \left( \mu ^{\eta ^N}_{0,T} \in {\mathcal {S}}_{\delta } \cap \{ J \le M \} \cap A \right) }. \end{aligned}$$

By Theorem 5.1 the event \(\mu ^{\eta ^N}_{0,T} \in {\mathcal {S}}_{\delta }\) has high probability under \({\widetilde{\mathbb {P}}}^N\) and, since the particle swap rates for the biased process sum up to \(\frac{1}{2}N^{\alpha }(N-1)\) (recall (28)), we have, similarly as for the unbiased process, \({\widetilde{\mathbb {P}}}^N \left( J \le M \right) \rightarrow 1/2\) as \(N \rightarrow \infty \). By Lemma 7.1, A is a high probability event under \({\widetilde{\mathbb {P}}}^N\) as well.

To estimate the remaining probabilities, we employ the formula for the Radon–Nikodym derivative from Lemma 7.1. Since in the biased process with high probability the energy term in the derivative is close to \(I(\pi _{{\mathcal {A}}}) = \frac{\pi ^2}{6}\), we obtain

$$\begin{aligned} \frac{\mathbb {P}^N \left( \mu ^{\eta ^N}_{0,T} \in {\mathcal {S}}_{\delta } \cap \{ J \le M \}\cap A \right) }{{\widetilde{\mathbb {P}}}^N \left( \mu ^{\eta ^N}_{0,T} \in {\mathcal {S}}_{\delta } \cap \{ J \le M \} \cap A \right) } \ge e^{-N^{\gamma } \left( \frac{\pi ^2}{6} + \varepsilon \right) + o(N^\gamma )}, \end{aligned}$$

where \(\gamma = 3 - \alpha \).

Altogether we obtain

$$\begin{aligned} |{\mathcal {S}}^{N}_{\kappa , \delta }| \ge |\mathcal {P}^{N}_{M}| e^{-N^{\gamma } \left( \frac{\pi ^2}{6} + \varepsilon \right) + o(N^\gamma )}. \end{aligned}$$

Since \(|\mathcal {P}^{N}_{M}| = (N-1)^M = e^{\lfloor \frac{1}{2} N^{\alpha }(N-1) \rfloor \log (N-1)}\) and \(\varepsilon \) was arbitrary, we obtain the asymptotic lower bound on \(|{\mathcal {S}}^{N}_{\kappa , \delta }|\) as claimed.
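For scale, the sketch below compares the leading term \(M \log (N-1)\) of \(\log |\mathcal {P}^{N}_{M}|\) with the correction \(N^{\gamma } \frac{\pi ^2}{6}\), for the illustrative choice \(\kappa = \frac{1}{2}\) (so \(\gamma = \frac{3}{2}\)). The correction is of strictly lower order, yet it is exactly the term the theorem pins down.

```python
import numpy as np

kappa = 0.5
for N in [10**3, 10**4, 10**5]:
    M = int(0.5 * N**(1 + kappa) * (N - 1))
    main = M * np.log(N - 1)                 # leading term of log |P^N_M|
    corr = N**(2 - kappa) * np.pi**2 / 6     # second-order correction
    print(N, main, corr, corr / main)
```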

For the upper bound, let \({\mathcal {R}}_{\delta }\) be as in the previous theorem. By the large deviation upper bound of Theorem 8.4 we have

$$\begin{aligned} \mathbb {P}^N\left( \mu ^{\eta ^N}_{0,T} \in {\mathcal {R}}_{\delta } \right) \le \exp \left\{ -N^{\gamma } \left( \inf \limits _{\mu \in \overline{{\mathcal {R}}_{\delta }}} I(\mu ) + o(1) \right) \right\} . \end{aligned}$$

Since I is lower semi-continuous, given any \(\varepsilon > 0\) we have for all sufficiently small \(\delta > 0\)

$$\begin{aligned} \inf \limits _{\mu \in \overline{{\mathcal {R}}_{\delta }}} I(\mu ) \ge \inf \limits _{\mu \in {\mathcal {R}}} I(\mu ) - \varepsilon = I(\pi _{{\mathcal {A}}}) - \varepsilon , \end{aligned}$$

where again we have used the energy minimization property of \(\pi _{{\mathcal {A}}}\). Since \(I(\pi _{{\mathcal {A}}}) = \frac{\pi ^2}{6}\), this implies that for any \(\varepsilon > 0\) and sufficiently small \(\delta > 0\)

$$\begin{aligned} \mathbb {P}^N\left( \mu ^{\eta ^N} \in {\mathcal {R}}_{\delta } \right) \le e^{-N^{\gamma } \left( I(\pi _{{\mathcal {A}}}) - \varepsilon + o(1) \right) }. \end{aligned}$$

Now we estimate

$$\begin{aligned} \mathbb {P}^N\left( \mu ^{\eta ^N} \in {\mathcal {R}}_{\delta } \right) \ge \mathbb {P}^N\left( \mu ^{\eta ^N} \in {\mathcal {R}}_{\delta } \big \vert J = M\right) \mathbb {P}^N \left( J = M \right) = \frac{|{\mathcal {S}}^{N}_{\kappa , \delta }|}{|\mathcal {P}^{N}_{M}|}\mathbb {P}^N \left( J = M \right) \end{aligned}$$

and use the same asymptotic estimate for \(|\mathcal {P}^{N}_{M}|\) as in the lower bound. Since J is Poisson with mean \(\frac{1}{2}N^\alpha (N-1)\) under \(\mathbb {P}^N\), the second term on the right-hand side is \(e^{O(\log N)}\). Altogether we obtain

$$\begin{aligned} |{\mathcal {S}}^{N}_{\kappa , \delta }| \le \exp \left\{ \frac{1}{2}N^{1 + \kappa } (N-1) \log (N-1)- N^{2 - \kappa } \left( I(\pi _{{\mathcal {A}}}) - \varepsilon \right) + o(N^{2 - \kappa })\right\} , \end{aligned}$$

which proves the desired asymptotic upper bound on \(|{\mathcal {S}}^{N}_{\kappa , \delta }|\). \(\square \)