The number of optimal matchings for Euclidean Assignment on the line

We consider the Random Euclidean Assignment Problem in dimension $d=1$, with linear cost function. In this version of the problem, in general, there is a large degeneracy of the ground state, i.e. there are many different optimal matchings (say, $\sim \exp(S_N)$ at size $N$). We characterize all possible optimal matchings of a given instance of the problem, and we give a simple product formula for their number. Then, we study the probability distribution of $S_N$ (the zero-temperature entropy of the model), in the uniform random ensemble. We find that, for large $N$, $S_N \sim \frac{1}{2} N \log N + N s + \mathcal{O}\left( \log N \right)$, where $s$ is a random variable whose distribution $p(s)$ does not depend on $N$. We give expressions for the asymptotics of the moments of $p(s)$, both from a formulation as a Brownian process, and via singularity analysis of the generating functions associated to $S_N$. The latter approach provides a combinatorial framework that allows one to compute an asymptotic expansion to arbitrary order in $1/N$ for the mean and the variance of $S_N$.


Introduction
The Euclidean Assignment Problem (EAP) is a combinatorial optimization problem in which one has to pair N white points with N black points, minimizing a total cost function that depends on the Euclidean distances among the points. The Euclidean Random Assignment Problem (ERAP) is the statistical system in which these 2N points are drawn from a given probability measure. In the latter version of the problem, one is interested in characterizing the statistical properties of the optimal solution, such as its average cost, average structural properties, and so on.
The EAP problem has applications in Computer Science, where it has been used in computer graphics, image processing and machine learning [1], and it is also a discretised version of a problem in functional analysis, called optimal transport problem, where the N -tuples of points are replaced by probability measures. The optimal transport problem has recently seen a growing interest in pure mathematics, where it has found applications in measure theory and gradient flows [2].
To be more definite, the EAP is the optimization problem defined by
$$\min_{\pi \in \mathcal{S}_N} H_J(\pi),$$
where:
• $J = (\{w_i\}, \{b_j\})$ is the instance of the problem, i.e., in the Euclidean version in dimension $d$, the datum of the positions of $N$ white points $\{w_i\}$ and $N$ black points $\{b_j\}$ in $\mathbb{R}^d$;
• $\pi$ is a bijection between the two sets of points, linking biunivocally each white point to a unique black point; in other words, $\pi$ is a perfect matching. We denote by $\mathcal{S}_N$ the set of all possible bijections between sets of size $N$;
• $H_J(\pi) = \sum_{i=1}^N c(|w_i - b_{\pi(i)}|)$ is the cost function, i.e. the sum of the costs of the links of $\pi$, where a link is weighted using a link cost function $c : \mathbb{R}^+ \to \mathbb{R}$, depending only on the Euclidean distance between the two endpoints.
In the EAP, J is considered as fixed, while in the ERAP, J is a random variable with a fixed probability distribution.
It is a longstanding project to understand the phase diagram of the model, in the plane (p, d), when (in the simplest version of this problem) one supposes that J is given by 2N i.i.d. points from the d-dimensional hypercube $[0, 1]^d$, and that the link cost function is given by $c(x) = x^p$. This phase diagram is, to a certain extent, still mysterious, although some progress has been made recently (see for example [3]). In particular, when d = 1, the analytical characterization of optimal solutions is quite tractable [4,5]: • for p > 1, the optimal matching is ordered¹ independently of J, and this is due to the fact that the link cost function is increasing and convex. In this case, plenty of results have been obtained on the statistics of the average optimal cost [6][7][8][9]; • for 0 < p < 1, the optimal matching is non-crossing, meaning that pairs of matched points are either nested one inside the other, or disjoint (that is, matched pairs do not interlace) [4]. This property is imposed by the concavity of the link cost function. In this case, the optimal matching is not uniquely determined by the ordering of the points (although the viable candidates are typically reduced, roughly, from $n!$ to $\sqrt{n!}$), and a comparatively smaller number of results has been found so far for the random version of the problem [10][11][12][13]; • for p < 0, due to the fact that an overall positive factor in c(x) is irrelevant in the determination of the optimal matching, it is questionable whether one should consider the analytic continuation of the cost function $c(x) = x^p$ (which has the counter-intuitive property that the preferred links are the longest ones), or of the cost function $c(x) = p\, x^p$ (which has the property that the preferred links are the shortest ones, and that the limit p → 0 is well-defined, as it corresponds to $c(x) = \log x$, but has the disadvantage of having average cost −∞ when p ≤ −d).
In the first case, the optimal matching is cyclical, meaning that the permutation that describes the optimal matching has a single cycle, and some results on the average optimal cost were obtained in [7]. In the second case, in the pertinent range −d < p ≤ 0, it seems that the qualitative features of the p ∈ (0, 1) regime are preserved.
We notice that in all the cases above, as well as in all the cases with d > 1 and any p, the optimal matching is 'effectively unique', meaning that it is almost surely unique, and, even in presence of an instance showing degeneracy of ground states, almost surely an infinitesimal perturbation immediately lifts the degeneracy. This generic non-degeneracy property fails only in the case d = p = 1 (note that p = 1 is the exponent at which the cost function changes concavity). It is known [5] that in this case there are (almost surely) at least two distinct optimal matchings for each instance of the problem, the ordered matching and the Dyck matching [10], which coincide only in the rather atypical case (of probability $N!/(2N - 1)!!$) in which, for all i, the i-th white and black points are consecutive along the segment.
These observations suggest some fundamental questions for the (d, p) = (1, 1) problem:
(1) is it possible to characterize all the optimal matchings of a fixed instance J of the EAP?
(2) how many optimal configurations are there for a fixed instance J of the EAP?
(3) for random J's, what are the statistical properties of the number of optimal configurations?
The aim of this paper is to answer the three questions above. Our interest is not purely combinatorial. In fact, the ERAP is a well-known and well-studied toy model of Euclidean spin glass [14][15][16][17]. The characterization of the set $\mathcal{Z}_J \subseteq \mathcal{S}_N$ of the optimal matchings of J is thus related to the computation of the zero-temperature partition function $Z_J = |\mathcal{Z}_J|$ of the disordered model, and of its zero-temperature entropy $S_J = \log Z_J$.
In this manuscript we will focus on the statistical properties of the zero-temperature entropy, the thermodynamic potential that rules the physics of the model when the disorder is quenched, i.e. when the timescale of the dynamics of the disorder degrees of freedom of the model is much larger than that of the microscopic degrees of freedom. We leave to future investigations the study of the annealed and replicated partition functions $Z_J$ and $Z_J^k$, which are less fundamental from the point of view of statistical physics, but, as we will show elsewhere, have remarkable combinatorial and number-theoretical properties.

¹ The ordered matching is the one in which the first white point from the left is matched with the first black point, and so on; it is represented by the identity permutation $\pi_{\rm ord}(i) = i$ if the points are sorted by increasing coordinate.
1.1. Summary of results. In Section 2.1 we prove that, given a fixed instance J of the (d, p) = (1, 1) EAP problem, a matching π is optimal if and only if $k_\pi(z) = k_{LB}(z)$ for all z, where $k_{LB}(z)$ is a function of J that counts the (absolute) difference between the numbers of white and black points on the left of z, and $k_\pi(z)$ counts the number of links of π whose endpoints lie on opposite sides of z.
In Section 2.2, we show that $\mathcal{Z}_J$, the set of optimal matchings, depends on J only through the ordering of the points, while it is independent of their positions (provided that the ordering is not changed). Using a straightforward bijection between the orderings of bi-colored point configurations on the line and a class of lattice paths (the Dyck bridges), we provide a combinatorial recipe to construct the set $\mathcal{Z}_J$. As a corollary, we obtain a rather simple product formula for the cardinality $Z_J := |\mathcal{Z}_J|$, namely
$$Z_J = \prod_{\text{descending steps}} (\text{height of the step}), \qquad (4)$$
where the product runs over the descending steps of the Dyck bridge associated with the ordering of the points of configuration J, and the height of a step is, roughly speaking, the absolute value of its vertical position. This implies an analogous sum formula for the entropy $S_J = \log Z_J$.
Then, we study the statistics of $S_J$ when J is a random instance of the problem. Our techniques apply equally well, and with calculations that can be performed in parallel, to two interesting statistical ensembles. Dyck bridges: the case in which white and black points are just i.i.d. on the unit interval. Dyck excursions: the restriction of the previous ensemble to configurations in which, for all z ∈ [0, 1], there are at least as many white points as black points on the left of z.
The fact that we can study these two ensembles in parallel is present also in our study of the distribution of the energy of the "Dyck matching", that we perform elsewhere [10,13]. In Section 3.1, we highlight a connection between $S_J$ (in the two ensembles) and an integral observable over Brownian bridges (or excursions) σ. Simple scaling arguments imply that $s := (S_N - \frac{1}{2} N \log N)/N$ is a random variable with a non-trivial limit distribution for large N. We provide integral formulas for the integer moments of s, and we use these formulas to compute analytically its first two moments in the two ensembles.
In Section 3.2, we complement this analysis with a combinatorial framework at finite size N. We use this second approach to provide an effective strategy for the computation of finite-size corrections, which we illustrate by calculating the first and second moments, for both bridges and excursions.
In particular, we can establish that $S_N = \frac{1}{2} N \log N + N s + \mathcal{O}(\log N)$, where s is a random variable whose distribution depends on the ensemble (among bridges and excursions); we compute its first two moments explicitly in both cases. In Section 4 we provide numerical evidence that the distribution of the rescaled entropy s is non-Gaussian (but we leave unsolved the question whether the centered distribution for the excursions is an even function), and we confirm that the predicted values for the first two moments match the simulated data. With both our approaches it seems possible to access also higher-order moments; however, we cannot prove at present that, for any finite moment, the evaluation can be performed in closed form, nor that (as we conjecture) the result is a rational polynomial involving only $\gamma_E$ and (multiple) zeta values. We will investigate these aspects in future work.

Optimal matchings at p = 1
In the following, we focus on the EAP with p = d = 1. We have N white points and N black points on a segment [0, L] (i.e., we have 'closed' boundary conditions, instead of the 'periodic' ones we would have if the points were located on a circle). We assume that the instance is generic (i.e., no two coordinates coincide), and we label the points so that the coordinates are increasing within each color class ($w_i < w_{i+1}$ and $b_i < b_{i+1}$).

2.1. Characterization of optimal matchings. We start by giving an alternate, integral representation of the cost function $H_J(\pi)$. For a given matching π, and every z ∈ R which is not a point of W or B, define the function
$$\kappa(z, x, y) = \begin{cases} 1 & \text{if } x < z < y \text{ or } y < z < x, \\ 0 & \text{otherwise,} \end{cases} \qquad (10)$$
and set $k_\pi(z) = \sum_i \kappa(z, w_i, b_{\pi(i)})$. In words, κ(z, x, y) is just the indicator function over the segment with extreme points x and y, and $k_\pi(z)$ counts the number of links of π that have endpoints on opposite sides of z.
Furthermore, define the function
$$k_{LB}(z) = \left| \#\big(W \cap (-\infty, z)\big) - \#\big(B \cap (-\infty, z)\big) \right|,$$
where #(I) denotes the cardinality of the set I. Like $k_\pi(z)$, $k_{LB}$ is defined for all $z \in \mathbb{R} \setminus (W \cup B)$, and counts the excess of black or white points on the left of z.
Moreover, at p = 1, for all π we have $H_J(\pi) = \int \mathrm{d}z\, k_\pi(z)$, and, for all π and all z, $k_\pi(z) \ge k_{LB}(z)$. This has the immediate corollary that, for all π, $H_J(\pi) \ge \int \mathrm{d}z\, k_{LB}(z) =: H^{LB}_J$. Now, call $\pi_{\rm id}$ the identity permutation, that is, the so-called ordered matching. By simple inspection, we have that $H_J(\pi_{\rm id}) = H^{LB}_J$. This implies that $\pi_{\rm id}$ is optimal, and more generally
Corollary 1. $\pi \in \mathcal{Z}_J$ iff the functions $k_{LB}$ and $k_\pi$ coincide.
See Figure 1 for the description of all optimal matchings at N = 2.
In the following we will provide a simple algorithm to construct the optimal matchings of a given instance. In order to do this, we shall now give another characterization of optimal matchings:
Definition 1. Let π be a matching. Let us call $P = (p_1, \ldots, p_{2N})$ the ordered list of the points in W ∪ B. For 1 ≤ i ≤ 2N, we call $P_i(\pi)$ the stack of π at i, that is, the set of points in $\{p_1, \ldots, p_i\}$ that are paired by π to points in $\{p_{i+1}, \ldots, p_{2N}\}$ (see Figure 2).
Figure 1. Optimal matchings at p = 1 for 2N = 4 points. $\pi_1$ is the matching in which the first white point is matched with the first black point; $\pi_2$ is the only other possible matching. All optimal configurations satisfy $k_\pi(x) = k_{LB}(x)$.
Proposition 2. A matching π is optimal if and only if, for all 1 ≤ i ≤ 2N, the stack $P_i(\pi)$ is either empty or monochromatic.
Proof. Suppose that, for some 1 ≤ i ≤ 2N, the stack of π at i is non-empty and non-monochromatic. Then there are $p_w \in P_i(\pi) \cap W$ and $p_b \in P_i(\pi) \cap B$ which are matched to $q_b \in P_i^c(\pi) \cap B$ and $q_w \in P_i^c(\pi) \cap W$, respectively. In the matching π′ in which we swap these two pairs, we have $k_{\pi'}(z) = k_\pi(z) - 2$ for all $\max(p_w, p_b) < z < \min(q_w, q_b)$, and $k_{\pi'}(z) = k_\pi(z)$ elsewhere; thus, by Corollary 1, π cannot be optimal.
Vice versa, let π have only empty or monochromatic stacks. This means that, in a right neighbourhood of each $p_i$, the cardinality of the stack, which by definition coincides with $k_\pi$, is exactly given by $k_{LB}$. Furthermore, both functions are constant on the intervals between consecutive points (where they jump by ±1), so the two functions coincide everywhere on the domain. This means, by Corollary 1, that π must be optimal.
2.2. Enumeration of optimal matchings. We are now interested in enumerating the optimal matchings at p = 1 for a fixed configuration J of size N. First of all, we give an alternative representation of J (already adopted in [10]) that will be useful in the following. A configuration J can be encoded by sorting the 2N points in order of increasing coordinate (as in Definition 1 above), and defining
• a vector of spacings $s(J) \in (\mathbb{R}^+)^{2N}$, given by the gaps between consecutive points;
• a binary vector $\sigma(J) \in \{-1, +1\}^{2N}$ encoding the color ordering, with $\sigma_i = +1$ if $p_i$ is white and $\sigma_i = -1$ if $p_i$ is black.
It is easily seen that the criterion in Proposition 2 is stated only in terms of σ(J). This makes clear that the set $\mathcal{Z}_J$ itself is fully determined by σ(J). Thus, from this point onward, we understand that $\mathcal{Z}(\sigma)$, $Z(\sigma)$ and $S(\sigma)$ are synonyms of the quantities $\mathcal{Z}_J$, $Z_J$ and $S_J$, for any J with σ(J) = σ.
Binary vectors can be represented as lattice paths, i.e. paths in the plane, starting at the origin and composed of up-steps (or rises) (+1, +1) and down-steps (or falls) (+1, −1). So we have a bijection between zero-sum binary vectors σ and lattice bridges, in which the i-th step of the path is $(+1, \sigma_i)$. The bijection between color orderings, binary vectors and lattice paths is so elementary that in the following, with a slight abuse of notation, we will just identify the three objects.
We are interested in two classes of binary vectors / lattice paths:
• Dyck bridges $\mathcal{B}_N$ of semi-length (size) N. These are lattice paths with an equal number of up- and down-steps. They are precisely in bijection with the color orderings of N white and N black points, i.e. with the color orderings of all the possible configurations J. There are $B_N = |\mathcal{B}_N| = \binom{2N}{N}$ Dyck bridges of size N.
• Dyck paths $\mathcal{C}_N$ of semi-length (size) N. These are lattice bridges that never reach negative ordinate (we will also call them excursions, in analogy with their continuum counterparts). They are in bijection with configurations J in the aforementioned "Dyck excursions" ensemble. There are $C_N = |\mathcal{C}_N| = \frac{1}{N+1} B_N$ Dyck paths of size N.
In the following, the generating functions of the sequences $B_N$ and $C_N$ will turn out to be useful. We have
$$B(z) = \sum_{N \ge 0} B_N z^N = \frac{1}{\sqrt{1-4z}}, \qquad C(z) = \sum_{N \ge 0} C_N z^N = \frac{1 - \sqrt{1-4z}}{2z}. \qquad (15)$$
Our notion of height will be associated to the steps of the path. We call $h_i(\sigma)$ the height of the path at step i, that is, the height of the midpoint of the i-th step of the path, which in terms of the binary vector reads
$$h_i(\sigma) = \sum_{j < i} \sigma_j + \frac{\sigma_i}{2}.$$
The choice of the midpoint to compute the height is arbitrary, but has the advantage of being a symmetric definition with respect to reflections w.r.t. the x-axis, while taking values in an equispaced range of half-integers. We can then define the positive integers $\bar h_i = |h_i| + \frac{1}{2}$. Then we have
Lemma 1. $Z(\sigma) = \prod_{i \,:\, h_i \sigma_i < 0} \bar h_i$. (18)
Proof. The proof goes through the characterisation of the stacks of optimal configurations, given in Proposition 2. First of all, notice that the positions singled out by the condition $h_i \sigma_i < 0$ are those at which the stack at i − 1 decreases its size, because one point in the stack is paired to the i-th point. We shall call closing steps the elements of this set (and closing points the associated points in P), and opening steps those in the complementary set. Indeed, the cardinality of the stack at i − 1 is exactly $\bar h_i$, while the sign of $h_i$ determines the colour of the points in the stack at i − 1 (which is also the colour of the stack at i, unless the latter is empty). So, there are exactly $\bar h_i$ choices for the pairing at i, while, if i is not a closing step, the choice is unique. As the cardinalities of the stacks are the same for all optimal configurations, the choice at i does not affect the number of possible choices at j > i, and we end up with Equation (18).
Notice that Equation (18) is trivially equivalent to $S(\sigma) = \sum_{i \,:\, h_i \sigma_i < 0} \log \bar h_i$. The proof above has a stronger implication: we can construct the m-th of the $Z_J$ solutions by a polynomial-time algorithm (which takes on average time ∼ N log N and space ∼ √N if a suitable data structure is used), despite the fact that, as is apparent from Lemma 1, the typical values of $Z_J$ are potentially at least exponential in N. The algorithm goes as follows. First, rewrite m in the mixed-radix form $m - 1 = a_1 + a_2 \bar h_1 + a_3 \bar h_1 \bar h_2 + \cdots + a_N \bar h_1 \bar h_2 \cdots \bar h_{N-1}$, with $0 \le a_j < \bar h_j$ (here $\bar h_j$ denotes the value associated to the j-th closing step). Then, say that i(j) = i if the j-th closing point is $p_i$. Now, produce the N pairs of the m-th optimal matching by considering the closing points in order of increasing j, and pairing the j-th closing point to the $a_j$-th element of the stack at i(j) − 1, when the latter is sorted (say) in increasing order.
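The counting in Lemma 1 is easy to check numerically. The sketch below (plain Python; the function names are ours, not taken from any published code) computes $Z(\sigma)$ by the product formula and compares it with a brute-force count of minimum-cost matchings on small random instances with integer coordinates (so that costs compare exactly).

```python
import itertools
import random

def count_optimal(sigma):
    """Z(sigma) = product of the stack sizes over closing steps,
    i.e. over steps i with h_i * sigma_i < 0 (Lemma 1)."""
    z, height = 1, 0
    for s in sigma:            # s = +1 (white point) or -1 (black point)
        mid = height + s / 2   # mid-step height h_i
        if mid * s < 0:        # closing step: stack size is |h_i| + 1/2
            z *= int(abs(mid) + 0.5)
        height += s
    return z

def count_optimal_bruteforce(points):
    """points: list of (coordinate, color); the cost of a matching is the
    sum of |w - b| over matched pairs (p = 1)."""
    whites = sorted(x for x, c in points if c == +1)
    blacks = sorted(x for x, c in points if c == -1)
    best, count = None, 0
    for perm in itertools.permutations(range(len(blacks))):
        cost = sum(abs(w - blacks[j]) for w, j in zip(whites, perm))
        if best is None or cost < best:
            best, count = cost, 1
        elif cost == best:
            count += 1
    return count

random.seed(1)
for _ in range(20):
    N = random.randint(1, 4)
    coords = random.sample(range(1000), 2 * N)  # distinct integer coordinates
    colors = [+1] * N + [-1] * N
    random.shuffle(colors)
    points = sorted(zip(coords, colors))
    sigma = [c for _, c in points]
    assert count_optimal(sigma) == count_optimal_bruteforce(points)
print("product formula verified on random instances")
```

For instance, the ordering wwbb gives $Z = 2$ (the two matchings of Figure 1), while wbwb gives $Z = 1$.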

Statistical properties of S(σ)
We are now interested in the statistical properties of the entropy $S_N(\sigma(J))$ when σ(J) is a random variable induced by some probability measure on the space of configurations J of 2N points, and N tends to infinity. The subscript N reminds us that we are at size |W| = |B| = N.
In matching problems, the typical choices for the configurational probability measure are factorized over a measure on the spacings and a measure on the color orderings, i.e.
$$\mu(J) = \mu_{\rm spacing}(s(J))\, \mu_{\rm color}(\sigma(J)) \qquad (21)$$
(see [10] for more details and examples). As $S_N(J) = S_N(\sigma(J))$, we can again forget about the spacing degrees of freedom, and study the statistics of $S_N(\sigma)$ induced by some measure $\mu_{\rm color}(\sigma)$. In particular, we will study the cases in which σ is uniformly drawn from the set of Dyck paths, or uniformly drawn from the set of Dyck bridges.

3.1. Integral formulas for the integer moments of S(σ) via Wiener processes. It is well known (see Donsker's theorem [18]) that lattice paths such as Dyck paths and bridges converge, as N → ∞ and after a proper rescaling, to Brownian bridges and Brownian excursions. Brownian bridges are Wiener processes constrained to end at null height, while Brownian excursions are Wiener processes constrained to end at null height and to lie in the upper half-plane. The correct rescaling of the steps of the lattice paths that highlights this convergence is given by $(+1, \pm 1) \to \big(\tfrac{1}{2N}, \pm\tfrac{1}{\sqrt{N}}\big)$.

These scalings suggest to consider a rescaled version of the entropy, $s_N(\sigma) = \big(S_N(\sigma) - \tfrac{1}{2} N \log N\big)/N$. In the limit N → ∞, the rescaled entropy converges to an integral operator over Wiener processes,
$$s[\sigma] = \int_0^1 \mathrm{d}t\, \log |\sigma(t)|.$$
The integer moments of s[σ] can be readily computed as correlation functions of the Brownian process:
$$\langle s^k \rangle = k! \int_{0 < t_1 < \cdots < t_k < 1} \prod_{a=1}^k \mathrm{d}t_a \int \prod_{a=1}^k \mathrm{d}x_a\, \log|x_a|\; \mathbb{P}\big[(t_1, x_1), \ldots, (t_k, x_k)\big],$$
where $\mathbb{P}$ is the standard measure on the Brownian process of choice among bridges and excursions. The last factor is the probability that the Brownian process we are interested in starts and ends at the origin and visits the points $(t_1, x_1), \ldots, (t_k, x_k)$, while subject to its constraints.

Let us consider Brownian bridges first. In this case, the probability that the Wiener process travels from $(t_i, x_i)$ to $(t_f, x_f)$ is a Gaussian distribution with variance $\sigma^2 = 2(t_f - t_i)$. The factor 2 comes from Donsker's theorem, and is due to the fact that, with the rescaling above, the variance of the height of the lattice path at time t is exactly 2t. Thus, for Brownian bridges,
$$\mathbb{P}\big[(t_1, x_1), \ldots, (t_k, x_k)\big] = \sqrt{4\pi}\, \prod_{a=0}^{k} \frac{1}{\sqrt{4\pi (t_{a+1} - t_a)}} \exp\left(-\frac{(x_{a+1} - x_a)^2}{4 (t_{a+1} - t_a)}\right),$$
where $t_0 = 0$, $t_{k+1} = 1$, $x_0 = x_{k+1} = 0$, and the factor $\sqrt{4\pi}$ is a normalization, so that the measure of the whole ensemble is 1.

Brownian excursions can be treated analogously using the reflection principle. In this case, the conditional probability that a Wiener process travels from $(t_i, x_i)$ to $(t_f, x_f)$ without ever reaching negative heights, given that it reaches $(t_f, x_f)$, is given by $1 - \exp\big(-\tfrac{x_i x_f}{t_f - t_i}\big)$. Moreover, now all $x_a$'s are constrained to be positive. Thus, for Brownian excursions, the measure is obtained from the one for bridges by multiplying by these reflection factors and restricting the integrations over the heights to the positive half-line.

In both cases, the Gaussian integrations on the heights $x_a$ can be explicitly performed.
First of all, we replace $\log x^2 \to x^{2k}$, recovering the logarithms by differentiating with respect to k at k = 0. Then, we treat the contact terms. In the case of bridges, the exponentials of the cross terms $x_a x_{a+1}$ can be rewritten in terms of hyperbolic functions, and the hyperbolic sine term is discarded due to the parity of the rest of the integrand in the variables $x_a$. In the case of excursions, both hyperbolic terms contribute, as the integration domain is restricted to positive heights. In both cases, we can expand the hyperbolic functions in power series, so that the integrations in the $x_a$ variables are factorized into non-integer Gaussian moments (in the case of excursions, a factor 1/2 must be added to take into account the halved integration domain). Using these manipulations, the first two moments for both bridges and excursions can be analytically computed. We detail the computations in Appendix A. For the first moments the results are
$$\langle s \rangle_{\rm bridges} = -1 - \frac{\gamma_E}{2}, \qquad \langle s \rangle_{\rm excursions} = -\frac{\gamma_E}{2},$$
where $\gamma_E$ is the Euler–Mascheroni constant; the second moments are recovered, together with their finite-size corrections, in Section 3.2. The approach presented in this Section is simple in spirit, and allows us to connect our problem to the vast literature on Wiener processes. Moreover, it is suitable for performing Monte Carlo numerical integration to retrieve the moments of s(σ).
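For the first moment in the bridge ensemble, the Gaussian integration over the single height can be checked numerically. The sketch below assumes the continuum expression $s[\sigma] = \int_0^1 \mathrm{d}t\, \log|\sigma(t)|$ with $\sigma(t)$ Gaussian of variance $2t(1-t)$ at fixed t (our reconstruction of the scaling argument above); it integrates the exact one-point average $\mathbb{E}[\log|X|]$ over t by a midpoint rule and compares the result with $-1 - \gamma_E/2$.

```python
import math

GAMMA_E = 0.5772156649015329  # Euler–Mascheroni constant

def mean_log_abs_gaussian(v):
    """E[log|X|] for X ~ N(0, v): (log v - GAMMA_E - log 2) / 2."""
    return 0.5 * (math.log(v) - GAMMA_E - math.log(2))

def first_moment_bridges(n=100000):
    """Midpoint-rule quadrature of int_0^1 dt E[log|sigma(t)|],
    with Var sigma(t) = 2 t (1 - t) (Brownian bridge, Donsker factor 2)."""
    total = 0.0
    for i in range(n):
        t = (i + 0.5) / n  # midpoints tame the endpoint log singularities
        total += mean_log_abs_gaussian(2 * t * (1 - t))
    return total / n

estimate = first_moment_bridges()
exact = -1 - GAMMA_E / 2
assert abs(estimate - exact) < 1e-3
print(estimate, exact)
```

The agreement follows analytically from $\int_0^1 \tfrac12\log\big(2t(1-t)\big)\,\mathrm{d}t = \tfrac12\log 2 - 1$.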
In this Section we worked directly in the continuum limit. In the next Section, we provide a combinatorial approach that allows to recover the values of the first two moments in a discrete setting, and to compute finite-size corrections in the limit N → ∞.

3.2. Combinatorial properties of the integer moments of S(σ) at finite N. In this section, we introduce a combinatorial method to compute the moments of $S_N(\sigma)$ in the limit N → ∞. This new approach allows us to retain information on the finite-size corrections.
The underlying idea is to reproduce Equation (24) in the discrete setting for the variable S N (σ), and to study its large-N behaviour using methods from analytic combinatorics.
We start again from the sum representation $S_N(\sigma) = \sum_{i\,:\,h_i \sigma_i < 0} \log \bar h_i$. In the following, the superscript/subscript T = E, B will stand for Dyck paths (excursions, E) and Dyck bridges (B) respectively, with $\mathcal{T}_N = \mathcal{C}_N, \mathcal{B}_N$ and $T_N = C_N, B_N$; we will maintain the notation unified whenever possible.
The k-th integer moment equals a sum over the number c ≤ k of distinct closing steps involved, over their positions $t_1 < \cdots < t_c$, and over their multiplicities $\nu_1, \ldots, \nu_c$ (with $\nu_1 + \cdots + \nu_c = k$), of the corresponding products of powers of logarithms, weighted by the number of paths compatible with the marked closing steps; this defines the quantities $M^{(T)}_{N,k}$ of Equation (35). The last equation reproduces, as anticipated earlier, Equation (24) in the discrete setting. Notice that here we must take into account the multiplicities $\nu_a$, while in the continuum setting we could just set c = k and $\nu_a = 1$ for all a, as the contribution of the other terms is washed out in the continuum limit. This suggests that, in this more precise approach, we will verify explicitly that the leading contribution in the large-N limit comes from the c = k term of Equation (35).
In order to study Equation (35), we take the following route. As this equation depends on N only implicitly through the summation range, and explicitly through a normalization, we would like to introduce a generating function $M^{(T)}_k(z) = \sum_{N \ge 0} T_N M^{(T)}_{N,k}\, z^N$, where
$$B_{a,b} = \binom{a}{\frac{a+b}{2}} \qquad \text{if } a, b \in \mathbb{Z}^+ \text{ and } a + b \text{ is even}$$
is the number of unconstrained paths that start at (x, y) and end at (x + a, y + b).
Proof. Let us start by considering Dyck bridges. The idea is to decompose a path contributing to the count of $M^{(B)}_{N,k}$ into the portions between the marked closing steps; the product of the contributions of the subpaths recovers Equation (37).
The case of Dyck paths can be treated analogously, with a few crucial differences. In fact, each of the portions of path between the i-th and (i + 1)-th closing steps (which, for excursions, are just down-steps) has now the constraint that it must never fall below the horizontal axis, i.e. it must never descend more than $h_i - \frac12$ below its starting height. Let us count these paths. A useful trick to this end is the discrete version of the reflection method, that we already used in Section 3.1. Call a the total number of steps, b the relative height of the final step with respect to the starting step, and c the maximum fall allowed with respect to the starting step. Moreover, call bad paths all paths that do not respect the last constraint. A bad path is characterized by reaching relative height −c − 1 at some point (say, for the first time after s steps). By reflecting the portion of path composed of the first s steps, we obtain a bijection between bad paths and unconstrained paths that start at relative height −2(c + 1) and reach relative height b after a steps. Thus, the total number of good paths $C_{a,b,c}$ is given by subtraction as
$$C_{a,b,c} = B_{a,b} - B_{a,\,b+2(c+1)}.$$
Equation (39) can then be easily established by decomposing a generic (marked) path around its closing steps, and by applying the result above.
The fact that we want to exploit now is that, while a given binomial factor $B_{a,b}$ (and its constrained variant $C_{a,b,c}$) is not easy to handle exactly, its generating function in a has a simple expression, induced by an analogously simple decomposition, that we collect in the following:
$$\sum_{a} B_{a,b}\, z^a = B(z)\, \big(z\, C(z)\big)^{|b|}, \qquad (42)$$
where, as in (15), B(z) and C(z) are the generating functions of Dyck bridges and Dyck paths.
Proof. To obtain Equation (42), observe that a path going from (0, 0) to (a, b) with non-negative b can be uniquely decomposed as $w = w_0 u_1 w_1 u_2 w_2 \cdots u_h w_h$, where $u_i$ is the right-most up-step of w at height i − 1/2, and $w_i$ is a (possibly empty) Dyck path, for all i = 1, . . . , h, while $w_0$ is a (possibly empty) Dyck bridge. Thus, each factor $u_i w_i$ contributes $z\, C(z)$, and $w_0$ contributes $B(z)$. For negative b, the same reasoning holds with the $u_i$'s replaced by down-steps, hence the absolute value of b in the result. Equation (42) then follows easily.
Equation (44) can be derived either as a special case of Equation (43), or as a variation of (42) in which $w_0$ must be a Dyck path. The equivalence of the two decompositions is granted by the identity $C(z) = \big(1 - z\, C(z)^2\big)\, B(z)$.
Let us introduce the symbol x = x(z) for the recurrent quantity
$$x(z) = z\, C(z)^2 = \frac{1 - \sqrt{1-4z}}{1 + \sqrt{1-4z}},$$
in terms of which we can state
Proposition 5. Using x to denote x(z), we have explicit expressions for $M^{(T)}_k(z)$: Equation (51) for bridges and Equation (52) for excursions.
Proof. First of all, we notice that $M^{(T)}_k(z)$ can be written as a multiple sum of products of functions $f_i$ of the intermediate variables, where the $f_i$ depend on the type of paths T that we are studying. Thus, by performing the change of summation variables $\{t_1, \cdots, t_c, N\} \to \{\alpha_1, \cdots, \alpha_{c+1}\}$, where the $\alpha_i$ are the lengths of the portions of path between consecutive marked steps, all summations become untangled. Equations (51) and (52) can now be recovered by using the explicit form of the functions $f_i$ for Dyck paths and Dyck bridges given in Proposition 3, and the analytic form of the generating functions given in Proposition 4. Again, notice that the θ functions are all automatically satisfied, as $\bar h_i \ge 1$ for all 1 ≤ i ≤ c.
At this point, we have obtained a quite explicit expression for $M^{(T)}_k(z)$. In the following sections, we will study the behaviour near the leading singularities of the quantities above, for the first two moments, i.e. k = 1, 2. Higher-order moments require a more involved computational machinery that will be presented elsewhere.
3.2.1. Singularity analysis for k = 1. We start our analysis from the simplest case, k = 1, to illustrate how singularity analysis is applied in this context. We expect to recover Equation (32). From now on, for simplicity, as the $\bar h$ indices are mute summation indices, we will denote them simply by h. The resulting sums over h, for both bridges and excursions, are expressed in terms of the generalized polylogarithm
$$\mathrm{Li}_{s,r}(x) = \sum_{h \ge 1} h^{-s} \big(\log(h)\big)^r\, x^h$$
[20].
In both cases, the dominant singularity is at x(z) = 1, i.e. at z = 1/4. Here and in the following, the rewriting of $\mathrm{Li}_{\alpha,r}(x)$ (for $-\alpha, r \in \mathbb{N}$) in the form $P(L(x))/(1 - x)^{1-\alpha}$, with P(y) a polynomial of degree r and $L(x) = \log\frac{1}{1-x}$, can be done either by matching the asymptotics of the coefficients in the two expressions (and appealing to the Transfer Theorem), or by using the explicit formulas in [19, Thm. VI.7]. In this paper we mostly adopt the first strategy. Passing to the variable z gives the singular expansion of $M^{(T)}_1(z)$. The behaviour of $T_N M^{(T)}_{N,1}$ for large N can now be estimated by using the so-called transfer theorem (see [19], in particular Chapter VI for general information, and the table in Figure VI.5 for the explicit formulas), which allows one to jump back and forth between the singular expansion of a generating function and the asymptotic expansion at large order of its coefficients. Expanding the approximate generating functions given in Equations (61) for N → ∞, we obtain an asymptotic expansion for the first moment of S(σ) (which agrees with what we already found in Equation (32)). Notice that, although we have truncated our perturbative series at the first significant order, in principle the combinatorial method gives us access to finite-size corrections at arbitrary finite order.
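As an elementary illustration of the transfer-theorem mechanism, the coefficients of the bridge generating function $B(z) = (1-4z)^{-1/2}$ are $\binom{2N}{N}$, and the singularity at $z = 1/4$ dictates the asymptotics $4^N/\sqrt{\pi N}\,\big(1 - \tfrac{1}{8N} + \mathcal{O}(N^{-2})\big)$; a quick numerical check:

```python
from math import comb, pi, sqrt

def central_binomial(N):
    # [z^N] (1 - 4z)^(-1/2) = binom(2N, N)
    return comb(2 * N, N)

def transfer_asymptotics(N):
    # Leading singular behaviour plus the first 1/N correction
    return 4**N / sqrt(pi * N) * (1 - 1 / (8 * N))

for N in (10, 50, 100):
    ratio = central_binomial(N) / transfer_asymptotics(N)
    assert abs(ratio - 1) < 1 / N**2  # remaining error is O(1/N^2)
print("transfer-theorem asymptotics verified")
```

The same dictionary, applied term by term to the singular expansions above, produces the $1/N$ series for the moments.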

3.2.2. Singularity analysis for k = 2. For k = 2, we compute Equation (49) by studying separately the terms at different values of c. Let us start from bridges, where we need the term with c = 1 (and thus ν₁ = 2) and the term with c = 2 (and thus ν₁ = ν₂ = 1). For the latter, the presence of the absolute value $|h_2 - h_1 + 1|$ forces us to consider separately the cases $h_1 > h_2$ and $h_1 \le h_2$, and then to combine the two resulting terms. In the case of excursions, the computations are completely analogous. In order to compute the singular expansions of $\mathrm{Li}_{0,2}(x)$ and of $\sum_{h \ge 1} \log(h^2 + h)\, \log(h!)\, x^h$, one can again use the transfer theorem. We see that, for both Dyck bridges and paths, the c = 1 term is subleading with respect to the c = 2 term, by a factor 1 − x which, after use of the Transfer Theorem, implies a factor $\mathcal{O}(N^{-1})$. It is easy to imagine (and in agreement with the discussion in Section 3.1) that this pattern will hold also for all subsequent moments, that is, the leading term for the k-th moment will be given by the c = k contribution alone, all other terms altogether giving a correction $\mathcal{O}(N^{-1})$.
After substituting x(z) with its expansion around z = 1/4, and after a series of tedious but straightforward computations, we obtain the singular expansions of $M^{(T)}_2(z)$; finally, via the transfer theorem, we recover the second moments of s(σ). Details of these computations can be found in Appendix B.

Numerical results
To check our analytical predictions, we performed an exact sampling of our configurations, and collected statistics on the resulting entropies, from which we estimated the distribution of the rescaled entropy $s_N = \big(S_N - \frac{1}{2} N \log N\big)/N$. For both bridges and excursions, and for values of $N = 10^3, 10^4, 10^5, 10^6$, we sampled uniformly $10^5$ random paths. Figure 3 summarises our results. We clearly see that, as N grows, the prediction for the first two moments of s matches the empirical values better and better in both ensembles. The distribution of s is clearly non-Gaussian in the case of Dyck bridges. For Dyck excursions, a quick Kolmogorov test rules out the Gaussian hypothesis (in particular, more easily, the 4-th centered moment is only ∼ 2.7 times the 2-nd centered moment squared, instead of the factor 3 required for Gaussianity). However, at the present statistical precision we cannot rule out the hypothesis that the centered distribution is symmetric, i.e. that all the centered odd moments vanish, although we have no theoretical argument for conjecturing this fact, neither from the probabilistic approach of Section 3.1, nor from the combinatorial approach of Section 3.2. It would be interesting to understand the reason for this unexpected numerical finding.
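The sampling procedure can be reproduced in a few lines. The sketch below is a minimal version of our own devising (not the repository code): it draws uniform Dyck bridges by shuffling N up-steps and N down-steps, computes $S_N$ by the sum formula over closing steps, rescales, and compares the empirical mean with the Brownian-bridge value $-1 - \gamma_E/2$ from Section 3.1.

```python
import math
import random

GAMMA_E = 0.5772156649015329

def entropy(sigma):
    """S(sigma) = sum of log(stack size) over closing steps (h_i * sigma_i < 0)."""
    S, height = 0.0, 0
    for s in sigma:
        mid = height + s / 2
        if mid * s < 0:
            S += math.log(abs(mid) + 0.5)
        height += s
    return S

def sample_mean_s(N, samples, rng):
    """Estimate <s>, with s = (S_N - N log(N) / 2) / N, over uniform Dyck bridges."""
    sigma = [+1] * N + [-1] * N
    total = 0.0
    for _ in range(samples):
        rng.shuffle(sigma)  # a uniform shuffle gives a uniform Dyck bridge
        total += (entropy(sigma) - 0.5 * N * math.log(N)) / N
    return total / samples

rng = random.Random(2023)
estimate = sample_mean_s(2000, 500, rng)
print(estimate, -1 - GAMMA_E / 2)  # the two values should be close
```

Exact uniform sampling of excursions requires an extra rejection or cycle-lemma step, which we omit here.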
The code and the raw data used to produce Figure 3 are available at https://github.com/vittorioerba/EntropyMatching.

Acknowledgements
A. Sportiello is partially supported by the Agence Nationale de la Recherche, Grant Numbers ANR-18-CE40-0033 (ANR DIMERS) and ANR-15-CE40-0014 (ANR MetACOnc).

Appendix A. The first two moments of the rescaled entropy in the integral representation
In this Appendix we provide the computation of the first two moments of the rescaled entropy in the bridges ensemble, using the integral representation. The case of excursions can be treated analogously.
A.1. Useful formulas. We start by providing some useful identities.
We start by giving an explicit representation for the non-integer Gaussian moments: In the following we shall also use the duplication formula for the Gamma function, and the expansion of the hyperbolic cosine. Finally, we recall the definition of the hypergeometric function and the Euler identity.

A.2. First moment. We wish to compute First of all, we substitute $\log x^2 \to x^{2k}$. We will later take the derivative in $k$ and evaluate our expressions at $k = 0$ to recover the logarithmic contribution.
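The identities invoked here are presumably the standard ones; as a sanity check, the non-integer Gaussian moment formula $\int \frac{\mathrm{d}x}{\sqrt{2\pi}}\, |x|^{2k} e^{-x^2/2} = 2^k \Gamma(k+\tfrac12)/\Gamma(\tfrac12)$, the Legendre duplication formula $\Gamma(z)\Gamma(z+\tfrac12) = 2^{1-2z}\sqrt{\pi}\,\Gamma(2z)$, and the Laurent expansion $\Gamma(-k) = -k^{-1} - \gamma_E + O(k)$ used in A.3 can all be verified numerically.

```python
import math

GAMMA_E = 0.5772156649015329  # Euler-Mascheroni constant

def gaussian_abs_moment(k, upper=12.0, n=120000):
    """E[|X|^{2k}] for X ~ N(0,1), by Simpson quadrature on [0, upper]."""
    h = upper / n
    f = lambda x: (x ** (2 * k)) * math.exp(-x * x / 2)
    s = f(0.0) + f(upper)  # f(0) = 0 for k > 0
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(i * h)
    return 2.0 * (h / 3.0) * s / math.sqrt(2 * math.pi)

# Non-integer Gaussian moment: E[|X|^{2k}] = 2^k Gamma(k + 1/2) / Gamma(1/2)
k = 0.3
moment_num = gaussian_abs_moment(k)
moment_exact = 2 ** k * math.gamma(k + 0.5) / math.gamma(0.5)

# Legendre duplication formula: Gamma(z) Gamma(z + 1/2) = 2^(1-2z) sqrt(pi) Gamma(2z)
z = 0.7
dup_lhs = math.gamma(z) * math.gamma(z + 0.5)
dup_rhs = 2 ** (1 - 2 * z) * math.sqrt(math.pi) * math.gamma(2 * z)

# Laurent expansion at the pole: Gamma(-k) + 1/k -> -gamma_E as k -> 0+
eps = 1e-4
laurent = math.gamma(-eps) + 1.0 / eps
```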
Thus, we start from Then, where $\psi_0(z) = \frac{\mathrm{d}}{\mathrm{d}z} \log \Gamma(z)$ is the digamma function, and Finally,

A.3. Second moment. We wish to compute Again, we substitute $\log x^2 \to x^{2k_1}$ and $\log y^2 \to y^{2k_2}$, and we will recover the correct logarithmic factors by taking derivatives in $k_1$ and $k_2$ later. We start by dealing with the contact term: where the hyperbolic sine was discarded due to parity in $x$ and $y$. With this substitution, the integrals in $x$ and $y$ are decoupled, and can be evaluated as where in the last line we used the fact that $\Gamma(-k) = -k^{-1} + \dots$ as $k$ goes to zero, to highlight the linear dependence on $k_1$ and $k_2$ in the first term. Notice that We can now take the derivative with respect to $k_1$ and $k_2$ and evaluate the expression at $k_1 = k_2 = 0$ to obtain $I_2(t_1, t_2)$. The second line is easily treated by noting that it is symmetric under $t_1 \to 1 - t_1$, giving $\frac{1}{2}$

Appendix B.
In this Appendix, we present in greater detail the computations sketched in Section 3.2.2. First of all, let us state the transfer theorem of singularity analysis in a slightly imprecise, but operatively correct, form. See [19, Chapter VI] for the details.
Theorem 1. Let $f(z)$, $g(z)$ and $h(z)$ be generating functions with unit radius of convergence, and let $f_n$, $g_n$ and $h_n$ be their coefficients. Then if and only if In practice, one builds a standard scale of functions whose coefficient expansions are known, expands a generic generating function over the standard scale (assuming that the scale is rich enough to contain the generating function of interest in its linear span), and uses the transfer theorem to guarantee that an asymptotic equivalence at the level of the generating functions implies (and is implied by) an asymptotic equivalence of their coefficients. A standard scale, adopted already in [19, Section VI.8], and rich enough for our purposes, is given by functions of the form where $L(x) = \log (1 - x)^{-1}$.
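To make the use of the standard scale concrete, here is a small numerical check (a sketch; $\gamma_E$ denotes the Euler-Mascheroni constant) of two entries of [19, Figure VI.5]: $[z^n]\frac{L(z)}{1-z} = H_n = \log n + \gamma_E + O(1/n)$ and $[z^n]\frac{L(z)^2}{1-z} = \log^2 n + 2\gamma_E \log n + \gamma_E^2 - \pi^2/6 + O(\log n / n)$, with the exact coefficients computed by elementary series manipulations.

```python
import math

GAMMA_E = 0.5772156649015329
N = 1500

# [z^n] L(z)/(1-z) = H_n: partial sums of the coefficients 1/h of L(z).
H = [0.0] * (N + 1)
for h in range(1, N + 1):
    H[h] = H[h - 1] + 1.0 / h

# [z^n] L(z)^2 by direct convolution, then [z^n] L(z)^2/(1-z) by partial sums.
L2 = [0.0] * (N + 1)
for n in range(2, N + 1):
    L2[n] = sum(1.0 / (i * (n - i)) for i in range(1, n))
cum = [0.0] * (N + 1)
for n in range(1, N + 1):
    cum[n] = cum[n - 1] + L2[n]

# Transfer-theorem predictions from the standard scale:
pred_H = math.log(N) + GAMMA_E                           # + O(1/N)
pred_L2 = (math.log(N) ** 2 + 2 * GAMMA_E * math.log(N)
           + GAMMA_E ** 2 - math.pi ** 2 / 6)            # + O(log N / N)
```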
B.1. Singular behaviour of $\mathrm{Li}_{0,2}(x)$. The coefficient of order $h$ of $\mathrm{Li}_{0,2}(x)$ equals $\log^2(h)$. It is easy to see, using [19, Figure VI.5], that the singular expansion of $\mathrm{Li}_{0,2}(x)$ onto the standard scale must consist of the terms $(1-x)^{-1} L(x)^2$, $(1-x)^{-1} L(x)$, $(1-x)^{-1}$, and higher-order terms. The coefficients of the expansion can be retrieved by basic linear algebra:
• the coefficient of $(1-x)^{-1} L(x)^2$ must be $1$ to reconstruct the $\log^2(h)$ term;
• the coefficient of $(1-x)^{-1} L(x)$ must be $-2\gamma_E$ to cancel the $\log(h)$ term introduced by the first standard-scale function;
• the coefficient of $(1-x)^{-1}$ must be $\zeta(2) + \gamma_E^2 = \pi^2/6 + \gamma_E^2$ to cancel the constant terms introduced in the expansion by the previous standard-scale functions.
The error term can be obtained by observing that all the expansions used above are valid up to order $O(h^{-1} \log h)$. Thus

B.2. Singular behaviour of $\sum_{h \ge 1} \log(h^2+h) \log(h!)\, x^h$. The procedure is analogous to the computation for $\mathrm{Li}_{0,2}(x)$; we just need to use Stirling's approximation for $\log h!$ to obtain the expansion of the coefficients in $h$. For the coefficient of order $h$ of this function, we have
$$\log(h^2+h) \log h! = 2h \log^2(h) - 2h \log(h) + \log^2(h) + \left(1 + \log(2) + \log(\pi)\right) \log(h) - 1 + O\!\left(\frac{\log(h)}{h}\right),$$
so that the singular expansion must consist of the terms $(1-x)^{-2} L(x)^2$, $(1-x)^{-2} L(x)$, $(1-x)^{-2}$, and higher-order ones. The coefficients can be found using the same strategy adopted for $\mathrm{Li}_{0,2}(x)$, obtaining These relations are enough to convert singular expansions in $x$ into singular expansions in $z$ at the leading algebraic order. The formulas of this subsection are useful, in principle, also at higher moments.
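The resulting singular expansion, $\mathrm{Li}_{0,2}(x) \simeq \frac{L(x)^2 - 2\gamma_E L(x) + \pi^2/6 + \gamma_E^2}{1-x}$, can be checked directly as $x \to 1^-$ by truncating the defining series (a numerical sketch: the relative error should be of order $1-x$ and shrink as $x$ approaches the singularity).

```python
import math

GAMMA_E = 0.5772156649015329

def li02(x, terms=60000):
    """Truncated Li_{0,2}(x) = sum_{h>=1} log^2(h) x^h."""
    s, p = 0.0, 1.0
    for h in range(1, terms + 1):
        p *= x
        s += math.log(h) ** 2 * p
    return s

def singular_approx(x):
    """Leading singular expansion onto the standard scale at x = 1."""
    L = math.log(1.0 / (1.0 - x))
    return (L * L - 2 * GAMMA_E * L + math.pi ** 2 / 6 + GAMMA_E ** 2) / (1.0 - x)

# Relative error of the singular expansion for x approaching 1.
rel = {x: abs(li02(x) / singular_approx(x) - 1.0) for x in (0.99, 0.999)}
```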