1 Introduction

Spectral gap is a fundamental concept in mathematics and related sciences as it governs the rate at which a process converges towards its stationary state. The question that motivates this paper is whether random objects have large, or even optimal, spectral gaps. This will be made precise below.

One of the simplest examples of spectral gap is the spectral gap of a graph. The spectrum of a graph \({\mathcal {G}}\) on n vertices is the collection of eigenvalues of its adjacency matrix \(A_{{\mathcal {G}}}\). Assuming that \({\mathcal {G}}\) is d-regular, the largest eigenvalue occurs at d and is simple if and only if \({\mathcal {G}}\) is connected. This means, writing

$$\begin{aligned} \lambda _{0}=d\ge \lambda _{1}\ge \lambda _{2}\ge \cdots \ge \lambda _{n-1} \end{aligned}$$

for the eigenvalues of \(A_{{\mathcal {G}}}\), there is a spectral gap between \(\lambda _{0}\) and \(\lambda _{1}\) (i.e. \(\lambda _{0}>\lambda _{1}\)) if and only if \({\mathcal {G}}\) is connected. In fact, the Cheeger inequalities for graphs due to Alon and Milman [AM85] show that the size of the spectral gap (i.e. \(\lambda _{0}-\lambda _{1}\)) quantifies how difficult it is, roughly speaking, to separate the vertices of \({\mathcal {G}}\) into two sets, each not too small, with few edges between them. This is in tension with the fact that a d-regular graph is sparse. Sparse yet highly-connected graphs are called expander graphs and are relevant to many real-world examples.

However, a result of Alon and Boppana [Nil91] puts a sharp bound on what one can achieve: for a sequence of d-regular graphs \({\mathcal {G}}_{n}\) on n vertices, as \(n\rightarrow \infty \), \(\lambda _{1}({\mathcal {G}}_{n})\ge 2\sqrt{d-1}-o(1)\). The trivial eigenvalues of a graph occur at d, and if \({\mathcal {G}}\) has a bipartite component, at \(-d\). A connected d-regular graph with all its non-trivial eigenvalues in the interval \([-2\sqrt{d-1},2\sqrt{d-1}]\) is called a Ramanujan graph after Lubotzky, Phillips, and Sarnak [LPS88].
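Both notions are easy to check numerically. The following minimal Python sketch (not from the paper; numpy is assumed to be available) computes the spectrum of the complete graph \(K_{d+1}\), the simplest connected d-regular graph, verifies the spectral gap, and tests the Ramanujan condition:

```python
import math

import numpy as np

# Complete graph K_{d+1}: the simplest connected d-regular graph.
d = 3
n = d + 1
A = np.ones((n, n)) - np.eye(n)  # adjacency matrix of K_4

# Eigenvalues of the (symmetric) adjacency matrix, sorted descending.
eigs = sorted(np.linalg.eigvalsh(A), reverse=True)
lam0, lam1 = eigs[0], eigs[1]

# The top eigenvalue of a connected d-regular graph is d and is simple,
# so the spectral gap lam0 - lam1 is positive.
assert lam0 - lam1 > 0

# Ramanujan condition: all non-trivial eigenvalues lie in
# [-2 sqrt(d-1), 2 sqrt(d-1)].  For K_4 they all equal -1.
ramanujan = all(abs(x) <= 2 * math.sqrt(d - 1) + 1e-9 for x in eigs[1:])
print(lam0, lam1, ramanujan)  # approximately 3.0, -1.0, True
```

For \(K_{4}\) the spectrum is \(\{3,-1,-1,-1\}\), so the gap is 4 and the Ramanujan condition holds trivially since \(2\sqrt{2}>1\).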

In the rest of the paper, if an event depending on a parameter n holds with probability tending to 1 as \(n\rightarrow \infty \), then we say it holds asymptotically almost surely (a.a.s.). A famous conjecture of Alon [Alo86], now a theorem due to Friedman [Fri08], states that for any \(\varepsilon >0\), a.a.s. a random d-regular graph on n vertices, chosen uniformly amongst such graphs, has all its non-trivial eigenvalues bounded in absolute value by \(2\sqrt{d-1}+\varepsilon \). In other words, almost all d-regular graphs have almost optimal spectral gaps. In [Bor], Bordenave has given a shorter proof of Friedman’s theorem. The first result about uniform spectral gap for random regular graphs is due to Broder and Shamir [BS87b], who proved that a.a.s. \(\lambda _{1}\le 3d^{3/4}\). The approach in the current paper is similar to the direct trace method introduced by Broder–Shamir, which was subsequently improved by Puder and Friedman–Puder [Pud15, FP22] to show that a.a.s. \(\lambda _{1}\le 2\sqrt{d-1}+\frac{2}{\sqrt{d-1}}\).

Friedman conjectured in [Fri03] that the following extension of Alon’s conjecture holds. Given any finite graph \({\mathcal {G}}\) there is a notion of a degree-n cover \({\mathcal {G}}_{n}\) of the graph. Elements of the spectrum of \({\mathcal {G}}_{n}\) that are not elements of the spectrum of \({\mathcal {G}}\) are called new eigenvalues of \({\mathcal {G}}_{n}\). Friedman conjectured that for a fixed finite graph \({\mathcal {G}}\) and any \(\varepsilon >0\), a random degree-n cover of \({\mathcal {G}}\) a.a.s. has no new eigenvalues of absolute value larger than \(\rho ({\mathcal {G}})+\varepsilon \), where \(\rho ({\mathcal {G}})\) is the spectral radius of the adjacency operator of the universal cover of \({\mathcal {G}}\), acting on \(\ell ^{2}\) functions. For d even, the special case where \({\mathcal {G}}\) is a bouquet of \(\frac{d}{2}\) loops recovers Alon’s conjecture. Friedman’s conjecture was recently proved in a breakthrough by Bordenave and Collins [BC19].

The focus of this paper is the extension of Alon’s and Friedman’s conjectures to compact hyperbolic surfaces.

A hyperbolic surface is a Riemannian surface of constant curvature \(-1\) without boundary. In this paper, all surfaces will be orientable. By uniformization [Bea84, Section 9.2], a connected compact hyperbolic surface can be realized as \(\Gamma \backslash {\mathbb {H}}\) where \(\Gamma \) is a discrete subgroup of \(\mathrm {PSL}_{2}({\mathbf {R}})\) and

$$\begin{aligned} {\mathbb {H}}&=\{\,x+iy\,:\,x,y\in {\mathbf {R}},\,y>0\,\} \end{aligned}$$

is the hyperbolic upper half plane, upon which \(\mathrm {PSL}_{2}({\mathbf {R}})\) acts via Möbius transformations preserving the hyperbolic metric

$$\begin{aligned} \frac{dx^{2}+dy^{2}}{y^{2}}. \end{aligned}$$

Let \(X=\Gamma \backslash {\mathbb {H}}\) be a connected compact hyperbolic surface. Topologically, X is a connected closed surface of some genus \(g\ge 2\).

Since the Laplacian \(\Delta _{{\mathbb {H}}}\) on \({\mathbb {H}}\) is invariant under \(\mathrm {PSL}_{2}({\mathbf {R}})\), it descends to a differential operator on \(C^{\infty }(X)\) and extends to a non-negative, unbounded, self-adjoint operator \(\Delta _{X}\) on \(L^{2}(X)\). The spectrum of \(\Delta _{X}\) consists of real eigenvalues

$$\begin{aligned} 0=\lambda _{0}(X)\le \lambda _{1}(X)\le \cdots \le \lambda _{n}(X)\le \cdots \end{aligned}$$

with \(\lambda _{i}\rightarrow \infty \) as \(i\rightarrow \infty \). The same discussion also applies if we drop the condition that X is connected. As for graphs, we have \(\lambda _{0}(X)<\lambda _{1}(X)\) if and only if X is connected. With Friedman’s conjecture in mind, we also note that the spectrum of \(\Delta _{{\mathbb {H}}}\) is absolutely continuous and supported on the interval \([\frac{1}{4},\infty )\) (e.g. [Bor16, Thm. 4.3]). There is also an analog of the Alon–Boppana theorem in this setting: a result of Huber [Hub74] states that for any sequence of compact hyperbolic surfaces \(X_{i}\) with genera \(g(X_{i})\) tending to infinity,

$$\begin{aligned} \limsup _{i\rightarrow \infty }\lambda _{1}(X_{i})\le \frac{1}{4}. \end{aligned}$$

To state an analog of the Alon/Friedman conjecture for surfaces, we need a notion of a random cover. Suppose X is a compact connected hyperbolic surface, and suppose \({\tilde{X}}\) is a degree-n Riemannian cover of X. Fix a point \(x_{0}\in X\) and label the fiber above it by \([n]{\mathop {=}\limits ^{\mathrm {def}}}\{1,\ldots ,n\}\). There is a monodromy map

$$\begin{aligned} \pi _{1}(X,x_{0})\rightarrow S_{n} \end{aligned}$$

that describes how the fiber of \(x_{0}\) is permuted when following lifts of a closed loop from X to \({\tilde{X}}\). Here \(S_{n}\) is the symmetric group of permutations of the set [n]. The cover \({\tilde{X}}\) is uniquely determined by the monodromy homomorphism. Let g denote the genus of X. We fix an isomorphism

$$\begin{aligned} \pi _{1}(X,x_{0})\cong \Gamma _{g}{\mathop {=}\limits ^{\mathrm {def}}}\left\langle a_{1},b_{1},a_{2},b_{2},\ldots ,a_{g},b_{g}\,|\,\left[ a_{1},b_{1}\right] \cdots \left[ a_{g},b_{g}\right] =1\right\rangle . \end{aligned}$$
(1.1)

Now, given any

$$\begin{aligned} \phi \in {\mathbb {X}}_{g,n}{\mathop {=}\limits ^{\mathrm {def}}}\mathrm {Hom}(\Gamma _{g},S_{n}) \end{aligned}$$

we can construct a cover of X whose monodromy map is \(\phi \) as follows. Using the fixed isomorphism of (1.1), we have a free properly discontinuous action of \(\Gamma _{g}\) on \({\mathbb {H}}\) by isometries. Define a new action of \(\Gamma _{g}\) on \({\mathbb {H}}\times [n]\) by

$$\begin{aligned} \gamma (z,i)=(\gamma z,\phi [\gamma ](i)). \end{aligned}$$

The quotient of \({\mathbb {H}}\times [n]\) by this action is named \(X_{\phi }\) and is a hyperbolic cover of X with monodromy \(\phi \). This construction establishes a one-to-one correspondence between \(\phi \in {\mathbb {X}}_{g,n}\) and degree-n covers with a labeled fiber \(X_{\phi }\) of X. See also Example 3.4.
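Concretely, for \(g=2\), a point \(\phi \in {\mathbb {X}}_{2,n}\) is determined by the four permutations \(\phi (a_{1}),\phi (b_{1}),\phi (a_{2}),\phi (b_{2})\) satisfying the relator of (1.1), and \(X_{\phi }\) is connected exactly when these permutations generate a transitive subgroup of \(S_{n}\). A minimal Python sketch (not from the paper; the permutations are a hypothetical toy choice) checking both conditions:

```python
n = 5  # degree of the cover; here [n] is represented as {0, ..., n-1}

def compose(p, q):
    """(p o q)(i) = p(q(i)); permutations stored as tuples of images."""
    return tuple(p[q[i]] for i in range(n))

def inverse(p):
    inv = [0] * n
    for i, j in enumerate(p):
        inv[j] = i
    return tuple(inv)

def commutator(p, q):
    return compose(compose(p, q), compose(inverse(p), inverse(q)))

identity = tuple(range(n))

# A toy φ: all four images are powers of one n-cycle, so they commute
# and the surface-group relator [a1,b1][a2,b2] is automatically trivial.
c = tuple((i + 1) % n for i in range(n))  # the n-cycle (0 1 2 3 4)
a1, b1, a2, b2 = c, compose(c, c), compose(c, compose(c, c)), identity

# φ is a homomorphism iff the relator maps to the identity permutation.
relator = compose(commutator(a1, b1), commutator(a2, b2))
assert relator == identity

# X_φ is connected iff <a1, b1, a2, b2> acts transitively on {0,...,n-1}:
# compute the orbit of 0 under the generators and their inverses.
gens = [a1, b1, a2, b2] + [inverse(p) for p in (a1, b1, a2, b2)]
orbit, frontier = {0}, [0]
while frontier:
    i = frontier.pop()
    for p in gens:
        if p[i] not in orbit:
            orbit.add(p[i])
            frontier.append(p[i])
print(len(orbit) == n)  # True: this particular X_φ is connected
```

More generally, the number of orbits of \(\left\langle \phi (a_{1}),\phi (b_{1}),\phi (a_{2}),\phi (b_{2})\right\rangle \) on [n] equals the number of connected components of \(X_{\phi }\).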

As for graphs, any eigenvalue of \(\Delta _{X}\) will also be an eigenvalue of \(\Delta _{X_{\phi }}\): every eigenfunction of \(\Delta _{X}\) can be pulled back to an eigenfunction of \(\Delta _{X_{\phi }}\) with the same eigenvalue. We say that an eigenvalue of \(\Delta _{X_{\phi }}\) is new if it is not an eigenvalue of \(\Delta _{X}\), or, more generally, if it appears with greater multiplicity in \(X_{\phi }\) than in X. To pick a random cover of X, we simply use the uniform probability measure on the finite set \({\mathbb {X}}_{g,n}\). Recall that we say an event depending on n holds a.a.s. if it holds with probability tending to one as \(n\rightarrow \infty \). The analog of Friedman’s conjecture for surfaces is the following.

Conjecture 1.1

Let X be a compact connected hyperbolic surface. Then for any \(\varepsilon >0\), a.a.s.

$$\begin{aligned} \mathrm {spec}\left( \Delta _{X_{\phi }}\right) \cap \left[ 0,\frac{1}{4}-\varepsilon \right] = \mathrm {spec}\left( \Delta _{X}\right) \cap \left[ 0,\frac{1}{4}-\varepsilon \right] \end{aligned}$$

and the multiplicities on both sides are the same.

Remark 1.2

The analog of Conjecture 1.1 for finite area non-compact surfaces appeared previously in the work of Golubev and Kamber [GK19, Conj. 1.6(1)].

Remark 1.3

We have explained the number \(\frac{1}{4}\) in terms of the spectrum of the Laplacian on the hyperbolic plane and as an asymptotically optimal spectral gap in light of Huber’s result [Hub74]. The number \(\frac{1}{4}\) also features prominently in Selberg’s eigenvalue conjecture [Sel65], which states that for \(X=\mathrm {SL}_{2}({\mathbf {Z}})\backslash {\mathbb {H}}\), the (deterministic) family of congruence covers of X never has new eigenvalues below \(\frac{1}{4}\). Although Selberg’s conjecture concerns a finite-area, non-compact hyperbolic orbifold, the Jacquet–Langlands correspondence [JL70] means that it also applies to certain arithmetic compact hyperbolic surfaces.

Remark 1.4

In [Wri20, Problem 10.4], Wright asks, for random compact hyperbolic surfaces sampled according to the Weil–Petersson volume form on the moduli space of genus g closed hyperbolic surfaces, whether \(\liminf _{g\rightarrow \infty }({\mathbb {P}}(\lambda _{1}>\frac{1}{4}))>0\). See Section 1.1 for what is known in this setting. It is not even known [Wri20, Problem 10.3] whether there is a sequence of Riemann surfaces \(X_{n}\) with genus tending to \(\infty \) such that \(\lambda _{1}(X_{n})\rightarrow \frac{1}{4}\). Conjecture 1.1 offers a new route to resolving this problem via the probabilistic method, since it is known by work of Jenni [Jen84] that there exists a genus 2 hyperbolic surface X with \(\lambda _{1}(X)>\frac{1}{4}\), and this X can be taken as the base surface in Conjecture 1.1. (See Section 1.3 for important developments in this area after the current paper was written.)

The main theorem of the paper, described in the title, is the following.

Theorem 1.5

Let X be a compact connected hyperbolic surface. Then for any \(\varepsilon >0\), a.a.s.

$$\begin{aligned} \mathrm {spec}\left( \Delta _{X_{\phi }}\right) \cap \left[ 0,\frac{3}{16}-\varepsilon \right] = \mathrm {spec}\left( \Delta _{X}\right) \cap \left[ 0,\frac{3}{16}-\varepsilon \right] \end{aligned}$$

and the multiplicities on both sides are the same.

Remark 1.6

The appearance of the number \(\frac{3}{16}\) in Theorem 1.5 is essentially for the same reason that \(\frac{3}{4}\) appears in [MN20] (note that \(\frac{3}{16}=\frac{3}{4}(1-\frac{3}{4})\), and eigenvalues of the Laplacian are naturally parameterized as \(s(1-s)\)). Ultimately, the appearance of \(\frac{3}{4}\) can be traced back to the method of Broder and Shamir [BS87b] who proved that a.a.s. a random 2d-regular graph on n vertices has \(\lambda _{1}\le O\left( d^{3/4}\right) \), using an estimate analogous to Theorem 1.11 below.

Remark 1.7

More mysteriously, \(\frac{3}{16}\) is also the lower bound that Selberg obtained for the smallest new eigenvalue of a congruence cover of the modular curve \(\mathrm {SL}_{2}({\mathbf {Z}})\backslash {\mathbb {H}}\), in the same paper [Sel65] as his eigenvalue conjecture. In this context, the number arises ultimately from bounds on Kloosterman sums due to Weil [Wei48] that follow from Weil’s resolution of the Riemann hypothesis for curves over finite fields. The state of the art on Selberg’s eigenvalue conjecture, after decades of intermediate results [GJ78, Iwa89, LRS95, Iwa96, KS02], is due to Kim and Sarnak [Kim03] who produced a spectral gap of size \(\frac{975}{4096}\) for congruence covers of \(\mathrm {SL}_{2}({\mathbf {Z}})\backslash {\mathbb {H}}\).

It was pointed out to us by A. Kamber that our methods also yield the following estimate on the density of new eigenvalues of a random cover.

Theorem 1.8

Let

$$\begin{aligned} 0\le \lambda _{i_{1}}(X_{\phi })\le \lambda _{i_{2}}(X_{\phi })\le \cdots \le \lambda _{i_{k(\phi )}}(X_{\phi })\le \frac{1}{4} \end{aligned}$$

denote the collection of new eigenvalues of \(\Delta _{X_{\phi }}\) of size at most \(\frac{1}{4}\), included with multiplicity. For each of these, we write \(\lambda _{i_{j}}=s_{i_{j}}(1-s_{i_{j}})\) with \(s_{i_{j}}=s_{i_{j}}(X_{\phi })\in \left[ \frac{1}{2},1\right] \). For any \(\varepsilon >0\) and \(\sigma \in \left( \frac{1}{2},1\right) \), a.a.s.

$$\begin{aligned} \#\left\{ 1\le j\le k(\phi )\,:\,\lambda _{i_{j}}<\sigma \left( 1-\sigma \right) \right\} =\#\left\{ 1\le j\le k(\phi )\,:\,s_{i_{j}}>\sigma \right\} \le n^{3-4\sigma +\varepsilon }. \end{aligned}$$
(1.2)

Remark 1.9

The estimate (1.2) was established by Iwaniec [Iwa02, Thm 11.7] for congruence covers of \(\mathrm {SL}_{2}({\mathbf {Z}})\backslash {\mathbb {H}}\). Although Iwaniec’s theorem has been generalized in various directions [Hux86, Sar87, Hum18], as far as we know, Iwaniec’s result has not been directly improved. So, as far as the density of eigenvalues is concerned, Theorem 1.8 establishes for random covers the best result known in the arithmetic setting for eigenvalues above the Kim–Sarnak bound \(\frac{975}{4096}\) [Kim03]. Density estimates such as Theorem 1.8 have applications to the cutoff phenomenon on hyperbolic surfaces by work of Golubev and Kamber [GK19].

We prove Theorems 1.5 and 1.8 using Selberg’s trace formula in Section 2. As a ‘black box’ in this method, we use a statistical result (Theorem 1.11) about the expected number of fixed points of a fixed \(\gamma \in \Gamma _{g}\) under a random \(\phi \).

If \(\pi \in S_{n}\) then we write \(\mathsf {fix}(\pi )\) for the number of fixed points of the permutation \(\pi \). Given an element \(\gamma \in \Gamma _{g}\), we let \(\mathsf {fix}_{\gamma }\) be the function

$$\begin{aligned} \mathsf {fix}_{\gamma }:{\mathbb {X}}_{g,n}\rightarrow {\mathbf {Z}},\quad \mathsf {fix}_{\gamma }(\phi ){\mathop {=}\limits ^{\mathrm {def}}}\mathsf {fix}(\phi (\gamma )). \end{aligned}$$

We write \({\mathbb {E}}_{g,n}[\mathsf {fix}_{\gamma }]\) for the expected value of \(\mathsf {fix}_{\gamma }\) with respect to the uniform probability measure on \({\mathbb {X}}_{g,n}\). In [MP20], the first and third named authors proved the following theorem.
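To make the definition concrete, here is a minimal Python sketch (not from the paper) evaluating \(\mathsf {fix}_{\gamma }(\phi )=\mathsf {fix}(\phi (\gamma ))\) for a word \(\gamma \) in the generators, given hypothetical images of the generators under \(\phi \); capital letters denote inverse generators.

```python
n = 6  # permutations act on {0, ..., n-1}

def compose(p, q):
    """(p o q)(i) = p(q(i)); permutations stored as tuples of images."""
    return tuple(p[q[i]] for i in range(n))

def inverse(p):
    inv = [0] * n
    for i, j in enumerate(p):
        inv[j] = i
    return tuple(inv)

def fix(p):
    """Number of fixed points of the permutation p."""
    return sum(1 for i in range(n) if p[i] == i)

# Hypothetical images φ(a1), φ(b1) in S_6 (the other generators are
# omitted for brevity); capital letters stand for the inverse images.
phi = {
    "a1": (1, 2, 0, 3, 4, 5),  # the 3-cycle (0 1 2); fixes 3, 4, 5
    "b1": (0, 1, 3, 2, 4, 5),  # the transposition (2 3)
}
phi.update({g.upper(): inverse(p) for g, p in list(phi.items())})

def fix_gamma(word):
    """fix_γ(φ) = fix(φ(γ)) for γ given as a list of letters."""
    result = tuple(range(n))
    for letter in word:
        result = compose(result, phi[letter])  # builds φ(x1)∘φ(x2)∘...
    return fix(result)

print(fix_gamma(["a1"]))                    # 3: the 3-cycle fixes {3,4,5}
print(fix_gamma(["a1", "b1", "A1", "B1"]))  # 3: fix of the commutator
```

The difficulty addressed by Theorems 1.10 and 1.11 is of course not evaluating \(\mathsf {fix}_{\gamma }\) at one \(\phi \), but averaging it over the uniform measure on \({\mathbb {X}}_{g,n}\).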

Theorem 1.10

Let \(g\ge 2\) and \(1\ne \gamma \in \Gamma _{g}\). If \(q\in {\mathbf {N}}\) is maximal such that \(\gamma =\gamma _{0}^{q}\) for some \(\gamma _{0}\in \Gamma _{g}\), then, as \(n\rightarrow \infty \),

$$\begin{aligned} {\mathbb {E}}_{g,n}[\mathsf {fix}_{\gamma }]&=d(q)+O_{\gamma }\left( n^{-1}\right) , \end{aligned}$$

where d(q) is the number of divisors of q.

In the current paper, we need an effective version of Theorem 1.10 that controls the dependence of the error term on \(\gamma \). We need this estimate only for \(\gamma \) that are not proper powers. For \(\gamma \in \Gamma _{g}\), we write \(\ell _{w}(\gamma )\) for the cyclic word-length of \(\gamma \), namely, the length of a shortest word in the generators \(a_{1},b_{1},\ldots ,a_{g},b_{g}\) of \(\Gamma _{g}\) that represents an element of the conjugacy class of \(\gamma \) in \(\Gamma _{g}\). The effective version of Theorem 1.10 that we prove here is the following.

Theorem 1.11

For each genus \(g\ge 2\), there is a constant \(A=A(g)\) such that for any \(c>0\), if \(1\ne \gamma \in \Gamma _{g}\) is not a proper power of another element in \(\Gamma _{g}\) and \(\ell _{w}(\gamma )\le c\log n\) then

$$\begin{aligned} {\mathbb {E}}_{g,n}[\mathsf {fix}_{\gamma }]=1+O_{c,g}\left( \frac{(\log n)^{A}}{n}\right) . \end{aligned}$$

The implied constant in the big-O depends only on c and g.

Remark 1.12

In the rest of the paper, to avoid complications in notation and formulas that would obfuscate our arguments, we give the proof of Theorem 1.11 when \(g=2\). The extension to arbitrary genus is for the most part obvious; where it is not, we point out the necessary changes.

The proof of Theorem 1.11 takes up the bulk of the paper, spanning Section 4–Section 6. The proof of Theorem 1.11 involves delving into the proof of Theorem 1.10 and refining the estimates, as well as introducing some completely new ideas.

1.1 Related works.

1.2 The Brooks–Makover model.

The first study of spectral gap for random surfaces in the literature is due to Brooks and Makover [BM04], who form a model of a random compact surface as follows. First, for a parameter n, they glue together n copies of an ideal hyperbolic triangle, where the gluing scheme is given by a random trivalent ribbon graph. Their model for this random ribbon graph is a modification of the Bollobás bin model from [Bol88]. This yields a random finite-area, non-compact hyperbolic surface. Then they perform a compactification procedure to obtain a random compact hyperbolic surface \(X_{\mathrm {BM}}(n)\). The genus of this surface is, however, not deterministic. Brooks and Makover prove that for this random model, there is a non-explicit constant \(C>0\) such that a.a.s. (as \(n\rightarrow \infty \))

$$\begin{aligned} \lambda _{1}(X_{\mathrm {BM}}(n))\ge C. \end{aligned}$$

Theorem 1.5 concerns a different random model, but improves on the Brooks–Makover result in two important ways: the bound on new eigenvalues is explicit, and this bound is independent of the compact hyperbolic surface X with which we begin.

It is also worth mentioning a recent result of Budzinski, Curien, and Petri [BCP21, Thm. 1] who prove that the ratios

$$\begin{aligned} \frac{\mathrm {diameter}(X_{\mathrm {BM}}(n))}{\log n} \end{aligned}$$

converge to 2 in probability as \(n\rightarrow \infty \); they also observe that this is a factor of 2 away from the optimal value.

The Weil–Petersson model. Another reasonable model of random surfaces comes from the Weil–Petersson volume form on the moduli space \({\mathcal {M}}_{g}\) of compact hyperbolic surfaces of genus g. Let \(X_{\mathrm {WP}}(g)\) denote a random surface in \({\mathcal {M}}_{g}\) sampled according to the (normalized) Weil–Petersson volume form. Mirzakhani proved in [Mir13, Section 1.2.I] that with probability tending to 1 as \(g\rightarrow \infty \),

$$\begin{aligned} \lambda _{1}(X_{\mathrm {WP}}(g))\ge \frac{1}{4}\left( \frac{\log 2}{2\pi +\log 2} \right) ^{2}\approx 0.00247. \end{aligned}$$
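For the record, the numerical value of this constant is easy to confirm (a quick check, not from [Mir13] itself):

```python
import math

# Mirzakhani's lower bound: (1/4) * (log 2 / (2π + log 2))^2
bound = 0.25 * (math.log(2) / (2 * math.pi + math.log(2))) ** 2
print(round(bound, 5))  # 0.00247
```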

We also note recent work of Monk [Mon22] who gives estimates on the density of eigenvalues below \(\frac{1}{4}\) of the Laplacian on \(X_{\mathrm {WP}}(g)\).

Prior work of the authors. In some sense, the closest result to Theorem 1.5 in the literature is due to the first and second named authors of the present paper [MN20]; it applies not to compact surfaces but to infinite-area convex co-compact hyperbolic surfaces. Because these surfaces have infinite area, their spectral theory is more involved. We will focus on one result of [MN20] to illustrate the comparison with this paper.

Suppose X is a connected, non-elementary, non-compact, convex co-compact hyperbolic surface. The spectral theory of X is driven by a critical parameter \(\delta =\delta (X)\in (0,1)\). This parameter is both the critical exponent of a Poincaré series and the Hausdorff dimension of the limit set of X. If \(\delta >\frac{1}{2}\), then results of Patterson [Pat76] and Lax–Phillips [LP81] say that the bottom of the spectrum of X is a simple eigenvalue at \(\delta (1-\delta )\) and there are finitely many eigenvalues in the range \([\delta (1-\delta ),\frac{1}{4})\). In [MN20], a model of a random degree-n cover of X was introduced that is completely analogous to the one used here; the only difference in the construction is that the fundamental group of X is a free group \(\mathrm {{\mathbf {F}}} _{r}\), and hence one uses random \(\phi \in \mathrm {Hom}(\mathrm {{\mathbf {F}}} _{r},S_{n})\) to construct the random surface \(X_{\phi }\). The following theorem was obtained in [MN20, Thm. 1.3].

Theorem 1.13

Assume that \(\delta =\delta (X)>\frac{1}{2}\). Then for any \(\sigma _{0}\in \left( \frac{3}{4}\delta ,\delta \right) \), a.a.s.

$$\begin{aligned} \mathrm {spec}\left( \Delta _{X_{\phi }}\right) \cap \left[ \delta \left( 1-\delta \right) ,\sigma _{0}(1-\sigma _{0})\right] =\mathrm {spec}\left( \Delta _{X}\right) \cap \left[ \delta \left( 1-\delta \right) ,\sigma _{0}(1-\sigma _{0})\right] \end{aligned}$$
(1.3)

and the multiplicities on both sides are the same.

Although Theorem 1.13 is analogous to Theorem 1.5 (for compact X, \(\delta (X)=1\)), the methods used in [MN20] have almost no overlap with the methods used here. For infinite-area X, the fundamental group is free, so the counterpart of Theorem 1.11 was already known by results of Broder–Shamir [BS87b] and the third named author [Pud15]. The challenge in [MN20] was to develop bespoke analytic machinery to access these estimates.

Conversely, in the current paper, the needed analytic machinery (Selberg’s trace formula) already exists; rather, it is the establishment of Theorem 1.11 that is the main challenge here, stemming from the fact that the fundamental group \(\Gamma _{g}\) is not free.

1.3 Subsequent results.

Since the preprint version of the current paper appeared in March 2020, several important results have been obtained in the area of spectral gap of random surfaces. Independently of each other, Wu and Xue [WX21] and Lipnowski and Wright [LW21] proved that for any \(\varepsilon >0\), a Weil–Petersson random compact hyperbolic surface of genus g has spectral gap of size at least \(\frac{3}{16}-\varepsilon \) with probability tending to one as \(g\rightarrow \infty \). This result has been extended to the case of Weil–Petersson random surfaces with not too many cusps by Hide in [Hid21].

In [MN21], the results of [MN20] have been strengthened by the first and second named author to an (essentially optimal) analog of Friedman’s theorem for bounded frequency resonances on infinite area Schottky surfaces.

Hide and the first named author have recently proved in [HM21] that the analog of Conjecture 1.1 for finite area non-compact hyperbolic surfaces holds true, and by combining this result with a cusp removal argument of Buser, Burger, and Dodziuk [BBD88], in [HM21] it is also proved that there exist compact hyperbolic surfaces with genera tending to infinity and \(\lambda _{1}\rightarrow \frac{1}{4}\). (We have chosen to preserve Remark 1.4 as originally written here for posterity.)

1.4 Structure of the proofs and the issues that arise.

1.5 Proof of Theorem 1.5 given Theorem 1.11.

First, we outline the proof of Theorem 1.5 given Theorem 1.11. Theorem 1.8 also follows from Theorem 1.11 using the same ideas. Both proofs are presented in full in Section 2.

Our method of proving Theorem 1.5 is analogous to the method of Broder and Shamir [BS87b] for proving that a random 2d-regular graph has a large spectral gap. For us, the Selberg trace formula replaces a more elementary formula for the trace of a power of the adjacency operator of a graph in terms of closed paths in the graph.

Let \(\Gamma \) denote the fundamental group of X. By taking the difference of the Selberg trace formula for \(X_{\phi }\) and that for X we obtain a formula of the form

$$\begin{aligned} \sum _{\text {new eigenvalues }\lambda \text { of } X_{\phi }}F(\lambda )=\sum _{[\gamma ]\in C(\Gamma )}G(\gamma )\left( \mathsf {fix}_{\gamma }(\phi )-1\right) , \end{aligned}$$
(1.4)

where \(C(\Gamma )\) is the collection of conjugacy classes in \(\Gamma \), and F and G are interdependent functions that depend on n. We choose F and G together so as to ensure that

  • \(F(\lambda )\) is non-negative for any possible \(\lambda \), and large if \(\lambda \) is an eigenvalue we want to forbid, and

  • \(G(\gamma )\) localizes to \(\gamma \) with \(\ell _{w}(\gamma )\le c\log n\) for some \(c=c(X)\).

By taking expectations of (1.4) we obtain

$$\begin{aligned} {\mathbb {E}}\left[ \sum _{\text {new eigenvalues }\lambda \text { of }X_{\phi }}F(\lambda )\right] =\sum _{[\gamma ]\in C(\Gamma )}G(\gamma ){\mathbb {E}}\left[ \mathsf {fix}_{\gamma }(\phi )-1\right] . \end{aligned}$$
(1.5)

The proof will conclude by bounding the right-hand side and applying Markov’s inequality to conclude that there are no new eigenvalues in the desired forbidden region. Since G is well-controlled in our proof, it remains to estimate each term \({\mathbb {E}}\left[ \mathsf {fix}_{\gamma }(\phi )-1\right] \). To do this, following Broder–Shamir [BS87b], we partition the summation on the right-hand side of (1.5) into three groups.

  • If \(\gamma \) is the identity, then G(1) is easily analyzed, and \({\mathbb {E}}\left[ \mathsf {fix}_{\gamma }(\phi )-1\right] =n-1\).

  • If \(\gamma \) is a proper power of a non-trivial element of \(\Gamma \), then we use a trivial bound \({\mathbb {E}}\left[ \mathsf {fix}_{\gamma }(\phi )-1\right] \le n-1\), so we get no gain from the expectation. On the other hand, the contribution to

    $$\begin{aligned} \sum _{[\gamma ]\in C(\Gamma )}G(\gamma ) \end{aligned}$$

    from these elements is negligible. Intuitively, this is because the number of elements of \(\Gamma \) with \(\ell _{w}(\gamma )\le L\) that are proper powers is (exponentially) negligible compared to the total number of elements.

  • If \(\gamma \) is not a proper power and not the identity, then we use Theorem 1.11 to obtain \({\mathbb {E}}\left[ \mathsf {fix}_{\gamma }(\phi )-1\right] =O_{X}\left( \frac{(\log n)^{A}}{n}\right) \). Thus for ‘most’ summands in the right-hand side of (1.5) we obtain a significant gain from the expectation.

Assembling all these estimates gives a sufficiently strong upper bound on (1.5) to obtain Theorem 1.5 via Markov’s inequality.
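The Markov-inequality step at the end is elementary: if \(F(\lambda )\ge T\) on the forbidden region and the expectation of the left-hand side of (1.5) is at most B, then the probability that some new eigenvalue lies in the region is at most B/T. A toy numeric illustration (the numbers below are hypothetical, not from the proof):

```python
# Markov's inequality: for a non-negative random variable S,
# P(S >= T) <= E[S] / T.  Here S is the sum of F(λ) over the new
# eigenvalues of X_φ, and any new eigenvalue in the forbidden
# region forces S >= T, since F >= 0 and F >= T on that region.
B = 1e-3  # hypothetical upper bound on E[S] from the right side of (1.5)
T = 1.0   # hypothetical lower bound for F on the forbidden region
prob_bound = B / T
print(prob_bound)  # 0.001: with these numbers, new eigenvalues in the
                   # forbidden region occur with probability at most 1/1000
```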

1.6 Proof of Theorem 1.11.

To understand the proof of Theorem 1.11, we suggest that the reader first read the overview below, then Section 6 where all the components of the proof are brought together, and then Section 3–Section 5 where the technical ingredients are proved. As throughout the paper, we assume \(g=2\) in this overview and we will forgo precision to give a bird’s-eye view of the proof.

Fixing an octagonal fundamental domain for X, any \(X_{\phi }\) is tiled by octagons; this tiling comes with some extra labelings of edges corresponding to the generators of \(\Gamma \). Any labeled 2-dimensional CW-complex that can occur as a subcomplex of some \(X_{\phi }\) is called a tiled surface. For any tiled surface Y, we write \({\mathbb {E}}_{n}^{\mathrm {emb}}\left( Y\right) \) for the expected number of embedded copies of Y in \(X_{\phi }\) when \(\phi \) is chosen uniformly at random in \(\mathrm {Hom}(\Gamma ,S_{n})\).

In the previous paper [MP20], we axiomatized certain collections \({\mathcal {R}}\) of tiled surfaces, depending on \(\gamma \), that have the property that

$$\begin{aligned} {\mathbb {E}}_{2,n}[\mathsf {fix}_{\gamma }]=\sum _{Y\in {\mathcal {R}}}{\mathbb {E}}_{n}^{\mathrm {emb}}(Y). \end{aligned}$$
(1.6)

These collections are called resolutions. Here we have oversimplified the definitions to give an overview of the main ideas.

In [MP20], we chose a resolution, depending on \(\gamma \), that consisted of two special types of tiled surfaces: those that are boundary reduced or strongly boundary reduced. The motivation for these definitions is that they make our methods for estimating \({\mathbb {E}}_{n}^{\mathrm {emb}}(Y)\) more accurate. To give an example, if Y is strongly boundary reduced, then we prove that for Y fixed and \(n\rightarrow \infty \), we obtain

$$\begin{aligned} {\mathbb {E}}_{n}^{\mathrm {emb}}(Y)=n^{\chi (Y)}\left( 1+O_{Y}\left( n^{-1}\right) \right) . \end{aligned}$$
(1.7)

However, the implied constant depends on Y, and in the current paper we have to control uniformly all \(\gamma \) with \(\ell _{w}(\gamma )\le c\log n\). The methods of [MP20] are not good enough for this goal. To deal with this, we introduce in Definition 3.12 a new type of tiled surface, called ‘\(\varepsilon \)-adapted’ (for some \(\varepsilon \ge 0\)), that directly generalizes, and quantifies, the concept of being strongly boundary reduced. We will explain the benefits of this definition momentarily. We also introduce a new algorithm, called the octagons-vs-boundary algorithm, that, given \(\gamma \), produces a finite resolution \({\mathcal {R}}\) as in (1.6) such that every \(Y\in {\mathcal {R}}\) is either

  • \(\varepsilon \)-adapted for some \(\varepsilon >0\), or

  • boundary reduced, with the additional condition that \({\mathfrak {d}}\left( Y\right)<{\mathfrak {f}}\left( Y\right) <-\chi (Y)\), where \({\mathfrak {d}}(Y)\) is the length of the boundary of Y and \({\mathfrak {f}}(Y)\) is the number of octagons in Y.

Any \(Y\in {\mathcal {R}}\) has \({\mathfrak {d}}(Y)\le c'(\log n)\) and \({\mathfrak {f}}(Y)\le c'(\log n)^{2}\) provided that \(\ell _{w}(\gamma )\le c\log n\) (Corollary 3.25). The fact that we maintain control of these quantities during the algorithm is essential. However, a defect of this algorithm is that we lose control of how many \(\varepsilon \)-adapted \(Y\in {\mathcal {R}}\) there are of a given Euler characteristic. In contrast, in the algorithm of [MP20] we control, at least, the number of elements in the resolution of Euler characteristic zero. We later have to work to get around this.

We run the octagons-vs-boundary algorithm for a fixed \(\varepsilon =\frac{1}{32}\) to obtain a resolution \({\mathcal {R}}\). Let us explain the benefits of the resolution we have constructed. The \(\varepsilon \)-adapted \(Y\in {\mathcal {R}}\) give the main contributions to (1.6), while the merely boundary reduced Y contribute something negligible.

Indeed, we prove that for any boundary reduced \(Y\in {\mathcal {R}}\), in the regime of parameters we care about,

$$\begin{aligned} {\mathbb {E}}_{n}^{\mathrm {emb}}(Y)\ll (A_{0}{\mathfrak {f}}(Y))^{A_{0}{\mathfrak {f}}(Y)}n^{\chi (Y)}, \end{aligned}$$
(1.8)

where \(A_{0}>0\). This bound (1.8) appears in (6.5) as the result of combining Corollary 4.5, Theorem 5.1, Proposition 5.11 and Lemma 3.6; the proof is by carefully effectivizing the arguments of [MP20].

While the bound (1.8) is quite bad (for example, using it on all terms in (1.6) would not even recover the results of [MP20]), the control of the dependence on \({\mathfrak {f}}(Y)\) is enough so that when combined with \({\mathfrak {d}}\left( Y\right)<{\mathfrak {f}}\left( Y\right) <-\chi (Y)\) we obtain

$$\begin{aligned} {\mathbb {E}}_{n}^{\mathrm {emb}}(Y)\ll (A_{0}{\mathfrak {f}}(Y))^{A_{0}{\mathfrak {f}}(Y)}n^{-{\mathfrak {f}}(Y)}\ll \left( \frac{\left( c'(\log n)^{2}\right) ^{A_{0}}}{n}\right) ^{{\mathfrak {f}}(Y)}. \end{aligned}$$

This is good enough that it can simply be combined with counting all possible Y with \({\mathfrak {d}}(Y)\le c'(\log n)\) and \({\mathfrak {f}}(Y)\le c'(\log n)^{2}\) to obtain that the non-\(\varepsilon \)-adapted surfaces in \({\mathcal {R}}\) contribute \(\ll \frac{(\log n)^{A}}{n}\) to (1.6) for some \(A>0\). This is Proposition 6.1.

So from now on, \({\underline{\text{ assume } Y\in {\mathcal {R}} \text{ is } {\varepsilon }\text{-adapted }}}\); we explain how to control the contributions to (1.6) from these remaining Y. We first prove that there is a rational function \(Q_{Y}\) such that

$$\begin{aligned} {\mathbb {E}}_{n}^{\mathrm {emb}}(Y)=n^{\chi (Y)}\left( Q_{Y}(n)+O\left( \frac{1}{n}\right) \right) \left( 1+O\left( \frac{(\log n)^{2}}{n}\right) \right) , \end{aligned}$$
(1.9)

where the implied constants hold for any \(\varepsilon \)-adapted \(Y\in {{{\mathcal {R}}}}\) as long as \(\ell _{w}\left( \gamma \right) \le c\log n\) (Theorem 5.1, Proposition 5.12 and Corollary 5.21). In fact, this expression remains approximately valid for the same Y if n is replaced throughout by m with \(m\approx (\log n)^{B}\) for some \(B>0\); this will become relevant momentarily.

The rational function \(Q_{Y}\) is new to this paper; it appears through Corollary 5.15 and Lemma 5.20 and results from refining the representation-theoretic arguments of [MP20]. The description of \(Q_{Y}\) is in terms of Stallings core graphs [Sta83], and is related to the theory of the expected number of fixed points of words in the free group. In the notation of the rest of the paper,

$$\begin{aligned} Q_{Y}(n)=\frac{\left( n\right) _{{\mathfrak {v}}(Y)}}{n^{\chi (Y)}}\sum _{H\in {{{\mathcal {Q}}}} \left( Y\right) }\frac{\left( n\right) _{{\mathfrak {v}}(H)}}{\prod _{f\in \left\{ a,b,c,d\right\} }(n)_{{\mathfrak {e}}_{f}(H)}}, \end{aligned}$$
(1.10)

where \({\mathcal {Q}}(Y)\) is a collection of core graphs obtained by adding handles to the one-skeleton of Y, performing ‘folding’ operations, and taking quotients in a particular way (see Section 5.8 for details).

The argument leading to (1.9) involves isolating some of the terms that contribute to \({\mathbb {E}}_{n}^{\mathrm {emb}}(Y)\) and reinterpreting them in terms of the size of a set \({\mathbb {X}}_{n}^{*}(Y,{\mathcal {J}})\) of maps \(\mathrm {{\mathbf {F}}} _{4}\rightarrow S_{n}\) that contain, in an appropriate sense, an embedded copy of Y, but only satisfy the relation of \(\Gamma \) modulo \(S_{n-{\mathfrak {v}}(Y)}\) rather than exactly (Proposition 5.13). Topological arguments then count the set \({\mathbb {X}}_{n}^{*}(Y,{\mathcal {J}})\) in terms of core graphs, leading to Lemma 5.20, which gives (1.10) here.

One unusual thing is that our combinatorial description of \(Q_{Y}\) does not immediately tell us the order of growth of \(Q_{Y}(n)\), because we do not know much about \({\mathcal {Q}}(Y)\). On the other hand, we know enough about \(Q_{Y}\) (for example, for what range of parameters it is positive) so that we can ‘black-box’ results from [MP20] to learn that if Y is fixed and \(n\rightarrow \infty \), \(Q_{Y}(n)\rightarrow 1\). (We also learn from this argument the interesting topological fact that there is exactly one element of \({\mathcal {Q}}(Y)\) of maximal Euler characteristic.)

These algebraic properties of \(Q_{Y}\), together with a priori facts about it, allow us to use (1.9) to establish the following two important inequalities:

$$\begin{aligned} {\mathbb {E}}_{n}^{\mathrm {emb}}(Y)&=n^{\chi (Y)}\left( 1+O_{c}\left( \frac{(\log n)^{4}}{n}\right) +O\left( \frac{m}{n}\frac{{\mathbb {E}}_{m}^{\mathrm {emb}}(Y)}{m^{\chi (Y)}} \right) \right) \end{aligned}$$
(1.11)
$$\begin{aligned} n^{\chi (Y)}&\ll \frac{m}{n}{\mathbb {E}}_{m}^{\mathrm {emb}}(Y),\quad \text {if }\chi (Y)<0 \end{aligned}$$
(1.12)

where \(m\approx (\log n)^{B}\) is much smaller than n. These inequalities are provided by Proposition 5.27 and Corollary 5.25 (see also Remark 5.26). While (1.12) may look surprising, its purpose is to let us run our argument in reverse with decreased parameters, as explained below.

Let us now explain precisely the purpose of (1.12) and (1.11). By black-boxing the results of [MP20] one more time, we learn that there is exactly one \(\varepsilon \)-adapted \(Y\in {\mathcal {R}}\) with \(\chi (Y)=0\), and none with \(\chi (Y)>0\). This single Y with \(\chi (Y)=0\) contributes the main term of Theorems 1.10 and 1.11 through (1.11). Any other term coming from \(\varepsilon \)-adapted Y can be controlled in terms of \({\mathbb {E}}_{m}^{\mathrm {emb}}(Y)\) using (1.11) and (1.12). These errors could accumulate, but we can control them all at once by using (1.6) in reverse with n replaced by m to obtain

$$\begin{aligned} \sum _{Y\in {\mathcal {R}}}{\mathbb {E}}_{m}^{\mathrm {emb}}(Y)={\mathbb {E}}_{2,m}[\mathsf {fix}_{\gamma }]\le m\approx (\log n)^{B}. \end{aligned}$$

Putting the previous arguments together proves Theorem 1.11.

1.7 Notation.

The commutator of two group elements is \(\left[ a,b\right] {\mathop {=}\limits ^{\mathrm {def}}}aba^{-1}b^{-1}.\) For \(m,n\in {\mathbf {N}}\), \(m\le n\), we use the notation \([m,n]\) for the set \(\{m,m+1,\ldots ,n\}\) and [n] for the set \(\{1,\ldots ,n\}\). For \(q,n\in {\mathbf {N}}\) with \(q\le n\) we use the Pochhammer symbol

$$\begin{aligned} (n)_{q}{\mathop {=}\limits ^{\mathrm {def}}}n(n-1)\cdots (n-q+1). \end{aligned}$$
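To fix the convention, here is a minimal Python sketch of the falling factorial \((n)_{q}\); the function name is ours, chosen only for illustration.

```python
def falling_factorial(n, q):
    """Pochhammer symbol (n)_q = n (n-1) ... (n-q+1), with (n)_0 = 1."""
    result = 1
    for k in range(q):
        result *= n - k
    return result

print(falling_factorial(5, 2))  # 5 * 4 = 20
print(falling_factorial(5, 5))  # 5! = 120
```

Note that \((n)_{1}=n\) and \((n)_{n}=n!\), so the symbol interpolates between the two.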

For real-valued functions \(f,g\) that depend on a parameter n we write \(f=O(g)\) to mean there exist constants \(C,N>0\) such that for \(n>N\), \(|f(n)|\le Cg(n)\). We write \(f\ll g\) if there are \(C,N>0\) such that \(f(n)\le Cg(n)\) for \(n>N\). We add constants as a subscript to the big O or the \(\ll \) sign to mean that the constants C and N depend on these other constants; for example, \(f=O_{\varepsilon }(g)\) means that both \(C=C(\varepsilon )\) and \(N=N(\varepsilon )\) may depend on \(\varepsilon \). If there are no subscripts, the implied constants depend only on the genus g, which is fixed throughout most of the paper. We use the notation \(f\asymp g\) to mean \(f\ll g\) and \(g\ll f\); the use of subscripts is the same as before.

2 The Proof of Theorem 1.5 Given Theorem 1.11

2.1 Selberg’s trace formula and counting closed geodesics.

Here we describe the main tool of this Section 2: Selberg’s trace formula for compact hyperbolic surfaces. Let \(C_{c}^{\infty }({\mathbf {R}})\) denote the infinitely differentiable real functions on \({\mathbf {R}}\) with compact support. Given an even function \(\varphi \in C_{c}^{\infty }({\mathbf {R}})\), its Fourier transform is defined by

$$\begin{aligned} {\widehat{\varphi }}(\xi ){\mathop {=}\limits ^{\mathrm {def}}}\int _{-\infty }^{\infty }\varphi (x)e^{-ix\xi }dx \end{aligned}$$

for any \(\xi \in {\mathbf {C}}\). As \(\varphi \in C_{c}^{\infty }({\mathbf {R}})\), the integral above converges for all \(\xi \in {\mathbf {C}}\) to an entire function.

Given a compact hyperbolic surface X, we write \({\mathcal {L}}(X)\) for the set of closed oriented geodesics in X. A geodesic is called primitive if it is not the result of repeating another geodesic q times for \(q\ge 2\). Let \({\mathcal {P}}(X)\) denote the set of closed oriented primitive geodesics on X. Every closed geodesic \(\gamma \) has a length \(\ell (\gamma )\) according to the hyperbolic metric on X. Every closed oriented geodesic \(\gamma \in {\mathcal {L}}(X)\) determines a conjugacy class \([{\tilde{\gamma }}]\) in \(\pi _{1}(X,x_{0})\) for any basepoint \(x_{0}\). Clearly, a closed oriented geodesic in X is primitive if and only if the elements of the corresponding conjugacy class are not proper powers in \(\pi _{1}(X,x_{0})\). For \(\gamma \in {\mathcal {L}}(X)\) we write \(\Lambda (\gamma )=\ell (\gamma _{0})\) where \(\gamma _{0}\) is the unique primitive closed oriented geodesic such that \(\gamma =\gamma _{0}^{q}\) for some \(q\ge 1\).

We now give Selberg’s trace formula for a compact hyperbolic surface in the form of [Bus10, Thm. 9.5.3] (see Selberg [Sel56] for the original appearance of this formula and Hejhal [Hej76, Hej83] for an encyclopedic treatment).

Theorem 2.1

(Selberg’s trace formula). Let X be a compact hyperbolic surface and let

$$\begin{aligned} 0=\lambda _{0}(X)\le \lambda _{1}(X)\le \cdots \le \lambda _{n}(X)\le \cdots \end{aligned}$$

denote the spectrum of the Laplacian on X. For \(i\in {\mathbf {N}}\cup \{0\}\) let

$$\begin{aligned} r_{i}(X){\mathop {=}\limits ^{\mathrm {def}}}{\left\{ \begin{array}{ll} \sqrt{\lambda _{i}(X)-\frac{1}{4}} &{} \text {if }\lambda _{i}(X)>1/4\\ i\sqrt{\frac{1}{4}-\lambda _{i}(X)} &{} \text {if }\lambda _{i}(X)\le 1/4 \end{array}\right. }. \end{aligned}$$

Then for any even \(\varphi \in C_{c}^{\infty }({\mathbf {R}})\)

$$\begin{aligned} \sum _{i=0}^{\infty }{\widehat{\varphi }}(r_{i}(X))=\frac{{\mathrm {area}}(X)}{4\pi }\int _{-\infty }^{\infty }r{\widehat{\varphi }}(r)\tanh (\pi r)dr+\sum _{\gamma \in {\mathcal {L}}(X)}\frac{\Lambda (\gamma )}{2\sinh \left( \frac{\ell (\gamma )}{2}\right) }\varphi (\ell (\gamma )). \end{aligned}$$

(Both sides of the formula are absolutely convergent.)

We will also need a bound on the number of closed oriented geodesics with length \(\ell (\gamma )\le T\). In fact we only need the following very soft bound from e.g. [Bus10, Lem. 9.2.7].

Lemma 2.2

For a compact hyperbolic surface X, there is a constant \(C=C(X)\) such that

$$\begin{aligned} \left| \left\{ \gamma \in {\mathcal {L}}(X)\,:\,\ell (\gamma )\le T\right\} \right| \le Ce^{T}. \end{aligned}$$

Much sharper versions of this estimate are known, but Lemma 2.2 suffices for our purposes.

Suppose that X is a connected compact hyperbolic surface. We fix a basepoint \(x_{0}\in X\) and an isomorphism \(\pi _{1}(X,x_{0})\cong \Gamma _{g}\) as in (1.1), where \(g\ge 2\) is the genus of X. If \(\gamma \) is a closed oriented geodesic, by abuse of notation we let \(\ell _{w}\left( \gamma \right) \) denote the minimal word-length of an element in the conjugacy class in \(\Gamma _{g}\) specified by \(\gamma \) (on page 5 we used the same notation for an element of \(\Gamma _{g}\)). We want to compare \(\ell (\gamma )\) and \(\ell _{w}(\gamma )\). We will use the following simple consequence of the Švarc–Milnor lemma [BH99, Prop. 8.19].

Lemma 2.3

With notations as above, there exist constants \(K_{1},K_{2}\ge 0\) depending on X such that

$$\begin{aligned} \ell _{w}(\gamma )\le K_{1}\ell (\gamma )+K_{2}. \end{aligned}$$

2.2 Choice of function for use in Selberg’s trace formula.

We now fix a function \(\varphi _{0}\in C_{c}^{\infty }({{\mathbf {R}}})\) which has the following key properties:

  1. \(\varphi _{0}\) is non-negative and even.

  2. \(\mathrm {Supp}(\varphi _{0})=(-1,1)\).

  3. The Fourier transform \(\widehat{\varphi _{0}}\) satisfies \(\widehat{\varphi _{0}}(\xi )\ge 0\) for all \(\xi \in {{\mathbf {R}}}\cup i{{\mathbf {R}}}\).

Proof that such a function exists

Let \(\psi _{0}\) be a \(C^{\infty }\), even, real-valued non-negative function whose support is exactly \((-\frac{1}{2},\frac{1}{2})\). Let \(\varphi _{0}{\mathop {=}\limits ^{\mathrm {def}}}\psi _{0}\star \psi _{0}\) where

$$\begin{aligned} \psi _{0}\star \psi _{0}(x){\mathop {=}\limits ^{\mathrm {def}}}\int _{{{\mathbf {R}}}}\psi _{0}(x-t)\psi _{0}(t)dt. \end{aligned}$$

Then \(\varphi _{0}\) has the desired properties. \(\square \)
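To illustrate the construction, the following Python sketch checks numerically, for one standard choice of bump function \(\psi _{0}\) (our choice, not one made in the paper), that \(\varphi _{0}=\psi _{0}\star \psi _{0}\) is non-negative with support in \((-1,1)\), and that its Fourier transform factors as \(\widehat{\psi _{0}}^{2}\ge 0\) on \({\mathbf {R}}\) by the convolution theorem. (On \(i{\mathbf {R}}\), non-negativity of \(\widehat{\varphi _{0}}(it)=\int \varphi _{0}(x)e^{xt}dx\) is immediate from \(\varphi _{0}\ge 0\).) All quadratures are simple midpoint rules.

```python
import math

def psi0(x):
    # Smooth, even, non-negative bump with support exactly (-1/2, 1/2).
    return math.exp(-1.0 / (0.25 - x * x)) if abs(x) < 0.5 else 0.0

def convolve_at(x, steps=200):
    # phi0(x) = (psi0 * psi0)(x), midpoint rule over the support of psi0.
    h = 1.0 / steps
    return h * sum(psi0(x - (-0.5 + (k + 0.5) * h)) * psi0(-0.5 + (k + 0.5) * h)
                   for k in range(steps))

def cosine_transform(f, xi, a, b, steps=400):
    # Fourier transform of an even real function equals its cosine transform.
    h = (b - a) / steps
    return h * sum(f(a + (k + 0.5) * h) * math.cos(xi * (a + (k + 0.5) * h))
                   for k in range(steps))

# phi0 is non-negative with support in (-1, 1)...
assert convolve_at(0.3) > 0.0 and convolve_at(1.05) == 0.0
# ...and the convolution theorem gives phi0_hat = (psi0_hat)^2 >= 0 on R:
for xi in (0.0, 1.0, 2.5):
    lhs = cosine_transform(convolve_at, xi, -1.0, 1.0)
    rhs = cosine_transform(psi0, xi, -0.5, 0.5) ** 2
    assert abs(lhs - rhs) < 1e-6 and rhs >= 0.0
```

The identity \(\widehat{\psi _{0}\star \psi _{0}}=\widehat{\psi _{0}}^{2}\) is exactly why the convolution construction yields property 3.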

We now fix a function \(\varphi _{0}\) as above and for any \(T>0\) define

$$\begin{aligned} \varphi _{T}(x){\mathop {=}\limits ^{\mathrm {def}}}\varphi _{0}\left( \frac{x}{T}\right) . \end{aligned}$$

Lemma 2.4

For all \(\varepsilon >0\), there exists \(C_{\varepsilon }>0\) such that for all \(t\in {{\mathbf {R}}}_{\ge 0}\) and for all \(T>0\)

$$\begin{aligned} \widehat{\varphi _{T}}(it)\ge C_{\varepsilon }Te^{T(1-\varepsilon )t}. \end{aligned}$$

Proof

First observe that

$$\begin{aligned} \widehat{\varphi _{T}}(it)=T\widehat{\varphi _{0}}(Tit)=T\int _{{{\mathbf {R}}}}\varphi _{0}(x)e^{Txt}dx. \end{aligned}$$

Using \(t\ge 0\) and \(\mathrm {Supp}(\varphi _{0})=(-1,1)\) with \(\varphi _{0}\) non-negative, we have for some \(C_{\varepsilon }>0\)

$$\begin{aligned} \widehat{\varphi _{T}}(it)\ge T\int _{1-\varepsilon }^{1}\varphi _{0}(x)e^{Txt}dx\ge TC_{\varepsilon }e^{T(1-\varepsilon )t}. \end{aligned}$$

\(\square \)
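The inequality in the proof can also be checked numerically. The sketch below uses a stand-in bump \(\varphi _{0}(x)=e^{-1/(1-x^{2})}\) on \((-1,1)\) (non-negative, even, smooth, with the correct support; property 3 of Section 2.2 is not used in this proof) and verifies \(\widehat{\varphi _{T}}(it)\ge C_{\varepsilon }Te^{T(1-\varepsilon )t}\) with \(C_{\varepsilon }=\int _{1-\varepsilon }^{1}\varphi _{0}(x)dx\), as in the proof.

```python
import math

def phi0(x):
    # Stand-in bump: non-negative, even, smooth, with support exactly (-1, 1).
    # (Property 3 of Section 2.2 is not needed for this particular bound.)
    return math.exp(-1.0 / (1.0 - x * x)) if abs(x) < 1.0 else 0.0

def phiT_hat_it(T, t, steps=2000):
    # \widehat{phi_T}(it) = T * integral of phi0(x) e^{T x t} dx (midpoint rule).
    h = 2.0 / steps
    xs = [-1.0 + (k + 0.5) * h for k in range(steps)]
    return T * h * sum(phi0(x) * math.exp(T * x * t) for x in xs)

def lower_bound(T, t, eps, steps=2000):
    # C_eps * T * e^{T (1-eps) t} with C_eps = integral of phi0 over [1-eps, 1].
    h = 2.0 / steps
    xs = [-1.0 + (k + 0.5) * h for k in range(steps)]
    C_eps = h * sum(phi0(x) for x in xs if x >= 1.0 - eps)
    return C_eps * T * math.exp(T * (1.0 - eps) * t)

# On [1 - eps, 1] we have e^{Txt} >= e^{T(1-eps)t} for t >= 0, and the rest of
# the integrand is non-negative, so the claimed lower bound holds:
for (T, t, eps) in [(4.0, 0.0, 0.25), (8.0, 0.3, 0.25), (16.0, 0.5, 0.5)]:
    assert phiT_hat_it(T, t) >= lower_bound(T, t, eps)
```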

2.3 Proof of Theorem 1.5.

Let X be a genus g compact hyperbolic surface and let \(X_{\phi }\) be the cover of X corresponding to \(\phi \in \mathrm {Hom}(\Gamma _{g},S_{n})\) constructed in the introduction. In what follows we let

$$\begin{aligned} T=4\log n. \end{aligned}$$

For every \(\gamma \in {\mathcal {L}}(X)\), we pick \({\tilde{\gamma }}\in \Gamma _{g}\) in the conjugacy class in \(\Gamma _{g}\) corresponding to \(\gamma \) (so in particular \(\ell _{w}\left( {\tilde{\gamma }}\right) =\ell _{w}\left( \gamma \right) \)). Every closed oriented geodesic \(\delta \) in \(X_{\phi }\) covers, via the Riemannian covering map \(X_{\phi }\rightarrow X,\) a unique closed oriented geodesic in X that we will call \(\pi (\delta )\). This gives a map

$$\begin{aligned} \pi :{\mathcal {L}}(X_{\phi })\rightarrow {\mathcal {L}}(X). \end{aligned}$$

Note that \(\ell (\delta )=\ell (\pi (\delta )).\) We claim that \(|\pi ^{-1}(\gamma )|=\mathsf {fix}_{{\tilde{\gamma }}}(\phi )\), recalling that \(\mathsf {fix}_{{\tilde{\gamma }}}(\phi )\) is the number of fixed points of \(\phi ({\tilde{\gamma }})\). Indeed, by its very definition, \(X_{\phi }\) is a fiber bundle over X with fiber [n]. If \(\gamma \in {\mathcal {P}}(X)\), and we fix some regular point \(o\in \gamma \) (not a self-intersection point), then in \(X_{\phi }\), the fiber of o can be identified with [n]. The oriented geodesic path \(\gamma \backslash \{o\}\) lifts to n oriented geodesic paths whose start and end points lie in this fiber. The permutation of [n] obtained by following these lifts from start to end is (up to conjugation) \(\phi ({\tilde{\gamma }})\), and hence the \(\delta \)'s with \(\pi (\delta )=\gamma \) are precisely the lifts that close up; in other words, they correspond to fixed points of \(\phi ({\tilde{\gamma }})\). For general \(\gamma \in {\mathcal {L}}\left( X\right) \), write \(\gamma =\gamma _{0}^{q}\) with \(q\ge 1\) and \(\gamma _{0}\in {\mathcal {P}}\left( X\right) \). A similar argument shows that the elements of \(\pi ^{-1}\left( \gamma \right) \) are in bijection with the fixed points of \(\phi ({\tilde{\gamma }}_{0}^{q})\), and we may take \({\tilde{\gamma }}={\tilde{\gamma }}_{0}^{q}\).
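The correspondence between closed lifts and fixed points can be illustrated with a small permutation computation: a point of the fiber lies on a closed lift of \(\gamma =\gamma _{0}^{q}\) exactly when it is fixed by the q-th power of the permutation, i.e. when it lies in a cycle of length dividing q. The encoding of permutations as Python lists is our own.

```python
def fixed_points(perm, q=1):
    """Points of {0, ..., n-1} fixed by the q-th power of a permutation.

    `perm` is a list with perm[x] the image of x.
    """
    fixed = []
    for x in range(len(perm)):
        y = x
        for _ in range(q):
            y = perm[y]
        if y == x:
            fixed.append(x)
    return fixed

# Cycle type (3)(2)(1) on {0, ..., 5}: cycles (0 1 2), (3 4), (5).
sigma = [1, 2, 0, 4, 3, 5]
assert fixed_points(sigma, 1) == [5]           # only the 1-cycle closes after one lap
assert fixed_points(sigma, 2) == [3, 4, 5]     # the 2-cycle closes after two laps
assert fixed_points(sigma, 3) == [0, 1, 2, 5]  # the 3-cycle closes after three laps
```

With \(\sigma \) playing the role of \(\phi ({\tilde{\gamma }}_{0})\), the closed lifts of \(\gamma _{0}^{q}\) correspond exactly to the output of fixed_points(sigma, q).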

We also have \({\mathrm {area}}(X_{\phi })=n\cdot {\mathrm {area}}(X)\). Now applying Theorem 2.1 to \(X_{\phi }\) with the function \(\varphi _{T}\) gives

$$\begin{aligned} \sum _{i=0}^{\infty }\widehat{\varphi _{T}}(r_{i}(X_{\phi }))&=\frac{\mathrm {area}(X_{\phi })}{4\pi }\int _{-\infty }^{\infty }r\widehat{\varphi _{T}}(r)\tanh (\pi r)dr+\sum _{\delta \in {\mathcal {L}}(X_{\phi })}\frac{\Lambda (\delta )}{2\sinh \left( \frac{\ell (\delta )}{2}\right) }\varphi _{T}(\ell (\delta ))\\&=\frac{n\cdot \mathrm {area}(X)}{4\pi }\int _{-\infty }^{\infty }r\widehat{\varphi _{T}}(r)\tanh (\pi r)dr\\&\quad +\sum _{\gamma \in {\mathcal {L}}(X)}\sum _{\delta \in \pi ^{-1}(\gamma )}\frac{\Lambda (\delta )}{2\sinh \left( \frac{\ell (\gamma )}{2}\right) }\varphi _{T}(\ell (\gamma ))\\&=\frac{n\cdot \mathrm {area}(X)}{4\pi }\int _{-\infty }^{\infty }r\widehat{\varphi _{T}}(r)\tanh (\pi r)dr\\&\quad +\sum _{\gamma \in {\mathcal {P}}(X)}\frac{\mathsf {fix}_{{\tilde{\gamma }}}(\phi )\ell (\gamma )}{2\sinh \left( \frac{\ell (\gamma )}{2}\right) }\varphi _{T}(\ell (\gamma ))\\&\quad +\sum _{\gamma \in \mathcal {{\mathcal {L}}}(X)-{\mathcal {P}}(X)}\sum _{\delta \in \pi ^{-1}(\gamma )}\frac{\Lambda (\delta )}{2\sinh \left( \frac{\ell (\gamma )}{2}\right) }\varphi _{T}(\ell (\gamma )), \end{aligned}$$

where in the second equality we used the fact that for \(\delta \in {\mathcal {L}}\left( X_{\phi }\right) \), \(\ell \left( \delta \right) =\ell \left( \pi \left( \delta \right) \right) \), and in the third equality we used that if \(\gamma \in {\mathcal {P}}(X)\), then \(\delta \in {\mathcal {P}}(X_{\phi })\) for all \(\delta \in \pi ^{-1}(\gamma )\), so \(\Lambda (\delta )=\Lambda (\gamma )=\ell (\gamma )\). Let \(i_{1},i_{2},i_{3},\ldots \) be a subsequence of \(1,2,3,\ldots \) such that

$$\begin{aligned} 0\le \lambda _{i_{1}}(X_{\phi })\le \lambda _{i_{2}}(X_{\phi })\le \cdots \end{aligned}$$

are the new eigenvalues of \(X_{\phi }\). Thus \(\lambda _{i_{1}}\left( X_{\phi }\right) \) is the smallest new eigenvalue of \(X_{\phi }\). Taking the difference of the above formula with the trace formula for X (with the same function \(\varphi _{T}\)) gives

$$\begin{aligned} \sum _{j=1}^{\infty }\widehat{\varphi _{T}}(r_{i_{j}}(X_{\phi }))&=\frac{(n-1)\cdot \mathrm {area}(X)}{4\pi }\int _{-\infty }^{\infty }r\widehat{\varphi _{T}}(r)\tanh (\pi r)dr\nonumber \\&\quad +\sum _{\gamma \in {\mathcal {P}}(X)}\frac{(\mathsf {fix}_{{\tilde{\gamma }}}(\phi )-1)\ell (\gamma )}{2\sinh \left( \frac{\ell (\gamma )}{2}\right) }\varphi _{T}(\ell (\gamma ))\nonumber \\&\quad +\sum _{\gamma \in \mathcal {{\mathcal {L}}}(X)-{\mathcal {P}}(X)}\frac{\varphi _{T}\left( \ell (\gamma )\right) }{2\sinh \left( \frac{\ell (\gamma )}{2}\right) }\left( \left( \sum _{\delta \in \pi ^{-1}(\gamma )}\Lambda (\delta )\right) -\Lambda (\gamma )\right) . \end{aligned}$$
(2.1)

Since \(\varphi _{T}\) is non-negative and for any \(\gamma \in {\mathcal {L}}(X)\), \(|\pi ^{-1}(\gamma )|\le n\), and \(\Lambda (\delta )\le \ell \left( \delta \right) =\ell (\gamma )\) for all \(\delta \in \pi ^{-1}(\gamma )\), the sum on the bottom line of (2.1) is bounded from above by

$$\begin{aligned} n\sum _{\gamma \in {\mathcal {L}}(X)-{\mathcal {P}}(X)}\frac{\varphi _{T}\left( \ell (\gamma )\right) }{2\sinh \left( \frac{\ell (\gamma )}{2}\right) }\cdot \ell (\gamma )=n\sum _{\gamma \in {\mathcal {P}}(X)}\sum _{k=2}^{\infty }\frac{\varphi _{T}\left( k\ell (\gamma )\right) }{2\sinh \left( \frac{k\ell (\gamma )}{2}\right) }k\ell (\gamma ). \end{aligned}$$
(2.2)

We have

$$\begin{aligned} \sum _{k=2}^{\infty }\frac{\varphi _{T}\left( k\ell (\gamma )\right) }{2\sinh \left( \frac{k\ell (\gamma )}{2}\right) }k\ell (\gamma ){\mathop {\ll }\limits ^{\left( *\right) }}_{X}\ell (\gamma )\sum _{k=2}^{\infty }ke^{-\frac{k\ell (\gamma )}{2}}{\mathop {\ll }\limits ^{\left( **\right) }}_{X}\ell (\gamma )e^{-\ell (\gamma )}, \end{aligned}$$
(2.3)

where in \(\left( *\right) \) we used that \(\varphi _{T}\) is bounded, and in both \(\left( *\right) \) and \(\left( **\right) \) we used that there is a positive lower bound on the lengths of closed geodesics in X. As \(\varphi _{T}\) is supported on \(\left( -T,T\right) \), the left hand side of (2.3) vanishes whenever \(\ell \left( \gamma \right) \ge T/2\). Using Lemma 2.2 we thus get

$$\begin{aligned} n\sum _{\gamma \in {\mathcal {P}}(X)}\sum _{k=2}^{\infty }\frac{\varphi _{T}\left( k\ell (\gamma )\right) }{2\sinh \left( \frac{k\ell (\gamma )}{2}\right) }k\ell (\gamma )&\ll _{X}n\sum _{\gamma \in {\mathcal {P}}(X):\ell (\gamma )\le T}\ell (\gamma )e^{-\ell (\gamma )}\nonumber \\&\le n\sum _{m=0}^{T}\sum _{\gamma \in {\mathcal {L}}(X)\,:\,m\le \ell (\gamma )<m+1}\left( m+1\right) e^{-m}\nonumber \\&\ll _{X}n\sum _{m=0}^{T}(m+1)e^{m+1}e^{-m}\ll nT^{2}. \end{aligned}$$
(2.4)
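The geometric-series step behind (2.3) and (2.4) is elementary and can be sanity-checked numerically: with \(x=e^{-\ell /2}\) one has \(\sum _{k\ge 2}kx^{k}=x^{2}(2-x)/(1-x)^{2}\), so the ratio of this sum to \(e^{-\ell }=x^{2}\) is \((2-x)/(1-x)^{2}\), which is bounded uniformly over \(\ell \ge \ell _{0}\) for any fixed \(\ell _{0}>0\). A quick Python check (the value \(\ell _{0}=1/2\) is an arbitrary stand-in for the length lower bound on X):

```python
import math

def tail_sum(ell, kmax=400):
    # Truncation of sum_{k>=2} k * exp(-k * ell / 2).
    return sum(k * math.exp(-k * ell / 2.0) for k in range(2, kmax))

def closed_form(ell):
    # sum_{k>=2} k x^k = x^2 (2 - x) / (1 - x)^2 with x = e^{-ell/2}.
    x = math.exp(-ell / 2.0)
    return x * x * (2.0 - x) / (1.0 - x) ** 2

ell0 = 0.5  # stand-in lower bound on closed-geodesic lengths in X
bound = (2.0 - math.exp(-ell0 / 2.0)) / (1.0 - math.exp(-ell0 / 2.0)) ** 2
for ell in [0.5, 1.0, 2.0, 5.0]:
    assert abs(tail_sum(ell) - closed_form(ell)) < 1e-9
    # The ratio to e^{-ell} is largest at ell = ell0 and bounded thereafter:
    assert tail_sum(ell) <= (bound + 1e-9) * math.exp(-ell)
```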

We also have

$$\begin{aligned} \int _{-\infty }^{\infty }r\widehat{\varphi _{T}}(r)\tanh (\pi r)dr&=T\int _{-\infty }^{\infty }r\widehat{\varphi _{0}}(Tr)\tanh (\pi r)dr\nonumber \\&=\frac{1}{T}\int _{-\infty }^{\infty }r'\widehat{\varphi _{0}}(r')\tanh \left( \pi \frac{r'}{T}\right) dr'\nonumber \\&\le \frac{2}{T}\int _{0}^{\infty }|r'||\widehat{\varphi _{0}}(r')|dr'\ll \frac{1}{T}. \end{aligned}$$
(2.5)

The final estimate uses that, since \(\varphi _{0}\) is compactly supported, \(\widehat{\varphi _{0}}\) is a Schwartz function and decays faster than any inverse of a polynomial. Combining (2.1), (2.2), (2.4) and (2.5) gives

$$\begin{aligned} \sum _{j=1}^{\infty }\widehat{\varphi _{T}}(r_{i_{j}}(X_{\phi }))&=O\left( \frac{(n-1)\cdot \mathrm {area}(X)}{4\pi }\cdot \frac{1}{T}\right) \nonumber \\&\quad +\sum _{\gamma \in {\mathcal {P}}(X)}\frac{\left( \mathsf {fix}_{{\tilde{\gamma }}} (\phi )-1\right) \ell (\gamma )}{2\sinh \left( \frac{\ell (\gamma )}{2}\right) } \varphi _{T}\left( \ell (\gamma )\right) +O_{X}\left( T^{2}n\right) \nonumber \\&=\sum _{\gamma \in {\mathcal {P}}(X)}\frac{\left( \mathsf {fix}_{{\tilde{\gamma }}} (\phi )-1\right) \ell (\gamma )}{2\sinh \left( \frac{\ell (\gamma )}{2}\right) } \varphi _{T}\left( \ell (\gamma )\right) +O_{X}\left( T^{2}n\right) , \end{aligned}$$
(2.6)

where in the last equality we used \(T>1\).

We are now in a position to use Theorem 1.11. The contributions to the sum above come from \(\gamma \) with \(\ell (\gamma )\le T\). By Lemma 2.3, this entails \(\ell _{w}({\tilde{\gamma }})=\ell _{w}\left( \gamma \right) \le K_{1}T+K_{2}\le c\log n\) for some \(c=c(X)>0\) and n sufficiently large. Moreover, if \(\gamma \in {\mathcal {P}}(X)\), then \({\tilde{\gamma }}\) is not a proper power in \(\Gamma _{g}\). Thus for each \(\gamma \) appearing in (2.6), Theorem 1.11 applies to give

$$\begin{aligned} {\mathbb {E}}_{g,n}\left[ \mathsf {fix}_{{\tilde{\gamma }}}(\phi )-1\right] \ll _{X}\frac{\left( \log n\right) ^{A}}{n} \end{aligned}$$

where \(A=A(g)>0\) and the implied constant depends on X. Now using that \({\widehat{\varphi _{T}}}\) is non-negative on \({\mathbf {R}}\cup i{\mathbf {R}}\), we take expectations of (2.6) with respect to the uniform measure on \({\mathbb {X}}_{g,n}\) to obtain

$$\begin{aligned}&{\mathbb {E}}_{g,n}\left[ \widehat{\varphi _{T}}\left( r_{i_{1}}(X_{\phi })\right) \right] \nonumber \\&\,\,\,\quad \le \sum _{\gamma \in {\mathcal {P}}(X)}\frac{{\mathbb {E}}_{g,n}\left[ \mathsf {fix}_{{\tilde{\gamma }}}(\phi )-1\right] \ell (\gamma )}{2\sinh \left( \frac{\ell (\gamma )}{2}\right) }\varphi _{T}(\ell (\gamma ))+O_{X}\left( T^{2}n\right) \nonumber \\&{\mathop {\ll _{X}}\limits ^{\text {Theorem}~1.11}} \frac{\left( \log n\right) ^{A}}{n}\sum _{\gamma \in {\mathcal {P}}(X)}\frac{\ell (\gamma )}{2\sinh \left( \frac{\ell (\gamma )}{2}\right) }\varphi _{T}(\ell (\gamma ))+T^{2}n\nonumber \\&\,\,\quad \ll _{X} \frac{\left( \log n\right) ^{A}}{n}\sum _{\gamma \in {\mathcal {P}}(X)\,:\,\ell (\gamma )\le T}\ell (\gamma )e^{-\ell (\gamma )/2}+T^{2}n\nonumber \\&\,\,\quad \le \frac{\left( \log n\right) ^{A}}{n}\sum _{m=0}^{\left\lceil T-1\right\rceil }\sum _{\gamma \in {\mathcal {L}}(X)\,:\,m\le \ell (\gamma )<m+1}(m+1)e^{-m/2}+T^{2}n\nonumber \\&\,\, {\mathop {\ll _{X}}\limits ^{\text {Lemma}~2.2}} \frac{\left( \log n\right) ^{A}}{n}\sum _{m=0}^{\left\lceil T-1\right\rceil }(m+1)e^{-m/2}e^{m+1}+T^{2}n\nonumber \\&\,\,\quad \ll \frac{\left( \log n\right) ^{A}}{n}Te^{T/2}+T^{2}n\nonumber \\&\,\, {\mathop {\ll _{\varepsilon }}\limits ^{T=4\log n}} n^{1+\varepsilon /3}, \end{aligned}$$
(2.7)

where \(\varepsilon \) is the parameter given in Theorem 1.5. The third inequality above used that on a compact hyperbolic surface, the lengths of closed geodesics are bounded below away from zero (by the Collar Lemma [Bus10, Thm. 4.1.1]), together with the fact that \(\varphi _{T}\) is supported in \([-T,T]\). So \({\mathbb {E}}_{g,n}\left[ \widehat{\varphi _{T}}\left( r_{i_{1}}(X_{\phi })\right) \right] \le n^{1+\varepsilon /2}\) for large enough n, and for these values of n, by Markov’s inequality

$$\begin{aligned} {\mathbb {P}}\left[ \widehat{\varphi _{T}}(r_{i_{1}}(X_{\phi }))>n^{1+\varepsilon }\right] \le n^{-\varepsilon /2}. \end{aligned}$$
(2.8)

Lemma 2.4 implies that if \(\lambda _{i_{1}}(X_{\phi })\le \frac{3}{16}-\varepsilon \), so that \(r_{i_{1}}(X_{\phi })=it_{\phi }\) with \(t_{\phi }\in {\mathbf {R}}\) and \(t_{\phi }\ge \sqrt{\frac{1}{16}+\varepsilon }\ge \frac{1}{4}+\varepsilon \) for \(\varepsilon \) sufficiently small, then

$$\begin{aligned} \widehat{\varphi _{T}}(r_{i_{1}}(X_{\phi }))\ge C_{\varepsilon }Te^{T(1-\varepsilon )t_{\phi }}\ge C_{\varepsilon }n^{4(1-\varepsilon )(1/4+\varepsilon )}\ge C_{\varepsilon }n^{1+2\varepsilon }>n^{1+\varepsilon }, \end{aligned}$$
(2.9)

by decreasing \(\varepsilon \) if necessary, and then assuming n is sufficiently large. Combining (2.8) and (2.9) gives

$$\begin{aligned} {\mathbb {P}}\left[ X_{\phi }\text { has a new eigenvalue }\le \frac{3}{16}-\varepsilon \right] \le {\mathbb {P}}\left[ \widehat{\varphi _{T}}(r_{i_{1}}(X_{\phi }))>n^{1+\varepsilon }\right] \le n^{-\varepsilon /2} \end{aligned}$$

completing the proof of Theorem 1.5, under the assumption of Theorem 1.11. \(\square \)

2.4 Proof of Theorem 1.8.

We continue using the same notation as in the previous section, including the choice of \(T=4\log n\). We let

$$\begin{aligned} 0\le \lambda _{i_{1}}(X_{\phi })\le \lambda _{i_{2}}(X_{\phi })\le \cdots \le \lambda _{i_{k(\phi )}}\le \frac{1}{4} \end{aligned}$$

denote the collection of new eigenvalues of \(X_{\phi }\) of size at most \(\frac{1}{4}\), with multiplicities included. For each such eigenvalue we write \(\lambda _{i_{j}}=s_{i_{j}}(1-s_{i_{j}})\) with \(s_{i_{j}}\in \left[ \frac{1}{2},1\right] \), so that \(r_{i_{j}}=i(s_{i_{j}}-\frac{1}{2})\).

Again taking expectations of (2.6) with respect to the uniform measure on \({\mathbb {X}}_{g,n}\), but this time, keeping more terms, gives

$$\begin{aligned} {\mathbb {E}}_{g,n}\left[ \sum _{j=1}^{k(\phi )}\widehat{\varphi _{T}}\left( r_{i_{j}}(X_{\phi })\right) \right]&\le \sum _{\gamma \in {\mathcal {P}}(X)}\frac{{\mathbb {E}}_{g,n}\left[ \mathsf {fix}_{{\tilde{\gamma }}}(\phi )-1\right] \ell (\gamma )}{2\sinh \left( \frac{\ell (\gamma )}{2}\right) }\varphi _{T}(\ell (\gamma ))+O_{X}\left( T^{2}n\right) \\&\ll _{X,\varepsilon }n^{1+\varepsilon /3} \end{aligned}$$

by (2.7). On the other hand, Lemma 2.4 implies that for \(\varepsilon \in (0,1)\)

$$\begin{aligned} \sum _{j=1}^{k(\phi )}\widehat{\varphi _{T}}\left( r_{i_{j}}\left( X_{\phi }\right) \right) \gg _{\varepsilon }\sum _{j=1}^{k(\phi )}Te^{T(1-\varepsilon )\left( s_{i_{j}}\left( X_{\phi }\right) -\frac{1}{2}\right) }\gg \sum _{j=1}^{k(\phi )}n^{4(1-\varepsilon )\left( s_{i_{j}}\left( X_{\phi }\right) -\frac{1}{2}\right) }. \end{aligned}$$

Therefore

$$\begin{aligned} {\mathbb {E}}_{g,n}\left[ \sum _{j=1}^{k(\phi )}n^{4(1-\varepsilon )\left( s_{i_{j}}\left( X_{\phi }\right) -\frac{1}{2}\right) }\right] \le n^{1+\varepsilon /2} \end{aligned}$$

for n sufficiently large. Markov’s inequality therefore gives

$$\begin{aligned} {\mathbb {P}}\left[ \sum _{j=1}^{k(\phi )}n^{4(1-\varepsilon )\left( s_{i_{j}}\left( X_{\phi }\right) -\frac{1}{2}\right) }\ge n^{1+\varepsilon }\right] \le n^{-\varepsilon /2} \end{aligned}$$

so a.a.s. \(\sum _{j=1}^{k(\phi )}n^{4(1-\varepsilon )\left( s_{i_{j}}\left( X_{\phi }\right) -\frac{1}{2}\right) }<n^{1+\varepsilon }\). This gives that for any \(\sigma \in \left( \frac{1}{2},1\right) \), a.a.s.

$$\begin{aligned} \#\left\{ 1\le j\le k(\phi )\,:\,s_{i_{j}}>\sigma \right\} \le n^{1+\varepsilon -4(1-\varepsilon )(\sigma -\frac{1}{2})}\le n^{3-4\sigma +3\varepsilon }. \end{aligned}$$

This finishes the proof of Theorem 1.8 assuming Theorem 1.11. \(\square \)

3 Tiled Surfaces

3.1 Tiled surfaces.

Here we assume \(g=2\), and let \(\Gamma {\mathop {=}\limits ^{\mathrm {def}}}\Gamma _{2}\). We write \({\mathbb {X}}_{n}{\mathop {=}\limits ^{\mathrm {def}}}{\mathbb {X}}_{2,n}\) throughout the rest of the paper. Consider the construction of the surface \(\Sigma _{2}\) from an octagon by identifying its edges in pairs according to the pattern \(aba^{-1}b^{-1}cdc^{-1}d^{-1}\). This gives rise to a CW-structure on \(\Sigma _{2}\) consisting of one vertex (denoted o), four oriented 1-cells (labeled by a, b, c, d) and one 2-cell, namely the octagon glued along eight 1-cells. See Figure 1. We identify \(\Gamma _{2}\) with \(\pi _{1}\left( \Sigma _{2},o\right) \), so that in the presentation (1.1), words in the generators a, b, c, d correspond to the homotopy classes of the corresponding closed paths based at o along the 1-skeleton of \(\Sigma _{2}\).

Fig. 1

The CW-structure we give to the surface \(\Sigma _{2}\) with fundamental group \(\Gamma =\Gamma _{2}=\left\langle a,b,c,d\,|\,\left[ a,b\right] \left[ c,d\right] \right\rangle \): it consists of a single vertex (0-cell), four edges (1-cells) and one octagon (a 2-cell).

Note that every covering space \(p:\Upsilon \rightarrow \Sigma _{2}\) inherits a CW-structure from \(\Sigma _{2}\): the vertices are the pre-images of o, and the open 1-cells (2-cells) are the connected components of the pre-images of the open 1-cells (2-cells, respectively) in \(\Sigma _{2}\). In particular, this is true for the universal covering space \(\widetilde{\Sigma _{2}}\) of \(\Sigma _{2}\), which we can now think of as a CW-complex. A sub-complex of a CW-complex is a subspace consisting of cells such that if a cell belongs to the sub-complex, then so do the cells of smaller dimension on its boundary.

Definition 3.1

(Tiled surface) [MP21, Def. 3.1]. A tiled surface Y is a sub-complex of a (not-necessarily-connected) covering space of \(\Sigma _{2}\). In particular, a tiled surface is equipped with the restricted covering map \(p:Y\rightarrow \Sigma _{2}\) which is an immersion. We denote by \(Y^{\left( 0\right) }\) the set of vertices and by \(Y^{\left( 1\right) }\) the 1-skeleton of Y. If Y is compact, we write \({\mathfrak {v}}\left( Y\right) \) for the number of vertices of Y, \({\mathfrak {e}}\left( Y\right) \) for the number of edges and \({\mathfrak {f}}\left( Y\right) \) for the number of octagons.

Alternatively, instead of considering a tiled surface Y to be a complex equipped with a restricted covering map, one may consider Y to be a complex as above with directed and labeled edges: the directions and labels (abcd) are pulled back from \(\Sigma _{2}\) via p. These labels uniquely determine p as a combinatorial map between complexes.

Note that a tiled surface is not always a surface: for example, it may also contain vertices or edges with no 2-cells incident to them. However, as Y is a sub-complex of a covering space of \(\Sigma _{2}\), namely, of a surface, a small neighborhood of Y inside the cover is a surface, and it is sometimes beneficial to think of Y as such.

Definition 3.2

(Thick version of a tiled surface) [MP21, Def. 3.2]. Given a tiled surface Y which is a subcomplex of the covering space \(\Upsilon \) of \(\Sigma _{2}\), consider a small, closed, regular neighborhood of Y in \(\Upsilon \). This neighborhood is a surface, possibly with boundary, which is referred to as the thick version of Y.

We let \(\partial Y\) denote the boundary of the thick version of Y and \({\mathfrak {d}}\left( Y\right) \) denote the number of edges along \(\partial Y\) (so if an edge of Y does not border any octagon, it is counted twice).

We stress that we do not think of Y as a sub-complex, but rather as a complex for its own sake, which happens to have the capacity to be realized as a subcomplex of a covering space of \(\Sigma _{2}\). In particular, if Y is compact, it is a combinatorial object given by a finite amount of data. See [MP21, Section 3] for a more detailed discussion.

Definition 3.3

(Morphisms of tiled surfaces). Let \(p_{i}:Y_{i}\rightarrow \Sigma _{2}\) be tiled surfaces for \(i=1,2\). A map \(f:Y_{1}\rightarrow Y_{2}\) is a morphism of tiled surfaces if it is a combinatorial map of CW-complexes that commutes with the restricted covering maps.

In other words, a morphism of tiled surfaces is a combinatorial map of CW-complexes sending i-cells to i-cells and which respects the directions and labels of edges.

Example 3.4

The fibered product construction gives a one-to-one correspondence between \(\mathrm {Hom}(\Gamma ,S_{n})\) and topological degree-n covers of \(\Sigma _{2}\) with a labeled fiber over the basepoint o. Explicitly, for \(\phi \in \mathrm {Hom}(\Gamma ,S_{n})\), we can consider the quotient

$$\begin{aligned} X_{\phi }{\mathop {=}\limits ^{\mathrm {def}}}\Gamma \backslash \left( \widetilde{\Sigma _{2}}\times [n]\right) \end{aligned}$$

where \(\widetilde{\Sigma _{2}}\) is the universal cover of \(\Sigma _{2}\) (an open disc) and \(\Gamma \) acts on \(\widetilde{\Sigma _{2}}\times [n]\) diagonally, by the usual action of \(\Gamma \) on \(\widetilde{\Sigma _{2}}\) on the first factor, and via \(\phi \) on the second factor. The covering map \(X_{\phi }\rightarrow \Sigma _{2}\) is induced by the projection \(\widetilde{\Sigma _{2}}\times [n]\rightarrow \widetilde{\Sigma _{2}}\).

Being a covering space of \(\Sigma _{2}\), each \(X_{\phi }\) is automatically also a tiled surface. The fiber over \(o\in \Sigma _{2}\) is the vertex set of \(X_{\phi }\). We fix throughout the rest of the paper a vertex \(u\in \widetilde{\Sigma _{2}}\) lying over \(o\in \Sigma _{2}\). This identifies the fiber over o in \(X_{\phi }\) with \(\{u\}\times [n]\) and hence gives a fixed bijection between the vertices of \(X_{\phi }\) and the numbers in [n]. The map \(\phi \mapsto X_{\phi }\) is the desired one-to-one correspondence between \(\mathrm {Hom}(\Gamma ,S_{n})\) and topological degree-n covers of \(\Sigma _{2}\) with the fiber over o labeled bijectively by \(\left[ n\right] \).
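As a concrete illustration of Example 3.4 (this sketch is not from the paper; permutations are represented as dicts), an element of \(\mathrm {Hom}(\Gamma ,S_{n})\) is just a 4-tuple of permutations satisfying the surface relation \(\left[ a,b\right] \left[ c,d\right] =1\); the choice below forces the relation to hold:

```python
# A hypothetical element of Hom(Gamma, S_n) as four permutations of
# [n] = {0, ..., n-1} satisfying the surface relation [a,b][c,d] = id.
def compose(p, q):
    return {i: p[q[i]] for i in q}        # (p∘q)(i) = p(q(i))

def inverse(p):
    return {v: k for k, v in p.items()}

def commutator(p, q):
    return compose(compose(p, q), compose(inverse(p), inverse(q)))

n = 5
alpha = {i: (i + 1) % n for i in range(n)}     # a 5-cycle
beta = {0: 1, 1: 0, 2: 2, 3: 3, 4: 4}          # a transposition
gamma, delta = beta, alpha                     # then [gamma,delta] = [alpha,beta]^{-1}
rel = compose(commutator(alpha, beta), commutator(gamma, delta))
assert rel == {i: i for i in range(n)}         # the relation [a,b][c,d] = 1 holds
```

The vertices of the corresponding cover \(X_{\phi }\) are then labeled by [n], and the edges are determined by the four permutations.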

Example 3.5

For any \(1\ne \gamma \in \Gamma \), pick a word \({\tilde{\gamma }}\) of minimal length in the letters abcd and their inverses that represents an element in the conjugacy class of \(\gamma \) in \(\Gamma \). In particular, \({\tilde{\gamma }}\) is cyclically reduced. Now take a circle and divide it into \(\{a,b,c,d\}\)-labeled and directed edges separated by vertices, such that following around the circle from some vertex and in some orientation, and reading off the labels and directions, spells out \({\tilde{\gamma }}\). Call the resulting complex \({\mathcal {C}}_{\gamma }\). That \({\mathcal {C}}_{\gamma }\) is a tiled surface follows from [MP21] (in particular, it is embedded in the core surface \(\mathrm {Core}\left( \left\langle \gamma \right\rangle \right) \) which is itself a tiled surface, by [MP21, Thm. 5.10]). Note that generally \({\mathcal {C}}_{\gamma }\) is not uniquely determined by \(\gamma \) (e.g., [MP21, Figure 1.2 and Section 4, Section 5]), and we choose one of the options arbitrarily. We have \({\mathfrak {v}}({\mathcal {C}}_{\gamma })={\mathfrak {e}}\left( {\mathcal {C}}_{\gamma }\right) =\ell _{w}(\gamma )\) and \({\mathfrak {f}}\left( {\mathcal {C}}_{\gamma }\right) =0\).

If Y is a compact tiled surface, there are some simple relations between the quantities \({\mathfrak {v}}(Y)\), \({\mathfrak {e}}(Y)\), \({\mathfrak {f}}(Y)\), \({\mathfrak {d}}(Y)\), and \(\chi (Y),\) the topological Euler characteristic of Y. We note the following relations, which are straightforward or standard. For example, \({\mathfrak {e}}\left( Y\right) \le 4{\mathfrak {v}}\left( Y\right) \) as each vertex is incident to at most 8 half-edges.Footnote 7

$$\begin{aligned} {\mathfrak {d}}\left( Y\right)= & {} 2{\mathfrak {e}}\left( Y\right) -8{\mathfrak {f}}\left( Y\right) . \end{aligned}$$
(3.1)
$$\begin{aligned} 4{\mathfrak {f}}\left( Y\right)\le & {} {\mathfrak {e}}\left( Y\right) ~\le ~4{\mathfrak {v}}\left( Y\right) . \end{aligned}$$
(3.2)

The following lemma will be useful later.

Lemma 3.6

Let Y be a compact tiled surface without isolated vertices. Then

$$\begin{aligned} {\mathfrak {v}}\left( Y\right) \le {\mathfrak {f}}\left( Y\right) +{\mathfrak {d}}(Y). \end{aligned}$$

Proof

Let \({\mathfrak {i}}\) denote the number of internal vertices of Y, namely, vertices adjacent to 8 octagons, and let \({\mathfrak {p}}\) denote the number of the remaining, peripheral vertices. As there are no isolated vertices, \({\mathfrak {p}}\le {\mathfrak {d}}\left( Y\right) \) (when going through the boundary cycles, one edge at a time, one passes at every step exactly one peripheral vertex, and each peripheral vertex is traversed at least once, although possibly more than once). We have

$$\begin{aligned} 8{\mathfrak {f}}(Y)&=\sum _{O\text { an octagon of }Y}\#\{\text {corners of }O\}\\&=\sum _{v\text { a vertex of }Y}\#\{\text {corners of octagons at }v\}\\&\ge 8{\mathfrak {i}}=8{\mathfrak {v}}(Y)-8{\mathfrak {p}}\ge 8{\mathfrak {v}}\left( Y\right) -8{\mathfrak {d}}\left( Y\right) . \end{aligned}$$

\(\square \)

The Euler characteristic \(\chi (Y)\) is also controlled by \({\mathfrak {f}}(Y)\) and \({\mathfrak {d}}(Y)\).

Lemma 3.7

Let Y be a compact tiled surface without isolated vertices. ThenFootnote 8

$$\begin{aligned} \chi (Y)\le \frac{{\mathfrak {d}}(Y)}{2}-2{\mathfrak {f}}(Y). \end{aligned}$$

Proof

We have

$$\begin{aligned} \chi \left( Y\right)= & {} {\mathfrak {v}}\left( Y\right) -{\mathfrak {e}}\left( Y\right) +{\mathfrak {f}}\left( Y\right) {\mathop {=}\limits ^{(3.1)}}{\mathfrak {v}}\left( Y\right) -3{\mathfrak {f}}\left( Y\right) -\frac{{\mathfrak {d}}\left( Y\right) }{2} {\mathop {\le }\limits ^{\text {Lemma}~3.6}}\frac{{\mathfrak {d}}\left( Y\right) }{2} -2{\mathfrak {f}}\left( Y\right) . \end{aligned}$$

\(\square \)
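As a quick sanity check (an illustration only, not part of the formal development), the one-octagon tiled surface of Figure 2, with 11 vertices, 11 edges and one octagon, satisfies relations (3.1)–(3.2) and Lemmas 3.6–3.7:

```python
# Counts for the one-octagon tiled surface of Figure 2.
v, e, f = 11, 11, 1

d = 2 * e - 8 * f            # boundary length, relation (3.1)
chi = v - e + f              # topological Euler characteristic

assert d == 14               # matches the boundary word of length 14 below
assert 4 * f <= e <= 4 * v   # relation (3.2)
assert v <= f + d            # Lemma 3.6 (there are no isolated vertices)
assert chi <= d / 2 - 2 * f  # Lemma 3.7
print(d, chi)                # 14 1
```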

3.2 Blocks and chains.

Here we introduce language that was used in [MP21, MP20], based on terminology of Birman and Series from [BS87a]. Let Y denote a tiled surface throughout this Section 3.2. When we refer to directed edges of Y, they are not necessarily directed according to the definition of Y.

First of all, we augment Y by adding half-edges, which should be thought of as copies of \([0,\frac{1}{2})\). Of course, every edge of \(Y^{(1)}\) is thought of as containing two half-edges, each of which inherits a label in \(\{a,b,c,d\}\) and a direction from its ambient edge. We add to Y \(\{a,b,c,d\}\)-labeled and directed half-edges to form \(Y_{+}\) so that every vertex of \(Y_{+}\) has exactly 8 emanating half-edges, with labels and directions given by ‘a-outgoing, b-incoming, a-incoming, b-outgoing, c-outgoing, d-incoming, c-incoming, d-outgoing’. The cyclic order we have written here induces a fixed cyclic ordering on the half-edges at each vertex of \(Y_{+}\). If a half-edge of \(Y_{+}\) does not belong to an edge of Y (hence was added to \(Y_{+}\)), we call it a hanging half-edge. We may think of \(Y_{+}\) as a surface too, by considering the thick version of Y and attaching a thin rectangle for every hanging half-edge. We call the resulting surface the thick version of \(\varvec{Y_{+}}\), and mark its boundary by \(\varvec{\partial Y_{+}}\). See Figure 2 for the cyclic ordering of half-edges around every vertex and Figure 4 for a piece of \(\partial Y_{+}\).

Fig. 2
figure 2

The right figure shows a vertex with 8 half-edges around it, ordered (clockwise) according to the fixed cyclic order induced from the CW-structure on \(\Sigma _{2}\). On the left is a tiled surface with 11 vertices, 11 edges and one octagon. The orientation on the octagon is counter-clockwise, while around any vertex it is clockwise. The pink stripes describe blocks: a half-block spelling \(c^{-1}d^{-1}ab\) and a block of length 3 spelling \(d^{-1}ab\). The latter can be extended at both ends.

For two directed edges \(\mathbf {e_{1}}\) and \(\mathbf {e}_{2}\) of Y with the terminal vertex v of \(\mathbf {e}_{1}\text { equal to the source of }\mathbf {e}_{2}\), the half-edges between \(\mathbf {e}_{1}\) and \(\mathbf {e}_{2}\) are by definition the half-edges of \(Y_{+}\) at v that are strictly between \(\mathbf {e}_{1}\text { and }\mathbf {e}_{2}\) in the given cyclic ordering. There are m of these where \(0\le m\le 7\).

A path in Y is a sequence \({{{\mathcal {P}}}}{=}(\mathbf {e_{1}},\ldots ,\mathbf {e}_{k})\) of directed edges in \(Y^{(1)}\), such that for each \(1\le i\le k-1\) the terminal vertex of \(\mathbf {e}_{i}\) is the initial vertex of \(\mathbf {e}_{i+1}\). A cycle in Y is a cyclic sequence \({{\mathcal {C}}=}(\mathbf {e_{1}},\ldots ,\mathbf {e}_{k})\) which is a path with the terminal vertex of \(\mathbf {e}_{k}\) identical to the initial vertex of \(\mathbf {e}_{1}\). A boundary cycle of Y is a cycle corresponding to a boundary component of the thick version of Y. A boundary cycle is always oriented so that if Y is embedded in the full cover Z, the boundary reads successive segments of the boundaries of the neighboring octagons (in \(Z-Y\)) with the orientation of each octagon coming from \(\left[ a,b\right] \left[ c,d\right] \) (and not from the inverse word). For example, the unique boundary cycle of the tiled surface in the left side of Figure 2, starting at the rightmost vertex, spells the cyclic word \(c^{-1}d^{-1}abab^{-1}a^{-1}dcd^{-1}c^{-1}a^{-1}dc\).

If \({{{\mathcal {P}}}}\) is a path in \(Y^{\left( 1\right) }\), a block in \({{{\mathcal {P}}}}\) is a non-empty (possibly cyclic) subsequence of successive edges, each successive pair of edges having no half-edges between them (this means that a block reads necessarily a subword of the cyclic word \(\left[ a,b\right] \left[ c,d\right] \)). A half-block is a block of length 4 (in general, 2g) and a long block is a block of length at least 5 (in general, \(2g+1\)). See Figure 2.

Two blocks \((\mathbf {e}_{i},\ldots ,\mathbf {e}_{j})\) and \((\mathbf {e}_{k},\ldots ,\mathbf {e}_{\ell })\) in a path \({{{\mathcal {P}}}}\) are called consecutive if \((\mathbf {e}_{i},\ldots ,\mathbf {e}_{j},\mathbf {e}_{k},\ldots ,\mathbf {e}_{\ell })\) is a (possibly cyclic) subsequence of \({{{\mathcal {P}}}}\) and there is precisely one half-edge between \(\mathbf {e}_{j}\) and \(\mathbf {e}_{k}\). A chain is a (possibly cyclic) sequence of consecutive blocks. Note that in a chain, an f-edge with some \(f\in \left\{ a^{\pm 1},\ldots ,d^{\pm 1}\right\} \) is followed by an edge labeled by the letter \(f'\) that follows f in the cyclic word \(\left[ a,b\right] \left[ c,d\right] \), or by the letter that follows the inverse of \(f'\). For example, a \(b^{-1}\)-edge is always followed in a chain by either a c-edge or a \(d^{-1}\)-edge. A cyclic chain is a chain whose blocks pave an entire cycle (with exactly one half-edge between the last block and the first block). A long chain is a chain consisting of consecutive blocks of lengths

$$\begin{aligned} 4,3,3,\ldots ,3,4 \end{aligned}$$

(in general, \(2g,2g-1,2g-1,\ldots ,2g-1,2g\)). See Figure 3. A half-chain is a cyclic chain consisting of consecutive blocks of length 3 (in general, \(2g-1\)) each.
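The block and chain conditions are purely combinatorial, so they can be checked mechanically. The following Python sketch (an illustration only; uppercase letters stand for inverses, so `A` means \(a^{-1}\)) encodes the successor structure of the cyclic word \(\left[ a,b\right] \left[ c,d\right] \) and tests the two examples of Figure 2:

```python
# [a,b][c,d] = a b a^{-1} b^{-1} c d c^{-1} d^{-1} as a cyclic word.
W = ["a", "b", "A", "B", "c", "d", "C", "D"]

def succ(x):
    return W[(W.index(x) + 1) % 8]

def inv(x):
    return x.swapcase()

def is_block(word):
    """True iff consecutive letters follow one another in the cyclic word."""
    return all(succ(x) == y for x, y in zip(word, word[1:]))

def chain_next(f):
    """Possible labels of the edge after an f-edge in a chain."""
    g = succ(f)                 # the letter f' that follows f
    return {g, succ(inv(g))}    # ... or the letter following the inverse of f'

# The two pink stripes of Figure 2:
assert is_block(["C", "D", "a", "b"])   # the half-block c^{-1}d^{-1}ab
assert is_block(["D", "a", "b"])        # the block of length 3 spelling d^{-1}ab
# "a b^{-1}-edge is always followed in a chain by a c-edge or a d^{-1}-edge":
assert chain_next("B") == {"c", "D"}
```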

Fig. 3
figure 3

A long chain (the pink stripe) consisting of five consecutive blocks of lengths 4, 3, 3, 3, 4.

3.3 Boundary reduced and strongly boundary reduced tiled surfaces.

We recall the following definitions from [MP21, Def. 4.1, 4.2].

Definition 3.8

(Boundary reduced). A tiled surface Y is boundary reduced if no boundary cycle of Y contains a long block or a long chain.

Definition 3.9

(Strongly boundary reduced). A tiled surface Y is strongly boundary reduced if no boundary cycle of Y contains a half-block or is a half-chain.

Given a tiled surface Y embedded in a boundary reduced tiled surface Z, the \(\mathsf {BR}\)-closure of Y in Z, denoted \(\mathsf {BR}\left( Y\hookrightarrow Z\right) \) and introduced in [MP21, Def. 4.4], is defined as the intersection of all boundary reduced sub-tiled surfaces of Z containing Y. We compile some properties of the \(\mathsf {BR}\)-closure into the following proposition.

Proposition 3.10

Let \(Y\hookrightarrow Z\) be an embedding of a compact tiled surface Y into a boundary reduced tiled surface Z, and denote \(Y'{\mathop {=}\limits ^{\mathrm {def}}}\mathsf {BR}\left( Y\hookrightarrow Z\right) \).

  1.

    [MP21, Prop. 4.5] \(Y'\) is boundary reduced.

  2.

    [MP21, proof of Prop. 4.6] \(Y'\) is compact, and \({\mathfrak {d}}\left( Y'\right) \le {\mathfrak {d}}\left( Y\right) \), with equality if and only if \(Y'=Y\).

  3.

    [MP21, proof of Prop. 4.6] \(Y'\) can be obtained from Y by initializing \(Y'=Y\) and then repeatedly either \(\left( i\right) \) annexing an octagon of \(Z{\setminus } Y'\) which borders a long block along \(\partial Y'\), or \(\left( ii\right) \) annexing the octagons of \(Z{\setminus } Y'\) bordering some long chain along \(\partial Y'\), until \(Y'\) is boundary reduced.

  4.

    We haveFootnote 9

    $$\begin{aligned} {\mathfrak {f}}(Y')\le {\mathfrak {f}}(Y)+\frac{{\mathfrak {d}}(Y)^{2}}{6}. \end{aligned}$$

Proof of item 4

Assume that \(Y'\) is obtained from Y by the procedure described in item 3. In each such step, \({\mathfrak {d}}\left( Y'\right) \) decreases by at least two, so there are at most \(\frac{{\mathfrak {d}}\left( Y\right) }{2}\) steps where octagons are added. We will be done by showing that at each step at most \(\frac{{\mathfrak {d}}\left( Y\right) }{3}\) octagons are added. Indeed, in option \(\left( i\right) \) exactly one octagon is added (and \(1\le \frac{{\mathfrak {d}}\left( Y\right) }{3}\), for otherwise Y is boundary reduced). In option \(\left( ii\right) \), if the long chain consists of \(\ell \) blocks, it is of length \(3\ell +2\le {\mathfrak {d}}\left( Y'\right) \), and at most \(\ell \le \frac{{\mathfrak {d}}\left( Y'\right) -2}{3}<\frac{{\mathfrak {d}}\left( Y\right) }{3}\) new octagons are added. \(\square \)

3.4 Pieces and \(\varepsilon \)-adapted tiled surfaces.

For the proof of Theorem 1.11 we will need to quantify (strongly) boundary reduced tiled surfaces. This is captured by the notion of \(\varepsilon \)-adapted tiled surface we introduce in this Section 3.4. The following concepts of a piece and its defect play a crucial role here.

Definition 3.11

(Piece, defect). A piece P of \(\partial Y_{+}\) is a (possibly cyclic) path along \(\partial Y_{+}\), consisting of whole directed edges and/or whole hanging half-edges. We write \({\mathfrak {e}}(P)\) for the number of full directed edges in P, \(\mathfrak {he}(P)\) for the number of hanging half-edges in P, and \(|P|{\mathop {=}\limits ^{\mathrm {def}}}{\mathfrak {e}}(P)+\mathfrak {he}(P)\). We let

$$\begin{aligned} \mathrm {Defect}(P){\mathop {=}\limits ^{\mathrm {def}}}{\mathfrak {e}}(P)-3\mathrm {\mathfrak {he}}(P). \end{aligned}$$

(In general, \(\mathrm {Defect}(P){\mathop {=}\limits ^{\mathrm {def}}}{\mathfrak {e}}(P)-\left( 2g-1\right) \mathrm {\mathfrak {he}}(P).\)) See Figure 4 for an illustration of a piece.

Fig. 4
figure 4

A piece P of \(\partial Y_{+}\) is shown as a solid black line. The broken black line marks parts of \(\partial Y_{+}\) adjacent to but not part of P, and the yellow stripe marks the internal side of Y. This piece consists of 9 full directed edges and 9 hanging half-edges, so \(\mathrm {Defect}\left( P\right) =-18\).

Definition 3.12

(\(\varepsilon \)-adapted). Let \(\varepsilon \ge 0\) and let Y be a tiled surface. A piece P of \(\partial Y_{+}\) is \(\varepsilon \)-adapted if it satisfiesFootnote 10

$$\begin{aligned} \mathrm {Defect}(P)\le 4\chi (P)-\varepsilon |P|. \end{aligned}$$
(3.3)

We have \(\chi (P)=0\) if P is a whole boundary component and \(\chi (P)=1\) otherwise. We say that a piece P is \(\varepsilon \)-bad if (3.3) does not hold, i.e., if \(\mathrm {Defect}(P)>4\chi (P)-\varepsilon |P|\). We say that Y is \(\varepsilon \)-adapted if every piece of Y is \(\varepsilon \)-adapted.
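Since \(\mathrm {Defect}\) and condition (3.3) depend only on the counts \({\mathfrak {e}}(P)\), \(\mathfrak {he}(P)\) and \(\chi (P)\in \{0,1\}\), they are easy to evaluate mechanically. The following sketch (an illustration only) does so for the piece of Figure 4:

```python
# Defect and the epsilon-adapted condition (3.3), genus-2 case (2g - 1 = 3).
def defect(e, he):
    return e - 3 * he

def is_adapted(e, he, chi, eps):
    # Condition (3.3): Defect(P) <= 4*chi(P) - eps*|P|, where |P| = e + he.
    return defect(e, he) <= 4 * chi - eps * (e + he)

# The piece of Figure 4: 9 full directed edges, 9 hanging half-edges, chi = 1.
assert defect(9, 9) == -18
assert is_adapted(9, 9, 1, 1 / 32)
```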

The following lemma shows that this notion indeed quantifies the notion of strongly boundary reduced tiled surfaces.

Lemma 3.13

Let Y be a tiled surface.

  1.

    Y is boundary reduced if and only if it is 0-adapted.

  2.

    Y is strongly boundary reduced if and only if every piece of \(\partial Y\) is \(\varepsilon \)-adapted for some \(\varepsilon >0\). If Y is compact, this is equivalent to Y being \(\varepsilon \)-adapted for some \(\varepsilon >0\).

Proof

A block at \(\partial Y\) is a piece P with \(\mathfrak {he}\left( P\right) =0\). Assume that Y is 0-adapted. If P is a block at \(\partial Y\), then \({\mathfrak {e}}\left( P\right) =\mathrm {Defect}\left( P\right) \le 4\chi \left( P\right) \le 4\), so P cannot be a long block. If P is a long chain at \(\partial Y\) consisting of k blocks (\(k-2\) of length 3 and two of length 4) and the \(k-1\) hanging half-edges between them, then \(\mathrm {Defect}\left( P\right) =\left( 3k+2\right) -3\left( k-1\right) =5>4=4\chi \left( P\right) \), which is a contradiction. Similarly, if P is a half-block or a half-chain, then \(\mathrm {Defect}\left( P\right) =4\chi \left( P\right) \), and so P is \(\varepsilon \)-bad for any \(\varepsilon >0\). The converse implications are not hard and can be found in [MP20, proof of Lem. 5.18]. \(\square \)
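The defect computations in the proof above can be verified mechanically; the following sketch (an illustration only) checks them for long blocks, long chains, half-blocks and half-chains:

```python
# Defect(P) = e(P) - 3*he(P) in the genus-2 case.
def defect(e, he):
    return e - 3 * he

# A long block: at least 5 edges, no hanging half-edges, chi(P) = 1.
assert defect(5, 0) > 4                   # violates (3.3) with eps = 0
# A long chain of k >= 2 blocks: 3k + 2 edges and k - 1 hanging half-edges.
for k in range(2, 30):
    assert defect(3 * k + 2, k - 1) == 5  # > 4 = 4*chi(P)
# A half-block: 4 edges, chi = 1; a half-chain of k blocks: 3k edges,
# k hanging half-edges, chi = 0.  Both satisfy Defect(P) = 4*chi(P) exactly.
assert defect(4, 0) == 4
for k in range(1, 30):
    assert defect(3 * k, k) == 0
print("ok")
```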

We need the following lemma in the analysis of the next subsection.

Lemma 3.14

If \(0\le \varepsilon <3\) and P is an \(\varepsilon \)-bad piece of a compact tiled surface Y, thenFootnote 11

$$\begin{aligned} \left| P\right| <\frac{4{\mathfrak {d}}\left( Y\right) }{3-\varepsilon }. \end{aligned}$$
(3.4)

Proof

If P is \(\varepsilon \)-bad, then by definition \({\mathfrak {e}}\left( P\right) -3\cdot \mathfrak {he}\left( P\right) >4\chi \left( P\right) -\varepsilon \left| P\right| \). So

$$\begin{aligned} \left( 3-\varepsilon \right) \left| P\right| <3\left( {\mathfrak {e}}\left( P\right) +\mathfrak {he}\left( P\right) \right) +\left( {\mathfrak {e}}\left( P\right) -3\cdot \mathfrak {he}\left( P\right) -4\chi \left( P\right) \right) \le 4\cdot {\mathfrak {e}}\left( P\right) \le 4{\mathfrak {d}}\left( Y\right) . \end{aligned}$$

\(\square \)
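The inequality underlying Lemma 3.14, namely that an \(\varepsilon \)-bad piece satisfies \(\left( 3-\varepsilon \right) \left| P\right| <4{\mathfrak {e}}\left( P\right) \le 4{\mathfrak {d}}\left( Y\right) \), can also be confirmed by brute force over small counts (an illustration only, using exact rational arithmetic):

```python
# Brute-force check: whenever (e, he, chi) describes an eps-bad piece,
# the bound (3 - eps)*|P| < 4*e from the proof of Lemma 3.14 holds.
from fractions import Fraction

for eps in (Fraction(0), Fraction(1, 32), Fraction(1, 16), Fraction(2)):
    for e in range(60):
        for he in range(60):
            for chi in (0, 1):
                bad = e - 3 * he > 4 * chi - eps * (e + he)
                if bad:
                    assert (3 - eps) * (e + he) < 4 * e
print("ok")
```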

3.5 The octagons-vs-boundary algorithm.

In this Section 3.5 we describe an algorithm whose purpose is to grow a given tiled surface in such a way that either

  • the output \(Y'\) is \(\varepsilon \)-adapted for some fixed \(\varepsilon >0\), or alternatively,

  • the number of octagons of \(Y'\) is larger than the length of the boundary of \(Y'\).

If \(Y'\) is \(\varepsilon \)-adapted for a suitable \(\varepsilon \), it is very well adapted to our methods, so that we can give an estimate for \({\mathbb {E}}_{n}^{\mathrm {emb}}(Y')\) with an effective error term (e.g., Proposition 5.27). If, on the other hand, \({\mathfrak {f}}(Y')>{\mathfrak {d}}(Y')\), then by Lemma 3.7 the Euler characteristic of \(Y'\) is negative and linearly comparable to the number of octagons in \(Y'\); see Section 6.2 where this is used.

The algorithm depends on a positive constant \(\varepsilon >0\); we shall see below that fixing \(\varepsilon =\frac{1}{32}\) works fine for our needs (for arbitrary \(g\ge 2\) we shall fix \(\varepsilon =\frac{1}{16g}\)). To force the algorithm to be deterministic, we a priori make some choices:

Notation 3.15

For every compact tiled surface Y which is boundary reduced but not \(\varepsilon \)-adapted, we pick an \(\varepsilon \)-bad piece P(Y) of \(\partial Y\).

With the ambient parameter \(\varepsilon \) fixed as well as the choices of \(\varepsilon \)-bad pieces, the octagons-vs-boundary (OvB) algorithm is as follows.

[Algorithm: the octagons-vs-boundary algorithm. Initialize \(Y'=Y\). Step (a): replace \(Y'\) by \(\mathsf {BR}\left( Y'\hookrightarrow Z\right) \); if \(Y'\) is \(\varepsilon \)-adapted or \({\mathfrak {f}}(Y')>{\mathfrak {d}}(Y')\), halt and output \(Y'\). Step (b): otherwise, annex to \(Y'\) the octagons of Z bordering the chosen \(\varepsilon \)-bad piece \(P(Y')\), and return to step (a).]

Note that the output \(Y'\) of the algorithm is always boundary reduced. Of course, we would like to know when/if this algorithm terminates.

In step (a), if \(\mathsf {BR}\left( Y'\hookrightarrow Z\right) \ne Y'\) then \({\mathfrak {d}}(Y')\) decreases by at least two, and \({\mathfrak {f}}(Y')\) increases by at least one. So \(\theta (Y')\) increases by at least three.

In step (b), if \(Y'\) changes, the following lemma shows that \(\theta (Y')\) increases by at least one provided that \(\varepsilon \le \frac{1}{16}\).

Lemma 3.16

With notation as above, if \(Y'\) is modified in step (b), then

  1. 1.

    \({\mathfrak {d}}(Y')\) increases by less than \(2\varepsilon \left| P\left( Y'\right) \right| \).

  2. 2.

    \(\theta (Y')\) increases by more than \(\left( \frac{1}{8}-2\varepsilon \right) \left| P\left( Y'\right) \right| \), so the increase is positive whenFootnote 12\(\varepsilon \le \frac{1}{16}\).

Note that \(\theta \left( Y'\right) \) is an integer, so any positive increase is an increase by at least one.

Proof

Suppose that in step (b) \(Y'\) is modified. Let \(Y''\) denote the result of this modification and let \(P=P(Y')\). Let k denote the number of new octagons added. First assume that P is a non-closed path, so \(\chi \left( P\right) =1\). We have \(k\le \mathfrak {he}\left( P\right) +1\) because every hanging half-edge along P marks the passing from one new octagon to the next one. Every new octagon borders 8 edges in Z. For most new octagons, two of these edges contain hanging half-edges of P and are internal edges in \(Y''\), so if j of the edges belong to P, the net contribution of the octagon to \({\mathfrak {d}}\left( Y''\right) -{\mathfrak {d}}\left( Y'\right) \) is at most \(6-2j\). The exceptions are the two extreme octagons, which possibly meet only one hanging half-edge of P, and contribute a net of at most \(7-2j\). The sum of the parameter j over all new octagons is exactly \({\mathfrak {e}}\left( P\right) \). In total, we obtain:

$$\begin{aligned} {\mathfrak {d}}\left( Y''\right) -{\mathfrak {d}}\left( Y'\right)\le & {} 6k+2-2\cdot {\mathfrak {e}}\left( P\right) \\\le & {} 6\left( \mathfrak {he}\left( P\right) +1\right) +2-2\cdot {\mathfrak {e}}\left( P\right) \\= & {} 2\left( 3\cdot \mathfrak {he}\left( P\right) -{\mathfrak {e}}\left( P\right) \right) +8\\< & {} 2\left( \varepsilon \left| P\right| -4\chi \left( P\right) \right) +8=2\cdot \varepsilon \left| P\right| , \end{aligned}$$

where the last inequality comes from the definition of an \(\varepsilon \)-bad piece. If P is a whole boundary cycle of \(Y'_{+}\), we have \(k\le \mathfrak {he}\left( P\right) \) and all octagons contribute at most \(6-2j\) to \({\mathfrak {d}}\left( Y''\right) -{\mathfrak {d}}\left( Y'\right) \), so

$$\begin{aligned} {\mathfrak {d}}\left( Y''\right) -{\mathfrak {d}}\left( Y'\right)\le & {} 6k-2\cdot {\mathfrak {e}}\left( P\right) \le 6\cdot \mathfrak {he}\left( P\right) -2\cdot {\mathfrak {e}}\left( P\right) <2\left( \varepsilon \left| P\right| -4\chi \left( P\right) \right) =2\varepsilon \left| P\right| . \end{aligned}$$

This proves Part 1.

There is a total of 8k directed edges at the boundaries of the new octagons. Of these, \({\mathfrak {e}}\left( P\right) \) are edges of P. Each of the remaining \(8k-{\mathfrak {e}}\left( P\right) \) can ‘host’ two hanging half-edges of P, and each hanging half-edge appears in exactly 2 directed edges of new octagons. This gives

$$\begin{aligned} 2\mathfrak {he}\left( P\right) \le 2\left( 8k-{\mathfrak {e}}\left( P\right) \right) , \end{aligned}$$

so \(8k\ge \mathfrak {he}\left( P\right) +{\mathfrak {e}}\left( P\right) =\left| P\right| \). Hence

$$\begin{aligned} \theta \left( Y''\right) -\theta \left( Y'\right) =k-\left( {\mathfrak {d}}\left( Y''\right) -{\mathfrak {d}}\left( Y'\right) \right) >\frac{1}{8}\left| P\right| -2\varepsilon \left| P\right| =\left( \frac{1}{8}-2\varepsilon \right) \left| P\right| . \end{aligned}$$

\(\square \)

The upshot of the previous observations and Lemma 3.16 is that, provided \(\varepsilon \le \frac{1}{16}\), every time step (a) of the algorithm is reached, except for the first time, \(Y'\) has changed in step (b), so \({\theta (Y')}\) has increased by at least one. Since

$$\begin{aligned} \theta (Y)={\mathfrak {f}}(Y)-{\mathfrak {d}}(Y)\ge -{\mathfrak {d}}(Y), \end{aligned}$$

and the algorithm halts at the latest after the first time that \(\theta \left( Y'\right) \) is positive, we deduce the following lemma:

Lemma 3.17

If \(\varepsilon \le \frac{1}{16}\), then during the octagons-vs-boundary algorithm, step (a) is reached at most \({\mathfrak {d}}(Y)+2\) times. In particular, the algorithm always terminates.
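The counting argument behind Lemma 3.17 can be simulated in a few lines (an illustration only; the random increments below are hypothetical stand-ins for the actual changes of \(\theta \), which increase it by at least one per round):

```python
# Bookkeeping of Lemma 3.17: theta = f - d starts at >= -d(Y), each visit to
# step (a) after the first increases theta by at least 1, and the algorithm
# halts at the latest once theta > 0, so step (a) runs at most d(Y) + 2 times.
import random

random.seed(0)
for _ in range(1000):
    d0 = random.randint(1, 50)         # d(Y) of the input surface
    theta = random.randint(-d0, 0)     # theta(Y) = f(Y) - d(Y) >= -d(Y)
    visits = 1                         # the first visit to step (a)
    while theta <= 0:                  # halt at the latest once theta > 0
        theta += random.randint(1, 5)  # net increase of at least 1 per round
        visits += 1
    assert visits <= d0 + 2
print("ok")
```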

Now that we know the algorithm always terminates (assuming \(\varepsilon \le \frac{1}{16}\)), and it clearly has deterministic output due to our a priori choices, if \(Y\hookrightarrow Z\) is an embedding of a compact tiled surface Y into a tiled surface Z without boundary we write \(\mathsf {OvB}_{\varepsilon }(Y\hookrightarrow Z)\) for the output of the OvB algorithm with parameter \(\varepsilon \) applied to \(Y\hookrightarrow Z\). Thus \(\mathsf {OvB}_{\varepsilon }(Y\hookrightarrow Z)\) is a tiled surface \(Y'\) with an attached embedding \(Y\hookrightarrow Y'\). We can now make the following easy observation.

Lemma 3.18

Let \(\varepsilon \le \frac{1}{16}\), let \(Y\hookrightarrow Z\) be an embedding of a compact tiled surface Y into a tiled surface Z without boundary, and let \(Y'=\mathsf {OvB}_{\varepsilon }(Y\hookrightarrow Z)\). Then at least one of the following holds:

  • \(Y'\) is \(\varepsilon \)-adapted.

  • \(Y'\) is boundary reduced and \({\mathfrak {f}}(Y')>{\mathfrak {d}}(Y')\).

We also want an upper bound on how \({\mathfrak {d}}(Y')\) and \({\mathfrak {f}}\left( Y'\right) \) increase during the OvB algorithm.

Lemma 3.19

AssumeFootnote 13\(\varepsilon \le \frac{1}{32}\). Let Y be a compact tiled surface, Z be a boundary-less tiled surface and denote \({\overline{Y}}=\mathsf {OvB}_{\varepsilon }\left( Y\hookrightarrow Z\right) \). Then

$$\begin{aligned} {\mathfrak {d}}({\overline{Y}})&\le 3{\mathfrak {d}}(Y), \end{aligned}$$
(3.6)
$$\begin{aligned} {\mathfrak {f}}({\overline{Y}})&\le {\mathfrak {f}}(Y)+4{\mathfrak {d}}\left( Y\right) +{\mathfrak {d}}(Y)^{2}. \end{aligned}$$
(3.7)

Proof

If step (a) is only reached once, then the result of the algorithm, \({\overline{Y}}\), is equal to \(\mathsf {BR}(Y\hookrightarrow Z)\). In this case we have \({\mathfrak {d}}\left( {\overline{Y}}\right) \le {\mathfrak {d}}\left( Y\right) \) and \({\mathfrak {f}}\left( {\overline{Y}}\right) \le {\mathfrak {f}}\left( Y\right) +\frac{{\mathfrak {d}}\left( Y\right) ^{2}}{6}\) by Proposition 3.10 part 4, so the statement of the lemma holds. So from now on suppose step (a) is reached more than once.

Let \(Y_{1}=Y'\) at the penultimate time that step (a) is completed. Between the penultimate time that step (a) is completed and the algorithm terminates, step (b) takes place to form \(Y_{2}=Y'\), and then step (a) takes place one more time to form \(Y_{3}={\overline{Y}}\) which is the output of the algorithm.

First we prove the bound on \({\mathfrak {d}}\left( Y_{3}\right) \). We have \(\theta (Y_{1})\le 0\), so

$$\begin{aligned} \theta \left( Y_{1}\right) -\theta \left( Y\right) \le 0-\left( {\mathfrak {f}}\left( Y\right) - {\mathfrak {d}}\left( Y\right) \right) \le {\mathfrak {d}}\left( Y\right) . \end{aligned}$$

We claim that in every step of the OvB algorithm, the increase in \(\theta \) is larger than the increase in \({\mathfrak {d}}\). Indeed, this is obviously true in step (a), where \(\theta \) does not decrease and \({\mathfrak {d}}\) does not increase. It is also true in step (b) by Lemma 3.16 and our assumption that \(\varepsilon \le \frac{1}{32}\). Therefore,

$$\begin{aligned} {\mathfrak {d}}\left( Y_{1}\right) -{\mathfrak {d}}\left( Y\right) \le \theta \left( Y_{1}\right) - \theta \left( Y\right) \le {\mathfrak {d}}\left( Y\right) , \end{aligned}$$

and we conclude that \({\mathfrak {d}}\left( Y_{1}\right) \le 2{\mathfrak {d}}\left( Y\right) \).

Let \(P=P\left( Y_{1}\right) \). By Lemma 3.16,

$$\begin{aligned} {\mathfrak {d}}\left( Y_{2}\right)\le & {} {\mathfrak {d}}\left( Y_{1}\right) +2\varepsilon \left| P\right| {\mathop {\le }\limits ^{(3.4)}}{\mathfrak {d}}\left( Y_{1}\right) +2\varepsilon \cdot \frac{4{\mathfrak {d}}\left( Y_{1}\right) }{3-\varepsilon }\\= & {} {\mathfrak {d}}\left( Y_{1}\right) \left[ 1+\frac{8\varepsilon }{3-\varepsilon }\right] \le 1.1\cdot {\mathfrak {d}}\left( Y_{1}\right) \le 2.2\cdot {\mathfrak {d}}\left( Y\right) , \end{aligned}$$

where the penultimate inequality uses the assumption \(\varepsilon \le \frac{1}{32}\). Finally, \({\mathfrak {d}}\left( Y_{3}\right) \le {\mathfrak {d}}\left( Y_{2}\right) \), so (3.6) is proven.

For the number of octagons, note first that

$$\begin{aligned} {\mathfrak {f}}\left( Y_{1}\right) =\theta \left( Y_{1}\right) +{\mathfrak {d}}\left( Y_{1}\right) \le {\mathfrak {d}}\left( Y_{1}\right) \le 2{\mathfrak {d}}\left( Y\right) . \end{aligned}$$

Let k denote the number of new octagons added in step (b) to form \(Y_{2}\) from \(Y_{1}\). As noted in the proof of Lemma 3.16, \(k\le \mathfrak {he}\left( P\right) +1\). As \(P=P\left( Y_{1}\right) \) is \(\varepsilon \)-bad, we have

$$\begin{aligned} \mathfrak {he}\left( P\right) \le \frac{1}{3}\left( {\mathfrak {e}}\left( P\right) +\varepsilon \left| P\right| \right) {\mathop {\le }\limits ^{(3.4)}}\frac{1}{3}{\mathfrak {d}}\left( Y_{1}\right) \left( 1+\frac{4\varepsilon }{3-\varepsilon }\right) <{\mathfrak {d}}\left( Y_{1}\right) \le 2{\mathfrak {d}}\left( Y\right) , \end{aligned}$$

where the penultimate inequality again uses the assumption \(\varepsilon \le \frac{1}{32}\). Thus \({\mathfrak {f}}\left( Y_{2}\right) -{\mathfrak {f}}\left( Y_{1}\right) \le \mathfrak {he}\left( P\right) +1\le 2{\mathfrak {d}}\left( Y\right) \).

Finally, by Proposition 3.10 part 4, \({\mathfrak {f}}\left( Y_{3}\right) -{\mathfrak {f}}\left( Y_{2}\right) \le \frac{{\mathfrak {d}}\left( Y_{2}\right) ^{2}}{6}\le {\mathfrak {d}}\left( Y\right) ^{2}\), and we conclude

$$\begin{aligned} {\mathfrak {f}}\left( Y_{3}\right)= & {} {\mathfrak {f}}\left( Y_{1}\right) +\left[ {\mathfrak {f}}\left( Y_{2}\right) -{\mathfrak {f}}\left( Y_{1}\right) \right] +\left[ {\mathfrak {f}}\left( Y_{3}\right) -{\mathfrak {f}}\left( Y_{2}\right) \right] \\\le & {} 2{\mathfrak {d}}\left( Y\right) +2{\mathfrak {d}}\left( Y\right) +{\mathfrak {d}}\left( Y\right) ^{2}=4 {\mathfrak {d}}\left( Y\right) +{\mathfrak {d}}\left( Y\right) ^{2}, \end{aligned}$$

which proves (3.7) in this case as well. \(\square \)
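The two numeric estimates invoked in the proof, both relying on \(\varepsilon \le \frac{1}{32}\), can be checked exactly (an illustration only):

```python
# Exact check of the constants used in the proof of Lemma 3.19 at eps = 1/32.
from fractions import Fraction

eps = Fraction(1, 32)
# d(Y2) <= d(Y1) * (1 + 8*eps/(3 - eps)) <= 1.1 * d(Y1):
assert 1 + 8 * eps / (3 - eps) <= Fraction(11, 10)
# he(P) <= (1/3) * d(Y1) * (1 + 4*eps/(3 - eps)) < d(Y1):
assert Fraction(1, 3) * (1 + 4 * eps / (3 - eps)) < 1
print("ok")
```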

3.6 Resolutions from the octagons-vs-boundary algorithm.

Recall the definition of the tiled surface \(X_{\phi }\) from Section 1 and Example 3.4. Given a tiled surface Y, we define

$$\begin{aligned} {\mathbb {E}}_{n}(Y){\mathop {=}\limits ^{\mathrm {def}}}{\mathbb {E}}_{\phi \in {\mathbb {X}}_{n}}[\#\text {morphisms }Y\rightarrow X_{\phi }]. \end{aligned}$$

This is the expected number of morphisms from Y to \(X_{\phi }\). Recall that we use the uniform probability measure on \({\mathbb {X}}_{n}\). We have the following result that relates this concept to Theorem 1.11.

Lemma 3.20

Given \(1\ne \gamma \in \Gamma \), let \({\mathcal {C}}_{\gamma }\) be as in Example 3.5. Then

$$\begin{aligned} {\mathbb {E}}_{n}[\mathsf {fix}_{\gamma }]={\mathbb {E}}_{n}({\mathcal {C}}_{\gamma }). \end{aligned}$$
(3.8)

Proof

This is not hard to check but also follows from [MP20, Lem. 2.7]. \(\square \)

We need to work not only with \({\mathbb {E}}_{n}(Y)\) for various tiled surfaces, but also with the expected number of times that Y embeds into \(X_{\phi }\). For a tiled surface Y, this is given by

$$\begin{aligned} {\mathbb {E}}_{n}^{\mathrm {emb}}(Y){\mathop {=}\limits ^{\mathrm {def}}}{\mathbb {E}}_{\phi \in {\mathbb {X}}_{n}}[\#\text {embeddings }Y\hookrightarrow X_{\phi }]. \end{aligned}$$

We recall the following definition from [MP20, Def. 2.8].

Definition 3.21

(Resolutions). A resolution \({\mathcal {R}}\) of a tiled surface Y is a collection of morphisms of tiled surfaces

$$\begin{aligned} {\mathcal {R}}=\left\{ f:Y\rightarrow W_{f}\right\} , \end{aligned}$$

such that every morphism \(h:Y\rightarrow Z\) of Y into a tiled surface Z with no boundary decomposes uniquely as \(Y{\mathop {\rightarrow }\limits ^{f}}W_{f}{\mathop {\hookrightarrow }\limits ^{{\overline{h}}}}Z\), where \(f\in {\mathcal {R}}\) and \({\overline{h}}\) is an embedding.

The point of this definition is the following lemma also recorded in [MP20, Lem. 2.9].

Lemma 3.22

If Y is a compact tiled surface and \({\mathcal {R}}\) is a finite resolution of Y, then

$$\begin{aligned} {\mathbb {E}}_{n}\left( Y\right) =\sum _{f\in {\mathcal {R}}}{\mathbb {E}}_{n}^{\mathrm {emb}}\left( W_{f}\right) . \end{aligned}$$
(3.9)

The type of resolution we wish to use in this paper is the following.

Definition 3.23

(\({\mathcal {R}}_{\varepsilon }(Y)\)). For a compact tiled surface Y, let \({\mathcal {R}}_{\varepsilon }(Y)\) denote the collection of all morphisms \(Y\xrightarrow {f}W_{f}\) obtained as follows:

  • \(F:Y\rightarrow Z\) is a morphism of Y into a boundary-less tiled surface Z.

  • \(U_{F}\) is the image of F in Z. Hence there is a given embedding \(\iota _{F}:U_{F}\hookrightarrow Z\).

  • \(W_{f}\) is given by \(W_{f}=\mathsf {OvB}_{\varepsilon }(U_{F}\hookrightarrow Z)\), and \(f:Y\rightarrow W_{f}\) is the composition of the corestriction \(Y\rightarrow U_{F}\) of F with the embedding \(U_{F}\hookrightarrow W_{f}\) given by the algorithm.

Theorem 3.24

Given a compact tiled surface Y and \(\varepsilon \le \frac{1}{32}\) (or \(\varepsilon \le \frac{1}{16g}\) for arbitrary \(g\ge 2\)), the collection \({\mathcal {R}}_{\varepsilon }(Y)\) defined in Definition 3.23 is a finite resolution of Y.

Proof

To see that \({\mathcal {R}}_{\varepsilon }(Y)\) is finite, note that there are finitely many options for \(U_{F}\) (this is a quotient of the compact complex Y). For any such \(U_{F}\) we have \({\mathfrak {f}}(U_{F})\le {\mathfrak {f}}(Y)\) and \({\mathfrak {d}}(U_{F})\le {\mathfrak {d}}(Y)\), and hence by Lemma 3.19 there is a bound on \({\mathfrak {f}}(W_{f})\) depending only on Y. As we add a bounded number of octagons to obtain \(W_{f}\), there is a bound also on \({\mathfrak {v}}\left( W_{f}\right) \) and on \({\mathfrak {e}}\left( W_{f}\right) \). This means that \(W_{f}\) is one of only finitely many tiled surfaces, and there are finitely many morphisms of Y to one of these.

Now we explain why \({\mathcal {R}}_{\varepsilon }(Y)\) is a resolution – this is essentially the same as [MP20, proof of Thm. 2.14]. Let \(F:Y\rightarrow Z\) be a morphism with \(\partial Z=\emptyset \). By the definition of \({\mathcal {R}}_{\varepsilon }(Y)\), it is clear that F decomposes as \(Y{\mathop {\rightarrow }\limits ^{f}}W_{f}\hookrightarrow Z\) for the \(f\in {\mathcal {R}}_{\varepsilon }(Y)\) that originates in F. To show uniqueness, assume that F decomposes in an additional way

$$\begin{aligned} Y{\mathop {\rightarrow }\limits ^{f'}}W_{f'}\hookrightarrow Z \end{aligned}$$

where \(W_{f'}\) is the result of the OvB algorithm for some \(F':Y\rightarrow Z'\) with \(\partial Z'=\emptyset \). We claim that both decompositions are precisely the same decomposition of F (namely \(W_{f'}=W_{f}\) and \(f'=f\)). First, \(U_{F'}=F'\left( Y\right) \hookrightarrow W_{f'}\hookrightarrow Z\), so \(U_{F'}=F'\left( Y\right) =F\left( Y\right) =U_{F}\). Write \(Y'=U_{F'}\). The OvB algorithm with input \(Y'\hookrightarrow Z'\) takes place entirely inside \(W_{f'}\), and does not depend on the structure of \(Z'\backslash W_{f'}\): the choices are made depending only on the structure of the boundary of \(Y'\) in step (b) of the OvB algorithm, as well as in every step of the procedure described in Proposition 3.10(3) to obtain \(\mathsf {BR}\left( Y'\hookrightarrow Z'\right) \) in step (a). Moreover, the result of these steps depends only on the octagons immediately adjacent to the boundary of \(Y'\). But \(W_{f'}\) is embedded in Z, and so it must be identical to \(W_{f}\) and \(f'\) identical to f. \(\square \)

It is the following corollary of the previous results, applied to a tiled surface \({\mathcal {C}}_{\gamma }\) as in Example 3.5, that will be used in the rest of the paper. Recall that for \(\gamma \in \Gamma \), \(\ell _{w}\left( \gamma \right) \) denotes the word-length, with respect to the generators \(\left\{ a,b,c,d\right\} \), of a shortest representative of the conjugacy class of \(\gamma \) in \(\Gamma \).

Corollary 3.25

Let \(1\ne \gamma \in \Gamma \) and \(\varepsilon \le \frac{1}{32}\).Footnote 14 For any \(f:{\mathcal {C}}_{\gamma }\rightarrow W_{f}\) in \({\mathcal {R}}_{\varepsilon }({\mathcal {C}}_{\gamma })\), either

  1. \(W_{f}\) is boundary reduced, and \(\chi (W_{f})<-{\mathfrak {f}}(W_{f})<-{\mathfrak {d}}(W_{f})\), or

  2. \(W_{f}\) is \(\varepsilon \)-adapted.

Moreover, in either case,

$$\begin{aligned} {\mathfrak {d}}(W_{f})&\le 6\ell _{w}\left( \gamma \right) , \end{aligned}$$
(3.10)
$$\begin{aligned} {\mathfrak {f}}(W_{f})&\le 8\ell _{w}\left( \gamma \right) +4\left( \ell _{w}\left( \gamma \right) \right) ^{2}. \end{aligned}$$
(3.11)

Proof

The inequalities (3.10) and (3.11) are from Lemma 3.19 and the fact that \({\mathfrak {d}}\left( {\mathcal {C}}_{\gamma }\right) =2\ell _{w}\left( \gamma \right) \) and \({\mathfrak {f}}\left( {\mathcal {C}}_{\gamma }\right) =0\). It follows from the construction of \({\mathcal {R}}_{\varepsilon }({\mathcal {C}}_{\gamma })\) using the OvB algorithm that if \(f\in {\mathcal {R}}_{\varepsilon }(Y)\) with \(f:Y\rightarrow W_{f}\), and \(W_{f}\) is not \(\varepsilon \)-adapted, then \(W_{f}\) is boundary reduced and \({\mathfrak {d}}(W_{f})<{\mathfrak {f}}(W_{f})\). Combined with Lemma 3.7 this gives

$$\begin{aligned} \chi (W_{f})&\le -2{\mathfrak {f}}(W_{f})+\frac{1}{2}{\mathfrak {d}}(W_{f})<-2{\mathfrak {f}}(W_{f})+\frac{1}{2}{\mathfrak {f}}(W_{f})\le -{\mathfrak {f}}(W_{f}). \end{aligned}$$

\(\square \)

4 Representation Theory of Symmetric Groups

4.1 Background.

We write \(S_{n}\) for the symmetric group of permutations of the set [n]. By convention, \(S_{0}\) is the trivial group with one element. If \(m\le n\), we always let \(S_{m}\le S_{n}\) be the subgroup of permutations fixing \([m+1,n]\) element-wise. For \(k\le n\), we let \(S'_{k}\le S_{n}\) denote the subgroup of permutations fixing \([n-k]\) element-wise. We write \({\mathbf {C}}[S_{n}]\) for the group algebra of \(S_{n}\) with complex coefficients.

4.1.1 Young diagrams.

A Young diagram (YD) of size n is a collection of n boxes, arranged in left-aligned rows in the plane, such that the number of boxes in each row is non-increasing from top to bottom. A Young diagram is uniquely specified by the sequence \(\lambda _{1},\lambda _{2},\ldots ,\lambda _{r}\) where \(\lambda _{i}\) is the number of boxes in the ith row (and there are r rows). We have \(\lambda _{1}\ge \lambda _{2}\ge \cdots \ge \lambda _{r}>0\); such a sequence of integers is called a partition. We view YDs and partitions interchangeably in this paper. If \(\sum _{i}\lambda _{i}=n\) we write \(\lambda \vdash n\). Two important examples of partitions are (n), with all boxes of the corresponding YD in the first row, and \((1)^{n}{\mathop {=}\limits ^{\mathrm {def}}}(\underbrace{1,\ldots ,1}_{n})\), with all boxes of the corresponding YD in the first column. If \(\mu ,\lambda \) are YDs, we write \(\mu \subset \lambda \) if all boxes of \(\mu \) are contained in \(\lambda \) (when both are aligned to the same top-left borders). We say \(\mu \subset _{k}\lambda \) if \(\mu \subset \lambda \) and there are k boxes of \(\lambda \) that are not in \(\mu \). We write \(\emptyset \) for the empty YD with no boxes. If \(\lambda \) is a YD, \({\check{\lambda }}\) is the conjugate YD obtained by reflecting \(\lambda \) in the diagonal (switching rows and columns).

A skew Young diagram (SYD) is a pair of Young diagrams \(\mu \) and \(\lambda \) with \(\mu \subset \lambda \). This pair is denoted \(\lambda /\mu \) and thought of as the collection of boxes of \(\lambda \) that are not in \(\mu \). We identify a YD \(\lambda \) with the SYD \(\lambda /\emptyset \) so that YDs are special cases of SYDs. The size of a SYD \(\lambda /\mu \) is the number of boxes it contains; i.e. the number of boxes of \(\lambda \) that are not in \(\mu \). The size is denoted by \(|\lambda /\mu |\), or if \(\lambda \) is a YD, \(|\lambda |\).
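As a concrete illustration of these conventions, partitions, conjugation and containment of YDs are straightforward to manipulate programmatically (a small sketch; the function names are ours, not notation from the paper):

```python
def partitions(n, max_part=None):
    """Yield all partitions of n as non-increasing tuples (i.e. Young diagrams)."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def conjugate(lam):
    """Reflect lam in the diagonal, switching rows and columns."""
    return tuple(sum(1 for row in lam if row > i) for i in range(lam[0])) if lam else ()

def contains(lam, mu):
    """Test mu ⊂ lam: each row of mu fits inside the corresponding row of lam."""
    return len(mu) <= len(lam) and all(m <= l for m, l in zip(mu, lam))

assert len(list(partitions(5))) == 7            # p(5) = 7
assert conjugate((3, 1)) == (2, 1, 1)           # reflection in the diagonal
assert contains((3, 1), (2, 1)) and not contains((2, 2), (1, 1, 1))
```

In this encoding, \(\mu \subset _{k}\lambda \) amounts to `contains(lam, mu) and sum(lam) - sum(mu) == k`.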

4.1.2 Young tableaux.

Let \(\lambda /\mu \) be a SYD, with \(\lambda \vdash n\) and \(\mu \vdash k\). A standard Young tableau of shape \(\lambda /\mu \) is a filling of the boxes of \(\lambda /\mu \) with the numbers \([k+1,n]\) such that each number appears in exactly one box and the numbers in each row (resp. column) are strictly increasing from left to right (resp. top to bottom). We refer to standard Young tableaux just as tableaux in this paper. We write \(\mathrm {Tab}(\lambda /\mu )\) for the collection of tableaux of shape \(\lambda /\mu \). Given a tableau T, we denote by \(T|_{\le m}\) (resp. \(T|_{>m}\)) the tableau formed by the numbers-in-boxes of T with numbers in the set [m] (resp. \([m+1,n]\)). The shape of \(T|_{\le m}\) and of \(T|_{>m}\) is a SYD in general. If T is a tableau and the shape of T is a YD, we let \(\mu _{m}(T)\) be the YD that is the shape of \(T|_{\le m}\). If \(\nu \subset \mu \subset \lambda \), \(T\in \mathrm {Tab}(\mu /\nu )\) and \(R\in \mathrm {Tab}(\lambda /\mu )\), then we write \(T\sqcup R\) for the tableau in \(\mathrm {Tab}(\lambda /\nu )\) obtained by adjoining R to T in the obvious way.

4.1.3 Irreducible representations.

The equivalence classes of irreducible unitary representations of \(S_{n}\) are in one-to-one correspondence with Young diagrams of size n. Given a YD \(\lambda \vdash n\), we write \(V^{\lambda }\) for the corresponding irreducible representation of \(S_{n}\); each \(V^{\lambda }\) is a finite dimensional Hermitian complex vector space with an action of \(S_{n}\) by unitary linear automorphisms. Hence \(V^{\lambda }\) can also be thought of as a module for \({\mathbf {C}}[S_{n}]\). We write \(d_{\lambda }{\mathop {=}\limits ^{\mathrm {def}}}\dim V^{\lambda }\). It is well-known, and also follows from the discussion of the next paragraphs, that \(d_{\lambda }=|\mathrm {Tab}(\lambda )|\). Note that \(d_{\lambda }=d_{{\check{\lambda }}}\) since reflection in the diagonal gives a bijection between \(\mathrm {Tab}(\lambda )\) and \(\mathrm {Tab}({\check{\lambda }})\).
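The equalities \(d_{\lambda }=|\mathrm {Tab}(\lambda )|\) and \(d_{\lambda }=d_{{\check{\lambda }}}\) can be checked on small diagrams via the hook length formula (a standard fact not quoted in the text above; the sketch and its function names are ours):

```python
from math import factorial

def dim_irrep(lam):
    """d_lambda = n! / (product of hook lengths), by the hook length formula."""
    n, hooks = sum(lam), 1
    for i, row in enumerate(lam):
        for j in range(row):
            arm = row - j - 1                            # boxes to the right in row i
            leg = sum(1 for r in lam[i + 1:] if r > j)   # boxes below in column j
            hooks *= arm + leg + 1
    return factorial(n) // hooks

# the trivial representation (n) and the sign representation (1,...,1) are 1-dimensional
assert dim_irrep((6,)) == 1 and dim_irrep((1,) * 6) == 1
# d_lambda is invariant under conjugation, e.g. (3,1) versus (2,1,1)
assert dim_irrep((3, 1)) == dim_irrep((2, 1, 1)) == 3
```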

We now give an account of the Vershik–Okounkov approach to the representation theory of symmetric groups from [VO04]. Corresponding to the usual ordering of [n], there is a filtration of subgroups

$$\begin{aligned} S_{0}\le S_{1}\le S_{2}\le \cdots \le S_{n}. \end{aligned}$$

If W is any unitary representation of \(S_{n}\), \(m\in [n]\) and \(\mu \vdash m\), we write \(W_{\mu }\) for the span of vectors in copies of \(V^{\mu }\) in the restriction of W to \(S_{m}\); we call \(W_{\mu }\) the \(\mu \)-isotypic subspace of W.

It follows from the branching law for restriction of representations between \(S_{m}\) and \(S_{m-1}\) that for \(\lambda \vdash n\) and \(T\in \mathrm {Tab}(\lambda )\) the intersection

$$\begin{aligned} \left( V^{\lambda }\right) _{\mu _{1}(T)}\cap \left( V^{\lambda }\right) _{\mu _{2}(T)}\cap \cdots \cap \left( V^{\lambda }\right) _{\mu _{n-1}(T)} \end{aligned}$$

is one-dimensional. Vershik–Okounkov specify a unit vector \(v_{T}\) in this intersection. The collection

$$\begin{aligned} \{\,v_{T}\,:\,T\in \mathrm {Tab}(\lambda )\,\} \end{aligned}$$

is an orthonormal basis for \(V^{\lambda }\) called a Gelfand-Tsetlin basis.

4.1.4 Modules from SYDs.

If \(m,n\in {\mathbf {N}}\), \(\lambda \vdash n\), \(\mu \vdash m\) and \(\mu \subset \lambda \), then

$$\begin{aligned} V^{\lambda /\mu }{\mathop {=}\limits ^{\mathrm {def}}}\mathrm {Hom}_{S_{m}}(V^{\mu },V^{\lambda }) \end{aligned}$$

is a unitary representation of \(S'_{n-m}\), as \(S'_{n-m}\) is contained in the centralizer of \(S_{m}\) in \(S_{n}\). We write \(d_{\lambda /\mu }\) for the dimension of this representation. There is also an analogous Gelfand-Tsetlin orthonormal basis of \(V^{\lambda /\mu }\) indexed by \(T\in \mathrm {Tab}(\lambda /\mu )\); the basis element corresponding to a skew tableau T will be denoted \(w_{T}\). It follows that \(d_{\lambda /\mu }=|\mathrm {Tab}(\lambda /\mu )|\). Note that when \(\mu =\lambda \), \(\mathrm {Tab}\left( \lambda /\mu \right) =\left\{ \emptyset \right\} \) (\(\emptyset \) being the empty tableau), and the representation \(V^{\lambda /\mu }\) is one-dimensional with basis \(w_{\emptyset }\).

One has the following consequence of Frobenius reciprocity (cf. e.g. [MP20, Lem. 3.1]).

Lemma 4.1

Let \(n\in {\mathbf {N}}\), \(m\in \left[ n\right] \) and \(\mu \vdash m\). Then

$$\begin{aligned} \sum _{\lambda \vdash n:\mu \subset \lambda }d_{\lambda /\mu }d_{\lambda }=\frac{n!}{m!}d_{\mu }. \end{aligned}$$
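Lemma 4.1 is easy to spot-check numerically: \(d_{\lambda /\mu }=|\mathrm {Tab}(\lambda /\mu )|\) equals the number of chains of single-box additions from \(\mu \) to \(\lambda \), and \(d_{\lambda }\) is given by the hook length formula. The sketch below (helper names ours) verifies the identity for a small case:

```python
from math import factorial
from functools import lru_cache

def dim_irrep(lam):
    """d_lambda via the hook length formula."""
    n, hooks = sum(lam), 1
    for i, row in enumerate(lam):
        for j in range(row):
            hooks *= (row - j - 1) + sum(1 for r in lam[i + 1:] if r > j) + 1
    return factorial(n) // hooks

def add_box(mu):
    """All Young diagrams obtained from mu by adding a single box."""
    mu = list(mu)
    out = []
    for i in range(len(mu) + 1):
        row = mu[i] if i < len(mu) else 0
        if i == 0 or row + 1 <= mu[i - 1]:
            out.append(tuple(mu[:i] + [row + 1] + mu[i + 1:]))
    return out

@lru_cache(maxsize=None)
def skew_dim(lam, mu):
    """d_{lam/mu}: the number of chains mu = nu_0 ⊂_1 nu_1 ⊂_1 ... ⊂_1 nu_k = lam."""
    if lam == mu:
        return 1
    return sum(skew_dim(lam, nu) for nu in add_box(mu)
               if len(nu) <= len(lam) and all(a <= b for a, b in zip(nu, lam)))

def partitions(n, max_part=None):
    max_part = n if max_part is None else max_part
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

# Lemma 4.1 with n = 5, m = 3, mu = (2,1): both sides equal (5!/3!) * d_mu = 40
n, m, mu = 5, 3, (2, 1)
lhs = sum(skew_dim(lam, mu) * dim_irrep(lam) for lam in partitions(n)
          if len(mu) <= len(lam) and all(a <= b for a, b in zip(mu, lam)))
assert lhs == factorial(n) // factorial(m) * dim_irrep(mu) == 40
```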

4.2 Effective bounds for dimensions.

Throughout the paper, we will write \(b_{\lambda }\) for the number of boxes outside the first row of a YD \(\lambda \), and write \({\check{b}}_{\lambda }\) for the number of boxes outside the first column of \(\lambda \). More generally, we write \(b_{\lambda /\nu }\) (resp. \({\check{b}}_{\lambda /\nu }\)) for the number of boxes outside the first row (resp. column) of the SYD \(\lambda /\nu \), so \(b_{\lambda /\nu }=b_{\lambda }-b_{\nu }\) and \({\check{b}}_{\lambda /\nu }={\check{b}}_{\lambda }-{\check{b}}_{\nu }\). We need the following bounds on dimensions of representations.

Lemma 4.2

[MP20, Lem. 4.3]. If \(n\in {\mathbf {N}}\), \(m\in \left[ n\right] \), \(\lambda \vdash n\), \(\nu \vdash m\), \(\nu \subset \lambda \) and \(m\ge 2b_{\lambda }\), then

$$\begin{aligned} \frac{(n-b_{\lambda })^{b_{\lambda }}}{b_{\lambda }^{~b_{\lambda }}m^{b_{\nu }}} \le \frac{d_{\lambda }}{d_{\nu }}\le \frac{b_{\nu }^{~b_{\nu }}n^{b_{\lambda }}}{(m-b_{\nu })^{b_{\nu }}}. \end{aligned}$$
(4.1)

The condition \(m\ge 2b_{\lambda }\) ensures that both \(\nu \) and \(\lambda \) have most of their boxes in their first rows. This is an important and recurring theme of the paper (see e.g. Proposition 4.6).

Lemma 4.3

Let \(\lambda /\nu \) be a skew Young diagram of size n. Then

$$\begin{aligned} d_{\lambda /\nu }\le (n)_{b_{\lambda /\nu }}\quad \mathrm {and} \quad d_{\lambda /\nu }\le (n)_{{\check{b}}_{\lambda /\nu }}. \end{aligned}$$

Proof

There are at most \(\left( {\begin{array}{c}n\\ b_{\lambda /\nu }\end{array}}\right) \) options for the set of \(b_{\lambda /\nu }\) entries placed outside the first row. Given this set, there are at most \(b_{\lambda /\nu }!\) ways to place its elements outside the first row, and the tableau is then determined, so \(d_{\lambda /\nu }\le \left( {\begin{array}{c}n\\ b_{\lambda /\nu }\end{array}}\right) b_{\lambda /\nu }!=(n)_{b_{\lambda /\nu }}\). The proof of the second inequality, with columns in place of rows, is analogous. \(\square \)
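The bound of Lemma 4.3 can also be verified exhaustively for small skew shapes, counting \(d_{\lambda /\nu }\) as the number of box-adding chains from \(\nu \) to \(\lambda \) (a sketch under that standard identification; helper names are ours):

```python
from functools import lru_cache

def add_box(mu):
    """All Young diagrams obtained from mu by adding a single box."""
    mu = list(mu)
    return [tuple(mu[:i] + [(mu[i] if i < len(mu) else 0) + 1] + mu[i + 1:])
            for i in range(len(mu) + 1)
            if i == 0 or (mu[i] if i < len(mu) else 0) + 1 <= mu[i - 1]]

@lru_cache(maxsize=None)
def skew_dim(lam, nu):
    """d_{lam/nu} as the number of single-box chains from nu up to lam."""
    if lam == nu:
        return 1
    return sum(skew_dim(lam, rho) for rho in add_box(nu)
               if len(rho) <= len(lam) and all(a <= b for a, b in zip(rho, lam)))

def partitions(n, max_part=None):
    max_part = n if max_part is None else max_part
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def falling(n, k):
    """The falling factorial (n)_k = n (n - 1) ... (n - k + 1)."""
    out = 1
    for i in range(k):
        out *= n - i
    return out

# check d_{lam/nu} <= (n)_{b_{lam/nu}} over all skew shapes with at most 6 boxes
for total in range(1, 7):
    for lam in partitions(total):
        for m in range(total):
            for nu in partitions(m):
                if len(nu) <= len(lam) and all(a <= b for a, b in zip(nu, lam)):
                    n_boxes = total - m
                    b = sum(lam[1:]) - sum(nu[1:])   # boxes outside the first row
                    assert skew_dim(lam, nu) <= falling(n_boxes, b)
```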

4.3 Effective bounds for the zeta function of the symmetric group.

The Witten zeta function of the symmetric group \(S_{n}\) is defined for a real parameter s as

$$\begin{aligned} \zeta ^{S_{n}}(s){\mathop {=}\limits ^{\mathrm {def}}}\sum _{\lambda \vdash n}\frac{1}{d_{\lambda }^{~s}}. \end{aligned}$$
(4.2)

This function, and various closely related functions, play a major role in this paper. One main reason for its appearance is a formula going back to Hurwitz [Hur02], which states

$$\begin{aligned} |{\mathbb {X}}_{g,n}|=|\mathrm {Hom}(\Gamma _{g},S_{n})|=|S_{n}|^{2g-1}\zeta ^{S_{n}}(2g-2). \end{aligned}$$
(4.3)

This is also sometimes called Mednykh’s formula [Med78]. We first give the following result, due to Liebeck and Shalev [LS04, Thm. 1.1] and, independently, Gamburd [Gam06, Prop. 4.2]. We refer the reader to Section 1.4 for the definition of the notation (e.g. \(O,\ll \)) that we use in this Section 4.

Theorem 4.4

[LS04, Gam06]. For any \(s>0\), as \(n\rightarrow \infty \)

$$\begin{aligned} \zeta ^{S_{n}}(s)=2+O\left( n^{-s}\right) . \end{aligned}$$

This has the following corollary when combined with (4.3).

Corollary 4.5

For any \(g\in {\mathbf {N}}\) with \(g\ge 2\), we have

$$\begin{aligned} \frac{|{\mathbb {X}}_{g,n}|}{(n!)^{2g-1}}=2+O(n^{-2}). \end{aligned}$$
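Theorem 4.4 (and hence Corollary 4.5) is easy to observe numerically: the trivial and sign representations each contribute 1 to \(\zeta ^{S_{n}}(s)\), and all remaining terms decay. The sketch below (function names ours) computes \(\zeta ^{S_{n}}(2)\) exactly via the hook length formula:

```python
from math import factorial

def partitions(n, max_part=None):
    max_part = n if max_part is None else max_part
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def dim_irrep(lam):
    """d_lambda via the hook length formula."""
    n, hooks = sum(lam), 1
    for i, row in enumerate(lam):
        for j in range(row):
            hooks *= (row - j - 1) + sum(1 for r in lam[i + 1:] if r > j) + 1
    return factorial(n) // hooks

def witten_zeta(n, s):
    """zeta^{S_n}(s) = sum over lambda ⊢ n of d_lambda^{-s}."""
    return sum(dim_irrep(lam) ** (-s) for lam in partitions(n))

z8, z14 = witten_zeta(8, 2), witten_zeta(14, 2)
assert z8 > z14 > 2            # the error term is positive and decreasing
assert abs(z14 - 2) < 0.05     # consistent with zeta^{S_n}(2) = 2 + O(n^{-2})
```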

As well as the previous results, we also need to know how well \(\zeta ^{S_{n}}(2g-2)\) is approximated by restricting the summation in (4.2) to \(\lambda \) with a bounded number of boxes either outside the first row or outside the first column. We let \(\Lambda (n,b)\) denote the collection of \(\lambda \vdash n\) such that \(\lambda _{1}\le n-b\) and \({\check{\lambda }}_{1}\le n-b\). In other words, \(\Lambda (n,b)\) is the collection of YDs \(\lambda \vdash n\) with both \(b_{\lambda }\ge b\) and \({\check{b}}_{\lambda }\ge b\). A version of the next proposition, when b is fixed and \(n\rightarrow \infty \), is due independently to Liebeck and Shalev [LS04, Prop. 2.5] and Gamburd [Gam06, Prop. 4.2]. Here, we need a version that holds uniformly over b that is not too large compared to n.

Proposition 4.6

Fix \(s>0\). There exists a constant \(\kappa =\kappa (s)>1\) such that when \(b^{2}\le \frac{n}{3}\),

$$\begin{aligned} \sum _{\lambda \in \Lambda (n,b)}\frac{1}{d_{\lambda }^{~s}}\ll _{s}\left( \frac{\kappa b^{2s}}{\left( n-b^{2}\right) ^{s}}\right) ^{b}. \end{aligned}$$
(4.4)

Proof

Here we follow Liebeck and Shalev [LS04, proof of Prop. 2.5] and make the proof uniform over b. Let \(\Lambda _{0}(n,b)\) denote the collection of \(\lambda \vdash n\) with \({\check{\lambda }}_{1}\le \lambda _{1}\le n-b\). Since \(d_{\lambda }=d_{{\check{\lambda }}}\),

$$\begin{aligned} \sum _{\lambda \in \Lambda (n,b)}\frac{1}{d_{\lambda }^{~s}}\le 2\sum _{\lambda \in \Lambda _{0}(n,b)}\frac{1}{d_{\lambda }^{~s}}, \end{aligned}$$

so it suffices to prove a bound for \(\sum _{\lambda \in \Lambda _{0}(n,b)}\frac{1}{d_{\lambda }^{~s}}\). Let \(\Lambda _{1}(n,b)\) denote the elements \(\lambda \) of \(\Lambda _{0}(n,b)\) with \(\lambda _{1}\ge \frac{2n}{3}\). We write

$$\begin{aligned} \sum _{\lambda \in \Lambda _{0}(n,b)}\frac{1}{d_{\lambda }^{~s}}=\Sigma _{1}+\Sigma _{2} \end{aligned}$$

where

$$\begin{aligned} \Sigma _{1}{\mathop {=}\limits ^{\mathrm {def}}}\sum _{\lambda \in \Lambda _{1}(n,b)}\frac{1}{d_{\lambda }^{~s}},\quad \Sigma _{2}{\mathop {=}\limits ^{\mathrm {def}}}\sum _{\lambda \in \Lambda _{0}(n,b)-\Lambda _{1}(n,b)}\frac{1}{d_{\lambda }^{~s}}. \end{aligned}$$

Bound for \(\Sigma _{1}\). By [LS04, Lem. 2.1], if \(\lambda \in \Lambda _{1}(n,b)\) then, since \(\lambda _{1}\ge \frac{n}{2}\), \(d_{\lambda }\ge \left( {\begin{array}{c}\lambda _{1}\\ n-\lambda _{1}\end{array}}\right) .\) Indeed, for completeness, following [LS04, proof of Lem. 2.1], we can find many tableaux of shape \(\lambda \) as follows. Put the numbers \(1,\ldots ,n-\lambda _{1}\) in the leftmost entries of the first row of \(\lambda \). Then for each of the \(\left( {\begin{array}{c}\lambda _{1}\\ n-\lambda _{1}\end{array}}\right) \) subsets of \([n-\lambda _{1}+1,n]\) of size \(n-\lambda _{1}\), there is a tableau of shape \(\lambda \) whose entries outside the first row form precisely that subset.

Let p(m) denote the number of partitions \(\mu \vdash m\). For a fixed valid value of \(\lambda _{1}\), the number of \(\lambda \in \Lambda _{1}(n,b)\) with that first row is \(p(n-\lambda _{1})\): since \(\lambda _{1}\ge \frac{n}{2}\ge n-\lambda _{1}\), any YD with \(n-\lambda _{1}\) boxes can be added below the fixed first row of \(\lambda _{1}\) boxes to form such a \(\lambda \). Therefore

$$\begin{aligned} \Sigma _{1}\le \sum _{\lambda _{1}=\lceil \frac{2n}{3}\rceil }^{n-b} \frac{p\left( n-\lambda _{1}\right) }{\left( {\begin{array}{c}\lambda _{1}\\ n-\lambda _{1}\end{array}}\right) ^{s}} =\sum _{\ell =b}^{\lfloor \frac{n}{3}\rfloor }\frac{p(\ell )}{\left( {\begin{array}{c}n-\ell \\ \ell \end{array}}\right) ^{s}}. \end{aligned}$$

We now split the sum into two ranges to estimate \(\Sigma _{1}\le \Sigma '_{1}+\Sigma ''_{1}\) where

$$\begin{aligned} \Sigma '_{1}=\sum _{\ell =b}^{b^{2}}\frac{p(\ell )}{\left( {\begin{array}{c}n-\ell \\ \ell \end{array}}\right) ^{s}},\quad \Sigma ''_{1}=\sum _{\ell =b^{2}+1}^{\lfloor \frac{n}{3}\rfloor } \frac{p(\ell )}{\left( {\begin{array}{c}n-\ell \\ \ell \end{array}}\right) ^{s}}. \end{aligned}$$

First we deal with \(\Sigma '_{1}\). We have \(p(\ell )\le c_{1}^{\sqrt{\ell }}\) for some \(c_{1}>1\) [Apo76, Thm. 14.5]. As \(\ell \le n-\ell \),

$$\begin{aligned} \left( {\begin{array}{c}n-\ell \\ \ell \end{array}}\right) \ge \frac{(n-\ell )^{\ell }}{\ell ^{\ell }}. \end{aligned}$$

This gives

$$\begin{aligned} \Sigma '_{1}&\le \sum _{\ell =b}^{b^{2}}c_{1}^{\sqrt{\ell }}\left( \frac{\ell }{n-\ell }\right) ^{s\ell }\le c_{1}^{~b}\sum _{\ell =b}^{b^{2}}\left( \frac{b^{2}}{n-b^{2}}\right) ^{s\ell } \ll _{s}c_{1}^{~b}\left( \frac{b^{2}}{n-b^{2}}\right) ^{sb}, \end{aligned}$$
(4.5)

where the last inequality used that \(\frac{b^{2}}{(n-b^{2})}\le \frac{1}{2}\) as we assume \(b^{2}\le \frac{n}{3}\).
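Both ingredients in this estimate for \(\Sigma '_{1}\) are easy to test numerically: the partition bound \(p(\ell )\le c_{1}^{\sqrt{\ell }}\) (by [Apo76, Thm. 14.5] one may take \(c_{1}=e^{\pi \sqrt{2/3}}\approx 13.002\); this concrete constant is our choice) and the elementary binomial inequality \(\left( {\begin{array}{c}n-\ell \\ \ell \end{array}}\right) \ge ((n-\ell )/\ell )^{\ell }\). A sketch:

```python
from math import comb, exp, pi, sqrt

def partition_count(m):
    """p(m) via the standard dynamic program over allowed parts 1..m."""
    dp = [1] + [0] * m
    for part in range(1, m + 1):
        for total in range(part, m + 1):
            dp[total] += dp[total - part]
    return dp[m]

# p(l) < e^{pi sqrt(2 l / 3)} = c1^sqrt(l) with c1 = e^{pi sqrt(2/3)} ~ 13.002
c1 = exp(pi * sqrt(2.0 / 3.0))
assert all(partition_count(l) < c1 ** sqrt(l) for l in range(1, 80))

# C(n - l, l) >= ((n - l) / l)^l whenever l <= n - l
n = 60
assert all(comb(n - l, l) >= ((n - l) / l) ** l for l in range(1, n // 3 + 1))
```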

To deal with \(\Sigma ''_{1}\) we make the following claim.

Claim

There is an \(n_{00}>0\) such that when \(n\ge n_{00}\) and \(\ell \le \frac{n}{3}\),

$$\begin{aligned} \left( {\begin{array}{c}n-\ell \\ \ell \end{array}}\right) \ge \left( \frac{2n}{3}\right) ^{\sqrt{\ell }}. \end{aligned}$$
(4.6)

Proof of claim

Observe that when \(\ell \le \frac{n}{3}\)

$$\begin{aligned} \left( {\begin{array}{c}n-\ell \\ \ell \end{array}}\right)&\ge \frac{(n-\ell )^{\ell }}{\ell ^{\ell }}=(n-\ell )^{\sqrt{\ell }}\left( n-\ell \right) ^{\ell -\sqrt{\ell }}\ell ^{-\ell }\\&\ge \left( \frac{2n}{3}\right) ^{\sqrt{\ell }}\left( 2\ell \right) ^{\ell -\sqrt{\ell }}\ell ^{-\ell }=\left( \frac{2n}{3}\right) ^{\sqrt{\ell }}\left( \frac{2^{\sqrt{\ell }-1}}{\ell }\right) ^{\sqrt{\ell }}. \end{aligned}$$

We have \(2^{\sqrt{\ell }-1}\ge \ell \) when \(\ell \ge 49\), which proves the claim in this case. On the other hand, it is easy to see that there is an \(n_{00}>0\) such that (4.6) holds when \(n\ge n_{00}\) and \(1\le \ell <49\). This proves the claim. \(\square \)
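The two numerical facts used in this proof are quickly machine-checked; the sample value \(n=300\) below is an arbitrary stand-in for a large n, not a claim about \(n_{00}\) (sketch):

```python
from math import comb, sqrt

# 2^(sqrt(l) - 1) >= l holds for all l >= 49 ...
assert all(2 ** (sqrt(l) - 1) >= l for l in range(49, 2000))
# ... but fails at smaller values such as l = 36
assert 2 ** (sqrt(36) - 1) < 36

# inequality (4.6) itself, sampled at n = 300 for all l <= n / 3
n = 300
assert all(comb(n - l, l) >= (2 * n / 3) ** sqrt(l) for l in range(1, n // 3 + 1))
```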

The claim gives

$$\begin{aligned} \Sigma ''_{1}\le \sum _{\ell =b^{2}+1}^{\lfloor \frac{n}{3}\rfloor }\left( \frac{c_{2}}{n^{s}}\right) ^{\sqrt{\ell }} \end{aligned}$$

for some \(c_{2}=c_{2}(s)>1\) when \(n\ge n_{00}\). Let \(n_{0}=n_{0}(s)\ge n_{00}\) be such that when \(n\ge n_{0}\), \(\frac{c_{2}}{n^{s}}<e^{-1}\). Let \(q=q\left( n\right) {\mathop {=}\limits ^{\mathrm {def}}}\frac{c_{2}}{n^{s}}.\) Then when \(n\ge n_{0}\), \(\log (q)\le -1\) and

$$\begin{aligned} \Sigma ''_{1}\le \int _{b^{2}}^{\infty }q^{\sqrt{x}}dx=\frac{2q^{b}}{\log q}\left( \frac{1}{\log q}-b\right) . \end{aligned}$$

We obtain

$$\begin{aligned} \Sigma ''_{1}\le 2(b+1)q^{b}\le \frac{2(b+1)c_{2}^{~b}}{n^{sb}}. \end{aligned}$$
(4.7)

Together with (4.5) this yields:

$$\begin{aligned} \Sigma _{1}\ll _{s}c_{1}^{~b}\left( \frac{b^{2}}{n-b^{2}}\right) ^{sb}+\frac{2(b+1)c_{2}^{~b}}{n^{sb}}\ll _{s}\left( \frac{\kappa b^{2s}}{\left( n-b^{2}\right) ^{s}}\right) ^{b} \end{aligned}$$
(4.8)

with \(\kappa =\kappa \left( s\right) =\max \left( c_{1},c_{2}\right) \).
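As a sanity check on the integral evaluation used above for \(\Sigma ''_{1}\) (substituting \(u=\sqrt{x}\) and integrating by parts gives \(\int _{b^{2}}^{\infty }q^{\sqrt{x}}dx=\frac{2q^{b}}{\log q}\left( \frac{1}{\log q}-b\right) \) for \(0<q<e^{-1}\)), one can compare against numerical quadrature (a sketch; the cutoff and tolerance are ours):

```python
from math import log

def closed_form(q, b):
    """The stated value (2 q^b / log q)(1/log q - b) of the integral, for 0 < q < 1/e."""
    return 2 * q ** b / log(q) * (1 / log(q) - b)

def numeric_integral(q, b, upper=400.0, steps=200_000):
    """Trapezoid rule for the integral of q^sqrt(x) over [b^2, upper];
    the tail beyond upper is negligible for these parameters."""
    a = b * b
    h = (upper - a) / steps
    total = 0.5 * (q ** (a ** 0.5) + q ** (upper ** 0.5))
    for i in range(1, steps):
        total += q ** ((a + i * h) ** 0.5)
    return total * h

assert abs(numeric_integral(0.1, 2) - closed_form(0.1, 2)) < 1e-4
```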

Bound for \(\Sigma _{2}\). If \(\lambda \in \Lambda _{0}(n,b)-\Lambda _{1}(n,b)\) then \({\check{\lambda }}_{1}\le \lambda _{1}<\frac{2n}{3}\) and [LS04, Prop. 2.4] gives the existence of an absolute \(c_{0}>1\) such that

$$\begin{aligned} d_{\lambda }\ge c_{0}^{~n}. \end{aligned}$$

Thus for large enough n and \(b^{2}\le \frac{n}{3}\)

$$\begin{aligned} \Sigma _{2}\le \sum _{\lambda \in \Lambda _{0}(n,b)-\Lambda _{1}(n,b)}c_{0}^{-ns}\le p(n)c_{0}^{-ns}\le c_{1}^{\sqrt{n}}c_{0}^{-ns}\ll _{s}n^{-bs}. \end{aligned}$$
(4.9)

Putting (4.8) and (4.9) together proves the proposition. \(\square \)

5 Estimates for the Probabilities of Tiled Surfaces

Before reading this Section 5, we recommend that the reader first read Section 1.3 for context and motivation.

5.1 Prior results.

The aim of this Section 5.1 is to recall a known formula (Theorem 5.1) for the quantities \({\mathbb {E}}_{n}^{\mathrm {emb}}(Y)\) that are essential to this paper, and to give some known first estimates for the quantities appearing in it (Lemma 5.2). To better understand their source and logic, the reader is advised to consult [MP20, Section 5].

We continue to assume \(g=2\). Throughout this entire Section 5, Y is a fixed compact tiled surface. We let \({\mathfrak {v}}={\mathfrak {v}}(Y)\), \({\mathfrak {e}}={\mathfrak {e}}(Y)\) and \({\mathfrak {f}}={\mathfrak {f}}(Y)\) denote the number of vertices, edges, and octagons of Y, respectively. Throughout this section, f stands for one of the letters a, b, c, d, and for each \(f\in \{a,b,c,d\}\) we let \({\mathfrak {e}}_{f}\) denote the number of f-labeled edges of Y.

In [MP20, Section 5.3] we constructed permutations

$$\begin{aligned} \sigma _{f}^{+},\sigma _{f}^{-},\tau _{f}^{+},\tau _{f}^{-}\in S'_{{\mathfrak {v}}}\subset S_{n} \end{aligned}$$

for each \(f\in \{a,b,c,d\}\) satisfying five properties, named P1, P2, P3, P4, and P5, that are essential to the development of the theory but not illuminating to state precisely here. We henceforth view these permutations as fixed, given Y.

Recall from Section 4.1 that for YDs \(\mu \subset \lambda \) we say \(\mu \subset _{k}\lambda \) if \(\lambda \) has k more boxes than \(\mu \). Also recall from Section 4.1 that \(\sqcup \) denotes concatenation of Young tableaux, and for a SYD \(\lambda /\nu \), if T is a (standard) tableau of shape \(\lambda /\nu \), \(w_{T}\) denotes a Gelfand-Tsetlin basis vector in \(V^{\lambda /\nu }\) associated to T. In the same situation, we write \(\langle \bullet ,\bullet \rangle \) for the inner product in the unitary representation \(V^{\lambda /\nu }\). In the prequel paper [MP20, Thm. 5.10] the following theorem was proved.

Theorem 5.1

For \(n\ge {\mathfrak {v}}\) we have

$$\begin{aligned} {\mathbb {E}}_{n}^{\mathrm {emb}}(Y)=\frac{\left( n!\right) ^{3}}{\left| {\mathbb {X}}_{n}\right| }\cdot \frac{\left( n\right) _{{\mathfrak {v}}}\left( n\right) _{{\mathfrak {f}}}}{\prod _{f}\left( n\right) _{{\mathfrak {e}}_{f}}}\cdot \Xi _{n}(Y) \end{aligned}$$
(5.1)

where

$$\begin{aligned}&\Xi _{n}(Y){\mathop {=}\limits ^{\mathrm {def}}}\sum _{\begin{array}{c} \lambda ,\nu :\\ \nu \subset _{{\mathfrak {v}}-{\mathfrak {f}}}\lambda \vdash n-{\mathfrak {f}} \end{array} }d_{\lambda }d_{\nu }\sum _{\begin{array}{c} \mu _{a},\mu _{b},\mu _{c},\mu _{d}\\ \forall f,\,\nu \subset \mu _{f}\subset _{{\mathfrak {e}}_{f}-{\mathfrak {f}}}\lambda \end{array} }\frac{1}{d_{\mu _{a}}d_{\mu _{b}}d_{\mu _{c}}d_{\mu _{d}}}\Upsilon _{n}\left( \left\{ \sigma _{f}^{\pm },\tau _{f}^{\pm }\right\} ,\nu ,\left\{ \mu _{f}\right\} ,\lambda \right) , \nonumber \\ \end{aligned}$$
(5.2)
$$\begin{aligned}&\Upsilon _{n}\left( \left\{ \sigma _{f}^{\pm },\tau _{f}^{\pm }\right\} ,\nu ,\left\{ \mu _{f}\right\} ,\lambda \right) {\mathop {=}\limits ^{\mathrm {def}}}\sum _{\begin{aligned}r_{f}^{+},r_{f}^{-}\in \mathrm {Tab}\left( \mu _{f}/\nu \right) \\ s_{f},t_{f}\in \mathrm {Tab}\left( \lambda /\mu _{f}\right) \end{aligned} }{\mathcal {M}}\left( \left\{ \sigma _{f}^{\pm },\tau _{f}^{\pm },r_{f}^{\pm },s_{f},t_{f}\right\} \right) \nonumber \\ \end{aligned}$$
(5.3)

and \({\mathcal {M}}(\{\sigma _{f}^{\pm },\tau _{f}^{\pm },r_{f}^{\pm },s_{f},t_{f}\})\) is the following product of matrix coefficients:

$$\begin{aligned} {\mathcal {M}}\left( \left\{ \sigma _{f}^{\pm },\tau _{f}^{\pm },r_{f}^{\pm },s_{f},t_{f}\right\} \right)&{\mathop {=}\limits ^{\mathrm {def}}}&\left\langle \sigma _{b}^{-}\left( \sigma _{a}^{+}\right) ^{-1}w_{r_{a}^{+}\sqcup s_{a}},w_{r_{b}^{-}\sqcup s{}_{b}}\right\rangle \left\langle \tau _{a}^{+}\left( \sigma _{b}^{+}\right) ^{-1}w_{r_{b}^{+}\sqcup s_{b}},w_{r_{a}^{+}\sqcup t_{a}}\right\rangle \nonumber \\&\cdot&\left\langle \tau _{b}^{+}\left( \tau _{a}^{-}\right) ^{-1}w_{r_{a}^{-}\sqcup t{}_{a}},w_{r_{b}^{+}\sqcup t_{b}}\right\rangle \left\langle \sigma _{c}^{-}\left( \tau _{b}^{-}\right) ^{-1}w_{r_{b}^{-}\sqcup t{}_{b}},w_{r_{c}^{-}\sqcup s{}_{c}}\right\rangle \nonumber \\&\cdot&\left\langle \sigma _{d}^{-}\left( \sigma _{c}^{+}\right) ^{-1}w_{r_{c}^{+}\sqcup s_{c}},w_{r_{d}^{-}\sqcup s{}_{d}}\right\rangle \left\langle \tau _{c}^{+}\left( \sigma _{d}^{+}\right) ^{-1}w_{r_{d}^{+}\sqcup s_{d}},w_{r_{c}^{+}\sqcup t_{c}}\right\rangle \nonumber \\&\cdot&\left\langle \tau _{d}^{+}\left( \tau _{c}^{-}\right) ^{-1}w_{r_{c}^{-}\sqcup t{}_{c}},w_{r_{d}^{+}\sqcup t_{d}}\right\rangle \left\langle \sigma _{a}^{-}\left( \tau _{d}^{-}\right) ^{-1}w_{r_{d}^{-}\sqcup t{}_{d}},w_{r_{a}^{-}\sqcup s{}_{a}}\right\rangle .\nonumber \\ \end{aligned}$$
(5.4)

Note that \(\frac{\left( n!\right) ^{3}}{\left| {\mathbb {X}}_{n}\right| }{\mathop {\rightarrow }\limits ^{n\rightarrow \infty }}\frac{1}{2}\) by (4.3) and Theorem 4.4, and that \(\frac{\left( n\right) _{{\mathfrak {v}}}\left( n\right) _{{\mathfrak {f}}}}{\prod _{f}\left( n\right) _{{\mathfrak {e}}_{f}}}=n^{\chi \left( Y\right) }\left( 1+O\left( n^{-1}\right) \right) \), so the more mysterious term in (5.1) is \(\Xi _{n}\left( Y\right) \). In light of Theorem 5.1, we will repeatedly discuss \(\nu ,\{\mu _{f}\},\lambda \) satisfying

$$\begin{aligned} \nu \subset _{{\mathfrak {v}}-{\mathfrak {e}}_{f}}\mu _{f}\subset _{{\mathfrak {e}}_{f}-{\mathfrak {f}}}\lambda \vdash n-{\mathfrak {f}}\,\,\,\quad \forall f\in \{a,b,c,d\} \end{aligned}$$
(5.5)

and \(\{r_{f}^{\pm },s_{f},t_{f}\}\) satisfying

$$\begin{aligned} r_{f}^{+},r_{f}^{-}\in \mathrm {Tab}(\mu _{f}/\nu ),\quad s_{f},t_{f}\in \mathrm {Tab}(\lambda /\mu _{f})\,\,\,\quad \forall f\in \{a,b,c,d\}. \end{aligned}$$
(5.6)
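For orientation, it is easy to confirm numerically that \(\frac{(n)_{{\mathfrak {v}}}(n)_{{\mathfrak {f}}}}{\prod _{f}(n)_{{\mathfrak {e}}_{f}}}=n^{\chi (Y)}(1+O(n^{-1}))\), where \(\chi (Y)={\mathfrak {v}}-{\mathfrak {e}}+{\mathfrak {f}}\) and \({\mathfrak {e}}=\sum _{f}{\mathfrak {e}}_{f}\). The parameter values below are an arbitrary illustration, not taken from a specific tiled surface (sketch):

```python
def falling(n, k):
    """The falling factorial (n)_k = n (n - 1) ... (n - k + 1)."""
    out = 1
    for i in range(k):
        out *= n - i
    return out

v, f_count, e = 6, 1, (3, 3, 3, 3)   # illustrative values of v, f and (e_a, e_b, e_c, e_d)
chi = v - sum(e) + f_count           # Euler characteristic, here -5

for n in (10 ** 3, 10 ** 4, 10 ** 5):
    ratio = falling(n, v) * falling(n, f_count)
    for ef in e:
        ratio /= falling(n, ef)
    # the relative error of ratio against n^chi shrinks like 1/n
    assert abs(ratio / n ** chi - 1) < 20 / n
```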

To give good estimates for \(\Xi _{n}(Y)\), we need an effective bound for the quantities \({\mathcal {M}}(\{\sigma _{f}^{\pm },\tau _{f}^{\pm },r_{f}^{\pm },s_{f},t_{f}\})\) that was obtained in [MP20]. Before giving this bound, we recall some notation. For \(T\in \mathrm {Tab}(\lambda /\nu )\), we write \(\mathrm {top}(T)\) for the set of elements in the top row of T (the row of length \(\lambda _{1}-\nu _{1}\), which may be empty). For any two sets \(A,B\subset [n]\), we define \(d(A,B)=|A\backslash B|\). Given \(\{r_{f}^{\pm },s_{f},t_{f}\}\) as in (5.6), we define

$$\begin{aligned}&D_{\mathrm {top}}\left( \left\{ \sigma _{f}^{\pm },\tau _{f}^{\pm },r_{f}^{\pm },s_{f},t_{f}\right\} \right) \nonumber \\&\quad {\mathop {=}\limits ^{\mathrm {def}}}d\left( \sigma _{b}^{-}\left( \sigma _{a}^{+}\right) ^{-1}\mathrm {top}(r_{a}^{+}\sqcup s_{a}),\mathrm {top}(r_{b}^{-}\sqcup s{}_{b})\right) +d\left( \tau _{a}^{+}\left( \sigma _{b}^{+}\right) ^{-1}\mathrm {top}(r_{b}^{+}\sqcup s_{b}),\mathrm {top}(r_{a}^{+}\sqcup t_{a})\right) \nonumber \\&\qquad + d\left( \tau _{b}^{+}\left( \tau _{a}^{-}\right) ^{-1}\mathrm {top}(r_{a}^{-}\sqcup t{}_{a}),\mathrm {top}(r_{b}^{+}\sqcup t_{b})\right) +d\left( \sigma _{c}^{-}\left( \tau _{b}^{-}\right) ^{-1}\mathrm {top}(r_{b}^{-}\sqcup t{}_{b}),\mathrm {top}(r_{c}^{-}\sqcup s{}_{c})\right) \nonumber \\&\qquad + d\left( \sigma _{d}^{-}\left( \sigma _{c}^{+}\right) ^{-1}\mathrm {top}(r_{c}^{+}\sqcup s_{c}),\mathrm {top}(r_{d}^{-}\sqcup s{}_{d})\right) +d\left( \tau _{c}^{+}\left( \sigma _{d}^{+}\right) ^{-1}\mathrm {top}(r_{d}^{+}\sqcup s_{d}),\mathrm {top}(r_{c}^{+}\sqcup t_{c})\right) \nonumber \\&\qquad + d\left( \tau _{d}^{+}\left( \tau _{c}^{-}\right) ^{-1}\mathrm {top}(r_{c}^{-}\sqcup t{}_{c}),\mathrm {top}(r_{d}^{+}\sqcup t_{d})\right) +d\left( \sigma _{a}^{-}\left( \tau _{d}^{-}\right) ^{-1}\mathrm {top}(r_{d}^{-}\sqcup t{}_{d}),\mathrm {top}(r_{a}^{-}\sqcup s{}_{a})\right) . \end{aligned}$$
(5.7)

Lemma 5.2

[MP20, Lem. 5.14]. Let \(\nu ,\{\mu _{f}\},\lambda \) be as in (5.5) and \(\{r_{f}^{\pm },s_{f},t_{f}\}\) be as in (5.6). If \(\lambda _{1}+\nu _{1}>n-{\mathfrak {f}}+({\mathfrak {v}}-{\mathfrak {f}})^{2}\), then

$$\begin{aligned} \left| {\mathcal {M}}\left( \left\{ \sigma _{f}^{\pm },\tau _{f}^{\pm },r_{f}^{\pm },s_{f},t_{f}\right\} \right) \right| \le \left( \frac{({\mathfrak {v}}-{\mathfrak {f}})^{2}}{\lambda {}_{1}+\nu _{1}-(n-{\mathfrak {f}})}\right) ^{D_{\mathrm {top}}\left( \left\{ \sigma _{f}^{\pm },\tau _{f}^{\pm },r_{f}^{\pm },s_{f},t_{f}\right\} \right) }. \end{aligned}$$

The condition \(\lambda _{1}+\nu _{1}>n-{\mathfrak {f}}+({\mathfrak {v}}-{\mathfrak {f}})^{2}\) corresponds to the bound given by Lemma 5.2 being non-trivial, and we will be applying Lemma 5.2 when both \(\lambda \) and \(\nu \) have \(O(n^{1/4})\) boxes outside their first rows and \({\mathfrak {v}},{\mathfrak {f}}\ll n^{1/4}\). In particular, \(\lambda _{1}+\nu _{1}\) is of order 2n, while \({\mathfrak {f}}\) and \(\left( {\mathfrak {v}}-{\mathfrak {f}}\right) ^{2}\) are of much smaller order. Hence the condition will be met for sufficiently large n (Fig. 5).

Recall from Section 4.2 that \(b_{\nu }\) is the number of boxes of a Young diagram \(\nu \) outside the first row, and \({\check{b}}_{\nu }\) is the number of boxes outside the first column. We have the following trivial upper bound for \(D_{\mathrm {top}}(\{\sigma _{f}^{\pm },\tau _{f}^{\pm },r_{f}^{\pm },s_{f},t_{f}\})\):

$$\begin{aligned} D_{\mathrm {top}}\left( \left\{ \sigma _{f}^{\pm },\tau _{f}^{\pm },r_{f}^{\pm },s_{f},t_{f}\right\} \right)\le & {} 8\left( b_{\lambda }-b_{\nu }\right) . \end{aligned}$$
(5.8)

We recall the following estimate obtained in [MP20, Prop. 5.22].

Proposition 5.3

Let \(\varepsilon \ge 0\). Suppose that \(\nu ,\{\mu _{f}\},\lambda \) are as in (5.5) and \(\{r_{f}^{\pm },s_{f},t_{f}\}\) are as in (5.6). If Y is \(\varepsilon \)-adapted then

$$\begin{aligned} D_{\mathrm {top}}\left( \left\{ \sigma _{f}^{\pm },\tau _{f}^{\pm },r_{f}^{\pm },s_{f},t_{f}\right\} \right)&\ge b_{\lambda }+3b_{\nu }-b_{\mu _{a}}-b_{\mu _{b}}-b_{\mu _{c}}-b_{\mu _{d}}+\varepsilon b_{\lambda /\nu }. \end{aligned}$$
(5.9)
Fig. 5 An example of possible \(\nu ,\mu _{a},\lambda \) appearing in Theorem 5.1 (supposing e.g. \(n=10,{\mathfrak {v}}=6,{\mathfrak {e}}_{a}=3,{\mathfrak {f}}=1\)).

5.2 Partitioning \(\Xi _{n}\) and preliminary estimates.

In this Section 5.2 we show how the condition that Y is \(\varepsilon \)-adapted leads to bounds on \(\Xi _{n}\). We continue to view Y as fixed and hence suppress dependence of quantities on Y. We write \({\mathfrak {D}}={\mathfrak {D}}\left( Y\right) {\mathop {=}\limits ^{\mathrm {def}}}{\mathfrak {v}}-{\mathfrak {f}}\). Note that \({\mathfrak {D}}\ge 0\) by (3.2), with equality if and only if Y has no boundary. By Lemma 3.6, \({\mathfrak {D}}\le {\mathfrak {d}}\left( Y\right) \). So \({\mathfrak {D}}\) is another measure of the size of the boundary of Y, and it plays an important role in some of our bounds below. We will use the notation \(\Xi _{n}^{P(\nu )}\) where P is a proposition concerning \(\nu \) to mean

$$\begin{aligned} \Xi _{n}^{P(\nu )}{\mathop {=}\limits ^{\mathrm {def}}}\sum _{\begin{array}{c} \nu \subset _{{\mathfrak {v}}-{\mathfrak {f}}}\lambda \vdash \,n-{\mathfrak {f}}\\ P(\nu )\text { holds true} \end{array} }d_{\lambda }d_{\nu }\sum _{\nu \subset \mu _{f}\subset _{{\mathfrak {e}}_{f}-{\mathfrak {f}}}\lambda }\frac{1}{d_{\mu _{a}}d_{\mu _{b}}d_{\mu _{c}}d_{\mu _{d}}}\Upsilon _{n}\left( \left\{ \sigma _{f}^{\pm },\tau _{f}^{\pm }\right\} ,\nu ,\left\{ \mu _{f}\right\} ,\lambda \right) . \end{aligned}$$

We will continue to use this notation, for various propositions P, throughout the rest of the paper. We want to give bounds for various \(\Xi _{n}^{P(\nu )}\) under the condition that Y is either boundary reduced (namely, 0-adapted) or, moreover, \(\varepsilon \)-adapted for some \(\varepsilon >0\). We will always assume \({\mathfrak {v}}\le n^{1/4}\) and so also \({\mathfrak {D}}={\mathfrak {v}}-{\mathfrak {f}}\le n^{1/4}\). Since all but one box of \(\nu \vdash n-{\mathfrak {v}}\) lies either outside the first row or outside the first column, one has the simple inequality \(b_{\nu }+{\check{b}}_{\nu }+1\ge n-{\mathfrak {v}}\); as \({\mathfrak {v}},{\mathfrak {D}}\le n^{1/4}\), it follows that for \(n\gg 1\) the bounds \(b_{\nu }\le {\mathfrak {D}}\) and \({\check{b}}_{\nu }\le {\mathfrak {D}}\) cannot hold simultaneously.
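The two box counts \(b_{\nu }\), \({\check{b}}_{\nu }\) and the inequality \(b_{\nu }+{\check{b}}_{\nu }+1\ge |\nu |\) can be checked computationally. A minimal sketch, not part of the argument, representing a Young diagram as a non-increasing list of row lengths:

```python
# Every box except the (1,1) corner lies outside the first row or outside the
# first column, so b + b_check + 1 >= |nu|.

def b(nu):
    """Boxes outside the first row."""
    return sum(nu[1:])

def b_check(nu):
    """Boxes outside the first column (first row of the transposed diagram)."""
    return sum(r - 1 for r in nu)

nu = [5, 3, 1]          # a diagram of size 9
assert b(nu) == 4
assert b_check(nu) == 6
assert b(nu) + b_check(nu) + 1 >= sum(nu)
```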

Then for \(n\gg 1\) we have

$$\begin{aligned} \Xi _{n}=\Xi _{n}^{\nu =(n-{\mathfrak {v}})}+\Xi _{n}^{\nu =(1)^{n-{\mathfrak {v}}}}+\Xi _{n}^{0<b_{\nu }\le {\mathfrak {D}};{\check{b}}_{\nu }>0}+\Xi _{n}^{0<{\check{b}}_{\nu }\le {\mathfrak {D}};b_{\nu }>0}+\Xi _{n}^{b_{\nu },{\check{b}}_{\nu }>{\mathfrak {D}}}. \end{aligned}$$

Moreover by [MP20, Lem. 5.9] we have

$$\begin{aligned} \Xi _{n}^{\nu =(n-{\mathfrak {v}})}=\Xi _{n}^{\nu =(1)^{n-{\mathfrak {v}}}},\quad \Xi _{n}^{0<b_{\nu }\le {\mathfrak {D}};{\check{b}}_{\nu }>0}=\Xi _{n}^{0<{\check{b}}_{\nu }\le {\mathfrak {D}};b_{\nu }>0} \end{aligned}$$

hence

$$\begin{aligned} \Xi _{n}=2\Xi _{n}^{\nu =(n-{\mathfrak {v}})}+2\Xi _{n}^{0<b_{\nu }\le {\mathfrak {D}};{\check{b}}_{\nu }>0}+\Xi _{n}^{b_{\nu },{\check{b}}_{\nu }>{\mathfrak {D}}}. \end{aligned}$$
(5.10)

This is according to three regimes for \(b_{\nu }\) and \({\check{b}}_{\nu }\):

  • The zero regime: when \(b_{\nu }\) or \({\check{b}}_{\nu }\) equals 0. The contribution from here is \(2\Xi _{n}^{\nu =(n-{\mathfrak {v}})}\).

  • The intermediate regime: when \(b_{\nu },{\check{b}}_{\nu }>0\) but one of them is at most \({\mathfrak {D}}\). The contribution from this regime is \(2\Xi _{n}^{0<b_{\nu }\le {\mathfrak {D}};{\check{b}}_{\nu }>0}\).

  • The large regime: when both \(b_{\nu },{\check{b}}_{\nu }>{\mathfrak {D}}\). The contribution from this regime is \(\Xi _{n}^{b_{\nu },{\check{b}}_{\nu }>{\mathfrak {D}}}\).
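The three regimes can be illustrated by a small classifier. A hypothetical helper, not taken from [MP20], with diagrams again given as non-increasing row-length lists:

```python
# Classify a diagram nu into the zero / intermediate / large regime of (5.10),
# given D = v - f.

def regime(nu, D):
    b = sum(nu[1:])                 # boxes outside the first row
    bc = sum(r - 1 for r in nu)     # boxes outside the first column
    if b == 0 or bc == 0:
        return "zero"
    if b <= D or bc <= D:
        return "intermediate"
    return "large"

assert regime([7], 2) == "zero"            # nu = (n - v), a single row
assert regime([6, 1], 2) == "intermediate" # b = 1 <= D, bc = 5 > 0
assert regime([4, 3, 2], 2) == "large"     # b = 5 > D and bc = 6 > D
```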

The strategy for bounding these different contributions is to further partition the tuples \((\nu ,\left\{ \mu _{f}\right\} ,\lambda )\) according to the data \(b_{\lambda },\{b_{\mu _{f}}\},b_{\nu },{\check{b}}_{\lambda },\{{\check{b}}_{\mu _{f}}\},{\check{b}}_{\nu }\).

Definition 5.4

For \({\overline{B}}=\left( B_{\lambda },\{B_{\mu _{f}}\},B_{\nu },{\check{B}}_{\lambda },\{{\check{B}}_{\mu _{f}}\},{\check{B}}_{\nu }\right) \) we write

$$\begin{aligned} \left( \nu ,\left\{ \mu _{f}\right\} ,\lambda \right) \vdash {\overline{B}} \end{aligned}$$

if (5.5) holds, and \(\nu \), \(\left\{ \mu _{f}\right\} \) and \(\lambda \) have the prescribed number of boxes outside the first row and outside the first column, namely,

$$\begin{aligned} b_{\lambda }=B_{\lambda },{\check{b}}_{\lambda }={\check{B}}_{\lambda },\,b_{\nu } =B_{\nu },{\check{b}}_{\nu }={\check{B}}_{\nu }~~~~\mathrm {and}~~~~\forall f\in \left\{ a,b,c,d\right\} ~~b_{\mu _{f}}=B_{\mu _{f}},{\check{b}}_{\mu _{f}}={\check{B}}_{\mu _{f}}. \end{aligned}$$

We denote by \({\mathcal {B}}_{n}\left( Y\right) \) the collection of tuples \({\overline{B}}\) which admit at least one tuple of YDs \((\nu ,\left\{ \mu _{f}\right\} ,\lambda )\). Finally, we let

$$\begin{aligned} \Xi _{n}^{{\overline{B}}}=\Xi _{n}^{{\overline{B}}}\left( Y\right) {\mathop {=}\limits ^{\mathrm {def}}}&\sum _{\left( \nu ,\{\mu _{f}\},\lambda \right) \vdash {\overline{B}}} \frac{d_{\lambda }d_{\nu }}{d_{\mu _{a}}d_{\mu _{b}}d_{\mu _{c}}d_{\mu _{d}}} \Upsilon _{n}\left( \left\{ \sigma _{f}^{\pm },\tau _{f}^{\pm }\right\} ,\nu ,\left\{ \mu _{f}\right\} ,\lambda \right) . \end{aligned}$$
(5.11)

Note that \(\Xi _{n}\left( Y\right) =\sum _{{\overline{B}}\in {\mathcal {B}}_{n}\left( Y\right) }\Xi _{n}^{{\overline{B}}}\). Also, note that \({\overline{B}}\in {\mathcal {B}}_{n}\left( Y\right) \) imposes restrictions on the possible values of \(B_{\lambda },\{B_{\mu _{f}}\},B_{\nu },{\check{B}}_{\lambda },\{{\check{B}}_{\mu _{f}}\},{\check{B}}_{\nu }\). For example, for every \(f\in \{a,b,c,d\}\), \(0\le B_{\mu _{f}}-B_{\nu }\le {\mathfrak {v}}-{\mathfrak {e}}_{f}\) and \(0\le B_{\lambda }-B_{\mu _{f}}\le {\mathfrak {e}}_{f}-{\mathfrak {f}}\), and likewise for the \({\check{B}}\)'s. In addition, \(B_{\nu }+{\check{B}}_{\nu }+1\ge n-{\mathfrak {v}}\), and so on.

We first give a general estimate for the quotient of dimensions in the summands in (5.11).

Lemma 5.5

Suppose that \({\mathfrak {v}}\le n^{1/4}\) and that \((\nu ,\{\mu _{f}\},\lambda )\) satisfy (5.5). If \(b_{\nu }\le {\mathfrak {D}}\) then

$$\begin{aligned} \frac{d_{\lambda }d_{\nu }}{d_{\mu _{a}}d_{\mu _{b}}d_{\mu _{c}}d_{\mu _{d}}} \ll \frac{1}{d_{\nu }^{~2}}b_{\lambda }^{5b_{\lambda }}n^{\left( b_{\lambda }+3b_{\nu }- \sum _{f}b_{\mu _{f}}\right) }. \end{aligned}$$
(5.12)

Proof

By Lemma 4.2,

$$\begin{aligned} \frac{d_{\nu }}{d_{\mu _{f}}}\le \frac{b_{\mu _{f}}^{~b_{\mu _{f}}} \left( n-{\mathfrak {v}}\right) ^{b_{\nu }}}{(n-{\mathfrak {e}}_{f}-b_{\mu _{f}})^{b_{\mu _{f}}}} \le \frac{b_{\lambda }^{~b_{\lambda }}n^{b_{\nu }}}{\left( n-2n^{1/4}\right) ^{b_{\mu _{f}}}}, \end{aligned}$$

where the second inequality uses the fact that \({\mathfrak {e}}_{f}+b_{\mu _{f}}\le {\mathfrak {e}}_{f}+\left( b_{\nu }+{\mathfrak {v}}-{\mathfrak {e}}_{f}\right) =b_{\nu }+{\mathfrak {v}}\le 2n^{1/4}\). The hypotheses of Lemma 4.2 are met here since

$$\begin{aligned} 2b_{\mu _{f}}-|\nu |\le 2(b_{\nu }+{\mathfrak {v}}-{\mathfrak {e}}_{f})-(n-{\mathfrak {v}})\le 5{\mathfrak {v}}-n\le 5n^{\frac{1}{4}}-n\le 0 \end{aligned}$$

for \(n\gg 1\). Similarly, since \(2b_{\lambda }-|\nu |\le 2(b_{\nu }+{\mathfrak {D}})-(n-{\mathfrak {v}})\le 5n^{\frac{1}{4}}-n\le 0\) for \(n\gg 1\), Lemma 4.2 gives \(\frac{d_{\lambda }}{d_{\nu }}\le \frac{b_{\nu }^{b_{\nu }}(n-{\mathfrak {f}})^{b_{\lambda }}}{(n-{\mathfrak {v}}-b_{\nu })^{b_{\nu }}}\le \frac{b_{\lambda }^{~b_{\lambda }}n^{b_{\lambda }}}{\left( n-2n^{1/4}\right) ^{b_{\nu }}}.\) Altogether,

$$\begin{aligned} \frac{d_{\lambda }d_{\nu }^{~3}}{d_{\mu _{a}}d_{\mu _{b}}d_{\mu _{c}}d_{\mu _{d}}}\le & {} \frac{b_{\lambda }^{~5b_{\lambda }}n^{\left( b_{\lambda } +4b_{\nu }\right) }}{\left( n-2n^{1/4}\right) ^{b_{\nu } +\sum _{f}b_{\mu _{f}}}}=b_{\lambda }^{~5b_{\lambda }}n^{\left( b_{\lambda } +3b_{\nu }-\sum _{f}b_{\mu _{f}}\right) }\left( \frac{1}{1-2n^{-3/4}} \right) ^{b_{\nu }+\sum _{f}b_{\mu _{f}}}\\\le & {} b_{\lambda }^{~5b_{\lambda }}n^{\left( b_{\lambda }+3b_{\nu }-\sum _{f} b_{\mu _{f}}\right) }\cdot \left( \frac{1}{1-2n^{-3/4}}\right) ^{9n^{1/4}}. \end{aligned}$$

As \(\left( \frac{1}{1-2n^{-3/4}}\right) ^{9n^{1/4}}{\mathop {\rightarrow }\limits ^{n\rightarrow \infty }}1\), the right hand side of the last inequality is at most \(2b_{\lambda }^{~5b_{\lambda }}n^{\left( b_{\lambda }+3b_{\nu }-\sum _{f}b_{\mu _{f}}\right) }\) for large enough n. \(\square \)
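The convergence of the error factor used at the end of the proof can be checked numerically; a quick sketch, not part of the argument (its logarithm is of order \(n^{-1/2}\)):

```python
# The factor (1 - 2 n^{-3/4})^{-9 n^{1/4}} tends to 1 as n grows, since
# log of it is ~ 18 n^{-1/2} -> 0; it decreases monotonically towards 1.

def error_factor(n):
    return (1 - 2 * n ** (-0.75)) ** (-9 * n ** 0.25)

values = [error_factor(n) for n in (10**4, 10**6, 10**8)]
assert values[0] > values[1] > values[2] > 1
assert values[2] < 1.01
```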

We next give bounds for the individual \(\Xi _{n}^{{\overline{B}}}\).

Lemma 5.6

There is \(\kappa >1\) such that if Y is \(\varepsilon \)-adapted for \(\varepsilon \ge 0\), \({\mathfrak {v}}\le n^{1/4}\) and \(B_{\nu }\le {\mathfrak {D}}\), then

$$\begin{aligned} \left| \Xi _{n}^{{\overline{B}}}\right|&\ll B_{\lambda }^{~10B_{\lambda }}\left( {\mathfrak {D}}^{24}n^{-\varepsilon }\right) ^{B_{\lambda } -B_{\nu }}\left( \frac{\kappa {\mathfrak {D}}^{4}}{\left( n-{\mathfrak {v}}-{\mathfrak {D}}^{2}\right) ^{2}}\right) ^{B_{\nu }}. \end{aligned}$$

Proof

By assumption, \(B_{\nu }\le {\mathfrak {D}}\le {\mathfrak {v}}\le n^{\frac{1}{4}}\). So for every \(\left( \nu ,\{\mu _{f}\},\lambda \right) \vdash {\overline{B}}\),

$$\begin{aligned} \lambda _{1}+\nu _{1}-\left( n-{\mathfrak {f}}\right)= & {} \left( n-{\mathfrak {f}}-B_{\lambda }\right) +\left( n-{\mathfrak {v}}-B_{\nu }\right) -\left( n-{\mathfrak {f}}\right) \\\ge & {} n-{\mathfrak {v}}-B_{\nu }-\left( B_{\nu }+{\mathfrak {D}}\right) \ge n-4{\mathfrak {v}}, \end{aligned}$$

and Lemma 5.2 gives that whenever \(\left\{ r_{f}^{\pm },s_{f},t_{f}\right\} \) satisfy (5.6),

$$\begin{aligned} \left| {\mathcal {M}}\left( \left\{ \sigma _{f}^{\pm },\tau _{f}^{\pm },r_{f}^{\pm },s_{f},t_{f}\right\} \right) \right| \le \left( \frac{{\mathfrak {D}}^{2}}{n-4{\mathfrak {v}}}\right) ^{D_{\mathrm {top}}\left( \left\{ \sigma _{f}^{\pm },\tau _{f}^{\pm },r_{f}^{\pm },s_{f},t_{f}\right\} \right) }. \end{aligned}$$
(5.13)

Proposition 5.3 gives

$$\begin{aligned} B_{\lambda }+3B_{\nu }-\sum _{f}B_{\mu _{f}}\le D_{\mathrm {top}}\left( \left\{ \sigma _{f}^{\pm },\tau _{f}^{\pm },r_{f}^{\pm },s_{f},t_{f}\right\} \right) -\varepsilon \left( B_{\lambda }-B_{\nu }\right) , \end{aligned}$$

so by Lemma 5.5

$$\begin{aligned}&\frac{d_{\lambda }d_{\nu }^{3}}{d_{\mu _{a}}d_{\mu _{b}}d_{\mu _{c}}d_{\mu _{d}}} \left| {\mathcal {M}}\left( \left\{ \sigma _{f}^{\pm },\tau _{f}^{\pm },r_{f}^{\pm },s_{f},t_{f}\right\} \right) \right| \\&\ll B_{\lambda }^{~5B_{\lambda }}n^{-\varepsilon \left( B_{\lambda }-B_{\nu }\right) } \left( \frac{n{\mathfrak {D}}^{2}}{n-4{\mathfrak {v}}}\right) ^{D_{\mathrm {top}}\left( \left\{ \sigma _{f}^{\pm },\tau _{f}^{\pm },r_{f}^{\pm },s_{f},t_{f}\right\} \right) }. \end{aligned}$$

Now using the trivial upper bound \(D_{\mathrm {top}}\left( \left\{ \sigma _{f}^{\pm },\tau _{f}^{\pm },r_{f}^{\pm },s_{f},t_{f}\right\} \right) \le 8(B_{\lambda }-B_{\nu })\) in (5.8) and \(B_{\lambda }-B_{\nu }\le {\mathfrak {v}}-{\mathfrak {f}}\le {\mathfrak {v}}\le n^{\frac{1}{4}}\), we obtain that for large enough n,

$$\begin{aligned} \left( \frac{n{\mathfrak {D}}^{2}}{n-4{\mathfrak {v}}}\right) ^{D_{\mathrm {top}}\left( \left\{ \sigma _{f}^{\pm },\tau _{f}^{\pm },r_{f}^{\pm },s_{f},t_{f}\right\} \right) }\le {\mathfrak {D}}^{16\left( B_{\lambda }-B_{\nu }\right) } \left( \frac{1}{1-4n^{-3/4}}\right) ^{8n^{1/4}}\le 2{\mathfrak {D}}^{16(B_{\lambda }-B_{\nu })}. \end{aligned}$$

Therefore,

$$\begin{aligned} \frac{d_{\lambda }d_{\nu }^{3}}{d_{\mu _{a}}d_{\mu _{b}}d_{\mu _{c}} d_{\mu _{d}}}\left| {\mathcal {M}}\left( \left\{ \sigma _{f}^{\pm },\tau _{f}^{\pm },r_{f}^{\pm },s_{f},t_{f}\right\} \right) \right|&\ll B_{\lambda }^{~5B_{\lambda }}\left( {\mathfrak {D}}^{16}n^{-\varepsilon }\right) ^{B_{\lambda }-B_{\nu }}. \end{aligned}$$

From this we obtain

$$\begin{aligned} \left| \Xi _{n}^{{\overline{B}}}\right|\ll & {} B_{\lambda }^{~5B_{\lambda }}\left( {\mathfrak {D}}^{16}n^{-\varepsilon }\right) ^{B_{\lambda } -B_{\nu }}\sum _{\left( \nu ,\{\mu _{f}\},\lambda \right) \vdash {\overline{B}}}\frac{1}{d_{\nu }^{2}}\sum _{\begin{aligned}r_{f}^{+},r_{f}^{-}\in \mathrm {Tab}\left( \mu _{f}/\nu \right) \\ s_{f},t_{f}\in \mathrm {Tab}\left( \lambda /\mu _{f}\right) \end{aligned} }1\\\le & {} B_{\lambda }^{~5B_{\lambda }}\left( {\mathfrak {D}}^{24}n^{-\varepsilon }\right) ^{B_{\lambda }-B_{\nu }}\sum _{\left( \nu ,\{\mu _{f}\},\lambda \right) \vdash {\overline{B}}}\frac{1}{d_{\nu }^{2}} \end{aligned}$$

since, by Lemma 4.3, there are at most \(({\mathfrak {D}})_{(B_{\lambda }-B_{\nu })}\le {\mathfrak {D}}^{(B_{\lambda }-B_{\nu })}\) choices of \(r_{f}^{+}\sqcup s_{f}\), and likewise of \(r_{f}^{-}\sqcup t_{f}\), for each f. For fixed \(\nu \) above, there are at most \(B_{\lambda }^{5B_{\lambda }}\) choices of \(\{\mu _{f}\}\) and \(\lambda \) such that \((\nu ,\{\mu _{f}\},\lambda )\vdash {\overline{B}}\). For example, the boxes outside the first row of \(\lambda \) uniquely determine \(\lambda \) and form a YD of size \(B_{\lambda }\); there are at most \(B_{\lambda }!\le B_{\lambda }^{B_{\lambda }}\) of these. Hence

$$\begin{aligned} \left| \Xi _{n}^{{\overline{B}}}\right|&\ll B_{\lambda }^{~10B_{\lambda }}\left( {\mathfrak {D}}^{24}n^{-\varepsilon }\right) ^{B_{\lambda }-B_{\nu }}\sum _{\nu \vdash n-{\mathfrak {v}}:b_{\nu }=B_{\nu }}\frac{1}{d_{\nu }^{2}}. \end{aligned}$$

Note that above, we have \(\nu _{1}=n-{\mathfrak {v}}-B_{\nu }\ge n-2n^{\frac{1}{4}}\), so \({\check{b}}_{\nu }\ge n-2n^{\frac{1}{4}}-1\ge n^{\frac{1}{4}}\ge B_{\nu }\) for \(n\gg 1\), and in this case \(\nu \in \Lambda (n-{\mathfrak {v}},B_{\nu })\). Moreover, for \(n\gg 1\), \(B_{\nu }^{2}\le n^{\frac{1}{2}}\le \frac{n-n^{\frac{1}{4}}}{3}\le \frac{n-{\mathfrak {v}}}{3}\) and so we can finally apply Proposition 4.6 to obtain for the same \(\kappa =\kappa \left( 2\right) >1\) from Proposition 4.6 that

$$\begin{aligned} \left| \Xi _{n}^{{\overline{B}}}\right|\ll & {} B_{\lambda }^{10B_{\lambda }}\left( {\mathfrak {D}}^{24}n^{-\varepsilon }\right) ^{B_{\lambda }-B_{\nu }}\left( \frac{\kappa B_{\nu }^{~4}}{\left( n-{\mathfrak {v}}-B_{\nu }^{~2}\right) ^{2}}\right) ^{B_{\nu }}\\\le & {} B_{\lambda }^{~10B_{\lambda }}\left( {\mathfrak {D}}^{24}n^{-\varepsilon }\right) ^{B_{\lambda }-B_{\nu }}\left( \frac{\kappa {\mathfrak {D}}^{4}}{\left( n-{\mathfrak {v}}-{\mathfrak {D}}^{2}\right) ^{2}}\right) ^{B_{\nu }}. \end{aligned}$$

\(\square \)
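The counting step in the proof bounded the number of Young diagrams with \(B_{\lambda }\) boxes by \(B_{\lambda }!\). As an illustrative sanity check, not part of the argument, the true count is the partition function \(p(k)\), computed here by a standard recursion over the largest part:

```python
from functools import lru_cache
from math import factorial

@lru_cache(maxsize=None)
def p(k, largest=None):
    """Number of partitions of k into parts of size <= largest."""
    if largest is None:
        largest = k
    if k == 0:
        return 1
    if largest == 0:
        return 0
    # Sum over the size of the first (largest) part.
    return sum(p(k - part, part) for part in range(1, min(k, largest) + 1))

assert [p(k) for k in range(1, 7)] == [1, 2, 3, 5, 7, 11]
assert all(p(k) <= factorial(k) for k in range(1, 12))
```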

Since Lemma 5.6 is only useful for \(B_{\nu }\) or \({\check{B}}_{\nu }\) small compared to n we have to supplement it with the following weaker bound.

Lemma 5.7

If Y is any tiled surface and \({\overline{B}}\in {\mathcal {B}}_{n}\left( Y\right) \) then

$$\begin{aligned} \left| \Xi _{n}^{{\overline{B}}}\right|&\le \left( {\mathfrak {D}}!\right) ^{8}\sum _{\left( \nu ,\{\mu _{f}\},\lambda \right) \vdash {\overline{B}}}\frac{d_{\lambda }}{d_{\nu }^{3}}. \end{aligned}$$

Proof

Since \({\mathcal {M}}(\{\sigma _{f}^{\pm },\tau _{f}^{\pm },r_{f}^{\pm },s_{f},t_{f}\})\) is a product of matrix coefficients of unit vectors in unitary representations, we obtain \(|{\mathcal {M}}(\{\sigma _{f}^{\pm },\tau _{f}^{\pm },r_{f}^{\pm },s_{f},t_{f}\})|\le 1\). Therefore, with assumptions as in the lemma, and arguing similarly to the proof of Lemma 5.6, we obtain

$$\begin{aligned} \left| \Xi _{n}^{{\overline{B}}}\right|&\le \sum _{\left( \nu ,\{\mu _{f}\},\lambda \right) \vdash {\overline{B}}}\frac{d_{\lambda }d_{\nu }}{d_{\mu _{a}}d_{\mu _{b}}d_{\mu _{c}}d_{\mu _{d}}}\sum _{\begin{aligned}r_{f}^{+},r_{f}^{-}\in \mathrm {Tab}\left( \mu _{f}/\nu \right) \\ s_{f},t_{f}\in \mathrm {Tab}\left( \lambda /\mu _{f}\right) \end{aligned} }1\\&{\mathop {\le }\limits ^{\left( *\right) }}\left( {\mathfrak {D}}!\right) ^{8}\sum _{\left( \nu ,\{\mu _{f}\},\lambda \right) \vdash {\overline{B}}}\frac{d_{\lambda }d_{\nu }}{d_{\mu _{a}}d_{\mu _{b}}d_{\mu _{c}}d_{\mu _{d}}}\\&\le ({\mathfrak {D}}!)^{8}\sum _{\left( \nu ,\{\mu _{f}\},\lambda \right) \vdash {\overline{B}}}\frac{d_{\lambda }}{d_{\nu }^{3}}, \end{aligned}$$

where in \(\left( *\right) \) we used the fact that there are at most \(\left| \lambda /\nu \right| !=\left( {\mathfrak {v}}-{\mathfrak {f}}\right) !\) choices of \(r_{f}^{+}\sqcup s_{f}\) and of \(r_{f}^{-}\sqcup t_{f}\). \(\square \)

5.3 The zero regime of \(b_{\nu }\).

We only need analytic estimates for \(\Xi _{n}^{\nu =(n-{\mathfrak {v}})}\) when Y is boundary reduced (so 0-adapted); when Y is \(\varepsilon \)-adapted for \(\varepsilon >0\) we will take a different, more algebraic approach to \(\Xi _{n}^{\nu =(n-{\mathfrak {v}})}\) in Section 5.7.

Lemma 5.8

If Y is boundary reduced and \({\mathfrak {v}}\le n^{1/4}\) then

$$\begin{aligned} \left| \Xi _{n}^{\nu =(n-{\mathfrak {v}})}\right| \ll ({\mathfrak {D}}+1)^{9}{\mathfrak {D}}^{34{\mathfrak {D}}}. \end{aligned}$$

Proof

If \(\nu =(n-{\mathfrak {v}})\) then \(B_{\nu }=0\). Inserting the bounds from Lemma 5.6 with \(\varepsilon =0\) (since Y is boundary reduced, see Lemma 3.13) and \(B_{\nu }=0\) gives

$$\begin{aligned} \left| \Xi _{n}^{\nu =(n-{\mathfrak {v}})}\right| \ll \sum _{{\overline{B}}\in {\mathcal {B}}_{n}\left( Y\right) :B_{\nu }=0}B_{\lambda }^{~10B_{\lambda }}{\mathfrak {D}}^{24B_{\lambda }}. \end{aligned}$$

Because \({\overline{B}}\in {\mathcal {B}}_{n}\left( Y\right) \), there exists some \(\left( \nu ,\left\{ \mu _{f}\right\} ,\lambda \right) \vdash {\overline{B}}\) satisfying (5.5). Since \(\nu \subset _{{\mathfrak {v}}-{\mathfrak {f}}}\lambda \) and \(b_{\nu }=0\), we then have \(B_{\lambda }=b_{\lambda }\le b_{\nu }+{\mathfrak {v}}-{\mathfrak {f}}={\mathfrak {v}}-{\mathfrak {f}}={\mathfrak {D}}\). In \({\mathcal {B}}_{n}\left( Y\right) \), the set of \({\overline{B}}\)'s with \(B_{\nu }=0\) and a fixed value of \(B_{\lambda }\) has size at most \(({\mathfrak {D}}+1)^{9}\). Indeed, there are at most \(B_{\lambda }+1\le {\mathfrak {D}}+1\) options for \(B_{\mu _{f}}\) for each f. Since \(n-{\mathfrak {v}}-1={\check{B}}_{\nu }\le {\check{B}}_{\mu _{f}}\le {\check{B}}_{\lambda }\le n-{\mathfrak {f}}-1\), there are at most \({\mathfrak {v}}-{\mathfrak {f}}+1={\mathfrak {D}}+1\) possible values for each of \({\check{B}}_{\mu _{f}}\) and \({\check{B}}_{\lambda }\). In total, then, there are at most \(({\mathfrak {D}}+1)^{9}\) choices. Hence

$$\begin{aligned} \left| \Xi _{n}^{\nu =(n-{\mathfrak {v}})}\right| \ll ({\mathfrak {D}}+1)^{9}\sum _{B_{\lambda }=0}^{{\mathfrak {D}}}\left( B_{\lambda }^{~10}{\mathfrak {D}}^{24}\right) ^{B_{\lambda }}\le ({\mathfrak {D}}+1)^{9}\sum _{B_{\lambda }=0}^{{\mathfrak {D}}}({\mathfrak {D}}^{34})^{B_{\lambda }}\ll ({\mathfrak {D}}+1)^{9}{\mathfrak {D}}^{34{\mathfrak {D}}}. \end{aligned}$$

\(\square \)

5.4 The intermediate regime of \(b_{\nu }\).

Lemma 5.9

Assume that \({\mathfrak {v}}\le n^{1/4}.\)

  1. 1.

    If Y is boundary reduced with \({\mathfrak {D}}\le n^{1/10}\) then

    $$\begin{aligned} \left| \Xi _{n}^{0<b_{\nu }\le {\mathfrak {D}};{\check{b}}_{\nu }>0}\right| \ll \frac{\left( {\mathfrak {D}}^{34}2^{10}\right) ^{{\mathfrak {D}}+1}}{\left( n-{\mathfrak {v}}-{\mathfrak {D}}^{2}\right) ^{2}}. \end{aligned}$$
    (5.14)
  2. 2.

    For any \(\varepsilon \in (0,1)\), there is \(\eta =\eta (\varepsilon )\in (0,\frac{1}{100})\) such that if Y is \(\varepsilon \)-adapted, with \({\mathfrak {D}}\le n^{\eta }\) then

    $$\begin{aligned} \left| \Xi _{n}^{0<b_{\nu }\le {\mathfrak {D}};{\check{b}}_{\nu }>0}\right| \ll _{\varepsilon }\frac{1}{n}. \end{aligned}$$
    (5.15)

Proof

When \({\mathfrak {D}}=0\), the inequality \(0<b_{\nu }\le {\mathfrak {D}}\) cannot hold, and so \(\Xi _{n}^{0<b_{\nu }\le {\mathfrak {D}};{\check{b}}_{\nu }>0}=0\) by definition, and both statements hold. So assume \({\mathfrak {D}}\ge 1\). We may also assume in both statements that \({\mathfrak {D}}\le n^{1/10}\): in the first this is a hypothesis, and in the second it follows from \({\mathfrak {D}}\le n^{\eta }\) with \(\eta <\frac{1}{100}\).

For any \(\varepsilon \ge 0\), the bounds from Lemma 5.6 give

$$\begin{aligned} \left| \Xi _{n}^{0<b_{\nu }\le {\mathfrak {D}};{\check{b}}_{\nu }>0}\right| \ll \sum _{\begin{array}{c} {\overline{B}}\in {{{\mathcal {B}}}}_{n}\left( Y\right) :\\ 0<B_{\nu }\le {\mathfrak {D}};{\check{B}}_{\nu }>0 \end{array} }B_{\lambda }^{~10B_{\lambda }}\left( {\mathfrak {D}}^{24}n^{-\varepsilon }\right) ^{B_{\lambda }-B_{\nu }}\left( \frac{\kappa {\mathfrak {D}}^{4}}{\left( n-{\mathfrak {v}}-{\mathfrak {D}}^{2}\right) ^{2}}\right) ^{B_{\nu }}. \end{aligned}$$

Arguing similarly as in the proof of Lemma 5.8, the number of \({\overline{B}}\)’s in the sum above with a fixed value of \(B_{\nu }\) and \(B_{\lambda }\) is \(\ll {\mathfrak {D}}^{10}\). Also note that \(B_{\lambda }\le B_{\nu }+{\mathfrak {D}}\le 2{\mathfrak {D}}\). We obtain

$$\begin{aligned} \left| \Xi _{n}^{0<b_{\nu }\le {\mathfrak {D}};{\check{b}}_{\nu }>0}\right|&\ll {\mathfrak {D}}^{10}\sum _{\begin{array}{c} 0<B_{\nu }\le {\mathfrak {D}}\\ B_{\nu }\le B_{\lambda }\le B_{\nu }+{\mathfrak {D}} \end{array} }B_{\lambda }^{~10B_{\lambda }}\left( {\mathfrak {D}}^{24}n^{-\varepsilon }\right) ^{B_{\lambda }-B_{\nu }}\left( \frac{\kappa {\mathfrak {D}}^{4}}{\left( n-{\mathfrak {v}}-{\mathfrak {D}}^{2}\right) ^{2}}\right) ^{B_{\nu }}\\&\le {\mathfrak {D}}^{10}\sum _{B_{\nu }=1}^{{\mathfrak {D}}}\left( \frac{\kappa (2{\mathfrak {D}})^{10}{\mathfrak {D}}^{4}}{\left( n-{\mathfrak {v}}-{\mathfrak {D}}^{2}\right) ^{2}}\right) ^{B_{\nu }}\sum _{B_{\lambda }=B_{\nu }}^{B_{\nu }+{\mathfrak {D}}}\left( {\mathfrak {D}}^{24}B_{\lambda }^{~10}n^{-\varepsilon }\right) ^{B_{\lambda }-B_{\nu }}. \end{aligned}$$

As \(B_{\lambda }\le 2{\mathfrak {D}}\), we bound the second summation by \(\sum _{t=0}^{{\mathfrak {D}}}\left( {\mathfrak {D}}^{34}2^{10}n^{-\varepsilon }\right) ^{t}\). By our assumption that \({\mathfrak {D}}\le n^{1/10}\) and \({\mathfrak {v}}\le n^{1/4}\), we have \(\frac{\kappa (2{\mathfrak {D}})^{10}{\mathfrak {D}}^{4}}{\left( n-{\mathfrak {v}}-{\mathfrak {D}}^{2}\right) ^{2}}\le \frac{1}{2}\) for large enough n. Hence

$$\begin{aligned} \left| \Xi _{n}^{0<b_{\nu }\le {\mathfrak {D}};{\check{b}}_{\nu }>0}\right|\ll & {} \frac{{\mathfrak {D}}^{10}\cdot \kappa (2{\mathfrak {D}})^{10}{\mathfrak {D}}^{4}}{\left( n-{\mathfrak {v}}-{\mathfrak {D}}^{2}\right) ^{2}} \sum _{t=0}^{{\mathfrak {D}}}\left( {\mathfrak {D}}^{34}2^{10}n^{-\varepsilon }\right) ^{t}\nonumber \\\ll & {} \frac{{\mathfrak {D}}^{24}}{\left( n-{\mathfrak {v}}-{\mathfrak {D}}^{2}\right) ^{2}}\sum _{t=0}^{{\mathfrak {D}}} \left( {\mathfrak {D}}^{34}2^{10}n^{-\varepsilon }\right) ^{t}. \end{aligned}$$
(5.16)

If Y is boundary reduced, it is 0-adapted (Lemma 3.13), so (5.16) yields

$$\begin{aligned} \left| \Xi _{n}^{0<b_{\nu }\le {\mathfrak {D}};{\check{b}}_{\nu }>0}\right|\ll & {} \frac{{\mathfrak {D}}^{24}}{\left( n-{\mathfrak {v}}-{\mathfrak {D}}^{2}\right) ^{2}}\cdot \left( {\mathfrak {D}}^{34}2^{10} \right) ^{{\mathfrak {D}}}\le \frac{\left( {\mathfrak {D}}^{34}2^{10}\right) ^{{\mathfrak {D}}+1}}{\left( n-{\mathfrak {v}}-{\mathfrak {D}}^{2}\right) ^{2}} \end{aligned}$$

proving the first statement.

For the second statement, given \(\varepsilon >0\), let \(\eta =\frac{\varepsilon }{100}\) and assume \(1\le {\mathfrak {D}}\le n^{\eta }\). The choice of \(\eta \) implies that for \(n\gg _{\varepsilon }1\), \({\mathfrak {D}}^{34}2^{10}n^{-\varepsilon }\le \frac{1}{2}\), so (5.16) gives

$$\begin{aligned} \left| \Xi _{n}^{0<b_{\nu }\le {\mathfrak {D}};{\check{b}}_{\nu }>0}\right|&\ll _{\varepsilon }\frac{{\mathfrak {D}}^{24}}{\left( n-{\mathfrak {v}}-{\mathfrak {D}}^{2}\right) ^{2}}\ll _{\varepsilon }\frac{1}{n}. \end{aligned}$$

\(\square \)
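The choice \(\eta =\varepsilon /100\) in the second statement can be probed numerically. A sketch with a sample value of \(\varepsilon \), not part of the argument (the crossover point in n depends on \(\varepsilon \)):

```python
# With D as large as n^eta and eta = eps/100, the term D^34 * 2^10 * n^(-eps)
# is at most 2^10 * n^(-0.66 eps), which drops below 1/2 once n is large
# enough (depending on eps).

def term(n, eps):
    eta = eps / 100
    D = n ** eta                    # the largest allowed D
    return D ** 34 * 2 ** 10 * n ** (-eps)

eps = 0.5
assert term(10 ** 12, eps) <= 0.5      # large n: the condition is met
assert term(10 ** 6, eps) > 0.5        # smaller n: not yet
```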

5.5 The large regime of \(b_{\nu },{\check{b}}_{\nu }\).

In the large regime of \(b_{\nu }\) and \({\check{b}}_{\nu }\) we use the same estimate for any type of tiled surface.

Lemma 5.10

If \({\mathfrak {v}}\le n^{1/4}\) and \({\mathfrak {D}}\le n^{1/24}\) then

$$\begin{aligned} \left| \Xi _{n}^{b_{\nu },{\check{b}}_{\nu }>{\mathfrak {D}}}\right| \ll \frac{({\mathfrak {D}}+1)^{4}}{\left( n-{\mathfrak {v}}-{\mathfrak {D}}^{2}\right) ^{2}}. \end{aligned}$$

Proof

Using the bound from Lemma 5.7 gives

$$\begin{aligned} \left| \Xi _{n}^{b_{\nu },{\check{b}}_{\nu }>{\mathfrak {D}}}\right|&\le \sum _{{\overline{B}}\in {\mathcal {B}}_{n}\left( Y\right) :~B_{\nu },{\check{B}}_{\nu }>{\mathfrak {D}}}({\mathfrak {D}}!)^{8}\sum _{\left( \nu ,\{\mu _{f}\},\lambda \right) \vdash {\overline{B}}}\frac{d_{\lambda }}{d_{\nu }^{3}}\\&\le \left( {\mathfrak {D}}!\right) ^{8}\sum _{\nu \vdash n-{\mathfrak {v}},b_{\nu }>{\mathfrak {D}},{\check{b}}_{\nu }>{\mathfrak {D}}}d_{\nu }^{-3}\sum _{\nu \subset _{{\mathfrak {v}}-{\mathfrak {f}}}\lambda }d_{\lambda }\sum _{\nu \subset \mu _{f}\subset _{{\mathfrak {e}}_{f}-{\mathfrak {f}}}\lambda }1\\&\le \left( {\mathfrak {D}}!\right) ^{12}\sum _{\nu \vdash n-{\mathfrak {v}},b_{\nu }>{\mathfrak {D}},{\check{b}}_{\nu }>{\mathfrak {D}}}d_{\nu }^{-3}\sum _{\nu \subset _{{\mathfrak {v}}-{\mathfrak {f}}}\lambda }d_{\lambda }~\le ~{\mathfrak {D}}^{12{\mathfrak {D}}}\frac{(n-{\mathfrak {f}})!}{(n-{\mathfrak {v}})!}\sum _{\nu \vdash n-{\mathfrak {v}},b_{\nu }>{\mathfrak {D}},{\check{b}}_{\nu }>{\mathfrak {D}}}d_{\nu }^{-2}\\&\ll {\mathfrak {D}}^{12{\mathfrak {D}}}n^{{\mathfrak {D}}}\left( \frac{\kappa \left( {\mathfrak {D}}+1\right) ^{4}}{\left( n-{\mathfrak {v}}-\left( {\mathfrak {D}}+1\right) ^{2}\right) ^{2}}\right) ^{{\mathfrak {D}}+1}\\&=\left( \frac{\kappa n{\mathfrak {D}}^{12}\left( {\mathfrak {D}}+1\right) ^{4}}{\left( n-{\mathfrak {v}}-\left( {\mathfrak {D}}+1\right) ^{2}\right) ^{2}}\right) ^{{\mathfrak {D}}}\frac{\kappa \left( {\mathfrak {D}}+1\right) ^{4}}{\left( n-{\mathfrak {v}}-\left( {\mathfrak {D}}+1\right) ^{2}\right) ^{2}}. \end{aligned}$$

The second-last inequality used Lemma 4.1 and the final inequality used Proposition 4.6. Since we assume \({\mathfrak {D}}\le n^{1/24}\) and \({\mathfrak {v}}\le n^{1/4}\) we obtain the stated result. \(\square \)
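The step \(\frac{(n-{\mathfrak {f}})!}{(n-{\mathfrak {v}})!}\le n^{{\mathfrak {D}}}\) implicit above holds because the ratio is a product of \({\mathfrak {D}}={\mathfrak {v}}-{\mathfrak {f}}\) factors, each at most n. A quick check with hypothetical small values of n, \({\mathfrak {v}}\), \({\mathfrak {f}}\):

```python
from math import factorial

def ratio(n, v, f):
    """(n - f)! / (n - v)!, a product of D = v - f descending factors."""
    return factorial(n - f) // factorial(n - v)

n, v, f = 100, 7, 3          # D = 4
assert ratio(n, v, f) == (n - f) * (n - f - 1) * (n - f - 2) * (n - f - 3)
assert ratio(n, v, f) <= n ** (v - f)
```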

5.6 Assembly of analytic estimates for \(\Xi _{n}\).

Now we combine the estimates obtained in Sections 5.3, 5.4, and 5.5. First we give the culmination of our previous estimates when Y is boundary reduced.

Proposition 5.11

There is \(A_{0}>0\) such that if Y is boundary reduced, \({\mathfrak {v}}\le n^{1/4}\), and \({\mathfrak {D}}\le n^{1/24}\), then

$$\begin{aligned} |\Xi _{n}|\ll (A_{0}{\mathfrak {D}})^{A_{0}{\mathfrak {D}}}. \end{aligned}$$

Proof

With assumptions as in the proposition, splitting \(\Xi _{n}\) as in (5.10) and using Lemmas 5.8, 5.9(1), and 5.10 gives

$$\begin{aligned} \left| \Xi _{n}\right| \ll ({\mathfrak {D}}+1)^{9}{\mathfrak {D}}^{34{\mathfrak {D}}}+\frac{\left( {\mathfrak {D}}^{34}2^{10}\right) ^{{\mathfrak {D}}+1}}{\left( n-{\mathfrak {v}}-{\mathfrak {D}}^{2}\right) ^{2}}+\frac{({\mathfrak {D}}+1)^{4}}{\left( n-{\mathfrak {v}}-{\mathfrak {D}}^{2}\right) ^{2}}. \end{aligned}$$

If \({\mathfrak {D}}=0\) this gives \(|\Xi _{n}|\ll 1\) which proves the result. If \(1\le {\mathfrak {D}}\le n^{1/24}\) we obtain \(|\Xi _{n}|\ll (A_{0}{\mathfrak {D}})^{A_{0}{\mathfrak {D}}}\) as required. \(\square \)

Next we show that if Y is \(\varepsilon \)-adapted, then \({\mathfrak {D}}\) can be as large as a fractional power of n while \(\Xi _{n}\) is still very well approximated by \(2\Xi _{n}^{\nu =(n-{\mathfrak {v}})}\).

Proposition 5.12

For any \(\varepsilon \in (0,1)\), there is \(\eta =\eta (\varepsilon )\in (0,\frac{1}{100})\) such that if Y is \(\varepsilon \)-adapted with \({\mathfrak {D}}\le n^{\eta }\) and \({\mathfrak {v}}\le n^{1/4}\), then

$$\begin{aligned} \left| \Xi _{n}-2\Xi _{n}^{\nu =(n-{\mathfrak {v}})}\right| \ll _{\varepsilon }\frac{1}{n}. \end{aligned}$$

Proof

Lemmas 5.9(2) and 5.10 yield that given \(\varepsilon \in (0,1)\), there is \(\eta =\eta (\varepsilon )\in (0,\frac{1}{100})\), such that if \({\mathfrak {D}}\le n^{\eta }\), \({\mathfrak {v}}\le n^{1/4}\) and Y is \(\varepsilon \)-adapted, then

$$\begin{aligned} \left| \Xi _{n}-2\Xi _{n}^{\nu =(n-{\mathfrak {v}})}\right|&=\left| 2\Xi _{n}^{0<b_{\nu }\le {\mathfrak {D}};{\check{b}}_{\nu }>0}+\Xi _{n}^{b_{\nu },{\check{b}}_{\nu }>{\mathfrak {D}}}\right| \\&\ll _{\varepsilon }\frac{1}{n}+\frac{({\mathfrak {D}}+1)^{4}}{\left( n-{\mathfrak {v}}-{\mathfrak {D}}^{2}\right) ^{2}}\ll \frac{1}{n}. \end{aligned}$$

\(\square \)

Remark

For general g, the condition \(\eta (\varepsilon )<\frac{1}{100}\) of Proposition 5.12 should be replaced by \(\eta (\varepsilon )<\frac{1}{Cg}\) for some universal \(C\ge 100\).

5.7 A new expression for \(\Xi _{n}^{\nu =(n-{\mathfrak {v}})}\).

We continue to fix a compact tiled surface Y. The goal of this section is to give a formula for \(\Xi _{n}^{\nu =(n-{\mathfrak {v}})}\) that is more precise than is possible to obtain with the methods of the previous section. This will be done by refining the methods of [MP20, Section 5].

We will assume throughout that \(n\ge {\mathfrak {v}}\). We fix a bijective map \({\mathcal {J}}:Y^{(0)}\rightarrow [{\mathfrak {v}}]\), and as in [MP20, Section 5] for each \(n\in {\mathbf {N}}\) we modify \({\mathcal {J}}\) by letting

$$\begin{aligned} {\mathcal {J}}_{n}:Y^{(0)}\rightarrow [n-{\mathfrak {v}}+1,n],\quad {\mathcal {J}}_{n}(v){\mathop {=}\limits ^{\mathrm {def}}}{\mathcal {J}}(v)+n-{\mathfrak {v}}. \end{aligned}$$
(5.17)

We use the map \({\mathcal {J}}_{n}\) to identify the vertex set of Y with \([n-{\mathfrak {v}}+1,n]\). Let \({\mathcal {V}}_{f}^{-}={\mathcal {V}}_{f}^{-}(Y)\subset [n-{\mathfrak {v}}+1,n]\) be the subset of vertices of Y with outgoing f-labeled edges, and \({\mathcal {V}}_{f}^{+}\subset [n-{\mathfrak {v}}+1,n]\) those vertices of Y with incoming f-labeled edges. Note that \({\mathfrak {e}}_{f}=|{\mathcal {V}}_{f}^{-}|=|{\mathcal {V}}_{f}^{+}|\). Recall that \(S'_{{\mathfrak {v}}}\le S_{n}\) is the subgroup of permutations fixing \(\left[ n-{\mathfrak {v}}\right] \) element-wise. For each \(f\in \{a,b,c,d\}\) we fix \(g_{f}^{0}\in S'_{{\mathfrak {v}}}\) such that for every pair of vertices i, j of Y in \([n-{\mathfrak {v}}+1,n]\) with a directed f-labeled edge from i to j, we have \(g_{f}^{0}(i)=j\). Note that \(g_{f}^{0}({\mathcal {V}}_{f}^{-})={\mathcal {V}}_{f}^{+}\). We let \(g^{0}{\mathop {=}\limits ^{\mathrm {def}}}(g_{a}^{0},g_{b}^{0},g_{c}^{0},g_{d}^{0})\in S_{n}^{4}\). For each \(f\in \{a,b,c,d\}\) let \(G_{f}\) be the subgroup of \(S_{n}\) fixing \({\mathcal {V}}_{f}^{-}\) pointwise. Let \(G{\mathop {=}\limits ^{\mathrm {def}}}G_{a}\times G_{b}\times G_{c}\times G_{d}\le S_{n}^{4}\).

Our formula for \(\Xi _{n}^{\nu =(n-{\mathfrak {v}})}\) will involve the size of the set

$$\begin{aligned} {\mathbb {X}}_{n}^{*}(Y,{\mathcal {J}}){\mathop {=}\limits ^{\mathrm {def}}}\left\{ \left( \alpha _{a},\alpha _{b},\alpha _{c},\alpha _{d}\right) \in g^{0}G\,|\,W\left( \alpha _{a},\alpha _{b},\alpha _{c},\alpha _{d}\right) \in S_{n-{\mathfrak {v}}}\right\} \end{aligned}$$
(5.18)

where \(W(g_{a},g_{b},g_{c},g_{d}){\mathop {=}\limits ^{\mathrm {def}}}g_{d}^{-1}g_{c}^{-1}g_{d}g_{c}g_{b}^{-1}g_{a}^{-1}g_{b}g_{a}\).Footnote 15 Note that a similar set, denoted \({\mathbb {X}}_{n}(Y,{\mathcal {J}})\) in [MP20, Section 5.2], is defined by the stronger condition that \(W\left( \alpha _{a},\alpha _{b},\alpha _{c},\alpha _{d}\right) =1\), rather than requiring only that it restrict to the identity on \(\left[ n-{\mathfrak {v}}+1,n\right] \), as in (5.18). The size of this smaller set \({\mathbb {X}}_{n}(Y,{\mathcal {J}})\) counts the covers \(\phi \in \mathrm {Hom}\left( \Gamma _{2},S_{n}\right) \) in which \(\left( Y,{\mathcal {J}}\right) \) embeds.

The main result of this Section 5.7 is the following.

Proposition 5.13

With notations as above,

$$\begin{aligned} \Xi _{n}^{\nu =(n-{\mathfrak {v}})}=\,\frac{\left( n\right) _{{\mathfrak {v}}}\left| {\mathbb {X}}_{n}^{*}(Y,{\mathcal {J}})\right| }{\left( n\right) _{{\mathfrak {f}}}\prod _{f\in a,b,c,d}(n-{\mathfrak {e}}_{f})!}. \end{aligned}$$
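As a quick sanity check on the Pochhammer notation \((n)_{q}=n(n-1)\cdots (n-q+1)\) appearing in the proposition, a minimal sketch, illustrative only:

```python
from math import factorial

def poch(n, q):
    """(n)_q = n (n-1) ... (n-q+1), the falling factorial."""
    out = 1
    for i in range(q):
        out *= n - i
    return out

assert poch(10, 3) == 720                # 10 * 9 * 8
# (n)_q * (n-q)! = n!, the identity used when rewriting factorial ratios.
assert all(poch(12, q) * factorial(12 - q) == factorial(12) for q in range(13))
```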

Recall that \((n)_{q}\) is the Pochhammer symbol as defined in Section 1.4. In the rest of the paper, whenever we write an integral over a group, it is performed with respect to the uniform measure on the relevant group. Let

$$\begin{aligned} I&{\mathop {=}\limits ^{\mathrm {def}}}\int _{h_{f}\in G_{f}}\int _{\pi \in S_{n-{\mathfrak {v}}}}{} \mathbf{1}\left\{ W\left( g_{a}^{0}h_{a},g_{b}^{0}h_{b},g_{c}^{0}h_{c},g_{d}^{0}h_{d}\right) \pi =1\right\} . \end{aligned}$$

The following lemma is immediate upon rewriting the sum as a normalized integral.

Lemma 5.14

We have \(\left| {\mathbb {X}}_{n}^{*}(Y,{\mathcal {J}})\right| =\left| S_{n-{\mathfrak {v}}}\right| \cdot \left| G\right| \cdot I\).

For a Young diagram \(\lambda \) of size m, we write \(\chi _{\lambda }\) for the trace of the irreducible representation of \(S_{m}\) on \(V^{\lambda }\).

Corollary 5.15

We have

$$\begin{aligned} |{\mathbb {X}}_{n}^{*}(Y,{\mathcal {J}})|=\frac{\prod _{f\in \left\{ a,b,c,d\right\} }(n-{\mathfrak {e}}_{f})!}{(n)_{{\mathfrak {v}}}}\sum _{\lambda \vdash n}d_{\lambda }\Theta _{\lambda }^{(n-{\mathfrak {v}})}(Y,{\mathcal {J}}) \end{aligned}$$

where

$$\begin{aligned} \Theta _{\lambda }^{(n-{\mathfrak {v}})}(Y,{\mathcal {J}}){\mathop {=}\limits ^{\mathrm {def}}}\int _{h_{f}\in G_{f}}\int _{\pi \in S_{n-{\mathfrak {v}}}}\chi _{\lambda }\left( W\left( g_{a}^{0}h_{a},g_{b}^{0}h_{b},g_{c}^{0}h_{c},g_{d}^{0}h_{d}\right) \pi \right) . \end{aligned}$$
(5.19)

Proof

Using Schur orthogonality, write

$$\begin{aligned} {\mathbf {1}}\{g=1\}&=\frac{1}{n!}\sum _{\lambda \vdash n}d_{\lambda }\chi _{\lambda }(g), \end{aligned}$$

hence

$$\begin{aligned} I=\frac{1}{n!}\sum _{\lambda \vdash n}d_{\lambda }\Theta _{\lambda }^{(n-{\mathfrak {v}})}\left( Y,{\mathcal {J}}\right) . \end{aligned}$$

We have \(|G|=\prod _{f\in \left\{ a,b,c,d\right\} }(n-{\mathfrak {e}}_{f})!\), hence by Lemma 5.14

$$\begin{aligned} \left| {\mathbb {X}}_{n}^{*}\left( Y,{\mathcal {J}}\right) \right|&=(n-{\mathfrak {v}})!\prod _{f\in \left\{ a,b,c,d\right\} }\left( n-{\mathfrak {e}}_{f}\right) !\cdot \frac{1}{n!}\sum _{\lambda \vdash n}d_{\lambda }\Theta _{\lambda }^{(n-{\mathfrak {v}})}\left( Y,{\mathcal {J}}\right) \\&=\frac{\prod _{f\in \left\{ a,b,c,d\right\} }(n-{\mathfrak {e}}_{f})!}{(n)_{{\mathfrak {v}}}}\sum _{\lambda \vdash n}d_{\lambda }\Theta _{\lambda }^{(n-{\mathfrak {v}})}\left( Y,{\mathcal {J}}\right) . \end{aligned}$$

\(\square \)
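The character-sum identity used at the start of this proof is easy to sanity-check numerically for a small symmetric group. The sketch below hardcodes the standard character table of \(S_{3}\) (the table values are standard facts, restated here as an assumption of the illustration) and verifies that the normalized sum equals 1 at the identity and 0 elsewhere.

```python
def cycle_type(p):
    """Cycle type of a permutation tuple of {0,...,n-1}, sorted decreasingly."""
    seen, lengths = set(), []
    for i in range(len(p)):
        if i not in seen:
            length, j = 0, i
            while j not in seen:
                seen.add(j)
                j = p[j]
                length += 1
            lengths.append(length)
    return tuple(sorted(lengths, reverse=True))

# Character table of S_3, indexed by cycle type; the three columns are the
# trivial, sign, and 2-dimensional standard irreducible representations.
CHI = {
    (1, 1, 1): (1, 1, 2),   # identity class
    (2, 1):    (1, -1, 0),  # transpositions
    (3,):      (1, 1, -1),  # 3-cycles
}
DIMS = (1, 1, 2)  # d_lambda for the three irreducibles of S_3

def indicator_via_characters(p):
    """(1/|S_3|) * sum over lambda of d_lambda * chi_lambda(p); equals 1{p = id}."""
    return sum(d * c for d, c in zip(DIMS, CHI[cycle_type(p)])) / 6
```

At the identity this gives \((1+1+4)/6=1\); on a transposition \((1-1+0)/6=0\); on a 3-cycle \((1+1-2)/6=0\), as Schur orthogonality predicts.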

Consider the vector space

$$\begin{aligned} W^{\lambda }{\mathop {=}\limits ^{\mathrm {def}}}V^{\lambda }\otimes {\check{V}}^{\lambda }\otimes V^{\lambda }\otimes {\check{V}}^{\lambda }\otimes V^{\lambda }\otimes {\check{V}}^{\lambda }\otimes V^{\lambda }\otimes {\check{V}}^{\lambda } \end{aligned}$$

as a unitary representation of \(S_{n}^{8}\). This is a departure from [MP20, Section 5], where \(W^{\lambda }\) was thought of as a representation of \(S_{n}^{4}\); we take a more flexible setup here. The reader may find it useful to consult [MP20, Section 5.4] for additional background on representation theory. The inner product on \(V^{\lambda }\) gives an isomorphism \(V^{\lambda }\cong {\check{V}}^{\lambda }\), \(v\mapsto {\check{v}}\). Let \(B_{\lambda }\in \mathrm {End}(W^{\lambda })\) be defined as in [MP20, Equation (5.9)] by the formula

$$\begin{aligned}&\left\langle B_{\lambda }\left( v_{1}\otimes {\check{v}}_{2}\otimes v_{3}\otimes {\check{v}}_{4}\otimes v_{5}\otimes {\check{v}}_{6}\otimes v_{7}\otimes {\check{v}}_{8}\right) ,w_{1}\otimes {\check{w}}_{2}\otimes w_{3}\otimes {\check{w}}_{4}\otimes w_{5}\otimes {\check{w}}_{6}\otimes w_{7}\otimes {\check{w}}_{8}\right\rangle \nonumber \\&\quad {\mathop {=}\limits ^{\mathrm {def}}}\langle v_{1},w_{3}\rangle \langle v_{3},v_{2}\rangle \langle w_{2},v_{4}\rangle \langle w_{4},w_{5}\rangle \langle v_{5},w_{7}\rangle \langle v_{7},v_{6}\rangle \langle w_{6},v_{8}\rangle \langle w_{8},w_{1}\rangle . \end{aligned}$$
(5.20)

We note the following, extending [MP20, Lem. 5.4].

Lemma 5.16

For any \((g_{1},g_{2},g_{3},g_{4},g_{5},g_{6},g_{7},g_{8})\in S_{n}^{8}\), we have

$$\begin{aligned} \mathrm {tr}_{W^{\lambda }}(B_{\lambda }\circ (g_{1},g_{2},g_{3},g_{4},g_{5},g_{6},g_{7},g_{8}))=\chi _{\lambda }(g_{8}^{-1}g_{6}^{-1}g_{7}g_{5}g_{4}^{-1}g_{2}^{-1}g_{3}g_{1}). \end{aligned}$$

Proof

The proof is a direct calculation generalizing [MP20, Lem. 5.4]. \(\square \)

Let Q be the orthogonal projection in \(W^{\lambda }\) onto the vectors that are invariant under G acting on \(W^{\lambda }\) by the map

$$\begin{aligned} (g_{a},g_{b},g_{c},g_{d})\in G\mapsto (g_{a},g_{a},g_{b},g_{b},g_{c},g_{c},g_{d},g_{d})\in S_{n}^{8}. \end{aligned}$$

This projection appeared also in [MP20, Section 5.4].

Lemma 5.17

We have \(\Theta _{\lambda }^{(n-{\mathfrak {v}})}(Y,{\mathcal {J}})=\mathrm {tr}_{W^{\lambda }}({\mathfrak {p}}B_{\lambda }g^{0}Q)\) where \({\mathfrak {p}}\) denotes the operator

$$\begin{aligned} {\mathfrak {p}}{\mathop {=}\limits ^{\mathrm {def}}}\int _{\pi \in S_{n-{\mathfrak {v}}}}\left( \pi ,1,1,1,1,1,1,1\right) \in \mathrm {End}\left( W^{\lambda }\right) . \end{aligned}$$

Remark 5.18

Note that \({\mathfrak {p}}\) is the projection in \(\mathrm {End}(W^{\lambda })\) onto the \(\mathrm {triv}\)-isotypic subspace for the action of \(S_{n-{\mathfrak {v}}}\) on the first factor of \(W^{\lambda }\) (while being the identity on the remaining seven factors). This is a self-adjoint operator.

Proof

Recall the definition of \(\Theta _{\lambda }^{(n-{\mathfrak {v}})}(Y,{\mathcal {J}})\) in (5.19). Using Lemma 5.16, for any fixed values of the \(h_{f}\) and \(\pi \), we have

$$\begin{aligned}&\chi _{\lambda }\left( W\left( g_{a}^{0}h_{a},g_{b}^{0}h_{b},g_{c}^{0}h_{c},g_{d}^{0}h_{d}\right) \pi \right) \\&\quad = \mathrm {tr}_{W^{\lambda }}\left( B_{\lambda }\circ \left( g_{a}^{0}h_{a}\pi ,g_{a}^{0}h_{a},g_{b}^{0}h_{b},g_{b}^{0}h_{b},g_{c}^{0}h_{c},g_{c}^{0}h_{c},g_{d}^{0}h_{d},g_{d}^{0}h_{d}\right) \right) . \end{aligned}$$

Therefore,

$$\begin{aligned} \Theta _{\lambda }^{(n-{\mathfrak {v}})}(Y,{\mathcal {J}})&=\mathrm {tr}_{W^{\lambda }}(B_{\lambda }g^{0}Q{\mathfrak {p}})=\mathrm {tr}_{W^{\lambda }}({\mathfrak {p}}B_{\lambda }g^{0}Q). \end{aligned}$$

\(\square \)

Using Lemma 5.17, we now find a new expression for \(\Theta _{\lambda }^{(n-{\mathfrak {v}})}(Y,{\mathcal {J}})\) by calculating \(\mathrm {tr}_{W^{\lambda }}({\mathfrak {p}}B_{\lambda }g^{0}Q)\).

Proposition 5.19

We have

$$\begin{aligned} \Theta _{\lambda }^{(n-{\mathfrak {v}})}\left( Y,{\mathcal {J}}\right)= & {} \sum _{(n-{\mathfrak {v}})\subset \mu _{f}\subset _{{\mathfrak {e}}_{f}-{\mathfrak {f}}}\lambda '\subset _{{\mathfrak {f}}}\lambda }\frac{d_{\lambda /\lambda '}}{d_{\mu _{a}}d_{\mu _{b}}d_{\mu _{c}}d_{\mu _{d}}}\Upsilon _{n}\left( \left\{ \sigma _{f}^{\pm },\tau _{f}^{\pm }\right\} ,(n-{\mathfrak {v}}),\left\{ \mu _{f}\right\} ,\lambda '\right) .\nonumber \\ \end{aligned}$$
(5.21)

Proof

This calculation is very similar to the proof of [MP20, Prop. 5.8], where \(\mathrm {tr}_{W^{\lambda }}(B_{\lambda }g^{0}Q)\) was calculated. The only difference here is the presence of the additional operator \({\mathfrak {p}}\), so we will not give all the details. The proof follows [MP20, proof of Prop. 5.8], using properties P1–P4 of \(\sigma _{f}^{\pm },\tau _{f}^{\pm }\). One also uses that \({\mathfrak {p}}\) is a self-adjoint projection. The role that \({\mathfrak {p}}\) plays in the proof is that instead of obtaining a summation over all \(\nu \subset _{{\mathfrak {v}}}\lambda \), the projection \({\mathfrak {p}}\) forces only the relevant \(\nu =(n-{\mathfrak {v}})\) to appear.

Indeed, the calculation leading to [MP20, Equation (5.17)] is replaced by

$$\begin{aligned}&\left\langle {\mathfrak {p}}B_{\lambda }\left[ {\mathcal {E}}_{\mu _{a},S_{a},T_{a}}^{\lambda ,a,+}\otimes {\mathcal {E}}_{\mu _{b},S_{b},T_{b}}^{\lambda ,b,+}\otimes {\mathcal {E}}_{\mu _{c},S_{c},T_{c}}^{\lambda ,c,+}\otimes {\mathcal {E}}_{\mu _{d},S_{d},T_{d}}^{\lambda ,d,+}\right] , \right. \\&\quad \left. {\mathcal {E}}_{\mu {}_{a},S_{a},T_{a}}^{\lambda ,a,-}\otimes {\mathcal {E}}_{\mu {}_{b},S_{b},T_{b}}^{\lambda ,b,-}\otimes {\mathcal {E}}_{\mu {}_{c},S_{c},T_{c}}^{\lambda ,c,-}\otimes {\mathcal {E}}_{\mu {}_{d},S_{d},T_{d}}^{\lambda ,d,-}\right\rangle \\&\quad = \left\langle B_{\lambda }\left[ {\mathcal {E}}_{\mu _{a},S_{a},T_{a}}^{\lambda ,a,+}\otimes {\mathcal {E}}_{\mu _{b},S_{b},T_{b}}^{\lambda ,b,+}\otimes {\mathcal {E}}_{\mu _{c},S_{c},T_{c}}^{\lambda ,c,+}\otimes {\mathcal {E}}_{\mu _{d},S_{d},T_{d}}^{\lambda ,d,+}\right] , \right. \\&\quad \left. {\mathfrak {p}}\left( {\mathcal {E}}_{\mu {}_{a},S_{a},T_{a}}^{\lambda ,a,-}\otimes {\mathcal {E}}_{\mu {}_{b},S_{b},T_{b}}^{\lambda ,b,-}\otimes {\mathcal {E}}_{\mu {}_{c},S_{c},T_{c}}^{\lambda ,c,-}\otimes {\mathcal {E}}_{\mu {}_{d},S_{d},T_{d}}^{\lambda ,d,-}\right) \right\rangle \\&\quad = \frac{1}{d_{\mu _{a}}d_{\mu _{b}}d_{\mu _{c}}d_{\mu _{d}}}\sum _{R_{f}^{\pm }\in \mathrm {Tab}\left( \mu _{f}\right) }\left\langle v_{R_{a}^{+}\sqcup S_{a}}^{\sigma _{a}^{+}},v_{R_{b}^{-}\sqcup S_{b}}^{\sigma _{b}^{-}}\right\rangle \left\langle v_{R_{b}^{+}\sqcup S_{b}}^{\sigma _{b}^{+}},v_{R_{a}^{+}\sqcup T_{a}}^{\tau _{a}^{+}}\right\rangle \left\langle v_{R_{a}^{-}\sqcup T_{a}}^{\tau _{a}^{-}},v_{R_{b}^{+}\sqcup T_{b}}^{\tau _{b}^{+}}\right\rangle \\&\qquad \cdot \left\langle v_{R_{b}^{-}\sqcup T_{b}}^{\tau _{b}^{-}},v_{R_{c}^{-}\sqcup S_{c}}^{\sigma _{c}^{-}}\right\rangle \left\langle v_{R_{c}^{+}\sqcup S_{c}}^{\sigma _{c}^{+}},v_{R_{d}^{-}\sqcup S_{d}}^{\sigma _{d}^{-}}\right\rangle \left\langle v_{R_{d}^{+}\sqcup S_{d}}^{\sigma _{d}^{+}},v_{R_{c}^{+}\sqcup T_{c}}^{\tau _{c}^{+}}\right\rangle 
\\&\qquad \qquad \left\langle v_{R_{c}^{-}\sqcup T_{c}}^{\tau _{c}^{-}},v_{R_{d}^{+}\sqcup T_{d}}^{\tau _{d}^{+}}\right\rangle \left\langle v_{R_{d}^{-}\sqcup T_{d}}^{\tau _{d}^{-}},{\mathfrak {p}}_{0}v_{R_{a}^{-}\sqcup S_{a}}^{\sigma _{a}^{-}}\right\rangle \end{aligned}$$

where \({\mathfrak {p}}_{0}\) is the orthogonal projection onto the \(S_{n-{\mathfrak {v}}}\)-invariant vectors in \(V^{\lambda }\). Then the same discussion that precedes [MP20, Equation (5.17)] applies to show that the above is zero unless there is \(\nu \vdash n-{\mathfrak {v}}\) such that \(\nu \subset \mu _{f}\) for all \(f\in \{a,b,c,d\}\), and all \(R_{f}^{+}|_{\le n-{\mathfrak {v}}}\), \(R_{f}^{-}|_{\le n-{\mathfrak {v}}}\) are equal and of shape \(\nu \); except now, the presence of \({\mathfrak {p}}_{0}\) forces \(\nu =(n-{\mathfrak {v}})\). The rest of the proof is the same. \(\square \)

Proof of Proposition 5.13

Combining Corollary 5.15 and Proposition 5.19 we obtain

$$\begin{aligned}&\left| {\mathbb {X}}_{n}^{*}(Y,{\mathcal {J}})\right| \\&\quad = \frac{\prod _{f\in \left\{ a,b,c,d\right\} }(n-{\mathfrak {e}}_{f})!}{(n)_{{\mathfrak {v}}}}\sum _{\lambda \vdash n}d_{\lambda }\sum _{(n-{\mathfrak {v}})\subset \mu _{f}\subset _{{\mathfrak {e}}_{f}-{\mathfrak {f}}}\lambda '\subset _{{\mathfrak {f}}}\lambda }\frac{d_{\lambda /\lambda '}}{d_{\mu _{a}}d_{\mu _{b}}d_{\mu _{c}}d_{\mu _{d}}}\Upsilon _{n}\left( \left\{ \sigma _{f}^{\pm },\tau _{f}^{\pm }\right\} ,(n-{\mathfrak {v}}),\left\{ \mu _{f}\right\} ,\lambda '\right) \\&\quad = \frac{\prod _{f\in \left\{ a,b,c,d\right\} }\left( n-{\mathfrak {e}}_{f}\right) !\,(n)_{{\mathfrak {f}}}}{(n)_{{\mathfrak {v}}}}\sum _{(n-{\mathfrak {v}})\subset \mu _{f}\subset _{{\mathfrak {e}}_{f}-{\mathfrak {f}}}\lambda '\vdash n-{\mathfrak {f}}}\frac{d_{\lambda '}}{d_{\mu _{a}}d_{\mu _{b}}d_{\mu _{c}}d_{\mu _{d}}}\Upsilon _{n}\left( \left\{ \sigma _{f}^{\pm },\tau _{f}^{\pm }\right\} ,(n-{\mathfrak {v}}),\left\{ \mu _{f}\right\} ,\lambda '\right) \\&\quad = \frac{\prod _{f\in \left\{ a,b,c,d\right\} }\left( n-{\mathfrak {e}}_{f}\right) !\,(n)_{{\mathfrak {f}}}}{(n)_{{\mathfrak {v}}}}\Xi _{n}^{\nu =(n-{\mathfrak {v}})}, \end{aligned}$$

where the second equality used Lemma 4.1 and the third used \(d_{(n-{\mathfrak {v}})}=1\). This gives the result. \(\square \)

5.8 Understanding \(\left| {\mathbb {X}}_{n}^{*}(Y,{\mathcal {J}})\right| \).

Recall the definition of \({\mathbb {X}}_{n}^{*}(Y,{\mathcal {J}})\) in (5.18). Because these 4-tuples of permutations generally do not correspond to covers of the surface \(\Sigma _{2}\), they are better analyzed as degree-n covers of the bouquet of four loops, namely, as graphs on n vertices labeled by \(\left[ n\right] \) with directed edges labeled by a, b, c, d, and exactly one incoming f-edge and one outgoing f-edge at every vertex, for every \(f\in \left\{ a,b,c,d\right\} \). Equivalently, these graphs are the Schreier graphs depicting the action of \(S_{n}\) on \(\left[ n\right] \) with respect to the four permutations \(\alpha _{a},\alpha _{b},\alpha _{c},\alpha _{d}\).

Such a Schreier graph \({\mathcal {G}}\) corresponds to some 4-tuple \(\left( \alpha _{a},\alpha _{b},\alpha _{c},\alpha _{d}\right) \in {\mathbb {X}}_{n}^{*}(Y,{\mathcal {J}})\) if and only if the following two conditions are satisfied. The assumption that \(\left( \alpha _{a},\alpha _{b},\alpha _{c},\alpha _{d}\right) \in g^{0}G\) means that \(Y^{\left( 1\right) }\), the 1-skeleton of Y, is embedded in \({\mathcal {G}}\), via an embedding that extends \({\mathcal {J}}_{n}\) on the vertices. The condition that \(W\left( \alpha _{a},\alpha _{b},\alpha _{c},\alpha _{d}\right) \in S_{n-{\mathfrak {v}}}\) means that at every vertex of \({\mathcal {G}}\) with label in \([n-{\mathfrak {v}}+1,n]\), there is a closed path of length 8 that spells out the word \(\left[ a,b\right] \left[ c,d\right] \).

In Lemma 5.20 below we show that the number of such graphs (equal to \(\left| {\mathbb {X}}_{n}^{*}\left( Y,{\mathcal {J}}\right) \right| \)) is, for \(n\ge 8{\mathfrak {v}}\left( Y\right) \), equal to \(\left( n!\right) ^{4}\) times a rational function of n. To this end, we apply techniques based on Stallings core graphs, similar to the techniques applied in [Pud14, PP15].

Construct a finite graph \({\hat{Y}}\) as follows. Start with \(Y^{\left( 1\right) }\), the 1-skeleton of Y. At every vertex attach a closed cycle of length 8 spelling out \(\left[ a,b\right] \left[ c,d\right] \). Then fold the resulting graph, in the sense of Stallings,Footnote 16 to obtain \({\hat{Y}}\). In other words, at each vertex v of \(Y^{\left( 1\right) }\), if there is a closed path at v spelling \(\left[ a,b\right] \left[ c,d\right] \), do nothing. Otherwise, find the largest prefix of \(\left[ a,b\right] \left[ c,d\right] \) that can be read on a path p starting at v and the largest suffix of \(\left[ a,b\right] \left[ c,d\right] \) that can be read on a path s terminating at v. Because Y is a tiled surface, \(\left| p\right| +\left| s\right| <8\). Attach a path of length \(8-\left| p\right| -\left| s\right| \) between the endpoint of p and the beginning of s which spells out the missing part of the word \(\left[ a,b\right] \left[ c,d\right] \). In this description, no folding is required. Note, in particular, that \(Y^{\left( 1\right) }\) is embedded in \({\hat{Y}}\).
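Folding in the sense of Stallings is itself a simple iterative procedure: repeatedly merge two equally-labeled edges that share a source (or share a target), identifying the other pair of endpoints, until no such pair remains. The following Python sketch of generic folding is illustrative only; the edge representation and the union-find bookkeeping are choices of this illustration, and it is not needed for \({\hat{Y}}\) itself, which by the alternative description above requires no folding.

```python
def fold(edges):
    """Fold a directed edge-labeled graph, in the sense of Stallings.

    edges: a set of (source, label, target) triples. Returns the folded
    edge set, with vertices renamed to canonical union-find representatives.
    """
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(x, y):
        parent[find(x)] = find(y)

    changed = True
    while changed:
        changed = False
        edges = {(find(u), lab, find(v)) for (u, lab, v) in edges}
        by_source, by_target = {}, {}
        for (u, lab, v) in edges:
            # two edges with the same label and source: merge their targets
            if by_source.get((u, lab), v) != v:
                union(v, by_source[(u, lab)])
                changed = True
                break
            by_source[(u, lab)] = v
            # two edges with the same label and target: merge their sources
            if by_target.get((lab, v), u) != u:
                union(u, by_target[(lab, v)])
                changed = True
                break
            by_target[(lab, v)] = u
    return {(find(u), lab, find(v)) for (u, lab, v) in edges}
```

Each merge strictly decreases the number of vertices, so the procedure terminates, and the result has at most one outgoing and one incoming edge of each label at every vertex.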

Fig. 6

On the left is a tiled surface Y consisting of two vertices and a single d-edge between them. The middle shows \({\hat{Y}}\), obtained by “growing” an octagon from every vertex of Y, in which \(Y^{\left( 1\right) }\) is embedded. On the right is another element of \({{{\mathcal {Q}}}}\left( Y\right) \): a folded quotient of \({\hat{Y}}\) in which \(Y^{\left( 1\right) }\) is still embedded.

By the discussion above, the Schreier graphs \({\mathcal {G}}\) corresponding to \({\mathbb {X}}_{n}^{*}\left( Y,{\mathcal {J}}\right) \) are the graphs in which there is an embedding of \(Y^{\left( 1\right) }\) which extends to a morphism of directed edge-labeled graphs of \({\hat{Y}}\). We group these \({\mathcal {G}}\) according to the image of \({\hat{Y}}\). Accordingly, denote by \({{{\mathcal {Q}}}}\left( Y\right) \) the set of possible images of \({\hat{Y}}\) in the graphs \({\mathcal {G}}\): these are precisely the folded quotients of \({\hat{Y}}\) (edges may only be merged with other equally-labeled edges) which restrict to a bijection on \(Y^{\left( 1\right) }\). In particular, \({\hat{Y}}\in {{{\mathcal {Q}}}}\left( Y\right) \). We illustrate these concepts in Figure 6. As \({\hat{Y}}\) is a finite graph, the set \({{{\mathcal {Q}}}}\left( Y\right) \) is finite.

Lemma 5.20

For every \(n\ge 8{\mathfrak {v}}\left( Y\right) \),

$$\begin{aligned} \left| {\mathbb {X}}_{n}^{*}(Y,{\mathcal {J}})\right| =\frac{\left( n!\right) ^{4}}{(n)_{{\mathfrak {v}}(Y)}} \sum _{H\in {{{\mathcal {Q}}}}\left( Y\right) }\frac{\left( n\right) _{{\mathfrak {v}}\left( H\right) }}{\prod _{f\in \left\{ a,b,c,d\right\} }\left( n\right) _{{\mathfrak {e}}_{f}(H)}}. \end{aligned}$$
(5.22)

Proof

By the discussion above, it is enough to show that for every \(H\in {{{\mathcal {Q}}}}\left( Y\right) \) and \(n\ge 8{\mathfrak {v}}\left( Y\right) \), the number of Schreier graphs \({\mathcal {G}}\) on n vertices in which the image of \({\hat{Y}}\) is H is precisely

$$\begin{aligned} \frac{(n!)^{4}}{\left( n\right) _{{\mathfrak {v}}\left( Y\right) }}\cdot \frac{\left( n\right) _{{\mathfrak {v}}\left( H\right) }}{\prod _{f\in \left\{ a,b,c,d\right\} }\left( n\right) _{{\mathfrak {e}}_{f}(H)}}. \end{aligned}$$

First, note that \({\mathfrak {v}}\left( H\right) \le {\mathfrak {v}}\left( {\hat{Y}}\right) \le 8{\mathfrak {v}}\left( Y\right) \), so under the assumption that \(n\ge 8{\mathfrak {v}}\left( Y\right) \), H can indeed be embedded in Schreier graphs on n vertices. The number of possible labelings of the vertices of H, which must extend the labeling of the vertices of \(Y^{\left( 1\right) }\), is

$$\begin{aligned} \left( n-{\mathfrak {v}}\left( Y\right) \right) \left( n-{\mathfrak {v}}\left( Y\right) -1\right) \cdots \left( n-{\mathfrak {v}}\left( H\right) +1\right) = \frac{\left( n\right) _{{\mathfrak {v}}\left( H\right) }}{\left( n\right) _{{\mathfrak {v}}\left( Y\right) }}. \end{aligned}$$

There are exactly \({\mathfrak {e}}_{a}\left( H\right) \) constraints on the permutation \(\alpha _{a}\) for it to agree with the data in the vertex-labeled H, so there are \(\left( n-{\mathfrak {e}}_{a}\left( H\right) \right) !=\frac{n!}{\left( n\right) _{{\mathfrak {e}}_{a}\left( H\right) }}\) such permutations. The same logic applied to the other letters gives the required result. \(\square \)
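Both counting steps in this proof can be confirmed by brute force in a small case. The sketch below, with arbitrary small parameters chosen only for illustration, checks that a partial injection defined on e points extends to exactly \((n-e)!\) permutations, and that the labeling count agrees with the falling-factorial ratio.

```python
from itertools import permutations

def falling(n, q):
    """Falling factorial / Pochhammer symbol (n)_q = n(n-1)...(n-q+1)."""
    out = 1
    for k in range(q):
        out *= n - k
    return out

def count_extensions(n, partial):
    """Count permutations of {0,...,n-1} extending a partial injection,
    given as a dict {i: image of i}; brute force, so small n only."""
    return sum(
        all(p[i] == j for i, j in partial.items())
        for p in permutations(range(n))
    )
```

For example, with \(n=6\) and two constraints, exactly \(4!=24\) permutations survive, matching \((n-{\mathfrak {e}}_{a}(H))!\); and the number of injective labelings of \({\mathfrak {v}}(H)-{\mathfrak {v}}(Y)\) new vertices into the \(n-{\mathfrak {v}}(Y)\) unused labels equals \((n)_{{\mathfrak {v}}(H)}/(n)_{{\mathfrak {v}}(Y)}\).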

Combining Lemma 5.20 with Proposition 5.13 gives the following corollary.

Corollary 5.21

For \(n\ge 8{\mathfrak {v}}(Y)\) we have

$$\begin{aligned} \Xi _{n}^{\nu =(n-{\mathfrak {v}})}(Y)= & {} \frac{\prod _{f\in \left\{ a,b,c,d\right\} }(n)_{{\mathfrak {e}}_{f}(Y)}}{(n)_{{\mathfrak {f}}(Y)}}\sum _{H\in {{{\mathcal {Q}}}}\left( Y\right) }\frac{\left( n\right) _{{\mathfrak {v}}(H)}}{\prod _{f\in \left\{ a,b,c,d\right\} }(n)_{{\mathfrak {e}}_{f}(H)}}. \end{aligned}$$

In particular, if Y is fixed and \(n\rightarrow \infty ,\) we have

$$\begin{aligned} \Xi _{n}^{\nu =(n-{\mathfrak {v}})}(Y)=\sum _{H\in {{{\mathcal {Q}}}}\left( Y\right) }n^{{\mathfrak {e}}\left( Y\right) -{\mathfrak {f}}\left( Y\right) +\chi \left( H\right) }\left( 1+O_{Y}\left( \frac{1}{n}\right) \right) . \end{aligned}$$
(5.23)

Proof

The first statement follows directly from Lemma 5.20 and Proposition 5.13. To obtain the second statement from the first, we use that all Pochhammer symbols \((n)_{q}\) appearing therein have q bounded in terms of Y, and hence \((n)_{q}=n^{q}+O_{Y}(n^{q-1})\). \(\square \)

Note that in the construction of \({\hat{Y}}\) from \(Y^{\left( 1\right) }\), we add a “handle” (a sequence of edges) to the graph for every vertex of Y that does not admit a closed cycle spelling \(\left[ a,b\right] \left[ c,d\right] \). Hence the Euler characteristic of \({\hat{Y}}\) is equal to that of \(Y^{\left( 1\right) }\) minus the number of such vertices in Y. If Y has an octagon attached along every closed cycle spelling \(\left[ a,b\right] \left[ c,d\right] \), there are \({\mathfrak {v}}\left( Y\right) -{\mathfrak {f}}\left( Y\right) \) such vertices, so

$$\begin{aligned} \chi \left( {\hat{Y}}\right) =\chi \left( Y^{\left( 1\right) }\right) - \left( {\mathfrak {v}}\left( Y\right) -{\mathfrak {f}}\left( Y\right) \right) ={\mathfrak {f}}\left( Y\right) - {\mathfrak {e}}\left( Y\right) . \end{aligned}$$
(5.24)

In particular, this is the case when Y is (strongly) boundary reduced. This is important because of the role of \(\chi \left( H\right) \) in (5.23) for \(H\in {{{\mathcal {Q}}}}(Y)\). It turns out that when Y is strongly boundary reduced, \({\hat{Y}}\) has Euler characteristic strictly larger than all other graphs in \({{{\mathcal {Q}}}}\left( Y\right) \):

Lemma 5.22

If Y is strongly boundary reduced, then for every \(H\in {{{\mathcal {Q}}}}(Y){\setminus }\{{\hat{Y}}\}\),

$$\begin{aligned} \chi \left( H\right) <\chi \left( {\hat{Y}}\right) . \end{aligned}$$

Proof

We use [MP20, Prop. 5.26], which states that if Y is strongly boundary reduced, then as \(n\rightarrow \infty \),

$$\begin{aligned} \Xi _{n}(Y)=2+O_{Y}\left( n^{-1}\right) . \end{aligned}$$
(5.25)

When Y is fixed and \(n\rightarrow \infty \), it follows from Lemmas 5.9(1) and 5.10 that

$$\begin{aligned} \Xi _{n}(Y)=2\Xi _{n}^{(n-{\mathfrak {v}})}(Y)+O_{Y}\left( n^{-2}\right) . \end{aligned}$$
(5.26)

Combining (5.25) and (5.26) gives

$$\begin{aligned} \Xi _{n}^{(n-{\mathfrak {v}})}(Y)=1+O_{Y}\left( n^{-1}\right) . \end{aligned}$$
(5.27)

Comparing (5.27) with (5.23) shows that there is exactly one \(H\in {{{\mathcal {Q}}}}(Y)\) with \(\chi \left( H\right) ={\mathfrak {f}}(Y)-{\mathfrak {e}}(Y)\), and all remaining graphs in \({{{\mathcal {Q}}}}(Y)\) have strictly smaller Euler characteristic. Finally, (5.24) shows this H must be \({\hat{Y}}\) itself. \(\square \)

5.9 Bounds on \({\mathbb {E}}_{n}^{\mathrm {emb}}(Y)\) for \(\varepsilon \)-adapted Y.

In this section we give the final implications of the previous sections for \({\mathbb {E}}_{n}^{\mathrm {emb}}(Y)\) for \(\varepsilon \)-adapted Y. Recall the definition of \({{{\mathcal {Q}}}}\left( Y\right) \) from Section 5.8. We will need the following easy bound for Pochhammer symbols.

Lemma 5.23

Let \(n\in {\mathbf {N}}\) and \(q\in {\mathbf {N}}\cup \{0\}\) with \(q\le \frac{1}{2}n\). Then

$$\begin{aligned} n^{q}\left( 1-\frac{q^{2}}{n}\right) \le n^{q}\exp \left( \frac{-q^{2}}{n}\right) \le (n)_{q}\le n^{q}. \end{aligned}$$

Proof

The first inequality is based on \(1-x\le e^{-x}\). The second one is based on writing \(\left( n\right) _{q}=n^{q}\left( 1-\frac{1}{n}\right) \cdots \left( 1-\frac{q-1}{n}\right) \) and using \(e^{-2x}\le 1-x\) which holds for \(x\in \left[ 0,\frac{1}{2}\right] \). The third inequality is obvious. \(\square \)
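The chain of inequalities in Lemma 5.23 can also be sanity-checked numerically; the following sketch (with a few arbitrary values of n, chosen only for illustration) verifies it for all \(q\le n/2\).

```python
from math import exp

def falling(n, q):
    """Falling factorial / Pochhammer symbol (n)_q = n(n-1)...(n-q+1), (n)_0 = 1."""
    out = 1
    for k in range(q):
        out *= n - k
    return out

def check_pochhammer_bounds(n, q):
    """Verify n^q (1 - q^2/n) <= n^q exp(-q^2/n) <= (n)_q <= n^q."""
    lower1 = n**q * (1 - q**2 / n)
    lower2 = n**q * exp(-(q**2) / n)
    middle = falling(n, q)
    upper = n**q
    return lower1 <= lower2 <= middle <= upper
```

The first inequality holds for all q; the middle one uses \(q\le \frac{1}{2}n\), exactly as in the lemma.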

Proposition 5.24

Let \(\varepsilon \in (0,1)\) and \(\eta =\eta (\varepsilon )\in (0,\frac{1}{100})\) be the parameter provided by Proposition 5.12 for this \(\varepsilon \). Let \(n\in {\mathbf {N}}\) and \(M=M\left( n\right) \ge 1\). Let Y be \(\varepsilon \)-adapted with \({\mathfrak {D}}(Y)\le n^{\eta }\) and \({\mathfrak {v}}(Y),{\mathfrak {e}}(Y),{\mathfrak {f}}(Y)\le M\le n^{1/4}\). Then

$$\begin{aligned} \frac{{\mathbb {E}}_{n}^{\mathrm {emb}}(Y)}{n^{\chi (Y)}}=\left( 1+O_{\varepsilon }\left( \frac{M^{2}}{n}\right) \right) \left( 1+\sum _{H\in {{{\mathcal {Q}}}}\left( Y\right) \backslash \{{\hat{Y}}\}}n^{\chi (H)+{\mathfrak {e}}(Y)-{\mathfrak {f}}(Y)}\right) . \end{aligned}$$
(5.28)

Proof

Assume all parameters are as in the statement of the proposition. By Theorem 5.1 and Proposition 5.12 we have

$$\begin{aligned} \frac{{\mathbb {E}}_{n}^{\mathrm {emb}}(Y)}{n^{\chi (Y)}}=\frac{(n!)^{3}}{|{\mathbb {X}}_{n}|} \cdot \frac{(n)_{{\mathfrak {v}}(Y)}(n)_{{\mathfrak {f}}(Y)}}{\prod _{f}(n)_{{\mathfrak {e}}_{f}(Y)}n^{\chi (Y)}}\left[ 2\Xi _{n}^{\nu =(n-{\mathfrak {v}})}\left( Y\right) +O_{\varepsilon }\left( \frac{1}{n}\right) \right] . \end{aligned}$$

By Lemma 5.23, \(\frac{(n)_{{\mathfrak {v}}(Y)}(n)_{{\mathfrak {f}}(Y)}}{\prod _{f}(n)_{{\mathfrak {e}}_{f}(Y)}n^{\chi (Y)}}= 1+O\left( \frac{M^{2}}{n}\right) \). By Corollary 4.5, \(\frac{\left( n!\right) ^{3}}{\left| {\mathbb {X}}_{n}\right| }=\frac{1}{2}+O \left( \frac{1}{n^{2}}\right) \). With Corollary 5.21, this gives

$$\begin{aligned}&\frac{{\mathbb {E}}_{n}^{\mathrm {emb}}(Y)}{n^{\chi (Y)}} = \left[ \frac{1}{2}+O \left( \frac{M^{2}}{n}\right) \right] \left[ 2\frac{\prod _{f}\left( n\right) _{{\mathfrak {e}}_{f} \left( Y\right) }}{\left( n\right) _{{\mathfrak {f}}\left( Y\right) }}\sum _{H\in {{{\mathcal {Q}}}} \left( Y\right) }\frac{\left( n\right) _{{\mathfrak {v}}\left( H\right) }}{\prod _{f} \left( n\right) _{{\mathfrak {e}}_{f}\left( H\right) }}\right] +O_{\varepsilon } \left( \frac{1}{n}\right) \nonumber \\&\quad \qquad {\mathop {=}\limits ^{\text {Lem.}~5.23}} \left[ 1+O\left( \frac{M^{2}}{n}\right) \right] \sum _{H\in {{{\mathcal {Q}}}}\left( Y\right) }n^{{\mathfrak {e}}(Y)-{\mathfrak {f}}(Y)+\chi (H)}+O_{\varepsilon }\left( \frac{1}{n}\right) , \end{aligned}$$
(5.29)

where the use of Lemma 5.23 is justified since for every \(H\in {{{\mathcal {Q}}}}\left( Y\right) \), \({\mathfrak {v}}\left( H\right) \le {\mathfrak {v}}({\hat{Y}})\le 8{\mathfrak {v}}\left( Y\right) \le 8M\), and \({\mathfrak {e}}\left( H\right) \le {\mathfrak {e}}({\hat{Y}})\le {\mathfrak {e}}\left( Y\right) +8{\mathfrak {v}}\left( Y\right) \le 9M\). In the summation in (5.29), the top power of n is realized by \({\hat{Y}}\) and is equal to zero (by (5.24) and Lemma 5.22), so we obtain

$$\begin{aligned} \frac{{\mathbb {E}}_{n}^{\mathrm {emb}}(Y)}{n^{\chi (Y)}}&=\left[ 1+O\left( \frac{M^{2}}{n}\right) \right] \left( 1+\sum _{H\in {{{\mathcal {Q}}}}\left( Y\right) \backslash \{{\hat{Y}}\}}n^{\chi (H)+{\mathfrak {e}}(Y)-{\mathfrak {f}}(Y)}\right) +O_{\varepsilon }\left( \frac{1}{n}\right) , \end{aligned}$$

which yields (5.28). \(\square \)

The drawback of Proposition 5.24 is that we do not know how to estimate directly the sum over \(H\in {{{\mathcal {Q}}}}\left( Y\right) \backslash \{{\hat{Y}}\}\) that appears therein. Because we cannot deal with this sum directly, we instead use Proposition 5.24 to deduce, in the remaining results of this section, that for \(\varepsilon \)-adapted Y we can control \({\mathbb {E}}_{n}^{\mathrm {emb}}(Y)\) using \({\mathbb {E}}_{m}^{\mathrm {emb}}(Y)\) with m much smaller than n.

Corollary 5.25

Let \(\varepsilon \in (0,1)\), and \(\eta =\eta (\varepsilon )\in (0,\frac{1}{100})\) be the parameter provided by Proposition 5.12 for this \(\varepsilon \). Let \(m\in {\mathbf {N}}\). Let Y be \(\varepsilon \)-adapted with \({\mathfrak {D}}(Y)\le m^{\eta }\) and \({\mathfrak {v}}(Y),{\mathfrak {e}}(Y),{\mathfrak {f}}(Y)\le m^{1/4}\). Then

$$\begin{aligned} \frac{{\mathbb {E}}_{m}^{\mathrm {emb}}(Y)}{m^{\chi \left( Y\right) }}\gg _{\varepsilon }1+\sum _{H\in {{{\mathcal {Q}}}}\left( Y\right) \backslash \{{\hat{Y}}\}}m^{\chi (H)+{\mathfrak {e}}(Y)-{\mathfrak {f}}(Y)}. \end{aligned}$$

In particular, \({\mathbb {E}}_{m}^{\mathrm {emb}}(Y)\gg _{\varepsilon }m^{\chi (Y)}.\)

Remark 5.26

As a direct consequence of Corollary 5.25, under the same conditions, if \(\chi (Y)<0\) and \(n\ge m\) we obtain

$$\begin{aligned} \frac{m}{n}{\mathbb {E}}_{m}^{\mathrm {emb}}(Y)\gg \frac{m^{\chi (Y)+1}}{n}\gg n^{\chi (Y)}. \end{aligned}$$

While Corollary 5.25 is a direct consequence of Proposition 5.24, we can get more information by combining Proposition 5.24 with what we already know about \({\mathcal {Q}}(Y)\).

Proposition 5.27

Let \(\varepsilon \in (0,1)\), \(\eta \) be as in Proposition 5.12 and \(K>1\). Let \(n\in {\mathbf {N}}\) and \(m=m\left( n\right) \in {\mathbf {N}}\) with \(m<n\) and \(m{\mathop {\rightarrow }\limits ^{n\rightarrow \infty }}\infty \). Let Y be \(\varepsilon \)-adapted and suppose that \({\mathfrak {v}}(Y),{\mathfrak {e}}(Y),{\mathfrak {f}}(Y)\le (K\log n)^{2}\le m^{1/4}\) and that \({\mathfrak {D}}(Y)\le K\log n\le m^{\eta }\). Then

$$\begin{aligned} \frac{{\mathbb {E}}_{n}^{\mathrm {emb}}(Y)}{n^{\chi (Y)}}=1+O_{\varepsilon ,K}\left( \frac{(\log n)^{4}}{n}\right) +O_{\varepsilon ,K}\left( \frac{m}{n}\frac{{\mathbb {E}}_{m}^{\mathrm {emb}}(Y)}{m^{\chi (Y)}}\right) . \end{aligned}$$
(5.30)

Proof

With assumptions as in the proposition, Proposition 5.24 gives

$$\begin{aligned} \frac{{\mathbb {E}}_{n}^{\mathrm {emb}}(Y)}{n^{\chi (Y)}}&=\left( 1+O_{\varepsilon ,K}\left( \frac{(\log n)^{4}}{n}\right) \right) \left( 1+\sum _{H\in {{{\mathcal {Q}}}}\left( Y\right) \backslash \{{\hat{Y}}\}}n^{\chi (H)+{\mathfrak {e}}(Y)-{\mathfrak {f}}(Y)}\right) \\&=1+O_{\varepsilon ,K}\left( \frac{(\log n)^{4}}{n}\right) +O_{\varepsilon ,K}\left( \sum _{H\in {{{\mathcal {Q}}}}\left( Y\right) \backslash \{{\hat{Y}}\}}n^{\chi (H)+{\mathfrak {e}}(Y)-{\mathfrak {f}}(Y)}\right) . \end{aligned}$$

Finally, because for every \(H\in {{{\mathcal {Q}}}}\left( Y\right) {\setminus }\{{\hat{Y}}\}\) we have \(\chi \left( H\right) +{\mathfrak {e}}\left( Y\right) -{\mathfrak {f}}\left( Y\right) \le -1\) and \(m<n\),

$$\begin{aligned} \sum _{H\in {{{\mathcal {Q}}}}\left( Y\right) \backslash \{{\hat{Y}}\}}n^{\chi (H) +{\mathfrak {e}}(Y)-{\mathfrak {f}}(Y)}= & {} \sum _{H\in {{{\mathcal {Q}}}}\left( Y\right) \backslash \{{\hat{Y}}\}} \left( \frac{n}{m}\right) ^{\chi (H)+{\mathfrak {e}}(Y)-{\mathfrak {f}}(Y)}m^{\chi (H)+{\mathfrak {e}}(Y)-{\mathfrak {f}}(Y)}\\\le & {} \frac{m}{n}\sum _{H\in {{{\mathcal {Q}}}}\left( Y\right) \backslash \{{\hat{Y}}\}} m^{\chi (H)+{\mathfrak {e}}(Y)-{\mathfrak {f}}(Y)} {\mathop {\ll _{\varepsilon }}\limits ^{\text {Cor.}~5.25}} \frac{m}{n}\frac{{\mathbb {E}}_{m}^{\mathrm {emb}}(Y)}{m^{\chi (Y)}}, \end{aligned}$$

concluding the proof of the proposition. \(\square \)

6 Proof of Theorem 1.11

We suggest that the reader consult the overview in Section 1.3 before reading this section of the paper.

6.1 Setup.

We remind the reader that \(g=2\). We are given \(c>0\), and an element \(\gamma \in \Gamma \) of cyclic word length \(\ell _{w}(\gamma )\le c\log n\). We assume that \(\gamma \) is not a proper power of another element of \(\Gamma \). Recall that \({\mathcal {C}}_{\gamma }\) is an annular tiled surface associated to \(\gamma \) as in Example 3.5. By Lemma 3.20,

$$\begin{aligned} {\mathbb {E}}_{n}\left[ \mathsf {fix}_{\gamma }\right] ={\mathbb {E}}_{n}\left( {\mathcal {C}}_{\gamma }\right) , \end{aligned}$$

where \({\mathbb {E}}_{n}({\mathcal {C}}_{\gamma })\) is the expected number of morphisms from \({\mathcal {C}}_{\gamma }\) to the random surface \(X_{\phi }\). Let \(\varepsilon =\frac{1}{32}\) (for general g, \(\varepsilon =\frac{1}{16g}\)) and let \({\mathcal {R}}_{\varepsilon }({\mathcal {C}}_{\gamma })\) be the finite resolution of \({\mathcal {C}}_{\gamma }\) provided by Definition 3.23 and Theorem 3.24. Each element of this resolution is a morphism \(h:{\mathcal {C}}_{\gamma }\rightarrow W_{h}\) where \(W_{h}\) is a tiled surface. By Lemma 3.22 we have for any \(n\ge 1\)

$$\begin{aligned} {\mathbb {E}}_{n}\left[ \mathsf {fix}_{\gamma }\right] =\sum _{h\in \mathcal {R_{\varepsilon }}({\mathcal {C}}_{\gamma })}{\mathbb {E}}_{n}^{\mathrm {emb}}\left( W_{h}\right) , \end{aligned}$$
(6.1)

where \({\mathbb {E}}_{n}^{\mathrm {emb}}\left( W_{h}\right) \) is the expected number of embeddings of \(W_{h}\) into the random tiled surface \(X_{\phi }\). Associated to each \(W_{h}\) are \({\mathfrak {v}}(W_{h})\), \({\mathfrak {e}}(W_{h})\), and \({\mathfrak {f}}(W_{h})\), the numbers of vertices, edges, and faces of \(W_{h}\), respectively. Also associated to \(W_{h}\) are \({\mathfrak {d}}(W_{h})\), the number of edges in the boundary of \(W_{h}\), \(\chi (W_{h})\), the topological Euler characteristic of \(W_{h}\), and \({\mathfrak {D}}(W_{h})={\mathfrak {v}}(W_{h})-{\mathfrak {f}}(W_{h})\).

By Corollary 3.25, there is a constant \(K=K(c)>0\), such that for each \(h\in {\mathcal {R}}_{\varepsilon }({\mathcal {C}}_{\gamma })\), and for \(n\ge 3\), we have

$$\begin{aligned} {\mathfrak {d}}(W_{h})&\le K\log n,\\ {\mathfrak {f}}(W_{h})&\le K(\log n)^{2}. \end{aligned}$$

By Lemma 3.6 we have \({\mathfrak {v}}(W_{h})\le {\mathfrak {d}}(W_{h})+{\mathfrak {f}}(W_{h})\), so \({\mathfrak {D}}\left( W_{h}\right) \le {\mathfrak {d}}\left( W_{h}\right) \) and

$$\begin{aligned} {\mathfrak {D}}(W_{h})\le K\log n. \end{aligned}$$

We also have \({\mathfrak {e}}(W_{h})\le 4{\mathfrak {v}}(W_{h})\) by (3.2). Hence by increasing K if necessary we can also ensure

$$\begin{aligned} {\mathfrak {v}}(W_{h}),{\mathfrak {e}}(W_{h})\le K(\log n)^{2}. \end{aligned}$$

6.2 Part I: the contribution from non-\(\varepsilon \)-adapted surfaces.

Our first goal is to control the contribution to \({\mathbb {E}}_{n}[\mathsf {fix}_{\gamma }]\) in (6.1) from non-\(\varepsilon \)-adapted surfaces. Let \(\mathcal {R_{\varepsilon }}^{(\text {non-}\varepsilon \text {-ad})}\left( {\mathcal {C}}_{\gamma }\right) \) denote the set of morphisms \(h:{\mathcal {C}}_{\gamma }\rightarrow W_{h}\) in \(\mathcal {R_{\varepsilon }}({\mathcal {C}}_{\gamma })\) such that \(W_{h}\) is not \(\varepsilon \)-adapted. In particular, such \(W_{h}\) is boundary reduced and \({\mathfrak {f}}\left( W_{h}\right) >{\mathfrak {d}}\left( W_{h}\right) \).

Proposition 6.1

There is a constant \(A>0\) such that for any \(c>0\), if \(\ell _{w}(\gamma )\le c\log n\), then

$$\begin{aligned} \sum _{h\in \mathcal {R_{\varepsilon }}^{(\text {non-}\varepsilon \text {-ad})}\left( {\mathcal {C}}_{\gamma }\right) }{\mathbb {E}}_{n}^{\mathrm {emb}}\left( W_{h}\right) \ll _{c}\frac{(\log n)^{A}}{n}. \end{aligned}$$

Proof

We first do some counting. Let us count \(h\in \mathcal {R_{\varepsilon }}^{(\text {non-}\varepsilon \text {-ad})}\left( {\mathcal {C}}_{\gamma }\right) \) by their value of \({\mathfrak {D}}(W_{h})\) and \({\mathfrak {f}}(W_{h})\). By Corollary 3.25 every \(h\in \mathcal {R_{\varepsilon }}^{(\text {non-}\varepsilon \text {-ad})}\left( {\mathcal {C}}_{\gamma }\right) \) has

$$\begin{aligned} \chi (W_{h})<-{\mathfrak {f}}(W_{h})<-{\mathfrak {d}}(W_{h}). \end{aligned}$$
(6.2)

Combining (6.2) with Lemma 3.6 yields

$$\begin{aligned} 0\le {\mathfrak {D}}(W_{h})\le {\mathfrak {d}}\left( W_{h}\right) <{\mathfrak {f}}(W_{h}). \end{aligned}$$
(6.3)

Notice that (6.3) implies \({\mathfrak {f}}(W_{h})\ge 1\). First we bound the number of possible \(W_{h}\) with \({\mathfrak {D}}(W_{h})={\mathfrak {D}}_{0}\) and \({\mathfrak {f}}(W_{h})={\mathfrak {f}}_{0}\) for fixed \({\mathfrak {D}}_{0}<{\mathfrak {f}}_{0}\). Note that in this case \({\mathfrak {v}}(W_{h})={\mathfrak {v}}_{0}{\mathop {=}\limits ^{\mathrm {def}}}{\mathfrak {D}}_{0}+{\mathfrak {f}}_{0}\). We may over-count the number of \(W_{h}\) with \({\mathfrak {v}}_{0}\) vertices by counting the number of \(W_{h}\) together with a labeling of their vertices by \([{\mathfrak {v}}_{0}]\). We first construct the one-skeleton of such a tiled surface: there are at most \({\mathfrak {v}}_{0}^{{\mathfrak {v}}_{0}}\) choices for the a-labeled edges, and likewise for each of the b-, c-, and d-labeled edges. Because the \(W_{h}\) are all boundary reduced, there is an octagon attached to any closed \(\left[ a,b\right] \left[ c,d\right] \) path, so the one-skeleton completely determines the entire tiled surface. Hence there are at most \({\mathfrak {v}}_{0}^{4{\mathfrak {v}}_{0}}\) choices for \(W_{h}\) with \({\mathfrak {v}}(W_{h})={\mathfrak {v}}_{0}\).

We also have to estimate how many ways there are to map \({\mathcal {C}}_{\gamma }\) into such a \(W_{h}\). Fixing arbitrarily a vertex v of \({\mathcal {C}}_{\gamma }\), any morphism \({\mathcal {C}}_{\gamma }\rightarrow W_{h}\) is uniquely determined by where v goes; hence there are at most \({\mathfrak {v}}_{0}\) morphisms and so in total there are at most

$$\begin{aligned} {\mathfrak {v}}_{0}^{4{\mathfrak {v}}_{0}+1}&\le {\mathfrak {v}}_{0}^{5{\mathfrak {v}}_{0}}=({\mathfrak {D}}_{0}+{\mathfrak {f}}_{0})^{5({\mathfrak {D}}_{0}+{\mathfrak {f}}_{0})}\le (2{\mathfrak {f}}_{0})^{10{\mathfrak {f}}_{0}} \end{aligned}$$

elements \(h\in \mathcal {R_{\varepsilon }}^{(\text {non-}\varepsilon \text {-ad})}\left( {\mathcal {C}}_{\gamma }\right) \) with \({\mathfrak {D}}(W_{h})={\mathfrak {D}}_{0}\) and \({\mathfrak {f}}(W_{h})={\mathfrak {f}}_{0}\). Since \({\mathfrak {D}}_{0}\) takes at most \(K\log n\) values, there are at most \(K\log n\cdot (2{\mathfrak {f}}_{0})^{10{\mathfrak {f}}_{0}}\) elements \(h\in \mathcal {R_{\varepsilon }}^{(\text {non-}\varepsilon \text {-ad})}\left( {\mathcal {C}}_{\gamma }\right) \) with \({\mathfrak {f}}(W_{h})={\mathfrak {f}}_{0}\).

We are going to use Theorem 5.1 that relates \({\mathbb {E}}_{n}^{\mathrm {emb}}\left( W_{h}\right) \) to a certain quantity \(\Xi _{n}(W_{h})\). By Proposition 5.11 there is \(A_{0}>1\) such that for \(h\in \mathcal {R_{\varepsilon }}^{(\text {non-}\varepsilon \text {-ad})}\left( {\mathcal {C}}_{\gamma }\right) \)

$$\begin{aligned} \left| \Xi _{n}(W_{h})\right| \ll _{K}\left( A_{0}{\mathfrak {D}}(W_{h})\right) ^{A_{0} {\mathfrak {D}}(W_{h})}\le \left( A_{0}{\mathfrak {f}}(W_{h})\right) ^{A_{0}{\mathfrak {f}}(W_{h})}, \end{aligned}$$
(6.4)

so by Theorem 5.1, Corollary 4.5, and Lemma 5.23 we get

$$\begin{aligned} {\mathbb {E}}_{n}^{\mathrm {emb}}\left( W_{h}\right)&{\mathop {=}\limits ^{\mathrm {Thm}~5.1}}&\frac{n!^{3}}{|{\mathbb {X}}_{n}|}\frac{(n)_{{\mathfrak {v}}(W_{h})}(n)_{{\mathfrak {f}}(W_{h})}}{\prod _{f} (n)_{{\mathfrak {e}}_{f}(W_{h})}}\Xi _{n}\left( W_{h}\right) {\mathop {\ll }\limits ^{\mathrm {Cor.}~4.5}}\frac{(n)_{{\mathfrak {v}}(W_{h})}(n)_{{\mathfrak {f}}(W_{h})}}{\prod _{f}(n)_{{\mathfrak {e}}_{f}(W_{h})}}\Xi _{n}\left( W_{h}\right) \nonumber \\&{\mathop {\ll _{K}}\limits ^{\mathrm {Lemma}~5.23}}&n^{\chi (W_{h})}\Xi _{n}\left( W_{h}\right) {\mathop {\ll _{K}}\limits ^{(6.4)}}n^{\chi (W_{h})}\left( A_{0}{\mathfrak {f}}(W_{h})\right) ^{A_{0}{\mathfrak {f}}(W_{h})}. \end{aligned}$$
(6.5)

Therefore, for every \(1\le {\mathfrak {f}}_{0}\le K\left( \log n\right) ^{2}\),

$$\begin{aligned}&\sum _{\begin{array}{c} h\in \mathcal {R_{\varepsilon }}^{(\text {non-}\varepsilon \text {-ad})}\left( {\mathcal {C}}_{\gamma }\right) \\ {\mathfrak {f}}(W_{h})={\mathfrak {f}}_{0} \end{array} }{\mathbb {E}}_{n}^{\mathrm {emb}}\left( W_{h}\right) \ll _{K} (A_{0}{\mathfrak {f}}_{0})^{A_{0}{\mathfrak {f}}_{0}}\sum _{\begin{array}{c} h\in \mathcal {R_{\varepsilon }}^{(\text {non-}\varepsilon \text {-ad})}\left( {\mathcal {C}}_{\gamma }\right) \\ {\mathfrak {f}}(W_{h})={\mathfrak {f}}_{0} \end{array} }n^{\chi (W_{h})}\\&\quad {\mathop {\le }\limits ^{(6.2)}}(A_{0}{\mathfrak {f}}_{0})^{A_{0}{\mathfrak {f}}_{0}}\sum _{\begin{array}{c} h\in \mathcal {R_{\varepsilon }}^{(\text {non-}\varepsilon \text {-ad})}\left( {\mathcal {C}}_{\gamma }\right) \\ {\mathfrak {f}}(W_{h})={\mathfrak {f}}_{0} \end{array} }n^{-{\mathfrak {f}}_{0}}\\&\,\,\quad \le K\log n\left( \frac{\left( A_{0}{\mathfrak {f}}_{0}\right) ^{A_{0}} \left( 2{\mathfrak {f}}_{0}\right) ^{10}}{n}\right) ^{{\mathfrak {f}}_{0}}\\&\,\,\quad \le K\log n\cdot \left( \frac{A_{0}^{A_{0}}2^{10}\left( K(\log n)^{2}\right) ^{A_{0}+10}}{n}\right) ^{{\mathfrak {f}}_{0}}. \end{aligned}$$
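The second-to-last line simply combines the count of morphisms obtained above with the per-term bound; explicitly (a routine regrouping of exponents),

$$\begin{aligned} K\log n\cdot \left( 2{\mathfrak {f}}_{0}\right) ^{10{\mathfrak {f}}_{0}}\cdot \left( A_{0}{\mathfrak {f}}_{0}\right) ^{A_{0}{\mathfrak {f}}_{0}}n^{-{\mathfrak {f}}_{0}}=K\log n\left( \frac{\left( A_{0}{\mathfrak {f}}_{0}\right) ^{A_{0}}\left( 2{\mathfrak {f}}_{0}\right) ^{10}}{n}\right) ^{{\mathfrak {f}}_{0}}. \end{aligned}$$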

So

$$\begin{aligned} \sum _{h\in \mathcal {R_{\varepsilon }}^{(\text {non-}\varepsilon \text {-ad})}\left( {\mathcal {C}}_{\gamma }\right) }{\mathbb {E}}_{n}^{\mathrm {emb}}\left( W_{h}\right)= & {} \sum _{{\mathfrak {f}}_{0}=1}^{K(\log n)^{2}}\sum _{\begin{array}{c} h\in \mathcal {R_{\varepsilon }}^{(\text {non-}\varepsilon \text {-ad})}\left( {\mathcal {C}}_{\gamma }\right) \\ {\mathfrak {f}}(W_{h})={\mathfrak {f}}_{0} \end{array} }{\mathbb {E}}_{n}^{\mathrm {emb}}\left( W_{h}\right) \\\ll & {} _{K} K\log n\cdot \sum _{{\mathfrak {f}}_{0}=1}^{K(\log n)^{2}}\left( \frac{A_{0}^{A_{0}}2^{10}\left( K(\log n)^{2}\right) ^{A_{0}+10}}{n}\right) ^{{\mathfrak {f}}_{0}}\\\ll & {} _{K}\frac{(\log n)^{2A_{0}+21}}{n}, \end{aligned}$$

where the last inequality uses the fact that \(\frac{A_{0}^{A_{0}}2^{10}\left( K(\log n)^{2}\right) ^{A_{0}+10}}{n}\le \frac{1}{2}\) for \(n\gg _{K}1\). \(\square \)
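For completeness, the geometric-series step at the end of the proof can be spelled out. Writing \(x_{n}=\frac{A_{0}^{A_{0}}2^{10}\left( K(\log n)^{2}\right) ^{A_{0}+10}}{n}\) for the bracketed ratio, once \(x_{n}\le \frac{1}{2}\) we have

$$\begin{aligned} \sum _{{\mathfrak {f}}_{0}=1}^{K(\log n)^{2}}x_{n}^{{\mathfrak {f}}_{0}}\le \sum _{{\mathfrak {f}}_{0}=1}^{\infty }x_{n}^{{\mathfrak {f}}_{0}}=\frac{x_{n}}{1-x_{n}}\le 2x_{n}\ll _{K}\frac{(\log n)^{2(A_{0}+10)}}{n}, \end{aligned}$$

and multiplying by the prefactor \(K\log n\) accounts for the exponent \(2A_{0}+21\).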

6.3 Part II: the contribution from \(\varepsilon \)-adapted surfaces.

Write \(\mathcal {R_{\varepsilon }}^{(\varepsilon \text {-ad})}({\mathcal {C}}_{\gamma })\subset {\mathcal {R}}_{\varepsilon }({\mathcal {C}}_{\gamma })\) for the collection of morphisms \(h:{\mathcal {C}}_{\gamma }\rightarrow W_{h}\) in \({\mathcal {R}}_{\varepsilon }({\mathcal {C}}_{\gamma })\) such that \(W_{h}\) is \(\varepsilon \)-adapted. In light of Proposition 6.1 it remains to deal with the contributions to \({\mathbb {E}}_{n}[\mathsf {fix}_{\gamma }]\) from \(\mathcal {R_{\varepsilon }}^{(\varepsilon \text {-ad})}({\mathcal {C}}_{\gamma })\). Indeed we have by Proposition 6.1 and (6.1)

$$\begin{aligned} {\mathbb {E}}_{n}[\mathsf {fix}_{\gamma }]=\sum _{h\in \mathcal {R_{\varepsilon }}^{(\varepsilon \text {-ad})}({\mathcal {C}}_{\gamma })}{\mathbb {E}}_{n}^{\mathrm {emb}}\left( W_{h}\right) +O_{c}\left( \frac{(\log n)^{A}}{n}\right) . \end{aligned}$$
(6.6)

Recall that if \(W_{h}\) is \(\varepsilon \)-adapted, it is, in particular, strongly boundary reduced, and so by [MP20, Section 1.6], \({\mathbb {E}}_{n}^{\mathrm {emb}}\left( W_{h}\right) =n^{\chi \left( W_{h}\right) }\left[ 1+O\left( n^{-1}\right) \right] \) as \(n\rightarrow \infty \). By Theorem 1.10, \({\mathbb {E}}_{n}[\mathsf {fix}_{\gamma }]=1+O\left( n^{-1}\right) \). Comparing this with (6.6), we conclude that there is exactly one \(h_{0}\in {\mathcal {R}}_{\varepsilon }({\mathcal {C}}_{\gamma })\) with \(\chi \left( W_{h_{0}}\right) =0\). This \(h_{0}\) also satisfies that \(W_{h_{0}}\) is \(\varepsilon \)-adapted.Footnote 17

Still, we are missing some information about \(\mathcal {R_{\varepsilon }}^{(\varepsilon \text {-ad})}({\mathcal {C}}_{\gamma })\) that we will need: for example, the ability to count how many \(h:{\mathcal {C}}_{\gamma }\rightarrow W_{h}\) there are in \(\mathcal {R_{\varepsilon }}^{(\varepsilon \text {-ad})}({\mathcal {C}}_{\gamma })\) with different orders of contributions (i.e. \(n^{\chi (W_{h})}\)) to (6.6). We are going to use a trick to get around this missing information.

Let \(\eta \in (0,\frac{1}{100})\) be the parameter provided by Proposition 5.12 for the current \(\varepsilon =\frac{1}{32}\) (the reason for choosing \(\eta \) like this now is just so that we can momentarily apply Corollary 5.25 and Proposition 5.27). Let m be an auxiliary parameter given by

$$\begin{aligned} m=\left\lceil \left( K\log n\right) ^{1/\eta }\right\rceil \end{aligned}$$

so that when \(n\gg _{c}1\), for all \(h\in \mathcal {R_{\varepsilon }}^{(\varepsilon \text {-ad})}({\mathcal {C}}_{\gamma })\), \({\mathfrak {D}}(W_{h})\le K\log n\le m^{\eta }\) and \({\mathfrak {v}}(W_{h}),{\mathfrak {e}}(W_{h}),{\mathfrak {f}}(W_{h})\le K(\log n)^{2}\le m^{1/4}\). Moreover, \((\log n)^{100}\ll _{c}m\ll _{c}(\log n)^{\frac{1}{\eta }}\). To exploit the fact that each \({\mathbb {E}}_{n}^{\mathrm {emb}}(W_{h})\) is controlled by \({\mathbb {E}}_{m}^{\mathrm {emb}}(W_{h})\) (Corollary 5.25 and Proposition 5.27), we will at two points use the inequality
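We record why this choice of m has the stated properties; this is a routine check using \(\eta <\frac{1}{100}\) and \(m\ge \left( K\log n\right) ^{1/\eta }\):

$$\begin{aligned} m^{\eta }\ge K\log n,\qquad m^{1/4}\ge \left( K\log n\right) ^{\frac{1}{4\eta }}\ge \left( K\log n\right) ^{25}\ge K(\log n)^{2}\quad \text {for }n\gg _{c}1, \end{aligned}$$

while \(m\le \left( K\log n\right) ^{1/\eta }+1\ll _{c}(\log n)^{1/\eta }\) because \(K=K(c)\) and \(\eta \) are constants.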

$$\begin{aligned} m&\ge {\mathbb {E}}_{m}[\mathsf {fix}_{\gamma }]{\mathop {=}\limits ^{(6.1)}}\sum _{h\in \mathcal {R_{\varepsilon }}({\mathcal {C}}_{\gamma })}{\mathbb {E}}_{m}^{\mathrm {emb}}\left( W_{h}\right) \ge \sum _{h\in \mathcal {R_{\varepsilon }}^{(\varepsilon \text {-ad})}({\mathcal {C}}_{\gamma })}{\mathbb {E}}_{m}^{\mathrm {emb}}\left( W_{h}\right) . \end{aligned}$$
(6.7)

We begin with

$$\begin{aligned} \sum _{h\in \mathcal {R_{\varepsilon }}^{(\varepsilon \text {-ad})} ({\mathcal {C}}_{\gamma })}{\mathbb {E}}_{n}^{\mathrm {emb}}\left( W_{h}\right)&{\mathop {=}\limits ^{\text {Prop}.~5.27}} \sum _{h\in \mathcal {R_{\varepsilon }}^{(\varepsilon \text {-ad})} ({\mathcal {C}}_{\gamma })}n^{\chi (W_{h})}\left[ 1+O_{c}\left( \frac{(\log n)^{4}}{n}\right) \right. \nonumber \\&\qquad \left. +O_{c} \left( \frac{m}{n}\frac{{\mathbb {E}}_{m}^{\mathrm {emb}}(W_{h})}{m^{\chi \left( W_{h}\right) }} \right) \right] \nonumber \\&\,\,\quad =\sum _{h\in \mathcal {R_{\varepsilon }}^{(\varepsilon \text {-ad})} ({\mathcal {C}}_{\gamma })}n^{\chi (W_{h})}\left[ 1+O_{c}\left( \frac{(\log n)^{4}}{n}\right) \right] \nonumber \\&\quad \quad +O_{c}\left( \frac{m}{n}\sum _{h\in \mathcal {R_{\varepsilon }}^{(\varepsilon \text {-ad})} ({\mathcal {C}}_{\gamma })}{\mathbb {E}}_{m}^{\mathrm {emb}}(W_{h})\right) \nonumber \\&\,\quad {\mathop {=}\limits ^{(6.7)}}\sum _{h\in \mathcal {R_{\varepsilon }}^{(\varepsilon \text {-ad})}({\mathcal {C}}_{\gamma })}n^{\chi (W_{h})}\left( 1+O_{c}\left( \frac{(\log n)^{4}}{n}\right) \right) +O_{c}\left( \frac{m^{2}}{n}\right) . \end{aligned}$$
(6.8)

The middle estimate above used that \(\chi (W_{h})\le 0\) for all \(h\in \mathcal {R_{\varepsilon }}^{(\varepsilon \text {-ad})}({\mathcal {C}}_{\gamma })\), and so \(\left( \frac{n}{m}\right) ^{\chi \left( W_{h}\right) }\le 1\). The contribution to (6.8) from \(h_{0}\) is \(1+O_{c}\left( \frac{(\log n)^{4}}{n}\right) \). So we obtain

$$\begin{aligned} \sum _{h\in \mathcal {R_{\varepsilon }}^{(\varepsilon \text {-ad})}({\mathcal {C}}_{\gamma })}{\mathbb {E}}_{n}^{\mathrm {emb}}\left( W_{h}\right)= & {} 1+O_{c}\left( \frac{(\log n)^{4}}{n}\right) \\&+O\left( \frac{m^{2}}{n}\right) +O\left( \sum _{\begin{array}{c} h\in \mathcal {R_{\varepsilon }}^{(\varepsilon \text {-ad})}({\mathcal {C}}_{\gamma })\\ \chi (W_{h})<0 \end{array} }n^{\chi (W_{h})}\right) . \nonumber \\ \end{aligned}$$
(6.9)

To deal with the last error term, we relate it to the expectations at level m. Indeed,

$$\begin{aligned}&\left| \sum _{\begin{array}{c} h\in \mathcal {R_{\varepsilon }}^{(\varepsilon \text {-ad})}({\mathcal {C}}_{\gamma })\\ \chi (W_{h})<0 \end{array} }n^{\chi (W_{h})}\right| = \sum _{\begin{array}{c} h\in \mathcal {R_{\varepsilon }}^{(\varepsilon \text {-ad})}({\mathcal {C}}_{\gamma })\\ \chi (W_{h})<0 \end{array} }\left( \frac{n}{m}\right) ^{\chi (W_{h})}m^{\chi (W_{h})}\le \frac{m}{n}\sum _{\begin{array}{c} h\in \mathcal {R_{\varepsilon }}^{(\varepsilon \text {-ad})}({\mathcal {C}}_{\gamma }) \end{array} }m^{\chi (W_{h})}\\&\qquad \qquad \qquad \qquad {\mathop {\ll }\limits ^{\text {Cor}.~5.25}} \frac{m}{n}\sum _{\begin{array}{c} h\in \mathcal {R_{\varepsilon }}^{(\varepsilon \text {-ad})}({\mathcal {C}}_{\gamma }) \end{array} }{\mathbb {E}}_{m}^{\mathrm {emb}}\left( W_{h}\right) {\mathop {\le }\limits ^{(6.7)}}\frac{m^{2}}{n}. \end{aligned}$$

Incorporating this estimate into (6.9) gives

$$\begin{aligned} \sum _{h\in \mathcal {R_{\varepsilon }}^{(\varepsilon \text {-ad})}({\mathcal {C}}_{\gamma })}{\mathbb {E}}_{n}^{\mathrm {emb}}\left( W_{h}\right)&=1+O_{c}\left( \frac{(\log n)^{4}}{n}\right) +O\left( \frac{m^{2}}{n}\right) =1+O_{c}\left( \frac{(\log n)^{A}}{n}\right) , \end{aligned}$$

where \(A=\frac{2}{\eta }\): indeed, \(m^{2}\ll _{c}(\log n)^{2/\eta }\) and \(2/\eta >200>4\), so both error terms are absorbed in \(O_{c}\left( \frac{(\log n)^{A}}{n}\right) \). Combining this with (6.6) and increasing A if necessary we obtain

$$\begin{aligned} {\mathbb {E}}_{n}[\mathsf {fix}_{\gamma }]=1+O_{c}\left( \frac{(\log n)^{A}}{n}\right) \end{aligned}$$

as required. This concludes the proof of Theorem 1.11. \(\square \)

Remark 6.2

The arguments above show that

$$\begin{aligned} \sum _{\begin{array}{c} h\in \mathcal {R_{\varepsilon }}^{(\varepsilon \text {-ad})}({\mathcal {C}}_{\gamma }) \end{array} }m^{\chi (W_{h})}\ll \sum _{\begin{array}{c} h\in \mathcal {R_{\varepsilon }}^{(\varepsilon \text {-ad})}({\mathcal {C}}_{\gamma }) \end{array} }{\mathbb {E}}_{m}^{\mathrm {emb}}\left( W_{h}\right) \ll m, \end{aligned}$$

hence the number of elements \(h\in \mathcal {R_{\varepsilon }}^{(\varepsilon \text {-ad})}({\mathcal {C}}_{\gamma })\) with \(\chi (W_{h})=\chi \) is \(\ll m^{1-\chi }\). In general, given arbitrary \(\gamma \in \Gamma \) and \(\epsilon >0\), we obtain by the same argument that for some \(\eta =\eta (\epsilon )>0\) we have

$$\begin{aligned} \#\{\,h\in \mathcal {R_{\varepsilon }}^{(\varepsilon \text {-ad})}({\mathcal {C}}_{\gamma })\,:\,\chi (W_{h})=\chi \,\}\ll _{\epsilon }\left( \ell (\gamma )^{\frac{1}{\eta }}\right) ^{1-\chi }. \end{aligned}$$

We mention this side-effect of our proof in case it is of independent interest.
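The counting step behind the remark is elementary, and we record it explicitly: for a fixed value \(\chi \le 0\), each \(h\) with \(\chi (W_{h})=\chi \) contributes \(m^{\chi }\) to the sum above, so

$$\begin{aligned} \#\left\{ h\in \mathcal {R_{\varepsilon }}^{(\varepsilon \text {-ad})}({\mathcal {C}}_{\gamma })\,:\,\chi (W_{h})=\chi \right\} \cdot m^{\chi }\le \sum _{h\in \mathcal {R_{\varepsilon }}^{(\varepsilon \text {-ad})}({\mathcal {C}}_{\gamma })}m^{\chi (W_{h})}\ll m, \end{aligned}$$

and dividing through by \(m^{\chi }\) gives the stated bound \(\ll m^{1-\chi }\).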