1 Introduction

In this relatively short, highly technical paper, we prove a substantial extension of a recent result of Amanatidis and Kleer [1, Theorem 1.3]. Our proof is based on the unified approach developed in Erdős et al. [4] for P-stable degree sequences. For the sake of brevity, in this section we concisely describe the problem itself, but we do not give a detailed description of the background. For further details, the diligent reader is referred to Amanatidis and Kleer and Erdős et al. [1, 4].

Approximate sampling of graphs with given degree sequences plays an increasingly important role in modeling various real-life dynamics. One basic way to study it is the switch Markov chain method, made popular by Kannan et al. [9]. The currently best result obtained via this method is that of Erdős et al. [4], where it is proved that the switch Markov chain is rapidly mixing on P-stable degree sequences. The notion of P-stability was introduced by Jerrum and Sinclair [8] and was first studied for its own sake by Jerrum et al. [6].

In real-life applications, it is not always possible to know the exact degree sequence of the targeted network. A natural scenario where this problem arises is hypothesis testing on social networks that are only partially observed. Therefore, it can happen that we have to sample networks with slightly different degree sequences. One way to study this situation is via Markov chain decompositions, where an additional Markov chain moves among the component chains. A good example of this approach is the proof of Amanatidis and Kleer [1, Theorem 1.1].

Another possibility is to introduce further local operations, since the switch operation itself does not change the degree sequence. Such operations are the hinge flip and the toggle (deletion-insertion) operations. These two latter operations were introduced by Jerrum and Sinclair in their seminal work on approximate 0-1 permanents [7]. (The number of perfect matchings of a bipartite graph is equal to the permanent of its bipartite adjacency matrix.) These three operations together are often applied in practical network building applications (as pointed out by Coolen et al. [2]), but without any theoretical guarantee of a correct result.

Rechner et al. [10] defined a Markov chain with these three local operations for bipartite graphs. Amanatidis and Kleer recognized in their important recent paper [1] the following very interesting fact: assume that the inconsistencies in the degree sequences are never bigger than one coordinate-wise (each degree can be i or \(i+1\)), and the degree intervals are placed close to a given constant r (each interval is contained in \([r-r^\alpha , r+r^\alpha ]\) for some \(\alpha <1/2\)). The authors coined the name near-regular degree intervals for this degree sequence property and the name degree interval Markov chain for this whole setup. Their result is that the degree interval Markov chain for near-regular degree intervals is rapidly mixing.

Our main result (Theorem 2.20) is that this Markov chain is rapidly mixing for such tight degree intervals where they are placed at P-stable degree sequences. Since all degree sequences close to some constant are P-stable, but P-stable degree sequences can be very far from regular sequences, our result is clearly a very extensive generalization of the theorem of Amanatidis and Kleer.

To our great surprise, it turned out that this result can be derived from the proof of the main theorem of Erdős et al. [4]. To that end, we had to analyze in detail the auxiliary structures of that proof and to extend them to cover this setup. The result of this analysis is the notion of a precursor (Sect. 3.3). In turn, this notion is conducive to a rather short proof of the rapid mixing property. Therefore, the main task in this paper is to define the appropriate precursor.

2 Definitions and Notation

Many of the definitions in this section are extensions or generalizations of notions introduced in Erdős et al. [4]. We will alert the reader whenever this is the case.

We consider \({\mathbb {N}}\) to be the set of non-negative integers. Let \([n]=\{1,\ldots ,n\}\) denote the integers from 1 to n, and let \(\left( {\begin{array}{c}[n]\\ k\end{array}}\right) \) denote the set of k-element subsets of [n]. Given a subset \(S\subseteq [n]\), let \(\mathbb {1}_S: [n]\rightarrow \{0,1\}\) be the characteristic function of S, that is, \(\mathbb {1}_S(s)=1\Leftrightarrow s\in S\). We often use \(\uplus \) to emphasize that a union of pairwise disjoint sets is taken. The graphs in this paper are vertex-labeled and finite. Parallel edges and loops are forbidden, and unless otherwise stated, the labeled vertex set of an n-vertex graph is [n]. The line graph L(G) of a graph G is a graph on the vertex set E(G) (so the vertices of L(G) are taken from \(\left( {\begin{array}{c}[n]\\ 2\end{array}}\right) \)), where any two edges \(e,f\in E(G)\) that are adjacent in G are joined (by an edge). The line graph is also free of parallel edges and loops. A trail is a walk that does not visit any edge twice. An open trail starts and ends at two distinct vertices. A closed trail has neither a start nor an end vertex. Given a matrix \(M\in {\mathbb {Z}}^{n\times n}\), its \(\ell _1\)-norm is \(\Vert M\Vert _1=\sum _{ij}|M_{ij}|\).

Definition 2.1

Given two graphs on [n] as vertices, say, \(X=([n],E(X))\) and \(Y=([n],E(Y))\), we define their symmetric difference graph

$$\begin{aligned} X\triangle Y=([n],E(X)\triangle E(Y)). \end{aligned}$$

Definition 2.2

Given a set of edges \(R\subseteq \left( {\begin{array}{c}[n]\\ 2\end{array}}\right) \), we may treat R as a graph. If X is a graph on the vertex set [n], let

$$\begin{aligned} X\triangle R=([n],E(X)\triangle R). \end{aligned}$$

Definition 2.3

A degree sequence on n vertices is a vector \(d\in {\mathbb {N}}^n\) which is coordinate-wise at most \(n-1\). The set of realizations of d denotes the following set of graphs:

$$\begin{aligned} {\mathcal {G}}(d)=\left\{ G\ \Big |\ V(G)=[n],\ \deg _G(i)=d_i\ \ \forall i\in [n]\right\} , \end{aligned}$$

where \(\deg _G(i)\) is the degree of the ith vertex in G. The degree sequence d is graphic if \({\mathcal {G}}(d)\) is non-empty. A set of degree sequences \({\mathcal {D}}\) may contain graphic as well as non-graphic degree sequences.

The degree sequence of a graph G on [n] as vertices is the vector \(\deg _G=(\deg _G(1),\deg _G(2),\ldots ,\deg _G(n))\).

Definition 2.4

For a pair of vectors \(\ell ,u\in {\mathbb {N}}^n\) we write \(\ell \le u\) if and only if \(\ell \) is coordinate-wise less than or equal to u, that is, \(\ell _i\le u_i\) for all \(i\in [n]\). Furthermore, let

$$\begin{aligned}{}[\ell ,u]=\left\{ d\in {\mathbb {N}}^n\ | \ell \le d\le u\right\} . \end{aligned}$$

Definition 2.5

If \(\ell \le u\) are both degree sequences of length n, then \([\ell ,u]\) is a degree sequence interval. A degree sequence interval \([\ell ,u]\) is called thin if \(u_i\le \ell _i+1\) for all \(i\in [n]\). We denote the set of realizations of the degree sequence interval \([\ell ,u]\) by

$$\begin{aligned} {\mathcal {G}}(\ell ,u)=\bigcup _{d\in [\ell ,u]}{\mathcal {G}}(d). \end{aligned}$$

Remark 2.6

Not every degree sequence in \([\ell ,u]\) is necessarily graphic, even if both \(\ell \) and u are graphic.
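As a quick illustration of Remark 2.6, the following sketch (our own, with hypothetical helper names; not part of the paper) enumerates a thin interval and tests each member with the Erdős–Gallai criterion; for the thin interval \([(1,1,1,1),(2,2,2,2)]\), both endpoints are graphic while, e.g., \((2,1,1,1)\) is not.

```python
from itertools import product

def is_graphic(d):
    """Erdos-Gallai test: a sequence of non-negative integers is graphic
    iff its sum is even and every partial-sum condition holds."""
    d = sorted(d, reverse=True)
    if sum(d) % 2 == 1:
        return False
    n = len(d)
    for k in range(1, n + 1):
        if sum(d[:k]) > k * (k - 1) + sum(min(d[i], k) for i in range(k, n)):
            return False
    return True

def interval_members(lower, upper):
    """All degree sequences of the interval [lower, upper], with a flag
    telling whether each of them is graphic."""
    for d in product(*(range(l, u + 1) for l, u in zip(lower, upper))):
        yield d, is_graphic(d)

if __name__ == "__main__":
    for d, ok in interval_members((1, 1, 1, 1), (2, 2, 2, 2)):
        print(d, "graphic" if ok else "not graphic")
```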

Definition 2.7

Given a polynomial \(p\in {\mathbb {R}}[x]\), we say that a degree sequence \(d\in {\mathbb {N}}^n\) is p-stable if

$$\begin{aligned} \Big |{\mathcal {G}}(d)\cup \bigcup _{\{i,j\}\in \left( {\begin{array}{c}[n]\\ 2\end{array}}\right) }{\mathcal {G}}(d+\mathbb {1}_{\{i,j\}})\Big |\le p(n)\cdot |{\mathcal {G}}(d)|. \end{aligned}$$

Definition 2.8

A set of degree sequences \({\mathcal {D}}\) is p-stable if every degree sequence \(d\in {\mathcal {D}}\) is p-stable.

Definition 2.9

A set of degree sequences \({\mathcal {D}}\) is P-stable if there exists \(p\in {\mathbb {R}}[x]\) such that \({\mathcal {D}}\) is p-stable.

In Erdős et al. [4], only P-stability is defined, but in this paper, it is more convenient to also define p-stability.
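For very small n, Definition 2.7 can be checked by brute force. The sketch below (illustrative only; the function names are our own) computes the ratio \(|{\mathcal {G}}(d)\cup \bigcup _{\{i,j\}}{\mathcal {G}}(d+\mathbb {1}_{\{i,j\}})|/|{\mathcal {G}}(d)|\), so d is p-stable precisely when this ratio is at most p(n).

```python
from itertools import combinations

def realizations(target):
    """Brute-force list of labeled graphs on [n] (as frozensets of edges)
    whose degree sequence equals `target`; feasible only for tiny n."""
    n = len(target)
    pairs = list(combinations(range(n), 2))
    found = []
    for k in range(len(pairs) + 1):
        for edges in combinations(pairs, k):
            deg = [0] * n
            for u, v in edges:
                deg[u] += 1
                deg[v] += 1
            if deg == list(target):
                found.append(frozenset(edges))
    return found

def p_stability_ratio(d):
    """|G(d) u U_{i<j} G(d + 1_{ij})| / |G(d)|, as in Definition 2.7."""
    n = len(d)
    union = set(realizations(d))
    base_size = len(union)
    for i, j in combinations(range(n), 2):
        bumped = list(d)
        bumped[i] += 1
        bumped[j] += 1
        union |= set(realizations(bumped))
    return len(union) / base_size

if __name__ == "__main__":
    print(p_stability_ratio((1, 1, 1, 1)))   # ratio for a tiny near-regular sequence
```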

Remark 2.10

A finite set of degree sequences \({\mathcal {D}}\) is always P-stable.

Let us introduce a weaker stability notion for degree sequence intervals.

Definition 2.11

Given \(p\in {\mathbb {R}}[x]\), we say that a degree sequence interval \([\ell ,u]\subseteq {\mathbb {N}}^n\) is weakly p-stable if

$$\begin{aligned} \Big |\bigcup _{\{i,j\}\in \left( {\begin{array}{c}[n]\\ 2\end{array}}\right) }{\mathcal {G}}(\ell ,u+\mathbb {1}_{\{i,j\}})\Big |\le p(n)\cdot |{\mathcal {G}}(\ell ,u)|. \end{aligned}$$
(1)

Definition 2.12

A set \({\mathcal {I}}\) of degree sequence intervals is weakly P-stable if there exists \(p\in {\mathbb {R}}[x]\) such that every \([\ell ,u]\in {\mathcal {I}}\) is weakly p-stable. (Any finite \({\mathcal {I}}\) is weakly P-stable.)

Remark 2.13

If the set of degree sequences \([\ell ,u]\) is p-stable, then \([\ell ,u]\) is weakly p-stable.

Remark 2.14

It is indeed possible that \([\ell ,u]\) is weakly p-stable, but \([\ell ,u]\) (as a set of degree sequences) is not p-stable. For example, take \(\ell =(0)_{i=1}^n\) and \(u={(n-1)}_{i=1}^n\): the interval \([\ell ,u]\) is clearly weakly 1-stable (with the constant polynomial \(p\equiv 1\)), but most of the degree sequences on n vertices are not 1-stable.

Definition 2.15

(Degree interval Markov chain) Let us define the degree interval Markov chain \({\mathbb {G}}(\ell ,u)\). The state space of the Markov chain is \({\mathcal {G}}(\ell ,u)\). In the following, we define three types of transitions: switches, hinge-flips, and edge-toggles (Fig. 1). If the current state of the Markov chain is \(G\in {\mathcal {G}}(\ell ,u)\), then

  • with probability 1/2, the chain stays in G (the Markov chain is lazy),

  • with probability 1/6, pick four vertices \(a,b,c,d\) uniformly at random; the Markov chain changes its state to \(G'=G\triangle \{ab,cd,ac,bd\}\) if \(\deg _{G'}=\deg _G\) (in which case the transition is a switch), otherwise the chain stays in G,

  • with probability 1/6, pick three vertices \(a,b,c\) uniformly at random; the Markov chain changes its state to \(G''=G\triangle \{ab,bc\}\) if \(e(G'')=e(G)\) and \(G''\in {\mathcal {G}}(\ell ,u)\) (a hinge-flip), otherwise the chain stays in G,

  • with probability 1/6, pick a pair of vertices \(a,b\) uniformly at random; the Markov chain changes its state to \(G'''=G\triangle \{ab\}\) if \(G'''\in {\mathcal {G}}(\ell ,u)\) (an edge-toggle), otherwise the chain stays in G.

Fig. 1 The three types of operations employed by the degree interval Markov chain. Solid and dashed line segments represent edges and non-edges, respectively
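A minimal sketch of one transition of the degree interval Markov chain of Definition 2.15 is given below (our own illustration, not the authors' code); graphs are stored as sets of frozenset edges, and the exact convention for picking the random vertices only affects the transition probabilities by constant factors.

```python
import random

def degree_sequence(n, edges):
    deg = [0] * n
    for e in edges:
        for v in e:
            deg[v] += 1
    return deg

def in_interval(n, edges, lower, upper):
    return all(l <= d <= u
               for d, l, u in zip(degree_sequence(n, edges), lower, upper))

def chain_step(n, edges, lower, upper, rng=random):
    """One lazy transition of the degree interval Markov chain
    (Definition 2.15).  `edges` is a set of frozenset({u, v}) pairs."""
    r = rng.random()
    if r < 1 / 2:                        # stay put (laziness)
        return edges
    G = set(edges)
    if r < 1 / 2 + 1 / 6:                # attempted switch on a, b, c, d
        a, b, c, d = rng.sample(range(n), 4)
        H = G ^ {frozenset(p) for p in ((a, b), (c, d), (a, c), (b, d))}
        if degree_sequence(n, H) == degree_sequence(n, G):
            return H
    elif r < 1 / 2 + 2 / 6:              # attempted hinge-flip on a, b, c
        a, b, c = rng.sample(range(n), 3)
        H = G ^ {frozenset((a, b)), frozenset((b, c))}
        if len(H) == len(G) and in_interval(n, H, lower, upper):
            return H
    else:                                # attempted edge-toggle on a, b
        a, b = rng.sample(range(n), 2)
        H = G ^ {frozenset((a, b))}
        if in_interval(n, H, lower, upper):
            return H
    return edges                         # proposal rejected: stay in G
```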

We will use the following seminal result of Sinclair. Let \(\Pr _{\mathbb {G}}(x\rightarrow y)\) denote the transition probability from state x to y in the Markov chain \({\mathbb {G}}\). Let \(\sigma \equiv |V({\mathbb {G}})|^{-1}\) be the unique stationary distribution on \({\mathbb {G}}\). Given a multicommodity flow f on \({\mathbb {G}}\), let \(\ell (f)\) be the length of the longest path with positive flow, and let \(\rho (f)\) be the maximum loading through an oriented edge of the Markov graph, that is,

$$\begin{aligned} \rho (f)=\max _{xy\in E({\mathbb {G}})}\frac{1}{\sigma (x)\Pr _{\mathbb {G}}(x\rightarrow y)}\sum _{X,Y\in V({\mathbb {G}})}\ \sum _{xy\in \gamma \in \Gamma _{X,Y}}f(\gamma ), \end{aligned}$$
(2)

where \(\Gamma _{X,Y}\) is the set of all simple directed paths from X to Y in \({\mathbb {G}}\).

Theorem 2.16

(adapted from Sinclair [11, Proposition 1 and Corollary 6’]) Let \({\mathbb {G}}\) be an irreducible, symmetric, reversible, and lazy Markov chain. Let f be a multicommodity flow on \({\mathbb {G}}\) which sends \(\sigma (X)\sigma (Y)\) commodity between any ordered pair \(X,Y\in V({\mathbb {G}})\). Then the mixing time of the Markov chain, i.e., the time by which it converges \(\varepsilon \)-close in \(\ell _1\)-norm to \(\sigma \) when started from any element of \(V({\mathbb {G}})\), satisfies

$$\begin{aligned} \tau _{\mathbb {G}}(\varepsilon )\le \rho (f)\cdot \ell (f)\cdot \left( \log |V({\mathbb {G}})|-\log \varepsilon \right) . \end{aligned}$$
(3)

One of the most famous applications of this idea is the result of Jerrum and Sinclair [7] providing a probabilistic approximation of the permanent. The following result also relies on Theorem 2.16, and it describes the largest known class of degree sequences where the switch Markov chain is rapidly mixing (that is, the rate of convergence of the Markov chain is bounded by a polynomial of the length of the degree sequence).
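To make Eqs. (2) and (3) concrete, the sketch below (purely illustrative, on a hypothetical toy chain rather than \({\mathbb {G}}(\ell ,u)\)) routes each commodity \(\sigma (X)\sigma (Y)\) along one BFS path in the lazy random walk on a regular graph (which is symmetric with uniform \(\sigma \)) and evaluates \(\rho (f)\), \(\ell (f)\), and the resulting mixing-time bound, using the natural logarithm.

```python
import math
from collections import deque
from itertools import permutations

def bfs_path(adj, s, t):
    """One shortest path from s to t in the undirected graph `adj`."""
    prev = {s: None}
    queue = deque([s])
    while queue:
        x = queue.popleft()
        for y in adj[x]:
            if y not in prev:
                prev[y] = x
                queue.append(y)
    path = [t]
    while prev[path[-1]] is not None:
        path.append(prev[path[-1]])
    return path[::-1]

def sinclair_bound(adj, eps=0.01):
    """rho(f), ell(f) and the bound of Eq. (3) for the lazy random walk on a
    regular graph `adj`, with sigma(X)sigma(Y) routed along one BFS path."""
    V = list(adj)
    sigma = 1.0 / len(V)                     # uniform stationary distribution
    load, ell = {}, 1
    for X, Y in permutations(V, 2):          # every ordered pair of states
        gamma = bfs_path(adj, X, Y)
        ell = max(ell, len(gamma) - 1)
        for x, y in zip(gamma, gamma[1:]):
            load[(x, y)] = load.get((x, y), 0.0) + sigma * sigma
    # lazy walk: stay with prob 1/2, otherwise move to a uniform neighbour
    rho = max(f / (sigma * 0.5 / len(adj[x])) for (x, y), f in load.items())
    return rho, ell, rho * ell * (math.log(len(V)) - math.log(eps))

if __name__ == "__main__":
    cycle = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}   # toy 6-cycle
    print(sinclair_bound(cycle))
```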

Theorem 2.17

[4] The switch Markov chain is rapidly mixing on the realizations of any degree sequence in a set of P-stable degree sequences (the rate of convergence depends on the set).

There are several known P-stable regions; one of the earliest and best-known is the following.

Theorem 2.18

(Jerrum, McKay, and Sinclair [6]) Let \(\delta =\min (d)\) and \(\Delta =\max (d)\) be the minimum and maximum elements of d. The set of degree sequences d satisfying

$$\begin{aligned} {(\Delta -\delta +1)}^2\le 4\delta (n-\Delta -1),\qquad d\in {\mathbb {N}}^n \end{aligned}$$
(4)

for any n is P-stable (see Fig. 2).
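Condition (4) is a one-line check; the following sketch (our own helper, not from the paper) evaluates it for a given degree sequence.

```python
def jerrum_mckay_sinclair(d):
    """Condition (4) of Theorem 2.18:
    (Delta - delta + 1)^2 <= 4 * delta * (n - Delta - 1)."""
    n, delta, Delta = len(d), min(d), max(d)
    return (Delta - delta + 1) ** 2 <= 4 * delta * (n - Delta - 1)

if __name__ == "__main__":
    n = 100
    print(jerrum_mckay_sinclair([n // 4] * (n // 2) + [n // 2] * (n // 2)))  # True
    print(jerrum_mckay_sinclair([1] + [n - 2] * (n - 1)))                    # False
```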

Amanatidis and Kleer [1] recently published a surprising new type of result, a clever approximate uniform sampler (see, e.g., [7]) for \({\mathcal {G}}(\ell ,u)\) where the elements of \([\ell ,u]\) are near-regular. They achieve this using a composite Markov chain. They also provide the first step towards sampling \({\mathcal {G}}(\ell ,u)\) directly using the degree interval Markov chain.

Let us reiterate that Amanatidis and Kleer [1] apply the Markov chain suggested by Rechner, Strowick, and Müller-Hannemann [10], which is routinely used in practice.

Theorem 2.19

(Theorem 1.3 in Amanatidis and Kleer [1]) Let \(0<\alpha <\frac{1}{2}\) and \(0<\rho <1\) be fixed. Let \(r=r(n)\) with \(2\le r\le (1-\rho )n\). If \([\ell _i,u_i]\subseteq [r-r^\alpha ,r+r^\alpha ]\) and \(u_i-1\le \ell _i\le u_i\) for all \(i\in [n]\), then the degree interval Markov chain \({\mathbb {G}}(\ell ,u)\) is rapidly mixing.

Let \(w_m\) be the number of realizations in \({\mathcal {G}}(\ell ,u)\) with m edges. The conditions \(u_i-1\le \ell _i\le u_i\) for all \(i\in [n]\) are sufficient to prove that \(w_m\) is log-concave, i.e., \(w_{m-1}w_{m+1}\le w_m^2\), see Amanatidis and Kleer [1, Theorem 5.4]. The main idea for that proof is a symmetric difference decomposition, which we also characterize in our key decomposition lemma, Lemma 3.21.

Our contribution. The main objective of this paper is to prove the following theorem.

Theorem 2.20

Suppose \({\mathcal {I}}\) is a set of weakly P-stable and thin degree sequence intervals. Then, the degree interval Markov chain \({\mathbb {G}}(\ell ,u)\) is rapidly mixing for any \([\ell ,u]\in {\mathcal {I}}\).

It is not hard to see that Theorem 2.19 is a special case of Theorem 2.20: substituting \(\delta =r-r^\alpha \) and \(\Delta =r+r^\alpha \) into Eq. (4), we get

$$\begin{aligned} {(2r^\alpha +1)}^2 \le 4(r-r^\alpha )(n-r-r^\alpha -1), \end{aligned}$$

which holds for any r and \(\alpha \) satisfying the conditions of Theorem 2.19, provided n is large enough; see Fig. 2.
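This substitution can also be checked numerically; the sketch below (illustrative only) evaluates both sides of the displayed inequality for a few admissible choices of n, r, and \(\alpha \).

```python
def near_regular_check(n, r, alpha):
    """Both sides of (2 r^a + 1)^2 <= 4 (r - r^a)(n - r - r^a - 1), i.e.
    Eq. (4) evaluated at delta = r - r^alpha and Delta = r + r^alpha."""
    ra = r ** alpha
    lhs = (2 * ra + 1) ** 2
    rhs = 4 * (r - ra) * (n - r - ra - 1)
    return lhs <= rhs, lhs, rhs

if __name__ == "__main__":
    for n in (10 ** 3, 10 ** 6):
        r = int(0.9 * n)                  # rho = 0.1 in Theorem 2.19
        print(n, near_regular_check(n, r, alpha=0.49))
```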

The switch Markov chain can be embedded into the degree interval Markov chain (the transition probabilities differ by constant factors). In fact, we will use the proof of Theorem 2.17 as a plug-in in the proof of Theorem 2.20, so this paper does not provide a new proof for the switch Markov chain. We will not consider bipartite and directed degree sequences in this paper, but note that Theorem 2.17 applies to those as well. It is easy to check that the proof of Theorem 2.20 works verbatim for bipartite graphs, because the edge-toggles and hinge-flips are applied on vertices that are joined by paths of odd length (hence in different classes). In all likelihood, the proof of Theorem 2.20 can be extended to directed graphs, because directed graphs can be represented as bipartite graphs endowed with a forbidden 1-factor.

Fig. 2 Theorem 2.18 defines pairs of lower and upper bounds (\(\delta \) and \(\Delta \)) such that any degree sequence which obeys these bounds is P-stable; the area between these functions is filled with vertical lines. The pairs \((\delta ,\Delta )\) of most distant bounds allowed by Eq. (4) are given by the intersections with the vertical lines. For example, any degree sequence which is (element-wise) between \(\delta =\frac{1}{4} n\) and \(\Delta =\frac{3}{4} n\) is P-stable. In comparison, the solid gray region represents a \(\sqrt{r}\)-wide region around the regular degree sequences, which corresponds to the domain of Theorem 2.19

3 Constructing and Bounding the Multicommodity Flow

We will define a number of auxiliary structures. Via these structures, we will define a multicommodity flow on the degree interval Markov chain \({\mathbb {G}}(\ell ,u)\) and measure its load.

3.1 Constructing and Counting the Auxiliary Matrices

Kannan et al. [9] already introduced an auxiliary matrix to examine the load of a multicommodity flow. Our auxiliary matrices will be a little different. We start with some definitions, then prove two easy statements.

Definition 3.1

Let the adjacency matrix of a graph X on vertex set [n] be \(A_X\in {\{0,1\}}^{n\times n}\). Let \(A_{(vw)}\) be the adjacency matrix of the graph \(([n],\{vw\})\) with exactly one edge. Let us define

$$\begin{aligned} {\widehat{M}}(X,Y,Z)=A_X+A_Y-A_Z. \end{aligned}$$

Remark 3.2

If XYZ are graphs on [n], then \(\widehat{M}(X,Y,Z)\in {\{-1,0,1,2\}}^{n\times n}\).

Let us define the matrix switch operation. (In a previous paper [4], this operation was called a generalized switch.)

Definition 3.3

(Switch on a matrix) The switch operation on a matrix M on vertices (abcd) produces the matrix

$$\begin{aligned} M-A_{(ab)}-A_{(cd)}+A_{(ac)}+A_{(bd)}. \end{aligned}$$

Remark 3.4

A switch on a graph X corresponds to a switch on its adjacency matrix \(A_X\).
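For concreteness, here is a small numpy sketch (our own helper names) of Definitions 3.1 and 3.3; the example at the bottom confirms Remark 3.4 on a single switch.

```python
import numpy as np

def adjacency(n, edges):
    A = np.zeros((n, n), dtype=int)
    for u, v in edges:
        A[u, v] = A[v, u] = 1
    return A

def m_hat(A_X, A_Y, A_Z):
    """The auxiliary matrix of Definition 3.1: A_X + A_Y - A_Z."""
    return A_X + A_Y - A_Z

def matrix_switch(M, a, b, c, d):
    """The switch of Definition 3.3 on (a, b, c, d):
    M - A_(ab) - A_(cd) + A_(ac) + A_(bd)."""
    M = M.copy()
    for (u, v), delta in (((a, b), -1), ((c, d), -1), ((a, c), 1), ((b, d), 1)):
        M[u, v] += delta
        M[v, u] += delta
    return M

if __name__ == "__main__":
    # Remark 3.4: switching the graph {01, 23} on (0, 1, 2, 3) yields {02, 13},
    # and the matrix switch of its adjacency matrix gives the same result.
    A_X = adjacency(4, {(0, 1), (2, 3)})
    print(np.array_equal(matrix_switch(A_X, 0, 1, 2, 3),
                         adjacency(4, {(0, 2), (1, 3)})))      # True
```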

Definition 3.5

Let \(M\in {\{-1,0,1,2\}}^{n\times n}\) and let \(\deg _M\in {\mathbb {Z}}^n\) with \((\deg _M)_i=\sum _{j=1}^n M_{ij}\) be the sequence of its row-sums. We say that M is c-tight (for some \(c\in {\mathbb {N}}\)) if M is a symmetric matrix with zero diagonal and there exists a graph \(W\in {\mathcal {G}}(\deg _M,\deg _M+\mathbb {1}_{\{i,j\}})\) for some \(\{i,j\}\in \left( {\begin{array}{c}[n]\\ 2\end{array}}\right) \) such that \(\Vert M-A_W\Vert _1\le 2c\).
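A simple checker for Definition 3.5 is sketched below (our own, assuming the witness graph W is supplied rather than searched for): it verifies that M is symmetric with zero diagonal, that \(\deg _W\) exceeds \(\deg _M\) by one in at most two coordinates and agrees with it elsewhere, and that \(\Vert M-A_W\Vert _1\le 2c\).

```python
import numpy as np

def certifies_tightness(M, A_W, c):
    """Check that the simple graph with adjacency matrix A_W witnesses the
    c-tightness of M (Definition 3.5): M is symmetric with zero diagonal,
    deg_W agrees with deg_M except for an increase of one on at most one
    pair {i, j}, and ||M - A_W||_1 <= 2c."""
    if not (np.array_equal(M, M.T) and np.trace(np.abs(M)) == 0):
        return False
    if not (np.array_equal(A_W, A_W.T) and np.trace(A_W) == 0
            and set(np.unique(A_W)) <= {0, 1}):
        return False
    excess = A_W.sum(axis=1) - M.sum(axis=1)     # deg_W - deg_M
    if not (np.all((excess == 0) | (excess == 1)) and excess.sum() in (0, 2)):
        return False
    return bool(np.abs(M - A_W).sum() <= 2 * c)

if __name__ == "__main__":
    M = np.array([[0, 1], [1, 0]])
    print(certifies_tightness(M, M.copy(), c=0))   # W realizes deg_M itself
```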

Recall the definition of weak p-stability and Eq. (1). We will use the number of c-tight matrices to bound the number of auxiliary matrices.

Lemma 3.6

The number of matrices \(M\in {\{-1,0,1,2\}}^{n\times n}\) that are c-tight and \(\deg _M\in [\ell ,u]\) for a weakly p-stable \([\ell ,u]\) is at most

$$\begin{aligned} \left| \left\{ M\in {\{-1,0,1,2\}}^{n\times n}\ \Big |\ \deg _M\in [\ell ,u]\ \text {and}\ M\ \text {is}\ c\text {-tight}\right\} \right| \le n^{2c}\cdot p(n)\cdot |{\mathcal {G}}(\ell ,u)|. \end{aligned}$$

Proof

We can obtain any c-tight M we want to enumerate as follows. First, select an appropriate \(\{i,j\}\in \left( {\begin{array}{c}[n]\\ 2\end{array}}\right) \) and a realization \(W\in {\mathcal {G}}(\ell ,u+\mathbb {1}_{\{i,j\}})\): by weak p-stability, there are at most \(p(n)\cdot |{\mathcal {G}}(\ell ,u)|\) such choices. Then, select c symmetric pairs of positions where the adjacency matrix \(A_W\) is changed to \(-1\) or \(+2\) (while preserving symmetry). The latter selection can be made in at most \(\left( {\begin{array}{c}n\\ 2\end{array}}\right) ^c\cdot 2^c\le n^{2c}\) different ways. \(\square \)

The following lemma is crucial for proving the tightness of the auxiliary matrices arising in the multicommodity flow. Recall that a switch operation on a matrix takes a \(2\times 2\) submatrix and adds \(+1\) to two diagonally opposite entries and \(-1\) to the other two, so that the row- and column-sums are preserved.

Lemma 3.7

(Based on Lemma 7.2 of Erdős et al. [4]) Suppose \(M\in {\{-1,0,1,2\}}^{n\times n}\) is a symmetric matrix whose diagonal is zero. Suppose further, that

  (i) the number of \(+2\) entries of M is at most 4,

  (ii) the number of \(-1\) entries of M is at most 2,

  (iii) there exists \(V\subseteq [n]\) such that \(M|_{V\times V}\) contains every \(+2\) and \(-1\) entry of M,

  (iv) there exists \(v\in V\) such that the \(+2\) and \(-1\) entries of M are all located in the row and column corresponding to v,

  (v) the row-sum of v in \(M|_{V\times V}\) is minimal, and finally,

  (vi) every row- and column-sum in \(M|_{V\times V}\) is at least 1 and at most \(|V|-2\).

Then, M is 5-tight.

Proof

By Lemma 7.2 of Erdős et al. [4], there exist at most two matrix switches that turn M into a \(\{0,1\}\) matrix with the possible exception of a symmetric pair of \(-1\) entries. The \(-1\)’s remaining after the two matrix switches can be removed by adding \(+1\) to the pairs of negative entries. \(\square \)

3.2 The Alternating-Trail Decomposition

We consider the set \(\left( {\begin{array}{c}[n]\\ 2\end{array}}\right) \) in lexicographic order, which induces an order on the set of edges of any graph defined on [n].

Definition 3.8

Given a set of edges \(\nabla \subseteq \left( {\begin{array}{c}[n]\\ 2\end{array}}\right) \) on [n] as vertices, let \(\nabla _v=\{e\ |\ v\in e\in \nabla \}\). We call \(s:\{(v,e)\ |\ v\in e\in \nabla \}\rightarrow \nabla \) a pairing function on \(\nabla \) if \(s(v,\bullet ):\nabla _v\rightarrow \nabla _v\) defined as \(s(v,\bullet ):e\mapsto s(v,e)\) is an involution, i.e., \(s(v,\bullet )\) is its own inverse for any \(v\in [n]\). (The bullet \(\bullet \) is the placeholder for the variable e which is the second argument of s.) The set of all pairing functions on \(\nabla \) is denoted by \(\Pi (\nabla )\) (Fig. 3).

Fig. 3 The functions \(s(u,\bullet )\) and \(s(v,\bullet )\) pair the edges incident on u and v, respectively. The orange arcs join edges that are pairs in \(s(u,\bullet )\). The cyan arcs join edges that are pairs in \(s(v,\bullet )\). The cyan loop corresponds to the relation \(s(v,uv)=uv\)

Definition 3.9

Let \(L(\nabla ,s)\) be the following subgraph of the line graph of \(([n],\nabla )\): join \(e,f\in \nabla \) if and only if \(e\ne f\) and there exists a vertex \(v\in e\cap f\) such that \(s(v,e)=f\) (or equivalently, \(s(v,f)=e\)).

Lemma 3.10

Each connected component of \(L(\nabla ,s)\) is a path or a cycle.

Proof

Every edge \(e=ij\in \nabla \) has at most two neighbors in \(L(\nabla ,s)\), the edges s(ie) and s(je), thus the maximum degree in \(L(\nabla ,s)\) is 2. \(\square \)

Remark 3.11

A cycle in a line graph corresponds to a closed trail in the original graph. A path in a line graph corresponds to an open trail in the original graph (which may in theory start and end at the same vertex, but this will never be the case in our applications, see Lemma 3.21). Definition 3.9 generalizes a concept of Kannan et al. [9], where all of the components are cycles.

Definition 3.12

Suppose \(\nabla \subseteq \left( {\begin{array}{c}[n]\\ 2\end{array}}\right) \) and s is a pairing function on \(\nabla \). Denote by \(p_s\) the number of connected components of \(L(\nabla ,s)\), and let us define the unique partition

$$\begin{aligned} \nabla =W_1^s\uplus W_2^s\uplus \cdots \uplus W_{p_s}^s, \end{aligned}$$
(5)

where each \(W^s_k\) is the vertex set of a component of \(L(\nabla ,s)\), and the sets \({(W^s_k)}_{k=1}^{p_s}\) are listed in the order induced by their lexicographically first edges.
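The decomposition of Definition 3.12 is easy to compute. In the sketch below (our own representation: \(\nabla \) is a set of sorted vertex pairs and s is a dict mapping \((v,e)\mapsto s(v,e)\)), we build \(L(\nabla ,s)\) as in Definition 3.9 and return its components ordered by their lexicographically first edges.

```python
def trail_components(nabla, s):
    """The components of L(nabla, s) (Definition 3.9), listed in the order
    of Definition 3.12, i.e., by their lexicographically first edges."""
    neighbours = {e: set() for e in nabla}
    for e in nabla:
        for v in e:
            f = s[(v, e)]
            if f != e:                       # s(v, e) = f joins e and f
                neighbours[e].add(f)
                neighbours[f].add(e)
    seen, components = set(), []
    for e in sorted(nabla):                  # lexicographic order on edges
        if e in seen:
            continue
        stack, comp = [e], set()
        while stack:                         # depth-first search
            g = stack.pop()
            if g not in comp:
                comp.add(g)
                stack.extend(neighbours[g] - comp)
        seen |= comp
        components.append(comp)
    return components

if __name__ == "__main__":
    # The 4-cycle 0-1-2-3-0 traversed as one closed trail: one component.
    nabla = {(0, 1), (1, 2), (2, 3), (0, 3)}
    s = {(0, (0, 1)): (0, 3), (1, (0, 1)): (1, 2),
         (1, (1, 2)): (0, 1), (2, (1, 2)): (2, 3),
         (2, (2, 3)): (1, 2), (3, (2, 3)): (0, 3),
         (3, (0, 3)): (2, 3), (0, (0, 3)): (0, 1)}
    print(trail_components(nabla, s))
```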

Definition 3.13

For any set of edges W and \(s\in \Pi (\nabla )\), let

$$\begin{aligned} s|_{W}=\left\{ (v,e)\mapsto s(v,e)\ | v\in e\in W\, \,\text {and}\,\, s(v,e)\in W\right\} . \end{aligned}$$

Subsequently, we also define

$$\begin{aligned} s-e=s|_{\nabla \setminus \{ e\}}. \end{aligned}$$

Remark 3.14

If W is the vertex set of a component of \(L(\nabla ,s)\), then \(s|_{W}\in \Pi (W)\).

Definition 3.15

If \(\nabla =\{u_{i-1}u_{i}\ |\ i=1,\ldots ,r\}\) is a set of r distinct edges, let \(s=u_{0}u_{1}\ldots u_{r-1} u_r\in \Pi (\nabla )\) denote

$$\begin{aligned} s=\bigcup _{1\le i\le r-1}&\Big \{ (u_{i},u_{i-1}u_{i})\mapsto u_{i}u_{i+1},\ (u_{i},u_{i}u_{i+1})\mapsto u_{i-1}u_{i}\ \Big \}\cup \\ \cup&\left\{ \begin{array}{ll} \left\{ (u_0,u_0u_1)\mapsto u_0u_1,(u_r,u_{r-1}u_r)\mapsto u_{r-1}u_r\right\} &{}\text {if }u_0\ne u_r,\\ \emptyset &{}\text {if }u_0=u_r. \end{array}\right. \end{aligned}$$

Lemma 3.16

Let \(\nabla =\{u_{i-1}u_{i}\ |\ i=1,\ldots ,r\}\) and \(s=u_0u_1\ldots u_r\). Then,

  • the walk \(u_0u_1\ldots u_r\) is a closed trail if and only if \(L(\nabla ,s)\) is a cycle, and

  • the walk \(u_0u_1\ldots u_r\) is an open trail if and only if \(L(\nabla ,s)\) is a path.

In other words, the Eulerian trails on \(\nabla \) can be naturally identified with those pairing functions \(s\in \Pi (\nabla )\) for which \(L(\nabla ,s)\) is connected.

Proof

Trivial. \(\square \)

Figure 4 shows a closed trail defined by a pairing function.

Fig. 4 An example for \(\nabla =\nabla _{X,Y}\) and an (X,Y)-alternating \(s\in \Pi (\nabla )\) (see Definition 3.18). Red edges belong to X and blue edges belong to Y. There are \(2^4\) different \(\pi \in \Pi (\nabla )\) that are (X,Y)-alternating. There is one such \(\pi \) where \(L(\nabla ,\pi )\) has 3 components (the two \(C_6\)'s and a \(C_4\) in the middle), and there are 6 cases where \(L(\nabla ,\pi )\) has 2 components. The black arcs represent an s such that \(L(\nabla ,s)\) has exactly one component, or, in other words, s defines a closed Eulerian trail on \(\nabla \)

From now on, by slight abuse of notation, we will not distinguish between \(s=u_0u_1\ldots u_r\) as a pairing function and the trail it describes.

Definition 3.17

Let Z be an arbitrary graph on n-vertices and let \(\nabla \subseteq \left( {\begin{array}{c}[n]\\ 2\end{array}}\right) \) be an arbitrary subset of pairs of vertices. A pairing function \(s\in \Pi (\nabla )\) is said to be Z-alternating or alternating in Z if for every \(v\in e\in \nabla \) either

  • \(s(v,e)=e\) and e is the unique fixpoint of \(s(v,\bullet )\) (the function \(s(v,\bullet )\) has at most one fixpoint), or

  • \(e\in \nabla \cap E(Z)\) and \(s(v,e)\in \nabla {\setminus } E(Z)\), or

  • \(e\in \nabla {\setminus } E(Z)\) and \(s(v,e)\in \nabla \cap E(Z)\).

In other words, the trail \(s|_{W^s_k}\) traverses edges of Z and \({\overline{Z}}\) alternately for any \(k=1,\ldots ,p_s\); furthermore, at any vertex \(v\in [n]\), there is at most one trail \(s|_{W^s_k}\) which starts or ends at v. For example, if \(ij\notin E(Z)\) and \(s(i,ij)=ij=s(j,ij)\), then the trail \(s|_{\{ij\}}\) consists of one non-edge of Z.

Furthermore, we say that \(s\in \Pi (\nabla )\) is Z-alternating with at most c exceptions if

$$\begin{aligned} \left| \left\{ \{e,s(v,e)\}\ \Big |\ v\in e\in \nabla ,\ s(v,e)\ne e,\ \text {and }\Big (\{e,s(v,e)\}\subseteq \nabla \cap E(Z)\ \text {or}\ \{e,s(v,e)\}\subseteq \nabla \setminus E(Z)\Big )\right\} \right| \le c \end{aligned}$$
(6)

and \(s(v,\bullet )\) has at most one fixpoint for every \(v\in [n]\). We say that v is a site of non-alternation of s in Z if \(\{e,s(v,e)\}\) is a set of size 2 which is a subset of either \(\nabla \cap E(Z)\) or \(\nabla \setminus E(Z)\).
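The exception count of Eq. (6) can be computed directly; the sketch below (our own helper, with Z given as a set of edges in the same representation as \(\nabla \)) returns the number of non-alternating pairs together with a flag for the fixpoint condition, so s is Z-alternating with at most c exceptions exactly when the flag is true and the count is at most c.

```python
def alternation_exceptions(s, Z):
    """Count the non-alternating pairs of Eq. (6) for a pairing function s
    (a dict (v, e) -> s(v, e)) and check the fixpoint condition of
    Definition 3.17.  Returns (exceptions, fixpoints_ok)."""
    bad_pairs, fixpoints = set(), {}
    for (v, e), f in s.items():
        if f == e:
            fixpoints[v] = fixpoints.get(v, 0) + 1
        elif (e in Z) == (f in Z):           # both inside Z or both outside Z
            bad_pairs.add(frozenset((e, f)))
    return len(bad_pairs), all(k <= 1 for k in fixpoints.values())

if __name__ == "__main__":
    # A single non-edge of Z that is a fixpoint at both of its endpoints, as
    # in the example after Definition 3.17: no exceptions, condition holds.
    e = (0, 1)
    print(alternation_exceptions({(0, e): e, (1, e): e}, Z=set()))   # (0, True)
```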

Definition 3.18

Let \(X,Y\in {\mathcal {G}}(\ell ,u)\), where \([\ell ,u]\) is a thin degree sequence interval. Denote \(\nabla _{X,Y}=E(X)\triangle E(Y)\). An \(s\in \Pi (\nabla _{X,Y})\) which is both X-alternating and Y-alternating is called (X,Y)-alternating.

Lemma 3.19

Any pairing function \(s\in \Pi (\nabla _{X,Y})\) is X-alternating if and only if s is Y-alternating.

Proof

Trivial, since \(\nabla _{X,Y}\setminus E(X)=E(Y)\setminus E(X)\) and \(\nabla _{X,Y}\cap E(X)=E(X)\setminus E(Y)\). \(\square \)

Definition 3.20

Given a degree sequence interval \([\ell ,u]\), for any \(X,Y\in {\mathcal {G}}(\ell ,u)\), define

$$\begin{aligned} S_{X,Y}=\Big \{ s\in \Pi (\nabla _{X,Y})\ \big |\ s \ \text {is}\, (X,Y)-\text {alternating}\Big \}. \end{aligned}$$

Recall Definition 3.13. The following key decomposition lemma (KD-lemma) will be referred to repeatedly in this paper.

Lemma 3.21

(Key decomposition lemma) Let \([\ell ,u]\) be a thin degree sequence interval, and let \(X,Y\in {\mathcal {G}}(\ell ,u)\) and \(s\in S_{X,Y}\). Then, \(s|_{W^s_k}\) is (X,Y)-alternating and describes an Eulerian trail on \(W^s_k\) for any \(1\le k\le p_s\). If \(s|_{W^s_k}\) describes an open trail, then its end-vertices are (by definition) distinct, and they are disjoint from the end-vertices of any other open trail \(s|_{W^s_j}\) (\(j\ne k\)).

Proof

We have \(|\deg _X(v)-\deg _Y(v)|\le 1\) for any \(v\in [n]\). Thus, the involution \(s(v,\bullet )\) pairs the X-edges of \(\nabla _{X,Y}\) incident to v with the Y-edges of \(\nabla _{X,Y}\) incident to v, with the exception of the at most one fixpoint of \(s(v,\bullet )\). The closed trails must have even length, because \(s(v,\bullet )\) pairs X-edges to Y-edges at any v.

Clearly, if an open trail \(s|_{W^s_i}\) both starts and ends at v, then \(s(v,\bullet )\) has at least two fixpoints, which is a contradiction. Similarly, we have a contradiction if more than one trail terminates at some vertex v. Lastly, if \(s|_{W^s_k}\) is an open trail, then the degree \(\deg _{W^s_k}(v)\) is even, except if v is one of the two end-vertices of \(s|_{W^s_k}\), in which case \(\deg _{W^s_k}(v)\) is odd. \(\square \)

Lemma 3.22

For any thin degree sequence interval \([\ell ,u]\) on n vertices and any two graphs \(X,Y\in {\mathcal {G}}(\ell ,u)\)

$$\begin{aligned} |S_{X,Y}|=\prod _{v\in [n]}\big \lceil \frac{\deg _{\nabla _{X,Y}}(v)}{2}\big \rceil {!}, \end{aligned}$$
(7)

where the right-hand side is the product of the factorials \(\big \lceil \deg _{\nabla _{X,Y}}(v)/2\big \rceil !\).

Proof

We have

$$\begin{aligned} |\deg _{E(X)\setminus E(Y)}(v)-\deg _{E(Y)\setminus E(X)}(v)|=|\deg _X(v)-\deg _Y(v)|\le 1. \end{aligned}$$

If \(\deg _X(v)=\deg _Y(v)\), then we have \((\deg _X(v)-\deg _{X\cap Y}(v))!=(\frac{1}{2}\deg _{\nabla _{X,Y}}(v))!\) ways to choose \(s(v,\bullet )\) such that it is an involution which maps edges of X to edges of Y: if \(s(v,\bullet )\) had a fixpoint, then by parity it must have had another, too, which contradicts Definition 3.17.

If \(\deg _X(v)=\deg _Y(v)+1\) and \(s(v,e)=e\), then \(e\in E(X){\setminus } E(Y)\) and e is the only fixpoint of \(s(v,\bullet )\). Therefore, there are \(\deg _X(v)-\deg _{X\cap Y}(v)=\frac{1}{2}(\deg _{\nabla _{X,Y}}(v)+1)\) ways to choose the fixpoint, and \((\deg _X(v)-\deg _{X\cap Y}(v)-1)!\) ways to choose the rest of the map \(s(v,\bullet )\). \(\square \)
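Equation (7) is straightforward to evaluate; the short sketch below (our own helper) computes the product of factorials from the degrees of the vertices in \(\nabla _{X,Y}\).

```python
from math import ceil, factorial, prod

def count_alternating_pairings(nabla_degrees):
    """Right-hand side of Eq. (7): |S_{X,Y}| from the degrees of the
    vertices in nabla_{X,Y}."""
    return prod(factorial(ceil(d / 2)) for d in nabla_degrees)

if __name__ == "__main__":
    # One closed trail through a single vertex of degree 4 (an "eight"):
    # 2! at that vertex and 1! elsewhere, so exactly 2 alternating pairings.
    print(count_alternating_pairings([4, 2, 2, 2, 2, 2, 2]))   # 2
```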

Lemma 3.23

For any graph \(Z\in {\mathcal {G}}(\ell ,u)\) for a thin degree sequence interval \([\ell ,u]\) and any \(\nabla \subseteq \left( {\begin{array}{c}[n]\\ 2\end{array}}\right) \), we have

$$\begin{aligned} \left| \Big \{ s\in \Pi (\nabla )\ \big |\ s\ \text {is}\ Z\text {-alternating with at most}\ c\ \text {exceptions}\Big \}\right| \le n^{3c}\cdot \prod _{v\in [n]}\Big \lceil \frac{\deg _{\nabla }(v)}{2}\Big \rceil {!} \end{aligned}$$
(8)

Proof

There are at most \(n^{3c}\) different choices for the set on the left-hand side of Eq. (6), since each of the at most c pairs \(\{e,s(v,e)\}\) is determined by at most three vertices. If we fix the non-alternating pairs, then the number of remaining choices for \(s(v,\bullet )\) is still upper bounded by \(\lceil \frac{1}{2}\deg _{\nabla }(v)\rceil {!}\), thus Eq. (8) holds.

\(\square \)

3.3 The Precursor

So far, every proof of rapid mixing for the switch Markov chain which is based on Sinclair's method contains at its core a counting lemma (Greenhill [5]). The purpose of the counting lemma is to enumerate the possible auxiliary structures and parameter sets from which the source and sink of any commodity passing through a realization Z can be recovered. The difficult technical parts of the proofs are concerned with the maintenance and upkeep associated with these structures. To our surprise, for thin degree sequence intervals, by slightly tweaking these structures, the arising technicalities can almost entirely be reduced to Erdős et al. [4], and this paper takes a major shortcut by reusing those parts. The relatively long, but mostly elementary Definition 3.25 specifies the properties that we expect from the auxiliary structures and parameter sets borrowed from Erdős et al. [4]. In Sect. 4, we will use this framework to recombine the borrowed parts into a proof for thin degree sequence intervals.

The decomposition in Definition 3.12 is formally very similar to the decomposition in Erdős et al. [4, Section 4.1]. Whenever the degree sequences of X and Y are identical (\(\nabla =\nabla _{X,Y}\) and \(s\in S_{X,Y}\)), the two decompositions are actually identical. In any other case, for every two unit differences between the degree sequences of X and Y we will utilize a hinge-flip or an edge-toggle in the multicommodity flow between X and Y.

Let us now turn to defining the framework for the reduction to Erdős et al. [4]. We need the following structure and in particular the matrix M to be able to find an appropriate reduction which is compatible with the processes of Erdős et al. [4].

Definition 3.24

Let \(M\in {\{0,1,2\}}^{n\times n}\) be a symmetric matrix with zero diagonal. For technical purposes, let us define the following set of triples:

$$\begin{aligned} {\mathfrak {D}}_M =\left\{ (X,Y,s)\ \Bigg |\ \ {\left\{ \begin{array}{ll} X,Y\text { graphs on }[n],\ s\in S_{X,Y},\\ \left\{ vw\ |\ v\ne w,\ M_{vw}=2\right\} \subseteq E(X)\cap E(Y),\\ \left\{ vw\ |\ v\ne w,\ M_{vw}=0\right\} \subseteq E({{\overline{X}}})\cap E({{\overline{Y}}}). \end{array}\right. } \right\} . \end{aligned}$$
(9)

The next definition collects a number of properties (of the multicommodity flow and the auxiliary structures designed for the switch Markov chain) that we want to preserve from Erdős et al. [4].

Definition 3.25

We call the ordered triple \((\varUpsilon ,B,\pi )\) a precursor with parameter \(c\in {\mathbb {N}}\), if the following properties hold. The objects \(\varUpsilon _M\), \(B_M\), and \(\pi _M\) are functions for any symmetric matrix \(M\in {\{0,1,2\}}^{n\times n}\) with zero diagonal, where \(n\in {\mathbb {N}}\). We require that the domain of \(\varUpsilon _M\) satisfies

$$\begin{aligned} {{\,\textrm{dom}\,}}(\varUpsilon _{M})\subseteq {\mathfrak {D}}_M, \end{aligned}$$
(10)

Furthermore, for any \((X,Y,s)\in {{\,\textrm{dom}\,}}(\varUpsilon _M)\), let us define two degree sequences:

$$\begin{aligned} \ell _{X,Y}&={\Big (\min \{\deg _X(i),\deg _Y(i)\}\Big )}_{i=1}^n,\\ u_{X,Y}&={\Big (\max \{\deg _X(i),\deg _Y(i)\}\Big )}_{i=1}^n. \end{aligned}$$

We require that \(\varUpsilon _{M}(X,Y,s)\) is a sequence of graphs that forms a path connecting X and Y in the Markov graph \({\mathbb {G}}(\ell _{X,Y},u_{X,Y})\). We require that \(\pi _M\) and \(B_M\) are defined on

$$\begin{aligned} {{\,\textrm{dom}\,}}(\pi _M)={{\,\textrm{dom}\,}}(B_M)=\{ (X,Y,s,Z)\ |\ Z\in \varUpsilon _M(X,Y,s),\ (X,Y,s)\in {{\,\textrm{dom}\,}}(\varUpsilon _M)\}. \end{aligned}$$

Moreover,

  (a) The length of \(\varUpsilon _M(X,Y,s)\) is at most \(c\cdot |\nabla _{X,Y}|\).

  (b) We have \(|(E(X)\triangle E(Z))\setminus \nabla _{X,Y}|\le c\) for any \(Z\in \varUpsilon _M(X,Y,s)\).

  (c) The matrix \(M-A_Z\) is c-tight for any \(Z\in \varUpsilon _{M}(X,Y,s)\).

  (d) The pairing function \(\pi _M(X,Y,s,Z)\) is a member of \(\Pi (\nabla _{X,Y})\) and it is alternating in Z with at most c exceptions.

  (e) \(\pi _M(X,Y,s,X)=\pi _M(X,Y,s,Y)=s\).

  (f) If \(L(\nabla _{X,Y},s)\) is connected, then \(L(\nabla _{X,Y},\pi _M(X,Y,s,Z))\) is also connected.

  (g) The cardinality of

    $$\begin{aligned} {\mathfrak {B}}_n=\Big \{B_M(X,Y,s,Z)\ \mid \ Z\in \varUpsilon _M(X,Y,s),\ M\ \text {arbitrary},\ (X,Y,s)\in {{\,\textrm{dom}\,}}(\varUpsilon _M),\ |V(X)|=n\Big \} \end{aligned}$$

    is at most a constant times \(n^c\), i.e., \(|{\mathfrak {B}}_n|={\mathcal {O}}(n^c)\).

  (h) The function

    $$\begin{aligned} \varPsi =\left\{ (Z,\nabla _{X,Y},\pi _M(X,Y,s,Z),B_M(X,Y,s,Z)) \mapsto (X,Y,s)\ \big |\ M\ \text {is arbitrary},\ Z\in \varUpsilon _M(X,Y,s) \right\} \end{aligned}$$

    is well defined, i.e., two different images in the co-domain are not assigned to the same element from the domain of \(\varPsi \).

\(\square \)

Typically, the value of \(B_M(X,Y,s,Z)\) will be a long tuple (an ordered set of parameters). The exact value of c is not important here; the requirements only impose a lower bound on its value. However, it is important to note that c is a constant, independent even of the number of vertices n. Note also that in applications of Definition 3.25, the matrix M will not be completely arbitrary.

Definition 3.26

A subset \({\mathfrak {P}}\) is a precursor domain if it is a set of triples (XYs) such that X and Y have the same vertex set [n] for some \(n\in {\mathbb {N}}\) (where n may vary) and \(s\in S_{X,Y}\). We say that a precursor \((\varUpsilon ,B,\pi )\) is defined on a precursor domain \({\mathfrak {P}}\) if and only if for any \(n\in {\mathbb {N}}\) and symmetric matrix \(M\in {\{0,1,2\}}^{n\times n}\) with zero diagonal, we have

$$\begin{aligned} {{\,\textrm{dom}\,}}(\varUpsilon _{M})\supseteq {\mathfrak {P}}\cap {\mathfrak {D}}_M. \end{aligned}$$

Let us define two precursor domains:

$$\begin{aligned} {\mathfrak {C}}_\textrm{thin}&=\Big \{ (X,Y,s)\ \Big |\ s\in S_{X,Y},\ L(\nabla _{X,Y},s) \ \text {is connected, and}\nonumber \\&\quad \Vert \deg _X-\deg _Y\Vert _\infty \le 1\Big \}, \end{aligned}$$
(11)
$$\begin{aligned} {\mathfrak {R}}_\textrm{thin}&=\Big \{ (X,Y,s)\ \Big |\ s\in S_{X,Y} \ \text {and} \ \Vert \deg _X-\deg _Y\Vert _\infty \le 1\Big \}. \end{aligned}$$
(12)

The set \({\mathfrak {C}}_\textrm{thin}\) describes the identifiers of the small parts from which the whole multicommodity flow will be built. In contrast, the multicommodity flow was built in Erdős et al. [4] for each triple in \({\mathfrak {R}}_\textrm{thin}\) directly.

Lemma 3.27

If there exists a precursor with parameter c which is defined on \({\mathfrak {C}}_\textrm{thin}\), then there exists a precursor on \({\mathfrak {R}}_\textrm{thin}\) with parameter 3c.

Proof

We will show that the precursor can be extended so that it is also defined on \({\mathfrak {R}}_\textrm{thin}\) without violating Definition 3.25. For any \((X,Y,s)\in {\mathfrak {R}}_\textrm{thin}\cap {\mathfrak {D}}_M\), we construct a path in the Markov graph of \({\mathbb {G}}(\ell _{X,Y},u_{X,Y})\), where \([\ell _{X,Y},u_{X,Y}]\) is the smallest degree sequence interval that contains both \(\deg _X\) and \(\deg _Y\). Since \(\Vert \deg _X-\deg _Y\Vert _\infty \le 1\), we have \((u_{X,Y})_i-(\ell _{X,Y})_i\le 1\) for every \(i\in [n]\), so \([\ell _{X,Y},u_{X,Y}]\) is thin. According to Definition 3.12 and the KD-lemma (Lemma 3.21), any \(s\in S_{X,Y}\) partitions \(\nabla _{X,Y}=E(X)\triangle E(Y)\) into edge sets of (X,Y)-alternating trails; let that decomposition be

$$\begin{aligned} \nabla _{X,Y}=W_1^s\uplus W_2^s\uplus \cdots \uplus W_{p_s}^s. \end{aligned}$$

Let

$$\begin{aligned} G^{X,Y}_k=X\triangle \bigcup _{i=1}^{k}W_i^s, \end{aligned}$$

so that \(G^{X,Y}_0=X\) and \(G^{X,Y}_{p_s}=Y\). By definition, \(s|_{W^s_k}\) is connected, so \((G^{X,Y}_{k-1},G^{X,Y}_{k},s|_{W^s_k})\in {\mathfrak {C}}_\textrm{thin}\) for \(k=1,\ldots ,p_s\). Let us confirm that \(G^{X,Y}_k\in {\mathcal {G}}(\ell _{X,Y},u_{X,Y})\). If \(s|_{W^s_k}\) is a closed trail, then the degree sequences of \(G^{X,Y}_k\) and \(G^{X,Y}_{k-1}\) are identical. If \(s|_{W^s_k}\) is an open trail whose end-vertices are v and w, then the degree sequences of \(G^{X,Y}_{k}\) and \(G^{X,Y}_{k-1}\) differ by 1 precisely on v and w; since these end-vertices are distinct from the end-vertices of any other open trail \(s|_{W^s_j}\), such a change of the degree of v and w does not occur for any other k. Thus, the degrees satisfy

$$\begin{aligned} \deg _{G^{X,Y}_i}(v)\in \{\deg _X(v),\deg _Y(v)\}\ \text { for any }i=1,\ldots ,p_s\ \text {and any}\ v\in [n], \end{aligned}$$
(13)

and so \(\deg _{G^{X,Y}_i}\in [\ell _{X,Y},u_{X,Y}]\).

It is easy to see that \((G^{X,Y}_{k-1},G^{X,Y}_{k},s|_{W^s_k})\in {\mathfrak {D}}_M\). If \(e\in E(X)\cap E(Y)\), then \(e\notin W^s_i\) for any i. Similarly, if \(e\in E({\overline{X}})\cap E({\overline{Y}})\), then \(e\notin W^s_i\) for any i.

We may now define \(\varUpsilon _M\) on \({\mathfrak {R}}_\textrm{thin}\) recursively: concatenate the sequences \(\varUpsilon _M(G^{X,Y}_{k-1},G^{X,Y}_k,s|_{W_k^s})\) in increasing order of k to obtain

$$\begin{aligned} \varUpsilon _M(X,Y,s)={\left( \varUpsilon _M\left( G^{X,Y}_{k-1},G^{X,Y}_k,s|_{W_k^s}\right) \right) }_{k=1}^{p_s}, \end{aligned}$$
(14)

where the concatenation keeps only one copy of the coinciding last and first elements of consecutive sequences. For \(Z\in \varUpsilon _M\left( G^{X,Y}_{k-1},G^{X,Y}_k,s|_{W^s_k}\right) \) (take the maximal k for which the relation holds), let

$$\begin{aligned} \pi _M(X,Y,s,Z)&=\left( \bigcup _{i=1}^{k-1}s|_{W^s_i}\right) \cup \pi _M\left( G^{X,Y}_{k-1},G^{X,Y}_k,s|_{W^s_k},Z\right) \cup \left( \bigcup _{i=k+1}^{p_s} s|_{W^s_i}\right) \end{aligned}$$
(15)
$$\begin{aligned} B_M(X,Y,s,Z)&=\left( k-1,B_M\left( G^{X,Y}_{k-1},G^{X,Y}_k,s|_{W^s_k},Z\right) \right) \end{aligned}$$
(16)

We claim that the extended functions provide a precursor on \({\mathfrak {R}}_\textrm{thin}\). Let us check the non-trivial properties of Definition 3.25. Suppose that \(Z\in \varUpsilon _M\left( G^{X,Y}_{k-1},G^{X,Y}_k,s|_{W^s_k}\right) \). Then, \(\deg _Z\in [\ell _{X,Y},u_{X,Y}]\), since \(\deg _{G^{X,Y}_{k-1}},\deg _{G^{X,Y}_{k}}\in [\ell _{X,Y},u_{X,Y}]\).

Checking Definition 3.25(b). By Definition 3.25(b), \(\left| \left( E\left( G^{X,Y}_{k-1}\right) \triangle E(Z)\right) \setminus W^s_k\right| \le c\), and

$$\begin{aligned} (E(X)\triangle E(Z))\setminus \nabla _{X,Y}=\left( E\left( G^{X,Y}_{k-1}\right) \triangle E(Z)\triangle \bigcup _{i=1}^{k-1} W^s_i\right) \setminus \nabla _{X,Y} \subseteq \left( E\left( G^{X,Y}_{k-1}\right) \triangle E(Z)\right) \setminus W^s_k, \end{aligned}$$
(17)

therefore, the LHS has cardinality at most c as well.

Checking Definition 3.25(c). The precursor property holds for \({\mathfrak {C}}_\textrm{thin}\) and \(\left( G^{X,Y}_{k-1},G^{X,Y}_k,s|_{W^s_k}\right) \in {\mathfrak {C}}_\textrm{thin}\cap {\mathfrak {D}}_M\), therefore \(M-A_Z\) is c-tight.

Checking Definition 3.25(d). By Eq. (17), \(\pi _M(X,Y,s,Z)\) alternates in Z with at most 2c extra exceptions on top of the c non-alternations of \(\pi _M\left( G^{X,Y}_{k-1},G^{X,Y}_k,s|_{W^s_k},Z\right) \).

Checking Definition 3.25(g). The cardinality of the range of \(B_M(X,Y,s,Z)\) grows by a factor of at most \(n^2\), due to the one extra integer \(k-1\).

Checking Definition 3.25(h). The last missing piece to proving that the extended functions are a precursor on \({\mathfrak {R}}_\textrm{thin}\) is showing that \(\varPsi \) is well defined on the larger domain. By Definition 3.25(f), the connected components of \(L(\nabla _{X,Y},\pi _M(X,Y,s,Z))\) determine \(W^s_i\) and \(s|_{W^s_i}\) for any \(i\ne k\), and also \(\pi _M(G^{X,Y}_{k-1},G^{X,Y}_k,s|_{W^s_k},Z)\) and \(W^s_k\).

The number \(k-1\) is recorded in \(B_M(X,Y,s,Z)\) (see Eq. (16)), which is an argument of \(\varPsi \). Knowing k, we can select \(\pi _M(G^{X,Y}_{k-1},G^{X,Y}_k,s|_{W^s_k},Z)\) from the components of \(L(\nabla _{X,Y},\pi _M(X,Y,s,Z))\), see Eq. (15). Because the original functions provide a precursor on \({\mathfrak {C}}_\textrm{thin}\), the original \(\varPsi \) function is well defined, so the value

$$\begin{aligned} \varPsi \Bigg (Z,W^s_k,\pi _M\left( G^{X,Y}_{k-1},G^{X,Y}_k,s|_{W^s_k},Z\right) , B_M\left( G^{X,Y}_{k-1},G^{X,Y}_k,s|_{W^s_k},Z\right) \Bigg ) = \left( G^{X,Y}_{k-1},G^{X,Y}_k,s|_{W^s_k}\right) \end{aligned}$$

is determined. Since

$$\begin{aligned} E(X)=E\left( G^{X,Y}_{k-1}\right) \triangle \bigcup _{i=1}^{k-1}W^s_i,\qquad E(Y)=E(X)\triangle \nabla _{X,Y},\qquad s=\bigcup _{i=1}^{p_s} s|_{W^s_i}, \end{aligned}$$

we have shown that \(\varPsi \) is well defined even on the extended domain. \(\square \)

In the proof of Lemma 3.27, we extensively used the fact that the degree sequence intervals \([\ell _{X,Y},u_{X,Y}]\) are thin.

Theorem 3.28

Let \({\mathcal {I}}\) be a set of weakly P-stable thin degree sequence intervals. If there exists a precursor on \({\mathfrak {R}}_\textrm{thin}\) with parameter c, then the degree interval Markov chain \({\mathbb {G}}(\ell ,u)\) is rapidly mixing for any \([\ell ,u]\in {\mathcal {I}}\).

Proof

This proof is not new and is fairly straightforward, but it is presented for the sake of completeness. The core of this approach already appeared in the paper of Kannan et al. [9]. We essentially repeat the skeleton of the proof of Erdős et al. [4] using the definition of the precursor, which hides the majority of the technical difficulties. We will take \(M=A_X+A_Y\), but we want to be explicit about the dependence on X and Y even when M appears as an index, so let \(X+Y\) denote the matrix \(A_X+A_Y\) in this proof.

Let \([\ell ,u]\in {\mathcal {I}}\), where \(\ell \) and u are degree sequences on [n]. Let us define the multicommodity flow f on the Markov graph of \({\mathbb {G}}(\ell ,u)\): for every \(X,Y\in {\mathcal {G}}(\ell ,u)\) and \(s\in S_{X,Y}\), send \({\sigma (X)\sigma (Y)}/{|S_{X,Y}|}\) amount of flow on \(\varUpsilon _{X+Y}(X,Y,s)\). The total flow in f from X to Y sums to \(\sigma (X)\sigma (Y)\).

Let us recall Eq. (3):

$$\begin{aligned} \tau _{{\mathbb {G}}(\ell ,u)}(\varepsilon )\le \rho (f)\,\ell (f) \left( \log |{\mathcal {G}}(\ell ,u)|-\log \varepsilon \right) \le \rho (f)\, \ell (f) \left( \left( {\begin{array}{c}n\\ 2\end{array}}\right) -\log \varepsilon \right) . \end{aligned}$$
(18)

By Definition 3.25(a), \(\ell (f)\le c\cdot \left( {\begin{array}{c}n\\ 2\end{array}}\right) \). It only remains to show that \(\rho (f)\) is polynomial in n. Continuing Eq. (2) with the substitution \({\mathbb {G}}={\mathbb {G}}(\ell ,u)\):

$$\begin{aligned} \rho (f)=&\max _{ZW\in E({\mathbb {G}})}\frac{1}{\sigma (Z)\Pr _{\mathbb {G}}(Z\rightarrow W)} \sum _{\begin{array}{c} X,Y\in {\mathcal {G}}(\ell ,u),\ s\in S_{X,Y}\\ ZW\in E(\varUpsilon _{X+Y}(X,Y,s)) \end{array}}f(\varUpsilon _{X+Y}(X,Y,s))\nonumber \\ \rho (f)\le&\max _{ZW\in E({\mathbb {G}})}\frac{1}{\sigma (Z)\cdot 6/n^4} \sum _{\begin{array}{c} X,Y\in {\mathcal {G}}(\ell ,u),\ s\in S_{X,Y}\\ ZW\in E(\varUpsilon _{X+Y}(X,Y,s)) \end{array}}\frac{\sigma (X)\sigma (Y)}{|S_{X,Y}|}\nonumber \\ \rho (f)\le&\frac{n^4}{6|{\mathcal {G}}(\ell ,u)|}\cdot \max _{ZW\in E({\mathbb {G}})} \sum _{\begin{array}{c} X,Y\in {\mathcal {G}}(\ell ,u),\ s\in S_{X,Y}\\ ZW\in E(\varUpsilon _{X+Y}(X,Y,s)) \end{array}}\frac{1}{|S_{X,Y}|}\nonumber \\ \rho (f)\le&\frac{n^4}{6|{\mathcal {G}}(\ell ,u)|}\cdot \max _{Z\in V({\mathbb {G}})} \sum _{\begin{array}{c} X,Y\in {\mathcal {G}}(\ell ,u),\ s\in S_{X,Y}\\ Z\in \varUpsilon _{X+Y}(X,Y,s) \end{array}}\frac{1}{|S_{X,Y}|} \end{aligned}$$
(19)

According to Definition 3.25((h)), given Z, \(\nabla _{X,Y}\), \(\pi _{X+Y}(X,Y,s,Z)\), and \(B_{X+Y}(X,Y,s,Z)\), the function \(\varPsi \) determines (XYs). Therefore, the relation \(Z\in \varUpsilon _{X+Y}(X,Y,s)\) is equivalent to saying that there exists a triple \((\nabla ,\pi ,B)\) such that \((Z,\nabla ,\pi ,B)\in \varPsi ^{-1}(X,Y,s)\):

$$\begin{aligned} \rho (f)\le \frac{n^4}{6|{\mathcal {G}}(\ell ,u)|}\cdot \max _{Z\in V({\mathbb {G}})}\sum _{\begin{array}{c} (Z,\nabla ,\pi ,B)\in \varPsi ^{-1}(X,Y,s) \end{array}}\frac{1}{|S_{X,Y}|} \end{aligned}$$
(20)

Next, we use Lemma 3.22, which shows that \(|S_{X,Y}|\) is determined by \(\nabla _{X,Y}\), and its value does not depend directly on X or Y:

$$\begin{aligned} \rho (f)\le \frac{n^4}{6|{\mathcal {G}}(\ell ,u)|}\cdot \max _{Z\in V({\mathbb {G}})}\sum _{\begin{array}{c} (Z,\nabla ,\pi ,B)\in \varPsi ^{-1}(X,Y,s) \end{array}}\quad \prod _{v\in [n]}{\left( \big \lceil \frac{\deg _{\nabla }(v)}{2}\big \rceil \mathrm{!}\right) }^{-1} \end{aligned}$$
(21)

Given Z, the matrix \({\widehat{M}}(X,Y,Z)\) (Definition 3.1) determines \(\nabla _{X,Y}=E(X)\triangle E(Y)\): the edges that belong to \(\nabla _{X,Y}\) are precisely those where the sum of the adjacency matrices \(A_X+A_Y={\widehat{M}}(X,Y,Z)+A_Z\) takes the value 1. Furthermore, by a property of the precursor, for any \(Z\in \varUpsilon _{X+Y}(X,Y,s)\), we have \(\deg _Z\in [\ell _{X,Y},u_{X,Y}]\), therefore,

$$\begin{aligned} \deg _{{\widehat{M}}(X,Y,Z)}=\deg _{A_X+A_Y-A_Z}=\deg _X+\deg _Y-\deg _Z\in [\ell _{X,Y},u_{X,Y}]\subseteq [\ell ,u]. \end{aligned}$$

Now using that \({\widehat{M}}(X,Y,Z)\) is c-tight, it follows from Lemmas 3.6 and 3.23 that

$$\begin{aligned} \rho (f)\le \frac{n^4}{6|{\mathcal {G}}(\ell ,u)|}\cdot \max _{Z\in V({\mathbb {G}})}\sum _{B\in {\mathfrak {B}}_n} n^{5c}\cdot p(n)\cdot |{\mathcal {G}}(\ell ,u)|\le \frac{1}{6}n^{5c+4}\cdot p(n)\cdot |{\mathfrak {B}}_n|, \end{aligned}$$
(22)

where the right-hand side is dominated by a polynomial of n (according to Definition 3.25(g)). In conclusion, the mixing time in Eq. (18) is polynomial in n.

\(\square \)

To prove Theorem 2.20, it only remains to construct a precursor on \({\mathfrak {C}}_\textrm{thin}\). The next section proceeds with the construction in two separate stages.

4 Constructing the Precursor

We will construct a precursor on \({\mathfrak {C}}_\textrm{thin}\) for any weakly P-stable set of thin degree sequence intervals \({\mathcal {I}}\) in two stages. In the first stage, we show that there exists a precursor on \({\mathfrak {C}}_\textrm{id}\) (see Definition 4.1), and then we extend this precursor to \({\mathfrak {C}}_\textrm{thin}\) in the second stage. Then, we apply Lemma 3.27 and Theorem 3.28 to prove Theorem 2.20.

4.1 Stage 1: Closed trails

Definition 4.1

Let us define

$$\begin{aligned} {\mathfrak {C}}_\textrm{id}&=\Big \{ (X,Y,s)\ \Big |\ s\in S_{X,Y},\ L(\nabla _{X,Y},s) \ \text {is connected, and} \ \deg _X=\deg _Y\Big \},\\ {\mathfrak {R}}_\textrm{id}&=\Big \{ (X,Y,s)\ \Big |\ s\in S_{X,Y} \ \text {and } \deg _X=\deg _Y\Big \}. \end{aligned}$$

The graph \(L(\nabla _{X,Y},s)\) is a cycle for any \((X,Y,s)\in {\mathfrak {C}}_\textrm{id}\), because the degree sequences of X and Y are identical. To handle this case, a large machinery was developed in Erdős et al. [4]. However, there the range of auxiliary matrices M was much smaller. Because of the larger range of auxiliary matrices in the current paper, we had to introduce and explicitly define the precursor. Therefore, we unfortunately need to repeat some parts of the proof of Erdős et al. [4] to obtain those claims in the desired generality. The following lemma collects the necessary technical lemmas proved in Erdős et al. [4].

Lemma 4.2

There exists a precursor on \({\mathfrak {C}}_\textrm{id}\) with parameter \(c=12\).

Proof

Let \((X,Y,s)\in {\mathfrak {C}}_\textrm{id}\) be arbitrary with \(X,Y\in {\mathcal {G}}(d)\). Since s is an (XY)-alternating closed trail, \(|\nabla _{X,Y}|\) is even. In Erdős et al. [4], the path \(\varUpsilon (X,Y,s)\) in the switch Markov graph is defined exactly when the degree sequences of X and Y are identical and \(s\in S_{X,Y}\). We use the definition of \(\varUpsilon (X,Y,s)\) from Erdős et al. [4] only when \(s\in S_{X,Y}\) and \(L(\nabla _{X,Y},s)\) is a cycle, so when \(p_s=1\).

First, let us recall that \(\varUpsilon (X,Y,s)\) in Erdős et al. [4] describes a sequence of graphs such that each two consecutive graphs can be obtained from each other by a switch. In Erdős et al. [4, Definition 4.2], for any \(s\in S_{X,Y}\), the path \(\varUpsilon (X,Y,s)\) is composed by concatenating a number of Sweep sequences:

$$\begin{aligned} \varUpsilon (X,Y,s)= {\left( {\left( \textsc {Sweep}(G^k_r,C^k_r)\right) }_{r=1}^{\mu _{k}+1} \right) }_{k=1}^{p_s}, \end{aligned}$$

where \(C^k_r\) are circuits and \(G^k_{r+1}=G^k_r\triangle C^k_r\), where \(G^k_0=X\triangle \uplus _{i=1}^{k-1} W^s_i\) and \(G^k_{\mu _k+1}=X\triangle \uplus _{i=1}^{k} W^s_i\), and \(W^s_k=\uplus _{r=1}^{\mu _k+1} E(C^k_r)\) and \(\nabla _{X,Y}=\uplus _{k=1}^{p_s} W^s_k\). It is easy to check in Erdős et al. [4, Algorithm 2.1] that \(\textsc {Sweep}(G^k_r,C^k_r)\) is a sequence of switches such that the four vertices of each switch all lie in \(V(C^k_r)\).

When \((X,Y,s)\in {\mathfrak {C}}_\textrm{id}\cap {\mathfrak {D}}_M\), by definition \(L(\nabla _{X,Y},s)\) is connected and \(p_s=1\), thus we may define

$$\begin{aligned} \varUpsilon _{M}(X,Y,s)={\left( \textsc {Sweep}(X\triangle \uplus _{i=1}^{r-1}C_i,C_r)\right) }_{r=1}^{\mu +1}, \end{aligned}$$
(23)

where s decomposes \(\nabla _{X,Y}\) into (primitive) circuits \({(C_r)}_{r=1}^\mu \) such that \(\nabla =\uplus _{r=1}^\mu E(C_r)\), see Erdős et al. [4, Lemma 5.13]. The circuit \(C_r\) defines a cyclical order on its vertices, but \(\textsc {Sweep}\) takes a linear order, so we still need to select the cornerstone, i.e., the vertex where the linear order starts to enumerate the vertices in the given cyclical order. The choice of the cornerstone ([4, eq. (5.11)]) only plays a role in proving that \({\widehat{M}}(X,Y,Z)\) is close to the adjacency matrix of an appropriate graph in \(\ell _1\)-norm. From the point of view of the rest of Erdős et al. [4], the cornerstone is chosen arbitrarily.

In this adaptation of the proof in Erdős et al. [4], the index M of \(\varUpsilon _M(X,Y,s)\) matters only in the choice of the cornerstones. The current proof is slightly more general than that of Erdős et al. [4], because we not only consider \(M=X+Y\), but also any other M such that \((X,Y,s)\in {\mathfrak {D}}_M\) (recall Eq. (9)). In the path \(\varUpsilon _{M}(X,Y,s)\) incorporating \(\textsc {Sweep}(G_r,C_r)\) (see Eq. (23)), choose the cornerstone \(v_r\) of \(\textsc {Sweep}(G_r,C_r)\) as follows:

$$\begin{aligned} \begin{aligned}&\text {Let }v_r\in V(C_r)\ \text {be the vertex which minimizes the row-sum in}\\&\qquad \left( M-A_{X\triangle \uplus _{i=1}^{r-1}C_i}\right) \Big |_{V(C_r)\times V(C_r)}\\&\text {and which is lexicographically minimal with respect to this condition.} \end{aligned} \end{aligned}$$
(24)

Since \(X,Y\in {\mathcal {G}}(d)\) for some d, Lemma 2.6 of Erdős et al. [4] applies, which claims that \(\textsc {Sweep}(X\triangle \uplus _{i=1}^{r-1}C_i,C_r)\) is a sequence of at most \(\frac{1}{2}|E(C_r)|-1\) switches that connects \(X\triangle \uplus _{i=1}^{r-1}C_i\) to \(X\triangle \uplus _{i=1}^{r}C_i\). Thus, the total length of the switch sequence \(\varUpsilon _M(X,Y,s)\) is at most \(\frac{1}{2}|\nabla _{X,Y}|-1\). For any \(Z\in \varUpsilon _M(X,Y,s)\), the degree sequences of X, Y and Z are identical, because switches preserve the degree sequence. Note that for any \(j\ne r\), the sequence \(\textsc {Sweep}(X\triangle \uplus _{i=1}^{j-1}C_i,C_j)\) does not depend on the cornerstone \(v_r\).

For any \(Z\in \varUpsilon _M(X,Y,s)\), the matrix \(M-A_Z\) belongs to \({\{-1,0,1,2\}}^{n\times n}\). Recall Eq. (9). If \((M-A_Z)_{vw}=2\), then vw is an edge in both X and Y, but vw is not present in Z as an edge. If, however, \((M-A_Z)_{vw}=-1\), then \(vw\notin E(X)\cup E(Y)\) and \(vw\in E(Z)\). With formulae,

$$\begin{aligned} \{ vw\ |\ (M-A_Z)_{vw}=+2\}&\subseteq E(X)\setminus E(Z)\setminus \nabla _{X,Y} \end{aligned}$$
(25)
$$\begin{aligned} \{ vw\ |\ (M-A_Z)_{vw}=-1\}&\subseteq E(Z)\setminus E(X)\setminus \nabla _{X,Y} \end{aligned}$$
(26)

respectively. In Erdős et al. [4, Lemma 2.7], the set \(R=R_Z\) is defined, and it has cardinality at most 4. By its definition, the set of edges in R is a superset of \(\left( E(X)\triangle E(Z)\right) \setminus \nabla _{X,Y}\), which is the union of the right-hand sides of Eqs. (25) and (26). In short, every \(+2\) and \(-1\) entry of \(M-A_Z\) is in a position which is associated with an edge in R.

We will show that \(M-A_Z\) is 7-tight. Lemma 8.2 in Erdős et al. [4] is the analogue of this tightness statement, and its proof can be repeated for this case with little to no modification. Suppose first that every edge in R is incident on \(v_r\) from Eq. (24): then Erdős et al. [4, Lemma 7.1] claims that the entries in \(A_X+A_Y-A_Z\) associated with edges in R consist of at most two pairs of symmetric \(+2\) entries and at most one pair of symmetric \(-1\) entries. By Eqs. (25) and (26), \(M-A_Z\) also contains at most two pairs of symmetric \(+2\) entries and at most one pair of symmetric \(-1\) entries.

Recall that Z is obtained from \(X\triangle \uplus _{i=1}^{r-1}C_i\) through a series of switches that only touch edges whose vertices are contained in \(V(C_r)\). Thus, the row- and column-sums of the submatrices

$$\begin{aligned} (M-A_Z)|_{V(C_r)\times V(C_r)}\quad \text {and}\quad \left( M-A_{X\triangle \uplus _{i=1}^{r-1}C_i}\right) \Big |_{V(C_r)\times V(C_r)} \end{aligned}$$

are identical. Let v and w be two distinct vertices in \(V(C_r)\).

  • If \(M_{vw}=2\), then by Eq. (25), \(vw\in E(X)\) and \(vw\notin \nabla _{X,Y}\), thus

    $$\begin{aligned} {\left( M-A_{X\triangle \uplus _{i=1}^{r-1}C_i}\right) }_{vw}={(M-A_X)}_{vw}=2-1=1. \end{aligned}$$
  • If \(M_{vw}=0\), then by Eq. (26), \(vw\notin E(X)\) and \(vw\notin \nabla _{X,Y}\), thus

    $$\begin{aligned} {\left( M-A_{X\triangle \uplus _{i=1}^{r-1}C_i}\right) }_{vw}={(M-A_X)}_{vw}=0-0=0. \end{aligned}$$
  • If \(M_{vw}=1\) and \(vw\in E\left( X\triangle \uplus _{i=1}^{r-1}C_i\right) \), then \({\left( M-A_{X\triangle \uplus _{i=1}^{r-1}C_i}\right) }_{vw}=0\).

  • If \(M_{vw}=1\) and \(vw\notin E\left( X\triangle \uplus _{i=1}^{r-1}C_i\right) \), then \({\left( M-A_{X\triangle \uplus _{i=1}^{r-1}C_i}\right) }_{vw}=1\).

Every entry of \(\left( M-A_{X\triangle \uplus _{i=1}^{r-1}C_i}\right) \big |_{V(C_r)\times V(C_r)}\) is either a 0 or a 1, and the diagonal is identically zero. Since \(C_r\) is alternating in \(X\triangle \uplus _{i=1}^{r-1}C_i\), there is at least one 0 entry and one 1 entry in every row and every column. Therefore, the row- and column-sums of \((M-A_Z)|_{V(C_r)\times V(C_r)}\) are at least 1 and at most \(|V(C_r)|-2\). Moreover, Eq. (24) ensures that the row-sum corresponding to \(v_r\) in \((M-A_Z)|_{V(C_r)\times V(C_r)}\) is minimal. By Lemma 3.7, \(M-A_Z\) is 5-tight.

We will again use Erdős et al. [4, Lemma 2.7] to understand the more detailed structure of \(R_Z\). If there is an edge in \(R_Z\) which is not incident on \(v_r\), then R falls under case (e) of Erdős et al. [4, Lemma 2.7]. Let \(Z\triangle F\) be the next graph in the Sweep sequence, where F is a \(C_4\). By Erdős et al. [4, Lemma 2.7(d)], every edge in the set \(R_{Z\triangle F}\) is incident on \(v_r\). As previously, Lemma 3.7 implies that \(M-A_{Z\triangle F}\) is 5-tight, and thus \(M-A_Z\) is 7-tight. \(\square \)

Next, we will cite three lemmas from Erdős et al. [4]. The first of these lemmas refers to the graph \(Z'=Z\triangle R\), which is defined in Erdős et al. [4, eq. (13)]. Note that the graph \(Z'\) is just a slight perturbation of Z.

Lemma 4.3

(Adapted from Lemma 5.15 in Erdős et al. [4]) For any \(s\in S_{X,Y}\) and any \(Z\in \varUpsilon (X,Y,s)\), there exists \(\pi _{Z'}\in \Pi (\nabla )\) which defines a closed Eulerian trail on \(\nabla _{X,Y}\) which is alternating in \(Z'\) with at most 4 exceptions.

Lemma 4.4

(Lemma 5.21 in Erdős et al. [4]) For a fixed number n of vertices of X and Y, the cardinality of the set of possible tuples \(B(X,Y,Z,s)\) is \({\mathcal {O}}(n^{8})\), where \(s\in S_{X,Y}\) and \(Z\in \varUpsilon (X,Y,s)\) are arbitrary.

Lemma 4.5

(Lemma 5.22 in Erdős et al. [4]) The quadruplet composed of the graph Z, \(\nabla \), \(\pi _{Z'}\), and \(B(X,Y,Z,s)\) uniquely determines the triplet \((X,Y,s)\).

We define \(\pi _M(X,Y,s,Z)=\pi _{Z'}\). Lemma 4.3 implies that \(\pi _{Z'}\) is alternating in Z with at most \(4+2|R_Z|\le 12\) exceptions. Let \(B_M(X,Y,s,Z)\) be identical to the parameter set \(B(X,Y,Z,s)\) defined in Erdős et al. [4]. Lemmas 4.3 to 4.5 ensure that every itemized requirement of Definition 3.25 holds, similarly to the situation in Erdős et al. [4]. \(\square \) (Lemma 4.2)

Now we are at the point where Theorem 2.17 is reproved by the generalized machinery: the Markov chain \({\mathbb {G}}(d)\) (using switches only) is rapidly mixing for any d from a P-stable set. By Lemmas 4.2 and 3.27, there exists a precursor on \({\mathfrak {R}}_\textrm{id}\) with parameter 3c, and the theorem follows from Remark 2.13 and Theorem 3.28.

4.2 Stage 2: Open trails

Until now, the degree sequences of X and Y in \((X,Y,s)\in {\mathfrak {C}}_\textrm{thin}\) were identical, that is, s was a closed trail. In the second stage we deal with the case when \(\Vert \deg _X-\deg _Y\Vert _1=2\) (while \(\Vert \deg _X-\deg _Y\Vert _\infty =1\)). The following lemma is actually a framework for reducing the construction of the precursor on \({\mathfrak {C}}_\textrm{thin}\) to Lemma 4.2. Note that we do not aim to optimize our estimate of the mixing time, we are merely interested in bounding it polynomially. Surprisingly, to construct the precursor on \({\mathfrak {C}}_\textrm{thin}\), it is sufficient to consider only those open trails s that have odd length.

Informally, the forthcoming Lemma 4.6 states that if any open \((X,Y)\)-alternating trail of odd length can be cut up into a constant number of segments that can be reassembled into at most two \((X,Y)\)-alternating trails that are either closed or can be closed by including \(v_0v_\lambda \) or \(v_1v_{\lambda -1}\) to join the two ends (alternation is not required there), then we can reduce the precursor construction on \({\mathfrak {C}}_\textrm{thin}\) to a precursor construction on \({\mathfrak {C}}_\textrm{id}\).

Lemma 4.6

Suppose there exists a precursor on \({\mathfrak {C}}_\textrm{id}\) with parameter c, and let \(c'\) be a fixed integer. Suppose, moreover, that for any \((X,Y,s)\in {\mathfrak {C}}_\textrm{thin}\) where \(s=v_0v_1\ldots v_\lambda \in \Pi (\nabla _{X,Y})\) is an open trail with \(v_0<_\textrm{lex} v_\lambda \) for some odd integer \(\lambda \), there exist \(\nabla _1,\nabla _2\) and \(s_1\in \Pi (\nabla _1),s_2\in \Pi (\nabla _2)\) (where \(\nabla _2=\emptyset \) is allowed) such that

  (1)

    \(\nabla _{X,Y}{\setminus }\{v_0v_\lambda \}\subseteq \nabla _1\cup \nabla _2\subseteq \left\{ \begin{array}{ll} \nabla _{X,Y}\cup \{v_0v_\lambda \} &{} \text {if }v_1=v_{\lambda -1}\\ \nabla _{X,Y}\cup \{v_0v_\lambda ,v_1v_{\lambda -1}\}&{} \text {if }v_1\ne v_{\lambda -1}\end{array}\right. \),

  (2)

    \(\nabla _{X,Y}\triangle \nabla _1\triangle \nabla _2\subseteq \{v_0v_\lambda \}\),

  (3)

    if \(v_1v_{\lambda -1}\in (\nabla _1\cup \nabla _2){\setminus } \nabla _{X,Y}\), then \(v_0v_\lambda \in \nabla _{X,Y}\) and \(s_1\) or \(s_2\) is equal to \(v_0v_1v_{\lambda -1}v_\lambda v_0\).

Moreover, for both \(i=1,2\):

  (4)

    the line graph \(L(\nabla _i,s_i)\) is an even cycle (or an empty graph),

  (5)

    \(s_i-v_0v_\lambda -v_1v_{\lambda -1}\) is \((X,Y)\)-alternating,

  (6)

    \(s_i-v_0v_\lambda \) is \((X,Y)\)-alternating with 0 or 2 exceptions,

  (7)

    \(s_i-v_1v_{\lambda -1}\) is \((X,Y)\)-alternating with 0 or 2 exceptions,

  (8)

    the number of components of \(L(\nabla _i,s_i)\cap L(\nabla ,s)\) is at most \(c'\).

Then, there exists a precursor on \({\mathfrak {C}}_\textrm{thin}\) with parameter \(3c+60c'+300\).

We are aware that such a huge parameter is nowhere near a practical bound. We made virtually zero effort to optimize the parameter.

Proof

Let \((X,Y,s)\in {\mathfrak {C}}_\textrm{thin}\) be such that \(s=v_0v_1\ldots v_\lambda \) and \(v_0<_\textrm{lex} v_\lambda \). We will now consider the case when \(\lambda \) is odd. As discussed earlier, the case of even \(\lambda \) will be handled by a reduction to the odd case. For an odd \(\lambda \), we must have either \(\deg _Y=\deg _X+\mathbb {1}_{\{v_0,v_\lambda \}}\) or \(\deg _Y=\deg _X-\mathbb {1}_{\{v_0,v_\lambda \}}\), because s is \((X,Y)\)-alternating and its length \(\lambda \) is odd.

Let \(\nabla _i\) and \(s_i\in \Pi (\nabla _i)\) for \(i=1,2\) be the set of edges and pairing function assumed to exist in the statement of this lemma. Let M be such that \((X,Y,s)\in {\mathfrak {D}}_M\), and we will first define \(\varUpsilon _M(X,Y,s)\), then we will also define \(\pi _M(X,Y,s,Z)\) and \(B_M(X,Y,s,Z)\) for any \(Z\in \varUpsilon _M(X,Y,s)\).

Let us modify the auxiliary matrix M. Recall from Definition 3.24 that if \(ab\in \nabla _{X,Y}\), then \(M_{ab}=1\). By Assumption (3) of this lemma, if \(M_{v_1v_{\lambda -1}}\ne 1\) and \(v_1v_{\lambda -1}\in \nabla _i\), then \(v_0v_\lambda \in \nabla _{X,Y}\) and \(M_{v_0v_\lambda }=1\). Let us define

$$\begin{aligned} M'{=}\left\{ \begin{array}{ll} M{+}A_{(v_0v_\lambda )} &{} \text {if }M_{v_0v_\lambda }{=}0,\\ M-A_{(v_0v_\lambda )} &{} \text {if }M_{v_0v_\lambda }{=}2,\\ M{+}A_{(v_1v_{\lambda -1})}-A_{(v_0v_1)}-A_{(v_{\lambda -1}v_\lambda )} &{} \text {if }M_{v_0v_\lambda }{=}1\text { and }M_{v_1v_{\lambda -1}}{=}0\text { and }v_1\ne v_{\lambda -1},\\ M-A_{(v_1v_{\lambda -1})}{+}A_{(v_0v_1)}{+}A_{(v_{\lambda -1}v_\lambda )} &{} \text {if }M_{v_0v_\lambda }{=}1\text { and }M_{v_1v_{\lambda -1}}{=}2\text { and }v_1\ne v_{\lambda -1},\\ M &{}\text {if }M_{v_0v_\lambda }{=}1\text { and }M_{v_1v_{\lambda -1}}{=}1\text { or }v_1{=}v_{\lambda -1}, \end{array}\right. \nonumber \\ \end{aligned}$$
(27)

so that \(M'_{v_0v_\lambda }=1\). Also, \(M'_{v_1v_{\lambda -1}}=1\) if \(v_1v_{\lambda -1}\in \nabla _1\cup \nabla _2\). The row-sums of M and \(M'\) are equal on every vertex except possibly on \(v_0\) and \(v_\lambda \).
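Since the case distinction in Eq. (27) is purely mechanical, it may be useful to see it transcribed directly. The following Python sketch is illustrative only: the numpy representation of M (with entries in \(\{0,1,2\}\)) and the helper for single-edge adjacency matrices are our own assumptions.

```python
# Direct transcription of Eq. (27), assuming M is a symmetric integer matrix with
# entries in {0, 1, 2} and A(v, w) is the adjacency matrix of the single edge vw.
import numpy as np

def single_edge(n, v, w):
    A = np.zeros((n, n), dtype=int)
    A[v, w] = A[w, v] = 1
    return A

def modify_M(M, v0, v1, vlm1, vlam):
    """Return M' as in Eq. (27); vlm1 and vlam stand for v_{lambda-1} and v_lambda."""
    n = M.shape[0]
    A = lambda v, w: single_edge(n, v, w)
    if M[v0, vlam] == 0:
        return M + A(v0, vlam)
    if M[v0, vlam] == 2:
        return M - A(v0, vlam)
    # from here on M[v0, vlam] == 1
    if v1 != vlm1 and M[v1, vlm1] == 0:
        return M + A(v1, vlm1) - A(v0, v1) - A(vlm1, vlam)
    if v1 != vlm1 and M[v1, vlm1] == 2:
        return M - A(v1, vlm1) + A(v0, v1) + A(vlm1, vlam)
    return M.copy()   # the remaining case: M_{v1 v_{lambda-1}} = 1 or v1 = v_{lambda-1}
```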

By assumption (4), \(|\nabla _i|\) is even. From Assumptions (1) and (2) it follows that any \(v_{j}v_{j+1}\) is contained in either \(\nabla _1\) or \(\nabla _2\), but not both, except if \(\{v_{j},v_{j+1}\}=\{v_0,v_\lambda \}\) or \(\{v_{j},v_{j+1}\}=\{v_1,v_{\lambda -1}\}\). Therefore, \(\nabla _1\cap \nabla _2\subseteq \{ v_0v_\lambda , v_1v_{\lambda -1}\}\). Let us start to extend the precursor. Without loss of generality, we may assume that \(\nabla _1\ne \emptyset \).

Case A1: \(\nabla _2=\emptyset \). Note that \(|\nabla _{X,Y}|=\lambda \) is odd. Since \(|\nabla _1|\) is even and \(\nabla _1\setminus \nabla _{X,Y}\subseteq \{v_0v_\lambda \}\), we must have \(v_0v_{\lambda }\notin \nabla _{X,Y}\Leftrightarrow v_0v_\lambda \in \nabla _1\). Also, \(v_1v_{\lambda -1}\in \nabla _1\Leftrightarrow v_1v_{\lambda -1}\in \nabla _{X,Y}\). Let us slightly change X and Y, so that the symmetric difference of the modified graphs \(X_1,Y_1\) is exactly \(\nabla _1\):

$$\begin{aligned} X_1&=\left\{ \begin{array}{ll} X\triangle v_0v_{\lambda }, &{}\text {if }s_1\text { is not alternating in }X;\\ X, &{}\text {if }s_1\text { is alternating in }X;\\ \end{array} \right. \\ Y_1&=\left\{ \begin{array}{ll} Y, &{}\text {if }s_1\text { is not alternating in }X;\\ Y\triangle v_0v_{\lambda }, &{}\text {if }s_1\text { is alternating in }X;\\ \end{array} \right. \end{aligned}$$

Suppose \(s_1-v_0v_\lambda \) is not alternating in X: then \(v_1v_{\lambda -1}\in \nabla _{X,Y}\) and the two non-alternations of \(s_1-v_0v_\lambda \) are located at \(v_1\) and \(v_{\lambda -1}\). But because \(s_1-v_0v_\lambda -v_1v_{\lambda -1}\) is alternating in X, we have \(|\deg _{E(X)\cap \nabla _{X,Y}}(v_1)-\deg _{E(Y)\cap \nabla _{X,Y}}(v_1)|=2\), so s cannot possibly be \((X,Y)\)-alternating, a contradiction. It follows that \(s_1\) is alternating in \(X_1\) (and thus \(Y_1\)): indeed, if \(s_1\) is not alternating in X, then the two exceptions are the endpoints of \(v_0v_\lambda \). Therefore, \((X_1,Y_1,s_1)\in {\mathfrak {C}}_\textrm{id}\cap {\mathfrak {D}}_{M'}\).

We extend the precursor to \((X,Y,s)\) as follows.

$$\begin{aligned} \varUpsilon _{M}(X,Y,s)&=\left\{ \begin{array}{ll} X\xrightarrow {\text {toggle }v_0v_\lambda } \varUpsilon _{M'}(X_1,Y_1,s_1) &{}\text {if }s_1\text { is not alternating in }X\\ \varUpsilon _{M'}(X_1,Y_1,s_1)\xrightarrow {\text {toggle }v_0v_\lambda }Y &{}\text {if }s_1\text { is alternating in }X, \end{array}\right. \end{aligned}$$
(28)
$$\begin{aligned} \pi _M(X,Y,s,Z)&=\left\{ \begin{array}{ll} s &{} \text {if }Z=X,Y\\ \pi _{M'}(X_1,Y_1,s_1,Z)|_{\nabla _{X,Y}} &{} \text {if }Z\in \varUpsilon _{M'}(X_1,Y_1,s_1)\setminus \{X,Y\} \end{array}\right. \end{aligned}$$
(29)
$$\begin{aligned} B_M(X,Y,s,Z)&=\left\{ \begin{array}{l} (0,\textrm{true}) \hspace{8em}\text {if }Z=X\\ \\ (0,\textrm{false}) \hspace{8em}\text {if }Z=Y\\ \\ \big (1,\lambda ,v_0v_\lambda ,v_{0}v_{\lambda }\in E(X), \pi _M(X,Y,s,Z)\triangle \pi _{M'}(X_1,Y_1,\\ \hspace{11em}s_1,Z) ,B_{M'}(X_1,Y_1,s_1,Z)\big ) \\ \hspace{11em}\text {if }Z\in \varUpsilon _{M'}(X_1,Y_1,s_1)\setminus \{X,Y\} \end{array}\right. \end{aligned}$$
(30)

Let us verify that Definition 3.25 holds for the extension. The defined path \(\varUpsilon _M(X,Y,s)\) in the Markov graph utilizes one edge-toggle, while the rest of the steps are switches. When the edge-toggle occurs, the degree sequence of the then current graph changes from \(\deg _X\) to \(\deg _Y\), because the rest of the steps do not change the degree sequence.

If \(Z=X,Y\), then \(M-A_Z\) is 0-tight, because \((X\triangle Z){\setminus } \nabla =(Y\triangle Z){\setminus } \nabla =\emptyset \). Suppose next that \(Z\in \varUpsilon _{M'}(X_1,Y_1,s_1)\). If \(M'=M\), then \(M-A_Z\) is c-tight by induction. If \(M'=M\pm A_{v_0v_\lambda }\), then note that the row-sums of \(M'-A_{X}\) are equal to the row-sums of \(M-A_Y\), and the row-sums of \(M'-A_Y\) are equal to the row-sums of \(M-A_X\). The degree sequence of Z is equal to \(\deg _X\) or \(\deg _Y\), so \(M'-A_Z\) is c-tight, and therefore, \(M-A_Z\) is \((c+1)\)-tight.

The length of \(\varUpsilon _M(X,Y,s)\) is at most \(1+c|\nabla _1|\le 1+c|\nabla _{X,Y}|+c\), which is still linear. The symmetric difference of X and Z outside \(\nabla _{X,Y}\) may also include \(v_0v_\lambda \), so the upper bound in Definition 3.25((b)) increases by at most one.

The maximum number of exceptions to alternation of \(\pi _M(X,Y,s,Z)\) in Z is no more than the number of exceptions to alternation of \(\pi _{M'}(X_1,Y_1,s_1,Z)\) in Z, because \(v_0v_\lambda \notin \nabla _{X,Y}\). Since \(\pi _{M'}(X_1,Y_1,s_1,Z)\) is a closed trail, even if we restrict its domain from \(\nabla _1\) to \(\nabla _{X,Y}\), it remains connected. The range of \(B_M(X,Y,s,Z)\) increases by a polynomial multiplicative factor (of at most \(4n^4\), but this will be dwarfed by the bound in the next case).

Lastly, \(\varPsi \) is still well defined. Trivially, if \(B_M(X,Y,s,Z)=(0,\textrm{true})\) (alternatively \((0,\textrm{false})\)), then \(X=Z\) (\(Y=Z\)) and \(Y=Z\triangle \nabla _{X,Y}\) (\(X=Z\triangle \nabla _{X,Y}\)). If \(B_M(X,Y,s,Z)=(1,\cdots )\), then we can recover \(\pi _{M'}(X_1,Y_1,s_1,Z)\) from \(\pi _M(X,Y,s,Z)\) using their symmetric difference, and subsequently, we can recover \(X_1\) and \(Y_1\) via \(\varPsi \), because we have a precursor on \({\mathfrak {C}}_\textrm{id}\). From these graphs we can easily recover both X and Y, as \(B_M(X,Y,s,Z)\) describes whether \(v_{0}v_{\lambda }\) is in E(X) or not (and the same containment relation holds for E(Y) because \(v_0v_\lambda \notin \nabla _{X,Y}\)).

Case A2: \(\nabla _1\ne \emptyset \) and \(\nabla _2\ne \emptyset \). Task 1: constructing \(\varUpsilon _M(X,Y,s)\). Obviously, \(|\nabla _i|\ge 4\) for \(i=1,2\) (\(s_i\) is an even length closed trail), and \(|\nabla _1\cap \nabla _2|\le 2\), so \(\lambda \ge 5\). The reduction is similar to the previous case; however, the construction of the precursor on \((X,Y,s)\) will be reduced to not one, but two elements of \({\mathfrak {C}}_\textrm{id}\). Recall that any \(v_{j}v_{j+1}\ne v_0v_\lambda ,v_1v_{\lambda -1}\) appears in exactly one of \(\nabla _1\) and \(\nabla _2\). If \(v_1v_{\lambda -1}\in \nabla _{X,Y}\cup \nabla _1\cup \nabla _2\), then the edge \(v_1v_{\lambda -1}\) appears in exactly two of \(\nabla _{X,Y}\), \(\nabla _1\), \(\nabla _2\). Observe that for any vertex \(v\in [n]\), we have

$$\begin{aligned} \deg _X(v)-\deg _{E(X)\cap \nabla _{X,Y}}(v)+\deg _{E(Y)\cap \nabla _{X,Y}}(v)=\deg _Y(v). \end{aligned}$$
(31)

Thus, for any \(v\ne v_0,v_\lambda \), we have

$$\begin{aligned} \deg _{E(X)\cap \nabla _{X,Y}}(v)=\deg _{E(Y)\cap \nabla _{X,Y}}(v). \end{aligned}$$
(32)

Suppose that \(s_i-v_0v_\lambda \) is not alternating in X for \(i=1\) and \(i=2\). Then, \(s_i-v_0v_{\lambda }\) is not alternating at \(v_1\) and \(v_{\lambda -1}\), which implies that \(v_1v_{\lambda -1}\in \nabla _1,\nabla _2\) and \(v_1v_{\lambda -1}\notin \nabla _{X,Y}\). If, say, \(v_1v_{\lambda -1}\in E(X)\), then \(s_i(v_1,v_1v_{\lambda -1})\in E(X)\), but \(s_i-v_0v_\lambda -v_1v_{\lambda -1}\) is alternating (for \(i=1,2\)); thus, we have \(\deg _{E(X)\cap \nabla _{X,Y}}(v_1)=\deg _{E(Y)\cap \nabla _{X,Y}}(v_1)+2\), so s cannot possibly be \((X,Y)\)-alternating, a contradiction. The case \(v_1v_{\lambda -1}\notin E(X)\) similarly leads to a contradiction; therefore, at least one of \(s_1-v_0v_\lambda \) and \(s_2-v_0v_\lambda \) must be alternating in X.

By swapping \(\nabla _1\) with \(\nabla _2\) and \(s_1\) with \(s_2\), we may assume that \(s_1-v_0v_\lambda \) is alternating in X. We claim that

$$\begin{aligned} s_2-v_0v_\lambda \text { is not alternating in }X\Longleftrightarrow v_1v_{\lambda -1}\in \nabla _1\cap \nabla _2. \end{aligned}$$
(33)

If \(v_1v_{\lambda -1}\in \nabla _1,\nabla _2\), then \(v_1v_{\lambda -1}\notin \nabla _{X,Y}\), and as before, we get a contradiction if \(s_i-v_0v_{\lambda }\) is alternating in X for both \(i=1,2\), so \(s_2-v_0v_{\lambda }\) must not alternate in X. If \(s_2-v_0v_\lambda \) is not alternating in X, then \(v_1v_{\lambda -1}\in \nabla _2\). Thus, if \(s_2-v_0v_\lambda \) is not alternating in X and \(v_1v_{\lambda -1}\notin \nabla _1\), then \(v_1v_{\lambda -1}\in \nabla _{X,Y}\), and so \(|\deg _{E(X)\cap \nabla _{X,Y}}(v_1)-\deg _{E(Y)\cap \nabla _{X,Y}}(v_1)|=2\), a contradiction.

Let us now define four auxiliary graphs.

$$\begin{aligned} X_1&=\left\{ \begin{array}{ll} X\triangle v_0v_{\lambda }, &{}\text {if }s_1\text { is not alternating in }X \\ X, &{}\text {if }s_1\text { is alternating in }X \\ \end{array} \right. \\ Y_1&=X_1\triangle \nabla _1\\ X_2&=\left\{ \begin{array}{ll} Y_1\triangle v_0v_{\lambda }, &{}\text {if }s_2\text { is not alternating in }Y_1 \\ Y_1, &{}\text {if }s_2\text { is alternating in }Y_1 \\ \end{array} \right. \\ Y_2&=X_2\triangle \nabla _2. \end{aligned}$$

By our assumptions, \(s_1\) is alternating in \(X_1\). Furthermore, from (33) it follows that \(s_2\) is alternating in \(X_2\). Because \(s_i\) defines an alternating trail in \(X_i\), we have \(\deg _{X_i}=\deg _{Y_i}\). Trivially, \(E(X_1)\triangle E(X)\subseteq \{v_0v_\lambda \}\), and by the assumptions of the lemma,

$$\begin{aligned} E(Y_2)\triangle E(Y)&\subseteq \{v_0v_\lambda \}\cup \big (E(Y_1)\triangle \nabla _2\triangle E(Y)\big ) \\&\subseteq \{v_0v_\lambda \}\cup \big (E(X)\triangle \nabla _1\triangle \nabla _2\triangle E(Y)\big )\\&\subseteq \{v_0v_\lambda \}\cup \big (\nabla _{X,Y}\triangle \nabla _1\triangle \nabla _2\big )\\&\subseteq \{v_0v_\lambda \}. \end{aligned}$$

We claim that

$$\begin{aligned} s_1 \text { is alternating in }X\text { or }s_2\text { is alternating in }Y_1\text { (or both).} \end{aligned}$$
(34)

Suppose that \(s_1\) is not alternating in X and \(s_2\) is not alternating in \(Y_1\). Then, \(X_1=X\triangle v_0v_\lambda \) and \(X_2=Y_1\triangle v_0v_\lambda \). Because \(s_1\) is not alternating in X, we have \(v_0v_\lambda \in \nabla _1\). Also, because \(s_2-v_1v_{\lambda -1}\) is not alternating in \(Y_1=X_1\triangle \nabla _1=X\triangle (\nabla _1{\setminus } \{v_0v_\lambda \})\), \(s_2-v_1v_{\lambda -1}\) is not alternating in X either. But this implies that \(|\deg _{X\cap \nabla _{X,Y}}(v_0)-\deg _{Y\cap \nabla _{X,Y}}(v_0)|\in \{2,3\}\) (depending on whether \(v_0v_\lambda \in \nabla _{X,Y}\) or not), which is a contradiction.

From now on, we assume that \(X_1=X\) or \(X_2=Y_1\). In other words, at least one of the following three symmetric differences is an empty set:

$$\begin{aligned} E(X)\triangle E(X_1),E(Y_1)\triangle E(X_2),E(Y_2)\triangle E(Y)\subseteq \{v_0v_\lambda \}. \end{aligned}$$
(35)

If exactly one of them is an empty set, then observe that

$$\begin{aligned} 2={\Vert \deg _X-\deg _Y\Vert }_1&\equiv {\Vert \deg _{X}-\deg _{X_1}\Vert }_1+{\Vert \deg _{Y_1} -\deg _{X_2}\Vert }_1+{\Vert \deg _{Y_2}-\deg _{Y}\Vert }_1 \\&\equiv 2+2\pmod {4}, \end{aligned}$$

which is a contradiction. Thus, there are exactly two empty sets on the left hand side of Eq. (35). From \(\deg _{X_i}=\deg _{Y_i}\) for \(i=1,2\), it follows that

$$\begin{aligned} \deg _{X_i},\deg _{Y_i}\in \{\deg _X,\deg _Y\}\ \text {for} \ i=1,2. \end{aligned}$$
(36)

In other words, we have shown that \((X_i,Y_i,s_i)\in {\mathfrak {C}}_\textrm{id}\cap {\mathfrak {D}}_{M'}\) for \(i=1,2\), and we may proceed with the reduction. By (34), we have three cases:

$$\begin{aligned}&\varUpsilon _{M}(X,Y,s) \\&\quad =\left\{ \begin{array}{ll} X\xrightarrow {\text {toggle }v_0v_\lambda } \varUpsilon _{M'}(X_1,Y_1,s_1)\rightarrow \varUpsilon _{M'}(X_2,Y_2,s_2)\rightarrow Y &{}\text {if }s_1\text { is not alternating in }X,\\ X\rightarrow \varUpsilon _{M'}(X_1,Y_1,s_1)\xrightarrow {\text {toggle }v_0v_\lambda } \varUpsilon _{M'}(X_2,Y_2,s_2)\rightarrow Y &{}\text {if }s_2\text { is not alternating in }Y_1,\\ X\rightarrow \varUpsilon _{M'}(X_1,Y_1,s_1)\rightarrow \varUpsilon _{M'}(X_2,Y_2,s_2)\xrightarrow {\text {toggle }v_0v_\lambda }Y &{}\text {otherwise},\\ \end{array} \right. \end{aligned}$$

where the \(\rightarrow \) signs simply represent joining two sequences (repeated graphs are dropped from the sequence). By the above observations about the symmetric differences and (36), \(\varUpsilon _M(X,Y,s)\) is indeed a path in the desired Markov graph.
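The joining indicated by the \(\rightarrow \) signs is plain concatenation with the duplicated junction graphs removed. A tiny illustrative Python sketch (the representation of graphs by arbitrary comparable objects is our own assumption):

```python
# Illustrative: concatenate walk segments in the Markov graph, dropping a graph that
# repeats the last graph of the previous segment (e.g. when X_2 = Y_1).

def join(*segments):
    path = []
    for segment in segments:
        for G in segment:
            if not path or G != path[-1]:
                path.append(G)
    return path

# Hypothetical example with strings standing in for graphs:
print(join(["X", "X1", "Y1"], ["Y1", "X2", "Y2"], ["Y2", "Y"]))
# ['X', 'X1', 'Y1', 'X2', 'Y2', 'Y']
```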

\(M'-A_Z\) is c-tight by the properties of the precursor on \({\mathfrak {C}}_\textrm{id}\). Since M and \(M'\) differ in at most three edge positions (see Eq. (27)), \(M-A_Z\) is \((c+3)\)-tight.

Task 2: Constructing \(\pi _M(X,Y,s)\). We have to construct a connected \(\pi _M(X,Y,s,Z)\) from the current \(\pi _{M'}(X_i,Y_i,s_i,Z)\) (where \(Z\in \varUpsilon _{M'}(X_i,Y_i,s_i)\)). Notice that \(L(\nabla _{X,Y},s)-\nabla _i\) (delete \(\nabla _i\) from the vertex set of the line graph) has at most \(c'+1\) components, since \(L(\nabla _{X,Y},s)\) is a path. Furthermore, \(L(\nabla _i,\pi _{M'}(X_i,Y_i,s_i))-(\nabla _i\setminus \nabla _{X,Y})\) has at most 2 components (since \(|\nabla _i{\setminus } \nabla _{X,Y}|\le 2\)). For \(Z\in \varUpsilon _{M'}(X_i,Y_i,s_i)\), let

$$\begin{aligned} \sigma _Z=\pi _{M'}(X_i,Y_i,s_i,Z)|_{\nabla _{X,Y}}\cup s|_{\nabla _{X,Y}\setminus \nabla _i}. \end{aligned}$$

The graph \(L(\nabla _{X,Y},\sigma _Z)\) has at most \(c'+3\) components because \(\pi _{M'}(X_i,Y_i,s_i)|_{\nabla _{X,Y}}\) and \(s|_{\nabla _{X,Y}\setminus \nabla _i}\) are composed of at most 2 and \(c'+1\) trails, respectively. Note that

$$\begin{aligned} \sigma _{X_i}=s_i|_{\nabla _{X,Y}}\cup s|_{\nabla _{X,Y}\setminus \nabla _i}, \end{aligned}$$

and thus, \(|\sigma _{X_i}\triangle s|\le 2(c'+3)\).

We claim that there exists \(\sigma '_Z\in \Pi (\nabla _{X,Y})\) such that \(\sigma '_Z\supseteq \sigma _Z\) (extends \(\sigma _Z\)) and \(|\sigma '_Z\triangle \sigma _Z|\le 2(c'+3)\). Let \(U_x=\{xy\in \nabla _{X,Y}\ |\ (x,xy)\notin \textrm{dom}(\sigma _Z)\}\) be the set of unpaired edges incident to x. In total, we have \(\sum _{x\in [n]}|U_x|\le 2(c'+3)\). It is sufficient now to define \(\sigma '_Z(x,\bullet )\) on \(U_x\) for every \(x\in [n]\). To do so, observe that:

$$\begin{aligned} |U_x|={}&\deg _{\nabla _{X,Y}}(x)-|\{ (x,xy)\in \textrm{dom}(\pi _{M'}(X_i,Y_i,s_i)|_{\nabla _{X,Y}})\}| \\&-|\{ (x,xy)\in \textrm{dom}(s|_{\nabla _{X,Y}\setminus \nabla _i})\}|. \end{aligned}$$

Then, the parity of \(|U_x|\) satisfies:

$$\begin{aligned} |U_x|&\equiv \deg _{\nabla _{X,Y}}(x)+|\{ (x,xy)\in \textrm{dom}(s|_{\nabla _{X,Y}\setminus \nabla _i})\}|\pmod {2}\\ |U_x|&\equiv \deg _{\nabla _{X,Y}}(x)+|\{ xy\in {\nabla _{X,Y}\setminus \nabla _i}\ |\ s(x,xy)\in {\nabla _{X,Y}\setminus \nabla _i}\}|\pmod {2}\\ |U_x|&\equiv \deg _{\nabla _{X,Y}}(x)+I_{x=v_0}\cdot I_{v_0v_1\notin \nabla _i}+I_{x=v_\lambda }\cdot I_{v_{\lambda -1}v_\lambda \notin \nabla _i}\pmod {2} \end{aligned}$$

From the last congruence it follows that \(|U_x|\) is even for \(x\ne v_0,v_\lambda \), so we may choose \(\sigma '_Z(x,\bullet )\) such that it pairs the edges in \(U_x\). If \(v_0v_1\notin \nabla _i\), then \(|U_{v_0}|\) is even, and we may choose \(\sigma '_Z(v_0,\bullet )\) such that it pairs the edges in \(U_{v_0}\) (note that \(\sigma '_Z(v_0,v_0v_1)=\sigma _Z(v_0,v_0v_1)=v_0v_1\)). If \(v_0v_1\in \nabla _i\), then \(|U_{v_0}|\) is odd and by definition \(\pi _{M'}(X_i,Y_i,s_i,Z)\) cannot map \((v_0,v_0v_1)\) to \(v_0v_1\); thus, \(\sigma '_Z(v_0,\bullet )\) can pair all edges of \(U_{v_0}\) except one, which \(\sigma '_Z(v_0,\bullet )\) will map to itself. Define \(\sigma '_Z(v_\lambda ,\bullet )\) on \(U_{v_\lambda }\) analogously. In any case, \(L(\nabla _{X,Y},\sigma '_Z)\) is composed of a path and a certain number of cycles, in total still no more than \(c'+3\) components.
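A minimal sketch of this local pairing-up step at a single vertex, under our own encoding of a pairing as a dict from (vertex, edge) pairs to edges, with edges given as frozensets; as described above, an odd \(|U_x|\) (possible only at \(v_0\) or \(v_\lambda \)) leaves one edge paired with itself.

```python
# Illustrative: extend a partial pairing sigma at vertex x by pairing up the unpaired
# incident edges U_x; if |U_x| is odd, the single leftover edge is mapped to itself.

def pair_up(sigma, x, U_x):
    unpaired = sorted(U_x, key=lambda e: tuple(sorted(e)))
    while len(unpaired) >= 2:
        e, f = unpaired.pop(), unpaired.pop()
        sigma[(x, e)] = f
        sigma[(x, f)] = e
    if unpaired:                      # odd |U_x|: the remaining edge is self-paired
        e = unpaired.pop()
        sigma[(x, e)] = e
    return sigma

# Hypothetical example at x = 0 with three unpaired incident edges:
sigma = {}
pair_up(sigma, 0, {frozenset({0, 3}), frozenset({0, 5}), frozenset({0, 7})})
```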

Furthermore, we claim that there exists \(\pi _Z\in \Pi (\nabla _{X,Y})\) such that \(|\pi _Z\triangle \sigma '_Z|\le 4(c'+3)\) and \(L(\nabla _{X,Y},\pi _Z)\) is connected. The pairing function \(\sigma '_Z\) defines one open trail and at most \(2(c'+3)-1\) closed trails in \(\nabla _{X,Y}\), and these trails partition the edge set of the connected trail s. Any closed trail intersecting the open trail can be incorporated into the open trail by changing the pairing function such that the symmetric difference increases by 4.
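The incorporation step can be made concrete as a local re-pairing at a shared vertex: if the open trail enters and leaves x through the pair \(\{e_1,e_2\}\) and a closed trail through the pair \(\{f_1,f_2\}\), then re-pairing to \(\{e_1,f_1\}\) and \(\{f_2,e_2\}\) splices the closed trail into the open one. The sketch below is illustrative only and uses the same hypothetical dict encoding of pairings as above.

```python
# Illustrative splice of a closed trail into the open trail at a shared vertex x.
# Precondition: pi pairs (e1, e2) at x on the open trail and (f1, f2) at x on the
# closed trail, with e1 != e2 and f1 != f2. Only the four entries at x change.

def splice(pi, x, e1, e2, f1, f2):
    pi[(x, e1)], pi[(x, f1)] = f1, e1     # ... e1, x, f1, ... (into the cycle)
    pi[(x, f2)], pi[(x, e2)] = e2, f2     # ... f2, x, e2, ... (back onto the path)
    return pi
```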

Let the pairing function associated to Z be

$$\begin{aligned} \pi _M(X,Y,s,Z)=\pi _Z. \end{aligned}$$

We know that \(\sigma _Z\) alternates with at most 3c exceptions in Z, since \(|(E(X_i)\triangle E(Z))\setminus \nabla _{i}|\le c\) and \(\pi _{M'}(X_i,Y_i,s_i)\) alternates in Z with at most c exceptions. Since \(|\pi _Z\triangle \sigma _Z|\le 6(c'+3)\), we get that \(\pi _Z\) alternates in Z with at most \(9(c'+3)\) exceptions.

Task 3: Constructing \(B_M(X,Y,s)\). Let us identify the ends of intervals of \(\nabla _i\) edges in a pairing function \(\vartheta \):

$$\begin{aligned} T_Z(\vartheta )&=\{ (x,xy)\ |\ xy\in \nabla _i\text { and } \left( (x,xy)\notin \textrm{dom}(\vartheta )\text { or }\vartheta (x,xy)\notin \nabla _i\right) \}\\ C_Z(\vartheta )&=\{ \min _\textrm{lex}V(L)\ |\ L\text { is a component in }L(\nabla _i\cap \nabla _{X,Y},\vartheta )\}, \end{aligned}$$

where \(\displaystyle \min _\textrm{lex}V(L)\) stands for the lexicographically minimal edge in V(L). Retracing the steps by which \(\pi _Z\) is obtained, we have

$$\begin{aligned} |T_Z(\pi _Z)|&\le |T_Z(\sigma _Z)|+6(c'+3)\le \big |T_Z\big (\pi _{M'}(X_i,Y_i,s_i)|_{\nabla _{X,Y}}\big )\big |+6(c'+3) \nonumber \\&\le 8+6(c'+3),\end{aligned}$$
(37)
$$\begin{aligned} |C_Z(\pi _Z)|&\le |C_Z(\sigma _Z)|+6(c'+3)\le \big |C_Z\big (\pi _{M'}(X_i,Y_i,s_i)|_{\nabla _{X,Y}}\big )\big |+6(c'+3) \nonumber \\&\le 2+6(c'+3). \end{aligned}$$
(38)
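The bookkeeping sets \(T_Z\) and \(C_Z\) are straightforward to compute. The Python sketch below is illustrative only; it assumes our own dict encoding of a pairing and takes the line graph \(L(E,\vartheta )\) to have vertex set E with e adjacent to \(\vartheta (x,e)\).

```python
# Illustrative computation of T_Z(theta) and C_Z(theta). Assumptions (ours): edges
# are frozensets of two integer vertices, theta maps (vertex, edge) pairs to edges,
# and two edges of the line graph are adjacent iff theta pairs them at a vertex.

def T(theta, nabla_i):
    return {(x, e) for e in nabla_i for x in e
            if (x, e) not in theta or theta[(x, e)] not in nabla_i}

def C(theta, nabla_i, nabla_XY):
    core = nabla_i & nabla_XY
    adjacent = {e: set() for e in core}
    for (x, e), f in theta.items():
        if e in core and f in core and e != f:
            adjacent[e].add(f)
            adjacent[f].add(e)
    representatives, seen = set(), set()
    for start in core:                    # walk the components of L(core, theta)
        if start in seen:
            continue
        component, stack = [], [start]
        while stack:
            e = stack.pop()
            if e in seen:
                continue
            seen.add(e)
            component.append(e)
            stack.extend(adjacent[e] - seen)
        # keep the lexicographically minimal edge of each component
        representatives.add(min(component, key=lambda e: tuple(sorted(e))))
    return representatives
```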

Let

$$\begin{aligned} B_M(X,Y,s,Z)&=\left\{ \begin{array}{ll} (0,Z\equiv X) \hspace{6em} \text {if }Z=X,Y\\ \big (2,\lambda ,v_0v_\lambda ,v_{0}v_{\lambda }\in E(X),v_1,v_{\lambda -1},(s_i|_{\nabla _{X,Y}}\cup \\ s|_{\nabla _{X,Y}\setminus \nabla _i})\triangle s,T_Z(\pi _Z),C_Z(\pi _Z), i,\pi _Z\triangle \sigma _Z,\\ \pi _M(X,Y,s,Z)|_{\nabla _i}\triangle \pi _{M'}(X_i,Y_i,s_i,Z),B_{M'}(X_i,Y_i,s_i,Z)\big ) \\ \text {if }Z\in \varUpsilon _{M'}(X_i,Y_i,s_i)\setminus \{X,Y\}\\ \end{array}\right. \end{aligned}$$

Every set listed in \(B_M(X,Y,s,Z)\) has at most a constant size, so the size of the range of \(B_M\) increases by a polynomial factor of n (of at most \(n^{60c'+240}\)). It remains to show that \(\varPsi \) is still well defined. This is trivial if \(Z=X,Y\). Suppose from now on, that \(Z\in \varUpsilon _{M'}(X_i,Y_i,s_i)\). Since \(L(\nabla _{X,Y},\pi _Z)\) is composed of paths and cycles, \(T_Z\) determines the ends of intervals of consecutive \(\nabla _i\) edges in the trails determined by \(\pi _Z\), and \(C_Z\) determines those \(L(\nabla _{X,Y},\pi _Z)\) components whose vertex set is a subset of \(\nabla _i\). Therefore, \(\nabla _{X,Y},T_Z,C_Z\) and \(\pi _Z\) determine \(\nabla _i\). Thus, \(\pi _Z|_{\nabla _i}=\pi _M(X,Y,s,Z)|_{\nabla _i}\) can be determined, and in turn \(\pi _{M'}(X_i,Y_i,s_i,Z)\) can be reconstructed too. Since we have a precursor on \({\mathfrak {C}}_\textrm{id}\), we get

$$\begin{aligned} (X_i,Y_i,s_i)=\varPsi (Z,\nabla _i,\pi _{M'}(X_i,Y_i,s_i,Z),B_{M'}(X_i,Y_i,s_i,Z)). \end{aligned}$$

Notice that \(X-v_0v_\lambda =X_1-v_0v_\lambda \) and \(Y-v_0v_\lambda =Y_2-v_0v_\lambda \). Since \(v_0v_\lambda \in E(Y)\) if and only if \(v_0v_\lambda \in E(X)\triangle \nabla _{X,Y}\), both X and Y are determined by \((X_i,Y_i)\). Furthermore, \(\sigma _Z|_{\nabla _{X,Y}{\setminus } \nabla _i}=s|_{\nabla _{X,Y}{\setminus } \nabla _i}\) is already determined, and together with \(s_i|_{\nabla _{X,Y}}\) and the auxiliary parameters, they determine s.

We have now defined the precursor on any \((X,Y,s)\in {\mathfrak {C}}_\textrm{thin}\) where s is an open trail of odd length. Suppose from now on that \((X,Y,s)\in {\mathfrak {C}}_\textrm{thin}\) where s is an open trail of even length.

Case B: \(s=v_0v_1\ldots v_{\lambda -1}v_\lambda \) is an open trail of even length and \(v_0=v_{\lambda -1}\). We will perform exactly one hinge-flip \(\{v_{\lambda -2}v_0,v_{\lambda -2}v_\lambda \}\). This case is very similar to when s is an open trail of odd length and \(\nabla _2=\emptyset \), so we will give the construction, but checking the precursor properties is left to the diligent reader. Let

$$\begin{aligned} s_1&=v_0v_1\ldots v_{\lambda -2}v_\lambda v_0\\ \nabla _1&=\nabla _{X,Y}\triangle \{v_{\lambda -2}v_0,v_{\lambda -2}v_\lambda \}\\ M'&=\left\{ \begin{array}{ll} M+A_{(v_{\lambda -2}v_\lambda )} &{} \text {if }M_{v_{\lambda -2}v_\lambda }=0\\ M-A_{(v_{\lambda -2}v_\lambda )} &{} \text {if }M_{v_{\lambda -2}v_\lambda }=2\\ M &{} \text {if }M_{v_{\lambda -2}v_\lambda }=1 \end{array} \right. \\ X_1&=\left\{ \begin{array}{ll} X\triangle \{v_{\lambda -2}v_0,v_{\lambda -2}v_\lambda \}, &{}\text {if }s_1\text { is not alternating in }X\\ X, &{}\text {if }s_1\text { is alternating in }X\\ \end{array} \right. \\ Y_1&=\left\{ \begin{array}{ll} Y, &{}\text {if }s_1\text { is not alternating in }X\\ Y\triangle \{v_{\lambda -2}v_0,v_{\lambda -2}v_\lambda \}, &{}\text {if }s_1\text { is alternating in }X\\ \end{array} \right. \\ \varUpsilon _{M}(X,Y,s)&=\left\{ \begin{array}{ll} X\xrightarrow {\text {hinge-flip}} \varUpsilon _{M'}(X_1,Y_1,s_1) &{}\text {if }s_1\text { is not alternating in }X\\ \varUpsilon _{M'}(X_1,Y_1,s_1)\xrightarrow {\text {hinge-flip}}Y &{}\text {if }s_1\text { is alternating in }X \end{array} \right. \end{aligned}$$

We define \(\pi _M(X,Y,s,Z)\) simply by replacing \((v_{\lambda -2},v_{\lambda -2}v_{\lambda })\) with \((v_{\lambda -2},v_{\lambda -2}v_0)\) in the pairing \(\pi _{M'}(X_1,Y_1,s_1,Z)\), removing \((v_{\lambda },v_{\lambda -2}v_{\lambda })\) from the pairing (and creating the self-paired edges at \(v_0\) and \(v_{\lambda }\)). Defining a suitable \(B_M(X,Y,s,Z)\) is straightforward and it is also left to the reader.

Case C: \(s=v_0v_1\ldots v_{\lambda -1}v_\lambda \) is an open trail of even length and \(v_0\ne v_{\lambda -1}\). Let \(s'=s-v_{\lambda -1}v_\lambda \) and observe that we have already defined \(\varUpsilon _M(X,Y\triangle v_{\lambda -1}v_\lambda ,s')\) earlier in this proof, since \(L(\nabla -v_{\lambda -1}v_\lambda ,s')\) is a path of odd length.

However, choosing \(\varUpsilon _M(X,Y,s)=\varUpsilon _M(X,Y\triangle v_{\lambda -1}v_\lambda ,s')\rightarrow Y\) violates the precursor property because the degree at \(v_{\lambda -1}\) may become too small or too large when the edge-toggle is performed on \(v_0v_{\lambda -1}\) (the rest of the steps are switches). Fortunately, this is very easy to fix: simply replace the edge-toggle on \(v_0v_{\lambda -1}\) in the construction of \(\varUpsilon _M(X,Y\triangle v_{\lambda -1}v_\lambda ,s')\) with the hinge-flip between \(v_0v_{\lambda -1}\) and \(v_{\lambda -1}v_\lambda \), and take the resulting sequence as \(\varUpsilon _M(X,Y,s)\). Since every other step in \(\varUpsilon _M(X,Y\triangle v_{\lambda -1}v_\lambda ,s')\) is a switch, this ensures that for any \(Z\in \varUpsilon _M(X,Y,s)\) we have \(\deg _Z\in \{\deg _X,\deg _Y\}\).

We also need to define \(\pi _M\) and \(B_M\). Since the odd length case already describes a trail from \(v_0\) to \(v_{\lambda -1}\), we can join \(v_{\lambda -1}v_\lambda \) to the edge ending the trail at \(v_{\lambda -1}\) to obtain a suitable \(\pi _M(X,Y,s,Z)\) (in the derived bounds, this essentially increases \(c'\) by 1). Furthermore, we also need to store the identity of \(v_{\lambda -1}\) and \(v_\lambda \) in \(B_M(X,Y,s,Z)\) for any \(Z\in \varUpsilon _M(X,Y,s)\). As a result, the range of \(B_M(X,Y,s,Z)\) increases by a polynomial factor (also note that the parameter of the precursor has to be increased by a constant to accommodate \(v_{\lambda -1}v_\lambda \)).

The well-definedness of \(\varPsi \) follows, because the constant number of differences compared to the previous case can all be stored in \(B_M(X,Y,s,Z)\) without violating Definition 3.25((g)).

One could say that this proof is not very detailed, but we think it is not worth spelling out the details, because they would be an almost verbatim repetition of the first two cases. \(\square \)

5 Proof of Theorem 2.20

Let \({\mathcal {I}}\) be a set of weakly P-stable thin degree sequence intervals. By Lemma 4.2, there exists a precursor with parameter \(c=12\) on \({\mathfrak {C}}_\textrm{id}\). We want to apply Lemma 4.6 to prove that there exists a precursor on \({\mathfrak {C}}_\textrm{thin}\) with some fixed parameter. Once this is shown, Theorem 2.20 follows: the precursor can be extended to \({\mathfrak {R}}_\textrm{thin}\) by Lemma 3.27, which is sufficient for proving rapid mixing of \({\mathbb {G}}(\ell ,u)\) for every \((\ell ,u)\in {\mathcal {I}}\) by Theorem 3.28. Suppose \((X,Y,s)\in {\mathfrak {C}}_\textrm{thin}\). If s is a closed trail, then \((X,Y,s)\in {\mathfrak {C}}_\textrm{id}\), on which we have already defined a precursor.

Suppose from now on that \(s=v_0v_1\ldots v_{\lambda }\) is an open trail of odd length (possibly 1). By the KD-lemma (Lemma 3.21), \(v_0\ne v_\lambda \). To apply Lemma 4.6, it is enough to define \(s_1\) and \(s_2\), since their domains determine \(\nabla _1,\nabla _2\). The premises of Lemma 4.6 are elementary and trivial to check once \(s_1\in \Pi (\nabla _1)\) and \(s_2\in \Pi (\nabla _2)\) are given. We will finish the proof by a complete case analysis, where we provide a suitable \(s_1\) and \(s_2\) for each case.

We will prove that Lemma 4.6 holds for \({\mathfrak {C}}_\textrm{thin}\) with \(c'=2\). We distinguish eight main cases, three of which have two subcases. The cases will be distinguished based on the relationship between \(v_0,v_1,v_{\lambda -1},v_{\lambda }\) and s. Recall that \(v_0\ne v_\lambda \). In the corresponding figures, by exchanging X and Y, we may suppose that \(v_0v_1\in E(X)\). Thus, the edges of X are drawn with solid lines, edges of Y with dashed lines, and edges not contained in \(\nabla _{X,Y}\) are dotted. Those pairs that are contained in \(\nabla _{X,Y}\) are joined by thick solid or dashed lines. The similarly thick dash-dotted lines represent \((X,Y)\)-alternating segments of the trail s. Recall that a trail may visit a vertex multiple times, but it can only traverse an edge at most once. The trails traversed by \(s_1\) and \(s_2\) are colored in blue and red, respectively.

Case 1. First, we assume that \(v_0 v_\lambda \notin \nabla _{X,Y}\).

[figure f: definition of \(s_1\) and \(s_2\) for Case 1]

From now on, we assume that \(v_0 v_\lambda \in \nabla _{X,Y}\). In other words, the open trail \(s\in \Pi (\nabla _{X,Y})\) traverses \(v_0v_\lambda \), that is, there exists \(2\le j\le \lambda -2\) such that \(\{v_0,v_\lambda \}=\{v_j,v_{j+1}\}\).

Case 2. We assume in this case that j is even.

Case 2a. If \(v_j=v_0\) and \(v_{j+1}=v_\lambda \), then let

[figure g: definition of \(s_1\) and \(s_2\) for Case 2a]

Case 2b. If \(v_{j+1}=v_{0}\) and \(v_{j}=v_\lambda \), then let

[figure h: definition of \(s_1\) and \(s_2\) for Case 2b]

From now on, we assume that j is odd.

Case 3. If \(v_{j}=v_\lambda \) and \(v_{j+1}=v_0\), then let

[figure i: definition of \(s_1\) and \(s_2\) for Case 3]

From now on, we assume that \(v_{j}=v_0\) and \(v_{j+1}=v_\lambda \).

Case 4. If \(v_1=v_{\lambda -1}\), then let

[figure j: definition of \(s_1\) and \(s_2\) for Case 4]

From now on, we assume that \(v_1\ne v_{\lambda -1}\).

Case 5. If \(v_1v_{\lambda -1}\notin \nabla _{X,Y}\).

[figure k: definition of \(s_1\) and \(s_2\) for Case 5]

From now on, we assume that \(v_1 v_{\lambda -1}\in \nabla _{X,Y}\). In other words, the open trail \(s\in \Pi (\nabla _{X,Y})\) traverses \(v_1v_{\lambda -1}\), that is, there exists \(1\le k\le \lambda -1\) such that \(\{v_1,v_{\lambda -1}\}=\{v_k,v_{k+1}\}\).

First, we assume that \(k<j\); the case \(k>j\) will follow easily by symmetry.

Case 6. Suppose that k is even.

Case 6a. If \(v_k=v_{1}\) and \(v_{k+1}=v_{\lambda -1}\), then let

[figure l: definition of \(s_1\) and \(s_2\) for Case 6a]

Case 6b. If \(v_{k+1}=v_{1}\) and \(v_{k}=v_{\lambda -1}\), then let

[figure m: definition of \(s_1\) and \(s_2\) for Case 6b]

Case 7. Suppose that k is odd.

Case 7a. If \(v_k=v_{1}\) and \(v_{k+1}=v_{\lambda -1}\), then let

[figure n: definition of \(s_1\) and \(s_2\) for Case 7a]

Case 7b. If \(v_{k+1}=v_{1}\) and \(v_{k}=v_{\lambda -1}\), then let

[figure o: definition of \(s_1\) and \(s_2\) for Case 7b]

Case 8. The remaining case is when \(k>j\). By taking the reverse order \(v'_i=v_{\lambda -i}\) for \(i=0,\ldots ,\lambda \), we have \(\lambda -k-1<\lambda -j-1\), so one of the previous subcases of Case 6 or Case 7 applies to \(s'=v'_0v'_1\ldots v'_{\lambda -1}v'_{\lambda }\). Clearly, the relevant properties of \(s_i\) are preserved by reversing the order of the indices.