Abstract
The approximate uniform sampling of graph realizations with a given degree sequence is an everyday task in several social science, computer science, and engineering projects. One approach uses Markov chains. The best currently available result about the well-studied switch Markov chain is that it is rapidly mixing on P-stable degree sequences (see DOI:10.1016/j.ejc.2021.103421). The switch Markov chain does not change any degree sequence. However, there are cases where degree intervals are specified rather than a single degree sequence. (A natural scenario where this problem arises is in hypothesis testing on social networks that are only partially observed.) Rechner, Strowick, and Müller–Hannemann introduced in 2018 the notion of the degree interval Markov chain, which uses three (separately well-studied) local operations (switch, hinge flip, and toggle) and operates on realizations of degree sequences whose pairwise coordinatewise distance is very small. Recently, Amanatidis and Kleer published a beautiful paper (DOI:10.4230/LIPIcs.STACS.2023.7) showing that the degree interval Markov chain is rapidly mixing if the sequences come from a system of very thin intervals centered not far from a regular degree sequence. In this paper, we substantially extend their result, showing that the degree interval Markov chain is rapidly mixing if the intervals are centered at P-stable degree sequences.
1 Introduction
In this relatively short, highly technical paper, we prove a substantial extension of a recent result of Amanatidis and Kleer [1, Theorem 1.3]. Our proof is based on the unified approach developed in Erdős et al. [4] for P-stable degree sequences. For the sake of brevity, in this section we concisely describe the problem itself, but we do not give a detailed description of the background. For further details, the diligent reader is referred to Amanatidis and Kleer [1] and Erdős et al. [4].
Approximate sampling of graphs with given degree sequences plays an increasingly important role in modeling different real-life dynamics. One basic way to study them is the switch Markov chain method, made popular by Kannan et al. [9]. The currently best result via this method is Erdős et al. [4], where it is proved that the switch Markov chain is rapidly mixing on P-stable degree sequences. The notion of P-stability was introduced by Jerrum and Sinclair [8] and first studied for its own sake by Jerrum et al. [6].
In real-life applications, it is not always possible to know the exact degree sequence of the targeted network. For example, a natural scenario where this problem arises is in hypothesis testing on social networks that are only partially observed. Therefore, it can happen that we have to sample networks with slightly different degree sequences. It is possible to study this situation via Markov chain decompositions, where another Markov chain moves among the component chains. A good example of this approach is the proof of Amanatidis and Kleer [1, Theorem 1.1].
Another possibility is to introduce further local operations, since the switch operation itself does not change the degree sequence. Such operations are the hinge flip and the toggle (deletion–insertion) operations. These two latter operations were introduced by Jerrum and Sinclair in their seminal work on approximate 0-1 permanents [7]. (The number of perfect matchings of a bipartite graph is equal to the permanent of its bipartite adjacency matrix.) These three operations together are often applied in network building applications in practice (as pointed out in Coolen et al. [2]), but without any theoretical guarantee of a correct result.
Rechner et al. [10] defined a Markov chain with these three local operations for bipartite graphs. Amanatidis and Kleer recognized in their important recent paper [1] the following very interesting fact: assume that the inconsistencies in the degree sequences are never bigger than one coordinatewise (the degrees can be i or \(i+1\)), and the degree intervals are placed close to a given constant r (the interval placements can vary within \([r-r^\alpha , r+r^\alpha ]\), where \(\alpha \) is at most 1/2). The authors coined the name near-regular degree intervals for this degree sequence property and the name degree interval Markov chain for the whole setup. Their result is that the degree interval Markov chain for near-regular degree intervals is rapidly mixing.
Our main result (Theorem 2.20) is that this Markov chain is rapidly mixing for such tight degree intervals when they are placed at P-stable degree sequences. Since all degree sequences close to some constant are P-stable, but P-stable degree sequences can be very far from regular sequences, our result is a substantial generalization of the theorem of Amanatidis and Kleer.
To our great surprise, it turned out that this result can be derived from the proof of the main theorem of Erdős et al. [4]. To that end, we had to analyze in detail the auxiliary structures of that proof and extend them to cover this setup. The result of this analysis is the notion of a precursor (Sect. 3.3). In turn, this notion is conducive to a rather short proof of the rapid mixing property. Therefore, the main task in this paper is to define the appropriate precursor.
2 Definitions and Notation
Many of the definitions in this section are extensions or generalizations of notions introduced in Erdős et al. [4]. We will alert the reader whenever this is the case.
We consider \({\mathbb {N}}\) to be the set of nonnegative integers. Let \([n]=\{1,\ldots ,n\}\) denote the integers from 1 to n, and let \(\left( {\begin{array}{c}[n]\\ k\end{array}}\right) \) denote the set of k-element subsets of [n]. Given a subset \(S\subseteq [n]\), let \(\mathbb {1}_S: [n]\rightarrow \{0,1\}\) be the characteristic function of S, that is, \(\mathbb {1}_S(s)=1\Leftrightarrow s\in S\). We often use \(\uplus \) to emphasize that a union of pairwise disjoint sets is taken. The graphs in this paper are vertex-labeled and finite. Parallel edges and loops are forbidden, and unless otherwise stated, the labeled vertex set of an n-vertex graph is [n]. The line graph L(G) of a graph G is a graph on the vertex set E(G) (so the vertices of L(G) are taken from \(\left( {\begin{array}{c}[n]\\ 2\end{array}}\right) \)), where any two adjacent edges \(e,f\in E(G)\) are joined (by an edge). The line graph is also free of parallel edges and loops. A trail is a walk that does not visit any edge twice. An open trail starts and ends at two distinct vertices. A closed trail has neither a start nor an end vertex. Given a matrix \(M\in {\mathbb {Z}}^{n\times n}\), its \(\ell _1\)-norm is \(\Vert M\Vert _1=\sum _{i,j}|M_{ij}|\).
Definition 2.1
Given two graphs on [n] as vertices, say, \(X=([n],E(X))\) and \(Y=([n],E(Y))\), we define their symmetric difference graph
Definition 2.2
Given a set of edges \(R\subseteq \left( {\begin{array}{c}[n]\\ 2\end{array}}\right) \), we may treat R as a graph. If X is a graph on the vertex set [n], let
Definition 2.3
A degree sequence on n vertices is a vector \(d\in {\mathbb {N}}^n\) which is coordinatewise at most \(n-1\). The set of realizations of d is the following set of graphs:
where \(\deg _G(i)\) is the degree of the ith vertex in G. The degree sequence d is graphic if \({\mathcal {G}}(d)\) is nonempty. A set of degree sequences \({\mathcal {D}}\) may contain graphic as well as non-graphic degree sequences.
The degree sequence of a graph G on [n] as vertices is the vector \(\deg _G=(\deg _G(1),\deg _G(2),\ldots ,\deg _G(n))\).
Definition 2.4
For a pair of vectors \(\ell ,u\in {\mathbb {N}}^n\) we write \(\ell \le u\) if and only if \(\ell \) is coordinatewise less than or equal to u, that is, \(\ell _i\le u_i\) for all \(i\in [n]\). Furthermore, let
Definition 2.5
If \(\ell \le u\) are both degree sequences of length n, then \([\ell ,u]\) is a degree sequence interval. A degree sequence interval \([\ell ,u]\) is called thin if \(u_i\le \ell _i+1\) for all \(i\in [n]\). We denote the set of realizations of the degree sequence interval \([\ell ,u]\) by
Remark 2.6
Not every degree sequence in \([\ell ,u]\) is necessarily graphic, even if both \(\ell \) and u are graphic.
Definition 2.7
Given a polynomial \(p\in {\mathbb {R}}[x]\), we say that a degree sequence \(d\in {\mathbb {N}}^n\) is p-stable if
Definition 2.8
A set of degree sequences \({\mathcal {D}}\) is p-stable if every degree sequence \(d\in {\mathcal {D}}\) is p-stable.
Definition 2.9
A set of degree sequences \({\mathcal {D}}\) is P-stable if there exists \(p\in {\mathbb {R}}[x]\) such that \({\mathcal {D}}\) is p-stable.
In Erdős et al. [4], only P-stability is defined, but in this paper, it is more convenient to also define p-stability.
Remark 2.10
A finite set of degree sequences \({\mathcal {D}}\) is always P-stable.
Let us introduce a weaker stability notion for degree sequence intervals.
Definition 2.11
Given \(p\in {\mathbb {R}}[x]\), we say that a degree sequence interval \([\ell ,u]\subseteq {\mathbb {N}}^n\) is weakly p-stable if
Definition 2.12
A set \({\mathcal {I}}\) of degree sequence intervals is weakly P-stable if there exists \(p\in {\mathbb {R}}[x]\) such that every \([\ell ,u]\in {\mathcal {I}}\) is weakly p-stable. (Any finite \({\mathcal {I}}\) is weakly P-stable.)
Remark 2.13
If the set of degree sequences \([\ell ,u]\) is p-stable, then \([\ell ,u]\) is weakly p-stable.
Remark 2.14
It is indeed possible that \([\ell ,u]\) is weakly p-stable, but \([\ell ,u]\) (as a set of degree sequences) is not p-stable. For example, take \(\ell ={(0)}_{i=1}^n\) and \(u={(n-1)}_{i=1}^n\): the interval \([\ell ,u]\) is clearly weakly 1-stable, but most of the degree sequences on n vertices are not 1-stable.
Definition 2.15
(Degree interval Markov chain) Let us define the degree interval Markov chain \({\mathbb {G}}(\ell ,u)\). The state space of the Markov chain is \({\mathcal {G}}(\ell ,u)\). In the following, we define three types of transitions: switches, hinge flips, and edge toggles (Fig. 1). If the current state of the Markov chain is \(G\in {\mathcal {G}}(\ell ,u)\), then

with probability 1/2, the chain stays in G (the Markov chain is lazy),

with probability 1/6, pick 4 vertices a, b, c, d (uniformly at random), and the Markov chain changes its state to \(G'=G\triangle \{ab,cd,ac,bd\}\) if \(\deg _{G'}=\deg _G\) (in which case the transition is a switch); otherwise the chain stays in G,

with probability 1/6, pick 3 vertices a, b, c (uniformly at random), and the Markov chain changes its state to \(G''=G\triangle \{ab,bc\}\) if \(e(G'')=e(G)\) and \(G''\in {\mathcal {G}}(\ell ,u)\) (a hinge flip); otherwise the chain stays in G,

with probability 1/6, pick a pair of vertices a, b (uniformly at random), and the Markov chain changes its state to \(G'''=G\triangle \{ab\}\) if \(G'''\in {\mathcal {G}}(\ell ,u)\) (an edge toggle); otherwise the chain stays in G.
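To fix ideas, the transition rule above can be sketched in a few lines of Python. This is our own illustrative encoding, not part of the formal development: graphs are sets of frozenset edges, and the names `chain_step`, `lo`, `hi` are ours.

```python
import random

def chain_step(edges, n, lo, hi):
    """One lazy step of the degree interval Markov chain on G(l, u).

    Graphs are sets of frozenset edges on vertex set range(n); lo/hi are
    the coordinatewise bounds of the degree sequence interval [l, u].
    """
    def deg(es, v):
        return sum(1 for e in es if v in e)

    def in_interval(es):
        return all(lo[v] <= deg(es, v) <= hi[v] for v in range(n))

    r = random.random()
    if r < 1 / 2:                        # lazy step: stay in G
        return edges
    if r < 2 / 3:                        # attempted switch on 4 vertices
        a, b, c, d = random.sample(range(n), 4)
        new = edges ^ {frozenset((a, b)), frozenset((c, d)),
                       frozenset((a, c)), frozenset((b, d))}
        # a switch must preserve the whole degree sequence
        ok = all(deg(new, v) == deg(edges, v) for v in range(n))
        return new if ok else edges
    if r < 5 / 6:                        # attempted hinge flip on 3 vertices
        a, b, c = random.sample(range(n), 3)
        new = edges ^ {frozenset((a, b)), frozenset((b, c))}
        # a hinge flip must preserve the edge count and stay in G(l, u)
        return new if len(new) == len(edges) and in_interval(new) else edges
    # attempted edge toggle on a single pair of vertices
    a, b = random.sample(range(n), 2)
    new = edges ^ {frozenset((a, b))}
    return new if in_interval(new) else edges
```

Each step either stays put or attempts one of the three local operations, with the degree interval constraint guarding every move.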
We will use the following seminal result of Sinclair. Let \(\Pr _{\mathbb {G}}(x\rightarrow y)\) denote the transition probability from state x to y in the Markov chain \({\mathbb {G}}\). Let \(\sigma \equiv |V({\mathbb {G}})|^{-1}\) be the unique (uniform) stationary distribution on \({\mathbb {G}}\). Given a multicommodity flow f on \({\mathbb {G}}\), let \(\ell (f)\) be the length of the longest path with positive flow, and let \(\rho (f)\) be the maximum loading through an oriented edge of the Markov graph, that is,
where \(\Gamma _{X,Y}\) is the set of all simple directed paths from X to Y in \({\mathbb {G}}\).
Theorem 2.16
(adapted from Sinclair [11, Proposition 1 and Corollary 6']) Let \({\mathbb {G}}\) be an irreducible, symmetric, reversible, and lazy Markov chain. Let f be a multicommodity flow on \({\mathbb {G}}\) which sends \(\sigma (X)\sigma (Y)\) commodity between any ordered pair \(X,Y\in V({\mathbb {G}})\). Then, the mixing time within which the Markov chain, started from any element of \(V({\mathbb {G}})\), converges \(\varepsilon \)-close in \(\ell _1\)-norm to \(\sigma \) is
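For orientation, in its standard form (with the uniform \(\sigma \), as here) Sinclair's multicommodity flow bound on the mixing time reads as follows; this is a reconstruction from the standard literature, and the exact constants should be checked against [11]:

$$\begin{aligned} \tau (\varepsilon )\ \le \ \rho (f)\,\ell (f)\left( \ln |V({\mathbb {G}})| + \ln \varepsilon ^{-1}\right) . \end{aligned}$$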
One of the most famous applications of this idea is the result of Jerrum and Sinclair [7] providing a probabilistic approximation of the permanent. The following result also relies on Theorem 2.16, and it describes the largest known class of degree sequences on which the switch Markov chain is rapidly mixing (that is, the rate of convergence of the Markov chain is bounded by a polynomial in the length of the degree sequence).
Theorem 2.17
[4] The switch Markov chain is rapidly mixing on the realizations of any degree sequence in a set of P-stable degree sequences (the rate of convergence depends on the set).
There are several known P-stable regions; one of the earliest and most well-known is the following.
Theorem 2.18
(Jerrum, McKay, and Sinclair [6]) Let \(\delta =\min (d)\) and \(\Delta =\max (d)\) be the minimum and maximum elements of d. The set of degree sequences d satisfying
for any n are P-stable. (See Fig. 2.)
Amanatidis and Kleer [1] recently published a surprising new type of result: a clever approximate uniform sampler (see, e.g., [7]) for \({\mathcal {G}}(\ell ,u)\) where the elements of \([\ell ,u]\) are near-regular. They achieve this using a composite Markov chain. They also provide the first step toward sampling \({\mathcal {G}}(\ell ,u)\) directly using the degree interval Markov chain.
Let us reiterate that Amanatidis and Kleer [1] apply the Markov chain suggested by Rechner, Strowick, and Müller–Hannemann [10], which is routinely used in practice.
Theorem 2.19
(Theorem 1.3 in Amanatidis and Kleer [1]) Let \(0<\alpha <\frac{1}{2}\) and \(0<\rho <1\) be fixed. Let \(r=r(n)\) with \(2\le r\le (1-\rho )n\). If \([\ell _i,u_i]\subseteq [r-r^\alpha ,r+r^\alpha ]\) and \(u_i-1\le \ell _i\le u_i\) for all \(i\in [n]\), then the degree interval Markov chain \({\mathbb {G}}(\ell ,u)\) is rapidly mixing.
Let \(w_m\) be the number of realizations in \({\mathcal {G}}(\ell ,u)\) with m edges. The conditions \(u_i-1\le \ell _i\le u_i\) for all \(i\in [n]\) are sufficient to prove that \(w_m\) is log-concave, i.e., \(w_{m-1}w_{m+1}\le w_m^2\); see Amanatidis and Kleer [1, Theorem 5.4]. The main idea of that proof is a symmetric difference decomposition, which we also characterize in our key decomposition lemma, Lemma 3.21.
Our contribution. The main objective of this paper is to prove the following theorem.
Theorem 2.20
Suppose \({\mathcal {I}}\) is a set of weakly P-stable and thin degree sequence intervals. Then, the degree interval Markov chain \({\mathbb {G}}(\ell ,u)\) is rapidly mixing for any \([\ell ,u]\in {\mathcal {I}}\).
It is not hard to see that Theorem 2.19 is a special case of Theorem 2.20: substituting into Eq. (4), we get
which holds for any r and \(\alpha \) if n is large enough; see Fig. 2.
The switch Markov chain can be embedded into the degree interval Markov chain (the transition probabilities differ by constant factors). In fact, we will use the proof of Theorem 2.17 as a plug-in in the proof of Theorem 2.20, so this paper does not provide a new proof for the switch Markov chain. We will not consider bipartite and directed degree sequences in this paper, but note that Theorem 2.17 applies to those as well. It is easy to check that the proof of Theorem 2.20 works verbatim for bipartite graphs, because the edge toggles and hinge flips are applied on vertices that are joined by paths of odd length (hence in different classes). The proof of Theorem 2.20 can probably be extended to directed graphs as well, because directed graphs can be represented as bipartite graphs endowed with a forbidden 1-factor.
3 Constructing and Bounding the Multicommodity Flow
We will define a number of auxiliary structures. Via these structures, we will define a multicommodity flow on the degree interval Markov chain \({\mathbb {G}}(\ell ,u)\) and measure its load.
3.1 Constructing and Counting the Auxiliary Matrices
Kannan et al. [9] already introduced an auxiliary matrix to examine the load of a multicommodity flow. Our auxiliary matrices will be a little different. We start with some definitions, then prove two easy statements.
Definition 3.1
Let the adjacency matrix of a graph X on vertex set [n] be \(A_X\in {\{0,1\}}^{n\times n}\). Let \(A_{(vw)}\) be the adjacency matrix of the graph \(([n],\{vw\})\) with exactly one edge. Let us define
Remark 3.2
If X, Y, Z are graphs on [n], then \(\widehat{M}(X,Y,Z)\in {\{-1,0,1,2\}}^{n\times n}\).
Let us define the matrix switch operation. (In a previous paper [4], this operation was called a generalized switch.)
Definition 3.3
(Switch on a matrix) The switch operation on a matrix M on vertices (a, b, c, d) produces the matrix
Remark 3.4
A switch on a graph X corresponds to a switch on its adjacency matrix \(A_X\).
Definition 3.5
Let \(M\in {\{-1,0,1,2\}}^{n\times n}\) and let \(\deg _M\in {\mathbb {Z}}^n\) with \((\deg _M)_i=\sum _{j=1}^n M_{ij}\) be the sequence of its row-sums. We say that M is c-tight (for some \(c\in {\mathbb {N}}\)) if M is a symmetric matrix with zero diagonal and there exists a graph \(W\in {\mathcal {G}}(\deg _M,\deg _M+\mathbb {1}_{\{i,j\}})\) for some \(\{i,j\}\in \left( {\begin{array}{c}[n]\\ 2\end{array}}\right) \) such that \(\Vert M-A_W\Vert _1\le 2c\).
Recall the definition of weak p-stability and Eq. (1). We will use the number of c-tight matrices to bound the number of auxiliary matrices.
Lemma 3.6
The number of matrices \(M\in {\{-1,0,1,2\}}^{n\times n}\) that are c-tight with \(\deg _M\in [\ell ,u]\) for a weakly p-stable \([\ell ,u]\) is at most
Proof
We can obtain any c-tight M that we want to enumerate as follows. First, select an appropriate \(\{i,j\}\in \left( {\begin{array}{c}[n]\\ 2\end{array}}\right) \) and a realization \(W\in {\mathcal {G}}(\ell ,u+\mathbb {1}_{\{i,j\}})\): by weak p-stability, there are at most \(p(n)\cdot |{\mathcal {G}}(\ell ,u)|\) such choices. Then, select c symmetric pairs of positions where the adjacency matrix \(A_W\) is changed to \(-1\) or \(+2\) (while preserving symmetry). The latter selection can be made in at most \(\left( {\begin{array}{c}n\\ 2\end{array}}\right) ^c\cdot 2^c\) different ways. \(\square \)
The following lemma is crucial for proving the tightness of the auxiliary matrices arising in the multicommodity flow. A switch operation on a matrix takes a \(2\times 2\) submatrix and adds \(+1\)'s and \(-1\)'s to the diagonally opposed pairs of entries so that the row- and column-sums are preserved.
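The sum-preservation just described can be checked with a short sketch (Python; the sign convention chosen here is an illustrative assumption, since the displayed formula of Definition 3.3 fixes the paper's own convention):

```python
def matrix_switch(M, a, b, c, d):
    """One matrix switch on vertices (a, b, c, d): add -1 to the symmetric
    entry pairs at {a, b} and {c, d}, and +1 at {a, c} and {b, d}.
    Row- and column-sums are preserved: each affected row and column
    receives exactly one -1 and one +1."""
    S = [row[:] for row in M]  # M is a list of lists (symmetric, zero diagonal)
    for (i, j), delta in [((a, b), -1), ((c, d), -1), ((a, c), +1), ((b, d), +1)]:
        S[i][j] += delta
        S[j][i] += delta
    return S
```

Applied to the adjacency matrix of a graph, this performs an ordinary switch, in line with Remark 3.4.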
Lemma 3.7
(Based on Lemma 7.2 of Erdős et al. [4]) Suppose \(M\in {\{-1,0,1,2\}}^{n\times n}\) is a symmetric matrix whose diagonal is zero. Suppose further that

(i)
the number of \(+2\) entries of M is at most 4,

(ii)
the number of \(-1\) entries of M is at most 2,

(iii)
there exists \(V\subseteq [n]\) such that \(M_{V\times V}\) contains every \(+2\) and \(-1\) entry of M,

(iv)
there exists \(v\in V\) such that the \(+2\) and \(-1\) entries of M are all located in the row and column corresponding to v,

(v)
the row-sum of v in \(M_{V\times V}\) is minimal, and finally,

(vi)
every row- and column-sum in \(M_{V\times V}\) is at least 1 and at most \(|V|-2\).
Then, M is 5-tight.
Proof
By Lemma 7.2 of Erdős et al. [4], there exist at most two matrix switches that turn M into a \(\{0,1\}\) matrix, with the possible exception of a symmetric pair of \(-1\) entries. The \(-1\)'s remaining after the two matrix switches can be removed by adding \(+1\) to the pair of negative entries. \(\square \)
3.2 The AlternatingTrail Decomposition
We consider the set \(\left( {\begin{array}{c}[n]\\ 2\end{array}}\right) \) in lexicographic order, which induces an order on the set of edges of any graph defined on [n].
Definition 3.8
Given a set of edges \(\nabla \subseteq \left( {\begin{array}{c}[n]\\ 2\end{array}}\right) \) on [n] as vertices, let \(\nabla _v=\{e\mid v\in e\in \nabla \}\). We call \(s:\{(v,e)\mid v\in e\in \nabla \}\rightarrow \nabla \) a pairing function on \(\nabla \) if \(s(v,\bullet ):\nabla _v\rightarrow \nabla _v\) defined as \(s(v,\bullet ):e\mapsto s(v,e)\) is an involution, i.e., \(s(v,\bullet )\) is its own inverse for any \(v\in [n]\). (The bullet \(\bullet \) is a placeholder for the variable e, the second argument of s.) The set of all pairing functions on \(\nabla \) is denoted by \(\Pi (\nabla )\) (Fig. 3).
Definition 3.9
Let \(L(\nabla ,s)\) be the following subgraph of the line graph of \(([n],\nabla )\): join \(e,f\in \nabla \) if and only if \(e\ne f\) and there exists a vertex \(v\in e\cap f\) such that \(s(v,e)=f\) (or equivalently, \(s(v,f)=e\)).
Lemma 3.10
Each connected component of \(L(\nabla ,s)\) is a path or a cycle.
Proof
Every edge \(e=ij\in \nabla \) has at most two neighbors in \(L(\nabla ,s)\), the edges s(i, e) and s(j, e), thus the maximum degree in \(L(\nabla ,s)\) is 2. \(\square \)
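The construction of \(L(\nabla ,s)\) and the degree bound in this proof can be illustrated by a small sketch (Python; the encoding of pairing functions as dictionaries keyed by (vertex, edge) pairs is our own):

```python
def line_graph_of_pairing(nabla, s):
    """Build L(nabla, s) following Definition 3.9: join distinct edges
    e, f of nabla whenever s[(v, e)] == f for a shared endpoint v.

    nabla is a list of frozenset edges; s maps (vertex, edge) pairs to
    edges and is an involution in its second argument.  Returns an
    adjacency dict; every edge gets at most two neighbors (Lemma 3.10)."""
    adj = {e: set() for e in nabla}
    for e in nabla:
        for v in e:
            f = s[(v, e)]
            if f != e:          # fixpoints contribute no line-graph edge
                adj[e].add(f)
                adj[f].add(e)
    return adj
```

On a pairing describing an open trail, the two terminal edges get one neighbor each and the inner edges two, so the component is a path, matching the lemma.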
Remark 3.11
A cycle in a line graph corresponds to a closed trail in the original graph. A path in a line graph corresponds to an open trail in the original graph (which may in theory start and end at the same vertex, but this will never be the case in our applications, see Lemma 3.21). Definition 3.9 generalizes a concept of Kannan et al. [9], where all of the components are cycles.
Definition 3.12
Suppose \(\nabla \subseteq \left( {\begin{array}{c}[n]\\ 2\end{array}}\right) \) and s is a pairing function on \(\nabla \). Denote by \(p_s\) the number of connected components of \(L(\nabla ,s)\), and let us define the unique partition
where each \(W^s_k\) is the vertex set of a component of \(L(\nabla ,s)\), and the sets \({(W^s_k)}_{k=1}^{p_s}\) are listed in the order induced by their lexicographically first edges.
Definition 3.13
For any set of edges W and \(s\in \Pi (\nabla )\), let
Subsequently, we also define
Remark 3.14
If W is the vertex set of a component of \(L(\nabla ,s)\), then \(s_{W}\in \Pi (W)\).
Definition 3.15
If \(\nabla =\{u_{i-1}u_{i}\mid i=1,\ldots ,r\}\) is a set of r distinct edges, let \(s=u_{0}u_{1}\ldots u_{r-1} u_r\in \Pi (\nabla )\) denote
Lemma 3.16
Let \(\nabla =\{u_{i-1}u_{i}\mid i=1,\ldots ,r\}\) and \(s=u_0u_1\ldots u_r\). Then,

the walk \(u_0u_1\ldots u_r\) is a closed trail if and only if \(L(\nabla ,s)\) is a cycle, and

the walk \(u_0u_1\ldots u_r\) is an open trail if and only if \(L(\nabla ,s)\) is a path.
In other words, the Eulerian trails on \(\nabla \) can be naturally identified with those pairing functions \(s\in \Pi (\nabla )\) for which \(L(\nabla ,s)\) is connected.
Proof
Trivial. \(\square \)
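The identification of trails with pairing functions can be made concrete with a short sketch (Python; the encoding of trails and pairing functions is our own, modeled on Definition 3.15):

```python
def pairing_of_trail(trail):
    """Given a trail u_0 u_1 ... u_r (a list of vertices), build the edge
    list nabla = {u_{i-1}u_i} and the pairing function s: at every inner
    visit of a vertex, the incoming and outgoing edges are paired; for an
    open trail the first and last edges are fixpoints at u_0 and u_r,
    while for a closed trail the last edge is paired with the first."""
    edges = [frozenset((trail[i - 1], trail[i])) for i in range(1, len(trail))]
    s = {}
    for i in range(1, len(edges)):       # consecutive edges meet at trail[i]
        v = trail[i]
        s[(v, edges[i - 1])] = edges[i]
        s[(v, edges[i])] = edges[i - 1]
    if trail[0] == trail[-1]:            # closed trail
        v = trail[0]
        s[(v, edges[-1])] = edges[0]
        s[(v, edges[0])] = edges[-1]
    else:                                # open trail: endpoint fixpoints
        s[(trail[0], edges[0])] = edges[0]
        s[(trail[-1], edges[-1])] = edges[-1]
    return edges, s
```

By construction, each \(s(v,\bullet )\) is an involution, and \(L(\nabla ,s)\) is a cycle for a closed trail and a path for an open one, as in Lemma 3.16.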
Figure 4 shows a closed trail defined by a pairing function.
From now on, by slight abuse of notation, we will not distinguish between \(s=u_0u_1\ldots u_r\) as a pairing function and the trail it describes.
Definition 3.17
Let Z be an arbitrary graph on n vertices and let \(\nabla \subseteq \left( {\begin{array}{c}[n]\\ 2\end{array}}\right) \) be an arbitrary subset of pairs of vertices. A pairing function \(s\in \Pi (\nabla )\) is said to be Z-alternating or alternating in Z if for every \(v\in e\in \nabla \) either

e is the unique solution to \(s(v,e)=e\) (the function \(s(v,\bullet )\) has at most one fixpoint), or

\(e\in \nabla \cap E(Z)\) and \(s(v,e)\in \nabla {\setminus } E(Z)\), or

\(e\in \nabla {\setminus } E(Z)\) and \(s(v,e)\in \nabla \cap E(Z)\).
In other words, the trail \(s_{W^s_k}\) traverses edges of Z and \({\overline{Z}}\) alternately for any \(k=1,\ldots ,p_s\); furthermore, at any vertex \(v\in [n]\), there is at most one trail \(s_{W^s_k}\) which starts or ends at v. For example, if \(ij\notin E(Z)\) and \(s(i,ij)=ij=s(j,ij)\), then the trail \(s_{\{ij\}}\) consists of one non-edge of Z.
Furthermore, we say that \(s\in \Pi (\nabla )\) is Z-alternating with at most c exceptions if
and \(s(v,\bullet )\) has at most one fixpoint for every \(v\in [n]\). We say that v is a site of non-alternation of s in Z if \(\{e,s(v,e)\}\) is a set of size 2 which is a subset of either \(\nabla \cap E(Z)\) or \(\nabla \setminus E(Z)\).
Definition 3.18
Let \(X,Y\in {\mathcal {G}}(\ell ,u)\), where \([\ell ,u]\) is a thin degree sequence interval. Denote \(\nabla _{X,Y}=E(X)\triangle E(Y)\). An \(s\in \Pi (\nabla _{X,Y})\) which is both X-alternating and Y-alternating is called (X, Y)-alternating.
Lemma 3.19
Any pairing function \(s\in \Pi (\nabla _{X,Y})\) is X-alternating if and only if s is Y-alternating.
Proof
Trivial, since \(\nabla _{X,Y}\setminus E(X)=E(Y)\setminus E(X)\) and \(\nabla _{X,Y}\cap E(X)=E(X)\setminus E(Y)\). \(\square \)
Definition 3.20
Given a degree sequence interval \([\ell ,u]\), for any \(X,Y\in {\mathcal {G}}(\ell ,u)\), define
Recall Definition 3.13. The following key decomposition lemma (KD-lemma) will be referred to repeatedly in this paper.
Lemma 3.21
(Key decomposition lemma) Let \([\ell ,u]\) be a thin degree sequence interval, and let \(X,Y\in {\mathcal {G}}(\ell ,u)\), \(s\in S_{X,Y}\). Then, \(s_{W^s_k}\) is (X, Y)-alternating, and \(s_{W^s_k}\) describes an Eulerian trail on \(W^s_k\) for any \(1\le k\le p_s\). If \(s_{W^s_k}\) describes an open trail, then its end-vertices are (by definition) distinct, and the end-vertices of the trail \(s_{W^s_k}\) are disjoint from the end-vertices of any other open trail \(s_{W^s_j}\) (\(j\ne k\)).
Proof
We have \(|\deg _X(v)-\deg _Y(v)|\le 1\) for any \(v\in V\). Thus, the involution \(s(v,\bullet )\) pairs the X-edges of \(\nabla _{X,Y}\) incident to v with the Y-edges of \(\nabla _{X,Y}\) incident to v, with the exception of the at most one fixpoint of \(s(v,\bullet )\). The closed trails must have even length, because \(s(v,\bullet )\) pairs X-edges to Y-edges at any v.
Clearly, if an open trail \(s_{W^s_i}\) both starts and ends at v, then \(s(v,\bullet )\) has at least two fixpoints, which is a contradiction. Similarly, we have a contradiction if more than one trail terminates at some vertex v. Lastly, if \(s_{W^s_k}\) is an open trail, then the degree \(\deg _{W^s_k}(v)\) is even, except if v is one of the two endvertices of \(s_{W^s_k}\), in which case \(\deg _{W^s_k}(v)\) is odd. \(\square \)
Lemma 3.22
For any thin degree sequence interval \([\ell ,u]\) on n vertices and any two graphs \(X,Y\in {\mathcal {G}}(\ell ,u)\)
where the right hand side is the product of factorials.
Proof
We have
If \(\deg _X(v)=\deg _Y(v)\), then we have \((\deg _X(v)-\deg _{X\cap Y}(v))!=(\frac{1}{2}\deg _{\nabla _{X,Y}}(v))!\) ways to choose \(s(v,\bullet )\) such that it is an involution which maps edges of X to edges of Y: if \(s(v,\bullet )\) had a fixpoint, then by parity it would have to have another one, too, which contradicts Definition 3.17.
If \(\deg _X(v)=\deg _Y(v)+1\) and \(s(v,e)=e\), then \(e\in E(X){\setminus } E(Y)\) and e is the only fixpoint of \(s(v,\bullet )\). Therefore, there are \(\deg _X(v)-\deg _{X\cap Y}(v)=\frac{1}{2}(\deg _{\nabla _{X,Y}}(v)+1)\) ways to choose the fixpoint, and \((\deg _X(v)-\deg _{X\cap Y}(v)-1)!\) ways to choose the rest of the map \(s(v,\bullet )\). \(\square \)
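The factorial counts in this proof can be verified by brute force on small instances; a sketch (Python, with illustrative names of our own):

```python
def alternating_involutions(x_edges, y_edges):
    """Enumerate the candidate maps s(v, .) at a vertex v, as counted in
    the proof of Lemma 3.22: each is a bijection between the X-edges and
    Y-edges at v, with exactly one fixpoint on the X side in the odd case
    deg_X(v) = deg_Y(v) + 1."""
    if len(x_edges) == len(y_edges):          # even case: plain bijections
        yield from _bijections(x_edges, y_edges)
    else:                                     # odd case: choose the fixpoint
        for i, fix in enumerate(x_edges):
            rest = x_edges[:i] + x_edges[i + 1:]
            for b in _bijections(rest, y_edges):
                b[fix] = fix
                yield b

def _bijections(xs, ys):
    """Yield all bijections xs -> ys as dictionaries."""
    if not xs:
        yield {}
        return
    x, rest = xs[0], xs[1:]
    for i, y in enumerate(ys):
        for b in _bijections(rest, ys[:i] + ys[i + 1:]):
            b[x] = y
            yield b
```

In the even case with k edges on each side the count is \(k!\); in the odd case it is \((k+1)\cdot k!=(k+1)!\), matching the two factorials in the proof.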
Lemma 3.23
Let \([\ell ,u]\) be a thin degree sequence interval. For any graph \(Z\in {\mathcal {G}}(\ell ,u)\) and any \(\nabla \subseteq \left( {\begin{array}{c}[n]\\ 2\end{array}}\right) \), we have
Proof
There are at most \(n^{3c}\) different choices for the set on the left hand side of Eq. (6). If we fix the non-alternating pairs, then the number of remaining choices for \(s(v,\bullet )\) is still upper bounded by \(\lceil \frac{1}{2}\deg _{\nabla }(v)\rceil {!}\), thus Eq. (8) holds.
\(\square \)
3.3 The Precursor
So far, every proof of rapid mixing for the switch Markov chain based on Sinclair's method contains at its core a counting lemma (Greenhill [5]). The purpose of the counting lemma is to enumerate the possible auxiliary structures and parameter sets from which the source and sink of any commodity passing through a realization Z can be recovered. The difficult technical parts of the proofs are concerned with the maintenance of these structures. To our surprise, for thin degree sequence intervals, by slightly tweaking these structures, the arising technicalities can almost entirely be reduced to Erdős et al. [4], and this paper takes a major shortcut by reusing these parts. A relatively long but mostly elementary Definition 3.25 specifies the properties that we expect from the auxiliary structures and parameter sets borrowed from Erdős et al. [4]. In Sect. 4, we will use this framework to recombine the borrowed parts into a proof for thin degree sequence intervals.
The decomposition in Definition 3.12 is formally very similar to the decomposition in Erdős et al. [4, Section 4.1]. Whenever the degree sequences of X and Y are identical (\(\nabla =\nabla _{X,Y}\) and \(s\in S_{X,Y}\)), the two decompositions are actually identical. In any other case, for every two unit differences between the degree sequences of X and Y, we will utilize a hinge flip or an edge toggle in the multicommodity flow between X and Y.
Let us now turn to defining the framework for the reduction to Erdős et al. [4]. We need the following structure and in particular the matrix M to be able to find an appropriate reduction which is compatible with the processes of Erdős et al. [4].
Definition 3.24
Let \(M\in {\{0,1,2\}}^{n\times n}\) be a symmetric matrix with zero diagonal. For technical purposes, let us define the following set of triples:
The next definition collects a number of properties (of the multicommodity flow and the auxiliary structures designed for the switch Markov chain) that we want to preserve from Erdős et al. [4].
Definition 3.25
We call the ordered triple \((\varUpsilon ,B,\pi )\) a precursor with parameter \(c\in {\mathbb {N}}\) if the following properties hold. The objects \(\varUpsilon _M\), \(B_M\), and \(\pi _M\) are functions for any symmetric matrix \(M\in {\{0,1,2\}}^{n\times n}\) with zero diagonal, where \(n\in {\mathbb {N}}\). We require that the domain of \(\varUpsilon _M\) satisfies
Furthermore, for any \((X,Y,s)\in {{\,\textrm{dom}\,}}(\varUpsilon _M)\), let us define two degree sequences:
We require that \(\varUpsilon _{M}(X,Y,s)\) is a sequence of graphs that forms a path connecting X and Y in the Markov graph \({\mathbb {G}}(\ell _{X,Y},u_{X,Y})\). We require that \(\pi _M\) and \(B_M\) are defined on
Moreover,

(a)
The length of \(\varUpsilon _M(X,Y,s)\) is at most \(c\cdot |\nabla _{X,Y}|\).

(b)
We have \(|(E(X)\triangle E(Z))\setminus \nabla _{X,Y}|\le c\) for any \(Z\in \varUpsilon _M(X,Y,s)\).

(c)
The matrix \(M-A_Z\) is c-tight for any \(Z\in \varUpsilon _{M}(X,Y,s)\).

(d)
The pairing function \(\pi _M(X,Y,s,Z)\) is a member of \(\Pi (\nabla _{X,Y})\) and it is alternating in Z with at most c exceptions.

(e)
\(\pi _M(X,Y,s,X)=\pi _M(X,Y,s,Y)=s\).

(f)
If \(L(\nabla _{X,Y},s)\) is connected, then \(L(\nabla _{X,Y},\pi _M(X,Y,s,Z))\) is also connected.

(g)
The cardinality of
$$\begin{aligned} {\mathfrak {B}}_n = \Big \{B_M(X,Y,s,Z)\ \mid \ Z\in \varUpsilon _M(X,Y,s),\ M \text { arbitrary},\ (X,Y,s)\in {{\,\textrm{dom}\,}}(\varUpsilon _M),\ |V(X)|=n\Big \} \end{aligned}$$is at most a constant times \(n^c\), i.e., \(|{\mathfrak {B}}_n|={\mathcal {O}}(n^c)\).

(h)
The function
$$\begin{aligned} \varPsi =\left\{ (Z,\nabla _{X,Y},\pi _M(X,Y,s,Z),B_M(X,Y,s,Z)) \mapsto (X,Y,s)\ \big |\ M \text { is arbitrary},\ Z\in \varUpsilon _M(X,Y,s) \right\} \end{aligned}$$is well defined, i.e., two different images in the codomain are not assigned to the same element of the domain of \(\varPsi \).
\(\square \)
Typically, the value of \(B_M(X,Y,s,Z)\) will be a long tuple (an ordered set of parameters). The exact value of c is not important here; the requirements only impose a lower bound on its value. However, it is important to note that c is a constant, independent even of the number of vertices n. Note also that in applications of Definition 3.25, the matrix M will not be completely arbitrary.
Definition 3.26
A set \({\mathfrak {P}}\) of triples (X, Y, s) is a precursor domain if X and Y have the same vertex set [n] for some \(n\in {\mathbb {N}}\) (where n may vary) and \(s\in S_{X,Y}\). We say that a precursor \((\varUpsilon ,B,\pi )\) is defined on a precursor domain \({\mathfrak {P}}\) if and only if for any \(n\in {\mathbb {N}}\) and symmetric matrix \(M\in {\{0,1,2\}}^{n\times n}\) with zero diagonal, we have
Let us define two precursor domains:
The set \({\mathfrak {C}}_\textrm{thin}\) describes the identifiers of the small parts from which the whole multicommodity flow will be built. In contrast, in Erdős et al. [4] the multicommodity flow was built directly for each triple in \({\mathfrak {R}}_\textrm{thin}\).
Lemma 3.27
If there exists a precursor with parameter c which is defined on \({\mathfrak {C}}_\textrm{thin}\), then there exists a precursor on \({\mathfrak {R}}_\textrm{thin}\) with parameter 3c.
Proof
We will show that the precursor can be extended so that it is also defined on \({\mathfrak {R}}_\textrm{thin}\) without violating Definition 3.25. For any \((X,Y,s)\in {\mathfrak {R}}_\textrm{thin}\cap {\mathfrak {D}}_M\), we construct a path in the Markov graph of \({\mathbb {G}}(\ell _{X,Y},u_{X,Y})\), where \([\ell _{X,Y},u_{X,Y}]\) is the smallest degree sequence interval that contains both \(\deg _X\) and \(\deg _Y\). By the thinness of \({\mathcal {I}}\), we have \(u(i)-\ell (i)\le 1\) for every \(i\in [n]\). According to Definition 3.12 and the KD-lemma (Lemma 3.21), any \(s\in S_{X,Y}\) partitions \(\nabla _{X,Y}=E(X)\triangle E(Y)\) into the edge sets of (X, Y)-alternating trails; let that decomposition be
Let
so that \(G^{X,Y}_0=X\) and \(G^{X,Y}_{p_s}=Y\). By definition, \(s_{W^s_k}\) is connected, so \((G^{X,Y}_{k-1},G^{X,Y}_{k},s_{W^s_k})\in {\mathfrak {C}}_\textrm{thin}\) for \(k=1,\ldots ,p_s\). Let us confirm that \(G^{X,Y}_k\in {\mathcal {G}}(\ell ,u)\). If \(s_k\) is a closed trail, then the degree sequences of \(G^{X,Y}_k\) and \(G^{X,Y}_{k-1}\) are identical. If \(s_k\) is an open trail whose end-vertices are v and w, then the degree sequences of \(G^{X,Y}_{k}\) and \(G^{X,Y}_{k-1}\) differ by 1 precisely on v and w; since these end-vertices are distinct from the end-vertices of any other open trail \(s_j\), such a change of the degree of v and w does not occur for any other k. Thus, the degree of v satisfies
and so \(\deg _{G^{X,Y}_k}\in [\ell _{X,Y},u_{X,Y}]\).
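The parity fact underlying this step is elementary: a closed trail meets every vertex in an even number of edges, while an open trail meets exactly its two end-vertices in an odd number of edges. A short sketch (our own illustration, not the paper's machinery):

```python
from collections import Counter

def trail_multiplicities(trail):
    """Number of trail edges incident to each vertex, for a trail given
    by its vertex sequence v_0 v_1 ... v_k.  A closed trail (v_0 == v_k)
    yields an even count everywhere; an open trail yields an odd count
    exactly at its two end-vertices."""
    mult = Counter()
    for a, b in zip(trail, trail[1:]):
        mult[a] += 1
        mult[b] += 1
    return mult
```

Hence the symmetric difference with a closed alternating trail leaves all degrees unchanged, while an open alternating trail shifts only the degrees of its two end-vertices by one.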
It is easy to see that \((G^{X,Y}_{k-1},G^{X,Y}_{k},s_{W^s_k})\in {\mathfrak {D}}_M\). If \(e\in E(X)\cap E(Y)\), then \(e\notin W^s_i\) for any i. Similarly, if \(e\in E({\overline{X}})\cap E({\overline{Y}})\), then \(e\notin W^s_i\) for any i.
We may now define \(\varUpsilon _M\) on \({\mathfrak {R}}_\textrm{thin}\) recursively: concatenate the sequences \(\varUpsilon _M(G^{X,Y}_{k-1},G^{X,Y}_k,s_{W_k^s})\) in increasing order of k to obtain
where the concatenation keeps only one copy of the shared last and first elements of consecutive sequences. For \(Z\in \varUpsilon _M\left( G^{X,Y}_{k-1},G^{X,Y}_k,s_{W^s_k}\right) \) (take the maximal k such that the relation holds) let
We claim that the extended functions provide a precursor on \({\mathfrak {R}}_\textrm{thin}\). Let us check the nontrivial properties of Definition 3.25. Suppose that \(Z\in \varUpsilon _M\left( G^{X,Y}_{k-1},G^{X,Y}_k,s_{W^s_k}\right) \). Then, \(\deg _Z\in [\ell _{X,Y},u_{X,Y}]\), since \(\deg _{G^{X,Y}_{k-1}},\deg _{G^{X,Y}_{k}}\in [\ell _{X,Y},u_{X,Y}]\).
Checking Definition 3.25(b). By Definition 3.25(b), \(\left| \left( E\left( G^{X,Y}_{k-1}\right) \triangle E(Z)\right) \setminus W^s_k\right| \le c\), and
therefore, the LHS has cardinality at most c as well.
Checking Definition 3.25(c). The precursor property holds for \({\mathfrak {C}}_\textrm{thin}\) and \(\left( G^{X,Y}_{k-1},G^{X,Y}_k,s_{W^s_k}\right) \in {\mathfrak {C}}_\textrm{thin}\cap {\mathfrak {D}}_M\), therefore, \(M-A_Z\) is \(c\)-tight.
Checking Definition 3.25(d). By Eq. (17), \(\pi _M(X,Y,s,Z)\) alternates in Z with at most 2c extra exceptions on top of the c non-alternations of \(\pi _M\left( G^{X,Y}_{k-1},G^{X,Y}_k,s_{W^s_k},Z\right) \).
Checking Definition 3.25(g).
The cardinality of the range of \(B_M(X,Y,s,Z)\) grows by a factor of at most \(n^2\), due to the one extra integer \(k-1\).
Checking Definition 3.25(h). The last missing piece to proving that the extended functions are a precursor on \({\mathfrak {R}}_\textrm{thin}\) is showing that \(\varPsi \) is well defined on the larger domain. By Definition 3.25(f), the connected components of \(L(\nabla _{X,Y},\pi _M(X,Y,s,Z))\) determine \(W^s_i\) and \(s_{W^s_i}\) for any \(i\ne k\), and also \(\pi _M(G^{X,Y}_{k-1},G^{X,Y}_k,s_{W^s_k},Z)\) and \(W^s_k\).
The number \(k-1\) is recorded in \(B_M(X,Y,s,Z)\) (see Eq. (16)), which is an argument of \(\varPsi \). Knowing k, we can select \(\pi _M(G^{X,Y}_{k-1},G^{X,Y}_k,s_{W^s_k},Z)\) from the components of \(L(\nabla _{X,Y},\pi _M(X,Y,s,Z))\), see Eq. (15). Because the original functions provide a precursor on \({\mathfrak {C}}_\textrm{thin}\), the original \(\varPsi \) function is well defined, so the following value:
is determined. Since
we have shown that \(\varPsi \) is well defined even on the extended domain. \(\square \)
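The concatenation used to define \(\varUpsilon _M\) in the proof glues sub-paths that share endpoints, keeping one copy of each shared graph. A minimal sketch of this gluing step (our own illustration, with generic placeholder elements):

```python
def concatenate_paths(segments):
    """Glue path segments whose consecutive endpoints coincide: the last
    element of each segment equals the first element of the next, so only
    one copy of each shared element is kept."""
    path = list(segments[0])
    for seg in segments[1:]:
        assert path[-1] == seg[0], "consecutive segments must share an endpoint"
        path.extend(seg[1:])
    return path
```

This is exactly why the length of the concatenated path is the sum of the segment lengths, with no double counting of the intermediate graphs \(G^{X,Y}_k\).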
In the proof of Lemma 3.27, we extensively used the fact that the degree sequence intervals in \({\mathcal {I}}\) are thin.
Theorem 3.28
Let \({\mathcal {I}}\) be a set of weakly P-stable degree sequence intervals. If there exists a precursor on \({\mathfrak {R}}_\textrm{thin}\) with parameter c, then the degree interval Markov chain \({\mathcal {G}}(\ell ,u)\) is rapidly mixing for any \([\ell ,u]\in {\mathcal {I}}\).
Proof
This proof is not new and fairly straightforward, but it is presented for the sake of completeness. The core of this approach had already appeared in the paper of Kannan et al. [9]. We will practically repeat the skeleton of the proof of Erdős et al. [4] using the definitions of the precursor, which hides the majority of the technical difficulties. We will take \(M=A_X+A_Y\), but we want to be explicit about the dependence on X and Y even when M appears as an index, so let \(X+Y\) denote the matrix \(A_X+A_Y\) in this proof.
Let \([\ell ,u]\in {\mathcal {I}}\), where \(\ell \) and u are degree sequences on [n]. Let us define the multicommodity flow f on the Markov graph of \({\mathbb {G}}(\ell ,u)\): for every \(X,Y\in {\mathcal {G}}(\ell ,u)\) and \(s\in S_{X,Y}\), send \({\sigma (X)\sigma (Y)}/{|S_{X,Y}|}\) amount of flow on \(\varUpsilon _{X+Y}(X,Y,s)\). The total flow in f from X to Y sums to \(\sigma (X)\sigma (Y)\).
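The bookkeeping behind this flow assignment can be sketched in a few lines of Python (our own illustration; `sigma` and `paths_for` are hypothetical stand-ins for the stationary distribution and the canonical paths \(\varUpsilon _{X+Y}(X,Y,s)\)):

```python
from collections import defaultdict
from fractions import Fraction

def edge_loads(states, sigma, paths_for):
    """For each ordered pair (X, Y), route sigma[X]*sigma[Y] units of flow,
    split equally among the canonical paths returned by paths_for(X, Y);
    return the total load carried by each directed transition."""
    load = defaultdict(lambda: Fraction(0))
    for X in states:
        for Y in states:
            if X == Y:
                continue
            paths = paths_for(X, Y)
            share = sigma[X] * sigma[Y] / len(paths)
            for p in paths:
                for a, b in zip(p, p[1:]):
                    load[(a, b)] += share
    return load
```

The congestion \(\rho (f)\) of such a flow is, up to normalization, the maximum of these edge loads; the proof bounds it polynomially via the injectivity of \(\varPsi \).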
Let us recall Eq. (3):
By Definition 3.25((a)), \(\ell (f)\le c\cdot \left( {\begin{array}{c}n\\ 2\end{array}}\right) \). It only remains to show that \(\rho (f)\) is polynomial in n. Continuing Eq. (2) with the substitution \({\mathbb {G}}={\mathbb {G}}(\ell ,u)\):
According to Definition 3.25((h)), given Z, \(\nabla _{X,Y}\), \(\pi _{X+Y}(X,Y,s,Z)\), and \(B_{X+Y}(X,Y,s,Z)\), the function \(\varPsi \) determines (X, Y, s). Therefore, the relation \(Z\in \varUpsilon _{X+Y}(X,Y,s)\) is equivalent to saying that there exists a triple \((\nabla ,\pi ,B)\) such that \((Z,\nabla ,\pi ,B)\in \varPsi ^{1}(X,Y,s)\):
Next, we use Lemma 3.22, which shows that \(S_{X,Y}\) is determined by \(\nabla _{X,Y}\), and its value does not depend directly on X or Y:
Given Z, the matrix \({\widehat{M}}(X,Y,Z)\) (Definition 3.1) determines \(\nabla _{X,Y}=E(X)\triangle E(Y)\): the edges that belong to \(\nabla _{X,Y}\) are precisely those where the sum of the adjacency matrices \(A_X+A_Y={\widehat{M}}(X,Y,Z)+A_Z\) takes the value 1. Furthermore, by a property of the precursor, for any \(Z\in \varUpsilon _{X+Y}(X,Y,s)\), we have \(\deg _Z\in [\ell _{X,Y},u_{X,Y}]\), therefore,
Now using that \({\widehat{M}}(X,Y,Z)\) is \(c\)-tight, it follows from Lemmas 3.6 and 3.23 that
where the right-hand side is dominated by a polynomial in n (according to Definition 3.25(g)). In conclusion, the mixing time in Eq. (18) is polynomial.
\(\square \)
To prove Theorem 2.20, it only remains to construct a precursor on \({\mathfrak {C}}_\textrm{thin}\). The next section proceeds with the construction in two separate stages.
4 Constructing the Precursor
We will construct a precursor on \({\mathfrak {C}}_\textrm{thin}\) for any weakly P-stable thin set of degree sequence intervals \({\mathcal {I}}\) in two stages. In the first stage, we show that there exists a precursor on \({\mathfrak {C}}_\textrm{id}\) (see Definition 4.1), and then we will extend this precursor to \({\mathfrak {C}}_\textrm{thin}\) in the second stage. Then, we will apply Lemma 3.27 and Theorem 3.28 to prove Theorem 2.20.
4.1 Stage 1: Closed trails
Definition 4.1
Let us define
The graph \(L(\nabla _{X,Y},s)\) is a cycle for any \((X,Y,s)\in {\mathfrak {C}}_\textrm{id}\), because the degree sequences of X and Y are identical. To handle this case, substantial machinery was developed in Erdős et al. [4]. However, the range of the auxiliary matrices M there was much smaller. Because of the larger range of auxiliary matrices in the current paper, we had to introduce and explicitly define the precursor. Therefore, we unfortunately need to repeat some parts of the proof of Erdős et al. [4] to obtain those claims in the desired generality. The following lemma collects the necessary technical lemmas proved in Erdős et al. [4].
Lemma 4.2
There exists a precursor on \({\mathfrak {C}}_\textrm{id}\) with parameter \(c=12\).
Proof
Let \((X,Y,s)\in {\mathfrak {C}}_\textrm{id}\) be arbitrary with \(X,Y\in {\mathcal {G}}(d)\). Since s is an (X, Y)-alternating closed trail, \(|\nabla _{X,Y}|\) is even. In Erdős et al. [4], the path \(\varUpsilon (X,Y,s)\) in the switch Markov graph is defined exactly when the degree sequences of X and Y are identical and \(s\in S_{X,Y}\). We use the definition of \(\varUpsilon (X,Y,s)\) from Erdős et al. [4] only when \(s\in S_{X,Y}\) and \(L(\nabla _{X,Y},s)\) is a cycle, that is, when \(p_s=1\).
First, let us recall that \(\varUpsilon (X,Y,s)\) in Erdős et al. [4] describes a sequence of graphs such that any two consecutive graphs can be obtained from each other by a switch. In Erdős et al. [4, Definition 4.2], for any \(s\in S_{X,Y}\), the path \(\varUpsilon (X,Y,s)\) is composed by concatenating a number of Sweep sequences:
where \(C^k_r\) are circuits and \(G^k_{r+1}=G^k_r\triangle C^k_r\), where \(G^k_0=X\triangle \uplus _{i=1}^{k-1} W^s_i\) and \(G^k_{\mu _k+1}=X\triangle \uplus _{i=1}^{k} W^s_i\), and \(W^s_k=\uplus _{r=1}^{\mu _k+1} E(C^k_r)\) and \(\nabla _{X,Y}=\uplus _{k=1}^{p_s} W^s_k\). It is easy to check in Erdős et al. [4, Algorithm 2.1] that \(\textsc {Sweep}(G^k_r,C^k_r)\) is a sequence of switches such that each switch is incident only with vertices in \(V(C^k_r)\).
When \((X,Y,s)\in {\mathfrak {C}}_\textrm{id}\cap {\mathfrak {D}}_M\), by definition \(L(\nabla _{X,Y},s)\) is connected and \(p_s=1\), thus we may define
where s decomposes \(\nabla _{X,Y}\) into (primitive) circuits \({(C_r)}_{r=1}^\mu \) such that \(\nabla =\uplus _{r=1}^\mu E(C_r)\), see Erdős et al. [4, Lemma 5.13]. The circuit \(C_r\) defines a cyclical order on its vertices, but \(\textsc {Sweep}\) takes a linear order, so we still need to select the cornerstone, i.e., the vertex at which the linear order starts to enumerate the vertices in the given cyclical order. The choice of the cornerstone ([4, eq. (5.11)]) only plays a role in proving that \({\widehat{M}}(X,Y,Z)\) is close to the adjacency matrix of an appropriate graph in \(\ell _1\)-norm. From the point of view of the rest of Erdős et al. [4], the cornerstone is arbitrarily chosen.
In this adaptation of the proof in Erdős et al. [4], the index M of \(\varUpsilon _M(X,Y,s)\) matters only in the choice of the cornerstones. The current proof is slightly more general than that of Erdős et al. [4], because we not only consider \(M=X+Y\), but also any other M such that \((X,Y,s)\in {\mathfrak {D}}_M\) (recall Eq. (9)). In the path \(\varUpsilon _{M}(X,Y,s)\) incorporating \(\textsc {Sweep}(G_r,C_r)\) (see Eq. (23)), choose the cornerstone \(v_r\) of \(\textsc {Sweep}(G_r,C_r)\) as follows:
Since \(X,Y\in {\mathcal {G}}(d)\) for some d, Lemma 2.6 of Erdős et al. [4] applies, which claims that \(\textsc {Sweep}(X\triangle \uplus _{i=1}^{r-1}C_i,C_r)\) is a sequence of at most \(\frac{1}{2}|E(C_r)|-1\) switches that connect \(X\triangle \uplus _{i=1}^{r-1}C_i\) to \(X\triangle \uplus _{i=1}^{r}C_i\). Thus, the total length of the switch sequence \(\varUpsilon _M(X,Y,s)\) is at most \(\frac{1}{2}|\nabla _{X,Y}|-1\). For any \(Z\in \varUpsilon _M(X,Y,s)\), the degree sequences of X, Y, and Z are identical, because switches preserve the degree sequence. Note that for any \(j\ne r\), the sequence \(\textsc {Sweep}(X\triangle \uplus _{i=1}^{j-1}C_i,C_j)\) does not depend on the cornerstone \(v_r\).
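The degree-preservation property of a single switch is easy to see concretely: a switch replaces edges ab and cd by ad and cb, so each of the four touched vertices loses one incident edge and gains one. A minimal sketch (our own illustration; edges are modeled as frozensets):

```python
def e(u, v):
    """Undirected edge as a frozenset."""
    return frozenset((u, v))

def apply_switch(edges, a, b, c, d):
    """A switch removes edges ab and cd and adds ad and cb.  Each of
    a, b, c, d loses one incident edge and gains one, so the degree
    sequence is preserved."""
    assert e(a, b) in edges and e(c, d) in edges
    assert e(a, d) not in edges and e(c, b) not in edges
    return (edges - {e(a, b), e(c, d)}) | {e(a, d), e(c, b)}

def degrees(edges, n):
    """Degree sequence of the graph on the vertex set range(n)."""
    deg = [0] * n
    for edge in edges:
        for v in edge:
            deg[v] += 1
    return deg
```

In contrast, the hinge-flip and toggle operations of the degree interval chain do change the degree sequence, which is why the enclosing interval \([\ell _{X,Y},u_{X,Y}]\) is needed.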
For any \(Z\in \varUpsilon _M(X,Y,s)\), the matrix \(M-A_Z\) belongs to \({\{-1,0,1,2\}}^{n\times n}\). Recall Eq. (9). If \((M-A_Z)_{vw}=2\), then vw is an edge in both X and Y, but vw is not present in Z as an edge. If, however, \((M-A_Z)_{vw}=-1\), then \(vw\notin E(X)\cup E(Y)\) and \(vw\in E(Z)\). With formulae,
respectively. In Erdős et al. [4, Lemma 2.7], the set \(R=R_Z\) is defined, and it has cardinality at most 4. By its definition, the set of edges in R is a superset of \(\left( E(X)\triangle E(Z)\right) \setminus \nabla _{X,Y}\), which is the union of the right-hand sides of Eqs. (25) and (26). In short, every \(+2\) and \(-1\) entry of \(M-A_Z\) is in a position which is associated to an edge in R.
We will show that \(M-A_Z\) is 7-tight. Lemma 8.2 in Erdős et al. [4] is the analogue of this tightness statement, and its proof can be repeated for this case with little to no modification. Suppose first that every edge in R is incident to \(v_r\) from Eq. (24): then Erdős et al. [4, Lemma 7.1] claims that the entries in \(A_X+A_Y-A_Z\) associated to edges in R consist of at most two pairs of symmetric \(+2\) entries and at most one pair of symmetric \(-1\) entries. By Eqs. (25) and (26), \(M-A_Z\) also contains at most two pairs of symmetric \(+2\) entries and at most one pair of symmetric \(-1\) entries.
Recall that Z is obtained from \(X\triangle \uplus _{i=1}^{r-1}C_i\) through a series of switches that only touch edges whose vertices are contained in \(V(C_r)\). Thus, the row and column sums of the submatrices
are identical. Let v and w be two distinct vertices in \(V(C_r)\).

If \(M_{vw}=2\), then by Eq. (25), \(vw\in E(X)\) and \(vw\notin \nabla _{X,Y}\), thus
$$\begin{aligned} {\left( M-A_{X\triangle \uplus _{i=1}^{r-1}C_i}\right) }_{vw}={(M-A_X)}_{vw}=2-1=1. \end{aligned}$$
If \(M_{vw}=0\), then by Eq. (26), \(vw\notin E(X)\) and \(vw\notin \nabla _{X,Y}\), thus
$$\begin{aligned} {\left( M-A_{X\triangle \uplus _{i=1}^{r-1}C_i}\right) }_{vw}={(M-A_X)}_{vw}=0-0=0. \end{aligned}$$
If \(M_{vw}=1\) and \(vw\in E\left( X\triangle \uplus _{i=1}^{r-1}C_i\right) \), then \({\left( M-A_{X\triangle \uplus _{i=1}^{r-1}C_i}\right) }_{vw}=0\).

If \(M_{vw}=1\) and \(vw\notin E\left( X\triangle \uplus _{i=1}^{r-1}C_i\right) \), then \({\left( M-A_{X\triangle \uplus _{i=1}^{r-1}C_i}\right) }_{vw}=1\).
Every entry of \(\left. \left( M-A_{X\triangle \uplus _{i=1}^{r-1}C_i}\right) \right| _{V(C_r)\times V(C_r)}\) is either a 0 or a 1, and the diagonal is identically zero. Since \(C_r\) is alternating in \(X\triangle \uplus _{i=1}^{r-1}C_i\), there is at least one 0 entry and one 1 entry in every row and every column. Therefore, the row and column sums of \({(M-A_Z)}_{V(C_r)\times V(C_r)}\) are at least 1 and at most \(|V(C_r)|-2\). Moreover, Eq. (24) ensures that the row sum corresponding to \(v_r\) in \({(M-A_Z)}_{V(C_r)\times V(C_r)}\) is minimal. By Lemma 3.7, \(M-A_Z\) is 5-tight.
We will again use Erdős et al. [4, Lemma 2.7] to understand the more detailed structure of \(R_Z\). If there is an edge in \(R_Z\) which is not incident to \(v_r\), then R falls under case (e) of Erdős et al. [4, Lemma 2.7]. Let \(Z\triangle F\) be the next graph in the Sweep sequence, where F is a \(C_4\). By Erdős et al. [4, Lemma 2.7(d)], every edge in the set \(R_{Z\triangle F}\) is incident to \(v_r\). As previously, Lemma 3.7 implies that \(M-A_{Z\triangle F}\) is 5-tight, and thus \(M-A_Z\) is 7-tight. \(\square \)
Next, we will cite three lemmas from Erdős et al. [4]. The first of these lemmas refers to the graph \(Z'=Z\triangle R\), which is defined in Erdős et al. [4, eq. (13)]. Note that the graph \(Z'\) is just a slight perturbation of Z.
Lemma 4.3
(Adapted from Lemma 5.15 in Erdős et al. [4]) For any \(s\in S_{X,Y}\) and \(Z\in \varUpsilon (X,Y,s)\), there exists \(\pi _{Z'}\in \Pi (\nabla )\) which defines a closed Eulerian trail on \(\nabla _{X,Y}\) that is alternating in \(Z'\) with at most 4 exceptions.
Lemma 4.4
(Lemma 5.21 in Erdős et al. [4]) For a fixed number n of vertices of X and Y, the cardinality of the set of possible tuples B(X, Y, Z, s) is \({\mathcal {O}}(n^{8})\), where \(s\in S_{X,Y}\) and \(Z\in \varUpsilon (X,Y,s)\) are arbitrary.
Lemma 4.5
(Lemma 5.22 in Erdős et al. [4]) The quadruplet composed of the graphs Z, \(\nabla \), \(\pi _{Z'}\), and B(X, Y, Z, s) uniquely determines the triplet (X, Y, s).
We define \(\pi _M(X,Y,s,Z)=\pi _{Z'}\). Lemma 4.3 implies that \(\pi _{Z'}\) is alternating in Z with at most \(4+2|R_Z|\le 12\) exceptions. Let \(B_M(X,Y,s,Z)\) be identical to the parameter set B(X, Y, Z, s) defined in Erdős et al. [4]. Lemmas 4.3 to 4.5 ensure that every itemized requirement of Definition 3.25 holds, similarly to the situation in Erdős et al. [4]. \(\square \) (Lemma 4.2)
Now we are at the point where Theorem 2.17 can be reproved by the generalized machinery: the Markov chain \({\mathbb {G}}(d)\) (using switches only) is rapidly mixing for any d from a P-stable set. By Lemmas 4.2 and 3.27, there exists a precursor on \({\mathfrak {R}}_\textrm{id}\) with parameter 3c, and the theorem follows from Remark 2.13 and Theorem 3.28.
4.2 Stage 2: Open trails
Until now, the degree sequences of X and Y in \((X,Y,s)\in {\mathfrak {C}}_\textrm{thin}\) were identical, that is, s was a closed trail. In the second stage, we deal with the case when \(\Vert \deg _X-\deg _Y\Vert _1=2\) (while \(\Vert \deg _X-\deg _Y\Vert _\infty =1\)). The following lemma is actually a framework for reducing the construction of the precursor on \({\mathfrak {C}}_\textrm{thin}\) to Lemma 4.2. Note that we do not aim to optimize our estimate of the mixing time; we are merely interested in bounding it polynomially. Surprisingly, to construct the precursor on \({\mathfrak {C}}_\textrm{thin}\), it is sufficient to consider only those open trails s that have odd length.
Informally, the forthcoming Lemma 4.6 states that if any open (X, Y)-alternating trail of odd length can be cut up into a constant number of segments that can be reassembled into at most two (X, Y)-alternating trails that are either closed or can be closed by including \(v_0v_\lambda \) or \(v_1v_{\lambda -1}\) to join the two ends (alternation is not required there), then we can reduce the precursor construction on \({\mathfrak {C}}_\textrm{thin}\) to a precursor construction on \({\mathfrak {C}}_\textrm{id}\).
Lemma 4.6
Suppose there exists a precursor on \({\mathfrak {C}}_\textrm{id}\) with parameter c, and let \(c'\) be a fixed integer. Suppose, moreover, that for any \((X,Y,s)\in {\mathfrak {C}}_\textrm{thin}\) where \(s=v_0v_1\ldots v_\lambda \in \Pi (\nabla _{X,Y})\) is an open trail with \(v_0<_\textrm{lex} v_\lambda \) for some odd integer \(\lambda \), there exist \(\nabla _1,\nabla _2\) and \(s_1\in \Pi (\nabla _1),s_2\in \Pi (\nabla _2)\) (where \(\nabla _2=\emptyset \) is allowed) such that

(1)
\(\nabla _{X,Y}{\setminus }\{v_0v_\lambda \}\subseteq \nabla _1\cup \nabla _2\subseteq \left\{ \begin{array}{ll} \nabla _{X,Y}\cup \{v_0v_\lambda \} &{} \text {if }v_1=v_{\lambda -1},\\ \nabla _{X,Y}\cup \{v_0v_\lambda ,v_1v_{\lambda -1}\}&{} \text {if }v_1\ne v_{\lambda -1},\end{array}\right. \)

(2)
\(\nabla _{X,Y}\triangle \nabla _1\triangle \nabla _2\subseteq \{v_0v_\lambda \}\),

(3)
if \(v_1v_{\lambda -1}\in (\nabla _1\cup \nabla _2){\setminus } \nabla _{X,Y}\), then \(v_0v_\lambda \in \nabla _{X,Y}\) and \(s_1\) or \(s_2\) is equal to \(v_0v_1v_{\lambda -1}v_\lambda v_0\).
Moreover, for both \(i=1,2\):

(4)
the line graph \(L(\nabla _i,s_i)\) is an even cycle (or an empty graph),

(5)
\(s_i-v_0v_\lambda -v_1v_{\lambda -1}\) is (X, Y)-alternating,

(6)
\(s_i-v_0v_\lambda \) is (X, Y)-alternating with 0 or 2 exceptions,

(7)
\(s_i-v_1v_{\lambda -1}\) is (X, Y)-alternating with 0 or 2 exceptions,

(8)
the number of components of \(L(\nabla _i,s_i)\cap L(\nabla ,s)\) is at most \(c'\).
Then, there exists a precursor on \({\mathfrak {C}}_\textrm{thin}\) with parameter \(3c+60c'+300\).
We are aware that such a huge parameter is nowhere near a practical bound. We made virtually zero effort to optimize the parameter.
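Counting exceptions to alternation, as in Conditions (6) and (7) above, is a purely local check along the trail. The following sketch uses a simplified stand-in for the paper's notion of (X, Y)-alternation, namely alternation of membership in E(X) along consecutive trail edges (in \(\nabla _{X,Y}\), every edge lies in exactly one of E(X) and E(Y), so the two notions agree there); the function name is ours:

```python
def alternation_exceptions(trail, edges_x):
    """Count consecutive edge pairs of a trail (given by its vertex
    sequence) that fail to alternate between edges and non-edges of X.
    Simplified model of (X, Y)-alternation on nabla_{X,Y}, where each
    trail edge belongs to exactly one of E(X) and E(Y)."""
    trail_edges = [frozenset(p) for p in zip(trail, trail[1:])]
    return sum(
        (prev in edges_x) == (cur in edges_x)
        for prev, cur in zip(trail_edges, trail_edges[1:])
    )
```

An (X, Y)-alternating trail has 0 exceptions; removing or adding a single joining edge can create at most two, which is the "0 or 2 exceptions" pattern in the lemma.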
Proof
Let \((X,Y,s)\in {\mathfrak {C}}_\textrm{thin}\) be such that \(s=v_0v_1\ldots v_\lambda \) and \(v_0<_\textrm{lex} v_\lambda \). We will now consider the case when \(\lambda \) is odd. As discussed earlier, the case of even \(\lambda \) will be handled by a reduction to the odd case. For an odd \(\lambda \), we must have either \(\deg _Y=\deg _X+\mathbb {1}_{\{v_0,v_\lambda \}}\) or \(\deg _Y=\deg _X-\mathbb {1}_{\{v_0,v_\lambda \}}\), because s is (X, Y)-alternating and its length \(\lambda \) is odd.
Let \(\nabla _i\) and \(s_i\in \Pi (\nabla _i)\) for \(i=1,2\) be the set of edges and pairing function assumed to exist in the statement of this lemma. Let M be such that \((X,Y,s)\in {\mathfrak {D}}_M\), and we will first define \(\varUpsilon _M(X,Y,s)\), then we will also define \(\pi _M(X,Y,s,Z)\) and \(B_M(X,Y,s,Z)\) for any \(Z\in \varUpsilon _M(X,Y,s)\).
Let us modify the auxiliary matrix M. Recall from Definition 3.24 that if \(ab\in \nabla _{X,Y}\), then \(M_{ab}=1\). By Assumption (3) of this lemma, if \(M_{v_1v_{\lambda -1}}\ne 1\) and \(v_1v_{\lambda -1}\in \nabla _i\), then \(v_0v_\lambda \in \nabla _{X,Y}\) and \(M_{v_0v_\lambda }=1\). Let us define
so that \(M'_{v_0v_\lambda }=1\). Also, \(M'_{v_1v_{\lambda -1}}=1\) if \(v_1v_{\lambda -1}\in \nabla _1\cup \nabla _2\). The row sums of M and \(M'\) are equal on every vertex except possibly on \(v_0\) and \(v_\lambda \).
By Assumption (4), \(|\nabla _i|\) is even. From Assumptions (1) and (2) it follows that any \(v_{j}v_{j+1}\) is contained in either \(\nabla _1\) or \(\nabla _2\), but not both, except if \(\{v_{j},v_{j+1}\}=\{v_0,v_\lambda \}\) or \(\{v_{j},v_{j+1}\}=\{v_1,v_{\lambda -1}\}\). Therefore, \(\nabla _1\cap \nabla _2\subseteq \{ v_0v_\lambda , v_1v_{\lambda -1}\}\). Let us start to extend the precursor. Without loss of generality, we may assume that \(\nabla _1\ne \emptyset \).
Case A1: \(\nabla _2=\emptyset \). Note that \(|\nabla _{X,Y}|=\lambda \) is odd. Since \(|\nabla _1|\) is even and \(\nabla _1\setminus \nabla _{X,Y}\subseteq \{v_0v_\lambda \}\), we must have \(v_0v_{\lambda }\notin \nabla _{X,Y}\Leftrightarrow v_0v_\lambda \in \nabla _1\). Also, \(v_1v_{\lambda -1}\in \nabla _1\Leftrightarrow v_1v_{\lambda -1}\in \nabla _{X,Y}\). Let us slightly change X and Y, so that the symmetric difference of the modified graphs \(X_1,Y_1\) is exactly \(\nabla _1\):
Suppose \(s_1-v_0v_\lambda \) is not alternating in X: then \(v_1v_{\lambda -1}\in \nabla _{X,Y}\) and the two non-alternations of \(s_1-v_0v_\lambda \) are located at \(v_1\) and \(v_{\lambda -1}\). But because \(s_1-v_0v_\lambda -v_1v_{\lambda -1}\) is alternating in X, we have \(\deg _{E(X)\cap \nabla _{X,Y}}(v_1)-\deg _{E(Y)\cap \nabla _{X,Y}}(v_1)=2\), so s cannot possibly be (X, Y)-alternating, a contradiction. It follows that \(s_1\) is alternating in \(X_1\) (and thus in \(Y_1\)): indeed, if \(s_1\) is not alternating in X, then the two exceptions are the endpoints of \(v_0v_\lambda \). Therefore, \((X_1,Y_1,s_1)\in {\mathfrak {C}}_\textrm{id}\cap {\mathfrak {D}}_{M'}\).
We extend the precursor to (X, Y, s) as follows.
Let us verify that Definition 3.25 holds for the extension. The defined path \(\varUpsilon _M(X,Y,s)\) in the Markov graph utilizes one edge-toggle, while the rest of the steps are switches. When the edge-toggle occurs, the degree sequence of the then-current graph changes from \(\deg _X\) to \(\deg _Y\); the rest of the steps do not change the degree sequence.
If \(Z\in \{X,Y\}\), then \(M-A_Z\) is 0-tight, because \((X\triangle Z){\setminus } \nabla =(Y\triangle Z){\setminus } \nabla =\emptyset \). Suppose next that \(Z\in \varUpsilon _{M'}(X_1,Y_1,s_1)\). If \(M'=M\), then \(M-A_Z\) is \(c\)-tight by induction. If \(M'=M\pm A_{v_0v_\lambda }\), then note that the row sums of \(M'-A_{X}\) are equal to the row sums of \(M-A_Y\), and the row sums of \(M'-A_Y\) are equal to the row sums of \(M-A_X\). The degree sequence of Z is equal to \(\deg _X\) or \(\deg _Y\), so \(M'-A_Z\) is \(c\)-tight, and therefore, \(M-A_Z\) is \((c+1)\)-tight.
The length of \(\varUpsilon _M(X,Y,s)\) is at most \(1+c|\nabla _1|\le 1+c|\nabla _{X,Y}|+c\), still linear. The symmetric difference of X and Z outside \(\nabla _{X,Y}\) may also include \(v_0v_\lambda \), so the upper bound in Definition 3.25(b) increases by at most one.
The maximum number of exceptions to alternation of \(\pi _M(X,Y,s,Z)\) in Z is no more than the number of exceptions to alternation of \(\pi _{M'}(X_1,Y_1,s_1,Z)\) in Z, because \(v_0v_\lambda \notin \nabla _{X,Y}\). Since \(\pi _{M'}(X_1,Y_1,s_1,Z)\) is a closed trail, even if we restrict its domain from \(\nabla _1\) to \(\nabla _{X,Y}\), it remains connected. The range of \(B_M(X,Y,s,Z)\) increases by a polynomial multiplicative factor (of at most \(4n^4\), but this will be dwarfed by the bound in the next case).
Lastly, \(\varPsi \) is still well defined. Trivially, if \(B_M(X,Y,s,Z)=(0,\textrm{true})\) (alternatively \((0,\textrm{false})\)), then \(X=Z\) (\(Y=Z\)) and \(Y=Z\triangle \nabla _{X,Y}\) (\(X=Z\triangle \nabla _{X,Y}\)). If \(B_M(X,Y,s,Z)=(1,\cdots )\), then we can recover \(\pi _{M'}(X_1,Y_1,s_1,Z)\) from \(\pi _M(X,Y,s,Z)\) using their symmetric difference, and subsequently, we can recover \(X_1\) and \(Y_1\) via \(\varPsi \), because we have a precursor on \({\mathfrak {C}}_\textrm{id}\). From these graphs we can easily recover both X and Y, as \(B_M(X,Y,s,Z)\) describes whether \(v_{0}v_{\lambda }\) is in E(X) or not (and the same containment relation holds for E(Y) because \(v_0v_\lambda \notin \nabla _{X,Y}\)).
Case A2: \(\nabla _1\ne \emptyset \) and \(\nabla _2\ne \emptyset \). Task 1: constructing \(\varUpsilon _M(X,Y,s)\). Obviously, \(|\nabla _i|\ge 4\) for \(i=1,2\) (\(s_i\) is an even-length closed trail), and \(|\nabla _1\cap \nabla _2|\le 2\), so \(\lambda \ge 5\). The reduction is similar to the previous case; however, the construction of the precursor on (X, Y, s) will be reduced to not one, but two elements of \({\mathfrak {C}}_\textrm{id}\). Recall that any \(v_{j}v_{j+1}\ne v_0v_\lambda ,v_1v_{\lambda -1}\) appears in exactly one of \(\nabla _1\) and \(\nabla _2\). If \(v_1v_{\lambda -1}\in \nabla _{X,Y}\cup \nabla _1\cup \nabla _2\), then the edge \(v_1v_{\lambda -1}\) appears in exactly two of \(\nabla _{X,Y}\), \(\nabla _1\), \(\nabla _2\). Observe that for any vertex \(v\in [n]\), we have
Thus, for any \(v\ne v_0,v_\lambda \), we have
Suppose that \(s_i-v_0v_\lambda \) is not alternating in X for both \(i=1\) and \(i=2\). Then, \(s_i-v_0v_{\lambda }\) is not alternating at \(v_1\) and \(v_{\lambda -1}\), which implies that \(v_1v_{\lambda -1}\in \nabla _1\cap \nabla _2\) and \(v_1v_{\lambda -1}\notin \nabla _{X,Y}\). If, say, \(v_1v_{\lambda -1}\in E(X)\), then \(s_i(v_1,v_1v_{\lambda -1})\in E(X)\), but \(s_i-v_0v_\lambda -v_1v_{\lambda -1}\) is alternating (for \(i=1,2\)); thus, we have \(\deg _{E(X)\cap \nabla _{X,Y}}(v_1)=\deg _{E(Y)\cap \nabla _{X,Y}}(v_1)+2\), so s cannot possibly be (X, Y)-alternating, a contradiction. The case \(v_1v_{\lambda -1}\notin E(X)\) similarly leads to a contradiction; therefore, at least one of \(s_1-v_0v_\lambda \) and \(s_2-v_0v_\lambda \) must be alternating in X.
By swapping \(\nabla _1\) with \(\nabla _2\) and \(s_1\) with \(s_2\), we may assume that \(s_1-v_0v_\lambda \) is alternating in X. We claim that
If \(v_1v_{\lambda -1}\in \nabla _1\cap \nabla _2\), then \(v_1v_{\lambda -1}\notin \nabla _{X,Y}\), and as before, we get a contradiction if \(s_i-v_0v_{\lambda }\) is alternating in X for both \(i=1,2\); so \(s_2-v_0v_{\lambda }\) must not alternate in X. If \(s_2-v_0v_\lambda \) is not alternating in X, then \(v_1v_{\lambda -1}\in \nabla _2\). Thus, if \(s_2-v_0v_\lambda \) is not alternating in X and \(v_1v_{\lambda -1}\notin \nabla _1\), then \(v_1v_{\lambda -1}\in \nabla _{X,Y}\), and so \(\deg _{E(X)\cap \nabla _{X,Y}}(v_1)-\deg _{E(Y)\cap \nabla _{X,Y}}(v_1)=2\), a contradiction.
Let us define now 4 auxiliary graphs.
By our assumptions, \(s_1\) is alternating in \(X_1\). Furthermore, from (33) it follows that \(s_2\) is alternating in \(X_2\). Because \(s_i\) defines an alternating trail in \(X_i\), we have \(\deg _{X_i}=\deg _{Y_i}\). Trivially, \(E(X_1)\triangle E(X)\subseteq \{v_0v_\lambda \}\), and by the assumptions of the lemma,
We claim that
Suppose that \(s_1\) is not alternating in X and \(s_2\) is not alternating in \(Y_1\). Then, \(X_1=X\triangle v_0v_\lambda \) and \(X_2=Y_1\triangle v_0v_\lambda \). Because \(s_1\) is not alternating in X, we have \(v_0v_\lambda \in \nabla _1\). Also, because \(s_2-v_1v_{\lambda -1}\) is not alternating in \(Y_1=X_1\triangle \nabla _1=X\triangle (\nabla _1{\setminus } \{v_0v_\lambda \})\), \(s_2-v_1v_{\lambda -1}\) is not alternating in X either. But this implies that \(\deg _{X\cap \nabla _{X,Y}}(v_0)-\deg _{Y\cap \nabla _{X,Y}}(v_0)\in \{2,3\}\) (depending on whether \(v_0v_\lambda \) is in \(\nabla _{X,Y}\) or not), which is a contradiction.
From now on, we assume that \(X_1=X\) or \(X_2=Y_1\). In other words, at least one of the following three symmetric differences is an empty set:
If exactly one of them is an empty set, then observe that
which is a contradiction. Thus, there are exactly two empty sets on the left hand side of Eq. (35). From \(\deg _{X_i}=\deg _{Y_i}\) for \(i=1,2\), it follows that
In other words, we have shown that \((X_i,Y_i,s_i)\in {\mathfrak {C}}_\textrm{id}\cap {\mathfrak {D}}_{M'}\) for \(i=1,2\), and we may proceed with the reduction. By (34), we have three cases:
where the \(\rightarrow \) signs simply represent joining two sequences (repeated graphs are dropped from the sequence). By the above observations about the symmetric differences and (36), \(\varUpsilon _M(X,Y,s)\) is indeed a path in the desired Markov graph.
\(M'-A_Z\) is \(c\)-tight by the properties of the precursor on \({\mathfrak {C}}_\textrm{id}\). Therefore, \(M-A_Z\) is \((c+3)\)-tight.
Task 2: Constructing \(\pi _M(X,Y,s)\). We have to construct a connected \(\pi _M(X,Y,s,Z)\) from the current \(\pi _{M'}(X_i,Y_i,s_i,Z)\) (where \(Z\in \varUpsilon _{M'}(X_i,Y_i,s_i)\)). Notice that \(L(\nabla _{X,Y},s)-\nabla _i\) (delete \(\nabla _i\) from the vertex set of the line graph) has at most \(c'+1\) components, since \(L(\nabla _{X,Y},s)\) is a path. Furthermore, \(L(\nabla _i,\pi _{M'}(X_i,Y_i,s_i))-(\nabla _i\setminus \nabla _{X,Y})\) has at most 2 components (since \(|\nabla _i{\setminus } \nabla _{X,Y}|\le 2\)). For \(Z\in \varUpsilon _{M'}(X_i,Y_i,s_i)\), let
The graph \(L(\nabla _{X,Y},\sigma _Z)\) has at most \(c'+3\) components because \(\pi _{M'}(X_i,Y_i,s_i)|_{\nabla _{X,Y}}\) and \(s|_{\nabla _{X,Y}\setminus \nabla _i}\) are composed of at most 2 and \(c'+1\) trails, respectively. Note that
and thus, \(|\sigma _{X_i}\triangle s|\le 2(c'+3)\).
We claim that there exists \(\sigma '_Z\in \Pi (\nabla _{X,Y})\) such that \(\sigma '_Z\supseteq \sigma _Z\) (extends \(\sigma _Z\)) and \(|\sigma '_Z\triangle \sigma _Z|\le 2(c'+3)\). Let \(U_x=\{xy\in \nabla _{X,Y}\ |\ (x,xy)\notin \textrm{dom}(\sigma _Z)\}\) be the set of unpaired edges incident to x. In total, we have \(\sum _{x\in [n]}|U_x|\le 2(c'+3)\). It is sufficient now to define \(\sigma '_Z(x,\bullet )\) on \(U_x\) for every \(x\in [n]\). To do so, observe that:
Then, the parity of \(|U_x|\) satisfies:
From the last congruence it follows that \(|U_x|\) is even for \(x\ne v_0,v_\lambda \), so we may choose \(\sigma '_Z(x,\bullet )\) such that it pairs the edges in \(U_x\). If \(v_0v_1\notin \nabla _i\), then \(|U_{v_0}|\) is even, and we may choose \(\sigma '_Z(v_0,\bullet )\) such that it pairs the edges in \(U_{v_0}\) (note that \(\sigma '_Z(v_0,v_0v_1)=\sigma _Z(v_0,v_0v_1)=v_0v_1\)). If \(v_0v_1\in \nabla _i\), then \(|U_{v_0}|\) is odd and by definition \(\pi _{M'}(X_i,Y_i,s_i,Z)\) cannot map \((v_0,v_0v_1)\) to \(v_0v_1\); thus, \(\sigma '_Z(v_0,\bullet )\) can pair all edges of \(U_{v_0}\) except one, which \(\sigma '_Z(v_0,\bullet )\) will map to itself. Define \(\sigma '_Z(v_\lambda ,\bullet )\) on \(U_{v_\lambda }\) analogously. In any case, \(L(\nabla _{X,Y},\sigma '_Z)\) is composed of a path and a certain number of cycles, in total still no more than \(c'+3\) components.
Furthermore, we claim that there exists \(\pi _Z\in \Pi (\nabla _{X,Y})\) such that \(|\pi _Z\triangle \sigma '_Z|\le 4(c'+3)\) and \(L(\nabla _{X,Y},\pi _Z)\) is connected. The pairing function \(\sigma '_Z\) defines one open trail and at most \(2(c'+3)-1\) closed trails in \(\nabla _{X,Y}\), and these trails partition the edge set of the connected trail s. Any closed trail intersecting the open trail can be incorporated into the open trail by changing the pairing function such that the symmetric difference increases by 4.
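This incorporation step is the familiar trail-splicing argument behind Hierholzer's algorithm. As an illustration only (the sketch below works on vertex sequences rather than on pairing functions, which is a simplification of our setting), a closed trail sharing a vertex with the open trail can be spliced in as follows:

```python
def splice(open_trail, closed_trail):
    """Splice a closed trail (vertex list with first == last) into an open
    trail (vertex list) at a common vertex v: follow the open trail to v,
    go around the closed trail once, then continue along the open trail."""
    for i, v in enumerate(open_trail):
        if v in closed_trail:
            j = closed_trail.index(v)
            # rotate the closed trail so that it starts and ends at v
            rotated = closed_trail[j:-1] + closed_trail[:j + 1]
            return open_trail[:i] + rotated + open_trail[i + 1:]
    raise ValueError("the trails share no vertex")
```

At the splice vertex the pairing changes in exactly two pairs, which is why the symmetric difference grows by at most 4 per incorporated closed trail.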
Let the pairing function associated to Z be
We know that \(\sigma _Z\) alternates in Z with at most 3c exceptions, since \(|(E(X_i)\triangle E(Z))\setminus \nabla _{i}|\le c\) and \(\pi _{M'}(X_i,Y_i,s_i)\) alternates in Z with at most c exceptions. Since \(|\pi _Z\triangle \sigma _Z|\le 6(c'+3)\), we get that \(\pi _Z\) alternates in Z with at most \(9(c'+3)\) exceptions.
Task 3: Constructing \(B_M(X,Y,s)\). Let us identify the ends of intervals of \(\nabla _i\) edges in a pairing function \(\vartheta \):
where \(\displaystyle \min _\textrm{lex}V(L)\) stands for the lexicographically minimal edge in V(L). Retracing the steps by which \(\pi _Z\) is obtained, we have
Let
Every set listed in \(B_M(X,Y,s,Z)\) has at most constant size, so the size of the range of \(B_M\) increases by a factor polynomial in n (at most \(n^{60c'+240}\)). It remains to show that \(\varPsi \) is still well-defined. This is trivial if \(Z\in \{X,Y\}\). Suppose from now on that \(Z\in \varUpsilon _{M'}(X_i,Y_i,s_i)\). Since \(L(\nabla _{X,Y},\pi _Z)\) is composed of paths and cycles, \(T_Z\) determines the ends of intervals of consecutive \(\nabla _i\) edges in the trails determined by \(\pi _Z\), and \(C_Z\) determines those \(L(\nabla _{X,Y},\pi _Z)\) components whose vertex set is a subset of \(\nabla _i\). Therefore, \(\nabla _{X,Y}\), \(T_Z\), \(C_Z\), and \(\pi _Z\) determine \(\nabla _i\). Thus, \(\pi _Z|_{\nabla _i}=\pi _M(X,Y,s,Z)|_{\nabla _i}\) can be determined, and in turn \(\pi _{M'}(X_i,Y_i,s_i,Z)\) can be reconstructed too. Since we have a precursor on \({\mathfrak {C}}_\textrm{id}\), we get
Notice that \(X-v_0v_\lambda =X_1-v_0v_\lambda \) and \(Y-v_0v_\lambda =Y_2-v_0v_\lambda \). Since \(v_0v_\lambda \in E(Y)\) if and only if \(v_0v_\lambda \in E(X)\triangle \nabla _{X,Y}\), both X and Y are determined by \((X_i,Y_i)\). Furthermore, \(\sigma _Z|_{\nabla _{X,Y}\setminus \nabla _i}=s|_{\nabla _{X,Y}\setminus \nabla _i}\) is already determined, and together with \(s_i|_{\nabla _{X,Y}}\) and the auxiliary parameters, they determine s.
We have now defined the precursor on any \((X,Y,s)\in {\mathfrak {C}}_\textrm{thin}\) where s is an open trail of odd length. Suppose from now on that \((X,Y,s)\in {\mathfrak {C}}_\textrm{thin}\) where s is an open trail of even length.
Case B: \(s=v_0v_1\ldots v_{\lambda -1}v_\lambda \) is an open trail of even length and \(v_0=v_{\lambda -1}\). We will perform exactly one hinge-flip \(\{v_{\lambda -2}v_0,v_{\lambda -2}v_\lambda \}\). This case is very similar to the case when s is an open trail of odd length and \(\nabla _2=\emptyset \), so we will give the construction, but checking the precursor properties is left to the diligent reader. Let
We define \(\pi _M(X,Y,s,Z)\) simply by replacing \((v_{\lambda -2},v_{\lambda -2}v_{\lambda })\) with \((v_{\lambda -2},v_{\lambda -2}v_0)\) in the pairing \(\pi _{M'}(X_1,Y_1,s_1,Z)\) and removing \((v_{\lambda },v_{\lambda -2}v_{\lambda })\) from the pairing (creating the self-paired edges at \(v_0\) and \(v_{\lambda }\)). Defining a suitable \(B_M(X,Y,s,Z)\) is straightforward and is also left to the reader.
Case C: \(s=v_0v_1\ldots v_{\lambda -1}v_\lambda \) is an open trail of even length and \(v_0\ne v_{\lambda -1}\). Let \(s'=s-v_{\lambda -1}v_\lambda \) and observe that we have already defined \(\varUpsilon _M(X,Y\triangle v_{\lambda -1}v_\lambda ,s')\) in the previous subsection, since \(L(\nabla _{X,Y}-v_{\lambda -1}v_\lambda ,s')\) is a path of odd length.
However, choosing \(\varUpsilon _M(X,Y,s)=\varUpsilon _M(X,Y\triangle v_{\lambda -1}v_\lambda ,s')\rightarrow Y\) violates the precursor property, because the degree at \(v_{\lambda -1}\) may become too small or too large when the edge-toggle is performed on \(v_0v_{\lambda -1}\) (the rest of the steps are switches). Fortunately, this is very easy to fix: simply replace the edge-toggle on \(v_0v_{\lambda -1}\) in the previous definitions of \(\varUpsilon _M\) with the hinge-flip between \(v_0v_{\lambda -1}\) and \(v_{\lambda -1}v_\lambda \) to obtain \(\varUpsilon _M(X,Y\triangle v_{\lambda -1}v_\lambda ,s')\). Since every other step in \(\varUpsilon _M(X,Y\triangle v_{\lambda -1}v_\lambda ,s')\) is a switch, this ensures that for any \(Z\in \varUpsilon _M(X,Y,s)\) we have \(\deg _Z\in \{\deg _X,\deg _Y\}\).
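For readers less familiar with the three local operations, the following sketch (our own illustration, representing a graph as a set of frozenset edges) shows why replacing the toggle by a hinge-flip keeps every intermediate degree sequence in \(\{\deg_X,\deg_Y\}\): a switch preserves all degrees, a hinge-flip moves one unit of degree between two vertices, and only a toggle changes the degree sum.

```python
from collections import Counter

def degrees(edges):
    """Degree sequence of a simple graph given as a set of frozenset edges."""
    return Counter(v for e in edges for v in e)

def switch(edges, a, b, c, d):
    """Replace edges ab, cd by ac, bd (assumed absent); all degrees are kept."""
    removed = edges - {frozenset((a, b)), frozenset((c, d))}
    return removed | {frozenset((a, c)), frozenset((b, d))}

def hinge_flip(edges, a, b, c):
    """Replace edge ab by cb: deg(a) drops by one, deg(c) grows by one."""
    return (edges - {frozenset((a, b))}) | {frozenset((c, b))}

def toggle(edges, a, b):
    """Add or remove the single edge ab, changing deg(a) and deg(b) by one."""
    e = frozenset((a, b))
    return edges - {e} if e in edges else edges | {e}
```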
We also need to define \(\pi _M\) and \(B_M\). Since the odd-length case already describes a trail from \(v_0\) to \(v_{\lambda -1}\), we can join \(v_{\lambda -1}v_\lambda \) to the edge ending the trail at \(v_{\lambda -1}\) to obtain a suitable \(\pi _M(X,Y,s,Z)\) (in the derived bounds, this essentially increases \(c'\) by 1). Furthermore, we also need to store the identity of \(v_{\lambda -1}\) and \(v_\lambda \) in \(B_M(X,Y,s,Z)\) for any \(Z\in \varUpsilon _M(X,Y,s)\). As a result, the range of \(B_M(X,Y,s,Z)\) increases by a polynomial factor (also note that the parameter of the precursor has to be increased by a constant to accommodate \(v_{\lambda -1}v_\lambda \)).
The well-definedness of \(\varPsi \) follows, because the constant number of differences compared to the previous case can all be stored in \(B_M(X,Y,s,Z)\) without violating Definition 3.25(g).
One could say that this proof is not very detailed, but we think it is not worth describing the details, because it would be an almost verbatim repetition of the first two cases. \(\square \)
5 Proof of Theorem 2.20
Let \({\mathcal {I}}\) be a set of weakly P-stable thin degree sequence intervals. By Lemma 4.2, there exists a precursor with parameter \(c=12\) on \({\mathfrak {C}}_\textrm{id}\). We want to apply Lemma 4.6 to prove that there exists a precursor on \({\mathfrak {C}}_\textrm{thin}\) with some fixed parameter. Once this is shown, Theorem 2.20 follows: the precursor can be extended to \({\mathfrak {R}}_\textrm{thin}\) by Lemma 3.27, which is sufficient for proving rapid mixing of \({\mathbb {G}}(\ell ,u)\) for every \((\ell ,u)\in {\mathcal {I}}\) by Theorem 3.28. Suppose \((X,Y,s)\in {\mathfrak {C}}_\textrm{thin}\). If s is a closed trail, then \((X,Y,s)\in {\mathfrak {C}}_\textrm{id}\), on which we have already defined a precursor.
Suppose from now on that \(s=v_0v_1\ldots v_{\lambda }\) is an open trail of odd length (possibly 1). By the KD-lemma (Lemma 3.21), \(v_0\ne v_\lambda \). To apply Lemma 4.6, it is enough to define \(s_1\) and \(s_2\), since their domains determine \(\nabla _1,\nabla _2\). The premises of Lemma 4.6 are elementary and trivial to check once \(s_1\in \Pi (\nabla _1)\) and \(s_2\in \Pi (\nabla _2)\) are given. We will finish the proof by a complete case analysis, where we provide a suitable \(s_1\) and \(s_2\) for each case.
We will prove that Lemma 4.6 holds for \({\mathfrak {C}}_\textrm{thin}\) with \(c'=2\). We distinguish 8 main cases, 3 of which have 2 subcases each. The cases are distinguished based on the relationship between \(v_0,v_1,v_{\lambda -1},v_{\lambda }\) and s. Recall that \(v_0\ne v_\lambda \). On the corresponding figures, by exchanging X and Y, we may suppose that \(v_0v_1\in E(X)\). Thus, the edges of X are drawn with solid lines, edges of Y with dashed lines, and edges not contained in \(\nabla _{X,Y}\) are dotted. Those pairs that are contained in \(\nabla _{X,Y}\) are joined by thick solid or dashed lines. The similarly thick dash-dotted lines represent (X,Y)-alternating segments of the trail s. Recall that a trail may visit a vertex multiple times, but it can traverse an edge at most once. The trails traversed by \(s_1\) and \(s_2\) are colored in blue and red, respectively.
Case 1. First, we assume that \(v_0 v_\lambda \notin \nabla _{X,Y}\).
From now on, we assume that \(v_0 v_\lambda \in \nabla _{X,Y}\). In other words, the open trail \(s\in \Pi (\nabla _{X,Y})\) traverses \(v_0v_\lambda \), that is, there exists \(2\le j\le \lambda -2\) such that \(\{v_0,v_\lambda \}=\{v_j,v_{j+1}\}\).
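Since a trail traverses each edge at most once, this index j is unique, and it can be located by a single scan of the vertex sequence. The hypothetical helper below (our own illustration on vertex sequences) makes this concrete:

```python
def traversal_index(s, x, y):
    """Return the unique index j with {s[j], s[j+1]} == {x, y}, i.e. the
    position where the trail s (a vertex sequence) traverses edge xy,
    or None if the edge is not traversed."""
    for j in range(len(s) - 1):
        if {s[j], s[j + 1]} == {x, y}:
            return j
    return None
```

The case analysis below branches on the parity of j and on which endpoint of the traversed edge equals \(v_0\).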
Case 2. We assume in this case that j is even.
Case 2a. If \(v_j=v_0\) and \(v_{j+1}=v_\lambda \), then let
Case 2b. If \(v_{j+1}=v_{0}\) and \(v_{j}=v_\lambda \), then let
From now on, we assume that j is odd.
Case 3. If \(v_{j}=v_\lambda \) and \(v_{j+1}=v_0\), then let
From now on, we assume that \(v_{j}=v_0\) and \(v_{j+1}=v_\lambda \).
Case 4. If \(v_1=v_{\lambda -1}\), then let
From now on, we assume that \(v_1\ne v_{\lambda -1}\).
Case 5. Next, we assume that \(v_1v_{\lambda -1}\notin \nabla _{X,Y}\).
From now on, we assume that \(v_1 v_{\lambda -1}\in \nabla _{X,Y}\). In other words, the open trail \(s\in \Pi (\nabla _{X,Y})\) traverses \(v_1v_{\lambda -1}\), that is, there exists \(1\le k\le \lambda -1\) such that \(\{v_1,v_{\lambda -1}\}=\{v_k,v_{k+1}\}\).
First, we assume that \(k<j\); the case \(k>j\) will follow easily by symmetry.
Case 6. Suppose that k is even.
Case 6a. If \(v_k=v_{1}\) and \(v_{k+1}=v_{\lambda -1}\), then let
Case 6b. If \(v_{k+1}=v_{1}\) and \(v_{k}=v_{\lambda -1}\), then let
Case 7. Suppose that k is odd.
Case 7a. If \(v_k=v_{1}\) and \(v_{k+1}=v_{\lambda -1}\), then let
Case 7b. If \(v_{k+1}=v_{1}\) and \(v_{k}=v_{\lambda -1}\), then let
Case 8. The remaining case is when \(k>j\). By taking the reverse order \(v'_i=v_{\lambda -i}\) for \(i=0,\ldots ,\lambda \), we have \(\lambda -k-1<\lambda -j-1\), so one of the previous subcases of Case 6 or Case 7 applies to \(s'=v'_0v'_1\ldots v'_{\lambda -1}v'_{\lambda }\). Clearly, the relevant properties of \(s_i\) are preserved by reversing the order of the indices.
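To double-check the symmetry argument, the reversal \(v'_i=v_{\lambda -i}\) can be sketched directly (again our own illustration on vertex sequences): it preserves the multiset of traversed edges, swaps the two endpoints, and sends the edge traversed at position k to position \(\lambda -k-1\).

```python
from collections import Counter

def reverse_trail(s):
    """Reverse an open trail given as a vertex sequence: v'_i = v_{lam - i}."""
    return s[::-1]

def edge_multiset(s):
    """Multiset of unordered edges traversed by the trail s."""
    return Counter(frozenset(p) for p in zip(s, s[1:]))
```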
References
G. Amanatidis and P. Kleer. Approximate Sampling and Counting of Graphs with Near-Regular Degree Intervals. In: 40th International Symposium on Theoretical Aspects of Computer Science (STACS 2023). Ed. by P. Berenbrink, P. Bouyer, A. Dawar, and M. M. Kanté. Vol. 254. Leibniz International Proceedings in Informatics (LIPIcs). Dagstuhl, Germany: Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2023, 7:1–7:23. https://doi.org/10.4230/LIPIcs.STACS.2023.7.
T. Coolen, A. Annibale, and E. Roberts. Generating Random Networks and Graphs. Oxford University Press, May 26, 2017. 325 pp. isbn: 9780191019814.
C. Cooper, M. Dyer, and C. Greenhill. Sampling Regular Graphs and a Peer-to-Peer Network. In: Combinatorics, Probability and Computing 16.4 (2007), pp. 557–593. https://doi.org/10.1017/S0963548306007978.
P. L. Erdős, C. Greenhill, T. R. Mezei, I. Miklós, D. Soltész, and L. Soukup. The mixing time of switch Markov chains: a unified approach. In: European Journal of Combinatorics 99 (2022), 103421. https://doi.org/10.1016/j.ejc.2021.103421.
C. Greenhill. The switch Markov chain for sampling irregular graphs (Extended Abstract). In: Proceedings of the 2015 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA). Society for Industrial and Applied Mathematics, 2014, pp. 1564–1572. https://doi.org/10.1137/1.9781611973730.103.
M. Jerrum, B. D. McKay, and A. Sinclair. When is a Graphical Sequence Stable? In: Random Graphs: Volume 2 (eds. A. Frieze and T. Łuczak). Wiley, 1992, pp. 101–116.
M. Jerrum and A. Sinclair. Approximating the Permanent. In: SIAM Journal on Computing 18.6 (1989), pp. 1149–1178. https://doi.org/10.1137/0218077.
M. Jerrum and A. Sinclair. Fast uniform generation of regular graphs. In: Theoretical Computer Science 73.1 (1990), pp. 91–100. https://doi.org/10.1016/0304-3975(90)90164-D.
R. Kannan, P. Tetali, and S. Vempala. Simple Markov-chain algorithms for generating bipartite graphs and tournaments. In: Random Structures & Algorithms 14.4 (1999), pp. 293–308. https://doi.org/10.1002/(SICI)1098-2418(199907)14:4<293::AID-RSA1>3.0.CO;2-G.
S. Rechner, L. Strowick, and M. Müller-Hannemann. Uniform sampling of bipartite graphs with degrees in prescribed intervals. In: Journal of Complex Networks 6.6 (2018), pp. 833–858. https://doi.org/10.1093/comnet/cnx059.
A. Sinclair. Improved Bounds for Mixing Rates of Markov Chains and Multicommodity Flow. In: Combinatorics, Probability and Computing 1.4 (1992), pp. 351–370. https://doi.org/10.1017/S0963548300000390.
Acknowledgements
PLE, TRM, and IM were supported in part by the National Research, Development and Innovation Office (NKFIH) Grants SNN 135643 and K 132696.
Funding
Open access funding provided by ELKH Alfréd Rényi Institute of Mathematics.
Ethics declarations
Conflict of interest
On behalf of all the authors, the corresponding author states that there is no conflict of interest.
Additional information
Communicated by Frédérique Bassino.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Erdős, P.L., Mezei, T.R. & Miklós, I. Approximate Sampling of Graphs with Near-P-Stable Degree Intervals. Ann. Comb. 28, 223–256 (2024). https://doi.org/10.1007/s00026-023-00678-8