Approximate Sampling of Graphs with Near-P-stable Degree Intervals



Introduction
In this relatively short, highly technical paper we prove a substantial extension of a recent result of Amanatidis and Kleer [1, Theorem 1.3]. Our proof is based on the unified approach that was developed in [4] for P-stable degree sequences. For the sake of brevity, in this section we concisely describe the problem itself, but we do not give a detailed description of the background. For further details, the diligent reader is referred to [1,4].
Approximate sampling of graphs with given degree sequences plays an increasingly important role in modelling different real-life dynamics. One basic way to study it is the switch Markov chain method, made popular by Kannan, Tetali, and Vempala [9]. The currently best result via this method is [4], where it is proved that the switch Markov chain is rapidly mixing on P-stable degree sequences. The notion of P-stability was introduced by Jerrum and Sinclair [8] and first studied for its own sake by Jerrum, McKay, and Sinclair [6].
In real-life applications it is not always possible to know the exact degree sequence of the targeted network. For example, a natural scenario where this problem arises is hypothesis testing on social networks that are only partially observed. Therefore it can happen that we have to sample networks with slightly different degree sequences. It is possible to study the situation via Markov chain decompositions, where another Markov chain moves among the component chains. A good example of this approach is the proof of [1, Theorem 1.1].
Another possibility is to introduce further local operations, since the switch operation itself does not change the degree sequence. Such operations are the hinge flip and the toggle (deletion-insertion) operations. These two latter operations were introduced by Jerrum and Sinclair in their seminal work on approximating 0-1 permanents [7]. (The number of perfect matchings of a bipartite graph is equal to the permanent of its bipartite adjacency matrix.) These three operations together are often applied in practical network building applications (as pointed out in [2]), but without any theoretical guarantee for the correctness of the result.
In 2018, Rechner, Strowick, and Müller-Hannemann [10] defined a Markov chain with these three local operations for bipartite graphs. Amanatidis and Kleer recognized in their important recent preprint [1] the following very interesting fact: assume that the inconsistencies in the degree sequences are never bigger than one coordinate-wise (the degrees can be i or i + 1), and the degree intervals are placed close to a given constant r (the interval placements can vary within [r − r^α, r + r^α], where α is at most 1/2). The authors coined the name near-regular degree intervals for this degree sequence property and the name degree interval Markov chain for this whole setup. Their result is that the degree interval Markov chain for near-regular degree intervals is rapidly mixing.
Our main result (Theorem 2.20) is that this Markov chain is rapidly mixing for tight degree intervals that are placed at P-stable degree sequences. Since all degree sequences close to some constant are P-stable, but P-stable degree sequences can be very far from regular sequences, our result is a very extensive generalization of the theorem of Amanatidis and Kleer.
To our great surprise, it turned out that this result can be derived from the proof of the main theorem of [4]. To that end, we had to analyse the auxiliary structures of the proof in detail and extend them to cover this setup. The result of this analysis is the notion of a precursor (Section 3.3). In turn, this notion is conducive to a rather short proof of the rapid mixing property. Therefore the main task in this paper is to define the appropriate precursor.

Definitions and Notation
Many of the definitions in this section are extensions or generalizations of notions introduced in [4]. We will alert the reader whenever this is the case. We consider N the set of non-negative integers. Let [n] = {1, . . . , n} denote the integers from 1 to n, and let \binom{[n]}{k} denote the set of k-element subsets of [n]. Given a subset S ⊆ [n], let 1_S : [n] → {0, 1} be the characteristic function of S, that is, 1_S(s) = 1 ⇔ s ∈ S. We often use ⊎ to emphasize that a union of pairwise disjoint sets is taken. The graphs in this paper are vertex-labelled and finite. Parallel edges and loops are forbidden, and unless otherwise stated, the labelled vertex set of an n-vertex graph is [n]. The line graph L(G) of a graph G is a graph on the vertex set E(G) (so the vertices of L(G) are taken from \binom{[n]}{2}), where any two edges e, f ∈ E(G) that are adjacent are joined (by an edge). The line graph is also free of parallel edges and loops. A trail is a walk that does not visit any edge twice. An open trail starts and ends on two distinct vertices. A closed trail has neither a start nor an end vertex. Given a set R ⊆ \binom{[n]}{2}, we may treat R as a graph. If X is a graph on the vertex set [n], let X△R = ([n], E(X)△R).
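The elementary operations above are easy to experiment with. The following minimal Python sketch (our illustration, not part of the paper) models a graph by its set of normalized vertex pairs and implements the symmetric difference X△R:

```python
def edge(u, v):
    """Normalize an unordered pair of vertices to a sorted tuple."""
    return (u, v) if u < v else (v, u)

def sym_diff(edges_X, R):
    """E(X) symmetric-difference R; both arguments are sets of normalized
    pairs, so this models the graph X triangle R = ([n], E(X) triangle R)."""
    return edges_X ^ R

# X is the path 1-2-3; R toggles the pairs {1,3} and {1,2}.
X = {edge(1, 2), edge(2, 3)}
R = {edge(1, 3), edge(1, 2)}
print(sorted(sym_diff(X, R)))  # [(1, 3), (2, 3)]
```

Toggling a pair twice restores the original graph, since △ is an involution in each coordinate.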
Definition 2.3. A degree sequence on n vertices is a vector d ∈ N^n which is coordinate-wise at most n − 1. The set of realizations of d is the following set of graphs: G(d) = {G graph on [n] : deg(G) = d}. The degree sequence d is graphic if G(d) is non-empty. A set of degree sequences D may contain graphic as well as non-graphic degree sequences.
Definition 2.4. For a pair of vectors ℓ, u ∈ N^n we write ℓ ≤ u if and only if ℓ is coordinate-wise less than or equal to u, that is, ℓ_i ≤ u_i for every i ∈ [n]. We denote the set of realizations of the degree sequence interval [ℓ, u] by G(ℓ, u) = ⋃_{ℓ ≤ d ≤ u} G(d).

Remark 2.6. Not every degree sequence in [ℓ, u] is necessarily graphic, even if both ℓ and u are graphic.
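For very small n, the realization set G(ℓ, u) can be enumerated by brute force, which is a convenient way to experiment with these definitions. The sketch below is ours (exponential in n, for toy instances only):

```python
from itertools import combinations

def realizations(l, u):
    """Brute-force G(l, u): all labelled simple graphs on [n] with l <= deg <= u."""
    n = len(l)
    pairs = list(combinations(range(n), 2))
    result = []
    for mask in range(1 << len(pairs)):        # every subset of possible edges
        edges = [pairs[i] for i in range(len(pairs)) if mask >> i & 1]
        deg = [0] * n
        for a, b in edges:
            deg[a] += 1
            deg[b] += 1
        if all(l[i] <= deg[i] <= u[i] for i in range(n)):
            result.append(frozenset(edges))
    return result

# Degrees between (0,0,0) and (1,1,1): the empty graph and the three single edges.
print(len(realizations((0, 0, 0), (1, 1, 1))))  # 4
```

Note that two edges on 3 vertices necessarily share a vertex of degree 2, so no larger graph qualifies here.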
Definition 2.7. Given a polynomial p ∈ R[x], we say that a degree sequence d is p-stable if
|⋃_{{i,j} ∈ \binom{[n]}{2}} G(d + 1_{{i,j}})| ≤ p(n) · |G(d)|.   (1)

Definition 2.9. A set of degree sequences D is P-stable if there exists p ∈ R[x] such that D is p-stable (that is, every d ∈ D is p-stable).
In [4], only P-stability is defined, but in this paper it is more convenient to also define p-stability.
Remark 2.10. A finite set of degree sequences D is always P-stable.
Let us introduce a weaker stability notion for degree sequence intervals.
Definition 2.11. Given p ∈ R[x], we say that a degree sequence interval [ℓ, u] is weakly p-stable if
|⋃_{{i,j} ∈ \binom{[n]}{2}} G(ℓ, u + 1_{{i,j}})| ≤ p(n) · |G(ℓ, u)|.

Definition 2.12. A set I of degree sequence intervals is weakly P-stable if there exists p ∈ R[x] such that every [ℓ, u] ∈ I is weakly p-stable. (Any finite I is weakly P-stable.)

Remark 2.13. If the set of degree sequences [ℓ, u] is p-stable, then [ℓ, u] is weakly p-stable.
Remark 2.14. It is indeed possible that [ℓ, u] is weakly p-stable, but [ℓ, u] (as a set of degree sequences) is not p-stable. For example, take ℓ = (0)_{i=1}^n and u = (n − 1)_{i=1}^n: the interval [ℓ, u] is clearly weakly 1-stable, but most of the degree sequences on n vertices are not 1-stable.

Definition 2.15 (Degree interval Markov chain). Let us define the degree interval Markov chain G(ℓ, u).
The state space of the Markov chain is G(ℓ, u). In the following we define three types of transitions: switches, hinge-flips, and edge-toggles. If the current state of the Markov chain is G ∈ G(ℓ, u), then
• with probability 1/2, the chain stays in G (the Markov chain is lazy),
• with probability 1/6, pick 4 vertices a, b, c, d (uniformly at random), and the Markov chain changes its state to G′ = G△{ab, bc, cd, da} if deg(G′) = deg(G) and G′ ∈ G(ℓ, u) (a switch), otherwise the chain stays in G,
• with probability 1/6, pick 3 vertices a, b, c (uniformly at random), and the Markov chain changes its state to G′′ = G△{ab, bc} if e(G′′) = e(G) and G′′ ∈ G(ℓ, u) (a hinge-flip), otherwise the chain stays in G,
• with probability 1/6, pick a pair of vertices a, b (uniformly at random), and the Markov chain changes its state to G′′′ = G△{ab} if G′′′ ∈ G(ℓ, u) (an edge-toggle), otherwise the chain stays in G.
(In the figures, solid and dashed line segments represent edges and non-edges, respectively.)
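One transition of the chain can be sketched in Python. This is an illustrative simulation only; it assumes the standard formulation of the three moves (a switch toggles a 4-cycle and must preserve the degree sequence, a hinge-flip preserves the number of edges, an edge-toggle may change it), and all names are ours:

```python
import random

def edge(u, v):
    """Normalize an unordered vertex pair to a sorted tuple."""
    return (u, v) if u < v else (v, u)

def degrees(edges, n):
    deg = [0] * n
    for a, b in edges:
        deg[a] += 1
        deg[b] += 1
    return deg

def in_interval(edges, l, u):
    return all(lo <= d <= hi for lo, d, hi in zip(l, degrees(edges, len(l)), u))

def chain_step(G, l, u, rng=random):
    """One lazy step of the degree interval Markov chain; G is a frozenset of edges."""
    n = len(l)
    if rng.random() < 0.5:                 # with probability 1/2 the chain is lazy
        return G
    move = rng.randrange(3)                # each of the three moves has probability 1/6
    if move == 0:                          # switch: toggle the 4-cycle ab, bc, cd, da
        a, b, c, d = rng.sample(range(n), 4)
        H = G ^ {edge(a, b), edge(b, c), edge(c, d), edge(d, a)}
        ok = degrees(H, n) == degrees(G, n) and in_interval(H, l, u)
    elif move == 1:                        # hinge-flip: toggle ab and bc, keep |E|
        a, b, c = rng.sample(range(n), 3)
        H = G ^ {edge(a, b), edge(b, c)}
        ok = len(H) == len(G) and in_interval(H, l, u)
    else:                                  # edge-toggle: add or delete the pair ab
        a, b = rng.sample(range(n), 2)
        H = G ^ {edge(a, b)}
        ok = in_interval(H, l, u)
    return frozenset(H) if ok else G
```

By construction every proposed move is rejected unless the resulting graph stays in G(ℓ, u); uniformity of the stationary distribution and the mixing time are exactly what the analysis in this paper is about.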
We will use the following seminal result of Sinclair. Let Pr_G(x → y) denote the transition probability from state x to y in the Markov chain G.

Theorem 2.16 (Sinclair). Suppose f is a multicommodity flow in the Markov graph G that, for every pair of states X ≠ Y, transports π(X) · π(Y) units of commodity from X to Y. Then the mixing time of the Markov chain satisfies
τ(ε) ≤ ρ(f) · ℓ(f) · (log(1/π_min) + log(1/ε)),   (2)
where ℓ(f) is the length of the longest path with positive flow and ρ(f) is the maximum loading through an oriented edge of the Markov graph:
ρ(f) = max_{Pr_G(x→y) > 0} (1 / (π(x) · Pr_G(x → y))) Σ_{X,Y} Σ_{γ ∈ Γ_{X,Y} : (x,y) ∈ γ} f(γ),   (3)
where Γ_{X,Y} is the set of all simple directed paths from X to Y in G.
One of the most famous applications of this idea is the result of Jerrum and Sinclair [7] providing a probabilistic approximation of the permanent. The following result also relies on Theorem 2.16, and it describes the largest known class of degree sequences where the switch Markov chain is rapidly mixing (that is, where the rate of convergence of the Markov chain is bounded by a polynomial of the length of the degree sequence).
Theorem 2.17 ([4]). The switch Markov chain is rapidly mixing on the realizations of any degree sequence in a set of P-stable degree sequences (the rate of convergence depends on the set).
There are several known P-stable regions; one of the earliest and best-known ones is the following.
Theorem 2.18 (Jerrum, McKay, and Sinclair [6]). For any n, the set of degree sequences d satisfying
(Δ(d) − δ(d) + 1)² ≤ 4 · δ(d) · (n − Δ(d) + 1)   (4)
is P-stable, where δ(d) and Δ(d) denote the minimum and maximum coordinates of d, respectively. (See Figure 2.)

Amanatidis and Kleer [1] recently published a surprising new type of result: a clever approximate uniform sampler (see, e.g., [7]) for G(ℓ, u) where the elements of [ℓ, u] are near-regular. They achieve this using a composite Markov chain. They also provide the first step in the direction of sampling G(ℓ, u) directly using the degree interval Markov chain.
Let us reiterate that Amanatidis and Kleer [1] apply the Markov chain suggested by Rechner, Strowick, and Müller-Hannemann [10], which is routinely used in practice.
Theorem 2.19 (Theorem 1.3 in [1]). Let 0 < α < 1/2 and 0 < ρ < 1 be fixed, and let r = ρn. Suppose that the degree sequence interval [ℓ, u] satisfies u_i − 1 ≤ ℓ_i ≤ u_i and r − r^α ≤ ℓ_i ≤ u_i ≤ r + r^α for every i ∈ [n]. Then the degree interval Markov chain G(ℓ, u) is rapidly mixing.

Let w_m be the number of realizations in G(ℓ, u) with m edges. The conditions u_i − 1 ≤ ℓ_i ≤ u_i for all i ∈ [n] are sufficient to prove that w_m is log-concave, i.e., w_{m−1} · w_{m+1} ≤ w_m², see [1, Theorem 5.4]. The main idea of that proof is a symmetric-difference decomposition, which we also characterize in our key decomposition lemma, Lemma 3.21.
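The log-concavity claim w_{m−1} w_{m+1} ≤ w_m² can be checked by exhaustive search on toy instances. The sketch below is ours (feasible only for tiny n); it computes the edge-count distribution of G(ℓ, u) for a thin interval on 4 vertices and verifies log-concavity:

```python
from itertools import combinations

def edge_counts(l, u):
    """w_m = number of graphs in G(l, u) with exactly m edges, by brute force."""
    n = len(l)
    pairs = list(combinations(range(n), 2))
    w = {}
    for mask in range(1 << len(pairs)):
        deg = [0] * n
        m = 0
        for i, (a, b) in enumerate(pairs):
            if mask >> i & 1:
                deg[a] += 1
                deg[b] += 1
                m += 1
        if all(l[i] <= deg[i] <= u[i] for i in range(n)):
            w[m] = w.get(m, 0) + 1
    return w

# Thin interval on 4 vertices: every degree must be 1 or 2.
w = edge_counts((1,) * 4, (2,) * 4)
assert all(w.get(m - 1, 0) * w.get(m + 1, 0) <= w[m] ** 2 for m in w)
print(dict(sorted(w.items())))  # {2: 3, 3: 12, 4: 3}
```

Here the three realizations with 2 edges are the perfect matchings of K4, the twelve with 3 edges are the labelled paths, and the three with 4 edges are the 4-cycles.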
Our contribution. The main objective of this paper is to prove the following theorem.
Theorem 2.20. Suppose I is a set of weakly P-stable and thin degree sequence intervals. Then the degree interval Markov chain G(ℓ, u) is rapidly mixing for any [ℓ, u] ∈ I.
It is not hard to see that Theorem 2.19 is a special case of Theorem 2.20: substituting δ = r − r^α and Δ = r + r^α into eq. (4), we get (2r^α + 1)² ≤ 4(r − r^α)(n − r − r^α + 1), which holds for any r and α if n is large enough; see Figure 2.
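The displayed inequality is easy to verify numerically. The sketch below (ours) tests it for the sample near-regular placement r = n/2 with α = 1/2, so that r^α ≈ √r:

```python
def near_regular_ok(n, r, ra):
    """Check (2*ra + 1)^2 <= 4*(r - ra)*(n - r - ra + 1), where ra stands for r^alpha."""
    return (2 * ra + 1) ** 2 <= 4 * (r - ra) * (n - r - ra + 1)

# r = n/2 and alpha = 1/2; the inequality already holds for moderate n.
for n in (100, 1000, 10000):
    r = n // 2
    ra = int(r ** 0.5)
    print(n, near_regular_ok(n, r, ra))  # True for each of these n
```

For n = 100 the two sides are 225 and 7568, so the gap is already large for small instances.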
The switch Markov chain can be embedded into the degree interval Markov chain (the transition probabilities differ by constant factors). Actually, we will use the proof of Theorem 2.17 as a plug-in in the proof of Theorem 2.20, so this paper does not provide a new proof for the switch Markov chain. We will not consider bipartite and directed degree sequences in this paper, but note that Theorem 2.17 applies to those as well. It is easy to check that the proof of Theorem 2.20 works verbatim for bipartite graphs, because the edge-toggles and hinge-flips are applied on vertices that are joined by paths of odd length (hence in different classes). In all likelihood, the proof of Theorem 2.20 can be extended to directed graphs, because directed graphs can be represented as bipartite graphs endowed with a forbidden 1-factor.

Figure 2: Theorem 2.18 defines pairs of lower and upper bounds (δ and Δ), such that any degree sequence which obeys these bounds is P-stable; the area between these functions is filled with vertical lines. The pairs (δ, Δ) of most distant bounds allowed by eq. (4) are given by intersections with vertical lines. For example, any degree sequence which is (element-wise) between δ = (1/4)n and Δ = (3/4)n is P-stable. In comparison, the solid gray region represents a √r-wide region around the regular degree sequences, which corresponds to the domain of Theorem 2.19.

Constructing and bounding the multicommodity-flow
We will define a number of auxiliary structures. Via these structures, we will define a multicommodity-flow on the degree interval Markov chain G(ℓ, u) and measure its load.

Constructing and counting the auxiliary matrices
Kannan, Tetali, and Vempala [9] already introduced an auxiliary matrix to examine the load of a multicommodity-flow. Our auxiliary matrices will be a little different. We start with some definitions, then prove two easy statements.
Definition 3.1. Let the adjacency matrix of a graph X on vertex set [n] be A_X ∈ {0, 1}^{n×n}. Let A^(vw) be the adjacency matrix of the graph ([n], {vw}) with exactly one edge. Let us define
M(X, Y, Z) = A_X + A_Y − A_Z.

Let us define the matrix switch operation. (In a previous paper [4], this operation was called a generalized switch.)

Definition 3.3 (Switch on a matrix). The switch operation on a matrix M on vertices (a, b, c, d) produces the matrix
M′ = M + A^(ab) + A^(cd) − A^(ad) − A^(bc).

Remark 3.4. A switch on a graph X corresponds to a switch on its adjacency matrix A_X.
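The defining property of the matrix switch, that every row- and column-sum is preserved, can be sanity-checked in a few lines of Python. This sketch is ours; the concrete sign pattern (+1 on the symmetric positions ab and cd, −1 on ad and bc) is one choice consistent with adding +1 and −1 to diagonally opposed entries of a 2 × 2 submatrix:

```python
def matrix_switch(M, a, b, c, d):
    """Apply a matrix switch on vertices (a, b, c, d): +1 at the symmetric
    positions (a,b) and (c,d), -1 at (a,d) and (c,b). Every row- and
    column-sum is unchanged; entries may leave {0,1} (e.g. become -1 or 2),
    matching the matrices in {-1,0,1,2}^{n x n} used in the text."""
    M = [row[:] for row in M]
    for i, j, delta in ((a, b, 1), (c, d, 1), (a, d, -1), (c, b, -1)):
        M[i][j] += delta
        M[j][i] += delta
    return M

n = 5
M = [[0] * n for _ in range(n)]
M[0][1] = M[1][0] = 1          # adjacency matrix of a single edge
S = matrix_switch(M, 0, 2, 1, 3)
assert [sum(row) for row in S] == [sum(row) for row in M]           # row-sums preserved
assert all(S[i][j] == S[j][i] for i in range(n) for j in range(n))  # symmetry preserved
```

Each of the four touched rows receives exactly one +1 and one −1, which is why the row-sums (and by symmetry the column-sums) cannot change.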
Definition 3.5. Let r(M) = (Σ_{j=1}^n M_{ij})_{i=1}^n be the sequence of row sums of a matrix M. We say that M is c-tight (for some c ∈ N) if M is a symmetric matrix with zero diagonal and there exists a graph W ∈ G(ℓ, u + 1_{{i,j}}) for some {i, j} ∈ \binom{[n]}{2} such that A_W differs from M in at most c symmetric pairs of entries.

Recall the definition of weak p-stability and eq. (1). We will use the number of c-tight matrices to bound the number of auxiliary matrices.
Lemma 3.6. The number of c-tight matrices is at most 2^c · n^{2c} · p(n) · |G(ℓ, u)|.

Proof. We can obtain any c-tight M we want to enumerate as follows. First select an appropriate {i, j} ∈ \binom{[n]}{2} and a realization W ∈ G(ℓ, u + 1_{{i,j}}): by weak p-stability, there are at most p(n) · |G(ℓ, u)| such choices. Then select c symmetric pairs of positions where the adjacency matrix A_W is changed to −1 or +2 (while preserving symmetry). The latter selection can be made in at most 2^c · n^{2c} ways.

The following lemma is crucial for proving the tightness of the auxiliary matrices arising in the multicommodity-flow. A switch on a matrix adds +1 and −1 to the two diagonally opposed pairs of entries of a 2 × 2 submatrix, so the row- and column-sums are preserved.
Lemma 3.7. Suppose M ∈ {−1, 0, 1, 2}^{n×n} is a symmetric matrix whose diagonal is zero, and suppose further that M satisfies the row- and column-sum conditions of Lemma 7.2 of [4]. Then M is 5-tight.
Proof.By Lemma 7.2 of [4], there exist at most two matrix switches that turn M into a {0, 1} matrix with the possible exception of a symmetric pair of −1 entries.The −1's remaining after the two matrix switches can be removed by adding +1 to the pairs of negative entries.

The alternating-trail decomposition
We consider the set \binom{[n]}{2} in lexicographic order, which induces an order on the set of edges of any graph defined on [n].
(The bullet • is the placeholder for the variable e, which is the second argument of s.) The set of all pairing functions on ∇ is denoted by Π(∇).

Lemma 3.10. For any pairing function s ∈ Π(∇), the maximum degree of L(∇, s) is at most 2.

Proof. Every edge e = ij ∈ ∇ has at most two neighbors in L(∇, s), the edges s(i, e) and s(j, e), thus the maximum degree in L(∇, s) is 2.
Remark 3.11. A cycle in a line graph corresponds to a closed trail in the original graph. A path in a line graph corresponds to an open trail in the original graph (which may in theory start and end at the same vertex, but this will never be the case in our applications, see Lemma 3.21). Definition 3.9 generalizes a concept of Kannan, Tetali, and Vempala [9], where all of the components are cycles.

Definition 3.12. Suppose ∇ ⊆ \binom{[n]}{2} and s is a pairing function on ∇. Denote by p_s the number of connected components of L(∇, s), and let us define the unique partition
∇ = W^s_1 ⊎ W^s_2 ⊎ ⋯ ⊎ W^s_{p_s},
where each W^s_i is the vertex set of a connected component of L(∇, s).
Remark 3.14. If W is the vertex set of a component of L(∇, s), then s|_W ∈ Π(W).
• the walk u_0 u_1 . . . u_r is a closed trail if and only if L(∇, s) is a cycle, and
• it is an open trail if and only if L(∇, s) is a path.
In other words, the Eulerian trails on ∇ can be naturally identified with those pairing functions s ∈ Π(∇) for which L(∇, s) is connected.

Proof. Trivial.
Figure 4 shows a closed trail defined by a pairing function.
From now on, by slight abuse of notation, we will not distinguish between s = u_0 u_1 . . . u_r as a pairing function and the trail it describes.

Definition 3.17. Let Z be an arbitrary graph on n vertices and let ∇ ⊆ \binom{[n]}{2} be an arbitrary subset of pairs of vertices. A pairing function s ∈ Π(∇) is said to be Z-alternating or alternating in Z if for every v ∈ e ∈ ∇ either
• e is the unique solution to s(v, e) = e (the function s(v, •) has at most one fixpoint), or
• exactly one of e and s(v, e) is an edge of Z.
In other words, the trail s|_{W^s_k} traverses edges in Z and outside Z in turn for any k = 1, . . . , p_s; furthermore, at any vertex v ∈ [n], there is at most one trail s|_{W^s_k} which starts or ends at v. For example, if ij ∉ E(Z) and s(i, ij) = ij = s(j, ij), then the trail s|_{{ij}} consists of one non-edge of Z.

A pairing function s ∈ Π(∇) is Z-alternating with at most c exceptions if
|{(v, e) : v ∈ e ∈ ∇ and {e, s(v, e)} is a 2-element subset of either ∇ ∩ E(Z) or ∇ \ E(Z)}| ≤ c   (6)
and s(v, •) has at most one fixpoint for every v ∈ [n]. We say that v is a site of non-alternation of s in Z if {e, s(v, e)} is a set of size 2 which is a subset of either ∇ ∩ E(Z) or ∇ \ E(Z).
Recall Definition 3.13. The following key decomposition lemma (KD-lemma) will be referred to repeatedly in this paper.

Lemma 3.21 (Key decomposition lemma). Let [ℓ, u] be a thin degree sequence interval, and let X, Y ∈ G(ℓ, u). Then every s ∈ S_{X,Y} partitions ∇_{X,Y} = E(X)△E(Y) into the edge sets of (X, Y)-alternating trails, and the number of such pairing functions satisfies
|S_{X,Y}| ≤ ∏_{v ∈ [n]} ⌈½ · deg_{∇_{X,Y}}(v)⌉!,
where the right-hand side is the product of factorials.
Proof. We have to choose s(v, •) such that it is an involution which maps edges of X to edges of Y: if s(v, •) had a fixpoint, then by parity it would have another one, too, which contradicts Definition 3.17.

Lemma 3.23. For any graph Z ∈ G(ℓ, u) for a thin degree sequence interval [ℓ, u] and any ∇ ⊆ \binom{[n]}{2}, we have
|{s ∈ Π(∇) : s is Z-alternating with at most c exceptions}| ≤ n^{3c} · ∏_{v ∈ [n]} ⌈½ · deg_∇(v)⌉!.   (8)

Proof. There are at most n^{3c} different choices for the set on the left-hand side of eq. (6). If we fix the non-alternating pairs, then the number of remaining choices at s(v, •) is still upper bounded by ⌈½ · deg_∇(v)⌉!, thus eq. (8) holds.
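The per-vertex factorial bound ∏_v ⌈½ deg_∇(v)⌉! appearing in these counting arguments is easy to evaluate. The sketch below is ours, purely for illustration, and computes the bound for ∇ = E(K4):

```python
from math import factorial, ceil
from collections import Counter

def pairing_bound(nabla):
    """Compute prod over v of ceil(deg_nabla(v)/2)!, the per-vertex factorial
    bound used to count pairing functions on nabla."""
    deg = Counter(v for e in nabla for v in e)
    prod = 1
    for d in deg.values():
        prod *= factorial(ceil(d / 2))
    return prod

# nabla = E(K4): every vertex has degree 3, so the bound is (2!)^4 = 16.
K4 = {(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)}
print(pairing_bound(K4))  # 16
```

For a 4-cycle the bound is 1, reflecting that an alternating closed trail on a cycle is essentially unique.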

The precursor
So far, every proof of rapid mixing for the switch Markov chain which is based on Sinclair's method contains at its core a counting lemma (Greenhill [5]). The purpose of the counting lemma is to enumerate the possible auxiliary structures and parameter sets from which the source and the sink of any commodity passing through a realization Z can be recovered. The difficult technical parts of the proofs are concerned with the maintenance of these structures. To our surprise, for thin degree sequence intervals, by slightly tweaking these structures, the arising technicalities can almost entirely be reduced to [4], and this paper takes a major shortcut by reusing those parts. A relatively long, but mostly elementary Definition 3.25 will specify the properties that we expect from the auxiliary structures and parameter sets borrowed from [4]. In Section 4, we will use this framework to recombine the borrowed parts into a proof for thin degree sequence intervals.
The decomposition in Definition 3.12 is formally very similar to the decomposition in [4, Section 4.1]. Whenever the degree sequences of X and Y are identical (∇ = ∇_{X,Y} and s ∈ S_{X,Y}), the two decompositions are actually identical. In any other case, for every two unit differences between the degree sequences of X and Y we will utilize a hinge-flip or an edge-toggle in the multicommodity-flow between X and Y.
Let us now turn to defining the framework for the reduction to [4]. We need the following structure, and in particular the matrix M, to be able to find an appropriate reduction which is compatible with the processes of [4].
Definition 3.24. Let M ∈ {0, 1, 2}^{n×n} be a symmetric matrix with zero diagonal. For technical purposes, let us define a set of triples denoted by D_M (see eq. (9)).

The next definition collects a number of properties (of the multicommodity-flow and the auxiliary structures designed for the switch Markov chain) that we want to preserve from [4].
Definition 3.25. We call the ordered triple (Υ, B, π) a precursor with parameter c ∈ N if the following properties hold. The objects Υ_M, B_M, and π_M are functions for any symmetric matrix M ∈ {0, 1, 2}^{n×n} with zero diagonal, where n ∈ N. We require that the domain of Υ_M be a subset of D_M. Furthermore, for any (X, Y, s) ∈ dom(Υ_M), let us define two degree sequences: ℓ_{X,Y} and u_{X,Y}, the coordinate-wise minimum and maximum of deg X and deg Y, respectively.
We require that Υ_M(X, Y, s) be a sequence of graphs that forms a path connecting X and Y in the Markov graph G(ℓ_{X,Y}, u_{X,Y}). We require that π_M and B_M be defined on the pairs ((X, Y, s), Z) with Z ∈ Υ_M(X, Y, s). Moreover: (d) The pairing function π_M(X, Y, s, Z) is a member of Π(∇_{X,Y}) and it is alternating in Z with at most c exceptions.
(e) The graph L(∇_{X,Y}, π_M(X, Y, s, Z)) is also connected whenever L(∇_{X,Y}, s) is connected.
(g) The cardinality of the range of B_M is bounded by a polynomial of n. (h) The function Ψ : (Z, ∇_{X,Y}, π_M(X, Y, s, Z), B_M(X, Y, s, Z)) ↦ (X, Y, s) is well-defined, i.e., two different images in the co-domain are not assigned to the same element from the domain of Ψ.
Typically, the value of B_M(X, Y, s, Z) will be a long tuple (an ordered set of parameters). The exact value of c is not important here; the requirements only impose a lower bound on its value. However, it is important to note that c is a constant, independent even of the number of vertices n. Note also that in applications of Definition 3.25, the matrix M will not be completely arbitrary. Let us define two precursor domains: C_thin and R_thin. The set C_thin describes the identifiers of the small parts from which the whole multicommodity-flow will be built. In contrast, the multicommodity-flow was built in [4] for each triple in R_thin directly.
Lemma 3.27.If there exists a precursor with parameter c which is defined on C thin , then there exists a precursor on R thin with parameter 3c.
Proof. We will show that the precursor can be extended so that it is also defined on R_thin without violating Definition 3.25. For any (X, Y, s) ∈ R_thin ∩ D_M, we construct a path in the Markov graph of G(ℓ_{X,Y}, u_{X,Y}), where [ℓ_{X,Y}, u_{X,Y}] is the smallest degree sequence interval that contains both deg X and deg Y. By the thinness of I, we have |ℓ(i) − u(i)| ≤ 1 for every i ∈ [n]. According to Definition 3.12 and the KD-lemma (Lemma 3.21), any s ∈ S_{X,Y} partitions ∇_{X,Y} = E(X)△E(Y) into edge sets of (X, Y)-alternating trails; let that decomposition be ∇_{X,Y} = W^s_1 ⊎ ⋯ ⊎ W^s_{p_s}, and so s|_{W^s_i} ∈ Π(W^s_i) for any i. We may now define Υ_M on R_thin recursively: concatenate the sequences assigned to the components, where the concatenation keeps only one of the last and first elements of consecutive sequences. For any Z ∈ Υ_M(X, Y, s), take the maximal k such that the relation holds, and extend π_M and B_M accordingly. We claim that the extended functions provide a precursor on R_thin. Let us check the non-trivial properties of Definition 3.25. Suppose that (X, Y, s) ∈ R_thin ∩ D_M and Z ∈ Υ_M(X, Y, s).

Checking Definition 3.25(b).
therefore the LHS has cardinality at most c as well. (b)

Checking Definition 3.25(c). The precursor property holds for
Checking Definition 3.25(d). By eq. (17), π_M(X, Y, s, Z) alternates in Z with at most 2c extra exceptions on top of the c non-alternations of the original pairing functions. (d)

Checking Definition 3.25(g). The cardinality of the range of B_M(X, Y, s, Z) grows by a factor of at most n², due to the one extra integer k − 1. (g)

Checking Definition 3.25(h).
The last missing piece to proving that the extended functions are a precursor on R_thin is showing that Ψ is well-defined on the larger domain. By Definition 3.25(f), the connected components of L(∇_{X,Y}, π_M(X, Y, s, Z)) determine W^s_i and s|_{W^s_i} for any i ≠ k, and also B_M(X, Y, s, Z) (see eq. (16)), which is an argument of Ψ. Knowing k, the remaining component can be recovered as well, see eq. (15). Because the original functions provide a precursor on C_thin, the original Ψ function is well-defined, so the value of the extended function is determined by its arguments; we have shown that Ψ is well-defined even on the extended domain.
In the proof of Lemma 3.27, we extensively used the fact that the degree sequence intervals in I are thin.
Theorem 3.28. Let I be a set of weakly P-stable degree sequence intervals. If there exists a precursor on R_thin with parameter c, then the degree interval Markov chain G(ℓ, u) is rapidly mixing for any [ℓ, u] ∈ I.
Proof. This proof is not new and fairly straightforward, but it is presented for the sake of completeness. The core of this approach had already appeared in the paper of Kannan, Tetali, and Vempala [9]. We will practically repeat the skeleton of the proof of [4] using the definitions of the precursor, which hides the majority of the technical difficulties. We will take M = A_X + A_Y, but we want to be explicit about the dependence on X and Y even when M appears as an index, so let X + Y denote the matrix A_X + A_Y in this proof. Let us recall eq. (2). It only remains to show that ρ(f) is polynomial in n. Continuing eq. (3) with the substitution G = G(ℓ, u): according to Definition 3.25(h), given Z, ∇_{X,Y}, π_{X+Y}(X, Y, s, Z), and B_{X+Y}(X, Y, s, Z), the function Ψ determines (X, Y, s). Therefore the relation Z ∈ Υ_{X+Y}(X, Y, s) is equivalent to saying that there exists a triple (∇, π, B) such that (Z, ∇, π, B) ∈ Ψ^{−1}(X, Y, s). Next, we use Lemma 3.22, which shows that |S_{X,Y}| is determined by ∇_{X,Y}, and its value does not depend directly on X or Y.
Given Z, the matrix M(X, Y, Z) (Definition 3.1) determines ∇_{X,Y} = E(X)△E(Y): the edges that belong to ∇_{X,Y} are precisely those where the sum of the adjacency matrices A_X + A_Y takes the value 1. Furthermore, by a property of the precursor, M(X, Y, Z) is c-tight for any Z ∈ Υ_{X+Y}(X, Y, s). Now, using that M(X, Y, Z) is c-tight, it follows from Lemmas 3.6 and 3.23 that the number of possible argument tuples of Ψ is bounded, where the right-hand side is dominated by a polynomial of n (according to Definition 3.25(g)). In conclusion, the mixing time in eq. (18) is polynomial.
To prove Theorem 2.20, it only remains to construct a precursor on C_thin. The next section proceeds with the construction in two separate stages.

Constructing the precursor
We will construct a precursor on C_thin for any weakly P-stable thin set of degree sequence intervals I in two stages. In the first stage, we show that there exists a precursor on C_id (see Definition 4.1), and then we extend this precursor to C_thin in the second stage. Then we apply Lemma 3.27 and Theorem 3.28 to prove Theorem 2.20.

Stage 1: closed trails
The graph L(∇_{X,Y}, s) is a cycle for any (X, Y, s) ∈ C_id, because the degree sequences of X and Y are identical. To handle this case, a large machinery was developed in [4]. However, there the range of auxiliary matrices M was much smaller. Because of the larger range of auxiliary matrices in the current paper, we had to introduce and explicitly define the precursor. Therefore, we unfortunately need to repeat some parts of the proof of [4] to obtain those claims in the desired generality. The following lemma collects the necessary technical lemmas proved in [4].

Lemma 4.2. There exists a precursor on C_id with parameter c = 12.
Proof. Let (X, Y, s) ∈ C_id be arbitrary with X, Y ∈ G(d). Since s is an (X, Y)-alternating closed trail, |∇_{X,Y}| is even. In [4], the path Υ(X, Y, s) in the switch Markov graph is defined exactly when the degree sequences of X and Y are identical and s ∈ S_{X,Y}. We use the definition of Υ(X, Y, s) from [4] only when s ∈ S_{X,Y} and L(∇_{X,Y}, s) is a cycle, that is, when p_s = 1.
First of all, let us recall that Υ(X, Y, s) in [4] describes a sequence of graphs such that each two consecutive graphs can be obtained from each other by a switch. In [4, Definition 4.2], for any s ∈ S_{X,Y}, the path Υ(X, Y, s) is composed by concatenating a number of Sweep sequences, where the C^k_r are circuits and each Sweep is a sequence of switches such that each switch is incident only with vertices on V(C^k_r). When (X, Y, s) ∈ C_id ∩ D_M, by definition L(∇_{X,Y}, s) is connected and p_s = 1, thus we may define Υ_M(X, Y, s) accordingly (see [4, Lemma 5.13]). The circuit C_r defines a cyclical order on its vertices, but Sweep takes a linear order, so we still need to select the cornerstone, where the linear order starts to enumerate the vertices in the given cyclical order. The choice of the cornerstone ([4, eq. (5.11)]) only plays a role in proving that M(X, Y, Z) is close to the adjacency matrix of an appropriate graph in ℓ1-norm. From the point of view of the rest of [4], the cornerstone is arbitrarily chosen.
In this adaptation of the proof in [4], the index M of Υ_M(X, Y, s) matters only in the choice of the cornerstones. The current proof is slightly more general than that of [4], because we not only consider M = X + Y, but also any other M such that (X, Y, s) ∈ D_M (recall eq. (9)). In the path Υ_M(X, Y, s) incorporating Sweep(G_r, C_r) (see eq. (23)), choose the cornerstone v_r of Sweep(G_r, C_r) as follows: let v_r ∈ V(C_r) be the vertex which minimizes the corresponding row-sum, and which is lexicographically minimal with respect to this condition. (24) Because X, Y ∈ G(d) for some d, Lemma 2.6 of [4] applies, which claims that Sweep(X△⊎_{i=1}^{r−1}C_i, C_r) is a sequence of at most ½|E(C_r)| − 1 switches that connect X△⊎_{i=1}^{r−1}C_i to X△⊎_{i=1}^{r}C_i. Thus the total length of the switch sequence Υ_M(X, Y, s) is at most ½|∇_{X,Y}| − 1. For any Z ∈ Υ_M(X, Y, s), the degree sequences of X, Y, and Z are identical, because switches preserve the degree sequence. Note that for any j ≠ r, the sequence Sweep(X△⊎_{i=1}^{j−1}C_i, C_j) does not depend on the cornerstone v_r. For any Z ∈ Υ_M(X, Y, s), the matrix M − A_Z belongs to {−1, 0, 1, 2}^{n×n}. Recall eq. (9). If (M − A_Z)_{vw} = 2, then vw is an edge in both X and Y, but vw is not present in Z as an edge. If, however, (M − A_Z)_{vw} = −1, then vw ∉ E(X) ∪ E(Y) and vw ∈ E(Z). In formulae, these are eqs. (25) and (26), respectively. In [4, Lemma 2.7], the set R = R_Z is defined, and it has cardinality at most 4. By its definition, the set of edges in R is a superset of (E(X)△E(Z)) \ ∇_{X,Y}, which is the union of the right-hand sides of eqs. (25) and (26). In short, every +2 and −1 entry of M − A_Z is in a position which is associated to an edge in R.
We will show that M − A_Z is 7-tight. Lemma 8.2 in [4] is the analogue of this tightness statement, and its proof can be repeated for this case with little to no modification. Suppose first that every edge in R is incident on v_r from eq. (24): then [4, Lemma 7.1] claims that the entries in A_X + A_Y − A_Z associated to edges in R consist of at most two pairs of symmetric +2 entries and at most one pair of symmetric −1 entries. By eqs. (25) and (26), M − A_Z also contains at most two pairs of symmetric +2 entries and at most one pair of symmetric −1 entries.
Recall that Z is obtained from X△⊎_{i=1}^{r−1}C_i through a series of switches that only touch edges whose vertices are contained in V(C_r). Thus the row- and column-sums of the submatrices (M − A_Z)|_{V(C_r)×V(C_r)} and (M − A_{X△⊎_{i=1}^{r−1}C_i})|_{V(C_r)×V(C_r)} are identical. Let v and w be two distinct vertices in V(C_r).
• If M_{vw} = 2, then by eq. (25), vw ∈ E(X) and vw ∉ ∇_{X,Y}, thus vw ∈ E(X△⊎_{i=1}^{r−1}C_i).
• If M_{vw} = 0, then by eq. (26), vw ∉ E(X) and vw ∉ ∇_{X,Y}, thus vw ∉ E(X△⊎_{i=1}^{r−1}C_i).
Every entry of (M − A_{X△⊎_{i=1}^{r−1}C_i})|_{V(C_r)×V(C_r)} is either a 0 or a 1, and the diagonal is identically zero. Since C_r is alternating in X△⊎_{i=1}^{r−1}C_i, there is at least one 0 entry and one 1 entry in every row and every column. Therefore the row- and column-sums of (M − A_Z)|_{V(C_r)×V(C_r)} are at least 1 and at most |V(C_r)| − 2. Moreover, eq. (24) ensures that the row-sum corresponding to v_r in (M − A_Z)|_{V(C_r)×V(C_r)} is minimal. By Lemma 3.7, M − A_Z is 5-tight.
We will again use [4, Lemma 2.7] to understand the more detailed structure of R_Z. If there is an edge in R_Z which is not incident on v_r, then R_Z falls under case (e) of [4, Lemma 2.7]. Let Z△F be the next graph in the Sweep sequence, where F is a C_4. By [4, Lemma 2.7(d)], every edge in the set R_{Z△F} is incident on v_r. As previously, Lemma 3.7 implies that M − A_{Z△F} is 5-tight, and thus M − A_Z is 7-tight.
Next, we will cite 3 lemmas from [4]. The first of these lemmas refers to the graph Z′ = Z△R, which is defined in [4, eq. (13)]. Note that the graph Z′ is just a slight perturbation of Z.

Lemma 4.3 (adapted from Lemma 5.15 in [4]). For any Z ∈ Υ(X, Y, s) with s ∈ S_{X,Y}, there exists π_{Z′} ∈ Π(∇) which defines a closed Eulerian trail on ∇_{X,Y} which is alternating in Z′ with at most 4 exceptions.

Lemma 4.4 (Lemma 5.21 in [4]). For a fixed number n of vertices of X and Y, the cardinality of the set of possible tuples B(X, Y, Z, s) is O(n^8), where s ∈ S_{X,Y} and Z ∈ Υ(X, Y, s) are arbitrary.

Lemma 4.5 (Lemma 5.22 in [4]). The quadruplet composed of the graphs Z, ∇, π_{Z′}, and B(X, Y, Z, s) uniquely determines the triplet (X, Y, s).
We define π_M(X, Y, s, Z) = π_{Z′}. Lemma 4.3 implies that π_{Z′} is alternating in Z with at most 4 + 2|R_Z| ≤ 12 exceptions. Let B_M(X, Y, s, Z) be identical to the parameter set B(X, Y, Z, s) defined in [4]. Lemmas 4.3 to 4.5 ensure that every itemized requirement of Definition 3.25 holds, similarly to the situation in [4]. Now we are at the point where Theorem 2.17 is reproved by the generalized machinery: the Markov chain G(d) (using switches only) is rapidly mixing for any d from a P-stable set. By Lemmas 3.27 and 4.2, there exists a precursor on R_id with parameter 3c, and the theorem follows from Remark 2.13 and Theorem 3.28.

Stage 2: open trails
Until now, the degree sequences of X and Y in (X, Y, s) ∈ C_thin were identical, that is, s was a closed trail. In the second stage we deal with the case when the degree sequences of X and Y differ, that is, s is an open trail. The following lemma is in fact a framework for reducing the construction of the precursor on C_thin to Lemma 4.2. Note that we do not aim to optimize our estimate of the mixing time; we are merely interested in bounding it polynomially. Surprisingly, to construct the precursor on C_thin, it is sufficient to consider only those open trails s that have odd length.
Informally, the forthcoming Lemma 4.6 states that if any open (X, Y)-alternating trail of odd length can be cut into a constant number of segments that can be reassembled into at most two (X, Y)-alternating trails that are either closed or can be closed by including v_0v_λ or v_1v_{λ−1} to join the two ends (alternation is not required there), then we can reduce the precursor construction on C_thin to a precursor construction on C_id.

Lemma 4.6. Suppose there exists a precursor on C_id with parameter c, and let c′ be a fixed integer. Suppose, moreover, that for any (X, Y, s) ∈ C_thin where s = v_0v_1 … v_λ is an open trail with v_0 <_lex v_λ for some odd integer λ, there exist ∇_1, ∇_2 and s_1 ∈ Π(∇_1), s_2 ∈ Π(∇_2) such that … Moreover, for both i = 1, 2: (4) the line graph L(∇_i, s_i) is an even cycle (or an empty graph), (5) … Then there exists a precursor on C_thin with parameter 3c + 60c′ + 300.
We are aware that such a huge parameter is nowhere near a practical bound; we made virtually no effort to optimize it.
Proof. Let (X, Y, s) ∈ C_thin be such that s = v_0v_1 … v_λ and v_0 <_lex v_λ. We first consider the case when λ is odd. As discussed earlier, the case of even λ will be handled by a reduction to the odd case.
For an odd λ, we must have either deg … ; the trail s is (X, Y)-alternating and its length λ is odd.
Let ∇_i and s_i ∈ Π(∇_i) for i = 1, 2 be the sets of edges and pairing functions assumed to exist in the statement of this lemma. Let M be such that (X, Y, s) ∈ D_M. We first define Υ_M(X, Y, s), and then we also define π_M(X, Y, s, Z) and B_M(X, Y, s, Z) for any Z ∈ Υ_M(X, Y, s).
Let us modify the auxiliary matrix M. Recall from Definition 3.24 that if ab ∈ ∇_X,Y, then M_ab = 1. By assumption (3) of this lemma, if … The row-sums of M and M′ are equal on every vertex except possibly on v_0 and v_λ.
By assumption (4), |∇_i| is even. From assumptions (1) and (2) it follows that any … Let us now start to extend the precursor. Without loss of generality, we may assume that ∇_1 ≠ ∅.
Let us slightly change X and Y, so that the symmetric difference of the modified graphs X_1, Y_1 is exactly ∇_1: … (this essentially increases c′ by 1). Furthermore, we also need to store in B_M(X, Y, s, Z) the identity of v_{λ−1} and v_λ, since … for any Z ∈ Υ_M(X, Y, s). As a result, the range of B_M(X, Y, s, Z) increases by a polynomial factor (also note that the parameter of the precursor has to be increased by a constant to accommodate v_{λ−1}v_λ).
The well-definedness of Ψ follows, because the constantly many differences compared to the previous case are all recorded in B_M(X, Y, s, Z).
One could object that this proof is not very detailed, but we think it is not worth spelling out the details, because they would be an almost verbatim repetition of the first two cases.
Let I be a set of weakly P-stable thin degree sequence intervals. By Lemma 4.2, there exists a precursor with parameter c = 12 on C_id. We want to apply Lemma 4.6 to prove that there exists a precursor on C_thin with some fixed parameter. Once this is shown, Theorem 2.20 follows: the precursor can be extended to R_thin by Lemma 3.27, which is sufficient for proving rapid mixing of G(ℓ, u) on every [ℓ, u] ∈ I by Theorem 3.28. Suppose (X, Y, s) ∈ C_thin. If s is a closed trail, then (X, Y, s) ∈ C_id, on which we have already defined a precursor.
Suppose from now on that s = v_0v_1 … v_λ is an open trail of odd length (possibly 1). By the KD-lemma (Lemma 3.21), v_0 ≠ v_λ. To apply Lemma 4.6, it is enough to define s_1 and s_2, since their domains determine ∇_1 and ∇_2. The premises of Lemma 4.6 are elementary to check once s_1 ∈ Π(∇_1) and s_2 ∈ Π(∇_2) are given. We finish the proof by a complete case analysis, providing a suitable s_1 and s_2 for each case.
We will prove that Lemma 4.6 holds for C_thin with c′ = 2. We distinguish 8 main cases, 3 of which have 2 subcases. The cases are distinguished based on the relationship between v_0, v_1, v_{λ−1}, v_λ and s. Recall that v_0 ≠ v_λ. On the corresponding figures, by exchanging X and Y we may suppose that v_0v_1 ∈ E(X). Thus the edges of X are drawn with solid lines, the edges of Y with dashed lines, and unknown pairs with dotted lines. Those pairs that are contained in ∇_X,Y are joined by thick solid or dashed lines. The similarly thick dash-dotted lines represent (X, Y)-alternating segments of the trail s. Recall that a trail may visit a vertex multiple times, but it can traverse an edge at most once.
Case 2. We assume in this case that j is even.
Case 2a. If v_0 = v_j and v_λ = v_{j+1}, then let …
Case 2b. If v_0 = v_{j+1} and v_λ = v_j, then let …
From now on, we assume that j is odd.
Case 3. If v_j = v_λ and v_{j+1} = v_0, then let …
From now on, we assume that v_j = v_0 and v_{j+1} = v_λ.
First, we assume that k < j; the case k > j will follow easily by symmetry.
Case 6. Suppose that k is even.

Figure 1: The three types of operations employed by the degree interval Markov chain. Solid and dashed line segments represent edges and non-edges, respectively.
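The three operations can also be illustrated in code. The following is a minimal sketch under our own assumptions (graphs stored as sets of frozenset edges; the function names are ours, not the paper's):

```python
# Hypothetical sketch of the three local operations: switch, hinge flip,
# and toggle. A graph is a set of frozenset({u, v}) edges.

def switch(E, a, b, c, d):
    """Replace edges ab, cd with ad, cb; preserves every degree."""
    assert frozenset({a, b}) in E and frozenset({c, d}) in E
    assert frozenset({a, d}) not in E and frozenset({c, b}) not in E
    return (E - {frozenset({a, b}), frozenset({c, d})}) \
           | {frozenset({a, d}), frozenset({c, b})}

def hinge_flip(E, a, b, c):
    """Replace edge ab with ac; deg(b) drops by 1 and deg(c) grows by 1."""
    assert frozenset({a, b}) in E and frozenset({a, c}) not in E
    return (E - {frozenset({a, b})}) | {frozenset({a, c})}

def toggle(E, a, b):
    """Delete the edge ab if present, insert it otherwise."""
    e = frozenset({a, b})
    return E - {e} if e in E else E | {e}
```

Note how only the switch preserves the degree sequence, which is why the hinge flip and toggle are needed to move between nearby degree sequences inside a degree interval.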

Figure 2: Theorem 2.18 defines pairs of lower and upper bounds (δ and ∆), such that any degree sequence which obeys these bounds is P-stable; the area between these functions is filled with vertical lines. The pairs (δ, ∆) of most distant bounds allowed by eq. (4) are given by intersections with vertical lines. For example, any degree sequence which is (element-wise) between δ = n/4 and ∆ = 3n/4 is P-stable. In comparison, the solid gray region represents a √r-wide region around the regular degree sequences, which corresponds to the domain of Theorem 2.19.
(ii) the number of −1 entries of M is at most 2,
(iii) there exists V ⊆ [n] such that M|_{V×V} contains every +2 and −1 entry of M,
(iv) there exists v ∈ V such that the +2 and −1 entries of M are all located in the row and column corresponding to v,
(v) the row-sum of v in M|_{V×V} is minimal, and finally,
(vi) every row- and column-sum in M|_{V×V} is at least 1 and at most |V| − 2.
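As a sketch of how conditions (ii)–(vi) above can be verified mechanically, here is a hypothetical checker. The matrix encoding (list of integer lists), the function name, and the explicit arguments V and v are our own assumptions; condition (i) is not reproduced above and hence is not checked:

```python
# Check conditions (ii)-(vi) for a square integer matrix M,
# a subset V of its indices, and a distinguished vertex v in V.

def restrict(M, V):
    """The submatrix M|_{V x V}."""
    idx = sorted(V)
    return [[M[i][j] for j in idx] for i in idx]

def check_conditions(M, V, v):
    n = len(M)
    entries = [(i, j) for i in range(n) for j in range(n)]
    special = [(i, j) for i, j in entries if M[i][j] in (2, -1)]
    # (ii) at most two -1 entries in M
    if sum(M[i][j] == -1 for i, j in entries) > 2:
        return False
    # (iii) every +2 and -1 entry lies inside M|_{V x V}
    if any(i not in V or j not in V for i, j in special):
        return False
    # (iv) each such entry sits in the row or column of v
    if any(i != v and j != v for i, j in special):
        return False
    R = restrict(M, V)
    row_sums = [sum(row) for row in R]
    # (v) the row-sum of v is minimal in M|_{V x V}
    if row_sums[sorted(V).index(v)] != min(row_sums):
        return False
    # (vi) every row- and column-sum of M|_{V x V} lies in [1, |V| - 2]
    col_sums = [sum(R[i][j] for i in range(len(R))) for j in range(len(R))]
    return all(1 <= s <= len(V) - 2 for s in row_sums + col_sums)
```

For instance, the adjacency matrix of a 4-cycle (all row- and column-sums equal to 2 = |V| − 2, no +2 or −1 entries) passes the listed conditions.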

Figure 3: The functions s(u, ·) and s(v, ·) pair the edges incident on u and v, respectively. The orange arcs join edges that are pairs in s(u, ·). The cyan arcs join edges that are pairs in s(v, ·). The cyan loop corresponds to the relation s(v, uv) = uv.
where each W^s_k is the vertex set of a component of L(∇, s), and the sets (W^s_k)_{k=1}^{p_s} are listed in the order induced by their lexicographically first edges.

Definition 3.13. For any set of edges W and s ∈ Π(∇), let s|_W = {(v, e) → s(v, e) | v ∈ e ∈ W and s(v, e) ∈ W}.
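Definition 3.13 is simply a restriction of the pairing to those entries that stay inside W. A minimal sketch, assuming our own dict encoding of a pairing (keys are (vertex, edge) tuples, values are edges):

```python
# Restriction s|_W of a pairing s to an edge subset W (Definition 3.13):
# keep exactly those entries (v, e) -> s(v, e) with both e and s(v, e) in W.

def restrict_pairing(s, W):
    return {(v, e): f for (v, e), f in s.items() if e in W and f in W}
```

The restriction simply drops every association that leaves W, so s|_W is again a pairing of the edges of W.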

Figure 4: An example for ∇ = ∇_X,Y and an (X, Y)-alternating s ∈ Π(∇) (see Definition 3.18). Red edges belong to X and blue edges belong to Y. There are 2^4 different π ∈ Π(∇) that are (X, Y)-alternating. There is one such π where L(∇, π) has 3 components (the two C_6's and a C_4 in the middle), and there are 6 cases where L(∇, π) has 2 components. The black arcs represent an s such that L(∇, s) has exactly one component, or, in other words, s defines a closed Eulerian trail on ∇.

If s_k is a closed trail, then the degree sequences of G^{X,Y}_k and G^{X,Y}_{k−1} are identical. If s_k is an open trail whose end-vertices are v and w, then the degree sequences of G^{X,Y}_k and G^{X,Y}_{k−1} differ by 1 precisely on v and w; since these end-vertices are distinct from the end-vertices of any other open trail s_j, such a change of the degrees of v and w does not occur for any other k. Thus the degree of v satisfies: …

Let [ℓ, u] ∈ I, where ℓ and u are degree sequences on [n]. Let us define the multicommodity flow f on the Markov graph of G(ℓ, u): for every X, Y ∈ G(ℓ, u) and s ∈ S_X,Y, send σ(X)σ(Y)/|S_X,Y| amount of flow along Υ_{X+Y}(X, Y, s). The total flow in f from X to Y sums to σ(X)σ(Y).
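The flow definition above splits the total demand σ(X)σ(Y) evenly over the trail decompositions in S_X,Y. A toy illustration (the numeric weights and names below are hypothetical, chosen only to show the bookkeeping):

```python
# Splitting the demand sigma(X)*sigma(Y) evenly over the decompositions
# in S_XY; exact rational arithmetic keeps the totals precise.
from fractions import Fraction

def flow_amounts(sigma_X, sigma_Y, S_XY):
    demand = Fraction(sigma_X) * Fraction(sigma_Y)
    return {s: demand / len(S_XY) for s in S_XY}

amounts = flow_amounts(3, 5, ["s1", "s2", "s3"])
# The per-path amounts sum back to sigma(X)*sigma(Y) = 15.
assert sum(amounts.values()) == 15
```

By construction, the flow routed from X to Y always sums to σ(X)σ(Y), regardless of |S_X,Y|.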

Lemma 4.2
alternating, and s|_{W^s_k} describes an Eulerian trail on W^s_k for any 1 ≤ k ≤ p_s. If s|_{W^s_k} describes an open trail, then its end-vertices are (by definition) distinct, and the end-vertices of the trail s|_{W^s_k} are disjoint from the end-vertices of any other open trail s|_{W^s_j}, with the exception of the at most one fixpoint of s(v, ·). The closed trails must have even length, because s(v, ·) pairs X-edges to Y-edges at any v. Clearly, if an open trail s|_{W^s_i} both starts and ends at v, then s(v, ·) has at least two fixpoints, which is a contradiction. Similarly, we have a contradiction if more than one trail terminates at some vertex v. Lastly, if s|_{W^s_k} is an open trail, then the degree deg_{W^s_k}(v) is even, except if v is one of the two end-vertices of s|_{W^s_k}, in which case deg_{W^s_k}(v) is odd.

Lemma 3.22. For any thin degree sequence interval [ℓ, u] on n vertices and any two graphs X, Y ∈ G(ℓ, u) …
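The grouping of edges into components of L(∇, s) can be sketched as follows (a minimal illustration under our own encoding: edges are frozensets, and the pairing s maps (vertex, edge) to the paired edge; the function name is ours):

```python
# Two edges lie in the same component of L(\nabla, s) whenever s pairs
# them at a shared vertex; components are found by a simple traversal.
from collections import defaultdict

def trail_components(s):
    """s maps (vertex, edge) -> edge; return the edge sets of the components."""
    adj = defaultdict(set)
    for (v, e), f in s.items():
        adj[e].add(f)
        adj[f].add(e)
    seen, comps = set(), []
    for e in adj:
        if e in seen:
            continue
        comp, stack = set(), [e]
        while stack:
            g = stack.pop()
            if g in comp:
                continue
            comp.add(g)
            stack.extend(adj[g] - comp)
        seen |= comp
        comps.append(comp)
    return comps
```

For a pairing that traverses a 4-cycle, all four edges end up in a single component, matching the "exactly one component = closed Eulerian trail" situation of Figure 4.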